From a16dd801cea1fb959ffe999266dd95815fd9dc66 Mon Sep 17 00:00:00 2001 From: "hongliang.yuan" Date: Wed, 5 Mar 2025 10:42:43 +0800 Subject: [PATCH 1/7] use github mmdet 3.3.0 instead of toolbox mmdet --- cv/classification/byol/pytorch/README.md | 29 ++++----------- cv/classification/mobileone/pytorch/README.md | 25 +++---------- cv/classification/mocov2/pytorch/README.md | 25 +++---------- cv/detection/atss_mmdet/pytorch/README.md | 19 ++++++++-- .../cascade_rcnn_mmdet/pytorch/README.md | 19 ++++++++-- cv/detection/co-detr/pytorch/README.md | 14 +++++-- .../cornernet_mmdet/pytorch/README.md | 19 +++++++--- cv/detection/dcnv2_mmdet/pytorch/README.md | 19 +++++++--- .../reppoints_mmdet/pytorch/README.md | 19 +++++++--- cv/detection/rtmdet/pytorch/README.md | 37 +++++++------------ .../solov2/pytorch/README.md | 21 ++++------- cv/ocr/dbnetpp/pytorch/README.md | 19 +++++----- 12 files changed, 132 insertions(+), 133 deletions(-) diff --git a/cv/classification/byol/pytorch/README.md b/cv/classification/byol/pytorch/README.md index 409c776e..c9640543 100644 --- a/cv/classification/byol/pytorch/README.md +++ b/cv/classification/byol/pytorch/README.md @@ -10,25 +10,11 @@ ## Step 1: Installation ```bash -## install libGL +# Install libGL +## CentOS yum install -y mesa-libGL - -## install zlib -wget http://www.zlib.net/fossils/zlib-1.2.9.tar.gz -tar xvf zlib-1.2.9.tar.gz -cd zlib-1.2.9/ -./configure && make install -cd .. -rm -rf zlib-1.2.9.tar.gz zlib-1.2.9/ -``` - -```bash -# install mmcv -pushd ../../../../toolbox/MMDetection/patch/mmcv/v2.0.0rc4/ -bash clean_mmcv.sh -bash build_mmcv.sh -bash install_mmcv.sh -popd +## Ubuntu +apt install -y libgl1-mesa-glx # clone mmpretrain cd deepsparkhub/cv/classification/byol/pytorch @@ -45,7 +31,6 @@ sed -i '9,26s/^/# /' mmpretrain/__init__.py sed -i 's/python /python3 /g' tools/dist_train.sh # install mmpretrain -pip3 install mmengine==0.8.3 python3 setup.py install ``` @@ -74,12 +59,14 @@ imagenet ## Step 3: Training ```bash +mkdir -p data +ln -s /path/to/imagenet data/imagenet + wget https://download.openmmlab.com/mmselfsup/1.x/byol/byol_resnet50_16xb256-coslr-200e_in1k/byol_resnet50_16xb256-coslr-200e_in1k_20220825-de817331.pth vim configs/byol/benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py model = dict( backbone=dict( - frozen_stages=4, - init_cfg=dict(type='Pretrained', checkpoint='./byol_resnet50_16xb256-coslr-200e_in1k/byol_resnet50_16xb256-coslr-200e_in1k_20220825-de817331.pth', prefix='backbone.'))) + frozen_stages=4,init_cfg=dict(type='Pretrained', checkpoint='./byol_resnet50_16xb256-coslr-200e_in1k_20220825-de817331.pth', prefix='backbone.'))) bash tools/dist_train.sh configs/byol/benchmarks/resnet50_8xb512-linear-coslr-90e_in1k.py 8 ``` diff --git a/cv/classification/mobileone/pytorch/README.md b/cv/classification/mobileone/pytorch/README.md index 8c257fc0..1ac88324 100644 --- a/cv/classification/mobileone/pytorch/README.md +++ b/cv/classification/mobileone/pytorch/README.md @@ -15,25 +15,11 @@ Efficient neural network backbones for mobile devices are often optimized for me ## Step 1: Installation ```bash -## install libGL -yum install mesa-libGL - -## install zlib -wget http://www.zlib.net/fossils/zlib-1.2.9.tar.gz -tar xvf zlib-1.2.9.tar.gz -cd zlib-1.2.9/ -./configure && make install -cd .. 
-rm -rf zlib-1.2.9.tar.gz zlib-1.2.9/ -``` - -```bash -# install mmcv -pushd ../../../../toolbox/MMDetection/patch/mmcv/v2.0.0rc4/ -bash clean_mmcv.sh -bash build_mmcv.sh -bash install_mmcv.sh -popd +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx # clone mmpretrain cd deepsparkhub/cv/classification/mobileone/pytorch @@ -50,7 +36,6 @@ sed -i '9,26s/^/# /' mmpretrain/__init__.py sed -i 's/python /python3 /g' tools/dist_train.sh # install mmpretrain -pip3 install mmengine==0.8.3 python3 setup.py install ``` diff --git a/cv/classification/mocov2/pytorch/README.md b/cv/classification/mocov2/pytorch/README.md index 0095032c..408f1ab2 100644 --- a/cv/classification/mocov2/pytorch/README.md +++ b/cv/classification/mocov2/pytorch/README.md @@ -10,25 +10,11 @@ Contrastive unsupervised learning has recently shown encouraging progress, e.g., ## Installation ```bash -# install libGL -yum install mesa-libGL - -# install zlib -wget http://www.zlib.net/fossils/zlib-1.2.9.tar.gz -tar xvf zlib-1.2.9.tar.gz -cd zlib-1.2.9/ -./configure && make install -cd .. -rm -rf zlib-1.2.9.tar.gz zlib-1.2.9/ -``` - -```bash -# install mmcv -pushd ../../../../toolbox/MMDetection/patch/mmcv/v2.0.0rc4/ -bash clean_mmcv.sh -bash build_mmcv.sh -bash install_mmcv.sh -popd +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx # clone mmpretrain cd deepsparkhub/cv/classification/mocov2/pytorch @@ -45,7 +31,6 @@ sed -i '9,26s/^/# /' mmpretrain/__init__.py sed -i 's/python /python3 /g' tools/dist_train.sh # install mmpretrain -pip3 install mmengine==0.8.3 python3 setup.py install ``` diff --git a/cv/detection/atss_mmdet/pytorch/README.md b/cv/detection/atss_mmdet/pytorch/README.md index cb7a1742..3b5af2d9 100644 --- a/cv/detection/atss_mmdet/pytorch/README.md +++ b/cv/detection/atss_mmdet/pytorch/README.md @@ -9,9 +9,16 @@ Object detection has been dominated by anchor-based detectors for several years. ATSS model is using MMDetection toolbox. Before you run this model, you need to setup MMDetection first. ```bash -# Go to "toolbox/MMDetection" directory in root path -cd ../../../../toolbox/MMDetection/ -bash install_toolbox_mmdetection.sh +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx + +# install MMDetection +git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1 +cd mmdetection +pip install -v -e . ``` ## Step 2: Preparing datasets @@ -47,9 +54,15 @@ cd mmdetection/ mkdir -p data/ ln -s /path/to/coco2017 data/coco +# Prepare resnet50-0676ba61.pth, skip this if fast network +mkdir -p /root/.cache/torch/hub/checkpoints/ +wget https://download.pytorch.org/models/resnet50-0676ba61.pth -O /root/.cache/torch/hub/checkpoints/resnet50-0676ba61.pth + # On single GPU python3 tools/train.py configs/atss/atss_r50_fpn_1x_coco.py +sed -i 's/python /python3 /g' tools/dist_train.sh + # Multiple GPUs on one machine bash tools/dist_train.sh configs/atss/atss_r50_fpn_1x_coco.py 8 ``` diff --git a/cv/detection/cascade_rcnn_mmdet/pytorch/README.md b/cv/detection/cascade_rcnn_mmdet/pytorch/README.md index 3ae7b84a..756bc2f0 100644 --- a/cv/detection/cascade_rcnn_mmdet/pytorch/README.md +++ b/cv/detection/cascade_rcnn_mmdet/pytorch/README.md @@ -8,9 +8,16 @@ In object detection, the intersection over union (IoU) threshold is frequently u Cascade R-CNN model is using MMDetection toolbox. Before you run this model, you need to setup MMDetection first. 
```bash -# Go to "toolbox/MMDetection" directory in root path -cd ../../../../toolbox/MMDetection/ -bash install_toolbox_mmdetection.sh +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx + +# install MMDetection +git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1 +cd mmdetection +pip install -v -e . ``` ## Step 2: Preparing datasets @@ -46,9 +53,15 @@ cd mmdetection/ mkdir -p data/ ln -s /path/to/coco2017 data/coco +# Prepare resnet50-0676ba61.pth, skip this if fast network +mkdir -p /root/.cache/torch/hub/checkpoints/ +wget https://download.pytorch.org/models/resnet50-0676ba61.pth -O /root/.cache/torch/hub/checkpoints/resnet50-0676ba61.pth + # On single GPU python3 tools/train.py configs/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py +sed -i 's/python /python3 /g' tools/dist_train.sh + # Multiple GPUs on one machine bash tools/dist_train.sh configs/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py 8 ``` diff --git a/cv/detection/co-detr/pytorch/README.md b/cv/detection/co-detr/pytorch/README.md index e7ace99f..e864b540 100644 --- a/cv/detection/co-detr/pytorch/README.md +++ b/cv/detection/co-detr/pytorch/README.md @@ -9,16 +9,22 @@ In this paper, we present a novel collaborative hybrid assignments training sche 3. **State-of-the-art performance**: Co-DETR with [ViT-L](https://github.com/baaivision/EVA/tree/master/EVA-02) (304M parameters) is **the first model to achieve 66.0 AP on COCO test-dev.** ## Step 1: Installation -### (1) install MMCV ```bash -# Go to "toolbox/MMDetection" directory in root path -bash install_toolbox_mmdetection.sh +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx + +# install MMDetection +git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1 +cd mmdetection +pip install -v -e . ``` ### (2) install other ```bash pip3 install -r requirements.txt pip3 install urllib3==1.26.15 -yum install -y mesa-libGL ``` ### (3) download repo diff --git a/cv/detection/cornernet_mmdet/pytorch/README.md b/cv/detection/cornernet_mmdet/pytorch/README.md index 55efaa87..876cb1f9 100644 --- a/cv/detection/cornernet_mmdet/pytorch/README.md +++ b/cv/detection/cornernet_mmdet/pytorch/README.md @@ -7,9 +7,16 @@ CornerNet, a new approach to object detection where we detect an object bounding ## Step 1: Installation CornerNet model is using MMDetection toolbox. Before you run this model, you need to setup MMDetection first. ```bash -# Go to "toolbox/MMDetection" directory in root path -cd ../../../../toolbox/MMDetection/ -bash install_toolbox_mmdetection.sh +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx + +# install MMDetection +git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1 +cd mmdetection +pip install -v -e . 
``` ## Step 2: Preparing datasets @@ -46,10 +53,12 @@ mkdir -p data/ ln -s /path/to/coco2017 data/coco # On single GPU -python3 tools/train.py configs/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco.py +python3 tools/train.py configs/cornernet/cornernet_hourglass104_8xb6-210e-mstest_coco.py + +sed -i 's/python /python3 /g' tools/dist_train.sh # Multiple GPUs on one machine -bash tools/dist_train.sh configs/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco.py 8 +bash tools/dist_train.sh configs/cornernet/cornernet_hourglass104_8xb6-210e-mstest_coco.py 8 ``` ## Results diff --git a/cv/detection/dcnv2_mmdet/pytorch/README.md b/cv/detection/dcnv2_mmdet/pytorch/README.md index 86c5177a..77246fee 100644 --- a/cv/detection/dcnv2_mmdet/pytorch/README.md +++ b/cv/detection/dcnv2_mmdet/pytorch/README.md @@ -9,9 +9,16 @@ The superior performance of Deformable Convolutional Networks arises from its ab DCNv2 model is using MMDetection toolbox. Before you run this model, you need to setup MMDetection first. ```bash -# Go to "toolbox/MMDetection" directory in root path -cd ../../../../toolbox/MMDetection/ -bash install_toolbox_mmdetection.sh +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx + +# install MMDetection +git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1 +cd mmdetection +pip install -v -e . ``` ## Step 2: Preparing datasets @@ -48,10 +55,12 @@ mkdir -p data/ ln -s /path/to/coco2017 data/coco # On single GPU -python3 tools/train.py configs/dcnv2/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py +python3 tools/train.py configs/dcnv2/faster-rcnn_r50-mdconv-c3-c5_fpn_1x_coco.py + +sed -i 's/python /python3 /g' tools/dist_train.sh # Multiple GPUs on one machine -bash tools/dist_train.sh configs/dcnv2/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py 8 +bash tools/dist_train.sh configs/dcnv2/faster-rcnn_r50-mdconv-c3-c5_fpn_1x_coco.py 8 ``` ## Results diff --git a/cv/detection/reppoints_mmdet/pytorch/README.md b/cv/detection/reppoints_mmdet/pytorch/README.md index 4e2b04f7..038872b5 100644 --- a/cv/detection/reppoints_mmdet/pytorch/README.md +++ b/cv/detection/reppoints_mmdet/pytorch/README.md @@ -7,9 +7,16 @@ Modern object detectors rely heavily on rectangular bounding boxes, such as anch ## Step 1: Installation RepPoints model is using MMDetection toolbox. Before you run this model, you need to setup MMDetection first. ```bash -# Go to "toolbox/MMDetection" directory in root path -cd ../../../../../toolbox/MMDetection/ -bash install_toolbox_mmdetection.sh +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx + +# install MMDetection +git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1 +cd mmdetection +pip install -v -e . 
``` ## Step 2: Preparing datasets @@ -46,10 +53,12 @@ mkdir -p data/ ln -s /path/to/coco2017 data/coco # On single GPU -python3 tools/train.py configs/reppoints/reppoints_moment_r101_fpn_dconv_c3-c5_gn-neck+head_2x_coco.py +python3 tools/train.py configs/reppoints/reppoints-moment_r101-dconv-c3-c5_fpn-gn_head-gn_2x_coco.py + +sed -i 's/python /python3 /g' tools/dist_train.sh # Multiple GPUs on one machine -bash tools/dist_train.sh configs/reppoints/reppoints_moment_r101_fpn_dconv_c3-c5_gn-neck+head_2x_coco.py 8 +bash tools/dist_train.sh configs/reppoints/reppoints-moment_r101-dconv-c3-c5_fpn-gn_head-gn_2x_coco.py 8 ``` ## Results diff --git a/cv/detection/rtmdet/pytorch/README.md b/cv/detection/rtmdet/pytorch/README.md index 75aa0f23..1bfe8036 100644 --- a/cv/detection/rtmdet/pytorch/README.md +++ b/cv/detection/rtmdet/pytorch/README.md @@ -13,33 +13,16 @@ In this paper, we aim to design an efficient real-time object detector that exce RTMDet model uses the MMDetection toolbox. Before you run this model, you need to set up MMDetection first. ```bash -# Install mmcv -pushd ../../../../toolbox/MMDetection/patch/mmcv/v2.0.0rc4/ -bash clean_mmcv.sh -bash build_mmcv.sh -bash install_mmcv.sh -popd - -# Install mmdetection -pushd ../../../../toolbox/MMDetection/ -git clone --depth 1 -b v2.22.0 https://github.com/open-mmlab/mmdetection.git -cp -r -T patch/mmdetection/ mmdetection/ - -cd mmdetection/ -bash clean_mmdetection.sh -bash build_mmdetection.sh - -pip3 install build_pip/mmdet-2.22.0+corex*-py3-none-any.whl -popd - # Install libGL +## CentOS yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx -# Install urllib3 -pip3 install urllib3==1.26.6 - -cd ../../../../toolbox/MMDetection/mmdetection -pip3 install -v -e . +# install MMDetection +git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1 +cd mmdetection +pip install -v -e . ``` ## Step 2: Preparing datasets @@ -74,9 +57,15 @@ cd mmdetection/ mkdir -p data/ ln -s /path/to/coco2017 data/coco +# Prepare cspnext-tiny_imagenet_600e.pth, skip this if fast network +mkdir -p /root/.cache/torch/hub/checkpoints/ +wget -O /root/.cache/torch/hub/checkpoints/cspnext-tiny_imagenet_600e.pth https://download.openmmlab.com/mmdetection/v3.0/rtmdet/cspnext_rsb_pretrain/cspnext-tiny_imagenet_600e.pth + # On single GPU python3 tools/train.py configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py +sed -i 's/python /python3 /g' tools/dist_train.sh + # Multiple GPUs on one machine bash tools/dist_train.sh configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py 8 ``` diff --git a/cv/instance_segmentation/solov2/pytorch/README.md b/cv/instance_segmentation/solov2/pytorch/README.md index 250c04bb..c396ebfa 100644 --- a/cv/instance_segmentation/solov2/pytorch/README.md +++ b/cv/instance_segmentation/solov2/pytorch/README.md @@ -7,19 +7,16 @@ In this work, we aim at building a simple, direct, and fast instance segmentatio ## Step 1: Installation ```bash -# Install mmcv -pushd ../../../../toolbox/MMDetection -bash prepare_mmcv.sh v2.0.0rc4 -popd +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx -# Install mmdetection -git clone -b v3.2.0 https://github.com/open-mmlab/mmdetection.git +# install MMDetection +git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1 cd mmdetection -pip3 install -r requirements.txt -python3 setup.py develop - -# Install mmengine -pip3 install mmengine==0.8.3 +pip install -v -e . 
# Prepare resnet50-0676ba61.pth, skip this if fast network mkdir -p /root/.cache/torch/hub/checkpoints/ @@ -27,10 +24,8 @@ wget https://download.pytorch.org/models/resnet50-0676ba61.pth -O /root/.cache/t # Install others pip3 install yapf==0.31.0 urllib3==1.26.18 -yum install -y mesa-libGL ``` - ## Step 2: Preparing datasets Go to visit [COCO official website](https://cocodataset.org/#download), then select the COCO dataset you want to download. diff --git a/cv/ocr/dbnetpp/pytorch/README.md b/cv/ocr/dbnetpp/pytorch/README.md index aadf49a2..244dc828 100644 --- a/cv/ocr/dbnetpp/pytorch/README.md +++ b/cv/ocr/dbnetpp/pytorch/README.md @@ -7,23 +7,20 @@ Recently, segmentation-based scene text detection methods have drawn extensive a ## Step 1: Installation ```bash -# Install mmcv -pushd ../../../../toolbox/MMDetection -bash prepare_mmcv.sh v2.0.0rc4 -popd +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx -# Install mmdet and mmocr -pip3 install mmdet==3.1.0 +# Install mmdet and mmocr +pip3 install mmdet==3.3.0 git clone -b v1.0.1 https://github.com/open-mmlab/mmocr.git cd mmocr pip3 install -r requirements.txt python3 setup.py develop -# Install mmengine -pip3 install mmengine==0.8.3 -yum install -y mesa-libGL - # Prepare resnet50-0676ba61.pth, skip this if fast network mkdir -p /root/.cache/torch/hub/checkpoints/ wget https://download.pytorch.org/models/resnet50-0676ba61.pth -O /root/.cache/torch/hub/checkpoints/resnet50-0676ba61.pth @@ -41,6 +38,8 @@ python3 tools/dataset_converters/prepare_dataset.py icdar2015 --task textdet ```bash sed -i 's/val_interval=20/val_interval=1200/g' configs/textdet/_base_/schedules/schedule_sgd_1200e.py sed -i 's/python /python3 /g' tools/dist_train.sh +# match mmdet 3.3.0 +sed -i 's/3.2.0/3.4.0/g' mmocr/__init__.py # On single GPU python3 tools/train.py configs/textdet/dbnetpp/dbnetpp_resnet50_fpnc_1200e_icdar2015.py -- Gitee From f4702b8290b2e928061be76a40b2a523d3fee80f Mon Sep 17 00:00:00 2001 From: "hongliang.yuan" Date: Wed, 5 Mar 2025 10:44:39 +0800 Subject: [PATCH 2/7] update PAConv use github mmdetection3d and delete useless code --- cv/3d_detection/paconv/pytorch/.gitignore | 136 - cv/3d_detection/paconv/pytorch/LICENSE | 203 -- cv/3d_detection/paconv/pytorch/README.md | 36 +- .../configs/3dssd/3dssd_4x4_kitti-3d-car.py | 121 - .../paconv/pytorch/configs/3dssd/README.md | 45 - .../paconv/pytorch/configs/3dssd/metafile.yml | 29 - .../configs/_base_/datasets/coco_instance.py | 48 - .../_base_/datasets/kitti-3d-3class.py | 140 - .../configs/_base_/datasets/kitti-3d-car.py | 138 - .../configs/_base_/datasets/kitti-mono3d.py | 92 - .../configs/_base_/datasets/lyft-3d.py | 136 - .../configs/_base_/datasets/nuim_instance.py | 59 - .../pytorch/configs/_base_/datasets/nus-3d.py | 142 - .../configs/_base_/datasets/nus-mono3d.py | 100 - .../_base_/datasets/range100_lyft-3d.py | 136 - .../_base_/datasets/s3dis-3d-5class.py | 114 - .../_base_/datasets/s3dis_seg-3d-13class.py | 161 - .../_base_/datasets/scannet-3d-18class.py | 128 - .../_base_/datasets/scannet_seg-3d-20class.py | 132 - .../_base_/datasets/sunrgbd-3d-10class.py | 107 - .../_base_/datasets/waymoD5-3d-3class.py | 145 - .../configs/_base_/datasets/waymoD5-3d-car.py | 143 - .../pytorch/configs/_base_/default_runtime.py | 23 - .../pytorch/configs/_base_/models/3dssd.py | 77 - .../models/cascade_mask_rcnn_r50_fpn.py | 198 -- .../centerpoint_01voxel_second_secfpn_nus.py | 83 - .../centerpoint_02pillar_second_secfpn_nus.py | 83 - 
.../pytorch/configs/_base_/models/dgcnn.py | 28 - .../pytorch/configs/_base_/models/fcos3d.py | 78 - .../configs/_base_/models/groupfree3d.py | 71 - .../pytorch/configs/_base_/models/h3dnet.py | 341 -- .../_base_/models/hv_pointpillars_fpn_lyft.py | 22 - .../_base_/models/hv_pointpillars_fpn_nus.py | 95 - .../hv_pointpillars_fpn_range100_lyft.py | 22 - .../models/hv_pointpillars_secfpn_kitti.py | 94 - .../models/hv_pointpillars_secfpn_waymo.py | 107 - .../_base_/models/hv_second_secfpn_kitti.py | 89 - .../_base_/models/hv_second_secfpn_waymo.py | 99 - .../configs/_base_/models/imvotenet_image.py | 108 - .../_base_/models/mask_rcnn_r50_fpn.py | 124 - .../configs/_base_/models/paconv_cuda_ssg.py | 7 - .../configs/_base_/models/paconv_ssg.py | 49 - .../pytorch/configs/_base_/models/parta2.py | 201 -- .../pytorch/configs/_base_/models/pgd.py | 55 - .../configs/_base_/models/point_rcnn.py | 131 - .../configs/_base_/models/pointnet2_msg.py | 28 - .../configs/_base_/models/pointnet2_ssg.py | 35 - .../pytorch/configs/_base_/models/smoke.py | 53 - .../pytorch/configs/_base_/models/votenet.py | 73 - .../configs/_base_/schedules/cosine.py | 20 - .../configs/_base_/schedules/cyclic_20e.py | 24 - .../configs/_base_/schedules/cyclic_40e.py | 31 - .../_base_/schedules/mmdet_schedule_1x.py | 11 - .../configs/_base_/schedules/schedule_2x.py | 14 - .../configs/_base_/schedules/schedule_3x.py | 9 - .../_base_/schedules/seg_cosine_100e.py | 8 - .../_base_/schedules/seg_cosine_150e.py | 9 - .../_base_/schedules/seg_cosine_200e.py | 9 - .../_base_/schedules/seg_cosine_50e.py | 9 - ...pn_4x8_cyclic_80e_pcdet_kitti-3d-3class.py | 332 -- ...lars_secfpn_3x8_100e_det3d_kitti-3d-car.py | 201 -- ...rs_secfpn_4x8_80e_pcdet_kitti-3d-3class.py | 244 -- ...nd_secfpn_4x8_80e_pcdet_kitti-3d-3class.py | 251 -- .../pytorch/configs/centerpoint/README.md | 138 - ...5voxel_second_secfpn_4x8_cyclic_20e_nus.py | 140 - ...ond_secfpn_circlenms_4x8_cyclic_20e_nus.py | 3 - ...el_second_secfpn_dcn_4x8_cyclic_20e_nus.py | 15 - ..._secfpn_dcn_4x8_cyclic_flip-tta_20e_nus.py | 50 - ...econd_secfpn_dcn_4x8_cyclic_tta_20e_nus.py | 52 - ...secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py | 16 - ...n_circlenms_4x8_cyclic_flip-tta_20e_nus.py | 51 - ...1voxel_second_secfpn_4x8_cyclic_20e_nus.py | 171 - ...ond_secfpn_circlenms_4x8_cyclic_20e_nus.py | 3 - ...el_second_secfpn_dcn_4x8_cyclic_20e_nus.py | 15 - ...secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py | 16 - ...pillar_second_secfpn_4x8_cyclic_20e_nus.py | 170 - ...ond_secfpn_circlenms_4x8_cyclic_20e_nus.py | 3 - ...ar_second_secfpn_dcn_4x8_cyclic_20e_nus.py | 15 - ...secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py | 16 - .../pytorch/configs/centerpoint/metafile.yml | 95 - .../paconv/pytorch/configs/dgcnn/README.md | 55 - ...n_32x4_cosine_100e_s3dis_seg-3d-13class.py | 24 - .../paconv/pytorch/configs/dgcnn/metafile.yml | 24 - .../configs/dynamic_voxelization/README.md | 40 - ...intpillars_secfpn_6x8_160e_kitti-3d-car.py | 19 - ...d_secfpn_2x8_cosine_80e_kitti-3d-3class.py | 22 - .../dv_second_secfpn_6x8_80e_kitti-3d-car.py | 18 - .../configs/dynamic_voxelization/metafile.yml | 53 - .../paconv/pytorch/configs/fcos3d/README.md | 75 - ...caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py | 75 - ..._gn-head_dcn_2x8_1x_nus-mono3d_finetune.py | 8 - .../pytorch/configs/fcos3d/metafile.yml | 43 - .../pytorch/configs/free_anchor/README.md | 105 - ...s_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py | 47 - ...f_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py | 18 - ...ll_free-anchor_strong-aug_4x8_3x_nus-3d.py | 70 - 
...f_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py | 18 - ...ll_free-anchor_strong-aug_4x8_3x_nus-3d.py | 70 - ...f_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py | 18 - .../pytorch/configs/free_anchor/metafile.yml | 96 - .../pytorch/configs/groupfree3d/README.md | 44 - ...pfree3d_8x4_scannet-3d-18class-L12-O256.py | 199 -- ...upfree3d_8x4_scannet-3d-18class-L6-O256.py | 198 -- ...e3d_8x4_scannet-3d-18class-w2x-L12-O256.py | 214 -- ...e3d_8x4_scannet-3d-18class-w2x-L12-O512.py | 215 -- .../pytorch/configs/groupfree3d/metafile.yml | 72 - .../paconv/pytorch/configs/h3dnet/README.md | 44 - .../h3dnet/h3dnet_3x8_scannet-3d-18class.py | 64 - .../pytorch/configs/h3dnet/metafile.yml | 29 - .../pytorch/configs/imvotenet/README.md | 43 - ...ter_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py | 58 - ...mvotenet_stage2_16x8_sunrgbd-3d-10class.py | 260 -- .../pytorch/configs/imvotenet/metafile.yml | 43 - .../pytorch/configs/imvoxelnet/README.md | 38 - .../imvoxelnet/imvoxelnet_4x8_kitti-3d-car.py | 162 - .../pytorch/configs/imvoxelnet/metafile.yml | 29 - .../paconv/pytorch/configs/monoflex/README.md | 48 - .../pytorch/configs/monoflex/metafile.yml | 30 - .../paconv/pytorch/configs/mvxnet/README.md | 38 - ...nd_secfpn_adamw_2x8_80e_kitti-3d-3class.py | 251 -- .../pytorch/configs/mvxnet/metafile.yml | 30 - .../paconv/pytorch/configs/nuimages/README.md | 59 - .../cascade_mask_rcnn_r101_fpn_1x_nuim.py | 2 - .../cascade_mask_rcnn_r50_fpn_1x_nuim.py | 60 - ...cade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py | 3 - ...ade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py | 7 - ...ascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py | 13 - .../configs/nuimages/htc_r50_fpn_1x_nuim.py | 44 - .../nuimages/htc_r50_fpn_coco-20e_1x_nuim.py | 3 - .../nuimages/htc_r50_fpn_coco-20e_20e_nuim.py | 4 - .../htc_without_semantic_r50_fpn_1x_nuim.py | 221 -- ..._fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py | 23 - .../nuimages/mask_rcnn_r101_fpn_1x_nuim.py | 2 - .../mask_rcnn_r50_caffe_fpn_1x_nuim.py | 46 - ...mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py | 48 - ...ask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py | 52 - .../nuimages/mask_rcnn_r50_fpn_1x_nuim.py | 8 - .../mask_rcnn_r50_fpn_coco-2x_1x_nuim.py | 9 - .../mask_rcnn_r50_fpn_coco-2x_1x_nus-2d.py | 39 - .../mask_rcnn_x101_32x4d_fpn_1x_nuim.py | 13 - .../pytorch/configs/nuimages/metafile.yml | 255 -- .../paconv/pytorch/configs/paconv/README.md | 51 - .../pytorch/configs/paconv/metafile.yml | 29 - ...sg_8x8_cosine_200e_s3dis_seg-3d-13class.py | 69 - ...sg_8x8_cosine_150e_s3dis_seg-3d-13class.py | 66 - .../paconv/pytorch/configs/parta2/README.md | 38 - ...2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py | 122 - ...rtA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py | 137 - .../pytorch/configs/parta2/metafile.yml | 41 - .../paconv/pytorch/configs/pgd/README.md | 69 - .../paconv/pytorch/configs/pgd/metafile.yml | 81 - ...01_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py | 107 - ...fpn_gn-head_2x16_1x_nus-mono3d_finetune.py | 9 - ...01_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py | 5 - ...fpn_gn-head_2x16_2x_nus-mono3d_finetune.py | 9 - ...1_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py | 127 - .../pytorch/configs/point_rcnn/README.md | 47 - .../pytorch/configs/point_rcnn/metafile.yml | 29 - .../point_rcnn_2x8_kitti-3d-3classes.py | 94 - .../pytorch/configs/pointnet2/README.md | 72 - .../pytorch/configs/pointnet2/metafile.yml | 94 - ...16x2_cosine_250e_scannet_seg-3d-20class.py | 36 - ...sg_16x2_cosine_80e_s3dis_seg-3d-13class.py | 27 - ...16x2_cosine_250e_scannet_seg-3d-20class.py | 166 - ...16x2_cosine_200e_scannet_seg-3d-20class.py | 34 - 
...sg_16x2_cosine_50e_s3dis_seg-3d-13class.py | 25 - ...16x2_cosine_200e_scannet_seg-3d-20class.py | 164 - .../pytorch/configs/pointpillars/README.md | 78 - ...pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py | 5 - ..._pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py | 5 - ...tpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py | 4 - ...ars_fpn_sbn-all_range100_2x8_2x_lyft-3d.py | 5 - ...pillars_secfpn_6x8_160e_kitti-3d-3class.py | 81 - ...intpillars_secfpn_6x8_160e_kitti-3d-car.py | 87 - ...ntpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py | 43 - ...intpillars_secfpn_sbn-all_4x8_2x_nus-3d.py | 42 - ...llars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py | 4 - ..._secfpn_sbn-all_range100_2x8_2x_lyft-3d.py | 42 - ...lars_secfpn_sbn_2x16_2x_waymo-3d-3class.py | 9 - ...pillars_secfpn_sbn_2x16_2x_waymo-3d-car.py | 37 - ...rs_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py | 6 - ...llars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py | 34 - .../pytorch/configs/pointpillars/metafile.yml | 213 -- .../paconv/pytorch/configs/regnet/README.md | 82 - ..._regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py | 24 - ...regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py | 24 - ..._regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py | 24 - ...et-400mf_fpn_sbn-all_fp16_2x8_2x_nus-3d.py | 4 - ...0mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py | 24 - ...net-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py | 39 - ...gnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py | 38 - ..._secfpn_sbn-all_range100_2x8_2x_lyft-3d.py | 40 - .../pytorch/configs/regnet/metafile.yml | 85 - .../paconv/pytorch/configs/sassd/README.md | 28 - .../sassd/sassd_6x8_80e_kitti-3d-3class.py | 94 - .../paconv/pytorch/configs/second/README.md | 54 - ...v_second_secfpn_6x8_80e_kitti-3d-3class.py | 5 - .../hv_second_secfpn_6x8_80e_kitti-3d-car.py | 30 - ...ond_secfpn_fp16_6x8_80e_kitti-3d-3class.py | 3 - ...second_secfpn_fp16_6x8_80e_kitti-3d-car.py | 3 - ...nd_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py | 112 - .../pytorch/configs/second/metafile.yml | 97 - .../paconv/pytorch/configs/smoke/README.md | 47 - .../paconv/pytorch/configs/smoke/metafile.yml | 30 - ...orch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py | 64 - .../paconv/pytorch/configs/ssn/README.md | 53 - ...et-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py | 21 - ...net-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py | 19 - .../hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py | 224 -- .../hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py | 238 -- .../paconv/pytorch/configs/ssn/metafile.yml | 72 - .../paconv/pytorch/configs/votenet/README.md | 68 - .../pytorch/configs/votenet/metafile.yml | 59 - .../votenet_16x8_sunrgbd-3d-10class.py | 21 - .../votenet/votenet_8x8_scannet-3d-18class.py | 36 - .../votenet_iouloss_8x8_scannet-3d-18class.py | 8 - .../paconv/pytorch/data/s3dis/README.md | 59 - .../data/s3dis/collect_indoor3d_data.py | 48 - .../pytorch/data/s3dis/indoor3d_util.py | 53 - .../data/s3dis/meta_data/anno_paths.txt | 272 -- .../data/s3dis/meta_data/class_names.txt | 13 - cv/3d_detection/paconv/pytorch/dist_train.sh | 34 - .../paconv/pytorch/mmdet/__init__.py | 29 - .../paconv/pytorch/mmdet/apis/__init__.py | 12 - .../paconv/pytorch/mmdet/apis/inference.py | 251 -- .../paconv/pytorch/mmdet/apis/test.py | 209 -- .../paconv/pytorch/mmdet/apis/train.py | 244 -- .../paconv/pytorch/mmdet/core/__init__.py | 9 - .../pytorch/mmdet/core/anchor/__init__.py | 14 - .../mmdet/core/anchor/anchor_generator.py | 866 ----- .../pytorch/mmdet/core/anchor/builder.py | 19 - .../mmdet/core/anchor/point_generator.py | 263 -- .../paconv/pytorch/mmdet/core/anchor/utils.py | 72 - .../pytorch/mmdet/core/bbox/__init__.py | 28 - 
.../mmdet/core/bbox/assigners/__init__.py | 22 - .../bbox/assigners/approx_max_iou_assigner.py | 146 - .../core/bbox/assigners/assign_result.py | 206 -- .../core/bbox/assigners/atss_assigner.py | 179 - .../core/bbox/assigners/base_assigner.py | 10 - .../bbox/assigners/center_region_assigner.py | 336 -- .../core/bbox/assigners/grid_assigner.py | 156 - .../core/bbox/assigners/hungarian_assigner.py | 146 - .../bbox/assigners/mask_hungarian_assigner.py | 132 - .../core/bbox/assigners/max_iou_assigner.py | 218 -- .../core/bbox/assigners/point_assigner.py | 134 - .../core/bbox/assigners/region_assigner.py | 222 -- .../core/bbox/assigners/sim_ota_assigner.py | 257 -- .../bbox/assigners/task_aligned_assigner.py | 151 - .../core/bbox/assigners/uniform_assigner.py | 135 - .../paconv/pytorch/mmdet/core/bbox/builder.py | 21 - .../pytorch/mmdet/core/bbox/coder/__init__.py | 15 - .../mmdet/core/bbox/coder/base_bbox_coder.py | 18 - .../core/bbox/coder/bucketing_bbox_coder.py | 351 -- .../core/bbox/coder/delta_xywh_bbox_coder.py | 392 --- .../bbox/coder/distance_point_bbox_coder.py | 63 - .../coder/legacy_delta_xywh_bbox_coder.py | 216 -- .../core/bbox/coder/pseudo_bbox_coder.py | 19 - .../mmdet/core/bbox/coder/tblr_bbox_coder.py | 206 -- .../mmdet/core/bbox/coder/yolo_bbox_coder.py | 83 - .../pytorch/mmdet/core/bbox/demodata.py | 42 - .../core/bbox/iou_calculators/__init__.py | 5 - .../core/bbox/iou_calculators/builder.py | 9 - .../bbox/iou_calculators/iou2d_calculator.py | 261 -- .../mmdet/core/bbox/match_costs/__init__.py | 9 - .../mmdet/core/bbox/match_costs/builder.py | 9 - .../mmdet/core/bbox/match_costs/match_cost.py | 359 -- .../mmdet/core/bbox/samplers/__init__.py | 19 - .../mmdet/core/bbox/samplers/base_sampler.py | 102 - .../core/bbox/samplers/combined_sampler.py | 21 - .../samplers/instance_balanced_pos_sampler.py | 56 - .../bbox/samplers/iou_balanced_neg_sampler.py | 158 - .../core/bbox/samplers/mask_pseudo_sampler.py | 44 - .../bbox/samplers/mask_sampling_result.py | 60 - .../mmdet/core/bbox/samplers/ohem_sampler.py | 111 - .../core/bbox/samplers/pseudo_sampler.py | 42 - .../core/bbox/samplers/random_sampler.py | 82 - .../core/bbox/samplers/sampling_result.py | 153 - .../core/bbox/samplers/score_hlr_sampler.py | 265 -- .../pytorch/mmdet/core/bbox/transforms.py | 270 -- .../mmdet/core/data_structures/__init__.py | 5 - .../core/data_structures/general_data.py | 326 -- .../core/data_structures/instance_data.py | 188 -- .../pytorch/mmdet/core/evaluation/__init__.py | 19 - .../mmdet/core/evaluation/bbox_overlaps.py | 65 - .../mmdet/core/evaluation/class_names.py | 332 -- .../mmdet/core/evaluation/eval_hooks.py | 130 - .../pytorch/mmdet/core/evaluation/mean_ap.py | 753 ----- .../mmdet/core/evaluation/panoptic_utils.py | 6 - .../pytorch/mmdet/core/evaluation/recall.py | 197 -- .../pytorch/mmdet/core/export/__init__.py | 12 - .../mmdet/core/export/model_wrappers.py | 183 -- .../pytorch/mmdet/core/export/onnx_helper.py | 223 -- .../pytorch/mmdet/core/export/pytorch2onnx.py | 159 - .../pytorch/mmdet/core/hook/__init__.py | 15 - .../pytorch/mmdet/core/hook/checkloss_hook.py | 24 - .../paconv/pytorch/mmdet/core/hook/ema.py | 130 - .../mmdet/core/hook/memory_profiler_hook.py | 55 - .../mmdet/core/hook/set_epoch_info_hook.py | 15 - .../pytorch/mmdet/core/hook/sync_norm_hook.py | 52 - .../mmdet/core/hook/sync_random_size_hook.py | 72 - .../mmdet/core/hook/yolox_lrupdater_hook.py | 67 - .../mmdet/core/hook/yolox_mode_switch_hook.py | 52 - .../pytorch/mmdet/core/mask/__init__.py | 9 - 
.../pytorch/mmdet/core/mask/mask_target.py | 127 - .../pytorch/mmdet/core/mask/structures.py | 1102 ------- .../paconv/pytorch/mmdet/core/mask/utils.py | 89 - .../mmdet/core/post_processing/__init__.py | 10 - .../mmdet/core/post_processing/bbox_nms.py | 171 - .../mmdet/core/post_processing/matrix_nms.py | 121 - .../mmdet/core/post_processing/merge_augs.py | 154 - .../pytorch/mmdet/core/utils/__init__.py | 13 - .../pytorch/mmdet/core/utils/dist_utils.py | 193 -- .../paconv/pytorch/mmdet/core/utils/misc.py | 208 -- .../mmdet/core/visualization/__init__.py | 9 - .../pytorch/mmdet/core/visualization/image.py | 524 --- .../mmdet/core/visualization/palette.py | 63 - .../paconv/pytorch/mmdet/datasets/__init__.py | 28 - .../mmdet/datasets/api_wrappers/__init__.py | 7 - .../mmdet/datasets/api_wrappers/coco_api.py | 47 - .../api_wrappers/panoptic_evaluation.py | 224 -- .../paconv/pytorch/mmdet/datasets/builder.py | 215 -- .../pytorch/mmdet/datasets/cityscapes.py | 338 -- .../paconv/pytorch/mmdet/datasets/coco.py | 649 ---- .../pytorch/mmdet/datasets/coco_panoptic.py | 692 ---- .../paconv/pytorch/mmdet/datasets/custom.py | 410 --- .../mmdet/datasets/dataset_wrappers.py | 456 --- .../pytorch/mmdet/datasets/deepfashion.py | 16 - .../paconv/pytorch/mmdet/datasets/lvis.py | 742 ----- .../pytorch/mmdet/datasets/openimages.py | 891 ----- .../mmdet/datasets/pipelines/__init__.py | 30 - .../mmdet/datasets/pipelines/auto_augment.py | 894 ----- .../mmdet/datasets/pipelines/compose.py | 55 - .../mmdet/datasets/pipelines/formating.py | 9 - .../mmdet/datasets/pipelines/formatting.py | 392 --- .../mmdet/datasets/pipelines/instaboost.py | 118 - .../mmdet/datasets/pipelines/loading.py | 609 ---- .../mmdet/datasets/pipelines/test_time_aug.py | 121 - .../mmdet/datasets/pipelines/transforms.py | 2919 ----------------- .../mmdet/datasets/samplers/__init__.py | 10 - .../datasets/samplers/class_aware_sampler.py | 176 - .../datasets/samplers/distributed_sampler.py | 54 - .../mmdet/datasets/samplers/group_sampler.py | 148 - .../datasets/samplers/infinite_sampler.py | 186 -- .../paconv/pytorch/mmdet/datasets/utils.py | 166 - .../paconv/pytorch/mmdet/datasets/voc.py | 112 - .../pytorch/mmdet/datasets/wider_face.py | 54 - .../pytorch/mmdet/datasets/xml_style.py | 178 - .../paconv/pytorch/mmdet/models/__init__.py | 19 - .../mmdet/models/backbones/__init__.py | 26 - .../mmdet/models/backbones/csp_darknet.py | 284 -- .../pytorch/mmdet/models/backbones/darknet.py | 213 -- .../models/backbones/detectors_resnet.py | 353 -- .../models/backbones/detectors_resnext.py | 123 - .../mmdet/models/backbones/efficientnet.py | 417 --- .../mmdet/models/backbones/hourglass.py | 222 -- .../pytorch/mmdet/models/backbones/hrnet.py | 589 ---- .../mmdet/models/backbones/mobilenet_v2.py | 197 -- .../pytorch/mmdet/models/backbones/pvt.py | 591 ---- .../pytorch/mmdet/models/backbones/regnet.py | 356 -- .../pytorch/mmdet/models/backbones/res2net.py | 327 -- .../pytorch/mmdet/models/backbones/resnest.py | 322 -- .../pytorch/mmdet/models/backbones/resnet.py | 672 ---- .../pytorch/mmdet/models/backbones/resnext.py | 154 - .../pytorch/mmdet/models/backbones/ssd_vgg.py | 128 - .../pytorch/mmdet/models/backbones/swin.py | 763 ----- .../mmdet/models/backbones/trident_resnet.py | 298 -- .../paconv/pytorch/mmdet/models/builder.py | 59 - .../mmdet/models/dense_heads/__init__.py | 56 - .../models/dense_heads/anchor_free_head.py | 350 -- .../mmdet/models/dense_heads/anchor_head.py | 542 --- .../mmdet/models/dense_heads/atss_head.py | 501 --- 
.../models/dense_heads/autoassign_head.py | 527 --- .../models/dense_heads/base_dense_head.py | 526 --- .../models/dense_heads/base_mask_head.py | 116 - .../models/dense_heads/cascade_rpn_head.py | 801 ----- .../models/dense_heads/centernet_head.py | 412 --- .../models/dense_heads/centripetal_head.py | 430 --- .../mmdet/models/dense_heads/corner_head.py | 1086 ------ .../dense_heads/deformable_detr_head.py | 318 -- .../models/dense_heads/dense_test_mixins.py | 206 -- .../mmdet/models/dense_heads/detr_head.py | 844 ----- .../models/dense_heads/embedding_rpn_head.py | 116 - .../mmdet/models/dense_heads/fcos_head.py | 455 --- .../mmdet/models/dense_heads/fovea_head.py | 385 --- .../dense_heads/free_anchor_retina_head.py | 272 -- .../mmdet/models/dense_heads/fsaf_head.py | 433 --- .../models/dense_heads/ga_retina_head.py | 113 - .../mmdet/models/dense_heads/ga_rpn_head.py | 177 - .../mmdet/models/dense_heads/gfl_head.py | 648 ---- .../models/dense_heads/guided_anchor_head.py | 868 ----- .../mmdet/models/dense_heads/lad_head.py | 232 -- .../mmdet/models/dense_heads/ld_head.py | 261 -- .../models/dense_heads/mask2former_head.py | 430 --- .../models/dense_heads/maskformer_head.py | 553 ---- .../mmdet/models/dense_heads/nasfcos_head.py | 80 - .../mmdet/models/dense_heads/paa_head.py | 756 ----- .../models/dense_heads/pisa_retinanet_head.py | 155 - .../mmdet/models/dense_heads/pisa_ssd_head.py | 140 - .../models/dense_heads/reppoints_head.py | 764 ----- .../mmdet/models/dense_heads/retina_head.py | 115 - .../models/dense_heads/retina_sepbn_head.py | 118 - .../mmdet/models/dense_heads/rpn_head.py | 265 -- .../models/dense_heads/sabl_retina_head.py | 630 ---- .../mmdet/models/dense_heads/solo_head.py | 1177 ------- .../mmdet/models/dense_heads/ssd_head.py | 357 -- .../mmdet/models/dense_heads/tood_head.py | 778 ----- .../mmdet/models/dense_heads/vfnet_head.py | 740 ----- .../mmdet/models/dense_heads/yolact_head.py | 1018 ------ .../mmdet/models/dense_heads/yolo_head.py | 619 ---- .../mmdet/models/dense_heads/yolof_head.py | 416 --- .../mmdet/models/dense_heads/yolox_head.py | 491 --- .../mmdet/models/detectors/__init__.py | 56 - .../pytorch/mmdet/models/detectors/atss.py | 19 - .../mmdet/models/detectors/autoassign.py | 19 - .../pytorch/mmdet/models/detectors/base.py | 360 -- .../mmdet/models/detectors/cascade_rcnn.py | 49 - .../mmdet/models/detectors/centernet.py | 111 - .../mmdet/models/detectors/cornernet.py | 97 - .../mmdet/models/detectors/deformable_detr.py | 10 - .../pytorch/mmdet/models/detectors/detr.py | 70 - .../mmdet/models/detectors/fast_rcnn.py | 55 - .../mmdet/models/detectors/faster_rcnn.py | 27 - .../pytorch/mmdet/models/detectors/fcos.py | 19 - .../pytorch/mmdet/models/detectors/fovea.py | 19 - .../pytorch/mmdet/models/detectors/fsaf.py | 19 - .../pytorch/mmdet/models/detectors/gfl.py | 18 - .../mmdet/models/detectors/grid_rcnn.py | 32 - .../pytorch/mmdet/models/detectors/htc.py | 16 - .../mmdet/models/detectors/kd_one_stage.py | 103 - .../pytorch/mmdet/models/detectors/lad.py | 92 - .../mmdet/models/detectors/mask2former.py | 27 - .../mmdet/models/detectors/mask_rcnn.py | 27 - .../models/detectors/mask_scoring_rcnn.py | 30 - .../mmdet/models/detectors/maskformer.py | 233 -- .../pytorch/mmdet/models/detectors/nasfcos.py | 22 - .../pytorch/mmdet/models/detectors/paa.py | 19 - .../mmdet/models/detectors/panoptic_fpn.py | 34 - .../detectors/panoptic_two_stage_segmentor.py | 279 -- .../mmdet/models/detectors/point_rend.py | 32 - .../mmdet/models/detectors/queryinst.py | 28 - 
.../models/detectors/reppoints_detector.py | 24 - .../mmdet/models/detectors/retinanet.py | 19 - .../pytorch/mmdet/models/detectors/rpn.py | 159 - .../pytorch/mmdet/models/detectors/scnet.py | 11 - .../mmdet/models/detectors/single_stage.py | 171 - .../detectors/single_stage_instance_seg.py | 363 -- .../pytorch/mmdet/models/detectors/solo.py | 30 - .../mmdet/models/detectors/sparse_rcnn.py | 111 - .../pytorch/mmdet/models/detectors/tood.py | 23 - .../models/detectors/trident_faster_rcnn.py | 70 - .../mmdet/models/detectors/two_stage.py | 211 -- .../pytorch/mmdet/models/detectors/vfnet.py | 20 - .../pytorch/mmdet/models/detectors/yolact.py | 120 - .../pytorch/mmdet/models/detectors/yolo.py | 42 - .../pytorch/mmdet/models/detectors/yolof.py | 19 - .../pytorch/mmdet/models/detectors/yolox.py | 136 - .../pytorch/mmdet/models/losses/__init__.py | 32 - .../pytorch/mmdet/models/losses/accuracy.py | 79 - .../pytorch/mmdet/models/losses/ae_loss.py | 103 - .../mmdet/models/losses/balanced_l1_loss.py | 124 - .../mmdet/models/losses/cross_entropy_loss.py | 301 -- .../pytorch/mmdet/models/losses/dice_loss.py | 146 - .../pytorch/mmdet/models/losses/focal_loss.py | 244 -- .../models/losses/gaussian_focal_loss.py | 92 - .../mmdet/models/losses/gfocal_loss.py | 245 -- .../pytorch/mmdet/models/losses/ghm_loss.py | 213 -- .../pytorch/mmdet/models/losses/iou_loss.py | 474 --- .../pytorch/mmdet/models/losses/kd_loss.py | 88 - .../pytorch/mmdet/models/losses/mse_loss.py | 57 - .../pytorch/mmdet/models/losses/pisa_loss.py | 184 -- .../mmdet/models/losses/seesaw_loss.py | 262 -- .../mmdet/models/losses/smooth_l1_loss.py | 146 - .../pytorch/mmdet/models/losses/utils.py | 105 - .../mmdet/models/losses/varifocal_loss.py | 134 - .../pytorch/mmdet/models/necks/__init__.py | 23 - .../paconv/pytorch/mmdet/models/necks/bfp.py | 102 - .../mmdet/models/necks/channel_mapper.py | 100 - .../mmdet/models/necks/ct_resnet_neck.py | 94 - .../mmdet/models/necks/dilated_encoder.py | 108 - .../pytorch/mmdet/models/necks/dyhead.py | 174 - .../paconv/pytorch/mmdet/models/necks/fpg.py | 406 --- .../paconv/pytorch/mmdet/models/necks/fpn.py | 204 -- .../pytorch/mmdet/models/necks/fpn_carafe.py | 275 -- .../pytorch/mmdet/models/necks/hrfpn.py | 100 - .../pytorch/mmdet/models/necks/nas_fpn.py | 158 - .../pytorch/mmdet/models/necks/nasfcos_fpn.py | 170 - .../pytorch/mmdet/models/necks/pafpn.py | 158 - .../paconv/pytorch/mmdet/models/necks/rfp.py | 135 - .../pytorch/mmdet/models/necks/ssd_neck.py | 129 - .../pytorch/mmdet/models/necks/yolo_neck.py | 140 - .../pytorch/mmdet/models/necks/yolox_pafpn.py | 156 - .../pytorch/mmdet/models/plugins/__init__.py | 9 - .../pytorch/mmdet/models/plugins/dropblock.py | 85 - .../plugins/msdeformattn_pixel_decoder.py | 269 -- .../mmdet/models/plugins/pixel_decoder.py | 243 -- .../mmdet/models/roi_heads/__init__.py | 37 - .../mmdet/models/roi_heads/base_roi_head.py | 103 - .../models/roi_heads/bbox_heads/__init__.py | 14 - .../models/roi_heads/bbox_heads/bbox_head.py | 594 ---- .../roi_heads/bbox_heads/convfc_bbox_head.py | 229 -- .../models/roi_heads/bbox_heads/dii_head.py | 426 --- .../roi_heads/bbox_heads/double_bbox_head.py | 178 - .../models/roi_heads/bbox_heads/sabl_head.py | 596 ---- .../roi_heads/bbox_heads/scnet_bbox_head.py | 77 - .../models/roi_heads/cascade_roi_head.py | 631 ---- .../mmdet/models/roi_heads/double_roi_head.py | 34 - .../models/roi_heads/dynamic_roi_head.py | 155 - .../mmdet/models/roi_heads/grid_roi_head.py | 170 - .../mmdet/models/roi_heads/htc_roi_head.py | 628 ---- 
.../models/roi_heads/mask_heads/__init__.py | 20 - .../roi_heads/mask_heads/coarse_mask_head.py | 100 - .../roi_heads/mask_heads/dynamic_mask_head.py | 147 - .../roi_heads/mask_heads/fcn_mask_head.py | 412 --- .../mask_heads/feature_relay_head.py | 53 - .../mask_heads/fused_semantic_head.py | 117 - .../mask_heads/global_context_head.py | 101 - .../models/roi_heads/mask_heads/grid_head.py | 363 -- .../roi_heads/mask_heads/htc_mask_head.py | 39 - .../roi_heads/mask_heads/mask_point_head.py | 253 -- .../roi_heads/mask_heads/maskiou_head.py | 183 -- .../roi_heads/mask_heads/scnet_mask_head.py | 28 - .../mask_heads/scnet_semantic_head.py | 28 - .../models/roi_heads/mask_scoring_roi_head.py | 113 - .../mmdet/models/roi_heads/pisa_roi_head.py | 160 - .../models/roi_heads/point_rend_roi_head.py | 393 --- .../roi_heads/roi_extractors/__init__.py | 6 - .../roi_extractors/base_roi_extractor.py | 88 - .../roi_extractors/generic_roi_extractor.py | 84 - .../single_level_roi_extractor.py | 115 - .../mmdet/models/roi_heads/scnet_roi_head.py | 605 ---- .../models/roi_heads/shared_heads/__init__.py | 4 - .../roi_heads/shared_heads/res_layer.py | 80 - .../mmdet/models/roi_heads/sparse_roi_head.py | 424 --- .../models/roi_heads/standard_roi_head.py | 397 --- .../mmdet/models/roi_heads/test_mixins.py | 311 -- .../models/roi_heads/trident_roi_head.py | 120 - .../mmdet/models/seg_heads/__init__.py | 3 - .../models/seg_heads/base_semantic_head.py | 86 - .../models/seg_heads/panoptic_fpn_head.py | 155 - .../panoptic_fusion_heads/__init__.py | 5 - .../base_panoptic_fusion_head.py | 48 - .../heuristic_fusion_head.py | 126 - .../maskformer_fusion_head.py | 241 -- .../pytorch/mmdet/models/utils/__init__.py | 34 - .../mmdet/models/utils/brick_wrappers.py | 51 - .../pytorch/mmdet/models/utils/builder.py | 47 - .../mmdet/models/utils/ckpt_convert.py | 137 - .../mmdet/models/utils/conv_upsample.py | 67 - .../pytorch/mmdet/models/utils/csp_layer.py | 150 - .../mmdet/models/utils/gaussian_target.py | 268 -- .../mmdet/models/utils/inverted_residual.py | 130 - .../mmdet/models/utils/make_divisible.py | 28 - .../paconv/pytorch/mmdet/models/utils/misc.py | 72 - .../mmdet/models/utils/normed_predictor.py | 88 - .../models/utils/panoptic_gt_processing.py | 62 - .../mmdet/models/utils/point_sample.py | 87 - .../mmdet/models/utils/positional_encoding.py | 163 - .../pytorch/mmdet/models/utils/res_layer.py | 190 -- .../pytorch/mmdet/models/utils/se_layer.py | 127 - .../pytorch/mmdet/models/utils/transformer.py | 1167 ------- .../paconv/pytorch/mmdet/utils/__init__.py | 15 - .../paconv/pytorch/mmdet/utils/collect_env.py | 17 - .../pytorch/mmdet/utils/compat_config.py | 139 - .../pytorch/mmdet/utils/contextmanagers.py | 122 - .../paconv/pytorch/mmdet/utils/logger.py | 65 - .../paconv/pytorch/mmdet/utils/misc.py | 76 - .../paconv/pytorch/mmdet/utils/profiling.py | 40 - .../paconv/pytorch/mmdet/utils/setup_env.py | 53 - .../paconv/pytorch/mmdet/utils/split_batch.py | 45 - .../pytorch/mmdet/utils/util_distribution.py | 74 - .../paconv/pytorch/mmdet/utils/util_mixins.py | 105 - .../paconv/pytorch/mmdet/utils/util_random.py | 34 - .../paconv/pytorch/mmdet/version.py | 19 - .../paconv/pytorch/mmdet3d/__init__.py | 49 - .../paconv/pytorch/mmdet3d/apis/__init__.py | 14 - .../paconv/pytorch/mmdet3d/apis/inference.py | 526 --- .../paconv/pytorch/mmdet3d/apis/test.py | 90 - .../paconv/pytorch/mmdet3d/apis/train.py | 351 -- .../paconv/pytorch/mmdet3d/core/__init__.py | 9 - .../pytorch/mmdet3d/core/anchor/__init__.py | 10 - 
.../core/anchor/anchor_3d_generator.py | 419 --- .../pytorch/mmdet3d/core/bbox/__init__.py | 30 - .../mmdet3d/core/bbox/assigners/__init__.py | 4 - .../pytorch/mmdet3d/core/bbox/box_np_ops.py | 827 ----- .../mmdet3d/core/bbox/coders/__init__.py | 19 - .../bbox/coders/anchor_free_bbox_coder.py | 130 - .../bbox/coders/centerpoint_bbox_coders.py | 229 -- .../bbox/coders/delta_xyzwhlr_bbox_coder.py | 91 - .../core/bbox/coders/fcos3d_bbox_coder.py | 127 - .../bbox/coders/groupfree3d_bbox_coder.py | 191 -- .../core/bbox/coders/monoflex_bbox_coder.py | 515 --- .../coders/partial_bin_based_bbox_coder.py | 241 -- .../core/bbox/coders/pgd_bbox_coder.py | 128 - .../bbox/coders/point_xyzwhlr_bbox_coder.py | 117 - .../core/bbox/coders/smoke_bbox_coder.py | 216 -- .../core/bbox/iou_calculators/__init__.py | 11 - .../bbox/iou_calculators/iou3d_calculator.py | 329 -- .../mmdet3d/core/bbox/samplers/__init__.py | 13 - .../samplers/iou_neg_piecewise_sampler.py | 183 -- .../mmdet3d/core/bbox/structures/__init__.py | 18 - .../core/bbox/structures/base_box3d.py | 578 ---- .../core/bbox/structures/box_3d_mode.py | 197 -- .../mmdet3d/core/bbox/structures/cam_box3d.py | 354 -- .../core/bbox/structures/coord_3d_mode.py | 234 -- .../core/bbox/structures/depth_box3d.py | 270 -- .../core/bbox/structures/lidar_box3d.py | 210 -- .../mmdet3d/core/bbox/structures/utils.py | 342 -- .../pytorch/mmdet3d/core/bbox/transforms.py | 76 - .../mmdet3d/core/evaluation/__init__.py | 11 - .../mmdet3d/core/evaluation/indoor_eval.py | 309 -- .../core/evaluation/instance_seg_eval.py | 128 - .../core/evaluation/kitti_utils/__init__.py | 4 - .../core/evaluation/kitti_utils/eval.py | 950 ------ .../core/evaluation/kitti_utils/rotate_iou.py | 379 --- .../mmdet3d/core/evaluation/lyft_eval.py | 285 -- .../core/evaluation/scannet_utils/__init__.py | 4 - .../evaluate_semantic_instance.py | 347 -- .../core/evaluation/scannet_utils/util_3d.py | 84 - .../mmdet3d/core/evaluation/seg_eval.py | 131 - .../core/evaluation/waymo_utils/__init__.py | 4 - .../waymo_utils/prediction_kitti_to_waymo.py | 263 -- .../pytorch/mmdet3d/core/points/__init__.py | 30 - .../mmdet3d/core/points/base_points.py | 440 --- .../pytorch/mmdet3d/core/points/cam_points.py | 63 - .../mmdet3d/core/points/depth_points.py | 58 - .../mmdet3d/core/points/lidar_points.py | 58 - .../mmdet3d/core/post_processing/__init__.py | 14 - .../mmdet3d/core/post_processing/box3d_nms.py | 288 -- .../core/post_processing/merge_augs.py | 92 - .../pytorch/mmdet3d/core/utils/__init__.py | 10 - .../mmdet3d/core/utils/array_converter.py | 324 -- .../pytorch/mmdet3d/core/utils/gaussian.py | 158 - .../mmdet3d/core/visualizer/__init__.py | 5 - .../mmdet3d/core/visualizer/image_vis.py | 206 -- .../mmdet3d/core/visualizer/open3d_vis.py | 460 --- .../mmdet3d/core/visualizer/show_result.py | 291 -- .../pytorch/mmdet3d/core/voxel/__init__.py | 5 - .../pytorch/mmdet3d/core/voxel/builder.py | 16 - .../mmdet3d/core/voxel/voxel_generator.py | 280 -- .../pytorch/mmdet3d/datasets/__init__.py | 49 - .../pytorch/mmdet3d/datasets/builder.py | 47 - .../pytorch/mmdet3d/datasets/custom_3d.py | 448 --- .../pytorch/mmdet3d/datasets/custom_3d_seg.py | 465 --- .../mmdet3d/datasets/dataset_wrappers.py | 76 - .../mmdet3d/datasets/kitti2d_dataset.py | 241 -- .../pytorch/mmdet3d/datasets/kitti_dataset.py | 773 ----- .../mmdet3d/datasets/kitti_mono_dataset.py | 569 ---- .../pytorch/mmdet3d/datasets/lyft_dataset.py | 567 ---- .../mmdet3d/datasets/nuscenes_dataset.py | 654 ---- .../mmdet3d/datasets/nuscenes_mono_dataset.py | 840 
 .../internimage/pytorch/README.md                  |     5 +-
 968 files changed, 15 insertions(+), 167666 deletions(-)

 [Abridged diffstat: this patch deletes the vendored MMDetection, MMDetection3D, and MMSegmentation sources bundled under cv/3d_detection/paconv/pytorch/ (configs, datasets, models, ops, tools, data converters, requirements, setup files, and the dist_train.sh/train.py scripts); the only other change is the 5-line update to cv/3d_detection/internimage/pytorch/README.md listed above.]
cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/evaluate_semantic_instance.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/util_3d.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/seg_eval.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/waymo_utils/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/waymo_utils/prediction_kitti_to_waymo.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/points/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/points/base_points.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/points/cam_points.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/points/depth_points.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/points/lidar_points.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/box3d_nms.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/merge_augs.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/array_converter.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/gaussian.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/image_vis.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/open3d_vis.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/show_result.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/builder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/voxel_generator.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/builder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/custom_3d.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/custom_3d_seg.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/dataset_wrappers.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti2d_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti_mono_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/lyft_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/nuscenes_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/nuscenes_mono_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/compose.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/data_augment_utils.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/dbsampler.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/formating.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/loading.py delete mode 100644 
cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/test_time_aug.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/transforms_3d.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/s3dis_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/scannet_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/semantickitti_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/sunrgbd_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/utils.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/datasets/waymo_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/base_pointnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/dgcnn.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/dla.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/mink_resnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/multi_backbone.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/nostem_regnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/pointnet2_sa_msg.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/pointnet2_sa_ssg.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/second.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/builder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/decode_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/dgcnn_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/paconv_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/pointnet2_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/anchor3d_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/anchor_free_mono3d_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/base_conv_bbox_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/base_mono3d_dense_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/centerpoint_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/fcos_mono3d_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/free_anchor3d_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/groupfree3d_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/monoflex_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/parta2_rpn_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/pgd_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/point_rpn_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/shape_aware_head.py delete mode 
100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/smoke_mono3d_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/ssd_3d_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/train_mixins.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/vote_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/base.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/centerpoint.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/dynamic_voxelnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/fcos_mono3d.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/groupfree3dnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/h3dnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/imvotenet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/imvoxelnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/mvx_faster_rcnn.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/mvx_two_stage.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/parta2.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/point_rcnn.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/sassd.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/single_stage.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/single_stage_mono3d.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/smoke_mono3d.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/ssd3dnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/two_stage.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/votenet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/voxelnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/coord_transform.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/point_fusion.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/vote_fusion.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/axis_aligned_iou_loss.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/chamfer_distance.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/multibin_loss.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/paconv_regularization_loss.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/uncertain_smooth_l1_loss.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/pillar_scatter.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/sparse_encoder.py delete mode 100644 
cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/sparse_unet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/edge_fusion_module.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/transformer.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/vote_module.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/dla_neck.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/imvoxel_neck.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/pointnet2_fp_neck.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/second_fpn.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/base_3droi_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/h3d_bbox_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/parta2_bbox_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/point_rcnn_bbox_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/h3d_roi_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/pointwise_semantic_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/primitive_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/part_aggregation_roi_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/point_rcnn_roi_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/single_roiaware_extractor.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/single_roipoint_extractor.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/base.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/encoder_decoder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/clip_sigmoid.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/edge_indices.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/gen_keypoints.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/handle_objs.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/mlp.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/pillar_encoder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/utils.py delete mode 100644 
cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/voxel_encoder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_fa_module.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_fp_module.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_gf_module.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/norm.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/paconv.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/utils.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/builder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/paconv_sa_module.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/point_fp_module.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/point_sa_module.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/sparse_block.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/overwrite_spconv/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/overwrite_spconv/write_spconv2.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/utils/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/utils/collect_env.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/utils/compat_cfg.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/utils/logger.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/utils/misc.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/utils/setup_env.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmdet3d/version.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/apis/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/apis/inference.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/apis/test.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/apis/train.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/class_names.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/eval_hooks.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/metrics.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/seg/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/seg/builder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/base_pixel_sampler.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/ohem_pixel_sampler.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/utils/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/core/utils/misc.py delete mode 
100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/ade.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/builder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/chase_db1.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/cityscapes.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/coco_stuff.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/custom.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/dark_zurich.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/dataset_wrappers.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/drive.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/hrf.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/loveda.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/night_driving.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/pascal_context.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/compose.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/formating.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/formatting.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/loading.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/test_time_aug.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/transforms.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/stare.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/datasets/voc.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/bisenetv1.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/bisenetv2.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/cgnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/erfnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/fast_scnn.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/hrnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/icnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mit.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mobilenet_v2.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mobilenet_v3.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnest.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnet.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnext.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/stdc.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/swin.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/timm_backbone.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/twins.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/backbones/unet.py delete mode 100644 
cv/3d_detection/paconv/pytorch/mmseg/models/backbones/vit.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/builder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ann_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/apc_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/aspp_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/cascade_decode_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/cc_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/da_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/decode_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dm_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dnl_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dpt_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ema_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/enc_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/fcn_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/fpn_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/gc_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/isa_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/lraspp_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/nl_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ocr_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/point_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/psa_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/psp_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/segformer_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/sep_aspp_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/sep_fcn_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/setr_mla_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/setr_up_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/stdc_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/uper_head.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/losses/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/losses/accuracy.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/losses/cross_entropy_loss.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/losses/dice_loss.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/losses/focal_loss.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/losses/lovasz_loss.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/losses/utils.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/necks/__init__.py delete mode 100644 
cv/3d_detection/paconv/pytorch/mmseg/models/necks/fpn.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/necks/ic_neck.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/necks/jpu.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/necks/mla_neck.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/necks/multilevel_neck.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/base.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/cascade_encoder_decoder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/encoder_decoder.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/utils/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/utils/embed.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/utils/inverted_residual.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/utils/make_divisible.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/utils/res_layer.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/utils/se_layer.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/utils/self_attention_block.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/utils/shape_convert.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/models/utils/up_conv_block.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/ops/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/ops/encoding.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/ops/wrappers.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/utils/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/utils/collect_env.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/utils/logger.py delete mode 100644 cv/3d_detection/paconv/pytorch/mmseg/version.py delete mode 100644 cv/3d_detection/paconv/pytorch/requirements.txt delete mode 100644 cv/3d_detection/paconv/pytorch/requirements/build.txt delete mode 100644 cv/3d_detection/paconv/pytorch/requirements/docs.txt delete mode 100644 cv/3d_detection/paconv/pytorch/requirements/mminstall.txt delete mode 100644 cv/3d_detection/paconv/pytorch/requirements/optional.txt delete mode 100644 cv/3d_detection/paconv/pytorch/requirements/readthedocs.txt delete mode 100644 cv/3d_detection/paconv/pytorch/requirements/runtime.txt delete mode 100644 cv/3d_detection/paconv/pytorch/requirements/tests.txt delete mode 100644 cv/3d_detection/paconv/pytorch/setup.cfg delete mode 100755 cv/3d_detection/paconv/pytorch/setup.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/analysis_tools/analyze_logs.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/analysis_tools/benchmark.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/analysis_tools/get_flops.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/create_data.py delete mode 100755 cv/3d_detection/paconv/pytorch/tools/create_data.sh delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/__init__.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/create_gt_database.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/indoor_converter.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/kitti_converter.py delete mode 100644 
cv/3d_detection/paconv/pytorch/tools/data_converter/kitti_data_utils.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/lyft_converter.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/lyft_data_fixer.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/nuimage_converter.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/nuscenes_converter.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/s3dis_data_utils.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/scannet_data_utils.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/sunrgbd_data_utils.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/data_converter/waymo_converter.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/deployment/mmdet3d2torchserve.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/deployment/mmdet3d_handler.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/deployment/test_torchserver.py delete mode 100755 cv/3d_detection/paconv/pytorch/tools/dist_test.sh delete mode 100644 cv/3d_detection/paconv/pytorch/tools/misc/browse_dataset.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/misc/fuse_conv_bn.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/misc/print_config.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/misc/visualize_results.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/model_converters/convert_h3dnet_checkpoints.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/model_converters/convert_votenet_checkpoints.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/model_converters/publish_model.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/model_converters/regnet2mmdet.py delete mode 100755 cv/3d_detection/paconv/pytorch/tools/slurm_test.sh delete mode 100755 cv/3d_detection/paconv/pytorch/tools/slurm_train.sh delete mode 100644 cv/3d_detection/paconv/pytorch/tools/test.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/update_data_coords.py delete mode 100644 cv/3d_detection/paconv/pytorch/tools/update_data_coords.sh delete mode 100644 cv/3d_detection/paconv/pytorch/train.py diff --git a/cv/3d_detection/paconv/pytorch/.gitignore b/cv/3d_detection/paconv/pytorch/.gitignore deleted file mode 100644 index 7de6b802..00000000 --- a/cv/3d_detection/paconv/pytorch/.gitignore +++ /dev/null @@ -1,136 +0,0 @@ -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] -*$py.class -*.ipynb -mmcv/__pycache__/ - -# C extensions -*.so -*pyc - -# Distribution / packaging -.Python -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -.eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -wheels/ -*.egg-info/ -.installed.cfg -*.egg -MANIFEST - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. 
-*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -.tox/ -.coverage -.coverage.* -.cache -nosetests.xml -coverage.xml -*.cover -.hypothesis/ -.pytest_cache/ - -# Translations -*.mo -*.pot - -# Django stuff: -*.log -local_settings.py -db.sqlite3 - -# Flask stuff: -instance/ -.webassets-cache - -# Scrapy stuff: -.scrapy - -# Sphinx documentation -docs/en/_build/ -docs/zh_cn/_build/ - -# PyBuilder -target/ - -# Jupyter Notebook -.ipynb_checkpoints - -# pyenv -.python-version - -# celery beat schedule file -celerybeat-schedule - -# SageMath parsed files -*.sage.py - -# Environments -.env -.venv -env/ -venv/ -ENV/ -env.bak/ -venv.bak/ - -# Spyder project settings -.spyderproject -.spyproject - -# Rope project settings -.ropeproject - -# mkdocs documentation -/site - -# mypy -.mypy_cache/ - -# cython generated cpp -.vscode -.idea - -# custom -*.pkl -*.pkl.json -*.log.json -work_dirs/ -exps/ -*~ -mmdet3d/.mim - -# Pytorch -*.pth - -# demo -*.jpg -*.png -data/s3dis/Stanford3dDataset_v1.2_Aligned_Version/ -data/scannet/scans/ -data/sunrgbd/OFFICIAL_SUNRGBD/ -*.obj -*.ply - -# Waymo evaluation -mmdet3d/core/evaluation/waymo_utils/compute_detection_metrics_main diff --git a/cv/3d_detection/paconv/pytorch/LICENSE b/cv/3d_detection/paconv/pytorch/LICENSE deleted file mode 100644 index 04adf5cb..00000000 --- a/cv/3d_detection/paconv/pytorch/LICENSE +++ /dev/null @@ -1,203 +0,0 @@ -Copyright 2018-2019 Open-MMLab. All rights reserved. - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. 
For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright 2018-2019 Open-MMLab. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/cv/3d_detection/paconv/pytorch/README.md b/cv/3d_detection/paconv/pytorch/README.md index 7beb8745..c1b1da32 100644 --- a/cv/3d_detection/paconv/pytorch/README.md +++ b/cv/3d_detection/paconv/pytorch/README.md @@ -6,30 +6,16 @@ We introduce Position Adaptive Convolution (PAConv), a generic convolution opera ## Step 1: Installation ```bash -## install libGL -yum install mesa-libGL - -## install zlib -wget http://www.zlib.net/fossils/zlib-1.2.9.tar.gz -tar xvf zlib-1.2.9.tar.gz -cd zlib-1.2.9/ -./configure && make install -cd .. 
-rm -rf zlib-1.2.9.tar.gz zlib-1.2.9/
-
-## install urllib3
-pip3 install urllib3==1.26
-```
-
-```bash
-pip3 install -r requirements/runtime.txt
-```
-
-```bash
-#install mmcv v1.7.1
-cd deepsparkhub/toolbox/MMDetection/patch/mmcv/v1.7.1
-bash build_mmcv.sh
-bash install_mmcv.sh
+# Install libGL
+## CentOS
+yum install -y mesa-libGL
+## Ubuntu
+apt install -y libgl1-mesa-glx
+
+# install mmdetection3d
+git clone https://github.com/open-mmlab/mmdetection3d.git -b v1.4.0 --depth=1
+cd mmdetection3d
+pip install -v -e .
 ```
 
 ## Step 2: Preparing datasets
@@ -58,4 +44,4 @@ results | 0.9488 | 0.9838 | 0.8184 | 0.0000 | 0.1682 | 0.5836 | 0.7387 | 0.7782
 fps = batchsize*8/1batchtime = 65.3 samples/sec
 
 ## Reference
-- [mmdetection3d](https://github.com/open-mmlab/mmdetection3d/tree/v1.0.0rc3)
+- [mmdetection3d](https://github.com/open-mmlab/mmdetection3d/tree/v1.4.0)
diff --git a/cv/3d_detection/paconv/pytorch/configs/3dssd/3dssd_4x4_kitti-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/3dssd/3dssd_4x4_kitti-3d-car.py
deleted file mode 100644
index bcc8c822..00000000
--- a/cv/3d_detection/paconv/pytorch/configs/3dssd/3dssd_4x4_kitti-3d-car.py
+++ /dev/null
@@ -1,121 +0,0 @@
-_base_ = [
-    '../_base_/models/3dssd.py', '../_base_/datasets/kitti-3d-car.py',
-    '../_base_/default_runtime.py'
-]
-
-# dataset settings
-dataset_type = 'KittiDataset'
-data_root = 'data/kitti/'
-class_names = ['Car']
-point_cloud_range = [0, -40, -5, 70, 40, 3]
-input_modality = dict(use_lidar=True, use_camera=False)
-db_sampler = dict(
-    data_root=data_root,
-    info_path=data_root + 'kitti_dbinfos_train.pkl',
-    rate=1.0,
-    prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)),
-    classes=class_names,
-    sample_groups=dict(Car=15))
-
-file_client_args = dict(backend='disk')
-# Uncomment the following if use ceph or other file clients.
-# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient
-# for more details.
-# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://kitti_data/')) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectSample', db_sampler=db_sampler), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0], - global_rot_range=[0.0, 0.0], - rot_range=[-1.0471975511965976, 1.0471975511965976]), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.9, 1.1]), - # 3DSSD can get a higher performance without this transform - # dict(type='BackgroundPointsFilter', bbox_enlarge_range=(0.5, 2.0, 0.5)), - dict(type='PointSample', num_points=16384), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] - -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointSample', num_points=16384), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict(dataset=dict(pipeline=train_pipeline)), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) - -evaluation = dict(interval=2) - -# model settings -model = dict( - bbox_head=dict( - num_classes=1, - bbox_coder=dict( - type='AnchorFreeBBoxCoder', num_dir_bins=12, with_rot=True))) - -# optimizer -lr = 0.002 # max learning rate -optimizer = dict(type='AdamW', lr=lr, weight_decay=0) -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[45, 60]) -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) - -# yapf:disable -log_config = dict( - interval=30, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable diff --git a/cv/3d_detection/paconv/pytorch/configs/3dssd/README.md b/cv/3d_detection/paconv/pytorch/configs/3dssd/README.md deleted file mode 100644 index 4feb6d76..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/3dssd/README.md +++ /dev/null @@ -1,45 +0,0 @@ -# 3DSSD: Point-based 3D Single Stage Object Detector - -> [3DSSD: Point-based 3D Single Stage Object Detector](https://arxiv.org/abs/2002.10187) - - - -## Abstract - -Currently, there have been many kinds of voxel-based 3D single stage detectors, while point-based single stage methods are still underexplored. In this paper, we first present a lightweight and effective point-based 3D single stage object detector, named 3DSSD, achieving a good balance between accuracy and efficiency. 
In this paradigm, all upsampling layers and refinement stage, which are indispensable in all existing point-based methods, are abandoned to reduce the large computation cost. We novelly propose a fusion sampling strategy in downsampling process to make detection on less representative points feasible. A delicate box prediction network including a candidate generation layer, an anchor-free regression head with a 3D center-ness assignment strategy is designed to meet with our demand of accuracy and speed. Our paradigm is an elegant single stage anchor-free framework, showing great superiority to other existing methods. We evaluate 3DSSD on widely used KITTI dataset and more challenging nuScenes dataset. Our method outperforms all state-of-the-art voxel-based single stage methods by a large margin, and has comparable performance to two stage point-based methods as well, with inference speed more than 25 FPS, 2x faster than former state-of-the-art point-based methods. - -
- -
-
-## Introduction
-
-We implement 3DSSD and provide the results and checkpoints on the KITTI dataset.
-
-Some settings in our implementation differ from the [official implementation](https://github.com/Jia-Research-Lab/3DSSD), which brings marginal differences to the performance on the KITTI dataset in our experiments. To simplify and unify the models of our implementation, we skip these settings in our models. The differences are listed below:
-
-1. We keep the scenes without any object, while the official code skips these scenes during training. In the official implementation, only 3229 and 3394 samples are used as the training and validation sets, respectively. In our implementation, we keep using 3712 and 3769 samples as the training and validation sets, respectively, the same as those used for all the other models in our implementation on the KITTI dataset.
-2. We do not modify the decay of `batch normalization` during training.
-3. While using [`DataBaseSampler`](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/pipelines/dbsampler.py#L80) for data augmentation, the official code uses road planes as a reference to place the sampled objects, while we do not.
-4. We perform detection using LiDAR coordinates, while the official code uses camera coordinates.
-
-## Results and models
-
-### KITTI
-
-| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download |
-| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
-| [PointNet2SAMSG](./3dssd_4x4_kitti-3d-car.py) | Car | 72e | 4.7 |  | 78.58 (81.27)<sup>1</sup> | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/3dssd/3dssd_4x4_kitti-3d-car/3dssd_4x4_kitti-3d-car_20210818_203828-b89c8fc4.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/3dssd/3dssd_4x4_kitti-3d-car/3dssd_4x4_kitti-3d-car_20210818_203828.log.json) |
-
-\[1\]: We report two different 3D object detection results here. 78.58 mAP is evaluated by our evaluation code and 81.27 mAP is evaluated by the official development kit (the same kit used in the paper and the official code of 3DSSD). We found that the commonly used Python implementation of [`rotate_iou`](https://github.com/traveller59/second.pytorch/blob/e42e4a0e17262ab7d180ee96a0a36427f2c20a44/second/core/non_max_suppression/nms_gpu.py#L605), which is used in our KITTI dataset evaluation, differs from the official implementation used by the [KITTI benchmark](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d).
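For intuition about where such evaluation gaps can come from: BEV rotated IoU is computed by intersecting two rotated rectangles, and small differences in polygon clipping and numerical conventions between implementations are enough to shift mAP by a couple of points, as in the 78.58 vs. 81.27 gap above. Below is a rough, self-contained sketch of rotated IoU using `shapely` polygon intersection; it is illustrative only and is not the evaluation code referenced above.

```python
import math
from shapely.geometry import Polygon

def bev_box_to_polygon(cx, cy, w, l, yaw):
    """Convert a BEV box (center, width, length, heading) to a shapely polygon."""
    corners = [(dx * l / 2, dy * w / 2) for dx, dy in
               [(1, 1), (1, -1), (-1, -1), (-1, 1)]]
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    # Rotate each corner by yaw, then translate to the box center.
    return Polygon([(cx + x * cos_y - y * sin_y, cy + x * sin_y + y * cos_y)
                    for x, y in corners])

def rotated_iou(box_a, box_b):
    pa, pb = bev_box_to_polygon(*box_a), bev_box_to_polygon(*box_b)
    inter = pa.intersection(pb).area
    union = pa.area + pb.area - inter
    return inter / union if union > 0 else 0.0

# Two car-sized boxes, one rotated by 30 degrees relative to the other.
print(rotated_iou((0.0, 0.0, 1.6, 3.9, 0.0), (0.5, 0.2, 1.6, 3.9, math.pi / 6)))
```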
- -## Citation - -```latex -@inproceedings{yang20203dssd, - author = {Zetong Yang and Yanan Sun and Shu Liu and Jiaya Jia}, - title = {3DSSD: Point-based 3D Single Stage Object Detector}, - booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, - year = {2020} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/3dssd/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/3dssd/metafile.yml deleted file mode 100644 index f6dbb3c4..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/3dssd/metafile.yml +++ /dev/null @@ -1,29 +0,0 @@ -Collections: - - Name: 3DSSD - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 4x TITAN X - Architecture: - - PointNet++ - Paper: - URL: https://arxiv.org/abs/2002.10187 - Title: '3DSSD: Point-based 3D Single Stage Object Detector' - README: configs/3dssd/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/ssd3dnet.py#L7 - Version: v0.6.0 - -Models: - - Name: 3dssd_4x4_kitti-3d-car - In Collection: 3DSSD - Config: configs/3dssd/3dssd_4x4_kitti-3d-car.py - Metadata: - Training Memory (GB): 4.7 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 78.58 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/3dssd/3dssd_4x4_kitti-3d-car/3dssd_4x4_kitti-3d-car_20210818_203828-b89c8fc4.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/coco_instance.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/coco_instance.py deleted file mode 100644 index f6ea4f45..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/coco_instance.py +++ /dev/null @@ -1,48 +0,0 @@ -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline)) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/kitti-3d-3class.py deleted file mode 100644 index 1822af42..00000000 --- 
a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/kitti-3d-3class.py +++ /dev/null @@ -1,140 +0,0 @@ -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=10, Cyclist=10)), - classes=class_names, - sample_groups=dict(Car=12, Pedestrian=6, Cyclist=6)) - -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. -# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://kitti_data/')) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0.5], - global_rot_range=[0.0, 0.0], - rot_range=[-0.78539816, 0.78539816]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=6, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. 
- box_type_3d='LiDAR')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) - -evaluation = dict(interval=1, pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/kitti-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/kitti-3d-car.py deleted file mode 100644 index 1e81226e..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/kitti-3d-car.py +++ /dev/null @@ -1,138 +0,0 @@ -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - classes=class_names, - sample_groups=dict(Car=15)) - -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. -# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://kitti_data/')) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0.5], - global_rot_range=[0.0, 0.0], - rot_range=[-0.78539816, 0.78539816]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=6, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='LiDAR')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) - -evaluation = dict(interval=1, pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/kitti-mono3d.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/kitti-mono3d.py deleted file mode 100644 index 5817dc70..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/kitti-mono3d.py +++ /dev/null @@ -1,92 +0,0 @@ -dataset_type = 'KittiMonoDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -input_modality = dict(use_lidar=False, use_camera=True) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=False, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='Resize', img_scale=(1242, 375), keep_ratio=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'gt_bboxes_3d', 'gt_labels_3d', - 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - img_scale=(1242, 375), - flip=False, - transforms=[ - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train_mono3d.coco.json', - info_file=data_root + 'kitti_infos_train.pkl', - img_prefix=data_root, - classes=class_names, - pipeline=train_pipeline, - modality=input_modality, - test_mode=False, - box_type_3d='Camera'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val_mono3d.coco.json', - info_file=data_root + 'kitti_infos_val.pkl', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline, - modality=input_modality, - test_mode=True, - box_type_3d='Camera'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val_mono3d.coco.json', - info_file=data_root + 'kitti_infos_val.pkl', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline, - modality=input_modality, - test_mode=True, - box_type_3d='Camera')) -evaluation = dict(interval=2) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/lyft-3d.py deleted file mode 100644 index 71baff04..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/lyft-3d.py +++ /dev/null @@ -1,136 +0,0 @@ -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-80, -80, -5, 80, 80, 3] -# For Lyft we usually do 9-class detection -class_names = [ - 'car', 'truck', 'bus', 'emergency_vehicle', 'other_vehicle', 'motorcycle', - 'bicycle', 'pedestrian', 'animal' -] -dataset_type = 'LyftDataset' -data_root = 'data/lyft/' -# Input modality for Lyft dataset, this is consistent with the submission -# format which requires the information in input_modality. -input_modality = dict( - use_lidar=True, - use_camera=False, - use_radar=False, - use_map=False, - use_external=False) -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/lyft/': 's3://lyft/lyft/', -# 'data/lyft/': 's3://lyft/lyft/' -# })) -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - modality=input_modality, - test_mode=False), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_test.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True)) -# For Lyft dataset, we usually evaluate the model at the end of training. -# Since the models are trained by 24 epochs by default, we set evaluation -# interval to be 24. Please change the interval accordingly if you do not -# use a default schedule. 
-evaluation = dict(interval=24, pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/nuim_instance.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/nuim_instance.py deleted file mode 100644 index 82fce56b..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/nuim_instance.py +++ /dev/null @@ -1,59 +0,0 @@ -dataset_type = 'CocoDataset' -data_root = 'data/nuimages/' -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1280, 720), (1920, 1080)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1600, 900), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/nuimages_v1.0-train.json', - img_prefix=data_root, - classes=class_names, - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/nuimages_v1.0-val.json', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/nuimages_v1.0-val.json', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline)) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/nus-3d.py deleted file mode 100644 index 15481717..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/nus-3d.py +++ /dev/null @@ -1,142 +0,0 @@ -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-50, -50, -5, 50, 50, 3] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -dataset_type = 'NuScenesDataset' -data_root = 'data/nuscenes/' -# Input modality for nuScenes dataset, this is consistent with the submission -# format which requires the information in input_modality. -input_modality = dict( - use_lidar=True, - use_camera=False, - use_radar=False, - use_map=False, - use_external=False) -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/nuscenes/': 's3://nuscenes/nuscenes/', -# 'data/nuscenes/': 's3://nuscenes/nuscenes/' -# })) -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - modality=input_modality, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='LiDAR'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True, - box_type_3d='LiDAR')) -# For nuScenes dataset, we usually evaluate the model at the end of training. -# Since the models are trained by 24 epochs by default, we set evaluation -# interval to be 24. Please change the interval accordingly if you do not -# use a default schedule. 
-evaluation = dict(interval=24, pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/nus-mono3d.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/nus-mono3d.py deleted file mode 100644 index 5decdacd..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/nus-mono3d.py +++ /dev/null @@ -1,100 +0,0 @@ -dataset_type = 'NuScenesMonoDataset' -data_root = 'data/nuscenes/' -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -# Input modality for nuScenes dataset, this is consistent with the submission -# format which requires the information in input_modality. -input_modality = dict( - use_lidar=False, - use_camera=True, - use_radar=False, - use_map=False, - use_external=False) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=True, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='Resize', img_scale=(1600, 900), keep_ratio=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'attr_labels', 'gt_bboxes_3d', - 'gt_labels_3d', 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - scale_factor=1.0, - flip=False, - transforms=[ - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_train_mono3d.coco.json', - img_prefix=data_root, - classes=class_names, - pipeline=train_pipeline, - modality=input_modality, - test_mode=False, - box_type_3d='Camera'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_val_mono3d.coco.json', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline, - modality=input_modality, - test_mode=True, - box_type_3d='Camera'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_val_mono3d.coco.json', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline, - modality=input_modality, - test_mode=True, - box_type_3d='Camera')) -evaluation = dict(interval=2) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/range100_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/range100_lyft-3d.py deleted file mode 100644 index efa63ea3..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/range100_lyft-3d.py +++ /dev/null @@ -1,136 +0,0 @@ -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-100, -100, -5, 100, 100, 3] -# For Lyft we usually do 9-class detection -class_names = [ - 'car', 'truck', 'bus', 'emergency_vehicle', 'other_vehicle', 'motorcycle', - 'bicycle', 'pedestrian', 'animal' -] -dataset_type = 'LyftDataset' -data_root = 'data/lyft/' -# Input modality for Lyft dataset, this is consistent with the submission -# format which requires the information in input_modality. -input_modality = dict( - use_lidar=True, - use_camera=False, - use_radar=False, - use_map=False, - use_external=False) -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/lyft/': 's3://lyft/lyft/', -# 'data/lyft/': 's3://lyft/lyft/' -# })) -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - modality=input_modality, - test_mode=False), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_test.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True)) -# For Lyft dataset, we usually evaluate the model at the end of training. -# Since the models are trained by 24 epochs by default, we set evaluation -# interval to be 24. Please change the interval accordingly if you do not -# use a default schedule. 
-evaluation = dict(interval=24, pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/s3dis-3d-5class.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/s3dis-3d-5class.py deleted file mode 100644 index 2422766f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/s3dis-3d-5class.py +++ /dev/null @@ -1,114 +0,0 @@ -# dataset settings -dataset_type = 'S3DISDataset' -data_root = './data/s3dis/' -class_names = ('table', 'chair', 'sofa', 'bookcase', 'board') -train_area = [1, 2, 3, 4, 6] -test_area = 5 - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='PointSample', num_points=40000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - # following ScanNet dataset the rotation range is 5 degrees - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0], - shift_height=True), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=40000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type='ConcatDataset', - datasets=[ - dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + f's3dis_infos_Area_{i}.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - box_type_3d='Depth') for i in train_area - ], - separate_eval=False)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + f's3dis_infos_Area_{test_area}.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + f's3dis_infos_Area_{test_area}.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -evaluation = dict(pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/s3dis_seg-3d-13class.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/s3dis_seg-3d-13class.py deleted file mode 100644 index cad81c61..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/s3dis_seg-3d-13class.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# dataset settings -dataset_type = 'S3DISSegDataset' -data_root = 'data/s3dis/' -class_names = ('ceiling', 'floor', 'wall', 'beam', 'column', 'window', 'door', - 'table', 'chair', 'sofa', 'bookcase', 'board', 'clutter') -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/s3dis/': -# 's3://openmmlab/datasets/detection3d/s3dis_processed/', -# 'data/s3dis/': -# 's3://openmmlab/datasets/detection3d/s3dis_processed/' -# })) -num_points = 4096 -train_area = [1, 2, 3, 4, 6] -test_area = 5 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - file_client_args=file_client_args, - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - file_client_args=file_client_args, - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=tuple(range(len(class_names))), - max_cat_id=13), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.0, - ignore_index=len(class_names), - use_normalized_coord=True, - enlarge_size=0.2, - min_unique_num=None), - dict(type='NormalizePointsColor', color_mean=None), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - file_client_args=file_client_args, - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict(type='NormalizePointsColor', color_mean=None), - dict( - # a wrapper in order to successfully call test function - # actually we don't perform test-time-aug - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.0, - flip_ratio_bev_vertical=0.0), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -# we need to load gt seg_mask! 
-eval_pipeline = [ - dict( - type='LoadPointsFromFile', - file_client_args=file_client_args, - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - file_client_args=file_client_args, - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=tuple(range(len(class_names))), - max_cat_id=13), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - # train on area 1, 2, 3, 4, 6 - # test on area 5 - train=dict( - type=dataset_type, - data_root=data_root, - ann_files=[ - data_root + f's3dis_infos_Area_{i}.pkl' for i in train_area - ], - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - ignore_index=len(class_names), - scene_idxs=[ - data_root + f'seg_info/Area_{i}_resampled_scene_idxs.npy' - for i in train_area - ], - file_client_args=file_client_args), - val=dict( - type=dataset_type, - data_root=data_root, - ann_files=data_root + f's3dis_infos_Area_{test_area}.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names), - scene_idxs=data_root + - f'seg_info/Area_{test_area}_resampled_scene_idxs.npy', - file_client_args=file_client_args), - test=dict( - type=dataset_type, - data_root=data_root, - ann_files=data_root + f's3dis_infos_Area_{test_area}.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names), - file_client_args=file_client_args)) - -evaluation = dict(pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/scannet-3d-18class.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/scannet-3d-18class.py deleted file mode 100644 index 93da1e58..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/scannet-3d-18class.py +++ /dev/null @@ -1,128 +0,0 @@ -# dataset settings -dataset_type = 'ScanNetDataset' -data_root = './data/scannet/' -class_names = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - with_mask_3d=True, - with_seg_3d=True), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='PointSegClassMapping', - valid_cat_ids=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39), - max_cat_id=40), - dict(type='PointSample', num_points=40000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0], - shift_height=True), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'points', 'gt_bboxes_3d', 'gt_labels_3d', 'pts_semantic_mask', - 'pts_instance_mask' - ]) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 
800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=40000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -evaluation = dict(pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/scannet_seg-3d-20class.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/scannet_seg-3d-20class.py deleted file mode 100644 index cf73b09c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/scannet_seg-3d-20class.py +++ /dev/null @@ -1,132 +0,0 @@ -# dataset settings -dataset_type = 'ScanNetSegDataset' -data_root = './data/scannet/' -class_names = ('wall', 'floor', 'cabinet', 'bed', 'chair', 'sofa', 'table', - 'door', 'window', 'bookshelf', 'picture', 'counter', 'desk', - 'curtain', 'refrigerator', 'showercurtrain', 'toilet', 'sink', - 'bathtub', 'otherfurniture') -num_points = 8192 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.5, - ignore_index=len(class_names), - use_normalized_coord=False, - enlarge_size=0.2, - min_unique_num=None), - dict(type='NormalizePointsColor', color_mean=None), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict(type='NormalizePointsColor', color_mean=None), - dict( - # a wrapper in order 
to successfully call test function - # actually we don't perform test-time-aug - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.0, - flip_ratio_bev_vertical=0.0), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -# we need to load gt seg_mask! -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - ignore_index=len(class_names), - scene_idxs=data_root + 'seg_info/train_resampled_scene_idxs.npy'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names)), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names))) - -evaluation = dict(pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/sunrgbd-3d-10class.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/sunrgbd-3d-10class.py deleted file mode 100644 index 7121b75b..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/sunrgbd-3d-10class.py +++ /dev/null @@ -1,107 +0,0 @@ -dataset_type = 'SUNRGBDDataset' -data_root = 'data/sunrgbd/' -class_names = ('bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser', - 'night_stand', 'bookshelf', 'bathtub') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='LoadAnnotations3D'), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - ), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.523599, 0.523599], - scale_ratio_range=[0.85, 1.15], - shift_height=True), - dict(type='PointSample', num_points=20000), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - 
translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - ), - dict(type='PointSample', num_points=20000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=16, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'sunrgbd_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - filter_empty_gt=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'sunrgbd_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'sunrgbd_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -evaluation = dict(pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/waymoD5-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/waymoD5-3d-3class.py deleted file mode 100644 index e3937fb0..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/waymoD5-3d-3class.py +++ /dev/null @@ -1,145 +0,0 @@ -# dataset settings -# D5 in the config name means the whole dataset is divided into 5 folds -# We only use one fold for efficient experiments -dataset_type = 'WaymoDataset' -data_root = 'data/waymo/kitti_format/' -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://waymo_data/')) - -class_names = ['Car', 'Pedestrian', 'Cyclist'] -point_cloud_range = [-74.88, -74.88, -2, 74.88, 74.88, 4] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'waymo_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=10, Cyclist=10)), - classes=class_names, - sample_groups=dict(Car=15, Pedestrian=10, Cyclist=10), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args)) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_train.pkl', - split='training', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. 
- box_type_3d='LiDAR', - # load one frame every five frames - load_interval=5)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) - -evaluation = dict(interval=24, pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/waymoD5-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/waymoD5-3d-car.py deleted file mode 100644 index e119e5a6..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/datasets/waymoD5-3d-car.py +++ /dev/null @@ -1,143 +0,0 @@ -# dataset settings -# D5 in the config name means the whole dataset is divided into 5 folds -# We only use one fold for efficient experiments -dataset_type = 'WaymoDataset' -data_root = 'data/waymo/kitti_format/' -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. -# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://waymo_data/')) - -class_names = ['Car'] -point_cloud_range = [-74.88, -74.88, -2, 74.88, 74.88, 4] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'waymo_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - classes=class_names, - sample_groups=dict(Car=15), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args)) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function 
consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_train.pkl', - split='training', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='LiDAR', - # load one frame every five frames - load_interval=5)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) - -evaluation = dict(interval=24, pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/default_runtime.py b/cv/3d_detection/paconv/pytorch/configs/_base_/default_runtime.py deleted file mode 100644 index 5fc198bb..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/default_runtime.py +++ /dev/null @@ -1,23 +0,0 @@ -checkpoint_config = dict(interval=1) -# yapf:disable push -# By default we use textlogger hook and tensorboard -# For more loggers see -# https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.LoggerHook -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -dist_params = dict(backend='nccl') -log_level = 'INFO' -work_dir = None -load_from = None -resume_from = None -workflow = [('train', 1)] - -# disable opencv multithreading to avoid system being overloaded -opencv_num_threads = 0 -# set multi-process start method as `fork` to speed up the training -mp_start_method = 'fork' diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/3dssd.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/3dssd.py deleted file mode 100644 index 55344c7d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/3dssd.py +++ /dev/null @@ -1,77 +0,0 @@ -model = dict( - type='SSD3DNet', - backbone=dict( - type='PointNet2SAMSG', - in_channels=4, - num_points=(4096, 512, (256, 256)), - radii=((0.2, 0.4, 0.8), (0.4, 0.8, 1.6), (1.6, 3.2, 4.8)), - num_samples=((32, 32, 64), (32, 32, 64), (32, 32, 32)), - sa_channels=(((16, 16, 32), (16, 16, 32), (32, 32, 64)), - ((64, 64, 128), (64, 64, 128), (64, 96, 128)), - ((128, 128, 256), (128, 192, 256), (128, 256, 256))), - aggregation_channels=(64, 128, 256), - fps_mods=(('D-FPS'), ('FS'), ('F-FPS', 'D-FPS')), - fps_sample_range_lists=((-1), (-1), (512, -1)), - norm_cfg=dict(type='BN2d', eps=1e-3, momentum=0.1), - sa_cfg=dict( - type='PointSAModuleMSG', - pool_mod='max', - use_xyz=True, - normalize_xyz=False)), - bbox_head=dict( - type='SSD3DHead', - in_channels=256, - vote_module_cfg=dict( - in_channels=256, - num_points=256, - gt_per_seed=1, - conv_channels=(128, ), - conv_cfg=dict(type='Conv1d'), - 
norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.1), - with_res_feat=False, - vote_xyz_range=(3.0, 3.0, 2.0)), - vote_aggregation_cfg=dict( - type='PointSAModuleMSG', - num_point=256, - radii=(4.8, 6.4), - sample_nums=(16, 32), - mlp_channels=((256, 256, 256, 512), (256, 256, 512, 1024)), - norm_cfg=dict(type='BN2d', eps=1e-3, momentum=0.1), - use_xyz=True, - normalize_xyz=False, - bias=True), - pred_layer_cfg=dict( - in_channels=1536, - shared_conv_channels=(512, 128), - cls_conv_channels=(128, ), - reg_conv_channels=(128, ), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.1), - bias=True), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.1), - objectness_loss=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=1.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=1.0), - corner_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=1.0), - vote_loss=dict(type='SmoothL1Loss', reduction='sum', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - sample_mod='spec', pos_distance_thr=10.0, expand_dims_length=0.05), - test_cfg=dict( - nms_cfg=dict(type='nms', iou_thr=0.1), - sample_mod='spec', - score_thr=0.0, - per_class_proposal=True, - max_output_num=100)) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py deleted file mode 100644 index cafb530c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py +++ /dev/null @@ -1,198 +0,0 @@ -# model settings -model = dict( - type='CascadeRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)), - roi_head=dict( - type='CascadeRoIHead', - num_stages=3, - stage_loss_weights=[1, 0.5, 0.25], - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - 
fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - nms_post=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=[ - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.6, - neg_iou_thr=0.6, - min_pos_iou=0.6, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.7, - min_pos_iou=0.7, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False) - ]), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/centerpoint_01voxel_second_secfpn_nus.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/centerpoint_01voxel_second_secfpn_nus.py deleted file mode 100644 index efdce59c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/centerpoint_01voxel_second_secfpn_nus.py +++ /dev/null @@ -1,83 +0,0 @@ -voxel_size = [0.1, 0.1, 0.2] -model = dict( - type='CenterPoint', - pts_voxel_layer=dict( - max_num_points=10, voxel_size=voxel_size, max_voxels=(90000, 120000)), - pts_voxel_encoder=dict(type='HardSimpleVFE', num_features=5), - 
pts_middle_encoder=dict( - type='SparseEncoder', - in_channels=5, - sparse_shape=[41, 1024, 1024], - output_channels=128, - order=('conv', 'norm', 'act'), - encoder_channels=((16, 16, 32), (32, 32, 64), (64, 64, 128), (128, - 128)), - encoder_paddings=((0, 0, 1), (0, 0, 1), (0, 0, [0, 1, 1]), (0, 0)), - block_type='basicblock'), - pts_backbone=dict( - type='SECOND', - in_channels=256, - out_channels=[128, 256], - layer_nums=[5, 5], - layer_strides=[1, 2], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - conv_cfg=dict(type='Conv2d', bias=False)), - pts_neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - out_channels=[256, 256], - upsample_strides=[1, 2], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - upsample_cfg=dict(type='deconv', bias=False), - use_conv_for_no_stride=True), - pts_bbox_head=dict( - type='CenterHead', - in_channels=sum([256, 256]), - tasks=[ - dict(num_class=1, class_names=['car']), - dict(num_class=2, class_names=['truck', 'construction_vehicle']), - dict(num_class=2, class_names=['bus', 'trailer']), - dict(num_class=1, class_names=['barrier']), - dict(num_class=2, class_names=['motorcycle', 'bicycle']), - dict(num_class=2, class_names=['pedestrian', 'traffic_cone']), - ], - common_heads=dict( - reg=(2, 2), height=(1, 2), dim=(3, 2), rot=(2, 2), vel=(2, 2)), - share_conv_channel=64, - bbox_coder=dict( - type='CenterPointBBoxCoder', - post_center_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0], - max_num=500, - score_threshold=0.1, - out_size_factor=8, - voxel_size=voxel_size[:2], - code_size=9), - separate_head=dict( - type='SeparateHead', init_bias=-2.19, final_kernel=3), - loss_cls=dict(type='GaussianFocalLoss', reduction='mean'), - loss_bbox=dict(type='L1Loss', reduction='mean', loss_weight=0.25), - norm_bbox=True), - # model training and testing settings - train_cfg=dict( - pts=dict( - grid_size=[1024, 1024, 40], - voxel_size=voxel_size, - out_size_factor=8, - dense_reg=1, - gaussian_overlap=0.1, - max_objs=500, - min_radius=2, - code_weights=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2])), - test_cfg=dict( - pts=dict( - post_center_limit_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0], - max_per_img=500, - max_pool_nms=False, - min_radius=[4, 12, 10, 1, 0.85, 0.175], - score_threshold=0.1, - out_size_factor=8, - voxel_size=voxel_size[:2], - nms_type='rotate', - pre_max_size=1000, - post_max_size=83, - nms_thr=0.2))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/centerpoint_02pillar_second_secfpn_nus.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/centerpoint_02pillar_second_secfpn_nus.py deleted file mode 100644 index 311d7637..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/centerpoint_02pillar_second_secfpn_nus.py +++ /dev/null @@ -1,83 +0,0 @@ -voxel_size = [0.2, 0.2, 8] -model = dict( - type='CenterPoint', - pts_voxel_layer=dict( - max_num_points=20, voxel_size=voxel_size, max_voxels=(30000, 40000)), - pts_voxel_encoder=dict( - type='PillarFeatureNet', - in_channels=5, - feat_channels=[64], - with_distance=False, - voxel_size=(0.2, 0.2, 8), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - legacy=False), - pts_middle_encoder=dict( - type='PointPillarsScatter', in_channels=64, output_shape=(512, 512)), - pts_backbone=dict( - type='SECOND', - in_channels=64, - out_channels=[64, 128, 256], - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - conv_cfg=dict(type='Conv2d', bias=False)), - pts_neck=dict( - type='SECONDFPN', - 
in_channels=[64, 128, 256], - out_channels=[128, 128, 128], - upsample_strides=[0.5, 1, 2], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - upsample_cfg=dict(type='deconv', bias=False), - use_conv_for_no_stride=True), - pts_bbox_head=dict( - type='CenterHead', - in_channels=sum([128, 128, 128]), - tasks=[ - dict(num_class=1, class_names=['car']), - dict(num_class=2, class_names=['truck', 'construction_vehicle']), - dict(num_class=2, class_names=['bus', 'trailer']), - dict(num_class=1, class_names=['barrier']), - dict(num_class=2, class_names=['motorcycle', 'bicycle']), - dict(num_class=2, class_names=['pedestrian', 'traffic_cone']), - ], - common_heads=dict( - reg=(2, 2), height=(1, 2), dim=(3, 2), rot=(2, 2), vel=(2, 2)), - share_conv_channel=64, - bbox_coder=dict( - type='CenterPointBBoxCoder', - post_center_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0], - max_num=500, - score_threshold=0.1, - out_size_factor=4, - voxel_size=voxel_size[:2], - code_size=9), - separate_head=dict( - type='SeparateHead', init_bias=-2.19, final_kernel=3), - loss_cls=dict(type='GaussianFocalLoss', reduction='mean'), - loss_bbox=dict(type='L1Loss', reduction='mean', loss_weight=0.25), - norm_bbox=True), - # model training and testing settings - train_cfg=dict( - pts=dict( - grid_size=[512, 512, 1], - voxel_size=voxel_size, - out_size_factor=4, - dense_reg=1, - gaussian_overlap=0.1, - max_objs=500, - min_radius=2, - code_weights=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2])), - test_cfg=dict( - pts=dict( - post_center_limit_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0], - max_per_img=500, - max_pool_nms=False, - min_radius=[4, 12, 10, 1, 0.85, 0.175], - score_threshold=0.1, - pc_range=[-51.2, -51.2], - out_size_factor=4, - voxel_size=voxel_size[:2], - nms_type='rotate', - pre_max_size=1000, - post_max_size=83, - nms_thr=0.2))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/dgcnn.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/dgcnn.py deleted file mode 100644 index 61e72726..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/dgcnn.py +++ /dev/null @@ -1,28 +0,0 @@ -# model settings -model = dict( - type='EncoderDecoder3D', - backbone=dict( - type='DGCNNBackbone', - in_channels=9, # [xyz, rgb, normal_xyz], modified with dataset - num_samples=(20, 20, 20), - knn_modes=('D-KNN', 'F-KNN', 'F-KNN'), - radius=(None, None, None), - gf_channels=((64, 64), (64, 64), (64, )), - fa_channels=(1024, ), - act_cfg=dict(type='LeakyReLU', negative_slope=0.2)), - decode_head=dict( - type='DGCNNHead', - fp_channels=(1216, 512), - channels=256, - dropout_ratio=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='LeakyReLU', negative_slope=0.2), - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - class_weight=None, # modified with dataset - loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide')) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/fcos3d.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/fcos3d.py deleted file mode 100644 index be83001d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/fcos3d.py +++ /dev/null @@ -1,78 +0,0 @@ -model = dict( - type='FCOSMono3D', - backbone=dict( - type='ResNet', - depth=101, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe', - init_cfg=dict( - type='Pretrained', - 
checkpoint='open-mmlab://detectron2/resnet101_caffe')), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs='on_output', - num_outs=5, - relu_before_extra_convs=True), - bbox_head=dict( - type='FCOSMono3DHead', - num_classes=10, - in_channels=256, - stacked_convs=2, - feat_channels=256, - use_direction_classifier=True, - diff_rad_by_sin=True, - pred_attrs=True, - pred_velo=True, - dir_offset=0.7854, # pi/4 - dir_limit_offset=0, - strides=[8, 16, 32, 64, 128], - group_reg_dims=(2, 1, 3, 1, 2), # offset, depth, size, rot, velo - cls_branch=(256, ), - reg_branch=( - (256, ), # offset - (256, ), # depth - (256, ), # size - (256, ), # rot - () # velo - ), - dir_branch=(256, ), - attr_branch=(256, ), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_attr=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_centerness=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - bbox_coder=dict(type='FCOS3DBBoxCoder', code_size=9), - norm_on_bbox=True, - centerness_on_reg=True, - center_sampling=True, - conv_bias=True, - dcn_on_last_conv=True), - train_cfg=dict( - allowed_border=0, - code_weight=[1.0, 1.0, 0.2, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05], - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_pre=1000, - nms_thr=0.8, - score_thr=0.05, - min_bbox_size=0, - max_per_img=200)) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/groupfree3d.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/groupfree3d.py deleted file mode 100644 index 077d0496..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/groupfree3d.py +++ /dev/null @@ -1,71 +0,0 @@ -model = dict( - type='GroupFree3DNet', - backbone=dict( - type='PointNet2SASSG', - in_channels=3, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((64, 64, 128), (128, 128, 256), (128, 128, 256), - (128, 128, 256)), - fp_channels=((256, 256), (256, 288)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True)), - bbox_head=dict( - type='GroupFree3DHead', - in_channels=288, - num_decoder_layers=6, - num_proposal=256, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=dict( - type='GroupFree3DMHA', - embed_dims=288, - num_heads=8, - attn_drop=0.1, - dropout_layer=dict(type='Dropout', drop_prob=0.1)), - ffn_cfgs=dict( - embed_dims=288, - feedforward_channels=2048, - ffn_drop=0.1, - act_cfg=dict(type='ReLU', inplace=True)), - operation_order=('self_attn', 'norm', 'cross_attn', 'norm', 'ffn', - 'norm')), - pred_layer_cfg=dict( - in_channels=288, shared_conv_channels=(288, 288), bias=True), - sampling_objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=8.0), - objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', 
reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', beta=1.0, reduction='sum', loss_weight=10.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(sample_mod='kps'), - test_cfg=dict( - sample_mod='kps', - nms_thr=0.25, - score_thr=0.0, - per_class_proposal=True, - prediction_stages='last')) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/h3dnet.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/h3dnet.py deleted file mode 100644 index 76056674..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/h3dnet.py +++ /dev/null @@ -1,341 +0,0 @@ -primitive_z_cfg = dict( - type='PrimitiveHead', - num_dims=2, - num_classes=18, - primitive_mode='z', - upper_thresh=100.0, - surface_thresh=0.5, - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=1, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=1024, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True), - feat_channels=(128, 128), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.4, 0.6], - reduction='mean', - loss_weight=30.0), - center_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=0.5, - loss_dst_weight=0.5), - semantic_reg_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=0.5, - loss_dst_weight=0.5), - semantic_cls_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - train_cfg=dict( - dist_thresh=0.2, - var_thresh=1e-2, - lower_thresh=1e-6, - num_point=100, - num_point_line=10, - line_thresh=0.2)) - -primitive_xy_cfg = dict( - type='PrimitiveHead', - num_dims=1, - num_classes=18, - primitive_mode='xy', - upper_thresh=100.0, - surface_thresh=0.5, - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=1, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=1024, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True), - feat_channels=(128, 128), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.4, 0.6], - reduction='mean', - loss_weight=30.0), - center_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=0.5, - loss_dst_weight=0.5), - semantic_reg_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=0.5, - loss_dst_weight=0.5), - semantic_cls_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - train_cfg=dict( - dist_thresh=0.2, - var_thresh=1e-2, - lower_thresh=1e-6, - num_point=100, - num_point_line=10, - line_thresh=0.2)) - -primitive_line_cfg = dict( - type='PrimitiveHead', - num_dims=0, - num_classes=18, - primitive_mode='line', - upper_thresh=100.0, - surface_thresh=0.5, - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=1, - 
conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=1024, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True), - feat_channels=(128, 128), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.4, 0.6], - reduction='mean', - loss_weight=30.0), - center_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=1.0, - loss_dst_weight=1.0), - semantic_reg_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=1.0, - loss_dst_weight=1.0), - semantic_cls_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=2.0), - train_cfg=dict( - dist_thresh=0.2, - var_thresh=1e-2, - lower_thresh=1e-6, - num_point=100, - num_point_line=10, - line_thresh=0.2)) - -model = dict( - type='H3DNet', - backbone=dict( - type='MultiBackbone', - num_streams=4, - suffixes=['net0', 'net1', 'net2', 'net3'], - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-5, momentum=0.01), - act_cfg=dict(type='ReLU'), - backbones=dict( - type='PointNet2SASSG', - in_channels=4, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((64, 64, 128), (128, 128, 256), (128, 128, 256), - (128, 128, 256)), - fp_channels=((256, 256), (256, 256)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True))), - rpn_head=dict( - type='VoteHead', - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=3, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=256, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True), - pred_layer_cfg=dict( - in_channels=128, shared_conv_channels=(128, 128), bias=True), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.2, 0.8], - reduction='sum', - loss_weight=5.0), - center_loss=dict( - type='ChamferDistance', - mode='l2', - reduction='sum', - loss_src_weight=10.0, - loss_dst_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - roi_head=dict( - type='H3DRoIHead', - primitive_list=[primitive_z_cfg, primitive_xy_cfg, primitive_line_cfg], - bbox_head=dict( - type='H3DBboxHead', - gt_per_seed=3, - num_proposal=256, - suface_matching_cfg=dict( - type='PointSAModule', - num_point=256 * 6, - radius=0.5, - num_sample=32, - mlp_channels=[128 + 6, 128, 64, 32], - use_xyz=True, - normalize_xyz=True), - line_matching_cfg=dict( - type='PointSAModule', - num_point=256 * 12, - radius=0.5, - num_sample=32, - mlp_channels=[128 + 12, 
128, 64, 32], - use_xyz=True, - normalize_xyz=True), - feat_channels=(128, 128), - primitive_refine_channels=[128, 128, 128], - upper_thresh=100.0, - surface_thresh=0.5, - line_thresh=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.2, 0.8], - reduction='sum', - loss_weight=5.0), - center_loss=dict( - type='ChamferDistance', - mode='l2', - reduction='sum', - loss_src_weight=10.0, - loss_dst_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=0.1), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=0.1), - size_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=0.1), - cues_objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.3, 0.7], - reduction='mean', - loss_weight=5.0), - cues_semantic_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.3, 0.7], - reduction='mean', - loss_weight=5.0), - proposal_objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.2, 0.8], - reduction='none', - loss_weight=5.0), - primitive_center_loss=dict( - type='MSELoss', reduction='none', loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - pos_distance_thr=0.3, neg_distance_thr=0.6, sample_mod='vote'), - rpn_proposal=dict(use_nms=False), - rcnn=dict( - pos_distance_thr=0.3, - neg_distance_thr=0.6, - sample_mod='vote', - far_threshold=0.6, - near_threshold=0.3, - mask_surface_threshold=0.3, - label_surface_threshold=0.3, - mask_line_threshold=0.3, - label_line_threshold=0.3)), - test_cfg=dict( - rpn=dict( - sample_mod='seed', - nms_thr=0.25, - score_thr=0.05, - per_class_proposal=True, - use_nms=False), - rcnn=dict( - sample_mod='seed', - nms_thr=0.25, - score_thr=0.05, - per_class_proposal=True))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_fpn_lyft.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_fpn_lyft.py deleted file mode 100644 index 87c7fe0c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_fpn_lyft.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = './hv_pointpillars_fpn_nus.py' - -# model settings (based on nuScenes model settings) -# Voxel size for voxel encoder -# Usually voxel size is changed consistently with the point cloud range -# If point cloud range is modified, do remember to change all related -# keys in the config. 
-model = dict( - pts_voxel_layer=dict( - max_num_points=20, - point_cloud_range=[-80, -80, -5, 80, 80, 3], - max_voxels=(60000, 60000)), - pts_voxel_encoder=dict( - feat_channels=[64], point_cloud_range=[-80, -80, -5, 80, 80, 3]), - pts_middle_encoder=dict(output_shape=[640, 640]), - pts_bbox_head=dict( - num_classes=9, - anchor_generator=dict( - ranges=[[-80, -80, -1.8, 80, 80, -1.8]], custom_values=[]), - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=7)), - # model training settings (based on nuScenes model settings) - train_cfg=dict(pts=dict(code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_fpn_nus.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_fpn_nus.py deleted file mode 100644 index be29269d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_fpn_nus.py +++ /dev/null @@ -1,95 +0,0 @@ -# model settings -# Voxel size for voxel encoder -# Usually voxel size is changed consistently with the point cloud range -# If point cloud range is modified, do remember to change all related -# keys in the config. -voxel_size = [0.25, 0.25, 8] -model = dict( - type='MVXFasterRCNN', - pts_voxel_layer=dict( - max_num_points=64, - point_cloud_range=[-50, -50, -5, 50, 50, 3], - voxel_size=voxel_size, - max_voxels=(30000, 40000)), - pts_voxel_encoder=dict( - type='HardVFE', - in_channels=4, - feat_channels=[64, 64], - with_distance=False, - voxel_size=voxel_size, - with_cluster_center=True, - with_voxel_center=True, - point_cloud_range=[-50, -50, -5, 50, 50, 3], - norm_cfg=dict(type='naiveSyncBN1d', eps=1e-3, momentum=0.01)), - pts_middle_encoder=dict( - type='PointPillarsScatter', in_channels=64, output_shape=[400, 400]), - pts_backbone=dict( - type='SECOND', - in_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - out_channels=[64, 128, 256]), - pts_neck=dict( - type='FPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - act_cfg=dict(type='ReLU'), - in_channels=[64, 128, 256], - out_channels=256, - start_level=0, - num_outs=3), - pts_bbox_head=dict( - type='Anchor3DHead', - num_classes=10, - in_channels=256, - feat_channels=256, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-50, -50, -1.8, 50, 50, -1.8]], - scales=[1, 2, 4], - sizes=[ - [2.5981, 0.8660, 1.], # 1.5 / sqrt(3) - [1.7321, 0.5774, 1.], # 1 / sqrt(3) - [1., 1., 1.], - [0.4, 0.4, 1], - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=True), - assigner_per_size=False, - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi / 4 - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=9), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - pts=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2], - pos_weight=-1, - debug=False)), - test_cfg=dict( - pts=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_pre=1000, - nms_thr=0.2, - score_thr=0.05, - 
min_bbox_size=0, - max_num=500))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_fpn_range100_lyft.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_fpn_range100_lyft.py deleted file mode 100644 index 9cd200f3..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_fpn_range100_lyft.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = './hv_pointpillars_fpn_nus.py' - -# model settings (based on nuScenes model settings) -# Voxel size for voxel encoder -# Usually voxel size is changed consistently with the point cloud range -# If point cloud range is modified, do remember to change all related -# keys in the config. -model = dict( - pts_voxel_layer=dict( - max_num_points=20, - point_cloud_range=[-100, -100, -5, 100, 100, 3], - max_voxels=(60000, 60000)), - pts_voxel_encoder=dict( - feat_channels=[64], point_cloud_range=[-100, -100, -5, 100, 100, 3]), - pts_middle_encoder=dict(output_shape=[800, 800]), - pts_bbox_head=dict( - num_classes=9, - anchor_generator=dict( - ranges=[[-100, -100, -1.8, 100, 100, -1.8]], custom_values=[]), - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=7)), - # model training settings (based on nuScenes model settings) - train_cfg=dict(pts=dict(code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_secfpn_kitti.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_secfpn_kitti.py deleted file mode 100644 index ac46475d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_secfpn_kitti.py +++ /dev/null @@ -1,94 +0,0 @@ -voxel_size = [0.16, 0.16, 4] - -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=32, # max_points_per_voxel - point_cloud_range=[0, -39.68, -3, 69.12, 39.68, 1], - voxel_size=voxel_size, - max_voxels=(16000, 40000) # (training, testing) max_voxels - ), - voxel_encoder=dict( - type='PillarFeatureNet', - in_channels=4, - feat_channels=[64], - with_distance=False, - voxel_size=voxel_size, - point_cloud_range=[0, -39.68, -3, 69.12, 39.68, 1]), - middle_encoder=dict( - type='PointPillarsScatter', in_channels=64, output_shape=[496, 432]), - backbone=dict( - type='SECOND', - in_channels=64, - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - out_channels=[64, 128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - assign_per_class=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[ - [0, -39.68, -0.6, 69.12, 39.68, -0.6], - [0, -39.68, -0.6, 69.12, 39.68, -0.6], - [0, -39.68, -1.78, 69.12, 39.68, -1.78], - ], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - 
ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_secfpn_waymo.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_secfpn_waymo.py deleted file mode 100644 index 30e23e95..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_pointpillars_secfpn_waymo.py +++ /dev/null @@ -1,107 +0,0 @@ -# model settings -# Voxel size for voxel encoder -# Usually voxel size is changed consistently with the point cloud range -# If point cloud range is modified, do remember to change all related -# keys in the config. -voxel_size = [0.32, 0.32, 6] -model = dict( - type='MVXFasterRCNN', - pts_voxel_layer=dict( - max_num_points=20, - point_cloud_range=[-74.88, -74.88, -2, 74.88, 74.88, 4], - voxel_size=voxel_size, - max_voxels=(32000, 32000)), - pts_voxel_encoder=dict( - type='HardVFE', - in_channels=5, - feat_channels=[64], - with_distance=False, - voxel_size=voxel_size, - with_cluster_center=True, - with_voxel_center=True, - point_cloud_range=[-74.88, -74.88, -2, 74.88, 74.88, 4], - norm_cfg=dict(type='naiveSyncBN1d', eps=1e-3, momentum=0.01)), - pts_middle_encoder=dict( - type='PointPillarsScatter', in_channels=64, output_shape=[468, 468]), - pts_backbone=dict( - type='SECOND', - in_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - layer_nums=[3, 5, 5], - layer_strides=[1, 2, 2], - out_channels=[64, 128, 256]), - pts_neck=dict( - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-74.88, -74.88, -0.0345, 74.88, 74.88, -0.0345], - [-74.88, -74.88, -0.1188, 74.88, 74.88, -0.1188], - [-74.88, -74.88, 0, 74.88, 74.88, 0]], - sizes=[ - [4.73, 2.08, 1.77], # car - [1.81, 0.84, 1.77], # cyclist - [0.91, 0.84, 1.74] # pedestrian - ], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi / 4 - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=7), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - pts=dict( - assigner=[ - dict( # car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - dict( # pedestrian - type='MaxIoUAssigner', - 
iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - ], - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], - pos_weight=-1, - debug=False)), - test_cfg=dict( - pts=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_pre=4096, - nms_thr=0.25, - score_thr=0.1, - min_bbox_size=0, - max_num=500))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_second_secfpn_kitti.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_second_secfpn_kitti.py deleted file mode 100644 index e7d569a5..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_second_secfpn_kitti.py +++ /dev/null @@ -1,89 +0,0 @@ -voxel_size = [0.05, 0.05, 0.1] - -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=5, - point_cloud_range=[0, -40, -3, 70.4, 40, 1], - voxel_size=voxel_size, - max_voxels=(16000, 40000)), - voxel_encoder=dict(type='HardSimpleVFE'), - middle_encoder=dict( - type='SparseEncoder', - in_channels=4, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[ - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78], - ], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_second_secfpn_waymo.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_second_secfpn_waymo.py deleted file mode 100644 index 0fa39e15..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/hv_second_secfpn_waymo.py +++ /dev/null @@ -1,99 +0,0 @@ -# model settings -# Voxel size for voxel encoder -# Usually voxel size is changed consistently with the point cloud range -# If point cloud range is modified, do remember to change all related -# keys in the config. 
-voxel_size = [0.08, 0.08, 0.1] -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=10, - point_cloud_range=[-76.8, -51.2, -2, 76.8, 51.2, 4], - voxel_size=voxel_size, - max_voxels=(80000, 90000)), - voxel_encoder=dict(type='HardSimpleVFE', num_features=5), - middle_encoder=dict( - type='SparseEncoder', - in_channels=5, - sparse_shape=[61, 1280, 1920], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=384, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-76.8, -51.2, -0.0345, 76.8, 51.2, -0.0345], - [-76.8, -51.2, 0, 76.8, 51.2, 0], - [-76.8, -51.2, -0.1188, 76.8, 51.2, -0.1188]], - sizes=[ - [4.73, 2.08, 1.77], # car - [0.91, 0.84, 1.74], # pedestrian - [1.81, 0.84, 1.77] # cyclist - ], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi / 4 - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=7), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - dict( # cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1) - ], - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_pre=4096, - nms_thr=0.25, - score_thr=0.1, - min_bbox_size=0, - max_num=500)) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/imvotenet_image.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/imvotenet_image.py deleted file mode 100644 index 981f8bc9..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/imvotenet_image.py +++ /dev/null @@ -1,108 +0,0 @@ -model = dict( - type='ImVoteNet', - img_backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe'), - img_neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - img_rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - 
loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - img_roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0))), - - # model training and testing settings - train_cfg=dict( - img_rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - img_rpn_proposal=dict( - nms_across_levels=False, - nms_pre=2000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - img_rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - img_rpn=dict( - nms_across_levels=False, - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - img_rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/mask_rcnn_r50_fpn.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/mask_rcnn_r50_fpn.py deleted file mode 100644 index 4e670e9d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/mask_rcnn_r50_fpn.py +++ /dev/null @@ -1,124 +0,0 @@ -# model settings -model = dict( - type='MaskRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, 
loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_across_levels=False, - nms_pre=2000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_across_levels=False, - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/paconv_cuda_ssg.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/paconv_cuda_ssg.py deleted file mode 100644 index f513bd4a..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/paconv_cuda_ssg.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = './paconv_ssg.py' - -model = dict( - backbone=dict( - sa_cfg=dict( - type='PAConvCUDASAModule', - scorenet_cfg=dict(mlp_channels=[8, 16, 16])))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/paconv_ssg.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/paconv_ssg.py deleted file mode 100644 index 1d4f1ed3..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/paconv_ssg.py +++ /dev/null @@ -1,49 +0,0 @@ -# model settings -model = dict( - type='EncoderDecoder3D', - backbone=dict( - type='PointNet2SASSG', - in_channels=9, # [xyz, rgb, normalized_xyz] - num_points=(1024, 256, 64, 16), - radius=(None, None, None, None), # use kNN instead of ball query - num_samples=(32, 32, 32, 32), - sa_channels=((32, 32, 64), (64, 64, 128), (128, 128, 256), (256, 256, - 512)), - fp_channels=(), - norm_cfg=dict(type='BN2d', momentum=0.1), - sa_cfg=dict( - type='PAConvSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=False, - paconv_num_kernels=[16, 16, 16], - paconv_kernel_input='w_neighbor', - scorenet_input='w_neighbor_dist', - scorenet_cfg=dict( - mlp_channels=[16, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False))), - decode_head=dict( - type='PAConvHead', - # PAConv model's decoder takes skip connections from beckbone - # different from PointNet++, it also concats input features in the last - # level of decoder, leading to `128 + 6` as the channel number - fp_channels=((768, 256, 256), (384, 256, 256), (320, 256, 128), - (128 + 6, 128, 128, 128)), - channels=128, - dropout_ratio=0.5, - conv_cfg=dict(type='Conv1d'), - 
norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - class_weight=None, # should be modified with dataset - loss_weight=1.0)), - # correlation loss to regularize PAConv's kernel weights - loss_regularization=dict( - type='PAConvRegularizationLoss', reduction='sum', loss_weight=10.0), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide')) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/parta2.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/parta2.py deleted file mode 100644 index aa155678..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/parta2.py +++ /dev/null @@ -1,201 +0,0 @@ -# model settings -voxel_size = [0.05, 0.05, 0.1] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] - -model = dict( - type='PartA2', - voxel_layer=dict( - max_num_points=5, # max_points_per_voxel - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(16000, 40000) # (training, testing) max_voxels - ), - voxel_encoder=dict(type='HardSimpleVFE'), - middle_encoder=dict( - type='SparseUNet', - in_channels=4, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - rpn_head=dict( - type='PartA2RPNHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[[0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78]], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - assigner_per_size=True, - assign_per_class=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - roi_head=dict( - type='PartAggregationROIHead', - num_classes=3, - semantic_head=dict( - type='PointwiseSemanticHead', - in_channels=16, - extra_width=0.2, - seg_score_thr=0.3, - num_classes=3, - loss_seg=dict( - type='FocalLoss', - use_sigmoid=True, - reduction='sum', - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_part=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)), - seg_roi_extractor=dict( - type='Single3DRoIAwareExtractor', - roi_layer=dict( - type='RoIAwarePool3d', - out_size=14, - max_pts_per_voxel=128, - mode='max')), - part_roi_extractor=dict( - type='Single3DRoIAwareExtractor', - roi_layer=dict( - type='RoIAwarePool3d', - out_size=14, - max_pts_per_voxel=128, - mode='avg')), - bbox_head=dict( - type='PartA2BboxHead', - num_classes=3, - seg_in_channels=16, - part_in_channels=4, - seg_conv_channels=[64, 64], - part_conv_channels=[64, 64], - merge_conv_channels=[128, 128], - down_conv_channels=[128, 256], - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - shared_fc_channels=[256, 512, 512, 512], - cls_channels=[256, 256], - reg_channels=[256, 256], - dropout_ratio=0.1, - roi_feat_size=14, - with_corner_loss=True, - loss_bbox=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=1.0), - 
loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1) - ], - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=9000, - nms_post=512, - max_num=512, - nms_thr=0.8, - score_thr=0, - use_rotate_nms=False), - rcnn=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1) - ], - sampler=dict( - type='IoUNegPiecewiseSampler', - num=128, - pos_fraction=0.55, - neg_piece_fractions=[0.8, 0.2], - neg_iou_piece_thrs=[0.55, 0.1], - neg_pos_ub=-1, - add_gt_as_proposals=False, - return_iou=True), - cls_pos_thr=0.75, - cls_neg_thr=0.25)), - test_cfg=dict( - rpn=dict( - nms_pre=1024, - nms_post=100, - max_num=100, - nms_thr=0.7, - score_thr=0, - use_rotate_nms=True), - rcnn=dict( - use_rotate_nms=True, - use_raw_score=True, - nms_thr=0.01, - score_thr=0.1))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/pgd.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/pgd.py deleted file mode 100644 index e63fc1fc..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/pgd.py +++ /dev/null @@ -1,55 +0,0 @@ -_base_ = './fcos3d.py' -# model settings -model = dict( - bbox_head=dict( - _delete_=True, - type='PGDHead', - num_classes=10, - in_channels=256, - stacked_convs=2, - feat_channels=256, - use_direction_classifier=True, - diff_rad_by_sin=True, - pred_attrs=True, - pred_velo=True, - pred_bbox2d=True, - pred_keypoints=False, - dir_offset=0.7854, # pi/4 - strides=[8, 16, 32, 64, 128], - group_reg_dims=(2, 1, 3, 1, 2), # offset, depth, size, rot, velo - cls_branch=(256, ), - reg_branch=( - (256, ), # offset - (256, ), # depth - (256, ), # size - (256, ), # rot - () # velo - ), - dir_branch=(256, ), - attr_branch=(256, ), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_attr=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_centerness=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - norm_on_bbox=True, - centerness_on_reg=True, - center_sampling=True, - conv_bias=True, - dcn_on_last_conv=True, - use_depth_classifier=True, - depth_branch=(256, ), - depth_range=(0, 50), - depth_unit=10, - 
division='uniform', - depth_bins=6, - bbox_coder=dict(type='PGDBBoxCoder', code_size=9)), - test_cfg=dict(nms_pre=1000, nms_thr=0.8, score_thr=0.01, max_per_img=200)) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/point_rcnn.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/point_rcnn.py deleted file mode 100644 index 02a1414f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/point_rcnn.py +++ /dev/null @@ -1,131 +0,0 @@ -model = dict( - type='PointRCNN', - backbone=dict( - type='PointNet2SAMSG', - in_channels=4, - num_points=(4096, 1024, 256, 64), - radii=((0.1, 0.5), (0.5, 1.0), (1.0, 2.0), (2.0, 4.0)), - num_samples=((16, 32), (16, 32), (16, 32), (16, 32)), - sa_channels=(((16, 16, 32), (32, 32, 64)), ((64, 64, 128), (64, 96, - 128)), - ((128, 196, 256), (128, 196, 256)), ((256, 256, 512), - (256, 384, 512))), - fps_mods=(('D-FPS'), ('D-FPS'), ('D-FPS'), ('D-FPS')), - fps_sample_range_lists=((-1), (-1), (-1), (-1)), - aggregation_channels=(None, None, None, None), - dilated_group=(False, False, False, False), - out_indices=(0, 1, 2, 3), - norm_cfg=dict(type='BN2d', eps=1e-3, momentum=0.1), - sa_cfg=dict( - type='PointSAModuleMSG', - pool_mod='max', - use_xyz=True, - normalize_xyz=False)), - neck=dict( - type='PointNetFPNeck', - fp_channels=((1536, 512, 512), (768, 512, 512), (608, 256, 256), - (257, 128, 128))), - rpn_head=dict( - type='PointRPNHead', - num_classes=3, - enlarge_width=0.1, - pred_layer_cfg=dict( - in_channels=128, - cls_linear_channels=(256, 256), - reg_linear_channels=(256, 256)), - cls_loss=dict( - type='FocalLoss', - use_sigmoid=True, - reduction='sum', - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - bbox_loss=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=1.0), - bbox_coder=dict( - type='PointXYZWHLRBBoxCoder', - code_size=8, - # code_size: (center residual (3), size regression (3), - # torch.cos(yaw) (1), torch.sin(yaw) (1) - use_mean_size=True, - mean_size=[[3.9, 1.6, 1.56], [0.8, 0.6, 1.73], [1.76, 0.6, - 1.73]])), - roi_head=dict( - type='PointRCNNRoIHead', - point_roi_extractor=dict( - type='Single3DRoIPointExtractor', - roi_layer=dict(type='RoIPointPool3d', num_sampled_points=512)), - bbox_head=dict( - type='PointRCNNBboxHead', - num_classes=1, - pred_layer_cfg=dict( - in_channels=512, - cls_conv_channels=(256, 256), - reg_conv_channels=(256, 256), - bias=True), - in_channels=5, - # 5 = 3 (xyz) + scores + depth - mlp_channels=[128, 128], - num_points=(128, 32, -1), - radius=(0.2, 0.4, 100), - num_samples=(16, 16, 16), - sa_channels=((128, 128, 128), (128, 128, 256), (256, 256, 512)), - with_corner_loss=True), - depth_normalizer=70.0), - # model training and testing settings - train_cfg=dict( - pos_distance_thr=10.0, - rpn=dict( - nms_cfg=dict( - use_rotate_nms=True, iou_thr=0.8, nms_pre=9000, nms_post=512), - score_thr=None), - rcnn=dict( - assigner=[ - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1, - match_low_quality=False), - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1, - match_low_quality=False), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - 
ignore_iof_thr=-1, - match_low_quality=False) - ], - sampler=dict( - type='IoUNegPiecewiseSampler', - num=128, - pos_fraction=0.5, - neg_piece_fractions=[0.8, 0.2], - neg_iou_piece_thrs=[0.55, 0.1], - neg_pos_ub=-1, - add_gt_as_proposals=False, - return_iou=True), - cls_pos_thr=0.7, - cls_neg_thr=0.25)), - test_cfg=dict( - rpn=dict( - nms_cfg=dict( - use_rotate_nms=True, iou_thr=0.85, nms_pre=9000, nms_post=512), - score_thr=None), - rcnn=dict(use_rotate_nms=True, nms_thr=0.1, score_thr=0.1))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/pointnet2_msg.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/pointnet2_msg.py deleted file mode 100644 index 222ab885..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/pointnet2_msg.py +++ /dev/null @@ -1,28 +0,0 @@ -_base_ = './pointnet2_ssg.py' - -# model settings -model = dict( - backbone=dict( - _delete_=True, - type='PointNet2SAMSG', - in_channels=6, # [xyz, rgb], should be modified with dataset - num_points=(1024, 256, 64, 16), - radii=((0.05, 0.1), (0.1, 0.2), (0.2, 0.4), (0.4, 0.8)), - num_samples=((16, 32), (16, 32), (16, 32), (16, 32)), - sa_channels=(((16, 16, 32), (32, 32, 64)), ((64, 64, 128), (64, 96, - 128)), - ((128, 196, 256), (128, 196, 256)), ((256, 256, 512), - (256, 384, 512))), - aggregation_channels=(None, None, None, None), - fps_mods=(('D-FPS'), ('D-FPS'), ('D-FPS'), ('D-FPS')), - fps_sample_range_lists=((-1), (-1), (-1), (-1)), - dilated_group=(False, False, False, False), - out_indices=(0, 1, 2, 3), - sa_cfg=dict( - type='PointSAModuleMSG', - pool_mod='max', - use_xyz=True, - normalize_xyz=False)), - decode_head=dict( - fp_channels=((1536, 256, 256), (512, 256, 256), (352, 256, 128), - (128, 128, 128, 128)))) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/pointnet2_ssg.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/pointnet2_ssg.py deleted file mode 100644 index 58b4c243..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/pointnet2_ssg.py +++ /dev/null @@ -1,35 +0,0 @@ -# model settings -model = dict( - type='EncoderDecoder3D', - backbone=dict( - type='PointNet2SASSG', - in_channels=6, # [xyz, rgb], should be modified with dataset - num_points=(1024, 256, 64, 16), - radius=(0.1, 0.2, 0.4, 0.8), - num_samples=(32, 32, 32, 32), - sa_channels=((32, 32, 64), (64, 64, 128), (128, 128, 256), (256, 256, - 512)), - fp_channels=(), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=False)), - decode_head=dict( - type='PointNet2Head', - fp_channels=((768, 256, 256), (384, 256, 256), (320, 256, 128), - (128, 128, 128, 128)), - channels=128, - dropout_ratio=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - class_weight=None, # should be modified with dataset - loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide')) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/smoke.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/smoke.py deleted file mode 100644 index 0a7452b4..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/smoke.py +++ /dev/null @@ -1,53 +0,0 @@ -model = dict( - type='SMOKEMono3D', - backbone=dict( - type='DLANet', - depth=34, - in_channels=3, - norm_cfg=dict(type='GN', num_groups=32), - init_cfg=dict( - type='Pretrained', - 
checkpoint='http://dl.yf.io/dla/models/imagenet/dla34-ba72cf86.pth' - )), - neck=dict( - type='DLANeck', - in_channels=[16, 32, 64, 128, 256, 512], - start_level=2, - end_level=5, - norm_cfg=dict(type='GN', num_groups=32)), - bbox_head=dict( - type='SMOKEMono3DHead', - num_classes=3, - in_channels=64, - dim_channel=[3, 4, 5], - ori_channel=[6, 7], - stacked_convs=0, - feat_channels=64, - use_direction_classifier=False, - diff_rad_by_sin=False, - pred_attrs=False, - pred_velo=False, - dir_offset=0, - strides=None, - group_reg_dims=(8, ), - cls_branch=(256, ), - reg_branch=((256, ), ), - num_attrs=0, - bbox_code_size=7, - dir_branch=(), - attr_branch=(), - bbox_coder=dict( - type='SMOKECoder', - base_depth=(28.01, 16.32), - base_dims=((0.88, 1.73, 0.67), (1.78, 1.70, 0.58), (3.88, 1.63, - 1.53)), - code_size=7), - loss_cls=dict(type='GaussianFocalLoss', loss_weight=1.0), - loss_bbox=dict(type='L1Loss', reduction='sum', loss_weight=1 / 300), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_attr=None, - conv_bias=True, - dcn_on_last_conv=False), - train_cfg=None, - test_cfg=dict(topK=100, local_maximum_kernel=3, max_per_img=100)) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/models/votenet.py b/cv/3d_detection/paconv/pytorch/configs/_base_/models/votenet.py deleted file mode 100644 index 129339dc..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/models/votenet.py +++ /dev/null @@ -1,73 +0,0 @@ -model = dict( - type='VoteNet', - backbone=dict( - type='PointNet2SASSG', - in_channels=4, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((64, 64, 128), (128, 128, 256), (128, 128, 256), - (128, 128, 256)), - fp_channels=((256, 256), (256, 256)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True)), - bbox_head=dict( - type='VoteHead', - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=3, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=256, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True), - pred_layer_cfg=dict( - in_channels=128, shared_conv_channels=(128, 128), bias=True), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.2, 0.8], - reduction='sum', - loss_weight=5.0), - center_loss=dict( - type='ChamferDistance', - mode='l2', - reduction='sum', - loss_src_weight=10.0, - loss_dst_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0 / 3.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - pos_distance_thr=0.3, neg_distance_thr=0.6, sample_mod='vote'), - test_cfg=dict( - sample_mod='seed', - nms_thr=0.25, - score_thr=0.05, - per_class_proposal=True)) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/cosine.py 
b/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/cosine.py deleted file mode 100644 index 69cb7df8..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/cosine.py +++ /dev/null @@ -1,20 +0,0 @@ -# This schedule is mainly used by models with dynamic voxelization -# optimizer -lr = 0.003 # max learning rate -optimizer = dict( - type='AdamW', - lr=lr, - betas=(0.95, 0.99), # the momentum is change during training - weight_decay=0.001) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) - -lr_config = dict( - policy='CosineAnnealing', - warmup='linear', - warmup_iters=1000, - warmup_ratio=1.0 / 10, - min_lr_ratio=1e-5) - -momentum_config = None - -runner = dict(type='EpochBasedRunner', max_epochs=40) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/cyclic_20e.py b/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/cyclic_20e.py deleted file mode 100644 index 704740ee..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/cyclic_20e.py +++ /dev/null @@ -1,24 +0,0 @@ -# For nuScenes dataset, we usually evaluate the model at the end of training. -# Since the models are trained by 24 epochs by default, we set evaluation -# interval to be 20. Please change the interval accordingly if you do not -# use a default schedule. -# optimizer -# This schedule is mainly used by models on nuScenes dataset -optimizer = dict(type='AdamW', lr=1e-4, weight_decay=0.01) -# max_norm=10 is better for SECOND -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4, -) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4, -) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=20) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/cyclic_40e.py b/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/cyclic_40e.py deleted file mode 100644 index 66498633..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/cyclic_40e.py +++ /dev/null @@ -1,31 +0,0 @@ -# The schedule is usually used by models trained on KITTI dataset - -# The learning rate set in the cyclic schedule is the initial learning rate -# rather than the max learning rate. Since the target_ratio is (10, 1e-4), -# the learning rate will change from 0.0018 to 0.018, than go to 0.0018*1e-4 -lr = 0.0018 -# The optimizer follows the setting in SECOND.Pytorch, but here we use -# the official AdamW optimizer implemented by PyTorch. 
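# Editorial sketch (not part of the deleted config file): the arithmetic that the
# cyclic-schedule comment above describes, assuming target_ratio=(peak, final) is
# applied as multipliers of the initial learning rate, as the comment states.
lr_initial = 0.0018
peak_ratio, final_ratio = 10, 1e-4
lr_peak = lr_initial * peak_ratio    # 0.018, reached after the ramp-up portion of the cycle
lr_final = lr_initial * final_ratio  # 1.8e-07 at the end of training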
-optimizer = dict(type='AdamW', lr=lr, betas=(0.95, 0.99), weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) -# We use cyclic learning rate and momentum schedule following SECOND.Pytorch -# https://github.com/traveller59/second.pytorch/blob/3aba19c9688274f75ebb5e576f65cfe54773c021/torchplus/train/learning_schedules_fastai.py#L69 # noqa -# We implement them in mmcv, for more details, please refer to -# https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/lr_updater.py#L327 # noqa -# https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/momentum_updater.py#L130 # noqa -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4, -) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4, -) -# Although the max_epochs is 40, this schedule is usually used we -# RepeatDataset with repeat ratio N, thus the actual max epoch -# number could be Nx40 -runner = dict(type='EpochBasedRunner', max_epochs=40) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/mmdet_schedule_1x.py b/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/mmdet_schedule_1x.py deleted file mode 100644 index 13b3783c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/mmdet_schedule_1x.py +++ /dev/null @@ -1,11 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[8, 11]) -runner = dict(type='EpochBasedRunner', max_epochs=12) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/schedule_2x.py b/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/schedule_2x.py deleted file mode 100644 index afde799d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/schedule_2x.py +++ /dev/null @@ -1,14 +0,0 @@ -# optimizer -# This schedule is mainly used by models on nuScenes dataset -optimizer = dict(type='AdamW', lr=0.001, weight_decay=0.01) -# max_norm=10 is better for SECOND -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=1000, - warmup_ratio=1.0 / 1000, - step=[20, 23]) -momentum_config = None -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/schedule_3x.py b/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/schedule_3x.py deleted file mode 100644 index 115cd26b..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/schedule_3x.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -# This schedule is mainly used by models on indoor dataset, -# e.g., VoteNet on SUNRGBD and ScanNet -lr = 0.008 # max learning rate -optimizer = dict(type='AdamW', lr=lr, weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[24, 32]) -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=36) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_100e.py b/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_100e.py deleted file mode 100644 index 3b75932b..00000000 --- 
a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_100e.py +++ /dev/null @@ -1,8 +0,0 @@ -# optimizer -# This schedule is mainly used on S3DIS dataset in segmentation task -optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='CosineAnnealing', warmup=None, min_lr=1e-5) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=100) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_150e.py b/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_150e.py deleted file mode 100644 index 04b44e51..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_150e.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -# This schedule is mainly used on S3DIS dataset in segmentation task -optimizer = dict(type='SGD', lr=0.2, weight_decay=0.0001, momentum=0.9) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='CosineAnnealing', warmup=None, min_lr=0.002) -momentum_config = None - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=150) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_200e.py b/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_200e.py deleted file mode 100644 index 6a49484c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_200e.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -# This schedule is mainly used on ScanNet dataset in segmentation task -optimizer = dict(type='Adam', lr=0.001, weight_decay=0.01) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='CosineAnnealing', warmup=None, min_lr=1e-5) -momentum_config = None - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=200) diff --git a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_50e.py b/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_50e.py deleted file mode 100644 index 975a8f9f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/_base_/schedules/seg_cosine_50e.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -# This schedule is mainly used on S3DIS dataset in segmentation task -optimizer = dict(type='Adam', lr=0.001, weight_decay=0.001) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='CosineAnnealing', warmup=None, min_lr=1e-5) -momentum_config = None - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=50) diff --git a/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_PartA2_secfpn_4x8_cyclic_80e_pcdet_kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_PartA2_secfpn_4x8_cyclic_80e_pcdet_kitti-3d-3class.py deleted file mode 100644 index 398a19cd..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_PartA2_secfpn_4x8_cyclic_80e_pcdet_kitti-3d-3class.py +++ /dev/null @@ -1,332 +0,0 @@ -# model settings -voxel_size = [0.05, 0.05, 0.1] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] # velodyne coordinates, x, y, z - -model = dict( - type='PartA2', - voxel_layer=dict( - max_num_points=5, # max_points_per_voxel - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(16000, 40000) # (training, testing) max_coxels - ), - voxel_encoder=dict(type='HardSimpleVFE'), - middle_encoder=dict( - type='SparseUNet', - in_channels=4, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 
2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - rpn_head=dict( - type='PartA2RPNHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[[0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78]], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - assigner_per_size=True, - assign_per_class=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - roi_head=dict( - type='PartAggregationROIHead', - num_classes=3, - semantic_head=dict( - type='PointwiseSemanticHead', - in_channels=16, - extra_width=0.2, - seg_score_thr=0.3, - num_classes=3, - loss_seg=dict( - type='FocalLoss', - use_sigmoid=True, - reduction='sum', - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_part=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)), - seg_roi_extractor=dict( - type='Single3DRoIAwareExtractor', - roi_layer=dict( - type='RoIAwarePool3d', - out_size=14, - max_pts_per_voxel=128, - mode='max')), - part_roi_extractor=dict( - type='Single3DRoIAwareExtractor', - roi_layer=dict( - type='RoIAwarePool3d', - out_size=14, - max_pts_per_voxel=128, - mode='avg')), - bbox_head=dict( - type='PartA2BboxHead', - num_classes=3, - seg_in_channels=16, - part_in_channels=4, - seg_conv_channels=[64, 64], - part_conv_channels=[64, 64], - merge_conv_channels=[128, 128], - down_conv_channels=[128, 256], - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - shared_fc_channels=[256, 512, 512, 512], - cls_channels=[256, 256], - reg_channels=[256, 256], - dropout_ratio=0.1, - roi_feat_size=14, - with_corner_loss=True, - loss_bbox=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1) - ], - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=9000, - nms_post=512, - max_num=512, - nms_thr=0.8, - score_thr=0, - use_rotate_nms=False), - rcnn=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1), - 
dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1) - ], - sampler=dict( - type='IoUNegPiecewiseSampler', - num=128, - pos_fraction=0.55, - neg_piece_fractions=[0.8, 0.2], - neg_iou_piece_thrs=[0.55, 0.1], - neg_pos_ub=-1, - add_gt_as_proposals=False, - return_iou=True), - cls_pos_thr=0.75, - cls_neg_thr=0.25)), - test_cfg=dict( - rpn=dict( - nms_pre=1024, - nms_post=100, - max_num=100, - nms_thr=0.7, - score_thr=0, - use_rotate_nms=True), - rcnn=dict( - use_rotate_nms=True, - use_raw_score=True, - nms_thr=0.01, - score_thr=0.3))) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=5, Cyclist=5)), - classes=class_names, - sample_groups=dict(Car=20, Pedestrian=15, Cyclist=15)) -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True)) -# optimizer -lr = 0.001 # max learning rate -optimizer = dict(type='AdamW', lr=lr, betas=(0.95, 0.99), weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4) -checkpoint_config = dict(interval=1) -evaluation = dict(interval=1, pipeline=eval_pipeline) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -dist_params = dict(backend='nccl', port=29506) -log_level = 'INFO' -find_unused_parameters = True -work_dir = './work_dirs/parta2_secfpn_80e' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_pointpillars_secfpn_3x8_100e_det3d_kitti-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_pointpillars_secfpn_3x8_100e_det3d_kitti-3d-car.py deleted file mode 100644 index 72c73724..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_pointpillars_secfpn_3x8_100e_det3d_kitti-3d-car.py +++ /dev/null @@ -1,201 +0,0 @@ -# model settings -voxel_size = [0.16, 0.16, 4] -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=64, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(12000, 20000)), - voxel_encoder=dict( - type='PillarFeatureNet', - in_channels=4, - feat_channels=[64], - with_distance=False, - voxel_size=voxel_size, - point_cloud_range=point_cloud_range), - middle_encoder=dict( - type='PointPillarsScatter', in_channels=64, output_shape=[496, 432]), - backbone=dict( - type='SECOND', - in_channels=64, - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - out_channels=[64, 128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[[0, -39.68, -1.78, 69.12, 39.68, -1.78]], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=True), - diff_rad_by_sin=True, - 
bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - sample_groups=dict(Car=15), - classes=class_names) - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[0.25, 0.25, 0.25], - global_rot_range=[0.0, 0.0], - rot_range=[-0.15707963267, 0.15707963267]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=3, - workers_per_gpu=3, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True)) -# optimizer -lr = 0.001 # max learning rate -optimizer = dict( - type='AdamW', - lr=lr, - betas=(0.95, 0.99), # the momentum is change during training - weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4) -checkpoint_config = dict(interval=1) -evaluation = dict(interval=1, pipeline=eval_pipeline) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=50) -dist_params = dict(backend='nccl') -log_level = 'INFO' -work_dir = './work_dirs/pp_secfpn_100e' -load_from = None -resume_from = None -workflow = [('train', 50)] diff --git a/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_pointpillars_secfpn_4x8_80e_pcdet_kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_pointpillars_secfpn_4x8_80e_pcdet_kitti-3d-3class.py deleted file mode 100644 index 02eed9fb..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_pointpillars_secfpn_4x8_80e_pcdet_kitti-3d-3class.py +++ /dev/null @@ -1,244 +0,0 @@ -# model settings -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] -voxel_size = [0.16, 0.16, 4] -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=32, # max_points_per_voxel - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(16000, 40000) # (training, testing) max_coxels - ), - voxel_encoder=dict( - type='PillarFeatureNet', - in_channels=4, - feat_channels=[64], - with_distance=False, - voxel_size=voxel_size, - point_cloud_range=point_cloud_range, - ), - middle_encoder=dict( - type='PointPillarsScatter', - in_channels=64, - output_shape=[496, 432], - ), - backbone=dict( - type='SECOND', - in_channels=64, - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - out_channels=[64, 128, 256], - ), - neck=dict( - type='SECONDFPN', - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128], - ), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - 
ranges=[ - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78], - ], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2), - ), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict( - Car=5, - Pedestrian=5, - Cyclist=5, - )), - classes=class_names, - sample_groups=dict( - Car=15, - Pedestrian=15, - Cyclist=15, - )) - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']), -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True)) -# optimizer -lr = 0.0003 # max learning rate -optimizer = dict( - type='AdamW', - lr=lr, - betas=(0.95, 0.99), # the momentum is change during training - weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) -# learning policy -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4) -checkpoint_config = dict(interval=1) -evaluation = dict(interval=2, pipeline=eval_pipeline) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -dist_params = dict(backend='nccl') -log_level = 'INFO' -work_dir = './work_dirs/pp_secfpn_80e' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_second_secfpn_4x8_80e_pcdet_kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_second_secfpn_4x8_80e_pcdet_kitti-3d-3class.py deleted file mode 100644 index d61a050f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/benchmark/hv_second_secfpn_4x8_80e_pcdet_kitti-3d-3class.py +++ /dev/null @@ -1,251 +0,0 @@ -# model settings -voxel_size = [0.05, 0.05, 0.1] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] - -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=5, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(16000, 40000)), - voxel_encoder=dict(type='HardSimpleVFE'), - middle_encoder=dict( - type='SparseEncoder', - in_channels=4, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[ - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78], - ], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - 
bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -input_modality = dict(use_lidar=False, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict( - Car=5, - Pedestrian=5, - Cyclist=5, - )), - classes=class_names, - sample_groups=dict( - Car=20, - Pedestrian=15, - Cyclist=15, - )) -file_client_args = dict(backend='disk') -# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://kitti_data/')) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='ObjectSample', db_sampler=db_sampler), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True)) -# optimizer -lr = 0.0003 # max learning rate -optimizer = dict(type='AdamW', lr=lr, betas=(0.95, 0.99), weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4) -checkpoint_config = dict(interval=1) -evaluation = dict(interval=2, pipeline=eval_pipeline) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -dist_params = dict(backend='nccl') -log_level = 'INFO' -work_dir = './work_dirs/sec_secfpn_80e' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/README.md b/cv/3d_detection/paconv/pytorch/configs/centerpoint/README.md deleted file mode 100644 index d9173c93..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/README.md +++ /dev/null @@ -1,138 +0,0 @@ -# Center-based 3D Object Detection and Tracking - -> [Center-based 3D Object Detection and Tracking](https://arxiv.org/abs/2006.11275) - - - -## Abstract - -Three-dimensional objects are commonly represented as 3D boxes in a point-cloud. This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. 
On the Waymo Open Dataset, CenterPoint outperforms all previous single-model methods by a large margin and ranks first among all Lidar-only submissions. -
- -
- -## Introduction - -We implement CenterPoint and provide the result and checkpoints on nuScenes dataset. - -We follow the below style to name config files. Contributors are advised to follow the same style. -`{xxx}` is required field and `[yyy]` is optional. - -`{model}`: model type like `centerpoint`. - -`{model setting}`: voxel size and voxel type like `01voxel`, `02pillar`. - -`{backbone}`: backbone type like `second`. - -`{neck}`: neck type like `secfpn`. - -`[dcn]`: Whether to use deformable convolution. - -`[circle]`: Whether to use circular nms. - -`[batch_per_gpu x gpu]`: GPUs and samples per GPU, 4x8 is used by default. - -`{schedule}`: training schedule, options are 1x, 2x, 20e, etc. 1x and 2x means 12 epochs and 24 epochs respectively. 20e is adopted in cascade models, which denotes 20 epochs. For 1x/2x, initial learning rate decays by a factor of 10 at the 8/16th and 11/22th epochs. For 20e, initial learning rate decays by a factor of 10 at the 16th and 19th epochs. - -`{dataset}`: dataset like nus-3d, kitti-3d, lyft-3d, scannet-3d, sunrgbd-3d. We also indicate the number of classes we are using if there exist multiple settings, e.g., kitti-3d-3class and kitti-3d-car means training on KITTI dataset with 3 classes and single class, respectively. - -## Usage - -### Test time augmentation - -We have supported double-flip and scale augmentation during test time. To use test time augmentation, users need to modify the -`test_pipeline` and `test_cfg` in the config. -For example, we change `centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py` to the following. - -```python -_base_ = './centerpoint_0075voxel_second_secfpn_circlenms' \ - '_4x8_cyclic_20e_nus.py' - -model = dict( - test_cfg=dict( - pts=dict( - use_rotate_nms=True, - max_num=83))) - -point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0] -file_client_args = dict(backend='disk') -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -test_pipeline = [ - dict( - type='LoadPointsFromFile', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=[0.95, 1.0, 1.05], - flip=True, - pcd_horizontal_flip=True, - pcd_vertical_flip=True, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', sync_2d=False), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - val=dict(pipeline=test_pipeline), test=dict(pipeline=test_pipeline)) - -``` - -## Results and models - -### CenterPoint - -| Backbone | Voxel type (voxel size) | Dcn | Circular nms | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :---------------------------------------------------------------------------------: | :---------------------: | :-: | :----------: | :------: | :------------: | :---: | :---: | 
:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py) | voxel (0.1) | ✗ | ✓ | 4.9 | | 56.19 | 64.43 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210815_085857-9ba7f3a5.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210815_085857.log.json) | -| above w/o circle nms | voxel (0.1) | ✗ | ✗ | | | 56.56 | 64.46 | | -| [SECFPN](./centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py) | voxel (0.1) | ✓ | ✓ | 5.2 | | 56.34 | 64.81 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20210814_060754-c9d535d2.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20210814_060754.log.json) | -| above w/o circle nms | voxel (0.1) | ✓ | ✗ | | | 56.60 | 64.90 | | -| [SECFPN](./centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py) | voxel (0.075) | ✗ | ✓ | 7.8 | | 57.34 | 65.23 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210814_113418-76ae0cf0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210814_113418.log.json) | -| above w/o circle nms | voxel (0.075) | ✗ | ✗ | | | 57.63 | 65.39 | | -| [SECFPN](./centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py) | voxel (0.075) | ✓ | ✓ | 8.5 | | 57.27 | 65.58 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20210827_161135-1782af3e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20210827_161135.log.json) | -| above w/o circle nms | voxel (0.075) | ✓ | ✗ | | | 57.43 | 65.63 | | -| above w/ double flip | voxel (0.075) | ✓ | ✗ | | | 59.73 | 67.39 | | -| above w/ scale tta | voxel (0.075) | ✓ | ✗ | | | 60.43 | 67.65 | | -| above w/ circle nms w/o scale tta | voxel (0.075) | ✓ | ✗ | | | 59.52 | 67.24 | | -| [SECFPN](./centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py) | pillar 
(0.2) | ✗ | ✓ | 4.4 | | 49.07 | 59.66 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210816_064624-0f3299c0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210816_064624.log.json) | -| above w/o circle nms | pillar (0.2) | ✗ | ✗ | | | 49.12 | 59.66 | | -| [SECFPN](./centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py) | pillar (0.2) | ✓ | ✗ | 4.6 | | 48.8 | 59.67 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus_20210815_202702-f03ab9e4.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus_20210815_202702.log.json) | -| above w/ circle nms | pillar (0.2) | ✓ | ✓ | | | 48.79 | 59.65 | | - -## Citation - -```latex -@article{yin2021center, - title={Center-based 3D Object Detection and Tracking}, - author={Yin, Tianwei and Zhou, Xingyi and Kr{\"a}henb{\"u}hl, Philipp}, - journal={CVPR}, - year={2021}, -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py deleted file mode 100644 index f17d98ef..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,140 +0,0 @@ -_base_ = ['./centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -voxel_size = [0.075, 0.075, 0.2] -point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -model = dict( - pts_voxel_layer=dict( - voxel_size=voxel_size, point_cloud_range=point_cloud_range), - pts_middle_encoder=dict(sparse_shape=[41, 1440, 1440]), - pts_bbox_head=dict( - bbox_coder=dict( - voxel_size=voxel_size[:2], pc_range=point_cloud_range[:2])), - train_cfg=dict( - pts=dict( - grid_size=[1440, 1440, 40], - voxel_size=voxel_size, - point_cloud_range=point_cloud_range)), - test_cfg=dict( - pts=dict(voxel_size=voxel_size[:2], pc_range=point_cloud_range[:2]))) - -dataset_type = 'NuScenesDataset' -data_root = 'data/nuscenes/' -file_client_args = dict(backend='disk') - -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'nuscenes_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict( - car=5, - truck=5, - bus=5, - trailer=5, - construction_vehicle=5, - traffic_cone=5, - barrier=5, - motorcycle=5, - bicycle=5, - pedestrian=5)), - classes=class_names, - sample_groups=dict( - car=2, - truck=3, - construction_vehicle=7, - bus=4, - trailer=6, - barrier=2, - motorcycle=6, - bicycle=6, - pedestrian=2, - traffic_cone=2), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=[0, 1, 
2, 3, 4], - file_client_args=file_client_args)) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - train=dict(dataset=dict(pipeline=train_pipeline)), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index 1541a102..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = ['./centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict(test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py deleted file mode 100644 index e479650a..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = ['./centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3))) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_flip-tta_20e_nus.py 
b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_flip-tta_20e_nus.py deleted file mode 100644 index 0090b3cb..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_flip-tta_20e_nus.py +++ /dev/null @@ -1,50 +0,0 @@ -_base_ = './centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py' - -point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0] -file_client_args = dict(backend='disk') -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - # Add double-flip augmentation - flip=True, - pcd_horizontal_flip=True, - pcd_vertical_flip=True, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', sync_2d=False), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - val=dict(pipeline=test_pipeline), test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_tta_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_tta_20e_nus.py deleted file mode 100644 index cdbdf060..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_tta_20e_nus.py +++ /dev/null @@ -1,52 +0,0 @@ -_base_ = './centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py' - -model = dict(test_cfg=dict(pts=dict(use_rotate_nms=True, max_num=500))) - -point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0] -file_client_args = dict(backend='disk') -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=[0.95, 1.0, 1.05], - # Add double-flip augmentation - flip=True, - pcd_horizontal_flip=True, - pcd_vertical_flip=True, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', sync_2d=False), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - val=dict(pipeline=test_pipeline), test=dict(pipeline=test_pipeline)) diff --git 
a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index 1e7d14e2..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = ['./centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3)), - test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_flip-tta_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_flip-tta_20e_nus.py deleted file mode 100644 index d3956fc1..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_flip-tta_20e_nus.py +++ /dev/null @@ -1,51 +0,0 @@ -_base_ = './centerpoint_0075voxel_second_secfpn_dcn_' \ - 'circlenms_4x8_cyclic_20e_nus.py' - -point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0] -file_client_args = dict(backend='disk') -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - # Add double-flip augmentation - flip=True, - pcd_horizontal_flip=True, - pcd_vertical_flip=True, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', sync_2d=False), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - val=dict(pipeline=test_pipeline), test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py deleted file mode 100644 index eae92849..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,171 +0,0 @@ -_base_ = [ - '../_base_/datasets/nus-3d.py', - '../_base_/models/centerpoint_01voxel_second_secfpn_nus.py', - '../_base_/schedules/cyclic_20e.py', '../_base_/default_runtime.py' -] - -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-51.2, -51.2, -5.0, 51.2, 51.2, 3.0] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 
'bicycle', 'pedestrian', 'traffic_cone' -] - -model = dict( - pts_voxel_layer=dict(point_cloud_range=point_cloud_range), - pts_bbox_head=dict(bbox_coder=dict(pc_range=point_cloud_range[:2])), - # model training and testing settings - train_cfg=dict(pts=dict(point_cloud_range=point_cloud_range)), - test_cfg=dict(pts=dict(pc_range=point_cloud_range[:2]))) - -dataset_type = 'NuScenesDataset' -data_root = 'data/nuscenes/' -file_client_args = dict(backend='disk') - -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'nuscenes_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict( - car=5, - truck=5, - bus=5, - trailer=5, - construction_vehicle=5, - traffic_cone=5, - barrier=5, - motorcycle=5, - bicycle=5, - pedestrian=5)), - classes=class_names, - sample_groups=dict( - car=2, - truck=3, - construction_vehicle=7, - bus=4, - trailer=6, - barrier=2, - motorcycle=6, - bicycle=6, - pedestrian=2, - traffic_cone=2), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args)) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - train=dict( - type='CBGSDataset', - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - use_valid_flag=True, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='LiDAR')), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -evaluation = dict(interval=20, pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index ae560321..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = ['./centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict(test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py deleted file mode 100644 index 5f31c441..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = ['./centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3))) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index cc5488e0..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = ['./centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3)), - test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py deleted file mode 100644 index cd903492..00000000 --- 
a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,170 +0,0 @@ -_base_ = [ - '../_base_/datasets/nus-3d.py', - '../_base_/models/centerpoint_02pillar_second_secfpn_nus.py', - '../_base_/schedules/cyclic_20e.py', '../_base_/default_runtime.py' -] - -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-51.2, -51.2, -5.0, 51.2, 51.2, 3.0] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -model = dict( - pts_voxel_layer=dict(point_cloud_range=point_cloud_range), - pts_voxel_encoder=dict(point_cloud_range=point_cloud_range), - pts_bbox_head=dict(bbox_coder=dict(pc_range=point_cloud_range[:2])), - # model training and testing settings - train_cfg=dict(pts=dict(point_cloud_range=point_cloud_range)), - test_cfg=dict(pts=dict(pc_range=point_cloud_range[:2]))) - -dataset_type = 'NuScenesDataset' -data_root = 'data/nuscenes/' -file_client_args = dict(backend='disk') - -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'nuscenes_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict( - car=5, - truck=5, - bus=5, - trailer=5, - construction_vehicle=5, - traffic_cone=5, - barrier=5, - motorcycle=5, - bicycle=5, - pedestrian=5)), - classes=class_names, - sample_groups=dict( - car=2, - truck=3, - construction_vehicle=7, - bus=4, - trailer=6, - barrier=2, - motorcycle=6, - bicycle=6, - pedestrian=2, - traffic_cone=2), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args)) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - 
with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - train=dict( - type='CBGSDataset', - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - use_valid_flag=True, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='LiDAR')), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -evaluation = dict(interval=20, pipeline=eval_pipeline) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index 67a1cf6e..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = ['./centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict(test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py deleted file mode 100644 index e6948921..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = ['./centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3))) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index c62488df..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = ['./centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3)), - test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/paconv/pytorch/configs/centerpoint/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/centerpoint/metafile.yml deleted file 
mode 100644 index 1651689e..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/centerpoint/metafile.yml +++ /dev/null @@ -1,95 +0,0 @@ -Collections: - - Name: CenterPoint - Metadata: - Training Data: nuScenes - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - Hard Voxelization - Paper: - URL: https://arxiv.org/abs/2006.11275 - Title: 'Center-based 3D Object Detection and Tracking' - README: configs/centerpoint/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/centerpoint.py#L10 - Version: v0.6.0 - -Models: - - Name: centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 4.9 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 56.19 - NDS: 64.43 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20201001_135205-5db91e00.pth - - - Name: centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 5.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 56.34 - NDS: 64.81 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20201004_075317-26d8176c.pth - - - Name: centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 7.8 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 57.34 - NDS: 65.23 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20200925_230905-358fbe3b.pth - - - Name: centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 8.5 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 57.27 - NDS: 65.58 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20200930_201619-67c8496f.pth - - - Name: centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 4.4 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 49.07 - NDS: 59.66 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus_20201004_170716-a134a233.pth - - - Name: 
centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 4.6 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 48.8 - NDS: 59.67 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus_20200930_103722-3bb135f2.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/dgcnn/README.md b/cv/3d_detection/paconv/pytorch/configs/dgcnn/README.md deleted file mode 100644 index 52554350..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/dgcnn/README.md +++ /dev/null @@ -1,55 +0,0 @@ -# Dynamic Graph CNN for Learning on Point Clouds - -> [Dynamic Graph CNN for Learning on Point Clouds](https://arxiv.org/abs/1801.07829) - - - -## Abstract - -Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. Point clouds inherently lack topological information so designing a model to recover topology can enrich the representation power of point clouds. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv acts on graphs dynamically computed in each layer of the network. It is differentiable and can be plugged into existing architectures. Compared to existing modules operating in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. We show the performance of our model on standard benchmarks including ModelNet40, ShapeNetPart, and S3DIS. - -
- -
- -## Introduction - -We implement DGCNN and provide the results and checkpoints on S3DIS dataset. - -**Notice**: We follow the implementations in the original DGCNN paper and a PyTorch implementation of DGCNN [code](https://github.com/AnTao97/dgcnn.pytorch). - -## Results and models - -### S3DIS - -| Method | Split | Lr schd | Mem (GB) | Inf time (fps) | mIoU (Val set) | Download | -| :-------------------------------------------------------: | :----: | :---------: | :------: | :------------: | :------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_1 | cosine 100e | 13.1 | | 68.33 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area1/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_000734-39658f14.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area1/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_000734.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_2 | cosine 100e | 13.1 | | 40.68 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area2/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_144648-aea9ecb6.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area2/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_144648.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_3 | cosine 100e | 13.1 | | 69.38 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area3/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210801_154629-2ff50ee0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area3/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210801_154629.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_4 | cosine 100e | 13.1 | | 50.07 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area4/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_073551-dffab9cd.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area4/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_073551.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_5 | cosine 100e | 13.1 | | 50.59 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area5/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210730_235824-f277e0c5.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area5/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210730_235824.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_6 | cosine 100e | 13.1 | | 77.94 | 
[model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area6/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_154317-e3511b32.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area6/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_154317.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | 6-fold | | | | 59.43 | | - -**Notes:** - -- We use XYZ+Color+Normalized_XYZ as input in all the experiments on S3DIS datasets. -- `Area_5` Split means training the model on Area_1, 2, 3, 4, 6 and testing on Area_5. -- `6-fold` Split means the overall result of 6 different splits (Area_1, Area_2, Area_3, Area_4, Area_5 and Area_6 Splits). -- Users need to modify `train_area` and `test_area` in the S3DIS dataset's [config](./configs/_base_/datasets/s3dis_seg-3d-13class.py) to set the training and testing areas, respectively. - -## Indeterminism - -Since DGCNN testing adopts sliding patch inference which involves random point sampling, and the test script uses fixed random seeds while the random seeds of validation in training are not fixed, the test results may be slightly different from the results reported above. - -## Citation - -```latex -@article{dgcnn, - title={Dynamic Graph CNN for Learning on Point Clouds}, - author={Wang, Yue and Sun, Yongbin and Liu, Ziwei and Sarma, Sanjay E. and Bronstein, Michael M. and Solomon, Justin M.}, - journal={ACM Transactions on Graphics (TOG)}, - year={2019} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py b/cv/3d_detection/paconv/pytorch/configs/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py deleted file mode 100644 index 6f1b5822..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '../_base_/datasets/s3dis_seg-3d-13class.py', '../_base_/models/dgcnn.py', - '../_base_/schedules/seg_cosine_100e.py', '../_base_/default_runtime.py' -] - -# data settings -data = dict(samples_per_gpu=32) -evaluation = dict(interval=2) - -# model settings -model = dict( - backbone=dict(in_channels=9), # [xyz, rgb, normalized_xyz] - decode_head=dict( - num_classes=13, ignore_index=13, - loss_decode=dict(class_weight=None)), # S3DIS doesn't use class_weight - test_cfg=dict( - num_points=4096, - block_size=1.0, - sample_rate=0.5, - use_normalized_coord=True, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=2) diff --git a/cv/3d_detection/paconv/pytorch/configs/dgcnn/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/dgcnn/metafile.yml deleted file mode 100644 index 87ff9156..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/dgcnn/metafile.yml +++ /dev/null @@ -1,24 +0,0 @@ -Collections: - - Name: DGCNN - Metadata: - Training Techniques: - - SGD - Training Resources: 4x Titan XP GPUs - Architecture: - - DGCNN - Paper: https://arxiv.org/abs/1801.07829 - README: configs/dgcnn/README.md - -Models: - - Name: dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py - In Collection: DGCNN - Config: configs/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py - Metadata: - Training Data: S3DIS - Training Memory (GB): 13.3 - Results: - - Task: 3D Semantic Segmentation - Dataset: S3DIS - Metrics: - mIoU: 50.59 - Weights: 
https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area5/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210730_235824-f277e0c5.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/README.md b/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/README.md deleted file mode 100644 index ab2bbc69..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/README.md +++ /dev/null @@ -1,40 +0,0 @@ -# Dynamic Voxelization - -> [End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds](https://arxiv.org/abs/1910.06528) - - - -## Abstract - -Recent work on 3D object detection advocates point cloud voxelization in birds-eye view, where objects preserve their physical dimensions and are naturally separable. When represented in this view, however, point clouds are sparse and have highly variable point density, which may cause detectors difficulties in detecting distant or small objects (pedestrians, traffic signs, etc.). On the other hand, perspective view provides dense observations, which could allow more favorable feature encoding for such cases. In this paper, we aim to synergize the birds-eye view and the perspective view and propose a novel end-to-end multi-view fusion (MVF) algorithm, which can effectively learn to utilize the complementary information from both. Specifically, we introduce dynamic voxelization, which has four merits compared to existing voxelization methods, i) removing the need of pre-allocating a tensor with fixed size; ii) overcoming the information loss due to stochastic point/voxel dropout; iii) yielding deterministic voxel embeddings and more stable detection outcomes; iv) establishing the bi-directional relationship between points and voxels, which potentially lays a natural foundation for cross-view feature fusion. By employing dynamic voxelization, the proposed feature fusion architecture enables each point to learn to fuse context information from different views. MVF operates on points and can be naturally extended to other approaches using LiDAR point clouds. We evaluate our MVF model extensively on the newly released Waymo Open Dataset and on the KITTI dataset and demonstrate that it significantly improves detection accuracy over the comparable single-view PointPillars baseline. - -
- -
- -## Introduction - -We implement Dynamic Voxelization proposed in and provide its results and models on KITTI dataset. - -## Results and models - -### KITTI - -| Model | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :---------------------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECOND](./dv_second_secfpn_6x8_80e_kitti-3d-car.py) | Car | cyclic 80e | 5.5 | | 78.83 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car/dv_second_secfpn_6x8_80e_kitti-3d-car_20200620_235228-ac2c1c0c.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car/dv_second_secfpn_6x8_80e_kitti-3d-car_20200620_235228.log.json) | -| [SECOND](./dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py) | 3 Class | cosine 80e | 5.5 | | 65.27 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class_20210831_054106-e742d163.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class_20210831_054106.log.json) | -| [PointPillars](./dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py) | Car | cyclic 80e | 4.7 | | 77.76 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20200620_230844-ee7b75c9.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20200620_230844.log.json) | - -## Citation - -```latex -@article{zhou2019endtoend, - title={End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds}, - author={Yin Zhou and Pei Sun and Yu Zhang and Dragomir Anguelov and Jiyang Gao and Tom Ouyang and James Guo and Jiquan Ngiam and Vijay Vasudevan}, - year={2019}, - eprint={1910.06528}, - archivePrefix={arXiv}, - primaryClass={cs.CV} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py deleted file mode 100644 index 68baae91..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py +++ /dev/null @@ -1,19 +0,0 @@ -_base_ = '../pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py' - -voxel_size = [0.16, 0.16, 4] -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] - -model = dict( - type='DynamicVoxelNet', - voxel_layer=dict( - max_num_points=-1, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(-1, -1)), - voxel_encoder=dict( - type='DynamicPillarFeatureNet', - in_channels=4, - 
feat_channels=[64], - with_distance=False, - voxel_size=voxel_size, - point_cloud_range=point_cloud_range)) diff --git a/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py deleted file mode 100644 index 87fefadd..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = [ - '../_base_/models/hv_second_secfpn_kitti.py', - '../_base_/datasets/kitti-3d-3class.py', '../_base_/schedules/cosine.py', - '../_base_/default_runtime.py' -] - -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -voxel_size = [0.05, 0.05, 0.1] - -model = dict( - type='DynamicVoxelNet', - voxel_layer=dict( - _delete_=True, - max_num_points=-1, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(-1, -1)), - voxel_encoder=dict( - _delete_=True, - type='DynamicSimpleVFE', - voxel_size=voxel_size, - point_cloud_range=point_cloud_range)) diff --git a/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car.py deleted file mode 100644 index 9da4ffe5..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = '../second/hv_second_secfpn_6x8_80e_kitti-3d-car.py' - -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -voxel_size = [0.05, 0.05, 0.1] - -model = dict( - type='DynamicVoxelNet', - voxel_layer=dict( - _delete_=True, - max_num_points=-1, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(-1, -1)), - voxel_encoder=dict( - _delete_=True, - type='DynamicSimpleVFE', - voxel_size=voxel_size, - point_cloud_range=point_cloud_range)) diff --git a/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/metafile.yml deleted file mode 100644 index 190c51de..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/dynamic_voxelization/metafile.yml +++ /dev/null @@ -1,53 +0,0 @@ -Collections: - - Name: Dynamic Voxelization - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - Dynamic Voxelization - Paper: - URL: https://arxiv.org/abs/1910.06528 - Title: 'End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds' - README: configs/dynamic_voxelization/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/dynamic_voxelnet.py#L11 - Version: v0.5.0 - -Models: - - Name: dv_second_secfpn_6x8_80e_kitti-3d-car - In Collection: Dynamic Voxelization - Config: configs/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car.py - Metadata: - Training Memory (GB): 5.5 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 78.83 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car/dv_second_secfpn_6x8_80e_kitti-3d-car_20200620_235228-ac2c1c0c.pth - - - Name: dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class - In Collection: Dynamic Voxelization - Config: configs/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py - Metadata: - Training Memory (GB): 5.5 - Results: - - 
Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 65.27 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class_20210831_054106-e742d163.pth - - - Name: dv_pointpillars_secfpn_6x8_160e_kitti-3d-car - In Collection: Dynamic Voxelization - Config: configs/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py - Metadata: - Training Memory (GB): 4.7 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 77.76 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20200620_230844-ee7b75c9.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/fcos3d/README.md b/cv/3d_detection/paconv/pytorch/configs/fcos3d/README.md deleted file mode 100644 index e47a489b..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/fcos3d/README.md +++ /dev/null @@ -1,75 +0,0 @@ -# FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection - -> [FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection](https://arxiv.org/abs/2104.10956) - - - -## Abstract - -Monocular 3D object detection is an important task for autonomous driving considering its advantage of low cost. It is much more challenging than conventional 2D cases due to its inherent ill-posed property, which is mainly reflected in the lack of depth information. Recent progress on 2D detection offers opportunities to better solving this problem. However, it is non-trivial to make a general adapted 2D detector work in this 3D task. In this paper, we study this problem with a practice built on a fully convolutional single-stage detector and propose a general framework FCOS3D. Specifically, we first transform the commonly defined 7-DoF 3D targets to the image domain and decouple them as 2D and 3D attributes. Then the objects are distributed to different feature levels with consideration of their 2D scales and assigned only according to the projected 3D-center for the training procedure. Furthermore, the center-ness is redefined with a 2D Gaussian distribution based on the 3D-center to fit the 3D target formulation. All of these make this framework simple yet effective, getting rid of any 2D detection or 2D-3D correspondence priors. Our solution achieves 1st place out of all the vision-only methods in the nuScenes 3D detection challenge of NeurIPS 2020. - -
- -## Introduction - -FCOS3D is a general anchor-free, one-stage monocular 3D object detector adapted from the original 2D version FCOS. -It serves as a baseline built on top of mmdetection and mmdetection3d for 3D detection based on monocular vision. - -Currently we first support the benchmark on the large-scale nuScenes dataset, which achieved 1st place out of all the vision-only methods in the [nuScenes 3D detecton challenge](https://www.nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Camera) of NeurIPS 2020. - -![demo image](../../resources/browse_dataset_mono.png) - -## Usage - -### Data Preparation - -After supporting FCOS3D and monocular 3D object detection in v0.13.0, the coco-style 2D json info files will include related annotations by default -(see [here](https://github.com/open-mmlab/mmdetection3d/blob/master/tools/data_converter/nuscenes_converter.py#L333) if you would like to change the parameter). -So you can just follow the data preparation steps given in the documentation, then all the needed infos are ready together. - -### Training and Inference - -The way to training and inference a monocular 3D object detector is the same as others in mmdetection and mmdetection3d. You can basically follow the [documentation](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html#train-predefined-models-on-standard-datasets) and change the `config`, `work_dirs`, etc. accordingly. - -### Test time augmentation - -We implement test time augmentation for the dense outputs of detection heads, which is more effective than merging predicted boxes at last. -You can turn on it by setting `flip=True` in the `test_pipeline`. - -### Training with finetune - -Due to the scale and measurements of depth is different from those of other regression targets, we first train the model with depth weight equal to 0.2 for a more stable training procedure. For a stronger detector with better performance, please finetune the model with depth weight changed to 1.0 as shown in the [config](./fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py). Note that the path of `load_from` needs to be changed to yours accordingly. - -### Visualizing prediction results - -We also provide visualization functions to show the monocular 3D detection results. Simply follow the [documentation](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html#test-existing-models-on-standard-datasets) and use the `single-gpu testing` command. You only need to add the `--show` flag and specify `--show-dir` to store the visualization results. 
- -## Results and models - -### NuScenes - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :------------------------------------------------------------------------------------: | :-----: | :------: | :------------: | :--: | :--: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [ResNet101 w/ DCN](./fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py) | 1x | 8.69 | | 29.8 | 37.7 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_20210715_235813-4bed5239.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_20210715_235813.log.json) | -| [above w/ finetune](./fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py) | 1x | 8.69 | | 32.1 | 39.5 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune_20210717_095645-8d806dc2.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune_20210717_095645.log.json) | -| above w/ tta | 1x | 8.69 | | 33.1 | 40.3 | | - -## Citation - -```latex -@inproceedings{wang2021fcos3d, - title={{FCOS3D: Fully} Convolutional One-Stage Monocular 3D Object Detection}, - author={Wang, Tai and Zhu, Xinge and Pang, Jiangmiao and Lin, Dahua}, - booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops}, - year={2021} -} -# For the original 2D version -@inproceedings{tian2019fcos, - title = {{FCOS: Fully} Convolutional One-Stage Object Detection}, - author = {Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong}, - booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, - year = {2019} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py b/cv/3d_detection/paconv/pytorch/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py deleted file mode 100644 index 3b7eb99f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py +++ /dev/null @@ -1,75 +0,0 @@ -_base_ = [ - '../_base_/datasets/nus-mono3d.py', '../_base_/models/fcos3d.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - backbone=dict( - dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, False, True, True))) - -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - 
type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=True, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='Resize', img_scale=(1600, 900), keep_ratio=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'attr_labels', 'gt_bboxes_3d', - 'gt_labels_3d', 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - scale_factor=1.0, - flip=False, - transforms=[ - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict( - lr=0.002, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=1.0 / 3, - step=[8, 11]) -total_epochs = 12 -evaluation = dict(interval=2) diff --git a/cv/3d_detection/paconv/pytorch/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py b/cv/3d_detection/paconv/pytorch/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py deleted file mode 100644 index ade5b4ec..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = './fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py' -# model settings -model = dict( - train_cfg=dict( - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05])) -# optimizer -optimizer = dict(lr=0.001) -load_from = 'work_dirs/fcos3d_nus/latest.pth' diff --git a/cv/3d_detection/paconv/pytorch/configs/fcos3d/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/fcos3d/metafile.yml deleted file mode 100644 index 11de4911..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/fcos3d/metafile.yml +++ /dev/null @@ -1,43 +0,0 @@ -Collections: - - Name: FCOS3D - Metadata: - Training Data: NuScenes - Training Techniques: - - SGD - Training Resources: 8x GeForce RTX 2080 Ti - Architecture: - - FCOSMono3DHead - Paper: - URL: https://arxiv.org/abs/2104.10956 - Title: 'FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection' - README: configs/fcos3d/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/fcos_mono3d.py#L7 - Version: v0.13.0 - -Models: - - Name: fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d - In Collection: FCOS3D - Config: configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py - Metadata: - Training Memory (GB): 8.7 - Results: - - Task: 3D Object Detection - Dataset: NuScenes - Metrics: - mAP: 29.9 - NDS: 37.3 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_20210425_181341-8d5a21fe.pth - - - Name: fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune - 
In Collection: FCOS3D - Config: configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py - Metadata: - Training Memory (GB): 8.7 - Results: - - Task: 3D Object Detection - Dataset: NuScenes - Metrics: - mAP: 32.1 - NDS: 39.3 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune_20210427_091419-35aaaad0.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/free_anchor/README.md b/cv/3d_detection/paconv/pytorch/configs/free_anchor/README.md deleted file mode 100644 index 727a7006..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/free_anchor/README.md +++ /dev/null @@ -1,105 +0,0 @@ -# FreeAnchor for 3D Object Detection - -> [FreeAnchor: Learning to Match Anchors for Visual Object Detection](https://arxiv.org/abs/1909.02466) - - - -## Abstract - -Modern CNN-based object detectors assign anchors for ground-truth objects under the restriction of object-anchor Intersection-over-Unit (IoU). In this study, we propose a learning-to-match approach to break IoU restriction, allowing objects to match anchors in a flexible manner. Our approach, referred to as FreeAnchor, updates hand-crafted anchor assignment to “free" anchor matching by formulating detector training as a maximum likelihood estimation (MLE) procedure. FreeAnchor targets at learning features which best explain a class of objects in terms of both classification and localization. FreeAnchor is implemented by optimizing detection customized likelihood and can be fused with CNN-based detectors in a plug-and-play manner. Experiments on COCO demonstrate that FreeAnchor consistently outperforms the counterparts with significant margins. - -
- -## Introduction - -We implement FreeAnchor in 3D detection systems and provide their first results with PointPillars on nuScenes dataset. -With the implemented `FreeAnchor3DHead`, a PointPillar detector with a big backbone (e.g., RegNet-3.2GF) achieves top performance -on the nuScenes benchmark. - -## Usage - -### Modify config - -As in the [baseline config](hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py), we only need to replace the head of an existing one-stage detector to use FreeAnchor head. -Since the config is inherit from a common detector head, `_delete_=True` is necessary to avoid conflicts. -The hyperparameters are specifically tuned according to the original paper. - -```python -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_lyft.py', - '../_base_/datasets/nus-3d.py', '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py' -] - -model = dict( - pts_bbox_head=dict( - _delete_=True, - type='FreeAnchor3DHead', - num_classes=10, - in_channels=256, - feat_channels=256, - use_direction_classifier=True, - pre_anchor_topk=25, - bbox_thr=0.5, - gamma=2.0, - alpha=0.5, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-50, -50, -1.8, 50, 50, -1.8]], - scales=[1, 2, 4], - sizes=[ - [2.5981, 0.8660, 1.], # 1.5 / sqrt(3) - [1.7321, 0.5774, 1.], # 1 / sqrt(3) - [1., 1., 1.], - [0.4, 0.4, 1], - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=True), - assigner_per_size=False, - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi / 4 - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=9), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=0.8), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg = dict( - pts=dict(code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.25, 0.25]))) -``` - -## Results and models - -### PointPillars - -| Backbone | FreeAnchor | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :-------------------------------------------------------------------------------------------------------: | :--------: | :-----: | :------: | :------------: | :---: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [FPN](../pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py) | ✗ | 2x | 17.1 | | 40.0 | 53.3 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20200620_230405-2fa62f3d.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20200620_230405.log.json) | -| [FPN](./hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py) | ✓ | 2x | 16.3 | | 43.82 | 54.86 | 
[model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210816_163441-ae0897e7.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210816_163441.log.json) | -| [RegNetX-400MF-FPN](../regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py) | ✗ | 2x | 17.3 | | 44.8 | 56.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d_20200620_230239-c694dce7.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d_20200620_230239.log.json) | -| [RegNetX-400MF-FPN](./hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py) | ✓ | 2x | 17.6 | | 48.3 | 58.65 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_213939-a2dd3fff.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_213939.log.json) | -| [RegNetX-1.6GF-FPN](./hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py) | ✓ | 2x | 24.3 | | 52.04 | 61.49 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210828_025608-bfbd506e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210828_025608.log.json) | -| [RegNetX-1.6GF-FPN](./hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py)\* | ✓ | 3x | 24.4 | | 52.69 | 62.45 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210827_184909-14d2dbd1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210827_184909.log.json) | -| [RegNetX-3.2GF-FPN](./hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py) | ✓ | 2x | 29.4 | | 52.4 | 61.94 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_181237-e385c35a.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_181237.log.json) | -| 
[RegNetX-3.2GF-FPN](./hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py)\* | ✓ | 3x | 29.2 | | 54.23 | 63.41 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210828_030816-06708918.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210828_030816.log.json) | - -**Note**: Models noted by `*` means it is trained using stronger augmentation with vertical flip under bird-eye-view, global translation, and larger range of global rotation. - -## Citation - -```latex -@inproceedings{zhang2019freeanchor, - title = {{FreeAnchor}: Learning to Match Anchors for Visual Object Detection}, - author = {Zhang, Xiaosong and Wan, Fang and Liu, Chang and Ji, Rongrong and Ye, Qixiang}, - booktitle = {Neural Information Processing Systems}, - year = {2019} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py deleted file mode 100644 index 7412b930..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py +++ /dev/null @@ -1,47 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py' -] - -model = dict( - pts_bbox_head=dict( - _delete_=True, - type='FreeAnchor3DHead', - num_classes=10, - in_channels=256, - feat_channels=256, - use_direction_classifier=True, - pre_anchor_topk=25, - bbox_thr=0.5, - gamma=2.0, - alpha=0.5, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-50, -50, -1.8, 50, 50, -1.8]], - scales=[1, 2, 4], - sizes=[ - [2.5981, 0.8660, 1.], # 1.5 / sqrt(3) - [1.7321, 0.5774, 1.], # 1 / sqrt(3) - [1., 1., 1.], - [0.4, 0.4, 1], - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=True), - assigner_per_size=False, - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi / 4 - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=9), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=0.8), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - pts=dict(code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.25, 0.25]))) diff --git a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py deleted file mode 100644 index ef740a8a..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py' - -model = dict( - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_1.6gf', - init_cfg=dict( - type='Pretrained', 
checkpoint='open-mmlab://regnetx_1.6gf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[168, 408, 912])) diff --git a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py deleted file mode 100644 index d4e48d36..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py +++ /dev/null @@ -1,70 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py' - -model = dict( - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_1.6gf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_1.6gf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[168, 408, 912])) - -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-50, -50, -5, 50, 50, 3] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/nuscenes/': 's3://nuscenes/nuscenes/', -# 'data/nuscenes/': 's3://nuscenes/nuscenes/' -# })) -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.7854, 0.7854], - scale_ratio_range=[0.95, 1.05], - translation_std=[0.2, 0.2, 0.2]), - dict( - type='RandomFlip3D', - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -data = dict(train=dict(pipeline=train_pipeline)) - -lr_config = dict(step=[28, 34]) -runner = dict(max_epochs=36) -evaluation = dict(interval=36) diff --git a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py deleted file mode 100644 index 13bc0d68..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py' - -model = dict( - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_3.2gf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_3.2gf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[192, 432, 1008])) diff --git a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py deleted file mode 100644 index 6fbce89b..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py +++ /dev/null @@ -1,70 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py' - -model = dict( - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_3.2gf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_3.2gf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[192, 432, 1008])) - -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-50, -50, -5, 50, 50, 3] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 
'traffic_cone', 'barrier' -] -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. -# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/nuscenes/': 's3://nuscenes/nuscenes/', -# 'data/nuscenes/': 's3://nuscenes/nuscenes/' -# })) -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.7854, 0.7854], - scale_ratio_range=[0.9, 1.1], - translation_std=[0.2, 0.2, 0.2]), - dict( - type='RandomFlip3D', - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] - -data = dict(train=dict(pipeline=train_pipeline)) -lr_config = dict(step=[28, 34]) -runner = dict(max_epochs=36) -evaluation = dict(interval=36) diff --git a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py deleted file mode 100644 index 2b5f254b..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py' - -model = dict( - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_400mf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) diff --git a/cv/3d_detection/paconv/pytorch/configs/free_anchor/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/free_anchor/metafile.yml deleted file mode 100644 index 73b55f5f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/free_anchor/metafile.yml +++ /dev/null @@ -1,96 +0,0 @@ -Collections: - - Name: FreeAnchor - Metadata: - Training Data: nuScenes - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - Hard Voxelization - - Free Anchor - Paper: - URL: https://arxiv.org/abs/1909.02466 - Title: 'FreeAnchor: Learning to Match Anchors for Visual Object Detection' - README: configs/free_anchor/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/dense_heads/free_anchor3d_head.py#L13 - Version: v0.5.0 - -Models: - - Name: hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d - In Collection: FreeAnchor - Config: free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py - Metadata: - Training Memory (GB): 16.3 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 43.82 - NDS: 
54.86 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210816_163441-ae0897e7.pth - - - Name: hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d - In Collection: FreeAnchor - Config: configs/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py - Metadata: - Training Memory (GB): 17.6 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 48.3 - NDS: 58.65 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_213939-a2dd3fff.pth - - - Name: hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d - In Collection: FreeAnchor - Config: configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py - Metadata: - Training Memory (GB): 24.3 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 52.04 - NDS: 61.49 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210828_025608-bfbd506e.pth - - - Name: hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d - In Collection: FreeAnchor - Config: configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py - Metadata: - Training Memory (GB): 24.4 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 52.69 - NDS: 62.45 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210827_184909-14d2dbd1.pth - - - Name: hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d - In Collection: FreeAnchor - Config: configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py - Metadata: - Training Memory (GB): 29.4 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 52.4 - NDS: 61.94 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_181237-e385c35a.pth - - - Name: hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d - In Collection: FreeAnchor - Config: configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py - Metadata: - Training Memory (GB): 29.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 54.23 - NDS: 63.41 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210828_030816-06708918.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/README.md b/cv/3d_detection/paconv/pytorch/configs/groupfree3d/README.md deleted file mode 100644 index 5b055e7e..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/README.md +++ /dev/null @@ -1,44 +0,0 @@ -# 
Group-Free 3D Object Detection via Transformers - -> [Group-Free 3D Object Detection via Transformers](https://arxiv.org/abs/2104.00678) - - - -## Abstract - -Recently, directly detecting 3D objects from 3D point clouds has received increasing attention. To extract object representation from an irregular point cloud, existing methods usually take a point grouping step to assign the points to an object candidate so that a PointNet-like network could be used to derive object features from the grouped points. However, the inaccurate point assignments caused by the hand-crafted grouping scheme decrease the performance of 3D object detection. In this paper, we present a simple yet effective method for directly detecting 3D objects from the 3D point cloud. Instead of grouping local points to each object candidate, our method computes the feature of an object from all the points in the point cloud with the help of an attention mechanism in the Transformers, where the contribution of each point is automatically learned in the network training. With an improved attention stacking scheme, our method fuses object features in different stages and generates more accurate object detection results. With few bells and whistles, the proposed method achieves state-of-the-art 3D object detection performance on two widely used benchmarks, ScanNet V2 and SUN RGB-D. - -
- -## Introduction - -We implement Group-Free-3D and provide the result and checkpoints on ScanNet datasets. - -## Results and models - -### ScanNet - -| Method | Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :---------------------------------------------------------------: | :-----------: | :-----: | :------: | :------------: | :-------------: | :-------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [L6, O256](./groupfree3d_8x4_scannet-3d-18class-L6-O256.py) | PointNet++ | 3x | 6.7 | | 66.32 (65.67\*) | 47.82 (47.74\*) | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256/groupfree3d_8x4_scannet-3d-18class-L6-O256_20210702_145347-3499eb55.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256/groupfree3d_8x4_scannet-3d-18class-L6-O256_20210702_145347.log.json) | -| [L12, O256](./groupfree3d_8x4_scannet-3d-18class-L12-O256.py) | PointNet++ | 3x | 9.4 | | 66.57 (66.22\*) | 48.21 (48.95\*) | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256/groupfree3d_8x4_scannet-3d-18class-L12-O256_20210702_150907-1c5551ad.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256/groupfree3d_8x4_scannet-3d-18class-L12-O256_20210702_150907.log.json) | -| [L12, O256](./groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py) | PointNet++w2x | 3x | 13.3 | | 68.20 (67.30\*) | 51.02 (50.44\*) | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256_20210702_200301-944f0ac0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256_20210702_200301.log.json) | -| [L12, O512](./groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py) | PointNet++w2x | 3x | 18.8 | | 68.22 (68.20\*) | 52.61 (51.31\*) | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512_20210702_220204-187b71c7.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512_20210702_220204.log.json) | - -**Notes:** - -- We report the best results (AP@0.50) on validation set during each training. * means the evaluation method in the paper: we train each setting 5 times and test each training trial 5 times, then the average performance of these 25 trials is reported to account for algorithm randomness. -- We use 4 GPUs for training by default as the original code. 
- -## Citation - -```latex -@article{liu2021, - title={Group-Free 3D Object Detection via Transformers}, - author={Liu, Ze and Zhang, Zheng and Cao, Yue and Hu, Han and Tong, Xin}, - journal={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, - year={2021} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256.py b/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256.py deleted file mode 100644 index 987bcec6..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256.py +++ /dev/null @@ -1,199 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', - '../_base_/models/groupfree3d.py', '../_base_/schedules/schedule_3x.py', - '../_base_/default_runtime.py' -] - -# model settings -model = dict( - bbox_head=dict( - num_classes=18, - num_decoder_layers=12, - size_cls_agnostic=False, - bbox_coder=dict( - type='GroupFree3DBBoxCoder', - num_sizes=18, - num_dir_bins=1, - with_rot=False, - size_cls_agnostic=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]]), - sampling_objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=8.0), - objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', beta=0.04, reduction='sum', loss_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=10.0 / 9.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - test_cfg=dict( - sample_mod='kps', - nms_thr=0.25, - score_thr=0.0, - per_class_proposal=True, - prediction_stages='last_three')) - -# dataset settings -dataset_type = 'ScanNetDataset' -data_root = './data/scannet/' -class_names = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - with_mask_3d=True, - with_seg_3d=True), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='PointSegClassMapping', - valid_cat_ids=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39)), - dict(type='PointSample', num_points=50000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - 
flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0]), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'points', 'gt_bboxes_3d', 'gt_labels_3d', 'pts_semantic_mask', - 'pts_instance_mask' - ]) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=50000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -# optimizer -lr = 0.006 -optimizer = dict( - lr=lr, - weight_decay=0.0005, - paramwise_cfg=dict( - custom_keys={ - 'bbox_head.decoder_layers': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_self_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_cross_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_query_proj': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_key_proj': dict(lr_mult=0.1, decay_mult=1.0) - })) - -optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[56, 68]) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -checkpoint_config = dict(interval=1, max_keep_ckpts=10) diff --git a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256.py b/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256.py deleted file mode 100644 index 62821293..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256.py +++ /dev/null @@ -1,198 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', - '../_base_/models/groupfree3d.py', '../_base_/schedules/schedule_3x.py', - '../_base_/default_runtime.py' -] - -# model settings -model = dict( - bbox_head=dict( - num_classes=18, - size_cls_agnostic=False, - bbox_coder=dict( - type='GroupFree3DBBoxCoder', - num_sizes=18, - num_dir_bins=1, - with_rot=False, - size_cls_agnostic=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - 
[0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]]), - sampling_objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=8.0), - objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', beta=0.04, reduction='sum', loss_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=10.0 / 9.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - test_cfg=dict( - sample_mod='kps', - nms_thr=0.25, - score_thr=0.0, - per_class_proposal=True, - prediction_stages='last_three')) - -# dataset settings -dataset_type = 'ScanNetDataset' -data_root = './data/scannet/' -class_names = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - with_mask_3d=True, - with_seg_3d=True), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='PointSegClassMapping', - valid_cat_ids=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39)), - dict(type='PointSample', num_points=50000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0]), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'points', 'gt_bboxes_3d', 'gt_labels_3d', 'pts_semantic_mask', - 'pts_instance_mask' - ]) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=50000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - 
filter_empty_gt=False, - classes=class_names, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -# optimizer -lr = 0.006 -optimizer = dict( - lr=lr, - weight_decay=0.0005, - paramwise_cfg=dict( - custom_keys={ - 'bbox_head.decoder_layers': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_self_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_cross_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_query_proj': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_key_proj': dict(lr_mult=0.1, decay_mult=1.0) - })) - -optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[56, 68]) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -checkpoint_config = dict(interval=1, max_keep_ckpts=10) diff --git a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py b/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py deleted file mode 100644 index 8551b740..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py +++ /dev/null @@ -1,214 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', - '../_base_/models/groupfree3d.py', '../_base_/schedules/schedule_3x.py', - '../_base_/default_runtime.py' -] - -# model settings -model = dict( - backbone=dict( - type='PointNet2SASSG', - in_channels=3, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((128, 128, 256), (256, 256, 512), (256, 256, 512), - (256, 256, 512)), - fp_channels=((512, 512), (512, 288)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True)), - bbox_head=dict( - num_classes=18, - num_decoder_layers=12, - size_cls_agnostic=False, - bbox_coder=dict( - type='GroupFree3DBBoxCoder', - num_sizes=18, - num_dir_bins=1, - with_rot=False, - size_cls_agnostic=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]]), - sampling_objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=8.0), - objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', beta=0.04, 
reduction='sum', loss_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=10.0 / 9.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - test_cfg=dict( - sample_mod='kps', - nms_thr=0.25, - score_thr=0.0, - per_class_proposal=True, - prediction_stages='last_three')) - -# dataset settings -dataset_type = 'ScanNetDataset' -data_root = './data/scannet/' -class_names = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - with_mask_3d=True, - with_seg_3d=True), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='PointSegClassMapping', - valid_cat_ids=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39)), - dict(type='PointSample', num_points=50000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0]), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'points', 'gt_bboxes_3d', 'gt_labels_3d', 'pts_semantic_mask', - 'pts_instance_mask' - ]) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=50000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. 
- box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -# optimizer -lr = 0.006 -optimizer = dict( - lr=lr, - weight_decay=0.0005, - paramwise_cfg=dict( - custom_keys={ - 'bbox_head.decoder_layers': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_self_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_cross_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_query_proj': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_key_proj': dict(lr_mult=0.1, decay_mult=1.0) - })) - -optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[56, 68]) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -checkpoint_config = dict(interval=1, max_keep_ckpts=10) diff --git a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py b/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py deleted file mode 100644 index 199e08bf..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py +++ /dev/null @@ -1,215 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', - '../_base_/models/groupfree3d.py', '../_base_/schedules/schedule_3x.py', - '../_base_/default_runtime.py' -] - -# model settings -model = dict( - backbone=dict( - type='PointNet2SASSG', - in_channels=3, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((128, 128, 256), (256, 256, 512), (256, 256, 512), - (256, 256, 512)), - fp_channels=((512, 512), (512, 288)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True)), - bbox_head=dict( - num_classes=18, - num_decoder_layers=12, - num_proposal=512, - size_cls_agnostic=False, - bbox_coder=dict( - type='GroupFree3DBBoxCoder', - num_sizes=18, - num_dir_bins=1, - with_rot=False, - size_cls_agnostic=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]]), - sampling_objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=8.0), - objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', beta=0.04, reduction='sum', loss_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - 
type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=10.0 / 9.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - test_cfg=dict( - sample_mod='kps', - nms_thr=0.25, - score_thr=0.0, - per_class_proposal=True, - prediction_stages='last_three')) - -# dataset settings -dataset_type = 'ScanNetDataset' -data_root = './data/scannet/' -class_names = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - with_mask_3d=True, - with_seg_3d=True), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='PointSegClassMapping', - valid_cat_ids=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39)), - dict(type='PointSample', num_points=50000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0]), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'points', 'gt_bboxes_3d', 'gt_labels_3d', 'pts_semantic_mask', - 'pts_instance_mask' - ]) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=50000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. 
- box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -# optimizer -lr = 0.006 -optimizer = dict( - lr=lr, - weight_decay=0.0005, - paramwise_cfg=dict( - custom_keys={ - 'bbox_head.decoder_layers': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_self_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_cross_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_query_proj': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_key_proj': dict(lr_mult=0.1, decay_mult=1.0) - })) - -optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[56, 68]) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -checkpoint_config = dict(interval=1, max_keep_ckpts=10) diff --git a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/groupfree3d/metafile.yml deleted file mode 100644 index ff0b63cc..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/groupfree3d/metafile.yml +++ /dev/null @@ -1,72 +0,0 @@ -Collections: - - Name: Group-Free-3D - Metadata: - Training Techniques: - - AdamW - Training Resources: 4x V100 GPUs - Architecture: - - PointNet++ - Paper: - URL: https://arxiv.org/abs/2104.00678 - Title: 'Group-Free 3D Object Detection via Transformers' - README: configs/groupfree3d/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/groupfree3dnet.py#L10 - Version: v0.15.0 - -Models: - - Name: groupfree3d_8x4_scannet-3d-18class-L6-O256.py - In Collection: Group-Free-3D - Config: configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 6.7 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 66.32 - AP@0.5: 47.82 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256/groupfree3d_8x4_scannet-3d-18class-L6-O256_20210702_145347-3499eb55.pth - - - Name: groupfree3d_8x4_scannet-3d-18class-L12-O256.py - In Collection: Group-Free-3D - Config: configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 9.4 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 66.57 - AP@0.5: 48.21 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256/groupfree3d_8x4_scannet-3d-18class-L12-O256_20210702_150907-1c5551ad.pth - - - Name: groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py - In Collection: Group-Free-3D - Config: configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 13.3 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 68.20 - AP@0.5: 51.02 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256_20210702_200301-944f0ac0.pth - - - Name: 
groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py - In Collection: Group-Free-3D - Config: configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 18.8 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 68.22 - AP@0.5: 52.61 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512_20210702_220204-187b71c7.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/h3dnet/README.md b/cv/3d_detection/paconv/pytorch/configs/h3dnet/README.md deleted file mode 100644 index 60cc30f3..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/h3dnet/README.md +++ /dev/null @@ -1,44 +0,0 @@ -# H3DNet: 3D Object Detection Using Hybrid Geometric Primitives - -> [H3DNet: 3D Object Detection Using Hybrid Geometric Primitives](https://arxiv.org/abs/2006.05682) - - - -## Abstract - -We introduce H3DNet, which takes a colorless 3D point cloud as input and outputs a collection of oriented object bounding boxes (or BB) and their semantic labels. The critical idea of H3DNet is to predict a hybrid set of geometric primitives, i.e., BB centers, BB face centers, and BB edge centers. We show how to convert the predicted geometric primitives into object proposals by defining a distance function between an object and the geometric primitives. This distance function enables continuous optimization of object proposals, and its local minimums provide high-fidelity object proposals. H3DNet then utilizes a matching and refinement module to classify object proposals into detected objects and fine-tune the geometric parameters of the detected objects. The hybrid set of geometric primitives not only provides more accurate signals for object detection than using a single type of geometric primitives, but it also provides an overcomplete set of constraints on the resulting 3D layout. Therefore, H3DNet can tolerate outliers in predicted geometric primitives. Our model achieves state-of-the-art 3D detection results on two large datasets with real 3D scans, ScanNet and SUN RGB-D. - -
- -## Introduction - -We implement H3DNet and provide the result and checkpoints on ScanNet datasets. - -## Results and models - -### ScanNet - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :-------------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [MultiBackbone](./h3dnet_3x8_scannet-3d-18class.py) | 3x | 7.9 | | 66.07 | 47.68 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/h3dnet/h3dnet_scannet-3d-18class/h3dnet_3x8_scannet-3d-18class_20210824_003149-414bd304.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/h3dnet/h3dnet_scannet-3d-18class/h3dnet_3x8_scannet-3d-18class_20210824_003149.log.json) | - -**Notice**: If your current mmdetection3d version >= 0.6.0, and you are using the checkpoints downloaded from the above links or using checkpoints trained with mmdetection3d version \< 0.6.0, the checkpoints have to be first converted via [tools/model_converters/convert_h3dnet_checkpoints.py](../../tools/model_converters/convert_h3dnet_checkpoints.py): - -``` -python ./tools/model_converters/convert_h3dnet_checkpoints.py ${ORIGINAL_CHECKPOINT_PATH} --out=${NEW_CHECKPOINT_PATH} -``` - -Then you can use the converted checkpoints following [getting_started.md](../../docs/en/getting_started.md). - -## Citation - -```latex -@inproceedings{zhang2020h3dnet, - author = {Zhang, Zaiwei and Sun, Bo and Yang, Haitao and Huang, Qixing}, - title = {H3DNet: 3D Object Detection Using Hybrid Geometric Primitives}, - booktitle = {Proceedings of the European Conference on Computer Vision}, - year = {2020} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/h3dnet/h3dnet_3x8_scannet-3d-18class.py b/cv/3d_detection/paconv/pytorch/configs/h3dnet/h3dnet_3x8_scannet-3d-18class.py deleted file mode 100644 index e6534a4b..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/h3dnet/h3dnet_3x8_scannet-3d-18class.py +++ /dev/null @@ -1,64 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', '../_base_/models/h3dnet.py', - '../_base_/schedules/schedule_3x.py', '../_base_/default_runtime.py' -] - -# model settings -model = dict( - rpn_head=dict( - num_classes=18, - bbox_coder=dict( - type='PartialBinBasedBBoxCoder', - num_sizes=18, - num_dir_bins=24, - with_rot=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]])), - roi_head=dict( - bbox_head=dict( - num_classes=18, - bbox_coder=dict( - type='PartialBinBasedBBoxCoder', - num_sizes=18, - num_dir_bins=24, - with_rot=False, - mean_sizes=[[0.76966727, 0.8116021, 
0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]])))) - -data = dict(samples_per_gpu=3, workers_per_gpu=2) - -# yapf:disable -log_config = dict(interval=30) -# yapf:enable diff --git a/cv/3d_detection/paconv/pytorch/configs/h3dnet/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/h3dnet/metafile.yml deleted file mode 100644 index 6d731d6d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/h3dnet/metafile.yml +++ /dev/null @@ -1,29 +0,0 @@ -Collections: - - Name: H3DNet - Metadata: - Training Data: ScanNet - Training Techniques: - - AdamW - Training Resources: 8x GeForce GTX 1080 Ti - Architecture: - Paper: - URL: https://arxiv.org/abs/2006.05682 - Title: 'H3DNet: 3D Object Detection Using Hybrid Geometric Primitives' - README: configs/h3dnet/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/h3dnet.py#L10 - Version: v0.6.0 - -Models: - - Name: h3dnet_3x8_scannet-3d-18class - In Collection: H3DNet - Config: configs/h3dnet/h3dnet_3x8_scannet-3d-18class.py - Metadata: - Training Memory (GB): 7.9 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 66.07 - AP@0.5: 47.68 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/h3dnet/h3dnet_scannet-3d-18class/h3dnet_3x8_scannet-3d-18class_20210824_003149-414bd304.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/imvotenet/README.md b/cv/3d_detection/paconv/pytorch/configs/imvotenet/README.md deleted file mode 100644 index a491b9d8..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/imvotenet/README.md +++ /dev/null @@ -1,43 +0,0 @@ -# ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes - -> [ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes](https://arxiv.org/abs/2001.10692) - - - -## Abstract - -3D object detection has seen quick progress thanks to advances in deep learning on point clouds. A few recent works have even shown state-of-the-art performance with just point clouds input (e.g. VOTENET). However, point cloud data have inherent limitations. They are sparse, lack color information and often suffer from sensor noise. Images, on the other hand, have high resolution and rich texture. Thus they can complement the 3D geometry provided by point clouds. Yet how to effectively use image information to assist point cloud based detection is still an open question. In this work, we build on top of VOTENET and propose a 3D detection architecture called IMVOTENET specialized for RGB-D scenes. IMVOTENET is based on fusing 2D votes in images and 3D votes in point clouds. Compared to prior work on multi-modal detection, we explicitly extract both geometric and semantic features from the 2D images. We leverage camera parameters to lift these features to 3D. To improve the synergy of 2D-3D feature fusion, we also propose a multi-tower training scheme. 
We validate our model on the challenging SUN RGB-D dataset, advancing state-of-the-art results by 5.7 mAP. We also provide rich ablation studies to analyze the contribution of each design choice. - -
- -## Introduction - -We implement ImVoteNet and provide the result and checkpoints on SUNRGBD. - -## Results and models - -### SUNRGBD-2D (Stage 1, image branch pre-train) - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :---------------------------------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++](./imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py) | | 2.1 | | | 62.70 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class_20210819_225618-62eba6ce.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class_20210819_225618.json) | - -### SUNRGBD-3D (Stage 2) - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :---------------------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++](./imvotenet_stage2_16x8_sunrgbd-3d-10class.py) | 3x | 9.4 | | 64.55 | | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class/imvotenet_stage2_16x8_sunrgbd-3d-10class_20210819_192851-1bcd1b97.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class/imvotenet_stage2_16x8_sunrgbd-3d-10class_20210819_192851.log.json) | - -## Citation - -```latex -@inproceedings{qi2020imvotenet, - title={Imvotenet: Boosting 3D object detection in point clouds with image votes}, - author={Qi, Charles R and Chen, Xinlei and Litany, Or and Guibas, Leonidas J}, - booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition}, - pages={4404--4413}, - year={2020} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py b/cv/3d_detection/paconv/pytorch/configs/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py deleted file mode 100644 index e999c650..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py +++ /dev/null @@ -1,58 +0,0 @@ -_base_ = [ - '../_base_/datasets/sunrgbd-3d-10class.py', '../_base_/default_runtime.py', - '../_base_/models/imvotenet_image.py' -] - -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - 
type='Resize', - img_scale=[(1333, 480), (1333, 504), (1333, 528), (1333, 552), - (1333, 576), (1333, 600)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 600), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict(times=1, dataset=dict(pipeline=train_pipeline)), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) - -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[6]) -runner = dict(type='EpochBasedRunner', max_epochs=8) - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco_bbox_mAP-0.408__segm_mAP-0.37_20200504_163245-42aa3d00.pth' # noqa diff --git a/cv/3d_detection/paconv/pytorch/configs/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class.py b/cv/3d_detection/paconv/pytorch/configs/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class.py deleted file mode 100644 index ef1e5539..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class.py +++ /dev/null @@ -1,260 +0,0 @@ -_base_ = [ - '../_base_/datasets/sunrgbd-3d-10class.py', - '../_base_/schedules/schedule_3x.py', '../_base_/default_runtime.py', - '../_base_/models/imvotenet_image.py' -] - -class_names = ('bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser', - 'night_stand', 'bookshelf', 'bathtub') - -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) - -model = dict( - pts_backbone=dict( - type='PointNet2SASSG', - in_channels=4, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((64, 64, 128), (128, 128, 256), (128, 128, 256), - (128, 128, 256)), - fp_channels=((256, 256), (256, 256)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True)), - pts_bbox_heads=dict( - common=dict( - type='VoteHead', - num_classes=10, - bbox_coder=dict( - type='PartialBinBasedBBoxCoder', - num_sizes=10, - num_dir_bins=12, - with_rot=True, - mean_sizes=[[2.114256, 1.620300, 0.927272], - [0.791118, 1.279516, 0.718182], - [0.923508, 1.867419, 0.845495], - [0.591958, 0.552978, 0.827272], - [0.699104, 0.454178, 0.75625], - [0.69519, 1.346299, 0.736364], - [0.528526, 1.002642, 1.172878], - [0.500618, 0.632163, 0.683424], - [0.404671, 1.071108, 1.688889], - [0.76584, 1.398258, 0.472728]]), - pred_layer_cfg=dict( - in_channels=128, shared_conv_channels=(128, 128), bias=True), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.2, 0.8], - reduction='sum', - loss_weight=5.0), - center_loss=dict( - type='ChamferDistance', - mode='l2', - 
reduction='sum', - loss_src_weight=10.0, - loss_dst_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0 / 3.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - joint=dict( - vote_module_cfg=dict( - in_channels=512, - vote_per_seed=1, - gt_per_seed=3, - conv_channels=(512, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=256, - radius=0.3, - num_sample=16, - mlp_channels=[512, 128, 128, 128], - use_xyz=True, - normalize_xyz=True)), - pts=dict( - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=3, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=256, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True)), - img=dict( - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=3, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=256, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True)), - loss_weights=[0.4, 0.3, 0.3]), - img_mlp=dict( - in_channel=18, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU')), - fusion_layer=dict( - type='VoteFusion', - num_classes=len(class_names), - max_imvote_per_pixel=3), - num_sampled_seed=1024, - freeze_img_branch=True, - - # model training and testing settings - train_cfg=dict( - pts=dict( - pos_distance_thr=0.3, neg_distance_thr=0.6, sample_mod='vote')), - test_cfg=dict( - img_rcnn=dict(score_thr=0.1), - pts=dict( - sample_mod='seed', - nms_thr=0.25, - score_thr=0.05, - per_class_proposal=True))) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations3D'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 600), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.0), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - ), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.523599, 0.523599], - scale_ratio_range=[0.85, 1.15], - shift_height=True), - dict(type='PointSample', num_points=20000), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'points', 'gt_bboxes_3d', - 'gt_labels_3d' - ]) -] - -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadPointsFromFile', - 
coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 600), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.0), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - ), - dict(type='PointSample', num_points=20000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img', 'points']) - ]), -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img', 'points']) -] - -data = dict( - train=dict(dataset=dict(pipeline=train_pipeline)), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -evaluation = dict(pipeline=eval_pipeline) - -# may also use your own pre-trained image branch -load_from = 'https://download.openmmlab.com/mmdetection3d/v0.1.0_models/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class_20210323_173222-cad62aeb.pth' # noqa diff --git a/cv/3d_detection/paconv/pytorch/configs/imvotenet/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/imvotenet/metafile.yml deleted file mode 100644 index 28051c43..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/imvotenet/metafile.yml +++ /dev/null @@ -1,43 +0,0 @@ -Collections: - - Name: ImVoteNet - Metadata: - Training Data: SUNRGBD - Training Techniques: - - AdamW - Training Resources: 8x TITAN Xp - Architecture: - - Faster R-CNN - - VoteNet - - Feature Pyramid Network - Paper: - URL: https://arxiv.org/abs/2001.10692 - Title: 'ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes' - README: configs/imvotenet/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/imvotenet.py#L56 - Version: v0.12.0 - -Models: - - Name: imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class - In Collection: ImVoteNet - Config: configs/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py - Metadata: - Training Memory (GB): 2.1 - Results: - - Task: Object Detection - Dataset: SUNRGBD-2D - Metrics: - AP@0.5: 62.70 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class_20210819_225618-62eba6ce.pth - - - Name: imvotenet_stage2_16x8_sunrgbd-3d-10class - In Collection: ImVoteNet - Config: configs/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class.py - Metadata: - Training Memory (GB): 9.4 - Results: - - Task: 3D Object Detection - Dataset: SUNRGBD-3D - Metrics: - AP@0.25: 64.55 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class/imvotenet_stage2_16x8_sunrgbd-3d-10class_20210819_192851-1bcd1b97.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/imvoxelnet/README.md 
b/cv/3d_detection/paconv/pytorch/configs/imvoxelnet/README.md deleted file mode 100644 index faaddf29..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/imvoxelnet/README.md +++ /dev/null @@ -1,38 +0,0 @@ -# ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection - -> [ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection](https://arxiv.org/abs/2106.01178) - - - -## Abstract - -In this paper, we introduce the task of multi-view RGB-based 3D object detection as an end-to-end optimization problem. To address this problem, we propose ImVoxelNet, a novel fully convolutional method of 3D object detection based on posed monocular or multi-view RGB images. The number of monocular images in each multiview input can variate during training and inference; actually, this number might be unique for each multi-view input. ImVoxelNet successfully handles both indoor and outdoor scenes, which makes it general-purpose. Specifically, it achieves state-of-the-art results in car detection on KITTI (monocular) and nuScenes (multi-view) benchmarks among all methods that accept RGB images. Moreover, it surpasses existing RGB-based 3D object detection methods on the SUN RGB-D dataset. On ScanNet, ImVoxelNet sets a new benchmark for multi-view 3D object detection. - -
- -## Introduction - -We implement a monocular 3D detector ImVoxelNet and provide its results and checkpoints on KITTI dataset. -Results for SUN RGB-D, ScanNet and nuScenes are currently available in ImVoxelNet authors -[repo](https://github.com/saic-vul/imvoxelnet) (based on mmdetection3d). - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :---------------------------------------: | :---: | :-----: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [ResNet-50](./imvoxelnet_kitti-3d-car.py) | Car | 3x | | | 17.26 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvoxelnet/imvoxelnet_4x8_kitti-3d-car/imvoxelnet_4x8_kitti-3d-car_20210830_003014-3d0ffdf4.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvoxelnet/imvoxelnet_4x8_kitti-3d-car/imvoxelnet_4x8_kitti-3d-car_20210830_003014.log.json) | - -## Citation - -```latex -@article{rukhovich2021imvoxelnet, - title={ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection}, - author={Danila Rukhovich, Anna Vorontsova, Anton Konushin}, - journal={arXiv preprint arXiv:2106.01178}, - year={2021} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/imvoxelnet/imvoxelnet_4x8_kitti-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/imvoxelnet/imvoxelnet_4x8_kitti-3d-car.py deleted file mode 100644 index 89bf2426..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/imvoxelnet/imvoxelnet_4x8_kitti-3d-car.py +++ /dev/null @@ -1,162 +0,0 @@ -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-model = dict( - type='ImVoxelNet', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'), - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=64, - num_outs=4), - neck_3d=dict(type='OutdoorImVoxelNeck', in_channels=64, out_channels=256), - bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - in_channels=256, - feat_channels=256, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-0.16, -39.68, -1.78, 68.96, 39.68, -1.78]], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=True), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - n_voxels=[216, 248, 12], - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-0.16, -39.68, -3.08, 68.96, 39.68, 0.76]], - rotations=[.0]), - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) - -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -input_modality = dict(use_lidar=False, use_camera=True) -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -train_pipeline = [ - dict(type='LoadAnnotations3D'), - dict(type='LoadImageFromFile'), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='Resize', - img_scale=[(1173, 352), (1387, 416)], - keep_ratio=True, - multiscale_mode='range'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['img', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='Resize', img_scale=(1280, 384), keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=3, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - 
ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True)) - -optimizer = dict( - type='AdamW', - lr=0.0001, - weight_decay=0.0001, - paramwise_cfg=dict( - custom_keys={'backbone': dict(lr_mult=0.1, decay_mult=1.0)})) -optimizer_config = dict(grad_clip=dict(max_norm=35., norm_type=2)) -lr_config = dict(policy='step', step=[8, 11]) -total_epochs = 12 - -checkpoint_config = dict(interval=1, max_keep_ckpts=1) -log_config = dict( - interval=1, - hooks=[dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook')]) -evaluation = dict(interval=1) -dist_params = dict(backend='nccl') -find_unused_parameters = True # only 1 of 4 FPN outputs is used -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/cv/3d_detection/paconv/pytorch/configs/imvoxelnet/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/imvoxelnet/metafile.yml deleted file mode 100644 index 0dea4866..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/imvoxelnet/metafile.yml +++ /dev/null @@ -1,29 +0,0 @@ -Collections: - - Name: ImVoxelNet - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 8x Tesla P40 - Architecture: - - Anchor3DHead - Paper: - URL: https://arxiv.org/abs/2106.01178 - Title: 'ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection' - README: configs/imvoxelnet/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/imvoxelnet.py#L11 - Version: v0.15.0 - -Models: - - Name: imvoxelnet_kitti-3d-car - In Collection: ImVoxelNet - Config: configs/imvoxelnet/imvoxelnet_kitti-3d-car.py - Metadata: - Training Memory (GB): 15.0 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 17.26 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvoxelnet/imvoxelnet_4x8_kitti-3d-car/imvoxelnet_4x8_kitti-3d-car_20210830_003014-3d0ffdf4.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/monoflex/README.md b/cv/3d_detection/paconv/pytorch/configs/monoflex/README.md deleted file mode 100644 index 0f402be2..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/monoflex/README.md +++ /dev/null @@ -1,48 +0,0 @@ -# Objects are Different: Flexible Monocular 3D Object Detection - -> [Objects are Different: Flexible Monocular 3D Object Detection](https://arxiv.org/abs/2104.02323) - - - -## Abstract - -The precise localization of 3D objects from a single image without depth information is a highly challenging problem. Most existing methods adopt the same approach for all objects regardless of their diverse distributions, leading to limited performance for truncated objects. In this paper, we propose a flexible framework for monocular 3D object detection which explicitly decouples the truncated objects and adaptively combines multiple approaches for object depth estimation. Specifically, we decouple the edge of the feature map for predicting long-tail truncated objects so that the optimization of normal objects is not influenced. Furthermore, we formulate the object depth estimation as an uncertainty-guided ensemble of directly regressed object depth and solved depths from different groups of keypoints. 
Experiments demonstrate that our method outperforms the state-of-the-art method by relatively 27% for the moderate level and 30% for the hard level in the test set of KITTI benchmark while maintaining real-time efficiency. - -
- -## Introduction - -We implement MonoFlex and provide the results and checkpoints on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :---------------------------------------------------------------------: | :-----: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [DLA34](./monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d.py) | 6x | 9.64 | | 21.86 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/monoflex/monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d_20211228_027553-d46d9bb0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/monoflex/monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d_20211228_027553.log.json) | - -Note: mAP represents Car moderate 3D strict AP11 results. -Detailed performance on KITTI 3D detection (3D/BEV) is as follows, evaluated by AP11 and AP40 metric: - -| | Easy | Moderate | Hard | -| ---------- | :-----------: | :-----------: | :-----------: | -| Car (AP11) | 28.02 / 36.11 | 21.86 / 29.46 | 19.01 / 24.83 | -| Car (AP40) | 23.22 / 32.74 | 17.18 / 24.02 | 15.13 / 20.67 | - -Note: mAP represents Car moderate 3D strict AP11 / AP40 results. Because of the limited data for pedestrians and cyclists, the detection performance for these two classes is usually unstable. Therefore, we only list car detection results here. In addition, the AP11 result may fluctuate in a larger range (~1 AP), so AP40 is a more recommended metric for reference due to its much better stability. 
- -## Citation - -```latex -@InProceedings{MonoFlex, - author = {Zhang, Yunpeng and Lu, Jiwen and Zhou, Jie}, - title = {Objects Are Different: Flexible Monocular 3D Object Detection}, - booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, - month = {June}, - year = {2021}, - pages = {3289-3298} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/monoflex/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/monoflex/metafile.yml deleted file mode 100644 index c64dd6ff..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/monoflex/metafile.yml +++ /dev/null @@ -1,30 +0,0 @@ -Collections: - - Name: MonoFlex - Metadata: - Training Data: KITTI - Training Techniques: - - Adam - Training Resources: 2x V100 GPUS - Architecture: - - MonoFlexHead - - DLA - Paper: - URL: https://arxiv.org/abs/2104.02323 - Title: 'Objects are Different: Flexible Monocular 3D Object Detection' - README: configs/monoflex/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/v1.0.0.dev0/mmdet3d/models/detectors/monoflex.py#L7 - Version: v1.0.0 - -Models: - - Name: monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d - In Collection: MonoFlex - Config: configs/monoflex/monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d.py - Metadata: - Training Memory (GB): 9.64 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 21.98 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/monoflex/monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d_20211228_027553-d46d9bb0.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/mvxnet/README.md b/cv/3d_detection/paconv/pytorch/configs/mvxnet/README.md deleted file mode 100644 index d786efa7..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/mvxnet/README.md +++ /dev/null @@ -1,38 +0,0 @@ -# MVX-Net: Multimodal VoxelNet for 3D Object Detection - -> [MVX-Net: Multimodal VoxelNet for 3D Object Detection](https://arxiv.org/abs/1904.01649) - - - -## Abstract - -Many recent works on 3D object detection have focused on designing neural network architectures that can consume point cloud data. While these approaches demonstrate encouraging performance, they are typically based on a single modality and are unable to leverage information from other modalities, such as a camera. Although a few approaches fuse data from different modalities, these methods either use a complicated pipeline to process the modalities sequentially, or perform late-fusion and are unable to learn interaction between different modalities at early stages. In this work, we present PointFusion and VoxelFusion: two simple yet effective early-fusion approaches to combine the RGB and point cloud modalities, by leveraging the recently introduced VoxelNet architecture. Evaluation on the KITTI dataset demonstrates significant improvements in performance over approaches which only use point cloud data. Furthermore, the proposed method provides results competitive with the state-of-the-art multimodal algorithms, achieving top-2 ranking in five of the six bird's eye view and 3D detection categories on the KITTI benchmark, by using a simple single stage network. - -
- -## Introduction - -We implement MVX-Net and provide its results and models on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :-------------------------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py) | 3 Class | cosine 80e | 6.7 | | 63.22 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class_20210831_060805-83442923.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class_20210831_060805.log.json) | - -## Citation - -```latex -@inproceedings{sindagi2019mvx, - title={MVX-Net: Multimodal voxelnet for 3D object detection}, - author={Sindagi, Vishwanath A and Zhou, Yin and Tuzel, Oncel}, - booktitle={2019 International Conference on Robotics and Automation (ICRA)}, - pages={7276--7282}, - year={2019}, - organization={IEEE} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py deleted file mode 100644 index e9f592f5..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py +++ /dev/null @@ -1,251 +0,0 @@ -_base_ = ['../_base_/schedules/cosine.py', '../_base_/default_runtime.py'] - -# model settings -voxel_size = [0.05, 0.05, 0.1] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] - -model = dict( - type='DynamicMVXFasterRCNN', - img_backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe'), - img_neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - pts_voxel_layer=dict( - max_num_points=-1, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(-1, -1), - ), - pts_voxel_encoder=dict( - type='DynamicVFE', - in_channels=4, - feat_channels=[64, 64], - with_distance=False, - voxel_size=voxel_size, - with_cluster_center=True, - with_voxel_center=True, - point_cloud_range=point_cloud_range, - fusion_layer=dict( - type='PointFusion', - img_channels=256, - pts_channels=64, - mid_channels=128, - out_channels=128, - img_levels=[0, 1, 2, 3, 4], - align_corners=False, - activate_out=True, - fuse_out=False)), - pts_middle_encoder=dict( - type='SparseEncoder', - in_channels=128, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - pts_backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - pts_neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 
256]), - pts_bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[ - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78], - ], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - assigner_per_size=True, - diff_rad_by_sin=True, - assign_per_class=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - pts=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False)), - test_cfg=dict( - pts=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50))) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -input_modality = dict(use_lidar=True, use_camera=True) -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='Resize', - img_scale=[(640, 192), (2560, 768)], - multiscale_mode='range', - keep_ratio=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05], - translation_std=[0.2, 0.2, 0.2]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=['points', 'img', 'gt_bboxes_3d', 'gt_labels_3d']), -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1280, 384), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict(type='Resize', multiscale_mode='value', keep_ratio=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - 
with_label=False), - dict(type='Collect3D', keys=['points', 'img']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadImageFromFile'), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points', 'img']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - box_type_3d='LiDAR')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) - -# Training settings -optimizer = dict(weight_decay=0.01) -# max_norm=10 is better for SECOND -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) - -evaluation = dict(interval=1, pipeline=eval_pipeline) - -# You may need to download the model first is the network is unstable -load_from = 'https://download.openmmlab.com/mmdetection3d/pretrain_models/mvx_faster_rcnn_detectron2-caffe_20e_coco-pretrain_gt-sample_kitti-3-class_moderate-79.3_20200207-a4a6a3c7.pth' # noqa diff --git a/cv/3d_detection/paconv/pytorch/configs/mvxnet/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/mvxnet/metafile.yml deleted file mode 100644 index 4ce10b71..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/mvxnet/metafile.yml +++ /dev/null @@ -1,30 +0,0 @@ -Collections: - - Name: MVX-Net - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - Feature Pyramid Network - - Dynamic Voxelization - Paper: - URL: https://arxiv.org/abs/1904.01649 - Title: 'MVX-Net: Multimodal VoxelNet for 3D Object Detection' - README: configs/mvxnet/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/mvx_two_stage.py#L20 - Version: v0.5.0 - -Models: - - Name: dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class - In Collection: MVX-Net - Config: configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py - Metadata: - Training Memory (GB): 6.7 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 63.22 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class_20210831_060805-83442923.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/README.md b/cv/3d_detection/paconv/pytorch/configs/nuimages/README.md deleted file mode 100644 index 91062296..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/README.md +++ /dev/null @@ -1,59 +0,0 @@ -# NuImages Results - - - -## Introduction - -We support and provide some 
baseline results on [nuImages dataset](https://www.nuscenes.org/nuimages). -We follow the class mapping in nuScenes dataset, which maps the original categories into 10 foreground categories. -The convert script can be found [here](https://github.com/open-mmlab/mmdetection3d/blob/master/tools/data_converter/nuimage_converter.py). -The baseline results include instance segmentation models, e.g., Mask R-CNN, Cascade Mask R-CNN, and HTC. -We will support panoptic segmentation models in the future. - -![demo image](../../resources/nuimages_demo.gif) - -The dataset converted by the script of v0.6.0 only supports instance segmentation. Since v0.7.0, we also support to produce semantic segmentation mask of each image; thus, we can train HTC or semantic segmentation models using the dataset. To convert the nuImages dataset into COCO format, please use the command below: - -```shell -python -u tools/data_converter/nuimage_converter.py --data-root ${DATA_ROOT} --version ${VERSIONS} \ - --out-dir ${OUT_DIR} --nproc ${NUM_WORKERS} --extra-tag ${TAG} -``` - -- `--data-root`: the root of the dataset, defaults to `./data/nuimages`. -- `--version`: the version of the dataset, defaults to `v1.0-mini`. To get the full dataset, please use `--version v1.0-train v1.0-val v1.0-mini` -- `--out-dir`: the output directory of annotations and semantic masks, defaults to `./data/nuimages/annotations/`. -- `--nproc`: number of workers for data preparation, defaults to `4`. Larger number could reduce the preparation time as images are processed in parallel. -- `--extra-tag`: extra tag of the annotations, defaults to `nuimages`. This can be used to separate different annotations processed in different time for study. - -## Results and models - -### Instance Segmentation - -We report Mask R-CNN and Cascade Mask R-CNN results on nuimages. 
- -| Method | Backbone | Pretraining | Lr schd | Mem (GB) | Box AP | Mask AP | Download | -| :----------------: | :-----------------------------------------------------------------------------------: | :---------: | :-----: | :------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-----: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| Mask R-CNN | [R-50](./mask_rcnn_r50_fpn_1x_nuim.py) | IN | 1x | 7.4 | 47.8 | 38.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_1x_nuim/mask_rcnn_r50_fpn_1x_nuim_20201008_195238-e99f5182.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_1x_nuim/mask_rcnn_r50_fpn_1x_nuim_20201008_195238.log.json) | -| Mask R-CNN | [R-50](./mask_rcnn_r50_fpn_coco-2x_1x_nuim.py) | IN+COCO-2x | 1x | 7.4 | 49.7 | 40.5 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_coco-2x_1x_nuim/mask_rcnn_r50_fpn_coco-2x_1x_nuim_20201008_195238-b1742a60.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_coco-2x_1x_nuim/mask_rcnn_r50_fpn_coco-2x_1x_nuim_20201008_195238.log.json) | -| Mask R-CNN | [R-50-CAFFE](./mask_rcnn_r50_caffe_fpn_1x_nuim.py) | IN | 1x | 7.0 | 47.7 | 38.2 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_1x_nuim/) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_1x_nuim/) | -| Mask R-CNN | [R-50-CAFFE](./mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py) | IN+COCO-3x | 1x | 7.0 | 49.9 | 40.8 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim_20201008_195305-661a992e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim_20201008_195305.log.json) | -| Mask R-CNN | [R-50-CAFFE](./mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py) | IN+COCO-3x | 20e | 7.0 | 50.6 | 41.3 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim_20201009_125002-5529442c.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim_20201009_125002.log.json) | -| Mask R-CNN | [R-101](./mask_rcnn_r101_fpn_1x_nuim.py) | IN | 1x | 10.9 | 48.9 | 39.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r101_fpn_1x_nuim/mask_rcnn_r101_fpn_1x_nuim_20201024_134803-65c7623a.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r101_fpn_1x_nuim/mask_rcnn_r101_fpn_1x_nuim_20201024_134803.log.json) | -| Mask R-CNN | 
[X-101_32x4d](./mask_rcnn_x101_32x4d_fpn_1x_nuim.py) | IN | 1x | 13.3 | 50.4 | 40.5 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_x101_32x4d_fpn_1x_nuim/mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135741-b699ab37.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_x101_32x4d_fpn_1x_nuim/mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135741.log.json) | -| Cascade Mask R-CNN | [R-50](./cascade_mask_rcnn_r50_fpn_1x_nuim.py) | IN | 1x | 8.9 | 50.8 | 40.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_1x_nuim/cascade_mask_rcnn_r50_fpn_1x_nuim_20201008_195342-1147c036.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_1x_nuim/cascade_mask_rcnn_r50_fpn_1x_nuim_20201008_195342.log.json) | -| Cascade Mask R-CNN | [R-50](./cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py) | IN+COCO-20e | 1x | 8.9 | 52.8 | 42.2 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim_20201009_124158-ad0540e3.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim_20201009_124158.log.json) | -| Cascade Mask R-CNN | [R-50](./cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py) | IN+COCO-20e | 20e | 8.9 | 52.8 | 42.2 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim_20201009_124951-40963960.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim_20201009_124951.log.json) | -| Cascade Mask R-CNN | [R-101](./cascade_mask_rcnn_r101_fpn_1x_nuim.py) | IN | 1x | 12.5 | 51.5 | 40.7 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r101_fpn_1x_nuim/cascade_mask_rcnn_r101_fpn_1x_nuim_20201024_134804-45215b1e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r101_fpn_1x_nuim/cascade_mask_rcnn_r101_fpn_1x_nuim_20201024_134804.log.json) | -| Cascade Mask R-CNN | [X-101_32x4d](./cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py) | IN | 1x | 14.9 | 52.8 | 41.6 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135753-e0e49778.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135753.log.json) | -| HTC w/o semantic | [R-50](./htc_without_semantic_r50_fpn_1x_nuim.py) | IN | 1x | | [model](<>) \| [log](<>) | | | -| HTC | [R-50](./htc_r50_fpn_1x_nuim.py) | IN | 1x | | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/) | | | -| HTC | [R-50](./htc_r50_fpn_coco-20e_1x_nuim.py) | IN+COCO-20e | 1x | 11.6 | 53.8 | 43.8 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_1x_nuim/htc_r50_fpn_coco-20e_1x_nuim_20201010_070203-0b53a65e.pth) \| 
[log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_1x_nuim/htc_r50_fpn_coco-20e_1x_nuim_20201010_070203.log.json) | -| HTC | [R-50](./htc_r50_fpn_coco-20e_20e_nuim.py) | IN+COCO-20e | 20e | 11.6 | 54.8 | 44.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_20e_nuim/htc_r50_fpn_coco-20e_20e_nuim_20201008_211415-d6c60a2c.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_20e_nuim/htc_r50_fpn_coco-20e_20e_nuim_20201008_211415.log.json) | -| HTC | [X-101_64x4d + DCN_c3-c5](./htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py) | IN+COCO-20e | 20e | 13.3 | 57.3 | 46.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim_20201008_211222-0b16ac4b.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim_20201008_211222.log.json) | - -**Note**: - -1. `IN` means only using ImageNet pre-trained backbone. `IN+COCO-Nx` and `IN+COCO-Ne` means the backbone is first pre-trained on ImageNet, and then the detector is pre-trained on COCO train2017 dataset by `Nx` and `N` epochs schedules, respectively. -2. All the training hyper-parameters follow the standard schedules on COCO dataset except that the images are resized from - 1280 x 720 to 1920 x 1080 (relative ratio 0.8 to 1.2) since the images are in size 1600 x 900. -3. The class order in the detectors released in v0.6.0 is different from the order in the configs because the bug in the conversion script. This bug has been fixed since v0.7.0 and the models trained by the correct class order are also released. If you used nuImages since v0.6.0, please re-convert the data through the conversion script using the above-mentioned command. 
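If you have re-converted the data as suggested in note 3, a quick way to confirm the class order of the generated annotations is to open one of the COCO-style JSON files with pycocotools. The sketch below is only an illustration: the path assumes the default converter output layout under `data/nuimages/annotations/` (the `v1.0-mini` file name follows the commented example in `mask_rcnn_r50_fpn_coco-2x_1x_nus-2d.py`); adjust it to whichever `--version`/`--extra-tag` you actually converted.

```python
# Quick sanity check of a converted nuImages annotation file (the path is an
# example based on the default converter layout; adjust to your conversion).
from pycocotools.coco import COCO

ann_file = 'data/nuimages/annotations/nuimages_v1.0-mini.json'
coco = COCO(ann_file)

# The converter maps the original nuScenes categories into 10 foreground classes;
# the order printed here should match the class order expected by the configs.
cats = coco.loadCats(coco.getCatIds())
print([c['name'] for c in cats])
print(len(coco.getImgIds()), 'images,', len(coco.getAnnIds()), 'instances')
```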
diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r101_fpn_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r101_fpn_1x_nuim.py deleted file mode 100644 index 28a54f71..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r101_fpn_1x_nuim.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './cascade_mask_rcnn_r50_fpn_1x_nuim.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r50_fpn_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r50_fpn_1x_nuim.py deleted file mode 100644 index c6ce25e3..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r50_fpn_1x_nuim.py +++ /dev/null @@ -1,60 +0,0 @@ -_base_ = [ - '../_base_/models/cascade_mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - roi_head=dict( - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_head=dict(num_classes=10))) diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py deleted file mode 100644 index bf3ffed0..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './cascade_mask_rcnn_r50_fpn_1x_nuim.py' - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_20e_coco/cascade_mask_rcnn_r50_fpn_20e_coco_bbox_mAP-0.419__segm_mAP-0.365_20200504_174711-4af8e66e.pth' # noqa diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py deleted file mode 100644 index 5d69466f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = './cascade_mask_rcnn_r50_fpn_1x_nuim.py' - -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(max_epochs=20) - -load_from = 
'http://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_20e_coco/cascade_mask_rcnn_r50_fpn_20e_coco_bbox_mAP-0.419__segm_mAP-0.365_20200504_174711-4af8e66e.pth' # noqa diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py deleted file mode 100644 index 19f35aef..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './cascade_mask_rcnn_r50_fpn_1x_nuim.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_r50_fpn_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_r50_fpn_1x_nuim.py deleted file mode 100644 index 46806836..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_r50_fpn_1x_nuim.py +++ /dev/null @@ -1,44 +0,0 @@ -_base_ = './htc_without_semantic_r50_fpn_1x_nuim.py' -model = dict( - roi_head=dict( - semantic_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[8]), - semantic_head=dict( - type='FusedSemanticHead', - num_ins=5, - fusion_level=1, - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=32, - ignore_label=0, - loss_weight=0.2))) - -data_root = 'data/nuimages/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadAnnotations', with_bbox=True, with_mask=True, with_seg=True), - dict( - type='Resize', - img_scale=[(1280, 720), (1920, 1080)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='SegRescale', scale_factor=1 / 8), - dict(type='DefaultFormatBundle'), - dict( - type='Collect', - keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg']) -] -data = dict( - train=dict( - seg_prefix=data_root + 'annotations/semantic_masks/', - pipeline=train_pipeline)) diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_r50_fpn_coco-20e_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_r50_fpn_coco-20e_1x_nuim.py deleted file mode 100644 index e5f60523..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_r50_fpn_coco-20e_1x_nuim.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './htc_r50_fpn_1x_nuim.py' - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/htc/htc_r50_fpn_20e_coco/htc_r50_fpn_20e_coco_20200319-fe28c577.pth' # noqa diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_r50_fpn_coco-20e_20e_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_r50_fpn_coco-20e_20e_nuim.py deleted file mode 100644 index 2274900f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_r50_fpn_coco-20e_20e_nuim.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './htc_r50_fpn_coco-20e_1x_nuim.py' -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(max_epochs=20) diff --git 
a/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_without_semantic_r50_fpn_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_without_semantic_r50_fpn_1x_nuim.py deleted file mode 100644 index 09fde671..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_without_semantic_r50_fpn_1x_nuim.py +++ /dev/null @@ -1,221 +0,0 @@ -_base_ = [ - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - type='HybridTaskCascade', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)), - roi_head=dict( - type='HybridTaskCascadeRoIHead', - interleaved=True, - mask_info_flow=True, - num_stages=3, - stage_loss_weights=[1, 0.5, 0.25], - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=[ - dict( - type='HTCMaskHead', - with_conv_res=False, - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=10, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)), - dict( - type='HTCMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=10, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)), - dict( - 
type='HTCMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=10, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)) - ]), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_across_levels=False, - nms_pre=2000, - nms_post=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=[ - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.6, - neg_iou_thr=0.6, - min_pos_iou=0.6, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.7, - min_pos_iou=0.7, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False) - ]), - test_cfg=dict( - rpn=dict( - nms_across_levels=False, - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.001, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py deleted file mode 100644 index 4ab095a8..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py +++ /dev/null @@ -1,23 +0,0 @@ -_base_ = './htc_r50_fpn_1x_nuim.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch', - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) - -data = dict(samples_per_gpu=1, workers_per_gpu=1) -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(max_epochs=20) - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/htc/htc_x101_64x4d_fpn_dconv_c3-c5_mstrain_400_1400_16x1_20e_coco/htc_x101_64x4d_fpn_dconv_c3-c5_mstrain_400_1400_16x1_20e_coco_20200312-946fd751.pth' # noqa diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r101_fpn_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r101_fpn_1x_nuim.py deleted file mode 100644 index 6245194c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r101_fpn_1x_nuim.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_1x_nuim.py' -model = dict(pretrained='torchvision://resnet101', 
backbone=dict(depth=101)) diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_caffe_fpn_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_caffe_fpn_1x_nuim.py deleted file mode 100644 index 4af79e59..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_caffe_fpn_1x_nuim.py +++ /dev/null @@ -1,46 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe'), - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1280, 720), (1920, 1080)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1600, 900), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py deleted file mode 100644 index 32c3f44c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py +++ /dev/null @@ -1,48 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe'), - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1280, 720), (1920, 1080)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1600, 900), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - 
dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) - -load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco_bbox_mAP-0.408__segm_mAP-0.37_20200504_163245-42aa3d00.pth' # noqa diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py deleted file mode 100644 index 60973539..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py +++ /dev/null @@ -1,52 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe'), - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1280, 720), (1920, 1080)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1600, 900), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) - -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(max_epochs=20) - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco_bbox_mAP-0.408__segm_mAP-0.37_20200504_163245-42aa3d00.pth' # noqa diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_fpn_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_fpn_1x_nuim.py deleted file mode 100644 index ec999ecd..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_fpn_1x_nuim.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nuim.py deleted file mode 100644 index fd603538..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nuim.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ 
- '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) -load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_2x_coco/mask_rcnn_r50_fpn_2x_coco_bbox_mAP-0.392__segm_mAP-0.354_20200505_003907-3e542a40.pth' # noqa diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nus-2d.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nus-2d.py deleted file mode 100644 index 06d27450..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nus-2d.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) - -file_client_args = dict( - backend='petrel', - path_mapping=dict({ - './data/nuscenes/': 's3://nuscenes/nuscenes/', - 'data/nuscenes/': 's3://nuscenes/nuscenes/' - })) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -test_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug', - img_scale=(1600, 900), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data_root = 'data/nuimages/' -# data = dict( -# val=dict( -# ann_file=data_root + 'annotations/nuimages_v1.0-mini.json'), -# test=dict( -# ann_file=data_root + 'annotations/nuimages_v1.0-mini.json')) diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_x101_32x4d_fpn_1x_nuim.py b/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_x101_32x4d_fpn_1x_nuim.py deleted file mode 100644 index eb3e81b6..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/mask_rcnn_x101_32x4d_fpn_1x_nuim.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_1x_nuim.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/cv/3d_detection/paconv/pytorch/configs/nuimages/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/nuimages/metafile.yml deleted file mode 100644 index 7b94ce7d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/nuimages/metafile.yml +++ /dev/null @@ -1,255 +0,0 @@ -Models: - - Name: mask_rcnn_r50_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r50_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 7.4 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 47.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 38.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_1x_nuim/mask_rcnn_r50_fpn_1x_nuim_20201008_195238-e99f5182.pth - - - Name: mask_rcnn_r50_fpn_coco-2x_1x_nuim - In Collection: Mask R-CNN - Config: 
configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nuim.py - Metadata: - Training Memory (GB): 7.4 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 49.7 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 40.5 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_coco-2x_1x_nuim/mask_rcnn_r50_fpn_coco-2x_1x_nuim_20201008_195238-b1742a60.pth - - - Name: mask_rcnn_r50_caffe_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r50_caffe_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 7.0 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 47.7 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 38.2 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_1x_nuim/ - - - Name: mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py - Metadata: - Training Memory (GB): 7.0 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 49.9 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 40.8 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim_20201008_195305-661a992e.pth - - - Name: mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py - Metadata: - Training Memory (GB): 7.0 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 50.6 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 41.3 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim_20201009_125002-5529442c.pth - - - Name: mask_rcnn_r101_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r101_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 10.9 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 48.9 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 39.1 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r101_fpn_1x_nuim/mask_rcnn_r101_fpn_1x_nuim_20201024_134803-65c7623a.pth - - - Name: mask_rcnn_x101_32x4d_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_x101_32x4d_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 13.3 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 50.4 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 40.5 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_x101_32x4d_fpn_1x_nuim/mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135741-b699ab37.pth - - - Name: cascade_mask_rcnn_r50_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/cascade_mask_rcnn_r50_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 8.9 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 50.8 - - Task: Instance Segmentation - 
Dataset: nuImages - Metrics: - Mask AP: 40.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_1x_nuim/cascade_mask_rcnn_r50_fpn_1x_nuim_20201008_195342-1147c036.pth - - - Name: cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py - Metadata: - Training Memory (GB): 8.9 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 52.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 42.2 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim_20201009_124158-ad0540e3.pth - - - Name: cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py - Metadata: - Training Memory (GB): 8.9 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 52.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 42.2 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim_20201009_124951-40963960.pth - - - Name: cascade_mask_rcnn_r101_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/cascade_mask_rcnn_r101_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 12.5 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 51.5 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 40.7 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r101_fpn_1x_nuim/cascade_mask_rcnn_r101_fpn_1x_nuim_20201024_134804-45215b1e.pth - - - Name: cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 14.9 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 52.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 41.6 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135753-e0e49778.pth - - - Name: htc_r50_fpn_coco-20e_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/htc_r50_fpn_coco-20e_1x_nuim.py - Metadata: - Training Memory (GB): 11.6 - Training Resources: 8x V100 GPUs - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 53.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 43.8 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_1x_nuim/htc_r50_fpn_coco-20e_1x_nuim_20201010_070203-0b53a65e.pth - - - Name: htc_r50_fpn_coco-20e_20e_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/htc_r50_fpn_coco-20e_20e_nuim.py - Metadata: - Training Memory (GB): 11.6 - Training Resources: 8x V100 GPUs - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 54.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 44.4 - Weights: 
https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_20e_nuim/htc_r50_fpn_coco-20e_20e_nuim_20201008_211415-d6c60a2c.pth - - - Name: htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py - Metadata: - Training Memory (GB): 13.3 - Training Resources: 8x V100 GPUs - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 57.3 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 46.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim_20201008_211222-0b16ac4b.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/paconv/README.md b/cv/3d_detection/paconv/pytorch/configs/paconv/README.md deleted file mode 100644 index 83ab5b08..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/paconv/README.md +++ /dev/null @@ -1,51 +0,0 @@ -# PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds - -> [PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds](https://arxiv.org/abs/2103.14635) - - - -## Abstract - -We introduce Position Adaptive Convolution (PAConv), a generic convolution operation for 3D point cloud processing. The key of PAConv is to construct the convolution kernel by dynamically assembling basic weight matrices stored in Weight Bank, where the coefficients of these weight matrices are self-adaptively learned from point positions through ScoreNet. In this way, the kernel is built in a data-driven manner, endowing PAConv with more flexibility than 2D convolutions to better handle the irregular and unordered point cloud data. Besides, the complexity of the learning process is reduced by combining weight matrices instead of brutally predicting kernels from point positions. -Furthermore, different from the existing point convolution operators whose network architectures are often heavily engineered, we integrate our PAConv into classical MLP-based point cloud pipelines without changing network configurations. Even built on simple networks, our method still approaches or even surpasses the state-of-the-art models, and significantly improves baseline performance on both classification and segmentation tasks, yet with decent efficiency. Thorough ablation studies and visualizations are provided to understand PAConv. - -
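To make the Weight Bank / ScoreNet idea in the abstract concrete, here is a minimal PyTorch sketch of assembling a per-neighbour kernel from a small bank of weight matrices, with coefficients predicted from relative point positions. It illustrates the mechanism only and is not the PAConv operator shipped in mmdet3d; the ScoreNet architecture, tensor shapes, and softmax normalisation are assumptions made for the example.

```python
import torch
import torch.nn as nn

class ToyPAConv(nn.Module):
    """Toy position-adaptive conv: per-neighbour kernel = sum_m score_m(pos) * W_m."""

    def __init__(self, in_ch, out_ch, num_matrices=8):
        super().__init__()
        # Weight Bank: M basic weight matrices shared by all points.
        self.weight_bank = nn.Parameter(0.02 * torch.randn(num_matrices, in_ch, out_ch))
        # ScoreNet: maps a relative neighbour position (dx, dy, dz) to M coefficients.
        self.score_net = nn.Sequential(
            nn.Linear(3, 16), nn.ReLU(inplace=True),
            nn.Linear(16, num_matrices), nn.Softmax(dim=-1))

    def forward(self, feats, rel_pos):
        # feats:   (N, K, in_ch)  features of K neighbours around each centre point
        # rel_pos: (N, K, 3)      neighbour position relative to the centre
        scores = self.score_net(rel_pos)                                   # (N, K, M)
        kernels = torch.einsum('nkm,mio->nkio', scores, self.weight_bank)  # (N, K, in, out)
        out = torch.einsum('nki,nkio->nko', feats, kernels)                # (N, K, out)
        return out.max(dim=1).values                                       # (N, out)

layer = ToyPAConv(in_ch=6, out_ch=64)          # e.g. XYZ + colour input features
y = layer(torch.randn(128, 16, 6), torch.randn(128, 16, 3))
print(y.shape)                                 # torch.Size([128, 64])
```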
- -
- -## Introduction - -We implement PAConv and provide the result and checkpoints on S3DIS dataset. - -**Notice**: The original PAConv paper used step learning rate schedule. We discovered that cosine schedule achieves slightly better results and adopt it in our implementations. - -## Results and models - -### S3DIS - -| Method | Split | Lr schd | Mem (GB) | Inf time (fps) | mIoU (Val set) | Download | -| :-------------------------------------------------------------------------: | :----: | :---------: | :------: | :------------: | :------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PAConv (SSG)](./paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py) | Area_5 | cosine 150e | 5.8 | | 66.65 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class_20210729_200615-2147b2d1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class_20210729_200615.log.json) | -| [PAConv\* (SSG)](./paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py) | Area_5 | cosine 200e | 3.8 | | 65.33 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class_20210802_171802-e5ea9bb9.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class_20210802_171802.log.json) | - -**Notes:** - -- We use XYZ+Color+Normalized_XYZ as input in all the experiments on S3DIS datasets. -- `Area_5` Split means training the model on Area_1, 2, 3, 4, 6 and testing on Area_5. -- PAConv\* stands for the CUDA implementation of PAConv operations. See the [paper](https://arxiv.org/pdf/2103.14635.pdf) appendix section D for more details. In our experiments, the training of PAConv\* is found to be very unstable. We achieved slightly lower mIoU than the result in the paper, but is consistent with the result obtained by running their [official code](https://github.com/CVMI-Lab/PAConv/tree/main/scene_seg). Besides, although the GPU memory consumption of PAConv\* is significantly lower than PAConv, its training and inference speed are actually slower (by ~10%). - -## Indeterminism - -Since PAConv testing adopts sliding patch inference which involves random point sampling, and the test script uses fixed random seeds while the random seeds of validation in training are not fixed, the test results may be slightly different from the results reported above. 
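The effect described above is easy to reproduce in isolation: a seeded sampler draws the same patch of points on every run, while an unseeded one does not. The snippet below is a standalone toy illustration of that difference, not the actual sliding-patch test code.

```python
import numpy as np

def sample_patch(points, num_points, seed=None):
    # Toy stand-in for the random point sampling inside sliding-patch inference.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=num_points, replace=False)
    return points[idx]

cloud = np.random.rand(10000, 6)               # toy scene: XYZ + colour per point
a = sample_patch(cloud, 4096, seed=0)
b = sample_patch(cloud, 4096, seed=0)
c = sample_patch(cloud, 4096)                  # unseeded, like validation during training
print(np.allclose(a, b), np.allclose(a, c))    # True False (with overwhelming probability)
```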
- -## Citation - -```latex -@inproceedings{xu2021paconv, - title={PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds}, - author={Xu, Mutian and Ding, Runyu and Zhao, Hengshuang and Qi, Xiaojuan}, - booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, - pages={3173--3182}, - year={2021} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/paconv/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/paconv/metafile.yml deleted file mode 100644 index 589f8079..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/paconv/metafile.yml +++ /dev/null @@ -1,29 +0,0 @@ -Collections: - - Name: PAConv - Metadata: - Training Techniques: - - SGD - Training Resources: 8x Titan XP GPUs - Architecture: - - PAConv - Paper: - URL: https://arxiv.org/abs/2103.14635 - Title: 'PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds' - README: configs/paconv/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/ops/paconv/paconv.py#L106 - Version: v0.16.0 - -Models: - - Name: paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py - In Collection: PAConv - Config: configs/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py - Metadata: - Training Data: S3DIS - Training Memory (GB): 5.8 - Results: - - Task: 3D Semantic Segmentation - Dataset: S3DIS - Metrics: - mIoU: 66.65 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class_20210729_200615-2147b2d1.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py b/cv/3d_detection/paconv/pytorch/configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py deleted file mode 100644 index b2a1440e..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py +++ /dev/null @@ -1,69 +0,0 @@ -_base_ = [ - '../_base_/datasets/s3dis_seg-3d-13class.py', - '../_base_/models/paconv_cuda_ssg.py', - '../_base_/schedules/seg_cosine_150e.py', '../_base_/default_runtime.py' -] - -# data settings -class_names = ('ceiling', 'floor', 'wall', 'beam', 'column', 'window', 'door', - 'table', 'chair', 'sofa', 'bookcase', 'board', 'clutter') -num_points = 4096 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=tuple(range(len(class_names))), - max_cat_id=13), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.0, - use_normalized_coord=True, - num_try=10000, - enlarge_size=None, - min_unique_num=num_points // 4, - eps=0.0), - dict(type='NormalizePointsColor', color_mean=None), - dict( - type='GlobalRotScaleTrans', - rot_range=[0.0, 6.283185307179586], # [0, 2 * pi] - scale_ratio_range=[0.8, 1.2], - translation_std=[0, 0, 0]), - dict( - type='RandomJitterPoints', - jitter_std=[0.01, 0.01, 0.01], - clip_range=[-0.05, 0.05]), - dict(type='RandomDropPointsColor', drop_ratio=0.2), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict(samples_per_gpu=8, train=dict(pipeline=train_pipeline)) -evaluation = 
dict(interval=1) - -# model settings -model = dict( - decode_head=dict( - num_classes=13, ignore_index=13, - loss_decode=dict(class_weight=None)), # S3DIS doesn't use class_weight - test_cfg=dict( - num_points=4096, - block_size=1.0, - sample_rate=0.5, - use_normalized_coord=True, - batch_size=12)) - -# runtime settings -runner = dict(max_epochs=200) diff --git a/cv/3d_detection/paconv/pytorch/configs/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py b/cv/3d_detection/paconv/pytorch/configs/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py deleted file mode 100644 index 6b22a67f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py +++ /dev/null @@ -1,66 +0,0 @@ -_base_ = [ - '../_base_/datasets/s3dis_seg-3d-13class.py', - '../_base_/models/paconv_ssg.py', '../_base_/schedules/seg_cosine_150e.py', - '../_base_/default_runtime.py' -] - -# data settings -class_names = ('ceiling', 'floor', 'wall', 'beam', 'column', 'window', 'door', - 'table', 'chair', 'sofa', 'bookcase', 'board', 'clutter') -num_points = 4096 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=tuple(range(len(class_names))), - max_cat_id=13), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.0, - use_normalized_coord=True, - num_try=10000, - enlarge_size=None, - min_unique_num=num_points // 4, - eps=0.0), - dict(type='NormalizePointsColor', color_mean=None), - dict( - type='GlobalRotScaleTrans', - rot_range=[0.0, 6.283185307179586], # [0, 2 * pi] - scale_ratio_range=[0.8, 1.2], - translation_std=[0, 0, 0]), - dict( - type='RandomJitterPoints', - jitter_std=[0.01, 0.01, 0.01], - clip_range=[-0.05, 0.05]), - dict(type='RandomDropPointsColor', drop_ratio=0.2), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict(samples_per_gpu=8, train=dict(pipeline=train_pipeline)) -evaluation = dict(interval=1) - -# model settings -model = dict( - decode_head=dict( - num_classes=13, ignore_index=13, - loss_decode=dict(class_weight=None)), # S3DIS doesn't use class_weight - test_cfg=dict( - num_points=4096, - block_size=1.0, - sample_rate=0.5, - use_normalized_coord=True, - batch_size=12)) diff --git a/cv/3d_detection/paconv/pytorch/configs/parta2/README.md b/cv/3d_detection/paconv/pytorch/configs/parta2/README.md deleted file mode 100644 index b94b8492..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/parta2/README.md +++ /dev/null @@ -1,38 +0,0 @@ -# From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network - -> [From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network](https://arxiv.org/abs/1907.03670) - - - -## Abstract - -3D object detection from LiDAR point cloud is a challenging problem in 3D scene understanding and has many practical applications. In this paper, we extend our preliminary work PointRCNN to a novel and strong point-cloud-based 3D object detection framework, the part-aware and aggregation neural network (Part-A2 net). The whole framework consists of the part-aware stage and the part-aggregation stage. 
Firstly, the part-aware stage for the first time fully utilizes free-of-charge part supervisions derived from 3D ground-truth boxes to simultaneously predict high quality 3D proposals and accurate intra-object part locations. The predicted intra-object part locations within the same proposal are grouped by our new-designed RoI-aware point cloud pooling module, which results in an effective representation to encode the geometry-specific features of each 3D proposal. Then the part-aggregation stage learns to re-score the box and refine the box location by exploring the spatial relationship of the pooled intra-object part locations. Extensive experiments are conducted to demonstrate the performance improvements from each component of our proposed framework. Our Part-A2 net outperforms all existing 3D detection methods and achieves new state-of-the-art on KITTI 3D object detection dataset by utilizing only the LiDAR point cloud data. - -
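The "RoI-aware point cloud pooling" mentioned above can be pictured as voxelising the points that fall inside a 3D proposal into a fixed grid, keeping empty cells so the box geometry is preserved. Below is a toy, axis-aligned sketch of that idea with simple mean pooling per cell; the real module in mmdet3d is a CUDA op that also handles rotated boxes and other pooling modes, so treat this purely as an illustration.

```python
import torch

def roi_aware_mean_pool(points, feats, box_min, box_max, grid=(6, 6, 6)):
    """Toy RoI-aware pooling: mean point feature per cell of a fixed grid spanning
    an axis-aligned 3D box; cells with no points stay zero."""
    gx, gy, gz = grid
    grid_t = torch.tensor([gx, gy, gz])
    out = torch.zeros(gx, gy, gz, feats.shape[1])
    counts = torch.zeros(gx, gy, gz, 1)
    inside = ((points >= box_min) & (points <= box_max)).all(dim=1)
    pts, f = points[inside], feats[inside]
    rel = (pts - box_min) / (box_max - box_min + 1e-9)      # normalise into [0, 1]
    idx = torch.minimum((rel * grid_t).long(), grid_t - 1)  # cell index per point
    for (ix, iy, iz), fi in zip(idx.tolist(), f):
        out[ix, iy, iz] += fi
        counts[ix, iy, iz] += 1
    return out / counts.clamp_min(1)

pts = torch.rand(500, 3) * 2 - 0.5             # some points fall outside the box
feats = torch.randn(500, 16)
pooled = roi_aware_mean_pool(pts, feats, torch.zeros(3), torch.ones(3))
print(pooled.shape)                            # torch.Size([6, 6, 6, 16])
```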
- -
- -## Introduction - -We implement Part-A^2 and provide its results and checkpoints on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :------------------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py) | 3 Class | cyclic 80e | 4.1 | | 68.33 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class_20210831_022017-454a5344.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class_20210831_022017.log.json) | -| [SECFPN](./hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py) | Car | cyclic 80e | 4.0 | | 79.08 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car_20210831_022017-cb7ff621.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car_20210831_022017.log.json) | - -## Citation - -```latex -@article{shi2020points, - title={From points to parts: 3d object detection from point cloud with part-aware and part-aggregation network}, - author={Shi, Shaoshuai and Wang, Zhe and Shi, Jianping and Wang, Xiaogang and Li, Hongsheng}, - journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, - year={2020}, - publisher={IEEE} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py deleted file mode 100644 index 11662318..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py +++ /dev/null @@ -1,122 +0,0 @@ -_base_ = [ - '../_base_/schedules/cyclic_40e.py', '../_base_/default_runtime.py', - '../_base_/models/parta2.py' -] - -point_cloud_range = [0, -40, -3, 70.4, 40, 1] - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=10, Cyclist=10)), - classes=class_names, - sample_groups=dict(Car=12, Pedestrian=6, Cyclist=6)) -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0.5], - global_rot_range=[0.0, 0.0], - rot_range=[-0.78539816, 0.78539816]), - 
dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - box_type_3d='LiDAR', - test_mode=False)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - box_type_3d='LiDAR', - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - box_type_3d='LiDAR', - test_mode=True)) - -# Part-A2 uses a different learning rate from what SECOND uses. 
-lr = 0.001 -optimizer = dict(lr=lr) -evaluation = dict(pipeline=eval_pipeline) -find_unused_parameters = True diff --git a/cv/3d_detection/paconv/pytorch/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py deleted file mode 100644 index 89be085d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py +++ /dev/null @@ -1,137 +0,0 @@ -_base_ = './hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py' - -point_cloud_range = [0, -40, -3, 70.4, 40, 1] # velodyne coordinates, x, y, z - -model = dict( - rpn_head=dict( - type='PartA2RPNHead', - num_classes=1, - anchor_generator=dict( - _delete_=True, - type='Anchor3DRangeGenerator', - ranges=[[0, -40.0, -1.78, 70.4, 40.0, -1.78]], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False)), - roi_head=dict( - num_classes=1, - semantic_head=dict(num_classes=1), - bbox_head=dict(num_classes=1)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=9000, - nms_post=512, - max_num=512, - nms_thr=0.8, - score_thr=0, - use_rotate_nms=False), - rcnn=dict( - assigner=dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1), - sampler=dict( - type='IoUNegPiecewiseSampler', - num=128, - pos_fraction=0.55, - neg_piece_fractions=[0.8, 0.2], - neg_iou_piece_thrs=[0.55, 0.1], - neg_pos_ub=-1, - add_gt_as_proposals=False, - return_iou=True), - cls_pos_thr=0.75, - cls_neg_thr=0.25)), - test_cfg=dict( - rpn=dict( - nms_pre=1024, - nms_post=100, - max_num=100, - nms_thr=0.7, - score_thr=0, - use_rotate_nms=True), - rcnn=dict( - use_rotate_nms=True, - use_raw_score=True, - nms_thr=0.01, - score_thr=0.1))) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - classes=class_names, - sample_groups=dict(Car=15)) -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0.5], - global_rot_range=[0.0, 0.0], - rot_range=[-0.78539816, 0.78539816]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, 
use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - train=dict(dataset=dict(pipeline=train_pipeline, classes=class_names)), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -find_unused_parameters = True diff --git a/cv/3d_detection/paconv/pytorch/configs/parta2/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/parta2/metafile.yml deleted file mode 100644 index d626fcb0..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/parta2/metafile.yml +++ /dev/null @@ -1,41 +0,0 @@ -Collections: - - Name: Part-A^2 - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - Sparse U-Net - Paper: - URL: https://arxiv.org/abs/1907.03670 - Title: 'From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network' - README: configs/parta2/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/parta2.py#L12 - Version: v0.5.0 - -Models: - - Name: hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class - In Collection: Part-A^2 - Config: configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py - Metadata: - Training Memory (GB): 4.1 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 68.33 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class_20210831_022017-454a5344.pth - - - Name: hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car - In Collection: Part-A^2 - Config: configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py - Metadata: - Training Memory (GB): 4.0 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 79.08 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car_20210831_022017-cb7ff621.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/pgd/README.md b/cv/3d_detection/paconv/pytorch/configs/pgd/README.md deleted file mode 100644 index f805f53d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pgd/README.md +++ /dev/null @@ -1,69 +0,0 @@ -# Probabilistic and Geometric Depth: Detecting Objects in Perspective - -> [Probabilistic and Geometric Depth: Detecting Objects in Perspective](https://arxiv.org/abs/2107.14160) - - - -## Abstract - -3D object detection is an important capability needed in various practical applications such as driver assistance systems. Monocular 3D detection, as a representative general setting among image-based approaches, provides a more economical solution than conventional settings relying on LiDARs but still yields unsatisfactory results. This paper first presents a systematic study on this problem. 
We observe that the current monocular 3D detection can be simplified as an instance depth estimation problem: The inaccurate instance depth blocks all the other 3D attribute predictions from improving the overall detection performance. Moreover, recent methods directly estimate the depth based on isolated instances or pixels while ignoring the geometric relations across different objects. To this end, we construct geometric relation graphs across predicted objects and use the graph to facilitate depth estimation. As the preliminary depth estimation of each instance is usually inaccurate in this ill-posed setting, we incorporate a probabilistic representation to capture the uncertainty. It provides an important indicator to identify confident predictions and further guide the depth propagation. Despite the simplicity of the basic idea, our method, PGD, obtains significant improvements on KITTI and nuScenes benchmarks, achieving 1st place out of all monocular vision-only methods while still maintaining real-time efficiency. Code and models will be released at [this https URL](https://github.com/open-mmlab/mmdetection3d). - -
- -## Introduction - -PGD, also can be regarded as FCOS3D++, is a simple yet effective monocular 3D detector. It enhances the FCOS3D baseline by involving local geometric constraints and improving instance depth estimation. - -We release the code and model for both KITTI and nuScenes benchmark, which is a good supplement for the original FCOS3D baseline (only supported on nuScenes). - -For clean implementation, our preliminary release supports base models with proposed local geometric constraints and the probabilistic depth representation. We will involve the geometric graph part in the future. - -A more extensive study based on FCOS3D and PGD is on-going. Please stay tuned. - -## Results and models - -### KITTI - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP_11 / mAP_40 | Download | -| :--------------------------------------------------------------: | :-----: | :------: | :------------: | :-------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [ResNet101](./pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py) | 4x | 9.07 | | 18.33 / 13.23 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d_20211022_102608-8a97533b.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d_20211022_102608.log.json) | - -Detailed performance on KITTI 3D detection (3D/BEV) is as follows, evaluated by AP11 and AP40 metric: - -| | Easy | Moderate | Hard | -| ---------- | :-----------: | :-----------: | :-----------: | -| Car (AP11) | 24.09 / 30.11 | 18.33 / 23.46 | 16.90 / 19.33 | -| Car (AP40) | 19.27 / 26.60 | 13.23 / 18.23 | 10.65 / 15.00 | - -Note: mAP represents Car moderate 3D strict AP11 / AP40 results. Because of the limited data for pedestrians and cyclists, the detection performance for these two classes is usually unstable. Therefore, we only list car detection results here. In addition, AP40 is a more recommended metric for reference due to its much better stability. 
- -### NuScenes - -| Backbone | Lr schd | Mem (GB) | mAP | NDS | Download | -| :------------------------------------------------------------------------------: | :-----: | :------: | :--: | :--: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [ResNet101 w/ DCN](./pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py) | 1x | 9.20 | 31.7 | 39.3 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_20211116_195350-f4b5eec2.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_20211116_195350.log.json) | -| [above w/ finetune](./pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py) | 1x | 9.20 | 34.6 | 41.1 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune_20211118_093245-fd419681.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune_20211118_093245.log.json) | -| above w/ tta | 1x | 9.20 | 35.5 | 41.8 | | -| [ResNet101 w/ DCN](./pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py) | 2x | 9.20 | 33.6 | 40.9 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_20211112_125314-cb677266.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_20211112_125314.log.json) | -| [above w/ finetune](./pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py) | 2x | 9.20 | 35.8 | 42.5 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune_20211114_162135-5ec7c1cd.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune_20211114_162135.log.json) | -| above w/ tta | 2x | 9.20 | 36.8 | 43.1 | | - -## Citation - -```latex -@inproceedings{wang2021pgd, - title={{Probabilistic and Geometric Depth: Detecting} Objects in Perspective}, - author={Wang, Tai and Zhu, Xinge and Pang, Jiangmiao and Lin, Dahua}, - booktitle={Conference on Robot Learning (CoRL) 2021}, - year={2021} -} -# For the baseline version -@inproceedings{wang2021fcos3d, - title={{FCOS3D: Fully} Convolutional One-Stage Monocular 3D Object Detection}, - author={Wang, Tai and Zhu, Xinge and Pang, Jiangmiao and Lin, Dahua}, - booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops}, - year={2021} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/pgd/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/pgd/metafile.yml deleted file mode 100644 index d7d66265..00000000 --- 
a/cv/3d_detection/paconv/pytorch/configs/pgd/metafile.yml +++ /dev/null @@ -1,81 +0,0 @@ -Collections: - - Name: PGD - Metadata: - Training Data: KITTI - Training Techniques: - - SGD - Training Resources: 4x TITAN XP - Architecture: - - PGDHead - Paper: - URL: https://arxiv.org/abs/2107.14160 - Title: 'Probabilistic and Geometric Depth: Detecting Objects in Perspective' - README: configs/pgd/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/v1.0.0.dev0/mmdet3d/models/dense_heads/pgd_head.py#17 - Version: v1.0.0 - -Models: - - Name: pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d - In Collection: PGD - Config: configs/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py - Metadata: - Training Memory (GB): 9.1 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 18.33 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d_20211022_102608-8a97533b.pth - - - Name: pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d - In Collection: PGD - Config: configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py - Metadata: - Training Memory (GB): 9.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 31.7 - NDS: 39.3 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_20211116_195350-f4b5eec2.pth - - - Name: pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune - In Collection: PGD - Config: configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py - Metadata: - Training Memory (GB): 9.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 34.6 - NDS: 41.1 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune_20211118_093245-fd419681.pth - - - Name: pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d - In Collection: PGD - Config: configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py - Metadata: - Training Memory (GB): 9.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 33.6 - NDS: 40.9 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_20211112_125314-cb677266.pth - - - Name: pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune - In Collection: PGD - Config: configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py - Metadata: - Training Memory (GB): 9.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 35.8 - NDS: 42.5 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune_20211114_162135-5ec7c1cd.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py b/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py deleted file mode 100644 index 37b50493..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py +++ /dev/null @@ -1,107 +0,0 @@ -_base_ = [ - '../_base_/datasets/nus-mono3d.py', '../_base_/models/pgd.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] 
-# model settings -model = dict( - backbone=dict( - dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, False, True, True)), - bbox_head=dict( - pred_bbox2d=True, - group_reg_dims=(2, 1, 3, 1, 2, - 4), # offset, depth, size, rot, velo, bbox2d - reg_branch=( - (256, ), # offset - (256, ), # depth - (256, ), # size - (256, ), # rot - (), # velo - (256, ) # bbox2d - ), - loss_depth=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - bbox_coder=dict( - type='PGDBBoxCoder', - base_depths=((31.99, 21.12), (37.15, 24.63), (39.69, 23.97), - (40.91, 26.34), (34.16, 20.11), (22.35, 13.70), - (24.28, 16.05), (27.26, 15.50), (20.61, 13.68), - (22.74, 15.01)), - base_dims=((4.62, 1.73, 1.96), (6.93, 2.83, 2.51), - (12.56, 3.89, 2.94), (11.22, 3.50, 2.95), - (6.68, 3.21, 2.85), (6.68, 3.21, 2.85), - (2.11, 1.46, 0.78), (0.73, 1.77, 0.67), - (0.41, 1.08, 0.41), (0.50, 0.99, 2.52)), - code_size=9)), - # set weight 1.0 for base 7 dims (offset, depth, size, rot) - # 0.05 for 2-dim velocity and 0.2 for 4-dim 2D distance targets - train_cfg=dict(code_weight=[ - 1.0, 1.0, 0.2, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05, 0.2, 0.2, 0.2, 0.2 - ]), - test_cfg=dict(nms_pre=1000, nms_thr=0.8, score_thr=0.01, max_per_img=200)) - -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=True, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='Resize', img_scale=(1600, 900), keep_ratio=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'attr_labels', 'gt_bboxes_3d', - 'gt_labels_3d', 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - scale_factor=1.0, - flip=False, - transforms=[ - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict( - lr=0.004, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=1.0 / 3, - step=[8, 11]) -total_epochs = 12 -evaluation = dict(interval=4) -runner = dict(max_epochs=total_epochs) diff --git a/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py b/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py deleted file mode 100644 index f5d64232..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = 
'./pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py' -# model settings -model = dict( - train_cfg=dict(code_weight=[ - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05, 0.2, 0.2, 0.2, 0.2 - ])) -# optimizer -optimizer = dict(lr=0.002) -load_from = 'work_dirs/pgd_nus_benchmark_1x/latest.pth' diff --git a/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py b/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py deleted file mode 100644 index 2dd59575..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = './pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py' -# learning policy -lr_config = dict(step=[16, 22]) -total_epochs = 24 -runner = dict(max_epochs=total_epochs) diff --git a/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py b/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py deleted file mode 100644 index 19a3d630..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py' -# model settings -model = dict( - train_cfg=dict(code_weight=[ - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05, 0.2, 0.2, 0.2, 0.2 - ])) -# optimizer -optimizer = dict(lr=0.002) -load_from = 'work_dirs/pgd_nus_benchmark_2x/latest.pth' diff --git a/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py b/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py deleted file mode 100644 index 832b34e6..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py +++ /dev/null @@ -1,127 +0,0 @@ -_base_ = [ - '../_base_/datasets/kitti-mono3d.py', '../_base_/models/pgd.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - backbone=dict(frozen_stages=0), - neck=dict(start_level=0, num_outs=4), - bbox_head=dict( - num_classes=3, - bbox_code_size=7, - pred_attrs=False, - pred_velo=False, - pred_bbox2d=True, - use_onlyreg_proj=True, - strides=(4, 8, 16, 32), - regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 1e8)), - group_reg_dims=(2, 1, 3, 1, 16, - 4), # offset, depth, size, rot, kpts, bbox2d - reg_branch=( - (256, ), # offset - (256, ), # depth - (256, ), # size - (256, ), # rot - (256, ), # kpts - (256, ) # bbox2d - ), - centerness_branch=(256, ), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_centerness=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - use_depth_classifier=True, - depth_branch=(256, ), - depth_range=(0, 70), - depth_unit=10, - division='uniform', - depth_bins=8, - pred_keypoints=True, - weight_dim=1, - loss_depth=dict( - type='UncertainSmoothL1Loss', alpha=1.0, beta=3.0, - loss_weight=1.0), - bbox_coder=dict( - type='PGDBBoxCoder', - base_depths=((28.01, 16.32), ), - base_dims=((0.8, 1.73, 0.6), (1.76, 1.73, 0.6), (3.9, 1.56, 1.6)), - code_size=7)), - # set weight 1.0 for base 7 dims (offset, depth, size, rot) - # 0.2 for 16-dim keypoint offsets and 1.0 for 4-dim 2D 
distance targets - train_cfg=dict(code_weight=[ - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, - 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 1.0, 1.0, 1.0, 1.0 - ]), - test_cfg=dict(nms_pre=100, nms_thr=0.05, score_thr=0.001, max_per_img=20)) - -class_names = ['Pedestrian', 'Cyclist', 'Car'] -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=False, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='Resize', img_scale=(1242, 375), keep_ratio=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'gt_bboxes_3d', 'gt_labels_3d', - 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - scale_factor=1.0, - flip=False, - transforms=[ - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=3, - workers_per_gpu=3, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict( - lr=0.001, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=1.0 / 3, - step=[32, 44]) -total_epochs = 48 -runner = dict(type='EpochBasedRunner', max_epochs=48) -evaluation = dict(interval=2) -checkpoint_config = dict(interval=8) diff --git a/cv/3d_detection/paconv/pytorch/configs/point_rcnn/README.md b/cv/3d_detection/paconv/pytorch/configs/point_rcnn/README.md deleted file mode 100644 index eddbdc72..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/point_rcnn/README.md +++ /dev/null @@ -1,47 +0,0 @@ -# PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud - -> [PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud](https://arxiv.org/abs/1812.04244) - - - -## Abstract - -In this paper, we propose PointRCNN for 3D object detection from raw point cloud. The whole framework is composed of two stages: stage-1 for the bottom-up 3D proposal generation and stage-2 for refining proposals in the canonical coordinates to obtain the final detection results. Instead of generating proposals from RGB image or projecting point cloud to bird's view or voxels as previous methods do, our stage-1 sub-network directly generates a small number of high-quality 3D proposals from point cloud in a bottom-up manner via segmenting the point cloud of the whole scene into foreground points and background. The stage-2 sub-network transforms the pooled points of each proposal to canonical coordinates to learn better local spatial features, which is combined with global semantic features of each point learned in stage-1 for accurate box refinement and confidence prediction. 
Extensive experiments on the 3D detection benchmark of KITTI dataset show that our proposed architecture outperforms state-of-the-art methods with remarkable margins by using only point cloud as input. - -
- -## Introduction - -We implement PointRCNN and provide the result with checkpoints on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :-------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++](./point_rcnn_2x8_kitti-3d-3classes.py) | 3 Class | cyclic 40e | 4.6 | | 70.83 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/point_rcnn/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/point_rcnn/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.log.json) | - -Note: mAP represents AP11 results on 3 Class under the moderate setting. - -Detailed performance on KITTI 3D detection (3D) is as follows, evaluated by AP11 metric: - -| | Easy | Moderate | Hard | -| ---------- | :---: | :------: | :---: | -| Car | 89.13 | 78.72 | 78.24 | -| Pedestrian | 65.81 | 59.57 | 52.75 | -| Cyclist | 93.51 | 74.19 | 70.73 | - -## Citation - -```latex -@inproceedings{Shi_2019_CVPR, - title = {PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud}, - author = {Shi, Shaoshuai and Wang, Xiaogang and Li, Hongsheng}, - booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, - month = {June}, - year = {2019} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/point_rcnn/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/point_rcnn/metafile.yml deleted file mode 100644 index a7627cee..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/point_rcnn/metafile.yml +++ /dev/null @@ -1,29 +0,0 @@ -Collections: - - Name: PointRCNN - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 8x Titan XP GPUs - Architecture: - - PointNet++ - Paper: - URL: https://arxiv.org/abs/1812.04244 - Title: 'PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud' - README: configs/point_rcnn/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/v1.0.0.dev0/mmdet3d/models/detectors/point_rcnn.py#L8 - Version: v1.0.0 - -Models: - - Name: point_rcnn_2x8_kitti-3d-3classes.py - In Collection: PointRCNN - Config: configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py - Metadata: - Training Memory (GB): 4.6 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 70.83 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/point_rcnn/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py b/cv/3d_detection/paconv/pytorch/configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py deleted file mode 100644 index 1344aca5..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py +++ /dev/null @@ -1,94 +0,0 @@ -_base_ = [ - '../_base_/datasets/kitti-3d-car.py', '../_base_/models/point_rcnn.py', - '../_base_/default_runtime.py', '../_base_/schedules/cyclic_40e.py' -] - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car', 'Pedestrian', 'Cyclist'] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -input_modality = 
dict(use_lidar=True, use_camera=False) - -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=5, Cyclist=5)), - sample_groups=dict(Car=20, Pedestrian=15, Cyclist=15), - classes=class_names) - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectSample', db_sampler=db_sampler), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0.5], - global_rot_range=[0.0, 0.0], - rot_range=[-0.78539816, 0.78539816]), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointSample', num_points=16384, sample_range=40.0), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointSample', num_points=16384, sample_range=40.0), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict(pipeline=train_pipeline, classes=class_names)), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -# optimizer -lr = 0.001 # max learning rate -optimizer = dict(lr=lr, betas=(0.95, 0.85)) -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -evaluation = dict(interval=2) -# yapf:disable -log_config = dict( - interval=30, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable diff --git a/cv/3d_detection/paconv/pytorch/configs/pointnet2/README.md b/cv/3d_detection/paconv/pytorch/configs/pointnet2/README.md deleted file mode 100644 index c9204eb1..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointnet2/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space - -> [PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space](https://arxiv.org/abs/1706.02413) - - - -## Abstract - -Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. 
By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds. - -
- -## Introduction - -We implement PointNet++ and provide the result and checkpoints on ScanNet and S3DIS datasets. - -**Notice**: The original PointNet++ paper used step learning rate schedule. We discovered that cosine schedule achieves much better results and adopt it in our implementations. We also use a larger `weight_decay` factor because we find it consistently improves the performance. - -## Results and models - -### ScanNet - -| Method | Input | Lr schd | Mem (GB) | Inf time (fps) | mIoU (Val set) | mIoU (Test set) | Download | -| :-------------------------------------------------------------------------------------: | :-------: | :---------: | :------: | :------------: | :------------: | :-------------: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| [PointNet++ (SSG)](./pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py) | XYZ | cosine 200e | 1.9 | | 53.91 | | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143628-4e341a48.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143628.log.json) | -| [PointNet++ (SSG)](./pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py) | XYZ+Color | cosine 200e | 1.9 | | 54.44 | | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143644-ee73704a.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143644.log.json) | -| [PointNet++ (MSG)](./pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py) | XYZ | cosine 250e | 2.4 | | 54.26 | | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class_20210514_143838-b4a3cf89.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class_20210514_143838.log.json) | -| [PointNet++ (MSG)](./pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py) | XYZ+Color | cosine 250e | 2.4 | | 55.05 | | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class_20210514_144009-24477ab1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class_20210514_144009.log.json) | - -**Notes:** - -- The original PointNet++ paper conducted 
experiments on the ScanNet V1 dataset, while later point cloud segmentor papers often used ScanNet V2. Following common practice, we report results on the ScanNet V2 dataset. - -- Since ScanNet dataset doesn't provide ground-truth labels for the test set, users can only evaluate test set performance by submitting to its online benchmark [website](http://kaldir.vc.in.tum.de/scannet_benchmark/). However, users are only allowed to submit once every two weeks. Therefore, we currently report val set mIoU. Test set performance may be added in the future. - -- To generate submission file for ScanNet online benchmark, you need to modify the ScanNet dataset's [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/datasets/scannet_seg-3d-20class.py#L126). Change `ann_file=data_root + 'scannet_infos_val.pkl'` to `ann_file=data_root + 'scannet_infos_test.pkl'`, and then simply run: - - ```shell - python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --format-only --options 'txt_prefix=exps/pointnet2_scannet_results' - ``` - - This will save the prediction results as `txt` files in `exps/pointnet2_scannet_results/`. Then, go to this folder and zip all files into `pn2_scannet.zip`. Now you can submit it to the online benchmark and wait for the test set result. More instructions can be found at their official [website](http://kaldir.vc.in.tum.de/scannet_benchmark/documentation#submission-policy). - -### S3DIS - -| Method | Split | Lr schd | Mem (GB) | Inf time (fps) | mIoU (Val set) | Download | -| :-------------------------------------------------------------------------: | :----: | :--------: | :------: | :------------: | :------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++ (SSG)](./pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py) | Area_5 | cosine 50e | 3.6 | | 56.93 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class_20210514_144205-995d0119.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class_20210514_144205.log.json) | -| [PointNet++ (MSG)](./pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py) | Area_5 | cosine 80e | 3.6 | | 58.04 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class_20210514_144307-b2059817.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class_20210514_144307.log.json) | - -**Notes:** - -- We use XYZ+Color+Normalized_XYZ as input in all the experiments on S3DIS datasets. -- `Area_5` Split means training the model on Area_1, 2, 3, 4, 6 and testing on Area_5. 
- -## Indeterminism - -Since PointNet++ testing adopts sliding patch inference which involves random point sampling, and the test script uses fixed random seeds while the random seeds of validation in training are not fixed, the test results may be slightly different from the results reported above. - -## Citation - -```latex -@inproceedings{qi2017pointnet++, - title={PointNet++ deep hierarchical feature learning on point sets in a metric space}, - author={Qi, Charles R and Yi, Li and Su, Hao and Guibas, Leonidas J}, - booktitle={Proceedings of the 31st International Conference on Neural Information Processing Systems}, - pages={5105--5114}, - year={2017} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/pointnet2/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/pointnet2/metafile.yml deleted file mode 100644 index e7e51759..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointnet2/metafile.yml +++ /dev/null @@ -1,94 +0,0 @@ -Collections: - - Name: PointNet++ - Metadata: - Training Techniques: - - Adam - Training Resources: 2x Titan XP GPUs - Architecture: - - PointNet++ - Paper: - URL: https://arxiv.org/abs/1706.02413 - Title: 'PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space' - README: configs/pointnet2/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/backbones/pointnet2_sa_ssg.py#L12 - Version: v0.14.0 - -Models: - - Name: pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 1.9 - Results: - - Task: 3D Semantic Segmentation - Dataset: ScanNet - Metrics: - mIoU: 53.91 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143628-4e341a48.pth - - - Name: pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 1.9 - Results: - - Task: 3D Semantic Segmentation - Dataset: ScanNet - Metrics: - mIoU: 54.44 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143644-ee73704a.pth - - - Name: pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 2.4 - Results: - - Task: 3D Semantic Segmentation - Dataset: ScanNet - Metrics: - mIoU: 54.26 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class_20210514_143838-b4a3cf89.pth - - - Name: pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 2.4 - Results: - - Task: 3D Semantic Segmentation - Dataset: ScanNet - Metrics: - mIoU: 55.05 - Weights: 
https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class_20210514_144009-24477ab1.pth - - - Name: pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py - Metadata: - Training Data: S3DIS - Training Memory (GB): 3.6 - Results: - - Task: 3D Semantic Segmentation - Dataset: S3DIS - Metrics: - mIoU: 56.93 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class_20210514_144205-995d0119.pth - - - Name: pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py - Metadata: - Training Data: S3DIS - Training Memory (GB): 3.6 - Results: - - Task: 3D Semantic Segmentation - Dataset: S3DIS - Metrics: - mIoU: 58.04 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class_20210514_144307-b2059817.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py b/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py deleted file mode 100644 index fbad158d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py +++ /dev/null @@ -1,36 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet_seg-3d-20class.py', - '../_base_/models/pointnet2_msg.py', - '../_base_/schedules/seg_cosine_200e.py', '../_base_/default_runtime.py' -] - -# data settings -data = dict(samples_per_gpu=16) -evaluation = dict(interval=5) - -# model settings -model = dict( - decode_head=dict( - num_classes=20, - ignore_index=20, - # `class_weight` is generated in data pre-processing, saved in - # `data/scannet/seg_info/train_label_weight.npy` - # you can copy paste the values here, or input the file path as - # `class_weight=data/scannet/seg_info/train_label_weight.npy` - loss_decode=dict(class_weight=[ - 2.389689, 2.7215734, 4.5944676, 4.8543367, 4.096086, 4.907941, - 4.690836, 4.512031, 4.623311, 4.9242644, 5.358117, 5.360071, - 5.019636, 4.967126, 5.3502126, 5.4023647, 5.4027233, 5.4169416, - 5.3954206, 4.6971426 - ])), - test_cfg=dict( - num_points=8192, - block_size=1.5, - sample_rate=0.5, - use_normalized_coord=False, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=5) -# PointNet2-MSG needs longer training time than PointNet2-SSG -runner = dict(type='EpochBasedRunner', max_epochs=250) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py b/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py deleted file mode 100644 index ed1e3c43..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py +++ /dev/null @@ -1,27 +0,0 @@ -_base_ = [ - '../_base_/datasets/s3dis_seg-3d-13class.py', - '../_base_/models/pointnet2_msg.py', - '../_base_/schedules/seg_cosine_50e.py', '../_base_/default_runtime.py' -] - -# data settings -data = dict(samples_per_gpu=16) -evaluation = dict(interval=2) - -# model settings -model 
= dict( - backbone=dict(in_channels=9), # [xyz, rgb, normalized_xyz] - decode_head=dict( - num_classes=13, ignore_index=13, - loss_decode=dict(class_weight=None)), # S3DIS doesn't use class_weight - test_cfg=dict( - num_points=4096, - block_size=1.0, - sample_rate=0.5, - use_normalized_coord=True, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=2) -# PointNet2-MSG needs longer training time than PointNet2-SSG -runner = dict(type='EpochBasedRunner', max_epochs=80) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py b/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py deleted file mode 100644 index 2cb7ee18..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py +++ /dev/null @@ -1,166 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet_seg-3d-20class.py', - '../_base_/models/pointnet2_msg.py', - '../_base_/schedules/seg_cosine_200e.py', '../_base_/default_runtime.py' -] - -# dataset settings -# in this setting, we only use xyz as network input -# so we need to re-write all the data pipeline -dataset_type = 'ScanNetSegDataset' -data_root = './data/scannet/' -class_names = ('wall', 'floor', 'cabinet', 'bed', 'chair', 'sofa', 'table', - 'door', 'window', 'bookshelf', 'picture', 'counter', 'desk', - 'curtain', 'refrigerator', 'showercurtrain', 'toilet', 'sink', - 'bathtub', 'otherfurniture') -num_points = 8192 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), # only load xyz coordinates - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.5, - ignore_index=len(class_names), - use_normalized_coord=False, - enlarge_size=0.2, - min_unique_num=None), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - # a wrapper in order to successfully call test function - # actually we don't perform test-time-aug - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.0, - flip_ratio_bev_vertical=0.0), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -# we need to load gt seg_mask! 
-eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict( - samples_per_gpu=16, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - ignore_index=len(class_names), - scene_idxs=data_root + 'seg_info/train_resampled_scene_idxs.npy'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names)), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names))) - -evaluation = dict(pipeline=eval_pipeline, interval=5) - -# model settings -model = dict( - backbone=dict(in_channels=3), # only [xyz] - decode_head=dict( - num_classes=20, - ignore_index=20, - # `class_weight` is generated in data pre-processing, saved in - # `data/scannet/seg_info/train_label_weight.npy` - # you can copy paste the values here, or input the file path as - # `class_weight=data/scannet/seg_info/train_label_weight.npy` - loss_decode=dict(class_weight=[ - 2.389689, 2.7215734, 4.5944676, 4.8543367, 4.096086, 4.907941, - 4.690836, 4.512031, 4.623311, 4.9242644, 5.358117, 5.360071, - 5.019636, 4.967126, 5.3502126, 5.4023647, 5.4027233, 5.4169416, - 5.3954206, 4.6971426 - ])), - test_cfg=dict( - num_points=8192, - block_size=1.5, - sample_rate=0.5, - use_normalized_coord=False, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=5) -# PointNet2-MSG needs longer training time than PointNet2-SSG -runner = dict(type='EpochBasedRunner', max_epochs=250) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py b/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py deleted file mode 100644 index b5261077..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py +++ /dev/null @@ -1,34 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet_seg-3d-20class.py', - '../_base_/models/pointnet2_ssg.py', - '../_base_/schedules/seg_cosine_200e.py', '../_base_/default_runtime.py' -] - -# data settings -data = dict(samples_per_gpu=16) -evaluation = dict(interval=5) - -# model settings -model = dict( - decode_head=dict( - num_classes=20, - ignore_index=20, - # `class_weight` is generated in data pre-processing, saved in - # `data/scannet/seg_info/train_label_weight.npy` - # you can copy paste the values here, or input the file path as - # `class_weight=data/scannet/seg_info/train_label_weight.npy` - loss_decode=dict(class_weight=[ - 2.389689, 2.7215734, 4.5944676, 4.8543367, 4.096086, 4.907941, - 4.690836, 4.512031, 4.623311, 4.9242644, 5.358117, 5.360071, - 5.019636, 4.967126, 5.3502126, 5.4023647, 
5.4027233, 5.4169416, - 5.3954206, 4.6971426 - ])), - test_cfg=dict( - num_points=8192, - block_size=1.5, - sample_rate=0.5, - use_normalized_coord=False, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=5) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py b/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py deleted file mode 100644 index b14100d1..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py +++ /dev/null @@ -1,25 +0,0 @@ -_base_ = [ - '../_base_/datasets/s3dis_seg-3d-13class.py', - '../_base_/models/pointnet2_ssg.py', - '../_base_/schedules/seg_cosine_50e.py', '../_base_/default_runtime.py' -] - -# data settings -data = dict(samples_per_gpu=16) -evaluation = dict(interval=2) - -# model settings -model = dict( - backbone=dict(in_channels=9), # [xyz, rgb, normalized_xyz] - decode_head=dict( - num_classes=13, ignore_index=13, - loss_decode=dict(class_weight=None)), # S3DIS doesn't use class_weight - test_cfg=dict( - num_points=4096, - block_size=1.0, - sample_rate=0.5, - use_normalized_coord=True, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=2) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py b/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py deleted file mode 100644 index 9dff449c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py +++ /dev/null @@ -1,164 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet_seg-3d-20class.py', - '../_base_/models/pointnet2_ssg.py', - '../_base_/schedules/seg_cosine_200e.py', '../_base_/default_runtime.py' -] - -# dataset settings -# in this setting, we only use xyz as network input -# so we need to re-write all the data pipeline -dataset_type = 'ScanNetSegDataset' -data_root = './data/scannet/' -class_names = ('wall', 'floor', 'cabinet', 'bed', 'chair', 'sofa', 'table', - 'door', 'window', 'bookshelf', 'picture', 'counter', 'desk', - 'curtain', 'refrigerator', 'showercurtrain', 'toilet', 'sink', - 'bathtub', 'otherfurniture') -num_points = 8192 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), # only load xyz coordinates - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.5, - ignore_index=len(class_names), - use_normalized_coord=False, - enlarge_size=0.2, - min_unique_num=None), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - # a wrapper in order to successfully call test function - # actually we don't perform test-time-aug - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - 
type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.0, - flip_ratio_bev_vertical=0.0), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -# we need to load gt seg_mask! -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict( - samples_per_gpu=16, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - ignore_index=len(class_names), - scene_idxs=data_root + 'seg_info/train_resampled_scene_idxs.npy'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names)), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names))) - -evaluation = dict(pipeline=eval_pipeline, interval=5) - -# model settings -model = dict( - backbone=dict(in_channels=3), # only [xyz] - decode_head=dict( - num_classes=20, - ignore_index=20, - # `class_weight` is generated in data pre-processing, saved in - # `data/scannet/seg_info/train_label_weight.npy` - # you can copy paste the values here, or input the file path as - # `class_weight=data/scannet/seg_info/train_label_weight.npy` - loss_decode=dict(class_weight=[ - 2.389689, 2.7215734, 4.5944676, 4.8543367, 4.096086, 4.907941, - 4.690836, 4.512031, 4.623311, 4.9242644, 5.358117, 5.360071, - 5.019636, 4.967126, 5.3502126, 5.4023647, 5.4027233, 5.4169416, - 5.3954206, 4.6971426 - ])), - test_cfg=dict( - num_points=8192, - block_size=1.5, - sample_rate=0.5, - use_normalized_coord=False, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=5) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/README.md b/cv/3d_detection/paconv/pytorch/configs/pointpillars/README.md deleted file mode 100644 index 62090972..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/README.md +++ /dev/null @@ -1,78 +0,0 @@ -# PointPillars: Fast Encoders for Object Detection from Point Clouds - -> [PointPillars: Fast Encoders for Object Detection from Point Clouds](https://arxiv.org/abs/1812.05784) - - - -## Abstract - -Object detection in point clouds is an important aspect of many robotics applications such as autonomous driving. In this paper we consider the problem of encoding a point cloud into a format appropriate for a downstream detection pipeline. 
Recent literature suggests two types of encoders; fixed encoders tend to be fast but sacrifice accuracy, while encoders that are learned from data are more accurate, but slower. In this work we propose PointPillars, a novel encoder which utilizes PointNets to learn a representation of point clouds organized in vertical columns (pillars). While the encoded features can be used with any standard 2D convolutional detection architecture, we further propose a lean downstream network. Extensive experimentation shows that PointPillars outperforms previous encoders with respect to both speed and accuracy by a large margin. Despite only using lidar, our full detection pipeline significantly outperforms the state of the art, even among fusion methods, with respect to both the 3D and bird's eye view KITTI benchmarks. This detection performance is achieved while running at 62 Hz: a 2 - 4 fold runtime improvement. A faster version of our method matches the state of the art at 105 Hz. These benchmarks suggest that PointPillars is an appropriate encoding for object detection in point clouds. - -
- -## Introduction - -We implement PointPillars and provide the results and checkpoints on KITTI, nuScenes, Lyft and Waymo datasets. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | AP | Download | -| :------------------------------------------------------------: | :-----: | :---------: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py) | Car | cyclic 160e | 5.4 | | 77.6 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606-d42d15ed.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606.log.json) | -| [SECFPN](./hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py) | 3 Class | cyclic 160e | 5.5 | | 64.07 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306-37dc2420.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306.log.json) | - -### nuScenes - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :---------------------------------------------------------------------: | :-----: | :------: | :------------: | :---: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.4 | | 34.33 | 49.1 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20210826_225857-f19d00a3.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20210826_225857.log.json) | -| [SECFPN (FP16)](./hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py) | 2x | 8.37 | | 35.19 | 50.27 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d_20201020_222626-c3f0483e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d_20201020_222626.log.json) | -| [FPN](./hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.3 | | 39.7 | 53.2 | 
[model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20210826_104936-fca299c1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20210826_104936.log.json) | -| [FPN (FP16)](./hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py) | 2x | 8.40 | | 39.26 | 53.26 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d_20201021_120719-269f9dd6.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d_20201021_120719.log.json) | - -### Lyft - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | Private Score | Public Score | Download | -| :----------------------------------------------------------: | :-----: | :------: | :------------: | :-----------: | :----------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py) | 2x | 12.2 | | 13.8 | 14.1 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210829_100455-82b81c39.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210829_100455.log.json) | -| [FPN](./hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py) | 2x | 9.2 | | 14.8 | 15.0 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210822_095429-0b3d6196.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210822_095429.log.json) | - -### Waymo - -| Backbone | Load Interval | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP@L1 | mAPH@L1 | mAP@L2 | **mAPH@L2** | Download | -| :-----------------------------------------------------------------: | :-----------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :----: | :---------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py) | 5 | Car | 2x | 7.76 | | 70.2 | 69.6 | 62.6 | 62.1 | 
[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car_20200901_204315-302fc3e7.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car_20200901_204315.log.json) | -| [SECFPN](./hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py) | 5 | 3 Class | 2x | 8.12 | | 64.7 | 57.6 | 58.4 | 52.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class_20200831_204144-d1a706b1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class_20200831_204144.log.json) | -| above @ Car | | | 2x | 8.12 | | 68.5 | 67.9 | 60.1 | 59.6 | | -| above @ Pedestrian | | | 2x | 8.12 | | 67.8 | 50.6 | 59.6 | 44.3 | | -| above @ Cyclist | | | 2x | 8.12 | | 57.7 | 54.4 | 55.5 | 52.4 | | -| [SECFPN](./hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py) | 1 | Car | 2x | 7.76 | | 72.1 | 71.5 | 63.6 | 63.1 | [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.log.json) | -| [SECFPN](./hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py) | 1 | 3 Class | 2x | 8.12 | | 68.8 | 63.3 | 62.6 | 57.6 | [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.log.json) | -| above @ Car | | | 2x | 8.12 | | 71.6 | 71.0 | 63.1 | 62.5 | | -| above @ Pedestrian | | | 2x | 8.12 | | 70.6 | 56.7 | 62.9 | 50.2 | | -| above @ Cyclist | | | 2x | 8.12 | | 64.4 | 62.3 | 61.9 | 59.9 | | - -#### Note: - -- **Metric**: For model trained with 3 classes, the average APH@L2 (mAPH@L2) of all the categories is reported and used to rank the model. For model trained with only 1 class, the APH@L2 is reported and used to rank the model. -- **Data Split**: Here we provide several baselines for waymo dataset, among which D5 means that we divide the dataset into 5 folds and only use one fold for efficient experiments. Using the complete dataset can boost the performance a lot, especially for the detection of cyclist and pedestrian, where more than 5 mAP or mAPH improvement can be expected. -- **Implementation Details**: We basically follow the implementation in the [paper](https://arxiv.org/pdf/1912.04838.pdf) in terms of the network architecture (having a - stride of 1 for the first convolutional block). Different settings of voxelization, data augmentation and hyper parameters make these baselines outperform those in the paper by about 7 mAP for car and 4 mAP for pedestrian with only a subset of the whole dataset. All of these results are achieved without bells-and-whistles, e.g. ensemble, multi-scale training and test augmentation. -- **License Aggrement**: To comply the [license agreement of Waymo dataset](https://waymo.com/open/terms/), the pre-trained models on Waymo dataset are not released. We still release the training log as a reference to ease the future research. -- `FP16` means Mixed Precision (FP16) is adopted in training. 
With mixed precision training, we can train PointPillars with nuScenes dataset on 8 Titan XP GPUS with batch size of 2. This will cause OOM error without mixed precision training. The loss scale for PointPillars on nuScenes dataset is specifically tuned to avoid the loss to be Nan. We find 32 is more stable than 512, though loss scale 32 still cause Nan sometimes. - -## Citation - -```latex -@inproceedings{lang2019pointpillars, - title={Pointpillars: Fast encoders for object detection from point clouds}, - author={Lang, Alex H and Vora, Sourabh and Caesar, Holger and Zhou, Lubing and Yang, Jiong and Beijbom, Oscar}, - booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, - pages={12697--12705}, - year={2019} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py deleted file mode 100644 index 6cc3e2d1..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_lyft.py', - '../_base_/datasets/lyft-3d.py', '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py' -] diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py deleted file mode 100644 index 2c6ba49b..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py' -] diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py deleted file mode 100644 index 9764aa33..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py' -data = dict(samples_per_gpu=2, workers_per_gpu=2) -# fp16 settings, the loss scale is specifically tuned to avoid Nan -fp16 = dict(loss_scale=32.) 
diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_range100_2x8_2x_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_range100_2x8_2x_lyft-3d.py deleted file mode 100644 index 57c90db7..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_fpn_sbn-all_range100_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_range100_lyft.py', - '../_base_/datasets/range100_lyft-3d.py', - '../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py' -] diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py deleted file mode 100644 index d8aad2fb..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py +++ /dev/null @@ -1,81 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_secfpn_kitti.py', - '../_base_/datasets/kitti-3d-3class.py', - '../_base_/schedules/cyclic_40e.py', '../_base_/default_runtime.py' -] - -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] -# dataset settings -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -# PointPillars adopted a different sampling strategies among classes -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=5, Cyclist=5)), - classes=class_names, - sample_groups=dict(Car=15, Pedestrian=15, Cyclist=15)) - -# PointPillars uses different augmentation hyper parameters -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler, use_ground_plane=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - train=dict(dataset=dict(pipeline=train_pipeline, classes=class_names)), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -# In practice PointPillars also uses a different schedule -# optimizer -lr = 0.001 -optimizer = dict(lr=lr) -# max_norm=35 is slightly better than 10 for PointPillars in the earlier -# development of the codebase thus we keep the setting. 
But we does not -# specifically tune this parameter. -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -# PointPillars usually need longer schedule than second, we simply double -# the training schedule. Do remind that since we use RepeatDataset and -# repeat factor is 2, so we actually train 160 epochs. -runner = dict(max_epochs=80) - -# Use evaluation interval=2 reduce the number of evaluation timese -evaluation = dict(interval=2) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py deleted file mode 100644 index 3537ce3e..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py +++ /dev/null @@ -1,87 +0,0 @@ -# model settings -_base_ = './hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py' - -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] -model = dict( - bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[[0, -39.68, -1.78, 69.12, 39.68, -1.78]], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=True)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - allowed_border=0, - pos_weight=-1, - debug=False)) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - sample_groups=dict(Car=15), - classes=class_names) - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler, use_ground_plane=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - train=dict( - type='RepeatDataset', - times=2, - dataset=dict(pipeline=train_pipeline, classes=class_names)), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) diff --git 
a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py deleted file mode 100644 index 1a0400eb..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,43 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_lyft.py', - '../_base_/datasets/lyft-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - pts_neck=dict( - _delete_=True, - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[[-80, -80, -1.0715024, 80, 80, -1.0715024], - [-80, -80, -0.3033737, 80, 80, -0.3033737], - [-80, -80, -0.3519405, 80, 80, -0.3519405], - [-80, -80, -0.8871424, 80, 80, -0.8871424], - [-80, -80, -0.6276341, 80, 80, -0.6276341], - [-80, -80, -1.3220503, 80, 80, -1.3220503], - [-80, -80, -1.0709302, 80, 80, -1.0709302], - [-80, -80, -0.9122268, 80, 80, -0.9122268], - [-80, -80, -1.8012227, 80, 80, -1.8012227]], - sizes=[ - [4.75, 1.92, 1.71], # car - [10.24, 2.84, 3.44], # truck - [12.70, 2.92, 3.42], # bus - [6.52, 2.42, 2.34], # emergency vehicle - [8.17, 2.75, 3.20], # other vehicle - [2.35, 0.96, 1.59], # motorcycle - [1.76, 0.63, 1.44], # bicycle - [0.80, 0.76, 1.76], # pedestrian - [0.73, 0.35, 0.50] # animal - ], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py deleted file mode 100644 index afff99c6..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py +++ /dev/null @@ -1,42 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - pts_neck=dict( - _delete_=True, - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[ - [-49.6, -49.6, -1.80032795, 49.6, 49.6, -1.80032795], - [-49.6, -49.6, -1.74440365, 49.6, 49.6, -1.74440365], - [-49.6, -49.6, -1.68526504, 49.6, 49.6, -1.68526504], - [-49.6, -49.6, -1.67339111, 49.6, 49.6, -1.67339111], - [-49.6, -49.6, -1.61785072, 49.6, 49.6, -1.61785072], - [-49.6, -49.6, -1.80984986, 49.6, 49.6, -1.80984986], - [-49.6, -49.6, -1.763965, 49.6, 49.6, -1.763965], - ], - sizes=[ - [4.60718145, 1.95017717, 1.72270761], # car - [6.73778078, 2.4560939, 2.73004906], # truck - [12.01320693, 2.87427237, 3.81509561], # trailer - [1.68452161, 0.60058911, 1.27192197], # bicycle - [0.7256437, 0.66344886, 1.75748069], # pedestrian - [0.40359262, 0.39694519, 1.06232151], # traffic_cone - [0.48578221, 2.49008838, 0.98297065], # barrier - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=True))) diff --git 
a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py deleted file mode 100644 index ff0f67a0..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py' -data = dict(samples_per_gpu=2, workers_per_gpu=2) -# fp16 settings, the loss scale is specifically tuned to avoid Nan -fp16 = dict(loss_scale=32.) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py deleted file mode 100644 index 7964b799..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,42 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_range100_lyft.py', - '../_base_/datasets/range100_lyft-3d.py', - '../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - pts_neck=dict( - _delete_=True, - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[[-100, -100, -1.0715024, 100, 100, -1.0715024], - [-100, -100, -0.3033737, 100, 100, -0.3033737], - [-100, -100, -0.3519405, 100, 100, -0.3519405], - [-100, -100, -0.8871424, 100, 100, -0.8871424], - [-100, -100, -0.6276341, 100, 100, -0.6276341], - [-100, -100, -1.3220503, 100, 100, -1.3220503], - [-100, -100, -1.0709302, 100, 100, -1.0709302], - [-100, -100, -0.9122268, 100, 100, -0.9122268], - [-100, -100, -1.8012227, 100, 100, -1.8012227]], - sizes=[ - [4.75, 1.92, 1.71], # car - [10.24, 2.84, 3.44], # truck - [12.70, 2.92, 3.42], # bus - [6.52, 2.42, 2.34], # emergency vehicle - [8.17, 2.75, 3.20], # other vehicle - [2.35, 0.96, 1.59], # motorcycle - [1.76, 0.63, 1.44], # bicycle - [0.80, 0.76, 1.76], # pedestrian - [0.73, 0.35, 0.50] # animal - ], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py deleted file mode 100644 index 8655691b..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_secfpn_waymo.py', - '../_base_/datasets/waymoD5-3d-3class.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] - -# data settings -data = dict(train=dict(dataset=dict(load_interval=1))) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py deleted file mode 100644 index 90f2a42c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py +++ /dev/null @@ -1,37 +0,0 @@ -_base_ = [ - 
'../_base_/models/hv_pointpillars_secfpn_waymo.py', - '../_base_/datasets/waymoD5-3d-car.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] - -# data settings -data = dict(train=dict(dataset=dict(load_interval=1))) - -# model settings -model = dict( - type='MVXFasterRCNN', - pts_bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-74.88, -74.88, -0.0345, 74.88, 74.88, -0.0345]], - sizes=[[4.73, 2.08, 1.77]], - rotations=[0, 1.57], - reshape_out=True)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - pts=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], - pos_weight=-1, - debug=False))) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py deleted file mode 100644 index e4f1ce5c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_secfpn_waymo.py', - '../_base_/datasets/waymoD5-3d-3class.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py deleted file mode 100644 index 3a3e3266..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py +++ /dev/null @@ -1,34 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_secfpn_waymo.py', - '../_base_/datasets/waymoD5-3d-car.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] - -# model settings -model = dict( - type='MVXFasterRCNN', - pts_bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-74.88, -74.88, -0.0345, 74.88, 74.88, -0.0345]], - sizes=[[4.73, 2.08, 1.77]], - rotations=[0, 1.57], - reshape_out=True)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - pts=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], - pos_weight=-1, - debug=False))) diff --git a/cv/3d_detection/paconv/pytorch/configs/pointpillars/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/pointpillars/metafile.yml deleted file mode 100644 index 9a898c4b..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/pointpillars/metafile.yml +++ /dev/null @@ -1,213 +0,0 @@ -Collections: - - Name: PointPillars - Metadata: - Training Techniques: - - AdamW - Architecture: - - Feature Pyramid Network - Paper: - URL: https://arxiv.org/abs/1812.05784 - Title: 'PointPillars: Fast Encoders for Object Detection from Point Clouds' - README: configs/pointpillars/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/voxel_encoders/pillar_encoder.py#L13 
- Version: v0.6.0 - -Models: - - Name: hv_pointpillars_secfpn_6x8_160e_kitti-3d-car - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py - Metadata: - Training Data: KITTI - Training Memory (GB): 5.4 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - AP: 77.6 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606-d42d15ed.pth - - - Name: hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py - Metadata: - Training Data: KITTI - Training Memory (GB): 5.5 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - AP: 64.07 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306-37dc2420.pth - - - Name: hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 16.4 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 34.33 - NDS: 49.1 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20210826_225857-f19d00a3.pth - - - Name: hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 16.3 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 39.71 - NDS: 53.15 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20210826_104936-fca299c1.pth - - - Name: hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 12.2 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 13.8 - Public Score: 14.1 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210829_100455-82b81c39.pth - - - Name: hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 9.2 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 14.0 - Public Score: 15.0 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210822_095429-0b3d6196.pth - - - Name: hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car - In Collection: PointPillars - Config: 
configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py - Metadata: - Training Data: Waymo - Training Memory (GB): 7.76 - Training Resources: 8x GeForce GTX 1080 Ti - Results: - - Task: 3D Object Detection - Dataset: Waymo - Metrics: - mAP@L1: 70.2 - mAPH@L1: 69.6 - mAP@L2: 62.6 - mAPH@L2: 62.1 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car_20200901_204315-302fc3e7.pth - - - Name: hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py - Metadata: - Training Data: Waymo - Training Memory (GB): 8.12 - Training Resources: 8x GeForce GTX 1080 Ti - Results: - - Task: 3D Object Detection - Dataset: Waymo - Metrics: - mAP@L1: 64.7 - mAPH@L1: 57.6 - mAP@L2: 58.4 - mAPH@L2: 52.1 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class_20200831_204144-d1a706b1.pth - - - Name: hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py - Metadata: - Training Data: Waymo - Training Memory (GB): 7.76 - Training Resources: 8x GeForce GTX 1080 Ti - Results: - - Task: 3D Object Detection - Dataset: Waymo - Metrics: - mAP@L1: 72.1 - mAPH@L1: 71.5 - mAP@L2: 63.6 - mAPH@L2: 63.1 - - - Name: hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py - Metadata: - Training Data: Waymo - Training Memory (GB): 8.12 - Training Resources: 8x GeForce GTX 1080 Ti - Results: - - Task: 3D Object Detection - Dataset: Waymo - Metrics: - mAP@L1: 68.8 - mAPH@L1: 63.3 - mAP@L2: 62.6 - mAPH@L2: 57.6 - - - Name: hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py - Metadata: - Training Techniques: - - AdamW - - Mixed Precision Training - Training Resources: 8x TITAN Xp - Architecture: - - Hard Voxelization - Training Data: nuScenes - Training Memory (GB): 8.37 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 35.19 - NDS: 50.27 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d_20201020_222626-c3f0483e.pth - Code: - Version: v0.7.0 - - - Name: hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py - Metadata: - Training Techniques: - - AdamW - - Mixed Precision Training - Training Resources: 8x TITAN Xp - Architecture: - - Hard Voxelization - Training Data: nuScenes - Training Memory (GB): 8.40 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 39.26 - NDS: 53.26 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d_20201021_120719-269f9dd6.pth - Code: - Version: v0.7.0 diff --git a/cv/3d_detection/paconv/pytorch/configs/regnet/README.md b/cv/3d_detection/paconv/pytorch/configs/regnet/README.md deleted file mode 100644 index 
f15b94fe..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/regnet/README.md +++ /dev/null @@ -1,82 +0,0 @@ -# Designing Network Design Spaces - -> [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) - - - -## Abstract - -In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs. - -
- -## Introduction - -We implement RegNetX models in 3D detection systems and provide their first results with PointPillars on nuScenes and Lyft dataset. - -The pre-trained modles are converted from [model zoo of pycls](https://github.com/facebookresearch/pycls/blob/master/MODEL_ZOO.md) and maintained in [mmcv](https://github.com/open-mmlab/mmcv). - -## Usage - -To use a regnet model, there are two steps to do: - -1. Convert the model to ResNet-style supported by MMDetection -2. Modify backbone and neck in config accordingly - -### Convert model - -We already prepare models of FLOPs from 800M to 12G in our model zoo. - -For more general usage, we also provide script `regnet2mmdet.py` in the tools directory to convert the key of models pretrained by [pycls](https://github.com/facebookresearch/pycls/) to -ResNet-style checkpoints used in MMDetection. - -```bash -python -u tools/model_converters/regnet2mmdet.py ${PRETRAIN_PATH} ${STORE_PATH} -``` - -This script convert model from `PRETRAIN_PATH` and store the converted model in `STORE_PATH`. - -### Modify config - -The users can modify the config's `depth` of backbone and corresponding keys in `arch` according to the configs in the [pycls model zoo](https://github.com/facebookresearch/pycls/blob/master/MODEL_ZOO.md). -The parameter `in_channels` in FPN can be found in the Figure 15 & 16 of the paper (`wi` in the legend). -This directory already provides some configs with their performance, using RegNetX from 800MF to 12GF level. -For other pre-trained models or self-implemented regnet models, the users are responsible to check these parameters by themselves. - -**Note**: Although Fig. 15 & 16 also provide `w0`, `wa`, `wm`, `group_w`, and `bot_mul` for `arch`, they are quantized thus inaccurate, using them sometimes produces different backbone that does not match the key in the pre-trained model. 
- -## Results and models - -### nuScenes - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :------------------------------------------------------------------------------------: | :-----: | :------: | :------------: | :---: | :--: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](../pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.4 | | 35.17 | 49.7 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230725-0817d270.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230725.log.json) | -| [RegNetX-400MF-SECFPN](./hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.4 | | 41.2 | 55.2 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230334-53044f32.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230334.log.json) | -| [FPN](../pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 17.1 | | 40.0 | 53.3 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20200620_230405-2fa62f3d.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20200620_230405.log.json) | -| [RegNetX-400MF-FPN](./hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 17.3 | | 44.8 | 56.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d_20200620_230239-c694dce7.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d_20200620_230239.log.json) | -| [RegNetX-1.6gF-FPN](./hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 24.0 | | 48.2 | 59.3 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d_20200629_050311-dcd4e090.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d_20200629_050311.log.json) | - -### Lyft - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | Private Score | Public Score | Download | -| :-------------------------------------------------------------------------------------: | :-----: | 
:------: | :------------: | :-----------: | :----------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](../pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py) | 2x | 12.2 | | 13.9 | 14.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210517_204807-2518e3de.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210517_204807.log.json) | -| [RegNetX-400MF-SECFPN](./hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_lyft-3d.py) | 2x | 15.9 | | 14.9 | 15.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d_20210524_092151-42513826.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d_20210524_092151.log.json) | -| [FPN](../pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py) | 2x | 9.2 | | 14.9 | 15.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210517_202818-fc6904c3.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210517_202818.log.json) | -| [RegNetX-400MF-FPN](./hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_lyft-3d.py) | 2x | 13.0 | | 16.0 | 16.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d_20210521_115618-823dcf18.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d_20210521_115618.log.json) | - -## Citation - -```latex -@article{radosavovic2020designing, - title={Designing Network Design Spaces}, - author={Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Dollár}, - year={2020}, - eprint={2003.13678}, - archivePrefix={arXiv}, - primaryClass={cs.CV} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py deleted file mode 100644 index 0574be57..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - 
type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_1.6gf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_1.6gf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[168, 408, 912])) diff --git a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py deleted file mode 100644 index 1f391a32..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_lyft.py', - '../_base_/datasets/lyft-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch=dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) diff --git a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py deleted file mode 100644 index 884729cc..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch=dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) diff --git a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_fp16_2x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_fp16_2x8_2x_nus-3d.py deleted file mode 100644 index e5863652..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_fp16_2x8_2x_nus-3d.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py' -data = dict(samples_per_gpu=2, workers_per_gpu=2) -# fp16 settings, the loss scale is specifically tuned to avoid Nan -fp16 = dict(loss_scale=32.) 
diff --git a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py deleted file mode 100644 index fef308df..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_range100_lyft.py', - '../_base_/datasets/range100_lyft-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch=dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) diff --git a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py deleted file mode 100644 index fb330d78..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = './hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py' -# model settings -model = dict( - pts_neck=dict( - type='SECONDFPN', - _delete_=True, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 160, 384], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - type='Anchor3DHead', - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[[-80, -80, -1.0715024, 80, 80, -1.0715024], - [-80, -80, -0.3033737, 80, 80, -0.3033737], - [-80, -80, -0.3519405, 80, 80, -0.3519405], - [-80, -80, -0.8871424, 80, 80, -0.8871424], - [-80, -80, -0.6276341, 80, 80, -0.6276341], - [-80, -80, -1.3220503, 80, 80, -1.3220503], - [-80, -80, -1.0709302, 80, 80, -1.0709302], - [-80, -80, -0.9122268, 80, 80, -0.9122268], - [-80, -80, -1.8012227, 80, 80, -1.8012227]], - sizes=[ - [4.75, 1.92, 1.71], # car - [10.24, 2.84, 3.44], # truck - [12.70, 2.92, 3.42], # bus - [6.52, 2.42, 2.34], # emergency vehicle - [8.17, 2.75, 3.20], # other vehicle - [2.35, 0.96, 1.59], # motorcycle - [1.76, 0.63, 1.44], # bicycle - [0.80, 0.76, 1.76], # pedestrian - [0.73, 0.35, 0.50] # animal - ], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py deleted file mode 100644 index ef8996a1..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py +++ /dev/null @@ -1,38 +0,0 @@ -_base_ = './hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py' -# model settings -model = dict( - pts_neck=dict( - type='SECONDFPN', - _delete_=True, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 160, 384], - 
upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - type='Anchor3DHead', - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[ - [-49.6, -49.6, -1.80032795, 49.6, 49.6, -1.80032795], - [-49.6, -49.6, -1.74440365, 49.6, 49.6, -1.74440365], - [-49.6, -49.6, -1.68526504, 49.6, 49.6, -1.68526504], - [-49.6, -49.6, -1.67339111, 49.6, 49.6, -1.67339111], - [-49.6, -49.6, -1.61785072, 49.6, 49.6, -1.61785072], - [-49.6, -49.6, -1.80984986, 49.6, 49.6, -1.80984986], - [-49.6, -49.6, -1.763965, 49.6, 49.6, -1.763965], - ], - sizes=[ - [4.60718145, 1.95017717, 1.72270761], # car - [6.73778078, 2.4560939, 2.73004906], # truck - [12.01320693, 2.87427237, 3.81509561], # trailer - [1.68452161, 0.60058911, 1.27192197], # bicycle - [0.7256437, 0.66344886, 1.75748069], # pedestrian - [0.40359262, 0.39694519, 1.06232151], # traffic_cone - [0.48578221, 2.49008838, 0.98297065], # barrier - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py deleted file mode 100644 index 2af3719c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,40 +0,0 @@ -_base_ = \ - './hv_pointpillars_regnet-400mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py' -# model settings -model = dict( - pts_neck=dict( - type='SECONDFPN', - _delete_=True, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 160, 384], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - type='Anchor3DHead', - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[[-100, -100, -1.0715024, 100, 100, -1.0715024], - [-100, -100, -0.3033737, 100, 100, -0.3033737], - [-100, -100, -0.3519405, 100, 100, -0.3519405], - [-100, -100, -0.8871424, 100, 100, -0.8871424], - [-100, -100, -0.6276341, 100, 100, -0.6276341], - [-100, -100, -1.3220503, 100, 100, -1.3220503], - [-100, -100, -1.0709302, 100, 100, -1.0709302], - [-100, -100, -0.9122268, 100, 100, -0.9122268], - [-100, -100, -1.8012227, 100, 100, -1.8012227]], - sizes=[ - [4.75, 1.92, 1.71], # car - [10.24, 2.84, 3.44], # truck - [12.70, 2.92, 3.42], # bus - [6.52, 2.42, 2.34], # emergency vehicle - [8.17, 2.75, 3.20], # other vehicle - [2.35, 0.96, 1.59], # motorcycle - [1.76, 0.63, 1.44], # bicycle - [0.80, 0.76, 1.76], # pedestrian - [0.73, 0.35, 0.50] # animal - ], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/paconv/pytorch/configs/regnet/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/regnet/metafile.yml deleted file mode 100644 index 18f13b1d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/regnet/metafile.yml +++ /dev/null @@ -1,85 +0,0 @@ -Models: - - Name: hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d - In Collection: PointPillars - Config: configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 16.4 - Architecture: - - RegNetX - - Hard Voxelization - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 41.2 - NDS: 55.2 - Weights: 
https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230334-53044f32.pth - - - Name: hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d - In Collection: PointPillars - Config: configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 17.3 - Architecture: - - RegNetX - - Hard Voxelization - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 44.8 - NDS: 56.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d_20200620_230239-c694dce7.pth - - - Name: hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d - In Collection: PointPillars - Config: configs/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 24.0 - Architecture: - - RegNetX - - Hard Voxelization - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 48.2 - NDS: 59.3 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d_20200629_050311-dcd4e090.pth - - - Name: hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d - In Collection: PointPillars - Config: configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 15.9 - Architecture: - - RegNetX - - Hard Voxelization - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 14.9 - Public Score: 15.1 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d_20210524_092151-42513826.pth - - - Name: hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d - In Collection: PointPillars - Config: configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 13.0 - Architecture: - - RegNetX - - Hard Voxelization - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 16.0 - Public Score: 16.1 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d_20210521_115618-823dcf18.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/sassd/README.md b/cv/3d_detection/paconv/pytorch/configs/sassd/README.md deleted file mode 100644 index 3a4444a0..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/sassd/README.md +++ /dev/null @@ -1,28 +0,0 @@ -# Structure Aware Single-stage 3D Object Detection from Point Cloud - -> [Structure Aware Single-stage 3D Object Detection from Point Cloud]([https://arxiv.org/abs/2104.02323](https://openaccess.thecvf.com/content_CVPR_2020/papers/He_Structure_Aware_Single-Stage_3D_Object_Detection_From_Point_Cloud_CVPR_2020_paper.pdf)) - - - -## Abstract - -3D object detection from point cloud data plays an essential role in autonomous driving. Current single-stage detectors are efficient by progressively downscaling the 3D point clouds in a fully convolutional manner. 
However, the downscaled features inevitably lose spatial information and cannot make full use of the structure information of 3D point cloud, degrading their localization precision. In this work, we propose to improve the localization precision of single-stage detectors by explicitly leveraging the structure information of 3D point cloud. Specifically, we design an auxiliary network which converts the convolutional features in the backbone network back to point-level representations. The auxiliary network is jointly optimized, by two point-level supervisions, to guide the convolutional features in the backbone network to be aware of the object structure. The auxiliary network can be detached after training and therefore introduces no extra computation in the inference stage. Besides, considering that single-stage detectors suffer from the discordance between the predicted bounding boxes and corresponding classification confidences, we develop an efficient part-sensitive warping operation to align the confidences to the predicted bounding boxes. Our proposed detector ranks at the top of KITTI 3D/BEV detection leaderboards and runs at 25 FPS for inference. - -
- -
- -## Introduction - -We implement SA-SSD and provide the results and checkpoints on KITTI dataset. - -## Citation - -```latex -@InProceedings{he2020sassd, - title={Structure Aware Single-stage 3D Object Detection from Point Cloud}, - author={He, Chenhang and Zeng, Hui and Huang, Jianqiang and Hua, Xian-Sheng and Zhang, Lei}, - booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, - year={2020} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/sassd/sassd_6x8_80e_kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/sassd/sassd_6x8_80e_kitti-3d-3class.py deleted file mode 100644 index efc67c7d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/sassd/sassd_6x8_80e_kitti-3d-3class.py +++ /dev/null @@ -1,94 +0,0 @@ -_base_ = [ - '../_base_/datasets/kitti-3d-3class.py', - '../_base_/schedules/cyclic_40e.py', '../_base_/default_runtime.py' -] - -voxel_size = [0.05, 0.05, 0.1] - -model = dict( - type='SASSD', - voxel_layer=dict( - max_num_points=5, - point_cloud_range=[0, -40, -3, 70.4, 40, 1], - voxel_size=voxel_size, - max_voxels=(16000, 40000)), - voxel_encoder=dict(type='HardSimpleVFE'), - middle_encoder=dict( - type='SparseEncoderSASSD', - in_channels=4, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[ - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78], - ], - sizes=[[0.6, 0.8, 1.73], [0.6, 1.76, 1.73], [1.6, 3.9, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) diff --git a/cv/3d_detection/paconv/pytorch/configs/second/README.md b/cv/3d_detection/paconv/pytorch/configs/second/README.md deleted file mode 100644 index 1aa96501..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/second/README.md +++ /dev/null @@ -1,54 +0,0 @@ -# Second: Sparsely embedded convolutional detection - -> [SECOND: Sparsely Embedded Convolutional Detection](https://www.mdpi.com/1424-8220/18/10/3337) - - - -## 
Abstract - -LiDAR-based or RGB-D-based object detection is used in numerous applications, ranging from autonomous driving to robot vision. Voxel-based 3D convolutional networks have been used for some time to enhance the retention of information when processing point cloud LiDAR data. However, problems remain, including a slow inference speed and low orientation estimation performance. We therefore investigate an improved sparse convolution method for such networks, which significantly increases the speed of both training and inference. We also introduce a new form of angle loss regression to improve the orientation estimation performance and a new data augmentation approach that can enhance the convergence speed and performance. The proposed network produces state-of-the-art results on the KITTI 3D object detection benchmarks while maintaining a fast inference speed. - -
- -
- -## Introduction - -We implement SECOND and provide the results and checkpoints on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :-----------------------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_second_secfpn_6x8_80e_kitti-3d-car.py) | Car | cyclic 80e | 5.4 | | 79.07 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238.log.json) | -| [SECFPN (FP16)](./hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py) | Car | cyclic 80e | 2.9 | | 78.72 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car_20200924_211301-1f5ad833.pth)\| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car_20200924_211301.log.json) | -| [SECFPN](./hv_second_secfpn_6x8_80e_kitti-3d-3class.py) | 3 Class | cyclic 80e | 5.4 | | 65.74 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-3class/hv_second_secfpn_6x8_80e_kitti-3d-3class_20210831_022017-ae782e87.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-3class/hv_second_secfpn_6x8_80e_kitti-3d-3class_20210831_022017log.json) | -| [SECFPN (FP16)](./hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py) | 3 Class | cyclic 80e | 2.9 | | 67.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059-05f67bdf.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059.log.json) | - -### Waymo - -| Backbone | Load Interval | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP@L1 | mAPH@L1 | mAP@L2 | **mAPH@L2** | Download | -| :-----------------------------------------------------------: | :-----------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :----: | :---------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py) | 5 | 3 Class | 2x | 8.12 | | 65.3 | 61.7 | 58.9 | 55.7 | [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_sbn_4x8_2x_waymoD5-3d-3class/hv_second_secfpn_sbn_4x8_2x_waymoD5-3d-3class_20201115_112448.log.json) | -| above @ Car | | | 2x | 8.12 | | 67.1 | 66.6 | 58.7 | 58.2 | | -| above @ Pedestrian | | | 2x 
| 8.12 | | 68.1 | 59.1 | 59.5 | 51.5 | | -| above @ Cyclist | | | 2x | 8.12 | | 60.7 | 59.5 | 58.4 | 57.3 | | - -Note: - -- See more details about metrics and data split on Waymo [HERE](https://github.com/open-mmlab/mmdetection3d/tree/master/configs/pointpillars). For implementation details, we basically follow the original settings. All of these results are achieved without bells-and-whistles, e.g. ensemble, multi-scale training and test augmentation. -- `FP16` means Mixed Precision (FP16) is adopted in training. - -## Citation - -```latex -@article{yan2018second, - title={Second: Sparsely embedded convolutional detection}, - author={Yan, Yan and Mao, Yuxing and Li, Bo}, - journal={Sensors}, - year={2018}, - publisher={Multidisciplinary Digital Publishing Institute} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py deleted file mode 100644 index 0f28921f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/hv_second_secfpn_kitti.py', - '../_base_/datasets/kitti-3d-3class.py', - '../_base_/schedules/cyclic_40e.py', '../_base_/default_runtime.py' -] diff --git a/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py deleted file mode 100644 index 9ab7350a..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py +++ /dev/null @@ -1,30 +0,0 @@ -_base_ = [ - '../_base_/models/hv_second_secfpn_kitti.py', - '../_base_/datasets/kitti-3d-car.py', '../_base_/schedules/cyclic_40e.py', - '../_base_/default_runtime.py' -] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -model = dict( - bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - anchor_generator=dict( - _delete_=True, - type='Anchor3DRangeGenerator', - ranges=[[0, -40.0, -1.78, 70.4, 40.0, -1.78]], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=True)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - allowed_border=0, - pos_weight=-1, - debug=False)) diff --git a/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py deleted file mode 100644 index bf0336a4..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './hv_second_secfpn_6x8_80e_kitti-3d-3class.py' -# fp16 settings -fp16 = dict(loss_scale=512.) diff --git a/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py b/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py deleted file mode 100644 index efba5533..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './hv_second_secfpn_6x8_80e_kitti-3d-car.py' -# fp16 settings -fp16 = dict(loss_scale=512.) 
diff --git a/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py b/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py deleted file mode 100644 index 758827f8..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/second/hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py +++ /dev/null @@ -1,112 +0,0 @@ -_base_ = [ - '../_base_/models/hv_second_secfpn_waymo.py', - '../_base_/datasets/waymoD5-3d-3class.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] - -dataset_type = 'WaymoDataset' -data_root = 'data/waymo/kitti_format/' -class_names = ['Car', 'Pedestrian', 'Cyclist'] -point_cloud_range = [-76.8, -51.2, -2, 76.8, 51.2, 4] -input_modality = dict(use_lidar=True, use_camera=False) - -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'waymo_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=5, Cyclist=5)), - classes=class_names, - sample_groups=dict(Car=15, Pedestrian=10, Cyclist=10), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=[0, 1, 2, 3, 4])) - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=6, use_dim=5), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] - -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=6, use_dim=5), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_train.pkl', - split='training', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. 
- box_type_3d='LiDAR', - # load one frame every five frames - load_interval=5)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) diff --git a/cv/3d_detection/paconv/pytorch/configs/second/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/second/metafile.yml deleted file mode 100644 index 5b68fe9c..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/second/metafile.yml +++ /dev/null @@ -1,97 +0,0 @@ -Collections: - - Name: SECOND - Metadata: - Training Techniques: - - AdamW - Architecture: - - Hard Voxelization - Paper: - URL: https://www.mdpi.com/1424-8220/18/10/3337 - Title: 'SECOND: Sparsely Embedded Convolutional Detection' - README: configs/second/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/backbones/second.py#L11 - Version: v0.5.0 - -Models: - - Name: hv_second_secfpn_6x8_80e_kitti-3d-car - In Collection: SECOND - Config: configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py - Metadata: - Training Data: KITTI - Training Memory (GB): 5.4 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 79.07 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth - - - Name: hv_second_secfpn_6x8_80e_kitti-3d-3class - In Collection: SECOND - Config: configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py - Metadata: - Training Data: KITTI - Training Memory (GB): 5.4 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 65.74 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-3class/hv_second_secfpn_6x8_80e_kitti-3d-3class_20210831_022017-ae782e87.pth - - - Name: hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class - In Collection: SECOND - Config: configs/second/hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py - Metadata: - Training Data: Waymo - Training Memory (GB): 8.12 - Training Resources: 8x GeForce GTX 1080 Ti - Results: - - Task: 3D Object Detection - Dataset: Waymo - Metrics: - mAP@L1: 65.3 - mAPH@L1: 61.7 - mAP@L2: 58.9 - mAPH@L2: 55.7 - - - Name: hv_second_secfpn_fp16_6x8_80e_kitti-3d-car - In Collection: SECOND - Config: configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py - Metadata: - Training Techniques: - - AdamW - - Mixed Precision Training - Training Resources: 8x TITAN Xp - Training Data: KITTI - Training Memory (GB): 2.9 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 78.72 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car_20200924_211301-1f5ad833.pth - Code: - Version: v0.7.0 - - - Name: hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class - In Collection: SECOND - Config: configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py - Metadata: - Training Techniques: - - AdamW - - Mixed Precision Training - Training Resources: 8x TITAN Xp - Training Data: KITTI 
- Training Memory (GB): 2.9 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 67.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059-05f67bdf.pth - Code: - Version: v0.7.0 diff --git a/cv/3d_detection/paconv/pytorch/configs/smoke/README.md b/cv/3d_detection/paconv/pytorch/configs/smoke/README.md deleted file mode 100644 index 8d91314d..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/smoke/README.md +++ /dev/null @@ -1,47 +0,0 @@ -# SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation - -> [SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation](https://arxiv.org/abs/2002.10111) - - - -## Abstract - -Estimating 3D orientation and translation of objects is essential for infrastructure-less autonomous navigation and driving. In case of monocular vision, successful methods have been mainly based on two ingredients: (i) a network generating 2D region proposals, (ii) a R-CNN structure predicting 3D object pose by utilizing the acquired regions of interest. We argue that the 2D detection network is redundant and introduces non-negligible noise for 3D detection. Hence, we propose a novel 3D object detection method, named SMOKE, in this paper that predicts a 3D bounding box for each detected object by combining a single keypoint estimate with regressed 3D variables. As a second contribution, we propose a multi-step disentangling approach for constructing the 3D bounding box, which significantly improves both training convergence and detection accuracy. In contrast to previous 3D detection techniques, our method does not require complicated pre/post-processing, extra data, and a refinement stage. Despite of its structural simplicity, our proposed SMOKE network outperforms all existing monocular 3D detection methods on the KITTI dataset, giving the best state-of-the-art result on both 3D object detection and Bird's eye view evaluation. - -
- -
- -## Introduction - -We implement SMOKE and provide the results and checkpoints on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :------------------------------------------------------------------: | :-----: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [DLA34](./smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py) | 6x | 9.64 | | 13.85 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d_20210929_015553-d46d9bb0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d_20210929_015553.log.json) | - -Note: mAP represents Car moderate 3D strict AP11 results. - -Detailed performance on KITTI 3D detection (3D/BEV) is as follows, evaluated by AP11 metric: - -| | Easy | Moderate | Hard | -| ---------- | :-----------: | :-----------: | :-----------: | -| Car | 16.92 / 22.97 | 13.85 / 18.32 | 11.90 / 15.88 | -| Pedestrian | 11.13 / 12.61 | 11.10 / 11.32 | 10.67 / 11.14 | -| Cyclist | 0.99 / 1.47 | 0.54 / 0.65 | 0.55 / 0.67 | - -## Citation - -```latex -@inproceedings{liu2020smoke, - title={Smoke: Single-stage monocular 3d object detection via keypoint estimation}, - author={Liu, Zechen and Wu, Zizhang and T{\'o}th, Roland}, - booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops}, - pages={996--997}, - year={2020} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/smoke/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/smoke/metafile.yml deleted file mode 100644 index df956e49..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/smoke/metafile.yml +++ /dev/null @@ -1,30 +0,0 @@ -Collections: - - Name: SMOKE - Metadata: - Training Data: KITTI - Training Techniques: - - Adam - Training Resources: 4x V100 GPUS - Architecture: - - SMOKEMono3DHead - - DLA - Paper: - URL: https://arxiv.org/abs/2002.10111 - Title: 'SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation' - README: configs/smoke/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/v1.0.0.dev0/mmdet3d/models/detectors/smoke_mono3d.py#L7 - Version: v1.0.0 - -Models: - - Name: smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d - In Collection: SMOKE - Config: configs/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py - Metadata: - Training Memory (GB): 9.6 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 13.8 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d_20210929_015553-d46d9bb0.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py b/cv/3d_detection/paconv/pytorch/configs/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py deleted file mode 100644 index c802ce30..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py +++ /dev/null @@ -1,64 +0,0 @@ -_base_ = [ - '../_base_/datasets/kitti-mono3d.py', '../_base_/models/smoke.py', - 
'../_base_/default_runtime.py' -] - -# optimizer -optimizer = dict(type='Adam', lr=2.5e-4) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='step', warmup=None, step=[50]) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=72) -log_config = dict(interval=10) - -find_unused_parameters = True -class_names = ['Pedestrian', 'Cyclist', 'Car'] -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=False, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='RandomShiftScale', shift_scale=(0.2, 0.4), aug_prob=0.3), - dict(type='AffineResize', img_scale=(1280, 384), down_ratio=4), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'gt_bboxes_3d', 'gt_labels_3d', - 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - img_scale=(1280, 384), - flip=False, - transforms=[ - dict(type='AffineResize', img_scale=(1280, 384), down_ratio=4), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/paconv/pytorch/configs/ssn/README.md b/cv/3d_detection/paconv/pytorch/configs/ssn/README.md deleted file mode 100644 index dad03f86..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/ssn/README.md +++ /dev/null @@ -1,53 +0,0 @@ -# SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds - -> [SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds](https://arxiv.org/abs/2004.02774) - - - -## Abstract - -Multi-class 3D object detection aims to localize and classify objects of multiple categories from point clouds. Due to the nature of point clouds, i.e. unstructured, sparse and noisy, some features benefit-ting multi-class discrimination are underexploited, such as shape information. In this paper, we propose a novel 3D shape signature to explore the shape information from point clouds. By incorporating operations of symmetry, convex hull and chebyshev fitting, the proposed shape sig-nature is not only compact and effective but also robust to the noise, which serves as a soft constraint to improve the feature capability of multi-class discrimination. Based on the proposed shape signature, we develop the shape signature networks (SSN) for 3D object detection, which consist of pyramid feature encoding part, shape-aware grouping heads and explicit shape encoding objective. Experiments show that the proposed method performs remarkably better than existing methods on two large-scale datasets. Furthermore, our shape signature can act as a plug-and-play component and ablation study shows its effectiveness and good scalability. - -
- -
- -## Introduction - -We implement PointPillars with Shape-aware grouping heads used in the SSN and provide the results and checkpoints on the nuScenes and Lyft dataset. - -## Results and models - -### NuScenes - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :--------------------------------------------------------------------------------------------: | :-----: | :------: | :------------: | :---: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](../pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.4 | | 35.17 | 49.76 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230725-0817d270.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230725.log.json) | -| [SSN](./hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py) | 2x | 3.6 | | 40.91 | 54.44 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d_20210830_101351-51915986.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d_20210830_101351.log.json) | -| [RegNetX-400MF-SECFPN](../regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.4 | | 41.15 | 55.20 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230334-53044f32.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230334.log.json) | -| [RegNetX-400MF-SSN](./hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py) | 2x | 5.1 | | 46.65 | 58.24 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d_20210829_210615-361e5e04.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d_20210829_210615.log.json) | - -### Lyft - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | Private Score | Public Score | Download | -| :--------------------------------------------------------------------------: | :-----: | :------: | :------------: | :-----------: | :----------: | 
:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](../pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py) | 2x | 12.2 | | 13.9 | 14.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210517_204807-2518e3de.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210517_204807.log.json) | -| [SSN](./hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py) | 2x | 8.5 | | 17.5 | 17.5 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d_20210822_134731-46841b41.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d_20210822_134731.log.json) | -| [RegNetX-400MF-SSN](./hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py) | 2x | 7.4 | | 17.9 | 18 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d_20210829_122825-d93475a1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d_20210829_122825.log.json) | - -Note: - -The main difference of the shape-aware grouping heads with the original SECOND FPN heads is that the former groups objects with similar sizes and shapes together, and design shape-specific heads for each group. Heavier heads (with more convolutions and large strides) are designed for large objects while smaller heads for small objects. Note that there may appear different feature map sizes in the outputs, so an anchor generator tailored to these feature maps is also needed in the implementation. - -Users could try other settings in terms of the head design. Here we basically refer to the implementation [HERE](https://github.com/xinge008/SSN). 
- -## Citation - -```latex -@inproceedings{zhu2020ssn, - title={SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds}, - author={Zhu, Xinge and Ma, Yuexin and Wang, Tai and Xu, Yan and Shi, Jianping and Lin, Dahua}, - booktitle={Proceedings of the European Conference on Computer Vision}, - year={2020} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py deleted file mode 100644 index 1103bcf1..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py +++ /dev/null @@ -1,21 +0,0 @@ -_base_ = './hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py' -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch=dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) -# dataset settings -data = dict(samples_per_gpu=1, workers_per_gpu=2) diff --git a/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py deleted file mode 100644 index fb9ef316..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py +++ /dev/null @@ -1,19 +0,0 @@ -_base_ = './hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py' -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch=dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) diff --git a/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py b/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py deleted file mode 100644 index 50b33c80..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py +++ /dev/null @@ -1,224 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_lyft.py', - '../_base_/datasets/lyft-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -point_cloud_range = [-100, -100, -5, 100, 100, 3] -# Note that the order of class names should be consistent with -# the following anchors' order -class_names = [ - 'bicycle', 'motorcycle', 'pedestrian', 'animal', 'car', - 'emergency_vehicle', 'bus', 'other_vehicle', 'truck' -] - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5), - dict(type='LoadPointsFromMultiSweeps', sweeps_num=10), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict( - 
type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5), - dict(type='LoadPointsFromMultiSweeps', sweeps_num=10), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=4, - train=dict(pipeline=train_pipeline, classes=class_names), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -# model settings -model = dict( - pts_voxel_layer=dict(point_cloud_range=[-100, -100, -5, 100, 100, 3]), - pts_voxel_encoder=dict( - feat_channels=[32, 64], - point_cloud_range=[-100, -100, -5, 100, 100, 3]), - pts_middle_encoder=dict(output_shape=[800, 800]), - pts_neck=dict( - _delete_=True, - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - _delete_=True, - type='ShapeAwareHead', - num_classes=9, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGeneratorPerCls', - ranges=[[-100, -100, -1.0709302, 100, 100, -1.0709302], - [-100, -100, -1.3220503, 100, 100, -1.3220503], - [-100, -100, -0.9122268, 100, 100, -0.9122268], - [-100, -100, -1.8012227, 100, 100, -1.8012227], - [-100, -100, -1.0715024, 100, 100, -1.0715024], - [-100, -100, -0.8871424, 100, 100, -0.8871424], - [-100, -100, -0.3519405, 100, 100, -0.3519405], - [-100, -100, -0.6276341, 100, 100, -0.6276341], - [-100, -100, -0.3033737, 100, 100, -0.3033737]], - sizes=[ - [1.76, 0.63, 1.44], # bicycle - [2.35, 0.96, 1.59], # motorcycle - [0.80, 0.76, 1.76], # pedestrian - [0.73, 0.35, 0.50], # animal - [4.75, 1.92, 1.71], # car - [6.52, 2.42, 2.34], # emergency vehicle - [12.70, 2.92, 3.42], # bus - [8.17, 2.75, 3.20], # other vehicle - [10.24, 2.84, 3.44] # truck - ], - custom_values=[], - rotations=[0, 1.57], - reshape_out=False), - tasks=[ - dict( - num_class=2, - class_names=['bicycle', 'motorcycle'], - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=2, - class_names=['pedestrian', 'animal'], - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=2, - class_names=['car', 'emergency_vehicle'], - shared_conv_channels=(64, 64, 64), - shared_conv_strides=(2, 1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=3, - class_names=['bus', 'other_vehicle', 'truck'], - shared_conv_channels=(64, 64, 64), - shared_conv_strides=(2, 1, 1), - 
norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)) - ], - assign_per_class=True, - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi/4 - dir_limit_offset=0, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=7), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - pts=dict( - assigner=[ - dict( # bicycle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # motorcycle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # animal - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - dict( # emergency vehicle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # bus - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - dict( # other vehicle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # truck - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1) - ], - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], - pos_weight=-1, - debug=False))) diff --git a/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py b/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py deleted file mode 100644 index 85502014..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py +++ /dev/null @@ -1,238 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# Note that the order of class names should be consistent with -# the following anchors' order -point_cloud_range = [-50, -50, -5, 50, 50, 3] -class_names = [ - 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier', 'car', - 'truck', 'trailer', 'bus', 'construction_vehicle' -] - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5), - dict(type='LoadPointsFromMultiSweeps', sweeps_num=10), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - 
flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5), - dict(type='LoadPointsFromMultiSweeps', sweeps_num=10), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=4, - train=dict(pipeline=train_pipeline, classes=class_names), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -# model settings -model = dict( - pts_voxel_layer=dict(max_num_points=20), - pts_voxel_encoder=dict(feat_channels=[64, 64]), - pts_neck=dict( - _delete_=True, - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - _delete_=True, - type='ShapeAwareHead', - num_classes=10, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGeneratorPerCls', - ranges=[[-50, -50, -1.67339111, 50, 50, -1.67339111], - [-50, -50, -1.71396371, 50, 50, -1.71396371], - [-50, -50, -1.61785072, 50, 50, -1.61785072], - [-50, -50, -1.80984986, 50, 50, -1.80984986], - [-50, -50, -1.76396500, 50, 50, -1.76396500], - [-50, -50, -1.80032795, 50, 50, -1.80032795], - [-50, -50, -1.74440365, 50, 50, -1.74440365], - [-50, -50, -1.68526504, 50, 50, -1.68526504], - [-50, -50, -1.80673031, 50, 50, -1.80673031], - [-50, -50, -1.64824291, 50, 50, -1.64824291]], - sizes=[ - [1.68452161, 0.60058911, 1.27192197], # bicycle - [2.09973778, 0.76279481, 1.44403034], # motorcycle - [0.72564370, 0.66344886, 1.75748069], # pedestrian - [0.40359262, 0.39694519, 1.06232151], # traffic cone - [0.48578221, 2.49008838, 0.98297065], # barrier - [4.60718145, 1.95017717, 1.72270761], # car - [6.73778078, 2.45609390, 2.73004906], # truck - [12.01320693, 2.87427237, 3.81509561], # trailer - [11.1885991, 2.94046906, 3.47030982], # bus - [6.38352896, 2.73050468, 3.13312415] # construction vehicle - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=False), - tasks=[ - dict( - num_class=2, - class_names=['bicycle', 'motorcycle'], - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=1, - class_names=['pedestrian'], - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=2, - class_names=['traffic_cone', 'barrier'], - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=1, - class_names=['car'], - shared_conv_channels=(64, 64, 64), - shared_conv_strides=(2, 1, 1), - 
norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=4, - class_names=[ - 'truck', 'trailer', 'bus', 'construction_vehicle' - ], - shared_conv_channels=(64, 64, 64), - shared_conv_strides=(2, 1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)) - ], - assign_per_class=True, - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi/4 - dir_limit_offset=0, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=9), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - pts=dict( - assigner=[ - dict( # bicycle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # motorcycle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - dict( # pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # traffic cone - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # barrier - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - dict( # truck - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # trailer - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # bus - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # construction vehicle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1) - ], - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2], - pos_weight=-1, - debug=False))) diff --git a/cv/3d_detection/paconv/pytorch/configs/ssn/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/ssn/metafile.yml deleted file mode 100644 index df6dd9ed..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/ssn/metafile.yml +++ /dev/null @@ -1,72 +0,0 @@ -Collections: - - Name: SSN - Metadata: - Training Techniques: - - AdamW - Training Resources: 8x GeForce GTX 1080 Ti - Architecture: - - Hard Voxelization - Paper: - URL: https://arxiv.org/abs/2004.02774 - Title: 'SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds' - README: configs/ssn/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/dense_heads/shape_aware_head.py#L166 - Version: v0.7.0 - -Models: - - Name: hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d - In Collection: SSN - Config: 
configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 3.6 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 40.91 - NDS: 54.44 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d_20210830_101351-51915986.pth - - - Name: hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d - In Collection: SSN - Config: configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 5.1 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 46.65 - NDS: 58.24 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d_20210829_210615-361e5e04.pth - - - Name: hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d - In Collection: SSN - Config: configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 8.5 - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 17.5 - Public Score: 17.5 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d_20210822_134731-46841b41.pth - - - Name: hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d - In Collection: SSN - Config: configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 7.4 - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 17.9 - Public Score: 18.0 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d_20210829_122825-d93475a1.pth diff --git a/cv/3d_detection/paconv/pytorch/configs/votenet/README.md b/cv/3d_detection/paconv/pytorch/configs/votenet/README.md deleted file mode 100644 index d74486f0..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/votenet/README.md +++ /dev/null @@ -1,68 +0,0 @@ -# Deep Hough Voting for 3D Object Detection in Point Clouds - -> [Deep Hough Voting for 3D Object Detection in Point Clouds](https://arxiv.org/abs/1904.09664) - - - -## Abstract - -Current 3D object detection methods are heavily influenced by 2D detectors. In order to leverage architectures in 2D detectors, they often convert 3D point clouds to regular grids (i.e., to voxel grids or to bird's eye view images), or rely on detection in 2D images to propose 3D boxes. Few works have attempted to directly detect objects in point clouds. In this work, we return to first principles to construct a 3D detection pipeline for point cloud data and as generic as possible. However, due to the sparse nature of the data -- samples from 2D manifolds in 3D space -- we face a major challenge when directly predicting bounding box parameters from scene points: a 3D object centroid can be far from any surface point thus hard to regress accurately in one step. To address the challenge, we propose VoteNet, an end-to-end 3D object detection network based on a synergy of deep point set networks and Hough voting. Our model achieves state-of-the-art 3D detection on two large datasets of real 3D scans, ScanNet and SUN RGB-D with a simple design, compact model size and high efficiency. 
Remarkably, VoteNet outperforms previous methods by using purely geometric information without relying on color images. - -
- -## Introduction - -We implement VoteNet and provide the result and checkpoints on ScanNet and SUNRGBD datasets. - -## Results and models - -### ScanNet - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :-----------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++](./votenet_8x8_scannet-3d-18class.py) | 3x | 4.1 | | 62.34 | 40.82 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_8x8_scannet-3d-18class/votenet_8x8_scannet-3d-18class_20210823_234503-cf8134fa.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_8x8_scannet-3d-18class/votenet_8x8_scannet-3d-18class_20210823_234503.log.json) | - -### SUNRGBD - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :------------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++](./votenet_16x8_sunrgbd-3d-10class.py) | 3x | 8.1 | | 59.78 | 35.77 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_16x8_sunrgbd-3d-10class/votenet_16x8_sunrgbd-3d-10class_20210820_162823-bf11f014.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_16x8_sunrgbd-3d-10class/votenet_16x8_sunrgbd-3d-10class_20210820_162823.log.json) | - -**Notice**: If your current mmdetection3d version >= 0.6.0, and you are using the checkpoints downloaded from the above links or using checkpoints trained with mmdetection3d version \< 0.6.0, the checkpoints have to be first converted via [tools/model_converters/convert_votenet_checkpoints.py](../../tools/model_converters/convert_votenet_checkpoints.py): - -``` -python ./tools/model_converters/convert_votenet_checkpoints.py ${ORIGINAL_CHECKPOINT_PATH} --out=${NEW_CHECKPOINT_PATH} -``` - -Then you can use the converted checkpoints following [getting_started.md](../../docs/en/getting_started.md). - -## Indeterminism - -Since test data preparation randomly downsamples the points, and the test script uses fixed random seeds while the random seeds of validation in training are not fixed, the test results may be slightly different from the results reported above. - -## IoU loss - -Adding IoU loss (simply = 1-IoU) boosts VoteNet's performance. 
To use IoU loss, add this loss term to the config file: - -```python -iou_loss=dict(type='AxisAlignedIoULoss', reduction='sum', loss_weight=10.0 / 3.0) -``` - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :-------------------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :------: | -| [PointNet++](./votenet_iouloss_8x8_scannet-3d-18class.py) | 3x | 4.1 | | 63.81 | 44.21 | / | - -For now, we only support calculating IoU loss for axis-aligned bounding boxes since the CUDA op of general 3D IoU calculation does not implement the backward method. Therefore, IoU loss can only be used for ScanNet dataset for now. - -## Citation - -```latex -@inproceedings{qi2019deep, - author = {Qi, Charles R and Litany, Or and He, Kaiming and Guibas, Leonidas J}, - title = {Deep Hough Voting for 3D Object Detection in Point Clouds}, - booktitle = {Proceedings of the IEEE International Conference on Computer Vision}, - year = {2019} -} -``` diff --git a/cv/3d_detection/paconv/pytorch/configs/votenet/metafile.yml b/cv/3d_detection/paconv/pytorch/configs/votenet/metafile.yml deleted file mode 100644 index cd18680f..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/votenet/metafile.yml +++ /dev/null @@ -1,59 +0,0 @@ -Collections: - - Name: VoteNet - Metadata: - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - PointNet++ - Paper: - URL: https://arxiv.org/abs/1904.09664 - Title: 'Deep Hough Voting for 3D Object Detection in Point Clouds' - README: configs/votenet/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/votenet.py#L10 - Version: v0.5.0 - -Models: - - Name: votenet_16x8_sunrgbd-3d-10class.py - In Collection: VoteNet - Config: configs/votenet/votenet_16x8_sunrgbd-3d-10class.py - Metadata: - Training Data: SUNRGBD - Training Memory (GB): 8.1 - Results: - - Task: 3D Object Detection - Dataset: SUNRGBD - Metrics: - AP@0.25: 59.78 - AP@0.5: 35.77 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_16x8_sunrgbd-3d-10class/votenet_16x8_sunrgbd-3d-10class_20210820_162823-bf11f014.pth - - - Name: votenet_8x8_scannet-3d-18class.py - In Collection: VoteNet - Config: configs/votenet/votenet_8x8_scannet-3d-18class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 4.1 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 62.34 - AP@0.5: 40.82 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_8x8_scannet-3d-18class/votenet_8x8_scannet-3d-18class_20210823_234503-cf8134fa.pth - - - Name: votenet_iouloss_8x8_scannet-3d-18class - In Collection: VoteNet - Config: configs/votenet/votenet_iouloss_8x8_scannet-3d-18class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 4.1 - Architecture: - - IoU Loss - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 63.81 - AP@0.5: 44.21 diff --git a/cv/3d_detection/paconv/pytorch/configs/votenet/votenet_16x8_sunrgbd-3d-10class.py b/cv/3d_detection/paconv/pytorch/configs/votenet/votenet_16x8_sunrgbd-3d-10class.py deleted file mode 100644 index 5ddfa7ad..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/votenet/votenet_16x8_sunrgbd-3d-10class.py +++ /dev/null @@ -1,21 +0,0 @@ -_base_ = [ - '../_base_/datasets/sunrgbd-3d-10class.py', '../_base_/models/votenet.py', - '../_base_/schedules/schedule_3x.py', '../_base_/default_runtime.py' -] -# model settings 
-model = dict( - bbox_head=dict( - num_classes=10, - bbox_coder=dict( - type='PartialBinBasedBBoxCoder', - num_sizes=10, - num_dir_bins=12, - with_rot=True, - mean_sizes=[ - [2.114256, 1.620300, 0.927272], [0.791118, 1.279516, 0.718182], - [0.923508, 1.867419, 0.845495], [0.591958, 0.552978, 0.827272], - [0.699104, 0.454178, 0.75625], [0.69519, 1.346299, 0.736364], - [0.528526, 1.002642, 1.172878], [0.500618, 0.632163, 0.683424], - [0.404671, 1.071108, 1.688889], [0.76584, 1.398258, 0.472728] - ]), - )) diff --git a/cv/3d_detection/paconv/pytorch/configs/votenet/votenet_8x8_scannet-3d-18class.py b/cv/3d_detection/paconv/pytorch/configs/votenet/votenet_8x8_scannet-3d-18class.py deleted file mode 100644 index 62e56303..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/votenet/votenet_8x8_scannet-3d-18class.py +++ /dev/null @@ -1,36 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', '../_base_/models/votenet.py', - '../_base_/schedules/schedule_3x.py', '../_base_/default_runtime.py' -] - -# model settings -model = dict( - bbox_head=dict( - num_classes=18, - bbox_coder=dict( - type='PartialBinBasedBBoxCoder', - num_sizes=18, - num_dir_bins=1, - with_rot=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]]))) - -# yapf:disable -log_config = dict(interval=30) -# yapf:enable diff --git a/cv/3d_detection/paconv/pytorch/configs/votenet/votenet_iouloss_8x8_scannet-3d-18class.py b/cv/3d_detection/paconv/pytorch/configs/votenet/votenet_iouloss_8x8_scannet-3d-18class.py deleted file mode 100644 index ac2a6c00..00000000 --- a/cv/3d_detection/paconv/pytorch/configs/votenet/votenet_iouloss_8x8_scannet-3d-18class.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = ['./votenet_8x8_scannet-3d-18class.py'] - -# model settings, add iou loss -model = dict( - bbox_head=dict( - iou_loss=dict( - type='AxisAlignedIoULoss', reduction='sum', loss_weight=10.0 / - 3.0))) diff --git a/cv/3d_detection/paconv/pytorch/data/s3dis/README.md b/cv/3d_detection/paconv/pytorch/data/s3dis/README.md deleted file mode 100644 index 20170c65..00000000 --- a/cv/3d_detection/paconv/pytorch/data/s3dis/README.md +++ /dev/null @@ -1,59 +0,0 @@ -### Prepare S3DIS Data - -We follow the procedure in [pointnet](https://github.com/charlesq34/pointnet). - -1. Download S3DIS data by filling this [Google form](https://docs.google.com/forms/d/e/1FAIpQLScDimvNMCGhy_rmBA2gHfDu3naktRm6A8BPwAWWDv-Uhm6Shw/viewform?c=0&w=1). Download the ```Stanford3dDataset_v1.2_Aligned_Version.zip``` file and unzip it. Link or move the folder to this level of directory. - -2. In this directory, extract point clouds and annotations by running `python3 collect_indoor3d_data.py`. - -3. 
Enter the project root directory, generate training data by running - -```bash -python3 tools/create_data.py s3dis --root-path ./data/s3dis --out-dir ./data/s3dis --extra-tag s3dis -``` - -The overall process could be achieved through the following script - -```bash -python3 collect_indoor3d_data.py -cd ../.. -python3 tools/create_data.py s3dis --root-path ./data/s3dis --out-dir ./data/s3dis --extra-tag s3dis -``` - -The directory structure after pre-processing should be as below - -``` -s3dis -├── meta_data -├── indoor3d_util.py -├── collect_indoor3d_data.py -├── README.md -├── Stanford3dDataset_v1.2_Aligned_Version -├── s3dis_data -├── points -│ ├── xxxxx.bin -├── instance_mask -│ ├── xxxxx.bin -├── semantic_mask -│ ├── xxxxx.bin -├── seg_info -│ ├── Area_1_label_weight.npy -│ ├── Area_1_resampled_scene_idxs.npy -│ ├── Area_2_label_weight.npy -│ ├── Area_2_resampled_scene_idxs.npy -│ ├── Area_3_label_weight.npy -│ ├── Area_3_resampled_scene_idxs.npy -│ ├── Area_4_label_weight.npy -│ ├── Area_4_resampled_scene_idxs.npy -│ ├── Area_5_label_weight.npy -│ ├── Area_5_resampled_scene_idxs.npy -│ ├── Area_6_label_weight.npy -│ ├── Area_6_resampled_scene_idxs.npy -├── s3dis_infos_Area_1.pkl -├── s3dis_infos_Area_2.pkl -├── s3dis_infos_Area_3.pkl -├── s3dis_infos_Area_4.pkl -├── s3dis_infos_Area_5.pkl -├── s3dis_infos_Area_6.pkl - -``` \ No newline at end of file diff --git a/cv/3d_detection/paconv/pytorch/data/s3dis/collect_indoor3d_data.py b/cv/3d_detection/paconv/pytorch/data/s3dis/collect_indoor3d_data.py deleted file mode 100644 index 59a7cda5..00000000 --- a/cv/3d_detection/paconv/pytorch/data/s3dis/collect_indoor3d_data.py +++ /dev/null @@ -1,48 +0,0 @@ -import argparse -import mmcv -from indoor3d_util import export -from os import path as osp - -parser = argparse.ArgumentParser() -parser.add_argument( - '--output-folder', - default='./s3dis_data', - help='output folder of the result.') -parser.add_argument( - '--data-dir', - default='Stanford3dDataset_v1.2_Aligned_Version', - help='s3dis data directory.') -parser.add_argument( - '--ann-file', - default='meta_data/anno_paths.txt', - help='The path of the file that stores the annotation names.') -args = parser.parse_args() - -anno_paths = [line.rstrip() for line in open(args.ann_file)] -anno_paths = [osp.join(args.data_dir, p) for p in anno_paths] - -output_folder = args.output_folder -mmcv.mkdir_or_exist(output_folder) - -# Note: there is an extra character in the v1.2 data in Area_5/hallway_6. -# It's fixed manually here. -# Refer to https://github.com/AnTao97/dgcnn.pytorch/blob/843abe82dd731eb51a4b3f70632c2ed3c60560e9/prepare_data/collect_indoor3d_data.py#L18 # noqa -revise_file = osp.join(args.data_dir, - 'Area_5/hallway_6/Annotations/ceiling_1.txt') -with open(revise_file, 'r') as f: - data = f.read() - # replace that extra character with blank space to separate data - data = data[:5545347] + ' ' + data[5545348:] -with open(revise_file, 'w') as f: - f.write(data) - -for anno_path in anno_paths: - print(f'Exporting data from annotation file: {anno_path}') - elements = anno_path.split('/') - out_filename = \ - elements[-3] + '_' + elements[-2] # Area_1_hallway_1 - out_filename = osp.join(output_folder, out_filename) - if osp.isfile(f'{out_filename}_point.npy'): - print('File already exists. 
skipping.') - continue - export(anno_path, out_filename) \ No newline at end of file diff --git a/cv/3d_detection/paconv/pytorch/data/s3dis/indoor3d_util.py b/cv/3d_detection/paconv/pytorch/data/s3dis/indoor3d_util.py deleted file mode 100644 index 2474b0f2..00000000 --- a/cv/3d_detection/paconv/pytorch/data/s3dis/indoor3d_util.py +++ /dev/null @@ -1,53 +0,0 @@ -import glob -import numpy as np -from os import path as osp - -# ----------------------------------------------------------------------------- -# CONSTANTS -# ----------------------------------------------------------------------------- - -BASE_DIR = osp.dirname(osp.abspath(__file__)) - -class_names = [ - x.rstrip() for x in open(osp.join(BASE_DIR, 'meta_data/class_names.txt')) -] -class2label = {one_class: i for i, one_class in enumerate(class_names)} - -# ----------------------------------------------------------------------------- -# CONVERT ORIGINAL DATA TO POINTS, SEM_LABEL AND INS_LABEL FILES -# ----------------------------------------------------------------------------- - - -def export(anno_path, out_filename): - """Convert original dataset files to points, instance mask and semantic - mask files. We aggregated all the points from each instance in the room. - - Args: - anno_path (str): path to annotations. e.g. Area_1/office_2/Annotations/ - out_filename (str): path to save collected points and labels - file_format (str): txt or numpy, determines what file format to save. - - Note: - the points are shifted before save, the most negative point is now - at origin. - """ - points_list = [] - ins_idx = 1 # instance ids should be indexed from 1, so 0 is unannotated - - for f in glob.glob(osp.join(anno_path, '*.txt')): - one_class = osp.basename(f).split('_')[0] - if one_class not in class_names: # some rooms have 'staris' class - one_class = 'clutter' - points = np.loadtxt(f) - labels = np.ones((points.shape[0], 1)) * class2label[one_class] - ins_labels = np.ones((points.shape[0], 1)) * ins_idx - ins_idx += 1 - points_list.append(np.concatenate([points, labels, ins_labels], 1)) - - data_label = np.concatenate(points_list, 0) # [N, 8], (pts, rgb, sem, ins) - xyz_min = np.amin(data_label, axis=0)[0:3] - data_label[:, 0:3] -= xyz_min - - np.save(f'{out_filename}_point.npy', data_label[:, :6].astype(np.float32)) - np.save(f'{out_filename}_sem_label.npy', data_label[:, 6].astype(np.int)) - np.save(f'{out_filename}_ins_label.npy', data_label[:, 7].astype(np.int)) \ No newline at end of file diff --git a/cv/3d_detection/paconv/pytorch/data/s3dis/meta_data/anno_paths.txt b/cv/3d_detection/paconv/pytorch/data/s3dis/meta_data/anno_paths.txt deleted file mode 100644 index e5a4d7b9..00000000 --- a/cv/3d_detection/paconv/pytorch/data/s3dis/meta_data/anno_paths.txt +++ /dev/null @@ -1,272 +0,0 @@ -Area_1/conferenceRoom_1/Annotations -Area_1/conferenceRoom_2/Annotations -Area_1/copyRoom_1/Annotations -Area_1/hallway_1/Annotations -Area_1/hallway_2/Annotations -Area_1/hallway_3/Annotations -Area_1/hallway_4/Annotations -Area_1/hallway_5/Annotations -Area_1/hallway_6/Annotations -Area_1/hallway_7/Annotations -Area_1/hallway_8/Annotations -Area_1/office_10/Annotations -Area_1/office_11/Annotations -Area_1/office_12/Annotations -Area_1/office_13/Annotations -Area_1/office_14/Annotations -Area_1/office_15/Annotations -Area_1/office_16/Annotations -Area_1/office_17/Annotations -Area_1/office_18/Annotations -Area_1/office_19/Annotations -Area_1/office_1/Annotations -Area_1/office_20/Annotations -Area_1/office_21/Annotations 
-Area_1/office_22/Annotations -Area_1/office_23/Annotations -Area_1/office_24/Annotations -Area_1/office_25/Annotations -Area_1/office_26/Annotations -Area_1/office_27/Annotations -Area_1/office_28/Annotations -Area_1/office_29/Annotations -Area_1/office_2/Annotations -Area_1/office_30/Annotations -Area_1/office_31/Annotations -Area_1/office_3/Annotations -Area_1/office_4/Annotations -Area_1/office_5/Annotations -Area_1/office_6/Annotations -Area_1/office_7/Annotations -Area_1/office_8/Annotations -Area_1/office_9/Annotations -Area_1/pantry_1/Annotations -Area_1/WC_1/Annotations -Area_2/auditorium_1/Annotations -Area_2/auditorium_2/Annotations -Area_2/conferenceRoom_1/Annotations -Area_2/hallway_10/Annotations -Area_2/hallway_11/Annotations -Area_2/hallway_12/Annotations -Area_2/hallway_1/Annotations -Area_2/hallway_2/Annotations -Area_2/hallway_3/Annotations -Area_2/hallway_4/Annotations -Area_2/hallway_5/Annotations -Area_2/hallway_6/Annotations -Area_2/hallway_7/Annotations -Area_2/hallway_8/Annotations -Area_2/hallway_9/Annotations -Area_2/office_10/Annotations -Area_2/office_11/Annotations -Area_2/office_12/Annotations -Area_2/office_13/Annotations -Area_2/office_14/Annotations -Area_2/office_1/Annotations -Area_2/office_2/Annotations -Area_2/office_3/Annotations -Area_2/office_4/Annotations -Area_2/office_5/Annotations -Area_2/office_6/Annotations -Area_2/office_7/Annotations -Area_2/office_8/Annotations -Area_2/office_9/Annotations -Area_2/storage_1/Annotations -Area_2/storage_2/Annotations -Area_2/storage_3/Annotations -Area_2/storage_4/Annotations -Area_2/storage_5/Annotations -Area_2/storage_6/Annotations -Area_2/storage_7/Annotations -Area_2/storage_8/Annotations -Area_2/storage_9/Annotations -Area_2/WC_1/Annotations -Area_2/WC_2/Annotations -Area_3/conferenceRoom_1/Annotations -Area_3/hallway_1/Annotations -Area_3/hallway_2/Annotations -Area_3/hallway_3/Annotations -Area_3/hallway_4/Annotations -Area_3/hallway_5/Annotations -Area_3/hallway_6/Annotations -Area_3/lounge_1/Annotations -Area_3/lounge_2/Annotations -Area_3/office_10/Annotations -Area_3/office_1/Annotations -Area_3/office_2/Annotations -Area_3/office_3/Annotations -Area_3/office_4/Annotations -Area_3/office_5/Annotations -Area_3/office_6/Annotations -Area_3/office_7/Annotations -Area_3/office_8/Annotations -Area_3/office_9/Annotations -Area_3/storage_1/Annotations -Area_3/storage_2/Annotations -Area_3/WC_1/Annotations -Area_3/WC_2/Annotations -Area_4/conferenceRoom_1/Annotations -Area_4/conferenceRoom_2/Annotations -Area_4/conferenceRoom_3/Annotations -Area_4/hallway_10/Annotations -Area_4/hallway_11/Annotations -Area_4/hallway_12/Annotations -Area_4/hallway_13/Annotations -Area_4/hallway_14/Annotations -Area_4/hallway_1/Annotations -Area_4/hallway_2/Annotations -Area_4/hallway_3/Annotations -Area_4/hallway_4/Annotations -Area_4/hallway_5/Annotations -Area_4/hallway_6/Annotations -Area_4/hallway_7/Annotations -Area_4/hallway_8/Annotations -Area_4/hallway_9/Annotations -Area_4/lobby_1/Annotations -Area_4/lobby_2/Annotations -Area_4/office_10/Annotations -Area_4/office_11/Annotations -Area_4/office_12/Annotations -Area_4/office_13/Annotations -Area_4/office_14/Annotations -Area_4/office_15/Annotations -Area_4/office_16/Annotations -Area_4/office_17/Annotations -Area_4/office_18/Annotations -Area_4/office_19/Annotations -Area_4/office_1/Annotations -Area_4/office_20/Annotations -Area_4/office_21/Annotations -Area_4/office_22/Annotations -Area_4/office_2/Annotations -Area_4/office_3/Annotations 
-Area_4/office_4/Annotations -Area_4/office_5/Annotations -Area_4/office_6/Annotations -Area_4/office_7/Annotations -Area_4/office_8/Annotations -Area_4/office_9/Annotations -Area_4/storage_1/Annotations -Area_4/storage_2/Annotations -Area_4/storage_3/Annotations -Area_4/storage_4/Annotations -Area_4/WC_1/Annotations -Area_4/WC_2/Annotations -Area_4/WC_3/Annotations -Area_4/WC_4/Annotations -Area_5/conferenceRoom_1/Annotations -Area_5/conferenceRoom_2/Annotations -Area_5/conferenceRoom_3/Annotations -Area_5/hallway_10/Annotations -Area_5/hallway_11/Annotations -Area_5/hallway_12/Annotations -Area_5/hallway_13/Annotations -Area_5/hallway_14/Annotations -Area_5/hallway_15/Annotations -Area_5/hallway_1/Annotations -Area_5/hallway_2/Annotations -Area_5/hallway_3/Annotations -Area_5/hallway_4/Annotations -Area_5/hallway_5/Annotations -Area_5/hallway_6/Annotations -Area_5/hallway_7/Annotations -Area_5/hallway_8/Annotations -Area_5/hallway_9/Annotations -Area_5/lobby_1/Annotations -Area_5/office_10/Annotations -Area_5/office_11/Annotations -Area_5/office_12/Annotations -Area_5/office_13/Annotations -Area_5/office_14/Annotations -Area_5/office_15/Annotations -Area_5/office_16/Annotations -Area_5/office_17/Annotations -Area_5/office_18/Annotations -Area_5/office_19/Annotations -Area_5/office_1/Annotations -Area_5/office_20/Annotations -Area_5/office_21/Annotations -Area_5/office_22/Annotations -Area_5/office_23/Annotations -Area_5/office_24/Annotations -Area_5/office_25/Annotations -Area_5/office_26/Annotations -Area_5/office_27/Annotations -Area_5/office_28/Annotations -Area_5/office_29/Annotations -Area_5/office_2/Annotations -Area_5/office_30/Annotations -Area_5/office_31/Annotations -Area_5/office_32/Annotations -Area_5/office_33/Annotations -Area_5/office_34/Annotations -Area_5/office_35/Annotations -Area_5/office_36/Annotations -Area_5/office_37/Annotations -Area_5/office_38/Annotations -Area_5/office_39/Annotations -Area_5/office_3/Annotations -Area_5/office_40/Annotations -Area_5/office_41/Annotations -Area_5/office_42/Annotations -Area_5/office_4/Annotations -Area_5/office_5/Annotations -Area_5/office_6/Annotations -Area_5/office_7/Annotations -Area_5/office_8/Annotations -Area_5/office_9/Annotations -Area_5/pantry_1/Annotations -Area_5/storage_1/Annotations -Area_5/storage_2/Annotations -Area_5/storage_3/Annotations -Area_5/storage_4/Annotations -Area_5/WC_1/Annotations -Area_5/WC_2/Annotations -Area_6/conferenceRoom_1/Annotations -Area_6/copyRoom_1/Annotations -Area_6/hallway_1/Annotations -Area_6/hallway_2/Annotations -Area_6/hallway_3/Annotations -Area_6/hallway_4/Annotations -Area_6/hallway_5/Annotations -Area_6/hallway_6/Annotations -Area_6/lounge_1/Annotations -Area_6/office_10/Annotations -Area_6/office_11/Annotations -Area_6/office_12/Annotations -Area_6/office_13/Annotations -Area_6/office_14/Annotations -Area_6/office_15/Annotations -Area_6/office_16/Annotations -Area_6/office_17/Annotations -Area_6/office_18/Annotations -Area_6/office_19/Annotations -Area_6/office_1/Annotations -Area_6/office_20/Annotations -Area_6/office_21/Annotations -Area_6/office_22/Annotations -Area_6/office_23/Annotations -Area_6/office_24/Annotations -Area_6/office_25/Annotations -Area_6/office_26/Annotations -Area_6/office_27/Annotations -Area_6/office_28/Annotations -Area_6/office_29/Annotations -Area_6/office_2/Annotations -Area_6/office_30/Annotations -Area_6/office_31/Annotations -Area_6/office_32/Annotations -Area_6/office_33/Annotations -Area_6/office_34/Annotations -Area_6/office_35/Annotations 
-Area_6/office_36/Annotations -Area_6/office_37/Annotations -Area_6/office_3/Annotations -Area_6/office_4/Annotations -Area_6/office_5/Annotations -Area_6/office_6/Annotations -Area_6/office_7/Annotations -Area_6/office_8/Annotations -Area_6/office_9/Annotations -Area_6/openspace_1/Annotations -Area_6/pantry_1/Annotations \ No newline at end of file diff --git a/cv/3d_detection/paconv/pytorch/data/s3dis/meta_data/class_names.txt b/cv/3d_detection/paconv/pytorch/data/s3dis/meta_data/class_names.txt deleted file mode 100644 index b4b91540..00000000 --- a/cv/3d_detection/paconv/pytorch/data/s3dis/meta_data/class_names.txt +++ /dev/null @@ -1,13 +0,0 @@ -ceiling -floor -wall -beam -column -window -door -table -chair -sofa -bookcase -board -clutter \ No newline at end of file diff --git a/cv/3d_detection/paconv/pytorch/dist_train.sh b/cv/3d_detection/paconv/pytorch/dist_train.sh deleted file mode 100644 index ecb19be2..00000000 --- a/cv/3d_detection/paconv/pytorch/dist_train.sh +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. -#!/usr/bin/env bash - -CONFIG=$1 -GPUS=$2 -NNODES=${NNODES:-1} -NODE_RANK=${NODE_RANK:-0} -PORT=${PORT:-29500} -MASTER_ADDR=${MASTER_ADDR:-"127.0.0.1"} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -python3 -m torch.distributed.launch \ - --nnodes=$NNODES \ - --node_rank=$NODE_RANK \ - --master_addr=$MASTER_ADDR \ - --nproc_per_node=$GPUS \ - --master_port=$PORT \ - $(dirname "$0")/train.py \ - $CONFIG \ - --seed 0 \ - --launcher pytorch ${@:3} diff --git a/cv/3d_detection/paconv/pytorch/mmdet/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/__init__.py deleted file mode 100644 index 96f91ac6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv - -from .version import __version__, short_version - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -mmcv_minimum_version = '1.3.17' -mmcv_maximum_version = '1.6.0' -mmcv_version = digit_version(mmcv.__version__) - - -# assert (mmcv_version >= digit_version(mmcv_minimum_version) -# and mmcv_version <= digit_version(mmcv_maximum_version)), \ -# f'MMCV=={mmcv.__version__} is used but incompatible. ' \ -# f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.' - -__all__ = ['__version__', 'short_version'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/apis/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/apis/__init__.py deleted file mode 100644 index a865e942..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/apis/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .inference import (async_inference_detector, inference_detector, - init_detector, show_result_pyplot) -from .test import multi_gpu_test, single_gpu_test -from .train import (get_root_logger, init_random_seed, set_random_seed, - train_detector) - -__all__ = [ - 'get_root_logger', 'set_random_seed', 'train_detector', 'init_detector', - 'async_inference_detector', 'inference_detector', 'show_result_pyplot', - 'multi_gpu_test', 'single_gpu_test', 'init_random_seed' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/apis/inference.py b/cv/3d_detection/paconv/pytorch/mmdet/apis/inference.py deleted file mode 100644 index 795fce51..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/apis/inference.py +++ /dev/null @@ -1,251 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from pathlib import Path - -import mmcv -import numpy as np -import torch -from mmcv.ops import RoIPool -from mmcv.parallel import collate, scatter -from mmcv.runner import load_checkpoint - -from mmdet.core import get_classes -from mmdet.datasets import replace_ImageToTensor -from mmdet.datasets.pipelines import Compose -from mmdet.models import build_detector - - -def init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None): - """Initialize a detector from config file. - - Args: - config (str, :obj:`Path`, or :obj:`mmcv.Config`): Config file path, - :obj:`Path`, or the config object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - cfg_options (dict): Options to override some settings in the used - config. - - Returns: - nn.Module: The constructed detector. - """ - if isinstance(config, (str, Path)): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - f'but got {type(config)}') - if cfg_options is not None: - config.merge_from_dict(cfg_options) - if 'pretrained' in config.model: - config.model.pretrained = None - elif 'init_cfg' in config.model.backbone: - config.model.backbone.init_cfg = None - config.model.train_cfg = None - model = build_detector(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - warnings.simplefilter('once') - warnings.warn('Class names are not saved in the checkpoint\'s ' - 'meta data, use COCO classes by default.') - model.CLASSES = get_classes('coco') - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -class LoadImage: - """Deprecated. - - A simple pipeline to load image. - """ - - def __call__(self, results): - """Call function to load images into results. - - Args: - results (dict): A result dict contains the file name - of the image to be read. - Returns: - dict: ``results`` will be returned containing loaded image. - """ - warnings.simplefilter('once') - warnings.warn('`LoadImage` is deprecated and will be removed in ' - 'future releases. 
You may use `LoadImageFromWebcam` ' - 'from `mmdet.datasets.pipelines.` instead.') - if isinstance(results['img'], str): - results['filename'] = results['img'] - results['ori_filename'] = results['img'] - else: - results['filename'] = None - results['ori_filename'] = None - img = mmcv.imread(results['img']) - results['img'] = img - results['img_fields'] = ['img'] - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - return results - - -def inference_detector(model, imgs): - """Inference image(s) with the detector. - - Args: - model (nn.Module): The loaded detector. - imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]): - Either image files or loaded images. - - Returns: - If imgs is a list or tuple, the same length list type results - will be returned, otherwise return the detection results directly. - """ - - if isinstance(imgs, (list, tuple)): - is_batch = True - else: - imgs = [imgs] - is_batch = False - - cfg = model.cfg - device = next(model.parameters()).device # model device - - if isinstance(imgs[0], np.ndarray): - cfg = cfg.copy() - # set loading pipeline type - cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam' - - cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline) - test_pipeline = Compose(cfg.data.test.pipeline) - - datas = [] - for img in imgs: - # prepare data - if isinstance(img, np.ndarray): - # directly add img - data = dict(img=img) - else: - # add information into dict - data = dict(img_info=dict(filename=img), img_prefix=None) - # build the data pipeline - data = test_pipeline(data) - datas.append(data) - - data = collate(datas, samples_per_gpu=len(imgs)) - # just get the actual data from DataContainer - data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']] - data['img'] = [img.data[0] for img in data['img']] - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - for m in model.modules(): - assert not isinstance( - m, RoIPool - ), 'CPU inference with RoIPool is not supported currently.' - - # forward the model - with torch.no_grad(): - results = model(return_loss=False, rescale=True, **data) - - if not is_batch: - return results[0] - else: - return results - - -async def async_inference_detector(model, imgs): - """Async inference image(s) with the detector. - - Args: - model (nn.Module): The loaded detector. - img (str | ndarray): Either image files or loaded images. - - Returns: - Awaitable detection results. 
- """ - if not isinstance(imgs, (list, tuple)): - imgs = [imgs] - - cfg = model.cfg - device = next(model.parameters()).device # model device - - if isinstance(imgs[0], np.ndarray): - cfg = cfg.copy() - # set loading pipeline type - cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam' - - cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline) - test_pipeline = Compose(cfg.data.test.pipeline) - - datas = [] - for img in imgs: - # prepare data - if isinstance(img, np.ndarray): - # directly add img - data = dict(img=img) - else: - # add information into dict - data = dict(img_info=dict(filename=img), img_prefix=None) - # build the data pipeline - data = test_pipeline(data) - datas.append(data) - - data = collate(datas, samples_per_gpu=len(imgs)) - # just get the actual data from DataContainer - data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']] - data['img'] = [img.data[0] for img in data['img']] - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - for m in model.modules(): - assert not isinstance( - m, RoIPool - ), 'CPU inference with RoIPool is not supported currently.' - - # We don't restore `torch.is_grad_enabled()` value during concurrent - # inference since execution can overlap - torch.set_grad_enabled(False) - results = await model.aforward_test(rescale=True, **data) - return results - - -def show_result_pyplot(model, - img, - result, - score_thr=0.3, - title='result', - wait_time=0, - palette=None, - out_file=None): - """Visualize the detection results on the image. - - Args: - model (nn.Module): The loaded detector. - img (str or np.ndarray): Image filename or loaded image. - result (tuple[list] or list): The detection result, can be either - (bbox, segm) or just bbox. - score_thr (float): The threshold to visualize the bboxes and masks. - title (str): Title of the pyplot figure. - wait_time (float): Value of waitKey param. Default: 0. - palette (str or tuple(int) or :obj:`Color`): Color. - The tuple of color should be in BGR order. - out_file (str or None): The path to write the image. - Default: None. - """ - if hasattr(model, 'module'): - model = model.module - model.show_result( - img, - result, - score_thr=score_thr, - show=True, - wait_time=wait_time, - win_name=title, - bbox_color=palette, - text_color=(200, 200, 200), - mask_color=palette, - out_file=out_file) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/apis/test.py b/cv/3d_detection/paconv/pytorch/mmdet/apis/test.py deleted file mode 100644 index 973d3623..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/apis/test.py +++ /dev/null @@ -1,209 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -import pickle -import shutil -import tempfile -import time - -import mmcv -import torch -import torch.distributed as dist -from mmcv.image import tensor2imgs -from mmcv.runner import get_dist_info - -from mmdet.core import encode_mask_results - - -def single_gpu_test(model, - data_loader, - show=False, - out_dir=None, - show_score_thr=0.3): - model.eval() - results = [] - dataset = data_loader.dataset - PALETTE = getattr(dataset, 'PALETTE', None) - prog_bar = mmcv.ProgressBar(len(dataset)) - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - - batch_size = len(result) - if show or out_dir: - if batch_size == 1 and isinstance(data['img'][0], torch.Tensor): - img_tensor = data['img'][0] - else: - img_tensor = data['img'][0].data[0] - img_metas = data['img_metas'][0].data[0] - imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg']) - assert len(imgs) == len(img_metas) - - for i, (img, img_meta) in enumerate(zip(imgs, img_metas)): - h, w, _ = img_meta['img_shape'] - img_show = img[:h, :w, :] - - ori_h, ori_w = img_meta['ori_shape'][:-1] - img_show = mmcv.imresize(img_show, (ori_w, ori_h)) - - if out_dir: - out_file = osp.join(out_dir, img_meta['ori_filename']) - else: - out_file = None - - model.module.show_result( - img_show, - result[i], - bbox_color=PALETTE, - text_color=PALETTE, - mask_color=PALETTE, - show=show, - out_file=out_file, - score_thr=show_score_thr) - - # encode mask results - if isinstance(result[0], tuple): - result = [(bbox_results, encode_mask_results(mask_results)) - for bbox_results, mask_results in result] - # This logic is only used in panoptic segmentation test. - elif isinstance(result[0], dict) and 'ins_results' in result[0]: - for j in range(len(result)): - bbox_results, mask_results = result[j]['ins_results'] - result[j]['ins_results'] = (bbox_results, - encode_mask_results(mask_results)) - - results.extend(result) - - for _ in range(batch_size): - prog_bar.update() - return results - - -def multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False): - """Test model with multiple gpus. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting 'gpu_collect=True' - it encodes results to gpu tensors and use gpu communication for results - collection. On cpu mode it saves the results on different gpus to 'tmpdir' - and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - - Returns: - list: The prediction results. - """ - model.eval() - results = [] - dataset = data_loader.dataset - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - time.sleep(2) # This line can prevent deadlock problem in some cases. - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - # encode mask results - if isinstance(result[0], tuple): - result = [(bbox_results, encode_mask_results(mask_results)) - for bbox_results, mask_results in result] - # This logic is only used in panoptic segmentation test. 
- elif isinstance(result[0], dict) and 'ins_results' in result[0]: - for j in range(len(result)): - bbox_results, mask_results = result[j]['ins_results'] - result[j]['ins_results'] = ( - bbox_results, encode_mask_results(mask_results)) - - results.extend(result) - - if rank == 0: - batch_size = len(result) - for _ in range(batch_size * world_size): - prog_bar.update() - - # collect results from all ranks - if gpu_collect: - results = collect_results_gpu(results, len(dataset)) - else: - results = collect_results_cpu(results, len(dataset), tmpdir) - return results - - -def collect_results_cpu(result_part, size, tmpdir=None): - rank, world_size = get_dist_info() - # create a tmp dir if it is not specified - if tmpdir is None: - MAX_LEN = 512 - # 32 is whitespace - dir_tensor = torch.full((MAX_LEN, ), - 32, - dtype=torch.uint8, - device='cuda') - if rank == 0: - mmcv.mkdir_or_exist('.dist_test') - tmpdir = tempfile.mkdtemp(dir='.dist_test') - tmpdir = torch.tensor( - bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda') - dir_tensor[:len(tmpdir)] = tmpdir - dist.broadcast(dir_tensor, 0) - tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip() - else: - mmcv.mkdir_or_exist(tmpdir) - # dump the part result to the dir - mmcv.dump(result_part, osp.join(tmpdir, f'part_{rank}.pkl')) - dist.barrier() - # collect all parts - if rank != 0: - return None - else: - # load results of all parts from tmp dir - part_list = [] - for i in range(world_size): - part_file = osp.join(tmpdir, f'part_{i}.pkl') - part_list.append(mmcv.load(part_file)) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - # remove tmp dir - shutil.rmtree(tmpdir) - return ordered_results - - -def collect_results_gpu(result_part, size): - rank, world_size = get_dist_info() - # dump result part to tensor with pickle - part_tensor = torch.tensor( - bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda') - # gather all result part tensor shape - shape_tensor = torch.tensor(part_tensor.shape, device='cuda') - shape_list = [shape_tensor.clone() for _ in range(world_size)] - dist.all_gather(shape_list, shape_tensor) - # padding result part tensor to max length - shape_max = torch.tensor(shape_list).max() - part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda') - part_send[:shape_tensor[0]] = part_tensor - part_recv_list = [ - part_tensor.new_zeros(shape_max) for _ in range(world_size) - ] - # gather all result part - dist.all_gather(part_recv_list, part_send) - - if rank == 0: - part_list = [] - for recv, shape in zip(part_recv_list, shape_list): - part_list.append( - pickle.loads(recv[:shape[0]].cpu().numpy().tobytes())) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - return ordered_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/apis/train.py b/cv/3d_detection/paconv/pytorch/mmdet/apis/train.py deleted file mode 100644 index 3bba6600..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/apis/train.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os -import random - -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import (DistSamplerSeedHook, EpochBasedRunner, - Fp16OptimizerHook, OptimizerHook, build_optimizer, - build_runner, get_dist_info) - -from mmdet.core import DistEvalHook, EvalHook -from mmdet.datasets import (build_dataloader, build_dataset, - replace_ImageToTensor) -from mmdet.utils import (build_ddp, build_dp, compat_cfg, - find_latest_checkpoint, get_root_logger) - - -def init_random_seed(seed=None, device='cuda'): - """Initialize random seed. - - If the seed is not set, the seed will be automatically randomized, - and then broadcast to all processes to prevent some potential bugs. - - Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - - Returns: - int: Seed to be used. - """ - if seed is not None: - return seed - - # Make sure all ranks share the same random seed to prevent - # some potential bugs. Please refer to - # https://github.com/open-mmlab/mmdetection/issues/6339 - rank, world_size = get_dist_info() - seed = np.random.randint(2**31) - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def auto_scale_lr(cfg, distributed, logger): - """Automatically scaling LR according to GPU number and sample per GPU. - - Args: - cfg (config): Training config. - distributed (bool): Using distributed or not. - logger (logging.Logger): Logger. - """ - # Get flag from config - if ('auto_scale_lr' not in cfg) or \ - (not cfg.auto_scale_lr.get('enable', False)): - logger.info('Automatic scaling of learning rate (LR)' - ' has been disabled.') - return - - # Get base batch size from config - base_batch_size = cfg.auto_scale_lr.get('base_batch_size', None) - if base_batch_size is None: - return - - # Get gpu number - if distributed: - _, world_size = get_dist_info() - num_gpus = len(range(world_size)) - else: - num_gpus = len(cfg.gpu_ids) - - # calculate the batch size - batch_size = num_gpus * cfg.data.samples_per_gpu - logger.info(f'You are using {num_gpus} GPU(s) ' - f'and {cfg.data.samples_per_gpu} samples per GPU. 
' - f'Total batch size is {batch_size}.') - - if batch_size != base_batch_size: - # scale LR with - # [linear scaling rule](https://arxiv.org/abs/1706.02677) - scaled_lr = (batch_size / base_batch_size) * cfg.optimizer.lr - logger.info('LR has been automatically scaled ' - f'from {cfg.optimizer.lr} to {scaled_lr}') - cfg.optimizer.lr = scaled_lr - else: - logger.info('The batch size match the ' - f'base batch size: {base_batch_size}, ' - f'will not scaling the LR ({cfg.optimizer.lr}).') - - -def train_detector(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - - cfg = compat_cfg(cfg) - logger = get_root_logger(log_level=cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - - runner_type = 'EpochBasedRunner' if 'runner' not in cfg else cfg.runner[ - 'type'] - - train_dataloader_default_args = dict( - samples_per_gpu=2, - workers_per_gpu=2, - # `num_gpus` will be ignored if distributed - num_gpus=len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - runner_type=runner_type, - persistent_workers=False) - - train_loader_cfg = { - **train_dataloader_default_args, - **cfg.data.get('train_dataloader', {}) - } - - data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = build_ddp( - model, - cfg.device, - device_ids=[int(os.environ['LOCAL_RANK'])], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = build_dp(model, cfg.device, device_ids=cfg.gpu_ids) - - # build optimizer - auto_scale_lr(cfg, distributed, logger) - optimizer = build_optimizer(model, cfg.optimizer) - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # an ugly workaround to make .log and .log.json filenames the same - runner.timestamp = timestamp - - # fp16 setting - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - optimizer_config = Fp16OptimizerHook( - **cfg.optimizer_config, **fp16_cfg, distributed=distributed) - elif distributed and 'type' not in cfg.optimizer_config: - optimizer_config = OptimizerHook(**cfg.optimizer_config) - else: - optimizer_config = cfg.optimizer_config - - # register hooks - runner.register_training_hooks( - cfg.lr_config, - optimizer_config, - cfg.checkpoint_config, - cfg.log_config, - cfg.get('momentum_config', None), - custom_hooks_config=cfg.get('custom_hooks', None)) - - if distributed: - if isinstance(runner, EpochBasedRunner): - runner.register_hook(DistSamplerSeedHook()) - - # register eval hooks - if validate: - val_dataloader_default_args = dict( - samples_per_gpu=1, - workers_per_gpu=2, - dist=distributed, - shuffle=False, - persistent_workers=False) - - val_dataloader_args = { - **val_dataloader_default_args, - **cfg.data.get('val_dataloader', {}) - } - # Support batch_size > 1 in validation - - if val_dataloader_args['samples_per_gpu'] > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.val.pipeline = replace_ImageToTensor( - cfg.data.val.pipeline) - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - - val_dataloader = build_dataloader(val_dataset, **val_dataloader_args) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 
'IterBasedRunner' - eval_hook = DistEvalHook if distributed else EvalHook - # In this PR (https://github.com/open-mmlab/mmcv/pull/1193), the - # priority of IterTimerHook has been modified from 'NORMAL' to 'LOW'. - runner.register_hook( - eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - resume_from = None - if cfg.resume_from is None and cfg.get('auto_resume'): - resume_from = find_latest_checkpoint(cfg.work_dir) - if resume_from is not None: - cfg.resume_from = resume_from - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/__init__.py deleted file mode 100644 index 7eca58cf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .anchor import * # noqa: F401, F403 -from .bbox import * # noqa: F401, F403 -from .data_structures import * # noqa: F401, F403 -from .evaluation import * # noqa: F401, F403 -from .hook import * # noqa: F401, F403 -from .mask import * # noqa: F401, F403 -from .post_processing import * # noqa: F401, F403 -from .utils import * # noqa: F401, F403 diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/__init__.py deleted file mode 100644 index fcc7e4af..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .anchor_generator import (AnchorGenerator, LegacyAnchorGenerator, - YOLOAnchorGenerator) -from .builder import (ANCHOR_GENERATORS, PRIOR_GENERATORS, - build_anchor_generator, build_prior_generator) -from .point_generator import MlvlPointGenerator, PointGenerator -from .utils import anchor_inside_flags, calc_region, images_to_levels - -__all__ = [ - 'AnchorGenerator', 'LegacyAnchorGenerator', 'anchor_inside_flags', - 'PointGenerator', 'images_to_levels', 'calc_region', - 'build_anchor_generator', 'ANCHOR_GENERATORS', 'YOLOAnchorGenerator', - 'build_prior_generator', 'PRIOR_GENERATORS', 'MlvlPointGenerator' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/anchor_generator.py b/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/anchor_generator.py deleted file mode 100644 index 20886fbd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/anchor_generator.py +++ /dev/null @@ -1,866 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -import numpy as np -import torch -from torch.nn.modules.utils import _pair - -from .builder import PRIOR_GENERATORS - - -@PRIOR_GENERATORS.register_module() -class AnchorGenerator: - """Standard anchor generator for 2D anchor-based detectors. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels in order (w, h). - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int] | None): The basic sizes - of anchors in multiple levels. - If None is given, strides will be used as base_sizes. - (If strides are non square, the shortest stride is taken.) 
- scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. - scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. If a list of tuple of - float is given, they will be used to shift the centers of anchors. - center_offset (float): The offset of center in proportion to anchors' - width and height. By default it is 0 in V2.0. - - Examples: - >>> from mmdet.core import AnchorGenerator - >>> self = AnchorGenerator([16], [1.], [1.], [9]) - >>> all_anchors = self.grid_priors([(2, 2)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]])] - >>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18]) - >>> all_anchors = self.grid_priors([(2, 2), (1, 1)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]]), \ - tensor([[-9., -9., 9., 9.]])] - """ - - def __init__(self, - strides, - ratios, - scales=None, - base_sizes=None, - scale_major=True, - octave_base_scale=None, - scales_per_octave=None, - centers=None, - center_offset=0.): - # check center and center_offset - if center_offset != 0: - assert centers is None, 'center cannot be set when center_offset' \ - f'!=0, {centers} is given.' 
- if not (0 <= center_offset <= 1): - raise ValueError('center_offset should be in range [0, 1], ' - f'{center_offset} is given.') - if centers is not None: - assert len(centers) == len(strides), \ - 'The number of strides should be the same as centers, got ' \ - f'{strides} and {centers}' - - # calculate base sizes of anchors - self.strides = [_pair(stride) for stride in strides] - self.base_sizes = [min(stride) for stride in self.strides - ] if base_sizes is None else base_sizes - assert len(self.base_sizes) == len(self.strides), \ - 'The number of strides should be the same as base sizes, got ' \ - f'{self.strides} and {self.base_sizes}' - - # calculate scales of anchors - assert ((octave_base_scale is not None - and scales_per_octave is not None) ^ (scales is not None)), \ - 'scales and octave_base_scale with scales_per_octave cannot' \ - ' be set at the same time' - if scales is not None: - self.scales = torch.Tensor(scales) - elif octave_base_scale is not None and scales_per_octave is not None: - octave_scales = np.array( - [2**(i / scales_per_octave) for i in range(scales_per_octave)]) - scales = octave_scales * octave_base_scale - self.scales = torch.Tensor(scales) - else: - raise ValueError('Either scales or octave_base_scale with ' - 'scales_per_octave should be set') - - self.octave_base_scale = octave_base_scale - self.scales_per_octave = scales_per_octave - self.ratios = torch.Tensor(ratios) - self.scale_major = scale_major - self.centers = centers - self.center_offset = center_offset - self.base_anchors = self.gen_base_anchors() - - @property - def num_base_anchors(self): - """list[int]: total number of base anchors in a feature grid""" - return self.num_base_priors - - @property - def num_base_priors(self): - """list[int]: The number of priors (anchors) at a point - on the feature grid""" - return [base_anchors.size(0) for base_anchors in self.base_anchors] - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.strides) - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. - """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors( - base_size, - scales=self.scales, - ratios=self.ratios, - center=center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, - base_size, - scales, - ratios, - center=None): - """Generate base anchors of a single level. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between between the height - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. 
- """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * w - y_center = self.center_offset * h - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * ws, y_center - 0.5 * hs, x_center + 0.5 * ws, - y_center + 0.5 * hs - ] - base_anchors = torch.stack(base_anchors, dim=-1) - - return base_anchors - - def _meshgrid(self, x, y, row_major=True): - """Generate mesh grid of x and y. - - Args: - x (torch.Tensor): Grids of x dimension. - y (torch.Tensor): Grids of y dimension. - row_major (bool, optional): Whether to return y grids first. - Defaults to True. - - Returns: - tuple[torch.Tensor]: The mesh grids of x and y. - """ - # use shape instead of len to keep tracing while exporting to onnx - xx = x.repeat(y.shape[0]) - yy = y.view(-1, 1).repeat(1, x.shape[0]).view(-1) - if row_major: - return xx, yy - else: - return yy, xx - - def grid_priors(self, featmap_sizes, dtype=torch.float32, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - dtype (:obj:`torch.dtype`): Dtype of priors. - Default: torch.float32. - device (str): The device where the anchors will be put on. - - Return: - list[torch.Tensor]: Anchors in multiple feature levels. \ - The sizes of each tensor should be [N, 4], where \ - N = width * height * num_base_anchors, width and height \ - are the sizes of the corresponding feature level, \ - num_base_anchors is the number of anchors for that level. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_priors( - featmap_sizes[i], level_idx=i, dtype=dtype, device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_priors(self, - featmap_size, - level_idx, - dtype=torch.float32, - device='cuda'): - """Generate grid anchors of a single level. - - Note: - This function is usually called by method ``self.grid_priors``. - - Args: - featmap_size (tuple[int]): Size of the feature maps. - level_idx (int): The index of corresponding feature map level. - dtype (obj:`torch.dtype`): Date type of points.Defaults to - ``torch.float32``. - device (str, optional): The device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature maps. - """ - - base_anchors = self.base_anchors[level_idx].to(device).to(dtype) - feat_h, feat_w = featmap_size - stride_w, stride_h = self.strides[level_idx] - # First create Range with the default dtype, than convert to - # target `dtype` for onnx exporting. 
- shift_x = torch.arange(0, feat_w, device=device).to(dtype) * stride_w - shift_y = torch.arange(0, feat_h, device=device).to(dtype) * stride_h - - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1) - # first feat_w elements correspond to the first row of shifts - # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get - # shifted anchors (K, A, 4), reshape to (K*A, 4) - - all_anchors = base_anchors[None, :, :] + shifts[:, None, :] - all_anchors = all_anchors.view(-1, 4) - # first A rows correspond to A anchors of (0, 0) in feature map, - # then (0, 1), (0, 2), ... - return all_anchors - - def sparse_priors(self, - prior_idxs, - featmap_size, - level_idx, - dtype=torch.float32, - device='cuda'): - """Generate sparse anchors according to the ``prior_idxs``. - - Args: - prior_idxs (Tensor): The index of corresponding anchors - in the feature map. - featmap_size (tuple[int]): feature map size arrange as (h, w). - level_idx (int): The level index of corresponding feature - map. - dtype (obj:`torch.dtype`): Date type of points.Defaults to - ``torch.float32``. - device (obj:`torch.device`): The device where the points is - located. - Returns: - Tensor: Anchor with shape (N, 4), N should be equal to - the length of ``prior_idxs``. - """ - - height, width = featmap_size - num_base_anchors = self.num_base_anchors[level_idx] - base_anchor_id = prior_idxs % num_base_anchors - x = (prior_idxs // - num_base_anchors) % width * self.strides[level_idx][0] - y = (prior_idxs // width // - num_base_anchors) % height * self.strides[level_idx][1] - priors = torch.stack([x, y, x, y], 1).to(dtype).to(device) + \ - self.base_anchors[level_idx][base_anchor_id, :].to(device) - - return priors - - def grid_anchors(self, featmap_sizes, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - device (str): Device where the anchors will be put on. - - Return: - list[torch.Tensor]: Anchors in multiple feature levels. \ - The sizes of each tensor should be [N, 4], where \ - N = width * height * num_base_anchors, width and height \ - are the sizes of the corresponding feature level, \ - num_base_anchors is the number of anchors for that level. - """ - warnings.warn('``grid_anchors`` would be deprecated soon. ' - 'Please use ``grid_priors`` ') - - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_anchors( - self.base_anchors[i].to(device), - featmap_sizes[i], - self.strides[i], - device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_anchors(self, - base_anchors, - featmap_size, - stride=(16, 16), - device='cuda'): - """Generate grid anchors of a single level. - - Note: - This function is usually called by method ``self.grid_anchors``. - - Args: - base_anchors (torch.Tensor): The base anchors of a feature grid. - featmap_size (tuple[int]): Size of the feature maps. - stride (tuple[int], optional): Stride of the feature map in order - (w, h). Defaults to (16, 16). - device (str, optional): Device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature maps. - """ - - warnings.warn( - '``single_level_grid_anchors`` would be deprecated soon. 
' - 'Please use ``single_level_grid_priors`` ') - - # keep featmap_size as Tensor instead of int, so that we - # can convert to ONNX correctly - feat_h, feat_w = featmap_size - shift_x = torch.arange(0, feat_w, device=device) * stride[0] - shift_y = torch.arange(0, feat_h, device=device) * stride[1] - - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1) - shifts = shifts.type_as(base_anchors) - # first feat_w elements correspond to the first row of shifts - # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get - # shifted anchors (K, A, 4), reshape to (K*A, 4) - - all_anchors = base_anchors[None, :, :] + shifts[:, None, :] - all_anchors = all_anchors.view(-1, 4) - # first A rows correspond to A anchors of (0, 0) in feature map, - # then (0, 1), (0, 2), ... - return all_anchors - - def valid_flags(self, featmap_sizes, pad_shape, device='cuda'): - """Generate valid flags of anchors in multiple feature levels. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in - multiple feature levels. - pad_shape (tuple): The padded shape of the image. - device (str): Device where the anchors will be put on. - - Return: - list(torch.Tensor): Valid flags of anchors in multiple levels. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - feat_h, feat_w = featmap_sizes[i] - h, w = pad_shape[:2] - valid_feat_h = min(int(np.ceil(h / anchor_stride[1])), feat_h) - valid_feat_w = min(int(np.ceil(w / anchor_stride[0])), feat_w) - flags = self.single_level_valid_flags((feat_h, feat_w), - (valid_feat_h, valid_feat_w), - self.num_base_anchors[i], - device=device) - multi_level_flags.append(flags) - return multi_level_flags - - def single_level_valid_flags(self, - featmap_size, - valid_size, - num_base_anchors, - device='cuda'): - """Generate the valid flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps, arrange - as (h, w). - valid_size (tuple[int]): The valid size of the feature maps. - num_base_anchors (int): The number of base anchors. - device (str, optional): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. 
- """ - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - valid = valid[:, None].expand(valid.size(0), - num_base_anchors).contiguous().view(-1) - return valid - - def __repr__(self): - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}octave_base_scale=' - repr_str += f'{self.octave_base_scale},\n' - repr_str += f'{indent_str}scales_per_octave=' - repr_str += f'{self.scales_per_octave},\n' - repr_str += f'{indent_str}num_levels={self.num_levels}\n' - repr_str += f'{indent_str}centers={self.centers},\n' - repr_str += f'{indent_str}center_offset={self.center_offset})' - return repr_str - - -@PRIOR_GENERATORS.register_module() -class SSDAnchorGenerator(AnchorGenerator): - """Anchor generator for SSD. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - min_sizes (list[float]): The list of minimum anchor sizes on each - level. - max_sizes (list[float]): The list of maximum anchor sizes on each - level. - basesize_ratio_range (tuple(float)): Ratio range of anchors. Being - used when not setting min_sizes and max_sizes. - input_size (int): Size of feature map, 300 for SSD300, 512 for - SSD512. Being used when not setting min_sizes and max_sizes. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. It is always set to be False in SSD. - """ - - def __init__(self, - strides, - ratios, - min_sizes=None, - max_sizes=None, - basesize_ratio_range=(0.15, 0.9), - input_size=300, - scale_major=True): - assert len(strides) == len(ratios) - assert not (min_sizes is None) ^ (max_sizes is None) - self.strides = [_pair(stride) for stride in strides] - self.centers = [(stride[0] / 2., stride[1] / 2.) 
- for stride in self.strides] - - if min_sizes is None and max_sizes is None: - # use hard code to generate SSD anchors - self.input_size = input_size - assert mmcv.is_tuple_of(basesize_ratio_range, float) - self.basesize_ratio_range = basesize_ratio_range - # calculate anchor ratios and sizes - min_ratio, max_ratio = basesize_ratio_range - min_ratio = int(min_ratio * 100) - max_ratio = int(max_ratio * 100) - step = int(np.floor(max_ratio - min_ratio) / (self.num_levels - 2)) - min_sizes = [] - max_sizes = [] - for ratio in range(int(min_ratio), int(max_ratio) + 1, step): - min_sizes.append(int(self.input_size * ratio / 100)) - max_sizes.append(int(self.input_size * (ratio + step) / 100)) - if self.input_size == 300: - if basesize_ratio_range[0] == 0.15: # SSD300 COCO - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - elif basesize_ratio_range[0] == 0.2: # SSD300 VOC - min_sizes.insert(0, int(self.input_size * 10 / 100)) - max_sizes.insert(0, int(self.input_size * 20 / 100)) - else: - raise ValueError( - 'basesize_ratio_range[0] should be either 0.15' - 'or 0.2 when input_size is 300, got ' - f'{basesize_ratio_range[0]}.') - elif self.input_size == 512: - if basesize_ratio_range[0] == 0.1: # SSD512 COCO - min_sizes.insert(0, int(self.input_size * 4 / 100)) - max_sizes.insert(0, int(self.input_size * 10 / 100)) - elif basesize_ratio_range[0] == 0.15: # SSD512 VOC - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - else: - raise ValueError( - 'When not setting min_sizes and max_sizes,' - 'basesize_ratio_range[0] should be either 0.1' - 'or 0.15 when input_size is 512, got' - f' {basesize_ratio_range[0]}.') - else: - raise ValueError( - 'Only support 300 or 512 in SSDAnchorGenerator when ' - 'not setting min_sizes and max_sizes, ' - f'got {self.input_size}.') - - assert len(min_sizes) == len(max_sizes) == len(strides) - - anchor_ratios = [] - anchor_scales = [] - for k in range(len(self.strides)): - scales = [1., np.sqrt(max_sizes[k] / min_sizes[k])] - anchor_ratio = [1.] - for r in ratios[k]: - anchor_ratio += [1 / r, r] # 4 or 6 ratio - anchor_ratios.append(torch.Tensor(anchor_ratio)) - anchor_scales.append(torch.Tensor(scales)) - - self.base_sizes = min_sizes - self.scales = anchor_scales - self.ratios = anchor_ratios - self.scale_major = scale_major - self.center_offset = 0 - self.base_anchors = self.gen_base_anchors() - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. 
- """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - base_anchors = self.gen_single_level_base_anchors( - base_size, - scales=self.scales[i], - ratios=self.ratios[i], - center=self.centers[i]) - indices = list(range(len(self.ratios[i]))) - indices.insert(1, len(indices)) - base_anchors = torch.index_select(base_anchors, 0, - torch.LongTensor(indices)) - multi_level_base_anchors.append(base_anchors) - return multi_level_base_anchors - - def __repr__(self): - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}input_size={self.input_size},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}num_levels={self.num_levels},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}basesize_ratio_range=' - repr_str += f'{self.basesize_ratio_range})' - return repr_str - - -@PRIOR_GENERATORS.register_module() -class LegacyAnchorGenerator(AnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - Note: - Difference to the V2.0 anchor generator: - - 1. The center offset of V1.x anchors are set to be 0.5 rather than 0. - 2. The width/height are minused by 1 when calculating the anchors' \ - centers and corners to meet the V1.x coordinate system. - 3. The anchors' corners are quantized. - - Args: - strides (list[int] | list[tuple[int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int]): The basic sizes of anchors in multiple levels. - If None is given, strides will be used to generate base_sizes. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. - scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. It a list of float - is given, this list will be used to shift the centers of anchors. - center_offset (float): The offset of center in proportion to anchors' - width and height. By default it is 0.5 in V2.0 but it should be 0.5 - in v1.x models. - - Examples: - >>> from mmdet.core import LegacyAnchorGenerator - >>> self = LegacyAnchorGenerator( - >>> [16], [1.], [1.], [9], center_offset=0.5) - >>> all_anchors = self.grid_anchors(((2, 2),), device='cpu') - >>> print(all_anchors) - [tensor([[ 0., 0., 8., 8.], - [16., 0., 24., 8.], - [ 0., 16., 8., 24.], - [16., 16., 24., 24.]])] - """ - - def gen_single_level_base_anchors(self, - base_size, - scales, - ratios, - center=None): - """Generate base anchors of a single level. 
- - Note: - The width/height of anchors are minused by 1 when calculating \ - the centers and corners to meet the V1.x coordinate system. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between between the height. - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature map. - """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * (w - 1) - y_center = self.center_offset * (h - 1) - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * (ws - 1), y_center - 0.5 * (hs - 1), - x_center + 0.5 * (ws - 1), y_center + 0.5 * (hs - 1) - ] - base_anchors = torch.stack(base_anchors, dim=-1).round() - - return base_anchors - - -@PRIOR_GENERATORS.register_module() -class LegacySSDAnchorGenerator(SSDAnchorGenerator, LegacyAnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - The difference between `LegacySSDAnchorGenerator` and `SSDAnchorGenerator` - can be found in `LegacyAnchorGenerator`. - """ - - def __init__(self, - strides, - ratios, - basesize_ratio_range, - input_size=300, - scale_major=True): - super(LegacySSDAnchorGenerator, self).__init__( - strides=strides, - ratios=ratios, - basesize_ratio_range=basesize_ratio_range, - input_size=input_size, - scale_major=scale_major) - self.centers = [((stride - 1) / 2., (stride - 1) / 2.) - for stride in strides] - self.base_anchors = self.gen_base_anchors() - - -@PRIOR_GENERATORS.register_module() -class YOLOAnchorGenerator(AnchorGenerator): - """Anchor generator for YOLO. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - base_sizes (list[list[tuple[int, int]]]): The basic sizes - of anchors in multiple levels. - """ - - def __init__(self, strides, base_sizes): - self.strides = [_pair(stride) for stride in strides] - self.centers = [(stride[0] / 2., stride[1] / 2.) - for stride in self.strides] - self.base_sizes = [] - num_anchor_per_level = len(base_sizes[0]) - for base_sizes_per_level in base_sizes: - assert num_anchor_per_level == len(base_sizes_per_level) - self.base_sizes.append( - [_pair(base_size) for base_size in base_sizes_per_level]) - self.base_anchors = self.gen_base_anchors() - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.base_sizes) - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. 
- """ - multi_level_base_anchors = [] - for i, base_sizes_per_level in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors(base_sizes_per_level, - center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, base_sizes_per_level, center=None): - """Generate base anchors of a single level. - - Args: - base_sizes_per_level (list[tuple[int, int]]): Basic sizes of - anchors. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. - """ - x_center, y_center = center - base_anchors = [] - for base_size in base_sizes_per_level: - w, h = base_size - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchor = torch.Tensor([ - x_center - 0.5 * w, y_center - 0.5 * h, x_center + 0.5 * w, - y_center + 0.5 * h - ]) - base_anchors.append(base_anchor) - base_anchors = torch.stack(base_anchors, dim=0) - - return base_anchors - - def responsible_flags(self, featmap_sizes, gt_bboxes, device='cuda'): - """Generate responsible anchor flags of grid cells in multiple scales. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in multiple - feature levels. - gt_bboxes (Tensor): Ground truth boxes, shape (n, 4). - device (str): Device where the anchors will be put on. - - Return: - list(torch.Tensor): responsible flags of anchors in multiple level - """ - assert self.num_levels == len(featmap_sizes) - multi_level_responsible_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - flags = self.single_level_responsible_flags( - featmap_sizes[i], - gt_bboxes, - anchor_stride, - self.num_base_anchors[i], - device=device) - multi_level_responsible_flags.append(flags) - return multi_level_responsible_flags - - def single_level_responsible_flags(self, - featmap_size, - gt_bboxes, - stride, - num_base_anchors, - device='cuda'): - """Generate the responsible flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps. - gt_bboxes (Tensor): Ground truth boxes, shape (n, 4). - stride (tuple(int)): stride of current level - num_base_anchors (int): The number of base anchors. - device (str, optional): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. 
- """ - feat_h, feat_w = featmap_size - gt_bboxes_cx = ((gt_bboxes[:, 0] + gt_bboxes[:, 2]) * 0.5).to(device) - gt_bboxes_cy = ((gt_bboxes[:, 1] + gt_bboxes[:, 3]) * 0.5).to(device) - gt_bboxes_grid_x = torch.floor(gt_bboxes_cx / stride[0]).long() - gt_bboxes_grid_y = torch.floor(gt_bboxes_cy / stride[1]).long() - - # row major indexing - gt_bboxes_grid_idx = gt_bboxes_grid_y * feat_w + gt_bboxes_grid_x - - responsible_grid = torch.zeros( - feat_h * feat_w, dtype=torch.uint8, device=device) - responsible_grid[gt_bboxes_grid_idx] = 1 - - responsible_grid = responsible_grid[:, None].expand( - responsible_grid.size(0), num_base_anchors).contiguous().view(-1) - return responsible_grid diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/builder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/builder.py deleted file mode 100644 index ddb25ad3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/builder.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -from mmcv.utils import Registry, build_from_cfg - -PRIOR_GENERATORS = Registry('Generator for anchors and points') - -ANCHOR_GENERATORS = PRIOR_GENERATORS - - -def build_prior_generator(cfg, default_args=None): - return build_from_cfg(cfg, PRIOR_GENERATORS, default_args) - - -def build_anchor_generator(cfg, default_args=None): - warnings.warn( - '``build_anchor_generator`` would be deprecated soon, please use ' - '``build_prior_generator`` ') - return build_prior_generator(cfg, default_args=default_args) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/point_generator.py b/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/point_generator.py deleted file mode 100644 index cc9c3887..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/point_generator.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from torch.nn.modules.utils import _pair - -from .builder import PRIOR_GENERATORS - - -@PRIOR_GENERATORS.register_module() -class PointGenerator: - - def _meshgrid(self, x, y, row_major=True): - xx = x.repeat(len(y)) - yy = y.view(-1, 1).repeat(1, len(x)).view(-1) - if row_major: - return xx, yy - else: - return yy, xx - - def grid_points(self, featmap_size, stride=16, device='cuda'): - feat_h, feat_w = featmap_size - shift_x = torch.arange(0., feat_w, device=device) * stride - shift_y = torch.arange(0., feat_h, device=device) * stride - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - stride = shift_x.new_full((shift_xx.shape[0], ), stride) - shifts = torch.stack([shift_xx, shift_yy, stride], dim=-1) - all_points = shifts.to(device) - return all_points - - def valid_flags(self, featmap_size, valid_size, device='cuda'): - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - return valid - - -@PRIOR_GENERATORS.register_module() -class MlvlPointGenerator: - """Standard points generator for multi-level (Mlvl) feature maps in 2D - points-based detectors. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels in order (w, h). - offset (float): The offset of points, the value is normalized with - corresponding stride. 
Defaults to 0.5. - """ - - def __init__(self, strides, offset=0.5): - self.strides = [_pair(stride) for stride in strides] - self.offset = offset - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.strides) - - @property - def num_base_priors(self): - """list[int]: The number of priors (points) at a point - on the feature grid""" - return [1 for _ in range(len(self.strides))] - - def _meshgrid(self, x, y, row_major=True): - yy, xx = torch.meshgrid(y, x) - if row_major: - # warning .flatten() would cause error in ONNX exporting - # have to use reshape here - return xx.reshape(-1), yy.reshape(-1) - - else: - return yy.reshape(-1), xx.reshape(-1) - - def grid_priors(self, - featmap_sizes, - dtype=torch.float32, - device='cuda', - with_stride=False): - """Generate grid points of multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels, each size arrange as - as (h, w). - dtype (:obj:`dtype`): Dtype of priors. Default: torch.float32. - device (str): The device where the anchors will be put on. - with_stride (bool): Whether to concatenate the stride to - the last dimension of points. - - Return: - list[torch.Tensor]: Points of multiple feature levels. - The sizes of each tensor should be (N, 2) when with stride is - ``False``, where N = width * height, width and height - are the sizes of the corresponding feature level, - and the last dimension 2 represent (coord_x, coord_y), - otherwise the shape should be (N, 4), - and the last dimension 4 represent - (coord_x, coord_y, stride_w, stride_h). - """ - - assert self.num_levels == len(featmap_sizes) - multi_level_priors = [] - for i in range(self.num_levels): - priors = self.single_level_grid_priors( - featmap_sizes[i], - level_idx=i, - dtype=dtype, - device=device, - with_stride=with_stride) - multi_level_priors.append(priors) - return multi_level_priors - - def single_level_grid_priors(self, - featmap_size, - level_idx, - dtype=torch.float32, - device='cuda', - with_stride=False): - """Generate grid Points of a single level. - - Note: - This function is usually called by method ``self.grid_priors``. - - Args: - featmap_size (tuple[int]): Size of the feature maps, arrange as - (h, w). - level_idx (int): The index of corresponding feature map level. - dtype (:obj:`dtype`): Dtype of priors. Default: torch.float32. - device (str, optional): The device the tensor will be put on. - Defaults to 'cuda'. - with_stride (bool): Concatenate the stride to the last dimension - of points. - - Return: - Tensor: Points of single feature levels. - The shape of tensor should be (N, 2) when with stride is - ``False``, where N = width * height, width and height - are the sizes of the corresponding feature level, - and the last dimension 2 represent (coord_x, coord_y), - otherwise the shape should be (N, 4), - and the last dimension 4 represent - (coord_x, coord_y, stride_w, stride_h). 
- """ - feat_h, feat_w = featmap_size - stride_w, stride_h = self.strides[level_idx] - shift_x = (torch.arange(0, feat_w, device=device) + - self.offset) * stride_w - # keep featmap_size as Tensor instead of int, so that we - # can convert to ONNX correctly - shift_x = shift_x.to(dtype) - - shift_y = (torch.arange(0, feat_h, device=device) + - self.offset) * stride_h - # keep featmap_size as Tensor instead of int, so that we - # can convert to ONNX correctly - shift_y = shift_y.to(dtype) - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - if not with_stride: - shifts = torch.stack([shift_xx, shift_yy], dim=-1) - else: - # use `shape[0]` instead of `len(shift_xx)` for ONNX export - stride_w = shift_xx.new_full((shift_xx.shape[0], ), - stride_w).to(dtype) - stride_h = shift_xx.new_full((shift_yy.shape[0], ), - stride_h).to(dtype) - shifts = torch.stack([shift_xx, shift_yy, stride_w, stride_h], - dim=-1) - all_points = shifts.to(device) - return all_points - - def valid_flags(self, featmap_sizes, pad_shape, device='cuda'): - """Generate valid flags of points of multiple feature levels. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in - multiple feature levels, each size arrange as - as (h, w). - pad_shape (tuple(int)): The padded shape of the image, - arrange as (h, w). - device (str): The device where the anchors will be put on. - - Return: - list(torch.Tensor): Valid flags of points of multiple levels. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_flags = [] - for i in range(self.num_levels): - point_stride = self.strides[i] - feat_h, feat_w = featmap_sizes[i] - h, w = pad_shape[:2] - valid_feat_h = min(int(np.ceil(h / point_stride[1])), feat_h) - valid_feat_w = min(int(np.ceil(w / point_stride[0])), feat_w) - flags = self.single_level_valid_flags((feat_h, feat_w), - (valid_feat_h, valid_feat_w), - device=device) - multi_level_flags.append(flags) - return multi_level_flags - - def single_level_valid_flags(self, - featmap_size, - valid_size, - device='cuda'): - """Generate the valid flags of points of a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps, arrange as - as (h, w). - valid_size (tuple[int]): The valid size of the feature maps. - The size arrange as as (h, w). - device (str, optional): The device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each points in a single level \ - feature map. - """ - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - return valid - - def sparse_priors(self, - prior_idxs, - featmap_size, - level_idx, - dtype=torch.float32, - device='cuda'): - """Generate sparse points according to the ``prior_idxs``. - - Args: - prior_idxs (Tensor): The index of corresponding anchors - in the feature map. - featmap_size (tuple[int]): feature map size arrange as (w, h). - level_idx (int): The level index of corresponding feature - map. - dtype (obj:`torch.dtype`): Date type of points. Defaults to - ``torch.float32``. - device (obj:`torch.device`): The device where the points is - located. - Returns: - Tensor: Anchor with shape (N, 2), N should be equal to - the length of ``prior_idxs``. 
And last dimension - 2 represent (coord_x, coord_y). - """ - height, width = featmap_size - x = (prior_idxs % width + self.offset) * self.strides[level_idx][0] - y = ((prior_idxs // width) % height + - self.offset) * self.strides[level_idx][1] - prioris = torch.stack([x, y], 1).to(dtype) - prioris = prioris.to(device) - return prioris diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/utils.py b/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/utils.py deleted file mode 100644 index c2f20247..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/anchor/utils.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def images_to_levels(target, num_levels): - """Convert targets by image to targets by feature level. - - [target_img0, target_img1] -> [target_level0, target_level1, ...] - """ - target = torch.stack(target, 0) - level_targets = [] - start = 0 - for n in num_levels: - end = start + n - # level_targets.append(target[:, start:end].squeeze(0)) - level_targets.append(target[:, start:end]) - start = end - return level_targets - - -def anchor_inside_flags(flat_anchors, - valid_flags, - img_shape, - allowed_border=0): - """Check whether the anchors are inside the border. - - Args: - flat_anchors (torch.Tensor): Flatten anchors, shape (n, 4). - valid_flags (torch.Tensor): An existing valid flags of anchors. - img_shape (tuple(int)): Shape of current image. - allowed_border (int, optional): The border to allow the valid anchor. - Defaults to 0. - - Returns: - torch.Tensor: Flags indicating whether the anchors are inside a \ - valid range. - """ - img_h, img_w = img_shape[:2] - if allowed_border >= 0: - inside_flags = valid_flags & \ - (flat_anchors[:, 0] >= -allowed_border) & \ - (flat_anchors[:, 1] >= -allowed_border) & \ - (flat_anchors[:, 2] < img_w + allowed_border) & \ - (flat_anchors[:, 3] < img_h + allowed_border) - else: - inside_flags = valid_flags - return inside_flags - - -def calc_region(bbox, ratio, featmap_size=None): - """Calculate a proportional bbox region. - - The bbox center are fixed and the new h' and w' is h * ratio and w * ratio. - - Args: - bbox (Tensor): Bboxes to calculate regions, shape (n, 4). - ratio (float): Ratio of the output region. - featmap_size (tuple): Feature map size used for clipping the boundary. - - Returns: - tuple: x1, y1, x2, y2 - """ - x1 = torch.round((1 - ratio) * bbox[0] + ratio * bbox[2]).long() - y1 = torch.round((1 - ratio) * bbox[1] + ratio * bbox[3]).long() - x2 = torch.round(ratio * bbox[0] + (1 - ratio) * bbox[2]).long() - y2 = torch.round(ratio * bbox[1] + (1 - ratio) * bbox[3]).long() - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/__init__.py deleted file mode 100644 index 371eba19..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .assigners import (AssignResult, BaseAssigner, CenterRegionAssigner, - MaxIoUAssigner, RegionAssigner) -from .builder import build_assigner, build_bbox_coder, build_sampler -from .coder import (BaseBBoxCoder, DeltaXYWHBBoxCoder, DistancePointBBoxCoder, - PseudoBBoxCoder, TBLRBBoxCoder) -from .iou_calculators import BboxOverlaps2D, bbox_overlaps -from .samplers import (BaseSampler, CombinedSampler, - InstanceBalancedPosSampler, IoUBalancedNegSampler, - OHEMSampler, PseudoSampler, RandomSampler, - SamplingResult, ScoreHLRSampler) -from .transforms import (bbox2distance, bbox2result, bbox2roi, - bbox_cxcywh_to_xyxy, bbox_flip, bbox_mapping, - bbox_mapping_back, bbox_rescale, bbox_xyxy_to_cxcywh, - distance2bbox, find_inside_bboxes, roi2bbox) - -__all__ = [ - 'bbox_overlaps', 'BboxOverlaps2D', 'BaseAssigner', 'MaxIoUAssigner', - 'AssignResult', 'BaseSampler', 'PseudoSampler', 'RandomSampler', - 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler', - 'OHEMSampler', 'SamplingResult', 'ScoreHLRSampler', 'build_assigner', - 'build_sampler', 'bbox_flip', 'bbox_mapping', 'bbox_mapping_back', - 'bbox2roi', 'roi2bbox', 'bbox2result', 'distance2bbox', 'bbox2distance', - 'build_bbox_coder', 'BaseBBoxCoder', 'PseudoBBoxCoder', - 'DeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'DistancePointBBoxCoder', - 'CenterRegionAssigner', 'bbox_rescale', 'bbox_cxcywh_to_xyxy', - 'bbox_xyxy_to_cxcywh', 'RegionAssigner', 'find_inside_bboxes' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/__init__.py deleted file mode 100644 index 5eaf7fa3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .approx_max_iou_assigner import ApproxMaxIoUAssigner -from .assign_result import AssignResult -from .atss_assigner import ATSSAssigner -from .base_assigner import BaseAssigner -from .center_region_assigner import CenterRegionAssigner -from .grid_assigner import GridAssigner -from .hungarian_assigner import HungarianAssigner -from .mask_hungarian_assigner import MaskHungarianAssigner -from .max_iou_assigner import MaxIoUAssigner -from .point_assigner import PointAssigner -from .region_assigner import RegionAssigner -from .sim_ota_assigner import SimOTAAssigner -from .task_aligned_assigner import TaskAlignedAssigner -from .uniform_assigner import UniformAssigner - -__all__ = [ - 'BaseAssigner', 'MaxIoUAssigner', 'ApproxMaxIoUAssigner', 'AssignResult', - 'PointAssigner', 'ATSSAssigner', 'CenterRegionAssigner', 'GridAssigner', - 'HungarianAssigner', 'RegionAssigner', 'UniformAssigner', 'SimOTAAssigner', - 'TaskAlignedAssigner', 'MaskHungarianAssigner' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/approx_max_iou_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/approx_max_iou_assigner.py deleted file mode 100644 index 304d09c3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/approx_max_iou_assigner.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .max_iou_assigner import MaxIoUAssigner - - -@BBOX_ASSIGNERS.register_module() -class ApproxMaxIoUAssigner(MaxIoUAssigner): - """Assign a corresponding gt bbox or background to each bbox. 
- - Each proposals will be assigned with an integer indicating the ground truth - index. (semi-positive index: gt label (0-based), -1: background) - - - -1: negative sample, no assigned gt - - semi-positive integer: positive sample, index (0-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - ignore_iof_thr (float): IoF threshold for ignoring bboxes (if - `gt_bboxes_ignore` is specified). Negative values mean not - ignoring any bboxes. - ignore_wrt_candidates (bool): Whether to compute the iof between - `bboxes` and `gt_bboxes_ignore`, or the contrary. - match_low_quality (bool): Whether to allow quality matches. This is - usually allowed for RPN and single stage detectors, but not allowed - in the second stage. - gpu_assign_thr (int): The upper bound of the number of GT for GPU - assign. When the number of gt is above this threshold, will assign - on CPU device. Negative values mean not assign on CPU. - """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - ignore_iof_thr=-1, - ignore_wrt_candidates=True, - match_low_quality=True, - gpu_assign_thr=-1, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.ignore_iof_thr = ignore_iof_thr - self.ignore_wrt_candidates = ignore_wrt_candidates - self.gpu_assign_thr = gpu_assign_thr - self.match_low_quality = match_low_quality - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, - approxs, - squares, - approxs_per_octave, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - """Assign gt to approxs. - - This method assign a gt bbox to each group of approxs (bboxes), - each group of approxs is represent by a base approx (bbox) and - will be assigned with -1, or a semi-positive number. - background_label (-1) means negative sample, - semi-positive number is the index (0-based) of assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to background_label (-1) - 2. use the max IoU of each group of approxs to assign - 2. assign proposals whose iou with all gts < neg_iou_thr to background - 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals (may be more than - one) to itself - - Args: - approxs (Tensor): Bounding boxes to be assigned, - shape(approxs_per_octave*n, 4). - squares (Tensor): Base Bounding boxes to be assigned, - shape(n, 4). - approxs_per_octave (int): number of approxs per octave - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. 
- """ - num_squares = squares.size(0) - num_gts = gt_bboxes.size(0) - - if num_squares == 0 or num_gts == 0: - # No predictions and/or truth, return empty assignment - overlaps = approxs.new(num_gts, num_squares) - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - return assign_result - - # re-organize anchors by approxs_per_octave x num_squares - approxs = torch.transpose( - approxs.view(num_squares, approxs_per_octave, 4), 0, - 1).contiguous().view(-1, 4) - assign_on_cpu = True if (self.gpu_assign_thr > 0) and ( - num_gts > self.gpu_assign_thr) else False - # compute overlap and assign gt on CPU when number of GT is large - if assign_on_cpu: - device = approxs.device - approxs = approxs.cpu() - gt_bboxes = gt_bboxes.cpu() - if gt_bboxes_ignore is not None: - gt_bboxes_ignore = gt_bboxes_ignore.cpu() - if gt_labels is not None: - gt_labels = gt_labels.cpu() - all_overlaps = self.iou_calculator(approxs, gt_bboxes) - - overlaps, _ = all_overlaps.view(approxs_per_octave, num_squares, - num_gts).max(dim=0) - overlaps = torch.transpose(overlaps, 0, 1) - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and squares.numel() > 0): - if self.ignore_wrt_candidates: - ignore_overlaps = self.iou_calculator( - squares, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - else: - ignore_overlaps = self.iou_calculator( - gt_bboxes_ignore, squares, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=0) - overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1 - - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - if assign_on_cpu: - assign_result.gt_inds = assign_result.gt_inds.to(device) - assign_result.max_overlaps = assign_result.max_overlaps.to(device) - if assign_result.labels is not None: - assign_result.labels = assign_result.labels.to(device) - return assign_result diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/assign_result.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/assign_result.py deleted file mode 100644 index 488010b5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/assign_result.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.utils import util_mixins - - -class AssignResult(util_mixins.NiceRepr): - """Stores assignments between predicted and truth boxes. - - Attributes: - num_gts (int): the number of truth boxes considered when computing this - assignment - - gt_inds (LongTensor): for each predicted box indicates the 1-based - index of the assigned truth box. 0 means unassigned and -1 means - ignore. - - max_overlaps (FloatTensor): the iou between the predicted box and its - assigned truth box. - - labels (None | LongTensor): If specified, for each predicted box - indicates the category label of the assigned truth box. - - Example: - >>> # An assign result between 4 predicted boxes and 9 true boxes - >>> # where only two boxes were assigned. 
- >>> num_gts = 9 - >>> max_overlaps = torch.LongTensor([0, .5, .9, 0]) - >>> gt_inds = torch.LongTensor([-1, 1, 2, 0]) - >>> labels = torch.LongTensor([0, 3, 4, 0]) - >>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - >>> # Force addition of gt labels (when adding gt as proposals) - >>> new_labels = torch.LongTensor([3, 4, 5]) - >>> self.add_gt_(new_labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - """ - - def __init__(self, num_gts, gt_inds, max_overlaps, labels=None): - self.num_gts = num_gts - self.gt_inds = gt_inds - self.max_overlaps = max_overlaps - self.labels = labels - # Interface for possible user-defined properties - self._extra_properties = {} - - @property - def num_preds(self): - """int: the number of predictions in this assignment""" - return len(self.gt_inds) - - def set_extra_property(self, key, value): - """Set user-defined new property.""" - assert key not in self.info - self._extra_properties[key] = value - - def get_extra_property(self, key): - """Get user-defined property.""" - return self._extra_properties.get(key, None) - - @property - def info(self): - """dict: a dictionary of info about the object""" - basic_info = { - 'num_gts': self.num_gts, - 'num_preds': self.num_preds, - 'gt_inds': self.gt_inds, - 'max_overlaps': self.max_overlaps, - 'labels': self.labels, - } - basic_info.update(self._extra_properties) - return basic_info - - def __nice__(self): - """str: a "nice" summary string describing this assign result""" - parts = [] - parts.append(f'num_gts={self.num_gts!r}') - if self.gt_inds is None: - parts.append(f'gt_inds={self.gt_inds!r}') - else: - parts.append(f'gt_inds.shape={tuple(self.gt_inds.shape)!r}') - if self.max_overlaps is None: - parts.append(f'max_overlaps={self.max_overlaps!r}') - else: - parts.append('max_overlaps.shape=' - f'{tuple(self.max_overlaps.shape)!r}') - if self.labels is None: - parts.append(f'labels={self.labels!r}') - else: - parts.append(f'labels.shape={tuple(self.labels.shape)!r}') - return ', '.join(parts) - - @classmethod - def random(cls, **kwargs): - """Create random AssignResult for tests or debugging. - - Args: - num_preds: number of predicted boxes - num_gts: number of true boxes - p_ignore (float): probability of a predicted box assigned to an - ignored truth - p_assigned (float): probability of a predicted box not being - assigned - p_use_label (float | bool): with labels or not - rng (None | int | numpy.random.RandomState): seed or state - - Returns: - :obj:`AssignResult`: Randomly generated assign results. 
- - Example: - >>> from mmdet.core.bbox.assigners.assign_result import * # NOQA - >>> self = AssignResult.random() - >>> print(self.info) - """ - from mmdet.core.bbox import demodata - rng = demodata.ensure_rng(kwargs.get('rng', None)) - - num_gts = kwargs.get('num_gts', None) - num_preds = kwargs.get('num_preds', None) - p_ignore = kwargs.get('p_ignore', 0.3) - p_assigned = kwargs.get('p_assigned', 0.7) - p_use_label = kwargs.get('p_use_label', 0.5) - num_classes = kwargs.get('p_use_label', 3) - - if num_gts is None: - num_gts = rng.randint(0, 8) - if num_preds is None: - num_preds = rng.randint(0, 16) - - if num_gts == 0: - max_overlaps = torch.zeros(num_preds, dtype=torch.float32) - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - if p_use_label is True or p_use_label < rng.rand(): - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = None - else: - import numpy as np - - # Create an overlap for each predicted box - max_overlaps = torch.from_numpy(rng.rand(num_preds)) - - # Construct gt_inds for each predicted box - is_assigned = torch.from_numpy(rng.rand(num_preds) < p_assigned) - # maximum number of assignments constraints - n_assigned = min(num_preds, min(num_gts, is_assigned.sum())) - - assigned_idxs = np.where(is_assigned)[0] - rng.shuffle(assigned_idxs) - assigned_idxs = assigned_idxs[0:n_assigned] - assigned_idxs.sort() - - is_assigned[:] = 0 - is_assigned[assigned_idxs] = True - - is_ignore = torch.from_numpy( - rng.rand(num_preds) < p_ignore) & is_assigned - - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - - true_idxs = np.arange(num_gts) - rng.shuffle(true_idxs) - true_idxs = torch.from_numpy(true_idxs) - gt_inds[is_assigned] = true_idxs[:n_assigned].long() - - gt_inds = torch.from_numpy( - rng.randint(1, num_gts + 1, size=num_preds)) - gt_inds[is_ignore] = -1 - gt_inds[~is_assigned] = 0 - max_overlaps[~is_assigned] = 0 - - if p_use_label is True or p_use_label < rng.rand(): - if num_classes == 0: - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = torch.from_numpy( - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - rng.randint(0, num_classes, size=num_preds)) - labels[~is_assigned] = 0 - else: - labels = None - - self = cls(num_gts, gt_inds, max_overlaps, labels) - return self - - def add_gt_(self, gt_labels): - """Add ground truth as assigned results. - - Args: - gt_labels (torch.Tensor): Labels of gt boxes - """ - self_inds = torch.arange( - 1, len(gt_labels) + 1, dtype=torch.long, device=gt_labels.device) - self.gt_inds = torch.cat([self_inds, self.gt_inds]) - - self.max_overlaps = torch.cat( - [self.max_overlaps.new_ones(len(gt_labels)), self.max_overlaps]) - - if self.labels is not None: - self.labels = torch.cat([gt_labels, self.labels]) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/atss_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/atss_assigner.py deleted file mode 100644 index 7b195303..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/atss_assigner.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class ATSSAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. 
- - Each proposals will be assigned with `0` or a positive integer - indicating the ground truth index. - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - topk (float): number of bbox selected in each level - """ - - def __init__(self, - topk, - iou_calculator=dict(type='BboxOverlaps2D'), - ignore_iof_thr=-1): - self.topk = topk - self.iou_calculator = build_iou_calculator(iou_calculator) - self.ignore_iof_thr = ignore_iof_thr - - # https://github.com/sfzhang15/ATSS/blob/master/atss_core/modeling/rpn/atss/loss.py - - def assign(self, - bboxes, - num_level_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - """Assign gt to bboxes. - - The assignment is done in following steps - - 1. compute iou between all bbox (bbox of all pyramid levels) and gt - 2. compute center distance between all bbox and gt - 3. on each pyramid level, for each gt, select k bbox whose center - are closest to the gt center, so we total select k*l bbox as - candidates for each gt - 4. get corresponding iou for the these candidates, and compute the - mean and std, set mean + std as the iou threshold - 5. select these candidates whose iou are greater than or equal to - the threshold as positive - 6. limit the positive sample's center in gt - - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - num_level_bboxes (List): num of bboxes in each level - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - INF = 100000000 - bboxes = bboxes[:, :4] - num_gt, num_bboxes = gt_bboxes.size(0), bboxes.size(0) - - # compute iou between all bbox and gt - overlaps = self.iou_calculator(bboxes, gt_bboxes) - - # assign 0 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - 0, - dtype=torch.long) - - if num_gt == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gt == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - - # compute center distance between all bbox and gt - gt_cx = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2.0 - gt_cy = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2.0 - gt_points = torch.stack((gt_cx, gt_cy), dim=1) - - bboxes_cx = (bboxes[:, 0] + bboxes[:, 2]) / 2.0 - bboxes_cy = (bboxes[:, 1] + bboxes[:, 3]) / 2.0 - bboxes_points = torch.stack((bboxes_cx, bboxes_cy), dim=1) - - distances = (bboxes_points[:, None, :] - - gt_points[None, :, :]).pow(2).sum(-1).sqrt() - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0): - ignore_overlaps = self.iou_calculator( - bboxes, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - ignore_idxs = ignore_max_overlaps > self.ignore_iof_thr - distances[ignore_idxs, :] = INF - assigned_gt_inds[ignore_idxs] = -1 - - # Selecting candidates based on the center distance - candidate_idxs = [] - start_idx = 0 - for level, bboxes_per_level in enumerate(num_level_bboxes): - # on each pyramid level, for each gt, 
- # select k bbox whose center are closest to the gt center - end_idx = start_idx + bboxes_per_level - distances_per_level = distances[start_idx:end_idx, :] - selectable_k = min(self.topk, bboxes_per_level) - _, topk_idxs_per_level = distances_per_level.topk( - selectable_k, dim=0, largest=False) - candidate_idxs.append(topk_idxs_per_level + start_idx) - start_idx = end_idx - candidate_idxs = torch.cat(candidate_idxs, dim=0) - - # get corresponding iou for the these candidates, and compute the - # mean and std, set mean + std as the iou threshold - candidate_overlaps = overlaps[candidate_idxs, torch.arange(num_gt)] - overlaps_mean_per_gt = candidate_overlaps.mean(0) - overlaps_std_per_gt = candidate_overlaps.std(0) - overlaps_thr_per_gt = overlaps_mean_per_gt + overlaps_std_per_gt - - is_pos = candidate_overlaps >= overlaps_thr_per_gt[None, :] - - # limit the positive sample's center in gt - for gt_idx in range(num_gt): - candidate_idxs[:, gt_idx] += gt_idx * num_bboxes - ep_bboxes_cx = bboxes_cx.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - ep_bboxes_cy = bboxes_cy.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - candidate_idxs = candidate_idxs.view(-1) - - # calculate the left, top, right, bottom distance between positive - # bbox center and gt side - l_ = ep_bboxes_cx[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 0] - t_ = ep_bboxes_cy[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 1] - r_ = gt_bboxes[:, 2] - ep_bboxes_cx[candidate_idxs].view(-1, num_gt) - b_ = gt_bboxes[:, 3] - ep_bboxes_cy[candidate_idxs].view(-1, num_gt) - is_in_gts = torch.stack([l_, t_, r_, b_], dim=1).min(dim=1)[0] > 0.01 - is_pos = is_pos & is_in_gts - - # if an anchor box is assigned to multiple gts, - # the one with the highest IoU will be selected. - overlaps_inf = torch.full_like(overlaps, - -INF).t().contiguous().view(-1) - index = candidate_idxs.view(-1)[is_pos.view(-1)] - overlaps_inf[index] = overlaps.t().contiguous().view(-1)[index] - overlaps_inf = overlaps_inf.view(num_gt, -1).t() - - max_overlaps, argmax_overlaps = overlaps_inf.max(dim=1) - assigned_gt_inds[ - max_overlaps != -INF] = argmax_overlaps[max_overlaps != -INF] + 1 - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/base_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/base_assigner.py deleted file mode 100644 index 3c2d597a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/base_assigner.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
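The ATSS rule implemented by the assigner removed above boils down to: for each ground truth, take the k centre-closest candidates on each pyramid level, then use mean(IoU) + std(IoU) of those candidates as the positive threshold. A minimal standalone PyTorch sketch of that step, using random stand-in tensors rather than the removed class:

```python
import torch

torch.manual_seed(0)
num_anchors, num_gt, k = 12, 2, 4
overlaps = torch.rand(num_anchors, num_gt)        # stand-in IoU(anchor, gt)
distances = torch.rand(num_anchors, num_gt)       # stand-in centre distances

# k centre-closest candidates per gt; their IoU statistics set the threshold.
_, cand_idxs = distances.topk(k, dim=0, largest=False)    # (k, num_gt)
cand_overlaps = overlaps.gather(0, cand_idxs)             # (k, num_gt)
thr = cand_overlaps.mean(0) + cand_overlaps.std(0)        # adaptive per-gt threshold
is_pos = cand_overlaps >= thr[None, :]                    # candidates kept as positives
print(thr, is_pos.sum(0))
```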
-from abc import ABCMeta, abstractmethod - - -class BaseAssigner(metaclass=ABCMeta): - """Base assigner that assigns boxes to ground truth boxes.""" - - @abstractmethod - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign boxes to either a ground truth boxes or a negative boxes.""" diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/center_region_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/center_region_assigner.py deleted file mode 100644 index 86e78597..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/center_region_assigner.py +++ /dev/null @@ -1,336 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -def scale_boxes(bboxes, scale): - """Expand an array of boxes by a given scale. - - Args: - bboxes (Tensor): Shape (m, 4) - scale (float): The scale factor of bboxes - - Returns: - (Tensor): Shape (m, 4). Scaled bboxes - """ - assert bboxes.size(1) == 4 - w_half = (bboxes[:, 2] - bboxes[:, 0]) * .5 - h_half = (bboxes[:, 3] - bboxes[:, 1]) * .5 - x_c = (bboxes[:, 2] + bboxes[:, 0]) * .5 - y_c = (bboxes[:, 3] + bboxes[:, 1]) * .5 - - w_half *= scale - h_half *= scale - - boxes_scaled = torch.zeros_like(bboxes) - boxes_scaled[:, 0] = x_c - w_half - boxes_scaled[:, 2] = x_c + w_half - boxes_scaled[:, 1] = y_c - h_half - boxes_scaled[:, 3] = y_c + h_half - return boxes_scaled - - -def is_located_in(points, bboxes): - """Are points located in bboxes. - - Args: - points (Tensor): Points, shape: (m, 2). - bboxes (Tensor): Bounding boxes, shape: (n, 4). - - Return: - Tensor: Flags indicating if points are located in bboxes, shape: (m, n). - """ - assert points.size(1) == 2 - assert bboxes.size(1) == 4 - return (points[:, 0].unsqueeze(1) > bboxes[:, 0].unsqueeze(0)) & \ - (points[:, 0].unsqueeze(1) < bboxes[:, 2].unsqueeze(0)) & \ - (points[:, 1].unsqueeze(1) > bboxes[:, 1].unsqueeze(0)) & \ - (points[:, 1].unsqueeze(1) < bboxes[:, 3].unsqueeze(0)) - - -def bboxes_area(bboxes): - """Compute the area of an array of bboxes. - - Args: - bboxes (Tensor): The coordinates ox bboxes. Shape: (m, 4) - - Returns: - Tensor: Area of the bboxes. Shape: (m, ) - """ - assert bboxes.size(1) == 4 - w = (bboxes[:, 2] - bboxes[:, 0]) - h = (bboxes[:, 3] - bboxes[:, 1]) - areas = w * h - return areas - - -@BBOX_ASSIGNERS.register_module() -class CenterRegionAssigner(BaseAssigner): - """Assign pixels at the center region of a bbox as positive. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - -1: negative samples - - semi-positive numbers: positive sample, index (0-based) of assigned gt - - Args: - pos_scale (float): Threshold within which pixels are - labelled as positive. - neg_scale (float): Threshold above which pixels are - labelled as positive. - min_pos_iof (float): Minimum iof of a pixel with a gt to be - labelled as positive. Default: 1e-2 - ignore_gt_scale (float): Threshold within which the pixels - are ignored when the gt is labelled as shadowed. Default: 0.5 - foreground_dominate (bool): If True, the bbox will be assigned as - positive when a gt's kernel region overlaps with another's shadowed - (ignored) region, otherwise it is set as ignored. Default to False. 
- """ - - def __init__(self, - pos_scale, - neg_scale, - min_pos_iof=1e-2, - ignore_gt_scale=0.5, - foreground_dominate=False, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_scale = pos_scale - self.neg_scale = neg_scale - self.min_pos_iof = min_pos_iof - self.ignore_gt_scale = ignore_gt_scale - self.foreground_dominate = foreground_dominate - self.iou_calculator = build_iou_calculator(iou_calculator) - - def get_gt_priorities(self, gt_bboxes): - """Get gt priorities according to their areas. - - Smaller gt has higher priority. - - Args: - gt_bboxes (Tensor): Ground truth boxes, shape (k, 4). - - Returns: - Tensor: The priority of gts so that gts with larger priority is \ - more likely to be assigned. Shape (k, ) - """ - gt_areas = bboxes_area(gt_bboxes) - # Rank all gt bbox areas. Smaller objects has larger priority - _, sort_idx = gt_areas.sort(descending=True) - sort_idx = sort_idx.argsort() - return sort_idx - - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to bboxes. - - This method assigns gts to every bbox (proposal/anchor), each bbox \ - will be assigned with -1, or a semi-positive number. -1 means \ - negative sample, semi-positive number is the index (0-based) of \ - assigned gt. - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (tensor, optional): Label of gt_bboxes, shape (num_gts,). - - Returns: - :obj:`AssignResult`: The assigned result. Note that \ - shadowed_labels of shape (N, 2) is also added as an \ - `assign_result` attribute. `shadowed_labels` is a tensor \ - composed of N pairs of anchor_ind, class_label], where N \ - is the number of anchors that lie in the outer region of a \ - gt, anchor_ind is the shadowed anchor index and class_label \ - is the shadowed class label. - - Example: - >>> self = CenterRegionAssigner(0.2, 0.2) - >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]]) - >>> gt_bboxes = torch.Tensor([[0, 0, 10, 10]]) - >>> assign_result = self.assign(bboxes, gt_bboxes) - >>> expected_gt_inds = torch.LongTensor([1, 0]) - >>> assert torch.all(assign_result.gt_inds == expected_gt_inds) - """ - # There are in total 5 steps in the pixel assignment - # 1. Find core (the center region, say inner 0.2) - # and shadow (the relatively ourter part, say inner 0.2-0.5) - # regions of every gt. - # 2. Find all prior bboxes that lie in gt_core and gt_shadow regions - # 3. Assign prior bboxes in gt_core with a one-hot id of the gt in - # the image. - # 3.1. For overlapping objects, the prior bboxes in gt_core is - # assigned with the object with smallest area - # 4. Assign prior bboxes with class label according to its gt id. - # 4.1. Assign -1 to prior bboxes lying in shadowed gts - # 4.2. Assign positive prior boxes with the corresponding label - # 5. Find pixels lying in the shadow of an object and assign them with - # background label, but set the loss weight of its corresponding - # gt to zero. - assert bboxes.size(1) == 4, 'bboxes must have size of 4' - # 1. Find core positive and shadow region of every gt - gt_core = scale_boxes(gt_bboxes, self.pos_scale) - gt_shadow = scale_boxes(gt_bboxes, self.neg_scale) - - # 2. 
Find prior bboxes that lie in gt_core and gt_shadow regions - bbox_centers = (bboxes[:, 2:4] + bboxes[:, 0:2]) / 2 - # The center points lie within the gt boxes - is_bbox_in_gt = is_located_in(bbox_centers, gt_bboxes) - # Only calculate bbox and gt_core IoF. This enables small prior bboxes - # to match large gts - bbox_and_gt_core_overlaps = self.iou_calculator( - bboxes, gt_core, mode='iof') - # The center point of effective priors should be within the gt box - is_bbox_in_gt_core = is_bbox_in_gt & ( - bbox_and_gt_core_overlaps > self.min_pos_iof) # shape (n, k) - - is_bbox_in_gt_shadow = ( - self.iou_calculator(bboxes, gt_shadow, mode='iof') > - self.min_pos_iof) - # Rule out center effective positive pixels - is_bbox_in_gt_shadow &= (~is_bbox_in_gt_core) - - num_gts, num_bboxes = gt_bboxes.size(0), bboxes.size(0) - if num_gts == 0 or num_bboxes == 0: - # If no gts exist, assign all pixels to negative - assigned_gt_ids = \ - is_bbox_in_gt_core.new_zeros((num_bboxes,), - dtype=torch.long) - pixels_in_gt_shadow = assigned_gt_ids.new_empty((0, 2)) - else: - # Step 3: assign a one-hot gt id to each pixel, and smaller objects - # have high priority to assign the pixel. - sort_idx = self.get_gt_priorities(gt_bboxes) - assigned_gt_ids, pixels_in_gt_shadow = \ - self.assign_one_hot_gt_indices(is_bbox_in_gt_core, - is_bbox_in_gt_shadow, - gt_priority=sort_idx) - - if gt_bboxes_ignore is not None and gt_bboxes_ignore.numel() > 0: - # No ground truth or boxes, return empty assignment - gt_bboxes_ignore = scale_boxes( - gt_bboxes_ignore, scale=self.ignore_gt_scale) - is_bbox_in_ignored_gts = is_located_in(bbox_centers, - gt_bboxes_ignore) - is_bbox_in_ignored_gts = is_bbox_in_ignored_gts.any(dim=1) - assigned_gt_ids[is_bbox_in_ignored_gts] = -1 - - # 4. Assign prior bboxes with class label according to its gt id. - assigned_labels = None - shadowed_pixel_labels = None - if gt_labels is not None: - # Default assigned label is the background (-1) - assigned_labels = assigned_gt_ids.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_ids > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[assigned_gt_ids[pos_inds] - - 1] - # 5. Find pixels lying in the shadow of an object - shadowed_pixel_labels = pixels_in_gt_shadow.clone() - if pixels_in_gt_shadow.numel() > 0: - pixel_idx, gt_idx =\ - pixels_in_gt_shadow[:, 0], pixels_in_gt_shadow[:, 1] - assert (assigned_gt_ids[pixel_idx] != gt_idx).all(), \ - 'Some pixels are dually assigned to ignore and gt!' - shadowed_pixel_labels[:, 1] = gt_labels[gt_idx - 1] - override = ( - assigned_labels[pixel_idx] == shadowed_pixel_labels[:, 1]) - if self.foreground_dominate: - # When a pixel is both positive and shadowed, set it as pos - shadowed_pixel_labels = shadowed_pixel_labels[~override] - else: - # When a pixel is both pos and shadowed, set it as shadowed - assigned_labels[pixel_idx[override]] = -1 - assigned_gt_ids[pixel_idx[override]] = 0 - - assign_result = AssignResult( - num_gts, assigned_gt_ids, None, labels=assigned_labels) - # Add shadowed_labels as assign_result property. Shape: (num_shadow, 2) - assign_result.set_extra_property('shadowed_labels', - shadowed_pixel_labels) - return assign_result - - def assign_one_hot_gt_indices(self, - is_bbox_in_gt_core, - is_bbox_in_gt_shadow, - gt_priority=None): - """Assign only one gt index to each prior box. - - Gts with large gt_priority are more likely to be assigned. 
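The "core" and "shadow" regions used above are simply the ground-truth box shrunk around its centre by `pos_scale` and `neg_scale`. A small standalone illustration with concrete numbers (re-implemented here for clarity, not imported from the removed file):

```python
import torch

def scale_boxes(bboxes, scale):
    """Shrink/grow (m, 4) x1y1x2y2 boxes around their centres by `scale`."""
    cx = (bboxes[:, 0] + bboxes[:, 2]) * 0.5
    cy = (bboxes[:, 1] + bboxes[:, 3]) * 0.5
    w_half = (bboxes[:, 2] - bboxes[:, 0]) * 0.5 * scale
    h_half = (bboxes[:, 3] - bboxes[:, 1]) * 0.5 * scale
    return torch.stack([cx - w_half, cy - h_half, cx + w_half, cy + h_half], dim=1)

gt = torch.tensor([[0., 0., 100., 100.]])
print(scale_boxes(gt, 0.2))   # core region:   [[40., 40., 60., 60.]]
print(scale_boxes(gt, 0.5))   # shadow region: [[25., 25., 75., 75.]]
```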
- - Args: - is_bbox_in_gt_core (Tensor): Bool tensor indicating the bbox center - is in the core area of a gt (e.g. 0-0.2). - Shape: (num_prior, num_gt). - is_bbox_in_gt_shadow (Tensor): Bool tensor indicating the bbox - center is in the shadowed area of a gt (e.g. 0.2-0.5). - Shape: (num_prior, num_gt). - gt_priority (Tensor): Priorities of gts. The gt with a higher - priority is more likely to be assigned to the bbox when the bbox - match with multiple gts. Shape: (num_gt, ). - - Returns: - tuple: Returns (assigned_gt_inds, shadowed_gt_inds). - - - assigned_gt_inds: The assigned gt index of each prior bbox \ - (i.e. index from 1 to num_gts). Shape: (num_prior, ). - - shadowed_gt_inds: shadowed gt indices. It is a tensor of \ - shape (num_ignore, 2) with first column being the \ - shadowed prior bbox indices and the second column the \ - shadowed gt indices (1-based). - """ - num_bboxes, num_gts = is_bbox_in_gt_core.shape - - if gt_priority is None: - gt_priority = torch.arange( - num_gts, device=is_bbox_in_gt_core.device) - assert gt_priority.size(0) == num_gts - # The bigger gt_priority, the more preferable to be assigned - # The assigned inds are by default 0 (background) - assigned_gt_inds = is_bbox_in_gt_core.new_zeros((num_bboxes, ), - dtype=torch.long) - # Shadowed bboxes are assigned to be background. But the corresponding - # label is ignored during loss calculation, which is done through - # shadowed_gt_inds - shadowed_gt_inds = torch.nonzero(is_bbox_in_gt_shadow, as_tuple=False) - if is_bbox_in_gt_core.sum() == 0: # No gt match - shadowed_gt_inds[:, 1] += 1 # 1-based. For consistency issue - return assigned_gt_inds, shadowed_gt_inds - - # The priority of each prior box and gt pair. If one prior box is - # matched bo multiple gts. Only the pair with the highest priority - # is saved - pair_priority = is_bbox_in_gt_core.new_full((num_bboxes, num_gts), - -1, - dtype=torch.long) - - # Each bbox could match with multiple gts. - # The following codes deal with this situation - # Matched bboxes (to any gt). Shape: (num_pos_anchor, ) - inds_of_match = torch.any(is_bbox_in_gt_core, dim=1) - # The matched gt index of each positive bbox. Length >= num_pos_anchor - # , since one bbox could match multiple gts - matched_bbox_gt_inds = torch.nonzero( - is_bbox_in_gt_core, as_tuple=False)[:, 1] - # Assign priority to each bbox-gt pair. - pair_priority[is_bbox_in_gt_core] = gt_priority[matched_bbox_gt_inds] - _, argmax_priority = pair_priority[inds_of_match].max(dim=1) - assigned_gt_inds[inds_of_match] = argmax_priority + 1 # 1-based - # Zero-out the assigned anchor box to filter the shadowed gt indices - is_bbox_in_gt_core[inds_of_match, argmax_priority] = 0 - # Concat the shadowed indices due to overlapping with that out side of - # effective scale. shape: (total_num_ignore, 2) - shadowed_gt_inds = torch.cat( - (shadowed_gt_inds, torch.nonzero( - is_bbox_in_gt_core, as_tuple=False)), - dim=0) - # `is_bbox_in_gt_core` should be changed back to keep arguments intact. 
- is_bbox_in_gt_core[inds_of_match, argmax_priority] = 1 - # 1-based shadowed gt indices, to be consistent with `assigned_gt_inds` - if shadowed_gt_inds.numel() > 0: - shadowed_gt_inds[:, 1] += 1 - return assigned_gt_inds, shadowed_gt_inds diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/grid_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/grid_assigner.py deleted file mode 100644 index a0c814e7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/grid_assigner.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class GridAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - - -1: don't care - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, bboxes, box_responsible_flags, gt_bboxes, gt_labels=None): - """Assign gt to bboxes. The process is very much like the max iou - assigner, except that positive samples are constrained within the cell - that the gt boxes fell in. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, 0, or a positive number. -1 means don't care, - 0 means negative sample, positive number is the index (1-based) of - assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to -1 - 2. assign proposals whose iou with all gts <= neg_iou_thr to 0 - 3. for each bbox within a cell, if the iou with its nearest gt > - pos_iou_thr and the center of that gt falls inside the cell, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals within the cell the - gt bbox falls in to itself. - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - box_responsible_flags (Tensor): flag to indicate whether box is - responsible for prediction, shape(n, ) - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_gts, num_bboxes = gt_bboxes.size(0), bboxes.size(0) - - # compute iou between all gt and bboxes - overlaps = self.iou_calculator(gt_bboxes, bboxes) - - # 1. 
assign -1 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gts == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - # 2. assign negative: below - # for each anchor, which gt best overlaps with it - # for each anchor, the max iou of all gts - # shape of max_overlaps == argmax_overlaps == num_bboxes - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - if isinstance(self.neg_iou_thr, float): - assigned_gt_inds[(max_overlaps >= 0) - & (max_overlaps <= self.neg_iou_thr)] = 0 - elif isinstance(self.neg_iou_thr, (tuple, list)): - assert len(self.neg_iou_thr) == 2 - assigned_gt_inds[(max_overlaps > self.neg_iou_thr[0]) - & (max_overlaps <= self.neg_iou_thr[1])] = 0 - - # 3. assign positive: falls into responsible cell and above - # positive IOU threshold, the order matters. - # the prior condition of comparison is to filter out all - # unrelated anchors, i.e. not box_responsible_flags - overlaps[:, ~box_responsible_flags.type(torch.bool)] = -1. - - # calculate max_overlaps again, but this time we only consider IOUs - # for anchors responsible for prediction - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - # for each gt, which anchor best overlaps with it - # for each gt, the max iou of all proposals - # shape of gt_max_overlaps == gt_argmax_overlaps == num_gts - gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1) - - pos_inds = (max_overlaps > - self.pos_iou_thr) & box_responsible_flags.type(torch.bool) - assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1 - - # 4. assign positive to max overlapped anchors within responsible cell - for i in range(num_gts): - if gt_max_overlaps[i] > self.min_pos_iou: - if self.gt_max_assign_all: - max_iou_inds = (overlaps[i, :] == gt_max_overlaps[i]) & \ - box_responsible_flags.type(torch.bool) - assigned_gt_inds[max_iou_inds] = i + 1 - elif box_responsible_flags[gt_argmax_overlaps[i]]: - assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1 - - # assign labels of positive anchors - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/hungarian_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/hungarian_assigner.py deleted file mode 100644 index 4105fb5c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/hungarian_assigner.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
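The grid-style and max-IoU-style assigners removed in this patch share the same IoU-thresholding pattern: build an IoU matrix, mark low-overlap anchors as background and high-overlap anchors with a 1-based gt index. A self-contained sketch in plain PyTorch, with assumed thresholds of 0.3 (negative) and 0.7 (positive):

```python
import torch

def bbox_iou(a, b):
    """IoU matrix between (n, 4) and (m, 4) boxes in x1y1x2y2 format."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = torch.max(a[:, None, :2], b[None, :, :2])
    rb = torch.min(a[:, None, 2:], b[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter)

anchors = torch.tensor([[0., 0., 10., 10.], [8., 8., 18., 18.], [30., 30., 40., 40.]])
gts = torch.tensor([[0., 0., 10., 9.]])
overlaps = bbox_iou(anchors, gts)                  # (num_anchors, num_gt)
max_ov, argmax = overlaps.max(dim=1)

assigned = torch.full((len(anchors),), -1, dtype=torch.long)   # -1 = don't care
assigned[max_ov < 0.3] = 0                         # negatives (assumed neg_iou_thr=0.3)
pos = max_ov >= 0.7                                # positives (assumed pos_iou_thr=0.7)
assigned[pos] = argmax[pos] + 1                    # 1-based gt index
print(assigned)                                    # tensor([1, 0, 0])
```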
-import torch - -from ..builder import BBOX_ASSIGNERS -from ..match_costs import build_match_cost -from ..transforms import bbox_cxcywh_to_xyxy -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - -try: - from scipy.optimize import linear_sum_assignment -except ImportError: - linear_sum_assignment = None - - -@BBOX_ASSIGNERS.register_module() -class HungarianAssigner(BaseAssigner): - """Computes one-to-one matching between predictions and ground truth. - - This class computes an assignment between the targets and the predictions - based on the costs. The costs are weighted sum of three components: - classification cost, regression L1 cost and regression iou cost. The - targets don't include the no_object, so generally there are more - predictions than targets. After the one-to-one matching, the un-matched - are treated as backgrounds. Thus each query prediction will be assigned - with `0` or a positive integer indicating the ground truth index: - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - cls_weight (int | float, optional): The scale factor for classification - cost. Default 1.0. - bbox_weight (int | float, optional): The scale factor for regression - L1 cost. Default 1.0. - iou_weight (int | float, optional): The scale factor for regression - iou cost. Default 1.0. - iou_calculator (dict | optional): The config for the iou calculation. - Default type `BboxOverlaps2D`. - iou_mode (str | optional): "iou" (intersection over union), "iof" - (intersection over foreground), or "giou" (generalized - intersection over union). Default "giou". - """ - - def __init__(self, - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=1.0), - iou_cost=dict(type='IoUCost', iou_mode='giou', weight=1.0)): - self.cls_cost = build_match_cost(cls_cost) - self.reg_cost = build_match_cost(reg_cost) - self.iou_cost = build_match_cost(iou_cost) - - def assign(self, - bbox_pred, - cls_pred, - gt_bboxes, - gt_labels, - img_meta, - gt_bboxes_ignore=None, - eps=1e-7): - """Computes one-to-one matching based on the weighted costs. - - This method assign each query prediction to a ground truth or - background. The `assigned_gt_inds` with -1 means don't care, - 0 means negative sample, and positive number is the index (1-based) - of assigned gt. - The assignment is done in the following steps, the order matters. - - 1. assign every prediction to -1 - 2. compute the weighted costs - 3. do Hungarian matching on CPU based on the costs - 4. assign all to 0 (background) first, then for each matched pair - between predictions and gts, treat this prediction as foreground - and assign the corresponding gt index (plus 1) to it. - - Args: - bbox_pred (Tensor): Predicted boxes with normalized coordinates - (cx, cy, w, h), which are all in range [0, 1]. Shape - [num_query, 4]. - cls_pred (Tensor): Predicted classification logits, shape - [num_query, num_class]. - gt_bboxes (Tensor): Ground truth boxes with unnormalized - coordinates (x1, y1, x2, y2). Shape [num_gt, 4]. - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - img_meta (dict): Meta information for current image. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`. Default None. - eps (int | float, optional): A value added to the denominator for - numerical stability. Default 1e-7. - - Returns: - :obj:`AssignResult`: The assigned result. 
- """ - assert gt_bboxes_ignore is None, \ - 'Only case when gt_bboxes_ignore is None is supported.' - num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0) - - # 1. assign -1 by default - assigned_gt_inds = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - assigned_labels = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - if num_gts == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - img_h, img_w, _ = img_meta['img_shape'] - factor = gt_bboxes.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - - # 2. compute the weighted costs - # classification and bboxcost. - cls_cost = self.cls_cost(cls_pred, gt_labels) - # regression L1 cost - normalize_gt_bboxes = gt_bboxes / factor - reg_cost = self.reg_cost(bbox_pred, normalize_gt_bboxes) - # regression iou cost, defaultly giou is used in official DETR. - bboxes = bbox_cxcywh_to_xyxy(bbox_pred) * factor - iou_cost = self.iou_cost(bboxes, gt_bboxes) - # weighted sum of above three costs - cost = cls_cost + reg_cost + iou_cost - - # 3. do Hungarian matching on CPU using linear_sum_assignment - cost = cost.detach().cpu() - if linear_sum_assignment is None: - raise ImportError('Please run "pip install scipy" ' - 'to install scipy first.') - matched_row_inds, matched_col_inds = linear_sum_assignment(cost) - matched_row_inds = torch.from_numpy(matched_row_inds).to( - bbox_pred.device) - matched_col_inds = torch.from_numpy(matched_col_inds).to( - bbox_pred.device) - - # 4. assign backgrounds and foregrounds - # assign all indices to backgrounds first - assigned_gt_inds[:] = 0 - # assign foregrounds based on matching results - assigned_gt_inds[matched_row_inds] = matched_col_inds + 1 - assigned_labels[matched_row_inds] = gt_labels[matched_col_inds] - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/mask_hungarian_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/mask_hungarian_assigner.py deleted file mode 100644 index f5f27f3f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/mask_hungarian_assigner.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core.bbox.builder import BBOX_ASSIGNERS -from mmdet.core.bbox.match_costs.builder import build_match_cost -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - -try: - from scipy.optimize import linear_sum_assignment -except ImportError: - linear_sum_assignment = None - - -@BBOX_ASSIGNERS.register_module() -class MaskHungarianAssigner(BaseAssigner): - """Computes one-to-one matching between predictions and ground truth for - mask. - - This class computes an assignment between the targets and the predictions - based on the costs. The costs are weighted sum of three components: - classification cost, mask focal cost and mask dice cost. The - targets don't include the no_object, so generally there are more - predictions than targets. After the one-to-one matching, the un-matched - are treated as backgrounds. 
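Both Hungarian-style assigners removed here reduce to a single call of `scipy.optimize.linear_sum_assignment` on a weighted cost matrix; matched queries receive 1-based gt indices and everything else becomes background. A standalone sketch with a random stand-in cost matrix (not the removed classes):

```python
import torch
from scipy.optimize import linear_sum_assignment

num_queries, num_gts = 5, 2
cost = torch.rand(num_queries, num_gts)           # stand-in for cls + reg + iou cost

row, col = linear_sum_assignment(cost.numpy())    # one-to-one matching on CPU
assigned_gt_inds = torch.zeros(num_queries, dtype=torch.long)   # 0 = background
assigned_gt_inds[torch.from_numpy(row)] = torch.from_numpy(col) + 1
print(assigned_gt_inds)                           # e.g. tensor([0, 2, 0, 1, 0])
```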
Thus each query prediction will be assigned - with `0` or a positive integer indicating the ground truth index: - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - cls_cost (:obj:`mmcv.ConfigDict` | dict): Classification cost config. - mask_cost (:obj:`mmcv.ConfigDict` | dict): Mask cost config. - dice_cost (:obj:`mmcv.ConfigDict` | dict): Dice cost config. - """ - - def __init__(self, - cls_cost=dict(type='ClassificationCost', weight=1.0), - mask_cost=dict( - type='FocalLossCost', weight=1.0, binary_input=True), - dice_cost=dict(type='DiceCost', weight=1.0)): - self.cls_cost = build_match_cost(cls_cost) - self.mask_cost = build_match_cost(mask_cost) - self.dice_cost = build_match_cost(dice_cost) - - def assign(self, - cls_pred, - mask_pred, - gt_labels, - gt_mask, - img_meta, - gt_bboxes_ignore=None, - eps=1e-7): - """Computes one-to-one matching based on the weighted costs. - - Args: - cls_pred (Tensor | None): Class prediction in shape - (num_query, cls_out_channels). - mask_pred (Tensor): Mask prediction in shape (num_query, H, W). - gt_labels (Tensor): Label of 'gt_mask'in shape = (num_gt, ). - gt_mask (Tensor): Ground truth mask in shape = (num_gt, H, W). - img_meta (dict): Meta information for current image. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`. Default None. - eps (int | float, optional): A value added to the denominator for - numerical stability. Default 1e-7. - - Returns: - :obj:`AssignResult`: The assigned result. - """ - assert gt_bboxes_ignore is None, \ - 'Only case when gt_bboxes_ignore is None is supported.' - # K-Net sometimes passes cls_pred=None to this assigner. - # So we should use the shape of mask_pred - num_gt, num_query = gt_labels.shape[0], mask_pred.shape[0] - - # 1. assign -1 by default - assigned_gt_inds = mask_pred.new_full((num_query, ), - -1, - dtype=torch.long) - assigned_labels = mask_pred.new_full((num_query, ), - -1, - dtype=torch.long) - if num_gt == 0 or num_query == 0: - # No ground truth or boxes, return empty assignment - if num_gt == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - return AssignResult( - num_gt, assigned_gt_inds, None, labels=assigned_labels) - - # 2. compute the weighted costs - # classification and maskcost. - if self.cls_cost.weight != 0 and cls_pred is not None: - cls_cost = self.cls_cost(cls_pred, gt_labels) - else: - cls_cost = 0 - - if self.mask_cost.weight != 0: - # mask_pred shape = [num_query, h, w] - # gt_mask shape = [num_gt, h, w] - # mask_cost shape = [num_query, num_gt] - mask_cost = self.mask_cost(mask_pred, gt_mask) - else: - mask_cost = 0 - - if self.dice_cost.weight != 0: - dice_cost = self.dice_cost(mask_pred, gt_mask) - else: - dice_cost = 0 - cost = cls_cost + mask_cost + dice_cost - - # 3. do Hungarian matching on CPU using linear_sum_assignment - cost = cost.detach().cpu() - if linear_sum_assignment is None: - raise ImportError('Please run "pip install scipy" ' - 'to install scipy first.') - - matched_row_inds, matched_col_inds = linear_sum_assignment(cost) - matched_row_inds = torch.from_numpy(matched_row_inds).to( - mask_pred.device) - matched_col_inds = torch.from_numpy(matched_col_inds).to( - mask_pred.device) - - # 4. 
assign backgrounds and foregrounds - # assign all indices to backgrounds first - assigned_gt_inds[:] = 0 - # assign foregrounds based on matching results - assigned_gt_inds[matched_row_inds] = matched_col_inds + 1 - assigned_labels[matched_row_inds] = gt_labels[matched_col_inds] - return AssignResult( - num_gt, assigned_gt_inds, None, labels=assigned_labels) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/max_iou_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/max_iou_assigner.py deleted file mode 100644 index 676421f7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/max_iou_assigner.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class MaxIoUAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, or a semi-positive integer - indicating the ground truth index. - - - -1: negative sample, no assigned gt - - semi-positive integer: positive sample, index (0-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - `min_pos_iou` is set to avoid assigning bboxes that have extremely - small iou with GT as positive samples. It brings about 0.3 mAP - improvements in 1x schedule but does not affect the performance of - 3x schedule. More comparisons can be found in - `PR #7464 `_. - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - ignore_iof_thr (float): IoF threshold for ignoring bboxes (if - `gt_bboxes_ignore` is specified). Negative values mean not - ignoring any bboxes. - ignore_wrt_candidates (bool): Whether to compute the iof between - `bboxes` and `gt_bboxes_ignore`, or the contrary. - match_low_quality (bool): Whether to allow low quality matches. This is - usually allowed for RPN and single stage detectors, but not allowed - in the second stage. Details are demonstrated in Step 4. - gpu_assign_thr (int): The upper bound of the number of GT for GPU - assign. When the number of gt is above this threshold, will assign - on CPU device. Negative values mean not assign on CPU. - """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - ignore_iof_thr=-1, - ignore_wrt_candidates=True, - match_low_quality=True, - gpu_assign_thr=-1, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.ignore_iof_thr = ignore_iof_thr - self.ignore_wrt_candidates = ignore_wrt_candidates - self.gpu_assign_thr = gpu_assign_thr - self.match_low_quality = match_low_quality - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to bboxes. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, or a semi-positive number. 
-1 means negative - sample, semi-positive number is the index (0-based) of assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to the background - 2. assign proposals whose iou with all gts < neg_iou_thr to 0 - 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals (may be more than - one) to itself - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - - Example: - >>> self = MaxIoUAssigner(0.5, 0.5) - >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]]) - >>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]]) - >>> assign_result = self.assign(bboxes, gt_bboxes) - >>> expected_gt_inds = torch.LongTensor([1, 0]) - >>> assert torch.all(assign_result.gt_inds == expected_gt_inds) - """ - assign_on_cpu = True if (self.gpu_assign_thr > 0) and ( - gt_bboxes.shape[0] > self.gpu_assign_thr) else False - # compute overlap and assign gt on CPU when number of GT is large - if assign_on_cpu: - device = bboxes.device - bboxes = bboxes.cpu() - gt_bboxes = gt_bboxes.cpu() - if gt_bboxes_ignore is not None: - gt_bboxes_ignore = gt_bboxes_ignore.cpu() - if gt_labels is not None: - gt_labels = gt_labels.cpu() - - overlaps = self.iou_calculator(gt_bboxes, bboxes) - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0): - if self.ignore_wrt_candidates: - ignore_overlaps = self.iou_calculator( - bboxes, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - else: - ignore_overlaps = self.iou_calculator( - gt_bboxes_ignore, bboxes, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=0) - overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1 - - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - if assign_on_cpu: - assign_result.gt_inds = assign_result.gt_inds.to(device) - assign_result.max_overlaps = assign_result.max_overlaps.to(device) - if assign_result.labels is not None: - assign_result.labels = assign_result.labels.to(device) - return assign_result - - def assign_wrt_overlaps(self, overlaps, gt_labels=None): - """Assign w.r.t. the overlaps of bboxes with gts. - - Args: - overlaps (Tensor): Overlaps between k gt_bboxes and n bboxes, - shape(k, n). - gt_labels (Tensor, optional): Labels of k gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_gts, num_bboxes = overlaps.size(0), overlaps.size(1) - - # 1. 
assign -1 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gts == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - # for each anchor, which gt best overlaps with it - # for each anchor, the max iou of all gts - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - # for each gt, which anchor best overlaps with it - # for each gt, the max iou of all proposals - gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1) - - # 2. assign negative: below - # the negative inds are set to be 0 - if isinstance(self.neg_iou_thr, float): - assigned_gt_inds[(max_overlaps >= 0) - & (max_overlaps < self.neg_iou_thr)] = 0 - elif isinstance(self.neg_iou_thr, tuple): - assert len(self.neg_iou_thr) == 2 - assigned_gt_inds[(max_overlaps >= self.neg_iou_thr[0]) - & (max_overlaps < self.neg_iou_thr[1])] = 0 - - # 3. assign positive: above positive IoU threshold - pos_inds = max_overlaps >= self.pos_iou_thr - assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1 - - if self.match_low_quality: - # Low-quality matching will overwrite the assigned_gt_inds assigned - # in Step 3. Thus, the assigned gt might not be the best one for - # prediction. - # For example, if bbox A has 0.9 and 0.8 iou with GT bbox 1 & 2, - # bbox 1 will be assigned as the best target for bbox A in step 3. - # However, if GT bbox 2's gt_argmax_overlaps = A, bbox A's - # assigned_gt_inds will be overwritten to be bbox 2. - # This might be the reason that it is not used in ROI Heads. - for i in range(num_gts): - if gt_max_overlaps[i] >= self.min_pos_iou: - if self.gt_max_assign_all: - max_iou_inds = overlaps[i, :] == gt_max_overlaps[i] - assigned_gt_inds[max_iou_inds] = i + 1 - else: - assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1 - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/point_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/point_assigner.py deleted file mode 100644 index b0dc2246..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/point_assigner.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class PointAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each point. - - Each proposals will be assigned with `0`, or a positive integer - indicating the ground truth index. 
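The "low quality match" step of the max-IoU assigner above is what lets each ground truth claim its single best anchor even when that anchor's IoU is below `pos_iou_thr`. A small standalone numeric example (hand-picked IoUs, plain PyTorch):

```python
import torch

overlaps = torch.tensor([[0.45, 0.10, 0.05],      # IoU of gt 0 with 3 anchors
                         [0.40, 0.20, 0.60]])     # IoU of gt 1 with the same anchors
pos_iou_thr, neg_iou_thr, min_pos_iou = 0.5, 0.4, 0.3

max_ov, argmax = overlaps.max(dim=0)              # best gt for every anchor
assigned = torch.full((overlaps.size(1),), -1, dtype=torch.long)
assigned[(max_ov >= 0) & (max_ov < neg_iou_thr)] = 0        # step 2: negatives
pos = max_ov >= pos_iou_thr
assigned[pos] = argmax[pos] + 1                             # step 3: positives

gt_max_ov, gt_argmax = overlaps.max(dim=1)        # best anchor for every gt
for i in range(overlaps.size(0)):                 # step 4: low-quality matches
    if gt_max_ov[i] >= min_pos_iou:
        assigned[gt_argmax[i]] = i + 1
print(assigned)   # tensor([1, 0, 2]); anchor 0 is positive for gt 0 although 0.45 < pos_iou_thr
```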
- - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - """ - - def __init__(self, scale=4, pos_num=3): - self.scale = scale - self.pos_num = pos_num - - def assign(self, points, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to points. - - This method assign a gt bbox to every points set, each points set - will be assigned with the background_label (-1), or a label number. - -1 is background, and semi-positive number is the index (0-based) of - assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every points to the background_label (-1) - 2. A point is assigned to some gt bbox if - (i) the point is within the k closest points to the gt bbox - (ii) the distance between this point and the gt is smaller than - other gt bboxes - - Args: - points (Tensor): points to be assigned, shape(n, 3) while last - dimension stands for (x, y, stride). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - NOTE: currently unused. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_points = points.shape[0] - num_gts = gt_bboxes.shape[0] - - if num_gts == 0 or num_points == 0: - # If no truth assign everything to the background - assigned_gt_inds = points.new_full((num_points, ), - 0, - dtype=torch.long) - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = points.new_full((num_points, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - - points_xy = points[:, :2] - points_stride = points[:, 2] - points_lvl = torch.log2( - points_stride).int() # [3...,4...,5...,6...,7...] - lvl_min, lvl_max = points_lvl.min(), points_lvl.max() - - # assign gt box - gt_bboxes_xy = (gt_bboxes[:, :2] + gt_bboxes[:, 2:]) / 2 - gt_bboxes_wh = (gt_bboxes[:, 2:] - gt_bboxes[:, :2]).clamp(min=1e-6) - scale = self.scale - gt_bboxes_lvl = ((torch.log2(gt_bboxes_wh[:, 0] / scale) + - torch.log2(gt_bboxes_wh[:, 1] / scale)) / 2).int() - gt_bboxes_lvl = torch.clamp(gt_bboxes_lvl, min=lvl_min, max=lvl_max) - - # stores the assigned gt index of each point - assigned_gt_inds = points.new_zeros((num_points, ), dtype=torch.long) - # stores the assigned gt dist (to this point) of each point - assigned_gt_dist = points.new_full((num_points, ), float('inf')) - points_range = torch.arange(points.shape[0]) - - for idx in range(num_gts): - gt_lvl = gt_bboxes_lvl[idx] - # get the index of points in this level - lvl_idx = gt_lvl == points_lvl - points_index = points_range[lvl_idx] - # get the points in this level - lvl_points = points_xy[lvl_idx, :] - # get the center point of gt - gt_point = gt_bboxes_xy[[idx], :] - # get width and height of gt - gt_wh = gt_bboxes_wh[[idx], :] - # compute the distance between gt center and - # all points in this level - points_gt_dist = ((lvl_points - gt_point) / gt_wh).norm(dim=1) - # find the nearest k points to gt center in this level - min_dist, min_dist_index = torch.topk( - points_gt_dist, self.pos_num, largest=False) - # the index of nearest k points to gt center in this level - min_dist_points_index = points_index[min_dist_index] - # The less_than_recorded_index stores the index - # of min_dist that is less then the assigned_gt_dist. 
Where - # assigned_gt_dist stores the dist from previous assigned gt - # (if exist) to each point. - less_than_recorded_index = min_dist < assigned_gt_dist[ - min_dist_points_index] - # The min_dist_points_index stores the index of points satisfy: - # (1) it is k nearest to current gt center in this level. - # (2) it is closer to current gt center than other gt center. - min_dist_points_index = min_dist_points_index[ - less_than_recorded_index] - # assign the result - assigned_gt_inds[min_dist_points_index] = idx + 1 - assigned_gt_dist[min_dist_points_index] = min_dist[ - less_than_recorded_index] - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_points, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/region_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/region_assigner.py deleted file mode 100644 index 1833b894..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/region_assigner.py +++ /dev/null @@ -1,222 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import anchor_inside_flags -from ..builder import BBOX_ASSIGNERS -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -def calc_region(bbox, ratio, stride, featmap_size=None): - """Calculate region of the box defined by the ratio, the ratio is from the - center of the box to every edge.""" - # project bbox on the feature - f_bbox = bbox / stride - x1 = torch.round((1 - ratio) * f_bbox[0] + ratio * f_bbox[2]) - y1 = torch.round((1 - ratio) * f_bbox[1] + ratio * f_bbox[3]) - x2 = torch.round(ratio * f_bbox[0] + (1 - ratio) * f_bbox[2]) - y2 = torch.round(ratio * f_bbox[1] + (1 - ratio) * f_bbox[3]) - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) - - -def anchor_ctr_inside_region_flags(anchors, stride, region): - """Get the flag indicate whether anchor centers are inside regions.""" - x1, y1, x2, y2 = region - f_anchors = anchors / stride - x = (f_anchors[:, 0] + f_anchors[:, 2]) * 0.5 - y = (f_anchors[:, 1] + f_anchors[:, 3]) * 0.5 - flags = (x >= x1) & (x <= x2) & (y >= y1) & (y <= y2) - return flags - - -@BBOX_ASSIGNERS.register_module() -class RegionAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - - -1: don't care - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - center_ratio: ratio of the region in the center of the bbox to - define positive sample. - ignore_ratio: ratio of the region to define ignore samples. - """ - - def __init__(self, center_ratio=0.2, ignore_ratio=0.5): - self.center_ratio = center_ratio - self.ignore_ratio = ignore_ratio - - def assign(self, - mlvl_anchors, - mlvl_valid_flags, - gt_bboxes, - img_meta, - featmap_sizes, - anchor_scale, - anchor_strides, - gt_bboxes_ignore=None, - gt_labels=None, - allowed_border=0): - """Assign gt to anchors. 
- - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, 0, or a positive number. -1 means don't care, - 0 means negative sample, positive number is the index (1-based) of - assigned gt. - - The assignment is done in following steps, and the order matters. - - 1. Assign every anchor to 0 (negative) - 2. (For each gt_bboxes) Compute ignore flags based on ignore_region - then assign -1 to anchors w.r.t. ignore flags - 3. (For each gt_bboxes) Compute pos flags based on center_region then - assign gt_bboxes to anchors w.r.t. pos flags - 4. (For each gt_bboxes) Compute ignore flags based on adjacent anchor - level then assign -1 to anchors w.r.t. ignore flags - 5. Assign anchor outside of image to -1 - - Args: - mlvl_anchors (list[Tensor]): Multi level anchors. - mlvl_valid_flags (list[Tensor]): Multi level valid flags. - gt_bboxes (Tensor): Ground truth bboxes of image - img_meta (dict): Meta info of image. - featmap_sizes (list[Tensor]): Feature mapsize each level - anchor_scale (int): Scale of the anchor. - anchor_strides (list[int]): Stride of the anchor. - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - allowed_border (int, optional): The border to allow the valid - anchor. Defaults to 0. - - Returns: - :obj:`AssignResult`: The assign result. - """ - if gt_bboxes_ignore is not None: - raise NotImplementedError - - num_gts = gt_bboxes.shape[0] - num_bboxes = sum(x.shape[0] for x in mlvl_anchors) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = gt_bboxes.new_zeros((num_bboxes, )) - assigned_gt_inds = gt_bboxes.new_zeros((num_bboxes, ), - dtype=torch.long) - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = gt_bboxes.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - num_lvls = len(mlvl_anchors) - r1 = (1 - self.center_ratio) / 2 - r2 = (1 - self.ignore_ratio) / 2 - - scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - min_anchor_size = scale.new_full( - (1, ), float(anchor_scale * anchor_strides[0])) - target_lvls = torch.floor( - torch.log2(scale) - torch.log2(min_anchor_size) + 0.5) - target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long() - - # 1. assign 0 (negative) by default - mlvl_assigned_gt_inds = [] - mlvl_ignore_flags = [] - for lvl in range(num_lvls): - h, w = featmap_sizes[lvl] - assert h * w == mlvl_anchors[lvl].shape[0] - assigned_gt_inds = gt_bboxes.new_full((h * w, ), - 0, - dtype=torch.long) - ignore_flags = torch.zeros_like(assigned_gt_inds) - mlvl_assigned_gt_inds.append(assigned_gt_inds) - mlvl_ignore_flags.append(ignore_flags) - - for gt_id in range(num_gts): - lvl = target_lvls[gt_id].item() - featmap_size = featmap_sizes[lvl] - stride = anchor_strides[lvl] - anchors = mlvl_anchors[lvl] - gt_bbox = gt_bboxes[gt_id, :4] - - # Compute regions - ignore_region = calc_region(gt_bbox, r2, stride, featmap_size) - ctr_region = calc_region(gt_bbox, r1, stride, featmap_size) - - # 2. Assign -1 to ignore flags - ignore_flags = anchor_ctr_inside_region_flags( - anchors, stride, ignore_region) - mlvl_assigned_gt_inds[lvl][ignore_flags] = -1 - - # 3. 
Assign gt_bboxes to pos flags - pos_flags = anchor_ctr_inside_region_flags(anchors, stride, - ctr_region) - mlvl_assigned_gt_inds[lvl][pos_flags] = gt_id + 1 - - # 4. Assign -1 to ignore adjacent lvl - if lvl > 0: - d_lvl = lvl - 1 - d_anchors = mlvl_anchors[d_lvl] - d_featmap_size = featmap_sizes[d_lvl] - d_stride = anchor_strides[d_lvl] - d_ignore_region = calc_region(gt_bbox, r2, d_stride, - d_featmap_size) - ignore_flags = anchor_ctr_inside_region_flags( - d_anchors, d_stride, d_ignore_region) - mlvl_ignore_flags[d_lvl][ignore_flags] = 1 - if lvl < num_lvls - 1: - u_lvl = lvl + 1 - u_anchors = mlvl_anchors[u_lvl] - u_featmap_size = featmap_sizes[u_lvl] - u_stride = anchor_strides[u_lvl] - u_ignore_region = calc_region(gt_bbox, r2, u_stride, - u_featmap_size) - ignore_flags = anchor_ctr_inside_region_flags( - u_anchors, u_stride, u_ignore_region) - mlvl_ignore_flags[u_lvl][ignore_flags] = 1 - - # 4. (cont.) Assign -1 to ignore adjacent lvl - for lvl in range(num_lvls): - ignore_flags = mlvl_ignore_flags[lvl] - mlvl_assigned_gt_inds[lvl][ignore_flags] = -1 - - # 5. Assign -1 to anchor outside of image - flat_assigned_gt_inds = torch.cat(mlvl_assigned_gt_inds) - flat_anchors = torch.cat(mlvl_anchors) - flat_valid_flags = torch.cat(mlvl_valid_flags) - assert (flat_assigned_gt_inds.shape[0] == flat_anchors.shape[0] == - flat_valid_flags.shape[0]) - inside_flags = anchor_inside_flags(flat_anchors, flat_valid_flags, - img_meta['img_shape'], - allowed_border) - outside_flags = ~inside_flags - flat_assigned_gt_inds[outside_flags] = -1 - - if gt_labels is not None: - assigned_labels = torch.zeros_like(flat_assigned_gt_inds) - pos_flags = assigned_gt_inds > 0 - assigned_labels[pos_flags] = gt_labels[ - flat_assigned_gt_inds[pos_flags] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, flat_assigned_gt_inds, None, labels=assigned_labels) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/sim_ota_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/sim_ota_assigner.py deleted file mode 100644 index 58bfef43..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/sim_ota_assigner.py +++ /dev/null @@ -1,257 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn.functional as F - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import bbox_overlaps -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class SimOTAAssigner(BaseAssigner): - """Computes matching between predictions and ground truth. - - Args: - center_radius (int | float, optional): Ground truth center size - to judge whether a prior is in center. Default 2.5. - candidate_topk (int, optional): The candidate top-k which used to - get top-k ious to calculate dynamic-k. Default 10. - iou_weight (int | float, optional): The scale factor for regression - iou cost. Default 3.0. - cls_weight (int | float, optional): The scale factor for classification - cost. Default 1.0. - """ - - def __init__(self, - center_radius=2.5, - candidate_topk=10, - iou_weight=3.0, - cls_weight=1.0): - self.center_radius = center_radius - self.candidate_topk = candidate_topk - self.iou_weight = iou_weight - self.cls_weight = cls_weight - - def assign(self, - pred_scores, - priors, - decoded_bboxes, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - eps=1e-7): - """Assign gt to priors using SimOTA. It will switch to CPU mode when - GPU is out of memory. 
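The region assigner above picks a target FPN level for each ground truth from the ratio between sqrt(box area) and the smallest anchor size. A standalone sketch of that mapping, assuming strides [8, 16, 32, 64, 128] and anchor_scale 8:

```python
import math
import torch

anchor_scale, anchor_strides, num_lvls = 8, [8, 16, 32, 64, 128], 5
gt_bboxes = torch.tensor([[0., 0., 32., 32.],      # small object
                          [0., 0., 300., 400.]])   # large object

scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) *
                   (gt_bboxes[:, 3] - gt_bboxes[:, 1]))
min_anchor_size = float(anchor_scale * anchor_strides[0])     # 64
target_lvls = torch.floor(torch.log2(scale) - math.log2(min_anchor_size) + 0.5)
target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long()
print(target_lvls)   # tensor([0, 2]): 32x32 -> level 0, 300x400 -> level 2
```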
- Args: - pred_scores (Tensor): Classification scores of one image, - a 2D-Tensor with shape [num_priors, num_classes] - priors (Tensor): All priors of one image, a 2D-Tensor with shape - [num_priors, 4] in [cx, xy, stride_w, stride_y] format. - decoded_bboxes (Tensor): Predicted bboxes, a 2D-Tensor with shape - [num_priors, 4] in [tl_x, tl_y, br_x, br_y] format. - gt_bboxes (Tensor): Ground truth bboxes of one image, a 2D-Tensor - with shape [num_gts, 4] in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth labels of one image, a Tensor - with shape [num_gts]. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - eps (float): A value added to the denominator for numerical - stability. Default 1e-7. - Returns: - assign_result (obj:`AssignResult`): The assigned result. - """ - try: - assign_result = self._assign(pred_scores, priors, decoded_bboxes, - gt_bboxes, gt_labels, - gt_bboxes_ignore, eps) - return assign_result - except RuntimeError: - origin_device = pred_scores.device - warnings.warn('OOM RuntimeError is raised due to the huge memory ' - 'cost during label assignment. CPU mode is applied ' - 'in this batch. If you want to avoid this issue, ' - 'try to reduce the batch size or image size.') - torch.cuda.empty_cache() - - pred_scores = pred_scores.cpu() - priors = priors.cpu() - decoded_bboxes = decoded_bboxes.cpu() - gt_bboxes = gt_bboxes.cpu().float() - gt_labels = gt_labels.cpu() - - assign_result = self._assign(pred_scores, priors, decoded_bboxes, - gt_bboxes, gt_labels, - gt_bboxes_ignore, eps) - assign_result.gt_inds = assign_result.gt_inds.to(origin_device) - assign_result.max_overlaps = assign_result.max_overlaps.to( - origin_device) - assign_result.labels = assign_result.labels.to(origin_device) - - return assign_result - - def _assign(self, - pred_scores, - priors, - decoded_bboxes, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - eps=1e-7): - """Assign gt to priors using SimOTA. - Args: - pred_scores (Tensor): Classification scores of one image, - a 2D-Tensor with shape [num_priors, num_classes] - priors (Tensor): All priors of one image, a 2D-Tensor with shape - [num_priors, 4] in [cx, xy, stride_w, stride_y] format. - decoded_bboxes (Tensor): Predicted bboxes, a 2D-Tensor with shape - [num_priors, 4] in [tl_x, tl_y, br_x, br_y] format. - gt_bboxes (Tensor): Ground truth bboxes of one image, a 2D-Tensor - with shape [num_gts, 4] in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth labels of one image, a Tensor - with shape [num_gts]. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - eps (float): A value added to the denominator for numerical - stability. Default 1e-7. - Returns: - :obj:`AssignResult`: The assigned result. 
- """ - INF = 100000.0 - num_gt = gt_bboxes.size(0) - num_bboxes = decoded_bboxes.size(0) - - # assign 0 by default - assigned_gt_inds = decoded_bboxes.new_full((num_bboxes, ), - 0, - dtype=torch.long) - valid_mask, is_in_boxes_and_center = self.get_in_gt_and_in_center_info( - priors, gt_bboxes) - valid_decoded_bbox = decoded_bboxes[valid_mask] - valid_pred_scores = pred_scores[valid_mask] - num_valid = valid_decoded_bbox.size(0) - - if num_gt == 0 or num_bboxes == 0 or num_valid == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = decoded_bboxes.new_zeros((num_bboxes, )) - if num_gt == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = decoded_bboxes.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - - pairwise_ious = bbox_overlaps(valid_decoded_bbox, gt_bboxes) - iou_cost = -torch.log(pairwise_ious + eps) - - gt_onehot_label = ( - F.one_hot(gt_labels.to(torch.int64), - pred_scores.shape[-1]).float().unsqueeze(0).repeat( - num_valid, 1, 1)) - - valid_pred_scores = valid_pred_scores.unsqueeze(1).repeat(1, num_gt, 1) - cls_cost = ( - F.binary_cross_entropy( - valid_pred_scores.to(dtype=torch.float32).sqrt_(), - gt_onehot_label, - reduction='none', - ).sum(-1).to(dtype=valid_pred_scores.dtype)) - - cost_matrix = ( - cls_cost * self.cls_weight + iou_cost * self.iou_weight + - (~is_in_boxes_and_center) * INF) - - matched_pred_ious, matched_gt_inds = \ - self.dynamic_k_matching( - cost_matrix, pairwise_ious, num_gt, valid_mask) - - # convert to AssignResult format - assigned_gt_inds[valid_mask] = matched_gt_inds + 1 - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - assigned_labels[valid_mask] = gt_labels[matched_gt_inds].long() - max_overlaps = assigned_gt_inds.new_full((num_bboxes, ), - -INF, - dtype=torch.float32) - max_overlaps[valid_mask] = matched_pred_ious - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - - def get_in_gt_and_in_center_info(self, priors, gt_bboxes): - num_gt = gt_bboxes.size(0) - - repeated_x = priors[:, 0].unsqueeze(1).repeat(1, num_gt) - repeated_y = priors[:, 1].unsqueeze(1).repeat(1, num_gt) - repeated_stride_x = priors[:, 2].unsqueeze(1).repeat(1, num_gt) - repeated_stride_y = priors[:, 3].unsqueeze(1).repeat(1, num_gt) - - # is prior centers in gt bboxes, shape: [n_prior, n_gt] - l_ = repeated_x - gt_bboxes[:, 0] - t_ = repeated_y - gt_bboxes[:, 1] - r_ = gt_bboxes[:, 2] - repeated_x - b_ = gt_bboxes[:, 3] - repeated_y - - deltas = torch.stack([l_, t_, r_, b_], dim=1) - is_in_gts = deltas.min(dim=1).values > 0 - is_in_gts_all = is_in_gts.sum(dim=1) > 0 - - # is prior centers in gt centers - gt_cxs = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2.0 - gt_cys = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2.0 - ct_box_l = gt_cxs - self.center_radius * repeated_stride_x - ct_box_t = gt_cys - self.center_radius * repeated_stride_y - ct_box_r = gt_cxs + self.center_radius * repeated_stride_x - ct_box_b = gt_cys + self.center_radius * repeated_stride_y - - cl_ = repeated_x - ct_box_l - ct_ = repeated_y - ct_box_t - cr_ = ct_box_r - repeated_x - cb_ = ct_box_b - repeated_y - - ct_deltas = torch.stack([cl_, ct_, cr_, cb_], dim=1) - is_in_cts = ct_deltas.min(dim=1).values > 0 - is_in_cts_all = is_in_cts.sum(dim=1) > 0 - - # in boxes or in centers, shape: [num_priors] - is_in_gts_or_centers = is_in_gts_all | is_in_cts_all 
- - # both in boxes and centers, shape: [num_fg, num_gt] - is_in_boxes_and_centers = ( - is_in_gts[is_in_gts_or_centers, :] - & is_in_cts[is_in_gts_or_centers, :]) - return is_in_gts_or_centers, is_in_boxes_and_centers - - def dynamic_k_matching(self, cost, pairwise_ious, num_gt, valid_mask): - matching_matrix = torch.zeros_like(cost, dtype=torch.uint8) - # select candidate topk ious for dynamic-k calculation - candidate_topk = min(self.candidate_topk, pairwise_ious.size(0)) - topk_ious, _ = torch.topk(pairwise_ious, candidate_topk, dim=0) - # calculate dynamic k for each gt - dynamic_ks = torch.clamp(topk_ious.sum(0).int(), min=1) - for gt_idx in range(num_gt): - _, pos_idx = torch.topk( - cost[:, gt_idx], k=dynamic_ks[gt_idx], largest=False) - matching_matrix[:, gt_idx][pos_idx] = 1 - - del topk_ious, dynamic_ks, pos_idx - - prior_match_gt_mask = matching_matrix.sum(1) > 1 - if prior_match_gt_mask.sum() > 0: - cost_min, cost_argmin = torch.min( - cost[prior_match_gt_mask, :], dim=1) - matching_matrix[prior_match_gt_mask, :] *= 0 - matching_matrix[prior_match_gt_mask, cost_argmin] = 1 - # get foreground mask inside box and center prior - fg_mask_inboxes = matching_matrix.sum(1) > 0 - valid_mask[valid_mask.clone()] = fg_mask_inboxes - - matched_gt_inds = matching_matrix[fg_mask_inboxes, :].argmax(1) - matched_pred_ious = (matching_matrix * - pairwise_ious).sum(1)[fg_mask_inboxes] - return matched_pred_ious, matched_gt_inds diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/task_aligned_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/task_aligned_assigner.py deleted file mode 100644 index 1872de4a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/task_aligned_assigner.py +++ /dev/null @@ -1,151 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - -INF = 100000000 - - -@BBOX_ASSIGNERS.register_module() -class TaskAlignedAssigner(BaseAssigner): - """Task aligned assigner used in the paper: - `TOOD: Task-aligned One-stage Object Detection. - `_. - - Assign a corresponding gt bbox or background to each predicted bbox. - Each bbox will be assigned with `0` or a positive integer - indicating the ground truth index. - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - topk (int): number of bbox selected in each level - iou_calculator (dict): Config dict for iou calculator. - Default: dict(type='BboxOverlaps2D') - """ - - def __init__(self, topk, iou_calculator=dict(type='BboxOverlaps2D')): - assert topk >= 1 - self.topk = topk - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, - pred_scores, - decode_bboxes, - anchors, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None, - alpha=1, - beta=6): - """Assign gt to bboxes. - - The assignment is done in following steps - - 1. compute alignment metric between all bbox (bbox of all pyramid - levels) and gt - 2. select top-k bbox as candidates for each gt - 3. limit the positive sample's center in gt (because the anchor-free - detector only can predict positive distance) - - - Args: - pred_scores (Tensor): predicted class probability, - shape(n, num_classes) - decode_bboxes (Tensor): predicted bounding boxes, shape(n, 4) - anchors (Tensor): pre-defined anchors, shape(n, 4). 
- gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`TaskAlignedAssignResult`: The assign result. - """ - anchors = anchors[:, :4] - num_gt, num_bboxes = gt_bboxes.size(0), anchors.size(0) - # compute alignment metric between all bbox and gt - overlaps = self.iou_calculator(decode_bboxes, gt_bboxes).detach() - bbox_scores = pred_scores[:, gt_labels].detach() - # assign 0 by default - assigned_gt_inds = anchors.new_full((num_bboxes, ), - 0, - dtype=torch.long) - assign_metrics = anchors.new_zeros((num_bboxes, )) - - if num_gt == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = anchors.new_zeros((num_bboxes, )) - if num_gt == 0: - # No gt boxes, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = anchors.new_full((num_bboxes, ), - -1, - dtype=torch.long) - assign_result = AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - assign_result.assign_metrics = assign_metrics - return assign_result - - # select top-k bboxes as candidates for each gt - alignment_metrics = bbox_scores**alpha * overlaps**beta - topk = min(self.topk, alignment_metrics.size(0)) - _, candidate_idxs = alignment_metrics.topk(topk, dim=0, largest=True) - candidate_metrics = alignment_metrics[candidate_idxs, - torch.arange(num_gt)] - is_pos = candidate_metrics > 0 - - # limit the positive sample's center in gt - anchors_cx = (anchors[:, 0] + anchors[:, 2]) / 2.0 - anchors_cy = (anchors[:, 1] + anchors[:, 3]) / 2.0 - for gt_idx in range(num_gt): - candidate_idxs[:, gt_idx] += gt_idx * num_bboxes - ep_anchors_cx = anchors_cx.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - ep_anchors_cy = anchors_cy.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - candidate_idxs = candidate_idxs.view(-1) - - # calculate the left, top, right, bottom distance between positive - # bbox center and gt side - l_ = ep_anchors_cx[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 0] - t_ = ep_anchors_cy[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 1] - r_ = gt_bboxes[:, 2] - ep_anchors_cx[candidate_idxs].view(-1, num_gt) - b_ = gt_bboxes[:, 3] - ep_anchors_cy[candidate_idxs].view(-1, num_gt) - is_in_gts = torch.stack([l_, t_, r_, b_], dim=1).min(dim=1)[0] > 0.01 - is_pos = is_pos & is_in_gts - - # if an anchor box is assigned to multiple gts, - # the one with the highest iou will be selected. 
- overlaps_inf = torch.full_like(overlaps, - -INF).t().contiguous().view(-1) - index = candidate_idxs.view(-1)[is_pos.view(-1)] - overlaps_inf[index] = overlaps.t().contiguous().view(-1)[index] - overlaps_inf = overlaps_inf.view(num_gt, -1).t() - - max_overlaps, argmax_overlaps = overlaps_inf.max(dim=1) - assigned_gt_inds[ - max_overlaps != -INF] = argmax_overlaps[max_overlaps != -INF] + 1 - assign_metrics[max_overlaps != -INF] = alignment_metrics[ - max_overlaps != -INF, argmax_overlaps[max_overlaps != -INF]] - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - assign_result = AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - assign_result.assign_metrics = assign_metrics - return assign_result diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/uniform_assigner.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/uniform_assigner.py deleted file mode 100644 index 70294fc4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/assigners/uniform_assigner.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from ..transforms import bbox_xyxy_to_cxcywh -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class UniformAssigner(BaseAssigner): - """Uniform Matching between the anchors and gt boxes, which can achieve - balance in positive anchors, and gt_bboxes_ignore was not considered for - now. - - Args: - pos_ignore_thr (float): the threshold to ignore positive anchors - neg_ignore_thr (float): the threshold to ignore negative anchors - match_times(int): Number of positive anchors for each gt box. - Default 4. - iou_calculator (dict): iou_calculator config - """ - - def __init__(self, - pos_ignore_thr, - neg_ignore_thr, - match_times=4, - iou_calculator=dict(type='BboxOverlaps2D')): - self.match_times = match_times - self.pos_ignore_thr = pos_ignore_thr - self.neg_ignore_thr = neg_ignore_thr - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, - bbox_pred, - anchor, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0) - - # 1. assign -1 by default - assigned_gt_inds = bbox_pred.new_full((num_bboxes, ), - 0, - dtype=torch.long) - assigned_labels = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - if num_gts == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - assign_result = AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - assign_result.set_extra_property( - 'pos_idx', bbox_pred.new_empty(0, dtype=torch.bool)) - assign_result.set_extra_property('pos_predicted_boxes', - bbox_pred.new_empty((0, 4))) - assign_result.set_extra_property('target_boxes', - bbox_pred.new_empty((0, 4))) - return assign_result - - # 2. 
Compute the L1 cost between boxes - # Note that we use anchors and predict boxes both - cost_bbox = torch.cdist( - bbox_xyxy_to_cxcywh(bbox_pred), - bbox_xyxy_to_cxcywh(gt_bboxes), - p=1) - cost_bbox_anchors = torch.cdist( - bbox_xyxy_to_cxcywh(anchor), bbox_xyxy_to_cxcywh(gt_bboxes), p=1) - - # We found that topk function has different results in cpu and - # cuda mode. In order to ensure consistency with the source code, - # we also use cpu mode. - # TODO: Check whether the performance of cpu and cuda are the same. - C = cost_bbox.cpu() - C1 = cost_bbox_anchors.cpu() - - # self.match_times x n - index = torch.topk( - C, # c=b,n,x c[i]=n,x - k=self.match_times, - dim=0, - largest=False)[1] - - # self.match_times x n - index1 = torch.topk(C1, k=self.match_times, dim=0, largest=False)[1] - # (self.match_times*2) x n - indexes = torch.cat((index, index1), - dim=1).reshape(-1).to(bbox_pred.device) - - pred_overlaps = self.iou_calculator(bbox_pred, gt_bboxes) - anchor_overlaps = self.iou_calculator(anchor, gt_bboxes) - pred_max_overlaps, _ = pred_overlaps.max(dim=1) - anchor_max_overlaps, _ = anchor_overlaps.max(dim=0) - - # 3. Compute the ignore indexes use gt_bboxes and predict boxes - ignore_idx = pred_max_overlaps > self.neg_ignore_thr - assigned_gt_inds[ignore_idx] = -1 - - # 4. Compute the ignore indexes of positive sample use anchors - # and predict boxes - pos_gt_index = torch.arange( - 0, C1.size(1), - device=bbox_pred.device).repeat(self.match_times * 2) - pos_ious = anchor_overlaps[indexes, pos_gt_index] - pos_ignore_idx = pos_ious < self.pos_ignore_thr - - pos_gt_index_with_ignore = pos_gt_index + 1 - pos_gt_index_with_ignore[pos_ignore_idx] = -1 - assigned_gt_inds[indexes] = pos_gt_index_with_ignore - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - assign_result = AssignResult( - num_gts, - assigned_gt_inds, - anchor_max_overlaps, - labels=assigned_labels) - assign_result.set_extra_property('pos_idx', ~pos_ignore_idx) - assign_result.set_extra_property('pos_predicted_boxes', - bbox_pred[indexes]) - assign_result.set_extra_property('target_boxes', - gt_bboxes[pos_gt_index]) - return assign_result diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/builder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/builder.py deleted file mode 100644 index 9cfa055b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/builder.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmcv.utils import Registry, build_from_cfg - -BBOX_ASSIGNERS = Registry('bbox_assigner') -BBOX_SAMPLERS = Registry('bbox_sampler') -BBOX_CODERS = Registry('bbox_coder') - - -def build_assigner(cfg, **default_args): - """Builder of box assigner.""" - return build_from_cfg(cfg, BBOX_ASSIGNERS, default_args) - - -def build_sampler(cfg, **default_args): - """Builder of box sampler.""" - return build_from_cfg(cfg, BBOX_SAMPLERS, default_args) - - -def build_bbox_coder(cfg, **default_args): - """Builder of box coder.""" - return build_from_cfg(cfg, BBOX_CODERS, default_args) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/__init__.py deleted file mode 100644 index e12fd64e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_bbox_coder import BaseBBoxCoder -from .bucketing_bbox_coder import BucketingBBoxCoder -from .delta_xywh_bbox_coder import DeltaXYWHBBoxCoder -from .distance_point_bbox_coder import DistancePointBBoxCoder -from .legacy_delta_xywh_bbox_coder import LegacyDeltaXYWHBBoxCoder -from .pseudo_bbox_coder import PseudoBBoxCoder -from .tblr_bbox_coder import TBLRBBoxCoder -from .yolo_bbox_coder import YOLOBBoxCoder - -__all__ = [ - 'BaseBBoxCoder', 'PseudoBBoxCoder', 'DeltaXYWHBBoxCoder', - 'LegacyDeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'YOLOBBoxCoder', - 'BucketingBBoxCoder', 'DistancePointBBoxCoder' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/base_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/base_bbox_coder.py deleted file mode 100644 index a7ed041a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/base_bbox_coder.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BaseBBoxCoder(metaclass=ABCMeta): - """Base bounding box coder.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def encode(self, bboxes, gt_bboxes): - """Encode deltas between bboxes and ground truth boxes.""" - - @abstractmethod - def decode(self, bboxes, bboxes_pred): - """Decode the predicted bboxes according to prediction and base - boxes.""" diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/bucketing_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/bucketing_bbox_coder.py deleted file mode 100644 index 4be0ada0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/bucketing_bbox_coder.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch -import torch.nn.functional as F - -from ..builder import BBOX_CODERS -from ..transforms import bbox_rescale -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class BucketingBBoxCoder(BaseBBoxCoder): - """Bucketing BBox Coder for Side-Aware Boundary Localization (SABL). - - Boundary Localization with Bucketing and Bucketing Guided Rescoring - are implemented here. - - Please refer to https://arxiv.org/abs/1912.04260 for more details. - - Args: - num_buckets (int): Number of buckets. - scale_factor (int): Scale factor of proposals to generate buckets. - offset_topk (int): Topk buckets are used to generate - bucket fine regression targets. Defaults to 2. - offset_upperbound (float): Offset upperbound to generate - bucket fine regression targets. 
- To avoid too large offset displacements. Defaults to 1.0. - cls_ignore_neighbor (bool): Ignore second nearest bucket or Not. - Defaults to True. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - """ - - def __init__(self, - num_buckets, - scale_factor, - offset_topk=2, - offset_upperbound=1.0, - cls_ignore_neighbor=True, - clip_border=True): - super(BucketingBBoxCoder, self).__init__() - self.num_buckets = num_buckets - self.scale_factor = scale_factor - self.offset_topk = offset_topk - self.offset_upperbound = offset_upperbound - self.cls_ignore_neighbor = cls_ignore_neighbor - self.clip_border = clip_border - - def encode(self, bboxes, gt_bboxes): - """Get bucketing estimation and fine regression targets during - training. - - Args: - bboxes (torch.Tensor): source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): target of the transformation, e.g., - ground truth boxes. - - Returns: - encoded_bboxes(tuple[Tensor]): bucketing estimation - and fine regression targets and weights - """ - - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bbox2bucket(bboxes, gt_bboxes, self.num_buckets, - self.scale_factor, self.offset_topk, - self.offset_upperbound, - self.cls_ignore_neighbor) - return encoded_bboxes - - def decode(self, bboxes, pred_bboxes, max_shape=None): - """Apply transformation `pred_bboxes` to `boxes`. - Args: - boxes (torch.Tensor): Basic boxes. - pred_bboxes (torch.Tensor): Predictions for bucketing estimation - and fine regression - max_shape (tuple[int], optional): Maximum shape of boxes. - Defaults to None. - - Returns: - torch.Tensor: Decoded boxes. - """ - assert len(pred_bboxes) == 2 - cls_preds, offset_preds = pred_bboxes - assert cls_preds.size(0) == bboxes.size(0) and offset_preds.size( - 0) == bboxes.size(0) - decoded_bboxes = bucket2bbox(bboxes, cls_preds, offset_preds, - self.num_buckets, self.scale_factor, - max_shape, self.clip_border) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def generat_buckets(proposals, num_buckets, scale_factor=1.0): - """Generate buckets w.r.t bucket number and scale factor of proposals. - - Args: - proposals (Tensor): Shape (n, 4) - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - - Returns: - tuple[Tensor]: (bucket_w, bucket_h, l_buckets, r_buckets, - t_buckets, d_buckets) - - - bucket_w: Width of buckets on x-axis. Shape (n, ). - - bucket_h: Height of buckets on y-axis. Shape (n, ). - - l_buckets: Left buckets. Shape (n, ceil(side_num/2)). - - r_buckets: Right buckets. Shape (n, ceil(side_num/2)). - - t_buckets: Top buckets. Shape (n, ceil(side_num/2)). - - d_buckets: Down buckets. Shape (n, ceil(side_num/2)). 
- """ - proposals = bbox_rescale(proposals, scale_factor) - - # number of buckets in each side - side_num = int(np.ceil(num_buckets / 2.0)) - pw = proposals[..., 2] - proposals[..., 0] - ph = proposals[..., 3] - proposals[..., 1] - px1 = proposals[..., 0] - py1 = proposals[..., 1] - px2 = proposals[..., 2] - py2 = proposals[..., 3] - - bucket_w = pw / num_buckets - bucket_h = ph / num_buckets - - # left buckets - l_buckets = px1[:, None] + (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_w[:, None] - # right buckets - r_buckets = px2[:, None] - (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_w[:, None] - # top buckets - t_buckets = py1[:, None] + (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_h[:, None] - # down buckets - d_buckets = py2[:, None] - (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_h[:, None] - return bucket_w, bucket_h, l_buckets, r_buckets, t_buckets, d_buckets - - -@mmcv.jit(coderize=True) -def bbox2bucket(proposals, - gt, - num_buckets, - scale_factor, - offset_topk=2, - offset_upperbound=1.0, - cls_ignore_neighbor=True): - """Generate buckets estimation and fine regression targets. - - Args: - proposals (Tensor): Shape (n, 4) - gt (Tensor): Shape (n, 4) - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - offset_topk (int): Topk buckets are used to generate - bucket fine regression targets. Defaults to 2. - offset_upperbound (float): Offset allowance to generate - bucket fine regression targets. - To avoid too large offset displacements. Defaults to 1.0. - cls_ignore_neighbor (bool): Ignore second nearest bucket or Not. - Defaults to True. - - Returns: - tuple[Tensor]: (offsets, offsets_weights, bucket_labels, cls_weights). - - - offsets: Fine regression targets. \ - Shape (n, num_buckets*2). - - offsets_weights: Fine regression weights. \ - Shape (n, num_buckets*2). - - bucket_labels: Bucketing estimation labels. \ - Shape (n, num_buckets*2). - - cls_weights: Bucketing estimation weights. \ - Shape (n, num_buckets*2). 
- """ - assert proposals.size() == gt.size() - - # generate buckets - proposals = proposals.float() - gt = gt.float() - (bucket_w, bucket_h, l_buckets, r_buckets, t_buckets, - d_buckets) = generat_buckets(proposals, num_buckets, scale_factor) - - gx1 = gt[..., 0] - gy1 = gt[..., 1] - gx2 = gt[..., 2] - gy2 = gt[..., 3] - - # generate offset targets and weights - # offsets from buckets to gts - l_offsets = (l_buckets - gx1[:, None]) / bucket_w[:, None] - r_offsets = (r_buckets - gx2[:, None]) / bucket_w[:, None] - t_offsets = (t_buckets - gy1[:, None]) / bucket_h[:, None] - d_offsets = (d_buckets - gy2[:, None]) / bucket_h[:, None] - - # select top-k nearest buckets - l_topk, l_label = l_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - r_topk, r_label = r_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - t_topk, t_label = t_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - d_topk, d_label = d_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - - offset_l_weights = l_offsets.new_zeros(l_offsets.size()) - offset_r_weights = r_offsets.new_zeros(r_offsets.size()) - offset_t_weights = t_offsets.new_zeros(t_offsets.size()) - offset_d_weights = d_offsets.new_zeros(d_offsets.size()) - inds = torch.arange(0, proposals.size(0)).to(proposals).long() - - # generate offset weights of top-k nearest buckets - for k in range(offset_topk): - if k >= 1: - offset_l_weights[inds, l_label[:, - k]] = (l_topk[:, k] < - offset_upperbound).float() - offset_r_weights[inds, r_label[:, - k]] = (r_topk[:, k] < - offset_upperbound).float() - offset_t_weights[inds, t_label[:, - k]] = (t_topk[:, k] < - offset_upperbound).float() - offset_d_weights[inds, d_label[:, - k]] = (d_topk[:, k] < - offset_upperbound).float() - else: - offset_l_weights[inds, l_label[:, k]] = 1.0 - offset_r_weights[inds, r_label[:, k]] = 1.0 - offset_t_weights[inds, t_label[:, k]] = 1.0 - offset_d_weights[inds, d_label[:, k]] = 1.0 - - offsets = torch.cat([l_offsets, r_offsets, t_offsets, d_offsets], dim=-1) - offsets_weights = torch.cat([ - offset_l_weights, offset_r_weights, offset_t_weights, offset_d_weights - ], - dim=-1) - - # generate bucket labels and weight - side_num = int(np.ceil(num_buckets / 2.0)) - labels = torch.stack( - [l_label[:, 0], r_label[:, 0], t_label[:, 0], d_label[:, 0]], dim=-1) - - batch_size = labels.size(0) - bucket_labels = F.one_hot(labels.view(-1), side_num).view(batch_size, - -1).float() - bucket_cls_l_weights = (l_offsets.abs() < 1).float() - bucket_cls_r_weights = (r_offsets.abs() < 1).float() - bucket_cls_t_weights = (t_offsets.abs() < 1).float() - bucket_cls_d_weights = (d_offsets.abs() < 1).float() - bucket_cls_weights = torch.cat([ - bucket_cls_l_weights, bucket_cls_r_weights, bucket_cls_t_weights, - bucket_cls_d_weights - ], - dim=-1) - # ignore second nearest buckets for cls if necessary - if cls_ignore_neighbor: - bucket_cls_weights = (~((bucket_cls_weights == 1) & - (bucket_labels == 0))).float() - else: - bucket_cls_weights[:] = 1.0 - return offsets, offsets_weights, bucket_labels, bucket_cls_weights - - -@mmcv.jit(coderize=True) -def bucket2bbox(proposals, - cls_preds, - offset_preds, - num_buckets, - scale_factor=1.0, - max_shape=None, - clip_border=True): - """Apply bucketing estimation (cls preds) and fine regression (offset - preds) to generate det bboxes. - - Args: - proposals (Tensor): Boxes to be transformed. Shape (n, 4) - cls_preds (Tensor): bucketing estimation. Shape (n, num_buckets*2). 
- offset_preds (Tensor): fine regression. Shape (n, num_buckets*2). - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - max_shape (tuple[int, int]): Maximum bounds for boxes. specifies (H, W) - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - - Returns: - tuple[Tensor]: (bboxes, loc_confidence). - - - bboxes: predicted bboxes. Shape (n, 4) - - loc_confidence: localization confidence of predicted bboxes. - Shape (n,). - """ - - side_num = int(np.ceil(num_buckets / 2.0)) - cls_preds = cls_preds.view(-1, side_num) - offset_preds = offset_preds.view(-1, side_num) - - scores = F.softmax(cls_preds, dim=1) - score_topk, score_label = scores.topk(2, dim=1, largest=True, sorted=True) - - rescaled_proposals = bbox_rescale(proposals, scale_factor) - - pw = rescaled_proposals[..., 2] - rescaled_proposals[..., 0] - ph = rescaled_proposals[..., 3] - rescaled_proposals[..., 1] - px1 = rescaled_proposals[..., 0] - py1 = rescaled_proposals[..., 1] - px2 = rescaled_proposals[..., 2] - py2 = rescaled_proposals[..., 3] - - bucket_w = pw / num_buckets - bucket_h = ph / num_buckets - - score_inds_l = score_label[0::4, 0] - score_inds_r = score_label[1::4, 0] - score_inds_t = score_label[2::4, 0] - score_inds_d = score_label[3::4, 0] - l_buckets = px1 + (0.5 + score_inds_l.float()) * bucket_w - r_buckets = px2 - (0.5 + score_inds_r.float()) * bucket_w - t_buckets = py1 + (0.5 + score_inds_t.float()) * bucket_h - d_buckets = py2 - (0.5 + score_inds_d.float()) * bucket_h - - offsets = offset_preds.view(-1, 4, side_num) - inds = torch.arange(proposals.size(0)).to(proposals).long() - l_offsets = offsets[:, 0, :][inds, score_inds_l] - r_offsets = offsets[:, 1, :][inds, score_inds_r] - t_offsets = offsets[:, 2, :][inds, score_inds_t] - d_offsets = offsets[:, 3, :][inds, score_inds_d] - - x1 = l_buckets - l_offsets * bucket_w - x2 = r_buckets - r_offsets * bucket_w - y1 = t_buckets - t_offsets * bucket_h - y2 = d_buckets - d_offsets * bucket_h - - if clip_border and max_shape is not None: - x1 = x1.clamp(min=0, max=max_shape[1] - 1) - y1 = y1.clamp(min=0, max=max_shape[0] - 1) - x2 = x2.clamp(min=0, max=max_shape[1] - 1) - y2 = y2.clamp(min=0, max=max_shape[0] - 1) - bboxes = torch.cat([x1[:, None], y1[:, None], x2[:, None], y2[:, None]], - dim=-1) - - # bucketing guided rescoring - loc_confidence = score_topk[:, 0] - top2_neighbor_inds = (score_label[:, 0] - score_label[:, 1]).abs() == 1 - loc_confidence += score_topk[:, 1] * top2_neighbor_inds.float() - loc_confidence = loc_confidence.view(-1, 4).mean(dim=1) - - return bboxes, loc_confidence diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py deleted file mode 100644 index a7f1c62f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py +++ /dev/null @@ -1,392 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -import numpy as np -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class DeltaXYWHBBoxCoder(BaseBBoxCoder): - """Delta XYWH BBox coder. - - Following the practice in `R-CNN `_, - this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and - decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2). 
- - Args: - target_means (Sequence[float]): Denormalizing means of target for - delta coordinates - target_stds (Sequence[float]): Denormalizing standard deviation of - target for delta coordinates - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - add_ctr_clamp (bool): Whether to add center clamp, when added, the - predicted box is clamped is its center is too far away from - the original anchor's center. Only used by YOLOF. Default False. - ctr_clamp (int): the maximum pixel shift to clamp. Only used by YOLOF. - Default 32. - """ - - def __init__(self, - target_means=(0., 0., 0., 0.), - target_stds=(1., 1., 1., 1.), - clip_border=True, - add_ctr_clamp=False, - ctr_clamp=32): - super(BaseBBoxCoder, self).__init__() - self.means = target_means - self.stds = target_stds - self.clip_border = clip_border - self.add_ctr_clamp = add_ctr_clamp - self.ctr_clamp = ctr_clamp - - def encode(self, bboxes, gt_bboxes): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes``. - - Args: - bboxes (torch.Tensor): Source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): Target of the transformation, e.g., - ground-truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bbox2delta(bboxes, gt_bboxes, self.means, self.stds) - return encoded_bboxes - - def decode(self, - bboxes, - pred_bboxes, - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - bboxes (torch.Tensor): Basic boxes. Shape (B, N, 4) or (N, 4) - pred_bboxes (Tensor): Encoded offsets with respect to each roi. - Has shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H - when rois is a grid of anchors.Offset encoding follows [1]_. - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - wh_ratio_clip (float, optional): The allowed ratio between - width and height. - - Returns: - torch.Tensor: Decoded boxes. - """ - - assert pred_bboxes.size(0) == bboxes.size(0) - if pred_bboxes.ndim == 3: - assert pred_bboxes.size(1) == bboxes.size(1) - - if pred_bboxes.ndim == 2 and not torch.onnx.is_in_onnx_export(): - # single image decode - decoded_bboxes = delta2bbox(bboxes, pred_bboxes, self.means, - self.stds, max_shape, wh_ratio_clip, - self.clip_border, self.add_ctr_clamp, - self.ctr_clamp) - else: - if pred_bboxes.ndim == 3 and not torch.onnx.is_in_onnx_export(): - warnings.warn( - 'DeprecationWarning: onnx_delta2bbox is deprecated ' - 'in the case of batch decoding and non-ONNX, ' - 'please use “delta2bbox” instead. In order to improve ' - 'the decoding speed, the batch function will no ' - 'longer be supported. ') - decoded_bboxes = onnx_delta2bbox(bboxes, pred_bboxes, self.means, - self.stds, max_shape, - wh_ratio_clip, self.clip_border, - self.add_ctr_clamp, - self.ctr_clamp) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def bbox2delta(proposals, gt, means=(0., 0., 0., 0.), stds=(1., 1., 1., 1.)): - """Compute deltas of proposals w.r.t. gt. - - We usually compute the deltas of x, y, w, h of proposals w.r.t ground - truth bboxes to get regression target. 
- This is the inverse function of :func:`delta2bbox`. - - Args: - proposals (Tensor): Boxes to be transformed, shape (N, ..., 4) - gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4) - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - - Returns: - Tensor: deltas with shape (N, 4), where columns represent dx, dy, - dw, dh. - """ - assert proposals.size() == gt.size() - - proposals = proposals.float() - gt = gt.float() - px = (proposals[..., 0] + proposals[..., 2]) * 0.5 - py = (proposals[..., 1] + proposals[..., 3]) * 0.5 - pw = proposals[..., 2] - proposals[..., 0] - ph = proposals[..., 3] - proposals[..., 1] - - gx = (gt[..., 0] + gt[..., 2]) * 0.5 - gy = (gt[..., 1] + gt[..., 3]) * 0.5 - gw = gt[..., 2] - gt[..., 0] - gh = gt[..., 3] - gt[..., 1] - - dx = (gx - px) / pw - dy = (gy - py) / ph - dw = torch.log(gw / pw) - dh = torch.log(gh / ph) - deltas = torch.stack([dx, dy, dw, dh], dim=-1) - - means = deltas.new_tensor(means).unsqueeze(0) - stds = deltas.new_tensor(stds).unsqueeze(0) - deltas = deltas.sub_(means).div_(stds) - - return deltas - - -@mmcv.jit(coderize=True) -def delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000, - clip_border=True, - add_ctr_clamp=False, - ctr_clamp=32): - """Apply deltas to shift/scale base boxes. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. - This is the inverse function of :func:`bbox2delta`. - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4). - deltas (Tensor): Encoded offsets relative to each roi. - Has shape (N, num_classes * 4) or (N, 4). Note - N = num_base_anchors * W * H, when rois is a grid of - anchors. Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates. - Default (0., 0., 0., 0.). - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates. Default (1., 1., 1., 1.). - max_shape (tuple[int, int]): Maximum bounds for boxes, specifies - (H, W). Default None. - wh_ratio_clip (float): Maximum aspect ratio for boxes. Default - 16 / 1000. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Default True. - add_ctr_clamp (bool): Whether to add center clamp. When set to True, - the center of the prediction bounding box will be clamped to - avoid being too far away from the center of the anchor. - Only used by YOLOF. Default False. - ctr_clamp (int): the maximum pixel shift to clamp. Only used by YOLOF. - Default 32. - - Returns: - Tensor: Boxes with shape (N, num_classes * 4) or (N, 4), where 4 - represent tl_x, tl_y, br_x, br_y. - - References: - .. 
[1] https://arxiv.org/abs/1311.2524 - - Example: - >>> rois = torch.Tensor([[ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 5., 5., 5., 5.]]) - >>> deltas = torch.Tensor([[ 0., 0., 0., 0.], - >>> [ 1., 1., 1., 1.], - >>> [ 0., 0., 2., -1.], - >>> [ 0.7, -1.9, -0.5, 0.3]]) - >>> delta2bbox(rois, deltas, max_shape=(32, 32, 3)) - tensor([[0.0000, 0.0000, 1.0000, 1.0000], - [0.1409, 0.1409, 2.8591, 2.8591], - [0.0000, 0.3161, 4.1945, 0.6839], - [5.0000, 5.0000, 5.0000, 5.0000]]) - """ - num_bboxes, num_classes = deltas.size(0), deltas.size(1) // 4 - if num_bboxes == 0: - return deltas - - deltas = deltas.reshape(-1, 4) - - means = deltas.new_tensor(means).view(1, -1) - stds = deltas.new_tensor(stds).view(1, -1) - denorm_deltas = deltas * stds + means - - dxy = denorm_deltas[:, :2] - dwh = denorm_deltas[:, 2:] - - # Compute width/height of each roi - rois_ = rois.repeat(1, num_classes).reshape(-1, 4) - pxy = ((rois_[:, :2] + rois_[:, 2:]) * 0.5) - pwh = (rois_[:, 2:] - rois_[:, :2]) - - dxy_wh = pwh * dxy - - max_ratio = np.abs(np.log(wh_ratio_clip)) - if add_ctr_clamp: - dxy_wh = torch.clamp(dxy_wh, max=ctr_clamp, min=-ctr_clamp) - dwh = torch.clamp(dwh, max=max_ratio) - else: - dwh = dwh.clamp(min=-max_ratio, max=max_ratio) - - gxy = pxy + dxy_wh - gwh = pwh * dwh.exp() - x1y1 = gxy - (gwh * 0.5) - x2y2 = gxy + (gwh * 0.5) - bboxes = torch.cat([x1y1, x2y2], dim=-1) - if clip_border and max_shape is not None: - bboxes[..., 0::2].clamp_(min=0, max=max_shape[1]) - bboxes[..., 1::2].clamp_(min=0, max=max_shape[0]) - bboxes = bboxes.reshape(num_bboxes, -1) - return bboxes - - -def onnx_delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000, - clip_border=True, - add_ctr_clamp=False, - ctr_clamp=32): - """Apply deltas to shift/scale base boxes. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. - This is the inverse function of :func:`bbox2delta`. - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4) or (B, N, 4) - deltas (Tensor): Encoded offsets with respect to each roi. - Has shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H - when rois is a grid of anchors.Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates. - Default (0., 0., 0., 0.). - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates. Default (1., 1., 1., 1.). - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If rois shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. Default None. - wh_ratio_clip (float): Maximum aspect ratio for boxes. - Default 16 / 1000. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Default True. - add_ctr_clamp (bool): Whether to add center clamp, when added, the - predicted box is clamped is its center is too far away from - the original anchor's center. Only used by YOLOF. Default False. - ctr_clamp (int): the maximum pixel shift to clamp. Only used by YOLOF. - Default 32. - - Returns: - Tensor: Boxes with shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4), where 4 represent - tl_x, tl_y, br_x, br_y. - - References: - .. 
[1] https://arxiv.org/abs/1311.2524 - - Example: - >>> rois = torch.Tensor([[ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 5., 5., 5., 5.]]) - >>> deltas = torch.Tensor([[ 0., 0., 0., 0.], - >>> [ 1., 1., 1., 1.], - >>> [ 0., 0., 2., -1.], - >>> [ 0.7, -1.9, -0.5, 0.3]]) - >>> delta2bbox(rois, deltas, max_shape=(32, 32, 3)) - tensor([[0.0000, 0.0000, 1.0000, 1.0000], - [0.1409, 0.1409, 2.8591, 2.8591], - [0.0000, 0.3161, 4.1945, 0.6839], - [5.0000, 5.0000, 5.0000, 5.0000]]) - """ - means = deltas.new_tensor(means).view(1, - -1).repeat(1, - deltas.size(-1) // 4) - stds = deltas.new_tensor(stds).view(1, -1).repeat(1, deltas.size(-1) // 4) - denorm_deltas = deltas * stds + means - dx = denorm_deltas[..., 0::4] - dy = denorm_deltas[..., 1::4] - dw = denorm_deltas[..., 2::4] - dh = denorm_deltas[..., 3::4] - - x1, y1 = rois[..., 0], rois[..., 1] - x2, y2 = rois[..., 2], rois[..., 3] - # Compute center of each roi - px = ((x1 + x2) * 0.5).unsqueeze(-1).expand_as(dx) - py = ((y1 + y2) * 0.5).unsqueeze(-1).expand_as(dy) - # Compute width/height of each roi - pw = (x2 - x1).unsqueeze(-1).expand_as(dw) - ph = (y2 - y1).unsqueeze(-1).expand_as(dh) - - dx_width = pw * dx - dy_height = ph * dy - - max_ratio = np.abs(np.log(wh_ratio_clip)) - if add_ctr_clamp: - dx_width = torch.clamp(dx_width, max=ctr_clamp, min=-ctr_clamp) - dy_height = torch.clamp(dy_height, max=ctr_clamp, min=-ctr_clamp) - dw = torch.clamp(dw, max=max_ratio) - dh = torch.clamp(dh, max=max_ratio) - else: - dw = dw.clamp(min=-max_ratio, max=max_ratio) - dh = dh.clamp(min=-max_ratio, max=max_ratio) - # Use exp(network energy) to enlarge/shrink each roi - gw = pw * dw.exp() - gh = ph * dh.exp() - # Use network energy to shift the center of each roi - gx = px + dx_width - gy = py + dy_height - # Convert center-xy/width/height to top-left, bottom-right - x1 = gx - gw * 0.5 - y1 = gy - gh * 0.5 - x2 = gx + gw * 0.5 - y2 = gy + gh * 0.5 - - bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view(deltas.size()) - - if clip_border and max_shape is not None: - # clip bboxes with dynamic `min` and `max` for onnx - if torch.onnx.is_in_onnx_export(): - from mmdet.core.export import dynamic_clip_for_onnx - x1, y1, x2, y2 = dynamic_clip_for_onnx(x1, y1, x2, y2, max_shape) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view(deltas.size()) - return bboxes - if not isinstance(max_shape, torch.Tensor): - max_shape = x1.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(x1) - if max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = x1.new_tensor(0) - max_xy = torch.cat( - [max_shape] * (deltas.size(-1) // 2), - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/distance_point_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/distance_point_bbox_coder.py deleted file mode 100644 index 9f308a84..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/distance_point_bbox_coder.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import BBOX_CODERS -from ..transforms import bbox2distance, distance2bbox -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class DistancePointBBoxCoder(BaseBBoxCoder): - """Distance Point BBox coder. 
- - This coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, - right) and decode it back to the original. - - Args: - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - """ - - def __init__(self, clip_border=True): - super(BaseBBoxCoder, self).__init__() - self.clip_border = clip_border - - def encode(self, points, gt_bboxes, max_dis=None, eps=0.1): - """Encode bounding box to distances. - - Args: - points (Tensor): Shape (N, 2), The format is [x, y]. - gt_bboxes (Tensor): Shape (N, 4), The format is "xyxy" - max_dis (float): Upper bound of the distance. Default None. - eps (float): a small value to ensure target < max_dis, instead <=. - Default 0.1. - - Returns: - Tensor: Box transformation deltas. The shape is (N, 4). - """ - assert points.size(0) == gt_bboxes.size(0) - assert points.size(-1) == 2 - assert gt_bboxes.size(-1) == 4 - return bbox2distance(points, gt_bboxes, max_dis, eps) - - def decode(self, points, pred_bboxes, max_shape=None): - """Decode distance prediction to bounding box. - - Args: - points (Tensor): Shape (B, N, 2) or (N, 2). - pred_bboxes (Tensor): Distance from the given point to 4 - boundaries (left, top, right, bottom). Shape (B, N, 4) - or (N, 4) - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If priors shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]], - and the length of max_shape should also be B. - Default None. - Returns: - Tensor: Boxes with shape (N, 4) or (B, N, 4) - """ - assert points.size(0) == pred_bboxes.size(0) - assert points.size(-1) == 2 - assert pred_bboxes.size(-1) == 4 - if self.clip_border is False: - max_shape = None - return distance2bbox(points, pred_bboxes, max_shape) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py deleted file mode 100644 index 7fa348b2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py +++ /dev/null @@ -1,216 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class LegacyDeltaXYWHBBoxCoder(BaseBBoxCoder): - """Legacy Delta XYWH BBox coder used in MMDet V1.x. - - Following the practice in R-CNN [1]_, this coder encodes bbox (x1, y1, x2, - y2) into delta (dx, dy, dw, dh) and decodes delta (dx, dy, dw, dh) - back to original bbox (x1, y1, x2, y2). - - Note: - The main difference between :class`LegacyDeltaXYWHBBoxCoder` and - :class:`DeltaXYWHBBoxCoder` is whether ``+ 1`` is used during width and - height calculation. We suggest to only use this coder when testing with - MMDet V1.x models. - - References: - .. [1] https://arxiv.org/abs/1311.2524 - - Args: - target_means (Sequence[float]): denormalizing means of target for - delta coordinates - target_stds (Sequence[float]): denormalizing standard deviation of - target for delta coordinates - """ - - def __init__(self, - target_means=(0., 0., 0., 0.), - target_stds=(1., 1., 1., 1.)): - super(BaseBBoxCoder, self).__init__() - self.means = target_means - self.stds = target_stds - - def encode(self, bboxes, gt_bboxes): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes``. 
- - Args: - bboxes (torch.Tensor): source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): target of the transformation, e.g., - ground-truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = legacy_bbox2delta(bboxes, gt_bboxes, self.means, - self.stds) - return encoded_bboxes - - def decode(self, - bboxes, - pred_bboxes, - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - boxes (torch.Tensor): Basic boxes. - pred_bboxes (torch.Tensor): Encoded boxes with shape - max_shape (tuple[int], optional): Maximum shape of boxes. - Defaults to None. - wh_ratio_clip (float, optional): The allowed ratio between - width and height. - - Returns: - torch.Tensor: Decoded boxes. - """ - assert pred_bboxes.size(0) == bboxes.size(0) - decoded_bboxes = legacy_delta2bbox(bboxes, pred_bboxes, self.means, - self.stds, max_shape, wh_ratio_clip) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def legacy_bbox2delta(proposals, - gt, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.)): - """Compute deltas of proposals w.r.t. gt in the MMDet V1.x manner. - - We usually compute the deltas of x, y, w, h of proposals w.r.t ground - truth bboxes to get regression target. - This is the inverse function of `delta2bbox()` - - Args: - proposals (Tensor): Boxes to be transformed, shape (N, ..., 4) - gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4) - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - - Returns: - Tensor: deltas with shape (N, 4), where columns represent dx, dy, - dw, dh. - """ - assert proposals.size() == gt.size() - - proposals = proposals.float() - gt = gt.float() - px = (proposals[..., 0] + proposals[..., 2]) * 0.5 - py = (proposals[..., 1] + proposals[..., 3]) * 0.5 - pw = proposals[..., 2] - proposals[..., 0] + 1.0 - ph = proposals[..., 3] - proposals[..., 1] + 1.0 - - gx = (gt[..., 0] + gt[..., 2]) * 0.5 - gy = (gt[..., 1] + gt[..., 3]) * 0.5 - gw = gt[..., 2] - gt[..., 0] + 1.0 - gh = gt[..., 3] - gt[..., 1] + 1.0 - - dx = (gx - px) / pw - dy = (gy - py) / ph - dw = torch.log(gw / pw) - dh = torch.log(gh / ph) - deltas = torch.stack([dx, dy, dw, dh], dim=-1) - - means = deltas.new_tensor(means).unsqueeze(0) - stds = deltas.new_tensor(stds).unsqueeze(0) - deltas = deltas.sub_(means).div_(stds) - - return deltas - - -@mmcv.jit(coderize=True) -def legacy_delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply deltas to shift/scale base boxes in the MMDet V1.x manner. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. - This is the inverse function of `bbox2delta()` - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4) - deltas (Tensor): Encoded offsets with respect to each roi. - Has shape (N, 4 * num_classes). Note N = num_anchors * W * H when - rois is a grid of anchors. Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - max_shape (tuple[int, int]): Maximum bounds for boxes. specifies (H, W) - wh_ratio_clip (float): Maximum aspect ratio for boxes. 
- - Returns: - Tensor: Boxes with shape (N, 4), where columns represent - tl_x, tl_y, br_x, br_y. - - References: - .. [1] https://arxiv.org/abs/1311.2524 - - Example: - >>> rois = torch.Tensor([[ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 5., 5., 5., 5.]]) - >>> deltas = torch.Tensor([[ 0., 0., 0., 0.], - >>> [ 1., 1., 1., 1.], - >>> [ 0., 0., 2., -1.], - >>> [ 0.7, -1.9, -0.5, 0.3]]) - >>> legacy_delta2bbox(rois, deltas, max_shape=(32, 32)) - tensor([[0.0000, 0.0000, 1.5000, 1.5000], - [0.0000, 0.0000, 5.2183, 5.2183], - [0.0000, 0.1321, 7.8891, 0.8679], - [5.3967, 2.4251, 6.0033, 3.7749]]) - """ - means = deltas.new_tensor(means).repeat(1, deltas.size(1) // 4) - stds = deltas.new_tensor(stds).repeat(1, deltas.size(1) // 4) - denorm_deltas = deltas * stds + means - dx = denorm_deltas[:, 0::4] - dy = denorm_deltas[:, 1::4] - dw = denorm_deltas[:, 2::4] - dh = denorm_deltas[:, 3::4] - max_ratio = np.abs(np.log(wh_ratio_clip)) - dw = dw.clamp(min=-max_ratio, max=max_ratio) - dh = dh.clamp(min=-max_ratio, max=max_ratio) - # Compute center of each roi - px = ((rois[:, 0] + rois[:, 2]) * 0.5).unsqueeze(1).expand_as(dx) - py = ((rois[:, 1] + rois[:, 3]) * 0.5).unsqueeze(1).expand_as(dy) - # Compute width/height of each roi - pw = (rois[:, 2] - rois[:, 0] + 1.0).unsqueeze(1).expand_as(dw) - ph = (rois[:, 3] - rois[:, 1] + 1.0).unsqueeze(1).expand_as(dh) - # Use exp(network energy) to enlarge/shrink each roi - gw = pw * dw.exp() - gh = ph * dh.exp() - # Use network energy to shift the center of each roi - gx = px + pw * dx - gy = py + ph * dy - # Convert center-xy/width/height to top-left, bottom-right - - # The true legacy box coder should +- 0.5 here. - # However, current implementation improves the performance when testing - # the models trained in MMDetection 1.X (~0.5 bbox AP, 0.2 mask AP) - x1 = gx - gw * 0.5 - y1 = gy - gh * 0.5 - x2 = gx + gw * 0.5 - y2 = gy + gh * 0.5 - if max_shape is not None: - x1 = x1.clamp(min=0, max=max_shape[1] - 1) - y1 = y1.clamp(min=0, max=max_shape[0] - 1) - x2 = x2.clamp(min=0, max=max_shape[1] - 1) - y2 = y2.clamp(min=0, max=max_shape[0] - 1) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view_as(deltas) - return bboxes diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/pseudo_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/pseudo_bbox_coder.py deleted file mode 100644 index fe71f369..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/pseudo_bbox_coder.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class PseudoBBoxCoder(BaseBBoxCoder): - """Pseudo bounding box coder.""" - - def __init__(self, **kwargs): - super(BaseBBoxCoder, self).__init__(**kwargs) - - def encode(self, bboxes, gt_bboxes): - """torch.Tensor: return the given ``bboxes``""" - return gt_bboxes - - def decode(self, bboxes, pred_bboxes): - """torch.Tensor: return the given ``pred_bboxes``""" - return pred_bboxes diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/tblr_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/tblr_bbox_coder.py deleted file mode 100644 index cb420663..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/tblr_bbox_coder.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import mmcv -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class TBLRBBoxCoder(BaseBBoxCoder): - """TBLR BBox coder. - - Following the practice in `FSAF `_, - this coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, - right) and decode it back to the original. - - Args: - normalizer (list | float): Normalization factor to be - divided with when coding the coordinates. If it is a list, it should - have length of 4 indicating normalization factor in tblr dims. - Otherwise it is a unified float factor for all dims. Default: 4.0 - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - """ - - def __init__(self, normalizer=4.0, clip_border=True): - super(BaseBBoxCoder, self).__init__() - self.normalizer = normalizer - self.clip_border = clip_border - - def encode(self, bboxes, gt_bboxes): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes`` in the (top, left, - bottom, right) order. - - Args: - bboxes (torch.Tensor): source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): target of the transformation, e.g., - ground truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bboxes2tblr( - bboxes, gt_bboxes, normalizer=self.normalizer) - return encoded_bboxes - - def decode(self, bboxes, pred_bboxes, max_shape=None): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - bboxes (torch.Tensor): Basic boxes.Shape (B, N, 4) or (N, 4) - pred_bboxes (torch.Tensor): Encoded boxes with shape - (B, N, 4) or (N, 4) - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - - Returns: - torch.Tensor: Decoded boxes. - """ - decoded_bboxes = tblr2bboxes( - bboxes, - pred_bboxes, - normalizer=self.normalizer, - max_shape=max_shape, - clip_border=self.clip_border) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def bboxes2tblr(priors, gts, normalizer=4.0, normalize_by_wh=True): - """Encode ground truth boxes to tblr coordinate. - - It first convert the gt coordinate to tblr format, - (top, bottom, left, right), relative to prior box centers. - The tblr coordinate may be normalized by the side length of prior bboxes - if `normalize_by_wh` is specified as True, and it is then normalized by - the `normalizer` factor. - - Args: - priors (Tensor): Prior boxes in point form - Shape: (num_proposals,4). - gts (Tensor): Coords of ground truth for each prior in point-form - Shape: (num_proposals, 4). - normalizer (Sequence[float] | float): normalization parameter of - encoded boxes. If it is a list, it has to have length = 4. - Default: 4.0 - normalize_by_wh (bool): Whether to normalize tblr coordinate by the - side length (wh) of prior bboxes. 
- - Return: - encoded boxes (Tensor), Shape: (num_proposals, 4) - """ - - # dist b/t match center and prior's center - if not isinstance(normalizer, float): - normalizer = torch.tensor(normalizer, device=priors.device) - assert len(normalizer) == 4, 'Normalizer must have length = 4' - assert priors.size(0) == gts.size(0) - prior_centers = (priors[:, 0:2] + priors[:, 2:4]) / 2 - xmin, ymin, xmax, ymax = gts.split(1, dim=1) - top = prior_centers[:, 1].unsqueeze(1) - ymin - bottom = ymax - prior_centers[:, 1].unsqueeze(1) - left = prior_centers[:, 0].unsqueeze(1) - xmin - right = xmax - prior_centers[:, 0].unsqueeze(1) - loc = torch.cat((top, bottom, left, right), dim=1) - if normalize_by_wh: - # Normalize tblr by anchor width and height - wh = priors[:, 2:4] - priors[:, 0:2] - w, h = torch.split(wh, 1, dim=1) - loc[:, :2] /= h # tb is normalized by h - loc[:, 2:] /= w # lr is normalized by w - # Normalize tblr by the given normalization factor - return loc / normalizer - - -@mmcv.jit(coderize=True) -def tblr2bboxes(priors, - tblr, - normalizer=4.0, - normalize_by_wh=True, - max_shape=None, - clip_border=True): - """Decode tblr outputs to prediction boxes. - - The process includes 3 steps: 1) De-normalize tblr coordinates by - multiplying it with `normalizer`; 2) De-normalize tblr coordinates by the - prior bbox width and height if `normalize_by_wh` is `True`; 3) Convert - tblr (top, bottom, left, right) pair relative to the center of priors back - to (xmin, ymin, xmax, ymax) coordinate. - - Args: - priors (Tensor): Prior boxes in point form (x0, y0, x1, y1) - Shape: (N,4) or (B, N, 4). - tblr (Tensor): Coords of network output in tblr form - Shape: (N, 4) or (B, N, 4). - normalizer (Sequence[float] | float): Normalization parameter of - encoded boxes. By list, it represents the normalization factors at - tblr dims. By float, it is the unified normalization factor at all - dims. Default: 4.0 - normalize_by_wh (bool): Whether the tblr coordinates have been - normalized by the side length (wh) of prior bboxes. - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If priors shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. 
- - Return: - encoded boxes (Tensor): Boxes with shape (N, 4) or (B, N, 4) - """ - if not isinstance(normalizer, float): - normalizer = torch.tensor(normalizer, device=priors.device) - assert len(normalizer) == 4, 'Normalizer must have length = 4' - assert priors.size(0) == tblr.size(0) - if priors.ndim == 3: - assert priors.size(1) == tblr.size(1) - - loc_decode = tblr * normalizer - prior_centers = (priors[..., 0:2] + priors[..., 2:4]) / 2 - if normalize_by_wh: - wh = priors[..., 2:4] - priors[..., 0:2] - w, h = torch.split(wh, 1, dim=-1) - # Inplace operation with slice would failed for exporting to ONNX - th = h * loc_decode[..., :2] # tb - tw = w * loc_decode[..., 2:] # lr - loc_decode = torch.cat([th, tw], dim=-1) - # Cannot be exported using onnx when loc_decode.split(1, dim=-1) - top, bottom, left, right = loc_decode.split((1, 1, 1, 1), dim=-1) - xmin = prior_centers[..., 0].unsqueeze(-1) - left - xmax = prior_centers[..., 0].unsqueeze(-1) + right - ymin = prior_centers[..., 1].unsqueeze(-1) - top - ymax = prior_centers[..., 1].unsqueeze(-1) + bottom - - bboxes = torch.cat((xmin, ymin, xmax, ymax), dim=-1) - - if clip_border and max_shape is not None: - # clip bboxes with dynamic `min` and `max` for onnx - if torch.onnx.is_in_onnx_export(): - from mmdet.core.export import dynamic_clip_for_onnx - xmin, ymin, xmax, ymax = dynamic_clip_for_onnx( - xmin, ymin, xmax, ymax, max_shape) - bboxes = torch.cat([xmin, ymin, xmax, ymax], dim=-1) - return bboxes - if not isinstance(max_shape, torch.Tensor): - max_shape = priors.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(priors) - if max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = priors.new_tensor(0) - max_xy = torch.cat([max_shape, max_shape], - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/yolo_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/yolo_bbox_coder.py deleted file mode 100644 index 2852eca7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/coder/yolo_bbox_coder.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class YOLOBBoxCoder(BaseBBoxCoder): - """YOLO BBox coder. - - Following `YOLO `_, this coder divide - image into grids, and encode bbox (x1, y1, x2, y2) into (cx, cy, dw, dh). - cx, cy in [0., 1.], denotes relative center position w.r.t the center of - bboxes. dw, dh are the same as :obj:`DeltaXYWHBBoxCoder`. - - Args: - eps (float): Min value of cx, cy when encoding. - """ - - def __init__(self, eps=1e-6): - super(BaseBBoxCoder, self).__init__() - self.eps = eps - - @mmcv.jit(coderize=True) - def encode(self, bboxes, gt_bboxes, stride): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes``. - - Args: - bboxes (torch.Tensor): Source boxes, e.g., anchors. - gt_bboxes (torch.Tensor): Target of the transformation, e.g., - ground-truth boxes. - stride (torch.Tensor | int): Stride of bboxes. 
- - Returns: - torch.Tensor: Box transformation deltas - """ - - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - x_center_gt = (gt_bboxes[..., 0] + gt_bboxes[..., 2]) * 0.5 - y_center_gt = (gt_bboxes[..., 1] + gt_bboxes[..., 3]) * 0.5 - w_gt = gt_bboxes[..., 2] - gt_bboxes[..., 0] - h_gt = gt_bboxes[..., 3] - gt_bboxes[..., 1] - x_center = (bboxes[..., 0] + bboxes[..., 2]) * 0.5 - y_center = (bboxes[..., 1] + bboxes[..., 3]) * 0.5 - w = bboxes[..., 2] - bboxes[..., 0] - h = bboxes[..., 3] - bboxes[..., 1] - w_target = torch.log((w_gt / w).clamp(min=self.eps)) - h_target = torch.log((h_gt / h).clamp(min=self.eps)) - x_center_target = ((x_center_gt - x_center) / stride + 0.5).clamp( - self.eps, 1 - self.eps) - y_center_target = ((y_center_gt - y_center) / stride + 0.5).clamp( - self.eps, 1 - self.eps) - encoded_bboxes = torch.stack( - [x_center_target, y_center_target, w_target, h_target], dim=-1) - return encoded_bboxes - - @mmcv.jit(coderize=True) - def decode(self, bboxes, pred_bboxes, stride): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - boxes (torch.Tensor): Basic boxes, e.g. anchors. - pred_bboxes (torch.Tensor): Encoded boxes with shape - stride (torch.Tensor | int): Strides of bboxes. - - Returns: - torch.Tensor: Decoded boxes. - """ - assert pred_bboxes.size(-1) == bboxes.size(-1) == 4 - xy_centers = (bboxes[..., :2] + bboxes[..., 2:]) * 0.5 + ( - pred_bboxes[..., :2] - 0.5) * stride - whs = (bboxes[..., 2:] - - bboxes[..., :2]) * 0.5 * pred_bboxes[..., 2:].exp() - decoded_bboxes = torch.stack( - (xy_centers[..., 0] - whs[..., 0], xy_centers[..., 1] - - whs[..., 1], xy_centers[..., 0] + whs[..., 0], - xy_centers[..., 1] + whs[..., 1]), - dim=-1) - return decoded_bboxes diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/demodata.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/demodata.py deleted file mode 100644 index eb24b34b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/demodata.py +++ /dev/null @@ -1,42 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.utils.util_random import ensure_rng - - -def random_boxes(num=1, scale=1, rng=None): - """Simple version of ``kwimage.Boxes.random`` - - Returns: - Tensor: shape (n, 4) in x1, y1, x2, y2 format. - - References: - https://gitlab.kitware.com/computer-vision/kwimage/blob/master/kwimage/structs/boxes.py#L1390 - - Example: - >>> num = 3 - >>> scale = 512 - >>> rng = 0 - >>> boxes = random_boxes(num, scale, rng) - >>> print(boxes) - tensor([[280.9925, 278.9802, 308.6148, 366.1769], - [216.9113, 330.6978, 224.0446, 456.5878], - [405.3632, 196.3221, 493.3953, 270.7942]]) - """ - rng = ensure_rng(rng) - - tlbr = rng.rand(num, 4).astype(np.float32) - - tl_x = np.minimum(tlbr[:, 0], tlbr[:, 2]) - tl_y = np.minimum(tlbr[:, 1], tlbr[:, 3]) - br_x = np.maximum(tlbr[:, 0], tlbr[:, 2]) - br_y = np.maximum(tlbr[:, 1], tlbr[:, 3]) - - tlbr[:, 0] = tl_x * scale - tlbr[:, 1] = tl_y * scale - tlbr[:, 2] = br_x * scale - tlbr[:, 3] = br_y * scale - - boxes = torch.from_numpy(tlbr) - return boxes diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/iou_calculators/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/iou_calculators/__init__.py deleted file mode 100644 index 04ba925b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/iou_calculators/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .builder import build_iou_calculator -from .iou2d_calculator import BboxOverlaps2D, bbox_overlaps - -__all__ = ['build_iou_calculator', 'BboxOverlaps2D', 'bbox_overlaps'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/iou_calculators/builder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/iou_calculators/builder.py deleted file mode 100644 index 378ee269..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/iou_calculators/builder.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry, build_from_cfg - -IOU_CALCULATORS = Registry('IoU calculator') - - -def build_iou_calculator(cfg, default_args=None): - """Builder of IoU calculator.""" - return build_from_cfg(cfg, IOU_CALCULATORS, default_args) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/iou_calculators/iou2d_calculator.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/iou_calculators/iou2d_calculator.py deleted file mode 100644 index 4656d619..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/iou_calculators/iou2d_calculator.py +++ /dev/null @@ -1,261 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from .builder import IOU_CALCULATORS - - -def cast_tensor_type(x, scale=1., dtype=None): - if dtype == 'fp16': - # scale is for preventing overflows - x = (x / scale).half() - return x - - -def fp16_clamp(x, min=None, max=None): - if not x.is_cuda and x.dtype == torch.float16: - # clamp for cpu float16, tensor fp16 has no clamp implementation - return x.float().clamp(min, max).half() - - return x.clamp(min, max) - - -@IOU_CALCULATORS.register_module() -class BboxOverlaps2D: - """2D Overlaps (e.g. IoUs, GIoUs) Calculator.""" - - def __init__(self, scale=1., dtype=None): - self.scale = scale - self.dtype = dtype - - def __call__(self, bboxes1, bboxes2, mode='iou', is_aligned=False): - """Calculate IoU between 2D bboxes. - - Args: - bboxes1 (Tensor): bboxes have shape (m, 4) in - format, or shape (m, 5) in format. - bboxes2 (Tensor): bboxes have shape (m, 4) in - format, shape (m, 5) in format, or be - empty. If ``is_aligned `` is ``True``, then m and n must be - equal. - mode (str): "iou" (intersection over union), "iof" (intersection - over foreground), or "giou" (generalized intersection over - union). - is_aligned (bool, optional): If True, then m and n must be equal. - Default False. - - Returns: - Tensor: shape (m, n) if ``is_aligned `` is False else shape (m,) - """ - assert bboxes1.size(-1) in [0, 4, 5] - assert bboxes2.size(-1) in [0, 4, 5] - if bboxes2.size(-1) == 5: - bboxes2 = bboxes2[..., :4] - if bboxes1.size(-1) == 5: - bboxes1 = bboxes1[..., :4] - - if self.dtype == 'fp16': - # change tensor type to save cpu and cuda memory and keep speed - bboxes1 = cast_tensor_type(bboxes1, self.scale, self.dtype) - bboxes2 = cast_tensor_type(bboxes2, self.scale, self.dtype) - overlaps = bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) - if not overlaps.is_cuda and overlaps.dtype == torch.float16: - # resume cpu float32 - overlaps = overlaps.float() - return overlaps - - return bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) - - def __repr__(self): - """str: a string describing the module""" - repr_str = self.__class__.__name__ + f'(' \ - f'scale={self.scale}, dtype={self.dtype})' - return repr_str - - -def bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False, eps=1e-6): - """Calculate overlap between two set of bboxes. 
- - FP16 Contributed by https://github.com/open-mmlab/mmdetection/pull/4889 - Note: - Assume bboxes1 is M x 4, bboxes2 is N x 4, when mode is 'iou', - there are some new generated variable when calculating IOU - using bbox_overlaps function: - - 1) is_aligned is False - area1: M x 1 - area2: N x 1 - lt: M x N x 2 - rb: M x N x 2 - wh: M x N x 2 - overlap: M x N x 1 - union: M x N x 1 - ious: M x N x 1 - - Total memory: - S = (9 x N x M + N + M) * 4 Byte, - - When using FP16, we can reduce: - R = (9 x N x M + N + M) * 4 / 2 Byte - R large than (N + M) * 4 * 2 is always true when N and M >= 1. - Obviously, N + M <= N * M < 3 * N * M, when N >=2 and M >=2, - N + 1 < 3 * N, when N or M is 1. - - Given M = 40 (ground truth), N = 400000 (three anchor boxes - in per grid, FPN, R-CNNs), - R = 275 MB (one times) - - A special case (dense detection), M = 512 (ground truth), - R = 3516 MB = 3.43 GB - - When the batch size is B, reduce: - B x R - - Therefore, CUDA memory runs out frequently. - - Experiments on GeForce RTX 2080Ti (11019 MiB): - - | dtype | M | N | Use | Real | Ideal | - |:----:|:----:|:----:|:----:|:----:|:----:| - | FP32 | 512 | 400000 | 8020 MiB | -- | -- | - | FP16 | 512 | 400000 | 4504 MiB | 3516 MiB | 3516 MiB | - | FP32 | 40 | 400000 | 1540 MiB | -- | -- | - | FP16 | 40 | 400000 | 1264 MiB | 276MiB | 275 MiB | - - 2) is_aligned is True - area1: N x 1 - area2: N x 1 - lt: N x 2 - rb: N x 2 - wh: N x 2 - overlap: N x 1 - union: N x 1 - ious: N x 1 - - Total memory: - S = 11 x N * 4 Byte - - When using FP16, we can reduce: - R = 11 x N * 4 / 2 Byte - - So do the 'giou' (large than 'iou'). - - Time-wise, FP16 is generally faster than FP32. - - When gpu_assign_thr is not -1, it takes more time on cpu - but not reduce memory. - There, we can reduce half the memory and keep the speed. - - If ``is_aligned`` is ``False``, then calculate the overlaps between each - bbox of bboxes1 and bboxes2, otherwise the overlaps between each aligned - pair of bboxes1 and bboxes2. - - Args: - bboxes1 (Tensor): shape (B, m, 4) in format or empty. - bboxes2 (Tensor): shape (B, n, 4) in format or empty. - B indicates the batch dim, in shape (B1, B2, ..., Bn). - If ``is_aligned`` is ``True``, then m and n must be equal. - mode (str): "iou" (intersection over union), "iof" (intersection over - foreground) or "giou" (generalized intersection over union). - Default "iou". - is_aligned (bool, optional): If True, then m and n must be equal. - Default False. - eps (float, optional): A value added to the denominator for numerical - stability. Default 1e-6. 
- - Returns: - Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,) - - Example: - >>> bboxes1 = torch.FloatTensor([ - >>> [0, 0, 10, 10], - >>> [10, 10, 20, 20], - >>> [32, 32, 38, 42], - >>> ]) - >>> bboxes2 = torch.FloatTensor([ - >>> [0, 0, 10, 20], - >>> [0, 10, 10, 19], - >>> [10, 10, 20, 20], - >>> ]) - >>> overlaps = bbox_overlaps(bboxes1, bboxes2) - >>> assert overlaps.shape == (3, 3) - >>> overlaps = bbox_overlaps(bboxes1, bboxes2, is_aligned=True) - >>> assert overlaps.shape == (3, ) - - Example: - >>> empty = torch.empty(0, 4) - >>> nonempty = torch.FloatTensor([[0, 0, 10, 9]]) - >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1) - >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0) - >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0) - """ - - assert mode in ['iou', 'iof', 'giou'], f'Unsupported mode {mode}' - # Either the boxes are empty or the length of boxes' last dimension is 4 - assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0) - assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0) - - # Batch dim must be the same - # Batch dim: (B1, B2, ... Bn) - assert bboxes1.shape[:-2] == bboxes2.shape[:-2] - batch_shape = bboxes1.shape[:-2] - - rows = bboxes1.size(-2) - cols = bboxes2.size(-2) - if is_aligned: - assert rows == cols - - if rows * cols == 0: - if is_aligned: - return bboxes1.new(batch_shape + (rows, )) - else: - return bboxes1.new(batch_shape + (rows, cols)) - - area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * ( - bboxes1[..., 3] - bboxes1[..., 1]) - area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * ( - bboxes2[..., 3] - bboxes2[..., 1]) - - if is_aligned: - lt = torch.max(bboxes1[..., :2], bboxes2[..., :2]) # [B, rows, 2] - rb = torch.min(bboxes1[..., 2:], bboxes2[..., 2:]) # [B, rows, 2] - - wh = fp16_clamp(rb - lt, min=0) - overlap = wh[..., 0] * wh[..., 1] - - if mode in ['iou', 'giou']: - union = area1 + area2 - overlap - else: - union = area1 - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :2], bboxes2[..., :2]) - enclosed_rb = torch.max(bboxes1[..., 2:], bboxes2[..., 2:]) - else: - lt = torch.max(bboxes1[..., :, None, :2], - bboxes2[..., None, :, :2]) # [B, rows, cols, 2] - rb = torch.min(bboxes1[..., :, None, 2:], - bboxes2[..., None, :, 2:]) # [B, rows, cols, 2] - - wh = fp16_clamp(rb - lt, min=0) - overlap = wh[..., 0] * wh[..., 1] - - if mode in ['iou', 'giou']: - union = area1[..., None] + area2[..., None, :] - overlap - else: - union = area1[..., None] - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :, None, :2], - bboxes2[..., None, :, :2]) - enclosed_rb = torch.max(bboxes1[..., :, None, 2:], - bboxes2[..., None, :, 2:]) - - eps = union.new_tensor([eps]) - union = torch.max(union, eps) - ious = overlap / union - if mode in ['iou', 'iof']: - return ious - # calculate gious - enclose_wh = fp16_clamp(enclosed_rb - enclosed_lt, min=0) - enclose_area = enclose_wh[..., 0] * enclose_wh[..., 1] - enclose_area = torch.max(enclose_area, eps) - gious = ious - (enclose_area - union) / enclose_area - return gious diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/match_costs/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/match_costs/__init__.py deleted file mode 100644 index 1b636795..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/match_costs/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .builder import build_match_cost -from .match_cost import (BBoxL1Cost, ClassificationCost, CrossEntropyLossCost, - DiceCost, FocalLossCost, IoUCost) - -__all__ = [ - 'build_match_cost', 'ClassificationCost', 'BBoxL1Cost', 'IoUCost', - 'FocalLossCost', 'DiceCost', 'CrossEntropyLossCost' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/match_costs/builder.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/match_costs/builder.py deleted file mode 100644 index ea086adf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/match_costs/builder.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry, build_from_cfg - -MATCH_COST = Registry('Match Cost') - - -def build_match_cost(cfg, default_args=None): - """Builder of IoU calculator.""" - return build_from_cfg(cfg, MATCH_COST, default_args) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/match_costs/match_cost.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/match_costs/match_cost.py deleted file mode 100644 index 4342b024..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/match_costs/match_cost.py +++ /dev/null @@ -1,359 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F - -from mmdet.core.bbox.iou_calculators import bbox_overlaps -from mmdet.core.bbox.transforms import bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh -from .builder import MATCH_COST - - -@MATCH_COST.register_module() -class BBoxL1Cost: - """BBoxL1Cost. - - Args: - weight (int | float, optional): loss_weight - box_format (str, optional): 'xyxy' for DETR, 'xywh' for Sparse_RCNN - - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import BBoxL1Cost - >>> import torch - >>> self = BBoxL1Cost() - >>> bbox_pred = torch.rand(1, 4) - >>> gt_bboxes= torch.FloatTensor([[0, 0, 2, 4], [1, 2, 3, 4]]) - >>> factor = torch.tensor([10, 8, 10, 8]) - >>> self(bbox_pred, gt_bboxes, factor) - tensor([[1.6172, 1.6422]]) - """ - - def __init__(self, weight=1., box_format='xyxy'): - self.weight = weight - assert box_format in ['xyxy', 'xywh'] - self.box_format = box_format - - def __call__(self, bbox_pred, gt_bboxes): - """ - Args: - bbox_pred (Tensor): Predicted boxes with normalized coordinates - (cx, cy, w, h), which are all in range [0, 1]. Shape - (num_query, 4). - gt_bboxes (Tensor): Ground truth boxes with normalized - coordinates (x1, y1, x2, y2). Shape (num_gt, 4). - - Returns: - torch.Tensor: bbox_cost value with weight - """ - if self.box_format == 'xywh': - gt_bboxes = bbox_xyxy_to_cxcywh(gt_bboxes) - elif self.box_format == 'xyxy': - bbox_pred = bbox_cxcywh_to_xyxy(bbox_pred) - bbox_cost = torch.cdist(bbox_pred, gt_bboxes, p=1) - return bbox_cost * self.weight - - -@MATCH_COST.register_module() -class FocalLossCost: - """FocalLossCost. - - Args: - weight (int | float, optional): loss_weight - alpha (int | float, optional): focal_loss alpha - gamma (int | float, optional): focal_loss gamma - eps (float, optional): default 1e-12 - binary_input (bool, optional): Whether the input is binary, - default False. 
- - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import FocalLossCost - >>> import torch - >>> self = FocalLossCost() - >>> cls_pred = torch.rand(4, 3) - >>> gt_labels = torch.tensor([0, 1, 2]) - >>> factor = torch.tensor([10, 8, 10, 8]) - >>> self(cls_pred, gt_labels) - tensor([[-0.3236, -0.3364, -0.2699], - [-0.3439, -0.3209, -0.4807], - [-0.4099, -0.3795, -0.2929], - [-0.1950, -0.1207, -0.2626]]) - """ - - def __init__(self, - weight=1., - alpha=0.25, - gamma=2, - eps=1e-12, - binary_input=False): - self.weight = weight - self.alpha = alpha - self.gamma = gamma - self.eps = eps - self.binary_input = binary_input - - def _focal_loss_cost(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classification logits, shape - (num_query, num_class). - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - - Returns: - torch.Tensor: cls_cost value with weight - """ - cls_pred = cls_pred.sigmoid() - neg_cost = -(1 - cls_pred + self.eps).log() * ( - 1 - self.alpha) * cls_pred.pow(self.gamma) - pos_cost = -(cls_pred + self.eps).log() * self.alpha * ( - 1 - cls_pred).pow(self.gamma) - - cls_cost = pos_cost[:, gt_labels] - neg_cost[:, gt_labels] - return cls_cost * self.weight - - def _mask_focal_loss_cost(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classfication logits - in shape (num_query, d1, ..., dn), dtype=torch.float32. - gt_labels (Tensor): Ground truth in shape (num_gt, d1, ..., dn), - dtype=torch.long. Labels should be binary. - - Returns: - Tensor: Focal cost matrix with weight in shape\ - (num_query, num_gt). - """ - cls_pred = cls_pred.flatten(1) - gt_labels = gt_labels.flatten(1).float() - n = cls_pred.shape[1] - cls_pred = cls_pred.sigmoid() - neg_cost = -(1 - cls_pred + self.eps).log() * ( - 1 - self.alpha) * cls_pred.pow(self.gamma) - pos_cost = -(cls_pred + self.eps).log() * self.alpha * ( - 1 - cls_pred).pow(self.gamma) - - cls_cost = torch.einsum('nc,mc->nm', pos_cost, gt_labels) + \ - torch.einsum('nc,mc->nm', neg_cost, (1 - gt_labels)) - return cls_cost / n * self.weight - - def __call__(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classfication logits. - gt_labels (Tensor)): Labels. - - Returns: - Tensor: Focal cost matrix with weight in shape\ - (num_query, num_gt). - """ - if self.binary_input: - return self._mask_focal_loss_cost(cls_pred, gt_labels) - else: - return self._focal_loss_cost(cls_pred, gt_labels) - - -@MATCH_COST.register_module() -class ClassificationCost: - """ClsSoftmaxCost. - - Args: - weight (int | float, optional): loss_weight - - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import \ - ... ClassificationCost - >>> import torch - >>> self = ClassificationCost() - >>> cls_pred = torch.rand(4, 3) - >>> gt_labels = torch.tensor([0, 1, 2]) - >>> factor = torch.tensor([10, 8, 10, 8]) - >>> self(cls_pred, gt_labels) - tensor([[-0.3430, -0.3525, -0.3045], - [-0.3077, -0.2931, -0.3992], - [-0.3664, -0.3455, -0.2881], - [-0.3343, -0.2701, -0.3956]]) - """ - - def __init__(self, weight=1.): - self.weight = weight - - def __call__(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classification logits, shape - (num_query, num_class). - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - - Returns: - torch.Tensor: cls_cost value with weight - """ - # Following the official DETR repo, contrary to the loss that - # NLL is used, we approximate it in 1 - cls_score[gt_label]. 
- # The 1 is a constant that doesn't change the matching, - # so it can be omitted. - cls_score = cls_pred.softmax(-1) - cls_cost = -cls_score[:, gt_labels] - return cls_cost * self.weight - - -@MATCH_COST.register_module() -class IoUCost: - """IoUCost. - - Args: - iou_mode (str, optional): iou mode such as 'iou' | 'giou' - weight (int | float, optional): loss weight - - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import IoUCost - >>> import torch - >>> self = IoUCost() - >>> bboxes = torch.FloatTensor([[1,1, 2, 2], [2, 2, 3, 4]]) - >>> gt_bboxes = torch.FloatTensor([[0, 0, 2, 4], [1, 2, 3, 4]]) - >>> self(bboxes, gt_bboxes) - tensor([[-0.1250, 0.1667], - [ 0.1667, -0.5000]]) - """ - - def __init__(self, iou_mode='giou', weight=1.): - self.weight = weight - self.iou_mode = iou_mode - - def __call__(self, bboxes, gt_bboxes): - """ - Args: - bboxes (Tensor): Predicted boxes with unnormalized coordinates - (x1, y1, x2, y2). Shape (num_query, 4). - gt_bboxes (Tensor): Ground truth boxes with unnormalized - coordinates (x1, y1, x2, y2). Shape (num_gt, 4). - - Returns: - torch.Tensor: iou_cost value with weight - """ - # overlaps: [num_bboxes, num_gt] - overlaps = bbox_overlaps( - bboxes, gt_bboxes, mode=self.iou_mode, is_aligned=False) - # The 1 is a constant that doesn't change the matching, so omitted. - iou_cost = -overlaps - return iou_cost * self.weight - - -@MATCH_COST.register_module() -class DiceCost: - """Cost of mask assignments based on dice losses. - - Args: - weight (int | float, optional): loss_weight. Defaults to 1. - pred_act (bool, optional): Whether to apply sigmoid to mask_pred. - Defaults to False. - eps (float, optional): default 1e-12. - naive_dice (bool, optional): If True, use the naive dice loss - in which the power of the number in the denominator is - the first power. If Flase, use the second power that - is adopted by K-Net and SOLO. - Defaults to True. - """ - - def __init__(self, weight=1., pred_act=False, eps=1e-3, naive_dice=True): - self.weight = weight - self.pred_act = pred_act - self.eps = eps - self.naive_dice = naive_dice - - def binary_mask_dice_loss(self, mask_preds, gt_masks): - """ - Args: - mask_preds (Tensor): Mask prediction in shape (num_query, *). - gt_masks (Tensor): Ground truth in shape (num_gt, *) - store 0 or 1, 0 for negative class and 1 for - positive class. - - Returns: - Tensor: Dice cost matrix in shape (num_query, num_gt). - """ - mask_preds = mask_preds.flatten(1) - gt_masks = gt_masks.flatten(1).float() - numerator = 2 * torch.einsum('nc,mc->nm', mask_preds, gt_masks) - if self.naive_dice: - denominator = mask_preds.sum(-1)[:, None] + \ - gt_masks.sum(-1)[None, :] - else: - denominator = mask_preds.pow(2).sum(1)[:, None] + \ - gt_masks.pow(2).sum(1)[None, :] - loss = 1 - (numerator + self.eps) / (denominator + self.eps) - return loss - - def __call__(self, mask_preds, gt_masks): - """ - Args: - mask_preds (Tensor): Mask prediction logits in shape (num_query, *) - gt_masks (Tensor): Ground truth in shape (num_gt, *) - - Returns: - Tensor: Dice cost matrix with weight in shape (num_query, num_gt). - """ - if self.pred_act: - mask_preds = mask_preds.sigmoid() - dice_cost = self.binary_mask_dice_loss(mask_preds, gt_masks) - return dice_cost * self.weight - - -@MATCH_COST.register_module() -class CrossEntropyLossCost: - """CrossEntropyLossCost. - - Args: - weight (int | float, optional): loss weight. Defaults to 1. - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Defaults to True. 
- Examples: - >>> from mmdet.core.bbox.match_costs import CrossEntropyLossCost - >>> import torch - >>> bce = CrossEntropyLossCost(use_sigmoid=True) - >>> cls_pred = torch.tensor([[7.6, 1.2], [-1.3, 10]]) - >>> gt_labels = torch.tensor([[1, 1], [1, 0]]) - >>> print(bce(cls_pred, gt_labels)) - """ - - def __init__(self, weight=1., use_sigmoid=True): - assert use_sigmoid, 'use_sigmoid = False is not supported yet.' - self.weight = weight - self.use_sigmoid = use_sigmoid - - def _binary_cross_entropy(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): The prediction with shape (num_query, 1, *) or - (num_query, *). - gt_labels (Tensor): The learning label of prediction with - shape (num_gt, *). - - Returns: - Tensor: Cross entropy cost matrix in shape (num_query, num_gt). - """ - cls_pred = cls_pred.flatten(1).float() - gt_labels = gt_labels.flatten(1).float() - n = cls_pred.shape[1] - pos = F.binary_cross_entropy_with_logits( - cls_pred, torch.ones_like(cls_pred), reduction='none') - neg = F.binary_cross_entropy_with_logits( - cls_pred, torch.zeros_like(cls_pred), reduction='none') - cls_cost = torch.einsum('nc,mc->nm', pos, gt_labels) + \ - torch.einsum('nc,mc->nm', neg, 1 - gt_labels) - cls_cost = cls_cost / n - - return cls_cost - - def __call__(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classification logits. - gt_labels (Tensor): Labels. - - Returns: - Tensor: Cross entropy cost matrix with weight in - shape (num_query, num_gt). - """ - if self.use_sigmoid: - cls_cost = self._binary_cross_entropy(cls_pred, gt_labels) - else: - raise NotImplementedError - - return cls_cost * self.weight diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/__init__.py deleted file mode 100644 index f58505b5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_sampler import BaseSampler -from .combined_sampler import CombinedSampler -from .instance_balanced_pos_sampler import InstanceBalancedPosSampler -from .iou_balanced_neg_sampler import IoUBalancedNegSampler -from .mask_pseudo_sampler import MaskPseudoSampler -from .mask_sampling_result import MaskSamplingResult -from .ohem_sampler import OHEMSampler -from .pseudo_sampler import PseudoSampler -from .random_sampler import RandomSampler -from .sampling_result import SamplingResult -from .score_hlr_sampler import ScoreHLRSampler - -__all__ = [ - 'BaseSampler', 'PseudoSampler', 'RandomSampler', - 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler', - 'OHEMSampler', 'SamplingResult', 'ScoreHLRSampler', 'MaskPseudoSampler', - 'MaskSamplingResult' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/base_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/base_sampler.py deleted file mode 100644 index bd15c7c6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/base_sampler.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from abc import ABCMeta, abstractmethod - -import torch - -from .sampling_result import SamplingResult - - -class BaseSampler(metaclass=ABCMeta): - """Base class of samplers.""" - - def __init__(self, - num, - pos_fraction, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - self.num = num - self.pos_fraction = pos_fraction - self.neg_pos_ub = neg_pos_ub - self.add_gt_as_proposals = add_gt_as_proposals - self.pos_sampler = self - self.neg_sampler = self - - @abstractmethod - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Sample positive samples.""" - pass - - @abstractmethod - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Sample negative samples.""" - pass - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - bboxes (Tensor): Boxes to be sampled from. - gt_bboxes (Tensor): Ground truth bboxes. - gt_labels (Tensor, optional): Class labels of ground truth bboxes. - - Returns: - :obj:`SamplingResult`: Sampling result. - - Example: - >>> from mmdet.core.bbox import RandomSampler - >>> from mmdet.core.bbox import AssignResult - >>> from mmdet.core.bbox.demodata import ensure_rng, random_boxes - >>> rng = ensure_rng(None) - >>> assign_result = AssignResult.random(rng=rng) - >>> bboxes = random_boxes(assign_result.num_preds, rng=rng) - >>> gt_bboxes = random_boxes(assign_result.num_gts, rng=rng) - >>> gt_labels = None - >>> self = RandomSampler(num=32, pos_fraction=0.5, neg_pos_ub=-1, - >>> add_gt_as_proposals=False) - >>> self = self.sample(assign_result, bboxes, gt_bboxes, gt_labels) - """ - if len(bboxes.shape) < 2: - bboxes = bboxes[None, :] - - bboxes = bboxes[:, :4] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8) - if self.add_gt_as_proposals and len(gt_bboxes) > 0: - if gt_labels is None: - raise ValueError( - 'gt_labels must be given when add_gt_as_proposals is True') - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - # We found that sampled indices have duplicated items occasionally. - # (may be a bug of PyTorch) - pos_inds = pos_inds.unique() - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds = self.neg_sampler._sample_neg( - assign_result, num_expected_neg, bboxes=bboxes, **kwargs) - neg_inds = neg_inds.unique() - - sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags) - return sampling_result diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/combined_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/combined_sampler.py deleted file mode 100644 index 4f6d86ff..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/combined_sampler.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import BBOX_SAMPLERS, build_sampler -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class CombinedSampler(BaseSampler): - """A sampler that combines positive sampler and negative sampler.""" - - def __init__(self, pos_sampler, neg_sampler, **kwargs): - super(CombinedSampler, self).__init__(**kwargs) - self.pos_sampler = build_sampler(pos_sampler, **kwargs) - self.neg_sampler = build_sampler(neg_sampler, **kwargs) - - def _sample_pos(self, **kwargs): - """Sample positive samples.""" - raise NotImplementedError - - def _sample_neg(self, **kwargs): - """Sample negative samples.""" - raise NotImplementedError diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py deleted file mode 100644 index 5e0d9cc0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ..builder import BBOX_SAMPLERS -from .random_sampler import RandomSampler - - -@BBOX_SAMPLERS.register_module() -class InstanceBalancedPosSampler(RandomSampler): - """Instance balanced sampler that samples equal number of positive samples - for each instance.""" - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Sample positive boxes. - - Args: - assign_result (:obj:`AssignResult`): The assigned results of boxes. - num_expected (int): The number of expected positive samples - - Returns: - Tensor or ndarray: sampled indices. - """ - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - unique_gt_inds = assign_result.gt_inds[pos_inds].unique() - num_gts = len(unique_gt_inds) - num_per_gt = int(round(num_expected / float(num_gts)) + 1) - sampled_inds = [] - for i in unique_gt_inds: - inds = torch.nonzero( - assign_result.gt_inds == i.item(), as_tuple=False) - if inds.numel() != 0: - inds = inds.squeeze(1) - else: - continue - if len(inds) > num_per_gt: - inds = self.random_choice(inds, num_per_gt) - sampled_inds.append(inds) - sampled_inds = torch.cat(sampled_inds) - if len(sampled_inds) < num_expected: - num_extra = num_expected - len(sampled_inds) - extra_inds = np.array( - list(set(pos_inds.cpu()) - set(sampled_inds.cpu()))) - if len(extra_inds) > num_extra: - extra_inds = self.random_choice(extra_inds, num_extra) - extra_inds = torch.from_numpy(extra_inds).to( - assign_result.gt_inds.device).long() - sampled_inds = torch.cat([sampled_inds, extra_inds]) - elif len(sampled_inds) > num_expected: - sampled_inds = self.random_choice(sampled_inds, num_expected) - return sampled_inds diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py deleted file mode 100644 index 56e2874a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ..builder import BBOX_SAMPLERS -from .random_sampler import RandomSampler - - -@BBOX_SAMPLERS.register_module() -class IoUBalancedNegSampler(RandomSampler): - """IoU Balanced Sampling. 
- - arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019) - - Sampling proposals according to their IoU. `floor_fraction` of needed RoIs - are sampled from proposals whose IoU are lower than `floor_thr` randomly. - The others are sampled from proposals whose IoU are higher than - `floor_thr`. These proposals are sampled from some bins evenly, which are - split by `num_bins` via IoU evenly. - - Args: - num (int): number of proposals. - pos_fraction (float): fraction of positive proposals. - floor_thr (float): threshold (minimum) IoU for IoU balanced sampling, - set to -1 if all using IoU balanced sampling. - floor_fraction (float): sampling fraction of proposals under floor_thr. - num_bins (int): number of bins in IoU balanced sampling. - """ - - def __init__(self, - num, - pos_fraction, - floor_thr=-1, - floor_fraction=0, - num_bins=3, - **kwargs): - super(IoUBalancedNegSampler, self).__init__(num, pos_fraction, - **kwargs) - assert floor_thr >= 0 or floor_thr == -1 - assert 0 <= floor_fraction <= 1 - assert num_bins >= 1 - - self.floor_thr = floor_thr - self.floor_fraction = floor_fraction - self.num_bins = num_bins - - def sample_via_interval(self, max_overlaps, full_set, num_expected): - """Sample according to the iou interval. - - Args: - max_overlaps (torch.Tensor): IoU between bounding boxes and ground - truth boxes. - full_set (set(int)): A full set of indices of boxes。 - num_expected (int): Number of expected samples。 - - Returns: - np.ndarray: Indices of samples - """ - max_iou = max_overlaps.max() - iou_interval = (max_iou - self.floor_thr) / self.num_bins - per_num_expected = int(num_expected / self.num_bins) - - sampled_inds = [] - for i in range(self.num_bins): - start_iou = self.floor_thr + i * iou_interval - end_iou = self.floor_thr + (i + 1) * iou_interval - tmp_set = set( - np.where( - np.logical_and(max_overlaps >= start_iou, - max_overlaps < end_iou))[0]) - tmp_inds = list(tmp_set & full_set) - if len(tmp_inds) > per_num_expected: - tmp_sampled_set = self.random_choice(tmp_inds, - per_num_expected) - else: - tmp_sampled_set = np.array(tmp_inds, dtype=np.int) - sampled_inds.append(tmp_sampled_set) - - sampled_inds = np.concatenate(sampled_inds) - if len(sampled_inds) < num_expected: - num_extra = num_expected - len(sampled_inds) - extra_inds = np.array(list(full_set - set(sampled_inds))) - if len(extra_inds) > num_extra: - extra_inds = self.random_choice(extra_inds, num_extra) - sampled_inds = np.concatenate([sampled_inds, extra_inds]) - - return sampled_inds - - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Sample negative boxes. - - Args: - assign_result (:obj:`AssignResult`): The assigned results of boxes. - num_expected (int): The number of expected negative samples - - Returns: - Tensor or ndarray: sampled indices. 
- """ - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - max_overlaps = assign_result.max_overlaps.cpu().numpy() - # balance sampling for negative samples - neg_set = set(neg_inds.cpu().numpy()) - - if self.floor_thr > 0: - floor_set = set( - np.where( - np.logical_and(max_overlaps >= 0, - max_overlaps < self.floor_thr))[0]) - iou_sampling_set = set( - np.where(max_overlaps >= self.floor_thr)[0]) - elif self.floor_thr == 0: - floor_set = set(np.where(max_overlaps == 0)[0]) - iou_sampling_set = set( - np.where(max_overlaps > self.floor_thr)[0]) - else: - floor_set = set() - iou_sampling_set = set( - np.where(max_overlaps > self.floor_thr)[0]) - # for sampling interval calculation - self.floor_thr = 0 - - floor_neg_inds = list(floor_set & neg_set) - iou_sampling_neg_inds = list(iou_sampling_set & neg_set) - num_expected_iou_sampling = int(num_expected * - (1 - self.floor_fraction)) - if len(iou_sampling_neg_inds) > num_expected_iou_sampling: - if self.num_bins >= 2: - iou_sampled_inds = self.sample_via_interval( - max_overlaps, set(iou_sampling_neg_inds), - num_expected_iou_sampling) - else: - iou_sampled_inds = self.random_choice( - iou_sampling_neg_inds, num_expected_iou_sampling) - else: - iou_sampled_inds = np.array( - iou_sampling_neg_inds, dtype=np.int) - num_expected_floor = num_expected - len(iou_sampled_inds) - if len(floor_neg_inds) > num_expected_floor: - sampled_floor_inds = self.random_choice( - floor_neg_inds, num_expected_floor) - else: - sampled_floor_inds = np.array(floor_neg_inds, dtype=np.int) - sampled_inds = np.concatenate( - (sampled_floor_inds, iou_sampled_inds)) - if len(sampled_inds) < num_expected: - num_extra = num_expected - len(sampled_inds) - extra_inds = np.array(list(neg_set - set(sampled_inds))) - if len(extra_inds) > num_extra: - extra_inds = self.random_choice(extra_inds, num_extra) - sampled_inds = np.concatenate((sampled_inds, extra_inds)) - sampled_inds = torch.from_numpy(sampled_inds).long().to( - assign_result.gt_inds.device) - return sampled_inds diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/mask_pseudo_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/mask_pseudo_sampler.py deleted file mode 100644 index b5f69658..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/mask_pseudo_sampler.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""copy from -https://github.com/ZwwWayne/K-Net/blob/main/knet/det/mask_pseudo_sampler.py.""" - -import torch - -from mmdet.core.bbox.builder import BBOX_SAMPLERS -from .base_sampler import BaseSampler -from .mask_sampling_result import MaskSamplingResult - - -@BBOX_SAMPLERS.register_module() -class MaskPseudoSampler(BaseSampler): - """A pseudo sampler that does not do sampling actually.""" - - def __init__(self, **kwargs): - pass - - def _sample_pos(self, **kwargs): - """Sample positive samples.""" - raise NotImplementedError - - def _sample_neg(self, **kwargs): - """Sample negative samples.""" - raise NotImplementedError - - def sample(self, assign_result, masks, gt_masks, **kwargs): - """Directly returns the positive and negative indices of samples. 
- - Args: - assign_result (:obj:`AssignResult`): Assigned results - masks (torch.Tensor): Bounding boxes - gt_masks (torch.Tensor): Ground truth boxes - Returns: - :obj:`SamplingResult`: sampler results - """ - pos_inds = torch.nonzero( - assign_result.gt_inds > 0, as_tuple=False).squeeze(-1).unique() - neg_inds = torch.nonzero( - assign_result.gt_inds == 0, as_tuple=False).squeeze(-1).unique() - gt_flags = masks.new_zeros(masks.shape[0], dtype=torch.uint8) - sampling_result = MaskSamplingResult(pos_inds, neg_inds, masks, - gt_masks, assign_result, gt_flags) - return sampling_result diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/mask_sampling_result.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/mask_sampling_result.py deleted file mode 100644 index 3d109432..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/mask_sampling_result.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""copy from -https://github.com/ZwwWayne/K-Net/blob/main/knet/det/mask_pseudo_sampler.py.""" - -import torch - -from .sampling_result import SamplingResult - - -class MaskSamplingResult(SamplingResult): - """Mask sampling result.""" - - def __init__(self, pos_inds, neg_inds, masks, gt_masks, assign_result, - gt_flags): - self.pos_inds = pos_inds - self.neg_inds = neg_inds - self.pos_masks = masks[pos_inds] - self.neg_masks = masks[neg_inds] - self.pos_is_gt = gt_flags[pos_inds] - - self.num_gts = gt_masks.shape[0] - self.pos_assigned_gt_inds = assign_result.gt_inds[pos_inds] - 1 - - if gt_masks.numel() == 0: - # hack for index error case - assert self.pos_assigned_gt_inds.numel() == 0 - self.pos_gt_masks = torch.empty_like(gt_masks) - else: - self.pos_gt_masks = gt_masks[self.pos_assigned_gt_inds, :] - - if assign_result.labels is not None: - self.pos_gt_labels = assign_result.labels[pos_inds] - else: - self.pos_gt_labels = None - - @property - def masks(self): - """torch.Tensor: concatenated positive and negative boxes""" - return torch.cat([self.pos_masks, self.neg_masks]) - - def __nice__(self): - data = self.info.copy() - data['pos_masks'] = data.pop('pos_masks').shape - data['neg_masks'] = data.pop('neg_masks').shape - parts = [f"'{k}': {v!r}" for k, v in sorted(data.items())] - body = ' ' + ',\n '.join(parts) - return '{\n' + body + '\n}' - - @property - def info(self): - """Returns a dictionary of info about the object.""" - return { - 'pos_inds': self.pos_inds, - 'neg_inds': self.neg_inds, - 'pos_masks': self.pos_masks, - 'neg_masks': self.neg_masks, - 'pos_is_gt': self.pos_is_gt, - 'num_gts': self.num_gts, - 'pos_assigned_gt_inds': self.pos_assigned_gt_inds, - } diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/ohem_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/ohem_sampler.py deleted file mode 100644 index 7eb06663..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/ohem_sampler.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_SAMPLERS -from ..transforms import bbox2roi -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class OHEMSampler(BaseSampler): - r"""Online Hard Example Mining Sampler described in `Training Region-based - Object Detectors with Online Hard Example Mining - `_. 
- """ - - def __init__(self, - num, - pos_fraction, - context, - neg_pos_ub=-1, - add_gt_as_proposals=True, - loss_key='loss_cls', - **kwargs): - super(OHEMSampler, self).__init__(num, pos_fraction, neg_pos_ub, - add_gt_as_proposals) - self.context = context - if not hasattr(self.context, 'num_stages'): - self.bbox_head = self.context.bbox_head - else: - self.bbox_head = self.context.bbox_head[self.context.current_stage] - - self.loss_key = loss_key - - def hard_mining(self, inds, num_expected, bboxes, labels, feats): - with torch.no_grad(): - rois = bbox2roi([bboxes]) - if not hasattr(self.context, 'num_stages'): - bbox_results = self.context._bbox_forward(feats, rois) - else: - bbox_results = self.context._bbox_forward( - self.context.current_stage, feats, rois) - cls_score = bbox_results['cls_score'] - loss = self.bbox_head.loss( - cls_score=cls_score, - bbox_pred=None, - rois=rois, - labels=labels, - label_weights=cls_score.new_ones(cls_score.size(0)), - bbox_targets=None, - bbox_weights=None, - reduction_override='none')[self.loss_key] - _, topk_loss_inds = loss.topk(num_expected) - return inds[topk_loss_inds] - - def _sample_pos(self, - assign_result, - num_expected, - bboxes=None, - feats=None, - **kwargs): - """Sample positive boxes. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - num_expected (int): Number of expected positive samples - bboxes (torch.Tensor, optional): Boxes. Defaults to None. - feats (list[torch.Tensor], optional): Multi-level features. - Defaults to None. - - Returns: - torch.Tensor: Indices of positive samples - """ - # Sample some hard positive samples - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.hard_mining(pos_inds, num_expected, bboxes[pos_inds], - assign_result.labels[pos_inds], feats) - - def _sample_neg(self, - assign_result, - num_expected, - bboxes=None, - feats=None, - **kwargs): - """Sample negative boxes. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - num_expected (int): Number of expected negative samples - bboxes (torch.Tensor, optional): Boxes. Defaults to None. - feats (list[torch.Tensor], optional): Multi-level features. - Defaults to None. - - Returns: - torch.Tensor: Indices of negative samples - """ - # Sample some hard negative samples - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - neg_labels = assign_result.labels.new_empty( - neg_inds.size(0)).fill_(self.bbox_head.num_classes) - return self.hard_mining(neg_inds, num_expected, bboxes[neg_inds], - neg_labels, feats) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/pseudo_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/pseudo_sampler.py deleted file mode 100644 index b5ce298e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/pseudo_sampler.py +++ /dev/null @@ -1,42 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from ..builder import BBOX_SAMPLERS -from .base_sampler import BaseSampler -from .sampling_result import SamplingResult - - -@BBOX_SAMPLERS.register_module() -class PseudoSampler(BaseSampler): - """A pseudo sampler that does not do sampling actually.""" - - def __init__(self, **kwargs): - pass - - def _sample_pos(self, **kwargs): - """Sample positive samples.""" - raise NotImplementedError - - def _sample_neg(self, **kwargs): - """Sample negative samples.""" - raise NotImplementedError - - def sample(self, assign_result, bboxes, gt_bboxes, *args, **kwargs): - """Directly returns the positive and negative indices of samples. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - bboxes (torch.Tensor): Bounding boxes - gt_bboxes (torch.Tensor): Ground truth boxes - - Returns: - :obj:`SamplingResult`: sampler results - """ - pos_inds = torch.nonzero( - assign_result.gt_inds > 0, as_tuple=False).squeeze(-1).unique() - neg_inds = torch.nonzero( - assign_result.gt_inds == 0, as_tuple=False).squeeze(-1).unique() - gt_flags = bboxes.new_zeros(bboxes.shape[0], dtype=torch.uint8) - sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags) - return sampling_result diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/random_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/random_sampler.py deleted file mode 100644 index d09207e7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/random_sampler.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_SAMPLERS -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class RandomSampler(BaseSampler): - """Random sampler. - - Args: - num (int): Number of samples - pos_fraction (float): Fraction of positive samples - neg_pos_up (int, optional): Upper bound number of negative and - positive samples. Defaults to -1. - add_gt_as_proposals (bool, optional): Whether to add ground truth - boxes as proposals. Defaults to True. - """ - - def __init__(self, - num, - pos_fraction, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - from mmdet.core.bbox import demodata - super(RandomSampler, self).__init__(num, pos_fraction, neg_pos_ub, - add_gt_as_proposals) - self.rng = demodata.ensure_rng(kwargs.get('rng', None)) - - def random_choice(self, gallery, num): - """Random select some elements from the gallery. - - If `gallery` is a Tensor, the returned indices will be a Tensor; - If `gallery` is a ndarray or list, the returned indices will be a - ndarray. - - Args: - gallery (Tensor | ndarray | list): indices pool. - num (int): expected sample num. - - Returns: - Tensor or ndarray: sampled indices. - """ - assert len(gallery) >= num - - is_tensor = isinstance(gallery, torch.Tensor) - if not is_tensor: - if torch.cuda.is_available(): - device = torch.cuda.current_device() - else: - device = 'cpu' - gallery = torch.tensor(gallery, dtype=torch.long, device=device) - # This is a temporary fix. We can revert the following code - # when PyTorch fixes the abnormal return of torch.randperm. 
- # See: https://github.com/open-mmlab/mmdetection/pull/5014 - perm = torch.randperm(gallery.numel())[:num].to(device=gallery.device) - rand_inds = gallery[perm] - if not is_tensor: - rand_inds = rand_inds.cpu().numpy() - return rand_inds - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Randomly sample some positive samples.""" - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Randomly sample some negative samples.""" - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - return self.random_choice(neg_inds, num_expected) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/sampling_result.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/sampling_result.py deleted file mode 100644 index 50676d04..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/sampling_result.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.utils import util_mixins - - -class SamplingResult(util_mixins.NiceRepr): - """Bbox sampling result. - - Example: - >>> # xdoctest: +IGNORE_WANT - >>> from mmdet.core.bbox.samplers.sampling_result import * # NOQA - >>> self = SamplingResult.random(rng=10) - >>> print(f'self = {self}') - self = - """ - - def __init__(self, pos_inds, neg_inds, bboxes, gt_bboxes, assign_result, - gt_flags): - self.pos_inds = pos_inds - self.neg_inds = neg_inds - self.pos_bboxes = bboxes[pos_inds] - self.neg_bboxes = bboxes[neg_inds] - self.pos_is_gt = gt_flags[pos_inds] - - self.num_gts = gt_bboxes.shape[0] - self.pos_assigned_gt_inds = assign_result.gt_inds[pos_inds] - 1 - - if gt_bboxes.numel() == 0: - # hack for index error case - assert self.pos_assigned_gt_inds.numel() == 0 - self.pos_gt_bboxes = torch.empty_like(gt_bboxes).view(-1, 4) - else: - if len(gt_bboxes.shape) < 2: - gt_bboxes = gt_bboxes.view(-1, 4) - - self.pos_gt_bboxes = gt_bboxes[self.pos_assigned_gt_inds.long(), :] - - if assign_result.labels is not None: - self.pos_gt_labels = assign_result.labels[pos_inds] - else: - self.pos_gt_labels = None - - @property - def bboxes(self): - """torch.Tensor: concatenated positive and negative boxes""" - return torch.cat([self.pos_bboxes, self.neg_bboxes]) - - def to(self, device): - """Change the device of the data inplace. 
- - Example: - >>> self = SamplingResult.random() - >>> print(f'self = {self.to(None)}') - >>> # xdoctest: +REQUIRES(--gpu) - >>> print(f'self = {self.to(0)}') - """ - _dict = self.__dict__ - for key, value in _dict.items(): - if isinstance(value, torch.Tensor): - _dict[key] = value.to(device) - return self - - def __nice__(self): - data = self.info.copy() - data['pos_bboxes'] = data.pop('pos_bboxes').shape - data['neg_bboxes'] = data.pop('neg_bboxes').shape - parts = [f"'{k}': {v!r}" for k, v in sorted(data.items())] - body = ' ' + ',\n '.join(parts) - return '{\n' + body + '\n}' - - @property - def info(self): - """Returns a dictionary of info about the object.""" - return { - 'pos_inds': self.pos_inds, - 'neg_inds': self.neg_inds, - 'pos_bboxes': self.pos_bboxes, - 'neg_bboxes': self.neg_bboxes, - 'pos_is_gt': self.pos_is_gt, - 'num_gts': self.num_gts, - 'pos_assigned_gt_inds': self.pos_assigned_gt_inds, - } - - @classmethod - def random(cls, rng=None, **kwargs): - """ - Args: - rng (None | int | numpy.random.RandomState): seed or state. - kwargs (keyword arguments): - - num_preds: number of predicted boxes - - num_gts: number of true boxes - - p_ignore (float): probability of a predicted box assigned to \ - an ignored truth. - - p_assigned (float): probability of a predicted box not being \ - assigned. - - p_use_label (float | bool): with labels or not. - - Returns: - :obj:`SamplingResult`: Randomly generated sampling result. - - Example: - >>> from mmdet.core.bbox.samplers.sampling_result import * # NOQA - >>> self = SamplingResult.random() - >>> print(self.__dict__) - """ - from mmdet.core.bbox import demodata - from mmdet.core.bbox.assigners.assign_result import AssignResult - from mmdet.core.bbox.samplers.random_sampler import RandomSampler - rng = demodata.ensure_rng(rng) - - # make probabalistic? - num = 32 - pos_fraction = 0.5 - neg_pos_ub = -1 - - assign_result = AssignResult.random(rng=rng, **kwargs) - - # Note we could just compute an assignment - bboxes = demodata.random_boxes(assign_result.num_preds, rng=rng) - gt_bboxes = demodata.random_boxes(assign_result.num_gts, rng=rng) - - if rng.rand() > 0.2: - # sometimes algorithms squeeze their data, be robust to that - gt_bboxes = gt_bboxes.squeeze() - bboxes = bboxes.squeeze() - - if assign_result.labels is None: - gt_labels = None - else: - gt_labels = None # todo - - if gt_labels is None: - add_gt_as_proposals = False - else: - add_gt_as_proposals = True # make probabalistic? - - sampler = RandomSampler( - num, - pos_fraction, - neg_pos_ub=neg_pos_ub, - add_gt_as_proposals=add_gt_as_proposals, - rng=rng) - self = sampler.sample(assign_result, bboxes, gt_bboxes, gt_labels) - return self diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/score_hlr_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/score_hlr_sampler.py deleted file mode 100644 index f4be9b8c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/samplers/score_hlr_sampler.py +++ /dev/null @@ -1,265 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import nms_match - -from ..builder import BBOX_SAMPLERS -from ..transforms import bbox2roi -from .base_sampler import BaseSampler -from .sampling_result import SamplingResult - - -@BBOX_SAMPLERS.register_module() -class ScoreHLRSampler(BaseSampler): - r"""Importance-based Sample Reweighting (ISR_N), described in `Prime Sample - Attention in Object Detection `_. 
- - Score hierarchical local rank (HLR) differentiates with RandomSampler in - negative part. It firstly computes Score-HLR in a two-step way, - then linearly maps score hlr to the loss weights. - - Args: - num (int): Total number of sampled RoIs. - pos_fraction (float): Fraction of positive samples. - context (:class:`BaseRoIHead`): RoI head that the sampler belongs to. - neg_pos_ub (int): Upper bound of the ratio of num negative to num - positive, -1 means no upper bound. - add_gt_as_proposals (bool): Whether to add ground truth as proposals. - k (float): Power of the non-linear mapping. - bias (float): Shift of the non-linear mapping. - score_thr (float): Minimum score that a negative sample is to be - considered as valid bbox. - """ - - def __init__(self, - num, - pos_fraction, - context, - neg_pos_ub=-1, - add_gt_as_proposals=True, - k=0.5, - bias=0, - score_thr=0.05, - iou_thr=0.5, - **kwargs): - super().__init__(num, pos_fraction, neg_pos_ub, add_gt_as_proposals) - self.k = k - self.bias = bias - self.score_thr = score_thr - self.iou_thr = iou_thr - self.context = context - # context of cascade detectors is a list, so distinguish them here. - if not hasattr(context, 'num_stages'): - self.bbox_roi_extractor = context.bbox_roi_extractor - self.bbox_head = context.bbox_head - self.with_shared_head = context.with_shared_head - if self.with_shared_head: - self.shared_head = context.shared_head - else: - self.bbox_roi_extractor = context.bbox_roi_extractor[ - context.current_stage] - self.bbox_head = context.bbox_head[context.current_stage] - - @staticmethod - def random_choice(gallery, num): - """Randomly select some elements from the gallery. - - If `gallery` is a Tensor, the returned indices will be a Tensor; - If `gallery` is a ndarray or list, the returned indices will be a - ndarray. - - Args: - gallery (Tensor | ndarray | list): indices pool. - num (int): expected sample num. - - Returns: - Tensor or ndarray: sampled indices. - """ - assert len(gallery) >= num - - is_tensor = isinstance(gallery, torch.Tensor) - if not is_tensor: - if torch.cuda.is_available(): - device = torch.cuda.current_device() - else: - device = 'cpu' - gallery = torch.tensor(gallery, dtype=torch.long, device=device) - perm = torch.randperm(gallery.numel(), device=gallery.device)[:num] - rand_inds = gallery[perm] - if not is_tensor: - rand_inds = rand_inds.cpu().numpy() - return rand_inds - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Randomly sample some positive samples.""" - pos_inds = torch.nonzero(assign_result.gt_inds > 0).flatten() - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, - assign_result, - num_expected, - bboxes, - feats=None, - img_meta=None, - **kwargs): - """Sample negative samples. - - Score-HLR sampler is done in the following steps: - 1. Take the maximum positive score prediction of each negative samples - as s_i. - 2. Filter out negative samples whose s_i <= score_thr, the left samples - are called valid samples. - 3. Use NMS-Match to divide valid samples into different groups, - samples in the same group will greatly overlap with each other - 4. Rank the matched samples in two-steps to get Score-HLR. - (1) In the same group, rank samples with their scores. - (2) In the same score rank across different groups, - rank samples with their scores again. - 5. Linearly map Score-HLR to the final label weights. - - Args: - assign_result (:obj:`AssignResult`): result of assigner. 
- num_expected (int): Expected number of samples. - bboxes (Tensor): bbox to be sampled. - feats (Tensor): Features come from FPN. - img_meta (dict): Meta information dictionary. - """ - neg_inds = torch.nonzero(assign_result.gt_inds == 0).flatten() - num_neg = neg_inds.size(0) - if num_neg == 0: - return neg_inds, None - with torch.no_grad(): - neg_bboxes = bboxes[neg_inds] - neg_rois = bbox2roi([neg_bboxes]) - bbox_result = self.context._bbox_forward(feats, neg_rois) - cls_score, bbox_pred = bbox_result['cls_score'], bbox_result[ - 'bbox_pred'] - - ori_loss = self.bbox_head.loss( - cls_score=cls_score, - bbox_pred=None, - rois=None, - labels=neg_inds.new_full((num_neg, ), - self.bbox_head.num_classes), - label_weights=cls_score.new_ones(num_neg), - bbox_targets=None, - bbox_weights=None, - reduction_override='none')['loss_cls'] - - # filter out samples with the max score lower than score_thr - max_score, argmax_score = cls_score.softmax(-1)[:, :-1].max(-1) - valid_inds = (max_score > self.score_thr).nonzero().view(-1) - invalid_inds = (max_score <= self.score_thr).nonzero().view(-1) - num_valid = valid_inds.size(0) - num_invalid = invalid_inds.size(0) - - num_expected = min(num_neg, num_expected) - num_hlr = min(num_valid, num_expected) - num_rand = num_expected - num_hlr - if num_valid > 0: - valid_rois = neg_rois[valid_inds] - valid_max_score = max_score[valid_inds] - valid_argmax_score = argmax_score[valid_inds] - valid_bbox_pred = bbox_pred[valid_inds] - - # valid_bbox_pred shape: [num_valid, #num_classes, 4] - valid_bbox_pred = valid_bbox_pred.view( - valid_bbox_pred.size(0), -1, 4) - selected_bbox_pred = valid_bbox_pred[range(num_valid), - valid_argmax_score] - pred_bboxes = self.bbox_head.bbox_coder.decode( - valid_rois[:, 1:], selected_bbox_pred) - pred_bboxes_with_score = torch.cat( - [pred_bboxes, valid_max_score[:, None]], -1) - group = nms_match(pred_bboxes_with_score, self.iou_thr) - - # imp: importance - imp = cls_score.new_zeros(num_valid) - for g in group: - g_score = valid_max_score[g] - # g_score has already sorted - rank = g_score.new_tensor(range(g_score.size(0))) - imp[g] = num_valid - rank + g_score - _, imp_rank_inds = imp.sort(descending=True) - _, imp_rank = imp_rank_inds.sort() - hlr_inds = imp_rank_inds[:num_expected] - - if num_rand > 0: - rand_inds = torch.randperm(num_invalid)[:num_rand] - select_inds = torch.cat( - [valid_inds[hlr_inds], invalid_inds[rand_inds]]) - else: - select_inds = valid_inds[hlr_inds] - - neg_label_weights = cls_score.new_ones(num_expected) - - up_bound = max(num_expected, num_valid) - imp_weights = (up_bound - - imp_rank[hlr_inds].float()) / up_bound - neg_label_weights[:num_hlr] = imp_weights - neg_label_weights[num_hlr:] = imp_weights.min() - neg_label_weights = (self.bias + - (1 - self.bias) * neg_label_weights).pow( - self.k) - ori_selected_loss = ori_loss[select_inds] - new_loss = ori_selected_loss * neg_label_weights - norm_ratio = ori_selected_loss.sum() / new_loss.sum() - neg_label_weights *= norm_ratio - else: - neg_label_weights = cls_score.new_ones(num_expected) - select_inds = torch.randperm(num_neg)[:num_expected] - - return neg_inds[select_inds], neg_label_weights - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - img_meta=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. 
- bboxes (Tensor): Boxes to be sampled from. - gt_bboxes (Tensor): Ground truth bboxes. - gt_labels (Tensor, optional): Class labels of ground truth bboxes. - - Returns: - tuple[:obj:`SamplingResult`, Tensor]: Sampling result and negative - label weights. - """ - bboxes = bboxes[:, :4] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8) - if self.add_gt_as_proposals: - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds, neg_label_weights = self.neg_sampler._sample_neg( - assign_result, - num_expected_neg, - bboxes, - img_meta=img_meta, - **kwargs) - - return SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags), neg_label_weights diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/transforms.py b/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/transforms.py deleted file mode 100644 index 6d72076a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/bbox/transforms.py +++ /dev/null @@ -1,270 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - - -def find_inside_bboxes(bboxes, img_h, img_w): - """Find bboxes as long as a part of bboxes is inside the image. - - Args: - bboxes (Tensor): Shape (N, 4). - img_h (int): Image height. - img_w (int): Image width. - - Returns: - Tensor: Index of the remaining bboxes. - """ - inside_inds = (bboxes[:, 0] < img_w) & (bboxes[:, 2] > 0) \ - & (bboxes[:, 1] < img_h) & (bboxes[:, 3] > 0) - return inside_inds - - -def bbox_flip(bboxes, img_shape, direction='horizontal'): - """Flip bboxes horizontally or vertically. - - Args: - bboxes (Tensor): Shape (..., 4*k) - img_shape (tuple): Image shape. - direction (str): Flip direction, options are "horizontal", "vertical", - "diagonal". Default: "horizontal" - - Returns: - Tensor: Flipped bboxes. 
- """ - assert bboxes.shape[-1] % 4 == 0 - assert direction in ['horizontal', 'vertical', 'diagonal'] - flipped = bboxes.clone() - if direction == 'horizontal': - flipped[..., 0::4] = img_shape[1] - bboxes[..., 2::4] - flipped[..., 2::4] = img_shape[1] - bboxes[..., 0::4] - elif direction == 'vertical': - flipped[..., 1::4] = img_shape[0] - bboxes[..., 3::4] - flipped[..., 3::4] = img_shape[0] - bboxes[..., 1::4] - else: - flipped[..., 0::4] = img_shape[1] - bboxes[..., 2::4] - flipped[..., 1::4] = img_shape[0] - bboxes[..., 3::4] - flipped[..., 2::4] = img_shape[1] - bboxes[..., 0::4] - flipped[..., 3::4] = img_shape[0] - bboxes[..., 1::4] - return flipped - - -def bbox_mapping(bboxes, - img_shape, - scale_factor, - flip, - flip_direction='horizontal'): - """Map bboxes from the original image scale to testing scale.""" - new_bboxes = bboxes * bboxes.new_tensor(scale_factor) - if flip: - new_bboxes = bbox_flip(new_bboxes, img_shape, flip_direction) - return new_bboxes - - -def bbox_mapping_back(bboxes, - img_shape, - scale_factor, - flip, - flip_direction='horizontal'): - """Map bboxes from testing scale to original image scale.""" - new_bboxes = bbox_flip(bboxes, img_shape, - flip_direction) if flip else bboxes - new_bboxes = new_bboxes.view(-1, 4) / new_bboxes.new_tensor(scale_factor) - return new_bboxes.view(bboxes.shape) - - -def bbox2roi(bbox_list): - """Convert a list of bboxes to roi format. - - Args: - bbox_list (list[Tensor]): a list of bboxes corresponding to a batch - of images. - - Returns: - Tensor: shape (n, 5), [batch_ind, x1, y1, x2, y2] - """ - rois_list = [] - for img_id, bboxes in enumerate(bbox_list): - if bboxes.size(0) > 0: - img_inds = bboxes.new_full((bboxes.size(0), 1), img_id) - rois = torch.cat([img_inds, bboxes[:, :4]], dim=-1) - else: - rois = bboxes.new_zeros((0, 5)) - rois_list.append(rois) - rois = torch.cat(rois_list, 0) - return rois - - -def roi2bbox(rois): - """Convert rois to bounding box format. - - Args: - rois (torch.Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - Returns: - list[torch.Tensor]: Converted boxes of corresponding rois. - """ - bbox_list = [] - img_ids = torch.unique(rois[:, 0].cpu(), sorted=True) - for img_id in img_ids: - inds = (rois[:, 0] == img_id.item()) - bbox = rois[inds, 1:] - bbox_list.append(bbox) - return bbox_list - - -def bbox2result(bboxes, labels, num_classes): - """Convert detection results to a list of numpy arrays. - - Args: - bboxes (torch.Tensor | np.ndarray): shape (n, 5) - labels (torch.Tensor | np.ndarray): shape (n, ) - num_classes (int): class number, including background class - - Returns: - list(ndarray): bbox results of each class - """ - if bboxes.shape[0] == 0: - return [np.zeros((0, 5), dtype=np.float32) for i in range(num_classes)] - else: - if isinstance(bboxes, torch.Tensor): - bboxes = bboxes.detach().cpu().numpy() - labels = labels.detach().cpu().numpy() - return [bboxes[labels == i, :] for i in range(num_classes)] - - -def distance2bbox(points, distance, max_shape=None): - """Decode distance prediction to bounding box. - - Args: - points (Tensor): Shape (B, N, 2) or (N, 2). - distance (Tensor): Distance from the given point to 4 - boundaries (left, top, right, bottom). Shape (B, N, 4) or (N, 4) - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). 
If priors shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - - Returns: - Tensor: Boxes with shape (N, 4) or (B, N, 4) - """ - - x1 = points[..., 0] - distance[..., 0] - y1 = points[..., 1] - distance[..., 1] - x2 = points[..., 0] + distance[..., 2] - y2 = points[..., 1] + distance[..., 3] - - bboxes = torch.stack([x1, y1, x2, y2], -1) - - if max_shape is not None: - if bboxes.dim() == 2 and not torch.onnx.is_in_onnx_export(): - # speed up - bboxes[:, 0::2].clamp_(min=0, max=max_shape[1]) - bboxes[:, 1::2].clamp_(min=0, max=max_shape[0]) - return bboxes - - # clip bboxes with dynamic `min` and `max` for onnx - if torch.onnx.is_in_onnx_export(): - from mmdet.core.export import dynamic_clip_for_onnx - x1, y1, x2, y2 = dynamic_clip_for_onnx(x1, y1, x2, y2, max_shape) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - return bboxes - if not isinstance(max_shape, torch.Tensor): - max_shape = x1.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(x1) - if max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = x1.new_tensor(0) - max_xy = torch.cat([max_shape, max_shape], - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes - - -def bbox2distance(points, bbox, max_dis=None, eps=0.1): - """Decode bounding box based on distances. - - Args: - points (Tensor): Shape (n, 2), [x, y]. - bbox (Tensor): Shape (n, 4), "xyxy" format - max_dis (float): Upper bound of the distance. - eps (float): a small value to ensure target < max_dis, instead <= - - Returns: - Tensor: Decoded distances. - """ - left = points[:, 0] - bbox[:, 0] - top = points[:, 1] - bbox[:, 1] - right = bbox[:, 2] - points[:, 0] - bottom = bbox[:, 3] - points[:, 1] - if max_dis is not None: - left = left.clamp(min=0, max=max_dis - eps) - top = top.clamp(min=0, max=max_dis - eps) - right = right.clamp(min=0, max=max_dis - eps) - bottom = bottom.clamp(min=0, max=max_dis - eps) - return torch.stack([left, top, right, bottom], -1) - - -def bbox_rescale(bboxes, scale_factor=1.0): - """Rescale bounding box w.r.t. scale_factor. - - Args: - bboxes (Tensor): Shape (n, 4) for bboxes or (n, 5) for rois - scale_factor (float): rescale factor - - Returns: - Tensor: Rescaled bboxes. - """ - if bboxes.size(1) == 5: - bboxes_ = bboxes[:, 1:] - inds_ = bboxes[:, 0] - else: - bboxes_ = bboxes - cx = (bboxes_[:, 0] + bboxes_[:, 2]) * 0.5 - cy = (bboxes_[:, 1] + bboxes_[:, 3]) * 0.5 - w = bboxes_[:, 2] - bboxes_[:, 0] - h = bboxes_[:, 3] - bboxes_[:, 1] - w = w * scale_factor - h = h * scale_factor - x1 = cx - 0.5 * w - x2 = cx + 0.5 * w - y1 = cy - 0.5 * h - y2 = cy + 0.5 * h - if bboxes.size(1) == 5: - rescaled_bboxes = torch.stack([inds_, x1, y1, x2, y2], dim=-1) - else: - rescaled_bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - return rescaled_bboxes - - -def bbox_cxcywh_to_xyxy(bbox): - """Convert bbox coordinates from (cx, cy, w, h) to (x1, y1, x2, y2). - - Args: - bbox (Tensor): Shape (n, 4) for bboxes. - - Returns: - Tensor: Converted bboxes. - """ - cx, cy, w, h = bbox.split((1, 1, 1, 1), dim=-1) - bbox_new = [(cx - 0.5 * w), (cy - 0.5 * h), (cx + 0.5 * w), (cy + 0.5 * h)] - return torch.cat(bbox_new, dim=-1) - - -def bbox_xyxy_to_cxcywh(bbox): - """Convert bbox coordinates from (x1, y1, x2, y2) to (cx, cy, w, h). - - Args: - bbox (Tensor): Shape (n, 4) for bboxes. - - Returns: - Tensor: Converted bboxes. 
- """ - x1, y1, x2, y2 = bbox.split((1, 1, 1, 1), dim=-1) - bbox_new = [(x1 + x2) / 2, (y1 + y2) / 2, (x2 - x1), (y2 - y1)] - return torch.cat(bbox_new, dim=-1) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/data_structures/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/data_structures/__init__.py deleted file mode 100644 index 11ab96c5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/data_structures/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .general_data import GeneralData -from .instance_data import InstanceData - -__all__ = ['GeneralData', 'InstanceData'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/data_structures/general_data.py b/cv/3d_detection/paconv/pytorch/mmdet/core/data_structures/general_data.py deleted file mode 100644 index 99316e41..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/data_structures/general_data.py +++ /dev/null @@ -1,326 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import numpy as np -import torch - -from mmdet.utils.util_mixins import NiceRepr - - -class GeneralData(NiceRepr): - """A general data structure of OpenMMlab. - - A data structure that stores the meta information, - the annotations of the images or the model predictions, - which can be used in communication between components. - - The attributes in `GeneralData` are divided into two parts, - the `meta_info_fields` and the `data_fields` respectively. - - - `meta_info_fields`: Usually contains the - information about the image such as filename, - image_shape, pad_shape, etc. All attributes in - it are immutable once set, - but the user can add new meta information with - `set_meta_info` function, all information can be accessed - with methods `meta_info_keys`, `meta_info_values`, - `meta_info_items`. - - - `data_fields`: Annotations or model predictions are - stored. The attributes can be accessed or modified by - dict-like or object-like operations, such as - `.` , `[]`, `in`, `del`, `pop(str)` `get(str)`, `keys()`, - `values()`, `items()`. Users can also apply tensor-like methods - to all obj:`torch.Tensor` in the `data_fileds`, - such as `.cuda()`, `.cpu()`, `.numpy()`, `device`, `.to()` - `.detach()`, `.numpy()` - - Args: - meta_info (dict, optional): A dict contains the meta information - of single image. such as `img_shape`, `scale_factor`, etc. - Default: None. - data (dict, optional): A dict contains annotations of single image or - model predictions. Default: None. 
- - Examples: - >>> from mmdet.core import GeneralData - >>> img_meta = dict(img_shape=(800, 1196, 3), pad_shape=(800, 1216, 3)) - >>> instance_data = GeneralData(meta_info=img_meta) - >>> img_shape in instance_data - True - >>> instance_data.det_labels = torch.LongTensor([0, 1, 2, 3]) - >>> instance_data["det_scores"] = torch.Tensor([0.01, 0.1, 0.2, 0.3]) - >>> print(results) - - >>> instance_data.det_scores - tensor([0.0100, 0.1000, 0.2000, 0.3000]) - >>> instance_data.det_labels - tensor([0, 1, 2, 3]) - >>> instance_data['det_labels'] - tensor([0, 1, 2, 3]) - >>> 'det_labels' in instance_data - True - >>> instance_data.img_shape - (800, 1196, 3) - >>> 'det_scores' in instance_data - True - >>> del instance_data.det_scores - >>> 'det_scores' in instance_data - False - >>> det_labels = instance_data.pop('det_labels', None) - >>> det_labels - tensor([0, 1, 2, 3]) - >>> 'det_labels' in instance_data - >>> False - """ - - def __init__(self, meta_info=None, data=None): - - self._meta_info_fields = set() - self._data_fields = set() - - if meta_info is not None: - self.set_meta_info(meta_info=meta_info) - if data is not None: - self.set_data(data) - - def set_meta_info(self, meta_info): - """Add meta information. - - Args: - meta_info (dict): A dict contains the meta information - of image. such as `img_shape`, `scale_factor`, etc. - Default: None. - """ - assert isinstance(meta_info, - dict), f'meta should be a `dict` but get {meta_info}' - meta = copy.deepcopy(meta_info) - for k, v in meta.items(): - # should be consistent with original meta_info - if k in self._meta_info_fields: - ori_value = getattr(self, k) - if isinstance(ori_value, (torch.Tensor, np.ndarray)): - if (ori_value == v).all(): - continue - else: - raise KeyError( - f'img_meta_info {k} has been set as ' - f'{getattr(self, k)} before, which is immutable ') - elif ori_value == v: - continue - else: - raise KeyError( - f'img_meta_info {k} has been set as ' - f'{getattr(self, k)} before, which is immutable ') - else: - self._meta_info_fields.add(k) - self.__dict__[k] = v - - def set_data(self, data): - """Update a dict to `data_fields`. - - Args: - data (dict): A dict contains annotations of image or - model predictions. Default: None. - """ - assert isinstance(data, - dict), f'meta should be a `dict` but get {data}' - for k, v in data.items(): - self.__setattr__(k, v) - - def new(self, meta_info=None, data=None): - """Return a new results with same image meta information. - - Args: - meta_info (dict, optional): A dict contains the meta information - of image. such as `img_shape`, `scale_factor`, etc. - Default: None. - data (dict, optional): A dict contains annotations of image or - model predictions. Default: None. - """ - new_data = self.__class__() - new_data.set_meta_info(dict(self.meta_info_items())) - if meta_info is not None: - new_data.set_meta_info(meta_info) - if data is not None: - new_data.set_data(data) - return new_data - - def keys(self): - """ - Returns: - list: Contains all keys in data_fields. - """ - return [key for key in self._data_fields] - - def meta_info_keys(self): - """ - Returns: - list: Contains all keys in meta_info_fields. - """ - return [key for key in self._meta_info_fields] - - def values(self): - """ - Returns: - list: Contains all values in data_fields. - """ - return [getattr(self, k) for k in self.keys()] - - def meta_info_values(self): - """ - Returns: - list: Contains all values in meta_info_fields. 
- """ - return [getattr(self, k) for k in self.meta_info_keys()] - - def items(self): - for k in self.keys(): - yield (k, getattr(self, k)) - - def meta_info_items(self): - for k in self.meta_info_keys(): - yield (k, getattr(self, k)) - - def __setattr__(self, name, val): - if name in ('_meta_info_fields', '_data_fields'): - if not hasattr(self, name): - super().__setattr__(name, val) - else: - raise AttributeError( - f'{name} has been used as a ' - f'private attribute, which is immutable. ') - else: - if name in self._meta_info_fields: - raise AttributeError(f'`{name}` is used in meta information,' - f'which is immutable') - - self._data_fields.add(name) - super().__setattr__(name, val) - - def __delattr__(self, item): - - if item in ('_meta_info_fields', '_data_fields'): - raise AttributeError(f'{item} has been used as a ' - f'private attribute, which is immutable. ') - - if item in self._meta_info_fields: - raise KeyError(f'{item} is used in meta information, ' - f'which is immutable.') - super().__delattr__(item) - if item in self._data_fields: - self._data_fields.remove(item) - - # dict-like methods - __setitem__ = __setattr__ - __delitem__ = __delattr__ - - def __getitem__(self, name): - return getattr(self, name) - - def get(self, *args): - assert len(args) < 3, '`get` get more than 2 arguments' - return self.__dict__.get(*args) - - def pop(self, *args): - assert len(args) < 3, '`pop` get more than 2 arguments' - name = args[0] - if name in self._meta_info_fields: - raise KeyError(f'{name} is a key in meta information, ' - f'which is immutable') - - if args[0] in self._data_fields: - self._data_fields.remove(args[0]) - return self.__dict__.pop(*args) - - # with default value - elif len(args) == 2: - return args[1] - else: - raise KeyError(f'{args[0]}') - - def __contains__(self, item): - return item in self._data_fields or \ - item in self._meta_info_fields - - # Tensor-like methods - def to(self, *args, **kwargs): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if hasattr(v, 'to'): - v = v.to(*args, **kwargs) - new_data[k] = v - return new_data - - # Tensor-like methods - def cpu(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.cpu() - new_data[k] = v - return new_data - - # Tensor-like methods - def mlu(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.mlu() - new_data[k] = v - return new_data - - # Tensor-like methods - def cuda(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.cuda() - new_data[k] = v - return new_data - - # Tensor-like methods - def detach(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.detach() - new_data[k] = v - return new_data - - # Tensor-like methods - def numpy(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.detach().cpu().numpy() - new_data[k] = v - return new_data - - def __nice__(self): - repr = '\n \n META INFORMATION \n' - for k, v in self.meta_info_items(): - repr += f'{k}: {v} \n' - repr += '\n DATA FIELDS \n' 
- for k, v in self.items(): - if isinstance(v, (torch.Tensor, np.ndarray)): - repr += f'shape of {k}: {v.shape} \n' - else: - repr += f'{k}: {v} \n' - return repr + '\n' diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/data_structures/instance_data.py b/cv/3d_detection/paconv/pytorch/mmdet/core/data_structures/instance_data.py deleted file mode 100644 index eef2065c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/data_structures/instance_data.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import itertools - -import numpy as np -import torch - -from .general_data import GeneralData - - -class InstanceData(GeneralData): - """Data structure for instance-level annnotations or predictions. - - Subclass of :class:`GeneralData`. All value in `data_fields` - should have the same length. This design refer to - https://github.com/facebookresearch/detectron2/blob/master/detectron2/structures/instances.py # noqa E501 - - Examples: - >>> from mmdet.core import InstanceData - >>> import numpy as np - >>> img_meta = dict(img_shape=(800, 1196, 3), pad_shape=(800, 1216, 3)) - >>> results = InstanceData(img_meta) - >>> img_shape in results - True - >>> results.det_labels = torch.LongTensor([0, 1, 2, 3]) - >>> results["det_scores"] = torch.Tensor([0.01, 0.7, 0.6, 0.3]) - >>> results["det_masks"] = np.ndarray(4, 2, 2) - >>> len(results) - 4 - >>> print(resutls) - - >>> sorted_results = results[results.det_scores.sort().indices] - >>> sorted_results.det_scores - tensor([0.0100, 0.3000, 0.6000, 0.7000]) - >>> sorted_results.det_labels - tensor([0, 3, 2, 1]) - >>> print(results[results.scores > 0.5]) - - >>> results[results.det_scores > 0.5].det_labels - tensor([1, 2]) - >>> results[results.det_scores > 0.5].det_scores - tensor([0.7000, 0.6000]) - """ - - def __setattr__(self, name, value): - - if name in ('_meta_info_fields', '_data_fields'): - if not hasattr(self, name): - super().__setattr__(name, value) - else: - raise AttributeError( - f'{name} has been used as a ' - f'private attribute, which is immutable. ') - - else: - assert isinstance(value, (torch.Tensor, np.ndarray, list)), \ - f'Can set {type(value)}, only support' \ - f' {(torch.Tensor, np.ndarray, list)}' - - if self._data_fields: - assert len(value) == len(self), f'the length of ' \ - f'values {len(value)} is ' \ - f'not consistent with' \ - f' the length ' \ - f'of this :obj:`InstanceData` ' \ - f'{len(self)} ' - super().__setattr__(name, value) - - def __getitem__(self, item): - """ - Args: - item (str, obj:`slice`, - obj`torch.LongTensor`, obj:`torch.BoolTensor`): - get the corresponding values according to item. - - Returns: - obj:`InstanceData`: Corresponding values. - """ - assert len(self), ' This is a empty instance' - - assert isinstance( - item, (str, slice, int, torch.LongTensor, torch.BoolTensor)) - - if isinstance(item, str): - return getattr(self, item) - - if type(item) == int: - if item >= len(self) or item < -len(self): - raise IndexError(f'Index {item} out of range!') - else: - # keep the dimension - item = slice(item, None, len(self)) - - new_data = self.new() - if isinstance(item, (torch.Tensor)): - assert item.dim() == 1, 'Only support to get the' \ - ' values along the first dimension.' - if isinstance(item, torch.BoolTensor): - assert len(item) == len(self), f'The shape of the' \ - f' input(BoolTensor)) ' \ - f'{len(item)} ' \ - f' does not match the shape ' \ - f'of the indexed tensor ' \ - f'in results_filed ' \ - f'{len(self)} at ' \ - f'first dimension. 
' - - for k, v in self.items(): - if isinstance(v, torch.Tensor): - new_data[k] = v[item] - elif isinstance(v, np.ndarray): - new_data[k] = v[item.cpu().numpy()] - elif isinstance(v, list): - r_list = [] - # convert to indexes from boolTensor - if isinstance(item, torch.BoolTensor): - indexes = torch.nonzero(item).view(-1) - else: - indexes = item - for index in indexes: - r_list.append(v[index]) - new_data[k] = r_list - else: - # item is a slice - for k, v in self.items(): - new_data[k] = v[item] - return new_data - - @staticmethod - def cat(instances_list): - """Concat the predictions of all :obj:`InstanceData` in the list. - - Args: - instances_list (list[:obj:`InstanceData`]): A list - of :obj:`InstanceData`. - - Returns: - obj:`InstanceData` - """ - assert all( - isinstance(results, InstanceData) for results in instances_list) - assert len(instances_list) > 0 - if len(instances_list) == 1: - return instances_list[0] - - new_data = instances_list[0].new() - for k in instances_list[0]._data_fields: - values = [results[k] for results in instances_list] - v0 = values[0] - if isinstance(v0, torch.Tensor): - values = torch.cat(values, dim=0) - elif isinstance(v0, np.ndarray): - values = np.concatenate(values, axis=0) - elif isinstance(v0, list): - values = list(itertools.chain(*values)) - else: - raise ValueError( - f'Can not concat the {k} which is a {type(v0)}') - new_data[k] = values - return new_data - - def __len__(self): - if len(self._data_fields): - for v in self.values(): - return len(v) - else: - raise AssertionError('This is an empty `InstanceData`.') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/__init__.py deleted file mode 100644 index 67e7c55b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .class_names import (cityscapes_classes, coco_classes, dataset_aliases, - get_classes, imagenet_det_classes, - imagenet_vid_classes, oid_challenge_classes, - oid_v6_classes, voc_classes) -from .eval_hooks import DistEvalHook, EvalHook -from .mean_ap import average_precision, eval_map, print_map_summary -from .panoptic_utils import INSTANCE_OFFSET -from .recall import (eval_recalls, plot_iou_recall, plot_num_recall, - print_recall_summary) - -__all__ = [ - 'voc_classes', 'imagenet_det_classes', 'imagenet_vid_classes', - 'coco_classes', 'cityscapes_classes', 'dataset_aliases', 'get_classes', - 'DistEvalHook', 'EvalHook', 'average_precision', 'eval_map', - 'print_map_summary', 'eval_recalls', 'print_recall_summary', - 'plot_num_recall', 'plot_iou_recall', 'oid_v6_classes', - 'oid_challenge_classes', 'INSTANCE_OFFSET' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/bbox_overlaps.py b/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/bbox_overlaps.py deleted file mode 100644 index 5d6eb82f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/bbox_overlaps.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - - -def bbox_overlaps(bboxes1, - bboxes2, - mode='iou', - eps=1e-6, - use_legacy_coordinate=False): - """Calculate the ious between each bbox of bboxes1 and bboxes2. 
- - Args: - bboxes1 (ndarray): Shape (n, 4) - bboxes2 (ndarray): Shape (k, 4) - mode (str): IOU (intersection over union) or IOF (intersection - over foreground) - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Note when function is used in `VOCDataset`, it should be - True to align with the official implementation - `http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar` - Default: False. - - Returns: - ious (ndarray): Shape (n, k) - """ - - assert mode in ['iou', 'iof'] - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. - bboxes1 = bboxes1.astype(np.float32) - bboxes2 = bboxes2.astype(np.float32) - rows = bboxes1.shape[0] - cols = bboxes2.shape[0] - ious = np.zeros((rows, cols), dtype=np.float32) - if rows * cols == 0: - return ious - exchange = False - if bboxes1.shape[0] > bboxes2.shape[0]: - bboxes1, bboxes2 = bboxes2, bboxes1 - ious = np.zeros((cols, rows), dtype=np.float32) - exchange = True - area1 = (bboxes1[:, 2] - bboxes1[:, 0] + extra_length) * ( - bboxes1[:, 3] - bboxes1[:, 1] + extra_length) - area2 = (bboxes2[:, 2] - bboxes2[:, 0] + extra_length) * ( - bboxes2[:, 3] - bboxes2[:, 1] + extra_length) - for i in range(bboxes1.shape[0]): - x_start = np.maximum(bboxes1[i, 0], bboxes2[:, 0]) - y_start = np.maximum(bboxes1[i, 1], bboxes2[:, 1]) - x_end = np.minimum(bboxes1[i, 2], bboxes2[:, 2]) - y_end = np.minimum(bboxes1[i, 3], bboxes2[:, 3]) - overlap = np.maximum(x_end - x_start + extra_length, 0) * np.maximum( - y_end - y_start + extra_length, 0) - if mode == 'iou': - union = area1[i] + area2 - overlap - else: - union = area1[i] if not exchange else area2 - union = np.maximum(union, eps) - ious[i, :] = overlap / union - if exchange: - ious = ious.T - return ious diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/class_names.py b/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/class_names.py deleted file mode 100644 index 73797118..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/class_names.py +++ /dev/null @@ -1,332 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import mmcv - - -def wider_face_classes(): - return ['face'] - - -def voc_classes(): - return [ - 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', - 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', - 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor' - ] - - -def imagenet_det_classes(): - return [ - 'accordion', 'airplane', 'ant', 'antelope', 'apple', 'armadillo', - 'artichoke', 'axe', 'baby_bed', 'backpack', 'bagel', 'balance_beam', - 'banana', 'band_aid', 'banjo', 'baseball', 'basketball', 'bathing_cap', - 'beaker', 'bear', 'bee', 'bell_pepper', 'bench', 'bicycle', 'binder', - 'bird', 'bookshelf', 'bow_tie', 'bow', 'bowl', 'brassiere', 'burrito', - 'bus', 'butterfly', 'camel', 'can_opener', 'car', 'cart', 'cattle', - 'cello', 'centipede', 'chain_saw', 'chair', 'chime', 'cocktail_shaker', - 'coffee_maker', 'computer_keyboard', 'computer_mouse', 'corkscrew', - 'cream', 'croquet_ball', 'crutch', 'cucumber', 'cup_or_mug', 'diaper', - 'digital_clock', 'dishwasher', 'dog', 'domestic_cat', 'dragonfly', - 'drum', 'dumbbell', 'electric_fan', 'elephant', 'face_powder', 'fig', - 'filing_cabinet', 'flower_pot', 'flute', 'fox', 'french_horn', 'frog', - 'frying_pan', 'giant_panda', 'goldfish', 'golf_ball', 'golfcart', - 'guacamole', 'guitar', 'hair_dryer', 'hair_spray', 'hamburger', - 'hammer', 'hamster', 'harmonica', 'harp', 'hat_with_a_wide_brim', - 'head_cabbage', 'helmet', 'hippopotamus', 'horizontal_bar', 'horse', - 'hotdog', 'iPod', 'isopod', 'jellyfish', 'koala_bear', 'ladle', - 'ladybug', 'lamp', 'laptop', 'lemon', 'lion', 'lipstick', 'lizard', - 'lobster', 'maillot', 'maraca', 'microphone', 'microwave', 'milk_can', - 'miniskirt', 'monkey', 'motorcycle', 'mushroom', 'nail', 'neck_brace', - 'oboe', 'orange', 'otter', 'pencil_box', 'pencil_sharpener', 'perfume', - 'person', 'piano', 'pineapple', 'ping-pong_ball', 'pitcher', 'pizza', - 'plastic_bag', 'plate_rack', 'pomegranate', 'popsicle', 'porcupine', - 'power_drill', 'pretzel', 'printer', 'puck', 'punching_bag', 'purse', - 'rabbit', 'racket', 'ray', 'red_panda', 'refrigerator', - 'remote_control', 'rubber_eraser', 'rugby_ball', 'ruler', - 'salt_or_pepper_shaker', 'saxophone', 'scorpion', 'screwdriver', - 'seal', 'sheep', 'ski', 'skunk', 'snail', 'snake', 'snowmobile', - 'snowplow', 'soap_dispenser', 'soccer_ball', 'sofa', 'spatula', - 'squirrel', 'starfish', 'stethoscope', 'stove', 'strainer', - 'strawberry', 'stretcher', 'sunglasses', 'swimming_trunks', 'swine', - 'syringe', 'table', 'tape_player', 'tennis_ball', 'tick', 'tie', - 'tiger', 'toaster', 'traffic_light', 'train', 'trombone', 'trumpet', - 'turtle', 'tv_or_monitor', 'unicycle', 'vacuum', 'violin', - 'volleyball', 'waffle_iron', 'washer', 'water_bottle', 'watercraft', - 'whale', 'wine_bottle', 'zebra' - ] - - -def imagenet_vid_classes(): - return [ - 'airplane', 'antelope', 'bear', 'bicycle', 'bird', 'bus', 'car', - 'cattle', 'dog', 'domestic_cat', 'elephant', 'fox', 'giant_panda', - 'hamster', 'horse', 'lion', 'lizard', 'monkey', 'motorcycle', 'rabbit', - 'red_panda', 'sheep', 'snake', 'squirrel', 'tiger', 'train', 'turtle', - 'watercraft', 'whale', 'zebra' - ] - - -def coco_classes(): - return [ - 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', - 'truck', 'boat', 'traffic_light', 'fire_hydrant', 'stop_sign', - 'parking_meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', - 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', - 
'sports_ball', 'kite', 'baseball_bat', 'baseball_glove', 'skateboard', - 'surfboard', 'tennis_racket', 'bottle', 'wine_glass', 'cup', 'fork', - 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', - 'broccoli', 'carrot', 'hot_dog', 'pizza', 'donut', 'cake', 'chair', - 'couch', 'potted_plant', 'bed', 'dining_table', 'toilet', 'tv', - 'laptop', 'mouse', 'remote', 'keyboard', 'cell_phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', - 'scissors', 'teddy_bear', 'hair_drier', 'toothbrush' - ] - - -def cityscapes_classes(): - return [ - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle' - ] - - -def oid_challenge_classes(): - return [ - 'Footwear', 'Jeans', 'House', 'Tree', 'Woman', 'Man', 'Land vehicle', - 'Person', 'Wheel', 'Bus', 'Human face', 'Bird', 'Dress', 'Girl', - 'Vehicle', 'Building', 'Cat', 'Car', 'Belt', 'Elephant', 'Dessert', - 'Butterfly', 'Train', 'Guitar', 'Poster', 'Book', 'Boy', 'Bee', - 'Flower', 'Window', 'Hat', 'Human head', 'Dog', 'Human arm', 'Drink', - 'Human mouth', 'Human hair', 'Human nose', 'Human hand', 'Table', - 'Marine invertebrates', 'Fish', 'Sculpture', 'Rose', 'Street light', - 'Glasses', 'Fountain', 'Skyscraper', 'Swimwear', 'Brassiere', 'Drum', - 'Duck', 'Countertop', 'Furniture', 'Ball', 'Human leg', 'Boat', - 'Balloon', 'Bicycle helmet', 'Goggles', 'Door', 'Human eye', 'Shirt', - 'Toy', 'Teddy bear', 'Pasta', 'Tomato', 'Human ear', - 'Vehicle registration plate', 'Microphone', 'Musical keyboard', - 'Tower', 'Houseplant', 'Flowerpot', 'Fruit', 'Vegetable', - 'Musical instrument', 'Suit', 'Motorcycle', 'Bagel', 'French fries', - 'Hamburger', 'Chair', 'Salt and pepper shakers', 'Snail', 'Airplane', - 'Horse', 'Laptop', 'Computer keyboard', 'Football helmet', 'Cocktail', - 'Juice', 'Tie', 'Computer monitor', 'Human beard', 'Bottle', - 'Saxophone', 'Lemon', 'Mouse', 'Sock', 'Cowboy hat', 'Sun hat', - 'Football', 'Porch', 'Sunglasses', 'Lobster', 'Crab', 'Picture frame', - 'Van', 'Crocodile', 'Surfboard', 'Shorts', 'Helicopter', 'Helmet', - 'Sports uniform', 'Taxi', 'Swan', 'Goose', 'Coat', 'Jacket', 'Handbag', - 'Flag', 'Skateboard', 'Television', 'Tire', 'Spoon', 'Palm tree', - 'Stairs', 'Salad', 'Castle', 'Oven', 'Microwave oven', 'Wine', - 'Ceiling fan', 'Mechanical fan', 'Cattle', 'Truck', 'Box', 'Ambulance', - 'Desk', 'Wine glass', 'Reptile', 'Tank', 'Traffic light', 'Billboard', - 'Tent', 'Insect', 'Spider', 'Treadmill', 'Cupboard', 'Shelf', - 'Seat belt', 'Human foot', 'Bicycle', 'Bicycle wheel', 'Couch', - 'Bookcase', 'Fedora', 'Backpack', 'Bench', 'Oyster', - 'Moths and butterflies', 'Lavender', 'Waffle', 'Fork', 'Animal', - 'Accordion', 'Mobile phone', 'Plate', 'Coffee cup', 'Saucer', - 'Platter', 'Dagger', 'Knife', 'Bull', 'Tortoise', 'Sea turtle', 'Deer', - 'Weapon', 'Apple', 'Ski', 'Taco', 'Traffic sign', 'Beer', 'Necklace', - 'Sunflower', 'Piano', 'Organ', 'Harpsichord', 'Bed', 'Cabinetry', - 'Nightstand', 'Curtain', 'Chest of drawers', 'Drawer', 'Parrot', - 'Sandal', 'High heels', 'Tableware', 'Cart', 'Mushroom', 'Kite', - 'Missile', 'Seafood', 'Camera', 'Paper towel', 'Toilet paper', - 'Sombrero', 'Radish', 'Lighthouse', 'Segway', 'Pig', 'Watercraft', - 'Golf cart', 'studio couch', 'Dolphin', 'Whale', 'Earrings', 'Otter', - 'Sea lion', 'Whiteboard', 'Monkey', 'Gondola', 'Zebra', - 'Baseball glove', 'Scarf', 'Adhesive tape', 'Trousers', 'Scoreboard', - 'Lily', 'Carnivore', 'Power plugs and sockets', 'Office building', - 'Sandwich', 'Swimming pool', 'Headphones', 'Tin 
can', 'Crown', 'Doll', - 'Cake', 'Frog', 'Beetle', 'Ant', 'Gas stove', 'Canoe', 'Falcon', - 'Blue jay', 'Egg', 'Fire hydrant', 'Raccoon', 'Muffin', 'Wall clock', - 'Coffee', 'Mug', 'Tea', 'Bear', 'Waste container', 'Home appliance', - 'Candle', 'Lion', 'Mirror', 'Starfish', 'Marine mammal', 'Wheelchair', - 'Umbrella', 'Alpaca', 'Violin', 'Cello', 'Brown bear', 'Canary', 'Bat', - 'Ruler', 'Plastic bag', 'Penguin', 'Watermelon', 'Harbor seal', 'Pen', - 'Pumpkin', 'Harp', 'Kitchen appliance', 'Roller skates', 'Bust', - 'Coffee table', 'Tennis ball', 'Tennis racket', 'Ladder', 'Boot', - 'Bowl', 'Stop sign', 'Volleyball', 'Eagle', 'Paddle', 'Chicken', - 'Skull', 'Lamp', 'Beehive', 'Maple', 'Sink', 'Goldfish', 'Tripod', - 'Coconut', 'Bidet', 'Tap', 'Bathroom cabinet', 'Toilet', - 'Filing cabinet', 'Pretzel', 'Table tennis racket', 'Bronze sculpture', - 'Rocket', 'Mouse', 'Hamster', 'Lizard', 'Lifejacket', 'Goat', - 'Washing machine', 'Trumpet', 'Horn', 'Trombone', 'Sheep', - 'Tablet computer', 'Pillow', 'Kitchen & dining room table', - 'Parachute', 'Raven', 'Glove', 'Loveseat', 'Christmas tree', - 'Shellfish', 'Rifle', 'Shotgun', 'Sushi', 'Sparrow', 'Bread', - 'Toaster', 'Watch', 'Asparagus', 'Artichoke', 'Suitcase', 'Antelope', - 'Broccoli', 'Ice cream', 'Racket', 'Banana', 'Cookie', 'Cucumber', - 'Dragonfly', 'Lynx', 'Caterpillar', 'Light bulb', 'Office supplies', - 'Miniskirt', 'Skirt', 'Fireplace', 'Potato', 'Light switch', - 'Croissant', 'Cabbage', 'Ladybug', 'Handgun', 'Luggage and bags', - 'Window blind', 'Snowboard', 'Baseball bat', 'Digital clock', - 'Serving tray', 'Infant bed', 'Sofa bed', 'Guacamole', 'Fox', 'Pizza', - 'Snowplow', 'Jet ski', 'Refrigerator', 'Lantern', 'Convenience store', - 'Sword', 'Rugby ball', 'Owl', 'Ostrich', 'Pancake', 'Strawberry', - 'Carrot', 'Tart', 'Dice', 'Turkey', 'Rabbit', 'Invertebrate', 'Vase', - 'Stool', 'Swim cap', 'Shower', 'Clock', 'Jellyfish', 'Aircraft', - 'Chopsticks', 'Orange', 'Snake', 'Sewing machine', 'Kangaroo', 'Mixer', - 'Food processor', 'Shrimp', 'Towel', 'Porcupine', 'Jaguar', 'Cannon', - 'Limousine', 'Mule', 'Squirrel', 'Kitchen knife', 'Tiara', 'Tiger', - 'Bow and arrow', 'Candy', 'Rhinoceros', 'Shark', 'Cricket ball', - 'Doughnut', 'Plumbing fixture', 'Camel', 'Polar bear', 'Coin', - 'Printer', 'Blender', 'Giraffe', 'Billiard table', 'Kettle', - 'Dinosaur', 'Pineapple', 'Zucchini', 'Jug', 'Barge', 'Teapot', - 'Golf ball', 'Binoculars', 'Scissors', 'Hot dog', 'Door handle', - 'Seahorse', 'Bathtub', 'Leopard', 'Centipede', 'Grapefruit', 'Snowman', - 'Cheetah', 'Alarm clock', 'Grape', 'Wrench', 'Wok', 'Bell pepper', - 'Cake stand', 'Barrel', 'Woodpecker', 'Flute', 'Corded phone', - 'Willow', 'Punching bag', 'Pomegranate', 'Telephone', 'Pear', - 'Common fig', 'Bench', 'Wood-burning stove', 'Burrito', 'Nail', - 'Turtle', 'Submarine sandwich', 'Drinking straw', 'Peach', 'Popcorn', - 'Frying pan', 'Picnic basket', 'Honeycomb', 'Envelope', 'Mango', - 'Cutting board', 'Pitcher', 'Stationary bicycle', 'Dumbbell', - 'Personal care', 'Dog bed', 'Snowmobile', 'Oboe', 'Briefcase', - 'Squash', 'Tick', 'Slow cooker', 'Coffeemaker', 'Measuring cup', - 'Crutch', 'Stretcher', 'Screwdriver', 'Flashlight', 'Spatula', - 'Pressure cooker', 'Ring binder', 'Beaker', 'Torch', 'Winter melon' - ] - - -def oid_v6_classes(): - return [ - 'Tortoise', 'Container', 'Magpie', 'Sea turtle', 'Football', - 'Ambulance', 'Ladder', 'Toothbrush', 'Syringe', 'Sink', 'Toy', - 'Organ (Musical Instrument)', 'Cassette deck', 'Apple', 'Human eye', - 'Cosmetics', 'Paddle', 
'Snowman', 'Beer', 'Chopsticks', 'Human beard', - 'Bird', 'Parking meter', 'Traffic light', 'Croissant', 'Cucumber', - 'Radish', 'Towel', 'Doll', 'Skull', 'Washing machine', 'Glove', 'Tick', - 'Belt', 'Sunglasses', 'Banjo', 'Cart', 'Ball', 'Backpack', 'Bicycle', - 'Home appliance', 'Centipede', 'Boat', 'Surfboard', 'Boot', - 'Headphones', 'Hot dog', 'Shorts', 'Fast food', 'Bus', 'Boy', - 'Screwdriver', 'Bicycle wheel', 'Barge', 'Laptop', 'Miniskirt', - 'Drill (Tool)', 'Dress', 'Bear', 'Waffle', 'Pancake', 'Brown bear', - 'Woodpecker', 'Blue jay', 'Pretzel', 'Bagel', 'Tower', 'Teapot', - 'Person', 'Bow and arrow', 'Swimwear', 'Beehive', 'Brassiere', 'Bee', - 'Bat (Animal)', 'Starfish', 'Popcorn', 'Burrito', 'Chainsaw', - 'Balloon', 'Wrench', 'Tent', 'Vehicle registration plate', 'Lantern', - 'Toaster', 'Flashlight', 'Billboard', 'Tiara', 'Limousine', 'Necklace', - 'Carnivore', 'Scissors', 'Stairs', 'Computer keyboard', 'Printer', - 'Traffic sign', 'Chair', 'Shirt', 'Poster', 'Cheese', 'Sock', - 'Fire hydrant', 'Land vehicle', 'Earrings', 'Tie', 'Watercraft', - 'Cabinetry', 'Suitcase', 'Muffin', 'Bidet', 'Snack', 'Snowmobile', - 'Clock', 'Medical equipment', 'Cattle', 'Cello', 'Jet ski', 'Camel', - 'Coat', 'Suit', 'Desk', 'Cat', 'Bronze sculpture', 'Juice', 'Gondola', - 'Beetle', 'Cannon', 'Computer mouse', 'Cookie', 'Office building', - 'Fountain', 'Coin', 'Calculator', 'Cocktail', 'Computer monitor', - 'Box', 'Stapler', 'Christmas tree', 'Cowboy hat', 'Hiking equipment', - 'Studio couch', 'Drum', 'Dessert', 'Wine rack', 'Drink', 'Zucchini', - 'Ladle', 'Human mouth', 'Dairy Product', 'Dice', 'Oven', 'Dinosaur', - 'Ratchet (Device)', 'Couch', 'Cricket ball', 'Winter melon', 'Spatula', - 'Whiteboard', 'Pencil sharpener', 'Door', 'Hat', 'Shower', 'Eraser', - 'Fedora', 'Guacamole', 'Dagger', 'Scarf', 'Dolphin', 'Sombrero', - 'Tin can', 'Mug', 'Tap', 'Harbor seal', 'Stretcher', 'Can opener', - 'Goggles', 'Human body', 'Roller skates', 'Coffee cup', - 'Cutting board', 'Blender', 'Plumbing fixture', 'Stop sign', - 'Office supplies', 'Volleyball (Ball)', 'Vase', 'Slow cooker', - 'Wardrobe', 'Coffee', 'Whisk', 'Paper towel', 'Personal care', 'Food', - 'Sun hat', 'Tree house', 'Flying disc', 'Skirt', 'Gas stove', - 'Salt and pepper shakers', 'Mechanical fan', 'Face powder', 'Fax', - 'Fruit', 'French fries', 'Nightstand', 'Barrel', 'Kite', 'Tart', - 'Treadmill', 'Fox', 'Flag', 'French horn', 'Window blind', - 'Human foot', 'Golf cart', 'Jacket', 'Egg (Food)', 'Street light', - 'Guitar', 'Pillow', 'Human leg', 'Isopod', 'Grape', 'Human ear', - 'Power plugs and sockets', 'Panda', 'Giraffe', 'Woman', 'Door handle', - 'Rhinoceros', 'Bathtub', 'Goldfish', 'Houseplant', 'Goat', - 'Baseball bat', 'Baseball glove', 'Mixing bowl', - 'Marine invertebrates', 'Kitchen utensil', 'Light switch', 'House', - 'Horse', 'Stationary bicycle', 'Hammer', 'Ceiling fan', 'Sofa bed', - 'Adhesive tape', 'Harp', 'Sandal', 'Bicycle helmet', 'Saucer', - 'Harpsichord', 'Human hair', 'Heater', 'Harmonica', 'Hamster', - 'Curtain', 'Bed', 'Kettle', 'Fireplace', 'Scale', 'Drinking straw', - 'Insect', 'Hair dryer', 'Kitchenware', 'Indoor rower', 'Invertebrate', - 'Food processor', 'Bookcase', 'Refrigerator', 'Wood-burning stove', - 'Punching bag', 'Common fig', 'Cocktail shaker', 'Jaguar (Animal)', - 'Golf ball', 'Fashion accessory', 'Alarm clock', 'Filing cabinet', - 'Artichoke', 'Table', 'Tableware', 'Kangaroo', 'Koala', 'Knife', - 'Bottle', 'Bottle opener', 'Lynx', 'Lavender (Plant)', 'Lighthouse', - 'Dumbbell', 'Human head', 
'Bowl', 'Humidifier', 'Porch', 'Lizard', - 'Billiard table', 'Mammal', 'Mouse', 'Motorcycle', - 'Musical instrument', 'Swim cap', 'Frying pan', 'Snowplow', - 'Bathroom cabinet', 'Missile', 'Bust', 'Man', 'Waffle iron', 'Milk', - 'Ring binder', 'Plate', 'Mobile phone', 'Baked goods', 'Mushroom', - 'Crutch', 'Pitcher (Container)', 'Mirror', 'Personal flotation device', - 'Table tennis racket', 'Pencil case', 'Musical keyboard', 'Scoreboard', - 'Briefcase', 'Kitchen knife', 'Nail (Construction)', 'Tennis ball', - 'Plastic bag', 'Oboe', 'Chest of drawers', 'Ostrich', 'Piano', 'Girl', - 'Plant', 'Potato', 'Hair spray', 'Sports equipment', 'Pasta', - 'Penguin', 'Pumpkin', 'Pear', 'Infant bed', 'Polar bear', 'Mixer', - 'Cupboard', 'Jacuzzi', 'Pizza', 'Digital clock', 'Pig', 'Reptile', - 'Rifle', 'Lipstick', 'Skateboard', 'Raven', 'High heels', 'Red panda', - 'Rose', 'Rabbit', 'Sculpture', 'Saxophone', 'Shotgun', 'Seafood', - 'Submarine sandwich', 'Snowboard', 'Sword', 'Picture frame', 'Sushi', - 'Loveseat', 'Ski', 'Squirrel', 'Tripod', 'Stethoscope', 'Submarine', - 'Scorpion', 'Segway', 'Training bench', 'Snake', 'Coffee table', - 'Skyscraper', 'Sheep', 'Television', 'Trombone', 'Tea', 'Tank', 'Taco', - 'Telephone', 'Torch', 'Tiger', 'Strawberry', 'Trumpet', 'Tree', - 'Tomato', 'Train', 'Tool', 'Picnic basket', 'Cooking spray', - 'Trousers', 'Bowling equipment', 'Football helmet', 'Truck', - 'Measuring cup', 'Coffeemaker', 'Violin', 'Vehicle', 'Handbag', - 'Paper cutter', 'Wine', 'Weapon', 'Wheel', 'Worm', 'Wok', 'Whale', - 'Zebra', 'Auto part', 'Jug', 'Pizza cutter', 'Cream', 'Monkey', 'Lion', - 'Bread', 'Platter', 'Chicken', 'Eagle', 'Helicopter', 'Owl', 'Duck', - 'Turtle', 'Hippopotamus', 'Crocodile', 'Toilet', 'Toilet paper', - 'Squid', 'Clothing', 'Footwear', 'Lemon', 'Spider', 'Deer', 'Frog', - 'Banana', 'Rocket', 'Wine glass', 'Countertop', 'Tablet computer', - 'Waste container', 'Swimming pool', 'Dog', 'Book', 'Elephant', 'Shark', - 'Candle', 'Leopard', 'Axe', 'Hand dryer', 'Soap dispenser', - 'Porcupine', 'Flower', 'Canary', 'Cheetah', 'Palm tree', 'Hamburger', - 'Maple', 'Building', 'Fish', 'Lobster', 'Garden Asparagus', - 'Furniture', 'Hedgehog', 'Airplane', 'Spoon', 'Otter', 'Bull', - 'Oyster', 'Horizontal bar', 'Convenience store', 'Bomb', 'Bench', - 'Ice cream', 'Caterpillar', 'Butterfly', 'Parachute', 'Orange', - 'Antelope', 'Beaker', 'Moths and butterflies', 'Window', 'Closet', - 'Castle', 'Jellyfish', 'Goose', 'Mule', 'Swan', 'Peach', 'Coconut', - 'Seat belt', 'Raccoon', 'Chisel', 'Fork', 'Lamp', 'Camera', - 'Squash (Plant)', 'Racket', 'Human face', 'Human arm', 'Vegetable', - 'Diaper', 'Unicycle', 'Falcon', 'Chime', 'Snail', 'Shellfish', - 'Cabbage', 'Carrot', 'Mango', 'Jeans', 'Flowerpot', 'Pineapple', - 'Drawer', 'Stool', 'Envelope', 'Cake', 'Dragonfly', 'Common sunflower', - 'Microwave oven', 'Honeycomb', 'Marine mammal', 'Sea lion', 'Ladybug', - 'Shelf', 'Watch', 'Candy', 'Salad', 'Parrot', 'Handgun', 'Sparrow', - 'Van', 'Grinder', 'Spice rack', 'Light bulb', 'Corded phone', - 'Sports uniform', 'Tennis racket', 'Wall clock', 'Serving tray', - 'Kitchen & dining room table', 'Dog bed', 'Cake stand', - 'Cat furniture', 'Bathroom accessory', 'Facial tissue holder', - 'Pressure cooker', 'Kitchen appliance', 'Tire', 'Ruler', - 'Luggage and bags', 'Microphone', 'Broccoli', 'Umbrella', 'Pastry', - 'Grapefruit', 'Band-aid', 'Animal', 'Bell pepper', 'Turkey', 'Lily', - 'Pomegranate', 'Doughnut', 'Glasses', 'Human nose', 'Pen', 'Ant', - 'Car', 'Aircraft', 'Human hand', 'Skunk', 'Teddy 
bear', 'Watermelon', - 'Cantaloupe', 'Dishwasher', 'Flute', 'Balance beam', 'Sandwich', - 'Shrimp', 'Sewing machine', 'Binoculars', 'Rays and skates', 'Ipod', - 'Accordion', 'Willow', 'Crab', 'Crown', 'Seahorse', 'Perfume', - 'Alpaca', 'Taxi', 'Canoe', 'Remote control', 'Wheelchair', - 'Rugby ball', 'Armadillo', 'Maracas', 'Helmet' - ] - - -dataset_aliases = { - 'voc': ['voc', 'pascal_voc', 'voc07', 'voc12'], - 'imagenet_det': ['det', 'imagenet_det', 'ilsvrc_det'], - 'imagenet_vid': ['vid', 'imagenet_vid', 'ilsvrc_vid'], - 'coco': ['coco', 'mscoco', 'ms_coco'], - 'wider_face': ['WIDERFaceDataset', 'wider_face', 'WIDERFace'], - 'cityscapes': ['cityscapes'], - 'oid_challenge': ['oid_challenge', 'openimages_challenge'], - 'oid_v6': ['oid_v6', 'openimages_v6'] -} - - -def get_classes(dataset): - """Get class names of a dataset.""" - alias2name = {} - for name, aliases in dataset_aliases.items(): - for alias in aliases: - alias2name[alias] = name - - if mmcv.is_str(dataset): - if dataset in alias2name: - labels = eval(alias2name[dataset] + '_classes()') - else: - raise ValueError(f'Unrecognized dataset: {dataset}') - else: - raise TypeError(f'dataset must a str, but got {type(dataset)}') - return labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/eval_hooks.py b/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/eval_hooks.py deleted file mode 100644 index 7c1fbe96..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/eval_hooks.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import bisect -import os.path as osp - -import mmcv -import torch.distributed as dist -from mmcv.runner import DistEvalHook as BaseDistEvalHook -from mmcv.runner import EvalHook as BaseEvalHook -from torch.nn.modules.batchnorm import _BatchNorm - - -def _calc_dynamic_intervals(start_interval, dynamic_interval_list): - assert mmcv.is_list_of(dynamic_interval_list, tuple) - - dynamic_milestones = [0] - dynamic_milestones.extend( - [dynamic_interval[0] for dynamic_interval in dynamic_interval_list]) - dynamic_intervals = [start_interval] - dynamic_intervals.extend( - [dynamic_interval[1] for dynamic_interval in dynamic_interval_list]) - return dynamic_milestones, dynamic_intervals - - -class EvalHook(BaseEvalHook): - - def __init__(self, *args, dynamic_intervals=None, **kwargs): - super(EvalHook, self).__init__(*args, **kwargs) - - self.use_dynamic_intervals = dynamic_intervals is not None - if self.use_dynamic_intervals: - self.dynamic_milestones, self.dynamic_intervals = \ - _calc_dynamic_intervals(self.interval, dynamic_intervals) - - def _decide_interval(self, runner): - if self.use_dynamic_intervals: - progress = runner.epoch if self.by_epoch else runner.iter - step = bisect.bisect(self.dynamic_milestones, (progress + 1)) - # Dynamically modify the evaluation interval - self.interval = self.dynamic_intervals[step - 1] - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training by epoch.""" - self._decide_interval(runner) - super().before_train_epoch(runner) - - def before_train_iter(self, runner): - self._decide_interval(runner) - super().before_train_iter(runner) - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - if not self._should_evaluate(runner): - return - - from mmdet.apis import single_gpu_test - results = single_gpu_test(runner.model, self.dataloader, show=False) - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) 
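The `_calc_dynamic_intervals` helper above lets the evaluation hook tighten its interval as training progresses. A minimal standalone sketch of that bookkeeping, with hypothetical milestones, might look like this:

```python
import bisect

def calc_dynamic_intervals(start_interval, dynamic_interval_list):
    # milestones mark the progress (epoch or iter) at which a new interval takes effect
    milestones = [0] + [m for m, _ in dynamic_interval_list]
    intervals = [start_interval] + [i for _, i in dynamic_interval_list]
    return milestones, intervals

def current_interval(progress, milestones, intervals):
    # pick the interval belonging to the last milestone that has been reached
    step = bisect.bisect(milestones, progress + 1)
    return intervals[step - 1]

milestones, intervals = calc_dynamic_intervals(10, [(200, 5), (280, 1)])
print(current_interval(0, milestones, intervals))    # 10 -> evaluate every 10 epochs early on
print(current_interval(250, milestones, intervals))  # 5
print(current_interval(290, milestones, intervals))  # 1  -> evaluate every epoch near the end
```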
- # the key_score may be `None` so it needs to skip the action to save - # the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) - - -# Note: Considering that MMCV's EvalHook updated its interface in V1.3.16, -# in order to avoid strong version dependency, we did not directly -# inherit EvalHook but BaseDistEvalHook. -class DistEvalHook(BaseDistEvalHook): - - def __init__(self, *args, dynamic_intervals=None, **kwargs): - super(DistEvalHook, self).__init__(*args, **kwargs) - - self.use_dynamic_intervals = dynamic_intervals is not None - if self.use_dynamic_intervals: - self.dynamic_milestones, self.dynamic_intervals = \ - _calc_dynamic_intervals(self.interval, dynamic_intervals) - - def _decide_interval(self, runner): - if self.use_dynamic_intervals: - progress = runner.epoch if self.by_epoch else runner.iter - step = bisect.bisect(self.dynamic_milestones, (progress + 1)) - # Dynamically modify the evaluation interval - self.interval = self.dynamic_intervals[step - 1] - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training by epoch.""" - self._decide_interval(runner) - super().before_train_epoch(runner) - - def before_train_iter(self, runner): - self._decide_interval(runner) - super().before_train_iter(runner) - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - # Synchronization of BatchNorm's buffer (running_mean - # and running_var) is not supported in the DDP of pytorch, - # which may cause the inconsistent performance of models in - # different ranks, so we broadcast BatchNorm's buffers - # of rank 0 to other ranks to avoid this. - if self.broadcast_bn_buffer: - model = runner.model - for name, module in model.named_modules(): - if isinstance(module, - _BatchNorm) and module.track_running_stats: - dist.broadcast(module.running_var, 0) - dist.broadcast(module.running_mean, 0) - - if not self._should_evaluate(runner): - return - - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - - from mmdet.apis import multi_gpu_test - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect) - if runner.rank == 0: - print('\n') - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - - # the key_score may be `None` so it needs to skip - # the action to save the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/mean_ap.py b/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/mean_ap.py deleted file mode 100644 index fc1274ae..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/mean_ap.py +++ /dev/null @@ -1,753 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from multiprocessing import Pool - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .bbox_overlaps import bbox_overlaps -from .class_names import get_classes - - -def average_precision(recalls, precisions, mode='area'): - """Calculate average precision (for single or multiple scales). 
- - Args: - recalls (ndarray): shape (num_scales, num_dets) or (num_dets, ) - precisions (ndarray): shape (num_scales, num_dets) or (num_dets, ) - mode (str): 'area' or '11points', 'area' means calculating the area - under precision-recall curve, '11points' means calculating - the average precision of recalls at [0, 0.1, ..., 1] - - Returns: - float or ndarray: calculated average precision - """ - no_scale = False - if recalls.ndim == 1: - no_scale = True - recalls = recalls[np.newaxis, :] - precisions = precisions[np.newaxis, :] - assert recalls.shape == precisions.shape and recalls.ndim == 2 - num_scales = recalls.shape[0] - ap = np.zeros(num_scales, dtype=np.float32) - if mode == 'area': - zeros = np.zeros((num_scales, 1), dtype=recalls.dtype) - ones = np.ones((num_scales, 1), dtype=recalls.dtype) - mrec = np.hstack((zeros, recalls, ones)) - mpre = np.hstack((zeros, precisions, zeros)) - for i in range(mpre.shape[1] - 1, 0, -1): - mpre[:, i - 1] = np.maximum(mpre[:, i - 1], mpre[:, i]) - for i in range(num_scales): - ind = np.where(mrec[i, 1:] != mrec[i, :-1])[0] - ap[i] = np.sum( - (mrec[i, ind + 1] - mrec[i, ind]) * mpre[i, ind + 1]) - elif mode == '11points': - for i in range(num_scales): - for thr in np.arange(0, 1 + 1e-3, 0.1): - precs = precisions[i, recalls[i, :] >= thr] - prec = precs.max() if precs.size > 0 else 0 - ap[i] += prec - ap /= 11 - else: - raise ValueError( - 'Unrecognized mode, only "area" and "11points" are supported') - if no_scale: - ap = ap[0] - return ap - - -def tpfp_imagenet(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - default_iou_thr=0.5, - area_ranges=None, - use_legacy_coordinate=False): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - default_iou_thr (float): IoU threshold to be considered as matched for - medium and large bboxes (small ones have special rules). - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be evaluated, - in the format [(min1, max1), (min2, max2), ...]. Default: None. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - - Returns: - tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of - each array is (num_scales, m). - """ - - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. - - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp - # of a certain scale. - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] 
= 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp - ious = bbox_overlaps( - det_bboxes, gt_bboxes - 1, use_legacy_coordinate=use_legacy_coordinate) - gt_w = gt_bboxes[:, 2] - gt_bboxes[:, 0] + extra_length - gt_h = gt_bboxes[:, 3] - gt_bboxes[:, 1] + extra_length - iou_thrs = np.minimum((gt_w * gt_h) / ((gt_w + 10.0) * (gt_h + 10.0)), - default_iou_thr) - # sort all detections by scores in descending order - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = gt_w * gt_h - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - max_iou = -1 - matched_gt = -1 - # find best overlapped available gt - for j in range(num_gts): - # different from PASCAL VOC: allow finding other gts if the - # best overlapped ones are already matched by other det bboxes - if gt_covered[j]: - continue - elif ious[i, j] >= iou_thrs[j] and ious[i, j] > max_iou: - max_iou = ious[i, j] - matched_gt = j - # there are 4 cases for a det bbox: - # 1. it matches a gt, tp = 1, fp = 0 - # 2. it matches an ignored gt, tp = 0, fp = 0 - # 3. it matches no gt and within area range, tp = 0, fp = 1 - # 4. it matches no gt but is beyond area range, tp = 0, fp = 0 - if matched_gt >= 0: - gt_covered[matched_gt] = 1 - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - tp[k, i] = 1 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0] + extra_length) * ( - bbox[3] - bbox[1] + extra_length) - if area >= min_area and area < max_area: - fp[k, i] = 1 - return tp, fp - - -def tpfp_default(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - iou_thr=0.5, - area_ranges=None, - use_legacy_coordinate=False): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be - evaluated, in the format [(min1, max1), (min2, max2), ...]. - Default: None. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - - Returns: - tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of - each array is (num_scales, m). - """ - - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. 
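The `use_legacy_coordinate` flag documented in these helpers only changes how box width and height are measured: mmdet v1.x treated pixel coordinates as inclusive, adding 1 to each side. A small illustration with a made-up box:

```python
import numpy as np

bbox = np.array([10., 20., 19., 39.])  # hypothetical [x1, y1, x2, y2]

for use_legacy_coordinate in (False, True):
    extra_length = 1. if use_legacy_coordinate else 0.
    w = bbox[2] - bbox[0] + extra_length
    h = bbox[3] - bbox[1] + extra_length
    print(use_legacy_coordinate, w, h, w * h)
# False -> 9.0 x 19.0, area 171.0
# True  -> 10.0 x 20.0, area 200.0
```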
- - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp of - # a certain scale - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - - # if there is no gt bboxes in this image, then all det bboxes - # within area range are false positives - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] = 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp - - ious = bbox_overlaps( - det_bboxes, gt_bboxes, use_legacy_coordinate=use_legacy_coordinate) - # for each det, the max iou with all gts - ious_max = ious.max(axis=1) - # for each det, which gt overlaps most with it - ious_argmax = ious.argmax(axis=1) - # sort all dets in descending order by scores - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0] + extra_length) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1] + extra_length) - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - if ious_max[i] >= iou_thr: - matched_gt = ious_argmax[i] - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - if not gt_covered[matched_gt]: - gt_covered[matched_gt] = True - tp[k, i] = 1 - else: - fp[k, i] = 1 - # otherwise ignore this detected bbox, tp = 0, fp = 0 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0] + extra_length) * ( - bbox[3] - bbox[1] + extra_length) - if area >= min_area and area < max_area: - fp[k, i] = 1 - return tp, fp - - -def tpfp_openimages(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - iou_thr=0.5, - area_ranges=None, - use_legacy_coordinate=False, - gt_bboxes_group_of=None, - use_group_of=True, - ioa_thr=0.5): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be - evaluated, in the format [(min1, max1), (min2, max2), ...]. - Default: None. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - gt_bboxes_group_of (ndarray): GT group_of of this image, of shape - (k, 1). 
Default: None - use_group_of (bool): Whether to use group of when calculate TP and FP, - which only used in OpenImages evaluation. Default: True. - ioa_thr (float | None): IoA threshold to be considered as matched, - which only used in OpenImages evaluation. Default: 0.5. - - Returns: - tuple[np.ndarray]: Returns a tuple (tp, fp, det_bboxes), where - (tp, fp) whose elements are 0 and 1. The shape of each array is - (num_scales, m). (det_bboxes) whose will filter those are not - matched by group of gts when processing Open Images evaluation. - The shape is (num_scales, m). - """ - - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. - - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp of - # a certain scale - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - - # if there is no gt bboxes in this image, then all det bboxes - # within area range are false positives - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] = 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp, det_bboxes - - if gt_bboxes_group_of is not None and use_group_of: - # if handle group-of boxes, divided gt boxes into two parts: - # non-group-of and group-of.Then calculate ious and ioas through - # non-group-of group-of gts respectively. This only used in - # OpenImages evaluation. 
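For group-of ground truths the matching switches from IoU to IoA (`mode='iof'`). Assuming, as the call order suggests, that IoA normalises by the detection's own area, a tiny numpy sketch shows why a small correct detection inside a large group-of box can still match:

```python
import numpy as np

def intersect(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return max(x2 - x1, 0.) * max(y2 - y1, 0.)

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

det = np.array([0., 0., 10., 10.])         # hypothetical detection
group_gt = np.array([0., 0., 100., 100.])  # large group-of ground-truth box

inter = intersect(det, group_gt)
iou = inter / (area(det) + area(group_gt) - inter)
ioa = inter / area(det)
print(round(iou, 3), ioa)  # 0.01 vs 1.0: IoU fails a 0.5 threshold, IoA passes it
```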
- assert gt_bboxes_group_of.shape[0] == gt_bboxes.shape[0] - non_group_gt_bboxes = gt_bboxes[~gt_bboxes_group_of] - group_gt_bboxes = gt_bboxes[gt_bboxes_group_of] - num_gts_group = group_gt_bboxes.shape[0] - ious = bbox_overlaps(det_bboxes, non_group_gt_bboxes) - ioas = bbox_overlaps(det_bboxes, group_gt_bboxes, mode='iof') - else: - # if not consider group-of boxes, only calculate ious through gt boxes - ious = bbox_overlaps( - det_bboxes, gt_bboxes, use_legacy_coordinate=use_legacy_coordinate) - ioas = None - - if ious.shape[1] > 0: - # for each det, the max iou with all gts - ious_max = ious.max(axis=1) - # for each det, which gt overlaps most with it - ious_argmax = ious.argmax(axis=1) - # sort all dets in descending order by scores - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = ( - gt_bboxes[:, 2] - gt_bboxes[:, 0] + extra_length) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1] + extra_length) - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - if ious_max[i] >= iou_thr: - matched_gt = ious_argmax[i] - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - if not gt_covered[matched_gt]: - gt_covered[matched_gt] = True - tp[k, i] = 1 - else: - fp[k, i] = 1 - # otherwise ignore this detected bbox, tp = 0, fp = 0 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0] + extra_length) * ( - bbox[3] - bbox[1] + extra_length) - if area >= min_area and area < max_area: - fp[k, i] = 1 - else: - # if there is no no-group-of gt bboxes in this image, - # then all det bboxes within area range are false positives. - # Only used in OpenImages evaluation. - if area_ranges == [(None, None)]: - fp[...] = 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - - if ioas is None or ioas.shape[1] <= 0: - return tp, fp, det_bboxes - else: - # The evaluation of group-of TP and FP are done in two stages: - # 1. All detections are first matched to non group-of boxes; true - # positives are determined. - # 2. Detections that are determined as false positives are matched - # against group-of boxes and calculated group-of TP and FP. - # Only used in OpenImages evaluation. 
- det_bboxes_group = np.zeros( - (num_scales, ioas.shape[1], det_bboxes.shape[1]), dtype=float) - match_group_of = np.zeros((num_scales, num_dets), dtype=bool) - tp_group = np.zeros((num_scales, num_gts_group), dtype=np.float32) - ioas_max = ioas.max(axis=1) - # for each det, which gt overlaps most with it - ioas_argmax = ioas.argmax(axis=1) - # sort all dets in descending order by scores - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - box_is_covered = tp[k] - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1]) - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - matched_gt = ioas_argmax[i] - if not box_is_covered[i]: - if ioas_max[i] >= ioa_thr: - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - if not tp_group[k, matched_gt]: - tp_group[k, matched_gt] = 1 - match_group_of[k, i] = True - else: - match_group_of[k, i] = True - - if det_bboxes_group[k, matched_gt, -1] < \ - det_bboxes[i, -1]: - det_bboxes_group[k, matched_gt] = \ - det_bboxes[i] - - fp_group = (tp_group <= 0).astype(float) - tps = [] - fps = [] - # concatenate tp, fp, and det-boxes which not matched group of - # gt boxes and tp_group, fp_group, and det_bboxes_group which - # matched group of boxes respectively. - for i in range(num_scales): - tps.append( - np.concatenate((tp[i][~match_group_of[i]], tp_group[i]))) - fps.append( - np.concatenate((fp[i][~match_group_of[i]], fp_group[i]))) - det_bboxes = np.concatenate( - (det_bboxes[~match_group_of[i]], det_bboxes_group[i])) - - tp = np.vstack(tps) - fp = np.vstack(fps) - return tp, fp, det_bboxes - - -def get_cls_results(det_results, annotations, class_id): - """Get det results and gt information of a certain class. - - Args: - det_results (list[list]): Same as `eval_map()`. - annotations (list[dict]): Same as `eval_map()`. - class_id (int): ID of a specific class. - - Returns: - tuple[list[np.ndarray]]: detected bboxes, gt bboxes, ignored gt bboxes - """ - cls_dets = [img_res[class_id] for img_res in det_results] - cls_gts = [] - cls_gts_ignore = [] - for ann in annotations: - gt_inds = ann['labels'] == class_id - cls_gts.append(ann['bboxes'][gt_inds, :]) - - if ann.get('labels_ignore', None) is not None: - ignore_inds = ann['labels_ignore'] == class_id - cls_gts_ignore.append(ann['bboxes_ignore'][ignore_inds, :]) - else: - cls_gts_ignore.append(np.empty((0, 4), dtype=np.float32)) - - return cls_dets, cls_gts, cls_gts_ignore - - -def get_cls_group_ofs(annotations, class_id): - """Get `gt_group_of` of a certain class, which is used in Open Images. - - Args: - annotations (list[dict]): Same as `eval_map()`. - class_id (int): ID of a specific class. - - Returns: - list[np.ndarray]: `gt_group_of` of a certain class. - """ - gt_group_ofs = [] - for ann in annotations: - gt_inds = ann['labels'] == class_id - if ann.get('gt_is_group_ofs', None) is not None: - gt_group_ofs.append(ann['gt_is_group_ofs'][gt_inds]) - else: - gt_group_ofs.append(np.empty((0, 1), dtype=np.bool)) - - return gt_group_ofs - - -def eval_map(det_results, - annotations, - scale_ranges=None, - iou_thr=0.5, - ioa_thr=None, - dataset=None, - logger=None, - tpfp_fn=None, - nproc=4, - use_legacy_coordinate=False, - use_group_of=False): - """Evaluate mAP of a dataset. 
- - Args: - det_results (list[list]): [[cls1_det, cls2_det, ...], ...]. - The outer list indicates images, and the inner list indicates - per-class detected bboxes. - annotations (list[dict]): Ground truth annotations where each item of - the list indicates an image. Keys of annotations are: - - - `bboxes`: numpy array of shape (n, 4) - - `labels`: numpy array of shape (n, ) - - `bboxes_ignore` (optional): numpy array of shape (k, 4) - - `labels_ignore` (optional): numpy array of shape (k, ) - scale_ranges (list[tuple] | None): Range of scales to be evaluated, - in the format [(min1, max1), (min2, max2), ...]. A range of - (32, 64) means the area range between (32**2, 64**2). - Default: None. - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - ioa_thr (float | None): IoA threshold to be considered as matched, - which only used in OpenImages evaluation. Default: None. - dataset (list[str] | str | None): Dataset name or dataset classes, - there are minor differences in metrics for different datasets, e.g. - "voc07", "imagenet_det", etc. Default: None. - logger (logging.Logger | str | None): The way to print the mAP - summary. See `mmcv.utils.print_log()` for details. Default: None. - tpfp_fn (callable | None): The function used to determine true/ - false positives. If None, :func:`tpfp_default` is used as default - unless dataset is 'det' or 'vid' (:func:`tpfp_imagenet` in this - case). If it is given as a function, then this function is used - to evaluate tp & fp. Default None. - nproc (int): Processes used for computing TP and FP. - Default: 4. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - use_group_of (bool): Whether to use group of when calculate TP and FP, - which only used in OpenImages evaluation. Default: False. - - Returns: - tuple: (mAP, [dict, dict, ...]) - """ - assert len(det_results) == len(annotations) - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. 
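The core of `eval_map` is the per-class reduction further down: sort detections by score, cumulate tp/fp, turn them into recall and precision, and integrate the envelope of the precision curve (the 'area' mode of `average_precision`). A simplified single-scale version with hypothetical flags:

```python
import numpy as np

def average_precision_area(recalls, precisions):
    # single-scale 'area' mode: take the running maximum of precision from the right,
    # then integrate it over the recall steps
    mrec = np.concatenate(([0.], recalls, [1.]))
    mpre = np.concatenate(([0.], precisions, [0.]))
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

tp = np.array([1., 1., 0., 1., 0.])  # hypothetical per-detection flags, sorted by score
fp = 1. - tp
num_gts = 4
tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
recalls = tp_cum / num_gts
precisions = tp_cum / np.maximum(tp_cum + fp_cum, np.finfo(np.float64).eps)
print(average_precision_area(recalls, precisions))  # 0.6875
```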
- - num_imgs = len(det_results) - num_scales = len(scale_ranges) if scale_ranges is not None else 1 - num_classes = len(det_results[0]) # positive class num - area_ranges = ([(rg[0]**2, rg[1]**2) for rg in scale_ranges] - if scale_ranges is not None else None) - - pool = Pool(nproc) - eval_results = [] - for i in range(num_classes): - # get gt and det bboxes of this class - cls_dets, cls_gts, cls_gts_ignore = get_cls_results( - det_results, annotations, i) - # choose proper function according to datasets to compute tp and fp - if tpfp_fn is None: - if dataset in ['det', 'vid']: - tpfp_fn = tpfp_imagenet - elif dataset in ['oid_challenge', 'oid_v6'] \ - or use_group_of is True: - tpfp_fn = tpfp_openimages - else: - tpfp_fn = tpfp_default - if not callable(tpfp_fn): - raise ValueError( - f'tpfp_fn has to be a function or None, but got {tpfp_fn}') - args = [] - if use_group_of: - # used in Open Images Dataset evaluation - gt_group_ofs = get_cls_group_ofs(annotations, i) - args.append(gt_group_ofs) - args.append([use_group_of for _ in range(num_imgs)]) - if ioa_thr is not None: - args.append([ioa_thr for _ in range(num_imgs)]) - # compute tp and fp for each image with multiple processes - tpfp = pool.starmap( - tpfp_fn, - zip(cls_dets, cls_gts, cls_gts_ignore, - [iou_thr for _ in range(num_imgs)], - [area_ranges for _ in range(num_imgs)], - [use_legacy_coordinate for _ in range(num_imgs)], *args)) - if use_group_of: - tp, fp, cls_dets = tuple(zip(*tpfp)) - else: - tp, fp = tuple(zip(*tpfp)) - # calculate gt number of each scale - # ignored gts or gts beyond the specific scale are not counted - num_gts = np.zeros(num_scales, dtype=int) - for j, bbox in enumerate(cls_gts): - if area_ranges is None: - num_gts[0] += bbox.shape[0] - else: - gt_areas = (bbox[:, 2] - bbox[:, 0] + extra_length) * ( - bbox[:, 3] - bbox[:, 1] + extra_length) - for k, (min_area, max_area) in enumerate(area_ranges): - num_gts[k] += np.sum((gt_areas >= min_area) - & (gt_areas < max_area)) - # sort all det bboxes by score, also sort tp and fp - cls_dets = np.vstack(cls_dets) - num_dets = cls_dets.shape[0] - sort_inds = np.argsort(-cls_dets[:, -1]) - tp = np.hstack(tp)[:, sort_inds] - fp = np.hstack(fp)[:, sort_inds] - # calculate recall and precision with tp and fp - tp = np.cumsum(tp, axis=1) - fp = np.cumsum(fp, axis=1) - eps = np.finfo(np.float32).eps - recalls = tp / np.maximum(num_gts[:, np.newaxis], eps) - precisions = tp / np.maximum((tp + fp), eps) - # calculate AP - if scale_ranges is None: - recalls = recalls[0, :] - precisions = precisions[0, :] - num_gts = num_gts.item() - mode = 'area' if dataset != 'voc07' else '11points' - ap = average_precision(recalls, precisions, mode) - eval_results.append({ - 'num_gts': num_gts, - 'num_dets': num_dets, - 'recall': recalls, - 'precision': precisions, - 'ap': ap - }) - pool.close() - if scale_ranges is not None: - # shape (num_classes, num_scales) - all_ap = np.vstack([cls_result['ap'] for cls_result in eval_results]) - all_num_gts = np.vstack( - [cls_result['num_gts'] for cls_result in eval_results]) - mean_ap = [] - for i in range(num_scales): - if np.any(all_num_gts[:, i] > 0): - mean_ap.append(all_ap[all_num_gts[:, i] > 0, i].mean()) - else: - mean_ap.append(0.0) - else: - aps = [] - for cls_result in eval_results: - if cls_result['num_gts'] > 0: - aps.append(cls_result['ap']) - mean_ap = np.array(aps).mean().item() if aps else 0.0 - - print_map_summary( - mean_ap, eval_results, dataset, area_ranges, logger=logger) - - return mean_ap, eval_results - - -def 
print_map_summary(mean_ap, - results, - dataset=None, - scale_ranges=None, - logger=None): - """Print mAP and results of each class. - - A table will be printed to show the gts/dets/recall/AP of each class and - the mAP. - - Args: - mean_ap (float): Calculated from `eval_map()`. - results (list[dict]): Calculated from `eval_map()`. - dataset (list[str] | str | None): Dataset name or dataset classes. - scale_ranges (list[tuple] | None): Range of scales to be evaluated. - logger (logging.Logger | str | None): The way to print the mAP - summary. See `mmcv.utils.print_log()` for details. Default: None. - """ - - if logger == 'silent': - return - - if isinstance(results[0]['ap'], np.ndarray): - num_scales = len(results[0]['ap']) - else: - num_scales = 1 - - if scale_ranges is not None: - assert len(scale_ranges) == num_scales - - num_classes = len(results) - - recalls = np.zeros((num_scales, num_classes), dtype=np.float32) - aps = np.zeros((num_scales, num_classes), dtype=np.float32) - num_gts = np.zeros((num_scales, num_classes), dtype=int) - for i, cls_result in enumerate(results): - if cls_result['recall'].size > 0: - recalls[:, i] = np.array(cls_result['recall'], ndmin=2)[:, -1] - aps[:, i] = cls_result['ap'] - num_gts[:, i] = cls_result['num_gts'] - - if dataset is None: - label_names = [str(i) for i in range(num_classes)] - elif mmcv.is_str(dataset): - label_names = get_classes(dataset) - else: - label_names = dataset - - if not isinstance(mean_ap, list): - mean_ap = [mean_ap] - - header = ['class', 'gts', 'dets', 'recall', 'ap'] - for i in range(num_scales): - if scale_ranges is not None: - print_log(f'Scale range {scale_ranges[i]}', logger=logger) - table_data = [header] - for j in range(num_classes): - row_data = [ - label_names[j], num_gts[i, j], results[j]['num_dets'], - f'{recalls[i, j]:.3f}', f'{aps[i, j]:.3f}' - ] - table_data.append(row_data) - table_data.append(['mAP', '', '', '', f'{mean_ap[i]:.3f}']) - table = AsciiTable(table_data) - table.inner_footing_row_border = True - print_log('\n' + table.table, logger=logger) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/panoptic_utils.py b/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/panoptic_utils.py deleted file mode 100644 index 10c9ad93..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/panoptic_utils.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# A custom value to distinguish instance ID and category ID; need to -# be greater than the number of categories. -# For a pixel in the panoptic result map: -# pan_id = ins_id * INSTANCE_OFFSET + cat_id -INSTANCE_OFFSET = 1000 diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/recall.py b/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/recall.py deleted file mode 100644 index 82b3c909..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/evaluation/recall.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections.abc import Sequence - -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .bbox_overlaps import bbox_overlaps - - -def _recalls(all_ious, proposal_nums, thrs): - - img_num = all_ious.shape[0] - total_gt_num = sum([ious.shape[0] for ious in all_ious]) - - _ious = np.zeros((proposal_nums.size, total_gt_num), dtype=np.float32) - for k, proposal_num in enumerate(proposal_nums): - tmp_ious = np.zeros(0) - for i in range(img_num): - ious = all_ious[i][:, :proposal_num].copy() - gt_ious = np.zeros((ious.shape[0])) - if ious.size == 0: - tmp_ious = np.hstack((tmp_ious, gt_ious)) - continue - for j in range(ious.shape[0]): - gt_max_overlaps = ious.argmax(axis=1) - max_ious = ious[np.arange(0, ious.shape[0]), gt_max_overlaps] - gt_idx = max_ious.argmax() - gt_ious[j] = max_ious[gt_idx] - box_idx = gt_max_overlaps[gt_idx] - ious[gt_idx, :] = -1 - ious[:, box_idx] = -1 - tmp_ious = np.hstack((tmp_ious, gt_ious)) - _ious[k, :] = tmp_ious - - _ious = np.fliplr(np.sort(_ious, axis=1)) - recalls = np.zeros((proposal_nums.size, thrs.size)) - for i, thr in enumerate(thrs): - recalls[:, i] = (_ious >= thr).sum(axis=1) / float(total_gt_num) - - return recalls - - -def set_recall_param(proposal_nums, iou_thrs): - """Check proposal_nums and iou_thrs and set correct format.""" - if isinstance(proposal_nums, Sequence): - _proposal_nums = np.array(proposal_nums) - elif isinstance(proposal_nums, int): - _proposal_nums = np.array([proposal_nums]) - else: - _proposal_nums = proposal_nums - - if iou_thrs is None: - _iou_thrs = np.array([0.5]) - elif isinstance(iou_thrs, Sequence): - _iou_thrs = np.array(iou_thrs) - elif isinstance(iou_thrs, float): - _iou_thrs = np.array([iou_thrs]) - else: - _iou_thrs = iou_thrs - - return _proposal_nums, _iou_thrs - - -def eval_recalls(gts, - proposals, - proposal_nums=None, - iou_thrs=0.5, - logger=None, - use_legacy_coordinate=False): - """Calculate recalls. - - Args: - gts (list[ndarray]): a list of arrays of shape (n, 4) - proposals (list[ndarray]): a list of arrays of shape (k, 4) or (k, 5) - proposal_nums (int | Sequence[int]): Top N proposals to be evaluated. - iou_thrs (float | Sequence[float]): IoU thresholds. Default: 0.5. - logger (logging.Logger | str | None): The way to print the recall - summary. See `mmcv.utils.print_log()` for details. Default: None. - use_legacy_coordinate (bool): Whether use coordinate system - in mmdet v1.x. "1" was added to both height and width - which means w, h should be - computed as 'x2 - x1 + 1` and 'y2 - y1 + 1'. Default: False. 
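`eval_recalls` reports, for each proposal budget and IoU threshold, the fraction of ground-truth boxes that get covered. A simplified Recall@N sketch (made-up IoU matrix, and without the greedy one-to-one matching `_recalls` performs):

```python
import numpy as np

ious = np.array([[0.9, 0.2, 0.4],   # rows: GT boxes, cols: proposals sorted by score
                 [0.1, 0.6, 0.3],
                 [0.0, 0.1, 0.2]])
iou_thr = 0.5
for proposal_num in (1, 2, 3):
    recalled = (ious[:, :proposal_num].max(axis=1) >= iou_thr).sum()
    print(proposal_num, recalled / ious.shape[0])
# 1 -> 0.33..., 2 -> 0.66..., 3 -> 0.66...
```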
- - - Returns: - ndarray: recalls of different ious and proposal nums - """ - - img_num = len(gts) - assert img_num == len(proposals) - proposal_nums, iou_thrs = set_recall_param(proposal_nums, iou_thrs) - all_ious = [] - for i in range(img_num): - if proposals[i].ndim == 2 and proposals[i].shape[1] == 5: - scores = proposals[i][:, 4] - sort_idx = np.argsort(scores)[::-1] - img_proposal = proposals[i][sort_idx, :] - else: - img_proposal = proposals[i] - prop_num = min(img_proposal.shape[0], proposal_nums[-1]) - if gts[i] is None or gts[i].shape[0] == 0: - ious = np.zeros((0, img_proposal.shape[0]), dtype=np.float32) - else: - ious = bbox_overlaps( - gts[i], - img_proposal[:prop_num, :4], - use_legacy_coordinate=use_legacy_coordinate) - all_ious.append(ious) - all_ious = np.array(all_ious) - recalls = _recalls(all_ious, proposal_nums, iou_thrs) - - print_recall_summary(recalls, proposal_nums, iou_thrs, logger=logger) - return recalls - - -def print_recall_summary(recalls, - proposal_nums, - iou_thrs, - row_idxs=None, - col_idxs=None, - logger=None): - """Print recalls in a table. - - Args: - recalls (ndarray): calculated from `bbox_recalls` - proposal_nums (ndarray or list): top N proposals - iou_thrs (ndarray or list): iou thresholds - row_idxs (ndarray): which rows(proposal nums) to print - col_idxs (ndarray): which cols(iou thresholds) to print - logger (logging.Logger | str | None): The way to print the recall - summary. See `mmcv.utils.print_log()` for details. Default: None. - """ - proposal_nums = np.array(proposal_nums, dtype=np.int32) - iou_thrs = np.array(iou_thrs) - if row_idxs is None: - row_idxs = np.arange(proposal_nums.size) - if col_idxs is None: - col_idxs = np.arange(iou_thrs.size) - row_header = [''] + iou_thrs[col_idxs].tolist() - table_data = [row_header] - for i, num in enumerate(proposal_nums[row_idxs]): - row = [f'{val:.3f}' for val in recalls[row_idxs[i], col_idxs].tolist()] - row.insert(0, num) - table_data.append(row) - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - -def plot_num_recall(recalls, proposal_nums): - """Plot Proposal_num-Recalls curve. - - Args: - recalls(ndarray or list): shape (k,) - proposal_nums(ndarray or list): same shape as `recalls` - """ - if isinstance(proposal_nums, np.ndarray): - _proposal_nums = proposal_nums.tolist() - else: - _proposal_nums = proposal_nums - if isinstance(recalls, np.ndarray): - _recalls = recalls.tolist() - else: - _recalls = recalls - - import matplotlib.pyplot as plt - f = plt.figure() - plt.plot([0] + _proposal_nums, [0] + _recalls) - plt.xlabel('Proposal num') - plt.ylabel('Recall') - plt.axis([0, proposal_nums.max(), 0, 1]) - f.show() - - -def plot_iou_recall(recalls, iou_thrs): - """Plot IoU-Recalls curve. 
- - Args: - recalls(ndarray or list): shape (k,) - iou_thrs(ndarray or list): same shape as `recalls` - """ - if isinstance(iou_thrs, np.ndarray): - _iou_thrs = iou_thrs.tolist() - else: - _iou_thrs = iou_thrs - if isinstance(recalls, np.ndarray): - _recalls = recalls.tolist() - else: - _recalls = recalls - - import matplotlib.pyplot as plt - f = plt.figure() - plt.plot(_iou_thrs + [1.0], _recalls + [0.]) - plt.xlabel('IoU') - plt.ylabel('Recall') - plt.axis([iou_thrs.min(), 1, 0, 1]) - f.show() diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/export/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/export/__init__.py deleted file mode 100644 index a8179c93..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/export/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .onnx_helper import (add_dummy_nms_for_onnx, dynamic_clip_for_onnx, - get_k_for_topk) -from .pytorch2onnx import (build_model_from_cfg, - generate_inputs_and_wrap_model, - preprocess_example_input) - -__all__ = [ - 'build_model_from_cfg', 'generate_inputs_and_wrap_model', - 'preprocess_example_input', 'get_k_for_topk', 'add_dummy_nms_for_onnx', - 'dynamic_clip_for_onnx' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/export/model_wrappers.py b/cv/3d_detection/paconv/pytorch/mmdet/core/export/model_wrappers.py deleted file mode 100644 index 2f62bb03..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/export/model_wrappers.py +++ /dev/null @@ -1,183 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings - -import numpy as np -import torch - -from mmdet.core import bbox2result -from mmdet.models import BaseDetector - - -class DeployBaseDetector(BaseDetector): - """DeployBaseDetector.""" - - def __init__(self, class_names, device_id): - super(DeployBaseDetector, self).__init__() - self.CLASSES = class_names - self.device_id = device_id - - def simple_test(self, img, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def aug_test(self, imgs, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def extract_feat(self, imgs): - raise NotImplementedError('This method is not implemented.') - - def forward_train(self, imgs, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def val_step(self, data, optimizer): - raise NotImplementedError('This method is not implemented.') - - def train_step(self, data, optimizer): - raise NotImplementedError('This method is not implemented.') - - def forward_test(self, *, img, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def async_simple_test(self, img, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def forward(self, img, img_metas, return_loss=True, **kwargs): - outputs = self.forward_test(img, img_metas, **kwargs) - batch_dets, batch_labels = outputs[:2] - batch_masks = outputs[2] if len(outputs) == 3 else None - batch_size = img[0].shape[0] - img_metas = img_metas[0] - results = [] - rescale = kwargs.get('rescale', True) - for i in range(batch_size): - dets, labels = batch_dets[i], batch_labels[i] - if rescale: - scale_factor = img_metas[i]['scale_factor'] - - if isinstance(scale_factor, (list, tuple, np.ndarray)): - assert len(scale_factor) == 4 - scale_factor = np.array(scale_factor)[None, :] # [1,4] - dets[:, :4] /= scale_factor - - if 'border' in img_metas[i]: - # offset pixel of the 
top-left corners between original image - # and padded/enlarged image, 'border' is used when exporting - # CornerNet and CentripetalNet to onnx - x_off = img_metas[i]['border'][2] - y_off = img_metas[i]['border'][0] - dets[:, [0, 2]] -= x_off - dets[:, [1, 3]] -= y_off - dets[:, :4] *= (dets[:, :4] > 0).astype(dets.dtype) - - dets_results = bbox2result(dets, labels, len(self.CLASSES)) - - if batch_masks is not None: - masks = batch_masks[i] - img_h, img_w = img_metas[i]['img_shape'][:2] - ori_h, ori_w = img_metas[i]['ori_shape'][:2] - masks = masks[:, :img_h, :img_w] - if rescale: - masks = masks.astype(np.float32) - masks = torch.from_numpy(masks) - masks = torch.nn.functional.interpolate( - masks.unsqueeze(0), size=(ori_h, ori_w)) - masks = masks.squeeze(0).detach().numpy() - if masks.dtype != np.bool: - masks = masks >= 0.5 - segms_results = [[] for _ in range(len(self.CLASSES))] - for j in range(len(dets)): - segms_results[labels[j]].append(masks[j]) - results.append((dets_results, segms_results)) - else: - results.append(dets_results) - return results - - -class ONNXRuntimeDetector(DeployBaseDetector): - """Wrapper for detector's inference with ONNXRuntime.""" - - def __init__(self, onnx_file, class_names, device_id): - super(ONNXRuntimeDetector, self).__init__(class_names, device_id) - import onnxruntime as ort - - # get the custom op path - ort_custom_op_path = '' - try: - from mmcv.ops import get_onnxruntime_op_path - ort_custom_op_path = get_onnxruntime_op_path() - except (ImportError, ModuleNotFoundError): - warnings.warn('If input model has custom op from mmcv, \ - you may have to build mmcv with ONNXRuntime from source.') - session_options = ort.SessionOptions() - # register custom op for onnxruntime - if osp.exists(ort_custom_op_path): - session_options.register_custom_ops_library(ort_custom_op_path) - sess = ort.InferenceSession(onnx_file, session_options) - providers = ['CPUExecutionProvider'] - options = [{}] - is_cuda_available = ort.get_device() == 'GPU' - if is_cuda_available: - providers.insert(0, 'CUDAExecutionProvider') - options.insert(0, {'device_id': device_id}) - - sess.set_providers(providers, options) - - self.sess = sess - self.io_binding = sess.io_binding() - self.output_names = [_.name for _ in sess.get_outputs()] - self.is_cuda_available = is_cuda_available - - def forward_test(self, imgs, img_metas, **kwargs): - input_data = imgs[0] - # set io binding for inputs/outputs - device_type = 'cuda' if self.is_cuda_available else 'cpu' - if not self.is_cuda_available: - input_data = input_data.cpu() - self.io_binding.bind_input( - name='input', - device_type=device_type, - device_id=self.device_id, - element_type=np.float32, - shape=input_data.shape, - buffer_ptr=input_data.data_ptr()) - - for name in self.output_names: - self.io_binding.bind_output(name) - # run session to get outputs - self.sess.run_with_iobinding(self.io_binding) - ort_outputs = self.io_binding.copy_outputs_to_cpu() - return ort_outputs - - -class TensorRTDetector(DeployBaseDetector): - """Wrapper for detector's inference with TensorRT.""" - - def __init__(self, engine_file, class_names, device_id, output_names=None): - super(TensorRTDetector, self).__init__(class_names, device_id) - warnings.warn('`output_names` is deprecated and will be removed in ' - 'future releases.') - from mmcv.tensorrt import TRTWraper, load_tensorrt_plugin - try: - load_tensorrt_plugin() - except (ImportError, ModuleNotFoundError): - warnings.warn('If input model has custom op from mmcv, \ - you may have to build mmcv 
with TensorRT from source.') - - output_names = ['dets', 'labels'] - model = TRTWraper(engine_file, ['input'], output_names) - with_masks = False - # if TensorRT has totally 4 inputs/outputs, then - # the detector should have `mask` output. - if len(model.engine) == 4: - model.output_names = output_names + ['masks'] - with_masks = True - self.model = model - self.with_masks = with_masks - - def forward_test(self, imgs, img_metas, **kwargs): - input_data = imgs[0].contiguous() - with torch.cuda.device(self.device_id), torch.no_grad(): - outputs = self.model({'input': input_data}) - outputs = [outputs[name] for name in self.model.output_names] - outputs = [out.detach().cpu().numpy() for out in outputs] - return outputs diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/export/onnx_helper.py b/cv/3d_detection/paconv/pytorch/mmdet/core/export/onnx_helper.py deleted file mode 100644 index 9f6b9a01..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/export/onnx_helper.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os - -import torch - - -def dynamic_clip_for_onnx(x1, y1, x2, y2, max_shape): - """Clip boxes dynamically for onnx. - - Since torch.clamp cannot have dynamic `min` and `max`, we scale the - boxes by 1/max_shape and clamp in the range [0, 1]. - - Args: - x1 (Tensor): The x1 for bounding boxes. - y1 (Tensor): The y1 for bounding boxes. - x2 (Tensor): The x2 for bounding boxes. - y2 (Tensor): The y2 for bounding boxes. - max_shape (Tensor or torch.Size): The (H,W) of original image. - Returns: - tuple(Tensor): The clipped x1, y1, x2, y2. - """ - assert isinstance( - max_shape, - torch.Tensor), '`max_shape` should be tensor of (h,w) for onnx' - - # scale by 1/max_shape - x1 = x1 / max_shape[1] - y1 = y1 / max_shape[0] - x2 = x2 / max_shape[1] - y2 = y2 / max_shape[0] - - # clamp [0, 1] - x1 = torch.clamp(x1, 0, 1) - y1 = torch.clamp(y1, 0, 1) - x2 = torch.clamp(x2, 0, 1) - y2 = torch.clamp(y2, 0, 1) - - # scale back - x1 = x1 * max_shape[1] - y1 = y1 * max_shape[0] - x2 = x2 * max_shape[1] - y2 = y2 * max_shape[0] - return x1, y1, x2, y2 - - -def get_k_for_topk(k, size): - """Get k of TopK for onnx exporting. - - The K of TopK in TensorRT should not be a Tensor, while in ONNX Runtime - it could be a Tensor.Due to dynamic shape feature, we have to decide - whether to do TopK and what K it should be while exporting to ONNX. - If returned K is less than zero, it means we do not have to do - TopK operation. - - Args: - k (int or Tensor): The set k value for nms from config file. - size (Tensor or torch.Size): The number of elements of \ - TopK's input tensor - Returns: - tuple: (int or Tensor): The final K for TopK. - """ - ret_k = -1 - if k <= 0 or size <= 0: - return ret_k - if torch.onnx.is_in_onnx_export(): - is_trt_backend = os.environ.get('ONNX_BACKEND') == 'MMCVTensorRT' - if is_trt_backend: - # TensorRT does not support dynamic K with TopK op - if 0 < k < size: - ret_k = k - else: - # Always keep topk op for dynamic input in onnx for ONNX Runtime - ret_k = torch.where(k < size, k, size) - elif k < size: - ret_k = k - else: - # ret_k is -1 - pass - return ret_k - - -def add_dummy_nms_for_onnx(boxes, - scores, - max_output_boxes_per_class=1000, - iou_threshold=0.5, - score_threshold=0.05, - pre_top_k=-1, - after_top_k=-1, - labels=None): - """Create a dummy onnx::NonMaxSuppression op while exporting to ONNX. - - This function helps exporting to onnx with batch and multiclass NMS op. 
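The `dynamic_clip_for_onnx` helper above works around `torch.clamp` not accepting dynamic min/max by normalising coordinates to [0, 1], clamping, and scaling back. The same trick on made-up values:

```python
import torch

max_shape = torch.tensor([480., 640.])  # hypothetical (H, W) of the original image
x1, y1, x2, y2 = (torch.tensor([v]) for v in (-5., 10., 700., 300.))

x1, x2 = (torch.clamp(v / max_shape[1], 0, 1) * max_shape[1] for v in (x1, x2))
y1, y2 = (torch.clamp(v / max_shape[0], 0, 1) * max_shape[0] for v in (y1, y2))
print([round(v.item(), 4) for v in (x1, y1, x2, y2)])  # [0.0, 10.0, 640.0, 300.0]
```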
- It only supports class-agnostic detection results. That is, the scores - is of shape (N, num_bboxes, num_classes) and the boxes is of shape - (N, num_boxes, 4). - - Args: - boxes (Tensor): The bounding boxes of shape [N, num_boxes, 4] - scores (Tensor): The detection scores of shape - [N, num_boxes, num_classes] - max_output_boxes_per_class (int): Maximum number of output - boxes per class of nms. Defaults to 1000. - iou_threshold (float): IOU threshold of nms. Defaults to 0.5 - score_threshold (float): score threshold of nms. - Defaults to 0.05. - pre_top_k (bool): Number of top K boxes to keep before nms. - Defaults to -1. - after_top_k (int): Number of top K boxes to keep after nms. - Defaults to -1. - labels (Tensor, optional): It not None, explicit labels would be used. - Otherwise, labels would be automatically generated using - num_classed. Defaults to None. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - max_output_boxes_per_class = torch.LongTensor([max_output_boxes_per_class]) - iou_threshold = torch.tensor([iou_threshold], dtype=torch.float32) - score_threshold = torch.tensor([score_threshold], dtype=torch.float32) - batch_size = scores.shape[0] - num_class = scores.shape[2] - - nms_pre = torch.tensor(pre_top_k, device=scores.device, dtype=torch.long) - nms_pre = get_k_for_topk(nms_pre, boxes.shape[1]) - - if nms_pre > 0: - max_scores, _ = scores.max(-1) - _, topk_inds = max_scores.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - transformed_inds = boxes.shape[1] * batch_inds + topk_inds - boxes = boxes.reshape(-1, 4)[transformed_inds, :].reshape( - batch_size, -1, 4) - scores = scores.reshape(-1, num_class)[transformed_inds, :].reshape( - batch_size, -1, num_class) - if labels is not None: - labels = labels.reshape(-1, 1)[transformed_inds].reshape( - batch_size, -1) - - scores = scores.permute(0, 2, 1) - num_box = boxes.shape[1] - # turn off tracing to create a dummy output of nms - state = torch._C._get_tracing_state() - # dummy indices of nms's output - num_fake_det = 2 - batch_inds = torch.randint(batch_size, (num_fake_det, 1)) - cls_inds = torch.randint(num_class, (num_fake_det, 1)) - box_inds = torch.randint(num_box, (num_fake_det, 1)) - indices = torch.cat([batch_inds, cls_inds, box_inds], dim=1) - output = indices - setattr(DummyONNXNMSop, 'output', output) - - # open tracing - torch._C._set_tracing_state(state) - selected_indices = DummyONNXNMSop.apply(boxes, scores, - max_output_boxes_per_class, - iou_threshold, score_threshold) - - batch_inds, cls_inds = selected_indices[:, 0], selected_indices[:, 1] - box_inds = selected_indices[:, 2] - if labels is None: - labels = torch.arange(num_class, dtype=torch.long).to(scores.device) - labels = labels.view(1, num_class, 1).expand_as(scores) - scores = scores.reshape(-1, 1) - boxes = boxes.reshape(batch_size, -1).repeat(1, num_class).reshape(-1, 4) - pos_inds = (num_class * batch_inds + cls_inds) * num_box + box_inds - mask = scores.new_zeros(scores.shape) - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - # PyTorch style code: mask[batch_inds, box_inds] += 1 - mask[pos_inds, :] += 1 - scores = scores * mask - boxes = boxes * mask - - scores = scores.reshape(batch_size, -1) - boxes = boxes.reshape(batch_size, -1, 4) - labels = labels.reshape(batch_size, -1) - - nms_after = 
torch.tensor( - after_top_k, device=scores.device, dtype=torch.long) - nms_after = get_k_for_topk(nms_after, num_box * num_class) - - if nms_after > 0: - _, topk_inds = scores.topk(nms_after) - batch_inds = torch.arange(batch_size).view(-1, 1).expand_as(topk_inds) - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - transformed_inds = scores.shape[1] * batch_inds + topk_inds - scores = scores.reshape(-1, 1)[transformed_inds, :].reshape( - batch_size, -1) - boxes = boxes.reshape(-1, 4)[transformed_inds, :].reshape( - batch_size, -1, 4) - labels = labels.reshape(-1, 1)[transformed_inds, :].reshape( - batch_size, -1) - - scores = scores.unsqueeze(2) - dets = torch.cat([boxes, scores], dim=2) - return dets, labels - - -class DummyONNXNMSop(torch.autograd.Function): - """DummyONNXNMSop. - - This class is only for creating onnx::NonMaxSuppression. - """ - - @staticmethod - def forward(ctx, boxes, scores, max_output_boxes_per_class, iou_threshold, - score_threshold): - - return DummyONNXNMSop.output - - @staticmethod - def symbolic(g, boxes, scores, max_output_boxes_per_class, iou_threshold, - score_threshold): - return g.op( - 'NonMaxSuppression', - boxes, - scores, - max_output_boxes_per_class, - iou_threshold, - score_threshold, - outputs=1) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/export/pytorch2onnx.py b/cv/3d_detection/paconv/pytorch/mmdet/core/export/pytorch2onnx.py deleted file mode 100644 index b8261eed..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/export/pytorch2onnx.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial - -import mmcv -import numpy as np -import torch -from mmcv.runner import load_checkpoint - - -def generate_inputs_and_wrap_model(config_path, - checkpoint_path, - input_config, - cfg_options=None): - """Prepare sample input and wrap model for ONNX export. - - The ONNX export API only accept args, and all inputs should be - torch.Tensor or corresponding types (such as tuple of tensor). - So we should call this function before exporting. This function will: - - 1. generate corresponding inputs which are used to execute the model. - 2. Wrap the model's forward function. - - For example, the MMDet models' forward function has a parameter - ``return_loss:bool``. As we want to set it as False while export API - supports neither bool type or kwargs. So we have to replace the forward - method like ``model.forward = partial(model.forward, return_loss=False)``. - - Args: - config_path (str): the OpenMMLab config for the model we want to - export to ONNX - checkpoint_path (str): Path to the corresponding checkpoint - input_config (dict): the exactly data in this dict depends on the - framework. For MMSeg, we can just declare the input shape, - and generate the dummy data accordingly. However, for MMDet, - we may pass the real img path, or the NMS will return None - as there is no legal bbox. - - Returns: - tuple: (model, tensor_data) wrapped model which can be called by - ``model(*tensor_data)`` and a list of inputs which are used to - execute the model while exporting. 
- """ - - model = build_model_from_cfg( - config_path, checkpoint_path, cfg_options=cfg_options) - one_img, one_meta = preprocess_example_input(input_config) - tensor_data = [one_img] - model.forward = partial( - model.forward, img_metas=[[one_meta]], return_loss=False) - - # pytorch has some bug in pytorch1.3, we have to fix it - # by replacing these existing op - opset_version = 11 - # put the import within the function thus it will not cause import error - # when not using this function - try: - from mmcv.onnx.symbolic import register_extra_symbolics - except ModuleNotFoundError: - raise NotImplementedError('please update mmcv to version>=v1.0.4') - register_extra_symbolics(opset_version) - - return model, tensor_data - - -def build_model_from_cfg(config_path, checkpoint_path, cfg_options=None): - """Build a model from config and load the given checkpoint. - - Args: - config_path (str): the OpenMMLab config for the model we want to - export to ONNX - checkpoint_path (str): Path to the corresponding checkpoint - - Returns: - torch.nn.Module: the built model - """ - from mmdet.models import build_detector - - cfg = mmcv.Config.fromfile(config_path) - if cfg_options is not None: - cfg.merge_from_dict(cfg_options) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - cfg.model.pretrained = None - cfg.data.test.test_mode = True - - # build the model - cfg.model.train_cfg = None - model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg')) - checkpoint = load_checkpoint(model, checkpoint_path, map_location='cpu') - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - from mmdet.datasets import DATASETS - dataset = DATASETS.get(cfg.data.test['type']) - assert (dataset is not None) - model.CLASSES = dataset.CLASSES - model.cpu().eval() - return model - - -def preprocess_example_input(input_config): - """Prepare an example input image for ``generate_inputs_and_wrap_model``. - - Args: - input_config (dict): customized config describing the example input. - - Returns: - tuple: (one_img, one_meta), tensor of the example input image and \ - meta information for the example input image. 
- - Examples: - >>> from mmdet.core.export import preprocess_example_input - >>> input_config = { - >>> 'input_shape': (1,3,224,224), - >>> 'input_path': 'demo/demo.jpg', - >>> 'normalize_cfg': { - >>> 'mean': (123.675, 116.28, 103.53), - >>> 'std': (58.395, 57.12, 57.375) - >>> } - >>> } - >>> one_img, one_meta = preprocess_example_input(input_config) - >>> print(one_img.shape) - torch.Size([1, 3, 224, 224]) - >>> print(one_meta) - {'img_shape': (224, 224, 3), - 'ori_shape': (224, 224, 3), - 'pad_shape': (224, 224, 3), - 'filename': '.png', - 'scale_factor': 1.0, - 'flip': False} - """ - input_path = input_config['input_path'] - input_shape = input_config['input_shape'] - one_img = mmcv.imread(input_path) - one_img = mmcv.imresize(one_img, input_shape[2:][::-1]) - show_img = one_img.copy() - if 'normalize_cfg' in input_config.keys(): - normalize_cfg = input_config['normalize_cfg'] - mean = np.array(normalize_cfg['mean'], dtype=np.float32) - std = np.array(normalize_cfg['std'], dtype=np.float32) - to_rgb = normalize_cfg.get('to_rgb', True) - one_img = mmcv.imnormalize(one_img, mean, std, to_rgb=to_rgb) - one_img = one_img.transpose(2, 0, 1) - one_img = torch.from_numpy(one_img).unsqueeze(0).float().requires_grad_( - True) - (_, C, H, W) = input_shape - one_meta = { - 'img_shape': (H, W, C), - 'ori_shape': (H, W, C), - 'pad_shape': (H, W, C), - 'filename': '.png', - 'scale_factor': np.ones(4, dtype=np.float32), - 'flip': False, - 'show_img': show_img, - 'flip_direction': None - } - - return one_img, one_meta diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/hook/__init__.py deleted file mode 100644 index 788ab494..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .checkloss_hook import CheckInvalidLossHook -from .ema import ExpMomentumEMAHook, LinearMomentumEMAHook -from .memory_profiler_hook import MemoryProfilerHook -from .set_epoch_info_hook import SetEpochInfoHook -from .sync_norm_hook import SyncNormHook -from .sync_random_size_hook import SyncRandomSizeHook -from .yolox_lrupdater_hook import YOLOXLrUpdaterHook -from .yolox_mode_switch_hook import YOLOXModeSwitchHook - -__all__ = [ - 'SyncRandomSizeHook', 'YOLOXModeSwitchHook', 'SyncNormHook', - 'ExpMomentumEMAHook', 'LinearMomentumEMAHook', 'YOLOXLrUpdaterHook', - 'CheckInvalidLossHook', 'SetEpochInfoHook', 'MemoryProfilerHook' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/checkloss_hook.py b/cv/3d_detection/paconv/pytorch/mmdet/core/hook/checkloss_hook.py deleted file mode 100644 index 754e61be..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/checkloss_hook.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner.hooks import HOOKS, Hook - - -@HOOKS.register_module() -class CheckInvalidLossHook(Hook): - """Check invalid loss hook. - - This hook will regularly check whether the loss is valid - during training. - - Args: - interval (int): Checking interval (every k iterations). - Default: 50. 
- """ - - def __init__(self, interval=50): - self.interval = interval - - def after_train_iter(self, runner): - if self.every_n_iters(runner, self.interval): - assert torch.isfinite(runner.outputs['loss']), \ - runner.logger.info('loss become infinite or NaN!') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/ema.py b/cv/3d_detection/paconv/pytorch/mmdet/core/hook/ema.py deleted file mode 100644 index ff7bfbab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/ema.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from mmcv.parallel import is_module_wrapper -from mmcv.runner.hooks import HOOKS, Hook - - -class BaseEMAHook(Hook): - """Exponential Moving Average Hook. - - Use Exponential Moving Average on all parameters of model in training - process. All parameters have a ema backup, which update by the formula - as below. EMAHook takes priority over EvalHook and CheckpointHook. Note, - the original model parameters are actually saved in ema field after train. - - Args: - momentum (float): The momentum used for updating ema parameter. - Ema's parameter are updated with the formula: - `ema_param = (1-momentum) * ema_param + momentum * cur_param`. - Defaults to 0.0002. - skip_buffers (bool): Whether to skip the model buffers, such as - batchnorm running stats (running_mean, running_var), it does not - perform the ema operation. Default to False. - interval (int): Update ema parameter every interval iteration. - Defaults to 1. - resume_from (str, optional): The checkpoint path. Defaults to None. - momentum_fun (func, optional): The function to change momentum - during early iteration (also warmup) to help early training. - It uses `momentum` as a constant. Defaults to None. - """ - - def __init__(self, - momentum=0.0002, - interval=1, - skip_buffers=False, - resume_from=None, - momentum_fun=None): - assert 0 < momentum < 1 - self.momentum = momentum - self.skip_buffers = skip_buffers - self.interval = interval - self.checkpoint = resume_from - self.momentum_fun = momentum_fun - - def before_run(self, runner): - """To resume model with it's ema parameters more friendly. - - Register ema parameter as ``named_buffer`` to model. - """ - model = runner.model - if is_module_wrapper(model): - model = model.module - self.param_ema_buffer = {} - if self.skip_buffers: - self.model_parameters = dict(model.named_parameters()) - else: - self.model_parameters = model.state_dict() - for name, value in self.model_parameters.items(): - # "." 
is not allowed in module's buffer name - buffer_name = f"ema_{name.replace('.', '_')}" - self.param_ema_buffer[name] = buffer_name - model.register_buffer(buffer_name, value.data.clone()) - self.model_buffers = dict(model.named_buffers()) - if self.checkpoint is not None: - runner.resume(self.checkpoint) - - def get_momentum(self, runner): - return self.momentum_fun(runner.iter) if self.momentum_fun else \ - self.momentum - - def after_train_iter(self, runner): - """Update ema parameter every self.interval iterations.""" - if (runner.iter + 1) % self.interval != 0: - return - momentum = self.get_momentum(runner) - for name, parameter in self.model_parameters.items(): - # exclude num_tracking - if parameter.dtype.is_floating_point: - buffer_name = self.param_ema_buffer[name] - buffer_parameter = self.model_buffers[buffer_name] - buffer_parameter.mul_(1 - momentum).add_( - parameter.data, alpha=momentum) - - def after_train_epoch(self, runner): - """We load parameter values from ema backup to model before the - EvalHook.""" - self._swap_ema_parameters() - - def before_train_epoch(self, runner): - """We recover model's parameter from ema backup after last epoch's - EvalHook.""" - self._swap_ema_parameters() - - def _swap_ema_parameters(self): - """Swap the parameter of model with parameter in ema_buffer.""" - for name, value in self.model_parameters.items(): - temp = value.data.clone() - ema_buffer = self.model_buffers[self.param_ema_buffer[name]] - value.data.copy_(ema_buffer.data) - ema_buffer.data.copy_(temp) - - -@HOOKS.register_module() -class ExpMomentumEMAHook(BaseEMAHook): - """EMAHook using exponential momentum strategy. - - Args: - total_iter (int): The total number of iterations of EMA momentum. - Defaults to 2000. - """ - - def __init__(self, total_iter=2000, **kwargs): - super(ExpMomentumEMAHook, self).__init__(**kwargs) - self.momentum_fun = lambda x: (1 - self.momentum) * math.exp(-( - 1 + x) / total_iter) + self.momentum - - -@HOOKS.register_module() -class LinearMomentumEMAHook(BaseEMAHook): - """EMAHook using linear momentum strategy. - - Args: - warm_up (int): During first warm_up steps, we may use smaller decay - to update ema parameters more slowly. Defaults to 100. - """ - - def __init__(self, warm_up=100, **kwargs): - super(LinearMomentumEMAHook, self).__init__(**kwargs) - self.momentum_fun = lambda x: min(self.momentum**self.interval, - (1 + x) / (warm_up + x)) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/memory_profiler_hook.py b/cv/3d_detection/paconv/pytorch/mmdet/core/hook/memory_profiler_hook.py deleted file mode 100644 index e78a2838..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/memory_profiler_hook.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner.hooks import HOOKS, Hook - - -@HOOKS.register_module() -class MemoryProfilerHook(Hook): - """Memory profiler hook recording memory information: virtual memory, swap - memory and memory of current process. - - Args: - interval (int): Checking interval (every k iterations). - Default: 50. 
- """ - - def __init__(self, interval=50): - try: - from psutil import swap_memory, virtual_memory - self._swap_memory = swap_memory - self._virtual_memory = virtual_memory - except ImportError: - raise ImportError('psutil is not installed, please install it by: ' - 'pip install psutil') - - try: - from memory_profiler import memory_usage - self._memory_usage = memory_usage - except ImportError: - raise ImportError( - 'memory_profiler is not installed, please install it by: ' - 'pip install memory_profiler') - - self.interval = interval - - def after_iter(self, runner): - if self.every_n_iters(runner, self.interval): - # in Byte - virtual_memory = self._virtual_memory() - swap_memory = self._swap_memory() - # in MB - process_memory = self._memory_usage()[0] - factor = 1024 * 1024 - runner.logger.info( - 'Memory information ' - 'available_memory: ' - f'{round(virtual_memory.available / factor)} MB, ' - 'used_memory: ' - f'{round(virtual_memory.used / factor)} MB, ' - f'memory_utilization: {virtual_memory.percent} %, ' - 'available_swap_memory: ' - f'{round((swap_memory.total - swap_memory.used) / factor)}' - 'MB, ' - f'used_swap_memory: {round(swap_memory.used / factor)} MB, ' - f'swap_memory_utilization: {swap_memory.percent} %, ' - 'current_process_memory: ' - f'{round(process_memory)} MB') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/set_epoch_info_hook.py b/cv/3d_detection/paconv/pytorch/mmdet/core/hook/set_epoch_info_hook.py deleted file mode 100644 index c2b134ce..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/set_epoch_info_hook.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.parallel import is_module_wrapper -from mmcv.runner import HOOKS, Hook - - -@HOOKS.register_module() -class SetEpochInfoHook(Hook): - """Set runner's epoch information to the model.""" - - def before_train_epoch(self, runner): - epoch = runner.epoch - model = runner.model - if is_module_wrapper(model): - model = model.module - model.set_epoch(epoch) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/sync_norm_hook.py b/cv/3d_detection/paconv/pytorch/mmdet/core/hook/sync_norm_hook.py deleted file mode 100644 index 82931cef..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/sync_norm_hook.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections import OrderedDict - -from mmcv.runner import get_dist_info -from mmcv.runner.hooks import HOOKS, Hook -from torch import nn - -from ..utils.dist_utils import all_reduce_dict - - -def get_norm_states(module): - async_norm_states = OrderedDict() - for name, child in module.named_modules(): - if isinstance(child, nn.modules.batchnorm._NormBase): - for k, v in child.state_dict().items(): - async_norm_states['.'.join([name, k])] = v - return async_norm_states - - -@HOOKS.register_module() -class SyncNormHook(Hook): - """Synchronize Norm states after training epoch, currently used in YOLOX. - - Args: - num_last_epochs (int): The number of latter epochs in the end of the - training to switch to synchronizing norm interval. Default: 15. - interval (int): Synchronizing norm interval. Default: 1. - """ - - def __init__(self, num_last_epochs=15, interval=1): - self.interval = interval - self.num_last_epochs = num_last_epochs - - def before_train_epoch(self, runner): - epoch = runner.epoch - if (epoch + 1) == runner.max_epochs - self.num_last_epochs: - # Synchronize norm every epoch. 
- self.interval = 1 - - def after_train_epoch(self, runner): - """Synchronizing norm.""" - epoch = runner.epoch - module = runner.model - if (epoch + 1) % self.interval == 0: - _, world_size = get_dist_info() - if world_size == 1: - return - norm_states = get_norm_states(module) - if len(norm_states) == 0: - return - norm_states = all_reduce_dict(norm_states, op='mean') - module.load_state_dict(norm_states, strict=False) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/sync_random_size_hook.py b/cv/3d_detection/paconv/pytorch/mmdet/core/hook/sync_random_size_hook.py deleted file mode 100644 index 6d7e96c6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/sync_random_size_hook.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import random -import warnings - -import torch -from mmcv.runner import get_dist_info -from mmcv.runner.hooks import HOOKS, Hook -from torch import distributed as dist - - -@HOOKS.register_module() -class SyncRandomSizeHook(Hook): - """Change and synchronize the random image size across ranks. - SyncRandomSizeHook is deprecated, please use Resize pipeline to achieve - similar functions. Such as `dict(type='Resize', img_scale=[(448, 448), - (832, 832)], multiscale_mode='range', keep_ratio=True)`. - - Note: Due to the multi-process dataloader, its behavior is different - from YOLOX's official implementation, the official is to change the - size every fixed iteration interval and what we achieved is a fixed - epoch interval. - - Args: - ratio_range (tuple[int]): Random ratio range. It will be multiplied - by 32, and then change the dataset output image size. - Default: (14, 26). - img_scale (tuple[int]): Size of input image. Default: (640, 640). - interval (int): The epoch interval of change image size. Default: 1. - device (torch.device | str): device for returned tensors. - Default: 'cuda'. - """ - - def __init__(self, - ratio_range=(14, 26), - img_scale=(640, 640), - interval=1, - device='cuda'): - warnings.warn('DeprecationWarning: SyncRandomSizeHook is deprecated. ' - 'Please use Resize pipeline to achieve similar ' - 'functions. Due to the multi-process dataloader, ' - 'its behavior is different from YOLOX\'s official ' - 'implementation, the official is to change the size ' - 'every fixed iteration interval and what we achieved ' - 'is a fixed epoch interval.') - self.rank, world_size = get_dist_info() - self.is_distributed = world_size > 1 - self.ratio_range = ratio_range - self.img_scale = img_scale - self.interval = interval - self.device = device - - def after_train_epoch(self, runner): - """Change the dataset output image size.""" - if self.ratio_range is not None and (runner.epoch + - 1) % self.interval == 0: - # Due to DDP and DP get the device behavior inconsistent, - # so we did not get the device from runner.model. - tensor = torch.LongTensor(2).to(self.device) - - if self.rank == 0: - size_factor = self.img_scale[1] * 1. 
/ self.img_scale[0] - size = random.randint(*self.ratio_range) - size = (int(32 * size), 32 * int(size * size_factor)) - tensor[0] = size[0] - tensor[1] = size[1] - - if self.is_distributed: - dist.barrier() - dist.broadcast(tensor, 0) - - runner.data_loader.dataset.update_dynamic_scale( - (tensor[0].item(), tensor[1].item())) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/yolox_lrupdater_hook.py b/cv/3d_detection/paconv/pytorch/mmdet/core/hook/yolox_lrupdater_hook.py deleted file mode 100644 index ecb028ed..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/yolox_lrupdater_hook.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner.hooks import HOOKS -from mmcv.runner.hooks.lr_updater import (CosineAnnealingLrUpdaterHook, - annealing_cos) - - -@HOOKS.register_module() -class YOLOXLrUpdaterHook(CosineAnnealingLrUpdaterHook): - """YOLOX learning rate scheme. - - There are two main differences between YOLOXLrUpdaterHook - and CosineAnnealingLrUpdaterHook. - - 1. When the current running epoch is greater than - `max_epoch-last_epoch`, a fixed learning rate will be used - 2. The exp warmup scheme is different with LrUpdaterHook in MMCV - - Args: - num_last_epochs (int): The number of epochs with a fixed learning rate - before the end of the training. - """ - - def __init__(self, num_last_epochs, **kwargs): - self.num_last_epochs = num_last_epochs - super(YOLOXLrUpdaterHook, self).__init__(**kwargs) - - def get_warmup_lr(self, cur_iters): - - def _get_warmup_lr(cur_iters, regular_lr): - # exp warmup scheme - k = self.warmup_ratio * pow( - (cur_iters + 1) / float(self.warmup_iters), 2) - warmup_lr = [_lr * k for _lr in regular_lr] - return warmup_lr - - if isinstance(self.base_lr, dict): - lr_groups = {} - for key, base_lr in self.base_lr.items(): - lr_groups[key] = _get_warmup_lr(cur_iters, base_lr) - return lr_groups - else: - return _get_warmup_lr(cur_iters, self.base_lr) - - def get_lr(self, runner, base_lr): - last_iter = len(runner.data_loader) * self.num_last_epochs - - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - - progress += 1 - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - - if progress >= max_progress - last_iter: - # fixed learning rate - return target_lr - else: - return annealing_cos( - base_lr, target_lr, (progress - self.warmup_iters) / - (max_progress - self.warmup_iters - last_iter)) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/yolox_mode_switch_hook.py b/cv/3d_detection/paconv/pytorch/mmdet/core/hook/yolox_mode_switch_hook.py deleted file mode 100644 index 10834e68..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/hook/yolox_mode_switch_hook.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.parallel import is_module_wrapper -from mmcv.runner.hooks import HOOKS, Hook - - -@HOOKS.register_module() -class YOLOXModeSwitchHook(Hook): - """Switch the mode of YOLOX during training. - - This hook turns off the mosaic and mixup data augmentation and switches - to use L1 loss in bbox_head. - - Args: - num_last_epochs (int): The number of latter epochs in the end of the - training to close the data augmentation and switch to L1 loss. - Default: 15. - skip_type_keys (list[str], optional): Sequence of type string to be - skip pipeline. 
Default: ('Mosaic', 'RandomAffine', 'MixUp') - """ - - def __init__(self, - num_last_epochs=15, - skip_type_keys=('Mosaic', 'RandomAffine', 'MixUp')): - self.num_last_epochs = num_last_epochs - self.skip_type_keys = skip_type_keys - self._restart_dataloader = False - - def before_train_epoch(self, runner): - """Close mosaic and mixup augmentation and switches to use L1 loss.""" - epoch = runner.epoch - train_loader = runner.data_loader - model = runner.model - if is_module_wrapper(model): - model = model.module - if (epoch + 1) == runner.max_epochs - self.num_last_epochs: - runner.logger.info('No mosaic and mixup aug now!') - # The dataset pipeline cannot be updated when persistent_workers - # is True, so we need to force the dataloader's multi-process - # restart. This is a very hacky approach. - train_loader.dataset.update_skip_type_keys(self.skip_type_keys) - if hasattr(train_loader, 'persistent_workers' - ) and train_loader.persistent_workers is True: - train_loader._DataLoader__initialized = False - train_loader._iterator = None - self._restart_dataloader = True - runner.logger.info('Add additional L1 loss now!') - model.bbox_head.use_l1 = True - else: - # Once the restart is complete, we need to restore - # the initialization flag. - if self._restart_dataloader: - train_loader._DataLoader__initialized = True diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/mask/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/mask/__init__.py deleted file mode 100644 index 644a9b1d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/mask/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .mask_target import mask_target -from .structures import BaseInstanceMasks, BitmapMasks, PolygonMasks -from .utils import encode_mask_results, mask2bbox, split_combined_polys - -__all__ = [ - 'split_combined_polys', 'mask_target', 'BaseInstanceMasks', 'BitmapMasks', - 'PolygonMasks', 'encode_mask_results', 'mask2bbox' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/mask/mask_target.py b/cv/3d_detection/paconv/pytorch/mmdet/core/mask/mask_target.py deleted file mode 100644 index 273e7678..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/mask/mask_target.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from torch.nn.modules.utils import _pair - - -def mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list, - cfg): - """Compute mask target for positive proposals in multiple images. - - Args: - pos_proposals_list (list[Tensor]): Positive proposals in multiple - images. - pos_assigned_gt_inds_list (list[Tensor]): Assigned GT indices for each - positive proposals. - gt_masks_list (list[:obj:`BaseInstanceMasks`]): Ground truth masks of - each image. - cfg (dict): Config dict that specifies the mask size. - - Returns: - list[Tensor]: Mask target of each image. 
- - Example: - >>> import mmcv - >>> import mmdet - >>> from mmdet.core.mask import BitmapMasks - >>> from mmdet.core.mask.mask_target import * - >>> H, W = 17, 18 - >>> cfg = mmcv.Config({'mask_size': (13, 14)}) - >>> rng = np.random.RandomState(0) - >>> # Positive proposals (tl_x, tl_y, br_x, br_y) for each image - >>> pos_proposals_list = [ - >>> torch.Tensor([ - >>> [ 7.2425, 5.5929, 13.9414, 14.9541], - >>> [ 7.3241, 3.6170, 16.3850, 15.3102], - >>> ]), - >>> torch.Tensor([ - >>> [ 4.8448, 6.4010, 7.0314, 9.7681], - >>> [ 5.9790, 2.6989, 7.4416, 4.8580], - >>> [ 0.0000, 0.0000, 0.1398, 9.8232], - >>> ]), - >>> ] - >>> # Corresponding class index for each proposal for each image - >>> pos_assigned_gt_inds_list = [ - >>> torch.LongTensor([7, 0]), - >>> torch.LongTensor([5, 4, 1]), - >>> ] - >>> # Ground truth mask for each true object for each image - >>> gt_masks_list = [ - >>> BitmapMasks(rng.rand(8, H, W), height=H, width=W), - >>> BitmapMasks(rng.rand(6, H, W), height=H, width=W), - >>> ] - >>> mask_targets = mask_target( - >>> pos_proposals_list, pos_assigned_gt_inds_list, - >>> gt_masks_list, cfg) - >>> assert mask_targets.shape == (5,) + cfg['mask_size'] - """ - cfg_list = [cfg for _ in range(len(pos_proposals_list))] - mask_targets = map(mask_target_single, pos_proposals_list, - pos_assigned_gt_inds_list, gt_masks_list, cfg_list) - mask_targets = list(mask_targets) - if len(mask_targets) > 0: - mask_targets = torch.cat(mask_targets) - return mask_targets - - -def mask_target_single(pos_proposals, pos_assigned_gt_inds, gt_masks, cfg): - """Compute mask target for each positive proposal in the image. - - Args: - pos_proposals (Tensor): Positive proposals. - pos_assigned_gt_inds (Tensor): Assigned GT inds of positive proposals. - gt_masks (:obj:`BaseInstanceMasks`): GT masks in the format of Bitmap - or Polygon. - cfg (dict): Config dict that indicate the mask size. - - Returns: - Tensor: Mask target of each positive proposals in the image. 
- - Example: - >>> import mmcv - >>> import mmdet - >>> from mmdet.core.mask import BitmapMasks - >>> from mmdet.core.mask.mask_target import * # NOQA - >>> H, W = 32, 32 - >>> cfg = mmcv.Config({'mask_size': (7, 11)}) - >>> rng = np.random.RandomState(0) - >>> # Masks for each ground truth box (relative to the image) - >>> gt_masks_data = rng.rand(3, H, W) - >>> gt_masks = BitmapMasks(gt_masks_data, height=H, width=W) - >>> # Predicted positive boxes in one image - >>> pos_proposals = torch.FloatTensor([ - >>> [ 16.2, 5.5, 19.9, 20.9], - >>> [ 17.3, 13.6, 19.3, 19.3], - >>> [ 14.8, 16.4, 17.0, 23.7], - >>> [ 0.0, 0.0, 16.0, 16.0], - >>> [ 4.0, 0.0, 20.0, 16.0], - >>> ]) - >>> # For each predicted proposal, its assignment to a gt mask - >>> pos_assigned_gt_inds = torch.LongTensor([0, 1, 2, 1, 1]) - >>> mask_targets = mask_target_single( - >>> pos_proposals, pos_assigned_gt_inds, gt_masks, cfg) - >>> assert mask_targets.shape == (5,) + cfg['mask_size'] - """ - device = pos_proposals.device - mask_size = _pair(cfg.mask_size) - binarize = not cfg.get('soft_mask_target', False) - num_pos = pos_proposals.size(0) - if num_pos > 0: - proposals_np = pos_proposals.cpu().numpy() - maxh, maxw = gt_masks.height, gt_masks.width - proposals_np[:, [0, 2]] = np.clip(proposals_np[:, [0, 2]], 0, maxw) - proposals_np[:, [1, 3]] = np.clip(proposals_np[:, [1, 3]], 0, maxh) - pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy() - - mask_targets = gt_masks.crop_and_resize( - proposals_np, - mask_size, - device=device, - inds=pos_assigned_gt_inds, - binarize=binarize).to_ndarray() - - mask_targets = torch.from_numpy(mask_targets).float().to(device) - else: - mask_targets = pos_proposals.new_zeros((0, ) + mask_size) - - return mask_targets diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/mask/structures.py b/cv/3d_detection/paconv/pytorch/mmdet/core/mask/structures.py deleted file mode 100644 index a9d0ebb4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/mask/structures.py +++ /dev/null @@ -1,1102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -import cv2 -import mmcv -import numpy as np -import pycocotools.mask as maskUtils -import torch -from mmcv.ops.roi_align import roi_align - - -class BaseInstanceMasks(metaclass=ABCMeta): - """Base class for instance masks.""" - - @abstractmethod - def rescale(self, scale, interpolation='nearest'): - """Rescale masks as large as possible while keeping the aspect ratio. - For details can refer to `mmcv.imrescale`. - - Args: - scale (tuple[int]): The maximum size (h, w) of rescaled mask. - interpolation (str): Same as :func:`mmcv.imrescale`. - - Returns: - BaseInstanceMasks: The rescaled masks. - """ - - @abstractmethod - def resize(self, out_shape, interpolation='nearest'): - """Resize masks to the given out_shape. - - Args: - out_shape: Target (h, w) of resized mask. - interpolation (str): See :func:`mmcv.imresize`. - - Returns: - BaseInstanceMasks: The resized masks. - """ - - @abstractmethod - def flip(self, flip_direction='horizontal'): - """Flip masks alone the given direction. - - Args: - flip_direction (str): Either 'horizontal' or 'vertical'. - - Returns: - BaseInstanceMasks: The flipped masks. - """ - - @abstractmethod - def pad(self, out_shape, pad_val): - """Pad masks to the given size of (h, w). - - Args: - out_shape (tuple[int]): Target (h, w) of padded mask. - pad_val (int): The padded value. - - Returns: - BaseInstanceMasks: The padded masks. 
- """ - - @abstractmethod - def crop(self, bbox): - """Crop each mask by the given bbox. - - Args: - bbox (ndarray): Bbox in format [x1, y1, x2, y2], shape (4, ). - - Return: - BaseInstanceMasks: The cropped masks. - """ - - @abstractmethod - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device, - interpolation='bilinear', - binarize=True): - """Crop and resize masks by the given bboxes. - - This function is mainly used in mask targets computation. - It firstly align mask to bboxes by assigned_inds, then crop mask by the - assigned bbox and resize to the size of (mask_h, mask_w) - - Args: - bboxes (Tensor): Bboxes in format [x1, y1, x2, y2], shape (N, 4) - out_shape (tuple[int]): Target (h, w) of resized mask - inds (ndarray): Indexes to assign masks to each bbox, - shape (N,) and values should be between [0, num_masks - 1]. - device (str): Device of bboxes - interpolation (str): See `mmcv.imresize` - binarize (bool): if True fractional values are rounded to 0 or 1 - after the resize operation. if False and unsupported an error - will be raised. Defaults to True. - - Return: - BaseInstanceMasks: the cropped and resized masks. - """ - - @abstractmethod - def expand(self, expanded_h, expanded_w, top, left): - """see :class:`Expand`.""" - - @property - @abstractmethod - def areas(self): - """ndarray: areas of each instance.""" - - @abstractmethod - def to_ndarray(self): - """Convert masks to the format of ndarray. - - Return: - ndarray: Converted masks in the format of ndarray. - """ - - @abstractmethod - def to_tensor(self, dtype, device): - """Convert masks to the format of Tensor. - - Args: - dtype (str): Dtype of converted mask. - device (torch.device): Device of converted masks. - - Returns: - Tensor: Converted masks in the format of Tensor. - """ - - @abstractmethod - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Translate the masks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - fill_val (int | float): Border value. Default 0. - interpolation (str): Same as :func:`mmcv.imtranslate`. - - Returns: - Translated masks. - """ - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear the masks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - magnitude (int | float): The magnitude used for shear. - direction (str): The shear direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. Default 0. - interpolation (str): Same as in :func:`mmcv.imshear`. - - Returns: - ndarray: Sheared masks. - """ - - @abstractmethod - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """Rotate the masks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - angle (int | float): Rotation angle in degrees. Positive values - mean counter-clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the - rotation in source image. If not specified, the center of - the image will be used. - scale (int | float): Isotropic scale factor. - fill_val (int | float): Border value. Default 0 for masks. - - Returns: - Rotated masks. - """ - - -class BitmapMasks(BaseInstanceMasks): - """This class represents masks in the form of bitmaps. 
- - Args: - masks (ndarray): ndarray of masks in shape (N, H, W), where N is - the number of objects. - height (int): height of masks - width (int): width of masks - - Example: - >>> from mmdet.core.mask.structures import * # NOQA - >>> num_masks, H, W = 3, 32, 32 - >>> rng = np.random.RandomState(0) - >>> masks = (rng.rand(num_masks, H, W) > 0.1).astype(np.int) - >>> self = BitmapMasks(masks, height=H, width=W) - - >>> # demo crop_and_resize - >>> num_boxes = 5 - >>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes) - >>> out_shape = (14, 14) - >>> inds = torch.randint(0, len(self), size=(num_boxes,)) - >>> device = 'cpu' - >>> interpolation = 'bilinear' - >>> new = self.crop_and_resize( - ... bboxes, out_shape, inds, device, interpolation) - >>> assert len(new) == num_boxes - >>> assert new.height, new.width == out_shape - """ - - def __init__(self, masks, height, width): - self.height = height - self.width = width - if len(masks) == 0: - self.masks = np.empty((0, self.height, self.width), dtype=np.uint8) - else: - assert isinstance(masks, (list, np.ndarray)) - if isinstance(masks, list): - assert isinstance(masks[0], np.ndarray) - assert masks[0].ndim == 2 # (H, W) - else: - assert masks.ndim == 3 # (N, H, W) - - self.masks = np.stack(masks).reshape(-1, height, width) - assert self.masks.shape[1] == self.height - assert self.masks.shape[2] == self.width - - def __getitem__(self, index): - """Index the BitmapMask. - - Args: - index (int | ndarray): Indices in the format of integer or ndarray. - - Returns: - :obj:`BitmapMasks`: Indexed bitmap masks. - """ - masks = self.masks[index].reshape(-1, self.height, self.width) - return BitmapMasks(masks, self.height, self.width) - - def __iter__(self): - return iter(self.masks) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += f'num_masks={len(self.masks)}, ' - s += f'height={self.height}, ' - s += f'width={self.width})' - return s - - def __len__(self): - """Number of masks.""" - return len(self.masks) - - def rescale(self, scale, interpolation='nearest'): - """See :func:`BaseInstanceMasks.rescale`.""" - if len(self.masks) == 0: - new_w, new_h = mmcv.rescale_size((self.width, self.height), scale) - rescaled_masks = np.empty((0, new_h, new_w), dtype=np.uint8) - else: - rescaled_masks = np.stack([ - mmcv.imrescale(mask, scale, interpolation=interpolation) - for mask in self.masks - ]) - height, width = rescaled_masks.shape[1:] - return BitmapMasks(rescaled_masks, height, width) - - def resize(self, out_shape, interpolation='nearest'): - """See :func:`BaseInstanceMasks.resize`.""" - if len(self.masks) == 0: - resized_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - resized_masks = np.stack([ - mmcv.imresize( - mask, out_shape[::-1], interpolation=interpolation) - for mask in self.masks - ]) - return BitmapMasks(resized_masks, *out_shape) - - def flip(self, flip_direction='horizontal'): - """See :func:`BaseInstanceMasks.flip`.""" - assert flip_direction in ('horizontal', 'vertical', 'diagonal') - - if len(self.masks) == 0: - flipped_masks = self.masks - else: - flipped_masks = np.stack([ - mmcv.imflip(mask, direction=flip_direction) - for mask in self.masks - ]) - return BitmapMasks(flipped_masks, self.height, self.width) - - def pad(self, out_shape, pad_val=0): - """See :func:`BaseInstanceMasks.pad`.""" - if len(self.masks) == 0: - padded_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - padded_masks = np.stack([ - mmcv.impad(mask, shape=out_shape, pad_val=pad_val) - for mask in self.masks - ]) - return 
BitmapMasks(padded_masks, *out_shape) - - def crop(self, bbox): - """See :func:`BaseInstanceMasks.crop`.""" - assert isinstance(bbox, np.ndarray) - assert bbox.ndim == 1 - - # clip the boundary - bbox = bbox.copy() - bbox[0::2] = np.clip(bbox[0::2], 0, self.width) - bbox[1::2] = np.clip(bbox[1::2], 0, self.height) - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - - if len(self.masks) == 0: - cropped_masks = np.empty((0, h, w), dtype=np.uint8) - else: - cropped_masks = self.masks[:, y1:y1 + h, x1:x1 + w] - return BitmapMasks(cropped_masks, h, w) - - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device='cpu', - interpolation='bilinear', - binarize=True): - """See :func:`BaseInstanceMasks.crop_and_resize`.""" - if len(self.masks) == 0: - empty_masks = np.empty((0, *out_shape), dtype=np.uint8) - return BitmapMasks(empty_masks, *out_shape) - - # convert bboxes to tensor - if isinstance(bboxes, np.ndarray): - bboxes = torch.from_numpy(bboxes).to(device=device) - if isinstance(inds, np.ndarray): - inds = torch.from_numpy(inds).to(device=device) - - num_bbox = bboxes.shape[0] - fake_inds = torch.arange( - num_bbox, device=device).to(dtype=bboxes.dtype)[:, None] - rois = torch.cat([fake_inds, bboxes], dim=1) # Nx5 - rois = rois.to(device=device) - if num_bbox > 0: - gt_masks_th = torch.from_numpy(self.masks).to(device).index_select( - 0, inds).to(dtype=rois.dtype) - targets = roi_align(gt_masks_th[:, None, :, :], rois, out_shape, - 1.0, 0, 'avg', True).squeeze(1) - if binarize: - resized_masks = (targets >= 0.5).cpu().numpy() - else: - resized_masks = targets.cpu().numpy() - else: - resized_masks = [] - return BitmapMasks(resized_masks, *out_shape) - - def expand(self, expanded_h, expanded_w, top, left): - """See :func:`BaseInstanceMasks.expand`.""" - if len(self.masks) == 0: - expanded_mask = np.empty((0, expanded_h, expanded_w), - dtype=np.uint8) - else: - expanded_mask = np.zeros((len(self), expanded_h, expanded_w), - dtype=np.uint8) - expanded_mask[:, top:top + self.height, - left:left + self.width] = self.masks - return BitmapMasks(expanded_mask, expanded_h, expanded_w) - - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Translate the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - fill_val (int | float): Border value. Default 0 for masks. - interpolation (str): Same as :func:`mmcv.imtranslate`. - - Returns: - BitmapMasks: Translated BitmapMasks. 
- - Example: - >>> from mmdet.core.mask.structures import BitmapMasks - >>> self = BitmapMasks.random(dtype=np.uint8) - >>> out_shape = (32, 32) - >>> offset = 4 - >>> direction = 'horizontal' - >>> fill_val = 0 - >>> interpolation = 'bilinear' - >>> # Note, There seem to be issues when: - >>> # * out_shape is different than self's shape - >>> # * the mask dtype is not supported by cv2.AffineWarp - >>> new = self.translate(out_shape, offset, direction, fill_val, - >>> interpolation) - >>> assert len(new) == len(self) - >>> assert new.height, new.width == out_shape - """ - if len(self.masks) == 0: - translated_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - translated_masks = mmcv.imtranslate( - self.masks.transpose((1, 2, 0)), - offset, - direction, - border_value=fill_val, - interpolation=interpolation) - if translated_masks.ndim == 2: - translated_masks = translated_masks[:, :, None] - translated_masks = translated_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(translated_masks, *out_shape) - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - magnitude (int | float): The magnitude used for shear. - direction (str): The shear direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as in :func:`mmcv.imshear`. - - Returns: - BitmapMasks: The sheared masks. - """ - if len(self.masks) == 0: - sheared_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - sheared_masks = mmcv.imshear( - self.masks.transpose((1, 2, 0)), - magnitude, - direction, - border_value=border_value, - interpolation=interpolation) - if sheared_masks.ndim == 2: - sheared_masks = sheared_masks[:, :, None] - sheared_masks = sheared_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(sheared_masks, *out_shape) - - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """Rotate the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - angle (int | float): Rotation angle in degrees. Positive values - mean counter-clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the - rotation in source image. If not specified, the center of - the image will be used. - scale (int | float): Isotropic scale factor. - fill_val (int | float): Border value. Default 0 for masks. - - Returns: - BitmapMasks: Rotated BitmapMasks. 
- """ - if len(self.masks) == 0: - rotated_masks = np.empty((0, *out_shape), dtype=self.masks.dtype) - else: - rotated_masks = mmcv.imrotate( - self.masks.transpose((1, 2, 0)), - angle, - center=center, - scale=scale, - border_value=fill_val) - if rotated_masks.ndim == 2: - # case when only one mask, (h, w) - rotated_masks = rotated_masks[:, :, None] # (h, w, 1) - rotated_masks = rotated_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(rotated_masks, *out_shape) - - @property - def areas(self): - """See :py:attr:`BaseInstanceMasks.areas`.""" - return self.masks.sum((1, 2)) - - def to_ndarray(self): - """See :func:`BaseInstanceMasks.to_ndarray`.""" - return self.masks - - def to_tensor(self, dtype, device): - """See :func:`BaseInstanceMasks.to_tensor`.""" - return torch.tensor(self.masks, dtype=dtype, device=device) - - @classmethod - def random(cls, - num_masks=3, - height=32, - width=32, - dtype=np.uint8, - rng=None): - """Generate random bitmap masks for demo / testing purposes. - - Example: - >>> from mmdet.core.mask.structures import BitmapMasks - >>> self = BitmapMasks.random() - >>> print('self = {}'.format(self)) - self = BitmapMasks(num_masks=3, height=32, width=32) - """ - from mmdet.utils.util_random import ensure_rng - rng = ensure_rng(rng) - masks = (rng.rand(num_masks, height, width) > 0.1).astype(dtype) - self = cls(masks, height=height, width=width) - return self - - def get_bboxes(self): - num_masks = len(self) - boxes = np.zeros((num_masks, 4), dtype=np.float32) - x_any = self.masks.any(axis=1) - y_any = self.masks.any(axis=2) - for idx in range(num_masks): - x = np.where(x_any[idx, :])[0] - y = np.where(y_any[idx, :])[0] - if len(x) > 0 and len(y) > 0: - # use +1 for x_max and y_max so that the right and bottom - # boundary of instance masks are fully included by the box - boxes[idx, :] = np.array([x[0], y[0], x[-1] + 1, y[-1] + 1], - dtype=np.float32) - return boxes - - -class PolygonMasks(BaseInstanceMasks): - """This class represents masks in the form of polygons. - - Polygons is a list of three levels. The first level of the list - corresponds to objects, the second level to the polys that compose the - object, the third level to the poly coordinates - - Args: - masks (list[list[ndarray]]): The first level of the list - corresponds to objects, the second level to the polys that - compose the object, the third level to the poly coordinates - height (int): height of masks - width (int): width of masks - - Example: - >>> from mmdet.core.mask.structures import * # NOQA - >>> masks = [ - >>> [ np.array([0, 0, 10, 0, 10, 10., 0, 10, 0, 0]) ] - >>> ] - >>> height, width = 16, 16 - >>> self = PolygonMasks(masks, height, width) - - >>> # demo translate - >>> new = self.translate((16, 16), 4., direction='horizontal') - >>> assert np.all(new.masks[0][0][1::2] == masks[0][0][1::2]) - >>> assert np.all(new.masks[0][0][0::2] == masks[0][0][0::2] + 4) - - >>> # demo crop_and_resize - >>> num_boxes = 3 - >>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes) - >>> out_shape = (16, 16) - >>> inds = torch.randint(0, len(self), size=(num_boxes,)) - >>> device = 'cpu' - >>> interpolation = 'bilinear' - >>> new = self.crop_and_resize( - ... 
bboxes, out_shape, inds, device, interpolation) - >>> assert len(new) == num_boxes - >>> assert new.height, new.width == out_shape - """ - - def __init__(self, masks, height, width): - assert isinstance(masks, list) - if len(masks) > 0: - assert isinstance(masks[0], list) - assert isinstance(masks[0][0], np.ndarray) - - self.height = height - self.width = width - self.masks = masks - - def __getitem__(self, index): - """Index the polygon masks. - - Args: - index (ndarray | List): The indices. - - Returns: - :obj:`PolygonMasks`: The indexed polygon masks. - """ - if isinstance(index, np.ndarray): - index = index.tolist() - if isinstance(index, list): - masks = [self.masks[i] for i in index] - else: - try: - masks = self.masks[index] - except Exception: - raise ValueError( - f'Unsupported input of type {type(index)} for indexing!') - if len(masks) and isinstance(masks[0], np.ndarray): - masks = [masks] # ensure a list of three levels - return PolygonMasks(masks, self.height, self.width) - - def __iter__(self): - return iter(self.masks) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += f'num_masks={len(self.masks)}, ' - s += f'height={self.height}, ' - s += f'width={self.width})' - return s - - def __len__(self): - """Number of masks.""" - return len(self.masks) - - def rescale(self, scale, interpolation=None): - """see :func:`BaseInstanceMasks.rescale`""" - new_w, new_h = mmcv.rescale_size((self.width, self.height), scale) - if len(self.masks) == 0: - rescaled_masks = PolygonMasks([], new_h, new_w) - else: - rescaled_masks = self.resize((new_h, new_w)) - return rescaled_masks - - def resize(self, out_shape, interpolation=None): - """see :func:`BaseInstanceMasks.resize`""" - if len(self.masks) == 0: - resized_masks = PolygonMasks([], *out_shape) - else: - h_scale = out_shape[0] / self.height - w_scale = out_shape[1] / self.width - resized_masks = [] - for poly_per_obj in self.masks: - resized_poly = [] - for p in poly_per_obj: - p = p.copy() - p[0::2] = p[0::2] * w_scale - p[1::2] = p[1::2] * h_scale - resized_poly.append(p) - resized_masks.append(resized_poly) - resized_masks = PolygonMasks(resized_masks, *out_shape) - return resized_masks - - def flip(self, flip_direction='horizontal'): - """see :func:`BaseInstanceMasks.flip`""" - assert flip_direction in ('horizontal', 'vertical', 'diagonal') - if len(self.masks) == 0: - flipped_masks = PolygonMasks([], self.height, self.width) - else: - flipped_masks = [] - for poly_per_obj in self.masks: - flipped_poly_per_obj = [] - for p in poly_per_obj: - p = p.copy() - if flip_direction == 'horizontal': - p[0::2] = self.width - p[0::2] - elif flip_direction == 'vertical': - p[1::2] = self.height - p[1::2] - else: - p[0::2] = self.width - p[0::2] - p[1::2] = self.height - p[1::2] - flipped_poly_per_obj.append(p) - flipped_masks.append(flipped_poly_per_obj) - flipped_masks = PolygonMasks(flipped_masks, self.height, - self.width) - return flipped_masks - - def crop(self, bbox): - """see :func:`BaseInstanceMasks.crop`""" - assert isinstance(bbox, np.ndarray) - assert bbox.ndim == 1 - - # clip the boundary - bbox = bbox.copy() - bbox[0::2] = np.clip(bbox[0::2], 0, self.width) - bbox[1::2] = np.clip(bbox[1::2], 0, self.height) - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - - if len(self.masks) == 0: - cropped_masks = PolygonMasks([], h, w) - else: - cropped_masks = [] - for poly_per_obj in self.masks: - cropped_poly_per_obj = [] - for p in poly_per_obj: - # pycocotools will clip the boundary - p = 
p.copy() - p[0::2] = p[0::2] - bbox[0] - p[1::2] = p[1::2] - bbox[1] - cropped_poly_per_obj.append(p) - cropped_masks.append(cropped_poly_per_obj) - cropped_masks = PolygonMasks(cropped_masks, h, w) - return cropped_masks - - def pad(self, out_shape, pad_val=0): - """padding has no effect on polygons`""" - return PolygonMasks(self.masks, *out_shape) - - def expand(self, *args, **kwargs): - """TODO: Add expand for polygon""" - raise NotImplementedError - - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device='cpu', - interpolation='bilinear', - binarize=True): - """see :func:`BaseInstanceMasks.crop_and_resize`""" - out_h, out_w = out_shape - if len(self.masks) == 0: - return PolygonMasks([], out_h, out_w) - - if not binarize: - raise ValueError('Polygons are always binary, ' - 'setting binarize=False is unsupported') - - resized_masks = [] - for i in range(len(bboxes)): - mask = self.masks[inds[i]] - bbox = bboxes[i, :] - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - h_scale = out_h / max(h, 0.1) # avoid too large scale - w_scale = out_w / max(w, 0.1) - - resized_mask = [] - for p in mask: - p = p.copy() - # crop - # pycocotools will clip the boundary - p[0::2] = p[0::2] - bbox[0] - p[1::2] = p[1::2] - bbox[1] - - # resize - p[0::2] = p[0::2] * w_scale - p[1::2] = p[1::2] * h_scale - resized_mask.append(p) - resized_masks.append(resized_mask) - return PolygonMasks(resized_masks, *out_shape) - - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=None, - interpolation=None): - """Translate the PolygonMasks. - - Example: - >>> self = PolygonMasks.random(dtype=np.int) - >>> out_shape = (self.height, self.width) - >>> new = self.translate(out_shape, 4., direction='horizontal') - >>> assert np.all(new.masks[0][0][1::2] == self.masks[0][0][1::2]) - >>> assert np.all(new.masks[0][0][0::2] == self.masks[0][0][0::2] + 4) # noqa: E501 - """ - assert fill_val is None or fill_val == 0, 'Here fill_val is not '\ - f'used, and defaultly should be None or 0. got {fill_val}.' 
- if len(self.masks) == 0: - translated_masks = PolygonMasks([], *out_shape) - else: - translated_masks = [] - for poly_per_obj in self.masks: - translated_poly_per_obj = [] - for p in poly_per_obj: - p = p.copy() - if direction == 'horizontal': - p[0::2] = np.clip(p[0::2] + offset, 0, out_shape[1]) - elif direction == 'vertical': - p[1::2] = np.clip(p[1::2] + offset, 0, out_shape[0]) - translated_poly_per_obj.append(p) - translated_masks.append(translated_poly_per_obj) - translated_masks = PolygonMasks(translated_masks, *out_shape) - return translated_masks - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """See :func:`BaseInstanceMasks.shear`.""" - if len(self.masks) == 0: - sheared_masks = PolygonMasks([], *out_shape) - else: - sheared_masks = [] - if direction == 'horizontal': - shear_matrix = np.stack([[1, magnitude], - [0, 1]]).astype(np.float32) - elif direction == 'vertical': - shear_matrix = np.stack([[1, 0], [magnitude, - 1]]).astype(np.float32) - for poly_per_obj in self.masks: - sheared_poly = [] - for p in poly_per_obj: - p = np.stack([p[0::2], p[1::2]], axis=0) # [2, n] - new_coords = np.matmul(shear_matrix, p) # [2, n] - new_coords[0, :] = np.clip(new_coords[0, :], 0, - out_shape[1]) - new_coords[1, :] = np.clip(new_coords[1, :], 0, - out_shape[0]) - sheared_poly.append( - new_coords.transpose((1, 0)).reshape(-1)) - sheared_masks.append(sheared_poly) - sheared_masks = PolygonMasks(sheared_masks, *out_shape) - return sheared_masks - - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """See :func:`BaseInstanceMasks.rotate`.""" - if len(self.masks) == 0: - rotated_masks = PolygonMasks([], *out_shape) - else: - rotated_masks = [] - rotate_matrix = cv2.getRotationMatrix2D(center, -angle, scale) - for poly_per_obj in self.masks: - rotated_poly = [] - for p in poly_per_obj: - p = p.copy() - coords = np.stack([p[0::2], p[1::2]], axis=1) # [n, 2] - # pad 1 to convert from format [x, y] to homogeneous - # coordinates format [x, y, 1] - coords = np.concatenate( - (coords, np.ones((coords.shape[0], 1), coords.dtype)), - axis=1) # [n, 3] - rotated_coords = np.matmul( - rotate_matrix[None, :, :], - coords[:, :, None])[..., 0] # [n, 2, 1] -> [n, 2] - rotated_coords[:, 0] = np.clip(rotated_coords[:, 0], 0, - out_shape[1]) - rotated_coords[:, 1] = np.clip(rotated_coords[:, 1], 0, - out_shape[0]) - rotated_poly.append(rotated_coords.reshape(-1)) - rotated_masks.append(rotated_poly) - rotated_masks = PolygonMasks(rotated_masks, *out_shape) - return rotated_masks - - def to_bitmap(self): - """convert polygon masks to bitmap masks.""" - bitmap_masks = self.to_ndarray() - return BitmapMasks(bitmap_masks, self.height, self.width) - - @property - def areas(self): - """Compute areas of masks. - - This func is modified from `detectron2 - `_. - The function only works with Polygons using the shoelace formula. - - Return: - ndarray: areas of each instance - """ # noqa: W501 - area = [] - for polygons_per_obj in self.masks: - area_per_obj = 0 - for p in polygons_per_obj: - area_per_obj += self._polygon_area(p[0::2], p[1::2]) - area.append(area_per_obj) - return np.asarray(area) - - def _polygon_area(self, x, y): - """Compute the area of a component of a polygon. 
- - Using the shoelace formula: - https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates - - Args: - x (ndarray): x coordinates of the component - y (ndarray): y coordinates of the component - - Return: - float: the are of the component - """ # noqa: 501 - return 0.5 * np.abs( - np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))) - - def to_ndarray(self): - """Convert masks to the format of ndarray.""" - if len(self.masks) == 0: - return np.empty((0, self.height, self.width), dtype=np.uint8) - bitmap_masks = [] - for poly_per_obj in self.masks: - bitmap_masks.append( - polygon_to_bitmap(poly_per_obj, self.height, self.width)) - return np.stack(bitmap_masks) - - def to_tensor(self, dtype, device): - """See :func:`BaseInstanceMasks.to_tensor`.""" - if len(self.masks) == 0: - return torch.empty((0, self.height, self.width), - dtype=dtype, - device=device) - ndarray_masks = self.to_ndarray() - return torch.tensor(ndarray_masks, dtype=dtype, device=device) - - @classmethod - def random(cls, - num_masks=3, - height=32, - width=32, - n_verts=5, - dtype=np.float32, - rng=None): - """Generate random polygon masks for demo / testing purposes. - - Adapted from [1]_ - - References: - .. [1] https://gitlab.kitware.com/computer-vision/kwimage/-/blob/928cae35ca8/kwimage/structs/polygon.py#L379 # noqa: E501 - - Example: - >>> from mmdet.core.mask.structures import PolygonMasks - >>> self = PolygonMasks.random() - >>> print('self = {}'.format(self)) - """ - from mmdet.utils.util_random import ensure_rng - rng = ensure_rng(rng) - - def _gen_polygon(n, irregularity, spikeyness): - """Creates the polygon by sampling points on a circle around the - centre. Random noise is added by varying the angular spacing - between sequential points, and by varying the radial distance of - each point from the centre. - - Based on original code by Mike Ounsworth - - Args: - n (int): number of vertices - irregularity (float): [0,1] indicating how much variance there - is in the angular spacing of vertices. [0,1] will map to - [0, 2pi/numberOfVerts] - spikeyness (float): [0,1] indicating how much variance there is - in each vertex from the circle of radius aveRadius. [0,1] - will map to [0, aveRadius] - - Returns: - a list of vertices, in CCW order. 
- """ - from scipy.stats import truncnorm - - # Generate around the unit circle - cx, cy = (0.0, 0.0) - radius = 1 - - tau = np.pi * 2 - - irregularity = np.clip(irregularity, 0, 1) * 2 * np.pi / n - spikeyness = np.clip(spikeyness, 1e-9, 1) - - # generate n angle steps - lower = (tau / n) - irregularity - upper = (tau / n) + irregularity - angle_steps = rng.uniform(lower, upper, n) - - # normalize the steps so that point 0 and point n+1 are the same - k = angle_steps.sum() / (2 * np.pi) - angles = (angle_steps / k).cumsum() + rng.uniform(0, tau) - - # Convert high and low values to be wrt the standard normal range - # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncnorm.html - low = 0 - high = 2 * radius - mean = radius - std = spikeyness - a = (low - mean) / std - b = (high - mean) / std - tnorm = truncnorm(a=a, b=b, loc=mean, scale=std) - - # now generate the points - radii = tnorm.rvs(n, random_state=rng) - x_pts = cx + radii * np.cos(angles) - y_pts = cy + radii * np.sin(angles) - - points = np.hstack([x_pts[:, None], y_pts[:, None]]) - - # Scale to 0-1 space - points = points - points.min(axis=0) - points = points / points.max(axis=0) - - # Randomly place within 0-1 space - points = points * (rng.rand() * .8 + .2) - min_pt = points.min(axis=0) - max_pt = points.max(axis=0) - - high = (1 - max_pt) - low = (0 - min_pt) - offset = (rng.rand(2) * (high - low)) + low - points = points + offset - return points - - def _order_vertices(verts): - """ - References: - https://stackoverflow.com/questions/1709283/how-can-i-sort-a-coordinate-list-for-a-rectangle-counterclockwise - """ - mlat = verts.T[0].sum() / len(verts) - mlng = verts.T[1].sum() / len(verts) - - tau = np.pi * 2 - angle = (np.arctan2(mlat - verts.T[0], verts.T[1] - mlng) + - tau) % tau - sortx = angle.argsort() - verts = verts.take(sortx, axis=0) - return verts - - # Generate a random exterior for each requested mask - masks = [] - for _ in range(num_masks): - exterior = _order_vertices(_gen_polygon(n_verts, 0.9, 0.9)) - exterior = (exterior * [(width, height)]).astype(dtype) - masks.append([exterior.ravel()]) - - self = cls(masks, height, width) - return self - - def get_bboxes(self): - num_masks = len(self) - boxes = np.zeros((num_masks, 4), dtype=np.float32) - for idx, poly_per_obj in enumerate(self.masks): - # simply use a number that is big enough for comparison with - # coordinates - xy_min = np.array([self.width * 2, self.height * 2], - dtype=np.float32) - xy_max = np.zeros(2, dtype=np.float32) - for p in poly_per_obj: - xy = np.array(p).reshape(-1, 2).astype(np.float32) - xy_min = np.minimum(xy_min, np.min(xy, axis=0)) - xy_max = np.maximum(xy_max, np.max(xy, axis=0)) - boxes[idx, :2] = xy_min - boxes[idx, 2:] = xy_max - - return boxes - - -def polygon_to_bitmap(polygons, height, width): - """Convert masks from the form of polygons to bitmaps. - - Args: - polygons (list[ndarray]): masks in polygon representation - height (int): mask height - width (int): mask width - - Return: - ndarray: the converted masks in bitmap representation - """ - rles = maskUtils.frPyObjects(polygons, height, width) - rle = maskUtils.merge(rles) - bitmap_mask = maskUtils.decode(rle).astype(np.bool) - return bitmap_mask - - -def bitmap_to_polygon(bitmap): - """Convert masks from the form of bitmaps to polygons. - - Args: - bitmap (ndarray): masks in bitmap representation. - - Return: - list[ndarray]: the converted mask in polygon representation. - bool: whether the mask has holes. 
- """ - bitmap = np.ascontiguousarray(bitmap).astype(np.uint8) - # cv2.RETR_CCOMP: retrieves all of the contours and organizes them - # into a two-level hierarchy. At the top level, there are external - # boundaries of the components. At the second level, there are - # boundaries of the holes. If there is another contour inside a hole - # of a connected component, it is still put at the top level. - # cv2.CHAIN_APPROX_NONE: stores absolutely all the contour points. - outs = cv2.findContours(bitmap, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE) - contours = outs[-2] - hierarchy = outs[-1] - if hierarchy is None: - return [], False - # hierarchy[i]: 4 elements, for the indexes of next, previous, - # parent, or nested contours. If there is no corresponding contour, - # it will be -1. - with_hole = (hierarchy.reshape(-1, 4)[:, 3] >= 0).any() - contours = [c.reshape(-1, 2) for c in contours] - return contours, with_hole diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/mask/utils.py b/cv/3d_detection/paconv/pytorch/mmdet/core/mask/utils.py deleted file mode 100644 index 90544b34..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/mask/utils.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import pycocotools.mask as mask_util -import torch - - -def split_combined_polys(polys, poly_lens, polys_per_mask): - """Split the combined 1-D polys into masks. - - A mask is represented as a list of polys, and a poly is represented as - a 1-D array. In dataset, all masks are concatenated into a single 1-D - tensor. Here we need to split the tensor into original representations. - - Args: - polys (list): a list (length = image num) of 1-D tensors - poly_lens (list): a list (length = image num) of poly length - polys_per_mask (list): a list (length = image num) of poly number - of each mask - - Returns: - list: a list (length = image num) of list (length = mask num) of \ - list (length = poly num) of numpy array. - """ - mask_polys_list = [] - for img_id in range(len(polys)): - polys_single = polys[img_id] - polys_lens_single = poly_lens[img_id].tolist() - polys_per_mask_single = polys_per_mask[img_id].tolist() - - split_polys = mmcv.slice_list(polys_single, polys_lens_single) - mask_polys = mmcv.slice_list(split_polys, polys_per_mask_single) - mask_polys_list.append(mask_polys) - return mask_polys_list - - -# TODO: move this function to more proper place -def encode_mask_results(mask_results): - """Encode bitmap mask to RLE code. - - Args: - mask_results (list | tuple[list]): bitmap mask results. - In mask scoring rcnn, mask_results is a tuple of (segm_results, - segm_cls_score). - - Returns: - list | tuple: RLE encoded mask. - """ - if isinstance(mask_results, tuple): # mask scoring - cls_segms, cls_mask_scores = mask_results - else: - cls_segms = mask_results - num_classes = len(cls_segms) - encoded_mask_results = [[] for _ in range(num_classes)] - for i in range(len(cls_segms)): - for cls_segm in cls_segms[i]: - encoded_mask_results[i].append( - mask_util.encode( - np.array( - cls_segm[:, :, np.newaxis], order='F', - dtype='uint8'))[0]) # encoded with RLE - if isinstance(mask_results, tuple): - return encoded_mask_results, cls_mask_scores - else: - return encoded_mask_results - - -def mask2bbox(masks): - """Obtain tight bounding boxes of binary masks. - - Args: - masks (Tensor): Binary mask of shape (n, h, w). - - Returns: - Tensor: Bboxe with shape (n, 4) of \ - positive region in binary mask. 
- """ - N = masks.shape[0] - bboxes = masks.new_zeros((N, 4), dtype=torch.float32) - x_any = torch.any(masks, dim=1) - y_any = torch.any(masks, dim=2) - for i in range(N): - x = torch.where(x_any[i, :])[0] - y = torch.where(y_any[i, :])[0] - if len(x) > 0 and len(y) > 0: - bboxes[i, :] = bboxes.new_tensor( - [x[0], y[0], x[-1] + 1, y[-1] + 1]) - - return bboxes diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/__init__.py deleted file mode 100644 index 00376bd4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .bbox_nms import fast_nms, multiclass_nms -from .matrix_nms import mask_matrix_nms -from .merge_augs import (merge_aug_bboxes, merge_aug_masks, - merge_aug_proposals, merge_aug_scores) - -__all__ = [ - 'multiclass_nms', 'merge_aug_proposals', 'merge_aug_bboxes', - 'merge_aug_scores', 'merge_aug_masks', 'mask_matrix_nms', 'fast_nms' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/bbox_nms.py b/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/bbox_nms.py deleted file mode 100644 index 4fcf57bb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/bbox_nms.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops.nms import batched_nms - -from mmdet.core.bbox.iou_calculators import bbox_overlaps - - -def multiclass_nms(multi_bboxes, - multi_scores, - score_thr, - nms_cfg, - max_num=-1, - score_factors=None, - return_inds=False): - """NMS for multi-class bboxes. - - Args: - multi_bboxes (Tensor): shape (n, #class*4) or (n, 4) - multi_scores (Tensor): shape (n, #class), where the last column - contains scores of the background class, but this will be ignored. - score_thr (float): bbox threshold, bboxes with scores lower than it - will not be considered. - nms_cfg (dict): a dict that contains the arguments of nms operations - max_num (int, optional): if there are more than max_num bboxes after - NMS, only top max_num will be kept. Default to -1. - score_factors (Tensor, optional): The factors multiplied to scores - before applying NMS. Default to None. - return_inds (bool, optional): Whether return the indices of kept - bboxes. Default to False. - - Returns: - tuple: (dets, labels, indices (optional)), tensors of shape (k, 5), - (k), and (k). Dets are boxes with scores. Labels are 0-based. 
- """ - num_classes = multi_scores.size(1) - 1 - # exclude background category - if multi_bboxes.shape[1] > 4: - bboxes = multi_bboxes.view(multi_scores.size(0), -1, 4) - else: - bboxes = multi_bboxes[:, None].expand( - multi_scores.size(0), num_classes, 4) - - scores = multi_scores[:, :-1] - - labels = torch.arange(num_classes, dtype=torch.long, device=scores.device) - labels = labels.view(1, -1).expand_as(scores) - - bboxes = bboxes.reshape(-1, 4) - scores = scores.reshape(-1) - labels = labels.reshape(-1) - - if not torch.onnx.is_in_onnx_export(): - # NonZero not supported in TensorRT - # remove low scoring boxes - valid_mask = scores > score_thr - # multiply score_factor after threshold to preserve more bboxes, improve - # mAP by 1% for YOLOv3 - if score_factors is not None: - # expand the shape to match original shape of score - score_factors = score_factors.view(-1, 1).expand( - multi_scores.size(0), num_classes) - score_factors = score_factors.reshape(-1) - scores = scores * score_factors - - if not torch.onnx.is_in_onnx_export(): - # NonZero not supported in TensorRT - inds = valid_mask.nonzero(as_tuple=False).squeeze(1) - bboxes, scores, labels = bboxes[inds], scores[inds], labels[inds] - else: - # TensorRT NMS plugin has invalid output filled with -1 - # add dummy data to make detection output correct. - bboxes = torch.cat([bboxes, bboxes.new_zeros(1, 4)], dim=0) - scores = torch.cat([scores, scores.new_zeros(1)], dim=0) - labels = torch.cat([labels, labels.new_zeros(1)], dim=0) - - if bboxes.numel() == 0: - if torch.onnx.is_in_onnx_export(): - raise RuntimeError('[ONNX Error] Can not record NMS ' - 'as it has not been executed this time') - dets = torch.cat([bboxes, scores[:, None]], -1) - if return_inds: - return dets, labels, inds - else: - return dets, labels - - dets, keep = batched_nms(bboxes, scores, labels, nms_cfg) - - if max_num > 0: - dets = dets[:max_num] - keep = keep[:max_num] - - if return_inds: - return dets, labels[keep], inds[keep] - else: - return dets, labels[keep] - - -def fast_nms(multi_bboxes, - multi_scores, - multi_coeffs, - score_thr, - iou_thr, - top_k, - max_num=-1): - """Fast NMS in `YOLACT `_. - - Fast NMS allows already-removed detections to suppress other detections so - that every instance can be decided to be kept or discarded in parallel, - which is not possible in traditional NMS. This relaxation allows us to - implement Fast NMS entirely in standard GPU-accelerated matrix operations. - - Args: - multi_bboxes (Tensor): shape (n, #class*4) or (n, 4) - multi_scores (Tensor): shape (n, #class+1), where the last column - contains scores of the background class, but this will be ignored. - multi_coeffs (Tensor): shape (n, #class*coeffs_dim). - score_thr (float): bbox threshold, bboxes with scores lower than it - will not be considered. - iou_thr (float): IoU threshold to be considered as conflicted. - top_k (int): if there are more than top_k bboxes before NMS, - only top top_k will be kept. - max_num (int): if there are more than max_num bboxes after NMS, - only top max_num will be kept. If -1, keep all the bboxes. - Default: -1. - - Returns: - tuple: (dets, labels, coefficients), tensors of shape (k, 5), (k, 1), - and (k, coeffs_dim). Dets are boxes with scores. - Labels are 0-based. 
- """ - - scores = multi_scores[:, :-1].t() # [#class, n] - scores, idx = scores.sort(1, descending=True) - - idx = idx[:, :top_k].contiguous() - scores = scores[:, :top_k] # [#class, topk] - num_classes, num_dets = idx.size() - boxes = multi_bboxes[idx.view(-1), :].view(num_classes, num_dets, 4) - coeffs = multi_coeffs[idx.view(-1), :].view(num_classes, num_dets, -1) - - iou = bbox_overlaps(boxes, boxes) # [#class, topk, topk] - iou.triu_(diagonal=1) - iou_max, _ = iou.max(dim=1) - - # Now just filter out the ones higher than the threshold - keep = iou_max <= iou_thr - - # Second thresholding introduces 0.2 mAP gain at negligible time cost - keep *= scores > score_thr - - # Assign each kept detection to its corresponding class - classes = torch.arange( - num_classes, device=boxes.device)[:, None].expand_as(keep) - classes = classes[keep] - - boxes = boxes[keep] - coeffs = coeffs[keep] - scores = scores[keep] - - # Only keep the top max_num highest scores across all classes - scores, idx = scores.sort(0, descending=True) - if max_num > 0: - idx = idx[:max_num] - scores = scores[:max_num] - - classes = classes[idx] - boxes = boxes[idx] - coeffs = coeffs[idx] - - cls_dets = torch.cat([boxes, scores[:, None]], dim=1) - return cls_dets, classes, coeffs diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/matrix_nms.py b/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/matrix_nms.py deleted file mode 100644 index 9dc8c4f7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/matrix_nms.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def mask_matrix_nms(masks, - labels, - scores, - filter_thr=-1, - nms_pre=-1, - max_num=-1, - kernel='gaussian', - sigma=2.0, - mask_area=None): - """Matrix NMS for multi-class masks. - - Args: - masks (Tensor): Has shape (num_instances, h, w) - labels (Tensor): Labels of corresponding masks, - has shape (num_instances,). - scores (Tensor): Mask scores of corresponding masks, - has shape (num_instances). - filter_thr (float): Score threshold to filter the masks - after matrix nms. Default: -1, which means do not - use filter_thr. - nms_pre (int): The max number of instances to do the matrix nms. - Default: -1, which means do not use nms_pre. - max_num (int, optional): If there are more than max_num masks after - matrix, only top max_num will be kept. Default: -1, which means - do not use max_num. - kernel (str): 'linear' or 'gaussian'. - sigma (float): std in gaussian method. - mask_area (Tensor): The sum of seg_masks. - - Returns: - tuple(Tensor): Processed mask results. - - - scores (Tensor): Updated scores, has shape (n,). - - labels (Tensor): Remained labels, has shape (n,). - - masks (Tensor): Remained masks, has shape (n, w, h). - - keep_inds (Tensor): The indices number of - the remaining mask in the input mask, has shape (n,). 
- """ - assert len(labels) == len(masks) == len(scores) - if len(labels) == 0: - return scores.new_zeros(0), labels.new_zeros(0), masks.new_zeros( - 0, *masks.shape[-2:]), labels.new_zeros(0) - if mask_area is None: - mask_area = masks.sum((1, 2)).float() - else: - assert len(masks) == len(mask_area) - - # sort and keep top nms_pre - scores, sort_inds = torch.sort(scores, descending=True) - - keep_inds = sort_inds - if nms_pre > 0 and len(sort_inds) > nms_pre: - sort_inds = sort_inds[:nms_pre] - keep_inds = keep_inds[:nms_pre] - scores = scores[:nms_pre] - masks = masks[sort_inds] - mask_area = mask_area[sort_inds] - labels = labels[sort_inds] - - num_masks = len(labels) - flatten_masks = masks.reshape(num_masks, -1).float() - # inter. - inter_matrix = torch.mm(flatten_masks, flatten_masks.transpose(1, 0)) - expanded_mask_area = mask_area.expand(num_masks, num_masks) - # Upper triangle iou matrix. - iou_matrix = (inter_matrix / - (expanded_mask_area + expanded_mask_area.transpose(1, 0) - - inter_matrix)).triu(diagonal=1) - # label_specific matrix. - expanded_labels = labels.expand(num_masks, num_masks) - # Upper triangle label matrix. - label_matrix = (expanded_labels == expanded_labels.transpose( - 1, 0)).triu(diagonal=1) - - # IoU compensation - compensate_iou, _ = (iou_matrix * label_matrix).max(0) - compensate_iou = compensate_iou.expand(num_masks, - num_masks).transpose(1, 0) - - # IoU decay - decay_iou = iou_matrix * label_matrix - - # Calculate the decay_coefficient - if kernel == 'gaussian': - decay_matrix = torch.exp(-1 * sigma * (decay_iou**2)) - compensate_matrix = torch.exp(-1 * sigma * (compensate_iou**2)) - decay_coefficient, _ = (decay_matrix / compensate_matrix).min(0) - elif kernel == 'linear': - decay_matrix = (1 - decay_iou) / (1 - compensate_iou) - decay_coefficient, _ = decay_matrix.min(0) - else: - raise NotImplementedError( - f'{kernel} kernel is not supported in matrix nms!') - # update the score. - scores = scores * decay_coefficient - - if filter_thr > 0: - keep = scores >= filter_thr - keep_inds = keep_inds[keep] - if not keep.any(): - return scores.new_zeros(0), labels.new_zeros(0), masks.new_zeros( - 0, *masks.shape[-2:]), labels.new_zeros(0) - masks = masks[keep] - scores = scores[keep] - labels = labels[keep] - - # sort and keep top max_num - scores, sort_inds = torch.sort(scores, descending=True) - keep_inds = keep_inds[sort_inds] - if max_num > 0 and len(sort_inds) > max_num: - sort_inds = sort_inds[:max_num] - keep_inds = keep_inds[:max_num] - scores = scores[:max_num] - masks = masks[sort_inds] - labels = labels[sort_inds] - - return scores, labels, masks, keep_inds diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/merge_augs.py b/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/merge_augs.py deleted file mode 100644 index 2ac4603a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/post_processing/merge_augs.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import numpy as np -import torch -from mmcv import ConfigDict -from mmcv.ops import nms - -from ..bbox import bbox_mapping_back - - -def merge_aug_proposals(aug_proposals, img_metas, cfg): - """Merge augmented proposals (multiscale, flip, etc.) - - Args: - aug_proposals (list[Tensor]): proposals from different testing - schemes, shape (n, 5). Note that they are not rescaled to the - original image size. 
- - img_metas (list[dict]): list of image info dict where each dict has: - 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - cfg (dict): rpn test config. - - Returns: - Tensor: shape (n, 4), proposals corresponding to original image scale. - """ - - cfg = copy.deepcopy(cfg) - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You set max_num and ' \ - f'max_per_img at the same time, but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - f'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set ' \ - f'iou_threshold in nms and ' \ - f'nms_thr at the same time, but get ' \ - f'{cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the nms_thr ' \ - f'which will be deprecated.' - - recovered_proposals = [] - for proposals, img_info in zip(aug_proposals, img_metas): - img_shape = img_info['img_shape'] - scale_factor = img_info['scale_factor'] - flip = img_info['flip'] - flip_direction = img_info['flip_direction'] - _proposals = proposals.clone() - _proposals[:, :4] = bbox_mapping_back(_proposals[:, :4], img_shape, - scale_factor, flip, - flip_direction) - recovered_proposals.append(_proposals) - aug_proposals = torch.cat(recovered_proposals, dim=0) - merged_proposals, _ = nms(aug_proposals[:, :4].contiguous(), - aug_proposals[:, -1].contiguous(), - cfg.nms.iou_threshold) - scores = merged_proposals[:, 4] - _, order = scores.sort(0, descending=True) - num = min(cfg.max_per_img, merged_proposals.shape[0]) - order = order[:num] - merged_proposals = merged_proposals[order, :] - return merged_proposals - - -def merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg): - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - img_shapes (list[Tensor]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. 
- - Returns: - tuple: (bboxes, scores) - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.stack(recovered_bboxes).mean(dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.stack(aug_scores).mean(dim=0) - return bboxes, scores - - -def merge_aug_scores(aug_scores): - """Merge augmented bbox scores.""" - if isinstance(aug_scores[0], torch.Tensor): - return torch.mean(torch.stack(aug_scores), dim=0) - else: - return np.mean(aug_scores, axis=0) - - -def merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None): - """Merge augmented mask prediction. - - Args: - aug_masks (list[ndarray]): shape (n, #class, h, w) - img_shapes (list[ndarray]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. - - Returns: - tuple: (bboxes, scores) - """ - recovered_masks = [] - for mask, img_info in zip(aug_masks, img_metas): - flip = img_info[0]['flip'] - if flip: - flip_direction = img_info[0]['flip_direction'] - if flip_direction == 'horizontal': - mask = mask[:, :, :, ::-1] - elif flip_direction == 'vertical': - mask = mask[:, :, ::-1, :] - elif flip_direction == 'diagonal': - mask = mask[:, :, :, ::-1] - mask = mask[:, :, ::-1, :] - else: - raise ValueError( - f"Invalid flipping direction '{flip_direction}'") - recovered_masks.append(mask) - - if weights is None: - merged_masks = np.mean(recovered_masks, axis=0) - else: - merged_masks = np.average( - np.array(recovered_masks), axis=0, weights=np.array(weights)) - return merged_masks diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/utils/__init__.py deleted file mode 100644 index 3f0d0708..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/utils/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .dist_utils import (DistOptimizerHook, all_reduce_dict, allreduce_grads, - reduce_mean, sync_random_seed) -from .misc import (center_of_mass, filter_scores_and_topk, flip_tensor, - generate_coordinate, mask2ndarray, multi_apply, - select_single_mlvl, unmap) - -__all__ = [ - 'allreduce_grads', 'DistOptimizerHook', 'reduce_mean', 'multi_apply', - 'unmap', 'mask2ndarray', 'flip_tensor', 'all_reduce_dict', - 'center_of_mass', 'generate_coordinate', 'select_single_mlvl', - 'filter_scores_and_topk', 'sync_random_seed' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/utils/dist_utils.py b/cv/3d_detection/paconv/pytorch/mmdet/core/utils/dist_utils.py deleted file mode 100644 index 8760774f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/utils/dist_utils.py +++ /dev/null @@ -1,193 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import functools -import pickle -import warnings -from collections import OrderedDict - -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import OptimizerHook, get_dist_info -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - - -def _allreduce_coalesced(tensors, world_size, bucket_size_mb=-1): - if bucket_size_mb > 0: - bucket_size_bytes = bucket_size_mb * 1024 * 1024 - buckets = _take_tensors(tensors, bucket_size_bytes) - else: - buckets = OrderedDict() - for tensor in tensors: - tp = tensor.type() - if tp not in buckets: - buckets[tp] = [] - buckets[tp].append(tensor) - buckets = buckets.values() - - for bucket in buckets: - flat_tensors = _flatten_dense_tensors(bucket) - dist.all_reduce(flat_tensors) - flat_tensors.div_(world_size) - for tensor, synced in zip( - bucket, _unflatten_dense_tensors(flat_tensors, bucket)): - tensor.copy_(synced) - - -def allreduce_grads(params, coalesce=True, bucket_size_mb=-1): - """Allreduce gradients. - - Args: - params (list[torch.Parameters]): List of parameters of a model - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. - """ - grads = [ - param.grad.data for param in params - if param.requires_grad and param.grad is not None - ] - world_size = dist.get_world_size() - if coalesce: - _allreduce_coalesced(grads, world_size, bucket_size_mb) - else: - for tensor in grads: - dist.all_reduce(tensor.div_(world_size)) - - -class DistOptimizerHook(OptimizerHook): - """Deprecated optimizer hook for distributed training.""" - - def __init__(self, *args, **kwargs): - warnings.warn('"DistOptimizerHook" is deprecated, please switch to' - '"mmcv.runner.OptimizerHook".') - super().__init__(*args, **kwargs) - - -def reduce_mean(tensor): - """"Obtain the mean of tensor on different GPUs.""" - if not (dist.is_available() and dist.is_initialized()): - return tensor - tensor = tensor.clone() - dist.all_reduce(tensor.div_(dist.get_world_size()), op=dist.ReduceOp.SUM) - return tensor - - -def obj2tensor(pyobj, device='cuda'): - """Serialize picklable python object to tensor.""" - storage = torch.ByteStorage.from_buffer(pickle.dumps(pyobj)) - return torch.ByteTensor(storage).to(device=device) - - -def tensor2obj(tensor): - """Deserialize tensor to picklable python object.""" - return pickle.loads(tensor.cpu().numpy().tobytes()) - - -@functools.lru_cache() -def _get_global_gloo_group(): - """Return a process group based on gloo backend, containing all the ranks - The result is cached.""" - if dist.get_backend() == 'nccl': - return dist.new_group(backend='gloo') - else: - return dist.group.WORLD - - -def all_reduce_dict(py_dict, op='sum', group=None, to_float=True): - """Apply all reduce function for python dict object. - - The code is modified from https://github.com/Megvii- - BaseDetection/YOLOX/blob/main/yolox/utils/allreduce_norm.py. - - NOTE: make sure that py_dict in different ranks has the same keys and - the values should be in the same shape. Currently only supports - nccl backend. - - Args: - py_dict (dict): Dict to be applied all reduce op. - op (str): Operator, could be 'sum' or 'mean'. Default: 'sum' - group (:obj:`torch.distributed.group`, optional): Distributed group, - Default: None. - to_float (bool): Whether to convert all values of dict to float. - Default: True. - - Returns: - OrderedDict: reduced python dict object. 
- """ - warnings.warn( - 'group` is deprecated. Currently only supports NCCL backend.') - _, world_size = get_dist_info() - if world_size == 1: - return py_dict - - # all reduce logic across different devices. - py_key = list(py_dict.keys()) - if not isinstance(py_dict, OrderedDict): - py_key_tensor = obj2tensor(py_key) - dist.broadcast(py_key_tensor, src=0) - py_key = tensor2obj(py_key_tensor) - - tensor_shapes = [py_dict[k].shape for k in py_key] - tensor_numels = [py_dict[k].numel() for k in py_key] - - if to_float: - warnings.warn('Note: the "to_float" is True, you need to ' - 'ensure that the behavior is reasonable.') - flatten_tensor = torch.cat( - [py_dict[k].flatten().float() for k in py_key]) - else: - flatten_tensor = torch.cat([py_dict[k].flatten() for k in py_key]) - - dist.all_reduce(flatten_tensor, op=dist.ReduceOp.SUM) - if op == 'mean': - flatten_tensor /= world_size - - split_tensors = [ - x.reshape(shape) for x, shape in zip( - torch.split(flatten_tensor, tensor_numels), tensor_shapes) - ] - out_dict = {k: v for k, v in zip(py_key, split_tensors)} - if isinstance(py_dict, OrderedDict): - out_dict = OrderedDict(out_dict) - return out_dict - - -def sync_random_seed(seed=None, device='cuda'): - """Make sure different ranks share the same seed. - - All workers must call this function, otherwise it will deadlock. - This method is generally used in `DistributedSampler`, - because the seed should be identical across all processes - in the distributed group. - - In distributed sampling, different ranks should sample non-overlapped - data in the dataset. Therefore, this function is used to make sure that - each rank shuffles the data indices in the same order based - on the same seed. Then different ranks could use different indices - to select non-overlapped data from the same data list. - - Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - - Returns: - int: Seed to be used. - """ - if seed is None: - seed = np.random.randint(2**31) - assert isinstance(seed, int) - - rank, world_size = get_dist_info() - - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/utils/misc.py b/cv/3d_detection/paconv/pytorch/mmdet/core/utils/misc.py deleted file mode 100644 index 14cb745e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/utils/misc.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial - -import numpy as np -import torch -from six.moves import map, zip - -from ..mask.structures import BitmapMasks, PolygonMasks - - -def multi_apply(func, *args, **kwargs): - """Apply function to a list of arguments. - - Note: - This function applies the ``func`` to multiple inputs and - map the multiple outputs of the ``func`` into different - list. Each list contains the same type of outputs corresponding - to different inputs. 
- - Args: - func (Function): A function that will be applied to a list of - arguments - - Returns: - tuple(list): A tuple containing multiple list, each list contains \ - a kind of returned results by the function - """ - pfunc = partial(func, **kwargs) if kwargs else func - map_results = map(pfunc, *args) - return tuple(map(list, zip(*map_results))) - - -def unmap(data, count, inds, fill=0): - """Unmap a subset of item (data) back to the original set of items (of size - count)""" - if data.dim() == 1: - ret = data.new_full((count, ), fill) - ret[inds.type(torch.bool)] = data - else: - new_size = (count, ) + data.size()[1:] - ret = data.new_full(new_size, fill) - ret[inds.type(torch.bool), :] = data - return ret - - -def mask2ndarray(mask): - """Convert Mask to ndarray.. - - Args: - mask (:obj:`BitmapMasks` or :obj:`PolygonMasks` or - torch.Tensor or np.ndarray): The mask to be converted. - - Returns: - np.ndarray: Ndarray mask of shape (n, h, w) that has been converted - """ - if isinstance(mask, (BitmapMasks, PolygonMasks)): - mask = mask.to_ndarray() - elif isinstance(mask, torch.Tensor): - mask = mask.detach().cpu().numpy() - elif not isinstance(mask, np.ndarray): - raise TypeError(f'Unsupported {type(mask)} data type') - return mask - - -def flip_tensor(src_tensor, flip_direction): - """flip tensor base on flip_direction. - - Args: - src_tensor (Tensor): input feature map, shape (B, C, H, W). - flip_direction (str): The flipping direction. Options are - 'horizontal', 'vertical', 'diagonal'. - - Returns: - out_tensor (Tensor): Flipped tensor. - """ - assert src_tensor.ndim == 4 - valid_directions = ['horizontal', 'vertical', 'diagonal'] - assert flip_direction in valid_directions - if flip_direction == 'horizontal': - out_tensor = torch.flip(src_tensor, [3]) - elif flip_direction == 'vertical': - out_tensor = torch.flip(src_tensor, [2]) - else: - out_tensor = torch.flip(src_tensor, [2, 3]) - return out_tensor - - -def select_single_mlvl(mlvl_tensors, batch_id, detach=True): - """Extract a multi-scale single image tensor from a multi-scale batch - tensor based on batch index. - - Note: The default value of detach is True, because the proposal gradient - needs to be detached during the training of the two-stage model. E.g - Cascade Mask R-CNN. - - Args: - mlvl_tensors (list[Tensor]): Batch tensor for all scale levels, - each is a 4D-tensor. - batch_id (int): Batch index. - detach (bool): Whether detach gradient. Default True. - - Returns: - list[Tensor]: Multi-scale single image tensor. - """ - assert isinstance(mlvl_tensors, (list, tuple)) - num_levels = len(mlvl_tensors) - - if detach: - mlvl_tensor_list = [ - mlvl_tensors[i][batch_id].detach() for i in range(num_levels) - ] - else: - mlvl_tensor_list = [ - mlvl_tensors[i][batch_id] for i in range(num_levels) - ] - return mlvl_tensor_list - - -def filter_scores_and_topk(scores, score_thr, topk, results=None): - """Filter results using score threshold and topk candidates. - - Args: - scores (Tensor): The scores, shape (num_bboxes, K). - score_thr (float): The score filter threshold. - topk (int): The number of topk candidates. - results (dict or list or Tensor, Optional): The results to - which the filtering rule is to be applied. The shape - of each item is (num_bboxes, N). - - Returns: - tuple: Filtered results - - - scores (Tensor): The scores after being filtered, \ - shape (num_bboxes_filtered, ). - - labels (Tensor): The class labels, shape \ - (num_bboxes_filtered, ). 
- - anchor_idxs (Tensor): The anchor indexes, shape \ - (num_bboxes_filtered, ). - - filtered_results (dict or list or Tensor, Optional): \ - The filtered results. The shape of each item is \ - (num_bboxes_filtered, N). - """ - valid_mask = scores > score_thr - scores = scores[valid_mask] - valid_idxs = torch.nonzero(valid_mask) - - num_topk = min(topk, valid_idxs.size(0)) - # torch.sort is actually faster than .topk (at least on GPUs) - scores, idxs = scores.sort(descending=True) - scores = scores[:num_topk] - topk_idxs = valid_idxs[idxs[:num_topk]] - keep_idxs, labels = topk_idxs.unbind(dim=1) - - filtered_results = None - if results is not None: - if isinstance(results, dict): - filtered_results = {k: v[keep_idxs] for k, v in results.items()} - elif isinstance(results, list): - filtered_results = [result[keep_idxs] for result in results] - elif isinstance(results, torch.Tensor): - filtered_results = results[keep_idxs] - else: - raise NotImplementedError(f'Only supports dict or list or Tensor, ' - f'but get {type(results)}.') - return scores, labels, keep_idxs, filtered_results - - -def center_of_mass(mask, esp=1e-6): - """Calculate the centroid coordinates of the mask. - - Args: - mask (Tensor): The mask to be calculated, shape (h, w). - esp (float): Avoid dividing by zero. Default: 1e-6. - - Returns: - tuple[Tensor]: the coordinates of the center point of the mask. - - - center_h (Tensor): the center point of the height. - - center_w (Tensor): the center point of the width. - """ - h, w = mask.shape - grid_h = torch.arange(h, device=mask.device)[:, None] - grid_w = torch.arange(w, device=mask.device) - normalizer = mask.sum().float().clamp(min=esp) - center_h = (mask * grid_h).sum() / normalizer - center_w = (mask * grid_w).sum() / normalizer - return center_h, center_w - - -def generate_coordinate(featmap_sizes, device='cuda'): - """Generate the coordinate. - - Args: - featmap_sizes (tuple): The feature to be calculated, - of shape (N, C, W, H). - device (str): The device where the feature will be put on. - Returns: - coord_feat (Tensor): The coordinate feature, of shape (N, 2, W, H). - """ - - x_range = torch.linspace(-1, 1, featmap_sizes[-1], device=device) - y_range = torch.linspace(-1, 1, featmap_sizes[-2], device=device) - y, x = torch.meshgrid(y_range, x_range) - y = y.expand([featmap_sizes[0], 1, -1, -1]) - x = x.expand([featmap_sizes[0], 1, -1, -1]) - coord_feat = torch.cat([x, y], 1) - - return coord_feat diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/visualization/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/core/visualization/__init__.py deleted file mode 100644 index 2eb17c4b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/visualization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .image import (color_val_matplotlib, imshow_det_bboxes, - imshow_gt_det_bboxes) -from .palette import get_palette, palette_val - -__all__ = [ - 'imshow_det_bboxes', 'imshow_gt_det_bboxes', 'color_val_matplotlib', - 'palette_val', 'get_palette' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/visualization/image.py b/cv/3d_detection/paconv/pytorch/mmdet/core/visualization/image.py deleted file mode 100644 index c574b2d4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/visualization/image.py +++ /dev/null @@ -1,524 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import cv2 -import matplotlib.pyplot as plt -import mmcv -import numpy as np -import pycocotools.mask as mask_util -from matplotlib.collections import PatchCollection -from matplotlib.patches import Polygon - -from mmdet.core.evaluation.panoptic_utils import INSTANCE_OFFSET -from ..mask.structures import bitmap_to_polygon -from ..utils import mask2ndarray -from .palette import get_palette, palette_val - -__all__ = [ - 'color_val_matplotlib', 'draw_masks', 'draw_bboxes', 'draw_labels', - 'imshow_det_bboxes', 'imshow_gt_det_bboxes' -] - -EPS = 1e-2 - - -def color_val_matplotlib(color): - """Convert various input in BGR order to normalized RGB matplotlib color - tuples. - - Args: - color (:obj`Color` | str | tuple | int | ndarray): Color inputs. - - Returns: - tuple[float]: A tuple of 3 normalized floats indicating RGB channels. - """ - color = mmcv.color_val(color) - color = [color / 255 for color in color[::-1]] - return tuple(color) - - -def _get_adaptive_scales(areas, min_area=800, max_area=30000): - """Get adaptive scales according to areas. - - The scale range is [0.5, 1.0]. When the area is less than - ``'min_area'``, the scale is 0.5 while the area is larger than - ``'max_area'``, the scale is 1.0. - - Args: - areas (ndarray): The areas of bboxes or masks with the - shape of (n, ). - min_area (int): Lower bound areas for adaptive scales. - Default: 800. - max_area (int): Upper bound areas for adaptive scales. - Default: 30000. - - Returns: - ndarray: The adaotive scales with the shape of (n, ). - """ - scales = 0.5 + (areas - min_area) / (max_area - min_area) - scales = np.clip(scales, 0.5, 1.0) - return scales - - -def _get_bias_color(base, max_dist=30): - """Get different colors for each masks. - - Get different colors for each masks by adding a bias - color to the base category color. - Args: - base (ndarray): The base category color with the shape - of (3, ). - max_dist (int): The max distance of bias. Default: 30. - - Returns: - ndarray: The new color for a mask with the shape of (3, ). - """ - new_color = base + np.random.randint( - low=-max_dist, high=max_dist + 1, size=3) - return np.clip(new_color, 0, 255, new_color) - - -def draw_bboxes(ax, bboxes, color='g', alpha=0.8, thickness=2): - """Draw bounding boxes on the axes. - - Args: - ax (matplotlib.Axes): The input axes. - bboxes (ndarray): The input bounding boxes with the shape - of (n, 4). - color (list[tuple] | matplotlib.color): the colors for each - bounding boxes. - alpha (float): Transparency of bounding boxes. Default: 0.8. - thickness (int): Thickness of lines. Default: 2. - - Returns: - matplotlib.Axes: The result axes. - """ - polygons = [] - for i, bbox in enumerate(bboxes): - bbox_int = bbox.astype(np.int32) - poly = [[bbox_int[0], bbox_int[1]], [bbox_int[0], bbox_int[3]], - [bbox_int[2], bbox_int[3]], [bbox_int[2], bbox_int[1]]] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - p = PatchCollection( - polygons, - facecolor='none', - edgecolors=color, - linewidths=thickness, - alpha=alpha) - ax.add_collection(p) - - return ax - - -def draw_labels(ax, - labels, - positions, - scores=None, - class_names=None, - color='w', - font_size=8, - scales=None, - horizontal_alignment='left'): - """Draw labels on the axes. - - Args: - ax (matplotlib.Axes): The input axes. - labels (ndarray): The labels with the shape of (n, ). - positions (ndarray): The positions to draw each labels. - scores (ndarray): The scores for each labels. - class_names (list[str]): The class names. 
- color (list[tuple] | matplotlib.color): The colors for labels. - font_size (int): Font size of texts. Default: 8. - scales (list[float]): Scales of texts. Default: None. - horizontal_alignment (str): The horizontal alignment method of - texts. Default: 'left'. - - Returns: - matplotlib.Axes: The result axes. - """ - for i, (pos, label) in enumerate(zip(positions, labels)): - label_text = class_names[ - label] if class_names is not None else f'class {label}' - if scores is not None: - label_text += f'|{scores[i]:.02f}' - text_color = color[i] if isinstance(color, list) else color - - font_size_mask = font_size if scales is None else font_size * scales[i] - ax.text( - pos[0], - pos[1], - f'{label_text}', - bbox={ - 'facecolor': 'black', - 'alpha': 0.8, - 'pad': 0.7, - 'edgecolor': 'none' - }, - color=text_color, - fontsize=font_size_mask, - verticalalignment='top', - horizontalalignment=horizontal_alignment) - - return ax - - -def draw_masks(ax, img, masks, color=None, with_edge=True, alpha=0.8): - """Draw masks on the image and their edges on the axes. - - Args: - ax (matplotlib.Axes): The input axes. - img (ndarray): The image with the shape of (3, h, w). - masks (ndarray): The masks with the shape of (n, h, w). - color (ndarray): The colors for each masks with the shape - of (n, 3). - with_edge (bool): Whether to draw edges. Default: True. - alpha (float): Transparency of bounding boxes. Default: 0.8. - - Returns: - matplotlib.Axes: The result axes. - ndarray: The result image. - """ - taken_colors = set([0, 0, 0]) - if color is None: - random_colors = np.random.randint(0, 255, (masks.size(0), 3)) - color = [tuple(c) for c in random_colors] - color = np.array(color, dtype=np.uint8) - polygons = [] - for i, mask in enumerate(masks): - if with_edge: - contours, _ = bitmap_to_polygon(mask) - polygons += [Polygon(c) for c in contours] - - color_mask = color[i] - while tuple(color_mask) in taken_colors: - color_mask = _get_bias_color(color_mask) - taken_colors.add(tuple(color_mask)) - - mask = mask.astype(bool) - img[mask] = img[mask] * (1 - alpha) + color_mask * alpha - - p = PatchCollection( - polygons, facecolor='none', edgecolors='w', linewidths=1, alpha=0.8) - ax.add_collection(p) - - return ax, img - - -def imshow_det_bboxes(img, - bboxes=None, - labels=None, - segms=None, - class_names=None, - score_thr=0, - bbox_color='green', - text_color='green', - mask_color=None, - thickness=2, - font_size=8, - win_name='', - show=True, - wait_time=0, - out_file=None): - """Draw bboxes and class labels (with scores) on an image. - - Args: - img (str | ndarray): The image to be displayed. - bboxes (ndarray): Bounding boxes (with scores), shaped (n, 4) or - (n, 5). - labels (ndarray): Labels of bboxes. - segms (ndarray | None): Masks, shaped (n,h,w) or None. - class_names (list[str]): Names of each classes. - score_thr (float): Minimum score of bboxes to be shown. Default: 0. - bbox_color (list[tuple] | tuple | str | None): Colors of bbox lines. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: 'green'. - text_color (list[tuple] | tuple | str | None): Colors of texts. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: 'green'. - mask_color (list[tuple] | tuple | str | None, optional): Colors of - masks. If a single color is given, it will be applied to all - classes. The tuple of color should be in RGB order. - Default: None. - thickness (int): Thickness of lines. 
Default: 2. - font_size (int): Font size of texts. Default: 13. - show (bool): Whether to show the image. Default: True. - win_name (str): The window name. Default: ''. - wait_time (float): Value of waitKey param. Default: 0. - out_file (str, optional): The filename to write the image. - Default: None. - - Returns: - ndarray: The image with bboxes drawn on it. - """ - assert bboxes is None or bboxes.ndim == 2, \ - f' bboxes ndim should be 2, but its ndim is {bboxes.ndim}.' - assert labels.ndim == 1, \ - f' labels ndim should be 1, but its ndim is {labels.ndim}.' - assert bboxes is None or bboxes.shape[1] == 4 or bboxes.shape[1] == 5, \ - f' bboxes.shape[1] should be 4 or 5, but its {bboxes.shape[1]}.' - assert bboxes is None or bboxes.shape[0] <= labels.shape[0], \ - 'labels.shape[0] should not be less than bboxes.shape[0].' - assert segms is None or segms.shape[0] == labels.shape[0], \ - 'segms.shape[0] and labels.shape[0] should have the same length.' - assert segms is not None or bboxes is not None, \ - 'segms and bboxes should not be None at the same time.' - - img = mmcv.imread(img).astype(np.uint8) - - if score_thr > 0: - assert bboxes is not None and bboxes.shape[1] == 5 - scores = bboxes[:, -1] - inds = scores > score_thr - bboxes = bboxes[inds, :] - labels = labels[inds] - if segms is not None: - segms = segms[inds, ...] - - img = mmcv.bgr2rgb(img) - width, height = img.shape[1], img.shape[0] - img = np.ascontiguousarray(img) - - fig = plt.figure(win_name, frameon=False) - plt.title(win_name) - canvas = fig.canvas - dpi = fig.get_dpi() - # add a small EPS to avoid precision lost due to matplotlib's truncation - # (https://github.com/matplotlib/matplotlib/issues/15363) - fig.set_size_inches((width + EPS) / dpi, (height + EPS) / dpi) - - # remove white edges by set subplot margin - plt.subplots_adjust(left=0, right=1, bottom=0, top=1) - ax = plt.gca() - ax.axis('off') - - max_label = int(max(labels) if len(labels) > 0 else 0) - text_palette = palette_val(get_palette(text_color, max_label + 1)) - text_colors = [text_palette[label] for label in labels] - - num_bboxes = 0 - if bboxes is not None: - num_bboxes = bboxes.shape[0] - bbox_palette = palette_val(get_palette(bbox_color, max_label + 1)) - colors = [bbox_palette[label] for label in labels[:num_bboxes]] - draw_bboxes(ax, bboxes, colors, alpha=0.8, thickness=thickness) - - horizontal_alignment = 'left' - positions = bboxes[:, :2].astype(np.int32) + thickness - areas = (bboxes[:, 3] - bboxes[:, 1]) * (bboxes[:, 2] - bboxes[:, 0]) - scales = _get_adaptive_scales(areas) - scores = bboxes[:, 4] if bboxes.shape[1] == 5 else None - draw_labels( - ax, - labels[:num_bboxes], - positions, - scores=scores, - class_names=class_names, - color=text_colors, - font_size=font_size, - scales=scales, - horizontal_alignment=horizontal_alignment) - - if segms is not None: - mask_palette = get_palette(mask_color, max_label + 1) - colors = [mask_palette[label] for label in labels] - colors = np.array(colors, dtype=np.uint8) - draw_masks(ax, img, segms, colors, with_edge=True) - - if num_bboxes < segms.shape[0]: - segms = segms[num_bboxes:] - horizontal_alignment = 'center' - areas = [] - positions = [] - for mask in segms: - _, _, stats, centroids = cv2.connectedComponentsWithStats( - mask.astype(np.uint8), connectivity=8) - largest_id = np.argmax(stats[1:, -1]) + 1 - positions.append(centroids[largest_id]) - areas.append(stats[largest_id, -1]) - areas = np.stack(areas, axis=0) - scales = _get_adaptive_scales(areas) - draw_labels( - ax, - 
labels[num_bboxes:], - positions, - class_names=class_names, - color=text_colors, - font_size=font_size, - scales=scales, - horizontal_alignment=horizontal_alignment) - - plt.imshow(img) - - stream, _ = canvas.print_to_buffer() - buffer = np.frombuffer(stream, dtype='uint8') - img_rgba = buffer.reshape(height, width, 4) - rgb, alpha = np.split(img_rgba, [3], axis=2) - img = rgb.astype('uint8') - img = mmcv.rgb2bgr(img) - - if show: - # We do not use cv2 for display because in some cases, opencv will - # conflict with Qt, it will output a warning: Current thread - # is not the object's thread. You can refer to - # https://github.com/opencv/opencv-python/issues/46 for details - if wait_time == 0: - plt.show() - else: - plt.show(block=False) - plt.pause(wait_time) - if out_file is not None: - mmcv.imwrite(img, out_file) - - plt.close() - - return img - - -def imshow_gt_det_bboxes(img, - annotation, - result, - class_names=None, - score_thr=0, - gt_bbox_color=(61, 102, 255), - gt_text_color=(200, 200, 200), - gt_mask_color=(61, 102, 255), - det_bbox_color=(241, 101, 72), - det_text_color=(200, 200, 200), - det_mask_color=(241, 101, 72), - thickness=2, - font_size=13, - win_name='', - show=True, - wait_time=0, - out_file=None): - """General visualization GT and result function. - - Args: - img (str | ndarray): The image to be displayed. - annotation (dict): Ground truth annotations where contain keys of - 'gt_bboxes' and 'gt_labels' or 'gt_masks'. - result (tuple[list] | list): The detection result, can be either - (bbox, segm) or just bbox. - class_names (list[str]): Names of each classes. - score_thr (float): Minimum score of bboxes to be shown. Default: 0. - gt_bbox_color (list[tuple] | tuple | str | None): Colors of bbox lines. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (61, 102, 255). - gt_text_color (list[tuple] | tuple | str | None): Colors of texts. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (200, 200, 200). - gt_mask_color (list[tuple] | tuple | str | None, optional): Colors of - masks. If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (61, 102, 255). - det_bbox_color (list[tuple] | tuple | str | None):Colors of bbox lines. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (241, 101, 72). - det_text_color (list[tuple] | tuple | str | None):Colors of texts. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (200, 200, 200). - det_mask_color (list[tuple] | tuple | str | None, optional): Color of - masks. If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (241, 101, 72). - thickness (int): Thickness of lines. Default: 2. - font_size (int): Font size of texts. Default: 13. - win_name (str): The window name. Default: ''. - show (bool): Whether to show the image. Default: True. - wait_time (float): Value of waitKey param. Default: 0. - out_file (str, optional): The filename to write the image. - Default: None. - - Returns: - ndarray: The image with bboxes or masks drawn on it. 
- """ - assert 'gt_bboxes' in annotation - assert 'gt_labels' in annotation - assert isinstance(result, (tuple, list, dict)), 'Expected ' \ - f'tuple or list or dict, but get {type(result)}' - - gt_bboxes = annotation['gt_bboxes'] - gt_labels = annotation['gt_labels'] - gt_masks = annotation.get('gt_masks', None) - if gt_masks is not None: - gt_masks = mask2ndarray(gt_masks) - - gt_seg = annotation.get('gt_semantic_seg', None) - if gt_seg is not None: - pad_value = 255 # the padding value of gt_seg - sem_labels = np.unique(gt_seg) - all_labels = np.concatenate((gt_labels, sem_labels), axis=0) - all_labels, counts = np.unique(all_labels, return_counts=True) - stuff_labels = all_labels[np.logical_and(counts < 2, - all_labels != pad_value)] - stuff_masks = gt_seg[None] == stuff_labels[:, None, None] - gt_labels = np.concatenate((gt_labels, stuff_labels), axis=0) - gt_masks = np.concatenate((gt_masks, stuff_masks.astype(np.uint8)), - axis=0) - # If you need to show the bounding boxes, - # please comment the following line - # gt_bboxes = None - - img = mmcv.imread(img) - - img = imshow_det_bboxes( - img, - gt_bboxes, - gt_labels, - gt_masks, - class_names=class_names, - bbox_color=gt_bbox_color, - text_color=gt_text_color, - mask_color=gt_mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=False) - - if not isinstance(result, dict): - if isinstance(result, tuple): - bbox_result, segm_result = result - if isinstance(segm_result, tuple): - segm_result = segm_result[0] # ms rcnn - else: - bbox_result, segm_result = result, None - - bboxes = np.vstack(bbox_result) - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - - segms = None - if segm_result is not None and len(labels) > 0: # non empty - segms = mmcv.concat_list(segm_result) - segms = mask_util.decode(segms) - segms = segms.transpose(2, 0, 1) - else: - assert class_names is not None, 'We need to know the number ' \ - 'of classes.' - VOID = len(class_names) - bboxes = None - pan_results = result['pan_results'] - # keep objects ahead - ids = np.unique(pan_results)[::-1] - legal_indices = ids != VOID - ids = ids[legal_indices] - labels = np.array([id % INSTANCE_OFFSET for id in ids], dtype=np.int64) - segms = (pan_results[None] == ids[:, None, None]) - - img = imshow_det_bboxes( - img, - bboxes, - labels, - segms=segms, - class_names=class_names, - score_thr=score_thr, - bbox_color=det_bbox_color, - text_color=det_text_color, - mask_color=det_mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - return img diff --git a/cv/3d_detection/paconv/pytorch/mmdet/core/visualization/palette.py b/cv/3d_detection/paconv/pytorch/mmdet/core/visualization/palette.py deleted file mode 100644 index 11692cdd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/core/visualization/palette.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np - - -def palette_val(palette): - """Convert palette to matplotlib palette. - - Args: - palette List[tuple]: A list of color tuples. - - Returns: - List[tuple[float]]: A list of RGB matplotlib color tuples. - """ - new_palette = [] - for color in palette: - color = [c / 255 for c in color] - new_palette.append(tuple(color)) - return new_palette - - -def get_palette(palette, num_classes): - """Get palette from various inputs. 
- - Args: - palette (list[tuple] | str | tuple | :obj:`Color`): palette inputs. - num_classes (int): the number of classes. - - Returns: - list[tuple[int]]: A list of color tuples. - """ - assert isinstance(num_classes, int) - - if isinstance(palette, list): - dataset_palette = palette - elif isinstance(palette, tuple): - dataset_palette = [palette] * num_classes - elif palette == 'random' or palette is None: - state = np.random.get_state() - # random color - np.random.seed(42) - palette = np.random.randint(0, 256, size=(num_classes, 3)) - np.random.set_state(state) - dataset_palette = [tuple(c) for c in palette] - elif palette == 'coco': - from mmdet.datasets import CocoDataset, CocoPanopticDataset - dataset_palette = CocoDataset.PALETTE - if len(dataset_palette) < num_classes: - dataset_palette = CocoPanopticDataset.PALETTE - elif palette == 'citys': - from mmdet.datasets import CityscapesDataset - dataset_palette = CityscapesDataset.PALETTE - elif palette == 'voc': - from mmdet.datasets import VOCDataset - dataset_palette = VOCDataset.PALETTE - elif mmcv.is_str(palette): - dataset_palette = [mmcv.color_val(palette)[::-1]] * num_classes - else: - raise TypeError(f'Invalid type for palette: {type(palette)}') - - assert len(dataset_palette) >= num_classes, \ - 'The length of palette should not be less than `num_classes`.' - return dataset_palette diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/__init__.py deleted file mode 100644 index f251d07e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset -from .cityscapes import CityscapesDataset -from .coco import CocoDataset -from .coco_panoptic import CocoPanopticDataset -from .custom import CustomDataset -from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset, - MultiImageMixDataset, RepeatDataset) -from .deepfashion import DeepFashionDataset -from .lvis import LVISDataset, LVISV1Dataset, LVISV05Dataset -from .openimages import OpenImagesChallengeDataset, OpenImagesDataset -from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler -from .utils import (NumClassCheckHook, get_loading_pipeline, - replace_ImageToTensor) -from .voc import VOCDataset -from .wider_face import WIDERFaceDataset -from .xml_style import XMLDataset - -__all__ = [ - 'CustomDataset', 'XMLDataset', 'CocoDataset', 'DeepFashionDataset', - 'VOCDataset', 'CityscapesDataset', 'LVISDataset', 'LVISV05Dataset', - 'LVISV1Dataset', 'GroupSampler', 'DistributedGroupSampler', - 'DistributedSampler', 'build_dataloader', 'ConcatDataset', 'RepeatDataset', - 'ClassBalancedDataset', 'WIDERFaceDataset', 'DATASETS', 'PIPELINES', - 'build_dataset', 'replace_ImageToTensor', 'get_loading_pipeline', - 'NumClassCheckHook', 'CocoPanopticDataset', 'MultiImageMixDataset', - 'OpenImagesDataset', 'OpenImagesChallengeDataset' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/api_wrappers/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/api_wrappers/__init__.py deleted file mode 100644 index af855759..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/api_wrappers/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .coco_api import COCO, COCOeval -from .panoptic_evaluation import pq_compute_multi_core, pq_compute_single_core - -__all__ = [ - 'COCO', 'COCOeval', 'pq_compute_multi_core', 'pq_compute_single_core' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/api_wrappers/coco_api.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/api_wrappers/coco_api.py deleted file mode 100644 index eef6341e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/api_wrappers/coco_api.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# This file add snake case alias for coco api - -import warnings - -import pycocotools -from pycocotools.coco import COCO as _COCO -from pycocotools.cocoeval import COCOeval as _COCOeval - - -class COCO(_COCO): - """This class is almost the same as official pycocotools package. - - It implements some snake case function aliases. So that the COCO class has - the same interface as LVIS class. - """ - - def __init__(self, annotation_file=None): - if getattr(pycocotools, '__version__', '0') >= '12.0.2': - warnings.warn( - 'mmpycocotools is deprecated. Please install official pycocotools by "pip install pycocotools"', # noqa: E501 - UserWarning) - super().__init__(annotation_file=annotation_file) - self.img_ann_map = self.imgToAnns - self.cat_img_map = self.catToImgs - - def get_ann_ids(self, img_ids=[], cat_ids=[], area_rng=[], iscrowd=None): - return self.getAnnIds(img_ids, cat_ids, area_rng, iscrowd) - - def get_cat_ids(self, cat_names=[], sup_names=[], cat_ids=[]): - return self.getCatIds(cat_names, sup_names, cat_ids) - - def get_img_ids(self, img_ids=[], cat_ids=[]): - return self.getImgIds(img_ids, cat_ids) - - def load_anns(self, ids): - return self.loadAnns(ids) - - def load_cats(self, ids): - return self.loadCats(ids) - - def load_imgs(self, ids): - return self.loadImgs(ids) - - -# just for the ease of import -COCOeval = _COCOeval diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/api_wrappers/panoptic_evaluation.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/api_wrappers/panoptic_evaluation.py deleted file mode 100644 index b29d5007..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/api_wrappers/panoptic_evaluation.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -# Copyright (c) 2018, Alexander Kirillov -# This file supports `file_client` for `panopticapi`, -# the source code is copied from `panopticapi`, -# only the way to load the gt images is modified. -import multiprocessing -import os - -import mmcv -import numpy as np - -try: - from panopticapi.evaluation import OFFSET, VOID, PQStat - from panopticapi.utils import rgb2id -except ImportError: - PQStat = None - rgb2id = None - VOID = 0 - OFFSET = 256 * 256 * 256 - - -def pq_compute_single_core(proc_id, - annotation_set, - gt_folder, - pred_folder, - categories, - file_client=None): - """The single core function to evaluate the metric of Panoptic - Segmentation. - - Same as the function with the same name in `panopticapi`. Only the function - to load the images is changed to use the file client. - - Args: - proc_id (int): The id of the mini process. - gt_folder (str): The path of the ground truth images. - pred_folder (str): The path of the prediction images. - categories (str): The categories of the dataset. - file_client (object): The file client of the dataset. If None, - the backend will be set to `disk`. 
- """ - if PQStat is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - if file_client is None: - file_client_args = dict(backend='disk') - file_client = mmcv.FileClient(**file_client_args) - - pq_stat = PQStat() - - idx = 0 - for gt_ann, pred_ann in annotation_set: - if idx % 100 == 0: - print('Core: {}, {} from {} images processed'.format( - proc_id, idx, len(annotation_set))) - idx += 1 - # The gt images can be on the local disk or `ceph`, so we use - # file_client here. - img_bytes = file_client.get( - os.path.join(gt_folder, gt_ann['file_name'])) - pan_gt = mmcv.imfrombytes(img_bytes, flag='color', channel_order='rgb') - pan_gt = rgb2id(pan_gt) - - # The predictions can only be on the local dist now. - pan_pred = mmcv.imread( - os.path.join(pred_folder, pred_ann['file_name']), - flag='color', - channel_order='rgb') - pan_pred = rgb2id(pan_pred) - - gt_segms = {el['id']: el for el in gt_ann['segments_info']} - pred_segms = {el['id']: el for el in pred_ann['segments_info']} - - # predicted segments area calculation + prediction sanity checks - pred_labels_set = set(el['id'] for el in pred_ann['segments_info']) - labels, labels_cnt = np.unique(pan_pred, return_counts=True) - for label, label_cnt in zip(labels, labels_cnt): - if label not in pred_segms: - if label == VOID: - continue - raise KeyError( - 'In the image with ID {} segment with ID {} is ' - 'presented in PNG and not presented in JSON.'.format( - gt_ann['image_id'], label)) - pred_segms[label]['area'] = label_cnt - pred_labels_set.remove(label) - if pred_segms[label]['category_id'] not in categories: - raise KeyError( - 'In the image with ID {} segment with ID {} has ' - 'unknown category_id {}.'.format( - gt_ann['image_id'], label, - pred_segms[label]['category_id'])) - if len(pred_labels_set) != 0: - raise KeyError( - 'In the image with ID {} the following segment IDs {} ' - 'are presented in JSON and not presented in PNG.'.format( - gt_ann['image_id'], list(pred_labels_set))) - - # confusion matrix calculation - pan_gt_pred = pan_gt.astype(np.uint64) * OFFSET + pan_pred.astype( - np.uint64) - gt_pred_map = {} - labels, labels_cnt = np.unique(pan_gt_pred, return_counts=True) - for label, intersection in zip(labels, labels_cnt): - gt_id = label // OFFSET - pred_id = label % OFFSET - gt_pred_map[(gt_id, pred_id)] = intersection - - # count all matched pairs - gt_matched = set() - pred_matched = set() - for label_tuple, intersection in gt_pred_map.items(): - gt_label, pred_label = label_tuple - if gt_label not in gt_segms: - continue - if pred_label not in pred_segms: - continue - if gt_segms[gt_label]['iscrowd'] == 1: - continue - if gt_segms[gt_label]['category_id'] != pred_segms[pred_label][ - 'category_id']: - continue - - union = pred_segms[pred_label]['area'] + gt_segms[gt_label][ - 'area'] - intersection - gt_pred_map.get((VOID, pred_label), 0) - iou = intersection / union - if iou > 0.5: - pq_stat[gt_segms[gt_label]['category_id']].tp += 1 - pq_stat[gt_segms[gt_label]['category_id']].iou += iou - gt_matched.add(gt_label) - pred_matched.add(pred_label) - - # count false positives - crowd_labels_dict = {} - for gt_label, gt_info in gt_segms.items(): - if gt_label in gt_matched: - continue - # crowd segments are ignored - if gt_info['iscrowd'] == 1: - crowd_labels_dict[gt_info['category_id']] = gt_label - continue - pq_stat[gt_info['category_id']].fn += 1 - - # count false positives - for pred_label, pred_info in 
pred_segms.items(): - if pred_label in pred_matched: - continue - # intersection of the segment with VOID - intersection = gt_pred_map.get((VOID, pred_label), 0) - # plus intersection with corresponding CROWD region if it exists - if pred_info['category_id'] in crowd_labels_dict: - intersection += gt_pred_map.get( - (crowd_labels_dict[pred_info['category_id']], pred_label), - 0) - # predicted segment is ignored if more than half of - # the segment correspond to VOID and CROWD regions - if intersection / pred_info['area'] > 0.5: - continue - pq_stat[pred_info['category_id']].fp += 1 - print('Core: {}, all {} images processed'.format(proc_id, - len(annotation_set))) - return pq_stat - - -def pq_compute_multi_core(matched_annotations_list, - gt_folder, - pred_folder, - categories, - file_client=None, - nproc=32): - """Evaluate the metrics of Panoptic Segmentation with multithreading. - - Same as the function with the same name in `panopticapi`. - - Args: - matched_annotations_list (list): The matched annotation list. Each - element is a tuple of annotations of the same image with the - format (gt_anns, pred_anns). - gt_folder (str): The path of the ground truth images. - pred_folder (str): The path of the prediction images. - categories (str): The categories of the dataset. - file_client (object): The file client of the dataset. If None, - the backend will be set to `disk`. - nproc (int): Number of processes for panoptic quality computing. - Defaults to 32. When `nproc` exceeds the number of cpu cores, - the number of cpu cores is used. - """ - if PQStat is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - if file_client is None: - file_client_args = dict(backend='disk') - file_client = mmcv.FileClient(**file_client_args) - - cpu_num = min(nproc, multiprocessing.cpu_count()) - - annotations_split = np.array_split(matched_annotations_list, cpu_num) - print('Number of cores: {}, images per core: {}'.format( - cpu_num, len(annotations_split[0]))) - workers = multiprocessing.Pool(processes=cpu_num) - processes = [] - for proc_id, annotation_set in enumerate(annotations_split): - p = workers.apply_async(pq_compute_single_core, - (proc_id, annotation_set, gt_folder, - pred_folder, categories, file_client)) - processes.append(p) - - # Close the process pool, otherwise it will lead to memory - # leaking problems. - workers.close() - workers.join() - - pq_stat = PQStat() - for p in processes: - pq_stat += p.get() - - return pq_stat diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/builder.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/builder.py deleted file mode 100644 index 1936296a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/builder.py +++ /dev/null @@ -1,215 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
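# The per-category counters accumulated by the pq_compute_* helpers removed above
# reduce to the standard panoptic-quality formula: segments are matched when
# IoU > 0.5, and PQ = (sum of matched IoUs) / (TP + 0.5*FP + 0.5*FN).
# A minimal sketch with illustrative numbers (not taken from this patch):
def panoptic_quality(iou_sum, tp, fp, fn):
    if tp + fp + fn == 0:
        return 0.0
    sq = iou_sum / tp if tp else 0.0            # segmentation quality
    rq = tp / (tp + 0.5 * fp + 0.5 * fn)        # recognition quality
    return sq * rq                              # PQ = SQ * RQ

# panoptic_quality(iou_sum=7.2, tp=9, fp=2, fn=1) ~= 0.686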
-import copy -import platform -import random -import warnings -from functools import partial - -import numpy as np -import torch -from mmcv.parallel import collate -from mmcv.runner import get_dist_info -from mmcv.utils import TORCH_VERSION, Registry, build_from_cfg, digit_version -from torch.utils.data import DataLoader - -from .samplers import (ClassAwareSampler, DistributedGroupSampler, - DistributedSampler, GroupSampler, InfiniteBatchSampler, - InfiniteGroupBatchSampler) - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - base_soft_limit = rlimit[0] - hard_limit = rlimit[1] - soft_limit = min(max(4096, base_soft_limit), hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def _concat_dataset(cfg, default_args=None): - from .dataset_wrappers import ConcatDataset - ann_files = cfg['ann_file'] - img_prefixes = cfg.get('img_prefix', None) - seg_prefixes = cfg.get('seg_prefix', None) - proposal_files = cfg.get('proposal_file', None) - separate_eval = cfg.get('separate_eval', True) - - datasets = [] - num_dset = len(ann_files) - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - # pop 'separate_eval' since it is not a valid key for common datasets. - if 'separate_eval' in data_cfg: - data_cfg.pop('separate_eval') - data_cfg['ann_file'] = ann_files[i] - if isinstance(img_prefixes, (list, tuple)): - data_cfg['img_prefix'] = img_prefixes[i] - if isinstance(seg_prefixes, (list, tuple)): - data_cfg['seg_prefix'] = seg_prefixes[i] - if isinstance(proposal_files, (list, tuple)): - data_cfg['proposal_file'] = proposal_files[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets, separate_eval) - - -def build_dataset(cfg, default_args=None): - from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset, - MultiImageMixDataset, RepeatDataset) - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'ConcatDataset': - dataset = ConcatDataset( - [build_dataset(c, default_args) for c in cfg['datasets']], - cfg.get('separate_eval', True)) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif cfg['type'] == 'ClassBalancedDataset': - dataset = ClassBalancedDataset( - build_dataset(cfg['dataset'], default_args), cfg['oversample_thr']) - elif cfg['type'] == 'MultiImageMixDataset': - cp_cfg = copy.deepcopy(cfg) - cp_cfg['dataset'] = build_dataset(cp_cfg['dataset']) - cp_cfg.pop('type') - dataset = MultiImageMixDataset(**cp_cfg) - elif isinstance(cfg.get('ann_file'), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - runner_type='EpochBasedRunner', - persistent_workers=False, - class_aware_sampler=None, - **kwargs): - """Build PyTorch DataLoader. - - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (Dataset): A PyTorch dataset. - samples_per_gpu (int): Number of training samples on each GPU, i.e., - batch size of each GPU. 
- workers_per_gpu (int): How many subprocesses to use for data loading - for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed training. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - seed (int, Optional): Seed to be used. Default: None. - runner_type (str): Type of runner. Default: `EpochBasedRunner` - persistent_workers (bool): If True, the data loader will not shutdown - the worker processes after a dataset has been consumed once. - This allows to maintain the workers `Dataset` instances alive. - This argument is only valid when PyTorch>=1.7.0. Default: False. - class_aware_sampler (dict): Whether to use `ClassAwareSampler` - during training. Default: None. - kwargs: any keyword argument to be used to initialize DataLoader - - Returns: - DataLoader: A PyTorch dataloader. - """ - rank, world_size = get_dist_info() - - if dist: - # When model is :obj:`DistributedDataParallel`, - # `batch_size` of :obj:`dataloader` is the - # number of training samples on each GPU. - batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - # When model is obj:`DataParallel` - # the batch size is samples on all the GPUS - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - if runner_type == 'IterBasedRunner': - # this is a batch sampler, which can yield - # a mini-batch indices each time. - # it can be used in both `DataParallel` and - # `DistributedDataParallel` - if shuffle: - batch_sampler = InfiniteGroupBatchSampler( - dataset, batch_size, world_size, rank, seed=seed) - else: - batch_sampler = InfiniteBatchSampler( - dataset, - batch_size, - world_size, - rank, - seed=seed, - shuffle=False) - batch_size = 1 - sampler = None - else: - if class_aware_sampler is not None: - # ClassAwareSampler can be used in both distributed and - # non-distributed training. 
- num_sample_class = class_aware_sampler.get('num_sample_class', 1) - sampler = ClassAwareSampler( - dataset, - samples_per_gpu, - world_size, - rank, - seed=seed, - num_sample_class=num_sample_class) - elif dist: - # DistributedGroupSampler will definitely shuffle the data to - # satisfy that images on each GPU are in the same group - if shuffle: - sampler = DistributedGroupSampler( - dataset, samples_per_gpu, world_size, rank, seed=seed) - else: - sampler = DistributedSampler( - dataset, world_size, rank, shuffle=False, seed=seed) - else: - sampler = GroupSampler(dataset, - samples_per_gpu) if shuffle else None - batch_sampler = None - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) >= digit_version('1.7.0')): - kwargs['persistent_workers'] = persistent_workers - elif persistent_workers is True: - warnings.warn('persistent_workers is invalid because your pytorch ' - 'version is lower than 1.7.0') - - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - batch_sampler=batch_sampler, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=kwargs.pop('pin_memory', False), - worker_init_fn=init_fn, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - # The seed of each worker equals to - # num_worker * rank + worker_id + user_seed - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) - torch.manual_seed(worker_seed) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/cityscapes.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/cityscapes.py deleted file mode 100644 index da6a2adc..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/cityscapes.py +++ /dev/null @@ -1,338 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
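# A sketch of the config-driven dispatch in the build_dataset() helper removed above:
# wrapper types ('ConcatDataset', 'RepeatDataset', 'ClassBalancedDataset',
# 'MultiImageMixDataset') are peeled off recursively, and any other 'type' is looked
# up in the DATASETS registry. The paths below are illustrative placeholders.
repeat_cfg = dict(
    type='RepeatDataset',
    times=3,                       # repeat a small dataset to lengthen one epoch
    dataset=dict(
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_train2017.json',
        img_prefix='data/coco/train2017/',
        pipeline=[],               # processing pipeline omitted for brevity
    ))
# build_dataset(repeat_cfg) would build the inner CocoDataset from the registry and
# wrap it in RepeatDataset; passing a list of such configs yields a ConcatDataset.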
-# Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/cityscapes.py # noqa -# and https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa - -import glob -import os -import os.path as osp -import tempfile -from collections import OrderedDict - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils -from mmcv.utils import print_log - -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class CityscapesDataset(CocoDataset): - - CLASSES = ('person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle') - - PALETTE = [(220, 20, 60), (255, 0, 0), (0, 0, 142), (0, 0, 70), - (0, 60, 100), (0, 80, 100), (0, 0, 230), (119, 11, 32)] - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - # obtain images that contain annotation - ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values()) - # obtain images that contain annotations of the required categories - ids_in_cat = set() - for i, class_id in enumerate(self.cat_ids): - ids_in_cat |= set(self.coco.cat_img_map[class_id]) - # merge the image id sets of the two conditions and use the merged set - # to filter out images if self.filter_empty_gt=True - ids_in_cat &= ids_with_ann - - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = img_info['id'] - ann_ids = self.coco.getAnnIds(imgIds=[img_id]) - ann_info = self.coco.loadAnns(ann_ids) - all_iscrowd = all([_['iscrowd'] for _ in ann_info]) - if self.filter_empty_gt and (self.img_ids[i] not in ids_in_cat - or all_iscrowd): - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - img_info (dict): Image info of an image. - ann_info (list[dict]): Annotation info of an image. - - Returns: - dict: A dict containing the following keys: bboxes, \ - bboxes_ignore, labels, masks, seg_map. \ - "masks" are already decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann['segmentation']) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=img_info['segm_file']) - - return ann - - def results2txt(self, results, outfile_prefix): - """Dump the detection results to a txt file. - - Args: - results (list[list | tuple]): Testing results of the - dataset. 
- outfile_prefix (str): The filename prefix of the json files. - If the prefix is "somepath/xxx", - the txt files will be named "somepath/xxx.txt". - - Returns: - list[str]: Result txt files which contains corresponding \ - instance segmentation images. - """ - try: - import cityscapesscripts.helpers.labels as CSLabels - except ImportError: - raise ImportError('Please run "pip install citscapesscripts" to ' - 'install cityscapesscripts first.') - result_files = [] - os.makedirs(outfile_prefix, exist_ok=True) - prog_bar = mmcv.ProgressBar(len(self)) - for idx in range(len(self)): - result = results[idx] - filename = self.data_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - pred_txt = osp.join(outfile_prefix, basename + '_pred.txt') - - bbox_result, segm_result = result - bboxes = np.vstack(bbox_result) - # segm results - if isinstance(segm_result, tuple): - # Some detectors use different scores for bbox and mask, - # like Mask Scoring R-CNN. Score of segm will be used instead - # of bbox score. - segms = mmcv.concat_list(segm_result[0]) - mask_score = segm_result[1] - else: - # use bbox score for mask score - segms = mmcv.concat_list(segm_result) - mask_score = [bbox[-1] for bbox in bboxes] - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - - assert len(bboxes) == len(segms) == len(labels) - num_instances = len(bboxes) - prog_bar.update() - with open(pred_txt, 'w') as fout: - for i in range(num_instances): - pred_class = labels[i] - classes = self.CLASSES[pred_class] - class_id = CSLabels.name2label[classes].id - score = mask_score[i] - mask = maskUtils.decode(segms[i]).astype(np.uint8) - png_filename = osp.join(outfile_prefix, - basename + f'_{i}_{classes}.png') - mmcv.imwrite(mask, png_filename) - fout.write(f'{osp.basename(png_filename)} {class_id} ' - f'{score}\n') - result_files.append(pred_txt) - - return result_files - - def format_results(self, results, txtfile_prefix=None): - """Format the results to txt (standard format for Cityscapes - evaluation). - - Args: - results (list): Testing results of the dataset. - txtfile_prefix (str | None): The prefix of txt files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing \ - the json filepaths, tmp_dir is the temporal directory created \ - for saving txt/png files when txtfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if txtfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - txtfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2txt(results, txtfile_prefix) - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - outfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=np.arange(0.5, 0.96, 0.05)): - """Evaluation in Cityscapes/COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. 
- metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - outfile_prefix (str | None): The prefix of output file. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If results are evaluated with COCO protocol, it would be the - prefix of output json file. For example, the metric is 'bbox' - and 'segm', then json files would be "a/b/prefix.bbox.json" and - "a/b/prefix.segm.json". - If results are evaluated with cityscapes protocol, it would be - the prefix of output txt/png files. The output files would be - png images under folder "a/b/prefix/xxx/" and the file name of - images would be written into a txt file - "a/b/prefix/xxx_pred.txt", where "xxx" is the video name of - cityscapes. If not specified, a temp file will be created. - Default: None. - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float]): IoU threshold used for evaluating - recalls. If set to a list, the average recall of all IoUs will - also be computed. Default: 0.5. - - Returns: - dict[str, float]: COCO style evaluation metric or cityscapes mAP \ - and AP@50. - """ - eval_results = dict() - - metrics = metric.copy() if isinstance(metric, list) else [metric] - - if 'cityscapes' in metrics: - eval_results.update( - self._evaluate_cityscapes(results, outfile_prefix, logger)) - metrics.remove('cityscapes') - - # left metrics are all coco metric - if len(metrics) > 0: - # create CocoDataset with CityscapesDataset annotation - self_coco = CocoDataset(self.ann_file, self.pipeline.transforms, - None, self.data_root, self.img_prefix, - self.seg_prefix, self.proposal_file, - self.test_mode, self.filter_empty_gt) - # TODO: remove this in the future - # reload annotations of correct class - self_coco.CLASSES = self.CLASSES - self_coco.data_infos = self_coco.load_annotations(self.ann_file) - eval_results.update( - self_coco.evaluate(results, metrics, logger, outfile_prefix, - classwise, proposal_nums, iou_thrs)) - - return eval_results - - def _evaluate_cityscapes(self, results, txtfile_prefix, logger): - """Evaluation in Cityscapes protocol. - - Args: - results (list): Testing results of the dataset. - txtfile_prefix (str | None): The prefix of output txt file - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: Cityscapes evaluation results, contains 'mAP' \ - and 'AP@50'. 
- """ - - try: - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as CSEval # noqa - except ImportError: - raise ImportError('Please run "pip install citscapesscripts" to ' - 'install cityscapesscripts first.') - msg = 'Evaluating in Cityscapes style' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - result_files, tmp_dir = self.format_results(results, txtfile_prefix) - - if tmp_dir is None: - result_dir = osp.join(txtfile_prefix, 'results') - else: - result_dir = osp.join(tmp_dir.name, 'results') - - eval_results = OrderedDict() - print_log(f'Evaluating results under {result_dir} ...', logger=logger) - - # set global states in cityscapes evaluation API - CSEval.args.cityscapesPath = os.path.join(self.img_prefix, '../..') - CSEval.args.predictionPath = os.path.abspath(result_dir) - CSEval.args.predictionWalk = None - CSEval.args.JSONOutput = False - CSEval.args.colorized = False - CSEval.args.gtInstancesFile = os.path.join(result_dir, - 'gtInstances.json') - CSEval.args.groundTruthSearch = os.path.join( - self.img_prefix.replace('leftImg8bit', 'gtFine'), - '*/*_gtFine_instanceIds.png') - - groundTruthImgList = glob.glob(CSEval.args.groundTruthSearch) - assert len(groundTruthImgList), 'Cannot find ground truth images' \ - f' in {CSEval.args.groundTruthSearch}.' - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(CSEval.getPrediction(gt, CSEval.args)) - CSEval_results = CSEval.evaluateImgLists(predictionImgList, - groundTruthImgList, - CSEval.args)['averages'] - - eval_results['mAP'] = CSEval_results['allAp'] - eval_results['AP@50'] = CSEval_results['allAp50%'] - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/coco.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/coco.py deleted file mode 100644 index bcdd4df3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/coco.py +++ /dev/null @@ -1,649 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import contextlib -import io -import itertools -import logging -import os.path as osp -import tempfile -import warnings -from collections import OrderedDict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from mmdet.core import eval_recalls -from .api_wrappers import COCO, COCOeval -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class CocoDataset(CustomDataset): - - CLASSES = ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', - 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', - 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', - 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', - 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', - 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', - 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', - 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', - 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', - 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', - 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', - 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', - 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush') - - PALETTE = [(220, 20, 60), (119, 11, 32), (0, 0, 142), (0, 0, 230), - (106, 0, 228), (0, 60, 100), (0, 80, 100), (0, 0, 70), - (0, 0, 192), (250, 170, 30), (100, 170, 30), (220, 220, 0), - (175, 116, 175), (250, 0, 30), (165, 42, 42), (255, 77, 255), - (0, 226, 252), (182, 182, 255), (0, 82, 0), (120, 166, 157), - (110, 76, 0), (174, 57, 255), (199, 100, 0), (72, 0, 118), - (255, 179, 240), (0, 125, 92), (209, 0, 151), (188, 208, 182), - (0, 220, 176), (255, 99, 164), (92, 0, 73), (133, 129, 255), - (78, 180, 255), (0, 228, 0), (174, 255, 243), (45, 89, 255), - (134, 134, 103), (145, 148, 174), (255, 208, 186), - (197, 226, 255), (171, 134, 1), (109, 63, 54), (207, 138, 255), - (151, 0, 95), (9, 80, 61), (84, 105, 51), (74, 65, 105), - (166, 196, 102), (208, 195, 210), (255, 109, 65), (0, 143, 149), - (179, 0, 194), (209, 99, 106), (5, 121, 0), (227, 255, 205), - (147, 186, 208), (153, 69, 1), (3, 95, 161), (163, 255, 0), - (119, 0, 170), (0, 182, 199), (0, 165, 120), (183, 130, 88), - (95, 32, 0), (130, 114, 135), (110, 129, 133), (166, 74, 118), - (219, 142, 185), (79, 210, 114), (178, 90, 62), (65, 70, 15), - (127, 167, 115), (59, 105, 106), (142, 108, 45), (196, 172, 0), - (95, 54, 80), (128, 76, 255), (201, 57, 1), (246, 0, 122), - (191, 162, 208)] - - def load_annotations(self, ann_file): - """Load annotation from COCO style annotation file. - - Args: - ann_file (str): Path of annotation file. - - Returns: - list[dict]: Annotation info from COCO api. - """ - - self.coco = COCO(ann_file) - # The order of returned `cat_ids` will not - # change with the order of the CLASSES - self.cat_ids = self.coco.get_cat_ids(cat_names=self.CLASSES) - - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.img_ids = self.coco.get_img_ids() - data_infos = [] - total_ann_ids = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - info['filename'] = info['file_name'] - data_infos.append(info) - ann_ids = self.coco.get_ann_ids(img_ids=[i]) - total_ann_ids.extend(ann_ids) - assert len(set(total_ann_ids)) == len( - total_ann_ids), f"Annotation ids in '{ann_file}' are not unique!" 
- return data_infos - - def get_ann_info(self, idx): - """Get COCO annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - return self._parse_ann_info(self.data_infos[idx], ann_info) - - def get_cat_ids(self, idx): - """Get COCO category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - return [ann['category_id'] for ann in ann_info] - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - # obtain images that contain annotation - ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values()) - # obtain images that contain annotations of the required categories - ids_in_cat = set() - for i, class_id in enumerate(self.cat_ids): - ids_in_cat |= set(self.coco.cat_img_map[class_id]) - # merge the image id sets of the two conditions and use the merged set - # to filter out images if self.filter_empty_gt=True - ids_in_cat &= ids_with_ann - - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = self.img_ids[i] - if self.filter_empty_gt and img_id not in ids_in_cat: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - ann_info (list[dict]): Annotation info of an image. - with_mask (bool): Whether to parse mask annotations. - - Returns: - dict: A dict containing the following keys: bboxes, bboxes_ignore,\ - labels, masks, seg_map. "masks" are raw annotations and not \ - decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - inter_w = max(0, min(x1 + w, img_info['width']) - max(x1, 0)) - inter_h = max(0, min(y1 + h, img_info['height']) - max(y1, 0)) - if inter_w * inter_h == 0: - continue - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann.get('segmentation', None)) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - seg_map = img_info['filename'].replace('jpg', 'png') - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=seg_map) - - return ann - - def xyxy2xywh(self, bbox): - """Convert ``xyxy`` style bounding boxes to ``xywh`` style for COCO - evaluation. - - Args: - bbox (numpy.ndarray): The bounding boxes, shape (4, ), in - ``xyxy`` order. 
- - Returns: - list[float]: The converted bounding boxes, in ``xywh`` order. - """ - - _bbox = bbox.tolist() - return [ - _bbox[0], - _bbox[1], - _bbox[2] - _bbox[0], - _bbox[3] - _bbox[1], - ] - - def _proposal2json(self, results): - """Convert proposal results to COCO json style.""" - json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - bboxes = results[idx] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = 1 - json_results.append(data) - return json_results - - def _det2json(self, results): - """Convert detection results to COCO json style.""" - json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - result = results[idx] - for label in range(len(result)): - bboxes = result[label] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = self.cat_ids[label] - json_results.append(data) - return json_results - - def _segm2json(self, results): - """Convert instance segmentation results to COCO json style.""" - bbox_json_results = [] - segm_json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - det, seg = results[idx] - for label in range(len(det)): - # bbox results - bboxes = det[label] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = self.cat_ids[label] - bbox_json_results.append(data) - - # segm results - # some detectors use different scores for bbox and mask - if isinstance(seg, tuple): - segms = seg[0][label] - mask_score = seg[1][label] - else: - segms = seg[label] - mask_score = [bbox[4] for bbox in bboxes] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(mask_score[i]) - data['category_id'] = self.cat_ids[label] - if isinstance(segms[i]['counts'], bytes): - segms[i]['counts'] = segms[i]['counts'].decode() - data['segmentation'] = segms[i] - segm_json_results.append(data) - return bbox_json_results, segm_json_results - - def results2json(self, results, outfile_prefix): - """Dump the detection results to a COCO style json file. - - There are 3 types of results: proposals, bbox predictions, mask - predictions, and they have different data types. This method will - automatically recognize the type, and dump them to json files. - - Args: - results (list[list | tuple | ndarray]): Testing results of the - dataset. - outfile_prefix (str): The filename prefix of the json files. If the - prefix is "somepath/xxx", the json files will be named - "somepath/xxx.bbox.json", "somepath/xxx.segm.json", - "somepath/xxx.proposal.json". - - Returns: - dict[str: str]: Possible keys are "bbox", "segm", "proposal", and \ - values are corresponding filenames. 
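# The deleted _det2json() combines xyxy2xywh() with the detector output to build the
# COCO-style "<prefix>.bbox.json" entries. A self-contained sketch; the box, score
# and category id below are illustrative placeholders.
import json
import numpy as np

def xyxy2xywh(bbox):
    x1, y1, x2, y2 = bbox[:4].tolist()
    return [x1, y1, x2 - x1, y2 - y1]

det = np.array([10.0, 20.0, 110.0, 70.0, 0.93])    # [x1, y1, x2, y2, score]
entry = dict(
    image_id=397133,
    bbox=xyxy2xywh(det),                            # -> [10.0, 20.0, 100.0, 50.0]
    score=float(det[4]),
    category_id=1,                                  # e.g. 'person' in COCO
)
print(json.dumps([entry]))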
- """ - result_files = dict() - if isinstance(results[0], list): - json_results = self._det2json(results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - mmcv.dump(json_results, result_files['bbox']) - elif isinstance(results[0], tuple): - json_results = self._segm2json(results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - result_files['segm'] = f'{outfile_prefix}.segm.json' - mmcv.dump(json_results[0], result_files['bbox']) - mmcv.dump(json_results[1], result_files['segm']) - elif isinstance(results[0], np.ndarray): - json_results = self._proposal2json(results) - result_files['proposal'] = f'{outfile_prefix}.proposal.json' - mmcv.dump(json_results, result_files['proposal']) - else: - raise TypeError('invalid type of results') - return result_files - - def fast_eval_recall(self, results, proposal_nums, iou_thrs, logger=None): - gt_bboxes = [] - for i in range(len(self.img_ids)): - ann_ids = self.coco.get_ann_ids(img_ids=self.img_ids[i]) - ann_info = self.coco.load_anns(ann_ids) - if len(ann_info) == 0: - gt_bboxes.append(np.zeros((0, 4))) - continue - bboxes = [] - for ann in ann_info: - if ann.get('ignore', False) or ann['iscrowd']: - continue - x1, y1, w, h = ann['bbox'] - bboxes.append([x1, y1, x1 + w, y1 + h]) - bboxes = np.array(bboxes, dtype=np.float32) - if bboxes.shape[0] == 0: - bboxes = np.zeros((0, 4)) - gt_bboxes.append(bboxes) - - recalls = eval_recalls( - gt_bboxes, results, proposal_nums, iou_thrs, logger=logger) - ar = recalls.mean(axis=1) - return ar - - def format_results(self, results, jsonfile_prefix=None, **kwargs): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[tuple | numpy.ndarray]): Testing results of the - dataset. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing \ - the json filepaths, tmp_dir is the temporal directory created \ - for saving json files when jsonfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2json(results, jsonfile_prefix) - return result_files, tmp_dir - - def evaluate_det_segm(self, - results, - result_files, - coco_gt, - metrics, - logger=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=None, - metric_items=None): - """Instance segmentation and object detection evaluation in COCO - protocol. - - Args: - results (list[list | tuple | dict]): Testing results of the - dataset. - result_files (dict[str, str]): a dict contains json file path. - coco_gt (COCO): COCO API object with ground truth annotation. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - classwise (bool): Whether to evaluating the AP for each class. 
- proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float], optional): IoU threshold used for - evaluating recalls/mAPs. If set to a list, the average of all - IoUs will also be computed. If not specified, [0.50, 0.55, - 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. - Default: None. - metric_items (list[str] | str, optional): Metric items that will - be returned. If not specified, ``['AR@100', 'AR@300', - 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be - used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', - 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when - ``metric=='bbox' or metric=='segm'``. - - Returns: - dict[str, float]: COCO style evaluation metric. - """ - if iou_thrs is None: - iou_thrs = np.linspace( - .5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True) - if metric_items is not None: - if not isinstance(metric_items, list): - metric_items = [metric_items] - - eval_results = OrderedDict() - for metric in metrics: - msg = f'Evaluating {metric}...' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - if metric == 'proposal_fast': - if isinstance(results[0], tuple): - raise KeyError('proposal_fast is not supported for ' - 'instance segmentation result.') - ar = self.fast_eval_recall( - results, proposal_nums, iou_thrs, logger='silent') - log_msg = [] - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - log_msg.append(f'\nAR@{num}\t{ar[i]:.4f}') - log_msg = ''.join(log_msg) - print_log(log_msg, logger=logger) - continue - - iou_type = 'bbox' if metric == 'proposal' else metric - if metric not in result_files: - raise KeyError(f'{metric} is not in results') - try: - predictions = mmcv.load(result_files[metric]) - if iou_type == 'segm': - # Refer to https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/coco.py#L331 # noqa - # When evaluating mask AP, if the results contain bbox, - # cocoapi will use the box area instead of the mask area - # for calculating the instance area. Though the overall AP - # is not affected, this leads to different - # small/medium/large mask AP results. - for x in predictions: - x.pop('bbox') - warnings.simplefilter('once') - warnings.warn( - 'The key "bbox" is deleted for more accurate mask AP ' - 'of small/medium/large instances since v2.12.0. 
This ' - 'does not change the overall mAP calculation.', - UserWarning) - coco_det = coco_gt.loadRes(predictions) - except IndexError: - print_log( - 'The testing results of the whole dataset is empty.', - logger=logger, - level=logging.ERROR) - break - - cocoEval = COCOeval(coco_gt, coco_det, iou_type) - cocoEval.params.catIds = self.cat_ids - cocoEval.params.imgIds = self.img_ids - cocoEval.params.maxDets = list(proposal_nums) - cocoEval.params.iouThrs = iou_thrs - # mapping of cocoEval.stats - coco_metric_names = { - 'mAP': 0, - 'mAP_50': 1, - 'mAP_75': 2, - 'mAP_s': 3, - 'mAP_m': 4, - 'mAP_l': 5, - 'AR@100': 6, - 'AR@300': 7, - 'AR@1000': 8, - 'AR_s@1000': 9, - 'AR_m@1000': 10, - 'AR_l@1000': 11 - } - if metric_items is not None: - for metric_item in metric_items: - if metric_item not in coco_metric_names: - raise KeyError( - f'metric item {metric_item} is not supported') - - if metric == 'proposal': - cocoEval.params.useCats = 0 - cocoEval.evaluate() - cocoEval.accumulate() - - # Save coco summarize print information to logger - redirect_string = io.StringIO() - with contextlib.redirect_stdout(redirect_string): - cocoEval.summarize() - print_log('\n' + redirect_string.getvalue(), logger=logger) - - if metric_items is None: - metric_items = [ - 'AR@100', 'AR@300', 'AR@1000', 'AR_s@1000', - 'AR_m@1000', 'AR_l@1000' - ] - - for item in metric_items: - val = float( - f'{cocoEval.stats[coco_metric_names[item]]:.3f}') - eval_results[item] = val - else: - cocoEval.evaluate() - cocoEval.accumulate() - - # Save coco summarize print information to logger - redirect_string = io.StringIO() - with contextlib.redirect_stdout(redirect_string): - cocoEval.summarize() - print_log('\n' + redirect_string.getvalue(), logger=logger) - - if classwise: # Compute per-category AP - # Compute per-category AP - # from https://github.com/facebookresearch/detectron2/ - precisions = cocoEval.eval['precision'] - # precision: (iou, recall, cls, area range, max dets) - assert len(self.cat_ids) == precisions.shape[2] - - results_per_category = [] - for idx, catId in enumerate(self.cat_ids): - # area range index 0: all area ranges - # max dets index -1: typically 100 per image - nm = self.coco.loadCats(catId)[0] - precision = precisions[:, :, idx, 0, -1] - precision = precision[precision > -1] - if precision.size: - ap = np.mean(precision) - else: - ap = float('nan') - results_per_category.append( - (f'{nm["name"]}', f'{float(ap):0.3f}')) - - num_columns = min(6, len(results_per_category) * 2) - results_flatten = list( - itertools.chain(*results_per_category)) - headers = ['category', 'AP'] * (num_columns // 2) - results_2d = itertools.zip_longest(*[ - results_flatten[i::num_columns] - for i in range(num_columns) - ]) - table_data = [headers] - table_data += [result for result in results_2d] - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - if metric_items is None: - metric_items = [ - 'mAP', 'mAP_50', 'mAP_75', 'mAP_s', 'mAP_m', 'mAP_l' - ] - - for metric_item in metric_items: - key = f'{metric}_{metric_item}' - val = float( - f'{cocoEval.stats[coco_metric_names[metric_item]]:.3f}' - ) - eval_results[key] = val - ap = cocoEval.stats[:6] - eval_results[f'{metric}_mAP_copypaste'] = ( - f'{ap[0]:.3f} {ap[1]:.3f} {ap[2]:.3f} {ap[3]:.3f} ' - f'{ap[4]:.3f} {ap[5]:.3f}') - - return eval_results - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=None, - metric_items=None): - """Evaluation in 
COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float], optional): IoU threshold used for - evaluating recalls/mAPs. If set to a list, the average of all - IoUs will also be computed. If not specified, [0.50, 0.55, - 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. - Default: None. - metric_items (list[str] | str, optional): Metric items that will - be returned. If not specified, ``['AR@100', 'AR@300', - 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be - used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', - 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when - ``metric=='bbox' or metric=='segm'``. - - Returns: - dict[str, float]: COCO style evaluation metric. - """ - - metrics = metric if isinstance(metric, list) else [metric] - allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast'] - for metric in metrics: - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - - coco_gt = self.coco - self.cat_ids = coco_gt.get_cat_ids(cat_names=self.CLASSES) - - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - eval_results = self.evaluate_det_segm(results, result_files, coco_gt, - metrics, logger, classwise, - proposal_nums, iou_thrs, - metric_items) - - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/coco_panoptic.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/coco_panoptic.py deleted file mode 100644 index 53ef5947..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/coco_panoptic.py +++ /dev/null @@ -1,692 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import itertools -import os -from collections import defaultdict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from mmdet.core import INSTANCE_OFFSET -from .api_wrappers import COCO, pq_compute_multi_core -from .builder import DATASETS -from .coco import CocoDataset - -try: - import panopticapi - from panopticapi.evaluation import VOID - from panopticapi.utils import id2rgb -except ImportError: - panopticapi = None - id2rgb = None - VOID = None - -__all__ = ['CocoPanopticDataset'] - - -class COCOPanoptic(COCO): - """This wrapper is for loading the panoptic style annotation file. - - The format is shown in the CocoPanopticDataset class. - - Args: - annotation_file (str): Path of annotation file. 
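# When iou_thrs is not supplied, the evaluate()/evaluate_det_segm() methods removed
# above fall back to the standard COCO sweep of ten IoU thresholds. Self-contained check:
import numpy as np

iou_thrs = np.linspace(0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True)
print(iou_thrs)   # [0.5  0.55 0.6  0.65 0.7  0.75 0.8  0.85 0.9  0.95]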
- """ - - def __init__(self, annotation_file=None): - if panopticapi is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - super(COCOPanoptic, self).__init__(annotation_file) - - def createIndex(self): - # create index - print('creating index...') - # anns stores 'segment_id -> annotation' - anns, cats, imgs = {}, {}, {} - img_to_anns, cat_to_imgs = defaultdict(list), defaultdict(list) - if 'annotations' in self.dataset: - for ann, img_info in zip(self.dataset['annotations'], - self.dataset['images']): - img_info['segm_file'] = ann['file_name'] - for seg_ann in ann['segments_info']: - # to match with instance.json - seg_ann['image_id'] = ann['image_id'] - seg_ann['height'] = img_info['height'] - seg_ann['width'] = img_info['width'] - img_to_anns[ann['image_id']].append(seg_ann) - # segment_id is not unique in coco dataset orz... - if seg_ann['id'] in anns.keys(): - anns[seg_ann['id']].append(seg_ann) - else: - anns[seg_ann['id']] = [seg_ann] - - if 'images' in self.dataset: - for img in self.dataset['images']: - imgs[img['id']] = img - - if 'categories' in self.dataset: - for cat in self.dataset['categories']: - cats[cat['id']] = cat - - if 'annotations' in self.dataset and 'categories' in self.dataset: - for ann in self.dataset['annotations']: - for seg_ann in ann['segments_info']: - cat_to_imgs[seg_ann['category_id']].append(ann['image_id']) - - print('index created!') - - self.anns = anns - self.imgToAnns = img_to_anns - self.catToImgs = cat_to_imgs - self.imgs = imgs - self.cats = cats - - def load_anns(self, ids=[]): - """Load anns with the specified ids. - - self.anns is a list of annotation lists instead of a - list of annotations. - - Args: - ids (int array): integer ids specifying anns - - Returns: - anns (object array): loaded ann objects - """ - anns = [] - - if hasattr(ids, '__iter__') and hasattr(ids, '__len__'): - # self.anns is a list of annotation lists instead of - # a list of annotations - for id in ids: - anns += self.anns[id] - return anns - elif type(ids) == int: - return self.anns[ids] - - -@DATASETS.register_module() -class CocoPanopticDataset(CocoDataset): - """Coco dataset for Panoptic segmentation. - - The annotation format is shown as follows. The `ann` field is optional - for testing. - - .. code-block:: none - - [ - { - 'filename': f'{image_id:012}.png', - 'image_id':9 - 'segments_info': { - [ - { - 'id': 8345037, (segment_id in panoptic png, - convert from rgb) - 'category_id': 51, - 'iscrowd': 0, - 'bbox': (x1, y1, w, h), - 'area': 24315, - 'segmentation': list,(coded mask) - }, - ... - } - } - }, - ... - ] - - Args: - ann_file (str): Panoptic segmentation annotation file path. - pipeline (list[dict]): Processing pipeline. - ins_ann_file (str): Instance segmentation annotation file path. - Defaults to None. - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. Defaults to None. - data_root (str, optional): Data root for ``ann_file``, - ``ins_ann_file`` ``img_prefix``, ``seg_prefix``, ``proposal_file`` - if specified. Defaults to None. - img_prefix (str, optional): Prefix of path to images. Defaults to ''. - seg_prefix (str, optional): Prefix of path to segmentation files. - Defaults to None. - proposal_file (str, optional): Path to proposal file. Defaults to None. - test_mode (bool, optional): If set True, annotation will not be loaded. - Defaults to False. 
- filter_empty_gt (bool, optional): If set true, images without bounding - boxes of the dataset's classes will be filtered out. This option - only works when `test_mode=False`, i.e., we never filter images - during tests. Defaults to True. - file_client_args (:obj:`mmcv.ConfigDict` | dict): file client args. - Defaults to dict(backend='disk'). - """ - CLASSES = [ - 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', - ' truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', - 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', - 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', - 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', - 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', - 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', - 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', - 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', - 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', - 'scissors', 'teddy bear', 'hair drier', 'toothbrush', 'banner', - 'blanket', 'bridge', 'cardboard', 'counter', 'curtain', 'door-stuff', - 'floor-wood', 'flower', 'fruit', 'gravel', 'house', 'light', - 'mirror-stuff', 'net', 'pillow', 'platform', 'playingfield', - 'railroad', 'river', 'road', 'roof', 'sand', 'sea', 'shelf', 'snow', - 'stairs', 'tent', 'towel', 'wall-brick', 'wall-stone', 'wall-tile', - 'wall-wood', 'water-other', 'window-blind', 'window-other', - 'tree-merged', 'fence-merged', 'ceiling-merged', 'sky-other-merged', - 'cabinet-merged', 'table-merged', 'floor-other-merged', - 'pavement-merged', 'mountain-merged', 'grass-merged', 'dirt-merged', - 'paper-merged', 'food-other-merged', 'building-other-merged', - 'rock-merged', 'wall-other-merged', 'rug-merged' - ] - THING_CLASSES = [ - 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', - 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', - 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', - 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', - 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', - 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', - 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', - 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', - 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', - 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', - 'scissors', 'teddy bear', 'hair drier', 'toothbrush' - ] - STUFF_CLASSES = [ - 'banner', 'blanket', 'bridge', 'cardboard', 'counter', 'curtain', - 'door-stuff', 'floor-wood', 'flower', 'fruit', 'gravel', 'house', - 'light', 'mirror-stuff', 'net', 'pillow', 'platform', 'playingfield', - 'railroad', 'river', 'road', 'roof', 'sand', 'sea', 'shelf', 'snow', - 'stairs', 'tent', 'towel', 'wall-brick', 'wall-stone', 'wall-tile', - 'wall-wood', 'water-other', 'window-blind', 'window-other', - 'tree-merged', 'fence-merged', 'ceiling-merged', 'sky-other-merged', - 'cabinet-merged', 'table-merged', 'floor-other-merged', - 'pavement-merged', 'mountain-merged', 'grass-merged', 'dirt-merged', - 'paper-merged', 
'food-other-merged', 'building-other-merged', - 'rock-merged', 'wall-other-merged', 'rug-merged' - ] - - PALETTE = [(220, 20, 60), (119, 11, 32), (0, 0, 142), (0, 0, 230), - (106, 0, 228), (0, 60, 100), (0, 80, 100), (0, 0, 70), - (0, 0, 192), (250, 170, 30), (100, 170, 30), (220, 220, 0), - (175, 116, 175), (250, 0, 30), (165, 42, 42), (255, 77, 255), - (0, 226, 252), (182, 182, 255), (0, 82, 0), (120, 166, 157), - (110, 76, 0), (174, 57, 255), (199, 100, 0), (72, 0, 118), - (255, 179, 240), (0, 125, 92), (209, 0, 151), (188, 208, 182), - (0, 220, 176), (255, 99, 164), (92, 0, 73), (133, 129, 255), - (78, 180, 255), (0, 228, 0), (174, 255, 243), (45, 89, 255), - (134, 134, 103), (145, 148, 174), (255, 208, 186), - (197, 226, 255), (171, 134, 1), (109, 63, 54), (207, 138, 255), - (151, 0, 95), (9, 80, 61), (84, 105, 51), (74, 65, 105), - (166, 196, 102), (208, 195, 210), (255, 109, 65), (0, 143, 149), - (179, 0, 194), (209, 99, 106), (5, 121, 0), (227, 255, 205), - (147, 186, 208), (153, 69, 1), (3, 95, 161), (163, 255, 0), - (119, 0, 170), (0, 182, 199), (0, 165, 120), (183, 130, 88), - (95, 32, 0), (130, 114, 135), (110, 129, 133), (166, 74, 118), - (219, 142, 185), (79, 210, 114), (178, 90, 62), (65, 70, 15), - (127, 167, 115), (59, 105, 106), (142, 108, 45), (196, 172, 0), - (95, 54, 80), (128, 76, 255), (201, 57, 1), (246, 0, 122), - (191, 162, 208), (255, 255, 128), (147, 211, 203), - (150, 100, 100), (168, 171, 172), (146, 112, 198), - (210, 170, 100), (92, 136, 89), (218, 88, 184), (241, 129, 0), - (217, 17, 255), (124, 74, 181), (70, 70, 70), (255, 228, 255), - (154, 208, 0), (193, 0, 92), (76, 91, 113), (255, 180, 195), - (106, 154, 176), - (230, 150, 140), (60, 143, 255), (128, 64, 128), (92, 82, 55), - (254, 212, 124), (73, 77, 174), (255, 160, 98), (255, 255, 255), - (104, 84, 109), (169, 164, 131), (225, 199, 255), (137, 54, 74), - (135, 158, 223), (7, 246, 231), (107, 255, 200), (58, 41, 149), - (183, 121, 142), (255, 73, 97), (107, 142, 35), (190, 153, 153), - (146, 139, 141), - (70, 130, 180), (134, 199, 156), (209, 226, 140), (96, 36, 108), - (96, 96, 96), (64, 170, 64), (152, 251, 152), (208, 229, 228), - (206, 186, 171), (152, 161, 64), (116, 112, 0), (0, 114, 143), - (102, 102, 156), (250, 141, 255)] - - def __init__(self, - ann_file, - pipeline, - ins_ann_file=None, - classes=None, - data_root=None, - img_prefix='', - seg_prefix=None, - proposal_file=None, - test_mode=False, - filter_empty_gt=True, - file_client_args=dict(backend='disk')): - super().__init__( - ann_file, - pipeline, - classes=classes, - data_root=data_root, - img_prefix=img_prefix, - seg_prefix=seg_prefix, - proposal_file=proposal_file, - test_mode=test_mode, - filter_empty_gt=filter_empty_gt, - file_client_args=file_client_args) - self.ins_ann_file = ins_ann_file - - def load_annotations(self, ann_file): - """Load annotation from COCO Panoptic style annotation file. - - Args: - ann_file (str): Path of annotation file. - - Returns: - list[dict]: Annotation info from COCO api. - """ - self.coco = COCOPanoptic(ann_file) - self.cat_ids = self.coco.get_cat_ids() - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.categories = self.coco.cats - self.img_ids = self.coco.get_img_ids() - data_infos = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - info['filename'] = info['file_name'] - info['segm_file'] = info['filename'].replace('jpg', 'png') - data_infos.append(info) - return data_infos - - def get_ann_info(self, idx): - """Get COCO annotation by index. 
- - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - # filter out unmatched images - ann_info = [i for i in ann_info if i['image_id'] == img_id] - return self._parse_ann_info(self.data_infos[idx], ann_info) - - def _parse_ann_info(self, img_info, ann_info): - """Parse annotations and load panoptic ground truths. - - Args: - img_info (int): Image info of an image. - ann_info (list[dict]): Annotation info of an image. - - Returns: - dict: A dict containing the following keys: bboxes, bboxes_ignore, - labels, masks, seg_map. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_mask_infos = [] - - for i, ann in enumerate(ann_info): - x1, y1, w, h = ann['bbox'] - if ann['area'] <= 0 or w < 1 or h < 1: - continue - bbox = [x1, y1, x1 + w, y1 + h] - - category_id = ann['category_id'] - contiguous_cat_id = self.cat2label[category_id] - - is_thing = self.coco.load_cats(ids=category_id)[0]['isthing'] - if is_thing: - is_crowd = ann.get('iscrowd', False) - if not is_crowd: - gt_bboxes.append(bbox) - gt_labels.append(contiguous_cat_id) - else: - gt_bboxes_ignore.append(bbox) - is_thing = False - - mask_info = { - 'id': ann['id'], - 'category': contiguous_cat_id, - 'is_thing': is_thing - } - gt_mask_infos.append(mask_info) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_mask_infos, - seg_map=img_info['segm_file']) - - return ann - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - ids_with_ann = [] - # check whether images have legal thing annotations. 
- for lists in self.coco.anns.values(): - for item in lists: - category_id = item['category_id'] - is_thing = self.coco.load_cats(ids=category_id)[0]['isthing'] - if not is_thing: - continue - ids_with_ann.append(item['image_id']) - ids_with_ann = set(ids_with_ann) - - valid_inds = [] - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = self.img_ids[i] - if self.filter_empty_gt and img_id not in ids_with_ann: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _pan2json(self, results, outfile_prefix): - """Convert panoptic results to COCO panoptic json style.""" - label2cat = dict((v, k) for (k, v) in self.cat2label.items()) - pred_annotations = [] - outdir = os.path.join(os.path.dirname(outfile_prefix), 'panoptic') - - for idx in range(len(self)): - img_id = self.img_ids[idx] - segm_file = self.data_infos[idx]['segm_file'] - pan = results[idx] - - pan_labels = np.unique(pan) - segm_info = [] - for pan_label in pan_labels: - sem_label = pan_label % INSTANCE_OFFSET - # We reserve the length of self.CLASSES for VOID label - if sem_label == len(self.CLASSES): - continue - # convert sem_label to json label - cat_id = label2cat[sem_label] - is_thing = self.categories[cat_id]['isthing'] - mask = pan == pan_label - area = mask.sum() - segm_info.append({ - 'id': int(pan_label), - 'category_id': cat_id, - 'isthing': is_thing, - 'area': int(area) - }) - # evaluation script uses 0 for VOID label. - pan[pan % INSTANCE_OFFSET == len(self.CLASSES)] = VOID - pan = id2rgb(pan).astype(np.uint8) - mmcv.imwrite(pan[:, :, ::-1], os.path.join(outdir, segm_file)) - record = { - 'image_id': img_id, - 'segments_info': segm_info, - 'file_name': segm_file - } - pred_annotations.append(record) - pan_json_results = dict(annotations=pred_annotations) - return pan_json_results - - def results2json(self, results, outfile_prefix): - """Dump the results to a COCO style json file. - - There are 4 types of results: proposals, bbox predictions, mask - predictions, panoptic segmentation predictions, and they have - different data types. This method will automatically recognize - the type, and dump them to json files. - - .. code-block:: none - - [ - { - 'pan_results': np.array, # shape (h, w) - # ins_results which includes bboxes and RLE encoded masks - # is optional. - 'ins_results': (list[np.array], list[list[str]]) - }, - ... - ] - - Args: - results (list[dict]): Testing results of the dataset. - outfile_prefix (str): The filename prefix of the json files. If the - prefix is "somepath/xxx", the json files will be named - "somepath/xxx.panoptic.json", "somepath/xxx.bbox.json", - "somepath/xxx.segm.json" - - Returns: - dict[str: str]: Possible keys are "panoptic", "bbox", "segm", \ - "proposal", and values are corresponding filenames. 
- """ - result_files = dict() - # panoptic segmentation results - if 'pan_results' in results[0]: - pan_results = [result['pan_results'] for result in results] - pan_json_results = self._pan2json(pan_results, outfile_prefix) - result_files['panoptic'] = f'{outfile_prefix}.panoptic.json' - mmcv.dump(pan_json_results, result_files['panoptic']) - - # instance segmentation results - if 'ins_results' in results[0]: - ins_results = [result['ins_results'] for result in results] - bbox_json_results, segm_json_results = self._segm2json(ins_results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - result_files['segm'] = f'{outfile_prefix}.segm.json' - mmcv.dump(bbox_json_results, result_files['bbox']) - mmcv.dump(segm_json_results, result_files['segm']) - - return result_files - - def evaluate_pan_json(self, - result_files, - outfile_prefix, - logger=None, - classwise=False, - nproc=32): - """Evaluate PQ according to the panoptic results json file.""" - imgs = self.coco.imgs - gt_json = self.coco.img_ann_map # image to annotations - gt_json = [{ - 'image_id': k, - 'segments_info': v, - 'file_name': imgs[k]['segm_file'] - } for k, v in gt_json.items()] - pred_json = mmcv.load(result_files['panoptic']) - pred_json = dict( - (el['image_id'], el) for el in pred_json['annotations']) - - # match the gt_anns and pred_anns in the same image - matched_annotations_list = [] - for gt_ann in gt_json: - img_id = gt_ann['image_id'] - if img_id not in pred_json.keys(): - raise Exception('no prediction for the image' - ' with id: {}'.format(img_id)) - matched_annotations_list.append((gt_ann, pred_json[img_id])) - - gt_folder = self.seg_prefix - pred_folder = os.path.join(os.path.dirname(outfile_prefix), 'panoptic') - - pq_stat = pq_compute_multi_core( - matched_annotations_list, - gt_folder, - pred_folder, - self.categories, - self.file_client, - nproc=nproc) - - metrics = [('All', None), ('Things', True), ('Stuff', False)] - pq_results = {} - - for name, isthing in metrics: - pq_results[name], classwise_results = pq_stat.pq_average( - self.categories, isthing=isthing) - if name == 'All': - pq_results['classwise'] = classwise_results - - classwise_results = None - if classwise: - classwise_results = { - k: v - for k, v in zip(self.CLASSES, pq_results['classwise'].values()) - } - print_panoptic_table(pq_results, classwise_results, logger=logger) - results = parse_pq_results(pq_results) - results['PQ_copypaste'] = ( - f'{results["PQ"]:.3f} {results["SQ"]:.3f} ' - f'{results["RQ"]:.3f} ' - f'{results["PQ_th"]:.3f} {results["SQ_th"]:.3f} ' - f'{results["RQ_th"]:.3f} ' - f'{results["PQ_st"]:.3f} {results["SQ_st"]:.3f} ' - f'{results["RQ_st"]:.3f}') - - return results - - def evaluate(self, - results, - metric='PQ', - logger=None, - jsonfile_prefix=None, - classwise=False, - nproc=32, - **kwargs): - """Evaluation in COCO Panoptic protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. 'PQ', 'bbox', - 'segm', 'proposal' are supported. 'pq' will be regarded as 'PQ. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - classwise (bool): Whether to print classwise evaluation results. - Default: False. 
- nproc (int): Number of processes for panoptic quality computing. - Defaults to 32. When `nproc` exceeds the number of cpu cores, - the number of cpu cores is used. - - Returns: - dict[str, float]: COCO Panoptic style evaluation metric. - """ - metrics = metric if isinstance(metric, list) else [metric] - # Compatible with lowercase 'pq' - metrics = ['PQ' if metric == 'pq' else metric for metric in metrics] - allowed_metrics = ['PQ', 'bbox', 'segm', 'proposal'] - for metric in metrics: - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - eval_results = {} - - outfile_prefix = os.path.join(tmp_dir.name, 'results') \ - if tmp_dir is not None else jsonfile_prefix - if 'PQ' in metrics: - eval_pan_results = self.evaluate_pan_json( - result_files, outfile_prefix, logger, classwise, nproc=nproc) - - eval_results.update(eval_pan_results) - metrics.remove('PQ') - - if (('bbox' in metrics) or ('segm' in metrics) - or ('proposal' in metrics)): - - assert 'ins_results' in results[0], 'instance segmentation' \ - 'results are absent from results' - - assert self.ins_ann_file is not None, 'Annotation '\ - 'file for instance segmentation or object detection ' \ - 'shuold not be None' - - coco_gt = COCO(self.ins_ann_file) - panoptic_cat_ids = self.cat_ids - self.cat_ids = coco_gt.get_cat_ids(cat_names=self.THING_CLASSES) - - eval_ins_results = self.evaluate_det_segm(results, result_files, - coco_gt, metrics, logger, - classwise, **kwargs) - self.cat_ids = panoptic_cat_ids - eval_results.update(eval_ins_results) - - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results - - -def parse_pq_results(pq_results): - """Parse the Panoptic Quality results.""" - result = dict() - result['PQ'] = 100 * pq_results['All']['pq'] - result['SQ'] = 100 * pq_results['All']['sq'] - result['RQ'] = 100 * pq_results['All']['rq'] - result['PQ_th'] = 100 * pq_results['Things']['pq'] - result['SQ_th'] = 100 * pq_results['Things']['sq'] - result['RQ_th'] = 100 * pq_results['Things']['rq'] - result['PQ_st'] = 100 * pq_results['Stuff']['pq'] - result['SQ_st'] = 100 * pq_results['Stuff']['sq'] - result['RQ_st'] = 100 * pq_results['Stuff']['rq'] - return result - - -def print_panoptic_table(pq_results, classwise_results=None, logger=None): - """Print the panoptic evaluation results table. - - Args: - pq_results(dict): The Panoptic Quality results. - classwise_results(dict | None): The classwise Panoptic Quality results. - The keys are class names and the values are metrics. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. 
- """ - - headers = ['', 'PQ', 'SQ', 'RQ', 'categories'] - data = [headers] - for name in ['All', 'Things', 'Stuff']: - numbers = [ - f'{(pq_results[name][k] * 100):0.3f}' for k in ['pq', 'sq', 'rq'] - ] - row = [name] + numbers + [pq_results[name]['n']] - data.append(row) - table = AsciiTable(data) - print_log('Panoptic Evaluation Results:\n' + table.table, logger=logger) - - if classwise_results is not None: - class_metrics = [(name, ) + tuple(f'{(metrics[k] * 100):0.3f}' - for k in ['pq', 'sq', 'rq']) - for name, metrics in classwise_results.items()] - num_columns = min(8, len(class_metrics) * 4) - results_flatten = list(itertools.chain(*class_metrics)) - headers = ['category', 'PQ', 'SQ', 'RQ'] * (num_columns // 4) - results_2d = itertools.zip_longest( - *[results_flatten[i::num_columns] for i in range(num_columns)]) - data = [headers] - data += [result for result in results_2d] - table = AsciiTable(data) - print_log( - 'Classwise Panoptic Evaluation Results:\n' + table.table, - logger=logger) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/custom.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/custom.py deleted file mode 100644 index a4d82589..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/custom.py +++ /dev/null @@ -1,410 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings -from collections import OrderedDict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable -from torch.utils.data import Dataset - -from mmdet.core import eval_map, eval_recalls -from .builder import DATASETS -from .pipelines import Compose - - -@DATASETS.register_module() -class CustomDataset(Dataset): - """Custom dataset for detection. - - The annotation format is shown as follows. The `ann` field is optional for - testing. - - .. code-block:: none - - [ - { - 'filename': 'a.jpg', - 'width': 1280, - 'height': 720, - 'ann': { - 'bboxes': (n, 4) in (x1, y1, x2, y2) order. - 'labels': (n, ), - 'bboxes_ignore': (k, 4), (optional field) - 'labels_ignore': (k, 4) (optional field) - } - }, - ... - ] - - Args: - ann_file (str): Annotation file path. - pipeline (list[dict]): Processing pipeline. - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. Default: None. - data_root (str, optional): Data root for ``ann_file``, - ``img_prefix``, ``seg_prefix``, ``proposal_file`` if specified. - test_mode (bool, optional): If set True, annotation will not be loaded. - filter_empty_gt (bool, optional): If set true, images without bounding - boxes of the dataset's classes will be filtered out. This option - only works when `test_mode=False`, i.e., we never filter images - during tests. 
- """ - - CLASSES = None - - PALETTE = None - - def __init__(self, - ann_file, - pipeline, - classes=None, - data_root=None, - img_prefix='', - seg_prefix=None, - proposal_file=None, - test_mode=False, - filter_empty_gt=True, - file_client_args=dict(backend='disk')): - self.ann_file = ann_file - self.data_root = data_root - self.img_prefix = img_prefix - self.seg_prefix = seg_prefix - self.proposal_file = proposal_file - self.test_mode = test_mode - self.filter_empty_gt = filter_empty_gt - self.file_client = mmcv.FileClient(**file_client_args) - self.CLASSES = self.get_classes(classes) - - # join paths if data_root is specified - if self.data_root is not None: - if not osp.isabs(self.ann_file): - self.ann_file = osp.join(self.data_root, self.ann_file) - if not (self.img_prefix is None or osp.isabs(self.img_prefix)): - self.img_prefix = osp.join(self.data_root, self.img_prefix) - if not (self.seg_prefix is None or osp.isabs(self.seg_prefix)): - self.seg_prefix = osp.join(self.data_root, self.seg_prefix) - if not (self.proposal_file is None - or osp.isabs(self.proposal_file)): - self.proposal_file = osp.join(self.data_root, - self.proposal_file) - # load annotations (and proposals) - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(self.ann_file) as local_path: - self.data_infos = self.load_annotations(local_path) - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {self.ann_file} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - self.data_infos = self.load_annotations(self.ann_file) - - if self.proposal_file is not None: - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path( - self.proposal_file) as local_path: - self.proposals = self.load_proposals(local_path) - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {self.ann_file} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - self.proposals = self.load_proposals(self.proposal_file) - else: - self.proposals = None - - # filter images too small and containing no annotations - if not test_mode: - valid_inds = self._filter_imgs() - self.data_infos = [self.data_infos[i] for i in valid_inds] - if self.proposals is not None: - self.proposals = [self.proposals[i] for i in valid_inds] - # set group flag for the sampler - self._set_group_flag() - - # processing pipeline - self.pipeline = Compose(pipeline) - - def __len__(self): - """Total number of samples of data.""" - return len(self.data_infos) - - def load_annotations(self, ann_file): - """Load annotation from annotation file.""" - return mmcv.load(ann_file) - - def load_proposals(self, proposal_file): - """Load proposal from proposal file.""" - return mmcv.load(proposal_file) - - def get_ann_info(self, idx): - """Get annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.data_infos[idx]['ann'] - - def get_cat_ids(self, idx): - """Get category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. 
- """ - - return self.data_infos[idx]['ann']['labels'].astype(np.int).tolist() - - def pre_pipeline(self, results): - """Prepare results dict for pipeline.""" - results['img_prefix'] = self.img_prefix - results['seg_prefix'] = self.seg_prefix - results['proposal_file'] = self.proposal_file - results['bbox_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - - def _filter_imgs(self, min_size=32): - """Filter images too small.""" - if self.filter_empty_gt: - warnings.warn( - 'CustomDataset does not support filtering empty gt images.') - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - - def _set_group_flag(self): - """Set flag according to image aspect ratio. - - Images with aspect ratio greater than 1 will be set as group 1, - otherwise group 0. - """ - self.flag = np.zeros(len(self), dtype=np.uint8) - for i in range(len(self)): - img_info = self.data_infos[i] - if img_info['width'] / img_info['height'] > 1: - self.flag[i] = 1 - - def _rand_another(self, idx): - """Get another random index from the same group as the given index.""" - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) - - def __getitem__(self, idx): - """Get training/test data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training/test data (with annotation if `test_mode` is set \ - True). - """ - - if self.test_mode: - return self.prepare_test_img(idx) - while True: - data = self.prepare_train_img(idx) - if data is None: - idx = self._rand_another(idx) - continue - return data - - def prepare_train_img(self, idx): - """Get training data and annotations after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training data and annotation after pipeline with new keys \ - introduced by pipeline. - """ - - img_info = self.data_infos[idx] - ann_info = self.get_ann_info(idx) - results = dict(img_info=img_info, ann_info=ann_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Get testing data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Testing data after pipeline with new keys introduced by \ - pipeline. - """ - - img_info = self.data_infos[idx] - results = dict(img_info=img_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - @classmethod - def get_classes(cls, classes=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - - Returns: - tuple[str] or list[str]: Names of categories of the dataset. 
- """ - if classes is None: - return cls.CLASSES - - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - return class_names - - def get_cat2imgs(self): - """Get a dict with class as key and img_ids as values, which will be - used in :class:`ClassAwareSampler`. - - Returns: - dict[list]: A dict of per-label image list, - the item of the dict indicates a label index, - corresponds to the image index that contains the label. - """ - if self.CLASSES is None: - raise ValueError('self.CLASSES can not be None') - # sort the label index - cat2imgs = {i: [] for i in range(len(self.CLASSES))} - for i in range(len(self)): - cat_ids = set(self.get_cat_ids(i)) - for cat in cat_ids: - cat2imgs[cat].append(i) - return cat2imgs - - def format_results(self, results, **kwargs): - """Place holder to format result to dataset specific output.""" - - def evaluate(self, - results, - metric='mAP', - logger=None, - proposal_nums=(100, 300, 1000), - iou_thr=0.5, - scale_ranges=None): - """Evaluate the dataset. - - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thr (float | list[float]): IoU threshold. Default: 0.5. - scale_ranges (list[tuple] | None): Scale ranges for evaluating mAP. - Default: None. - """ - - if not isinstance(metric, str): - assert len(metric) == 1 - metric = metric[0] - allowed_metrics = ['mAP', 'recall'] - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - annotations = [self.get_ann_info(i) for i in range(len(self))] - eval_results = OrderedDict() - iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr - if metric == 'mAP': - assert isinstance(iou_thrs, list) - mean_aps = [] - for iou_thr in iou_thrs: - print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}') - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=scale_ranges, - iou_thr=iou_thr, - dataset=self.CLASSES, - logger=logger) - mean_aps.append(mean_ap) - eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3) - eval_results['mAP'] = sum(mean_aps) / len(mean_aps) - elif metric == 'recall': - gt_bboxes = [ann['bboxes'] for ann in annotations] - recalls = eval_recalls( - gt_bboxes, results, proposal_nums, iou_thr, logger=logger) - for i, num in enumerate(proposal_nums): - for j, iou in enumerate(iou_thrs): - eval_results[f'recall@{num}@{iou}'] = recalls[i, j] - if recalls.shape[1] > 1: - ar = recalls.mean(axis=1) - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - return eval_results - - def __repr__(self): - """Print the number of instance number.""" - dataset_type = 'Test' if self.test_mode else 'Train' - result = (f'\n{self.__class__.__name__} {dataset_type} dataset ' - f'with number of images {len(self)}, ' - f'and instance counts: \n') - if self.CLASSES is None: - result += 'Category names are not provided. 
\n' - return result - instance_count = np.zeros(len(self.CLASSES) + 1).astype(int) - # count the instance number in each image - for idx in range(len(self)): - label = self.get_ann_info(idx)['labels'] - unique, counts = np.unique(label, return_counts=True) - if len(unique) > 0: - # add the occurrence number to each class - instance_count[unique] += counts - else: - # background is the last index - instance_count[-1] += 1 - # create a table with category count - table_data = [['category', 'count'] * 5] - row_data = [] - for cls, count in enumerate(instance_count): - if cls < len(self.CLASSES): - row_data += [f'{cls} [{self.CLASSES[cls]}]', f'{count}'] - else: - # add the background number - row_data += ['-1 background', f'{count}'] - if len(row_data) == 10: - table_data.append(row_data) - row_data = [] - if len(row_data) >= 2: - if row_data[-1] == '0': - row_data = row_data[:-2] - if len(row_data) >= 2: - table_data.append([]) - table_data.append(row_data) - - table = AsciiTable(table_data) - result += table.table - return result diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/dataset_wrappers.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/dataset_wrappers.py deleted file mode 100644 index e62b88eb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/dataset_wrappers.py +++ /dev/null @@ -1,456 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import bisect -import collections -import copy -import math -from collections import defaultdict - -import numpy as np -from mmcv.utils import build_from_cfg, print_log -from torch.utils.data.dataset import ConcatDataset as _ConcatDataset - -from .builder import DATASETS, PIPELINES -from .coco import CocoDataset - - -@DATASETS.register_module() -class ConcatDataset(_ConcatDataset): - """A wrapper of concatenated dataset. - - Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but - concat the group flag for image aspect ratio. - - Args: - datasets (list[:obj:`Dataset`]): A list of datasets. - separate_eval (bool): Whether to evaluate the results - separately if it is used as validation dataset. - Defaults to True. - """ - - def __init__(self, datasets, separate_eval=True): - super(ConcatDataset, self).__init__(datasets) - self.CLASSES = datasets[0].CLASSES - self.PALETTE = getattr(datasets[0], 'PALETTE', None) - self.separate_eval = separate_eval - if not separate_eval: - if any([isinstance(ds, CocoDataset) for ds in datasets]): - raise NotImplementedError( - 'Evaluating concatenated CocoDataset as a whole is not' - ' supported! Please set "separate_eval=True"') - elif len(set([type(ds) for ds in datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types') - - if hasattr(datasets[0], 'flag'): - flags = [] - for i in range(0, len(datasets)): - flags.append(datasets[i].flag) - self.flag = np.concatenate(flags) - - def get_cat_ids(self, idx): - """Get category ids of concatenated dataset by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - if idx < 0: - if -idx > len(self): - raise ValueError( - 'absolute value of index should not exceed dataset length') - idx = len(self) + idx - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - return self.datasets[dataset_idx].get_cat_ids(sample_idx) - - def get_ann_info(self, idx): - """Get annotation of concatenated dataset by index. 
- - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - if idx < 0: - if -idx > len(self): - raise ValueError( - 'absolute value of index should not exceed dataset length') - idx = len(self) + idx - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - return self.datasets[dataset_idx].get_ann_info(sample_idx) - - def evaluate(self, results, logger=None, **kwargs): - """Evaluate the results. - - Args: - results (list[list | tuple]): Testing results of the dataset. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: AP results of the total dataset or each separate - dataset if `self.separate_eval=True`. - """ - assert len(results) == self.cumulative_sizes[-1], \ - ('Dataset and results have different sizes: ' - f'{self.cumulative_sizes[-1]} v.s. {len(results)}') - - # Check whether all the datasets support evaluation - for dataset in self.datasets: - assert hasattr(dataset, 'evaluate'), \ - f'{type(dataset)} does not implement evaluate function' - - if self.separate_eval: - dataset_idx = -1 - total_eval_results = dict() - for size, dataset in zip(self.cumulative_sizes, self.datasets): - start_idx = 0 if dataset_idx == -1 else \ - self.cumulative_sizes[dataset_idx] - end_idx = self.cumulative_sizes[dataset_idx + 1] - - results_per_dataset = results[start_idx:end_idx] - print_log( - f'\nEvaluateing {dataset.ann_file} with ' - f'{len(results_per_dataset)} images now', - logger=logger) - - eval_results_per_dataset = dataset.evaluate( - results_per_dataset, logger=logger, **kwargs) - dataset_idx += 1 - for k, v in eval_results_per_dataset.items(): - total_eval_results.update({f'{dataset_idx}_{k}': v}) - - return total_eval_results - elif any([isinstance(ds, CocoDataset) for ds in self.datasets]): - raise NotImplementedError( - 'Evaluating concatenated CocoDataset as a whole is not' - ' supported! Please set "separate_eval=True"') - elif len(set([type(ds) for ds in self.datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types') - else: - original_data_infos = self.datasets[0].data_infos - self.datasets[0].data_infos = sum( - [dataset.data_infos for dataset in self.datasets], []) - eval_results = self.datasets[0].evaluate( - results, logger=logger, **kwargs) - self.datasets[0].data_infos = original_data_infos - return eval_results - - -@DATASETS.register_module() -class RepeatDataset: - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. - """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - self.CLASSES = dataset.CLASSES - self.PALETTE = getattr(dataset, 'PALETTE', None) - if hasattr(self.dataset, 'flag'): - self.flag = np.tile(self.dataset.flag, times) - - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - return self.dataset[idx % self._ori_len] - - def get_cat_ids(self, idx): - """Get category ids of repeat dataset by index. - - Args: - idx (int): Index of data. 
- - Returns: - list[int]: All categories in the image of specified index. - """ - - return self.dataset.get_cat_ids(idx % self._ori_len) - - def get_ann_info(self, idx): - """Get annotation of repeat dataset by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.dataset.get_ann_info(idx % self._ori_len) - - def __len__(self): - """Length after repetition.""" - return self.times * self._ori_len - - -# Modified from https://github.com/facebookresearch/detectron2/blob/41d475b75a230221e21d9cac5d69655e3415e3a4/detectron2/data/samplers/distributed_sampler.py#L57 # noqa -@DATASETS.register_module() -class ClassBalancedDataset: - """A wrapper of repeated dataset with repeat factor. - - Suitable for training on class imbalanced datasets like LVIS. Following - the sampling strategy in the `paper `_, - in each epoch, an image may appear multiple times based on its - "repeat factor". - The repeat factor for an image is a function of the frequency the rarest - category labeled in that image. The "frequency of category c" in [0, 1] - is defined by the fraction of images in the training set (without repeats) - in which category c appears. - The dataset needs to instantiate :func:`self.get_cat_ids` to support - ClassBalancedDataset. - - The repeat factor is computed as followed. - - 1. For each category c, compute the fraction # of images - that contain it: :math:`f(c)` - 2. For each category c, compute the category-level repeat factor: - :math:`r(c) = max(1, sqrt(t/f(c)))` - 3. For each image I, compute the image-level repeat factor: - :math:`r(I) = max_{c in I} r(c)` - - Args: - dataset (:obj:`CustomDataset`): The dataset to be repeated. - oversample_thr (float): frequency threshold below which data is - repeated. For categories with ``f_c >= oversample_thr``, there is - no oversampling. For categories with ``f_c < oversample_thr``, the - degree of oversampling following the square-root inverse frequency - heuristic above. - filter_empty_gt (bool, optional): If set true, images without bounding - boxes will not be oversampled. Otherwise, they will be categorized - as the pure background class and involved into the oversampling. - Default: True. - """ - - def __init__(self, dataset, oversample_thr, filter_empty_gt=True): - self.dataset = dataset - self.oversample_thr = oversample_thr - self.filter_empty_gt = filter_empty_gt - self.CLASSES = dataset.CLASSES - self.PALETTE = getattr(dataset, 'PALETTE', None) - - repeat_factors = self._get_repeat_factors(dataset, oversample_thr) - repeat_indices = [] - for dataset_idx, repeat_factor in enumerate(repeat_factors): - repeat_indices.extend([dataset_idx] * math.ceil(repeat_factor)) - self.repeat_indices = repeat_indices - - flags = [] - if hasattr(self.dataset, 'flag'): - for flag, repeat_factor in zip(self.dataset.flag, repeat_factors): - flags.extend([flag] * int(math.ceil(repeat_factor))) - assert len(flags) == len(repeat_indices) - self.flag = np.asarray(flags, dtype=np.uint8) - - def _get_repeat_factors(self, dataset, repeat_thr): - """Get repeat factor for each images in the dataset. - - Args: - dataset (:obj:`CustomDataset`): The dataset - repeat_thr (float): The threshold of frequency. If an image - contains the categories whose frequency below the threshold, - it would be repeated. - - Returns: - list[float]: The repeat factors for each images in the dataset. - """ - - # 1. 
For each category c, compute the fraction # of images - # that contain it: f(c) - category_freq = defaultdict(int) - num_images = len(dataset) - for idx in range(num_images): - cat_ids = set(self.dataset.get_cat_ids(idx)) - if len(cat_ids) == 0 and not self.filter_empty_gt: - cat_ids = set([len(self.CLASSES)]) - for cat_id in cat_ids: - category_freq[cat_id] += 1 - for k, v in category_freq.items(): - category_freq[k] = v / num_images - - # 2. For each category c, compute the category-level repeat factor: - # r(c) = max(1, sqrt(t/f(c))) - category_repeat = { - cat_id: max(1.0, math.sqrt(repeat_thr / cat_freq)) - for cat_id, cat_freq in category_freq.items() - } - - # 3. For each image I, compute the image-level repeat factor: - # r(I) = max_{c in I} r(c) - repeat_factors = [] - for idx in range(num_images): - cat_ids = set(self.dataset.get_cat_ids(idx)) - if len(cat_ids) == 0 and not self.filter_empty_gt: - cat_ids = set([len(self.CLASSES)]) - repeat_factor = 1 - if len(cat_ids) > 0: - repeat_factor = max( - {category_repeat[cat_id] - for cat_id in cat_ids}) - repeat_factors.append(repeat_factor) - - return repeat_factors - - def __getitem__(self, idx): - ori_index = self.repeat_indices[idx] - return self.dataset[ori_index] - - def get_ann_info(self, idx): - """Get annotation of dataset by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - ori_index = self.repeat_indices[idx] - return self.dataset.get_ann_info(ori_index) - - def __len__(self): - """Length after repetition.""" - return len(self.repeat_indices) - - -@DATASETS.register_module() -class MultiImageMixDataset: - """A wrapper of multiple images mixed dataset. - - Suitable for training on multiple images mixed data augmentation like - mosaic and mixup. For the augmentation pipeline of mixed image data, - the `get_indexes` method needs to be provided to obtain the image - indexes, and you can set `skip_flags` to change the pipeline running - process. At the same time, we provide the `dynamic_scale` parameter - to dynamically change the output image size. - - Args: - dataset (:obj:`CustomDataset`): The dataset to be mixed. - pipeline (Sequence[dict]): Sequence of transform object or - config dict to be composed. - dynamic_scale (tuple[int], optional): The image scale can be changed - dynamically. Default to None. It is deprecated. - skip_type_keys (list[str], optional): Sequence of type string to - be skip pipeline. Default to None. - max_refetch (int): The maximum number of retry iterations for getting - valid results from the pipeline. If the number of iterations is - greater than `max_refetch`, but results is still None, then the - iteration is terminated and raise the error. Default: 15. - """ - - def __init__(self, - dataset, - pipeline, - dynamic_scale=None, - skip_type_keys=None, - max_refetch=15): - if dynamic_scale is not None: - raise RuntimeError( - 'dynamic_scale is deprecated. 
Please use Resize pipeline ' - 'to achieve similar functions') - assert isinstance(pipeline, collections.abc.Sequence) - if skip_type_keys is not None: - assert all([ - isinstance(skip_type_key, str) - for skip_type_key in skip_type_keys - ]) - self._skip_type_keys = skip_type_keys - - self.pipeline = [] - self.pipeline_types = [] - for transform in pipeline: - if isinstance(transform, dict): - self.pipeline_types.append(transform['type']) - transform = build_from_cfg(transform, PIPELINES) - self.pipeline.append(transform) - else: - raise TypeError('pipeline must be a dict') - - self.dataset = dataset - self.CLASSES = dataset.CLASSES - self.PALETTE = getattr(dataset, 'PALETTE', None) - if hasattr(self.dataset, 'flag'): - self.flag = dataset.flag - self.num_samples = len(dataset) - self.max_refetch = max_refetch - - def __len__(self): - return self.num_samples - - def __getitem__(self, idx): - results = copy.deepcopy(self.dataset[idx]) - for (transform, transform_type) in zip(self.pipeline, - self.pipeline_types): - if self._skip_type_keys is not None and \ - transform_type in self._skip_type_keys: - continue - - if hasattr(transform, 'get_indexes'): - for i in range(self.max_refetch): - # Make sure the results passed the loading pipeline - # of the original dataset is not None. - indexes = transform.get_indexes(self.dataset) - if not isinstance(indexes, collections.abc.Sequence): - indexes = [indexes] - mix_results = [ - copy.deepcopy(self.dataset[index]) for index in indexes - ] - if None not in mix_results: - results['mix_results'] = mix_results - break - else: - raise RuntimeError( - 'The loading pipeline of the original dataset' - ' always return None. Please check the correctness ' - 'of the dataset and its pipeline.') - - for i in range(self.max_refetch): - # To confirm the results passed the training pipeline - # of the wrapper is not None. - updated_results = transform(copy.deepcopy(results)) - if updated_results is not None: - results = updated_results - break - else: - raise RuntimeError( - 'The training pipeline of the dataset wrapper' - ' always return None.Please check the correctness ' - 'of the dataset and its pipeline.') - - if 'mix_results' in results: - results.pop('mix_results') - - return results - - def update_skip_type_keys(self, skip_type_keys): - """Update skip_type_keys. It is called by an external hook. - - Args: - skip_type_keys (list[str], optional): Sequence of type - string to be skip pipeline. - """ - assert all([ - isinstance(skip_type_key, str) for skip_type_key in skip_type_keys - ]) - self._skip_type_keys = skip_type_keys diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/deepfashion.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/deepfashion.py deleted file mode 100644 index 609f8091..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/deepfashion.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class DeepFashionDataset(CocoDataset): - - CLASSES = ('top', 'skirt', 'leggings', 'dress', 'outer', 'pants', 'bag', - 'neckwear', 'headwear', 'eyeglass', 'belt', 'footwear', 'hair', - 'skin', 'face') - - PALETTE = [(0, 192, 64), (0, 64, 96), (128, 192, 192), (0, 64, 64), - (0, 192, 224), (0, 192, 192), (128, 192, 64), (0, 192, 96), - (128, 32, 192), (0, 0, 224), (0, 0, 64), (0, 160, 192), - (128, 0, 96), (128, 0, 192), (0, 32, 192)] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/lvis.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/lvis.py deleted file mode 100644 index 511e31ae..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/lvis.py +++ /dev/null @@ -1,742 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import itertools -import logging -import os.path as osp -import tempfile -import warnings -from collections import OrderedDict - -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class LVISV05Dataset(CocoDataset): - - CLASSES = ( - 'acorn', 'aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock', - 'alcohol', 'alligator', 'almond', 'ambulance', 'amplifier', 'anklet', - 'antenna', 'apple', 'apple_juice', 'applesauce', 'apricot', 'apron', - 'aquarium', 'armband', 'armchair', 'armoire', 'armor', 'artichoke', - 'trash_can', 'ashtray', 'asparagus', 'atomizer', 'avocado', 'award', - 'awning', 'ax', 'baby_buggy', 'basketball_backboard', 'backpack', - 'handbag', 'suitcase', 'bagel', 'bagpipe', 'baguet', 'bait', 'ball', - 'ballet_skirt', 'balloon', 'bamboo', 'banana', 'Band_Aid', 'bandage', - 'bandanna', 'banjo', 'banner', 'barbell', 'barge', 'barrel', - 'barrette', 'barrow', 'baseball_base', 'baseball', 'baseball_bat', - 'baseball_cap', 'baseball_glove', 'basket', 'basketball_hoop', - 'basketball', 'bass_horn', 'bat_(animal)', 'bath_mat', 'bath_towel', - 'bathrobe', 'bathtub', 'batter_(food)', 'battery', 'beachball', 'bead', - 'beaker', 'bean_curd', 'beanbag', 'beanie', 'bear', 'bed', - 'bedspread', 'cow', 'beef_(food)', 'beeper', 'beer_bottle', 'beer_can', - 'beetle', 'bell', 'bell_pepper', 'belt', 'belt_buckle', 'bench', - 'beret', 'bib', 'Bible', 'bicycle', 'visor', 'binder', 'binoculars', - 'bird', 'birdfeeder', 'birdbath', 'birdcage', 'birdhouse', - 'birthday_cake', 'birthday_card', 'biscuit_(bread)', 'pirate_flag', - 'black_sheep', 'blackboard', 'blanket', 'blazer', 'blender', 'blimp', - 'blinker', 'blueberry', 'boar', 'gameboard', 'boat', 'bobbin', - 'bobby_pin', 'boiled_egg', 'bolo_tie', 'deadbolt', 'bolt', 'bonnet', - 'book', 'book_bag', 'bookcase', 'booklet', 'bookmark', - 'boom_microphone', 'boot', 'bottle', 'bottle_opener', 'bouquet', - 'bow_(weapon)', 'bow_(decorative_ribbons)', 'bow-tie', 'bowl', - 'pipe_bowl', 'bowler_hat', 'bowling_ball', 'bowling_pin', - 'boxing_glove', 'suspenders', 'bracelet', 'brass_plaque', 'brassiere', - 'bread-bin', 'breechcloth', 'bridal_gown', 'briefcase', - 'bristle_brush', 'broccoli', 'broach', 'broom', 'brownie', - 'brussels_sprouts', 'bubble_gum', 'bucket', 'horse_buggy', 'bull', - 'bulldog', 'bulldozer', 'bullet_train', 'bulletin_board', - 'bulletproof_vest', 'bullhorn', 'corned_beef', 'bun', 'bunk_bed', - 'buoy', 'burrito', 'bus_(vehicle)', 'business_card', 'butcher_knife', - 'butter', 'butterfly', 'button', 'cab_(taxi)', 'cabana', 'cabin_car', - 'cabinet', 'locker', 'cake', 
'calculator', 'calendar', 'calf', - 'camcorder', 'camel', 'camera', 'camera_lens', 'camper_(vehicle)', - 'can', 'can_opener', 'candelabrum', 'candle', 'candle_holder', - 'candy_bar', 'candy_cane', 'walking_cane', 'canister', 'cannon', - 'canoe', 'cantaloup', 'canteen', 'cap_(headwear)', 'bottle_cap', - 'cape', 'cappuccino', 'car_(automobile)', 'railcar_(part_of_a_train)', - 'elevator_car', 'car_battery', 'identity_card', 'card', 'cardigan', - 'cargo_ship', 'carnation', 'horse_carriage', 'carrot', 'tote_bag', - 'cart', 'carton', 'cash_register', 'casserole', 'cassette', 'cast', - 'cat', 'cauliflower', 'caviar', 'cayenne_(spice)', 'CD_player', - 'celery', 'cellular_telephone', 'chain_mail', 'chair', 'chaise_longue', - 'champagne', 'chandelier', 'chap', 'checkbook', 'checkerboard', - 'cherry', 'chessboard', 'chest_of_drawers_(furniture)', - 'chicken_(animal)', 'chicken_wire', 'chickpea', 'Chihuahua', - 'chili_(vegetable)', 'chime', 'chinaware', 'crisp_(potato_chip)', - 'poker_chip', 'chocolate_bar', 'chocolate_cake', 'chocolate_milk', - 'chocolate_mousse', 'choker', 'chopping_board', 'chopstick', - 'Christmas_tree', 'slide', 'cider', 'cigar_box', 'cigarette', - 'cigarette_case', 'cistern', 'clarinet', 'clasp', 'cleansing_agent', - 'clementine', 'clip', 'clipboard', 'clock', 'clock_tower', - 'clothes_hamper', 'clothespin', 'clutch_bag', 'coaster', 'coat', - 'coat_hanger', 'coatrack', 'cock', 'coconut', 'coffee_filter', - 'coffee_maker', 'coffee_table', 'coffeepot', 'coil', 'coin', - 'colander', 'coleslaw', 'coloring_material', 'combination_lock', - 'pacifier', 'comic_book', 'computer_keyboard', 'concrete_mixer', - 'cone', 'control', 'convertible_(automobile)', 'sofa_bed', 'cookie', - 'cookie_jar', 'cooking_utensil', 'cooler_(for_food)', - 'cork_(bottle_plug)', 'corkboard', 'corkscrew', 'edible_corn', - 'cornbread', 'cornet', 'cornice', 'cornmeal', 'corset', - 'romaine_lettuce', 'costume', 'cougar', 'coverall', 'cowbell', - 'cowboy_hat', 'crab_(animal)', 'cracker', 'crape', 'crate', 'crayon', - 'cream_pitcher', 'credit_card', 'crescent_roll', 'crib', 'crock_pot', - 'crossbar', 'crouton', 'crow', 'crown', 'crucifix', 'cruise_ship', - 'police_cruiser', 'crumb', 'crutch', 'cub_(animal)', 'cube', - 'cucumber', 'cufflink', 'cup', 'trophy_cup', 'cupcake', 'hair_curler', - 'curling_iron', 'curtain', 'cushion', 'custard', 'cutting_tool', - 'cylinder', 'cymbal', 'dachshund', 'dagger', 'dartboard', - 'date_(fruit)', 'deck_chair', 'deer', 'dental_floss', 'desk', - 'detergent', 'diaper', 'diary', 'die', 'dinghy', 'dining_table', 'tux', - 'dish', 'dish_antenna', 'dishrag', 'dishtowel', 'dishwasher', - 'dishwasher_detergent', 'diskette', 'dispenser', 'Dixie_cup', 'dog', - 'dog_collar', 'doll', 'dollar', 'dolphin', 'domestic_ass', 'eye_mask', - 'doorbell', 'doorknob', 'doormat', 'doughnut', 'dove', 'dragonfly', - 'drawer', 'underdrawers', 'dress', 'dress_hat', 'dress_suit', - 'dresser', 'drill', 'drinking_fountain', 'drone', 'dropper', - 'drum_(musical_instrument)', 'drumstick', 'duck', 'duckling', - 'duct_tape', 'duffel_bag', 'dumbbell', 'dumpster', 'dustpan', - 'Dutch_oven', 'eagle', 'earphone', 'earplug', 'earring', 'easel', - 'eclair', 'eel', 'egg', 'egg_roll', 'egg_yolk', 'eggbeater', - 'eggplant', 'electric_chair', 'refrigerator', 'elephant', 'elk', - 'envelope', 'eraser', 'escargot', 'eyepatch', 'falcon', 'fan', - 'faucet', 'fedora', 'ferret', 'Ferris_wheel', 'ferry', 'fig_(fruit)', - 'fighter_jet', 'figurine', 'file_cabinet', 'file_(tool)', 'fire_alarm', - 'fire_engine', 'fire_extinguisher', 
'fire_hose', 'fireplace', - 'fireplug', 'fish', 'fish_(food)', 'fishbowl', 'fishing_boat', - 'fishing_rod', 'flag', 'flagpole', 'flamingo', 'flannel', 'flash', - 'flashlight', 'fleece', 'flip-flop_(sandal)', 'flipper_(footwear)', - 'flower_arrangement', 'flute_glass', 'foal', 'folding_chair', - 'food_processor', 'football_(American)', 'football_helmet', - 'footstool', 'fork', 'forklift', 'freight_car', 'French_toast', - 'freshener', 'frisbee', 'frog', 'fruit_juice', 'fruit_salad', - 'frying_pan', 'fudge', 'funnel', 'futon', 'gag', 'garbage', - 'garbage_truck', 'garden_hose', 'gargle', 'gargoyle', 'garlic', - 'gasmask', 'gazelle', 'gelatin', 'gemstone', 'giant_panda', - 'gift_wrap', 'ginger', 'giraffe', 'cincture', - 'glass_(drink_container)', 'globe', 'glove', 'goat', 'goggles', - 'goldfish', 'golf_club', 'golfcart', 'gondola_(boat)', 'goose', - 'gorilla', 'gourd', 'surgical_gown', 'grape', 'grasshopper', 'grater', - 'gravestone', 'gravy_boat', 'green_bean', 'green_onion', 'griddle', - 'grillroom', 'grinder_(tool)', 'grits', 'grizzly', 'grocery_bag', - 'guacamole', 'guitar', 'gull', 'gun', 'hair_spray', 'hairbrush', - 'hairnet', 'hairpin', 'ham', 'hamburger', 'hammer', 'hammock', - 'hamper', 'hamster', 'hair_dryer', 'hand_glass', 'hand_towel', - 'handcart', 'handcuff', 'handkerchief', 'handle', 'handsaw', - 'hardback_book', 'harmonium', 'hat', 'hatbox', 'hatch', 'veil', - 'headband', 'headboard', 'headlight', 'headscarf', 'headset', - 'headstall_(for_horses)', 'hearing_aid', 'heart', 'heater', - 'helicopter', 'helmet', 'heron', 'highchair', 'hinge', 'hippopotamus', - 'hockey_stick', 'hog', 'home_plate_(baseball)', 'honey', 'fume_hood', - 'hook', 'horse', 'hose', 'hot-air_balloon', 'hotplate', 'hot_sauce', - 'hourglass', 'houseboat', 'hummingbird', 'hummus', 'polar_bear', - 'icecream', 'popsicle', 'ice_maker', 'ice_pack', 'ice_skate', - 'ice_tea', 'igniter', 'incense', 'inhaler', 'iPod', - 'iron_(for_clothing)', 'ironing_board', 'jacket', 'jam', 'jean', - 'jeep', 'jelly_bean', 'jersey', 'jet_plane', 'jewelry', 'joystick', - 'jumpsuit', 'kayak', 'keg', 'kennel', 'kettle', 'key', 'keycard', - 'kilt', 'kimono', 'kitchen_sink', 'kitchen_table', 'kite', 'kitten', - 'kiwi_fruit', 'knee_pad', 'knife', 'knight_(chess_piece)', - 'knitting_needle', 'knob', 'knocker_(on_a_door)', 'koala', 'lab_coat', - 'ladder', 'ladle', 'ladybug', 'lamb_(animal)', 'lamb-chop', 'lamp', - 'lamppost', 'lampshade', 'lantern', 'lanyard', 'laptop_computer', - 'lasagna', 'latch', 'lawn_mower', 'leather', 'legging_(clothing)', - 'Lego', 'lemon', 'lemonade', 'lettuce', 'license_plate', 'life_buoy', - 'life_jacket', 'lightbulb', 'lightning_rod', 'lime', 'limousine', - 'linen_paper', 'lion', 'lip_balm', 'lipstick', 'liquor', 'lizard', - 'Loafer_(type_of_shoe)', 'log', 'lollipop', 'lotion', - 'speaker_(stereo_equipment)', 'loveseat', 'machine_gun', 'magazine', - 'magnet', 'mail_slot', 'mailbox_(at_home)', 'mallet', 'mammoth', - 'mandarin_orange', 'manger', 'manhole', 'map', 'marker', 'martini', - 'mascot', 'mashed_potato', 'masher', 'mask', 'mast', - 'mat_(gym_equipment)', 'matchbox', 'mattress', 'measuring_cup', - 'measuring_stick', 'meatball', 'medicine', 'melon', 'microphone', - 'microscope', 'microwave_oven', 'milestone', 'milk', 'minivan', - 'mint_candy', 'mirror', 'mitten', 'mixer_(kitchen_tool)', 'money', - 'monitor_(computer_equipment) computer_monitor', 'monkey', 'motor', - 'motor_scooter', 'motor_vehicle', 'motorboat', 'motorcycle', - 'mound_(baseball)', 'mouse_(animal_rodent)', - 'mouse_(computer_equipment)', 
'mousepad', 'muffin', 'mug', 'mushroom', - 'music_stool', 'musical_instrument', 'nailfile', 'nameplate', 'napkin', - 'neckerchief', 'necklace', 'necktie', 'needle', 'nest', 'newsstand', - 'nightshirt', 'nosebag_(for_animals)', 'noseband_(for_animals)', - 'notebook', 'notepad', 'nut', 'nutcracker', 'oar', 'octopus_(food)', - 'octopus_(animal)', 'oil_lamp', 'olive_oil', 'omelet', 'onion', - 'orange_(fruit)', 'orange_juice', 'oregano', 'ostrich', 'ottoman', - 'overalls_(clothing)', 'owl', 'packet', 'inkpad', 'pad', 'paddle', - 'padlock', 'paintbox', 'paintbrush', 'painting', 'pajamas', 'palette', - 'pan_(for_cooking)', 'pan_(metal_container)', 'pancake', 'pantyhose', - 'papaya', 'paperclip', 'paper_plate', 'paper_towel', 'paperback_book', - 'paperweight', 'parachute', 'parakeet', 'parasail_(sports)', - 'parchment', 'parka', 'parking_meter', 'parrot', - 'passenger_car_(part_of_a_train)', 'passenger_ship', 'passport', - 'pastry', 'patty_(food)', 'pea_(food)', 'peach', 'peanut_butter', - 'pear', 'peeler_(tool_for_fruit_and_vegetables)', 'pegboard', - 'pelican', 'pen', 'pencil', 'pencil_box', 'pencil_sharpener', - 'pendulum', 'penguin', 'pennant', 'penny_(coin)', 'pepper', - 'pepper_mill', 'perfume', 'persimmon', 'baby', 'pet', 'petfood', - 'pew_(church_bench)', 'phonebook', 'phonograph_record', 'piano', - 'pickle', 'pickup_truck', 'pie', 'pigeon', 'piggy_bank', 'pillow', - 'pin_(non_jewelry)', 'pineapple', 'pinecone', 'ping-pong_ball', - 'pinwheel', 'tobacco_pipe', 'pipe', 'pistol', 'pita_(bread)', - 'pitcher_(vessel_for_liquid)', 'pitchfork', 'pizza', 'place_mat', - 'plate', 'platter', 'playing_card', 'playpen', 'pliers', - 'plow_(farm_equipment)', 'pocket_watch', 'pocketknife', - 'poker_(fire_stirring_tool)', 'pole', 'police_van', 'polo_shirt', - 'poncho', 'pony', 'pool_table', 'pop_(soda)', 'portrait', - 'postbox_(public)', 'postcard', 'poster', 'pot', 'flowerpot', 'potato', - 'potholder', 'pottery', 'pouch', 'power_shovel', 'prawn', 'printer', - 'projectile_(weapon)', 'projector', 'propeller', 'prune', 'pudding', - 'puffer_(fish)', 'puffin', 'pug-dog', 'pumpkin', 'puncher', 'puppet', - 'puppy', 'quesadilla', 'quiche', 'quilt', 'rabbit', 'race_car', - 'racket', 'radar', 'radiator', 'radio_receiver', 'radish', 'raft', - 'rag_doll', 'raincoat', 'ram_(animal)', 'raspberry', 'rat', - 'razorblade', 'reamer_(juicer)', 'rearview_mirror', 'receipt', - 'recliner', 'record_player', 'red_cabbage', 'reflector', - 'remote_control', 'rhinoceros', 'rib_(food)', 'rifle', 'ring', - 'river_boat', 'road_map', 'robe', 'rocking_chair', 'roller_skate', - 'Rollerblade', 'rolling_pin', 'root_beer', - 'router_(computer_equipment)', 'rubber_band', 'runner_(carpet)', - 'plastic_bag', 'saddle_(on_an_animal)', 'saddle_blanket', 'saddlebag', - 'safety_pin', 'sail', 'salad', 'salad_plate', 'salami', - 'salmon_(fish)', 'salmon_(food)', 'salsa', 'saltshaker', - 'sandal_(type_of_shoe)', 'sandwich', 'satchel', 'saucepan', 'saucer', - 'sausage', 'sawhorse', 'saxophone', 'scale_(measuring_instrument)', - 'scarecrow', 'scarf', 'school_bus', 'scissors', 'scoreboard', - 'scrambled_eggs', 'scraper', 'scratcher', 'screwdriver', - 'scrubbing_brush', 'sculpture', 'seabird', 'seahorse', 'seaplane', - 'seashell', 'seedling', 'serving_dish', 'sewing_machine', 'shaker', - 'shampoo', 'shark', 'sharpener', 'Sharpie', 'shaver_(electric)', - 'shaving_cream', 'shawl', 'shears', 'sheep', 'shepherd_dog', - 'sherbert', 'shield', 'shirt', 'shoe', 'shopping_bag', 'shopping_cart', - 'short_pants', 'shot_glass', 'shoulder_bag', 'shovel', 
'shower_head', - 'shower_curtain', 'shredder_(for_paper)', 'sieve', 'signboard', 'silo', - 'sink', 'skateboard', 'skewer', 'ski', 'ski_boot', 'ski_parka', - 'ski_pole', 'skirt', 'sled', 'sleeping_bag', 'sling_(bandage)', - 'slipper_(footwear)', 'smoothie', 'snake', 'snowboard', 'snowman', - 'snowmobile', 'soap', 'soccer_ball', 'sock', 'soda_fountain', - 'carbonated_water', 'sofa', 'softball', 'solar_array', 'sombrero', - 'soup', 'soup_bowl', 'soupspoon', 'sour_cream', 'soya_milk', - 'space_shuttle', 'sparkler_(fireworks)', 'spatula', 'spear', - 'spectacles', 'spice_rack', 'spider', 'sponge', 'spoon', 'sportswear', - 'spotlight', 'squirrel', 'stapler_(stapling_machine)', 'starfish', - 'statue_(sculpture)', 'steak_(food)', 'steak_knife', - 'steamer_(kitchen_appliance)', 'steering_wheel', 'stencil', - 'stepladder', 'step_stool', 'stereo_(sound_system)', 'stew', 'stirrer', - 'stirrup', 'stockings_(leg_wear)', 'stool', 'stop_sign', 'brake_light', - 'stove', 'strainer', 'strap', 'straw_(for_drinking)', 'strawberry', - 'street_sign', 'streetlight', 'string_cheese', 'stylus', 'subwoofer', - 'sugar_bowl', 'sugarcane_(plant)', 'suit_(clothing)', 'sunflower', - 'sunglasses', 'sunhat', 'sunscreen', 'surfboard', 'sushi', 'mop', - 'sweat_pants', 'sweatband', 'sweater', 'sweatshirt', 'sweet_potato', - 'swimsuit', 'sword', 'syringe', 'Tabasco_sauce', 'table-tennis_table', - 'table', 'table_lamp', 'tablecloth', 'tachometer', 'taco', 'tag', - 'taillight', 'tambourine', 'army_tank', 'tank_(storage_vessel)', - 'tank_top_(clothing)', 'tape_(sticky_cloth_or_paper)', 'tape_measure', - 'tapestry', 'tarp', 'tartan', 'tassel', 'tea_bag', 'teacup', - 'teakettle', 'teapot', 'teddy_bear', 'telephone', 'telephone_booth', - 'telephone_pole', 'telephoto_lens', 'television_camera', - 'television_set', 'tennis_ball', 'tennis_racket', 'tequila', - 'thermometer', 'thermos_bottle', 'thermostat', 'thimble', 'thread', - 'thumbtack', 'tiara', 'tiger', 'tights_(clothing)', 'timer', 'tinfoil', - 'tinsel', 'tissue_paper', 'toast_(food)', 'toaster', 'toaster_oven', - 'toilet', 'toilet_tissue', 'tomato', 'tongs', 'toolbox', 'toothbrush', - 'toothpaste', 'toothpick', 'cover', 'tortilla', 'tow_truck', 'towel', - 'towel_rack', 'toy', 'tractor_(farm_equipment)', 'traffic_light', - 'dirt_bike', 'trailer_truck', 'train_(railroad_vehicle)', 'trampoline', - 'tray', 'tree_house', 'trench_coat', 'triangle_(musical_instrument)', - 'tricycle', 'tripod', 'trousers', 'truck', 'truffle_(chocolate)', - 'trunk', 'vat', 'turban', 'turkey_(bird)', 'turkey_(food)', 'turnip', - 'turtle', 'turtleneck_(clothing)', 'typewriter', 'umbrella', - 'underwear', 'unicycle', 'urinal', 'urn', 'vacuum_cleaner', 'valve', - 'vase', 'vending_machine', 'vent', 'videotape', 'vinegar', 'violin', - 'vodka', 'volleyball', 'vulture', 'waffle', 'waffle_iron', 'wagon', - 'wagon_wheel', 'walking_stick', 'wall_clock', 'wall_socket', 'wallet', - 'walrus', 'wardrobe', 'wasabi', 'automatic_washer', 'watch', - 'water_bottle', 'water_cooler', 'water_faucet', 'water_filter', - 'water_heater', 'water_jug', 'water_gun', 'water_scooter', 'water_ski', - 'water_tower', 'watering_can', 'watermelon', 'weathervane', 'webcam', - 'wedding_cake', 'wedding_ring', 'wet_suit', 'wheel', 'wheelchair', - 'whipped_cream', 'whiskey', 'whistle', 'wick', 'wig', 'wind_chime', - 'windmill', 'window_box_(for_plants)', 'windshield_wiper', 'windsock', - 'wine_bottle', 'wine_bucket', 'wineglass', 'wing_chair', - 'blinder_(for_horses)', 'wok', 'wolf', 'wooden_spoon', 'wreath', - 'wrench', 'wristband', 'wristlet', 
'yacht', 'yak', 'yogurt', - 'yoke_(animal_equipment)', 'zebra', 'zucchini') - - PALETTE = None - - def load_annotations(self, ann_file): - """Load annotation from lvis style annotation file. - - Args: - ann_file (str): Path of annotation file. - - Returns: - list[dict]: Annotation info from LVIS api. - """ - - try: - import lvis - if getattr(lvis, '__version__', '0') >= '10.5.3': - warnings.warn( - 'mmlvis is deprecated, please install official lvis-api by "pip install git+https://github.com/lvis-dataset/lvis-api.git"', # noqa: E501 - UserWarning) - from lvis import LVIS - except ImportError: - raise ImportError( - 'Package lvis is not installed. Please run "pip install git+https://github.com/lvis-dataset/lvis-api.git".' # noqa: E501 - ) - self.coco = LVIS(ann_file) - self.cat_ids = self.coco.get_cat_ids() - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.img_ids = self.coco.get_img_ids() - data_infos = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - if info['file_name'].startswith('COCO'): - # Convert form the COCO 2014 file naming convention of - # COCO_[train/val/test]2014_000000000000.jpg to the 2017 - # naming convention of 000000000000.jpg - # (LVIS v1 will fix this naming issue) - info['filename'] = info['file_name'][-16:] - else: - info['filename'] = info['file_name'] - data_infos.append(info) - return data_infos - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=np.arange(0.5, 0.96, 0.05)): - """Evaluation in LVIS protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float]): IoU threshold used for evaluating - recalls. If set to a list, the average recall of all IoUs will - also be computed. Default: 0.5. - - Returns: - dict[str, float]: LVIS style metrics. - """ - - try: - import lvis - if getattr(lvis, '__version__', '0') >= '10.5.3': - warnings.warn( - 'mmlvis is deprecated, please install official lvis-api by "pip install git+https://github.com/lvis-dataset/lvis-api.git"', # noqa: E501 - UserWarning) - from lvis import LVISEval, LVISResults - except ImportError: - raise ImportError( - 'Package lvis is not installed. Please run "pip install git+https://github.com/lvis-dataset/lvis-api.git".' # noqa: E501 - ) - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. 
- format(len(results), len(self))) - - metrics = metric if isinstance(metric, list) else [metric] - allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast'] - for metric in metrics: - if metric not in allowed_metrics: - raise KeyError('metric {} is not supported'.format(metric)) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2json(results, jsonfile_prefix) - - eval_results = OrderedDict() - # get original api - lvis_gt = self.coco - for metric in metrics: - msg = 'Evaluating {}...'.format(metric) - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - if metric == 'proposal_fast': - ar = self.fast_eval_recall( - results, proposal_nums, iou_thrs, logger='silent') - log_msg = [] - for i, num in enumerate(proposal_nums): - eval_results['AR@{}'.format(num)] = ar[i] - log_msg.append('\nAR@{}\t{:.4f}'.format(num, ar[i])) - log_msg = ''.join(log_msg) - print_log(log_msg, logger=logger) - continue - - if metric not in result_files: - raise KeyError('{} is not in results'.format(metric)) - try: - lvis_dt = LVISResults(lvis_gt, result_files[metric]) - except IndexError: - print_log( - 'The testing results of the whole dataset is empty.', - logger=logger, - level=logging.ERROR) - break - - iou_type = 'bbox' if metric == 'proposal' else metric - lvis_eval = LVISEval(lvis_gt, lvis_dt, iou_type) - lvis_eval.params.imgIds = self.img_ids - if metric == 'proposal': - lvis_eval.params.useCats = 0 - lvis_eval.params.maxDets = list(proposal_nums) - lvis_eval.evaluate() - lvis_eval.accumulate() - lvis_eval.summarize() - for k, v in lvis_eval.get_results().items(): - if k.startswith('AR'): - val = float('{:.3f}'.format(float(v))) - eval_results[k] = val - else: - lvis_eval.evaluate() - lvis_eval.accumulate() - lvis_eval.summarize() - lvis_results = lvis_eval.get_results() - if classwise: # Compute per-category AP - # Compute per-category AP - # from https://github.com/facebookresearch/detectron2/ - precisions = lvis_eval.eval['precision'] - # precision: (iou, recall, cls, area range, max dets) - assert len(self.cat_ids) == precisions.shape[2] - - results_per_category = [] - for idx, catId in enumerate(self.cat_ids): - # area range index 0: all area ranges - # max dets index -1: typically 100 per image - # the dimensions of precisions are - # [num_thrs, num_recalls, num_cats, num_area_rngs] - nm = self.coco.load_cats([catId])[0] - precision = precisions[:, :, idx, 0] - precision = precision[precision > -1] - if precision.size: - ap = np.mean(precision) - else: - ap = float('nan') - results_per_category.append( - (f'{nm["name"]}', f'{float(ap):0.3f}')) - - num_columns = min(6, len(results_per_category) * 2) - results_flatten = list( - itertools.chain(*results_per_category)) - headers = ['category', 'AP'] * (num_columns // 2) - results_2d = itertools.zip_longest(*[ - results_flatten[i::num_columns] - for i in range(num_columns) - ]) - table_data = [headers] - table_data += [result for result in results_2d] - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - for k, v in lvis_results.items(): - if k.startswith('AP'): - key = '{}_{}'.format(metric, k) - val = float('{:.3f}'.format(float(v))) - eval_results[key] = val - ap_summary = ' '.join([ - '{}:{:.3f}'.format(k, float(v)) - for k, v in lvis_results.items() if k.startswith('AP') - ]) - eval_results['{}_mAP_copypaste'.format(metric)] = ap_summary - lvis_eval.print_results() 
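For reference, the classwise branch above averages only the valid cells of the LVIS precision array (invalid cells are stored as -1). A minimal, self-contained sketch of that reduction, using a synthetic NumPy array in place of `lvis_eval.eval['precision']` (the shape and the -1 convention are assumed from the comments in the code above):

```python
import numpy as np

# Synthetic stand-in for lvis_eval.eval['precision'] with shape
# [num_iou_thrs, num_recall_steps, num_classes, num_area_ranges];
# entries of -1 mark (iou, recall) cells with no valid detections.
rng = np.random.default_rng(0)
precisions = rng.uniform(0.0, 1.0, size=(10, 101, 3, 4))
precisions[precisions < 0.05] = -1  # sprinkle some invalid cells

per_class_ap = []
for cls_idx in range(precisions.shape[2]):
    # area-range index 0 == "all areas", matching the slicing above
    p = precisions[:, :, cls_idx, 0]
    p = p[p > -1]                      # drop invalid cells before averaging
    ap = float(np.mean(p)) if p.size else float('nan')
    per_class_ap.append(ap)

print(per_class_ap)
```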
- if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results - - -LVISDataset = LVISV05Dataset -DATASETS.register_module(name='LVISDataset', module=LVISDataset) - - -@DATASETS.register_module() -class LVISV1Dataset(LVISDataset): - - CLASSES = ( - 'aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock', 'alcohol', - 'alligator', 'almond', 'ambulance', 'amplifier', 'anklet', 'antenna', - 'apple', 'applesauce', 'apricot', 'apron', 'aquarium', - 'arctic_(type_of_shoe)', 'armband', 'armchair', 'armoire', 'armor', - 'artichoke', 'trash_can', 'ashtray', 'asparagus', 'atomizer', - 'avocado', 'award', 'awning', 'ax', 'baboon', 'baby_buggy', - 'basketball_backboard', 'backpack', 'handbag', 'suitcase', 'bagel', - 'bagpipe', 'baguet', 'bait', 'ball', 'ballet_skirt', 'balloon', - 'bamboo', 'banana', 'Band_Aid', 'bandage', 'bandanna', 'banjo', - 'banner', 'barbell', 'barge', 'barrel', 'barrette', 'barrow', - 'baseball_base', 'baseball', 'baseball_bat', 'baseball_cap', - 'baseball_glove', 'basket', 'basketball', 'bass_horn', 'bat_(animal)', - 'bath_mat', 'bath_towel', 'bathrobe', 'bathtub', 'batter_(food)', - 'battery', 'beachball', 'bead', 'bean_curd', 'beanbag', 'beanie', - 'bear', 'bed', 'bedpan', 'bedspread', 'cow', 'beef_(food)', 'beeper', - 'beer_bottle', 'beer_can', 'beetle', 'bell', 'bell_pepper', 'belt', - 'belt_buckle', 'bench', 'beret', 'bib', 'Bible', 'bicycle', 'visor', - 'billboard', 'binder', 'binoculars', 'bird', 'birdfeeder', 'birdbath', - 'birdcage', 'birdhouse', 'birthday_cake', 'birthday_card', - 'pirate_flag', 'black_sheep', 'blackberry', 'blackboard', 'blanket', - 'blazer', 'blender', 'blimp', 'blinker', 'blouse', 'blueberry', - 'gameboard', 'boat', 'bob', 'bobbin', 'bobby_pin', 'boiled_egg', - 'bolo_tie', 'deadbolt', 'bolt', 'bonnet', 'book', 'bookcase', - 'booklet', 'bookmark', 'boom_microphone', 'boot', 'bottle', - 'bottle_opener', 'bouquet', 'bow_(weapon)', 'bow_(decorative_ribbons)', - 'bow-tie', 'bowl', 'pipe_bowl', 'bowler_hat', 'bowling_ball', 'box', - 'boxing_glove', 'suspenders', 'bracelet', 'brass_plaque', 'brassiere', - 'bread-bin', 'bread', 'breechcloth', 'bridal_gown', 'briefcase', - 'broccoli', 'broach', 'broom', 'brownie', 'brussels_sprouts', - 'bubble_gum', 'bucket', 'horse_buggy', 'bull', 'bulldog', 'bulldozer', - 'bullet_train', 'bulletin_board', 'bulletproof_vest', 'bullhorn', - 'bun', 'bunk_bed', 'buoy', 'burrito', 'bus_(vehicle)', 'business_card', - 'butter', 'butterfly', 'button', 'cab_(taxi)', 'cabana', 'cabin_car', - 'cabinet', 'locker', 'cake', 'calculator', 'calendar', 'calf', - 'camcorder', 'camel', 'camera', 'camera_lens', 'camper_(vehicle)', - 'can', 'can_opener', 'candle', 'candle_holder', 'candy_bar', - 'candy_cane', 'walking_cane', 'canister', 'canoe', 'cantaloup', - 'canteen', 'cap_(headwear)', 'bottle_cap', 'cape', 'cappuccino', - 'car_(automobile)', 'railcar_(part_of_a_train)', 'elevator_car', - 'car_battery', 'identity_card', 'card', 'cardigan', 'cargo_ship', - 'carnation', 'horse_carriage', 'carrot', 'tote_bag', 'cart', 'carton', - 'cash_register', 'casserole', 'cassette', 'cast', 'cat', 'cauliflower', - 'cayenne_(spice)', 'CD_player', 'celery', 'cellular_telephone', - 'chain_mail', 'chair', 'chaise_longue', 'chalice', 'chandelier', - 'chap', 'checkbook', 'checkerboard', 'cherry', 'chessboard', - 'chicken_(animal)', 'chickpea', 'chili_(vegetable)', 'chime', - 'chinaware', 'crisp_(potato_chip)', 'poker_chip', 'chocolate_bar', - 'chocolate_cake', 'chocolate_milk', 'chocolate_mousse', 'choker', - 'chopping_board', 'chopstick', 
'Christmas_tree', 'slide', 'cider', - 'cigar_box', 'cigarette', 'cigarette_case', 'cistern', 'clarinet', - 'clasp', 'cleansing_agent', 'cleat_(for_securing_rope)', 'clementine', - 'clip', 'clipboard', 'clippers_(for_plants)', 'cloak', 'clock', - 'clock_tower', 'clothes_hamper', 'clothespin', 'clutch_bag', 'coaster', - 'coat', 'coat_hanger', 'coatrack', 'cock', 'cockroach', - 'cocoa_(beverage)', 'coconut', 'coffee_maker', 'coffee_table', - 'coffeepot', 'coil', 'coin', 'colander', 'coleslaw', - 'coloring_material', 'combination_lock', 'pacifier', 'comic_book', - 'compass', 'computer_keyboard', 'condiment', 'cone', 'control', - 'convertible_(automobile)', 'sofa_bed', 'cooker', 'cookie', - 'cooking_utensil', 'cooler_(for_food)', 'cork_(bottle_plug)', - 'corkboard', 'corkscrew', 'edible_corn', 'cornbread', 'cornet', - 'cornice', 'cornmeal', 'corset', 'costume', 'cougar', 'coverall', - 'cowbell', 'cowboy_hat', 'crab_(animal)', 'crabmeat', 'cracker', - 'crape', 'crate', 'crayon', 'cream_pitcher', 'crescent_roll', 'crib', - 'crock_pot', 'crossbar', 'crouton', 'crow', 'crowbar', 'crown', - 'crucifix', 'cruise_ship', 'police_cruiser', 'crumb', 'crutch', - 'cub_(animal)', 'cube', 'cucumber', 'cufflink', 'cup', 'trophy_cup', - 'cupboard', 'cupcake', 'hair_curler', 'curling_iron', 'curtain', - 'cushion', 'cylinder', 'cymbal', 'dagger', 'dalmatian', 'dartboard', - 'date_(fruit)', 'deck_chair', 'deer', 'dental_floss', 'desk', - 'detergent', 'diaper', 'diary', 'die', 'dinghy', 'dining_table', 'tux', - 'dish', 'dish_antenna', 'dishrag', 'dishtowel', 'dishwasher', - 'dishwasher_detergent', 'dispenser', 'diving_board', 'Dixie_cup', - 'dog', 'dog_collar', 'doll', 'dollar', 'dollhouse', 'dolphin', - 'domestic_ass', 'doorknob', 'doormat', 'doughnut', 'dove', 'dragonfly', - 'drawer', 'underdrawers', 'dress', 'dress_hat', 'dress_suit', - 'dresser', 'drill', 'drone', 'dropper', 'drum_(musical_instrument)', - 'drumstick', 'duck', 'duckling', 'duct_tape', 'duffel_bag', 'dumbbell', - 'dumpster', 'dustpan', 'eagle', 'earphone', 'earplug', 'earring', - 'easel', 'eclair', 'eel', 'egg', 'egg_roll', 'egg_yolk', 'eggbeater', - 'eggplant', 'electric_chair', 'refrigerator', 'elephant', 'elk', - 'envelope', 'eraser', 'escargot', 'eyepatch', 'falcon', 'fan', - 'faucet', 'fedora', 'ferret', 'Ferris_wheel', 'ferry', 'fig_(fruit)', - 'fighter_jet', 'figurine', 'file_cabinet', 'file_(tool)', 'fire_alarm', - 'fire_engine', 'fire_extinguisher', 'fire_hose', 'fireplace', - 'fireplug', 'first-aid_kit', 'fish', 'fish_(food)', 'fishbowl', - 'fishing_rod', 'flag', 'flagpole', 'flamingo', 'flannel', 'flap', - 'flash', 'flashlight', 'fleece', 'flip-flop_(sandal)', - 'flipper_(footwear)', 'flower_arrangement', 'flute_glass', 'foal', - 'folding_chair', 'food_processor', 'football_(American)', - 'football_helmet', 'footstool', 'fork', 'forklift', 'freight_car', - 'French_toast', 'freshener', 'frisbee', 'frog', 'fruit_juice', - 'frying_pan', 'fudge', 'funnel', 'futon', 'gag', 'garbage', - 'garbage_truck', 'garden_hose', 'gargle', 'gargoyle', 'garlic', - 'gasmask', 'gazelle', 'gelatin', 'gemstone', 'generator', - 'giant_panda', 'gift_wrap', 'ginger', 'giraffe', 'cincture', - 'glass_(drink_container)', 'globe', 'glove', 'goat', 'goggles', - 'goldfish', 'golf_club', 'golfcart', 'gondola_(boat)', 'goose', - 'gorilla', 'gourd', 'grape', 'grater', 'gravestone', 'gravy_boat', - 'green_bean', 'green_onion', 'griddle', 'grill', 'grits', 'grizzly', - 'grocery_bag', 'guitar', 'gull', 'gun', 'hairbrush', 'hairnet', - 'hairpin', 'halter_top', 'ham', 
'hamburger', 'hammer', 'hammock', - 'hamper', 'hamster', 'hair_dryer', 'hand_glass', 'hand_towel', - 'handcart', 'handcuff', 'handkerchief', 'handle', 'handsaw', - 'hardback_book', 'harmonium', 'hat', 'hatbox', 'veil', 'headband', - 'headboard', 'headlight', 'headscarf', 'headset', - 'headstall_(for_horses)', 'heart', 'heater', 'helicopter', 'helmet', - 'heron', 'highchair', 'hinge', 'hippopotamus', 'hockey_stick', 'hog', - 'home_plate_(baseball)', 'honey', 'fume_hood', 'hook', 'hookah', - 'hornet', 'horse', 'hose', 'hot-air_balloon', 'hotplate', 'hot_sauce', - 'hourglass', 'houseboat', 'hummingbird', 'hummus', 'polar_bear', - 'icecream', 'popsicle', 'ice_maker', 'ice_pack', 'ice_skate', - 'igniter', 'inhaler', 'iPod', 'iron_(for_clothing)', 'ironing_board', - 'jacket', 'jam', 'jar', 'jean', 'jeep', 'jelly_bean', 'jersey', - 'jet_plane', 'jewel', 'jewelry', 'joystick', 'jumpsuit', 'kayak', - 'keg', 'kennel', 'kettle', 'key', 'keycard', 'kilt', 'kimono', - 'kitchen_sink', 'kitchen_table', 'kite', 'kitten', 'kiwi_fruit', - 'knee_pad', 'knife', 'knitting_needle', 'knob', 'knocker_(on_a_door)', - 'koala', 'lab_coat', 'ladder', 'ladle', 'ladybug', 'lamb_(animal)', - 'lamb-chop', 'lamp', 'lamppost', 'lampshade', 'lantern', 'lanyard', - 'laptop_computer', 'lasagna', 'latch', 'lawn_mower', 'leather', - 'legging_(clothing)', 'Lego', 'legume', 'lemon', 'lemonade', 'lettuce', - 'license_plate', 'life_buoy', 'life_jacket', 'lightbulb', - 'lightning_rod', 'lime', 'limousine', 'lion', 'lip_balm', 'liquor', - 'lizard', 'log', 'lollipop', 'speaker_(stereo_equipment)', 'loveseat', - 'machine_gun', 'magazine', 'magnet', 'mail_slot', 'mailbox_(at_home)', - 'mallard', 'mallet', 'mammoth', 'manatee', 'mandarin_orange', 'manger', - 'manhole', 'map', 'marker', 'martini', 'mascot', 'mashed_potato', - 'masher', 'mask', 'mast', 'mat_(gym_equipment)', 'matchbox', - 'mattress', 'measuring_cup', 'measuring_stick', 'meatball', 'medicine', - 'melon', 'microphone', 'microscope', 'microwave_oven', 'milestone', - 'milk', 'milk_can', 'milkshake', 'minivan', 'mint_candy', 'mirror', - 'mitten', 'mixer_(kitchen_tool)', 'money', - 'monitor_(computer_equipment) computer_monitor', 'monkey', 'motor', - 'motor_scooter', 'motor_vehicle', 'motorcycle', 'mound_(baseball)', - 'mouse_(computer_equipment)', 'mousepad', 'muffin', 'mug', 'mushroom', - 'music_stool', 'musical_instrument', 'nailfile', 'napkin', - 'neckerchief', 'necklace', 'necktie', 'needle', 'nest', 'newspaper', - 'newsstand', 'nightshirt', 'nosebag_(for_animals)', - 'noseband_(for_animals)', 'notebook', 'notepad', 'nut', 'nutcracker', - 'oar', 'octopus_(food)', 'octopus_(animal)', 'oil_lamp', 'olive_oil', - 'omelet', 'onion', 'orange_(fruit)', 'orange_juice', 'ostrich', - 'ottoman', 'oven', 'overalls_(clothing)', 'owl', 'packet', 'inkpad', - 'pad', 'paddle', 'padlock', 'paintbrush', 'painting', 'pajamas', - 'palette', 'pan_(for_cooking)', 'pan_(metal_container)', 'pancake', - 'pantyhose', 'papaya', 'paper_plate', 'paper_towel', 'paperback_book', - 'paperweight', 'parachute', 'parakeet', 'parasail_(sports)', 'parasol', - 'parchment', 'parka', 'parking_meter', 'parrot', - 'passenger_car_(part_of_a_train)', 'passenger_ship', 'passport', - 'pastry', 'patty_(food)', 'pea_(food)', 'peach', 'peanut_butter', - 'pear', 'peeler_(tool_for_fruit_and_vegetables)', 'wooden_leg', - 'pegboard', 'pelican', 'pen', 'pencil', 'pencil_box', - 'pencil_sharpener', 'pendulum', 'penguin', 'pennant', 'penny_(coin)', - 'pepper', 'pepper_mill', 'perfume', 'persimmon', 'person', 'pet', - 
'pew_(church_bench)', 'phonebook', 'phonograph_record', 'piano', - 'pickle', 'pickup_truck', 'pie', 'pigeon', 'piggy_bank', 'pillow', - 'pin_(non_jewelry)', 'pineapple', 'pinecone', 'ping-pong_ball', - 'pinwheel', 'tobacco_pipe', 'pipe', 'pistol', 'pita_(bread)', - 'pitcher_(vessel_for_liquid)', 'pitchfork', 'pizza', 'place_mat', - 'plate', 'platter', 'playpen', 'pliers', 'plow_(farm_equipment)', - 'plume', 'pocket_watch', 'pocketknife', 'poker_(fire_stirring_tool)', - 'pole', 'polo_shirt', 'poncho', 'pony', 'pool_table', 'pop_(soda)', - 'postbox_(public)', 'postcard', 'poster', 'pot', 'flowerpot', 'potato', - 'potholder', 'pottery', 'pouch', 'power_shovel', 'prawn', 'pretzel', - 'printer', 'projectile_(weapon)', 'projector', 'propeller', 'prune', - 'pudding', 'puffer_(fish)', 'puffin', 'pug-dog', 'pumpkin', 'puncher', - 'puppet', 'puppy', 'quesadilla', 'quiche', 'quilt', 'rabbit', - 'race_car', 'racket', 'radar', 'radiator', 'radio_receiver', 'radish', - 'raft', 'rag_doll', 'raincoat', 'ram_(animal)', 'raspberry', 'rat', - 'razorblade', 'reamer_(juicer)', 'rearview_mirror', 'receipt', - 'recliner', 'record_player', 'reflector', 'remote_control', - 'rhinoceros', 'rib_(food)', 'rifle', 'ring', 'river_boat', 'road_map', - 'robe', 'rocking_chair', 'rodent', 'roller_skate', 'Rollerblade', - 'rolling_pin', 'root_beer', 'router_(computer_equipment)', - 'rubber_band', 'runner_(carpet)', 'plastic_bag', - 'saddle_(on_an_animal)', 'saddle_blanket', 'saddlebag', 'safety_pin', - 'sail', 'salad', 'salad_plate', 'salami', 'salmon_(fish)', - 'salmon_(food)', 'salsa', 'saltshaker', 'sandal_(type_of_shoe)', - 'sandwich', 'satchel', 'saucepan', 'saucer', 'sausage', 'sawhorse', - 'saxophone', 'scale_(measuring_instrument)', 'scarecrow', 'scarf', - 'school_bus', 'scissors', 'scoreboard', 'scraper', 'screwdriver', - 'scrubbing_brush', 'sculpture', 'seabird', 'seahorse', 'seaplane', - 'seashell', 'sewing_machine', 'shaker', 'shampoo', 'shark', - 'sharpener', 'Sharpie', 'shaver_(electric)', 'shaving_cream', 'shawl', - 'shears', 'sheep', 'shepherd_dog', 'sherbert', 'shield', 'shirt', - 'shoe', 'shopping_bag', 'shopping_cart', 'short_pants', 'shot_glass', - 'shoulder_bag', 'shovel', 'shower_head', 'shower_cap', - 'shower_curtain', 'shredder_(for_paper)', 'signboard', 'silo', 'sink', - 'skateboard', 'skewer', 'ski', 'ski_boot', 'ski_parka', 'ski_pole', - 'skirt', 'skullcap', 'sled', 'sleeping_bag', 'sling_(bandage)', - 'slipper_(footwear)', 'smoothie', 'snake', 'snowboard', 'snowman', - 'snowmobile', 'soap', 'soccer_ball', 'sock', 'sofa', 'softball', - 'solar_array', 'sombrero', 'soup', 'soup_bowl', 'soupspoon', - 'sour_cream', 'soya_milk', 'space_shuttle', 'sparkler_(fireworks)', - 'spatula', 'spear', 'spectacles', 'spice_rack', 'spider', 'crawfish', - 'sponge', 'spoon', 'sportswear', 'spotlight', 'squid_(food)', - 'squirrel', 'stagecoach', 'stapler_(stapling_machine)', 'starfish', - 'statue_(sculpture)', 'steak_(food)', 'steak_knife', 'steering_wheel', - 'stepladder', 'step_stool', 'stereo_(sound_system)', 'stew', 'stirrer', - 'stirrup', 'stool', 'stop_sign', 'brake_light', 'stove', 'strainer', - 'strap', 'straw_(for_drinking)', 'strawberry', 'street_sign', - 'streetlight', 'string_cheese', 'stylus', 'subwoofer', 'sugar_bowl', - 'sugarcane_(plant)', 'suit_(clothing)', 'sunflower', 'sunglasses', - 'sunhat', 'surfboard', 'sushi', 'mop', 'sweat_pants', 'sweatband', - 'sweater', 'sweatshirt', 'sweet_potato', 'swimsuit', 'sword', - 'syringe', 'Tabasco_sauce', 'table-tennis_table', 'table', - 'table_lamp', 
'tablecloth', 'tachometer', 'taco', 'tag', 'taillight', - 'tambourine', 'army_tank', 'tank_(storage_vessel)', - 'tank_top_(clothing)', 'tape_(sticky_cloth_or_paper)', 'tape_measure', - 'tapestry', 'tarp', 'tartan', 'tassel', 'tea_bag', 'teacup', - 'teakettle', 'teapot', 'teddy_bear', 'telephone', 'telephone_booth', - 'telephone_pole', 'telephoto_lens', 'television_camera', - 'television_set', 'tennis_ball', 'tennis_racket', 'tequila', - 'thermometer', 'thermos_bottle', 'thermostat', 'thimble', 'thread', - 'thumbtack', 'tiara', 'tiger', 'tights_(clothing)', 'timer', 'tinfoil', - 'tinsel', 'tissue_paper', 'toast_(food)', 'toaster', 'toaster_oven', - 'toilet', 'toilet_tissue', 'tomato', 'tongs', 'toolbox', 'toothbrush', - 'toothpaste', 'toothpick', 'cover', 'tortilla', 'tow_truck', 'towel', - 'towel_rack', 'toy', 'tractor_(farm_equipment)', 'traffic_light', - 'dirt_bike', 'trailer_truck', 'train_(railroad_vehicle)', 'trampoline', - 'tray', 'trench_coat', 'triangle_(musical_instrument)', 'tricycle', - 'tripod', 'trousers', 'truck', 'truffle_(chocolate)', 'trunk', 'vat', - 'turban', 'turkey_(food)', 'turnip', 'turtle', 'turtleneck_(clothing)', - 'typewriter', 'umbrella', 'underwear', 'unicycle', 'urinal', 'urn', - 'vacuum_cleaner', 'vase', 'vending_machine', 'vent', 'vest', - 'videotape', 'vinegar', 'violin', 'vodka', 'volleyball', 'vulture', - 'waffle', 'waffle_iron', 'wagon', 'wagon_wheel', 'walking_stick', - 'wall_clock', 'wall_socket', 'wallet', 'walrus', 'wardrobe', - 'washbasin', 'automatic_washer', 'watch', 'water_bottle', - 'water_cooler', 'water_faucet', 'water_heater', 'water_jug', - 'water_gun', 'water_scooter', 'water_ski', 'water_tower', - 'watering_can', 'watermelon', 'weathervane', 'webcam', 'wedding_cake', - 'wedding_ring', 'wet_suit', 'wheel', 'wheelchair', 'whipped_cream', - 'whistle', 'wig', 'wind_chime', 'windmill', 'window_box_(for_plants)', - 'windshield_wiper', 'windsock', 'wine_bottle', 'wine_bucket', - 'wineglass', 'blinder_(for_horses)', 'wok', 'wolf', 'wooden_spoon', - 'wreath', 'wrench', 'wristband', 'wristlet', 'yacht', 'yogurt', - 'yoke_(animal_equipment)', 'zebra', 'zucchini') - - def load_annotations(self, ann_file): - try: - import lvis - if getattr(lvis, '__version__', '0') >= '10.5.3': - warnings.warn( - 'mmlvis is deprecated, please install official lvis-api by "pip install git+https://github.com/lvis-dataset/lvis-api.git"', # noqa: E501 - UserWarning) - from lvis import LVIS - except ImportError: - raise ImportError( - 'Package lvis is not installed. Please run "pip install git+https://github.com/lvis-dataset/lvis-api.git".' # noqa: E501 - ) - self.coco = LVIS(ann_file) - self.cat_ids = self.coco.get_cat_ids() - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.img_ids = self.coco.get_img_ids() - data_infos = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - # coco_url is used in LVISv1 instead of file_name - # e.g. http://images.cocodataset.org/train2017/000000391895.jpg - # train/val split in specified in url - info['filename'] = info['coco_url'].replace( - 'http://images.cocodataset.org/', '') - data_infos.append(info) - return data_infos diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/openimages.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/openimages.py deleted file mode 100644 index fba660c3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/openimages.py +++ /dev/null @@ -1,891 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
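Note that `LVISV1Dataset.load_annotations` above recovers a split-relative filename by stripping the COCO image host from `coco_url`. A one-line illustration (the URL is the example given in the comment in the code above):

```python
# LVIS v1 stores image paths as coco_url; stripping the host yields the
# train/val-relative filename used by the dataset.
coco_url = 'http://images.cocodataset.org/train2017/000000391895.jpg'
filename = coco_url.replace('http://images.cocodataset.org/', '')
print(filename)  # -> 'train2017/000000391895.jpg'
```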
-import copy -import csv -import json -import os.path as osp -import warnings -from collections import OrderedDict, defaultdict - -import mmcv -import numpy as np -import torch.distributed as dist -from mmcv.runner import get_dist_info -from mmcv.utils import print_log - -from mmdet.core import eval_map -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class OpenImagesDataset(CustomDataset): - """Open Images dataset for detection. - - Args: - ann_file (str): Annotation file path. - label_file (str): File path of the label description file that - maps the classes names in MID format to their short - descriptions. - image_level_ann_file (str): Image level annotation, which is used - in evaluation. - get_supercategory (bool): Whether to get parent class of the - current class. Default: True. - hierarchy_file (str): The file path of the class hierarchy. - Default: None. - get_metas (bool): Whether to get image metas in testing or - validation time. This should be `True` during evaluation. - Default: True. The OpenImages annotations do not have image - metas (width and height of the image), which will be used - during evaluation. We provide two ways to get image metas - in `OpenImagesDataset`: - - - 1. `load from file`: Load image metas from pkl file, which - is suggested to use. We provided a script to get image metas: - `tools/misc/get_image_metas.py`, which need to run - this script before training/testing. Please refer to - `config/openimages/README.md` for more details. - - - 2. `load from pipeline`, which will get image metas during - test time. However, this may reduce the inference speed, - especially when using distribution. - - load_from_file (bool): Whether to get image metas from pkl file. - meta_file (str): File path to get image metas. - filter_labels (bool): Whether filter unannotated classes. - Default: True. - load_image_level_labels (bool): Whether load and consider image - level labels during evaluation. Default: True. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - ann_file, - label_file='', - image_level_ann_file='', - get_supercategory=True, - hierarchy_file=None, - get_metas=True, - load_from_file=True, - meta_file='', - filter_labels=True, - load_image_level_labels=True, - file_client_args=dict(backend='disk'), - **kwargs): - # may get error if use other file_client - self.file_client_args = file_client_args - - self.cat2label = defaultdict(str) - self.index_dict = {} - - # Although it will init file_client in `CustomDataset`, - # it needs to be init here. 
- file_client = mmcv.FileClient(**file_client_args) - # need get `index_dict` before load annotations - assert label_file.endswith('csv') - if hasattr(file_client, 'get_local_path'): - with file_client.get_local_path(label_file) as local_path: - class_names = self.get_classes_from_csv(local_path) - else: - class_names = self.get_classes_from_csv(label_file) - super(OpenImagesDataset, self).__init__( - ann_file=ann_file, file_client_args=file_client_args, **kwargs) - self.CLASSES = class_names - self.image_level_ann_file = image_level_ann_file - self.load_image_level_labels = load_image_level_labels - if get_supercategory is True: - assert hierarchy_file is not None - if self.__class__.__name__ == 'OpenImagesDataset': - assert hierarchy_file.endswith('json') - elif self.__class__.__name__ == 'OpenImagesChallengeDataset': - assert hierarchy_file.endswith('np') - else: - raise NotImplementedError - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path( - hierarchy_file) as local_path: - self.class_label_tree = self.get_relation_matrix( - local_path) - else: - self.class_label_tree = self.get_relation_matrix( - hierarchy_file) - self.get_supercategory = get_supercategory - self.get_metas = get_metas - self.load_from_file = load_from_file - self.meta_file = meta_file - if self.data_root is not None: - if not osp.isabs(self.meta_file): - self.meta_file = osp.join(self.data_root, self.meta_file) - self.filter_labels = filter_labels - self.rank, self.world_size = get_dist_info() - self.temp_img_metas = [] - self.test_img_metas = [] - self.test_img_shapes = [] - self.load_from_pipeline = False if load_from_file else True - - def get_classes_from_csv(self, label_file): - """Get classes name from file. - - Args: - label_file (str): File path of the label description file that - maps the classes names in MID format to their short - descriptions. - - Returns: - list[str]: Class name of OpenImages. - """ - - index_list = [] - classes_names = [] - with open(label_file, 'r') as f: - reader = csv.reader(f) - for line in reader: - self.cat2label[line[0]] = line[1] - classes_names.append(line[1]) - index_list.append(line[0]) - self.index_dict = {index: i for i, index in enumerate(index_list)} - return classes_names - - def load_annotations(self, ann_file): - """Load annotation from annotation file. - - Special described `self.data_infos` (defaultdict[list[dict]]) - in this function: Annotations where item of the defaultdict - indicates an image, each of which has (n) dicts. Keys of dicts are: - - - `bbox` (list): coordinates of the box, in normalized image - coordinates, of shape 4. - - `label` (int): the label id. - - `is_group_of` (bool): Indicates that the box spans a group - of objects (e.g., a bed of flowers or a crowd of people). - - `is_occluded` (bool): Indicates that the object is occluded - by another object in the image. - - `is_truncated` (bool): Indicates that the object extends - beyond the boundary of the image. - - `is_depiction` (bool): Indicates that the object is a - depiction. - - `is_inside` (bool): Indicates a picture taken from the - inside of the object. - - Args: - ann_file (str): CSV style annotation file path. - - Returns: - list[dict]: Data infos where each item of the list - indicates an image. Keys of annotations are: - - - `img_id` (str): Image name. - - `filename` (str): Image name with suffix. 
- """ - self.ann_infos = defaultdict(list) - data_infos = [] - cp_filename = None - with open(ann_file, 'r') as f: - reader = csv.reader(f) - for i, line in enumerate(reader): - if i == 0: - continue - img_id = line[0] - filename = f'{img_id}.jpg' - label_id = line[2] - assert label_id in self.index_dict - label = int(self.index_dict[label_id]) - bbox = [ - float(line[4]), # xmin - float(line[6]), # ymin - float(line[5]), # xmax - float(line[7]) # ymax - ] - is_occluded = True if int(line[8]) == 1 else False - is_truncated = True if int(line[9]) == 1 else False - is_group_of = True if int(line[10]) == 1 else False - is_depiction = True if int(line[11]) == 1 else False - is_inside = True if int(line[12]) == 1 else False - - self.ann_infos[img_id].append( - dict( - bbox=bbox, - label=label, - is_occluded=is_occluded, - is_truncated=is_truncated, - is_group_of=is_group_of, - is_depiction=is_depiction, - is_inside=is_inside)) - if filename != cp_filename: - data_infos.append(dict(img_id=img_id, filename=filename)) - cp_filename = filename - return data_infos - - def get_ann_info(self, idx): - """Get OpenImages annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - img_id = self.data_infos[idx]['img_id'] - bboxes = [] - labels = [] - bboxes_ignore = [] - labels_ignore = [] - is_occludeds = [] - is_truncateds = [] - is_group_ofs = [] - is_depictions = [] - is_insides = [] - for obj in self.ann_infos[img_id]: - label = int(obj['label']) - bbox = [ - float(obj['bbox'][0]), - float(obj['bbox'][1]), - float(obj['bbox'][2]), - float(obj['bbox'][3]) - ] - bboxes.append(bbox) - labels.append(label) - - # Other parameters - is_occludeds.append(obj['is_occluded']) - is_truncateds.append(obj['is_truncated']) - is_group_ofs.append(obj['is_group_of']) - is_depictions.append(obj['is_depiction']) - is_insides.append(obj['is_inside']) - if not bboxes: - bboxes = np.zeros((0, 4)) - labels = np.zeros((0, )) - else: - bboxes = np.array(bboxes) - labels = np.array(labels) - if not bboxes_ignore: - bboxes_ignore = np.zeros((0, 4)) - labels_ignore = np.zeros((0, )) - else: - bboxes_ignore = np.array(bboxes_ignore) - labels_ignore = np.array(labels_ignore) - - assert len(is_group_ofs) == len(labels) == len(bboxes) - gt_is_group_ofs = np.array(is_group_ofs, dtype=np.bool) - - # These parameters is not used yet. 
- is_occludeds = np.array(is_occludeds, dtype=np.bool) - is_truncateds = np.array(is_truncateds, dtype=np.bool) - is_depictions = np.array(is_depictions, dtype=np.bool) - is_insides = np.array(is_insides, dtype=np.bool) - - ann = dict( - bboxes=bboxes.astype(np.float32), - labels=labels.astype(np.int64), - bboxes_ignore=bboxes_ignore.astype(np.float32), - labels_ignore=labels_ignore.astype(np.int64), - gt_is_group_ofs=gt_is_group_ofs, - is_occludeds=is_occludeds, - is_truncateds=is_truncateds, - is_depictions=is_depictions, - is_insides=is_insides) - - return ann - - def get_meta_from_file(self, meta_file=''): - """Get image metas from pkl file.""" - metas = mmcv.load( - meta_file, - file_format='pkl', - file_client_args=self.file_client_args) - assert len(metas) == len(self) - for i in range(len(metas)): - file_name = osp.split(metas[i]['filename'])[-1] - img_info = self.data_infos[i].get('img_info', None) - if img_info is not None: - assert file_name == osp.split(img_info['filename'])[-1] - else: - assert file_name == self.data_infos[i]['filename'] - hw = metas[i]['ori_shape'][:2] - self.test_img_shapes.append(hw) - - def get_meta_from_pipeline(self, results): - """Get image metas from pipeline.""" - self.temp_img_metas.extend(results['img_metas']) - if dist.is_available() and self.world_size > 1: - from mmdet.apis.test import collect_results_cpu - - self.test_img_metas = collect_results_cpu(self.temp_img_metas, - len(self)) - else: - self.test_img_metas = self.temp_img_metas - - def get_img_shape(self, metas): - """Set images original shape into data_infos.""" - assert len(metas) == len(self) - for i in range(len(metas)): - file_name = osp.split(metas[i].data['ori_filename'])[-1] - img_info = self.data_infos[i].get('img_info', None) - if img_info is not None: - assert file_name == osp.split(img_info['filename'])[-1] - else: - assert file_name == self.data_infos[i]['filename'] - hw = metas[i].data['ori_shape'][:2] - self.test_img_shapes.append(hw) - - def prepare_test_img(self, idx): - """Get testing data after pipeline.""" - img_info = self.data_infos[idx] - results = dict(img_info=img_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - results = self.pipeline(results) - if self.get_metas and self.load_from_pipeline: - self.get_meta_from_pipeline(results) - return results - - def _filter_imgs(self, min_size=32): - """Filter images too small.""" - if self.filter_empty_gt: - warnings.warn('OpenImageDatasets does not support ' - 'filtering empty gt images.') - valid_inds = [i for i in range(len(self))] - return valid_inds - - def _set_group_flag(self): - """Set flag according to image aspect ratio.""" - self.flag = np.zeros(len(self), dtype=np.uint8) - # TODO: set flag without width and height - - def get_relation_matrix(self, hierarchy_file): - """Get hierarchy for classes. - - Args: - hierarchy_file (sty): File path to the hierarchy for classes. - - Returns: - ndarray: The matrix of the corresponding relationship between - the parent class and the child class, of shape - (class_num, class_num). 
- """ - - if self.data_root is not None: - if not osp.isabs(hierarchy_file): - hierarchy_file = osp.join(self.data_root, hierarchy_file) - with open(hierarchy_file, 'r') as f: - hierarchy = json.load(f) - class_num = len(self.CLASSES) - class_label_tree = np.eye(class_num, class_num) - class_label_tree = self._convert_hierarchy_tree( - hierarchy, class_label_tree) - return class_label_tree - - def _convert_hierarchy_tree(self, - hierarchy_map, - class_label_tree, - parents=[], - get_all_parents=True): - """Get matrix of the corresponding relationship between the parent - class and the child class. - - Args: - hierarchy_map (dict): Including label name and corresponding - subcategory. Keys of dicts are: - - - `LabeName` (str): Name of the label. - - `Subcategory` (dict | list): Corresponding subcategory(ies). - class_label_tree (ndarray): The matrix of the corresponding - relationship between the parent class and the child class, - of shape (class_num, class_num). - parents (list): Corresponding parent class. - get_all_parents (bool): Whether get all parent names. - Default: True - - Returns: - ndarray: The matrix of the corresponding relationship between - the parent class and the child class, of shape - (class_num, class_num). - """ - - if 'Subcategory' in hierarchy_map: - for node in hierarchy_map['Subcategory']: - if 'LabelName' in node: - children_name = node['LabelName'] - children_index = self.index_dict[children_name] - children = [children_index] - else: - continue - if len(parents) > 0: - for parent_index in parents: - if get_all_parents: - children.append(parent_index) - class_label_tree[children_index, parent_index] = 1 - - class_label_tree = self._convert_hierarchy_tree( - node, class_label_tree, parents=children) - - return class_label_tree - - def add_supercategory_ann(self, annotations): - """Add parent classes of the corresponding class of the ground truth - bboxes.""" - for i, ann in enumerate(annotations): - assert len(ann['labels']) == len(ann['bboxes']) == \ - len(ann['gt_is_group_ofs']) - gt_bboxes = [] - gt_is_group_ofs = [] - gt_labels = [] - for j in range(len(ann['labels'])): - label = ann['labels'][j] - bbox = ann['bboxes'][j] - is_group = ann['gt_is_group_ofs'][j] - label = np.where(self.class_label_tree[label])[0] - if len(label) > 1: - for k in range(len(label)): - gt_bboxes.append(bbox) - gt_is_group_ofs.append(is_group) - gt_labels.append(label[k]) - else: - gt_bboxes.append(bbox) - gt_is_group_ofs.append(is_group) - gt_labels.append(label[0]) - annotations[i] = dict( - bboxes=np.array(gt_bboxes).astype(np.float32), - labels=np.array(gt_labels).astype(np.int64), - bboxes_ignore=ann['bboxes_ignore'], - gt_is_group_ofs=np.array(gt_is_group_ofs).astype(np.bool)) - - return annotations - - def process_results(self, det_results, annotations, - image_level_annotations): - """Process results of the corresponding class of the detection bboxes. - - Note: It will choose to do the following two processing according to - the parameters: - - 1. Whether to add parent classes of the corresponding class of the - detection bboxes. - - 2. Whether to ignore the classes that unannotated on that image. 
- """ - if image_level_annotations is not None: - assert len(annotations) == \ - len(image_level_annotations) == \ - len(det_results) - else: - assert len(annotations) == len(det_results) - for i in range(len(det_results)): - results = copy.deepcopy(det_results[i]) - valid_classes = np.where( - np.array([[bbox.shape[0]] for bbox in det_results[i]]) != 0)[0] - if image_level_annotations is not None: - labels = annotations[i]['labels'] - image_level_labels = \ - image_level_annotations[i]['image_level_labels'] - allowed_labeles = np.unique( - np.append(labels, image_level_labels)) - else: - allowed_labeles = np.unique(annotations[i]['labels']) - - for valid_class in valid_classes: - det_cls = np.where(self.class_label_tree[valid_class])[0] - for index in det_cls: - if index in allowed_labeles and \ - index != valid_class and \ - self.get_supercategory: - det_results[i][index] = \ - np.concatenate((det_results[i][index], - results[valid_class])) - elif index not in allowed_labeles and self.filter_labels: - # Remove useless parts - det_results[i][index] = np.empty( - (0, 5)).astype(np.float32) - return det_results - - def load_image_label_from_csv(self, image_level_ann_file): - """Load image level annotations from csv style ann_file. - - Args: - image_level_ann_file (str): CSV style image level annotation - file path. - - Returns: - defaultdict[list[dict]]: Annotations where item of the defaultdict - indicates an image, each of which has (n) dicts. - Keys of dicts are: - - - `image_level_label` (int): Label id. - - `confidence` (float): Labels that are human-verified to be - present in an image have confidence = 1 (positive labels). - Labels that are human-verified to be absent from an image - have confidence = 0 (negative labels). Machine-generated - labels have fractional confidences, generally >= 0.5. - The higher the confidence, the smaller the chance for - the label to be a false positive. - """ - - item_lists = defaultdict(list) - with open(image_level_ann_file, 'r') as f: - reader = csv.reader(f) - for i, line in enumerate(reader): - if i == 0: - continue - img_id = line[0] - item_lists[img_id].append( - dict( - image_level_label=int(self.index_dict[line[2]]), - confidence=float(line[3]))) - return item_lists - - def get_image_level_ann(self, image_level_ann_file): - """Get OpenImages annotation by index. - - Args: - image_level_ann_file (str): CSV style image level annotation - file path. - - Returns: - dict: Annotation info of specified index. 
- """ - - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(image_level_ann_file) \ - as local_path: - item_lists = self.load_image_label_from_csv(local_path) - else: - item_lists = self.load_image_label_from_csv(image_level_ann_file) - image_level_annotations = [] - for i in range(len(self)): - img_info = self.data_infos[i].get('img_info', None) - if img_info is not None: - # for Open Images Challenges - img_id = osp.split(img_info['filename'])[-1][:-4] - else: - # for Open Images v6 - img_id = self.data_infos[i]['img_id'] - item_list = item_lists.get(img_id, None) - if item_list is not None: - image_level_labels = [] - confidences = [] - for obj in item_list: - image_level_label = int(obj['image_level_label']) - confidence = float(obj['confidence']) - - image_level_labels.append(image_level_label) - confidences.append(confidence) - - if not image_level_labels: - image_level_labels = np.zeros((0, )) - confidences = np.zeros((0, )) - else: - image_level_labels = np.array(image_level_labels) - confidences = np.array(confidences) - else: - image_level_labels = np.zeros((0, )) - confidences = np.zeros((0, )) - ann = dict( - image_level_labels=image_level_labels.astype(np.int64), - confidences=confidences.astype(np.float32)) - image_level_annotations.append(ann) - - return image_level_annotations - - def denormalize_gt_bboxes(self, annotations): - """Convert ground truth bboxes from relative position to absolute - position. - - Only used in evaluating time. - """ - assert len(self.test_img_shapes) == len(annotations) - for i in range(len(annotations)): - h, w = self.test_img_shapes[i] - annotations[i]['bboxes'][:, 0::2] *= w - annotations[i]['bboxes'][:, 1::2] *= h - return annotations - - def get_cat_ids(self, idx): - """Get category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - return self.get_ann_info(idx)['labels'].astype(np.int).tolist() - - def evaluate(self, - results, - metric='mAP', - logger=None, - iou_thr=0.5, - ioa_thr=0.5, - scale_ranges=None, - denorm_gt_bbox=True, - use_group_of=True): - """Evaluate in OpenImages. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Option is - 'mAP'. Default: 'mAP'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - iou_thr (float | list[float]): IoU threshold. Default: 0.5. - ioa_thr (float | list[float]): IoA threshold. Default: 0.5. - scale_ranges (list[tuple], optional): Scale ranges for evaluating - mAP. If not specified, all bounding boxes would be included in - evaluation. Default: None - denorm_gt_bbox (bool): Whether to denorm ground truth bboxes from - relative position to absolute position. Default: True - use_group_of (bool): Whether consider group of groud truth bboxes - during evaluating. Default: True. - - Returns: - dict[str, float]: AP metrics. 
- """ - - if not isinstance(metric, str): - assert len(metric) == 1 - metric = metric[0] - allowed_metrics = ['mAP'] - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - annotations = [self.get_ann_info(i) for i in range(len(self))] - - if self.load_image_level_labels: - image_level_annotations = \ - self.get_image_level_ann(self.image_level_ann_file) - else: - image_level_annotations = None - - # load metas from file - if self.get_metas and self.load_from_file: - assert self.meta_file.endswith( - 'pkl'), 'File name must be pkl suffix' - self.get_meta_from_file(self.meta_file) - # load metas from pipeline - else: - self.get_img_shape(self.test_img_metas) - - if len(self.test_img_shapes) > len(self): - self.test_img_shapes = self.test_img_shapes[:len(self)] - - if denorm_gt_bbox: - annotations = self.denormalize_gt_bboxes(annotations) - - # Reset test_image_metas, temp_image_metas and test_img_shapes - # to avoid potential error - self.temp_img_metas = [] - self.test_img_shapes = [] - self.test_img_metas = [] - if self.get_supercategory: - annotations = self.add_supercategory_ann(annotations) - - results = self.process_results(results, annotations, - image_level_annotations) - if use_group_of: - assert ioa_thr is not None, \ - 'ioa_thr must have value when using group_of in evaluation.' - - eval_results = OrderedDict() - iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr - ioa_thrs = [ioa_thr] if isinstance(ioa_thr, float) or ioa_thr is None \ - else ioa_thr - - # get dataset type - if len(self.CLASSES) == 500: - ds_name = 'oid_challenge' - elif len(self.CLASSES) == 601: - ds_name = 'oid_v6' - else: - ds_name = self.CLASSES - warnings.warn('Cannot infer dataset type from the length of the ' - 'classes. Set `oid_v6` as dataset type.') - - if metric == 'mAP': - assert isinstance(iou_thrs, list) and isinstance(ioa_thrs, list) - assert len(ioa_thrs) == len(iou_thrs) - mean_aps = [] - for iou_thr, ioa_thr in zip(iou_thrs, ioa_thrs): - print_log(f'\n{"-" * 15}iou_thr, ioa_thr: {iou_thr}, {ioa_thr}' - f'{"-" * 15}') - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=scale_ranges, - iou_thr=iou_thr, - ioa_thr=ioa_thr, - dataset=ds_name, - logger=logger, - use_group_of=use_group_of) - mean_aps.append(mean_ap) - eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3) - eval_results['mAP'] = sum(mean_aps) / len(mean_aps) - return eval_results - - -@DATASETS.register_module() -class OpenImagesChallengeDataset(OpenImagesDataset): - """Open Images Challenge dataset for detection.""" - - def __init__(self, ann_file, **kwargs): - assert ann_file.endswith('txt') - super(OpenImagesChallengeDataset, self).__init__( - ann_file=ann_file, **kwargs) - - def get_classes_from_csv(self, label_file): - """Get classes name from file. - - Args: - label_file (str): File path of the label description file that - maps the classes names in MID format to their short - descriptions. - - Returns: - list: Class name of OpenImages. 
- """ - - label_list = [] - id_list = [] - with open(label_file, 'r') as f: - reader = csv.reader(f) - for line in reader: - label_name = line[0] - label_id = int(line[2]) - - label_list.append(line[1]) - id_list.append(label_id) - self.index_dict[label_name] = label_id - 1 - - indexes = np.argsort(id_list) - classes_names = [] - for index in indexes: - classes_names.append(label_list[index]) - return classes_names - - def load_annotations(self, ann_file): - """Load annotation from annotation file.""" - with open(ann_file) as f: - lines = f.readlines() - i = 0 - ann_infos = [] - while i < len(lines): - bboxes = [] - labels = [] - is_group_ofs = [] - filename = lines[i].rstrip() - i += 2 - img_gt_size = int(lines[i]) - i += 1 - for j in range(img_gt_size): - sp = lines[i + j].split() - bboxes.append( - [float(sp[1]), - float(sp[2]), - float(sp[3]), - float(sp[4])]) - labels.append(int(sp[0]) - 1) # labels begin from 1 - is_group_ofs.append(True if int(sp[5]) == 1 else False) - i += img_gt_size - - gt_bboxes = np.array(bboxes, dtype=np.float32) - gt_labels = np.array(labels, dtype=np.int64) - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - gt_is_group_ofs = np.array(is_group_ofs, dtype=np.bool) - - img_info = dict(filename=filename) - ann_info = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - gt_is_group_ofs=gt_is_group_ofs) - ann_infos.append(dict(img_info=img_info, ann_info=ann_info)) - - return ann_infos - - def prepare_train_img(self, idx): - """Get training data and annotations after pipeline.""" - ann_info = self.data_infos[idx] - results = dict( - img_info=ann_info['img_info'], - ann_info=ann_info['ann_info'], - ) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Get testing data after pipeline.""" - ann_info = self.data_infos[idx] - results = dict(img_info=ann_info['img_info']) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - - results = self.pipeline(results) - if self.get_metas and self.load_from_pipeline: - self.get_meta_from_pipeline(results) - return results - - def get_relation_matrix(self, hierarchy_file): - """Get hierarchy for classes. - - Args: - hierarchy_file (str): File path to the hierarchy for classes. - - Returns: - ndarray: The matrix of the corresponding - relationship between the parent class and the child class, - of shape (class_num, class_num). - """ - class_label_tree = np.load(hierarchy_file, allow_pickle=True) - return class_label_tree[1:, 1:] - - def get_ann_info(self, idx): - """Get OpenImages annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - # avoid some potential error - data_infos = copy.deepcopy(self.data_infos[idx]['ann_info']) - return data_infos - - def load_image_label_from_csv(self, image_level_ann_file): - """Load image level annotations from csv style ann_file. - - Args: - image_level_ann_file (str): CSV style image level annotation - file path. - - Returns: - defaultdict[list[dict]]: Annotations where item of the defaultdict - indicates an image, each of which has (n) dicts. - Keys of dicts are: - - - `image_level_label` (int): of shape 1. - - `confidence` (float): of shape 1. 
- """ - - item_lists = defaultdict(list) - with open(image_level_ann_file, 'r') as f: - reader = csv.reader(f) - i = -1 - for line in reader: - i += 1 - if i == 0: - continue - else: - img_id = line[0] - label_id = line[1] - assert label_id in self.index_dict - image_level_label = int(self.index_dict[label_id]) - confidence = float(line[2]) - item_lists[img_id].append( - dict( - image_level_label=image_level_label, - confidence=confidence)) - return item_lists diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/__init__.py deleted file mode 100644 index dae4b8b1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .auto_augment import (AutoAugment, BrightnessTransform, ColorTransform, - ContrastTransform, EqualizeTransform, Rotate, Shear, - Translate) -from .compose import Compose -from .formatting import (Collect, DefaultFormatBundle, ImageToTensor, - ToDataContainer, ToTensor, Transpose, to_tensor) -from .instaboost import InstaBoost -from .loading import (LoadAnnotations, LoadImageFromFile, LoadImageFromWebcam, - LoadMultiChannelImageFromFiles, LoadPanopticAnnotations, - LoadProposals) -from .test_time_aug import MultiScaleFlipAug -from .transforms import (Albu, CopyPaste, CutOut, Expand, MinIoURandomCrop, - MixUp, Mosaic, Normalize, Pad, PhotoMetricDistortion, - RandomAffine, RandomCenterCropPad, RandomCrop, - RandomFlip, RandomShift, Resize, SegRescale, - YOLOXHSVRandomAug) - -__all__ = [ - 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer', - 'Transpose', 'Collect', 'DefaultFormatBundle', 'LoadAnnotations', - 'LoadImageFromFile', 'LoadImageFromWebcam', 'LoadPanopticAnnotations', - 'LoadMultiChannelImageFromFiles', 'LoadProposals', 'MultiScaleFlipAug', - 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', 'Normalize', 'SegRescale', - 'MinIoURandomCrop', 'Expand', 'PhotoMetricDistortion', 'Albu', - 'InstaBoost', 'RandomCenterCropPad', 'AutoAugment', 'CutOut', 'Shear', - 'Rotate', 'ColorTransform', 'EqualizeTransform', 'BrightnessTransform', - 'ContrastTransform', 'Translate', 'RandomShift', 'Mosaic', 'MixUp', - 'RandomAffine', 'YOLOXHSVRandomAug', 'CopyPaste' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/auto_augment.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/auto_augment.py deleted file mode 100644 index b0ff67db..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/auto_augment.py +++ /dev/null @@ -1,894 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy - -import cv2 -import mmcv -import numpy as np - -from ..builder import PIPELINES -from .compose import Compose - -_MAX_LEVEL = 10 - - -def level_to_value(level, max_value): - """Map from level to values based on max_value.""" - return (level / _MAX_LEVEL) * max_value - - -def enhance_level_to_value(level, a=1.8, b=0.1): - """Map from level to values.""" - return (level / _MAX_LEVEL) * a + b - - -def random_negative(value, random_negative_prob): - """Randomly negate value based on random_negative_prob.""" - return -value if np.random.rand() < random_negative_prob else value - - -def bbox2fields(): - """The key correspondence from bboxes to labels, masks and - segmentations.""" - bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - bbox2seg = { - 'gt_bboxes': 'gt_semantic_seg', - } - return bbox2label, bbox2mask, bbox2seg - - -@PIPELINES.register_module() -class AutoAugment: - """Auto augmentation. - - This data augmentation is proposed in `Learning Data Augmentation - Strategies for Object Detection `_. - - TODO: Implement 'Shear', 'Sharpness' and 'Rotate' transforms - - Args: - policies (list[list[dict]]): The policies of auto augmentation. Each - policy in ``policies`` is a specific augmentation policy, and is - composed by several augmentations (dict). When AutoAugment is - called, a random policy in ``policies`` will be selected to - augment images. - - Examples: - >>> replace = (104, 116, 124) - >>> policies = [ - >>> [ - >>> dict(type='Sharpness', prob=0.0, level=8), - >>> dict( - >>> type='Shear', - >>> prob=0.4, - >>> level=0, - >>> replace=replace, - >>> axis='x') - >>> ], - >>> [ - >>> dict( - >>> type='Rotate', - >>> prob=0.6, - >>> level=10, - >>> replace=replace), - >>> dict(type='Color', prob=1.0, level=6) - >>> ] - >>> ] - >>> augmentation = AutoAugment(policies) - >>> img = np.ones(100, 100, 3) - >>> gt_bboxes = np.ones(10, 4) - >>> results = dict(img=img, gt_bboxes=gt_bboxes) - >>> results = augmentation(results) - """ - - def __init__(self, policies): - assert isinstance(policies, list) and len(policies) > 0, \ - 'Policies must be a non-empty list.' - for policy in policies: - assert isinstance(policy, list) and len(policy) > 0, \ - 'Each policy in policies must be a non-empty list.' - for augment in policy: - assert isinstance(augment, dict) and 'type' in augment, \ - 'Each specific augmentation must be a dict with key' \ - ' "type".' - - self.policies = copy.deepcopy(policies) - self.transforms = [Compose(policy) for policy in self.policies] - - def __call__(self, results): - transform = np.random.choice(self.transforms) - return transform(results) - - def __repr__(self): - return f'{self.__class__.__name__}(policies={self.policies})' - - -@PIPELINES.register_module() -class Shear: - """Apply Shear Transformation to image (and its corresponding bbox, mask, - segmentation). - - Args: - level (int | float): The level should be in range [0,_MAX_LEVEL]. - img_fill_val (int | float | tuple): The filled values for image border. - If float, the same fill value will be used for all the three - channels of image. If tuple, the should be 3 elements. - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - prob (float): The probability for performing Shear and should be in - range [0, 1]. 
- direction (str): The direction for shear, either "horizontal" - or "vertical". - max_shear_magnitude (float): The maximum magnitude for Shear - transformation. - random_negative_prob (float): The probability that turns the - offset negative. Should be in range [0,1] - interpolation (str): Same as in :func:`mmcv.imshear`. - """ - - def __init__(self, - level, - img_fill_val=128, - seg_ignore_label=255, - prob=0.5, - direction='horizontal', - max_shear_magnitude=0.3, - random_negative_prob=0.5, - interpolation='bilinear'): - assert isinstance(level, (int, float)), 'The level must be type ' \ - f'int or float, got {type(level)}.' - assert 0 <= level <= _MAX_LEVEL, 'The level should be in range ' \ - f'[0,{_MAX_LEVEL}], got {level}.' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, 'img_fill_val as tuple must ' \ - f'have 3 elements. got {len(img_fill_val)}.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError( - 'img_fill_val must be float or tuple with 3 elements.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), 'all ' \ - 'elements of img_fill_val should between range [0,255].' \ - f'got {img_fill_val}.' - assert 0 <= prob <= 1.0, 'The probability of shear should be in ' \ - f'range [0,1]. got {prob}.' - assert direction in ('horizontal', 'vertical'), 'direction must ' \ - f'in be either "horizontal" or "vertical". got {direction}.' - assert isinstance(max_shear_magnitude, float), 'max_shear_magnitude ' \ - f'should be type float. got {type(max_shear_magnitude)}.' - assert 0. <= max_shear_magnitude <= 1., 'Defaultly ' \ - 'max_shear_magnitude should be in range [0,1]. ' \ - f'got {max_shear_magnitude}.' - self.level = level - self.magnitude = level_to_value(level, max_shear_magnitude) - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.prob = prob - self.direction = direction - self.max_shear_magnitude = max_shear_magnitude - self.random_negative_prob = random_negative_prob - self.interpolation = interpolation - - def _shear_img(self, - results, - magnitude, - direction='horizontal', - interpolation='bilinear'): - """Shear the image. - - Args: - results (dict): Result dict from loading pipeline. - magnitude (int | float): The magnitude used for shear. - direction (str): The direction for shear, either "horizontal" - or "vertical". - interpolation (str): Same as in :func:`mmcv.imshear`. 
- """ - for key in results.get('img_fields', ['img']): - img = results[key] - img_sheared = mmcv.imshear( - img, - magnitude, - direction, - border_value=self.img_fill_val, - interpolation=interpolation) - results[key] = img_sheared.astype(img.dtype) - results['img_shape'] = results[key].shape - - def _shear_bboxes(self, results, magnitude): - """Shear the bboxes.""" - h, w, c = results['img_shape'] - if self.direction == 'horizontal': - shear_matrix = np.stack([[1, magnitude], - [0, 1]]).astype(np.float32) # [2, 2] - else: - shear_matrix = np.stack([[1, 0], [magnitude, - 1]]).astype(np.float32) - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - coordinates = np.stack([[min_x, min_y], [max_x, min_y], - [min_x, max_y], - [max_x, max_y]]) # [4, 2, nb_box, 1] - coordinates = coordinates[..., 0].transpose( - (2, 1, 0)).astype(np.float32) # [nb_box, 2, 4] - new_coords = np.matmul(shear_matrix[None, :, :], - coordinates) # [nb_box, 2, 4] - min_x = np.min(new_coords[:, 0, :], axis=-1) - min_y = np.min(new_coords[:, 1, :], axis=-1) - max_x = np.max(new_coords[:, 0, :], axis=-1) - max_y = np.max(new_coords[:, 1, :], axis=-1) - min_x = np.clip(min_x, a_min=0, a_max=w) - min_y = np.clip(min_y, a_min=0, a_max=h) - max_x = np.clip(max_x, a_min=min_x, a_max=w) - max_y = np.clip(max_y, a_min=min_y, a_max=h) - results[key] = np.stack([min_x, min_y, max_x, max_y], - axis=-1).astype(results[key].dtype) - - def _shear_masks(self, - results, - magnitude, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Shear the masks.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.shear((h, w), - magnitude, - direction, - border_value=fill_val, - interpolation=interpolation) - - def _shear_seg(self, - results, - magnitude, - direction='horizontal', - fill_val=255, - interpolation='bilinear'): - """Shear the segmentation maps.""" - for key in results.get('seg_fields', []): - seg = results[key] - results[key] = mmcv.imshear( - seg, - magnitude, - direction, - border_value=fill_val, - interpolation=interpolation).astype(seg.dtype) - - def _filter_invalid(self, results, min_bbox_size=0): - """Filter bboxes and corresponding masks too small after shear - augmentation.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - - def __call__(self, results): - """Call function to shear images, bounding boxes, masks and semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Sheared results. - """ - if np.random.rand() > self.prob: - return results - magnitude = random_negative(self.magnitude, self.random_negative_prob) - self._shear_img(results, magnitude, self.direction, self.interpolation) - self._shear_bboxes(results, magnitude) - # fill_val set to 0 for background of mask. 
- self._shear_masks( - results, - magnitude, - self.direction, - fill_val=0, - interpolation=self.interpolation) - self._shear_seg( - results, - magnitude, - self.direction, - fill_val=self.seg_ignore_label, - interpolation=self.interpolation) - self._filter_invalid(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'img_fill_val={self.img_fill_val}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label}, ' - repr_str += f'prob={self.prob}, ' - repr_str += f'direction={self.direction}, ' - repr_str += f'max_shear_magnitude={self.max_shear_magnitude}, ' - repr_str += f'random_negative_prob={self.random_negative_prob}, ' - repr_str += f'interpolation={self.interpolation})' - return repr_str - - -@PIPELINES.register_module() -class Rotate: - """Apply Rotate Transformation to image (and its corresponding bbox, mask, - segmentation). - - Args: - level (int | float): The level should be in range (0,_MAX_LEVEL]. - scale (int | float): Isotropic scale factor. Same in - ``mmcv.imrotate``. - center (int | float | tuple[float]): Center point (w, h) of the - rotation in the source image. If None, the center of the - image will be used. Same in ``mmcv.imrotate``. - img_fill_val (int | float | tuple): The fill value for image border. - If float, the same value will be used for all the three - channels of image. If tuple, the should be 3 elements (e.g. - equals the number of channels for image). - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - prob (float): The probability for perform transformation and - should be in range 0 to 1. - max_rotate_angle (int | float): The maximum angles for rotate - transformation. - random_negative_prob (float): The probability that turns the - offset negative. - """ - - def __init__(self, - level, - scale=1, - center=None, - img_fill_val=128, - seg_ignore_label=255, - prob=0.5, - max_rotate_angle=30, - random_negative_prob=0.5): - assert isinstance(level, (int, float)), \ - f'The level must be type int or float. got {type(level)}.' - assert 0 <= level <= _MAX_LEVEL, \ - f'The level should be in range (0,{_MAX_LEVEL}]. got {level}.' - assert isinstance(scale, (int, float)), \ - f'The scale must be type int or float. got type {type(scale)}.' - if isinstance(center, (int, float)): - center = (center, center) - elif isinstance(center, tuple): - assert len(center) == 2, 'center with type tuple must have '\ - f'2 elements. got {len(center)} elements.' - else: - assert center is None, 'center must be None or type int, '\ - f'float or tuple, got type {type(center)}.' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, 'img_fill_val as tuple must '\ - f'have 3 elements. got {len(img_fill_val)}.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError( - 'img_fill_val must be float or tuple with 3 elements.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), \ - 'all elements of img_fill_val should between range [0,255]. '\ - f'got {img_fill_val}.' - assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. '\ - f'got {prob}.' - assert isinstance(max_rotate_angle, (int, float)), 'max_rotate_angle '\ - f'should be type int or float. got type {type(max_rotate_angle)}.' 
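As a quick reference, the two level-mapping helpers defined at the top of this removed auto_augment.py reduce to simple linear formulas. The snippet below restates them with the module's defaults (`_MAX_LEVEL = 10`, Shear's `max_shear_magnitude = 0.3`, and `a = 1.8`, `b = 0.1` for the enhancement transforms); it is a standalone sketch, not code from this repository.

```python
import math

_MAX_LEVEL = 10

def level_to_value(level, max_value):
    """Linear map used by Shear/Rotate/Translate: level -> magnitude/angle/offset."""
    return (level / _MAX_LEVEL) * max_value

def enhance_level_to_value(level, a=1.8, b=0.1):
    """Linear map used by Color/Brightness/Contrast: level -> enhancement factor."""
    return (level / _MAX_LEVEL) * a + b

# level=5 with Shear's default max_shear_magnitude=0.3 gives magnitude 0.15,
# and level=5 gives an enhancement factor of 1.0 (roughly the identity).
assert math.isclose(level_to_value(5, 0.3), 0.15)
assert math.isclose(enhance_level_to_value(5), 1.0)
```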
- self.level = level - self.scale = scale - # Rotation angle in degrees. Positive values mean - # clockwise rotation. - self.angle = level_to_value(level, max_rotate_angle) - self.center = center - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.prob = prob - self.max_rotate_angle = max_rotate_angle - self.random_negative_prob = random_negative_prob - - def _rotate_img(self, results, angle, center=None, scale=1.0): - """Rotate the image. - - Args: - results (dict): Result dict from loading pipeline. - angle (float): Rotation angle in degrees, positive values - mean clockwise rotation. Same in ``mmcv.imrotate``. - center (tuple[float], optional): Center point (w, h) of the - rotation. Same in ``mmcv.imrotate``. - scale (int | float): Isotropic scale factor. Same in - ``mmcv.imrotate``. - """ - for key in results.get('img_fields', ['img']): - img = results[key].copy() - img_rotated = mmcv.imrotate( - img, angle, center, scale, border_value=self.img_fill_val) - results[key] = img_rotated.astype(img.dtype) - results['img_shape'] = results[key].shape - - def _rotate_bboxes(self, results, rotate_matrix): - """Rotate the bboxes.""" - h, w, c = results['img_shape'] - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - coordinates = np.stack([[min_x, min_y], [max_x, min_y], - [min_x, max_y], - [max_x, max_y]]) # [4, 2, nb_bbox, 1] - # pad 1 to convert from format [x, y] to homogeneous - # coordinates format [x, y, 1] - coordinates = np.concatenate( - (coordinates, - np.ones((4, 1, coordinates.shape[2], 1), coordinates.dtype)), - axis=1) # [4, 3, nb_bbox, 1] - coordinates = coordinates.transpose( - (2, 0, 1, 3)) # [nb_bbox, 4, 3, 1] - rotated_coords = np.matmul(rotate_matrix, - coordinates) # [nb_bbox, 4, 2, 1] - rotated_coords = rotated_coords[..., 0] # [nb_bbox, 4, 2] - min_x, min_y = np.min( - rotated_coords[:, :, 0], axis=1), np.min( - rotated_coords[:, :, 1], axis=1) - max_x, max_y = np.max( - rotated_coords[:, :, 0], axis=1), np.max( - rotated_coords[:, :, 1], axis=1) - min_x, min_y = np.clip( - min_x, a_min=0, a_max=w), np.clip( - min_y, a_min=0, a_max=h) - max_x, max_y = np.clip( - max_x, a_min=min_x, a_max=w), np.clip( - max_y, a_min=min_y, a_max=h) - results[key] = np.stack([min_x, min_y, max_x, max_y], - axis=-1).astype(results[key].dtype) - - def _rotate_masks(self, - results, - angle, - center=None, - scale=1.0, - fill_val=0): - """Rotate the masks.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.rotate((h, w), angle, center, scale, fill_val) - - def _rotate_seg(self, - results, - angle, - center=None, - scale=1.0, - fill_val=255): - """Rotate the segmentation map.""" - for key in results.get('seg_fields', []): - seg = results[key].copy() - results[key] = mmcv.imrotate( - seg, angle, center, scale, - border_value=fill_val).astype(seg.dtype) - - def _filter_invalid(self, results, min_bbox_size=0): - """Filter bboxes and corresponding masks too small after rotate - augmentation.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. 
gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - - def __call__(self, results): - """Call function to rotate images, bounding boxes, masks and semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Rotated results. - """ - if np.random.rand() > self.prob: - return results - h, w = results['img'].shape[:2] - center = self.center - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - angle = random_negative(self.angle, self.random_negative_prob) - self._rotate_img(results, angle, center, self.scale) - rotate_matrix = cv2.getRotationMatrix2D(center, -angle, self.scale) - self._rotate_bboxes(results, rotate_matrix) - self._rotate_masks(results, angle, center, self.scale, fill_val=0) - self._rotate_seg( - results, angle, center, self.scale, fill_val=self.seg_ignore_label) - self._filter_invalid(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'scale={self.scale}, ' - repr_str += f'center={self.center}, ' - repr_str += f'img_fill_val={self.img_fill_val}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label}, ' - repr_str += f'prob={self.prob}, ' - repr_str += f'max_rotate_angle={self.max_rotate_angle}, ' - repr_str += f'random_negative_prob={self.random_negative_prob})' - return repr_str - - -@PIPELINES.register_module() -class Translate: - """Translate the images, bboxes, masks and segmentation maps horizontally - or vertically. - - Args: - level (int | float): The level for Translate and should be in - range [0,_MAX_LEVEL]. - prob (float): The probability for performing translation and - should be in range [0, 1]. - img_fill_val (int | float | tuple): The filled value for image - border. If float, the same fill value will be used for all - the three channels of image. If tuple, the should be 3 - elements (e.g. equals the number of channels for image). - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - direction (str): The translate direction, either "horizontal" - or "vertical". - max_translate_offset (int | float): The maximum pixel's offset for - Translate. - random_negative_prob (float): The probability that turns the - offset negative. - min_size (int | float): The minimum pixel for filtering - invalid bboxes after the translation. - """ - - def __init__(self, - level, - prob=0.5, - img_fill_val=128, - seg_ignore_label=255, - direction='horizontal', - max_translate_offset=250., - random_negative_prob=0.5, - min_size=0): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level used for calculating Translate\'s offset should be ' \ - 'in range [0,_MAX_LEVEL]' - assert 0 <= prob <= 1.0, \ - 'The probability of translation should be in range [0, 1].' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, \ - 'img_fill_val as tuple must have 3 elements.' 
- img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError('img_fill_val must be type float or tuple.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), \ - 'all elements of img_fill_val should between range [0,255].' - assert direction in ('horizontal', 'vertical'), \ - 'direction should be "horizontal" or "vertical".' - assert isinstance(max_translate_offset, (int, float)), \ - 'The max_translate_offset must be type int or float.' - # the offset used for translation - self.offset = int(level_to_value(level, max_translate_offset)) - self.level = level - self.prob = prob - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.direction = direction - self.max_translate_offset = max_translate_offset - self.random_negative_prob = random_negative_prob - self.min_size = min_size - - def _translate_img(self, results, offset, direction='horizontal'): - """Translate the image. - - Args: - results (dict): Result dict from loading pipeline. - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - """ - for key in results.get('img_fields', ['img']): - img = results[key].copy() - results[key] = mmcv.imtranslate( - img, offset, direction, self.img_fill_val).astype(img.dtype) - results['img_shape'] = results[key].shape - - def _translate_bboxes(self, results, offset): - """Shift bboxes horizontally or vertically, according to offset.""" - h, w, c = results['img_shape'] - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - if self.direction == 'horizontal': - min_x = np.maximum(0, min_x + offset) - max_x = np.minimum(w, max_x + offset) - elif self.direction == 'vertical': - min_y = np.maximum(0, min_y + offset) - max_y = np.minimum(h, max_y + offset) - - # the boxes translated outside of image will be filtered along with - # the corresponding masks, by invoking ``_filter_invalid``. - results[key] = np.concatenate([min_x, min_y, max_x, max_y], - axis=-1) - - def _translate_masks(self, - results, - offset, - direction='horizontal', - fill_val=0): - """Translate masks horizontally or vertically.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.translate((h, w), offset, direction, fill_val) - - def _translate_seg(self, - results, - offset, - direction='horizontal', - fill_val=255): - """Translate segmentation maps horizontally or vertically.""" - for key in results.get('seg_fields', []): - seg = results[key].copy() - results[key] = mmcv.imtranslate(seg, offset, direction, - fill_val).astype(seg.dtype) - - def _filter_invalid(self, results, min_size=0): - """Filter bboxes and masks too small or translated out of image.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_size) & (bbox_h > min_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. 
gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - return results - - def __call__(self, results): - """Call function to translate images, bounding boxes, masks and - semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Translated results. - """ - if np.random.rand() > self.prob: - return results - offset = random_negative(self.offset, self.random_negative_prob) - self._translate_img(results, offset, self.direction) - self._translate_bboxes(results, offset) - # fill_val defaultly 0 for BitmapMasks and None for PolygonMasks. - self._translate_masks(results, offset, self.direction) - # fill_val set to ``seg_ignore_label`` for the ignored value - # of segmentation map. - self._translate_seg( - results, offset, self.direction, fill_val=self.seg_ignore_label) - self._filter_invalid(results, min_size=self.min_size) - return results - - -@PIPELINES.register_module() -class ColorTransform: - """Apply Color transformation to image. The bboxes, masks, and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Color transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_color_img(self, results, factor=1.0): - """Apply Color transformation to image.""" - for key in results.get('img_fields', ['img']): - # NOTE defaultly the image should be BGR format - img = results[key] - results[key] = mmcv.adjust_color(img, factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Color transformation. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Colored results. - """ - if np.random.rand() > self.prob: - return results - self._adjust_color_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str - - -@PIPELINES.register_module() -class EqualizeTransform: - """Apply Equalize transformation to image. The bboxes, masks and - segmentations are not modified. - - Args: - prob (float): The probability for performing Equalize transformation. - """ - - def __init__(self, prob=0.5): - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.prob = prob - - def _imequalize(self, results): - """Equalizes the histogram of one image.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.imequalize(img).astype(img.dtype) - - def __call__(self, results): - """Call function for Equalize transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._imequalize(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(prob={self.prob})' - - -@PIPELINES.register_module() -class BrightnessTransform: - """Apply Brightness transformation to image. The bboxes, masks and - segmentations are not modified. 
- - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Brightness transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_brightness_img(self, results, factor=1.0): - """Adjust the brightness of image.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.adjust_brightness(img, - factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Brightness transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._adjust_brightness_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str - - -@PIPELINES.register_module() -class ContrastTransform: - """Apply Contrast transformation to image. The bboxes, masks and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Contrast transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_contrast_img(self, results, factor=1.0): - """Adjust the image contrast.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.adjust_contrast(img, factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Contrast transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._adjust_contrast_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/compose.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/compose.py deleted file mode 100644 index d7592200..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/compose.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections - -from mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose: - """Compose multiple transforms sequentially. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. 
- """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. - """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - str_ = t.__repr__() - if 'Compose(' in str_: - str_ = str_.replace('\n', '\n ') - format_string += '\n' - format_string += f' {str_}' - format_string += '\n)' - return format_string diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/formating.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/formating.py deleted file mode 100644 index 3b3e45ab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/formating.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# flake8: noqa -import warnings - -from .formatting import * - -warnings.warn('DeprecationWarning: mmdet.datasets.pipelines.formating will be ' - 'deprecated, please replace it with ' - 'mmdet.datasets.pipelines.formatting.') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/formatting.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/formatting.py deleted file mode 100644 index 45ca69cf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/formatting.py +++ /dev/null @@ -1,392 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Sequence - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC - -from ..builder import PIPELINES - - -def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. - - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, - :class:`Sequence`, :class:`int` and :class:`float`. - - Args: - data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to - be converted. - """ - - if isinstance(data, torch.Tensor): - return data - elif isinstance(data, np.ndarray): - return torch.from_numpy(data) - elif isinstance(data, Sequence) and not mmcv.is_str(data): - return torch.tensor(data) - elif isinstance(data, int): - return torch.LongTensor([data]) - elif isinstance(data, float): - return torch.FloatTensor([data]) - else: - raise TypeError(f'type {type(data)} cannot be converted to tensor.') - - -@PIPELINES.register_module() -class ToTensor: - """Convert some results to :obj:`torch.Tensor` by given keys. - - Args: - keys (Sequence[str]): Keys that need to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert data in results to :obj:`torch.Tensor`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted - to :obj:`torch.Tensor`. 
- """ - for key in self.keys: - results[key] = to_tensor(results[key]) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class ImageToTensor: - """Convert image to :obj:`torch.Tensor` by given keys. - - The dimension order of input image is (H, W, C). The pipeline will convert - it to (C, H, W). If only 2 dimension (H, W) is given, the output would be - (1, H, W). - - Args: - keys (Sequence[str]): Key of images to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - for key in self.keys: - img = results[key] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - results[key] = (to_tensor(img.transpose(2, 0, 1))).contiguous() - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class Transpose: - """Transpose some results by given keys. - - Args: - keys (Sequence[str]): Keys of results to be transposed. - order (Sequence[int]): Order of transpose. - """ - - def __init__(self, keys, order): - self.keys = keys - self.order = order - - def __call__(self, results): - """Call function to transpose the channel order of data in results. - - Args: - results (dict): Result dict contains the data to transpose. - - Returns: - dict: The result dict contains the data transposed to \ - ``self.order``. - """ - for key in self.keys: - results[key] = results[key].transpose(self.order) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, order={self.order})' - - -@PIPELINES.register_module() -class ToDataContainer: - """Convert results to :obj:`mmcv.DataContainer` by given fields. - - Args: - fields (Sequence[dict]): Each field is a dict like - ``dict(key='xxx', **kwargs)``. The ``key`` in result will - be converted to :obj:`mmcv.DataContainer` with ``**kwargs``. - Default: ``(dict(key='img', stack=True), dict(key='gt_bboxes'), - dict(key='gt_labels'))``. - """ - - def __init__(self, - fields=(dict(key='img', stack=True), dict(key='gt_bboxes'), - dict(key='gt_labels'))): - self.fields = fields - - def __call__(self, results): - """Call function to convert data in results to - :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted to \ - :obj:`mmcv.DataContainer`. - """ - - for field in self.fields: - field = field.copy() - key = field.pop('key') - results[key] = DC(results[key], **field) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(fields={self.fields})' - - -@PIPELINES.register_module() -class DefaultFormatBundle: - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img", - "proposals", "gt_bboxes", "gt_labels", "gt_masks" and "gt_semantic_seg". - These fields are formatted as follows. 
- - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - proposals: (1)to tensor, (2)to DataContainer - - gt_bboxes: (1)to tensor, (2)to DataContainer - - gt_bboxes_ignore: (1)to tensor, (2)to DataContainer - - gt_labels: (1)to tensor, (2)to DataContainer - - gt_masks: (1)to tensor, (2)to DataContainer (cpu_only=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, \ - (3)to DataContainer (stack=True) - - Args: - img_to_float (bool): Whether to force the image to be converted to - float type. Default: True. - pad_val (dict): A dict for padding value in batch collating, - the default value is `dict(img=0, masks=0, seg=255)`. - Without this argument, the padding value of "gt_semantic_seg" - will be set to 0 by default, which should be 255. - """ - - def __init__(self, - img_to_float=True, - pad_val=dict(img=0, masks=0, seg=255)): - self.img_to_float = img_to_float - self.pad_val = pad_val - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with \ - default bundle. - """ - - if 'img' in results: - img = results['img'] - if self.img_to_float is True and img.dtype == np.uint8: - # Normally, image is of uint8 type without normalization. - # At this time, it needs to be forced to be converted to - # flot32, otherwise the model training and inference - # will be wrong. Only used for YOLOX currently . - img = img.astype(np.float32) - # add default meta keys - results = self._add_default_meta_keys(results) - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - results['img'] = DC( - to_tensor(img), padding_value=self.pad_val['img'], stack=True) - for key in ['proposals', 'gt_bboxes', 'gt_bboxes_ignore', 'gt_labels']: - if key not in results: - continue - results[key] = DC(to_tensor(results[key])) - if 'gt_masks' in results: - results['gt_masks'] = DC( - results['gt_masks'], - padding_value=self.pad_val['masks'], - cpu_only=True) - if 'gt_semantic_seg' in results: - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, ...]), - padding_value=self.pad_val['seg'], - stack=True) - return results - - def _add_default_meta_keys(self, results): - """Add default meta keys. - - We set default meta keys including `pad_shape`, `scale_factor` and - `img_norm_cfg` to avoid the case where no `Resize`, `Normalize` and - `Pad` are implemented during the whole pipeline. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - results (dict): Updated result dict contains the data to convert. - """ - img = results['img'] - results.setdefault('pad_shape', img.shape) - results.setdefault('scale_factor', 1.0) - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results.setdefault( - 'img_norm_cfg', - dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False)) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(img_to_float={self.img_to_float})' - - -@PIPELINES.register_module() -class Collect: - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "proposals", "gt_bboxes", - "gt_bboxes_ignore", "gt_labels", and/or "gt_masks". - - The "img_meta" item is always populated. 
The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - "img_shape": shape of the image input to the network as a tuple \ - (h, w, c). Note that images may be zero padded on the \ - bottom/right if the batch tensor is larger than this shape. - - - "scale_factor": a float indicating the preprocessing scale - - - "flip": a boolean indicating if image flip transform was used - - - "filename": path to the image file - - - "ori_shape": original shape of the image as a tuple (h, w, c) - - - "pad_shape": image shape after padding - - - "img_norm_cfg": a dict of normalization information: - - - mean - per channel mean subtraction - - std - per channel std divisor - - to_rgb - bool indicating if bgr was converted to rgb - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. - Default: ``('filename', 'ori_filename', 'ori_shape', 'img_shape', - 'pad_shape', 'scale_factor', 'flip', 'flip_direction', - 'img_norm_cfg')`` - """ - - def __init__(self, - keys, - meta_keys=('filename', 'ori_filename', 'ori_shape', - 'img_shape', 'pad_shape', 'scale_factor', 'flip', - 'flip_direction', 'img_norm_cfg')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:mmcv.DataContainer. - - Args: - results (dict): Result dict contains the data to collect. - - Returns: - dict: The result dict contains the following keys - - - keys in``self.keys`` - - ``img_metas`` - """ - - data = {} - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data['img_metas'] = DC(img_meta, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' - - -@PIPELINES.register_module() -class WrapFieldsToLists: - """Wrap fields of the data dictionary into lists for evaluation. - - This class can be used as a last step of a test or validation - pipeline for single image evaluation or inference. - - Example: - >>> test_pipeline = [ - >>> dict(type='LoadImageFromFile'), - >>> dict(type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - >>> dict(type='Pad', size_divisor=32), - >>> dict(type='ImageToTensor', keys=['img']), - >>> dict(type='Collect', keys=['img']), - >>> dict(type='WrapFieldsToLists') - >>> ] - """ - - def __call__(self, results): - """Call function to wrap fields into lists. - - Args: - results (dict): Result dict contains the data to wrap. - - Returns: - dict: The result dict where value of ``self.keys`` are wrapped \ - into list. - """ - - # Wrap dict fields into lists - for key, val in results.items(): - results[key] = [val] - return results - - def __repr__(self): - return f'{self.__class__.__name__}()' diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/instaboost.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/instaboost.py deleted file mode 100644 index ca10c4c7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/instaboost.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class InstaBoost: - r"""Data augmentation method in `InstaBoost: Boosting Instance - Segmentation Via Probability Map Guided Copy-Pasting - `_. - - Refer to https://github.com/GothicAi/Instaboost for implementation details. - - Args: - action_candidate (tuple): Action candidates. "normal", "horizontal", \ - "vertical", "skip" are supported. Default: ('normal', \ - 'horizontal', 'skip'). - action_prob (tuple): Corresponding action probabilities. Should be \ - the same length as action_candidate. Default: (1, 0, 0). - scale (tuple): (min scale, max scale). Default: (0.8, 1.2). - dx (int): The maximum x-axis shift will be (instance width) / dx. - Default 15. - dy (int): The maximum y-axis shift will be (instance height) / dy. - Default 15. - theta (tuple): (min rotation degree, max rotation degree). \ - Default: (-1, 1). - color_prob (float): Probability of images for color augmentation. - Default 0.5. - heatmap_flag (bool): Whether to use heatmap guided. Default False. - aug_ratio (float): Probability of applying this transformation. \ - Default 0.5. - """ - - def __init__(self, - action_candidate=('normal', 'horizontal', 'skip'), - action_prob=(1, 0, 0), - scale=(0.8, 1.2), - dx=15, - dy=15, - theta=(-1, 1), - color_prob=0.5, - hflag=False, - aug_ratio=0.5): - try: - import instaboostfast as instaboost - except ImportError: - raise ImportError( - 'Please run "pip install instaboostfast" ' - 'to install instaboostfast first for instaboost augmentation.') - self.cfg = instaboost.InstaBoostConfig(action_candidate, action_prob, - scale, dx, dy, theta, - color_prob, hflag) - self.aug_ratio = aug_ratio - - def _load_anns(self, results): - labels = results['ann_info']['labels'] - masks = results['ann_info']['masks'] - bboxes = results['ann_info']['bboxes'] - n = len(labels) - - anns = [] - for i in range(n): - label = labels[i] - bbox = bboxes[i] - mask = masks[i] - x1, y1, x2, y2 = bbox - # assert (x2 - x1) >= 1 and (y2 - y1) >= 1 - bbox = [x1, y1, x2 - x1, y2 - y1] - anns.append({ - 'category_id': label, - 'segmentation': mask, - 'bbox': bbox - }) - - return anns - - def _parse_anns(self, results, anns, img): - gt_bboxes = [] - gt_labels = [] - gt_masks_ann = [] - for ann in anns: - x1, y1, w, h = ann['bbox'] - # TODO: more essential bug need to be fixed in instaboost - if w <= 0 or h <= 0: - continue - bbox = [x1, y1, x1 + w, y1 + h] - gt_bboxes.append(bbox) - gt_labels.append(ann['category_id']) - gt_masks_ann.append(ann['segmentation']) - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - results['ann_info']['labels'] = gt_labels - results['ann_info']['bboxes'] = gt_bboxes - results['ann_info']['masks'] = gt_masks_ann - results['img'] = img - return results - - def __call__(self, results): - img = results['img'] - ori_type = img.dtype - anns = self._load_anns(results) - if np.random.choice([0, 1], p=[1 - self.aug_ratio, self.aug_ratio]): - try: - import instaboostfast as instaboost - except ImportError: - raise ImportError('Please run "pip install instaboostfast" ' - 'to install instaboostfast first.') - anns, img = instaboost.get_new_data( - anns, img.astype(np.uint8), self.cfg, background=None) - - results = self._parse_anns(results, anns, img.astype(ori_type)) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(cfg={self.cfg}, aug_ratio={self.aug_ratio})' - return repr_str diff --git 
a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/loading.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/loading.py deleted file mode 100644 index 41ccff5d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/loading.py +++ /dev/null @@ -1,609 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils - -from mmdet.core import BitmapMasks, PolygonMasks -from ..builder import PIPELINES - -try: - from panopticapi.utils import rgb2id -except ImportError: - rgb2id = None - - -@PIPELINES.register_module() -class LoadImageFromFile: - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='color', - channel_order='bgr', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.channel_order = channel_order - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, channel_order=self.channel_order) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f"channel_order='{self.channel_order}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadImageFromWebcam(LoadImageFromFile): - """Load an image from webcam. - - Similar with :obj:`LoadImageFromFile`, but the image read from webcam is in - ``results['img']``. - """ - - def __call__(self, results): - """Call functions to add image meta information. - - Args: - results (dict): Result dict with Webcam read image in - ``results['img']``. - - Returns: - dict: The dict contains loaded image and meta information. 
- """ - - img = results['img'] - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = None - results['ori_filename'] = None - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - -@PIPELINES.register_module() -class LoadMultiChannelImageFromFiles: - """Load multi-channel images from a list of separate channel files. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename", which is expected to be a list of filenames). - Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='unchanged', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load multiple images and get images meta - information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded images and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = [ - osp.join(results['img_prefix'], fname) - for fname in results['img_info']['filename'] - ] - else: - filename = results['img_info']['filename'] - - img = [] - for name in filename: - img_bytes = self.file_client.get(name) - img.append(mmcv.imfrombytes(img_bytes, flag=self.color_type)) - img = np.stack(img, axis=-1) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations: - """Load multiple types of annotations. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. - with_mask (bool): Whether to parse and load the mask annotation. - Default: False. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: False. - poly2mask (bool): Whether to convert the instance masks from polygons - to bitmaps. Default: True. 
- denorm_bbox (bool): Whether to convert bbox from relative value to - absolute value. Only used in OpenImage Dataset. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=False, - with_seg=False, - poly2mask=True, - denorm_bbox=False, - file_client_args=dict(backend='disk')): - self.with_bbox = with_bbox - self.with_label = with_label - self.with_mask = with_mask - self.with_seg = with_seg - self.poly2mask = poly2mask - self.denorm_bbox = denorm_bbox - self.file_client_args = file_client_args.copy() - self.file_client = None - - def _load_bboxes(self, results): - """Private function to load bounding box annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box annotations. - """ - - ann_info = results['ann_info'] - results['gt_bboxes'] = ann_info['bboxes'].copy() - - if self.denorm_bbox: - bbox_num = results['gt_bboxes'].shape[0] - if bbox_num != 0: - h, w = results['img_shape'][:2] - results['gt_bboxes'][:, 0::2] *= w - results['gt_bboxes'][:, 1::2] *= h - - gt_bboxes_ignore = ann_info.get('bboxes_ignore', None) - if gt_bboxes_ignore is not None: - results['gt_bboxes_ignore'] = gt_bboxes_ignore.copy() - results['bbox_fields'].append('gt_bboxes_ignore') - results['bbox_fields'].append('gt_bboxes') - - gt_is_group_ofs = ann_info.get('gt_is_group_ofs', None) - if gt_is_group_ofs is not None: - results['gt_is_group_ofs'] = gt_is_group_ofs.copy() - - return results - - def _load_labels(self, results): - """Private function to load label annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded label annotations. - """ - - results['gt_labels'] = results['ann_info']['labels'].copy() - return results - - def _poly2mask(self, mask_ann, img_h, img_w): - """Private function to convert masks represented with polygon to - bitmaps. - - Args: - mask_ann (list | dict): Polygon mask annotation input. - img_h (int): The height of output mask. - img_w (int): The width of output mask. - - Returns: - numpy.ndarray: The decode bitmap mask of shape (img_h, img_w). - """ - - if isinstance(mask_ann, list): - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = maskUtils.frPyObjects(mask_ann, img_h, img_w) - rle = maskUtils.merge(rles) - elif isinstance(mask_ann['counts'], list): - # uncompressed RLE - rle = maskUtils.frPyObjects(mask_ann, img_h, img_w) - else: - # rle - rle = mask_ann - mask = maskUtils.decode(rle) - return mask - - def process_polygons(self, polygons): - """Convert polygons to list of ndarray and filter invalid polygons. - - Args: - polygons (list[list]): Polygons of one instance. - - Returns: - list[numpy.ndarray]: Processed polygons. - """ - - polygons = [np.array(p) for p in polygons] - valid_polygons = [] - for polygon in polygons: - if len(polygon) % 2 == 0 and len(polygon) >= 6: - valid_polygons.append(polygon) - return valid_polygons - - def _load_masks(self, results): - """Private function to load mask annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded mask annotations. - If ``self.poly2mask`` is set ``True``, `gt_mask` will contain - :obj:`PolygonMasks`. 
Otherwise, :obj:`BitmapMasks` is used. - """ - - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = results['ann_info']['masks'] - if self.poly2mask: - gt_masks = BitmapMasks( - [self._poly2mask(mask, h, w) for mask in gt_masks], h, w) - else: - gt_masks = PolygonMasks( - [self.process_polygons(polygons) for polygons in gt_masks], h, - w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - return results - - def _load_semantic_seg(self, results): - """Private function to load semantic segmentation annotations. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - img_bytes = self.file_client.get(filename) - results['gt_semantic_seg'] = mmcv.imfrombytes( - img_bytes, flag='unchanged').squeeze() - results['seg_fields'].append('gt_semantic_seg') - return results - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box, label, mask and - semantic segmentation annotations. - """ - - if self.with_bbox: - results = self._load_bboxes(results) - if results is None: - return None - if self.with_label: - results = self._load_labels(results) - if self.with_mask: - results = self._load_masks(results) - if self.with_seg: - results = self._load_semantic_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(with_bbox={self.with_bbox}, ' - repr_str += f'with_label={self.with_label}, ' - repr_str += f'with_mask={self.with_mask}, ' - repr_str += f'with_seg={self.with_seg}, ' - repr_str += f'poly2mask={self.poly2mask}, ' - repr_str += f'poly2mask={self.file_client_args})' - return repr_str - - -@PIPELINES.register_module() -class LoadPanopticAnnotations(LoadAnnotations): - """Load multiple types of panoptic annotations. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. - with_mask (bool): Whether to parse and load the mask annotation. - Default: True. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: True. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=True, - with_seg=True, - file_client_args=dict(backend='disk')): - if rgb2id is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - super(LoadPanopticAnnotations, self).__init__( - with_bbox=with_bbox, - with_label=with_label, - with_mask=with_mask, - with_seg=with_seg, - poly2mask=True, - denorm_bbox=False, - file_client_args=file_client_args) - - def _load_masks_and_semantic_segs(self, results): - """Private function to load mask and semantic segmentation annotations. - - In gt_semantic_seg, the foreground label is from `0` to - `num_things - 1`, the background label is from `num_things` to - `num_things + num_stuff - 1`, 255 means the ignored label (`VOID`). 
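Two of the LoadAnnotations steps above are easy to check in isolation: the `denorm_bbox` branch scales relative boxes to pixels, and `_poly2mask` decodes COCO polygons with pycocotools. The snippet below reproduces both on toy data; shapes and coordinates are made up for illustration.

```python
import numpy as np
import pycocotools.mask as maskUtils

# denorm_bbox: relative [x1, y1, x2, y2] -> absolute pixel coordinates.
h, w = 480, 640
gt_bboxes = np.array([[0.1, 0.2, 0.5, 0.8]], dtype=np.float32)
gt_bboxes[:, 0::2] *= w   # x coordinates
gt_bboxes[:, 1::2] *= h   # y coordinates
# -> [[ 64.,  96., 320., 384.]]

# _poly2mask: COCO polygon -> RLE -> binary bitmap of shape (img_h, img_w).
polygon = [[0.0, 0.0, 3.0, 0.0, 3.0, 3.0, 0.0, 3.0]]   # one square part
rles = maskUtils.frPyObjects(polygon, 4, 4)
mask = maskUtils.decode(maskUtils.merge(rles))          # uint8 array, shape (4, 4)
```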
- - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded mask and semantic segmentation - annotations. `BitmapMasks` is used for mask annotations. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - img_bytes = self.file_client.get(filename) - pan_png = mmcv.imfrombytes( - img_bytes, flag='color', channel_order='rgb').squeeze() - pan_png = rgb2id(pan_png) - - gt_masks = [] - gt_seg = np.zeros_like(pan_png) + 255 # 255 as ignore - - for mask_info in results['ann_info']['masks']: - mask = (pan_png == mask_info['id']) - gt_seg = np.where(mask, mask_info['category'], gt_seg) - - # The legal thing masks - if mask_info.get('is_thing'): - gt_masks.append(mask.astype(np.uint8)) - - if self.with_mask: - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = BitmapMasks(gt_masks, h, w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - - if self.with_seg: - results['gt_semantic_seg'] = gt_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __call__(self, results): - """Call function to load multiple types panoptic annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box, label, mask and - semantic segmentation annotations. - """ - - if self.with_bbox: - results = self._load_bboxes(results) - if results is None: - return None - if self.with_label: - results = self._load_labels(results) - if self.with_mask or self.with_seg: - # The tasks completed by '_load_masks' and '_load_semantic_segs' - # in LoadAnnotations are merged to one function. - results = self._load_masks_and_semantic_segs(results) - - return results - - -@PIPELINES.register_module() -class LoadProposals: - """Load proposal pipeline. - - Required key is "proposals". Updated keys are "proposals", "bbox_fields". - - Args: - num_max_proposals (int, optional): Maximum number of proposals to load. - If not specified, all proposals will be loaded. - """ - - def __init__(self, num_max_proposals=None): - self.num_max_proposals = num_max_proposals - - def __call__(self, results): - """Call function to load proposals from file. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded proposal annotations. - """ - - proposals = results['proposals'] - if proposals.shape[1] not in (4, 5): - raise AssertionError( - 'proposals should have shapes (n, 4) or (n, 5), ' - f'but found {proposals.shape}') - proposals = proposals[:, :4] - - if self.num_max_proposals is not None: - proposals = proposals[:self.num_max_proposals] - - if len(proposals) == 0: - proposals = np.array([[0, 0, 0, 0]], dtype=np.float32) - results['proposals'] = proposals - results['bbox_fields'].append('proposals') - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(num_max_proposals={self.num_max_proposals})' - - -@PIPELINES.register_module() -class FilterAnnotations: - """Filter invalid annotations. - - Args: - min_gt_bbox_wh (tuple[int]): Minimum width and height of ground truth - boxes. - keep_empty (bool): Whether to return None when it - becomes an empty bbox after filtering. 
Default: True - """ - - def __init__(self, min_gt_bbox_wh, keep_empty=True): - # TODO: add more filter options - self.min_gt_bbox_wh = min_gt_bbox_wh - self.keep_empty = keep_empty - - def __call__(self, results): - assert 'gt_bboxes' in results - gt_bboxes = results['gt_bboxes'] - if gt_bboxes.shape[0] == 0: - return results - w = gt_bboxes[:, 2] - gt_bboxes[:, 0] - h = gt_bboxes[:, 3] - gt_bboxes[:, 1] - keep = (w > self.min_gt_bbox_wh[0]) & (h > self.min_gt_bbox_wh[1]) - if not keep.any(): - if self.keep_empty: - return None - else: - return results - else: - keys = ('gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg') - for key in keys: - if key in results: - results[key] = results[key][keep] - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(min_gt_bbox_wh={self.min_gt_bbox_wh},' \ - f'always_keep={self.always_keep})' diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/test_time_aug.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/test_time_aug.py deleted file mode 100644 index 5f1ab7b7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug: - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. code-block:: - - img_scale=[(1333, 400), (1333, 800)], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple] | None): Images scales for resizing. - scale_factor (float | list[float] | None): Scale factors for resizing. - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal", "vertical" and "diagonal". If - flip_direction is a list, multiple flip augmentations will be - applied. It has no effect when flip == False. Default: - "horizontal". 
- """ - - def __init__(self, - transforms, - img_scale=None, - scale_factor=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - assert (img_scale is None) ^ (scale_factor is None), ( - 'Must have but only one variable can be set') - if img_scale is not None: - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.scale_key = 'scale' - assert mmcv.is_list_of(self.img_scale, tuple) - else: - self.img_scale = scale_factor if isinstance( - scale_factor, list) else [scale_factor] - self.scale_key = 'scale_factor' - - self.flip = flip - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - flip_args = [(False, None)] - if self.flip: - flip_args += [(True, direction) - for direction in self.flip_direction] - for scale in self.img_scale: - for flip, direction in flip_args: - _results = results.copy() - _results[self.scale_key] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/transforms.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/transforms.py deleted file mode 100644 index 0a1b3891..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/pipelines/transforms.py +++ /dev/null @@ -1,2919 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import inspect -import math -import warnings - -import cv2 -import mmcv -import numpy as np -from numpy import random - -from mmdet.core import BitmapMasks, PolygonMasks, find_inside_bboxes -from mmdet.core.evaluation.bbox_overlaps import bbox_overlaps -from mmdet.utils import log_img_scale -from ..builder import PIPELINES - -try: - from imagecorruptions import corrupt -except ImportError: - corrupt = None - -try: - import albumentations - from albumentations import Compose -except ImportError: - albumentations = None - Compose = None - - -@PIPELINES.register_module() -class Resize: - """Resize images & bbox & mask. - - This transform resizes the input image to some scale. Bboxes and masks are - then resized with the same scale factor. If the input dict contains the key - "scale", then the scale in the input dict is used, otherwise the specified - scale in the init method is used. 
If the input dict contains the key - "scale_factor" (if MultiScaleFlipAug does not give img_scale but - scale_factor), the actual scale will be computed by image shape and - scale_factor. - - `img_scale` can either be a tuple (single-scale) or a list of tuple - (multi-scale). There are 3 multiscale modes: - - - ``ratio_range is not None``: randomly sample a ratio from the ratio \ - range and multiply it with the image scale. - - ``ratio_range is None`` and ``multiscale_mode == "range"``: randomly \ - sample a scale from the multiscale range. - - ``ratio_range is None`` and ``multiscale_mode == "value"``: randomly \ - sample a scale from multiple scales. - - Args: - img_scale (tuple or list[tuple]): Images scales for resizing. - multiscale_mode (str): Either "range" or "value". - ratio_range (tuple[float]): (min_ratio, max_ratio) - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - backend (str): Image resize backend, choices are 'cv2' and 'pillow'. - These two backends generates slightly different results. Defaults - to 'cv2'. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - override (bool, optional): Whether to override `scale` and - `scale_factor` so as to call resize twice. Default False. If True, - after the first resizing, the existed `scale` and `scale_factor` - will be ignored so the second resizing can be allowed. - This option is a work-around for multiple times of resize in DETR. - Defaults to False. - """ - - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=True, - bbox_clip_border=True, - backend='cv2', - interpolation='bilinear', - override=False): - if img_scale is None: - self.img_scale = None - else: - if isinstance(img_scale, list): - self.img_scale = img_scale - else: - self.img_scale = [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) - - if ratio_range is not None: - # mode 1: given a scale and a range of image ratio - assert len(self.img_scale) == 1 - else: - # mode 2: given multiple scales or a range of scales - assert multiscale_mode in ['value', 'range'] - - self.backend = backend - self.multiscale_mode = multiscale_mode - self.ratio_range = ratio_range - self.keep_ratio = keep_ratio - # TODO: refactor the override option in Resize - self.interpolation = interpolation - self.override = override - self.bbox_clip_border = bbox_clip_border - - @staticmethod - def random_select(img_scales): - """Randomly select an img_scale from given candidates. - - Args: - img_scales (list[tuple]): Images scales for selection. - - Returns: - (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, \ - where ``img_scale`` is the selected image scale and \ - ``scale_idx`` is the selected index in the given candidates. - """ - - assert mmcv.is_list_of(img_scales, tuple) - scale_idx = np.random.randint(len(img_scales)) - img_scale = img_scales[scale_idx] - return img_scale, scale_idx - - @staticmethod - def random_sample(img_scales): - """Randomly sample an img_scale when ``multiscale_mode=='range'``. - - Args: - img_scales (list[tuple]): Images scale range for sampling. 
- There must be two tuples in img_scales, which specify the lower - and upper bound of image scales. - - Returns: - (tuple, None): Returns a tuple ``(img_scale, None)``, where \ - ``img_scale`` is sampled scale and None is just a placeholder \ - to be consistent with :func:`random_select`. - """ - - assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 - img_scale_long = [max(s) for s in img_scales] - img_scale_short = [min(s) for s in img_scales] - long_edge = np.random.randint( - min(img_scale_long), - max(img_scale_long) + 1) - short_edge = np.random.randint( - min(img_scale_short), - max(img_scale_short) + 1) - img_scale = (long_edge, short_edge) - return img_scale, None - - @staticmethod - def random_sample_ratio(img_scale, ratio_range): - """Randomly sample an img_scale when ``ratio_range`` is specified. - - A ratio will be randomly sampled from the range specified by - ``ratio_range``. Then it would be multiplied with ``img_scale`` to - generate sampled scale. - - Args: - img_scale (tuple): Images scale base to multiply with ratio. - ratio_range (tuple[float]): The minimum and maximum ratio to scale - the ``img_scale``. - - Returns: - (tuple, None): Returns a tuple ``(scale, None)``, where \ - ``scale`` is sampled ratio multiplied with ``img_scale`` and \ - None is just a placeholder to be consistent with \ - :func:`random_select`. - """ - - assert isinstance(img_scale, tuple) and len(img_scale) == 2 - min_ratio, max_ratio = ratio_range - assert min_ratio <= max_ratio - ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio - scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) - return scale, None - - def _random_scale(self, results): - """Randomly sample an img_scale according to ``ratio_range`` and - ``multiscale_mode``. - - If ``ratio_range`` is specified, a ratio will be sampled and be - multiplied with ``img_scale``. - If multiple scales are specified by ``img_scale``, a scale will be - sampled according to ``multiscale_mode``. - Otherwise, single scale will be used. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: Two new keys 'scale` and 'scale_idx` are added into \ - ``results``, which would be used by subsequent pipelines. 
- """ - - if self.ratio_range is not None: - scale, scale_idx = self.random_sample_ratio( - self.img_scale[0], self.ratio_range) - elif len(self.img_scale) == 1: - scale, scale_idx = self.img_scale[0], 0 - elif self.multiscale_mode == 'range': - scale, scale_idx = self.random_sample(self.img_scale) - elif self.multiscale_mode == 'value': - scale, scale_idx = self.random_select(self.img_scale) - else: - raise NotImplementedError - - results['scale'] = scale - results['scale_idx'] = scale_idx - - def _resize_img(self, results): - """Resize images with ``results['scale']``.""" - for key in results.get('img_fields', ['img']): - if self.keep_ratio: - img, scale_factor = mmcv.imrescale( - results[key], - results['scale'], - return_scale=True, - interpolation=self.interpolation, - backend=self.backend) - # the w_scale and h_scale has minor difference - # a real fix should be done in the mmcv.imrescale in the future - new_h, new_w = img.shape[:2] - h, w = results[key].shape[:2] - w_scale = new_w / w - h_scale = new_h / h - else: - img, w_scale, h_scale = mmcv.imresize( - results[key], - results['scale'], - return_scale=True, - interpolation=self.interpolation, - backend=self.backend) - results[key] = img - - scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], - dtype=np.float32) - results['img_shape'] = img.shape - # in case that there is no padding - results['pad_shape'] = img.shape - results['scale_factor'] = scale_factor - results['keep_ratio'] = self.keep_ratio - - def _resize_bboxes(self, results): - """Resize bounding boxes with ``results['scale_factor']``.""" - for key in results.get('bbox_fields', []): - bboxes = results[key] * results['scale_factor'] - if self.bbox_clip_border: - img_shape = results['img_shape'] - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - results[key] = bboxes - - def _resize_masks(self, results): - """Resize masks with ``results['scale']``""" - for key in results.get('mask_fields', []): - if results[key] is None: - continue - if self.keep_ratio: - results[key] = results[key].rescale(results['scale']) - else: - results[key] = results[key].resize(results['img_shape'][:2]) - - def _resize_seg(self, results): - """Resize semantic segmentation map with ``results['scale']``.""" - for key in results.get('seg_fields', []): - if self.keep_ratio: - gt_seg = mmcv.imrescale( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - else: - gt_seg = mmcv.imresize( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - results[key] = gt_seg - - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \ - 'keep_ratio' keys are added into result dict. 
- """ - - if 'scale' not in results: - if 'scale_factor' in results: - img_shape = results['img'].shape[:2] - scale_factor = results['scale_factor'] - assert isinstance(scale_factor, float) - results['scale'] = tuple( - [int(x * scale_factor) for x in img_shape][::-1]) - else: - self._random_scale(results) - else: - if not self.override: - assert 'scale_factor' not in results, ( - 'scale and scale_factor cannot be both set.') - else: - results.pop('scale') - if 'scale_factor' in results: - results.pop('scale_factor') - self._random_scale(results) - - self._resize_img(results) - self._resize_bboxes(results) - self._resize_masks(results) - self._resize_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(img_scale={self.img_scale}, ' - repr_str += f'multiscale_mode={self.multiscale_mode}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'keep_ratio={self.keep_ratio}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class RandomFlip: - """Flip the image & bbox & mask. - - If the input dict contains the key "flip", then the flag will be used, - otherwise it will be randomly decided by a ratio specified in the init - method. - - When random flip is enabled, ``flip_ratio``/``direction`` can either be a - float/string or tuple of float/string. There are 3 flip modes: - - - ``flip_ratio`` is float, ``direction`` is string: the image will be - ``direction``ly flipped with probability of ``flip_ratio`` . - E.g., ``flip_ratio=0.5``, ``direction='horizontal'``, - then image will be horizontally flipped with probability of 0.5. - - ``flip_ratio`` is float, ``direction`` is list of string: the image will - be ``direction[i]``ly flipped with probability of - ``flip_ratio/len(direction)``. - E.g., ``flip_ratio=0.5``, ``direction=['horizontal', 'vertical']``, - then image will be horizontally flipped with probability of 0.25, - vertically with probability of 0.25. - - ``flip_ratio`` is list of float, ``direction`` is list of string: - given ``len(flip_ratio) == len(direction)``, the image will - be ``direction[i]``ly flipped with probability of ``flip_ratio[i]``. - E.g., ``flip_ratio=[0.3, 0.5]``, ``direction=['horizontal', - 'vertical']``, then image will be horizontally flipped with probability - of 0.3, vertically with probability of 0.5. - - Args: - flip_ratio (float | list[float], optional): The flipping probability. - Default: None. - direction(str | list[str], optional): The flipping direction. Options - are 'horizontal', 'vertical', 'diagonal'. Default: 'horizontal'. - If input is a list, the length must equal ``flip_ratio``. Each - element in ``flip_ratio`` indicates the flip probability of - corresponding direction. 
- """ - - def __init__(self, flip_ratio=None, direction='horizontal'): - if isinstance(flip_ratio, list): - assert mmcv.is_list_of(flip_ratio, float) - assert 0 <= sum(flip_ratio) <= 1 - elif isinstance(flip_ratio, float): - assert 0 <= flip_ratio <= 1 - elif flip_ratio is None: - pass - else: - raise ValueError('flip_ratios must be None, float, ' - 'or list of float') - self.flip_ratio = flip_ratio - - valid_directions = ['horizontal', 'vertical', 'diagonal'] - if isinstance(direction, str): - assert direction in valid_directions - elif isinstance(direction, list): - assert mmcv.is_list_of(direction, str) - assert set(direction).issubset(set(valid_directions)) - else: - raise ValueError('direction must be either str or list of str') - self.direction = direction - - if isinstance(flip_ratio, list): - assert len(self.flip_ratio) == len(self.direction) - - def bbox_flip(self, bboxes, img_shape, direction): - """Flip bboxes horizontally. - - Args: - bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k) - img_shape (tuple[int]): Image shape (height, width) - direction (str): Flip direction. Options are 'horizontal', - 'vertical'. - - Returns: - numpy.ndarray: Flipped bounding boxes. - """ - - assert bboxes.shape[-1] % 4 == 0 - flipped = bboxes.copy() - if direction == 'horizontal': - w = img_shape[1] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - elif direction == 'vertical': - h = img_shape[0] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - elif direction == 'diagonal': - w = img_shape[1] - h = img_shape[0] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - else: - raise ValueError(f"Invalid flipping direction '{direction}'") - return flipped - - def __call__(self, results): - """Call function to flip bounding boxes, masks, semantic segmentation - maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Flipped results, 'flip', 'flip_direction' keys are added \ - into result dict. 
- """ - - if 'flip' not in results: - if isinstance(self.direction, list): - # None means non-flip - direction_list = self.direction + [None] - else: - # None means non-flip - direction_list = [self.direction, None] - - if isinstance(self.flip_ratio, list): - non_flip_ratio = 1 - sum(self.flip_ratio) - flip_ratio_list = self.flip_ratio + [non_flip_ratio] - else: - non_flip_ratio = 1 - self.flip_ratio - # exclude non-flip - single_ratio = self.flip_ratio / (len(direction_list) - 1) - flip_ratio_list = [single_ratio] * (len(direction_list) - - 1) + [non_flip_ratio] - - cur_dir = np.random.choice(direction_list, p=flip_ratio_list) - - results['flip'] = cur_dir is not None - if 'flip_direction' not in results: - results['flip_direction'] = cur_dir - if results['flip']: - # flip image - for key in results.get('img_fields', ['img']): - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']) - # flip bboxes - for key in results.get('bbox_fields', []): - results[key] = self.bbox_flip(results[key], - results['img_shape'], - results['flip_direction']) - # flip masks - for key in results.get('mask_fields', []): - results[key] = results[key].flip(results['flip_direction']) - - # flip segs - for key in results.get('seg_fields', []): - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' - - -@PIPELINES.register_module() -class RandomShift: - """Shift the image and box given shift pixels and probability. - - Args: - shift_ratio (float): Probability of shifts. Default 0.5. - max_shift_px (int): The max pixels for shifting. Default 32. - filter_thr_px (int): The width and height threshold for filtering. - The bbox and the rest of the targets below the width and - height threshold will be filtered. Default 1. - """ - - def __init__(self, shift_ratio=0.5, max_shift_px=32, filter_thr_px=1): - assert 0 <= shift_ratio <= 1 - assert max_shift_px >= 0 - self.shift_ratio = shift_ratio - self.max_shift_px = max_shift_px - self.filter_thr_px = int(filter_thr_px) - # The key correspondence from bboxes to labels. - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - - def __call__(self, results): - """Call function to random shift images, bounding boxes. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Shift results. - """ - if random.random() < self.shift_ratio: - img_shape = results['img'].shape[:2] - - random_shift_x = random.randint(-self.max_shift_px, - self.max_shift_px) - random_shift_y = random.randint(-self.max_shift_px, - self.max_shift_px) - new_x = max(0, random_shift_x) - ori_x = max(0, -random_shift_x) - new_y = max(0, random_shift_y) - ori_y = max(0, -random_shift_y) - - # TODO: support mask and semantic segmentation maps. - for key in results.get('bbox_fields', []): - bboxes = results[key].copy() - bboxes[..., 0::2] += random_shift_x - bboxes[..., 1::2] += random_shift_y - - # clip border - bboxes[..., 0::2] = np.clip(bboxes[..., 0::2], 0, img_shape[1]) - bboxes[..., 1::2] = np.clip(bboxes[..., 1::2], 0, img_shape[0]) - - # remove invalid bboxes - bbox_w = bboxes[..., 2] - bboxes[..., 0] - bbox_h = bboxes[..., 3] - bboxes[..., 1] - valid_inds = (bbox_w > self.filter_thr_px) & ( - bbox_h > self.filter_thr_px) - # If the shift does not contain any gt-bbox area, skip this - # image. 
- if key == 'gt_bboxes' and not valid_inds.any(): - return results - bboxes = bboxes[valid_inds] - results[key] = bboxes - - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - - for key in results.get('img_fields', ['img']): - img = results[key] - new_img = np.zeros_like(img) - img_h, img_w = img.shape[:2] - new_h = img_h - np.abs(random_shift_y) - new_w = img_w - np.abs(random_shift_x) - new_img[new_y:new_y + new_h, new_x:new_x + new_w] \ - = img[ori_y:ori_y + new_h, ori_x:ori_x + new_w] - results[key] = new_img - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(max_shift_px={self.max_shift_px}, ' - return repr_str - - -@PIPELINES.register_module() -class Pad: - """Pad the image & masks & segmentation map. - - There are two padding modes: (1) pad to a fixed size and (2) pad to the - minimum size that is divisible by some number. - Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor", - - Args: - size (tuple, optional): Fixed padding size. - size_divisor (int, optional): The divisor of padded size. - pad_to_square (bool): Whether to pad the image into a square. - Currently only used for YOLOX. Default: False. - pad_val (dict, optional): A dict for padding value, the default - value is `dict(img=0, masks=0, seg=255)`. - """ - - def __init__(self, - size=None, - size_divisor=None, - pad_to_square=False, - pad_val=dict(img=0, masks=0, seg=255)): - self.size = size - self.size_divisor = size_divisor - if isinstance(pad_val, float) or isinstance(pad_val, int): - warnings.warn( - 'pad_val of float type is deprecated now, ' - f'please use pad_val=dict(img={pad_val}, ' - f'masks={pad_val}, seg=255) instead.', DeprecationWarning) - pad_val = dict(img=pad_val, masks=pad_val, seg=255) - assert isinstance(pad_val, dict) - self.pad_val = pad_val - self.pad_to_square = pad_to_square - - if pad_to_square: - assert size is None and size_divisor is None, \ - 'The size and size_divisor must be None ' \ - 'when pad2square is True' - else: - assert size is not None or size_divisor is not None, \ - 'only one of size and size_divisor should be valid' - assert size is None or size_divisor is None - - def _pad_img(self, results): - """Pad images according to ``self.size``.""" - pad_val = self.pad_val.get('img', 0) - for key in results.get('img_fields', ['img']): - if self.pad_to_square: - max_size = max(results[key].shape[:2]) - self.size = (max_size, max_size) - if self.size is not None: - padded_img = mmcv.impad( - results[key], shape=self.size, pad_val=pad_val) - elif self.size_divisor is not None: - padded_img = mmcv.impad_to_multiple( - results[key], self.size_divisor, pad_val=pad_val) - results[key] = padded_img - results['pad_shape'] = padded_img.shape - results['pad_fixed_size'] = self.size - results['pad_size_divisor'] = self.size_divisor - - def _pad_masks(self, results): - """Pad masks according to ``results['pad_shape']``.""" - pad_shape = results['pad_shape'][:2] - pad_val = self.pad_val.get('masks', 0) - for key in results.get('mask_fields', []): - results[key] = results[key].pad(pad_shape, pad_val=pad_val) - - def _pad_seg(self, results): - """Pad semantic segmentation map according to - ``results['pad_shape']``.""" - pad_val = self.pad_val.get('seg', 255) - for key in results.get('seg_fields', []): - results[key] = mmcv.impad( - results[key], shape=results['pad_shape'][:2], pad_val=pad_val) - - def __call__(self, 
results): - """Call function to pad images, masks, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Updated result dict. - """ - self._pad_img(results) - self._pad_masks(results) - self._pad_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(size={self.size}, ' - repr_str += f'size_divisor={self.size_divisor}, ' - repr_str += f'pad_to_square={self.pad_to_square}, ' - repr_str += f'pad_val={self.pad_val})' - return repr_str - - -@PIPELINES.register_module() -class Normalize: - """Normalize the image. - - Added key is "img_norm_cfg". - - Args: - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB, - default is true. - """ - - def __init__(self, mean, std, to_rgb=True): - self.mean = np.array(mean, dtype=np.float32) - self.std = np.array(std, dtype=np.float32) - self.to_rgb = to_rgb - - def __call__(self, results): - """Call function to normalize images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Normalized results, 'img_norm_cfg' key is added into - result dict. - """ - for key in results.get('img_fields', ['img']): - results[key] = mmcv.imnormalize(results[key], self.mean, self.std, - self.to_rgb) - results['img_norm_cfg'] = dict( - mean=self.mean, std=self.std, to_rgb=self.to_rgb) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, std={self.std}, to_rgb={self.to_rgb})' - return repr_str - - -@PIPELINES.register_module() -class RandomCrop: - """Random crop the image & bboxes & masks. - - The absolute `crop_size` is sampled based on `crop_type` and `image_size`, - then the cropped results are generated. - - Args: - crop_size (tuple): The relative ratio or absolute pixels of - height and width. - crop_type (str, optional): one of "relative_range", "relative", - "absolute", "absolute_range". "relative" randomly crops - (h * crop_size[0], w * crop_size[1]) part from an input of size - (h, w). "relative_range" uniformly samples relative crop size from - range [crop_size[0], 1] and [crop_size[1], 1] for height and width - respectively. "absolute" crops from an input with absolute size - (crop_size[0], crop_size[1]). "absolute_range" uniformly samples - crop_h in range [crop_size[0], min(h, crop_size[1])] and crop_w - in range [crop_size[0], min(w, crop_size[1])]. Default "absolute". - allow_negative_crop (bool, optional): Whether to allow a crop that does - not contain any bbox area. Default False. - recompute_bbox (bool, optional): Whether to re-compute the boxes based - on cropped instance masks. Default False. - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - - Note: - - If the image is smaller than the absolute crop size, return the - original image. - - The keys for bboxes, labels and masks must be aligned. That is, - `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and - `gt_bboxes_ignore` corresponds to `gt_labels_ignore` and - `gt_masks_ignore`. - - If the crop does not contain any gt-bbox region and - `allow_negative_crop` is set to False, skip this image. 
- """ - - def __init__(self, - crop_size, - crop_type='absolute', - allow_negative_crop=False, - recompute_bbox=False, - bbox_clip_border=True): - if crop_type not in [ - 'relative_range', 'relative', 'absolute', 'absolute_range' - ]: - raise ValueError(f'Invalid crop_type {crop_type}.') - if crop_type in ['absolute', 'absolute_range']: - assert crop_size[0] > 0 and crop_size[1] > 0 - assert isinstance(crop_size[0], int) and isinstance( - crop_size[1], int) - else: - assert 0 < crop_size[0] <= 1 and 0 < crop_size[1] <= 1 - self.crop_size = crop_size - self.crop_type = crop_type - self.allow_negative_crop = allow_negative_crop - self.bbox_clip_border = bbox_clip_border - self.recompute_bbox = recompute_bbox - # The key correspondence from bboxes to labels and masks. - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - self.bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - - def _crop_data(self, results, crop_size, allow_negative_crop): - """Function to randomly crop images, bounding boxes, masks, semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - crop_size (tuple): Expected absolute size after cropping, (h, w). - allow_negative_crop (bool): Whether to allow a crop that does not - contain any bbox area. Default to False. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - assert crop_size[0] > 0 and crop_size[1] > 0 - for key in results.get('img_fields', ['img']): - img = results[key] - margin_h = max(img.shape[0] - crop_size[0], 0) - margin_w = max(img.shape[1] - crop_size[1], 0) - offset_h = np.random.randint(0, margin_h + 1) - offset_w = np.random.randint(0, margin_w + 1) - crop_y1, crop_y2 = offset_h, offset_h + crop_size[0] - crop_x1, crop_x2 = offset_w, offset_w + crop_size[1] - - # crop the image - img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] - img_shape = img.shape - results[key] = img - results['img_shape'] = img_shape - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - # e.g. gt_bboxes and gt_bboxes_ignore - bbox_offset = np.array([offset_w, offset_h, offset_w, offset_h], - dtype=np.float32) - bboxes = results[key] - bbox_offset - if self.bbox_clip_border: - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - valid_inds = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - # If the crop does not contain any gt-bbox area and - # allow_negative_crop is False, skip this image. - if (key == 'gt_bboxes' and not valid_inds.any() - and not allow_negative_crop): - return None - results[key] = bboxes[valid_inds, :] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - - # mask fields, e.g. 
gt_masks and gt_masks_ignore - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - valid_inds.nonzero()[0]].crop( - np.asarray([crop_x1, crop_y1, crop_x2, crop_y2])) - if self.recompute_bbox: - results[key] = results[mask_key].get_bboxes() - - # crop semantic seg - for key in results.get('seg_fields', []): - results[key] = results[key][crop_y1:crop_y2, crop_x1:crop_x2] - - return results - - def _get_crop_size(self, image_size): - """Randomly generates the absolute crop size based on `crop_type` and - `image_size`. - - Args: - image_size (tuple): (h, w). - - Returns: - crop_size (tuple): (crop_h, crop_w) in absolute pixels. - """ - h, w = image_size - if self.crop_type == 'absolute': - return (min(self.crop_size[0], h), min(self.crop_size[1], w)) - elif self.crop_type == 'absolute_range': - assert self.crop_size[0] <= self.crop_size[1] - crop_h = np.random.randint( - min(h, self.crop_size[0]), - min(h, self.crop_size[1]) + 1) - crop_w = np.random.randint( - min(w, self.crop_size[0]), - min(w, self.crop_size[1]) + 1) - return crop_h, crop_w - elif self.crop_type == 'relative': - crop_h, crop_w = self.crop_size - return int(h * crop_h + 0.5), int(w * crop_w + 0.5) - elif self.crop_type == 'relative_range': - crop_size = np.asarray(self.crop_size, dtype=np.float32) - crop_h, crop_w = crop_size + np.random.rand(2) * (1 - crop_size) - return int(h * crop_h + 0.5), int(w * crop_w + 0.5) - - def __call__(self, results): - """Call function to randomly crop images, bounding boxes, masks, - semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - image_size = results['img'].shape[:2] - crop_size = self._get_crop_size(image_size) - results = self._crop_data(results, crop_size, self.allow_negative_crop) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(crop_size={self.crop_size}, ' - repr_str += f'crop_type={self.crop_type}, ' - repr_str += f'allow_negative_crop={self.allow_negative_crop}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class SegRescale: - """Rescale semantic segmentation maps. - - Args: - scale_factor (float): The scale factor of the final output. - backend (str): Image rescale backend, choices are 'cv2' and 'pillow'. - These two backends generates slightly different results. Defaults - to 'cv2'. - """ - - def __init__(self, scale_factor=1, backend='cv2'): - self.scale_factor = scale_factor - self.backend = backend - - def __call__(self, results): - """Call function to scale the semantic segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with semantic segmentation map scaled. - """ - - for key in results.get('seg_fields', []): - if self.scale_factor != 1: - results[key] = mmcv.imrescale( - results[key], - self.scale_factor, - interpolation='nearest', - backend=self.backend) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(scale_factor={self.scale_factor})' - - -@PIPELINES.register_module() -class PhotoMetricDistortion: - """Apply photometric distortion to image sequentially, every transformation - is applied with a probability of 0.5. The position of random contrast is in - second or second to last. - - 1. random brightness - 2. random contrast (mode 0) - 3. 
convert color from BGR to HSV - 4. random saturation - 5. random hue - 6. convert color from HSV to BGR - 7. random contrast (mode 1) - 8. randomly swap channels - - Args: - brightness_delta (int): delta of brightness. - contrast_range (tuple): range of contrast. - saturation_range (tuple): range of saturation. - hue_delta (int): delta of hue. - """ - - def __init__(self, - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18): - self.brightness_delta = brightness_delta - self.contrast_lower, self.contrast_upper = contrast_range - self.saturation_lower, self.saturation_upper = saturation_range - self.hue_delta = hue_delta - - def __call__(self, results): - """Call function to perform photometric distortion on images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images distorted. - """ - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - img = img.astype(np.float32) - # random brightness - if random.randint(2): - delta = random.uniform(-self.brightness_delta, - self.brightness_delta) - img += delta - - # mode == 0 --> do random contrast first - # mode == 1 --> do random contrast last - mode = random.randint(2) - if mode == 1: - if random.randint(2): - alpha = random.uniform(self.contrast_lower, - self.contrast_upper) - img *= alpha - - # convert color from BGR to HSV - img = mmcv.bgr2hsv(img) - - # random saturation - if random.randint(2): - img[..., 1] *= random.uniform(self.saturation_lower, - self.saturation_upper) - - # random hue - if random.randint(2): - img[..., 0] += random.uniform(-self.hue_delta, self.hue_delta) - img[..., 0][img[..., 0] > 360] -= 360 - img[..., 0][img[..., 0] < 0] += 360 - - # convert color from HSV to BGR - img = mmcv.hsv2bgr(img) - - # random contrast - if mode == 0: - if random.randint(2): - alpha = random.uniform(self.contrast_lower, - self.contrast_upper) - img *= alpha - - # randomly swap channels - if random.randint(2): - img = img[..., random.permutation(3)] - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(\nbrightness_delta={self.brightness_delta},\n' - repr_str += 'contrast_range=' - repr_str += f'{(self.contrast_lower, self.contrast_upper)},\n' - repr_str += 'saturation_range=' - repr_str += f'{(self.saturation_lower, self.saturation_upper)},\n' - repr_str += f'hue_delta={self.hue_delta})' - return repr_str - - -@PIPELINES.register_module() -class Expand: - """Random expand the image & bboxes. - - Randomly place the original image on a canvas of 'ratio' x original image - size filled with mean values. The ratio is in the range of ratio_range. - - Args: - mean (tuple): mean value of dataset. - to_rgb (bool): if need to convert the order of mean to align with RGB. - ratio_range (tuple): range of expand ratio. - prob (float): probability of applying this transformation - """ - - def __init__(self, - mean=(0, 0, 0), - to_rgb=True, - ratio_range=(1, 4), - seg_ignore_label=None, - prob=0.5): - self.to_rgb = to_rgb - self.ratio_range = ratio_range - if to_rgb: - self.mean = mean[::-1] - else: - self.mean = mean - self.min_ratio, self.max_ratio = ratio_range - self.seg_ignore_label = seg_ignore_label - self.prob = prob - - def __call__(self, results): - """Call function to expand images, bounding boxes. - - Args: - results (dict): Result dict from loading pipeline. 
- - Returns: - dict: Result dict with images, bounding boxes expanded - """ - - if random.uniform(0, 1) > self.prob: - return results - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - - h, w, c = img.shape - ratio = random.uniform(self.min_ratio, self.max_ratio) - # speedup expand when meets large image - if np.all(self.mean == self.mean[0]): - expand_img = np.empty((int(h * ratio), int(w * ratio), c), - img.dtype) - expand_img.fill(self.mean[0]) - else: - expand_img = np.full((int(h * ratio), int(w * ratio), c), - self.mean, - dtype=img.dtype) - left = int(random.uniform(0, w * ratio - w)) - top = int(random.uniform(0, h * ratio - h)) - expand_img[top:top + h, left:left + w] = img - - results['img'] = expand_img - # expand bboxes - for key in results.get('bbox_fields', []): - results[key] = results[key] + np.tile( - (left, top), 2).astype(results[key].dtype) - - # expand masks - for key in results.get('mask_fields', []): - results[key] = results[key].expand( - int(h * ratio), int(w * ratio), top, left) - - # expand segs - for key in results.get('seg_fields', []): - gt_seg = results[key] - expand_gt_seg = np.full((int(h * ratio), int(w * ratio)), - self.seg_ignore_label, - dtype=gt_seg.dtype) - expand_gt_seg[top:top + h, left:left + w] = gt_seg - results[key] = expand_gt_seg - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, to_rgb={self.to_rgb}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label})' - return repr_str - - -@PIPELINES.register_module() -class MinIoURandomCrop: - """Random crop the image & bboxes, the cropped patches have minimum IoU - requirement with original image & bboxes, the IoU threshold is randomly - selected from min_ious. - - Args: - min_ious (tuple): minimum IoU threshold for all intersections with - bounding boxes - min_crop_size (float): minimum crop's size (i.e. h,w := a*h, a*w, - where a >= min_crop_size). - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - - Note: - The keys for bboxes, labels and masks should be paired. That is, \ - `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and \ - `gt_bboxes_ignore` to `gt_labels_ignore` and `gt_masks_ignore`. - """ - - def __init__(self, - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3, - bbox_clip_border=True): - # 1: return ori img - self.min_ious = min_ious - self.sample_mode = (1, *min_ious, 0) - self.min_crop_size = min_crop_size - self.bbox_clip_border = bbox_clip_border - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - self.bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - - def __call__(self, results): - """Call function to crop images and bounding boxes with minimum IoU - constraint. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images and bounding boxes cropped, \ - 'img_shape' key is updated. 
- """ - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - assert 'bbox_fields' in results - boxes = [results[key] for key in results['bbox_fields']] - boxes = np.concatenate(boxes, 0) - h, w, c = img.shape - while True: - mode = random.choice(self.sample_mode) - self.mode = mode - if mode == 1: - return results - - min_iou = mode - for i in range(50): - new_w = random.uniform(self.min_crop_size * w, w) - new_h = random.uniform(self.min_crop_size * h, h) - - # h / w in [0.5, 2] - if new_h / new_w < 0.5 or new_h / new_w > 2: - continue - - left = random.uniform(w - new_w) - top = random.uniform(h - new_h) - - patch = np.array( - (int(left), int(top), int(left + new_w), int(top + new_h))) - # Line or point crop is not allowed - if patch[2] == patch[0] or patch[3] == patch[1]: - continue - overlaps = bbox_overlaps( - patch.reshape(-1, 4), boxes.reshape(-1, 4)).reshape(-1) - if len(overlaps) > 0 and overlaps.min() < min_iou: - continue - - # center of boxes should inside the crop img - # only adjust boxes and instance masks when the gt is not empty - if len(overlaps) > 0: - # adjust boxes - def is_center_of_bboxes_in_patch(boxes, patch): - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = ((center[:, 0] > patch[0]) * - (center[:, 1] > patch[1]) * - (center[:, 0] < patch[2]) * - (center[:, 1] < patch[3])) - return mask - - mask = is_center_of_bboxes_in_patch(boxes, patch) - if not mask.any(): - continue - for key in results.get('bbox_fields', []): - boxes = results[key].copy() - mask = is_center_of_bboxes_in_patch(boxes, patch) - boxes = boxes[mask] - if self.bbox_clip_border: - boxes[:, 2:] = boxes[:, 2:].clip(max=patch[2:]) - boxes[:, :2] = boxes[:, :2].clip(min=patch[:2]) - boxes -= np.tile(patch[:2], 2) - - results[key] = boxes - # labels - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][mask] - - # mask fields - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - mask.nonzero()[0]].crop(patch) - # adjust the img no matter whether the gt is empty before crop - img = img[patch[1]:patch[3], patch[0]:patch[2]] - results['img'] = img - results['img_shape'] = img.shape - - # seg fields - for key in results.get('seg_fields', []): - results[key] = results[key][patch[1]:patch[3], - patch[0]:patch[2]] - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(min_ious={self.min_ious}, ' - repr_str += f'min_crop_size={self.min_crop_size}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class Corrupt: - """Corruption augmentation. - - Corruption transforms implemented based on - `imagecorruptions `_. - - Args: - corruption (str): Corruption name. - severity (int, optional): The severity of corruption. Default: 1. - """ - - def __init__(self, corruption, severity=1): - self.corruption = corruption - self.severity = severity - - def __call__(self, results): - """Call function to corrupt image. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images corrupted. 
- """ - - if corrupt is None: - raise RuntimeError('imagecorruptions is not installed') - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - results['img'] = corrupt( - results['img'].astype(np.uint8), - corruption_name=self.corruption, - severity=self.severity) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(corruption={self.corruption}, ' - repr_str += f'severity={self.severity})' - return repr_str - - -@PIPELINES.register_module() -class Albu: - """Albumentation augmentation. - - Adds custom transformations from Albumentations library. - Please, visit `https://albumentations.readthedocs.io` - to get more information. - - An example of ``transforms`` is as followed: - - .. code-block:: - - [ - dict( - type='ShiftScaleRotate', - shift_limit=0.0625, - scale_limit=0.0, - rotate_limit=0, - interpolation=1, - p=0.5), - dict( - type='RandomBrightnessContrast', - brightness_limit=[0.1, 0.3], - contrast_limit=[0.1, 0.3], - p=0.2), - dict(type='ChannelShuffle', p=0.1), - dict( - type='OneOf', - transforms=[ - dict(type='Blur', blur_limit=3, p=1.0), - dict(type='MedianBlur', blur_limit=3, p=1.0) - ], - p=0.1), - ] - - Args: - transforms (list[dict]): A list of albu transformations - bbox_params (dict): Bbox_params for albumentation `Compose` - keymap (dict): Contains {'input key':'albumentation-style key'} - skip_img_without_anno (bool): Whether to skip the image if no ann left - after aug - """ - - def __init__(self, - transforms, - bbox_params=None, - keymap=None, - update_pad_shape=False, - skip_img_without_anno=False): - if Compose is None: - raise RuntimeError('albumentations is not installed') - - # Args will be modified later, copying it will be safer - transforms = copy.deepcopy(transforms) - if bbox_params is not None: - bbox_params = copy.deepcopy(bbox_params) - if keymap is not None: - keymap = copy.deepcopy(keymap) - self.transforms = transforms - self.filter_lost_elements = False - self.update_pad_shape = update_pad_shape - self.skip_img_without_anno = skip_img_without_anno - - # A simple workaround to remove masks without boxes - if (isinstance(bbox_params, dict) and 'label_fields' in bbox_params - and 'filter_lost_elements' in bbox_params): - self.filter_lost_elements = True - self.origin_label_fields = bbox_params['label_fields'] - bbox_params['label_fields'] = ['idx_mapper'] - del bbox_params['filter_lost_elements'] - - self.bbox_params = ( - self.albu_builder(bbox_params) if bbox_params else None) - self.aug = Compose([self.albu_builder(t) for t in self.transforms], - bbox_params=self.bbox_params) - - if not keymap: - self.keymap_to_albu = { - 'img': 'image', - 'gt_masks': 'masks', - 'gt_bboxes': 'bboxes' - } - else: - self.keymap_to_albu = keymap - self.keymap_back = {v: k for k, v in self.keymap_to_albu.items()} - - def albu_builder(self, cfg): - """Import a module from albumentations. - - It inherits some of :func:`build_from_cfg` logic. - - Args: - cfg (dict): Config dict. It should at least contain the key "type". - - Returns: - obj: The constructed object. 
- """ - - assert isinstance(cfg, dict) and 'type' in cfg - args = cfg.copy() - - obj_type = args.pop('type') - if mmcv.is_str(obj_type): - if albumentations is None: - raise RuntimeError('albumentations is not installed') - obj_cls = getattr(albumentations, obj_type) - elif inspect.isclass(obj_type): - obj_cls = obj_type - else: - raise TypeError( - f'type must be a str or valid type, but got {type(obj_type)}') - - if 'transforms' in args: - args['transforms'] = [ - self.albu_builder(transform) - for transform in args['transforms'] - ] - - return obj_cls(**args) - - @staticmethod - def mapper(d, keymap): - """Dictionary mapper. Renames keys according to keymap provided. - - Args: - d (dict): old dict - keymap (dict): {'old_key':'new_key'} - Returns: - dict: new dict. - """ - - updated_dict = {} - for k, v in zip(d.keys(), d.values()): - new_k = keymap.get(k, k) - updated_dict[new_k] = d[k] - return updated_dict - - def __call__(self, results): - # dict to albumentations format - results = self.mapper(results, self.keymap_to_albu) - # TODO: add bbox_fields - if 'bboxes' in results: - # to list of boxes - if isinstance(results['bboxes'], np.ndarray): - results['bboxes'] = [x for x in results['bboxes']] - # add pseudo-field for filtration - if self.filter_lost_elements: - results['idx_mapper'] = np.arange(len(results['bboxes'])) - - # TODO: Support mask structure in albu - if 'masks' in results: - if isinstance(results['masks'], PolygonMasks): - raise NotImplementedError( - 'Albu only supports BitMap masks now') - ori_masks = results['masks'] - if albumentations.__version__ < '0.5': - results['masks'] = results['masks'].masks - else: - results['masks'] = [mask for mask in results['masks'].masks] - - results = self.aug(**results) - - if 'bboxes' in results: - if isinstance(results['bboxes'], list): - results['bboxes'] = np.array( - results['bboxes'], dtype=np.float32) - results['bboxes'] = results['bboxes'].reshape(-1, 4) - - # filter label_fields - if self.filter_lost_elements: - - for label in self.origin_label_fields: - results[label] = np.array( - [results[label][i] for i in results['idx_mapper']]) - if 'masks' in results: - results['masks'] = np.array( - [results['masks'][i] for i in results['idx_mapper']]) - results['masks'] = ori_masks.__class__( - results['masks'], results['image'].shape[0], - results['image'].shape[1]) - - if (not len(results['idx_mapper']) - and self.skip_img_without_anno): - return None - - if 'gt_labels' in results: - if isinstance(results['gt_labels'], list): - results['gt_labels'] = np.array(results['gt_labels']) - results['gt_labels'] = results['gt_labels'].astype(np.int64) - - # back to the original format - results = self.mapper(results, self.keymap_back) - - # update final shape - if self.update_pad_shape: - results['pad_shape'] = results['img'].shape - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ + f'(transforms={self.transforms})' - return repr_str - - -@PIPELINES.register_module() -class RandomCenterCropPad: - """Random center crop and random around padding for CornerNet. - - This operation generates randomly cropped image from the original image and - pads it simultaneously. Different from :class:`RandomCrop`, the output - shape may not equal to ``crop_size`` strictly. We choose a random value - from ``ratios`` and the output shape could be larger or smaller than - ``crop_size``. The padding operation is also different from :class:`Pad`, - here we use around padding instead of right-bottom padding. 
- - The relation between output image (padding image) and original image: - - .. code:: text - - output image - - +----------------------------+ - | padded area | - +------|----------------------------|----------+ - | | cropped area | | - | | +---------------+ | | - | | | . center | | | original image - | | | range | | | - | | +---------------+ | | - +------|----------------------------|----------+ - | padded area | - +----------------------------+ - - There are 5 main areas in the figure: - - - output image: output image of this operation, also called padding - image in following instruction. - - original image: input image of this operation. - - padded area: non-intersect area of output image and original image. - - cropped area: the overlap of output image and original image. - - center range: a smaller area where random center chosen from. - center range is computed by ``border`` and original image's shape - to avoid our random center is too close to original image's border. - - Also this operation act differently in train and test mode, the summary - pipeline is listed below. - - Train pipeline: - - 1. Choose a ``random_ratio`` from ``ratios``, the shape of padding image - will be ``random_ratio * crop_size``. - 2. Choose a ``random_center`` in center range. - 3. Generate padding image with center matches the ``random_center``. - 4. Initialize the padding image with pixel value equals to ``mean``. - 5. Copy the cropped area to padding image. - 6. Refine annotations. - - Test pipeline: - - 1. Compute output shape according to ``test_pad_mode``. - 2. Generate padding image with center matches the original image - center. - 3. Initialize the padding image with pixel value equals to ``mean``. - 4. Copy the ``cropped area`` to padding image. - - Args: - crop_size (tuple | None): expected size after crop, final size will - computed according to ratio. Requires (h, w) in train mode, and - None in test mode. - ratios (tuple): random select a ratio from tuple and crop image to - (crop_size[0] * ratio) * (crop_size[1] * ratio). - Only available in train mode. - border (int): max distance from center select area to image border. - Only available in train mode. - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB. - test_mode (bool): whether involve random variables in transform. - In train mode, crop_size is fixed, center coords and ratio is - random selected from predefined lists. In test mode, crop_size - is image's original shape, center coords and ratio is fixed. - test_pad_mode (tuple): padding method and padding shape value, only - available in test mode. Default is using 'logical_or' with - 127 as padding shape value. - - - 'logical_or': final_shape = input_shape | padding_shape_value - - 'size_divisor': final_shape = int( - ceil(input_shape / padding_shape_value) * padding_shape_value) - test_pad_add_pix (int): Extra padding pixel in test mode. Default 0. - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. 
- """ - - def __init__(self, - crop_size=None, - ratios=(0.9, 1.0, 1.1), - border=128, - mean=None, - std=None, - to_rgb=None, - test_mode=False, - test_pad_mode=('logical_or', 127), - test_pad_add_pix=0, - bbox_clip_border=True): - if test_mode: - assert crop_size is None, 'crop_size must be None in test mode' - assert ratios is None, 'ratios must be None in test mode' - assert border is None, 'border must be None in test mode' - assert isinstance(test_pad_mode, (list, tuple)) - assert test_pad_mode[0] in ['logical_or', 'size_divisor'] - else: - assert isinstance(crop_size, (list, tuple)) - assert crop_size[0] > 0 and crop_size[1] > 0, ( - 'crop_size must > 0 in train mode') - assert isinstance(ratios, (list, tuple)) - assert test_pad_mode is None, ( - 'test_pad_mode must be None in train mode') - - self.crop_size = crop_size - self.ratios = ratios - self.border = border - # We do not set default value to mean, std and to_rgb because these - # hyper-parameters are easy to forget but could affect the performance. - # Please use the same setting as Normalize for performance assurance. - assert mean is not None and std is not None and to_rgb is not None - self.to_rgb = to_rgb - self.input_mean = mean - self.input_std = std - if to_rgb: - self.mean = mean[::-1] - self.std = std[::-1] - else: - self.mean = mean - self.std = std - self.test_mode = test_mode - self.test_pad_mode = test_pad_mode - self.test_pad_add_pix = test_pad_add_pix - self.bbox_clip_border = bbox_clip_border - - def _get_border(self, border, size): - """Get final border for the target size. - - This function generates a ``final_border`` according to image's shape. - The area between ``final_border`` and ``size - final_border`` is the - ``center range``. We randomly choose center from the ``center range`` - to avoid our random center is too close to original image's border. - Also ``center range`` should be larger than 0. - - Args: - border (int): The initial border, default is 128. - size (int): The width or height of original image. - Returns: - int: The final border. - """ - k = 2 * border / size - i = pow(2, np.ceil(np.log2(np.ceil(k))) + (k == int(k))) - return border // i - - def _filter_boxes(self, patch, boxes): - """Check whether the center of each box is in the patch. - - Args: - patch (list[int]): The cropped area, [left, top, right, bottom]. - boxes (numpy array, (N x 4)): Ground truth boxes. - - Returns: - mask (numpy array, (N,)): Each box is inside or outside the patch. - """ - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = (center[:, 0] > patch[0]) * (center[:, 1] > patch[1]) * ( - center[:, 0] < patch[2]) * ( - center[:, 1] < patch[3]) - return mask - - def _crop_image_and_paste(self, image, center, size): - """Crop image with a given center and size, then paste the cropped - image to a blank image with two centers align. - - This function is equivalent to generating a blank image with ``size`` - as its shape. Then cover it on the original image with two centers ( - the center of blank image and the random center of original image) - aligned. The overlap area is paste from the original image and the - outside area is filled with ``mean pixel``. - - Args: - image (np array, H x W x C): Original image. - center (list[int]): Target crop center coord. - size (list[int]): Target crop size. [target_h, target_w] - - Returns: - cropped_img (np array, target_h x target_w x C): Cropped image. 
- border (np array, 4): The distance of four border of - ``cropped_img`` to the original image area, [top, bottom, - left, right] - patch (list[int]): The cropped area, [left, top, right, bottom]. - """ - center_y, center_x = center - target_h, target_w = size - img_h, img_w, img_c = image.shape - - x0 = max(0, center_x - target_w // 2) - x1 = min(center_x + target_w // 2, img_w) - y0 = max(0, center_y - target_h // 2) - y1 = min(center_y + target_h // 2, img_h) - patch = np.array((int(x0), int(y0), int(x1), int(y1))) - - left, right = center_x - x0, x1 - center_x - top, bottom = center_y - y0, y1 - center_y - - cropped_center_y, cropped_center_x = target_h // 2, target_w // 2 - cropped_img = np.zeros((target_h, target_w, img_c), dtype=image.dtype) - for i in range(img_c): - cropped_img[:, :, i] += self.mean[i] - y_slice = slice(cropped_center_y - top, cropped_center_y + bottom) - x_slice = slice(cropped_center_x - left, cropped_center_x + right) - cropped_img[y_slice, x_slice, :] = image[y0:y1, x0:x1, :] - - border = np.array([ - cropped_center_y - top, cropped_center_y + bottom, - cropped_center_x - left, cropped_center_x + right - ], - dtype=np.float32) - - return cropped_img, border, patch - - def _train_aug(self, results): - """Random crop and around padding the original image. - - Args: - results (dict): Image infomations in the augment pipeline. - - Returns: - results (dict): The updated dict. - """ - img = results['img'] - h, w, c = img.shape - boxes = results['gt_bboxes'] - while True: - scale = random.choice(self.ratios) - new_h = int(self.crop_size[0] * scale) - new_w = int(self.crop_size[1] * scale) - h_border = self._get_border(self.border, h) - w_border = self._get_border(self.border, w) - - for i in range(50): - center_x = random.randint(low=w_border, high=w - w_border) - center_y = random.randint(low=h_border, high=h - h_border) - - cropped_img, border, patch = self._crop_image_and_paste( - img, [center_y, center_x], [new_h, new_w]) - - mask = self._filter_boxes(patch, boxes) - # if image do not have valid bbox, any crop patch is valid. - if not mask.any() and len(boxes) > 0: - continue - - results['img'] = cropped_img - results['img_shape'] = cropped_img.shape - results['pad_shape'] = cropped_img.shape - - x0, y0, x1, y1 = patch - - left_w, top_h = center_x - x0, center_y - y0 - cropped_center_x, cropped_center_y = new_w // 2, new_h // 2 - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - mask = self._filter_boxes(patch, results[key]) - bboxes = results[key][mask] - bboxes[:, 0:4:2] += cropped_center_x - left_w - x0 - bboxes[:, 1:4:2] += cropped_center_y - top_h - y0 - if self.bbox_clip_border: - bboxes[:, 0:4:2] = np.clip(bboxes[:, 0:4:2], 0, new_w) - bboxes[:, 1:4:2] = np.clip(bboxes[:, 1:4:2], 0, new_h) - keep = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - bboxes = bboxes[keep] - results[key] = bboxes - if key in ['gt_bboxes']: - if 'gt_labels' in results: - labels = results['gt_labels'][mask] - labels = labels[keep] - results['gt_labels'] = labels - if 'gt_masks' in results: - raise NotImplementedError( - 'RandomCenterCropPad only supports bbox.') - - # crop semantic seg - for key in results.get('seg_fields', []): - raise NotImplementedError( - 'RandomCenterCropPad only supports bbox.') - return results - - def _test_aug(self, results): - """Around padding the original image without cropping. - - The padding mode and value are from ``test_pad_mode``. 
- - Args: - results (dict): Image infomations in the augment pipeline. - - Returns: - results (dict): The updated dict. - """ - img = results['img'] - h, w, c = img.shape - results['img_shape'] = img.shape - if self.test_pad_mode[0] in ['logical_or']: - # self.test_pad_add_pix is only used for centernet - target_h = (h | self.test_pad_mode[1]) + self.test_pad_add_pix - target_w = (w | self.test_pad_mode[1]) + self.test_pad_add_pix - elif self.test_pad_mode[0] in ['size_divisor']: - divisor = self.test_pad_mode[1] - target_h = int(np.ceil(h / divisor)) * divisor - target_w = int(np.ceil(w / divisor)) * divisor - else: - raise NotImplementedError( - 'RandomCenterCropPad only support two testing pad mode:' - 'logical-or and size_divisor.') - - cropped_img, border, _ = self._crop_image_and_paste( - img, [h // 2, w // 2], [target_h, target_w]) - results['img'] = cropped_img - results['pad_shape'] = cropped_img.shape - results['border'] = border - return results - - def __call__(self, results): - img = results['img'] - assert img.dtype == np.float32, ( - 'RandomCenterCropPad needs the input image of dtype np.float32,' - ' please set "to_float32=True" in "LoadImageFromFile" pipeline') - h, w, c = img.shape - assert c == len(self.mean) - if self.test_mode: - return self._test_aug(results) - else: - return self._train_aug(results) - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(crop_size={self.crop_size}, ' - repr_str += f'ratios={self.ratios}, ' - repr_str += f'border={self.border}, ' - repr_str += f'mean={self.input_mean}, ' - repr_str += f'std={self.input_std}, ' - repr_str += f'to_rgb={self.to_rgb}, ' - repr_str += f'test_mode={self.test_mode}, ' - repr_str += f'test_pad_mode={self.test_pad_mode}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class CutOut: - """CutOut operation. - - Randomly drop some regions of image used in - `Cutout `_. - - Args: - n_holes (int | tuple[int, int]): Number of regions to be dropped. - If it is given as a list, number of holes will be randomly - selected from the closed interval [`n_holes[0]`, `n_holes[1]`]. - cutout_shape (tuple[int, int] | list[tuple[int, int]]): The candidate - shape of dropped regions. It can be `tuple[int, int]` to use a - fixed cutout shape, or `list[tuple[int, int]]` to randomly choose - shape from the list. - cutout_ratio (tuple[float, float] | list[tuple[float, float]]): The - candidate ratio of dropped regions. It can be `tuple[float, float]` - to use a fixed ratio or `list[tuple[float, float]]` to randomly - choose ratio from the list. Please note that `cutout_shape` - and `cutout_ratio` cannot be both given at the same time. - fill_in (tuple[float, float, float] | tuple[int, int, int]): The value - of pixel to fill in the dropped regions. Default: (0, 0, 0). - """ - - def __init__(self, - n_holes, - cutout_shape=None, - cutout_ratio=None, - fill_in=(0, 0, 0)): - - assert (cutout_shape is None) ^ (cutout_ratio is None), \ - 'Either cutout_shape or cutout_ratio should be specified.' 
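Stripped of the registry and config plumbing, the hole-dropping described above reduces to the small sketch below. The array size, hole shape, and fill value are illustrative only and are not taken from this patch.

```python
import numpy as np

def cutout_once(img, hole_hw=(8, 8), fill=(0, 0, 0)):
    """Drop one rectangular region and fill it with a constant value."""
    h, w = img.shape[:2]
    y1 = np.random.randint(0, h)
    x1 = np.random.randint(0, w)
    y2 = min(y1 + hole_hw[0], h)
    x2 = min(x1 + hole_hw[1], w)
    img[y1:y2, x1:x2] = fill  # broadcast the fill color over the hole
    return img

img = np.full((32, 32, 3), 255, dtype=np.uint8)
img = cutout_once(img)  # one 8x8 hole, filled with black
```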
- assert (isinstance(cutout_shape, (list, tuple)) - or isinstance(cutout_ratio, (list, tuple))) - if isinstance(n_holes, tuple): - assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1] - else: - n_holes = (n_holes, n_holes) - self.n_holes = n_holes - self.fill_in = fill_in - self.with_ratio = cutout_ratio is not None - self.candidates = cutout_ratio if self.with_ratio else cutout_shape - if not isinstance(self.candidates, list): - self.candidates = [self.candidates] - - def __call__(self, results): - """Call function to drop some regions of image.""" - h, w, c = results['img'].shape - n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1) - for _ in range(n_holes): - x1 = np.random.randint(0, w) - y1 = np.random.randint(0, h) - index = np.random.randint(0, len(self.candidates)) - if not self.with_ratio: - cutout_w, cutout_h = self.candidates[index] - else: - cutout_w = int(self.candidates[index][0] * w) - cutout_h = int(self.candidates[index][1] * h) - - x2 = np.clip(x1 + cutout_w, 0, w) - y2 = np.clip(y1 + cutout_h, 0, h) - results['img'][y1:y2, x1:x2, :] = self.fill_in - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(n_holes={self.n_holes}, ' - repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio - else f'cutout_shape={self.candidates}, ') - repr_str += f'fill_in={self.fill_in})' - return repr_str - - -@PIPELINES.register_module() -class Mosaic: - """Mosaic augmentation. - - Given 4 images, mosaic transform combines them into - one output image. The output image is composed of the parts from each sub- - image. - - .. code:: text - - mosaic transform - center_x - +------------------------------+ - | pad | pad | - | +-----------+ | - | | | | - | | image1 |--------+ | - | | | | | - | | | image2 | | - center_y |----+-------------+-----------| - | | cropped | | - |pad | image3 | image4 | - | | | | - +----|-------------+-----------+ - | | - +-------------+ - - The mosaic transform steps are as follows: - - 1. Choose the mosaic center as the intersections of 4 images - 2. Get the left top image according to the index, and randomly - sample another 3 images from the custom dataset. - 3. Sub image will be cropped if image is larger than mosaic patch - - Args: - img_scale (Sequence[int]): Image size after mosaic pipeline of single - image. The shape order should be (height, width). - Default to (640, 640). - center_ratio_range (Sequence[float]): Center ratio range of mosaic - output. Default to (0.5, 1.5). - min_bbox_size (int | float): The minimum pixel for filtering - invalid bboxes after the mosaic pipeline. Default to 0. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - skip_filter (bool): Whether to skip filtering rules. If it - is True, the filter rule will not be applied, and the - `min_bbox_size` is invalid. Default to True. - pad_val (int): Pad value. Default to 114. - prob (float): Probability of applying this transformation. - Default to 1.0. - """ - - def __init__(self, - img_scale=(640, 640), - center_ratio_range=(0.5, 1.5), - min_bbox_size=0, - bbox_clip_border=True, - skip_filter=True, - pad_val=114, - prob=1.0): - assert isinstance(img_scale, tuple) - assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. '\ - f'got {prob}.' 
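The transform described above combines four images, so it cannot run on a plain dataset by itself; the three extra images arrive through `results['mix_results']`, which a mix-aware dataset wrapper is expected to supply. The fragment below is a rough, hypothetical sketch of that wiring in the YOLOX style; the dataset paths and pipeline steps are placeholders, not values from this repository.

```python
img_scale = (640, 640)

train_pipeline = [
    dict(type='Mosaic', img_scale=img_scale, pad_val=114.0),
    dict(type='RandomFlip', flip_ratio=0.5),
]

train_dataset = dict(
    type='MultiImageMixDataset',
    dataset=dict(
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_train2017.json',
        img_prefix='data/coco/train2017/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', with_bbox=True),
        ]),
    pipeline=train_pipeline)
```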
- - log_img_scale(img_scale, skip_square=True) - self.img_scale = img_scale - self.center_ratio_range = center_ratio_range - self.min_bbox_size = min_bbox_size - self.bbox_clip_border = bbox_clip_border - self.skip_filter = skip_filter - self.pad_val = pad_val - self.prob = prob - - def __call__(self, results): - """Call function to make a mosaic of image. - - Args: - results (dict): Result dict. - - Returns: - dict: Result dict with mosaic transformed. - """ - - if random.uniform(0, 1) > self.prob: - return results - - results = self._mosaic_transform(results) - return results - - def get_indexes(self, dataset): - """Call function to collect indexes. - - Args: - dataset (:obj:`MultiImageMixDataset`): The dataset. - - Returns: - list: indexes. - """ - - indexes = [random.randint(0, len(dataset)) for _ in range(3)] - return indexes - - def _mosaic_transform(self, results): - """Mosaic transform function. - - Args: - results (dict): Result dict. - - Returns: - dict: Updated result dict. - """ - - assert 'mix_results' in results - mosaic_labels = [] - mosaic_bboxes = [] - if len(results['img'].shape) == 3: - mosaic_img = np.full( - (int(self.img_scale[0] * 2), int(self.img_scale[1] * 2), 3), - self.pad_val, - dtype=results['img'].dtype) - else: - mosaic_img = np.full( - (int(self.img_scale[0] * 2), int(self.img_scale[1] * 2)), - self.pad_val, - dtype=results['img'].dtype) - - # mosaic center x, y - center_x = int( - random.uniform(*self.center_ratio_range) * self.img_scale[1]) - center_y = int( - random.uniform(*self.center_ratio_range) * self.img_scale[0]) - center_position = (center_x, center_y) - - loc_strs = ('top_left', 'top_right', 'bottom_left', 'bottom_right') - for i, loc in enumerate(loc_strs): - if loc == 'top_left': - results_patch = copy.deepcopy(results) - else: - results_patch = copy.deepcopy(results['mix_results'][i - 1]) - - img_i = results_patch['img'] - h_i, w_i = img_i.shape[:2] - # keep_ratio resize - scale_ratio_i = min(self.img_scale[0] / h_i, - self.img_scale[1] / w_i) - img_i = mmcv.imresize( - img_i, (int(w_i * scale_ratio_i), int(h_i * scale_ratio_i))) - - # compute the combine parameters - paste_coord, crop_coord = self._mosaic_combine( - loc, center_position, img_i.shape[:2][::-1]) - x1_p, y1_p, x2_p, y2_p = paste_coord - x1_c, y1_c, x2_c, y2_c = crop_coord - - # crop and paste image - mosaic_img[y1_p:y2_p, x1_p:x2_p] = img_i[y1_c:y2_c, x1_c:x2_c] - - # adjust coordinate - gt_bboxes_i = results_patch['gt_bboxes'] - gt_labels_i = results_patch['gt_labels'] - - if gt_bboxes_i.shape[0] > 0: - padw = x1_p - x1_c - padh = y1_p - y1_c - gt_bboxes_i[:, 0::2] = \ - scale_ratio_i * gt_bboxes_i[:, 0::2] + padw - gt_bboxes_i[:, 1::2] = \ - scale_ratio_i * gt_bboxes_i[:, 1::2] + padh - - mosaic_bboxes.append(gt_bboxes_i) - mosaic_labels.append(gt_labels_i) - - if len(mosaic_labels) > 0: - mosaic_bboxes = np.concatenate(mosaic_bboxes, 0) - mosaic_labels = np.concatenate(mosaic_labels, 0) - - if self.bbox_clip_border: - mosaic_bboxes[:, 0::2] = np.clip(mosaic_bboxes[:, 0::2], 0, - 2 * self.img_scale[1]) - mosaic_bboxes[:, 1::2] = np.clip(mosaic_bboxes[:, 1::2], 0, - 2 * self.img_scale[0]) - - if not self.skip_filter: - mosaic_bboxes, mosaic_labels = \ - self._filter_box_candidates(mosaic_bboxes, mosaic_labels) - - # remove outside bboxes - inside_inds = find_inside_bboxes(mosaic_bboxes, 2 * self.img_scale[0], - 2 * self.img_scale[1]) - mosaic_bboxes = mosaic_bboxes[inside_inds] - mosaic_labels = mosaic_labels[inside_inds] - - results['img'] = mosaic_img - results['img_shape'] = 
mosaic_img.shape - results['gt_bboxes'] = mosaic_bboxes - results['gt_labels'] = mosaic_labels - - return results - - def _mosaic_combine(self, loc, center_position_xy, img_shape_wh): - """Calculate global coordinate of mosaic image and local coordinate of - cropped sub-image. - - Args: - loc (str): Index for the sub-image, loc in ('top_left', - 'top_right', 'bottom_left', 'bottom_right'). - center_position_xy (Sequence[float]): Mixing center for 4 images, - (x, y). - img_shape_wh (Sequence[int]): Width and height of sub-image - - Returns: - tuple[tuple[float]]: Corresponding coordinate of pasting and - cropping - - paste_coord (tuple): paste corner coordinate in mosaic image. - - crop_coord (tuple): crop corner coordinate in mosaic image. - """ - assert loc in ('top_left', 'top_right', 'bottom_left', 'bottom_right') - if loc == 'top_left': - # index0 to top left part of image - x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \ - max(center_position_xy[1] - img_shape_wh[1], 0), \ - center_position_xy[0], \ - center_position_xy[1] - crop_coord = img_shape_wh[0] - (x2 - x1), img_shape_wh[1] - ( - y2 - y1), img_shape_wh[0], img_shape_wh[1] - - elif loc == 'top_right': - # index1 to top right part of image - x1, y1, x2, y2 = center_position_xy[0], \ - max(center_position_xy[1] - img_shape_wh[1], 0), \ - min(center_position_xy[0] + img_shape_wh[0], - self.img_scale[1] * 2), \ - center_position_xy[1] - crop_coord = 0, img_shape_wh[1] - (y2 - y1), min( - img_shape_wh[0], x2 - x1), img_shape_wh[1] - - elif loc == 'bottom_left': - # index2 to bottom left part of image - x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \ - center_position_xy[1], \ - center_position_xy[0], \ - min(self.img_scale[0] * 2, center_position_xy[1] + - img_shape_wh[1]) - crop_coord = img_shape_wh[0] - (x2 - x1), 0, img_shape_wh[0], min( - y2 - y1, img_shape_wh[1]) - - else: - # index3 to bottom right part of image - x1, y1, x2, y2 = center_position_xy[0], \ - center_position_xy[1], \ - min(center_position_xy[0] + img_shape_wh[0], - self.img_scale[1] * 2), \ - min(self.img_scale[0] * 2, center_position_xy[1] + - img_shape_wh[1]) - crop_coord = 0, 0, min(img_shape_wh[0], - x2 - x1), min(y2 - y1, img_shape_wh[1]) - - paste_coord = x1, y1, x2, y2 - return paste_coord, crop_coord - - def _filter_box_candidates(self, bboxes, labels): - """Filter out bboxes too small after Mosaic.""" - bbox_w = bboxes[:, 2] - bboxes[:, 0] - bbox_h = bboxes[:, 3] - bboxes[:, 1] - valid_inds = (bbox_w > self.min_bbox_size) & \ - (bbox_h > self.min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - return bboxes[valid_inds], labels[valid_inds] - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'img_scale={self.img_scale}, ' - repr_str += f'center_ratio_range={self.center_ratio_range}, ' - repr_str += f'pad_val={self.pad_val}, ' - repr_str += f'min_bbox_size={self.min_bbox_size}, ' - repr_str += f'skip_filter={self.skip_filter})' - return repr_str - - -@PIPELINES.register_module() -class MixUp: - """MixUp data augmentation. - - .. code:: text - - mixup transform - +------------------------------+ - | mixup image | | - | +--------|--------+ | - | | | | | - |---------------+ | | - | | | | - | | image | | - | | | | - | | | | - | |-----------------+ | - | pad | - +------------------------------+ - - The mixup transform steps are as follows: - - 1. Another random image is picked by dataset and embedded in - the top left patch(after padding and resizing) - 2. 
The target of mixup transform is the weighted average of mixup - image and origin image. - - Args: - img_scale (Sequence[int]): Image output size after mixup pipeline. - The shape order should be (height, width). Default: (640, 640). - ratio_range (Sequence[float]): Scale ratio of mixup image. - Default: (0.5, 1.5). - flip_ratio (float): Horizontal flip ratio of mixup image. - Default: 0.5. - pad_val (int): Pad value. Default: 114. - max_iters (int): The maximum number of iterations. If the number of - iterations is greater than `max_iters`, but gt_bbox is still - empty, then the iteration is terminated. Default: 15. - min_bbox_size (float): Width and height threshold to filter bboxes. - If the height or width of a box is smaller than this value, it - will be removed. Default: 5. - min_area_ratio (float): Threshold of area ratio between - original bboxes and wrapped bboxes. If smaller than this value, - the box will be removed. Default: 0.2. - max_aspect_ratio (float): Aspect ratio of width and height - threshold to filter bboxes. If max(h/w, w/h) larger than this - value, the box will be removed. Default: 20. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - skip_filter (bool): Whether to skip filtering rules. If it - is True, the filter rule will not be applied, and the - `min_bbox_size` and `min_area_ratio` and `max_aspect_ratio` - is invalid. Default to True. - """ - - def __init__(self, - img_scale=(640, 640), - ratio_range=(0.5, 1.5), - flip_ratio=0.5, - pad_val=114, - max_iters=15, - min_bbox_size=5, - min_area_ratio=0.2, - max_aspect_ratio=20, - bbox_clip_border=True, - skip_filter=True): - assert isinstance(img_scale, tuple) - log_img_scale(img_scale, skip_square=True) - self.dynamic_scale = img_scale - self.ratio_range = ratio_range - self.flip_ratio = flip_ratio - self.pad_val = pad_val - self.max_iters = max_iters - self.min_bbox_size = min_bbox_size - self.min_area_ratio = min_area_ratio - self.max_aspect_ratio = max_aspect_ratio - self.bbox_clip_border = bbox_clip_border - self.skip_filter = skip_filter - - def __call__(self, results): - """Call function to make a mixup of image. - - Args: - results (dict): Result dict. - - Returns: - dict: Result dict with mixup transformed. - """ - - results = self._mixup_transform(results) - return results - - def get_indexes(self, dataset): - """Call function to collect indexes. - - Args: - dataset (:obj:`MultiImageMixDataset`): The dataset. - - Returns: - list: indexes. - """ - - for i in range(self.max_iters): - index = random.randint(0, len(dataset)) - gt_bboxes_i = dataset.get_ann_info(index)['bboxes'] - if len(gt_bboxes_i) != 0: - break - - return index - - def _mixup_transform(self, results): - """MixUp transform function. - - Args: - results (dict): Result dict. - - Returns: - dict: Updated result dict. - """ - - assert 'mix_results' in results - assert len( - results['mix_results']) == 1, 'MixUp only support 2 images now !' 
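As the summary above says, the pixel-level part of this transform is a weighted average of the original image and the resized, padded, and cropped mixup image (a fixed 0.5/0.5 blend in this implementation); the rest of the work is box bookkeeping. On its own, that blending step is just the following sketch, with random arrays standing in for real images.

```python
import numpy as np

ori_img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
mix_img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

# Blend in float to avoid uint8 overflow, then cast back.
blended = (0.5 * ori_img.astype(np.float32) +
           0.5 * mix_img.astype(np.float32)).astype(np.uint8)
```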
- - if results['mix_results'][0]['gt_bboxes'].shape[0] == 0: - # empty bbox - return results - - retrieve_results = results['mix_results'][0] - retrieve_img = retrieve_results['img'] - - jit_factor = random.uniform(*self.ratio_range) - is_filp = random.uniform(0, 1) > self.flip_ratio - - if len(retrieve_img.shape) == 3: - out_img = np.ones( - (self.dynamic_scale[0], self.dynamic_scale[1], 3), - dtype=retrieve_img.dtype) * self.pad_val - else: - out_img = np.ones( - self.dynamic_scale, dtype=retrieve_img.dtype) * self.pad_val - - # 1. keep_ratio resize - scale_ratio = min(self.dynamic_scale[0] / retrieve_img.shape[0], - self.dynamic_scale[1] / retrieve_img.shape[1]) - retrieve_img = mmcv.imresize( - retrieve_img, (int(retrieve_img.shape[1] * scale_ratio), - int(retrieve_img.shape[0] * scale_ratio))) - - # 2. paste - out_img[:retrieve_img.shape[0], :retrieve_img.shape[1]] = retrieve_img - - # 3. scale jit - scale_ratio *= jit_factor - out_img = mmcv.imresize(out_img, (int(out_img.shape[1] * jit_factor), - int(out_img.shape[0] * jit_factor))) - - # 4. flip - if is_filp: - out_img = out_img[:, ::-1, :] - - # 5. random crop - ori_img = results['img'] - origin_h, origin_w = out_img.shape[:2] - target_h, target_w = ori_img.shape[:2] - padded_img = np.zeros( - (max(origin_h, target_h), max(origin_w, - target_w), 3)).astype(np.uint8) - padded_img[:origin_h, :origin_w] = out_img - - x_offset, y_offset = 0, 0 - if padded_img.shape[0] > target_h: - y_offset = random.randint(0, padded_img.shape[0] - target_h) - if padded_img.shape[1] > target_w: - x_offset = random.randint(0, padded_img.shape[1] - target_w) - padded_cropped_img = padded_img[y_offset:y_offset + target_h, - x_offset:x_offset + target_w] - - # 6. adjust bbox - retrieve_gt_bboxes = retrieve_results['gt_bboxes'] - retrieve_gt_bboxes[:, 0::2] = retrieve_gt_bboxes[:, 0::2] * scale_ratio - retrieve_gt_bboxes[:, 1::2] = retrieve_gt_bboxes[:, 1::2] * scale_ratio - if self.bbox_clip_border: - retrieve_gt_bboxes[:, 0::2] = np.clip(retrieve_gt_bboxes[:, 0::2], - 0, origin_w) - retrieve_gt_bboxes[:, 1::2] = np.clip(retrieve_gt_bboxes[:, 1::2], - 0, origin_h) - - if is_filp: - retrieve_gt_bboxes[:, 0::2] = ( - origin_w - retrieve_gt_bboxes[:, 0::2][:, ::-1]) - - # 7. filter - cp_retrieve_gt_bboxes = retrieve_gt_bboxes.copy() - cp_retrieve_gt_bboxes[:, 0::2] = \ - cp_retrieve_gt_bboxes[:, 0::2] - x_offset - cp_retrieve_gt_bboxes[:, 1::2] = \ - cp_retrieve_gt_bboxes[:, 1::2] - y_offset - if self.bbox_clip_border: - cp_retrieve_gt_bboxes[:, 0::2] = np.clip( - cp_retrieve_gt_bboxes[:, 0::2], 0, target_w) - cp_retrieve_gt_bboxes[:, 1::2] = np.clip( - cp_retrieve_gt_bboxes[:, 1::2], 0, target_h) - - # 8. 
mix up - ori_img = ori_img.astype(np.float32) - mixup_img = 0.5 * ori_img + 0.5 * padded_cropped_img.astype(np.float32) - - retrieve_gt_labels = retrieve_results['gt_labels'] - if not self.skip_filter: - keep_list = self._filter_box_candidates(retrieve_gt_bboxes.T, - cp_retrieve_gt_bboxes.T) - - retrieve_gt_labels = retrieve_gt_labels[keep_list] - cp_retrieve_gt_bboxes = cp_retrieve_gt_bboxes[keep_list] - - mixup_gt_bboxes = np.concatenate( - (results['gt_bboxes'], cp_retrieve_gt_bboxes), axis=0) - mixup_gt_labels = np.concatenate( - (results['gt_labels'], retrieve_gt_labels), axis=0) - - # remove outside bbox - inside_inds = find_inside_bboxes(mixup_gt_bboxes, target_h, target_w) - mixup_gt_bboxes = mixup_gt_bboxes[inside_inds] - mixup_gt_labels = mixup_gt_labels[inside_inds] - - results['img'] = mixup_img.astype(np.uint8) - results['img_shape'] = mixup_img.shape - results['gt_bboxes'] = mixup_gt_bboxes - results['gt_labels'] = mixup_gt_labels - - return results - - def _filter_box_candidates(self, bbox1, bbox2): - """Compute candidate boxes which include following 5 things: - - bbox1 before augment, bbox2 after augment, min_bbox_size (pixels), - min_area_ratio, max_aspect_ratio. - """ - - w1, h1 = bbox1[2] - bbox1[0], bbox1[3] - bbox1[1] - w2, h2 = bbox2[2] - bbox2[0], bbox2[3] - bbox2[1] - ar = np.maximum(w2 / (h2 + 1e-16), h2 / (w2 + 1e-16)) - return ((w2 > self.min_bbox_size) - & (h2 > self.min_bbox_size) - & (w2 * h2 / (w1 * h1 + 1e-16) > self.min_area_ratio) - & (ar < self.max_aspect_ratio)) - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'dynamic_scale={self.dynamic_scale}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'flip_ratio={self.flip_ratio}, ' - repr_str += f'pad_val={self.pad_val}, ' - repr_str += f'max_iters={self.max_iters}, ' - repr_str += f'min_bbox_size={self.min_bbox_size}, ' - repr_str += f'min_area_ratio={self.min_area_ratio}, ' - repr_str += f'max_aspect_ratio={self.max_aspect_ratio}, ' - repr_str += f'skip_filter={self.skip_filter})' - return repr_str - - -@PIPELINES.register_module() -class RandomAffine: - """Random affine transform data augmentation. - - This operation randomly generates affine transform matrix which including - rotation, translation, shear and scaling transforms. - - Args: - max_rotate_degree (float): Maximum degrees of rotation transform. - Default: 10. - max_translate_ratio (float): Maximum ratio of translation. - Default: 0.1. - scaling_ratio_range (tuple[float]): Min and max ratio of - scaling transform. Default: (0.5, 1.5). - max_shear_degree (float): Maximum degrees of shear - transform. Default: 2. - border (tuple[int]): Distance from height and width sides of input - image to adjust output shape. Only used in mosaic dataset. - Default: (0, 0). - border_val (tuple[int]): Border padding values of 3 channels. - Default: (114, 114, 114). - min_bbox_size (float): Width and height threshold to filter bboxes. - If the height or width of a box is smaller than this value, it - will be removed. Default: 2. - min_area_ratio (float): Threshold of area ratio between - original bboxes and wrapped bboxes. If smaller than this value, - the box will be removed. Default: 0.2. - max_aspect_ratio (float): Aspect ratio of width and height - threshold to filter bboxes. If max(h/w, w/h) larger than this - value, the box will be removed. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. 
In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - skip_filter (bool): Whether to skip filtering rules. If it - is True, the filter rule will not be applied, and the - `min_bbox_size` and `min_area_ratio` and `max_aspect_ratio` - is invalid. Default to True. - """ - - def __init__(self, - max_rotate_degree=10.0, - max_translate_ratio=0.1, - scaling_ratio_range=(0.5, 1.5), - max_shear_degree=2.0, - border=(0, 0), - border_val=(114, 114, 114), - min_bbox_size=2, - min_area_ratio=0.2, - max_aspect_ratio=20, - bbox_clip_border=True, - skip_filter=True): - assert 0 <= max_translate_ratio <= 1 - assert scaling_ratio_range[0] <= scaling_ratio_range[1] - assert scaling_ratio_range[0] > 0 - self.max_rotate_degree = max_rotate_degree - self.max_translate_ratio = max_translate_ratio - self.scaling_ratio_range = scaling_ratio_range - self.max_shear_degree = max_shear_degree - self.border = border - self.border_val = border_val - self.min_bbox_size = min_bbox_size - self.min_area_ratio = min_area_ratio - self.max_aspect_ratio = max_aspect_ratio - self.bbox_clip_border = bbox_clip_border - self.skip_filter = skip_filter - - def __call__(self, results): - img = results['img'] - height = img.shape[0] + self.border[0] * 2 - width = img.shape[1] + self.border[1] * 2 - - # Rotation - rotation_degree = random.uniform(-self.max_rotate_degree, - self.max_rotate_degree) - rotation_matrix = self._get_rotation_matrix(rotation_degree) - - # Scaling - scaling_ratio = random.uniform(self.scaling_ratio_range[0], - self.scaling_ratio_range[1]) - scaling_matrix = self._get_scaling_matrix(scaling_ratio) - - # Shear - x_degree = random.uniform(-self.max_shear_degree, - self.max_shear_degree) - y_degree = random.uniform(-self.max_shear_degree, - self.max_shear_degree) - shear_matrix = self._get_shear_matrix(x_degree, y_degree) - - # Translation - trans_x = random.uniform(-self.max_translate_ratio, - self.max_translate_ratio) * width - trans_y = random.uniform(-self.max_translate_ratio, - self.max_translate_ratio) * height - translate_matrix = self._get_translation_matrix(trans_x, trans_y) - - warp_matrix = ( - translate_matrix @ shear_matrix @ rotation_matrix @ scaling_matrix) - - img = cv2.warpPerspective( - img, - warp_matrix, - dsize=(width, height), - borderValue=self.border_val) - results['img'] = img - results['img_shape'] = img.shape - - for key in results.get('bbox_fields', []): - bboxes = results[key] - num_bboxes = len(bboxes) - if num_bboxes: - # homogeneous coordinates - xs = bboxes[:, [0, 0, 2, 2]].reshape(num_bboxes * 4) - ys = bboxes[:, [1, 3, 3, 1]].reshape(num_bboxes * 4) - ones = np.ones_like(xs) - points = np.vstack([xs, ys, ones]) - - warp_points = warp_matrix @ points - warp_points = warp_points[:2] / warp_points[2] - xs = warp_points[0].reshape(num_bboxes, 4) - ys = warp_points[1].reshape(num_bboxes, 4) - - warp_bboxes = np.vstack( - (xs.min(1), ys.min(1), xs.max(1), ys.max(1))).T - - if self.bbox_clip_border: - warp_bboxes[:, [0, 2]] = \ - warp_bboxes[:, [0, 2]].clip(0, width) - warp_bboxes[:, [1, 3]] = \ - warp_bboxes[:, [1, 3]].clip(0, height) - - # remove outside bbox - valid_index = find_inside_bboxes(warp_bboxes, height, width) - if not self.skip_filter: - # filter bboxes - filter_index = self.filter_gt_bboxes( - bboxes * scaling_ratio, warp_bboxes) - valid_index = valid_index & filter_index - - results[key] = warp_bboxes[valid_index] - if key in ['gt_bboxes']: - if 
'gt_labels' in results: - results['gt_labels'] = results['gt_labels'][ - valid_index] - - if 'gt_masks' in results: - raise NotImplementedError( - 'RandomAffine only supports bbox.') - return results - - def filter_gt_bboxes(self, origin_bboxes, wrapped_bboxes): - origin_w = origin_bboxes[:, 2] - origin_bboxes[:, 0] - origin_h = origin_bboxes[:, 3] - origin_bboxes[:, 1] - wrapped_w = wrapped_bboxes[:, 2] - wrapped_bboxes[:, 0] - wrapped_h = wrapped_bboxes[:, 3] - wrapped_bboxes[:, 1] - aspect_ratio = np.maximum(wrapped_w / (wrapped_h + 1e-16), - wrapped_h / (wrapped_w + 1e-16)) - - wh_valid_idx = (wrapped_w > self.min_bbox_size) & \ - (wrapped_h > self.min_bbox_size) - area_valid_idx = wrapped_w * wrapped_h / (origin_w * origin_h + - 1e-16) > self.min_area_ratio - aspect_ratio_valid_idx = aspect_ratio < self.max_aspect_ratio - return wh_valid_idx & area_valid_idx & aspect_ratio_valid_idx - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(max_rotate_degree={self.max_rotate_degree}, ' - repr_str += f'max_translate_ratio={self.max_translate_ratio}, ' - repr_str += f'scaling_ratio={self.scaling_ratio_range}, ' - repr_str += f'max_shear_degree={self.max_shear_degree}, ' - repr_str += f'border={self.border}, ' - repr_str += f'border_val={self.border_val}, ' - repr_str += f'min_bbox_size={self.min_bbox_size}, ' - repr_str += f'min_area_ratio={self.min_area_ratio}, ' - repr_str += f'max_aspect_ratio={self.max_aspect_ratio}, ' - repr_str += f'skip_filter={self.skip_filter})' - return repr_str - - @staticmethod - def _get_rotation_matrix(rotate_degrees): - radian = math.radians(rotate_degrees) - rotation_matrix = np.array( - [[np.cos(radian), -np.sin(radian), 0.], - [np.sin(radian), np.cos(radian), 0.], [0., 0., 1.]], - dtype=np.float32) - return rotation_matrix - - @staticmethod - def _get_scaling_matrix(scale_ratio): - scaling_matrix = np.array( - [[scale_ratio, 0., 0.], [0., scale_ratio, 0.], [0., 0., 1.]], - dtype=np.float32) - return scaling_matrix - - @staticmethod - def _get_share_matrix(scale_ratio): - scaling_matrix = np.array( - [[scale_ratio, 0., 0.], [0., scale_ratio, 0.], [0., 0., 1.]], - dtype=np.float32) - return scaling_matrix - - @staticmethod - def _get_shear_matrix(x_shear_degrees, y_shear_degrees): - x_radian = math.radians(x_shear_degrees) - y_radian = math.radians(y_shear_degrees) - shear_matrix = np.array([[1, np.tan(x_radian), 0.], - [np.tan(y_radian), 1, 0.], [0., 0., 1.]], - dtype=np.float32) - return shear_matrix - - @staticmethod - def _get_translation_matrix(x, y): - translation_matrix = np.array([[1, 0., x], [0., 1, y], [0., 0., 1.]], - dtype=np.float32) - return translation_matrix - - -@PIPELINES.register_module() -class YOLOXHSVRandomAug: - """Apply HSV augmentation to image sequentially. It is referenced from - https://github.com/Megvii- - BaseDetection/YOLOX/blob/main/yolox/data/data_augment.py#L21. - - Args: - hue_delta (int): delta of hue. Default: 5. - saturation_delta (int): delta of saturation. Default: 30. - value_delta (int): delat of value. Default: 30. 
- """ - - def __init__(self, hue_delta=5, saturation_delta=30, value_delta=30): - self.hue_delta = hue_delta - self.saturation_delta = saturation_delta - self.value_delta = value_delta - - def __call__(self, results): - img = results['img'] - hsv_gains = np.random.uniform(-1, 1, 3) * [ - self.hue_delta, self.saturation_delta, self.value_delta - ] - # random selection of h, s, v - hsv_gains *= np.random.randint(0, 2, 3) - # prevent overflow - hsv_gains = hsv_gains.astype(np.int16) - img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.int16) - - img_hsv[..., 0] = (img_hsv[..., 0] + hsv_gains[0]) % 180 - img_hsv[..., 1] = np.clip(img_hsv[..., 1] + hsv_gains[1], 0, 255) - img_hsv[..., 2] = np.clip(img_hsv[..., 2] + hsv_gains[2], 0, 255) - cv2.cvtColor(img_hsv.astype(img.dtype), cv2.COLOR_HSV2BGR, dst=img) - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(hue_delta={self.hue_delta}, ' - repr_str += f'saturation_delta={self.saturation_delta}, ' - repr_str += f'value_delta={self.value_delta})' - return repr_str - - -@PIPELINES.register_module() -class CopyPaste: - """Simple Copy-Paste is a Strong Data Augmentation Method for Instance - Segmentation The simple copy-paste transform steps are as follows: - - 1. The destination image is already resized with aspect ratio kept, - cropped and padded. - 2. Randomly select a source image, which is also already resized - with aspect ratio kept, cropped and padded in a similar way - as the destination image. - 3. Randomly select some objects from the source image. - 4. Paste these source objects to the destination image directly, - due to the source and destination image have the same size. - 5. Update object masks of the destination image, for some origin objects - may be occluded. - 6. Generate bboxes from the updated destination masks and - filter some objects which are totally occluded, and adjust bboxes - which are partly occluded. - 7. Append selected source bboxes, masks, and labels. - - Args: - max_num_pasted (int): The maximum number of pasted objects. - Default: 100. - bbox_occluded_thr (int): The threshold of occluded bbox. - Default: 10. - mask_occluded_thr (int): The threshold of occluded mask. - Default: 300. - selected (bool): Whether select objects or not. If select is False, - all objects of the source image will be pasted to the - destination image. - Default: True. - """ - - def __init__( - self, - max_num_pasted=100, - bbox_occluded_thr=10, - mask_occluded_thr=300, - selected=True, - ): - self.max_num_pasted = max_num_pasted - self.bbox_occluded_thr = bbox_occluded_thr - self.mask_occluded_thr = mask_occluded_thr - self.selected = selected - - def get_indexes(self, dataset): - """Call function to collect indexes.s. - - Args: - dataset (:obj:`MultiImageMixDataset`): The dataset. - Returns: - list: Indexes. - """ - return random.randint(0, len(dataset)) - - def __call__(self, results): - """Call function to make a copy-paste of image. - - Args: - results (dict): Result dict. - Returns: - dict: Result dict with copy-paste transformed. 
- """ - - assert 'mix_results' in results - num_images = len(results['mix_results']) - assert num_images == 1, \ - f'CopyPaste only supports processing 2 images, got {num_images}' - if self.selected: - selected_results = self._select_object(results['mix_results'][0]) - else: - selected_results = results['mix_results'][0] - return self._copy_paste(results, selected_results) - - def _select_object(self, results): - """Select some objects from the source results.""" - bboxes = results['gt_bboxes'] - labels = results['gt_labels'] - masks = results['gt_masks'] - max_num_pasted = min(bboxes.shape[0] + 1, self.max_num_pasted) - num_pasted = np.random.randint(0, max_num_pasted) - selected_inds = np.random.choice( - bboxes.shape[0], size=num_pasted, replace=False) - - selected_bboxes = bboxes[selected_inds] - selected_labels = labels[selected_inds] - selected_masks = masks[selected_inds] - - results['gt_bboxes'] = selected_bboxes - results['gt_labels'] = selected_labels - results['gt_masks'] = selected_masks - return results - - def _copy_paste(self, dst_results, src_results): - """CopyPaste transform function. - - Args: - dst_results (dict): Result dict of the destination image. - src_results (dict): Result dict of the source image. - Returns: - dict: Updated result dict. - """ - dst_img = dst_results['img'] - dst_bboxes = dst_results['gt_bboxes'] - dst_labels = dst_results['gt_labels'] - dst_masks = dst_results['gt_masks'] - - src_img = src_results['img'] - src_bboxes = src_results['gt_bboxes'] - src_labels = src_results['gt_labels'] - src_masks = src_results['gt_masks'] - - if len(src_bboxes) == 0: - return dst_results - - # update masks and generate bboxes from updated masks - composed_mask = np.where(np.any(src_masks.masks, axis=0), 1, 0) - updated_dst_masks = self.get_updated_masks(dst_masks, composed_mask) - updated_dst_bboxes = updated_dst_masks.get_bboxes() - assert len(updated_dst_bboxes) == len(updated_dst_masks) - - # filter totally occluded objects - bboxes_inds = np.all( - np.abs( - (updated_dst_bboxes - dst_bboxes)) <= self.bbox_occluded_thr, - axis=-1) - masks_inds = updated_dst_masks.masks.sum( - axis=(1, 2)) > self.mask_occluded_thr - valid_inds = bboxes_inds | masks_inds - - # Paste source objects to destination image directly - img = dst_img * (1 - composed_mask[..., np.newaxis] - ) + src_img * composed_mask[..., np.newaxis] - bboxes = np.concatenate([updated_dst_bboxes[valid_inds], src_bboxes]) - labels = np.concatenate([dst_labels[valid_inds], src_labels]) - masks = np.concatenate( - [updated_dst_masks.masks[valid_inds], src_masks.masks]) - - dst_results['img'] = img - dst_results['gt_bboxes'] = bboxes - dst_results['gt_labels'] = labels - dst_results['gt_masks'] = BitmapMasks(masks, masks.shape[1], - masks.shape[2]) - - return dst_results - - def get_updated_masks(self, masks, composed_mask): - assert masks.masks.shape[-2:] == composed_mask.shape[-2:], \ - 'Cannot compare two arrays of different size' - masks.masks = np.where(composed_mask, 0, masks.masks) - return masks - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'max_num_pasted={self.max_num_pasted}, ' - repr_str += f'bbox_occluded_thr={self.bbox_occluded_thr}, ' - repr_str += f'mask_occluded_thr={self.mask_occluded_thr}, ' - repr_str += f'selected={self.selected}, ' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/__init__.py deleted file mode 100644 index a4c7ea13..00000000 --- 
a/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .class_aware_sampler import ClassAwareSampler -from .distributed_sampler import DistributedSampler -from .group_sampler import DistributedGroupSampler, GroupSampler -from .infinite_sampler import InfiniteBatchSampler, InfiniteGroupBatchSampler - -__all__ = [ - 'DistributedSampler', 'DistributedGroupSampler', 'GroupSampler', - 'InfiniteGroupBatchSampler', 'InfiniteBatchSampler', 'ClassAwareSampler' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/class_aware_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/class_aware_sampler.py deleted file mode 100644 index c52708eb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/class_aware_sampler.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -from mmcv.runner import get_dist_info -from torch.utils.data import Sampler - -from mmdet.core.utils import sync_random_seed - - -class ClassAwareSampler(Sampler): - r"""Sampler that restricts data loading to the label of the dataset. - - A class-aware sampling strategy to effectively tackle the - non-uniform class distribution. The length of the training data is - consistent with source data. Simple improvements based on `Relay - Backpropagation for Effective Learning of Deep Convolutional - Neural Networks `_ - - The implementation logic is referred to - https://github.com/Sense-X/TSD/blob/master/mmdet/datasets/samplers/distributed_classaware_sampler.py - - Args: - dataset: Dataset used for sampling. - samples_per_gpu (int): When model is :obj:`DistributedDataParallel`, - it is the number of training samples on each GPU. - When model is :obj:`DataParallel`, it is - `num_gpus * samples_per_gpu`. - Default : 1. - num_replicas (optional): Number of processes participating in - distributed training. - rank (optional): Rank of the current process within num_replicas. - seed (int, optional): random seed used to shuffle the sampler if - ``shuffle=True``. This number should be identical across all - processes in the distributed group. Default: 0. - num_sample_class (int): The number of samples taken from each - per-label list. Default: 1 - """ - - def __init__(self, - dataset, - samples_per_gpu=1, - num_replicas=None, - rank=None, - seed=0, - num_sample_class=1): - _rank, _num_replicas = get_dist_info() - if num_replicas is None: - num_replicas = _num_replicas - if rank is None: - rank = _rank - - self.dataset = dataset - self.num_replicas = num_replicas - self.samples_per_gpu = samples_per_gpu - self.rank = rank - self.epoch = 0 - # Must be the same across all workers. 
If None, will use a - # random seed shared among workers - # (require synchronization among all workers) - self.seed = sync_random_seed(seed) - - # The number of samples taken from each per-label list - assert num_sample_class > 0 and isinstance(num_sample_class, int) - self.num_sample_class = num_sample_class - # Get per-label image list from dataset - assert hasattr(dataset, 'get_cat2imgs'), \ - 'dataset must have `get_cat2imgs` function' - self.cat_dict = dataset.get_cat2imgs() - - self.num_samples = int( - math.ceil( - len(self.dataset) * 1.0 / self.num_replicas / - self.samples_per_gpu)) * self.samples_per_gpu - self.total_size = self.num_samples * self.num_replicas - - # get number of images containing each category - self.num_cat_imgs = [len(x) for x in self.cat_dict.values()] - # filter labels without images - self.valid_cat_inds = [ - i for i, length in enumerate(self.num_cat_imgs) if length != 0 - ] - self.num_classes = len(self.valid_cat_inds) - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch + self.seed) - - # initialize label list - label_iter_list = RandomCycleIter(self.valid_cat_inds, generator=g) - # initialize each per-label image list - data_iter_dict = dict() - for i in self.valid_cat_inds: - data_iter_dict[i] = RandomCycleIter(self.cat_dict[i], generator=g) - - def gen_cat_img_inds(cls_list, data_dict, num_sample_cls): - """Traverse the categories and extract `num_sample_cls` image - indexes of the corresponding categories one by one.""" - id_indices = [] - for _ in range(len(cls_list)): - cls_idx = next(cls_list) - for _ in range(num_sample_cls): - id = next(data_dict[cls_idx]) - id_indices.append(id) - return id_indices - - # deterministically shuffle based on epoch - num_bins = int( - math.ceil(self.total_size * 1.0 / self.num_classes / - self.num_sample_class)) - indices = [] - for i in range(num_bins): - indices += gen_cat_img_inds(label_iter_list, data_iter_dict, - self.num_sample_class) - - # fix extra samples to make it evenly divisible - if len(indices) >= self.total_size: - indices = indices[:self.total_size] - else: - indices += indices[:(self.total_size - len(indices))] - assert len(indices) == self.total_size - - # subsample - offset = self.num_samples * self.rank - indices = indices[offset:offset + self.num_samples] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch - - -class RandomCycleIter: - """Shuffle the list and do it again after the list have traversed. - - The implementation logic is referred to - https://github.com/wutong16/DistributionBalancedLoss/blob/master/mllt/datasets/loader/sampler.py - - Example: - >>> label_list = [0, 1, 2, 4, 5] - >>> g = torch.Generator() - >>> g.manual_seed(0) - >>> label_iter_list = RandomCycleIter(label_list, generator=g) - >>> index = next(label_iter_list) - Args: - data (list or ndarray): The data that needs to be shuffled. - generator: An torch.Generator object, which is used in setting the seed - for generating random numbers. 
- """ # noqa: W605 - - def __init__(self, data, generator=None): - self.data = data - self.length = len(data) - self.index = torch.randperm(self.length, generator=generator).numpy() - self.i = 0 - self.generator = generator - - def __iter__(self): - return self - - def __len__(self): - return len(self.data) - - def __next__(self): - if self.i == self.length: - self.index = torch.randperm( - self.length, generator=self.generator).numpy() - self.i = 0 - idx = self.data[self.index[self.i]] - self.i += 1 - return idx diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/distributed_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/distributed_sampler.py deleted file mode 100644 index 1bc8b7c3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/distributed_sampler.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -from torch.utils.data import DistributedSampler as _DistributedSampler - -from mmdet.core.utils import sync_random_seed -from mmdet.utils import get_device - - -class DistributedSampler(_DistributedSampler): - - def __init__(self, - dataset, - num_replicas=None, - rank=None, - shuffle=True, - seed=0): - super().__init__( - dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. - device = get_device() - self.seed = sync_random_seed(seed, device) - - def __iter__(self): - # deterministically shuffle based on epoch - if self.shuffle: - g = torch.Generator() - # When :attr:`shuffle=True`, this ensures all replicas - # use a different random ordering for each epoch. - # Otherwise, the next iteration of this sampler will - # yield the same ordering. - g.manual_seed(self.epoch + self.seed) - indices = torch.randperm(len(self.dataset), generator=g).tolist() - else: - indices = torch.arange(len(self.dataset)).tolist() - - # add extra samples to make it evenly divisible - # in case that indices is shorter than half of total_size - indices = (indices * - math.ceil(self.total_size / len(indices)))[:self.total_size] - assert len(indices) == self.total_size - - # subsample - indices = indices[self.rank:self.total_size:self.num_replicas] - assert len(indices) == self.num_samples - - return iter(indices) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/group_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/group_sampler.py deleted file mode 100644 index 783d2b21..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/group_sampler.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math - -import numpy as np -import torch -from mmcv.runner import get_dist_info -from torch.utils.data import Sampler - - -class GroupSampler(Sampler): - - def __init__(self, dataset, samples_per_gpu=1): - assert hasattr(dataset, 'flag') - self.dataset = dataset - self.samples_per_gpu = samples_per_gpu - self.flag = dataset.flag.astype(np.int64) - self.group_sizes = np.bincount(self.flag) - self.num_samples = 0 - for i, size in enumerate(self.group_sizes): - self.num_samples += int(np.ceil( - size / self.samples_per_gpu)) * self.samples_per_gpu - - def __iter__(self): - indices = [] - for i, size in enumerate(self.group_sizes): - if size == 0: - continue - indice = np.where(self.flag == i)[0] - assert len(indice) == size - np.random.shuffle(indice) - num_extra = int(np.ceil(size / self.samples_per_gpu) - ) * self.samples_per_gpu - len(indice) - indice = np.concatenate( - [indice, np.random.choice(indice, num_extra)]) - indices.append(indice) - indices = np.concatenate(indices) - indices = [ - indices[i * self.samples_per_gpu:(i + 1) * self.samples_per_gpu] - for i in np.random.permutation( - range(len(indices) // self.samples_per_gpu)) - ] - indices = np.concatenate(indices) - indices = indices.astype(np.int64).tolist() - assert len(indices) == self.num_samples - return iter(indices) - - def __len__(self): - return self.num_samples - - -class DistributedGroupSampler(Sampler): - """Sampler that restricts data loading to a subset of the dataset. - - It is especially useful in conjunction with - :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each - process can pass a DistributedSampler instance as a DataLoader sampler, - and load a subset of the original dataset that is exclusive to it. - - .. note:: - Dataset is assumed to be of constant size. - - Arguments: - dataset: Dataset used for sampling. - num_replicas (optional): Number of processes participating in - distributed training. - rank (optional): Rank of the current process within num_replicas. - seed (int, optional): random seed used to shuffle the sampler if - ``shuffle=True``. This number should be identical across all - processes in the distributed group. Default: 0. - """ - - def __init__(self, - dataset, - samples_per_gpu=1, - num_replicas=None, - rank=None, - seed=0): - _rank, _num_replicas = get_dist_info() - if num_replicas is None: - num_replicas = _num_replicas - if rank is None: - rank = _rank - self.dataset = dataset - self.samples_per_gpu = samples_per_gpu - self.num_replicas = num_replicas - self.rank = rank - self.epoch = 0 - self.seed = seed if seed is not None else 0 - - assert hasattr(self.dataset, 'flag') - self.flag = self.dataset.flag - self.group_sizes = np.bincount(self.flag) - - self.num_samples = 0 - for i, j in enumerate(self.group_sizes): - self.num_samples += int( - math.ceil(self.group_sizes[i] * 1.0 / self.samples_per_gpu / - self.num_replicas)) * self.samples_per_gpu - self.total_size = self.num_samples * self.num_replicas - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch + self.seed) - - indices = [] - for i, size in enumerate(self.group_sizes): - if size > 0: - indice = np.where(self.flag == i)[0] - assert len(indice) == size - # add .numpy() to avoid bug when selecting indice in parrots. - # TODO: check whether torch.randperm() can be replaced by - # numpy.random.permutation(). 
- indice = indice[list( - torch.randperm(int(size), generator=g).numpy())].tolist() - extra = int( - math.ceil( - size * 1.0 / self.samples_per_gpu / self.num_replicas) - ) * self.samples_per_gpu * self.num_replicas - len(indice) - # pad indice - tmp = indice.copy() - for _ in range(extra // size): - indice.extend(tmp) - indice.extend(tmp[:extra % size]) - indices.extend(indice) - - assert len(indices) == self.total_size - - indices = [ - indices[j] for i in list( - torch.randperm( - len(indices) // self.samples_per_gpu, generator=g)) - for j in range(i * self.samples_per_gpu, (i + 1) * - self.samples_per_gpu) - ] - - # subsample - offset = self.num_samples * self.rank - indices = indices[offset:offset + self.num_samples] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/infinite_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/infinite_sampler.py deleted file mode 100644 index d42487e6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/samplers/infinite_sampler.py +++ /dev/null @@ -1,186 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import itertools - -import numpy as np -import torch -from mmcv.runner import get_dist_info -from torch.utils.data.sampler import Sampler - -from mmdet.core.utils import sync_random_seed - - -class InfiniteGroupBatchSampler(Sampler): - """Similar to `BatchSampler` warping a `GroupSampler. It is designed for - iteration-based runners like `IterBasedRunner` and yields a mini-batch - indices each time, all indices in a batch should be in the same group. - - The implementation logic is referred to - https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/samplers/grouped_batch_sampler.py - - Args: - dataset (object): The dataset. - batch_size (int): When model is :obj:`DistributedDataParallel`, - it is the number of training samples on each GPU. - When model is :obj:`DataParallel`, it is - `num_gpus * samples_per_gpu`. - Default : 1. - world_size (int, optional): Number of processes participating in - distributed training. Default: None. - rank (int, optional): Rank of current process. Default: None. - seed (int): Random seed. Default: 0. - shuffle (bool): Whether shuffle the indices of a dummy `epoch`, it - should be noted that `shuffle` can not guarantee that you can - generate sequential indices because it need to ensure - that all indices in a batch is in a group. Default: True. - """ # noqa: W605 - - def __init__(self, - dataset, - batch_size=1, - world_size=None, - rank=None, - seed=0, - shuffle=True): - _rank, _world_size = get_dist_info() - if world_size is None: - world_size = _world_size - if rank is None: - rank = _rank - self.rank = rank - self.world_size = world_size - self.dataset = dataset - self.batch_size = batch_size - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. 
- self.seed = sync_random_seed(seed) - self.shuffle = shuffle - - assert hasattr(self.dataset, 'flag') - self.flag = self.dataset.flag - self.group_sizes = np.bincount(self.flag) - # buffer used to save indices of each group - self.buffer_per_group = {k: [] for k in range(len(self.group_sizes))} - - self.size = len(dataset) - self.indices = self._indices_of_rank() - - def _infinite_indices(self): - """Infinitely yield a sequence of indices.""" - g = torch.Generator() - g.manual_seed(self.seed) - while True: - if self.shuffle: - yield from torch.randperm(self.size, generator=g).tolist() - - else: - yield from torch.arange(self.size).tolist() - - def _indices_of_rank(self): - """Slice the infinite indices by rank.""" - yield from itertools.islice(self._infinite_indices(), self.rank, None, - self.world_size) - - def __iter__(self): - # once batch size is reached, yield the indices - for idx in self.indices: - flag = self.flag[idx] - group_buffer = self.buffer_per_group[flag] - group_buffer.append(idx) - if len(group_buffer) == self.batch_size: - yield group_buffer[:] - del group_buffer[:] - - def __len__(self): - """Length of base dataset.""" - return self.size - - def set_epoch(self, epoch): - """Not supported in `IterationBased` runner.""" - raise NotImplementedError - - -class InfiniteBatchSampler(Sampler): - """Similar to `BatchSampler` warping a `DistributedSampler. It is designed - iteration-based runners like `IterBasedRunner` and yields a mini-batch - indices each time. - - The implementation logic is referred to - https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/samplers/grouped_batch_sampler.py - - Args: - dataset (object): The dataset. - batch_size (int): When model is :obj:`DistributedDataParallel`, - it is the number of training samples on each GPU, - When model is :obj:`DataParallel`, it is - `num_gpus * samples_per_gpu`. - Default : 1. - world_size (int, optional): Number of processes participating in - distributed training. Default: None. - rank (int, optional): Rank of current process. Default: None. - seed (int): Random seed. Default: 0. - shuffle (bool): Whether shuffle the dataset or not. Default: True. - """ # noqa: W605 - - def __init__(self, - dataset, - batch_size=1, - world_size=None, - rank=None, - seed=0, - shuffle=True): - _rank, _world_size = get_dist_info() - if world_size is None: - world_size = _world_size - if rank is None: - rank = _rank - self.rank = rank - self.world_size = world_size - self.dataset = dataset - self.batch_size = batch_size - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. 
- self.seed = sync_random_seed(seed) - self.shuffle = shuffle - self.size = len(dataset) - self.indices = self._indices_of_rank() - - def _infinite_indices(self): - """Infinitely yield a sequence of indices.""" - g = torch.Generator() - g.manual_seed(self.seed) - while True: - if self.shuffle: - yield from torch.randperm(self.size, generator=g).tolist() - - else: - yield from torch.arange(self.size).tolist() - - def _indices_of_rank(self): - """Slice the infinite indices by rank.""" - yield from itertools.islice(self._infinite_indices(), self.rank, None, - self.world_size) - - def __iter__(self): - # once batch size is reached, yield the indices - batch_buffer = [] - for idx in self.indices: - batch_buffer.append(idx) - if len(batch_buffer) == self.batch_size: - yield batch_buffer - batch_buffer = [] - - def __len__(self): - """Length of base dataset.""" - return self.size - - def set_epoch(self, epoch): - """Not supported in `IterationBased` runner.""" - raise NotImplementedError diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/utils.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/utils.py deleted file mode 100644 index 26e922d2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/utils.py +++ /dev/null @@ -1,166 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -from mmcv.cnn import VGG -from mmcv.runner.hooks import HOOKS, Hook - -from mmdet.datasets.builder import PIPELINES -from mmdet.datasets.pipelines import (LoadAnnotations, LoadImageFromFile, - LoadPanopticAnnotations) -from mmdet.models.dense_heads import GARPNHead, RPNHead -from mmdet.models.roi_heads.mask_heads import FusedSemanticHead - - -def replace_ImageToTensor(pipelines): - """Replace the ImageToTensor transform in a data pipeline to - DefaultFormatBundle, which is normally useful in batch inference. - - Args: - pipelines (list[dict]): Data pipeline configs. - - Returns: - list: The new pipeline list with all ImageToTensor replaced by - DefaultFormatBundle. - - Examples: - >>> pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict( - ... type='MultiScaleFlipAug', - ... img_scale=(1333, 800), - ... flip=False, - ... transforms=[ - ... dict(type='Resize', keep_ratio=True), - ... dict(type='RandomFlip'), - ... dict(type='Normalize', mean=[0, 0, 0], std=[1, 1, 1]), - ... dict(type='Pad', size_divisor=32), - ... dict(type='ImageToTensor', keys=['img']), - ... dict(type='Collect', keys=['img']), - ... ]) - ... ] - >>> expected_pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict( - ... type='MultiScaleFlipAug', - ... img_scale=(1333, 800), - ... flip=False, - ... transforms=[ - ... dict(type='Resize', keep_ratio=True), - ... dict(type='RandomFlip'), - ... dict(type='Normalize', mean=[0, 0, 0], std=[1, 1, 1]), - ... dict(type='Pad', size_divisor=32), - ... dict(type='DefaultFormatBundle'), - ... dict(type='Collect', keys=['img']), - ... ]) - ... ] - >>> assert expected_pipelines == replace_ImageToTensor(pipelines) - """ - pipelines = copy.deepcopy(pipelines) - for i, pipeline in enumerate(pipelines): - if pipeline['type'] == 'MultiScaleFlipAug': - assert 'transforms' in pipeline - pipeline['transforms'] = replace_ImageToTensor( - pipeline['transforms']) - elif pipeline['type'] == 'ImageToTensor': - warnings.warn( - '"ImageToTensor" pipeline is replaced by ' - '"DefaultFormatBundle" for batch inference. 
It is ' - 'recommended to manually replace it in the test ' - 'data pipeline in your config file.', UserWarning) - pipelines[i] = {'type': 'DefaultFormatBundle'} - return pipelines - - -def get_loading_pipeline(pipeline): - """Only keep loading image and annotations related configuration. - - Args: - pipeline (list[dict]): Data pipeline configs. - - Returns: - list[dict]: The new pipeline list with only keep - loading image and annotations related configuration. - - Examples: - >>> pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations', with_bbox=True), - ... dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - ... dict(type='RandomFlip', flip_ratio=0.5), - ... dict(type='Normalize', **img_norm_cfg), - ... dict(type='Pad', size_divisor=32), - ... dict(type='DefaultFormatBundle'), - ... dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) - ... ] - >>> expected_pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations', with_bbox=True) - ... ] - >>> assert expected_pipelines ==\ - ... get_loading_pipeline(pipelines) - """ - loading_pipeline_cfg = [] - for cfg in pipeline: - obj_cls = PIPELINES.get(cfg['type']) - # TODO:use more elegant way to distinguish loading modules - if obj_cls is not None and obj_cls in (LoadImageFromFile, - LoadAnnotations, - LoadPanopticAnnotations): - loading_pipeline_cfg.append(cfg) - assert len(loading_pipeline_cfg) == 2, \ - 'The data pipeline in your config file must include ' \ - 'loading image and annotations related pipeline.' - return loading_pipeline_cfg - - -@HOOKS.register_module() -class NumClassCheckHook(Hook): - - def _check_head(self, runner): - """Check whether the `num_classes` in head matches the length of - `CLASSES` in `dataset`. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - model = runner.model - dataset = runner.data_loader.dataset - if dataset.CLASSES is None: - runner.logger.warning( - f'Please set `CLASSES` ' - f'in the {dataset.__class__.__name__} and' - f'check if it is consistent with the `num_classes` ' - f'of head') - else: - assert type(dataset.CLASSES) is not str, \ - (f'`CLASSES` in {dataset.__class__.__name__}' - f'should be a tuple of str.' - f'Add comma if number of classes is 1 as ' - f'CLASSES = ({dataset.CLASSES},)') - for name, module in model.named_modules(): - if hasattr(module, 'num_classes') and not isinstance( - module, (RPNHead, VGG, FusedSemanticHead, GARPNHead)): - assert module.num_classes == len(dataset.CLASSES), \ - (f'The `num_classes` ({module.num_classes}) in ' - f'{module.__class__.__name__} of ' - f'{model.__class__.__name__} does not matches ' - f'the length of `CLASSES` ' - f'{len(dataset.CLASSES)}) in ' - f'{dataset.__class__.__name__}') - - def before_train_epoch(self, runner): - """Check whether the training dataset is compatible with head. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - self._check_head(runner) - - def before_val_epoch(self, runner): - """Check whether the dataset in val epoch is compatible with head. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - self._check_head(runner) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/voc.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/voc.py deleted file mode 100644 index 0a3ea7aa..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/voc.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections import OrderedDict - -from mmcv.utils import print_log - -from mmdet.core import eval_map, eval_recalls -from .builder import DATASETS -from .xml_style import XMLDataset - - -@DATASETS.register_module() -class VOCDataset(XMLDataset): - - CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', - 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', - 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', - 'tvmonitor') - - PALETTE = [(106, 0, 228), (119, 11, 32), (165, 42, 42), (0, 0, 192), - (197, 226, 255), (0, 60, 100), (0, 0, 142), (255, 77, 255), - (153, 69, 1), (120, 166, 157), (0, 182, 199), (0, 226, 252), - (182, 182, 255), (0, 0, 230), (220, 20, 60), (163, 255, 0), - (0, 82, 0), (3, 95, 161), (0, 80, 100), (183, 130, 88)] - - def __init__(self, **kwargs): - super(VOCDataset, self).__init__(**kwargs) - if 'VOC2007' in self.img_prefix: - self.year = 2007 - elif 'VOC2012' in self.img_prefix: - self.year = 2012 - else: - raise ValueError('Cannot infer dataset year from img_prefix') - - def evaluate(self, - results, - metric='mAP', - logger=None, - proposal_nums=(100, 300, 1000), - iou_thr=0.5, - scale_ranges=None): - """Evaluate in VOC protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'mAP', 'recall'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thr (float | list[float]): IoU threshold. Default: 0.5. - scale_ranges (list[tuple], optional): Scale ranges for evaluating - mAP. If not specified, all bounding boxes would be included in - evaluation. Default: None. - - Returns: - dict[str, float]: AP/recall metrics. 
- """ - - if not isinstance(metric, str): - assert len(metric) == 1 - metric = metric[0] - allowed_metrics = ['mAP', 'recall'] - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - annotations = [self.get_ann_info(i) for i in range(len(self))] - eval_results = OrderedDict() - iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr - if metric == 'mAP': - assert isinstance(iou_thrs, list) - if self.year == 2007: - ds_name = 'voc07' - else: - ds_name = self.CLASSES - mean_aps = [] - for iou_thr in iou_thrs: - print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}') - # Follow the official implementation, - # http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar - # we should use the legacy coordinate system in mmdet 1.x, - # which means w, h should be computed as 'x2 - x1 + 1` and - # `y2 - y1 + 1` - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=None, - iou_thr=iou_thr, - dataset=ds_name, - logger=logger, - use_legacy_coordinate=True) - mean_aps.append(mean_ap) - eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3) - eval_results['mAP'] = sum(mean_aps) / len(mean_aps) - eval_results.move_to_end('mAP', last=False) - elif metric == 'recall': - gt_bboxes = [ann['bboxes'] for ann in annotations] - recalls = eval_recalls( - gt_bboxes, - results, - proposal_nums, - iou_thrs, - logger=logger, - use_legacy_coordinate=True) - for i, num in enumerate(proposal_nums): - for j, iou_thr in enumerate(iou_thrs): - eval_results[f'recall@{num}@{iou_thr}'] = recalls[i, j] - if recalls.shape[1] > 1: - ar = recalls.mean(axis=1) - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - return eval_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/wider_face.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/wider_face.py deleted file mode 100644 index 85a5fdc5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/wider_face.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import xml.etree.ElementTree as ET - -import mmcv - -from .builder import DATASETS -from .xml_style import XMLDataset - - -@DATASETS.register_module() -class WIDERFaceDataset(XMLDataset): - """Reader for the WIDER Face dataset in PASCAL VOC format. - - Conversion scripts can be found in - https://github.com/sovrasov/wider-face-pascal-voc-annotations - """ - CLASSES = ('face', ) - - PALETTE = [(0, 255, 0)] - - def __init__(self, **kwargs): - super(WIDERFaceDataset, self).__init__(**kwargs) - - def load_annotations(self, ann_file): - """Load annotation from WIDERFace XML style annotation file. - - Args: - ann_file (str): Path of XML file. - - Returns: - list[dict]: Annotation info from XML file. 
- """ - - data_infos = [] - img_ids = mmcv.list_from_file(ann_file) - for img_id in img_ids: - filename = f'{img_id}.jpg' - xml_path = osp.join(self.img_prefix, 'Annotations', - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - size = root.find('size') - width = int(size.find('width').text) - height = int(size.find('height').text) - folder = root.find('folder').text - data_infos.append( - dict( - id=img_id, - filename=osp.join(folder, filename), - width=width, - height=height)) - - return data_infos diff --git a/cv/3d_detection/paconv/pytorch/mmdet/datasets/xml_style.py b/cv/3d_detection/paconv/pytorch/mmdet/datasets/xml_style.py deleted file mode 100644 index 039d5d7d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/datasets/xml_style.py +++ /dev/null @@ -1,178 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import xml.etree.ElementTree as ET - -import mmcv -import numpy as np -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class XMLDataset(CustomDataset): - """XML dataset for detection. - - Args: - min_size (int | float, optional): The minimum size of bounding - boxes in the images. If the size of a bounding box is less than - ``min_size``, it would be add to ignored field. - img_subdir (str): Subdir where images are stored. Default: JPEGImages. - ann_subdir (str): Subdir where annotations are. Default: Annotations. - """ - - def __init__(self, - min_size=None, - img_subdir='JPEGImages', - ann_subdir='Annotations', - **kwargs): - assert self.CLASSES or kwargs.get( - 'classes', None), 'CLASSES in `XMLDataset` can not be None.' - self.img_subdir = img_subdir - self.ann_subdir = ann_subdir - super(XMLDataset, self).__init__(**kwargs) - self.cat2label = {cat: i for i, cat in enumerate(self.CLASSES)} - self.min_size = min_size - - def load_annotations(self, ann_file): - """Load annotation from XML style ann_file. - - Args: - ann_file (str): Path of XML file. - - Returns: - list[dict]: Annotation info from XML file. - """ - - data_infos = [] - img_ids = mmcv.list_from_file(ann_file) - for img_id in img_ids: - filename = osp.join(self.img_subdir, f'{img_id}.jpg') - xml_path = osp.join(self.img_prefix, self.ann_subdir, - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - size = root.find('size') - if size is not None: - width = int(size.find('width').text) - height = int(size.find('height').text) - else: - img_path = osp.join(self.img_prefix, filename) - img = Image.open(img_path) - width, height = img.size - data_infos.append( - dict(id=img_id, filename=filename, width=width, height=height)) - - return data_infos - - def _filter_imgs(self, min_size=32): - """Filter images too small or without annotation.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if min(img_info['width'], img_info['height']) < min_size: - continue - if self.filter_empty_gt: - img_id = img_info['id'] - xml_path = osp.join(self.img_prefix, self.ann_subdir, - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - for obj in root.findall('object'): - name = obj.find('name').text - if name in self.CLASSES: - valid_inds.append(i) - break - else: - valid_inds.append(i) - return valid_inds - - def get_ann_info(self, idx): - """Get annotation from XML file by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. 
- """ - - img_id = self.data_infos[idx]['id'] - xml_path = osp.join(self.img_prefix, self.ann_subdir, f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - bboxes = [] - labels = [] - bboxes_ignore = [] - labels_ignore = [] - for obj in root.findall('object'): - name = obj.find('name').text - if name not in self.CLASSES: - continue - label = self.cat2label[name] - difficult = obj.find('difficult') - difficult = 0 if difficult is None else int(difficult.text) - bnd_box = obj.find('bndbox') - # TODO: check whether it is necessary to use int - # Coordinates may be float type - bbox = [ - int(float(bnd_box.find('xmin').text)), - int(float(bnd_box.find('ymin').text)), - int(float(bnd_box.find('xmax').text)), - int(float(bnd_box.find('ymax').text)) - ] - ignore = False - if self.min_size: - assert not self.test_mode - w = bbox[2] - bbox[0] - h = bbox[3] - bbox[1] - if w < self.min_size or h < self.min_size: - ignore = True - if difficult or ignore: - bboxes_ignore.append(bbox) - labels_ignore.append(label) - else: - bboxes.append(bbox) - labels.append(label) - if not bboxes: - bboxes = np.zeros((0, 4)) - labels = np.zeros((0, )) - else: - bboxes = np.array(bboxes, ndmin=2) - 1 - labels = np.array(labels) - if not bboxes_ignore: - bboxes_ignore = np.zeros((0, 4)) - labels_ignore = np.zeros((0, )) - else: - bboxes_ignore = np.array(bboxes_ignore, ndmin=2) - 1 - labels_ignore = np.array(labels_ignore) - ann = dict( - bboxes=bboxes.astype(np.float32), - labels=labels.astype(np.int64), - bboxes_ignore=bboxes_ignore.astype(np.float32), - labels_ignore=labels_ignore.astype(np.int64)) - return ann - - def get_cat_ids(self, idx): - """Get category ids in XML file by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - cat_ids = [] - img_id = self.data_infos[idx]['id'] - xml_path = osp.join(self.img_prefix, self.ann_subdir, f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - for obj in root.findall('object'): - name = obj.find('name').text - if name not in self.CLASSES: - continue - label = self.cat2label[name] - cat_ids.append(label) - - return cat_ids diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/__init__.py deleted file mode 100644 index 12efb013..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .backbones import * # noqa: F401,F403 -from .builder import (BACKBONES, DETECTORS, HEADS, LOSSES, NECKS, - ROI_EXTRACTORS, SHARED_HEADS, build_backbone, - build_detector, build_head, build_loss, build_neck, - build_roi_extractor, build_shared_head) -from .dense_heads import * # noqa: F401,F403 -from .detectors import * # noqa: F401,F403 -from .losses import * # noqa: F401,F403 -from .necks import * # noqa: F401,F403 -from .plugins import * # noqa: F401,F403 -from .roi_heads import * # noqa: F401,F403 -from .seg_heads import * # noqa: F401,F403 - -__all__ = [ - 'BACKBONES', 'NECKS', 'ROI_EXTRACTORS', 'SHARED_HEADS', 'HEADS', 'LOSSES', - 'DETECTORS', 'build_backbone', 'build_neck', 'build_roi_extractor', - 'build_shared_head', 'build_head', 'build_loss', 'build_detector' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/__init__.py deleted file mode 100644 index 91b50d25..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .csp_darknet import CSPDarknet -from .darknet import Darknet -from .detectors_resnet import DetectoRS_ResNet -from .detectors_resnext import DetectoRS_ResNeXt -from .efficientnet import EfficientNet -from .hourglass import HourglassNet -from .hrnet import HRNet -from .mobilenet_v2 import MobileNetV2 -from .pvt import PyramidVisionTransformer, PyramidVisionTransformerV2 -from .regnet import RegNet -from .res2net import Res2Net -from .resnest import ResNeSt -from .resnet import ResNet, ResNetV1d -from .resnext import ResNeXt -from .ssd_vgg import SSDVGG -from .swin import SwinTransformer -from .trident_resnet import TridentResNet - -__all__ = [ - 'RegNet', 'ResNet', 'ResNetV1d', 'ResNeXt', 'SSDVGG', 'HRNet', - 'MobileNetV2', 'Res2Net', 'HourglassNet', 'DetectoRS_ResNet', - 'DetectoRS_ResNeXt', 'Darknet', 'ResNeSt', 'TridentResNet', 'CSPDarknet', - 'SwinTransformer', 'PyramidVisionTransformer', - 'PyramidVisionTransformerV2', 'EfficientNet' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/csp_darknet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/csp_darknet.py deleted file mode 100644 index 2bbf3968..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/csp_darknet.py +++ /dev/null @@ -1,284 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import CSPLayer - - -class Focus(nn.Module): - """Focus width and height information into channel space. - - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - kernel_size (int): The kernel size of the convolution. Default: 1 - stride (int): The stride of the convolution. Default: 1 - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN', momentum=0.03, eps=0.001). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish'). 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=1, - stride=1, - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish')): - super().__init__() - self.conv = ConvModule( - in_channels * 4, - out_channels, - kernel_size, - stride, - padding=(kernel_size - 1) // 2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - # shape of x (b,c,w,h) -> y(b,4c,w/2,h/2) - patch_top_left = x[..., ::2, ::2] - patch_top_right = x[..., ::2, 1::2] - patch_bot_left = x[..., 1::2, ::2] - patch_bot_right = x[..., 1::2, 1::2] - x = torch.cat( - ( - patch_top_left, - patch_bot_left, - patch_top_right, - patch_bot_right, - ), - dim=1, - ) - return self.conv(x) - - -class SPPBottleneck(BaseModule): - """Spatial pyramid pooling layer used in YOLOv3-SPP. - - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - kernel_sizes (tuple[int]): Sequential of kernel sizes of pooling - layers. Default: (5, 9, 13). - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_sizes=(5, 9, 13), - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - init_cfg=None): - super().__init__(init_cfg) - mid_channels = in_channels // 2 - self.conv1 = ConvModule( - in_channels, - mid_channels, - 1, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.poolings = nn.ModuleList([ - nn.MaxPool2d(kernel_size=ks, stride=1, padding=ks // 2) - for ks in kernel_sizes - ]) - conv2_channels = mid_channels * (len(kernel_sizes) + 1) - self.conv2 = ConvModule( - conv2_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - x = self.conv1(x) - x = torch.cat([x] + [pooling(x) for pooling in self.poolings], dim=1) - x = self.conv2(x) - return x - - -@BACKBONES.register_module() -class CSPDarknet(BaseModule): - """CSP-Darknet backbone used in YOLOv5 and YOLOX. - - Args: - arch (str): Architecture of CSP-Darknet, from {P5, P6}. - Default: P5. - deepen_factor (float): Depth multiplier, multiply number of - blocks in CSP layer by this amount. Default: 1.0. - widen_factor (float): Width multiplier, multiply number of - channels in each layer by this amount. Default: 1.0. - out_indices (Sequence[int]): Output from which stages. - Default: (2, 3, 4). - frozen_stages (int): Stages to be frozen (stop grad and set eval - mode). -1 means not freezing any parameters. Default: -1. - use_depthwise (bool): Whether to use depthwise separable convolution. - Default: False. - arch_ovewrite(list): Overwrite default arch settings. Default: None. - spp_kernal_sizes: (tuple[int]): Sequential of kernel sizes of SPP - layers. Default: (5, 9, 13). - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). 
- norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Example: - >>> from mmdet.models import CSPDarknet - >>> import torch - >>> self = CSPDarknet(depth=53) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 416, 416) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - ... - (1, 256, 52, 52) - (1, 512, 26, 26) - (1, 1024, 13, 13) - """ - # From left to right: - # in_channels, out_channels, num_blocks, add_identity, use_spp - arch_settings = { - 'P5': [[64, 128, 3, True, False], [128, 256, 9, True, False], - [256, 512, 9, True, False], [512, 1024, 3, False, True]], - 'P6': [[64, 128, 3, True, False], [128, 256, 9, True, False], - [256, 512, 9, True, False], [512, 768, 3, True, False], - [768, 1024, 3, False, True]] - } - - def __init__(self, - arch='P5', - deepen_factor=1.0, - widen_factor=1.0, - out_indices=(2, 3, 4), - frozen_stages=-1, - use_depthwise=False, - arch_ovewrite=None, - spp_kernal_sizes=(5, 9, 13), - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - norm_eval=False, - init_cfg=dict( - type='Kaiming', - layer='Conv2d', - a=math.sqrt(5), - distribution='uniform', - mode='fan_in', - nonlinearity='leaky_relu')): - super().__init__(init_cfg) - arch_setting = self.arch_settings[arch] - if arch_ovewrite: - arch_setting = arch_ovewrite - assert set(out_indices).issubset( - i for i in range(len(arch_setting) + 1)) - if frozen_stages not in range(-1, len(arch_setting) + 1): - raise ValueError('frozen_stages must be in range(-1, ' - 'len(arch_setting) + 1). 
But received ' - f'{frozen_stages}') - - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.use_depthwise = use_depthwise - self.norm_eval = norm_eval - conv = DepthwiseSeparableConvModule if use_depthwise else ConvModule - - self.stem = Focus( - 3, - int(arch_setting[0][0] * widen_factor), - kernel_size=3, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.layers = ['stem'] - - for i, (in_channels, out_channels, num_blocks, add_identity, - use_spp) in enumerate(arch_setting): - in_channels = int(in_channels * widen_factor) - out_channels = int(out_channels * widen_factor) - num_blocks = max(round(num_blocks * deepen_factor), 1) - stage = [] - conv_layer = conv( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - stage.append(conv_layer) - if use_spp: - spp = SPPBottleneck( - out_channels, - out_channels, - kernel_sizes=spp_kernal_sizes, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - stage.append(spp) - csp_layer = CSPLayer( - out_channels, - out_channels, - num_blocks=num_blocks, - add_identity=add_identity, - use_depthwise=use_depthwise, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - stage.append(csp_layer) - self.add_module(f'stage{i + 1}', nn.Sequential(*stage)) - self.layers.append(f'stage{i + 1}') - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for i in range(self.frozen_stages + 1): - m = getattr(self, self.layers[i]) - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(CSPDarknet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/darknet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/darknet.py deleted file mode 100644 index adfb1159..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/darknet.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -import warnings - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES - - -class ResBlock(BaseModule): - """The basic residual block used in Darknet. Each ResBlock consists of two - ConvModules and the input is added to the final output. Each ConvModule is - composed of Conv, BN, and LeakyReLU. In YoloV3 paper, the first convLayer - has half of the number of the filters as much as the second convLayer. The - first convLayer has filter size of 1x1 and the second one has the filter - size of 3x3. - - Args: - in_channels (int): The input channels. Must be even. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None): - super(ResBlock, self).__init__(init_cfg) - assert in_channels % 2 == 0 # ensure the in_channels is even - half_in_channels = in_channels // 2 - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - self.conv1 = ConvModule(in_channels, half_in_channels, 1, **cfg) - self.conv2 = ConvModule( - half_in_channels, in_channels, 3, padding=1, **cfg) - - def forward(self, x): - residual = x - out = self.conv1(x) - out = self.conv2(out) - out = out + residual - - return out - - -@BACKBONES.register_module() -class Darknet(BaseModule): - """Darknet backbone. - - Args: - depth (int): Depth of Darknet. Currently only support 53. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. Default: -1. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import Darknet - >>> import torch - >>> self = Darknet(depth=53) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 416, 416) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - ... 
- (1, 256, 52, 52) - (1, 512, 26, 26) - (1, 1024, 13, 13) - """ - - # Dict(depth: (layers, channels)) - arch_settings = { - 53: ((1, 2, 8, 8, 4), ((32, 64), (64, 128), (128, 256), (256, 512), - (512, 1024))) - } - - def __init__(self, - depth=53, - out_indices=(3, 4, 5), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - norm_eval=True, - pretrained=None, - init_cfg=None): - super(Darknet, self).__init__(init_cfg) - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for darknet') - - self.depth = depth - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.layers, self.channels = self.arch_settings[depth] - - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - self.conv1 = ConvModule(3, 32, 3, padding=1, **cfg) - - self.cr_blocks = ['conv1'] - for i, n_layers in enumerate(self.layers): - layer_name = f'conv_res_block{i + 1}' - in_c, out_c = self.channels[i] - self.add_module( - layer_name, - self.make_conv_res_block(in_c, out_c, n_layers, **cfg)) - self.cr_blocks.append(layer_name) - - self.norm_eval = norm_eval - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.cr_blocks): - cr_block = getattr(self, layer_name) - x = cr_block(x) - if i in self.out_indices: - outs.append(x) - - return tuple(outs) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for i in range(self.frozen_stages): - m = getattr(self, self.cr_blocks[i]) - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(Darknet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() - - @staticmethod - def make_conv_res_block(in_channels, - out_channels, - res_repeat, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', - negative_slope=0.1)): - """In Darknet backbone, ConvLayer is usually followed by ResBlock. This - function will make that. The Conv layers always have 3x3 filters with - stride=2. The number of the filters in Conv layer is the same as the - out channels of the ResBlock. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - res_repeat (int): The number of ResBlocks. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). 
- """ - - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - model = nn.Sequential() - model.add_module( - 'conv', - ConvModule( - in_channels, out_channels, 3, stride=2, padding=1, **cfg)) - for idx in range(res_repeat): - model.add_module('res{}'.format(idx), - ResBlock(out_channels, **cfg)) - return model diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/detectors_resnet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/detectors_resnet.py deleted file mode 100644 index a3c0d40b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/detectors_resnet.py +++ /dev/null @@ -1,353 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init, - kaiming_init) -from mmcv.runner import Sequential, load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from mmdet.utils import get_root_logger -from ..builder import BACKBONES -from .resnet import BasicBlock -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - r"""Bottleneck for the ResNet backbone in `DetectoRS - `_. - - This bottleneck allows the users to specify whether to use - SAC (Switchable Atrous Convolution) and RFP (Recursive Feature Pyramid). - - Args: - inplanes (int): The number of input channels. - planes (int): The number of output channels before expansion. - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. - sac (dict, optional): Dictionary to construct SAC. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - rfp_inplanes=None, - sac=None, - init_cfg=None, - **kwargs): - super(Bottleneck, self).__init__( - inplanes, planes, init_cfg=init_cfg, **kwargs) - - assert sac is None or isinstance(sac, dict) - self.sac = sac - self.with_sac = sac is not None - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False) - - self.rfp_inplanes = rfp_inplanes - if self.rfp_inplanes: - self.rfp_conv = build_conv_layer( - None, - self.rfp_inplanes, - planes * self.expansion, - 1, - stride=1, - bias=True) - if init_cfg is None: - self.init_cfg = dict( - type='Constant', val=0, override=dict(name='rfp_conv')) - - def rfp_forward(self, x, rfp_feat): - """The forward function that also takes the RFP features as input.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - if self.rfp_inplanes: - rfp_feat = self.rfp_conv(rfp_feat) - out = out + rfp_feat - - out = self.relu(out) - - return out - - -class ResLayer(Sequential): - """ResLayer to build ResNet style backbone for RPF in detectoRS. - - The difference between this module and base class is that we pass - ``rfp_inplanes`` to the first block. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. Default: True - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. 
- """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - rfp_inplanes=None, - **kwargs): - self.block = block - assert downsample_first, f'downsample_first={downsample_first} is ' \ - 'not supported in DetectoRS' - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down and stride != 1: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - rfp_inplanes=rfp_inplanes, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - super(ResLayer, self).__init__(*layers) - - -@BACKBONES.register_module() -class DetectoRS_ResNet(ResNet): - """ResNet backbone for DetectoRS. - - Args: - sac (dict, optional): Dictionary to construct SAC (Switchable Atrous - Convolution). Default: None. - stage_with_sac (list): Which stage to use sac. Default: (False, False, - False, False). - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. - output_img (bool): If ``True``, the input image will be inserted into - the starting position of output. Default: False. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - sac=None, - stage_with_sac=(False, False, False, False), - rfp_inplanes=None, - output_img=False, - pretrained=None, - init_cfg=None, - **kwargs): - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - self.pretrained = pretrained - if init_cfg is not None: - assert isinstance(init_cfg, dict), \ - f'init_cfg must be a dict, but got {type(init_cfg)}' - if 'type' in init_cfg: - assert init_cfg.get('type') == 'Pretrained', \ - 'Only can initialize module by loading a pretrained model' - else: - raise KeyError('`init_cfg` must contain the key "type"') - self.pretrained = init_cfg.get('checkpoint') - self.sac = sac - self.stage_with_sac = stage_with_sac - self.rfp_inplanes = rfp_inplanes - self.output_img = output_img - super(DetectoRS_ResNet, self).__init__(**kwargs) - - self.inplanes = self.stem_channels - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = self.strides[i] - dilation = self.dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - sac = self.sac if self.stage_with_sac[i] else None - if self.plugins is not None: - stage_plugins = self.make_stage_plugins(self.plugins, i) - else: - stage_plugins = None - planes = self.base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=self.with_cp, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=dcn, - sac=sac, - rfp_inplanes=rfp_inplanes if i > 0 else None, - plugins=stage_plugins) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - # In order to be properly initialized by RFP - def init_weights(self): - # Calling this method will cause parameter initialization exception - # super(DetectoRS_ResNet, self).init_weights() - - if isinstance(self.pretrained, str): - logger = get_root_logger() - load_checkpoint(self, self.pretrained, strict=False, logger=logger) - elif self.pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - if self.dcn is not None: - for m in self.modules(): - if isinstance(m, Bottleneck) and hasattr( - m.conv2, 'conv_offset'): - constant_init(m.conv2.conv_offset, 0) - - if self.zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - constant_init(m.norm3, 0) - elif isinstance(m, BasicBlock): - constant_init(m.norm2, 0) - else: - raise TypeError('pretrained must be a str or None') - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer`` for DetectoRS.""" - return ResLayer(**kwargs) - - def forward(self, x): - """Forward function.""" - outs = list(super(DetectoRS_ResNet, self).forward(x)) - if self.output_img: - outs.insert(0, x) - return tuple(outs) - - def rfp_forward(self, x, rfp_feats): - """Forward function for RFP.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - rfp_feat = rfp_feats[i] 
if i > 0 else None - for layer in res_layer: - x = layer.rfp_forward(x, rfp_feat) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/detectors_resnext.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/detectors_resnext.py deleted file mode 100644 index 5e8b20a0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/detectors_resnext.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from .detectors_resnet import Bottleneck as _Bottleneck -from .detectors_resnet import DetectoRS_ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - elif not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@BACKBONES.register_module() -class DetectoRS_ResNeXt(DetectoRS_ResNet): - """ResNeXt backbone for DetectoRS. - - Args: - groups (int): The number of groups in ResNeXt. - base_width (int): The base width of ResNeXt. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(DetectoRS_ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - return super().make_res_layer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/efficientnet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/efficientnet.py deleted file mode 100644 index 7ee35956..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/efficientnet.py +++ /dev/null @@ -1,417 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import math -from functools import partial - -import torch -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn.bricks import ConvModule, DropPath -from mmcv.runner import BaseModule, Sequential - -from ..builder import BACKBONES -from ..utils import InvertedResidual, SELayer, make_divisible - - -class EdgeResidual(BaseModule): - """Edge Residual Block. - - Args: - in_channels (int): The input channels of this module. - out_channels (int): The output channels of this module. - mid_channels (int): The input channels of the second convolution. - kernel_size (int): The kernel size of the first convolution. - Defaults to 3. - stride (int): The stride of the first convolution. Defaults to 1. - se_cfg (dict, optional): Config dict for se layer. Defaults to None, - which means no se layer. - with_residual (bool): Use residual connection. Defaults to True. - conv_cfg (dict, optional): Config dict for convolution layer. - Defaults to None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Defaults to ``dict(type='BN')``. - act_cfg (dict): Config dict for activation layer. - Defaults to ``dict(type='ReLU')``. - drop_path_rate (float): stochastic depth rate. Defaults to 0. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Defaults to False. - init_cfg (dict | list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - out_channels, - mid_channels, - kernel_size=3, - stride=1, - se_cfg=None, - with_residual=True, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - drop_path_rate=0., - with_cp=False, - init_cfg=None, - **kwargs): - super(EdgeResidual, self).__init__(init_cfg=init_cfg) - assert stride in [1, 2] - self.with_cp = with_cp - self.drop_path = DropPath( - drop_path_rate) if drop_path_rate > 0 else nn.Identity() - self.with_se = se_cfg is not None - self.with_residual = ( - stride == 1 and in_channels == out_channels and with_residual) - - if self.with_se: - assert isinstance(se_cfg, dict) - - self.conv1 = ConvModule( - in_channels=in_channels, - out_channels=mid_channels, - kernel_size=kernel_size, - stride=1, - padding=kernel_size // 2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - if self.with_se: - self.se = SELayer(**se_cfg) - - self.conv2 = ConvModule( - in_channels=mid_channels, - out_channels=out_channels, - kernel_size=1, - stride=stride, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, x): - - def _inner_forward(x): - out = x - out = self.conv1(out) - - if self.with_se: - out = self.se(out) - - out = self.conv2(out) - - if self.with_residual: - return x + self.drop_path(out) - else: - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -def model_scaling(layer_setting, arch_setting): - """Scaling operation to the layer's parameters according to the - arch_setting.""" - # scale width - new_layer_setting = copy.deepcopy(layer_setting) - for layer_cfg in new_layer_setting: - for block_cfg in layer_cfg: - block_cfg[1] = make_divisible(block_cfg[1] * arch_setting[0], 8) - - # scale depth - split_layer_setting = [new_layer_setting[0]] - for layer_cfg in new_layer_setting[1:-1]: - tmp_index = [0] - for i in range(len(layer_cfg) - 1): - if layer_cfg[i + 1][1] != layer_cfg[i][1]: - tmp_index.append(i + 1) - tmp_index.append(len(layer_cfg)) - for i in range(len(tmp_index) - 1): - split_layer_setting.append(layer_cfg[tmp_index[i]:tmp_index[i + - 1]]) - split_layer_setting.append(new_layer_setting[-1]) - - num_of_layers = [len(layer_cfg) for layer_cfg in split_layer_setting[1:-1]] - new_layers = [ - int(math.ceil(arch_setting[1] * num)) for num in num_of_layers - ] - - merge_layer_setting = [split_layer_setting[0]] - for i, layer_cfg in enumerate(split_layer_setting[1:-1]): - if new_layers[i] <= num_of_layers[i]: - tmp_layer_cfg = layer_cfg[:new_layers[i]] - else: - tmp_layer_cfg = copy.deepcopy(layer_cfg) + [layer_cfg[-1]] * ( - new_layers[i] - num_of_layers[i]) - if tmp_layer_cfg[0][3] == 1 and i != 0: - merge_layer_setting[-1] += tmp_layer_cfg.copy() - else: - merge_layer_setting.append(tmp_layer_cfg.copy()) - merge_layer_setting.append(split_layer_setting[-1]) - - return merge_layer_setting - - -@BACKBONES.register_module() -class EfficientNet(BaseModule): - """EfficientNet backbone. - - Args: - arch (str): Architecture of efficientnet. Defaults to b0. - out_indices (Sequence[int]): Output from which stages. - Defaults to (6, ). - frozen_stages (int): Stages to be frozen (all param fixed). - Defaults to 0, which means not freezing any parameters. - conv_cfg (dict): Config dict for convolution layer. - Defaults to None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Defaults to dict(type='BN'). 
- act_cfg (dict): Config dict for activation layer. - Defaults to dict(type='Swish'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Defaults to False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Defaults to False. - """ - - # Parameters to build layers. - # 'b' represents the architecture of normal EfficientNet family includes - # 'b0', 'b1', 'b2', 'b3', 'b4', 'b5', 'b6', 'b7', 'b8'. - # 'e' represents the architecture of EfficientNet-EdgeTPU including 'es', - # 'em', 'el'. - # 6 parameters are needed to construct a layer, From left to right: - # - kernel_size: The kernel size of the block - # - out_channel: The number of out_channels of the block - # - se_ratio: The sequeeze ratio of SELayer. - # - stride: The stride of the block - # - expand_ratio: The expand_ratio of the mid_channels - # - block_type: -1: Not a block, 0: InvertedResidual, 1: EdgeResidual - layer_settings = { - 'b': [[[3, 32, 0, 2, 0, -1]], - [[3, 16, 4, 1, 1, 0]], - [[3, 24, 4, 2, 6, 0], - [3, 24, 4, 1, 6, 0]], - [[5, 40, 4, 2, 6, 0], - [5, 40, 4, 1, 6, 0]], - [[3, 80, 4, 2, 6, 0], - [3, 80, 4, 1, 6, 0], - [3, 80, 4, 1, 6, 0], - [5, 112, 4, 1, 6, 0], - [5, 112, 4, 1, 6, 0], - [5, 112, 4, 1, 6, 0]], - [[5, 192, 4, 2, 6, 0], - [5, 192, 4, 1, 6, 0], - [5, 192, 4, 1, 6, 0], - [5, 192, 4, 1, 6, 0], - [3, 320, 4, 1, 6, 0]], - [[1, 1280, 0, 1, 0, -1]] - ], - 'e': [[[3, 32, 0, 2, 0, -1]], - [[3, 24, 0, 1, 3, 1]], - [[3, 32, 0, 2, 8, 1], - [3, 32, 0, 1, 8, 1]], - [[3, 48, 0, 2, 8, 1], - [3, 48, 0, 1, 8, 1], - [3, 48, 0, 1, 8, 1], - [3, 48, 0, 1, 8, 1]], - [[5, 96, 0, 2, 8, 0], - [5, 96, 0, 1, 8, 0], - [5, 96, 0, 1, 8, 0], - [5, 96, 0, 1, 8, 0], - [5, 96, 0, 1, 8, 0], - [5, 144, 0, 1, 8, 0], - [5, 144, 0, 1, 8, 0], - [5, 144, 0, 1, 8, 0], - [5, 144, 0, 1, 8, 0]], - [[5, 192, 0, 2, 8, 0], - [5, 192, 0, 1, 8, 0]], - [[1, 1280, 0, 1, 0, -1]] - ] - } # yapf: disable - - # Parameters to build different kinds of architecture. - # From left to right: scaling factor for width, scaling factor for depth, - # resolution. - arch_settings = { - 'b0': (1.0, 1.0, 224), - 'b1': (1.0, 1.1, 240), - 'b2': (1.1, 1.2, 260), - 'b3': (1.2, 1.4, 300), - 'b4': (1.4, 1.8, 380), - 'b5': (1.6, 2.2, 456), - 'b6': (1.8, 2.6, 528), - 'b7': (2.0, 3.1, 600), - 'b8': (2.2, 3.6, 672), - 'es': (1.0, 1.0, 224), - 'em': (1.0, 1.1, 240), - 'el': (1.2, 1.4, 300) - } - - def __init__(self, - arch='b0', - drop_path_rate=0., - out_indices=(6, ), - frozen_stages=0, - conv_cfg=dict(type='Conv2dAdaptivePadding'), - norm_cfg=dict(type='BN', eps=1e-3), - act_cfg=dict(type='Swish'), - norm_eval=False, - with_cp=False, - init_cfg=[ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - layer=['_BatchNorm', 'GroupNorm'], - val=1) - ]): - super(EfficientNet, self).__init__(init_cfg) - assert arch in self.arch_settings, \ - f'"{arch}" is not one of the arch_settings ' \ - f'({", ".join(self.arch_settings.keys())})' - self.arch_setting = self.arch_settings[arch] - self.layer_setting = self.layer_settings[arch[:1]] - for index in out_indices: - if index not in range(0, len(self.layer_setting)): - raise ValueError('the item in out_indices must in ' - f'range(0, {len(self.layer_setting)}). ' - f'But received {index}') - - if frozen_stages not in range(len(self.layer_setting) + 1): - raise ValueError('frozen_stages must be in range(0, ' - f'{len(self.layer_setting) + 1}). 
' - f'But received {frozen_stages}') - self.drop_path_rate = drop_path_rate - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - - self.layer_setting = model_scaling(self.layer_setting, - self.arch_setting) - block_cfg_0 = self.layer_setting[0][0] - block_cfg_last = self.layer_setting[-1][0] - self.in_channels = make_divisible(block_cfg_0[1], 8) - self.out_channels = block_cfg_last[1] - self.layers = nn.ModuleList() - self.layers.append( - ConvModule( - in_channels=3, - out_channels=self.in_channels, - kernel_size=block_cfg_0[0], - stride=block_cfg_0[3], - padding=block_cfg_0[0] // 2, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.make_layer() - # Avoid building unused layers in mmdetection. - if len(self.layers) < max(self.out_indices) + 1: - self.layers.append( - ConvModule( - in_channels=self.in_channels, - out_channels=self.out_channels, - kernel_size=block_cfg_last[0], - stride=block_cfg_last[3], - padding=block_cfg_last[0] // 2, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - def make_layer(self): - # Without the first and the final conv block. - layer_setting = self.layer_setting[1:-1] - - total_num_blocks = sum([len(x) for x in layer_setting]) - block_idx = 0 - dpr = [ - x.item() - for x in torch.linspace(0, self.drop_path_rate, total_num_blocks) - ] # stochastic depth decay rule - - for i, layer_cfg in enumerate(layer_setting): - # Avoid building unused layers in mmdetection. - if i > max(self.out_indices) - 1: - break - layer = [] - for i, block_cfg in enumerate(layer_cfg): - (kernel_size, out_channels, se_ratio, stride, expand_ratio, - block_type) = block_cfg - - mid_channels = int(self.in_channels * expand_ratio) - out_channels = make_divisible(out_channels, 8) - if se_ratio <= 0: - se_cfg = None - else: - # In mmdetection, the `divisor` is deleted to align - # the logic of SELayer with mmcls. - se_cfg = dict( - channels=mid_channels, - ratio=expand_ratio * se_ratio, - act_cfg=(self.act_cfg, dict(type='Sigmoid'))) - if block_type == 1: # edge tpu - if i > 0 and expand_ratio == 3: - with_residual = False - expand_ratio = 4 - else: - with_residual = True - mid_channels = int(self.in_channels * expand_ratio) - if se_cfg is not None: - # In mmdetection, the `divisor` is deleted to align - # the logic of SELayer with mmcls. - se_cfg = dict( - channels=mid_channels, - ratio=se_ratio * expand_ratio, - act_cfg=(self.act_cfg, dict(type='Sigmoid'))) - block = partial(EdgeResidual, with_residual=with_residual) - else: - block = InvertedResidual - layer.append( - block( - in_channels=self.in_channels, - out_channels=out_channels, - mid_channels=mid_channels, - kernel_size=kernel_size, - stride=stride, - se_cfg=se_cfg, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - drop_path_rate=dpr[block_idx], - with_cp=self.with_cp, - # In mmdetection, `with_expand_conv` is set to align - # the logic of InvertedResidual with mmcls. 
- with_expand_conv=(mid_channels != self.in_channels))) - self.in_channels = out_channels - block_idx += 1 - self.layers.append(Sequential(*layer)) - - def forward(self, x): - outs = [] - for i, layer in enumerate(self.layers): - x = layer(x) - if i in self.out_indices: - outs.append(x) - - return tuple(outs) - - def _freeze_stages(self): - for i in range(self.frozen_stages): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(EfficientNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/hourglass.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/hourglass.py deleted file mode 100644 index f0dfb434..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/hourglass.py +++ /dev/null @@ -1,222 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import BasicBlock - - -class HourglassModule(BaseModule): - """Hourglass Module for HourglassNet backbone. - - Generate module recursively and use BasicBlock as the base unit. - - Args: - depth (int): Depth of current HourglassModule. - stage_channels (list[int]): Feature channels of sub-modules in current - and follow-up HourglassModule. - stage_blocks (list[int]): Number of sub-modules stacked in current and - follow-up HourglassModule. - norm_cfg (dict): Dictionary to construct and config norm layer. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - upsample_cfg (dict, optional): Config dict for interpolate layer. - Default: `dict(mode='nearest')` - """ - - def __init__(self, - depth, - stage_channels, - stage_blocks, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=None, - upsample_cfg=dict(mode='nearest')): - super(HourglassModule, self).__init__(init_cfg) - - self.depth = depth - - cur_block = stage_blocks[0] - next_block = stage_blocks[1] - - cur_channel = stage_channels[0] - next_channel = stage_channels[1] - - self.up1 = ResLayer( - BasicBlock, cur_channel, cur_channel, cur_block, norm_cfg=norm_cfg) - - self.low1 = ResLayer( - BasicBlock, - cur_channel, - next_channel, - cur_block, - stride=2, - norm_cfg=norm_cfg) - - if self.depth > 1: - self.low2 = HourglassModule(depth - 1, stage_channels[1:], - stage_blocks[1:]) - else: - self.low2 = ResLayer( - BasicBlock, - next_channel, - next_channel, - next_block, - norm_cfg=norm_cfg) - - self.low3 = ResLayer( - BasicBlock, - next_channel, - cur_channel, - cur_block, - norm_cfg=norm_cfg, - downsample_first=False) - - self.up2 = F.interpolate - self.upsample_cfg = upsample_cfg - - def forward(self, x): - """Forward function.""" - up1 = self.up1(x) - low1 = self.low1(x) - low2 = self.low2(low1) - low3 = self.low3(low2) - # Fixing `scale factor` (e.g. 2) is common for upsampling, but - # in some cases the spatial size is mismatched and error will arise. - if 'scale_factor' in self.upsample_cfg: - up2 = self.up2(low3, **self.upsample_cfg) - else: - shape = up1.shape[2:] - up2 = self.up2(low3, size=shape, **self.upsample_cfg) - return up1 + up2 - - -@BACKBONES.register_module() -class HourglassNet(BaseModule): - """HourglassNet backbone. 
- - Stacked Hourglass Networks for Human Pose Estimation. - More details can be found in the `paper - `_ . - - Args: - downsample_times (int): Downsample times in a HourglassModule. - num_stacks (int): Number of HourglassModule modules stacked, - 1 for Hourglass-52, 2 for Hourglass-104. - stage_channels (list[int]): Feature channel of each sub-module in a - HourglassModule. - stage_blocks (list[int]): Number of sub-modules stacked in a - HourglassModule. - feat_channel (int): Feature channel of conv after a HourglassModule. - norm_cfg (dict): Dictionary to construct and config norm layer. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import HourglassNet - >>> import torch - >>> self = HourglassNet() - >>> self.eval() - >>> inputs = torch.rand(1, 3, 511, 511) - >>> level_outputs = self.forward(inputs) - >>> for level_output in level_outputs: - ... print(tuple(level_output.shape)) - (1, 256, 128, 128) - (1, 256, 128, 128) - """ - - def __init__(self, - downsample_times=5, - num_stacks=2, - stage_channels=(256, 256, 384, 384, 384, 512), - stage_blocks=(2, 2, 2, 2, 2, 4), - feat_channel=256, - norm_cfg=dict(type='BN', requires_grad=True), - pretrained=None, - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(HourglassNet, self).__init__(init_cfg) - - self.num_stacks = num_stacks - assert self.num_stacks >= 1 - assert len(stage_channels) == len(stage_blocks) - assert len(stage_channels) > downsample_times - - cur_channel = stage_channels[0] - - self.stem = nn.Sequential( - ConvModule( - 3, cur_channel // 2, 7, padding=3, stride=2, - norm_cfg=norm_cfg), - ResLayer( - BasicBlock, - cur_channel // 2, - cur_channel, - 1, - stride=2, - norm_cfg=norm_cfg)) - - self.hourglass_modules = nn.ModuleList([ - HourglassModule(downsample_times, stage_channels, stage_blocks) - for _ in range(num_stacks) - ]) - - self.inters = ResLayer( - BasicBlock, - cur_channel, - cur_channel, - num_stacks - 1, - norm_cfg=norm_cfg) - - self.conv1x1s = nn.ModuleList([ - ConvModule( - cur_channel, cur_channel, 1, norm_cfg=norm_cfg, act_cfg=None) - for _ in range(num_stacks - 1) - ]) - - self.out_convs = nn.ModuleList([ - ConvModule( - cur_channel, feat_channel, 3, padding=1, norm_cfg=norm_cfg) - for _ in range(num_stacks) - ]) - - self.remap_convs = nn.ModuleList([ - ConvModule( - feat_channel, cur_channel, 1, norm_cfg=norm_cfg, act_cfg=None) - for _ in range(num_stacks - 1) - ]) - - self.relu = nn.ReLU(inplace=True) - - def init_weights(self): - """Init module weights.""" - # Training Centripetal Model needs to reset parameters for Conv2d - super(HourglassNet, self).init_weights() - for m in self.modules(): - if isinstance(m, nn.Conv2d): - m.reset_parameters() - - def forward(self, x): - """Forward function.""" - inter_feat = self.stem(x) - out_feats = [] - - for ind in range(self.num_stacks): - single_hourglass = self.hourglass_modules[ind] - out_conv = self.out_convs[ind] - - hourglass_feat = single_hourglass(inter_feat) - out_feat = out_conv(hourglass_feat) - out_feats.append(out_feat) - - if ind < self.num_stacks - 1: - inter_feat = self.conv1x1s[ind]( - inter_feat) + self.remap_convs[ind]( - out_feat) - inter_feat = self.inters[ind](self.relu(inter_feat)) - - return out_feats diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/hrnet.py 
b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/hrnet.py deleted file mode 100644 index 06c210a6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/hrnet.py +++ /dev/null @@ -1,589 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule, ModuleList, Sequential -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from .resnet import BasicBlock, Bottleneck - - -class HRModule(BaseModule): - """High-Resolution Module for HRNet. - - In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/Exchange - is in this module. - """ - - def __init__(self, - num_branches, - blocks, - num_blocks, - in_channels, - num_channels, - multiscale_output=True, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - block_init_cfg=None, - init_cfg=None): - super(HRModule, self).__init__(init_cfg) - self.block_init_cfg = block_init_cfg - self._check_branches(num_branches, num_blocks, in_channels, - num_channels) - - self.in_channels = in_channels - self.num_branches = num_branches - - self.multiscale_output = multiscale_output - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - self.with_cp = with_cp - self.branches = self._make_branches(num_branches, blocks, num_blocks, - num_channels) - self.fuse_layers = self._make_fuse_layers() - self.relu = nn.ReLU(inplace=False) - - def _check_branches(self, num_branches, num_blocks, in_channels, - num_channels): - if num_branches != len(num_blocks): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_BLOCKS({len(num_blocks)})' - raise ValueError(error_msg) - - if num_branches != len(num_channels): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_CHANNELS({len(num_channels)})' - raise ValueError(error_msg) - - if num_branches != len(in_channels): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_INCHANNELS({len(in_channels)})' - raise ValueError(error_msg) - - def _make_one_branch(self, - branch_index, - block, - num_blocks, - num_channels, - stride=1): - downsample = None - if stride != 1 or \ - self.in_channels[branch_index] != \ - num_channels[branch_index] * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - self.in_channels[branch_index], - num_channels[branch_index] * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, num_channels[branch_index] * - block.expansion)[1]) - - layers = [] - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=self.block_init_cfg)) - self.in_channels[branch_index] = \ - num_channels[branch_index] * block.expansion - for i in range(1, num_blocks[branch_index]): - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=self.block_init_cfg)) - - return Sequential(*layers) - - def _make_branches(self, num_branches, block, num_blocks, num_channels): - branches = [] - - for i in range(num_branches): - branches.append( - self._make_one_branch(i, block, num_blocks, num_channels)) - - return ModuleList(branches) - - def _make_fuse_layers(self): - if self.num_branches == 1: - return None - - num_branches = self.num_branches - in_channels = 
self.in_channels - fuse_layers = [] - num_out_branches = num_branches if self.multiscale_output else 1 - for i in range(num_out_branches): - fuse_layer = [] - for j in range(num_branches): - if j > i: - fuse_layer.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=1, - stride=1, - padding=0, - bias=False), - build_norm_layer(self.norm_cfg, in_channels[i])[1], - nn.Upsample( - scale_factor=2**(j - i), mode='nearest'))) - elif j == i: - fuse_layer.append(None) - else: - conv_downsamples = [] - for k in range(i - j): - if k == i - j - 1: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[i])[1])) - else: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[j], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[j])[1], - nn.ReLU(inplace=False))) - fuse_layer.append(nn.Sequential(*conv_downsamples)) - fuse_layers.append(nn.ModuleList(fuse_layer)) - - return nn.ModuleList(fuse_layers) - - def forward(self, x): - """Forward function.""" - if self.num_branches == 1: - return [self.branches[0](x[0])] - - for i in range(self.num_branches): - x[i] = self.branches[i](x[i]) - - x_fuse = [] - for i in range(len(self.fuse_layers)): - y = 0 - for j in range(self.num_branches): - if i == j: - y += x[j] - else: - y += self.fuse_layers[i][j](x[j]) - x_fuse.append(self.relu(y)) - return x_fuse - - -@BACKBONES.register_module() -class HRNet(BaseModule): - """HRNet backbone. - - `High-Resolution Representations for Labeling Pixels and Regions - arXiv: `_. - - Args: - extra (dict): Detailed configuration for each stage of HRNet. - There must be 4 stages, the configuration for each stage must have - 5 keys: - - - num_modules(int): The number of HRModule in this stage. - - num_branches(int): The number of branches in the HRModule. - - block(str): The type of convolution block. - - num_blocks(tuple): The number of blocks in each branch. - The length must be equal to num_branches. - - num_channels(tuple): The number of channels in each branch. - The length must be equal to num_branches. - in_channels (int): Number of input image channels. Default: 3. - conv_cfg (dict): Dictionary to construct and config conv layer. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: True. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. Default: False. - multiscale_output (bool): Whether to output multi-level features - produced by multiple branches. If False, only the first level - feature will be output. Default: True. - pretrained (str, optional): Model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- - Example: - >>> from mmdet.models import HRNet - >>> import torch - >>> extra = dict( - >>> stage1=dict( - >>> num_modules=1, - >>> num_branches=1, - >>> block='BOTTLENECK', - >>> num_blocks=(4, ), - >>> num_channels=(64, )), - >>> stage2=dict( - >>> num_modules=1, - >>> num_branches=2, - >>> block='BASIC', - >>> num_blocks=(4, 4), - >>> num_channels=(32, 64)), - >>> stage3=dict( - >>> num_modules=4, - >>> num_branches=3, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4), - >>> num_channels=(32, 64, 128)), - >>> stage4=dict( - >>> num_modules=3, - >>> num_branches=4, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4, 4), - >>> num_channels=(32, 64, 128, 256))) - >>> self = HRNet(extra, in_channels=1) - >>> self.eval() - >>> inputs = torch.rand(1, 1, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 32, 8, 8) - (1, 64, 4, 4) - (1, 128, 2, 2) - (1, 256, 1, 1) - """ - - blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck} - - def __init__(self, - extra, - in_channels=3, - conv_cfg=None, - norm_cfg=dict(type='BN'), - norm_eval=True, - with_cp=False, - zero_init_residual=False, - multiscale_output=True, - pretrained=None, - init_cfg=None): - super(HRNet, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - # Assert configurations of 4 stages are in extra - assert 'stage1' in extra and 'stage2' in extra \ - and 'stage3' in extra and 'stage4' in extra - # Assert whether the length of `num_blocks` and `num_channels` are - # equal to `num_branches` - for i in range(4): - cfg = extra[f'stage{i + 1}'] - assert len(cfg['num_blocks']) == cfg['num_branches'] and \ - len(cfg['num_channels']) == cfg['num_branches'] - - self.extra = extra - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - self.zero_init_residual = zero_init_residual - - # stem net - self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1) - self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2) - - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - self.conv_cfg, - 64, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.relu = nn.ReLU(inplace=True) - - # stage 1 - self.stage1_cfg = self.extra['stage1'] - num_channels = self.stage1_cfg['num_channels'][0] - block_type = self.stage1_cfg['block'] - num_blocks = self.stage1_cfg['num_blocks'][0] - - block = self.blocks_dict[block_type] - stage1_out_channels = num_channels * block.expansion - self.layer1 = self._make_layer(block, 64, num_channels, num_blocks) - - # stage 2 - self.stage2_cfg = self.extra['stage2'] - num_channels = self.stage2_cfg['num_channels'] - block_type = self.stage2_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = 
[channel * block.expansion for channel in num_channels] - self.transition1 = self._make_transition_layer([stage1_out_channels], - num_channels) - self.stage2, pre_stage_channels = self._make_stage( - self.stage2_cfg, num_channels) - - # stage 3 - self.stage3_cfg = self.extra['stage3'] - num_channels = self.stage3_cfg['num_channels'] - block_type = self.stage3_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition2 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage3, pre_stage_channels = self._make_stage( - self.stage3_cfg, num_channels) - - # stage 4 - self.stage4_cfg = self.extra['stage4'] - num_channels = self.stage4_cfg['num_channels'] - block_type = self.stage4_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition3 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage4, pre_stage_channels = self._make_stage( - self.stage4_cfg, num_channels, multiscale_output=multiscale_output) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: the normalization layer named "norm2" """ - return getattr(self, self.norm2_name) - - def _make_transition_layer(self, num_channels_pre_layer, - num_channels_cur_layer): - num_branches_cur = len(num_channels_cur_layer) - num_branches_pre = len(num_channels_pre_layer) - - transition_layers = [] - for i in range(num_branches_cur): - if i < num_branches_pre: - if num_channels_cur_layer[i] != num_channels_pre_layer[i]: - transition_layers.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - num_channels_pre_layer[i], - num_channels_cur_layer[i], - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - num_channels_cur_layer[i])[1], - nn.ReLU(inplace=True))) - else: - transition_layers.append(None) - else: - conv_downsamples = [] - for j in range(i + 1 - num_branches_pre): - in_channels = num_channels_pre_layer[-1] - out_channels = num_channels_cur_layer[i] \ - if j == i - num_branches_pre else in_channels - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - out_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, out_channels)[1], - nn.ReLU(inplace=True))) - transition_layers.append(nn.Sequential(*conv_downsamples)) - - return nn.ModuleList(transition_layers) - - def _make_layer(self, block, inplanes, planes, blocks, stride=1): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, planes * block.expansion)[1]) - - layers = [] - block_init_cfg = None - if self.pretrained is None and not hasattr( - self, 'init_cfg') and self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm3')) - layers.append( - block( - inplanes, - planes, - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=block_init_cfg, - )) - inplanes = planes * 
block.expansion - for i in range(1, blocks): - layers.append( - block( - inplanes, - planes, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=block_init_cfg)) - - return Sequential(*layers) - - def _make_stage(self, layer_config, in_channels, multiscale_output=True): - num_modules = layer_config['num_modules'] - num_branches = layer_config['num_branches'] - num_blocks = layer_config['num_blocks'] - num_channels = layer_config['num_channels'] - block = self.blocks_dict[layer_config['block']] - - hr_modules = [] - block_init_cfg = None - if self.pretrained is None and not hasattr( - self, 'init_cfg') and self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm3')) - - for i in range(num_modules): - # multi_scale_output is only used for the last module - if not multiscale_output and i == num_modules - 1: - reset_multiscale_output = False - else: - reset_multiscale_output = True - - hr_modules.append( - HRModule( - num_branches, - block, - num_blocks, - in_channels, - num_channels, - reset_multiscale_output, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - block_init_cfg=block_init_cfg)) - - return Sequential(*hr_modules), in_channels - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.conv2(x) - x = self.norm2(x) - x = self.relu(x) - x = self.layer1(x) - - x_list = [] - for i in range(self.stage2_cfg['num_branches']): - if self.transition1[i] is not None: - x_list.append(self.transition1[i](x)) - else: - x_list.append(x) - y_list = self.stage2(x_list) - - x_list = [] - for i in range(self.stage3_cfg['num_branches']): - if self.transition2[i] is not None: - x_list.append(self.transition2[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage3(x_list) - - x_list = [] - for i in range(self.stage4_cfg['num_branches']): - if self.transition3[i] is not None: - x_list.append(self.transition3[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage4(x_list) - - return y_list - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(HRNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/mobilenet_v2.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/mobilenet_v2.py deleted file mode 100644 index 8c6fcfaa..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/mobilenet_v2.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidual, make_divisible - - -@BACKBONES.register_module() -class MobileNetV2(BaseModule): - """MobileNetV2 backbone. - - Args: - widen_factor (float): Width multiplier, multiply number of - channels in each layer by this amount. Default: 1.0. - out_indices (Sequence[int], optional): Output from which stages. - Default: (1, 2, 4, 7). - frozen_stages (int): Stages to be frozen (all param fixed). 
- Default: -1, which means not freezing any parameters. - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU6'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - # Parameters to build layers. 4 parameters are needed to construct a - # layer, from left to right: expand_ratio, channel, num_blocks, stride. - arch_settings = [[1, 16, 1, 1], [6, 24, 2, 2], [6, 32, 3, 2], - [6, 64, 4, 2], [6, 96, 3, 1], [6, 160, 3, 2], - [6, 320, 1, 1]] - - def __init__(self, - widen_factor=1., - out_indices=(1, 2, 4, 7), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU6'), - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - super(MobileNetV2, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - self.widen_factor = widen_factor - self.out_indices = out_indices - if not set(out_indices).issubset(set(range(0, 8))): - raise ValueError('out_indices must be a subset of range' - f'(0, 8). But received {out_indices}') - - if frozen_stages not in range(-1, 8): - raise ValueError('frozen_stages must be in range(-1, 8). 
' - f'But received {frozen_stages}') - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - - self.in_channels = make_divisible(32 * widen_factor, 8) - - self.conv1 = ConvModule( - in_channels=3, - out_channels=self.in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.layers = [] - - for i, layer_cfg in enumerate(self.arch_settings): - expand_ratio, channel, num_blocks, stride = layer_cfg - out_channels = make_divisible(channel * widen_factor, 8) - inverted_res_layer = self.make_layer( - out_channels=out_channels, - num_blocks=num_blocks, - stride=stride, - expand_ratio=expand_ratio) - layer_name = f'layer{i + 1}' - self.add_module(layer_name, inverted_res_layer) - self.layers.append(layer_name) - - if widen_factor > 1.0: - self.out_channel = int(1280 * widen_factor) - else: - self.out_channel = 1280 - - layer = ConvModule( - in_channels=self.in_channels, - out_channels=self.out_channel, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.add_module('conv2', layer) - self.layers.append('conv2') - - def make_layer(self, out_channels, num_blocks, stride, expand_ratio): - """Stack InvertedResidual blocks to build a layer for MobileNetV2. - - Args: - out_channels (int): out_channels of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - expand_ratio (int): Expand the number of channels of the - hidden layer in InvertedResidual by this ratio. Default: 6. - """ - layers = [] - for i in range(num_blocks): - if i >= 1: - stride = 1 - layers.append( - InvertedResidual( - self.in_channels, - out_channels, - mid_channels=int(round(self.in_channels * expand_ratio)), - stride=stride, - with_expand_conv=expand_ratio != 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - with_cp=self.with_cp)) - self.in_channels = out_channels - - return nn.Sequential(*layers) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for i in range(1, self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - frozen.""" - super(MobileNetV2, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/pvt.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/pvt.py deleted file mode 100644 index 8b7d5d53..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/pvt.py +++ /dev/null @@ -1,591 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math -import warnings - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (Conv2d, build_activation_layer, build_norm_layer, - constant_init, normal_init, trunc_normal_init) -from mmcv.cnn.bricks.drop import build_dropout -from mmcv.cnn.bricks.transformer import MultiheadAttention -from mmcv.cnn.utils.weight_init import trunc_normal_ -from mmcv.runner import (BaseModule, ModuleList, Sequential, _load_checkpoint, - load_state_dict) -from torch.nn.modules.utils import _pair as to_2tuple - -from ...utils import get_root_logger -from ..builder import BACKBONES -from ..utils import PatchEmbed, nchw_to_nlc, nlc_to_nchw, pvt_convert - - -class MixFFN(BaseModule): - """An implementation of MixFFN of PVT. - - The differences between MixFFN & FFN: - 1. Use 1X1 Conv to replace Linear layer. - 2. Introduce 3X3 Depth-wise Conv to encode positional information. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. - feedforward_channels (int): The hidden dimension of FFNs. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='GELU'). - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - Default: None. - use_conv (bool): If True, add 3x3 DWConv between two Linear layers. - Defaults: False. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - feedforward_channels, - act_cfg=dict(type='GELU'), - ffn_drop=0., - dropout_layer=None, - use_conv=False, - init_cfg=None): - super(MixFFN, self).__init__(init_cfg=init_cfg) - - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.act_cfg = act_cfg - activate = build_activation_layer(act_cfg) - - in_channels = embed_dims - fc1 = Conv2d( - in_channels=in_channels, - out_channels=feedforward_channels, - kernel_size=1, - stride=1, - bias=True) - if use_conv: - # 3x3 depth wise conv to provide positional encode information - dw_conv = Conv2d( - in_channels=feedforward_channels, - out_channels=feedforward_channels, - kernel_size=3, - stride=1, - padding=(3 - 1) // 2, - bias=True, - groups=feedforward_channels) - fc2 = Conv2d( - in_channels=feedforward_channels, - out_channels=in_channels, - kernel_size=1, - stride=1, - bias=True) - drop = nn.Dropout(ffn_drop) - layers = [fc1, activate, drop, fc2, drop] - if use_conv: - layers.insert(1, dw_conv) - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - - def forward(self, x, hw_shape, identity=None): - out = nlc_to_nchw(x, hw_shape) - out = self.layers(out) - out = nchw_to_nlc(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -class SpatialReductionAttention(MultiheadAttention): - """An implementation of Spatial Reduction Attention of PVT. - - This module is modified from MultiheadAttention which is a module from - mmcv.cnn.bricks.transformer. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. Default: None. 
- batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Default: False. - qkv_bias (bool): enable bias for qkv if True. Default: True. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - sr_ratio (int): The ratio of spatial reduction of Spatial Reduction - Attention of PVT. Default: 1. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=None, - batch_first=True, - qkv_bias=True, - norm_cfg=dict(type='LN'), - sr_ratio=1, - init_cfg=None): - super().__init__( - embed_dims, - num_heads, - attn_drop, - proj_drop, - batch_first=batch_first, - dropout_layer=dropout_layer, - bias=qkv_bias, - init_cfg=init_cfg) - - self.sr_ratio = sr_ratio - if sr_ratio > 1: - self.sr = Conv2d( - in_channels=embed_dims, - out_channels=embed_dims, - kernel_size=sr_ratio, - stride=sr_ratio) - # The ret[0] of build_norm_layer is norm name. - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - - # handle the BC-breaking from https://github.com/open-mmlab/mmcv/pull/1418 # noqa - from mmdet import digit_version, mmcv_version - if mmcv_version < digit_version('1.3.17'): - warnings.warn('The legacy version of forward function in' - 'SpatialReductionAttention is deprecated in' - 'mmcv>=1.3.17 and will no longer support in the' - 'future. Please upgrade your mmcv.') - self.forward = self.legacy_forward - - def forward(self, x, hw_shape, identity=None): - - x_q = x - if self.sr_ratio > 1: - x_kv = nlc_to_nchw(x, hw_shape) - x_kv = self.sr(x_kv) - x_kv = nchw_to_nlc(x_kv) - x_kv = self.norm(x_kv) - else: - x_kv = x - - if identity is None: - identity = x_q - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - x_q = x_q.transpose(0, 1) - x_kv = x_kv.transpose(0, 1) - - out = self.attn(query=x_q, key=x_kv, value=x_kv)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - def legacy_forward(self, x, hw_shape, identity=None): - """multi head attention forward in mmcv version < 1.3.17.""" - x_q = x - if self.sr_ratio > 1: - x_kv = nlc_to_nchw(x, hw_shape) - x_kv = self.sr(x_kv) - x_kv = nchw_to_nlc(x_kv) - x_kv = self.norm(x_kv) - else: - x_kv = x - - if identity is None: - identity = x_q - - out = self.attn(query=x_q, key=x_kv, value=x_kv)[0] - - return identity + self.dropout_layer(self.proj_drop(out)) - - -class PVTEncoderLayer(BaseModule): - """Implements one encoder layer in PVT. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - drop_rate (float): Probability of an element to be zeroed. - after the feed forward layer. Default: 0.0. - attn_drop_rate (float): The drop out rate for attention layer. - Default: 0.0. - drop_path_rate (float): stochastic depth rate. Default: 0.0. - qkv_bias (bool): enable bias for qkv if True. - Default: True. - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). 
- sr_ratio (int): The ratio of spatial reduction of Spatial Reduction - Attention of PVT. Default: 1. - use_conv_ffn (bool): If True, use Convolutional FFN to replace FFN. - Default: False. - init_cfg (dict, optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - qkv_bias=True, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - sr_ratio=1, - use_conv_ffn=False, - init_cfg=None): - super(PVTEncoderLayer, self).__init__(init_cfg=init_cfg) - - # The ret[0] of build_norm_layer is norm name. - self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] - - self.attn = SpatialReductionAttention( - embed_dims=embed_dims, - num_heads=num_heads, - attn_drop=attn_drop_rate, - proj_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - qkv_bias=qkv_bias, - norm_cfg=norm_cfg, - sr_ratio=sr_ratio) - - # The ret[0] of build_norm_layer is norm name. - self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] - - self.ffn = MixFFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - use_conv=use_conv_ffn, - act_cfg=act_cfg) - - def forward(self, x, hw_shape): - x = self.attn(self.norm1(x), hw_shape, identity=x) - x = self.ffn(self.norm2(x), hw_shape, identity=x) - - return x - - -class AbsolutePositionEmbedding(BaseModule): - """An implementation of the absolute position embedding in PVT. - - Args: - pos_shape (int): The shape of the absolute position embedding. - pos_dim (int): The dimension of the absolute position embedding. - drop_rate (float): Probability of an element to be zeroed. - Default: 0.0. - """ - - def __init__(self, pos_shape, pos_dim, drop_rate=0., init_cfg=None): - super().__init__(init_cfg=init_cfg) - - if isinstance(pos_shape, int): - pos_shape = to_2tuple(pos_shape) - elif isinstance(pos_shape, tuple): - if len(pos_shape) == 1: - pos_shape = to_2tuple(pos_shape[0]) - assert len(pos_shape) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(pos_shape)}' - self.pos_shape = pos_shape - self.pos_dim = pos_dim - - self.pos_embed = nn.Parameter( - torch.zeros(1, pos_shape[0] * pos_shape[1], pos_dim)) - self.drop = nn.Dropout(p=drop_rate) - - def init_weights(self): - trunc_normal_(self.pos_embed, std=0.02) - - def resize_pos_embed(self, pos_embed, input_shape, mode='bilinear'): - """Resize pos_embed weights. - - Resize pos_embed using bilinear interpolate method. - - Args: - pos_embed (torch.Tensor): Position embedding weights. - input_shape (tuple): Tuple for (downsampled input image height, - downsampled input image width). - mode (str): Algorithm used for upsampling: - ``'nearest'`` | ``'linear'`` | ``'bilinear'`` | ``'bicubic'`` | - ``'trilinear'``. Default: ``'bilinear'``. - - Return: - torch.Tensor: The resized pos_embed of shape [B, L_new, C]. 
- """ - assert pos_embed.ndim == 3, 'shape of pos_embed must be [B, L, C]' - pos_h, pos_w = self.pos_shape - pos_embed_weight = pos_embed[:, (-1 * pos_h * pos_w):] - pos_embed_weight = pos_embed_weight.reshape( - 1, pos_h, pos_w, self.pos_dim).permute(0, 3, 1, 2).contiguous() - pos_embed_weight = F.interpolate( - pos_embed_weight, size=input_shape, mode=mode) - pos_embed_weight = torch.flatten(pos_embed_weight, - 2).transpose(1, 2).contiguous() - pos_embed = pos_embed_weight - - return pos_embed - - def forward(self, x, hw_shape, mode='bilinear'): - pos_embed = self.resize_pos_embed(self.pos_embed, hw_shape, mode) - return self.drop(x + pos_embed) - - -@BACKBONES.register_module() -class PyramidVisionTransformer(BaseModule): - """Pyramid Vision Transformer (PVT) - - Implementation of `Pyramid Vision Transformer: A Versatile Backbone for - Dense Prediction without Convolutions - `_. - - Args: - pretrain_img_size (int | tuple[int]): The size of input image when - pretrain. Defaults: 224. - in_channels (int): Number of input channels. Default: 3. - embed_dims (int): Embedding dimension. Default: 64. - num_stags (int): The num of stages. Default: 4. - num_layers (Sequence[int]): The layer number of each transformer encode - layer. Default: [3, 4, 6, 3]. - num_heads (Sequence[int]): The attention heads of each transformer - encode layer. Default: [1, 2, 5, 8]. - patch_sizes (Sequence[int]): The patch_size of each patch embedding. - Default: [4, 2, 2, 2]. - strides (Sequence[int]): The stride of each patch embedding. - Default: [4, 2, 2, 2]. - paddings (Sequence[int]): The padding of each patch embedding. - Default: [0, 0, 0, 0]. - sr_ratios (Sequence[int]): The spatial reduction rate of each - transformer encode layer. Default: [8, 4, 2, 1]. - out_indices (Sequence[int] | int): Output from which stages. - Default: (0, 1, 2, 3). - mlp_ratios (Sequence[int]): The ratio of the mlp hidden dim to the - embedding dim of each transformer encode layer. - Default: [8, 8, 4, 4]. - qkv_bias (bool): Enable bias for qkv if True. Default: True. - drop_rate (float): Probability of an element to be zeroed. - Default 0.0. - attn_drop_rate (float): The drop out rate for attention layer. - Default 0.0. - drop_path_rate (float): stochastic depth rate. Default 0.1. - use_abs_pos_embed (bool): If True, add absolute position embedding to - the patch embedding. Defaults: True. - use_conv_ffn (bool): If True, use Convolutional FFN to replace FFN. - Default: False. - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - pretrained (str, optional): model pretrained path. Default: None. - convert_weights (bool): The flag indicates whether the - pre-trained model is from the original repo. We may need - to convert some keys to make it compatible. - Default: True. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - pretrain_img_size=224, - in_channels=3, - embed_dims=64, - num_stages=4, - num_layers=[3, 4, 6, 3], - num_heads=[1, 2, 5, 8], - patch_sizes=[4, 2, 2, 2], - strides=[4, 2, 2, 2], - paddings=[0, 0, 0, 0], - sr_ratios=[8, 4, 2, 1], - out_indices=(0, 1, 2, 3), - mlp_ratios=[8, 8, 4, 4], - qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1, - use_abs_pos_embed=True, - norm_after_stage=False, - use_conv_ffn=False, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN', eps=1e-6), - pretrained=None, - convert_weights=True, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - - self.convert_weights = convert_weights - if isinstance(pretrain_img_size, int): - pretrain_img_size = to_2tuple(pretrain_img_size) - elif isinstance(pretrain_img_size, tuple): - if len(pretrain_img_size) == 1: - pretrain_img_size = to_2tuple(pretrain_img_size[0]) - assert len(pretrain_img_size) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(pretrain_img_size)}' - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - self.init_cfg = init_cfg - else: - raise TypeError('pretrained must be a str or None') - - self.embed_dims = embed_dims - - self.num_stages = num_stages - self.num_layers = num_layers - self.num_heads = num_heads - self.patch_sizes = patch_sizes - self.strides = strides - self.sr_ratios = sr_ratios - assert num_stages == len(num_layers) == len(num_heads) \ - == len(patch_sizes) == len(strides) == len(sr_ratios) - - self.out_indices = out_indices - assert max(out_indices) < self.num_stages - self.pretrained = pretrained - - # transformer encoder - dpr = [ - x.item() - for x in torch.linspace(0, drop_path_rate, sum(num_layers)) - ] # stochastic num_layer decay rule - - cur = 0 - self.layers = ModuleList() - for i, num_layer in enumerate(num_layers): - embed_dims_i = embed_dims * num_heads[i] - patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims_i, - kernel_size=patch_sizes[i], - stride=strides[i], - padding=paddings[i], - bias=True, - norm_cfg=norm_cfg) - - layers = ModuleList() - if use_abs_pos_embed: - pos_shape = pretrain_img_size // np.prod(patch_sizes[:i + 1]) - pos_embed = AbsolutePositionEmbedding( - pos_shape=pos_shape, - pos_dim=embed_dims_i, - drop_rate=drop_rate) - layers.append(pos_embed) - layers.extend([ - PVTEncoderLayer( - embed_dims=embed_dims_i, - num_heads=num_heads[i], - feedforward_channels=mlp_ratios[i] * embed_dims_i, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[cur + idx], - qkv_bias=qkv_bias, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - sr_ratio=sr_ratios[i], - use_conv_ffn=use_conv_ffn) for idx in range(num_layer) - ]) - in_channels = embed_dims_i - # The ret[0] of build_norm_layer is norm name. - if norm_after_stage: - norm = build_norm_layer(norm_cfg, embed_dims_i)[1] - else: - norm = nn.Identity() - self.layers.append(ModuleList([patch_embed, layers, norm])) - cur += num_layer - - def init_weights(self): - logger = get_root_logger() - if self.init_cfg is None: - logger.warn(f'No pre-trained weights for ' - f'{self.__class__.__name__}, ' - f'training start from scratch') - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) 
- elif isinstance(m, nn.LayerNorm): - constant_init(m, 1.0) - elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[ - 1] * m.out_channels - fan_out //= m.groups - normal_init(m, 0, math.sqrt(2.0 / fan_out)) - elif isinstance(m, AbsolutePositionEmbedding): - m.init_weights() - else: - assert 'checkpoint' in self.init_cfg, f'Only support ' \ - f'specify `Pretrained` in ' \ - f'`init_cfg` in ' \ - f'{self.__class__.__name__} ' - checkpoint = _load_checkpoint( - self.init_cfg.checkpoint, logger=logger, map_location='cpu') - logger.warn(f'Load pre-trained model for ' - f'{self.__class__.__name__} from original repo') - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - elif 'model' in checkpoint: - state_dict = checkpoint['model'] - else: - state_dict = checkpoint - if self.convert_weights: - # Because pvt backbones are not supported by mmcls, - # so we need to convert pre-trained weights to match this - # implementation. - state_dict = pvt_convert(state_dict) - load_state_dict(self, state_dict, strict=False, logger=logger) - - def forward(self, x): - outs = [] - - for i, layer in enumerate(self.layers): - x, hw_shape = layer[0](x) - - for block in layer[1]: - x = block(x, hw_shape) - x = layer[2](x) - x = nlc_to_nchw(x, hw_shape) - if i in self.out_indices: - outs.append(x) - - return outs - - -@BACKBONES.register_module() -class PyramidVisionTransformerV2(PyramidVisionTransformer): - """Implementation of `PVTv2: Improved Baselines with Pyramid Vision - Transformer `_.""" - - def __init__(self, **kwargs): - super(PyramidVisionTransformerV2, self).__init__( - patch_sizes=[7, 3, 3, 3], - paddings=[3, 1, 1, 1], - use_abs_pos_embed=False, - norm_after_stage=True, - use_conv_ffn=True, - **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/regnet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/regnet.py deleted file mode 100644 index 63adc3c1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/regnet.py +++ /dev/null @@ -1,356 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import numpy as np -import torch.nn as nn -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from .resnet import ResNet -from .resnext import Bottleneck - - -@BACKBONES.register_module() -class RegNet(ResNet): - """RegNet backbone. - - More details can be found in `paper `_ . - - Args: - arch (dict): The parameter of RegNets. - - - w0 (int): initial width - - wa (float): slope of width - - wm (float): quantization parameter to quantize the width - - depth (int): depth of the backbone - - group_w (int): width of group - - bot_mul (float): bottleneck ratio, i.e. expansion of bottleneck. - strides (Sequence[int]): Strides of the first block of each stage. - base_channels (int): Base channels after stem layer. - in_channels (int): Number of input image channels. Default: 3. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. 
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import RegNet - >>> import torch - >>> self = RegNet( - arch=dict( - w0=88, - wa=26.31, - wm=2.25, - group_w=48, - depth=25, - bot_mul=1.0)) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 96, 8, 8) - (1, 192, 4, 4) - (1, 432, 2, 2) - (1, 1008, 1, 1) - """ - arch_settings = { - 'regnetx_400mf': - dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - 'regnetx_800mf': - dict(w0=56, wa=35.73, wm=2.28, group_w=16, depth=16, bot_mul=1.0), - 'regnetx_1.6gf': - dict(w0=80, wa=34.01, wm=2.25, group_w=24, depth=18, bot_mul=1.0), - 'regnetx_3.2gf': - dict(w0=88, wa=26.31, wm=2.25, group_w=48, depth=25, bot_mul=1.0), - 'regnetx_4.0gf': - dict(w0=96, wa=38.65, wm=2.43, group_w=40, depth=23, bot_mul=1.0), - 'regnetx_6.4gf': - dict(w0=184, wa=60.83, wm=2.07, group_w=56, depth=17, bot_mul=1.0), - 'regnetx_8.0gf': - dict(w0=80, wa=49.56, wm=2.88, group_w=120, depth=23, bot_mul=1.0), - 'regnetx_12gf': - dict(w0=168, wa=73.36, wm=2.37, group_w=112, depth=19, bot_mul=1.0), - } - - def __init__(self, - arch, - in_channels=3, - stem_channels=32, - base_channels=32, - strides=(2, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - with_cp=False, - zero_init_residual=True, - pretrained=None, - init_cfg=None): - super(ResNet, self).__init__(init_cfg) - - # Generate RegNet parameters first - if isinstance(arch, str): - assert arch in self.arch_settings, \ - f'"arch": "{arch}" is not one of the' \ - ' arch_settings' - arch = self.arch_settings[arch] - elif not isinstance(arch, dict): - raise ValueError('Expect "arch" to be either a string ' - f'or a dict, got {type(arch)}') - - widths, num_stages = self.generate_regnet( - arch['w0'], - arch['wa'], - arch['wm'], - arch['depth'], - ) - # Convert to per stage format - stage_widths, stage_blocks = self.get_stages_from_blocks(widths) - # Generate group widths and bot muls - group_widths = [arch['group_w'] for _ in range(num_stages)] - self.bottleneck_ratio = [arch['bot_mul'] for _ in range(num_stages)] - # Adjust the compatibility of stage_widths and group_widths - stage_widths, group_widths = self.adjust_width_group( - stage_widths, self.bottleneck_ratio, group_widths) - - # Group params by stage - self.stage_widths = stage_widths - self.group_widths = group_widths - self.depth = sum(stage_blocks) - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = 
conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.zero_init_residual = zero_init_residual - self.block = Bottleneck - expansion_bak = self.block.expansion - self.block.expansion = 1 - self.stage_blocks = stage_blocks[:num_stages] - - self._make_stem_layer(in_channels, stem_channels) - - block_init_cfg = None - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - if self.zero_init_residual: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm3')) - else: - raise TypeError('pretrained must be a str or None') - - self.inplanes = stem_channels - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = self.strides[i] - dilation = self.dilations[i] - group_width = self.group_widths[i] - width = int(round(self.stage_widths[i] * self.bottleneck_ratio[i])) - stage_groups = width // group_width - - dcn = self.dcn if self.stage_with_dcn[i] else None - if self.plugins is not None: - stage_plugins = self.make_stage_plugins(self.plugins, i) - else: - stage_plugins = None - - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=self.stage_widths[i], - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=self.with_cp, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=dcn, - plugins=stage_plugins, - groups=stage_groups, - base_width=group_width, - base_channels=self.stage_widths[i], - init_cfg=block_init_cfg) - self.inplanes = self.stage_widths[i] - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = stage_widths[-1] - self.block.expansion = expansion_bak - - def _make_stem_layer(self, in_channels, base_channels): - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - base_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, base_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - - def generate_regnet(self, - initial_width, - width_slope, - width_parameter, - depth, - divisor=8): - """Generates per block width from RegNet parameters. - - Args: - initial_width ([int]): Initial width of the backbone - width_slope ([float]): Slope of the quantized linear function - width_parameter ([int]): Parameter used to quantize the width. - depth ([int]): Depth of the backbone. - divisor (int, optional): The divisor of channels. Defaults to 8. 
- - Returns: - list, int: return a list of widths of each stage and the number \ - of stages - """ - assert width_slope >= 0 - assert initial_width > 0 - assert width_parameter > 1 - assert initial_width % divisor == 0 - widths_cont = np.arange(depth) * width_slope + initial_width - ks = np.round( - np.log(widths_cont / initial_width) / np.log(width_parameter)) - widths = initial_width * np.power(width_parameter, ks) - widths = np.round(np.divide(widths, divisor)) * divisor - num_stages = len(np.unique(widths)) - widths, widths_cont = widths.astype(int).tolist(), widths_cont.tolist() - return widths, num_stages - - @staticmethod - def quantize_float(number, divisor): - """Converts a float to closest non-zero int divisible by divisor. - - Args: - number (int): Original number to be quantized. - divisor (int): Divisor used to quantize the number. - - Returns: - int: quantized number that is divisible by devisor. - """ - return int(round(number / divisor) * divisor) - - def adjust_width_group(self, widths, bottleneck_ratio, groups): - """Adjusts the compatibility of widths and groups. - - Args: - widths (list[int]): Width of each stage. - bottleneck_ratio (float): Bottleneck ratio. - groups (int): number of groups in each stage - - Returns: - tuple(list): The adjusted widths and groups of each stage. - """ - bottleneck_width = [ - int(w * b) for w, b in zip(widths, bottleneck_ratio) - ] - groups = [min(g, w_bot) for g, w_bot in zip(groups, bottleneck_width)] - bottleneck_width = [ - self.quantize_float(w_bot, g) - for w_bot, g in zip(bottleneck_width, groups) - ] - widths = [ - int(w_bot / b) - for w_bot, b in zip(bottleneck_width, bottleneck_ratio) - ] - return widths, groups - - def get_stages_from_blocks(self, widths): - """Gets widths/stage_blocks of network at each stage. - - Args: - widths (list[int]): Width in each stage. - - Returns: - tuple(list): width and depth of each stage - """ - width_diff = [ - width != width_prev - for width, width_prev in zip(widths + [0], [0] + widths) - ] - stage_widths = [ - width for width, diff in zip(widths, width_diff[:-1]) if diff - ] - stage_blocks = np.diff([ - depth for depth, diff in zip(range(len(width_diff)), width_diff) - if diff - ]).tolist() - return stage_widths, stage_blocks - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/res2net.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/res2net.py deleted file mode 100644 index 96afb2fb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/res2net.py +++ /dev/null @@ -1,327 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import Sequential - -from ..builder import BACKBONES -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottle2neck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - scales=4, - base_width=26, - base_channels=64, - stage_type='normal', - **kwargs): - """Bottle2neck block for Res2Net. 
- - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottle2neck, self).__init__(inplanes, planes, **kwargs) - assert scales > 1, 'Res2Net degenerates to ResNet when scales = 1.' - width = int(math.floor(self.planes * (base_width / base_channels))) - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width * scales, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width * scales, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - - if stage_type == 'stage' and self.conv2_stride != 1: - self.pool = nn.AvgPool2d( - kernel_size=3, stride=self.conv2_stride, padding=1) - convs = [] - bns = [] - - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - for i in range(scales - 1): - convs.append( - build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False)) - bns.append( - build_norm_layer(self.norm_cfg, width, postfix=i + 1)[1]) - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList(bns) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - for i in range(scales - 1): - convs.append( - build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False)) - bns.append( - build_norm_layer(self.norm_cfg, width, postfix=i + 1)[1]) - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList(bns) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width * scales, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.stage_type = stage_type - self.scales = scales - self.width = width - delattr(self, 'conv2') - delattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - spx = torch.split(out, self.width, 1) - sp = self.convs[0](spx[0].contiguous()) - sp = self.relu(self.bns[0](sp)) - out = sp - for i in range(1, self.scales - 1): - if self.stage_type == 'stage': - sp = spx[i] - else: - sp = sp + spx[i] - sp = self.convs[i](sp.contiguous()) - sp = self.relu(self.bns[i](sp)) - out = torch.cat((out, sp), 1) - - if self.stage_type == 'normal' or self.conv2_stride == 1: - out = torch.cat((out, spx[self.scales - 1]), 1) - elif self.stage_type == 'stage': - out = torch.cat((out, self.pool(spx[self.scales - 1])), 1) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Res2Layer(Sequential): - """Res2Layer to build Res2Net style backbone. 
- - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottle2neck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - scales (int): Scales used in Res2Net. Default: 4 - base_width (int): Basic width of each scale. Default: 26 - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=True, - conv_cfg=None, - norm_cfg=dict(type='BN'), - scales=4, - base_width=26, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False), - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=1, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1], - ) - - layers = [] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - scales=scales, - base_width=base_width, - stage_type='stage', - **kwargs)) - inplanes = planes * block.expansion - for i in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - scales=scales, - base_width=base_width, - **kwargs)) - super(Res2Layer, self).__init__(*layers) - - -@BACKBONES.register_module() -class Res2Net(ResNet): - """Res2Net backbone. - - Args: - scales (int): Scales used in Res2Net. Default: 4 - base_width (int): Basic width of each scale. Default: 26 - depth (int): Depth of res2net, from {50, 101, 152}. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Res2net stages. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottle2neck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - position (str, required): Position inside block to insert - plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'. - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import Res2Net - >>> import torch - >>> self = Res2Net(depth=50, scales=4, base_width=26) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 256, 8, 8) - (1, 512, 4, 4) - (1, 1024, 2, 2) - (1, 2048, 1, 1) - """ - - arch_settings = { - 50: (Bottle2neck, (3, 4, 6, 3)), - 101: (Bottle2neck, (3, 4, 23, 3)), - 152: (Bottle2neck, (3, 8, 36, 3)) - } - - def __init__(self, - scales=4, - base_width=26, - style='pytorch', - deep_stem=True, - avg_down=True, - pretrained=None, - init_cfg=None, - **kwargs): - self.scales = scales - self.base_width = base_width - super(Res2Net, self).__init__( - style='pytorch', - deep_stem=True, - avg_down=True, - pretrained=pretrained, - init_cfg=init_cfg, - **kwargs) - - def make_res_layer(self, **kwargs): - return Res2Layer( - scales=self.scales, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/resnest.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/resnest.py deleted file mode 100644 index 69629b96..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/resnest.py +++ /dev/null @@ -1,322 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNetV1d - - -class RSoftmax(nn.Module): - """Radix Softmax module in ``SplitAttentionConv2d``. - - Args: - radix (int): Radix of input. - groups (int): Groups of input. - """ - - def __init__(self, radix, groups): - super().__init__() - self.radix = radix - self.groups = groups - - def forward(self, x): - batch = x.size(0) - if self.radix > 1: - x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2) - x = F.softmax(x, dim=1) - x = x.reshape(batch, -1) - else: - x = torch.sigmoid(x) - return x - - -class SplitAttentionConv2d(BaseModule): - """Split-Attention Conv2d in ResNeSt. - - Args: - in_channels (int): Number of channels in the input feature map. - channels (int): Number of intermediate channels. - kernel_size (int | tuple[int]): Size of the convolution kernel. - stride (int | tuple[int]): Stride of the convolution. - padding (int | tuple[int]): Zero-padding added to both sides of - dilation (int | tuple[int]): Spacing between kernel elements. - groups (int): Number of blocked connections from input channels to - output channels. - groups (int): Same as nn.Conv2d. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels. Default: 4. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - dcn (dict): Config dict for DCN. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - radix=2, - reduction_factor=4, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - init_cfg=None): - super(SplitAttentionConv2d, self).__init__(init_cfg) - inter_channels = max(in_channels * radix // reduction_factor, 32) - self.radix = radix - self.groups = groups - self.channels = channels - self.with_dcn = dcn is not None - self.dcn = dcn - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_dcn and not fallback_on_stride: - assert conv_cfg is None, 'conv_cfg must be None for DCN' - conv_cfg = dcn - self.conv = build_conv_layer( - conv_cfg, - in_channels, - channels * radix, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups * radix, - bias=False) - # To be consistent with original implementation, starting from 0 - self.norm0_name, norm0 = build_norm_layer( - norm_cfg, channels * radix, postfix=0) - self.add_module(self.norm0_name, norm0) - self.relu = nn.ReLU(inplace=True) - self.fc1 = build_conv_layer( - None, channels, inter_channels, 1, groups=self.groups) - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, inter_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.fc2 = build_conv_layer( - None, inter_channels, channels * radix, 1, groups=self.groups) - self.rsoftmax = RSoftmax(radix, groups) - - @property - def norm0(self): - """nn.Module: the normalization layer named "norm0" """ - return getattr(self, self.norm0_name) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def forward(self, x): - x = self.conv(x) - x = self.norm0(x) - x = self.relu(x) - - batch, rchannel = x.shape[:2] - batch = x.size(0) - if self.radix > 1: - splits = x.view(batch, self.radix, -1, *x.shape[2:]) - gap = splits.sum(dim=1) - else: - gap = x - gap = F.adaptive_avg_pool2d(gap, 1) - gap = self.fc1(gap) - - gap = self.norm1(gap) - gap = self.relu(gap) - - atten = self.fc2(gap) - atten = self.rsoftmax(atten).view(batch, -1, 1, 1) - - if self.radix > 1: - attens = atten.view(batch, self.radix, -1, *atten.shape[2:]) - out = torch.sum(attens * splits, dim=1) - else: - out = atten * x - return out.contiguous() - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeSt. - - Args: - inplane (int): Input planes of this block. - planes (int): Middle planes of this block. - groups (int): Groups of conv2. - base_width (int): Base of width in terms of base channels. Default: 4. - base_channels (int): Base of channels for calculating width. - Default: 64. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Key word arguments for base class. 
- """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - """Bottleneck block for ResNeSt.""" - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.avg_down_stride = avg_down_stride and self.conv2_stride > 1 - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - self.with_modulated_dcn = False - self.conv2 = SplitAttentionConv2d( - width, - width, - kernel_size=3, - stride=1 if self.avg_down_stride else self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - radix=radix, - reduction_factor=reduction_factor, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=self.dcn) - delattr(self, self.norm2_name) - - if self.avg_down_stride: - self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - def forward(self, x): - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - - if self.avg_down_stride: - out = self.avd_layer(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNeSt(ResNetV1d): - """ResNeSt backbone. - - Args: - groups (int): Number of groups of Bottleneck. Default: 1 - base_width (int): Base width of Bottleneck. Default: 4 - radix (int): Radix of SplitAttentionConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Keyword arguments for ResNet. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)), - 200: (Bottleneck, (3, 24, 36, 3)) - } - - def __init__(self, - groups=1, - base_width=4, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - self.groups = groups - self.base_width = base_width - self.radix = radix - self.reduction_factor = reduction_factor - self.avg_down_stride = avg_down_stride - super(ResNeSt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - radix=self.radix, - reduction_factor=self.reduction_factor, - avg_down_stride=self.avg_down_stride, - **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/resnet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/resnet.py deleted file mode 100644 index 1eaaae67..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/resnet.py +++ /dev/null @@ -1,672 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer, build_plugin_layer -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import ResLayer - - -class BasicBlock(BaseModule): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - super(BasicBlock, self).__init__(init_cfg) - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=False) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Bottleneck(BaseModule): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - """Bottleneck block for ResNet. 
- - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottleneck, self).__init__(init_cfg) - assert style in ['pytorch', 'caffe'] - assert dcn is None or isinstance(dcn, dict) - assert plugins is None or isinstance(plugins, list) - if plugins is not None: - allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] - assert all(p['position'] in allowed_position for p in plugins) - - self.inplanes = inplanes - self.planes = planes - self.stride = stride - self.dilation = dilation - self.style = style - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dcn = dcn - self.with_dcn = dcn is not None - self.plugins = plugins - self.with_plugins = plugins is not None - - if self.with_plugins: - # collect plugins for conv1/conv2/conv3 - self.after_conv1_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_conv3_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv3' - ] - - if self.style == 'pytorch': - self.conv1_stride = 1 - self.conv2_stride = stride - else: - self.conv1_stride = stride - self.conv2_stride = 1 - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - norm_cfg, planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - conv_cfg, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - dcn, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - conv_cfg, - planes, - planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - - if self.with_plugins: - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - planes * self.expansion, self.after_conv3_plugins) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. 
- """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - out = x - for name in plugin_names: - out = getattr(self, name)(out) - return out - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - @property - def norm3(self): - """nn.Module: normalization layer after the third convolution layer""" - return getattr(self, self.norm3_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNet(BaseModule): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - stem_channels (int | None): Number of stem channels. If not specified, - it will be the same as `base_channels`. Default: None. - base_channels (int): Number of base channels of res layer. Default: 64. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Resnet stages. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - position (str, required): Position inside block to insert - plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'. - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import ResNet - >>> import torch - >>> self = ResNet(depth=18) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 64, 8, 8) - (1, 128, 4, 4) - (1, 256, 2, 2) - (1, 512, 1, 1) - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - in_channels=3, - stem_channels=None, - base_channels=64, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - with_cp=False, - zero_init_residual=True, - pretrained=None, - init_cfg=None): - super(ResNet, self).__init__(init_cfg) - self.zero_init_residual = zero_init_residual - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - - block_init_cfg = None - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - block = self.arch_settings[depth][0] - if self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm3')) - else: - raise TypeError('pretrained must be a str or None') - - self.depth = depth - if stem_channels is None: - stem_channels = base_channels - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.block, stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - self.inplanes = stem_channels - - self._make_stem_layer(in_channels, stem_channels) - - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = strides[i] - dilation = dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - if plugins is not None: - 
stage_plugins = self.make_stage_plugins(plugins, i) - else: - stage_plugins = None - planes = base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=stage_plugins, - init_cfg=block_init_cfg) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = self.block.expansion * base_channels * 2**( - len(self.stage_blocks) - 1) - - def make_stage_plugins(self, plugins, stage_idx): - """Make plugins for ResNet ``stage_idx`` th stage. - - Currently we support to insert ``context_block``, - ``empirical_attention_block``, ``nonlocal_block`` into the backbone - like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of - Bottleneck. - - An example of plugins format could be: - - Examples: - >>> plugins=[ - ... dict(cfg=dict(type='xxx', arg1='xxx'), - ... stages=(False, True, True, True), - ... position='after_conv2'), - ... dict(cfg=dict(type='yyy'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='1'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='2'), - ... stages=(True, True, True, True), - ... position='after_conv3') - ... ] - >>> self = ResNet(depth=18) - >>> stage_plugins = self.make_stage_plugins(plugins, 0) - >>> assert len(stage_plugins) == 3 - - Suppose ``stage_idx=0``, the structure of blocks in the stage would be: - - .. code-block:: none - - conv1-> conv2->conv3->yyy->zzz1->zzz2 - - Suppose 'stage_idx=1', the structure of blocks in the stage would be: - - .. code-block:: none - - conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 - - If stages is missing, the plugin would be applied to all stages. - - Args: - plugins (list[dict]): List of plugins cfg to build. The postfix is - required if multiple same type plugins are inserted. 
- stage_idx (int): Index of stage to build - - Returns: - list[dict]: Plugins for current stage - """ - stage_plugins = [] - for plugin in plugins: - plugin = plugin.copy() - stages = plugin.pop('stages', None) - assert stages is None or len(stages) == self.num_stages - # whether to insert plugin into current stage - if stages is None or stages[stage_idx]: - stage_plugins.append(plugin) - - return stage_plugins - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer(**kwargs) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def _make_stem_layer(self, in_channels, stem_channels): - if self.deep_stem: - self.stem = nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels)[1], - nn.ReLU(inplace=True)) - else: - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, stem_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - if self.deep_stem: - self.stem.eval() - for param in self.stem.parameters(): - param.requires_grad = False - else: - self.norm1.eval() - for m in [self.conv1, self.norm1]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'layer{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(ResNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - -@BACKBONES.register_module() -class ResNetV1d(ResNet): - r"""ResNetV1d variant described in `Bag of Tricks - `_. - - Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in - the input stem with three 3x3 convs. And in the downsampling block, a 2x2 - avg_pool with stride 2 is added before conv, whose stride is changed to 1. 
- """ - - def __init__(self, **kwargs): - super(ResNetV1d, self).__init__( - deep_stem=True, avg_down=True, **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/resnext.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/resnext.py deleted file mode 100644 index 8675d7c1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/resnext.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - if self.with_plugins: - self._del_block_plugins(self.after_conv1_plugin_names + - self.after_conv2_plugin_names + - self.after_conv3_plugin_names) - self.after_conv1_plugin_names = self.make_block_plugins( - width, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - width, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - self.planes * self.expansion, self.after_conv3_plugins) - - def _del_block_plugins(self, plugin_names): - """delete plugins for block if exist. - - Args: - plugin_names (list[str]): List of plugins name to delete. - """ - assert isinstance(plugin_names, list) - for plugin_name in plugin_names: - del self._modules[plugin_name] - - -@BACKBONES.register_module() -class ResNeXt(ResNet): - """ResNeXt backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Resnet stages. Default: 4. - groups (int): Group of resnext. 
- base_width (int): Base width of resnext. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/ssd_vgg.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/ssd_vgg.py deleted file mode 100644 index c15aeac0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/ssd_vgg.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -from mmcv.cnn import VGG -from mmcv.runner import BaseModule - -from ..builder import BACKBONES -from ..necks import ssd_neck - - -@BACKBONES.register_module() -class SSDVGG(VGG, BaseModule): - """VGG Backbone network for single-shot-detection. - - Args: - depth (int): Depth of vgg, from {11, 13, 16, 19}. - with_last_pool (bool): Whether to add a pooling layer at the last - of the model - ceil_mode (bool): When True, will use `ceil` instead of `floor` - to compute the output shape. - out_indices (Sequence[int]): Output from which stages. - out_feature_indices (Sequence[int]): Output from which feature map. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - input_size (int, optional): Deprecated argumment. - Width and height of input, from {300, 512}. - l2_norm_scale (float, optional) : Deprecated argumment. - L2 normalization layer init scale. - - Example: - >>> self = SSDVGG(input_size=300, depth=11) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 300, 300) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 1024, 19, 19) - (1, 512, 10, 10) - (1, 256, 5, 5) - (1, 256, 3, 3) - (1, 256, 1, 1) - """ - extra_setting = { - 300: (256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256), - 512: (256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256, 128), - } - - def __init__(self, - depth, - with_last_pool=False, - ceil_mode=True, - out_indices=(3, 4), - out_feature_indices=(22, 34), - pretrained=None, - init_cfg=None, - input_size=None, - l2_norm_scale=None): - # TODO: in_channels for mmcv.VGG - super(SSDVGG, self).__init__( - depth, - with_last_pool=with_last_pool, - ceil_mode=ceil_mode, - out_indices=out_indices) - - self.features.add_module( - str(len(self.features)), - nn.MaxPool2d(kernel_size=3, stride=1, padding=1)) - self.features.add_module( - str(len(self.features)), - nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)) - self.features.add_module( - str(len(self.features)), nn.ReLU(inplace=True)) - self.features.add_module( - str(len(self.features)), nn.Conv2d(1024, 1024, kernel_size=1)) - self.features.add_module( - str(len(self.features)), nn.ReLU(inplace=True)) - self.out_feature_indices = out_feature_indices - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - - if init_cfg is not None: - self.init_cfg = init_cfg - elif isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict(type='Constant', val=1, layer='BatchNorm2d'), - dict(type='Normal', std=0.01, layer='Linear'), - ] - else: - raise TypeError('pretrained must be a str or None') - - if input_size is not None: - warnings.warn('DeprecationWarning: input_size is deprecated') - if l2_norm_scale is not None: - warnings.warn('DeprecationWarning: l2_norm_scale in VGG is ' - 'deprecated, it has been moved to SSDNeck.') - - def init_weights(self, pretrained=None): - super(VGG, self).init_weights() - - def forward(self, x): - """Forward function.""" - outs = [] - for i, layer in enumerate(self.features): - x = layer(x) - if i in self.out_feature_indices: - outs.append(x) - - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - -class L2Norm(ssd_neck.L2Norm): - - def __init__(self, **kwargs): - super(L2Norm, self).__init__(**kwargs) - warnings.warn('DeprecationWarning: L2Norm in ssd_vgg.py ' - 'is deprecated, please use L2Norm in ' - 'mmdet/models/necks/ssd_neck.py instead') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/swin.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/swin.py deleted file mode 100644 index c9f1455a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/swin.py +++ /dev/null @@ -1,763 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings -from collections import OrderedDict -from copy import deepcopy - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_norm_layer, constant_init, trunc_normal_init -from mmcv.cnn.bricks.transformer import FFN, build_dropout -from mmcv.cnn.utils.weight_init import trunc_normal_ -from mmcv.runner import BaseModule, ModuleList, _load_checkpoint -from mmcv.utils import to_2tuple - -from ...utils import get_root_logger -from ..builder import BACKBONES -from ..utils.ckpt_convert import swin_converter -from ..utils.transformer import PatchEmbed, PatchMerging - - -class WindowMSA(BaseModule): - """Window based multi-head self-attention (W-MSA) module with relative - position bias. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (tuple[int]): The height and width of the window. - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. - Default: 0.0 - proj_drop_rate (float, optional): Dropout ratio of output. Default: 0. - init_cfg (dict | None, optional): The Config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - window_size, - qkv_bias=True, - qk_scale=None, - attn_drop_rate=0., - proj_drop_rate=0., - init_cfg=None): - - super().__init__() - self.embed_dims = embed_dims - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_embed_dims = embed_dims // num_heads - self.scale = qk_scale or head_embed_dims**-0.5 - self.init_cfg = init_cfg - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), - num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # About 2x faster than original impl - Wh, Ww = self.window_size - rel_index_coords = self.double_step_seq(2 * Ww - 1, Wh, 1, Ww) - rel_position_index = rel_index_coords + rel_index_coords.T - rel_position_index = rel_position_index.flip(1).contiguous() - self.register_buffer('relative_position_index', rel_position_index) - - self.qkv = nn.Linear(embed_dims, embed_dims * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop_rate) - self.proj = nn.Linear(embed_dims, embed_dims) - self.proj_drop = nn.Dropout(proj_drop_rate) - - self.softmax = nn.Softmax(dim=-1) - - def init_weights(self): - trunc_normal_(self.relative_position_bias_table, std=0.02) - - def forward(self, x, mask=None): - """ - Args: - - x (tensor): input features with shape of (num_windows*B, N, C) - mask (tensor | None, Optional): mask with shape of (num_windows, - Wh*Ww, Wh*Ww), value should be between (-inf, 0]. 
- """ - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, - C // self.num_heads).permute(2, 0, 3, 1, 4) - # make torchscript happy (cannot use tensor as tuple) - q, k, v = qkv[0], qkv[1], qkv[2] - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], - self.window_size[0] * self.window_size[1], - -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B // nW, nW, self.num_heads, N, - N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - @staticmethod - def double_step_seq(step1, len1, step2, len2): - seq1 = torch.arange(0, step1 * len1, step1) - seq2 = torch.arange(0, step2 * len2, step2) - return (seq1[:, None] + seq2[None, :]).reshape(1, -1) - - -class ShiftWindowMSA(BaseModule): - """Shifted Window Multihead Self-Attention Module. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): The height and width of the window. - shift_size (int, optional): The shift step of each window towards - right-bottom. If zero, act as regular window-msa. Defaults to 0. - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Defaults: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. - Defaults: 0. - proj_drop_rate (float, optional): Dropout ratio of output. - Defaults: 0. - dropout_layer (dict, optional): The dropout_layer used before output. - Defaults: dict(type='DropPath', drop_prob=0.). - init_cfg (dict, optional): The extra config for initialization. - Default: None. 
- """ - - def __init__(self, - embed_dims, - num_heads, - window_size, - shift_size=0, - qkv_bias=True, - qk_scale=None, - attn_drop_rate=0, - proj_drop_rate=0, - dropout_layer=dict(type='DropPath', drop_prob=0.), - init_cfg=None): - super().__init__(init_cfg) - - self.window_size = window_size - self.shift_size = shift_size - assert 0 <= self.shift_size < self.window_size - - self.w_msa = WindowMSA( - embed_dims=embed_dims, - num_heads=num_heads, - window_size=to_2tuple(window_size), - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop_rate=attn_drop_rate, - proj_drop_rate=proj_drop_rate, - init_cfg=None) - - self.drop = build_dropout(dropout_layer) - - def forward(self, query, hw_shape): - B, L, C = query.shape - H, W = hw_shape - assert L == H * W, 'input feature has wrong size' - query = query.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - query = F.pad(query, (0, 0, 0, pad_r, 0, pad_b)) - H_pad, W_pad = query.shape[1], query.shape[2] - - # cyclic shift - if self.shift_size > 0: - shifted_query = torch.roll( - query, - shifts=(-self.shift_size, -self.shift_size), - dims=(1, 2)) - - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H_pad, W_pad, 1), device=query.device) - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, - -self.shift_size), slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, - -self.shift_size), slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - # nW, window_size, window_size, 1 - mask_windows = self.window_partition(img_mask) - mask_windows = mask_windows.view( - -1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, - float(-100.0)).masked_fill( - attn_mask == 0, float(0.0)) - else: - shifted_query = query - attn_mask = None - - # nW*B, window_size, window_size, C - query_windows = self.window_partition(shifted_query) - # nW*B, window_size*window_size, C - query_windows = query_windows.view(-1, self.window_size**2, C) - - # W-MSA/SW-MSA (nW*B, window_size*window_size, C) - attn_windows = self.w_msa(query_windows, mask=attn_mask) - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, - self.window_size, C) - - # B H' W' C - shifted_x = self.window_reverse(attn_windows, H_pad, W_pad) - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll( - shifted_x, - shifts=(self.shift_size, self.shift_size), - dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - x = self.drop(x) - return x - - def window_reverse(self, windows, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - window_size = self.window_size - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, - window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - def window_partition(self, x): - """ - Args: - x: (B, H, W, C) - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - window_size = self.window_size - x = 
x.view(B, H // window_size, window_size, W // window_size, - window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous() - windows = windows.view(-1, window_size, window_size, C) - return windows - - -class SwinBlock(BaseModule): - """" - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - window_size (int, optional): The local window scale. Default: 7. - shift (bool, optional): whether to shift window or not. Default False. - qkv_bias (bool, optional): enable bias for qkv if True. Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - drop_rate (float, optional): Dropout rate. Default: 0. - attn_drop_rate (float, optional): Attention dropout rate. Default: 0. - drop_path_rate (float, optional): Stochastic depth rate. Default: 0. - act_cfg (dict, optional): The config dict of activation function. - Default: dict(type='GELU'). - norm_cfg (dict, optional): The config dict of normalization. - Default: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - init_cfg (dict | list | None, optional): The init config. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - window_size=7, - shift=False, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - init_cfg=None): - - super(SwinBlock, self).__init__() - - self.init_cfg = init_cfg - self.with_cp = with_cp - - self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] - self.attn = ShiftWindowMSA( - embed_dims=embed_dims, - num_heads=num_heads, - window_size=window_size, - shift_size=window_size // 2 if shift else 0, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop_rate=attn_drop_rate, - proj_drop_rate=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - init_cfg=None) - - self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=2, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg, - add_identity=True, - init_cfg=None) - - def forward(self, x, hw_shape): - - def _inner_forward(x): - identity = x - x = self.norm1(x) - x = self.attn(x, hw_shape) - - x = x + identity - - identity = x - x = self.norm2(x) - x = self.ffn(x, identity=identity) - - return x - - if self.with_cp and x.requires_grad: - x = cp.checkpoint(_inner_forward, x) - else: - x = _inner_forward(x) - - return x - - -class SwinBlockSequence(BaseModule): - """Implements one stage in Swin Transformer. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - depth (int): The number of blocks in this stage. - window_size (int, optional): The local window scale. Default: 7. - qkv_bias (bool, optional): enable bias for qkv if True. Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - drop_rate (float, optional): Dropout rate. Default: 0. - attn_drop_rate (float, optional): Attention dropout rate. Default: 0. - drop_path_rate (float | list[float], optional): Stochastic depth - rate. Default: 0. 
- downsample (BaseModule | None, optional): The downsample operation - module. Default: None. - act_cfg (dict, optional): The config dict of activation function. - Default: dict(type='GELU'). - norm_cfg (dict, optional): The config dict of normalization. - Default: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - init_cfg (dict | list | None, optional): The init config. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - depth, - window_size=7, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - downsample=None, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - - if isinstance(drop_path_rate, list): - drop_path_rates = drop_path_rate - assert len(drop_path_rates) == depth - else: - drop_path_rates = [deepcopy(drop_path_rate) for _ in range(depth)] - - self.blocks = ModuleList() - for i in range(depth): - block = SwinBlock( - embed_dims=embed_dims, - num_heads=num_heads, - feedforward_channels=feedforward_channels, - window_size=window_size, - shift=False if i % 2 == 0 else True, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=drop_path_rates[i], - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp, - init_cfg=None) - self.blocks.append(block) - - self.downsample = downsample - - def forward(self, x, hw_shape): - for block in self.blocks: - x = block(x, hw_shape) - - if self.downsample: - x_down, down_hw_shape = self.downsample(x, hw_shape) - return x_down, down_hw_shape, x, hw_shape - else: - return x, hw_shape, x, hw_shape - - -@BACKBONES.register_module() -class SwinTransformer(BaseModule): - """ Swin Transformer - A PyTorch implement of : `Swin Transformer: - Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/abs/2103.14030 - - Inspiration from - https://github.com/microsoft/Swin-Transformer - - Args: - pretrain_img_size (int | tuple[int]): The size of input image when - pretrain. Defaults: 224. - in_channels (int): The num of input channels. - Defaults: 3. - embed_dims (int): The feature dimension. Default: 96. - patch_size (int | tuple[int]): Patch size. Default: 4. - window_size (int): Window size. Default: 7. - mlp_ratio (int): Ratio of mlp hidden dim to embedding dim. - Default: 4. - depths (tuple[int]): Depths of each Swin Transformer stage. - Default: (2, 2, 6, 2). - num_heads (tuple[int]): Parallel attention heads of each Swin - Transformer stage. Default: (3, 6, 12, 24). - strides (tuple[int]): The patch merging or patch embedding stride of - each Swin Transformer stage. (In swin, we set kernel size equal to - stride.) Default: (4, 2, 2, 2). - out_indices (tuple[int]): Output from which stages. - Default: (0, 1, 2, 3). - qkv_bias (bool, optional): If True, add a learnable bias to query, key, - value. Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - patch_norm (bool): If add a norm layer for patch embed and patch - merging. Default: True. - drop_rate (float): Dropout rate. Defaults: 0. - attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Defaults: 0.1. - use_abs_pos_embed (bool): If True, add absolute position embedding to - the patch embedding. Defaults: False. 
- act_cfg (dict): Config dict for activation layer. - Default: dict(type='LN'). - norm_cfg (dict): Config dict for normalization layer at - output of backone. Defaults: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - pretrained (str, optional): model pretrained path. Default: None. - convert_weights (bool): The flag indicates whether the - pre-trained model is from the original repo. We may need - to convert some keys to make it compatible. - Default: False. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - Default: -1 (-1 means not freezing any parameters). - init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - pretrain_img_size=224, - in_channels=3, - embed_dims=96, - patch_size=4, - window_size=7, - mlp_ratio=4, - depths=(2, 2, 6, 2), - num_heads=(3, 6, 12, 24), - strides=(4, 2, 2, 2), - out_indices=(0, 1, 2, 3), - qkv_bias=True, - qk_scale=None, - patch_norm=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1, - use_abs_pos_embed=False, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - pretrained=None, - convert_weights=False, - frozen_stages=-1, - init_cfg=None): - self.convert_weights = convert_weights - self.frozen_stages = frozen_stages - if isinstance(pretrain_img_size, int): - pretrain_img_size = to_2tuple(pretrain_img_size) - elif isinstance(pretrain_img_size, tuple): - if len(pretrain_img_size) == 1: - pretrain_img_size = to_2tuple(pretrain_img_size[0]) - assert len(pretrain_img_size) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(pretrain_img_size)}' - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - self.init_cfg = init_cfg - else: - raise TypeError('pretrained must be a str or None') - - super(SwinTransformer, self).__init__(init_cfg=init_cfg) - - num_layers = len(depths) - self.out_indices = out_indices - self.use_abs_pos_embed = use_abs_pos_embed - - assert strides[0] == patch_size, 'Use non-overlapping patch embed.' 
- - self.patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims, - conv_type='Conv2d', - kernel_size=patch_size, - stride=strides[0], - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None) - - if self.use_abs_pos_embed: - patch_row = pretrain_img_size[0] // patch_size - patch_col = pretrain_img_size[1] // patch_size - num_patches = patch_row * patch_col - self.absolute_pos_embed = nn.Parameter( - torch.zeros((1, num_patches, embed_dims))) - - self.drop_after_pos = nn.Dropout(p=drop_rate) - - # set stochastic depth decay rule - total_depth = sum(depths) - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, total_depth) - ] - - self.stages = ModuleList() - in_channels = embed_dims - for i in range(num_layers): - if i < num_layers - 1: - downsample = PatchMerging( - in_channels=in_channels, - out_channels=2 * in_channels, - stride=strides[i + 1], - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None) - else: - downsample = None - - stage = SwinBlockSequence( - embed_dims=in_channels, - num_heads=num_heads[i], - feedforward_channels=mlp_ratio * in_channels, - depth=depths[i], - window_size=window_size, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[sum(depths[:i]):sum(depths[:i + 1])], - downsample=downsample, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp, - init_cfg=None) - self.stages.append(stage) - if downsample: - in_channels = downsample.out_channels - - self.num_features = [int(embed_dims * 2**i) for i in range(num_layers)] - # Add a norm layer for each output - for i in out_indices: - layer = build_norm_layer(norm_cfg, self.num_features[i])[1] - layer_name = f'norm{i}' - self.add_module(layer_name, layer) - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - if self.use_abs_pos_embed: - self.absolute_pos_embed.requires_grad = False - self.drop_after_pos.eval() - - for i in range(1, self.frozen_stages + 1): - - if (i - 1) in self.out_indices: - norm_layer = getattr(self, f'norm{i-1}') - norm_layer.eval() - for param in norm_layer.parameters(): - param.requires_grad = False - - m = self.stages[i - 1] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self): - logger = get_root_logger() - if self.init_cfg is None: - logger.warn(f'No pre-trained weights for ' - f'{self.__class__.__name__}, ' - f'training start from scratch') - if self.use_abs_pos_embed: - trunc_normal_(self.absolute_pos_embed, std=0.02) - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) 
- elif isinstance(m, nn.LayerNorm): - constant_init(m, 1.0) - else: - assert 'checkpoint' in self.init_cfg, f'Only support ' \ - f'specify `Pretrained` in ' \ - f'`init_cfg` in ' \ - f'{self.__class__.__name__} ' - ckpt = _load_checkpoint( - self.init_cfg.checkpoint, logger=logger, map_location='cpu') - if 'state_dict' in ckpt: - _state_dict = ckpt['state_dict'] - elif 'model' in ckpt: - _state_dict = ckpt['model'] - else: - _state_dict = ckpt - if self.convert_weights: - # supported loading weight from original repo, - _state_dict = swin_converter(_state_dict) - - state_dict = OrderedDict() - for k, v in _state_dict.items(): - if k.startswith('backbone.'): - state_dict[k[9:]] = v - - # strip prefix of state_dict - if list(state_dict.keys())[0].startswith('module.'): - state_dict = {k[7:]: v for k, v in state_dict.items()} - - # reshape absolute position embedding - if state_dict.get('absolute_pos_embed') is not None: - absolute_pos_embed = state_dict['absolute_pos_embed'] - N1, L, C1 = absolute_pos_embed.size() - N2, C2, H, W = self.absolute_pos_embed.size() - if N1 != N2 or C1 != C2 or L != H * W: - logger.warning('Error in loading absolute_pos_embed, pass') - else: - state_dict['absolute_pos_embed'] = absolute_pos_embed.view( - N2, H, W, C2).permute(0, 3, 1, 2).contiguous() - - # interpolate position bias table if needed - relative_position_bias_table_keys = [ - k for k in state_dict.keys() - if 'relative_position_bias_table' in k - ] - for table_key in relative_position_bias_table_keys: - table_pretrained = state_dict[table_key] - table_current = self.state_dict()[table_key] - L1, nH1 = table_pretrained.size() - L2, nH2 = table_current.size() - if nH1 != nH2: - logger.warning(f'Error in loading {table_key}, pass') - elif L1 != L2: - S1 = int(L1**0.5) - S2 = int(L2**0.5) - table_pretrained_resized = F.interpolate( - table_pretrained.permute(1, 0).reshape(1, nH1, S1, S1), - size=(S2, S2), - mode='bicubic') - state_dict[table_key] = table_pretrained_resized.view( - nH2, L2).permute(1, 0).contiguous() - - # load state_dict - self.load_state_dict(state_dict, False) - - def forward(self, x): - x, hw_shape = self.patch_embed(x) - - if self.use_abs_pos_embed: - x = x + self.absolute_pos_embed - x = self.drop_after_pos(x) - - outs = [] - for i, stage in enumerate(self.stages): - x, hw_shape, out, out_hw_shape = stage(x, hw_shape) - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - out = norm_layer(out) - out = out.view(-1, *out_hw_shape, - self.num_features[i]).permute(0, 3, 1, - 2).contiguous() - outs.append(out) - - return outs diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/trident_resnet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/trident_resnet.py deleted file mode 100644 index 013ba64b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/backbones/trident_resnet.py +++ /dev/null @@ -1,298 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule -from torch.nn.modules.utils import _pair - -from mmdet.models.backbones.resnet import Bottleneck, ResNet -from mmdet.models.builder import BACKBONES - - -class TridentConv(BaseModule): - """Trident Convolution Module. - - Args: - in_channels (int): Number of channels in input. - out_channels (int): Number of channels in output. - kernel_size (int): Size of convolution kernel. 
- stride (int, optional): Convolution stride. Default: 1. - trident_dilations (tuple[int, int, int], optional): Dilations of - different trident branch. Default: (1, 2, 3). - test_branch_idx (int, optional): In inference, all 3 branches will - be used if `test_branch_idx==-1`, otherwise only branch with - index `test_branch_idx` will be used. Default: 1. - bias (bool, optional): Whether to use bias in convolution or not. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - trident_dilations=(1, 2, 3), - test_branch_idx=1, - bias=False, - init_cfg=None): - super(TridentConv, self).__init__(init_cfg) - self.num_branch = len(trident_dilations) - self.with_bias = bias - self.test_branch_idx = test_branch_idx - self.stride = _pair(stride) - self.kernel_size = _pair(kernel_size) - self.paddings = _pair(trident_dilations) - self.dilations = trident_dilations - self.in_channels = in_channels - self.out_channels = out_channels - self.bias = bias - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels, *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.bias = None - - def extra_repr(self): - tmpstr = f'in_channels={self.in_channels}' - tmpstr += f', out_channels={self.out_channels}' - tmpstr += f', kernel_size={self.kernel_size}' - tmpstr += f', num_branch={self.num_branch}' - tmpstr += f', test_branch_idx={self.test_branch_idx}' - tmpstr += f', stride={self.stride}' - tmpstr += f', paddings={self.paddings}' - tmpstr += f', dilations={self.dilations}' - tmpstr += f', bias={self.bias}' - return tmpstr - - def forward(self, inputs): - if self.training or self.test_branch_idx == -1: - outputs = [ - F.conv2d(input, self.weight, self.bias, self.stride, padding, - dilation) for input, dilation, padding in zip( - inputs, self.dilations, self.paddings) - ] - else: - assert len(inputs) == 1 - outputs = [ - F.conv2d(inputs[0], self.weight, self.bias, self.stride, - self.paddings[self.test_branch_idx], - self.dilations[self.test_branch_idx]) - ] - - return outputs - - -# Since TridentNet is defined over ResNet50 and ResNet101, here we -# only support TridentBottleneckBlock. -class TridentBottleneck(Bottleneck): - """BottleBlock for TridentResNet. - - Args: - trident_dilations (tuple[int, int, int]): Dilations of different - trident branch. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. - concat_output (bool): Whether to concat the output list to a Tensor. - `True` only in the last Block. 
- """ - - def __init__(self, trident_dilations, test_branch_idx, concat_output, - **kwargs): - - super(TridentBottleneck, self).__init__(**kwargs) - self.trident_dilations = trident_dilations - self.num_branch = len(trident_dilations) - self.concat_output = concat_output - self.test_branch_idx = test_branch_idx - self.conv2 = TridentConv( - self.planes, - self.planes, - kernel_size=3, - stride=self.conv2_stride, - bias=False, - trident_dilations=self.trident_dilations, - test_branch_idx=test_branch_idx, - init_cfg=dict( - type='Kaiming', - distribution='uniform', - mode='fan_in', - override=dict(name='conv2'))) - - def forward(self, x): - - def _inner_forward(x): - num_branch = ( - self.num_branch - if self.training or self.test_branch_idx == -1 else 1) - identity = x - if not isinstance(x, list): - x = (x, ) * num_branch - identity = x - if self.downsample is not None: - identity = [self.downsample(b) for b in x] - - out = [self.conv1(b) for b in x] - out = [self.norm1(b) for b in out] - out = [self.relu(b) for b in out] - - if self.with_plugins: - for k in range(len(out)): - out[k] = self.forward_plugin(out[k], - self.after_conv1_plugin_names) - - out = self.conv2(out) - out = [self.norm2(b) for b in out] - out = [self.relu(b) for b in out] - if self.with_plugins: - for k in range(len(out)): - out[k] = self.forward_plugin(out[k], - self.after_conv2_plugin_names) - - out = [self.conv3(b) for b in out] - out = [self.norm3(b) for b in out] - - if self.with_plugins: - for k in range(len(out)): - out[k] = self.forward_plugin(out[k], - self.after_conv3_plugin_names) - - out = [ - out_b + identity_b for out_b, identity_b in zip(out, identity) - ] - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = [self.relu(b) for b in out] - if self.concat_output: - out = torch.cat(out, dim=0) - return out - - -def make_trident_res_layer(block, - inplanes, - planes, - num_blocks, - stride=1, - trident_dilations=(1, 2, 3), - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - test_branch_idx=-1): - """Build Trident Res Layers.""" - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - for i in range(num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride if i == 0 else 1, - trident_dilations=trident_dilations, - downsample=downsample if i == 0 else None, - style=style, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=plugins, - test_branch_idx=test_branch_idx, - concat_output=True if i == num_blocks - 1 else False)) - inplanes = planes * block.expansion - return nn.Sequential(*layers) - - -@BACKBONES.register_module() -class TridentResNet(ResNet): - """The stem layer, stage 1 and stage 2 in Trident ResNet are identical to - ResNet, while in stage 3, Trident BottleBlock is utilized to replace the - normal BottleBlock to yield trident output. Different branch shares the - convolution weight but uses different dilations to achieve multi-scale - output. 
- - / stage3(b0) \ - x - stem - stage1 - stage2 - stage3(b1) - output - \ stage3(b2) / - - Args: - depth (int): Depth of resnet, from {50, 101, 152}. - num_branch (int): Number of branches in TridentNet. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. - trident_dilations (tuple[int]): Dilations of different trident branch. - len(trident_dilations) should be equal to num_branch. - """ # noqa - - def __init__(self, depth, num_branch, test_branch_idx, trident_dilations, - **kwargs): - - assert num_branch == len(trident_dilations) - assert depth in (50, 101, 152) - super(TridentResNet, self).__init__(depth, **kwargs) - assert self.num_stages == 3 - self.test_branch_idx = test_branch_idx - self.num_branch = num_branch - - last_stage_idx = self.num_stages - 1 - stride = self.strides[last_stage_idx] - dilation = trident_dilations - dcn = self.dcn if self.stage_with_dcn[last_stage_idx] else None - if self.plugins is not None: - stage_plugins = self.make_stage_plugins(self.plugins, - last_stage_idx) - else: - stage_plugins = None - planes = self.base_channels * 2**last_stage_idx - res_layer = make_trident_res_layer( - TridentBottleneck, - inplanes=(self.block.expansion * self.base_channels * - 2**(last_stage_idx - 1)), - planes=planes, - num_blocks=self.stage_blocks[last_stage_idx], - stride=stride, - trident_dilations=dilation, - style=self.style, - with_cp=self.with_cp, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=dcn, - plugins=stage_plugins, - test_branch_idx=self.test_branch_idx) - - layer_name = f'layer{last_stage_idx + 1}' - - self.__setattr__(layer_name, res_layer) - self.res_layers.pop(last_stage_idx) - self.res_layers.insert(last_stage_idx, layer_name) - - self._freeze_stages() diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/builder.py b/cv/3d_detection/paconv/pytorch/mmdet/models/builder.py deleted file mode 100644 index ace6209f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/builder.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -from mmcv.cnn import MODELS as MMCV_MODELS -from mmcv.utils import Registry - -MODELS = Registry('models', parent=MMCV_MODELS) - -BACKBONES = MODELS -NECKS = MODELS -ROI_EXTRACTORS = MODELS -SHARED_HEADS = MODELS -HEADS = MODELS -LOSSES = MODELS -DETECTORS = MODELS - - -def build_backbone(cfg): - """Build backbone.""" - return BACKBONES.build(cfg) - - -def build_neck(cfg): - """Build neck.""" - return NECKS.build(cfg) - - -def build_roi_extractor(cfg): - """Build roi extractor.""" - return ROI_EXTRACTORS.build(cfg) - - -def build_shared_head(cfg): - """Build shared head.""" - return SHARED_HEADS.build(cfg) - - -def build_head(cfg): - """Build head.""" - return HEADS.build(cfg) - - -def build_loss(cfg): - """Build loss.""" - return LOSSES.build(cfg) - - -def build_detector(cfg, train_cfg=None, test_cfg=None): - """Build detector.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return DETECTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/__init__.py deleted file mode 100644 index 375197a6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .anchor_free_head import AnchorFreeHead -from .anchor_head import AnchorHead -from .atss_head import ATSSHead -from .autoassign_head import AutoAssignHead -from .cascade_rpn_head import CascadeRPNHead, StageCascadeRPNHead -from .centernet_head import CenterNetHead -from .centripetal_head import CentripetalHead -from .corner_head import CornerHead -from .deformable_detr_head import DeformableDETRHead -from .detr_head import DETRHead -from .embedding_rpn_head import EmbeddingRPNHead -from .fcos_head import FCOSHead -from .fovea_head import FoveaHead -from .free_anchor_retina_head import FreeAnchorRetinaHead -from .fsaf_head import FSAFHead -from .ga_retina_head import GARetinaHead -from .ga_rpn_head import GARPNHead -from .gfl_head import GFLHead -from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead -from .lad_head import LADHead -from .ld_head import LDHead -from .mask2former_head import Mask2FormerHead -from .maskformer_head import MaskFormerHead -from .nasfcos_head import NASFCOSHead -from .paa_head import PAAHead -from .pisa_retinanet_head import PISARetinaHead -from .pisa_ssd_head import PISASSDHead -from .reppoints_head import RepPointsHead -from .retina_head import RetinaHead -from .retina_sepbn_head import RetinaSepBNHead -from .rpn_head import RPNHead -from .sabl_retina_head import SABLRetinaHead -from .solo_head import DecoupledSOLOHead, DecoupledSOLOLightHead, SOLOHead -from .ssd_head import SSDHead -from .tood_head import TOODHead -from .vfnet_head import VFNetHead -from .yolact_head import YOLACTHead, YOLACTProtonet, YOLACTSegmHead -from .yolo_head import YOLOV3Head -from .yolof_head import YOLOFHead -from .yolox_head import YOLOXHead - -__all__ = [ - 'AnchorFreeHead', 'AnchorHead', 'GuidedAnchorHead', 'FeatureAdaption', - 'RPNHead', 'GARPNHead', 'RetinaHead', 'RetinaSepBNHead', 
'GARetinaHead', - 'SSDHead', 'FCOSHead', 'RepPointsHead', 'FoveaHead', - 'FreeAnchorRetinaHead', 'ATSSHead', 'FSAFHead', 'NASFCOSHead', - 'PISARetinaHead', 'PISASSDHead', 'GFLHead', 'CornerHead', 'YOLACTHead', - 'YOLACTSegmHead', 'YOLACTProtonet', 'YOLOV3Head', 'PAAHead', - 'SABLRetinaHead', 'CentripetalHead', 'VFNetHead', 'StageCascadeRPNHead', - 'CascadeRPNHead', 'EmbeddingRPNHead', 'LDHead', 'CascadeRPNHead', - 'AutoAssignHead', 'DETRHead', 'YOLOFHead', 'DeformableDETRHead', - 'SOLOHead', 'DecoupledSOLOHead', 'CenterNetHead', 'YOLOXHead', - 'DecoupledSOLOLightHead', 'LADHead', 'TOODHead', 'MaskFormerHead', - 'Mask2FormerHead' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/anchor_free_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/anchor_free_head.py deleted file mode 100644 index b0460b94..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/anchor_free_head.py +++ /dev/null @@ -1,350 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from abc import abstractmethod - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import force_fp32 - -from mmdet.core import build_bbox_coder, multi_apply -from mmdet.core.anchor.point_generator import MlvlPointGenerator -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class AnchorFreeHead(BaseDenseHead, BBoxTestMixin): - """Anchor-free head (FCOS, Fovea, RepPoints, etc.). - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - stacked_convs (int): Number of stacking convs of the head. - strides (tuple): Downsample factor of each feature map. - dcn_on_last_conv (bool): If true, use dcn in the last layer of - towers. Default: False. - conv_bias (bool | str): If specified as `auto`, it will be decided by - the norm_cfg. Bias of conv will be set as True if `norm_cfg` is - None, otherwise False. Default: "auto". - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - bbox_coder (dict): Config of bbox coder. Defaults - 'DistancePointBBoxCoder'. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ # noqa: W605 - - _version = 1 - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - stacked_convs=4, - strides=(4, 8, 16, 32, 64), - dcn_on_last_conv=False, - conv_bias='auto', - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='IoULoss', loss_weight=1.0), - bbox_coder=dict(type='DistancePointBBoxCoder'), - conv_cfg=None, - norm_cfg=None, - train_cfg=None, - test_cfg=None, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01))): - super(AnchorFreeHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.dcn_on_last_conv = dcn_on_last_conv - assert conv_bias == 'auto' or isinstance(conv_bias, bool) - self.conv_bias = conv_bias - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.bbox_coder = build_bbox_coder(bbox_coder) - - self.prior_generator = MlvlPointGenerator(strides) - - # In order to keep a more general interface and be consistent with - # anchor_head. We can think of point like one anchor - self.num_base_priors = self.prior_generator.num_base_priors[0] - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - - self._init_layers() - - def _init_layers(self): - """Initialize layers of the head.""" - self._init_cls_convs() - self._init_reg_convs() - self._init_predictor() - - def _init_cls_convs(self): - """Initialize classification conv layers of the head.""" - self.cls_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_reg_convs(self): - """Initialize bbox regression conv layers of the head.""" - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_predictor(self): - """Initialize predictor layers of the head.""" - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Hack some keys of the model state dict so that can load checkpoints - of previous version.""" - version = local_metadata.get('version', None) - if version is None: - # the key is different in early versions - # for example, 'fcos_cls' become 'conv_cls' now - bbox_head_keys = [ - k for k in state_dict.keys() if 
k.startswith(prefix) - ] - ori_predictor_keys = [] - new_predictor_keys = [] - # e.g. 'fcos_cls' or 'fcos_reg' - for key in bbox_head_keys: - ori_predictor_keys.append(key) - key = key.split('.') - conv_name = None - if key[1].endswith('cls'): - conv_name = 'conv_cls' - elif key[1].endswith('reg'): - conv_name = 'conv_reg' - elif key[1].endswith('centerness'): - conv_name = 'conv_centerness' - else: - assert NotImplementedError - if conv_name is not None: - key[1] = conv_name - new_predictor_keys.append('.'.join(key)) - else: - ori_predictor_keys.pop(-1) - for i in range(len(new_predictor_keys)): - state_dict[new_predictor_keys[i]] = state_dict.pop( - ori_predictor_keys[i]) - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually contain classification scores and bbox predictions. - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - """ - return multi_apply(self.forward_single, feats)[:2] - - def forward_single(self, x): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - - Returns: - tuple: Scores for each class, bbox predictions, features - after classification and regression conv layers, some - models needs these features like FCOS. - """ - cls_feat = x - reg_feat = x - - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - cls_score = self.conv_cls(cls_feat) - - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = self.conv_reg(reg_feat) - return cls_score, bbox_pred, cls_feat, reg_feat - - @abstractmethod - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - """ - - raise NotImplementedError - - @abstractmethod - def get_targets(self, points, gt_bboxes_list, gt_labels_list): - """Compute regression, classification and centerness targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). 
- """ - raise NotImplementedError - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points of a single scale level. - - This function will be deprecated soon. - """ - - warnings.warn( - '`_get_points_single` in `AnchorFreeHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map ' - 'with `self.prior_generator.single_level_grid_priors` ') - - h, w = featmap_size - # First create Range with the default dtype, than convert to - # target `dtype` for onnx exporting. - x_range = torch.arange(w, device=device).to(dtype) - y_range = torch.arange(h, device=device).to(dtype) - y, x = torch.meshgrid(y_range, x_range) - if flatten: - y = y.flatten() - x = x.flatten() - return y, x - - def get_points(self, featmap_sizes, dtype, device, flatten=False): - """Get points according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - dtype (torch.dtype): Type of points. - device (torch.device): Device of points. - - Returns: - tuple: points of each image. - """ - warnings.warn( - '`get_points` in `AnchorFreeHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of all levels ' - 'with `self.prior_generator.grid_priors` ') - - mlvl_points = [] - for i in range(len(featmap_sizes)): - mlvl_points.append( - self._get_points_single(featmap_sizes[i], self.strides[i], - dtype, device, flatten)) - return mlvl_points - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[ndarray]: bbox results of each class - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/anchor_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/anchor_head.py deleted file mode 100644 index d1bfab62..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/anchor_head.py +++ /dev/null @@ -1,542 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, images_to_levels, - multi_apply, unmap) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class AnchorHead(BaseDenseHead, BBoxTestMixin): - """Anchor-based head (RPN, RetinaNet, SSD, etc.). - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. 
- reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=(.0, .0, .0, .0), - target_stds=(1.0, 1.0, 1.0, 1.0)), - reg_decoded_bbox=False, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=dict(type='Normal', layer='Conv2d', std=0.01)): - super(AnchorHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - - if self.cls_out_channels <= 0: - raise ValueError(f'num_classes={num_classes} is too small') - self.reg_decoded_bbox = reg_decoded_bbox - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - if hasattr(self.train_cfg, - 'sampler') and self.train_cfg.sampler.type.split( - '.')[-1] != 'PseudoSampler': - self.sampling = True - sampler_cfg = self.train_cfg.sampler - # avoid BC-breaking - if loss_cls['type'] in [ - 'FocalLoss', 'GHMC', 'QualityFocalLoss' - ]: - warnings.warn( - 'DeprecationWarning: Determining whether to sampling' - 'by loss type is deprecated, please delete sampler in' - 'your config when using `FocalLoss`, `GHMC`, ' - '`QualityFocalLoss` or other FocalLoss variant.') - self.sampling = False - sampler_cfg = dict(type='PseudoSampler') - else: - self.sampling = False - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - self.prior_generator = build_prior_generator(anchor_generator) - - # Usually the numbers of anchors for each level are the same - # except SSD detectors. 
So it is an int in the most dense - # heads but a list of int in SSDHead - self.num_base_priors = self.prior_generator.num_base_priors[0] - self._init_layers() - - @property - def num_anchors(self): - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'for consistency or also use ' - '`num_base_priors` instead') - return self.prior_generator.num_base_priors[0] - - @property - def anchor_generator(self): - warnings.warn('DeprecationWarning: anchor_generator is deprecated, ' - 'please use "prior_generator" instead') - return self.prior_generator - - def _init_layers(self): - """Initialize layers of the head.""" - self.conv_cls = nn.Conv2d(self.in_channels, - self.num_base_priors * self.cls_out_channels, - 1) - self.conv_reg = nn.Conv2d(self.in_channels, self.num_base_priors * 4, - 1) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level \ - the channels number is num_base_priors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale \ - level, the channels number is num_base_priors * 4. - """ - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - return cls_score, bbox_pred - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: A tuple of classification scores and bbox prediction. - - - cls_scores (list[Tensor]): Classification scores for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_base_priors * num_classes. - - bbox_preds (list[Tensor]): Box energies / deltas for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_base_priors * 4. - """ - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get anchors according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): Device for returned tensors - - Returns: - tuple: - anchor_list (list[Tensor]): Anchors of each image. - valid_flag_list (list[Tensor]): Valid flags of each image. - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # anchors for one time - multi_level_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - anchor_list = [multi_level_anchors for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level anchors - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.prior_generator.valid_flags( - featmap_sizes, img_meta['pad_shape'], device) - valid_flag_list.append(multi_level_flags) - - return anchor_list, valid_flag_list - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). 
- gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - img_meta (dict): Meta info of the image. - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: - labels_list (list[Tensor]): Labels of each level - label_weights_list (list[Tensor]): Label weights of each level - bbox_targets_list (list[Tensor]): BBox targets of each level - bbox_weights_list (list[Tensor]): BBox weights of each level - num_total_pos (int): Number of positive samples in all images - num_total_neg (int): Number of negative samples in all images - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - assign_result = self.assigner.assign( - anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, - fill=self.num_classes) # fill bg label - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds, sampling_result) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - return_sampling_results=False): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. 
The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each - level. - - bbox_targets_list (list[Tensor]): BBox targets of each level. - - bbox_weights_list (list[Tensor]): BBox weights of each level. - - num_total_pos (int): Number of positive samples in all - images. - - num_total_neg (int): Number of negative samples in all - images. - - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors to a single tensor - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - pos_inds_list, neg_inds_list, sampling_results_list) = results[:7] - rest_results = list(results[7:]) # user-added return values - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. 
multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - res = (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - if return_sampling_results: - res = res + (sampling_results_list, ) - for i, r in enumerate(rest_results): # user-added return values - rest_results[i] = images_to_levels(r, num_level_anchors) - - return res + tuple(rest_results) - - def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Compute loss of a single scale level. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - bbox_weights (Tensor): BBox regression loss weights of each anchor - with shape (N, num_total_anchors, 4). - num_total_samples (int): If sampling, num total samples equal to - the number of total anchors; Otherwise, it is the number of - positive anchors. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # classification loss - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - # regression loss - bbox_targets = bbox_targets.reshape(-1, 4) - bbox_weights = bbox_weights.reshape(-1, 4) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - anchors = anchors.reshape(-1, 4) - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_bbox = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - return loss_cls, loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), where - 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,), The length of list should always be 1. - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/atss_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/atss_head.py deleted file mode 100644 index e8f401ca..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/atss_head.py +++ /dev/null @@ -1,501 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, Scale -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, build_sampler, - images_to_levels, multi_apply, reduce_mean, unmap) -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class ATSSHead(AnchorHead): - """Bridging the Gap Between Anchor-based and Anchor-free Detection via - Adaptive Training Sample Selection. - - ATSS head structure is similar with FCOS, however ATSS use anchor boxes - and assign label by Adaptive Training Sample Selection instead max-iou. 
- - https://arxiv.org/abs/1912.02424 - """ - - def __init__(self, - num_classes, - in_channels, - pred_kernel_size=3, - stacked_convs=4, - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - reg_decoded_bbox=True, - loss_centerness=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='atss_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.pred_kernel_size = pred_kernel_size - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(ATSSHead, self).__init__( - num_classes, - in_channels, - reg_decoded_bbox=reg_decoded_bbox, - init_cfg=init_cfg, - **kwargs) - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.loss_centerness = build_loss(loss_centerness) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - pred_pad_size = self.pred_kernel_size // 2 - self.atss_cls = nn.Conv2d( - self.feat_channels, - self.num_anchors * self.cls_out_channels, - self.pred_kernel_size, - padding=pred_pad_size) - self.atss_reg = nn.Conv2d( - self.feat_channels, - self.num_base_priors * 4, - self.pred_kernel_size, - padding=pred_pad_size) - self.atss_centerness = nn.Conv2d( - self.feat_channels, - self.num_base_priors * 1, - self.pred_kernel_size, - padding=pred_pad_size) - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.prior_generator.strides]) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - return multi_apply(self.forward_single, feats, self.scales) - - def forward_single(self, x, scale): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale - level, the channels number is num_anchors * 4. - centerness (Tensor): Centerness for a single scale level, the - channel number is (N, num_anchors * 1, H, W). 
- """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.atss_cls(cls_feat) - # we just follow atss, not apply exp in bbox_pred - bbox_pred = scale(self.atss_reg(reg_feat)).float() - centerness = self.atss_centerness(reg_feat) - return cls_score, bbox_pred, centerness - - def loss_single(self, anchors, cls_score, bbox_pred, centerness, labels, - label_weights, bbox_targets, num_total_samples): - """Compute loss of a single scale level. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - num_total_samples (int): Number os positive samples that is - reduced over all GPUs. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, 1).reshape( - -1, self.cls_out_channels).contiguous() - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - centerness = centerness.permute(0, 2, 3, 1).reshape(-1) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # classification loss - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_centerness = centerness[pos_inds] - - centerness_targets = self.centerness_target( - pos_anchors, pos_bbox_targets) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchors, pos_bbox_pred) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_bbox_targets, - weight=centerness_targets, - avg_factor=1.0) - - # centerness loss - loss_centerness = self.loss_centerness( - pos_centerness, - centerness_targets, - avg_factor=num_total_samples) - - else: - loss_bbox = bbox_pred.sum() * 0 - loss_centerness = centerness.sum() * 0 - centerness_targets = bbox_targets.new_tensor(0.) - - return loss_cls, loss_bbox, loss_centerness, centerness_targets.sum() - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses')) - def loss(self, - cls_scores, - bbox_preds, - centernesses, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - centernesses (list[Tensor]): Centerness for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. 
- gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, loss_centerness,\ - bbox_avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - centernesses, - labels_list, - label_weights_list, - bbox_targets_list, - num_total_samples=num_total_samples) - - bbox_avg_factor = sum(bbox_avg_factor) - bbox_avg_factor = reduce_mean(bbox_avg_factor).clamp_(min=1).item() - losses_bbox = list(map(lambda x: x / bbox_avg_factor, losses_bbox)) - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_centerness=loss_centerness) - - def centerness_target(self, anchors, gts): - # only calculate pos centerness targets, otherwise there may be nan - anchors_cx = (anchors[:, 2] + anchors[:, 0]) / 2 - anchors_cy = (anchors[:, 3] + anchors[:, 1]) / 2 - l_ = anchors_cx - gts[:, 0] - t_ = anchors_cy - gts[:, 1] - r_ = gts[:, 2] - anchors_cx - b_ = gts[:, 3] - anchors_cy - - left_right = torch.stack([l_, r_], dim=1) - top_bottom = torch.stack([t_, b_], dim=1) - centerness = torch.sqrt( - (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * - (top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0])) - assert not torch.isnan(centerness).any() - return centerness - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Get targets for ATSS head. - - This method is almost the same as `AnchorHead.get_targets()`. Besides - returning the targets as the parent method does, it also returns the - anchors as the first element of the returned tuple. 
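[Editor's note] The `centerness_target` method above scores how centered an anchor's center sits inside its matched ground-truth box: 1 at the box center, approaching 0 near the edges. A small numeric check of the same formula as a standalone function (names here are illustrative):

```python
import torch


def centerness(anchor_centers: torch.Tensor, gts: torch.Tensor) -> torch.Tensor:
    """anchor_centers: (N, 2) as (cx, cy); gts: (N, 4) as (x1, y1, x2, y2)."""
    l = anchor_centers[:, 0] - gts[:, 0]
    t = anchor_centers[:, 1] - gts[:, 1]
    r = gts[:, 2] - anchor_centers[:, 0]
    b = gts[:, 3] - anchor_centers[:, 1]
    lr = torch.stack([l, r], dim=1)
    tb = torch.stack([t, b], dim=1)
    return torch.sqrt((lr.min(dim=1).values / lr.max(dim=1).values) *
                      (tb.min(dim=1).values / tb.max(dim=1).values))


centers = torch.tensor([[50.0, 50.0], [10.0, 50.0]])
gt = torch.tensor([[0.0, 0.0, 100.0, 100.0]] * 2)
print(centerness(centers, gt))  # tensor([1.0000, 0.3333]): centered vs. off-center
```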
- """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, bbox_weights_list, num_total_pos, - num_total_neg) - - def _get_target_single(self, - flat_anchors, - valid_flags, - num_level_anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression, classification targets for anchors in a single - image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - num_level_anchors Tensor): Number of anchors of each scale level. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - labels (Tensor): Labels of all anchors in the image with shape - (N,). - label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). - bbox_weights (Tensor): BBox weights of all anchors in the - image with shape (N, 4) - pos_inds (Tensor): Indices of positive anchor with shape - (num_pos,). - neg_inds (Tensor): Indices of negative anchor with shape - (num_neg,). 
- """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - num_level_anchors_inside = self.get_num_level_anchors_inside( - num_level_anchors, inside_flags) - assign_result = self.assigner.assign(anchors, num_level_anchors_inside, - gt_bboxes, gt_bboxes_ignore, - gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if self.reg_decoded_bbox: - pos_bbox_targets = sampling_result.pos_gt_bboxes - else: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (anchors, labels, label_weights, bbox_targets, bbox_weights, - pos_inds, neg_inds) - - def get_num_level_anchors_inside(self, num_level_anchors, inside_flags): - split_inside_flags = torch.split(inside_flags, num_level_anchors) - num_level_anchors_inside = [ - int(flags.sum()) for flags in split_inside_flags - ] - return num_level_anchors_inside diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/autoassign_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/autoassign_head.py deleted file mode 100644 index 446da244..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/autoassign_head.py +++ /dev/null @@ -1,527 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import bias_init_with_prob, normal_init -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply -from mmdet.core.anchor.point_generator import MlvlPointGenerator -from mmdet.core.bbox import bbox_overlaps -from mmdet.models import HEADS -from mmdet.models.dense_heads.atss_head import reduce_mean -from mmdet.models.dense_heads.fcos_head import FCOSHead -from mmdet.models.dense_heads.paa_head import levels_to_images - -EPS = 1e-12 - - -class CenterPrior(nn.Module): - """Center Weighting module to adjust the category-specific prior - distributions. 
- - Args: - force_topk (bool): When no point falls into gt_bbox, forcibly - select the k points closest to the center to calculate - the center prior. Defaults to False. - topk (int): The number of points used to calculate the - center prior when no point falls in gt_bbox. Only work when - force_topk if True. Defaults to 9. - num_classes (int): The class number of dataset. Defaults to 80. - strides (tuple[int]): The stride of each input feature map. Defaults - to (8, 16, 32, 64, 128). - """ - - def __init__(self, - force_topk=False, - topk=9, - num_classes=80, - strides=(8, 16, 32, 64, 128)): - super(CenterPrior, self).__init__() - self.mean = nn.Parameter(torch.zeros(num_classes, 2)) - self.sigma = nn.Parameter(torch.ones(num_classes, 2)) - self.strides = strides - self.force_topk = force_topk - self.topk = topk - - def forward(self, anchor_points_list, gt_bboxes, labels, - inside_gt_bbox_mask): - """Get the center prior of each point on the feature map for each - instance. - - Args: - anchor_points_list (list[Tensor]): list of coordinate - of points on feature map. Each with shape - (num_points, 2). - gt_bboxes (Tensor): The gt_bboxes with shape of - (num_gt, 4). - labels (Tensor): The gt_labels with shape of (num_gt). - inside_gt_bbox_mask (Tensor): Tensor of bool type, - with shape of (num_points, num_gt), each - value is used to mark whether this point falls - within a certain gt. - - Returns: - tuple(Tensor): - - - center_prior_weights(Tensor): Float tensor with shape \ - of (num_points, num_gt). Each value represents \ - the center weighting coefficient. - - inside_gt_bbox_mask (Tensor): Tensor of bool type, \ - with shape of (num_points, num_gt), each \ - value is used to mark whether this point falls \ - within a certain gt or is the topk nearest points for \ - a specific gt_bbox. 
- """ - inside_gt_bbox_mask = inside_gt_bbox_mask.clone() - num_gts = len(labels) - num_points = sum([len(item) for item in anchor_points_list]) - if num_gts == 0: - return gt_bboxes.new_zeros(num_points, - num_gts), inside_gt_bbox_mask - center_prior_list = [] - for slvl_points, stride in zip(anchor_points_list, self.strides): - # slvl_points: points from single level in FPN, has shape (h*w, 2) - # single_level_points has shape (h*w, num_gt, 2) - single_level_points = slvl_points[:, None, :].expand( - (slvl_points.size(0), len(gt_bboxes), 2)) - gt_center_x = ((gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2) - gt_center_y = ((gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2) - gt_center = torch.stack((gt_center_x, gt_center_y), dim=1) - gt_center = gt_center[None] - # instance_center has shape (1, num_gt, 2) - instance_center = self.mean[labels][None] - # instance_sigma has shape (1, num_gt, 2) - instance_sigma = self.sigma[labels][None] - # distance has shape (num_points, num_gt, 2) - distance = (((single_level_points - gt_center) / float(stride) - - instance_center)**2) - center_prior = torch.exp(-distance / - (2 * instance_sigma**2)).prod(dim=-1) - center_prior_list.append(center_prior) - center_prior_weights = torch.cat(center_prior_list, dim=0) - - if self.force_topk: - gt_inds_no_points_inside = torch.nonzero( - inside_gt_bbox_mask.sum(0) == 0).reshape(-1) - if gt_inds_no_points_inside.numel(): - topk_center_index = \ - center_prior_weights[:, gt_inds_no_points_inside].topk( - self.topk, - dim=0)[1] - temp_mask = inside_gt_bbox_mask[:, gt_inds_no_points_inside] - inside_gt_bbox_mask[:, gt_inds_no_points_inside] = \ - torch.scatter(temp_mask, - dim=0, - index=topk_center_index, - src=torch.ones_like( - topk_center_index, - dtype=torch.bool)) - - center_prior_weights[~inside_gt_bbox_mask] = 0 - return center_prior_weights, inside_gt_bbox_mask - - -@HEADS.register_module() -class AutoAssignHead(FCOSHead): - """AutoAssignHead head used in AutoAssign. - - More details can be found in the `paper - `_ . - - Args: - force_topk (bool): Used in center prior initialization to - handle extremely small gt. Default is False. - topk (int): The number of points used to calculate the - center prior when no point falls in gt_bbox. Only work when - force_topk if True. Defaults to 9. - pos_loss_weight (float): The loss weight of positive loss - and with default value 0.25. - neg_loss_weight (float): The loss weight of negative loss - and with default value 0.75. - center_loss_weight (float): The loss weight of center prior - loss and with default value 0.75. - """ - - def __init__(self, - *args, - force_topk=False, - topk=9, - pos_loss_weight=0.25, - neg_loss_weight=0.75, - center_loss_weight=0.75, - **kwargs): - super().__init__(*args, conv_bias=True, **kwargs) - self.center_prior = CenterPrior( - force_topk=force_topk, - topk=topk, - num_classes=self.num_classes, - strides=self.strides) - self.pos_loss_weight = pos_loss_weight - self.neg_loss_weight = neg_loss_weight - self.center_loss_weight = center_loss_weight - self.prior_generator = MlvlPointGenerator(self.strides, offset=0) - - def init_weights(self): - """Initialize weights of the head. 
- - In particular, we have special initialization for classified conv's and - regression conv's bias - """ - - super(AutoAssignHead, self).init_weights() - bias_cls = bias_init_with_prob(0.02) - normal_init(self.conv_cls, std=0.01, bias=bias_cls) - normal_init(self.conv_reg, std=0.01, bias=4.0) - - def forward_single(self, x, scale, stride): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - stride (int): The corresponding stride for feature maps, only - used to normalize the bbox prediction when self.norm_on_bbox - is True. - - Returns: - tuple: scores for each class, bbox predictions and centerness \ - predictions of input feature maps. - """ - cls_score, bbox_pred, cls_feat, reg_feat = super( - FCOSHead, self).forward_single(x) - centerness = self.conv_centerness(reg_feat) - # scale the bbox_pred of different level - # float to avoid overflow when enabling FP16 - bbox_pred = scale(bbox_pred).float() - # bbox_pred needed for gradient computation has been modified - # by F.relu(bbox_pred) when run with PyTorch 1.10. So replace - # F.relu(bbox_pred) with bbox_pred.clamp(min=0) - bbox_pred = bbox_pred.clamp(min=0) - bbox_pred *= stride - return cls_score, bbox_pred, centerness - - def get_pos_loss_single(self, cls_score, objectness, reg_loss, gt_labels, - center_prior_weights): - """Calculate the positive loss of all points in gt_bboxes. - - Args: - cls_score (Tensor): All category scores for each point on - the feature map. The shape is (num_points, num_class). - objectness (Tensor): Foreground probability of all points, - has shape (num_points, 1). - reg_loss (Tensor): The regression loss of each gt_bbox and each - prediction box, has shape of (num_points, num_gt). - gt_labels (Tensor): The zeros based gt_labels of all gt - with shape of (num_gt,). - center_prior_weights (Tensor): Float tensor with shape - of (num_points, num_gt). Each value represents - the center weighting coefficient. - - Returns: - tuple[Tensor]: - - - pos_loss (Tensor): The positive loss of all points - in the gt_bboxes. - """ - # p_loc: localization confidence - p_loc = torch.exp(-reg_loss) - # p_cls: classification confidence - p_cls = (cls_score * objectness)[:, gt_labels] - # p_pos: joint confidence indicator - p_pos = p_cls * p_loc - - # 3 is a hyper-parameter to control the contributions of high and - # low confidence locations towards positive losses. - confidence_weight = torch.exp(p_pos * 3) - p_pos_weight = (confidence_weight * center_prior_weights) / ( - (confidence_weight * center_prior_weights).sum( - 0, keepdim=True)).clamp(min=EPS) - reweighted_p_pos = (p_pos * p_pos_weight).sum(0) - pos_loss = F.binary_cross_entropy( - reweighted_p_pos, - torch.ones_like(reweighted_p_pos), - reduction='none') - pos_loss = pos_loss.sum() * self.pos_loss_weight - return pos_loss, - - def get_neg_loss_single(self, cls_score, objectness, gt_labels, ious, - inside_gt_bbox_mask): - """Calculate the negative loss of all points in feature map. - - Args: - cls_score (Tensor): All category scores for each point on - the feature map. The shape is (num_points, num_class). - objectness (Tensor): Foreground probability of all points - and is shape of (num_points, 1). - gt_labels (Tensor): The zeros based label of all gt with shape of - (num_gt). - ious (Tensor): Float tensor with shape of (num_points, num_gt). - Each value represent the iou of pred_bbox and gt_bboxes. 
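[Editor's note] The `get_pos_loss_single` code above combines classification and localization confidence into a joint p_pos, sharpens it with exp(3 * p_pos), normalizes that weight together with the center prior, and pushes the reweighted confidence toward 1 with binary cross-entropy. A compact single-gt sketch of the weighting (hypothetical inputs, reduction simplified):

```python
import torch
import torch.nn.functional as F


def pos_loss_one_gt(p_cls: torch.Tensor, reg_loss: torch.Tensor,
                    center_prior: torch.Tensor) -> torch.Tensor:
    """All inputs have shape (num_points,) and refer to a single gt box."""
    p_pos = p_cls * torch.exp(-reg_loss)           # joint confidence per point
    w = torch.exp(p_pos * 3) * center_prior        # favour confident, central points
    w = w / w.sum().clamp(min=1e-12)               # normalize over points
    reweighted = (p_pos * w).sum(dim=0, keepdim=True)
    return F.binary_cross_entropy(reweighted, torch.ones_like(reweighted))


loss = pos_loss_one_gt(torch.tensor([0.90, 0.20, 0.05]),
                       torch.tensor([0.10, 1.00, 2.00]),
                       torch.tensor([1.00, 0.50, 0.10]))
print(loss)  # small when at least one central point is already confident
```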
- inside_gt_bbox_mask (Tensor): Tensor of bool type, - with shape of (num_points, num_gt), each - value is used to mark whether this point falls - within a certain gt. - - Returns: - tuple[Tensor]: - - - neg_loss (Tensor): The negative loss of all points - in the feature map. - """ - num_gts = len(gt_labels) - joint_conf = (cls_score * objectness) - p_neg_weight = torch.ones_like(joint_conf) - if num_gts > 0: - # the order of dinmension would affect the value of - # p_neg_weight, we strictly follow the original - # implementation. - inside_gt_bbox_mask = inside_gt_bbox_mask.permute(1, 0) - ious = ious.permute(1, 0) - - foreground_idxs = torch.nonzero(inside_gt_bbox_mask, as_tuple=True) - temp_weight = (1 / (1 - ious[foreground_idxs]).clamp_(EPS)) - - def normalize(x): - return (x - x.min() + EPS) / (x.max() - x.min() + EPS) - - for instance_idx in range(num_gts): - idxs = foreground_idxs[0] == instance_idx - if idxs.any(): - temp_weight[idxs] = normalize(temp_weight[idxs]) - - p_neg_weight[foreground_idxs[1], - gt_labels[foreground_idxs[0]]] = 1 - temp_weight - - logits = (joint_conf * p_neg_weight) - neg_loss = ( - logits**2 * F.binary_cross_entropy( - logits, torch.zeros_like(logits), reduction='none')) - neg_loss = neg_loss.sum() * self.neg_loss_weight - return neg_loss, - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'objectnesses')) - def loss(self, - cls_scores, - bbox_preds, - objectnesses, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - objectnesses (list[Tensor]): objectness for each scale level, each - is a 4D-tensor, the channel number is num_points * 1. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - - assert len(cls_scores) == len(bbox_preds) == len(objectnesses) - all_num_gt = sum([len(item) for item in gt_bboxes]) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - inside_gt_bbox_mask_list, bbox_targets_list = self.get_targets( - all_level_points, gt_bboxes) - - center_prior_weight_list = [] - temp_inside_gt_bbox_mask_list = [] - for gt_bboxe, gt_label, inside_gt_bbox_mask in zip( - gt_bboxes, gt_labels, inside_gt_bbox_mask_list): - center_prior_weight, inside_gt_bbox_mask = \ - self.center_prior(all_level_points, gt_bboxe, gt_label, - inside_gt_bbox_mask) - center_prior_weight_list.append(center_prior_weight) - temp_inside_gt_bbox_mask_list.append(inside_gt_bbox_mask) - inside_gt_bbox_mask_list = temp_inside_gt_bbox_mask_list - mlvl_points = torch.cat(all_level_points, dim=0) - bbox_preds = levels_to_images(bbox_preds) - cls_scores = levels_to_images(cls_scores) - objectnesses = levels_to_images(objectnesses) - - reg_loss_list = [] - ious_list = [] - num_points = len(mlvl_points) - - for bbox_pred, encoded_targets, inside_gt_bbox_mask in zip( - bbox_preds, bbox_targets_list, inside_gt_bbox_mask_list): - temp_num_gt = encoded_targets.size(1) - expand_mlvl_points = mlvl_points[:, None, :].expand( - num_points, temp_num_gt, 2).reshape(-1, 2) - encoded_targets = encoded_targets.reshape(-1, 4) - expand_bbox_pred = bbox_pred[:, None, :].expand( - num_points, temp_num_gt, 4).reshape(-1, 4) - decoded_bbox_preds = self.bbox_coder.decode( - expand_mlvl_points, expand_bbox_pred) - decoded_target_preds = self.bbox_coder.decode( - expand_mlvl_points, encoded_targets) - with torch.no_grad(): - ious = bbox_overlaps( - decoded_bbox_preds, decoded_target_preds, is_aligned=True) - ious = ious.reshape(num_points, temp_num_gt) - if temp_num_gt: - ious = ious.max( - dim=-1, keepdim=True).values.repeat(1, temp_num_gt) - else: - ious = ious.new_zeros(num_points, temp_num_gt) - ious[~inside_gt_bbox_mask] = 0 - ious_list.append(ious) - loss_bbox = self.loss_bbox( - decoded_bbox_preds, - decoded_target_preds, - weight=None, - reduction_override='none') - reg_loss_list.append(loss_bbox.reshape(num_points, temp_num_gt)) - - cls_scores = [item.sigmoid() for item in cls_scores] - objectnesses = [item.sigmoid() for item in objectnesses] - pos_loss_list, = multi_apply(self.get_pos_loss_single, cls_scores, - objectnesses, reg_loss_list, gt_labels, - center_prior_weight_list) - pos_avg_factor = reduce_mean( - bbox_pred.new_tensor(all_num_gt)).clamp_(min=1) - pos_loss = sum(pos_loss_list) / pos_avg_factor - - neg_loss_list, = multi_apply(self.get_neg_loss_single, cls_scores, - objectnesses, gt_labels, ious_list, - inside_gt_bbox_mask_list) - neg_avg_factor = sum(item.data.sum() - for item in center_prior_weight_list) - neg_avg_factor = reduce_mean(neg_avg_factor).clamp_(min=1) - neg_loss = sum(neg_loss_list) / neg_avg_factor - - center_loss = [] - for i in range(len(img_metas)): - - if inside_gt_bbox_mask_list[i].any(): - center_loss.append( - len(gt_bboxes[i]) / - center_prior_weight_list[i].sum().clamp_(min=EPS)) - # when width or height of gt_bbox is smaller than stride of p3 - else: - center_loss.append(center_prior_weight_list[i].sum() * 0) - - center_loss = torch.stack(center_loss).mean() * self.center_loss_weight - - # avoid dead lock in DDP - if all_num_gt == 0: - pos_loss = bbox_preds[0].sum() * 0 - dummy_center_prior_loss = self.center_prior.mean.sum( - 
) * 0 + self.center_prior.sigma.sum() * 0 - center_loss = objectnesses[0].sum() * 0 + dummy_center_prior_loss - - loss = dict( - loss_pos=pos_loss, loss_neg=neg_loss, loss_center=center_loss) - - return loss - - def get_targets(self, points, gt_bboxes_list): - """Compute regression targets and each point inside or outside gt_bbox - in multiple images. - - Args: - points (list[Tensor]): Points of all fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - - Returns: - tuple(list[Tensor]): - - - inside_gt_bbox_mask_list (list[Tensor]): Each - Tensor is with bool type and shape of - (num_points, num_gt), each value - is used to mark whether this point falls - within a certain gt. - - concat_lvl_bbox_targets (list[Tensor]): BBox - targets of each level. Each tensor has shape - (num_points, num_gt, 4). - """ - - concat_points = torch.cat(points, dim=0) - # the number of points per img, per lvl - inside_gt_bbox_mask_list, bbox_targets_list = multi_apply( - self._get_target_single, gt_bboxes_list, points=concat_points) - return inside_gt_bbox_mask_list, bbox_targets_list - - def _get_target_single(self, gt_bboxes, points): - """Compute regression targets and each point inside or outside gt_bbox - for a single image. - - Args: - gt_bboxes (Tensor): gt_bbox of single image, has shape - (num_gt, 4). - points (Tensor): Points of all fpn level, has shape - (num_points, 2). - - Returns: - tuple[Tensor]: Containing the following Tensors: - - - inside_gt_bbox_mask (Tensor): Bool tensor with shape - (num_points, num_gt), each value is used to mark - whether this point falls within a certain gt. - - bbox_targets (Tensor): BBox targets of each points with - each gt_bboxes, has shape (num_points, num_gt, 4). - """ - num_points = points.size(0) - num_gts = gt_bboxes.size(0) - gt_bboxes = gt_bboxes[None].expand(num_points, num_gts, 4) - xs, ys = points[:, 0], points[:, 1] - xs = xs[:, None] - ys = ys[:, None] - left = xs - gt_bboxes[..., 0] - right = gt_bboxes[..., 2] - xs - top = ys - gt_bboxes[..., 1] - bottom = gt_bboxes[..., 3] - ys - bbox_targets = torch.stack((left, top, right, bottom), -1) - if num_gts: - inside_gt_bbox_mask = bbox_targets.min(-1)[0] > 0 - else: - inside_gt_bbox_mask = bbox_targets.new_zeros((num_points, num_gts), - dtype=torch.bool) - - return inside_gt_bbox_mask, bbox_targets - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Almost the same as the implementation in fcos, we remove half stride - offset to align with the original implementation. - - This function will be deprecated soon. - """ - warnings.warn( - '`_get_points_single` in `AutoAssignHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map ' - 'with `self.prior_generator.single_level_grid_priors` ') - y, x = super(FCOSHead, - self)._get_points_single(featmap_size, stride, dtype, - device) - points = torch.stack((x.reshape(-1) * stride, y.reshape(-1) * stride), - dim=-1) - return points diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/base_dense_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/base_dense_head.py deleted file mode 100644 index 0c7abb7b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/base_dense_head.py +++ /dev/null @@ -1,526 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
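[Editor's note] For reference, the removed `_get_target_single` above builds FCOS-style left/top/right/bottom offsets from every point to every gt box and marks a point as "inside" a box when all four offsets are positive. A minimal standalone sketch:

```python
import torch


def fcos_targets(points: torch.Tensor, gt_bboxes: torch.Tensor):
    """points: (P, 2) xy; gt_bboxes: (G, 4) -> ((P, G) mask, (P, G, 4) targets)."""
    xs, ys = points[:, 0, None], points[:, 1, None]              # (P, 1)
    left = xs - gt_bboxes[None, :, 0]
    top = ys - gt_bboxes[None, :, 1]
    right = gt_bboxes[None, :, 2] - xs
    bottom = gt_bboxes[None, :, 3] - ys
    bbox_targets = torch.stack((left, top, right, bottom), dim=-1)
    inside_mask = bbox_targets.min(dim=-1).values > 0
    return inside_mask, bbox_targets


pts = torch.tensor([[10.0, 10.0], [200.0, 200.0]])
boxes = torch.tensor([[0.0, 0.0, 50.0, 50.0]])
mask, targets = fcos_targets(pts, boxes)
print(mask.squeeze(1))  # tensor([ True, False])
print(targets[0, 0])    # tensor([10., 10., 40., 40.])
```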
-from abc import ABCMeta, abstractmethod - -import torch -from mmcv.cnn.utils.weight_init import constant_init -from mmcv.ops import batched_nms -from mmcv.runner import BaseModule, force_fp32 - -from mmdet.core.utils import filter_scores_and_topk, select_single_mlvl - - -class BaseDenseHead(BaseModule, metaclass=ABCMeta): - """Base class for DenseHeads.""" - - def __init__(self, init_cfg=None): - super(BaseDenseHead, self).__init__(init_cfg) - - def init_weights(self): - super(BaseDenseHead, self).init_weights() - # avoid init_cfg overwrite the initialization of `conv_offset` - for m in self.modules(): - # DeformConv2dPack, ModulatedDeformConv2dPack - if hasattr(m, 'conv_offset'): - constant_init(m.conv_offset, 0) - - @abstractmethod - def loss(self, **kwargs): - """Compute losses of the head.""" - pass - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - score_factors=None, - img_metas=None, - cfg=None, - rescale=False, - with_nms=True, - **kwargs): - """Transform network outputs of a batch into bbox results. - - Note: When score_factors is not None, the cls_scores are - usually multiplied by it then obtain the real score used in NMS, - such as CenterNess in FCOS, IoU branch in ATSS. - - Args: - cls_scores (list[Tensor]): Classification scores for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * 4, H, W). - score_factors (list[Tensor], Optional): Score factor for - all scale level, each is a 4D-tensor, has shape - (batch_size, num_priors * 1, H, W). Default None. - img_metas (list[dict], Optional): Image meta info. Default None. - cfg (mmcv.Config, Optional): Test / postprocessing configuration, - if None, test_cfg would be used. Default None. - rescale (bool): If True, return boxes in original image space. - Default False. - with_nms (bool): If True, do nms before return boxes. - Default True. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. The second item is a - (n,) tensor where each item is the predicted class label of - the corresponding box. - """ - assert len(cls_scores) == len(bbox_preds) - - if score_factors is None: - # e.g. Retina, FreeAnchor, Foveabox, etc. - with_score_factors = False - else: - # e.g. FCOS, PAA, ATSS, AutoAssign, etc. 
- with_score_factors = True - assert len(cls_scores) == len(score_factors) - - num_levels = len(cls_scores) - - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=cls_scores[0].dtype, - device=cls_scores[0].device) - - result_list = [] - - for img_id in range(len(img_metas)): - img_meta = img_metas[img_id] - cls_score_list = select_single_mlvl(cls_scores, img_id) - bbox_pred_list = select_single_mlvl(bbox_preds, img_id) - if with_score_factors: - score_factor_list = select_single_mlvl(score_factors, img_id) - else: - score_factor_list = [None for _ in range(num_levels)] - - results = self._get_bboxes_single(cls_score_list, bbox_pred_list, - score_factor_list, mlvl_priors, - img_meta, cfg, rescale, with_nms, - **kwargs) - result_list.append(results) - return result_list - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image, each item has shape - (num_priors * 1, H, W). - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid. In all - anchor-based methods, it has shape (num_priors, 4). In - all anchor-free methods, it has shape (num_priors, 2) - when `with_stride=True`, otherwise it still has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - if score_factor_list[0] is None: - # e.g. Retina, FreeAnchor, etc. - with_score_factors = False - else: - # e.g. FCOS, PAA, ATSS, etc. 
- with_score_factors = True - - cfg = self.test_cfg if cfg is None else cfg - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - if with_score_factors: - mlvl_score_factors = [] - else: - mlvl_score_factors = None - for level_idx, (cls_score, bbox_pred, score_factor, priors) in \ - enumerate(zip(cls_score_list, bbox_pred_list, - score_factor_list, mlvl_priors)): - - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - if with_score_factors: - score_factor = score_factor.permute(1, 2, - 0).reshape(-1).sigmoid() - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - scores = cls_score.softmax(-1)[:, :-1] - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, keep_idxs, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - if with_score_factors: - score_factor = score_factor[keep_idxs] - - bboxes = self.bbox_coder.decode( - priors, bbox_pred, max_shape=img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - if with_score_factors: - mlvl_score_factors.append(score_factor) - - return self._bbox_post_process(mlvl_scores, mlvl_labels, mlvl_bboxes, - img_meta['scale_factor'], cfg, rescale, - with_nms, mlvl_score_factors, **kwargs) - - def _bbox_post_process(self, - mlvl_scores, - mlvl_labels, - mlvl_bboxes, - scale_factor, - cfg, - rescale=False, - with_nms=True, - mlvl_score_factors=None, - **kwargs): - """bbox post-processing method. - - The boxes would be rescaled to the original image scale and do - the nms operation. Usually `with_nms` is False is used for aug test. - - Args: - mlvl_scores (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_bboxes, ). - mlvl_labels (list[Tensor]): Box class labels from all scale - levels of a single image, each item has shape - (num_bboxes, ). - mlvl_bboxes (list[Tensor]): Decoded bboxes from all scale - levels of a single image, each item has shape (num_bboxes, 4). - scale_factor (ndarray, optional): Scale factor of the image arange - as (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - mlvl_score_factors (list[Tensor], optional): Score factor from - all scale levels of a single image, each item has shape - (num_bboxes, ). Default: None. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. 
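[Editor's note] The per-level loop above prunes predictions before NMS: keep only scores above `score_thr`, then at most `nms_pre` of them, and decode just the surviving boxes. A simplified stand-in for that filter step (the real helper also threads auxiliary tensors such as priors and bbox_pred through a results dict, omitted here):

```python
import torch


def filter_scores_and_topk(scores: torch.Tensor, score_thr: float, topk: int):
    """scores: (num_priors, num_classes) -> (kept scores, class labels, prior indices)."""
    valid = scores > score_thr
    flat_scores = scores[valid]                    # row-major, matches nonzero() order
    valid_idxs = torch.nonzero(valid)              # (num_valid, 2): (prior, class)
    num_topk = min(topk, flat_scores.numel())
    flat_scores, order = flat_scores.sort(descending=True)
    keep = valid_idxs[order[:num_topk]]
    return flat_scores[:num_topk], keep[:, 1], keep[:, 0]


scores = torch.tensor([[0.90, 0.02],
                       [0.10, 0.75],
                       [0.01, 0.03]])
s, labels, priors = filter_scores_and_topk(scores, score_thr=0.05, topk=2)
print(s, labels, priors)  # top-2 scores 0.90 / 0.75 from priors 0 and 1
```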
If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - assert len(mlvl_scores) == len(mlvl_bboxes) == len(mlvl_labels) - - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_labels = torch.cat(mlvl_labels) - - if mlvl_score_factors is not None: - # TODO: Add sqrt operation in order to be consistent with - # the paper. - mlvl_score_factors = torch.cat(mlvl_score_factors) - mlvl_scores = mlvl_scores * mlvl_score_factors - - if with_nms: - if mlvl_bboxes.numel() == 0: - det_bboxes = torch.cat([mlvl_bboxes, mlvl_scores[:, None]], -1) - return det_bboxes, mlvl_labels - - det_bboxes, keep_idxs = batched_nms(mlvl_bboxes, mlvl_scores, - mlvl_labels, cfg.nms) - det_bboxes = det_bboxes[:cfg.max_per_img] - det_labels = mlvl_labels[keep_idxs][:cfg.max_per_img] - return det_bboxes, det_labels - else: - return mlvl_bboxes, mlvl_scores, mlvl_labels - - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple: - losses: (dict[str, Tensor]): A dictionary of loss components. - proposal_list (list[Tensor]): Proposals of each image. - """ - outs = self(x) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes( - *outs, img_metas=img_metas, cfg=proposal_cfg) - return losses, proposal_list - - def simple_test(self, feats, img_metas, rescale=False): - """Test function without test-time augmentation. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n, ). - """ - return self.simple_test_bboxes(feats, img_metas, rescale=rescale) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def onnx_export(self, - cls_scores, - bbox_preds, - score_factors=None, - img_metas=None, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - with shape (N, num_points * num_classes, H, W). 
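[Editor's note] The `batched_nms` call above runs a single class-aware NMS pass by offsetting every box by an amount derived from its class label, so boxes of different classes can never suppress each other. A sketch of that offset trick built on torchvision's `nms` (using torchvision here is an assumption about the environment; mmcv ships its own op):

```python
import torch
from torchvision.ops import nms  # stand-in for the mmcv batched_nms internals


def batched_nms_by_offset(boxes: torch.Tensor, scores: torch.Tensor,
                          labels: torch.Tensor, iou_thr: float) -> torch.Tensor:
    """One class-aware NMS pass: shift each class into its own coordinate region."""
    if boxes.numel() == 0:
        return torch.empty(0, dtype=torch.long)
    offsets = labels.to(boxes) * (boxes.max() + 1)    # distinct offset per class
    return nms(boxes + offsets[:, None], scores, iou_thr)


boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.],
                      [0., 0., 10., 10.]])
scores = torch.tensor([0.9, 0.8, 0.7])
labels = torch.tensor([0, 0, 1])                      # third box is another class
print(batched_nms_by_offset(boxes, scores, labels, iou_thr=0.5))  # tensor([0, 2])
```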
- bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W). - score_factors (list[Tensor]): score_factors for each s - cale level with shape (N, num_points * 1, H, W). - Default: None. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. Default: None. - with_nms (bool): Whether apply nms to the bboxes. Default: True. - - Returns: - tuple[Tensor, Tensor] | list[tuple]: When `with_nms` is True, - it is tuple[Tensor, Tensor], first tensor bboxes with shape - [N, num_det, 5], 5 arrange as (x1, y1, x2, y2, score) - and second element is class labels of shape [N, num_det]. - When `with_nms` is False, first tensor is bboxes with - shape [N, num_det, 4], second tensor is raw score has - shape [N, num_det, num_classes]. - """ - assert len(cls_scores) == len(bbox_preds) - - num_levels = len(cls_scores) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - - mlvl_cls_scores = [cls_scores[i].detach() for i in range(num_levels)] - mlvl_bbox_preds = [bbox_preds[i].detach() for i in range(num_levels)] - - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shape = img_metas[0]['img_shape_for_onnx'] - - cfg = self.test_cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_priors) - device = cls_scores[0].device - batch_size = cls_scores[0].shape[0] - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), device=device, dtype=torch.long) - - # e.g. Retina, FreeAnchor, etc. - if score_factors is None: - with_score_factors = False - mlvl_score_factor = [None for _ in range(num_levels)] - else: - # e.g. FCOS, PAA, ATSS, etc. - with_score_factors = True - mlvl_score_factor = [ - score_factors[i].detach() for i in range(num_levels) - ] - mlvl_score_factors = [] - - mlvl_batch_bboxes = [] - mlvl_scores = [] - - for cls_score, bbox_pred, score_factors, priors in zip( - mlvl_cls_scores, mlvl_bbox_preds, mlvl_score_factor, - mlvl_priors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - scores = cls_score.permute(0, 2, 3, - 1).reshape(batch_size, -1, - self.cls_out_channels) - if self.use_sigmoid_cls: - scores = scores.sigmoid() - nms_pre_score = scores - else: - scores = scores.softmax(-1) - nms_pre_score = scores - - if with_score_factors: - score_factors = score_factors.permute(0, 2, 3, 1).reshape( - batch_size, -1).sigmoid() - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - priors = priors.expand(batch_size, -1, priors.size(-1)) - # Get top-k predictions - from mmdet.core.export import get_k_for_topk - nms_pre = get_k_for_topk(nms_pre_tensor, bbox_pred.shape[1]) - if nms_pre > 0: - - if with_score_factors: - nms_pre_score = (nms_pre_score * score_factors[..., None]) - else: - nms_pre_score = nms_pre_score - - # Get maximum scores for foreground classes. 
- if self.use_sigmoid_cls: - max_scores, _ = nms_pre_score.max(-1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = nms_pre_score[..., :-1].max(-1) - _, topk_inds = max_scores.topk(nms_pre) - - batch_inds = torch.arange( - batch_size, device=bbox_pred.device).view( - -1, 1).expand_as(topk_inds).long() - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - transformed_inds = bbox_pred.shape[1] * batch_inds + topk_inds - priors = priors.reshape( - -1, priors.size(-1))[transformed_inds, :].reshape( - batch_size, -1, priors.size(-1)) - bbox_pred = bbox_pred.reshape(-1, - 4)[transformed_inds, :].reshape( - batch_size, -1, 4) - scores = scores.reshape( - -1, self.cls_out_channels)[transformed_inds, :].reshape( - batch_size, -1, self.cls_out_channels) - if with_score_factors: - score_factors = score_factors.reshape( - -1, 1)[transformed_inds].reshape(batch_size, -1) - - bboxes = self.bbox_coder.decode( - priors, bbox_pred, max_shape=img_shape) - - mlvl_batch_bboxes.append(bboxes) - mlvl_scores.append(scores) - if with_score_factors: - mlvl_score_factors.append(score_factors) - - batch_bboxes = torch.cat(mlvl_batch_bboxes, dim=1) - batch_scores = torch.cat(mlvl_scores, dim=1) - if with_score_factors: - batch_score_factors = torch.cat(mlvl_score_factors, dim=1) - - # Replace multiclass_nms with ONNX::NonMaxSuppression in deployment - - from mmdet.core.export import add_dummy_nms_for_onnx - - if not self.use_sigmoid_cls: - batch_scores = batch_scores[..., :self.num_classes] - - if with_score_factors: - batch_scores = batch_scores * (batch_score_factors.unsqueeze(2)) - - if with_nms: - max_output_boxes_per_class = cfg.nms.get( - 'max_output_boxes_per_class', 200) - iou_threshold = cfg.nms.get('iou_threshold', 0.5) - score_threshold = cfg.score_thr - nms_pre = cfg.get('deploy_nms_pre', -1) - return add_dummy_nms_for_onnx(batch_bboxes, batch_scores, - max_output_boxes_per_class, - iou_threshold, score_threshold, - nms_pre, cfg.max_per_img) - else: - return batch_bboxes, batch_scores diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/base_mask_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/base_mask_head.py deleted file mode 100644 index 5eb94fb2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/base_mask_head.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from mmcv.runner import BaseModule - - -class BaseMaskHead(BaseModule, metaclass=ABCMeta): - """Base class for mask heads used in One-Stage Instance Segmentation.""" - - def __init__(self, init_cfg): - super(BaseMaskHead, self).__init__(init_cfg) - - @abstractmethod - def loss(self, **kwargs): - pass - - @abstractmethod - def get_results(self, **kwargs): - """Get precessed :obj:`InstanceData` of multiple images.""" - pass - - def forward_train(self, - x, - gt_labels, - gt_masks, - img_metas, - gt_bboxes=None, - gt_bboxes_ignore=None, - positive_infos=None, - **kwargs): - """ - Args: - x (list[Tensor] | tuple[Tensor]): Features from FPN. - Each has a shape (B, C, H, W). - gt_labels (list[Tensor]): Ground truth labels of all images. - each has a shape (num_gts,). - gt_masks (list[Tensor]) : Masks for each bbox, has a shape - (num_gts, h , w). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. 
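[Editor's note] The ONNX export path above avoids advanced indexing (which complicates ONNX/TensorRT conversion) by flattening (batch, num_priors, ...) tensors and turning per-image top-k indices into indices over the flattened dimension. A small sketch of that gather trick:

```python
import torch


def batched_gather(x: torch.Tensor, topk_inds: torch.Tensor) -> torch.Tensor:
    """x: (B, N, C); topk_inds: (B, K) indices into dim 1 -> gathered (B, K, C)."""
    b, n, c = x.shape
    batch_inds = torch.arange(b).view(-1, 1).expand_as(topk_inds)
    flat_inds = batch_inds * n + topk_inds          # indices into the flattened dim
    return x.reshape(-1, c)[flat_inds.reshape(-1)].reshape(b, -1, c)


x = torch.arange(24, dtype=torch.float32).reshape(2, 4, 3)
inds = torch.tensor([[3, 0], [1, 2]])
print(batched_gather(x, inds)[0, 0])  # row 3 of image 0: tensor([ 9., 10., 11.])
```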
- gt_bboxes (list[Tensor]): Ground truth bboxes of the image, - each item has a shape (num_gts, 4). - gt_bboxes_ignore (list[Tensor], None): Ground truth bboxes to be - ignored, each item has a shape (num_ignored_gts, 4). - positive_infos (list[:obj:`InstanceData`], optional): Information - of positive samples. Used when the label assignment is - done outside the MaskHead, e.g., in BboxHead in - YOLACT or CondInst, etc. When the label assignment is done in - MaskHead, it would be None, like SOLO. All values - in it should have shape (num_positive_samples, *). - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - if positive_infos is None: - outs = self(x) - else: - outs = self(x, positive_infos) - - assert isinstance(outs, tuple), 'Forward results should be a tuple, ' \ - 'even if only one item is returned' - loss = self.loss( - *outs, - gt_labels=gt_labels, - gt_masks=gt_masks, - img_metas=img_metas, - gt_bboxes=gt_bboxes, - gt_bboxes_ignore=gt_bboxes_ignore, - positive_infos=positive_infos, - **kwargs) - return loss - - def simple_test(self, - feats, - img_metas, - rescale=False, - instances_list=None, - **kwargs): - """Test function without test-time augmentation. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - instances_list (list[obj:`InstanceData`], optional): Detection - results of each image after the post process. Only exist - if there is a `bbox_head`, like `YOLACT`, `CondInst`, etc. - - Returns: - list[obj:`InstanceData`]: Instance segmentation \ - results of each image after the post process. \ - Each item usually contains following keys. \ - - - scores (Tensor): Classification scores, has a shape - (num_instance,) - - labels (Tensor): Has a shape (num_instances,). - - masks (Tensor): Processed mask results, has a - shape (num_instances, h, w). - """ - if instances_list is None: - outs = self(feats) - else: - outs = self(feats, instances_list=instances_list) - mask_inputs = outs + (img_metas, ) - results_list = self.get_results( - *mask_inputs, - rescale=rescale, - instances_list=instances_list, - **kwargs) - return results_list - - def onnx_export(self, img, img_metas): - raise NotImplementedError(f'{self.__class__.__name__} does ' - f'not support ONNX EXPORT') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/cascade_rpn_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/cascade_rpn_head.py deleted file mode 100644 index 69347e00..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/cascade_rpn_head.py +++ /dev/null @@ -1,801 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from __future__ import division -import copy -import warnings - -import torch -import torch.nn as nn -from mmcv import ConfigDict -from mmcv.ops import DeformConv2d, batched_nms -from mmcv.runner import BaseModule, ModuleList - -from mmdet.core import (RegionAssigner, build_assigner, build_sampler, - images_to_levels, multi_apply) -from mmdet.core.utils import select_single_mlvl -from ..builder import HEADS, build_head -from .base_dense_head import BaseDenseHead -from .rpn_head import RPNHead - - -class AdaptiveConv(BaseModule): - """AdaptiveConv used to adapt the sampling location with the anchors. 
- - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the conv kernel. Default: 3 - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. Default: 1 - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 3 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If set True, adds a learnable bias to the - output. Default: False. - type (str, optional): Type of adaptive conv, can be either 'offset' - (arbitrary anchors) or 'dilation' (uniform anchor). - Default: 'dilation'. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - dilation=3, - groups=1, - bias=False, - type='dilation', - init_cfg=dict( - type='Normal', std=0.01, override=dict(name='conv'))): - super(AdaptiveConv, self).__init__(init_cfg) - assert type in ['offset', 'dilation'] - self.adapt_type = type - - assert kernel_size == 3, 'Adaptive conv only supports kernels 3' - if self.adapt_type == 'offset': - assert stride == 1 and padding == 1 and groups == 1, \ - 'Adaptive conv offset mode only supports padding: {1}, ' \ - f'stride: {1}, groups: {1}' - self.conv = DeformConv2d( - in_channels, - out_channels, - kernel_size, - padding=padding, - stride=stride, - groups=groups, - bias=bias) - else: - self.conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size, - padding=dilation, - dilation=dilation) - - def forward(self, x, offset): - """Forward function.""" - if self.adapt_type == 'offset': - N, _, H, W = x.shape - assert offset is not None - assert H * W == offset.shape[1] - # reshape [N, NA, 18] to (N, 18, H, W) - offset = offset.permute(0, 2, 1).reshape(N, -1, H, W) - offset = offset.contiguous() - x = self.conv(x, offset) - else: - assert offset is None - x = self.conv(x) - return x - - -@HEADS.register_module() -class StageCascadeRPNHead(RPNHead): - """Stage of CascadeRPNHead. - - Args: - in_channels (int): Number of channels in the input feature map. - anchor_generator (dict): anchor generator config. - adapt_cfg (dict): adaptation config. - bridged_feature (bool, optional): whether update rpn feature. - Default: False. - with_cls (bool, optional): whether use classification branch. - Default: True. - sampling (bool, optional): whether use sampling. Default: True. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[1.0], - strides=[4, 8, 16, 32, 64]), - adapt_cfg=dict(type='dilation', dilation=3), - bridged_feature=False, - with_cls=True, - sampling=True, - init_cfg=None, - **kwargs): - self.with_cls = with_cls - self.anchor_strides = anchor_generator['strides'] - self.anchor_scales = anchor_generator['scales'] - self.bridged_feature = bridged_feature - self.adapt_cfg = adapt_cfg - super(StageCascadeRPNHead, self).__init__( - in_channels, - anchor_generator=anchor_generator, - init_cfg=init_cfg, - **kwargs) - - # override sampling and sampler - self.sampling = sampling - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - if init_cfg is None: - self.init_cfg = dict( - type='Normal', std=0.01, override=[dict(name='rpn_reg')]) - if self.with_cls: - self.init_cfg['override'].append(dict(name='rpn_cls')) - - def _init_layers(self): - """Init layers of a CascadeRPN stage.""" - self.rpn_conv = AdaptiveConv(self.in_channels, self.feat_channels, - **self.adapt_cfg) - if self.with_cls: - self.rpn_cls = nn.Conv2d(self.feat_channels, - self.num_anchors * self.cls_out_channels, - 1) - self.rpn_reg = nn.Conv2d(self.feat_channels, self.num_anchors * 4, 1) - self.relu = nn.ReLU(inplace=True) - - def forward_single(self, x, offset): - """Forward function of single scale.""" - bridged_x = x - x = self.relu(self.rpn_conv(x, offset)) - if self.bridged_feature: - bridged_x = x # update feature - cls_score = self.rpn_cls(x) if self.with_cls else None - bbox_pred = self.rpn_reg(x) - return bridged_x, cls_score, bbox_pred - - def forward(self, feats, offset_list=None): - """Forward function.""" - if offset_list is None: - offset_list = [None for _ in range(len(feats))] - return multi_apply(self.forward_single, feats, offset_list) - - def _region_targets_single(self, - anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - featmap_sizes, - label_channels=1): - """Get anchor targets based on region for single level.""" - assign_result = self.assigner.assign( - anchors, - valid_flags, - gt_bboxes, - img_meta, - featmap_sizes, - self.anchor_scales[0], - self.anchor_strides, - gt_bboxes_ignore=gt_bboxes_ignore, - gt_labels=None, - allowed_border=self.train_cfg.allowed_border) - flat_anchors = torch.cat(anchors) - sampling_result = self.sampler.sample(assign_result, flat_anchors, - gt_bboxes) - - num_anchors = flat_anchors.shape[0] - bbox_targets = torch.zeros_like(flat_anchors) - bbox_weights = torch.zeros_like(flat_anchors) - labels = flat_anchors.new_zeros(num_anchors, dtype=torch.long) - label_weights = flat_anchors.new_zeros(num_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - labels[pos_inds] = 1 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 
0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds) - - def region_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - featmap_sizes, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """See :func:`StageCascadeRPNHead.get_targets`.""" - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - pos_inds_list, neg_inds_list) = multi_apply( - self._region_targets_single, - anchor_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - featmap_sizes=featmap_sizes, - label_channels=label_channels) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - featmap_sizes, - gt_bboxes_ignore=None, - label_channels=1): - """Compute regression and classification targets for anchors. - - Args: - anchor_list (list[list]): Multi level anchors of each image. - valid_flag_list (list[list]): Multi level valid flags of each - image. - gt_bboxes (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - featmap_sizes (list[Tensor]): Feature mapsize each level - gt_bboxes_ignore (list[Tensor]): Ignore bboxes of each images - label_channels (int): Channel of label. 
- - Returns: - cls_reg_targets (tuple) - """ - if isinstance(self.assigner, RegionAssigner): - cls_reg_targets = self.region_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - featmap_sizes, - gt_bboxes_ignore_list=gt_bboxes_ignore, - label_channels=label_channels) - else: - cls_reg_targets = super(StageCascadeRPNHead, self).get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - label_channels=label_channels) - return cls_reg_targets - - def anchor_offset(self, anchor_list, anchor_strides, featmap_sizes): - """ Get offset for deformable conv based on anchor shape - NOTE: currently support deformable kernel_size=3 and dilation=1 - - Args: - anchor_list (list[list[tensor])): [NI, NLVL, NA, 4] list of - multi-level anchors - anchor_strides (list[int]): anchor stride of each level - - Returns: - offset_list (list[tensor]): [NLVL, NA, 2, 18]: offset of DeformConv - kernel. - """ - - def _shape_offset(anchors, stride, ks=3, dilation=1): - # currently support kernel_size=3 and dilation=1 - assert ks == 3 and dilation == 1 - pad = (ks - 1) // 2 - idx = torch.arange(-pad, pad + 1, dtype=dtype, device=device) - yy, xx = torch.meshgrid(idx, idx) # return order matters - xx = xx.reshape(-1) - yy = yy.reshape(-1) - w = (anchors[:, 2] - anchors[:, 0]) / stride - h = (anchors[:, 3] - anchors[:, 1]) / stride - w = w / (ks - 1) - dilation - h = h / (ks - 1) - dilation - offset_x = w[:, None] * xx # (NA, ks**2) - offset_y = h[:, None] * yy # (NA, ks**2) - return offset_x, offset_y - - def _ctr_offset(anchors, stride, featmap_size): - feat_h, feat_w = featmap_size - assert len(anchors) == feat_h * feat_w - - x = (anchors[:, 0] + anchors[:, 2]) * 0.5 - y = (anchors[:, 1] + anchors[:, 3]) * 0.5 - # compute centers on feature map - x = x / stride - y = y / stride - # compute predefine centers - xx = torch.arange(0, feat_w, device=anchors.device) - yy = torch.arange(0, feat_h, device=anchors.device) - yy, xx = torch.meshgrid(yy, xx) - xx = xx.reshape(-1).type_as(x) - yy = yy.reshape(-1).type_as(y) - - offset_x = x - xx # (NA, ) - offset_y = y - yy # (NA, ) - return offset_x, offset_y - - num_imgs = len(anchor_list) - num_lvls = len(anchor_list[0]) - dtype = anchor_list[0][0].dtype - device = anchor_list[0][0].device - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - - offset_list = [] - for i in range(num_imgs): - mlvl_offset = [] - for lvl in range(num_lvls): - c_offset_x, c_offset_y = _ctr_offset(anchor_list[i][lvl], - anchor_strides[lvl], - featmap_sizes[lvl]) - s_offset_x, s_offset_y = _shape_offset(anchor_list[i][lvl], - anchor_strides[lvl]) - - # offset = ctr_offset + shape_offset - offset_x = s_offset_x + c_offset_x[:, None] - offset_y = s_offset_y + c_offset_y[:, None] - - # offset order (y0, x0, y1, x2, .., y8, x8, y9, x9) - offset = torch.stack([offset_y, offset_x], dim=-1) - offset = offset.reshape(offset.size(0), -1) # [NA, 2*ks**2] - mlvl_offset.append(offset) - offset_list.append(torch.cat(mlvl_offset)) # [totalNA, 2*ks**2] - offset_list = images_to_levels(offset_list, num_level_anchors) - return offset_list - - def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Loss function on single scale.""" - # classification loss - if self.with_cls: - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( 
- cls_score, labels, label_weights, avg_factor=num_total_samples) - # regression loss - bbox_targets = bbox_targets.reshape(-1, 4) - bbox_weights = bbox_weights.reshape(-1, 4) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - anchors = anchors.reshape(-1, 4) - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_reg = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - if self.with_cls: - return loss_cls, loss_reg - return None, loss_reg - - def loss(self, - anchor_list, - valid_flag_list, - cls_scores, - bbox_preds, - gt_bboxes, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - anchor_list (list[list]): Multi level anchors of each image. - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in bbox_preds] - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - featmap_sizes, - gt_bboxes_ignore=gt_bboxes_ignore, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - if self.sampling: - num_total_samples = num_total_pos + num_total_neg - else: - # 200 is hard-coded average factor, - # which follows guided anchoring. - num_total_samples = sum([label.numel() - for label in labels_list]) / 200.0 - - # change per image, per level anchor_list to per_level, per_image - mlvl_anchor_list = list(zip(*anchor_list)) - # concat mlvl_anchor_list - mlvl_anchor_list = [ - torch.cat(anchors, dim=0) for anchors in mlvl_anchor_list - ] - - losses = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - mlvl_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - if self.with_cls: - return dict(loss_rpn_cls=losses[0], loss_rpn_reg=losses[1]) - return dict(loss_rpn_reg=losses[1]) - - def get_bboxes(self, - anchor_list, - cls_scores, - bbox_preds, - img_metas, - cfg, - rescale=False): - """Get proposal predict. - - Args: - anchor_list (list[list]): Multi level anchors of each image. - cls_scores (list[Tensor]): Classification scores for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * 4, H, W). - img_metas (list[dict], Optional): Image meta info. Default None. 
- cfg (mmcv.Config, Optional): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Returns: - Tensor: Labeled boxes in shape (n, 5), where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. - """ - assert len(cls_scores) == len(bbox_preds) - - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = select_single_mlvl(cls_scores, img_id) - bbox_pred_list = select_single_mlvl(bbox_preds, img_id) - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score_list, bbox_pred_list, - anchor_list[img_id], img_shape, - scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - img_shape, - scale_factor, - cfg, - rescale=False): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has - shape (num_anchors * 4, H, W). - mlvl_anchors (list[Tensor]): Box reference from all scale - levels of a single image, each item has shape - (num_total_anchors, 4). - img_shape (tuple[int]): Shape of the input image, - (height, width, 3). - scale_factor (ndarray): Scale factor of the image arange as - (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default False. - - Returns: - Tensor: Labeled boxes in shape (n, 5), where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. - """ - cfg = self.test_cfg if cfg is None else cfg - cfg = copy.deepcopy(cfg) - # bboxes from different level should be independent during NMS, - # level_ids are used as labels for batched NMS to separate them - level_ids = [] - mlvl_scores = [] - mlvl_bbox_preds = [] - mlvl_valid_anchors = [] - nms_pre = cfg.get('nms_pre', -1) - for idx in range(len(cls_scores)): - rpn_cls_score = cls_scores[idx] - rpn_bbox_pred = bbox_preds[idx] - assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:] - rpn_cls_score = rpn_cls_score.permute(1, 2, 0) - if self.use_sigmoid_cls: - rpn_cls_score = rpn_cls_score.reshape(-1) - scores = rpn_cls_score.sigmoid() - else: - rpn_cls_score = rpn_cls_score.reshape(-1, 2) - # We set FG labels to [0, num_class-1] and BG label to - # num_class in RPN head since mmdet v2.5, which is unified to - # be consistent with other head since mmdet v2.0. In mmdet v2.0 - # to v2.4 we keep BG label as 0 and FG label as 1 in rpn head. 
- scores = rpn_cls_score.softmax(dim=1)[:, 0] - rpn_bbox_pred = rpn_bbox_pred.permute(1, 2, 0).reshape(-1, 4) - anchors = mlvl_anchors[idx] - - if 0 < nms_pre < scores.shape[0]: - # sort is faster than topk - # _, topk_inds = scores.topk(cfg.nms_pre) - ranked_scores, rank_inds = scores.sort(descending=True) - topk_inds = rank_inds[:nms_pre] - scores = ranked_scores[:nms_pre] - rpn_bbox_pred = rpn_bbox_pred[topk_inds, :] - anchors = anchors[topk_inds, :] - mlvl_scores.append(scores) - mlvl_bbox_preds.append(rpn_bbox_pred) - mlvl_valid_anchors.append(anchors) - level_ids.append( - scores.new_full((scores.size(0), ), idx, dtype=torch.long)) - - scores = torch.cat(mlvl_scores) - anchors = torch.cat(mlvl_valid_anchors) - rpn_bbox_pred = torch.cat(mlvl_bbox_preds) - proposals = self.bbox_coder.decode( - anchors, rpn_bbox_pred, max_shape=img_shape) - ids = torch.cat(level_ids) - - if cfg.min_bbox_size >= 0: - w = proposals[:, 2] - proposals[:, 0] - h = proposals[:, 3] - proposals[:, 1] - valid_mask = (w > cfg.min_bbox_size) & (h > cfg.min_bbox_size) - if not valid_mask.all(): - proposals = proposals[valid_mask] - scores = scores[valid_mask] - ids = ids[valid_mask] - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You ' \ - f'set max_num and ' \ - f'max_per_img at the same time, but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - 'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set' \ - f' iou_threshold in nms and ' \ - f'nms_thr at the same time, but get' \ - f' {cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the nms_thr ' \ - f'which will be deprecated.' - - if proposals.numel() > 0: - dets, _ = batched_nms(proposals, scores, ids, cfg.nms) - else: - return proposals.new_zeros(0, 5) - - return dets[:cfg.max_per_img] - - def refine_bboxes(self, anchor_list, bbox_preds, img_metas): - """Refine bboxes through stages.""" - num_levels = len(bbox_preds) - new_anchor_list = [] - for img_id in range(len(img_metas)): - mlvl_anchors = [] - for i in range(num_levels): - bbox_pred = bbox_preds[i][img_id].detach() - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - img_shape = img_metas[img_id]['img_shape'] - bboxes = self.bbox_coder.decode(anchor_list[img_id][i], - bbox_pred, img_shape) - mlvl_anchors.append(bboxes) - new_anchor_list.append(mlvl_anchors) - return new_anchor_list - - -@HEADS.register_module() -class CascadeRPNHead(BaseDenseHead): - """The CascadeRPNHead will predict more accurate region proposals, which is - required for two-stage detectors (such as Fast/Faster R-CNN). CascadeRPN - consists of a sequence of RPNStage to progressively improve the accuracy of - the detected proposals. - - More details can be found in ``https://arxiv.org/abs/1909.06720``. - - Args: - num_stages (int): number of CascadeRPN stages. - stages (list[dict]): list of configs to build the stages. - train_cfg (list[dict]): list of configs at training time each stage. 
- test_cfg (dict): config at testing time. - """ - - def __init__(self, num_stages, stages, train_cfg, test_cfg, init_cfg=None): - super(CascadeRPNHead, self).__init__(init_cfg) - assert num_stages == len(stages) - self.num_stages = num_stages - # Be careful! Pretrained weights cannot be loaded when use - # nn.ModuleList - self.stages = ModuleList() - for i in range(len(stages)): - train_cfg_i = train_cfg[i] if train_cfg is not None else None - stages[i].update(train_cfg=train_cfg_i) - stages[i].update(test_cfg=test_cfg) - self.stages.append(build_head(stages[i])) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def loss(self): - """loss() is implemented in StageCascadeRPNHead.""" - pass - - def get_bboxes(self): - """get_bboxes() is implemented in StageCascadeRPNHead.""" - pass - - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None): - """Forward train function.""" - assert gt_labels is None, 'RPN does not require gt_labels' - - featmap_sizes = [featmap.size()[-2:] for featmap in x] - device = x[0].device - anchor_list, valid_flag_list = self.stages[0].get_anchors( - featmap_sizes, img_metas, device=device) - - losses = dict() - - for i in range(self.num_stages): - stage = self.stages[i] - - if stage.adapt_cfg['type'] == 'offset': - offset_list = stage.anchor_offset(anchor_list, - stage.anchor_strides, - featmap_sizes) - else: - offset_list = None - x, cls_score, bbox_pred = stage(x, offset_list) - rpn_loss_inputs = (anchor_list, valid_flag_list, cls_score, - bbox_pred, gt_bboxes, img_metas) - stage_loss = stage.loss(*rpn_loss_inputs) - for name, value in stage_loss.items(): - losses['s{}.{}'.format(i, name)] = value - - # refine boxes - if i < self.num_stages - 1: - anchor_list = stage.refine_bboxes(anchor_list, bbox_pred, - img_metas) - if proposal_cfg is None: - return losses - else: - proposal_list = self.stages[-1].get_bboxes(anchor_list, cls_score, - bbox_pred, img_metas, - self.test_cfg) - return losses, proposal_list - - def simple_test_rpn(self, x, img_metas): - """Simple forward test function.""" - featmap_sizes = [featmap.size()[-2:] for featmap in x] - device = x[0].device - anchor_list, _ = self.stages[0].get_anchors( - featmap_sizes, img_metas, device=device) - - for i in range(self.num_stages): - stage = self.stages[i] - if stage.adapt_cfg['type'] == 'offset': - offset_list = stage.anchor_offset(anchor_list, - stage.anchor_strides, - featmap_sizes) - else: - offset_list = None - x, cls_score, bbox_pred = stage(x, offset_list) - if i < self.num_stages - 1: - anchor_list = stage.refine_bboxes(anchor_list, bbox_pred, - img_metas) - - proposal_list = self.stages[-1].get_bboxes(anchor_list, cls_score, - bbox_pred, img_metas, - self.test_cfg) - return proposal_list - - def aug_test_rpn(self, x, img_metas): - """Augmented forward test function.""" - raise NotImplementedError( - 'CascadeRPNHead does not support test-time augmentation') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/centernet_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/centernet_head.py deleted file mode 100644 index b9d5d2f0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/centernet_head.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import bias_init_with_prob, normal_init -from mmcv.ops import batched_nms -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply -from mmdet.models import HEADS, build_loss -from mmdet.models.utils import gaussian_radius, gen_gaussian_target -from ..utils.gaussian_target import (get_local_maximum, get_topk_from_heatmap, - transpose_and_gather_feat) -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class CenterNetHead(BaseDenseHead, BBoxTestMixin): - """Objects as Points Head. CenterHead use center_point to indicate object's - position. Paper link - - Args: - in_channel (int): Number of channel in the input feature map. - feat_channel (int): Number of channel in the intermediate feature map. - num_classes (int): Number of categories excluding the background - category. - loss_center_heatmap (dict | None): Config of center heatmap loss. - Default: GaussianFocalLoss. - loss_wh (dict | None): Config of wh loss. Default: L1Loss. - loss_offset (dict | None): Config of offset loss. Default: L1Loss. - train_cfg (dict | None): Training config. Useless in CenterNet, - but we keep this variable for SingleStageDetector. Default: None. - test_cfg (dict | None): Testing config of CenterNet. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channel, - feat_channel, - num_classes, - loss_center_heatmap=dict( - type='GaussianFocalLoss', loss_weight=1.0), - loss_wh=dict(type='L1Loss', loss_weight=0.1), - loss_offset=dict(type='L1Loss', loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=None): - super(CenterNetHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.heatmap_head = self._build_head(in_channel, feat_channel, - num_classes) - self.wh_head = self._build_head(in_channel, feat_channel, 2) - self.offset_head = self._build_head(in_channel, feat_channel, 2) - - self.loss_center_heatmap = build_loss(loss_center_heatmap) - self.loss_wh = build_loss(loss_wh) - self.loss_offset = build_loss(loss_offset) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.fp16_enabled = False - - def _build_head(self, in_channel, feat_channel, out_channel): - """Build head for each branch.""" - layer = nn.Sequential( - nn.Conv2d(in_channel, feat_channel, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(feat_channel, out_channel, kernel_size=1)) - return layer - - def init_weights(self): - """Initialize weights of the head.""" - bias_init = bias_init_with_prob(0.1) - self.heatmap_head[-1].bias.data.fill_(bias_init) - for head in [self.wh_head, self.offset_head]: - for m in head.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, std=0.001) - - def forward(self, feats): - """Forward features. Notice CenterNet head does not use FPN. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - center_heatmap_preds (List[Tensor]): center predict heatmaps for - all levels, the channels number is num_classes. - wh_preds (List[Tensor]): wh predicts for all levels, the channels - number is 2. - offset_preds (List[Tensor]): offset predicts for all levels, the - channels number is 2. - """ - return multi_apply(self.forward_single, feats) - - def forward_single(self, feat): - """Forward feature of a single level. - - Args: - feat (Tensor): Feature of a single level. 
- - Returns: - center_heatmap_pred (Tensor): center predict heatmaps, the - channels number is num_classes. - wh_pred (Tensor): wh predicts, the channels number is 2. - offset_pred (Tensor): offset predicts, the channels number is 2. - """ - center_heatmap_pred = self.heatmap_head(feat).sigmoid() - wh_pred = self.wh_head(feat) - offset_pred = self.offset_head(feat) - return center_heatmap_pred, wh_pred, offset_pred - - @force_fp32(apply_to=('center_heatmap_preds', 'wh_preds', 'offset_preds')) - def loss(self, - center_heatmap_preds, - wh_preds, - offset_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - center_heatmap_preds (list[Tensor]): center predict heatmaps for - all levels with shape (B, num_classes, H, W). - wh_preds (list[Tensor]): wh predicts for all levels with - shape (B, 2, H, W). - offset_preds (list[Tensor]): offset predicts for all levels - with shape (B, 2, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: which has components below: - - loss_center_heatmap (Tensor): loss of center heatmap. - - loss_wh (Tensor): loss of hw heatmap - - loss_offset (Tensor): loss of offset heatmap. - """ - assert len(center_heatmap_preds) == len(wh_preds) == len( - offset_preds) == 1 - center_heatmap_pred = center_heatmap_preds[0] - wh_pred = wh_preds[0] - offset_pred = offset_preds[0] - - target_result, avg_factor = self.get_targets(gt_bboxes, gt_labels, - center_heatmap_pred.shape, - img_metas[0]['pad_shape']) - - center_heatmap_target = target_result['center_heatmap_target'] - wh_target = target_result['wh_target'] - offset_target = target_result['offset_target'] - wh_offset_target_weight = target_result['wh_offset_target_weight'] - - # Since the channel of wh_target and offset_target is 2, the avg_factor - # of loss_center_heatmap is always 1/2 of loss_wh and loss_offset. - loss_center_heatmap = self.loss_center_heatmap( - center_heatmap_pred, center_heatmap_target, avg_factor=avg_factor) - loss_wh = self.loss_wh( - wh_pred, - wh_target, - wh_offset_target_weight, - avg_factor=avg_factor * 2) - loss_offset = self.loss_offset( - offset_pred, - offset_target, - wh_offset_target_weight, - avg_factor=avg_factor * 2) - return dict( - loss_center_heatmap=loss_center_heatmap, - loss_wh=loss_wh, - loss_offset=loss_offset) - - def get_targets(self, gt_bboxes, gt_labels, feat_shape, img_shape): - """Compute regression and classification targets in multiple images. - - Args: - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box. - feat_shape (list[int]): feature map shape with value [B, _, H, W] - img_shape (list[int]): image shape in [h, w] format. - - Returns: - tuple[dict,float]: The float value is mean avg_factor, the dict has - components below: - - center_heatmap_target (Tensor): targets of center heatmap, \ - shape (B, num_classes, H, W). - - wh_target (Tensor): targets of wh predict, shape \ - (B, 2, H, W). - - offset_target (Tensor): targets of offset predict, shape \ - (B, 2, H, W). 
- - wh_offset_target_weight (Tensor): weights of wh and offset \ - predict, shape (B, 2, H, W). - """ - img_h, img_w = img_shape[:2] - bs, _, feat_h, feat_w = feat_shape - - width_ratio = float(feat_w / img_w) - height_ratio = float(feat_h / img_h) - - center_heatmap_target = gt_bboxes[-1].new_zeros( - [bs, self.num_classes, feat_h, feat_w]) - wh_target = gt_bboxes[-1].new_zeros([bs, 2, feat_h, feat_w]) - offset_target = gt_bboxes[-1].new_zeros([bs, 2, feat_h, feat_w]) - wh_offset_target_weight = gt_bboxes[-1].new_zeros( - [bs, 2, feat_h, feat_w]) - - for batch_id in range(bs): - gt_bbox = gt_bboxes[batch_id] - gt_label = gt_labels[batch_id] - center_x = (gt_bbox[:, [0]] + gt_bbox[:, [2]]) * width_ratio / 2 - center_y = (gt_bbox[:, [1]] + gt_bbox[:, [3]]) * height_ratio / 2 - gt_centers = torch.cat((center_x, center_y), dim=1) - - for j, ct in enumerate(gt_centers): - ctx_int, cty_int = ct.int() - ctx, cty = ct - scale_box_h = (gt_bbox[j][3] - gt_bbox[j][1]) * height_ratio - scale_box_w = (gt_bbox[j][2] - gt_bbox[j][0]) * width_ratio - radius = gaussian_radius([scale_box_h, scale_box_w], - min_overlap=0.3) - radius = max(0, int(radius)) - ind = gt_label[j] - gen_gaussian_target(center_heatmap_target[batch_id, ind], - [ctx_int, cty_int], radius) - - wh_target[batch_id, 0, cty_int, ctx_int] = scale_box_w - wh_target[batch_id, 1, cty_int, ctx_int] = scale_box_h - - offset_target[batch_id, 0, cty_int, ctx_int] = ctx - ctx_int - offset_target[batch_id, 1, cty_int, ctx_int] = cty - cty_int - - wh_offset_target_weight[batch_id, :, cty_int, ctx_int] = 1 - - avg_factor = max(1, center_heatmap_target.eq(1).sum()) - target_result = dict( - center_heatmap_target=center_heatmap_target, - wh_target=wh_target, - offset_target=offset_target, - wh_offset_target_weight=wh_offset_target_weight) - return target_result, avg_factor - - @force_fp32(apply_to=('center_heatmap_preds', 'wh_preds', 'offset_preds')) - def get_bboxes(self, - center_heatmap_preds, - wh_preds, - offset_preds, - img_metas, - rescale=True, - with_nms=False): - """Transform network output for a batch into bbox predictions. - - Args: - center_heatmap_preds (list[Tensor]): Center predict heatmaps for - all levels with shape (B, num_classes, H, W). - wh_preds (list[Tensor]): WH predicts for all levels with - shape (B, 2, H, W). - offset_preds (list[Tensor]): Offset predicts for all levels - with shape (B, 2, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: True. - with_nms (bool): If True, do nms before return boxes. - Default: False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- """ - assert len(center_heatmap_preds) == len(wh_preds) == len( - offset_preds) == 1 - result_list = [] - for img_id in range(len(img_metas)): - result_list.append( - self._get_bboxes_single( - center_heatmap_preds[0][img_id:img_id + 1, ...], - wh_preds[0][img_id:img_id + 1, ...], - offset_preds[0][img_id:img_id + 1, ...], - img_metas[img_id], - rescale=rescale, - with_nms=with_nms)) - return result_list - - def _get_bboxes_single(self, - center_heatmap_pred, - wh_pred, - offset_pred, - img_meta, - rescale=False, - with_nms=True): - """Transform outputs of a single image into bbox results. - - Args: - center_heatmap_pred (Tensor): Center heatmap for current level with - shape (1, num_classes, H, W). - wh_pred (Tensor): WH heatmap for current level with shape - (1, num_classes, H, W). - offset_pred (Tensor): Offset for current level with shape - (1, corner_offset_channels, H, W). - img_meta (dict): Meta information of current image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor, Tensor]: The first item is an (n, 5) tensor, where - 5 represent (tl_x, tl_y, br_x, br_y, score) and the score - between 0 and 1. The shape of the second tensor in the tuple - is (n,), and each element represents the class label of the - corresponding box. - """ - batch_det_bboxes, batch_labels = self.decode_heatmap( - center_heatmap_pred, - wh_pred, - offset_pred, - img_meta['batch_input_shape'], - k=self.test_cfg.topk, - kernel=self.test_cfg.local_maximum_kernel) - - det_bboxes = batch_det_bboxes.view([-1, 5]) - det_labels = batch_labels.view(-1) - - batch_border = det_bboxes.new_tensor(img_meta['border'])[..., - [2, 0, 2, 0]] - det_bboxes[..., :4] -= batch_border - - if rescale: - det_bboxes[..., :4] /= det_bboxes.new_tensor( - img_meta['scale_factor']) - - if with_nms: - det_bboxes, det_labels = self._bboxes_nms(det_bboxes, det_labels, - self.test_cfg) - return det_bboxes, det_labels - - def decode_heatmap(self, - center_heatmap_pred, - wh_pred, - offset_pred, - img_shape, - k=100, - kernel=3): - """Transform outputs into detections raw bbox prediction. - - Args: - center_heatmap_pred (Tensor): center predict heatmap, - shape (B, num_classes, H, W). - wh_pred (Tensor): wh predict, shape (B, 2, H, W). - offset_pred (Tensor): offset predict, shape (B, 2, H, W). - img_shape (list[int]): image shape in [h, w] format. - k (int): Get top k center keypoints from heatmap. Default 100. - kernel (int): Max pooling kernel for extract local maximum pixels. - Default 3. 
- - Returns: - tuple[torch.Tensor]: Decoded output of CenterNetHead, containing - the following Tensors: - - - batch_bboxes (Tensor): Coords of each box with shape (B, k, 5) - - batch_topk_labels (Tensor): Categories of each box with \ - shape (B, k) - """ - height, width = center_heatmap_pred.shape[2:] - inp_h, inp_w = img_shape - - center_heatmap_pred = get_local_maximum( - center_heatmap_pred, kernel=kernel) - - *batch_dets, topk_ys, topk_xs = get_topk_from_heatmap( - center_heatmap_pred, k=k) - batch_scores, batch_index, batch_topk_labels = batch_dets - - wh = transpose_and_gather_feat(wh_pred, batch_index) - offset = transpose_and_gather_feat(offset_pred, batch_index) - topk_xs = topk_xs + offset[..., 0] - topk_ys = topk_ys + offset[..., 1] - tl_x = (topk_xs - wh[..., 0] / 2) * (inp_w / width) - tl_y = (topk_ys - wh[..., 1] / 2) * (inp_h / height) - br_x = (topk_xs + wh[..., 0] / 2) * (inp_w / width) - br_y = (topk_ys + wh[..., 1] / 2) * (inp_h / height) - - batch_bboxes = torch.stack([tl_x, tl_y, br_x, br_y], dim=2) - batch_bboxes = torch.cat((batch_bboxes, batch_scores[..., None]), - dim=-1) - return batch_bboxes, batch_topk_labels - - def _bboxes_nms(self, bboxes, labels, cfg): - if labels.numel() > 0: - max_num = cfg.max_per_img - bboxes, keep = batched_nms(bboxes[:, :4], bboxes[:, - -1].contiguous(), - labels, cfg.nms) - if max_num > 0: - bboxes = bboxes[:max_num] - labels = labels[keep][:max_num] - - return bboxes, labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/centripetal_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/centripetal_head.py deleted file mode 100644 index ebc721b7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/centripetal_head.py +++ /dev/null @@ -1,430 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, normal_init -from mmcv.ops import DeformConv2d -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply -from ..builder import HEADS, build_loss -from .corner_head import CornerHead - - -@HEADS.register_module() -class CentripetalHead(CornerHead): - """Head of CentripetalNet: Pursuing High-quality Keypoint Pairs for Object - Detection. - - CentripetalHead inherits from :class:`CornerHead`. It removes the - embedding branch and adds guiding shift and centripetal shift branches. - More details can be found in the `paper - `_ . - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - num_feat_levels (int): Levels of feature from the previous module. 2 - for HourglassNet-104 and 1 for HourglassNet-52. HourglassNet-104 - outputs the final feature and intermediate supervision feature and - HourglassNet-52 only outputs the final feature. Default: 2. - corner_emb_channels (int): Channel of embedding vector. Default: 1. - train_cfg (dict | None): Training config. Useless in CornerHead, - but we keep this variable for SingleStageDetector. Default: None. - test_cfg (dict | None): Testing config of CornerHead. Default: None. - loss_heatmap (dict | None): Config of corner heatmap loss. Default: - GaussianFocalLoss. - loss_embedding (dict | None): Config of corner embedding loss. Default: - AssociativeEmbeddingLoss. - loss_offset (dict | None): Config of corner offset loss. Default: - SmoothL1Loss. - loss_guiding_shift (dict): Config of guiding shift loss. Default: - SmoothL1Loss. 
- loss_centripetal_shift (dict): Config of centripetal shift loss. - Default: SmoothL1Loss. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - *args, - centripetal_shift_channels=2, - guiding_shift_channels=2, - feat_adaption_conv_kernel=3, - loss_guiding_shift=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=0.05), - loss_centripetal_shift=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1), - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - assert centripetal_shift_channels == 2, ( - 'CentripetalHead only support centripetal_shift_channels == 2') - self.centripetal_shift_channels = centripetal_shift_channels - assert guiding_shift_channels == 2, ( - 'CentripetalHead only support guiding_shift_channels == 2') - self.guiding_shift_channels = guiding_shift_channels - self.feat_adaption_conv_kernel = feat_adaption_conv_kernel - super(CentripetalHead, self).__init__( - *args, init_cfg=init_cfg, **kwargs) - self.loss_guiding_shift = build_loss(loss_guiding_shift) - self.loss_centripetal_shift = build_loss(loss_centripetal_shift) - - def _init_centripetal_layers(self): - """Initialize centripetal layers. - - Including feature adaption deform convs (feat_adaption), deform offset - prediction convs (dcn_off), guiding shift (guiding_shift) and - centripetal shift ( centripetal_shift). Each branch has two parts: - prefix `tl_` for top-left and `br_` for bottom-right. - """ - self.tl_feat_adaption = nn.ModuleList() - self.br_feat_adaption = nn.ModuleList() - self.tl_dcn_offset = nn.ModuleList() - self.br_dcn_offset = nn.ModuleList() - self.tl_guiding_shift = nn.ModuleList() - self.br_guiding_shift = nn.ModuleList() - self.tl_centripetal_shift = nn.ModuleList() - self.br_centripetal_shift = nn.ModuleList() - - for _ in range(self.num_feat_levels): - self.tl_feat_adaption.append( - DeformConv2d(self.in_channels, self.in_channels, - self.feat_adaption_conv_kernel, 1, 1)) - self.br_feat_adaption.append( - DeformConv2d(self.in_channels, self.in_channels, - self.feat_adaption_conv_kernel, 1, 1)) - - self.tl_guiding_shift.append( - self._make_layers( - out_channels=self.guiding_shift_channels, - in_channels=self.in_channels)) - self.br_guiding_shift.append( - self._make_layers( - out_channels=self.guiding_shift_channels, - in_channels=self.in_channels)) - - self.tl_dcn_offset.append( - ConvModule( - self.guiding_shift_channels, - self.feat_adaption_conv_kernel**2 * - self.guiding_shift_channels, - 1, - bias=False, - act_cfg=None)) - self.br_dcn_offset.append( - ConvModule( - self.guiding_shift_channels, - self.feat_adaption_conv_kernel**2 * - self.guiding_shift_channels, - 1, - bias=False, - act_cfg=None)) - - self.tl_centripetal_shift.append( - self._make_layers( - out_channels=self.centripetal_shift_channels, - in_channels=self.in_channels)) - self.br_centripetal_shift.append( - self._make_layers( - out_channels=self.centripetal_shift_channels, - in_channels=self.in_channels)) - - def _init_layers(self): - """Initialize layers for CentripetalHead. 
- - Including two parts: CornerHead layers and CentripetalHead layers - """ - super()._init_layers() # using _init_layers in CornerHead - self._init_centripetal_layers() - - def init_weights(self): - super(CentripetalHead, self).init_weights() - for i in range(self.num_feat_levels): - normal_init(self.tl_feat_adaption[i], std=0.01) - normal_init(self.br_feat_adaption[i], std=0.01) - normal_init(self.tl_dcn_offset[i].conv, std=0.1) - normal_init(self.br_dcn_offset[i].conv, std=0.1) - _ = [x.conv.reset_parameters() for x in self.tl_guiding_shift[i]] - _ = [x.conv.reset_parameters() for x in self.br_guiding_shift[i]] - _ = [ - x.conv.reset_parameters() for x in self.tl_centripetal_shift[i] - ] - _ = [ - x.conv.reset_parameters() for x in self.br_centripetal_shift[i] - ] - - def forward_single(self, x, lvl_ind): - """Forward feature of a single level. - - Args: - x (Tensor): Feature of a single level. - lvl_ind (int): Level index of current feature. - - Returns: - tuple[Tensor]: A tuple of CentripetalHead's output for current - feature level. Containing the following Tensors: - - - tl_heat (Tensor): Predicted top-left corner heatmap. - - br_heat (Tensor): Predicted bottom-right corner heatmap. - - tl_off (Tensor): Predicted top-left offset heatmap. - - br_off (Tensor): Predicted bottom-right offset heatmap. - - tl_guiding_shift (Tensor): Predicted top-left guiding shift - heatmap. - - br_guiding_shift (Tensor): Predicted bottom-right guiding - shift heatmap. - - tl_centripetal_shift (Tensor): Predicted top-left centripetal - shift heatmap. - - br_centripetal_shift (Tensor): Predicted bottom-right - centripetal shift heatmap. - """ - tl_heat, br_heat, _, _, tl_off, br_off, tl_pool, br_pool = super( - ).forward_single( - x, lvl_ind, return_pool=True) - - tl_guiding_shift = self.tl_guiding_shift[lvl_ind](tl_pool) - br_guiding_shift = self.br_guiding_shift[lvl_ind](br_pool) - - tl_dcn_offset = self.tl_dcn_offset[lvl_ind](tl_guiding_shift.detach()) - br_dcn_offset = self.br_dcn_offset[lvl_ind](br_guiding_shift.detach()) - - tl_feat_adaption = self.tl_feat_adaption[lvl_ind](tl_pool, - tl_dcn_offset) - br_feat_adaption = self.br_feat_adaption[lvl_ind](br_pool, - br_dcn_offset) - - tl_centripetal_shift = self.tl_centripetal_shift[lvl_ind]( - tl_feat_adaption) - br_centripetal_shift = self.br_centripetal_shift[lvl_ind]( - br_feat_adaption) - - result_list = [ - tl_heat, br_heat, tl_off, br_off, tl_guiding_shift, - br_guiding_shift, tl_centripetal_shift, br_centripetal_shift - ] - return result_list - - @force_fp32() - def loss(self, - tl_heats, - br_heats, - tl_offs, - br_offs, - tl_guiding_shifts, - br_guiding_shifts, - tl_centripetal_shifts, - br_centripetal_shifts, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). - br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - tl_guiding_shifts (list[Tensor]): Top-left guiding shifts for each - level with shape (N, guiding_shift_channels, H, W). - br_guiding_shifts (list[Tensor]): Bottom-right guiding shifts for - each level with shape (N, guiding_shift_channels, H, W). 
- tl_centripetal_shifts (list[Tensor]): Top-left centripetal shifts - for each level with shape (N, centripetal_shift_channels, H, - W). - br_centripetal_shifts (list[Tensor]): Bottom-right centripetal - shifts for each level with shape (N, - centripetal_shift_channels, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [left, top, right, bottom] format. - gt_labels (list[Tensor]): Class indices corresponding to each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. Containing the - following losses: - - - det_loss (list[Tensor]): Corner keypoint losses of all - feature levels. - - off_loss (list[Tensor]): Corner offset losses of all feature - levels. - - guiding_loss (list[Tensor]): Guiding shift losses of all - feature levels. - - centripetal_loss (list[Tensor]): Centripetal shift losses of - all feature levels. - """ - targets = self.get_targets( - gt_bboxes, - gt_labels, - tl_heats[-1].shape, - img_metas[0]['pad_shape'], - with_corner_emb=self.with_corner_emb, - with_guiding_shift=True, - with_centripetal_shift=True) - mlvl_targets = [targets for _ in range(self.num_feat_levels)] - [det_losses, off_losses, guiding_losses, centripetal_losses - ] = multi_apply(self.loss_single, tl_heats, br_heats, tl_offs, - br_offs, tl_guiding_shifts, br_guiding_shifts, - tl_centripetal_shifts, br_centripetal_shifts, - mlvl_targets) - loss_dict = dict( - det_loss=det_losses, - off_loss=off_losses, - guiding_loss=guiding_losses, - centripetal_loss=centripetal_losses) - return loss_dict - - def loss_single(self, tl_hmp, br_hmp, tl_off, br_off, tl_guiding_shift, - br_guiding_shift, tl_centripetal_shift, - br_centripetal_shift, targets): - """Compute losses for single level. - - Args: - tl_hmp (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). - br_hmp (Tensor): Bottom-right corner heatmap for current level with - shape (N, num_classes, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - tl_guiding_shift (Tensor): Top-left guiding shift for current level - with shape (N, guiding_shift_channels, H, W). - br_guiding_shift (Tensor): Bottom-right guiding shift for current - level with shape (N, guiding_shift_channels, H, W). - tl_centripetal_shift (Tensor): Top-left centripetal shift for - current level with shape (N, centripetal_shift_channels, H, W). - br_centripetal_shift (Tensor): Bottom-right centripetal shift for - current level with shape (N, centripetal_shift_channels, H, W). - targets (dict): Corner target generated by `get_targets`. - - Returns: - tuple[torch.Tensor]: Losses of the head's different branches - containing the following losses: - - - det_loss (Tensor): Corner keypoint loss. - - off_loss (Tensor): Corner offset loss. - - guiding_loss (Tensor): Guiding shift loss. - - centripetal_loss (Tensor): Centripetal shift loss. 
- """ - targets['corner_embedding'] = None - - det_loss, _, _, off_loss = super().loss_single(tl_hmp, br_hmp, None, - None, tl_off, br_off, - targets) - - gt_tl_guiding_shift = targets['topleft_guiding_shift'] - gt_br_guiding_shift = targets['bottomright_guiding_shift'] - gt_tl_centripetal_shift = targets['topleft_centripetal_shift'] - gt_br_centripetal_shift = targets['bottomright_centripetal_shift'] - - gt_tl_heatmap = targets['topleft_heatmap'] - gt_br_heatmap = targets['bottomright_heatmap'] - # We only compute the offset loss at the real corner position. - # The value of real corner would be 1 in heatmap ground truth. - # The mask is computed in class agnostic mode and its shape is - # batch * 1 * width * height. - tl_mask = gt_tl_heatmap.eq(1).sum(1).gt(0).unsqueeze(1).type_as( - gt_tl_heatmap) - br_mask = gt_br_heatmap.eq(1).sum(1).gt(0).unsqueeze(1).type_as( - gt_br_heatmap) - - # Guiding shift loss - tl_guiding_loss = self.loss_guiding_shift( - tl_guiding_shift, - gt_tl_guiding_shift, - tl_mask, - avg_factor=tl_mask.sum()) - br_guiding_loss = self.loss_guiding_shift( - br_guiding_shift, - gt_br_guiding_shift, - br_mask, - avg_factor=br_mask.sum()) - guiding_loss = (tl_guiding_loss + br_guiding_loss) / 2.0 - # Centripetal shift loss - tl_centripetal_loss = self.loss_centripetal_shift( - tl_centripetal_shift, - gt_tl_centripetal_shift, - tl_mask, - avg_factor=tl_mask.sum()) - br_centripetal_loss = self.loss_centripetal_shift( - br_centripetal_shift, - gt_br_centripetal_shift, - br_mask, - avg_factor=br_mask.sum()) - centripetal_loss = (tl_centripetal_loss + br_centripetal_loss) / 2.0 - - return det_loss, off_loss, guiding_loss, centripetal_loss - - @force_fp32() - def get_bboxes(self, - tl_heats, - br_heats, - tl_offs, - br_offs, - tl_guiding_shifts, - br_guiding_shifts, - tl_centripetal_shifts, - br_centripetal_shifts, - img_metas, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). - br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - tl_guiding_shifts (list[Tensor]): Top-left guiding shifts for each - level with shape (N, guiding_shift_channels, H, W). Useless in - this function, we keep this arg because it's the raw output - from CentripetalHead. - br_guiding_shifts (list[Tensor]): Bottom-right guiding shifts for - each level with shape (N, guiding_shift_channels, H, W). - Useless in this function, we keep this arg because it's the - raw output from CentripetalHead. - tl_centripetal_shifts (list[Tensor]): Top-left centripetal shifts - for each level with shape (N, centripetal_shift_channels, H, - W). - br_centripetal_shifts (list[Tensor]): Bottom-right centripetal - shifts for each level with shape (N, - centripetal_shift_channels, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. 
- """ - assert tl_heats[-1].shape[0] == br_heats[-1].shape[0] == len(img_metas) - result_list = [] - for img_id in range(len(img_metas)): - result_list.append( - self._get_bboxes_single( - tl_heats[-1][img_id:img_id + 1, :], - br_heats[-1][img_id:img_id + 1, :], - tl_offs[-1][img_id:img_id + 1, :], - br_offs[-1][img_id:img_id + 1, :], - img_metas[img_id], - tl_emb=None, - br_emb=None, - tl_centripetal_shift=tl_centripetal_shifts[-1][ - img_id:img_id + 1, :], - br_centripetal_shift=br_centripetal_shifts[-1][ - img_id:img_id + 1, :], - rescale=rescale, - with_nms=with_nms)) - - return result_list diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/corner_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/corner_head.py deleted file mode 100644 index c6a2866f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/corner_head.py +++ /dev/null @@ -1,1086 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from logging import warning -from math import ceil, log - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob -from mmcv.ops import CornerPool, batched_nms -from mmcv.runner import BaseModule, force_fp32 - -from mmdet.core import multi_apply -from ..builder import HEADS, build_loss -from ..utils import gaussian_radius, gen_gaussian_target -from ..utils.gaussian_target import (gather_feat, get_local_maximum, - get_topk_from_heatmap, - transpose_and_gather_feat) -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -class BiCornerPool(BaseModule): - """Bidirectional Corner Pooling Module (TopLeft, BottomRight, etc.) - - Args: - in_channels (int): Input channels of module. - out_channels (int): Output channels of module. - feat_channels (int): Feature channels of module. - directions (list[str]): Directions of two CornerPools. - norm_cfg (dict): Dictionary to construct and config norm layer. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - directions, - feat_channels=128, - out_channels=128, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=None): - super(BiCornerPool, self).__init__(init_cfg) - self.direction1_conv = ConvModule( - in_channels, feat_channels, 3, padding=1, norm_cfg=norm_cfg) - self.direction2_conv = ConvModule( - in_channels, feat_channels, 3, padding=1, norm_cfg=norm_cfg) - - self.aftpool_conv = ConvModule( - feat_channels, - out_channels, - 3, - padding=1, - norm_cfg=norm_cfg, - act_cfg=None) - - self.conv1 = ConvModule( - in_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=None) - self.conv2 = ConvModule( - in_channels, out_channels, 3, padding=1, norm_cfg=norm_cfg) - - self.direction1_pool = CornerPool(directions[0]) - self.direction2_pool = CornerPool(directions[1]) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward features from the upstream network. - - Args: - x (tensor): Input feature of BiCornerPool. - - Returns: - conv2 (tensor): Output feature of BiCornerPool. 
- """ - direction1_conv = self.direction1_conv(x) - direction2_conv = self.direction2_conv(x) - direction1_feat = self.direction1_pool(direction1_conv) - direction2_feat = self.direction2_pool(direction2_conv) - aftpool_conv = self.aftpool_conv(direction1_feat + direction2_feat) - conv1 = self.conv1(x) - relu = self.relu(aftpool_conv + conv1) - conv2 = self.conv2(relu) - return conv2 - - -@HEADS.register_module() -class CornerHead(BaseDenseHead, BBoxTestMixin): - """Head of CornerNet: Detecting Objects as Paired Keypoints. - - Code is modified from the `official github repo - `_ . - - More details can be found in the `paper - `_ . - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - num_feat_levels (int): Levels of feature from the previous module. 2 - for HourglassNet-104 and 1 for HourglassNet-52. Because - HourglassNet-104 outputs the final feature and intermediate - supervision feature and HourglassNet-52 only outputs the final - feature. Default: 2. - corner_emb_channels (int): Channel of embedding vector. Default: 1. - train_cfg (dict | None): Training config. Useless in CornerHead, - but we keep this variable for SingleStageDetector. Default: None. - test_cfg (dict | None): Testing config of CornerHead. Default: None. - loss_heatmap (dict | None): Config of corner heatmap loss. Default: - GaussianFocalLoss. - loss_embedding (dict | None): Config of corner embedding loss. Default: - AssociativeEmbeddingLoss. - loss_offset (dict | None): Config of corner offset loss. Default: - SmoothL1Loss. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - num_classes, - in_channels, - num_feat_levels=2, - corner_emb_channels=1, - train_cfg=None, - test_cfg=None, - loss_heatmap=dict( - type='GaussianFocalLoss', - alpha=2.0, - gamma=4.0, - loss_weight=1), - loss_embedding=dict( - type='AssociativeEmbeddingLoss', - pull_weight=0.25, - push_weight=0.25), - loss_offset=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1), - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(CornerHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.in_channels = in_channels - self.corner_emb_channels = corner_emb_channels - self.with_corner_emb = self.corner_emb_channels > 0 - self.corner_offset_channels = 2 - self.num_feat_levels = num_feat_levels - self.loss_heatmap = build_loss( - loss_heatmap) if loss_heatmap is not None else None - self.loss_embedding = build_loss( - loss_embedding) if loss_embedding is not None else None - self.loss_offset = build_loss( - loss_offset) if loss_offset is not None else None - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - self.fp16_enabled = False - self._init_layers() - - def _make_layers(self, out_channels, in_channels=256, feat_channels=256): - """Initialize conv sequential for CornerHead.""" - return nn.Sequential( - ConvModule(in_channels, feat_channels, 3, padding=1), - ConvModule( - feat_channels, out_channels, 1, norm_cfg=None, act_cfg=None)) - - def _init_corner_kpt_layers(self): - """Initialize corner keypoint layers. - - Including corner heatmap branch and corner offset branch. Each branch - has two parts: prefix `tl_` for top-left and `br_` for bottom-right. 
- """ - self.tl_pool, self.br_pool = nn.ModuleList(), nn.ModuleList() - self.tl_heat, self.br_heat = nn.ModuleList(), nn.ModuleList() - self.tl_off, self.br_off = nn.ModuleList(), nn.ModuleList() - - for _ in range(self.num_feat_levels): - self.tl_pool.append( - BiCornerPool( - self.in_channels, ['top', 'left'], - out_channels=self.in_channels)) - self.br_pool.append( - BiCornerPool( - self.in_channels, ['bottom', 'right'], - out_channels=self.in_channels)) - - self.tl_heat.append( - self._make_layers( - out_channels=self.num_classes, - in_channels=self.in_channels)) - self.br_heat.append( - self._make_layers( - out_channels=self.num_classes, - in_channels=self.in_channels)) - - self.tl_off.append( - self._make_layers( - out_channels=self.corner_offset_channels, - in_channels=self.in_channels)) - self.br_off.append( - self._make_layers( - out_channels=self.corner_offset_channels, - in_channels=self.in_channels)) - - def _init_corner_emb_layers(self): - """Initialize corner embedding layers. - - Only include corner embedding branch with two parts: prefix `tl_` for - top-left and `br_` for bottom-right. - """ - self.tl_emb, self.br_emb = nn.ModuleList(), nn.ModuleList() - - for _ in range(self.num_feat_levels): - self.tl_emb.append( - self._make_layers( - out_channels=self.corner_emb_channels, - in_channels=self.in_channels)) - self.br_emb.append( - self._make_layers( - out_channels=self.corner_emb_channels, - in_channels=self.in_channels)) - - def _init_layers(self): - """Initialize layers for CornerHead. - - Including two parts: corner keypoint layers and corner embedding layers - """ - self._init_corner_kpt_layers() - if self.with_corner_emb: - self._init_corner_emb_layers() - - def init_weights(self): - super(CornerHead, self).init_weights() - bias_init = bias_init_with_prob(0.1) - for i in range(self.num_feat_levels): - # The initialization of parameters are different between - # nn.Conv2d and ConvModule. Our experiments show that - # using the original initialization of nn.Conv2d increases - # the final mAP by about 0.2% - self.tl_heat[i][-1].conv.reset_parameters() - self.tl_heat[i][-1].conv.bias.data.fill_(bias_init) - self.br_heat[i][-1].conv.reset_parameters() - self.br_heat[i][-1].conv.bias.data.fill_(bias_init) - self.tl_off[i][-1].conv.reset_parameters() - self.br_off[i][-1].conv.reset_parameters() - if self.with_corner_emb: - self.tl_emb[i][-1].conv.reset_parameters() - self.br_emb[i][-1].conv.reset_parameters() - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of corner heatmaps, offset heatmaps and - embedding heatmaps. - - tl_heats (list[Tensor]): Top-left corner heatmaps for all - levels, each is a 4D-tensor, the channels number is - num_classes. - - br_heats (list[Tensor]): Bottom-right corner heatmaps for all - levels, each is a 4D-tensor, the channels number is - num_classes. - - tl_embs (list[Tensor] | list[None]): Top-left embedding - heatmaps for all levels, each is a 4D-tensor or None. - If not None, the channels number is corner_emb_channels. - - br_embs (list[Tensor] | list[None]): Bottom-right embedding - heatmaps for all levels, each is a 4D-tensor or None. - If not None, the channels number is corner_emb_channels. - - tl_offs (list[Tensor]): Top-left offset heatmaps for all - levels, each is a 4D-tensor. The channels number is - corner_offset_channels. 
- - br_offs (list[Tensor]): Bottom-right offset heatmaps for all - levels, each is a 4D-tensor. The channels number is - corner_offset_channels. - """ - lvl_ind = list(range(self.num_feat_levels)) - return multi_apply(self.forward_single, feats, lvl_ind) - - def forward_single(self, x, lvl_ind, return_pool=False): - """Forward feature of a single level. - - Args: - x (Tensor): Feature of a single level. - lvl_ind (int): Level index of current feature. - return_pool (bool): Return corner pool feature or not. - - Returns: - tuple[Tensor]: A tuple of CornerHead's output for current feature - level. Containing the following Tensors: - - - tl_heat (Tensor): Predicted top-left corner heatmap. - - br_heat (Tensor): Predicted bottom-right corner heatmap. - - tl_emb (Tensor | None): Predicted top-left embedding heatmap. - None for `self.with_corner_emb == False`. - - br_emb (Tensor | None): Predicted bottom-right embedding - heatmap. None for `self.with_corner_emb == False`. - - tl_off (Tensor): Predicted top-left offset heatmap. - - br_off (Tensor): Predicted bottom-right offset heatmap. - - tl_pool (Tensor): Top-left corner pool feature. Not must - have. - - br_pool (Tensor): Bottom-right corner pool feature. Not must - have. - """ - tl_pool = self.tl_pool[lvl_ind](x) - tl_heat = self.tl_heat[lvl_ind](tl_pool) - br_pool = self.br_pool[lvl_ind](x) - br_heat = self.br_heat[lvl_ind](br_pool) - - tl_emb, br_emb = None, None - if self.with_corner_emb: - tl_emb = self.tl_emb[lvl_ind](tl_pool) - br_emb = self.br_emb[lvl_ind](br_pool) - - tl_off = self.tl_off[lvl_ind](tl_pool) - br_off = self.br_off[lvl_ind](br_pool) - - result_list = [tl_heat, br_heat, tl_emb, br_emb, tl_off, br_off] - if return_pool: - result_list.append(tl_pool) - result_list.append(br_pool) - - return result_list - - def get_targets(self, - gt_bboxes, - gt_labels, - feat_shape, - img_shape, - with_corner_emb=False, - with_guiding_shift=False, - with_centripetal_shift=False): - """Generate corner targets. - - Including corner heatmap, corner offset. - - Optional: corner embedding, corner guiding shift, centripetal shift. - - For CornerNet, we generate corner heatmap, corner offset and corner - embedding from this function. - - For CentripetalNet, we generate corner heatmap, corner offset, guiding - shift and centripetal shift from this function. - - Args: - gt_bboxes (list[Tensor]): Ground truth bboxes of each image, each - has shape (num_gt, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, each has - shape (num_gt,). - feat_shape (list[int]): Shape of output feature, - [batch, channel, height, width]. - img_shape (list[int]): Shape of input image, - [height, width, channel]. - with_corner_emb (bool): Generate corner embedding target or not. - Default: False. - with_guiding_shift (bool): Generate guiding shift target or not. - Default: False. - with_centripetal_shift (bool): Generate centripetal shift target or - not. Default: False. - - Returns: - dict: Ground truth of corner heatmap, corner offset, corner - embedding, guiding shift and centripetal shift. Containing the - following keys: - - - topleft_heatmap (Tensor): Ground truth top-left corner - heatmap. - - bottomright_heatmap (Tensor): Ground truth bottom-right - corner heatmap. - - topleft_offset (Tensor): Ground truth top-left corner offset. - - bottomright_offset (Tensor): Ground truth bottom-right corner - offset. - - corner_embedding (list[list[list[int]]]): Ground truth corner - embedding. Not must have. 
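forward and the loss functions lean heavily on multi_apply to run a per-level function over every feature level and regroup the per-level tuples into tuples of lists. The real helper lives in mmdet.core; a rough, self-contained equivalent looks like this:

```python
from functools import partial

def multi_apply(func, *args, **kwargs):
    # Apply `func` element-wise over the level dimension of each argument,
    # then transpose: one tuple per level -> one list per output.
    pfunc = partial(func, **kwargs) if kwargs else func
    map_results = map(pfunc, *args)
    return tuple(map(list, zip(*map_results)))

# toy usage: a "head" applied to two feature levels
def head(x, lvl):
    return x + lvl, x * lvl

feats, levels = [1.0, 2.0], [0, 1]
sums, prods = multi_apply(head, feats, levels)
print(sums, prods)  # [1.0, 3.0] [0.0, 2.0]
```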
- - topleft_guiding_shift (Tensor): Ground truth top-left corner - guiding shift. Not must have. - - bottomright_guiding_shift (Tensor): Ground truth bottom-right - corner guiding shift. Not must have. - - topleft_centripetal_shift (Tensor): Ground truth top-left - corner centripetal shift. Not must have. - - bottomright_centripetal_shift (Tensor): Ground truth - bottom-right corner centripetal shift. Not must have. - """ - batch_size, _, height, width = feat_shape - img_h, img_w = img_shape[:2] - - width_ratio = float(width / img_w) - height_ratio = float(height / img_h) - - gt_tl_heatmap = gt_bboxes[-1].new_zeros( - [batch_size, self.num_classes, height, width]) - gt_br_heatmap = gt_bboxes[-1].new_zeros( - [batch_size, self.num_classes, height, width]) - gt_tl_offset = gt_bboxes[-1].new_zeros([batch_size, 2, height, width]) - gt_br_offset = gt_bboxes[-1].new_zeros([batch_size, 2, height, width]) - - if with_corner_emb: - match = [] - - # Guiding shift is a kind of offset, from center to corner - if with_guiding_shift: - gt_tl_guiding_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - gt_br_guiding_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - # Centripetal shift is also a kind of offset, from center to corner - # and normalized by log. - if with_centripetal_shift: - gt_tl_centripetal_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - gt_br_centripetal_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - - for batch_id in range(batch_size): - # Ground truth of corner embedding per image is a list of coord set - corner_match = [] - for box_id in range(len(gt_labels[batch_id])): - left, top, right, bottom = gt_bboxes[batch_id][box_id] - center_x = (left + right) / 2.0 - center_y = (top + bottom) / 2.0 - label = gt_labels[batch_id][box_id] - - # Use coords in the feature level to generate ground truth - scale_left = left * width_ratio - scale_right = right * width_ratio - scale_top = top * height_ratio - scale_bottom = bottom * height_ratio - scale_center_x = center_x * width_ratio - scale_center_y = center_y * height_ratio - - # Int coords on feature map/ground truth tensor - left_idx = int(min(scale_left, width - 1)) - right_idx = int(min(scale_right, width - 1)) - top_idx = int(min(scale_top, height - 1)) - bottom_idx = int(min(scale_bottom, height - 1)) - - # Generate gaussian heatmap - scale_box_width = ceil(scale_right - scale_left) - scale_box_height = ceil(scale_bottom - scale_top) - radius = gaussian_radius((scale_box_height, scale_box_width), - min_overlap=0.3) - radius = max(0, int(radius)) - gt_tl_heatmap[batch_id, label] = gen_gaussian_target( - gt_tl_heatmap[batch_id, label], [left_idx, top_idx], - radius) - gt_br_heatmap[batch_id, label] = gen_gaussian_target( - gt_br_heatmap[batch_id, label], [right_idx, bottom_idx], - radius) - - # Generate corner offset - left_offset = scale_left - left_idx - top_offset = scale_top - top_idx - right_offset = scale_right - right_idx - bottom_offset = scale_bottom - bottom_idx - gt_tl_offset[batch_id, 0, top_idx, left_idx] = left_offset - gt_tl_offset[batch_id, 1, top_idx, left_idx] = top_offset - gt_br_offset[batch_id, 0, bottom_idx, right_idx] = right_offset - gt_br_offset[batch_id, 1, bottom_idx, - right_idx] = bottom_offset - - # Generate corner embedding - if with_corner_emb: - corner_match.append([[top_idx, left_idx], - [bottom_idx, right_idx]]) - # Generate guiding shift - if with_guiding_shift: - gt_tl_guiding_shift[batch_id, 0, top_idx, - left_idx] = 
scale_center_x - left_idx - gt_tl_guiding_shift[batch_id, 1, top_idx, - left_idx] = scale_center_y - top_idx - gt_br_guiding_shift[batch_id, 0, bottom_idx, - right_idx] = right_idx - scale_center_x - gt_br_guiding_shift[ - batch_id, 1, bottom_idx, - right_idx] = bottom_idx - scale_center_y - # Generate centripetal shift - if with_centripetal_shift: - gt_tl_centripetal_shift[batch_id, 0, top_idx, - left_idx] = log(scale_center_x - - scale_left) - gt_tl_centripetal_shift[batch_id, 1, top_idx, - left_idx] = log(scale_center_y - - scale_top) - gt_br_centripetal_shift[batch_id, 0, bottom_idx, - right_idx] = log(scale_right - - scale_center_x) - gt_br_centripetal_shift[batch_id, 1, bottom_idx, - right_idx] = log(scale_bottom - - scale_center_y) - - if with_corner_emb: - match.append(corner_match) - - target_result = dict( - topleft_heatmap=gt_tl_heatmap, - topleft_offset=gt_tl_offset, - bottomright_heatmap=gt_br_heatmap, - bottomright_offset=gt_br_offset) - - if with_corner_emb: - target_result.update(corner_embedding=match) - if with_guiding_shift: - target_result.update( - topleft_guiding_shift=gt_tl_guiding_shift, - bottomright_guiding_shift=gt_br_guiding_shift) - if with_centripetal_shift: - target_result.update( - topleft_centripetal_shift=gt_tl_centripetal_shift, - bottomright_centripetal_shift=gt_br_centripetal_shift) - - return target_result - - @force_fp32() - def loss(self, - tl_heats, - br_heats, - tl_embs, - br_embs, - tl_offs, - br_offs, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). - br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_embs (list[Tensor]): Top-left corner embeddings for each level - with shape (N, corner_emb_channels, H, W). - br_embs (list[Tensor]): Bottom-right corner embeddings for each - level with shape (N, corner_emb_channels, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [left, top, right, bottom] format. - gt_labels (list[Tensor]): Class indices corresponding to each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. Containing the - following losses: - - - det_loss (list[Tensor]): Corner keypoint losses of all - feature levels. - - pull_loss (list[Tensor]): Part one of AssociativeEmbedding - losses of all feature levels. - - push_loss (list[Tensor]): Part two of AssociativeEmbedding - losses of all feature levels. - - off_loss (list[Tensor]): Corner offset losses of all feature - levels. 
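get_targets, as implemented above, maps each box corner onto the downsampled feature map, paints a Gaussian peak whose value is exactly 1 at the integer corner location, records the sub-pixel remainder as the offset target, and (for CentripetalNet) stores guiding/centripetal shift targets at the same positions. A simplified sketch with a fixed radius instead of gaussian_radius; the draw_gaussian helper name is illustrative:

```python
import torch

def draw_gaussian(heatmap, cx, cy, radius):
    # Paint a 2D Gaussian peak centred on an integer corner location.
    # heatmap: (H, W); the value at (cy, cx) is exactly 1, which is how the
    # loss later recognises "real" corner positions.
    h, w = heatmap.shape
    sigma = (2 * radius + 1) / 6.0
    ys = torch.arange(h).view(-1, 1).float()
    xs = torch.arange(w).view(1, -1).float()
    g = torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return torch.maximum(heatmap, g)

# A single box mapped onto an 8x downsampled feature map.
img_w = img_h = 512
feat_w = feat_h = 64
left, top = 100.0, 120.0
wr, hr = feat_w / img_w, feat_h / img_h

scale_left, scale_top = left * wr, top * hr
tl_x = int(min(scale_left, feat_w - 1))
tl_y = int(min(scale_top, feat_h - 1))

heat = torch.zeros(feat_h, feat_w)
heat = draw_gaussian(heat, tl_x, tl_y, radius=2)

# Sub-pixel offset target: the part lost when flooring to integer coords.
offset_x, offset_y = scale_left - tl_x, scale_top - tl_y
print(heat.max().item(), offset_x, offset_y)
```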
- """ - targets = self.get_targets( - gt_bboxes, - gt_labels, - tl_heats[-1].shape, - img_metas[0]['pad_shape'], - with_corner_emb=self.with_corner_emb) - mlvl_targets = [targets for _ in range(self.num_feat_levels)] - det_losses, pull_losses, push_losses, off_losses = multi_apply( - self.loss_single, tl_heats, br_heats, tl_embs, br_embs, tl_offs, - br_offs, mlvl_targets) - loss_dict = dict(det_loss=det_losses, off_loss=off_losses) - if self.with_corner_emb: - loss_dict.update(pull_loss=pull_losses, push_loss=push_losses) - return loss_dict - - def loss_single(self, tl_hmp, br_hmp, tl_emb, br_emb, tl_off, br_off, - targets): - """Compute losses for single level. - - Args: - tl_hmp (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). - br_hmp (Tensor): Bottom-right corner heatmap for current level with - shape (N, num_classes, H, W). - tl_emb (Tensor): Top-left corner embedding for current level with - shape (N, corner_emb_channels, H, W). - br_emb (Tensor): Bottom-right corner embedding for current level - with shape (N, corner_emb_channels, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - targets (dict): Corner target generated by `get_targets`. - - Returns: - tuple[torch.Tensor]: Losses of the head's different branches - containing the following losses: - - - det_loss (Tensor): Corner keypoint loss. - - pull_loss (Tensor): Part one of AssociativeEmbedding loss. - - push_loss (Tensor): Part two of AssociativeEmbedding loss. - - off_loss (Tensor): Corner offset loss. - """ - gt_tl_hmp = targets['topleft_heatmap'] - gt_br_hmp = targets['bottomright_heatmap'] - gt_tl_off = targets['topleft_offset'] - gt_br_off = targets['bottomright_offset'] - gt_embedding = targets['corner_embedding'] - - # Detection loss - tl_det_loss = self.loss_heatmap( - tl_hmp.sigmoid(), - gt_tl_hmp, - avg_factor=max(1, - gt_tl_hmp.eq(1).sum())) - br_det_loss = self.loss_heatmap( - br_hmp.sigmoid(), - gt_br_hmp, - avg_factor=max(1, - gt_br_hmp.eq(1).sum())) - det_loss = (tl_det_loss + br_det_loss) / 2.0 - - # AssociativeEmbedding loss - if self.with_corner_emb and self.loss_embedding is not None: - pull_loss, push_loss = self.loss_embedding(tl_emb, br_emb, - gt_embedding) - else: - pull_loss, push_loss = None, None - - # Offset loss - # We only compute the offset loss at the real corner position. - # The value of real corner would be 1 in heatmap ground truth. - # The mask is computed in class agnostic mode and its shape is - # batch * 1 * width * height. - tl_off_mask = gt_tl_hmp.eq(1).sum(1).gt(0).unsqueeze(1).type_as( - gt_tl_hmp) - br_off_mask = gt_br_hmp.eq(1).sum(1).gt(0).unsqueeze(1).type_as( - gt_br_hmp) - tl_off_loss = self.loss_offset( - tl_off, - gt_tl_off, - tl_off_mask, - avg_factor=max(1, tl_off_mask.sum())) - br_off_loss = self.loss_offset( - br_off, - gt_br_off, - br_off_mask, - avg_factor=max(1, br_off_mask.sum())) - - off_loss = (tl_off_loss + br_off_loss) / 2.0 - - return det_loss, pull_loss, push_loss, off_loss - - @force_fp32() - def get_bboxes(self, - tl_heats, - br_heats, - tl_embs, - br_embs, - tl_offs, - br_offs, - img_metas, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). 
- br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_embs (list[Tensor]): Top-left corner embeddings for each level - with shape (N, corner_emb_channels, H, W). - br_embs (list[Tensor]): Bottom-right corner embeddings for each - level with shape (N, corner_emb_channels, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - """ - assert tl_heats[-1].shape[0] == br_heats[-1].shape[0] == len(img_metas) - result_list = [] - for img_id in range(len(img_metas)): - result_list.append( - self._get_bboxes_single( - tl_heats[-1][img_id:img_id + 1, :], - br_heats[-1][img_id:img_id + 1, :], - tl_offs[-1][img_id:img_id + 1, :], - br_offs[-1][img_id:img_id + 1, :], - img_metas[img_id], - tl_emb=tl_embs[-1][img_id:img_id + 1, :], - br_emb=br_embs[-1][img_id:img_id + 1, :], - rescale=rescale, - with_nms=with_nms)) - - return result_list - - def _get_bboxes_single(self, - tl_heat, - br_heat, - tl_off, - br_off, - img_meta, - tl_emb=None, - br_emb=None, - tl_centripetal_shift=None, - br_centripetal_shift=None, - rescale=False, - with_nms=True): - """Transform outputs for a single batch item into bbox predictions. - - Args: - tl_heat (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). - br_heat (Tensor): Bottom-right corner heatmap for current level - with shape (N, num_classes, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - img_meta (dict): Meta information of current image, e.g., - image size, scaling factor, etc. - tl_emb (Tensor): Top-left corner embedding for current level with - shape (N, corner_emb_channels, H, W). - br_emb (Tensor): Bottom-right corner embedding for current level - with shape (N, corner_emb_channels, H, W). - tl_centripetal_shift: Top-left corner's centripetal shift for - current level with shape (N, 2, H, W). - br_centripetal_shift: Bottom-right corner's centripetal shift for - current level with shape (N, 2, H, W). - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. 
- """ - if isinstance(img_meta, (list, tuple)): - img_meta = img_meta[0] - - batch_bboxes, batch_scores, batch_clses = self.decode_heatmap( - tl_heat=tl_heat.sigmoid(), - br_heat=br_heat.sigmoid(), - tl_off=tl_off, - br_off=br_off, - tl_emb=tl_emb, - br_emb=br_emb, - tl_centripetal_shift=tl_centripetal_shift, - br_centripetal_shift=br_centripetal_shift, - img_meta=img_meta, - k=self.test_cfg.corner_topk, - kernel=self.test_cfg.local_maximum_kernel, - distance_threshold=self.test_cfg.distance_threshold) - - if rescale: - batch_bboxes /= batch_bboxes.new_tensor(img_meta['scale_factor']) - - bboxes = batch_bboxes.view([-1, 4]) - scores = batch_scores.view(-1) - clses = batch_clses.view(-1) - - detections = torch.cat([bboxes, scores.unsqueeze(-1)], -1) - keepinds = (detections[:, -1] > -0.1) - detections = detections[keepinds] - labels = clses[keepinds] - - if with_nms: - detections, labels = self._bboxes_nms(detections, labels, - self.test_cfg) - - return detections, labels - - def _bboxes_nms(self, bboxes, labels, cfg): - if 'nms_cfg' in cfg: - warning.warn('nms_cfg in test_cfg will be deprecated. ' - 'Please rename it as nms') - if 'nms' not in cfg: - cfg.nms = cfg.nms_cfg - - if labels.numel() > 0: - max_num = cfg.max_per_img - bboxes, keep = batched_nms(bboxes[:, :4], bboxes[:, - -1].contiguous(), - labels, cfg.nms) - if max_num > 0: - bboxes = bboxes[:max_num] - labels = labels[keep][:max_num] - - return bboxes, labels - - def decode_heatmap(self, - tl_heat, - br_heat, - tl_off, - br_off, - tl_emb=None, - br_emb=None, - tl_centripetal_shift=None, - br_centripetal_shift=None, - img_meta=None, - k=100, - kernel=3, - distance_threshold=0.5, - num_dets=1000): - """Transform outputs for a single batch item into raw bbox predictions. - - Args: - tl_heat (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). - br_heat (Tensor): Bottom-right corner heatmap for current level - with shape (N, num_classes, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - tl_emb (Tensor | None): Top-left corner embedding for current - level with shape (N, corner_emb_channels, H, W). - br_emb (Tensor | None): Bottom-right corner embedding for current - level with shape (N, corner_emb_channels, H, W). - tl_centripetal_shift (Tensor | None): Top-left centripetal shift - for current level with shape (N, 2, H, W). - br_centripetal_shift (Tensor | None): Bottom-right centripetal - shift for current level with shape (N, 2, H, W). - img_meta (dict): Meta information of current image, e.g., - image size, scaling factor, etc. - k (int): Get top k corner keypoints from heatmap. - kernel (int): Max pooling kernel for extract local maximum pixels. - distance_threshold (float): Distance threshold. Top-left and - bottom-right corner keypoints with feature distance less than - the threshold will be regarded as keypoints from same object. - num_dets (int): Num of raw boxes before doing nms. - - Returns: - tuple[torch.Tensor]: Decoded output of CornerHead, containing the - following Tensors: - - - bboxes (Tensor): Coords of each box. - - scores (Tensor): Scores of each box. - - clses (Tensor): Categories of each box. 
- """ - with_embedding = tl_emb is not None and br_emb is not None - with_centripetal_shift = ( - tl_centripetal_shift is not None - and br_centripetal_shift is not None) - assert with_embedding + with_centripetal_shift == 1 - batch, _, height, width = tl_heat.size() - if torch.onnx.is_in_onnx_export(): - inp_h, inp_w = img_meta['pad_shape_for_onnx'][:2] - else: - inp_h, inp_w, _ = img_meta['pad_shape'] - - # perform nms on heatmaps - tl_heat = get_local_maximum(tl_heat, kernel=kernel) - br_heat = get_local_maximum(br_heat, kernel=kernel) - - tl_scores, tl_inds, tl_clses, tl_ys, tl_xs = get_topk_from_heatmap( - tl_heat, k=k) - br_scores, br_inds, br_clses, br_ys, br_xs = get_topk_from_heatmap( - br_heat, k=k) - - # We use repeat instead of expand here because expand is a - # shallow-copy function. Thus it could cause unexpected testing result - # sometimes. Using expand will decrease about 10% mAP during testing - # compared to repeat. - tl_ys = tl_ys.view(batch, k, 1).repeat(1, 1, k) - tl_xs = tl_xs.view(batch, k, 1).repeat(1, 1, k) - br_ys = br_ys.view(batch, 1, k).repeat(1, k, 1) - br_xs = br_xs.view(batch, 1, k).repeat(1, k, 1) - - tl_off = transpose_and_gather_feat(tl_off, tl_inds) - tl_off = tl_off.view(batch, k, 1, 2) - br_off = transpose_and_gather_feat(br_off, br_inds) - br_off = br_off.view(batch, 1, k, 2) - - tl_xs = tl_xs + tl_off[..., 0] - tl_ys = tl_ys + tl_off[..., 1] - br_xs = br_xs + br_off[..., 0] - br_ys = br_ys + br_off[..., 1] - - if with_centripetal_shift: - tl_centripetal_shift = transpose_and_gather_feat( - tl_centripetal_shift, tl_inds).view(batch, k, 1, 2).exp() - br_centripetal_shift = transpose_and_gather_feat( - br_centripetal_shift, br_inds).view(batch, 1, k, 2).exp() - - tl_ctxs = tl_xs + tl_centripetal_shift[..., 0] - tl_ctys = tl_ys + tl_centripetal_shift[..., 1] - br_ctxs = br_xs - br_centripetal_shift[..., 0] - br_ctys = br_ys - br_centripetal_shift[..., 1] - - # all possible boxes based on top k corners (ignoring class) - tl_xs *= (inp_w / width) - tl_ys *= (inp_h / height) - br_xs *= (inp_w / width) - br_ys *= (inp_h / height) - - if with_centripetal_shift: - tl_ctxs *= (inp_w / width) - tl_ctys *= (inp_h / height) - br_ctxs *= (inp_w / width) - br_ctys *= (inp_h / height) - - x_off, y_off = 0, 0 # no crop - if not torch.onnx.is_in_onnx_export(): - # since `RandomCenterCropPad` is done on CPU with numpy and it's - # not dynamic traceable when exporting to ONNX, thus 'border' - # does not appears as key in 'img_meta'. As a tmp solution, - # we move this 'border' handle part to the postprocess after - # finished exporting to ONNX, which is handle in - # `mmdet/core/export/model_wrappers.py`. Though difference between - # pytorch and exported onnx model, it might be ignored since - # comparable performance is achieved between them (e.g. 
40.4 vs - # 40.6 on COCO val2017, for CornerNet without test-time flip) - if 'border' in img_meta: - x_off = img_meta['border'][2] - y_off = img_meta['border'][0] - - tl_xs -= x_off - tl_ys -= y_off - br_xs -= x_off - br_ys -= y_off - - zeros = tl_xs.new_zeros(*tl_xs.size()) - tl_xs = torch.where(tl_xs > 0.0, tl_xs, zeros) - tl_ys = torch.where(tl_ys > 0.0, tl_ys, zeros) - br_xs = torch.where(br_xs > 0.0, br_xs, zeros) - br_ys = torch.where(br_ys > 0.0, br_ys, zeros) - - bboxes = torch.stack((tl_xs, tl_ys, br_xs, br_ys), dim=3) - area_bboxes = ((br_xs - tl_xs) * (br_ys - tl_ys)).abs() - - if with_centripetal_shift: - tl_ctxs -= x_off - tl_ctys -= y_off - br_ctxs -= x_off - br_ctys -= y_off - - tl_ctxs *= tl_ctxs.gt(0.0).type_as(tl_ctxs) - tl_ctys *= tl_ctys.gt(0.0).type_as(tl_ctys) - br_ctxs *= br_ctxs.gt(0.0).type_as(br_ctxs) - br_ctys *= br_ctys.gt(0.0).type_as(br_ctys) - - ct_bboxes = torch.stack((tl_ctxs, tl_ctys, br_ctxs, br_ctys), - dim=3) - area_ct_bboxes = ((br_ctxs - tl_ctxs) * (br_ctys - tl_ctys)).abs() - - rcentral = torch.zeros_like(ct_bboxes) - # magic nums from paper section 4.1 - mu = torch.ones_like(area_bboxes) / 2.4 - mu[area_bboxes > 3500] = 1 / 2.1 # large bbox have smaller mu - - bboxes_center_x = (bboxes[..., 0] + bboxes[..., 2]) / 2 - bboxes_center_y = (bboxes[..., 1] + bboxes[..., 3]) / 2 - rcentral[..., 0] = bboxes_center_x - mu * (bboxes[..., 2] - - bboxes[..., 0]) / 2 - rcentral[..., 1] = bboxes_center_y - mu * (bboxes[..., 3] - - bboxes[..., 1]) / 2 - rcentral[..., 2] = bboxes_center_x + mu * (bboxes[..., 2] - - bboxes[..., 0]) / 2 - rcentral[..., 3] = bboxes_center_y + mu * (bboxes[..., 3] - - bboxes[..., 1]) / 2 - area_rcentral = ((rcentral[..., 2] - rcentral[..., 0]) * - (rcentral[..., 3] - rcentral[..., 1])).abs() - dists = area_ct_bboxes / area_rcentral - - tl_ctx_inds = (ct_bboxes[..., 0] <= rcentral[..., 0]) | ( - ct_bboxes[..., 0] >= rcentral[..., 2]) - tl_cty_inds = (ct_bboxes[..., 1] <= rcentral[..., 1]) | ( - ct_bboxes[..., 1] >= rcentral[..., 3]) - br_ctx_inds = (ct_bboxes[..., 2] <= rcentral[..., 0]) | ( - ct_bboxes[..., 2] >= rcentral[..., 2]) - br_cty_inds = (ct_bboxes[..., 3] <= rcentral[..., 1]) | ( - ct_bboxes[..., 3] >= rcentral[..., 3]) - - if with_embedding: - tl_emb = transpose_and_gather_feat(tl_emb, tl_inds) - tl_emb = tl_emb.view(batch, k, 1) - br_emb = transpose_and_gather_feat(br_emb, br_inds) - br_emb = br_emb.view(batch, 1, k) - dists = torch.abs(tl_emb - br_emb) - - tl_scores = tl_scores.view(batch, k, 1).repeat(1, 1, k) - br_scores = br_scores.view(batch, 1, k).repeat(1, k, 1) - - scores = (tl_scores + br_scores) / 2 # scores for all possible boxes - - # tl and br should have same class - tl_clses = tl_clses.view(batch, k, 1).repeat(1, 1, k) - br_clses = br_clses.view(batch, 1, k).repeat(1, k, 1) - cls_inds = (tl_clses != br_clses) - - # reject boxes based on distances - dist_inds = dists > distance_threshold - - # reject boxes based on widths and heights - width_inds = (br_xs <= tl_xs) - height_inds = (br_ys <= tl_ys) - - # No use `scores[cls_inds]`, instead we use `torch.where` here. - # Since only 1-D indices with type 'tensor(bool)' are supported - # when exporting to ONNX, any other bool indices with more dimensions - # (e.g. 
2-D bool tensor) as input parameter in node is invalid - negative_scores = -1 * torch.ones_like(scores) - scores = torch.where(cls_inds, negative_scores, scores) - scores = torch.where(width_inds, negative_scores, scores) - scores = torch.where(height_inds, negative_scores, scores) - scores = torch.where(dist_inds, negative_scores, scores) - - if with_centripetal_shift: - scores[tl_ctx_inds] = -1 - scores[tl_cty_inds] = -1 - scores[br_ctx_inds] = -1 - scores[br_cty_inds] = -1 - - scores = scores.view(batch, -1) - scores, inds = torch.topk(scores, num_dets) - scores = scores.unsqueeze(2) - - bboxes = bboxes.view(batch, -1, 4) - bboxes = gather_feat(bboxes, inds) - - clses = tl_clses.contiguous().view(batch, -1, 1) - clses = gather_feat(clses, inds).float() - - return bboxes, scores, clses - - def onnx_export(self, - tl_heats, - br_heats, - tl_embs, - br_embs, - tl_offs, - br_offs, - img_metas, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). - br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_embs (list[Tensor]): Top-left corner embeddings for each level - with shape (N, corner_emb_channels, H, W). - br_embs (list[Tensor]): Bottom-right corner embeddings for each - level with shape (N, corner_emb_channels, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor, Tensor]: First tensor bboxes with shape - [N, num_det, 5], 5 arrange as (x1, y1, x2, y2, score) - and second element is class labels of shape [N, num_det]. - """ - assert tl_heats[-1].shape[0] == br_heats[-1].shape[0] == len( - img_metas) == 1 - result_list = [] - for img_id in range(len(img_metas)): - result_list.append( - self._get_bboxes_single( - tl_heats[-1][img_id:img_id + 1, :], - br_heats[-1][img_id:img_id + 1, :], - tl_offs[-1][img_id:img_id + 1, :], - br_offs[-1][img_id:img_id + 1, :], - img_metas[img_id], - tl_emb=tl_embs[-1][img_id:img_id + 1, :], - br_emb=br_embs[-1][img_id:img_id + 1, :], - rescale=rescale, - with_nms=with_nms)) - - detections, labels = result_list[0] - # batch_size 1 here, [1, num_det, 5], [1, num_det] - return detections.unsqueeze(0), labels.unsqueeze(0) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/deformable_detr_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/deformable_detr_head.py deleted file mode 100644 index 71c27852..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/deformable_detr_head.py +++ /dev/null @@ -1,318 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
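The pairing logic in decode_heatmap combines every top-left candidate with every bottom-right candidate, averages their scores, and invalidates pairs with mismatched classes, non-positive width/height, or an embedding distance above distance_threshold (CentripetalNet replaces the embedding test with the central-region check). An illustrative sketch of the embedding-based variant, with a hypothetical pair_corners helper:

```python
import torch

def pair_corners(tl_scores, tl_clses, tl_xs, tl_ys,
                 br_scores, br_clses, br_xs, br_ys,
                 tl_emb, br_emb, dist_thr=0.5):
    # Each of the k top-left corners is paired with each of the k
    # bottom-right corners, giving k x k candidate boxes per image.
    k = tl_scores.size(1)
    tl_xs_g = tl_xs.view(-1, k, 1).expand(-1, k, k)
    tl_ys_g = tl_ys.view(-1, k, 1).expand(-1, k, k)
    br_xs_g = br_xs.view(-1, 1, k).expand(-1, k, k)
    br_ys_g = br_ys.view(-1, 1, k).expand(-1, k, k)

    scores = (tl_scores.view(-1, k, 1) + br_scores.view(-1, 1, k)) / 2
    dists = (tl_emb.view(-1, k, 1) - br_emb.view(-1, 1, k)).abs()

    invalid = tl_clses.view(-1, k, 1) != br_clses.view(-1, 1, k)   # class mismatch
    invalid |= (br_xs_g <= tl_xs_g) | (br_ys_g <= tl_ys_g)         # degenerate box
    invalid |= dists > dist_thr                                    # embeddings too far apart
    scores = torch.where(invalid, torch.full_like(scores, -1.0), scores)

    bboxes = torch.stack((tl_xs_g, tl_ys_g, br_xs_g, br_ys_g), dim=3)
    return bboxes, scores

k = 5
tl_scores, br_scores = torch.rand(1, k), torch.rand(1, k)
tl_clses, br_clses = torch.randint(0, 80, (1, k)), torch.randint(0, 80, (1, k))
tl_xs, tl_ys = torch.rand(1, k) * 32, torch.rand(1, k) * 32
br_xs, br_ys = torch.rand(1, k) * 32, torch.rand(1, k) * 32
tl_emb, br_emb = torch.rand(1, k), torch.rand(1, k)
bboxes, scores = pair_corners(tl_scores, tl_clses, tl_xs, tl_ys,
                              br_scores, br_clses, br_xs, br_ys, tl_emb, br_emb)
print(bboxes.shape, scores.shape)  # (1, 5, 5, 4) (1, 5, 5)
```

Note the original code deliberately uses `repeat` rather than `expand` for the score grids to avoid shallow-copy surprises when exporting to ONNX; `expand` is used here only to keep the sketch short.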
-import copy - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Linear, bias_init_with_prob, constant_init -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply -from mmdet.models.utils.transformer import inverse_sigmoid -from ..builder import HEADS -from .detr_head import DETRHead - - -@HEADS.register_module() -class DeformableDETRHead(DETRHead): - """Head of DeformDETR: Deformable DETR: Deformable Transformers for End-to- - End Object Detection. - - Code is modified from the `official github repo - `_. - - More details can be found in the `paper - `_ . - - Args: - with_box_refine (bool): Whether to refine the reference points - in the decoder. Defaults to False. - as_two_stage (bool) : Whether to generate the proposal from - the outputs of encoder. - transformer (obj:`ConfigDict`): ConfigDict is used for building - the Encoder and Decoder. - """ - - def __init__(self, - *args, - with_box_refine=False, - as_two_stage=False, - transformer=None, - **kwargs): - self.with_box_refine = with_box_refine - self.as_two_stage = as_two_stage - if self.as_two_stage: - transformer['as_two_stage'] = self.as_two_stage - - super(DeformableDETRHead, self).__init__( - *args, transformer=transformer, **kwargs) - - def _init_layers(self): - """Initialize classification branch and regression branch of head.""" - - fc_cls = Linear(self.embed_dims, self.cls_out_channels) - reg_branch = [] - for _ in range(self.num_reg_fcs): - reg_branch.append(Linear(self.embed_dims, self.embed_dims)) - reg_branch.append(nn.ReLU()) - reg_branch.append(Linear(self.embed_dims, 4)) - reg_branch = nn.Sequential(*reg_branch) - - def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - # last reg_branch is used to generate proposal from - # encode feature map when as_two_stage is True. - num_pred = (self.transformer.decoder.num_layers + 1) if \ - self.as_two_stage else self.transformer.decoder.num_layers - - if self.with_box_refine: - self.cls_branches = _get_clones(fc_cls, num_pred) - self.reg_branches = _get_clones(reg_branch, num_pred) - else: - - self.cls_branches = nn.ModuleList( - [fc_cls for _ in range(num_pred)]) - self.reg_branches = nn.ModuleList( - [reg_branch for _ in range(num_pred)]) - - if not self.as_two_stage: - self.query_embedding = nn.Embedding(self.num_query, - self.embed_dims * 2) - - def init_weights(self): - """Initialize weights of the DeformDETR head.""" - self.transformer.init_weights() - if self.loss_cls.use_sigmoid: - bias_init = bias_init_with_prob(0.01) - for m in self.cls_branches: - nn.init.constant_(m.bias, bias_init) - for m in self.reg_branches: - constant_init(m[-1], 0, bias=0) - nn.init.constant_(self.reg_branches[0][-1].bias.data[2:], -2.0) - if self.as_two_stage: - for m in self.reg_branches: - nn.init.constant_(m[-1].bias.data[2:], 0.0) - - def forward(self, mlvl_feats, img_metas): - """Forward function. - - Args: - mlvl_feats (tuple[Tensor]): Features from the upstream - network, each is a 4D-tensor with shape - (N, C, H, W). - img_metas (list[dict]): List of image information. - - Returns: - all_cls_scores (Tensor): Outputs from the classification head, \ - shape [nb_dec, bs, num_query, cls_out_channels]. Note \ - cls_out_channels should includes background. - all_bbox_preds (Tensor): Sigmoid outputs from the regression \ - head with normalized coordinate format (cx, cy, w, h). \ - Shape [nb_dec, bs, num_query, 4]. 
- enc_outputs_class (Tensor): The score of each point on encode \ - feature map, has shape (N, h*w, num_class). Only when \ - as_two_stage is True it would be returned, otherwise \ - `None` would be returned. - enc_outputs_coord (Tensor): The proposal generate from the \ - encode feature map, has shape (N, h*w, 4). Only when \ - as_two_stage is True it would be returned, otherwise \ - `None` would be returned. - """ - - batch_size = mlvl_feats[0].size(0) - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - img_masks = mlvl_feats[0].new_ones( - (batch_size, input_img_h, input_img_w)) - for img_id in range(batch_size): - img_h, img_w, _ = img_metas[img_id]['img_shape'] - img_masks[img_id, :img_h, :img_w] = 0 - - mlvl_masks = [] - mlvl_positional_encodings = [] - for feat in mlvl_feats: - mlvl_masks.append( - F.interpolate(img_masks[None], - size=feat.shape[-2:]).to(torch.bool).squeeze(0)) - mlvl_positional_encodings.append( - self.positional_encoding(mlvl_masks[-1])) - - query_embeds = None - if not self.as_two_stage: - query_embeds = self.query_embedding.weight - hs, init_reference, inter_references, \ - enc_outputs_class, enc_outputs_coord = self.transformer( - mlvl_feats, - mlvl_masks, - query_embeds, - mlvl_positional_encodings, - reg_branches=self.reg_branches if self.with_box_refine else None, # noqa:E501 - cls_branches=self.cls_branches if self.as_two_stage else None # noqa:E501 - ) - hs = hs.permute(0, 2, 1, 3) - outputs_classes = [] - outputs_coords = [] - - for lvl in range(hs.shape[0]): - if lvl == 0: - reference = init_reference - else: - reference = inter_references[lvl - 1] - reference = inverse_sigmoid(reference) - outputs_class = self.cls_branches[lvl](hs[lvl]) - tmp = self.reg_branches[lvl](hs[lvl]) - if reference.shape[-1] == 4: - tmp += reference - else: - assert reference.shape[-1] == 2 - tmp[..., :2] += reference - outputs_coord = tmp.sigmoid() - outputs_classes.append(outputs_class) - outputs_coords.append(outputs_coord) - - outputs_classes = torch.stack(outputs_classes) - outputs_coords = torch.stack(outputs_coords) - if self.as_two_stage: - return outputs_classes, outputs_coords, \ - enc_outputs_class, \ - enc_outputs_coord.sigmoid() - else: - return outputs_classes, outputs_coords, \ - None, None - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def loss(self, - all_cls_scores, - all_bbox_preds, - enc_cls_scores, - enc_bbox_preds, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore=None): - """"Loss function. - - Args: - all_cls_scores (Tensor): Classification score of all - decoder layers, has shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds (Tensor): Sigmoid regression - outputs of all decode layers. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - enc_cls_scores (Tensor): Classification scores of - points on encode feature map , has shape - (N, h*w, num_classes). Only be passed when as_two_stage is - True, otherwise is None. - enc_bbox_preds (Tensor): Regression results of each points - on the encode feature map, has shape (N, h*w, 4). Only be - passed when as_two_stage is True, otherwise is None. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. 
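The per-layer loop in forward implements iterative box refinement: each decoder layer's regression branch predicts a delta in logit space, which is added to the inverse-sigmoid of the current reference (2-d points or 4-d boxes) and squashed back with a sigmoid. A minimal sketch of one refinement step:

```python
import torch

def inverse_sigmoid(x, eps=1e-5):
    x = x.clamp(min=eps, max=1 - eps)
    return torch.log(x / (1 - x))

def refine_boxes(reference, delta):
    # reference: normalized points, either (cx, cy) with 2 coords or full
    # (cx, cy, w, h) boxes with 4 coords; delta: raw regression output.
    ref = inverse_sigmoid(reference)
    if ref.shape[-1] == 4:
        delta = delta + ref
    else:
        delta = delta.clone()
        delta[..., :2] += ref
    return delta.sigmoid()

num_query = 300
reference = torch.rand(2, num_query, 2)   # initial reference points
delta = torch.randn(2, num_query, 4)      # one decoder layer's regression output
boxes = refine_boxes(reference, delta)    # normalized (cx, cy, w, h)
print(boxes.shape, boxes.min().item() >= 0, boxes.max().item() <= 1)
```

When with_box_refine is set, the refined boxes become the next layer's references, which is why the head keeps a separate (cloned) regression branch per decoder layer.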
- gt_bboxes_ignore (list[Tensor], optional): Bounding boxes - which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert gt_bboxes_ignore is None, \ - f'{self.__class__.__name__} only supports ' \ - f'for gt_bboxes_ignore setting to None.' - - num_dec_layers = len(all_cls_scores) - all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)] - all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)] - all_gt_bboxes_ignore_list = [ - gt_bboxes_ignore for _ in range(num_dec_layers) - ] - img_metas_list = [img_metas for _ in range(num_dec_layers)] - - losses_cls, losses_bbox, losses_iou = multi_apply( - self.loss_single, all_cls_scores, all_bbox_preds, - all_gt_bboxes_list, all_gt_labels_list, img_metas_list, - all_gt_bboxes_ignore_list) - - loss_dict = dict() - # loss of proposal generated from encode feature map. - if enc_cls_scores is not None: - binary_labels_list = [ - torch.zeros_like(gt_labels_list[i]) - for i in range(len(img_metas)) - ] - enc_loss_cls, enc_losses_bbox, enc_losses_iou = \ - self.loss_single(enc_cls_scores, enc_bbox_preds, - gt_bboxes_list, binary_labels_list, - img_metas, gt_bboxes_ignore) - loss_dict['enc_loss_cls'] = enc_loss_cls - loss_dict['enc_loss_bbox'] = enc_losses_bbox - loss_dict['enc_loss_iou'] = enc_losses_iou - - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_bbox'] = losses_bbox[-1] - loss_dict['loss_iou'] = losses_iou[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1], - losses_bbox[:-1], - losses_iou[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i - loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i - num_dec_layer += 1 - return loss_dict - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def get_bboxes(self, - all_cls_scores, - all_bbox_preds, - enc_cls_scores, - enc_bbox_preds, - img_metas, - rescale=False): - """Transform network outputs for a batch into bbox predictions. - - Args: - all_cls_scores (Tensor): Classification score of all - decoder layers, has shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds (Tensor): Sigmoid regression - outputs of all decode layers. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - enc_cls_scores (Tensor): Classification scores of - points on encode feature map , has shape - (N, h*w, num_classes). Only be passed when as_two_stage is - True, otherwise is None. - enc_bbox_preds (Tensor): Regression results of each points - on the encode feature map, has shape (N, h*w, 4). Only be - passed when as_two_stage is True, otherwise is None. - img_metas (list[dict]): Meta information of each image. - rescale (bool, optional): If True, return boxes in original - image space. Default False. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. \ - The first item is an (n, 5) tensor, where the first 4 columns \ - are bounding box positions (tl_x, tl_y, br_x, br_y) and the \ - 5-th column is a score between 0 and 1. The second item is a \ - (n,) tensor where each item is the predicted class label of \ - the corresponding box. 
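The loss above applies loss_single to every decoder layer (deep supervision); the last layer keeps the plain keys while earlier layers are prefixed with `d{i}.`. A sketch of that bookkeeping only:

```python
import torch

def collect_decoder_losses(losses_cls, losses_bbox, losses_iou):
    # One scalar per decoder layer; the final layer keeps the unprefixed keys,
    # intermediate layers are logged as d0.*, d1.*, ...
    loss_dict = dict(loss_cls=losses_cls[-1],
                     loss_bbox=losses_bbox[-1],
                     loss_iou=losses_iou[-1])
    for i, (lc, lb, li) in enumerate(
            zip(losses_cls[:-1], losses_bbox[:-1], losses_iou[:-1])):
        loss_dict[f'd{i}.loss_cls'] = lc
        loss_dict[f'd{i}.loss_bbox'] = lb
        loss_dict[f'd{i}.loss_iou'] = li
    return loss_dict

layers = 6
print(sorted(collect_decoder_losses([torch.tensor(1.)] * layers,
                                     [torch.tensor(2.)] * layers,
                                     [torch.tensor(.5)] * layers).keys()))
```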
- """ - cls_scores = all_cls_scores[-1] - bbox_preds = all_bbox_preds[-1] - - result_list = [] - for img_id in range(len(img_metas)): - cls_score = cls_scores[img_id] - bbox_pred = bbox_preds[img_id] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score, bbox_pred, - img_shape, scale_factor, - rescale) - result_list.append(proposals) - return result_list diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/dense_test_mixins.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/dense_test_mixins.py deleted file mode 100644 index 34215489..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/dense_test_mixins.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import sys -from inspect import signature - -import torch -from mmcv.ops import batched_nms - -from mmdet.core import bbox_mapping_back, merge_aug_proposals - -if sys.version_info >= (3, 7): - from mmdet.utils.contextmanagers import completed - - -class BBoxTestMixin(object): - """Mixin class for testing det bboxes via DenseHead.""" - - def simple_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes without test-time augmentation, can be applied in - DenseHead except for ``RPNHead`` and its variants, e.g., ``GARPNHead``, - etc. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,) - """ - outs = self.forward(feats) - results_list = self.get_bboxes( - *outs, img_metas=img_metas, rescale=rescale) - return results_list - - def aug_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes with test time augmentation, can be applied in - DenseHead except for ``RPNHead`` and its variants, e.g., ``GARPNHead``, - etc. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,). The length of list should always be 1. 
- """ - # check with_nms argument - gb_sig = signature(self.get_bboxes) - gb_args = [p.name for p in gb_sig.parameters.values()] - gbs_sig = signature(self._get_bboxes_single) - gbs_args = [p.name for p in gbs_sig.parameters.values()] - assert ('with_nms' in gb_args) and ('with_nms' in gbs_args), \ - f'{self.__class__.__name__}' \ - ' does not support test-time augmentation' - - aug_bboxes = [] - aug_scores = [] - aug_labels = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - outs = self.forward(x) - bbox_outputs = self.get_bboxes( - *outs, - img_metas=img_meta, - cfg=self.test_cfg, - rescale=False, - with_nms=False)[0] - aug_bboxes.append(bbox_outputs[0]) - aug_scores.append(bbox_outputs[1]) - if len(bbox_outputs) >= 3: - aug_labels.append(bbox_outputs[2]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = self.merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas) - merged_labels = torch.cat(aug_labels, dim=0) if aug_labels else None - - if merged_bboxes.numel() == 0: - det_bboxes = torch.cat([merged_bboxes, merged_scores[:, None]], -1) - return [ - (det_bboxes, merged_labels), - ] - - det_bboxes, keep_idxs = batched_nms(merged_bboxes, merged_scores, - merged_labels, self.test_cfg.nms) - det_bboxes = det_bboxes[:self.test_cfg.max_per_img] - det_labels = merged_labels[keep_idxs][:self.test_cfg.max_per_img] - - if rescale: - _det_bboxes = det_bboxes - else: - _det_bboxes = det_bboxes.clone() - _det_bboxes[:, :4] *= det_bboxes.new_tensor( - img_metas[0][0]['scale_factor']) - - return [ - (_det_bboxes, det_labels), - ] - - def simple_test_rpn(self, x, img_metas): - """Test without augmentation, only for ``RPNHead`` and its variants, - e.g., ``GARPNHead``, etc. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Proposals of each image, each item has shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - """ - rpn_outs = self(x) - proposal_list = self.get_bboxes(*rpn_outs, img_metas=img_metas) - return proposal_list - - def aug_test_rpn(self, feats, img_metas): - """Test with augmentation for only for ``RPNHead`` and its variants, - e.g., ``GARPNHead``, etc. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Proposals of each image, each item has shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). 
- """ - samples_per_gpu = len(img_metas[0]) - aug_proposals = [[] for _ in range(samples_per_gpu)] - for x, img_meta in zip(feats, img_metas): - proposal_list = self.simple_test_rpn(x, img_meta) - for i, proposals in enumerate(proposal_list): - aug_proposals[i].append(proposals) - # reorganize the order of 'img_metas' to match the dimensions - # of 'aug_proposals' - aug_img_metas = [] - for i in range(samples_per_gpu): - aug_img_meta = [] - for j in range(len(img_metas)): - aug_img_meta.append(img_metas[j][i]) - aug_img_metas.append(aug_img_meta) - # after merging, proposals will be rescaled to the original image size - merged_proposals = [ - merge_aug_proposals(proposals, aug_img_meta, self.test_cfg) - for proposals, aug_img_meta in zip(aug_proposals, aug_img_metas) - ] - return merged_proposals - - if sys.version_info >= (3, 7): - - async def async_simple_test_rpn(self, x, img_metas): - sleep_interval = self.test_cfg.pop('async_sleep_interval', 0.025) - async with completed( - __name__, 'rpn_head_forward', - sleep_interval=sleep_interval): - rpn_outs = self(x) - - proposal_list = self.get_bboxes(*rpn_outs, img_metas=img_metas) - return proposal_list - - def merge_aug_bboxes(self, aug_bboxes, aug_scores, img_metas): - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - img_shapes (list[Tensor]): shape (3, ). - - Returns: - tuple[Tensor]: ``bboxes`` with shape (n,4), where - 4 represent (tl_x, tl_y, br_x, br_y) - and ``scores`` with shape (n,). - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.cat(recovered_bboxes, dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.cat(aug_scores, dim=0) - return bboxes, scores diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/detr_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/detr_head.py deleted file mode 100644 index de1913c9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/detr_head.py +++ /dev/null @@ -1,844 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, Linear, build_activation_layer -from mmcv.cnn.bricks.transformer import FFN, build_positional_encoding -from mmcv.runner import force_fp32 - -from mmdet.core import (bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh, - build_assigner, build_sampler, multi_apply, - reduce_mean) -from mmdet.models.utils import build_transformer -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class DETRHead(AnchorFreeHead): - """Implements the DETR transformer head. - - See `paper: End-to-End Object Detection with Transformers - `_ for details. - - Args: - num_classes (int): Number of categories excluding the background. - in_channels (int): Number of channels in the input feature map. - num_query (int): Number of query in Transformer. - num_reg_fcs (int, optional): Number of fully-connected layers used in - `FFN`, which is then used for the regression head. Default 2. 
- transformer (obj:`mmcv.ConfigDict`|dict): Config for transformer. - Default: None. - sync_cls_avg_factor (bool): Whether to sync the avg_factor of - all ranks. Default to False. - positional_encoding (obj:`mmcv.ConfigDict`|dict): - Config for position encoding. - loss_cls (obj:`mmcv.ConfigDict`|dict): Config of the - classification loss. Default `CrossEntropyLoss`. - loss_bbox (obj:`mmcv.ConfigDict`|dict): Config of the - regression loss. Default `L1Loss`. - loss_iou (obj:`mmcv.ConfigDict`|dict): Config of the - regression iou loss. Default `GIoULoss`. - tran_cfg (obj:`mmcv.ConfigDict`|dict): Training config of - transformer head. - test_cfg (obj:`mmcv.ConfigDict`|dict): Testing config of - transformer head. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - _version = 2 - - def __init__(self, - num_classes, - in_channels, - num_query=100, - num_reg_fcs=2, - transformer=None, - sync_cls_avg_factor=False, - positional_encoding=dict( - type='SinePositionalEncoding', - num_feats=128, - normalize=True), - loss_cls=dict( - type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=5.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0), - train_cfg=dict( - assigner=dict( - type='HungarianAssigner', - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=5.0), - iou_cost=dict( - type='IoUCost', iou_mode='giou', weight=2.0))), - test_cfg=dict(max_per_img=100), - init_cfg=None, - **kwargs): - # NOTE here use `AnchorFreeHead` instead of `TransformerHead`, - # since it brings inconvenience when the initialization of - # `AnchorFreeHead` is called. - super(AnchorFreeHead, self).__init__(init_cfg) - self.bg_cls_weight = 0 - self.sync_cls_avg_factor = sync_cls_avg_factor - class_weight = loss_cls.get('class_weight', None) - if class_weight is not None and (self.__class__ is DETRHead): - assert isinstance(class_weight, float), 'Expected ' \ - 'class_weight to have type float. Found ' \ - f'{type(class_weight)}.' - # NOTE following the official DETR rep0, bg_cls_weight means - # relative classification weight of the no-object class. - bg_cls_weight = loss_cls.get('bg_cls_weight', class_weight) - assert isinstance(bg_cls_weight, float), 'Expected ' \ - 'bg_cls_weight to have type float. Found ' \ - f'{type(bg_cls_weight)}.' - class_weight = torch.ones(num_classes + 1) * class_weight - # set background class as the last indice - class_weight[num_classes] = bg_cls_weight - loss_cls.update({'class_weight': class_weight}) - if 'bg_cls_weight' in loss_cls: - loss_cls.pop('bg_cls_weight') - self.bg_cls_weight = bg_cls_weight - - if train_cfg: - assert 'assigner' in train_cfg, 'assigner should be provided '\ - 'when train_cfg is set.' - assigner = train_cfg['assigner'] - assert loss_cls['loss_weight'] == assigner['cls_cost']['weight'], \ - 'The classification weight for loss and matcher should be' \ - 'exactly the same.' - assert loss_bbox['loss_weight'] == assigner['reg_cost'][ - 'weight'], 'The regression L1 weight for loss and matcher ' \ - 'should be exactly the same.' - assert loss_iou['loss_weight'] == assigner['iou_cost']['weight'], \ - 'The regression iou weight for loss and matcher should be' \ - 'exactly the same.' 
- self.assigner = build_assigner(assigner) - # DETR sampling=False, so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.num_query = num_query - self.num_classes = num_classes - self.in_channels = in_channels - self.num_reg_fcs = num_reg_fcs - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.fp16_enabled = False - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_iou = build_loss(loss_iou) - - if self.loss_cls.use_sigmoid: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - self.act_cfg = transformer.get('act_cfg', - dict(type='ReLU', inplace=True)) - self.activate = build_activation_layer(self.act_cfg) - self.positional_encoding = build_positional_encoding( - positional_encoding) - self.transformer = build_transformer(transformer) - self.embed_dims = self.transformer.embed_dims - assert 'num_feats' in positional_encoding - num_feats = positional_encoding['num_feats'] - assert num_feats * 2 == self.embed_dims, 'embed_dims should' \ - f' be exactly 2 times of num_feats. Found {self.embed_dims}' \ - f' and {num_feats}.' - self._init_layers() - - def _init_layers(self): - """Initialize layers of the transformer head.""" - self.input_proj = Conv2d( - self.in_channels, self.embed_dims, kernel_size=1) - self.fc_cls = Linear(self.embed_dims, self.cls_out_channels) - self.reg_ffn = FFN( - self.embed_dims, - self.embed_dims, - self.num_reg_fcs, - self.act_cfg, - dropout=0.0, - add_residual=False) - self.fc_reg = Linear(self.embed_dims, 4) - self.query_embedding = nn.Embedding(self.num_query, self.embed_dims) - - def init_weights(self): - """Initialize weights of the transformer head.""" - # The initialization for transformer is important - self.transformer.init_weights() - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """load checkpoints.""" - # NOTE here use `AnchorFreeHead` instead of `TransformerHead`, - # since `AnchorFreeHead._load_from_state_dict` should not be - # called here. Invoking the default `Module._load_from_state_dict` - # is enough. - - # Names of some parameters in has been changed. - version = local_metadata.get('version', None) - if (version is None or version < 2) and self.__class__ is DETRHead: - convert_dict = { - '.self_attn.': '.attentions.0.', - '.ffn.': '.ffns.0.', - '.multihead_attn.': '.attentions.1.', - '.decoder.norm.': '.decoder.post_norm.' - } - state_dict_keys = list(state_dict.keys()) - for k in state_dict_keys: - for ori_key, convert_key in convert_dict.items(): - if ori_key in k: - convert_key = k.replace(ori_key, convert_key) - state_dict[convert_key] = state_dict[k] - del state_dict[k] - - super(AnchorFreeHead, - self)._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, - unexpected_keys, error_msgs) - - def forward(self, feats, img_metas): - """Forward function. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple[list[Tensor], list[Tensor]]: Outputs for all scale levels. - - - all_cls_scores_list (list[Tensor]): Classification scores \ - for each scale level. Each is a 4D-tensor with shape \ - [nb_dec, bs, num_query, cls_out_channels]. Note \ - `cls_out_channels` should includes background. 
- - all_bbox_preds_list (list[Tensor]): Sigmoid regression \ - outputs for each scale level. Each is a 4D-tensor with \ - normalized coordinate format (cx, cy, w, h) and shape \ - [nb_dec, bs, num_query, 4]. - """ - num_levels = len(feats) - img_metas_list = [img_metas for _ in range(num_levels)] - return multi_apply(self.forward_single, feats, img_metas_list) - - def forward_single(self, x, img_metas): - """"Forward function for a single feature level. - - Args: - x (Tensor): Input feature from backbone's single stage, shape - [bs, c, h, w]. - img_metas (list[dict]): List of image information. - - Returns: - all_cls_scores (Tensor): Outputs from the classification head, - shape [nb_dec, bs, num_query, cls_out_channels]. Note - cls_out_channels should includes background. - all_bbox_preds (Tensor): Sigmoid outputs from the regression - head with normalized coordinate format (cx, cy, w, h). - Shape [nb_dec, bs, num_query, 4]. - """ - # construct binary masks which used for the transformer. - # NOTE following the official DETR repo, non-zero values representing - # ignored positions, while zero values means valid positions. - batch_size = x.size(0) - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - masks = x.new_ones((batch_size, input_img_h, input_img_w)) - for img_id in range(batch_size): - img_h, img_w, _ = img_metas[img_id]['img_shape'] - masks[img_id, :img_h, :img_w] = 0 - - x = self.input_proj(x) - # interpolate masks to have the same spatial shape with x - masks = F.interpolate( - masks.unsqueeze(1), size=x.shape[-2:]).to(torch.bool).squeeze(1) - # position encoding - pos_embed = self.positional_encoding(masks) # [bs, embed_dim, h, w] - # outs_dec: [nb_dec, bs, num_query, embed_dim] - outs_dec, _ = self.transformer(x, masks, self.query_embedding.weight, - pos_embed) - - all_cls_scores = self.fc_cls(outs_dec) - all_bbox_preds = self.fc_reg(self.activate( - self.reg_ffn(outs_dec))).sigmoid() - return all_cls_scores, all_bbox_preds - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def loss(self, - all_cls_scores_list, - all_bbox_preds_list, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore=None): - """"Loss function. - - Only outputs from the last feature level are used for computing - losses by default. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore (list[Tensor], optional): Bounding boxes - which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # NOTE defaultly only the outputs from the last feature scale is used. - all_cls_scores = all_cls_scores_list[-1] - all_bbox_preds = all_bbox_preds_list[-1] - assert gt_bboxes_ignore is None, \ - 'Only supports for gt_bboxes_ignore setting to None.' 
- - num_dec_layers = len(all_cls_scores) - all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)] - all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)] - all_gt_bboxes_ignore_list = [ - gt_bboxes_ignore for _ in range(num_dec_layers) - ] - img_metas_list = [img_metas for _ in range(num_dec_layers)] - - losses_cls, losses_bbox, losses_iou = multi_apply( - self.loss_single, all_cls_scores, all_bbox_preds, - all_gt_bboxes_list, all_gt_labels_list, img_metas_list, - all_gt_bboxes_ignore_list) - - loss_dict = dict() - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_bbox'] = losses_bbox[-1] - loss_dict['loss_iou'] = losses_iou[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1], - losses_bbox[:-1], - losses_iou[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i - loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i - num_dec_layer += 1 - return loss_dict - - def loss_single(self, - cls_scores, - bbox_preds, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore_list=None): - """"Loss function for outputs from a single decoder layer of a single - feature level. - - Args: - cls_scores (Tensor): Box score logits from a single decoder layer - for all images. Shape [bs, num_query, cls_out_channels]. - bbox_preds (Tensor): Sigmoid outputs from a single decoder layer - for all images, with normalized coordinate (cx, cy, w, h) and - shape [bs, num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore_list (list[Tensor], optional): Bounding - boxes which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components for outputs from - a single decoder layer. 
- """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - bbox_preds_list = [bbox_preds[i] for i in range(num_imgs)] - cls_reg_targets = self.get_targets(cls_scores_list, bbox_preds_list, - gt_bboxes_list, gt_labels_list, - img_metas, gt_bboxes_ignore_list) - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - labels = torch.cat(labels_list, 0) - label_weights = torch.cat(label_weights_list, 0) - bbox_targets = torch.cat(bbox_targets_list, 0) - bbox_weights = torch.cat(bbox_weights_list, 0) - - # classification loss - cls_scores = cls_scores.reshape(-1, self.cls_out_channels) - # construct weighted avg_factor to match with the official DETR repo - cls_avg_factor = num_total_pos * 1.0 + \ - num_total_neg * self.bg_cls_weight - if self.sync_cls_avg_factor: - cls_avg_factor = reduce_mean( - cls_scores.new_tensor([cls_avg_factor])) - cls_avg_factor = max(cls_avg_factor, 1) - - loss_cls = self.loss_cls( - cls_scores, labels, label_weights, avg_factor=cls_avg_factor) - - # Compute the average number of gt boxes across all gpus, for - # normalization purposes - num_total_pos = loss_cls.new_tensor([num_total_pos]) - num_total_pos = torch.clamp(reduce_mean(num_total_pos), min=1).item() - - # construct factors used for rescale bboxes - factors = [] - for img_meta, bbox_pred in zip(img_metas, bbox_preds): - img_h, img_w, _ = img_meta['img_shape'] - factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0).repeat( - bbox_pred.size(0), 1) - factors.append(factor) - factors = torch.cat(factors, 0) - - # DETR regress the relative position of boxes (cxcywh) in the image, - # thus the learning target is normalized by the image size. So here - # we need to re-scale them for calculating IoU loss - bbox_preds = bbox_preds.reshape(-1, 4) - bboxes = bbox_cxcywh_to_xyxy(bbox_preds) * factors - bboxes_gt = bbox_cxcywh_to_xyxy(bbox_targets) * factors - - # regression IoU loss, defaultly GIoU loss - loss_iou = self.loss_iou( - bboxes, bboxes_gt, bbox_weights, avg_factor=num_total_pos) - - # regression L1 loss - loss_bbox = self.loss_bbox( - bbox_preds, bbox_targets, bbox_weights, avg_factor=num_total_pos) - return loss_cls, loss_bbox, loss_iou - - def get_targets(self, - cls_scores_list, - bbox_preds_list, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore_list=None): - """"Compute regression and classification targets for a batch image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_scores_list (list[Tensor]): Box score logits from a single - decoder layer for each image with shape [num_query, - cls_out_channels]. - bbox_preds_list (list[Tensor]): Sigmoid outputs from a single - decoder layer for each image, with normalized coordinate - (cx, cy, w, h) and shape [num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore_list (list[Tensor], optional): Bounding - boxes which can be ignored for each image. Default None. - - Returns: - tuple: a tuple containing the following targets. - - - labels_list (list[Tensor]): Labels for all images. - - label_weights_list (list[Tensor]): Label weights for all \ - images. 
- - bbox_targets_list (list[Tensor]): BBox targets for all \ - images. - - bbox_weights_list (list[Tensor]): BBox weights for all \ - images. - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - """ - assert gt_bboxes_ignore_list is None, \ - 'Only supports for gt_bboxes_ignore setting to None.' - num_imgs = len(cls_scores_list) - gt_bboxes_ignore_list = [ - gt_bboxes_ignore_list for _ in range(num_imgs) - ] - - (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, cls_scores_list, bbox_preds_list, - gt_bboxes_list, gt_labels_list, img_metas, gt_bboxes_ignore_list) - num_total_pos = sum((inds.numel() for inds in pos_inds_list)) - num_total_neg = sum((inds.numel() for inds in neg_inds_list)) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - - def _get_target_single(self, - cls_score, - bbox_pred, - gt_bboxes, - gt_labels, - img_meta, - gt_bboxes_ignore=None): - """"Compute regression and classification targets for one image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_score (Tensor): Box score logits from a single decoder layer - for one image. Shape [num_query, cls_out_channels]. - bbox_pred (Tensor): Sigmoid outputs from a single decoder layer - for one image, with normalized coordinate (cx, cy, w, h) and - shape [num_query, 4]. - gt_bboxes (Tensor): Ground truth bboxes for one image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth class indices for one image - with shape (num_gts, ). - img_meta (dict): Meta information for one image. - gt_bboxes_ignore (Tensor, optional): Bounding boxes - which can be ignored. Default None. - - Returns: - tuple[Tensor]: a tuple containing the following for one image. - - - labels (Tensor): Labels of each image. - - label_weights (Tensor]): Label weights of each image. - - bbox_targets (Tensor): BBox targets of each image. - - bbox_weights (Tensor): BBox weights of each image. - - pos_inds (Tensor): Sampled positive indices for each image. - - neg_inds (Tensor): Sampled negative indices for each image. - """ - - num_bboxes = bbox_pred.size(0) - # assigner and sampler - assign_result = self.assigner.assign(bbox_pred, cls_score, gt_bboxes, - gt_labels, img_meta, - gt_bboxes_ignore) - sampling_result = self.sampler.sample(assign_result, bbox_pred, - gt_bboxes) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - # label targets - labels = gt_bboxes.new_full((num_bboxes, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds] - label_weights = gt_bboxes.new_ones(num_bboxes) - - # bbox targets - bbox_targets = torch.zeros_like(bbox_pred) - bbox_weights = torch.zeros_like(bbox_pred) - bbox_weights[pos_inds] = 1.0 - img_h, img_w, _ = img_meta['img_shape'] - - # DETR regress the relative position of boxes (cxcywh) in the image. - # Thus the learning target should be normalized by the image size, also - # the box format should be converted from defaultly x1y1x2y2 to cxcywh. 
- factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - pos_gt_bboxes_normalized = sampling_result.pos_gt_bboxes / factor - pos_gt_bboxes_targets = bbox_xyxy_to_cxcywh(pos_gt_bboxes_normalized) - bbox_targets[pos_inds] = pos_gt_bboxes_targets - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds) - - # over-write because img_metas are needed as inputs for bbox_head. - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """Forward function for training mode. - - Args: - x (list[Tensor]): Features from backbone. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert proposal_cfg is None, '"proposal_cfg" must be None' - outs = self(x, img_metas) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def get_bboxes(self, - all_cls_scores_list, - all_bbox_preds_list, - img_metas, - rescale=False): - """Transform network outputs for a batch into bbox predictions. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - img_metas (list[dict]): Meta information of each image. - rescale (bool, optional): If True, return boxes in original - image space. Default False. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. \ - The first item is an (n, 5) tensor, where the first 4 columns \ - are bounding box positions (tl_x, tl_y, br_x, br_y) and the \ - 5-th column is a score between 0 and 1. The second item is a \ - (n,) tensor where each item is the predicted class label of \ - the corresponding box. - """ - # NOTE defaultly only using outputs from the last feature level, - # and only the outputs from the last decoder layer is used. - cls_scores = all_cls_scores_list[-1][-1] - bbox_preds = all_bbox_preds_list[-1][-1] - - result_list = [] - for img_id in range(len(img_metas)): - cls_score = cls_scores[img_id] - bbox_pred = bbox_preds[img_id] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score, bbox_pred, - img_shape, scale_factor, - rescale) - result_list.append(proposals) - - return result_list - - def _get_bboxes_single(self, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=False): - """Transform outputs from the last decoder layer into bbox predictions - for each image. - - Args: - cls_score (Tensor): Box score logits from the last decoder layer - for each image. Shape [num_query, cls_out_channels]. 
- bbox_pred (Tensor): Sigmoid outputs from the last decoder layer - for each image, with coordinate format (cx, cy, w, h) and - shape [num_query, 4]. - img_shape (tuple[int]): Shape of input image, (height, width, 3). - scale_factor (ndarray, optional): Scale factor of the image arange - as (w_scale, h_scale, w_scale, h_scale). - rescale (bool, optional): If True, return boxes in original image - space. Default False. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. - - - det_bboxes: Predicted bboxes with shape [num_query, 5], \ - where the first 4 columns are bounding box positions \ - (tl_x, tl_y, br_x, br_y) and the 5-th column are scores \ - between 0 and 1. - - det_labels: Predicted labels of the corresponding box with \ - shape [num_query]. - """ - assert len(cls_score) == len(bbox_pred) - max_per_img = self.test_cfg.get('max_per_img', self.num_query) - # exclude background - if self.loss_cls.use_sigmoid: - cls_score = cls_score.sigmoid() - scores, indexes = cls_score.view(-1).topk(max_per_img) - det_labels = indexes % self.num_classes - bbox_index = indexes // self.num_classes - bbox_pred = bbox_pred[bbox_index] - else: - scores, det_labels = F.softmax(cls_score, dim=-1)[..., :-1].max(-1) - scores, bbox_index = scores.topk(max_per_img) - bbox_pred = bbox_pred[bbox_index] - det_labels = det_labels[bbox_index] - - det_bboxes = bbox_cxcywh_to_xyxy(bbox_pred) - det_bboxes[:, 0::2] = det_bboxes[:, 0::2] * img_shape[1] - det_bboxes[:, 1::2] = det_bboxes[:, 1::2] * img_shape[0] - det_bboxes[:, 0::2].clamp_(min=0, max=img_shape[1]) - det_bboxes[:, 1::2].clamp_(min=0, max=img_shape[0]) - if rescale: - det_bboxes /= det_bboxes.new_tensor(scale_factor) - det_bboxes = torch.cat((det_bboxes, scores.unsqueeze(1)), -1) - - return det_bboxes, det_labels - - def simple_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes without test-time augmentation. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,) - """ - # forward of this head requires img_metas - outs = self.forward(feats, img_metas) - results_list = self.get_bboxes(*outs, img_metas, rescale=rescale) - return results_list - - def forward_onnx(self, feats, img_metas): - """Forward function for exporting to ONNX. - - Over-write `forward` because: `masks` is directly created with - zero (valid position tag) and has the same spatial size as `x`. - Thus the construction of `masks` is different from that in `forward`. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple[list[Tensor], list[Tensor]]: Outputs for all scale levels. - - - all_cls_scores_list (list[Tensor]): Classification scores \ - for each scale level. Each is a 4D-tensor with shape \ - [nb_dec, bs, num_query, cls_out_channels]. Note \ - `cls_out_channels` should includes background. - - all_bbox_preds_list (list[Tensor]): Sigmoid regression \ - outputs for each scale level. 
Each is a 4D-tensor with \ - normalized coordinate format (cx, cy, w, h) and shape \ - [nb_dec, bs, num_query, 4]. - """ - num_levels = len(feats) - img_metas_list = [img_metas for _ in range(num_levels)] - return multi_apply(self.forward_single_onnx, feats, img_metas_list) - - def forward_single_onnx(self, x, img_metas): - """"Forward function for a single feature level with ONNX exportation. - - Args: - x (Tensor): Input feature from backbone's single stage, shape - [bs, c, h, w]. - img_metas (list[dict]): List of image information. - - Returns: - all_cls_scores (Tensor): Outputs from the classification head, - shape [nb_dec, bs, num_query, cls_out_channels]. Note - cls_out_channels should includes background. - all_bbox_preds (Tensor): Sigmoid outputs from the regression - head with normalized coordinate format (cx, cy, w, h). - Shape [nb_dec, bs, num_query, 4]. - """ - # Note `img_shape` is not dynamically traceable to ONNX, - # since the related augmentation was done with numpy under - # CPU. Thus `masks` is directly created with zeros (valid tag) - # and the same spatial shape as `x`. - # The difference between torch and exported ONNX model may be - # ignored, since the same performance is achieved (e.g. - # 40.1 vs 40.1 for DETR) - batch_size = x.size(0) - h, w = x.size()[-2:] - masks = x.new_zeros((batch_size, h, w)) # [B,h,w] - - x = self.input_proj(x) - # interpolate masks to have the same spatial shape with x - masks = F.interpolate( - masks.unsqueeze(1), size=x.shape[-2:]).to(torch.bool).squeeze(1) - pos_embed = self.positional_encoding(masks) - outs_dec, _ = self.transformer(x, masks, self.query_embedding.weight, - pos_embed) - - all_cls_scores = self.fc_cls(outs_dec) - all_bbox_preds = self.fc_reg(self.activate( - self.reg_ffn(outs_dec))).sigmoid() - return all_cls_scores, all_bbox_preds - - def onnx_export(self, all_cls_scores_list, all_bbox_preds_list, img_metas): - """Transform network outputs into bbox predictions, with ONNX - exportation. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - img_metas (list[dict]): Meta information of each image. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - assert len(img_metas) == 1, \ - 'Only support one input image while in exporting to ONNX' - - cls_scores = all_cls_scores_list[-1][-1] - bbox_preds = all_bbox_preds_list[-1][-1] - - # Note `img_shape` is not dynamically traceable to ONNX, - # here `img_shape_for_onnx` (padded shape of image tensor) - # is used. 
- img_shape = img_metas[0]['img_shape_for_onnx'] - max_per_img = self.test_cfg.get('max_per_img', self.num_query) - batch_size = cls_scores.size(0) - # `batch_index_offset` is used for the gather of concatenated tensor - batch_index_offset = torch.arange(batch_size).to( - cls_scores.device) * max_per_img - batch_index_offset = batch_index_offset.unsqueeze(1).expand( - batch_size, max_per_img) - - # supports dynamical batch inference - if self.loss_cls.use_sigmoid: - cls_scores = cls_scores.sigmoid() - scores, indexes = cls_scores.view(batch_size, -1).topk( - max_per_img, dim=1) - det_labels = indexes % self.num_classes - bbox_index = indexes // self.num_classes - bbox_index = (bbox_index + batch_index_offset).view(-1) - bbox_preds = bbox_preds.view(-1, 4)[bbox_index] - bbox_preds = bbox_preds.view(batch_size, -1, 4) - else: - scores, det_labels = F.softmax( - cls_scores, dim=-1)[..., :-1].max(-1) - scores, bbox_index = scores.topk(max_per_img, dim=1) - bbox_index = (bbox_index + batch_index_offset).view(-1) - bbox_preds = bbox_preds.view(-1, 4)[bbox_index] - det_labels = det_labels.view(-1)[bbox_index] - bbox_preds = bbox_preds.view(batch_size, -1, 4) - det_labels = det_labels.view(batch_size, -1) - - det_bboxes = bbox_cxcywh_to_xyxy(bbox_preds) - # use `img_shape_tensor` for dynamically exporting to ONNX - img_shape_tensor = img_shape.flip(0).repeat(2) # [w,h,w,h] - img_shape_tensor = img_shape_tensor.unsqueeze(0).unsqueeze(0).expand( - batch_size, det_bboxes.size(1), 4) - det_bboxes = det_bboxes * img_shape_tensor - # dynamically clip bboxes - x1, y1, x2, y2 = det_bboxes.split((1, 1, 1, 1), dim=-1) - from mmdet.core.export import dynamic_clip_for_onnx - x1, y1, x2, y2 = dynamic_clip_for_onnx(x1, y1, x2, y2, img_shape) - det_bboxes = torch.cat([x1, y1, x2, y2], dim=-1) - det_bboxes = torch.cat((det_bboxes, scores.unsqueeze(-1)), -1) - - return det_bboxes, det_labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/embedding_rpn_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/embedding_rpn_head.py deleted file mode 100644 index 22060b96..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/embedding_rpn_head.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.runner import BaseModule - -from mmdet.models.builder import HEADS -from ...core import bbox_cxcywh_to_xyxy - - -@HEADS.register_module() -class EmbeddingRPNHead(BaseModule): - """RPNHead in the `Sparse R-CNN `_ . - - Unlike traditional RPNHead, this module does not need FPN input, but just - decode `init_proposal_bboxes` and expand the first dimension of - `init_proposal_bboxes` and `init_proposal_features` to the batch_size. - - Args: - num_proposals (int): Number of init_proposals. Default 100. - proposal_feature_channel (int): Channel number of - init_proposal_feature. Defaults to 256. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - num_proposals=100, - proposal_feature_channel=256, - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(EmbeddingRPNHead, self).__init__(init_cfg) - self.num_proposals = num_proposals - self.proposal_feature_channel = proposal_feature_channel - self._init_layers() - - def _init_layers(self): - """Initialize a sparse set of proposal boxes and proposal features.""" - self.init_proposal_bboxes = nn.Embedding(self.num_proposals, 4) - self.init_proposal_features = nn.Embedding( - self.num_proposals, self.proposal_feature_channel) - - def init_weights(self): - """Initialize the init_proposal_bboxes as normalized. - - [c_x, c_y, w, h], and we initialize it to the size of the entire - image. - """ - super(EmbeddingRPNHead, self).init_weights() - nn.init.constant_(self.init_proposal_bboxes.weight[:, :2], 0.5) - nn.init.constant_(self.init_proposal_bboxes.weight[:, 2:], 1) - - def _decode_init_proposals(self, imgs, img_metas): - """Decode init_proposal_bboxes according to the size of images and - expand dimension of init_proposal_features to batch_size. - - Args: - imgs (list[Tensor]): List of FPN features. - img_metas (list[dict]): List of meta-information of - images. Need the img_shape to decode the init_proposals. - - Returns: - Tuple(Tensor): - - - proposals (Tensor): Decoded proposal bboxes, - has shape (batch_size, num_proposals, 4). - - init_proposal_features (Tensor): Expanded proposal - features, has shape - (batch_size, num_proposals, proposal_feature_channel). - - imgs_whwh (Tensor): Tensor with shape - (batch_size, 4), the dimension means - [img_width, img_height, img_width, img_height]. - """ - proposals = self.init_proposal_bboxes.weight.clone() - proposals = bbox_cxcywh_to_xyxy(proposals) - num_imgs = len(imgs[0]) - imgs_whwh = [] - for meta in img_metas: - h, w, _ = meta['img_shape'] - imgs_whwh.append(imgs[0].new_tensor([[w, h, w, h]])) - imgs_whwh = torch.cat(imgs_whwh, dim=0) - imgs_whwh = imgs_whwh[:, None, :] - - # imgs_whwh has shape (batch_size, 1, 4) - # The shape of proposals change from (num_proposals, 4) - # to (batch_size ,num_proposals, 4) - proposals = proposals * imgs_whwh - - init_proposal_features = self.init_proposal_features.weight.clone() - init_proposal_features = init_proposal_features[None].expand( - num_imgs, *init_proposal_features.size()) - return proposals, init_proposal_features, imgs_whwh - - def forward_dummy(self, img, img_metas): - """Dummy forward function. - - Used in flops calculation. - """ - return self._decode_init_proposals(img, img_metas) - - def forward_train(self, img, img_metas): - """Forward function in training stage.""" - return self._decode_init_proposals(img, img_metas) - - def simple_test_rpn(self, img, img_metas): - """Forward function in testing stage.""" - return self._decode_init_proposals(img, img_metas) - - def simple_test(self, img, img_metas): - """Forward function in testing stage.""" - raise NotImplementedError - - def aug_test_rpn(self, feats, img_metas): - raise NotImplementedError( - 'EmbeddingRPNHead does not support test-time augmentation') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/fcos_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/fcos_head.py deleted file mode 100644 index d72fb56c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/fcos_head.py +++ /dev/null @@ -1,455 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.cnn import Scale -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply, reduce_mean -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - -INF = 1e8 - - -@HEADS.register_module() -class FCOSHead(AnchorFreeHead): - """Anchor-free head used in `FCOS `_. - - The FCOS head does not use anchor boxes. Instead bounding boxes are - predicted at each pixel and a centerness measure is used to suppress - low-quality predictions. - Here norm_on_bbox, centerness_on_reg, dcn_on_last_conv are training - tricks used in official repo, which will bring remarkable mAP gains - of up to 4.9. Please see https://github.com/tianzhi0549/FCOS for - more detail. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - strides (list[int] | list[tuple[int, int]]): Strides of points - in multiple feature levels. Default: (4, 8, 16, 32, 64). - regress_ranges (tuple[tuple[int, int]]): Regress range of multiple - level points. - center_sampling (bool): If true, use center sampling. Default: False. - center_sample_radius (float): Radius of center sampling. Default: 1.5. - norm_on_bbox (bool): If true, normalize the regression targets - with FPN strides. Default: False. - centerness_on_reg (bool): If true, position centerness on the - regress branch. Please refer to https://github.com/tianzhi0549/FCOS/issues/89#issuecomment-516877042. - Default: False. - conv_bias (bool | str): If specified as `auto`, it will be decided by the - norm_cfg. Bias of conv will be set as True if `norm_cfg` is None, otherwise - False. Default: "auto". - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - loss_centerness (dict): Config of centerness loss. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, requires_grad=True). - init_cfg (dict or list[dict], optional): Initialization config dict. 
- - Example: - >>> self = FCOSHead(11, 7) - >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]] - >>> cls_score, bbox_pred, centerness = self.forward(feats) - >>> assert len(cls_score) == len(self.scales) - """ # noqa: E501 - - def __init__(self, - num_classes, - in_channels, - regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), - (512, INF)), - center_sampling=False, - center_sample_radius=1.5, - norm_on_bbox=False, - centerness_on_reg=False, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='IoULoss', loss_weight=1.0), - loss_centerness=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.regress_ranges = regress_ranges - self.center_sampling = center_sampling - self.center_sample_radius = center_sample_radius - self.norm_on_bbox = norm_on_bbox - self.centerness_on_reg = centerness_on_reg - super().__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - loss_bbox=loss_bbox, - norm_cfg=norm_cfg, - init_cfg=init_cfg, - **kwargs) - self.loss_centerness = build_loss(loss_centerness) - - def _init_layers(self): - """Initialize layers of the head.""" - super()._init_layers() - self.conv_centerness = nn.Conv2d(self.feat_channels, 1, 3, padding=1) - self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides]) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Box scores for each scale level, \ - each is a 4D-tensor, the channel number is \ - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each \ - scale level, each is a 4D-tensor, the channel number is \ - num_points * 4. - centernesses (list[Tensor]): centerness for each scale level, \ - each is a 4D-tensor, the channel number is num_points * 1. - """ - return multi_apply(self.forward_single, feats, self.scales, - self.strides) - - def forward_single(self, x, scale, stride): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - stride (int): The corresponding stride for feature maps, only - used to normalize the bbox prediction when self.norm_on_bbox - is True. - - Returns: - tuple: scores for each class, bbox predictions and centerness \ - predictions of input feature maps. - """ - cls_score, bbox_pred, cls_feat, reg_feat = super().forward_single(x) - if self.centerness_on_reg: - centerness = self.conv_centerness(reg_feat) - else: - centerness = self.conv_centerness(cls_feat) - # scale the bbox_pred of different level - # float to avoid overflow when enabling FP16 - bbox_pred = scale(bbox_pred).float() - if self.norm_on_bbox: - # bbox_pred needed for gradient computation has been modified - # by F.relu(bbox_pred) when run with PyTorch 1.10. 
So replace - # F.relu(bbox_pred) with bbox_pred.clamp(min=0) - bbox_pred = bbox_pred.clamp(min=0) - if not self.training: - bbox_pred *= stride - else: - bbox_pred = bbox_pred.exp() - return cls_score, bbox_pred, centerness - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses')) - def loss(self, - cls_scores, - bbox_preds, - centernesses, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - centernesses (list[Tensor]): centerness for each scale level, each - is a 4D-tensor, the channel number is num_points * 1. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) == len(centernesses) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - labels, bbox_targets = self.get_targets(all_level_points, gt_bboxes, - gt_labels) - - num_imgs = cls_scores[0].size(0) - # flatten cls_scores, bbox_preds and centerness - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - for bbox_pred in bbox_preds - ] - flatten_centerness = [ - centerness.permute(0, 2, 3, 1).reshape(-1) - for centerness in centernesses - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_centerness = torch.cat(flatten_centerness) - flatten_labels = torch.cat(labels) - flatten_bbox_targets = torch.cat(bbox_targets) - # repeat points to align with bbox_preds - flatten_points = torch.cat( - [points.repeat(num_imgs, 1) for points in all_level_points]) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((flatten_labels >= 0) - & (flatten_labels < bg_class_ind)).nonzero().reshape(-1) - num_pos = torch.tensor( - len(pos_inds), dtype=torch.float, device=bbox_preds[0].device) - num_pos = max(reduce_mean(num_pos), 1.0) - loss_cls = self.loss_cls( - flatten_cls_scores, flatten_labels, avg_factor=num_pos) - - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_centerness = flatten_centerness[pos_inds] - pos_bbox_targets = flatten_bbox_targets[pos_inds] - pos_centerness_targets = self.centerness_target(pos_bbox_targets) - # centerness weighted iou loss - centerness_denorm = max( - reduce_mean(pos_centerness_targets.sum().detach()), 1e-6) - - if len(pos_inds) > 0: - pos_points = flatten_points[pos_inds] - pos_decoded_bbox_preds = self.bbox_coder.decode( - pos_points, pos_bbox_preds) - pos_decoded_target_preds = self.bbox_coder.decode( - pos_points, pos_bbox_targets) - loss_bbox = self.loss_bbox( - pos_decoded_bbox_preds, - pos_decoded_target_preds, 
- weight=pos_centerness_targets, - avg_factor=centerness_denorm) - loss_centerness = self.loss_centerness( - pos_centerness, pos_centerness_targets, avg_factor=num_pos) - else: - loss_bbox = pos_bbox_preds.sum() - loss_centerness = pos_centerness.sum() - - return dict( - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_centerness=loss_centerness) - - def get_targets(self, points, gt_bboxes_list, gt_labels_list): - """Compute regression, classification and centerness targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - - Returns: - tuple: - concat_lvl_labels (list[Tensor]): Labels of each level. \ - concat_lvl_bbox_targets (list[Tensor]): BBox targets of each \ - level. - """ - assert len(points) == len(self.regress_ranges) - num_levels = len(points) - # expand regress ranges to align with points - expanded_regress_ranges = [ - points[i].new_tensor(self.regress_ranges[i])[None].expand_as( - points[i]) for i in range(num_levels) - ] - # concat all levels points and regress ranges - concat_regress_ranges = torch.cat(expanded_regress_ranges, dim=0) - concat_points = torch.cat(points, dim=0) - - # the number of points per img, per lvl - num_points = [center.size(0) for center in points] - - # get labels and bbox_targets of each image - labels_list, bbox_targets_list = multi_apply( - self._get_target_single, - gt_bboxes_list, - gt_labels_list, - points=concat_points, - regress_ranges=concat_regress_ranges, - num_points_per_lvl=num_points) - - # split to per img, per level - labels_list = [labels.split(num_points, 0) for labels in labels_list] - bbox_targets_list = [ - bbox_targets.split(num_points, 0) - for bbox_targets in bbox_targets_list - ] - - # concat per level image - concat_lvl_labels = [] - concat_lvl_bbox_targets = [] - for i in range(num_levels): - concat_lvl_labels.append( - torch.cat([labels[i] for labels in labels_list])) - bbox_targets = torch.cat( - [bbox_targets[i] for bbox_targets in bbox_targets_list]) - if self.norm_on_bbox: - bbox_targets = bbox_targets / self.strides[i] - concat_lvl_bbox_targets.append(bbox_targets) - return concat_lvl_labels, concat_lvl_bbox_targets - - def _get_target_single(self, gt_bboxes, gt_labels, points, regress_ranges, - num_points_per_lvl): - """Compute regression and classification targets for a single image.""" - num_points = points.size(0) - num_gts = gt_labels.size(0) - if num_gts == 0: - return gt_labels.new_full((num_points,), self.num_classes), \ - gt_bboxes.new_zeros((num_points, 4)) - - areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1]) - # TODO: figure out why these two are different - # areas = areas[None].expand(num_points, num_gts) - areas = areas[None].repeat(num_points, 1) - regress_ranges = regress_ranges[:, None, :].expand( - num_points, num_gts, 2) - gt_bboxes = gt_bboxes[None].expand(num_points, num_gts, 4) - xs, ys = points[:, 0], points[:, 1] - xs = xs[:, None].expand(num_points, num_gts) - ys = ys[:, None].expand(num_points, num_gts) - - left = xs - gt_bboxes[..., 0] - right = gt_bboxes[..., 2] - xs - top = ys - gt_bboxes[..., 1] - bottom = gt_bboxes[..., 3] - ys - bbox_targets = torch.stack((left, top, right, bottom), -1) - - if self.center_sampling: - # condition1: inside a `center bbox` - radius = 
self.center_sample_radius - center_xs = (gt_bboxes[..., 0] + gt_bboxes[..., 2]) / 2 - center_ys = (gt_bboxes[..., 1] + gt_bboxes[..., 3]) / 2 - center_gts = torch.zeros_like(gt_bboxes) - stride = center_xs.new_zeros(center_xs.shape) - - # project the points on current lvl back to the `original` sizes - lvl_begin = 0 - for lvl_idx, num_points_lvl in enumerate(num_points_per_lvl): - lvl_end = lvl_begin + num_points_lvl - stride[lvl_begin:lvl_end] = self.strides[lvl_idx] * radius - lvl_begin = lvl_end - - x_mins = center_xs - stride - y_mins = center_ys - stride - x_maxs = center_xs + stride - y_maxs = center_ys + stride - center_gts[..., 0] = torch.where(x_mins > gt_bboxes[..., 0], - x_mins, gt_bboxes[..., 0]) - center_gts[..., 1] = torch.where(y_mins > gt_bboxes[..., 1], - y_mins, gt_bboxes[..., 1]) - center_gts[..., 2] = torch.where(x_maxs > gt_bboxes[..., 2], - gt_bboxes[..., 2], x_maxs) - center_gts[..., 3] = torch.where(y_maxs > gt_bboxes[..., 3], - gt_bboxes[..., 3], y_maxs) - - cb_dist_left = xs - center_gts[..., 0] - cb_dist_right = center_gts[..., 2] - xs - cb_dist_top = ys - center_gts[..., 1] - cb_dist_bottom = center_gts[..., 3] - ys - center_bbox = torch.stack( - (cb_dist_left, cb_dist_top, cb_dist_right, cb_dist_bottom), -1) - inside_gt_bbox_mask = center_bbox.min(-1)[0] > 0 - else: - # condition1: inside a gt bbox - inside_gt_bbox_mask = bbox_targets.min(-1)[0] > 0 - - # condition2: limit the regression range for each location - max_regress_distance = bbox_targets.max(-1)[0] - inside_regress_range = ( - (max_regress_distance >= regress_ranges[..., 0]) - & (max_regress_distance <= regress_ranges[..., 1])) - - # if there are still more than one objects for a location, - # we choose the one with minimal area - areas[inside_gt_bbox_mask == 0] = INF - areas[inside_regress_range == 0] = INF - min_area, min_area_inds = areas.min(dim=1) - - labels = gt_labels[min_area_inds] - labels[min_area == INF] = self.num_classes # set as BG - bbox_targets = bbox_targets[range(num_points), min_area_inds] - - return labels, bbox_targets - - def centerness_target(self, pos_bbox_targets): - """Compute centerness targets. - - Args: - pos_bbox_targets (Tensor): BBox targets of positive bboxes in shape - (num_pos, 4) - - Returns: - Tensor: Centerness target. - """ - # only calculate pos centerness targets, otherwise there may be nan - left_right = pos_bbox_targets[:, [0, 2]] - top_bottom = pos_bbox_targets[:, [1, 3]] - if len(left_right) == 0: - centerness_targets = left_right[..., 0] - else: - centerness_targets = ( - left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * ( - top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0]) - return torch.sqrt(centerness_targets) - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points according to feature map size. - - This function will be deprecated soon. 
- """ - warnings.warn( - '`_get_points_single` in `FCOSHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map ' - 'with `self.prior_generator.single_level_grid_priors` ') - - y, x = super()._get_points_single(featmap_size, stride, dtype, device) - points = torch.stack((x.reshape(-1) * stride, y.reshape(-1) * stride), - dim=-1) + stride // 2 - return points diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/fovea_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/fovea_head.py deleted file mode 100644 index 8be7fc94..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/fovea_head.py +++ /dev/null @@ -1,385 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import DeformConv2d -from mmcv.runner import BaseModule - -from mmdet.core import multi_apply -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS -from .anchor_free_head import AnchorFreeHead - -INF = 1e8 - - -class FeatureAlign(BaseModule): - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - deform_groups=4, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.1, - override=dict( - type='Normal', name='conv_adaption', std=0.01))): - super(FeatureAlign, self).__init__(init_cfg) - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 4, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x, shape): - offset = self.conv_offset(shape) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@HEADS.register_module() -class FoveaHead(AnchorFreeHead): - """FoveaBox: Beyond Anchor-based Object Detector - https://arxiv.org/abs/1904.03797 - """ - - def __init__(self, - num_classes, - in_channels, - base_edge_list=(16, 32, 64, 128, 256), - scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, - 512)), - sigma=0.4, - with_deform=False, - deform_groups=4, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.base_edge_list = base_edge_list - self.scale_ranges = scale_ranges - self.sigma = sigma - self.with_deform = with_deform - self.deform_groups = deform_groups - super().__init__(num_classes, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - # box branch - super()._init_reg_convs() - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - # cls branch - if not self.with_deform: - super()._init_cls_convs() - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - else: - self.cls_convs = nn.ModuleList() - self.cls_convs.append( - ConvModule( - self.feat_channels, (self.feat_channels * 4), - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.cls_convs.append( - ConvModule((self.feat_channels * 4), (self.feat_channels * 4), - 1, - stride=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.feature_adaption = FeatureAlign( - self.feat_channels, - self.feat_channels, - kernel_size=3, - 
deform_groups=self.deform_groups) - self.conv_cls = nn.Conv2d( - int(self.feat_channels * 4), - self.cls_out_channels, - 3, - padding=1) - - def forward_single(self, x): - cls_feat = x - reg_feat = x - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = self.conv_reg(reg_feat) - if self.with_deform: - cls_feat = self.feature_adaption(cls_feat, bbox_pred.exp()) - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - cls_score = self.conv_cls(cls_feat) - return cls_score, bbox_pred - - def loss(self, - cls_scores, - bbox_preds, - gt_bbox_list, - gt_label_list, - img_metas, - gt_bboxes_ignore=None): - assert len(cls_scores) == len(bbox_preds) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - points = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - num_imgs = cls_scores[0].size(0) - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - for bbox_pred in bbox_preds - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_labels, flatten_bbox_targets = self.get_targets( - gt_bbox_list, gt_label_list, featmap_sizes, points) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((flatten_labels >= 0) - & (flatten_labels < self.num_classes)).nonzero().view(-1) - num_pos = len(pos_inds) - - loss_cls = self.loss_cls( - flatten_cls_scores, flatten_labels, avg_factor=num_pos + num_imgs) - if num_pos > 0: - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_bbox_targets = flatten_bbox_targets[pos_inds] - pos_weights = pos_bbox_targets.new_zeros( - pos_bbox_targets.size()) + 1.0 - loss_bbox = self.loss_bbox( - pos_bbox_preds, - pos_bbox_targets, - pos_weights, - avg_factor=num_pos) - else: - loss_bbox = torch.tensor( - 0, - dtype=flatten_bbox_preds.dtype, - device=flatten_bbox_preds.device) - return dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - def get_targets(self, gt_bbox_list, gt_label_list, featmap_sizes, points): - label_list, bbox_target_list = multi_apply( - self._get_target_single, - gt_bbox_list, - gt_label_list, - featmap_size_list=featmap_sizes, - point_list=points) - flatten_labels = [ - torch.cat([ - labels_level_img.flatten() for labels_level_img in labels_level - ]) for labels_level in zip(*label_list) - ] - flatten_bbox_targets = [ - torch.cat([ - bbox_targets_level_img.reshape(-1, 4) - for bbox_targets_level_img in bbox_targets_level - ]) for bbox_targets_level in zip(*bbox_target_list) - ] - flatten_labels = torch.cat(flatten_labels) - flatten_bbox_targets = torch.cat(flatten_bbox_targets) - return flatten_labels, flatten_bbox_targets - - def _get_target_single(self, - gt_bboxes_raw, - gt_labels_raw, - featmap_size_list=None, - point_list=None): - - gt_areas = torch.sqrt((gt_bboxes_raw[:, 2] - gt_bboxes_raw[:, 0]) * - (gt_bboxes_raw[:, 3] - gt_bboxes_raw[:, 1])) - label_list = [] - bbox_target_list = [] - # for each pyramid, find the cls and box target - for base_len, (lower_bound, upper_bound), stride, featmap_size, \ - points in zip(self.base_edge_list, self.scale_ranges, - self.strides, featmap_size_list, point_list): - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - points = points.view(*featmap_size, 2) - x, y = points[..., 0], points[..., 1] - labels = gt_labels_raw.new_zeros(featmap_size) + self.num_classes - 
bbox_targets = gt_bboxes_raw.new(featmap_size[0], featmap_size[1], - 4) + 1 - # scale assignment - hit_indices = ((gt_areas >= lower_bound) & - (gt_areas <= upper_bound)).nonzero().flatten() - if len(hit_indices) == 0: - label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - continue - _, hit_index_order = torch.sort(-gt_areas[hit_indices]) - hit_indices = hit_indices[hit_index_order] - gt_bboxes = gt_bboxes_raw[hit_indices, :] / stride - gt_labels = gt_labels_raw[hit_indices] - half_w = 0.5 * (gt_bboxes[:, 2] - gt_bboxes[:, 0]) - half_h = 0.5 * (gt_bboxes[:, 3] - gt_bboxes[:, 1]) - # valid fovea area: left, right, top, down - pos_left = torch.ceil( - gt_bboxes[:, 0] + (1 - self.sigma) * half_w - 0.5).long(). \ - clamp(0, featmap_size[1] - 1) - pos_right = torch.floor( - gt_bboxes[:, 0] + (1 + self.sigma) * half_w - 0.5).long(). \ - clamp(0, featmap_size[1] - 1) - pos_top = torch.ceil( - gt_bboxes[:, 1] + (1 - self.sigma) * half_h - 0.5).long(). \ - clamp(0, featmap_size[0] - 1) - pos_down = torch.floor( - gt_bboxes[:, 1] + (1 + self.sigma) * half_h - 0.5).long(). \ - clamp(0, featmap_size[0] - 1) - for px1, py1, px2, py2, label, (gt_x1, gt_y1, gt_x2, gt_y2) in \ - zip(pos_left, pos_top, pos_right, pos_down, gt_labels, - gt_bboxes_raw[hit_indices, :]): - labels[py1:py2 + 1, px1:px2 + 1] = label - bbox_targets[py1:py2 + 1, px1:px2 + 1, 0] = \ - (x[py1:py2 + 1, px1:px2 + 1] - gt_x1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 1] = \ - (y[py1:py2 + 1, px1:px2 + 1] - gt_y1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 2] = \ - (gt_x2 - x[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 3] = \ - (gt_y2 - y[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets = bbox_targets.clamp(min=1. / 16, max=16.) - label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - return label_list, bbox_target_list - - # Same as base_dense_head/_get_bboxes_single except self._bbox_decode - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. Fovea head does not need this value. - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 2). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. 
If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_score_list) == len(bbox_pred_list) - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for level_idx, (cls_score, bbox_pred, stride, base_len, priors) in \ - enumerate(zip(cls_score_list, bbox_pred_list, self.strides, - self.base_edge_list, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, _, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - bboxes = self._bbox_decode(priors, bbox_pred, base_len, img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process(mlvl_scores, mlvl_labels, mlvl_bboxes, - img_meta['scale_factor'], cfg, rescale, - with_nms) - - def _bbox_decode(self, priors, bbox_pred, base_len, max_shape): - bbox_pred = bbox_pred.exp() - - y = priors[:, 1] - x = priors[:, 0] - x1 = (x - base_len * bbox_pred[:, 0]). \ - clamp(min=0, max=max_shape[1] - 1) - y1 = (y - base_len * bbox_pred[:, 1]). \ - clamp(min=0, max=max_shape[0] - 1) - x2 = (x + base_len * bbox_pred[:, 2]). \ - clamp(min=0, max=max_shape[1] - 1) - y2 = (y + base_len * bbox_pred[:, 3]). \ - clamp(min=0, max=max_shape[0] - 1) - decoded_bboxes = torch.stack([x1, y1, x2, y2], -1) - return decoded_bboxes - - def _get_points_single(self, *args, **kwargs): - """Get points according to feature map size. - - This function will be deprecated soon. - """ - warnings.warn( - '`_get_points_single` in `FoveaHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map ' - 'with `self.prior_generator.single_level_grid_priors` ') - y, x = super()._get_points_single(*args, **kwargs) - return y + 0.5, x + 0.5 diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/free_anchor_retina_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/free_anchor_retina_head.py deleted file mode 100644 index 3acd25ec..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/free_anchor_retina_head.py +++ /dev/null @@ -1,272 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F - -from mmdet.core import bbox_overlaps -from ..builder import HEADS -from .retina_head import RetinaHead - -EPS = 1e-12 - - -@HEADS.register_module() -class FreeAnchorRetinaHead(RetinaHead): - """FreeAnchor RetinaHead used in https://arxiv.org/abs/1909.02466. 
- - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of conv layers in cls and reg tower. - Default: 4. - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, - requires_grad=True). - pre_anchor_topk (int): Number of boxes that be token in each bag. - bbox_thr (float): The threshold of the saturated linear function. It is - usually the same with the IoU threshold used in NMS. - gamma (float): Gamma parameter in focal loss. - alpha (float): Alpha parameter in focal loss. - """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - pre_anchor_topk=50, - bbox_thr=0.6, - gamma=2.0, - alpha=0.5, - **kwargs): - super(FreeAnchorRetinaHead, - self).__init__(num_classes, in_channels, stacked_convs, conv_cfg, - norm_cfg, **kwargs) - - self.pre_anchor_topk = pre_anchor_topk - self.bbox_thr = bbox_thr - self.gamma = gamma - self.alpha = alpha - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - device = cls_scores[0].device - anchor_list, _ = self.get_anchors( - featmap_sizes, img_metas, device=device) - anchors = [torch.cat(anchor) for anchor in anchor_list] - - # concatenate each level - cls_scores = [ - cls.permute(0, 2, 3, - 1).reshape(cls.size(0), -1, self.cls_out_channels) - for cls in cls_scores - ] - bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(bbox_pred.size(0), -1, 4) - for bbox_pred in bbox_preds - ] - cls_scores = torch.cat(cls_scores, dim=1) - bbox_preds = torch.cat(bbox_preds, dim=1) - - cls_prob = torch.sigmoid(cls_scores) - box_prob = [] - num_pos = 0 - positive_losses = [] - for _, (anchors_, gt_labels_, gt_bboxes_, cls_prob_, - bbox_preds_) in enumerate( - zip(anchors, gt_labels, gt_bboxes, cls_prob, bbox_preds)): - - with torch.no_grad(): - if len(gt_bboxes_) == 0: - image_box_prob = torch.zeros( - anchors_.size(0), - self.cls_out_channels).type_as(bbox_preds_) - else: - # box_localization: a_{j}^{loc}, shape: [j, 4] - pred_boxes = self.bbox_coder.decode(anchors_, bbox_preds_) - - # object_box_iou: IoU_{ij}^{loc}, shape: [i, j] - object_box_iou = bbox_overlaps(gt_bboxes_, pred_boxes) - - # object_box_prob: P{a_{j} -> b_{i}}, shape: [i, j] - t1 = self.bbox_thr - t2 = object_box_iou.max( - dim=1, keepdim=True).values.clamp(min=t1 + 1e-12) - object_box_prob = ((object_box_iou - t1) / - (t2 - t1)).clamp( - min=0, max=1) - - # object_cls_box_prob: P{a_{j} -> b_{i}}, shape: [i, c, j] - num_obj = gt_labels_.size(0) - indices = torch.stack([ - torch.arange(num_obj).type_as(gt_labels_), gt_labels_ - ], - dim=0) - object_cls_box_prob = torch.sparse_coo_tensor( - indices, object_box_prob) - - # image_box_iou: P{a_{j} \in A_{+}}, shape: [c, j] - """ - from "start" to "end" implement: - image_box_iou = torch.sparse.max(object_cls_box_prob, - dim=0).t() - - """ - # start - box_cls_prob = torch.sparse.sum( - object_cls_box_prob, dim=0).to_dense() - - indices = torch.nonzero(box_cls_prob, as_tuple=False).t_() - if indices.numel() == 0: - image_box_prob = torch.zeros( - anchors_.size(0), - self.cls_out_channels).type_as(object_box_prob) - else: - nonzero_box_prob = torch.where( - (gt_labels_.unsqueeze(dim=-1) == indices[0]), - object_box_prob[:, indices[1]], - torch.tensor([ - 0 - ]).type_as(object_box_prob)).max(dim=0).values - - # upmap to shape [j, c] - image_box_prob = torch.sparse_coo_tensor( - indices.flip([0]), - nonzero_box_prob, - size=(anchors_.size(0), - self.cls_out_channels)).to_dense() - # end - - box_prob.append(image_box_prob) - - # construct bags for objects - match_quality_matrix = bbox_overlaps(gt_bboxes_, anchors_) - _, matched = torch.topk( - match_quality_matrix, - self.pre_anchor_topk, - dim=1, - sorted=False) - del match_quality_matrix - - # matched_cls_prob: P_{ij}^{cls} - matched_cls_prob = torch.gather( - cls_prob_[matched], 2, - gt_labels_.view(-1, 1, 1).repeat(1, self.pre_anchor_topk, - 1)).squeeze(2) - - # matched_box_prob: P_{ij}^{loc} - matched_anchors = anchors_[matched] - matched_object_targets = self.bbox_coder.encode( - matched_anchors, - gt_bboxes_.unsqueeze(dim=1).expand_as(matched_anchors)) - loss_bbox = self.loss_bbox( - bbox_preds_[matched], - matched_object_targets, - reduction_override='none').sum(-1) - matched_box_prob = torch.exp(-loss_bbox) - - # positive_losses: {-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )} - num_pos += len(gt_bboxes_) - positive_losses.append( - 
self.positive_bag_loss(matched_cls_prob, matched_box_prob)) - positive_loss = torch.cat(positive_losses).sum() / max(1, num_pos) - - # box_prob: P{a_{j} \in A_{+}} - box_prob = torch.stack(box_prob, dim=0) - - # negative_loss: - # \sum_{j}{ FL((1 - P{a_{j} \in A_{+}}) * (1 - P_{j}^{bg})) } / n||B|| - negative_loss = self.negative_bag_loss(cls_prob, box_prob).sum() / max( - 1, num_pos * self.pre_anchor_topk) - - # avoid the absence of gradients in regression subnet - # when no ground-truth in a batch - if num_pos == 0: - positive_loss = bbox_preds.sum() * 0 - - losses = { - 'positive_bag_loss': positive_loss, - 'negative_bag_loss': negative_loss - } - return losses - - def positive_bag_loss(self, matched_cls_prob, matched_box_prob): - """Compute positive bag loss. - - :math:`-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )`. - - :math:`P_{ij}^{cls}`: matched_cls_prob, classification probability of matched samples. - - :math:`P_{ij}^{loc}`: matched_box_prob, box probability of matched samples. - - Args: - matched_cls_prob (Tensor): Classification probability of matched - samples in shape (num_gt, pre_anchor_topk). - matched_box_prob (Tensor): BBox probability of matched samples, - in shape (num_gt, pre_anchor_topk). - - Returns: - Tensor: Positive bag loss in shape (num_gt,). - """ # noqa: E501, W605 - # bag_prob = Mean-max(matched_prob) - matched_prob = matched_cls_prob * matched_box_prob - weight = 1 / torch.clamp(1 - matched_prob, 1e-12, None) - weight /= weight.sum(dim=1).unsqueeze(dim=-1) - bag_prob = (weight * matched_prob).sum(dim=1) - # positive_bag_loss = -self.alpha * log(bag_prob) - return self.alpha * F.binary_cross_entropy( - bag_prob, torch.ones_like(bag_prob), reduction='none') - - def negative_bag_loss(self, cls_prob, box_prob): - """Compute negative bag loss. - - :math:`FL((1 - P_{a_{j} \in A_{+}}) * (1 - P_{j}^{bg}))`. - - :math:`P_{a_{j} \in A_{+}}`: Box_probability of matched samples. - - :math:`P_{j}^{bg}`: Classification probability of negative samples. - - Args: - cls_prob (Tensor): Classification probability, in shape - (num_img, num_anchors, num_classes). - box_prob (Tensor): Box probability, in shape - (num_img, num_anchors, num_classes). - - Returns: - Tensor: Negative bag loss in shape (num_img, num_anchors, num_classes). - """ # noqa: E501, W605 - prob = cls_prob * (1 - box_prob) - # There are some cases when neg_prob = 0. - # This will cause the neg_prob.log() to be inf without clamp. - prob = prob.clamp(min=EPS, max=1 - EPS) - negative_bag_loss = prob**self.gamma * F.binary_cross_entropy( - prob, torch.zeros_like(prob), reduction='none') - return (1 - self.alpha) * negative_bag_loss diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/fsaf_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/fsaf_head.py deleted file mode 100644 index 2d2b7879..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/fsaf_head.py +++ /dev/null @@ -1,433 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, images_to_levels, multi_apply, - unmap) -from ..builder import HEADS -from ..losses.accuracy import accuracy -from ..losses.utils import weight_reduce_loss -from .retina_head import RetinaHead - - -@HEADS.register_module() -class FSAFHead(RetinaHead): - """Anchor-free head used in `FSAF `_. - - The head contains two subnetworks. 
The first classifies anchor boxes and - the second regresses deltas for the anchors (num_anchors is 1 for anchor- - free methods) - - Args: - *args: Same as its base class in :class:`RetinaHead` - score_threshold (float, optional): The score_threshold to calculate - positive recall. If given, prediction scores lower than this value - is counted as incorrect prediction. Default to None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - **kwargs: Same as its base class in :class:`RetinaHead` - - Example: - >>> import torch - >>> self = FSAFHead(11, 7) - >>> x = torch.rand(1, 7, 32, 32) - >>> cls_score, bbox_pred = self.forward_single(x) - >>> # Each anchor predicts a score for each class except background - >>> cls_per_anchor = cls_score.shape[1] / self.num_anchors - >>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors - >>> assert cls_per_anchor == self.num_classes - >>> assert box_per_anchor == 4 - """ - - def __init__(self, *args, score_threshold=None, init_cfg=None, **kwargs): - # The positive bias in self.retina_reg conv is to prevent predicted \ - # bbox with 0 area - if init_cfg is None: - init_cfg = dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=[ - dict( - type='Normal', - name='retina_cls', - std=0.01, - bias_prob=0.01), - dict( - type='Normal', name='retina_reg', std=0.01, bias=0.25) - ]) - super().__init__(*args, init_cfg=init_cfg, **kwargs) - self.score_threshold = score_threshold - - def forward_single(self, x): - """Forward feature map of a single scale level. - - Args: - x (Tensor): Feature map of a single scale level. - - Returns: - tuple (Tensor): - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W). - """ - cls_score, bbox_pred = super().forward_single(x) - # relu: TBLR encoder only accepts positive bbox_pred - return cls_score, self.relu(bbox_pred) - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Most of the codes are the same with the base class - :obj: `AnchorHead`, except that it also collects and returns - the matched gt index in the image (from 0 to num_gt-1). If the - anchor bbox is not matched to any gt, the corresponding value in - pos_gt_inds is -1. 
- """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # Assign gt and sample anchors - anchors = flat_anchors[inside_flags.type(torch.bool), :] - assign_result = self.assigner.assign( - anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros((num_valid_anchors, label_channels), - dtype=torch.float) - pos_gt_inds = anchors.new_full((num_valid_anchors, ), - -1, - dtype=torch.long) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, both - # the predicted boxes and regression targets should be with - # absolute coordinate format. - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - # The assigned gt_index for each anchor. (0-based) - pos_gt_inds[pos_inds] = sampling_result.pos_assigned_gt_inds - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # shadowed_labels is a tensor composed of tuples - # (anchor_inds, class_label) that indicate those anchors lying in the - # outer region of a gt or overlapped by another gt with a smaller - # area. - # - # Therefore, only the shadowed labels are ignored for loss calculation. - # the key `shadowed_labels` is defined in :obj:`CenterRegionAssigner` - shadowed_labels = assign_result.get_extra_property('shadowed_labels') - if shadowed_labels is not None and shadowed_labels.numel(): - if len(shadowed_labels.shape) == 2: - idx_, label_ = shadowed_labels[:, 0], shadowed_labels[:, 1] - assert (labels[idx_] != label_).all(), \ - 'One label cannot be both positive and ignored' - label_weights[idx_, label_] = 0 - else: - label_weights[shadowed_labels] = 0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap(labels, num_total_anchors, inside_flags) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - pos_gt_inds = unmap( - pos_gt_inds, num_total_anchors, inside_flags, fill=-1) - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds, sampling_result, pos_gt_inds) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. 
- - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W). - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - for i in range(len(bbox_preds)): # loop over fpn level - # avoid 0 area of the predicted bbox - bbox_preds[i] = bbox_preds[i].clamp(min=1e-4) - # TODO: It may directly use the base-class loss function. - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - batch_size = len(gt_bboxes) - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, - pos_assigned_gt_inds_list) = cls_reg_targets - - num_gts = np.array(list(map(len, gt_labels))) - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - # `pos_assigned_gt_inds_list` (length: fpn_levels) stores the assigned - # gt index of each anchor bbox in each fpn level. - cum_num_gts = list(np.cumsum(num_gts)) # length of batch_size - for i, assign in enumerate(pos_assigned_gt_inds_list): - # loop over fpn levels - for j in range(1, batch_size): - # loop over batch size - # Convert gt indices in each img to those in the batch - assign[j][assign[j] >= 0] += int(cum_num_gts[j - 1]) - pos_assigned_gt_inds_list[i] = assign.flatten() - labels_list[i] = labels_list[i].flatten() - num_gts = sum(map(len, gt_labels)) # total number of gt in the batch - # The unique label index of each gt in the batch - label_sequence = torch.arange(num_gts, device=device) - # Collect the average loss of each gt in each level - with torch.no_grad(): - loss_levels, = multi_apply( - self.collect_loss_level_single, - losses_cls, - losses_bbox, - pos_assigned_gt_inds_list, - labels_seq=label_sequence) - # Shape: (fpn_levels, num_gts). 
Loss of each gt at each fpn level - loss_levels = torch.stack(loss_levels, dim=0) - # Locate the best fpn level for loss back-propagation - if loss_levels.numel() == 0: # zero gt - argmin = loss_levels.new_empty((num_gts, ), dtype=torch.long) - else: - _, argmin = loss_levels.min(dim=0) - - # Reweight the loss of each (anchor, label) pair, so that only those - # at the best gt level are back-propagated. - losses_cls, losses_bbox, pos_inds = multi_apply( - self.reweight_loss_single, - losses_cls, - losses_bbox, - pos_assigned_gt_inds_list, - labels_list, - list(range(len(losses_cls))), - min_levels=argmin) - num_pos = torch.cat(pos_inds, 0).sum().float() - pos_recall = self.calculate_pos_recall(cls_scores, labels_list, - pos_inds) - - if num_pos == 0: # No gt - avg_factor = num_pos + float(num_total_neg) - else: - avg_factor = num_pos - for i in range(len(losses_cls)): - losses_cls[i] /= avg_factor - losses_bbox[i] /= avg_factor - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - num_pos=num_pos / batch_size, - pos_recall=pos_recall) - - def calculate_pos_recall(self, cls_scores, labels_list, pos_inds): - """Calculate positive recall with score threshold. - - Args: - cls_scores (list[Tensor]): Classification scores at all fpn levels. - Each tensor is in shape (N, num_classes * num_anchors, H, W) - labels_list (list[Tensor]): The label that each anchor is assigned - to. Shape (N * H * W * num_anchors, ) - pos_inds (list[Tensor]): List of bool tensors indicating whether - the anchor is assigned to a positive label. - Shape (N * H * W * num_anchors, ) - - Returns: - Tensor: A single float number indicating the positive recall. - """ - with torch.no_grad(): - num_class = self.num_classes - scores = [ - cls.permute(0, 2, 3, 1).reshape(-1, num_class)[pos] - for cls, pos in zip(cls_scores, pos_inds) - ] - labels = [ - label.reshape(-1)[pos] - for label, pos in zip(labels_list, pos_inds) - ] - scores = torch.cat(scores, dim=0) - labels = torch.cat(labels, dim=0) - if self.use_sigmoid_cls: - scores = scores.sigmoid() - else: - scores = scores.softmax(dim=1) - - return accuracy(scores, labels, thresh=self.score_threshold) - - def collect_loss_level_single(self, cls_loss, reg_loss, assigned_gt_inds, - labels_seq): - """Get the average loss in each FPN level w.r.t. each gt label. - - Args: - cls_loss (Tensor): Classification loss of each feature map pixel, - shape (num_anchor, num_class) - reg_loss (Tensor): Regression loss of each feature map pixel, - shape (num_anchor, 4) - assigned_gt_inds (Tensor): It indicates which gt the prior is - assigned to (0-based, -1: no assignment). shape (num_anchor), - labels_seq: The rank of labels. shape (num_gt) - - Returns: - shape: (num_gt), average loss of each gt in this level - """ - if len(reg_loss.shape) == 2: # iou loss has shape (num_prior, 4) - reg_loss = reg_loss.sum(dim=-1) # sum loss in tblr dims - if len(cls_loss.shape) == 2: - cls_loss = cls_loss.sum(dim=-1) # sum loss in class dims - loss = cls_loss + reg_loss - assert loss.size(0) == assigned_gt_inds.size(0) - # Default loss value is 1e6 for a layer where no anchor is positive - # to ensure it will not be chosen to back-propagate gradient - losses_ = loss.new_full(labels_seq.shape, 1e6) - for i, l in enumerate(labels_seq): - match = assigned_gt_inds == l - if match.any(): - losses_[i] = loss[match].mean() - return losses_, - - def reweight_loss_single(self, cls_loss, reg_loss, assigned_gt_inds, - labels, level, min_levels): - """Reweight loss values at each level. 
- - Reassign loss values at each level by masking those where the - pre-calculated loss is too large. Then return the reduced losses. - - Args: - cls_loss (Tensor): Element-wise classification loss. - Shape: (num_anchors, num_classes) - reg_loss (Tensor): Element-wise regression loss. - Shape: (num_anchors, 4) - assigned_gt_inds (Tensor): The gt indices that each anchor bbox - is assigned to. -1 denotes a negative anchor, otherwise it is the - gt index (0-based). Shape: (num_anchors, ), - labels (Tensor): Label assigned to anchors. Shape: (num_anchors, ). - level (int): The current level index in the pyramid - (0-4 for RetinaNet) - min_levels (Tensor): The best-matching level for each gt. - Shape: (num_gts, ), - - Returns: - tuple: - - cls_loss: Reduced corrected classification loss. Scalar. - - reg_loss: Reduced corrected regression loss. Scalar. - - pos_flags (Tensor): Corrected bool tensor indicating the - final positive anchors. Shape: (num_anchors, ). - """ - loc_weight = torch.ones_like(reg_loss) - cls_weight = torch.ones_like(cls_loss) - pos_flags = assigned_gt_inds >= 0 # positive pixel flag - pos_indices = torch.nonzero(pos_flags, as_tuple=False).flatten() - - if pos_flags.any(): # pos pixels exist - pos_assigned_gt_inds = assigned_gt_inds[pos_flags] - zeroing_indices = (min_levels[pos_assigned_gt_inds] != level) - neg_indices = pos_indices[zeroing_indices] - - if neg_indices.numel(): - pos_flags[neg_indices] = 0 - loc_weight[neg_indices] = 0 - # Only the weight corresponding to the label is - # zeroed out if not selected - zeroing_labels = labels[neg_indices] - assert (zeroing_labels >= 0).all() - cls_weight[neg_indices, zeroing_labels] = 0 - - # Weighted loss for both cls and reg loss - cls_loss = weight_reduce_loss(cls_loss, cls_weight, reduction='sum') - reg_loss = weight_reduce_loss(reg_loss, loc_weight, reduction='sum') - - return cls_loss, reg_loss, pos_flags diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ga_retina_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ga_retina_head.py deleted file mode 100644 index 6d9e874c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ga_retina_head.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import MaskedConv2d - -from ..builder import HEADS -from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead - - -@HEADS.register_module() -class GARetinaHead(GuidedAnchorHead): - """Guided-Anchor-based RetinaNet head.""" - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - init_cfg=None, - **kwargs): - if init_cfg is None: - init_cfg = dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=[ - dict( - type='Normal', - name='conv_loc', - std=0.01, - bias_prob=0.01), - dict( - type='Normal', - name='retina_cls', - std=0.01, - bias_prob=0.01) - ]) - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(GARetinaHead, self).__init__( - num_classes, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - - self.conv_loc = nn.Conv2d(self.feat_channels, 1, 1) - self.conv_shape = nn.Conv2d(self.feat_channels, self.num_anchors * 2, - 1) - self.feature_adaption_cls = FeatureAdaption( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.feature_adaption_reg = FeatureAdaption( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.retina_cls = MaskedConv2d( - self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 3, - padding=1) - self.retina_reg = MaskedConv2d( - self.feat_channels, self.num_base_priors * 4, 3, padding=1) - - def forward_single(self, x): - """Forward feature map of a single scale level.""" - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - - loc_pred = self.conv_loc(cls_feat) - shape_pred = self.conv_shape(reg_feat) - - cls_feat = self.feature_adaption_cls(cls_feat, shape_pred) - reg_feat = self.feature_adaption_reg(reg_feat, shape_pred) - - if not self.training: - mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr - else: - mask = None - cls_score = self.retina_cls(cls_feat, mask) - bbox_pred = self.retina_reg(reg_feat, mask) - return cls_score, bbox_pred, shape_pred, loc_pred diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ga_rpn_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ga_rpn_head.py deleted file mode 100644 index 4123c8b3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ga_rpn_head.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv import ConfigDict -from mmcv.ops import nms - -from ..builder import HEADS -from .guided_anchor_head import GuidedAnchorHead - - -@HEADS.register_module() -class GARPNHead(GuidedAnchorHead): - """Guided-Anchor-based RPN head.""" - - def __init__(self, - in_channels, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='conv_loc', - std=0.01, - bias_prob=0.01)), - **kwargs): - super(GARPNHead, self).__init__( - 1, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.rpn_conv = nn.Conv2d( - self.in_channels, self.feat_channels, 3, padding=1) - super(GARPNHead, self)._init_layers() - - def forward_single(self, x): - """Forward feature of a single scale level.""" - - x = self.rpn_conv(x) - x = F.relu(x, inplace=True) - (cls_score, bbox_pred, shape_pred, - loc_pred) = super(GARPNHead, self).forward_single(x) - return cls_score, bbox_pred, shape_pred, loc_pred - - def loss(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - img_metas, - gt_bboxes_ignore=None): - losses = super(GARPNHead, self).loss( - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - None, - img_metas, - gt_bboxes_ignore=gt_bboxes_ignore) - return dict( - loss_rpn_cls=losses['loss_cls'], - loss_rpn_bbox=losses['loss_bbox'], - loss_anchor_shape=losses['loss_shape'], - loss_anchor_loc=losses['loss_loc']) - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - mlvl_masks, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - - cfg = copy.deepcopy(cfg) - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You ' \ - f'set max_num and max_per_img at the same time, ' \ - f'but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - 'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set ' \ - f'iou_threshold in nms and ' \ - f'nms_thr at the same time, but get ' \ - f'{cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the ' \ - f'nms_thr which will be deprecated.' - - assert cfg.nms.get('type', 'nms') == 'nms', 'GARPNHead only support ' \ - 'naive nms.' - - mlvl_proposals = [] - for idx in range(len(cls_scores)): - rpn_cls_score = cls_scores[idx] - rpn_bbox_pred = bbox_preds[idx] - anchors = mlvl_anchors[idx] - mask = mlvl_masks[idx] - assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:] - # if no location is kept, end. 
- if mask.sum() == 0: - continue - rpn_cls_score = rpn_cls_score.permute(1, 2, 0) - if self.use_sigmoid_cls: - rpn_cls_score = rpn_cls_score.reshape(-1) - scores = rpn_cls_score.sigmoid() - else: - rpn_cls_score = rpn_cls_score.reshape(-1, 2) - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - scores = rpn_cls_score.softmax(dim=1)[:, :-1] - # filter scores, bbox_pred w.r.t. mask. - # anchors are filtered in get_anchors() beforehand. - scores = scores[mask] - rpn_bbox_pred = rpn_bbox_pred.permute(1, 2, 0).reshape(-1, - 4)[mask, :] - if scores.dim() == 0: - rpn_bbox_pred = rpn_bbox_pred.unsqueeze(0) - anchors = anchors.unsqueeze(0) - scores = scores.unsqueeze(0) - # filter anchors, bbox_pred, scores w.r.t. scores - if cfg.nms_pre > 0 and scores.shape[0] > cfg.nms_pre: - _, topk_inds = scores.topk(cfg.nms_pre) - rpn_bbox_pred = rpn_bbox_pred[topk_inds, :] - anchors = anchors[topk_inds, :] - scores = scores[topk_inds] - # get proposals w.r.t. anchors and rpn_bbox_pred - proposals = self.bbox_coder.decode( - anchors, rpn_bbox_pred, max_shape=img_shape) - # filter out too small bboxes - if cfg.min_bbox_size >= 0: - w = proposals[:, 2] - proposals[:, 0] - h = proposals[:, 3] - proposals[:, 1] - valid_mask = (w > cfg.min_bbox_size) & (h > cfg.min_bbox_size) - if not valid_mask.all(): - proposals = proposals[valid_mask] - scores = scores[valid_mask] - - # NMS in current level - proposals, _ = nms(proposals, scores, cfg.nms.iou_threshold) - proposals = proposals[:cfg.nms_post, :] - mlvl_proposals.append(proposals) - proposals = torch.cat(mlvl_proposals, 0) - if cfg.get('nms_across_levels', False): - # NMS across multi levels - proposals, _ = nms(proposals[:, :4], proposals[:, -1], - cfg.nms.iou_threshold) - proposals = proposals[:cfg.max_per_img, :] - else: - scores = proposals[:, 4] - num = min(cfg.max_per_img, proposals.shape[0]) - _, topk_inds = scores.topk(num) - proposals = proposals[topk_inds, :] - return proposals diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/gfl_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/gfl_head.py deleted file mode 100644 index 12eb89db..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/gfl_head.py +++ /dev/null @@ -1,648 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, Scale -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, bbox_overlaps, build_assigner, - build_sampler, images_to_levels, multi_apply, - reduce_mean, unmap) -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -class Integral(nn.Module): - """A fixed layer for calculating integral result from distribution. - - This layer calculates the target location by :math: `sum{P(y_i) * y_i}`, - P(y_i) denotes the softmax vector that represents the discrete distribution - y_i denotes the discrete set, usually {0, 1, 2, ..., reg_max} - - Args: - reg_max (int): The maximal value of the discrete set. Default: 16. You - may want to reset it according to your new dataset or related - settings. 
- """ - - def __init__(self, reg_max=16): - super(Integral, self).__init__() - self.reg_max = reg_max - self.register_buffer('project', - torch.linspace(0, self.reg_max, self.reg_max + 1)) - - def forward(self, x): - """Forward feature from the regression head to get integral result of - bounding box location. - - Args: - x (Tensor): Features of the regression head, shape (N, 4*(n+1)), - n is self.reg_max. - - Returns: - x (Tensor): Integral result of box locations, i.e., distance - offsets from the box center in four directions, shape (N, 4). - """ - x = F.softmax(x.reshape(-1, self.reg_max + 1), dim=1) - x = F.linear(x, self.project.type_as(x)).reshape(-1, 4) - return x - - -@HEADS.register_module() -class GFLHead(AnchorHead): - """Generalized Focal Loss: Learning Qualified and Distributed Bounding - Boxes for Dense Object Detection. - - GFL head structure is similar with ATSS, however GFL uses - 1) joint representation for classification and localization quality, and - 2) flexible General distribution for bounding box locations, - which are supervised by - Quality Focal Loss (QFL) and Distribution Focal Loss (DFL), respectively - - https://arxiv.org/abs/2006.04388 - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of conv layers in cls and reg tower. - Default: 4. - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='GN', num_groups=32, requires_grad=True). - loss_qfl (dict): Config of Quality Focal Loss (QFL). - bbox_coder (dict): Config of bbox coder. Defaults - 'DistancePointBBoxCoder'. - reg_max (int): Max value of integral set :math: `{0, ..., reg_max}` - in QFL setting. Default: 16. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Example: - >>> self = GFLHead(11, 7) - >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]] - >>> cls_quality_score, bbox_pred = self.forward(feats) - >>> assert len(cls_quality_score) == len(self.scales) - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25), - bbox_coder=dict(type='DistancePointBBoxCoder'), - reg_max=16, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='gfl_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.reg_max = reg_max - super(GFLHead, self).__init__( - num_classes, - in_channels, - bbox_coder=bbox_coder, - init_cfg=init_cfg, - **kwargs) - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.integral = Integral(self.reg_max) - self.loss_dfl = build_loss(loss_dfl) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - assert self.num_anchors == 1, 'anchor free version' - self.gfl_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.gfl_reg = nn.Conv2d( - self.feat_channels, 4 * (self.reg_max + 1), 3, padding=1) - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.prior_generator.strides]) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification and quality (IoU) - joint scores for all scale levels, each is a 4D-tensor, - the channel number is num_classes. - bbox_preds (list[Tensor]): Box distribution logits for all - scale levels, each is a 4D-tensor, the channel number is - 4*(n+1), n is max value of integral set. - """ - return multi_apply(self.forward_single, feats, self.scales) - - def forward_single(self, x, scale): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - - Returns: - tuple: - cls_score (Tensor): Cls and quality joint scores for a single - scale level the channel number is num_classes. - bbox_pred (Tensor): Box distribution logits for a single scale - level, the channel number is 4*(n+1), n is max value of - integral set. 
- """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.gfl_cls(cls_feat) - bbox_pred = scale(self.gfl_reg(reg_feat)).float() - return cls_score, bbox_pred - - def anchor_center(self, anchors): - """Get anchor centers from anchors. - - Args: - anchors (Tensor): Anchor list with shape (N, 4), "xyxy" format. - - Returns: - Tensor: Anchor centers with shape (N, 2), "xy" format. - """ - anchors_cx = (anchors[..., 2] + anchors[..., 0]) / 2 - anchors_cy = (anchors[..., 3] + anchors[..., 1]) / 2 - return torch.stack([anchors_cx, anchors_cy], dim=-1) - - def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights, - bbox_targets, stride, num_total_samples): - """Compute loss of a single scale level. - - Args: - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - cls_score (Tensor): Cls and quality joint scores for each scale - level has shape (N, num_classes, H, W). - bbox_pred (Tensor): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - stride (tuple): Stride in this scale level. - num_total_samples (int): Number of positive samples that is - reduced over all GPUs. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert stride[0] == stride[1], 'h stride is not equal to w stride!' - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, 4 * (self.reg_max + 1)) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - score = label_weights.new_zeros(labels.shape) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0] - - weight_targets = cls_score.detach().sigmoid() - weight_targets = weight_targets.max(dim=1)[0][pos_inds] - pos_bbox_pred_corners = self.integral(pos_bbox_pred) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchor_centers, pos_bbox_pred_corners) - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - score[pos_inds] = bbox_overlaps( - pos_decode_bbox_pred.detach(), - pos_decode_bbox_targets, - is_aligned=True) - pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1) - target_corners = self.bbox_coder.encode(pos_anchor_centers, - pos_decode_bbox_targets, - self.reg_max).reshape(-1) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=weight_targets, - avg_factor=1.0) - - # dfl loss - loss_dfl = self.loss_dfl( - pred_corners, - target_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - else: - loss_bbox = bbox_pred.sum() * 0 - loss_dfl = bbox_pred.sum() * 0 - weight_targets = bbox_pred.new_tensor(0) - - # cls (qfl) loss - loss_cls = 
self.loss_cls( - cls_score, (labels, score), - weight=label_weights, - avg_factor=num_total_samples) - - return loss_cls, loss_bbox, loss_dfl, weight_targets.sum() - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Cls and quality scores for each scale - level has shape (N, num_classes, H, W). - bbox_preds (list[Tensor]): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, losses_dfl,\ - avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - self.prior_generator.strides, - num_total_samples=num_total_samples) - - avg_factor = sum(avg_factor) - avg_factor = reduce_mean(avg_factor).clamp_(min=1).item() - losses_bbox = list(map(lambda x: x / avg_factor, losses_bbox)) - losses_dfl = list(map(lambda x: x / avg_factor, losses_dfl)) - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_dfl=losses_dfl) - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. GFL head does not need this value. - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. 
- with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - cfg = self.test_cfg if cfg is None else cfg - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for level_idx, (cls_score, bbox_pred, stride, priors) in enumerate( - zip(cls_score_list, bbox_pred_list, - self.prior_generator.strides, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - assert stride[0] == stride[1] - - bbox_pred = bbox_pred.permute(1, 2, 0) - bbox_pred = self.integral(bbox_pred) * stride[0] - - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, _, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - bboxes = self.bbox_coder.decode( - self.anchor_center(priors), bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process( - mlvl_scores, - mlvl_labels, - mlvl_bboxes, - img_meta['scale_factor'], - cfg, - rescale=rescale, - with_nms=with_nms) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Get targets for GFL head. - - This method is almost the same as `AnchorHead.get_targets()`. Besides - returning the targets as the parent method does, it also returns the - anchors as the first element of the returned tuple. 
- """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, bbox_weights_list, num_total_pos, - num_total_neg) - - def _get_target_single(self, - flat_anchors, - valid_flags, - num_level_anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression, classification targets for anchors in a single - image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors, 4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - num_level_anchors Tensor): Number of anchors of each scale level. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - anchors (Tensor): All anchors in the image with shape (N, 4). - labels (Tensor): Labels of all anchors in the image with shape - (N,). - label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). - bbox_weights (Tensor): BBox weights of all anchors in the - image with shape (N, 4). - pos_inds (Tensor): Indices of positive anchor with shape - (num_pos,). - neg_inds (Tensor): Indices of negative anchor with shape - (num_neg,). 
- """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - num_level_anchors_inside = self.get_num_level_anchors_inside( - num_level_anchors, inside_flags) - assign_result = self.assigner.assign(anchors, num_level_anchors_inside, - gt_bboxes, gt_bboxes_ignore, - gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (anchors, labels, label_weights, bbox_targets, bbox_weights, - pos_inds, neg_inds) - - def get_num_level_anchors_inside(self, num_level_anchors, inside_flags): - split_inside_flags = torch.split(inside_flags, num_level_anchors) - num_level_anchors_inside = [ - int(flags.sum()) for flags in split_inside_flags - ] - return num_level_anchors_inside diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/guided_anchor_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/guided_anchor_head.py deleted file mode 100644 index 53e8cd8a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/guided_anchor_head.py +++ /dev/null @@ -1,868 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.ops import DeformConv2d, MaskedConv2d -from mmcv.runner import BaseModule, force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, calc_region, - images_to_levels, multi_apply, multiclass_nms, unmap) -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -class FeatureAdaption(BaseModule): - """Feature Adaption Module. - - Feature Adaption Module is implemented based on DCN v1. - It uses anchor shape prediction rather than feature map to - predict offsets of deform conv layer. - - Args: - in_channels (int): Number of channels in the input feature map. - out_channels (int): Number of channels in the output feature map. - kernel_size (int): Deformable conv kernel size. 
- deform_groups (int): Deformable conv group size. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - deform_groups=4, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.1, - override=dict( - type='Normal', name='conv_adaption', std=0.01))): - super(FeatureAdaption, self).__init__(init_cfg) - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 2, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x, shape): - offset = self.conv_offset(shape.detach()) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@HEADS.register_module() -class GuidedAnchorHead(AnchorHead): - """Guided-Anchor-based head (GA-RPN, GA-RetinaNet, etc.). - - This GuidedAnchorHead will predict high-quality feature guided - anchors and locations where anchors will be kept in inference. - There are mainly 3 categories of bounding-boxes. - - - Sampled 9 pairs for target assignment. (approxes) - - The square boxes where the predicted anchors are based on. (squares) - - Guided anchors. - - Please refer to https://arxiv.org/abs/1901.03278 for more details. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. - approx_anchor_generator (dict): Config dict for approx generator - square_anchor_generator (dict): Config dict for square generator - anchor_coder (dict): Config dict for anchor coder - bbox_coder (dict): Config dict for bbox coder - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - deform_groups: (int): Group number of DCN in - FeatureAdaption module. - loc_filter_thr (float): Threshold to filter out unconcerned regions. - loss_loc (dict): Config of location loss. - loss_shape (dict): Config of anchor shape loss. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of bbox regression loss. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__( - self, - num_classes, - in_channels, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=8, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[8], - strides=[4, 8, 16, 32, 64]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0] - ), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0] - ), - reg_decoded_bbox=False, - deform_groups=4, - loc_filter_thr=0.01, - train_cfg=None, - test_cfg=None, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0), - init_cfg=dict(type='Normal', layer='Conv2d', std=0.01, - override=dict(type='Normal', - name='conv_loc', - std=0.01, - bias_prob=0.01))): # yapf: disable - super(AnchorHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.deform_groups = deform_groups - self.loc_filter_thr = loc_filter_thr - - # build approx_anchor_generator and square_anchor_generator - assert (approx_anchor_generator['octave_base_scale'] == - square_anchor_generator['scales'][0]) - assert (approx_anchor_generator['strides'] == - square_anchor_generator['strides']) - self.approx_anchor_generator = build_prior_generator( - approx_anchor_generator) - self.square_anchor_generator = build_prior_generator( - square_anchor_generator) - self.approxs_per_octave = self.approx_anchor_generator \ - .num_base_priors[0] - - self.reg_decoded_bbox = reg_decoded_bbox - - # one anchor per location - self.num_base_priors = self.square_anchor_generator.num_base_priors[0] - - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - self.loc_focal_loss = loss_loc['type'] in ['FocalLoss'] - self.sampling = loss_cls['type'] not in ['FocalLoss'] - self.ga_sampling = train_cfg is not None and hasattr( - train_cfg, 'ga_sampler') - if self.use_sigmoid_cls: - self.cls_out_channels = self.num_classes - else: - self.cls_out_channels = self.num_classes + 1 - - # build bbox_coder - self.anchor_coder = build_bbox_coder(anchor_coder) - self.bbox_coder = build_bbox_coder(bbox_coder) - - # build losses - self.loss_loc = build_loss(loss_loc) - self.loss_shape = build_loss(loss_shape) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.ga_assigner = build_assigner(self.train_cfg.ga_assigner) - if self.ga_sampling: - ga_sampler_cfg = self.train_cfg.ga_sampler - else: - ga_sampler_cfg = dict(type='PseudoSampler') - self.ga_sampler = build_sampler(ga_sampler_cfg, context=self) - - self.fp16_enabled = False - - self._init_layers() - - @property - def num_anchors(self): - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 
'please use "num_base_priors" instead') - return self.square_anchor_generator.num_base_priors[0] - - def _init_layers(self): - self.relu = nn.ReLU(inplace=True) - self.conv_loc = nn.Conv2d(self.in_channels, 1, 1) - self.conv_shape = nn.Conv2d(self.in_channels, self.num_base_priors * 2, - 1) - self.feature_adaption = FeatureAdaption( - self.in_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.conv_cls = MaskedConv2d( - self.feat_channels, self.num_base_priors * self.cls_out_channels, - 1) - self.conv_reg = MaskedConv2d(self.feat_channels, - self.num_base_priors * 4, 1) - - def forward_single(self, x): - loc_pred = self.conv_loc(x) - shape_pred = self.conv_shape(x) - x = self.feature_adaption(x, shape_pred) - # masked conv is only used during inference for speed-up - if not self.training: - mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr - else: - mask = None - cls_score = self.conv_cls(x, mask) - bbox_pred = self.conv_reg(x, mask) - return cls_score, bbox_pred, shape_pred, loc_pred - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def get_sampled_approxs(self, featmap_sizes, img_metas, device='cuda'): - """Get sampled approxs and inside flags according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): device for returned tensors - - Returns: - tuple: approxes of each image, inside flags of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # approxes for one time - multi_level_approxs = self.approx_anchor_generator.grid_priors( - featmap_sizes, device=device) - approxs_list = [multi_level_approxs for _ in range(num_imgs)] - - # for each image, we compute inside flags of multi level approxes - inside_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = [] - multi_level_approxs = approxs_list[img_id] - - # obtain valid flags for each approx first - multi_level_approx_flags = self.approx_anchor_generator \ - .valid_flags(featmap_sizes, - img_meta['pad_shape'], - device=device) - - for i, flags in enumerate(multi_level_approx_flags): - approxs = multi_level_approxs[i] - inside_flags_list = [] - for i in range(self.approxs_per_octave): - split_valid_flags = flags[i::self.approxs_per_octave] - split_approxs = approxs[i::self.approxs_per_octave, :] - inside_flags = anchor_inside_flags( - split_approxs, split_valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - inside_flags_list.append(inside_flags) - # inside_flag for a position is true if any anchor in this - # position is true - inside_flags = ( - torch.stack(inside_flags_list, 0).sum(dim=0) > 0) - multi_level_flags.append(inside_flags) - inside_flag_list.append(multi_level_flags) - return approxs_list, inside_flag_list - - def get_anchors(self, - featmap_sizes, - shape_preds, - loc_preds, - img_metas, - use_loc_filter=False, - device='cuda'): - """Get squares according to feature map sizes and guided anchors. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - shape_preds (list[tensor]): Multi-level shape predictions. - loc_preds (list[tensor]): Multi-level location predictions. - img_metas (list[dict]): Image meta info. - use_loc_filter (bool): Use loc filter or not. 
- device (torch.device | str): device for returned tensors - - Returns: - tuple: square approxs of each image, guided anchors of each image, - loc masks of each image - """ - num_imgs = len(img_metas) - num_levels = len(featmap_sizes) - - # since feature map sizes of all images are the same, we only compute - # squares for one time - multi_level_squares = self.square_anchor_generator.grid_priors( - featmap_sizes, device=device) - squares_list = [multi_level_squares for _ in range(num_imgs)] - - # for each image, we compute multi level guided anchors - guided_anchors_list = [] - loc_mask_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_guided_anchors = [] - multi_level_loc_mask = [] - for i in range(num_levels): - squares = squares_list[img_id][i] - shape_pred = shape_preds[i][img_id] - loc_pred = loc_preds[i][img_id] - guided_anchors, loc_mask = self._get_guided_anchors_single( - squares, - shape_pred, - loc_pred, - use_loc_filter=use_loc_filter) - multi_level_guided_anchors.append(guided_anchors) - multi_level_loc_mask.append(loc_mask) - guided_anchors_list.append(multi_level_guided_anchors) - loc_mask_list.append(multi_level_loc_mask) - return squares_list, guided_anchors_list, loc_mask_list - - def _get_guided_anchors_single(self, - squares, - shape_pred, - loc_pred, - use_loc_filter=False): - """Get guided anchors and loc masks for a single level. - - Args: - square (tensor): Squares of a single level. - shape_pred (tensor): Shape predictions of a single level. - loc_pred (tensor): Loc predictions of a single level. - use_loc_filter (list[tensor]): Use loc filter or not. - - Returns: - tuple: guided anchors, location masks - """ - # calculate location filtering mask - loc_pred = loc_pred.sigmoid().detach() - if use_loc_filter: - loc_mask = loc_pred >= self.loc_filter_thr - else: - loc_mask = loc_pred >= 0.0 - mask = loc_mask.permute(1, 2, 0).expand(-1, -1, self.num_base_priors) - mask = mask.contiguous().view(-1) - # calculate guided anchors - squares = squares[mask] - anchor_deltas = shape_pred.permute(1, 2, 0).contiguous().view( - -1, 2).detach()[mask] - bbox_deltas = anchor_deltas.new_full(squares.size(), 0) - bbox_deltas[:, 2:] = anchor_deltas - guided_anchors = self.anchor_coder.decode( - squares, bbox_deltas, wh_ratio_clip=1e-6) - return guided_anchors, mask - - def ga_loc_targets(self, gt_bboxes_list, featmap_sizes): - """Compute location targets for guided anchoring. - - Each feature map is divided into positive, negative and ignore regions. - - positive regions: target 1, weight 1 - - ignore regions: target 0, weight 0 - - negative regions: target 0, weight 0.1 - - Args: - gt_bboxes_list (list[Tensor]): Gt bboxes of each image. - featmap_sizes (list[tuple]): Multi level sizes of each feature - maps. - - Returns: - tuple - """ - anchor_scale = self.approx_anchor_generator.octave_base_scale - anchor_strides = self.approx_anchor_generator.strides - # Currently only supports same stride in x and y direction. 
- for stride in anchor_strides: - assert (stride[0] == stride[1]) - anchor_strides = [stride[0] for stride in anchor_strides] - - center_ratio = self.train_cfg.center_ratio - ignore_ratio = self.train_cfg.ignore_ratio - img_per_gpu = len(gt_bboxes_list) - num_lvls = len(featmap_sizes) - r1 = (1 - center_ratio) / 2 - r2 = (1 - ignore_ratio) / 2 - all_loc_targets = [] - all_loc_weights = [] - all_ignore_map = [] - for lvl_id in range(num_lvls): - h, w = featmap_sizes[lvl_id] - loc_targets = torch.zeros( - img_per_gpu, - 1, - h, - w, - device=gt_bboxes_list[0].device, - dtype=torch.float32) - loc_weights = torch.full_like(loc_targets, -1) - ignore_map = torch.zeros_like(loc_targets) - all_loc_targets.append(loc_targets) - all_loc_weights.append(loc_weights) - all_ignore_map.append(ignore_map) - for img_id in range(img_per_gpu): - gt_bboxes = gt_bboxes_list[img_id] - scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - min_anchor_size = scale.new_full( - (1, ), float(anchor_scale * anchor_strides[0])) - # assign gt bboxes to different feature levels w.r.t. their scales - target_lvls = torch.floor( - torch.log2(scale) - torch.log2(min_anchor_size) + 0.5) - target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long() - for gt_id in range(gt_bboxes.size(0)): - lvl = target_lvls[gt_id].item() - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[lvl] - # calculate ignore regions - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[lvl]) - # calculate positive (center) regions - ctr_x1, ctr_y1, ctr_x2, ctr_y2 = calc_region( - gt_, r1, featmap_sizes[lvl]) - all_loc_targets[lvl][img_id, 0, ctr_y1:ctr_y2 + 1, - ctr_x1:ctr_x2 + 1] = 1 - all_loc_weights[lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 0 - all_loc_weights[lvl][img_id, 0, ctr_y1:ctr_y2 + 1, - ctr_x1:ctr_x2 + 1] = 1 - # calculate ignore map on nearby low level feature - if lvl > 0: - d_lvl = lvl - 1 - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[d_lvl] - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[d_lvl]) - all_ignore_map[d_lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 1 - # calculate ignore map on nearby high level feature - if lvl < num_lvls - 1: - u_lvl = lvl + 1 - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[u_lvl] - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[u_lvl]) - all_ignore_map[u_lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 1 - for lvl_id in range(num_lvls): - # ignore negative regions w.r.t. ignore map - all_loc_weights[lvl_id][(all_loc_weights[lvl_id] < 0) - & (all_ignore_map[lvl_id] > 0)] = 0 - # set negative regions with weight 0.1 - all_loc_weights[lvl_id][all_loc_weights[lvl_id] < 0] = 0.1 - # loc average factor to balance loss - loc_avg_factor = sum( - [t.size(0) * t.size(-1) * t.size(-2) - for t in all_loc_targets]) / 200 - return all_loc_targets, all_loc_weights, loc_avg_factor - - def _ga_shape_target_single(self, - flat_approxs, - inside_flags, - flat_squares, - gt_bboxes, - gt_bboxes_ignore, - img_meta, - unmap_outputs=True): - """Compute guided anchoring targets. - - This function returns sampled anchors and gt bboxes directly - rather than calculates regression targets. 
- - Args: - flat_approxs (Tensor): flat approxs of a single image, - shape (n, 4) - inside_flags (Tensor): inside flags of a single image, - shape (n, ). - flat_squares (Tensor): flat squares of a single image, - shape (approxs_per_octave * n, 4) - gt_bboxes (Tensor): Ground truth bboxes of a single image. - img_meta (dict): Meta info of a single image. - approxs_per_octave (int): number of approxs per octave - cfg (dict): RPN train configs. - unmap_outputs (bool): unmap outputs or not. - - Returns: - tuple - """ - if not inside_flags.any(): - return (None, ) * 5 - # assign gt and sample anchors - expand_inside_flags = inside_flags[:, None].expand( - -1, self.approxs_per_octave).reshape(-1) - approxs = flat_approxs[expand_inside_flags, :] - squares = flat_squares[inside_flags, :] - - assign_result = self.ga_assigner.assign(approxs, squares, - self.approxs_per_octave, - gt_bboxes, gt_bboxes_ignore) - sampling_result = self.ga_sampler.sample(assign_result, squares, - gt_bboxes) - - bbox_anchors = torch.zeros_like(squares) - bbox_gts = torch.zeros_like(squares) - bbox_weights = torch.zeros_like(squares) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - bbox_anchors[pos_inds, :] = sampling_result.pos_bboxes - bbox_gts[pos_inds, :] = sampling_result.pos_gt_bboxes - bbox_weights[pos_inds, :] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_squares.size(0) - bbox_anchors = unmap(bbox_anchors, num_total_anchors, inside_flags) - bbox_gts = unmap(bbox_gts, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (bbox_anchors, bbox_gts, bbox_weights, pos_inds, neg_inds) - - def ga_shape_targets(self, - approx_list, - inside_flag_list, - square_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - unmap_outputs=True): - """Compute guided anchoring targets. - - Args: - approx_list (list[list]): Multi level approxs of each image. - inside_flag_list (list[list]): Multi level inside flags of each - image. - square_list (list[list]): Multi level squares of each image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes. - unmap_outputs (bool): unmap outputs or not. 
- - Returns: - tuple - """ - num_imgs = len(img_metas) - assert len(approx_list) == len(inside_flag_list) == len( - square_list) == num_imgs - # anchor number of multi levels - num_level_squares = [squares.size(0) for squares in square_list[0]] - # concat all level anchors and flags to a single tensor - inside_flag_flat_list = [] - approx_flat_list = [] - square_flat_list = [] - for i in range(num_imgs): - assert len(square_list[i]) == len(inside_flag_list[i]) - inside_flag_flat_list.append(torch.cat(inside_flag_list[i])) - approx_flat_list.append(torch.cat(approx_list[i])) - square_flat_list.append(torch.cat(square_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - (all_bbox_anchors, all_bbox_gts, all_bbox_weights, pos_inds_list, - neg_inds_list) = multi_apply( - self._ga_shape_target_single, - approx_flat_list, - inside_flag_flat_list, - square_flat_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - img_metas, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([bbox_anchors is None for bbox_anchors in all_bbox_anchors]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - bbox_anchors_list = images_to_levels(all_bbox_anchors, - num_level_squares) - bbox_gts_list = images_to_levels(all_bbox_gts, num_level_squares) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_squares) - return (bbox_anchors_list, bbox_gts_list, bbox_weights_list, - num_total_pos, num_total_neg) - - def loss_shape_single(self, shape_pred, bbox_anchors, bbox_gts, - anchor_weights, anchor_total_num): - shape_pred = shape_pred.permute(0, 2, 3, 1).contiguous().view(-1, 2) - bbox_anchors = bbox_anchors.contiguous().view(-1, 4) - bbox_gts = bbox_gts.contiguous().view(-1, 4) - anchor_weights = anchor_weights.contiguous().view(-1, 4) - bbox_deltas = bbox_anchors.new_full(bbox_anchors.size(), 0) - bbox_deltas[:, 2:] += shape_pred - # filter out negative samples to speed-up weighted_bounded_iou_loss - inds = torch.nonzero( - anchor_weights[:, 0] > 0, as_tuple=False).squeeze(1) - bbox_deltas_ = bbox_deltas[inds] - bbox_anchors_ = bbox_anchors[inds] - bbox_gts_ = bbox_gts[inds] - anchor_weights_ = anchor_weights[inds] - pred_anchors_ = self.anchor_coder.decode( - bbox_anchors_, bbox_deltas_, wh_ratio_clip=1e-6) - loss_shape = self.loss_shape( - pred_anchors_, - bbox_gts_, - anchor_weights_, - avg_factor=anchor_total_num) - return loss_shape - - def loss_loc_single(self, loc_pred, loc_target, loc_weight, - loc_avg_factor): - loss_loc = self.loss_loc( - loc_pred.reshape(-1, 1), - loc_target.reshape(-1).long(), - loc_weight.reshape(-1), - avg_factor=loc_avg_factor) - return loss_loc - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds')) - def loss(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.approx_anchor_generator.num_levels - - device = cls_scores[0].device - - # get loc targets - loc_targets, loc_weights, loc_avg_factor = self.ga_loc_targets( - gt_bboxes, featmap_sizes) - - # get sampled approxes - approxs_list, inside_flag_list = self.get_sampled_approxs( - featmap_sizes, img_metas, device=device) - # get 
squares and guided anchors - squares_list, guided_anchors_list, _ = self.get_anchors( - featmap_sizes, shape_preds, loc_preds, img_metas, device=device) - - # get shape targets - shape_targets = self.ga_shape_targets(approxs_list, inside_flag_list, - squares_list, gt_bboxes, - img_metas) - if shape_targets is None: - return None - (bbox_anchors_list, bbox_gts_list, anchor_weights_list, anchor_fg_num, - anchor_bg_num) = shape_targets - anchor_total_num = ( - anchor_fg_num if not self.ga_sampling else anchor_fg_num + - anchor_bg_num) - - # get anchor targets - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - guided_anchors_list, - inside_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [ - anchors.size(0) for anchors in guided_anchors_list[0] - ] - # concat all level anchors to a single tensor - concat_anchor_list = [] - for i in range(len(guided_anchors_list)): - concat_anchor_list.append(torch.cat(guided_anchors_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - # get classification and bbox regression losses - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - # get anchor location loss - losses_loc = [] - for i in range(len(loc_preds)): - loss_loc = self.loss_loc_single( - loc_preds[i], - loc_targets[i], - loc_weights[i], - loc_avg_factor=loc_avg_factor) - losses_loc.append(loss_loc) - - # get anchor shape loss - losses_shape = [] - for i in range(len(shape_preds)): - loss_shape = self.loss_shape_single( - shape_preds[i], - bbox_anchors_list[i], - bbox_gts_list[i], - anchor_weights_list[i], - anchor_total_num=anchor_total_num) - losses_shape.append(loss_shape) - - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_shape=losses_shape, - loss_loc=losses_loc) - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - img_metas, - cfg=None, - rescale=False): - assert len(cls_scores) == len(bbox_preds) == len(shape_preds) == len( - loc_preds) - num_levels = len(cls_scores) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - device = cls_scores[0].device - # get guided anchors - _, guided_anchors, loc_masks = self.get_anchors( - featmap_sizes, - shape_preds, - loc_preds, - img_metas, - use_loc_filter=not self.training, - device=device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - guided_anchor_list = [ - guided_anchors[img_id][i].detach() for i in range(num_levels) - ] - loc_mask_list = [ - loc_masks[img_id][i].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score_list, 
bbox_pred_list, - guided_anchor_list, - loc_mask_list, img_shape, - scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - mlvl_masks, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_scores = [] - for cls_score, bbox_pred, anchors, mask in zip(cls_scores, bbox_preds, - mlvl_anchors, - mlvl_masks): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - # if no location is kept, end. - if mask.sum() == 0: - continue - # reshape scores and bbox_pred - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - # filter scores, bbox_pred w.r.t. mask. - # anchors are filtered in get_anchors() beforehand. - scores = scores[mask, :] - bbox_pred = bbox_pred[mask, :] - if scores.dim() == 0: - anchors = anchors.unsqueeze(0) - scores = scores.unsqueeze(0) - bbox_pred = bbox_pred.unsqueeze(0) - # filter anchors, bbox_pred, scores w.r.t. scores - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - # multi class NMS - det_bboxes, det_labels = multiclass_nms(mlvl_bboxes, mlvl_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - return det_bboxes, det_labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/lad_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/lad_head.py deleted file mode 100644 index 85273bcb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/lad_head.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import bbox_overlaps, multi_apply -from ..builder import HEADS -from .paa_head import PAAHead, levels_to_images - - -@HEADS.register_module() -class LADHead(PAAHead): - """Label Assignment Head from the paper: `Improving Object Detection by - Label Assignment Distillation `_""" - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds')) - def get_label_assignment(self, - cls_scores, - bbox_preds, - iou_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Get label assignment (from teacher). - - Args: - cls_scores (list[Tensor]): Box scores for each scale level. 
- Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - iou_preds (list[Tensor]): iou_preds for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when are computing the loss. - - Returns: - tuple: Returns a tuple containing label assignment variables. - - - labels (Tensor): Labels of all anchors, each with - shape (num_anchors,). - - labels_weight (Tensor): Label weights of all anchor. - each with shape (num_anchors,). - - bboxes_target (Tensor): BBox targets of all anchors. - each with shape (num_anchors, 4). - - bboxes_weight (Tensor): BBox weights of all anchors. - each with shape (num_anchors, 4). - - pos_inds_flatten (Tensor): Contains all index of positive - sample in all anchor. - - pos_anchors (Tensor): Positive anchors. - - num_pos (int): Number of positive anchors. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - ) - (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds, - pos_gt_index) = cls_reg_targets - cls_scores = levels_to_images(cls_scores) - cls_scores = [ - item.reshape(-1, self.cls_out_channels) for item in cls_scores - ] - bbox_preds = levels_to_images(bbox_preds) - bbox_preds = [item.reshape(-1, 4) for item in bbox_preds] - pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list, - cls_scores, bbox_preds, labels, - labels_weight, bboxes_target, - bboxes_weight, pos_inds) - - with torch.no_grad(): - reassign_labels, reassign_label_weight, \ - reassign_bbox_weights, num_pos = multi_apply( - self.paa_reassign, - pos_losses_list, - labels, - labels_weight, - bboxes_weight, - pos_inds, - pos_gt_index, - anchor_list) - num_pos = sum(num_pos) - # convert all tensor list to a flatten tensor - labels = torch.cat(reassign_labels, 0).view(-1) - flatten_anchors = torch.cat( - [torch.cat(item, 0) for item in anchor_list]) - labels_weight = torch.cat(reassign_label_weight, 0).view(-1) - bboxes_target = torch.cat(bboxes_target, - 0).view(-1, bboxes_target[0].size(-1)) - - pos_inds_flatten = ((labels >= 0) - & - (labels < self.num_classes)).nonzero().reshape(-1) - - if num_pos: - pos_anchors = flatten_anchors[pos_inds_flatten] - else: - pos_anchors = None - - label_assignment_results = (labels, labels_weight, bboxes_target, - bboxes_weight, pos_inds_flatten, - pos_anchors, num_pos) - return label_assignment_results - - def forward_train(self, - x, - label_assignment_results, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - **kwargs): - """Forward train with the available label assignment (student receives - from teacher). - - Args: - x (list[Tensor]): Features from FPN. 
- label_assignment_results (tuple): As the outputs defined in the - function `self.get_label_assignment`. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - - Returns: - losses: (dict[str, Tensor]): A dictionary of loss components. - """ - outs = self(x) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss( - *loss_inputs, - gt_bboxes_ignore=gt_bboxes_ignore, - label_assignment_results=label_assignment_results) - return losses - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds')) - def loss(self, - cls_scores, - bbox_preds, - iou_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None, - label_assignment_results=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - iou_preds (list[Tensor]): iou_preds for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when are computing the loss. - label_assignment_results (tuple): As the outputs defined in the - function `self.get_label_assignment`. - - Returns: - dict[str, Tensor]: A dictionary of loss gmm_assignment. 
- """ - - (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds_flatten, - pos_anchors, num_pos) = label_assignment_results - - cls_scores = levels_to_images(cls_scores) - cls_scores = [ - item.reshape(-1, self.cls_out_channels) for item in cls_scores - ] - bbox_preds = levels_to_images(bbox_preds) - bbox_preds = [item.reshape(-1, 4) for item in bbox_preds] - iou_preds = levels_to_images(iou_preds) - iou_preds = [item.reshape(-1, 1) for item in iou_preds] - - # convert all tensor list to a flatten tensor - cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1)) - bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1)) - iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1)) - - losses_cls = self.loss_cls( - cls_scores, - labels, - labels_weight, - avg_factor=max(num_pos, len(img_metas))) # avoid num_pos=0 - if num_pos: - pos_bbox_pred = self.bbox_coder.decode( - pos_anchors, bbox_preds[pos_inds_flatten]) - pos_bbox_target = bboxes_target[pos_inds_flatten] - iou_target = bbox_overlaps( - pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True) - losses_iou = self.loss_centerness( - iou_preds[pos_inds_flatten], - iou_target.unsqueeze(-1), - avg_factor=num_pos) - losses_bbox = self.loss_bbox( - pos_bbox_pred, pos_bbox_target, avg_factor=num_pos) - - else: - losses_iou = iou_preds.sum() * 0 - losses_bbox = bbox_preds.sum() * 0 - - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ld_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ld_head.py deleted file mode 100644 index c5a945fe..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ld_head.py +++ /dev/null @@ -1,261 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import bbox_overlaps, multi_apply, reduce_mean -from ..builder import HEADS, build_loss -from .gfl_head import GFLHead - - -@HEADS.register_module() -class LDHead(GFLHead): - """Localization distillation Head. (Short description) - - It utilizes the learned bbox distributions to transfer the localization - dark knowledge from teacher to student. Original paper: `Localization - Distillation for Object Detection. `_ - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - loss_ld (dict): Config of Localization Distillation Loss (LD), - T is the temperature for distillation. - """ - - def __init__(self, - num_classes, - in_channels, - loss_ld=dict( - type='LocalizationDistillationLoss', - loss_weight=0.25, - T=10), - **kwargs): - - super(LDHead, self).__init__(num_classes, in_channels, **kwargs) - self.loss_ld = build_loss(loss_ld) - - def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights, - bbox_targets, stride, soft_targets, num_total_samples): - """Compute loss of a single scale level. - - Args: - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - cls_score (Tensor): Cls and quality joint scores for each scale - level has shape (N, num_classes, H, W). - bbox_pred (Tensor): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). 
- label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - stride (tuple): Stride in this scale level. - num_total_samples (int): Number of positive samples that is - reduced over all GPUs. - - Returns: - dict[tuple, Tensor]: Loss components and weight targets. - """ - assert stride[0] == stride[1], 'h stride is not equal to w stride!' - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, 4 * (self.reg_max + 1)) - soft_targets = soft_targets.permute(0, 2, 3, - 1).reshape(-1, - 4 * (self.reg_max + 1)) - - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - score = label_weights.new_zeros(labels.shape) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0] - - weight_targets = cls_score.detach().sigmoid() - weight_targets = weight_targets.max(dim=1)[0][pos_inds] - pos_bbox_pred_corners = self.integral(pos_bbox_pred) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchor_centers, pos_bbox_pred_corners) - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - score[pos_inds] = bbox_overlaps( - pos_decode_bbox_pred.detach(), - pos_decode_bbox_targets, - is_aligned=True) - pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1) - pos_soft_targets = soft_targets[pos_inds] - soft_corners = pos_soft_targets.reshape(-1, self.reg_max + 1) - - target_corners = self.bbox_coder.encode(pos_anchor_centers, - pos_decode_bbox_targets, - self.reg_max).reshape(-1) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=weight_targets, - avg_factor=1.0) - - # dfl loss - loss_dfl = self.loss_dfl( - pred_corners, - target_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - - # ld loss - loss_ld = self.loss_ld( - pred_corners, - soft_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - - else: - loss_ld = bbox_pred.sum() * 0 - loss_bbox = bbox_pred.sum() * 0 - loss_dfl = bbox_pred.sum() * 0 - weight_targets = bbox_pred.new_tensor(0) - - # cls (qfl) loss - loss_cls = self.loss_cls( - cls_score, (labels, score), - weight=label_weights, - avg_factor=num_total_samples) - - return loss_cls, loss_bbox, loss_dfl, loss_ld, weight_targets.sum() - - def forward_train(self, - x, - out_teacher, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). 
- proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple[dict, list]: The loss components and proposals of each image. - - - losses (dict[str, Tensor]): A dictionary of loss components. - - proposal_list (list[Tensor]): Proposals of each image. - """ - outs = self(x) - soft_target = out_teacher[1] - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, soft_target, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, soft_target, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg) - return losses, proposal_list - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - soft_target, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Cls and quality scores for each scale - level has shape (N, num_classes, H, W). - bbox_preds (list[Tensor]): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, losses_dfl, losses_ld, \ - avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - self.prior_generator.strides, - soft_target, - num_total_samples=num_total_samples) - - avg_factor = sum(avg_factor) + 1e-6 - avg_factor = reduce_mean(avg_factor).item() - losses_bbox = [x / avg_factor for x in losses_bbox] - losses_dfl = [x / avg_factor for x in losses_dfl] - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_dfl=losses_dfl, - loss_ld=losses_ld) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/mask2former_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/mask2former_head.py deleted file mode 100644 index 78e4d49b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/mask2former_head.py +++ /dev/null @@ -1,430 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, build_plugin_layer, caffe2_xavier_init -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer_sequence) -from mmcv.ops import point_sample -from mmcv.runner import ModuleList - -from mmdet.core import build_assigner, build_sampler, reduce_mean -from mmdet.models.utils import get_uncertain_point_coords_with_randomness -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead -from .maskformer_head import MaskFormerHead - - -@HEADS.register_module() -class Mask2FormerHead(MaskFormerHead): - """Implements the Mask2Former head. - - See `Masked-attention Mask Transformer for Universal Image - Segmentation `_ for details. - - Args: - in_channels (list[int]): Number of channels in the input feature map. - feat_channels (int): Number of channels for features. - out_channels (int): Number of channels for output. - num_things_classes (int): Number of things. - num_stuff_classes (int): Number of stuff. - num_queries (int): Number of query in Transformer decoder. - pixel_decoder (:obj:`mmcv.ConfigDict` | dict): Config for pixel - decoder. Defaults to None. - enforce_decoder_input_project (bool, optional): Whether to add - a layer to change the embed_dim of tranformer encoder in - pixel decoder to the embed_dim of transformer decoder. - Defaults to False. - transformer_decoder (:obj:`mmcv.ConfigDict` | dict): Config for - transformer decoder. Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer decoder position encoding. Defaults to None. - loss_cls (:obj:`mmcv.ConfigDict` | dict): Config of the classification - loss. Defaults to None. - loss_mask (:obj:`mmcv.ConfigDict` | dict): Config of the mask loss. - Defaults to None. - loss_dice (:obj:`mmcv.ConfigDict` | dict): Config of the dice loss. - Defaults to None. - train_cfg (:obj:`mmcv.ConfigDict` | dict): Training config of - Mask2Former head. - test_cfg (:obj:`mmcv.ConfigDict` | dict): Testing config of - Mask2Former head. - init_cfg (dict or list[dict], optional): Initialization config dict. - Defaults to None. 
- """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - num_things_classes=80, - num_stuff_classes=53, - num_queries=100, - num_transformer_feat_level=3, - pixel_decoder=None, - enforce_decoder_input_project=False, - transformer_decoder=None, - positional_encoding=None, - loss_cls=None, - loss_mask=None, - loss_dice=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - **kwargs): - super(AnchorFreeHead, self).__init__(init_cfg) - self.num_things_classes = num_things_classes - self.num_stuff_classes = num_stuff_classes - self.num_classes = self.num_things_classes + self.num_stuff_classes - self.num_queries = num_queries - self.num_transformer_feat_level = num_transformer_feat_level - self.num_heads = transformer_decoder.transformerlayers.\ - attn_cfgs.num_heads - self.num_transformer_decoder_layers = transformer_decoder.num_layers - assert pixel_decoder.encoder.transformerlayers.\ - attn_cfgs.num_levels == num_transformer_feat_level - pixel_decoder_ = copy.deepcopy(pixel_decoder) - pixel_decoder_.update( - in_channels=in_channels, - feat_channels=feat_channels, - out_channels=out_channels) - self.pixel_decoder = build_plugin_layer(pixel_decoder_)[1] - self.transformer_decoder = build_transformer_layer_sequence( - transformer_decoder) - self.decoder_embed_dims = self.transformer_decoder.embed_dims - - self.decoder_input_projs = ModuleList() - # from low resolution to high resolution - for _ in range(num_transformer_feat_level): - if (self.decoder_embed_dims != feat_channels - or enforce_decoder_input_project): - self.decoder_input_projs.append( - Conv2d( - feat_channels, self.decoder_embed_dims, kernel_size=1)) - else: - self.decoder_input_projs.append(nn.Identity()) - self.decoder_positional_encoding = build_positional_encoding( - positional_encoding) - self.query_embed = nn.Embedding(self.num_queries, feat_channels) - self.query_feat = nn.Embedding(self.num_queries, feat_channels) - # from low resolution to high resolution - self.level_embed = nn.Embedding(self.num_transformer_feat_level, - feat_channels) - - self.cls_embed = nn.Linear(feat_channels, self.num_classes + 1) - self.mask_embed = nn.Sequential( - nn.Linear(feat_channels, feat_channels), nn.ReLU(inplace=True), - nn.Linear(feat_channels, feat_channels), nn.ReLU(inplace=True), - nn.Linear(feat_channels, out_channels)) - - self.test_cfg = test_cfg - self.train_cfg = train_cfg - if train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - self.sampler = build_sampler(self.train_cfg.sampler, context=self) - self.num_points = self.train_cfg.get('num_points', 12544) - self.oversample_ratio = self.train_cfg.get('oversample_ratio', 3.0) - self.importance_sample_ratio = self.train_cfg.get( - 'importance_sample_ratio', 0.75) - - self.class_weight = loss_cls.class_weight - self.loss_cls = build_loss(loss_cls) - self.loss_mask = build_loss(loss_mask) - self.loss_dice = build_loss(loss_dice) - - def init_weights(self): - for m in self.decoder_input_projs: - if isinstance(m, Conv2d): - caffe2_xavier_init(m, bias=0) - - self.pixel_decoder.init_weights() - - for p in self.transformer_decoder.parameters(): - if p.dim() > 1: - nn.init.xavier_normal_(p) - - def _get_target_single(self, cls_score, mask_pred, gt_labels, gt_masks, - img_metas): - """Compute classification and mask targets for one image. - - Args: - cls_score (Tensor): Mask score logits from a single decoder layer - for one image. Shape (num_queries, cls_out_channels). 
- mask_pred (Tensor): Mask logits for a single decoder layer for one - image. Shape (num_queries, h, w). - gt_labels (Tensor): Ground truth class indices for one image with - shape (num_gts, ). - gt_masks (Tensor): Ground truth mask for each image, each with - shape (num_gts, h, w). - img_metas (dict): Image informtation. - - Returns: - tuple[Tensor]: A tuple containing the following for one image. - - - labels (Tensor): Labels of each image. \ - shape (num_queries, ). - - label_weights (Tensor): Label weights of each image. \ - shape (num_queries, ). - - mask_targets (Tensor): Mask targets of each image. \ - shape (num_queries, h, w). - - mask_weights (Tensor): Mask weights of each image. \ - shape (num_queries, ). - - pos_inds (Tensor): Sampled positive indices for each \ - image. - - neg_inds (Tensor): Sampled negative indices for each \ - image. - """ - # sample points - num_queries = cls_score.shape[0] - num_gts = gt_labels.shape[0] - - point_coords = torch.rand((1, self.num_points, 2), - device=cls_score.device) - # shape (num_queries, num_points) - mask_points_pred = point_sample( - mask_pred.unsqueeze(1), point_coords.repeat(num_queries, 1, - 1)).squeeze(1) - # shape (num_gts, num_points) - gt_points_masks = point_sample( - gt_masks.unsqueeze(1).float(), point_coords.repeat(num_gts, 1, - 1)).squeeze(1) - - # assign and sample - assign_result = self.assigner.assign(cls_score, mask_points_pred, - gt_labels, gt_points_masks, - img_metas) - sampling_result = self.sampler.sample(assign_result, mask_pred, - gt_masks) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - # label target - labels = gt_labels.new_full((self.num_queries, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds] - label_weights = gt_labels.new_ones((self.num_queries, )) - - # mask target - mask_targets = gt_masks[sampling_result.pos_assigned_gt_inds] - mask_weights = mask_pred.new_zeros((self.num_queries, )) - mask_weights[pos_inds] = 1.0 - - return (labels, label_weights, mask_targets, mask_weights, pos_inds, - neg_inds) - - def loss_single(self, cls_scores, mask_preds, gt_labels_list, - gt_masks_list, img_metas): - """Loss function for outputs from a single decoder layer. - - Args: - cls_scores (Tensor): Mask score logits from a single decoder layer - for all images. Shape (batch_size, num_queries, - cls_out_channels). Note `cls_out_channels` should includes - background. - mask_preds (Tensor): Mask logits for a pixel decoder for all - images. Shape (batch_size, num_queries, h, w). - gt_labels_list (list[Tensor]): Ground truth class indices for each - image, each with shape (num_gts, ). - gt_masks_list (list[Tensor]): Ground truth mask for each image, - each with shape (num_gts, h, w). - img_metas (list[dict]): List of image meta information. - - Returns: - tuple[Tensor]: Loss components for outputs from a single \ - decoder layer. 
- """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - mask_preds_list = [mask_preds[i] for i in range(num_imgs)] - (labels_list, label_weights_list, mask_targets_list, mask_weights_list, - num_total_pos, - num_total_neg) = self.get_targets(cls_scores_list, mask_preds_list, - gt_labels_list, gt_masks_list, - img_metas) - # shape (batch_size, num_queries) - labels = torch.stack(labels_list, dim=0) - # shape (batch_size, num_queries) - label_weights = torch.stack(label_weights_list, dim=0) - # shape (num_total_gts, h, w) - mask_targets = torch.cat(mask_targets_list, dim=0) - # shape (batch_size, num_queries) - mask_weights = torch.stack(mask_weights_list, dim=0) - - # classfication loss - # shape (batch_size * num_queries, ) - cls_scores = cls_scores.flatten(0, 1) - labels = labels.flatten(0, 1) - label_weights = label_weights.flatten(0, 1) - - class_weight = cls_scores.new_tensor(self.class_weight) - loss_cls = self.loss_cls( - cls_scores, - labels, - label_weights, - avg_factor=class_weight[labels].sum()) - - num_total_masks = reduce_mean(cls_scores.new_tensor([num_total_pos])) - num_total_masks = max(num_total_masks, 1) - - # extract positive ones - # shape (batch_size, num_queries, h, w) -> (num_total_gts, h, w) - mask_preds = mask_preds[mask_weights > 0] - - if mask_targets.shape[0] == 0: - # zero match - loss_dice = mask_preds.sum() - loss_mask = mask_preds.sum() - return loss_cls, loss_mask, loss_dice - - with torch.no_grad(): - points_coords = get_uncertain_point_coords_with_randomness( - mask_preds.unsqueeze(1), None, self.num_points, - self.oversample_ratio, self.importance_sample_ratio) - # shape (num_total_gts, h, w) -> (num_total_gts, num_points) - mask_point_targets = point_sample( - mask_targets.unsqueeze(1).float(), points_coords).squeeze(1) - # shape (num_queries, h, w) -> (num_queries, num_points) - mask_point_preds = point_sample( - mask_preds.unsqueeze(1), points_coords).squeeze(1) - - # dice loss - loss_dice = self.loss_dice( - mask_point_preds, mask_point_targets, avg_factor=num_total_masks) - - # mask loss - # shape (num_queries, num_points) -> (num_queries * num_points, ) - mask_point_preds = mask_point_preds.reshape(-1) - # shape (num_total_gts, num_points) -> (num_total_gts * num_points, ) - mask_point_targets = mask_point_targets.reshape(-1) - loss_mask = self.loss_mask( - mask_point_preds, - mask_point_targets, - avg_factor=num_total_masks * self.num_points) - - return loss_cls, loss_mask, loss_dice - - def forward_head(self, decoder_out, mask_feature, attn_mask_target_size): - """Forward for head part which is called after every decoder layer. - - Args: - decoder_out (Tensor): in shape (num_queries, batch_size, c). - mask_feature (Tensor): in shape (batch_size, c, h, w). - attn_mask_target_size (tuple[int, int]): target attention - mask size. - - Returns: - tuple: A tuple contain three elements. - - - cls_pred (Tensor): Classification scores in shape \ - (batch_size, num_queries, cls_out_channels). \ - Note `cls_out_channels` should includes background. - - mask_pred (Tensor): Mask scores in shape \ - (batch_size, num_queries,h, w). - - attn_mask (Tensor): Attention mask in shape \ - (batch_size * num_heads, num_queries, h, w). 
- """ - decoder_out = self.transformer_decoder.post_norm(decoder_out) - decoder_out = decoder_out.transpose(0, 1) - # shape (num_queries, batch_size, c) - cls_pred = self.cls_embed(decoder_out) - # shape (num_queries, batch_size, c) - mask_embed = self.mask_embed(decoder_out) - # shape (num_queries, batch_size, h, w) - mask_pred = torch.einsum('bqc,bchw->bqhw', mask_embed, mask_feature) - attn_mask = F.interpolate( - mask_pred, - attn_mask_target_size, - mode='bilinear', - align_corners=False) - # shape (num_queries, batch_size, h, w) -> - # (batch_size * num_head, num_queries, h, w) - attn_mask = attn_mask.flatten(2).unsqueeze(1).repeat( - (1, self.num_heads, 1, 1)).flatten(0, 1) - attn_mask = attn_mask.sigmoid() < 0.5 - attn_mask = attn_mask.detach() - - return cls_pred, mask_pred, attn_mask - - def forward(self, feats, img_metas): - """Forward function. - - Args: - feats (list[Tensor]): Multi scale Features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple: A tuple contains two elements. - - - cls_pred_list (list[Tensor)]: Classification logits \ - for each decoder layer. Each is a 3D-tensor with shape \ - (batch_size, num_queries, cls_out_channels). \ - Note `cls_out_channels` should includes background. - - mask_pred_list (list[Tensor]): Mask logits for each \ - decoder layer. Each with shape (batch_size, num_queries, \ - h, w). - """ - batch_size = len(img_metas) - mask_features, multi_scale_memorys = self.pixel_decoder(feats) - # multi_scale_memorys (from low resolution to high resolution) - decoder_inputs = [] - decoder_positional_encodings = [] - for i in range(self.num_transformer_feat_level): - decoder_input = self.decoder_input_projs[i](multi_scale_memorys[i]) - # shape (batch_size, c, h, w) -> (h*w, batch_size, c) - decoder_input = decoder_input.flatten(2).permute(2, 0, 1) - level_embed = self.level_embed.weight[i].view(1, 1, -1) - decoder_input = decoder_input + level_embed - # shape (batch_size, c, h, w) -> (h*w, batch_size, c) - mask = decoder_input.new_zeros( - (batch_size, ) + multi_scale_memorys[i].shape[-2:], - dtype=torch.bool) - decoder_positional_encoding = self.decoder_positional_encoding( - mask) - decoder_positional_encoding = decoder_positional_encoding.flatten( - 2).permute(2, 0, 1) - decoder_inputs.append(decoder_input) - decoder_positional_encodings.append(decoder_positional_encoding) - # shape (num_queries, c) -> (num_queries, batch_size, c) - query_feat = self.query_feat.weight.unsqueeze(1).repeat( - (1, batch_size, 1)) - query_embed = self.query_embed.weight.unsqueeze(1).repeat( - (1, batch_size, 1)) - - cls_pred_list = [] - mask_pred_list = [] - cls_pred, mask_pred, attn_mask = self.forward_head( - query_feat, mask_features, multi_scale_memorys[0].shape[-2:]) - cls_pred_list.append(cls_pred) - mask_pred_list.append(mask_pred) - - for i in range(self.num_transformer_decoder_layers): - level_idx = i % self.num_transformer_feat_level - # if a mask is all True(all background), then set it all False. 
- attn_mask[torch.where( - attn_mask.sum(-1) == attn_mask.shape[-1])] = False - - # cross_attn + self_attn - layer = self.transformer_decoder.layers[i] - attn_masks = [attn_mask, None] - query_feat = layer( - query=query_feat, - key=decoder_inputs[level_idx], - value=decoder_inputs[level_idx], - query_pos=query_embed, - key_pos=decoder_positional_encodings[level_idx], - attn_masks=attn_masks, - query_key_padding_mask=None, - # here we do not apply masking on padded region - key_padding_mask=None) - cls_pred, mask_pred, attn_mask = self.forward_head( - query_feat, mask_features, multi_scale_memorys[ - (i + 1) % self.num_transformer_feat_level].shape[-2:]) - - cls_pred_list.append(cls_pred) - mask_pred_list.append(mask_pred) - - return cls_pred_list, mask_pred_list diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/maskformer_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/maskformer_head.py deleted file mode 100644 index 4541e018..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/maskformer_head.py +++ /dev/null @@ -1,553 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, build_plugin_layer, caffe2_xavier_init -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer_sequence) -from mmcv.runner import force_fp32 - -from mmdet.core import build_assigner, build_sampler, multi_apply, reduce_mean -from mmdet.models.utils import preprocess_panoptic_gt -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class MaskFormerHead(AnchorFreeHead): - """Implements the MaskFormer head. - - See `Per-Pixel Classification is Not All You Need for Semantic - Segmentation `_ for details. - - Args: - in_channels (list[int]): Number of channels in the input feature map. - feat_channels (int): Number of channels for feature. - out_channels (int): Number of channels for output. - num_things_classes (int): Number of things. - num_stuff_classes (int): Number of stuff. - num_queries (int): Number of query in Transformer. - pixel_decoder (:obj:`mmcv.ConfigDict` | dict): Config for pixel - decoder. Defaults to None. - enforce_decoder_input_project (bool, optional): Whether to add a layer - to change the embed_dim of tranformer encoder in pixel decoder to - the embed_dim of transformer decoder. Defaults to False. - transformer_decoder (:obj:`mmcv.ConfigDict` | dict): Config for - transformer decoder. Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer decoder position encoding. Defaults to None. - loss_cls (:obj:`mmcv.ConfigDict` | dict): Config of the classification - loss. Defaults to `CrossEntropyLoss`. - loss_mask (:obj:`mmcv.ConfigDict` | dict): Config of the mask loss. - Defaults to `FocalLoss`. - loss_dice (:obj:`mmcv.ConfigDict` | dict): Config of the dice loss. - Defaults to `DiceLoss`. - train_cfg (:obj:`mmcv.ConfigDict` | dict): Training config of - Maskformer head. - test_cfg (:obj:`mmcv.ConfigDict` | dict): Testing config of Maskformer - head. - init_cfg (dict or list[dict], optional): Initialization config dict. - Defaults to None. 
- """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - num_things_classes=80, - num_stuff_classes=53, - num_queries=100, - pixel_decoder=None, - enforce_decoder_input_project=False, - transformer_decoder=None, - positional_encoding=None, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0, - class_weight=[1.0] * 133 + [0.1]), - loss_mask=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=20.0), - loss_dice=dict( - type='DiceLoss', - use_sigmoid=True, - activate=True, - naive_dice=True, - loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=None, - **kwargs): - super(AnchorFreeHead, self).__init__(init_cfg) - self.num_things_classes = num_things_classes - self.num_stuff_classes = num_stuff_classes - self.num_classes = self.num_things_classes + self.num_stuff_classes - self.num_queries = num_queries - - pixel_decoder.update( - in_channels=in_channels, - feat_channels=feat_channels, - out_channels=out_channels) - self.pixel_decoder = build_plugin_layer(pixel_decoder)[1] - self.transformer_decoder = build_transformer_layer_sequence( - transformer_decoder) - self.decoder_embed_dims = self.transformer_decoder.embed_dims - pixel_decoder_type = pixel_decoder.get('type') - if pixel_decoder_type == 'PixelDecoder' and ( - self.decoder_embed_dims != in_channels[-1] - or enforce_decoder_input_project): - self.decoder_input_proj = Conv2d( - in_channels[-1], self.decoder_embed_dims, kernel_size=1) - else: - self.decoder_input_proj = nn.Identity() - self.decoder_pe = build_positional_encoding(positional_encoding) - self.query_embed = nn.Embedding(self.num_queries, out_channels) - - self.cls_embed = nn.Linear(feat_channels, self.num_classes + 1) - self.mask_embed = nn.Sequential( - nn.Linear(feat_channels, feat_channels), nn.ReLU(inplace=True), - nn.Linear(feat_channels, feat_channels), nn.ReLU(inplace=True), - nn.Linear(feat_channels, out_channels)) - - self.test_cfg = test_cfg - self.train_cfg = train_cfg - if train_cfg: - self.assigner = build_assigner(train_cfg.assigner) - self.sampler = build_sampler(train_cfg.sampler, context=self) - - self.class_weight = loss_cls.class_weight - self.loss_cls = build_loss(loss_cls) - self.loss_mask = build_loss(loss_mask) - self.loss_dice = build_loss(loss_dice) - - def init_weights(self): - if isinstance(self.decoder_input_proj, Conv2d): - caffe2_xavier_init(self.decoder_input_proj, bias=0) - - self.pixel_decoder.init_weights() - - for p in self.transformer_decoder.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def preprocess_gt(self, gt_labels_list, gt_masks_list, gt_semantic_segs): - """Preprocess the ground truth for all images. - - Args: - gt_labels_list (list[Tensor]): Each is ground truth - labels of each bbox, with shape (num_gts, ). - gt_masks_list (list[BitmapMasks]): Each is ground truth - masks of each instances of a image, shape - (num_gts, h, w). - gt_semantic_seg (Tensor): Ground truth of semantic - segmentation with the shape (batch_size, n, h, w). - [0, num_thing_class - 1] means things, - [num_thing_class, num_class-1] means stuff, - 255 means VOID. - target_shape (tuple[int]): Shape of output mask_preds. - Resize the masks to shape of mask_preds. - - Returns: - tuple: a tuple containing the following targets. - - labels (list[Tensor]): Ground truth class indices\ - for all images. Each with shape (n, ), n is the sum of\ - number of stuff type and number of instance in a image. 
- - masks (list[Tensor]): Ground truth mask for each\ - image, each with shape (n, h, w). - """ - num_things_list = [self.num_things_classes] * len(gt_labels_list) - num_stuff_list = [self.num_stuff_classes] * len(gt_labels_list) - - targets = multi_apply(preprocess_panoptic_gt, gt_labels_list, - gt_masks_list, gt_semantic_segs, num_things_list, - num_stuff_list) - labels, masks = targets - return labels, masks - - def get_targets(self, cls_scores_list, mask_preds_list, gt_labels_list, - gt_masks_list, img_metas): - """Compute classification and mask targets for all images for a decoder - layer. - - Args: - cls_scores_list (list[Tensor]): Mask score logits from a single - decoder layer for all images. Each with shape (num_queries, - cls_out_channels). - mask_preds_list (list[Tensor]): Mask logits from a single decoder - layer for all images. Each with shape (num_queries, h, w). - gt_labels_list (list[Tensor]): Ground truth class indices for all - images. Each with shape (n, ), n is the sum of number of stuff - type and number of instance in a image. - gt_masks_list (list[Tensor]): Ground truth mask for each image, - each with shape (n, h, w). - img_metas (list[dict]): List of image meta information. - - Returns: - tuple[list[Tensor]]: a tuple containing the following targets. - - labels_list (list[Tensor]): Labels of all images.\ - Each with shape (num_queries, ). - - label_weights_list (list[Tensor]): Label weights\ - of all images. Each with shape (num_queries, ). - - mask_targets_list (list[Tensor]): Mask targets of\ - all images. Each with shape (num_queries, h, w). - - mask_weights_list (list[Tensor]): Mask weights of\ - all images. Each with shape (num_queries, ). - - num_total_pos (int): Number of positive samples in\ - all images. - - num_total_neg (int): Number of negative samples in\ - all images. - """ - (labels_list, label_weights_list, mask_targets_list, mask_weights_list, - pos_inds_list, - neg_inds_list) = multi_apply(self._get_target_single, cls_scores_list, - mask_preds_list, gt_labels_list, - gt_masks_list, img_metas) - - num_total_pos = sum((inds.numel() for inds in pos_inds_list)) - num_total_neg = sum((inds.numel() for inds in neg_inds_list)) - return (labels_list, label_weights_list, mask_targets_list, - mask_weights_list, num_total_pos, num_total_neg) - - def _get_target_single(self, cls_score, mask_pred, gt_labels, gt_masks, - img_metas): - """Compute classification and mask targets for one image. - - Args: - cls_score (Tensor): Mask score logits from a single decoder layer - for one image. Shape (num_queries, cls_out_channels). - mask_pred (Tensor): Mask logits for a single decoder layer for one - image. Shape (num_queries, h, w). - gt_labels (Tensor): Ground truth class indices for one image with - shape (n, ). n is the sum of number of stuff type and number - of instance in a image. - gt_masks (Tensor): Ground truth mask for each image, each with - shape (n, h, w). - img_metas (dict): Image informtation. - - Returns: - tuple[Tensor]: a tuple containing the following for one image. - - labels (Tensor): Labels of each image. - shape (num_queries, ). - - label_weights (Tensor): Label weights of each image. - shape (num_queries, ). - - mask_targets (Tensor): Mask targets of each image. - shape (num_queries, h, w). - - mask_weights (Tensor): Mask weights of each image. - shape (num_queries, ). - - pos_inds (Tensor): Sampled positive indices for each image. - - neg_inds (Tensor): Sampled negative indices for each image. 
- """ - target_shape = mask_pred.shape[-2:] - if gt_masks.shape[0] > 0: - gt_masks_downsampled = F.interpolate( - gt_masks.unsqueeze(1).float(), target_shape, - mode='nearest').squeeze(1).long() - else: - gt_masks_downsampled = gt_masks - - # assign and sample - assign_result = self.assigner.assign(cls_score, mask_pred, gt_labels, - gt_masks_downsampled, img_metas) - sampling_result = self.sampler.sample(assign_result, mask_pred, - gt_masks) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - # label target - labels = gt_labels.new_full((self.num_queries, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds] - label_weights = gt_labels.new_ones(self.num_queries) - - # mask target - mask_targets = gt_masks[sampling_result.pos_assigned_gt_inds] - mask_weights = mask_pred.new_zeros((self.num_queries, )) - mask_weights[pos_inds] = 1.0 - - return (labels, label_weights, mask_targets, mask_weights, pos_inds, - neg_inds) - - @force_fp32(apply_to=('all_cls_scores', 'all_mask_preds')) - def loss(self, all_cls_scores, all_mask_preds, gt_labels_list, - gt_masks_list, img_metas): - """Loss function. - - Args: - all_cls_scores (Tensor): Classification scores for all decoder - layers with shape (num_decoder, batch_size, num_queries, - cls_out_channels). Note `cls_out_channels` should includes - background. - all_mask_preds (Tensor): Mask scores for all decoder layers with - shape (num_decoder, batch_size, num_queries, h, w). - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (n, ). n is the sum of number of stuff type - and number of instance in a image. - gt_masks_list (list[Tensor]): Ground truth mask for each image with - shape (n, h, w). - img_metas (list[dict]): List of image meta information. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - num_dec_layers = len(all_cls_scores) - all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)] - all_gt_masks_list = [gt_masks_list for _ in range(num_dec_layers)] - img_metas_list = [img_metas for _ in range(num_dec_layers)] - losses_cls, losses_mask, losses_dice = multi_apply( - self.loss_single, all_cls_scores, all_mask_preds, - all_gt_labels_list, all_gt_masks_list, img_metas_list) - - loss_dict = dict() - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_mask'] = losses_mask[-1] - loss_dict['loss_dice'] = losses_dice[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_mask_i, loss_dice_i in zip( - losses_cls[:-1], losses_mask[:-1], losses_dice[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_mask'] = loss_mask_i - loss_dict[f'd{num_dec_layer}.loss_dice'] = loss_dice_i - num_dec_layer += 1 - return loss_dict - - def loss_single(self, cls_scores, mask_preds, gt_labels_list, - gt_masks_list, img_metas): - """Loss function for outputs from a single decoder layer. - - Args: - cls_scores (Tensor): Mask score logits from a single decoder layer - for all images. Shape (batch_size, num_queries, - cls_out_channels). Note `cls_out_channels` should includes - background. - mask_preds (Tensor): Mask logits for a pixel decoder for all - images. Shape (batch_size, num_queries, h, w). - gt_labels_list (list[Tensor]): Ground truth class indices for each - image, each with shape (n, ). n is the sum of number of stuff - types and number of instances in a image. 
- gt_masks_list (list[Tensor]): Ground truth mask for each image, - each with shape (n, h, w). - img_metas (list[dict]): List of image meta information. - - Returns: - tuple[Tensor]: Loss components for outputs from a single decoder\ - layer. - """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - mask_preds_list = [mask_preds[i] for i in range(num_imgs)] - - (labels_list, label_weights_list, mask_targets_list, mask_weights_list, - num_total_pos, - num_total_neg) = self.get_targets(cls_scores_list, mask_preds_list, - gt_labels_list, gt_masks_list, - img_metas) - # shape (batch_size, num_queries) - labels = torch.stack(labels_list, dim=0) - # shape (batch_size, num_queries) - label_weights = torch.stack(label_weights_list, dim=0) - # shape (num_total_gts, h, w) - mask_targets = torch.cat(mask_targets_list, dim=0) - # shape (batch_size, num_queries) - mask_weights = torch.stack(mask_weights_list, dim=0) - - # classfication loss - # shape (batch_size * num_queries, ) - cls_scores = cls_scores.flatten(0, 1) - labels = labels.flatten(0, 1) - label_weights = label_weights.flatten(0, 1) - - class_weight = cls_scores.new_tensor(self.class_weight) - loss_cls = self.loss_cls( - cls_scores, - labels, - label_weights, - avg_factor=class_weight[labels].sum()) - - num_total_masks = reduce_mean(cls_scores.new_tensor([num_total_pos])) - num_total_masks = max(num_total_masks, 1) - - # extract positive ones - # shape (batch_size, num_queries, h, w) -> (num_total_gts, h, w) - mask_preds = mask_preds[mask_weights > 0] - target_shape = mask_targets.shape[-2:] - - if mask_targets.shape[0] == 0: - # zero match - loss_dice = mask_preds.sum() - loss_mask = mask_preds.sum() - return loss_cls, loss_mask, loss_dice - - # upsample to shape of target - # shape (num_total_gts, h, w) - mask_preds = F.interpolate( - mask_preds.unsqueeze(1), - target_shape, - mode='bilinear', - align_corners=False).squeeze(1) - - # dice loss - loss_dice = self.loss_dice( - mask_preds, mask_targets, avg_factor=num_total_masks) - - # mask loss - # FocalLoss support input of shape (n, num_class) - h, w = mask_preds.shape[-2:] - # shape (num_total_gts, h, w) -> (num_total_gts * h * w, 1) - mask_preds = mask_preds.reshape(-1, 1) - # shape (num_total_gts, h, w) -> (num_total_gts * h * w) - mask_targets = mask_targets.reshape(-1) - # target is (1 - mask_targets) !!! - loss_mask = self.loss_mask( - mask_preds, 1 - mask_targets, avg_factor=num_total_masks * h * w) - - return loss_cls, loss_mask, loss_dice - - def forward(self, feats, img_metas): - """Forward function. - - Args: - feats (list[Tensor]): Features from the upstream network, each - is a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple: a tuple contains two elements. - - all_cls_scores (Tensor): Classification scores for each\ - scale level. Each is a 4D-tensor with shape\ - (num_decoder, batch_size, num_queries, cls_out_channels).\ - Note `cls_out_channels` should includes background. - - all_mask_preds (Tensor): Mask scores for each decoder\ - layer. Each with shape (num_decoder, batch_size,\ - num_queries, h, w). 
- """ - batch_size = len(img_metas) - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - padding_mask = feats[-1].new_ones( - (batch_size, input_img_h, input_img_w), dtype=torch.float32) - for i in range(batch_size): - img_h, img_w, _ = img_metas[i]['img_shape'] - padding_mask[i, :img_h, :img_w] = 0 - padding_mask = F.interpolate( - padding_mask.unsqueeze(1), - size=feats[-1].shape[-2:], - mode='nearest').to(torch.bool).squeeze(1) - # when backbone is swin, memory is output of last stage of swin. - # when backbone is r50, memory is output of tranformer encoder. - mask_features, memory = self.pixel_decoder(feats, img_metas) - pos_embed = self.decoder_pe(padding_mask) - memory = self.decoder_input_proj(memory) - # shape (batch_size, c, h, w) -> (h*w, batch_size, c) - memory = memory.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - # shape (batch_size, h * w) - padding_mask = padding_mask.flatten(1) - # shape = (num_queries, embed_dims) - query_embed = self.query_embed.weight - # shape = (num_queries, batch_size, embed_dims) - query_embed = query_embed.unsqueeze(1).repeat(1, batch_size, 1) - target = torch.zeros_like(query_embed) - # shape (num_decoder, num_queries, batch_size, embed_dims) - out_dec = self.transformer_decoder( - query=target, - key=memory, - value=memory, - key_pos=pos_embed, - query_pos=query_embed, - key_padding_mask=padding_mask) - # shape (num_decoder, batch_size, num_queries, embed_dims) - out_dec = out_dec.transpose(1, 2) - - # cls_scores - all_cls_scores = self.cls_embed(out_dec) - - # mask_preds - mask_embed = self.mask_embed(out_dec) - all_mask_preds = torch.einsum('lbqc,bchw->lbqhw', mask_embed, - mask_features) - - return all_cls_scores, all_mask_preds - - def forward_train(self, - feats, - img_metas, - gt_bboxes, - gt_labels, - gt_masks, - gt_semantic_seg, - gt_bboxes_ignore=None): - """Forward function for training mode. - - Args: - feats (list[Tensor]): Multi-level features from the upstream - network, each is a 4D-tensor. - img_metas (list[Dict]): List of image information. - gt_bboxes (list[Tensor]): Each element is ground truth bboxes of - the image, shape (num_gts, 4). Not used here. - gt_labels (list[Tensor]): Each element is ground truth labels of - each box, shape (num_gts,). - gt_masks (list[BitmapMasks]): Each element is masks of instances - of a image, shape (num_gts, h, w). - gt_semantic_seg (list[tensor]):Each element is the ground truth - of semantic segmentation with the shape (N, H, W). - [0, num_thing_class - 1] means things, - [num_thing_class, num_class-1] means stuff, - 255 means VOID. - gt_bboxes_ignore (list[Tensor]): Ground truth bboxes to be - ignored. Defaults to None. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # not consider ignoring bboxes - assert gt_bboxes_ignore is None - - # forward - all_cls_scores, all_mask_preds = self(feats, img_metas) - - # preprocess ground truth - gt_labels, gt_masks = self.preprocess_gt(gt_labels, gt_masks, - gt_semantic_seg) - - # loss - losses = self.loss(all_cls_scores, all_mask_preds, gt_labels, gt_masks, - img_metas) - - return losses - - def simple_test(self, feats, img_metas, **kwargs): - """Test without augmentaton. - - Args: - feats (list[Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple: A tuple contains two tensors. 
- - - mask_cls_results (Tensor): Mask classification logits,\ - shape (batch_size, num_queries, cls_out_channels). - Note `cls_out_channels` should includes background. - - mask_pred_results (Tensor): Mask logits, shape \ - (batch_size, num_queries, h, w). - """ - all_cls_scores, all_mask_preds = self(feats, img_metas) - mask_cls_results = all_cls_scores[-1] - mask_pred_results = all_mask_preds[-1] - - # upsample masks - img_shape = img_metas[0]['batch_input_shape'] - mask_pred_results = F.interpolate( - mask_pred_results, - size=(img_shape[0], img_shape[1]), - mode='bilinear', - align_corners=False) - - return mask_cls_results, mask_pred_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/nasfcos_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/nasfcos_head.py deleted file mode 100644 index 380c912c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/nasfcos_head.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import torch.nn as nn -from mmcv.cnn import ConvModule, Scale - -from mmdet.models.dense_heads.fcos_head import FCOSHead -from ..builder import HEADS - - -@HEADS.register_module() -class NASFCOSHead(FCOSHead): - """Anchor-free head used in `NASFCOS `_. - - It is quite similar with FCOS head, except for the searched structure of - classification branch and bbox regression branch, where a structure of - "dconv3x3, conv3x3, dconv3x3, conv1x1" is utilized instead. - """ - - def __init__(self, *args, init_cfg=None, **kwargs): - if init_cfg is None: - init_cfg = [ - dict(type='Caffe2Xavier', layer=['ConvModule', 'Conv2d']), - dict( - type='Normal', - std=0.01, - override=[ - dict(name='conv_reg'), - dict(name='conv_centerness'), - dict( - name='conv_cls', - type='Normal', - std=0.01, - bias_prob=0.01) - ]), - ] - super(NASFCOSHead, self).__init__(*args, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - dconv3x3_config = dict( - type='DCNv2', - kernel_size=3, - use_bias=True, - deform_groups=2, - padding=1) - conv3x3_config = dict(type='Conv', kernel_size=3, padding=1) - conv1x1_config = dict(type='Conv', kernel_size=1) - - self.arch_config = [ - dconv3x3_config, conv3x3_config, dconv3x3_config, conv1x1_config - ] - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i, op_ in enumerate(self.arch_config): - op = copy.deepcopy(op_) - chn = self.in_channels if i == 0 else self.feat_channels - assert isinstance(op, dict) - use_bias = op.pop('use_bias', False) - padding = op.pop('padding', 0) - kernel_size = op.pop('kernel_size') - module = ConvModule( - chn, - self.feat_channels, - kernel_size, - stride=1, - padding=padding, - norm_cfg=self.norm_cfg, - bias=use_bias, - conv_cfg=op) - - self.cls_convs.append(copy.deepcopy(module)) - self.reg_convs.append(copy.deepcopy(module)) - - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - self.conv_centerness = nn.Conv2d(self.feat_channels, 1, 3, padding=1) - - self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides]) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/paa_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/paa_head.py deleted file mode 100644 index d79b5b9f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/paa_head.py +++ /dev/null @@ -1,756 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import numpy as np -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply, multiclass_nms -from mmdet.core.bbox.iou_calculators import bbox_overlaps -from mmdet.models import HEADS -from mmdet.models.dense_heads import ATSSHead - -EPS = 1e-12 -try: - import sklearn.mixture as skm -except ImportError: - skm = None - - -def levels_to_images(mlvl_tensor): - """Concat multi-level feature maps by image. - - [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] - Convert the shape of each element in mlvl_tensor from (N, C, H, W) to - (N, H*W , C), then split the element to N elements with shape (H*W, C), and - concat elements in same image of all level along first dimension. - - Args: - mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from - corresponding level. Each element is of shape (N, C, H, W) - - Returns: - list[torch.Tensor]: A list that contains N tensors and each tensor is - of shape (num_elements, C) - """ - batch_size = mlvl_tensor[0].size(0) - batch_list = [[] for _ in range(batch_size)] - channels = mlvl_tensor[0].size(1) - for t in mlvl_tensor: - t = t.permute(0, 2, 3, 1) - t = t.view(batch_size, -1, channels).contiguous() - for img in range(batch_size): - batch_list[img].append(t[img]) - return [torch.cat(item, 0) for item in batch_list] - - -@HEADS.register_module() -class PAAHead(ATSSHead): - """Head of PAAAssignment: Probabilistic Anchor Assignment with IoU - Prediction for Object Detection. - - Code is modified from the `official github repo - `_. - - More details can be found in the `paper - `_ . - - Args: - topk (int): Select topk samples with smallest loss in - each level. - score_voting (bool): Whether to use score voting in post-process. - covariance_type : String describing the type of covariance parameters - to be used in :class:`sklearn.mixture.GaussianMixture`. - It must be one of: - - - 'full': each component has its own general covariance matrix - - 'tied': all components share the same general covariance matrix - - 'diag': each component has its own diagonal covariance matrix - - 'spherical': each component has its own single variance - Default: 'diag'. From 'full' to 'spherical', the gmm fitting - process is faster yet the performance could be influenced. For most - cases, 'diag' should be a good choice. - """ - - def __init__(self, - *args, - topk=9, - score_voting=True, - covariance_type='diag', - **kwargs): - # topk used in paa reassign process - self.topk = topk - self.with_score_voting = score_voting - self.covariance_type = covariance_type - super(PAAHead, self).__init__(*args, **kwargs) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds')) - def loss(self, - cls_scores, - bbox_preds, - iou_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - iou_preds (list[Tensor]): iou_preds for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. 
- gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when are computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss gmm_assignment. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - ) - (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds, - pos_gt_index) = cls_reg_targets - cls_scores = levels_to_images(cls_scores) - cls_scores = [ - item.reshape(-1, self.cls_out_channels) for item in cls_scores - ] - bbox_preds = levels_to_images(bbox_preds) - bbox_preds = [item.reshape(-1, 4) for item in bbox_preds] - iou_preds = levels_to_images(iou_preds) - iou_preds = [item.reshape(-1, 1) for item in iou_preds] - pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list, - cls_scores, bbox_preds, labels, - labels_weight, bboxes_target, - bboxes_weight, pos_inds) - - with torch.no_grad(): - reassign_labels, reassign_label_weight, \ - reassign_bbox_weights, num_pos = multi_apply( - self.paa_reassign, - pos_losses_list, - labels, - labels_weight, - bboxes_weight, - pos_inds, - pos_gt_index, - anchor_list) - num_pos = sum(num_pos) - # convert all tensor list to a flatten tensor - cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1)) - bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1)) - iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1)) - labels = torch.cat(reassign_labels, 0).view(-1) - flatten_anchors = torch.cat( - [torch.cat(item, 0) for item in anchor_list]) - labels_weight = torch.cat(reassign_label_weight, 0).view(-1) - bboxes_target = torch.cat(bboxes_target, - 0).view(-1, bboxes_target[0].size(-1)) - - pos_inds_flatten = ((labels >= 0) - & - (labels < self.num_classes)).nonzero().reshape(-1) - - losses_cls = self.loss_cls( - cls_scores, - labels, - labels_weight, - avg_factor=max(num_pos, len(img_metas))) # avoid num_pos=0 - if num_pos: - pos_bbox_pred = self.bbox_coder.decode( - flatten_anchors[pos_inds_flatten], - bbox_preds[pos_inds_flatten]) - pos_bbox_target = bboxes_target[pos_inds_flatten] - iou_target = bbox_overlaps( - pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True) - losses_iou = self.loss_centerness( - iou_preds[pos_inds_flatten], - iou_target.unsqueeze(-1), - avg_factor=num_pos) - losses_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_target, - iou_target.clamp(min=EPS), - avg_factor=iou_target.sum()) - else: - losses_iou = iou_preds.sum() * 0 - losses_bbox = bbox_preds.sum() * 0 - - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou) - - def get_pos_loss(self, anchors, cls_score, bbox_pred, label, label_weight, - bbox_target, bbox_weight, pos_inds): - """Calculate loss of all potential positive samples obtained from first - match process. - - Args: - anchors (list[Tensor]): Anchors of each scale. 
- cls_score (Tensor): Box scores of single image with shape - (num_anchors, num_classes) - bbox_pred (Tensor): Box energies / deltas of single image - with shape (num_anchors, 4) - label (Tensor): classification target of each anchor with - shape (num_anchors,) - label_weight (Tensor): Classification loss weight of each - anchor with shape (num_anchors). - bbox_target (dict): Regression target of each anchor with - shape (num_anchors, 4). - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - pos_inds (Tensor): Index of all positive samples got from - first assign process. - - Returns: - Tensor: Losses of all positive samples in single image. - """ - if not len(pos_inds): - return cls_score.new([]), - anchors_all_level = torch.cat(anchors, 0) - pos_scores = cls_score[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_label = label[pos_inds] - pos_label_weight = label_weight[pos_inds] - pos_bbox_target = bbox_target[pos_inds] - pos_bbox_weight = bbox_weight[pos_inds] - pos_anchors = anchors_all_level[pos_inds] - pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred) - - # to keep loss dimension - loss_cls = self.loss_cls( - pos_scores, - pos_label, - pos_label_weight, - avg_factor=1.0, - reduction_override='none') - - loss_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_target, - pos_bbox_weight, - avg_factor=1.0, # keep same loss weight before reassign - reduction_override='none') - - loss_cls = loss_cls.sum(-1) - pos_loss = loss_bbox + loss_cls - return pos_loss, - - def paa_reassign(self, pos_losses, label, label_weight, bbox_weight, - pos_inds, pos_gt_inds, anchors): - """Fit loss to GMM distribution and separate positive, ignore, negative - samples again with GMM model. - - Args: - pos_losses (Tensor): Losses of all positive samples in - single image. - label (Tensor): classification target of each anchor with - shape (num_anchors,) - label_weight (Tensor): Classification loss weight of each - anchor with shape (num_anchors). - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - pos_inds (Tensor): Index of all positive samples got from - first assign process. - pos_gt_inds (Tensor): Gt_index of all positive samples got - from first assign process. - anchors (list[Tensor]): Anchors of each scale. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - label (Tensor): classification target of each anchor after - paa assign, with shape (num_anchors,) - - label_weight (Tensor): Classification loss weight of each - anchor after paa assign, with shape (num_anchors). - - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - - num_pos (int): The number of positive samples after paa - assign. 
- """ - if not len(pos_inds): - return label, label_weight, bbox_weight, 0 - label = label.clone() - label_weight = label_weight.clone() - bbox_weight = bbox_weight.clone() - num_gt = pos_gt_inds.max() + 1 - num_level = len(anchors) - num_anchors_each_level = [item.size(0) for item in anchors] - num_anchors_each_level.insert(0, 0) - inds_level_interval = np.cumsum(num_anchors_each_level) - pos_level_mask = [] - for i in range(num_level): - mask = (pos_inds >= inds_level_interval[i]) & ( - pos_inds < inds_level_interval[i + 1]) - pos_level_mask.append(mask) - pos_inds_after_paa = [label.new_tensor([])] - ignore_inds_after_paa = [label.new_tensor([])] - for gt_ind in range(num_gt): - pos_inds_gmm = [] - pos_loss_gmm = [] - gt_mask = pos_gt_inds == gt_ind - for level in range(num_level): - level_mask = pos_level_mask[level] - level_gt_mask = level_mask & gt_mask - value, topk_inds = pos_losses[level_gt_mask].topk( - min(level_gt_mask.sum(), self.topk), largest=False) - pos_inds_gmm.append(pos_inds[level_gt_mask][topk_inds]) - pos_loss_gmm.append(value) - pos_inds_gmm = torch.cat(pos_inds_gmm) - pos_loss_gmm = torch.cat(pos_loss_gmm) - # fix gmm need at least two sample - if len(pos_inds_gmm) < 2: - continue - device = pos_inds_gmm.device - pos_loss_gmm, sort_inds = pos_loss_gmm.sort() - pos_inds_gmm = pos_inds_gmm[sort_inds] - pos_loss_gmm = pos_loss_gmm.view(-1, 1).cpu().numpy() - min_loss, max_loss = pos_loss_gmm.min(), pos_loss_gmm.max() - means_init = np.array([min_loss, max_loss]).reshape(2, 1) - weights_init = np.array([0.5, 0.5]) - precisions_init = np.array([1.0, 1.0]).reshape(2, 1, 1) # full - if self.covariance_type == 'spherical': - precisions_init = precisions_init.reshape(2) - elif self.covariance_type == 'diag': - precisions_init = precisions_init.reshape(2, 1) - elif self.covariance_type == 'tied': - precisions_init = np.array([[1.0]]) - if skm is None: - raise ImportError('Please run "pip install sklearn" ' - 'to install sklearn first.') - gmm = skm.GaussianMixture( - 2, - weights_init=weights_init, - means_init=means_init, - precisions_init=precisions_init, - covariance_type=self.covariance_type) - gmm.fit(pos_loss_gmm) - gmm_assignment = gmm.predict(pos_loss_gmm) - scores = gmm.score_samples(pos_loss_gmm) - gmm_assignment = torch.from_numpy(gmm_assignment).to(device) - scores = torch.from_numpy(scores).to(device) - - pos_inds_temp, ignore_inds_temp = self.gmm_separation_scheme( - gmm_assignment, scores, pos_inds_gmm) - pos_inds_after_paa.append(pos_inds_temp) - ignore_inds_after_paa.append(ignore_inds_temp) - - pos_inds_after_paa = torch.cat(pos_inds_after_paa) - ignore_inds_after_paa = torch.cat(ignore_inds_after_paa) - reassign_mask = (pos_inds.unsqueeze(1) != pos_inds_after_paa).all(1) - reassign_ids = pos_inds[reassign_mask] - label[reassign_ids] = self.num_classes - label_weight[ignore_inds_after_paa] = 0 - bbox_weight[reassign_ids] = 0 - num_pos = len(pos_inds_after_paa) - return label, label_weight, bbox_weight, num_pos - - def gmm_separation_scheme(self, gmm_assignment, scores, pos_inds_gmm): - """A general separation scheme for gmm model. - - It separates a GMM distribution of candidate samples into three - parts, 0 1 and uncertain areas, and you can implement other - separation schemes by rewriting this function. - - Args: - gmm_assignment (Tensor): The prediction of GMM which is of shape - (num_samples,). The 0/1 value indicates the distribution - that each sample comes from. - scores (Tensor): The probability of sample coming from the - fit GMM distribution. 
The tensor is of shape (num_samples,). - pos_inds_gmm (Tensor): All the indexes of samples which are used - to fit GMM model. The tensor is of shape (num_samples,) - - Returns: - tuple[Tensor]: The indices of positive and ignored samples. - - - pos_inds_temp (Tensor): Indices of positive samples. - - ignore_inds_temp (Tensor): Indices of ignore samples. - """ - # The implementation is (c) in Fig.3 in origin paper instead of (b). - # You can refer to issues such as - # https://github.com/kkhoot/PAA/issues/8 and - # https://github.com/kkhoot/PAA/issues/9. - fgs = gmm_assignment == 0 - pos_inds_temp = fgs.new_tensor([], dtype=torch.long) - ignore_inds_temp = fgs.new_tensor([], dtype=torch.long) - if fgs.nonzero().numel(): - _, pos_thr_ind = scores[fgs].topk(1) - pos_inds_temp = pos_inds_gmm[fgs][:pos_thr_ind + 1] - ignore_inds_temp = pos_inds_gmm.new_tensor([]) - return pos_inds_temp, ignore_inds_temp - - def get_targets( - self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - ): - """Get targets for PAA head. - - This method is almost the same as `AnchorHead.get_targets()`. We direct - return the results from _get_targets_single instead map it to levels - by images_to_levels function. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels (list[Tensor]): Labels of all anchors, each with - shape (num_anchors,). - - label_weights (list[Tensor]): Label weights of all anchor. - each with shape (num_anchors,). - - bbox_targets (list[Tensor]): BBox targets of all anchors. - each with shape (num_anchors, 4). - - bbox_weights (list[Tensor]): BBox weights of all anchors. - each with shape (num_anchors, 4). - - pos_inds (list[Tensor]): Contains all index of positive - sample in all anchor. - - gt_inds (list[Tensor]): Contains all gt_index of positive - sample in all anchor. 
- """ - - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - - (labels, label_weights, bbox_targets, bbox_weights, valid_pos_inds, - valid_neg_inds, sampling_result) = results - - # Due to valid flag of anchors, we have to calculate the real pos_inds - # in origin anchor set. - pos_inds = [] - for i, single_labels in enumerate(labels): - pos_mask = (0 <= single_labels) & ( - single_labels < self.num_classes) - pos_inds.append(pos_mask.nonzero().view(-1)) - - gt_inds = [item.pos_assigned_gt_inds for item in sampling_result] - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - gt_inds) - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - This method is same as `AnchorHead._get_targets_single()`. - """ - assert unmap_outputs, 'We must map outputs back to the original' \ - 'set of anchors in PAAhead' - return super(ATSSHead, self)._get_targets_single( - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - score_factors=None, - img_metas=None, - cfg=None, - rescale=False, - with_nms=True, - **kwargs): - assert with_nms, 'PAA only supports "with_nms=True" now and it ' \ - 'means PAAHead does not support ' \ - 'test-time augmentation' - return super(ATSSHead, self).get_bboxes(cls_scores, bbox_preds, - score_factors, img_metas, cfg, - rescale, with_nms, **kwargs) - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factors from all scale - levels of a single image, each item has shape - (num_priors * 1, H, W). - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. 
- - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - cfg = self.test_cfg if cfg is None else cfg - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_score_factors = [] - for level_idx, (cls_score, bbox_pred, score_factor, priors) in \ - enumerate(zip(cls_score_list, bbox_pred_list, - score_factor_list, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - score_factor = score_factor.permute(1, 2, 0).reshape(-1).sigmoid() - - if 0 < nms_pre < scores.shape[0]: - max_scores, _ = (scores * - score_factor[:, None]).sqrt().max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - priors = priors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - score_factor = score_factor[topk_inds] - - bboxes = self.bbox_coder.decode( - priors, bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_score_factors.append(score_factor) - - return self._bbox_post_process(mlvl_scores, mlvl_bboxes, - img_meta['scale_factor'], cfg, rescale, - with_nms, mlvl_score_factors, **kwargs) - - def _bbox_post_process(self, - mlvl_scores, - mlvl_bboxes, - scale_factor, - cfg, - rescale=False, - with_nms=True, - mlvl_score_factors=None, - **kwargs): - """bbox post-processing method. - - The boxes would be rescaled to the original image scale and do - the nms operation. Usually with_nms is False is used for aug test. - - Args: - mlvl_scores (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_bboxes, num_class). - mlvl_bboxes (list[Tensor]): Decoded bboxes from all scale - levels of a single image, each item has shape (num_bboxes, 4). - scale_factor (ndarray, optional): Scale factor of the image arange - as (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - mlvl_score_factors (list[Tensor], optional): Score factor from - all scale levels of a single image, each item has shape - (num_bboxes, ). Default: None. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. 
- - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - - mlvl_iou_preds = torch.cat(mlvl_score_factors) - mlvl_nms_scores = (mlvl_scores * mlvl_iou_preds[:, None]).sqrt() - det_bboxes, det_labels = multiclass_nms( - mlvl_bboxes, - mlvl_nms_scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=None) - if self.with_score_voting and len(det_bboxes) > 0: - det_bboxes, det_labels = self.score_voting(det_bboxes, det_labels, - mlvl_bboxes, - mlvl_nms_scores, - cfg.score_thr) - - return det_bboxes, det_labels - - def score_voting(self, det_bboxes, det_labels, mlvl_bboxes, - mlvl_nms_scores, score_thr): - """Implementation of score voting method works on each remaining boxes - after NMS procedure. - - Args: - det_bboxes (Tensor): Remaining boxes after NMS procedure, - with shape (k, 5), each dimension means - (x1, y1, x2, y2, score). - det_labels (Tensor): The label of remaining boxes, with shape - (k, 1),Labels are 0-based. - mlvl_bboxes (Tensor): All boxes before the NMS procedure, - with shape (num_anchors,4). - mlvl_nms_scores (Tensor): The scores of all boxes which is used - in the NMS procedure, with shape (num_anchors, num_class) - score_thr (float): The score threshold of bboxes. - - Returns: - tuple: Usually returns a tuple containing voting results. - - - det_bboxes_voted (Tensor): Remaining boxes after - score voting procedure, with shape (k, 5), each - dimension means (x1, y1, x2, y2, score). - - det_labels_voted (Tensor): Label of remaining bboxes - after voting, with shape (num_anchors,). 
- """ - candidate_mask = mlvl_nms_scores > score_thr - candidate_mask_nonzeros = candidate_mask.nonzero(as_tuple=False) - candidate_inds = candidate_mask_nonzeros[:, 0] - candidate_labels = candidate_mask_nonzeros[:, 1] - candidate_bboxes = mlvl_bboxes[candidate_inds] - candidate_scores = mlvl_nms_scores[candidate_mask] - det_bboxes_voted = [] - det_labels_voted = [] - for cls in range(self.cls_out_channels): - candidate_cls_mask = candidate_labels == cls - if not candidate_cls_mask.any(): - continue - candidate_cls_scores = candidate_scores[candidate_cls_mask] - candidate_cls_bboxes = candidate_bboxes[candidate_cls_mask] - det_cls_mask = det_labels == cls - det_cls_bboxes = det_bboxes[det_cls_mask].view( - -1, det_bboxes.size(-1)) - det_candidate_ious = bbox_overlaps(det_cls_bboxes[:, :4], - candidate_cls_bboxes) - for det_ind in range(len(det_cls_bboxes)): - single_det_ious = det_candidate_ious[det_ind] - pos_ious_mask = single_det_ious > 0.01 - pos_ious = single_det_ious[pos_ious_mask] - pos_bboxes = candidate_cls_bboxes[pos_ious_mask] - pos_scores = candidate_cls_scores[pos_ious_mask] - pis = (torch.exp(-(1 - pos_ious)**2 / 0.025) * - pos_scores)[:, None] - voted_box = torch.sum( - pis * pos_bboxes, dim=0) / torch.sum( - pis, dim=0) - voted_score = det_cls_bboxes[det_ind][-1:][None, :] - det_bboxes_voted.append( - torch.cat((voted_box[None, :], voted_score), dim=1)) - det_labels_voted.append(cls) - - det_bboxes_voted = torch.cat(det_bboxes_voted, dim=0) - det_labels_voted = det_labels.new_tensor(det_labels_voted) - return det_bboxes_voted, det_labels_voted diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/pisa_retinanet_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/pisa_retinanet_head.py deleted file mode 100644 index 8654ef45..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/pisa_retinanet_head.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import images_to_levels -from ..builder import HEADS -from ..losses import carl_loss, isr_p -from .retina_head import RetinaHead - - -@HEADS.register_module() -class PISARetinaHead(RetinaHead): - """PISA Retinanet Head. - - The head owns the same structure with Retinanet Head, but differs in two - aspects: - 1. Importance-based Sample Reweighting Positive (ISR-P) is applied to - change the positive loss weights. - 2. Classification-aware regression loss is adopted as a third loss. - """ - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes of each image - with shape (num_obj, 4). - gt_labels (list[Tensor]): Ground truth labels of each image - with shape (num_obj, 4). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): Ignored gt bboxes of each image. - Default: None. - - Returns: - dict: Loss dict, comprise classification loss, regression loss and - carl loss. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - return_sampling_results=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, sampling_results_list) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - num_imgs = len(img_metas) - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(num_imgs, -1, label_channels) - for cls_score in cls_scores - ] - flatten_cls_scores = torch.cat( - flatten_cls_scores, dim=1).reshape(-1, - flatten_cls_scores[0].size(-1)) - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) - for bbox_pred in bbox_preds - ] - flatten_bbox_preds = torch.cat( - flatten_bbox_preds, dim=1).view(-1, flatten_bbox_preds[0].size(-1)) - flatten_labels = torch.cat(labels_list, dim=1).reshape(-1) - flatten_label_weights = torch.cat( - label_weights_list, dim=1).reshape(-1) - flatten_anchors = torch.cat(all_anchor_list, dim=1).reshape(-1, 4) - flatten_bbox_targets = torch.cat( - bbox_targets_list, dim=1).reshape(-1, 4) - flatten_bbox_weights = torch.cat( - bbox_weights_list, dim=1).reshape(-1, 4) - - # Apply ISR-P - isr_cfg = self.train_cfg.get('isr', None) - if isr_cfg is not None: - all_targets = (flatten_labels, flatten_label_weights, - flatten_bbox_targets, flatten_bbox_weights) - with torch.no_grad(): - all_targets = isr_p( - flatten_cls_scores, - flatten_bbox_preds, - all_targets, - flatten_anchors, - sampling_results_list, - bbox_coder=self.bbox_coder, - loss_cls=self.loss_cls, - num_class=self.num_classes, - **self.train_cfg.isr) - (flatten_labels, flatten_label_weights, flatten_bbox_targets, - flatten_bbox_weights) = all_targets - - # For convenience we compute loss once instead separating by fpn level, - # so that we don't need to separate the weights by level again. 
- # The result should be the same - losses_cls = self.loss_cls( - flatten_cls_scores, - flatten_labels, - flatten_label_weights, - avg_factor=num_total_samples) - losses_bbox = self.loss_bbox( - flatten_bbox_preds, - flatten_bbox_targets, - flatten_bbox_weights, - avg_factor=num_total_samples) - loss_dict = dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - # CARL Loss - carl_cfg = self.train_cfg.get('carl', None) - if carl_cfg is not None: - loss_carl = carl_loss( - flatten_cls_scores, - flatten_labels, - flatten_bbox_preds, - flatten_bbox_targets, - self.loss_bbox, - **self.train_cfg.carl, - avg_factor=num_total_pos, - sigmoid=True, - num_class=self.num_classes) - loss_dict.update(loss_carl) - - return loss_dict diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/pisa_ssd_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/pisa_ssd_head.py deleted file mode 100644 index 86b67abe..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/pisa_ssd_head.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import multi_apply -from ..builder import HEADS -from ..losses import CrossEntropyLoss, SmoothL1Loss, carl_loss, isr_p -from .ssd_head import SSDHead - - -# TODO: add loss evaluator for SSD -@HEADS.register_module() -class PISASSDHead(SSDHead): - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes of each image - with shape (num_obj, 4). - gt_labels (list[Tensor]): Ground truth labels of each image - with shape (num_obj, 4). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): Ignored gt bboxes of each image. - Default: None. - - Returns: - dict: Loss dict, comprise classification loss regression loss and - carl loss. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=1, - unmap_outputs=False, - return_sampling_results=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, sampling_results_list) = cls_reg_targets - - num_images = len(img_metas) - all_cls_scores = torch.cat([ - s.permute(0, 2, 3, 1).reshape( - num_images, -1, self.cls_out_channels) for s in cls_scores - ], 1) - all_labels = torch.cat(labels_list, -1).view(num_images, -1) - all_label_weights = torch.cat(label_weights_list, - -1).view(num_images, -1) - all_bbox_preds = torch.cat([ - b.permute(0, 2, 3, 1).reshape(num_images, -1, 4) - for b in bbox_preds - ], -2) - all_bbox_targets = torch.cat(bbox_targets_list, - -2).view(num_images, -1, 4) - all_bbox_weights = torch.cat(bbox_weights_list, - -2).view(num_images, -1, 4) - - # concat all level anchors to a single tensor - all_anchors = [] - for i in range(num_images): - all_anchors.append(torch.cat(anchor_list[i])) - - isr_cfg = self.train_cfg.get('isr', None) - all_targets = (all_labels.view(-1), all_label_weights.view(-1), - all_bbox_targets.view(-1, - 4), all_bbox_weights.view(-1, 4)) - # apply ISR-P - if isr_cfg is not None: - all_targets = isr_p( - all_cls_scores.view(-1, all_cls_scores.size(-1)), - all_bbox_preds.view(-1, 4), - all_targets, - torch.cat(all_anchors), - sampling_results_list, - loss_cls=CrossEntropyLoss(), - bbox_coder=self.bbox_coder, - **self.train_cfg.isr, - num_class=self.num_classes) - (new_labels, new_label_weights, new_bbox_targets, - new_bbox_weights) = all_targets - all_labels = new_labels.view(all_labels.shape) - all_label_weights = new_label_weights.view(all_label_weights.shape) - all_bbox_targets = new_bbox_targets.view(all_bbox_targets.shape) - all_bbox_weights = new_bbox_weights.view(all_bbox_weights.shape) - - # add CARL loss - carl_loss_cfg = self.train_cfg.get('carl', None) - if carl_loss_cfg is not None: - loss_carl = carl_loss( - all_cls_scores.view(-1, all_cls_scores.size(-1)), - all_targets[0], - all_bbox_preds.view(-1, 4), - all_targets[2], - SmoothL1Loss(beta=1.), - **self.train_cfg.carl, - avg_factor=num_total_pos, - num_class=self.num_classes) - - # check NaN and Inf - assert torch.isfinite(all_cls_scores).all().item(), \ - 'classification scores become infinite or NaN!' - assert torch.isfinite(all_bbox_preds).all().item(), \ - 'bbox predications become infinite or NaN!' 
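The CARL term assembled above scales each positive sample's regression loss by its classification confidence for the ground-truth class, so well-classified boxes drive localization. A minimal, self-contained sketch of that weighting idea (the `k`/`bias` knobs and the function name are illustrative, not the mmdet `carl_loss` API):

```python
import torch

# Hedged sketch of CARL-style weighting; `k` and `bias` are illustrative knobs,
# not the exact mmdet `carl_loss` signature.
def carl_weighting_sketch(pos_cls_scores, pos_reg_loss, k=1.0, bias=0.2):
    # pos_cls_scores: (num_pos,) predicted score of each positive sample's gt class
    # pos_reg_loss:   (num_pos,) unreduced regression loss per positive sample
    weights = bias + (1 - bias) * pos_cls_scores.detach() ** k
    weights = weights / weights.mean().clamp(min=1e-6)  # keep the overall loss scale stable
    return (weights * pos_reg_loss).sum() / max(pos_reg_loss.numel(), 1)

# toy usage: the well-classified box (score 0.9) contributes most to localization
print(carl_weighting_sketch(torch.tensor([0.9, 0.4, 0.7]), torch.tensor([0.2, 0.5, 0.3])))
```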
- - losses_cls, losses_bbox = multi_apply( - self.loss_single, - all_cls_scores, - all_bbox_preds, - all_anchors, - all_labels, - all_label_weights, - all_bbox_targets, - all_bbox_weights, - num_total_samples=num_total_pos) - loss_dict = dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - if carl_loss_cfg is not None: - loss_dict.update(loss_carl) - return loss_dict diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/reppoints_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/reppoints_head.py deleted file mode 100644 index f7204141..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/reppoints_head.py +++ /dev/null @@ -1,764 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import DeformConv2d - -from mmdet.core import (build_assigner, build_sampler, images_to_levels, - multi_apply, unmap) -from mmdet.core.anchor.point_generator import MlvlPointGenerator -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class RepPointsHead(AnchorFreeHead): - """RepPoint head. - - Args: - point_feat_channels (int): Number of channels of points features. - gradient_mul (float): The multiplier to gradients from - points refinement and recognition. - point_strides (Iterable): points strides. - point_base_scale (int): bbox scale for assigning labels. - loss_cls (dict): Config of classification loss. - loss_bbox_init (dict): Config of initial points loss. - loss_bbox_refine (dict): Config of points loss in refinement. - use_grid_points (bool): If we use bounding box representation, the - reppoints is represented as grid points on the bounding box. - center_init (bool): Whether to use center point assignment. - transform_method (str): The methods to transform RepPoints to bbox. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - point_feat_channels=256, - num_points=9, - gradient_mul=0.1, - point_strides=[8, 16, 32, 64, 128], - point_base_scale=4, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_init=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=0.5), - loss_bbox_refine=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - use_grid_points=False, - center_init=True, - transform_method='moment', - moment_mul=0.01, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='reppoints_cls_out', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.num_points = num_points - self.point_feat_channels = point_feat_channels - self.use_grid_points = use_grid_points - self.center_init = center_init - - # we use deform conv to extract points features - self.dcn_kernel = int(np.sqrt(num_points)) - self.dcn_pad = int((self.dcn_kernel - 1) / 2) - assert self.dcn_kernel * self.dcn_kernel == num_points, \ - 'The points number should be a square number.' - assert self.dcn_kernel % 2 == 1, \ - 'The points number should be an odd square number.' 
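As a quick standalone illustration of the two assertions above: the sampled points must form a centered square deformable-conv kernel, e.g. the default `num_points=9` gives a 3x3 kernel with offsets spanning -1..1, mirroring the `dcn_base_offset` construction that follows (values here are illustrative only):

```python
import numpy as np

# Standalone illustration of the assertions above: num_points must be an odd
# square so the points form a centered k x k deformable-conv kernel.
num_points = 9
dcn_kernel = int(np.sqrt(num_points))      # 3
dcn_pad = (dcn_kernel - 1) // 2            # 1
assert dcn_kernel * dcn_kernel == num_points and dcn_kernel % 2 == 1
base = np.arange(-dcn_pad, dcn_pad + 1)    # [-1, 0, 1]
offsets = np.stack(np.meshgrid(base, base, indexing="ij"), -1).reshape(-1, 2)
print(offsets.tolist())                    # nine (dy, dx) offsets centred on (0, 0)
```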
- dcn_base = np.arange(-self.dcn_pad, - self.dcn_pad + 1).astype(np.float64) - dcn_base_y = np.repeat(dcn_base, self.dcn_kernel) - dcn_base_x = np.tile(dcn_base, self.dcn_kernel) - dcn_base_offset = np.stack([dcn_base_y, dcn_base_x], axis=1).reshape( - (-1)) - self.dcn_base_offset = torch.tensor(dcn_base_offset).view(1, -1, 1, 1) - - super().__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - init_cfg=init_cfg, - **kwargs) - - self.gradient_mul = gradient_mul - self.point_base_scale = point_base_scale - self.point_strides = point_strides - self.prior_generator = MlvlPointGenerator( - self.point_strides, offset=0.) - - self.sampling = loss_cls['type'] not in ['FocalLoss'] - if self.train_cfg: - self.init_assigner = build_assigner(self.train_cfg.init.assigner) - self.refine_assigner = build_assigner( - self.train_cfg.refine.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.transform_method = transform_method - if self.transform_method == 'moment': - self.moment_transfer = nn.Parameter( - data=torch.zeros(2), requires_grad=True) - self.moment_mul = moment_mul - - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - if self.use_sigmoid_cls: - self.cls_out_channels = self.num_classes - else: - self.cls_out_channels = self.num_classes + 1 - self.loss_bbox_init = build_loss(loss_bbox_init) - self.loss_bbox_refine = build_loss(loss_bbox_refine) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - pts_out_dim = 4 if self.use_grid_points else 2 * self.num_points - self.reppoints_cls_conv = DeformConv2d(self.feat_channels, - self.point_feat_channels, - self.dcn_kernel, 1, - self.dcn_pad) - self.reppoints_cls_out = nn.Conv2d(self.point_feat_channels, - self.cls_out_channels, 1, 1, 0) - self.reppoints_pts_init_conv = nn.Conv2d(self.feat_channels, - self.point_feat_channels, 3, - 1, 1) - self.reppoints_pts_init_out = nn.Conv2d(self.point_feat_channels, - pts_out_dim, 1, 1, 0) - self.reppoints_pts_refine_conv = DeformConv2d(self.feat_channels, - self.point_feat_channels, - self.dcn_kernel, 1, - self.dcn_pad) - self.reppoints_pts_refine_out = nn.Conv2d(self.point_feat_channels, - pts_out_dim, 1, 1, 0) - - def points2bbox(self, pts, y_first=True): - """Converting the points set into bounding box. - - :param pts: the input points sets (fields), each points - set (fields) is represented as 2n scalar. - :param y_first: if y_first=True, the point set is represented as - [y1, x1, y2, x2 ... yn, xn], otherwise the point set is - represented as [x1, y1, x2, y2 ... xn, yn]. - :return: each points set is converting to a bbox [x1, y1, x2, y2]. - """ - pts_reshape = pts.view(pts.shape[0], -1, 2, *pts.shape[2:]) - pts_y = pts_reshape[:, :, 0, ...] if y_first else pts_reshape[:, :, 1, - ...] - pts_x = pts_reshape[:, :, 1, ...] if y_first else pts_reshape[:, :, 0, - ...] 
- if self.transform_method == 'minmax': - bbox_left = pts_x.min(dim=1, keepdim=True)[0] - bbox_right = pts_x.max(dim=1, keepdim=True)[0] - bbox_up = pts_y.min(dim=1, keepdim=True)[0] - bbox_bottom = pts_y.max(dim=1, keepdim=True)[0] - bbox = torch.cat([bbox_left, bbox_up, bbox_right, bbox_bottom], - dim=1) - elif self.transform_method == 'partial_minmax': - pts_y = pts_y[:, :4, ...] - pts_x = pts_x[:, :4, ...] - bbox_left = pts_x.min(dim=1, keepdim=True)[0] - bbox_right = pts_x.max(dim=1, keepdim=True)[0] - bbox_up = pts_y.min(dim=1, keepdim=True)[0] - bbox_bottom = pts_y.max(dim=1, keepdim=True)[0] - bbox = torch.cat([bbox_left, bbox_up, bbox_right, bbox_bottom], - dim=1) - elif self.transform_method == 'moment': - pts_y_mean = pts_y.mean(dim=1, keepdim=True) - pts_x_mean = pts_x.mean(dim=1, keepdim=True) - pts_y_std = torch.std(pts_y - pts_y_mean, dim=1, keepdim=True) - pts_x_std = torch.std(pts_x - pts_x_mean, dim=1, keepdim=True) - moment_transfer = (self.moment_transfer * self.moment_mul) + ( - self.moment_transfer.detach() * (1 - self.moment_mul)) - moment_width_transfer = moment_transfer[0] - moment_height_transfer = moment_transfer[1] - half_width = pts_x_std * torch.exp(moment_width_transfer) - half_height = pts_y_std * torch.exp(moment_height_transfer) - bbox = torch.cat([ - pts_x_mean - half_width, pts_y_mean - half_height, - pts_x_mean + half_width, pts_y_mean + half_height - ], - dim=1) - else: - raise NotImplementedError - return bbox - - def gen_grid_from_reg(self, reg, previous_boxes): - """Base on the previous bboxes and regression values, we compute the - regressed bboxes and generate the grids on the bboxes. - - :param reg: the regression value to previous bboxes. - :param previous_boxes: previous bboxes. - :return: generate grids on the regressed bboxes. - """ - b, _, h, w = reg.shape - bxy = (previous_boxes[:, :2, ...] + previous_boxes[:, 2:, ...]) / 2. - bwh = (previous_boxes[:, 2:, ...] - - previous_boxes[:, :2, ...]).clamp(min=1e-6) - grid_topleft = bxy + bwh * reg[:, :2, ...] - 0.5 * bwh * torch.exp( - reg[:, 2:, ...]) - grid_wh = bwh * torch.exp(reg[:, 2:, ...]) - grid_left = grid_topleft[:, [0], ...] - grid_top = grid_topleft[:, [1], ...] - grid_width = grid_wh[:, [0], ...] - grid_height = grid_wh[:, [1], ...] - intervel = torch.linspace(0., 1., self.dcn_kernel).view( - 1, self.dcn_kernel, 1, 1).type_as(reg) - grid_x = grid_left + grid_width * intervel - grid_x = grid_x.unsqueeze(1).repeat(1, self.dcn_kernel, 1, 1, 1) - grid_x = grid_x.view(b, -1, h, w) - grid_y = grid_top + grid_height * intervel - grid_y = grid_y.unsqueeze(2).repeat(1, 1, self.dcn_kernel, 1, 1) - grid_y = grid_y.view(b, -1, h, w) - grid_yx = torch.stack([grid_y, grid_x], dim=2) - grid_yx = grid_yx.view(b, -1, h, w) - regressed_bbox = torch.cat([ - grid_left, grid_top, grid_left + grid_width, grid_top + grid_height - ], 1) - return grid_yx, regressed_bbox - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def forward_single(self, x): - """Forward feature map of a single FPN level.""" - dcn_base_offset = self.dcn_base_offset.type_as(x) - # If we use center_init, the initial reppoints is from center points. - # If we use bounding bbox representation, the initial reppoints is - # from regular grid placed on a pre-defined bbox. 
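Before the forward path continues below, a toy check of the `minmax` branch of `points2bbox` defined above: it simply takes the tightest axis-aligned box around a (y, x)-ordered point set, whereas the learned `moment` branch builds the box from the points' mean and exponentially scaled standard deviation. Shapes and values here are simplified for illustration and are not the head's tensors:

```python
import torch

# Toy check of the 'minmax' transform: the tightest axis-aligned box around a
# (y, x)-ordered point set.
pts = torch.tensor([[0.0, 1.0, 2.0, 3.0, 1.0, 5.0]])  # [y1, x1, y2, x2, y3, x3]
pts = pts.view(1, -1, 2)                               # (N, num_points, 2)
ys, xs = pts[..., 0], pts[..., 1]
bbox = torch.stack([xs.min(1).values, ys.min(1).values,
                    xs.max(1).values, ys.max(1).values], dim=1)
print(bbox)  # tensor([[1., 0., 5., 2.]]) i.e. [x1, y1, x2, y2]
```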
- if self.use_grid_points or not self.center_init: - scale = self.point_base_scale / 2 - points_init = dcn_base_offset / dcn_base_offset.max() * scale - bbox_init = x.new_tensor([-scale, -scale, scale, - scale]).view(1, 4, 1, 1) - else: - points_init = 0 - cls_feat = x - pts_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - pts_feat = reg_conv(pts_feat) - # initialize reppoints - pts_out_init = self.reppoints_pts_init_out( - self.relu(self.reppoints_pts_init_conv(pts_feat))) - if self.use_grid_points: - pts_out_init, bbox_out_init = self.gen_grid_from_reg( - pts_out_init, bbox_init.detach()) - else: - pts_out_init = pts_out_init + points_init - # refine and classify reppoints - pts_out_init_grad_mul = (1 - self.gradient_mul) * pts_out_init.detach( - ) + self.gradient_mul * pts_out_init - dcn_offset = pts_out_init_grad_mul - dcn_base_offset - cls_out = self.reppoints_cls_out( - self.relu(self.reppoints_cls_conv(cls_feat, dcn_offset))) - pts_out_refine = self.reppoints_pts_refine_out( - self.relu(self.reppoints_pts_refine_conv(pts_feat, dcn_offset))) - if self.use_grid_points: - pts_out_refine, bbox_out_refine = self.gen_grid_from_reg( - pts_out_refine, bbox_out_init.detach()) - else: - pts_out_refine = pts_out_refine + pts_out_init.detach() - - if self.training: - return cls_out, pts_out_init, pts_out_refine - else: - return cls_out, self.points2bbox(pts_out_refine) - - def get_points(self, featmap_sizes, img_metas, device): - """Get points according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - - Returns: - tuple: points of each image, valid flags of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # points center for one time - multi_level_points = self.prior_generator.grid_priors( - featmap_sizes, device=device, with_stride=True) - points_list = [[point.clone() for point in multi_level_points] - for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level grids - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.prior_generator.valid_flags( - featmap_sizes, img_meta['pad_shape']) - valid_flag_list.append(multi_level_flags) - - return points_list, valid_flag_list - - def centers_to_bboxes(self, point_list): - """Get bboxes according to center points. - - Only used in :class:`MaxIoUAssigner`. 
- """ - bbox_list = [] - for i_img, point in enumerate(point_list): - bbox = [] - for i_lvl in range(len(self.point_strides)): - scale = self.point_base_scale * self.point_strides[i_lvl] * 0.5 - bbox_shift = torch.Tensor([-scale, -scale, scale, - scale]).view(1, 4).type_as(point[0]) - bbox_center = torch.cat( - [point[i_lvl][:, :2], point[i_lvl][:, :2]], dim=1) - bbox.append(bbox_center + bbox_shift) - bbox_list.append(bbox) - return bbox_list - - def offset_to_pts(self, center_list, pred_list): - """Change from point offset to point coordinate.""" - pts_list = [] - for i_lvl in range(len(self.point_strides)): - pts_lvl = [] - for i_img in range(len(center_list)): - pts_center = center_list[i_img][i_lvl][:, :2].repeat( - 1, self.num_points) - pts_shift = pred_list[i_lvl][i_img] - yx_pts_shift = pts_shift.permute(1, 2, 0).view( - -1, 2 * self.num_points) - y_pts_shift = yx_pts_shift[..., 0::2] - x_pts_shift = yx_pts_shift[..., 1::2] - xy_pts_shift = torch.stack([x_pts_shift, y_pts_shift], -1) - xy_pts_shift = xy_pts_shift.view(*yx_pts_shift.shape[:-1], -1) - pts = xy_pts_shift * self.point_strides[i_lvl] + pts_center - pts_lvl.append(pts) - pts_lvl = torch.stack(pts_lvl, 0) - pts_list.append(pts_lvl) - return pts_list - - def _point_target_single(self, - flat_proposals, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - stage='init', - unmap_outputs=True): - inside_flags = valid_flags - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample proposals - proposals = flat_proposals[inside_flags, :] - - if stage == 'init': - assigner = self.init_assigner - pos_weight = self.train_cfg.init.pos_weight - else: - assigner = self.refine_assigner - pos_weight = self.train_cfg.refine.pos_weight - assign_result = assigner.assign(proposals, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - sampling_result = self.sampler.sample(assign_result, proposals, - gt_bboxes) - - num_valid_proposals = proposals.shape[0] - bbox_gt = proposals.new_zeros([num_valid_proposals, 4]) - pos_proposals = torch.zeros_like(proposals) - proposals_weights = proposals.new_zeros([num_valid_proposals, 4]) - labels = proposals.new_full((num_valid_proposals, ), - self.num_classes, - dtype=torch.long) - label_weights = proposals.new_zeros( - num_valid_proposals, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - pos_gt_bboxes = sampling_result.pos_gt_bboxes - bbox_gt[pos_inds, :] = pos_gt_bboxes - pos_proposals[pos_inds, :] = proposals[pos_inds, :] - proposals_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of proposals - if unmap_outputs: - num_total_proposals = flat_proposals.size(0) - labels = unmap(labels, num_total_proposals, inside_flags) - label_weights = unmap(label_weights, num_total_proposals, - inside_flags) - bbox_gt = unmap(bbox_gt, num_total_proposals, inside_flags) - pos_proposals = unmap(pos_proposals, num_total_proposals, - inside_flags) - proposals_weights = unmap(proposals_weights, num_total_proposals, - inside_flags) - - return (labels, label_weights, bbox_gt, pos_proposals, - proposals_weights, pos_inds, neg_inds) - - def 
get_targets(self, - proposals_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - stage='init', - label_channels=1, - unmap_outputs=True): - """Compute corresponding GT box and classification targets for - proposals. - - Args: - proposals_list (list[list]): Multi level points/bboxes of each - image. - valid_flag_list (list[list]): Multi level valid flags of each - image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_bboxes_list (list[Tensor]): Ground truth labels of each box. - stage (str): `init` or `refine`. Generate target for init stage or - refine stage - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each level. # noqa: E501 - - bbox_gt_list (list[Tensor]): Ground truth bbox of each level. - - proposal_list (list[Tensor]): Proposals(points/bboxes) of each level. # noqa: E501 - - proposal_weights_list (list[Tensor]): Proposal weights of each level. # noqa: E501 - - num_total_pos (int): Number of positive samples in all images. # noqa: E501 - - num_total_neg (int): Number of negative samples in all images. # noqa: E501 - """ - assert stage in ['init', 'refine'] - num_imgs = len(img_metas) - assert len(proposals_list) == len(valid_flag_list) == num_imgs - - # points number of multi levels - num_level_proposals = [points.size(0) for points in proposals_list[0]] - - # concat all level points and flags to a single tensor - for i in range(num_imgs): - assert len(proposals_list[i]) == len(valid_flag_list[i]) - proposals_list[i] = torch.cat(proposals_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_labels, all_label_weights, all_bbox_gt, all_proposals, - all_proposal_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._point_target_single, - proposals_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - stage=stage, - unmap_outputs=unmap_outputs) - # no valid points - if any([labels is None for labels in all_labels]): - return None - # sampled points of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - labels_list = images_to_levels(all_labels, num_level_proposals) - label_weights_list = images_to_levels(all_label_weights, - num_level_proposals) - bbox_gt_list = images_to_levels(all_bbox_gt, num_level_proposals) - proposals_list = images_to_levels(all_proposals, num_level_proposals) - proposal_weights_list = images_to_levels(all_proposal_weights, - num_level_proposals) - return (labels_list, label_weights_list, bbox_gt_list, proposals_list, - proposal_weights_list, num_total_pos, num_total_neg) - - def loss_single(self, cls_score, pts_pred_init, pts_pred_refine, labels, - label_weights, bbox_gt_init, bbox_weights_init, - bbox_gt_refine, bbox_weights_refine, stride, - num_total_samples_init, num_total_samples_refine): - # classification loss - labels = labels.reshape(-1) - label_weights = 
label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - cls_score = cls_score.contiguous() - loss_cls = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=num_total_samples_refine) - - # points loss - bbox_gt_init = bbox_gt_init.reshape(-1, 4) - bbox_weights_init = bbox_weights_init.reshape(-1, 4) - bbox_pred_init = self.points2bbox( - pts_pred_init.reshape(-1, 2 * self.num_points), y_first=False) - bbox_gt_refine = bbox_gt_refine.reshape(-1, 4) - bbox_weights_refine = bbox_weights_refine.reshape(-1, 4) - bbox_pred_refine = self.points2bbox( - pts_pred_refine.reshape(-1, 2 * self.num_points), y_first=False) - normalize_term = self.point_base_scale * stride - loss_pts_init = self.loss_bbox_init( - bbox_pred_init / normalize_term, - bbox_gt_init / normalize_term, - bbox_weights_init, - avg_factor=num_total_samples_init) - loss_pts_refine = self.loss_bbox_refine( - bbox_pred_refine / normalize_term, - bbox_gt_refine / normalize_term, - bbox_weights_refine, - avg_factor=num_total_samples_refine) - return loss_cls, loss_pts_init, loss_pts_refine - - def loss(self, - cls_scores, - pts_preds_init, - pts_preds_refine, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - device = cls_scores[0].device - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - # target for initial stage - center_list, valid_flag_list = self.get_points(featmap_sizes, - img_metas, device) - pts_coordinate_preds_init = self.offset_to_pts(center_list, - pts_preds_init) - if self.train_cfg.init.assigner['type'] == 'PointAssigner': - # Assign target for center list - candidate_list = center_list - else: - # transform center list to bbox list and - # assign target for bbox list - bbox_list = self.centers_to_bboxes(center_list) - candidate_list = bbox_list - cls_reg_targets_init = self.get_targets( - candidate_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - stage='init', - label_channels=label_channels) - (*_, bbox_gt_list_init, candidate_list_init, bbox_weights_list_init, - num_total_pos_init, num_total_neg_init) = cls_reg_targets_init - num_total_samples_init = ( - num_total_pos_init + - num_total_neg_init if self.sampling else num_total_pos_init) - - # target for refinement stage - center_list, valid_flag_list = self.get_points(featmap_sizes, - img_metas, device) - pts_coordinate_preds_refine = self.offset_to_pts( - center_list, pts_preds_refine) - bbox_list = [] - for i_img, center in enumerate(center_list): - bbox = [] - for i_lvl in range(len(pts_preds_refine)): - bbox_preds_init = self.points2bbox( - pts_preds_init[i_lvl].detach()) - bbox_shift = bbox_preds_init * self.point_strides[i_lvl] - bbox_center = torch.cat( - [center[i_lvl][:, :2], center[i_lvl][:, :2]], dim=1) - bbox.append(bbox_center + - bbox_shift[i_img].permute(1, 2, 0).reshape(-1, 4)) - bbox_list.append(bbox) - cls_reg_targets_refine = self.get_targets( - bbox_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - stage='refine', - label_channels=label_channels) - (labels_list, label_weights_list, bbox_gt_list_refine, - candidate_list_refine, bbox_weights_list_refine, num_total_pos_refine, - num_total_neg_refine) = cls_reg_targets_refine - num_total_samples_refine = ( - num_total_pos_refine + - num_total_neg_refine if self.sampling else 
num_total_pos_refine) - - # compute loss - losses_cls, losses_pts_init, losses_pts_refine = multi_apply( - self.loss_single, - cls_scores, - pts_coordinate_preds_init, - pts_coordinate_preds_refine, - labels_list, - label_weights_list, - bbox_gt_list_init, - bbox_weights_list_init, - bbox_gt_list_refine, - bbox_weights_list_refine, - self.point_strides, - num_total_samples_init=num_total_samples_init, - num_total_samples_refine=num_total_samples_refine) - loss_dict_all = { - 'loss_cls': losses_cls, - 'loss_pts_init': losses_pts_init, - 'loss_pts_refine': losses_pts_refine - } - return loss_dict_all - - # Same as base_dense_head/_get_bboxes_single except self._bbox_decode - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. RepPoints head does not need - this value. - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 2). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_score_list) == len(bbox_pred_list) - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for level_idx, (cls_score, bbox_pred, priors) in enumerate( - zip(cls_score_list, bbox_pred_list, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1)[:, :-1] - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. 
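For intuition, a minimal stand-in for the score-threshold-plus-top-k filtering that `filter_scores_and_topk` performs in the call below; the names and shapes are simplified and this is not the mmdet utility's exact signature or return layout:

```python
import torch

# Minimal stand-in for score-thresholding + top-k keep before NMS.
# `scores` is (num_priors, num_classes); returns kept scores, class labels and
# the prior index each kept score came from.
def filter_scores_and_topk_sketch(scores, score_thr, topk):
    valid = scores > score_thr
    flat_scores = scores[valid]                       # kept scores, row-major
    prior_idx, labels = valid.nonzero(as_tuple=True)  # which prior / which class
    flat_scores, order = flat_scores.sort(descending=True)
    keep = min(topk, flat_scores.numel()) if topk > 0 else flat_scores.numel()
    order = order[:keep]
    return flat_scores[:keep], labels[order], prior_idx[order]

scores = torch.tensor([[0.1, 0.8], [0.6, 0.05], [0.3, 0.7]])
print(filter_scores_and_topk_sketch(scores, score_thr=0.2, topk=2))
```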
- results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, _, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - bboxes = self._bbox_decode(priors, bbox_pred, - self.point_strides[level_idx], - img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process( - mlvl_scores, - mlvl_labels, - mlvl_bboxes, - img_meta['scale_factor'], - cfg, - rescale=rescale, - with_nms=with_nms) - - def _bbox_decode(self, points, bbox_pred, stride, max_shape): - bbox_pos_center = torch.cat([points[:, :2], points[:, :2]], dim=1) - bboxes = bbox_pred * stride + bbox_pos_center - x1 = bboxes[:, 0].clamp(min=0, max=max_shape[1]) - y1 = bboxes[:, 1].clamp(min=0, max=max_shape[0]) - x2 = bboxes[:, 2].clamp(min=0, max=max_shape[1]) - y2 = bboxes[:, 3].clamp(min=0, max=max_shape[0]) - decoded_bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - return decoded_bboxes diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/retina_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/retina_head.py deleted file mode 100644 index a48720c2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/retina_head.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class RetinaHead(AnchorHead): - r"""An anchor-based head used in `RetinaNet - `_. - - The head contains two subnetworks. The first classifies anchor boxes and - the second regresses deltas for the anchors. 
- - Example: - >>> import torch - >>> self = RetinaHead(11, 7) - >>> x = torch.rand(1, 7, 32, 32) - >>> cls_score, bbox_pred = self.forward_single(x) - >>> # Each anchor predicts a score for each class except background - >>> cls_per_anchor = cls_score.shape[1] / self.num_anchors - >>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors - >>> assert cls_per_anchor == (self.num_classes) - >>> assert box_per_anchor == 4 - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='retina_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(RetinaHead, self).__init__( - num_classes, - in_channels, - anchor_generator=anchor_generator, - init_cfg=init_cfg, - **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.retina_cls = nn.Conv2d( - self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 3, - padding=1) - self.retina_reg = nn.Conv2d( - self.feat_channels, self.num_base_priors * 4, 3, padding=1) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale - level, the channels number is num_anchors * 4. - """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.retina_cls(cls_feat) - bbox_pred = self.retina_reg(reg_feat) - return cls_score, bbox_pred diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/retina_sepbn_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/retina_sepbn_head.py deleted file mode 100644 index b385c618..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/retina_sepbn_head.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init - -from ..builder import HEADS -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class RetinaSepBNHead(AnchorHead): - """"RetinaHead with separate BN. - - In RetinaHead, conv/norm layers are shared across different FPN levels, - while in RetinaSepBNHead, conv layers are shared across different FPN - levels, but BN layers are separated. 
- """ - - def __init__(self, - num_classes, - num_ins, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.num_ins = num_ins - super(RetinaSepBNHead, self).__init__( - num_classes, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.num_ins): - cls_convs = nn.ModuleList() - reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.cls_convs.append(cls_convs) - self.reg_convs.append(reg_convs) - for i in range(self.stacked_convs): - for j in range(1, self.num_ins): - self.cls_convs[j][i].conv = self.cls_convs[0][i].conv - self.reg_convs[j][i].conv = self.reg_convs[0][i].conv - self.retina_cls = nn.Conv2d( - self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 3, - padding=1) - self.retina_reg = nn.Conv2d( - self.feat_channels, self.num_base_priors * 4, 3, padding=1) - - def init_weights(self): - """Initialize weights of the head.""" - super(RetinaSepBNHead, self).init_weights() - for m in self.cls_convs[0]: - normal_init(m.conv, std=0.01) - for m in self.reg_convs[0]: - normal_init(m.conv, std=0.01) - bias_cls = bias_init_with_prob(0.01) - normal_init(self.retina_cls, std=0.01, bias=bias_cls) - normal_init(self.retina_reg, std=0.01) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - cls_scores = [] - bbox_preds = [] - for i, x in enumerate(feats): - cls_feat = feats[i] - reg_feat = feats[i] - for cls_conv in self.cls_convs[i]: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs[i]: - reg_feat = reg_conv(reg_feat) - cls_score = self.retina_cls(cls_feat) - bbox_pred = self.retina_reg(reg_feat) - cls_scores.append(cls_score) - bbox_preds.append(bbox_pred) - return cls_scores, bbox_preds diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/rpn_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/rpn_head.py deleted file mode 100644 index f5d6a3b3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/rpn_head.py +++ /dev/null @@ -1,265 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.ops import batched_nms - -from ..builder import HEADS -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class RPNHead(AnchorHead): - """RPN head. - - Args: - in_channels (int): Number of channels in the input feature map. - init_cfg (dict or list[dict], optional): Initialization config dict. - num_convs (int): Number of convolution layers in the head. Default 1. - """ # noqa: W605 - - def __init__(self, - in_channels, - init_cfg=dict(type='Normal', layer='Conv2d', std=0.01), - num_convs=1, - **kwargs): - self.num_convs = num_convs - super(RPNHead, self).__init__( - 1, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - if self.num_convs > 1: - rpn_convs = [] - for i in range(self.num_convs): - if i == 0: - in_channels = self.in_channels - else: - in_channels = self.feat_channels - # use ``inplace=False`` to avoid error: one of the variables - # needed for gradient computation has been modified by an - # inplace operation. - rpn_convs.append( - ConvModule( - in_channels, - self.feat_channels, - 3, - padding=1, - inplace=False)) - self.rpn_conv = nn.Sequential(*rpn_convs) - else: - self.rpn_conv = nn.Conv2d( - self.in_channels, self.feat_channels, 3, padding=1) - self.rpn_cls = nn.Conv2d(self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 1) - self.rpn_reg = nn.Conv2d(self.feat_channels, self.num_base_priors * 4, - 1) - - def forward_single(self, x): - """Forward feature map of a single scale level.""" - x = self.rpn_conv(x) - x = F.relu(x, inplace=True) - rpn_cls_score = self.rpn_cls(x) - rpn_bbox_pred = self.rpn_reg(x) - return rpn_cls_score, rpn_bbox_pred - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - losses = super(RPNHead, self).loss( - cls_scores, - bbox_preds, - gt_bboxes, - None, - img_metas, - gt_bboxes_ignore=gt_bboxes_ignore) - return dict( - loss_rpn_cls=losses['loss_cls'], loss_rpn_bbox=losses['loss_bbox']) - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_anchors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_anchors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has - shape (num_anchors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. RPN head does not need this value. 
- mlvl_anchors (list[Tensor]): Anchors of all scale level - each item has shape (num_anchors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - Tensor: Labeled boxes in shape (n, 5), where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. - """ - cfg = self.test_cfg if cfg is None else cfg - cfg = copy.deepcopy(cfg) - img_shape = img_meta['img_shape'] - - # bboxes from different level should be independent during NMS, - # level_ids are used as labels for batched NMS to separate them - level_ids = [] - mlvl_scores = [] - mlvl_bbox_preds = [] - mlvl_valid_anchors = [] - nms_pre = cfg.get('nms_pre', -1) - for level_idx in range(len(cls_score_list)): - rpn_cls_score = cls_score_list[level_idx] - rpn_bbox_pred = bbox_pred_list[level_idx] - assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:] - rpn_cls_score = rpn_cls_score.permute(1, 2, 0) - if self.use_sigmoid_cls: - rpn_cls_score = rpn_cls_score.reshape(-1) - scores = rpn_cls_score.sigmoid() - else: - rpn_cls_score = rpn_cls_score.reshape(-1, 2) - # We set FG labels to [0, num_class-1] and BG label to - # num_class in RPN head since mmdet v2.5, which is unified to - # be consistent with other head since mmdet v2.0. In mmdet v2.0 - # to v2.4 we keep BG label as 0 and FG label as 1 in rpn head. - scores = rpn_cls_score.softmax(dim=1)[:, 0] - rpn_bbox_pred = rpn_bbox_pred.permute(1, 2, 0).reshape(-1, 4) - - anchors = mlvl_anchors[level_idx] - if 0 < nms_pre < scores.shape[0]: - # sort is faster than topk - # _, topk_inds = scores.topk(cfg.nms_pre) - ranked_scores, rank_inds = scores.sort(descending=True) - topk_inds = rank_inds[:nms_pre] - scores = ranked_scores[:nms_pre] - rpn_bbox_pred = rpn_bbox_pred[topk_inds, :] - anchors = anchors[topk_inds, :] - - mlvl_scores.append(scores) - mlvl_bbox_preds.append(rpn_bbox_pred) - mlvl_valid_anchors.append(anchors) - level_ids.append( - scores.new_full((scores.size(0), ), - level_idx, - dtype=torch.long)) - - return self._bbox_post_process(mlvl_scores, mlvl_bbox_preds, - mlvl_valid_anchors, level_ids, cfg, - img_shape) - - def _bbox_post_process(self, mlvl_scores, mlvl_bboxes, mlvl_valid_anchors, - level_ids, cfg, img_shape, **kwargs): - """bbox post-processing method. - - Do the nms operation for bboxes in same level. - - Args: - mlvl_scores (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_bboxes, ). - mlvl_bboxes (list[Tensor]): Decoded bboxes from all scale - levels of a single image, each item has shape (num_bboxes, 4). - mlvl_valid_anchors (list[Tensor]): Anchors of all scale level - each item has shape (num_bboxes, 4). - level_ids (list[Tensor]): Indexes from all scale levels of a - single image, each item has shape (num_bboxes, ). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, `self.test_cfg` would be used. - img_shape (tuple(int)): The shape of model's input image. - - Returns: - Tensor: Labeled boxes in shape (n, 5), where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. 
- """ - scores = torch.cat(mlvl_scores) - anchors = torch.cat(mlvl_valid_anchors) - rpn_bbox_pred = torch.cat(mlvl_bboxes) - proposals = self.bbox_coder.decode( - anchors, rpn_bbox_pred, max_shape=img_shape) - ids = torch.cat(level_ids) - - if cfg.min_bbox_size >= 0: - w = proposals[:, 2] - proposals[:, 0] - h = proposals[:, 3] - proposals[:, 1] - valid_mask = (w > cfg.min_bbox_size) & (h > cfg.min_bbox_size) - if not valid_mask.all(): - proposals = proposals[valid_mask] - scores = scores[valid_mask] - ids = ids[valid_mask] - - if proposals.numel() > 0: - dets, _ = batched_nms(proposals, scores, ids, cfg.nms) - else: - return proposals.new_zeros(0, 5) - - return dets[:cfg.max_per_img] - - def onnx_export(self, x, img_metas): - """Test without augmentation. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): Meta info of each image. - Returns: - Tensor: dets of shape [N, num_det, 5]. - """ - cls_scores, bbox_preds = self(x) - - assert len(cls_scores) == len(bbox_preds) - - batch_bboxes, batch_scores = super(RPNHead, self).onnx_export( - cls_scores, bbox_preds, img_metas=img_metas, with_nms=False) - # Use ONNX::NonMaxSuppression in deployment - from mmdet.core.export import add_dummy_nms_for_onnx - cfg = copy.deepcopy(self.test_cfg) - score_threshold = cfg.nms.get('score_thr', 0.0) - nms_pre = cfg.get('deploy_nms_pre', -1) - # Different from the normal forward doing NMS level by level, - # we do NMS across all levels when exporting ONNX. - dets, _ = add_dummy_nms_for_onnx(batch_bboxes, batch_scores, - cfg.max_per_img, - cfg.nms.iou_threshold, - score_threshold, nms_pre, - cfg.max_per_img) - return dets diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/sabl_retina_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/sabl_retina_head.py deleted file mode 100644 index 4fede710..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/sabl_retina_head.py +++ /dev/null @@ -1,630 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import force_fp32 - -from mmdet.core import (build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, images_to_levels, - multi_apply, unmap) -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin -from .guided_anchor_head import GuidedAnchorHead - - -@HEADS.register_module() -class SABLRetinaHead(BaseDenseHead, BBoxTestMixin): - """Side-Aware Boundary Localization (SABL) for RetinaNet. - - The anchor generation, assigning and sampling in SABLRetinaHead - are the same as GuidedAnchorHead for guided anchoring. - - Please refer to https://arxiv.org/abs/1912.04260 for more details. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of Convs for classification \ - and regression branches. Defaults to 4. - feat_channels (int): Number of hidden channels. \ - Defaults to 256. - approx_anchor_generator (dict): Config dict for approx generator. - square_anchor_generator (dict): Config dict for square generator. - conv_cfg (dict): Config dict for ConvModule. Defaults to None. - norm_cfg (dict): Config dict for Norm Layer. Defaults to None. - bbox_coder (dict): Config dict for bbox coder. 
- reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - train_cfg (dict): Training config of SABLRetinaHead. - test_cfg (dict): Testing config of SABLRetinaHead. - loss_cls (dict): Config of classification loss. - loss_bbox_cls (dict): Config of classification loss for bbox branch. - loss_bbox_reg (dict): Config of regression loss for bbox branch. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - conv_cfg=None, - norm_cfg=None, - bbox_coder=dict( - type='BucketingBBoxCoder', - num_buckets=14, - scale_factor=3.0), - reg_decoded_bbox=False, - train_cfg=None, - test_cfg=None, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.5), - loss_bbox_reg=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='retina_cls', - std=0.01, - bias_prob=0.01))): - super(SABLRetinaHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.num_buckets = bbox_coder['num_buckets'] - self.side_num = int(np.ceil(self.num_buckets / 2)) - - assert (approx_anchor_generator['octave_base_scale'] == - square_anchor_generator['scales'][0]) - assert (approx_anchor_generator['strides'] == - square_anchor_generator['strides']) - - self.approx_anchor_generator = build_prior_generator( - approx_anchor_generator) - self.square_anchor_generator = build_prior_generator( - square_anchor_generator) - self.approxs_per_octave = ( - self.approx_anchor_generator.num_base_priors[0]) - - # one anchor per location - self.num_base_priors = self.square_anchor_generator.num_base_priors[0] - - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - self.reg_decoded_bbox = reg_decoded_bbox - - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - self.sampling = loss_cls['type'] not in [ - 'FocalLoss', 'GHMC', 'QualityFocalLoss' - ] - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox_cls = build_loss(loss_bbox_cls) - self.loss_bbox_reg = build_loss(loss_bbox_reg) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.fp16_enabled = False - self._init_layers() - - @property - def num_anchors(self): - 
warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'please use "num_base_priors" instead') - return self.square_anchor_generator.num_base_priors[0] - - def _init_layers(self): - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.retina_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.retina_bbox_reg = nn.Conv2d( - self.feat_channels, self.side_num * 4, 3, padding=1) - self.retina_bbox_cls = nn.Conv2d( - self.feat_channels, self.side_num * 4, 3, padding=1) - - def forward_single(self, x): - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.retina_cls(cls_feat) - bbox_cls_pred = self.retina_bbox_cls(reg_feat) - bbox_reg_pred = self.retina_bbox_reg(reg_feat) - bbox_pred = (bbox_cls_pred, bbox_reg_pred) - return cls_score, bbox_pred - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get squares according to feature map sizes and guided anchors. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): device for returned tensors - - Returns: - tuple: square approxs of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # squares for one time - multi_level_squares = self.square_anchor_generator.grid_priors( - featmap_sizes, device=device) - squares_list = [multi_level_squares for _ in range(num_imgs)] - - return squares_list - - def get_target(self, - approx_list, - inside_flag_list, - square_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=None, - sampling=True, - unmap_outputs=True): - """Compute bucketing targets. - Args: - approx_list (list[list]): Multi level approxs of each image. - inside_flag_list (list[list]): Multi level inside flags of each - image. - square_list (list[list]): Multi level squares of each image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes. - gt_bboxes_list (list[Tensor]): Gt bboxes of each image. - label_channels (int): Channel of label. - sampling (bool): Sample Anchors or not. - unmap_outputs (bool): unmap outputs or not. - - Returns: - tuple: Returns a tuple containing learning targets. - - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each \ - level. - - bbox_cls_targets_list (list[Tensor]): BBox cls targets of \ - each level. - - bbox_cls_weights_list (list[Tensor]): BBox cls weights of \ - each level. - - bbox_reg_targets_list (list[Tensor]): BBox reg targets of \ - each level. - - bbox_reg_weights_list (list[Tensor]): BBox reg weights of \ - each level. - - num_total_pos (int): Number of positive samples in all \ - images. 
- - num_total_neg (int): Number of negative samples in all \ - images. - """ - num_imgs = len(img_metas) - assert len(approx_list) == len(inside_flag_list) == len( - square_list) == num_imgs - # anchor number of multi levels - num_level_squares = [squares.size(0) for squares in square_list[0]] - # concat all level anchors and flags to a single tensor - inside_flag_flat_list = [] - approx_flat_list = [] - square_flat_list = [] - for i in range(num_imgs): - assert len(square_list[i]) == len(inside_flag_list[i]) - inside_flag_flat_list.append(torch.cat(inside_flag_list[i])) - approx_flat_list.append(torch.cat(approx_list[i])) - square_flat_list.append(torch.cat(square_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_labels, all_label_weights, all_bbox_cls_targets, - all_bbox_cls_weights, all_bbox_reg_targets, all_bbox_reg_weights, - pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - approx_flat_list, - inside_flag_flat_list, - square_flat_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - sampling=sampling, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_squares) - label_weights_list = images_to_levels(all_label_weights, - num_level_squares) - bbox_cls_targets_list = images_to_levels(all_bbox_cls_targets, - num_level_squares) - bbox_cls_weights_list = images_to_levels(all_bbox_cls_weights, - num_level_squares) - bbox_reg_targets_list = images_to_levels(all_bbox_reg_targets, - num_level_squares) - bbox_reg_weights_list = images_to_levels(all_bbox_reg_weights, - num_level_squares) - return (labels_list, label_weights_list, bbox_cls_targets_list, - bbox_cls_weights_list, bbox_reg_targets_list, - bbox_reg_weights_list, num_total_pos, num_total_neg) - - def _get_target_single(self, - flat_approxs, - inside_flags, - flat_squares, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=None, - sampling=True, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - flat_approxs (Tensor): flat approxs of a single image, - shape (n, 4) - inside_flags (Tensor): inside flags of a single image, - shape (n, ). - flat_squares (Tensor): flat squares of a single image, - shape (approxs_per_octave * n, 4) - gt_bboxes (Tensor): Ground truth bboxes of a single image, \ - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - sampling (bool): Sample Anchors or not. - unmap_outputs (bool): unmap outputs or not. 
- - Returns: - tuple: - - - labels_list (Tensor): Labels in a single image - - label_weights (Tensor): Label weights in a single image - - bbox_cls_targets (Tensor): BBox cls targets in a single image - - bbox_cls_weights (Tensor): BBox cls weights in a single image - - bbox_reg_targets (Tensor): BBox reg targets in a single image - - bbox_reg_weights (Tensor): BBox reg weights in a single image - - num_total_pos (int): Number of positive samples \ - in a single image - - num_total_neg (int): Number of negative samples \ - in a single image - """ - if not inside_flags.any(): - return (None, ) * 8 - # assign gt and sample anchors - expand_inside_flags = inside_flags[:, None].expand( - -1, self.approxs_per_octave).reshape(-1) - approxs = flat_approxs[expand_inside_flags, :] - squares = flat_squares[inside_flags, :] - - assign_result = self.assigner.assign(approxs, squares, - self.approxs_per_octave, - gt_bboxes, gt_bboxes_ignore) - sampling_result = self.sampler.sample(assign_result, squares, - gt_bboxes) - - num_valid_squares = squares.shape[0] - bbox_cls_targets = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - bbox_cls_weights = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - bbox_reg_targets = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - bbox_reg_weights = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - labels = squares.new_full((num_valid_squares, ), - self.num_classes, - dtype=torch.long) - label_weights = squares.new_zeros(num_valid_squares, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - (pos_bbox_reg_targets, pos_bbox_reg_weights, pos_bbox_cls_targets, - pos_bbox_cls_weights) = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - - bbox_cls_targets[pos_inds, :] = pos_bbox_cls_targets - bbox_reg_targets[pos_inds, :] = pos_bbox_reg_targets - bbox_cls_weights[pos_inds, :] = pos_bbox_cls_weights - bbox_reg_weights[pos_inds, :] = pos_bbox_reg_weights - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_squares.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_cls_targets = unmap(bbox_cls_targets, num_total_anchors, - inside_flags) - bbox_cls_weights = unmap(bbox_cls_weights, num_total_anchors, - inside_flags) - bbox_reg_targets = unmap(bbox_reg_targets, num_total_anchors, - inside_flags) - bbox_reg_weights = unmap(bbox_reg_weights, num_total_anchors, - inside_flags) - return (labels, label_weights, bbox_cls_targets, bbox_cls_weights, - bbox_reg_targets, bbox_reg_weights, pos_inds, neg_inds) - - def loss_single(self, cls_score, bbox_pred, labels, label_weights, - bbox_cls_targets, bbox_cls_weights, bbox_reg_targets, - bbox_reg_weights, num_total_samples): - # classification loss - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( - cls_score, labels, 
label_weights, avg_factor=num_total_samples) - # regression loss - bbox_cls_targets = bbox_cls_targets.reshape(-1, self.side_num * 4) - bbox_cls_weights = bbox_cls_weights.reshape(-1, self.side_num * 4) - bbox_reg_targets = bbox_reg_targets.reshape(-1, self.side_num * 4) - bbox_reg_weights = bbox_reg_weights.reshape(-1, self.side_num * 4) - (bbox_cls_pred, bbox_reg_pred) = bbox_pred - bbox_cls_pred = bbox_cls_pred.permute(0, 2, 3, 1).reshape( - -1, self.side_num * 4) - bbox_reg_pred = bbox_reg_pred.permute(0, 2, 3, 1).reshape( - -1, self.side_num * 4) - loss_bbox_cls = self.loss_bbox_cls( - bbox_cls_pred, - bbox_cls_targets.long(), - bbox_cls_weights, - avg_factor=num_total_samples * 4 * self.side_num) - loss_bbox_reg = self.loss_bbox_reg( - bbox_reg_pred, - bbox_reg_targets, - bbox_reg_weights, - avg_factor=num_total_samples * 4 * self.bbox_coder.offset_topk) - return loss_cls, loss_bbox_cls, loss_bbox_reg - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.approx_anchor_generator.num_levels - - device = cls_scores[0].device - - # get sampled approxes - approxs_list, inside_flag_list = GuidedAnchorHead.get_sampled_approxs( - self, featmap_sizes, img_metas, device=device) - - square_list = self.get_anchors(featmap_sizes, img_metas, device=device) - - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_target( - approxs_list, - inside_flag_list, - square_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - sampling=self.sampling) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_cls_targets_list, - bbox_cls_weights_list, bbox_reg_targets_list, bbox_reg_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - losses_cls, losses_bbox_cls, losses_bbox_reg = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_cls_targets_list, - bbox_cls_weights_list, - bbox_reg_targets_list, - bbox_reg_weights_list, - num_total_samples=num_total_samples) - return dict( - loss_cls=losses_cls, - loss_bbox_cls=losses_bbox_cls, - loss_bbox_reg=losses_bbox_reg) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - img_metas, - cfg=None, - rescale=False): - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - - device = cls_scores[0].device - mlvl_anchors = self.get_anchors( - featmap_sizes, img_metas, device=device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_cls_pred_list = [ - bbox_preds[i][0][img_id].detach() for i in range(num_levels) - ] - bbox_reg_pred_list = [ - bbox_preds[i][1][img_id].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single( - cls_score_list, bbox_cls_pred_list, bbox_reg_pred_list, - mlvl_anchors[img_id], img_shape, scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - def 
_get_bboxes_single(self, - cls_scores, - bbox_cls_preds, - bbox_reg_preds, - mlvl_anchors, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_confids = [] - mlvl_labels = [] - assert len(cls_scores) == len(bbox_cls_preds) == len( - bbox_reg_preds) == len(mlvl_anchors) - for cls_score, bbox_cls_pred, bbox_reg_pred, anchors in zip( - cls_scores, bbox_cls_preds, bbox_reg_preds, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_cls_pred.size( - )[-2:] == bbox_reg_pred.size()[-2::] - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1)[:, :-1] - bbox_cls_pred = bbox_cls_pred.permute(1, 2, 0).reshape( - -1, self.side_num * 4) - bbox_reg_pred = bbox_reg_pred.permute(1, 2, 0).reshape( - -1, self.side_num * 4) - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict( - anchors=anchors, - bbox_cls_pred=bbox_cls_pred, - bbox_reg_pred=bbox_reg_pred)) - scores, labels, _, filtered_results = results - - anchors = filtered_results['anchors'] - bbox_cls_pred = filtered_results['bbox_cls_pred'] - bbox_reg_pred = filtered_results['bbox_reg_pred'] - - bbox_preds = [ - bbox_cls_pred.contiguous(), - bbox_reg_pred.contiguous() - ] - bboxes, confids = self.bbox_coder.decode( - anchors.contiguous(), bbox_preds, max_shape=img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_confids.append(confids) - mlvl_labels.append(labels) - return self._bbox_post_process(mlvl_scores, mlvl_labels, mlvl_bboxes, - scale_factor, cfg, rescale, True, - mlvl_confids) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/solo_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/solo_head.py deleted file mode 100644 index 148f819f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/solo_head.py +++ /dev/null @@ -1,1177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from mmdet.core import InstanceData, mask_matrix_nms, multi_apply -from mmdet.core.utils import center_of_mass, generate_coordinate -from mmdet.models.builder import HEADS, build_loss -from .base_mask_head import BaseMaskHead - - -@HEADS.register_module() -class SOLOHead(BaseMaskHead): - """SOLO mask head used in `SOLO: Segmenting Objects by Locations. - - `_ - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - Default: 256. - stacked_convs (int): Number of stacking convs of the head. - Default: 4. - strides (tuple): Downsample factor of each feature map. - scale_ranges (tuple[tuple[int, int]]): Area range of multiple - level masks, in the format [(min1, max1), (min2, max2), ...]. - A range of (16, 64) means the area range between (16, 64). - pos_scale (float): Constant scale factor to control the center region. 
- num_grids (list[int]): Divided image into a uniform grids, each - feature map has a different grid value. The number of output - channels is grid ** 2. Default: [40, 36, 24, 16, 12]. - cls_down_index (int): The index of downsample operation in - classification branch. Default: 0. - loss_mask (dict): Config of mask loss. - loss_cls (dict): Config of classification loss. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, - requires_grad=True). - train_cfg (dict): Training config of head. - test_cfg (dict): Testing config of head. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__( - self, - num_classes, - in_channels, - feat_channels=256, - stacked_convs=4, - strides=(4, 8, 16, 32, 64), - scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, 512)), - pos_scale=0.2, - num_grids=[40, 36, 24, 16, 12], - cls_down_index=0, - loss_mask=None, - loss_cls=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - train_cfg=None, - test_cfg=None, - init_cfg=[ - dict(type='Normal', layer='Conv2d', std=0.01), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_cls')) - ], - ): - super(SOLOHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.cls_out_channels = self.num_classes - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.num_grids = num_grids - # number of FPN feats - self.num_levels = len(strides) - assert self.num_levels == len(scale_ranges) == len(num_grids) - self.scale_ranges = scale_ranges - self.pos_scale = pos_scale - - self.cls_down_index = cls_down_index - self.loss_cls = build_loss(loss_cls) - self.loss_mask = build_loss(loss_mask) - self.norm_cfg = norm_cfg - self.init_cfg = init_cfg - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self._init_layers() - - def _init_layers(self): - self.mask_convs = nn.ModuleList() - self.cls_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels + 2 if i == 0 else self.feat_channels - self.mask_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - self.conv_mask_list = nn.ModuleList() - for num_grid in self.num_grids: - self.conv_mask_list.append( - nn.Conv2d(self.feat_channels, num_grid**2, 1)) - - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def resize_feats(self, feats): - """Downsample the first feat and upsample last feat in feats.""" - out = [] - for i in range(len(feats)): - if i == 0: - out.append( - F.interpolate(feats[0], scale_factor=0.5, mode='bilinear')) - elif i == len(feats) - 1: - out.append( - F.interpolate( - feats[i], - size=feats[i - 1].shape[-2:], - mode='bilinear')) - else: - out.append(feats[i]) - return out - - def forward(self, feats): - assert len(feats) == self.num_levels - feats = self.resize_feats(feats) - mlvl_mask_preds = [] - mlvl_cls_preds = [] - for i in range(self.num_levels): - x = feats[i] - mask_feat = x - cls_feat = x - # generate and concat the coordinate - coord_feat = generate_coordinate(mask_feat.size(), - mask_feat.device) - 
mask_feat = torch.cat([mask_feat, coord_feat], 1) - - for mask_layer in (self.mask_convs): - mask_feat = mask_layer(mask_feat) - - mask_feat = F.interpolate( - mask_feat, scale_factor=2, mode='bilinear') - mask_pred = self.conv_mask_list[i](mask_feat) - - # cls branch - for j, cls_layer in enumerate(self.cls_convs): - if j == self.cls_down_index: - num_grid = self.num_grids[i] - cls_feat = F.interpolate( - cls_feat, size=num_grid, mode='bilinear') - cls_feat = cls_layer(cls_feat) - - cls_pred = self.conv_cls(cls_feat) - - if not self.training: - feat_wh = feats[0].size()[-2:] - upsampled_size = (feat_wh[0] * 2, feat_wh[1] * 2) - mask_pred = F.interpolate( - mask_pred.sigmoid(), size=upsampled_size, mode='bilinear') - cls_pred = cls_pred.sigmoid() - # get local maximum - local_max = F.max_pool2d(cls_pred, 2, stride=1, padding=1) - keep_mask = local_max[:, :, :-1, :-1] == cls_pred - cls_pred = cls_pred * keep_mask - - mlvl_mask_preds.append(mask_pred) - mlvl_cls_preds.append(cls_pred) - return mlvl_mask_preds, mlvl_cls_preds - - def loss(self, - mlvl_mask_preds, - mlvl_cls_preds, - gt_labels, - gt_masks, - img_metas, - gt_bboxes=None, - **kwargs): - """Calculate the loss of total batch. - - Args: - mlvl_mask_preds (list[Tensor]): Multi-level mask prediction. - Each element in the list has shape - (batch_size, num_grids**2 ,h ,w). - mlvl_cls_preds (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes, num_grids ,num_grids). - gt_labels (list[Tensor]): Labels of multiple images. - gt_masks (list[Tensor]): Ground truth masks of multiple images. - Each has shape (num_instances, h, w). - img_metas (list[dict]): Meta information of multiple images. - gt_bboxes (list[Tensor]): Ground truth bboxes of multiple - images. Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - num_levels = self.num_levels - num_imgs = len(gt_labels) - - featmap_sizes = [featmap.size()[-2:] for featmap in mlvl_mask_preds] - - # `BoolTensor` in `pos_masks` represent - # whether the corresponding point is - # positive - pos_mask_targets, labels, pos_masks = multi_apply( - self._get_targets_single, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=featmap_sizes) - - # change from the outside list meaning multi images - # to the outside list meaning multi levels - mlvl_pos_mask_targets = [[] for _ in range(num_levels)] - mlvl_pos_mask_preds = [[] for _ in range(num_levels)] - mlvl_pos_masks = [[] for _ in range(num_levels)] - mlvl_labels = [[] for _ in range(num_levels)] - for img_id in range(num_imgs): - assert num_levels == len(pos_mask_targets[img_id]) - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl].append( - pos_mask_targets[img_id][lvl]) - mlvl_pos_mask_preds[lvl].append( - mlvl_mask_preds[lvl][img_id, pos_masks[img_id][lvl], ...]) - mlvl_pos_masks[lvl].append(pos_masks[img_id][lvl].flatten()) - mlvl_labels[lvl].append(labels[img_id][lvl].flatten()) - - # cat multiple image - temp_mlvl_cls_preds = [] - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl] = torch.cat( - mlvl_pos_mask_targets[lvl], dim=0) - mlvl_pos_mask_preds[lvl] = torch.cat( - mlvl_pos_mask_preds[lvl], dim=0) - mlvl_pos_masks[lvl] = torch.cat(mlvl_pos_masks[lvl], dim=0) - mlvl_labels[lvl] = torch.cat(mlvl_labels[lvl], dim=0) - temp_mlvl_cls_preds.append(mlvl_cls_preds[lvl].permute( - 0, 2, 3, 1).reshape(-1, self.cls_out_channels)) - - num_pos = sum(item.sum() for item in mlvl_pos_masks) - # dice loss - loss_mask = [] - for pred, target in zip(mlvl_pos_mask_preds, mlvl_pos_mask_targets): - if pred.size()[0] == 0: - loss_mask.append(pred.sum().unsqueeze(0)) - continue - loss_mask.append( - self.loss_mask(pred, target, reduction_override='none')) - if num_pos > 0: - loss_mask = torch.cat(loss_mask).sum() / num_pos - else: - loss_mask = torch.cat(loss_mask).mean() - - flatten_labels = torch.cat(mlvl_labels) - flatten_cls_preds = torch.cat(temp_mlvl_cls_preds) - loss_cls = self.loss_cls( - flatten_cls_preds, flatten_labels, avg_factor=num_pos + 1) - return dict(loss_mask=loss_mask, loss_cls=loss_cls) - - def _get_targets_single(self, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=None): - """Compute targets for predictions of single image. - - Args: - gt_bboxes (Tensor): Ground truth bbox of each instance, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth label of each instance, - shape (num_gts,). - gt_masks (Tensor): Ground truth mask of each instance, - shape (num_gts, h, w). - featmap_sizes (list[:obj:`torch.size`]): Size of each - feature map from feature pyramid, each element - means (feat_h, feat_w). Default: None. - - Returns: - Tuple: Usually returns a tuple containing targets for predictions. - - - mlvl_pos_mask_targets (list[Tensor]): Each element represent - the binary mask targets for positive points in this - level, has shape (num_pos, out_h, out_w). - - mlvl_labels (list[Tensor]): Each element is - classification labels for all - points in this level, has shape - (num_grid, num_grid). - - mlvl_pos_masks (list[Tensor]): Each element is - a `BoolTensor` to represent whether the - corresponding point in single level - is positive, has shape (num_grid **2). 
- """ - device = gt_labels.device - gt_areas = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - - mlvl_pos_mask_targets = [] - mlvl_labels = [] - mlvl_pos_masks = [] - for (lower_bound, upper_bound), stride, featmap_size, num_grid \ - in zip(self.scale_ranges, self.strides, - featmap_sizes, self.num_grids): - - mask_target = torch.zeros( - [num_grid**2, featmap_size[0], featmap_size[1]], - dtype=torch.uint8, - device=device) - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - labels = torch.zeros([num_grid, num_grid], - dtype=torch.int64, - device=device) + self.num_classes - pos_mask = torch.zeros([num_grid**2], - dtype=torch.bool, - device=device) - - gt_inds = ((gt_areas >= lower_bound) & - (gt_areas <= upper_bound)).nonzero().flatten() - if len(gt_inds) == 0: - mlvl_pos_mask_targets.append( - mask_target.new_zeros(0, featmap_size[0], featmap_size[1])) - mlvl_labels.append(labels) - mlvl_pos_masks.append(pos_mask) - continue - hit_gt_bboxes = gt_bboxes[gt_inds] - hit_gt_labels = gt_labels[gt_inds] - hit_gt_masks = gt_masks[gt_inds, ...] - - pos_w_ranges = 0.5 * (hit_gt_bboxes[:, 2] - - hit_gt_bboxes[:, 0]) * self.pos_scale - pos_h_ranges = 0.5 * (hit_gt_bboxes[:, 3] - - hit_gt_bboxes[:, 1]) * self.pos_scale - - # Make sure hit_gt_masks has a value - valid_mask_flags = hit_gt_masks.sum(dim=-1).sum(dim=-1) > 0 - output_stride = stride / 2 - - for gt_mask, gt_label, pos_h_range, pos_w_range, \ - valid_mask_flag in \ - zip(hit_gt_masks, hit_gt_labels, pos_h_ranges, - pos_w_ranges, valid_mask_flags): - if not valid_mask_flag: - continue - upsampled_size = (featmap_sizes[0][0] * 4, - featmap_sizes[0][1] * 4) - center_h, center_w = center_of_mass(gt_mask) - - coord_w = int( - (center_w / upsampled_size[1]) // (1. / num_grid)) - coord_h = int( - (center_h / upsampled_size[0]) // (1. / num_grid)) - - # left, top, right, down - top_box = max( - 0, - int(((center_h - pos_h_range) / upsampled_size[0]) // - (1. / num_grid))) - down_box = min( - num_grid - 1, - int(((center_h + pos_h_range) / upsampled_size[0]) // - (1. / num_grid))) - left_box = max( - 0, - int(((center_w - pos_w_range) / upsampled_size[1]) // - (1. / num_grid))) - right_box = min( - num_grid - 1, - int(((center_w + pos_w_range) / upsampled_size[1]) // - (1. / num_grid))) - - top = max(top_box, coord_h - 1) - down = min(down_box, coord_h + 1) - left = max(coord_w - 1, left_box) - right = min(right_box, coord_w + 1) - - labels[top:(down + 1), left:(right + 1)] = gt_label - # ins - gt_mask = np.uint8(gt_mask.cpu().numpy()) - # Follow the original implementation, F.interpolate is - # different from cv2 and opencv - gt_mask = mmcv.imrescale(gt_mask, scale=1. / output_stride) - gt_mask = torch.from_numpy(gt_mask).to(device=device) - - for i in range(top, down + 1): - for j in range(left, right + 1): - index = int(i * num_grid + j) - mask_target[index, :gt_mask.shape[0], :gt_mask. - shape[1]] = gt_mask - pos_mask[index] = True - mlvl_pos_mask_targets.append(mask_target[pos_mask]) - mlvl_labels.append(labels) - mlvl_pos_masks.append(pos_mask) - return mlvl_pos_mask_targets, mlvl_labels, mlvl_pos_masks - - def get_results(self, mlvl_mask_preds, mlvl_cls_scores, img_metas, - **kwargs): - """Get multi-image mask results. - - Args: - mlvl_mask_preds (list[Tensor]): Multi-level mask prediction. - Each element in the list has shape - (batch_size, num_grids**2 ,h ,w). - mlvl_cls_scores (list[Tensor]): Multi-level scores. 
Each element - in the list has shape - (batch_size, num_classes, num_grids ,num_grids). - img_metas (list[dict]): Meta information of all images. - - Returns: - list[:obj:`InstanceData`]: Processed results of multiple - images.Each :obj:`InstanceData` usually contains - following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - mlvl_cls_scores = [ - item.permute(0, 2, 3, 1) for item in mlvl_cls_scores - ] - assert len(mlvl_mask_preds) == len(mlvl_cls_scores) - num_levels = len(mlvl_cls_scores) - - results_list = [] - for img_id in range(len(img_metas)): - cls_pred_list = [ - mlvl_cls_scores[lvl][img_id].view(-1, self.cls_out_channels) - for lvl in range(num_levels) - ] - mask_pred_list = [ - mlvl_mask_preds[lvl][img_id] for lvl in range(num_levels) - ] - - cls_pred_list = torch.cat(cls_pred_list, dim=0) - mask_pred_list = torch.cat(mask_pred_list, dim=0) - - results = self._get_results_single( - cls_pred_list, mask_pred_list, img_meta=img_metas[img_id]) - results_list.append(results) - - return results_list - - def _get_results_single(self, cls_scores, mask_preds, img_meta, cfg=None): - """Get processed mask related results of single image. - - Args: - cls_scores (Tensor): Classification score of all points - in single image, has shape (num_points, num_classes). - mask_preds (Tensor): Mask prediction of all points in - single image, has shape (num_points, feat_h, feat_w). - img_meta (dict): Meta information of corresponding image. - cfg (dict, optional): Config used in test phase. - Default: None. - - Returns: - :obj:`InstanceData`: Processed results of single image. - it usually contains following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). 
- """ - - def empty_results(results, cls_scores): - """Generate a empty results.""" - results.scores = cls_scores.new_ones(0) - results.masks = cls_scores.new_zeros(0, *results.ori_shape[:2]) - results.labels = cls_scores.new_ones(0) - return results - - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(mask_preds) - results = InstanceData(img_meta) - - featmap_size = mask_preds.size()[-2:] - - img_shape = results.img_shape - ori_shape = results.ori_shape - - h, w, _ = img_shape - upsampled_size = (featmap_size[0] * 4, featmap_size[1] * 4) - - score_mask = (cls_scores > cfg.score_thr) - cls_scores = cls_scores[score_mask] - if len(cls_scores) == 0: - return empty_results(results, cls_scores) - - inds = score_mask.nonzero() - cls_labels = inds[:, 1] - - # Filter the mask mask with an area is smaller than - # stride of corresponding feature level - lvl_interval = cls_labels.new_tensor(self.num_grids).pow(2).cumsum(0) - strides = cls_scores.new_ones(lvl_interval[-1]) - strides[:lvl_interval[0]] *= self.strides[0] - for lvl in range(1, self.num_levels): - strides[lvl_interval[lvl - - 1]:lvl_interval[lvl]] *= self.strides[lvl] - strides = strides[inds[:, 0]] - mask_preds = mask_preds[inds[:, 0]] - - masks = mask_preds > cfg.mask_thr - sum_masks = masks.sum((1, 2)).float() - keep = sum_masks > strides - if keep.sum() == 0: - return empty_results(results, cls_scores) - masks = masks[keep] - mask_preds = mask_preds[keep] - sum_masks = sum_masks[keep] - cls_scores = cls_scores[keep] - cls_labels = cls_labels[keep] - - # maskness. - mask_scores = (mask_preds * masks).sum((1, 2)) / sum_masks - cls_scores *= mask_scores - - scores, labels, _, keep_inds = mask_matrix_nms( - masks, - cls_labels, - cls_scores, - mask_area=sum_masks, - nms_pre=cfg.nms_pre, - max_num=cfg.max_per_img, - kernel=cfg.kernel, - sigma=cfg.sigma, - filter_thr=cfg.filter_thr) - mask_preds = mask_preds[keep_inds] - mask_preds = F.interpolate( - mask_preds.unsqueeze(0), size=upsampled_size, - mode='bilinear')[:, :, :h, :w] - mask_preds = F.interpolate( - mask_preds, size=ori_shape[:2], mode='bilinear').squeeze(0) - masks = mask_preds > cfg.mask_thr - - results.masks = masks - results.labels = labels - results.scores = scores - - return results - - -@HEADS.register_module() -class DecoupledSOLOHead(SOLOHead): - """Decoupled SOLO mask head used in `SOLO: Segmenting Objects by Locations. - - `_ - - Args: - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - *args, - init_cfg=[ - dict(type='Normal', layer='Conv2d', std=0.01), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_x')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_y')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_cls')) - ], - **kwargs): - super(DecoupledSOLOHead, self).__init__( - *args, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - self.mask_convs_x = nn.ModuleList() - self.mask_convs_y = nn.ModuleList() - self.cls_convs = nn.ModuleList() - - for i in range(self.stacked_convs): - chn = self.in_channels + 1 if i == 0 else self.feat_channels - self.mask_convs_x.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - self.mask_convs_y.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - - self.conv_mask_list_x = nn.ModuleList() - self.conv_mask_list_y = nn.ModuleList() - for num_grid in self.num_grids: - self.conv_mask_list_x.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_mask_list_y.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def forward(self, feats): - assert len(feats) == self.num_levels - feats = self.resize_feats(feats) - mask_preds_x = [] - mask_preds_y = [] - cls_preds = [] - for i in range(self.num_levels): - x = feats[i] - mask_feat = x - cls_feat = x - # generate and concat the coordinate - coord_feat = generate_coordinate(mask_feat.size(), - mask_feat.device) - mask_feat_x = torch.cat([mask_feat, coord_feat[:, 0:1, ...]], 1) - mask_feat_y = torch.cat([mask_feat, coord_feat[:, 1:2, ...]], 1) - - for mask_layer_x, mask_layer_y in \ - zip(self.mask_convs_x, self.mask_convs_y): - mask_feat_x = mask_layer_x(mask_feat_x) - mask_feat_y = mask_layer_y(mask_feat_y) - - mask_feat_x = F.interpolate( - mask_feat_x, scale_factor=2, mode='bilinear') - mask_feat_y = F.interpolate( - mask_feat_y, scale_factor=2, mode='bilinear') - - mask_pred_x = self.conv_mask_list_x[i](mask_feat_x) - mask_pred_y = self.conv_mask_list_y[i](mask_feat_y) - - # cls branch - for j, cls_layer in enumerate(self.cls_convs): - if j == self.cls_down_index: - num_grid = self.num_grids[i] - cls_feat = F.interpolate( - cls_feat, size=num_grid, mode='bilinear') - cls_feat = cls_layer(cls_feat) - - cls_pred = self.conv_cls(cls_feat) - - if not self.training: - feat_wh = feats[0].size()[-2:] - upsampled_size = (feat_wh[0] * 2, feat_wh[1] * 2) - mask_pred_x = F.interpolate( - mask_pred_x.sigmoid(), - size=upsampled_size, - mode='bilinear') - mask_pred_y = F.interpolate( - mask_pred_y.sigmoid(), - size=upsampled_size, - mode='bilinear') - cls_pred = cls_pred.sigmoid() - # get local maximum - local_max = F.max_pool2d(cls_pred, 2, stride=1, padding=1) - keep_mask = local_max[:, :, :-1, :-1] == cls_pred - cls_pred = cls_pred * keep_mask - - mask_preds_x.append(mask_pred_x) - mask_preds_y.append(mask_pred_y) - cls_preds.append(cls_pred) - return mask_preds_x, mask_preds_y, cls_preds - - def loss(self, - mlvl_mask_preds_x, - mlvl_mask_preds_y, - mlvl_cls_preds, - gt_labels, - gt_masks, - img_metas, - gt_bboxes=None, - 
**kwargs): - """Calculate the loss of total batch. - - Args: - mlvl_mask_preds_x (list[Tensor]): Multi-level mask prediction - from x branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_mask_preds_x (list[Tensor]): Multi-level mask prediction - from y branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_cls_preds (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes, num_grids ,num_grids). - gt_labels (list[Tensor]): Labels of multiple images. - gt_masks (list[Tensor]): Ground truth masks of multiple images. - Each has shape (num_instances, h, w). - img_metas (list[dict]): Meta information of multiple images. - gt_bboxes (list[Tensor]): Ground truth bboxes of multiple - images. Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - num_levels = self.num_levels - num_imgs = len(gt_labels) - featmap_sizes = [featmap.size()[-2:] for featmap in mlvl_mask_preds_x] - - pos_mask_targets, labels, \ - xy_pos_indexes = \ - multi_apply(self._get_targets_single, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=featmap_sizes) - - # change from the outside list meaning multi images - # to the outside list meaning multi levels - mlvl_pos_mask_targets = [[] for _ in range(num_levels)] - mlvl_pos_mask_preds_x = [[] for _ in range(num_levels)] - mlvl_pos_mask_preds_y = [[] for _ in range(num_levels)] - mlvl_labels = [[] for _ in range(num_levels)] - for img_id in range(num_imgs): - - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl].append( - pos_mask_targets[img_id][lvl]) - mlvl_pos_mask_preds_x[lvl].append( - mlvl_mask_preds_x[lvl][img_id, - xy_pos_indexes[img_id][lvl][:, 1]]) - mlvl_pos_mask_preds_y[lvl].append( - mlvl_mask_preds_y[lvl][img_id, - xy_pos_indexes[img_id][lvl][:, 0]]) - mlvl_labels[lvl].append(labels[img_id][lvl].flatten()) - - # cat multiple image - temp_mlvl_cls_preds = [] - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl] = torch.cat( - mlvl_pos_mask_targets[lvl], dim=0) - mlvl_pos_mask_preds_x[lvl] = torch.cat( - mlvl_pos_mask_preds_x[lvl], dim=0) - mlvl_pos_mask_preds_y[lvl] = torch.cat( - mlvl_pos_mask_preds_y[lvl], dim=0) - mlvl_labels[lvl] = torch.cat(mlvl_labels[lvl], dim=0) - temp_mlvl_cls_preds.append(mlvl_cls_preds[lvl].permute( - 0, 2, 3, 1).reshape(-1, self.cls_out_channels)) - - num_pos = 0. - # dice loss - loss_mask = [] - for pred_x, pred_y, target in \ - zip(mlvl_pos_mask_preds_x, - mlvl_pos_mask_preds_y, mlvl_pos_mask_targets): - num_masks = pred_x.size(0) - if num_masks == 0: - # make sure can get grad - loss_mask.append((pred_x.sum() + pred_y.sum()).unsqueeze(0)) - continue - num_pos += num_masks - pred_mask = pred_y.sigmoid() * pred_x.sigmoid() - loss_mask.append( - self.loss_mask(pred_mask, target, reduction_override='none')) - if num_pos > 0: - loss_mask = torch.cat(loss_mask).sum() / num_pos - else: - loss_mask = torch.cat(loss_mask).mean() - - # cate - flatten_labels = torch.cat(mlvl_labels) - flatten_cls_preds = torch.cat(temp_mlvl_cls_preds) - - loss_cls = self.loss_cls( - flatten_cls_preds, flatten_labels, avg_factor=num_pos + 1) - return dict(loss_mask=loss_mask, loss_cls=loss_cls) - - def _get_targets_single(self, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=None): - """Compute targets for predictions of single image. - - Args: - gt_bboxes (Tensor): Ground truth bbox of each instance, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth label of each instance, - shape (num_gts,). 
- gt_masks (Tensor): Ground truth mask of each instance, - shape (num_gts, h, w). - featmap_sizes (list[:obj:`torch.size`]): Size of each - feature map from feature pyramid, each element - means (feat_h, feat_w). Default: None. - - Returns: - Tuple: Usually returns a tuple containing targets for predictions. - - - mlvl_pos_mask_targets (list[Tensor]): Each element represent - the binary mask targets for positive points in this - level, has shape (num_pos, out_h, out_w). - - mlvl_labels (list[Tensor]): Each element is - classification labels for all - points in this level, has shape - (num_grid, num_grid). - - mlvl_xy_pos_indexes (list[Tensor]): Each element - in the list contains the index of positive samples in - corresponding level, has shape (num_pos, 2), last - dimension 2 present (index_x, index_y). - """ - mlvl_pos_mask_targets, mlvl_labels, \ - mlvl_pos_masks = \ - super()._get_targets_single(gt_bboxes, gt_labels, gt_masks, - featmap_sizes=featmap_sizes) - - mlvl_xy_pos_indexes = [(item - self.num_classes).nonzero() - for item in mlvl_labels] - - return mlvl_pos_mask_targets, mlvl_labels, mlvl_xy_pos_indexes - - def get_results(self, - mlvl_mask_preds_x, - mlvl_mask_preds_y, - mlvl_cls_scores, - img_metas, - rescale=None, - **kwargs): - """Get multi-image mask results. - - Args: - mlvl_mask_preds_x (list[Tensor]): Multi-level mask prediction - from x branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_mask_preds_y (list[Tensor]): Multi-level mask prediction - from y branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_cls_scores (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes ,num_grids ,num_grids). - img_metas (list[dict]): Meta information of all images. - - Returns: - list[:obj:`InstanceData`]: Processed results of multiple - images.Each :obj:`InstanceData` usually contains - following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - mlvl_cls_scores = [ - item.permute(0, 2, 3, 1) for item in mlvl_cls_scores - ] - assert len(mlvl_mask_preds_x) == len(mlvl_cls_scores) - num_levels = len(mlvl_cls_scores) - - results_list = [] - for img_id in range(len(img_metas)): - cls_pred_list = [ - mlvl_cls_scores[i][img_id].view( - -1, self.cls_out_channels).detach() - for i in range(num_levels) - ] - mask_pred_list_x = [ - mlvl_mask_preds_x[i][img_id] for i in range(num_levels) - ] - mask_pred_list_y = [ - mlvl_mask_preds_y[i][img_id] for i in range(num_levels) - ] - - cls_pred_list = torch.cat(cls_pred_list, dim=0) - mask_pred_list_x = torch.cat(mask_pred_list_x, dim=0) - mask_pred_list_y = torch.cat(mask_pred_list_y, dim=0) - - results = self._get_results_single( - cls_pred_list, - mask_pred_list_x, - mask_pred_list_y, - img_meta=img_metas[img_id], - cfg=self.test_cfg) - results_list.append(results) - return results_list - - def _get_results_single(self, cls_scores, mask_preds_x, mask_preds_y, - img_meta, cfg): - """Get processed mask related results of single image. - - Args: - cls_scores (Tensor): Classification score of all points - in single image, has shape (num_points, num_classes). - mask_preds_x (Tensor): Mask prediction of x branch of - all points in single image, has shape - (sum_num_grids, feat_h, feat_w). 
- mask_preds_y (Tensor): Mask prediction of y branch of - all points in single image, has shape - (sum_num_grids, feat_h, feat_w). - img_meta (dict): Meta information of corresponding image. - cfg (dict): Config used in test phase. - - Returns: - :obj:`InstanceData`: Processed results of single image. - it usually contains following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - - def empty_results(results, cls_scores): - """Generate a empty results.""" - results.scores = cls_scores.new_ones(0) - results.masks = cls_scores.new_zeros(0, *results.ori_shape[:2]) - results.labels = cls_scores.new_ones(0) - return results - - cfg = self.test_cfg if cfg is None else cfg - - results = InstanceData(img_meta) - img_shape = results.img_shape - ori_shape = results.ori_shape - h, w, _ = img_shape - featmap_size = mask_preds_x.size()[-2:] - upsampled_size = (featmap_size[0] * 4, featmap_size[1] * 4) - - score_mask = (cls_scores > cfg.score_thr) - cls_scores = cls_scores[score_mask] - inds = score_mask.nonzero() - lvl_interval = inds.new_tensor(self.num_grids).pow(2).cumsum(0) - num_all_points = lvl_interval[-1] - lvl_start_index = inds.new_ones(num_all_points) - num_grids = inds.new_ones(num_all_points) - seg_size = inds.new_tensor(self.num_grids).cumsum(0) - mask_lvl_start_index = inds.new_ones(num_all_points) - strides = inds.new_ones(num_all_points) - - lvl_start_index[:lvl_interval[0]] *= 0 - mask_lvl_start_index[:lvl_interval[0]] *= 0 - num_grids[:lvl_interval[0]] *= self.num_grids[0] - strides[:lvl_interval[0]] *= self.strides[0] - - for lvl in range(1, self.num_levels): - lvl_start_index[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - lvl_interval[lvl - 1] - mask_lvl_start_index[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - seg_size[lvl - 1] - num_grids[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - self.num_grids[lvl] - strides[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - self.strides[lvl] - - lvl_start_index = lvl_start_index[inds[:, 0]] - mask_lvl_start_index = mask_lvl_start_index[inds[:, 0]] - num_grids = num_grids[inds[:, 0]] - strides = strides[inds[:, 0]] - - y_lvl_offset = (inds[:, 0] - lvl_start_index) // num_grids - x_lvl_offset = (inds[:, 0] - lvl_start_index) % num_grids - y_inds = mask_lvl_start_index + y_lvl_offset - x_inds = mask_lvl_start_index + x_lvl_offset - - cls_labels = inds[:, 1] - mask_preds = mask_preds_x[x_inds, ...] * mask_preds_y[y_inds, ...] - - masks = mask_preds > cfg.mask_thr - sum_masks = masks.sum((1, 2)).float() - keep = sum_masks > strides - if keep.sum() == 0: - return empty_results(results, cls_scores) - - masks = masks[keep] - mask_preds = mask_preds[keep] - sum_masks = sum_masks[keep] - cls_scores = cls_scores[keep] - cls_labels = cls_labels[keep] - - # maskness. 
- mask_scores = (mask_preds * masks).sum((1, 2)) / sum_masks - cls_scores *= mask_scores - - scores, labels, _, keep_inds = mask_matrix_nms( - masks, - cls_labels, - cls_scores, - mask_area=sum_masks, - nms_pre=cfg.nms_pre, - max_num=cfg.max_per_img, - kernel=cfg.kernel, - sigma=cfg.sigma, - filter_thr=cfg.filter_thr) - mask_preds = mask_preds[keep_inds] - mask_preds = F.interpolate( - mask_preds.unsqueeze(0), size=upsampled_size, - mode='bilinear')[:, :, :h, :w] - mask_preds = F.interpolate( - mask_preds, size=ori_shape[:2], mode='bilinear').squeeze(0) - masks = mask_preds > cfg.mask_thr - - results.masks = masks - results.labels = labels - results.scores = scores - - return results - - -@HEADS.register_module() -class DecoupledSOLOLightHead(DecoupledSOLOHead): - """Decoupled Light SOLO mask head used in `SOLO: Segmenting Objects by - Locations `_ - - Args: - with_dcn (bool): Whether use dcn in mask_convs and cls_convs, - default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - *args, - dcn_cfg=None, - init_cfg=[ - dict(type='Normal', layer='Conv2d', std=0.01), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_x')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_y')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_cls')) - ], - **kwargs): - assert dcn_cfg is None or isinstance(dcn_cfg, dict) - self.dcn_cfg = dcn_cfg - super(DecoupledSOLOLightHead, self).__init__( - *args, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - self.mask_convs = nn.ModuleList() - self.cls_convs = nn.ModuleList() - - for i in range(self.stacked_convs): - if self.dcn_cfg is not None\ - and i == self.stacked_convs - 1: - conv_cfg = self.dcn_cfg - else: - conv_cfg = None - - chn = self.in_channels + 2 if i == 0 else self.feat_channels - self.mask_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg)) - - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg)) - - self.conv_mask_list_x = nn.ModuleList() - self.conv_mask_list_y = nn.ModuleList() - for num_grid in self.num_grids: - self.conv_mask_list_x.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_mask_list_y.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def forward(self, feats): - assert len(feats) == self.num_levels - feats = self.resize_feats(feats) - mask_preds_x = [] - mask_preds_y = [] - cls_preds = [] - for i in range(self.num_levels): - x = feats[i] - mask_feat = x - cls_feat = x - # generate and concat the coordinate - coord_feat = generate_coordinate(mask_feat.size(), - mask_feat.device) - mask_feat = torch.cat([mask_feat, coord_feat], 1) - - for mask_layer in self.mask_convs: - mask_feat = mask_layer(mask_feat) - - mask_feat = F.interpolate( - mask_feat, scale_factor=2, mode='bilinear') - - mask_pred_x = self.conv_mask_list_x[i](mask_feat) - mask_pred_y = self.conv_mask_list_y[i](mask_feat) - - # cls branch - for j, cls_layer in enumerate(self.cls_convs): - if j == self.cls_down_index: - num_grid = self.num_grids[i] - cls_feat = F.interpolate( - cls_feat, size=num_grid, mode='bilinear') - cls_feat = 
cls_layer(cls_feat) - - cls_pred = self.conv_cls(cls_feat) - - if not self.training: - feat_wh = feats[0].size()[-2:] - upsampled_size = (feat_wh[0] * 2, feat_wh[1] * 2) - mask_pred_x = F.interpolate( - mask_pred_x.sigmoid(), - size=upsampled_size, - mode='bilinear') - mask_pred_y = F.interpolate( - mask_pred_y.sigmoid(), - size=upsampled_size, - mode='bilinear') - cls_pred = cls_pred.sigmoid() - # get local maximum - local_max = F.max_pool2d(cls_pred, 2, stride=1, padding=1) - keep_mask = local_max[:, :, :-1, :-1] == cls_pred - cls_pred = cls_pred * keep_mask - - mask_preds_x.append(mask_pred_x) - mask_preds_y.append(mask_pred_y) - cls_preds.append(cls_pred) - return mask_preds_x, mask_preds_y, cls_preds diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ssd_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ssd_head.py deleted file mode 100644 index e362fd80..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/ssd_head.py +++ /dev/null @@ -1,357 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import force_fp32 - -from mmdet.core import (build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, multi_apply) -from ..builder import HEADS -from ..losses import smooth_l1_loss -from .anchor_head import AnchorHead - - -# TODO: add loss evaluator for SSD -@HEADS.register_module() -class SSDHead(AnchorHead): - """SSD head used in https://arxiv.org/abs/1512.02325. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of conv layers in cls and reg tower. - Default: 0. - feat_channels (int): Number of hidden channels when stacked_convs - > 0. Default: 256. - use_depthwise (bool): Whether to use DepthwiseSeparableConv. - Default: False. - conv_cfg (dict): Dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: None. - act_cfg (dict): Dictionary to construct and config activation layer. - Default: None. - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ # noqa: W605 - - def __init__(self, - num_classes=80, - in_channels=(512, 1024, 512, 256, 256, 256), - stacked_convs=0, - feat_channels=256, - use_depthwise=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - anchor_generator=dict( - type='SSDAnchorGenerator', - scale_major=False, - input_size=300, - strides=[8, 16, 32, 64, 100, 300], - ratios=([2], [2, 3], [2, 3], [2, 3], [2], [2]), - basesize_ratio_range=(0.1, 0.9)), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0], - ), - reg_decoded_bbox=False, - train_cfg=None, - test_cfg=None, - init_cfg=dict( - type='Xavier', - layer='Conv2d', - distribution='uniform', - bias=0)): - super(AnchorHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.in_channels = in_channels - self.stacked_convs = stacked_convs - self.feat_channels = feat_channels - self.use_depthwise = use_depthwise - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.cls_out_channels = num_classes + 1 # add background class - self.prior_generator = build_prior_generator(anchor_generator) - - # Usually the numbers of anchors for each level are the same - # except SSD detectors. So it is an int in the most dense - # heads but a list of int in SSDHead - self.num_base_priors = self.prior_generator.num_base_priors - - self._init_layers() - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.reg_decoded_bbox = reg_decoded_bbox - self.use_sigmoid_cls = False - self.cls_focal_loss = False - self.train_cfg = train_cfg - self.test_cfg = test_cfg - # set sampling=False for archor_target - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - @property - def num_anchors(self): - """ - Returns: - list[int]: Number of base_anchors on each point of each level. 
- """ - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'please use "num_base_priors" instead') - return self.num_base_priors - - def _init_layers(self): - """Initialize layers of the head.""" - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - # TODO: Use registry to choose ConvModule type - conv = DepthwiseSeparableConvModule \ - if self.use_depthwise else ConvModule - - for channel, num_base_priors in zip(self.in_channels, - self.num_base_priors): - cls_layers = [] - reg_layers = [] - in_channel = channel - # build stacked conv tower, not used in default ssd - for i in range(self.stacked_convs): - cls_layers.append( - conv( - in_channel, - self.feat_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - reg_layers.append( - conv( - in_channel, - self.feat_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - in_channel = self.feat_channels - # SSD-Lite head - if self.use_depthwise: - cls_layers.append( - ConvModule( - in_channel, - in_channel, - 3, - padding=1, - groups=in_channel, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - reg_layers.append( - ConvModule( - in_channel, - in_channel, - 3, - padding=1, - groups=in_channel, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - cls_layers.append( - nn.Conv2d( - in_channel, - num_base_priors * self.cls_out_channels, - kernel_size=1 if self.use_depthwise else 3, - padding=0 if self.use_depthwise else 1)) - reg_layers.append( - nn.Conv2d( - in_channel, - num_base_priors * 4, - kernel_size=1 if self.use_depthwise else 3, - padding=0 if self.use_depthwise else 1)) - self.cls_convs.append(nn.Sequential(*cls_layers)) - self.reg_convs.append(nn.Sequential(*reg_layers)) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - cls_scores = [] - bbox_preds = [] - for feat, reg_conv, cls_conv in zip(feats, self.reg_convs, - self.cls_convs): - cls_scores.append(cls_conv(feat)) - bbox_preds.append(reg_conv(feat)) - return cls_scores, bbox_preds - - def loss_single(self, cls_score, bbox_pred, anchor, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Compute loss of a single image. - - Args: - cls_score (Tensor): Box scores for eachimage - Has shape (num_total_anchors, num_classes). - bbox_pred (Tensor): Box energies / deltas for each image - level with shape (num_total_anchors, 4). - anchors (Tensor): Box reference for each scale level with shape - (num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (num_total_anchors,). - label_weights (Tensor): Label weights of each anchor with shape - (num_total_anchors,) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (num_total_anchors, 4). - bbox_weights (Tensor): BBox regression loss weights of each anchor - with shape (num_total_anchors, 4). - num_total_samples (int): If sampling, num total samples equal to - the number of total anchors; Otherwise, it is the number of - positive anchors. 
- - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - loss_cls_all = F.cross_entropy( - cls_score, labels, reduction='none') * label_weights - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((labels >= 0) & (labels < self.num_classes)).nonzero( - as_tuple=False).reshape(-1) - neg_inds = (labels == self.num_classes).nonzero( - as_tuple=False).view(-1) - - num_pos_samples = pos_inds.size(0) - num_neg_samples = self.train_cfg.neg_pos_ratio * num_pos_samples - if num_neg_samples > neg_inds.size(0): - num_neg_samples = neg_inds.size(0) - topk_loss_cls_neg, _ = loss_cls_all[neg_inds].topk(num_neg_samples) - loss_cls_pos = loss_cls_all[pos_inds].sum() - loss_cls_neg = topk_loss_cls_neg.sum() - loss_cls = (loss_cls_pos + loss_cls_neg) / num_total_samples - - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - bbox_pred = self.bbox_coder.decode(anchor, bbox_pred) - - loss_bbox = smooth_l1_loss( - bbox_pred, - bbox_targets, - bbox_weights, - beta=self.train_cfg.smoothl1_beta, - avg_factor=num_total_samples) - return loss_cls[None], loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=1, - unmap_outputs=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - - num_images = len(img_metas) - all_cls_scores = torch.cat([ - s.permute(0, 2, 3, 1).reshape( - num_images, -1, self.cls_out_channels) for s in cls_scores - ], 1) - all_labels = torch.cat(labels_list, -1).view(num_images, -1) - all_label_weights = torch.cat(label_weights_list, - -1).view(num_images, -1) - all_bbox_preds = torch.cat([ - b.permute(0, 2, 3, 1).reshape(num_images, -1, 4) - for b in bbox_preds - ], -2) - all_bbox_targets = torch.cat(bbox_targets_list, - -2).view(num_images, -1, 4) - all_bbox_weights = torch.cat(bbox_weights_list, - -2).view(num_images, -1, 4) - - # concat all level anchors to a single tensor - all_anchors = [] - for i in range(num_images): - all_anchors.append(torch.cat(anchor_list[i])) - - losses_cls, losses_bbox = multi_apply( - self.loss_single, - all_cls_scores, - all_bbox_preds, - all_anchors, - all_labels, - all_label_weights, - all_bbox_targets, - all_bbox_weights, - num_total_samples=num_total_pos) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/tood_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/tood_head.py deleted file mode 100644 index c64ebf7a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/tood_head.py +++ /dev/null @@ -1,778 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, Scale, bias_init_with_prob, normal_init -from mmcv.ops import deform_conv2d -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, distance2bbox, - images_to_levels, multi_apply, reduce_mean, unmap) -from mmdet.core.utils import filter_scores_and_topk -from mmdet.models.utils import sigmoid_geometric_mean -from ..builder import HEADS, build_loss -from .atss_head import ATSSHead - - -class TaskDecomposition(nn.Module): - """Task decomposition module in task-aligned predictor of TOOD. - - Args: - feat_channels (int): Number of feature channels in TOOD head. - stacked_convs (int): Number of conv layers in TOOD head. - la_down_rate (int): Downsample rate of layer attention. - conv_cfg (dict): Config dict for convolution layer. - norm_cfg (dict): Config dict for normalization layer. 
- """ - - def __init__(self, - feat_channels, - stacked_convs, - la_down_rate=8, - conv_cfg=None, - norm_cfg=None): - super(TaskDecomposition, self).__init__() - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.in_channels = self.feat_channels * self.stacked_convs - self.norm_cfg = norm_cfg - self.layer_attention = nn.Sequential( - nn.Conv2d(self.in_channels, self.in_channels // la_down_rate, 1), - nn.ReLU(inplace=True), - nn.Conv2d( - self.in_channels // la_down_rate, - self.stacked_convs, - 1, - padding=0), nn.Sigmoid()) - - self.reduction_conv = ConvModule( - self.in_channels, - self.feat_channels, - 1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=norm_cfg is None) - - def init_weights(self): - for m in self.layer_attention.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, std=0.001) - normal_init(self.reduction_conv.conv, std=0.01) - - def forward(self, feat, avg_feat=None): - b, c, h, w = feat.shape - if avg_feat is None: - avg_feat = F.adaptive_avg_pool2d(feat, (1, 1)) - weight = self.layer_attention(avg_feat) - - # here we first compute the product between layer attention weight and - # conv weight, and then compute the convolution between new conv weight - # and feature map, in order to save memory and FLOPs. - conv_weight = weight.reshape( - b, 1, self.stacked_convs, - 1) * self.reduction_conv.conv.weight.reshape( - 1, self.feat_channels, self.stacked_convs, self.feat_channels) - conv_weight = conv_weight.reshape(b, self.feat_channels, - self.in_channels) - feat = feat.reshape(b, self.in_channels, h * w) - feat = torch.bmm(conv_weight, feat).reshape(b, self.feat_channels, h, - w) - if self.norm_cfg is not None: - feat = self.reduction_conv.norm(feat) - feat = self.reduction_conv.activate(feat) - - return feat - - -@HEADS.register_module() -class TOODHead(ATSSHead): - """TOODHead used in `TOOD: Task-aligned One-stage Object Detection. - - `_. - - TOOD uses Task-aligned head (T-head) and is optimized by Task Alignment - Learning (TAL). - - Args: - num_dcn (int): Number of deformable convolution in the head. - Default: 0. - anchor_type (str): If set to `anchor_free`, the head will use centers - to regress bboxes. If set to `anchor_based`, the head will - regress bboxes based on anchors. Default: `anchor_free`. - initial_loss_cls (dict): Config of initial loss. - - Example: - >>> self = TOODHead(11, 7) - >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]] - >>> cls_score, bbox_pred = self.forward(feats) - >>> assert len(cls_score) == len(self.scales) - """ - - def __init__(self, - num_classes, - in_channels, - num_dcn=0, - anchor_type='anchor_free', - initial_loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - activated=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - **kwargs): - assert anchor_type in ['anchor_free', 'anchor_based'] - self.num_dcn = num_dcn - self.anchor_type = anchor_type - self.epoch = 0 # which would be update in SetEpochInfoHook! 
- super(TOODHead, self).__init__(num_classes, in_channels, **kwargs) - - if self.train_cfg: - self.initial_epoch = self.train_cfg.initial_epoch - self.initial_assigner = build_assigner( - self.train_cfg.initial_assigner) - self.initial_loss_cls = build_loss(initial_loss_cls) - self.assigner = self.initial_assigner - self.alignment_assigner = build_assigner(self.train_cfg.assigner) - self.alpha = self.train_cfg.alpha - self.beta = self.train_cfg.beta - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.inter_convs = nn.ModuleList() - for i in range(self.stacked_convs): - if i < self.num_dcn: - conv_cfg = dict(type='DCNv2', deform_groups=4) - else: - conv_cfg = self.conv_cfg - chn = self.in_channels if i == 0 else self.feat_channels - self.inter_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg)) - - self.cls_decomp = TaskDecomposition(self.feat_channels, - self.stacked_convs, - self.stacked_convs * 8, - self.conv_cfg, self.norm_cfg) - self.reg_decomp = TaskDecomposition(self.feat_channels, - self.stacked_convs, - self.stacked_convs * 8, - self.conv_cfg, self.norm_cfg) - - self.tood_cls = nn.Conv2d( - self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 3, - padding=1) - self.tood_reg = nn.Conv2d( - self.feat_channels, self.num_base_priors * 4, 3, padding=1) - - self.cls_prob_module = nn.Sequential( - nn.Conv2d(self.feat_channels * self.stacked_convs, - self.feat_channels // 4, 1), nn.ReLU(inplace=True), - nn.Conv2d(self.feat_channels // 4, 1, 3, padding=1)) - self.reg_offset_module = nn.Sequential( - nn.Conv2d(self.feat_channels * self.stacked_convs, - self.feat_channels // 4, 1), nn.ReLU(inplace=True), - nn.Conv2d(self.feat_channels // 4, 4 * 2, 3, padding=1)) - - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.prior_generator.strides]) - - def init_weights(self): - """Initialize weights of the head.""" - bias_cls = bias_init_with_prob(0.01) - for m in self.inter_convs: - normal_init(m.conv, std=0.01) - for m in self.cls_prob_module: - if isinstance(m, nn.Conv2d): - normal_init(m, std=0.01) - for m in self.reg_offset_module: - if isinstance(m, nn.Conv2d): - normal_init(m, std=0.001) - normal_init(self.cls_prob_module[-1], std=0.01, bias=bias_cls) - - self.cls_decomp.init_weights() - self.reg_decomp.init_weights() - - normal_init(self.tood_cls, std=0.01, bias=bias_cls) - normal_init(self.tood_reg, std=0.01) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Decoded box for all scale levels, - each is a 4D-tensor, the channels number is - num_anchors * 4. In [tl_x, tl_y, br_x, br_y] format. 
- """ - cls_scores = [] - bbox_preds = [] - for idx, (x, scale, stride) in enumerate( - zip(feats, self.scales, self.prior_generator.strides)): - b, c, h, w = x.shape - anchor = self.prior_generator.single_level_grid_priors( - (h, w), idx, device=x.device) - anchor = torch.cat([anchor for _ in range(b)]) - # extract task interactive features - inter_feats = [] - for inter_conv in self.inter_convs: - x = inter_conv(x) - inter_feats.append(x) - feat = torch.cat(inter_feats, 1) - - # task decomposition - avg_feat = F.adaptive_avg_pool2d(feat, (1, 1)) - cls_feat = self.cls_decomp(feat, avg_feat) - reg_feat = self.reg_decomp(feat, avg_feat) - - # cls prediction and alignment - cls_logits = self.tood_cls(cls_feat) - cls_prob = self.cls_prob_module(feat) - cls_score = sigmoid_geometric_mean(cls_logits, cls_prob) - - # reg prediction and alignment - if self.anchor_type == 'anchor_free': - reg_dist = scale(self.tood_reg(reg_feat).exp()).float() - reg_dist = reg_dist.permute(0, 2, 3, 1).reshape(-1, 4) - reg_bbox = distance2bbox( - self.anchor_center(anchor) / stride[0], - reg_dist).reshape(b, h, w, 4).permute(0, 3, 1, - 2) # (b, c, h, w) - elif self.anchor_type == 'anchor_based': - reg_dist = scale(self.tood_reg(reg_feat)).float() - reg_dist = reg_dist.permute(0, 2, 3, 1).reshape(-1, 4) - reg_bbox = self.bbox_coder.decode(anchor, reg_dist).reshape( - b, h, w, 4).permute(0, 3, 1, 2) / stride[0] - else: - raise NotImplementedError( - f'Unknown anchor type: {self.anchor_type}.' - f'Please use `anchor_free` or `anchor_based`.') - reg_offset = self.reg_offset_module(feat) - bbox_pred = self.deform_sampling(reg_bbox.contiguous(), - reg_offset.contiguous()) - - # After deform_sampling, some boxes will become invalid (The - # left-top point is at the right or bottom of the right-bottom - # point), which will make the GIoULoss negative. - invalid_bbox_idx = (bbox_pred[:, [0]] > bbox_pred[:, [2]]) | \ - (bbox_pred[:, [1]] > bbox_pred[:, [3]]) - invalid_bbox_idx = invalid_bbox_idx.expand_as(bbox_pred) - bbox_pred = torch.where(invalid_bbox_idx, reg_bbox, bbox_pred) - - cls_scores.append(cls_score) - bbox_preds.append(bbox_pred) - return tuple(cls_scores), tuple(bbox_preds) - - def deform_sampling(self, feat, offset): - """Sampling the feature x according to offset. - - Args: - feat (Tensor): Feature - offset (Tensor): Spatial offset for feature sampling - """ - # it is an equivalent implementation of bilinear interpolation - b, c, h, w = feat.shape - weight = feat.new_ones(c, 1, 1, 1) - y = deform_conv2d(feat, offset, weight, 1, 0, 1, c, c) - return y - - def anchor_center(self, anchors): - """Get anchor centers from anchors. - - Args: - anchors (Tensor): Anchor list with shape (N, 4), "xyxy" format. - - Returns: - Tensor: Anchor centers with shape (N, 2), "xy" format. - """ - anchors_cx = (anchors[:, 2] + anchors[:, 0]) / 2 - anchors_cy = (anchors[:, 3] + anchors[:, 1]) / 2 - return torch.stack([anchors_cx, anchors_cy], dim=-1) - - def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights, - bbox_targets, alignment_metrics, stride): - """Compute loss of a single scale level. - - Args: - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Decoded bboxes for each scale - level with shape (N, num_anchors * 4, H, W). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). 
- label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors). - bbox_targets (Tensor): BBox regression targets of each anchor with - shape (N, num_total_anchors, 4). - alignment_metrics (Tensor): Alignment metrics with shape - (N, num_total_anchors). - stride (tuple[int]): Downsample stride of the feature map. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert stride[0] == stride[1], 'h stride is not equal to w stride!' - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, 1).reshape( - -1, self.cls_out_channels).contiguous() - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - alignment_metrics = alignment_metrics.reshape(-1) - label_weights = label_weights.reshape(-1) - targets = labels if self.epoch < self.initial_epoch else ( - labels, alignment_metrics) - cls_loss_func = self.initial_loss_cls \ - if self.epoch < self.initial_epoch else self.loss_cls - - loss_cls = cls_loss_func( - cls_score, targets, label_weights, avg_factor=1.0) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - - pos_decode_bbox_pred = pos_bbox_pred - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - - # regression loss - pos_bbox_weight = self.centerness_target( - pos_anchors, pos_bbox_targets - ) if self.epoch < self.initial_epoch else alignment_metrics[ - pos_inds] - - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=pos_bbox_weight, - avg_factor=1.0) - else: - loss_bbox = bbox_pred.sum() * 0 - pos_bbox_weight = bbox_targets.new_tensor(0.) - - return loss_cls, loss_bbox, alignment_metrics.sum( - ), pos_bbox_weight.sum() - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Decoded box for each scale - level with shape (N, num_anchors * 4, H, W) in - [tl_x, tl_y, br_x, br_y] format. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - num_imgs = len(img_metas) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - flatten_cls_scores = torch.cat([ - cls_score.permute(0, 2, 3, 1).reshape(num_imgs, -1, - self.cls_out_channels) - for cls_score in cls_scores - ], 1) - flatten_bbox_preds = torch.cat([ - bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) * stride[0] - for bbox_pred, stride in zip(bbox_preds, - self.prior_generator.strides) - ], 1) - - cls_reg_targets = self.get_targets( - flatten_cls_scores, - flatten_bbox_preds, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - alignment_metrics_list) = cls_reg_targets - - losses_cls, losses_bbox,\ - cls_avg_factors, bbox_avg_factors = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - alignment_metrics_list, - self.prior_generator.strides) - - cls_avg_factor = reduce_mean(sum(cls_avg_factors)).clamp_(min=1).item() - losses_cls = list(map(lambda x: x / cls_avg_factor, losses_cls)) - - bbox_avg_factor = reduce_mean( - sum(bbox_avg_factors)).clamp_(min=1).item() - losses_bbox = list(map(lambda x: x / bbox_avg_factor, losses_bbox)) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image, each item has shape - (num_priors * 1, H, W). - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid. In all - anchor-based methods, it has shape (num_priors, 4). In - all anchor-free methods, it has shape (num_priors, 2) - when `with_stride=True`, otherwise it still has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. 
- - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - - cfg = self.test_cfg if cfg is None else cfg - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for cls_score, bbox_pred, priors, stride in zip( - cls_score_list, bbox_pred_list, mlvl_priors, - self.prior_generator.strides): - - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) * stride[0] - scores = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, keep_idxs, filtered_results = results - - bboxes = filtered_results['bbox_pred'] - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process(mlvl_scores, mlvl_labels, mlvl_bboxes, - img_meta['scale_factor'], cfg, rescale, - with_nms, None, **kwargs) - - def get_targets(self, - cls_scores, - bbox_preds, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - cls_scores (Tensor): Classification predictions of images, - a 3D-Tensor with shape [num_imgs, num_priors, num_classes]. - bbox_preds (Tensor): Decoded bboxes predictions of one image, - a 3D-Tensor with shape [num_imgs, num_priors, 4] in [tl_x, - tl_y, br_x, br_y] format. - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: a tuple containing learning targets. - - - anchors_list (list[list[Tensor]]): Anchors of each level. - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each - level. - - bbox_targets_list (list[Tensor]): BBox targets of each level. - - norm_alignment_metrics_list (list[Tensor]): Normalized - alignment metrics of each level. 
- """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - # anchor_list: list(b * [-1, 4]) - - if self.epoch < self.initial_epoch: - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply( - super()._get_target_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - all_assign_metrics = [ - weight[..., 0] for weight in all_bbox_weights - ] - else: - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_assign_metrics) = multi_apply( - self._get_target_single, - cls_scores, - bbox_preds, - anchor_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - norm_alignment_metrics_list = images_to_levels(all_assign_metrics, - num_level_anchors) - - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, norm_alignment_metrics_list) - - def _get_target_single(self, - cls_scores, - bbox_preds, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression, classification targets for anchors in a single - image. - - Args: - cls_scores (list(Tensor)): Box scores for each image. - bbox_preds (list(Tensor)): Box energies / deltas for each image. - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - anchors (Tensor): All anchors in the image with shape (N, 4). - labels (Tensor): Labels of all anchors in the image with shape - (N,). 
- label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). - norm_alignment_metrics (Tensor): Normalized alignment metrics - of all priors in the image with shape (N,). - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - assign_result = self.alignment_assigner.assign( - cls_scores[inside_flags, :], bbox_preds[inside_flags, :], anchors, - gt_bboxes, gt_bboxes_ignore, gt_labels, self.alpha, self.beta) - assign_ious = assign_result.max_overlaps - assign_metrics = assign_result.assign_metrics - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - norm_alignment_metrics = anchors.new_zeros( - num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - # point-based - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - class_assigned_gt_inds = torch.unique( - sampling_result.pos_assigned_gt_inds) - for gt_inds in class_assigned_gt_inds: - gt_class_inds = pos_inds[sampling_result.pos_assigned_gt_inds == - gt_inds] - pos_alignment_metrics = assign_metrics[gt_class_inds] - pos_ious = assign_ious[gt_class_inds] - pos_norm_alignment_metrics = pos_alignment_metrics / ( - pos_alignment_metrics.max() + 10e-8) * pos_ious.max() - norm_alignment_metrics[gt_class_inds] = pos_norm_alignment_metrics - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - norm_alignment_metrics = unmap(norm_alignment_metrics, - num_total_anchors, inside_flags) - return (anchors, labels, label_weights, bbox_targets, - norm_alignment_metrics) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/vfnet_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/vfnet_head.py deleted file mode 100644 index ba285e22..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/vfnet_head.py +++ /dev/null @@ -1,740 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings
-
-import numpy as np
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule, Scale
-from mmcv.ops import DeformConv2d
-from mmcv.runner import force_fp32
-
-from mmdet.core import (MlvlPointGenerator, bbox_overlaps, build_assigner,
-                        build_prior_generator, build_sampler, multi_apply,
-                        reduce_mean)
-from ..builder import HEADS, build_loss
-from .atss_head import ATSSHead
-from .fcos_head import FCOSHead
-
-INF = 1e8
-
-
-@HEADS.register_module()
-class VFNetHead(ATSSHead, FCOSHead):
-    """Head of `VarifocalNet (VFNet): An IoU-aware Dense Object
-    Detector.`_.
-
-    The VFNet predicts IoU-aware classification scores which mix the
-    object presence confidence and object localization accuracy as the
-    detection score. It is built on the FCOS architecture and uses ATSS
-    for defining positive/negative training examples. The VFNet is trained
-    with Varifocal Loss and employs star-shaped deformable convolution to
-    extract features for a bbox.
-
-    Args:
-        num_classes (int): Number of categories excluding the background
-            category.
-        in_channels (int): Number of channels in the input feature map.
-        regress_ranges (tuple[tuple[int, int]]): Regress range of multiple
-            level points.
-        center_sampling (bool): If true, use center sampling. Default: False.
-        center_sample_radius (float): Radius of center sampling. Default: 1.5.
-        sync_num_pos (bool): If true, synchronize the number of positive
-            examples across GPUs. Default: True.
-        gradient_mul (float): The multiplier to gradients from bbox refinement
-            and recognition. Default: 0.1.
-        bbox_norm_type (str): The bbox normalization type, 'reg_denom' or
-            'stride'. Default: reg_denom.
-        loss_cls_fl (dict): Config of focal loss.
-        use_vfl (bool): If true, use varifocal loss for training.
-            Default: True.
-        loss_cls (dict): Config of varifocal loss.
-        loss_bbox (dict): Config of localization loss, GIoU Loss.
-        loss_bbox_refine (dict): Config of localization refinement loss,
-            GIoU Loss.
-        norm_cfg (dict): dictionary to construct and config norm layer.
-            Default: norm_cfg=dict(type='GN', num_groups=32,
-            requires_grad=True).
-        use_atss (bool): If true, use ATSS to define positive/negative
-            examples. Default: True.
-        anchor_generator (dict): Config of anchor generator for ATSS.
-        init_cfg (dict or list[dict], optional): Initialization config dict.
- - Example: - >>> self = VFNetHead(11, 7) - >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]] - >>> cls_score, bbox_pred, bbox_pred_refine= self.forward(feats) - >>> assert len(cls_score) == len(self.scales) - """ # noqa: E501 - - def __init__(self, - num_classes, - in_channels, - regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), - (512, INF)), - center_sampling=False, - center_sample_radius=1.5, - sync_num_pos=True, - gradient_mul=0.1, - bbox_norm_type='reg_denom', - loss_cls_fl=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - use_vfl=True, - loss_cls=dict( - type='VarifocalLoss', - use_sigmoid=True, - alpha=0.75, - gamma=2.0, - iou_weighted=True, - loss_weight=1.0), - loss_bbox=dict(type='GIoULoss', loss_weight=1.5), - loss_bbox_refine=dict(type='GIoULoss', loss_weight=2.0), - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - use_atss=True, - reg_decoded_bbox=True, - anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - octave_base_scale=8, - scales_per_octave=1, - center_offset=0.0, - strides=[8, 16, 32, 64, 128]), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='vfnet_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - # dcn base offsets, adapted from reppoints_head.py - self.num_dconv_points = 9 - self.dcn_kernel = int(np.sqrt(self.num_dconv_points)) - self.dcn_pad = int((self.dcn_kernel - 1) / 2) - dcn_base = np.arange(-self.dcn_pad, - self.dcn_pad + 1).astype(np.float64) - dcn_base_y = np.repeat(dcn_base, self.dcn_kernel) - dcn_base_x = np.tile(dcn_base, self.dcn_kernel) - dcn_base_offset = np.stack([dcn_base_y, dcn_base_x], axis=1).reshape( - (-1)) - self.dcn_base_offset = torch.tensor(dcn_base_offset).view(1, -1, 1, 1) - - super(FCOSHead, self).__init__( - num_classes, - in_channels, - norm_cfg=norm_cfg, - init_cfg=init_cfg, - **kwargs) - self.regress_ranges = regress_ranges - self.reg_denoms = [ - regress_range[-1] for regress_range in regress_ranges - ] - self.reg_denoms[-1] = self.reg_denoms[-2] * 2 - self.center_sampling = center_sampling - self.center_sample_radius = center_sample_radius - self.sync_num_pos = sync_num_pos - self.bbox_norm_type = bbox_norm_type - self.gradient_mul = gradient_mul - self.use_vfl = use_vfl - if self.use_vfl: - self.loss_cls = build_loss(loss_cls) - else: - self.loss_cls = build_loss(loss_cls_fl) - self.loss_bbox = build_loss(loss_bbox) - self.loss_bbox_refine = build_loss(loss_bbox_refine) - - # for getting ATSS targets - self.use_atss = use_atss - self.reg_decoded_bbox = reg_decoded_bbox - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - - self.anchor_center_offset = anchor_generator['center_offset'] - - self.num_base_priors = self.prior_generator.num_base_priors[0] - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - # only be used in `get_atss_targets` when `use_atss` is True - self.atss_prior_generator = build_prior_generator(anchor_generator) - - self.fcos_prior_generator = MlvlPointGenerator( - anchor_generator['strides'], - self.anchor_center_offset if self.use_atss else 0.5) - - # In order to reuse the `get_bboxes` in `BaseDenseHead. - # Only be used in testing phase. 
- self.prior_generator = self.fcos_prior_generator - - @property - def num_anchors(self): - """ - Returns: - int: Number of anchors on each point of feature map. - """ - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'please use "num_base_priors" instead') - return self.num_base_priors - - @property - def anchor_generator(self): - warnings.warn('DeprecationWarning: anchor_generator is deprecated, ' - 'please use "atss_prior_generator" instead') - return self.prior_generator - - def _init_layers(self): - """Initialize layers of the head.""" - super(FCOSHead, self)._init_cls_convs() - super(FCOSHead, self)._init_reg_convs() - self.relu = nn.ReLU(inplace=True) - self.vfnet_reg_conv = ConvModule( - self.feat_channels, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias) - self.vfnet_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides]) - - self.vfnet_reg_refine_dconv = DeformConv2d( - self.feat_channels, - self.feat_channels, - self.dcn_kernel, - 1, - padding=self.dcn_pad) - self.vfnet_reg_refine = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - self.scales_refine = nn.ModuleList([Scale(1.0) for _ in self.strides]) - - self.vfnet_cls_dconv = DeformConv2d( - self.feat_channels, - self.feat_channels, - self.dcn_kernel, - 1, - padding=self.dcn_pad) - self.vfnet_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Box iou-aware scores for each scale - level, each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box offsets for each - scale level, each is a 4D-tensor, the channel number is - num_points * 4. - bbox_preds_refine (list[Tensor]): Refined Box offsets for - each scale level, each is a 4D-tensor, the channel - number is num_points * 4. - """ - return multi_apply(self.forward_single, feats, self.scales, - self.scales_refine, self.strides, self.reg_denoms) - - def forward_single(self, x, scale, scale_refine, stride, reg_denom): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - scale_refine (:obj: `mmcv.cnn.Scale`): Learnable scale module to - resize the refined bbox prediction. - stride (int): The corresponding stride for feature maps, - used to normalize the bbox prediction when - bbox_norm_type = 'stride'. - reg_denom (int): The corresponding regression range for feature - maps, only used to normalize the bbox prediction when - bbox_norm_type = 'reg_denom'. - - Returns: - tuple: iou-aware cls scores for each box, bbox predictions and - refined bbox predictions of input feature maps. 
- """ - cls_feat = x - reg_feat = x - - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - - # predict the bbox_pred of different level - reg_feat_init = self.vfnet_reg_conv(reg_feat) - if self.bbox_norm_type == 'reg_denom': - bbox_pred = scale( - self.vfnet_reg(reg_feat_init)).float().exp() * reg_denom - elif self.bbox_norm_type == 'stride': - bbox_pred = scale( - self.vfnet_reg(reg_feat_init)).float().exp() * stride - else: - raise NotImplementedError - - # compute star deformable convolution offsets - # converting dcn_offset to reg_feat.dtype thus VFNet can be - # trained with FP16 - dcn_offset = self.star_dcn_offset(bbox_pred, self.gradient_mul, - stride).to(reg_feat.dtype) - - # refine the bbox_pred - reg_feat = self.relu(self.vfnet_reg_refine_dconv(reg_feat, dcn_offset)) - bbox_pred_refine = scale_refine( - self.vfnet_reg_refine(reg_feat)).float().exp() - bbox_pred_refine = bbox_pred_refine * bbox_pred.detach() - - # predict the iou-aware cls score - cls_feat = self.relu(self.vfnet_cls_dconv(cls_feat, dcn_offset)) - cls_score = self.vfnet_cls(cls_feat) - - if self.training: - return cls_score, bbox_pred, bbox_pred_refine - else: - return cls_score, bbox_pred_refine - - def star_dcn_offset(self, bbox_pred, gradient_mul, stride): - """Compute the star deformable conv offsets. - - Args: - bbox_pred (Tensor): Predicted bbox distance offsets (l, r, t, b). - gradient_mul (float): Gradient multiplier. - stride (int): The corresponding stride for feature maps, - used to project the bbox onto the feature map. - - Returns: - dcn_offsets (Tensor): The offsets for deformable convolution. - """ - dcn_base_offset = self.dcn_base_offset.type_as(bbox_pred) - bbox_pred_grad_mul = (1 - gradient_mul) * bbox_pred.detach() + \ - gradient_mul * bbox_pred - # map to the feature map scale - bbox_pred_grad_mul = bbox_pred_grad_mul / stride - N, C, H, W = bbox_pred.size() - - x1 = bbox_pred_grad_mul[:, 0, :, :] - y1 = bbox_pred_grad_mul[:, 1, :, :] - x2 = bbox_pred_grad_mul[:, 2, :, :] - y2 = bbox_pred_grad_mul[:, 3, :, :] - bbox_pred_grad_mul_offset = bbox_pred.new_zeros( - N, 2 * self.num_dconv_points, H, W) - bbox_pred_grad_mul_offset[:, 0, :, :] = -1.0 * y1 # -y1 - bbox_pred_grad_mul_offset[:, 1, :, :] = -1.0 * x1 # -x1 - bbox_pred_grad_mul_offset[:, 2, :, :] = -1.0 * y1 # -y1 - bbox_pred_grad_mul_offset[:, 4, :, :] = -1.0 * y1 # -y1 - bbox_pred_grad_mul_offset[:, 5, :, :] = x2 # x2 - bbox_pred_grad_mul_offset[:, 7, :, :] = -1.0 * x1 # -x1 - bbox_pred_grad_mul_offset[:, 11, :, :] = x2 # x2 - bbox_pred_grad_mul_offset[:, 12, :, :] = y2 # y2 - bbox_pred_grad_mul_offset[:, 13, :, :] = -1.0 * x1 # -x1 - bbox_pred_grad_mul_offset[:, 14, :, :] = y2 # y2 - bbox_pred_grad_mul_offset[:, 16, :, :] = y2 # y2 - bbox_pred_grad_mul_offset[:, 17, :, :] = x2 # x2 - dcn_offset = bbox_pred_grad_mul_offset - dcn_base_offset - - return dcn_offset - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'bbox_preds_refine')) - def loss(self, - cls_scores, - bbox_preds, - bbox_preds_refine, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box iou-aware scores for each scale - level, each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box offsets for each - scale level, each is a 4D-tensor, the channel number is - num_points * 4. 
- bbox_preds_refine (list[Tensor]): Refined Box offsets for - each scale level, each is a 4D-tensor, the channel - number is num_points * 4. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) == len(bbox_preds_refine) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.fcos_prior_generator.grid_priors( - featmap_sizes, bbox_preds[0].dtype, bbox_preds[0].device) - labels, label_weights, bbox_targets, bbox_weights = self.get_targets( - cls_scores, all_level_points, gt_bboxes, gt_labels, img_metas, - gt_bboxes_ignore) - - num_imgs = cls_scores[0].size(0) - # flatten cls_scores, bbox_preds and bbox_preds_refine - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, - 1).reshape(-1, - self.cls_out_channels).contiguous() - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4).contiguous() - for bbox_pred in bbox_preds - ] - flatten_bbox_preds_refine = [ - bbox_pred_refine.permute(0, 2, 3, 1).reshape(-1, 4).contiguous() - for bbox_pred_refine in bbox_preds_refine - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_bbox_preds_refine = torch.cat(flatten_bbox_preds_refine) - flatten_labels = torch.cat(labels) - flatten_bbox_targets = torch.cat(bbox_targets) - # repeat points to align with bbox_preds - flatten_points = torch.cat( - [points.repeat(num_imgs, 1) for points in all_level_points]) - - # FG cat_id: [0, num_classes - 1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = torch.where( - ((flatten_labels >= 0) & (flatten_labels < bg_class_ind)) > 0)[0] - num_pos = len(pos_inds) - - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_bbox_preds_refine = flatten_bbox_preds_refine[pos_inds] - pos_labels = flatten_labels[pos_inds] - - # sync num_pos across all gpus - if self.sync_num_pos: - num_pos_avg_per_gpu = reduce_mean( - pos_inds.new_tensor(num_pos).float()).item() - num_pos_avg_per_gpu = max(num_pos_avg_per_gpu, 1.0) - else: - num_pos_avg_per_gpu = num_pos - - pos_bbox_targets = flatten_bbox_targets[pos_inds] - pos_points = flatten_points[pos_inds] - - pos_decoded_bbox_preds = self.bbox_coder.decode( - pos_points, pos_bbox_preds) - pos_decoded_target_preds = self.bbox_coder.decode( - pos_points, pos_bbox_targets) - iou_targets_ini = bbox_overlaps( - pos_decoded_bbox_preds, - pos_decoded_target_preds.detach(), - is_aligned=True).clamp(min=1e-6) - bbox_weights_ini = iou_targets_ini.clone().detach() - bbox_avg_factor_ini = reduce_mean( - bbox_weights_ini.sum()).clamp_(min=1).item() - - pos_decoded_bbox_preds_refine = \ - self.bbox_coder.decode(pos_points, pos_bbox_preds_refine) - iou_targets_rf = bbox_overlaps( - pos_decoded_bbox_preds_refine, - pos_decoded_target_preds.detach(), - is_aligned=True).clamp(min=1e-6) - bbox_weights_rf = iou_targets_rf.clone().detach() - bbox_avg_factor_rf = reduce_mean( - bbox_weights_rf.sum()).clamp_(min=1).item() - - if num_pos > 0: - loss_bbox = self.loss_bbox( - pos_decoded_bbox_preds, - 
pos_decoded_target_preds.detach(), - weight=bbox_weights_ini, - avg_factor=bbox_avg_factor_ini) - - loss_bbox_refine = self.loss_bbox_refine( - pos_decoded_bbox_preds_refine, - pos_decoded_target_preds.detach(), - weight=bbox_weights_rf, - avg_factor=bbox_avg_factor_rf) - - # build IoU-aware cls_score targets - if self.use_vfl: - pos_ious = iou_targets_rf.clone().detach() - cls_iou_targets = torch.zeros_like(flatten_cls_scores) - cls_iou_targets[pos_inds, pos_labels] = pos_ious - else: - loss_bbox = pos_bbox_preds.sum() * 0 - loss_bbox_refine = pos_bbox_preds_refine.sum() * 0 - if self.use_vfl: - cls_iou_targets = torch.zeros_like(flatten_cls_scores) - - if self.use_vfl: - loss_cls = self.loss_cls( - flatten_cls_scores, - cls_iou_targets, - avg_factor=num_pos_avg_per_gpu) - else: - loss_cls = self.loss_cls( - flatten_cls_scores, - flatten_labels, - weight=label_weights, - avg_factor=num_pos_avg_per_gpu) - - return dict( - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_bbox_rf=loss_bbox_refine) - - def get_targets(self, cls_scores, mlvl_points, gt_bboxes, gt_labels, - img_metas, gt_bboxes_ignore): - """A wrapper for computing ATSS and FCOS targets for points in multiple - images. - - Args: - cls_scores (list[Tensor]): Box iou-aware scores for each scale - level with shape (N, num_points * num_classes, H, W). - mlvl_points (list[Tensor]): Points of each fpn level, each has - shape (num_points, 2). - gt_bboxes (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - - Returns: - tuple: - labels_list (list[Tensor]): Labels of each level. - label_weights (Tensor/None): Label weights of all levels. - bbox_targets_list (list[Tensor]): Regression targets of each - level, (l, t, r, b). - bbox_weights (Tensor/None): Bbox weights of all levels. - """ - if self.use_atss: - return self.get_atss_targets(cls_scores, mlvl_points, gt_bboxes, - gt_labels, img_metas, - gt_bboxes_ignore) - else: - self.norm_on_bbox = False - return self.get_fcos_targets(mlvl_points, gt_bboxes, gt_labels) - - def _get_target_single(self, *args, **kwargs): - """Avoid ambiguity in multiple inheritance.""" - if self.use_atss: - return ATSSHead._get_target_single(self, *args, **kwargs) - else: - return FCOSHead._get_target_single(self, *args, **kwargs) - - def get_fcos_targets(self, points, gt_bboxes_list, gt_labels_list): - """Compute FCOS regression and classification targets for points in - multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - - Returns: - tuple: - labels (list[Tensor]): Labels of each level. - label_weights: None, to be compatible with ATSS targets. - bbox_targets (list[Tensor]): BBox targets of each level. - bbox_weights: None, to be compatible with ATSS targets. 
- """ - labels, bbox_targets = FCOSHead.get_targets(self, points, - gt_bboxes_list, - gt_labels_list) - label_weights = None - bbox_weights = None - return labels, label_weights, bbox_targets, bbox_weights - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get anchors according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): Device for returned tensors - - Returns: - tuple: - anchor_list (list[Tensor]): Anchors of each image. - valid_flag_list (list[Tensor]): Valid flags of each image. - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # anchors for one time - multi_level_anchors = self.atss_prior_generator.grid_priors( - featmap_sizes, device=device) - anchor_list = [multi_level_anchors for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level anchors - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.atss_prior_generator.valid_flags( - featmap_sizes, img_meta['pad_shape'], device=device) - valid_flag_list.append(multi_level_flags) - - return anchor_list, valid_flag_list - - def get_atss_targets(self, - cls_scores, - mlvl_points, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """A wrapper for computing ATSS targets for points in multiple images. - - Args: - cls_scores (list[Tensor]): Box iou-aware scores for each scale - level with shape (N, num_points * num_classes, H, W). - mlvl_points (list[Tensor]): Points of each fpn level, each has - shape (num_points, 2). - gt_bboxes (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). Default: None. - - Returns: - tuple: - labels_list (list[Tensor]): Labels of each level. - label_weights (Tensor): Label weights of all levels. - bbox_targets_list (list[Tensor]): Regression targets of each - level, (l, t, r, b). - bbox_weights (Tensor): Bbox weights of all levels. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len( - featmap_sizes - ) == self.atss_prior_generator.num_levels == \ - self.fcos_prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = ATSSHead.get_targets( - self, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - unmap_outputs=True) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - bbox_targets_list = [ - bbox_targets.reshape(-1, 4) for bbox_targets in bbox_targets_list - ] - - num_imgs = len(img_metas) - # transform bbox_targets (x1, y1, x2, y2) into (l, t, r, b) format - bbox_targets_list = self.transform_bbox_targets( - bbox_targets_list, mlvl_points, num_imgs) - - labels_list = [labels.reshape(-1) for labels in labels_list] - label_weights_list = [ - label_weights.reshape(-1) for label_weights in label_weights_list - ] - bbox_weights_list = [ - bbox_weights.reshape(-1) for bbox_weights in bbox_weights_list - ] - label_weights = torch.cat(label_weights_list) - bbox_weights = torch.cat(bbox_weights_list) - return labels_list, label_weights, bbox_targets_list, bbox_weights - - def transform_bbox_targets(self, decoded_bboxes, mlvl_points, num_imgs): - """Transform bbox_targets (x1, y1, x2, y2) into (l, t, r, b) format. - - Args: - decoded_bboxes (list[Tensor]): Regression targets of each level, - in the form of (x1, y1, x2, y2). - mlvl_points (list[Tensor]): Points of each fpn level, each has - shape (num_points, 2). - num_imgs (int): the number of images in a batch. - - Returns: - bbox_targets (list[Tensor]): Regression targets of each level in - the form of (l, t, r, b). - """ - # TODO: Re-implemented in Class PointCoder - assert len(decoded_bboxes) == len(mlvl_points) - num_levels = len(decoded_bboxes) - mlvl_points = [points.repeat(num_imgs, 1) for points in mlvl_points] - bbox_targets = [] - for i in range(num_levels): - bbox_target = self.bbox_coder.encode(mlvl_points[i], - decoded_bboxes[i]) - bbox_targets.append(bbox_target) - - return bbox_targets - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Override the method in the parent class to avoid changing para's - name.""" - pass - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points according to feature map size. - - This function will be deprecated soon. 
- """ - - warnings.warn( - '`_get_points_single` in `VFNetHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map' - 'with `self.fcos_prior_generator.single_level_grid_priors` ') - - h, w = featmap_size - x_range = torch.arange( - 0, w * stride, stride, dtype=dtype, device=device) - y_range = torch.arange( - 0, h * stride, stride, dtype=dtype, device=device) - y, x = torch.meshgrid(y_range, x_range) - # to be compatible with anchor points in ATSS - if self.use_atss: - points = torch.stack( - (x.reshape(-1), y.reshape(-1)), dim=-1) + \ - stride * self.anchor_center_offset - else: - points = torch.stack( - (x.reshape(-1), y.reshape(-1)), dim=-1) + stride // 2 - return points diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolact_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolact_head.py deleted file mode 100644 index 8f89a271..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolact_head.py +++ /dev/null @@ -1,1018 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, ModuleList, force_fp32 - -from mmdet.core import build_sampler, fast_nms, images_to_levels, multi_apply -from mmdet.core.utils import select_single_mlvl -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class YOLACTHead(AnchorHead): - """YOLACT box head used in https://arxiv.org/abs/1904.02689. - - Note that YOLACT head is a light version of RetinaNet head. - Four differences are described as follows: - - 1. YOLACT box head has three-times fewer anchors. - 2. YOLACT box head shares the convs for box and cls branches. - 3. YOLACT box head uses OHEM instead of Focal loss. - 4. YOLACT box head predicts a set of mask coefficients for each box. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - anchor_generator (dict): Config dict for anchor generator - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - num_head_convs (int): Number of the conv layers shared by - box and cls branches. - num_protos (int): Number of the mask coefficients. - use_ohem (bool): If true, ``loss_single_OHEM`` will be used for - cls loss calculation. If false, ``loss_single`` will be used. - conv_cfg (dict): Dictionary to construct and config conv layer. - norm_cfg (dict): Dictionary to construct and config norm layer. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - num_classes, - in_channels, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=3, - scales_per_octave=1, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - reduction='none', - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1.5), - num_head_convs=1, - num_protos=32, - use_ohem=True, - conv_cfg=None, - norm_cfg=None, - init_cfg=dict( - type='Xavier', - distribution='uniform', - bias=0, - layer='Conv2d'), - **kwargs): - self.num_head_convs = num_head_convs - self.num_protos = num_protos - self.use_ohem = use_ohem - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(YOLACTHead, self).__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - loss_bbox=loss_bbox, - anchor_generator=anchor_generator, - init_cfg=init_cfg, - **kwargs) - if self.use_ohem: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.sampling = False - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.head_convs = ModuleList() - for i in range(self.num_head_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.head_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.conv_cls = nn.Conv2d( - self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 3, - padding=1) - self.conv_reg = nn.Conv2d( - self.feat_channels, self.num_base_priors * 4, 3, padding=1) - self.conv_coeff = nn.Conv2d( - self.feat_channels, - self.num_base_priors * self.num_protos, - 3, - padding=1) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level \ - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale \ - level, the channels number is num_anchors * 4. - coeff_pred (Tensor): Mask coefficients for a single scale \ - level, the channels number is num_anchors * num_protos. - """ - for head_conv in self.head_convs: - x = head_conv(x) - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - coeff_pred = self.conv_coeff(x).tanh() - return cls_score, bbox_pred, coeff_pred - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """A combination of the func:``AnchorHead.loss`` and - func:``SSDHead.loss``. - - When ``self.use_ohem == True``, it functions like ``SSDHead.loss``, - otherwise, it follows ``AnchorHead.loss``. Besides, it additionally - returns ``sampling_results``. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. 
Default: None - - Returns: - tuple: - dict[str, Tensor]: A dictionary of loss components. - List[:obj:``SamplingResult``]: Sampler results for each image. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - unmap_outputs=not self.use_ohem, - return_sampling_results=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, sampling_results) = cls_reg_targets - - if self.use_ohem: - num_images = len(img_metas) - all_cls_scores = torch.cat([ - s.permute(0, 2, 3, 1).reshape( - num_images, -1, self.cls_out_channels) for s in cls_scores - ], 1) - all_labels = torch.cat(labels_list, -1).view(num_images, -1) - all_label_weights = torch.cat(label_weights_list, - -1).view(num_images, -1) - all_bbox_preds = torch.cat([ - b.permute(0, 2, 3, 1).reshape(num_images, -1, 4) - for b in bbox_preds - ], -2) - all_bbox_targets = torch.cat(bbox_targets_list, - -2).view(num_images, -1, 4) - all_bbox_weights = torch.cat(bbox_weights_list, - -2).view(num_images, -1, 4) - - # concat all level anchors to a single tensor - all_anchors = [] - for i in range(num_images): - all_anchors.append(torch.cat(anchor_list[i])) - - # check NaN and Inf - assert torch.isfinite(all_cls_scores).all().item(), \ - 'classification scores become infinite or NaN!' - assert torch.isfinite(all_bbox_preds).all().item(), \ - 'bbox predications become infinite or NaN!' 
- - losses_cls, losses_bbox = multi_apply( - self.loss_single_OHEM, - all_cls_scores, - all_bbox_preds, - all_anchors, - all_labels, - all_label_weights, - all_bbox_targets, - all_bbox_weights, - num_total_samples=num_total_pos) - else: - num_total_samples = ( - num_total_pos + - num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox), sampling_results - - def loss_single_OHEM(self, cls_score, bbox_pred, anchors, labels, - label_weights, bbox_targets, bbox_weights, - num_total_samples): - """"See func:``SSDHead.loss``.""" - loss_cls_all = self.loss_cls(cls_score, labels, label_weights) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((labels >= 0) & (labels < self.num_classes)).nonzero( - as_tuple=False).reshape(-1) - neg_inds = (labels == self.num_classes).nonzero( - as_tuple=False).view(-1) - - num_pos_samples = pos_inds.size(0) - if num_pos_samples == 0: - num_neg_samples = neg_inds.size(0) - else: - num_neg_samples = self.train_cfg.neg_pos_ratio * num_pos_samples - if num_neg_samples > neg_inds.size(0): - num_neg_samples = neg_inds.size(0) - topk_loss_cls_neg, _ = loss_cls_all[neg_inds].topk(num_neg_samples) - loss_cls_pos = loss_cls_all[pos_inds].sum() - loss_cls_neg = topk_loss_cls_neg.sum() - loss_cls = (loss_cls_pos + loss_cls_neg) / num_total_samples - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_bbox = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - return loss_cls[None], loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'coeff_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - coeff_preds, - img_metas, - cfg=None, - rescale=False): - """"Similar to func:``AnchorHead.get_bboxes``, but additionally - processes coeff_preds. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - with shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - coeff_preds (list[Tensor]): Mask coefficients for each scale - level with shape (N, num_anchors * num_protos, H, W) - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space. - Default: False. - - Returns: - list[tuple[Tensor, Tensor, Tensor]]: Each item in result_list is - a 3-tuple. The first item is an (n, 5) tensor, where the - first 4 columns are bounding box positions - (tl_x, tl_y, br_x, br_y) and the 5-th column is a score - between 0 and 1. 
The second item is an (n,) tensor where each - item is the predicted class label of the corresponding box. - The third item is an (n, num_protos) tensor where each item - is the predicted mask coefficients of instance inside the - corresponding box. - """ - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - - device = cls_scores[0].device - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - - det_bboxes = [] - det_labels = [] - det_coeffs = [] - for img_id in range(len(img_metas)): - cls_score_list = select_single_mlvl(cls_scores, img_id) - bbox_pred_list = select_single_mlvl(bbox_preds, img_id) - coeff_pred_list = select_single_mlvl(coeff_preds, img_id) - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - bbox_res = self._get_bboxes_single(cls_score_list, bbox_pred_list, - coeff_pred_list, mlvl_anchors, - img_shape, scale_factor, cfg, - rescale) - det_bboxes.append(bbox_res[0]) - det_labels.append(bbox_res[1]) - det_coeffs.append(bbox_res[2]) - return det_bboxes, det_labels, det_coeffs - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - coeff_preds_list, - mlvl_anchors, - img_shape, - scale_factor, - cfg, - rescale=False): - """"Similar to func:``AnchorHead._get_bboxes_single``, but additionally - processes coeff_preds_list and uses fast NMS instead of traditional - NMS. - - Args: - cls_score_list (list[Tensor]): Box scores for a single scale level - Has shape (num_anchors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas for a single - scale level with shape (num_anchors * 4, H, W). - coeff_preds_list (list[Tensor]): Mask coefficients for a single - scale level with shape (num_anchors * num_protos, H, W). - mlvl_anchors (list[Tensor]): Box reference for a single scale level - with shape (num_total_anchors, 4). - img_shape (tuple[int]): Shape of the input image, - (height, width, 3). - scale_factor (ndarray): Scale factor of the image arange as - (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - - Returns: - tuple[Tensor, Tensor, Tensor]: The first item is an (n, 5) tensor, - where the first 4 columns are bounding box positions - (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between - 0 and 1. The second item is an (n,) tensor where each item is - the predicted class label of the corresponding box. The third - item is an (n, num_protos) tensor where each item is the - predicted mask coefficients of instance inside the - corresponding box. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_score_list) == len(bbox_pred_list) == len(mlvl_anchors) - nms_pre = cfg.get('nms_pre', -1) - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_coeffs = [] - for cls_score, bbox_pred, coeff_pred, anchors in \ - zip(cls_score_list, bbox_pred_list, - coeff_preds_list, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - coeff_pred = coeff_pred.permute(1, 2, - 0).reshape(-1, self.num_protos) - - if 0 < nms_pre < scores.shape[0]: - # Get maximum scores for foreground classes. 
- if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - coeff_pred = coeff_pred[topk_inds, :] - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_coeffs.append(coeff_pred) - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_coeffs = torch.cat(mlvl_coeffs) - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - det_bboxes, det_labels, det_coeffs = fast_nms(mlvl_bboxes, mlvl_scores, - mlvl_coeffs, - cfg.score_thr, - cfg.iou_thr, cfg.top_k, - cfg.max_per_img) - return det_bboxes, det_labels, det_coeffs - - -@HEADS.register_module() -class YOLACTSegmHead(BaseModule): - """YOLACT segmentation head used in https://arxiv.org/abs/1904.02689. - - Apply a semantic segmentation loss on feature space using layers that are - only evaluated during training to increase performance with no speed - penalty. - - Args: - in_channels (int): Number of channels in the input feature map. - num_classes (int): Number of categories excluding the background - category. - loss_segm (dict): Config of semantic segmentation loss. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - num_classes, - in_channels=256, - loss_segm=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - init_cfg=dict( - type='Xavier', - distribution='uniform', - override=dict(name='segm_conv'))): - super(YOLACTSegmHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.loss_segm = build_loss(loss_segm) - self._init_layers() - self.fp16_enabled = False - - def _init_layers(self): - """Initialize layers of the head.""" - self.segm_conv = nn.Conv2d( - self.in_channels, self.num_classes, kernel_size=1) - - def forward(self, x): - """Forward feature from the upstream network. - - Args: - x (Tensor): Feature from the upstream network, which is - a 4D-tensor. - - Returns: - Tensor: Predicted semantic segmentation map with shape - (N, num_classes, H, W). - """ - return self.segm_conv(x) - - @force_fp32(apply_to=('segm_pred', )) - def loss(self, segm_pred, gt_masks, gt_labels): - """Compute loss of the head. - - Args: - segm_pred (list[Tensor]): Predicted semantic segmentation map - with shape (N, num_classes, H, W). - gt_masks (list[Tensor]): Ground truth masks for each image with - the same shape of the input image. - gt_labels (list[Tensor]): Class indices corresponding to each box. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - loss_segm = [] - num_imgs, num_classes, mask_h, mask_w = segm_pred.size() - for idx in range(num_imgs): - cur_segm_pred = segm_pred[idx] - cur_gt_masks = gt_masks[idx].float() - cur_gt_labels = gt_labels[idx] - segm_targets = self.get_targets(cur_segm_pred, cur_gt_masks, - cur_gt_labels) - if segm_targets is None: - loss = self.loss_segm(cur_segm_pred, - torch.zeros_like(cur_segm_pred), - torch.zeros_like(cur_segm_pred)) - else: - loss = self.loss_segm( - cur_segm_pred, - segm_targets, - avg_factor=num_imgs * mask_h * mask_w) - loss_segm.append(loss) - return dict(loss_segm=loss_segm) - - def get_targets(self, segm_pred, gt_masks, gt_labels): - """Compute semantic segmentation targets for each image. - - Args: - segm_pred (Tensor): Predicted semantic segmentation map - with shape (num_classes, H, W). - gt_masks (Tensor): Ground truth masks for each image with - the same shape of the input image. - gt_labels (Tensor): Class indices corresponding to each box. - - Returns: - Tensor: Semantic segmentation targets with shape - (num_classes, H, W). - """ - if gt_masks.size(0) == 0: - return None - num_classes, mask_h, mask_w = segm_pred.size() - with torch.no_grad(): - downsampled_masks = F.interpolate( - gt_masks.unsqueeze(0), (mask_h, mask_w), - mode='bilinear', - align_corners=False).squeeze(0) - downsampled_masks = downsampled_masks.gt(0.5).float() - segm_targets = torch.zeros_like(segm_pred, requires_grad=False) - for obj_idx in range(downsampled_masks.size(0)): - segm_targets[gt_labels[obj_idx] - 1] = torch.max( - segm_targets[gt_labels[obj_idx] - 1], - downsampled_masks[obj_idx]) - return segm_targets - - def simple_test(self, feats, img_metas, rescale=False): - """Test function without test-time augmentation.""" - raise NotImplementedError( - 'simple_test of YOLACTSegmHead is not implemented ' - 'because this head is only evaluated during training') - - -@HEADS.register_module() -class YOLACTProtonet(BaseModule): - """YOLACT mask head used in https://arxiv.org/abs/1904.02689. - - This head outputs the mask prototypes for YOLACT. - - Args: - in_channels (int): Number of channels in the input feature map. - proto_channels (tuple[int]): Output channels of protonet convs. - proto_kernel_sizes (tuple[int]): Kernel sizes of protonet convs. - include_last_relu (Bool): If keep the last relu of protonet. - num_protos (int): Number of prototypes. - num_classes (int): Number of categories excluding the background - category. - loss_mask_weight (float): Reweight the mask loss by this factor. - max_masks_to_train (int): Maximum number of masks to train for - each image. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - num_classes, - in_channels=256, - proto_channels=(256, 256, 256, None, 256, 32), - proto_kernel_sizes=(3, 3, 3, -2, 3, 1), - include_last_relu=True, - num_protos=32, - loss_mask_weight=1.0, - max_masks_to_train=100, - init_cfg=dict( - type='Xavier', - distribution='uniform', - override=dict(name='protonet'))): - super(YOLACTProtonet, self).__init__(init_cfg) - self.in_channels = in_channels - self.proto_channels = proto_channels - self.proto_kernel_sizes = proto_kernel_sizes - self.include_last_relu = include_last_relu - self.protonet = self._init_layers() - - self.loss_mask_weight = loss_mask_weight - self.num_protos = num_protos - self.num_classes = num_classes - self.max_masks_to_train = max_masks_to_train - self.fp16_enabled = False - - def _init_layers(self): - """A helper function to take a config setting and turn it into a - network.""" - # Possible patterns: - # ( 256, 3) -> conv - # ( 256,-2) -> deconv - # (None,-2) -> bilinear interpolate - in_channels = self.in_channels - protonets = ModuleList() - for num_channels, kernel_size in zip(self.proto_channels, - self.proto_kernel_sizes): - if kernel_size > 0: - layer = nn.Conv2d( - in_channels, - num_channels, - kernel_size, - padding=kernel_size // 2) - else: - if num_channels is None: - layer = InterpolateModule( - scale_factor=-kernel_size, - mode='bilinear', - align_corners=False) - else: - layer = nn.ConvTranspose2d( - in_channels, - num_channels, - -kernel_size, - padding=kernel_size // 2) - protonets.append(layer) - protonets.append(nn.ReLU(inplace=True)) - in_channels = num_channels if num_channels is not None \ - else in_channels - if not self.include_last_relu: - protonets = protonets[:-1] - return nn.Sequential(*protonets) - - def forward_dummy(self, x): - prototypes = self.protonet(x) - return prototypes - - def forward(self, x, coeff_pred, bboxes, img_meta, sampling_results=None): - """Forward feature from the upstream network to get prototypes and - linearly combine the prototypes, using masks coefficients, into - instance masks. Finally, crop the instance masks with given bboxes. - - Args: - x (Tensor): Feature from the upstream network, which is - a 4D-tensor. - coeff_pred (list[Tensor]): Mask coefficients for each scale - level with shape (N, num_anchors * num_protos, H, W). - bboxes (list[Tensor]): Box used for cropping with shape - (N, num_anchors * 4, H, W). During training, they are - ground truth boxes. During testing, they are predicted - boxes. - img_meta (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - sampling_results (List[:obj:``SamplingResult``]): Sampler results - for each image. - - Returns: - list[Tensor]: Predicted instance segmentation masks. - """ - prototypes = self.protonet(x) - prototypes = prototypes.permute(0, 2, 3, 1).contiguous() - - num_imgs = x.size(0) - - # The reason for not using self.training is that - # val workflow will have a dimension mismatch error. - # Note that this writing method is very tricky. 
- # Fix https://github.com/open-mmlab/mmdetection/issues/5978 - is_train_or_val_workflow = (coeff_pred[0].dim() == 4) - - # Train or val workflow - if is_train_or_val_workflow: - coeff_pred_list = [] - for coeff_pred_per_level in coeff_pred: - coeff_pred_per_level = \ - coeff_pred_per_level.permute( - 0, 2, 3, 1).reshape(num_imgs, -1, self.num_protos) - coeff_pred_list.append(coeff_pred_per_level) - coeff_pred = torch.cat(coeff_pred_list, dim=1) - - mask_pred_list = [] - for idx in range(num_imgs): - cur_prototypes = prototypes[idx] - cur_coeff_pred = coeff_pred[idx] - cur_bboxes = bboxes[idx] - cur_img_meta = img_meta[idx] - - # Testing state - if not is_train_or_val_workflow: - bboxes_for_cropping = cur_bboxes - else: - cur_sampling_results = sampling_results[idx] - pos_assigned_gt_inds = \ - cur_sampling_results.pos_assigned_gt_inds - bboxes_for_cropping = cur_bboxes[pos_assigned_gt_inds].clone() - pos_inds = cur_sampling_results.pos_inds - cur_coeff_pred = cur_coeff_pred[pos_inds] - - # Linearly combine the prototypes with the mask coefficients - mask_pred = cur_prototypes @ cur_coeff_pred.t() - mask_pred = torch.sigmoid(mask_pred) - - h, w = cur_img_meta['img_shape'][:2] - bboxes_for_cropping[:, 0] /= w - bboxes_for_cropping[:, 1] /= h - bboxes_for_cropping[:, 2] /= w - bboxes_for_cropping[:, 3] /= h - - mask_pred = self.crop(mask_pred, bboxes_for_cropping) - mask_pred = mask_pred.permute(2, 0, 1).contiguous() - mask_pred_list.append(mask_pred) - return mask_pred_list - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, gt_masks, gt_bboxes, img_meta, sampling_results): - """Compute loss of the head. - - Args: - mask_pred (list[Tensor]): Predicted prototypes with shape - (num_classes, H, W). - gt_masks (list[Tensor]): Ground truth masks for each image with - the same shape of the input image. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - img_meta (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - sampling_results (List[:obj:``SamplingResult``]): Sampler results - for each image. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - loss_mask = [] - num_imgs = len(mask_pred) - total_pos = 0 - for idx in range(num_imgs): - cur_mask_pred = mask_pred[idx] - cur_gt_masks = gt_masks[idx].float() - cur_gt_bboxes = gt_bboxes[idx] - cur_img_meta = img_meta[idx] - cur_sampling_results = sampling_results[idx] - - pos_assigned_gt_inds = cur_sampling_results.pos_assigned_gt_inds - num_pos = pos_assigned_gt_inds.size(0) - # Since we're producing (near) full image masks, - # it'd take too much vram to backprop on every single mask. - # Thus we select only a subset. - if num_pos > self.max_masks_to_train: - perm = torch.randperm(num_pos) - select = perm[:self.max_masks_to_train] - cur_mask_pred = cur_mask_pred[select] - pos_assigned_gt_inds = pos_assigned_gt_inds[select] - num_pos = self.max_masks_to_train - total_pos += num_pos - - gt_bboxes_for_reweight = cur_gt_bboxes[pos_assigned_gt_inds] - - mask_targets = self.get_targets(cur_mask_pred, cur_gt_masks, - pos_assigned_gt_inds) - if num_pos == 0: - loss = cur_mask_pred.sum() * 0. 
- elif mask_targets is None: - loss = F.binary_cross_entropy(cur_mask_pred, - torch.zeros_like(cur_mask_pred), - torch.zeros_like(cur_mask_pred)) - else: - cur_mask_pred = torch.clamp(cur_mask_pred, 0, 1) - loss = F.binary_cross_entropy( - cur_mask_pred, mask_targets, - reduction='none') * self.loss_mask_weight - - h, w = cur_img_meta['img_shape'][:2] - gt_bboxes_width = (gt_bboxes_for_reweight[:, 2] - - gt_bboxes_for_reweight[:, 0]) / w - gt_bboxes_height = (gt_bboxes_for_reweight[:, 3] - - gt_bboxes_for_reweight[:, 1]) / h - loss = loss.mean(dim=(1, - 2)) / gt_bboxes_width / gt_bboxes_height - loss = torch.sum(loss) - loss_mask.append(loss) - - if total_pos == 0: - total_pos += 1 # avoid nan - loss_mask = [x / total_pos for x in loss_mask] - - return dict(loss_mask=loss_mask) - - def get_targets(self, mask_pred, gt_masks, pos_assigned_gt_inds): - """Compute instance segmentation targets for each image. - - Args: - mask_pred (Tensor): Predicted prototypes with shape - (num_classes, H, W). - gt_masks (Tensor): Ground truth masks for each image with - the same shape of the input image. - pos_assigned_gt_inds (Tensor): GT indices of the corresponding - positive samples. - Returns: - Tensor: Instance segmentation targets with shape - (num_instances, H, W). - """ - if gt_masks.size(0) == 0: - return None - mask_h, mask_w = mask_pred.shape[-2:] - gt_masks = F.interpolate( - gt_masks.unsqueeze(0), (mask_h, mask_w), - mode='bilinear', - align_corners=False).squeeze(0) - gt_masks = gt_masks.gt(0.5).float() - mask_targets = gt_masks[pos_assigned_gt_inds] - return mask_targets - - def get_seg_masks(self, mask_pred, label_pred, img_meta, rescale): - """Resize, binarize, and format the instance mask predictions. - - Args: - mask_pred (Tensor): shape (N, H, W). - label_pred (Tensor): shape (N, ). - img_meta (dict): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If rescale is False, then returned masks will - fit the scale of imgs[0]. - Returns: - list[ndarray]: Mask predictions grouped by their predicted classes. - """ - ori_shape = img_meta['ori_shape'] - scale_factor = img_meta['scale_factor'] - if rescale: - img_h, img_w = ori_shape[:2] - else: - img_h = np.round(ori_shape[0] * scale_factor[1]).astype(np.int32) - img_w = np.round(ori_shape[1] * scale_factor[0]).astype(np.int32) - - cls_segms = [[] for _ in range(self.num_classes)] - if mask_pred.size(0) == 0: - return cls_segms - - mask_pred = F.interpolate( - mask_pred.unsqueeze(0), (img_h, img_w), - mode='bilinear', - align_corners=False).squeeze(0) > 0.5 - mask_pred = mask_pred.cpu().numpy().astype(np.uint8) - - for m, l in zip(mask_pred, label_pred): - cls_segms[l].append(m) - return cls_segms - - def crop(self, masks, boxes, padding=1): - """Crop predicted masks by zeroing out everything not in the predicted - bbox. - - Args: - masks (Tensor): shape [H, W, N]. - boxes (Tensor): bbox coords in relative point form with - shape [N, 4]. - - Return: - Tensor: The cropped masks. 
- """ - h, w, n = masks.size() - x1, x2 = self.sanitize_coordinates( - boxes[:, 0], boxes[:, 2], w, padding, cast=False) - y1, y2 = self.sanitize_coordinates( - boxes[:, 1], boxes[:, 3], h, padding, cast=False) - - rows = torch.arange( - w, device=masks.device, dtype=x1.dtype).view(1, -1, - 1).expand(h, w, n) - cols = torch.arange( - h, device=masks.device, dtype=x1.dtype).view(-1, 1, - 1).expand(h, w, n) - - masks_left = rows >= x1.view(1, 1, -1) - masks_right = rows < x2.view(1, 1, -1) - masks_up = cols >= y1.view(1, 1, -1) - masks_down = cols < y2.view(1, 1, -1) - - crop_mask = masks_left * masks_right * masks_up * masks_down - - return masks * crop_mask.float() - - def sanitize_coordinates(self, x1, x2, img_size, padding=0, cast=True): - """Sanitizes the input coordinates so that x1 < x2, x1 != x2, x1 >= 0, - and x2 <= image_size. Also converts from relative to absolute - coordinates and casts the results to long tensors. - - Warning: this does things in-place behind the scenes so - copy if necessary. - - Args: - _x1 (Tensor): shape (N, ). - _x2 (Tensor): shape (N, ). - img_size (int): Size of the input image. - padding (int): x1 >= padding, x2 <= image_size-padding. - cast (bool): If cast is false, the result won't be cast to longs. - - Returns: - tuple: - x1 (Tensor): Sanitized _x1. - x2 (Tensor): Sanitized _x2. - """ - x1 = x1 * img_size - x2 = x2 * img_size - if cast: - x1 = x1.long() - x2 = x2.long() - x1 = torch.min(x1, x2) - x2 = torch.max(x1, x2) - x1 = torch.clamp(x1 - padding, min=0) - x2 = torch.clamp(x2 + padding, max=img_size) - return x1, x2 - - def simple_test(self, - feats, - det_bboxes, - det_labels, - det_coeffs, - img_metas, - rescale=False): - """Test function without test-time augmentation. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - det_bboxes (list[Tensor]): BBox results of each image. each - element is (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - det_labels (list[Tensor]): BBox results of each image. each - element is (n, ) tensor, each element represents the class - label of the corresponding box. - det_coeffs (list[Tensor]): BBox coefficient of each image. each - element is (n, m) tensor, m is vector length. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list]: encoded masks. The c-th item in the outer list - corresponds to the c-th class. Given the c-th outer list, the - i-th item in that inner list is the mask for the i-th box with - class label c. - """ - num_imgs = len(img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - segm_results = [[[] for _ in range(self.num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - mask_preds = self.forward(feats[0], det_coeffs, _bboxes, img_metas) - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append([[] for _ in range(self.num_classes)]) - else: - segm_result = self.get_seg_masks(mask_preds[i], - det_labels[i], - img_metas[i], rescale) - segm_results.append(segm_result) - return segm_results - - -class InterpolateModule(BaseModule): - """This is a module version of F.interpolate. - - Any arguments you give it just get passed along for the ride. - """ - - def __init__(self, *args, init_cfg=None, **kwargs): - super().__init__(init_cfg) - - self.args = args - self.kwargs = kwargs - - def forward(self, x): - """Forward features from the upstream network.""" - return F.interpolate(x, *self.args, **self.kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolo_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolo_head.py deleted file mode 100644 index 08957e6a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolo_head.py +++ /dev/null @@ -1,619 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (ConvModule, bias_init_with_prob, constant_init, is_norm, - normal_init) -from mmcv.runner import force_fp32 - -from mmdet.core import (build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, images_to_levels, - multi_apply, multiclass_nms) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class YOLOV3Head(BaseDenseHead, BBoxTestMixin): - """YOLOV3Head Paper link: https://arxiv.org/abs/1804.02767. - - Args: - num_classes (int): The number of object classes (w/o background) - in_channels (List[int]): Number of input channels per scale. - out_channels (List[int]): The number of output channels per scale - before the final 1x1 layer. Default: (1024, 512, 256). - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - featmap_strides (List[int]): The stride of each scale. - Should be in descending order. Default: (32, 16, 8). - one_hot_smoother (float): Set a non-zero value to enable label-smooth - Default: 0. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - loss_cls (dict): Config of classification loss. - loss_conf (dict): Config of confidence loss. - loss_xy (dict): Config of xy coordinate loss. - loss_wh (dict): Config of wh coordinate loss. - train_cfg (dict): Training config of YOLOV3 head. Default: None. - test_cfg (dict): Testing config of YOLOV3 head. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - num_classes, - in_channels, - out_channels=(1024, 512, 256), - anchor_generator=dict( - type='YOLOAnchorGenerator', - base_sizes=[[(116, 90), (156, 198), (373, 326)], - [(30, 61), (62, 45), (59, 119)], - [(10, 13), (16, 30), (33, 23)]], - strides=[32, 16, 8]), - bbox_coder=dict(type='YOLOBBoxCoder'), - featmap_strides=[32, 16, 8], - one_hot_smoother=0., - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_conf=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_xy=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_wh=dict(type='MSELoss', loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=dict( - type='Normal', std=0.01, - override=dict(name='convs_pred'))): - super(YOLOV3Head, self).__init__(init_cfg) - # Check params - assert (len(in_channels) == len(out_channels) == len(featmap_strides)) - - self.num_classes = num_classes - self.in_channels = in_channels - self.out_channels = out_channels - self.featmap_strides = featmap_strides - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - if hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - self.one_hot_smoother = one_hot_smoother - - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.bbox_coder = build_bbox_coder(bbox_coder) - - self.prior_generator = build_prior_generator(anchor_generator) - - self.loss_cls = build_loss(loss_cls) - self.loss_conf = build_loss(loss_conf) - self.loss_xy = build_loss(loss_xy) - self.loss_wh = build_loss(loss_wh) - - self.num_base_priors = self.prior_generator.num_base_priors[0] - assert len( - self.prior_generator.num_base_priors) == len(featmap_strides) - self._init_layers() - - @property - def anchor_generator(self): - - warnings.warn('DeprecationWarning: `anchor_generator` is deprecated, ' - 'please use "prior_generator" instead') - return self.prior_generator - - @property - def num_anchors(self): - """ - Returns: - int: Number of anchors on each point of feature map. 
- """ - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'please use "num_base_priors" instead') - return self.num_base_priors - - @property - def num_levels(self): - return len(self.featmap_strides) - - @property - def num_attrib(self): - """int: number of attributes in pred_map, bboxes (4) + - objectness (1) + num_classes""" - - return 5 + self.num_classes - - def _init_layers(self): - self.convs_bridge = nn.ModuleList() - self.convs_pred = nn.ModuleList() - for i in range(self.num_levels): - conv_bridge = ConvModule( - self.in_channels[i], - self.out_channels[i], - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - conv_pred = nn.Conv2d(self.out_channels[i], - self.num_base_priors * self.num_attrib, 1) - - self.convs_bridge.append(conv_bridge) - self.convs_pred.append(conv_pred) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, mean=0, std=0.01) - if is_norm(m): - constant_init(m, 1) - - # Use prior in model initialization to improve stability - for conv_pred, stride in zip(self.convs_pred, self.featmap_strides): - bias = conv_pred.bias.reshape(self.num_base_priors, -1) - # init objectness with prior of 8 objects per feature map - # refer to https://github.com/ultralytics/yolov3 - nn.init.constant_(bias.data[:, 4], - bias_init_with_prob(8 / (608 / stride)**2)) - nn.init.constant_(bias.data[:, 5:], bias_init_with_prob(0.01)) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple[Tensor]: A tuple of multi-level predication map, each is a - 4D-tensor of shape (batch_size, 5+num_classes, height, width). - """ - - assert len(feats) == self.num_levels - pred_maps = [] - for i in range(self.num_levels): - x = feats[i] - x = self.convs_bridge[i](x) - pred_map = self.convs_pred[i](x) - pred_maps.append(pred_map) - - return tuple(pred_maps), - - @force_fp32(apply_to=('pred_maps', )) - def get_bboxes(self, - pred_maps, - img_metas, - cfg=None, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. It has - been accelerated since PR #5991. - - Args: - pred_maps (list[Tensor]): Raw predictions for a batch of images. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used. Default: None. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- """ - assert len(pred_maps) == self.num_levels - cfg = self.test_cfg if cfg is None else cfg - scale_factors = [img_meta['scale_factor'] for img_meta in img_metas] - - num_imgs = len(img_metas) - featmap_sizes = [pred_map.shape[-2:] for pred_map in pred_maps] - - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=pred_maps[0].device) - flatten_preds = [] - flatten_strides = [] - for pred, stride in zip(pred_maps, self.featmap_strides): - pred = pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, - self.num_attrib) - pred[..., :2].sigmoid_() - flatten_preds.append(pred) - flatten_strides.append( - pred.new_tensor(stride).expand(pred.size(1))) - - flatten_preds = torch.cat(flatten_preds, dim=1) - flatten_bbox_preds = flatten_preds[..., :4] - flatten_objectness = flatten_preds[..., 4].sigmoid() - flatten_cls_scores = flatten_preds[..., 5:].sigmoid() - flatten_anchors = torch.cat(mlvl_anchors) - flatten_strides = torch.cat(flatten_strides) - flatten_bboxes = self.bbox_coder.decode(flatten_anchors, - flatten_bbox_preds, - flatten_strides.unsqueeze(-1)) - - if with_nms and (flatten_objectness.size(0) == 0): - return torch.zeros((0, 5)), torch.zeros((0, )) - - if rescale: - flatten_bboxes /= flatten_bboxes.new_tensor( - scale_factors).unsqueeze(1) - - padding = flatten_bboxes.new_zeros(num_imgs, flatten_bboxes.shape[1], - 1) - flatten_cls_scores = torch.cat([flatten_cls_scores, padding], dim=-1) - - det_results = [] - for (bboxes, scores, objectness) in zip(flatten_bboxes, - flatten_cls_scores, - flatten_objectness): - # Filtering out all predictions with conf < conf_thr - conf_thr = cfg.get('conf_thr', -1) - if conf_thr > 0: - conf_inds = objectness >= conf_thr - bboxes = bboxes[conf_inds, :] - scores = scores[conf_inds, :] - objectness = objectness[conf_inds] - - det_bboxes, det_labels = multiclass_nms( - bboxes, - scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=objectness) - det_results.append(tuple([det_bboxes, det_labels])) - return det_results - - @force_fp32(apply_to=('pred_maps', )) - def loss(self, - pred_maps, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - pred_maps (list[Tensor]): Prediction map for each scale level, - shape (N, num_anchors * num_attrib, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - num_imgs = len(img_metas) - device = pred_maps[0][0].device - - featmap_sizes = [ - pred_maps[i].shape[-2:] for i in range(self.num_levels) - ] - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - anchor_list = [mlvl_anchors for _ in range(num_imgs)] - - responsible_flag_list = [] - for img_id in range(len(img_metas)): - responsible_flag_list.append( - self.prior_generator.responsible_flags(featmap_sizes, - gt_bboxes[img_id], - device)) - - target_maps_list, neg_maps_list = self.get_targets( - anchor_list, responsible_flag_list, gt_bboxes, gt_labels) - - losses_cls, losses_conf, losses_xy, losses_wh = multi_apply( - self.loss_single, pred_maps, target_maps_list, neg_maps_list) - - return dict( - loss_cls=losses_cls, - loss_conf=losses_conf, - loss_xy=losses_xy, - loss_wh=losses_wh) - - def loss_single(self, pred_map, target_map, neg_map): - """Compute loss of a single image from a batch. - - Args: - pred_map (Tensor): Raw predictions for a single level. - target_map (Tensor): The Ground-Truth target for a single level. - neg_map (Tensor): The negative masks for a single level. - - Returns: - tuple: - loss_cls (Tensor): Classification loss. - loss_conf (Tensor): Confidence loss. - loss_xy (Tensor): Regression loss of x, y coordinate. - loss_wh (Tensor): Regression loss of w, h coordinate. - """ - - num_imgs = len(pred_map) - pred_map = pred_map.permute(0, 2, 3, - 1).reshape(num_imgs, -1, self.num_attrib) - neg_mask = neg_map.float() - pos_mask = target_map[..., 4] - pos_and_neg_mask = neg_mask + pos_mask - pos_mask = pos_mask.unsqueeze(dim=-1) - if torch.max(pos_and_neg_mask) > 1.: - warnings.warn('There is overlap between pos and neg sample.') - pos_and_neg_mask = pos_and_neg_mask.clamp(min=0., max=1.) - - pred_xy = pred_map[..., :2] - pred_wh = pred_map[..., 2:4] - pred_conf = pred_map[..., 4] - pred_label = pred_map[..., 5:] - - target_xy = target_map[..., :2] - target_wh = target_map[..., 2:4] - target_conf = target_map[..., 4] - target_label = target_map[..., 5:] - - loss_cls = self.loss_cls(pred_label, target_label, weight=pos_mask) - loss_conf = self.loss_conf( - pred_conf, target_conf, weight=pos_and_neg_mask) - loss_xy = self.loss_xy(pred_xy, target_xy, weight=pos_mask) - loss_wh = self.loss_wh(pred_wh, target_wh, weight=pos_mask) - - return loss_cls, loss_conf, loss_xy, loss_wh - - def get_targets(self, anchor_list, responsible_flag_list, gt_bboxes_list, - gt_labels_list): - """Compute target maps for anchors in multiple images. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_total_anchors, 4). - responsible_flag_list (list[list[Tensor]]): Multi level responsible - flags of each image. Each element is a tensor of shape - (num_total_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - target_map_list (list[Tensor]): Target map of each level. - - neg_map_list (list[Tensor]): Negative map of each level. 
- """ - num_imgs = len(anchor_list) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - - results = multi_apply(self._get_targets_single, anchor_list, - responsible_flag_list, gt_bboxes_list, - gt_labels_list) - - all_target_maps, all_neg_maps = results - assert num_imgs == len(all_target_maps) == len(all_neg_maps) - target_maps_list = images_to_levels(all_target_maps, num_level_anchors) - neg_maps_list = images_to_levels(all_neg_maps, num_level_anchors) - - return target_maps_list, neg_maps_list - - def _get_targets_single(self, anchors, responsible_flags, gt_bboxes, - gt_labels): - """Generate matching bounding box prior and converted GT. - - Args: - anchors (list[Tensor]): Multi-level anchors of the image. - responsible_flags (list[Tensor]): Multi-level responsible flags of - anchors - gt_bboxes (Tensor): Ground truth bboxes of single image. - gt_labels (Tensor): Ground truth labels of single image. - - Returns: - tuple: - target_map (Tensor): Predication target map of each - scale level, shape (num_total_anchors, - 5+num_classes) - neg_map (Tensor): Negative map of each scale level, - shape (num_total_anchors,) - """ - - anchor_strides = [] - for i in range(len(anchors)): - anchor_strides.append( - torch.tensor(self.featmap_strides[i], - device=gt_bboxes.device).repeat(len(anchors[i]))) - concat_anchors = torch.cat(anchors) - concat_responsible_flags = torch.cat(responsible_flags) - - anchor_strides = torch.cat(anchor_strides) - assert len(anchor_strides) == len(concat_anchors) == \ - len(concat_responsible_flags) - assign_result = self.assigner.assign(concat_anchors, - concat_responsible_flags, - gt_bboxes) - sampling_result = self.sampler.sample(assign_result, concat_anchors, - gt_bboxes) - - target_map = concat_anchors.new_zeros( - concat_anchors.size(0), self.num_attrib) - - target_map[sampling_result.pos_inds, :4] = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes, - anchor_strides[sampling_result.pos_inds]) - - target_map[sampling_result.pos_inds, 4] = 1 - - gt_labels_one_hot = F.one_hot( - gt_labels, num_classes=self.num_classes).float() - if self.one_hot_smoother != 0: # label smooth - gt_labels_one_hot = gt_labels_one_hot * ( - 1 - self.one_hot_smoother - ) + self.one_hot_smoother / self.num_classes - target_map[sampling_result.pos_inds, 5:] = gt_labels_one_hot[ - sampling_result.pos_assigned_gt_inds] - - neg_map = concat_anchors.new_zeros( - concat_anchors.size(0), dtype=torch.uint8) - neg_map[sampling_result.neg_inds] = 1 - - return target_map, neg_map - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. 
- - Returns: - list[ndarray]: bbox results of each class - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) - - @force_fp32(apply_to=('pred_maps')) - def onnx_export(self, pred_maps, img_metas, with_nms=True): - num_levels = len(pred_maps) - pred_maps_list = [pred_maps[i].detach() for i in range(num_levels)] - - cfg = self.test_cfg - assert len(pred_maps_list) == self.num_levels - - device = pred_maps_list[0].device - batch_size = pred_maps_list[0].shape[0] - - featmap_sizes = [ - pred_maps_list[i].shape[-2:] for i in range(self.num_levels) - ] - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), device=device, dtype=torch.long) - - multi_lvl_bboxes = [] - multi_lvl_cls_scores = [] - multi_lvl_conf_scores = [] - for i in range(self.num_levels): - # get some key info for current scale - pred_map = pred_maps_list[i] - stride = self.featmap_strides[i] - # (b,h, w, num_anchors*num_attrib) -> - # (b,h*w*num_anchors, num_attrib) - pred_map = pred_map.permute(0, 2, 3, - 1).reshape(batch_size, -1, - self.num_attrib) - # Inplace operation like - # ```pred_map[..., :2] = \torch.sigmoid(pred_map[..., :2])``` - # would create constant tensor when exporting to onnx - pred_map_conf = torch.sigmoid(pred_map[..., :2]) - pred_map_rest = pred_map[..., 2:] - pred_map = torch.cat([pred_map_conf, pred_map_rest], dim=-1) - pred_map_boxes = pred_map[..., :4] - multi_lvl_anchor = mlvl_anchors[i] - multi_lvl_anchor = multi_lvl_anchor.expand_as(pred_map_boxes) - bbox_pred = self.bbox_coder.decode(multi_lvl_anchor, - pred_map_boxes, stride) - # conf and cls - conf_pred = torch.sigmoid(pred_map[..., 4]) - cls_pred = torch.sigmoid(pred_map[..., 5:]).view( - batch_size, -1, self.num_classes) # Cls pred one-hot. 
- - # Get top-k prediction - from mmdet.core.export import get_k_for_topk - nms_pre = get_k_for_topk(nms_pre_tensor, bbox_pred.shape[1]) - if nms_pre > 0: - _, topk_inds = conf_pred.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - transformed_inds = ( - bbox_pred.shape[1] * batch_inds + topk_inds) - bbox_pred = bbox_pred.reshape(-1, - 4)[transformed_inds, :].reshape( - batch_size, -1, 4) - cls_pred = cls_pred.reshape( - -1, self.num_classes)[transformed_inds, :].reshape( - batch_size, -1, self.num_classes) - conf_pred = conf_pred.reshape(-1, 1)[transformed_inds].reshape( - batch_size, -1) - - # Save the result of current scale - multi_lvl_bboxes.append(bbox_pred) - multi_lvl_cls_scores.append(cls_pred) - multi_lvl_conf_scores.append(conf_pred) - - # Merge the results of different scales together - batch_mlvl_bboxes = torch.cat(multi_lvl_bboxes, dim=1) - batch_mlvl_scores = torch.cat(multi_lvl_cls_scores, dim=1) - batch_mlvl_conf_scores = torch.cat(multi_lvl_conf_scores, dim=1) - - # Replace multiclass_nms with ONNX::NonMaxSuppression in deployment - from mmdet.core.export import add_dummy_nms_for_onnx - conf_thr = cfg.get('conf_thr', -1) - score_thr = cfg.get('score_thr', -1) - # follow original pipeline of YOLOv3 - if conf_thr > 0: - mask = (batch_mlvl_conf_scores >= conf_thr).float() - batch_mlvl_conf_scores *= mask - if score_thr > 0: - mask = (batch_mlvl_scores > score_thr).float() - batch_mlvl_scores *= mask - batch_mlvl_conf_scores = batch_mlvl_conf_scores.unsqueeze(2).expand_as( - batch_mlvl_scores) - batch_mlvl_scores = batch_mlvl_scores * batch_mlvl_conf_scores - if with_nms: - max_output_boxes_per_class = cfg.nms.get( - 'max_output_boxes_per_class', 200) - iou_threshold = cfg.nms.get('iou_threshold', 0.5) - # keep aligned with original pipeline, improve - # mAP by 1% for YOLOv3 in ONNX - score_threshold = 0 - nms_pre = cfg.get('deploy_nms_pre', -1) - return add_dummy_nms_for_onnx( - batch_mlvl_bboxes, - batch_mlvl_scores, - max_output_boxes_per_class, - iou_threshold, - score_threshold, - nms_pre, - cfg.max_per_img, - ) - else: - return batch_mlvl_bboxes, batch_mlvl_scores diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolof_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolof_head.py deleted file mode 100644 index 1063524a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolof_head.py +++ /dev/null @@ -1,416 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import (ConvModule, bias_init_with_prob, constant_init, is_norm, - normal_init) -from mmcv.runner import force_fp32 - -from mmdet.core import anchor_inside_flags, multi_apply, reduce_mean, unmap -from ..builder import HEADS -from .anchor_head import AnchorHead - -INF = 1e8 - - -def levels_to_images(mlvl_tensor): - """Concat multi-level feature maps by image. - - [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] - Convert the shape of each element in mlvl_tensor from (N, C, H, W) to - (N, H*W , C), then split the element to N elements with shape (H*W, C), and - concat elements in same image of all level along first dimension. - - Args: - mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from - corresponding level. 
Each element is of shape (N, C, H, W) - - Returns: - list[torch.Tensor]: A list that contains N tensors and each tensor is - of shape (num_elements, C) - """ - batch_size = mlvl_tensor[0].size(0) - batch_list = [[] for _ in range(batch_size)] - channels = mlvl_tensor[0].size(1) - for t in mlvl_tensor: - t = t.permute(0, 2, 3, 1) - t = t.view(batch_size, -1, channels).contiguous() - for img in range(batch_size): - batch_list[img].append(t[img]) - return [torch.cat(item, 0) for item in batch_list] - - -@HEADS.register_module() -class YOLOFHead(AnchorHead): - """YOLOFHead Paper link: https://arxiv.org/abs/2103.09460. - - Args: - num_classes (int): The number of object classes (w/o background) - in_channels (List[int]): The number of input channels per scale. - cls_num_convs (int): The number of convolutions of cls branch. - Default 2. - reg_num_convs (int): The number of convolutions of reg branch. - Default 4. - norm_cfg (dict): Dictionary to construct and config norm layer. - """ - - def __init__(self, - num_classes, - in_channels, - num_cls_convs=2, - num_reg_convs=4, - norm_cfg=dict(type='BN', requires_grad=True), - **kwargs): - self.num_cls_convs = num_cls_convs - self.num_reg_convs = num_reg_convs - self.norm_cfg = norm_cfg - super(YOLOFHead, self).__init__(num_classes, in_channels, **kwargs) - - def _init_layers(self): - cls_subnet = [] - bbox_subnet = [] - for i in range(self.num_cls_convs): - cls_subnet.append( - ConvModule( - self.in_channels, - self.in_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg)) - for i in range(self.num_reg_convs): - bbox_subnet.append( - ConvModule( - self.in_channels, - self.in_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg)) - self.cls_subnet = nn.Sequential(*cls_subnet) - self.bbox_subnet = nn.Sequential(*bbox_subnet) - self.cls_score = nn.Conv2d( - self.in_channels, - self.num_base_priors * self.num_classes, - kernel_size=3, - stride=1, - padding=1) - self.bbox_pred = nn.Conv2d( - self.in_channels, - self.num_base_priors * 4, - kernel_size=3, - stride=1, - padding=1) - self.object_pred = nn.Conv2d( - self.in_channels, - self.num_base_priors, - kernel_size=3, - stride=1, - padding=1) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, mean=0, std=0.01) - if is_norm(m): - constant_init(m, 1) - - # Use prior in model initialization to improve stability - bias_cls = bias_init_with_prob(0.01) - torch.nn.init.constant_(self.cls_score.bias, bias_cls) - - def forward_single(self, feature): - cls_score = self.cls_score(self.cls_subnet(feature)) - N, _, H, W = cls_score.shape - cls_score = cls_score.view(N, -1, self.num_classes, H, W) - - reg_feat = self.bbox_subnet(feature) - bbox_reg = self.bbox_pred(reg_feat) - objectness = self.object_pred(reg_feat) - - # implicit objectness - objectness = objectness.view(N, -1, 1, H, W) - normalized_cls_score = cls_score + objectness - torch.log( - 1. + torch.clamp(cls_score.exp(), max=INF) + - torch.clamp(objectness.exp(), max=INF)) - normalized_cls_score = normalized_cls_score.view(N, -1, H, W) - return normalized_cls_score, bbox_reg - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. 
- - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (batch, num_anchors * num_classes, h, w) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (batch, num_anchors * 4, h, w) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == 1 - assert self.prior_generator.num_levels == 1 - - device = cls_scores[0].device - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - - # The output level is always 1 - anchor_list = [anchors[0] for anchors in anchor_list] - valid_flag_list = [valid_flags[0] for valid_flags in valid_flag_list] - - cls_scores_list = levels_to_images(cls_scores) - bbox_preds_list = levels_to_images(bbox_preds) - - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - cls_scores_list, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (batch_labels, batch_label_weights, num_total_pos, num_total_neg, - batch_bbox_weights, batch_pos_predicted_boxes, - batch_target_boxes) = cls_reg_targets - - flatten_labels = batch_labels.reshape(-1) - batch_label_weights = batch_label_weights.reshape(-1) - cls_score = cls_scores[0].permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - - num_total_samples = (num_total_pos + - num_total_neg) if self.sampling else num_total_pos - num_total_samples = reduce_mean( - cls_score.new_tensor(num_total_samples)).clamp_(1.0).item() - - # classification loss - loss_cls = self.loss_cls( - cls_score, - flatten_labels, - batch_label_weights, - avg_factor=num_total_samples) - - # regression loss - if batch_pos_predicted_boxes.shape[0] == 0: - # no pos sample - loss_bbox = batch_pos_predicted_boxes.sum() * 0 - else: - loss_bbox = self.loss_bbox( - batch_pos_predicted_boxes, - batch_target_boxes, - batch_bbox_weights.float(), - avg_factor=num_total_samples) - - return dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - def get_targets(self, - cls_scores_list, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - cls_scores_list (list[Tensor]): Classification scores of - each image. each is a 4D-tensor, the shape is - (h * w, num_anchors * num_classes). - bbox_preds_list (list[Tensor]): Bbox preds of each image. - each is a 4D-tensor, the shape is (h * w, num_anchors * 4). - anchor_list (list[Tensor]): Anchors of each image. Each element of - is a tensor of shape (h * w * num_anchors, 4). - valid_flag_list (list[Tensor]): Valid flags of each image. Each - element of is a tensor of shape (h * w * num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. 
- img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - batch_labels (Tensor): Label of all images. Each element \ - of is a tensor of shape (batch, h * w * num_anchors) - - batch_label_weights (Tensor): Label weights of all images \ - of is a tensor of shape (batch, h * w * num_anchors) - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - (all_labels, all_label_weights, pos_inds_list, neg_inds_list, - sampling_results_list) = results[:5] - rest_results = list(results[5:]) # user-added return values - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - - batch_labels = torch.stack(all_labels, 0) - batch_label_weights = torch.stack(all_label_weights, 0) - - res = (batch_labels, batch_label_weights, num_total_pos, num_total_neg) - for i, rests in enumerate(rest_results): # user-added return values - rest_results[i] = torch.cat(rests, 0) - - return res + tuple(rest_results) - - def _get_targets_single(self, - bbox_preds, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - bbox_preds (Tensor): Bbox prediction of the image, which - shape is (h * w ,4) - flat_anchors (Tensor): Anchors of the image, which shape is - (h * w * num_anchors ,4) - valid_flags (Tensor): Valid flags of the image, which shape is - (h * w * num_anchors,). - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - img_meta (dict): Meta info of the image. - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: - labels (Tensor): Labels of image, which shape is - (h * w * num_anchors, ). - label_weights (Tensor): Label weights of image, which shape is - (h * w * num_anchors, ). - pos_inds (Tensor): Pos index of image. 
- neg_inds (Tensor): Neg index of image. - sampling_result (obj:`SamplingResult`): Sampling result. - pos_bbox_weights (Tensor): The Weight of using to calculate - the bbox branch loss, which shape is (num, ). - pos_predicted_boxes (Tensor): boxes predicted value of - using to calculate the bbox branch loss, which shape is - (num, 4). - pos_target_boxes (Tensor): boxes target value of - using to calculate the bbox branch loss, which shape is - (num, 4). - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 8 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - bbox_preds = bbox_preds.reshape(-1, 4) - bbox_preds = bbox_preds[inside_flags, :] - - # decoded bbox - decoder_bbox_preds = self.bbox_coder.decode(anchors, bbox_preds) - assign_result = self.assigner.assign( - decoder_bbox_preds, anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - - pos_bbox_weights = assign_result.get_extra_property('pos_idx') - pos_predicted_boxes = assign_result.get_extra_property( - 'pos_predicted_boxes') - pos_target_boxes = assign_result.get_extra_property('target_boxes') - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - num_valid_anchors = anchors.shape[0] - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, - fill=self.num_classes) # fill bg label - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - - return (labels, label_weights, pos_inds, neg_inds, sampling_result, - pos_bbox_weights, pos_predicted_boxes, pos_target_boxes) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolox_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolox_head.py deleted file mode 100644 index de3f93cc..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/dense_heads/yolox_head.py +++ /dev/null @@ -1,491 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (ConvModule, DepthwiseSeparableConvModule, - bias_init_with_prob) -from mmcv.ops.nms import batched_nms -from mmcv.runner import force_fp32 - -from mmdet.core import (MlvlPointGenerator, bbox_xyxy_to_cxcywh, - build_assigner, build_sampler, multi_apply, - reduce_mean) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class YOLOXHead(BaseDenseHead, BBoxTestMixin): - """YOLOXHead head used in `YOLOX `_. - - Args: - num_classes (int): Number of categories excluding the background - category. 
- in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels in stacking convs. - Default: 256 - stacked_convs (int): Number of stacking convs of the head. - Default: 2. - strides (tuple): Downsample factor of each feature map. - use_depthwise (bool): Whether to depthwise separable convolution in - blocks. Default: False - dcn_on_last_conv (bool): If true, use dcn in the last layer of - towers. Default: False. - conv_bias (bool | str): If specified as `auto`, it will be decided by - the norm_cfg. Bias of conv will be set as True if `norm_cfg` is - None, otherwise False. Default: "auto". - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer. Default: None. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - loss_obj (dict): Config of objectness loss. - loss_l1 (dict): Config of L1 loss. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - stacked_convs=2, - strides=[8, 16, 32], - use_depthwise=False, - dcn_on_last_conv=False, - conv_bias='auto', - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0), - loss_bbox=dict( - type='IoULoss', - mode='square', - eps=1e-16, - reduction='sum', - loss_weight=5.0), - loss_obj=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0), - loss_l1=dict(type='L1Loss', reduction='sum', loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=dict( - type='Kaiming', - layer='Conv2d', - a=math.sqrt(5), - distribution='uniform', - mode='fan_in', - nonlinearity='leaky_relu')): - - super().__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.cls_out_channels = num_classes - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.use_depthwise = use_depthwise - self.dcn_on_last_conv = dcn_on_last_conv - assert conv_bias == 'auto' or isinstance(conv_bias, bool) - self.conv_bias = conv_bias - self.use_sigmoid_cls = True - - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_obj = build_loss(loss_obj) - - self.use_l1 = False # This flag will be modified by hooks. 
- self.loss_l1 = build_loss(loss_l1) - - self.prior_generator = MlvlPointGenerator(strides, offset=0) - - self.test_cfg = test_cfg - self.train_cfg = train_cfg - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.fp16_enabled = False - self._init_layers() - - def _init_layers(self): - self.multi_level_cls_convs = nn.ModuleList() - self.multi_level_reg_convs = nn.ModuleList() - self.multi_level_conv_cls = nn.ModuleList() - self.multi_level_conv_reg = nn.ModuleList() - self.multi_level_conv_obj = nn.ModuleList() - for _ in self.strides: - self.multi_level_cls_convs.append(self._build_stacked_convs()) - self.multi_level_reg_convs.append(self._build_stacked_convs()) - conv_cls, conv_reg, conv_obj = self._build_predictor() - self.multi_level_conv_cls.append(conv_cls) - self.multi_level_conv_reg.append(conv_reg) - self.multi_level_conv_obj.append(conv_obj) - - def _build_stacked_convs(self): - """Initialize conv layers of a single level head.""" - conv = DepthwiseSeparableConvModule \ - if self.use_depthwise else ConvModule - stacked_convs = [] - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - stacked_convs.append( - conv( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=self.conv_bias)) - return nn.Sequential(*stacked_convs) - - def _build_predictor(self): - """Initialize predictor layers of a single level head.""" - conv_cls = nn.Conv2d(self.feat_channels, self.cls_out_channels, 1) - conv_reg = nn.Conv2d(self.feat_channels, 4, 1) - conv_obj = nn.Conv2d(self.feat_channels, 1, 1) - return conv_cls, conv_reg, conv_obj - - def init_weights(self): - super(YOLOXHead, self).init_weights() - # Use prior in model initialization to improve stability - bias_init = bias_init_with_prob(0.01) - for conv_cls, conv_obj in zip(self.multi_level_conv_cls, - self.multi_level_conv_obj): - conv_cls.bias.data.fill_(bias_init) - conv_obj.bias.data.fill_(bias_init) - - def forward_single(self, x, cls_convs, reg_convs, conv_cls, conv_reg, - conv_obj): - """Forward feature of a single scale level.""" - - cls_feat = cls_convs(x) - reg_feat = reg_convs(x) - - cls_score = conv_cls(cls_feat) - bbox_pred = conv_reg(reg_feat) - objectness = conv_obj(reg_feat) - - return cls_score, bbox_pred, objectness - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - Returns: - tuple[Tensor]: A tuple of multi-level predication map, each is a - 4D-tensor of shape (batch_size, 5+num_classes, height, width). - """ - - return multi_apply(self.forward_single, feats, - self.multi_level_cls_convs, - self.multi_level_reg_convs, - self.multi_level_conv_cls, - self.multi_level_conv_reg, - self.multi_level_conv_obj) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'objectnesses')) - def get_bboxes(self, - cls_scores, - bbox_preds, - objectnesses, - img_metas=None, - cfg=None, - rescale=False, - with_nms=True): - """Transform network outputs of a batch into bbox results. 
- Args: - cls_scores (list[Tensor]): Classification scores for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * 4, H, W). - objectnesses (list[Tensor], Optional): Score factor for - all scale level, each is a 4D-tensor, has shape - (batch_size, 1, H, W). - img_metas (list[dict], Optional): Image meta info. Default None. - cfg (mmcv.Config, Optional): Test / postprocessing configuration, - if None, test_cfg would be used. Default None. - rescale (bool): If True, return boxes in original image space. - Default False. - with_nms (bool): If True, do nms before return boxes. - Default True. - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. The second item is a - (n,) tensor where each item is the predicted class label of - the corresponding box. - """ - assert len(cls_scores) == len(bbox_preds) == len(objectnesses) - cfg = self.test_cfg if cfg is None else cfg - scale_factors = [img_meta['scale_factor'] for img_meta in img_metas] - - num_imgs = len(img_metas) - featmap_sizes = [cls_score.shape[2:] for cls_score in cls_scores] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=cls_scores[0].dtype, - device=cls_scores[0].device, - with_stride=True) - - # flatten cls_scores, bbox_preds and objectness - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(num_imgs, -1, - self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) - for bbox_pred in bbox_preds - ] - flatten_objectness = [ - objectness.permute(0, 2, 3, 1).reshape(num_imgs, -1) - for objectness in objectnesses - ] - - flatten_cls_scores = torch.cat(flatten_cls_scores, dim=1).sigmoid() - flatten_bbox_preds = torch.cat(flatten_bbox_preds, dim=1) - flatten_objectness = torch.cat(flatten_objectness, dim=1).sigmoid() - flatten_priors = torch.cat(mlvl_priors) - - flatten_bboxes = self._bbox_decode(flatten_priors, flatten_bbox_preds) - - if rescale: - flatten_bboxes[..., :4] /= flatten_bboxes.new_tensor( - scale_factors).unsqueeze(1) - - result_list = [] - for img_id in range(len(img_metas)): - cls_scores = flatten_cls_scores[img_id] - score_factor = flatten_objectness[img_id] - bboxes = flatten_bboxes[img_id] - - result_list.append( - self._bboxes_nms(cls_scores, bboxes, score_factor, cfg)) - - return result_list - - def _bbox_decode(self, priors, bbox_preds): - xys = (bbox_preds[..., :2] * priors[:, 2:]) + priors[:, :2] - whs = bbox_preds[..., 2:].exp() * priors[:, 2:] - - tl_x = (xys[..., 0] - whs[..., 0] / 2) - tl_y = (xys[..., 1] - whs[..., 1] / 2) - br_x = (xys[..., 0] + whs[..., 0] / 2) - br_y = (xys[..., 1] + whs[..., 1] / 2) - - decoded_bboxes = torch.stack([tl_x, tl_y, br_x, br_y], -1) - return decoded_bboxes - - def _bboxes_nms(self, cls_scores, bboxes, score_factor, cfg): - max_scores, labels = torch.max(cls_scores, 1) - valid_mask = score_factor * max_scores >= cfg.score_thr - - bboxes = bboxes[valid_mask] - scores = max_scores[valid_mask] * score_factor[valid_mask] - labels = labels[valid_mask] - - if labels.numel() == 0: - return bboxes, labels - else: - dets, keep = batched_nms(bboxes, scores, labels, cfg.nms) - return dets, labels[keep] - - 
@force_fp32(apply_to=('cls_scores', 'bbox_preds', 'objectnesses')) - def loss(self, - cls_scores, - bbox_preds, - objectnesses, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_priors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_priors * 4. - objectnesses (list[Tensor], Optional): Score factor for - all scale level, each is a 4D-tensor, has shape - (batch_size, 1, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - """ - num_imgs = len(img_metas) - featmap_sizes = [cls_score.shape[2:] for cls_score in cls_scores] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=cls_scores[0].dtype, - device=cls_scores[0].device, - with_stride=True) - - flatten_cls_preds = [ - cls_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, - self.cls_out_channels) - for cls_pred in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) - for bbox_pred in bbox_preds - ] - flatten_objectness = [ - objectness.permute(0, 2, 3, 1).reshape(num_imgs, -1) - for objectness in objectnesses - ] - - flatten_cls_preds = torch.cat(flatten_cls_preds, dim=1) - flatten_bbox_preds = torch.cat(flatten_bbox_preds, dim=1) - flatten_objectness = torch.cat(flatten_objectness, dim=1) - flatten_priors = torch.cat(mlvl_priors) - flatten_bboxes = self._bbox_decode(flatten_priors, flatten_bbox_preds) - - (pos_masks, cls_targets, obj_targets, bbox_targets, l1_targets, - num_fg_imgs) = multi_apply( - self._get_target_single, flatten_cls_preds.detach(), - flatten_objectness.detach(), - flatten_priors.unsqueeze(0).repeat(num_imgs, 1, 1), - flatten_bboxes.detach(), gt_bboxes, gt_labels) - - # The experimental results show that ‘reduce_mean’ can improve - # performance on the COCO dataset. - num_pos = torch.tensor( - sum(num_fg_imgs), - dtype=torch.float, - device=flatten_cls_preds.device) - num_total_samples = max(reduce_mean(num_pos), 1.0) - - pos_masks = torch.cat(pos_masks, 0) - cls_targets = torch.cat(cls_targets, 0) - obj_targets = torch.cat(obj_targets, 0) - bbox_targets = torch.cat(bbox_targets, 0) - if self.use_l1: - l1_targets = torch.cat(l1_targets, 0) - - loss_bbox = self.loss_bbox( - flatten_bboxes.view(-1, 4)[pos_masks], - bbox_targets) / num_total_samples - loss_obj = self.loss_obj(flatten_objectness.view(-1, 1), - obj_targets) / num_total_samples - loss_cls = self.loss_cls( - flatten_cls_preds.view(-1, self.num_classes)[pos_masks], - cls_targets) / num_total_samples - - loss_dict = dict( - loss_cls=loss_cls, loss_bbox=loss_bbox, loss_obj=loss_obj) - - if self.use_l1: - loss_l1 = self.loss_l1( - flatten_bbox_preds.view(-1, 4)[pos_masks], - l1_targets) / num_total_samples - loss_dict.update(loss_l1=loss_l1) - - return loss_dict - - @torch.no_grad() - def _get_target_single(self, cls_preds, objectness, priors, decoded_bboxes, - gt_bboxes, gt_labels): - """Compute classification, regression, and objectness targets for - priors in a single image. 
- Args: - cls_preds (Tensor): Classification predictions of one image, - a 2D-Tensor with shape [num_priors, num_classes] - objectness (Tensor): Objectness predictions of one image, - a 1D-Tensor with shape [num_priors] - priors (Tensor): All priors of one image, a 2D-Tensor with shape - [num_priors, 4] in [cx, xy, stride_w, stride_y] format. - decoded_bboxes (Tensor): Decoded bboxes predictions of one image, - a 2D-Tensor with shape [num_priors, 4] in [tl_x, tl_y, - br_x, br_y] format. - gt_bboxes (Tensor): Ground truth bboxes of one image, a 2D-Tensor - with shape [num_gts, 4] in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth labels of one image, a Tensor - with shape [num_gts]. - """ - - num_priors = priors.size(0) - num_gts = gt_labels.size(0) - gt_bboxes = gt_bboxes.to(decoded_bboxes.dtype) - # No target - if num_gts == 0: - cls_target = cls_preds.new_zeros((0, self.num_classes)) - bbox_target = cls_preds.new_zeros((0, 4)) - l1_target = cls_preds.new_zeros((0, 4)) - obj_target = cls_preds.new_zeros((num_priors, 1)) - foreground_mask = cls_preds.new_zeros(num_priors).bool() - return (foreground_mask, cls_target, obj_target, bbox_target, - l1_target, 0) - - # YOLOX uses center priors with 0.5 offset to assign targets, - # but use center priors without offset to regress bboxes. - offset_priors = torch.cat( - [priors[:, :2] + priors[:, 2:] * 0.5, priors[:, 2:]], dim=-1) - - assign_result = self.assigner.assign( - cls_preds.sigmoid() * objectness.unsqueeze(1).sigmoid(), - offset_priors, decoded_bboxes, gt_bboxes, gt_labels) - - sampling_result = self.sampler.sample(assign_result, priors, gt_bboxes) - pos_inds = sampling_result.pos_inds - num_pos_per_img = pos_inds.size(0) - - pos_ious = assign_result.max_overlaps[pos_inds] - # IOU aware classification score - cls_target = F.one_hot(sampling_result.pos_gt_labels, - self.num_classes) * pos_ious.unsqueeze(-1) - obj_target = torch.zeros_like(objectness).unsqueeze(-1) - obj_target[pos_inds] = 1 - bbox_target = sampling_result.pos_gt_bboxes - l1_target = cls_preds.new_zeros((num_pos_per_img, 4)) - if self.use_l1: - l1_target = self._get_l1_target(l1_target, bbox_target, - priors[pos_inds]) - foreground_mask = torch.zeros_like(objectness).to(torch.bool) - foreground_mask[pos_inds] = 1 - return (foreground_mask, cls_target, obj_target, bbox_target, - l1_target, num_pos_per_img) - - def _get_l1_target(self, l1_target, gt_bboxes, priors, eps=1e-8): - """Convert gt bboxes to center offset and log width height.""" - gt_cxcywh = bbox_xyxy_to_cxcywh(gt_bboxes) - l1_target[:, :2] = (gt_cxcywh[:, :2] - priors[:, :2]) / priors[:, 2:] - l1_target[:, 2:] = torch.log(gt_cxcywh[:, 2:] / priors[:, 2:] + eps) - return l1_target diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/__init__.py deleted file mode 100644 index 5f2b3088..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .atss import ATSS -from .autoassign import AutoAssign -from .base import BaseDetector -from .cascade_rcnn import CascadeRCNN -from .centernet import CenterNet -from .cornernet import CornerNet -from .deformable_detr import DeformableDETR -from .detr import DETR -from .fast_rcnn import FastRCNN -from .faster_rcnn import FasterRCNN -from .fcos import FCOS -from .fovea import FOVEA -from .fsaf import FSAF -from .gfl import GFL -from .grid_rcnn import GridRCNN -from .htc import HybridTaskCascade -from .kd_one_stage import KnowledgeDistillationSingleStageDetector -from .lad import LAD -from .mask2former import Mask2Former -from .mask_rcnn import MaskRCNN -from .mask_scoring_rcnn import MaskScoringRCNN -from .maskformer import MaskFormer -from .nasfcos import NASFCOS -from .paa import PAA -from .panoptic_fpn import PanopticFPN -from .panoptic_two_stage_segmentor import TwoStagePanopticSegmentor -from .point_rend import PointRend -from .queryinst import QueryInst -from .reppoints_detector import RepPointsDetector -from .retinanet import RetinaNet -from .rpn import RPN -from .scnet import SCNet -from .single_stage import SingleStageDetector -from .solo import SOLO -from .sparse_rcnn import SparseRCNN -from .tood import TOOD -from .trident_faster_rcnn import TridentFasterRCNN -from .two_stage import TwoStageDetector -from .vfnet import VFNet -from .yolact import YOLACT -from .yolo import YOLOV3 -from .yolof import YOLOF -from .yolox import YOLOX - -__all__ = [ - 'ATSS', 'BaseDetector', 'SingleStageDetector', 'TwoStageDetector', 'RPN', - 'KnowledgeDistillationSingleStageDetector', 'FastRCNN', 'FasterRCNN', - 'MaskRCNN', 'CascadeRCNN', 'HybridTaskCascade', 'RetinaNet', 'FCOS', - 'GridRCNN', 'MaskScoringRCNN', 'RepPointsDetector', 'FOVEA', 'FSAF', - 'NASFCOS', 'PointRend', 'GFL', 'CornerNet', 'PAA', 'YOLOV3', 'YOLACT', - 'VFNet', 'DETR', 'TridentFasterRCNN', 'SparseRCNN', 'SCNet', 'SOLO', - 'DeformableDETR', 'AutoAssign', 'YOLOF', 'CenterNet', 'YOLOX', - 'TwoStagePanopticSegmentor', 'PanopticFPN', 'QueryInst', 'LAD', 'TOOD', - 'MaskFormer', 'Mask2Former' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/atss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/atss.py deleted file mode 100644 index 00f1acd9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/atss.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class ATSS(SingleStageDetector): - """Implementation of `ATSS `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(ATSS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/autoassign.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/autoassign.py deleted file mode 100644 index 30ab7207..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/autoassign.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class AutoAssign(SingleStageDetector): - """Implementation of `AutoAssign: Differentiable Label Assignment for Dense - Object Detection `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(AutoAssign, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/base.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/base.py deleted file mode 100644 index bf64bce6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/base.py +++ /dev/null @@ -1,360 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import mmcv -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import BaseModule, auto_fp16 - -from mmdet.core.visualization import imshow_det_bboxes - - -class BaseDetector(BaseModule, metaclass=ABCMeta): - """Base class for detectors.""" - - def __init__(self, init_cfg=None): - super(BaseDetector, self).__init__(init_cfg) - self.fp16_enabled = False - - @property - def with_neck(self): - """bool: whether the detector has a neck""" - return hasattr(self, 'neck') and self.neck is not None - - # TODO: these properties need to be carefully handled - # for both single stage & two stage detectors - @property - def with_shared_head(self): - """bool: whether the detector has a shared head in the RoI Head""" - return hasattr(self, 'roi_head') and self.roi_head.with_shared_head - - @property - def with_bbox(self): - """bool: whether the detector has a bbox head""" - return ((hasattr(self, 'roi_head') and self.roi_head.with_bbox) - or (hasattr(self, 'bbox_head') and self.bbox_head is not None)) - - @property - def with_mask(self): - """bool: whether the detector has a mask head""" - return ((hasattr(self, 'roi_head') and self.roi_head.with_mask) - or (hasattr(self, 'mask_head') and self.mask_head is not None)) - - @abstractmethod - def extract_feat(self, imgs): - """Extract features from images.""" - pass - - def extract_feats(self, imgs): - """Extract features from multiple images. - - Args: - imgs (list[torch.Tensor]): A list of images. The images are - augmented from the same image but in different ways. - - Returns: - list[torch.Tensor]: Features of different images - """ - assert isinstance(imgs, list) - return [self.extract_feat(img) for img in imgs] - - def forward_train(self, imgs, img_metas, **kwargs): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys, see - :class:`mmdet.datasets.pipelines.Collect`. - kwargs (keyword arguments): Specific to concrete implementation. - """ - # NOTE the batched image size information may be useful, e.g. - # in DETR, this is needed for the construction of masks, which is - # then used for the transformer_head. 
- batch_input_shape = tuple(imgs[0].size()[-2:]) - for img_meta in img_metas: - img_meta['batch_input_shape'] = batch_input_shape - - async def async_simple_test(self, img, img_metas, **kwargs): - raise NotImplementedError - - @abstractmethod - def simple_test(self, img, img_metas, **kwargs): - pass - - @abstractmethod - def aug_test(self, imgs, img_metas, **kwargs): - """Test function with test time augmentation.""" - pass - - async def aforward_test(self, *, img, img_metas, **kwargs): - for var, name in [(img, 'img'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(img) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(img)}) ' - f'!= num of image metas ({len(img_metas)})') - # TODO: remove the restriction of samples_per_gpu == 1 when prepared - samples_per_gpu = img[0].size(0) - assert samples_per_gpu == 1 - - if num_augs == 1: - return await self.async_simple_test(img[0], img_metas[0], **kwargs) - else: - raise NotImplementedError - - def forward_test(self, imgs, img_metas, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) ' - f'!= num of image meta ({len(img_metas)})') - - # NOTE the batched image size information may be useful, e.g. - # in DETR, this is needed for the construction of masks, which is - # then used for the transformer_head. - for img, img_meta in zip(imgs, img_metas): - batch_size = len(img_meta) - for img_id in range(batch_size): - img_meta[img_id]['batch_input_shape'] = tuple(img.size()[-2:]) - - if num_augs == 1: - # proposals (List[List[Tensor]]): the outer list indicates - # test-time augs (multiscale, flip, etc.) and the inner list - # indicates images in a batch. - # The Tensor should have a shape Px4, where P is the number of - # proposals. - if 'proposals' in kwargs: - kwargs['proposals'] = kwargs['proposals'][0] - return self.simple_test(imgs[0], img_metas[0], **kwargs) - else: - assert imgs[0].size(0) == 1, 'aug test does not support ' \ - 'inference with batch size ' \ - f'{imgs[0].size(0)}' - # TODO: support test augmentation for predefined proposals - assert 'proposals' not in kwargs - return self.aug_test(imgs, img_metas, **kwargs) - - @auto_fp16(apply_to=('img', )) - def forward(self, img, img_metas, return_loss=True, **kwargs): - """Calls either :func:`forward_train` or :func:`forward_test` depending - on whether ``return_loss`` is ``True``. - - Note this setting will change the expected inputs. When - ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor - and List[dict]), and when ``resturn_loss=False``, img and img_meta - should be double nested (i.e. List[Tensor], List[List[dict]]), with - the outer list indicating test time augmentations. 
- """ - if torch.onnx.is_in_onnx_export(): - assert len(img_metas) == 1 - return self.onnx_export(img[0], img_metas[0]) - - if return_loss: - return self.forward_train(img, img_metas, **kwargs) - else: - return self.forward_test(img, img_metas, **kwargs) - - def _parse_losses(self, losses): - """Parse the raw outputs (losses) of the network. - - Args: - losses (dict): Raw output of the network, which usually contain - losses and other necessary information. - - Returns: - tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor \ - which may be a weighted sum of all losses, log_vars contains \ - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - # If the loss_vars has different length, GPUs will wait infinitely - if dist.is_available() and dist.is_initialized(): - log_var_length = torch.tensor(len(log_vars), device=loss.device) - dist.all_reduce(log_var_length) - message = (f'rank {dist.get_rank()}' + - f' len(log_vars): {len(log_vars)}' + ' keys: ' + - ','.join(log_vars.keys())) - assert log_var_length == len(log_vars) * dist.get_world_size(), \ - 'loss log variables are different across GPUs!\n' + message - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def train_step(self, data, optimizer): - """The iteration step during training. - - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating is also defined in - this method, such as GAN. - - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. - - Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, \ - ``num_samples``. - - - ``loss`` is a tensor for back propagation, which can be a - weighted sum of multiple losses. - - ``log_vars`` contains all the variables to be sent to the - logger. - - ``num_samples`` indicates the batch size (when the model is - DDP, it means the batch size on each GPU), which is used for - averaging the logs. - """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img_metas'])) - - return outputs - - def val_step(self, data, optimizer=None): - """The iteration step during validation. - - This method shares the same signature as :func:`train_step`, but used - during val epochs. Note that the evaluation after training epochs is - not implemented with this method, but an evaluation hook. 
- """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img_metas'])) - - return outputs - - def show_result(self, - img, - result, - score_thr=0.3, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241), - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=False, - wait_time=0, - out_file=None): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - score_thr (float, optional): Minimum score of bboxes to be shown. - Default: 0.3. - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green' - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green' - mask_color (None or str or tuple(int) or :obj:`Color`): - Color of masks. The tuple of color should be in BGR order. - Default: None - thickness (int): Thickness of lines. Default: 2 - font_size (int): Font size of texts. Default: 13 - win_name (str): The window name. Default: '' - wait_time (float): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - - Returns: - img (Tensor): Only if not `show` or `out_file` - """ - img = mmcv.imread(img) - img = img.copy() - if isinstance(result, tuple): - bbox_result, segm_result = result - if isinstance(segm_result, tuple): - segm_result = segm_result[0] # ms rcnn - else: - bbox_result, segm_result = result, None - bboxes = np.vstack(bbox_result) - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - # draw segmentation masks - segms = None - if segm_result is not None and len(labels) > 0: # non empty - segms = mmcv.concat_list(segm_result) - if isinstance(segms[0], torch.Tensor): - segms = torch.stack(segms, dim=0).detach().cpu().numpy() - else: - segms = np.stack(segms, axis=0) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw bounding boxes - img = imshow_det_bboxes( - img, - bboxes, - labels, - segms, - class_names=self.CLASSES, - score_thr=score_thr, - bbox_color=bbox_color, - text_color=text_color, - mask_color=mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - - if not (show or out_file): - return img - - def onnx_export(self, img, img_metas): - raise NotImplementedError(f'{self.__class__.__name__} does ' - f'not support ONNX EXPORT') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/cascade_rcnn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/cascade_rcnn.py deleted file mode 100644 index d8c73827..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/cascade_rcnn.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class CascadeRCNN(TwoStageDetector): - r"""Implementation of `Cascade R-CNN: Delving into High Quality Object - Detection `_""" - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(CascadeRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def show_result(self, data, result, **kwargs): - """Show prediction results of the detector. - - Args: - data (str or np.ndarray): Image filename or loaded image. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - - Returns: - np.ndarray: The image with bboxes drawn on it. - """ - if self.with_mask: - ms_bbox_result, ms_segm_result = result - if isinstance(ms_bbox_result, dict): - result = (ms_bbox_result['ensemble'], - ms_segm_result['ensemble']) - else: - if isinstance(result, dict): - result = result['ensemble'] - return super(CascadeRCNN, self).show_result(data, result, **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/centernet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/centernet.py deleted file mode 100644 index e1e3fd3c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/centernet.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2result -from mmdet.models.builder import DETECTORS -from ...core.utils import flip_tensor -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class CenterNet(SingleStageDetector): - """Implementation of CenterNet(Objects as Points) - - . - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(CenterNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - def merge_aug_results(self, aug_results, with_nms): - """Merge augmented detection bboxes and score. - - Args: - aug_results (list[list[Tensor]]): Det_bboxes and det_labels of each - image. - with_nms (bool): If True, do nms before return boxes. - - Returns: - tuple: (out_bboxes, out_labels) - """ - recovered_bboxes, aug_labels = [], [] - for single_result in aug_results: - recovered_bboxes.append(single_result[0][0]) - aug_labels.append(single_result[0][1]) - - bboxes = torch.cat(recovered_bboxes, dim=0).contiguous() - labels = torch.cat(aug_labels).contiguous() - if with_nms: - out_bboxes, out_labels = self.bbox_head._bboxes_nms( - bboxes, labels, self.bbox_head.test_cfg) - else: - out_bboxes, out_labels = bboxes, labels - - return out_bboxes, out_labels - - def aug_test(self, imgs, img_metas, rescale=True): - """Augment testing of CenterNet. Aug test must have flipped image pair, - and unlike CornerNet, it will perform an averaging operation on the - feature map instead of detecting bbox. - - Args: - imgs (list[Tensor]): Augmented images. - img_metas (list[list[dict]]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: True. - - Note: - ``imgs`` must including flipped image pairs. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. 
- The outer list corresponds to each image. The inner list - corresponds to each class. - """ - img_inds = list(range(len(imgs))) - assert img_metas[0][0]['flip'] + img_metas[1][0]['flip'], ( - 'aug test must have flipped image pair') - aug_results = [] - for ind, flip_ind in zip(img_inds[0::2], img_inds[1::2]): - flip_direction = img_metas[flip_ind][0]['flip_direction'] - img_pair = torch.cat([imgs[ind], imgs[flip_ind]]) - x = self.extract_feat(img_pair) - center_heatmap_preds, wh_preds, offset_preds = self.bbox_head(x) - assert len(center_heatmap_preds) == len(wh_preds) == len( - offset_preds) == 1 - - # Feature map averaging - center_heatmap_preds[0] = ( - center_heatmap_preds[0][0:1] + - flip_tensor(center_heatmap_preds[0][1:2], flip_direction)) / 2 - wh_preds[0] = (wh_preds[0][0:1] + - flip_tensor(wh_preds[0][1:2], flip_direction)) / 2 - - bbox_list = self.bbox_head.get_bboxes( - center_heatmap_preds, - wh_preds, [offset_preds[0][0:1]], - img_metas[ind], - rescale=rescale, - with_nms=False) - aug_results.append(bbox_list) - - nms_cfg = self.bbox_head.test_cfg.get('nms_cfg', None) - if nms_cfg is None: - with_nms = False - else: - with_nms = True - bbox_list = [self.merge_aug_results(aug_results, with_nms)] - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in bbox_list - ] - return bbox_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/cornernet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/cornernet.py deleted file mode 100644 index ce921cc3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/cornernet.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2result, bbox_mapping_back -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class CornerNet(SingleStageDetector): - """CornerNet. - - This detector is the implementation of the paper `CornerNet: Detecting - Objects as Paired Keypoints `_ . - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(CornerNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - def merge_aug_results(self, aug_results, img_metas): - """Merge augmented detection bboxes and score. - - Args: - aug_results (list[list[Tensor]]): Det_bboxes and det_labels of each - image. - img_metas (list[list[dict]]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple: (bboxes, labels) - """ - recovered_bboxes, aug_labels = [], [] - for bboxes_labels, img_info in zip(aug_results, img_metas): - img_shape = img_info[0]['img_shape'] # using shape before padding - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - bboxes, labels = bboxes_labels - bboxes, scores = bboxes[:, :4], bboxes[:, -1:] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip) - recovered_bboxes.append(torch.cat([bboxes, scores], dim=-1)) - aug_labels.append(labels) - - bboxes = torch.cat(recovered_bboxes, dim=0) - labels = torch.cat(aug_labels) - - if bboxes.shape[0] > 0: - out_bboxes, out_labels = self.bbox_head._bboxes_nms( - bboxes, labels, self.bbox_head.test_cfg) - else: - out_bboxes, out_labels = bboxes, labels - - return out_bboxes, out_labels - - def aug_test(self, imgs, img_metas, rescale=False): - """Augment testing of CornerNet. 
- - Args: - imgs (list[Tensor]): Augmented images. - img_metas (list[list[dict]]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Note: - ``imgs`` must including flipped image pairs. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - img_inds = list(range(len(imgs))) - - assert img_metas[0][0]['flip'] + img_metas[1][0]['flip'], ( - 'aug test must have flipped image pair') - aug_results = [] - for ind, flip_ind in zip(img_inds[0::2], img_inds[1::2]): - img_pair = torch.cat([imgs[ind], imgs[flip_ind]]) - x = self.extract_feat(img_pair) - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, [img_metas[ind], img_metas[flip_ind]], False, False) - aug_results.append(bbox_list[0]) - aug_results.append(bbox_list[1]) - - bboxes, labels = self.merge_aug_results(aug_results, img_metas) - bbox_results = bbox2result(bboxes, labels, self.bbox_head.num_classes) - - return [bbox_results] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/deformable_detr.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/deformable_detr.py deleted file mode 100644 index b1f16422..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/deformable_detr.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .detr import DETR - - -@DETECTORS.register_module() -class DeformableDETR(DETR): - - def __init__(self, *args, **kwargs): - super(DETR, self).__init__(*args, **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/detr.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/detr.py deleted file mode 100644 index 06d76913..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/detr.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch - -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class DETR(SingleStageDetector): - r"""Implementation of `DETR: End-to-End Object Detection with - Transformers `_""" - - def __init__(self, - backbone, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(DETR, self).__init__(backbone, None, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - # over-write `forward_dummy` because: - # the forward of bbox_head requires img_metas - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - warnings.warn('Warning! MultiheadAttention in DETR does not ' - 'support flops computation! Do not use the ' - 'results in your papers!') - - batch_size, _, height, width = img.shape - dummy_img_metas = [ - dict( - batch_input_shape=(height, width), - img_shape=(height, width, 3)) for _ in range(batch_size) - ] - x = self.extract_feat(img) - outs = self.bbox_head(x, dummy_img_metas) - return outs - - # over-write `onnx_export` because: - # (1) the forward of bbox_head requires img_metas - # (2) the different behavior (e.g. construction of `masks`) between - # torch and ONNX model, during the forward of bbox_head - def onnx_export(self, img, img_metas): - """Test function for exporting to ONNX, without test time augmentation. - - Args: - img (torch.Tensor): input images. 
- img_metas (list[dict]): List of image information. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - x = self.extract_feat(img) - # forward of this head requires img_metas - outs = self.bbox_head.forward_onnx(x, img_metas) - # get shape as tensor - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - - det_bboxes, det_labels = self.bbox_head.onnx_export(*outs, img_metas) - - return det_bboxes, det_labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fast_rcnn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fast_rcnn.py deleted file mode 100644 index 7aebe151..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fast_rcnn.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class FastRCNN(TwoStageDetector): - """Implementation of `Fast R-CNN `_""" - - def __init__(self, - backbone, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(FastRCNN, self).__init__( - backbone=backbone, - neck=neck, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def forward_test(self, imgs, img_metas, proposals, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - proposals (List[List[Tensor]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. The Tensor should have a shape Px4, where - P is the number of proposals. - """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) ' - f'!= num of image meta ({len(img_metas)})') - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], proposals[0], - **kwargs) - else: - # TODO: support test-time augmentation - assert NotImplementedError diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/faster_rcnn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/faster_rcnn.py deleted file mode 100644 index 70fb662f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/faster_rcnn.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class FasterRCNN(TwoStageDetector): - """Implementation of `Faster R-CNN `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(FasterRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fcos.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fcos.py deleted file mode 100644 index d985bd02..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fcos.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FCOS(SingleStageDetector): - """Implementation of `FCOS `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(FCOS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fovea.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fovea.py deleted file mode 100644 index 6fd908c7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fovea.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FOVEA(SingleStageDetector): - """Implementation of `FoveaBox `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(FOVEA, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fsaf.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fsaf.py deleted file mode 100644 index 81ed1bde..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/fsaf.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FSAF(SingleStageDetector): - """Implementation of `FSAF `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(FSAF, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/gfl.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/gfl.py deleted file mode 100644 index 4628e2e7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/gfl.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class GFL(SingleStageDetector):
-
-    def __init__(self,
-                 backbone,
-                 neck,
-                 bbox_head,
-                 train_cfg=None,
-                 test_cfg=None,
-                 pretrained=None,
-                 init_cfg=None):
-        super(GFL, self).__init__(backbone, neck, bbox_head, train_cfg,
-                                  test_cfg, pretrained, init_cfg)
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/grid_rcnn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/grid_rcnn.py
deleted file mode 100644
index bba7873b..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/grid_rcnn.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ..builder import DETECTORS
-from .two_stage import TwoStageDetector
-
-
-@DETECTORS.register_module()
-class GridRCNN(TwoStageDetector):
-    """Grid R-CNN.
-
-    This detector is the implementation of:
-    - Grid R-CNN (https://arxiv.org/abs/1811.12030)
-    - Grid R-CNN Plus: Faster and Better (https://arxiv.org/abs/1906.05688)
-    """
-
-    def __init__(self,
-                 backbone,
-                 rpn_head,
-                 roi_head,
-                 train_cfg,
-                 test_cfg,
-                 neck=None,
-                 pretrained=None,
-                 init_cfg=None):
-        super(GridRCNN, self).__init__(
-            backbone=backbone,
-            neck=neck,
-            rpn_head=rpn_head,
-            roi_head=roi_head,
-            train_cfg=train_cfg,
-            test_cfg=test_cfg,
-            pretrained=pretrained,
-            init_cfg=init_cfg)
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/htc.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/htc.py
deleted file mode 100644
index f7c95338..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/htc.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ..builder import DETECTORS
-from .cascade_rcnn import CascadeRCNN
-
-
-@DETECTORS.register_module()
-class HybridTaskCascade(CascadeRCNN):
-    """Implementation of `HTC `_"""
-
-    def __init__(self, **kwargs):
-        super(HybridTaskCascade, self).__init__(**kwargs)
-
-    @property
-    def with_semantic(self):
-        """bool: whether the detector has a semantic head"""
-        return self.roi_head.with_semantic
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/kd_one_stage.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/kd_one_stage.py
deleted file mode 100644
index fb66b515..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/kd_one_stage.py
+++ /dev/null
@@ -1,103 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from pathlib import Path
-
-import mmcv
-import torch
-from mmcv.runner import load_checkpoint
-
-from .. import build_detector
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class KnowledgeDistillationSingleStageDetector(SingleStageDetector):
-    r"""Implementation of `Distilling the Knowledge in a Neural Network.
-    `_.
-
-    Args:
-        teacher_config (str | dict): Config file path
-            or the config object of teacher model.
-        teacher_ckpt (str, optional): Checkpoint path of teacher model.
-            If left as None, the model will not load any weights.
-    """
-
-    def __init__(self,
-                 backbone,
-                 neck,
-                 bbox_head,
-                 teacher_config,
-                 teacher_ckpt=None,
-                 eval_teacher=True,
-                 train_cfg=None,
-                 test_cfg=None,
-                 pretrained=None):
-        super().__init__(backbone, neck, bbox_head, train_cfg, test_cfg,
-                         pretrained)
-        self.eval_teacher = eval_teacher
-        # Build teacher model
-        if isinstance(teacher_config, (str, Path)):
-            teacher_config = mmcv.Config.fromfile(teacher_config)
-        self.teacher_model = build_detector(teacher_config['model'])
-        if teacher_ckpt is not None:
-            load_checkpoint(
-                self.teacher_model, teacher_ckpt, map_location='cpu')
-
-    def forward_train(self,
-                      img,
-                      img_metas,
-                      gt_bboxes,
-                      gt_labels,
-                      gt_bboxes_ignore=None):
-        """
-        Args:
-            img (Tensor): Input images of shape (N, C, H, W).
-                Typically these should be mean centered and std scaled.
-            img_metas (list[dict]): A List of image info dict where each dict
-                has: 'img_shape', 'scale_factor', 'flip', and may also contain
-                'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
-                For details on the values of these keys see
-                :class:`mmdet.datasets.pipelines.Collect`.
-            gt_bboxes (list[Tensor]): Each item are the truth boxes for each
-                image in [tl_x, tl_y, br_x, br_y] format.
-            gt_labels (list[Tensor]): Class indices corresponding to each box
-            gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
-                boxes can be ignored when computing the loss.
-        Returns:
-            dict[str, Tensor]: A dictionary of loss components.
-        """
-        x = self.extract_feat(img)
-        with torch.no_grad():
-            teacher_x = self.teacher_model.extract_feat(img)
-            out_teacher = self.teacher_model.bbox_head(teacher_x)
-        losses = self.bbox_head.forward_train(x, out_teacher, img_metas,
-                                              gt_bboxes, gt_labels,
-                                              gt_bboxes_ignore)
-        return losses
-
-    def cuda(self, device=None):
-        """Since teacher_model is registered as a plain object, it is necessary
-        to put the teacher model to cuda when calling cuda function."""
-        self.teacher_model.cuda(device=device)
-        return super().cuda(device=device)
-
-    def train(self, mode=True):
-        """Set the same train mode for teacher and student model."""
-        if self.eval_teacher:
-            self.teacher_model.train(False)
-        else:
-            self.teacher_model.train(mode)
-        super().train(mode)
-
-    def __setattr__(self, name, value):
-        """Set attribute, i.e. self.name = value
-
-        This reloading prevent the teacher model from being registered as a
-        nn.Module. The teacher module is registered as a plain object, so that
-        the teacher parameters will not show up when calling
-        ``self.parameters``, ``self.modules``, ``self.children`` methods.
-        """
-        if name == 'teacher_model':
-            object.__setattr__(self, name, value)
-        else:
-            super().__setattr__(name, value)
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/lad.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/lad.py
deleted file mode 100644
index c6cc1e0b..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/lad.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-from mmcv.runner import load_checkpoint
-
-from ..builder import DETECTORS, build_backbone, build_head, build_neck
-from .kd_one_stage import KnowledgeDistillationSingleStageDetector
-
-
-@DETECTORS.register_module()
-class LAD(KnowledgeDistillationSingleStageDetector):
-    """Implementation of `LAD `_."""
-
-    def __init__(self,
-                 backbone,
-                 neck,
-                 bbox_head,
-                 teacher_backbone,
-                 teacher_neck,
-                 teacher_bbox_head,
-                 teacher_ckpt,
-                 eval_teacher=True,
-                 train_cfg=None,
-                 test_cfg=None,
-                 pretrained=None):
-        super(KnowledgeDistillationSingleStageDetector,
-              self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg,
-                             pretrained)
-        self.eval_teacher = eval_teacher
-        self.teacher_model = nn.Module()
-        self.teacher_model.backbone = build_backbone(teacher_backbone)
-        if teacher_neck is not None:
-            self.teacher_model.neck = build_neck(teacher_neck)
-        teacher_bbox_head.update(train_cfg=train_cfg)
-        teacher_bbox_head.update(test_cfg=test_cfg)
-        self.teacher_model.bbox_head = build_head(teacher_bbox_head)
-        if teacher_ckpt is not None:
-            load_checkpoint(
-                self.teacher_model, teacher_ckpt, map_location='cpu')
-
-    @property
-    def with_teacher_neck(self):
-        """bool: whether the detector has a teacher_neck"""
-        return hasattr(self.teacher_model, 'neck') and \
-            self.teacher_model.neck is not None
-
-    def extract_teacher_feat(self, img):
-        """Directly extract teacher features from the backbone+neck."""
-        x = self.teacher_model.backbone(img)
-        if self.with_teacher_neck:
-            x = self.teacher_model.neck(x)
-        return x
-
-    def forward_train(self,
-                      img,
-                      img_metas,
-                      gt_bboxes,
-                      gt_labels,
-                      gt_bboxes_ignore=None):
-        """
-        Args:
-            img (Tensor): Input images of shape (N, C, H, W).
-                Typically these should be mean centered and std scaled.
-            img_metas (list[dict]): A List of image info dict where each dict
-                has: 'img_shape', 'scale_factor', 'flip', and may also contain
-                'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
-                For details on the values of these keys see
-                :class:`mmdet.datasets.pipelines.Collect`.
-            gt_bboxes (list[Tensor]): Each item are the truth boxes for each
-                image in [tl_x, tl_y, br_x, br_y] format.
-            gt_labels (list[Tensor]): Class indices corresponding to each box
-            gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
-                boxes can be ignored when computing the loss.
-
-        Returns:
-            dict[str, Tensor]: A dictionary of loss components.
-        """
-        # get label assignment from the teacher
-        with torch.no_grad():
-            x_teacher = self.extract_teacher_feat(img)
-            outs_teacher = self.teacher_model.bbox_head(x_teacher)
-            label_assignment_results = \
-                self.teacher_model.bbox_head.get_label_assignment(
-                    *outs_teacher, gt_bboxes, gt_labels, img_metas,
-                    gt_bboxes_ignore)
-
-        # the student use the label assignment from the teacher to learn
-        x = self.extract_feat(img)
-        losses = self.bbox_head.forward_train(x, label_assignment_results,
-                                              img_metas, gt_bboxes, gt_labels,
-                                              gt_bboxes_ignore)
-        return losses
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/mask2former.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/mask2former.py
deleted file mode 100644
index b9ad2ed2..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/mask2former.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ..builder import DETECTORS
-from .maskformer import MaskFormer
-
-
-@DETECTORS.register_module()
-class Mask2Former(MaskFormer):
-    r"""Implementation of `Masked-attention Mask
-    Transformer for Universal Image Segmentation
-    `_."""
-
-    def __init__(self,
-                 backbone,
-                 neck=None,
-                 panoptic_head=None,
-                 panoptic_fusion_head=None,
-                 train_cfg=None,
-                 test_cfg=None,
-                 init_cfg=None):
-        super().__init__(
-            backbone,
-            neck=neck,
-            panoptic_head=panoptic_head,
-            panoptic_fusion_head=panoptic_fusion_head,
-            train_cfg=train_cfg,
-            test_cfg=test_cfg,
-            init_cfg=init_cfg)
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/mask_rcnn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/mask_rcnn.py
deleted file mode 100644
index c68489f9..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/mask_rcnn.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ..builder import DETECTORS
-from .two_stage import TwoStageDetector
-
-
-@DETECTORS.register_module()
-class MaskRCNN(TwoStageDetector):
-    """Implementation of `Mask R-CNN `_"""
-
-    def __init__(self,
-                 backbone,
-                 rpn_head,
-                 roi_head,
-                 train_cfg,
-                 test_cfg,
-                 neck=None,
-                 pretrained=None,
-                 init_cfg=None):
-        super(MaskRCNN, self).__init__(
-            backbone=backbone,
-            neck=neck,
-            rpn_head=rpn_head,
-            roi_head=roi_head,
-            train_cfg=train_cfg,
-            test_cfg=test_cfg,
-            pretrained=pretrained,
-            init_cfg=init_cfg)
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/mask_scoring_rcnn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/mask_scoring_rcnn.py
deleted file mode 100644
index 5f55656f..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/mask_scoring_rcnn.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ..builder import DETECTORS
-from .two_stage import TwoStageDetector
-
-
-@DETECTORS.register_module()
-class MaskScoringRCNN(TwoStageDetector):
-    """Mask Scoring RCNN.
-
-    https://arxiv.org/abs/1903.00241
-    """
-
-    def __init__(self,
-                 backbone,
-                 rpn_head,
-                 roi_head,
-                 train_cfg,
-                 test_cfg,
-                 neck=None,
-                 pretrained=None,
-                 init_cfg=None):
-        super(MaskScoringRCNN, self).__init__(
-            backbone=backbone,
-            neck=neck,
-            rpn_head=rpn_head,
-            roi_head=roi_head,
-            train_cfg=train_cfg,
-            test_cfg=test_cfg,
-            pretrained=pretrained,
-            init_cfg=init_cfg)
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/maskformer.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/maskformer.py
deleted file mode 100644
index b626e070..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/maskformer.py
+++ /dev/null
@@ -1,233 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import mmcv -import numpy as np - -from mmdet.core import INSTANCE_OFFSET, bbox2result -from mmdet.core.visualization import imshow_det_bboxes -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class MaskFormer(SingleStageDetector): - r"""Implementation of `Per-Pixel Classification is - NOT All You Need for Semantic Segmentation - `_.""" - - def __init__(self, - backbone, - neck=None, - panoptic_head=None, - panoptic_fusion_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None): - super(SingleStageDetector, self).__init__(init_cfg=init_cfg) - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - - panoptic_head_ = panoptic_head.deepcopy() - panoptic_head_.update(train_cfg=train_cfg) - panoptic_head_.update(test_cfg=test_cfg) - self.panoptic_head = build_head(panoptic_head_) - - panoptic_fusion_head_ = panoptic_fusion_head.deepcopy() - panoptic_fusion_head_.update(test_cfg=test_cfg) - self.panoptic_fusion_head = build_head(panoptic_fusion_head_) - - self.num_things_classes = self.panoptic_head.num_things_classes - self.num_stuff_classes = self.panoptic_head.num_stuff_classes - self.num_classes = self.panoptic_head.num_classes - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def forward_dummy(self, img, img_metas): - """Used for computing network flops. See - `mmdetection/tools/analysis_tools/get_flops.py` - - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[Dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - """ - super(SingleStageDetector, self).forward_train(img, img_metas) - x = self.extract_feat(img) - outs = self.panoptic_head(x, img_metas) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_masks, - gt_semantic_seg, - gt_bboxes_ignore=None, - **kargs): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[Dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box. - gt_masks (list[BitmapMasks]): true segmentation masks for each box - used if the architecture supports a segmentation task. - gt_semantic_seg (list[tensor]): semantic segmentation mask for - images. - gt_bboxes_ignore (list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - Defaults to None. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # add batch_input_shape in img_metas - super(SingleStageDetector, self).forward_train(img, img_metas) - x = self.extract_feat(img) - losses = self.panoptic_head.forward_train(x, img_metas, gt_bboxes, - gt_labels, gt_masks, - gt_semantic_seg, - gt_bboxes_ignore) - - return losses - - def simple_test(self, imgs, img_metas, **kwargs): - """Test without augmentation. - - Args: - imgs (Tensor): A batch of images. - img_metas (list[dict]): List of image information. - - Returns: - list[dict[str, np.array | tuple]]: Semantic segmentation \ - results and panoptic segmentation results for each \ - image. - - .. code-block:: none - - [ - { - 'pan_results': np.array, # shape = [h, w] - 'ins_results': tuple[list], - # semantic segmentation results are not supported yet - 'sem_results': np.array - }, - ... - ] - """ - feats = self.extract_feat(imgs) - mask_cls_results, mask_pred_results = self.panoptic_head.simple_test( - feats, img_metas, **kwargs) - results = self.panoptic_fusion_head.simple_test( - mask_cls_results, mask_pred_results, img_metas, **kwargs) - for i in range(len(results)): - if 'pan_results' in results[i]: - results[i]['pan_results'] = results[i]['pan_results'].detach( - ).cpu().numpy() - - if 'ins_results' in results[i]: - labels_per_image, bboxes, mask_pred_binary = results[i][ - 'ins_results'] - bbox_results = bbox2result(bboxes, labels_per_image, - self.num_things_classes) - mask_results = [[] for _ in range(self.num_things_classes)] - for j, label in enumerate(labels_per_image): - mask = mask_pred_binary[j].detach().cpu().numpy() - mask_results[label].append(mask) - results[i]['ins_results'] = bbox_results, mask_results - - assert 'sem_results' not in results[i], 'segmantic segmentation '\ - 'results are not supported yet.' - - return results - - def aug_test(self, imgs, img_metas, **kwargs): - raise NotImplementedError - - def onnx_export(self, img, img_metas): - raise NotImplementedError - - def show_result(self, - img, - result, - score_thr=0.3, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241), - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=False, - wait_time=0, - out_file=None): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (dict): The results. - - score_thr (float, optional): Minimum score of bboxes to be shown. - Default: 0.3. - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green'. - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green'. - mask_color (None or str or tuple(int) or :obj:`Color`): - Color of masks. The tuple of color should be in BGR order. - Default: None. - thickness (int): Thickness of lines. Default: 2. - font_size (int): Font size of texts. Default: 13. - win_name (str): The window name. Default: ''. - wait_time (float): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - - Returns: - img (Tensor): Only if not `show` or `out_file`. 
- """ - img = mmcv.imread(img) - img = img.copy() - pan_results = result['pan_results'] - # keep objects ahead - ids = np.unique(pan_results)[::-1] - legal_indices = ids != self.num_classes # for VOID label - ids = ids[legal_indices] - labels = np.array([id % INSTANCE_OFFSET for id in ids], dtype=np.int64) - segms = (pan_results[None] == ids[:, None, None]) - - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw bounding boxes - img = imshow_det_bboxes( - img, - segms=segms, - labels=labels, - class_names=self.CLASSES, - bbox_color=bbox_color, - text_color=text_color, - mask_color=mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - - if not (show or out_file): - return img diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/nasfcos.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/nasfcos.py deleted file mode 100644 index a34c2280..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/nasfcos.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class NASFCOS(SingleStageDetector): - """NAS-FCOS: Fast Neural Architecture Search for Object Detection. - - https://arxiv.org/abs/1906.0442 - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(NASFCOS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/paa.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/paa.py deleted file mode 100644 index f5cb8372..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/paa.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class PAA(SingleStageDetector): - """Implementation of `PAA `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(PAA, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/panoptic_fpn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/panoptic_fpn.py deleted file mode 100644 index f8ac751f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/panoptic_fpn.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .panoptic_two_stage_segmentor import TwoStagePanopticSegmentor - - -@DETECTORS.register_module() -class PanopticFPN(TwoStagePanopticSegmentor): - r"""Implementation of `Panoptic feature pyramid - networks `_""" - - def __init__( - self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None, - # for panoptic segmentation - semantic_head=None, - panoptic_fusion_head=None): - super(PanopticFPN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg, - semantic_head=semantic_head, - panoptic_fusion_head=panoptic_fusion_head) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/panoptic_two_stage_segmentor.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/panoptic_two_stage_segmentor.py deleted file mode 100644 index 5ad49bac..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/panoptic_two_stage_segmentor.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch - -from mmdet.core import INSTANCE_OFFSET, bbox2roi, multiclass_nms -from mmdet.core.visualization import imshow_det_bboxes -from ..builder import DETECTORS, build_head -from ..roi_heads.mask_heads.fcn_mask_head import _do_paste_mask -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class TwoStagePanopticSegmentor(TwoStageDetector): - """Base class of Two-stage Panoptic Segmentor. - - As well as the components in TwoStageDetector, Panoptic Segmentor has extra - semantic_head and panoptic_fusion_head. - """ - - def __init__( - self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None, - # for panoptic segmentation - semantic_head=None, - panoptic_fusion_head=None): - super(TwoStagePanopticSegmentor, - self).__init__(backbone, neck, rpn_head, roi_head, train_cfg, - test_cfg, pretrained, init_cfg) - if semantic_head is not None: - self.semantic_head = build_head(semantic_head) - if panoptic_fusion_head is not None: - panoptic_cfg = test_cfg.panoptic if test_cfg is not None else None - panoptic_fusion_head_ = panoptic_fusion_head.deepcopy() - panoptic_fusion_head_.update(test_cfg=panoptic_cfg) - self.panoptic_fusion_head = build_head(panoptic_fusion_head_) - - self.num_things_classes = self.panoptic_fusion_head.\ - num_things_classes - self.num_stuff_classes = self.panoptic_fusion_head.\ - num_stuff_classes - self.num_classes = self.panoptic_fusion_head.num_classes - - @property - def with_semantic_head(self): - return hasattr(self, - 'semantic_head') and self.semantic_head is not None - - @property - def with_panoptic_fusion_head(self): - return hasattr(self, 'panoptic_fusion_heads') and \ - self.panoptic_fusion_head is not None - - def forward_dummy(self, img): - """Used for computing network flops. 
- - See `mmdetection/tools/get_flops.py` - """ - raise NotImplementedError( - f'`forward_dummy` is not implemented in {self.__class__.__name__}') - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - gt_semantic_seg=None, - proposals=None, - **kwargs): - x = self.extract_feat(img) - losses = dict() - - # RPN forward and loss - if self.with_rpn: - proposal_cfg = self.train_cfg.get('rpn_proposal', - self.test_cfg.rpn) - rpn_losses, proposal_list = self.rpn_head.forward_train( - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=gt_bboxes_ignore, - proposal_cfg=proposal_cfg) - losses.update(rpn_losses) - else: - proposal_list = proposals - - roi_losses = self.roi_head.forward_train(x, img_metas, proposal_list, - gt_bboxes, gt_labels, - gt_bboxes_ignore, gt_masks, - **kwargs) - losses.update(roi_losses) - - semantic_loss = self.semantic_head.forward_train(x, gt_semantic_seg) - losses.update(semantic_loss) - - return losses - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Simple test for mask head without augmentation.""" - img_shapes = tuple(meta['ori_shape'] - for meta in img_metas) if rescale else tuple( - meta['pad_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - masks = [] - for img_shape in img_shapes: - out_shape = (0, self.roi_head.bbox_head.num_classes) \ - + img_shape[:2] - masks.append(det_bboxes[0].new_zeros(out_shape)) - mask_pred = det_bboxes[0].new_zeros((0, 80, 28, 28)) - mask_results = dict( - masks=masks, mask_pred=mask_pred, mask_feats=None) - return mask_results - - _bboxes = [det_bboxes[i][:, :4] for i in range(len(det_bboxes))] - if rescale: - if not isinstance(scale_factors[0], float): - scale_factors = [ - det_bboxes[0].new_tensor(scale_factor) - for scale_factor in scale_factors - ] - _bboxes = [ - _bboxes[i] * scale_factors[i] for i in range(len(_bboxes)) - ] - - mask_rois = bbox2roi(_bboxes) - mask_results = self.roi_head._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - # split batch mask prediction back to each image - num_mask_roi_per_img = [len(det_bbox) for det_bbox in det_bboxes] - mask_preds = mask_pred.split(num_mask_roi_per_img, 0) - - # resize the mask_preds to (K, H, W) - masks = [] - for i in range(len(_bboxes)): - det_bbox = det_bboxes[i][:, :4] - det_label = det_labels[i] - - mask_pred = mask_preds[i].sigmoid() - - box_inds = torch.arange(mask_pred.shape[0]) - mask_pred = mask_pred[box_inds, det_label][:, None] - - img_h, img_w, _ = img_shapes[i] - mask_pred, _ = _do_paste_mask( - mask_pred, det_bbox, img_h, img_w, skip_empty=False) - masks.append(mask_pred) - - mask_results['masks'] = masks - - return mask_results - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without Augmentation.""" - x = self.extract_feat(img) - - if proposals is None: - proposal_list = self.rpn_head.simple_test_rpn(x, img_metas) - else: - proposal_list = proposals - - bboxes, scores = self.roi_head.simple_test_bboxes( - x, img_metas, proposal_list, None, rescale=rescale) - - pan_cfg = self.test_cfg.panoptic - # class-wise predictions - det_bboxes = [] - det_labels = [] - for bboxe, score in zip(bboxes, scores): - det_bbox, det_label = multiclass_nms(bboxe, score, - pan_cfg.score_thr, - pan_cfg.nms, - pan_cfg.max_per_img) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - - 
mask_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - masks = mask_results['masks'] - - seg_preds = self.semantic_head.simple_test(x, img_metas, rescale) - - results = [] - for i in range(len(det_bboxes)): - pan_results = self.panoptic_fusion_head.simple_test( - det_bboxes[i], det_labels[i], masks[i], seg_preds[i]) - pan_results = pan_results.int().detach().cpu().numpy() - result = dict(pan_results=pan_results) - results.append(result) - return results - - def show_result(self, - img, - result, - score_thr=0.3, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241), - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=False, - wait_time=0, - out_file=None): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (dict): The results. - - score_thr (float, optional): Minimum score of bboxes to be shown. - Default: 0.3. - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green'. - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green'. - mask_color (None or str or tuple(int) or :obj:`Color`): - Color of masks. The tuple of color should be in BGR order. - Default: None. - thickness (int): Thickness of lines. Default: 2. - font_size (int): Font size of texts. Default: 13. - win_name (str): The window name. Default: ''. - wait_time (float): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - - Returns: - img (Tensor): Only if not `show` or `out_file`. - """ - img = mmcv.imread(img) - img = img.copy() - pan_results = result['pan_results'] - # keep objects ahead - ids = np.unique(pan_results)[::-1] - legal_indices = ids != self.num_classes # for VOID label - ids = ids[legal_indices] - labels = np.array([id % INSTANCE_OFFSET for id in ids], dtype=np.int64) - segms = (pan_results[None] == ids[:, None, None]) - - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw bounding boxes - img = imshow_det_bboxes( - img, - segms=segms, - labels=labels, - class_names=self.CLASSES, - bbox_color=bbox_color, - text_color=text_color, - mask_color=mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - - if not (show or out_file): - return img diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/point_rend.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/point_rend.py deleted file mode 100644 index 90eb4d40..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/point_rend.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class PointRend(TwoStageDetector): - """PointRend: Image Segmentation as Rendering - - This detector is the implementation of - `PointRend `_. 
- - """ - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(PointRend, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/queryinst.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/queryinst.py deleted file mode 100644 index 5fc216c4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/queryinst.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .sparse_rcnn import SparseRCNN - - -@DETECTORS.register_module() -class QueryInst(SparseRCNN): - r"""Implementation of - `Instances as Queries `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(QueryInst, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/reppoints_detector.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/reppoints_detector.py deleted file mode 100644 index f1986cdc..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/reppoints_detector.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class RepPointsDetector(SingleStageDetector): - """RepPoints: Point Set Representation for Object Detection. - - This detector is the implementation of: - - RepPoints detector (https://arxiv.org/pdf/1904.11490) - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(RepPointsDetector, - self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg, - pretrained, init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/retinanet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/retinanet.py deleted file mode 100644 index c28545ab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/retinanet.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class RetinaNet(SingleStageDetector): - """Implementation of `RetinaNet `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(RetinaNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/rpn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/rpn.py deleted file mode 100644 index 6ec326b7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/rpn.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import mmcv -import torch -from mmcv.image import tensor2imgs - -from mmdet.core import bbox_mapping -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class RPN(BaseDetector): - """Implementation of Region Proposal Network.""" - - def __init__(self, - backbone, - neck, - rpn_head, - train_cfg, - test_cfg, - pretrained=None, - init_cfg=None): - super(RPN, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - self.neck = build_neck(neck) if neck is not None else None - rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None - rpn_head.update(train_cfg=rpn_train_cfg) - rpn_head.update(test_cfg=test_cfg.rpn) - self.rpn_head = build_head(rpn_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img): - """Extract features. - - Args: - img (torch.Tensor): Image tensor with shape (n, c, h ,w). - - Returns: - list[torch.Tensor]: Multi-level features that may have - different resolutions. - """ - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Dummy forward function.""" - x = self.extract_feat(img) - rpn_outs = self.rpn_head(x) - return rpn_outs - - def forward_train(self, - img, - img_metas, - gt_bboxes=None, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - if (isinstance(self.train_cfg.rpn, dict) - and self.train_cfg.rpn.get('debug', False)): - self.rpn_head.debug_imgs = tensor2imgs(img) - - x = self.extract_feat(img) - losses = self.rpn_head.forward_train(x, img_metas, gt_bboxes, None, - gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[np.ndarray]: proposals - """ - x = self.extract_feat(img) - # get origin input shape to onnx dynamic input shape - if torch.onnx.is_in_onnx_export(): - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - proposal_list = self.rpn_head.simple_test_rpn(x, img_metas) - if rescale: - for proposals, meta in zip(proposal_list, img_metas): - proposals[:, :4] /= proposals.new_tensor(meta['scale_factor']) - if torch.onnx.is_in_onnx_export(): - return proposal_list - - return [proposal.cpu().numpy() for proposal in proposal_list] - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation. 
- - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[np.ndarray]: proposals - """ - proposal_list = self.rpn_head.aug_test_rpn( - self.extract_feats(imgs), img_metas) - if not rescale: - for proposals, img_meta in zip(proposal_list, img_metas[0]): - img_shape = img_meta['img_shape'] - scale_factor = img_meta['scale_factor'] - flip = img_meta['flip'] - flip_direction = img_meta['flip_direction'] - proposals[:, :4] = bbox_mapping(proposals[:, :4], img_shape, - scale_factor, flip, - flip_direction) - return [proposal.cpu().numpy() for proposal in proposal_list] - - def show_result(self, data, result, top_k=20, **kwargs): - """Show RPN proposals on the image. - - Args: - data (str or np.ndarray): Image filename or loaded image. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - top_k (int): Plot the first k bboxes only - if set positive. Default: 20 - - Returns: - np.ndarray: The image with bboxes drawn on it. - """ - if kwargs is not None: - kwargs.pop('score_thr', None) - kwargs.pop('text_color', None) - kwargs['colors'] = kwargs.pop('bbox_color', 'green') - mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/scnet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/scnet.py deleted file mode 100644 index a361d81c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/scnet.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .cascade_rcnn import CascadeRCNN - - -@DETECTORS.register_module() -class SCNet(CascadeRCNN): - """Implementation of `SCNet `_""" - - def __init__(self, **kwargs): - super(SCNet, self).__init__(**kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/single_stage.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/single_stage.py deleted file mode 100644 index c375c72d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/single_stage.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch - -from mmdet.core import bbox2result -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class SingleStageDetector(BaseDetector): - """Base class for single-stage detectors. - - Single-stage detectors directly and densely predict bounding boxes on the - output features of the backbone+neck. 
- """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(SingleStageDetector, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img): - """Directly extract features from the backbone+neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - super(SingleStageDetector, self).forward_train(img, img_metas) - x = self.extract_feat(img) - losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes, - gt_labels, gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test-time augmentation. - - Args: - img (torch.Tensor): Images with shape (N, C, H, W). - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - feat = self.extract_feat(img) - results_list = self.bbox_head.simple_test( - feat, img_metas, rescale=rescale) - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in results_list - ] - return bbox_results - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - imgs (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. 
- """ - assert hasattr(self.bbox_head, 'aug_test'), \ - f'{self.bbox_head.__class__.__name__}' \ - ' does not support test-time augmentation' - - feats = self.extract_feats(imgs) - results_list = self.bbox_head.aug_test( - feats, img_metas, rescale=rescale) - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in results_list - ] - return bbox_results - - def onnx_export(self, img, img_metas, with_nms=True): - """Test function without test time augmentation. - - Args: - img (torch.Tensor): input images. - img_metas (list[dict]): List of image information. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - # get origin input shape to support onnx dynamic shape - - # get shape as tensor - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - # get pad input shape to support onnx dynamic shape for exporting - # `CornerNet` and `CentripetalNet`, which 'pad_shape' is used - # for inference - img_metas[0]['pad_shape_for_onnx'] = img_shape - - if len(outs) == 2: - # add dummy score_factor - outs = (*outs, None) - # TODO Can we change to `get_bboxes` when `onnx_export` fail - det_bboxes, det_labels = self.bbox_head.onnx_export( - *outs, img_metas, with_nms=with_nms) - - return det_bboxes, det_labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/single_stage_instance_seg.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/single_stage_instance_seg.py deleted file mode 100644 index 239b6699..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/single_stage_instance_seg.py +++ /dev/null @@ -1,363 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import mmcv -import numpy as np -import torch - -from mmdet.core.visualization.image import imshow_det_bboxes -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - -INF = 1e8 - - -@DETECTORS.register_module() -class SingleStageInstanceSegmentor(BaseDetector): - """Base class for single-stage instance segmentors.""" - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - mask_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - super(SingleStageInstanceSegmentor, self).__init__(init_cfg=init_cfg) - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - else: - self.neck = None - if bbox_head is not None: - bbox_head.update(train_cfg=copy.deepcopy(train_cfg)) - bbox_head.update(test_cfg=copy.deepcopy(test_cfg)) - self.bbox_head = build_head(bbox_head) - else: - self.bbox_head = None - - assert mask_head, f'`mask_head` must ' \ - f'be implemented in {self.__class__.__name__}' - mask_head.update(train_cfg=copy.deepcopy(train_cfg)) - mask_head.update(test_cfg=copy.deepcopy(test_cfg)) - self.mask_head = build_head(mask_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img): - """Directly extract features from the backbone and neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. 
- - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - raise NotImplementedError( - f'`forward_dummy` is not implemented in {self.__class__.__name__}') - - def forward_train(self, - img, - img_metas, - gt_masks, - gt_labels, - gt_bboxes=None, - gt_bboxes_ignore=None, - **kwargs): - """ - Args: - img (Tensor): Input images of shape (B, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_masks (list[:obj:`BitmapMasks`] | None) : The segmentation - masks for each box. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes (list[Tensor]): Each item is the truth boxes - of each image in [tl_x, tl_y, br_x, br_y] format. - Default: None. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - gt_masks = [ - gt_mask.to_tensor(dtype=torch.bool, device=img.device) - for gt_mask in gt_masks - ] - x = self.extract_feat(img) - losses = dict() - - # CondInst and YOLACT have bbox_head - if self.bbox_head: - # bbox_head_preds is a tuple - bbox_head_preds = self.bbox_head(x) - # positive_infos is a list of obj:`InstanceData` - # It contains the information about the positive samples - # CondInst, YOLACT - det_losses, positive_infos = self.bbox_head.loss( - *bbox_head_preds, - gt_bboxes=gt_bboxes, - gt_labels=gt_labels, - gt_masks=gt_masks, - img_metas=img_metas, - gt_bboxes_ignore=gt_bboxes_ignore, - **kwargs) - losses.update(det_losses) - else: - positive_infos = None - - mask_loss = self.mask_head.forward_train( - x, - gt_labels, - gt_masks, - img_metas, - positive_infos=positive_infos, - gt_bboxes=gt_bboxes, - gt_bboxes_ignore=gt_bboxes_ignore, - **kwargs) - # avoid loss override - assert not set(mask_loss.keys()) & set(losses.keys()) - - losses.update(mask_loss) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test-time augmentation. - - Args: - img (torch.Tensor): Images with shape (B, C, H, W). - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list(tuple): Formatted bbox and mask results of multiple \ - images. The outer list corresponds to each image. \ - Each tuple contains two type of results of single image: - - - bbox_results (list[np.ndarray]): BBox results of - single image. The list corresponds to each class. - each ndarray has a shape (N, 5), N is the number of - bboxes with this category, and last dimension - 5 arrange as (x1, y1, x2, y2, scores). - - mask_results (list[np.ndarray]): Mask results of - single image. The list corresponds to each class. - each ndarray has a shape (N, img_h, img_w), N - is the number of masks with this category. 
- """ - feat = self.extract_feat(img) - if self.bbox_head: - outs = self.bbox_head(feat) - # results_list is list[obj:`InstanceData`] - results_list = self.bbox_head.get_results( - *outs, img_metas=img_metas, cfg=self.test_cfg, rescale=rescale) - else: - results_list = None - - results_list = self.mask_head.simple_test( - feat, img_metas, rescale=rescale, instances_list=results_list) - - format_results_list = [] - for results in results_list: - format_results_list.append(self.format_results(results)) - - return format_results_list - - def format_results(self, results): - """Format the model predictions according to the interface with - dataset. - - Args: - results (:obj:`InstanceData`): Processed - results of single images. Usually contains - following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,) - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - - Returns: - tuple: Formatted bbox and mask results.. It contains two items: - - - bbox_results (list[np.ndarray]): BBox results of - single image. The list corresponds to each class. - each ndarray has a shape (N, 5), N is the number of - bboxes with this category, and last dimension - 5 arrange as (x1, y1, x2, y2, scores). - - mask_results (list[np.ndarray]): Mask results of - single image. The list corresponds to each class. - each ndarray has shape (N, img_h, img_w), N - is the number of masks with this category. - """ - data_keys = results.keys() - assert 'scores' in data_keys - assert 'labels' in data_keys - - assert 'masks' in data_keys, \ - 'results should contain ' \ - 'masks when format the results ' - mask_results = [[] for _ in range(self.mask_head.num_classes)] - - num_masks = len(results) - - if num_masks == 0: - bbox_results = [ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.mask_head.num_classes) - ] - return bbox_results, mask_results - - labels = results.labels.detach().cpu().numpy() - - if 'bboxes' not in results: - # create dummy bbox results to store the scores - results.bboxes = results.scores.new_zeros(len(results), 4) - - det_bboxes = torch.cat([results.bboxes, results.scores[:, None]], - dim=-1) - det_bboxes = det_bboxes.detach().cpu().numpy() - bbox_results = [ - det_bboxes[labels == i, :] - for i in range(self.mask_head.num_classes) - ] - - masks = results.masks.detach().cpu().numpy() - - for idx in range(num_masks): - mask = masks[idx] - mask_results[labels[idx]].append(mask) - - return bbox_results, mask_results - - def aug_test(self, imgs, img_metas, rescale=False): - raise NotImplementedError - - def show_result(self, - img, - result, - score_thr=0.3, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241), - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=False, - wait_time=0, - out_file=None): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (tuple): Format bbox and mask results. - It contains two items: - - - bbox_results (list[np.ndarray]): BBox results of - single image. The list corresponds to each class. - each ndarray has a shape (N, 5), N is the number of - bboxes with this category, and last dimension - 5 arrange as (x1, y1, x2, y2, scores). - - mask_results (list[np.ndarray]): Mask results of - single image. The list corresponds to each class. - each ndarray has shape (N, img_h, img_w), N - is the number of masks with this category. - - score_thr (float, optional): Minimum score of bboxes to be shown. 
- Default: 0.3. - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green' - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green' - mask_color (None or str or tuple(int) or :obj:`Color`): - Color of masks. The tuple of color should be in BGR order. - Default: None - thickness (int): Thickness of lines. Default: 2 - font_size (int): Font size of texts. Default: 13 - win_name (str): The window name. Default: '' - wait_time (float): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - - Returns: - img (Tensor): Only if not `show` or `out_file` - """ - - assert isinstance(result, tuple) - bbox_result, mask_result = result - bboxes = np.vstack(bbox_result) - img = mmcv.imread(img) - img = img.copy() - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - if len(labels) == 0: - bboxes = np.zeros([0, 5]) - masks = np.zeros([0, 0, 0]) - # draw segmentation masks - else: - masks = mmcv.concat_list(mask_result) - - if isinstance(masks[0], torch.Tensor): - masks = torch.stack(masks, dim=0).detach().cpu().numpy() - else: - masks = np.stack(masks, axis=0) - # dummy bboxes - if bboxes[:, :4].sum() == 0: - num_masks = len(bboxes) - x_any = masks.any(axis=1) - y_any = masks.any(axis=2) - for idx in range(num_masks): - x = np.where(x_any[idx, :])[0] - y = np.where(y_any[idx, :])[0] - if len(x) > 0 and len(y) > 0: - bboxes[idx, :4] = np.array( - [x[0], y[0], x[-1] + 1, y[-1] + 1], - dtype=np.float32) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw bounding boxes - img = imshow_det_bboxes( - img, - bboxes, - labels, - masks, - class_names=self.CLASSES, - score_thr=score_thr, - bbox_color=bbox_color, - text_color=text_color, - mask_color=mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - - if not (show or out_file): - return img diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/solo.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/solo.py deleted file mode 100644 index df6f6de0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/solo.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage_instance_seg import SingleStageInstanceSegmentor - - -@DETECTORS.register_module() -class SOLO(SingleStageInstanceSegmentor): - """`SOLO: Segmenting Objects by Locations - `_ - - """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - mask_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super().__init__( - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - mask_head=mask_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg, - pretrained=pretrained) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/sparse_rcnn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/sparse_rcnn.py deleted file mode 100644 index e90c2a5a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/sparse_rcnn.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class SparseRCNN(TwoStageDetector): - r"""Implementation of `Sparse R-CNN: End-to-End Object Detection with - Learnable Proposals `_""" - - def __init__(self, *args, **kwargs): - super(SparseRCNN, self).__init__(*args, **kwargs) - assert self.with_rpn, 'Sparse R-CNN and QueryInst ' \ - 'do not support external proposals' - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - proposals=None, - **kwargs): - """Forward function of SparseR-CNN and QueryInst in train stage. - - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (List[Tensor], optional) : Segmentation masks for - each box. This is required to train QueryInst. - proposals (List[Tensor], optional): override rpn proposals with - custom proposals. Use when `with_rpn` is False. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - - assert proposals is None, 'Sparse R-CNN and QueryInst ' \ - 'do not support external proposals' - - x = self.extract_feat(img) - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.forward_train(x, img_metas) - roi_losses = self.roi_head.forward_train( - x, - proposal_boxes, - proposal_features, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=gt_bboxes_ignore, - gt_masks=gt_masks, - imgs_whwh=imgs_whwh) - return roi_losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - x = self.extract_feat(img) - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.simple_test_rpn(x, img_metas) - results = self.roi_head.simple_test( - x, - proposal_boxes, - proposal_features, - img_metas, - imgs_whwh=imgs_whwh, - rescale=rescale) - return results - - def forward_dummy(self, img): - """Used for computing network flops. 
- - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - # backbone - x = self.extract_feat(img) - # rpn - num_imgs = len(img) - dummy_img_metas = [ - dict(img_shape=(800, 1333, 3)) for _ in range(num_imgs) - ] - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.simple_test_rpn(x, dummy_img_metas) - # roi_head - roi_outs = self.roi_head.forward_dummy(x, proposal_boxes, - proposal_features, - dummy_img_metas) - return roi_outs diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/tood.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/tood.py deleted file mode 100644 index 7dd18c3c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/tood.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class TOOD(SingleStageDetector): - r"""Implementation of `TOOD: Task-aligned One-stage Object Detection. - `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(TOOD, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - def set_epoch(self, epoch): - self.bbox_head.epoch = epoch diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/trident_faster_rcnn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/trident_faster_rcnn.py deleted file mode 100644 index fb26168c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/trident_faster_rcnn.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .faster_rcnn import FasterRCNN - - -@DETECTORS.register_module() -class TridentFasterRCNN(FasterRCNN): - """Implementation of `TridentNet `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - - super(TridentFasterRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - assert self.backbone.num_branch == self.roi_head.num_branch - assert self.backbone.test_branch_idx == self.roi_head.test_branch_idx - self.num_branch = self.backbone.num_branch - self.test_branch_idx = self.backbone.test_branch_idx - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - x = self.extract_feat(img) - if proposals is None: - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = img_metas * num_branch - proposal_list = self.rpn_head.simple_test_rpn(x, trident_img_metas) - else: - proposal_list = proposals - # TODO: Fix trident_img_metas undefined errors - # when proposals is specified - return self.roi_head.simple_test( - x, proposal_list, trident_img_metas, rescale=rescale) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. 
- """ - x = self.extract_feats(imgs) - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = [img_metas * num_branch for img_metas in img_metas] - proposal_list = self.rpn_head.aug_test_rpn(x, trident_img_metas) - return self.roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) - - def forward_train(self, img, img_metas, gt_bboxes, gt_labels, **kwargs): - """make copies of img and gts to fit multi-branch.""" - trident_gt_bboxes = tuple(gt_bboxes * self.num_branch) - trident_gt_labels = tuple(gt_labels * self.num_branch) - trident_img_metas = tuple(img_metas * self.num_branch) - - return super(TridentFasterRCNN, - self).forward_train(img, trident_img_metas, - trident_gt_bboxes, trident_gt_labels) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/two_stage.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/two_stage.py deleted file mode 100644 index 870e2b84..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/two_stage.py +++ /dev/null @@ -1,211 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch - -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class TwoStageDetector(BaseDetector): - """Base class for two-stage detectors. - - Two-stage detectors typically consisting of a region proposal network and a - task-specific regression head. - """ - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(TwoStageDetector, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - - if neck is not None: - self.neck = build_neck(neck) - - if rpn_head is not None: - rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None - rpn_head_ = rpn_head.copy() - rpn_head_.update(train_cfg=rpn_train_cfg, test_cfg=test_cfg.rpn) - self.rpn_head = build_head(rpn_head_) - - if roi_head is not None: - # update train and test cfg here for now - # TODO: refactor assigner & sampler - rcnn_train_cfg = train_cfg.rcnn if train_cfg is not None else None - roi_head.update(train_cfg=rcnn_train_cfg) - roi_head.update(test_cfg=test_cfg.rcnn) - roi_head.pretrained = pretrained - self.roi_head = build_head(roi_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - @property - def with_rpn(self): - """bool: whether the detector has RPN""" - return hasattr(self, 'rpn_head') and self.rpn_head is not None - - @property - def with_roi_head(self): - """bool: whether the detector has a RoI head""" - return hasattr(self, 'roi_head') and self.roi_head is not None - - def extract_feat(self, img): - """Directly extract features from the backbone+neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. 
- - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - outs = () - # backbone - x = self.extract_feat(img) - # rpn - if self.with_rpn: - rpn_outs = self.rpn_head(x) - outs = outs + (rpn_outs, ) - proposals = torch.randn(1000, 4).to(img.device) - # roi_head - roi_outs = self.roi_head.forward_dummy(x, proposals) - outs = outs + (roi_outs, ) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - proposals=None, - **kwargs): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - - gt_labels (list[Tensor]): class indices corresponding to each box - - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - proposals : override rpn proposals with custom proposals. Use when - `with_rpn` is False. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - x = self.extract_feat(img) - - losses = dict() - - # RPN forward and loss - if self.with_rpn: - proposal_cfg = self.train_cfg.get('rpn_proposal', - self.test_cfg.rpn) - rpn_losses, proposal_list = self.rpn_head.forward_train( - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=gt_bboxes_ignore, - proposal_cfg=proposal_cfg, - **kwargs) - losses.update(rpn_losses) - else: - proposal_list = proposals - - roi_losses = self.roi_head.forward_train(x, img_metas, proposal_list, - gt_bboxes, gt_labels, - gt_bboxes_ignore, gt_masks, - **kwargs) - losses.update(roi_losses) - - return losses - - async def async_simple_test(self, - img, - img_meta, - proposals=None, - rescale=False): - """Async test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - x = self.extract_feat(img) - - if proposals is None: - proposal_list = await self.rpn_head.async_simple_test_rpn( - x, img_meta) - else: - proposal_list = proposals - - return await self.roi_head.async_simple_test( - x, proposal_list, img_meta, rescale=rescale) - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - - assert self.with_bbox, 'Bbox head must be implemented.' - x = self.extract_feat(img) - if proposals is None: - proposal_list = self.rpn_head.simple_test_rpn(x, img_metas) - else: - proposal_list = proposals - - return self.roi_head.simple_test( - x, proposal_list, img_metas, rescale=rescale) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. 
- """ - x = self.extract_feats(imgs) - proposal_list = self.rpn_head.aug_test_rpn(x, img_metas) - return self.roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) - - def onnx_export(self, img, img_metas): - - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - x = self.extract_feat(img) - proposals = self.rpn_head.onnx_export(x, img_metas) - if hasattr(self.roi_head, 'onnx_export'): - return self.roi_head.onnx_export(x, proposals, img_metas) - else: - raise NotImplementedError( - f'{self.__class__.__name__} can not ' - f'be exported to ONNX. Please refer to the ' - f'list of supported models,' - f'https://mmdetection.readthedocs.io/en/latest/tutorials/pytorch2onnx.html#list-of-supported-models-exportable-to-onnx' # noqa E501 - ) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/vfnet.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/vfnet.py deleted file mode 100644 index 38ddcdab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/vfnet.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class VFNet(SingleStageDetector): - """Implementation of `VarifocalNet - (VFNet).`_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(VFNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolact.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolact.py deleted file mode 100644 index 4ddea0b2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolact.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2result -from ..builder import DETECTORS, build_head -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLACT(SingleStageDetector): - """Implementation of `YOLACT `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - segm_head, - mask_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(YOLACT, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - self.segm_head = build_head(segm_head) - self.mask_head = build_head(mask_head) - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - feat = self.extract_feat(img) - bbox_outs = self.bbox_head(feat) - prototypes = self.mask_head.forward_dummy(feat[0]) - return (bbox_outs, prototypes) - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. 
- gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # convert Bitmap mask or Polygon Mask to Tensor here - gt_masks = [ - gt_mask.to_tensor(dtype=torch.uint8, device=img.device) - for gt_mask in gt_masks - ] - - x = self.extract_feat(img) - - cls_score, bbox_pred, coeff_pred = self.bbox_head(x) - bbox_head_loss_inputs = (cls_score, bbox_pred) + (gt_bboxes, gt_labels, - img_metas) - losses, sampling_results = self.bbox_head.loss( - *bbox_head_loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - - segm_head_outs = self.segm_head(x[0]) - loss_segm = self.segm_head.loss(segm_head_outs, gt_masks, gt_labels) - losses.update(loss_segm) - - mask_pred = self.mask_head(x[0], coeff_pred, gt_bboxes, img_metas, - sampling_results) - loss_mask = self.mask_head.loss(mask_pred, gt_masks, gt_bboxes, - img_metas, sampling_results) - losses.update(loss_mask) - - # check NaN and Inf - for loss_name in losses.keys(): - assert torch.isfinite(torch.stack(losses[loss_name]))\ - .all().item(), '{} becomes infinite or NaN!'\ - .format(loss_name) - - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test-time augmentation.""" - feat = self.extract_feat(img) - det_bboxes, det_labels, det_coeffs = self.bbox_head.simple_test( - feat, img_metas, rescale=rescale) - bbox_results = [ - bbox2result(det_bbox, det_label, self.bbox_head.num_classes) - for det_bbox, det_label in zip(det_bboxes, det_labels) - ] - - segm_results = self.mask_head.simple_test( - feat, - det_bboxes, - det_labels, - det_coeffs, - img_metas, - rescale=rescale) - - return list(zip(bbox_results, segm_results)) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations.""" - raise NotImplementedError( - 'YOLACT does not support test-time augmentation') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolo.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolo.py deleted file mode 100644 index 0ccd4177..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolo.py +++ /dev/null @@ -1,42 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. -import torch - -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLOV3(SingleStageDetector): - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(YOLOV3, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - def onnx_export(self, img, img_metas): - """Test function for exporting to ONNX, without test time augmentation. - - Args: - img (torch.Tensor): input images. - img_metas (list[dict]): List of image information. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. 
- """ - x = self.extract_feat(img) - outs = self.bbox_head.forward(x) - # get shape as tensor - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - - det_bboxes, det_labels = self.bbox_head.onnx_export(*outs, img_metas) - - return det_bboxes, det_labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolof.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolof.py deleted file mode 100644 index 6d08d16d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolof.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLOF(SingleStageDetector): - r"""Implementation of `You Only Look One-level Feature - `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(YOLOF, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolox.py b/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolox.py deleted file mode 100644 index d26dc734..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/detectors/yolox.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import random - -import torch -import torch.distributed as dist -import torch.nn.functional as F -from mmcv.runner import get_dist_info - -from ...utils import log_img_scale -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLOX(SingleStageDetector): - r"""Implementation of `YOLOX: Exceeding YOLO Series in 2021 - `_ - - Note: Considering the trade-off between training speed and accuracy, - multi-scale training is temporarily kept. More elegant implementation - will be adopted in the future. - - Args: - backbone (nn.Module): The backbone module. - neck (nn.Module): The neck module. - bbox_head (nn.Module): The bbox head module. - train_cfg (obj:`ConfigDict`, optional): The training config - of YOLOX. Default: None. - test_cfg (obj:`ConfigDict`, optional): The testing config - of YOLOX. Default: None. - pretrained (str, optional): model pretrained path. - Default: None. - input_size (tuple): The model default input image size. The shape - order should be (height, width). Default: (640, 640). - size_multiplier (int): Image size multiplication factor. - Default: 32. - random_size_range (tuple): The multi-scale random range during - multi-scale training. The real training image size will - be multiplied by size_multiplier. Default: (15, 25). - random_size_interval (int): The iter interval of change - image size. Default: 10. - init_cfg (dict, optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - input_size=(640, 640), - size_multiplier=32, - random_size_range=(15, 25), - random_size_interval=10, - init_cfg=None): - super(YOLOX, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - log_img_scale(input_size, skip_square=True) - self.rank, self.world_size = get_dist_info() - self._default_input_size = input_size - self._input_size = input_size - self._random_size_range = random_size_range - self._random_size_interval = random_size_interval - self._size_multiplier = size_multiplier - self._progress_in_iter = 0 - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # Multi-scale training - img, gt_bboxes = self._preprocess(img, gt_bboxes) - - losses = super(YOLOX, self).forward_train(img, img_metas, gt_bboxes, - gt_labels, gt_bboxes_ignore) - - # random resizing - if (self._progress_in_iter + 1) % self._random_size_interval == 0: - self._input_size = self._random_resize() - self._progress_in_iter += 1 - - return losses - - def _preprocess(self, img, gt_bboxes): - scale_y = self._input_size[0] / self._default_input_size[0] - scale_x = self._input_size[1] / self._default_input_size[1] - if scale_x != 1 or scale_y != 1: - img = F.interpolate( - img, - size=self._input_size, - mode='bilinear', - align_corners=False) - for gt_bbox in gt_bboxes: - gt_bbox[..., 0::2] = gt_bbox[..., 0::2] * scale_x - gt_bbox[..., 1::2] = gt_bbox[..., 1::2] * scale_y - return img, gt_bboxes - - def _random_resize(self): - tensor = torch.LongTensor(2).cuda() - - if self.rank == 0: - size = random.randint(*self._random_size_range) - aspect_ratio = float( - self._default_input_size[1]) / self._default_input_size[0] - size = (self._size_multiplier * size, - self._size_multiplier * int(aspect_ratio * size)) - tensor[0] = size[0] - tensor[1] = size[1] - - if self.world_size > 1: - dist.barrier() - dist.broadcast(tensor, 0) - - input_size = (tensor[0].item(), tensor[1].item()) - return input_size diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/__init__.py deleted file mode 100644 index 068a54d6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .accuracy import Accuracy, accuracy -from .ae_loss import AssociativeEmbeddingLoss -from .balanced_l1_loss import BalancedL1Loss, balanced_l1_loss -from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy, - cross_entropy, mask_cross_entropy) -from .dice_loss import DiceLoss -from .focal_loss import FocalLoss, sigmoid_focal_loss -from .gaussian_focal_loss import GaussianFocalLoss -from .gfocal_loss import DistributionFocalLoss, QualityFocalLoss -from .ghm_loss import GHMC, GHMR -from .iou_loss import (BoundedIoULoss, CIoULoss, DIoULoss, GIoULoss, IoULoss, - bounded_iou_loss, iou_loss) -from .kd_loss import KnowledgeDistillationKLDivLoss -from .mse_loss import MSELoss, mse_loss -from .pisa_loss import carl_loss, isr_p -from .seesaw_loss import SeesawLoss -from .smooth_l1_loss import L1Loss, SmoothL1Loss, l1_loss, smooth_l1_loss -from .utils import reduce_loss, weight_reduce_loss, weighted_loss -from .varifocal_loss import VarifocalLoss - -__all__ = [ - 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy', - 'mask_cross_entropy', 'CrossEntropyLoss', 'sigmoid_focal_loss', - 'FocalLoss', 'smooth_l1_loss', 'SmoothL1Loss', 'balanced_l1_loss', - 'BalancedL1Loss', 'mse_loss', 'MSELoss', 'iou_loss', 'bounded_iou_loss', - 'IoULoss', 'BoundedIoULoss', 'GIoULoss', 'DIoULoss', 'CIoULoss', 'GHMC', - 'GHMR', 'reduce_loss', 'weight_reduce_loss', 'weighted_loss', 'L1Loss', - 'l1_loss', 'isr_p', 'carl_loss', 'AssociativeEmbeddingLoss', - 'GaussianFocalLoss', 'QualityFocalLoss', 'DistributionFocalLoss', - 'VarifocalLoss', 'KnowledgeDistillationKLDivLoss', 'SeesawLoss', 'DiceLoss' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/accuracy.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/accuracy.py deleted file mode 100644 index fe765a39..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/accuracy.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn - - -@mmcv.jit(coderize=True) -def accuracy(pred, target, topk=1, thresh=None): - """Calculate accuracy according to the prediction and target. - - Args: - pred (torch.Tensor): The model prediction, shape (N, num_class) - target (torch.Tensor): The target of each prediction, shape (N, ) - topk (int | tuple[int], optional): If the predictions in ``topk`` - matches the target, the predictions will be regarded as - correct ones. Defaults to 1. - thresh (float, optional): If not None, predictions with scores under - this threshold are considered incorrect. Default to None. - - Returns: - float | tuple[float]: If the input ``topk`` is a single integer, - the function will return a single float as accuracy. If - ``topk`` is a tuple containing multiple integers, the - function will return a tuple containing accuracies of - each ``topk`` number. - """ - assert isinstance(topk, (int, tuple)) - if isinstance(topk, int): - topk = (topk, ) - return_single = True - else: - return_single = False - - maxk = max(topk) - if pred.size(0) == 0: - accu = [pred.new_tensor(0.) 
for i in range(len(topk))] - return accu[0] if return_single else accu - assert pred.ndim == 2 and target.ndim == 1 - assert pred.size(0) == target.size(0) - assert maxk <= pred.size(1), \ - f'maxk {maxk} exceeds pred dimension {pred.size(1)}' - pred_value, pred_label = pred.topk(maxk, dim=1) - pred_label = pred_label.t() # transpose to shape (maxk, N) - correct = pred_label.eq(target.view(1, -1).expand_as(pred_label)) - if thresh is not None: - # Only prediction values larger than thresh are counted as correct - correct = correct & (pred_value > thresh).t() - res = [] - for k in topk: - correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True) - res.append(correct_k.mul_(100.0 / pred.size(0))) - return res[0] if return_single else res - - -class Accuracy(nn.Module): - - def __init__(self, topk=(1, ), thresh=None): - """Module to calculate the accuracy. - - Args: - topk (tuple, optional): The criterion used to calculate the - accuracy. Defaults to (1,). - thresh (float, optional): If not None, predictions with scores - under this threshold are considered incorrect. Default to None. - """ - super().__init__() - self.topk = topk - self.thresh = thresh - - def forward(self, pred, target): - """Forward function to calculate accuracy. - - Args: - pred (torch.Tensor): Prediction of models. - target (torch.Tensor): Target for each prediction. - - Returns: - tuple[float]: The accuracies under different topk criterions. - """ - return accuracy(pred, target, self.topk, self.thresh) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/ae_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/ae_loss.py deleted file mode 100644 index 5c6da22a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/ae_loss.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES - - -@mmcv.jit(derivate=True, coderize=True) -def ae_loss_per_image(tl_preds, br_preds, match): - """Associative Embedding Loss in one image. - - Associative Embedding Loss including two parts: pull loss and push loss. - Pull loss makes embedding vectors from same object closer to each other. - Push loss distinguish embedding vector from different objects, and makes - the gap between them is large enough. - - During computing, usually there are 3 cases: - - no object in image: both pull loss and push loss will be 0. - - one object in image: push loss will be 0 and pull loss is computed - by the two corner of the only object. - - more than one objects in image: pull loss is computed by corner pairs - from each object, push loss is computed by each object with all - other objects. We use confusion matrix with 0 in diagonal to - compute the push loss. - - Args: - tl_preds (tensor): Embedding feature map of left-top corner. - br_preds (tensor): Embedding feature map of bottim-right corner. - match (list): Downsampled coordinates pair of each ground truth box. - """ - - tl_list, br_list, me_list = [], [], [] - if len(match) == 0: # no object in image - pull_loss = tl_preds.sum() * 0. - push_loss = tl_preds.sum() * 0. 
- else: - for m in match: - [tl_y, tl_x], [br_y, br_x] = m - tl_e = tl_preds[:, tl_y, tl_x].view(-1, 1) - br_e = br_preds[:, br_y, br_x].view(-1, 1) - tl_list.append(tl_e) - br_list.append(br_e) - me_list.append((tl_e + br_e) / 2.0) - - tl_list = torch.cat(tl_list) - br_list = torch.cat(br_list) - me_list = torch.cat(me_list) - - assert tl_list.size() == br_list.size() - - # N is object number in image, M is dimension of embedding vector - N, M = tl_list.size() - - pull_loss = (tl_list - me_list).pow(2) + (br_list - me_list).pow(2) - pull_loss = pull_loss.sum() / N - - margin = 1 # exp setting of CornerNet, details in section 3.3 of paper - - # confusion matrix of push loss - conf_mat = me_list.expand((N, N, M)).permute(1, 0, 2) - me_list - conf_weight = 1 - torch.eye(N).type_as(me_list) - conf_mat = conf_weight * (margin - conf_mat.sum(-1).abs()) - - if N > 1: # more than one object in current image - push_loss = F.relu(conf_mat).sum() / (N * (N - 1)) - else: - push_loss = tl_preds.sum() * 0. - - return pull_loss, push_loss - - -@LOSSES.register_module() -class AssociativeEmbeddingLoss(nn.Module): - """Associative Embedding Loss. - - More details can be found in - `Associative Embedding `_ and - `CornerNet `_ . - Code is modified from `kp_utils.py `_ # noqa: E501 - - Args: - pull_weight (float): Loss weight for corners from same object. - push_weight (float): Loss weight for corners from different object. - """ - - def __init__(self, pull_weight=0.25, push_weight=0.25): - super(AssociativeEmbeddingLoss, self).__init__() - self.pull_weight = pull_weight - self.push_weight = push_weight - - def forward(self, pred, target, match): - """Forward function.""" - batch = pred.size(0) - pull_all, push_all = 0.0, 0.0 - for i in range(batch): - pull, push = ae_loss_per_image(pred[i], target[i], match[i]) - - pull_all += self.pull_weight * pull - push_all += self.push_weight * push - - return pull_all, push_all diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/balanced_l1_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/balanced_l1_loss.py deleted file mode 100644 index 8500345f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/balanced_l1_loss.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def balanced_l1_loss(pred, - target, - beta=1.0, - alpha=0.5, - gamma=1.5, - reduction='mean'): - """Calculate balanced L1 loss. - - Please see the `Libra R-CNN `_ - - Args: - pred (torch.Tensor): The prediction with shape (N, 4). - target (torch.Tensor): The learning target of the prediction with - shape (N, 4). - beta (float): The loss is a piecewise function of prediction and target - and ``beta`` serves as a threshold for the difference between the - prediction and target. Defaults to 1.0. - alpha (float): The denominator ``alpha`` in the balanced L1 loss. - Defaults to 0.5. - gamma (float): The ``gamma`` in the balanced L1 loss. - Defaults to 1.5. - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". 
- - Returns: - torch.Tensor: The calculated loss - """ - assert beta > 0 - if target.numel() == 0: - return pred.sum() * 0 - - assert pred.size() == target.size() - - diff = torch.abs(pred - target) - b = np.e**(gamma / alpha) - 1 - loss = torch.where( - diff < beta, alpha / b * - (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff, - gamma * diff + gamma / b - alpha * beta) - - return loss - - -@LOSSES.register_module() -class BalancedL1Loss(nn.Module): - """Balanced L1 Loss. - - arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019) - - Args: - alpha (float): The denominator ``alpha`` in the balanced L1 loss. - Defaults to 0.5. - gamma (float): The ``gamma`` in the balanced L1 loss. Defaults to 1.5. - beta (float, optional): The loss is a piecewise function of prediction - and target. ``beta`` serves as a threshold for the difference - between the prediction and target. Defaults to 1.0. - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of the loss. Defaults to 1.0 - """ - - def __init__(self, - alpha=0.5, - gamma=1.5, - beta=1.0, - reduction='mean', - loss_weight=1.0): - super(BalancedL1Loss, self).__init__() - self.alpha = alpha - self.gamma = gamma - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function of loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 4). - target (torch.Tensor): The learning target of the prediction with - shape (N, 4). - weight (torch.Tensor, optional): Sample-wise loss weight with - shape (N, ). - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * balanced_l1_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/cross_entropy_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/cross_entropy_loss.py deleted file mode 100644 index 41411fc5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/cross_entropy_loss.py +++ /dev/null @@ -1,301 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -def cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=-100, - avg_non_ignore=False): - """Calculate the CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - reduction (str, optional): The method used to reduce the loss. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. 
- class_weight (list[float], optional): The weight for each class. - ignore_index (int | None): The label index to be ignored. - If None, it will be set to default value. Default: -100. - avg_non_ignore (bool): The flag decides to whether the loss is - only averaged over non-ignored targets. Default: False. - - Returns: - torch.Tensor: The calculated loss - """ - # The default value of ignore_index is the same as F.cross_entropy - ignore_index = -100 if ignore_index is None else ignore_index - # element-wise losses - loss = F.cross_entropy( - pred, - label, - weight=class_weight, - reduction='none', - ignore_index=ignore_index) - - # average loss over non-ignored elements - # pytorch's official cross_entropy average loss over non-ignored elements - # refer to https://github.com/pytorch/pytorch/blob/56b43f4fec1f76953f15a627694d4bba34588969/torch/nn/functional.py#L2660 # noqa - if (avg_factor is None) and avg_non_ignore and reduction == 'mean': - avg_factor = label.numel() - (label == ignore_index).sum().item() - - # apply weights and do the reduction - if weight is not None: - weight = weight.float() - loss = weight_reduce_loss( - loss, weight=weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def _expand_onehot_labels(labels, label_weights, label_channels, ignore_index): - """Expand onehot labels to match the size of prediction.""" - bin_labels = labels.new_full((labels.size(0), label_channels), 0) - valid_mask = (labels >= 0) & (labels != ignore_index) - inds = torch.nonzero( - valid_mask & (labels < label_channels), as_tuple=False) - - if inds.numel() > 0: - bin_labels[inds, labels[inds]] = 1 - - valid_mask = valid_mask.view(-1, 1).expand(labels.size(0), - label_channels).float() - if label_weights is None: - bin_label_weights = valid_mask - else: - bin_label_weights = label_weights.view(-1, 1).repeat(1, label_channels) - bin_label_weights *= valid_mask - - return bin_labels, bin_label_weights, valid_mask - - -def binary_cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=-100, - avg_non_ignore=False): - """Calculate the binary CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 1) or (N, ). - When the shape of pred is (N, 1), label will be expanded to - one-hot format, and when the shape of pred is (N, ), label - will not be expanded to one-hot format. - label (torch.Tensor): The learning label of the prediction, - with shape (N, ). - weight (torch.Tensor, optional): Sample-wise loss weight. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (int | None): The label index to be ignored. - If None, it will be set to default value. Default: -100. - avg_non_ignore (bool): The flag decides to whether the loss is - only averaged over non-ignored targets. Default: False. - - Returns: - torch.Tensor: The calculated loss. 
- """ - # The default value of ignore_index is the same as F.cross_entropy - ignore_index = -100 if ignore_index is None else ignore_index - - if pred.dim() != label.dim(): - label, weight, valid_mask = _expand_onehot_labels( - label, weight, pred.size(-1), ignore_index) - else: - # should mask out the ignored elements - valid_mask = ((label >= 0) & (label != ignore_index)).float() - if weight is not None: - # The inplace writing method will have a mismatched broadcast - # shape error if the weight and valid_mask dimensions - # are inconsistent such as (B,N,1) and (B,N,C). - weight = weight * valid_mask - else: - weight = valid_mask - - # average loss over non-ignored elements - if (avg_factor is None) and avg_non_ignore and reduction == 'mean': - avg_factor = valid_mask.sum().item() - - # weighted element-wise losses - weight = weight.float() - loss = F.binary_cross_entropy_with_logits( - pred, label.float(), pos_weight=class_weight, reduction='none') - # do the reduction for the weighted loss - loss = weight_reduce_loss( - loss, weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def mask_cross_entropy(pred, - target, - label, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=None, - **kwargs): - """Calculate the CrossEntropy loss for masks. - - Args: - pred (torch.Tensor): The prediction with shape (N, C, *), C is the - number of classes. The trailing * indicates arbitrary shape. - target (torch.Tensor): The learning label of the prediction. - label (torch.Tensor): ``label`` indicates the class label of the mask - corresponding object. This will be used to select the mask in the - of the class which the object belongs to when the mask prediction - if not class-agnostic. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (None): Placeholder, to be consistent with other loss. - Default: None. - - Returns: - torch.Tensor: The calculated loss - - Example: - >>> N, C = 3, 11 - >>> H, W = 2, 2 - >>> pred = torch.randn(N, C, H, W) * 1000 - >>> target = torch.rand(N, H, W) - >>> label = torch.randint(0, C, size=(N,)) - >>> reduction = 'mean' - >>> avg_factor = None - >>> class_weights = None - >>> loss = mask_cross_entropy(pred, target, label, reduction, - >>> avg_factor, class_weights) - >>> assert loss.shape == (1,) - """ - assert ignore_index is None, 'BCE loss does not support ignore_index' - # TODO: handle these two reserved arguments - assert reduction == 'mean' and avg_factor is None - num_rois = pred.size()[0] - inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device) - pred_slice = pred[inds, label].squeeze(1) - return F.binary_cross_entropy_with_logits( - pred_slice, target, weight=class_weight, reduction='mean')[None] - - -@LOSSES.register_module() -class CrossEntropyLoss(nn.Module): - - def __init__(self, - use_sigmoid=False, - use_mask=False, - reduction='mean', - class_weight=None, - ignore_index=None, - loss_weight=1.0, - avg_non_ignore=False): - """CrossEntropyLoss. - - Args: - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Defaults to False. - use_mask (bool, optional): Whether to use mask cross entropy loss. - Defaults to False. - reduction (str, optional): . Defaults to 'mean'. - Options are "none", "mean" and "sum". 
- class_weight (list[float], optional): Weight of each class. - Defaults to None. - ignore_index (int | None): The label index to be ignored. - Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. - avg_non_ignore (bool): The flag decides to whether the loss is - only averaged over non-ignored targets. Default: False. - """ - super(CrossEntropyLoss, self).__init__() - assert (use_sigmoid is False) or (use_mask is False) - self.use_sigmoid = use_sigmoid - self.use_mask = use_mask - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = class_weight - self.ignore_index = ignore_index - self.avg_non_ignore = avg_non_ignore - if ((ignore_index is not None) and not self.avg_non_ignore - and self.reduction == 'mean'): - warnings.warn( - 'Default ``avg_non_ignore`` is False, if you would like to ' - 'ignore the certain label and average loss over non-ignore ' - 'labels, which is the same with PyTorch official ' - 'cross_entropy, set ``avg_non_ignore=True``.') - - if self.use_sigmoid: - self.cls_criterion = binary_cross_entropy - elif self.use_mask: - self.cls_criterion = mask_cross_entropy - else: - self.cls_criterion = cross_entropy - - def extra_repr(self): - """Extra repr.""" - s = f'avg_non_ignore={self.avg_non_ignore}' - return s - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - ignore_index=None, - **kwargs): - """Forward function. - - Args: - cls_score (torch.Tensor): The prediction. - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The method used to reduce the - loss. Options are "none", "mean" and "sum". - ignore_index (int | None): The label index to be ignored. - If not None, it will override the default value. Default: None. - Returns: - torch.Tensor: The calculated loss. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if ignore_index is None: - ignore_index = self.ignore_index - - if self.class_weight is not None: - class_weight = cls_score.new_tensor( - self.class_weight, device=cls_score.device) - else: - class_weight = None - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - weight, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - ignore_index=ignore_index, - avg_non_ignore=self.avg_non_ignore, - **kwargs) - return loss_cls diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/dice_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/dice_loss.py deleted file mode 100644 index 585beeaf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/dice_loss.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -def dice_loss(pred, - target, - weight=None, - eps=1e-3, - reduction='mean', - naive_dice=False, - avg_factor=None): - """Calculate dice loss, there are two forms of dice loss is supported: - - - the one proposed in `V-Net: Fully Convolutional Neural - Networks for Volumetric Medical Image Segmentation - `_. - - the dice loss in which the power of the number in the - denominator is the first power instead of the second - power. 
- - Args: - pred (torch.Tensor): The prediction, has a shape (n, *) - target (torch.Tensor): The learning label of the prediction, - shape (n, *), same shape of pred. - weight (torch.Tensor, optional): The weight of loss for each - prediction, has a shape (n,). Defaults to None. - eps (float): Avoid dividing by zero. Default: 1e-3. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. - Options are "none", "mean" and "sum". - naive_dice (bool, optional): If false, use the dice - loss defined in the V-Net paper, otherwise, use the - naive dice loss in which the power of the number in the - denominator is the first power instead of the second - power.Defaults to False. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - - input = pred.flatten(1) - target = target.flatten(1).float() - - a = torch.sum(input * target, 1) - if naive_dice: - b = torch.sum(input, 1) - c = torch.sum(target, 1) - d = (2 * a + eps) / (b + c + eps) - else: - b = torch.sum(input * input, 1) + eps - c = torch.sum(target * target, 1) + eps - d = (2 * a) / (b + c) - - loss = 1 - d - if weight is not None: - assert weight.ndim == loss.ndim - assert len(weight) == len(pred) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class DiceLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - activate=True, - reduction='mean', - naive_dice=False, - loss_weight=1.0, - eps=1e-3): - """Compute dice loss. - - Args: - use_sigmoid (bool, optional): Whether to the prediction is - used for sigmoid or softmax. Defaults to True. - activate (bool): Whether to activate the predictions inside, - this will disable the inside sigmoid operation. - Defaults to True. - reduction (str, optional): The method used - to reduce the loss. Options are "none", - "mean" and "sum". Defaults to 'mean'. - naive_dice (bool, optional): If false, use the dice - loss defined in the V-Net paper, otherwise, use the - naive dice loss in which the power of the number in the - denominator is the first power instead of the second - power. Defaults to False. - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - eps (float): Avoid dividing by zero. Defaults to 1e-3. - """ - - super(DiceLoss, self).__init__() - self.use_sigmoid = use_sigmoid - self.reduction = reduction - self.naive_dice = naive_dice - self.loss_weight = loss_weight - self.eps = eps - self.activate = activate - - def forward(self, - pred, - target, - weight=None, - reduction_override=None, - avg_factor=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction, has a shape (n, *). - target (torch.Tensor): The label of the prediction, - shape (n, *), same shape of pred. - weight (torch.Tensor, optional): The weight of loss for each - prediction, has a shape (n,). Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". 
- - Returns: - torch.Tensor: The calculated loss - """ - - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - - if self.activate: - if self.use_sigmoid: - pred = pred.sigmoid() - else: - raise NotImplementedError - - loss = self.loss_weight * dice_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - naive_dice=self.naive_dice, - avg_factor=avg_factor) - - return loss diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/focal_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/focal_loss.py deleted file mode 100644 index 6c20fddd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/focal_loss.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.ops import sigmoid_focal_loss as _sigmoid_focal_loss - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -# This method is only for debugging -def py_sigmoid_focal_loss(pred, - target, - weight=None, - gamma=2.0, - alpha=0.25, - reduction='mean', - avg_factor=None): - """PyTorch version of `Focal Loss `_. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the - number of classes - target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - pred_sigmoid = pred.sigmoid() - target = target.type_as(pred) - pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target) - focal_weight = (alpha * target + (1 - alpha) * - (1 - target)) * pt.pow(gamma) - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') * focal_weight - if weight is not None: - if weight.shape != loss.shape: - if weight.size(0) == loss.size(0): - # For most cases, weight is of shape (num_priors, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - else: - # Sometimes, weight per anchor per class is also needed. e.g. - # in FSAF. But it may be flattened of shape - # (num_priors x num_class, ), while loss is still of shape - # (num_priors, num_class). - assert weight.numel() == loss.numel() - weight = weight.view(loss.size(0), -1) - assert weight.ndim == loss.ndim - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -def py_focal_loss_with_prob(pred, - target, - weight=None, - gamma=2.0, - alpha=0.25, - reduction='mean', - avg_factor=None): - """PyTorch version of `Focal Loss `_. - Different from `py_sigmoid_focal_loss`, this function accepts probability - as input. - - Args: - pred (torch.Tensor): The prediction probability with shape (N, C), - C is the number of classes. - target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. 
- avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - num_classes = pred.size(1) - target = F.one_hot(target, num_classes=num_classes + 1) - target = target[:, :num_classes] - - target = target.type_as(pred) - pt = (1 - pred) * target + pred * (1 - target) - focal_weight = (alpha * target + (1 - alpha) * - (1 - target)) * pt.pow(gamma) - loss = F.binary_cross_entropy( - pred, target, reduction='none') * focal_weight - if weight is not None: - if weight.shape != loss.shape: - if weight.size(0) == loss.size(0): - # For most cases, weight is of shape (num_priors, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - else: - # Sometimes, weight per anchor per class is also needed. e.g. - # in FSAF. But it may be flattened of shape - # (num_priors x num_class, ), while loss is still of shape - # (num_priors, num_class). - assert weight.numel() == loss.numel() - weight = weight.view(loss.size(0), -1) - assert weight.ndim == loss.ndim - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -def sigmoid_focal_loss(pred, - target, - weight=None, - gamma=2.0, - alpha=0.25, - reduction='mean', - avg_factor=None): - r"""A warpper of cuda version `Focal Loss - `_. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - # Function.apply does not accept keyword arguments, so the decorator - # "weighted_loss" is not applicable - loss = _sigmoid_focal_loss(pred.contiguous(), target.contiguous(), gamma, - alpha, None, 'none') - if weight is not None: - if weight.shape != loss.shape: - if weight.size(0) == loss.size(0): - # For most cases, weight is of shape (num_priors, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - else: - # Sometimes, weight per anchor per class is also needed. e.g. - # in FSAF. But it may be flattened of shape - # (num_priors x num_class, ), while loss is still of shape - # (num_priors, num_class). - assert weight.numel() == loss.numel() - weight = weight.view(loss.size(0), -1) - assert weight.ndim == loss.ndim - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class FocalLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - reduction='mean', - loss_weight=1.0, - activated=False): - """`Focal Loss `_ - - Args: - use_sigmoid (bool, optional): Whether to the prediction is - used for sigmoid or softmax. Defaults to True. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - activated (bool, optional): Whether the input is activated. 
- If True, it means the input has been activated and can be - treated as probabilities. Else, it should be treated as logits. - Defaults to False. - """ - super(FocalLoss, self).__init__() - assert use_sigmoid is True, 'Only sigmoid focal loss supported now.' - self.use_sigmoid = use_sigmoid - self.gamma = gamma - self.alpha = alpha - self.reduction = reduction - self.loss_weight = loss_weight - self.activated = activated - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - if self.activated: - calculate_loss_func = py_focal_loss_with_prob - else: - if torch.cuda.is_available() and pred.is_cuda: - calculate_loss_func = sigmoid_focal_loss - else: - num_classes = pred.size(1) - target = F.one_hot(target, num_classes=num_classes + 1) - target = target[:, :num_classes] - calculate_loss_func = py_sigmoid_focal_loss - - loss_cls = self.loss_weight * calculate_loss_func( - pred, - target, - weight, - gamma=self.gamma, - alpha=self.alpha, - reduction=reduction, - avg_factor=avg_factor) - - else: - raise NotImplementedError - return loss_cls diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/gaussian_focal_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/gaussian_focal_loss.py deleted file mode 100644 index 7abcb691..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/gaussian_focal_loss.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def gaussian_focal_loss(pred, gaussian_target, alpha=2.0, gamma=4.0): - """`Focal Loss `_ for targets in gaussian - distribution. - - Args: - pred (torch.Tensor): The prediction. - gaussian_target (torch.Tensor): The learning target of the prediction - in gaussian distribution. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 2.0. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 4.0. - """ - eps = 1e-12 - pos_weights = gaussian_target.eq(1) - neg_weights = (1 - gaussian_target).pow(gamma) - pos_loss = -(pred + eps).log() * (1 - pred).pow(alpha) * pos_weights - neg_loss = -(1 - pred + eps).log() * pred.pow(alpha) * neg_weights - return pos_loss + neg_loss - - -@LOSSES.register_module() -class GaussianFocalLoss(nn.Module): - """GaussianFocalLoss is a variant of focal loss. - - More details can be found in the `paper - `_ - Code is modified from `kp_utils.py - `_ # noqa: E501 - Please notice that the target in GaussianFocalLoss is a gaussian heatmap, - not 0/1 binary target. - - Args: - alpha (float): Power of prediction. - gamma (float): Power of target for negative samples. 
- reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Loss weight of current loss. - """ - - def __init__(self, - alpha=2.0, - gamma=4.0, - reduction='mean', - loss_weight=1.0): - super(GaussianFocalLoss, self).__init__() - self.alpha = alpha - self.gamma = gamma - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction - in gaussian distribution. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_reg = self.loss_weight * gaussian_focal_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - reduction=reduction, - avg_factor=avg_factor) - return loss_reg diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/gfocal_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/gfocal_loss.py deleted file mode 100644 index 0e8d2637..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/gfocal_loss.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def quality_focal_loss(pred, target, beta=2.0): - r"""Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning - Qualified and Distributed Bounding Boxes for Dense Object Detection - `_. - - Args: - pred (torch.Tensor): Predicted joint representation of classification - and quality (IoU) estimation with shape (N, C), C is the number of - classes. - target (tuple([torch.Tensor])): Target category label with shape (N,) - and target quality label with shape (N,). - beta (float): The beta parameter for calculating the modulating factor. - Defaults to 2.0. - - Returns: - torch.Tensor: Loss tensor with shape (N,). 
- """ - assert len(target) == 2, """target for QFL must be a tuple of two elements, - including category label and quality label, respectively""" - # label denotes the category id, score denotes the quality score - label, score = target - - # negatives are supervised by 0 quality score - pred_sigmoid = pred.sigmoid() - scale_factor = pred_sigmoid - zerolabel = scale_factor.new_zeros(pred.shape) - loss = F.binary_cross_entropy_with_logits( - pred, zerolabel, reduction='none') * scale_factor.pow(beta) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = pred.size(1) - pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1) - pos_label = label[pos].long() - # positives are supervised by bbox quality (IoU) score - scale_factor = score[pos] - pred_sigmoid[pos, pos_label] - loss[pos, pos_label] = F.binary_cross_entropy_with_logits( - pred[pos, pos_label], score[pos], - reduction='none') * scale_factor.abs().pow(beta) - - loss = loss.sum(dim=1, keepdim=False) - return loss - - -@weighted_loss -def quality_focal_loss_with_prob(pred, target, beta=2.0): - r"""Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning - Qualified and Distributed Bounding Boxes for Dense Object Detection - `_. - Different from `quality_focal_loss`, this function accepts probability - as input. - - Args: - pred (torch.Tensor): Predicted joint representation of classification - and quality (IoU) estimation with shape (N, C), C is the number of - classes. - target (tuple([torch.Tensor])): Target category label with shape (N,) - and target quality label with shape (N,). - beta (float): The beta parameter for calculating the modulating factor. - Defaults to 2.0. - - Returns: - torch.Tensor: Loss tensor with shape (N,). - """ - assert len(target) == 2, """target for QFL must be a tuple of two elements, - including category label and quality label, respectively""" - # label denotes the category id, score denotes the quality score - label, score = target - - # negatives are supervised by 0 quality score - pred_sigmoid = pred - scale_factor = pred_sigmoid - zerolabel = scale_factor.new_zeros(pred.shape) - loss = F.binary_cross_entropy( - pred, zerolabel, reduction='none') * scale_factor.pow(beta) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = pred.size(1) - pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1) - pos_label = label[pos].long() - # positives are supervised by bbox quality (IoU) score - scale_factor = score[pos] - pred_sigmoid[pos, pos_label] - loss[pos, pos_label] = F.binary_cross_entropy( - pred[pos, pos_label], score[pos], - reduction='none') * scale_factor.abs().pow(beta) - - loss = loss.sum(dim=1, keepdim=False) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def distribution_focal_loss(pred, label): - r"""Distribution Focal Loss (DFL) is from `Generalized Focal Loss: Learning - Qualified and Distributed Bounding Boxes for Dense Object Detection - `_. - - Args: - pred (torch.Tensor): Predicted general distribution of bounding boxes - (before softmax) with shape (N, n+1), n is the max value of the - integral set `{0, ..., n}` in paper. - label (torch.Tensor): Target distance label for bounding boxes with - shape (N,). - - Returns: - torch.Tensor: Loss tensor with shape (N,). 
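A hedged, standalone sketch of the quality focal loss recipe above (toy logits and labels, not the mmdet API): every class logit is first supervised toward 0, and the ground-truth class of each positive sample is then re-supervised toward its IoU quality score:

```python
import torch
import torch.nn.functional as F

beta = 2.0
pred = torch.tensor([[2.0, -1.0, 0.5],
                     [-0.5, 0.3, -2.0]])   # raw logits, shape (N, C) with C = 3
label = torch.tensor([0, 3])               # class ids; id 3 (== C) marks background here
score = torch.tensor([0.8, 0.0])           # IoU quality of each matched box

pred_sigmoid = pred.sigmoid()
zerolabel = torch.zeros_like(pred)
# Every entry starts as a negative supervised towards 0.
loss = F.binary_cross_entropy_with_logits(
    pred, zerolabel, reduction='none') * pred_sigmoid.pow(beta)

# Positive samples' ground-truth classes are re-supervised towards their IoU score.
bg_class_ind = pred.size(1)
pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1)
pos_label = label[pos].long()
scale_factor = score[pos] - pred_sigmoid[pos, pos_label]
loss[pos, pos_label] = F.binary_cross_entropy_with_logits(
    pred[pos, pos_label], score[pos],
    reduction='none') * scale_factor.abs().pow(beta)
print(loss.sum(dim=1))                     # per-sample QFL before weighting/reduction
```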
- """ - dis_left = label.long() - dis_right = dis_left + 1 - weight_left = dis_right.float() - label - weight_right = label - dis_left.float() - loss = F.cross_entropy(pred, dis_left, reduction='none') * weight_left \ - + F.cross_entropy(pred, dis_right, reduction='none') * weight_right - return loss - - -@LOSSES.register_module() -class QualityFocalLoss(nn.Module): - r"""Quality Focal Loss (QFL) is a variant of `Generalized Focal Loss: - Learning Qualified and Distributed Bounding Boxes for Dense Object - Detection `_. - - Args: - use_sigmoid (bool): Whether sigmoid operation is conducted in QFL. - Defaults to True. - beta (float): The beta parameter for calculating the modulating factor. - Defaults to 2.0. - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Loss weight of current loss. - activated (bool, optional): Whether the input is activated. - If True, it means the input has been activated and can be - treated as probabilities. Else, it should be treated as logits. - Defaults to False. - """ - - def __init__(self, - use_sigmoid=True, - beta=2.0, - reduction='mean', - loss_weight=1.0, - activated=False): - super(QualityFocalLoss, self).__init__() - assert use_sigmoid is True, 'Only sigmoid in QFL supported now.' - self.use_sigmoid = use_sigmoid - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - self.activated = activated - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): Predicted joint representation of - classification and quality (IoU) estimation with shape (N, C), - C is the number of classes. - target (tuple([torch.Tensor])): Target category label with shape - (N,) and target quality label with shape (N,). - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - if self.activated: - calculate_loss_func = quality_focal_loss_with_prob - else: - calculate_loss_func = quality_focal_loss - loss_cls = self.loss_weight * calculate_loss_func( - pred, - target, - weight, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor) - else: - raise NotImplementedError - return loss_cls - - -@LOSSES.register_module() -class DistributionFocalLoss(nn.Module): - r"""Distribution Focal Loss (DFL) is a variant of `Generalized Focal Loss: - Learning Qualified and Distributed Bounding Boxes for Dense Object - Detection `_. - - Args: - reduction (str): Options are `'none'`, `'mean'` and `'sum'`. - loss_weight (float): Loss weight of current loss. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(DistributionFocalLoss, self).__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): Predicted general distribution of bounding - boxes (before softmax) with shape (N, n+1), n is the max value - of the integral set `{0, ..., n}` in paper. 
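The distribution focal loss above turns a continuous regression target into a weighted pair of neighbouring integer bins. A short worked example with made-up numbers:

```python
import torch
import torch.nn.functional as F

pred = torch.randn(1, 8)                 # distribution logits over the bins {0, ..., 7}
label = torch.tensor([2.3])              # continuous distance target

dis_left = label.long()                  # tensor([2])
dis_right = dis_left + 1                 # tensor([3])
weight_left = dis_right.float() - label  # tensor([0.7])
weight_right = label - dis_left.float()  # tensor([0.3])
loss = F.cross_entropy(pred, dis_left, reduction='none') * weight_left \
    + F.cross_entropy(pred, dis_right, reduction='none') * weight_right
print(loss)                              # the target 2.3 is split 70/30 between bins 2 and 3
```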
- target (torch.Tensor): Target distance label for bounding boxes - with shape (N,). - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_cls = self.loss_weight * distribution_focal_loss( - pred, target, weight, reduction=reduction, avg_factor=avg_factor) - return loss_cls diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/ghm_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/ghm_loss.py deleted file mode 100644 index a4df9fe8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/ghm_loss.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -def _expand_onehot_labels(labels, label_weights, label_channels): - bin_labels = labels.new_full((labels.size(0), label_channels), 0) - inds = torch.nonzero( - (labels >= 0) & (labels < label_channels), as_tuple=False).squeeze() - if inds.numel() > 0: - bin_labels[inds, labels[inds]] = 1 - bin_label_weights = label_weights.view(-1, 1).expand( - label_weights.size(0), label_channels) - return bin_labels, bin_label_weights - - -# TODO: code refactoring to make it consistent with other losses -@LOSSES.register_module() -class GHMC(nn.Module): - """GHM Classification Loss. - - Details of the theorem can be viewed in the paper - `Gradient Harmonized Single-stage Detector - `_. - - Args: - bins (int): Number of the unit regions for distribution calculation. - momentum (float): The parameter for moving average. - use_sigmoid (bool): Can only be true for BCE based loss now. - loss_weight (float): The weight of the total GHM-C loss. - reduction (str): Options are "none", "mean" and "sum". - Defaults to "mean" - """ - - def __init__(self, - bins=10, - momentum=0, - use_sigmoid=True, - loss_weight=1.0, - reduction='mean'): - super(GHMC, self).__init__() - self.bins = bins - self.momentum = momentum - edges = torch.arange(bins + 1).float() / bins - self.register_buffer('edges', edges) - self.edges[-1] += 1e-6 - if momentum > 0: - acc_sum = torch.zeros(bins) - self.register_buffer('acc_sum', acc_sum) - self.use_sigmoid = use_sigmoid - if not self.use_sigmoid: - raise NotImplementedError - self.loss_weight = loss_weight - self.reduction = reduction - - def forward(self, - pred, - target, - label_weight, - reduction_override=None, - **kwargs): - """Calculate the GHM-C loss. - - Args: - pred (float tensor of size [batch_num, class_num]): - The direct prediction of classification fc layer. - target (float tensor of size [batch_num, class_num]): - Binary class target for each sample. - label_weight (float tensor of size [batch_num, class_num]): - the value is 1 if the sample is valid and 0 if ignored. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - Returns: - The gradient harmonized loss. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - # the target should be binary class label - if pred.dim() != target.dim(): - target, label_weight = _expand_onehot_labels( - target, label_weight, pred.size(-1)) - target, label_weight = target.float(), label_weight.float() - edges = self.edges - mmt = self.momentum - weights = torch.zeros_like(pred) - - # gradient length - g = torch.abs(pred.sigmoid().detach() - target) - - valid = label_weight > 0 - tot = max(valid.float().sum().item(), 1.0) - n = 0 # n valid bins - for i in range(self.bins): - inds = (g >= edges[i]) & (g < edges[i + 1]) & valid - num_in_bin = inds.sum().item() - if num_in_bin > 0: - if mmt > 0: - self.acc_sum[i] = mmt * self.acc_sum[i] \ - + (1 - mmt) * num_in_bin - weights[inds] = tot / self.acc_sum[i] - else: - weights[inds] = tot / num_in_bin - n += 1 - if n > 0: - weights = weights / n - - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') - loss = weight_reduce_loss( - loss, weights, reduction=reduction, avg_factor=tot) - return loss * self.loss_weight - - -# TODO: code refactoring to make it consistent with other losses -@LOSSES.register_module() -class GHMR(nn.Module): - """GHM Regression Loss. - - Details of the theorem can be viewed in the paper - `Gradient Harmonized Single-stage Detector - `_. - - Args: - mu (float): The parameter for the Authentic Smooth L1 loss. - bins (int): Number of the unit regions for distribution calculation. - momentum (float): The parameter for moving average. - loss_weight (float): The weight of the total GHM-R loss. - reduction (str): Options are "none", "mean" and "sum". - Defaults to "mean" - """ - - def __init__(self, - mu=0.02, - bins=10, - momentum=0, - loss_weight=1.0, - reduction='mean'): - super(GHMR, self).__init__() - self.mu = mu - self.bins = bins - edges = torch.arange(bins + 1).float() / bins - self.register_buffer('edges', edges) - self.edges[-1] = 1e3 - self.momentum = momentum - if momentum > 0: - acc_sum = torch.zeros(bins) - self.register_buffer('acc_sum', acc_sum) - self.loss_weight = loss_weight - self.reduction = reduction - - # TODO: support reduction parameter - def forward(self, - pred, - target, - label_weight, - avg_factor=None, - reduction_override=None): - """Calculate the GHM-R loss. - - Args: - pred (float tensor of size [batch_num, 4 (* class_num)]): - The prediction of box regression layer. Channel number can be 4 - or 4 * class_num depending on whether it is class-agnostic. - target (float tensor of size [batch_num, 4 (* class_num)]): - The target regression values with the same size of pred. - label_weight (float tensor of size [batch_num, 4 (* class_num)]): - The weight of each sample, 0 if ignored. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - Returns: - The gradient harmonized loss. 
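The loop above is the heart of GHM-C: samples are bucketed by gradient norm and each bucket's members share a weight of `tot / num_in_bin`. A toy, self-contained rerun of that re-weighting (invented logits, single-class case for brevity):

```python
import torch

bins = 10
edges = torch.arange(bins + 1).float() / bins
edges[-1] += 1e-6

pred = torch.tensor([3.0, 2.5, 2.8, -4.0, 0.1])   # logits
target = torch.tensor([1.0, 1.0, 1.0, 1.0, 1.0])  # all positives
valid = torch.ones_like(target).bool()

g = torch.abs(pred.sigmoid() - target)            # gradient norm per sample
weights = torch.zeros_like(pred)
tot = valid.float().sum().item()
n = 0
for i in range(bins):
    inds = (g >= edges[i]) & (g < edges[i + 1]) & valid
    num_in_bin = inds.sum().item()
    if num_in_bin > 0:
        weights[inds] = tot / num_in_bin          # crowded bins get smaller weights
        n += 1
weights = weights / max(n, 1)
print(weights)   # the lone hard sample (logit -4.0) keeps a much larger weight
```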
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - mu = self.mu - edges = self.edges - mmt = self.momentum - - # ASL1 loss - diff = pred - target - loss = torch.sqrt(diff * diff + mu * mu) - mu - - # gradient length - g = torch.abs(diff / torch.sqrt(mu * mu + diff * diff)).detach() - weights = torch.zeros_like(g) - - valid = label_weight > 0 - tot = max(label_weight.float().sum().item(), 1.0) - n = 0 # n: valid bins - for i in range(self.bins): - inds = (g >= edges[i]) & (g < edges[i + 1]) & valid - num_in_bin = inds.sum().item() - if num_in_bin > 0: - n += 1 - if mmt > 0: - self.acc_sum[i] = mmt * self.acc_sum[i] \ - + (1 - mmt) * num_in_bin - weights[inds] = tot / self.acc_sum[i] - else: - weights[inds] = tot / num_in_bin - if n > 0: - weights /= n - loss = weight_reduce_loss( - loss, weights, reduction=reduction, avg_factor=tot) - return loss * self.loss_weight diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/iou_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/iou_loss.py deleted file mode 100644 index bf1ed04e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/iou_loss.py +++ /dev/null @@ -1,474 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -import warnings - -import mmcv -import torch -import torch.nn as nn - -from mmdet.core import bbox_overlaps -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def iou_loss(pred, target, linear=False, mode='log', eps=1e-6): - """IoU loss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - The loss is calculated as negative log of IoU. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - linear (bool, optional): If True, use linear scale of loss instead of - log scale. Default: False. - mode (str): Loss scaling mode, including "linear", "square", and "log". - Default: 'log' - eps (float): Eps to avoid log(0). - - Return: - torch.Tensor: Loss tensor. - """ - assert mode in ['linear', 'square', 'log'] - if linear: - mode = 'linear' - warnings.warn('DeprecationWarning: Setting "linear=True" in ' - 'iou_loss is deprecated, please use "mode=`linear`" ' - 'instead.') - ious = bbox_overlaps(pred, target, is_aligned=True).clamp(min=eps) - if mode == 'linear': - loss = 1 - ious - elif mode == 'square': - loss = 1 - ious**2 - elif mode == 'log': - loss = -ious.log() - else: - raise NotImplementedError - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def bounded_iou_loss(pred, target, beta=0.2, eps=1e-3): - """BIoULoss. - - This is an implementation of paper - `Improving Object Localization with Fitness NMS and Bounded IoU Loss. - `_. - - Args: - pred (torch.Tensor): Predicted bboxes. - target (torch.Tensor): Target bboxes. - beta (float): beta parameter in smoothl1. - eps (float): eps to avoid NaN. 
- """ - pred_ctrx = (pred[:, 0] + pred[:, 2]) * 0.5 - pred_ctry = (pred[:, 1] + pred[:, 3]) * 0.5 - pred_w = pred[:, 2] - pred[:, 0] - pred_h = pred[:, 3] - pred[:, 1] - with torch.no_grad(): - target_ctrx = (target[:, 0] + target[:, 2]) * 0.5 - target_ctry = (target[:, 1] + target[:, 3]) * 0.5 - target_w = target[:, 2] - target[:, 0] - target_h = target[:, 3] - target[:, 1] - - dx = target_ctrx - pred_ctrx - dy = target_ctry - pred_ctry - - loss_dx = 1 - torch.max( - (target_w - 2 * dx.abs()) / - (target_w + 2 * dx.abs() + eps), torch.zeros_like(dx)) - loss_dy = 1 - torch.max( - (target_h - 2 * dy.abs()) / - (target_h + 2 * dy.abs() + eps), torch.zeros_like(dy)) - loss_dw = 1 - torch.min(target_w / (pred_w + eps), pred_w / - (target_w + eps)) - loss_dh = 1 - torch.min(target_h / (pred_h + eps), pred_h / - (target_h + eps)) - # view(..., -1) does not work for empty tensor - loss_comb = torch.stack([loss_dx, loss_dy, loss_dw, loss_dh], - dim=-1).flatten(1) - - loss = torch.where(loss_comb < beta, 0.5 * loss_comb * loss_comb / beta, - loss_comb - 0.5 * beta) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def giou_loss(pred, target, eps=1e-7): - r"""`Generalized Intersection over Union: A Metric and A Loss for Bounding - Box Regression `_. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - - Return: - Tensor: Loss tensor. - """ - gious = bbox_overlaps(pred, target, mode='giou', is_aligned=True, eps=eps) - loss = 1 - gious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def diou_loss(pred, target, eps=1e-7): - r"""`Implementation of Distance-IoU Loss: Faster and Better - Learning for Bounding Box Regression, https://arxiv.org/abs/1911.08287`_. - - Code is modified from https://github.com/Zzh-tju/DIoU. - - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - # DIoU - dious = ious - rho2 / c2 - loss = 1 - dious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def ciou_loss(pred, target, eps=1e-7): - r"""`Implementation of paper `Enhancing Geometric Factors into - Model Learning and Inference for Object Detection and Instance - Segmentation `_. - - Code is modified from https://github.com/Zzh-tju/CIoU. 
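A self-contained numeric walk-through (two toy boxes, plain PyTorch, no mmdet imports) of the DIoU loss above: IoU minus the squared centre distance normalised by the squared diagonal of the smallest enclosing box:

```python
import torch

pred = torch.tensor([[0.0, 0.0, 4.0, 4.0]])    # predicted box (x1, y1, x2, y2)
target = torch.tensor([[2.0, 2.0, 6.0, 6.0]])  # ground-truth box
eps = 1e-7

# IoU: overlap area over union area.
lt = torch.max(pred[:, :2], target[:, :2])
rb = torch.min(pred[:, 2:], target[:, 2:])
wh = (rb - lt).clamp(min=0)
overlap = wh[:, 0] * wh[:, 1]                                      # 2 * 2 = 4
ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])         # 16
ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
ious = overlap / (ap + ag - overlap + eps)                         # 4 / 28

# Penalty: squared center distance over squared enclosing-box diagonal.
enclose_wh = (torch.max(pred[:, 2:], target[:, 2:])
              - torch.min(pred[:, :2], target[:, :2])).clamp(min=0)
c2 = enclose_wh[:, 0] ** 2 + enclose_wh[:, 1] ** 2 + eps           # 36 + 36 = 72
rho2 = ((target[:, 0] + target[:, 2]) - (pred[:, 0] + pred[:, 2])) ** 2 / 4 \
    + ((target[:, 1] + target[:, 3]) - (pred[:, 1] + pred[:, 3])) ** 2 / 4  # 8

loss = 1 - (ious - rho2 / c2)   # roughly 1 - (0.143 - 0.111) = 0.968
print(loss)
```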
- - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - factor = 4 / math.pi**2 - v = factor * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - - with torch.no_grad(): - alpha = (ious > 0.5).float() * v / (1 - ious + v) - - # CIoU - cious = ious - (rho2 / c2 + alpha * v) - loss = 1 - cious.clamp(min=-1.0, max=1.0) - return loss - - -@LOSSES.register_module() -class IoULoss(nn.Module): - """IoULoss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - - Args: - linear (bool): If True, use linear scale of loss else determined - by mode. Default: False. - eps (float): Eps to avoid log(0). - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Weight of loss. - mode (str): Loss scaling mode, including "linear", "square", and "log". - Default: 'log' - """ - - def __init__(self, - linear=False, - eps=1e-6, - reduction='mean', - loss_weight=1.0, - mode='log'): - super(IoULoss, self).__init__() - assert mode in ['linear', 'square', 'log'] - if linear: - mode = 'linear' - warnings.warn('DeprecationWarning: Setting "linear=True" in ' - 'IOULoss is deprecated, please use "mode=`linear`" ' - 'instead.') - self.mode = mode - self.linear = linear - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. Options are "none", "mean" and "sum". 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if (weight is not None) and (not torch.any(weight > 0)) and ( - reduction != 'none'): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # iou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * iou_loss( - pred, - target, - weight, - mode=self.mode, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class BoundedIoULoss(nn.Module): - - def __init__(self, beta=0.2, eps=1e-3, reduction='mean', loss_weight=1.0): - super(BoundedIoULoss, self).__init__() - self.beta = beta - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss = self.loss_weight * bounded_iou_loss( - pred, - target, - weight, - beta=self.beta, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class GIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(GIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * giou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class DIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(DIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - 
loss = self.loss_weight * diou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class CIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(CIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * ciou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/kd_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/kd_loss.py deleted file mode 100644 index 75c19355..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/kd_loss.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def knowledge_distillation_kl_div_loss(pred, - soft_label, - T, - detach_target=True): - r"""Loss function for knowledge distilling using KL divergence. - - Args: - pred (Tensor): Predicted logits with shape (N, n + 1). - soft_label (Tensor): Target logits with shape (N, N + 1). - T (int): Temperature for distillation. - detach_target (bool): Remove soft_label from automatic differentiation - - Returns: - torch.Tensor: Loss tensor with shape (N,). - """ - assert pred.size() == soft_label.size() - target = F.softmax(soft_label / T, dim=1) - if detach_target: - target = target.detach() - - kd_loss = F.kl_div( - F.log_softmax(pred / T, dim=1), target, reduction='none').mean(1) * ( - T * T) - - return kd_loss - - -@LOSSES.register_module() -class KnowledgeDistillationKLDivLoss(nn.Module): - """Loss function for knowledge distilling using KL divergence. - - Args: - reduction (str): Options are `'none'`, `'mean'` and `'sum'`. - loss_weight (float): Loss weight of current loss. - T (int): Temperature for distillation. - """ - - def __init__(self, reduction='mean', loss_weight=1.0, T=10): - super(KnowledgeDistillationKLDivLoss, self).__init__() - assert T >= 1 - self.reduction = reduction - self.loss_weight = loss_weight - self.T = T - - def forward(self, - pred, - soft_label, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (Tensor): Predicted logits with shape (N, n + 1). - soft_label (Tensor): Target logits with shape (N, N + 1). - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. 
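A minimal sketch of the temperature-scaled KL term defined above (random toy logits, shapes assumed for illustration): both teacher and student logits are softened by the same temperature T, and the divergence is rescaled by T * T so its gradients keep a comparable magnitude:

```python
import torch
import torch.nn.functional as F

T = 10
student = torch.randn(4, 81)   # (N, n + 1) logits
teacher = torch.randn(4, 81)   # soft labels from the teacher, same shape

target = F.softmax(teacher / T, dim=1).detach()   # soften and detach the teacher
kd_loss = F.kl_div(F.log_softmax(student / T, dim=1), target,
                   reduction='none').mean(1) * (T * T)
print(kd_loss.shape)           # one distillation value per sample
```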
- Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - - reduction = ( - reduction_override if reduction_override else self.reduction) - - loss_kd = self.loss_weight * knowledge_distillation_kl_div_loss( - pred, - soft_label, - weight, - reduction=reduction, - avg_factor=avg_factor, - T=self.T) - - return loss_kd diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/mse_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/mse_loss.py deleted file mode 100644 index 4a622f86..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/mse_loss.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weighted_loss - - -@weighted_loss -def mse_loss(pred, target): - """Warpper of mse loss.""" - return F.mse_loss(pred, target, reduction='none') - - -@LOSSES.register_module() -class MSELoss(nn.Module): - """MSELoss. - - Args: - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of the loss. Defaults to 1.0 - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super().__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function of loss. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): Weight of the loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss = self.loss_weight * mse_loss( - pred, target, weight, reduction=reduction, avg_factor=avg_factor) - return loss diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/pisa_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/pisa_loss.py deleted file mode 100644 index 6afea0e5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/pisa_loss.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch - -from mmdet.core import bbox_overlaps - - -@mmcv.jit(derivate=True, coderize=True) -def isr_p(cls_score, - bbox_pred, - bbox_targets, - rois, - sampling_results, - loss_cls, - bbox_coder, - k=2, - bias=0, - num_class=80): - """Importance-based Sample Reweighting (ISR_P), positive part. - - Args: - cls_score (Tensor): Predicted classification scores. - bbox_pred (Tensor): Predicted bbox deltas. - bbox_targets (tuple[Tensor]): A tuple of bbox targets, the are - labels, label_weights, bbox_targets, bbox_weights, respectively. - rois (Tensor): Anchors (single_stage) in shape (n, 4) or RoIs - (two_stage) in shape (n, 5). - sampling_results (obj): Sampling results. - loss_cls (func): Classification loss func of the head. - bbox_coder (obj): BBox coder of the head. - k (float): Power of the non-linear mapping. - bias (float): Shift of the non-linear mapping. - num_class (int): Number of classes, default: 80. 
- - Return: - tuple([Tensor]): labels, imp_based_label_weights, bbox_targets, - bbox_target_weights - """ - - labels, label_weights, bbox_targets, bbox_weights = bbox_targets - pos_label_inds = ((labels >= 0) & - (labels < num_class)).nonzero().reshape(-1) - pos_labels = labels[pos_label_inds] - - # if no positive samples, return the original targets - num_pos = float(pos_label_inds.size(0)) - if num_pos == 0: - return labels, label_weights, bbox_targets, bbox_weights - - # merge pos_assigned_gt_inds of per image to a single tensor - gts = list() - last_max_gt = 0 - for i in range(len(sampling_results)): - gt_i = sampling_results[i].pos_assigned_gt_inds - gts.append(gt_i + last_max_gt) - if len(gt_i) != 0: - last_max_gt = gt_i.max() + 1 - gts = torch.cat(gts) - assert len(gts) == num_pos - - cls_score = cls_score.detach() - bbox_pred = bbox_pred.detach() - - # For single stage detectors, rois here indicate anchors, in shape (N, 4) - # For two stage detectors, rois are in shape (N, 5) - if rois.size(-1) == 5: - pos_rois = rois[pos_label_inds][:, 1:] - else: - pos_rois = rois[pos_label_inds] - - if bbox_pred.size(-1) > 4: - bbox_pred = bbox_pred.view(bbox_pred.size(0), -1, 4) - pos_delta_pred = bbox_pred[pos_label_inds, pos_labels].view(-1, 4) - else: - pos_delta_pred = bbox_pred[pos_label_inds].view(-1, 4) - - # compute iou of the predicted bbox and the corresponding GT - pos_delta_target = bbox_targets[pos_label_inds].view(-1, 4) - pos_bbox_pred = bbox_coder.decode(pos_rois, pos_delta_pred) - target_bbox_pred = bbox_coder.decode(pos_rois, pos_delta_target) - ious = bbox_overlaps(pos_bbox_pred, target_bbox_pred, is_aligned=True) - - pos_imp_weights = label_weights[pos_label_inds] - # Two steps to compute IoU-HLR. Samples are first sorted by IoU locally, - # then sorted again within the same-rank group - max_l_num = pos_labels.bincount().max() - for label in pos_labels.unique(): - l_inds = (pos_labels == label).nonzero().view(-1) - l_gts = gts[l_inds] - for t in l_gts.unique(): - t_inds = l_inds[l_gts == t] - t_ious = ious[t_inds] - _, t_iou_rank_idx = t_ious.sort(descending=True) - _, t_iou_rank = t_iou_rank_idx.sort() - ious[t_inds] += max_l_num - t_iou_rank.float() - l_ious = ious[l_inds] - _, l_iou_rank_idx = l_ious.sort(descending=True) - _, l_iou_rank = l_iou_rank_idx.sort() # IoU-HLR - # linearly map HLR to label weights - pos_imp_weights[l_inds] *= (max_l_num - l_iou_rank.float()) / max_l_num - - pos_imp_weights = (bias + pos_imp_weights * (1 - bias)).pow(k) - - # normalize to make the new weighted loss value equal to the original loss - pos_loss_cls = loss_cls( - cls_score[pos_label_inds], pos_labels, reduction_override='none') - if pos_loss_cls.dim() > 1: - ori_pos_loss_cls = pos_loss_cls * label_weights[pos_label_inds][:, - None] - new_pos_loss_cls = pos_loss_cls * pos_imp_weights[:, None] - else: - ori_pos_loss_cls = pos_loss_cls * label_weights[pos_label_inds] - new_pos_loss_cls = pos_loss_cls * pos_imp_weights - pos_loss_cls_ratio = ori_pos_loss_cls.sum() / new_pos_loss_cls.sum() - pos_imp_weights = pos_imp_weights * pos_loss_cls_ratio - label_weights[pos_label_inds] = pos_imp_weights - - bbox_targets = labels, label_weights, bbox_targets, bbox_weights - return bbox_targets - - -@mmcv.jit(derivate=True, coderize=True) -def carl_loss(cls_score, - labels, - bbox_pred, - bbox_targets, - loss_bbox, - k=1, - bias=0.2, - avg_factor=None, - sigmoid=False, - num_class=80): - """Classification-Aware Regression Loss (CARL). - - Args: - cls_score (Tensor): Predicted classification scores. 
- labels (Tensor): Targets of classification. - bbox_pred (Tensor): Predicted bbox deltas. - bbox_targets (Tensor): Target of bbox regression. - loss_bbox (func): Regression loss func of the head. - bbox_coder (obj): BBox coder of the head. - k (float): Power of the non-linear mapping. - bias (float): Shift of the non-linear mapping. - avg_factor (int): Average factor used in regression loss. - sigmoid (bool): Activation of the classification score. - num_class (int): Number of classes, default: 80. - - Return: - dict: CARL loss dict. - """ - pos_label_inds = ((labels >= 0) & - (labels < num_class)).nonzero().reshape(-1) - if pos_label_inds.numel() == 0: - return dict(loss_carl=cls_score.sum()[None] * 0.) - pos_labels = labels[pos_label_inds] - - # multiply pos_cls_score with the corresponding bbox weight - # and remain gradient - if sigmoid: - pos_cls_score = cls_score.sigmoid()[pos_label_inds, pos_labels] - else: - pos_cls_score = cls_score.softmax(-1)[pos_label_inds, pos_labels] - carl_loss_weights = (bias + (1 - bias) * pos_cls_score).pow(k) - - # normalize carl_loss_weight to make its sum equal to num positive - num_pos = float(pos_cls_score.size(0)) - weight_ratio = num_pos / carl_loss_weights.sum() - carl_loss_weights *= weight_ratio - - if avg_factor is None: - avg_factor = bbox_targets.size(0) - # if is class agnostic, bbox pred is in shape (N, 4) - # otherwise, bbox pred is in shape (N, #classes, 4) - if bbox_pred.size(-1) > 4: - bbox_pred = bbox_pred.view(bbox_pred.size(0), -1, 4) - pos_bbox_preds = bbox_pred[pos_label_inds, pos_labels] - else: - pos_bbox_preds = bbox_pred[pos_label_inds] - ori_loss_reg = loss_bbox( - pos_bbox_preds, - bbox_targets[pos_label_inds], - reduction_override='none') / avg_factor - loss_carl = (ori_loss_reg * carl_loss_weights[:, None]).sum() - return dict(loss_carl=loss_carl[None]) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/seesaw_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/seesaw_loss.py deleted file mode 100644 index 01040472..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/seesaw_loss.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .accuracy import accuracy -from .cross_entropy_loss import cross_entropy -from .utils import weight_reduce_loss - - -def seesaw_ce_loss(cls_score, - labels, - label_weights, - cum_samples, - num_classes, - p, - q, - eps, - reduction='mean', - avg_factor=None): - """Calculate the Seesaw CrossEntropy loss. - - Args: - cls_score (torch.Tensor): The prediction with shape (N, C), - C is the number of classes. - labels (torch.Tensor): The learning label of the prediction. - label_weights (torch.Tensor): Sample-wise loss weight. - cum_samples (torch.Tensor): Cumulative samples for each category. - num_classes (int): The number of classes. - p (float): The ``p`` in the mitigation factor. - q (float): The ``q`` in the compenstation factor. - eps (float): The minimal value of divisor to smooth - the computation of compensation factor - reduction (str, optional): The method used to reduce the loss. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. 
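A small numeric illustration (made-up scores) of the CARL re-weighting just defined: each positive's regression loss is scaled by a function of its own classification score, and the weights are renormalised so they sum to the number of positives:

```python
import torch

k, bias = 1.0, 0.2
pos_cls_score = torch.tensor([0.9, 0.5, 0.1])   # each positive's score for its GT class

carl_loss_weights = (bias + (1 - bias) * pos_cls_score).pow(k)
# Renormalise so the weights sum to the number of positives.
carl_loss_weights *= pos_cls_score.numel() / carl_loss_weights.sum()
print(carl_loss_weights)   # confident detections receive a larger regression weight
```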
- - Returns: - torch.Tensor: The calculated loss - """ - assert cls_score.size(-1) == num_classes - assert len(cum_samples) == num_classes - - onehot_labels = F.one_hot(labels, num_classes) - seesaw_weights = cls_score.new_ones(onehot_labels.size()) - - # mitigation factor - if p > 0: - sample_ratio_matrix = cum_samples[None, :].clamp( - min=1) / cum_samples[:, None].clamp(min=1) - index = (sample_ratio_matrix < 1.0).float() - sample_weights = sample_ratio_matrix.pow(p) * index + (1 - index) - mitigation_factor = sample_weights[labels.long(), :] - seesaw_weights = seesaw_weights * mitigation_factor - - # compensation factor - if q > 0: - scores = F.softmax(cls_score.detach(), dim=1) - self_scores = scores[ - torch.arange(0, len(scores)).to(scores.device).long(), - labels.long()] - score_matrix = scores / self_scores[:, None].clamp(min=eps) - index = (score_matrix > 1.0).float() - compensation_factor = score_matrix.pow(q) * index + (1 - index) - seesaw_weights = seesaw_weights * compensation_factor - - cls_score = cls_score + (seesaw_weights.log() * (1 - onehot_labels)) - - loss = F.cross_entropy(cls_score, labels, weight=None, reduction='none') - - if label_weights is not None: - label_weights = label_weights.float() - loss = weight_reduce_loss( - loss, weight=label_weights, reduction=reduction, avg_factor=avg_factor) - return loss - - -@LOSSES.register_module() -class SeesawLoss(nn.Module): - """ - Seesaw Loss for Long-Tailed Instance Segmentation (CVPR 2021) - arXiv: https://arxiv.org/abs/2008.10032 - - Args: - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Only False is supported. - p (float, optional): The ``p`` in the mitigation factor. - Defaults to 0.8. - q (float, optional): The ``q`` in the compenstation factor. - Defaults to 2.0. - num_classes (int, optional): The number of classes. - Default to 1203 for LVIS v1 dataset. - eps (float, optional): The minimal value of divisor to smooth - the computation of compensation factor - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of the loss. Defaults to 1.0 - return_dict (bool, optional): Whether return the losses as a dict. - Default to True. - """ - - def __init__(self, - use_sigmoid=False, - p=0.8, - q=2.0, - num_classes=1203, - eps=1e-2, - reduction='mean', - loss_weight=1.0, - return_dict=True): - super(SeesawLoss, self).__init__() - assert not use_sigmoid - self.use_sigmoid = False - self.p = p - self.q = q - self.num_classes = num_classes - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - self.return_dict = return_dict - - # 0 for pos, 1 for neg - self.cls_criterion = seesaw_ce_loss - - # cumulative samples for each category - self.register_buffer( - 'cum_samples', - torch.zeros(self.num_classes + 1, dtype=torch.float)) - - # custom output channels of the classifier - self.custom_cls_channels = True - # custom activation of cls_score - self.custom_activation = True - # custom accuracy of the classsifier - self.custom_accuracy = True - - def _split_cls_score(self, cls_score): - # split cls_score to cls_score_classes and cls_score_objectness - assert cls_score.size(-1) == self.num_classes + 2 - cls_score_classes = cls_score[..., :-2] - cls_score_objectness = cls_score[..., -2:] - return cls_score_classes, cls_score_objectness - - def get_cls_channels(self, num_classes): - """Get custom classification channels. 
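The mitigation factor computed above is easiest to see on a toy class-frequency vector (values assumed for illustration): for a sample of class i, any class j with fewer accumulated samples is down-weighted by (N_j / N_i) ** p:

```python
import torch

p = 0.8
cum_samples = torch.tensor([1000.0, 100.0, 5.0])   # accumulated samples per class
labels = torch.tensor([0])                          # one sample of the frequent class

# ratio[i, j] = N_j / N_i; classes rarer than the labelled one are damped by ratio ** p.
sample_ratio_matrix = cum_samples[None, :].clamp(min=1) / cum_samples[:, None].clamp(min=1)
index = (sample_ratio_matrix < 1.0).float()
sample_weights = sample_ratio_matrix.pow(p) * index + (1 - index)
mitigation_factor = sample_weights[labels.long(), :]
print(mitigation_factor)   # roughly [1.000, 0.158, 0.014]
```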
- - Args: - num_classes (int): The number of classes. - - Returns: - int: The custom classification channels. - """ - assert num_classes == self.num_classes - return num_classes + 2 - - def get_activation(self, cls_score): - """Get custom activation of cls_score. - - Args: - cls_score (torch.Tensor): The prediction with shape (N, C + 2). - - Returns: - torch.Tensor: The custom activation of cls_score with shape - (N, C + 1). - """ - cls_score_classes, cls_score_objectness = self._split_cls_score( - cls_score) - score_classes = F.softmax(cls_score_classes, dim=-1) - score_objectness = F.softmax(cls_score_objectness, dim=-1) - score_pos = score_objectness[..., [0]] - score_neg = score_objectness[..., [1]] - score_classes = score_classes * score_pos - scores = torch.cat([score_classes, score_neg], dim=-1) - return scores - - def get_accuracy(self, cls_score, labels): - """Get custom accuracy w.r.t. cls_score and labels. - - Args: - cls_score (torch.Tensor): The prediction with shape (N, C + 2). - labels (torch.Tensor): The learning label of the prediction. - - Returns: - Dict [str, torch.Tensor]: The accuracy for objectness and classes, - respectively. - """ - pos_inds = labels < self.num_classes - obj_labels = (labels == self.num_classes).long() - cls_score_classes, cls_score_objectness = self._split_cls_score( - cls_score) - acc_objectness = accuracy(cls_score_objectness, obj_labels) - acc_classes = accuracy(cls_score_classes[pos_inds], labels[pos_inds]) - acc = dict() - acc['acc_objectness'] = acc_objectness - acc['acc_classes'] = acc_classes - return acc - - def forward(self, - cls_score, - labels, - label_weights=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - cls_score (torch.Tensor): The prediction with shape (N, C + 2). - labels (torch.Tensor): The learning label of the prediction. - label_weights (torch.Tensor, optional): Sample-wise loss weight. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - Returns: - torch.Tensor | Dict [str, torch.Tensor]: - if return_dict == False: The calculated loss | - if return_dict == True: The dict of calculated losses - for objectness and classes, respectively. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - assert cls_score.size(-1) == self.num_classes + 2 - pos_inds = labels < self.num_classes - # 0 for pos, 1 for neg - obj_labels = (labels == self.num_classes).long() - - # accumulate the samples for each category - unique_labels = labels.unique() - for u_l in unique_labels: - inds_ = labels == u_l.item() - self.cum_samples[u_l] += inds_.sum() - - if label_weights is not None: - label_weights = label_weights.float() - else: - label_weights = labels.new_ones(labels.size(), dtype=torch.float) - - cls_score_classes, cls_score_objectness = self._split_cls_score( - cls_score) - # calculate loss_cls_classes (only need pos samples) - if pos_inds.sum() > 0: - loss_cls_classes = self.loss_weight * self.cls_criterion( - cls_score_classes[pos_inds], labels[pos_inds], - label_weights[pos_inds], self.cum_samples[:self.num_classes], - self.num_classes, self.p, self.q, self.eps, reduction, - avg_factor) - else: - loss_cls_classes = cls_score_classes[pos_inds].sum() - # calculate loss_cls_objectness - loss_cls_objectness = self.loss_weight * cross_entropy( - cls_score_objectness, obj_labels, label_weights, reduction, - avg_factor) - - if self.return_dict: - loss_cls = dict() - loss_cls['loss_cls_objectness'] = loss_cls_objectness - loss_cls['loss_cls_classes'] = loss_cls_classes - else: - loss_cls = loss_cls_classes + loss_cls_objectness - return loss_cls diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/smooth_l1_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/smooth_l1_loss.py deleted file mode 100644 index 55117467..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/smooth_l1_loss.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def smooth_l1_loss(pred, target, beta=1.0): - """Smooth L1 loss. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - - Returns: - torch.Tensor: Calculated loss - """ - assert beta > 0 - if target.numel() == 0: - return pred.sum() * 0 - - assert pred.size() == target.size() - diff = torch.abs(pred - target) - loss = torch.where(diff < beta, 0.5 * diff * diff / beta, - diff - 0.5 * beta) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def l1_loss(pred, target): - """L1 loss. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - - Returns: - torch.Tensor: Calculated loss - """ - if target.numel() == 0: - return pred.sum() * 0 - - assert pred.size() == target.size() - loss = torch.abs(pred - target) - return loss - - -@LOSSES.register_module() -class SmoothL1Loss(nn.Module): - """Smooth L1 loss. - - Args: - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - reduction (str, optional): The method to reduce the loss. - Options are "none", "mean" and "sum". Defaults to "mean". - loss_weight (float, optional): The weight of loss. 
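A quick numeric check (toy values) of the piecewise definition above: quadratic below `beta`, linear above it, with the two pieces meeting at `diff == beta`:

```python
import torch

beta = 1.0
pred = torch.tensor([0.2, 1.0, 3.0])
target = torch.zeros(3)

diff = torch.abs(pred - target)
loss = torch.where(diff < beta, 0.5 * diff * diff / beta, diff - 0.5 * beta)
print(loss)   # tensor([0.0200, 0.5000, 2.5000])
```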
- """ - - def __init__(self, beta=1.0, reduction='mean', loss_weight=1.0): - super(SmoothL1Loss, self).__init__() - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * smooth_l1_loss( - pred, - target, - weight, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox - - -@LOSSES.register_module() -class L1Loss(nn.Module): - """L1 loss. - - Args: - reduction (str, optional): The method to reduce the loss. - Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of loss. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(L1Loss, self).__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * l1_loss( - pred, target, weight, reduction=reduction, avg_factor=avg_factor) - return loss_bbox diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/utils.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/utils.py deleted file mode 100644 index 778237eb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/utils.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import mmcv -import torch -import torch.nn.functional as F - - -def reduce_loss(loss, reduction): - """Reduce loss as specified. - - Args: - loss (Tensor): Elementwise loss tensor. - reduction (str): Options are "none", "mean" and "sum". - - Return: - Tensor: Reduced loss tensor. - """ - reduction_enum = F._Reduction.get_enum(reduction) - # none: 0, elementwise_mean:1, sum: 2 - if reduction_enum == 0: - return loss - elif reduction_enum == 1: - return loss.mean() - elif reduction_enum == 2: - return loss.sum() - - -@mmcv.jit(derivate=True, coderize=True) -def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None): - """Apply element-wise weight and reduce loss. - - Args: - loss (Tensor): Element-wise loss. - weight (Tensor): Element-wise weights. - reduction (str): Same as built-in losses of PyTorch. 
- avg_factor (float): Average factor when computing the mean of losses. - - Returns: - Tensor: Processed loss values. - """ - # if weight is specified, apply element-wise weight - if weight is not None: - loss = loss * weight - - # if avg_factor is not specified, just reduce the loss - if avg_factor is None: - loss = reduce_loss(loss, reduction) - else: - # if reduction is mean, then average the loss by avg_factor - if reduction == 'mean': - # Avoid causing ZeroDivisionError when avg_factor is 0.0, - # i.e., all labels of an image belong to ignore index. - eps = torch.finfo(torch.float32).eps - loss = loss.sum() / (avg_factor + eps) - # if reduction is 'none', then do nothing, otherwise raise an error - elif reduction != 'none': - raise ValueError('avg_factor can not be used with reduction="sum"') - return loss - - -def weighted_loss(loss_func): - """Create a weighted version of a given loss function. - - To use this decorator, the loss function must have the signature like - `loss_func(pred, target, **kwargs)`. The function only needs to compute - element-wise loss without any reduction. This decorator will add weight - and reduction arguments to the function. The decorated function will have - the signature like `loss_func(pred, target, weight=None, reduction='mean', - avg_factor=None, **kwargs)`. - - :Example: - - >>> import torch - >>> @weighted_loss - >>> def l1_loss(pred, target): - >>> return (pred - target).abs() - - >>> pred = torch.Tensor([0, 2, 3]) - >>> target = torch.Tensor([1, 1, 1]) - >>> weight = torch.Tensor([1, 0, 1]) - - >>> l1_loss(pred, target) - tensor(1.3333) - >>> l1_loss(pred, target, weight) - tensor(1.) - >>> l1_loss(pred, target, reduction='none') - tensor([1., 1., 2.]) - >>> l1_loss(pred, target, weight, avg_factor=2) - tensor(1.5000) - """ - - @functools.wraps(loss_func) - def wrapper(pred, - target, - weight=None, - reduction='mean', - avg_factor=None, - **kwargs): - # get element-wise loss - loss = loss_func(pred, target, **kwargs) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - return wrapper diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/varifocal_loss.py b/cv/3d_detection/paconv/pytorch/mmdet/models/losses/varifocal_loss.py deleted file mode 100644 index 42f0eef9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/losses/varifocal_loss.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -@mmcv.jit(derivate=True, coderize=True) -def varifocal_loss(pred, - target, - weight=None, - alpha=0.75, - gamma=2.0, - iou_weighted=True, - reduction='mean', - avg_factor=None): - """`Varifocal Loss `_ - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the - number of classes - target (torch.Tensor): The learning target of the iou-aware - classification score with shape (N, C), C is the number of classes. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - alpha (float, optional): A balance factor for the negative part of - Varifocal Loss, which is different from the alpha of Focal Loss. - Defaults to 0.75. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - iou_weighted (bool, optional): Whether to weight the loss of the - positive example with the iou target. Defaults to True. 
- reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - # pred and target should be of the same size - assert pred.size() == target.size() - pred_sigmoid = pred.sigmoid() - target = target.type_as(pred) - if iou_weighted: - focal_weight = target * (target > 0.0).float() + \ - alpha * (pred_sigmoid - target).abs().pow(gamma) * \ - (target <= 0.0).float() - else: - focal_weight = (target > 0.0).float() + \ - alpha * (pred_sigmoid - target).abs().pow(gamma) * \ - (target <= 0.0).float() - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') * focal_weight - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class VarifocalLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - alpha=0.75, - gamma=2.0, - iou_weighted=True, - reduction='mean', - loss_weight=1.0): - """`Varifocal Loss `_ - - Args: - use_sigmoid (bool, optional): Whether the prediction is - used for sigmoid or softmax. Defaults to True. - alpha (float, optional): A balance factor for the negative part of - Varifocal Loss, which is different from the alpha of Focal - Loss. Defaults to 0.75. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - iou_weighted (bool, optional): Whether to weight the loss of the - positive examples with the iou target. Defaults to True. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - """ - super(VarifocalLoss, self).__init__() - assert use_sigmoid is True, \ - 'Only sigmoid varifocal loss supported now.' - assert alpha >= 0.0 - self.use_sigmoid = use_sigmoid - self.alpha = alpha - self.gamma = gamma - self.iou_weighted = iou_weighted - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - loss_cls = self.loss_weight * varifocal_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - iou_weighted=self.iou_weighted, - reduction=reduction, - avg_factor=avg_factor) - else: - raise NotImplementedError - return loss_cls diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/__init__.py deleted file mode 100644 index 6f2fa823..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .bfp import BFP -from .channel_mapper import ChannelMapper -from .ct_resnet_neck import CTResNetNeck -from .dilated_encoder import DilatedEncoder -from .dyhead import DyHead -from .fpg import FPG -from .fpn import FPN -from .fpn_carafe import FPN_CARAFE -from .hrfpn import HRFPN -from .nas_fpn import NASFPN -from .nasfcos_fpn import NASFCOS_FPN -from .pafpn import PAFPN -from .rfp import RFP -from .ssd_neck import SSDNeck -from .yolo_neck import YOLOV3Neck -from .yolox_pafpn import YOLOXPAFPN - -__all__ = [ - 'FPN', 'BFP', 'ChannelMapper', 'HRFPN', 'NASFPN', 'FPN_CARAFE', 'PAFPN', - 'NASFCOS_FPN', 'RFP', 'YOLOV3Neck', 'FPG', 'DilatedEncoder', - 'CTResNetNeck', 'SSDNeck', 'YOLOXPAFPN', 'DyHead' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/bfp.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/bfp.py deleted file mode 100644 index 9fdfa036..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/bfp.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.cnn.bricks import NonLocal2d -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -@NECKS.register_module() -class BFP(BaseModule): - """BFP (Balanced Feature Pyramids) - - BFP takes multi-level features as inputs and gather them into a single one, - then refine the gathered feature and scatter the refined results to - multi-level features. This module is used in Libra R-CNN (CVPR 2019), see - the paper `Libra R-CNN: Towards Balanced Learning for Object Detection - `_ for details. - - Args: - in_channels (int): Number of input channels (feature maps of all levels - should have the same channels). - num_levels (int): Number of input feature levels. - conv_cfg (dict): The config dict for convolution layers. - norm_cfg (dict): The config dict for normalization layers. - refine_level (int): Index of integration and refine level of BSF in - multi-level features from bottom to top. - refine_type (str): Type of the refine op, currently support - [None, 'conv', 'non_local']. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - num_levels, - refine_level=2, - refine_type=None, - conv_cfg=None, - norm_cfg=None, - init_cfg=dict( - type='Xavier', layer='Conv2d', distribution='uniform')): - super(BFP, self).__init__(init_cfg) - assert refine_type in [None, 'conv', 'non_local'] - - self.in_channels = in_channels - self.num_levels = num_levels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - self.refine_level = refine_level - self.refine_type = refine_type - assert 0 <= self.refine_level < self.num_levels - - if self.refine_type == 'conv': - self.refine = ConvModule( - self.in_channels, - self.in_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - elif self.refine_type == 'non_local': - self.refine = NonLocal2d( - self.in_channels, - reduction=1, - use_scale=False, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == self.num_levels - - # step 1: gather multi-level features by resize and average - feats = [] - gather_size = inputs[self.refine_level].size()[2:] - for i in range(self.num_levels): - if i < self.refine_level: - gathered = F.adaptive_max_pool2d( - inputs[i], output_size=gather_size) - else: - gathered = F.interpolate( - inputs[i], size=gather_size, mode='nearest') - feats.append(gathered) - - bsf = sum(feats) / len(feats) - - # step 2: refine gathered features - if self.refine_type is not None: - bsf = self.refine(bsf) - - # step 3: scatter refined features to multi-levels by a residual path - outs = [] - for i in range(self.num_levels): - out_size = inputs[i].size()[2:] - if i < self.refine_level: - residual = F.interpolate(bsf, size=out_size, mode='nearest') - else: - residual = F.adaptive_max_pool2d(bsf, output_size=out_size) - outs.append(residual + inputs[i]) - - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/channel_mapper.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/channel_mapper.py deleted file mode 100644 index 774bdb1d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/channel_mapper.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -@NECKS.register_module() -class ChannelMapper(BaseModule): - r"""Channel Mapper to reduce/increase channels of backbone features. - - This is used to reduce/increase channels of backbone features. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - kernel_size (int, optional): kernel_size for reducing channels (used - at each scale). Default: 3. - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - act_cfg (dict, optional): Config dict for activation layer in - ConvModule. Default: dict(type='ReLU'). - num_outs (int, optional): Number of output feature maps. There - would be extra_convs when num_outs larger than the length - of in_channels. - init_cfg (dict or list[dict], optional): Initialization config dict. - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... 
for c, s in zip(in_channels, scales)] - >>> self = ChannelMapper(in_channels, 11, 3).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - num_outs=None, - init_cfg=dict( - type='Xavier', layer='Conv2d', distribution='uniform')): - super(ChannelMapper, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.extra_convs = None - if num_outs is None: - num_outs = len(in_channels) - self.convs = nn.ModuleList() - for in_channel in in_channels: - self.convs.append( - ConvModule( - in_channel, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - if num_outs > len(in_channels): - self.extra_convs = nn.ModuleList() - for i in range(len(in_channels), num_outs): - if i == len(in_channels): - in_channel = in_channels[-1] - else: - in_channel = out_channels - self.extra_convs.append( - ConvModule( - in_channel, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.convs) - outs = [self.convs[i](inputs[i]) for i in range(len(inputs))] - if self.extra_convs: - for i in range(len(self.extra_convs)): - if i == 0: - outs.append(self.extra_convs[0](inputs[-1])) - else: - outs.append(self.extra_convs[i](outs[-1])) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/ct_resnet_neck.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/ct_resnet_neck.py deleted file mode 100644 index 40eb2685..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/ct_resnet_neck.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16 - -from mmdet.models.builder import NECKS - - -@NECKS.register_module() -class CTResNetNeck(BaseModule): - """The neck used in `CenterNet `_ for - object classification and box regression. - - Args: - in_channel (int): Number of input channels. - num_deconv_filters (tuple[int]): Number of filters per stage. - num_deconv_kernels (tuple[int]): Number of kernels per stage. - use_dcn (bool): If True, use DCNv2. Default: True. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channel, - num_deconv_filters, - num_deconv_kernels, - use_dcn=True, - init_cfg=None): - super(CTResNetNeck, self).__init__(init_cfg) - assert len(num_deconv_filters) == len(num_deconv_kernels) - self.fp16_enabled = False - self.use_dcn = use_dcn - self.in_channel = in_channel - self.deconv_layers = self._make_deconv_layer(num_deconv_filters, - num_deconv_kernels) - - def _make_deconv_layer(self, num_deconv_filters, num_deconv_kernels): - """use deconv layers to upsample backbone's output.""" - layers = [] - for i in range(len(num_deconv_filters)): - feat_channel = num_deconv_filters[i] - conv_module = ConvModule( - self.in_channel, - feat_channel, - 3, - padding=1, - conv_cfg=dict(type='DCNv2') if self.use_dcn else None, - norm_cfg=dict(type='BN')) - layers.append(conv_module) - upsample_module = ConvModule( - feat_channel, - feat_channel, - num_deconv_kernels[i], - stride=2, - padding=1, - conv_cfg=dict(type='deconv'), - norm_cfg=dict(type='BN')) - layers.append(upsample_module) - self.in_channel = feat_channel - - return nn.Sequential(*layers) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.ConvTranspose2d): - # In order to be consistent with the source code, - # reset the ConvTranspose2d initialization parameters - m.reset_parameters() - # Simulated bilinear upsampling kernel - w = m.weight.data - f = math.ceil(w.size(2) / 2) - c = (2 * f - 1 - f % 2) / (2. * f) - for i in range(w.size(2)): - for j in range(w.size(3)): - w[0, 0, i, j] = \ - (1 - math.fabs(i / f - c)) * ( - 1 - math.fabs(j / f - c)) - for c in range(1, w.size(0)): - w[c, 0, :, :] = w[0, 0, :, :] - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - # self.use_dcn is False - elif not self.use_dcn and isinstance(m, nn.Conv2d): - # In order to be consistent with the source code, - # reset the Conv2d initialization parameters - m.reset_parameters() - - @auto_fp16() - def forward(self, inputs): - assert isinstance(inputs, (list, tuple)) - outs = self.deconv_layers(inputs[-1]) - return outs, diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/dilated_encoder.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/dilated_encoder.py deleted file mode 100644 index 6679835b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/dilated_encoder.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import (ConvModule, caffe2_xavier_init, constant_init, is_norm, - normal_init) -from torch.nn import BatchNorm2d - -from ..builder import NECKS - - -class Bottleneck(nn.Module): - """Bottleneck block for DilatedEncoder used in `YOLOF. - - `. - - The Bottleneck contains three ConvLayers and one residual connection. - - Args: - in_channels (int): The number of input channels. - mid_channels (int): The number of middle output channels. - dilation (int): Dilation rate. - norm_cfg (dict): Dictionary to construct and config norm layer. 
- """ - - def __init__(self, - in_channels, - mid_channels, - dilation, - norm_cfg=dict(type='BN', requires_grad=True)): - super(Bottleneck, self).__init__() - self.conv1 = ConvModule( - in_channels, mid_channels, 1, norm_cfg=norm_cfg) - self.conv2 = ConvModule( - mid_channels, - mid_channels, - 3, - padding=dilation, - dilation=dilation, - norm_cfg=norm_cfg) - self.conv3 = ConvModule( - mid_channels, in_channels, 1, norm_cfg=norm_cfg) - - def forward(self, x): - identity = x - out = self.conv1(x) - out = self.conv2(out) - out = self.conv3(out) - out = out + identity - return out - - -@NECKS.register_module() -class DilatedEncoder(nn.Module): - """Dilated Encoder for YOLOF `. - - This module contains two types of components: - - the original FPN lateral convolution layer and fpn convolution layer, - which are 1x1 conv + 3x3 conv - - the dilated residual block - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - block_mid_channels (int): The number of middle block output channels - num_residual_blocks (int): The number of residual blocks. - """ - - def __init__(self, in_channels, out_channels, block_mid_channels, - num_residual_blocks): - super(DilatedEncoder, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.block_mid_channels = block_mid_channels - self.num_residual_blocks = num_residual_blocks - self.block_dilations = [2, 4, 6, 8] - self._init_layers() - - def _init_layers(self): - self.lateral_conv = nn.Conv2d( - self.in_channels, self.out_channels, kernel_size=1) - self.lateral_norm = BatchNorm2d(self.out_channels) - self.fpn_conv = nn.Conv2d( - self.out_channels, self.out_channels, kernel_size=3, padding=1) - self.fpn_norm = BatchNorm2d(self.out_channels) - encoder_blocks = [] - for i in range(self.num_residual_blocks): - dilation = self.block_dilations[i] - encoder_blocks.append( - Bottleneck( - self.out_channels, - self.block_mid_channels, - dilation=dilation)) - self.dilated_encoder_blocks = nn.Sequential(*encoder_blocks) - - def init_weights(self): - caffe2_xavier_init(self.lateral_conv) - caffe2_xavier_init(self.fpn_conv) - for m in [self.lateral_norm, self.fpn_norm]: - constant_init(m, 1) - for m in self.dilated_encoder_blocks.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, mean=0, std=0.01) - if is_norm(m): - constant_init(m, 1) - - def forward(self, feature): - out = self.lateral_norm(self.lateral_conv(feature[-1])) - out = self.fpn_norm(self.fpn_conv(out)) - return self.dilated_encoder_blocks(out), diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/dyhead.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/dyhead.py deleted file mode 100644 index 5d752c34..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/dyhead.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (build_activation_layer, build_norm_layer, constant_init, - normal_init) -from mmcv.ops.modulated_deform_conv import ModulatedDeformConv2d -from mmcv.runner import BaseModule - -from ..builder import NECKS -from ..utils import DyReLU - -# Reference: -# https://github.com/microsoft/DynamicHead -# https://github.com/jshilong/SEPC - - -class DyDCNv2(nn.Module): - """ModulatedDeformConv2d with normalization layer used in DyHead. - - This module cannot be configured with `conv_cfg=dict(type='DCNv2')` - because DyHead calculates offset and mask from middle-level feature. 
- - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - stride (int | tuple[int], optional): Stride of the convolution. - Default: 1. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: dict(type='GN', num_groups=16, requires_grad=True). - """ - - def __init__(self, - in_channels, - out_channels, - stride=1, - norm_cfg=dict(type='GN', num_groups=16, requires_grad=True)): - super().__init__() - self.with_norm = norm_cfg is not None - bias = not self.with_norm - self.conv = ModulatedDeformConv2d( - in_channels, out_channels, 3, stride=stride, padding=1, bias=bias) - if self.with_norm: - self.norm = build_norm_layer(norm_cfg, out_channels)[1] - - def forward(self, x, offset, mask): - """Forward function.""" - x = self.conv(x.contiguous(), offset, mask) - if self.with_norm: - x = self.norm(x) - return x - - -class DyHeadBlock(nn.Module): - """DyHead Block with three types of attention. - - HSigmoid arguments in default act_cfg follow official code, not paper. - https://github.com/microsoft/DynamicHead/blob/master/dyhead/dyrelu.py - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - zero_init_offset (bool, optional): Whether to use zero init for - `spatial_conv_offset`. Default: True. - act_cfg (dict, optional): Config dict for the last activation layer of - scale-aware attention. Default: dict(type='HSigmoid', bias=3.0, - divisor=6.0). - """ - - def __init__(self, - in_channels, - out_channels, - zero_init_offset=True, - act_cfg=dict(type='HSigmoid', bias=3.0, divisor=6.0)): - super().__init__() - self.zero_init_offset = zero_init_offset - # (offset_x, offset_y, mask) * kernel_size_y * kernel_size_x - self.offset_and_mask_dim = 3 * 3 * 3 - self.offset_dim = 2 * 3 * 3 - - self.spatial_conv_high = DyDCNv2(in_channels, out_channels) - self.spatial_conv_mid = DyDCNv2(in_channels, out_channels) - self.spatial_conv_low = DyDCNv2(in_channels, out_channels, stride=2) - self.spatial_conv_offset = nn.Conv2d( - in_channels, self.offset_and_mask_dim, 3, padding=1) - self.scale_attn_module = nn.Sequential( - nn.AdaptiveAvgPool2d(1), nn.Conv2d(out_channels, 1, 1), - nn.ReLU(inplace=True), build_activation_layer(act_cfg)) - self.task_attn_module = DyReLU(out_channels) - self._init_weights() - - def _init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, 0, 0.01) - if self.zero_init_offset: - constant_init(self.spatial_conv_offset, 0) - - def forward(self, x): - """Forward function.""" - outs = [] - for level in range(len(x)): - # calculate offset and mask of DCNv2 from middle-level feature - offset_and_mask = self.spatial_conv_offset(x[level]) - offset = offset_and_mask[:, :self.offset_dim, :, :] - mask = offset_and_mask[:, self.offset_dim:, :, :].sigmoid() - - mid_feat = self.spatial_conv_mid(x[level], offset, mask) - sum_feat = mid_feat * self.scale_attn_module(mid_feat) - summed_levels = 1 - if level > 0: - low_feat = self.spatial_conv_low(x[level - 1], offset, mask) - sum_feat += low_feat * self.scale_attn_module(low_feat) - summed_levels += 1 - if level < len(x) - 1: - # this upsample order is weird, but faster than natural order - # https://github.com/microsoft/DynamicHead/issues/25 - high_feat = F.interpolate( - self.spatial_conv_high(x[level + 1], offset, mask), - size=x[level].shape[-2:], - mode='bilinear', - align_corners=True) - sum_feat += high_feat * self.scale_attn_module(high_feat) - summed_levels += 1 - 
outs.append(self.task_attn_module(sum_feat / summed_levels)) - - return outs - - -@NECKS.register_module() -class DyHead(BaseModule): - """DyHead neck consisting of multiple DyHead Blocks. - - See `Dynamic Head: Unifying Object Detection Heads with Attentions - `_ for details. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_blocks (int, optional): Number of DyHead Blocks. Default: 6. - zero_init_offset (bool, optional): Whether to use zero init for - `spatial_conv_offset`. Default: True. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - num_blocks=6, - zero_init_offset=True, - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_blocks = num_blocks - self.zero_init_offset = zero_init_offset - - dyhead_blocks = [] - for i in range(num_blocks): - in_channels = self.in_channels if i == 0 else self.out_channels - dyhead_blocks.append( - DyHeadBlock( - in_channels, - self.out_channels, - zero_init_offset=zero_init_offset)) - self.dyhead_blocks = nn.Sequential(*dyhead_blocks) - - def forward(self, inputs): - """Forward function.""" - assert isinstance(inputs, (tuple, list)) - outs = self.dyhead_blocks(inputs) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/fpg.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/fpg.py deleted file mode 100644 index a6a2a12e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/fpg.py +++ /dev/null @@ -1,406 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -class Transition(BaseModule): - """Base class for transition. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - """ - - def __init__(self, in_channels, out_channels, init_cfg=None): - super().__init__(init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - - def forward(x): - pass - - -class UpInterpolationConv(Transition): - """A transition used for up-sampling. - - Up-sample the input by interpolation then refines the feature by - a convolution layer. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Up-sampling factor. Default: 2. - mode (int): Interpolation mode. Default: nearest. - align_corners (bool): Whether align corners when interpolation. - Default: None. - kernel_size (int): Kernel size for the conv. Default: 3. 
- """ - - def __init__(self, - in_channels, - out_channels, - scale_factor=2, - mode='nearest', - align_corners=None, - kernel_size=3, - init_cfg=None, - **kwargs): - super().__init__(in_channels, out_channels, init_cfg) - self.mode = mode - self.scale_factor = scale_factor - self.align_corners = align_corners - self.conv = ConvModule( - in_channels, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - **kwargs) - - def forward(self, x): - x = F.interpolate( - x, - scale_factor=self.scale_factor, - mode=self.mode, - align_corners=self.align_corners) - x = self.conv(x) - return x - - -class LastConv(Transition): - """A transition used for refining the output of the last stage. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_inputs (int): Number of inputs of the FPN features. - kernel_size (int): Kernel size for the conv. Default: 3. - """ - - def __init__(self, - in_channels, - out_channels, - num_inputs, - kernel_size=3, - init_cfg=None, - **kwargs): - super().__init__(in_channels, out_channels, init_cfg) - self.num_inputs = num_inputs - self.conv_out = ConvModule( - in_channels, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - **kwargs) - - def forward(self, inputs): - assert len(inputs) == self.num_inputs - return self.conv_out(inputs[-1]) - - -@NECKS.register_module() -class FPG(BaseModule): - """FPG. - - Implementation of `Feature Pyramid Grids (FPG) - `_. - This implementation only gives the basic structure stated in the paper. - But users can implement different type of transitions to fully explore the - the potential power of the structure of FPG. - - Args: - in_channels (int): Number of input channels (feature maps of all levels - should have the same channels). - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - stack_times (int): The number of times the pyramid architecture will - be stacked. - paths (list[str]): Specify the path order of each stack level. - Each element in the list should be either 'bu' (bottom-up) or - 'td' (top-down). - inter_channels (int): Number of inter channels. - same_up_trans (dict): Transition that goes down at the same stage. - same_down_trans (dict): Transition that goes up at the same stage. - across_lateral_trans (dict): Across-pathway same-stage - across_down_trans (dict): Across-pathway bottom-up connection. - across_up_trans (dict): Across-pathway top-down connection. - across_skip_trans (dict): Across-pathway skip connection. - output_trans (dict): Transition that trans the output of the - last stage. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - norm_cfg (dict): Config dict for normalization layer. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - transition_types = { - 'conv': ConvModule, - 'interpolation_conv': UpInterpolationConv, - 'last_conv': LastConv, - } - - def __init__(self, - in_channels, - out_channels, - num_outs, - stack_times, - paths, - inter_channels=None, - same_down_trans=None, - same_up_trans=dict( - type='conv', kernel_size=3, stride=2, padding=1), - across_lateral_trans=dict(type='conv', kernel_size=1), - across_down_trans=dict(type='conv', kernel_size=3), - across_up_trans=None, - across_skip_trans=dict(type='identity'), - output_trans=dict(type='last_conv', kernel_size=3), - start_level=0, - end_level=-1, - add_extra_convs=False, - norm_cfg=None, - skip_inds=None, - init_cfg=[ - dict(type='Caffe2Xavier', layer='Conv2d'), - dict( - type='Constant', - layer=[ - '_BatchNorm', '_InstanceNorm', 'GroupNorm', - 'LayerNorm' - ], - val=1.0) - ]): - super(FPG, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - if inter_channels is None: - self.inter_channels = [out_channels for _ in range(num_outs)] - elif isinstance(inter_channels, int): - self.inter_channels = [inter_channels for _ in range(num_outs)] - else: - assert isinstance(inter_channels, list) - assert len(inter_channels) == num_outs - self.inter_channels = inter_channels - self.stack_times = stack_times - self.paths = paths - assert isinstance(paths, list) and len(paths) == stack_times - for d in paths: - assert d in ('bu', 'td') - - self.same_down_trans = same_down_trans - self.same_up_trans = same_up_trans - self.across_lateral_trans = across_lateral_trans - self.across_down_trans = across_down_trans - self.across_up_trans = across_up_trans - self.output_trans = output_trans - self.across_skip_trans = across_skip_trans - - self.with_bias = norm_cfg is None - # skip inds must be specified if across skip trans is not None - if self.across_skip_trans is not None: - skip_inds is not None - self.skip_inds = skip_inds - assert len(self.skip_inds[0]) <= self.stack_times - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - # build lateral 1x1 convs to reduce channels - self.lateral_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - l_conv = nn.Conv2d(self.in_channels[i], - self.inter_channels[i - self.start_level], 1) - self.lateral_convs.append(l_conv) - - extra_levels = num_outs - self.backbone_end_level + self.start_level - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - if self.add_extra_convs: - fpn_idx = self.backbone_end_level - self.start_level + i - extra_conv = nn.Conv2d( - self.inter_channels[fpn_idx - 1], - self.inter_channels[fpn_idx], - 3, - stride=2, - padding=1) - self.extra_downsamples.append(extra_conv) - else: - self.extra_downsamples.append(nn.MaxPool2d(1, stride=2)) - - self.fpn_transitions = nn.ModuleList() # stack times - for s in range(self.stack_times): - stage_trans = nn.ModuleList() # num of feature levels - for i in range(self.num_outs): - # same, across_lateral, across_down, across_up - trans = nn.ModuleDict() - if s in 
self.skip_inds[i]: - stage_trans.append(trans) - continue - # build same-stage down trans (used in bottom-up paths) - if i == 0 or self.same_up_trans is None: - same_up_trans = None - else: - same_up_trans = self.build_trans( - self.same_up_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['same_up'] = same_up_trans - # build same-stage up trans (used in top-down paths) - if i == self.num_outs - 1 or self.same_down_trans is None: - same_down_trans = None - else: - same_down_trans = self.build_trans( - self.same_down_trans, self.inter_channels[i + 1], - self.inter_channels[i]) - trans['same_down'] = same_down_trans - # build across lateral trans - across_lateral_trans = self.build_trans( - self.across_lateral_trans, self.inter_channels[i], - self.inter_channels[i]) - trans['across_lateral'] = across_lateral_trans - # build across down trans - if i == self.num_outs - 1 or self.across_down_trans is None: - across_down_trans = None - else: - across_down_trans = self.build_trans( - self.across_down_trans, self.inter_channels[i + 1], - self.inter_channels[i]) - trans['across_down'] = across_down_trans - # build across up trans - if i == 0 or self.across_up_trans is None: - across_up_trans = None - else: - across_up_trans = self.build_trans( - self.across_up_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['across_up'] = across_up_trans - if self.across_skip_trans is None: - across_skip_trans = None - else: - across_skip_trans = self.build_trans( - self.across_skip_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['across_skip'] = across_skip_trans - # build across_skip trans - stage_trans.append(trans) - self.fpn_transitions.append(stage_trans) - - self.output_transition = nn.ModuleList() # output levels - for i in range(self.num_outs): - trans = self.build_trans( - self.output_trans, - self.inter_channels[i], - self.out_channels, - num_inputs=self.stack_times + 1) - self.output_transition.append(trans) - - self.relu = nn.ReLU(inplace=True) - - def build_trans(self, cfg, in_channels, out_channels, **extra_args): - cfg_ = cfg.copy() - trans_type = cfg_.pop('type') - trans_cls = self.transition_types[trans_type] - return trans_cls(in_channels, out_channels, **cfg_, **extra_args) - - def fuse(self, fuse_dict): - out = None - for item in fuse_dict.values(): - if item is not None: - if out is None: - out = item - else: - out = out + item - return out - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # build all levels from original feature maps - feats = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - for downsample in self.extra_downsamples: - feats.append(downsample(feats[-1])) - - outs = [feats] - - for i in range(self.stack_times): - current_outs = outs[-1] - next_outs = [] - direction = self.paths[i] - for j in range(self.num_outs): - if i in self.skip_inds[j]: - next_outs.append(outs[-1][j]) - continue - # feature level - if direction == 'td': - lvl = self.num_outs - j - 1 - else: - lvl = j - # get transitions - if direction == 'td': - same_trans = self.fpn_transitions[i][lvl]['same_down'] - else: - same_trans = self.fpn_transitions[i][lvl]['same_up'] - across_lateral_trans = self.fpn_transitions[i][lvl][ - 'across_lateral'] - across_down_trans = self.fpn_transitions[i][lvl]['across_down'] - across_up_trans = self.fpn_transitions[i][lvl]['across_up'] - across_skip_trans = self.fpn_transitions[i][lvl]['across_skip'] - # init output - to_fuse = dict( 
- same=None, lateral=None, across_up=None, across_down=None) - # same downsample/upsample - if same_trans is not None: - to_fuse['same'] = same_trans(next_outs[-1]) - # across lateral - if across_lateral_trans is not None: - to_fuse['lateral'] = across_lateral_trans( - current_outs[lvl]) - # across downsample - if lvl > 0 and across_up_trans is not None: - to_fuse['across_up'] = across_up_trans(current_outs[lvl - - 1]) - # across upsample - if (lvl < self.num_outs - 1 and across_down_trans is not None): - to_fuse['across_down'] = across_down_trans( - current_outs[lvl + 1]) - if across_skip_trans is not None: - to_fuse['across_skip'] = across_skip_trans(outs[0][lvl]) - x = self.fuse(to_fuse) - next_outs.append(x) - - if direction == 'td': - outs.append(next_outs[::-1]) - else: - outs.append(next_outs) - - # output trans - final_outs = [] - for i in range(self.num_outs): - lvl_out_list = [] - for s in range(len(outs)): - lvl_out_list.append(outs[s][i]) - lvl_out = self.output_transition[i](lvl_out_list) - final_outs.append(lvl_out) - - return final_outs diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/fpn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/fpn.py deleted file mode 100644 index 4bdb5b22..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/fpn.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16 - -from ..builder import NECKS - - -@NECKS.register_module() -class FPN(BaseModule): - r"""Feature Pyramid Network. - - This is an implementation of paper `Feature Pyramid Networks for Object - Detection `_. - - Args: - in_channels (list[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool | str): If bool, it decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, it is equivalent to `add_extra_convs='on_input'`. - If str, it specifies the source feature map of the extra convs. - Only the following options are allowed - - - 'on_input': Last feat map of neck inputs (i.e. backbone feature). - - 'on_lateral': Last feature map after lateral convs. - - 'on_output': The last output feature map after fpn convs. - relu_before_extra_convs (bool): Whether to apply relu before the extra - conv. Default: False. - no_norm_on_lateral (bool): Whether to apply norm on lateral. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer in ConvModule. - Default: None. - upsample_cfg (dict): Config dict for interpolate layer. - Default: dict(mode='nearest'). - init_cfg (dict or list[dict], optional): Initialization config dict. - - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... for c, s in zip(in_channels, scales)] - >>> self = FPN(in_channels, 11, len(in_channels)).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... 
print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - add_extra_convs=False, - relu_before_extra_convs=False, - no_norm_on_lateral=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - upsample_cfg=dict(mode='nearest'), - init_cfg=dict( - type='Xavier', layer='Conv2d', distribution='uniform')): - super(FPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.relu_before_extra_convs = relu_before_extra_convs - self.no_norm_on_lateral = no_norm_on_lateral - self.fp16_enabled = False - self.upsample_cfg = upsample_cfg.copy() - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - assert isinstance(add_extra_convs, (str, bool)) - if isinstance(add_extra_convs, str): - # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output' - assert add_extra_convs in ('on_input', 'on_lateral', 'on_output') - elif add_extra_convs: # True - self.add_extra_convs = 'on_input' - - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg if not self.no_norm_on_lateral else None, - act_cfg=act_cfg, - inplace=False) - fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - # add extra conv layers (e.g., RetinaNet) - extra_levels = num_outs - self.backbone_end_level + self.start_level - if self.add_extra_convs and extra_levels >= 1: - for i in range(extra_levels): - if i == 0 and self.add_extra_convs == 'on_input': - in_channels = self.in_channels[self.backbone_end_level - 1] - else: - in_channels = out_channels - extra_fpn_conv = ConvModule( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - self.fpn_convs.append(extra_fpn_conv) - - @auto_fp16() - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - # In some cases, fixing `scale factor` (e.g. 2) is preferred, but - # it cannot co-exist with `size` in `F.interpolate`. 
- if 'scale_factor' in self.upsample_cfg: - # fix runtime error of "+=" inplace operation in PyTorch 1.10 - laterals[i - 1] = laterals[i - 1] + F.interpolate( - laterals[i], **self.upsample_cfg) - else: - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] = laterals[i - 1] + F.interpolate( - laterals[i], size=prev_shape, **self.upsample_cfg) - - # build outputs - # part 1: from original levels - outs = [ - self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels) - ] - # part 2: add extra levels - if self.num_outs > len(outs): - # use max pool to get more levels on top of outputs - # (e.g., Faster R-CNN, Mask R-CNN) - if not self.add_extra_convs: - for i in range(self.num_outs - used_backbone_levels): - outs.append(F.max_pool2d(outs[-1], 1, stride=2)) - # add conv layers on top of original feature maps (RetinaNet) - else: - if self.add_extra_convs == 'on_input': - extra_source = inputs[self.backbone_end_level - 1] - elif self.add_extra_convs == 'on_lateral': - extra_source = laterals[-1] - elif self.add_extra_convs == 'on_output': - extra_source = outs[-1] - else: - raise NotImplementedError - outs.append(self.fpn_convs[used_backbone_levels](extra_source)) - for i in range(used_backbone_levels + 1, self.num_outs): - if self.relu_before_extra_convs: - outs.append(self.fpn_convs[i](F.relu(outs[-1]))) - else: - outs.append(self.fpn_convs[i](outs[-1])) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/fpn_carafe.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/fpn_carafe.py deleted file mode 100644 index fdd91f34..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/fpn_carafe.py +++ /dev/null @@ -1,275 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, build_upsample_layer, xavier_init -from mmcv.ops.carafe import CARAFEPack -from mmcv.runner import BaseModule, ModuleList - -from ..builder import NECKS - - -@NECKS.register_module() -class FPN_CARAFE(BaseModule): - """FPN_CARAFE is a more flexible implementation of FPN. It allows more - choice for upsample methods during the top-down pathway. - - It can reproduce the performance of ICCV 2019 paper - CARAFE: Content-Aware ReAssembly of FEatures - Please refer to https://arxiv.org/abs/1905.02188 for more details. - - Args: - in_channels (list[int]): Number of channels for each input feature map. - out_channels (int): Output channels of feature pyramids. - num_outs (int): Number of output stages. - start_level (int): Start level of feature pyramids. - (Default: 0) - end_level (int): End level of feature pyramids. - (Default: -1 indicates the last level). - norm_cfg (dict): Dictionary to construct and config norm layer. - activate (str): Type of activation function in ConvModule - (Default: None indicates w/o activation). - order (dict): Order of components in ConvModule. - upsample (str): Type of upsample layer. - upsample_cfg (dict): Dictionary to construct and config upsample layer. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - norm_cfg=None, - act_cfg=None, - order=('conv', 'norm', 'act'), - upsample_cfg=dict( - type='carafe', - up_kernel=5, - up_group=1, - encoder_kernel=3, - encoder_dilation=1), - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(FPN_CARAFE, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.with_bias = norm_cfg is None - self.upsample_cfg = upsample_cfg.copy() - self.upsample = self.upsample_cfg.get('type') - self.relu = nn.ReLU(inplace=False) - - self.order = order - assert order in [('conv', 'norm', 'act'), ('act', 'conv', 'norm')] - - assert self.upsample in [ - 'nearest', 'bilinear', 'deconv', 'pixel_shuffle', 'carafe', None - ] - if self.upsample in ['deconv', 'pixel_shuffle']: - assert hasattr( - self.upsample_cfg, - 'upsample_kernel') and self.upsample_cfg.upsample_kernel > 0 - self.upsample_kernel = self.upsample_cfg.pop('upsample_kernel') - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - - self.lateral_convs = ModuleList() - self.fpn_convs = ModuleList() - self.upsample_modules = ModuleList() - - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - norm_cfg=norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - if i != self.backbone_end_level - 1: - upsample_cfg_ = self.upsample_cfg.copy() - if self.upsample == 'deconv': - upsample_cfg_.update( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=self.upsample_kernel, - stride=2, - padding=(self.upsample_kernel - 1) // 2, - output_padding=(self.upsample_kernel - 1) // 2) - elif self.upsample == 'pixel_shuffle': - upsample_cfg_.update( - in_channels=out_channels, - out_channels=out_channels, - scale_factor=2, - upsample_kernel=self.upsample_kernel) - elif self.upsample == 'carafe': - upsample_cfg_.update(channels=out_channels, scale_factor=2) - else: - # suppress warnings - align_corners = (None - if self.upsample == 'nearest' else False) - upsample_cfg_.update( - scale_factor=2, - mode=self.upsample, - align_corners=align_corners) - upsample_module = build_upsample_layer(upsample_cfg_) - self.upsample_modules.append(upsample_module) - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - # add extra conv layers (e.g., RetinaNet) - extra_out_levels = ( - num_outs - self.backbone_end_level + self.start_level) - if extra_out_levels >= 1: - for i in range(extra_out_levels): - in_channels = ( - self.in_channels[self.backbone_end_level - - 1] if i == 0 else out_channels) - extra_l_conv = ConvModule( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - 
norm_cfg=norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - if self.upsample == 'deconv': - upsampler_cfg_ = dict( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=self.upsample_kernel, - stride=2, - padding=(self.upsample_kernel - 1) // 2, - output_padding=(self.upsample_kernel - 1) // 2) - elif self.upsample == 'pixel_shuffle': - upsampler_cfg_ = dict( - in_channels=out_channels, - out_channels=out_channels, - scale_factor=2, - upsample_kernel=self.upsample_kernel) - elif self.upsample == 'carafe': - upsampler_cfg_ = dict( - channels=out_channels, - scale_factor=2, - **self.upsample_cfg) - else: - # suppress warnings - align_corners = (None - if self.upsample == 'nearest' else False) - upsampler_cfg_ = dict( - scale_factor=2, - mode=self.upsample, - align_corners=align_corners) - upsampler_cfg_['type'] = self.upsample - upsample_module = build_upsample_layer(upsampler_cfg_) - extra_fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - self.upsample_modules.append(upsample_module) - self.fpn_convs.append(extra_fpn_conv) - self.lateral_convs.append(extra_l_conv) - - # default init_weights for conv(msra) and norm in ConvModule - def init_weights(self): - """Initialize the weights of module.""" - super(FPN_CARAFE, self).init_weights() - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - xavier_init(m, distribution='uniform') - for m in self.modules(): - if isinstance(m, CARAFEPack): - m.init_weights() - - def slice_as(self, src, dst): - """Slice ``src`` as ``dst`` - - Note: - ``src`` should have the same or larger size than ``dst``. - - Args: - src (torch.Tensor): Tensors to be sliced. - dst (torch.Tensor): ``src`` will be sliced to have the same - size as ``dst``. - - Returns: - torch.Tensor: Sliced tensor. - """ - assert (src.size(2) >= dst.size(2)) and (src.size(3) >= dst.size(3)) - if src.size(2) == dst.size(2) and src.size(3) == dst.size(3): - return src - else: - return src[:, :, :dst.size(2), :dst.size(3)] - - def tensor_add(self, a, b): - """Add tensors ``a`` and ``b`` that might have different sizes.""" - if a.size() == b.size(): - c = a + b - else: - c = a + self.slice_as(b, a) - return c - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [] - for i, lateral_conv in enumerate(self.lateral_convs): - if i <= self.backbone_end_level - self.start_level: - input = inputs[min(i + self.start_level, len(inputs) - 1)] - else: - input = laterals[-1] - lateral = lateral_conv(input) - laterals.append(lateral) - - # build top-down path - for i in range(len(laterals) - 1, 0, -1): - if self.upsample is not None: - upsample_feat = self.upsample_modules[i - 1](laterals[i]) - else: - upsample_feat = laterals[i] - laterals[i - 1] = self.tensor_add(laterals[i - 1], upsample_feat) - - # build outputs - num_conv_outs = len(self.fpn_convs) - outs = [] - for i in range(num_conv_outs): - out = self.fpn_convs[i](laterals[i]) - outs.append(out) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/hrfpn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/hrfpn.py deleted file mode 100644 index ca15be6b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/hrfpn.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch.utils.checkpoint import checkpoint - -from ..builder import NECKS - - -@NECKS.register_module() -class HRFPN(BaseModule): - """HRFPN (High Resolution Feature Pyramids) - - paper: `High-Resolution Representations for Labeling Pixels and Regions - `_. - - Args: - in_channels (list): number of channels for each branch. - out_channels (int): output channels of feature pyramids. - num_outs (int): number of output stages. - pooling_type (str): pooling for generating feature pyramids - from {MAX, AVG}. - conv_cfg (dict): dictionary to construct and config conv layer. - norm_cfg (dict): dictionary to construct and config norm layer. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - stride (int): stride of 3x3 convolutional layers - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - num_outs=5, - pooling_type='AVG', - conv_cfg=None, - norm_cfg=None, - with_cp=False, - stride=1, - init_cfg=dict(type='Caffe2Xavier', layer='Conv2d')): - super(HRFPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - self.reduction_conv = ConvModule( - sum(in_channels), - out_channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - act_cfg=None) - - self.fpn_convs = nn.ModuleList() - for i in range(self.num_outs): - self.fpn_convs.append( - ConvModule( - out_channels, - out_channels, - kernel_size=3, - padding=1, - stride=stride, - conv_cfg=self.conv_cfg, - act_cfg=None)) - - if pooling_type == 'MAX': - self.pooling = F.max_pool2d - else: - self.pooling = F.avg_pool2d - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == self.num_ins - outs = [inputs[0]] - for i in range(1, self.num_ins): - outs.append( - F.interpolate(inputs[i], scale_factor=2**i, mode='bilinear')) - out = torch.cat(outs, dim=1) - if out.requires_grad and self.with_cp: - out = checkpoint(self.reduction_conv, out) - else: - out = self.reduction_conv(out) - outs = [out] - for i in range(1, self.num_outs): - outs.append(self.pooling(out, kernel_size=2**i, stride=2**i)) - outputs = [] - - for i in range(self.num_outs): - if outs[i].requires_grad and self.with_cp: - tmp_out = checkpoint(self.fpn_convs[i], outs[i]) - else: - tmp_out = self.fpn_convs[i](outs[i]) - outputs.append(tmp_out) - return tuple(outputs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/nas_fpn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/nas_fpn.py deleted file mode 100644 index 710592ec..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/nas_fpn.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops.merge_cells import GlobalPoolingCell, SumCell -from mmcv.runner import BaseModule, ModuleList - -from ..builder import NECKS - - -@NECKS.register_module() -class NASFPN(BaseModule): - """NAS-FPN. - - Implementation of `NAS-FPN: Learning Scalable Feature Pyramid Architecture - for Object Detection `_ - - Args: - in_channels (List[int]): Number of input channels per scale. 
- out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - stack_times (int): The number of times the pyramid architecture will - be stacked. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - stack_times, - start_level=0, - end_level=-1, - add_extra_convs=False, - norm_cfg=None, - init_cfg=dict(type='Caffe2Xavier', layer='Conv2d')): - super(NASFPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) # num of input feature levels - self.num_outs = num_outs # num of output feature levels - self.stack_times = stack_times - self.norm_cfg = norm_cfg - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - # add lateral connections - self.lateral_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - norm_cfg=norm_cfg, - act_cfg=None) - self.lateral_convs.append(l_conv) - - # add extra downsample layers (stride-2 pooling or conv) - extra_levels = num_outs - self.backbone_end_level + self.start_level - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - extra_conv = ConvModule( - out_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=None) - self.extra_downsamples.append( - nn.Sequential(extra_conv, nn.MaxPool2d(2, 2))) - - # add NAS FPN connections - self.fpn_stages = ModuleList() - for _ in range(self.stack_times): - stage = nn.ModuleDict() - # gp(p6, p4) -> p4_1 - stage['gp_64_4'] = GlobalPoolingCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p4_1, p4) -> p4_2 - stage['sum_44_4'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p4_2, p3) -> p3_out - stage['sum_43_3'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p3_out, p4_2) -> p4_out - stage['sum_34_4'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p5, gp(p4_out, p3_out)) -> p5_out - stage['gp_43_5'] = GlobalPoolingCell(with_out_conv=False) - stage['sum_55_5'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p7, gp(p5_out, p4_2)) -> p7_out - stage['gp_54_7'] = GlobalPoolingCell(with_out_conv=False) - stage['sum_77_7'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # gp(p7_out, p5_out) -> p6_out - stage['gp_75_6'] = GlobalPoolingCell( - 
in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - self.fpn_stages.append(stage) - - def forward(self, inputs): - """Forward function.""" - # build P3-P5 - feats = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - # build P6-P7 on top of P5 - for downsample in self.extra_downsamples: - feats.append(downsample(feats[-1])) - - p3, p4, p5, p6, p7 = feats - - for stage in self.fpn_stages: - # gp(p6, p4) -> p4_1 - p4_1 = stage['gp_64_4'](p6, p4, out_size=p4.shape[-2:]) - # sum(p4_1, p4) -> p4_2 - p4_2 = stage['sum_44_4'](p4_1, p4, out_size=p4.shape[-2:]) - # sum(p4_2, p3) -> p3_out - p3 = stage['sum_43_3'](p4_2, p3, out_size=p3.shape[-2:]) - # sum(p3_out, p4_2) -> p4_out - p4 = stage['sum_34_4'](p3, p4_2, out_size=p4.shape[-2:]) - # sum(p5, gp(p4_out, p3_out)) -> p5_out - p5_tmp = stage['gp_43_5'](p4, p3, out_size=p5.shape[-2:]) - p5 = stage['sum_55_5'](p5, p5_tmp, out_size=p5.shape[-2:]) - # sum(p7, gp(p5_out, p4_2)) -> p7_out - p7_tmp = stage['gp_54_7'](p5, p4_2, out_size=p7.shape[-2:]) - p7 = stage['sum_77_7'](p7, p7_tmp, out_size=p7.shape[-2:]) - # gp(p7_out, p5_out) -> p6_out - p6 = stage['gp_75_6'](p7, p5, out_size=p6.shape[-2:]) - - return p3, p4, p5, p6, p7 diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/nasfcos_fpn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/nasfcos_fpn.py deleted file mode 100644 index c4abfe7b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/nasfcos_fpn.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, caffe2_xavier_init -from mmcv.ops.merge_cells import ConcatCell -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -@NECKS.register_module() -class NASFCOS_FPN(BaseModule): - """FPN structure in NASFPN. - - Implementation of paper `NAS-FCOS: Fast Neural Architecture Search for - Object Detection `_ - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - conv_cfg (dict): dictionary to construct and config conv layer. - norm_cfg (dict): dictionary to construct and config norm layer. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=1, - end_level=-1, - add_extra_convs=False, - conv_cfg=None, - norm_cfg=None, - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(NASFCOS_FPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - self.adapt_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - adapt_conv = ConvModule( - in_channels[i], - out_channels, - 1, - stride=1, - padding=0, - bias=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU', inplace=False)) - self.adapt_convs.append(adapt_conv) - - # C2 is omitted according to the paper - extra_levels = num_outs - self.backbone_end_level + self.start_level - - def build_concat_cell(with_input1_conv, with_input2_conv): - cell_conv_cfg = dict( - kernel_size=1, padding=0, bias=False, groups=out_channels) - return ConcatCell( - in_channels=out_channels, - out_channels=out_channels, - with_out_conv=True, - out_conv_cfg=cell_conv_cfg, - out_norm_cfg=dict(type='BN'), - out_conv_order=('norm', 'act', 'conv'), - with_input1_conv=with_input1_conv, - with_input2_conv=with_input2_conv, - input_conv_cfg=conv_cfg, - input_norm_cfg=norm_cfg, - upsample_mode='nearest') - - # Denote c3=f0, c4=f1, c5=f2 for convince - self.fpn = nn.ModuleDict() - self.fpn['c22_1'] = build_concat_cell(True, True) - self.fpn['c22_2'] = build_concat_cell(True, True) - self.fpn['c32'] = build_concat_cell(True, False) - self.fpn['c02'] = build_concat_cell(True, False) - self.fpn['c42'] = build_concat_cell(True, True) - self.fpn['c36'] = build_concat_cell(True, True) - self.fpn['c61'] = build_concat_cell(True, True) # f9 - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - extra_act_cfg = None if i == 0 \ - else dict(type='ReLU', inplace=False) - self.extra_downsamples.append( - ConvModule( - out_channels, - out_channels, - 3, - stride=2, - padding=1, - act_cfg=extra_act_cfg, - order=('act', 'norm', 'conv'))) - - def forward(self, inputs): - """Forward function.""" - feats = [ - adapt_conv(inputs[i + self.start_level]) - for i, adapt_conv in enumerate(self.adapt_convs) - ] - - for (i, module_name) in enumerate(self.fpn): - idx_1, idx_2 = int(module_name[1]), int(module_name[2]) - res = self.fpn[module_name](feats[idx_1], feats[idx_2]) - feats.append(res) - - ret = [] - for (idx, input_idx) in zip([9, 8, 7], [1, 2, 3]): # add P3, P4, P5 - feats1, feats2 = feats[idx], feats[5] - feats2_resize = F.interpolate( - feats2, - size=feats1.size()[2:], - mode='bilinear', - align_corners=False) - - feats_sum = feats1 + feats2_resize - ret.append( - F.interpolate( - feats_sum, - size=inputs[input_idx].size()[2:], - mode='bilinear', - align_corners=False)) - - for submodule in self.extra_downsamples: - ret.append(submodule(ret[-1])) - - 
return tuple(ret) - - def init_weights(self): - """Initialize the weights of module.""" - super(NASFCOS_FPN, self).init_weights() - for module in self.fpn.values(): - if hasattr(module, 'conv_out'): - caffe2_xavier_init(module.out_conv.conv) - - for modules in [ - self.adapt_convs.modules(), - self.extra_downsamples.modules() - ]: - for module in modules: - if isinstance(module, nn.Conv2d): - caffe2_xavier_init(module) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/pafpn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/pafpn.py deleted file mode 100644 index 8d5e32f0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/pafpn.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import auto_fp16 - -from ..builder import NECKS -from .fpn import FPN - - -@NECKS.register_module() -class PAFPN(FPN): - """Path Aggregation Network for Instance Segmentation. - - This is an implementation of the `PAFPN in Path Aggregation Network - `_. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool | str): If bool, it decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, it is equivalent to `add_extra_convs='on_input'`. - If str, it specifies the source feature map of the extra convs. - Only the following options are allowed - - - 'on_input': Last feat map of neck inputs (i.e. backbone feature). - - 'on_lateral': Last feature map after lateral convs. - - 'on_output': The last output feature map after fpn convs. - relu_before_extra_convs (bool): Whether to apply relu before the extra - conv. Default: False. - no_norm_on_lateral (bool): Whether to apply norm on lateral. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (str): Config dict for activation layer in ConvModule. - Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - add_extra_convs=False, - relu_before_extra_convs=False, - no_norm_on_lateral=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - init_cfg=dict( - type='Xavier', layer='Conv2d', distribution='uniform')): - super(PAFPN, self).__init__( - in_channels, - out_channels, - num_outs, - start_level, - end_level, - add_extra_convs, - relu_before_extra_convs, - no_norm_on_lateral, - conv_cfg, - norm_cfg, - act_cfg, - init_cfg=init_cfg) - # add extra bottom up pathway - self.downsample_convs = nn.ModuleList() - self.pafpn_convs = nn.ModuleList() - for i in range(self.start_level + 1, self.backbone_end_level): - d_conv = ConvModule( - out_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - pafpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - self.downsample_convs.append(d_conv) - self.pafpn_convs.append(pafpn_conv) - - @auto_fp16() - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] += F.interpolate( - laterals[i], size=prev_shape, mode='nearest') - - # build outputs - # part 1: from original levels - inter_outs = [ - self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels) - ] - - # part 2: add bottom-up path - for i in range(0, used_backbone_levels - 1): - inter_outs[i + 1] += self.downsample_convs[i](inter_outs[i]) - - outs = [] - outs.append(inter_outs[0]) - outs.extend([ - self.pafpn_convs[i - 1](inter_outs[i]) - for i in range(1, used_backbone_levels) - ]) - - # part 3: add extra levels - if self.num_outs > len(outs): - # use max pool to get more levels on top of outputs - # (e.g., Faster R-CNN, Mask R-CNN) - if not self.add_extra_convs: - for i in range(self.num_outs - used_backbone_levels): - outs.append(F.max_pool2d(outs[-1], 1, stride=2)) - # add conv layers on top of original feature maps (RetinaNet) - else: - if self.add_extra_convs == 'on_input': - orig = inputs[self.backbone_end_level - 1] - outs.append(self.fpn_convs[used_backbone_levels](orig)) - elif self.add_extra_convs == 'on_lateral': - outs.append(self.fpn_convs[used_backbone_levels]( - laterals[-1])) - elif self.add_extra_convs == 'on_output': - outs.append(self.fpn_convs[used_backbone_levels](outs[-1])) - else: - raise NotImplementedError - for i in range(used_backbone_levels + 1, self.num_outs): - if self.relu_before_extra_convs: - outs.append(self.fpn_convs[i](F.relu(outs[-1]))) - else: - outs.append(self.fpn_convs[i](outs[-1])) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/rfp.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/rfp.py deleted file mode 100644 index 6976f4da..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/rfp.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import constant_init, xavier_init -from mmcv.runner import BaseModule, ModuleList - -from ..builder import NECKS, build_backbone -from .fpn import FPN - - -class ASPP(BaseModule): - """ASPP (Atrous Spatial Pyramid Pooling) - - This is an implementation of the ASPP module used in DetectoRS - (https://arxiv.org/pdf/2006.02334.pdf) - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of channels produced by this module - dilations (tuple[int]): Dilations of the four branches. - Default: (1, 3, 6, 1) - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - dilations=(1, 3, 6, 1), - init_cfg=dict(type='Kaiming', layer='Conv2d')): - super().__init__(init_cfg) - assert dilations[-1] == 1 - self.aspp = nn.ModuleList() - for dilation in dilations: - kernel_size = 3 if dilation > 1 else 1 - padding = dilation if dilation > 1 else 0 - conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=1, - dilation=dilation, - padding=padding, - bias=True) - self.aspp.append(conv) - self.gap = nn.AdaptiveAvgPool2d(1) - - def forward(self, x): - avg_x = self.gap(x) - out = [] - for aspp_idx in range(len(self.aspp)): - inp = avg_x if (aspp_idx == len(self.aspp) - 1) else x - out.append(F.relu_(self.aspp[aspp_idx](inp))) - out[-1] = out[-1].expand_as(out[-2]) - out = torch.cat(out, dim=1) - return out - - -@NECKS.register_module() -class RFP(FPN): - """RFP (Recursive Feature Pyramid) - - This is an implementation of RFP in `DetectoRS - `_. Different from standard FPN, the - input of RFP should be multi level features along with origin input image - of backbone. - - Args: - rfp_steps (int): Number of unrolled steps of RFP. - rfp_backbone (dict): Configuration of the backbone for RFP. - aspp_out_channels (int): Number of output channels of ASPP module. - aspp_dilations (tuple[int]): Dilation rates of four branches. - Default: (1, 3, 6, 1) - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - rfp_steps, - rfp_backbone, - aspp_out_channels, - aspp_dilations=(1, 3, 6, 1), - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super().__init__(init_cfg=init_cfg, **kwargs) - self.rfp_steps = rfp_steps - # Be careful! Pretrained weights cannot be loaded when use - # nn.ModuleList - self.rfp_modules = ModuleList() - for rfp_idx in range(1, rfp_steps): - rfp_module = build_backbone(rfp_backbone) - self.rfp_modules.append(rfp_module) - self.rfp_aspp = ASPP(self.out_channels, aspp_out_channels, - aspp_dilations) - self.rfp_weight = nn.Conv2d( - self.out_channels, - 1, - kernel_size=1, - stride=1, - padding=0, - bias=True) - - def init_weights(self): - # Avoid using super().init_weights(), which may alter the default - # initialization of the modules in self.rfp_modules that have missing - # keys in the pretrained checkpoint. 
- for convs in [self.lateral_convs, self.fpn_convs]: - for m in convs.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - for rfp_idx in range(self.rfp_steps - 1): - self.rfp_modules[rfp_idx].init_weights() - constant_init(self.rfp_weight, 0) - - def forward(self, inputs): - inputs = list(inputs) - assert len(inputs) == len(self.in_channels) + 1 # +1 for input image - img = inputs.pop(0) - # FPN forward - x = super().forward(tuple(inputs)) - for rfp_idx in range(self.rfp_steps - 1): - rfp_feats = [x[0]] + list( - self.rfp_aspp(x[i]) for i in range(1, len(x))) - x_idx = self.rfp_modules[rfp_idx].rfp_forward(img, rfp_feats) - # FPN forward - x_idx = super().forward(x_idx) - x_new = [] - for ft_idx in range(len(x_idx)): - add_weight = torch.sigmoid(self.rfp_weight(x_idx[ft_idx])) - x_new.append(add_weight * x_idx[ft_idx] + - (1 - add_weight) * x[ft_idx]) - x = x_new - return x diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/ssd_neck.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/ssd_neck.py deleted file mode 100644 index 179d575e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/ssd_neck.py +++ /dev/null @@ -1,129 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -@NECKS.register_module() -class SSDNeck(BaseModule): - """Extra layers of SSD backbone to generate multi-scale feature maps. - - Args: - in_channels (Sequence[int]): Number of input channels per scale. - out_channels (Sequence[int]): Number of output channels per scale. - level_strides (Sequence[int]): Stride of 3x3 conv per level. - level_paddings (Sequence[int]): Padding size of 3x3 conv per level. - l2_norm_scale (float|None): L2 normalization layer init scale. - If None, not use L2 normalization on the first input feature. - last_kernel_size (int): Kernel size of the last conv layer. - Default: 3. - use_depthwise (bool): Whether to use DepthwiseSeparableConv. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: None. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - out_channels, - level_strides, - level_paddings, - l2_norm_scale=20., - last_kernel_size=3, - use_depthwise=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - init_cfg=[ - dict( - type='Xavier', distribution='uniform', - layer='Conv2d'), - dict(type='Constant', val=1, layer='BatchNorm2d'), - ]): - super(SSDNeck, self).__init__(init_cfg) - assert len(out_channels) > len(in_channels) - assert len(out_channels) - len(in_channels) == len(level_strides) - assert len(level_strides) == len(level_paddings) - assert in_channels == out_channels[:len(in_channels)] - - if l2_norm_scale: - self.l2_norm = L2Norm(in_channels[0], l2_norm_scale) - self.init_cfg += [ - dict( - type='Constant', - val=self.l2_norm.scale, - override=dict(name='l2_norm')) - ] - - self.extra_layers = nn.ModuleList() - extra_layer_channels = out_channels[len(in_channels):] - second_conv = DepthwiseSeparableConvModule if \ - use_depthwise else ConvModule - - for i, (out_channel, stride, padding) in enumerate( - zip(extra_layer_channels, level_strides, level_paddings)): - kernel_size = last_kernel_size \ - if i == len(extra_layer_channels) - 1 else 3 - per_lvl_convs = nn.Sequential( - ConvModule( - out_channels[len(in_channels) - 1 + i], - out_channel // 2, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - second_conv( - out_channel // 2, - out_channel, - kernel_size, - stride=stride, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.extra_layers.append(per_lvl_convs) - - def forward(self, inputs): - """Forward function.""" - outs = [feat for feat in inputs] - if hasattr(self, 'l2_norm'): - outs[0] = self.l2_norm(outs[0]) - - feat = outs[-1] - for layer in self.extra_layers: - feat = layer(feat) - outs.append(feat) - return tuple(outs) - - -class L2Norm(nn.Module): - - def __init__(self, n_dims, scale=20., eps=1e-10): - """L2 normalization layer. - - Args: - n_dims (int): Number of dimensions to be normalized - scale (float, optional): Defaults to 20.. - eps (float, optional): Used to avoid division by zero. - Defaults to 1e-10. - """ - super(L2Norm, self).__init__() - self.n_dims = n_dims - self.weight = nn.Parameter(torch.Tensor(self.n_dims)) - self.eps = eps - self.scale = scale - - def forward(self, x): - """Forward function.""" - # normalization layer convert to FP32 in FP16 training - x_float = x.float() - norm = x_float.pow(2).sum(1, keepdim=True).sqrt() + self.eps - return (self.weight[None, :, None, None].float().expand_as(x_float) * - x_float / norm).type_as(x) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/yolo_neck.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/yolo_neck.py deleted file mode 100644 index c8eeb573..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/yolo_neck.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -import torch -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -class DetectionBlock(BaseModule): - """Detection block in YOLO neck. - - Let out_channels = n, the DetectionBlock contains: - Six ConvLayers, 1 Conv2D Layer and 1 YoloLayer. - The first 6 ConvLayers are formed the following way: - 1x1xn, 3x3x2n, 1x1xn, 3x3x2n, 1x1xn, 3x3x2n. - The Conv2D layer is 1x1x255. - Some block will have branch after the fifth ConvLayer. 
- The input channel is arbitrary (in_channels) - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None): - super(DetectionBlock, self).__init__(init_cfg) - double_out_channels = out_channels * 2 - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - self.conv1 = ConvModule(in_channels, out_channels, 1, **cfg) - self.conv2 = ConvModule( - out_channels, double_out_channels, 3, padding=1, **cfg) - self.conv3 = ConvModule(double_out_channels, out_channels, 1, **cfg) - self.conv4 = ConvModule( - out_channels, double_out_channels, 3, padding=1, **cfg) - self.conv5 = ConvModule(double_out_channels, out_channels, 1, **cfg) - - def forward(self, x): - tmp = self.conv1(x) - tmp = self.conv2(tmp) - tmp = self.conv3(tmp) - tmp = self.conv4(tmp) - out = self.conv5(tmp) - return out - - -@NECKS.register_module() -class YOLOV3Neck(BaseModule): - """The neck of YOLOV3. - - It can be treated as a simplified version of FPN. It - will take the result from Darknet backbone and do some upsampling and - concatenation. It will finally output the detection result. - - Note: - The input feats should be from top to bottom. - i.e., from high-lvl to low-lvl - But YOLOV3Neck will process them in reversed order. - i.e., from bottom (high-lvl) to top (low-lvl) - - Args: - num_scales (int): The number of scales / stages. - in_channels (List[int]): The number of input channels per scale. - out_channels (List[int]): The number of output channels per scale. - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None. - norm_cfg (dict, optional): Dictionary to construct and config norm - layer. Default: dict(type='BN', requires_grad=True) - act_cfg (dict, optional): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - num_scales, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None): - super(YOLOV3Neck, self).__init__(init_cfg) - assert (num_scales == len(in_channels) == len(out_channels)) - self.num_scales = num_scales - self.in_channels = in_channels - self.out_channels = out_channels - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - # To support arbitrary scales, the code looks awful, but it works. - # Better solution is welcomed. 
- self.detect1 = DetectionBlock(in_channels[0], out_channels[0], **cfg) - for i in range(1, self.num_scales): - in_c, out_c = self.in_channels[i], self.out_channels[i] - inter_c = out_channels[i - 1] - self.add_module(f'conv{i}', ConvModule(inter_c, out_c, 1, **cfg)) - # in_c + out_c : High-lvl feats will be cat with low-lvl feats - self.add_module(f'detect{i+1}', - DetectionBlock(in_c + out_c, out_c, **cfg)) - - def forward(self, feats): - assert len(feats) == self.num_scales - - # processed from bottom (high-lvl) to top (low-lvl) - outs = [] - out = self.detect1(feats[-1]) - outs.append(out) - - for i, x in enumerate(reversed(feats[:-1])): - conv = getattr(self, f'conv{i+1}') - tmp = conv(out) - - # Cat with low-lvl feats - tmp = F.interpolate(tmp, scale_factor=2) - tmp = torch.cat((tmp, x), 1) - - detect = getattr(self, f'detect{i+2}') - out = detect(tmp) - outs.append(out) - - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/yolox_pafpn.py b/cv/3d_detection/paconv/pytorch/mmdet/models/necks/yolox_pafpn.py deleted file mode 100644 index b0f6f706..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/necks/yolox_pafpn.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS -from ..utils import CSPLayer - - -@NECKS.register_module() -class YOLOXPAFPN(BaseModule): - """Path Aggregation Network used in YOLOX. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_csp_blocks (int): Number of bottlenecks in CSPLayer. Default: 3 - use_depthwise (bool): Whether to depthwise separable convolution in - blocks. Default: False - upsample_cfg (dict): Config dict for interpolate layer. - Default: `dict(scale_factor=2, mode='nearest')` - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN') - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish') - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_csp_blocks=3, - use_depthwise=False, - upsample_cfg=dict(scale_factor=2, mode='nearest'), - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - init_cfg=dict( - type='Kaiming', - layer='Conv2d', - a=math.sqrt(5), - distribution='uniform', - mode='fan_in', - nonlinearity='leaky_relu')): - super(YOLOXPAFPN, self).__init__(init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - - conv = DepthwiseSeparableConvModule if use_depthwise else ConvModule - - # build top-down blocks - self.upsample = nn.Upsample(**upsample_cfg) - self.reduce_layers = nn.ModuleList() - self.top_down_blocks = nn.ModuleList() - for idx in range(len(in_channels) - 1, 0, -1): - self.reduce_layers.append( - ConvModule( - in_channels[idx], - in_channels[idx - 1], - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.top_down_blocks.append( - CSPLayer( - in_channels[idx - 1] * 2, - in_channels[idx - 1], - num_blocks=num_csp_blocks, - add_identity=False, - use_depthwise=use_depthwise, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - # build bottom-up blocks - self.downsamples = nn.ModuleList() - self.bottom_up_blocks = nn.ModuleList() - for idx in range(len(in_channels) - 1): - self.downsamples.append( - conv( - in_channels[idx], - in_channels[idx], - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottom_up_blocks.append( - CSPLayer( - in_channels[idx] * 2, - in_channels[idx + 1], - num_blocks=num_csp_blocks, - add_identity=False, - use_depthwise=use_depthwise, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - self.out_convs = nn.ModuleList() - for i in range(len(in_channels)): - self.out_convs.append( - ConvModule( - in_channels[i], - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, inputs): - """ - Args: - inputs (tuple[Tensor]): input features. - - Returns: - tuple[Tensor]: YOLOXPAFPN features. - """ - assert len(inputs) == len(self.in_channels) - - # top-down path - inner_outs = [inputs[-1]] - for idx in range(len(self.in_channels) - 1, 0, -1): - feat_heigh = inner_outs[0] - feat_low = inputs[idx - 1] - feat_heigh = self.reduce_layers[len(self.in_channels) - 1 - idx]( - feat_heigh) - inner_outs[0] = feat_heigh - - upsample_feat = self.upsample(feat_heigh) - - inner_out = self.top_down_blocks[len(self.in_channels) - 1 - idx]( - torch.cat([upsample_feat, feat_low], 1)) - inner_outs.insert(0, inner_out) - - # bottom-up path - outs = [inner_outs[0]] - for idx in range(len(self.in_channels) - 1): - feat_low = outs[-1] - feat_height = inner_outs[idx + 1] - downsample_feat = self.downsamples[idx](feat_low) - out = self.bottom_up_blocks[idx]( - torch.cat([downsample_feat, feat_height], 1)) - outs.append(out) - - # out convs - for idx, conv in enumerate(self.out_convs): - outs[idx] = conv(outs[idx]) - - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/__init__.py deleted file mode 100644 index a455c07b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .dropblock import DropBlock -from .msdeformattn_pixel_decoder import MSDeformAttnPixelDecoder -from .pixel_decoder import PixelDecoder, TransformerEncoderPixelDecoder - -__all__ = [ - 'DropBlock', 'PixelDecoder', 'TransformerEncoderPixelDecoder', - 'MSDeformAttnPixelDecoder' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/dropblock.py b/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/dropblock.py deleted file mode 100644 index bb00ade7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/dropblock.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import PLUGIN_LAYERS - -eps = 1e-6 - - -@PLUGIN_LAYERS.register_module() -class DropBlock(nn.Module): - """Randomly drop some regions of feature maps. - - Please refer to the method proposed in `DropBlock - `_ for details. - - Args: - drop_prob (float): The probability of dropping each block. - block_size (int): The size of dropped blocks. - warmup_iters (int): The drop probability will linearly increase - from `0` to `drop_prob` during the first `warmup_iters` iterations. - Default: 2000. - """ - - def __init__(self, drop_prob, block_size, warmup_iters=2000, **kwargs): - super(DropBlock, self).__init__() - assert block_size % 2 == 1 - assert 0 < drop_prob <= 1 - assert warmup_iters >= 0 - self.drop_prob = drop_prob - self.block_size = block_size - self.warmup_iters = warmup_iters - self.iter_cnt = 0 - - def forward(self, x): - """ - Args: - x (Tensor): Input feature map on which some areas will be randomly - dropped. - - Returns: - Tensor: The tensor after DropBlock layer. - """ - if not self.training: - return x - self.iter_cnt += 1 - N, C, H, W = list(x.shape) - gamma = self._compute_gamma((H, W)) - mask_shape = (N, C, H - self.block_size + 1, W - self.block_size + 1) - mask = torch.bernoulli(torch.full(mask_shape, gamma, device=x.device)) - - mask = F.pad(mask, [self.block_size // 2] * 4, value=0) - mask = F.max_pool2d( - input=mask, - stride=(1, 1), - kernel_size=(self.block_size, self.block_size), - padding=self.block_size // 2) - mask = 1 - mask - x = x * mask * mask.numel() / (eps + mask.sum()) - return x - - def _compute_gamma(self, feat_size): - """Compute the value of gamma according to paper. gamma is the - parameter of bernoulli distribution, which controls the number of - features to drop. - - gamma = (drop_prob * fm_area) / (drop_area * keep_area) - - Args: - feat_size (tuple[int, int]): The height and width of feature map. - - Returns: - float: The value of gamma. - """ - gamma = (self.drop_prob * feat_size[0] * feat_size[1]) - gamma /= ((feat_size[0] - self.block_size + 1) * - (feat_size[1] - self.block_size + 1)) - gamma /= (self.block_size**2) - factor = (1.0 if self.iter_cnt > self.warmup_iters else self.iter_cnt / - self.warmup_iters) - return gamma * factor - - def extra_repr(self): - return (f'drop_prob={self.drop_prob}, block_size={self.block_size}, ' - f'warmup_iters={self.warmup_iters}') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/msdeformattn_pixel_decoder.py b/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/msdeformattn_pixel_decoder.py deleted file mode 100644 index d553582b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/msdeformattn_pixel_decoder.py +++ /dev/null @@ -1,269 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (PLUGIN_LAYERS, Conv2d, ConvModule, caffe2_xavier_init, - normal_init, xavier_init) -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer_sequence) -from mmcv.runner import BaseModule, ModuleList - -from mmdet.core.anchor import MlvlPointGenerator -from mmdet.models.utils.transformer import MultiScaleDeformableAttention - - -@PLUGIN_LAYERS.register_module() -class MSDeformAttnPixelDecoder(BaseModule): - """Pixel decoder with multi-scale deformable attention. - - Args: - in_channels (list[int] | tuple[int]): Number of channels in the - input feature maps. - strides (list[int] | tuple[int]): Output strides of feature from - backbone. - feat_channels (int): Number of channels for feature. - out_channels (int): Number of channels for output. - num_outs (int): Number of output scales. - norm_cfg (:obj:`mmcv.ConfigDict` | dict): Config for normalization. - Defaults to dict(type='GN', num_groups=32). - act_cfg (:obj:`mmcv.ConfigDict` | dict): Config for activation. - Defaults to dict(type='ReLU'). - encoder (:obj:`mmcv.ConfigDict` | dict): Config for transformer - encoder. Defaults to `DetrTransformerEncoder`. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer encoder position encoding. Defaults to - dict(type='SinePositionalEncoding', num_feats=128, - normalize=True). - init_cfg (:obj:`mmcv.ConfigDict` | dict): Initialization config dict. - """ - - def __init__(self, - in_channels=[256, 512, 1024, 2048], - strides=[4, 8, 16, 32], - feat_channels=256, - out_channels=256, - num_outs=3, - norm_cfg=dict(type='GN', num_groups=32), - act_cfg=dict(type='ReLU'), - encoder=dict( - type='DetrTransformerEncoder', - num_layers=6, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=dict( - type='MultiScaleDeformableAttention', - embed_dims=256, - num_heads=8, - num_levels=3, - num_points=4, - im2col_step=64, - dropout=0.0, - batch_first=False, - norm_cfg=None, - init_cfg=None), - feedforward_channels=1024, - ffn_dropout=0.0, - operation_order=('self_attn', 'norm', 'ffn', 'norm')), - init_cfg=None), - positional_encoding=dict( - type='SinePositionalEncoding', - num_feats=128, - normalize=True), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.strides = strides - self.num_input_levels = len(in_channels) - self.num_encoder_levels = \ - encoder.transformerlayers.attn_cfgs.num_levels - assert self.num_encoder_levels >= 1, \ - 'num_levels in attn_cfgs must be at least one' - input_conv_list = [] - # from top to down (low to high resolution) - for i in range(self.num_input_levels - 1, - self.num_input_levels - self.num_encoder_levels - 1, - -1): - input_conv = ConvModule( - in_channels[i], - feat_channels, - kernel_size=1, - norm_cfg=norm_cfg, - act_cfg=None, - bias=True) - input_conv_list.append(input_conv) - self.input_convs = ModuleList(input_conv_list) - - self.encoder = build_transformer_layer_sequence(encoder) - self.postional_encoding = build_positional_encoding( - positional_encoding) - # high resolution to low resolution - self.level_encoding = nn.Embedding(self.num_encoder_levels, - feat_channels) - - # fpn-like structure - self.lateral_convs = ModuleList() - self.output_convs = ModuleList() - self.use_bias = norm_cfg is None - # from top to down (low to high resolution) - # fpn for the rest features that didn't pass in encoder - for i in range(self.num_input_levels - self.num_encoder_levels - 1, -1, - -1): - 
lateral_conv = ConvModule( - in_channels[i], - feat_channels, - kernel_size=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=None) - output_conv = ConvModule( - feat_channels, - feat_channels, - kernel_size=3, - stride=1, - padding=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.lateral_convs.append(lateral_conv) - self.output_convs.append(output_conv) - - self.mask_feature = Conv2d( - feat_channels, out_channels, kernel_size=1, stride=1, padding=0) - - self.num_outs = num_outs - self.point_generator = MlvlPointGenerator(strides) - - def init_weights(self): - """Initialize weights.""" - for i in range(0, self.num_encoder_levels): - xavier_init( - self.input_convs[i].conv, - gain=1, - bias=0, - distribution='uniform') - - for i in range(0, self.num_input_levels - self.num_encoder_levels): - caffe2_xavier_init(self.lateral_convs[i].conv, bias=0) - caffe2_xavier_init(self.output_convs[i].conv, bias=0) - - caffe2_xavier_init(self.mask_feature, bias=0) - - normal_init(self.level_encoding, mean=0, std=1) - for p in self.encoder.parameters(): - if p.dim() > 1: - nn.init.xavier_normal_(p) - - # init_weights defined in MultiScaleDeformableAttention - for layer in self.encoder.layers: - for attn in layer.attentions: - if isinstance(attn, MultiScaleDeformableAttention): - attn.init_weights() - - def forward(self, feats): - """ - Args: - feats (list[Tensor]): Feature maps of each level. Each has - shape of (batch_size, c, h, w). - - Returns: - tuple: A tuple containing the following: - - - mask_feature (Tensor): shape (batch_size, c, h, w). - - multi_scale_features (list[Tensor]): Multi scale \ - features, each in shape (batch_size, c, h, w). - """ - # generate padding mask for each level, for each image - batch_size = feats[0].shape[0] - encoder_input_list = [] - padding_mask_list = [] - level_positional_encoding_list = [] - spatial_shapes = [] - reference_points_list = [] - for i in range(self.num_encoder_levels): - level_idx = self.num_input_levels - i - 1 - feat = feats[level_idx] - feat_projected = self.input_convs[i](feat) - h, w = feat.shape[-2:] - - # no padding - padding_mask_resized = feat.new_zeros( - (batch_size, ) + feat.shape[-2:], dtype=torch.bool) - pos_embed = self.postional_encoding(padding_mask_resized) - level_embed = self.level_encoding.weight[i] - level_pos_embed = level_embed.view(1, -1, 1, 1) + pos_embed - # (h_i * w_i, 2) - reference_points = self.point_generator.single_level_grid_priors( - feat.shape[-2:], level_idx, device=feat.device) - # normalize - factor = feat.new_tensor([[w, h]]) * self.strides[level_idx] - reference_points = reference_points / factor - - # shape (batch_size, c, h_i, w_i) -> (h_i * w_i, batch_size, c) - feat_projected = feat_projected.flatten(2).permute(2, 0, 1) - level_pos_embed = level_pos_embed.flatten(2).permute(2, 0, 1) - padding_mask_resized = padding_mask_resized.flatten(1) - - encoder_input_list.append(feat_projected) - padding_mask_list.append(padding_mask_resized) - level_positional_encoding_list.append(level_pos_embed) - spatial_shapes.append(feat.shape[-2:]) - reference_points_list.append(reference_points) - # shape (batch_size, total_num_query), - # total_num_query=sum([., h_i * w_i,.]) - padding_masks = torch.cat(padding_mask_list, dim=1) - # shape (total_num_query, batch_size, c) - encoder_inputs = torch.cat(encoder_input_list, dim=0) - level_positional_encodings = torch.cat( - level_positional_encoding_list, dim=0) - device = encoder_inputs.device - # shape (num_encoder_levels, 2), from low - # resolution 
to high resolution - spatial_shapes = torch.as_tensor( - spatial_shapes, dtype=torch.long, device=device) - # shape (0, h_0*w_0, h_0*w_0+h_1*w_1, ...) - level_start_index = torch.cat((spatial_shapes.new_zeros( - (1, )), spatial_shapes.prod(1).cumsum(0)[:-1])) - reference_points = torch.cat(reference_points_list, dim=0) - reference_points = reference_points[None, :, None].repeat( - batch_size, 1, self.num_encoder_levels, 1) - valid_radios = reference_points.new_ones( - (batch_size, self.num_encoder_levels, 2)) - # shape (num_total_query, batch_size, c) - memory = self.encoder( - query=encoder_inputs, - key=None, - value=None, - query_pos=level_positional_encodings, - key_pos=None, - attn_masks=None, - key_padding_mask=None, - query_key_padding_mask=padding_masks, - spatial_shapes=spatial_shapes, - reference_points=reference_points, - level_start_index=level_start_index, - valid_radios=valid_radios) - # (num_total_query, batch_size, c) -> (batch_size, c, num_total_query) - memory = memory.permute(1, 2, 0) - - # from low resolution to high resolution - num_query_per_level = [e[0] * e[1] for e in spatial_shapes] - outs = torch.split(memory, num_query_per_level, dim=-1) - outs = [ - x.reshape(batch_size, -1, spatial_shapes[i][0], - spatial_shapes[i][1]) for i, x in enumerate(outs) - ] - - for i in range(self.num_input_levels - self.num_encoder_levels - 1, -1, - -1): - x = feats[i] - cur_feat = self.lateral_convs[i](x) - y = cur_feat + F.interpolate( - outs[-1], - size=cur_feat.shape[-2:], - mode='bilinear', - align_corners=False) - y = self.output_convs[i](y) - outs.append(y) - multi_scale_features = outs[:self.num_outs] - - mask_feature = self.mask_feature(outs[-1]) - return mask_feature, multi_scale_features diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/pixel_decoder.py b/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/pixel_decoder.py deleted file mode 100644 index 537a187d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/plugins/pixel_decoder.py +++ /dev/null @@ -1,243 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import PLUGIN_LAYERS, Conv2d, ConvModule, caffe2_xavier_init -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer_sequence) -from mmcv.runner import BaseModule, ModuleList - - -@PLUGIN_LAYERS.register_module() -class PixelDecoder(BaseModule): - """Pixel decoder with a structure like fpn. - - Args: - in_channels (list[int] | tuple[int]): Number of channels in the - input feature maps. - feat_channels (int): Number channels for feature. - out_channels (int): Number channels for output. - norm_cfg (:obj:`mmcv.ConfigDict` | dict): Config for normalization. - Defaults to dict(type='GN', num_groups=32). - act_cfg (:obj:`mmcv.ConfigDict` | dict): Config for activation. - Defaults to dict(type='ReLU'). - encoder (:obj:`mmcv.ConfigDict` | dict): Config for transorformer - encoder.Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer encoder position encoding. Defaults to - dict(type='SinePositionalEncoding', num_feats=128, - normalize=True). - init_cfg (:obj:`mmcv.ConfigDict` | dict): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - norm_cfg=dict(type='GN', num_groups=32), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.num_inputs = len(in_channels) - self.lateral_convs = ModuleList() - self.output_convs = ModuleList() - self.use_bias = norm_cfg is None - for i in range(0, self.num_inputs - 1): - lateral_conv = ConvModule( - in_channels[i], - feat_channels, - kernel_size=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=None) - output_conv = ConvModule( - feat_channels, - feat_channels, - kernel_size=3, - stride=1, - padding=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.lateral_convs.append(lateral_conv) - self.output_convs.append(output_conv) - - self.last_feat_conv = ConvModule( - in_channels[-1], - feat_channels, - kernel_size=3, - padding=1, - stride=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.mask_feature = Conv2d( - feat_channels, out_channels, kernel_size=3, stride=1, padding=1) - - def init_weights(self): - """Initialize weights.""" - for i in range(0, self.num_inputs - 2): - caffe2_xavier_init(self.lateral_convs[i].conv, bias=0) - caffe2_xavier_init(self.output_convs[i].conv, bias=0) - - caffe2_xavier_init(self.mask_feature, bias=0) - caffe2_xavier_init(self.last_feat_conv, bias=0) - - def forward(self, feats, img_metas): - """ - Args: - feats (list[Tensor]): Feature maps of each level. Each has - shape of (batch_size, c, h, w). - img_metas (list[dict]): List of image information. Pass in - for creating more accurate padding mask. Not used here. - - Returns: - tuple: a tuple containing the following: - - mask_feature (Tensor): Shape (batch_size, c, h, w). - - memory (Tensor): Output of last stage of backbone.\ - Shape (batch_size, c, h, w). - """ - y = self.last_feat_conv(feats[-1]) - for i in range(self.num_inputs - 2, -1, -1): - x = feats[i] - cur_feat = self.lateral_convs[i](x) - y = cur_feat + \ - F.interpolate(y, size=cur_feat.shape[-2:], mode='nearest') - y = self.output_convs[i](y) - - mask_feature = self.mask_feature(y) - memory = feats[-1] - return mask_feature, memory - - -@PLUGIN_LAYERS.register_module() -class TransformerEncoderPixelDecoder(PixelDecoder): - """Pixel decoder with transormer encoder inside. - - Args: - in_channels (list[int] | tuple[int]): Number of channels in the - input feature maps. - feat_channels (int): Number channels for feature. - out_channels (int): Number channels for output. - norm_cfg (:obj:`mmcv.ConfigDict` | dict): Config for normalization. - Defaults to dict(type='GN', num_groups=32). - act_cfg (:obj:`mmcv.ConfigDict` | dict): Config for activation. - Defaults to dict(type='ReLU'). - encoder (:obj:`mmcv.ConfigDict` | dict): Config for transorformer - encoder.Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer encoder position encoding. Defaults to - dict(type='SinePositionalEncoding', num_feats=128, - normalize=True). - init_cfg (:obj:`mmcv.ConfigDict` | dict): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - norm_cfg=dict(type='GN', num_groups=32), - act_cfg=dict(type='ReLU'), - encoder=None, - positional_encoding=dict( - type='SinePositionalEncoding', - num_feats=128, - normalize=True), - init_cfg=None): - super(TransformerEncoderPixelDecoder, self).__init__( - in_channels, - feat_channels, - out_channels, - norm_cfg, - act_cfg, - init_cfg=init_cfg) - self.last_feat_conv = None - - self.encoder = build_transformer_layer_sequence(encoder) - self.encoder_embed_dims = self.encoder.embed_dims - assert self.encoder_embed_dims == feat_channels, 'embed_dims({}) of ' \ - 'tranformer encoder must equal to feat_channels({})'.format( - feat_channels, self.encoder_embed_dims) - self.positional_encoding = build_positional_encoding( - positional_encoding) - self.encoder_in_proj = Conv2d( - in_channels[-1], feat_channels, kernel_size=1) - self.encoder_out_proj = ConvModule( - feat_channels, - feat_channels, - kernel_size=3, - stride=1, - padding=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def init_weights(self): - """Initialize weights.""" - for i in range(0, self.num_inputs - 2): - caffe2_xavier_init(self.lateral_convs[i].conv, bias=0) - caffe2_xavier_init(self.output_convs[i].conv, bias=0) - - caffe2_xavier_init(self.mask_feature, bias=0) - caffe2_xavier_init(self.encoder_in_proj, bias=0) - caffe2_xavier_init(self.encoder_out_proj.conv, bias=0) - - for p in self.encoder.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, feats, img_metas): - """ - Args: - feats (list[Tensor]): Feature maps of each level. Each has - shape of (batch_size, c, h, w). - img_metas (list[dict]): List of image information. Pass in - for creating more accurate padding mask. - - Returns: - tuple: a tuple containing the following: - - mask_feature (Tensor): shape (batch_size, c, h, w). - - memory (Tensor): shape (batch_size, c, h, w). 
- """ - feat_last = feats[-1] - bs, c, h, w = feat_last.shape - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - padding_mask = feat_last.new_ones((bs, input_img_h, input_img_w), - dtype=torch.float32) - for i in range(bs): - img_h, img_w, _ = img_metas[i]['img_shape'] - padding_mask[i, :img_h, :img_w] = 0 - padding_mask = F.interpolate( - padding_mask.unsqueeze(1), - size=feat_last.shape[-2:], - mode='nearest').to(torch.bool).squeeze(1) - - pos_embed = self.positional_encoding(padding_mask) - feat_last = self.encoder_in_proj(feat_last) - # (batch_size, c, h, w) -> (num_queries, batch_size, c) - feat_last = feat_last.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - # (batch_size, h, w) -> (batch_size, h*w) - padding_mask = padding_mask.flatten(1) - memory = self.encoder( - query=feat_last, - key=None, - value=None, - query_pos=pos_embed, - query_key_padding_mask=padding_mask) - # (num_queries, batch_size, c) -> (batch_size, c, h, w) - memory = memory.permute(1, 2, 0).view(bs, self.encoder_embed_dims, h, - w) - y = self.encoder_out_proj(memory) - for i in range(self.num_inputs - 2, -1, -1): - x = feats[i] - cur_feat = self.lateral_convs[i](x) - y = cur_feat + \ - F.interpolate(y, size=cur_feat.shape[-2:], mode='nearest') - y = self.output_convs[i](y) - - mask_feature = self.mask_feature(y) - return mask_feature, memory diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/__init__.py deleted file mode 100644 index baae2a05..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/__init__.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_roi_head import BaseRoIHead -from .bbox_heads import (BBoxHead, ConvFCBBoxHead, DIIHead, - DoubleConvFCBBoxHead, SABLHead, SCNetBBoxHead, - Shared2FCBBoxHead, Shared4Conv1FCBBoxHead) -from .cascade_roi_head import CascadeRoIHead -from .double_roi_head import DoubleHeadRoIHead -from .dynamic_roi_head import DynamicRoIHead -from .grid_roi_head import GridRoIHead -from .htc_roi_head import HybridTaskCascadeRoIHead -from .mask_heads import (CoarseMaskHead, FCNMaskHead, FeatureRelayHead, - FusedSemanticHead, GlobalContextHead, GridHead, - HTCMaskHead, MaskIoUHead, MaskPointHead, - SCNetMaskHead, SCNetSemanticHead) -from .mask_scoring_roi_head import MaskScoringRoIHead -from .pisa_roi_head import PISARoIHead -from .point_rend_roi_head import PointRendRoIHead -from .roi_extractors import (BaseRoIExtractor, GenericRoIExtractor, - SingleRoIExtractor) -from .scnet_roi_head import SCNetRoIHead -from .shared_heads import ResLayer -from .sparse_roi_head import SparseRoIHead -from .standard_roi_head import StandardRoIHead -from .trident_roi_head import TridentRoIHead - -__all__ = [ - 'BaseRoIHead', 'CascadeRoIHead', 'DoubleHeadRoIHead', 'MaskScoringRoIHead', - 'HybridTaskCascadeRoIHead', 'GridRoIHead', 'ResLayer', 'BBoxHead', - 'ConvFCBBoxHead', 'DIIHead', 'SABLHead', 'Shared2FCBBoxHead', - 'StandardRoIHead', 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', - 'FCNMaskHead', 'HTCMaskHead', 'FusedSemanticHead', 'GridHead', - 'MaskIoUHead', 'BaseRoIExtractor', 'GenericRoIExtractor', - 'SingleRoIExtractor', 'PISARoIHead', 'PointRendRoIHead', 'MaskPointHead', - 'CoarseMaskHead', 'DynamicRoIHead', 'SparseRoIHead', 'TridentRoIHead', - 'SCNetRoIHead', 'SCNetMaskHead', 'SCNetSemanticHead', 'SCNetBBoxHead', - 'FeatureRelayHead', 'GlobalContextHead' -] diff --git 
a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/base_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/base_roi_head.py deleted file mode 100644 index 4adbdef8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/base_roi_head.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from mmcv.runner import BaseModule - -from ..builder import build_shared_head - - -class BaseRoIHead(BaseModule, metaclass=ABCMeta): - """Base class for RoIHeads.""" - - def __init__(self, - bbox_roi_extractor=None, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - shared_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(BaseRoIHead, self).__init__(init_cfg) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if shared_head is not None: - shared_head.pretrained = pretrained - self.shared_head = build_shared_head(shared_head) - - if bbox_head is not None: - self.init_bbox_head(bbox_roi_extractor, bbox_head) - - if mask_head is not None: - self.init_mask_head(mask_roi_extractor, mask_head) - - self.init_assigner_sampler() - - @property - def with_bbox(self): - """bool: whether the RoI head contains a `bbox_head`""" - return hasattr(self, 'bbox_head') and self.bbox_head is not None - - @property - def with_mask(self): - """bool: whether the RoI head contains a `mask_head`""" - return hasattr(self, 'mask_head') and self.mask_head is not None - - @property - def with_shared_head(self): - """bool: whether the RoI head contains a `shared_head`""" - return hasattr(self, 'shared_head') and self.shared_head is not None - - @abstractmethod - def init_bbox_head(self): - """Initialize ``bbox_head``""" - pass - - @abstractmethod - def init_mask_head(self): - """Initialize ``mask_head``""" - pass - - @abstractmethod - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - pass - - @abstractmethod - def forward_train(self, - x, - img_meta, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - **kwargs): - """Forward function during training.""" - - async def async_simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False, - **kwargs): - """Asynchronized test function.""" - raise NotImplementedError - - def simple_test(self, - x, - proposal_list, - img_meta, - proposals=None, - rescale=False, - **kwargs): - """Test without augmentation.""" - - def aug_test(self, x, proposal_list, img_metas, rescale=False, **kwargs): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/__init__.py deleted file mode 100644 index d1207dbe..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .bbox_head import BBoxHead -from .convfc_bbox_head import (ConvFCBBoxHead, Shared2FCBBoxHead, - Shared4Conv1FCBBoxHead) -from .dii_head import DIIHead -from .double_bbox_head import DoubleConvFCBBoxHead -from .sabl_head import SABLHead -from .scnet_bbox_head import SCNetBBoxHead - -__all__ = [ - 'BBoxHead', 'ConvFCBBoxHead', 'Shared2FCBBoxHead', - 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', 'SABLHead', 'DIIHead', - 'SCNetBBoxHead' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/bbox_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/bbox_head.py deleted file mode 100644 index 461b18b7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/bbox_head.py +++ /dev/null @@ -1,594 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.runner import BaseModule, auto_fp16, force_fp32 -from torch.nn.modules.utils import _pair - -from mmdet.core import build_bbox_coder, multi_apply, multiclass_nms -from mmdet.models.builder import HEADS, build_loss -from mmdet.models.losses import accuracy -from mmdet.models.utils import build_linear_layer - - -@HEADS.register_module() -class BBoxHead(BaseModule): - """Simplest RoI head, with only two fc layers for classification and - regression respectively.""" - - def __init__(self, - with_avg_pool=False, - with_cls=True, - with_reg=True, - roi_feat_size=7, - in_channels=256, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - reg_decoded_bbox=False, - reg_predictor_cfg=dict(type='Linear'), - cls_predictor_cfg=dict(type='Linear'), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1.0), - init_cfg=None): - super(BBoxHead, self).__init__(init_cfg) - assert with_cls or with_reg - self.with_avg_pool = with_avg_pool - self.with_cls = with_cls - self.with_reg = with_reg - self.roi_feat_size = _pair(roi_feat_size) - self.roi_feat_area = self.roi_feat_size[0] * self.roi_feat_size[1] - self.in_channels = in_channels - self.num_classes = num_classes - self.reg_class_agnostic = reg_class_agnostic - self.reg_decoded_bbox = reg_decoded_bbox - self.reg_predictor_cfg = reg_predictor_cfg - self.cls_predictor_cfg = cls_predictor_cfg - self.fp16_enabled = False - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - - in_channels = self.in_channels - if self.with_avg_pool: - self.avg_pool = nn.AvgPool2d(self.roi_feat_size) - else: - in_channels *= self.roi_feat_area - if self.with_cls: - # need to add background class - if self.custom_cls_channels: - cls_channels = self.loss_cls.get_cls_channels(self.num_classes) - else: - cls_channels = num_classes + 1 - self.fc_cls = build_linear_layer( - self.cls_predictor_cfg, - in_features=in_channels, - out_features=cls_channels) - if self.with_reg: - out_dim_reg = 4 if reg_class_agnostic else 4 * num_classes - self.fc_reg = build_linear_layer( - self.reg_predictor_cfg, - in_features=in_channels, - out_features=out_dim_reg) - self.debug_imgs = None - if init_cfg is None: - self.init_cfg = [] - if self.with_cls: - self.init_cfg += [ - dict( - type='Normal', std=0.01, override=dict(name='fc_cls')) - ] - if self.with_reg: - self.init_cfg += [ - dict( - type='Normal', 
std=0.001, override=dict(name='fc_reg')) - ] - - @property - def custom_cls_channels(self): - return getattr(self.loss_cls, 'custom_cls_channels', False) - - @property - def custom_activation(self): - return getattr(self.loss_cls, 'custom_activation', False) - - @property - def custom_accuracy(self): - return getattr(self.loss_cls, 'custom_accuracy', False) - - @auto_fp16() - def forward(self, x): - if self.with_avg_pool: - if x.numel() > 0: - x = self.avg_pool(x) - x = x.view(x.size(0), -1) - else: - # avg_pool does not support empty tensor, - # so use torch.mean instead it - x = torch.mean(x, dim=(-1, -2)) - cls_score = self.fc_cls(x) if self.with_cls else None - bbox_pred = self.fc_reg(x) if self.with_reg else None - return cls_score, bbox_pred - - def _get_target_single(self, pos_bboxes, neg_bboxes, pos_gt_bboxes, - pos_gt_labels, cfg): - """Calculate the ground truth for proposals in the single image - according to the sampling results. - - Args: - pos_bboxes (Tensor): Contains all the positive boxes, - has shape (num_pos, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - neg_bboxes (Tensor): Contains all the negative boxes, - has shape (num_neg, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_bboxes (Tensor): Contains gt_boxes for - all positive samples, has shape (num_pos, 4), - the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_labels (Tensor): Contains gt_labels for - all positive samples, has shape (num_pos, ). - cfg (obj:`ConfigDict`): `train_cfg` of R-CNN. - - Returns: - Tuple[Tensor]: Ground truth for proposals - in a single image. Containing the following Tensors: - - - labels(Tensor): Gt_labels for all proposals, has - shape (num_proposals,). - - label_weights(Tensor): Labels_weights for all - proposals, has shape (num_proposals,). - - bbox_targets(Tensor):Regression target for all - proposals, has shape (num_proposals, 4), the - last dimension 4 represents [tl_x, tl_y, br_x, br_y]. - - bbox_weights(Tensor):Regression weights for all - proposals, has shape (num_proposals, 4). - """ - num_pos = pos_bboxes.size(0) - num_neg = neg_bboxes.size(0) - num_samples = num_pos + num_neg - - # original implementation uses new_zeros since BG are set to be 0 - # now use empty & fill because BG cat_id = num_classes, - # FG cat_id = [0, num_classes-1] - labels = pos_bboxes.new_full((num_samples, ), - self.num_classes, - dtype=torch.long) - label_weights = pos_bboxes.new_zeros(num_samples) - bbox_targets = pos_bboxes.new_zeros(num_samples, 4) - bbox_weights = pos_bboxes.new_zeros(num_samples, 4) - if num_pos > 0: - labels[:num_pos] = pos_gt_labels - pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight - label_weights[:num_pos] = pos_weight - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - pos_bboxes, pos_gt_bboxes) - else: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, both - # the predicted boxes and regression targets should be with - # absolute coordinate format. - pos_bbox_targets = pos_gt_bboxes - bbox_targets[:num_pos, :] = pos_bbox_targets - bbox_weights[:num_pos, :] = 1 - if num_neg > 0: - label_weights[-num_neg:] = 1.0 - - return labels, label_weights, bbox_targets, bbox_weights - - def get_targets(self, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - concat=True): - """Calculate the ground truth for all samples in a batch according to - the sampling_results. 
- - Almost the same as the implementation in bbox_head, we passed - additional parameters pos_inds_list and neg_inds_list to - `_get_target_single` function. - - Args: - sampling_results (List[obj:SamplingResults]): Assign results of - all images in a batch after sampling. - gt_bboxes (list[Tensor]): Gt_bboxes of all images in a batch, - each tensor has shape (num_gt, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - gt_labels (list[Tensor]): Gt_labels of all images in a batch, - each tensor has shape (num_gt,). - rcnn_train_cfg (obj:ConfigDict): `train_cfg` of RCNN. - concat (bool): Whether to concatenate the results of all - the images in a single batch. - - Returns: - Tuple[Tensor]: Ground truth for proposals in a single image. - Containing the following list of Tensors: - - - labels (list[Tensor],Tensor): Gt_labels for all - proposals in a batch, each tensor in list has - shape (num_proposals,) when `concat=False`, otherwise - just a single tensor has shape (num_all_proposals,). - - label_weights (list[Tensor]): Labels_weights for - all proposals in a batch, each tensor in list has - shape (num_proposals,) when `concat=False`, otherwise - just a single tensor has shape (num_all_proposals,). - - bbox_targets (list[Tensor],Tensor): Regression target - for all proposals in a batch, each tensor in list - has shape (num_proposals, 4) when `concat=False`, - otherwise just a single tensor has shape - (num_all_proposals, 4), the last dimension 4 represents - [tl_x, tl_y, br_x, br_y]. - - bbox_weights (list[tensor],Tensor): Regression weights for - all proposals in a batch, each tensor in list has shape - (num_proposals, 4) when `concat=False`, otherwise just a - single tensor has shape (num_all_proposals, 4). - """ - pos_bboxes_list = [res.pos_bboxes for res in sampling_results] - neg_bboxes_list = [res.neg_bboxes for res in sampling_results] - pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results] - pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results] - labels, label_weights, bbox_targets, bbox_weights = multi_apply( - self._get_target_single, - pos_bboxes_list, - neg_bboxes_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - cfg=rcnn_train_cfg) - - if concat: - labels = torch.cat(labels, 0) - label_weights = torch.cat(label_weights, 0) - bbox_targets = torch.cat(bbox_targets, 0) - bbox_weights = torch.cat(bbox_weights, 0) - return labels, label_weights, bbox_targets, bbox_weights - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def loss(self, - cls_score, - bbox_pred, - rois, - labels, - label_weights, - bbox_targets, - bbox_weights, - reduction_override=None): - losses = dict() - if cls_score is not None: - avg_factor = max(torch.sum(label_weights > 0).float().item(), 1.) - if cls_score.numel() > 0: - loss_cls_ = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=avg_factor, - reduction_override=reduction_override) - if isinstance(loss_cls_, dict): - losses.update(loss_cls_) - else: - losses['loss_cls'] = loss_cls_ - if self.custom_activation: - acc_ = self.loss_cls.get_accuracy(cls_score, labels) - losses.update(acc_) - else: - losses['acc'] = accuracy(cls_score, labels) - if bbox_pred is not None: - bg_class_ind = self.num_classes - # 0~self.num_classes-1 are FG, self.num_classes is BG - pos_inds = (labels >= 0) & (labels < bg_class_ind) - # do not perform bounding box regression for BG anymore. - if pos_inds.any(): - if self.reg_decoded_bbox: - # When the regression loss (e.g. 
`IouLoss`, - # `GIouLoss`, `DIouLoss`) is applied directly on - # the decoded bounding boxes, it decodes the - # already encoded coordinates to absolute format. - bbox_pred = self.bbox_coder.decode(rois[:, 1:], bbox_pred) - if self.reg_class_agnostic: - pos_bbox_pred = bbox_pred.view( - bbox_pred.size(0), 4)[pos_inds.type(torch.bool)] - else: - pos_bbox_pred = bbox_pred.view( - bbox_pred.size(0), -1, - 4)[pos_inds.type(torch.bool), - labels[pos_inds.type(torch.bool)]] - losses['loss_bbox'] = self.loss_bbox( - pos_bbox_pred, - bbox_targets[pos_inds.type(torch.bool)], - bbox_weights[pos_inds.type(torch.bool)], - avg_factor=bbox_targets.size(0), - reduction_override=reduction_override) - else: - losses['loss_bbox'] = bbox_pred[pos_inds].sum() - return losses - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def get_bboxes(self, - rois, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=False, - cfg=None): - """Transform network output for a batch into bbox predictions. - - Args: - rois (Tensor): Boxes to be transformed. Has shape (num_boxes, 5). - last dimension 5 arrange as (batch_index, x1, y1, x2, y2). - cls_score (Tensor): Box scores, has shape - (num_boxes, num_classes + 1). - bbox_pred (Tensor, optional): Box energies / deltas. - has shape (num_boxes, num_classes * 4). - img_shape (Sequence[int], optional): Maximum bounds for boxes, - specifies (H, W, C) or (H, W). - scale_factor (ndarray): Scale factor of the - image arrange as (w_scale, h_scale, w_scale, h_scale). - rescale (bool): If True, return boxes in original image space. - Default: False. - cfg (obj:`ConfigDict`): `test_cfg` of Bbox Head. Default: None - - Returns: - tuple[Tensor, Tensor]: - First tensor is `det_bboxes`, has the shape - (num_boxes, 5) and last - dimension 5 represent (tl_x, tl_y, br_x, br_y, score). - Second tensor is the labels with shape (num_boxes, ). - """ - - # some loss (Seesaw loss..) may have custom activation - if self.custom_cls_channels: - scores = self.loss_cls.get_activation(cls_score) - else: - scores = F.softmax( - cls_score, dim=-1) if cls_score is not None else None - # bbox_pred would be None in some detector when with_reg is False, - # e.g. Grid R-CNN. - if bbox_pred is not None: - bboxes = self.bbox_coder.decode( - rois[..., 1:], bbox_pred, max_shape=img_shape) - else: - bboxes = rois[:, 1:].clone() - if img_shape is not None: - bboxes[:, [0, 2]].clamp_(min=0, max=img_shape[1]) - bboxes[:, [1, 3]].clamp_(min=0, max=img_shape[0]) - - if rescale and bboxes.size(0) > 0: - scale_factor = bboxes.new_tensor(scale_factor) - bboxes = (bboxes.view(bboxes.size(0), -1, 4) / scale_factor).view( - bboxes.size()[0], -1) - - if cfg is None: - return bboxes, scores - else: - det_bboxes, det_labels = multiclass_nms(bboxes, scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - - return det_bboxes, det_labels - - @force_fp32(apply_to=('bbox_preds', )) - def refine_bboxes(self, rois, labels, bbox_preds, pos_is_gts, img_metas): - """Refine bboxes during training. - - Args: - rois (Tensor): Shape (n*bs, 5), where n is image number per GPU, - and bs is the sampled RoIs per image. The first column is - the image id and the next 4 columns are x1, y1, x2, y2. - labels (Tensor): Shape (n*bs, ). - bbox_preds (Tensor): Shape (n*bs, 4) or (n*bs, 4*#class). - pos_is_gts (list[Tensor]): Flags indicating if each positive bbox - is a gt bbox. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Refined bboxes of each image in a mini-batch. 
- - Example: - >>> # xdoctest: +REQUIRES(module:kwarray) - >>> import kwarray - >>> import numpy as np - >>> from mmdet.core.bbox.demodata import random_boxes - >>> self = BBoxHead(reg_class_agnostic=True) - >>> n_roi = 2 - >>> n_img = 4 - >>> scale = 512 - >>> rng = np.random.RandomState(0) - >>> img_metas = [{'img_shape': (scale, scale)} - ... for _ in range(n_img)] - >>> # Create rois in the expected format - >>> roi_boxes = random_boxes(n_roi, scale=scale, rng=rng) - >>> img_ids = torch.randint(0, n_img, (n_roi,)) - >>> img_ids = img_ids.float() - >>> rois = torch.cat([img_ids[:, None], roi_boxes], dim=1) - >>> # Create other args - >>> labels = torch.randint(0, 2, (n_roi,)).long() - >>> bbox_preds = random_boxes(n_roi, scale=scale, rng=rng) - >>> # For each image, pretend random positive boxes are gts - >>> is_label_pos = (labels.numpy() > 0).astype(np.int) - >>> lbl_per_img = kwarray.group_items(is_label_pos, - ... img_ids.numpy()) - >>> pos_per_img = [sum(lbl_per_img.get(gid, [])) - ... for gid in range(n_img)] - >>> pos_is_gts = [ - >>> torch.randint(0, 2, (npos,)).byte().sort( - >>> descending=True)[0] - >>> for npos in pos_per_img - >>> ] - >>> bboxes_list = self.refine_bboxes(rois, labels, bbox_preds, - >>> pos_is_gts, img_metas) - >>> print(bboxes_list) - """ - img_ids = rois[:, 0].long().unique(sorted=True) - assert img_ids.numel() <= len(img_metas) - - bboxes_list = [] - for i in range(len(img_metas)): - inds = torch.nonzero( - rois[:, 0] == i, as_tuple=False).squeeze(dim=1) - num_rois = inds.numel() - - bboxes_ = rois[inds, 1:] - label_ = labels[inds] - bbox_pred_ = bbox_preds[inds] - img_meta_ = img_metas[i] - pos_is_gts_ = pos_is_gts[i] - - bboxes = self.regress_by_class(bboxes_, label_, bbox_pred_, - img_meta_) - - # filter gt bboxes - pos_keep = 1 - pos_is_gts_ - keep_inds = pos_is_gts_.new_ones(num_rois) - keep_inds[:len(pos_is_gts_)] = pos_keep - - bboxes_list.append(bboxes[keep_inds.type(torch.bool)]) - - return bboxes_list - - @force_fp32(apply_to=('bbox_pred', )) - def regress_by_class(self, rois, label, bbox_pred, img_meta): - """Regress the bbox for the predicted class. Used in Cascade R-CNN. - - Args: - rois (Tensor): Rois from `rpn_head` or last stage - `bbox_head`, has shape (num_proposals, 4) or - (num_proposals, 5). - label (Tensor): Only used when `self.reg_class_agnostic` - is False, has shape (num_proposals, ). - bbox_pred (Tensor): Regression prediction of - current stage `bbox_head`. When `self.reg_class_agnostic` - is False, it has shape (n, num_classes * 4), otherwise - it has shape (n, 4). - img_meta (dict): Image meta info. - - Returns: - Tensor: Regressed bboxes, the same shape as input rois. - """ - - assert rois.size(1) == 4 or rois.size(1) == 5, repr(rois.shape) - - if not self.reg_class_agnostic: - label = label * 4 - inds = torch.stack((label, label + 1, label + 2, label + 3), 1) - bbox_pred = torch.gather(bbox_pred, 1, inds) - assert bbox_pred.size(1) == 4 - - max_shape = img_meta['img_shape'] - - if rois.size(1) == 4: - new_rois = self.bbox_coder.decode( - rois, bbox_pred, max_shape=max_shape) - else: - bboxes = self.bbox_coder.decode( - rois[:, 1:], bbox_pred, max_shape=max_shape) - new_rois = torch.cat((rois[:, [0]], bboxes), dim=1) - - return new_rois - - def onnx_export(self, - rois, - cls_score, - bbox_pred, - img_shape, - cfg=None, - **kwargs): - """Transform network output for a batch into bbox predictions. - - Args: - rois (Tensor): Boxes to be transformed. - Has shape (B, num_boxes, 5) - cls_score (Tensor): Box scores. 
has shape - (B, num_boxes, num_classes + 1), 1 represent the background. - bbox_pred (Tensor, optional): Box energies / deltas for, - has shape (B, num_boxes, num_classes * 4) when. - img_shape (torch.Tensor): Shape of image. - cfg (obj:`ConfigDict`): `test_cfg` of Bbox Head. Default: None - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - - assert rois.ndim == 3, 'Only support export two stage ' \ - 'model to ONNX ' \ - 'with batch dimension. ' - if self.custom_cls_channels: - scores = self.loss_cls.get_activation(cls_score) - else: - scores = F.softmax( - cls_score, dim=-1) if cls_score is not None else None - - if bbox_pred is not None: - bboxes = self.bbox_coder.decode( - rois[..., 1:], bbox_pred, max_shape=img_shape) - else: - bboxes = rois[..., 1:].clone() - if img_shape is not None: - max_shape = bboxes.new_tensor(img_shape)[..., :2] - min_xy = bboxes.new_tensor(0) - max_xy = torch.cat( - [max_shape] * 2, dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - # Replace multiclass_nms with ONNX::NonMaxSuppression in deployment - from mmdet.core.export import add_dummy_nms_for_onnx - max_output_boxes_per_class = cfg.nms.get('max_output_boxes_per_class', - cfg.max_per_img) - iou_threshold = cfg.nms.get('iou_threshold', 0.5) - score_threshold = cfg.score_thr - nms_pre = cfg.get('deploy_nms_pre', -1) - - scores = scores[..., :self.num_classes] - if self.reg_class_agnostic: - return add_dummy_nms_for_onnx( - bboxes, - scores, - max_output_boxes_per_class, - iou_threshold, - score_threshold, - pre_top_k=nms_pre, - after_top_k=cfg.max_per_img) - else: - batch_size = scores.shape[0] - labels = torch.arange( - self.num_classes, dtype=torch.long).to(scores.device) - labels = labels.view(1, 1, -1).expand_as(scores) - labels = labels.reshape(batch_size, -1) - scores = scores.reshape(batch_size, -1) - bboxes = bboxes.reshape(batch_size, -1, 4) - - max_size = torch.max(img_shape) - # Offset bboxes of each class so that bboxes of different labels - # do not overlap. - offsets = (labels * max_size + 1).unsqueeze(2) - bboxes_for_nms = bboxes + offsets - - batch_dets, labels = add_dummy_nms_for_onnx( - bboxes_for_nms, - scores.unsqueeze(2), - max_output_boxes_per_class, - iou_threshold, - score_threshold, - pre_top_k=nms_pre, - after_top_k=cfg.max_per_img, - labels=labels) - # Offset the bboxes back after dummy nms. - offsets = (labels * max_size + 1).unsqueeze(2) - # Indexing + inplace operation fails with dynamic shape in ONNX - # original style: batch_dets[..., :4] -= offsets - bboxes, scores = batch_dets[..., 0:4], batch_dets[..., 4:5] - bboxes -= offsets - batch_dets = torch.cat([bboxes, scores], dim=2) - return batch_dets, labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py deleted file mode 100644 index 21124b9c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py +++ /dev/null @@ -1,229 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmdet.models.builder import HEADS -from mmdet.models.utils import build_linear_layer -from .bbox_head import BBoxHead - - -@HEADS.register_module() -class ConvFCBBoxHead(BBoxHead): - r"""More general bbox head, with shared conv and fc layers and two optional - separated branches. - - .. code-block:: none - - /-> cls convs -> cls fcs -> cls - shared convs -> shared fcs - \-> reg convs -> reg fcs -> reg - """ # noqa: W605 - - def __init__(self, - num_shared_convs=0, - num_shared_fcs=0, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - conv_out_channels=256, - fc_out_channels=1024, - conv_cfg=None, - norm_cfg=None, - init_cfg=None, - *args, - **kwargs): - super(ConvFCBBoxHead, self).__init__( - *args, init_cfg=init_cfg, **kwargs) - assert (num_shared_convs + num_shared_fcs + num_cls_convs + - num_cls_fcs + num_reg_convs + num_reg_fcs > 0) - if num_cls_convs > 0 or num_reg_convs > 0: - assert num_shared_fcs == 0 - if not self.with_cls: - assert num_cls_convs == 0 and num_cls_fcs == 0 - if not self.with_reg: - assert num_reg_convs == 0 and num_reg_fcs == 0 - self.num_shared_convs = num_shared_convs - self.num_shared_fcs = num_shared_fcs - self.num_cls_convs = num_cls_convs - self.num_cls_fcs = num_cls_fcs - self.num_reg_convs = num_reg_convs - self.num_reg_fcs = num_reg_fcs - self.conv_out_channels = conv_out_channels - self.fc_out_channels = fc_out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - # add shared convs and fcs - self.shared_convs, self.shared_fcs, last_layer_dim = \ - self._add_conv_fc_branch( - self.num_shared_convs, self.num_shared_fcs, self.in_channels, - True) - self.shared_out_channels = last_layer_dim - - # add cls specific branch - self.cls_convs, self.cls_fcs, self.cls_last_dim = \ - self._add_conv_fc_branch( - self.num_cls_convs, self.num_cls_fcs, self.shared_out_channels) - - # add reg specific branch - self.reg_convs, self.reg_fcs, self.reg_last_dim = \ - self._add_conv_fc_branch( - self.num_reg_convs, self.num_reg_fcs, self.shared_out_channels) - - if self.num_shared_fcs == 0 and not self.with_avg_pool: - if self.num_cls_fcs == 0: - self.cls_last_dim *= self.roi_feat_area - if self.num_reg_fcs == 0: - self.reg_last_dim *= self.roi_feat_area - - self.relu = nn.ReLU(inplace=True) - # reconstruct fc_cls and fc_reg since input channels are changed - if self.with_cls: - if self.custom_cls_channels: - cls_channels = self.loss_cls.get_cls_channels(self.num_classes) - else: - cls_channels = self.num_classes + 1 - self.fc_cls = build_linear_layer( - self.cls_predictor_cfg, - in_features=self.cls_last_dim, - out_features=cls_channels) - if self.with_reg: - out_dim_reg = (4 if self.reg_class_agnostic else 4 * - self.num_classes) - self.fc_reg = build_linear_layer( - self.reg_predictor_cfg, - in_features=self.reg_last_dim, - out_features=out_dim_reg) - - if init_cfg is None: - # when init_cfg is None, - # It has been set to - # [[dict(type='Normal', std=0.01, override=dict(name='fc_cls'))], - # [dict(type='Normal', std=0.001, override=dict(name='fc_reg'))] - # after `super(ConvFCBBoxHead, self).__init__()` - # we only need to append additional configuration - # for `shared_fcs`, `cls_fcs` and `reg_fcs` - self.init_cfg += [ - dict( - type='Xavier', - distribution='uniform', - override=[ - dict(name='shared_fcs'), - dict(name='cls_fcs'), - dict(name='reg_fcs') - ]) - ] - - def _add_conv_fc_branch(self, - num_branch_convs, - num_branch_fcs, - in_channels, - is_shared=False): - 
"""Add shared or separable branch. - - convs -> avg pool (optional) -> fcs - """ - last_layer_dim = in_channels - # add branch specific conv layers - branch_convs = nn.ModuleList() - if num_branch_convs > 0: - for i in range(num_branch_convs): - conv_in_channels = ( - last_layer_dim if i == 0 else self.conv_out_channels) - branch_convs.append( - ConvModule( - conv_in_channels, - self.conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - last_layer_dim = self.conv_out_channels - # add branch specific fc layers - branch_fcs = nn.ModuleList() - if num_branch_fcs > 0: - # for shared branch, only consider self.with_avg_pool - # for separated branches, also consider self.num_shared_fcs - if (is_shared - or self.num_shared_fcs == 0) and not self.with_avg_pool: - last_layer_dim *= self.roi_feat_area - for i in range(num_branch_fcs): - fc_in_channels = ( - last_layer_dim if i == 0 else self.fc_out_channels) - branch_fcs.append( - nn.Linear(fc_in_channels, self.fc_out_channels)) - last_layer_dim = self.fc_out_channels - return branch_convs, branch_fcs, last_layer_dim - - def forward(self, x): - # shared part - if self.num_shared_convs > 0: - for conv in self.shared_convs: - x = conv(x) - - if self.num_shared_fcs > 0: - if self.with_avg_pool: - x = self.avg_pool(x) - - x = x.flatten(1) - - for fc in self.shared_fcs: - x = self.relu(fc(x)) - # separate branches - x_cls = x - x_reg = x - - for conv in self.cls_convs: - x_cls = conv(x_cls) - if x_cls.dim() > 2: - if self.with_avg_pool: - x_cls = self.avg_pool(x_cls) - x_cls = x_cls.flatten(1) - for fc in self.cls_fcs: - x_cls = self.relu(fc(x_cls)) - - for conv in self.reg_convs: - x_reg = conv(x_reg) - if x_reg.dim() > 2: - if self.with_avg_pool: - x_reg = self.avg_pool(x_reg) - x_reg = x_reg.flatten(1) - for fc in self.reg_fcs: - x_reg = self.relu(fc(x_reg)) - - cls_score = self.fc_cls(x_cls) if self.with_cls else None - bbox_pred = self.fc_reg(x_reg) if self.with_reg else None - return cls_score, bbox_pred - - -@HEADS.register_module() -class Shared2FCBBoxHead(ConvFCBBoxHead): - - def __init__(self, fc_out_channels=1024, *args, **kwargs): - super(Shared2FCBBoxHead, self).__init__( - num_shared_convs=0, - num_shared_fcs=2, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - fc_out_channels=fc_out_channels, - *args, - **kwargs) - - -@HEADS.register_module() -class Shared4Conv1FCBBoxHead(ConvFCBBoxHead): - - def __init__(self, fc_out_channels=1024, *args, **kwargs): - super(Shared4Conv1FCBBoxHead, self).__init__( - num_shared_convs=4, - num_shared_fcs=1, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - fc_out_channels=fc_out_channels, - *args, - **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/dii_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/dii_head.py deleted file mode 100644 index 3777f52b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/dii_head.py +++ /dev/null @@ -1,426 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import (bias_init_with_prob, build_activation_layer, - build_norm_layer) -from mmcv.cnn.bricks.transformer import FFN, MultiheadAttention -from mmcv.runner import auto_fp16, force_fp32 - -from mmdet.core import multi_apply -from mmdet.models.builder import HEADS, build_loss -from mmdet.models.dense_heads.atss_head import reduce_mean -from mmdet.models.losses import accuracy -from mmdet.models.utils import build_transformer -from .bbox_head import BBoxHead - - -@HEADS.register_module() -class DIIHead(BBoxHead): - r"""Dynamic Instance Interactive Head for `Sparse R-CNN: End-to-End Object - Detection with Learnable Proposals `_ - - Args: - num_classes (int): Number of class in dataset. - Defaults to 80. - num_ffn_fcs (int): The number of fully-connected - layers in FFNs. Defaults to 2. - num_heads (int): The hidden dimension of FFNs. - Defaults to 8. - num_cls_fcs (int): The number of fully-connected - layers in classification subnet. Defaults to 1. - num_reg_fcs (int): The number of fully-connected - layers in regression subnet. Defaults to 3. - feedforward_channels (int): The hidden dimension - of FFNs. Defaults to 2048 - in_channels (int): Hidden_channels of MultiheadAttention. - Defaults to 256. - dropout (float): Probability of drop the channel. - Defaults to 0.0 - ffn_act_cfg (dict): The activation config for FFNs. - dynamic_conv_cfg (dict): The convolution config - for DynamicConv. - loss_iou (dict): The config for iou or giou loss. - - """ - - def __init__(self, - num_classes=80, - num_ffn_fcs=2, - num_heads=8, - num_cls_fcs=1, - num_reg_fcs=3, - feedforward_channels=2048, - in_channels=256, - dropout=0.0, - ffn_act_cfg=dict(type='ReLU', inplace=True), - dynamic_conv_cfg=dict( - type='DynamicConv', - in_channels=256, - feat_channels=64, - out_channels=256, - input_feat_shape=7, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')), - loss_iou=dict(type='GIoULoss', loss_weight=2.0), - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(DIIHead, self).__init__( - num_classes=num_classes, - reg_decoded_bbox=True, - reg_class_agnostic=True, - init_cfg=init_cfg, - **kwargs) - self.loss_iou = build_loss(loss_iou) - self.in_channels = in_channels - self.fp16_enabled = False - self.attention = MultiheadAttention(in_channels, num_heads, dropout) - self.attention_norm = build_norm_layer(dict(type='LN'), in_channels)[1] - - self.instance_interactive_conv = build_transformer(dynamic_conv_cfg) - self.instance_interactive_conv_dropout = nn.Dropout(dropout) - self.instance_interactive_conv_norm = build_norm_layer( - dict(type='LN'), in_channels)[1] - - self.ffn = FFN( - in_channels, - feedforward_channels, - num_ffn_fcs, - act_cfg=ffn_act_cfg, - dropout=dropout) - self.ffn_norm = build_norm_layer(dict(type='LN'), in_channels)[1] - - self.cls_fcs = nn.ModuleList() - for _ in range(num_cls_fcs): - self.cls_fcs.append( - nn.Linear(in_channels, in_channels, bias=False)) - self.cls_fcs.append( - build_norm_layer(dict(type='LN'), in_channels)[1]) - self.cls_fcs.append( - build_activation_layer(dict(type='ReLU', inplace=True))) - - # over load the self.fc_cls in BBoxHead - if self.loss_cls.use_sigmoid: - self.fc_cls = nn.Linear(in_channels, self.num_classes) - else: - self.fc_cls = nn.Linear(in_channels, self.num_classes + 1) - - self.reg_fcs = nn.ModuleList() - for _ in range(num_reg_fcs): - self.reg_fcs.append( - nn.Linear(in_channels, 
in_channels, bias=False)) - self.reg_fcs.append( - build_norm_layer(dict(type='LN'), in_channels)[1]) - self.reg_fcs.append( - build_activation_layer(dict(type='ReLU', inplace=True))) - # over load the self.fc_cls in BBoxHead - self.fc_reg = nn.Linear(in_channels, 4) - - assert self.reg_class_agnostic, 'DIIHead only ' \ - 'suppport `reg_class_agnostic=True` ' - assert self.reg_decoded_bbox, 'DIIHead only ' \ - 'suppport `reg_decoded_bbox=True`' - - def init_weights(self): - """Use xavier initialization for all weight parameter and set - classification head bias as a specific value when use focal loss.""" - super(DIIHead, self).init_weights() - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - else: - # adopt the default initialization for - # the weight and bias of the layer norm - pass - if self.loss_cls.use_sigmoid: - bias_init = bias_init_with_prob(0.01) - nn.init.constant_(self.fc_cls.bias, bias_init) - - @auto_fp16() - def forward(self, roi_feat, proposal_feat): - """Forward function of Dynamic Instance Interactive Head. - - Args: - roi_feat (Tensor): Roi-pooling features with shape - (batch_size*num_proposals, feature_dimensions, - pooling_h , pooling_w). - proposal_feat (Tensor): Intermediate feature get from - diihead in last stage, has shape - (batch_size, num_proposals, feature_dimensions) - - Returns: - tuple[Tensor]: Usually a tuple of classification scores - and bbox prediction and a intermediate feature. - - - cls_scores (Tensor): Classification scores for - all proposals, has shape - (batch_size, num_proposals, num_classes). - - bbox_preds (Tensor): Box energies / deltas for - all proposals, has shape - (batch_size, num_proposals, 4). - - obj_feat (Tensor): Object feature before classification - and regression subnet, has shape - (batch_size, num_proposal, feature_dimensions). - """ - N, num_proposals = proposal_feat.shape[:2] - - # Self attention - proposal_feat = proposal_feat.permute(1, 0, 2) - proposal_feat = self.attention_norm(self.attention(proposal_feat)) - attn_feats = proposal_feat.permute(1, 0, 2) - - # instance interactive - proposal_feat = attn_feats.reshape(-1, self.in_channels) - proposal_feat_iic = self.instance_interactive_conv( - proposal_feat, roi_feat) - proposal_feat = proposal_feat + self.instance_interactive_conv_dropout( - proposal_feat_iic) - obj_feat = self.instance_interactive_conv_norm(proposal_feat) - - # FFN - obj_feat = self.ffn_norm(self.ffn(obj_feat)) - - cls_feat = obj_feat - reg_feat = obj_feat - - for cls_layer in self.cls_fcs: - cls_feat = cls_layer(cls_feat) - for reg_layer in self.reg_fcs: - reg_feat = reg_layer(reg_feat) - - cls_score = self.fc_cls(cls_feat).view( - N, num_proposals, self.num_classes - if self.loss_cls.use_sigmoid else self.num_classes + 1) - bbox_delta = self.fc_reg(reg_feat).view(N, num_proposals, 4) - - return cls_score, bbox_delta, obj_feat.view( - N, num_proposals, self.in_channels), attn_feats - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def loss(self, - cls_score, - bbox_pred, - labels, - label_weights, - bbox_targets, - bbox_weights, - imgs_whwh=None, - reduction_override=None, - **kwargs): - """"Loss function of DIIHead, get loss of all images. - - Args: - cls_score (Tensor): Classification prediction - results of all class, has shape - (batch_size * num_proposals_single_image, num_classes) - bbox_pred (Tensor): Regression prediction results, - has shape - (batch_size * num_proposals_single_image, 4), the last - dimension 4 represents [tl_x, tl_y, br_x, br_y]. 
- labels (Tensor): Label of each proposals, has shape - (batch_size * num_proposals_single_image - label_weights (Tensor): Classification loss - weight of each proposals, has shape - (batch_size * num_proposals_single_image - bbox_targets (Tensor): Regression targets of each - proposals, has shape - (batch_size * num_proposals_single_image, 4), - the last dimension 4 represents - [tl_x, tl_y, br_x, br_y]. - bbox_weights (Tensor): Regression loss weight of each - proposals's coordinate, has shape - (batch_size * num_proposals_single_image, 4), - imgs_whwh (Tensor): imgs_whwh (Tensor): Tensor with\ - shape (batch_size, num_proposals, 4), the last - dimension means - [img_width,img_height, img_width, img_height]. - reduction_override (str, optional): The reduction - method used to override the original reduction - method of the loss. Options are "none", - "mean" and "sum". Defaults to None, - - Returns: - dict[str, Tensor]: Dictionary of loss components - """ - losses = dict() - bg_class_ind = self.num_classes - # note in spare rcnn num_gt == num_pos - pos_inds = (labels >= 0) & (labels < bg_class_ind) - num_pos = pos_inds.sum().float() - avg_factor = reduce_mean(num_pos) - if cls_score is not None: - if cls_score.numel() > 0: - losses['loss_cls'] = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=avg_factor, - reduction_override=reduction_override) - losses['pos_acc'] = accuracy(cls_score[pos_inds], - labels[pos_inds]) - if bbox_pred is not None: - # 0~self.num_classes-1 are FG, self.num_classes is BG - # do not perform bounding box regression for BG anymore. - if pos_inds.any(): - pos_bbox_pred = bbox_pred.reshape(bbox_pred.size(0), - 4)[pos_inds.type(torch.bool)] - imgs_whwh = imgs_whwh.reshape(bbox_pred.size(0), - 4)[pos_inds.type(torch.bool)] - losses['loss_bbox'] = self.loss_bbox( - pos_bbox_pred / imgs_whwh, - bbox_targets[pos_inds.type(torch.bool)] / imgs_whwh, - bbox_weights[pos_inds.type(torch.bool)], - avg_factor=avg_factor) - losses['loss_iou'] = self.loss_iou( - pos_bbox_pred, - bbox_targets[pos_inds.type(torch.bool)], - bbox_weights[pos_inds.type(torch.bool)], - avg_factor=avg_factor) - else: - losses['loss_bbox'] = bbox_pred.sum() * 0 - losses['loss_iou'] = bbox_pred.sum() * 0 - return losses - - def _get_target_single(self, pos_inds, neg_inds, pos_bboxes, neg_bboxes, - pos_gt_bboxes, pos_gt_labels, cfg): - """Calculate the ground truth for proposals in the single image - according to the sampling results. - - Almost the same as the implementation in `bbox_head`, - we add pos_inds and neg_inds to select positive and - negative samples instead of selecting the first num_pos - as positive samples. - - Args: - pos_inds (Tensor): The length is equal to the - positive sample numbers contain all index - of the positive sample in the origin proposal set. - neg_inds (Tensor): The length is equal to the - negative sample numbers contain all index - of the negative sample in the origin proposal set. - pos_bboxes (Tensor): Contains all the positive boxes, - has shape (num_pos, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - neg_bboxes (Tensor): Contains all the negative boxes, - has shape (num_neg, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_bboxes (Tensor): Contains gt_boxes for - all positive samples, has shape (num_pos, 4), - the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_labels (Tensor): Contains gt_labels for - all positive samples, has shape (num_pos, ). 
- cfg (obj:`ConfigDict`): `train_cfg` of R-CNN. - - Returns: - Tuple[Tensor]: Ground truth for proposals in a single image. - Containing the following Tensors: - - - labels(Tensor): Gt_labels for all proposals, has - shape (num_proposals,). - - label_weights(Tensor): Labels_weights for all proposals, has - shape (num_proposals,). - - bbox_targets(Tensor):Regression target for all proposals, has - shape (num_proposals, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - - bbox_weights(Tensor):Regression weights for all proposals, - has shape (num_proposals, 4). - """ - num_pos = pos_bboxes.size(0) - num_neg = neg_bboxes.size(0) - num_samples = num_pos + num_neg - - # original implementation uses new_zeros since BG are set to be 0 - # now use empty & fill because BG cat_id = num_classes, - # FG cat_id = [0, num_classes-1] - labels = pos_bboxes.new_full((num_samples, ), - self.num_classes, - dtype=torch.long) - label_weights = pos_bboxes.new_zeros(num_samples) - bbox_targets = pos_bboxes.new_zeros(num_samples, 4) - bbox_weights = pos_bboxes.new_zeros(num_samples, 4) - if num_pos > 0: - labels[pos_inds] = pos_gt_labels - pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight - label_weights[pos_inds] = pos_weight - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - pos_bboxes, pos_gt_bboxes) - else: - pos_bbox_targets = pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1 - if num_neg > 0: - label_weights[neg_inds] = 1.0 - - return labels, label_weights, bbox_targets, bbox_weights - - def get_targets(self, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - concat=True): - """Calculate the ground truth for all samples in a batch according to - the sampling_results. - - Almost the same as the implementation in bbox_head, we passed - additional parameters pos_inds_list and neg_inds_list to - `_get_target_single` function. - - Args: - sampling_results (List[obj:SamplingResults]): Assign results of - all images in a batch after sampling. - gt_bboxes (list[Tensor]): Gt_bboxes of all images in a batch, - each tensor has shape (num_gt, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - gt_labels (list[Tensor]): Gt_labels of all images in a batch, - each tensor has shape (num_gt,). - rcnn_train_cfg (obj:`ConfigDict`): `train_cfg` of RCNN. - concat (bool): Whether to concatenate the results of all - the images in a single batch. - - Returns: - Tuple[Tensor]: Ground truth for proposals in a single image. - Containing the following list of Tensors: - - - labels (list[Tensor],Tensor): Gt_labels for all - proposals in a batch, each tensor in list has - shape (num_proposals,) when `concat=False`, otherwise just - a single tensor has shape (num_all_proposals,). - - label_weights (list[Tensor]): Labels_weights for - all proposals in a batch, each tensor in list has shape - (num_proposals,) when `concat=False`, otherwise just a - single tensor has shape (num_all_proposals,). - - bbox_targets (list[Tensor],Tensor): Regression target - for all proposals in a batch, each tensor in list has - shape (num_proposals, 4) when `concat=False`, otherwise - just a single tensor has shape (num_all_proposals, 4), - the last dimension 4 represents [tl_x, tl_y, br_x, br_y]. - - bbox_weights (list[tensor],Tensor): Regression weights for - all proposals in a batch, each tensor in list has shape - (num_proposals, 4) when `concat=False`, otherwise just a - single tensor has shape (num_all_proposals, 4). 
- """ - pos_inds_list = [res.pos_inds for res in sampling_results] - neg_inds_list = [res.neg_inds for res in sampling_results] - pos_bboxes_list = [res.pos_bboxes for res in sampling_results] - neg_bboxes_list = [res.neg_bboxes for res in sampling_results] - pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results] - pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results] - labels, label_weights, bbox_targets, bbox_weights = multi_apply( - self._get_target_single, - pos_inds_list, - neg_inds_list, - pos_bboxes_list, - neg_bboxes_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - cfg=rcnn_train_cfg) - if concat: - labels = torch.cat(labels, 0) - label_weights = torch.cat(label_weights, 0) - bbox_targets = torch.cat(bbox_targets, 0) - bbox_weights = torch.cat(bbox_weights, 0) - return labels, label_weights, bbox_targets, bbox_weights diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py deleted file mode 100644 index 2a38d591..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py +++ /dev/null @@ -1,178 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, ModuleList - -from mmdet.models.backbones.resnet import Bottleneck -from mmdet.models.builder import HEADS -from .bbox_head import BBoxHead - - -class BasicResBlock(BaseModule): - """Basic residual block. - - This block is a little different from the block in the ResNet backbone. - The kernel size of conv1 is 1 in this block while 3 in ResNet BasicBlock. - - Args: - in_channels (int): Channels of the input feature map. - out_channels (int): Channels of the output feature map. - conv_cfg (dict): The config dict for convolution layers. - norm_cfg (dict): The config dict for normalization layers. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - init_cfg=None): - super(BasicResBlock, self).__init__(init_cfg) - - # main path - self.conv1 = ConvModule( - in_channels, - in_channels, - kernel_size=3, - padding=1, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg) - self.conv2 = ConvModule( - in_channels, - out_channels, - kernel_size=1, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - # identity path - self.conv_identity = ConvModule( - in_channels, - out_channels, - kernel_size=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - identity = x - - x = self.conv1(x) - x = self.conv2(x) - - identity = self.conv_identity(identity) - out = x + identity - - out = self.relu(out) - return out - - -@HEADS.register_module() -class DoubleConvFCBBoxHead(BBoxHead): - r"""Bbox head used in Double-Head R-CNN - - .. 
code-block:: none - - /-> cls - /-> shared convs -> - \-> reg - roi features - /-> cls - \-> shared fc -> - \-> reg - """ # noqa: W605 - - def __init__(self, - num_convs=0, - num_fcs=0, - conv_out_channels=1024, - fc_out_channels=1024, - conv_cfg=None, - norm_cfg=dict(type='BN'), - init_cfg=dict( - type='Normal', - override=[ - dict(type='Normal', name='fc_cls', std=0.01), - dict(type='Normal', name='fc_reg', std=0.001), - dict( - type='Xavier', - name='fc_branch', - distribution='uniform') - ]), - **kwargs): - kwargs.setdefault('with_avg_pool', True) - super(DoubleConvFCBBoxHead, self).__init__(init_cfg=init_cfg, **kwargs) - assert self.with_avg_pool - assert num_convs > 0 - assert num_fcs > 0 - self.num_convs = num_convs - self.num_fcs = num_fcs - self.conv_out_channels = conv_out_channels - self.fc_out_channels = fc_out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - # increase the channel of input features - self.res_block = BasicResBlock(self.in_channels, - self.conv_out_channels) - - # add conv heads - self.conv_branch = self._add_conv_branch() - # add fc heads - self.fc_branch = self._add_fc_branch() - - out_dim_reg = 4 if self.reg_class_agnostic else 4 * self.num_classes - self.fc_reg = nn.Linear(self.conv_out_channels, out_dim_reg) - - self.fc_cls = nn.Linear(self.fc_out_channels, self.num_classes + 1) - self.relu = nn.ReLU(inplace=True) - - def _add_conv_branch(self): - """Add the fc branch which consists of a sequential of conv layers.""" - branch_convs = ModuleList() - for i in range(self.num_convs): - branch_convs.append( - Bottleneck( - inplanes=self.conv_out_channels, - planes=self.conv_out_channels // 4, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - return branch_convs - - def _add_fc_branch(self): - """Add the fc branch which consists of a sequential of fc layers.""" - branch_fcs = ModuleList() - for i in range(self.num_fcs): - fc_in_channels = ( - self.in_channels * - self.roi_feat_area if i == 0 else self.fc_out_channels) - branch_fcs.append(nn.Linear(fc_in_channels, self.fc_out_channels)) - return branch_fcs - - def forward(self, x_cls, x_reg): - # conv head - x_conv = self.res_block(x_reg) - - for conv in self.conv_branch: - x_conv = conv(x_conv) - - if self.with_avg_pool: - x_conv = self.avg_pool(x_conv) - - x_conv = x_conv.view(x_conv.size(0), -1) - bbox_pred = self.fc_reg(x_conv) - - # fc head - x_fc = x_cls.view(x_cls.size(0), -1) - for fc in self.fc_branch: - x_fc = self.relu(fc(x_fc)) - - cls_score = self.fc_cls(x_fc) - - return cls_score, bbox_pred diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/sabl_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/sabl_head.py deleted file mode 100644 index 0ce986b9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/sabl_head.py +++ /dev/null @@ -1,596 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, force_fp32 - -from mmdet.core import build_bbox_coder, multi_apply, multiclass_nms -from mmdet.models.builder import HEADS, build_loss -from mmdet.models.losses import accuracy - - -@HEADS.register_module() -class SABLHead(BaseModule): - """Side-Aware Boundary Localization (SABL) for RoI-Head. - - Side-Aware features are extracted by conv layers - with an attention mechanism. 
- Boundary Localization with Bucketing and Bucketing Guided Rescoring - are implemented in BucketingBBoxCoder. - - Please refer to https://arxiv.org/abs/1912.04260 for more details. - - Args: - cls_in_channels (int): Input channels of cls RoI feature. \ - Defaults to 256. - reg_in_channels (int): Input channels of reg RoI feature. \ - Defaults to 256. - roi_feat_size (int): Size of RoI features. Defaults to 7. - reg_feat_up_ratio (int): Upsample ratio of reg features. \ - Defaults to 2. - reg_pre_kernel (int): Kernel of 2D conv layers before \ - attention pooling. Defaults to 3. - reg_post_kernel (int): Kernel of 1D conv layers after \ - attention pooling. Defaults to 3. - reg_pre_num (int): Number of pre convs. Defaults to 2. - reg_post_num (int): Number of post convs. Defaults to 1. - num_classes (int): Number of classes in dataset. Defaults to 80. - cls_out_channels (int): Hidden channels in cls fcs. Defaults to 1024. - reg_offset_out_channels (int): Hidden and output channel \ - of reg offset branch. Defaults to 256. - reg_cls_out_channels (int): Hidden and output channel \ - of reg cls branch. Defaults to 256. - num_cls_fcs (int): Number of fcs for cls branch. Defaults to 1. - num_reg_fcs (int): Number of fcs for reg branch.. Defaults to 0. - reg_class_agnostic (bool): Class agnostic regression or not. \ - Defaults to True. - norm_cfg (dict): Config of norm layers. Defaults to None. - bbox_coder (dict): Config of bbox coder. Defaults 'BucketingBBoxCoder'. - loss_cls (dict): Config of classification loss. - loss_bbox_cls (dict): Config of classification loss for bbox branch. - loss_bbox_reg (dict): Config of regression loss for bbox branch. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - num_classes, - cls_in_channels=256, - reg_in_channels=256, - roi_feat_size=7, - reg_feat_up_ratio=2, - reg_pre_kernel=3, - reg_post_kernel=3, - reg_pre_num=2, - reg_post_num=1, - cls_out_channels=1024, - reg_offset_out_channels=256, - reg_cls_out_channels=256, - num_cls_fcs=1, - num_reg_fcs=0, - reg_class_agnostic=True, - norm_cfg=None, - bbox_coder=dict( - type='BucketingBBoxCoder', - num_buckets=14, - scale_factor=1.7), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox_reg=dict( - type='SmoothL1Loss', beta=0.1, loss_weight=1.0), - init_cfg=None): - super(SABLHead, self).__init__(init_cfg) - self.cls_in_channels = cls_in_channels - self.reg_in_channels = reg_in_channels - self.roi_feat_size = roi_feat_size - self.reg_feat_up_ratio = int(reg_feat_up_ratio) - self.num_buckets = bbox_coder['num_buckets'] - assert self.reg_feat_up_ratio // 2 >= 1 - self.up_reg_feat_size = roi_feat_size * self.reg_feat_up_ratio - assert self.up_reg_feat_size == bbox_coder['num_buckets'] - self.reg_pre_kernel = reg_pre_kernel - self.reg_post_kernel = reg_post_kernel - self.reg_pre_num = reg_pre_num - self.reg_post_num = reg_post_num - self.num_classes = num_classes - self.cls_out_channels = cls_out_channels - self.reg_offset_out_channels = reg_offset_out_channels - self.reg_cls_out_channels = reg_cls_out_channels - self.num_cls_fcs = num_cls_fcs - self.num_reg_fcs = num_reg_fcs - self.reg_class_agnostic = reg_class_agnostic - assert self.reg_class_agnostic - self.norm_cfg = norm_cfg - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox_cls = build_loss(loss_bbox_cls) - 
self.loss_bbox_reg = build_loss(loss_bbox_reg) - - self.cls_fcs = self._add_fc_branch(self.num_cls_fcs, - self.cls_in_channels, - self.roi_feat_size, - self.cls_out_channels) - - self.side_num = int(np.ceil(self.num_buckets / 2)) - - if self.reg_feat_up_ratio > 1: - self.upsample_x = nn.ConvTranspose1d( - reg_in_channels, - reg_in_channels, - self.reg_feat_up_ratio, - stride=self.reg_feat_up_ratio) - self.upsample_y = nn.ConvTranspose1d( - reg_in_channels, - reg_in_channels, - self.reg_feat_up_ratio, - stride=self.reg_feat_up_ratio) - - self.reg_pre_convs = nn.ModuleList() - for i in range(self.reg_pre_num): - reg_pre_conv = ConvModule( - reg_in_channels, - reg_in_channels, - kernel_size=reg_pre_kernel, - padding=reg_pre_kernel // 2, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU')) - self.reg_pre_convs.append(reg_pre_conv) - - self.reg_post_conv_xs = nn.ModuleList() - for i in range(self.reg_post_num): - reg_post_conv_x = ConvModule( - reg_in_channels, - reg_in_channels, - kernel_size=(1, reg_post_kernel), - padding=(0, reg_post_kernel // 2), - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU')) - self.reg_post_conv_xs.append(reg_post_conv_x) - self.reg_post_conv_ys = nn.ModuleList() - for i in range(self.reg_post_num): - reg_post_conv_y = ConvModule( - reg_in_channels, - reg_in_channels, - kernel_size=(reg_post_kernel, 1), - padding=(reg_post_kernel // 2, 0), - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU')) - self.reg_post_conv_ys.append(reg_post_conv_y) - - self.reg_conv_att_x = nn.Conv2d(reg_in_channels, 1, 1) - self.reg_conv_att_y = nn.Conv2d(reg_in_channels, 1, 1) - - self.fc_cls = nn.Linear(self.cls_out_channels, self.num_classes + 1) - self.relu = nn.ReLU(inplace=True) - - self.reg_cls_fcs = self._add_fc_branch(self.num_reg_fcs, - self.reg_in_channels, 1, - self.reg_cls_out_channels) - self.reg_offset_fcs = self._add_fc_branch(self.num_reg_fcs, - self.reg_in_channels, 1, - self.reg_offset_out_channels) - self.fc_reg_cls = nn.Linear(self.reg_cls_out_channels, 1) - self.fc_reg_offset = nn.Linear(self.reg_offset_out_channels, 1) - - if init_cfg is None: - self.init_cfg = [ - dict( - type='Xavier', - layer='Linear', - distribution='uniform', - override=[ - dict(type='Normal', name='reg_conv_att_x', std=0.01), - dict(type='Normal', name='reg_conv_att_y', std=0.01), - dict(type='Normal', name='fc_reg_cls', std=0.01), - dict(type='Normal', name='fc_cls', std=0.01), - dict(type='Normal', name='fc_reg_offset', std=0.001) - ]) - ] - if self.reg_feat_up_ratio > 1: - self.init_cfg += [ - dict( - type='Kaiming', - distribution='normal', - override=[ - dict(name='upsample_x'), - dict(name='upsample_y') - ]) - ] - - @property - def custom_cls_channels(self): - return getattr(self.loss_cls, 'custom_cls_channels', False) - - @property - def custom_activation(self): - return getattr(self.loss_cls, 'custom_activation', False) - - @property - def custom_accuracy(self): - return getattr(self.loss_cls, 'custom_accuracy', False) - - def _add_fc_branch(self, num_branch_fcs, in_channels, roi_feat_size, - fc_out_channels): - in_channels = in_channels * roi_feat_size * roi_feat_size - branch_fcs = nn.ModuleList() - for i in range(num_branch_fcs): - fc_in_channels = (in_channels if i == 0 else fc_out_channels) - branch_fcs.append(nn.Linear(fc_in_channels, fc_out_channels)) - return branch_fcs - - def cls_forward(self, cls_x): - cls_x = cls_x.view(cls_x.size(0), -1) - for fc in self.cls_fcs: - cls_x = self.relu(fc(cls_x)) - cls_score = self.fc_cls(cls_x) - return cls_score - - def attention_pool(self, reg_x): - 
"""Extract direction-specific features fx and fy with attention - methanism.""" - reg_fx = reg_x - reg_fy = reg_x - reg_fx_att = self.reg_conv_att_x(reg_fx).sigmoid() - reg_fy_att = self.reg_conv_att_y(reg_fy).sigmoid() - reg_fx_att = reg_fx_att / reg_fx_att.sum(dim=2).unsqueeze(2) - reg_fy_att = reg_fy_att / reg_fy_att.sum(dim=3).unsqueeze(3) - reg_fx = (reg_fx * reg_fx_att).sum(dim=2) - reg_fy = (reg_fy * reg_fy_att).sum(dim=3) - return reg_fx, reg_fy - - def side_aware_feature_extractor(self, reg_x): - """Refine and extract side-aware features without split them.""" - for reg_pre_conv in self.reg_pre_convs: - reg_x = reg_pre_conv(reg_x) - reg_fx, reg_fy = self.attention_pool(reg_x) - - if self.reg_post_num > 0: - reg_fx = reg_fx.unsqueeze(2) - reg_fy = reg_fy.unsqueeze(3) - for i in range(self.reg_post_num): - reg_fx = self.reg_post_conv_xs[i](reg_fx) - reg_fy = self.reg_post_conv_ys[i](reg_fy) - reg_fx = reg_fx.squeeze(2) - reg_fy = reg_fy.squeeze(3) - if self.reg_feat_up_ratio > 1: - reg_fx = self.relu(self.upsample_x(reg_fx)) - reg_fy = self.relu(self.upsample_y(reg_fy)) - reg_fx = torch.transpose(reg_fx, 1, 2) - reg_fy = torch.transpose(reg_fy, 1, 2) - return reg_fx.contiguous(), reg_fy.contiguous() - - def reg_pred(self, x, offset_fcs, cls_fcs): - """Predict bucketing estimation (cls_pred) and fine regression (offset - pred) with side-aware features.""" - x_offset = x.view(-1, self.reg_in_channels) - x_cls = x.view(-1, self.reg_in_channels) - - for fc in offset_fcs: - x_offset = self.relu(fc(x_offset)) - for fc in cls_fcs: - x_cls = self.relu(fc(x_cls)) - offset_pred = self.fc_reg_offset(x_offset) - cls_pred = self.fc_reg_cls(x_cls) - - offset_pred = offset_pred.view(x.size(0), -1) - cls_pred = cls_pred.view(x.size(0), -1) - - return offset_pred, cls_pred - - def side_aware_split(self, feat): - """Split side-aware features aligned with orders of bucketing - targets.""" - l_end = int(np.ceil(self.up_reg_feat_size / 2)) - r_start = int(np.floor(self.up_reg_feat_size / 2)) - feat_fl = feat[:, :l_end] - feat_fr = feat[:, r_start:].flip(dims=(1, )) - feat_fl = feat_fl.contiguous() - feat_fr = feat_fr.contiguous() - feat = torch.cat([feat_fl, feat_fr], dim=-1) - return feat - - def bbox_pred_split(self, bbox_pred, num_proposals_per_img): - """Split batch bbox prediction back to each image.""" - bucket_cls_preds, bucket_offset_preds = bbox_pred - bucket_cls_preds = bucket_cls_preds.split(num_proposals_per_img, 0) - bucket_offset_preds = bucket_offset_preds.split( - num_proposals_per_img, 0) - bbox_pred = tuple(zip(bucket_cls_preds, bucket_offset_preds)) - return bbox_pred - - def reg_forward(self, reg_x): - outs = self.side_aware_feature_extractor(reg_x) - edge_offset_preds = [] - edge_cls_preds = [] - reg_fx = outs[0] - reg_fy = outs[1] - offset_pred_x, cls_pred_x = self.reg_pred(reg_fx, self.reg_offset_fcs, - self.reg_cls_fcs) - offset_pred_y, cls_pred_y = self.reg_pred(reg_fy, self.reg_offset_fcs, - self.reg_cls_fcs) - offset_pred_x = self.side_aware_split(offset_pred_x) - offset_pred_y = self.side_aware_split(offset_pred_y) - cls_pred_x = self.side_aware_split(cls_pred_x) - cls_pred_y = self.side_aware_split(cls_pred_y) - edge_offset_preds = torch.cat([offset_pred_x, offset_pred_y], dim=-1) - edge_cls_preds = torch.cat([cls_pred_x, cls_pred_y], dim=-1) - - return (edge_cls_preds, edge_offset_preds) - - def forward(self, x): - - bbox_pred = self.reg_forward(x) - cls_score = self.cls_forward(x) - - return cls_score, bbox_pred - - def get_targets(self, sampling_results, gt_bboxes, 
gt_labels, - rcnn_train_cfg): - pos_proposals = [res.pos_bboxes for res in sampling_results] - neg_proposals = [res.neg_bboxes for res in sampling_results] - pos_gt_bboxes = [res.pos_gt_bboxes for res in sampling_results] - pos_gt_labels = [res.pos_gt_labels for res in sampling_results] - cls_reg_targets = self.bucket_target(pos_proposals, neg_proposals, - pos_gt_bboxes, pos_gt_labels, - rcnn_train_cfg) - (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) = cls_reg_targets - return (labels, label_weights, (bucket_cls_targets, - bucket_offset_targets), - (bucket_cls_weights, bucket_offset_weights)) - - def bucket_target(self, - pos_proposals_list, - neg_proposals_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - rcnn_train_cfg, - concat=True): - (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) = multi_apply( - self._bucket_target_single, - pos_proposals_list, - neg_proposals_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - cfg=rcnn_train_cfg) - - if concat: - labels = torch.cat(labels, 0) - label_weights = torch.cat(label_weights, 0) - bucket_cls_targets = torch.cat(bucket_cls_targets, 0) - bucket_cls_weights = torch.cat(bucket_cls_weights, 0) - bucket_offset_targets = torch.cat(bucket_offset_targets, 0) - bucket_offset_weights = torch.cat(bucket_offset_weights, 0) - return (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) - - def _bucket_target_single(self, pos_proposals, neg_proposals, - pos_gt_bboxes, pos_gt_labels, cfg): - """Compute bucketing estimation targets and fine regression targets for - a single image. - - Args: - pos_proposals (Tensor): positive proposals of a single image, - Shape (n_pos, 4) - neg_proposals (Tensor): negative proposals of a single image, - Shape (n_neg, 4). - pos_gt_bboxes (Tensor): gt bboxes assigned to positive proposals - of a single image, Shape (n_pos, 4). - pos_gt_labels (Tensor): gt labels assigned to positive proposals - of a single image, Shape (n_pos, ). - cfg (dict): Config of calculating targets - - Returns: - tuple: - - - labels (Tensor): Labels in a single image. \ - Shape (n,). - - label_weights (Tensor): Label weights in a single image.\ - Shape (n,) - - bucket_cls_targets (Tensor): Bucket cls targets in \ - a single image. Shape (n, num_buckets*2). - - bucket_cls_weights (Tensor): Bucket cls weights in \ - a single image. Shape (n, num_buckets*2). - - bucket_offset_targets (Tensor): Bucket offset targets \ - in a single image. Shape (n, num_buckets*2). - - bucket_offset_targets (Tensor): Bucket offset weights \ - in a single image. Shape (n, num_buckets*2). 
- """ - num_pos = pos_proposals.size(0) - num_neg = neg_proposals.size(0) - num_samples = num_pos + num_neg - labels = pos_gt_bboxes.new_full((num_samples, ), - self.num_classes, - dtype=torch.long) - label_weights = pos_proposals.new_zeros(num_samples) - bucket_cls_targets = pos_proposals.new_zeros(num_samples, - 4 * self.side_num) - bucket_cls_weights = pos_proposals.new_zeros(num_samples, - 4 * self.side_num) - bucket_offset_targets = pos_proposals.new_zeros( - num_samples, 4 * self.side_num) - bucket_offset_weights = pos_proposals.new_zeros( - num_samples, 4 * self.side_num) - if num_pos > 0: - labels[:num_pos] = pos_gt_labels - label_weights[:num_pos] = 1.0 - (pos_bucket_offset_targets, pos_bucket_offset_weights, - pos_bucket_cls_targets, - pos_bucket_cls_weights) = self.bbox_coder.encode( - pos_proposals, pos_gt_bboxes) - bucket_cls_targets[:num_pos, :] = pos_bucket_cls_targets - bucket_cls_weights[:num_pos, :] = pos_bucket_cls_weights - bucket_offset_targets[:num_pos, :] = pos_bucket_offset_targets - bucket_offset_weights[:num_pos, :] = pos_bucket_offset_weights - if num_neg > 0: - label_weights[-num_neg:] = 1.0 - return (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) - - def loss(self, - cls_score, - bbox_pred, - rois, - labels, - label_weights, - bbox_targets, - bbox_weights, - reduction_override=None): - losses = dict() - if cls_score is not None: - avg_factor = max(torch.sum(label_weights > 0).float().item(), 1.) - losses['loss_cls'] = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=avg_factor, - reduction_override=reduction_override) - losses['acc'] = accuracy(cls_score, labels) - - if bbox_pred is not None: - bucket_cls_preds, bucket_offset_preds = bbox_pred - bucket_cls_targets, bucket_offset_targets = bbox_targets - bucket_cls_weights, bucket_offset_weights = bbox_weights - # edge cls - bucket_cls_preds = bucket_cls_preds.view(-1, self.side_num) - bucket_cls_targets = bucket_cls_targets.view(-1, self.side_num) - bucket_cls_weights = bucket_cls_weights.view(-1, self.side_num) - losses['loss_bbox_cls'] = self.loss_bbox_cls( - bucket_cls_preds, - bucket_cls_targets, - bucket_cls_weights, - avg_factor=bucket_cls_targets.size(0), - reduction_override=reduction_override) - - losses['loss_bbox_reg'] = self.loss_bbox_reg( - bucket_offset_preds, - bucket_offset_targets, - bucket_offset_weights, - avg_factor=bucket_offset_targets.size(0), - reduction_override=reduction_override) - - return losses - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def get_bboxes(self, - rois, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=False, - cfg=None): - if isinstance(cls_score, list): - cls_score = sum(cls_score) / float(len(cls_score)) - scores = F.softmax(cls_score, dim=1) if cls_score is not None else None - - if bbox_pred is not None: - bboxes, confidences = self.bbox_coder.decode( - rois[:, 1:], bbox_pred, img_shape) - else: - bboxes = rois[:, 1:].clone() - confidences = None - if img_shape is not None: - bboxes[:, [0, 2]].clamp_(min=0, max=img_shape[1] - 1) - bboxes[:, [1, 3]].clamp_(min=0, max=img_shape[0] - 1) - - if rescale and bboxes.size(0) > 0: - if isinstance(scale_factor, float): - bboxes /= scale_factor - else: - bboxes /= torch.from_numpy(scale_factor).to(bboxes.device) - - if cfg is None: - return bboxes, scores - else: - det_bboxes, det_labels = multiclass_nms( - bboxes, - scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=confidences) - - return 
det_bboxes, det_labels - - @force_fp32(apply_to=('bbox_preds', )) - def refine_bboxes(self, rois, labels, bbox_preds, pos_is_gts, img_metas): - """Refine bboxes during training. - - Args: - rois (Tensor): Shape (n*bs, 5), where n is image number per GPU, - and bs is the sampled RoIs per image. - labels (Tensor): Shape (n*bs, ). - bbox_preds (list[Tensor]): Shape [(n*bs, num_buckets*2), \ - (n*bs, num_buckets*2)]. - pos_is_gts (list[Tensor]): Flags indicating if each positive bbox - is a gt bbox. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Refined bboxes of each image in a mini-batch. - """ - img_ids = rois[:, 0].long().unique(sorted=True) - assert img_ids.numel() == len(img_metas) - - bboxes_list = [] - for i in range(len(img_metas)): - inds = torch.nonzero( - rois[:, 0] == i, as_tuple=False).squeeze(dim=1) - num_rois = inds.numel() - - bboxes_ = rois[inds, 1:] - label_ = labels[inds] - edge_cls_preds, edge_offset_preds = bbox_preds - edge_cls_preds_ = edge_cls_preds[inds] - edge_offset_preds_ = edge_offset_preds[inds] - bbox_pred_ = [edge_cls_preds_, edge_offset_preds_] - img_meta_ = img_metas[i] - pos_is_gts_ = pos_is_gts[i] - - bboxes = self.regress_by_class(bboxes_, label_, bbox_pred_, - img_meta_) - # filter gt bboxes - pos_keep = 1 - pos_is_gts_ - keep_inds = pos_is_gts_.new_ones(num_rois) - keep_inds[:len(pos_is_gts_)] = pos_keep - - bboxes_list.append(bboxes[keep_inds.type(torch.bool)]) - - return bboxes_list - - @force_fp32(apply_to=('bbox_pred', )) - def regress_by_class(self, rois, label, bbox_pred, img_meta): - """Regress the bbox for the predicted class. Used in Cascade R-CNN. - - Args: - rois (Tensor): shape (n, 4) or (n, 5) - label (Tensor): shape (n, ) - bbox_pred (list[Tensor]): shape [(n, num_buckets *2), \ - (n, num_buckets *2)] - img_meta (dict): Image meta info. - - Returns: - Tensor: Regressed bboxes, the same shape as input rois. - """ - assert rois.size(1) == 4 or rois.size(1) == 5 - - if rois.size(1) == 4: - new_rois, _ = self.bbox_coder.decode(rois, bbox_pred, - img_meta['img_shape']) - else: - bboxes, _ = self.bbox_coder.decode(rois[:, 1:], bbox_pred, - img_meta['img_shape']) - new_rois = torch.cat((rois[:, [0]], bboxes), dim=1) - - return new_rois diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py deleted file mode 100644 index cf39ebef..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.builder import HEADS -from .convfc_bbox_head import ConvFCBBoxHead - - -@HEADS.register_module() -class SCNetBBoxHead(ConvFCBBoxHead): - """BBox head for `SCNet `_. - - This inherits ``ConvFCBBoxHead`` with modified forward() function, allow us - to get intermediate shared feature. 
- """ - - def _forward_shared(self, x): - """Forward function for shared part.""" - if self.num_shared_convs > 0: - for conv in self.shared_convs: - x = conv(x) - - if self.num_shared_fcs > 0: - if self.with_avg_pool: - x = self.avg_pool(x) - - x = x.flatten(1) - - for fc in self.shared_fcs: - x = self.relu(fc(x)) - - return x - - def _forward_cls_reg(self, x): - """Forward function for classification and regression parts.""" - x_cls = x - x_reg = x - - for conv in self.cls_convs: - x_cls = conv(x_cls) - if x_cls.dim() > 2: - if self.with_avg_pool: - x_cls = self.avg_pool(x_cls) - x_cls = x_cls.flatten(1) - for fc in self.cls_fcs: - x_cls = self.relu(fc(x_cls)) - - for conv in self.reg_convs: - x_reg = conv(x_reg) - if x_reg.dim() > 2: - if self.with_avg_pool: - x_reg = self.avg_pool(x_reg) - x_reg = x_reg.flatten(1) - for fc in self.reg_fcs: - x_reg = self.relu(fc(x_reg)) - - cls_score = self.fc_cls(x_cls) if self.with_cls else None - bbox_pred = self.fc_reg(x_reg) if self.with_reg else None - - return cls_score, bbox_pred - - def forward(self, x, return_shared_feat=False): - """Forward function. - - Args: - x (Tensor): input features - return_shared_feat (bool): If True, return cls-reg-shared feature. - - Return: - out (tuple[Tensor]): contain ``cls_score`` and ``bbox_pred``, - if ``return_shared_feat`` is True, append ``x_shared`` to the - returned tuple. - """ - x_shared = self._forward_shared(x) - out = self._forward_cls_reg(x_shared) - - if return_shared_feat: - out += (x_shared, ) - - return out diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/cascade_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/cascade_roi_head.py deleted file mode 100644 index e17313f2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/cascade_roi_head.py +++ /dev/null @@ -1,631 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -from mmcv.runner import ModuleList - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, build_assigner, - build_sampler, merge_aug_bboxes, merge_aug_masks, - multiclass_nms) -from ..builder import HEADS, build_head, build_roi_extractor -from .base_roi_head import BaseRoIHead -from .test_mixins import BBoxTestMixin, MaskTestMixin - - -@HEADS.register_module() -class CascadeRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin): - """Cascade roi head including one bbox head and one mask head. - - https://arxiv.org/abs/1712.00726 - """ - - def __init__(self, - num_stages, - stage_loss_weights, - bbox_roi_extractor=None, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - shared_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - assert bbox_roi_extractor is not None - assert bbox_head is not None - assert shared_head is None, \ - 'Shared head is not supported in Cascade RCNN anymore' - - self.num_stages = num_stages - self.stage_loss_weights = stage_loss_weights - super(CascadeRoIHead, self).__init__( - bbox_roi_extractor=bbox_roi_extractor, - bbox_head=bbox_head, - mask_roi_extractor=mask_roi_extractor, - mask_head=mask_head, - shared_head=shared_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def init_bbox_head(self, bbox_roi_extractor, bbox_head): - """Initialize box head and box roi extractor. - - Args: - bbox_roi_extractor (dict): Config of box roi extractor. - bbox_head (dict): Config of box in box head. 
- """ - self.bbox_roi_extractor = ModuleList() - self.bbox_head = ModuleList() - if not isinstance(bbox_roi_extractor, list): - bbox_roi_extractor = [ - bbox_roi_extractor for _ in range(self.num_stages) - ] - if not isinstance(bbox_head, list): - bbox_head = [bbox_head for _ in range(self.num_stages)] - assert len(bbox_roi_extractor) == len(bbox_head) == self.num_stages - for roi_extractor, head in zip(bbox_roi_extractor, bbox_head): - self.bbox_roi_extractor.append(build_roi_extractor(roi_extractor)) - self.bbox_head.append(build_head(head)) - - def init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize mask head and mask roi extractor. - - Args: - mask_roi_extractor (dict): Config of mask roi extractor. - mask_head (dict): Config of mask in mask head. - """ - self.mask_head = nn.ModuleList() - if not isinstance(mask_head, list): - mask_head = [mask_head for _ in range(self.num_stages)] - assert len(mask_head) == self.num_stages - for head in mask_head: - self.mask_head.append(build_head(head)) - if mask_roi_extractor is not None: - self.share_roi_extractor = False - self.mask_roi_extractor = ModuleList() - if not isinstance(mask_roi_extractor, list): - mask_roi_extractor = [ - mask_roi_extractor for _ in range(self.num_stages) - ] - assert len(mask_roi_extractor) == self.num_stages - for roi_extractor in mask_roi_extractor: - self.mask_roi_extractor.append( - build_roi_extractor(roi_extractor)) - else: - self.share_roi_extractor = True - self.mask_roi_extractor = self.bbox_roi_extractor - - def init_assigner_sampler(self): - """Initialize assigner and sampler for each stage.""" - self.bbox_assigner = [] - self.bbox_sampler = [] - if self.train_cfg is not None: - for idx, rcnn_train_cfg in enumerate(self.train_cfg): - self.bbox_assigner.append( - build_assigner(rcnn_train_cfg.assigner)) - self.current_stage = idx - self.bbox_sampler.append( - build_sampler(rcnn_train_cfg.sampler, context=self)) - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask heads - if self.with_mask: - mask_rois = rois[:100] - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def _bbox_forward(self, stage, x, rois): - """Box head forward function used in both training and testing.""" - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - cls_score, bbox_pred = bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, stage, x, sampling_results, gt_bboxes, - gt_labels, rcnn_train_cfg): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(stage, x, rois) - bbox_targets = self.bbox_head[stage].get_targets( - sampling_results, gt_bboxes, gt_labels, rcnn_train_cfg) - loss_bbox = self.bbox_head[stage].loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update( - loss_bbox=loss_bbox, rois=rois, bbox_targets=bbox_targets) - 
return bbox_results - - def _mask_forward(self, stage, x, rois): - """Mask head forward function used in both training and testing.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - mask_pred = mask_head(mask_feats) - - mask_results = dict(mask_pred=mask_pred) - return mask_results - - def _mask_forward_train(self, - stage, - x, - sampling_results, - gt_masks, - rcnn_train_cfg, - bbox_feats=None): - """Run forward function and calculate loss for mask head in - training.""" - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward(stage, x, pos_rois) - - mask_targets = self.mask_head[stage].get_targets( - sampling_results, gt_masks, rcnn_train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head[stage].loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results.update(loss_mask=loss_mask) - return mask_results - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): list of region proposals. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - losses = dict() - for i in range(self.num_stages): - self.current_stage = i - rcnn_train_cfg = self.train_cfg[i] - lw = self.stage_loss_weights[i] - - # assign gts and sample proposals - sampling_results = [] - if self.with_bbox or self.with_mask: - bbox_assigner = self.bbox_assigner[i] - bbox_sampler = self.bbox_sampler[i] - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - - for j in range(num_imgs): - assign_result = bbox_assigner.assign( - proposal_list[j], gt_bboxes[j], gt_bboxes_ignore[j], - gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - # bbox head forward and loss - bbox_results = self._bbox_forward_train(i, x, sampling_results, - gt_bboxes, gt_labels, - rcnn_train_cfg) - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train( - i, x, sampling_results, gt_masks, rcnn_train_cfg, - bbox_results['bbox_feats']) - for name, value in mask_results['loss_mask'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # refine bboxes - if i < self.num_stages - 1: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - # bbox_targets is a tuple - roi_labels = bbox_results['bbox_targets'][0] - with torch.no_grad(): - cls_score = bbox_results['cls_score'] - if self.bbox_head[i].custom_activation: - cls_score = self.bbox_head[i].loss_cls.get_activation( - cls_score) - - # Empty proposal. - if cls_score.numel() == 0: - break - - roi_labels = torch.where( - roi_labels == self.bbox_head[i].num_classes, - cls_score[:, :-1].argmax(1), roi_labels) - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, img_metas) - - return losses - - def simple_test(self, x, proposal_list, img_metas, rescale=False): - """Test without augmentation. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (batch_size, c, h, w). - proposal_list (list(Tensor)): Proposals from rpn head. - Each has shape (num_proposals, 5), last dimension - 5 represent (x1, y1, x2, y2, score). - img_metas (list[dict]): Meta information of images. - rescale (bool): Whether to rescale the results to - the original image. Default: True. - - Returns: - list[list[np.ndarray]] or list[tuple]: When no mask branch, - it is bbox results of each image and classes with type - `list[list[np.ndarray]]`. The outer list - corresponds to each image. The inner list - corresponds to each class. When the model has mask branch, - it contains bbox results and mask results. - The outer list corresponds to each image, and first element - of tuple is bbox results, second element is mask results. - """ - assert self.with_bbox, 'Bbox head must be implemented.' 
- num_imgs = len(proposal_list) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # "ms" in variable names means multi-stage - ms_bbox_result = {} - ms_segm_result = {} - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - rois = bbox2roi(proposal_list) - - if rois.shape[0] == 0: - # There is no proposal in the whole batch - bbox_results = [[ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.bbox_head[-1].num_classes) - ]] * num_imgs - - if self.with_mask: - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - results = list(zip(bbox_results, segm_results)) - else: - results = bbox_results - - return results - - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple( - len(proposals) for proposals in proposal_list) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - if isinstance(bbox_pred, torch.Tensor): - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - else: - bbox_pred = self.bbox_head[i].bbox_pred_split( - bbox_pred, num_proposals_per_img) - ms_scores.append(cls_score) - - if i < self.num_stages - 1: - if self.bbox_head[i].custom_activation: - cls_score = [ - self.bbox_head[i].loss_cls.get_activation(s) - for s in cls_score - ] - refine_rois_list = [] - for j in range(num_imgs): - if rois[j].shape[0] > 0: - bbox_label = cls_score[j][:, :-1].argmax(dim=1) - refined_rois = self.bbox_head[i].regress_by_class( - rois[j], bbox_label, bbox_pred[j], img_metas[j]) - refine_rois_list.append(refined_rois) - rois = torch.cat(refine_rois_list) - - # average scores of each image by stages - cls_score = [ - sum([score[i] for score in ms_scores]) / float(len(ms_scores)) - for i in range(num_imgs) - ] - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(num_imgs): - det_bbox, det_label = self.bbox_head[-1].get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head[-1].num_classes) - for i in range(num_imgs) - ] - ms_bbox_result['ensemble'] = bbox_results - - if self.with_mask: - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - else: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - mask_rois = bbox2roi(_bboxes) - num_mask_rois_per_img = tuple( - _bbox.size(0) for _bbox in _bboxes) - aug_masks = [] - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - mask_pred = mask_results['mask_pred'] - # split batch mask prediction back to each image - mask_pred = mask_pred.split(num_mask_rois_per_img, 0) - aug_masks.append([ - 
m.sigmoid().cpu().detach().numpy() for m in mask_pred - ]) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] - for _ in range(self.mask_head[-1].num_classes)]) - else: - aug_mask = [mask[i] for mask in aug_masks] - merged_masks = merge_aug_masks( - aug_mask, [[img_metas[i]]] * self.num_stages, - rcnn_test_cfg) - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, _bboxes[i], det_labels[i], - rcnn_test_cfg, ori_shapes[i], scale_factors[i], - rescale) - segm_results.append(segm_result) - ms_segm_result['ensemble'] = segm_results - - if self.with_mask: - results = list( - zip(ms_bbox_result['ensemble'], ms_segm_result['ensemble'])) - else: - results = ms_bbox_result['ensemble'] - - return results - - def aug_test(self, features, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - rcnn_test_cfg = self.test_cfg - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(features, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - # "ms" in variable names means multi-stage - ms_scores = [] - - rois = bbox2roi([proposals]) - - if rois.shape[0] == 0: - # There is no proposal in the single image - aug_bboxes.append(rois.new_zeros(0, 4)) - aug_scores.append(rois.new_zeros(0, 1)) - continue - - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - ms_scores.append(bbox_results['cls_score']) - - if i < self.num_stages - 1: - cls_score = bbox_results['cls_score'] - if self.bbox_head[i].custom_activation: - cls_score = self.bbox_head[i].loss_cls.get_activation( - cls_score) - bbox_label = cls_score[:, :-1].argmax(dim=1) - rois = self.bbox_head[i].regress_by_class( - rois, bbox_label, bbox_results['bbox_pred'], - img_meta[0]) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bboxes, scores = self.bbox_head[-1].get_bboxes( - rois, - cls_score, - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - - bbox_result = bbox2result(det_bboxes, det_labels, - self.bbox_head[-1].num_classes) - - if self.with_mask: - if det_bboxes.shape[0] == 0: - segm_result = [[] - for _ in range(self.mask_head[-1].num_classes)] - else: - aug_masks = [] - aug_img_metas = [] - for x, img_meta in zip(features, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - 
aug_img_metas.append(img_meta) - merged_masks = merge_aug_masks(aug_masks, aug_img_metas, - self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - dummy_scale_factor = np.ones(4) - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - rcnn_test_cfg, - ori_shape, - scale_factor=dummy_scale_factor, - rescale=False) - return [(bbox_result, segm_result)] - else: - return [bbox_result] - - def onnx_export(self, x, proposals, img_metas): - - assert self.with_bbox, 'Bbox head must be implemented.' - assert proposals.shape[0] == 1, 'Only support one input image ' \ - 'while in exporting to ONNX' - # remove the scores - rois = proposals[..., :-1] - batch_size = rois.shape[0] - num_proposals_per_img = rois.shape[1] - # Eliminate the batch dimension - rois = rois.view(-1, 4) - - # add dummy batch index - rois = torch.cat([rois.new_zeros(rois.shape[0], 1), rois], dim=-1) - - max_shape = img_metas[0]['img_shape_for_onnx'] - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - # Recover the batch dimension - rois = rois.reshape(batch_size, num_proposals_per_img, - rois.size(-1)) - cls_score = cls_score.reshape(batch_size, num_proposals_per_img, - cls_score.size(-1)) - bbox_pred = bbox_pred.reshape(batch_size, num_proposals_per_img, 4) - ms_scores.append(cls_score) - if i < self.num_stages - 1: - assert self.bbox_head[i].reg_class_agnostic - new_rois = self.bbox_head[i].bbox_coder.decode( - rois[..., 1:], bbox_pred, max_shape=max_shape) - rois = new_rois.reshape(-1, new_rois.shape[-1]) - # add dummy batch index - rois = torch.cat([rois.new_zeros(rois.shape[0], 1), rois], - dim=-1) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bbox_pred = bbox_pred.reshape(batch_size, num_proposals_per_img, 4) - rois = rois.reshape(batch_size, num_proposals_per_img, -1) - det_bboxes, det_labels = self.bbox_head[-1].onnx_export( - rois, cls_score, bbox_pred, max_shape, cfg=rcnn_test_cfg) - - if not self.with_mask: - return det_bboxes, det_labels - else: - batch_index = torch.arange( - det_bboxes.size(0), - device=det_bboxes.device).float().view(-1, 1, 1).expand( - det_bboxes.size(0), det_bboxes.size(1), 1) - rois = det_bboxes[..., :4] - mask_rois = torch.cat([batch_index, rois], dim=-1) - mask_rois = mask_rois.view(-1, 5) - aug_masks = [] - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - mask_pred = mask_results['mask_pred'] - aug_masks.append(mask_pred) - max_shape = img_metas[0]['img_shape_for_onnx'] - # calculate the mean of masks from several stage - mask_pred = sum(aug_masks) / len(aug_masks) - segm_results = self.mask_head[-1].onnx_export( - mask_pred, rois.reshape(-1, 4), det_labels.reshape(-1), - self.test_cfg, max_shape) - segm_results = segm_results.reshape(batch_size, - det_bboxes.shape[1], - max_shape[0], max_shape[1]) - return det_bboxes, det_labels, segm_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/double_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/double_roi_head.py deleted file mode 100644 index 895b5d30..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/double_roi_head.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import HEADS -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class DoubleHeadRoIHead(StandardRoIHead): - """RoI head for Double Head RCNN. - - https://arxiv.org/abs/1904.06493 - """ - - def __init__(self, reg_roi_scale_factor, **kwargs): - super(DoubleHeadRoIHead, self).__init__(**kwargs) - self.reg_roi_scale_factor = reg_roi_scale_factor - - def _bbox_forward(self, x, rois): - """Box head forward function used in both training and testing time.""" - bbox_cls_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - bbox_reg_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], - rois, - roi_scale_factor=self.reg_roi_scale_factor) - if self.with_shared_head: - bbox_cls_feats = self.shared_head(bbox_cls_feats) - bbox_reg_feats = self.shared_head(bbox_reg_feats) - cls_score, bbox_pred = self.bbox_head(bbox_cls_feats, bbox_reg_feats) - - bbox_results = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - bbox_feats=bbox_cls_feats) - return bbox_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/dynamic_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/dynamic_roi_head.py deleted file mode 100644 index 4c2b6cda..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/dynamic_roi_head.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core import bbox2roi -from mmdet.models.losses import SmoothL1Loss -from ..builder import HEADS -from .standard_roi_head import StandardRoIHead - -EPS = 1e-15 - - -@HEADS.register_module() -class DynamicRoIHead(StandardRoIHead): - """RoI head for `Dynamic R-CNN `_.""" - - def __init__(self, **kwargs): - super(DynamicRoIHead, self).__init__(**kwargs) - assert isinstance(self.bbox_head.loss_bbox, SmoothL1Loss) - # the IoU history of the past `update_iter_interval` iterations - self.iou_history = [] - # the beta history of the past `update_iter_interval` iterations - self.beta_history = [] - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """Forward function for training. - - Args: - x (list[Tensor]): list of multi-level img features. - - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - proposals (list[Tensors]): list of region proposals. - - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - - gt_labels (list[Tensor]): class indices corresponding to each box - - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - cur_iou = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - # record the `iou_topk`-th largest IoU in an image - iou_topk = min(self.train_cfg.dynamic_rcnn.iou_topk, - len(assign_result.max_overlaps)) - ious, _ = torch.topk(assign_result.max_overlaps, iou_topk) - cur_iou.append(ious[-1].item()) - sampling_results.append(sampling_result) - # average the current IoUs over images - cur_iou = np.mean(cur_iou) - self.iou_history.append(cur_iou) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - # update IoU threshold and SmoothL1 beta - update_iter_interval = self.train_cfg.dynamic_rcnn.update_iter_interval - if len(self.iou_history) % update_iter_interval == 0: - new_iou_thr, new_beta = self.update_hyperparameters() - - return losses - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - num_imgs = len(img_metas) - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - # record the `beta_topk`-th smallest target - # `bbox_targets[2]` and `bbox_targets[3]` stand for bbox_targets - # and bbox_weights, respectively - pos_inds = bbox_targets[3][:, 0].nonzero().squeeze(1) - num_pos = len(pos_inds) - cur_target = bbox_targets[2][pos_inds, :2].abs().mean(dim=1) - beta_topk = min(self.train_cfg.dynamic_rcnn.beta_topk * num_imgs, - num_pos) - cur_target = torch.kthvalue(cur_target, beta_topk)[0].item() - self.beta_history.append(cur_target) - loss_bbox = self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def update_hyperparameters(self): - """Update hyperparameters like IoU thresholds for assigner and beta for - SmoothL1 loss based on the training statistics. - - Returns: - tuple[float]: the updated ``iou_thr`` and ``beta``. 
- """ - new_iou_thr = max(self.train_cfg.dynamic_rcnn.initial_iou, - np.mean(self.iou_history)) - self.iou_history = [] - self.bbox_assigner.pos_iou_thr = new_iou_thr - self.bbox_assigner.neg_iou_thr = new_iou_thr - self.bbox_assigner.min_pos_iou = new_iou_thr - if (np.median(self.beta_history) < EPS): - # avoid 0 or too small value for new_beta - new_beta = self.bbox_head.loss_bbox.beta - else: - new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta, - np.median(self.beta_history)) - self.beta_history = [] - self.bbox_head.loss_bbox.beta = new_beta - return new_iou_thr, new_beta diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/grid_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/grid_roi_head.py deleted file mode 100644 index 333f6297..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/grid_roi_head.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core import bbox2result, bbox2roi -from ..builder import HEADS, build_head, build_roi_extractor -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class GridRoIHead(StandardRoIHead): - """Grid roi head for Grid R-CNN. - - https://arxiv.org/abs/1811.12030 - """ - - def __init__(self, grid_roi_extractor, grid_head, **kwargs): - assert grid_head is not None - super(GridRoIHead, self).__init__(**kwargs) - if grid_roi_extractor is not None: - self.grid_roi_extractor = build_roi_extractor(grid_roi_extractor) - self.share_roi_extractor = False - else: - self.share_roi_extractor = True - self.grid_roi_extractor = self.bbox_roi_extractor - self.grid_head = build_head(grid_head) - - def _random_jitter(self, sampling_results, img_metas, amplitude=0.15): - """Ramdom jitter positive proposals for training.""" - for sampling_result, img_meta in zip(sampling_results, img_metas): - bboxes = sampling_result.pos_bboxes - random_offsets = bboxes.new_empty(bboxes.shape[0], 4).uniform_( - -amplitude, amplitude) - # before jittering - cxcy = (bboxes[:, 2:4] + bboxes[:, :2]) / 2 - wh = (bboxes[:, 2:4] - bboxes[:, :2]).abs() - # after jittering - new_cxcy = cxcy + wh * random_offsets[:, :2] - new_wh = wh * (1 + random_offsets[:, 2:]) - # xywh to xyxy - new_x1y1 = (new_cxcy - new_wh / 2) - new_x2y2 = (new_cxcy + new_wh / 2) - new_bboxes = torch.cat([new_x1y1, new_x2y2], dim=1) - # clip bboxes - max_shape = img_meta['img_shape'] - if max_shape is not None: - new_bboxes[:, 0::2].clamp_(min=0, max=max_shape[1] - 1) - new_bboxes[:, 1::2].clamp_(min=0, max=max_shape[0] - 1) - - sampling_result.pos_bboxes = new_bboxes - return sampling_results - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - bbox_results = self._bbox_forward(x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - - # grid head - grid_rois = rois[:100] - grid_feats = self.grid_roi_extractor( - x[:self.grid_roi_extractor.num_inputs], grid_rois) - if self.with_shared_head: - grid_feats = self.shared_head(grid_feats) - grid_pred = self.grid_head(grid_feats) - outs = outs + (grid_pred, ) - - # mask head - if self.with_mask: - mask_rois = rois[:100] - mask_results = self._mask_forward(x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - """Run forward function and calculate loss for box head in 
training.""" - bbox_results = super(GridRoIHead, - self)._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - - # Grid head forward and loss - sampling_results = self._random_jitter(sampling_results, img_metas) - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - - # GN in head does not support zero shape input - if pos_rois.shape[0] == 0: - return bbox_results - - grid_feats = self.grid_roi_extractor( - x[:self.grid_roi_extractor.num_inputs], pos_rois) - if self.with_shared_head: - grid_feats = self.shared_head(grid_feats) - # Accelerate training - max_sample_num_grid = self.train_cfg.get('max_num_grid', 192) - sample_idx = torch.randperm( - grid_feats.shape[0])[:min(grid_feats.shape[0], max_sample_num_grid - )] - grid_feats = grid_feats[sample_idx] - - grid_pred = self.grid_head(grid_feats) - - grid_targets = self.grid_head.get_targets(sampling_results, - self.train_cfg) - grid_targets = grid_targets[sample_idx] - - loss_grid = self.grid_head.loss(grid_pred, grid_targets) - - bbox_results['loss_bbox'].update(loss_grid) - return bbox_results - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=False) - # pack rois into bboxes - grid_rois = bbox2roi([det_bbox[:, :4] for det_bbox in det_bboxes]) - if grid_rois.shape[0] != 0: - grid_feats = self.grid_roi_extractor( - x[:len(self.grid_roi_extractor.featmap_strides)], grid_rois) - self.grid_head.test_mode = True - grid_pred = self.grid_head(grid_feats) - # split batch grid head prediction back to each image - num_roi_per_img = tuple(len(det_bbox) for det_bbox in det_bboxes) - grid_pred = { - k: v.split(num_roi_per_img, 0) - for k, v in grid_pred.items() - } - - # apply bbox post-processing to each image individually - bbox_results = [] - num_imgs = len(det_bboxes) - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - bbox_results.append([ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.bbox_head.num_classes) - ]) - else: - det_bbox = self.grid_head.get_bboxes( - det_bboxes[i], grid_pred['fused'][i], [img_metas[i]]) - if rescale: - det_bbox[:, :4] /= img_metas[i]['scale_factor'] - bbox_results.append( - bbox2result(det_bbox, det_labels[i], - self.bbox_head.num_classes)) - else: - bbox_results = [[ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.bbox_head.num_classes) - ] for _ in range(len(det_bboxes))] - - if not self.with_mask: - return bbox_results - else: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return list(zip(bbox_results, segm_results)) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/htc_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/htc_roi_head.py deleted file mode 100644 index 08bc1dbf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/htc_roi_head.py +++ /dev/null @@ -1,628 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch -import torch.nn.functional as F - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, merge_aug_bboxes, - merge_aug_masks, multiclass_nms) -from ..builder import HEADS, build_head, build_roi_extractor -from ..utils.brick_wrappers import adaptive_avg_pool2d -from .cascade_roi_head import CascadeRoIHead - - -@HEADS.register_module() -class HybridTaskCascadeRoIHead(CascadeRoIHead): - """Hybrid task cascade roi head including one bbox head and one mask head. - - https://arxiv.org/abs/1901.07518 - """ - - def __init__(self, - num_stages, - stage_loss_weights, - semantic_roi_extractor=None, - semantic_head=None, - semantic_fusion=('bbox', 'mask'), - interleaved=True, - mask_info_flow=True, - **kwargs): - super(HybridTaskCascadeRoIHead, - self).__init__(num_stages, stage_loss_weights, **kwargs) - assert self.with_bbox - assert not self.with_shared_head # shared head is not supported - - if semantic_head is not None: - self.semantic_roi_extractor = build_roi_extractor( - semantic_roi_extractor) - self.semantic_head = build_head(semantic_head) - - self.semantic_fusion = semantic_fusion - self.interleaved = interleaved - self.mask_info_flow = mask_info_flow - - @property - def with_semantic(self): - """bool: whether the head has semantic head""" - if hasattr(self, 'semantic_head') and self.semantic_head is not None: - return True - else: - return False - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - outs = () - # semantic head - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - # bbox heads - rois = bbox2roi([proposals]) - for i in range(self.num_stages): - bbox_results = self._bbox_forward( - i, x, rois, semantic_feat=semantic_feat) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask heads - if self.with_mask: - mask_rois = rois[:100] - mask_roi_extractor = self.mask_roi_extractor[-1] - mask_feats = mask_roi_extractor( - x[:len(mask_roi_extractor.featmap_strides)], mask_rois) - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor( - [semantic_feat], mask_rois) - mask_feats += mask_semantic_feat - last_feat = None - for i in range(self.num_stages): - mask_head = self.mask_head[i] - if self.mask_info_flow: - mask_pred, last_feat = mask_head(mask_feats, last_feat) - else: - mask_pred = mask_head(mask_feats) - outs = outs + (mask_pred, ) - return outs - - def _bbox_forward_train(self, - stage, - x, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - semantic_feat=None): - """Run forward function and calculate loss for box head in training.""" - bbox_head = self.bbox_head[stage] - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward( - stage, x, rois, semantic_feat=semantic_feat) - - bbox_targets = bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, rcnn_train_cfg) - loss_bbox = bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update( - loss_bbox=loss_bbox, - rois=rois, - bbox_targets=bbox_targets, - ) - return bbox_results - - def _mask_forward_train(self, - stage, - x, - sampling_results, - gt_masks, - rcnn_train_cfg, - semantic_feat=None): - """Run forward function and calculate loss for mask head in - training.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - pos_rois = bbox2roi([res.pos_bboxes for res in 
sampling_results]) - mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - pos_rois) - - # semantic feature fusion - # element-wise sum for original features and pooled semantic features - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor([semantic_feat], - pos_rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - - # mask information flow - # forward all previous mask heads to obtain last_feat, and fuse it - # with the normal mask feature - if self.mask_info_flow: - last_feat = None - for i in range(stage): - last_feat = self.mask_head[i]( - mask_feats, last_feat, return_logits=False) - mask_pred = mask_head(mask_feats, last_feat, return_feat=False) - else: - mask_pred = mask_head(mask_feats, return_feat=False) - - mask_targets = mask_head.get_targets(sampling_results, gt_masks, - rcnn_train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = mask_head.loss(mask_pred, mask_targets, pos_labels) - - mask_results = dict(loss_mask=loss_mask) - return mask_results - - def _bbox_forward(self, stage, x, rois, semantic_feat=None): - """Box head forward function used in both training and testing.""" - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor( - x[:len(bbox_roi_extractor.featmap_strides)], rois) - if self.with_semantic and 'bbox' in self.semantic_fusion: - bbox_semantic_feat = self.semantic_roi_extractor([semantic_feat], - rois) - if bbox_semantic_feat.shape[-2:] != bbox_feats.shape[-2:]: - bbox_semantic_feat = adaptive_avg_pool2d( - bbox_semantic_feat, bbox_feats.shape[-2:]) - bbox_feats += bbox_semantic_feat - cls_score, bbox_pred = bbox_head(bbox_feats) - - bbox_results = dict(cls_score=cls_score, bbox_pred=bbox_pred) - return bbox_results - - def _mask_forward_test(self, stage, x, bboxes, semantic_feat=None): - """Mask head forward function for testing.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_rois = bbox2roi([bboxes]) - mask_feats = mask_roi_extractor( - x[:len(mask_roi_extractor.featmap_strides)], mask_rois) - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor([semantic_feat], - mask_rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - if self.mask_info_flow: - last_feat = None - last_pred = None - for i in range(stage): - mask_pred, last_feat = self.mask_head[i](mask_feats, last_feat) - if last_pred is not None: - mask_pred = mask_pred + last_pred - last_pred = mask_pred - mask_pred = mask_head(mask_feats, last_feat, return_feat=False) - if last_pred is not None: - mask_pred = mask_pred + last_pred - else: - mask_pred = mask_head(mask_feats) - return mask_pred - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - gt_semantic_seg=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. 
- For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - proposal_list (list[Tensors]): list of region proposals. - - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - - gt_labels (list[Tensor]): class indices corresponding to each box - - gt_bboxes_ignore (None, list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - gt_masks (None, Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - gt_semantic_seg (None, list[Tensor]): semantic segmentation masks - used if the architecture supports semantic segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # semantic segmentation part - # 2 outputs: segmentation prediction and embedded features - losses = dict() - if self.with_semantic: - semantic_pred, semantic_feat = self.semantic_head(x) - loss_seg = self.semantic_head.loss(semantic_pred, gt_semantic_seg) - losses['loss_semantic_seg'] = loss_seg - else: - semantic_feat = None - - for i in range(self.num_stages): - self.current_stage = i - rcnn_train_cfg = self.train_cfg[i] - lw = self.stage_loss_weights[i] - - # assign gts and sample proposals - sampling_results = [] - bbox_assigner = self.bbox_assigner[i] - bbox_sampler = self.bbox_sampler[i] - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - - for j in range(num_imgs): - assign_result = bbox_assigner.assign(proposal_list[j], - gt_bboxes[j], - gt_bboxes_ignore[j], - gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - # bbox head forward and loss - bbox_results = \ - self._bbox_forward_train( - i, x, sampling_results, gt_bboxes, gt_labels, - rcnn_train_cfg, semantic_feat) - roi_labels = bbox_results['bbox_targets'][0] - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # mask head forward and loss - if self.with_mask: - # interleaved execution: use regressed bboxes by the box branch - # to train the mask branch - if self.interleaved: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - with torch.no_grad(): - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, img_metas) - # re-assign and sample 512 RoIs from 512 RoIs - sampling_results = [] - for j in range(num_imgs): - assign_result = bbox_assigner.assign( - proposal_list[j], gt_bboxes[j], - gt_bboxes_ignore[j], gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - mask_results = self._mask_forward_train( - i, x, sampling_results, gt_masks, rcnn_train_cfg, - semantic_feat) - for name, value in mask_results['loss_mask'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # refine bboxes (same as Cascade R-CNN) - if i < self.num_stages - 1 and not self.interleaved: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - with torch.no_grad(): - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, 
img_metas) - - return losses - - def simple_test(self, x, proposal_list, img_metas, rescale=False): - """Test without augmentation. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (batch_size, c, h, w). - proposal_list (list(Tensor)): Proposals from rpn head. - Each has shape (num_proposals, 5), last dimension - 5 represent (x1, y1, x2, y2, score). - img_metas (list[dict]): Meta information of images. - rescale (bool): Whether to rescale the results to - the original image. Default: True. - - Returns: - list[list[np.ndarray]] or list[tuple]: When no mask branch, - it is bbox results of each image and classes with type - `list[list[np.ndarray]]`. The outer list - corresponds to each image. The inner list - corresponds to each class. When the model has mask branch, - it contains bbox results and mask results. - The outer list corresponds to each image, and first element - of tuple is bbox results, second element is mask results. - """ - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - - num_imgs = len(proposal_list) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # "ms" in variable names means multi-stage - ms_bbox_result = {} - ms_segm_result = {} - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - rois = bbox2roi(proposal_list) - - if rois.shape[0] == 0: - # There is no proposal in the whole batch - bbox_results = [[ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.bbox_head[-1].num_classes) - ]] * num_imgs - - if self.with_mask: - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - results = list(zip(bbox_results, segm_results)) - else: - results = bbox_results - - return results - - for i in range(self.num_stages): - bbox_head = self.bbox_head[i] - bbox_results = self._bbox_forward( - i, x, rois, semantic_feat=semantic_feat) - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple(len(p) for p in proposal_list) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - ms_scores.append(cls_score) - - if i < self.num_stages - 1: - refine_rois_list = [] - for j in range(num_imgs): - if rois[j].shape[0] > 0: - bbox_label = cls_score[j][:, :-1].argmax(dim=1) - refine_rois = bbox_head.regress_by_class( - rois[j], bbox_label, bbox_pred[j], img_metas[j]) - refine_rois_list.append(refine_rois) - rois = torch.cat(refine_rois_list) - - # average scores of each image by stages - cls_score = [ - sum([score[i] for score in ms_scores]) / float(len(ms_scores)) - for i in range(num_imgs) - ] - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(num_imgs): - det_bbox, det_label = self.bbox_head[-1].get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - bbox_result = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head[-1].num_classes) - for i in range(num_imgs) - ] - ms_bbox_result['ensemble'] = bbox_result - - if self.with_mask: - if all(det_bbox.shape[0] == 0 for det_bbox in 
det_bboxes): - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - else: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i] - for i in range(num_imgs) - ] - mask_rois = bbox2roi(_bboxes) - aug_masks = [] - mask_roi_extractor = self.mask_roi_extractor[-1] - mask_feats = mask_roi_extractor( - x[:len(mask_roi_extractor.featmap_strides)], mask_rois) - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor( - [semantic_feat], mask_rois) - mask_feats += mask_semantic_feat - last_feat = None - - num_bbox_per_img = tuple(len(_bbox) for _bbox in _bboxes) - for i in range(self.num_stages): - mask_head = self.mask_head[i] - if self.mask_info_flow: - mask_pred, last_feat = mask_head(mask_feats, last_feat) - else: - mask_pred = mask_head(mask_feats) - - # split batch mask prediction back to each image - mask_pred = mask_pred.split(num_bbox_per_img, 0) - aug_masks.append( - [mask.sigmoid().cpu().numpy() for mask in mask_pred]) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] - for _ in range(self.mask_head[-1].num_classes)]) - else: - aug_mask = [mask[i] for mask in aug_masks] - merged_mask = merge_aug_masks( - aug_mask, [[img_metas[i]]] * self.num_stages, - rcnn_test_cfg) - segm_result = self.mask_head[-1].get_seg_masks( - merged_mask, _bboxes[i], det_labels[i], - rcnn_test_cfg, ori_shapes[i], scale_factors[i], - rescale) - segm_results.append(segm_result) - ms_segm_result['ensemble'] = segm_results - - if self.with_mask: - results = list( - zip(ms_bbox_result['ensemble'], ms_segm_result['ensemble'])) - else: - results = ms_bbox_result['ensemble'] - - return results - - def aug_test(self, img_feats, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. 
- """ - if self.with_semantic: - semantic_feats = [ - self.semantic_head(feat)[1] for feat in img_feats - ] - else: - semantic_feats = [None] * len(img_metas) - - rcnn_test_cfg = self.test_cfg - aug_bboxes = [] - aug_scores = [] - for x, img_meta, semantic in zip(img_feats, img_metas, semantic_feats): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - # "ms" in variable names means multi-stage - ms_scores = [] - - rois = bbox2roi([proposals]) - - if rois.shape[0] == 0: - # There is no proposal in the single image - aug_bboxes.append(rois.new_zeros(0, 4)) - aug_scores.append(rois.new_zeros(0, 1)) - continue - - for i in range(self.num_stages): - bbox_head = self.bbox_head[i] - bbox_results = self._bbox_forward( - i, x, rois, semantic_feat=semantic) - ms_scores.append(bbox_results['cls_score']) - - if i < self.num_stages - 1: - bbox_label = bbox_results['cls_score'].argmax(dim=1) - rois = bbox_head.regress_by_class( - rois, bbox_label, bbox_results['bbox_pred'], - img_meta[0]) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bboxes, scores = self.bbox_head[-1].get_bboxes( - rois, - cls_score, - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - - bbox_result = bbox2result(det_bboxes, det_labels, - self.bbox_head[-1].num_classes) - - if self.with_mask: - if det_bboxes.shape[0] == 0: - segm_result = [[] - for _ in range(self.mask_head[-1].num_classes)] - else: - aug_masks = [] - aug_img_metas = [] - for x, img_meta, semantic in zip(img_feats, img_metas, - semantic_feats): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - mask_feats = self.mask_roi_extractor[-1]( - x[:len(self.mask_roi_extractor[-1].featmap_strides)], - mask_rois) - if self.with_semantic: - semantic_feat = semantic - mask_semantic_feat = self.semantic_roi_extractor( - [semantic_feat], mask_rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[ - -2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - last_feat = None - for i in range(self.num_stages): - mask_head = self.mask_head[i] - if self.mask_info_flow: - mask_pred, last_feat = mask_head( - mask_feats, last_feat) - else: - mask_pred = mask_head(mask_feats) - aug_masks.append(mask_pred.sigmoid().cpu().numpy()) - aug_img_metas.append(img_meta) - merged_masks = merge_aug_masks(aug_masks, aug_img_metas, - self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - rcnn_test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return [(bbox_result, segm_result)] - else: - return [bbox_result] diff 
--git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/__init__.py
deleted file mode 100644
index 48a5d422..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .coarse_mask_head import CoarseMaskHead
-from .dynamic_mask_head import DynamicMaskHead
-from .fcn_mask_head import FCNMaskHead
-from .feature_relay_head import FeatureRelayHead
-from .fused_semantic_head import FusedSemanticHead
-from .global_context_head import GlobalContextHead
-from .grid_head import GridHead
-from .htc_mask_head import HTCMaskHead
-from .mask_point_head import MaskPointHead
-from .maskiou_head import MaskIoUHead
-from .scnet_mask_head import SCNetMaskHead
-from .scnet_semantic_head import SCNetSemanticHead
-
-__all__ = [
-    'FCNMaskHead', 'HTCMaskHead', 'FusedSemanticHead', 'GridHead',
-    'MaskIoUHead', 'CoarseMaskHead', 'MaskPointHead', 'SCNetMaskHead',
-    'SCNetSemanticHead', 'GlobalContextHead', 'FeatureRelayHead',
-    'DynamicMaskHead'
-]
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py
deleted file mode 100644
index 946254cb..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmcv.cnn import ConvModule, Linear
-from mmcv.runner import ModuleList, auto_fp16
-
-from mmdet.models.builder import HEADS
-from .fcn_mask_head import FCNMaskHead
-
-
-@HEADS.register_module()
-class CoarseMaskHead(FCNMaskHead):
-    """Coarse mask head used in PointRend.
-
-    Compared with standard ``FCNMaskHead``, ``CoarseMaskHead`` will downsample
-    the input feature map instead of upsample it.
-
-    Args:
-        num_convs (int): Number of conv layers in the head. Default: 0.
-        num_fcs (int): Number of fc layers in the head. Default: 2.
-        fc_out_channels (int): Number of output channels of fc layer.
-            Default: 1024.
-        downsample_factor (int): The factor that feature map is downsampled by.
-            Default: 2.
-        init_cfg (dict or list[dict], optional): Initialization config dict.
- """ - - def __init__(self, - num_convs=0, - num_fcs=2, - fc_out_channels=1024, - downsample_factor=2, - init_cfg=dict( - type='Xavier', - override=[ - dict(name='fcs'), - dict(type='Constant', val=0.001, name='fc_logits') - ]), - *arg, - **kwarg): - super(CoarseMaskHead, self).__init__( - *arg, - num_convs=num_convs, - upsample_cfg=dict(type=None), - init_cfg=None, - **kwarg) - self.init_cfg = init_cfg - self.num_fcs = num_fcs - assert self.num_fcs > 0 - self.fc_out_channels = fc_out_channels - self.downsample_factor = downsample_factor - assert self.downsample_factor >= 1 - # remove conv_logit - delattr(self, 'conv_logits') - - if downsample_factor > 1: - downsample_in_channels = ( - self.conv_out_channels - if self.num_convs > 0 else self.in_channels) - self.downsample_conv = ConvModule( - downsample_in_channels, - self.conv_out_channels, - kernel_size=downsample_factor, - stride=downsample_factor, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - else: - self.downsample_conv = None - - self.output_size = (self.roi_feat_size[0] // downsample_factor, - self.roi_feat_size[1] // downsample_factor) - self.output_area = self.output_size[0] * self.output_size[1] - - last_layer_dim = self.conv_out_channels * self.output_area - - self.fcs = ModuleList() - for i in range(num_fcs): - fc_in_channels = ( - last_layer_dim if i == 0 else self.fc_out_channels) - self.fcs.append(Linear(fc_in_channels, self.fc_out_channels)) - last_layer_dim = self.fc_out_channels - output_channels = self.num_classes * self.output_area - self.fc_logits = Linear(last_layer_dim, output_channels) - - def init_weights(self): - super(FCNMaskHead, self).init_weights() - - @auto_fp16() - def forward(self, x): - for conv in self.convs: - x = conv(x) - - if self.downsample_conv is not None: - x = self.downsample_conv(x) - - x = x.flatten(1) - for fc in self.fcs: - x = self.relu(fc(x)) - mask_pred = self.fc_logits(x).view( - x.size(0), self.num_classes, *self.output_size) - return mask_pred diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/dynamic_mask_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/dynamic_mask_head.py deleted file mode 100644 index 5bbe7eea..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/dynamic_mask_head.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.runner import auto_fp16, force_fp32 - -from mmdet.core import mask_target -from mmdet.models.builder import HEADS -from mmdet.models.dense_heads.atss_head import reduce_mean -from mmdet.models.utils import build_transformer -from .fcn_mask_head import FCNMaskHead - - -@HEADS.register_module() -class DynamicMaskHead(FCNMaskHead): - r"""Dynamic Mask Head for - `Instances as Queries `_ - - Args: - num_convs (int): Number of convolution layer. - Defaults to 4. - roi_feat_size (int): The output size of RoI extractor, - Defaults to 14. - in_channels (int): Input feature channels. - Defaults to 256. - conv_kernel_size (int): Kernel size of convolution layers. - Defaults to 3. - conv_out_channels (int): Output channels of convolution layers. - Defaults to 256. - num_classes (int): Number of classes. - Defaults to 80 - class_agnostic (int): Whether generate class agnostic prediction. - Defaults to False. - dropout (float): Probability of drop the channel. - Defaults to 0.0 - upsample_cfg (dict): The config for upsample layer. - conv_cfg (dict): The convolution layer config. 
- norm_cfg (dict): The norm layer config. - dynamic_conv_cfg (dict): The dynamic convolution layer config. - loss_mask (dict): The config for mask loss. - """ - - def __init__(self, - num_convs=4, - roi_feat_size=14, - in_channels=256, - conv_kernel_size=3, - conv_out_channels=256, - num_classes=80, - class_agnostic=False, - upsample_cfg=dict(type='deconv', scale_factor=2), - conv_cfg=None, - norm_cfg=None, - dynamic_conv_cfg=dict( - type='DynamicConv', - in_channels=256, - feat_channels=64, - out_channels=256, - input_feat_shape=14, - with_proj=False, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')), - loss_mask=dict(type='DiceLoss', loss_weight=8.0), - **kwargs): - super(DynamicMaskHead, self).__init__( - num_convs=num_convs, - roi_feat_size=roi_feat_size, - in_channels=in_channels, - conv_kernel_size=conv_kernel_size, - conv_out_channels=conv_out_channels, - num_classes=num_classes, - class_agnostic=class_agnostic, - upsample_cfg=upsample_cfg, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - loss_mask=loss_mask, - **kwargs) - assert class_agnostic is False, \ - 'DynamicMaskHead only support class_agnostic=False' - self.fp16_enabled = False - - self.instance_interactive_conv = build_transformer(dynamic_conv_cfg) - - def init_weights(self): - """Use xavier initialization for all weight parameter and set - classification head bias as a specific value when use focal loss.""" - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - nn.init.constant_(self.conv_logits.bias, 0.) - - @auto_fp16() - def forward(self, roi_feat, proposal_feat): - """Forward function of DynamicMaskHead. - - Args: - roi_feat (Tensor): Roi-pooling features with shape - (batch_size*num_proposals, feature_dimensions, - pooling_h , pooling_w). - proposal_feat (Tensor): Intermediate feature get from - diihead in last stage, has shape - (batch_size*num_proposals, feature_dimensions) - - Returns: - mask_pred (Tensor): Predicted foreground masks with shape - (batch_size*num_proposals, num_classes, - pooling_h*2, pooling_w*2). 
- """ - - proposal_feat = proposal_feat.reshape(-1, self.in_channels) - proposal_feat_iic = self.instance_interactive_conv( - proposal_feat, roi_feat) - - x = proposal_feat_iic.permute(0, 2, 1).reshape(roi_feat.size()) - - for conv in self.convs: - x = conv(x) - if self.upsample is not None: - x = self.upsample(x) - if self.upsample_method == 'deconv': - x = self.relu(x) - mask_pred = self.conv_logits(x) - return mask_pred - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, mask_targets, labels): - num_pos = labels.new_ones(labels.size()).float().sum() - avg_factor = torch.clamp(reduce_mean(num_pos), min=1.).item() - loss = dict() - if mask_pred.size(0) == 0: - loss_mask = mask_pred.sum() - else: - loss_mask = self.loss_mask( - mask_pred[torch.arange(num_pos).long(), labels, ...].sigmoid(), - mask_targets, - avg_factor=avg_factor) - loss['loss_mask'] = loss_mask - return loss - - def get_targets(self, sampling_results, gt_masks, rcnn_train_cfg): - - pos_proposals = [res.pos_bboxes for res in sampling_results] - pos_assigned_gt_inds = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - mask_targets = mask_target(pos_proposals, pos_assigned_gt_inds, - gt_masks, rcnn_train_cfg) - return mask_targets diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py deleted file mode 100644 index 355d8822..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from warnings import warn - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, build_conv_layer, build_upsample_layer -from mmcv.ops.carafe import CARAFEPack -from mmcv.runner import BaseModule, ModuleList, auto_fp16, force_fp32 -from torch.nn.modules.utils import _pair - -from mmdet.core import mask_target -from mmdet.models.builder import HEADS, build_loss - -BYTES_PER_FLOAT = 4 -# TODO: This memory limit may be too much or too little. It would be better to -# determine it based on available resources. 
-GPU_MEM_LIMIT = 1024**3 # 1 GB memory limit - - -@HEADS.register_module() -class FCNMaskHead(BaseModule): - - def __init__(self, - num_convs=4, - roi_feat_size=14, - in_channels=256, - conv_kernel_size=3, - conv_out_channels=256, - num_classes=80, - class_agnostic=False, - upsample_cfg=dict(type='deconv', scale_factor=2), - conv_cfg=None, - norm_cfg=None, - predictor_cfg=dict(type='Conv'), - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0), - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(FCNMaskHead, self).__init__(init_cfg) - self.upsample_cfg = upsample_cfg.copy() - if self.upsample_cfg['type'] not in [ - None, 'deconv', 'nearest', 'bilinear', 'carafe' - ]: - raise ValueError( - f'Invalid upsample method {self.upsample_cfg["type"]}, ' - 'accepted methods are "deconv", "nearest", "bilinear", ' - '"carafe"') - self.num_convs = num_convs - # WARN: roi_feat_size is reserved and not used - self.roi_feat_size = _pair(roi_feat_size) - self.in_channels = in_channels - self.conv_kernel_size = conv_kernel_size - self.conv_out_channels = conv_out_channels - self.upsample_method = self.upsample_cfg.get('type') - self.scale_factor = self.upsample_cfg.pop('scale_factor', None) - self.num_classes = num_classes - self.class_agnostic = class_agnostic - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.predictor_cfg = predictor_cfg - self.fp16_enabled = False - self.loss_mask = build_loss(loss_mask) - - self.convs = ModuleList() - for i in range(self.num_convs): - in_channels = ( - self.in_channels if i == 0 else self.conv_out_channels) - padding = (self.conv_kernel_size - 1) // 2 - self.convs.append( - ConvModule( - in_channels, - self.conv_out_channels, - self.conv_kernel_size, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) - upsample_in_channels = ( - self.conv_out_channels if self.num_convs > 0 else in_channels) - upsample_cfg_ = self.upsample_cfg.copy() - if self.upsample_method is None: - self.upsample = None - elif self.upsample_method == 'deconv': - upsample_cfg_.update( - in_channels=upsample_in_channels, - out_channels=self.conv_out_channels, - kernel_size=self.scale_factor, - stride=self.scale_factor) - self.upsample = build_upsample_layer(upsample_cfg_) - elif self.upsample_method == 'carafe': - upsample_cfg_.update( - channels=upsample_in_channels, scale_factor=self.scale_factor) - self.upsample = build_upsample_layer(upsample_cfg_) - else: - # suppress warnings - align_corners = (None - if self.upsample_method == 'nearest' else False) - upsample_cfg_.update( - scale_factor=self.scale_factor, - mode=self.upsample_method, - align_corners=align_corners) - self.upsample = build_upsample_layer(upsample_cfg_) - - out_channels = 1 if self.class_agnostic else self.num_classes - logits_in_channel = ( - self.conv_out_channels - if self.upsample_method == 'deconv' else upsample_in_channels) - self.conv_logits = build_conv_layer(self.predictor_cfg, - logits_in_channel, out_channels, 1) - self.relu = nn.ReLU(inplace=True) - self.debug_imgs = None - - def init_weights(self): - super(FCNMaskHead, self).init_weights() - for m in [self.upsample, self.conv_logits]: - if m is None: - continue - elif isinstance(m, CARAFEPack): - m.init_weights() - elif hasattr(m, 'weight') and hasattr(m, 'bias'): - nn.init.kaiming_normal_( - m.weight, mode='fan_out', nonlinearity='relu') - nn.init.constant_(m.bias, 0) - - @auto_fp16() - def forward(self, x): - for conv in self.convs: - 
x = conv(x) - if self.upsample is not None: - x = self.upsample(x) - if self.upsample_method == 'deconv': - x = self.relu(x) - mask_pred = self.conv_logits(x) - return mask_pred - - def get_targets(self, sampling_results, gt_masks, rcnn_train_cfg): - pos_proposals = [res.pos_bboxes for res in sampling_results] - pos_assigned_gt_inds = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - mask_targets = mask_target(pos_proposals, pos_assigned_gt_inds, - gt_masks, rcnn_train_cfg) - return mask_targets - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, mask_targets, labels): - """ - Example: - >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA - >>> N = 7 # N = number of extracted ROIs - >>> C, H, W = 11, 32, 32 - >>> # Create example instance of FCN Mask Head. - >>> # There are lots of variations depending on the configuration - >>> self = FCNMaskHead(num_classes=C, num_convs=1) - >>> inputs = torch.rand(N, self.in_channels, H, W) - >>> mask_pred = self.forward(inputs) - >>> sf = self.scale_factor - >>> labels = torch.randint(0, C, size=(N,)) - >>> # With the default properties the mask targets should indicate - >>> # a (potentially soft) single-class label - >>> mask_targets = torch.rand(N, H * sf, W * sf) - >>> loss = self.loss(mask_pred, mask_targets, labels) - >>> print('loss = {!r}'.format(loss)) - """ - loss = dict() - if mask_pred.size(0) == 0: - loss_mask = mask_pred.sum() - else: - if self.class_agnostic: - loss_mask = self.loss_mask(mask_pred, mask_targets, - torch.zeros_like(labels)) - else: - loss_mask = self.loss_mask(mask_pred, mask_targets, labels) - loss['loss_mask'] = loss_mask - return loss - - def get_seg_masks(self, mask_pred, det_bboxes, det_labels, rcnn_test_cfg, - ori_shape, scale_factor, rescale): - """Get segmentation masks from mask_pred and bboxes. - - Args: - mask_pred (Tensor or ndarray): shape (n, #class, h, w). - For single-scale testing, mask_pred is the direct output of - model, whose type is Tensor, while for multi-scale testing, - it will be converted to numpy array outside of this method. - det_bboxes (Tensor): shape (n, 4/5) - det_labels (Tensor): shape (n, ) - rcnn_test_cfg (dict): rcnn testing config - ori_shape (Tuple): original image height and width, shape (2,) - scale_factor(ndarray | Tensor): If ``rescale is True``, box - coordinates are divided by this scale factor to fit - ``ori_shape``. - rescale (bool): If True, the resulting masks will be rescaled to - ``ori_shape``. - - Returns: - list[list]: encoded masks. The c-th item in the outer list - corresponds to the c-th class. Given the c-th outer list, the - i-th item in that inner list is the mask for the i-th box with - class label c. - - Example: - >>> import mmcv - >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA - >>> N = 7 # N = number of extracted ROIs - >>> C, H, W = 11, 32, 32 - >>> # Create example instance of FCN Mask Head. - >>> self = FCNMaskHead(num_classes=C, num_convs=0) - >>> inputs = torch.rand(N, self.in_channels, H, W) - >>> mask_pred = self.forward(inputs) - >>> # Each input is associated with some bounding box - >>> det_bboxes = torch.Tensor([[1, 1, 42, 42 ]] * N) - >>> det_labels = torch.randint(0, C, size=(N,)) - >>> rcnn_test_cfg = mmcv.Config({'mask_thr_binary': 0, }) - >>> ori_shape = (H * 4, W * 4) - >>> scale_factor = torch.FloatTensor((1, 1)) - >>> rescale = False - >>> # Encoded masks are a list for each category. 
- >>> encoded_masks = self.get_seg_masks( - >>> mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape, - >>> scale_factor, rescale - >>> ) - >>> assert len(encoded_masks) == C - >>> assert sum(list(map(len, encoded_masks))) == N - """ - if isinstance(mask_pred, torch.Tensor): - mask_pred = mask_pred.sigmoid() - else: - # In AugTest, has been activated before - mask_pred = det_bboxes.new_tensor(mask_pred) - - device = mask_pred.device - cls_segms = [[] for _ in range(self.num_classes) - ] # BG is not included in num_classes - bboxes = det_bboxes[:, :4] - labels = det_labels - - # In most cases, scale_factor should have been - # converted to Tensor when rescale the bbox - if not isinstance(scale_factor, torch.Tensor): - if isinstance(scale_factor, float): - scale_factor = np.array([scale_factor] * 4) - warn('Scale_factor should be a Tensor or ndarray ' - 'with shape (4,), float would be deprecated. ') - assert isinstance(scale_factor, np.ndarray) - scale_factor = torch.Tensor(scale_factor) - - if rescale: - img_h, img_w = ori_shape[:2] - bboxes = bboxes / scale_factor.to(bboxes) - else: - w_scale, h_scale = scale_factor[0], scale_factor[1] - img_h = np.round(ori_shape[0] * h_scale.item()).astype(np.int32) - img_w = np.round(ori_shape[1] * w_scale.item()).astype(np.int32) - - N = len(mask_pred) - # The actual implementation split the input into chunks, - # and paste them chunk by chunk. - if device.type == 'cpu': - # CPU is most efficient when they are pasted one by one with - # skip_empty=True, so that it performs minimal number of - # operations. - num_chunks = N - else: - # GPU benefits from parallelism for larger chunks, - # but may have memory issue - # the types of img_w and img_h are np.int32, - # when the image resolution is large, - # the calculation of num_chunks will overflow. - # so we need to change the types of img_w and img_h to int. - # See https://github.com/open-mmlab/mmdetection/pull/5191 - num_chunks = int( - np.ceil(N * int(img_h) * int(img_w) * BYTES_PER_FLOAT / - GPU_MEM_LIMIT)) - assert (num_chunks <= - N), 'Default GPU_MEM_LIMIT is too small; try increasing it' - chunks = torch.chunk(torch.arange(N, device=device), num_chunks) - - threshold = rcnn_test_cfg.mask_thr_binary - im_mask = torch.zeros( - N, - img_h, - img_w, - device=device, - dtype=torch.bool if threshold >= 0 else torch.uint8) - - if not self.class_agnostic: - mask_pred = mask_pred[range(N), labels][:, None] - - for inds in chunks: - masks_chunk, spatial_inds = _do_paste_mask( - mask_pred[inds], - bboxes[inds], - img_h, - img_w, - skip_empty=device.type == 'cpu') - - if threshold >= 0: - masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool) - else: - # for visualization and debugging - masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8) - - im_mask[(inds, ) + spatial_inds] = masks_chunk - - for i in range(N): - cls_segms[labels[i]].append(im_mask[i].detach().cpu().numpy()) - return cls_segms - - def onnx_export(self, mask_pred, det_bboxes, det_labels, rcnn_test_cfg, - ori_shape, **kwargs): - """Get segmentation masks from mask_pred and bboxes. - - Args: - mask_pred (Tensor): shape (n, #class, h, w). - det_bboxes (Tensor): shape (n, 4/5) - det_labels (Tensor): shape (n, ) - rcnn_test_cfg (dict): rcnn testing config - ori_shape (Tuple): original image height and width, shape (2,) - - Returns: - Tensor: a mask of shape (N, img_h, img_w). 
- """ - - mask_pred = mask_pred.sigmoid() - bboxes = det_bboxes[:, :4] - labels = det_labels - # No need to consider rescale and scale_factor while exporting to ONNX - img_h, img_w = ori_shape[:2] - threshold = rcnn_test_cfg.mask_thr_binary - if not self.class_agnostic: - box_inds = torch.arange(mask_pred.shape[0]) - mask_pred = mask_pred[box_inds, labels][:, None] - masks, _ = _do_paste_mask( - mask_pred, bboxes, img_h, img_w, skip_empty=False) - if threshold >= 0: - # should convert to float to avoid problems in TRT - masks = (masks >= threshold).to(dtype=torch.float) - return masks - - -def _do_paste_mask(masks, boxes, img_h, img_w, skip_empty=True): - """Paste instance masks according to boxes. - - This implementation is modified from - https://github.com/facebookresearch/detectron2/ - - Args: - masks (Tensor): N, 1, H, W - boxes (Tensor): N, 4 - img_h (int): Height of the image to be pasted. - img_w (int): Width of the image to be pasted. - skip_empty (bool): Only paste masks within the region that - tightly bound all boxes, and returns the results this region only. - An important optimization for CPU. - - Returns: - tuple: (Tensor, tuple). The first item is mask tensor, the second one - is the slice object. - If skip_empty == False, the whole image will be pasted. It will - return a mask of shape (N, img_h, img_w) and an empty tuple. - If skip_empty == True, only area around the mask will be pasted. - A mask of shape (N, h', w') and its start and end coordinates - in the original image will be returned. - """ - # On GPU, paste all masks together (up to chunk size) - # by using the entire image to sample the masks - # Compared to pasting them one by one, - # this has more operations but is faster on COCO-scale dataset. - device = masks.device - if skip_empty: - x0_int, y0_int = torch.clamp( - boxes.min(dim=0).values.floor()[:2] - 1, - min=0).to(dtype=torch.int32) - x1_int = torch.clamp( - boxes[:, 2].max().ceil() + 1, max=img_w).to(dtype=torch.int32) - y1_int = torch.clamp( - boxes[:, 3].max().ceil() + 1, max=img_h).to(dtype=torch.int32) - else: - x0_int, y0_int = 0, 0 - x1_int, y1_int = img_w, img_h - x0, y0, x1, y1 = torch.split(boxes, 1, dim=1) # each is Nx1 - - N = masks.shape[0] - - img_y = torch.arange(y0_int, y1_int, device=device).to(torch.float32) + 0.5 - img_x = torch.arange(x0_int, x1_int, device=device).to(torch.float32) + 0.5 - img_y = (img_y - y0) / (y1 - y0) * 2 - 1 - img_x = (img_x - x0) / (x1 - x0) * 2 - 1 - # img_x, img_y have shapes (N, w), (N, h) - # IsInf op is not supported with ONNX<=1.7.0 - if not torch.onnx.is_in_onnx_export(): - if torch.isinf(img_x).any(): - inds = torch.where(torch.isinf(img_x)) - img_x[inds] = 0 - if torch.isinf(img_y).any(): - inds = torch.where(torch.isinf(img_y)) - img_y[inds] = 0 - - gx = img_x[:, None, :].expand(N, img_y.size(1), img_x.size(1)) - gy = img_y[:, :, None].expand(N, img_y.size(1), img_x.size(1)) - grid = torch.stack([gx, gy], dim=3) - - img_masks = F.grid_sample( - masks.to(dtype=torch.float32), grid, align_corners=False) - - if skip_empty: - return img_masks[:, 0], (slice(y0_int, y1_int), slice(x0_int, x1_int)) - else: - return img_masks[:, 0], () diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/feature_relay_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/feature_relay_head.py deleted file mode 100644 index 452f37af..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/feature_relay_head.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) 
OpenMMLab. All rights reserved.
-import torch.nn as nn
-from mmcv.runner import BaseModule, auto_fp16
-
-from mmdet.models.builder import HEADS
-
-
-@HEADS.register_module()
-class FeatureRelayHead(BaseModule):
-    """Feature Relay Head used in `SCNet `_.
-
-    Args:
-        in_channels (int, optional): number of input channels. Default: 256.
-        conv_out_channels (int, optional): number of output channels before
-            classification layer. Default: 256.
-        roi_feat_size (int, optional): roi feat size at box head. Default: 7.
-        scale_factor (int, optional): scale factor to match roi feat size
-            at mask head. Default: 2.
-        init_cfg (dict or list[dict], optional): Initialization config dict.
-    """
-
-    def __init__(self,
-                 in_channels=1024,
-                 out_conv_channels=256,
-                 roi_feat_size=7,
-                 scale_factor=2,
-                 init_cfg=dict(type='Kaiming', layer='Linear')):
-        super(FeatureRelayHead, self).__init__(init_cfg)
-        assert isinstance(roi_feat_size, int)
-
-        self.in_channels = in_channels
-        self.out_conv_channels = out_conv_channels
-        self.roi_feat_size = roi_feat_size
-        self.out_channels = (roi_feat_size**2) * out_conv_channels
-        self.scale_factor = scale_factor
-        self.fp16_enabled = False
-
-        self.fc = nn.Linear(self.in_channels, self.out_channels)
-        self.upsample = nn.Upsample(
-            scale_factor=scale_factor, mode='bilinear', align_corners=True)
-
-    @auto_fp16()
-    def forward(self, x):
-        """Forward function."""
-        N, in_C = x.shape
-        if N > 0:
-            out_C = self.out_conv_channels
-            out_HW = self.roi_feat_size
-            x = self.fc(x)
-            x = x.reshape(N, out_C, out_HW, out_HW)
-            x = self.upsample(x)
-            return x
-        return None
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py
deleted file mode 100644
index 8494f7e4..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py
+++ /dev/null
@@ -1,117 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import warnings
-
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule
-from mmcv.runner import BaseModule, auto_fp16, force_fp32
-
-from mmdet.models.builder import HEADS, build_loss
-
-
-@HEADS.register_module()
-class FusedSemanticHead(BaseModule):
-    r"""Multi-level fused semantic segmentation head.
-
-    ..
code-block:: none - - in_1 -> 1x1 conv --- - | - in_2 -> 1x1 conv -- | - || - in_3 -> 1x1 conv - || - ||| /-> 1x1 conv (mask prediction) - in_4 -> 1x1 conv -----> 3x3 convs (*4) - | \-> 1x1 conv (feature) - in_5 -> 1x1 conv --- - """ # noqa: W605 - - def __init__(self, - num_ins, - fusion_level, - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=183, - conv_cfg=None, - norm_cfg=None, - ignore_label=None, - loss_weight=None, - loss_seg=dict( - type='CrossEntropyLoss', - ignore_index=255, - loss_weight=0.2), - init_cfg=dict( - type='Kaiming', override=dict(name='conv_logits'))): - super(FusedSemanticHead, self).__init__(init_cfg) - self.num_ins = num_ins - self.fusion_level = fusion_level - self.num_convs = num_convs - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.num_classes = num_classes - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - - self.lateral_convs = nn.ModuleList() - for i in range(self.num_ins): - self.lateral_convs.append( - ConvModule( - self.in_channels, - self.in_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - inplace=False)) - - self.convs = nn.ModuleList() - for i in range(self.num_convs): - in_channels = self.in_channels if i == 0 else conv_out_channels - self.convs.append( - ConvModule( - in_channels, - conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.conv_embedding = ConvModule( - conv_out_channels, - conv_out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.conv_logits = nn.Conv2d(conv_out_channels, self.num_classes, 1) - if ignore_label: - loss_seg['ignore_index'] = ignore_label - if loss_weight: - loss_seg['loss_weight'] = loss_weight - if ignore_label or loss_weight: - warnings.warn('``ignore_label`` and ``loss_weight`` would be ' - 'deprecated soon. Please set ``ingore_index`` and ' - '``loss_weight`` in ``loss_seg`` instead.') - self.criterion = build_loss(loss_seg) - - @auto_fp16() - def forward(self, feats): - x = self.lateral_convs[self.fusion_level](feats[self.fusion_level]) - fused_size = tuple(x.shape[-2:]) - for i, feat in enumerate(feats): - if i != self.fusion_level: - feat = F.interpolate( - feat, size=fused_size, mode='bilinear', align_corners=True) - x += self.lateral_convs[i](feat) - - for i in range(self.num_convs): - x = self.convs[i](x) - - mask_pred = self.conv_logits(x) - x = self.conv_embedding(x) - return mask_pred, x - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, labels): - labels = labels.squeeze(1).long() - loss_semantic_seg = self.criterion(mask_pred, labels) - return loss_semantic_seg diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/global_context_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/global_context_head.py deleted file mode 100644 index af76a174..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/global_context_head.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16, force_fp32 - -from mmdet.models.builder import HEADS -from mmdet.models.utils import ResLayer, SimplifiedBasicBlock - - -@HEADS.register_module() -class GlobalContextHead(BaseModule): - """Global context head used in `SCNet `_. - - Args: - num_convs (int, optional): number of convolutional layer in GlbCtxHead. - Default: 4. 
- in_channels (int, optional): number of input channels. Default: 256. - conv_out_channels (int, optional): number of output channels before - classification layer. Default: 256. - num_classes (int, optional): number of classes. Default: 80. - loss_weight (float, optional): global context loss weight. Default: 1. - conv_cfg (dict, optional): config to init conv layer. Default: None. - norm_cfg (dict, optional): config to init norm layer. Default: None. - conv_to_res (bool, optional): if True, 2 convs will be grouped into - 1 `SimplifiedBasicBlock` using a skip connection. Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_weight=1.0, - conv_cfg=None, - norm_cfg=None, - conv_to_res=False, - init_cfg=dict( - type='Normal', std=0.01, override=dict(name='fc'))): - super(GlobalContextHead, self).__init__(init_cfg) - self.num_convs = num_convs - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.num_classes = num_classes - self.loss_weight = loss_weight - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.conv_to_res = conv_to_res - self.fp16_enabled = False - - if self.conv_to_res: - num_res_blocks = num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - in_channels, - self.conv_out_channels, - num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.num_convs = num_res_blocks - else: - self.convs = nn.ModuleList() - for i in range(self.num_convs): - in_channels = self.in_channels if i == 0 else conv_out_channels - self.convs.append( - ConvModule( - in_channels, - conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - - self.pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Linear(conv_out_channels, num_classes) - - self.criterion = nn.BCEWithLogitsLoss() - - @auto_fp16() - def forward(self, feats): - """Forward function.""" - x = feats[-1] - for i in range(self.num_convs): - x = self.convs[i](x) - x = self.pool(x) - - # multi-class prediction - mc_pred = x.reshape(x.size(0), -1) - mc_pred = self.fc(mc_pred) - - return mc_pred, x - - @force_fp32(apply_to=('pred', )) - def loss(self, pred, labels): - """Loss function.""" - labels = [lbl.unique() for lbl in labels] - targets = pred.new_zeros(pred.size()) - for i, label in enumerate(labels): - targets[i, label] = 1.0 - loss = self.loss_weight * self.criterion(pred, targets) - return loss diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/grid_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/grid_head.py deleted file mode 100644 index 0c0702d2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/grid_head.py +++ /dev/null @@ -1,363 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from mmdet.models.builder import HEADS, build_loss - - -@HEADS.register_module() -class GridHead(BaseModule): - - def __init__(self, - grid_points=9, - num_convs=8, - roi_feat_size=14, - in_channels=256, - conv_kernel_size=3, - point_feat_channels=64, - deconv_kernel_size=4, - class_agnostic=False, - loss_grid=dict( - type='CrossEntropyLoss', use_sigmoid=True, - loss_weight=15), - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=36), - init_cfg=[ - dict(type='Kaiming', layer=['Conv2d', 'Linear']), - dict( - type='Normal', - layer='ConvTranspose2d', - std=0.001, - override=dict( - type='Normal', - name='deconv2', - std=0.001, - bias=-np.log(0.99 / 0.01))) - ]): - super(GridHead, self).__init__(init_cfg) - self.grid_points = grid_points - self.num_convs = num_convs - self.roi_feat_size = roi_feat_size - self.in_channels = in_channels - self.conv_kernel_size = conv_kernel_size - self.point_feat_channels = point_feat_channels - self.conv_out_channels = self.point_feat_channels * self.grid_points - self.class_agnostic = class_agnostic - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - if isinstance(norm_cfg, dict) and norm_cfg['type'] == 'GN': - assert self.conv_out_channels % norm_cfg['num_groups'] == 0 - - assert self.grid_points >= 4 - self.grid_size = int(np.sqrt(self.grid_points)) - if self.grid_size * self.grid_size != self.grid_points: - raise ValueError('grid_points must be a square number') - - # the predicted heatmap is half of whole_map_size - if not isinstance(self.roi_feat_size, int): - raise ValueError('Only square RoIs are supporeted in Grid R-CNN') - self.whole_map_size = self.roi_feat_size * 4 - - # compute point-wise sub-regions - self.sub_regions = self.calc_sub_regions() - - self.convs = [] - for i in range(self.num_convs): - in_channels = ( - self.in_channels if i == 0 else self.conv_out_channels) - stride = 2 if i == 0 else 1 - padding = (self.conv_kernel_size - 1) // 2 - self.convs.append( - ConvModule( - in_channels, - self.conv_out_channels, - self.conv_kernel_size, - stride=stride, - padding=padding, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=True)) - self.convs = nn.Sequential(*self.convs) - - self.deconv1 = nn.ConvTranspose2d( - self.conv_out_channels, - self.conv_out_channels, - kernel_size=deconv_kernel_size, - stride=2, - padding=(deconv_kernel_size - 2) // 2, - groups=grid_points) - self.norm1 = nn.GroupNorm(grid_points, self.conv_out_channels) - self.deconv2 = nn.ConvTranspose2d( - self.conv_out_channels, - grid_points, - kernel_size=deconv_kernel_size, - stride=2, - padding=(deconv_kernel_size - 2) // 2, - groups=grid_points) - - # find the 4-neighbor of each grid point - self.neighbor_points = [] - grid_size = self.grid_size - for i in range(grid_size): # i-th column - for j in range(grid_size): # j-th row - neighbors = [] - if i > 0: # left: (i - 1, j) - neighbors.append((i - 1) * grid_size + j) - if j > 0: # up: (i, j - 1) - neighbors.append(i * grid_size + j - 1) - if j < grid_size - 1: # down: (i, j + 1) - neighbors.append(i * grid_size + j + 1) - if i < grid_size - 1: # right: (i + 1, j) - neighbors.append((i + 1) * grid_size + j) - self.neighbor_points.append(tuple(neighbors)) - # total edges in the grid - self.num_edges = sum([len(p) for p in self.neighbor_points]) - - self.forder_trans = nn.ModuleList() # first-order feature transition - self.sorder_trans = 
nn.ModuleList() # second-order feature transition - for neighbors in self.neighbor_points: - fo_trans = nn.ModuleList() - so_trans = nn.ModuleList() - for _ in range(len(neighbors)): - # each transition module consists of a 5x5 depth-wise conv and - # 1x1 conv. - fo_trans.append( - nn.Sequential( - nn.Conv2d( - self.point_feat_channels, - self.point_feat_channels, - 5, - stride=1, - padding=2, - groups=self.point_feat_channels), - nn.Conv2d(self.point_feat_channels, - self.point_feat_channels, 1))) - so_trans.append( - nn.Sequential( - nn.Conv2d( - self.point_feat_channels, - self.point_feat_channels, - 5, - 1, - 2, - groups=self.point_feat_channels), - nn.Conv2d(self.point_feat_channels, - self.point_feat_channels, 1))) - self.forder_trans.append(fo_trans) - self.sorder_trans.append(so_trans) - - self.loss_grid = build_loss(loss_grid) - - def forward(self, x): - assert x.shape[-1] == x.shape[-2] == self.roi_feat_size - # RoI feature transformation, downsample 2x - x = self.convs(x) - - c = self.point_feat_channels - # first-order fusion - x_fo = [None for _ in range(self.grid_points)] - for i, points in enumerate(self.neighbor_points): - x_fo[i] = x[:, i * c:(i + 1) * c] - for j, point_idx in enumerate(points): - x_fo[i] = x_fo[i] + self.forder_trans[i][j]( - x[:, point_idx * c:(point_idx + 1) * c]) - - # second-order fusion - x_so = [None for _ in range(self.grid_points)] - for i, points in enumerate(self.neighbor_points): - x_so[i] = x[:, i * c:(i + 1) * c] - for j, point_idx in enumerate(points): - x_so[i] = x_so[i] + self.sorder_trans[i][j](x_fo[point_idx]) - - # predicted heatmap with fused features - x2 = torch.cat(x_so, dim=1) - x2 = self.deconv1(x2) - x2 = F.relu(self.norm1(x2), inplace=True) - heatmap = self.deconv2(x2) - - # predicted heatmap with original features (applicable during training) - if self.training: - x1 = x - x1 = self.deconv1(x1) - x1 = F.relu(self.norm1(x1), inplace=True) - heatmap_unfused = self.deconv2(x1) - else: - heatmap_unfused = heatmap - - return dict(fused=heatmap, unfused=heatmap_unfused) - - def calc_sub_regions(self): - """Compute point specific representation regions. - - See Grid R-CNN Plus (https://arxiv.org/abs/1906.05688) for details. - """ - # to make it consistent with the original implementation, half_size - # is computed as 2 * quarter_size, which is smaller - half_size = self.whole_map_size // 4 * 2 - sub_regions = [] - for i in range(self.grid_points): - x_idx = i // self.grid_size - y_idx = i % self.grid_size - if x_idx == 0: - sub_x1 = 0 - elif x_idx == self.grid_size - 1: - sub_x1 = half_size - else: - ratio = x_idx / (self.grid_size - 1) - 0.25 - sub_x1 = max(int(ratio * self.whole_map_size), 0) - - if y_idx == 0: - sub_y1 = 0 - elif y_idx == self.grid_size - 1: - sub_y1 = half_size - else: - ratio = y_idx / (self.grid_size - 1) - 0.25 - sub_y1 = max(int(ratio * self.whole_map_size), 0) - sub_regions.append( - (sub_x1, sub_y1, sub_x1 + half_size, sub_y1 + half_size)) - return sub_regions - - def get_targets(self, sampling_results, rcnn_train_cfg): - # mix all samples (across images) together. 
- pos_bboxes = torch.cat([res.pos_bboxes for res in sampling_results], - dim=0).cpu() - pos_gt_bboxes = torch.cat( - [res.pos_gt_bboxes for res in sampling_results], dim=0).cpu() - assert pos_bboxes.shape == pos_gt_bboxes.shape - - # expand pos_bboxes to 2x of original size - x1 = pos_bboxes[:, 0] - (pos_bboxes[:, 2] - pos_bboxes[:, 0]) / 2 - y1 = pos_bboxes[:, 1] - (pos_bboxes[:, 3] - pos_bboxes[:, 1]) / 2 - x2 = pos_bboxes[:, 2] + (pos_bboxes[:, 2] - pos_bboxes[:, 0]) / 2 - y2 = pos_bboxes[:, 3] + (pos_bboxes[:, 3] - pos_bboxes[:, 1]) / 2 - pos_bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - pos_bbox_ws = (pos_bboxes[:, 2] - pos_bboxes[:, 0]).unsqueeze(-1) - pos_bbox_hs = (pos_bboxes[:, 3] - pos_bboxes[:, 1]).unsqueeze(-1) - - num_rois = pos_bboxes.shape[0] - map_size = self.whole_map_size - # this is not the final target shape - targets = torch.zeros((num_rois, self.grid_points, map_size, map_size), - dtype=torch.float) - - # pre-compute interpolation factors for all grid points. - # the first item is the factor of x-dim, and the second is y-dim. - # for a 9-point grid, factors are like (1, 0), (0.5, 0.5), (0, 1) - factors = [] - for j in range(self.grid_points): - x_idx = j // self.grid_size - y_idx = j % self.grid_size - factors.append((1 - x_idx / (self.grid_size - 1), - 1 - y_idx / (self.grid_size - 1))) - - radius = rcnn_train_cfg.pos_radius - radius2 = radius**2 - for i in range(num_rois): - # ignore small bboxes - if (pos_bbox_ws[i] <= self.grid_size - or pos_bbox_hs[i] <= self.grid_size): - continue - # for each grid point, mark a small circle as positive - for j in range(self.grid_points): - factor_x, factor_y = factors[j] - gridpoint_x = factor_x * pos_gt_bboxes[i, 0] + ( - 1 - factor_x) * pos_gt_bboxes[i, 2] - gridpoint_y = factor_y * pos_gt_bboxes[i, 1] + ( - 1 - factor_y) * pos_gt_bboxes[i, 3] - - cx = int((gridpoint_x - pos_bboxes[i, 0]) / pos_bbox_ws[i] * - map_size) - cy = int((gridpoint_y - pos_bboxes[i, 1]) / pos_bbox_hs[i] * - map_size) - - for x in range(cx - radius, cx + radius + 1): - for y in range(cy - radius, cy + radius + 1): - if x >= 0 and x < map_size and y >= 0 and y < map_size: - if (x - cx)**2 + (y - cy)**2 <= radius2: - targets[i, j, y, x] = 1 - # reduce the target heatmap size by a half - # proposed in Grid R-CNN Plus (https://arxiv.org/abs/1906.05688). 
- sub_targets = [] - for i in range(self.grid_points): - sub_x1, sub_y1, sub_x2, sub_y2 = self.sub_regions[i] - sub_targets.append(targets[:, [i], sub_y1:sub_y2, sub_x1:sub_x2]) - sub_targets = torch.cat(sub_targets, dim=1) - sub_targets = sub_targets.to(sampling_results[0].pos_bboxes.device) - return sub_targets - - def loss(self, grid_pred, grid_targets): - loss_fused = self.loss_grid(grid_pred['fused'], grid_targets) - loss_unfused = self.loss_grid(grid_pred['unfused'], grid_targets) - loss_grid = loss_fused + loss_unfused - return dict(loss_grid=loss_grid) - - def get_bboxes(self, det_bboxes, grid_pred, img_metas): - # TODO: refactoring - assert det_bboxes.shape[0] == grid_pred.shape[0] - det_bboxes = det_bboxes.cpu() - cls_scores = det_bboxes[:, [4]] - det_bboxes = det_bboxes[:, :4] - grid_pred = grid_pred.sigmoid().cpu() - - R, c, h, w = grid_pred.shape - half_size = self.whole_map_size // 4 * 2 - assert h == w == half_size - assert c == self.grid_points - - # find the point with max scores in the half-sized heatmap - grid_pred = grid_pred.view(R * c, h * w) - pred_scores, pred_position = grid_pred.max(dim=1) - xs = pred_position % w - ys = pred_position // w - - # get the position in the whole heatmap instead of half-sized heatmap - for i in range(self.grid_points): - xs[i::self.grid_points] += self.sub_regions[i][0] - ys[i::self.grid_points] += self.sub_regions[i][1] - - # reshape to (num_rois, grid_points) - pred_scores, xs, ys = tuple( - map(lambda x: x.view(R, c), [pred_scores, xs, ys])) - - # get expanded pos_bboxes - widths = (det_bboxes[:, 2] - det_bboxes[:, 0]).unsqueeze(-1) - heights = (det_bboxes[:, 3] - det_bboxes[:, 1]).unsqueeze(-1) - x1 = (det_bboxes[:, 0, None] - widths / 2) - y1 = (det_bboxes[:, 1, None] - heights / 2) - # map the grid point to the absolute coordinates - abs_xs = (xs.float() + 0.5) / w * widths + x1 - abs_ys = (ys.float() + 0.5) / h * heights + y1 - - # get the grid points indices that fall on the bbox boundaries - x1_inds = [i for i in range(self.grid_size)] - y1_inds = [i * self.grid_size for i in range(self.grid_size)] - x2_inds = [ - self.grid_points - self.grid_size + i - for i in range(self.grid_size) - ] - y2_inds = [(i + 1) * self.grid_size - 1 for i in range(self.grid_size)] - - # voting of all grid points on some boundary - bboxes_x1 = (abs_xs[:, x1_inds] * pred_scores[:, x1_inds]).sum( - dim=1, keepdim=True) / ( - pred_scores[:, x1_inds].sum(dim=1, keepdim=True)) - bboxes_y1 = (abs_ys[:, y1_inds] * pred_scores[:, y1_inds]).sum( - dim=1, keepdim=True) / ( - pred_scores[:, y1_inds].sum(dim=1, keepdim=True)) - bboxes_x2 = (abs_xs[:, x2_inds] * pred_scores[:, x2_inds]).sum( - dim=1, keepdim=True) / ( - pred_scores[:, x2_inds].sum(dim=1, keepdim=True)) - bboxes_y2 = (abs_ys[:, y2_inds] * pred_scores[:, y2_inds]).sum( - dim=1, keepdim=True) / ( - pred_scores[:, y2_inds].sum(dim=1, keepdim=True)) - - bbox_res = torch.cat( - [bboxes_x1, bboxes_y1, bboxes_x2, bboxes_y2, cls_scores], dim=1) - bbox_res[:, [0, 2]].clamp_(min=0, max=img_metas[0]['img_shape'][1]) - bbox_res[:, [1, 3]].clamp_(min=0, max=img_metas[0]['img_shape'][0]) - - return bbox_res diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/htc_mask_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/htc_mask_head.py deleted file mode 100644 index 7ad8592b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/htc_mask_head.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmcv.cnn import ConvModule
-
-from mmdet.models.builder import HEADS
-from .fcn_mask_head import FCNMaskHead
-
-
-@HEADS.register_module()
-class HTCMaskHead(FCNMaskHead):
-
-    def __init__(self, with_conv_res=True, *args, **kwargs):
-        super(HTCMaskHead, self).__init__(*args, **kwargs)
-        self.with_conv_res = with_conv_res
-        if self.with_conv_res:
-            self.conv_res = ConvModule(
-                self.conv_out_channels,
-                self.conv_out_channels,
-                1,
-                conv_cfg=self.conv_cfg,
-                norm_cfg=self.norm_cfg)
-
-    def forward(self, x, res_feat=None, return_logits=True, return_feat=True):
-        if res_feat is not None:
-            assert self.with_conv_res
-            res_feat = self.conv_res(res_feat)
-            x = x + res_feat
-        for conv in self.convs:
-            x = conv(x)
-        res_feat = x
-        outs = []
-        if return_logits:
-            x = self.upsample(x)
-            if self.upsample_method == 'deconv':
-                x = self.relu(x)
-            mask_pred = self.conv_logits(x)
-            outs.append(mask_pred)
-        if return_feat:
-            outs.append(res_feat)
-        return outs if len(outs) > 1 else outs[0]
diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/mask_point_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/mask_point_head.py
deleted file mode 100644
index c77c46d2..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/mask_point_head.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend/point_head/point_head.py # noqa
-
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule
-from mmcv.ops import point_sample, rel_roi_point_to_rel_img_point
-from mmcv.runner import BaseModule
-
-from mmdet.models.builder import HEADS, build_loss
-from mmdet.models.utils import (get_uncertain_point_coords_with_randomness,
-                                get_uncertainty)
-
-
-@HEADS.register_module()
-class MaskPointHead(BaseModule):
-    """A mask point head use in PointRend.
-
-    ``MaskPointHead`` use shared multi-layer perceptron (equivalent to
-    nn.Conv1d) to predict the logit of input points. The fine-grained feature
-    and coarse feature will be concatenate together for predication.
-
-    Args:
-        num_fcs (int): Number of fc layers in the head. Default: 3.
-        in_channels (int): Number of input channels. Default: 256.
-        fc_channels (int): Number of fc channels. Default: 256.
-        num_classes (int): Number of classes for logits. Default: 80.
-        class_agnostic (bool): Whether use class agnostic classification.
-            If so, the output channels of logits will be 1. Default: False.
-        coarse_pred_each_layer (bool): Whether concatenate coarse feature with
-            the output of each fc layer. Default: True.
-        conv_cfg (dict | None): Dictionary to construct and config conv layer.
-            Default: dict(type='Conv1d'))
-        norm_cfg (dict | None): Dictionary to construct and config norm layer.
-            Default: None.
-        loss_point (dict): Dictionary to construct and config loss layer of
-            point head. Default: dict(type='CrossEntropyLoss', use_mask=True,
-            loss_weight=1.0).
-        init_cfg (dict or list[dict], optional): Initialization config dict.
- """ - - def __init__(self, - num_classes, - num_fcs=3, - in_channels=256, - fc_channels=256, - class_agnostic=False, - coarse_pred_each_layer=True, - conv_cfg=dict(type='Conv1d'), - norm_cfg=None, - act_cfg=dict(type='ReLU'), - loss_point=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0), - init_cfg=dict( - type='Normal', std=0.001, - override=dict(name='fc_logits'))): - super().__init__(init_cfg) - self.num_fcs = num_fcs - self.in_channels = in_channels - self.fc_channels = fc_channels - self.num_classes = num_classes - self.class_agnostic = class_agnostic - self.coarse_pred_each_layer = coarse_pred_each_layer - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.loss_point = build_loss(loss_point) - - fc_in_channels = in_channels + num_classes - self.fcs = nn.ModuleList() - for _ in range(num_fcs): - fc = ConvModule( - fc_in_channels, - fc_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.fcs.append(fc) - fc_in_channels = fc_channels - fc_in_channels += num_classes if self.coarse_pred_each_layer else 0 - - out_channels = 1 if self.class_agnostic else self.num_classes - self.fc_logits = nn.Conv1d( - fc_in_channels, out_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, fine_grained_feats, coarse_feats): - """Classify each point base on fine grained and coarse feats. - - Args: - fine_grained_feats (Tensor): Fine grained feature sampled from FPN, - shape (num_rois, in_channels, num_points). - coarse_feats (Tensor): Coarse feature sampled from CoarseMaskHead, - shape (num_rois, num_classes, num_points). - - Returns: - Tensor: Point classification results, - shape (num_rois, num_class, num_points). - """ - - x = torch.cat([fine_grained_feats, coarse_feats], dim=1) - for fc in self.fcs: - x = fc(x) - if self.coarse_pred_each_layer: - x = torch.cat((x, coarse_feats), dim=1) - return self.fc_logits(x) - - def get_targets(self, rois, rel_roi_points, sampling_results, gt_masks, - cfg): - """Get training targets of MaskPointHead for all images. - - Args: - rois (Tensor): Region of Interest, shape (num_rois, 5). - rel_roi_points: Points coordinates relative to RoI, shape - (num_rois, num_points, 2). - sampling_results (:obj:`SamplingResult`): Sampling result after - sampling and assignment. - gt_masks (Tensor) : Ground truth segmentation masks of - corresponding boxes, shape (num_rois, height, width). - cfg (dict): Training cfg. - - Returns: - Tensor: Point target, shape (num_rois, num_points). 
- """ - - num_imgs = len(sampling_results) - rois_list = [] - rel_roi_points_list = [] - for batch_ind in range(num_imgs): - inds = (rois[:, 0] == batch_ind) - rois_list.append(rois[inds]) - rel_roi_points_list.append(rel_roi_points[inds]) - pos_assigned_gt_inds_list = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - cfg_list = [cfg for _ in range(num_imgs)] - - point_targets = map(self._get_target_single, rois_list, - rel_roi_points_list, pos_assigned_gt_inds_list, - gt_masks, cfg_list) - point_targets = list(point_targets) - - if len(point_targets) > 0: - point_targets = torch.cat(point_targets) - - return point_targets - - def _get_target_single(self, rois, rel_roi_points, pos_assigned_gt_inds, - gt_masks, cfg): - """Get training target of MaskPointHead for each image.""" - num_pos = rois.size(0) - num_points = cfg.num_points - if num_pos > 0: - gt_masks_th = ( - gt_masks.to_tensor(rois.dtype, rois.device).index_select( - 0, pos_assigned_gt_inds)) - gt_masks_th = gt_masks_th.unsqueeze(1) - rel_img_points = rel_roi_point_to_rel_img_point( - rois, rel_roi_points, gt_masks_th) - point_targets = point_sample(gt_masks_th, - rel_img_points).squeeze(1) - else: - point_targets = rois.new_zeros((0, num_points)) - return point_targets - - def loss(self, point_pred, point_targets, labels): - """Calculate loss for MaskPointHead. - - Args: - point_pred (Tensor): Point predication result, shape - (num_rois, num_classes, num_points). - point_targets (Tensor): Point targets, shape (num_roi, num_points). - labels (Tensor): Class label of corresponding boxes, - shape (num_rois, ) - - Returns: - dict[str, Tensor]: a dictionary of point loss components - """ - - loss = dict() - if self.class_agnostic: - loss_point = self.loss_point(point_pred, point_targets, - torch.zeros_like(labels)) - else: - loss_point = self.loss_point(point_pred, point_targets, labels) - loss['loss_point'] = loss_point - return loss - - def get_roi_rel_points_train(self, mask_pred, labels, cfg): - """Get ``num_points`` most uncertain points with random points during - train. - - Sample points in [0, 1] x [0, 1] coordinate space based on their - uncertainty. The uncertainties are calculated for each point using - '_get_uncertainty()' function that takes point's logit prediction as - input. - - Args: - mask_pred (Tensor): A tensor of shape (num_rois, num_classes, - mask_height, mask_width) for class-specific or class-agnostic - prediction. - labels (list): The ground truth class for each instance. - cfg (dict): Training config of point head. - - Returns: - point_coords (Tensor): A tensor of shape (num_rois, num_points, 2) - that contains the coordinates sampled points. - """ - point_coords = get_uncertain_point_coords_with_randomness( - mask_pred, labels, cfg.num_points, cfg.oversample_ratio, - cfg.importance_sample_ratio) - return point_coords - - def get_roi_rel_points_test(self, mask_pred, pred_label, cfg): - """Get ``num_points`` most uncertain points during test. - - Args: - mask_pred (Tensor): A tensor of shape (num_rois, num_classes, - mask_height, mask_width) for class-specific or class-agnostic - prediction. - pred_label (list): The predication class for each instance. - cfg (dict): Testing config of point head. - - Returns: - point_indices (Tensor): A tensor of shape (num_rois, num_points) - that contains indices from [0, mask_height x mask_width) of the - most uncertain points. 
- point_coords (Tensor): A tensor of shape (num_rois, num_points, 2) - that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the [mask_height, mask_width] grid . - """ - num_points = cfg.subdivision_num_points - uncertainty_map = get_uncertainty(mask_pred, pred_label) - num_rois, _, mask_height, mask_width = uncertainty_map.shape - - # During ONNX exporting, the type of each elements of 'shape' is - # `Tensor(float)`, while it is `float` during PyTorch inference. - if isinstance(mask_height, torch.Tensor): - h_step = 1.0 / mask_height.float() - w_step = 1.0 / mask_width.float() - else: - h_step = 1.0 / mask_height - w_step = 1.0 / mask_width - # cast to int to avoid dynamic K for TopK op in ONNX - mask_size = int(mask_height * mask_width) - uncertainty_map = uncertainty_map.view(num_rois, mask_size) - num_points = min(mask_size, num_points) - point_indices = uncertainty_map.topk(num_points, dim=1)[1] - xs = w_step / 2.0 + (point_indices % mask_width).float() * w_step - ys = h_step / 2.0 + (point_indices // mask_width).float() * h_step - point_coords = torch.stack([xs, ys], dim=2) - return point_indices, point_coords diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/maskiou_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/maskiou_head.py deleted file mode 100644 index a7ff7c7c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/maskiou_head.py +++ /dev/null @@ -1,183 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import Conv2d, Linear, MaxPool2d -from mmcv.runner import BaseModule, force_fp32 -from torch.nn.modules.utils import _pair - -from mmdet.models.builder import HEADS, build_loss - - -@HEADS.register_module() -class MaskIoUHead(BaseModule): - """Mask IoU Head. - - This head predicts the IoU of predicted masks and corresponding gt masks. 
- """ - - def __init__(self, - num_convs=4, - num_fcs=2, - roi_feat_size=14, - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - num_classes=80, - loss_iou=dict(type='MSELoss', loss_weight=0.5), - init_cfg=[ - dict(type='Kaiming', override=dict(name='convs')), - dict(type='Caffe2Xavier', override=dict(name='fcs')), - dict( - type='Normal', - std=0.01, - override=dict(name='fc_mask_iou')) - ]): - super(MaskIoUHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.fc_out_channels = fc_out_channels - self.num_classes = num_classes - self.fp16_enabled = False - - self.convs = nn.ModuleList() - for i in range(num_convs): - if i == 0: - # concatenation of mask feature and mask prediction - in_channels = self.in_channels + 1 - else: - in_channels = self.conv_out_channels - stride = 2 if i == num_convs - 1 else 1 - self.convs.append( - Conv2d( - in_channels, - self.conv_out_channels, - 3, - stride=stride, - padding=1)) - - roi_feat_size = _pair(roi_feat_size) - pooled_area = (roi_feat_size[0] // 2) * (roi_feat_size[1] // 2) - self.fcs = nn.ModuleList() - for i in range(num_fcs): - in_channels = ( - self.conv_out_channels * - pooled_area if i == 0 else self.fc_out_channels) - self.fcs.append(Linear(in_channels, self.fc_out_channels)) - - self.fc_mask_iou = Linear(self.fc_out_channels, self.num_classes) - self.relu = nn.ReLU() - self.max_pool = MaxPool2d(2, 2) - self.loss_iou = build_loss(loss_iou) - - def forward(self, mask_feat, mask_pred): - mask_pred = mask_pred.sigmoid() - mask_pred_pooled = self.max_pool(mask_pred.unsqueeze(1)) - - x = torch.cat((mask_feat, mask_pred_pooled), 1) - - for conv in self.convs: - x = self.relu(conv(x)) - x = x.flatten(1) - for fc in self.fcs: - x = self.relu(fc(x)) - mask_iou = self.fc_mask_iou(x) - return mask_iou - - @force_fp32(apply_to=('mask_iou_pred', )) - def loss(self, mask_iou_pred, mask_iou_targets): - pos_inds = mask_iou_targets > 0 - if pos_inds.sum() > 0: - loss_mask_iou = self.loss_iou(mask_iou_pred[pos_inds], - mask_iou_targets[pos_inds]) - else: - loss_mask_iou = mask_iou_pred.sum() * 0 - return dict(loss_mask_iou=loss_mask_iou) - - @force_fp32(apply_to=('mask_pred', )) - def get_targets(self, sampling_results, gt_masks, mask_pred, mask_targets, - rcnn_train_cfg): - """Compute target of mask IoU. - - Mask IoU target is the IoU of the predicted mask (inside a bbox) and - the gt mask of corresponding gt mask (the whole instance). - The intersection area is computed inside the bbox, and the gt mask area - is computed with two steps, firstly we compute the gt area inside the - bbox, then divide it by the area ratio of gt area inside the bbox and - the gt area of the whole instance. - - Args: - sampling_results (list[:obj:`SamplingResult`]): sampling results. - gt_masks (BitmapMask | PolygonMask): Gt masks (the whole instance) - of each image, with the same shape of the input image. - mask_pred (Tensor): Predicted masks of each positive proposal, - shape (num_pos, h, w). - mask_targets (Tensor): Gt mask of each positive proposal, - binary map of the shape (num_pos, h, w). - rcnn_train_cfg (dict): Training config for R-CNN part. - - Returns: - Tensor: mask iou target (length == num positive). 
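As an illustrative aside (not part of the removed file or of this patch), the mask-IoU target described in the docstring above condenses to a few tensor operations. The helper below is a self-contained toy sketch: its inputs are made-up stand-ins for the per-proposal masks and area ratios that the real head derives from sampling results and ground-truth `BitmapMasks`.

```python
# Toy sketch of the mask IoU target computation explained above (illustration only).
import torch

def mask_iou_targets(mask_pred, mask_targets, area_ratios, thr=0.5):
    # mask_pred / mask_targets: (num_pos, h, w) maps cropped to the proposal box
    # area_ratios: gt area inside the box / gt area of the whole instance
    mask_pred = (mask_pred > thr).float()
    pred_areas = mask_pred.sum((-1, -2))
    overlap = (mask_pred * mask_targets).sum((-1, -2))                 # intersection inside the box
    gt_full_areas = mask_targets.sum((-1, -2)) / (area_ratios + 1e-7)  # recover whole-instance area
    return overlap / (pred_areas + gt_full_areas - overlap)

print(mask_iou_targets(torch.rand(2, 28, 28),
                       (torch.rand(2, 28, 28) > 0.5).float(),
                       torch.tensor([0.8, 0.6])))
```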
- """ - pos_proposals = [res.pos_bboxes for res in sampling_results] - pos_assigned_gt_inds = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - - # compute the area ratio of gt areas inside the proposals and - # the whole instance - area_ratios = map(self._get_area_ratio, pos_proposals, - pos_assigned_gt_inds, gt_masks) - area_ratios = torch.cat(list(area_ratios)) - assert mask_targets.size(0) == area_ratios.size(0) - - mask_pred = (mask_pred > rcnn_train_cfg.mask_thr_binary).float() - mask_pred_areas = mask_pred.sum((-1, -2)) - - # mask_pred and mask_targets are binary maps - overlap_areas = (mask_pred * mask_targets).sum((-1, -2)) - - # compute the mask area of the whole instance - gt_full_areas = mask_targets.sum((-1, -2)) / (area_ratios + 1e-7) - - mask_iou_targets = overlap_areas / ( - mask_pred_areas + gt_full_areas - overlap_areas) - return mask_iou_targets - - def _get_area_ratio(self, pos_proposals, pos_assigned_gt_inds, gt_masks): - """Compute area ratio of the gt mask inside the proposal and the gt - mask of the corresponding instance.""" - num_pos = pos_proposals.size(0) - if num_pos > 0: - area_ratios = [] - proposals_np = pos_proposals.cpu().numpy() - pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy() - # compute mask areas of gt instances (batch processing for speedup) - gt_instance_mask_area = gt_masks.areas - for i in range(num_pos): - gt_mask = gt_masks[pos_assigned_gt_inds[i]] - - # crop the gt mask inside the proposal - bbox = proposals_np[i, :].astype(np.int32) - gt_mask_in_proposal = gt_mask.crop(bbox) - - ratio = gt_mask_in_proposal.areas[0] / ( - gt_instance_mask_area[pos_assigned_gt_inds[i]] + 1e-7) - area_ratios.append(ratio) - area_ratios = torch.from_numpy(np.stack(area_ratios)).float().to( - pos_proposals.device) - else: - area_ratios = pos_proposals.new_zeros((0, )) - return area_ratios - - @force_fp32(apply_to=('mask_iou_pred', )) - def get_mask_scores(self, mask_iou_pred, det_bboxes, det_labels): - """Get the mask scores. - - mask_score = bbox_score * mask_iou - """ - inds = range(det_labels.size(0)) - mask_scores = mask_iou_pred[inds, det_labels] * det_bboxes[inds, -1] - mask_scores = mask_scores.cpu().numpy() - det_labels = det_labels.cpu().numpy() - return [mask_scores[det_labels == i] for i in range(self.num_classes)] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py deleted file mode 100644 index ca624866..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.builder import HEADS -from mmdet.models.utils import ResLayer, SimplifiedBasicBlock -from .fcn_mask_head import FCNMaskHead - - -@HEADS.register_module() -class SCNetMaskHead(FCNMaskHead): - """Mask head for `SCNet `_. - - Args: - conv_to_res (bool, optional): if True, change the conv layers to - ``SimplifiedBasicBlock``. 
- """ - - def __init__(self, conv_to_res=True, **kwargs): - super(SCNetMaskHead, self).__init__(**kwargs) - self.conv_to_res = conv_to_res - if conv_to_res: - assert self.conv_kernel_size == 3 - self.num_res_blocks = self.num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - self.in_channels, - self.conv_out_channels, - self.num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py deleted file mode 100644 index 2b8c5c32..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.builder import HEADS -from mmdet.models.utils import ResLayer, SimplifiedBasicBlock -from .fused_semantic_head import FusedSemanticHead - - -@HEADS.register_module() -class SCNetSemanticHead(FusedSemanticHead): - """Mask head for `SCNet `_. - - Args: - conv_to_res (bool, optional): if True, change the conv layers to - ``SimplifiedBasicBlock``. - """ - - def __init__(self, conv_to_res=True, **kwargs): - super(SCNetSemanticHead, self).__init__(**kwargs) - self.conv_to_res = conv_to_res - if self.conv_to_res: - num_res_blocks = self.num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - self.in_channels, - self.conv_out_channels, - num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.num_convs = num_res_blocks diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_scoring_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_scoring_roi_head.py deleted file mode 100644 index 4617988e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/mask_scoring_roi_head.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2roi -from ..builder import HEADS, build_head -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class MaskScoringRoIHead(StandardRoIHead): - """Mask Scoring RoIHead for Mask Scoring RCNN. 
- - https://arxiv.org/abs/1903.00241 - """ - - def __init__(self, mask_iou_head, **kwargs): - assert mask_iou_head is not None - super(MaskScoringRoIHead, self).__init__(**kwargs) - self.mask_iou_head = build_head(mask_iou_head) - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for Mask head in - training.""" - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - mask_results = super(MaskScoringRoIHead, - self)._mask_forward_train(x, sampling_results, - bbox_feats, gt_masks, - img_metas) - if mask_results['loss_mask'] is None: - return mask_results - - # mask iou head forward and loss - pos_mask_pred = mask_results['mask_pred'][ - range(mask_results['mask_pred'].size(0)), pos_labels] - mask_iou_pred = self.mask_iou_head(mask_results['mask_feats'], - pos_mask_pred) - pos_mask_iou_pred = mask_iou_pred[range(mask_iou_pred.size(0)), - pos_labels] - - mask_iou_targets = self.mask_iou_head.get_targets( - sampling_results, gt_masks, pos_mask_pred, - mask_results['mask_targets'], self.train_cfg) - loss_mask_iou = self.mask_iou_head.loss(pos_mask_iou_pred, - mask_iou_targets) - mask_results['loss_mask'].update(loss_mask_iou) - return mask_results - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Obtain mask prediction without augmentation.""" - # image shapes of images in the batch - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - num_imgs = len(det_bboxes) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - num_classes = self.mask_head.num_classes - segm_results = [[[] for _ in range(num_classes)] - for _ in range(num_imgs)] - mask_scores = [[[] for _ in range(num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i] - for i in range(num_imgs) - ] - mask_rois = bbox2roi(_bboxes) - mask_results = self._mask_forward(x, mask_rois) - concat_det_labels = torch.cat(det_labels) - # get mask scores with mask iou head - mask_feats = mask_results['mask_feats'] - mask_pred = mask_results['mask_pred'] - mask_iou_pred = self.mask_iou_head( - mask_feats, mask_pred[range(concat_det_labels.size(0)), - concat_det_labels]) - # split batch mask prediction back to each image - num_bboxes_per_img = tuple(len(_bbox) for _bbox in _bboxes) - mask_preds = mask_pred.split(num_bboxes_per_img, 0) - mask_iou_preds = mask_iou_pred.split(num_bboxes_per_img, 0) - - # apply mask post-processing to each image individually - segm_results = [] - mask_scores = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - mask_scores.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_preds[i], _bboxes[i], det_labels[i], - self.test_cfg, ori_shapes[i], scale_factors[i], - rescale) - # get mask scores with mask iou head - mask_score = self.mask_iou_head.get_mask_scores( - mask_iou_preds[i], det_bboxes[i], det_labels[i]) - segm_results.append(segm_result) - mask_scores.append(mask_score) - return list(zip(segm_results, mask_scores)) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/pisa_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/pisa_roi_head.py deleted file mode 100644 index 92a51186..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/pisa_roi_head.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.core import bbox2roi -from ..builder import HEADS -from ..losses.pisa_loss import carl_loss, isr_p -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class PISARoIHead(StandardRoIHead): - r"""The RoI head for `Prime Sample Attention in Object Detection - `_.""" - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """Forward function for training. - - Args: - x (list[Tensor]): List of multi-level img features. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): List of region proposals. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (list[Tensor], optional): Specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : True segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - neg_label_weights = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - # neg label weight is obtained by sampling when using ISR-N - neg_label_weight = None - if isinstance(sampling_result, tuple): - sampling_result, neg_label_weight = sampling_result - sampling_results.append(sampling_result) - neg_label_weights.append(neg_label_weight) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train( - x, - sampling_results, - gt_bboxes, - gt_labels, - img_metas, - neg_label_weights=neg_label_weights) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - return losses - - def _bbox_forward(self, x, rois): - """Box forward function used in both training and testing.""" - # TODO: a more flexible way to decide which feature maps to use - bbox_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - if self.with_shared_head: - bbox_feats = self.shared_head(bbox_feats) - cls_score, bbox_pred = self.bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, - x, - sampling_results, - gt_bboxes, - gt_labels, - img_metas, - neg_label_weights=None): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - - bbox_results = self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - - # neg_label_weights obtained by sampler is image-wise, mapping back to - # the corresponding location in label weights - if neg_label_weights[0] is not None: - label_weights = bbox_targets[1] - cur_num_rois = 0 - for i in range(len(sampling_results)): - num_pos = sampling_results[i].pos_inds.size(0) - num_neg = sampling_results[i].neg_inds.size(0) - label_weights[cur_num_rois + num_pos:cur_num_rois + num_pos + - num_neg] = neg_label_weights[i] - cur_num_rois += num_pos + num_neg - - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - - # Apply ISR-P - isr_cfg = self.train_cfg.get('isr', None) - if isr_cfg is not None: - bbox_targets = isr_p( - cls_score, - bbox_pred, - bbox_targets, - rois, - sampling_results, - self.bbox_head.loss_cls, - self.bbox_head.bbox_coder, - **isr_cfg, - num_class=self.bbox_head.num_classes) - loss_bbox = self.bbox_head.loss(cls_score, bbox_pred, rois, - *bbox_targets) - - # Add CARL Loss - carl_cfg = self.train_cfg.get('carl', None) - if carl_cfg is not None: - loss_carl = carl_loss( - cls_score, - bbox_targets[0], - bbox_pred, - bbox_targets[2], - self.bbox_head.loss_bbox, - **carl_cfg, - num_class=self.bbox_head.num_classes) - loss_bbox.update(loss_carl) - - 
bbox_results.update(loss_bbox=loss_bbox) - return bbox_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/point_rend_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/point_rend_roi_head.py deleted file mode 100644 index 9f667793..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/point_rend_roi_head.py +++ /dev/null @@ -1,393 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend # noqa -import os -import warnings - -import numpy as np -import torch -import torch.nn.functional as F -from mmcv.ops import point_sample, rel_roi_point_to_rel_img_point - -from mmdet.core import bbox2roi, bbox_mapping, merge_aug_masks -from .. import builder -from ..builder import HEADS -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class PointRendRoIHead(StandardRoIHead): - """`PointRend `_.""" - - def __init__(self, point_head, *args, **kwargs): - super().__init__(*args, **kwargs) - assert self.with_bbox and self.with_mask - self.init_point_head(point_head) - - def init_point_head(self, point_head): - """Initialize ``point_head``""" - self.point_head = builder.build_head(point_head) - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for mask head and point head - in training.""" - mask_results = super()._mask_forward_train(x, sampling_results, - bbox_feats, gt_masks, - img_metas) - if mask_results['loss_mask'] is not None: - loss_point = self._mask_point_forward_train( - x, sampling_results, mask_results['mask_pred'], gt_masks, - img_metas) - mask_results['loss_mask'].update(loss_point) - - return mask_results - - def _mask_point_forward_train(self, x, sampling_results, mask_pred, - gt_masks, img_metas): - """Run forward function and calculate loss for point head in - training.""" - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - rel_roi_points = self.point_head.get_roi_rel_points_train( - mask_pred, pos_labels, cfg=self.train_cfg) - rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, rois, rel_roi_points, img_metas) - coarse_point_feats = point_sample(mask_pred, rel_roi_points) - mask_point_pred = self.point_head(fine_grained_point_feats, - coarse_point_feats) - mask_point_target = self.point_head.get_targets( - rois, rel_roi_points, sampling_results, gt_masks, self.train_cfg) - loss_mask_point = self.point_head.loss(mask_point_pred, - mask_point_target, pos_labels) - - return loss_mask_point - - def _get_fine_grained_point_feats(self, x, rois, rel_roi_points, - img_metas): - """Sample fine grained feats from each level feature map and - concatenate them together. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - rel_roi_points (Tensor): A tensor of shape (num_rois, num_points, - 2) that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the [mask_height, mask_width] grid. - img_metas (list[dict]): Image meta info. - - Returns: - Tensor: The fine grained features for each points, - has shape (num_rois, feats_channels, num_points). - """ - num_imgs = len(img_metas) - fine_grained_feats = [] - for idx in range(self.mask_roi_extractor.num_inputs): - feats = x[idx] - spatial_scale = 1. 
/ float( - self.mask_roi_extractor.featmap_strides[idx]) - point_feats = [] - for batch_ind in range(num_imgs): - # unravel batch dim - feat = feats[batch_ind].unsqueeze(0) - inds = (rois[:, 0].long() == batch_ind) - if inds.any(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois[inds], rel_roi_points[inds], feat.shape[2:], - spatial_scale).unsqueeze(0) - point_feat = point_sample(feat, rel_img_points) - point_feat = point_feat.squeeze(0).transpose(0, 1) - point_feats.append(point_feat) - fine_grained_feats.append(torch.cat(point_feats, dim=0)) - return torch.cat(fine_grained_feats, dim=1) - - def _mask_point_forward_test(self, x, rois, label_pred, mask_pred, - img_metas): - """Mask refining process with point head in testing. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - label_pred (Tensor): The predication class for each rois. - mask_pred (Tensor): The predication coarse masks of - shape (num_rois, num_classes, small_size, small_size). - img_metas (list[dict]): Image meta info. - - Returns: - Tensor: The refined masks of shape (num_rois, num_classes, - large_size, large_size). - """ - refined_mask_pred = mask_pred.clone() - for subdivision_step in range(self.test_cfg.subdivision_steps): - refined_mask_pred = F.interpolate( - refined_mask_pred, - scale_factor=self.test_cfg.scale_factor, - mode='bilinear', - align_corners=False) - # If `subdivision_num_points` is larger or equal to the - # resolution of the next step, then we can skip this step - num_rois, channels, mask_height, mask_width = \ - refined_mask_pred.shape - if (self.test_cfg.subdivision_num_points >= - self.test_cfg.scale_factor**2 * mask_height * mask_width - and - subdivision_step < self.test_cfg.subdivision_steps - 1): - continue - point_indices, rel_roi_points = \ - self.point_head.get_roi_rel_points_test( - refined_mask_pred, label_pred, cfg=self.test_cfg) - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, rois, rel_roi_points, img_metas) - coarse_point_feats = point_sample(mask_pred, rel_roi_points) - mask_point_pred = self.point_head(fine_grained_point_feats, - coarse_point_feats) - - point_indices = point_indices.unsqueeze(1).expand(-1, channels, -1) - refined_mask_pred = refined_mask_pred.reshape( - num_rois, channels, mask_height * mask_width) - refined_mask_pred = refined_mask_pred.scatter_( - 2, point_indices, mask_point_pred) - refined_mask_pred = refined_mask_pred.view(num_rois, channels, - mask_height, mask_width) - - return refined_mask_pred - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Obtain mask prediction without augmentation.""" - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - if isinstance(scale_factors[0], float): - warnings.warn( - 'Scale factor in img_metas should be a ' - 'ndarray with shape (4,) ' - 'arrange as (factor_w, factor_h, factor_w, factor_h), ' - 'The scale_factor with float type has been deprecated. ') - scale_factors = np.array([scale_factors] * 4, dtype=np.float32) - - num_imgs = len(det_bboxes) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - segm_results = [[[] for _ in range(self.mask_head.num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- _bboxes = [det_bboxes[i][:, :4] for i in range(len(det_bboxes))] - if rescale: - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - _bboxes[i] * scale_factors[i] for i in range(len(_bboxes)) - ] - - mask_rois = bbox2roi(_bboxes) - mask_results = self._mask_forward(x, mask_rois) - # split batch mask prediction back to each image - mask_pred = mask_results['mask_pred'] - num_mask_roi_per_img = [len(det_bbox) for det_bbox in det_bboxes] - mask_preds = mask_pred.split(num_mask_roi_per_img, 0) - mask_rois = mask_rois.split(num_mask_roi_per_img, 0) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - x_i = [xx[[i]] for xx in x] - mask_rois_i = mask_rois[i] - mask_rois_i[:, 0] = 0 # TODO: remove this hack - mask_pred_i = self._mask_point_forward_test( - x_i, mask_rois_i, det_labels[i], mask_preds[i], - [img_metas]) - segm_result = self.mask_head.get_seg_masks( - mask_pred_i, _bboxes[i], det_labels[i], self.test_cfg, - ori_shapes[i], scale_factors[i], rescale) - segm_results.append(segm_result) - return segm_results - - def aug_test_mask(self, feats, img_metas, det_bboxes, det_labels): - """Test for mask head with test time augmentation.""" - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - aug_masks = [] - for x, img_meta in zip(feats, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip) - mask_rois = bbox2roi([_bboxes]) - mask_results = self._mask_forward(x, mask_rois) - mask_results['mask_pred'] = self._mask_point_forward_test( - x, mask_rois, det_labels, mask_results['mask_pred'], - img_meta) - # convert to numpy array to save memory - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - merged_masks = merge_aug_masks(aug_masks, img_metas, self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head.get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - self.test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return segm_result - - def _onnx_get_fine_grained_point_feats(self, x, rois, rel_roi_points): - """Export the process of sampling fine grained feats to onnx. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - rel_roi_points (Tensor): A tensor of shape (num_rois, num_points, - 2) that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the [mask_height, mask_width] grid. - - Returns: - Tensor: The fine grained features for each points, - has shape (num_rois, feats_channels, num_points). - """ - batch_size = x[0].shape[0] - num_rois = rois.shape[0] - fine_grained_feats = [] - for idx in range(self.mask_roi_extractor.num_inputs): - feats = x[idx] - spatial_scale = 1. 
/ float( - self.mask_roi_extractor.featmap_strides[idx]) - - rel_img_points = rel_roi_point_to_rel_img_point( - rois, rel_roi_points, feats, spatial_scale) - channels = feats.shape[1] - num_points = rel_img_points.shape[1] - rel_img_points = rel_img_points.reshape(batch_size, -1, num_points, - 2) - point_feats = point_sample(feats, rel_img_points) - point_feats = point_feats.transpose(1, 2).reshape( - num_rois, channels, num_points) - fine_grained_feats.append(point_feats) - return torch.cat(fine_grained_feats, dim=1) - - def _mask_point_onnx_export(self, x, rois, label_pred, mask_pred): - """Export mask refining process with point head to onnx. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - label_pred (Tensor): The predication class for each rois. - mask_pred (Tensor): The predication coarse masks of - shape (num_rois, num_classes, small_size, small_size). - - Returns: - Tensor: The refined masks of shape (num_rois, num_classes, - large_size, large_size). - """ - refined_mask_pred = mask_pred.clone() - for subdivision_step in range(self.test_cfg.subdivision_steps): - refined_mask_pred = F.interpolate( - refined_mask_pred, - scale_factor=self.test_cfg.scale_factor, - mode='bilinear', - align_corners=False) - # If `subdivision_num_points` is larger or equal to the - # resolution of the next step, then we can skip this step - num_rois, channels, mask_height, mask_width = \ - refined_mask_pred.shape - if (self.test_cfg.subdivision_num_points >= - self.test_cfg.scale_factor**2 * mask_height * mask_width - and - subdivision_step < self.test_cfg.subdivision_steps - 1): - continue - point_indices, rel_roi_points = \ - self.point_head.get_roi_rel_points_test( - refined_mask_pred, label_pred, cfg=self.test_cfg) - fine_grained_point_feats = self._onnx_get_fine_grained_point_feats( - x, rois, rel_roi_points) - coarse_point_feats = point_sample(mask_pred, rel_roi_points) - mask_point_pred = self.point_head(fine_grained_point_feats, - coarse_point_feats) - - point_indices = point_indices.unsqueeze(1).expand(-1, channels, -1) - refined_mask_pred = refined_mask_pred.reshape( - num_rois, channels, mask_height * mask_width) - - is_trt_backend = os.environ.get('ONNX_BACKEND') == 'MMCVTensorRT' - # avoid ScatterElements op in ONNX for TensorRT - if is_trt_backend: - mask_shape = refined_mask_pred.shape - point_shape = point_indices.shape - inds_dim0 = torch.arange(point_shape[0]).reshape( - point_shape[0], 1, 1).expand_as(point_indices) - inds_dim1 = torch.arange(point_shape[1]).reshape( - 1, point_shape[1], 1).expand_as(point_indices) - inds_1d = inds_dim0.reshape( - -1) * mask_shape[1] * mask_shape[2] + inds_dim1.reshape( - -1) * mask_shape[2] + point_indices.reshape(-1) - refined_mask_pred = refined_mask_pred.reshape(-1) - refined_mask_pred[inds_1d] = mask_point_pred.reshape(-1) - refined_mask_pred = refined_mask_pred.reshape(*mask_shape) - else: - refined_mask_pred = refined_mask_pred.scatter_( - 2, point_indices, mask_point_pred) - - refined_mask_pred = refined_mask_pred.view(num_rois, channels, - mask_height, mask_width) - - return refined_mask_pred - - def mask_onnx_export(self, x, img_metas, det_bboxes, det_labels, **kwargs): - """Export mask branch to onnx which supports batch inference. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - det_bboxes (Tensor): Bboxes and corresponding scores. - has shape [N, num_bboxes, 5]. - det_labels (Tensor): class labels of - shape [N, num_bboxes]. 
- - Returns: - Tensor: The segmentation results of shape [N, num_bboxes, - image_height, image_width]. - """ - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - raise RuntimeError('[ONNX Error] Can not record MaskHead ' - 'as it has not been executed this time') - batch_size = det_bboxes.size(0) - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. - det_bboxes = det_bboxes[..., :4] - batch_index = torch.arange( - det_bboxes.size(0), device=det_bboxes.device).float().view( - -1, 1, 1).expand(det_bboxes.size(0), det_bboxes.size(1), 1) - mask_rois = torch.cat([batch_index, det_bboxes], dim=-1) - mask_rois = mask_rois.view(-1, 5) - mask_results = self._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - max_shape = img_metas[0]['img_shape_for_onnx'] - num_det = det_bboxes.shape[1] - det_bboxes = det_bboxes.reshape(-1, 4) - det_labels = det_labels.reshape(-1) - - mask_pred = self._mask_point_onnx_export(x, mask_rois, det_labels, - mask_pred) - - segm_results = self.mask_head.onnx_export(mask_pred, det_bboxes, - det_labels, self.test_cfg, - max_shape) - segm_results = segm_results.reshape(batch_size, num_det, max_shape[0], - max_shape[1]) - return segm_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/__init__.py deleted file mode 100644 index 0f602149..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_roi_extractor import BaseRoIExtractor -from .generic_roi_extractor import GenericRoIExtractor -from .single_level_roi_extractor import SingleRoIExtractor - -__all__ = ['BaseRoIExtractor', 'SingleRoIExtractor', 'GenericRoIExtractor'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py deleted file mode 100644 index 82629757..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -import torch -import torch.nn as nn -from mmcv import ops -from mmcv.runner import BaseModule - - -class BaseRoIExtractor(BaseModule, metaclass=ABCMeta): - """Base class for RoI extractor. - - Args: - roi_layer (dict): Specify RoI layer type and arguments. - out_channels (int): Output channels of RoI layers. - featmap_strides (int): Strides of input feature maps. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - roi_layer, - out_channels, - featmap_strides, - init_cfg=None): - super(BaseRoIExtractor, self).__init__(init_cfg) - self.roi_layers = self.build_roi_layers(roi_layer, featmap_strides) - self.out_channels = out_channels - self.featmap_strides = featmap_strides - self.fp16_enabled = False - - @property - def num_inputs(self): - """int: Number of input feature maps.""" - return len(self.featmap_strides) - - def build_roi_layers(self, layer_cfg, featmap_strides): - """Build RoI operator to extract feature from each level feature map. - - Args: - layer_cfg (dict): Dictionary to construct and config RoI layer - operation. Options are modules under ``mmcv/ops`` such as - ``RoIAlign``. 
- featmap_strides (List[int]): The stride of input feature map w.r.t - to the original image size, which would be used to scale RoI - coordinate (original image coordinate system) to feature - coordinate system. - - Returns: - nn.ModuleList: The RoI extractor modules for each level feature - map. - """ - - cfg = layer_cfg.copy() - layer_type = cfg.pop('type') - assert hasattr(ops, layer_type) - layer_cls = getattr(ops, layer_type) - roi_layers = nn.ModuleList( - [layer_cls(spatial_scale=1 / s, **cfg) for s in featmap_strides]) - return roi_layers - - def roi_rescale(self, rois, scale_factor): - """Scale RoI coordinates by scale factor. - - Args: - rois (torch.Tensor): RoI (Region of Interest), shape (n, 5) - scale_factor (float): Scale factor that RoI will be multiplied by. - - Returns: - torch.Tensor: Scaled RoI. - """ - - cx = (rois[:, 1] + rois[:, 3]) * 0.5 - cy = (rois[:, 2] + rois[:, 4]) * 0.5 - w = rois[:, 3] - rois[:, 1] - h = rois[:, 4] - rois[:, 2] - new_w = w * scale_factor - new_h = h * scale_factor - x1 = cx - new_w * 0.5 - x2 = cx + new_w * 0.5 - y1 = cy - new_h * 0.5 - y2 = cy + new_h * 0.5 - new_rois = torch.stack((rois[:, 0], x1, y1, x2, y2), dim=-1) - return new_rois - - @abstractmethod - def forward(self, feats, rois, roi_scale_factor=None): - pass diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py deleted file mode 100644 index 566d3de8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn.bricks import build_plugin_layer -from mmcv.runner import force_fp32 - -from mmdet.models.builder import ROI_EXTRACTORS -from .base_roi_extractor import BaseRoIExtractor - - -@ROI_EXTRACTORS.register_module() -class GenericRoIExtractor(BaseRoIExtractor): - """Extract RoI features from all level feature maps levels. - - This is the implementation of `A novel Region of Interest Extraction Layer - for Instance Segmentation `_. - - Args: - aggregation (str): The method to aggregate multiple feature maps. - Options are 'sum', 'concat'. Default: 'sum'. - pre_cfg (dict | None): Specify pre-processing modules. Default: None. - post_cfg (dict | None): Specify post-processing modules. Default: None. - kwargs (keyword arguments): Arguments that are the same - as :class:`BaseRoIExtractor`. 
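As an illustrative aside (not part of the removed file or of this patch), the centre-preserving box scaling performed by `roi_rescale` above can be sketched on its own; the RoI below is a made-up example.

```python
# Toy sketch of centre-preserving RoI rescaling, as done by roi_rescale above (illustration only).
import torch

def roi_rescale(rois, scale_factor):
    # rois: (n, 5) as (batch_idx, x1, y1, x2, y2); each box grows/shrinks about its centre
    cx = (rois[:, 1] + rois[:, 3]) * 0.5
    cy = (rois[:, 2] + rois[:, 4]) * 0.5
    new_w = (rois[:, 3] - rois[:, 1]) * scale_factor
    new_h = (rois[:, 4] - rois[:, 2]) * scale_factor
    return torch.stack((rois[:, 0],
                        cx - new_w * 0.5, cy - new_h * 0.5,
                        cx + new_w * 0.5, cy + new_h * 0.5), dim=-1)

print(roi_rescale(torch.tensor([[0., 10., 10., 30., 30.]]), 2.0))  # -> [[0, 0, 0, 40, 40]]
```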
- """ - - def __init__(self, - aggregation='sum', - pre_cfg=None, - post_cfg=None, - **kwargs): - super(GenericRoIExtractor, self).__init__(**kwargs) - - assert aggregation in ['sum', 'concat'] - - self.aggregation = aggregation - self.with_post = post_cfg is not None - self.with_pre = pre_cfg is not None - # build pre/post processing modules - if self.with_post: - self.post_module = build_plugin_layer(post_cfg, '_post_module')[1] - if self.with_pre: - self.pre_module = build_plugin_layer(pre_cfg, '_pre_module')[1] - - @force_fp32(apply_to=('feats', ), out_fp16=True) - def forward(self, feats, rois, roi_scale_factor=None): - """Forward function.""" - if len(feats) == 1: - return self.roi_layers[0](feats[0], rois) - - out_size = self.roi_layers[0].output_size - num_levels = len(feats) - roi_feats = feats[0].new_zeros( - rois.size(0), self.out_channels, *out_size) - - # some times rois is an empty tensor - if roi_feats.shape[0] == 0: - return roi_feats - - if roi_scale_factor is not None: - rois = self.roi_rescale(rois, roi_scale_factor) - - # mark the starting channels for concat mode - start_channels = 0 - for i in range(num_levels): - roi_feats_t = self.roi_layers[i](feats[i], rois) - end_channels = start_channels + roi_feats_t.size(1) - if self.with_pre: - # apply pre-processing to a RoI extracted from each layer - roi_feats_t = self.pre_module(roi_feats_t) - if self.aggregation == 'sum': - # and sum them all - roi_feats += roi_feats_t - else: - # and concat them along channel dimension - roi_feats[:, start_channels:end_channels] = roi_feats_t - # update channels starting position - start_channels = end_channels - # check if concat channels match at the end - if self.aggregation == 'concat': - assert start_channels == self.out_channels - - if self.with_post: - # apply post-processing before return the result - roi_feats = self.post_module(roi_feats) - return roi_feats diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py deleted file mode 100644 index 1b569ce1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 - -from mmdet.models.builder import ROI_EXTRACTORS -from .base_roi_extractor import BaseRoIExtractor - - -@ROI_EXTRACTORS.register_module() -class SingleRoIExtractor(BaseRoIExtractor): - """Extract RoI features from a single level feature map. - - If there are multiple input feature levels, each RoI is mapped to a level - according to its scale. The mapping rule is proposed in - `FPN `_. - - Args: - roi_layer (dict): Specify RoI layer type and arguments. - out_channels (int): Output channels of RoI layers. - featmap_strides (List[int]): Strides of input feature maps. - finest_scale (int): Scale threshold of mapping to level 0. Default: 56. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - roi_layer, - out_channels, - featmap_strides, - finest_scale=56, - init_cfg=None): - super(SingleRoIExtractor, self).__init__(roi_layer, out_channels, - featmap_strides, init_cfg) - self.finest_scale = finest_scale - - def map_roi_levels(self, rois, num_levels): - """Map rois to corresponding feature levels by scales. 
- - - scale < finest_scale * 2: level 0 - - finest_scale * 2 <= scale < finest_scale * 4: level 1 - - finest_scale * 4 <= scale < finest_scale * 8: level 2 - - scale >= finest_scale * 8: level 3 - - Args: - rois (Tensor): Input RoIs, shape (k, 5). - num_levels (int): Total level number. - - Returns: - Tensor: Level index (0-based) of each RoI, shape (k, ) - """ - scale = torch.sqrt( - (rois[:, 3] - rois[:, 1]) * (rois[:, 4] - rois[:, 2])) - target_lvls = torch.floor(torch.log2(scale / self.finest_scale + 1e-6)) - target_lvls = target_lvls.clamp(min=0, max=num_levels - 1).long() - return target_lvls - - @force_fp32(apply_to=('feats', ), out_fp16=True) - def forward(self, feats, rois, roi_scale_factor=None): - """Forward function.""" - out_size = self.roi_layers[0].output_size - num_levels = len(feats) - expand_dims = (-1, self.out_channels * out_size[0] * out_size[1]) - if torch.onnx.is_in_onnx_export(): - # Work around to export mask-rcnn to onnx - roi_feats = rois[:, :1].clone().detach() - roi_feats = roi_feats.expand(*expand_dims) - roi_feats = roi_feats.reshape(-1, self.out_channels, *out_size) - roi_feats = roi_feats * 0 - else: - roi_feats = feats[0].new_zeros( - rois.size(0), self.out_channels, *out_size) - # TODO: remove this when parrots supports - if torch.__version__ == 'parrots': - roi_feats.requires_grad = True - - if num_levels == 1: - if len(rois) == 0: - return roi_feats - return self.roi_layers[0](feats[0], rois) - - target_lvls = self.map_roi_levels(rois, num_levels) - - if roi_scale_factor is not None: - rois = self.roi_rescale(rois, roi_scale_factor) - - for i in range(num_levels): - mask = target_lvls == i - if torch.onnx.is_in_onnx_export(): - # To keep all roi_align nodes exported to onnx - # and skip nonzero op - mask = mask.float().unsqueeze(-1) - # select target level rois and reset the rest rois to zero. - rois_i = rois.clone().detach() - rois_i *= mask - mask_exp = mask.expand(*expand_dims).reshape(roi_feats.shape) - roi_feats_t = self.roi_layers[i](feats[i], rois_i) - roi_feats_t *= mask_exp - roi_feats += roi_feats_t - continue - inds = mask.nonzero(as_tuple=False).squeeze(1) - if inds.numel() > 0: - rois_ = rois[inds] - roi_feats_t = self.roi_layers[i](feats[i], rois_) - roi_feats[inds] = roi_feats_t - else: - # Sometimes some pyramid levels will not be used for RoI - # feature extraction and this will cause an incomplete - # computation graph in one GPU, which is different from those - # in other GPUs and will cause a hanging error. - # Therefore, we add it to ensure each feature pyramid is - # included in the computation graph to avoid runtime bugs. - roi_feats += sum( - x.view(-1)[0] - for x in self.parameters()) * 0. + feats[i].sum() * 0. - return roi_feats diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/scnet_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/scnet_roi_head.py deleted file mode 100644 index 705430a2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/scnet_roi_head.py +++ /dev/null @@ -1,605 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn.functional as F - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, merge_aug_bboxes, - merge_aug_masks, multiclass_nms) -from ..builder import HEADS, build_head, build_roi_extractor -from ..utils.brick_wrappers import adaptive_avg_pool2d -from .cascade_roi_head import CascadeRoIHead - - -@HEADS.register_module() -class SCNetRoIHead(CascadeRoIHead): - """RoIHead for `SCNet `_. 
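As an illustrative aside (not part of the removed file or of this patch), the scale-to-level rule implemented by `map_roi_levels` above boils down to a single `log2` expression; the RoIs below are made-up examples and `finest_scale=56` is the default documented above.

```python
# Toy sketch of the RoI-to-FPN-level mapping shown above (illustration only).
import torch

def map_roi_levels(rois, num_levels, finest_scale=56):
    # rois: (k, 5) as (batch_idx, x1, y1, x2, y2)
    scale = torch.sqrt((rois[:, 3] - rois[:, 1]) * (rois[:, 4] - rois[:, 2]))
    lvls = torch.floor(torch.log2(scale / finest_scale + 1e-6))
    return lvls.clamp(min=0, max=num_levels - 1).long()

rois = torch.tensor([[0., 0., 0., 50., 50.],      # ~50 px box  -> level 0
                     [0., 0., 0., 300., 300.]])   # ~300 px box -> level 2
print(map_roi_levels(rois, num_levels=4))          # tensor([0, 2])
```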
- - Args: - num_stages (int): number of cascade stages. - stage_loss_weights (list): loss weight of cascade stages. - semantic_roi_extractor (dict): config to init semantic roi extractor. - semantic_head (dict): config to init semantic head. - feat_relay_head (dict): config to init feature_relay_head. - glbctx_head (dict): config to init global context head. - """ - - def __init__(self, - num_stages, - stage_loss_weights, - semantic_roi_extractor=None, - semantic_head=None, - feat_relay_head=None, - glbctx_head=None, - **kwargs): - super(SCNetRoIHead, self).__init__(num_stages, stage_loss_weights, - **kwargs) - assert self.with_bbox and self.with_mask - assert not self.with_shared_head # shared head is not supported - - if semantic_head is not None: - self.semantic_roi_extractor = build_roi_extractor( - semantic_roi_extractor) - self.semantic_head = build_head(semantic_head) - - if feat_relay_head is not None: - self.feat_relay_head = build_head(feat_relay_head) - - if glbctx_head is not None: - self.glbctx_head = build_head(glbctx_head) - - def init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize ``mask_head``""" - if mask_roi_extractor is not None: - self.mask_roi_extractor = build_roi_extractor(mask_roi_extractor) - self.mask_head = build_head(mask_head) - - @property - def with_semantic(self): - """bool: whether the head has semantic head""" - return hasattr(self, - 'semantic_head') and self.semantic_head is not None - - @property - def with_feat_relay(self): - """bool: whether the head has feature relay head""" - return (hasattr(self, 'feat_relay_head') - and self.feat_relay_head is not None) - - @property - def with_glbctx(self): - """bool: whether the head has global context head""" - return hasattr(self, 'glbctx_head') and self.glbctx_head is not None - - def _fuse_glbctx(self, roi_feats, glbctx_feat, rois): - """Fuse global context feats with roi feats.""" - assert roi_feats.size(0) == rois.size(0) - img_inds = torch.unique(rois[:, 0].cpu(), sorted=True).long() - fused_feats = torch.zeros_like(roi_feats) - for img_id in img_inds: - inds = (rois[:, 0] == img_id.item()) - fused_feats[inds] = roi_feats[inds] + glbctx_feat[img_id] - return fused_feats - - def _slice_pos_feats(self, feats, sampling_results): - """Get features from pos rois.""" - num_rois = [res.bboxes.size(0) for res in sampling_results] - num_pos_rois = [res.pos_bboxes.size(0) for res in sampling_results] - inds = torch.zeros(sum(num_rois), dtype=torch.bool) - start = 0 - for i in range(len(num_rois)): - start = 0 if i == 0 else start + num_rois[i - 1] - stop = start + num_pos_rois[i] - inds[start:stop] = 1 - sliced_feats = feats[inds] - return sliced_feats - - def _bbox_forward(self, - stage, - x, - rois, - semantic_feat=None, - glbctx_feat=None): - """Box head forward function used in both training and testing.""" - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor( - x[:len(bbox_roi_extractor.featmap_strides)], rois) - if self.with_semantic and semantic_feat is not None: - bbox_semantic_feat = self.semantic_roi_extractor([semantic_feat], - rois) - if bbox_semantic_feat.shape[-2:] != bbox_feats.shape[-2:]: - bbox_semantic_feat = adaptive_avg_pool2d( - bbox_semantic_feat, bbox_feats.shape[-2:]) - bbox_feats += bbox_semantic_feat - if self.with_glbctx and glbctx_feat is not None: - bbox_feats = self._fuse_glbctx(bbox_feats, glbctx_feat, rois) - cls_score, bbox_pred, relayed_feat = bbox_head( - bbox_feats, return_shared_feat=True) - 
- bbox_results = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - relayed_feat=relayed_feat) - return bbox_results - - def _mask_forward(self, - x, - rois, - semantic_feat=None, - glbctx_feat=None, - relayed_feat=None): - """Mask head forward function used in both training and testing.""" - mask_feats = self.mask_roi_extractor( - x[:self.mask_roi_extractor.num_inputs], rois) - if self.with_semantic and semantic_feat is not None: - mask_semantic_feat = self.semantic_roi_extractor([semantic_feat], - rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - if self.with_glbctx and glbctx_feat is not None: - mask_feats = self._fuse_glbctx(mask_feats, glbctx_feat, rois) - if self.with_feat_relay and relayed_feat is not None: - mask_feats = mask_feats + relayed_feat - mask_pred = self.mask_head(mask_feats) - mask_results = dict(mask_pred=mask_pred) - - return mask_results - - def _bbox_forward_train(self, - stage, - x, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - semantic_feat=None, - glbctx_feat=None): - """Run forward function and calculate loss for box head in training.""" - bbox_head = self.bbox_head[stage] - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward( - stage, - x, - rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - - bbox_targets = bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, rcnn_train_cfg) - loss_bbox = bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update( - loss_bbox=loss_bbox, rois=rois, bbox_targets=bbox_targets) - return bbox_results - - def _mask_forward_train(self, - x, - sampling_results, - gt_masks, - rcnn_train_cfg, - semantic_feat=None, - glbctx_feat=None, - relayed_feat=None): - """Run forward function and calculate loss for mask head in - training.""" - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward( - x, - pos_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - relayed_feat=relayed_feat) - - mask_targets = self.mask_head.get_targets(sampling_results, gt_masks, - rcnn_train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head.loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results = loss_mask - return mask_results - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - gt_semantic_seg=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposal_list (list[Tensors]): list of region proposals. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None, list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None, Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- gt_semantic_seg (None, list[Tensor]): semantic segmentation masks - used if the architecture supports semantic segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - losses = dict() - - # semantic segmentation branch - if self.with_semantic: - semantic_pred, semantic_feat = self.semantic_head(x) - loss_seg = self.semantic_head.loss(semantic_pred, gt_semantic_seg) - losses['loss_semantic_seg'] = loss_seg - else: - semantic_feat = None - - # global context branch - if self.with_glbctx: - mc_pred, glbctx_feat = self.glbctx_head(x) - loss_glbctx = self.glbctx_head.loss(mc_pred, gt_labels) - losses['loss_glbctx'] = loss_glbctx - else: - glbctx_feat = None - - for i in range(self.num_stages): - self.current_stage = i - rcnn_train_cfg = self.train_cfg[i] - lw = self.stage_loss_weights[i] - - # assign gts and sample proposals - sampling_results = [] - bbox_assigner = self.bbox_assigner[i] - bbox_sampler = self.bbox_sampler[i] - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - - for j in range(num_imgs): - assign_result = bbox_assigner.assign(proposal_list[j], - gt_bboxes[j], - gt_bboxes_ignore[j], - gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - bbox_results = \ - self._bbox_forward_train( - i, x, sampling_results, gt_bboxes, gt_labels, - rcnn_train_cfg, semantic_feat, glbctx_feat) - roi_labels = bbox_results['bbox_targets'][0] - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # refine boxes - if i < self.num_stages - 1: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - with torch.no_grad(): - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, img_metas) - - if self.with_feat_relay: - relayed_feat = self._slice_pos_feats(bbox_results['relayed_feat'], - sampling_results) - relayed_feat = self.feat_relay_head(relayed_feat) - else: - relayed_feat = None - - mask_results = self._mask_forward_train(x, sampling_results, gt_masks, - rcnn_train_cfg, semantic_feat, - glbctx_feat, relayed_feat) - mask_lw = sum(self.stage_loss_weights) - losses['loss_mask'] = mask_lw * mask_results['loss_mask'] - - return losses - - def simple_test(self, x, proposal_list, img_metas, rescale=False): - """Test without augmentation. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (batch_size, c, h, w). - proposal_list (list(Tensor)): Proposals from rpn head. - Each has shape (num_proposals, 5), last dimension - 5 represent (x1, y1, x2, y2, score). - img_metas (list[dict]): Meta information of images. - rescale (bool): Whether to rescale the results to - the original image. Default: True. - - Returns: - list[list[np.ndarray]] or list[tuple]: When no mask branch, - it is bbox results of each image and classes with type - `list[list[np.ndarray]]`. The outer list - corresponds to each image. The inner list - corresponds to each class. When the model has mask branch, - it contains bbox results and mask results. - The outer list corresponds to each image, and first element - of tuple is bbox results, second element is mask results. 
- """ - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - - if self.with_glbctx: - mc_pred, glbctx_feat = self.glbctx_head(x) - else: - glbctx_feat = None - - num_imgs = len(proposal_list) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # "ms" in variable names means multi-stage - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - rois = bbox2roi(proposal_list) - - if rois.shape[0] == 0: - # There is no proposal in the whole batch - bbox_results = [[ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.bbox_head[-1].num_classes) - ]] * num_imgs - - if self.with_mask: - mask_classes = self.mask_head.num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - results = list(zip(bbox_results, segm_results)) - else: - results = bbox_results - - return results - - for i in range(self.num_stages): - bbox_head = self.bbox_head[i] - bbox_results = self._bbox_forward( - i, - x, - rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple(len(p) for p in proposal_list) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - ms_scores.append(cls_score) - - if i < self.num_stages - 1: - refine_rois_list = [] - for j in range(num_imgs): - if rois[j].shape[0] > 0: - bbox_label = cls_score[j][:, :-1].argmax(dim=1) - refine_rois = bbox_head.regress_by_class( - rois[j], bbox_label, bbox_pred[j], img_metas[j]) - refine_rois_list.append(refine_rois) - rois = torch.cat(refine_rois_list) - - # average scores of each image by stages - cls_score = [ - sum([score[i] for score in ms_scores]) / float(len(ms_scores)) - for i in range(num_imgs) - ] - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(num_imgs): - det_bbox, det_label = self.bbox_head[-1].get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - det_bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head[-1].num_classes) - for i in range(num_imgs) - ] - - if self.with_mask: - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - mask_classes = self.mask_head.num_classes - det_segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - else: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i] - for i in range(num_imgs) - ] - mask_rois = bbox2roi(_bboxes) - - # get relay feature on mask_rois - bbox_results = self._bbox_forward( - -1, - x, - mask_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - relayed_feat = bbox_results['relayed_feat'] - relayed_feat = self.feat_relay_head(relayed_feat) - - mask_results = self._mask_forward( - x, - mask_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - relayed_feat=relayed_feat) - mask_pred = mask_results['mask_pred'] - - # split batch 
mask prediction back to each image - num_bbox_per_img = tuple(len(_bbox) for _bbox in _bboxes) - mask_preds = mask_pred.split(num_bbox_per_img, 0) - - # apply mask post-processing to each image individually - det_segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - det_segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_preds[i], _bboxes[i], det_labels[i], - self.test_cfg, ori_shapes[i], scale_factors[i], - rescale) - det_segm_results.append(segm_result) - - # return results - if self.with_mask: - return list(zip(det_bbox_results, det_segm_results)) - else: - return det_bbox_results - - def aug_test(self, img_feats, proposal_list, img_metas, rescale=False): - if self.with_semantic: - semantic_feats = [ - self.semantic_head(feat)[1] for feat in img_feats - ] - else: - semantic_feats = [None] * len(img_metas) - - if self.with_glbctx: - glbctx_feats = [self.glbctx_head(feat)[1] for feat in img_feats] - else: - glbctx_feats = [None] * len(img_metas) - - rcnn_test_cfg = self.test_cfg - aug_bboxes = [] - aug_scores = [] - for x, img_meta, semantic_feat, glbctx_feat in zip( - img_feats, img_metas, semantic_feats, glbctx_feats): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip) - # "ms" in variable names means multi-stage - ms_scores = [] - - rois = bbox2roi([proposals]) - - if rois.shape[0] == 0: - # There is no proposal in the single image - aug_bboxes.append(rois.new_zeros(0, 4)) - aug_scores.append(rois.new_zeros(0, 1)) - continue - - for i in range(self.num_stages): - bbox_head = self.bbox_head[i] - bbox_results = self._bbox_forward( - i, - x, - rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - ms_scores.append(bbox_results['cls_score']) - if i < self.num_stages - 1: - bbox_label = bbox_results['cls_score'].argmax(dim=1) - rois = bbox_head.regress_by_class( - rois, bbox_label, bbox_results['bbox_pred'], - img_meta[0]) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bboxes, scores = self.bbox_head[-1].get_bboxes( - rois, - cls_score, - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - - det_bbox_results = bbox2result(det_bboxes, det_labels, - self.bbox_head[-1].num_classes) - - if self.with_mask: - if det_bboxes.shape[0] == 0: - det_segm_results = [[] - for _ in range(self.mask_head.num_classes)] - else: - aug_masks = [] - for x, img_meta, semantic_feat, glbctx_feat in zip( - img_feats, img_metas, semantic_feats, glbctx_feats): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip) - mask_rois = bbox2roi([_bboxes]) - # get relay feature on mask_rois - bbox_results = self._bbox_forward( - -1, - x, - mask_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - relayed_feat = bbox_results['relayed_feat'] - relayed_feat = 
self.feat_relay_head(relayed_feat) - mask_results = self._mask_forward( - x, - mask_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - relayed_feat=relayed_feat) - mask_pred = mask_results['mask_pred'] - aug_masks.append(mask_pred.sigmoid().cpu().numpy()) - merged_masks = merge_aug_masks(aug_masks, img_metas, - self.test_cfg) - ori_shape = img_metas[0][0]['ori_shape'] - det_segm_results = self.mask_head.get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - rcnn_test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return [(det_bbox_results, det_segm_results)] - else: - return [det_bbox_results] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/shared_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/shared_heads/__init__.py deleted file mode 100644 index d56636ab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/shared_heads/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .res_layer import ResLayer - -__all__ = ['ResLayer'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/shared_heads/res_layer.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/shared_heads/res_layer.py deleted file mode 100644 index bef00a05..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/shared_heads/res_layer.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -from mmcv.runner import BaseModule, auto_fp16 - -from mmdet.models.backbones import ResNet -from mmdet.models.builder import SHARED_HEADS -from mmdet.models.utils import ResLayer as _ResLayer - - -@SHARED_HEADS.register_module() -class ResLayer(BaseModule): - - def __init__(self, - depth, - stage=3, - stride=2, - dilation=1, - style='pytorch', - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - with_cp=False, - dcn=None, - pretrained=None, - init_cfg=None): - super(ResLayer, self).__init__(init_cfg) - - self.norm_eval = norm_eval - self.norm_cfg = norm_cfg - self.stage = stage - self.fp16_enabled = False - block, stage_blocks = ResNet.arch_settings[depth] - stage_block = stage_blocks[stage] - planes = 64 * 2**stage - inplanes = 64 * 2**(stage - 1) * block.expansion - - res_layer = _ResLayer( - block, - inplanes, - planes, - stage_block, - stride=stride, - dilation=dilation, - style=style, - with_cp=with_cp, - norm_cfg=self.norm_cfg, - dcn=dcn) - self.add_module(f'layer{stage + 1}', res_layer) - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - @auto_fp16() - def forward(self, x): - res_layer = getattr(self, f'layer{self.stage + 1}') - out = res_layer(x) - return out - - def train(self, mode=True): - super(ResLayer, self).train(mode) - if self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/sparse_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/sparse_roi_head.py 
deleted file mode 100644 index 2613469e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/sparse_roi_head.py +++ /dev/null @@ -1,424 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core import bbox2result, bbox2roi, bbox_xyxy_to_cxcywh -from mmdet.core.bbox.samplers import PseudoSampler -from ..builder import HEADS -from .cascade_roi_head import CascadeRoIHead - - -@HEADS.register_module() -class SparseRoIHead(CascadeRoIHead): - r"""The RoIHead for `Sparse R-CNN: End-to-End Object Detection with - Learnable Proposals `_ - and `Instances as Queries `_ - - Args: - num_stages (int): Number of stage whole iterative process. - Defaults to 6. - stage_loss_weights (Tuple[float]): The loss - weight of each stage. By default all stages have - the same weight 1. - bbox_roi_extractor (dict): Config of box roi extractor. - mask_roi_extractor (dict): Config of mask roi extractor. - bbox_head (dict): Config of box head. - mask_head (dict): Config of mask head. - train_cfg (dict, optional): Configuration information in train stage. - Defaults to None. - test_cfg (dict, optional): Configuration information in test stage. - Defaults to None. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - """ - - def __init__(self, - num_stages=6, - stage_loss_weights=(1, 1, 1, 1, 1, 1), - proposal_feature_channel=256, - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict( - type='RoIAlign', output_size=7, sampling_ratio=2), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_roi_extractor=None, - bbox_head=dict( - type='DIIHead', - num_classes=80, - num_fcs=2, - num_heads=8, - num_cls_fcs=1, - num_reg_fcs=3, - feedforward_channels=2048, - hidden_channels=256, - dropout=0.0, - roi_feat_size=7, - ffn_act_cfg=dict(type='ReLU', inplace=True)), - mask_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - assert bbox_roi_extractor is not None - assert bbox_head is not None - assert len(stage_loss_weights) == num_stages - self.num_stages = num_stages - self.stage_loss_weights = stage_loss_weights - self.proposal_feature_channel = proposal_feature_channel - super(SparseRoIHead, self).__init__( - num_stages, - stage_loss_weights, - bbox_roi_extractor=bbox_roi_extractor, - mask_roi_extractor=mask_roi_extractor, - bbox_head=bbox_head, - mask_head=mask_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - # train_cfg would be None when run the test.py - if train_cfg is not None: - for stage in range(num_stages): - assert isinstance(self.bbox_sampler[stage], PseudoSampler), \ - 'Sparse R-CNN and QueryInst only support `PseudoSampler`' - - def _bbox_forward(self, stage, x, rois, object_feats, img_metas): - """Box head forward function used in both training and testing. Returns - all regression, classification results and a intermediate feature. - - Args: - stage (int): The index of current stage in - iterative process. - x (List[Tensor]): List of FPN features - rois (Tensor): Rois in total batch. With shape (num_proposal, 5). - the last dimension 5 represents (img_index, x1, y1, x2, y2). - object_feats (Tensor): The object feature extracted from - the previous stage. - img_metas (dict): meta information of images. 
- - Returns: - dict[str, Tensor]: a dictionary of bbox head outputs, - Containing the following results: - - - cls_score (Tensor): The score of each class, has - shape (batch_size, num_proposals, num_classes) - when use focal loss or - (batch_size, num_proposals, num_classes+1) - otherwise. - - decode_bbox_pred (Tensor): The regression results - with shape (batch_size, num_proposal, 4). - The last dimension 4 represents - [tl_x, tl_y, br_x, br_y]. - - object_feats (Tensor): The object feature extracted - from current stage - - detach_cls_score_list (list[Tensor]): The detached - classification results, length is batch_size, and - each tensor has shape (num_proposal, num_classes). - - detach_proposal_list (list[tensor]): The detached - regression results, length is batch_size, and each - tensor has shape (num_proposal, 4). The last - dimension 4 represents [tl_x, tl_y, br_x, br_y]. - """ - num_imgs = len(img_metas) - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs], - rois) - cls_score, bbox_pred, object_feats, attn_feats = bbox_head( - bbox_feats, object_feats) - proposal_list = self.bbox_head[stage].refine_bboxes( - rois, - rois.new_zeros(len(rois)), # dummy arg - bbox_pred.view(-1, bbox_pred.size(-1)), - [rois.new_zeros(object_feats.size(1)) for _ in range(num_imgs)], - img_metas) - bbox_results = dict( - cls_score=cls_score, - decode_bbox_pred=torch.cat(proposal_list), - object_feats=object_feats, - attn_feats=attn_feats, - # detach then use it in label assign - detach_cls_score_list=[ - cls_score[i].detach() for i in range(num_imgs) - ], - detach_proposal_list=[item.detach() for item in proposal_list]) - - return bbox_results - - def _mask_forward(self, stage, x, rois, attn_feats): - """Mask head forward function used in both training and testing.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - mask_pred = mask_head(mask_feats, attn_feats) - - mask_results = dict(mask_pred=mask_pred) - return mask_results - - def _mask_forward_train(self, stage, x, attn_feats, sampling_results, - gt_masks, rcnn_train_cfg): - """Run forward function and calculate loss for mask head in - training.""" - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - attn_feats = torch.cat([ - feats[res.pos_inds] - for (feats, res) in zip(attn_feats, sampling_results) - ]) - mask_results = self._mask_forward(stage, x, pos_rois, attn_feats) - - mask_targets = self.mask_head[stage].get_targets( - sampling_results, gt_masks, rcnn_train_cfg) - - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - - loss_mask = self.mask_head[stage].loss(mask_results['mask_pred'], - mask_targets, pos_labels) - mask_results.update(loss_mask) - return mask_results - - def forward_train(self, - x, - proposal_boxes, - proposal_features, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - imgs_whwh=None, - gt_masks=None): - """Forward function in training stage. - - Args: - x (list[Tensor]): list of multi-level img features. 
- proposals (Tensor): Decoded proposal bboxes, has shape - (batch_size, num_proposals, 4) - proposal_features (Tensor): Expanded proposal - features, has shape - (batch_size, num_proposals, proposal_feature_channel) - img_metas (list[dict]): list of image info dict where - each dict has: 'img_shape', 'scale_factor', 'flip', - and may also contain 'filename', 'ori_shape', - 'pad_shape', and 'img_norm_cfg'. For details on the - values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - imgs_whwh (Tensor): Tensor with shape (batch_size, 4), - the dimension means - [img_width,img_height, img_width, img_height]. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components of all stage. - """ - - num_imgs = len(img_metas) - num_proposals = proposal_boxes.size(1) - imgs_whwh = imgs_whwh.repeat(1, num_proposals, 1) - all_stage_bbox_results = [] - proposal_list = [proposal_boxes[i] for i in range(len(proposal_boxes))] - object_feats = proposal_features - all_stage_loss = {} - for stage in range(self.num_stages): - rois = bbox2roi(proposal_list) - bbox_results = self._bbox_forward(stage, x, rois, object_feats, - img_metas) - all_stage_bbox_results.append(bbox_results) - if gt_bboxes_ignore is None: - # TODO support ignore - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - cls_pred_list = bbox_results['detach_cls_score_list'] - proposal_list = bbox_results['detach_proposal_list'] - for i in range(num_imgs): - normalize_bbox_ccwh = bbox_xyxy_to_cxcywh(proposal_list[i] / - imgs_whwh[i]) - assign_result = self.bbox_assigner[stage].assign( - normalize_bbox_ccwh, cls_pred_list[i], gt_bboxes[i], - gt_labels[i], img_metas[i]) - sampling_result = self.bbox_sampler[stage].sample( - assign_result, proposal_list[i], gt_bboxes[i]) - sampling_results.append(sampling_result) - bbox_targets = self.bbox_head[stage].get_targets( - sampling_results, gt_bboxes, gt_labels, self.train_cfg[stage], - True) - cls_score = bbox_results['cls_score'] - decode_bbox_pred = bbox_results['decode_bbox_pred'] - - single_stage_loss = self.bbox_head[stage].loss( - cls_score.view(-1, cls_score.size(-1)), - decode_bbox_pred.view(-1, 4), - *bbox_targets, - imgs_whwh=imgs_whwh) - - if self.with_mask: - mask_results = self._mask_forward_train( - stage, x, bbox_results['attn_feats'], sampling_results, - gt_masks, self.train_cfg[stage]) - single_stage_loss['loss_mask'] = mask_results['loss_mask'] - - for key, value in single_stage_loss.items(): - all_stage_loss[f'stage{stage}_{key}'] = value * \ - self.stage_loss_weights[stage] - object_feats = bbox_results['object_feats'] - - return all_stage_loss - - def simple_test(self, - x, - proposal_boxes, - proposal_features, - img_metas, - imgs_whwh, - rescale=False): - """Test without augmentation. - - Args: - x (list[Tensor]): list of multi-level img features. - proposal_boxes (Tensor): Decoded proposal bboxes, has shape - (batch_size, num_proposals, 4) - proposal_features (Tensor): Expanded proposal - features, has shape - (batch_size, num_proposals, proposal_feature_channel) - img_metas (dict): meta information of images. 
- imgs_whwh (Tensor): Tensor with shape (batch_size, 4), - the dimension means - [img_width,img_height, img_width, img_height]. - rescale (bool): If True, return boxes in original image - space. Defaults to False. - - Returns: - list[list[np.ndarray]] or list[tuple]: When no mask branch, - it is bbox results of each image and classes with type - `list[list[np.ndarray]]`. The outer list - corresponds to each image. The inner list - corresponds to each class. When the model has a mask branch, - it is a list[tuple] that contains bbox results and mask results. - The outer list corresponds to each image, and first element - of tuple is bbox results, second element is mask results. - """ - assert self.with_bbox, 'Bbox head must be implemented.' - # Decode initial proposals - num_imgs = len(img_metas) - proposal_list = [proposal_boxes[i] for i in range(num_imgs)] - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - object_feats = proposal_features - if all([proposal.shape[0] == 0 for proposal in proposal_list]): - # There is no proposal in the whole batch - bbox_results = [[ - np.zeros((0, 5), dtype=np.float32) - for i in range(self.bbox_head[-1].num_classes) - ]] * num_imgs - return bbox_results - - for stage in range(self.num_stages): - rois = bbox2roi(proposal_list) - bbox_results = self._bbox_forward(stage, x, rois, object_feats, - img_metas) - object_feats = bbox_results['object_feats'] - cls_score = bbox_results['cls_score'] - proposal_list = bbox_results['detach_proposal_list'] - - if self.with_mask: - rois = bbox2roi(proposal_list) - mask_results = self._mask_forward(stage, x, rois, - bbox_results['attn_feats']) - mask_results['mask_pred'] = mask_results['mask_pred'].reshape( - num_imgs, -1, *mask_results['mask_pred'].size()[1:]) - - num_classes = self.bbox_head[-1].num_classes - det_bboxes = [] - det_labels = [] - - if self.bbox_head[-1].loss_cls.use_sigmoid: - cls_score = cls_score.sigmoid() - else: - cls_score = cls_score.softmax(-1)[..., :-1] - - for img_id in range(num_imgs): - cls_score_per_img = cls_score[img_id] - scores_per_img, topk_indices = cls_score_per_img.flatten( - 0, 1).topk( - self.test_cfg.max_per_img, sorted=False) - labels_per_img = topk_indices % num_classes - bbox_pred_per_img = proposal_list[img_id][topk_indices // - num_classes] - if rescale: - scale_factor = img_metas[img_id]['scale_factor'] - bbox_pred_per_img /= bbox_pred_per_img.new_tensor(scale_factor) - det_bboxes.append( - torch.cat([bbox_pred_per_img, scores_per_img[:, None]], dim=1)) - det_labels.append(labels_per_img) - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], num_classes) - for i in range(num_imgs) - ] - - if self.with_mask: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - segm_results = [] - mask_pred = mask_results['mask_pred'] - for img_id in range(num_imgs): - mask_pred_per_img = mask_pred[img_id].flatten(0, - 1)[topk_indices] - mask_pred_per_img = mask_pred_per_img[:, None, ...].repeat( - 1, num_classes, 1, 1) - segm_result = self.mask_head[-1].get_seg_masks( - mask_pred_per_img, _bboxes[img_id], det_labels[img_id], - self.test_cfg, ori_shapes[img_id], scale_factors[img_id], - rescale) - segm_results.append(segm_result) - - if 
self.with_mask: - results = list(zip(bbox_results, segm_results)) - else: - results = bbox_results - - return results - - def aug_test(self, features, proposal_list, img_metas, rescale=False): - raise NotImplementedError( - 'Sparse R-CNN and QueryInst does not support `aug_test`') - - def forward_dummy(self, x, proposal_boxes, proposal_features, img_metas): - """Dummy forward function when do the flops computing.""" - all_stage_bbox_results = [] - proposal_list = [proposal_boxes[i] for i in range(len(proposal_boxes))] - object_feats = proposal_features - if self.with_bbox: - for stage in range(self.num_stages): - rois = bbox2roi(proposal_list) - bbox_results = self._bbox_forward(stage, x, rois, object_feats, - img_metas) - - all_stage_bbox_results.append((bbox_results, )) - proposal_list = bbox_results['detach_proposal_list'] - object_feats = bbox_results['object_feats'] - - if self.with_mask: - rois = bbox2roi(proposal_list) - mask_results = self._mask_forward( - stage, x, rois, bbox_results['attn_feats']) - all_stage_bbox_results[-1] += (mask_results, ) - return all_stage_bbox_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/standard_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/standard_roi_head.py deleted file mode 100644 index 3fdd82ad..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/standard_roi_head.py +++ /dev/null @@ -1,397 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler -from ..builder import HEADS, build_head, build_roi_extractor -from .base_roi_head import BaseRoIHead -from .test_mixins import BBoxTestMixin, MaskTestMixin - - -@HEADS.register_module() -class StandardRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin): - """Simplest base roi head including one bbox head and one mask head.""" - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - self.bbox_assigner = None - self.bbox_sampler = None - if self.train_cfg: - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - self.bbox_sampler = build_sampler( - self.train_cfg.sampler, context=self) - - def init_bbox_head(self, bbox_roi_extractor, bbox_head): - """Initialize ``bbox_head``""" - self.bbox_roi_extractor = build_roi_extractor(bbox_roi_extractor) - self.bbox_head = build_head(bbox_head) - - def init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize ``mask_head``""" - if mask_roi_extractor is not None: - self.mask_roi_extractor = build_roi_extractor(mask_roi_extractor) - self.share_roi_extractor = False - else: - self.share_roi_extractor = True - self.mask_roi_extractor = self.bbox_roi_extractor - self.mask_head = build_head(mask_head) - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - bbox_results = self._bbox_forward(x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask head - if self.with_mask: - mask_rois = rois[:100] - mask_results = self._mask_forward(x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - **kwargs): - """ - Args: - x (list[Tensor]): list of multi-level img features. 
- img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): list of region proposals. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - return losses - - def _bbox_forward(self, x, rois): - """Box head forward function used in both training and testing.""" - # TODO: a more flexible way to decide which feature maps to use - bbox_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - if self.with_shared_head: - bbox_feats = self.shared_head(bbox_feats) - cls_score, bbox_pred = self.bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - loss_bbox = self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for mask head in - training.""" - if not self.share_roi_extractor: - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward(x, pos_rois) - else: - pos_inds = [] - device = bbox_feats.device - for res in sampling_results: - pos_inds.append( - torch.ones( - res.pos_bboxes.shape[0], - device=device, - dtype=torch.uint8)) - pos_inds.append( - torch.zeros( - res.neg_bboxes.shape[0], - device=device, - dtype=torch.uint8)) - pos_inds = torch.cat(pos_inds) - - 
mask_results = self._mask_forward( - x, pos_inds=pos_inds, bbox_feats=bbox_feats) - - mask_targets = self.mask_head.get_targets(sampling_results, gt_masks, - self.train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head.loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results.update(loss_mask=loss_mask, mask_targets=mask_targets) - return mask_results - - def _mask_forward(self, x, rois=None, pos_inds=None, bbox_feats=None): - """Mask head forward function used in both training and testing.""" - assert ((rois is not None) ^ - (pos_inds is not None and bbox_feats is not None)) - if rois is not None: - mask_feats = self.mask_roi_extractor( - x[:self.mask_roi_extractor.num_inputs], rois) - if self.with_shared_head: - mask_feats = self.shared_head(mask_feats) - else: - assert bbox_feats is not None - mask_feats = bbox_feats[pos_inds] - - mask_pred = self.mask_head(mask_feats) - mask_results = dict(mask_pred=mask_pred, mask_feats=mask_feats) - return mask_results - - async def async_simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Async test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = await self.async_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - bbox_results = bbox2result(det_bboxes, det_labels, - self.bbox_head.num_classes) - if not self.with_mask: - return bbox_results - else: - segm_results = await self.async_test_mask( - x, - img_metas, - det_bboxes, - det_labels, - rescale=rescale, - mask_test_cfg=self.test_cfg.get('mask')) - return bbox_results, segm_results - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (batch_size, c, h, w). - proposal_list (list(Tensor)): Proposals from rpn head. - Each has shape (num_proposals, 5), last dimension - 5 represent (x1, y1, x2, y2, score). - img_metas (list[dict]): Meta information of images. - rescale (bool): Whether to rescale the results to - the original image. Default: True. - - Returns: - list[list[np.ndarray]] or list[tuple]: When no mask branch, - it is bbox results of each image and classes with type - `list[list[np.ndarray]]`. The outer list - corresponds to each image. The inner list - corresponds to each class. When the model has mask branch, - it contains bbox results and mask results. - The outer list corresponds to each image, and first element - of tuple is bbox results, second element is mask results. - """ - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head.num_classes) - for i in range(len(det_bboxes)) - ] - - if not self.with_mask: - return bbox_results - else: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return list(zip(bbox_results, segm_results)) - - def aug_test(self, x, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. 
- """ - det_bboxes, det_labels = self.aug_test_bboxes(x, img_metas, - proposal_list, - self.test_cfg) - if rescale: - _det_bboxes = det_bboxes - else: - _det_bboxes = det_bboxes.clone() - _det_bboxes[:, :4] *= det_bboxes.new_tensor( - img_metas[0][0]['scale_factor']) - bbox_results = bbox2result(_det_bboxes, det_labels, - self.bbox_head.num_classes) - - # det_bboxes always keep the original scale - if self.with_mask: - segm_results = self.aug_test_mask(x, img_metas, det_bboxes, - det_labels) - return [(bbox_results, segm_results)] - else: - return [bbox_results] - - def onnx_export(self, x, proposals, img_metas, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - det_bboxes, det_labels = self.bbox_onnx_export( - x, img_metas, proposals, self.test_cfg, rescale=rescale) - - if not self.with_mask: - return det_bboxes, det_labels - else: - segm_results = self.mask_onnx_export( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return det_bboxes, det_labels, segm_results - - def mask_onnx_export(self, x, img_metas, det_bboxes, det_labels, **kwargs): - """Export mask branch to onnx which supports batch inference. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - det_bboxes (Tensor): Bboxes and corresponding scores. - has shape [N, num_bboxes, 5]. - det_labels (Tensor): class labels of - shape [N, num_bboxes]. - - Returns: - Tensor: The segmentation results of shape [N, num_bboxes, - image_height, image_width]. - """ - # image shapes of images in the batch - - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - raise RuntimeError('[ONNX Error] Can not record MaskHead ' - 'as it has not been executed this time') - batch_size = det_bboxes.size(0) - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. - det_bboxes = det_bboxes[..., :4] - batch_index = torch.arange( - det_bboxes.size(0), device=det_bboxes.device).float().view( - -1, 1, 1).expand(det_bboxes.size(0), det_bboxes.size(1), 1) - mask_rois = torch.cat([batch_index, det_bboxes], dim=-1) - mask_rois = mask_rois.view(-1, 5) - mask_results = self._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - max_shape = img_metas[0]['img_shape_for_onnx'] - num_det = det_bboxes.shape[1] - det_bboxes = det_bboxes.reshape(-1, 4) - det_labels = det_labels.reshape(-1) - segm_results = self.mask_head.onnx_export(mask_pred, det_bboxes, - det_labels, self.test_cfg, - max_shape) - segm_results = segm_results.reshape(batch_size, num_det, max_shape[0], - max_shape[1]) - return segm_results - - def bbox_onnx_export(self, x, img_metas, proposals, rcnn_test_cfg, - **kwargs): - """Export bbox branch to onnx which supports batch inference. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - proposals (Tensor): Region proposals with - batch dimension, has shape [N, num_bboxes, 5]. - rcnn_test_cfg (obj:`ConfigDict`): `test_cfg` of R-CNN. - - Returns: - tuple[Tensor, Tensor]: bboxes of shape [N, num_bboxes, 5] - and class labels of shape [N, num_bboxes]. 
- """ - # get origin input shape to support onnx dynamic input shape - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shapes = img_metas[0]['img_shape_for_onnx'] - - rois = proposals - - batch_index = torch.arange( - rois.size(0), device=rois.device).float().view(-1, 1, 1).expand( - rois.size(0), rois.size(1), 1) - - rois = torch.cat([batch_index, rois[..., :4]], dim=-1) - batch_size = rois.shape[0] - num_proposals_per_img = rois.shape[1] - - # Eliminate the batch dimension - rois = rois.view(-1, 5) - bbox_results = self._bbox_forward(x, rois) - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - - # Recover the batch dimension - rois = rois.reshape(batch_size, num_proposals_per_img, rois.size(-1)) - cls_score = cls_score.reshape(batch_size, num_proposals_per_img, - cls_score.size(-1)) - - bbox_pred = bbox_pred.reshape(batch_size, num_proposals_per_img, - bbox_pred.size(-1)) - det_bboxes, det_labels = self.bbox_head.onnx_export( - rois, cls_score, bbox_pred, img_shapes, cfg=rcnn_test_cfg) - - return det_bboxes, det_labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/test_mixins.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/test_mixins.py deleted file mode 100644 index ae6e79ae..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/test_mixins.py +++ /dev/null @@ -1,311 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import sys -import warnings - -import numpy as np -import torch - -from mmdet.core import (bbox2roi, bbox_mapping, merge_aug_bboxes, - merge_aug_masks, multiclass_nms) - -if sys.version_info >= (3, 7): - from mmdet.utils.contextmanagers import completed - - -class BBoxTestMixin: - - if sys.version_info >= (3, 7): - - async def async_test_bboxes(self, - x, - img_metas, - proposals, - rcnn_test_cfg, - rescale=False, - **kwargs): - """Asynchronized test for box head without augmentation.""" - rois = bbox2roi(proposals) - roi_feats = self.bbox_roi_extractor( - x[:len(self.bbox_roi_extractor.featmap_strides)], rois) - if self.with_shared_head: - roi_feats = self.shared_head(roi_feats) - sleep_interval = rcnn_test_cfg.get('async_sleep_interval', 0.017) - - async with completed( - __name__, 'bbox_head_forward', - sleep_interval=sleep_interval): - cls_score, bbox_pred = self.bbox_head(roi_feats) - - img_shape = img_metas[0]['img_shape'] - scale_factor = img_metas[0]['scale_factor'] - det_bboxes, det_labels = self.bbox_head.get_bboxes( - rois, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=rescale, - cfg=rcnn_test_cfg) - return det_bboxes, det_labels - - def simple_test_bboxes(self, - x, - img_metas, - proposals, - rcnn_test_cfg, - rescale=False): - """Test only det bboxes without augmentation. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - proposals (List[Tensor]): Region proposals. - rcnn_test_cfg (obj:`ConfigDict`): `test_cfg` of R-CNN. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Returns: - tuple[list[Tensor], list[Tensor]]: The first list contains - the boxes of the corresponding image in a batch, each - tensor has the shape (num_boxes, 5) and last dimension - 5 represent (tl_x, tl_y, br_x, br_y, score). Each Tensor - in the second list is the labels with shape (num_boxes, ). - The length of both lists should be equal to batch_size. 
- """ - - rois = bbox2roi(proposals) - - if rois.shape[0] == 0: - batch_size = len(proposals) - det_bbox = rois.new_zeros(0, 5) - det_label = rois.new_zeros((0, ), dtype=torch.long) - if rcnn_test_cfg is None: - det_bbox = det_bbox[:, :4] - det_label = rois.new_zeros( - (0, self.bbox_head.fc_cls.out_features)) - # There is no proposal in the whole batch - return [det_bbox] * batch_size, [det_label] * batch_size - - bbox_results = self._bbox_forward(x, rois) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple(len(p) for p in proposals) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - - # some detector with_reg is False, bbox_pred will be None - if bbox_pred is not None: - # TODO move this to a sabl_roi_head - # the bbox prediction of some detectors like SABL is not Tensor - if isinstance(bbox_pred, torch.Tensor): - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - else: - bbox_pred = self.bbox_head.bbox_pred_split( - bbox_pred, num_proposals_per_img) - else: - bbox_pred = (None, ) * len(proposals) - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(len(proposals)): - if rois[i].shape[0] == 0: - # There is no proposal in the single image - det_bbox = rois[i].new_zeros(0, 5) - det_label = rois[i].new_zeros((0, ), dtype=torch.long) - if rcnn_test_cfg is None: - det_bbox = det_bbox[:, :4] - det_label = rois[i].new_zeros( - (0, self.bbox_head.fc_cls.out_features)) - - else: - det_bbox, det_label = self.bbox_head.get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - return det_bboxes, det_labels - - def aug_test_bboxes(self, feats, img_metas, proposal_list, rcnn_test_cfg): - """Test det bboxes with test time augmentation.""" - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - # TODO more flexible - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - rois = bbox2roi([proposals]) - bbox_results = self._bbox_forward(x, rois) - bboxes, scores = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - if merged_bboxes.shape[0] == 0: - # There is no proposal in the single image - det_bboxes = merged_bboxes.new_zeros(0, 5) - det_labels = merged_bboxes.new_zeros((0, ), dtype=torch.long) - else: - det_bboxes, det_labels = multiclass_nms(merged_bboxes, - merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - return det_bboxes, det_labels - - -class MaskTestMixin: - - if sys.version_info >= (3, 7): - - async def async_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - 
rescale=False, - mask_test_cfg=None): - """Asynchronized test for mask head without augmentation.""" - # image shape of the first image in the batch (only one) - ori_shape = img_metas[0]['ori_shape'] - scale_factor = img_metas[0]['scale_factor'] - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - if rescale and not isinstance(scale_factor, - (float, torch.Tensor)): - scale_factor = det_bboxes.new_tensor(scale_factor) - _bboxes = ( - det_bboxes[:, :4] * - scale_factor if rescale else det_bboxes) - mask_rois = bbox2roi([_bboxes]) - mask_feats = self.mask_roi_extractor( - x[:len(self.mask_roi_extractor.featmap_strides)], - mask_rois) - - if self.with_shared_head: - mask_feats = self.shared_head(mask_feats) - if mask_test_cfg and mask_test_cfg.get('async_sleep_interval'): - sleep_interval = mask_test_cfg['async_sleep_interval'] - else: - sleep_interval = 0.035 - async with completed( - __name__, - 'mask_head_forward', - sleep_interval=sleep_interval): - mask_pred = self.mask_head(mask_feats) - segm_result = self.mask_head.get_seg_masks( - mask_pred, _bboxes, det_labels, self.test_cfg, ori_shape, - scale_factor, rescale) - return segm_result - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Simple test for mask head without augmentation.""" - # image shapes of images in the batch - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - if isinstance(scale_factors[0], float): - warnings.warn( - 'Scale factor in img_metas should be a ' - 'ndarray with shape (4,) ' - 'arrange as (factor_w, factor_h, factor_w, factor_h), ' - 'The scale_factor with float type has been deprecated. ') - scale_factors = np.array([scale_factors] * 4, dtype=np.float32) - - num_imgs = len(det_bboxes) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - segm_results = [[[] for _ in range(self.mask_head.num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- if rescale: - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - mask_rois = bbox2roi(_bboxes) - mask_results = self._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - # split batch mask prediction back to each image - num_mask_roi_per_img = [len(det_bbox) for det_bbox in det_bboxes] - mask_preds = mask_pred.split(num_mask_roi_per_img, 0) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_preds[i], _bboxes[i], det_labels[i], - self.test_cfg, ori_shapes[i], scale_factors[i], - rescale) - segm_results.append(segm_result) - return segm_results - - def aug_test_mask(self, feats, img_metas, det_bboxes, det_labels): - """Test for mask head with test time augmentation.""" - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - aug_masks = [] - for x, img_meta in zip(feats, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - mask_results = self._mask_forward(x, mask_rois) - # convert to numpy array to save memory - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - merged_masks = merge_aug_masks(aug_masks, img_metas, self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - scale_factor = det_bboxes.new_ones(4) - segm_result = self.mask_head.get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - self.test_cfg, - ori_shape, - scale_factor=scale_factor, - rescale=False) - return segm_result diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/trident_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/trident_roi_head.py deleted file mode 100644 index 09758792..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/roi_heads/trident_roi_head.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import batched_nms - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, merge_aug_bboxes, - multiclass_nms) -from mmdet.models.roi_heads.standard_roi_head import StandardRoIHead -from ..builder import HEADS - - -@HEADS.register_module() -class TridentRoIHead(StandardRoIHead): - """Trident roi head. - - Args: - num_branch (int): Number of branches in TridentNet. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. 
- """ - - def __init__(self, num_branch, test_branch_idx, **kwargs): - self.num_branch = num_branch - self.test_branch_idx = test_branch_idx - super(TridentRoIHead, self).__init__(**kwargs) - - def merge_trident_bboxes(self, trident_det_bboxes, trident_det_labels): - """Merge bbox predictions of each branch.""" - if trident_det_bboxes.numel() == 0: - det_bboxes = trident_det_bboxes.new_zeros((0, 5)) - det_labels = trident_det_bboxes.new_zeros((0, ), dtype=torch.long) - else: - nms_bboxes = trident_det_bboxes[:, :4] - nms_scores = trident_det_bboxes[:, 4].contiguous() - nms_inds = trident_det_labels - nms_cfg = self.test_cfg['nms'] - det_bboxes, keep = batched_nms(nms_bboxes, nms_scores, nms_inds, - nms_cfg) - det_labels = trident_det_labels[keep] - if self.test_cfg['max_per_img'] > 0: - det_labels = det_labels[:self.test_cfg['max_per_img']] - det_bboxes = det_bboxes[:self.test_cfg['max_per_img']] - - return det_bboxes, det_labels - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation as follows: - - 1. Compute prediction bbox and label per branch. - 2. Merge predictions of each branch according to scores of - bboxes, i.e., bboxes with higher score are kept to give - top-k prediction. - """ - assert self.with_bbox, 'Bbox head must be implemented.' - det_bboxes_list, det_labels_list = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - num_branch = self.num_branch if self.test_branch_idx == -1 else 1 - for _ in range(len(det_bboxes_list)): - if det_bboxes_list[_].shape[0] == 0: - det_bboxes_list[_] = det_bboxes_list[_].new_empty((0, 5)) - det_bboxes, det_labels = [], [] - for i in range(len(img_metas) // num_branch): - det_result = self.merge_trident_bboxes( - torch.cat(det_bboxes_list[i * num_branch:(i + 1) * - num_branch]), - torch.cat(det_labels_list[i * num_branch:(i + 1) * - num_branch])) - det_bboxes.append(det_result[0]) - det_labels.append(det_result[1]) - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head.num_classes) - for i in range(len(det_bboxes)) - ] - return bbox_results - - def aug_test_bboxes(self, feats, img_metas, proposal_list, rcnn_test_cfg): - """Test det bboxes with test time augmentation.""" - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - trident_bboxes, trident_scores = [], [] - for branch_idx in range(len(proposal_list)): - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - rois = bbox2roi([proposals]) - bbox_results = self._bbox_forward(x, rois) - bboxes, scores = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - trident_bboxes.append(bboxes) - trident_scores.append(scores) - - aug_bboxes.append(torch.cat(trident_bboxes, 0)) - aug_scores.append(torch.cat(trident_scores, 0)) - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - return det_bboxes, det_labels diff --git 
a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/__init__.py deleted file mode 100644 index b489a905..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .panoptic_fpn_head import PanopticFPNHead # noqa: F401,F403 -from .panoptic_fusion_heads import * # noqa: F401,F403 diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/base_semantic_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/base_semantic_head.py deleted file mode 100644 index 2b6ca145..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/base_semantic_head.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -import torch.nn.functional as F -from mmcv.runner import BaseModule, force_fp32 - -from ..builder import build_loss -from ..utils import interpolate_as - - -class BaseSemanticHead(BaseModule, metaclass=ABCMeta): - """Base module of Semantic Head. - - Args: - num_classes (int): the number of classes. - init_cfg (dict): the initialization config. - loss_seg (dict): the loss of the semantic head. - """ - - def __init__(self, - num_classes, - init_cfg=None, - loss_seg=dict( - type='CrossEntropyLoss', - ignore_index=255, - loss_weight=1.0)): - super(BaseSemanticHead, self).__init__(init_cfg) - self.loss_seg = build_loss(loss_seg) - self.num_classes = num_classes - - @force_fp32(apply_to=('seg_preds', )) - def loss(self, seg_preds, gt_semantic_seg): - """Get the loss of semantic head. - - Args: - seg_preds (Tensor): The input logits with the shape (N, C, H, W). - gt_semantic_seg: The ground truth of semantic segmentation with - the shape (N, H, W). - label_bias: The starting number of the semantic label. - Default: 1. - - Returns: - dict: the loss of semantic head. - """ - if seg_preds.shape[-2:] != gt_semantic_seg.shape[-2:]: - seg_preds = interpolate_as(seg_preds, gt_semantic_seg) - seg_preds = seg_preds.permute((0, 2, 3, 1)) - - loss_seg = self.loss_seg( - seg_preds.reshape(-1, self.num_classes), # => [NxHxW, C] - gt_semantic_seg.reshape(-1).long()) - return dict(loss_seg=loss_seg) - - @abstractmethod - def forward(self, x): - """Placeholder of forward function. - - Returns: - dict[str, Tensor]: A dictionary, including features - and predicted scores. Required keys: 'seg_preds' - and 'feats'. - """ - pass - - def forward_train(self, x, gt_semantic_seg): - output = self.forward(x) - seg_preds = output['seg_preds'] - return self.loss(seg_preds, gt_semantic_seg) - - def simple_test(self, x, img_metas, rescale=False): - output = self.forward(x) - seg_preds = output['seg_preds'] - seg_preds = F.interpolate( - seg_preds, - size=img_metas[0]['pad_shape'][:2], - mode='bilinear', - align_corners=False) - - if rescale: - h, w, _ = img_metas[0]['img_shape'] - seg_preds = seg_preds[:, :, :h, :w] - - h, w, _ = img_metas[0]['ori_shape'] - seg_preds = F.interpolate( - seg_preds, size=(h, w), mode='bilinear', align_corners=False) - return seg_preds diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fpn_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fpn_head.py deleted file mode 100644 index f1df2976..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fpn_head.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch -import torch.nn as nn -from mmcv.runner import ModuleList - -from ..builder import HEADS -from ..utils import ConvUpsample -from .base_semantic_head import BaseSemanticHead - - -@HEADS.register_module() -class PanopticFPNHead(BaseSemanticHead): - """PanopticFPNHead used in Panoptic FPN. - - In this head, the number of output channels is ``num_stuff_classes - + 1``, including all stuff classes and one thing class. The stuff - classes will be reset from ``0`` to ``num_stuff_classes - 1``, the - thing classes will be merged to ``num_stuff_classes``-th channel. - - Arg: - num_things_classes (int): Number of thing classes. Default: 80. - num_stuff_classes (int): Number of stuff classes. Default: 53. - num_classes (int): Number of classes, including all stuff - classes and one thing class. This argument is deprecated, - please use ``num_things_classes`` and ``num_stuff_classes``. - The module will automatically infer the num_classes by - ``num_stuff_classes + 1``. - in_channels (int): Number of channels in the input feature - map. - inner_channels (int): Number of channels in inner features. - start_level (int): The start level of the input features - used in PanopticFPN. - end_level (int): The end level of the used features, the - ``end_level``-th layer will not be used. - fg_range (tuple): Range of the foreground classes. It starts - from ``0`` to ``num_things_classes-1``. Deprecated, please use - ``num_things_classes`` directly. - bg_range (tuple): Range of the background classes. It starts - from ``num_things_classes`` to ``num_things_classes + - num_stuff_classes - 1``. Deprecated, please use - ``num_stuff_classes`` and ``num_things_classes`` directly. - conv_cfg (dict): Dictionary to construct and config - conv layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Use ``GN`` by default. - init_cfg (dict or list[dict], optional): Initialization config dict. - loss_seg (dict): the loss of the semantic head. - """ - - def __init__(self, - num_things_classes=80, - num_stuff_classes=53, - num_classes=None, - in_channels=256, - inner_channels=128, - start_level=0, - end_level=4, - fg_range=None, - bg_range=None, - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - init_cfg=None, - loss_seg=dict( - type='CrossEntropyLoss', ignore_index=-1, - loss_weight=1.0)): - if num_classes is not None: - warnings.warn( - '`num_classes` is deprecated now, please set ' - '`num_stuff_classes` directly, the `num_classes` will be ' - 'set to `num_stuff_classes + 1`') - # num_classes = num_stuff_classes + 1 for PanopticFPN. 
- assert num_classes == num_stuff_classes + 1 - super(PanopticFPNHead, self).__init__(num_stuff_classes + 1, init_cfg, - loss_seg) - self.num_things_classes = num_things_classes - self.num_stuff_classes = num_stuff_classes - if fg_range is not None and bg_range is not None: - self.fg_range = fg_range - self.bg_range = bg_range - self.num_things_classes = fg_range[1] - fg_range[0] + 1 - self.num_stuff_classes = bg_range[1] - bg_range[0] + 1 - warnings.warn( - '`fg_range` and `bg_range` are deprecated now, ' - f'please use `num_things_classes`={self.num_things_classes} ' - f'and `num_stuff_classes`={self.num_stuff_classes} instead.') - - # Used feature layers are [start_level, end_level) - self.start_level = start_level - self.end_level = end_level - self.num_stages = end_level - start_level - self.inner_channels = inner_channels - - self.conv_upsample_layers = ModuleList() - for i in range(start_level, end_level): - self.conv_upsample_layers.append( - ConvUpsample( - in_channels, - inner_channels, - num_layers=i if i > 0 else 1, - num_upsample=i if i > 0 else 0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - )) - self.conv_logits = nn.Conv2d(inner_channels, self.num_classes, 1) - - def _set_things_to_void(self, gt_semantic_seg): - """Merge thing classes to one class. - - In PanopticFPN, the background labels will be reset from `0` to - `self.num_stuff_classes-1`, the foreground labels will be merged to - `self.num_stuff_classes`-th channel. - """ - gt_semantic_seg = gt_semantic_seg.int() - fg_mask = gt_semantic_seg < self.num_things_classes - bg_mask = (gt_semantic_seg >= self.num_things_classes) * ( - gt_semantic_seg < self.num_things_classes + self.num_stuff_classes) - - new_gt_seg = torch.clone(gt_semantic_seg) - new_gt_seg = torch.where(bg_mask, - gt_semantic_seg - self.num_things_classes, - new_gt_seg) - new_gt_seg = torch.where(fg_mask, - fg_mask.int() * self.num_stuff_classes, - new_gt_seg) - return new_gt_seg - - def loss(self, seg_preds, gt_semantic_seg): - """The loss of PanopticFPN head. - - Things classes will be merged to one class in PanopticFPN. - """ - gt_semantic_seg = self._set_things_to_void(gt_semantic_seg) - return super().loss(seg_preds, gt_semantic_seg) - - def init_weights(self): - super().init_weights() - nn.init.normal_(self.conv_logits.weight.data, 0, 0.01) - self.conv_logits.bias.data.zero_() - - def forward(self, x): - # the number of subnets must be not more than - # the length of features. - assert self.num_stages <= len(x) - - feats = [] - for i, layer in enumerate(self.conv_upsample_layers): - f = layer(x[self.start_level + i]) - feats.append(f) - - feats = torch.sum(torch.stack(feats, dim=0), dim=0) - seg_preds = self.conv_logits(feats) - out = dict(seg_preds=seg_preds, feats=feats) - return out diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py deleted file mode 100644 index 41625a61..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base_panoptic_fusion_head import \ - BasePanopticFusionHead # noqa: F401,F403 -from .heuristic_fusion_head import HeuristicFusionHead # noqa: F401,F403 -from .maskformer_fusion_head import MaskFormerFusionHead # noqa: F401,F403 diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/base_panoptic_fusion_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/base_panoptic_fusion_head.py deleted file mode 100644 index a38ac1c6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/base_panoptic_fusion_head.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from mmcv.runner import BaseModule - -from ...builder import build_loss - - -class BasePanopticFusionHead(BaseModule, metaclass=ABCMeta): - """Base class for panoptic heads.""" - - def __init__(self, - num_things_classes=80, - num_stuff_classes=53, - test_cfg=None, - loss_panoptic=None, - init_cfg=None, - **kwargs): - super(BasePanopticFusionHead, self).__init__(init_cfg) - self.num_things_classes = num_things_classes - self.num_stuff_classes = num_stuff_classes - self.num_classes = num_things_classes + num_stuff_classes - self.test_cfg = test_cfg - - if loss_panoptic: - self.loss_panoptic = build_loss(loss_panoptic) - else: - self.loss_panoptic = None - - @property - def with_loss(self): - """bool: whether the panoptic head contains loss function.""" - return self.loss_panoptic is not None - - @abstractmethod - def forward_train(self, gt_masks=None, gt_semantic_seg=None, **kwargs): - """Forward function during training.""" - - @abstractmethod - def simple_test(self, - img_metas, - det_labels, - mask_preds, - seg_preds, - det_bboxes, - cfg=None, - **kwargs): - """Test without augmentation.""" diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/heuristic_fusion_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/heuristic_fusion_head.py deleted file mode 100644 index 06c1de2b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/heuristic_fusion_head.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core.evaluation.panoptic_utils import INSTANCE_OFFSET -from mmdet.models.builder import HEADS -from .base_panoptic_fusion_head import BasePanopticFusionHead - - -@HEADS.register_module() -class HeuristicFusionHead(BasePanopticFusionHead): - """Fusion Head with Heuristic method.""" - - def __init__(self, - num_things_classes=80, - num_stuff_classes=53, - test_cfg=None, - init_cfg=None, - **kwargs): - super(HeuristicFusionHead, - self).__init__(num_things_classes, num_stuff_classes, test_cfg, - None, init_cfg, **kwargs) - - def forward_train(self, gt_masks=None, gt_semantic_seg=None, **kwargs): - """HeuristicFusionHead has no training loss.""" - return dict() - - def _lay_masks(self, bboxes, labels, masks, overlap_thr=0.5): - """Lay instance masks to a result map. - - Args: - bboxes: The bboxes results, (K, 4). - labels: The labels of bboxes, (K, ). - masks: The instance masks, (K, H, W). - overlap_thr: Threshold to determine whether two masks overlap. - default: 0.5. - - Returns: - Tensor: The result map, (H, W). 
- """ - num_insts = bboxes.shape[0] - id_map = torch.zeros( - masks.shape[-2:], device=bboxes.device, dtype=torch.long) - if num_insts == 0: - return id_map, labels - - scores, bboxes = bboxes[:, -1], bboxes[:, :4] - - # Sort by score to use heuristic fusion - order = torch.argsort(-scores) - bboxes = bboxes[order] - labels = labels[order] - segm_masks = masks[order] - - instance_id = 1 - left_labels = [] - for idx in range(bboxes.shape[0]): - _cls = labels[idx] - _mask = segm_masks[idx] - instance_id_map = torch.ones_like( - _mask, dtype=torch.long) * instance_id - area = _mask.sum() - if area == 0: - continue - - pasted = id_map > 0 - intersect = (_mask * pasted).sum() - if (intersect / (area + 1e-5)) > overlap_thr: - continue - - _part = _mask * (~pasted) - id_map = torch.where(_part, instance_id_map, id_map) - left_labels.append(_cls) - instance_id += 1 - - if len(left_labels) > 0: - instance_labels = torch.stack(left_labels) - else: - instance_labels = bboxes.new_zeros((0, ), dtype=torch.long) - assert instance_id == (len(instance_labels) + 1) - return id_map, instance_labels - - def simple_test(self, det_bboxes, det_labels, mask_preds, seg_preds, - **kwargs): - """Fuse the results of instance and semantic segmentations. - - Args: - det_bboxes: The bboxes results, (K, 4). - det_labels: The labels of bboxes, (K,). - mask_preds: The masks results, (K, H, W). - seg_preds: The semantic segmentation results, - (K, num_stuff + 1, H, W). - - Returns: - Tensor : The panoptic segmentation result, (H, W). - """ - mask_preds = mask_preds >= self.test_cfg.mask_thr_binary - id_map, labels = self._lay_masks(det_bboxes, det_labels, mask_preds, - self.test_cfg.mask_overlap) - - seg_results = seg_preds.argmax(dim=0) - seg_results = seg_results + self.num_things_classes - - pan_results = seg_results - instance_id = 1 - for idx in range(det_labels.shape[0]): - _mask = id_map == (idx + 1) - if _mask.sum() == 0: - continue - _cls = labels[idx] - # simply trust detection - segment_id = _cls + instance_id * INSTANCE_OFFSET - pan_results[_mask] = segment_id - instance_id += 1 - - ids, counts = torch.unique( - pan_results % INSTANCE_OFFSET, return_counts=True) - stuff_ids = ids[ids >= self.num_things_classes] - stuff_counts = counts[ids >= self.num_things_classes] - ignore_stuff_ids = stuff_ids[ - stuff_counts < self.test_cfg.stuff_area_limit] - - assert pan_results.ndim == 2 - pan_results[(pan_results.unsqueeze(2) == ignore_stuff_ids.reshape( - 1, 1, -1)).any(dim=2)] = self.num_classes - - return pan_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/maskformer_fusion_head.py b/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/maskformer_fusion_head.py deleted file mode 100644 index 5b59ce4d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/seg_heads/panoptic_fusion_heads/maskformer_fusion_head.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn.functional as F - -from mmdet.core.evaluation.panoptic_utils import INSTANCE_OFFSET -from mmdet.core.mask import mask2bbox -from mmdet.models.builder import HEADS -from .base_panoptic_fusion_head import BasePanopticFusionHead - - -@HEADS.register_module() -class MaskFormerFusionHead(BasePanopticFusionHead): - - def __init__(self, - num_things_classes=80, - num_stuff_classes=53, - test_cfg=None, - loss_panoptic=None, - init_cfg=None, - **kwargs): - super().__init__(num_things_classes, num_stuff_classes, test_cfg, - loss_panoptic, init_cfg, **kwargs) - - def forward_train(self, **kwargs): - """MaskFormerFusionHead has no training loss.""" - return dict() - - def panoptic_postprocess(self, mask_cls, mask_pred): - """Panoptic segmengation inference. - - Args: - mask_cls (Tensor): Classfication outputs of shape - (num_queries, cls_out_channels) for a image. - Note `cls_out_channels` should includes - background. - mask_pred (Tensor): Mask outputs of shape - (num_queries, h, w) for a image. - - Returns: - Tensor: Panoptic segment result of shape \ - (h, w), each element in Tensor means: \ - ``segment_id = _cls + instance_id * INSTANCE_OFFSET``. - """ - object_mask_thr = self.test_cfg.get('object_mask_thr', 0.8) - iou_thr = self.test_cfg.get('iou_thr', 0.8) - filter_low_score = self.test_cfg.get('filter_low_score', False) - - scores, labels = F.softmax(mask_cls, dim=-1).max(-1) - mask_pred = mask_pred.sigmoid() - - keep = labels.ne(self.num_classes) & (scores > object_mask_thr) - cur_scores = scores[keep] - cur_classes = labels[keep] - cur_masks = mask_pred[keep] - - cur_prob_masks = cur_scores.view(-1, 1, 1) * cur_masks - - h, w = cur_masks.shape[-2:] - panoptic_seg = torch.full((h, w), - self.num_classes, - dtype=torch.int32, - device=cur_masks.device) - if cur_masks.shape[0] == 0: - # We didn't detect any mask :( - pass - else: - cur_mask_ids = cur_prob_masks.argmax(0) - instance_id = 1 - for k in range(cur_classes.shape[0]): - pred_class = int(cur_classes[k].item()) - isthing = pred_class < self.num_things_classes - mask = cur_mask_ids == k - mask_area = mask.sum().item() - original_area = (cur_masks[k] >= 0.5).sum().item() - - if filter_low_score: - mask = mask & (cur_masks[k] >= 0.5) - - if mask_area > 0 and original_area > 0: - if mask_area / original_area < iou_thr: - continue - - if not isthing: - # different stuff regions of same class will be - # merged here, and stuff share the instance_id 0. - panoptic_seg[mask] = pred_class - else: - panoptic_seg[mask] = ( - pred_class + instance_id * INSTANCE_OFFSET) - instance_id += 1 - - return panoptic_seg - - def semantic_postprocess(self, mask_cls, mask_pred): - """Semantic segmengation postprocess. - - Args: - mask_cls (Tensor): Classfication outputs of shape - (num_queries, cls_out_channels) for a image. - Note `cls_out_channels` should includes - background. - mask_pred (Tensor): Mask outputs of shape - (num_queries, h, w) for a image. - - Returns: - Tensor: Semantic segment result of shape \ - (cls_out_channels, h, w). - """ - # TODO add semantic segmentation result - raise NotImplementedError - - def instance_postprocess(self, mask_cls, mask_pred): - """Instance segmengation postprocess. - - Args: - mask_cls (Tensor): Classfication outputs of shape - (num_queries, cls_out_channels) for a image. - Note `cls_out_channels` should includes - background. - mask_pred (Tensor): Mask outputs of shape - (num_queries, h, w) for a image. - - Returns: - tuple[Tensor]: Instance segmentation results. 
- - - labels_per_image (Tensor): Predicted labels,\ - shape (n, ). - - bboxes (Tensor): Bboxes and scores with shape (n, 5) of \ - positive region in binary mask, the last column is scores. - - mask_pred_binary (Tensor): Instance masks of \ - shape (n, h, w). - """ - max_per_image = self.test_cfg.get('max_per_image', 100) - num_queries = mask_cls.shape[0] - # shape (num_queries, num_class) - scores = F.softmax(mask_cls, dim=-1)[:, :-1] - # shape (num_queries * num_class, ) - labels = torch.arange(self.num_classes, device=mask_cls.device).\ - unsqueeze(0).repeat(num_queries, 1).flatten(0, 1) - scores_per_image, top_indices = scores.flatten(0, 1).topk( - max_per_image, sorted=False) - labels_per_image = labels[top_indices] - - query_indices = top_indices // self.num_classes - mask_pred = mask_pred[query_indices] - - # extract things - is_thing = labels_per_image < self.num_things_classes - scores_per_image = scores_per_image[is_thing] - labels_per_image = labels_per_image[is_thing] - mask_pred = mask_pred[is_thing] - - mask_pred_binary = (mask_pred > 0).float() - mask_scores_per_image = (mask_pred.sigmoid() * - mask_pred_binary).flatten(1).sum(1) / ( - mask_pred_binary.flatten(1).sum(1) + 1e-6) - det_scores = scores_per_image * mask_scores_per_image - mask_pred_binary = mask_pred_binary.bool() - bboxes = mask2bbox(mask_pred_binary) - bboxes = torch.cat([bboxes, det_scores[:, None]], dim=-1) - - return labels_per_image, bboxes, mask_pred_binary - - def simple_test(self, - mask_cls_results, - mask_pred_results, - img_metas, - rescale=False, - **kwargs): - """Test segment without test-time aumengtation. - - Only the output of last decoder layers was used. - - Args: - mask_cls_results (Tensor): Mask classification logits, - shape (batch_size, num_queries, cls_out_channels). - Note `cls_out_channels` should includes background. - mask_pred_results (Tensor): Mask logits, shape - (batch_size, num_queries, h, w). - img_metas (list[dict]): List of image information. - rescale (bool, optional): If True, return boxes in - original image space. Default False. - - Returns: - list[dict[str, Tensor | tuple[Tensor]]]: Semantic segmentation \ - results and panoptic segmentation results for each \ - image. - - .. code-block:: none - - [ - { - 'pan_results': Tensor, # shape = [h, w] - 'ins_results': tuple[Tensor], - # semantic segmentation results are not supported yet - 'sem_results': Tensor - }, - ... - ] - """ - panoptic_on = self.test_cfg.get('panoptic_on', True) - semantic_on = self.test_cfg.get('semantic_on', False) - instance_on = self.test_cfg.get('instance_on', False) - assert not semantic_on, 'segmantic segmentation '\ - 'results are not supported yet.' 
- - results = [] - for mask_cls_result, mask_pred_result, meta in zip( - mask_cls_results, mask_pred_results, img_metas): - # remove padding - img_height, img_width = meta['img_shape'][:2] - mask_pred_result = mask_pred_result[:, :img_height, :img_width] - - if rescale: - # return result in original resolution - ori_height, ori_width = meta['ori_shape'][:2] - mask_pred_result = F.interpolate( - mask_pred_result[:, None], - size=(ori_height, ori_width), - mode='bilinear', - align_corners=False)[:, 0] - - result = dict() - if panoptic_on: - pan_results = self.panoptic_postprocess( - mask_cls_result, mask_pred_result) - result['pan_results'] = pan_results - - if instance_on: - ins_results = self.instance_postprocess( - mask_cls_result, mask_pred_result) - result['ins_results'] = ins_results - - if semantic_on: - sem_results = self.semantic_postprocess( - mask_cls_result, mask_pred_result) - result['sem_results'] = sem_results - - results.append(result) - - return results diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/__init__.py deleted file mode 100644 index e74ba89e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .brick_wrappers import AdaptiveAvgPool2d, adaptive_avg_pool2d -from .builder import build_linear_layer, build_transformer -from .ckpt_convert import pvt_convert -from .conv_upsample import ConvUpsample -from .csp_layer import CSPLayer -from .gaussian_target import gaussian_radius, gen_gaussian_target -from .inverted_residual import InvertedResidual -from .make_divisible import make_divisible -from .misc import interpolate_as, sigmoid_geometric_mean -from .normed_predictor import NormedConv2d, NormedLinear -from .panoptic_gt_processing import preprocess_panoptic_gt -from .point_sample import (get_uncertain_point_coords_with_randomness, - get_uncertainty) -from .positional_encoding import (LearnedPositionalEncoding, - SinePositionalEncoding) -from .res_layer import ResLayer, SimplifiedBasicBlock -from .se_layer import DyReLU, SELayer -from .transformer import (DetrTransformerDecoder, DetrTransformerDecoderLayer, - DynamicConv, PatchEmbed, Transformer, nchw_to_nlc, - nlc_to_nchw) - -__all__ = [ - 'ResLayer', 'gaussian_radius', 'gen_gaussian_target', - 'DetrTransformerDecoderLayer', 'DetrTransformerDecoder', 'Transformer', - 'build_transformer', 'build_linear_layer', 'SinePositionalEncoding', - 'LearnedPositionalEncoding', 'DynamicConv', 'SimplifiedBasicBlock', - 'NormedLinear', 'NormedConv2d', 'make_divisible', 'InvertedResidual', - 'SELayer', 'interpolate_as', 'ConvUpsample', 'CSPLayer', - 'adaptive_avg_pool2d', 'AdaptiveAvgPool2d', 'PatchEmbed', 'nchw_to_nlc', - 'nlc_to_nchw', 'pvt_convert', 'sigmoid_geometric_mean', - 'preprocess_panoptic_gt', 'DyReLU', - 'get_uncertain_point_coords_with_randomness', 'get_uncertainty' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/brick_wrappers.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/brick_wrappers.py deleted file mode 100644 index fa0279ab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/brick_wrappers.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn.bricks.wrappers import NewEmptyTensorOp, obsolete_torch_version - -if torch.__version__ == 'parrots': - TORCH_VERSION = torch.__version__ -else: - # torch.__version__ could be 1.3.1+cu92, we only need the first two - # for comparison - TORCH_VERSION = tuple(int(x) for x in torch.__version__.split('.')[:2]) - - -def adaptive_avg_pool2d(input, output_size): - """Handle empty batch dimension to adaptive_avg_pool2d. - - Args: - input (tensor): 4D tensor. - output_size (int, tuple[int,int]): the target output size. - """ - if input.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)): - if isinstance(output_size, int): - output_size = [output_size, output_size] - output_size = [*input.shape[:2], *output_size] - empty = NewEmptyTensorOp.apply(input, output_size) - return empty - else: - return F.adaptive_avg_pool2d(input, output_size) - - -class AdaptiveAvgPool2d(nn.AdaptiveAvgPool2d): - """Handle empty batch dimension to AdaptiveAvgPool2d.""" - - def forward(self, x): - # PyTorch 1.9 does not support empty tensor inference yet - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)): - output_size = self.output_size - if isinstance(output_size, int): - output_size = [output_size, output_size] - else: - output_size = [ - v if v is not None else d - for v, d in zip(output_size, - x.size()[-2:]) - ] - output_size = [*x.shape[:2], *output_size] - empty = NewEmptyTensorOp.apply(x, output_size) - return empty - - return super().forward(x) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/builder.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/builder.py deleted file mode 100644 index 20fe7a6d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/builder.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.utils import Registry, build_from_cfg - -TRANSFORMER = Registry('Transformer') -LINEAR_LAYERS = Registry('linear layers') - - -def build_transformer(cfg, default_args=None): - """Builder for Transformer.""" - return build_from_cfg(cfg, TRANSFORMER, default_args) - - -LINEAR_LAYERS.register_module('Linear', module=nn.Linear) - - -def build_linear_layer(cfg, *args, **kwargs): - """Build linear layer. - Args: - cfg (None or dict): The linear layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an linear layer. - args (argument list): Arguments passed to the `__init__` - method of the corresponding linear layer. - kwargs (keyword arguments): Keyword arguments passed to the `__init__` - method of the corresponding linear layer. - Returns: - nn.Module: Created linear layer. 
- """ - if cfg is None: - cfg_ = dict(type='Linear') - else: - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in LINEAR_LAYERS: - raise KeyError(f'Unrecognized linear type {layer_type}') - else: - linear_layer = LINEAR_LAYERS.get(layer_type) - - layer = linear_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/ckpt_convert.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/ckpt_convert.py deleted file mode 100644 index 4d660c4e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/ckpt_convert.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -# This script consists of several convert functions which -# can modify the weights of model in original repo to be -# pre-trained weights. - -from collections import OrderedDict - -import torch - - -def pvt_convert(ckpt): - new_ckpt = OrderedDict() - # Process the concat between q linear weights and kv linear weights - use_abs_pos_embed = False - use_conv_ffn = False - for k in ckpt.keys(): - if k.startswith('pos_embed'): - use_abs_pos_embed = True - if k.find('dwconv') >= 0: - use_conv_ffn = True - for k, v in ckpt.items(): - if k.startswith('head'): - continue - if k.startswith('norm.'): - continue - if k.startswith('cls_token'): - continue - if k.startswith('pos_embed'): - stage_i = int(k.replace('pos_embed', '')) - new_k = k.replace(f'pos_embed{stage_i}', - f'layers.{stage_i - 1}.1.0.pos_embed') - if stage_i == 4 and v.size(1) == 50: # 1 (cls token) + 7 * 7 - new_v = v[:, 1:, :] # remove cls token - else: - new_v = v - elif k.startswith('patch_embed'): - stage_i = int(k.split('.')[0].replace('patch_embed', '')) - new_k = k.replace(f'patch_embed{stage_i}', - f'layers.{stage_i - 1}.0') - new_v = v - if 'proj.' in new_k: - new_k = new_k.replace('proj.', 'projection.') - elif k.startswith('block'): - stage_i = int(k.split('.')[0].replace('block', '')) - layer_i = int(k.split('.')[1]) - new_layer_i = layer_i + use_abs_pos_embed - new_k = k.replace(f'block{stage_i}.{layer_i}', - f'layers.{stage_i - 1}.1.{new_layer_i}') - new_v = v - if 'attn.q.' in new_k: - sub_item_k = k.replace('q.', 'kv.') - new_k = new_k.replace('q.', 'attn.in_proj_') - new_v = torch.cat([v, ckpt[sub_item_k]], dim=0) - elif 'attn.kv.' in new_k: - continue - elif 'attn.proj.' in new_k: - new_k = new_k.replace('proj.', 'attn.out_proj.') - elif 'attn.sr.' in new_k: - new_k = new_k.replace('sr.', 'sr.') - elif 'mlp.' 
in new_k: - string = f'{new_k}-' - new_k = new_k.replace('mlp.', 'ffn.layers.') - if 'fc1.weight' in new_k or 'fc2.weight' in new_k: - new_v = v.reshape((*v.shape, 1, 1)) - new_k = new_k.replace('fc1.', '0.') - new_k = new_k.replace('dwconv.dwconv.', '1.') - if use_conv_ffn: - new_k = new_k.replace('fc2.', '4.') - else: - new_k = new_k.replace('fc2.', '3.') - string += f'{new_k} {v.shape}-{new_v.shape}' - elif k.startswith('norm'): - stage_i = int(k[4]) - new_k = k.replace(f'norm{stage_i}', f'layers.{stage_i - 1}.2') - new_v = v - else: - new_k = k - new_v = v - new_ckpt[new_k] = new_v - - return new_ckpt - - -def swin_converter(ckpt): - - new_ckpt = OrderedDict() - - def correct_unfold_reduction_order(x): - out_channel, in_channel = x.shape - x = x.reshape(out_channel, 4, in_channel // 4) - x = x[:, [0, 2, 1, 3], :].transpose(1, - 2).reshape(out_channel, in_channel) - return x - - def correct_unfold_norm_order(x): - in_channel = x.shape[0] - x = x.reshape(4, in_channel // 4) - x = x[[0, 2, 1, 3], :].transpose(0, 1).reshape(in_channel) - return x - - for k, v in ckpt.items(): - if k.startswith('head'): - continue - elif k.startswith('layers'): - new_v = v - if 'attn.' in k: - new_k = k.replace('attn.', 'attn.w_msa.') - elif 'mlp.' in k: - if 'mlp.fc1.' in k: - new_k = k.replace('mlp.fc1.', 'ffn.layers.0.0.') - elif 'mlp.fc2.' in k: - new_k = k.replace('mlp.fc2.', 'ffn.layers.1.') - else: - new_k = k.replace('mlp.', 'ffn.') - elif 'downsample' in k: - new_k = k - if 'reduction.' in k: - new_v = correct_unfold_reduction_order(v) - elif 'norm.' in k: - new_v = correct_unfold_norm_order(v) - else: - new_k = k - new_k = new_k.replace('layers', 'stages', 1) - elif k.startswith('patch_embed'): - new_v = v - if 'proj' in k: - new_k = k.replace('proj', 'projection') - else: - new_k = k - else: - new_v = v - new_k = k - - new_ckpt['backbone.' + new_k] = new_v - - return new_ckpt diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/conv_upsample.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/conv_upsample.py deleted file mode 100644 index bb5ba767..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/conv_upsample.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, ModuleList - - -class ConvUpsample(BaseModule): - """ConvUpsample performs 2x upsampling after Conv. - - There are several `ConvModule` layers. In the first few layers, upsampling - will be applied after each layer of convolution. The number of upsampling - must be no more than the number of ConvModule layers. - - Args: - in_channels (int): Number of channels in the input feature map. - inner_channels (int): Number of channels produced by the convolution. - num_layers (int): Number of convolution layers. - num_upsample (int | optional): Number of upsampling layer. Must be no - more than num_layers. Upsampling will be applied after the first - ``num_upsample`` layers of convolution. Default: ``num_layers``. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - init_cfg (dict): Config dict for initialization. Default: None. - kwargs (key word augments): Other augments used in ConvModule. 
- """ - - def __init__(self, - in_channels, - inner_channels, - num_layers=1, - num_upsample=None, - conv_cfg=None, - norm_cfg=None, - init_cfg=None, - **kwargs): - super(ConvUpsample, self).__init__(init_cfg) - if num_upsample is None: - num_upsample = num_layers - assert num_upsample <= num_layers, \ - f'num_upsample({num_upsample})must be no more than ' \ - f'num_layers({num_layers})' - self.num_layers = num_layers - self.num_upsample = num_upsample - self.conv = ModuleList() - for i in range(num_layers): - self.conv.append( - ConvModule( - in_channels, - inner_channels, - 3, - padding=1, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - in_channels = inner_channels - - def forward(self, x): - num_upsample = self.num_upsample - for i in range(self.num_layers): - x = self.conv[i](x) - if num_upsample > 0: - num_upsample -= 1 - x = F.interpolate( - x, scale_factor=2, mode='bilinear', align_corners=False) - return x diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/csp_layer.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/csp_layer.py deleted file mode 100644 index 5760b014..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/csp_layer.py +++ /dev/null @@ -1,150 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule - - -class DarknetBottleneck(BaseModule): - """The basic bottleneck block used in Darknet. - - Each ResBlock consists of two ConvModules and the input is added to the - final output. Each ConvModule is composed of Conv, BN, and LeakyReLU. - The first convLayer has filter size of 1x1 and the second one has the - filter size of 3x3. - - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - expansion (int): The kernel size of the convolution. Default: 0.5 - add_identity (bool): Whether to add identity to the out. - Default: True - use_depthwise (bool): Whether to use depthwise separable convolution. - Default: False - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish'). - """ - - def __init__(self, - in_channels, - out_channels, - expansion=0.5, - add_identity=True, - use_depthwise=False, - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - init_cfg=None): - super().__init__(init_cfg) - hidden_channels = int(out_channels * expansion) - conv = DepthwiseSeparableConvModule if use_depthwise else ConvModule - self.conv1 = ConvModule( - in_channels, - hidden_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.conv2 = conv( - hidden_channels, - out_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.add_identity = \ - add_identity and in_channels == out_channels - - def forward(self, x): - identity = x - out = self.conv1(x) - out = self.conv2(out) - - if self.add_identity: - return out + identity - else: - return out - - -class CSPLayer(BaseModule): - """Cross Stage Partial Layer. - - Args: - in_channels (int): The input channels of the CSP layer. - out_channels (int): The output channels of the CSP layer. - expand_ratio (float): Ratio to adjust the number of channels of the - hidden layer. 
Default: 0.5 - num_blocks (int): Number of blocks. Default: 1 - add_identity (bool): Whether to add identity in blocks. - Default: True - use_depthwise (bool): Whether to depthwise separable convolution in - blocks. Default: False - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN') - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish') - """ - - def __init__(self, - in_channels, - out_channels, - expand_ratio=0.5, - num_blocks=1, - add_identity=True, - use_depthwise=False, - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - init_cfg=None): - super().__init__(init_cfg) - mid_channels = int(out_channels * expand_ratio) - self.main_conv = ConvModule( - in_channels, - mid_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.short_conv = ConvModule( - in_channels, - mid_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.final_conv = ConvModule( - 2 * mid_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - self.blocks = nn.Sequential(*[ - DarknetBottleneck( - mid_channels, - mid_channels, - 1.0, - add_identity, - use_depthwise, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) for _ in range(num_blocks) - ]) - - def forward(self, x): - x_short = self.short_conv(x) - - x_main = self.main_conv(x) - x_main = self.blocks(x_main) - - x_final = torch.cat((x_main, x_short), dim=1) - return self.final_conv(x_final) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/gaussian_target.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/gaussian_target.py deleted file mode 100644 index 5bf4d558..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/gaussian_target.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from math import sqrt - -import torch -import torch.nn.functional as F - - -def gaussian2D(radius, sigma=1, dtype=torch.float32, device='cpu'): - """Generate 2D gaussian kernel. - - Args: - radius (int): Radius of gaussian kernel. - sigma (int): Sigma of gaussian function. Default: 1. - dtype (torch.dtype): Dtype of gaussian tensor. Default: torch.float32. - device (str): Device of gaussian tensor. Default: 'cpu'. - - Returns: - h (Tensor): Gaussian kernel with a - ``(2 * radius + 1) * (2 * radius + 1)`` shape. - """ - x = torch.arange( - -radius, radius + 1, dtype=dtype, device=device).view(1, -1) - y = torch.arange( - -radius, radius + 1, dtype=dtype, device=device).view(-1, 1) - - h = (-(x * x + y * y) / (2 * sigma * sigma)).exp() - - h[h < torch.finfo(h.dtype).eps * h.max()] = 0 - return h - - -def gen_gaussian_target(heatmap, center, radius, k=1): - """Generate 2D gaussian heatmap. - - Args: - heatmap (Tensor): Input heatmap, the gaussian kernel will cover on - it and maintain the max value. - center (list[int]): Coord of gaussian kernel's center. - radius (int): Radius of gaussian kernel. - k (int): Coefficient of gaussian kernel. Default: 1. - - Returns: - out_heatmap (Tensor): Updated heatmap covered by gaussian kernel. 
- """ - diameter = 2 * radius + 1 - gaussian_kernel = gaussian2D( - radius, sigma=diameter / 6, dtype=heatmap.dtype, device=heatmap.device) - - x, y = center - - height, width = heatmap.shape[:2] - - left, right = min(x, radius), min(width - x, radius + 1) - top, bottom = min(y, radius), min(height - y, radius + 1) - - masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right] - masked_gaussian = gaussian_kernel[radius - top:radius + bottom, - radius - left:radius + right] - out_heatmap = heatmap - torch.max( - masked_heatmap, - masked_gaussian * k, - out=out_heatmap[y - top:y + bottom, x - left:x + right]) - - return out_heatmap - - -def gaussian_radius(det_size, min_overlap): - r"""Generate 2D gaussian radius. - - This function is modified from the `official github repo - `_. - - Given ``min_overlap``, radius could computed by a quadratic equation - according to Vieta's formulas. - - There are 3 cases for computing gaussian radius, details are following: - - - Explanation of figure: ``lt`` and ``br`` indicates the left-top and - bottom-right corner of ground truth box. ``x`` indicates the - generated corner at the limited position when ``radius=r``. - - - Case1: one corner is inside the gt box and the other is outside. - - .. code:: text - - |< width >| - - lt-+----------+ - - | | | ^ - +--x----------+--+ - | | | | - | | | | height - | | overlap | | - | | | | - | | | | v - +--+---------br--+ - - | | | - +----------+--x - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{(w-r)*(h-r)}{w*h+(w+h)r-r^2} \ge {iou} \quad\Rightarrow\quad - {r^2-(w+h)r+\cfrac{1-iou}{1+iou}*w*h} \ge 0 \\ - {a} = 1,\quad{b} = {-(w+h)},\quad{c} = {\cfrac{1-iou}{1+iou}*w*h} - {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a} - - - Case2: both two corners are inside the gt box. - - .. code:: text - - |< width >| - - lt-+----------+ - - | | | ^ - +--x-------+ | - | | | | - | |overlap| | height - | | | | - | +-------x--+ - | | | v - +----------+-br - - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{(w-2*r)*(h-2*r)}{w*h} \ge {iou} \quad\Rightarrow\quad - {4r^2-2(w+h)r+(1-iou)*w*h} \ge 0 \\ - {a} = 4,\quad {b} = {-2(w+h)},\quad {c} = {(1-iou)*w*h} - {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a} - - - Case3: both two corners are outside the gt box. - - .. code:: text - - |< width >| - - x--+----------------+ - | | | - +-lt-------------+ | - - | | | | ^ - | | | | - | | overlap | | height - | | | | - | | | | v - | +------------br--+ - - | | | - +----------------+--x - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{w*h}{(w+2*r)*(h+2*r)} \ge {iou} \quad\Rightarrow\quad - {4*iou*r^2+2*iou*(w+h)r+(iou-1)*w*h} \le 0 \\ - {a} = {4*iou},\quad {b} = {2*iou*(w+h)},\quad {c} = {(iou-1)*w*h} \\ - {r} \le \cfrac{-b+\sqrt{b^2-4*a*c}}{2*a} - - Args: - det_size (list[int]): Shape of object. - min_overlap (float): Min IoU with ground truth for boxes generated by - keypoints inside the gaussian kernel. - - Returns: - radius (int): Radius of gaussian kernel. 
- """ - height, width = det_size - - a1 = 1 - b1 = (height + width) - c1 = width * height * (1 - min_overlap) / (1 + min_overlap) - sq1 = sqrt(b1**2 - 4 * a1 * c1) - r1 = (b1 - sq1) / (2 * a1) - - a2 = 4 - b2 = 2 * (height + width) - c2 = (1 - min_overlap) * width * height - sq2 = sqrt(b2**2 - 4 * a2 * c2) - r2 = (b2 - sq2) / (2 * a2) - - a3 = 4 * min_overlap - b3 = -2 * min_overlap * (height + width) - c3 = (min_overlap - 1) * width * height - sq3 = sqrt(b3**2 - 4 * a3 * c3) - r3 = (b3 + sq3) / (2 * a3) - return min(r1, r2, r3) - - -def get_local_maximum(heat, kernel=3): - """Extract local maximum pixel with given kernel. - - Args: - heat (Tensor): Target heatmap. - kernel (int): Kernel size of max pooling. Default: 3. - - Returns: - heat (Tensor): A heatmap where local maximum pixels maintain its - own value and other positions are 0. - """ - pad = (kernel - 1) // 2 - hmax = F.max_pool2d(heat, kernel, stride=1, padding=pad) - keep = (hmax == heat).float() - return heat * keep - - -def get_topk_from_heatmap(scores, k=20): - """Get top k positions from heatmap. - - Args: - scores (Tensor): Target heatmap with shape - [batch, num_classes, height, width]. - k (int): Target number. Default: 20. - - Returns: - tuple[torch.Tensor]: Scores, indexes, categories and coords of - topk keypoint. Containing following Tensors: - - - topk_scores (Tensor): Max scores of each topk keypoint. - - topk_inds (Tensor): Indexes of each topk keypoint. - - topk_clses (Tensor): Categories of each topk keypoint. - - topk_ys (Tensor): Y-coord of each topk keypoint. - - topk_xs (Tensor): X-coord of each topk keypoint. - """ - batch, _, height, width = scores.size() - topk_scores, topk_inds = torch.topk(scores.view(batch, -1), k) - topk_clses = topk_inds // (height * width) - topk_inds = topk_inds % (height * width) - topk_ys = topk_inds // width - topk_xs = (topk_inds % width).int().float() - return topk_scores, topk_inds, topk_clses, topk_ys, topk_xs - - -def gather_feat(feat, ind, mask=None): - """Gather feature according to index. - - Args: - feat (Tensor): Target feature map. - ind (Tensor): Target coord index. - mask (Tensor | None): Mask of feature map. Default: None. - - Returns: - feat (Tensor): Gathered feature. - """ - dim = feat.size(2) - ind = ind.unsqueeze(2).repeat(1, 1, dim) - feat = feat.gather(1, ind) - if mask is not None: - mask = mask.unsqueeze(2).expand_as(feat) - feat = feat[mask] - feat = feat.view(-1, dim) - return feat - - -def transpose_and_gather_feat(feat, ind): - """Transpose and gather feature according to index. - - Args: - feat (Tensor): Target feature map. - ind (Tensor): Target coord index. - - Returns: - feat (Tensor): Transposed and gathered feature. - """ - feat = feat.permute(0, 2, 3, 1).contiguous() - feat = feat.view(feat.size(0), -1, feat.size(3)) - feat = gather_feat(feat, ind) - return feat diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/inverted_residual.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/inverted_residual.py deleted file mode 100644 index 1f241ae3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/inverted_residual.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import ConvModule -from mmcv.cnn.bricks import DropPath -from mmcv.runner import BaseModule - -from .se_layer import SELayer - - -class InvertedResidual(BaseModule): - """Inverted Residual Block. 
- - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - mid_channels (int): The input channels of the depthwise convolution. - kernel_size (int): The kernel size of the depthwise convolution. - Default: 3. - stride (int): The stride of the depthwise convolution. Default: 1. - se_cfg (dict): Config dict for se layer. Default: None, which means no - se layer. - with_expand_conv (bool): Use expand conv or not. If set False, - mid_channels must be the same with in_channels. - Default: True. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - drop_path_rate (float): stochastic depth rate. Defaults to 0. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, - in_channels, - out_channels, - mid_channels, - kernel_size=3, - stride=1, - se_cfg=None, - with_expand_conv=True, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - drop_path_rate=0., - with_cp=False, - init_cfg=None): - super(InvertedResidual, self).__init__(init_cfg) - self.with_res_shortcut = (stride == 1 and in_channels == out_channels) - assert stride in [1, 2], f'stride must in [1, 2]. ' \ - f'But received {stride}.' - self.with_cp = with_cp - self.drop_path = DropPath( - drop_path_rate) if drop_path_rate > 0 else nn.Identity() - self.with_se = se_cfg is not None - self.with_expand_conv = with_expand_conv - - if self.with_se: - assert isinstance(se_cfg, dict) - if not self.with_expand_conv: - assert mid_channels == in_channels - - if self.with_expand_conv: - self.expand_conv = ConvModule( - in_channels=in_channels, - out_channels=mid_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.depthwise_conv = ConvModule( - in_channels=mid_channels, - out_channels=mid_channels, - kernel_size=kernel_size, - stride=stride, - padding=kernel_size // 2, - groups=mid_channels, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - if self.with_se: - self.se = SELayer(**se_cfg) - - self.linear_conv = ConvModule( - in_channels=mid_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, x): - - def _inner_forward(x): - out = x - - if self.with_expand_conv: - out = self.expand_conv(out) - - out = self.depthwise_conv(out) - - if self.with_se: - out = self.se(out) - - out = self.linear_conv(out) - - if self.with_res_shortcut: - return x + self.drop_path(out) - else: - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/make_divisible.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/make_divisible.py deleted file mode 100644 index ed42c2ee..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/make_divisible.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-def make_divisible(value, divisor, min_value=None, min_ratio=0.9): - """Make divisible function. - - This function rounds the channel number to the nearest value that can be - divisible by the divisor. It is taken from the original tf repo. It ensures - that all layers have a channel number that is divisible by divisor. It can - be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py # noqa - - Args: - value (int): The original channel number. - divisor (int): The divisor to fully divide the channel number. - min_value (int): The minimum value of the output channel. - Default: None, means that the minimum value equal to the divisor. - min_ratio (float): The minimum ratio of the rounded channel number to - the original channel number. Default: 0.9. - - Returns: - int: The modified output channel number. - """ - - if min_value is None: - min_value = divisor - new_value = max(min_value, int(value + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than (1-min_ratio). - if new_value < min_ratio * value: - new_value += divisor - return new_value diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/misc.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/misc.py deleted file mode 100644 index 8f9be9ab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/misc.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from torch.autograd import Function -from torch.nn import functional as F - - -class SigmoidGeometricMean(Function): - """Forward and backward function of geometric mean of two sigmoid - functions. - - This implementation with analytical gradient function substitutes - the autograd function of (x.sigmoid() * y.sigmoid()).sqrt(). The - original implementation incurs none during gradient backprapagation - if both x and y are very small values. - """ - - @staticmethod - def forward(ctx, x, y): - x_sigmoid = x.sigmoid() - y_sigmoid = y.sigmoid() - z = (x_sigmoid * y_sigmoid).sqrt() - ctx.save_for_backward(x_sigmoid, y_sigmoid, z) - return z - - @staticmethod - def backward(ctx, grad_output): - x_sigmoid, y_sigmoid, z = ctx.saved_tensors - grad_x = grad_output * z * (1 - x_sigmoid) / 2 - grad_y = grad_output * z * (1 - y_sigmoid) / 2 - return grad_x, grad_y - - -sigmoid_geometric_mean = SigmoidGeometricMean.apply - - -def interpolate_as(source, target, mode='bilinear', align_corners=False): - """Interpolate the `source` to the shape of the `target`. - - The `source` must be a Tensor, but the `target` can be a Tensor or a - np.ndarray with the shape (..., target_h, target_w). - - Args: - source (Tensor): A 3D/4D Tensor with the shape (N, H, W) or - (N, C, H, W). - target (Tensor | np.ndarray): The interpolation target with the shape - (..., target_h, target_w). - mode (str): Algorithm used for interpolation. The options are the - same as those in F.interpolate(). Default: ``'bilinear'``. - align_corners (bool): The same as the argument in F.interpolate(). - - Returns: - Tensor: The interpolated source Tensor. 
- """ - assert len(target.shape) >= 2 - - def _interpolate_as(source, target, mode='bilinear', align_corners=False): - """Interpolate the `source` (4D) to the shape of the `target`.""" - target_h, target_w = target.shape[-2:] - source_h, source_w = source.shape[-2:] - if target_h != source_h or target_w != source_w: - source = F.interpolate( - source, - size=(target_h, target_w), - mode=mode, - align_corners=align_corners) - return source - - if len(source.shape) == 3: - source = source[:, None, :, :] - source = _interpolate_as(source, target, mode, align_corners) - return source[:, 0, :, :] - else: - return _interpolate_as(source, target, mode, align_corners) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/normed_predictor.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/normed_predictor.py deleted file mode 100644 index f0eeef7d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/normed_predictor.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import CONV_LAYERS - -from .builder import LINEAR_LAYERS - - -@LINEAR_LAYERS.register_module(name='NormedLinear') -class NormedLinear(nn.Linear): - """Normalized Linear Layer. - - Args: - tempeature (float, optional): Tempeature term. Default to 20. - power (int, optional): Power term. Default to 1.0. - eps (float, optional): The minimal value of divisor to - keep numerical stability. Default to 1e-6. - """ - - def __init__(self, *args, tempearture=20, power=1.0, eps=1e-6, **kwargs): - super(NormedLinear, self).__init__(*args, **kwargs) - self.tempearture = tempearture - self.power = power - self.eps = eps - self.init_weights() - - def init_weights(self): - nn.init.normal_(self.weight, mean=0, std=0.01) - if self.bias is not None: - nn.init.constant_(self.bias, 0) - - def forward(self, x): - weight_ = self.weight / ( - self.weight.norm(dim=1, keepdim=True).pow(self.power) + self.eps) - x_ = x / (x.norm(dim=1, keepdim=True).pow(self.power) + self.eps) - x_ = x_ * self.tempearture - - return F.linear(x_, weight_, self.bias) - - -@CONV_LAYERS.register_module(name='NormedConv2d') -class NormedConv2d(nn.Conv2d): - """Normalized Conv2d Layer. - - Args: - tempeature (float, optional): Tempeature term. Default to 20. - power (int, optional): Power term. Default to 1.0. - eps (float, optional): The minimal value of divisor to - keep numerical stability. Default to 1e-6. - norm_over_kernel (bool, optional): Normalize over kernel. - Default to False. 
- """ - - def __init__(self, - *args, - tempearture=20, - power=1.0, - eps=1e-6, - norm_over_kernel=False, - **kwargs): - super(NormedConv2d, self).__init__(*args, **kwargs) - self.tempearture = tempearture - self.power = power - self.norm_over_kernel = norm_over_kernel - self.eps = eps - - def forward(self, x): - if not self.norm_over_kernel: - weight_ = self.weight / ( - self.weight.norm(dim=1, keepdim=True).pow(self.power) + - self.eps) - else: - weight_ = self.weight / ( - self.weight.view(self.weight.size(0), -1).norm( - dim=1, keepdim=True).pow(self.power)[..., None, None] + - self.eps) - x_ = x / (x.norm(dim=1, keepdim=True).pow(self.power) + self.eps) - x_ = x_ * self.tempearture - - if hasattr(self, 'conv2d_forward'): - x_ = self.conv2d_forward(x_, weight_) - else: - if torch.__version__ >= '1.8': - x_ = self._conv_forward(x_, weight_, self.bias) - else: - x_ = self._conv_forward(x_, weight_) - return x_ diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/panoptic_gt_processing.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/panoptic_gt_processing.py deleted file mode 100644 index 513f6449..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/panoptic_gt_processing.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def preprocess_panoptic_gt(gt_labels, gt_masks, gt_semantic_seg, num_things, - num_stuff): - """Preprocess the ground truth for a image. - - Args: - gt_labels (Tensor): Ground truth labels of each bbox, - with shape (num_gts, ). - gt_masks (BitmapMasks): Ground truth masks of each instances - of a image, shape (num_gts, h, w). - gt_semantic_seg (Tensor): Ground truth of semantic - segmentation with the shape (1, h, w). - [0, num_thing_class - 1] means things, - [num_thing_class, num_class-1] means stuff, - 255 means VOID. - target_shape (tuple[int]): Shape of output mask_preds. - Resize the masks to shape of mask_preds. - - Returns: - tuple: a tuple containing the following targets. - - - labels (Tensor): Ground truth class indices for a - image, with shape (n, ), n is the sum of number - of stuff type and number of instance in a image. - - masks (Tensor): Ground truth mask for a image, with - shape (n, h, w). 
- """ - num_classes = num_things + num_stuff - things_labels = gt_labels - gt_semantic_seg = gt_semantic_seg.squeeze(0) - - things_masks = gt_masks.pad(gt_semantic_seg.shape[-2:], pad_val=0)\ - .to_tensor(dtype=torch.bool, device=gt_labels.device) - - semantic_labels = torch.unique( - gt_semantic_seg, - sorted=False, - return_inverse=False, - return_counts=False) - stuff_masks_list = [] - stuff_labels_list = [] - for label in semantic_labels: - if label < num_things or label >= num_classes: - continue - stuff_mask = gt_semantic_seg == label - stuff_masks_list.append(stuff_mask) - stuff_labels_list.append(label) - - if len(stuff_masks_list) > 0: - stuff_masks = torch.stack(stuff_masks_list, dim=0) - stuff_labels = torch.stack(stuff_labels_list, dim=0) - labels = torch.cat([things_labels, stuff_labels], dim=0) - masks = torch.cat([things_masks, stuff_masks], dim=0) - else: - labels = things_labels - masks = things_masks - - masks = masks.long() - return labels, masks diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/point_sample.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/point_sample.py deleted file mode 100644 index c2c3cf91..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/point_sample.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import point_sample - - -def get_uncertainty(mask_pred, labels): - """Estimate uncertainty based on pred logits. - - We estimate uncertainty as L1 distance between 0.0 and the logits - prediction in 'mask_pred' for the foreground class in `classes`. - - Args: - mask_pred (Tensor): mask predication logits, shape (num_rois, - num_classes, mask_height, mask_width). - - labels (list[Tensor]): Either predicted or ground truth label for - each predicted mask, of length num_rois. - - Returns: - scores (Tensor): Uncertainty scores with the most uncertain - locations having the highest uncertainty score, - shape (num_rois, 1, mask_height, mask_width) - """ - if mask_pred.shape[1] == 1: - gt_class_logits = mask_pred.clone() - else: - inds = torch.arange(mask_pred.shape[0], device=mask_pred.device) - gt_class_logits = mask_pred[inds, labels].unsqueeze(1) - return -torch.abs(gt_class_logits) - - -def get_uncertain_point_coords_with_randomness(mask_pred, labels, num_points, - oversample_ratio, - importance_sample_ratio): - """Get ``num_points`` most uncertain points with random points during - train. - - Sample points in [0, 1] x [0, 1] coordinate space based on their - uncertainty. The uncertainties are calculated for each point using - 'get_uncertainty()' function that takes point's logit prediction as - input. - - Args: - mask_pred (Tensor): A tensor of shape (num_rois, num_classes, - mask_height, mask_width) for class-specific or class-agnostic - prediction. - labels (list): The ground truth class for each instance. - num_points (int): The number of points to sample. - oversample_ratio (int): Oversampling parameter. - importance_sample_ratio (float): Ratio of points that are sampled - via importnace sampling. - - Returns: - point_coords (Tensor): A tensor of shape (num_rois, num_points, 2) - that contains the coordinates sampled points. 
- """ - assert oversample_ratio >= 1 - assert 0 <= importance_sample_ratio <= 1 - batch_size = mask_pred.shape[0] - num_sampled = int(num_points * oversample_ratio) - point_coords = torch.rand( - batch_size, num_sampled, 2, device=mask_pred.device) - point_logits = point_sample(mask_pred, point_coords) - # It is crucial to calculate uncertainty based on the sampled - # prediction value for the points. Calculating uncertainties of the - # coarse predictions first and sampling them for points leads to - # incorrect results. To illustrate this: assume uncertainty func( - # logits)=-abs(logits), a sampled point between two coarse - # predictions with -1 and 1 logits has 0 logits, and therefore 0 - # uncertainty value. However, if we calculate uncertainties for the - # coarse predictions first, both will have -1 uncertainty, - # and sampled point will get -1 uncertainty. - point_uncertainties = get_uncertainty(point_logits, labels) - num_uncertain_points = int(importance_sample_ratio * num_points) - num_random_points = num_points - num_uncertain_points - idx = torch.topk( - point_uncertainties[:, 0, :], k=num_uncertain_points, dim=1)[1] - shift = num_sampled * torch.arange( - batch_size, dtype=torch.long, device=mask_pred.device) - idx += shift[:, None] - point_coords = point_coords.view(-1, 2)[idx.view(-1), :].view( - batch_size, num_uncertain_points, 2) - if num_random_points > 0: - rand_roi_coords = torch.rand( - batch_size, num_random_points, 2, device=mask_pred.device) - point_coords = torch.cat((point_coords, rand_roi_coords), dim=1) - return point_coords diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/positional_encoding.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/positional_encoding.py deleted file mode 100644 index dd29cd65..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/positional_encoding.py +++ /dev/null @@ -1,163 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -from mmcv.cnn.bricks.transformer import POSITIONAL_ENCODING -from mmcv.runner import BaseModule - - -@POSITIONAL_ENCODING.register_module() -class SinePositionalEncoding(BaseModule): - """Position encoding with sine and cosine functions. - - See `End-to-End Object Detection with Transformers - `_ for details. - - Args: - num_feats (int): The feature dimension for each position - along x-axis or y-axis. Note the final returned dimension - for each position is 2 times of this value. - temperature (int, optional): The temperature used for scaling - the position embedding. Defaults to 10000. - normalize (bool, optional): Whether to normalize the position - embedding. Defaults to False. - scale (float, optional): A scale factor that scales the position - embedding. The scale will be used only when `normalize` is True. - Defaults to 2*pi. - eps (float, optional): A value added to the denominator for - numerical stability. Defaults to 1e-6. - offset (float): offset add to embed when do the normalization. - Defaults to 0. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - num_feats, - temperature=10000, - normalize=False, - scale=2 * math.pi, - eps=1e-6, - offset=0., - init_cfg=None): - super(SinePositionalEncoding, self).__init__(init_cfg) - if normalize: - assert isinstance(scale, (float, int)), 'when normalize is set,' \ - 'scale should be provided and in float or int type, ' \ - f'found {type(scale)}' - self.num_feats = num_feats - self.temperature = temperature - self.normalize = normalize - self.scale = scale - self.eps = eps - self.offset = offset - - def forward(self, mask): - """Forward function for `SinePositionalEncoding`. - - Args: - mask (Tensor): ByteTensor mask. Non-zero values representing - ignored positions, while zero values means valid positions - for this image. Shape [bs, h, w]. - - Returns: - pos (Tensor): Returned position embedding with shape - [bs, num_feats*2, h, w]. - """ - # For convenience of exporting to ONNX, it's required to convert - # `masks` from bool to int. - mask = mask.to(torch.int) - not_mask = 1 - mask # logical_not - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - if self.normalize: - y_embed = (y_embed + self.offset) / \ - (y_embed[:, -1:, :] + self.eps) * self.scale - x_embed = (x_embed + self.offset) / \ - (x_embed[:, :, -1:] + self.eps) * self.scale - dim_t = torch.arange( - self.num_feats, dtype=torch.float32, device=mask.device) - dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats) - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - # use `view` instead of `flatten` for dynamically exporting to ONNX - B, H, W = mask.size() - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), - dim=4).view(B, H, W, -1) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), - dim=4).view(B, H, W, -1) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_feats={self.num_feats}, ' - repr_str += f'temperature={self.temperature}, ' - repr_str += f'normalize={self.normalize}, ' - repr_str += f'scale={self.scale}, ' - repr_str += f'eps={self.eps})' - return repr_str - - -@POSITIONAL_ENCODING.register_module() -class LearnedPositionalEncoding(BaseModule): - """Position embedding with learnable embedding weights. - - Args: - num_feats (int): The feature dimension for each position - along x-axis or y-axis. The final returned dimension for - each position is 2 times of this value. - row_num_embed (int, optional): The dictionary size of row embeddings. - Default 50. - col_num_embed (int, optional): The dictionary size of col embeddings. - Default 50. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - num_feats, - row_num_embed=50, - col_num_embed=50, - init_cfg=dict(type='Uniform', layer='Embedding')): - super(LearnedPositionalEncoding, self).__init__(init_cfg) - self.row_embed = nn.Embedding(row_num_embed, num_feats) - self.col_embed = nn.Embedding(col_num_embed, num_feats) - self.num_feats = num_feats - self.row_num_embed = row_num_embed - self.col_num_embed = col_num_embed - - def forward(self, mask): - """Forward function for `LearnedPositionalEncoding`. - - Args: - mask (Tensor): ByteTensor mask. Non-zero values representing - ignored positions, while zero values means valid positions - for this image. Shape [bs, h, w]. 
- - Returns: - pos (Tensor): Returned position embedding with shape - [bs, num_feats*2, h, w]. - """ - h, w = mask.shape[-2:] - x = torch.arange(w, device=mask.device) - y = torch.arange(h, device=mask.device) - x_embed = self.col_embed(x) - y_embed = self.row_embed(y) - pos = torch.cat( - (x_embed.unsqueeze(0).repeat(h, 1, 1), y_embed.unsqueeze(1).repeat( - 1, w, 1)), - dim=-1).permute(2, 0, - 1).unsqueeze(0).repeat(mask.shape[0], 1, 1, 1) - return pos - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_feats={self.num_feats}, ' - repr_str += f'row_num_embed={self.row_num_embed}, ' - repr_str += f'col_num_embed={self.col_num_embed})' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/res_layer.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/res_layer.py deleted file mode 100644 index 5c3e89fb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/res_layer.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule, Sequential -from torch import nn as nn - - -class ResLayer(Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. 
Default: True - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if downsample_first: - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - else: # downsample_first=False is for HourglassModule - for _ in range(num_blocks - 1): - layers.append( - block( - inplanes=inplanes, - planes=inplanes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) - - -class SimplifiedBasicBlock(BaseModule): - """Simplified version of original basic residual block. This is used in - `SCNet `_. - - - Norm layer is now optional - - Last ReLU in forward function is removed - """ - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_fg=None): - super(SimplifiedBasicBlock, self).__init__(init_fg) - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert not with_cp, 'Not implemented yet.' 
- self.with_norm = norm_cfg is not None - with_bias = True if norm_cfg is None else False - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=with_bias) - if self.with_norm: - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, planes, postfix=1) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=with_bias) - if self.with_norm: - self.norm2_name, norm2 = build_norm_layer( - norm_cfg, planes, postfix=2) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) if self.with_norm else None - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) if self.with_norm else None - - def forward(self, x): - """Forward function.""" - - identity = x - - out = self.conv1(x) - if self.with_norm: - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - if self.with_norm: - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/se_layer.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/se_layer.py deleted file mode 100644 index a2492103..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/se_layer.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - - -class SELayer(BaseModule): - """Squeeze-and-Excitation Module. - - Args: - channels (int): The input (and output) channels of the SE layer. - ratio (int): Squeeze ratio in SELayer, the intermediate channel will be - ``int(channels/ratio)``. Default: 16. - conv_cfg (None or dict): Config dict for convolution layer. - Default: None, which means using conv2d. - act_cfg (dict or Sequence[dict]): Config dict for activation layer. - If act_cfg is a dict, two activation layers will be configurated - by this dict. If act_cfg is a sequence of dicts, the first - activation layer will be configurated by the first dict and the - second activation layer will be configurated by the second dict. - Default: (dict(type='ReLU'), dict(type='Sigmoid')) - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - channels, - ratio=16, - conv_cfg=None, - act_cfg=(dict(type='ReLU'), dict(type='Sigmoid')), - init_cfg=None): - super(SELayer, self).__init__(init_cfg) - if isinstance(act_cfg, dict): - act_cfg = (act_cfg, act_cfg) - assert len(act_cfg) == 2 - assert mmcv.is_tuple_of(act_cfg, dict) - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.conv1 = ConvModule( - in_channels=channels, - out_channels=int(channels / ratio), - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[0]) - self.conv2 = ConvModule( - in_channels=int(channels / ratio), - out_channels=channels, - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[1]) - - def forward(self, x): - out = self.global_avgpool(x) - out = self.conv1(out) - out = self.conv2(out) - return x * out - - -class DyReLU(BaseModule): - """Dynamic ReLU (DyReLU) module. - - See `Dynamic ReLU `_ for details. - Current implementation is specialized for task-aware attention in DyHead. - HSigmoid arguments in default act_cfg follow DyHead official code. - https://github.com/microsoft/DynamicHead/blob/master/dyhead/dyrelu.py - - Args: - channels (int): The input (and output) channels of DyReLU module. - ratio (int): Squeeze ratio in Squeeze-and-Excitation-like module, - the intermediate channel will be ``int(channels/ratio)``. - Default: 4. - conv_cfg (None or dict): Config dict for convolution layer. - Default: None, which means using conv2d. - act_cfg (dict or Sequence[dict]): Config dict for activation layer. - If act_cfg is a dict, two activation layers will be configurated - by this dict. If act_cfg is a sequence of dicts, the first - activation layer will be configurated by the first dict and the - second activation layer will be configurated by the second dict. - Default: (dict(type='ReLU'), dict(type='HSigmoid', bias=3.0, - divisor=6.0)) - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - channels, - ratio=4, - conv_cfg=None, - act_cfg=(dict(type='ReLU'), - dict(type='HSigmoid', bias=3.0, divisor=6.0)), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - if isinstance(act_cfg, dict): - act_cfg = (act_cfg, act_cfg) - assert len(act_cfg) == 2 - assert mmcv.is_tuple_of(act_cfg, dict) - self.channels = channels - self.expansion = 4 # for a1, b1, a2, b2 - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.conv1 = ConvModule( - in_channels=channels, - out_channels=int(channels / ratio), - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[0]) - self.conv2 = ConvModule( - in_channels=int(channels / ratio), - out_channels=channels * self.expansion, - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[1]) - - def forward(self, x): - """Forward function.""" - coeffs = self.global_avgpool(x) - coeffs = self.conv1(coeffs) - coeffs = self.conv2(coeffs) - 0.5 # value range: [-0.5, 0.5] - a1, b1, a2, b2 = torch.split(coeffs, self.channels, dim=1) - a1 = a1 * 2.0 + 1.0 # [-1.0, 1.0] + 1.0 - a2 = a2 * 2.0 # [-1.0, 1.0] - out = torch.max(x * a1 + b1, x * a2 + b2) - return out diff --git a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/transformer.py b/cv/3d_detection/paconv/pytorch/mmdet/models/utils/transformer.py deleted file mode 100644 index 3c390c83..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/models/utils/transformer.py +++ /dev/null @@ -1,1167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math -import warnings -from typing import Sequence - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (build_activation_layer, build_conv_layer, - build_norm_layer, xavier_init) -from mmcv.cnn.bricks.registry import (TRANSFORMER_LAYER, - TRANSFORMER_LAYER_SEQUENCE) -from mmcv.cnn.bricks.transformer import (BaseTransformerLayer, - TransformerLayerSequence, - build_transformer_layer_sequence) -from mmcv.runner.base_module import BaseModule -from mmcv.utils import to_2tuple -from torch.nn.init import normal_ - -from mmdet.models.utils.builder import TRANSFORMER - -try: - from mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention - -except ImportError: - warnings.warn( - '`MultiScaleDeformableAttention` in MMCV has been moved to ' - '`mmcv.ops.multi_scale_deform_attn`, please update your MMCV') - from mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention - - -def nlc_to_nchw(x, hw_shape): - """Convert [N, L, C] shape tensor to [N, C, H, W] shape tensor. - - Args: - x (Tensor): The input tensor of shape [N, L, C] before conversion. - hw_shape (Sequence[int]): The height and width of output feature map. - - Returns: - Tensor: The output tensor of shape [N, C, H, W] after conversion. - """ - H, W = hw_shape - assert len(x.shape) == 3 - B, L, C = x.shape - assert L == H * W, 'The seq_len does not match H, W' - return x.transpose(1, 2).reshape(B, C, H, W).contiguous() - - -def nchw_to_nlc(x): - """Flatten [N, C, H, W] shape tensor to [N, L, C] shape tensor. - - Args: - x (Tensor): The input tensor of shape [N, C, H, W] before conversion. - - Returns: - Tensor: The output tensor of shape [N, L, C] after conversion. - """ - assert len(x.shape) == 4 - return x.flatten(2).transpose(1, 2).contiguous() - - -class AdaptivePadding(nn.Module): - """Applies padding to input (if needed) so that input can get fully covered - by filter you specified. It support two modes "same" and "corner". The - "same" mode is same with "SAME" padding mode in TensorFlow, pad zero around - input. The "corner" mode would pad zero to bottom right. - - Args: - kernel_size (int | tuple): Size of the kernel: - stride (int | tuple): Stride of the filter. Default: 1: - dilation (int | tuple): Spacing between kernel elements. - Default: 1 - padding (str): Support "same" and "corner", "corner" mode - would pad zero to bottom right, and "same" mode would - pad zero around input. Default: "corner". 
- Example: - >>> kernel_size = 16 - >>> stride = 16 - >>> dilation = 1 - >>> input = torch.rand(1, 1, 15, 17) - >>> adap_pad = AdaptivePadding( - >>> kernel_size=kernel_size, - >>> stride=stride, - >>> dilation=dilation, - >>> padding="corner") - >>> out = adap_pad(input) - >>> assert (out.shape[2], out.shape[3]) == (16, 32) - >>> input = torch.rand(1, 1, 16, 17) - >>> out = adap_pad(input) - >>> assert (out.shape[2], out.shape[3]) == (16, 32) - """ - - def __init__(self, kernel_size=1, stride=1, dilation=1, padding='corner'): - - super(AdaptivePadding, self).__init__() - - assert padding in ('same', 'corner') - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - padding = to_2tuple(padding) - dilation = to_2tuple(dilation) - - self.padding = padding - self.kernel_size = kernel_size - self.stride = stride - self.dilation = dilation - - def get_pad_shape(self, input_shape): - input_h, input_w = input_shape - kernel_h, kernel_w = self.kernel_size - stride_h, stride_w = self.stride - output_h = math.ceil(input_h / stride_h) - output_w = math.ceil(input_w / stride_w) - pad_h = max((output_h - 1) * stride_h + - (kernel_h - 1) * self.dilation[0] + 1 - input_h, 0) - pad_w = max((output_w - 1) * stride_w + - (kernel_w - 1) * self.dilation[1] + 1 - input_w, 0) - return pad_h, pad_w - - def forward(self, x): - pad_h, pad_w = self.get_pad_shape(x.size()[-2:]) - if pad_h > 0 or pad_w > 0: - if self.padding == 'corner': - x = F.pad(x, [0, pad_w, 0, pad_h]) - elif self.padding == 'same': - x = F.pad(x, [ - pad_w // 2, pad_w - pad_w // 2, pad_h // 2, - pad_h - pad_h // 2 - ]) - return x - - -class PatchEmbed(BaseModule): - """Image to Patch Embedding. - - We use a conv layer to implement PatchEmbed. - - Args: - in_channels (int): The num of input channels. Default: 3 - embed_dims (int): The dimensions of embedding. Default: 768 - conv_type (str): The config dict for embedding - conv layer type selection. Default: "Conv2d. - kernel_size (int): The kernel_size of embedding conv. Default: 16. - stride (int): The slide stride of embedding conv. - Default: None (Would be set as `kernel_size`). - padding (int | tuple | string ): The padding length of - embedding conv. When it is a string, it means the mode - of adaptive padding, support "same" and "corner" now. - Default: "corner". - dilation (int): The dilation rate of embedding conv. Default: 1. - bias (bool): Bias of embed conv. Default: True. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - input_size (int | tuple | None): The size of input, which will be - used to calculate the out size. Only work when `dynamic_size` - is False. Default: None. - init_cfg (`mmcv.ConfigDict`, optional): The Config for initialization. - Default: None. 
- """ - - def __init__( - self, - in_channels=3, - embed_dims=768, - conv_type='Conv2d', - kernel_size=16, - stride=16, - padding='corner', - dilation=1, - bias=True, - norm_cfg=None, - input_size=None, - init_cfg=None, - ): - super(PatchEmbed, self).__init__(init_cfg=init_cfg) - - self.embed_dims = embed_dims - if stride is None: - stride = kernel_size - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - if isinstance(padding, str): - self.adap_padding = AdaptivePadding( - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=padding) - # disable the padding of conv - padding = 0 - else: - self.adap_padding = None - padding = to_2tuple(padding) - - self.projection = build_conv_layer( - dict(type=conv_type), - in_channels=in_channels, - out_channels=embed_dims, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - bias=bias) - - if norm_cfg is not None: - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - else: - self.norm = None - - if input_size: - input_size = to_2tuple(input_size) - # `init_out_size` would be used outside to - # calculate the num_patches - # when `use_abs_pos_embed` outside - self.init_input_size = input_size - if self.adap_padding: - pad_h, pad_w = self.adap_padding.get_pad_shape(input_size) - input_h, input_w = input_size - input_h = input_h + pad_h - input_w = input_w + pad_w - input_size = (input_h, input_w) - - # https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html - h_out = (input_size[0] + 2 * padding[0] - dilation[0] * - (kernel_size[0] - 1) - 1) // stride[0] + 1 - w_out = (input_size[1] + 2 * padding[1] - dilation[1] * - (kernel_size[1] - 1) - 1) // stride[1] + 1 - self.init_out_size = (h_out, w_out) - else: - self.init_input_size = None - self.init_out_size = None - - def forward(self, x): - """ - Args: - x (Tensor): Has shape (B, C, H, W). In most case, C is 3. - - Returns: - tuple: Contains merged results and its spatial shape. - - - x (Tensor): Has shape (B, out_h * out_w, embed_dims) - - out_size (tuple[int]): Spatial shape of x, arrange as - (out_h, out_w). - """ - - if self.adap_padding: - x = self.adap_padding(x) - - x = self.projection(x) - out_size = (x.shape[2], x.shape[3]) - x = x.flatten(2).transpose(1, 2) - if self.norm is not None: - x = self.norm(x) - return x, out_size - - -class PatchMerging(BaseModule): - """Merge patch feature map. - - This layer groups feature map by kernel_size, and applies norm and linear - layers to the grouped feature map. Our implementation uses `nn.Unfold` to - merge patch, which is about 25% faster than original implementation. - Instead, we need to modify pretrained models for compatibility. - - Args: - in_channels (int): The num of input channels. - to gets fully covered by filter and stride you specified.. - Default: True. - out_channels (int): The num of output channels. - kernel_size (int | tuple, optional): the kernel size in the unfold - layer. Defaults to 2. - stride (int | tuple, optional): the stride of the sliding blocks in the - unfold layer. Default: None. (Would be set as `kernel_size`) - padding (int | tuple | string ): The padding length of - embedding conv. When it is a string, it means the mode - of adaptive padding, support "same" and "corner" now. - Default: "corner". - dilation (int | tuple, optional): dilation parameter in the unfold - layer. Default: 1. - bias (bool, optional): Whether to add bias in linear layer or not. - Defaults: False. 
- norm_cfg (dict, optional): Config dict for normalization layer. - Default: dict(type='LN'). - init_cfg (dict, optional): The extra config for initialization. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=2, - stride=None, - padding='corner', - dilation=1, - bias=False, - norm_cfg=dict(type='LN'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - if stride: - stride = stride - else: - stride = kernel_size - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - if isinstance(padding, str): - self.adap_padding = AdaptivePadding( - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=padding) - # disable the padding of unfold - padding = 0 - else: - self.adap_padding = None - - padding = to_2tuple(padding) - self.sampler = nn.Unfold( - kernel_size=kernel_size, - dilation=dilation, - padding=padding, - stride=stride) - - sample_dim = kernel_size[0] * kernel_size[1] * in_channels - - if norm_cfg is not None: - self.norm = build_norm_layer(norm_cfg, sample_dim)[1] - else: - self.norm = None - - self.reduction = nn.Linear(sample_dim, out_channels, bias=bias) - - def forward(self, x, input_size): - """ - Args: - x (Tensor): Has shape (B, H*W, C_in). - input_size (tuple[int]): The spatial shape of x, arrange as (H, W). - Default: None. - - Returns: - tuple: Contains merged results and its spatial shape. - - - x (Tensor): Has shape (B, Merged_H * Merged_W, C_out) - - out_size (tuple[int]): Spatial shape of x, arrange as - (Merged_H, Merged_W). - """ - B, L, C = x.shape - assert isinstance(input_size, Sequence), f'Expect ' \ - f'input_size is ' \ - f'`Sequence` ' \ - f'but get {input_size}' - - H, W = input_size - assert L == H * W, 'input feature has wrong size' - - x = x.view(B, H, W, C).permute([0, 3, 1, 2]) # B, C, H, W - # Use nn.Unfold to merge patch. About 25% faster than original method, - # but need to modify pretrained model for compatibility - - if self.adap_padding: - x = self.adap_padding(x) - H, W = x.shape[-2:] - - x = self.sampler(x) - # if kernel_size=2 and stride=2, x should has shape (B, 4*C, H/2*W/2) - - out_h = (H + 2 * self.sampler.padding[0] - self.sampler.dilation[0] * - (self.sampler.kernel_size[0] - 1) - - 1) // self.sampler.stride[0] + 1 - out_w = (W + 2 * self.sampler.padding[1] - self.sampler.dilation[1] * - (self.sampler.kernel_size[1] - 1) - - 1) // self.sampler.stride[1] + 1 - - output_size = (out_h, out_w) - x = x.transpose(1, 2) # B, H/2*W/2, 4*C - x = self.norm(x) if self.norm else x - x = self.reduction(x) - return x, output_size - - -def inverse_sigmoid(x, eps=1e-5): - """Inverse function of sigmoid. - - Args: - x (Tensor): The tensor to do the - inverse. - eps (float): EPS avoid numerical - overflow. Defaults 1e-5. - Returns: - Tensor: The x has passed the inverse - function of sigmoid, has same - shape with input. - """ - x = x.clamp(min=0, max=1) - x1 = x.clamp(min=eps) - x2 = (1 - x).clamp(min=eps) - return torch.log(x1 / x2) - - -@TRANSFORMER_LAYER.register_module() -class DetrTransformerDecoderLayer(BaseTransformerLayer): - """Implements decoder layer in DETR transformer. - - Args: - attn_cfgs (list[`mmcv.ConfigDict`] | list[dict] | dict )): - Configs for self_attention or cross_attention, the order - should be consistent with it in `operation_order`. If it is - a dict, it would be expand to the number of attention in - `operation_order`. 
- feedforward_channels (int): The hidden dimension for FFNs. - ffn_dropout (float): Probability of an element to be zeroed - in ffn. Default 0.0. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm'). - Default:None - act_cfg (dict): The activation config for FFNs. Default: `LN` - norm_cfg (dict): Config dict for normalization layer. - Default: `LN`. - ffn_num_fcs (int): The number of fully-connected layers in FFNs. - Default:2. - """ - - def __init__(self, - attn_cfgs, - feedforward_channels, - ffn_dropout=0.0, - operation_order=None, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - ffn_num_fcs=2, - **kwargs): - super(DetrTransformerDecoderLayer, self).__init__( - attn_cfgs=attn_cfgs, - feedforward_channels=feedforward_channels, - ffn_dropout=ffn_dropout, - operation_order=operation_order, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - ffn_num_fcs=ffn_num_fcs, - **kwargs) - assert len(operation_order) == 6 - assert set(operation_order) == set( - ['self_attn', 'norm', 'cross_attn', 'ffn']) - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class DetrTransformerEncoder(TransformerLayerSequence): - """TransformerEncoder of DETR. - - Args: - post_norm_cfg (dict): Config of last normalization layer. Default: - `LN`. Only used when `self.pre_norm` is `True` - """ - - def __init__(self, *args, post_norm_cfg=dict(type='LN'), **kwargs): - super(DetrTransformerEncoder, self).__init__(*args, **kwargs) - if post_norm_cfg is not None: - self.post_norm = build_norm_layer( - post_norm_cfg, self.embed_dims)[1] if self.pre_norm else None - else: - assert not self.pre_norm, f'Use prenorm in ' \ - f'{self.__class__.__name__},' \ - f'Please specify post_norm_cfg' - self.post_norm = None - - def forward(self, *args, **kwargs): - """Forward function for `TransformerCoder`. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. - """ - x = super(DetrTransformerEncoder, self).forward(*args, **kwargs) - if self.post_norm is not None: - x = self.post_norm(x) - return x - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class DetrTransformerDecoder(TransformerLayerSequence): - """Implements the decoder in DETR transformer. - - Args: - return_intermediate (bool): Whether to return intermediate outputs. - post_norm_cfg (dict): Config of last normalization layer. Default: - `LN`. - """ - - def __init__(self, - *args, - post_norm_cfg=dict(type='LN'), - return_intermediate=False, - **kwargs): - - super(DetrTransformerDecoder, self).__init__(*args, **kwargs) - self.return_intermediate = return_intermediate - if post_norm_cfg is not None: - self.post_norm = build_norm_layer(post_norm_cfg, - self.embed_dims)[1] - else: - self.post_norm = None - - def forward(self, query, *args, **kwargs): - """Forward function for `TransformerDecoder`. - - Args: - query (Tensor): Input query with shape - `(num_query, bs, embed_dims)`. - - Returns: - Tensor: Results with shape [1, num_query, bs, embed_dims] when - return_intermediate is `False`, otherwise it has shape - [num_layers, num_query, bs, embed_dims]. 
- """ - if not self.return_intermediate: - x = super().forward(query, *args, **kwargs) - if self.post_norm: - x = self.post_norm(x)[None] - return x - - intermediate = [] - for layer in self.layers: - query = layer(query, *args, **kwargs) - if self.return_intermediate: - if self.post_norm is not None: - intermediate.append(self.post_norm(query)) - else: - intermediate.append(query) - return torch.stack(intermediate) - - -@TRANSFORMER.register_module() -class Transformer(BaseModule): - """Implements the DETR transformer. - - Following the official DETR implementation, this module copy-paste - from torch.nn.Transformer with modifications: - - * positional encodings are passed in MultiheadAttention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers - - See `paper: End-to-End Object Detection with Transformers - `_ for details. - - Args: - encoder (`mmcv.ConfigDict` | Dict): Config of - TransformerEncoder. Defaults to None. - decoder ((`mmcv.ConfigDict` | Dict)): Config of - TransformerDecoder. Defaults to None - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Defaults to None. - """ - - def __init__(self, encoder=None, decoder=None, init_cfg=None): - super(Transformer, self).__init__(init_cfg=init_cfg) - self.encoder = build_transformer_layer_sequence(encoder) - self.decoder = build_transformer_layer_sequence(decoder) - self.embed_dims = self.encoder.embed_dims - - def init_weights(self): - # follow the official DETR to init parameters - for m in self.modules(): - if hasattr(m, 'weight') and m.weight.dim() > 1: - xavier_init(m, distribution='uniform') - self._is_init = True - - def forward(self, x, mask, query_embed, pos_embed): - """Forward function for `Transformer`. - - Args: - x (Tensor): Input query with shape [bs, c, h, w] where - c = embed_dims. - mask (Tensor): The key_padding_mask used for encoder and decoder, - with shape [bs, h, w]. - query_embed (Tensor): The query embedding for decoder, with shape - [num_query, c]. - pos_embed (Tensor): The positional encoding for encoder and - decoder, with the same shape as `x`. - - Returns: - tuple[Tensor]: results of decoder containing the following tensor. - - - out_dec: Output from decoder. If return_intermediate_dec \ - is True output has shape [num_dec_layers, bs, - num_query, embed_dims], else has shape [1, bs, \ - num_query, embed_dims]. - - memory: Output results from encoder, with shape \ - [bs, embed_dims, h, w]. - """ - bs, c, h, w = x.shape - # use `view` instead of `flatten` for dynamically exporting to ONNX - x = x.view(bs, c, -1).permute(2, 0, 1) # [bs, c, h, w] -> [h*w, bs, c] - pos_embed = pos_embed.view(bs, c, -1).permute(2, 0, 1) - query_embed = query_embed.unsqueeze(1).repeat( - 1, bs, 1) # [num_query, dim] -> [num_query, bs, dim] - mask = mask.view(bs, -1) # [bs, h, w] -> [bs, h*w] - memory = self.encoder( - query=x, - key=None, - value=None, - query_pos=pos_embed, - query_key_padding_mask=mask) - target = torch.zeros_like(query_embed) - # out_dec: [num_layers, num_query, bs, dim] - out_dec = self.decoder( - query=target, - key=memory, - value=memory, - key_pos=pos_embed, - query_pos=query_embed, - key_padding_mask=mask) - out_dec = out_dec.transpose(1, 2) - memory = memory.permute(1, 2, 0).reshape(bs, c, h, w) - return out_dec, memory - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class DeformableDetrTransformerDecoder(TransformerLayerSequence): - """Implements the decoder in DETR transformer. 
- - Args: - return_intermediate (bool): Whether to return intermediate outputs. - coder_norm_cfg (dict): Config of last normalization layer. Default: - `LN`. - """ - - def __init__(self, *args, return_intermediate=False, **kwargs): - - super(DeformableDetrTransformerDecoder, self).__init__(*args, **kwargs) - self.return_intermediate = return_intermediate - - def forward(self, - query, - *args, - reference_points=None, - valid_ratios=None, - reg_branches=None, - **kwargs): - """Forward function for `TransformerDecoder`. - - Args: - query (Tensor): Input query with shape - `(num_query, bs, embed_dims)`. - reference_points (Tensor): The reference - points of offset. has shape - (bs, num_query, 4) when as_two_stage, - otherwise has shape ((bs, num_query, 2). - valid_ratios (Tensor): The radios of valid - points on the feature map, has shape - (bs, num_levels, 2) - reg_branch: (obj:`nn.ModuleList`): Used for - refining the regression results. Only would - be passed when with_box_refine is True, - otherwise would be passed a `None`. - - Returns: - Tensor: Results with shape [1, num_query, bs, embed_dims] when - return_intermediate is `False`, otherwise it has shape - [num_layers, num_query, bs, embed_dims]. - """ - output = query - intermediate = [] - intermediate_reference_points = [] - for lid, layer in enumerate(self.layers): - if reference_points.shape[-1] == 4: - reference_points_input = reference_points[:, :, None] * \ - torch.cat([valid_ratios, valid_ratios], -1)[:, None] - else: - assert reference_points.shape[-1] == 2 - reference_points_input = reference_points[:, :, None] * \ - valid_ratios[:, None] - output = layer( - output, - *args, - reference_points=reference_points_input, - **kwargs) - output = output.permute(1, 0, 2) - - if reg_branches is not None: - tmp = reg_branches[lid](output) - if reference_points.shape[-1] == 4: - new_reference_points = tmp + inverse_sigmoid( - reference_points) - new_reference_points = new_reference_points.sigmoid() - else: - assert reference_points.shape[-1] == 2 - new_reference_points = tmp - new_reference_points[..., :2] = tmp[ - ..., :2] + inverse_sigmoid(reference_points) - new_reference_points = new_reference_points.sigmoid() - reference_points = new_reference_points.detach() - - output = output.permute(1, 0, 2) - if self.return_intermediate: - intermediate.append(output) - intermediate_reference_points.append(reference_points) - - if self.return_intermediate: - return torch.stack(intermediate), torch.stack( - intermediate_reference_points) - - return output, reference_points - - -@TRANSFORMER.register_module() -class DeformableDetrTransformer(Transformer): - """Implements the DeformableDETR transformer. - - Args: - as_two_stage (bool): Generate query from encoder features. - Default: False. - num_feature_levels (int): Number of feature maps from FPN: - Default: 4. - two_stage_num_proposals (int): Number of proposals when set - `as_two_stage` as True. Default: 300. 
- """ - - def __init__(self, - as_two_stage=False, - num_feature_levels=4, - two_stage_num_proposals=300, - **kwargs): - super(DeformableDetrTransformer, self).__init__(**kwargs) - self.as_two_stage = as_two_stage - self.num_feature_levels = num_feature_levels - self.two_stage_num_proposals = two_stage_num_proposals - self.embed_dims = self.encoder.embed_dims - self.init_layers() - - def init_layers(self): - """Initialize layers of the DeformableDetrTransformer.""" - self.level_embeds = nn.Parameter( - torch.Tensor(self.num_feature_levels, self.embed_dims)) - - if self.as_two_stage: - self.enc_output = nn.Linear(self.embed_dims, self.embed_dims) - self.enc_output_norm = nn.LayerNorm(self.embed_dims) - self.pos_trans = nn.Linear(self.embed_dims * 2, - self.embed_dims * 2) - self.pos_trans_norm = nn.LayerNorm(self.embed_dims * 2) - else: - self.reference_points = nn.Linear(self.embed_dims, 2) - - def init_weights(self): - """Initialize the transformer weights.""" - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - for m in self.modules(): - if isinstance(m, MultiScaleDeformableAttention): - m.init_weights() - if not self.as_two_stage: - xavier_init(self.reference_points, distribution='uniform', bias=0.) - normal_(self.level_embeds) - - def gen_encoder_output_proposals(self, memory, memory_padding_mask, - spatial_shapes): - """Generate proposals from encoded memory. - - Args: - memory (Tensor) : The output of encoder, - has shape (bs, num_key, embed_dim). num_key is - equal the number of points on feature map from - all level. - memory_padding_mask (Tensor): Padding mask for memory. - has shape (bs, num_key). - spatial_shapes (Tensor): The shape of all feature maps. - has shape (num_level, 2). - - Returns: - tuple: A tuple of feature map and bbox prediction. - - - output_memory (Tensor): The input of decoder, \ - has shape (bs, num_key, embed_dim). num_key is \ - equal the number of points on feature map from \ - all levels. - - output_proposals (Tensor): The normalized proposal \ - after a inverse sigmoid, has shape \ - (bs, num_keys, 4). 
- """ - - N, S, C = memory.shape - proposals = [] - _cur = 0 - for lvl, (H, W) in enumerate(spatial_shapes): - mask_flatten_ = memory_padding_mask[:, _cur:(_cur + H * W)].view( - N, H, W, 1) - valid_H = torch.sum(~mask_flatten_[:, :, 0, 0], 1) - valid_W = torch.sum(~mask_flatten_[:, 0, :, 0], 1) - - grid_y, grid_x = torch.meshgrid( - torch.linspace( - 0, H - 1, H, dtype=torch.float32, device=memory.device), - torch.linspace( - 0, W - 1, W, dtype=torch.float32, device=memory.device)) - grid = torch.cat([grid_x.unsqueeze(-1), grid_y.unsqueeze(-1)], -1) - - scale = torch.cat([valid_W.unsqueeze(-1), - valid_H.unsqueeze(-1)], 1).view(N, 1, 1, 2) - grid = (grid.unsqueeze(0).expand(N, -1, -1, -1) + 0.5) / scale - wh = torch.ones_like(grid) * 0.05 * (2.0**lvl) - proposal = torch.cat((grid, wh), -1).view(N, -1, 4) - proposals.append(proposal) - _cur += (H * W) - output_proposals = torch.cat(proposals, 1) - output_proposals_valid = ((output_proposals > 0.01) & - (output_proposals < 0.99)).all( - -1, keepdim=True) - output_proposals = torch.log(output_proposals / (1 - output_proposals)) - output_proposals = output_proposals.masked_fill( - memory_padding_mask.unsqueeze(-1), float('inf')) - output_proposals = output_proposals.masked_fill( - ~output_proposals_valid, float('inf')) - - output_memory = memory - output_memory = output_memory.masked_fill( - memory_padding_mask.unsqueeze(-1), float(0)) - output_memory = output_memory.masked_fill(~output_proposals_valid, - float(0)) - output_memory = self.enc_output_norm(self.enc_output(output_memory)) - return output_memory, output_proposals - - @staticmethod - def get_reference_points(spatial_shapes, valid_ratios, device): - """Get the reference points used in decoder. - - Args: - spatial_shapes (Tensor): The shape of all - feature maps, has shape (num_level, 2). - valid_ratios (Tensor): The radios of valid - points on the feature map, has shape - (bs, num_levels, 2) - device (obj:`device`): The device where - reference_points should be. - - Returns: - Tensor: reference points used in decoder, has \ - shape (bs, num_keys, num_levels, 2). 
- """ - reference_points_list = [] - for lvl, (H, W) in enumerate(spatial_shapes): - # TODO check this 0.5 - ref_y, ref_x = torch.meshgrid( - torch.linspace( - 0.5, H - 0.5, H, dtype=torch.float32, device=device), - torch.linspace( - 0.5, W - 0.5, W, dtype=torch.float32, device=device)) - ref_y = ref_y.reshape(-1)[None] / ( - valid_ratios[:, None, lvl, 1] * H) - ref_x = ref_x.reshape(-1)[None] / ( - valid_ratios[:, None, lvl, 0] * W) - ref = torch.stack((ref_x, ref_y), -1) - reference_points_list.append(ref) - reference_points = torch.cat(reference_points_list, 1) - reference_points = reference_points[:, :, None] * valid_ratios[:, None] - return reference_points - - def get_valid_ratio(self, mask): - """Get the valid radios of feature maps of all level.""" - _, H, W = mask.shape - valid_H = torch.sum(~mask[:, :, 0], 1) - valid_W = torch.sum(~mask[:, 0, :], 1) - valid_ratio_h = valid_H.float() / H - valid_ratio_w = valid_W.float() / W - valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1) - return valid_ratio - - def get_proposal_pos_embed(self, - proposals, - num_pos_feats=128, - temperature=10000): - """Get the position embedding of proposal.""" - scale = 2 * math.pi - dim_t = torch.arange( - num_pos_feats, dtype=torch.float32, device=proposals.device) - dim_t = temperature**(2 * (dim_t // 2) / num_pos_feats) - # N, L, 4 - proposals = proposals.sigmoid() * scale - # N, L, 4, 128 - pos = proposals[:, :, :, None] / dim_t - # N, L, 4, 64, 2 - pos = torch.stack((pos[:, :, :, 0::2].sin(), pos[:, :, :, 1::2].cos()), - dim=4).flatten(2) - return pos - - def forward(self, - mlvl_feats, - mlvl_masks, - query_embed, - mlvl_pos_embeds, - reg_branches=None, - cls_branches=None, - **kwargs): - """Forward function for `Transformer`. - - Args: - mlvl_feats (list(Tensor)): Input queries from - different level. Each element has shape - [bs, embed_dims, h, w]. - mlvl_masks (list(Tensor)): The key_padding_mask from - different level used for encoder and decoder, - each element has shape [bs, h, w]. - query_embed (Tensor): The query embedding for decoder, - with shape [num_query, c]. - mlvl_pos_embeds (list(Tensor)): The positional encoding - of feats from different level, has the shape - [bs, embed_dims, h, w]. - reg_branches (obj:`nn.ModuleList`): Regression heads for - feature maps from each decoder layer. Only would - be passed when - `with_box_refine` is True. Default to None. - cls_branches (obj:`nn.ModuleList`): Classification heads - for feature maps from each decoder layer. Only would - be passed when `as_two_stage` - is True. Default to None. - - - Returns: - tuple[Tensor]: results of decoder containing the following tensor. - - - inter_states: Outputs from decoder. If - return_intermediate_dec is True output has shape \ - (num_dec_layers, bs, num_query, embed_dims), else has \ - shape (1, bs, num_query, embed_dims). - - init_reference_out: The initial value of reference \ - points, has shape (bs, num_queries, 4). - - inter_references_out: The internal value of reference \ - points in decoder, has shape \ - (num_dec_layers, bs,num_query, embed_dims) - - enc_outputs_class: The classification score of \ - proposals generated from \ - encoder's feature maps, has shape \ - (batch, h*w, num_classes). \ - Only would be returned when `as_two_stage` is True, \ - otherwise None. - - enc_outputs_coord_unact: The regression results \ - generated from encoder's feature maps., has shape \ - (batch, h*w, 4). Only would \ - be returned when `as_two_stage` is True, \ - otherwise None. 
- """ - assert self.as_two_stage or query_embed is not None - - feat_flatten = [] - mask_flatten = [] - lvl_pos_embed_flatten = [] - spatial_shapes = [] - for lvl, (feat, mask, pos_embed) in enumerate( - zip(mlvl_feats, mlvl_masks, mlvl_pos_embeds)): - bs, c, h, w = feat.shape - spatial_shape = (h, w) - spatial_shapes.append(spatial_shape) - feat = feat.flatten(2).transpose(1, 2) - mask = mask.flatten(1) - pos_embed = pos_embed.flatten(2).transpose(1, 2) - lvl_pos_embed = pos_embed + self.level_embeds[lvl].view(1, 1, -1) - lvl_pos_embed_flatten.append(lvl_pos_embed) - feat_flatten.append(feat) - mask_flatten.append(mask) - feat_flatten = torch.cat(feat_flatten, 1) - mask_flatten = torch.cat(mask_flatten, 1) - lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1) - spatial_shapes = torch.as_tensor( - spatial_shapes, dtype=torch.long, device=feat_flatten.device) - level_start_index = torch.cat((spatial_shapes.new_zeros( - (1, )), spatial_shapes.prod(1).cumsum(0)[:-1])) - valid_ratios = torch.stack( - [self.get_valid_ratio(m) for m in mlvl_masks], 1) - - reference_points = \ - self.get_reference_points(spatial_shapes, - valid_ratios, - device=feat.device) - - feat_flatten = feat_flatten.permute(1, 0, 2) # (H*W, bs, embed_dims) - lvl_pos_embed_flatten = lvl_pos_embed_flatten.permute( - 1, 0, 2) # (H*W, bs, embed_dims) - memory = self.encoder( - query=feat_flatten, - key=None, - value=None, - query_pos=lvl_pos_embed_flatten, - query_key_padding_mask=mask_flatten, - spatial_shapes=spatial_shapes, - reference_points=reference_points, - level_start_index=level_start_index, - valid_ratios=valid_ratios, - **kwargs) - - memory = memory.permute(1, 0, 2) - bs, _, c = memory.shape - if self.as_two_stage: - output_memory, output_proposals = \ - self.gen_encoder_output_proposals( - memory, mask_flatten, spatial_shapes) - enc_outputs_class = cls_branches[self.decoder.num_layers]( - output_memory) - enc_outputs_coord_unact = \ - reg_branches[ - self.decoder.num_layers](output_memory) + output_proposals - - topk = self.two_stage_num_proposals - # We only use the first channel in enc_outputs_class as foreground, - # the other (num_classes - 1) channels are actually not used. - # Its targets are set to be 0s, which indicates the first - # class (foreground) because we use [0, num_classes - 1] to - # indicate class labels, background class is indicated by - # num_classes (similar convention in RPN). - # See https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/dense_heads/deformable_detr_head.py#L241 # noqa - # This follows the official implementation of Deformable DETR. 
- topk_proposals = torch.topk( - enc_outputs_class[..., 0], topk, dim=1)[1] - topk_coords_unact = torch.gather( - enc_outputs_coord_unact, 1, - topk_proposals.unsqueeze(-1).repeat(1, 1, 4)) - topk_coords_unact = topk_coords_unact.detach() - reference_points = topk_coords_unact.sigmoid() - init_reference_out = reference_points - pos_trans_out = self.pos_trans_norm( - self.pos_trans(self.get_proposal_pos_embed(topk_coords_unact))) - query_pos, query = torch.split(pos_trans_out, c, dim=2) - else: - query_pos, query = torch.split(query_embed, c, dim=1) - query_pos = query_pos.unsqueeze(0).expand(bs, -1, -1) - query = query.unsqueeze(0).expand(bs, -1, -1) - reference_points = self.reference_points(query_pos).sigmoid() - init_reference_out = reference_points - - # decoder - query = query.permute(1, 0, 2) - memory = memory.permute(1, 0, 2) - query_pos = query_pos.permute(1, 0, 2) - inter_states, inter_references = self.decoder( - query=query, - key=None, - value=memory, - query_pos=query_pos, - key_padding_mask=mask_flatten, - reference_points=reference_points, - spatial_shapes=spatial_shapes, - level_start_index=level_start_index, - valid_ratios=valid_ratios, - reg_branches=reg_branches, - **kwargs) - - inter_references_out = inter_references - if self.as_two_stage: - return inter_states, init_reference_out,\ - inter_references_out, enc_outputs_class,\ - enc_outputs_coord_unact - return inter_states, init_reference_out, \ - inter_references_out, None, None - - -@TRANSFORMER.register_module() -class DynamicConv(BaseModule): - """Implements Dynamic Convolution. - - This module generate parameters for each sample and - use bmm to implement 1*1 convolution. Code is modified - from the `official github repo `_ . - - Args: - in_channels (int): The input feature channel. - Defaults to 256. - feat_channels (int): The inner feature channel. - Defaults to 64. - out_channels (int, optional): The output feature channel. - When not specified, it will be set to `in_channels` - by default - input_feat_shape (int): The shape of input feature. - Defaults to 7. - with_proj (bool): Project two-dimentional feature to - one-dimentional feature. Default to True. - act_cfg (dict): The activation config for DynamicConv. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. 
- """ - - def __init__(self, - in_channels=256, - feat_channels=64, - out_channels=None, - input_feat_shape=7, - with_proj=True, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - init_cfg=None): - super(DynamicConv, self).__init__(init_cfg) - self.in_channels = in_channels - self.feat_channels = feat_channels - self.out_channels_raw = out_channels - self.input_feat_shape = input_feat_shape - self.with_proj = with_proj - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.out_channels = out_channels if out_channels else in_channels - - self.num_params_in = self.in_channels * self.feat_channels - self.num_params_out = self.out_channels * self.feat_channels - self.dynamic_layer = nn.Linear( - self.in_channels, self.num_params_in + self.num_params_out) - - self.norm_in = build_norm_layer(norm_cfg, self.feat_channels)[1] - self.norm_out = build_norm_layer(norm_cfg, self.out_channels)[1] - - self.activation = build_activation_layer(act_cfg) - - num_output = self.out_channels * input_feat_shape**2 - if self.with_proj: - self.fc_layer = nn.Linear(num_output, self.out_channels) - self.fc_norm = build_norm_layer(norm_cfg, self.out_channels)[1] - - def forward(self, param_feature, input_feature): - """Forward function for `DynamicConv`. - - Args: - param_feature (Tensor): The feature can be used - to generate the parameter, has shape - (num_all_proposals, in_channels). - input_feature (Tensor): Feature that - interact with parameters, has shape - (num_all_proposals, in_channels, H, W). - - Returns: - Tensor: The output feature has shape - (num_all_proposals, out_channels). - """ - input_feature = input_feature.flatten(2).permute(2, 0, 1) - - input_feature = input_feature.permute(1, 0, 2) - parameters = self.dynamic_layer(param_feature) - - param_in = parameters[:, :self.num_params_in].view( - -1, self.in_channels, self.feat_channels) - param_out = parameters[:, -self.num_params_out:].view( - -1, self.feat_channels, self.out_channels) - - # input_feature has shape (num_all_proposals, H*W, in_channels) - # param_in has shape (num_all_proposals, in_channels, feat_channels) - # feature has shape (num_all_proposals, H*W, feat_channels) - features = torch.bmm(input_feature, param_in) - features = self.norm_in(features) - features = self.activation(features) - - # param_out has shape (batch_size, feat_channels, out_channels) - features = torch.bmm(features, param_out) - features = self.norm_out(features) - features = self.activation(features) - - if self.with_proj: - features = features.flatten(1) - features = self.fc_layer(features) - features = self.fc_norm(features) - features = self.activation(features) - - return features diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/__init__.py deleted file mode 100644 index 350452a9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .collect_env import collect_env -from .compat_config import compat_cfg -from .logger import get_caller_name, get_root_logger, log_img_scale -from .misc import find_latest_checkpoint, update_data_root -from .setup_env import setup_multi_processes -from .split_batch import split_batch -from .util_distribution import build_ddp, build_dp, get_device - -__all__ = [ - 'get_root_logger', 'collect_env', 'find_latest_checkpoint', - 'update_data_root', 'setup_multi_processes', 'get_caller_name', - 'log_img_scale', 'compat_cfg', 'split_batch', 'build_ddp', 'build_dp', - 'get_device' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/collect_env.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/collect_env.py deleted file mode 100644 index 97e25c0e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/collect_env.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import collect_env as collect_base_env -from mmcv.utils import get_git_hash - -import mmdet - - -def collect_env(): - """Collect the information of the running environments.""" - env_info = collect_base_env() - env_info['MMDetection'] = mmdet.__version__ + '+' + get_git_hash()[:7] - return env_info - - -if __name__ == '__main__': - for name, val in collect_env().items(): - print(f'{name}: {val}') diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/compat_config.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/compat_config.py deleted file mode 100644 index 05aa37dc..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/compat_config.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -from mmcv import ConfigDict - - -def compat_cfg(cfg): - """This function would modify some filed to keep the compatibility of - config. - - For example, it will move some args which will be deprecated to the correct - fields. - """ - cfg = copy.deepcopy(cfg) - cfg = compat_imgs_per_gpu(cfg) - cfg = compat_loader_args(cfg) - cfg = compat_runner_args(cfg) - return cfg - - -def compat_runner_args(cfg): - if 'runner' not in cfg: - cfg.runner = ConfigDict({ - 'type': 'EpochBasedRunner', - 'max_epochs': cfg.total_epochs - }) - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - else: - if 'total_epochs' in cfg: - assert cfg.total_epochs == cfg.runner.max_epochs - return cfg - - -def compat_imgs_per_gpu(cfg): - cfg = copy.deepcopy(cfg) - if 'imgs_per_gpu' in cfg.data: - warnings.warn('"imgs_per_gpu" is deprecated in MMDet V2.0. 
' - 'Please use "samples_per_gpu" instead') - if 'samples_per_gpu' in cfg.data: - warnings.warn( - f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and ' - f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"' - f'={cfg.data.imgs_per_gpu} is used in this experiments') - else: - warnings.warn('Automatically set "samples_per_gpu"="imgs_per_gpu"=' - f'{cfg.data.imgs_per_gpu} in this experiments') - cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu - return cfg - - -def compat_loader_args(cfg): - """Deprecated sample_per_gpu in cfg.data.""" - - cfg = copy.deepcopy(cfg) - if 'train_dataloader' not in cfg.data: - cfg.data['train_dataloader'] = ConfigDict() - if 'val_dataloader' not in cfg.data: - cfg.data['val_dataloader'] = ConfigDict() - if 'test_dataloader' not in cfg.data: - cfg.data['test_dataloader'] = ConfigDict() - - # special process for train_dataloader - if 'samples_per_gpu' in cfg.data: - - samples_per_gpu = cfg.data.pop('samples_per_gpu') - assert 'samples_per_gpu' not in \ - cfg.data.train_dataloader, ('`samples_per_gpu` are set ' - 'in `data` field and ` ' - 'data.train_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.train_dataloader`. ') - cfg.data.train_dataloader['samples_per_gpu'] = samples_per_gpu - - if 'persistent_workers' in cfg.data: - - persistent_workers = cfg.data.pop('persistent_workers') - assert 'persistent_workers' not in \ - cfg.data.train_dataloader, ('`persistent_workers` are set ' - 'in `data` field and ` ' - 'data.train_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.train_dataloader`. ') - cfg.data.train_dataloader['persistent_workers'] = persistent_workers - - if 'workers_per_gpu' in cfg.data: - - workers_per_gpu = cfg.data.pop('workers_per_gpu') - cfg.data.train_dataloader['workers_per_gpu'] = workers_per_gpu - cfg.data.val_dataloader['workers_per_gpu'] = workers_per_gpu - cfg.data.test_dataloader['workers_per_gpu'] = workers_per_gpu - - # special process for val_dataloader - if 'samples_per_gpu' in cfg.data.val: - # keep default value of `sample_per_gpu` is 1 - assert 'samples_per_gpu' not in \ - cfg.data.val_dataloader, ('`samples_per_gpu` are set ' - 'in `data.val` field and ` ' - 'data.val_dataloader` at ' - 'the same time. ' - 'Please only set it in ' - '`data.val_dataloader`. ') - cfg.data.val_dataloader['samples_per_gpu'] = \ - cfg.data.val.pop('samples_per_gpu') - # special process for val_dataloader - - # in case the test dataset is concatenated - if isinstance(cfg.data.test, dict): - if 'samples_per_gpu' in cfg.data.test: - assert 'samples_per_gpu' not in \ - cfg.data.test_dataloader, ('`samples_per_gpu` are set ' - 'in `data.test` field and ` ' - 'data.test_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.test_dataloader`. ') - - cfg.data.test_dataloader['samples_per_gpu'] = \ - cfg.data.test.pop('samples_per_gpu') - - elif isinstance(cfg.data.test, list): - for ds_cfg in cfg.data.test: - if 'samples_per_gpu' in ds_cfg: - assert 'samples_per_gpu' not in \ - cfg.data.test_dataloader, ('`samples_per_gpu` are set ' - 'in `data.test` field and ` ' - 'data.test_dataloader` at' - ' the same time. ' - 'Please only set it in ' - '`data.test_dataloader`. 
') - samples_per_gpu = max( - [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test]) - cfg.data.test_dataloader['samples_per_gpu'] = samples_per_gpu - - return cfg diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/contextmanagers.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/contextmanagers.py deleted file mode 100644 index fa12bfca..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/contextmanagers.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import asyncio -import contextlib -import logging -import os -import time -from typing import List - -import torch - -logger = logging.getLogger(__name__) - -DEBUG_COMPLETED_TIME = bool(os.environ.get('DEBUG_COMPLETED_TIME', False)) - - -@contextlib.asynccontextmanager -async def completed(trace_name='', - name='', - sleep_interval=0.05, - streams: List[torch.cuda.Stream] = None): - """Async context manager that waits for work to complete on given CUDA - streams.""" - if not torch.cuda.is_available(): - yield - return - - stream_before_context_switch = torch.cuda.current_stream() - if not streams: - streams = [stream_before_context_switch] - else: - streams = [s if s else stream_before_context_switch for s in streams] - - end_events = [ - torch.cuda.Event(enable_timing=DEBUG_COMPLETED_TIME) for _ in streams - ] - - if DEBUG_COMPLETED_TIME: - start = torch.cuda.Event(enable_timing=True) - stream_before_context_switch.record_event(start) - - cpu_start = time.monotonic() - logger.debug('%s %s starting, streams: %s', trace_name, name, streams) - grad_enabled_before = torch.is_grad_enabled() - try: - yield - finally: - current_stream = torch.cuda.current_stream() - assert current_stream == stream_before_context_switch - - if DEBUG_COMPLETED_TIME: - cpu_end = time.monotonic() - for i, stream in enumerate(streams): - event = end_events[i] - stream.record_event(event) - - grad_enabled_after = torch.is_grad_enabled() - - # observed change of torch.is_grad_enabled() during concurrent run of - # async_test_bboxes code - assert (grad_enabled_before == grad_enabled_after - ), 'Unexpected is_grad_enabled() value change' - - are_done = [e.query() for e in end_events] - logger.debug('%s %s completed: %s streams: %s', trace_name, name, - are_done, streams) - with torch.cuda.stream(stream_before_context_switch): - while not all(are_done): - await asyncio.sleep(sleep_interval) - are_done = [e.query() for e in end_events] - logger.debug( - '%s %s completed: %s streams: %s', - trace_name, - name, - are_done, - streams, - ) - - current_stream = torch.cuda.current_stream() - assert current_stream == stream_before_context_switch - - if DEBUG_COMPLETED_TIME: - cpu_time = (cpu_end - cpu_start) * 1000 - stream_times_ms = '' - for i, stream in enumerate(streams): - elapsed_time = start.elapsed_time(end_events[i]) - stream_times_ms += f' {stream} {elapsed_time:.2f} ms' - logger.info('%s %s %.2f ms %s', trace_name, name, cpu_time, - stream_times_ms) - - -@contextlib.asynccontextmanager -async def concurrent(streamqueue: asyncio.Queue, - trace_name='concurrent', - name='stream'): - """Run code concurrently in different streams. - - :param streamqueue: asyncio.Queue instance. - - Queue tasks define the pool of streams used for concurrent execution. 
- """ - if not torch.cuda.is_available(): - yield - return - - initial_stream = torch.cuda.current_stream() - - with torch.cuda.stream(initial_stream): - stream = await streamqueue.get() - assert isinstance(stream, torch.cuda.Stream) - - try: - with torch.cuda.stream(stream): - logger.debug('%s %s is starting, stream: %s', trace_name, name, - stream) - yield - current = torch.cuda.current_stream() - assert current == stream - logger.debug('%s %s has finished, stream: %s', trace_name, - name, stream) - finally: - streamqueue.task_done() - streamqueue.put_nowait(stream) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/logger.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/logger.py deleted file mode 100644 index 485f641b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/logger.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import logging - -from mmcv.utils import get_logger - - -def get_root_logger(log_file=None, log_level=logging.INFO): - """Get root logger. - - Args: - log_file (str, optional): File path of log. Defaults to None. - log_level (int, optional): The level of logger. - Defaults to logging.INFO. - - Returns: - :obj:`logging.Logger`: The obtained logger - """ - logger = get_logger(name='mmdet', log_file=log_file, log_level=log_level) - - return logger - - -def get_caller_name(): - """Get name of caller method.""" - # this_func_frame = inspect.stack()[0][0] # i.e., get_caller_name - # callee_frame = inspect.stack()[1][0] # e.g., log_img_scale - caller_frame = inspect.stack()[2][0] # e.g., caller of log_img_scale - caller_method = caller_frame.f_code.co_name - try: - caller_class = caller_frame.f_locals['self'].__class__.__name__ - return f'{caller_class}.{caller_method}' - except KeyError: # caller is a function - return caller_method - - -def log_img_scale(img_scale, shape_order='hw', skip_square=False): - """Log image size. - - Args: - img_scale (tuple): Image size to be logged. - shape_order (str, optional): The order of image shape. - 'hw' for (height, width) and 'wh' for (width, height). - Defaults to 'hw'. - skip_square (bool, optional): Whether to skip logging for square - img_scale. Defaults to False. - - Returns: - bool: Whether to have done logging. - """ - if shape_order == 'hw': - height, width = img_scale - elif shape_order == 'wh': - width, height = img_scale - else: - raise ValueError(f'Invalid shape_order {shape_order}.') - - if skip_square and (height == width): - return False - - logger = get_root_logger() - caller = get_caller_name() - logger.info(f'image shape: height={height}, width={width} in {caller}') - - return True diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/misc.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/misc.py deleted file mode 100644 index 4113672a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/misc.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import glob -import os -import os.path as osp -import warnings - -import mmcv -from mmcv.utils import print_log - - -def find_latest_checkpoint(path, suffix='pth'): - """Find the latest checkpoint from the working directory. - - Args: - path(str): The path to find checkpoints. - suffix(str): File extension. - Defaults to pth. - - Returns: - latest_path(str | None): File path of the latest checkpoint. - References: - .. 
[1] https://github.com/microsoft/SoftTeacher - /blob/main/ssod/utils/patch.py - """ - if not osp.exists(path): - warnings.warn('The path of checkpoints does not exist.') - return None - if osp.exists(osp.join(path, f'latest.{suffix}')): - return osp.join(path, f'latest.{suffix}') - - checkpoints = glob.glob(osp.join(path, f'*.{suffix}')) - if len(checkpoints) == 0: - warnings.warn('There are no checkpoints in the path.') - return None - latest = -1 - latest_path = None - for checkpoint in checkpoints: - count = int(osp.basename(checkpoint).split('_')[-1].split('.')[0]) - if count > latest: - latest = count - latest_path = checkpoint - return latest_path - - -def update_data_root(cfg, logger=None): - """Update data root according to env MMDET_DATASETS. - - If set env MMDET_DATASETS, update cfg.data_root according to - MMDET_DATASETS. Otherwise, using cfg.data_root as default. - - Args: - cfg (mmcv.Config): The model config need to modify - logger (logging.Logger | str | None): the way to print msg - """ - assert isinstance(cfg, mmcv.Config), \ - f'cfg got wrong type: {type(cfg)}, expected mmcv.Config' - - if 'MMDET_DATASETS' in os.environ: - dst_root = os.environ['MMDET_DATASETS'] - print_log(f'MMDET_DATASETS has been set to be {dst_root}.' - f'Using {dst_root} as data root.') - else: - return - - assert isinstance(cfg, mmcv.Config), \ - f'cfg got wrong type: {type(cfg)}, expected mmcv.Config' - - def update(cfg, src_str, dst_str): - for k, v in cfg.items(): - if isinstance(v, mmcv.ConfigDict): - update(cfg[k], src_str, dst_str) - if isinstance(v, str) and src_str in v: - cfg[k] = v.replace(src_str, dst_str) - - update(cfg.data, cfg.data_root, dst_root) - cfg.data_root = dst_root diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/profiling.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/profiling.py deleted file mode 100644 index 2f53f456..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/profiling.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import contextlib -import sys -import time - -import torch - -if sys.version_info >= (3, 7): - - @contextlib.contextmanager - def profile_time(trace_name, - name, - enabled=True, - stream=None, - end_stream=None): - """Print time spent by CPU and GPU. - - Useful as a temporary context manager to find sweet spots of code - suitable for async implementation. - """ - if (not enabled) or not torch.cuda.is_available(): - yield - return - stream = stream if stream else torch.cuda.current_stream() - end_stream = end_stream if end_stream else stream - start = torch.cuda.Event(enable_timing=True) - end = torch.cuda.Event(enable_timing=True) - stream.record_event(start) - try: - cpu_start = time.monotonic() - yield - finally: - cpu_end = time.monotonic() - end_stream.record_event(end) - end.synchronize() - cpu_time = (cpu_end - cpu_start) * 1000 - gpu_time = start.elapsed_time(end) - msg = f'{trace_name} {name} cpu_time {cpu_time:.2f} ms ' - msg += f'gpu_time {gpu_time:.2f} ms stream {stream}' - print(msg, end_stream) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/setup_env.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/setup_env.py deleted file mode 100644 index 6637cf87..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/setup_env.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
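`find_latest_checkpoint` in the removed `misc.py` resolves auto-resume by preferring `latest.pth` and otherwise taking the checkpoint with the largest trailing step number. A minimal standalone sketch of that selection logic, assuming checkpoints follow the usual `epoch_N.pth` / `iter_N.pth` naming:

```python
# Pick the newest checkpoint: prefer latest.pth, otherwise the file whose
# trailing number (epoch_12.pth -> 12, iter_8000.pth -> 8000) is largest.
import glob
import os.path as osp


def latest_checkpoint(path, suffix='pth'):
    explicit = osp.join(path, f'latest.{suffix}')
    if osp.exists(explicit):
        return explicit
    candidates = glob.glob(osp.join(path, f'*.{suffix}'))
    if not candidates:
        return None

    def step(ckpt):
        # 'epoch_12.pth' -> 12
        return int(osp.basename(ckpt).split('_')[-1].split('.')[0])

    return max(candidates, key=step)


# e.g. latest_checkpoint('./work_dirs/my_experiment')
```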
-import os -import platform -import warnings - -import cv2 -import torch.multiprocessing as mp - - -def setup_multi_processes(cfg): - """Setup multi-processing environment variables.""" - # set multi-process start method as `fork` to speed up the training - if platform.system() != 'Windows': - mp_start_method = cfg.get('mp_start_method', 'fork') - current_method = mp.get_start_method(allow_none=True) - if current_method is not None and current_method != mp_start_method: - warnings.warn( - f'Multi-processing start method `{mp_start_method}` is ' - f'different from the previous setting `{current_method}`.' - f'It will be force set to `{mp_start_method}`. You can change ' - f'this behavior by changing `mp_start_method` in your config.') - mp.set_start_method(mp_start_method, force=True) - - # disable opencv multithreading to avoid system being overloaded - opencv_num_threads = cfg.get('opencv_num_threads', 0) - cv2.setNumThreads(opencv_num_threads) - - # setup OMP threads - # This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py # noqa - workers_per_gpu = cfg.data.get('workers_per_gpu', 1) - if 'train_dataloader' in cfg.data: - workers_per_gpu = \ - max(cfg.data.train_dataloader.get('workers_per_gpu', 1), - workers_per_gpu) - - if 'OMP_NUM_THREADS' not in os.environ and workers_per_gpu > 1: - omp_num_threads = 1 - warnings.warn( - f'Setting OMP_NUM_THREADS environment variable for each process ' - f'to be {omp_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['OMP_NUM_THREADS'] = str(omp_num_threads) - - # setup MKL threads - if 'MKL_NUM_THREADS' not in os.environ and workers_per_gpu > 1: - mkl_num_threads = 1 - warnings.warn( - f'Setting MKL_NUM_THREADS environment variable for each process ' - f'to be {mkl_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['MKL_NUM_THREADS'] = str(mkl_num_threads) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/split_batch.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/split_batch.py deleted file mode 100644 index 0276fb33..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/split_batch.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def split_batch(img, img_metas, kwargs): - """Split data_batch by tags. - - Code is modified from - # noqa: E501 - - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys, see - :class:`mmdet.datasets.pipelines.Collect`. - kwargs (dict): Specific to concrete implementation. - - Returns: - data_groups (dict): a dict that data_batch splited by tags, - such as 'sup', 'unsup_teacher', and 'unsup_student'. 
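The removed `setup_multi_processes` also caps per-process OMP/MKL threads when several dataloader workers are in use, to keep the CPU from being oversubscribed. A stripped-down sketch of just that thread-capping step (`limit_cpu_threads` is an illustrative helper, not part of the toolbox):

```python
# Cap OMP/MKL threads per process when dataloader workers are used,
# unless the user has already set the environment variables explicitly.
import os


def limit_cpu_threads(workers_per_gpu, num_threads=1):
    if workers_per_gpu <= 1:
        return
    for var in ('OMP_NUM_THREADS', 'MKL_NUM_THREADS'):
        if var not in os.environ:
            os.environ[var] = str(num_threads)


limit_cpu_threads(workers_per_gpu=4)
print(os.environ.get('OMP_NUM_THREADS'), os.environ.get('MKL_NUM_THREADS'))
```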
- """ - - # only stack img in the batch - def fuse_list(obj_list, obj): - return torch.stack(obj_list) if isinstance(obj, - torch.Tensor) else obj_list - - # select data with tag from data_batch - def select_group(data_batch, current_tag): - group_flag = [tag == current_tag for tag in data_batch['tag']] - return { - k: fuse_list([vv for vv, gf in zip(v, group_flag) if gf], v) - for k, v in data_batch.items() - } - - kwargs.update({'img': img, 'img_metas': img_metas}) - kwargs.update({'tag': [meta['tag'] for meta in img_metas]}) - tags = list(set(kwargs['tag'])) - data_groups = {tag: select_group(kwargs, tag) for tag in tags} - for tag, group in data_groups.items(): - group.pop('tag') - return data_groups diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/util_distribution.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/util_distribution.py deleted file mode 100644 index a186bf6c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/util_distribution.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel - -dp_factory = {'cuda': MMDataParallel, 'cpu': MMDataParallel} - -ddp_factory = {'cuda': MMDistributedDataParallel} - - -def build_dp(model, device='cuda', dim=0, *args, **kwargs): - """build DataParallel module by device type. - - if device is cuda, return a MMDataParallel model; if device is mlu, - return a MLUDataParallel model. - - Args: - model (:class:`nn.Module`): model to be parallelized. - device (str): device type, cuda, cpu or mlu. Defaults to cuda. - dim (int): Dimension used to scatter the data. Defaults to 0. - - Returns: - nn.Module: the model to be parallelized. - """ - if device == 'cuda': - model = model.cuda() - elif device == 'mlu': - from mmcv.device.mlu import MLUDataParallel - dp_factory['mlu'] = MLUDataParallel - model = model.mlu() - - return dp_factory[device](model, dim=dim, *args, **kwargs) - - -def build_ddp(model, device='cuda', *args, **kwargs): - """Build DistributedDataParallel module by device type. - - If device is cuda, return a MMDistributedDataParallel model; - if device is mlu, return a MLUDistributedDataParallel model. - - Args: - model (:class:`nn.Module`): module to be parallelized. - device (str): device type, mlu or cuda. - - Returns: - :class:`nn.Module`: the module to be parallelized - - References: - .. [1] https://pytorch.org/docs/stable/generated/torch.nn.parallel. - DistributedDataParallel.html - """ - assert device in ['cuda', 'mlu'], 'Only available for cuda or mlu devices.' 
- if device == 'cuda': - model = model.cuda() - elif device == 'mlu': - from mmcv.device.mlu import MLUDistributedDataParallel - ddp_factory['mlu'] = MLUDistributedDataParallel - model = model.mlu() - - return ddp_factory[device](model, *args, **kwargs) - - -def is_mlu_available(): - """Returns a bool indicating if MLU is currently available.""" - return hasattr(torch, 'is_mlu_available') and torch.is_mlu_available() - - -def get_device(): - """Returns an available device, cpu, cuda or mlu.""" - is_device_available = { - 'cuda': torch.cuda.is_available(), - 'mlu': is_mlu_available() - } - device_list = [k for k, v in is_device_available.items() if v] - return device_list[0] if len(device_list) == 1 else 'cpu' diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/util_mixins.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/util_mixins.py deleted file mode 100644 index b83b6617..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/util_mixins.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""This module defines the :class:`NiceRepr` mixin class, which defines a -``__repr__`` and ``__str__`` method that only depend on a custom ``__nice__`` -method, which you must define. This means you only have to overload one -function instead of two. Furthermore, if the object defines a ``__len__`` -method, then the ``__nice__`` method defaults to something sensible, otherwise -it is treated as abstract and raises ``NotImplementedError``. - -To use simply have your object inherit from :class:`NiceRepr` -(multi-inheritance should be ok). - -This code was copied from the ubelt library: https://github.com/Erotemic/ubelt - -Example: - >>> # Objects that define __nice__ have a default __str__ and __repr__ - >>> class Student(NiceRepr): - ... def __init__(self, name): - ... self.name = name - ... def __nice__(self): - ... return self.name - >>> s1 = Student('Alice') - >>> s2 = Student('Bob') - >>> print(f's1 = {s1}') - >>> print(f's2 = {s2}') - s1 = - s2 = - -Example: - >>> # Objects that define __len__ have a default __nice__ - >>> class Group(NiceRepr): - ... def __init__(self, data): - ... self.data = data - ... def __len__(self): - ... return len(self.data) - >>> g = Group([1, 2, 3]) - >>> print(f'g = {g}') - g = -""" -import warnings - - -class NiceRepr: - """Inherit from this class and define ``__nice__`` to "nicely" print your - objects. - - Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function - Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``. - If the inheriting class has a ``__len__``, method then the default - ``__nice__`` method will return its length. - - Example: - >>> class Foo(NiceRepr): - ... def __nice__(self): - ... return 'info' - >>> foo = Foo() - >>> assert str(foo) == '' - >>> assert repr(foo).startswith('>> class Bar(NiceRepr): - ... pass - >>> bar = Bar() - >>> import pytest - >>> with pytest.warns(None) as record: - >>> assert 'object at' in str(bar) - >>> assert 'object at' in repr(bar) - - Example: - >>> class Baz(NiceRepr): - ... def __len__(self): - ... 
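`get_device` in the removed `util_distribution.py` simply falls back to CPU unless exactly one accelerator backend reports itself available. A compact sketch of that selection, with the MLU probe guarded so it runs on stock PyTorch:

```python
# Pick the single available accelerator backend, otherwise default to CPU.
import torch


def get_device():
    available = {
        'cuda': torch.cuda.is_available(),
        'mlu': hasattr(torch, 'is_mlu_available') and torch.is_mlu_available(),
    }
    devices = [name for name, ok in available.items() if ok]
    return devices[0] if len(devices) == 1 else 'cpu'


print(get_device())   # 'cuda' on a GPU machine, else 'cpu'
```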
return 5 - >>> baz = Baz() - >>> assert str(baz) == '' - """ - - def __nice__(self): - """str: a "nice" summary string describing this module""" - if hasattr(self, '__len__'): - # It is a common pattern for objects to use __len__ in __nice__ - # As a convenience we define a default __nice__ for these objects - return str(len(self)) - else: - # In all other cases force the subclass to overload __nice__ - raise NotImplementedError( - f'Define the __nice__ method for {self.__class__!r}') - - def __repr__(self): - """str: the string of the module""" - try: - nice = self.__nice__() - classname = self.__class__.__name__ - return f'<{classname}({nice}) at {hex(id(self))}>' - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - def __str__(self): - """str: the string of the module""" - try: - classname = self.__class__.__name__ - nice = self.__nice__() - return f'<{classname}({nice})>' - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) diff --git a/cv/3d_detection/paconv/pytorch/mmdet/utils/util_random.py b/cv/3d_detection/paconv/pytorch/mmdet/utils/util_random.py deleted file mode 100644 index dc1ecb6c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/utils/util_random.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Helpers for random number generators.""" -import numpy as np - - -def ensure_rng(rng=None): - """Coerces input into a random number generator. - - If the input is None, then a global random state is returned. - - If the input is a numeric value, then that is used as a seed to construct a - random state. Otherwise the input is returned as-is. - - Adapted from [1]_. - - Args: - rng (int | numpy.random.RandomState | None): - if None, then defaults to the global rng. Otherwise this can be an - integer or a RandomState class - Returns: - (numpy.random.RandomState) : rng - - a numpy random number generator - - References: - .. [1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270 # noqa: E501 - """ - - if rng is None: - rng = np.random.mtrand._rand - elif isinstance(rng, int): - rng = np.random.RandomState(rng) - else: - rng = rng - return rng diff --git a/cv/3d_detection/paconv/pytorch/mmdet/version.py b/cv/3d_detection/paconv/pytorch/mmdet/version.py deleted file mode 100644 index 0e03a9d3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet/version.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -__version__ = '2.24.0' -short_version = __version__ - - -def parse_version_info(version_str): - version_info = [] - for x in version_str.split('.'): - if x.isdigit(): - version_info.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - version_info.append(int(patch_version[0])) - version_info.append(f'rc{patch_version[1]}') - return tuple(version_info) - - -version_info = parse_version_info(__version__) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/__init__.py deleted file mode 100644 index 190764ba..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/__init__.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
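The removed `util_random.ensure_rng` coerces `None`, an int seed, or an existing `RandomState` into a usable generator, which lets sampling helpers accept any of the three interchangeably. A small usage sketch (`random_boxes` is a hypothetical consumer added only to show the pattern):

```python
# Accept None / int / RandomState uniformly when sampling toy boxes.
import numpy as np


def ensure_rng(rng=None):
    if rng is None:
        return np.random.mtrand._rand          # global RNG
    if isinstance(rng, int):
        return np.random.RandomState(rng)      # fresh, seeded RNG
    return rng                                 # already a RandomState


def random_boxes(n, rng=None):
    rng = ensure_rng(rng)
    xy = rng.rand(n, 2) * 100
    wh = rng.rand(n, 2) * 20
    return np.hstack([xy, xy + wh])            # (x1, y1, x2, y2)


print(random_boxes(2, rng=0))   # reproducible
print(random_boxes(2))          # uses the global RNG state
```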
-import mmcv - -import mmdet -import mmseg -from .version import __version__, short_version - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -mmcv_minimum_version = '1.4.8' -mmcv_maximum_version = '1.6.0' -mmcv_version = digit_version(mmcv.__version__) - - -# assert (mmcv_version >= digit_version(mmcv_minimum_version) -# and mmcv_version <= digit_version(mmcv_maximum_version)), \ -# f'MMCV=={mmcv.__version__} is used but incompatible. ' \ -# f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.' - -mmdet_minimum_version = '2.24.0' -mmdet_maximum_version = '3.0.0' -mmdet_version = digit_version(mmdet.__version__) -assert (mmdet_version >= digit_version(mmdet_minimum_version) - and mmdet_version <= digit_version(mmdet_maximum_version)), \ - f'MMDET=={mmdet.__version__} is used but incompatible. ' \ - f'Please install mmdet>={mmdet_minimum_version}, ' \ - f'<={mmdet_maximum_version}.' - -mmseg_minimum_version = '0.20.0' -mmseg_maximum_version = '1.0.0' -mmseg_version = digit_version(mmseg.__version__) -assert (mmseg_version >= digit_version(mmseg_minimum_version) - and mmseg_version <= digit_version(mmseg_maximum_version)), \ - f'MMSEG=={mmseg.__version__} is used but incompatible. ' \ - f'Please install mmseg>={mmseg_minimum_version}, ' \ - f'<={mmseg_maximum_version}.' - -__all__ = ['__version__', 'short_version'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/apis/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/apis/__init__.py deleted file mode 100644 index 5befc10d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/apis/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .inference import (convert_SyncBN, inference_detector, - inference_mono_3d_detector, - inference_multi_modality_detector, inference_segmentor, - init_model, show_result_meshlab) -from .test import single_gpu_test -from .train import init_random_seed, train_model - -__all__ = [ - 'inference_detector', 'init_model', 'single_gpu_test', - 'inference_mono_3d_detector', 'show_result_meshlab', 'convert_SyncBN', - 'train_model', 'inference_multi_modality_detector', 'inference_segmentor', - 'init_random_seed' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/apis/inference.py b/cv/3d_detection/paconv/pytorch/mmdet3d/apis/inference.py deleted file mode 100644 index 1457182c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/apis/inference.py +++ /dev/null @@ -1,526 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import re -from copy import deepcopy -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.parallel import collate, scatter -from mmcv.runner import load_checkpoint - -from mmdet3d.core import (Box3DMode, CameraInstance3DBoxes, Coord3DMode, - DepthInstance3DBoxes, LiDARInstance3DBoxes, - show_multi_modality_result, show_result, - show_seg_result) -from mmdet3d.core.bbox import get_box_type -from mmdet3d.datasets.pipelines import Compose -from mmdet3d.models import build_model -from mmdet3d.utils import get_root_logger - - -def convert_SyncBN(config): - """Convert config's naiveSyncBN to BN. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. 
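The removed `mmdet3d/__init__.py` gates its imports on version ranges via `digit_version`, which turns version strings into plain lists so that release candidates sort below the corresponding final release. A few worked comparisons, restating that parsing logic outside the package:

```python
# Version strings become comparable lists; an 'rcN' segment lowers the
# preceding number by one so that release candidates sort before releases.
def digit_version(version_str):
    parts = []
    for x in version_str.split('.'):
        if x.isdigit():
            parts.append(int(x))
        elif 'rc' in x:
            major, rc = x.split('rc')
            parts.append(int(major) - 1)
            parts.append(int(rc))
    return parts


assert digit_version('1.4.8') < digit_version('1.6.0')
assert digit_version('1.6.0rc1') < digit_version('1.6.0')  # [1, 6, -1, 1] < [1, 6, 0]
assert digit_version('2.24.0') <= digit_version('3.0.0')
```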
- """ - if isinstance(config, dict): - for item in config: - if item == 'norm_cfg': - config[item]['type'] = config[item]['type']. \ - replace('naiveSyncBN', 'BN') - else: - convert_SyncBN(config[item]) - - -def init_model(config, checkpoint=None, device='cuda:0'): - """Initialize a model from config file, which could be a 3D detector or a - 3D segmentor. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str): Device to use. - - Returns: - nn.Module: The constructed detector. - """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - f'but got {type(config)}') - config.model.pretrained = None - convert_SyncBN(config.model) - config.model.train_cfg = None - model = build_model(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - if 'CLASSES' in checkpoint['meta']: - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - model.CLASSES = config.class_names - if 'PALETTE' in checkpoint['meta']: # 3D Segmentor - model.PALETTE = checkpoint['meta']['PALETTE'] - model.cfg = config # save the config in the model for convenience - if device != 'cpu': - torch.cuda.set_device(device) - else: - logger = get_root_logger() - logger.warning('Don\'t suggest using CPU device. ' - 'Some functions are not supported for now.') - model.to(device) - model.eval() - return model - - -def inference_detector(model, pcd): - """Inference point cloud with the detector. - - Args: - model (nn.Module): The loaded detector. - pcd (str): Point cloud files. - - Returns: - tuple: Predicted results and data from pipeline. 
- """ - cfg = model.cfg - device = next(model.parameters()).device # model device - - if not isinstance(pcd, str): - cfg = cfg.copy() - # set loading pipeline type - cfg.data.test.pipeline[0].type = 'LoadPointsFromDict' - - # build the data pipeline - test_pipeline = deepcopy(cfg.data.test.pipeline) - test_pipeline = Compose(test_pipeline) - box_type_3d, box_mode_3d = get_box_type(cfg.data.test.box_type_3d) - - if isinstance(pcd, str): - # load from point clouds file - data = dict( - pts_filename=pcd, - box_type_3d=box_type_3d, - box_mode_3d=box_mode_3d, - # for ScanNet demo we need axis_align_matrix - ann_info=dict(axis_align_matrix=np.eye(4)), - sweeps=[], - # set timestamp = 0 - timestamp=[0], - img_fields=[], - bbox3d_fields=[], - pts_mask_fields=[], - pts_seg_fields=[], - bbox_fields=[], - mask_fields=[], - seg_fields=[]) - else: - # load from http - data = dict( - points=pcd, - box_type_3d=box_type_3d, - box_mode_3d=box_mode_3d, - # for ScanNet demo we need axis_align_matrix - ann_info=dict(axis_align_matrix=np.eye(4)), - sweeps=[], - # set timestamp = 0 - timestamp=[0], - img_fields=[], - bbox3d_fields=[], - pts_mask_fields=[], - pts_seg_fields=[], - bbox_fields=[], - mask_fields=[], - seg_fields=[]) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device.index])[0] - else: - # this is a workaround to avoid the bug of MMDataParallel - data['img_metas'] = data['img_metas'][0].data - data['points'] = data['points'][0].data - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result, data - - -def inference_multi_modality_detector(model, pcd, image, ann_file): - """Inference point cloud with the multi-modality detector. - - Args: - model (nn.Module): The loaded detector. - pcd (str): Point cloud files. - image (str): Image files. - ann_file (str): Annotation files. - - Returns: - tuple: Predicted results and data from pipeline. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = deepcopy(cfg.data.test.pipeline) - test_pipeline = Compose(test_pipeline) - box_type_3d, box_mode_3d = get_box_type(cfg.data.test.box_type_3d) - # get data info containing calib - data_infos = mmcv.load(ann_file) - image_idx = int(re.findall(r'\d+', image)[-1]) # xxx/sunrgbd_000017.jpg - for x in data_infos: - if int(x['image']['image_idx']) != image_idx: - continue - info = x - break - data = dict( - pts_filename=pcd, - img_prefix=osp.dirname(image), - img_info=dict(filename=osp.basename(image)), - box_type_3d=box_type_3d, - box_mode_3d=box_mode_3d, - img_fields=[], - bbox3d_fields=[], - pts_mask_fields=[], - pts_seg_fields=[], - bbox_fields=[], - mask_fields=[], - seg_fields=[]) - data = test_pipeline(data) - - # TODO: this code is dataset-specific. Move lidar2img and - # depth2img to .pkl annotations in the future. 
- # LiDAR to image conversion - if box_mode_3d == Box3DMode.LIDAR: - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P2 = info['calib']['P2'].astype(np.float32) - lidar2img = P2 @ rect @ Trv2c - data['img_metas'][0].data['lidar2img'] = lidar2img - # Depth to image conversion - elif box_mode_3d == Box3DMode.DEPTH: - rt_mat = info['calib']['Rt'] - # follow Coord3DMode.convert_point - rt_mat = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0] - ]) @ rt_mat.transpose(1, 0) - depth2img = info['calib']['K'] @ rt_mat - data['img_metas'][0].data['depth2img'] = depth2img - - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device.index])[0] - else: - # this is a workaround to avoid the bug of MMDataParallel - data['img_metas'] = data['img_metas'][0].data - data['points'] = data['points'][0].data - data['img'] = data['img'][0].data - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result, data - - -def inference_mono_3d_detector(model, image, ann_file): - """Inference image with the monocular 3D detector. - - Args: - model (nn.Module): The loaded detector. - image (str): Image files. - ann_file (str): Annotation files. - - Returns: - tuple: Predicted results and data from pipeline. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = deepcopy(cfg.data.test.pipeline) - test_pipeline = Compose(test_pipeline) - box_type_3d, box_mode_3d = get_box_type(cfg.data.test.box_type_3d) - # get data info containing calib - data_infos = mmcv.load(ann_file) - # find the info corresponding to this image - for x in data_infos['images']: - if osp.basename(x['file_name']) != osp.basename(image): - continue - img_info = x - break - data = dict( - img_prefix=osp.dirname(image), - img_info=dict(filename=osp.basename(image)), - box_type_3d=box_type_3d, - box_mode_3d=box_mode_3d, - img_fields=[], - bbox3d_fields=[], - pts_mask_fields=[], - pts_seg_fields=[], - bbox_fields=[], - mask_fields=[], - seg_fields=[]) - - # camera points to image conversion - if box_mode_3d == Box3DMode.CAM: - data['img_info'].update(dict(cam_intrinsic=img_info['cam_intrinsic'])) - - data = test_pipeline(data) - - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device.index])[0] - else: - # this is a workaround to avoid the bug of MMDataParallel - data['img_metas'] = data['img_metas'][0].data - data['img'] = data['img'][0].data - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result, data - - -def inference_segmentor(model, pcd): - """Inference point cloud with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - pcd (str): Point cloud files. - - Returns: - tuple: Predicted results and data from pipeline. 
- """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = deepcopy(cfg.data.test.pipeline) - test_pipeline = Compose(test_pipeline) - data = dict( - pts_filename=pcd, - img_fields=[], - bbox3d_fields=[], - pts_mask_fields=[], - pts_seg_fields=[], - bbox_fields=[], - mask_fields=[], - seg_fields=[]) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device.index])[0] - else: - # this is a workaround to avoid the bug of MMDataParallel - data['img_metas'] = data['img_metas'][0].data - data['points'] = data['points'][0].data - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result, data - - -def show_det_result_meshlab(data, - result, - out_dir, - score_thr=0.0, - show=False, - snapshot=False): - """Show 3D detection result by meshlab.""" - points = data['points'][0][0].cpu().numpy() - pts_filename = data['img_metas'][0][0]['pts_filename'] - file_name = osp.split(pts_filename)[-1].split('.')[0] - - if 'pts_bbox' in result[0].keys(): - pred_bboxes = result[0]['pts_bbox']['boxes_3d'].tensor.numpy() - pred_scores = result[0]['pts_bbox']['scores_3d'].numpy() - else: - pred_bboxes = result[0]['boxes_3d'].tensor.numpy() - pred_scores = result[0]['scores_3d'].numpy() - - # filter out low score bboxes for visualization - if score_thr > 0: - inds = pred_scores > score_thr - pred_bboxes = pred_bboxes[inds] - - # for now we convert points into depth mode - box_mode = data['img_metas'][0][0]['box_mode_3d'] - if box_mode != Box3DMode.DEPTH: - points = Coord3DMode.convert(points, box_mode, Coord3DMode.DEPTH) - show_bboxes = Box3DMode.convert(pred_bboxes, box_mode, Box3DMode.DEPTH) - else: - show_bboxes = deepcopy(pred_bboxes) - - show_result( - points, - None, - show_bboxes, - out_dir, - file_name, - show=show, - snapshot=snapshot) - - return file_name - - -def show_seg_result_meshlab(data, - result, - out_dir, - palette, - show=False, - snapshot=False): - """Show 3D segmentation result by meshlab.""" - points = data['points'][0][0].cpu().numpy() - pts_filename = data['img_metas'][0][0]['pts_filename'] - file_name = osp.split(pts_filename)[-1].split('.')[0] - - pred_seg = result[0]['semantic_mask'].numpy() - - if palette is None: - # generate random color map - max_idx = pred_seg.max() - palette = np.random.randint(0, 256, size=(max_idx + 1, 3)) - palette = np.array(palette).astype(np.int) - - show_seg_result( - points, - None, - pred_seg, - out_dir, - file_name, - palette=palette, - show=show, - snapshot=snapshot) - - return file_name - - -def show_proj_det_result_meshlab(data, - result, - out_dir, - score_thr=0.0, - show=False, - snapshot=False): - """Show result of projecting 3D bbox to 2D image by meshlab.""" - assert 'img' in data.keys(), 'image data is not provided for visualization' - - img_filename = data['img_metas'][0][0]['filename'] - file_name = osp.split(img_filename)[-1].split('.')[0] - - # read from file because img in data_dict has undergone pipeline transform - img = mmcv.imread(img_filename) - - if 'pts_bbox' in result[0].keys(): - result[0] = result[0]['pts_bbox'] - elif 'img_bbox' in result[0].keys(): - result[0] = result[0]['img_bbox'] - pred_bboxes = result[0]['boxes_3d'].tensor.numpy() - pred_scores = result[0]['scores_3d'].numpy() - - # filter out low score bboxes for visualization - if score_thr > 0: - inds = pred_scores > score_thr - 
pred_bboxes = pred_bboxes[inds] - - box_mode = data['img_metas'][0][0]['box_mode_3d'] - if box_mode == Box3DMode.LIDAR: - if 'lidar2img' not in data['img_metas'][0][0]: - raise NotImplementedError( - 'LiDAR to image transformation matrix is not provided') - - show_bboxes = LiDARInstance3DBoxes(pred_bboxes, origin=(0.5, 0.5, 0)) - - show_multi_modality_result( - img, - None, - show_bboxes, - data['img_metas'][0][0]['lidar2img'], - out_dir, - file_name, - box_mode='lidar', - show=show) - elif box_mode == Box3DMode.DEPTH: - show_bboxes = DepthInstance3DBoxes(pred_bboxes, origin=(0.5, 0.5, 0)) - - show_multi_modality_result( - img, - None, - show_bboxes, - None, - out_dir, - file_name, - box_mode='depth', - img_metas=data['img_metas'][0][0], - show=show) - elif box_mode == Box3DMode.CAM: - if 'cam2img' not in data['img_metas'][0][0]: - raise NotImplementedError( - 'camera intrinsic matrix is not provided') - - show_bboxes = CameraInstance3DBoxes( - pred_bboxes, box_dim=pred_bboxes.shape[-1], origin=(0.5, 1.0, 0.5)) - - show_multi_modality_result( - img, - None, - show_bboxes, - data['img_metas'][0][0]['cam2img'], - out_dir, - file_name, - box_mode='camera', - show=show) - else: - raise NotImplementedError( - f'visualization of {box_mode} bbox is not supported') - - return file_name - - -def show_result_meshlab(data, - result, - out_dir, - score_thr=0.0, - show=False, - snapshot=False, - task='det', - palette=None): - """Show result by meshlab. - - Args: - data (dict): Contain data from pipeline. - result (dict): Predicted result from model. - out_dir (str): Directory to save visualized result. - score_thr (float, optional): Minimum score of bboxes to be shown. - Default: 0.0 - show (bool, optional): Visualize the results online. Defaults to False. - snapshot (bool, optional): Whether to save the online results. - Defaults to False. - task (str, optional): Distinguish which task result to visualize. - Currently we support 3D detection, multi-modality detection and - 3D segmentation. Defaults to 'det'. - palette (list[list[int]]] | np.ndarray, optional): The palette - of segmentation map. If None is given, random palette will be - generated. Defaults to None. - """ - assert task in ['det', 'multi_modality-det', 'seg', 'mono-det'], \ - f'unsupported visualization task {task}' - assert out_dir is not None, 'Expect out_dir, got none.' - - if task in ['det', 'multi_modality-det']: - file_name = show_det_result_meshlab(data, result, out_dir, score_thr, - show, snapshot) - - if task in ['seg']: - file_name = show_seg_result_meshlab(data, result, out_dir, palette, - show, snapshot) - - if task in ['multi_modality-det', 'mono-det']: - file_name = show_proj_det_result_meshlab(data, result, out_dir, - score_thr, show, snapshot) - - return out_dir, file_name diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/apis/test.py b/cv/3d_detection/paconv/pytorch/mmdet3d/apis/test.py deleted file mode 100644 index c0e66c07..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/apis/test.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -import mmcv -import torch -from mmcv.image import tensor2imgs - -from mmdet3d.models import (Base3DDetector, Base3DSegmentor, - SingleStageMono3DDetector) - - -def single_gpu_test(model, - data_loader, - show=False, - out_dir=None, - show_score_thr=0.3): - """Test model with single gpu. - - This method tests model with single gpu and gives the 'show' option. 
- By setting ``show=True``, it saves the visualization results under - ``out_dir``. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - show (bool, optional): Whether to save viualization results. - Default: True. - out_dir (str, optional): The path to save visualization results. - Default: None. - - Returns: - list[dict]: The prediction results. - """ - model.eval() - results = [] - dataset = data_loader.dataset - prog_bar = mmcv.ProgressBar(len(dataset)) - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - - if show: - # Visualize the results of MMDetection3D model - # 'show_results' is MMdetection3D visualization API - models_3d = (Base3DDetector, Base3DSegmentor, - SingleStageMono3DDetector) - if isinstance(model.module, models_3d): - model.module.show_results( - data, - result, - out_dir=out_dir, - show=show, - score_thr=show_score_thr) - # Visualize the results of MMDetection model - # 'show_result' is MMdetection visualization API - else: - batch_size = len(result) - if batch_size == 1 and isinstance(data['img'][0], - torch.Tensor): - img_tensor = data['img'][0] - else: - img_tensor = data['img'][0].data[0] - img_metas = data['img_metas'][0].data[0] - imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg']) - assert len(imgs) == len(img_metas) - - for i, (img, img_meta) in enumerate(zip(imgs, img_metas)): - h, w, _ = img_meta['img_shape'] - img_show = img[:h, :w, :] - - ori_h, ori_w = img_meta['ori_shape'][:-1] - img_show = mmcv.imresize(img_show, (ori_w, ori_h)) - - if out_dir: - out_file = osp.join(out_dir, img_meta['ori_filename']) - else: - out_file = None - - model.module.show_result( - img_show, - result[i], - show=show, - out_file=out_file, - score_thr=show_score_thr) - results.extend(result) - - batch_size = len(result) - for _ in range(batch_size): - prog_bar.update() - return results diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/apis/train.py b/cv/3d_detection/paconv/pytorch/mmdet3d/apis/train.py deleted file mode 100644 index 4d970264..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/apis/train.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import random -import warnings - -import numpy as np -import torch -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner, - Fp16OptimizerHook, OptimizerHook, build_optimizer, - build_runner, get_dist_info) -from mmcv.utils import build_from_cfg -from torch import distributed as dist - -from mmdet3d.datasets import build_dataset -from mmdet3d.utils import find_latest_checkpoint -from mmdet.core import DistEvalHook as MMDET_DistEvalHook -from mmdet.core import EvalHook as MMDET_EvalHook -from mmdet.datasets import build_dataloader as build_mmdet_dataloader -from mmdet.datasets import replace_ImageToTensor -from mmdet.utils import get_root_logger as get_mmdet_root_logger -from mmseg.core import DistEvalHook as MMSEG_DistEvalHook -from mmseg.core import EvalHook as MMSEG_EvalHook -from mmseg.datasets import build_dataloader as build_mmseg_dataloader -from mmseg.utils import get_root_logger as get_mmseg_root_logger - - -def init_random_seed(seed=None, device='cuda'): - """Initialize random seed. - - If the seed is not set, the seed will be automatically randomized, - and then broadcast to all processes to prevent some potential bugs. - Args: - seed (int, optional): The seed. 
Default to None. - device (str, optional): The device where the seed will be put on. - Default to 'cuda'. - Returns: - int: Seed to be used. - """ - if seed is not None: - return seed - - # Make sure all ranks share the same random seed to prevent - # some potential bugs. Please refer to - # https://github.com/open-mmlab/mmdetection/issues/6339 - rank, world_size = get_dist_info() - seed = np.random.randint(2**31) - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def train_segmentor(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - """Launch segmentor training.""" - logger = get_mmseg_root_logger(cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - data_loaders = [ - build_mmseg_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # cfg.gpus will be ignored if distributed - len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - drop_last=True) for ds in dataset - ] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - # build runner - optimizer = build_optimizer(model, cfg.optimizer) - - if cfg.get('runner') is None: - cfg.runner = {'type': 'IterBasedRunner', 'max_iters': cfg.total_iters} - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - batch_processor=None, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # register hooks - runner.register_training_hooks(cfg.lr_config, cfg.optimizer_config, - cfg.checkpoint_config, cfg.log_config, - cfg.get('momentum_config', None)) - - # an ugly walkaround to make the .log and .log.json filenames the same - runner.timestamp = timestamp - - # register eval hooks - if validate: - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_mmseg_dataloader( - val_dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' - eval_hook = MMSEG_DistEvalHook if distributed else MMSEG_EvalHook - # In this PR (https://github.com/open-mmlab/mmcv/pull/1193), the - # priority of IterTimerHook has been modified from 
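The removed `init_random_seed` makes all ranks agree on one random seed by letting rank 0 draw it and broadcasting the value. A minimal sketch of the same pattern that also runs in a single process (the `device='cpu'` default here is a simplification of the original `'cuda'`):

```python
# Rank 0 draws a seed and broadcasts it so every process seeds identically;
# falls back gracefully when torch.distributed is not initialized.
import numpy as np
import torch
import torch.distributed as dist


def init_random_seed(seed=None, device='cpu'):
    if seed is not None:
        return seed
    seed = np.random.randint(2**31)
    if not (dist.is_available() and dist.is_initialized()):
        return seed                       # single-process run
    rank = dist.get_rank()
    tensor = torch.tensor(seed if rank == 0 else 0,
                          dtype=torch.int32, device=device)
    dist.broadcast(tensor, src=0)         # all ranks now hold rank 0's seed
    return int(tensor.item())


print(init_random_seed())
```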
'NORMAL' to 'LOW'. - runner.register_hook( - eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - # user-defined hooks - if cfg.get('custom_hooks', None): - custom_hooks = cfg.custom_hooks - assert isinstance(custom_hooks, list), \ - f'custom_hooks expect list type, but got {type(custom_hooks)}' - for hook_cfg in cfg.custom_hooks: - assert isinstance(hook_cfg, dict), \ - 'Each item in custom_hooks expects dict type, but got ' \ - f'{type(hook_cfg)}' - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = build_from_cfg(hook_cfg, HOOKS) - runner.register_hook(hook, priority=priority) - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) - - -def train_detector(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - logger = get_mmdet_root_logger(log_level=cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - if 'imgs_per_gpu' in cfg.data: - logger.warning('"imgs_per_gpu" is deprecated in MMDet V2.0. ' - 'Please use "samples_per_gpu" instead') - if 'samples_per_gpu' in cfg.data: - logger.warning( - f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and ' - f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"' - f'={cfg.data.imgs_per_gpu} is used in this experiments') - else: - logger.warning( - 'Automatically set "samples_per_gpu"="imgs_per_gpu"=' - f'{cfg.data.imgs_per_gpu} in this experiments') - cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu - - runner_type = 'EpochBasedRunner' if 'runner' not in cfg else cfg.runner[ - 'type'] - data_loaders = [ - build_mmdet_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # `num_gpus` will be ignored if distributed - num_gpus=len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - runner_type=runner_type, - persistent_workers=cfg.data.get('persistent_workers', False)) - for ds in dataset - ] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - # build runner - optimizer = build_optimizer(model, cfg.optimizer) - - if 'runner' not in cfg: - cfg.runner = { - 'type': 'EpochBasedRunner', - 'max_epochs': cfg.total_epochs - } - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - else: - if 'total_epochs' in cfg: - assert cfg.total_epochs == cfg.runner.max_epochs - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # an ugly workaround to make .log and .log.json filenames the same - runner.timestamp = timestamp - - # fp16 setting - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - optimizer_config = Fp16OptimizerHook( - **cfg.optimizer_config, **fp16_cfg, distributed=distributed) - elif distributed and 'type' not in cfg.optimizer_config: - optimizer_config = OptimizerHook(**cfg.optimizer_config) - else: - optimizer_config = cfg.optimizer_config - - # register hooks - 
runner.register_training_hooks( - cfg.lr_config, - optimizer_config, - cfg.checkpoint_config, - cfg.log_config, - cfg.get('momentum_config', None), - custom_hooks_config=cfg.get('custom_hooks', None)) - - if distributed: - if isinstance(runner, EpochBasedRunner): - runner.register_hook(DistSamplerSeedHook()) - - # register eval hooks - if validate: - # Support batch_size > 1 in validation - val_samples_per_gpu = cfg.data.val.pop('samples_per_gpu', 1) - if val_samples_per_gpu > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.val.pipeline = replace_ImageToTensor( - cfg.data.val.pipeline) - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_mmdet_dataloader( - val_dataset, - samples_per_gpu=val_samples_per_gpu, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' - eval_hook = MMDET_DistEvalHook if distributed else MMDET_EvalHook - # In this PR (https://github.com/open-mmlab/mmcv/pull/1193), the - # priority of IterTimerHook has been modified from 'NORMAL' to 'LOW'. - runner.register_hook( - eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - resume_from = None - if cfg.resume_from is None and cfg.get('auto_resume'): - resume_from = find_latest_checkpoint(cfg.work_dir) - - if resume_from is not None: - cfg.resume_from = resume_from - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) - - -def train_model(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - """A function wrapper for launching model training according to cfg. - - Because we need different eval_hook in runner. Should be deprecated in the - future. - """ - if cfg.model.type in ['EncoderDecoder3D']: - train_segmentor( - model, - dataset, - cfg, - distributed=distributed, - validate=validate, - timestamp=timestamp, - meta=meta) - else: - train_detector( - model, - dataset, - cfg, - distributed=distributed, - validate=validate, - timestamp=timestamp, - meta=meta) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/__init__.py deleted file mode 100644 index ffb0c1ac..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .anchor import * # noqa: F401, F403 -from .bbox import * # noqa: F401, F403 -from .evaluation import * # noqa: F401, F403 -from .points import * # noqa: F401, F403 -from .post_processing import * # noqa: F401, F403 -from .utils import * # noqa: F401, F403 -from .visualizer import * # noqa: F401, F403 -from .voxel import * # noqa: F401, F403 diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/anchor/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/anchor/__init__.py deleted file mode 100644 index 7a34bf56..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/anchor/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.core.anchor import build_prior_generator -from .anchor_3d_generator import (AlignedAnchor3DRangeGenerator, - AlignedAnchor3DRangeGeneratorPerCls, - Anchor3DRangeGenerator) - -__all__ = [ - 'AlignedAnchor3DRangeGenerator', 'Anchor3DRangeGenerator', - 'build_prior_generator', 'AlignedAnchor3DRangeGeneratorPerCls' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/anchor/anchor_3d_generator.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/anchor/anchor_3d_generator.py deleted file mode 100644 index e8681b71..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/anchor/anchor_3d_generator.py +++ /dev/null @@ -1,419 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch - -from mmdet.core.anchor import ANCHOR_GENERATORS - - -@ANCHOR_GENERATORS.register_module() -class Anchor3DRangeGenerator(object): - """3D Anchor Generator by range. - - This anchor generator generates anchors by the given range in different - feature levels. - Due the convention in 3D detection, different anchor sizes are related to - different ranges for different categories. However we find this setting - does not effect the performance much in some datasets, e.g., nuScenes. - - Args: - ranges (list[list[float]]): Ranges of different anchors. - The ranges are the same across different feature levels. But may - vary for different anchor sizes if size_per_range is True. - sizes (list[list[float]], optional): 3D sizes of anchors. - Defaults to [[3.9, 1.6, 1.56]]. - scales (list[int], optional): Scales of anchors in different feature - levels. Defaults to [1]. - rotations (list[float], optional): Rotations of anchors in a feature - grid. Defaults to [0, 1.5707963]. - custom_values (tuple[float], optional): Customized values of that - anchor. For example, in nuScenes the anchors have velocities. - Defaults to (). - reshape_out (bool, optional): Whether to reshape the output into - (N x 4). Defaults to True. - size_per_range (bool, optional): Whether to use separate ranges for - different sizes. If size_per_range is True, the ranges should have - the same length as the sizes, if not, it will be duplicated. - Defaults to True. 
- """ - - def __init__(self, - ranges, - sizes=[[3.9, 1.6, 1.56]], - scales=[1], - rotations=[0, 1.5707963], - custom_values=(), - reshape_out=True, - size_per_range=True): - assert mmcv.is_list_of(ranges, list) - if size_per_range: - if len(sizes) != len(ranges): - assert len(ranges) == 1 - ranges = ranges * len(sizes) - assert len(ranges) == len(sizes) - else: - assert len(ranges) == 1 - assert mmcv.is_list_of(sizes, list) - assert isinstance(scales, list) - - self.sizes = sizes - self.scales = scales - self.ranges = ranges - self.rotations = rotations - self.custom_values = custom_values - self.cached_anchors = None - self.reshape_out = reshape_out - self.size_per_range = size_per_range - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += f'anchor_range={self.ranges},\n' - s += f'scales={self.scales},\n' - s += f'sizes={self.sizes},\n' - s += f'rotations={self.rotations},\n' - s += f'reshape_out={self.reshape_out},\n' - s += f'size_per_range={self.size_per_range})' - return s - - @property - def num_base_anchors(self): - """list[int]: Total number of base anchors in a feature grid.""" - num_rot = len(self.rotations) - num_size = torch.tensor(self.sizes).reshape(-1, 3).size(0) - return num_rot * num_size - - @property - def num_levels(self): - """int: Number of feature levels that the generator is applied to.""" - return len(self.scales) - - def grid_anchors(self, featmap_sizes, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - device (str, optional): Device where the anchors will be put on. - Defaults to 'cuda'. - - Returns: - list[torch.Tensor]: Anchors in multiple feature levels. - The sizes of each tensor should be [N, 4], where - N = width * height * num_base_anchors, width and height - are the sizes of the corresponding feature level, - num_base_anchors is the number of anchors for that level. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_anchors( - featmap_sizes[i], self.scales[i], device=device) - if self.reshape_out: - anchors = anchors.reshape(-1, anchors.size(-1)) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_anchors(self, featmap_size, scale, device='cuda'): - """Generate grid anchors of a single level feature map. - - This function is usually called by method ``self.grid_anchors``. - - Args: - featmap_size (tuple[int]): Size of the feature map. - scale (float): Scale factor of the anchors in the current level. - device (str, optional): Device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature map. 
- """ - # We reimplement the anchor generator using torch in cuda - # torch: 0.6975 s for 1000 times - # numpy: 4.3345 s for 1000 times - # which is ~5 times faster than the numpy implementation - if not self.size_per_range: - return self.anchors_single_range( - featmap_size, - self.ranges[0], - scale, - self.sizes, - self.rotations, - device=device) - - mr_anchors = [] - for anchor_range, anchor_size in zip(self.ranges, self.sizes): - mr_anchors.append( - self.anchors_single_range( - featmap_size, - anchor_range, - scale, - anchor_size, - self.rotations, - device=device)) - mr_anchors = torch.cat(mr_anchors, dim=-3) - return mr_anchors - - def anchors_single_range(self, - feature_size, - anchor_range, - scale=1, - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.5707963], - device='cuda'): - """Generate anchors in a single range. - - Args: - feature_size (list[float] | tuple[float]): Feature map size. It is - either a list of a tuple of [D, H, W](in order of z, y, and x). - anchor_range (torch.Tensor | list[float]): Range of anchors with - shape [6]. The order is consistent with that of anchors, i.e., - (x_min, y_min, z_min, x_max, y_max, z_max). - scale (float | int, optional): The scale factor of anchors. - Defaults to 1. - sizes (list[list] | np.ndarray | torch.Tensor, optional): - Anchor size with shape [N, 3], in order of x, y, z. - Defaults to [[3.9, 1.6, 1.56]]. - rotations (list[float] | np.ndarray | torch.Tensor, optional): - Rotations of anchors in a single feature grid. - Defaults to [0, 1.5707963]. - device (str): Devices that the anchors will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors with shape - [*feature_size, num_sizes, num_rots, 7]. - """ - if len(feature_size) == 2: - feature_size = [1, feature_size[0], feature_size[1]] - anchor_range = torch.tensor(anchor_range, device=device) - z_centers = torch.linspace( - anchor_range[2], anchor_range[5], feature_size[0], device=device) - y_centers = torch.linspace( - anchor_range[1], anchor_range[4], feature_size[1], device=device) - x_centers = torch.linspace( - anchor_range[0], anchor_range[3], feature_size[2], device=device) - sizes = torch.tensor(sizes, device=device).reshape(-1, 3) * scale - rotations = torch.tensor(rotations, device=device) - - # torch.meshgrid default behavior is 'id', np's default is 'xy' - rets = torch.meshgrid(x_centers, y_centers, z_centers, rotations) - # torch.meshgrid returns a tuple rather than list - rets = list(rets) - tile_shape = [1] * 5 - tile_shape[-2] = int(sizes.shape[0]) - for i in range(len(rets)): - rets[i] = rets[i].unsqueeze(-2).repeat(tile_shape).unsqueeze(-1) - - sizes = sizes.reshape([1, 1, 1, -1, 1, 3]) - tile_size_shape = list(rets[0].shape) - tile_size_shape[3] = 1 - sizes = sizes.repeat(tile_size_shape) - rets.insert(3, sizes) - - ret = torch.cat(rets, dim=-1).permute([2, 1, 0, 3, 4, 5]) - # [1, 200, 176, N, 2, 7] for kitti after permute - - if len(self.custom_values) > 0: - custom_ndim = len(self.custom_values) - custom = ret.new_zeros([*ret.shape[:-1], custom_ndim]) - # custom[:] = self.custom_values - ret = torch.cat([ret, custom], dim=-1) - # [1, 200, 176, N, 2, 9] for nus dataset after permute - return ret - - -@ANCHOR_GENERATORS.register_module() -class AlignedAnchor3DRangeGenerator(Anchor3DRangeGenerator): - """Aligned 3D Anchor Generator by range. - - This anchor generator uses a different manner to generate the positions - of anchors' centers from :class:`Anchor3DRangeGenerator`. 
- - Note: - The `align` means that the anchor's center is aligned with the voxel - grid, which is also the feature grid. The previous implementation of - :class:`Anchor3DRangeGenerator` does not generate the anchors' center - according to the voxel grid. Rather, it generates the center by - uniformly distributing the anchors inside the minimum and maximum - anchor ranges according to the feature map sizes. - However, this makes the anchors center does not match the feature grid. - The :class:`AlignedAnchor3DRangeGenerator` add + 1 when using the - feature map sizes to obtain the corners of the voxel grid. Then it - shifts the coordinates to the center of voxel grid and use the left - up corner to distribute anchors. - - Args: - anchor_corner (bool, optional): Whether to align with the corner of the - voxel grid. By default it is False and the anchor's center will be - the same as the corresponding voxel's center, which is also the - center of the corresponding greature grid. Defaults to False. - """ - - def __init__(self, align_corner=False, **kwargs): - super(AlignedAnchor3DRangeGenerator, self).__init__(**kwargs) - self.align_corner = align_corner - - def anchors_single_range(self, - feature_size, - anchor_range, - scale, - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.5707963], - device='cuda'): - """Generate anchors in a single range. - - Args: - feature_size (list[float] | tuple[float]): Feature map size. It is - either a list of a tuple of [D, H, W](in order of z, y, and x). - anchor_range (torch.Tensor | list[float]): Range of anchors with - shape [6]. The order is consistent with that of anchors, i.e., - (x_min, y_min, z_min, x_max, y_max, z_max). - scale (float | int): The scale factor of anchors. - sizes (list[list] | np.ndarray | torch.Tensor, optional): - Anchor size with shape [N, 3], in order of x, y, z. - Defaults to [[3.9, 1.6, 1.56]]. - rotations (list[float] | np.ndarray | torch.Tensor, optional): - Rotations of anchors in a single feature grid. - Defaults to [0, 1.5707963]. - device (str, optional): Devices that the anchors will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors with shape - [*feature_size, num_sizes, num_rots, 7]. 
- """ - if len(feature_size) == 2: - feature_size = [1, feature_size[0], feature_size[1]] - anchor_range = torch.tensor(anchor_range, device=device) - z_centers = torch.linspace( - anchor_range[2], - anchor_range[5], - feature_size[0] + 1, - device=device) - y_centers = torch.linspace( - anchor_range[1], - anchor_range[4], - feature_size[1] + 1, - device=device) - x_centers = torch.linspace( - anchor_range[0], - anchor_range[3], - feature_size[2] + 1, - device=device) - sizes = torch.tensor(sizes, device=device).reshape(-1, 3) * scale - rotations = torch.tensor(rotations, device=device) - - # shift the anchor center - if not self.align_corner: - z_shift = (z_centers[1] - z_centers[0]) / 2 - y_shift = (y_centers[1] - y_centers[0]) / 2 - x_shift = (x_centers[1] - x_centers[0]) / 2 - z_centers += z_shift - y_centers += y_shift - x_centers += x_shift - - # torch.meshgrid default behavior is 'id', np's default is 'xy' - rets = torch.meshgrid(x_centers[:feature_size[2]], - y_centers[:feature_size[1]], - z_centers[:feature_size[0]], rotations) - - # torch.meshgrid returns a tuple rather than list - rets = list(rets) - tile_shape = [1] * 5 - tile_shape[-2] = int(sizes.shape[0]) - for i in range(len(rets)): - rets[i] = rets[i].unsqueeze(-2).repeat(tile_shape).unsqueeze(-1) - - sizes = sizes.reshape([1, 1, 1, -1, 1, 3]) - tile_size_shape = list(rets[0].shape) - tile_size_shape[3] = 1 - sizes = sizes.repeat(tile_size_shape) - rets.insert(3, sizes) - - ret = torch.cat(rets, dim=-1).permute([2, 1, 0, 3, 4, 5]) - - if len(self.custom_values) > 0: - custom_ndim = len(self.custom_values) - custom = ret.new_zeros([*ret.shape[:-1], custom_ndim]) - # TODO: check the support of custom values - # custom[:] = self.custom_values - ret = torch.cat([ret, custom], dim=-1) - return ret - - -@ANCHOR_GENERATORS.register_module() -class AlignedAnchor3DRangeGeneratorPerCls(AlignedAnchor3DRangeGenerator): - """3D Anchor Generator by range for per class. - - This anchor generator generates anchors by the given range for per class. - Note that feature maps of different classes may be different. - - Args: - kwargs (dict): Arguments are the same as those in - :class:`AlignedAnchor3DRangeGenerator`. - """ - - def __init__(self, **kwargs): - super(AlignedAnchor3DRangeGeneratorPerCls, self).__init__(**kwargs) - assert len(self.scales) == 1, 'Multi-scale feature map levels are' + \ - ' not supported currently in this kind of anchor generator.' - - def grid_anchors(self, featmap_sizes, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes for - different classes in a single feature level. - device (str, optional): Device where the anchors will be put on. - Defaults to 'cuda'. - - Returns: - list[list[torch.Tensor]]: Anchors in multiple feature levels. - Note that in this anchor generator, we currently only - support single feature level. The sizes of each tensor - should be [num_sizes/ranges*num_rots*featmap_size, - box_code_size]. - """ - multi_level_anchors = [] - anchors = self.multi_cls_grid_anchors( - featmap_sizes, self.scales[0], device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def multi_cls_grid_anchors(self, featmap_sizes, scale, device='cuda'): - """Generate grid anchors of a single level feature map for multi-class - with different feature map sizes. - - This function is usually called by method ``self.grid_anchors``. 
- - Args: - featmap_sizes (list[tuple]): List of feature map sizes for - different classes in a single feature level. - scale (float): Scale factor of the anchors in the current level. - device (str, optional): Device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature map. - """ - assert len(featmap_sizes) == len(self.sizes) == len(self.ranges), \ - 'The number of different feature map sizes anchor sizes and ' + \ - 'ranges should be the same.' - - multi_cls_anchors = [] - for i in range(len(featmap_sizes)): - anchors = self.anchors_single_range( - featmap_sizes[i], - self.ranges[i], - scale, - self.sizes[i], - self.rotations, - device=device) - # [*featmap_size, num_sizes/ranges, num_rots, box_code_size] - ndim = len(featmap_sizes[i]) - anchors = anchors.view(*featmap_sizes[i], -1, anchors.size(-1)) - # [*featmap_size, num_sizes/ranges*num_rots, box_code_size] - anchors = anchors.permute(ndim, *range(0, ndim), ndim + 1) - # [num_sizes/ranges*num_rots, *featmap_size, box_code_size] - multi_cls_anchors.append(anchors.reshape(-1, anchors.size(-1))) - # [num_sizes/ranges*num_rots*featmap_size, box_code_size] - return multi_cls_anchors diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/__init__.py deleted file mode 100644 index 8c666306..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .assigners import AssignResult, BaseAssigner, MaxIoUAssigner -from .coders import DeltaXYZWLHRBBoxCoder -# from .bbox_target import bbox_target -from .iou_calculators import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D, - BboxOverlapsNearest3D, - axis_aligned_bbox_overlaps_3d, bbox_overlaps_3d, - bbox_overlaps_nearest_3d) -from .samplers import (BaseSampler, CombinedSampler, - InstanceBalancedPosSampler, IoUBalancedNegSampler, - PseudoSampler, RandomSampler, SamplingResult) -from .structures import (BaseInstance3DBoxes, Box3DMode, CameraInstance3DBoxes, - Coord3DMode, DepthInstance3DBoxes, - LiDARInstance3DBoxes, get_box_type, limit_period, - mono_cam_box2vis, points_cam2img, points_img2cam, - xywhr2xyxyr) -from .transforms import bbox3d2result, bbox3d2roi, bbox3d_mapping_back - -__all__ = [ - 'BaseSampler', 'AssignResult', 'BaseAssigner', 'MaxIoUAssigner', - 'PseudoSampler', 'RandomSampler', 'InstanceBalancedPosSampler', - 'IoUBalancedNegSampler', 'CombinedSampler', 'SamplingResult', - 'DeltaXYZWLHRBBoxCoder', 'BboxOverlapsNearest3D', 'BboxOverlaps3D', - 'bbox_overlaps_nearest_3d', 'bbox_overlaps_3d', - 'AxisAlignedBboxOverlaps3D', 'axis_aligned_bbox_overlaps_3d', 'Box3DMode', - 'LiDARInstance3DBoxes', 'CameraInstance3DBoxes', 'bbox3d2roi', - 'bbox3d2result', 'DepthInstance3DBoxes', 'BaseInstance3DBoxes', - 'bbox3d_mapping_back', 'xywhr2xyxyr', 'limit_period', 'points_cam2img', - 'points_img2cam', 'get_box_type', 'Coord3DMode', 'mono_cam_box2vis' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/assigners/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/assigners/__init__.py deleted file mode 100644 index d1493687..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/assigners/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
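For reference, a minimal sketch of how a range-based 3D anchor grid like the one produced by `Anchor3DRangeGenerator` above is laid out; the feature-map size, range and anchor size below are made-up KITTI-like values, not taken from any config in this patch.

```python
# Self-contained sketch (plain NumPy) of range-based anchor layout.
import numpy as np

feature_size = (1, 200, 176)                         # (D, H, W) in z, y, x order
anchor_range = [0.0, -40.0, -3.0, 70.4, 40.0, 1.0]   # x_min, y_min, z_min, x_max, y_max, z_max
sizes = np.array([[3.9, 1.6, 1.56]])                 # one anchor size (x, y, z)
rotations = np.array([0.0, np.pi / 2])               # two yaw bins

# anchor centers are evenly spaced inside the range, one per feature cell
z_centers = np.linspace(anchor_range[2], anchor_range[5], feature_size[0])
y_centers = np.linspace(anchor_range[1], anchor_range[4], feature_size[1])
x_centers = np.linspace(anchor_range[0], anchor_range[3], feature_size[2])

# every (x, y, z, size, rotation) combination yields one 7-value anchor
num_anchors = x_centers.size * y_centers.size * z_centers.size \
    * sizes.shape[0] * rotations.size
print(num_anchors)  # 176 * 200 * 1 * 1 * 2 = 70400
```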
-from mmdet.core.bbox import AssignResult, BaseAssigner, MaxIoUAssigner - -__all__ = ['BaseAssigner', 'MaxIoUAssigner', 'AssignResult'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/box_np_ops.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/box_np_ops.py deleted file mode 100644 index bb52bbbf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/box_np_ops.py +++ /dev/null @@ -1,827 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# TODO: clean the functions in this file and move the APIs into box structures -# in the future -# NOTICE: All functions in this file are valid for LiDAR or depth boxes only -# if we use default parameters. - -import numba -import numpy as np - -from .structures.utils import limit_period, points_cam2img, rotation_3d_in_axis - - -def camera_to_lidar(points, r_rect, velo2cam): - """Convert points in camera coordinate to lidar coordinate. - - Note: - This function is for KITTI only. - - Args: - points (np.ndarray, shape=[N, 3]): Points in camera coordinate. - r_rect (np.ndarray, shape=[4, 4]): Matrix to project points in - specific camera coordinate (e.g. CAM2) to CAM0. - velo2cam (np.ndarray, shape=[4, 4]): Matrix to project points in - camera coordinate to lidar coordinate. - - Returns: - np.ndarray, shape=[N, 3]: Points in lidar coordinate. - """ - points_shape = list(points.shape[0:-1]) - if points.shape[-1] == 3: - points = np.concatenate([points, np.ones(points_shape + [1])], axis=-1) - lidar_points = points @ np.linalg.inv((r_rect @ velo2cam).T) - return lidar_points[..., :3] - - -def box_camera_to_lidar(data, r_rect, velo2cam): - """Convert boxes in camera coordinate to lidar coordinate. - - Note: - This function is for KITTI only. - - Args: - data (np.ndarray, shape=[N, 7]): Boxes in camera coordinate. - r_rect (np.ndarray, shape=[4, 4]): Matrix to project points in - specific camera coordinate (e.g. CAM2) to CAM0. - velo2cam (np.ndarray, shape=[4, 4]): Matrix to project points in - camera coordinate to lidar coordinate. - - Returns: - np.ndarray, shape=[N, 3]: Boxes in lidar coordinate. - """ - xyz = data[:, 0:3] - x_size, y_size, z_size = data[:, 3:4], data[:, 4:5], data[:, 5:6] - r = data[:, 6:7] - xyz_lidar = camera_to_lidar(xyz, r_rect, velo2cam) - # yaw and dims also needs to be converted - r_new = -r - np.pi / 2 - r_new = limit_period(r_new, period=np.pi * 2) - return np.concatenate([xyz_lidar, x_size, z_size, y_size, r_new], axis=1) - - -def corners_nd(dims, origin=0.5): - """Generate relative box corners based on length per dim and origin point. - - Args: - dims (np.ndarray, shape=[N, ndim]): Array of length per dim - origin (list or array or float, optional): origin point relate to - smallest point. Defaults to 0.5 - - Returns: - np.ndarray, shape=[N, 2 ** ndim, ndim]: Returned corners. - point layout example: (2d) x0y0, x0y1, x1y0, x1y1; - (3d) x0y0z0, x0y0z1, x0y1z0, x0y1z1, x1y0z0, x1y0z1, x1y1z0, x1y1z1 - where x0 < x1, y0 < y1, z0 < z1. - """ - ndim = int(dims.shape[1]) - corners_norm = np.stack( - np.unravel_index(np.arange(2**ndim), [2] * ndim), - axis=1).astype(dims.dtype) - # now corners_norm has format: (2d) x0y0, x0y1, x1y0, x1y1 - # (3d) x0y0z0, x0y0z1, x0y1z0, x0y1z1, x1y0z0, x1y0z1, x1y1z0, x1y1z1 - # so need to convert to a format which is convenient to do other computing. - # for 2d boxes, format is clockwise start with minimum point - # for 3d boxes, please draw lines by your hand. 
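A minimal worked example of the homogeneous-coordinate transform that `camera_to_lidar` applies; identity rectification and extrinsics are assumed here so the output is easy to verify.

```python
import numpy as np

points_cam = np.array([[1.0, 2.0, 3.0]])   # N x 3 points in camera coordinates
r_rect = np.eye(4)                         # assumed identity rectification
velo2cam = np.eye(4)                       # assumed identity extrinsics

# append a homogeneous 1, apply the inverse transform, drop the last column
hom = np.concatenate([points_cam, np.ones((1, 1))], axis=-1)
points_lidar = (hom @ np.linalg.inv((r_rect @ velo2cam).T))[:, :3]
print(points_lidar)   # with identity matrices the points come back unchanged
```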
- if ndim == 2: - # generate clockwise box corners - corners_norm = corners_norm[[0, 1, 3, 2]] - elif ndim == 3: - corners_norm = corners_norm[[0, 1, 3, 2, 4, 5, 7, 6]] - corners_norm = corners_norm - np.array(origin, dtype=dims.dtype) - corners = dims.reshape([-1, 1, ndim]) * corners_norm.reshape( - [1, 2**ndim, ndim]) - return corners - - -def center_to_corner_box2d(centers, dims, angles=None, origin=0.5): - """Convert kitti locations, dimensions and angles to corners. - format: center(xy), dims(xy), angles(counterclockwise when positive) - - Args: - centers (np.ndarray): Locations in kitti label file with shape (N, 2). - dims (np.ndarray): Dimensions in kitti label file with shape (N, 2). - angles (np.ndarray, optional): Rotation_y in kitti label file with - shape (N). Defaults to None. - origin (list or array or float, optional): origin point relate to - smallest point. Defaults to 0.5. - - Returns: - np.ndarray: Corners with the shape of (N, 4, 2). - """ - # 'length' in kitti format is in x axis. - # xyz(hwl)(kitti label file)<->xyz(lhw)(camera)<->z(-x)(-y)(wlh)(lidar) - # center in kitti format is [0.5, 1.0, 0.5] in xyz. - corners = corners_nd(dims, origin=origin) - # corners: [N, 4, 2] - if angles is not None: - corners = rotation_3d_in_axis(corners, angles) - corners += centers.reshape([-1, 1, 2]) - return corners - - -@numba.jit(nopython=True) -def depth_to_points(depth, trunc_pixel): - """Convert depth map to points. - - Args: - depth (np.array, shape=[H, W]): Depth map which - the row of [0~`trunc_pixel`] are truncated. - trunc_pixel (int): The number of truncated row. - - Returns: - np.ndarray: Points in camera coordinates. - """ - num_pts = np.sum(depth[trunc_pixel:, ] > 0.1) - points = np.zeros((num_pts, 3), dtype=depth.dtype) - x = np.array([0, 0, 1], dtype=depth.dtype) - k = 0 - for i in range(trunc_pixel, depth.shape[0]): - for j in range(depth.shape[1]): - if depth[i, j] > 0.1: - x = np.array([j, i, 1], dtype=depth.dtype) - points[k] = x * depth[i, j] - k += 1 - return points - - -def depth_to_lidar_points(depth, trunc_pixel, P2, r_rect, velo2cam): - """Convert depth map to points in lidar coordinate. - - Args: - depth (np.array, shape=[H, W]): Depth map which - the row of [0~`trunc_pixel`] are truncated. - trunc_pixel (int): The number of truncated row. - P2 (p.array, shape=[4, 4]): Intrinsics of Camera2. - r_rect (np.ndarray, shape=[4, 4]): Matrix to project points in - specific camera coordinate (e.g. CAM2) to CAM0. - velo2cam (np.ndarray, shape=[4, 4]): Matrix to project points in - camera coordinate to lidar coordinate. - - Returns: - np.ndarray: Points in lidar coordinates. - """ - pts = depth_to_points(depth, trunc_pixel) - points_shape = list(pts.shape[0:-1]) - points = np.concatenate([pts, np.ones(points_shape + [1])], axis=-1) - points = points @ np.linalg.inv(P2.T) - lidar_points = camera_to_lidar(points, r_rect, velo2cam) - return lidar_points - - -def center_to_corner_box3d(centers, - dims, - angles=None, - origin=(0.5, 1.0, 0.5), - axis=1): - """Convert kitti locations, dimensions and angles to corners. - - Args: - centers (np.ndarray): Locations in kitti label file with shape (N, 3). - dims (np.ndarray): Dimensions in kitti label file with shape (N, 3). - angles (np.ndarray, optional): Rotation_y in kitti label file with - shape (N). Defaults to None. - origin (list or array or float, optional): Origin point relate to - smallest point. Use (0.5, 1.0, 0.5) in camera and (0.5, 0.5, 0) - in lidar. Defaults to (0.5, 1.0, 0.5). 
- axis (int, optional): Rotation axis. 1 for camera and 2 for lidar. - Defaults to 1. - - Returns: - np.ndarray: Corners with the shape of (N, 8, 3). - """ - # 'length' in kitti format is in x axis. - # yzx(hwl)(kitti label file)<->xyz(lhw)(camera)<->z(-x)(-y)(lwh)(lidar) - # center in kitti format is [0.5, 1.0, 0.5] in xyz. - corners = corners_nd(dims, origin=origin) - # corners: [N, 8, 3] - if angles is not None: - corners = rotation_3d_in_axis(corners, angles, axis=axis) - corners += centers.reshape([-1, 1, 3]) - return corners - - -@numba.jit(nopython=True) -def box2d_to_corner_jit(boxes): - """Convert box2d to corner. - - Args: - boxes (np.ndarray, shape=[N, 5]): Boxes2d with rotation. - - Returns: - box_corners (np.ndarray, shape=[N, 4, 2]): Box corners. - """ - num_box = boxes.shape[0] - corners_norm = np.zeros((4, 2), dtype=boxes.dtype) - corners_norm[1, 1] = 1.0 - corners_norm[2] = 1.0 - corners_norm[3, 0] = 1.0 - corners_norm -= np.array([0.5, 0.5], dtype=boxes.dtype) - corners = boxes.reshape(num_box, 1, 5)[:, :, 2:4] * corners_norm.reshape( - 1, 4, 2) - rot_mat_T = np.zeros((2, 2), dtype=boxes.dtype) - box_corners = np.zeros((num_box, 4, 2), dtype=boxes.dtype) - for i in range(num_box): - rot_sin = np.sin(boxes[i, -1]) - rot_cos = np.cos(boxes[i, -1]) - rot_mat_T[0, 0] = rot_cos - rot_mat_T[0, 1] = rot_sin - rot_mat_T[1, 0] = -rot_sin - rot_mat_T[1, 1] = rot_cos - box_corners[i] = corners[i] @ rot_mat_T + boxes[i, :2] - return box_corners - - -@numba.njit -def corner_to_standup_nd_jit(boxes_corner): - """Convert boxes_corner to aligned (min-max) boxes. - - Args: - boxes_corner (np.ndarray, shape=[N, 2**dim, dim]): Boxes corners. - - Returns: - np.ndarray, shape=[N, dim*2]: Aligned (min-max) boxes. - """ - num_boxes = boxes_corner.shape[0] - ndim = boxes_corner.shape[-1] - result = np.zeros((num_boxes, ndim * 2), dtype=boxes_corner.dtype) - for i in range(num_boxes): - for j in range(ndim): - result[i, j] = np.min(boxes_corner[i, :, j]) - for j in range(ndim): - result[i, j + ndim] = np.max(boxes_corner[i, :, j]) - return result - - -@numba.jit(nopython=True) -def corner_to_surfaces_3d_jit(corners): - """Convert 3d box corners from corner function above to surfaces that - normal vectors all direct to internal. - - Args: - corners (np.ndarray): 3d box corners with the shape of (N, 8, 3). - - Returns: - np.ndarray: Surfaces with the shape of (N, 6, 4, 3). - """ - # box_corners: [N, 8, 3], must from corner functions in this module - num_boxes = corners.shape[0] - surfaces = np.zeros((num_boxes, 6, 4, 3), dtype=corners.dtype) - corner_idxes = np.array([ - 0, 1, 2, 3, 7, 6, 5, 4, 0, 3, 7, 4, 1, 5, 6, 2, 0, 4, 5, 1, 3, 2, 6, 7 - ]).reshape(6, 4) - for i in range(num_boxes): - for j in range(6): - for k in range(4): - surfaces[i, j, k] = corners[i, corner_idxes[j, k]] - return surfaces - - -def rotation_points_single_angle(points, angle, axis=0): - """Rotate points with a single angle. - - Args: - points (np.ndarray, shape=[N, 3]]): - angle (np.ndarray, shape=[1]]): - axis (int, optional): Axis to rotate at. Defaults to 0. - - Returns: - np.ndarray: Rotated points. 
- """ - # points: [N, 3] - rot_sin = np.sin(angle) - rot_cos = np.cos(angle) - if axis == 1: - rot_mat_T = np.array( - [[rot_cos, 0, rot_sin], [0, 1, 0], [-rot_sin, 0, rot_cos]], - dtype=points.dtype) - elif axis == 2 or axis == -1: - rot_mat_T = np.array( - [[rot_cos, rot_sin, 0], [-rot_sin, rot_cos, 0], [0, 0, 1]], - dtype=points.dtype) - elif axis == 0: - rot_mat_T = np.array( - [[1, 0, 0], [0, rot_cos, rot_sin], [0, -rot_sin, rot_cos]], - dtype=points.dtype) - else: - raise ValueError('axis should in range') - - return points @ rot_mat_T, rot_mat_T - - -def box3d_to_bbox(box3d, P2): - """Convert box3d in camera coordinates to bbox in image coordinates. - - Args: - box3d (np.ndarray, shape=[N, 7]): Boxes in camera coordinate. - P2 (np.array, shape=[4, 4]): Intrinsics of Camera2. - - Returns: - np.ndarray, shape=[N, 4]: Boxes 2d in image coordinates. - """ - box_corners = center_to_corner_box3d( - box3d[:, :3], box3d[:, 3:6], box3d[:, 6], [0.5, 1.0, 0.5], axis=1) - box_corners_in_image = points_cam2img(box_corners, P2) - # box_corners_in_image: [N, 8, 2] - minxy = np.min(box_corners_in_image, axis=1) - maxxy = np.max(box_corners_in_image, axis=1) - bbox = np.concatenate([minxy, maxxy], axis=1) - return bbox - - -def corner_to_surfaces_3d(corners): - """convert 3d box corners from corner function above to surfaces that - normal vectors all direct to internal. - - Args: - corners (np.ndarray): 3D box corners with shape of (N, 8, 3). - - Returns: - np.ndarray: Surfaces with the shape of (N, 6, 4, 3). - """ - # box_corners: [N, 8, 3], must from corner functions in this module - surfaces = np.array([ - [corners[:, 0], corners[:, 1], corners[:, 2], corners[:, 3]], - [corners[:, 7], corners[:, 6], corners[:, 5], corners[:, 4]], - [corners[:, 0], corners[:, 3], corners[:, 7], corners[:, 4]], - [corners[:, 1], corners[:, 5], corners[:, 6], corners[:, 2]], - [corners[:, 0], corners[:, 4], corners[:, 5], corners[:, 1]], - [corners[:, 3], corners[:, 2], corners[:, 6], corners[:, 7]], - ]).transpose([2, 0, 1, 3]) - return surfaces - - -def points_in_rbbox(points, rbbox, z_axis=2, origin=(0.5, 0.5, 0)): - """Check points in rotated bbox and return indices. - - Note: - This function is for counterclockwise boxes. - - Args: - points (np.ndarray, shape=[N, 3+dim]): Points to query. - rbbox (np.ndarray, shape=[M, 7]): Boxes3d with rotation. - z_axis (int, optional): Indicate which axis is height. - Defaults to 2. - origin (tuple[int], optional): Indicate the position of - box center. Defaults to (0.5, 0.5, 0). - - Returns: - np.ndarray, shape=[N, M]: Indices of points in each box. - """ - # TODO: this function is different from PointCloud3D, be careful - # when start to use nuscene, check the input - rbbox_corners = center_to_corner_box3d( - rbbox[:, :3], rbbox[:, 3:6], rbbox[:, 6], origin=origin, axis=z_axis) - surfaces = corner_to_surfaces_3d(rbbox_corners) - indices = points_in_convex_polygon_3d_jit(points[:, :3], surfaces) - return indices - - -def minmax_to_corner_2d(minmax_box): - """Convert minmax box to corners2d. - - Args: - minmax_box (np.ndarray, shape=[N, dims]): minmax boxes. - - Returns: - np.ndarray: 2d corners of boxes - """ - ndim = minmax_box.shape[-1] // 2 - center = minmax_box[..., :ndim] - dims = minmax_box[..., ndim:] - center - return center_to_corner_box2d(center, dims, origin=0.0) - - -def create_anchors_3d_range(feature_size, - anchor_range, - sizes=((3.9, 1.6, 1.56), ), - rotations=(0, np.pi / 2), - dtype=np.float32): - """Create anchors 3d by range. 
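A small check of the rotation convention used by `rotation_points_single_angle` for `axis=2` (counterclockwise about z, row vectors multiplied by the transposed rotation matrix); the point and angle are arbitrary.

```python
import numpy as np

angle = np.pi / 2
rot_mat_T = np.array([[np.cos(angle),  np.sin(angle), 0.0],
                      [-np.sin(angle), np.cos(angle), 0.0],
                      [0.0, 0.0, 1.0]])
points = np.array([[1.0, 0.0, 0.0]])
print(points @ rot_mat_T)   # ~[[0, 1, 0]]: the x axis maps onto the y axis
```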
- - Args: - feature_size (list[float] | tuple[float]): Feature map size. It is - either a list of a tuple of [D, H, W](in order of z, y, and x). - anchor_range (torch.Tensor | list[float]): Range of anchors with - shape [6]. The order is consistent with that of anchors, i.e., - (x_min, y_min, z_min, x_max, y_max, z_max). - sizes (list[list] | np.ndarray | torch.Tensor, optional): - Anchor size with shape [N, 3], in order of x, y, z. - Defaults to ((3.9, 1.6, 1.56), ). - rotations (list[float] | np.ndarray | torch.Tensor, optional): - Rotations of anchors in a single feature grid. - Defaults to (0, np.pi / 2). - dtype (type, optional): Data type. Defaults to np.float32. - - Returns: - np.ndarray: Range based anchors with shape of - (*feature_size, num_sizes, num_rots, 7). - """ - anchor_range = np.array(anchor_range, dtype) - z_centers = np.linspace( - anchor_range[2], anchor_range[5], feature_size[0], dtype=dtype) - y_centers = np.linspace( - anchor_range[1], anchor_range[4], feature_size[1], dtype=dtype) - x_centers = np.linspace( - anchor_range[0], anchor_range[3], feature_size[2], dtype=dtype) - sizes = np.reshape(np.array(sizes, dtype=dtype), [-1, 3]) - rotations = np.array(rotations, dtype=dtype) - rets = np.meshgrid( - x_centers, y_centers, z_centers, rotations, indexing='ij') - tile_shape = [1] * 5 - tile_shape[-2] = int(sizes.shape[0]) - for i in range(len(rets)): - rets[i] = np.tile(rets[i][..., np.newaxis, :], tile_shape) - rets[i] = rets[i][..., np.newaxis] # for concat - sizes = np.reshape(sizes, [1, 1, 1, -1, 1, 3]) - tile_size_shape = list(rets[0].shape) - tile_size_shape[3] = 1 - sizes = np.tile(sizes, tile_size_shape) - rets.insert(3, sizes) - ret = np.concatenate(rets, axis=-1) - return np.transpose(ret, [2, 1, 0, 3, 4, 5]) - - -def center_to_minmax_2d(centers, dims, origin=0.5): - """Center to minmax. - - Args: - centers (np.ndarray): Center points. - dims (np.ndarray): Dimensions. - origin (list or array or float, optional): Origin point relate - to smallest point. Defaults to 0.5. - - Returns: - np.ndarray: Minmax points. - """ - if origin == 0.5: - return np.concatenate([centers - dims / 2, centers + dims / 2], - axis=-1) - corners = center_to_corner_box2d(centers, dims, origin=origin) - return corners[:, [0, 2]].reshape([-1, 4]) - - -def rbbox2d_to_near_bbox(rbboxes): - """convert rotated bbox to nearest 'standing' or 'lying' bbox. - - Args: - rbboxes (np.ndarray): Rotated bboxes with shape of - (N, 5(x, y, xdim, ydim, rad)). - - Returns: - np.ndarray: Bounding boxes with the shape of - (N, 4(xmin, ymin, xmax, ymax)). - """ - rots = rbboxes[..., -1] - rots_0_pi_div_2 = np.abs(limit_period(rots, 0.5, np.pi)) - cond = (rots_0_pi_div_2 > np.pi / 4)[..., np.newaxis] - bboxes_center = np.where(cond, rbboxes[:, [0, 1, 3, 2]], rbboxes[:, :4]) - bboxes = center_to_minmax_2d(bboxes_center[:, :2], bboxes_center[:, 2:]) - return bboxes - - -@numba.jit(nopython=True) -def iou_jit(boxes, query_boxes, mode='iou', eps=0.0): - """Calculate box iou. Note that jit version runs ~10x faster than the - box_overlaps function in mmdet3d.core.evaluation. - - Note: - This function is for counterclockwise boxes. - - Args: - boxes (np.ndarray): Input bounding boxes with shape of (N, 4). - query_boxes (np.ndarray): Query boxes with shape of (K, 4). - mode (str, optional): IoU mode. Defaults to 'iou'. - eps (float, optional): Value added to denominator. Defaults to 0. - - Returns: - np.ndarray: Overlap between boxes and query_boxes - with the shape of [N, K]. 
- """ - N = boxes.shape[0] - K = query_boxes.shape[0] - overlaps = np.zeros((N, K), dtype=boxes.dtype) - for k in range(K): - box_area = ((query_boxes[k, 2] - query_boxes[k, 0] + eps) * - (query_boxes[k, 3] - query_boxes[k, 1] + eps)) - for n in range(N): - iw = ( - min(boxes[n, 2], query_boxes[k, 2]) - - max(boxes[n, 0], query_boxes[k, 0]) + eps) - if iw > 0: - ih = ( - min(boxes[n, 3], query_boxes[k, 3]) - - max(boxes[n, 1], query_boxes[k, 1]) + eps) - if ih > 0: - if mode == 'iou': - ua = ((boxes[n, 2] - boxes[n, 0] + eps) * - (boxes[n, 3] - boxes[n, 1] + eps) + box_area - - iw * ih) - else: - ua = ((boxes[n, 2] - boxes[n, 0] + eps) * - (boxes[n, 3] - boxes[n, 1] + eps)) - overlaps[n, k] = iw * ih / ua - return overlaps - - -def projection_matrix_to_CRT_kitti(proj): - """Split projection matrix of KITTI. - - Note: - This function is for KITTI only. - - P = C @ [R|T] - C is upper triangular matrix, so we need to inverse CR and use QR - stable for all kitti camera projection matrix. - - Args: - proj (p.array, shape=[4, 4]): Intrinsics of camera. - - Returns: - tuple[np.ndarray]: Splited matrix of C, R and T. - """ - - CR = proj[0:3, 0:3] - CT = proj[0:3, 3] - RinvCinv = np.linalg.inv(CR) - Rinv, Cinv = np.linalg.qr(RinvCinv) - C = np.linalg.inv(Cinv) - R = np.linalg.inv(Rinv) - T = Cinv @ CT - return C, R, T - - -def remove_outside_points(points, rect, Trv2c, P2, image_shape): - """Remove points which are outside of image. - - Note: - This function is for KITTI only. - - Args: - points (np.ndarray, shape=[N, 3+dims]): Total points. - rect (np.ndarray, shape=[4, 4]): Matrix to project points in - specific camera coordinate (e.g. CAM2) to CAM0. - Trv2c (np.ndarray, shape=[4, 4]): Matrix to project points in - camera coordinate to lidar coordinate. - P2 (p.array, shape=[4, 4]): Intrinsics of Camera2. - image_shape (list[int]): Shape of image. - - Returns: - np.ndarray, shape=[N, 3+dims]: Filtered points. - """ - # 5x faster than remove_outside_points_v1(2ms vs 10ms) - C, R, T = projection_matrix_to_CRT_kitti(P2) - image_bbox = [0, 0, image_shape[1], image_shape[0]] - frustum = get_frustum(image_bbox, C) - frustum -= T - frustum = np.linalg.inv(R) @ frustum.T - frustum = camera_to_lidar(frustum.T, rect, Trv2c) - frustum_surfaces = corner_to_surfaces_3d_jit(frustum[np.newaxis, ...]) - indices = points_in_convex_polygon_3d_jit(points[:, :3], frustum_surfaces) - points = points[indices.reshape([-1])] - return points - - -def get_frustum(bbox_image, C, near_clip=0.001, far_clip=100): - """Get frustum corners in camera coordinates. - - Args: - bbox_image (list[int]): box in image coordinates. - C (np.ndarray): Intrinsics. - near_clip (float, optional): Nearest distance of frustum. - Defaults to 0.001. - far_clip (float, optional): Farthest distance of frustum. - Defaults to 100. - - Returns: - np.ndarray, shape=[8, 3]: coordinates of frustum corners. 
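A worked example of the 2D overlap computed by `iou_jit` (with `eps=0`), written in plain NumPy rather than the numba kernel; the two boxes are arbitrary.

```python
import numpy as np

box = np.array([0.0, 0.0, 4.0, 4.0])      # xmin, ymin, xmax, ymax
query = np.array([2.0, 2.0, 6.0, 6.0])

iw = min(box[2], query[2]) - max(box[0], query[0])   # 2.0
ih = min(box[3], query[3]) - max(box[1], query[1])   # 2.0
inter = max(iw, 0.0) * max(ih, 0.0)                  # 4.0
union = (box[2] - box[0]) * (box[3] - box[1]) \
    + (query[2] - query[0]) * (query[3] - query[1]) - inter
print(inter / union)   # 4 / 28 ≈ 0.1429
```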
- """ - fku = C[0, 0] - fkv = -C[1, 1] - u0v0 = C[0:2, 2] - z_points = np.array( - [near_clip] * 4 + [far_clip] * 4, dtype=C.dtype)[:, np.newaxis] - b = bbox_image - box_corners = np.array( - [[b[0], b[1]], [b[0], b[3]], [b[2], b[3]], [b[2], b[1]]], - dtype=C.dtype) - near_box_corners = (box_corners - u0v0) / np.array( - [fku / near_clip, -fkv / near_clip], dtype=C.dtype) - far_box_corners = (box_corners - u0v0) / np.array( - [fku / far_clip, -fkv / far_clip], dtype=C.dtype) - ret_xy = np.concatenate([near_box_corners, far_box_corners], - axis=0) # [8, 2] - ret_xyz = np.concatenate([ret_xy, z_points], axis=1) - return ret_xyz - - -def surface_equ_3d(polygon_surfaces): - """ - - Args: - polygon_surfaces (np.ndarray): Polygon surfaces with shape of - [num_polygon, max_num_surfaces, max_num_points_of_surface, 3]. - All surfaces' normal vector must direct to internal. - Max_num_points_of_surface must at least 3. - - Returns: - tuple: normal vector and its direction. - """ - # return [a, b, c], d in ax+by+cz+d=0 - # polygon_surfaces: [num_polygon, num_surfaces, num_points_of_polygon, 3] - surface_vec = polygon_surfaces[:, :, :2, :] - \ - polygon_surfaces[:, :, 1:3, :] - # normal_vec: [..., 3] - normal_vec = np.cross(surface_vec[:, :, 0, :], surface_vec[:, :, 1, :]) - # print(normal_vec.shape, points[..., 0, :].shape) - # d = -np.inner(normal_vec, points[..., 0, :]) - d = np.einsum('aij, aij->ai', normal_vec, polygon_surfaces[:, :, 0, :]) - return normal_vec, -d - - -@numba.njit -def _points_in_convex_polygon_3d_jit(points, polygon_surfaces, normal_vec, d, - num_surfaces): - """ - Args: - points (np.ndarray): Input points with shape of (num_points, 3). - polygon_surfaces (np.ndarray): Polygon surfaces with shape of - (num_polygon, max_num_surfaces, max_num_points_of_surface, 3). - All surfaces' normal vector must direct to internal. - Max_num_points_of_surface must at least 3. - normal_vec (np.ndarray): Normal vector of polygon_surfaces. - d (int): Directions of normal vector. - num_surfaces (np.ndarray): Number of surfaces a polygon contains - shape of (num_polygon). - - Returns: - np.ndarray: Result matrix with the shape of [num_points, num_polygon]. - """ - max_num_surfaces, max_num_points_of_surface = polygon_surfaces.shape[1:3] - num_points = points.shape[0] - num_polygons = polygon_surfaces.shape[0] - ret = np.ones((num_points, num_polygons), dtype=np.bool_) - sign = 0.0 - for i in range(num_points): - for j in range(num_polygons): - for k in range(max_num_surfaces): - if k > num_surfaces[j]: - break - sign = ( - points[i, 0] * normal_vec[j, k, 0] + - points[i, 1] * normal_vec[j, k, 1] + - points[i, 2] * normal_vec[j, k, 2] + d[j, k]) - if sign >= 0: - ret[i, j] = False - break - return ret - - -def points_in_convex_polygon_3d_jit(points, - polygon_surfaces, - num_surfaces=None): - """Check points is in 3d convex polygons. - - Args: - points (np.ndarray): Input points with shape of (num_points, 3). - polygon_surfaces (np.ndarray): Polygon surfaces with shape of - (num_polygon, max_num_surfaces, max_num_points_of_surface, 3). - All surfaces' normal vector must direct to internal. - Max_num_points_of_surface must at least 3. - num_surfaces (np.ndarray, optional): Number of surfaces a polygon - contains shape of (num_polygon). Defaults to None. - - Returns: - np.ndarray: Result matrix with the shape of [num_points, num_polygon]. 
- """ - max_num_surfaces, max_num_points_of_surface = polygon_surfaces.shape[1:3] - # num_points = points.shape[0] - num_polygons = polygon_surfaces.shape[0] - if num_surfaces is None: - num_surfaces = np.full((num_polygons, ), 9999999, dtype=np.int64) - normal_vec, d = surface_equ_3d(polygon_surfaces[:, :, :3, :]) - # normal_vec: [num_polygon, max_num_surfaces, 3] - # d: [num_polygon, max_num_surfaces] - return _points_in_convex_polygon_3d_jit(points, polygon_surfaces, - normal_vec, d, num_surfaces) - - -@numba.njit -def points_in_convex_polygon_jit(points, polygon, clockwise=False): - """Check points is in 2d convex polygons. True when point in polygon. - - Args: - points (np.ndarray): Input points with the shape of [num_points, 2]. - polygon (np.ndarray): Input polygon with the shape of - [num_polygon, num_points_of_polygon, 2]. - clockwise (bool, optional): Indicate polygon is clockwise. Defaults - to True. - - Returns: - np.ndarray: Result matrix with the shape of [num_points, num_polygon]. - """ - # first convert polygon to directed lines - num_points_of_polygon = polygon.shape[1] - num_points = points.shape[0] - num_polygons = polygon.shape[0] - # vec for all the polygons - if clockwise: - vec1 = polygon - polygon[:, - np.array([num_points_of_polygon - 1] + list( - range(num_points_of_polygon - 1))), :] - else: - vec1 = polygon[:, - np.array([num_points_of_polygon - 1] + - list(range(num_points_of_polygon - - 1))), :] - polygon - ret = np.zeros((num_points, num_polygons), dtype=np.bool_) - success = True - cross = 0.0 - for i in range(num_points): - for j in range(num_polygons): - success = True - for k in range(num_points_of_polygon): - vec = vec1[j, k] - cross = vec[1] * (polygon[j, k, 0] - points[i, 0]) - cross -= vec[0] * (polygon[j, k, 1] - points[i, 1]) - if cross >= 0: - success = False - break - ret[i, j] = success - return ret - - -def boxes3d_to_corners3d_lidar(boxes3d, bottom_center=True): - """Convert kitti center boxes to corners. - - 7 -------- 4 - /| /| - 6 -------- 5 . - | | | | - . 3 -------- 0 - |/ |/ - 2 -------- 1 - - Note: - This function is for LiDAR boxes only. - - Args: - boxes3d (np.ndarray): Boxes with shape of (N, 7) - [x, y, z, x_size, y_size, z_size, ry] in LiDAR coords, - see the definition of ry in KITTI dataset. - bottom_center (bool, optional): Whether z is on the bottom center - of object. Defaults to True. - - Returns: - np.ndarray: Box corners with the shape of [N, 8, 3]. - """ - boxes_num = boxes3d.shape[0] - x_size, y_size, z_size = boxes3d[:, 3], boxes3d[:, 4], boxes3d[:, 5] - x_corners = np.array([ - x_size / 2., -x_size / 2., -x_size / 2., x_size / 2., x_size / 2., - -x_size / 2., -x_size / 2., x_size / 2. - ], - dtype=np.float32).T - y_corners = np.array([ - -y_size / 2., -y_size / 2., y_size / 2., y_size / 2., -y_size / 2., - -y_size / 2., y_size / 2., y_size / 2. - ], - dtype=np.float32).T - if bottom_center: - z_corners = np.zeros((boxes_num, 8), dtype=np.float32) - z_corners[:, 4:8] = z_size.reshape(boxes_num, 1).repeat( - 4, axis=1) # (N, 8) - else: - z_corners = np.array([ - -z_size / 2., -z_size / 2., -z_size / 2., -z_size / 2., - z_size / 2., z_size / 2., z_size / 2., z_size / 2. 
- ], - dtype=np.float32).T - - ry = boxes3d[:, 6] - zeros, ones = np.zeros( - ry.size, dtype=np.float32), np.ones( - ry.size, dtype=np.float32) - rot_list = np.array([[np.cos(ry), np.sin(ry), zeros], - [-np.sin(ry), np.cos(ry), zeros], - [zeros, zeros, ones]]) # (3, 3, N) - R_list = np.transpose(rot_list, (2, 0, 1)) # (N, 3, 3) - - temp_corners = np.concatenate((x_corners.reshape( - -1, 8, 1), y_corners.reshape(-1, 8, 1), z_corners.reshape(-1, 8, 1)), - axis=2) # (N, 8, 3) - rotated_corners = np.matmul(temp_corners, R_list) # (N, 8, 3) - x_corners = rotated_corners[:, :, 0] - y_corners = rotated_corners[:, :, 1] - z_corners = rotated_corners[:, :, 2] - - x_loc, y_loc, z_loc = boxes3d[:, 0], boxes3d[:, 1], boxes3d[:, 2] - - x = x_loc.reshape(-1, 1) + x_corners.reshape(-1, 8) - y = y_loc.reshape(-1, 1) + y_corners.reshape(-1, 8) - z = z_loc.reshape(-1, 1) + z_corners.reshape(-1, 8) - - corners = np.concatenate( - (x.reshape(-1, 8, 1), y.reshape(-1, 8, 1), z.reshape(-1, 8, 1)), - axis=2) - - return corners.astype(np.float32) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/__init__.py deleted file mode 100644 index b306525c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.core.bbox import build_bbox_coder -from .anchor_free_bbox_coder import AnchorFreeBBoxCoder -from .centerpoint_bbox_coders import CenterPointBBoxCoder -from .delta_xyzwhlr_bbox_coder import DeltaXYZWLHRBBoxCoder -from .fcos3d_bbox_coder import FCOS3DBBoxCoder -from .groupfree3d_bbox_coder import GroupFree3DBBoxCoder -from .monoflex_bbox_coder import MonoFlexCoder -from .partial_bin_based_bbox_coder import PartialBinBasedBBoxCoder -from .pgd_bbox_coder import PGDBBoxCoder -from .point_xyzwhlr_bbox_coder import PointXYZWHLRBBoxCoder -from .smoke_bbox_coder import SMOKECoder - -__all__ = [ - 'build_bbox_coder', 'DeltaXYZWLHRBBoxCoder', 'PartialBinBasedBBoxCoder', - 'CenterPointBBoxCoder', 'AnchorFreeBBoxCoder', 'GroupFree3DBBoxCoder', - 'PointXYZWHLRBBoxCoder', 'FCOS3DBBoxCoder', 'PGDBBoxCoder', 'SMOKECoder', - 'MonoFlexCoder' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/anchor_free_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/anchor_free_bbox_coder.py deleted file mode 100644 index d64f38b5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/anchor_free_bbox_coder.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core.bbox.builder import BBOX_CODERS -from .partial_bin_based_bbox_coder import PartialBinBasedBBoxCoder - - -@BBOX_CODERS.register_module() -class AnchorFreeBBoxCoder(PartialBinBasedBBoxCoder): - """Anchor free bbox coder for 3D boxes. - - Args: - num_dir_bins (int): Number of bins to encode direction angle. - with_rot (bool): Whether the bbox is with rotation. - """ - - def __init__(self, num_dir_bins, with_rot=True): - super(AnchorFreeBBoxCoder, self).__init__( - num_dir_bins, 0, [], with_rot=with_rot) - self.num_dir_bins = num_dir_bins - self.with_rot = with_rot - - def encode(self, gt_bboxes_3d, gt_labels_3d): - """Encode ground truth to prediction targets. - - Args: - gt_bboxes_3d (BaseInstance3DBoxes): Ground truth bboxes - with shape (n, 7). - gt_labels_3d (torch.Tensor): Ground truth classes. 
- - Returns: - tuple: Targets of center, size and direction. - """ - # generate center target - center_target = gt_bboxes_3d.gravity_center - - # generate bbox size target - size_res_target = gt_bboxes_3d.dims / 2 - - # generate dir target - box_num = gt_labels_3d.shape[0] - if self.with_rot: - (dir_class_target, - dir_res_target) = self.angle2class(gt_bboxes_3d.yaw) - dir_res_target /= (2 * np.pi / self.num_dir_bins) - else: - dir_class_target = gt_labels_3d.new_zeros(box_num) - dir_res_target = gt_bboxes_3d.tensor.new_zeros(box_num) - - return (center_target, size_res_target, dir_class_target, - dir_res_target) - - def decode(self, bbox_out): - """Decode predicted parts to bbox3d. - - Args: - bbox_out (dict): Predictions from model, should contain keys below. - - - center: predicted bottom center of bboxes. - - dir_class: predicted bbox direction class. - - dir_res: predicted bbox direction residual. - - size: predicted bbox size. - - Returns: - torch.Tensor: Decoded bbox3d with shape (batch, n, 7). - """ - center = bbox_out['center'] - batch_size, num_proposal = center.shape[:2] - - # decode heading angle - if self.with_rot: - dir_class = torch.argmax(bbox_out['dir_class'], -1) - dir_res = torch.gather(bbox_out['dir_res'], 2, - dir_class.unsqueeze(-1)) - dir_res.squeeze_(2) - dir_angle = self.class2angle(dir_class, dir_res).reshape( - batch_size, num_proposal, 1) - else: - dir_angle = center.new_zeros(batch_size, num_proposal, 1) - - # decode bbox size - bbox_size = torch.clamp(bbox_out['size'] * 2, min=0.1) - - bbox3d = torch.cat([center, bbox_size, dir_angle], dim=-1) - return bbox3d - - def split_pred(self, cls_preds, reg_preds, base_xyz): - """Split predicted features to specific parts. - - Args: - cls_preds (torch.Tensor): Class predicted features to split. - reg_preds (torch.Tensor): Regression predicted features to split. - base_xyz (torch.Tensor): Coordinates of points. - - Returns: - dict[str, torch.Tensor]: Split results. - """ - results = {} - results['obj_scores'] = cls_preds - - start, end = 0, 0 - reg_preds_trans = reg_preds.transpose(2, 1) - - # decode center - end += 3 - # (batch_size, num_proposal, 3) - results['center_offset'] = reg_preds_trans[..., start:end] - results['center'] = base_xyz.detach() + reg_preds_trans[..., start:end] - start = end - - # decode center - end += 3 - # (batch_size, num_proposal, 3) - results['size'] = reg_preds_trans[..., start:end] - start = end - - # decode direction - end += self.num_dir_bins - results['dir_class'] = reg_preds_trans[..., start:end] - start = end - - end += self.num_dir_bins - dir_res_norm = reg_preds_trans[..., start:end] - start = end - - results['dir_res_norm'] = dir_res_norm - results['dir_res'] = dir_res_norm * (2 * np.pi / self.num_dir_bins) - - return results diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/centerpoint_bbox_coders.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/centerpoint_bbox_coders.py deleted file mode 100644 index 6d43a63d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/centerpoint_bbox_coders.py +++ /dev/null @@ -1,229 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class CenterPointBBoxCoder(BaseBBoxCoder): - """Bbox coder for CenterPoint. - - Args: - pc_range (list[float]): Range of point cloud. - out_size_factor (int): Downsample factor of the model. 
- voxel_size (list[float]): Size of voxel. - post_center_range (list[float], optional): Limit of the center. - Default: None. - max_num (int, optional): Max number to be kept. Default: 100. - score_threshold (float, optional): Threshold to filter boxes - based on score. Default: None. - code_size (int, optional): Code size of bboxes. Default: 9 - """ - - def __init__(self, - pc_range, - out_size_factor, - voxel_size, - post_center_range=None, - max_num=100, - score_threshold=None, - code_size=9): - - self.pc_range = pc_range - self.out_size_factor = out_size_factor - self.voxel_size = voxel_size - self.post_center_range = post_center_range - self.max_num = max_num - self.score_threshold = score_threshold - self.code_size = code_size - - def _gather_feat(self, feats, inds, feat_masks=None): - """Given feats and indexes, returns the gathered feats. - - Args: - feats (torch.Tensor): Features to be transposed and gathered - with the shape of [B, 2, W, H]. - inds (torch.Tensor): Indexes with the shape of [B, N]. - feat_masks (torch.Tensor, optional): Mask of the feats. - Default: None. - - Returns: - torch.Tensor: Gathered feats. - """ - dim = feats.size(2) - inds = inds.unsqueeze(2).expand(inds.size(0), inds.size(1), dim) - feats = feats.gather(1, inds) - if feat_masks is not None: - feat_masks = feat_masks.unsqueeze(2).expand_as(feats) - feats = feats[feat_masks] - feats = feats.view(-1, dim) - return feats - - def _topk(self, scores, K=80): - """Get indexes based on scores. - - Args: - scores (torch.Tensor): scores with the shape of [B, N, W, H]. - K (int, optional): Number to be kept. Defaults to 80. - - Returns: - tuple[torch.Tensor] - torch.Tensor: Selected scores with the shape of [B, K]. - torch.Tensor: Selected indexes with the shape of [B, K]. - torch.Tensor: Selected classes with the shape of [B, K]. - torch.Tensor: Selected y coord with the shape of [B, K]. - torch.Tensor: Selected x coord with the shape of [B, K]. - """ - batch, cat, height, width = scores.size() - - topk_scores, topk_inds = torch.topk(scores.view(batch, cat, -1), K) - - topk_inds = topk_inds % (height * width) - topk_ys = (topk_inds.float() / - torch.tensor(width, dtype=torch.float)).int().float() - topk_xs = (topk_inds % width).int().float() - - topk_score, topk_ind = torch.topk(topk_scores.view(batch, -1), K) - topk_clses = (topk_ind / torch.tensor(K, dtype=torch.float)).int() - topk_inds = self._gather_feat(topk_inds.view(batch, -1, 1), - topk_ind).view(batch, K) - topk_ys = self._gather_feat(topk_ys.view(batch, -1, 1), - topk_ind).view(batch, K) - topk_xs = self._gather_feat(topk_xs.view(batch, -1, 1), - topk_ind).view(batch, K) - - return topk_score, topk_inds, topk_clses, topk_ys, topk_xs - - def _transpose_and_gather_feat(self, feat, ind): - """Given feats and indexes, returns the transposed and gathered feats. - - Args: - feat (torch.Tensor): Features to be transposed and gathered - with the shape of [B, 2, W, H]. - ind (torch.Tensor): Indexes with the shape of [B, N]. - - Returns: - torch.Tensor: Transposed and gathered feats. - """ - feat = feat.permute(0, 2, 3, 1).contiguous() - feat = feat.view(feat.size(0), -1, feat.size(3)) - feat = self._gather_feat(feat, ind) - return feat - - def encode(self): - pass - - def decode(self, - heat, - rot_sine, - rot_cosine, - hei, - dim, - vel, - reg=None, - task_id=-1): - """Decode bboxes. - - Args: - heat (torch.Tensor): Heatmap with the shape of [B, N, W, H]. - rot_sine (torch.Tensor): Sine of rotation with the shape of - [B, 1, W, H]. 
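A simplified sketch of the peak selection behind `CenterPointBBoxCoder._topk`: flatten a `(B, C, H, W)` heatmap, keep the K highest scores, and recover class / row / column indices (the real method does this in two stages and then gathers features). The shapes below are made up.

```python
import torch

B, C, H, W, K = 1, 3, 8, 8, 5
heat = torch.rand(B, C, H, W)

scores, inds = torch.topk(heat.view(B, -1), K)   # top-K over classes and positions
classes = inds // (H * W)
ys = (inds % (H * W)) // W
xs = inds % W
print(scores.shape, classes.shape, ys.shape, xs.shape)   # all torch.Size([1, 5])
```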
- rot_cosine (torch.Tensor): Cosine of rotation with the shape of - [B, 1, W, H]. - hei (torch.Tensor): Height of the boxes with the shape - of [B, 1, W, H]. - dim (torch.Tensor): Dim of the boxes with the shape of - [B, 1, W, H]. - vel (torch.Tensor): Velocity with the shape of [B, 1, W, H]. - reg (torch.Tensor, optional): Regression value of the boxes in - 2D with the shape of [B, 2, W, H]. Default: None. - task_id (int, optional): Index of task. Default: -1. - - Returns: - list[dict]: Decoded boxes. - """ - batch, cat, _, _ = heat.size() - - scores, inds, clses, ys, xs = self._topk(heat, K=self.max_num) - - if reg is not None: - reg = self._transpose_and_gather_feat(reg, inds) - reg = reg.view(batch, self.max_num, 2) - xs = xs.view(batch, self.max_num, 1) + reg[:, :, 0:1] - ys = ys.view(batch, self.max_num, 1) + reg[:, :, 1:2] - else: - xs = xs.view(batch, self.max_num, 1) + 0.5 - ys = ys.view(batch, self.max_num, 1) + 0.5 - - # rotation value and direction label - rot_sine = self._transpose_and_gather_feat(rot_sine, inds) - rot_sine = rot_sine.view(batch, self.max_num, 1) - - rot_cosine = self._transpose_and_gather_feat(rot_cosine, inds) - rot_cosine = rot_cosine.view(batch, self.max_num, 1) - rot = torch.atan2(rot_sine, rot_cosine) - - # height in the bev - hei = self._transpose_and_gather_feat(hei, inds) - hei = hei.view(batch, self.max_num, 1) - - # dim of the box - dim = self._transpose_and_gather_feat(dim, inds) - dim = dim.view(batch, self.max_num, 3) - - # class label - clses = clses.view(batch, self.max_num).float() - scores = scores.view(batch, self.max_num) - - xs = xs.view( - batch, self.max_num, - 1) * self.out_size_factor * self.voxel_size[0] + self.pc_range[0] - ys = ys.view( - batch, self.max_num, - 1) * self.out_size_factor * self.voxel_size[1] + self.pc_range[1] - - if vel is None: # KITTI FORMAT - final_box_preds = torch.cat([xs, ys, hei, dim, rot], dim=2) - else: # exist velocity, nuscene format - vel = self._transpose_and_gather_feat(vel, inds) - vel = vel.view(batch, self.max_num, 2) - final_box_preds = torch.cat([xs, ys, hei, dim, rot, vel], dim=2) - - final_scores = scores - final_preds = clses - - # use score threshold - if self.score_threshold is not None: - thresh_mask = final_scores > self.score_threshold - - if self.post_center_range is not None: - self.post_center_range = torch.tensor( - self.post_center_range, device=heat.device) - mask = (final_box_preds[..., :3] >= - self.post_center_range[:3]).all(2) - mask &= (final_box_preds[..., :3] <= - self.post_center_range[3:]).all(2) - - predictions_dicts = [] - for i in range(batch): - cmask = mask[i, :] - if self.score_threshold: - cmask &= thresh_mask[i] - - boxes3d = final_box_preds[i, cmask] - scores = final_scores[i, cmask] - labels = final_preds[i, cmask] - predictions_dict = { - 'bboxes': boxes3d, - 'scores': scores, - 'labels': labels - } - - predictions_dicts.append(predictions_dict) - else: - raise NotImplementedError( - 'Need to reorganize output as a batch, only ' - 'support post_center_range is not None for now!') - - return predictions_dicts diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/delta_xyzwhlr_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/delta_xyzwhlr_bbox_coder.py deleted file mode 100644 index 931e8398..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/delta_xyzwhlr_bbox_coder.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class DeltaXYZWLHRBBoxCoder(BaseBBoxCoder): - """Bbox Coder for 3D boxes. - - Args: - code_size (int): The dimension of boxes to be encoded. - """ - - def __init__(self, code_size=7): - super(DeltaXYZWLHRBBoxCoder, self).__init__() - self.code_size = code_size - - @staticmethod - def encode(src_boxes, dst_boxes): - """Get box regression transformation deltas (dx, dy, dz, dx_size, - dy_size, dz_size, dr, dv*) that can be used to transform the - `src_boxes` into the `target_boxes`. - - Args: - src_boxes (torch.Tensor): source boxes, e.g., object proposals. - dst_boxes (torch.Tensor): target of the transformation, e.g., - ground-truth boxes. - - Returns: - torch.Tensor: Box transformation deltas. - """ - box_ndim = src_boxes.shape[-1] - cas, cgs, cts = [], [], [] - if box_ndim > 7: - xa, ya, za, wa, la, ha, ra, *cas = torch.split( - src_boxes, 1, dim=-1) - xg, yg, zg, wg, lg, hg, rg, *cgs = torch.split( - dst_boxes, 1, dim=-1) - cts = [g - a for g, a in zip(cgs, cas)] - else: - xa, ya, za, wa, la, ha, ra = torch.split(src_boxes, 1, dim=-1) - xg, yg, zg, wg, lg, hg, rg = torch.split(dst_boxes, 1, dim=-1) - za = za + ha / 2 - zg = zg + hg / 2 - diagonal = torch.sqrt(la**2 + wa**2) - xt = (xg - xa) / diagonal - yt = (yg - ya) / diagonal - zt = (zg - za) / ha - lt = torch.log(lg / la) - wt = torch.log(wg / wa) - ht = torch.log(hg / ha) - rt = rg - ra - return torch.cat([xt, yt, zt, wt, lt, ht, rt, *cts], dim=-1) - - @staticmethod - def decode(anchors, deltas): - """Apply transformation `deltas` (dx, dy, dz, dx_size, dy_size, - dz_size, dr, dv*) to `boxes`. - - Args: - anchors (torch.Tensor): Parameters of anchors with shape (N, 7). - deltas (torch.Tensor): Encoded boxes with shape - (N, 7+n) [x, y, z, x_size, y_size, z_size, r, velo*]. - - Returns: - torch.Tensor: Decoded boxes. - """ - cas, cts = [], [] - box_ndim = anchors.shape[-1] - if box_ndim > 7: - xa, ya, za, wa, la, ha, ra, *cas = torch.split(anchors, 1, dim=-1) - xt, yt, zt, wt, lt, ht, rt, *cts = torch.split(deltas, 1, dim=-1) - else: - xa, ya, za, wa, la, ha, ra = torch.split(anchors, 1, dim=-1) - xt, yt, zt, wt, lt, ht, rt = torch.split(deltas, 1, dim=-1) - - za = za + ha / 2 - diagonal = torch.sqrt(la**2 + wa**2) - xg = xt * diagonal + xa - yg = yt * diagonal + ya - zg = zt * ha + za - - lg = torch.exp(lt) * la - wg = torch.exp(wt) * wa - hg = torch.exp(ht) * ha - rg = rt + ra - zg = zg - hg / 2 - cgs = [t + a for t, a in zip(cts, cas)] - return torch.cat([xg, yg, zg, wg, lg, hg, rg, *cgs], dim=-1) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/fcos3d_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/fcos3d_bbox_coder.py deleted file mode 100644 index 7cb6b1a3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/fcos3d_bbox_coder.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS -from ..structures import limit_period - - -@BBOX_CODERS.register_module() -class FCOS3DBBoxCoder(BaseBBoxCoder): - """Bounding box coder for FCOS3D. - - Args: - base_depths (tuple[tuple[float]]): Depth references for decode box - depth. Defaults to None. - base_dims (tuple[tuple[float]]): Dimension references for decode box - dimension. Defaults to None. 
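A round-trip check for the delta encoding implemented by `DeltaXYZWLHRBBoxCoder` above: encoding a target box against an anchor and then decoding the deltas should reproduce the target. Boxes follow the split order used in the coder (x, y, z, w, l, h, yaw); the numbers are arbitrary.

```python
import torch

anchor = torch.tensor([[10.0, 2.0, -1.0, 1.6, 3.9, 1.56, 0.0]])
target = torch.tensor([[11.0, 2.5, -0.8, 1.7, 4.1, 1.60, 0.3]])

xa, ya, za, wa, la, ha, ra = anchor.split(1, dim=-1)
xg, yg, zg, wg, lg, hg, rg = target.split(1, dim=-1)
za, zg = za + ha / 2, zg + hg / 2                  # shift z to the box centers
diag = torch.sqrt(la ** 2 + wa ** 2)
deltas = torch.cat([(xg - xa) / diag, (yg - ya) / diag, (zg - za) / ha,
                    torch.log(wg / wa), torch.log(lg / la), torch.log(hg / ha),
                    rg - ra], dim=-1)

xd, yd, zd, wd, ld, hd, rd = deltas.split(1, dim=-1)
zc = zd * ha + za                                  # decoded z at the box center
decoded = torch.cat([xd * diag + xa, yd * diag + ya,
                     zc - torch.exp(hd) * ha / 2,
                     torch.exp(wd) * wa, torch.exp(ld) * la, torch.exp(hd) * ha,
                     rd + ra], dim=-1)
print(torch.allclose(decoded, target, atol=1e-6))  # True
```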
- code_size (int): The dimension of boxes to be encoded. Defaults to 7. - norm_on_bbox (bool): Whether to apply normalization on the bounding - box 2D attributes. Defaults to True. - """ - - def __init__(self, - base_depths=None, - base_dims=None, - code_size=7, - norm_on_bbox=True): - super(FCOS3DBBoxCoder, self).__init__() - self.base_depths = base_depths - self.base_dims = base_dims - self.bbox_code_size = code_size - self.norm_on_bbox = norm_on_bbox - - def encode(self, gt_bboxes_3d, gt_labels_3d, gt_bboxes, gt_labels): - # TODO: refactor the encoder in the FCOS3D and PGD head - pass - - def decode(self, bbox, scale, stride, training, cls_score=None): - """Decode regressed results into 3D predictions. - - Note that offsets are not transformed to the projected 3D centers. - - Args: - bbox (torch.Tensor): Raw bounding box predictions in shape - [N, C, H, W]. - scale (tuple[`Scale`]): Learnable scale parameters. - stride (int): Stride for a specific feature level. - training (bool): Whether the decoding is in the training - procedure. - cls_score (torch.Tensor): Classification score map for deciding - which base depth or dim is used. Defaults to None. - - Returns: - torch.Tensor: Decoded boxes. - """ - # scale the bbox of different level - # only apply to offset, depth and size prediction - scale_offset, scale_depth, scale_size = scale[0:3] - - clone_bbox = bbox.clone() - bbox[:, :2] = scale_offset(clone_bbox[:, :2]).float() - bbox[:, 2] = scale_depth(clone_bbox[:, 2]).float() - bbox[:, 3:6] = scale_size(clone_bbox[:, 3:6]).float() - - if self.base_depths is None: - bbox[:, 2] = bbox[:, 2].exp() - elif len(self.base_depths) == 1: # only single prior - mean = self.base_depths[0][0] - std = self.base_depths[0][1] - bbox[:, 2] = mean + bbox.clone()[:, 2] * std - else: # multi-class priors - assert len(self.base_depths) == cls_score.shape[1], \ - 'The number of multi-class depth priors should be equal to ' \ - 'the number of categories.' - indices = cls_score.max(dim=1)[1] - depth_priors = cls_score.new_tensor( - self.base_depths)[indices, :].permute(0, 3, 1, 2) - mean = depth_priors[:, 0] - std = depth_priors[:, 1] - bbox[:, 2] = mean + bbox.clone()[:, 2] * std - - bbox[:, 3:6] = bbox[:, 3:6].exp() - if self.base_dims is not None: - assert len(self.base_dims) == cls_score.shape[1], \ - 'The number of anchor sizes should be equal to the number ' \ - 'of categories.' - indices = cls_score.max(dim=1)[1] - size_priors = cls_score.new_tensor( - self.base_dims)[indices, :].permute(0, 3, 1, 2) - bbox[:, 3:6] = size_priors * bbox.clone()[:, 3:6] - - assert self.norm_on_bbox is True, 'Setting norm_on_bbox to False '\ - 'has not been thoroughly tested for FCOS3D.' - if self.norm_on_bbox: - if not training: - # Note that this line is conducted only when testing - bbox[:, :2] *= stride - - return bbox - - @staticmethod - def decode_yaw(bbox, centers2d, dir_cls, dir_offset, cam2img): - """Decode yaw angle and change it from local to global.i. - - Args: - bbox (torch.Tensor): Bounding box predictions in shape - [N, C] with yaws to be decoded. - centers2d (torch.Tensor): Projected 3D-center on the image planes - corresponding to the box predictions. - dir_cls (torch.Tensor): Predicted direction classes. - dir_offset (float): Direction offset before dividing all the - directions into several classes. - cam2img (torch.Tensor): Camera intrinsic matrix in shape [4, 4]. - - Returns: - torch.Tensor: Bounding boxes with decoded yaws. 
- """ - if bbox.shape[0] > 0: - dir_rot = limit_period(bbox[..., 6] - dir_offset, 0, np.pi) - bbox[..., 6] = \ - dir_rot + dir_offset + np.pi * dir_cls.to(bbox.dtype) - - bbox[:, 6] = torch.atan2(centers2d[:, 0] - cam2img[0, 2], - cam2img[0, 0]) + bbox[:, 6] - - return bbox diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/groupfree3d_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/groupfree3d_bbox_coder.py deleted file mode 100644 index 08d83e92..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/groupfree3d_bbox_coder.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core.bbox.builder import BBOX_CODERS -from .partial_bin_based_bbox_coder import PartialBinBasedBBoxCoder - - -@BBOX_CODERS.register_module() -class GroupFree3DBBoxCoder(PartialBinBasedBBoxCoder): - """Modified partial bin based bbox coder for GroupFree3D. - - Args: - num_dir_bins (int): Number of bins to encode direction angle. - num_sizes (int): Number of size clusters. - mean_sizes (list[list[int]]): Mean size of bboxes in each class. - with_rot (bool, optional): Whether the bbox is with rotation. - Defaults to True. - size_cls_agnostic (bool, optional): Whether the predicted size is - class-agnostic. Defaults to True. - """ - - def __init__(self, - num_dir_bins, - num_sizes, - mean_sizes, - with_rot=True, - size_cls_agnostic=True): - super(GroupFree3DBBoxCoder, self).__init__( - num_dir_bins=num_dir_bins, - num_sizes=num_sizes, - mean_sizes=mean_sizes, - with_rot=with_rot) - self.size_cls_agnostic = size_cls_agnostic - - def encode(self, gt_bboxes_3d, gt_labels_3d): - """Encode ground truth to prediction targets. - - Args: - gt_bboxes_3d (BaseInstance3DBoxes): Ground truth bboxes - with shape (n, 7). - gt_labels_3d (torch.Tensor): Ground truth classes. - - Returns: - tuple: Targets of center, size and direction. - """ - # generate center target - center_target = gt_bboxes_3d.gravity_center - - # generate bbox size target - size_target = gt_bboxes_3d.dims - size_class_target = gt_labels_3d - size_res_target = gt_bboxes_3d.dims - gt_bboxes_3d.tensor.new_tensor( - self.mean_sizes)[size_class_target] - - # generate dir target - box_num = gt_labels_3d.shape[0] - if self.with_rot: - (dir_class_target, - dir_res_target) = self.angle2class(gt_bboxes_3d.yaw) - else: - dir_class_target = gt_labels_3d.new_zeros(box_num) - dir_res_target = gt_bboxes_3d.tensor.new_zeros(box_num) - - return (center_target, size_target, size_class_target, size_res_target, - dir_class_target, dir_res_target) - - def decode(self, bbox_out, prefix=''): - """Decode predicted parts to bbox3d. - - Args: - bbox_out (dict): Predictions from model, should contain keys below. - - - center: predicted bottom center of bboxes. - - dir_class: predicted bbox direction class. - - dir_res: predicted bbox direction residual. - - size_class: predicted bbox size class. - - size_res: predicted bbox size residual. - - size: predicted class-agnostic bbox size - prefix (str, optional): Decode predictions with specific prefix. - Defaults to ''. - - Returns: - torch.Tensor: Decoded bbox3d with shape (batch, n, 7). 
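A small illustration of the local-to-global yaw shift in `decode_yaw` above: the regressed yaw is offset by the viewing-ray angle of the projected box center, `atan2(u - cx, fx)`. The intrinsics and pixel column below are made-up values.

```python
import numpy as np

fx, cx = 721.5, 609.6    # assumed focal length and principal point (pixels)
center_u = 800.0         # projected 3D-center column on the image plane
local_yaw = 0.2          # yaw predicted relative to the camera ray

ray_angle = np.arctan2(center_u - cx, fx)
global_yaw = local_yaw + ray_angle
print(round(float(global_yaw), 3))   # ~0.458
```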
- """ - center = bbox_out[f'{prefix}center'] - batch_size, num_proposal = center.shape[:2] - - # decode heading angle - if self.with_rot: - dir_class = torch.argmax(bbox_out[f'{prefix}dir_class'], -1) - dir_res = torch.gather(bbox_out[f'{prefix}dir_res'], 2, - dir_class.unsqueeze(-1)) - dir_res.squeeze_(2) - dir_angle = self.class2angle(dir_class, dir_res).reshape( - batch_size, num_proposal, 1) - else: - dir_angle = center.new_zeros(batch_size, num_proposal, 1) - - # decode bbox size - if self.size_cls_agnostic: - bbox_size = bbox_out[f'{prefix}size'].reshape( - batch_size, num_proposal, 3) - else: - size_class = torch.argmax( - bbox_out[f'{prefix}size_class'], -1, keepdim=True) - size_res = torch.gather( - bbox_out[f'{prefix}size_res'], 2, - size_class.unsqueeze(-1).repeat(1, 1, 1, 3)) - mean_sizes = center.new_tensor(self.mean_sizes) - size_base = torch.index_select(mean_sizes, 0, - size_class.reshape(-1)) - bbox_size = size_base.reshape(batch_size, num_proposal, - -1) + size_res.squeeze(2) - - bbox3d = torch.cat([center, bbox_size, dir_angle], dim=-1) - return bbox3d - - def split_pred(self, cls_preds, reg_preds, base_xyz, prefix=''): - """Split predicted features to specific parts. - - Args: - cls_preds (torch.Tensor): Class predicted features to split. - reg_preds (torch.Tensor): Regression predicted features to split. - base_xyz (torch.Tensor): Coordinates of points. - prefix (str, optional): Decode predictions with specific prefix. - Defaults to ''. - - Returns: - dict[str, torch.Tensor]: Split results. - """ - results = {} - start, end = 0, 0 - - cls_preds_trans = cls_preds.transpose(2, 1) - reg_preds_trans = reg_preds.transpose(2, 1) - - # decode center - end += 3 - # (batch_size, num_proposal, 3) - results[f'{prefix}center_residual'] = \ - reg_preds_trans[..., start:end].contiguous() - results[f'{prefix}center'] = base_xyz + \ - reg_preds_trans[..., start:end].contiguous() - start = end - - # decode direction - end += self.num_dir_bins - results[f'{prefix}dir_class'] = \ - reg_preds_trans[..., start:end].contiguous() - start = end - - end += self.num_dir_bins - dir_res_norm = reg_preds_trans[..., start:end].contiguous() - start = end - - results[f'{prefix}dir_res_norm'] = dir_res_norm - results[f'{prefix}dir_res'] = dir_res_norm * ( - np.pi / self.num_dir_bins) - - # decode size - if self.size_cls_agnostic: - end += 3 - results[f'{prefix}size'] = \ - reg_preds_trans[..., start:end].contiguous() - else: - end += self.num_sizes - results[f'{prefix}size_class'] = reg_preds_trans[ - ..., start:end].contiguous() - start = end - - end += self.num_sizes * 3 - size_res_norm = reg_preds_trans[..., start:end] - batch_size, num_proposal = reg_preds_trans.shape[:2] - size_res_norm = size_res_norm.view( - [batch_size, num_proposal, self.num_sizes, 3]) - start = end - - results[f'{prefix}size_res_norm'] = size_res_norm.contiguous() - mean_sizes = reg_preds.new_tensor(self.mean_sizes) - results[f'{prefix}size_res'] = ( - size_res_norm * mean_sizes.unsqueeze(0).unsqueeze(0)) - - # decode objectness score - # Group-Free-3D objectness output shape (batch, proposal, 1) - results[f'{prefix}obj_scores'] = cls_preds_trans[..., :1].contiguous() - - # decode semantic score - results[f'{prefix}sem_scores'] = cls_preds_trans[..., 1:].contiguous() - - return results diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/monoflex_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/monoflex_bbox_coder.py deleted file mode 100644 index e2ada29a..00000000 --- 
a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/monoflex_bbox_coder.py +++ /dev/null @@ -1,515 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from torch.nn import functional as F - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class MonoFlexCoder(BaseBBoxCoder): - """Bbox Coder for MonoFlex. - - Args: - depth_mode (str): The mode for depth calculation. - Available options are "linear", "inv_sigmoid", and "exp". - base_depth (tuple[float]): References for decoding box depth. - depth_range (list): Depth range of predicted depth. - combine_depth (bool): Whether to use combined depth (direct depth - and depth from keypoints) or use direct depth only. - uncertainty_range (list): Uncertainty range of predicted depth. - base_dims (tuple[tuple[float]]): Dimensions mean and std of decode bbox - dimensions [l, h, w] for each category. - dims_mode (str): The mode for dimension calculation. - Available options are "linear" and "exp". - multibin (bool): Whether to use multibin representation. - num_dir_bins (int): Number of Number of bins to encode - direction angle. - bin_centers (list[float]): Local yaw centers while using multibin - representations. - bin_margin (float): Margin of multibin representations. - code_size (int): The dimension of boxes to be encoded. - eps (float, optional): A value added to the denominator for numerical - stability. Default 1e-3. - """ - - def __init__(self, - depth_mode, - base_depth, - depth_range, - combine_depth, - uncertainty_range, - base_dims, - dims_mode, - multibin, - num_dir_bins, - bin_centers, - bin_margin, - code_size, - eps=1e-3): - super(MonoFlexCoder, self).__init__() - - # depth related - self.depth_mode = depth_mode - self.base_depth = base_depth - self.depth_range = depth_range - self.combine_depth = combine_depth - self.uncertainty_range = uncertainty_range - - # dimensions related - self.base_dims = base_dims - self.dims_mode = dims_mode - - # orientation related - self.multibin = multibin - self.num_dir_bins = num_dir_bins - self.bin_centers = bin_centers - self.bin_margin = bin_margin - - # output related - self.bbox_code_size = code_size - self.eps = eps - - def encode(self, gt_bboxes_3d): - """Encode ground truth to prediction targets. - - Args: - gt_bboxes_3d (`BaseInstance3DBoxes`): Ground truth 3D bboxes. - shape: (N, 7). - - Returns: - torch.Tensor: Targets of orientations. - """ - local_yaw = gt_bboxes_3d.local_yaw - # encode local yaw (-pi ~ pi) to multibin format - encode_local_yaw = local_yaw.new_zeros( - [local_yaw.shape[0], self.num_dir_bins * 2]) - bin_size = 2 * np.pi / self.num_dir_bins - margin_size = bin_size * self.bin_margin - - bin_centers = local_yaw.new_tensor(self.bin_centers) - range_size = bin_size / 2 + margin_size - - offsets = local_yaw.unsqueeze(1) - bin_centers.unsqueeze(0) - offsets[offsets > np.pi] = offsets[offsets > np.pi] - 2 * np.pi - offsets[offsets < -np.pi] = offsets[offsets < -np.pi] + 2 * np.pi - - for i in range(self.num_dir_bins): - offset = offsets[:, i] - inds = abs(offset) < range_size - encode_local_yaw[inds, i] = 1 - encode_local_yaw[inds, i + self.num_dir_bins] = offset[inds] - - orientation_target = encode_local_yaw - - return orientation_target - - def decode(self, bbox, base_centers2d, labels, downsample_ratio, cam2imgs): - """Decode bounding box regression into 3D predictions. 
- - Args: - bbox (Tensor): Raw bounding box predictions for each - predict center2d point. - shape: (N, C) - base_centers2d (torch.Tensor): Base centers2d for 3D bboxes. - shape: (N, 2). - labels (Tensor): Batch predict class label for each predict - center2d point. - shape: (N, ) - downsample_ratio (int): The stride of feature map. - cam2imgs (Tensor): Batch images' camera intrinsic matrix. - shape: kitti (N, 4, 4) nuscenes (N, 3, 3) - - Return: - dict: The 3D prediction dict decoded from regression map. - the dict has components below: - - bboxes2d (torch.Tensor): Decoded [x1, y1, x2, y2] format - 2D bboxes. - - dimensions (torch.Tensor): Decoded dimensions for each - object. - - offsets2d (torch.Tenosr): Offsets between base centers2d - and real centers2d. - - direct_depth (torch.Tensor): Decoded directly regressed - depth. - - keypoints2d (torch.Tensor): Keypoints of each projected - 3D box on image. - - keypoints_depth (torch.Tensor): Decoded depth from keypoints. - - combined_depth (torch.Tensor): Combined depth using direct - depth and keypoints depth with depth uncertainty. - - orientations (torch.Tensor): Multibin format orientations - (local yaw) for each objects. - """ - - # 4 dimensions for FCOS style regression - pred_bboxes2d = bbox[:, 0:4] - - # change FCOS style to [x1, y1, x2, y2] format for IOU Loss - pred_bboxes2d = self.decode_bboxes2d(pred_bboxes2d, base_centers2d) - - # 2 dimensions for projected centers2d offsets - pred_offsets2d = bbox[:, 4:6] - - # 3 dimensions for 3D bbox dimensions offsets - pred_dimensions_offsets3d = bbox[:, 29:32] - - # the first 8 dimensions are for orientation bin classification - # and the second 8 dimensions are for orientation offsets. - pred_orientations = torch.cat((bbox[:, 32:40], bbox[:, 40:48]), dim=1) - - # 3 dimensions for the uncertainties of the solved depths from - # groups of keypoints - pred_keypoints_depth_uncertainty = bbox[:, 26:29] - - # 1 dimension for the uncertainty of directly regressed depth - pred_direct_depth_uncertainty = bbox[:, 49:50].squeeze(-1) - - # 2 dimension of offsets x keypoints (8 corners + top/bottom center) - pred_keypoints2d = bbox[:, 6:26].reshape(-1, 10, 2) - - # 1 dimension for depth offsets - pred_direct_depth_offsets = bbox[:, 48:49].squeeze(-1) - - # decode the pred residual dimensions to real dimensions - pred_dimensions = self.decode_dims(labels, pred_dimensions_offsets3d) - pred_direct_depth = self.decode_direct_depth(pred_direct_depth_offsets) - pred_keypoints_depth = self.keypoints2depth(pred_keypoints2d, - pred_dimensions, cam2imgs, - downsample_ratio) - - pred_direct_depth_uncertainty = torch.clamp( - pred_direct_depth_uncertainty, self.uncertainty_range[0], - self.uncertainty_range[1]) - pred_keypoints_depth_uncertainty = torch.clamp( - pred_keypoints_depth_uncertainty, self.uncertainty_range[0], - self.uncertainty_range[1]) - - if self.combine_depth: - pred_depth_uncertainty = torch.cat( - (pred_direct_depth_uncertainty.unsqueeze(-1), - pred_keypoints_depth_uncertainty), - dim=1).exp() - pred_depth = torch.cat( - (pred_direct_depth.unsqueeze(-1), pred_keypoints_depth), dim=1) - pred_combined_depth = \ - self.combine_depths(pred_depth, pred_depth_uncertainty) - else: - pred_combined_depth = None - - preds = dict( - bboxes2d=pred_bboxes2d, - dimensions=pred_dimensions, - offsets2d=pred_offsets2d, - keypoints2d=pred_keypoints2d, - orientations=pred_orientations, - direct_depth=pred_direct_depth, - keypoints_depth=pred_keypoints_depth, - combined_depth=pred_combined_depth, - 
direct_depth_uncertainty=pred_direct_depth_uncertainty, - keypoints_depth_uncertainty=pred_keypoints_depth_uncertainty, - ) - - return preds - - def decode_direct_depth(self, depth_offsets): - """Transform depth offset to directly regressed depth. - - Args: - depth_offsets (torch.Tensor): Predicted depth offsets. - shape: (N, ) - - Return: - torch.Tensor: Directly regressed depth. - shape: (N, ) - """ - if self.depth_mode == 'exp': - direct_depth = depth_offsets.exp() - elif self.depth_mode == 'linear': - base_depth = depth_offsets.new_tensor(self.base_depth) - direct_depth = depth_offsets * base_depth[1] + base_depth[0] - elif self.depth_mode == 'inv_sigmoid': - direct_depth = 1 / torch.sigmoid(depth_offsets) - 1 - else: - raise ValueError - - if self.depth_range is not None: - direct_depth = torch.clamp( - direct_depth, min=self.depth_range[0], max=self.depth_range[1]) - - return direct_depth - - def decode_location(self, - base_centers2d, - offsets2d, - depths, - cam2imgs, - downsample_ratio, - pad_mode='default'): - """Retrieve object location. - - Args: - base_centers2d (torch.Tensor): predicted base centers2d. - shape: (N, 2) - offsets2d (torch.Tensor): The offsets between real centers2d - and base centers2d. - shape: (N , 2) - depths (torch.Tensor): Depths of objects. - shape: (N, ) - cam2imgs (torch.Tensor): Batch images' camera intrinsic matrix. - shape: kitti (N, 4, 4) nuscenes (N, 3, 3) - downsample_ratio (int): The stride of feature map. - pad_mode (str, optional): Padding mode used in - training data augmentation. - - Return: - tuple(torch.Tensor): Centers of 3D boxes. - shape: (N, 3) - """ - N = cam2imgs.shape[0] - # (N, 4, 4) - cam2imgs_inv = cam2imgs.inverse() - if pad_mode == 'default': - centers2d_img = (base_centers2d + offsets2d) * downsample_ratio - else: - raise NotImplementedError - # (N, 3) - centers2d_img = \ - torch.cat((centers2d_img, depths.unsqueeze(-1)), dim=1) - # (N, 4, 1) - centers2d_extend = \ - torch.cat((centers2d_img, centers2d_img.new_ones(N, 1)), - dim=1).unsqueeze(-1) - locations = torch.matmul(cam2imgs_inv, centers2d_extend).squeeze(-1) - - return locations[:, :3] - - def keypoints2depth(self, - keypoints2d, - dimensions, - cam2imgs, - downsample_ratio=4, - group0_index=[(7, 3), (0, 4)], - group1_index=[(2, 6), (1, 5)]): - """Decode depth form three groups of keypoints and geometry projection - model. 2D keypoints inlucding 8 coreners and top/bottom centers will be - divided into three groups which will be used to calculate three depths - of object. - - .. code-block:: none - - Group center keypoints: - - + --------------- + - /| top center /| - / | . / | - / | | / | - + ---------|----- + + - | / | | / - | / . | / - |/ bottom center |/ - + --------------- + - - Group 0 keypoints: - - 0 - + -------------- + - /| /| - / | / | - / | 5/ | - + -------------- + + - | /3 | / - | / | / - |/ |/ - + -------------- + 6 - - Group 1 keypoints: - - 4 - + -------------- + - /| /| - / | / | - / | / | - 1 + -------------- + + 7 - | / | / - | / | / - |/ |/ - 2 + -------------- + - - - Args: - keypoints2d (torch.Tensor): Keypoints of objects. - 8 vertices + top/bottom center. - shape: (N, 10, 2) - dimensions (torch.Tensor): Dimensions of objetcts. - shape: (N, 3) - cam2imgs (torch.Tensor): Batch images' camera intrinsic matrix. - shape: kitti (N, 4, 4) nuscenes (N, 3, 3) - downsample_ratio (int, opitonal): The stride of feature map. - Defaults: 4. - group0_index(list[tuple[int]], optional): Keypoints group 0 - of index to calculate the depth. - Defaults: [0, 3, 4, 7]. 
- group1_index(list[tuple[int]], optional): Keypoints group 1 - of index to calculate the depth. - Defaults: [1, 2, 5, 6] - - Return: - tuple(torch.Tensor): Depth computed from three groups of - keypoints (top/bottom, group0, group1) - shape: (N, 3) - """ - - pred_height_3d = dimensions[:, 1].clone() - f_u = cam2imgs[:, 0, 0] - center_height = keypoints2d[:, -2, 1] - keypoints2d[:, -1, 1] - corner_group0_height = keypoints2d[:, group0_index[0], 1] \ - - keypoints2d[:, group0_index[1], 1] - corner_group1_height = keypoints2d[:, group1_index[0], 1] \ - - keypoints2d[:, group1_index[1], 1] - center_depth = f_u * pred_height_3d / ( - F.relu(center_height) * downsample_ratio + self.eps) - corner_group0_depth = (f_u * pred_height_3d).unsqueeze(-1) / ( - F.relu(corner_group0_height) * downsample_ratio + self.eps) - corner_group1_depth = (f_u * pred_height_3d).unsqueeze(-1) / ( - F.relu(corner_group1_height) * downsample_ratio + self.eps) - - corner_group0_depth = corner_group0_depth.mean(dim=1) - corner_group1_depth = corner_group1_depth.mean(dim=1) - - keypoints_depth = torch.stack( - (center_depth, corner_group0_depth, corner_group1_depth), dim=1) - keypoints_depth = torch.clamp( - keypoints_depth, min=self.depth_range[0], max=self.depth_range[1]) - - return keypoints_depth - - def decode_dims(self, labels, dims_offset): - """Retrieve object dimensions. - - Args: - labels (torch.Tensor): Each points' category id. - shape: (N, K) - dims_offset (torch.Tensor): Dimension offsets. - shape: (N, 3) - - Returns: - torch.Tensor: Shape (N, 3) - """ - - if self.dims_mode == 'exp': - dims_offset = dims_offset.exp() - elif self.dims_mode == 'linear': - labels = labels.long() - base_dims = dims_offset.new_tensor(self.base_dims) - dims_mean = base_dims[:, :3] - dims_std = base_dims[:, 3:6] - cls_dimension_mean = dims_mean[labels, :] - cls_dimension_std = dims_std[labels, :] - dimensions = dims_offset * cls_dimension_mean + cls_dimension_std - else: - raise ValueError - - return dimensions - - def decode_orientation(self, ori_vector, locations): - """Retrieve object orientation. - - Args: - ori_vector (torch.Tensor): Local orientation vector - in [axis_cls, head_cls, sin, cos] format. - shape: (N, num_dir_bins * 4) - locations (torch.Tensor): Object location. - shape: (N, 3) - - Returns: - tuple[torch.Tensor]: yaws and local yaws of 3d bboxes. 
- """ - if self.multibin: - pred_bin_cls = ori_vector[:, :self.num_dir_bins * 2].view( - -1, self.num_dir_bins, 2) - pred_bin_cls = pred_bin_cls.softmax(dim=2)[..., 1] - orientations = ori_vector.new_zeros(ori_vector.shape[0]) - for i in range(self.num_dir_bins): - mask_i = (pred_bin_cls.argmax(dim=1) == i) - start_bin = self.num_dir_bins * 2 + i * 2 - end_bin = start_bin + 2 - pred_bin_offset = ori_vector[mask_i, start_bin:end_bin] - orientations[mask_i] = pred_bin_offset[:, 0].atan2( - pred_bin_offset[:, 1]) + self.bin_centers[i] - else: - axis_cls = ori_vector[:, :2].softmax(dim=1) - axis_cls = axis_cls[:, 0] < axis_cls[:, 1] - head_cls = ori_vector[:, 2:4].softmax(dim=1) - head_cls = head_cls[:, 0] < head_cls[:, 1] - # cls axis - orientations = self.bin_centers[axis_cls + head_cls * 2] - sin_cos_offset = F.normalize(ori_vector[:, 4:]) - orientations += sin_cos_offset[:, 0].atan(sin_cos_offset[:, 1]) - - locations = locations.view(-1, 3) - rays = locations[:, 0].atan2(locations[:, 2]) - local_yaws = orientations - yaws = local_yaws + rays - - larger_idx = (yaws > np.pi).nonzero(as_tuple=False) - small_idx = (yaws < -np.pi).nonzero(as_tuple=False) - if len(larger_idx) != 0: - yaws[larger_idx] -= 2 * np.pi - if len(small_idx) != 0: - yaws[small_idx] += 2 * np.pi - - larger_idx = (local_yaws > np.pi).nonzero(as_tuple=False) - small_idx = (local_yaws < -np.pi).nonzero(as_tuple=False) - if len(larger_idx) != 0: - local_yaws[larger_idx] -= 2 * np.pi - if len(small_idx) != 0: - local_yaws[small_idx] += 2 * np.pi - - return yaws, local_yaws - - def decode_bboxes2d(self, reg_bboxes2d, base_centers2d): - """Retrieve [x1, y1, x2, y2] format 2D bboxes. - - Args: - reg_bboxes2d (torch.Tensor): Predicted FCOS style - 2D bboxes. - shape: (N, 4) - base_centers2d (torch.Tensor): predicted base centers2d. - shape: (N, 2) - - Returns: - torch.Tenosr: [x1, y1, x2, y2] format 2D bboxes. - """ - centers_x = base_centers2d[:, 0] - centers_y = base_centers2d[:, 1] - - xs_min = centers_x - reg_bboxes2d[..., 0] - ys_min = centers_y - reg_bboxes2d[..., 1] - xs_max = centers_x + reg_bboxes2d[..., 2] - ys_max = centers_y + reg_bboxes2d[..., 3] - - bboxes2d = torch.stack([xs_min, ys_min, xs_max, ys_max], dim=-1) - - return bboxes2d - - def combine_depths(self, depth, depth_uncertainty): - """Combine all the prediced depths with depth uncertainty. - - Args: - depth (torch.Tensor): Predicted depths of each object. - 2D bboxes. - shape: (N, 4) - depth_uncertainty (torch.Tensor): Depth uncertainty for - each depth of each object. - shape: (N, 4) - - Returns: - torch.Tenosr: combined depth. - """ - uncertainty_weights = 1 / depth_uncertainty - uncertainty_weights = \ - uncertainty_weights / \ - uncertainty_weights.sum(dim=1, keepdim=True) - combined_depth = torch.sum(depth * uncertainty_weights, dim=1) - - return combined_depth diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/partial_bin_based_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/partial_bin_based_bbox_coder.py deleted file mode 100644 index ed8020d7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/partial_bin_based_bbox_coder.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class PartialBinBasedBBoxCoder(BaseBBoxCoder): - """Partial bin based bbox coder. 
- - Args: - num_dir_bins (int): Number of bins to encode direction angle. - num_sizes (int): Number of size clusters. - mean_sizes (list[list[int]]): Mean size of bboxes in each class. - with_rot (bool): Whether the bbox is with rotation. - """ - - def __init__(self, num_dir_bins, num_sizes, mean_sizes, with_rot=True): - super(PartialBinBasedBBoxCoder, self).__init__() - assert len(mean_sizes) == num_sizes - self.num_dir_bins = num_dir_bins - self.num_sizes = num_sizes - self.mean_sizes = mean_sizes - self.with_rot = with_rot - - def encode(self, gt_bboxes_3d, gt_labels_3d): - """Encode ground truth to prediction targets. - - Args: - gt_bboxes_3d (BaseInstance3DBoxes): Ground truth bboxes - with shape (n, 7). - gt_labels_3d (torch.Tensor): Ground truth classes. - - Returns: - tuple: Targets of center, size and direction. - """ - # generate center target - center_target = gt_bboxes_3d.gravity_center - - # generate bbox size target - size_class_target = gt_labels_3d - size_res_target = gt_bboxes_3d.dims - gt_bboxes_3d.tensor.new_tensor( - self.mean_sizes)[size_class_target] - - # generate dir target - box_num = gt_labels_3d.shape[0] - if self.with_rot: - (dir_class_target, - dir_res_target) = self.angle2class(gt_bboxes_3d.yaw) - else: - dir_class_target = gt_labels_3d.new_zeros(box_num) - dir_res_target = gt_bboxes_3d.tensor.new_zeros(box_num) - - return (center_target, size_class_target, size_res_target, - dir_class_target, dir_res_target) - - def decode(self, bbox_out, suffix=''): - """Decode predicted parts to bbox3d. - - Args: - bbox_out (dict): Predictions from model, should contain keys below. - - - center: predicted bottom center of bboxes. - - dir_class: predicted bbox direction class. - - dir_res: predicted bbox direction residual. - - size_class: predicted bbox size class. - - size_res: predicted bbox size residual. - suffix (str): Decode predictions with specific suffix. - - Returns: - torch.Tensor: Decoded bbox3d with shape (batch, n, 7). - """ - center = bbox_out['center' + suffix] - batch_size, num_proposal = center.shape[:2] - - # decode heading angle - if self.with_rot: - dir_class = torch.argmax(bbox_out['dir_class' + suffix], -1) - dir_res = torch.gather(bbox_out['dir_res' + suffix], 2, - dir_class.unsqueeze(-1)) - dir_res.squeeze_(2) - dir_angle = self.class2angle(dir_class, dir_res).reshape( - batch_size, num_proposal, 1) - else: - dir_angle = center.new_zeros(batch_size, num_proposal, 1) - - # decode bbox size - size_class = torch.argmax( - bbox_out['size_class' + suffix], -1, keepdim=True) - size_res = torch.gather(bbox_out['size_res' + suffix], 2, - size_class.unsqueeze(-1).repeat(1, 1, 1, 3)) - mean_sizes = center.new_tensor(self.mean_sizes) - size_base = torch.index_select(mean_sizes, 0, size_class.reshape(-1)) - bbox_size = size_base.reshape(batch_size, num_proposal, - -1) + size_res.squeeze(2) - - bbox3d = torch.cat([center, bbox_size, dir_angle], dim=-1) - return bbox3d - - def decode_corners(self, center, size_res, size_class): - """Decode center, size residuals and class to corners. Only useful for - axis-aligned bounding boxes, so angle isn't considered. 
- - Args: - center (torch.Tensor): Shape [B, N, 3] - size_res (torch.Tensor): Shape [B, N, 3] or [B, N, C, 3] - size_class (torch.Tensor): Shape: [B, N] or [B, N, 1] - or [B, N, C, 3] - - Returns: - torch.Tensor: Corners with shape [B, N, 6] - """ - if len(size_class.shape) == 2 or size_class.shape[-1] == 1: - batch_size, proposal_num = size_class.shape[:2] - one_hot_size_class = size_res.new_zeros( - (batch_size, proposal_num, self.num_sizes)) - if len(size_class.shape) == 2: - size_class = size_class.unsqueeze(-1) - one_hot_size_class.scatter_(2, size_class, 1) - one_hot_size_class_expand = one_hot_size_class.unsqueeze( - -1).repeat(1, 1, 1, 3).contiguous() - else: - one_hot_size_class_expand = size_class - - if len(size_res.shape) == 4: - size_res = torch.sum(size_res * one_hot_size_class_expand, 2) - - mean_sizes = size_res.new_tensor(self.mean_sizes) - mean_sizes = torch.sum(mean_sizes * one_hot_size_class_expand, 2) - size_full = (size_res + 1) * mean_sizes - size_full = torch.clamp(size_full, 0) - half_size_full = size_full / 2 - corner1 = center - half_size_full - corner2 = center + half_size_full - corners = torch.cat([corner1, corner2], dim=-1) - return corners - - def split_pred(self, cls_preds, reg_preds, base_xyz): - """Split predicted features to specific parts. - - Args: - cls_preds (torch.Tensor): Class predicted features to split. - reg_preds (torch.Tensor): Regression predicted features to split. - base_xyz (torch.Tensor): Coordinates of points. - - Returns: - dict[str, torch.Tensor]: Split results. - """ - results = {} - start, end = 0, 0 - - cls_preds_trans = cls_preds.transpose(2, 1) - reg_preds_trans = reg_preds.transpose(2, 1) - - # decode center - end += 3 - # (batch_size, num_proposal, 3) - results['center'] = base_xyz + \ - reg_preds_trans[..., start:end].contiguous() - start = end - - # decode direction - end += self.num_dir_bins - results['dir_class'] = reg_preds_trans[..., start:end].contiguous() - start = end - - end += self.num_dir_bins - dir_res_norm = reg_preds_trans[..., start:end].contiguous() - start = end - - results['dir_res_norm'] = dir_res_norm - results['dir_res'] = dir_res_norm * (np.pi / self.num_dir_bins) - - # decode size - end += self.num_sizes - results['size_class'] = reg_preds_trans[..., start:end].contiguous() - start = end - - end += self.num_sizes * 3 - size_res_norm = reg_preds_trans[..., start:end] - batch_size, num_proposal = reg_preds_trans.shape[:2] - size_res_norm = size_res_norm.view( - [batch_size, num_proposal, self.num_sizes, 3]) - start = end - - results['size_res_norm'] = size_res_norm.contiguous() - mean_sizes = reg_preds.new_tensor(self.mean_sizes) - results['size_res'] = ( - size_res_norm * mean_sizes.unsqueeze(0).unsqueeze(0)) - - # decode objectness score - start = 0 - end = 2 - results['obj_scores'] = cls_preds_trans[..., start:end].contiguous() - start = end - - # decode semantic score - results['sem_scores'] = cls_preds_trans[..., start:].contiguous() - - return results - - def angle2class(self, angle): - """Convert continuous angle to a discrete class and a residual. - - Convert continuous angle to a discrete class and a small - regression number from class center angle to current angle. - - Args: - angle (torch.Tensor): Angle is from 0-2pi (or -pi~pi), - class center at 0, 1*(2pi/N), 2*(2pi/N) ... (N-1)*(2pi/N). - - Returns: - tuple: Encoded discrete class and residual. 
- """ - angle = angle % (2 * np.pi) - angle_per_class = 2 * np.pi / float(self.num_dir_bins) - shifted_angle = (angle + angle_per_class / 2) % (2 * np.pi) - angle_cls = shifted_angle // angle_per_class - angle_res = shifted_angle - ( - angle_cls * angle_per_class + angle_per_class / 2) - return angle_cls.long(), angle_res - - def class2angle(self, angle_cls, angle_res, limit_period=True): - """Inverse function to angle2class. - - Args: - angle_cls (torch.Tensor): Angle class to decode. - angle_res (torch.Tensor): Angle residual to decode. - limit_period (bool): Whether to limit angle to [-pi, pi]. - - Returns: - torch.Tensor: Angle decoded from angle_cls and angle_res. - """ - angle_per_class = 2 * np.pi / float(self.num_dir_bins) - angle_center = angle_cls.float() * angle_per_class - angle = angle_center + angle_res - if limit_period: - angle[angle > np.pi] -= 2 * np.pi - return angle diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/pgd_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/pgd_bbox_coder.py deleted file mode 100644 index 094ed39d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/pgd_bbox_coder.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from torch.nn import functional as F - -from mmdet.core.bbox.builder import BBOX_CODERS -from .fcos3d_bbox_coder import FCOS3DBBoxCoder - - -@BBOX_CODERS.register_module() -class PGDBBoxCoder(FCOS3DBBoxCoder): - """Bounding box coder for PGD.""" - - def encode(self, gt_bboxes_3d, gt_labels_3d, gt_bboxes, gt_labels): - # TODO: refactor the encoder codes in the FCOS3D and PGD head - pass - - def decode_2d(self, - bbox, - scale, - stride, - max_regress_range, - training, - pred_keypoints=False, - pred_bbox2d=True): - """Decode regressed 2D attributes. - - Args: - bbox (torch.Tensor): Raw bounding box predictions in shape - [N, C, H, W]. - scale (tuple[`Scale`]): Learnable scale parameters. - stride (int): Stride for a specific feature level. - max_regress_range (int): Maximum regression range for a specific - feature level. - training (bool): Whether the decoding is in the training - procedure. - pred_keypoints (bool, optional): Whether to predict keypoints. - Defaults to False. - pred_bbox2d (bool, optional): Whether to predict 2D bounding - boxes. Defaults to False. - - Returns: - torch.Tensor: Decoded boxes. - """ - clone_bbox = bbox.clone() - if pred_keypoints: - scale_kpts = scale[3] - # 2 dimension of offsets x 8 corners of a 3D bbox - bbox[:, self.bbox_code_size:self.bbox_code_size + 16] = \ - torch.tanh(scale_kpts(clone_bbox[ - :, self.bbox_code_size:self.bbox_code_size + 16]).float()) - - if pred_bbox2d: - scale_bbox2d = scale[-1] - # The last four dimensions are offsets to four sides of a 2D bbox - bbox[:, -4:] = scale_bbox2d(clone_bbox[:, -4:]).float() - - if self.norm_on_bbox: - if pred_bbox2d: - bbox[:, -4:] = F.relu(bbox.clone()[:, -4:]) - if not training: - if pred_keypoints: - bbox[ - :, self.bbox_code_size:self.bbox_code_size + 16] *= \ - max_regress_range - if pred_bbox2d: - bbox[:, -4:] *= stride - else: - if pred_bbox2d: - bbox[:, -4:] = bbox.clone()[:, -4:].exp() - return bbox - - def decode_prob_depth(self, depth_cls_preds, depth_range, depth_unit, - division, num_depth_cls): - """Decode probabilistic depth map. - - Args: - depth_cls_preds (torch.Tensor): Depth probabilistic map in shape - [..., self.num_depth_cls] (raw output before softmax). 
- depth_range (tuple[float]): Range of depth estimation. - depth_unit (int): Unit of depth range division. - division (str): Depth division method. Options include 'uniform', - 'linear', 'log', 'loguniform'. - num_depth_cls (int): Number of depth classes. - - Returns: - torch.Tensor: Decoded probabilistic depth estimation. - """ - if division == 'uniform': - depth_multiplier = depth_unit * \ - depth_cls_preds.new_tensor( - list(range(num_depth_cls))).reshape([1, -1]) - prob_depth_preds = (F.softmax(depth_cls_preds.clone(), dim=-1) * - depth_multiplier).sum(dim=-1) - return prob_depth_preds - elif division == 'linear': - split_pts = depth_cls_preds.new_tensor(list( - range(num_depth_cls))).reshape([1, -1]) - depth_multiplier = depth_range[0] + ( - depth_range[1] - depth_range[0]) / \ - (num_depth_cls * (num_depth_cls - 1)) * \ - (split_pts * (split_pts+1)) - prob_depth_preds = (F.softmax(depth_cls_preds.clone(), dim=-1) * - depth_multiplier).sum(dim=-1) - return prob_depth_preds - elif division == 'log': - split_pts = depth_cls_preds.new_tensor(list( - range(num_depth_cls))).reshape([1, -1]) - start = max(depth_range[0], 1) - end = depth_range[1] - depth_multiplier = (np.log(start) + - split_pts * np.log(end / start) / - (num_depth_cls - 1)).exp() - prob_depth_preds = (F.softmax(depth_cls_preds.clone(), dim=-1) * - depth_multiplier).sum(dim=-1) - return prob_depth_preds - elif division == 'loguniform': - split_pts = depth_cls_preds.new_tensor(list( - range(num_depth_cls))).reshape([1, -1]) - start = max(depth_range[0], 1) - end = depth_range[1] - log_multiplier = np.log(start) + \ - split_pts * np.log(end / start) / (num_depth_cls - 1) - prob_depth_preds = (F.softmax(depth_cls_preds.clone(), dim=-1) * - log_multiplier).sum(dim=-1).exp() - return prob_depth_preds - else: - raise NotImplementedError diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/point_xyzwhlr_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/point_xyzwhlr_bbox_coder.py deleted file mode 100644 index d246777b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/point_xyzwhlr_bbox_coder.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class PointXYZWHLRBBoxCoder(BaseBBoxCoder): - """Point based bbox coder for 3D boxes. - - Args: - code_size (int): The dimension of boxes to be encoded. - use_mean_size (bool, optional): Whether using anchors based on class. - Defaults to True. - mean_size (list[list[float]], optional): Mean size of bboxes in - each class. Defaults to None. - """ - - def __init__(self, code_size=7, use_mean_size=True, mean_size=None): - super(PointXYZWHLRBBoxCoder, self).__init__() - self.code_size = code_size - self.use_mean_size = use_mean_size - if self.use_mean_size: - self.mean_size = torch.from_numpy(np.array(mean_size)).float() - assert self.mean_size.min() > 0, \ - f'The min of mean_size should > 0, however currently it is '\ - f'{self.mean_size.min()}, please check it in your config.' - - def encode(self, gt_bboxes_3d, points, gt_labels_3d=None): - """Encode ground truth to prediction targets. - - Args: - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth bboxes - with shape (N, 7 + C). - points (torch.Tensor): Point cloud with shape (N, 3). - gt_labels_3d (torch.Tensor, optional): Ground truth classes. - Defaults to None. 
- - Returns: - torch.Tensor: Encoded boxes with shape (N, 8 + C). - """ - gt_bboxes_3d[:, 3:6] = torch.clamp_min(gt_bboxes_3d[:, 3:6], min=1e-5) - - xg, yg, zg, dxg, dyg, dzg, rg, *cgs = torch.split( - gt_bboxes_3d, 1, dim=-1) - xa, ya, za = torch.split(points, 1, dim=-1) - - if self.use_mean_size: - assert gt_labels_3d.max() <= self.mean_size.shape[0] - 1, \ - f'the max gt label {gt_labels_3d.max()} is bigger than' \ - f'anchor types {self.mean_size.shape[0] - 1}.' - self.mean_size = self.mean_size.to(gt_labels_3d.device) - point_anchor_size = self.mean_size[gt_labels_3d] - dxa, dya, dza = torch.split(point_anchor_size, 1, dim=-1) - diagonal = torch.sqrt(dxa**2 + dya**2) - xt = (xg - xa) / diagonal - yt = (yg - ya) / diagonal - zt = (zg - za) / dza - dxt = torch.log(dxg / dxa) - dyt = torch.log(dyg / dya) - dzt = torch.log(dzg / dza) - else: - xt = (xg - xa) - yt = (yg - ya) - zt = (zg - za) - dxt = torch.log(dxg) - dyt = torch.log(dyg) - dzt = torch.log(dzg) - - return torch.cat( - [xt, yt, zt, dxt, dyt, dzt, - torch.cos(rg), - torch.sin(rg), *cgs], - dim=-1) - - def decode(self, box_encodings, points, pred_labels_3d=None): - """Decode predicted parts and points to bbox3d. - - Args: - box_encodings (torch.Tensor): Encoded boxes with shape (N, 8 + C). - points (torch.Tensor): Point cloud with shape (N, 3). - pred_labels_3d (torch.Tensor): Bbox predicted labels (N, M). - - Returns: - torch.Tensor: Decoded boxes with shape (N, 7 + C) - """ - xt, yt, zt, dxt, dyt, dzt, cost, sint, *cts = torch.split( - box_encodings, 1, dim=-1) - xa, ya, za = torch.split(points, 1, dim=-1) - - if self.use_mean_size: - assert pred_labels_3d.max() <= self.mean_size.shape[0] - 1, \ - f'The max pred label {pred_labels_3d.max()} is bigger than' \ - f'anchor types {self.mean_size.shape[0] - 1}.' - self.mean_size = self.mean_size.to(pred_labels_3d.device) - point_anchor_size = self.mean_size[pred_labels_3d] - dxa, dya, dza = torch.split(point_anchor_size, 1, dim=-1) - diagonal = torch.sqrt(dxa**2 + dya**2) - xg = xt * diagonal + xa - yg = yt * diagonal + ya - zg = zt * dza + za - - dxg = torch.exp(dxt) * dxa - dyg = torch.exp(dyt) * dya - dzg = torch.exp(dzt) * dza - else: - xg = xt + xa - yg = yt + ya - zg = zt + za - dxg, dyg, dzg = torch.split( - torch.exp(box_encodings[..., 3:6]), 1, dim=-1) - - rg = torch.atan2(sint, cost) - - return torch.cat([xg, yg, zg, dxg, dyg, dzg, rg, *cts], dim=-1) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/smoke_bbox_coder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/smoke_bbox_coder.py deleted file mode 100644 index 66aae917..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/coders/smoke_bbox_coder.py +++ /dev/null @@ -1,216 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -import numpy as np -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class SMOKECoder(BaseBBoxCoder): - """Bbox Coder for SMOKE. - - Args: - base_depth (tuple[float]): Depth references for decode box depth. - base_dims (tuple[tuple[float]]): Dimension references [l, h, w] - for decode box dimension for each category. - code_size (int): The dimension of boxes to be encoded. 
- """ - - def __init__(self, base_depth, base_dims, code_size): - super(SMOKECoder, self).__init__() - self.base_depth = base_depth - self.base_dims = base_dims - self.bbox_code_size = code_size - - def encode(self, locations, dimensions, orientations, input_metas): - """Encode CameraInstance3DBoxes by locations, dimensions, orientations. - - Args: - locations (Tensor): Center location for 3D boxes. - (N, 3) - dimensions (Tensor): Dimensions for 3D boxes. - shape (N, 3) - orientations (Tensor): Orientations for 3D boxes. - shape (N, 1) - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Return: - :obj:`CameraInstance3DBoxes`: 3D bboxes of batch images, - shape (N, bbox_code_size). - """ - - bboxes = torch.cat((locations, dimensions, orientations), dim=1) - assert bboxes.shape[1] == self.bbox_code_size, 'bboxes shape dose not'\ - 'match the bbox_code_size.' - batch_bboxes = input_metas[0]['box_type_3d']( - bboxes, box_dim=self.bbox_code_size) - - return batch_bboxes - - def decode(self, - reg, - points, - labels, - cam2imgs, - trans_mats, - locations=None): - """Decode regression into locations, dimensions, orientations. - - Args: - reg (Tensor): Batch regression for each predict center2d point. - shape: (batch * K (max_objs), C) - points(Tensor): Batch projected bbox centers on image plane. - shape: (batch * K (max_objs) , 2) - labels (Tensor): Batch predict class label for each predict - center2d point. - shape: (batch, K (max_objs)) - cam2imgs (Tensor): Batch images' camera intrinsic matrix. - shape: kitti (batch, 4, 4) nuscenes (batch, 3, 3) - trans_mats (Tensor): transformation matrix from original image - to feature map. - shape: (batch, 3, 3) - locations (None | Tensor): if locations is None, this function - is used to decode while inference, otherwise, it's used while - training using the ground truth 3d bbox locations. - shape: (batch * K (max_objs), 3) - - Return: - tuple(Tensor): The tuple has components below: - - locations (Tensor): Centers of 3D boxes. - shape: (batch * K (max_objs), 3) - - dimensions (Tensor): Dimensions of 3D boxes. - shape: (batch * K (max_objs), 3) - - orientations (Tensor): Orientations of 3D - boxes. - shape: (batch * K (max_objs), 1) - """ - depth_offsets = reg[:, 0] - centers2d_offsets = reg[:, 1:3] - dimensions_offsets = reg[:, 3:6] - orientations = reg[:, 6:8] - depths = self._decode_depth(depth_offsets) - # get the 3D Bounding box's center location. - pred_locations = self._decode_location(points, centers2d_offsets, - depths, cam2imgs, trans_mats) - pred_dimensions = self._decode_dimension(labels, dimensions_offsets) - if locations is None: - pred_orientations = self._decode_orientation( - orientations, pred_locations) - else: - pred_orientations = self._decode_orientation( - orientations, locations) - - return pred_locations, pred_dimensions, pred_orientations - - def _decode_depth(self, depth_offsets): - """Transform depth offset to depth.""" - base_depth = depth_offsets.new_tensor(self.base_depth) - depths = depth_offsets * base_depth[1] + base_depth[0] - - return depths - - def _decode_location(self, points, centers2d_offsets, depths, cam2imgs, - trans_mats): - """Retrieve objects location in camera coordinate based on projected - points. - - Args: - points (Tensor): Projected points on feature map in (x, y) - shape: (batch * K, 2) - centers2d_offset (Tensor): Project points offset in - (delta_x, delta_y). shape: (batch * K, 2) - depths (Tensor): Object depth z. 
- shape: (batch * K) - cam2imgs (Tensor): Batch camera intrinsics matrix. - shape: kitti (batch, 4, 4) nuscenes (batch, 3, 3) - trans_mats (Tensor): transformation matrix from original image - to feature map. - shape: (batch, 3, 3) - """ - # number of points - N = centers2d_offsets.shape[0] - # batch_size - N_batch = cam2imgs.shape[0] - batch_id = torch.arange(N_batch).unsqueeze(1) - obj_id = batch_id.repeat(1, N // N_batch).flatten() - # trans_mats_inv = trans_mats.inverse()[obj_id] - # cam2imgs_inv = cam2imgs.inverse()[obj_id] - - #change for smoke - device = trans_mats.device - trans_mats_inv = trans_mats.cpu().inverse()[obj_id].to(device) - cam2imgs_inv = cam2imgs.cpu().inverse()[obj_id].to(device) - - centers2d = points + centers2d_offsets - centers2d_extend = torch.cat((centers2d, centers2d.new_ones(N, 1)), - dim=1) - # expand project points as [N, 3, 1] - centers2d_extend = centers2d_extend.unsqueeze(-1) - # transform project points back on original image - centers2d_img = torch.matmul(trans_mats_inv, centers2d_extend) - centers2d_img = centers2d_img * depths.view(N, -1, 1) - if cam2imgs.shape[1] == 4: - centers2d_img = torch.cat( - (centers2d_img, centers2d.new_ones(N, 1, 1)), dim=1) - locations = torch.matmul(cam2imgs_inv, centers2d_img).squeeze(2) - - return locations[:, :3] - - def _decode_dimension(self, labels, dims_offset): - """Transform dimension offsets to dimension according to its category. - - Args: - labels (Tensor): Each points' category id. - shape: (N, K) - dims_offset (Tensor): Dimension offsets. - shape: (N, 3) - """ - labels = labels.flatten().long() - base_dims = dims_offset.new_tensor(self.base_dims) - dims_select = base_dims[labels, :] - dimensions = dims_offset.exp() * dims_select - - return dimensions - - def _decode_orientation(self, ori_vector, locations): - """Retrieve object orientation. - - Args: - ori_vector (Tensor): Local orientation in [sin, cos] format. - shape: (N, 2) - locations (Tensor): Object location. - shape: (N, 3) - - Return: - Tensor: yaw(Orientation). Notice that the yaw's - range is [-np.pi, np.pi]. - shape:(N, 1) - """ - assert len(ori_vector) == len(locations) - locations = locations.view(-1, 3) - rays = torch.atan(locations[:, 0] / (locations[:, 2] + 1e-7)) - alphas = torch.atan(ori_vector[:, 0] / (ori_vector[:, 1] + 1e-7)) - - # get cosine value positive and negative index. - cos_pos_inds = (ori_vector[:, 1] >= 0).nonzero(as_tuple=False) - cos_neg_inds = (ori_vector[:, 1] < 0).nonzero(as_tuple=False) - - alphas[cos_pos_inds] -= np.pi / 2 - alphas[cos_neg_inds] += np.pi / 2 - # retrieve object rotation y angle. - yaws = alphas + rays - - larger_inds = (yaws > np.pi).nonzero(as_tuple=False) - small_inds = (yaws < -np.pi).nonzero(as_tuple=False) - - if len(larger_inds) != 0: - yaws[larger_inds] -= 2 * np.pi - if len(small_inds) != 0: - yaws[small_inds] += 2 * np.pi - - yaws = yaws.unsqueeze(-1) - return yaws diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/iou_calculators/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/iou_calculators/__init__.py deleted file mode 100644 index d2faf69c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/iou_calculators/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .iou3d_calculator import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D, - BboxOverlapsNearest3D, - axis_aligned_bbox_overlaps_3d, bbox_overlaps_3d, - bbox_overlaps_nearest_3d) - -__all__ = [ - 'BboxOverlapsNearest3D', 'BboxOverlaps3D', 'bbox_overlaps_nearest_3d', - 'bbox_overlaps_3d', 'AxisAlignedBboxOverlaps3D', - 'axis_aligned_bbox_overlaps_3d' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py deleted file mode 100644 index 2b1d8eab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py +++ /dev/null @@ -1,329 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core.bbox import bbox_overlaps -from mmdet.core.bbox.iou_calculators.builder import IOU_CALCULATORS -from ..structures import get_box_type - - -@IOU_CALCULATORS.register_module() -class BboxOverlapsNearest3D(object): - """Nearest 3D IoU Calculator. - - Note: - This IoU calculator first finds the nearest 2D boxes in bird eye view - (BEV), and then calculates the 2D IoU using :meth:`bbox_overlaps`. - - Args: - coordinate (str): 'camera', 'lidar', or 'depth' coordinate system. - """ - - def __init__(self, coordinate='lidar'): - assert coordinate in ['camera', 'lidar', 'depth'] - self.coordinate = coordinate - - def __call__(self, bboxes1, bboxes2, mode='iou', is_aligned=False): - """Calculate nearest 3D IoU. - - Note: - If ``is_aligned`` is ``False``, then it calculates the ious between - each bbox of bboxes1 and bboxes2, otherwise it calculates the ious - between each aligned pair of bboxes1 and bboxes2. - - Args: - bboxes1 (torch.Tensor): shape (N, 7+N) - [x, y, z, x_size, y_size, z_size, ry, v]. - bboxes2 (torch.Tensor): shape (M, 7+N) - [x, y, z, x_size, y_size, z_size, ry, v]. - mode (str): "iou" (intersection over union) or iof - (intersection over foreground). - is_aligned (bool): Whether the calculation is aligned. - - Return: - torch.Tensor: If ``is_aligned`` is ``True``, return ious between - bboxes1 and bboxes2 with shape (M, N). If ``is_aligned`` is - ``False``, return shape is M. - """ - return bbox_overlaps_nearest_3d(bboxes1, bboxes2, mode, is_aligned, - self.coordinate) - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(coordinate={self.coordinate}' - return repr_str - - -@IOU_CALCULATORS.register_module() -class BboxOverlaps3D(object): - """3D IoU Calculator. - - Args: - coordinate (str): The coordinate system, valid options are - 'camera', 'lidar', and 'depth'. - """ - - def __init__(self, coordinate): - assert coordinate in ['camera', 'lidar', 'depth'] - self.coordinate = coordinate - - def __call__(self, bboxes1, bboxes2, mode='iou'): - """Calculate 3D IoU using cuda implementation. - - Note: - This function calculate the IoU of 3D boxes based on their volumes. - IoU calculator ``:class:BboxOverlaps3D`` uses this function to - calculate the actual 3D IoUs of boxes. - - Args: - bboxes1 (torch.Tensor): with shape (N, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - bboxes2 (torch.Tensor): with shape (M, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - mode (str): "iou" (intersection over union) or - iof (intersection over foreground). - - Return: - torch.Tensor: Bbox overlaps results of bboxes1 and bboxes2 - with shape (M, N) (aligned mode is not supported currently). 
- """ - return bbox_overlaps_3d(bboxes1, bboxes2, mode, self.coordinate) - - def __repr__(self): - """str: return a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(coordinate={self.coordinate}' - return repr_str - - -def bbox_overlaps_nearest_3d(bboxes1, - bboxes2, - mode='iou', - is_aligned=False, - coordinate='lidar'): - """Calculate nearest 3D IoU. - - Note: - This function first finds the nearest 2D boxes in bird eye view - (BEV), and then calculates the 2D IoU using :meth:`bbox_overlaps`. - This IoU calculator :class:`BboxOverlapsNearest3D` uses this - function to calculate IoUs of boxes. - - If ``is_aligned`` is ``False``, then it calculates the ious between - each bbox of bboxes1 and bboxes2, otherwise the ious between each - aligned pair of bboxes1 and bboxes2. - - Args: - bboxes1 (torch.Tensor): with shape (N, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - bboxes2 (torch.Tensor): with shape (M, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - mode (str): "iou" (intersection over union) or iof - (intersection over foreground). - is_aligned (bool): Whether the calculation is aligned - - Return: - torch.Tensor: If ``is_aligned`` is ``True``, return ious between - bboxes1 and bboxes2 with shape (M, N). If ``is_aligned`` is - ``False``, return shape is M. - """ - assert bboxes1.size(-1) == bboxes2.size(-1) >= 7 - - box_type, _ = get_box_type(coordinate) - - bboxes1 = box_type(bboxes1, box_dim=bboxes1.shape[-1]) - bboxes2 = box_type(bboxes2, box_dim=bboxes2.shape[-1]) - - # Change the bboxes to bev - # box conversion and iou calculation in torch version on CUDA - # is 10x faster than that in numpy version - bboxes1_bev = bboxes1.nearest_bev - bboxes2_bev = bboxes2.nearest_bev - - ret = bbox_overlaps( - bboxes1_bev, bboxes2_bev, mode=mode, is_aligned=is_aligned) - return ret - - -def bbox_overlaps_3d(bboxes1, bboxes2, mode='iou', coordinate='camera'): - """Calculate 3D IoU using cuda implementation. - - Note: - This function calculates the IoU of 3D boxes based on their volumes. - IoU calculator :class:`BboxOverlaps3D` uses this function to - calculate the actual IoUs of boxes. - - Args: - bboxes1 (torch.Tensor): with shape (N, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - bboxes2 (torch.Tensor): with shape (M, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - mode (str): "iou" (intersection over union) or - iof (intersection over foreground). - coordinate (str): 'camera' or 'lidar' coordinate system. - - Return: - torch.Tensor: Bbox overlaps results of bboxes1 and bboxes2 - with shape (M, N) (aligned mode is not supported currently). - """ - assert bboxes1.size(-1) == bboxes2.size(-1) >= 7 - - box_type, _ = get_box_type(coordinate) - - bboxes1 = box_type(bboxes1, box_dim=bboxes1.shape[-1]) - bboxes2 = box_type(bboxes2, box_dim=bboxes2.shape[-1]) - - return bboxes1.overlaps(bboxes1, bboxes2, mode=mode) - - -@IOU_CALCULATORS.register_module() -class AxisAlignedBboxOverlaps3D(object): - """Axis-aligned 3D Overlaps (IoU) Calculator.""" - - def __call__(self, bboxes1, bboxes2, mode='iou', is_aligned=False): - """Calculate IoU between 2D bboxes. - - Args: - bboxes1 (Tensor): shape (B, m, 6) in - format or empty. - bboxes2 (Tensor): shape (B, n, 6) in - format or empty. - B indicates the batch dim, in shape (B1, B2, ..., Bn). - If ``is_aligned`` is ``True``, then m and n must be equal. - mode (str): "iou" (intersection over union) or "giou" (generalized - intersection over union). 
- is_aligned (bool, optional): If True, then m and n must be equal. - Defaults to False. - Returns: - Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,) - """ - assert bboxes1.size(-1) == bboxes2.size(-1) == 6 - return axis_aligned_bbox_overlaps_3d(bboxes1, bboxes2, mode, - is_aligned) - - def __repr__(self): - """str: a string describing the module""" - repr_str = self.__class__.__name__ + '()' - return repr_str - - -def axis_aligned_bbox_overlaps_3d(bboxes1, - bboxes2, - mode='iou', - is_aligned=False, - eps=1e-6): - """Calculate overlap between two set of axis aligned 3D bboxes. If - ``is_aligned`` is ``False``, then calculate the overlaps between each bbox - of bboxes1 and bboxes2, otherwise the overlaps between each aligned pair of - bboxes1 and bboxes2. - - Args: - bboxes1 (Tensor): shape (B, m, 6) in - format or empty. - bboxes2 (Tensor): shape (B, n, 6) in - format or empty. - B indicates the batch dim, in shape (B1, B2, ..., Bn). - If ``is_aligned`` is ``True``, then m and n must be equal. - mode (str): "iou" (intersection over union) or "giou" (generalized - intersection over union). - is_aligned (bool, optional): If True, then m and n must be equal. - Defaults to False. - eps (float, optional): A value added to the denominator for numerical - stability. Defaults to 1e-6. - - Returns: - Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,) - - Example: - >>> bboxes1 = torch.FloatTensor([ - >>> [0, 0, 0, 10, 10, 10], - >>> [10, 10, 10, 20, 20, 20], - >>> [32, 32, 32, 38, 40, 42], - >>> ]) - >>> bboxes2 = torch.FloatTensor([ - >>> [0, 0, 0, 10, 20, 20], - >>> [0, 10, 10, 10, 19, 20], - >>> [10, 10, 10, 20, 20, 20], - >>> ]) - >>> overlaps = axis_aligned_bbox_overlaps_3d(bboxes1, bboxes2) - >>> assert overlaps.shape == (3, 3) - >>> overlaps = bbox_overlaps(bboxes1, bboxes2, is_aligned=True) - >>> assert overlaps.shape == (3, ) - Example: - >>> empty = torch.empty(0, 6) - >>> nonempty = torch.FloatTensor([[0, 0, 0, 10, 9, 10]]) - >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1) - >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0) - >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0) - """ - - assert mode in ['iou', 'giou'], f'Unsupported mode {mode}' - # Either the boxes are empty or the length of boxes's last dimension is 6 - assert (bboxes1.size(-1) == 6 or bboxes1.size(0) == 0) - assert (bboxes2.size(-1) == 6 or bboxes2.size(0) == 0) - - # Batch dim must be the same - # Batch dim: (B1, B2, ... 
Bn) - assert bboxes1.shape[:-2] == bboxes2.shape[:-2] - batch_shape = bboxes1.shape[:-2] - - rows = bboxes1.size(-2) - cols = bboxes2.size(-2) - if is_aligned: - assert rows == cols - - if rows * cols == 0: - if is_aligned: - return bboxes1.new(batch_shape + (rows, )) - else: - return bboxes1.new(batch_shape + (rows, cols)) - - area1 = (bboxes1[..., 3] - - bboxes1[..., 0]) * (bboxes1[..., 4] - bboxes1[..., 1]) * ( - bboxes1[..., 5] - bboxes1[..., 2]) - area2 = (bboxes2[..., 3] - - bboxes2[..., 0]) * (bboxes2[..., 4] - bboxes2[..., 1]) * ( - bboxes2[..., 5] - bboxes2[..., 2]) - - if is_aligned: - lt = torch.max(bboxes1[..., :3], bboxes2[..., :3]) # [B, rows, 3] - rb = torch.min(bboxes1[..., 3:], bboxes2[..., 3:]) # [B, rows, 3] - - wh = (rb - lt).clamp(min=0) # [B, rows, 2] - overlap = wh[..., 0] * wh[..., 1] * wh[..., 2] - - if mode in ['iou', 'giou']: - union = area1 + area2 - overlap - else: - union = area1 - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :3], bboxes2[..., :3]) - enclosed_rb = torch.max(bboxes1[..., 3:], bboxes2[..., 3:]) - else: - lt = torch.max(bboxes1[..., :, None, :3], - bboxes2[..., None, :, :3]) # [B, rows, cols, 3] - rb = torch.min(bboxes1[..., :, None, 3:], - bboxes2[..., None, :, 3:]) # [B, rows, cols, 3] - - wh = (rb - lt).clamp(min=0) # [B, rows, cols, 3] - overlap = wh[..., 0] * wh[..., 1] * wh[..., 2] - - if mode in ['iou', 'giou']: - union = area1[..., None] + area2[..., None, :] - overlap - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :, None, :3], - bboxes2[..., None, :, :3]) - enclosed_rb = torch.max(bboxes1[..., :, None, 3:], - bboxes2[..., None, :, 3:]) - - eps = union.new_tensor([eps]) - union = torch.max(union, eps) - ious = overlap / union - if mode in ['iou']: - return ious - # calculate gious - enclose_wh = (enclosed_rb - enclosed_lt).clamp(min=0) - enclose_area = enclose_wh[..., 0] * enclose_wh[..., 1] * enclose_wh[..., 2] - enclose_area = torch.max(enclose_area, eps) - gious = ious - (enclose_area - union) / enclose_area - return gious diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/samplers/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/samplers/__init__.py deleted file mode 100644 index 168780b2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/samplers/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.core.bbox.samplers import (BaseSampler, CombinedSampler, - InstanceBalancedPosSampler, - IoUBalancedNegSampler, OHEMSampler, - PseudoSampler, RandomSampler, - SamplingResult) -from .iou_neg_piecewise_sampler import IoUNegPiecewiseSampler - -__all__ = [ - 'BaseSampler', 'PseudoSampler', 'RandomSampler', - 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler', - 'OHEMSampler', 'SamplingResult', 'IoUNegPiecewiseSampler' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/samplers/iou_neg_piecewise_sampler.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/samplers/iou_neg_piecewise_sampler.py deleted file mode 100644 index cbd8483c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/samplers/iou_neg_piecewise_sampler.py +++ /dev/null @@ -1,183 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core.bbox.builder import BBOX_SAMPLERS -from . import RandomSampler, SamplingResult - - -@BBOX_SAMPLERS.register_module() -class IoUNegPiecewiseSampler(RandomSampler): - """IoU Piece-wise Sampling. 
- - Sampling negative proposals according to a list of IoU thresholds. - The negative proposals are divided into several pieces according - to `neg_iou_piece_thrs`. And the ratio of each piece is indicated - by `neg_piece_fractions`. - - Args: - num (int): Number of proposals. - pos_fraction (float): The fraction of positive proposals. - neg_piece_fractions (list): A list contains fractions that indicates - the ratio of each piece of total negative samplers. - neg_iou_piece_thrs (list): A list contains IoU thresholds that - indicate the upper bound of this piece. - neg_pos_ub (float): The total ratio to limit the upper bound - number of negative samples. - add_gt_as_proposals (bool): Whether to add gt as proposals. - """ - - def __init__(self, - num, - pos_fraction=None, - neg_piece_fractions=None, - neg_iou_piece_thrs=None, - neg_pos_ub=-1, - add_gt_as_proposals=False, - return_iou=False): - super(IoUNegPiecewiseSampler, - self).__init__(num, pos_fraction, neg_pos_ub, - add_gt_as_proposals) - assert isinstance(neg_piece_fractions, list) - assert len(neg_piece_fractions) == len(neg_iou_piece_thrs) - self.neg_piece_fractions = neg_piece_fractions - self.neg_iou_thr = neg_iou_piece_thrs - self.return_iou = return_iou - self.neg_piece_num = len(self.neg_piece_fractions) - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Randomly sample some positive samples.""" - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Randomly sample some negative samples.""" - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= 0: - return neg_inds.squeeze(1) - else: - neg_inds_choice = neg_inds.new_zeros([0]) - extend_num = 0 - max_overlaps = assign_result.max_overlaps[neg_inds] - - for piece_inds in range(self.neg_piece_num): - if piece_inds == self.neg_piece_num - 1: # for the last piece - piece_expected_num = num_expected - len(neg_inds_choice) - min_iou_thr = 0 - else: - # if the numbers of negative samplers in previous - # pieces are less than the expected number, extend - # the same number in the current piece. 
- piece_expected_num = int( - num_expected * - self.neg_piece_fractions[piece_inds]) + extend_num - min_iou_thr = self.neg_iou_thr[piece_inds + 1] - max_iou_thr = self.neg_iou_thr[piece_inds] - piece_neg_inds = torch.nonzero( - (max_overlaps >= min_iou_thr) - & (max_overlaps < max_iou_thr), - as_tuple=False).view(-1) - - if len(piece_neg_inds) < piece_expected_num: - neg_inds_choice = torch.cat( - [neg_inds_choice, neg_inds[piece_neg_inds]], dim=0) - extend_num += piece_expected_num - len(piece_neg_inds) - - # for the last piece - if piece_inds == self.neg_piece_num - 1: - extend_neg_num = num_expected - len(neg_inds_choice) - # if the numbers of nagetive samples > 0, we will - # randomly select num_expected samples in last piece - if piece_neg_inds.numel() > 0: - rand_idx = torch.randint( - low=0, - high=piece_neg_inds.numel(), - size=(extend_neg_num, )).long() - neg_inds_choice = torch.cat( - [neg_inds_choice, piece_neg_inds[rand_idx]], - dim=0) - # if the numbers of nagetive samples == 0, we will - # randomly select num_expected samples in all - # previous pieces - else: - rand_idx = torch.randint( - low=0, - high=neg_inds_choice.numel(), - size=(extend_neg_num, )).long() - neg_inds_choice = torch.cat( - [neg_inds_choice, neg_inds_choice[rand_idx]], - dim=0) - else: - piece_choice = self.random_choice(piece_neg_inds, - piece_expected_num) - neg_inds_choice = torch.cat( - [neg_inds_choice, neg_inds[piece_choice]], dim=0) - extend_num = 0 - assert len(neg_inds_choice) == num_expected - return neg_inds_choice - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - bboxes (torch.Tensor): Boxes to be sampled from. - gt_bboxes (torch.Tensor): Ground truth bboxes. - gt_labels (torch.Tensor, optional): Class labels of ground truth - bboxes. - - Returns: - :obj:`SamplingResult`: Sampling result. - """ - if len(bboxes.shape) < 2: - bboxes = bboxes[None, :] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.bool) - if self.add_gt_as_proposals and len(gt_bboxes) > 0: - if gt_labels is None: - raise ValueError( - 'gt_labels must be given when add_gt_as_proposals is True') - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.bool) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - # We found that sampled indices have duplicated items occasionally. - # (may be a bug of PyTorch) - pos_inds = pos_inds.unique() - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds = self.neg_sampler._sample_neg( - assign_result, num_expected_neg, bboxes=bboxes, **kwargs) - - sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags) - if self.return_iou: - # PartA2 needs iou score to regression. 
- sampling_result.iou = assign_result.max_overlaps[torch.cat( - [pos_inds, neg_inds])] - sampling_result.iou.detach_() - - return sampling_result diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/__init__.py deleted file mode 100644 index 460035a5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_box3d import BaseInstance3DBoxes -from .box_3d_mode import Box3DMode -from .cam_box3d import CameraInstance3DBoxes -from .coord_3d_mode import Coord3DMode -from .depth_box3d import DepthInstance3DBoxes -from .lidar_box3d import LiDARInstance3DBoxes -from .utils import (get_box_type, get_proj_mat_by_coord_type, limit_period, - mono_cam_box2vis, points_cam2img, points_img2cam, - rotation_3d_in_axis, xywhr2xyxyr) - -__all__ = [ - 'Box3DMode', 'BaseInstance3DBoxes', 'LiDARInstance3DBoxes', - 'CameraInstance3DBoxes', 'DepthInstance3DBoxes', 'xywhr2xyxyr', - 'get_box_type', 'rotation_3d_in_axis', 'limit_period', 'points_cam2img', - 'points_img2cam', 'Coord3DMode', 'mono_cam_box2vis', - 'get_proj_mat_by_coord_type' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/base_box3d.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/base_box3d.py deleted file mode 100644 index 3c74f670..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/base_box3d.py +++ /dev/null @@ -1,578 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from abc import abstractmethod - -import numpy as np -import torch -from mmcv.ops import box_iou_rotated, points_in_boxes_all, points_in_boxes_part - -from .utils import limit_period - - -class BaseInstance3DBoxes(object): - """Base class for 3D Boxes. - - Note: - The box is bottom centered, i.e. the relative position of origin in - the box is (0.5, 0.5, 0). - - Args: - tensor (torch.Tensor | np.ndarray | list): a N x box_dim matrix. - box_dim (int): Number of the dimension of a box. - Each row is (x, y, z, x_size, y_size, z_size, yaw). - Defaults to 7. - with_yaw (bool): Whether the box is with yaw rotation. - If False, the value of yaw will be set to 0 as minmax boxes. - Defaults to True. - origin (tuple[float], optional): Relative position of the box origin. - Defaults to (0.5, 0.5, 0). This will guide the box be converted to - (0.5, 0.5, 0) mode. - - Attributes: - tensor (torch.Tensor): Float matrix of N x box_dim. - box_dim (int): Integer indicating the dimension of a box. - Each row is (x, y, z, x_size, y_size, z_size, yaw, ...). - with_yaw (bool): If True, the value of yaw will be set to 0 as minmax - boxes. - """ - - def __init__(self, tensor, box_dim=7, with_yaw=True, origin=(0.5, 0.5, 0)): - if isinstance(tensor, torch.Tensor): - device = tensor.device - else: - device = torch.device('cpu') - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=device) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that - # does not depend on the inputs (and consequently confuses jit) - tensor = tensor.reshape((0, box_dim)).to( - dtype=torch.float32, device=device) - assert tensor.dim() == 2 and tensor.size(-1) == box_dim, tensor.size() - - if tensor.shape[-1] == 6: - # If the dimension of boxes is 6, we expand box_dim by padding - # 0 as a fake yaw and set with_yaw to False. 
- assert box_dim == 6 - fake_rot = tensor.new_zeros(tensor.shape[0], 1) - tensor = torch.cat((tensor, fake_rot), dim=-1) - self.box_dim = box_dim + 1 - self.with_yaw = False - else: - self.box_dim = box_dim - self.with_yaw = with_yaw - self.tensor = tensor.clone() - - if origin != (0.5, 0.5, 0): - dst = self.tensor.new_tensor((0.5, 0.5, 0)) - src = self.tensor.new_tensor(origin) - self.tensor[:, :3] += self.tensor[:, 3:6] * (dst - src) - - @property - def volume(self): - """torch.Tensor: A vector with volume of each box.""" - return self.tensor[:, 3] * self.tensor[:, 4] * self.tensor[:, 5] - - @property - def dims(self): - """torch.Tensor: Size dimensions of each box in shape (N, 3).""" - return self.tensor[:, 3:6] - - @property - def yaw(self): - """torch.Tensor: A vector with yaw of each box in shape (N, ).""" - return self.tensor[:, 6] - - @property - def height(self): - """torch.Tensor: A vector with height of each box in shape (N, ).""" - return self.tensor[:, 5] - - @property - def top_height(self): - """torch.Tensor: - A vector with the top height of each box in shape (N, ).""" - return self.bottom_height + self.height - - @property - def bottom_height(self): - """torch.Tensor: - A vector with bottom's height of each box in shape (N, ).""" - return self.tensor[:, 2] - - @property - def center(self): - """Calculate the center of all the boxes. - - Note: - In MMDetection3D's convention, the bottom center is - usually taken as the default center. - - The relative position of the centers in different kinds of - boxes are different, e.g., the relative center of a boxes is - (0.5, 1.0, 0.5) in camera and (0.5, 0.5, 0) in lidar. - It is recommended to use ``bottom_center`` or ``gravity_center`` - for clearer usage. - - Returns: - torch.Tensor: A tensor with center of each box in shape (N, 3). - """ - return self.bottom_center - - @property - def bottom_center(self): - """torch.Tensor: A tensor with center of each box in shape (N, 3).""" - return self.tensor[:, :3] - - @property - def gravity_center(self): - """torch.Tensor: A tensor with center of each box in shape (N, 3).""" - pass - - @property - def corners(self): - """torch.Tensor: - a tensor with 8 corners of each box in shape (N, 8, 3).""" - pass - - @property - def bev(self): - """torch.Tensor: 2D BEV box of each box with rotation - in XYWHR format, in shape (N, 5).""" - return self.tensor[:, [0, 1, 3, 4, 6]] - - @property - def nearest_bev(self): - """torch.Tensor: A tensor of 2D BEV box of each box - without rotation.""" - # Obtain BEV boxes with rotation in XYWHR format - bev_rotated_boxes = self.bev - # convert the rotation to a valid range - rotations = bev_rotated_boxes[:, -1] - normed_rotations = torch.abs(limit_period(rotations, 0.5, np.pi)) - - # find the center of boxes - conditions = (normed_rotations > np.pi / 4)[..., None] - bboxes_xywh = torch.where(conditions, bev_rotated_boxes[:, - [0, 1, 3, 2]], - bev_rotated_boxes[:, :4]) - - centers = bboxes_xywh[:, :2] - dims = bboxes_xywh[:, 2:] - bev_boxes = torch.cat([centers - dims / 2, centers + dims / 2], dim=-1) - return bev_boxes - - def in_range_bev(self, box_range): - """Check whether the boxes are in the given range. - - Args: - box_range (list | torch.Tensor): the range of box - (x_min, y_min, x_max, y_max) - - Note: - The original implementation of SECOND checks whether boxes in - a range by checking whether the points are in a convex - polygon, we reduce the burden for simpler cases. - - Returns: - torch.Tensor: Whether each box is inside the reference range. 
- """ - in_range_flags = ((self.bev[:, 0] > box_range[0]) - & (self.bev[:, 1] > box_range[1]) - & (self.bev[:, 0] < box_range[2]) - & (self.bev[:, 1] < box_range[3])) - return in_range_flags - - @abstractmethod - def rotate(self, angle, points=None): - """Rotate boxes with points (optional) with the given angle or rotation - matrix. - - Args: - angle (float | torch.Tensor | np.ndarray): - Rotation angle or rotation matrix. - points (torch.Tensor | numpy.ndarray | - :obj:`BasePoints`, optional): - Points to rotate. Defaults to None. - """ - pass - - @abstractmethod - def flip(self, bev_direction='horizontal'): - """Flip the boxes in BEV along given BEV direction. - - Args: - bev_direction (str, optional): Direction by which to flip. - Can be chosen from 'horizontal' and 'vertical'. - Defaults to 'horizontal'. - """ - pass - - def translate(self, trans_vector): - """Translate boxes with the given translation vector. - - Args: - trans_vector (torch.Tensor): Translation vector of size (1, 3). - """ - if not isinstance(trans_vector, torch.Tensor): - trans_vector = self.tensor.new_tensor(trans_vector) - self.tensor[:, :3] += trans_vector - - def in_range_3d(self, box_range): - """Check whether the boxes are in the given range. - - Args: - box_range (list | torch.Tensor): The range of box - (x_min, y_min, z_min, x_max, y_max, z_max) - - Note: - In the original implementation of SECOND, checking whether - a box in the range checks whether the points are in a convex - polygon, we try to reduce the burden for simpler cases. - - Returns: - torch.Tensor: A binary vector indicating whether each box is - inside the reference range. - """ - in_range_flags = ((self.tensor[:, 0] > box_range[0]) - & (self.tensor[:, 1] > box_range[1]) - & (self.tensor[:, 2] > box_range[2]) - & (self.tensor[:, 0] < box_range[3]) - & (self.tensor[:, 1] < box_range[4]) - & (self.tensor[:, 2] < box_range[5])) - return in_range_flags - - @abstractmethod - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`Box3DMode`): The target Box mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BaseInstance3DBoxes`: The converted box of the same type - in the `dst` mode. - """ - pass - - def scale(self, scale_factor): - """Scale the box with horizontal and vertical scaling factors. - - Args: - scale_factors (float): Scale factors to scale the boxes. - """ - self.tensor[:, :6] *= scale_factor - self.tensor[:, 7:] *= scale_factor # velocity - - def limit_yaw(self, offset=0.5, period=np.pi): - """Limit the yaw to a given period and offset. - - Args: - offset (float, optional): The offset of the yaw. Defaults to 0.5. - period (float, optional): The expected period. Defaults to np.pi. - """ - self.tensor[:, 6] = limit_period(self.tensor[:, 6], offset, period) - - def nonempty(self, threshold=0.0): - """Find boxes that are non-empty. - - A box is considered empty, - if either of its side is no larger than threshold. - - Args: - threshold (float, optional): The threshold of minimal sizes. - Defaults to 0.0. - - Returns: - torch.Tensor: A binary vector which represents whether each - box is empty (False) or non-empty (True). 
- """ - box = self.tensor - size_x = box[..., 3] - size_y = box[..., 4] - size_z = box[..., 5] - keep = ((size_x > threshold) - & (size_y > threshold) & (size_z > threshold)) - return keep - - def __getitem__(self, item): - """ - Note: - The following usage are allowed: - 1. `new_boxes = boxes[3]`: - return a `Boxes` that contains only one box. - 2. `new_boxes = boxes[2:10]`: - return a slice of boxes. - 3. `new_boxes = boxes[vector]`: - where vector is a torch.BoolTensor with `length = len(boxes)`. - Nonzero elements in the vector will be selected. - Note that the returned Boxes might share storage with this Boxes, - subject to Pytorch's indexing semantics. - - Returns: - :obj:`BaseInstance3DBoxes`: A new object of - :class:`BaseInstance3DBoxes` after indexing. - """ - original_type = type(self) - if isinstance(item, int): - return original_type( - self.tensor[item].view(1, -1), - box_dim=self.box_dim, - with_yaw=self.with_yaw) - b = self.tensor[item] - assert b.dim() == 2, \ - f'Indexing on Boxes with {item} failed to return a matrix!' - return original_type(b, box_dim=self.box_dim, with_yaw=self.with_yaw) - - def __len__(self): - """int: Number of boxes in the current object.""" - return self.tensor.shape[0] - - def __repr__(self): - """str: Return a strings that describes the object.""" - return self.__class__.__name__ + '(\n ' + str(self.tensor) + ')' - - @classmethod - def cat(cls, boxes_list): - """Concatenate a list of Boxes into a single Boxes. - - Args: - boxes_list (list[:obj:`BaseInstance3DBoxes`]): List of boxes. - - Returns: - :obj:`BaseInstance3DBoxes`: The concatenated Boxes. - """ - assert isinstance(boxes_list, (list, tuple)) - if len(boxes_list) == 0: - return cls(torch.empty(0)) - assert all(isinstance(box, cls) for box in boxes_list) - - # use torch.cat (v.s. layers.cat) - # so the returned boxes never share storage with input - cat_boxes = cls( - torch.cat([b.tensor for b in boxes_list], dim=0), - box_dim=boxes_list[0].tensor.shape[1], - with_yaw=boxes_list[0].with_yaw) - return cat_boxes - - def to(self, device): - """Convert current boxes to a specific device. - - Args: - device (str | :obj:`torch.device`): The name of the device. - - Returns: - :obj:`BaseInstance3DBoxes`: A new boxes object on the - specific device. - """ - original_type = type(self) - return original_type( - self.tensor.to(device), - box_dim=self.box_dim, - with_yaw=self.with_yaw) - - def clone(self): - """Clone the Boxes. - - Returns: - :obj:`BaseInstance3DBoxes`: Box object with the same properties - as self. - """ - original_type = type(self) - return original_type( - self.tensor.clone(), box_dim=self.box_dim, with_yaw=self.with_yaw) - - @property - def device(self): - """str: The device of the boxes are on.""" - return self.tensor.device - - def __iter__(self): - """Yield a box as a Tensor of shape (4,) at a time. - - Returns: - torch.Tensor: A box of shape (4,). - """ - yield from self.tensor - - @classmethod - def height_overlaps(cls, boxes1, boxes2, mode='iou'): - """Calculate height overlaps of two boxes. - - Note: - This function calculates the height overlaps between boxes1 and - boxes2, boxes1 and boxes2 should be in the same type. - - Args: - boxes1 (:obj:`BaseInstance3DBoxes`): Boxes 1 contain N boxes. - boxes2 (:obj:`BaseInstance3DBoxes`): Boxes 2 contain M boxes. - mode (str, optional): Mode of IoU calculation. Defaults to 'iou'. - - Returns: - torch.Tensor: Calculated iou of boxes. 
- """ - assert isinstance(boxes1, BaseInstance3DBoxes) - assert isinstance(boxes2, BaseInstance3DBoxes) - assert type(boxes1) == type(boxes2), '"boxes1" and "boxes2" should' \ - f'be in the same type, got {type(boxes1)} and {type(boxes2)}.' - - boxes1_top_height = boxes1.top_height.view(-1, 1) - boxes1_bottom_height = boxes1.bottom_height.view(-1, 1) - boxes2_top_height = boxes2.top_height.view(1, -1) - boxes2_bottom_height = boxes2.bottom_height.view(1, -1) - - heighest_of_bottom = torch.max(boxes1_bottom_height, - boxes2_bottom_height) - lowest_of_top = torch.min(boxes1_top_height, boxes2_top_height) - overlaps_h = torch.clamp(lowest_of_top - heighest_of_bottom, min=0) - return overlaps_h - - @classmethod - def overlaps(cls, boxes1, boxes2, mode='iou'): - """Calculate 3D overlaps of two boxes. - - Note: - This function calculates the overlaps between ``boxes1`` and - ``boxes2``, ``boxes1`` and ``boxes2`` should be in the same type. - - Args: - boxes1 (:obj:`BaseInstance3DBoxes`): Boxes 1 contain N boxes. - boxes2 (:obj:`BaseInstance3DBoxes`): Boxes 2 contain M boxes. - mode (str, optional): Mode of iou calculation. Defaults to 'iou'. - - Returns: - torch.Tensor: Calculated 3D overlaps of the boxes. - """ - assert isinstance(boxes1, BaseInstance3DBoxes) - assert isinstance(boxes2, BaseInstance3DBoxes) - assert type(boxes1) == type(boxes2), '"boxes1" and "boxes2" should' \ - f'be in the same type, got {type(boxes1)} and {type(boxes2)}.' - - assert mode in ['iou', 'iof'] - - rows = len(boxes1) - cols = len(boxes2) - if rows * cols == 0: - return boxes1.tensor.new(rows, cols) - - # height overlap - overlaps_h = cls.height_overlaps(boxes1, boxes2) - - # bev overlap - iou2d = box_iou_rotated(boxes1.bev, boxes2.bev) - areas1 = (boxes1.bev[:, 2] * boxes1.bev[:, 3]).unsqueeze(1).expand( - rows, cols) - areas2 = (boxes2.bev[:, 2] * boxes2.bev[:, 3]).unsqueeze(0).expand( - rows, cols) - overlaps_bev = iou2d * (areas1 + areas2) / (1 + iou2d) - - # 3d overlaps - overlaps_3d = overlaps_bev.to(boxes1.device) * overlaps_h - - volume1 = boxes1.volume.view(-1, 1) - volume2 = boxes2.volume.view(1, -1) - - if mode == 'iou': - # the clamp func is used to avoid division of 0 - iou3d = overlaps_3d / torch.clamp( - volume1 + volume2 - overlaps_3d, min=1e-8) - else: - iou3d = overlaps_3d / torch.clamp(volume1, min=1e-8) - - return iou3d - - def new_box(self, data): - """Create a new box object with data. - - The new box and its tensor has the similar properties - as self and self.tensor, respectively. - - Args: - data (torch.Tensor | numpy.array | list): Data to be copied. - - Returns: - :obj:`BaseInstance3DBoxes`: A new bbox object with ``data``, - the object's other properties are similar to ``self``. - """ - new_tensor = self.tensor.new_tensor(data) \ - if not isinstance(data, torch.Tensor) else data.to(self.device) - original_type = type(self) - return original_type( - new_tensor, box_dim=self.box_dim, with_yaw=self.with_yaw) - - def points_in_boxes_part(self, points, boxes_override=None): - """Find the box in which each point is. - - Args: - points (torch.Tensor): Points in shape (1, M, 3) or (M, 3), - 3 dimensions are (x, y, z) in LiDAR or depth coordinate. - boxes_override (torch.Tensor, optional): Boxes to override - `self.tensor`. Defaults to None. - - Returns: - torch.Tensor: The index of the first box that each point - is in, in shape (M, ). Default value is -1 - (if the point is not enclosed by any box). 
- - Note: - If a point is enclosed by multiple boxes, the index of the - first box will be returned. - """ - if boxes_override is not None: - boxes = boxes_override - else: - boxes = self.tensor - if points.dim() == 2: - points = points.unsqueeze(0) - box_idx = points_in_boxes_part(points, - boxes.unsqueeze(0).to( - points.device)).squeeze(0) - return box_idx - - def points_in_boxes_all(self, points, boxes_override=None): - """Find all boxes in which each point is. - - Args: - points (torch.Tensor): Points in shape (1, M, 3) or (M, 3), - 3 dimensions are (x, y, z) in LiDAR or depth coordinate. - boxes_override (torch.Tensor, optional): Boxes to override - `self.tensor`. Defaults to None. - - Returns: - torch.Tensor: A tensor indicating whether a point is in a box, - in shape (M, T). T is the number of boxes. Denote this - tensor as A, if the m^th point is in the t^th box, then - `A[m, t] == 1`, elsewise `A[m, t] == 0`. - """ - if boxes_override is not None: - boxes = boxes_override - else: - boxes = self.tensor - - points_clone = points.clone()[..., :3] - if points_clone.dim() == 2: - points_clone = points_clone.unsqueeze(0) - else: - assert points_clone.dim() == 3 and points_clone.shape[0] == 1 - - boxes = boxes.to(points_clone.device).unsqueeze(0) - box_idxs_of_pts = points_in_boxes_all(points_clone, boxes) - - return box_idxs_of_pts.squeeze(0) - - def points_in_boxes(self, points, boxes_override=None): - warnings.warn('DeprecationWarning: points_in_boxes is a ' - 'deprecated method, please consider using ' - 'points_in_boxes_part.') - return self.points_in_boxes_part(points, boxes_override) - - def points_in_boxes_batch(self, points, boxes_override=None): - warnings.warn('DeprecationWarning: points_in_boxes_batch is a ' - 'deprecated method, please consider using ' - 'points_in_boxes_all.') - return self.points_in_boxes_all(points, boxes_override) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/box_3d_mode.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/box_3d_mode.py deleted file mode 100644 index 3048b0ad..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/box_3d_mode.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import IntEnum, unique - -import numpy as np -import torch - -from .base_box3d import BaseInstance3DBoxes -from .cam_box3d import CameraInstance3DBoxes -from .depth_box3d import DepthInstance3DBoxes -from .lidar_box3d import LiDARInstance3DBoxes -from .utils import limit_period - - -@unique -class Box3DMode(IntEnum): - r"""Enum of different ways to represent a box. - - Coordinates in LiDAR: - - .. code-block:: none - - up z - ^ x front - | / - | / - left y <------ 0 - - The relative coordinate of bottom center in a LiDAR box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - - Coordinates in camera: - - .. code-block:: none - - z front - / - / - 0 ------> x right - | - | - v - down y - - The relative coordinate of bottom center in a CAM box is [0.5, 1.0, 0.5], - and the yaw is around the y axis, thus the rotation axis=1. - - Coordinates in Depth mode: - - .. code-block:: none - - up z - ^ y front - | / - | / - 0 ------> x right - - The relative coordinate of bottom center in a DEPTH box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - """ - - LIDAR = 0 - CAM = 1 - DEPTH = 2 - - @staticmethod - def convert(box, src, dst, rt_mat=None, with_yaw=True): - """Convert boxes from `src` mode to `dst` mode. 
- - Args: - box (tuple | list | np.ndarray | - torch.Tensor | :obj:`BaseInstance3DBoxes`): - Can be a k-tuple, k-list or an Nxk array/tensor, where k = 7. - src (:obj:`Box3DMode`): The src Box mode. - dst (:obj:`Box3DMode`): The target Box mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - with_yaw (bool, optional): If `box` is an instance of - :obj:`BaseInstance3DBoxes`, whether or not it has a yaw angle. - Defaults to True. - - Returns: - (tuple | list | np.ndarray | torch.Tensor | - :obj:`BaseInstance3DBoxes`): - The converted box of the same type. - """ - if src == dst: - return box - - is_numpy = isinstance(box, np.ndarray) - is_Instance3DBoxes = isinstance(box, BaseInstance3DBoxes) - single_box = isinstance(box, (list, tuple)) - if single_box: - assert len(box) >= 7, ( - 'Box3DMode.convert takes either a k-tuple/list or ' - 'an Nxk array/tensor, where k >= 7') - arr = torch.tensor(box)[None, :] - else: - # avoid modifying the input box - if is_numpy: - arr = torch.from_numpy(np.asarray(box)).clone() - elif is_Instance3DBoxes: - arr = box.tensor.clone() - else: - arr = box.clone() - - if is_Instance3DBoxes: - with_yaw = box.with_yaw - - # convert box from `src` mode to `dst` mode. - x_size, y_size, z_size = arr[..., 3:4], arr[..., 4:5], arr[..., 5:6] - if with_yaw: - yaw = arr[..., 6:7] - if src == Box3DMode.LIDAR and dst == Box3DMode.CAM: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, -1, 0], [0, 0, -1], [1, 0, 0]]) - xyz_size = torch.cat([x_size, z_size, y_size], dim=-1) - if with_yaw: - yaw = -yaw - np.pi / 2 - yaw = limit_period(yaw, period=np.pi * 2) - elif src == Box3DMode.CAM and dst == Box3DMode.LIDAR: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, 0, 1], [-1, 0, 0], [0, -1, 0]]) - xyz_size = torch.cat([x_size, z_size, y_size], dim=-1) - if with_yaw: - yaw = -yaw - np.pi / 2 - yaw = limit_period(yaw, period=np.pi * 2) - elif src == Box3DMode.DEPTH and dst == Box3DMode.CAM: - if rt_mat is None: - rt_mat = arr.new_tensor([[1, 0, 0], [0, 0, -1], [0, 1, 0]]) - xyz_size = torch.cat([x_size, z_size, y_size], dim=-1) - if with_yaw: - yaw = -yaw - elif src == Box3DMode.CAM and dst == Box3DMode.DEPTH: - if rt_mat is None: - rt_mat = arr.new_tensor([[1, 0, 0], [0, 0, 1], [0, -1, 0]]) - xyz_size = torch.cat([x_size, z_size, y_size], dim=-1) - if with_yaw: - yaw = -yaw - elif src == Box3DMode.LIDAR and dst == Box3DMode.DEPTH: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, -1, 0], [1, 0, 0], [0, 0, 1]]) - xyz_size = torch.cat([x_size, y_size, z_size], dim=-1) - if with_yaw: - yaw = yaw + np.pi / 2 - yaw = limit_period(yaw, period=np.pi * 2) - elif src == Box3DMode.DEPTH and dst == Box3DMode.LIDAR: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, 1, 0], [-1, 0, 0], [0, 0, 1]]) - xyz_size = torch.cat([x_size, y_size, z_size], dim=-1) - if with_yaw: - yaw = yaw - np.pi / 2 - yaw = limit_period(yaw, period=np.pi * 2) - else: - raise NotImplementedError( - f'Conversion from Box3DMode {src} to {dst} ' - 'is not supported yet') - - if not isinstance(rt_mat, torch.Tensor): - rt_mat = arr.new_tensor(rt_mat) - if rt_mat.size(1) == 4: - extended_xyz = torch.cat( - [arr[..., :3], arr.new_ones(arr.size(0), 1)], dim=-1) - xyz = extended_xyz @ rt_mat.t() - else: - xyz = arr[..., :3] @ rt_mat.t() - - if with_yaw: - remains 
= arr[..., 7:] - arr = torch.cat([xyz[..., :3], xyz_size, yaw, remains], dim=-1) - else: - remains = arr[..., 6:] - arr = torch.cat([xyz[..., :3], xyz_size, remains], dim=-1) - - # convert arr to the original type - original_type = type(box) - if single_box: - return original_type(arr.flatten().tolist()) - if is_numpy: - return arr.numpy() - elif is_Instance3DBoxes: - if dst == Box3DMode.CAM: - target_type = CameraInstance3DBoxes - elif dst == Box3DMode.LIDAR: - target_type = LiDARInstance3DBoxes - elif dst == Box3DMode.DEPTH: - target_type = DepthInstance3DBoxes - else: - raise NotImplementedError( - f'Conversion to {dst} through {original_type}' - ' is not supported yet') - return target_type(arr, box_dim=arr.size(-1), with_yaw=with_yaw) - else: - return arr diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/cam_box3d.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/cam_box3d.py deleted file mode 100644 index b7086134..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/cam_box3d.py +++ /dev/null @@ -1,354 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ...points import BasePoints -from .base_box3d import BaseInstance3DBoxes -from .utils import rotation_3d_in_axis, yaw2local - - -class CameraInstance3DBoxes(BaseInstance3DBoxes): - """3D boxes of instances in CAM coordinates. - - Coordinates in camera: - - .. code-block:: none - - z front (yaw=-0.5*pi) - / - / - 0 ------> x right (yaw=0) - | - | - v - down y - - The relative coordinate of bottom center in a CAM box is (0.5, 1.0, 0.5), - and the yaw is around the y axis, thus the rotation axis=1. - The yaw is 0 at the positive direction of x axis, and decreases from - the positive direction of x to the positive direction of z. - - Attributes: - tensor (torch.Tensor): Float matrix in shape (N, box_dim). - box_dim (int): Integer indicating the dimension of a box - Each row is (x, y, z, x_size, y_size, z_size, yaw, ...). - with_yaw (bool): If True, the value of yaw will be set to 0 as - axis-aligned boxes tightly enclosing the original boxes. - """ - YAW_AXIS = 1 - - def __init__(self, - tensor, - box_dim=7, - with_yaw=True, - origin=(0.5, 1.0, 0.5)): - if isinstance(tensor, torch.Tensor): - device = tensor.device - else: - device = torch.device('cpu') - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=device) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that - # does not depend on the inputs (and consequently confuses jit) - tensor = tensor.reshape((0, box_dim)).to( - dtype=torch.float32, device=device) - assert tensor.dim() == 2 and tensor.size(-1) == box_dim, tensor.size() - - if tensor.shape[-1] == 6: - # If the dimension of boxes is 6, we expand box_dim by padding - # 0 as a fake yaw and set with_yaw to False. 
- assert box_dim == 6 - fake_rot = tensor.new_zeros(tensor.shape[0], 1) - tensor = torch.cat((tensor, fake_rot), dim=-1) - self.box_dim = box_dim + 1 - self.with_yaw = False - else: - self.box_dim = box_dim - self.with_yaw = with_yaw - self.tensor = tensor.clone() - - if origin != (0.5, 1.0, 0.5): - dst = self.tensor.new_tensor((0.5, 1.0, 0.5)) - src = self.tensor.new_tensor(origin) - self.tensor[:, :3] += self.tensor[:, 3:6] * (dst - src) - - @property - def height(self): - """torch.Tensor: A vector with height of each box in shape (N, ).""" - return self.tensor[:, 4] - - @property - def top_height(self): - """torch.Tensor: - A vector with the top height of each box in shape (N, ).""" - # the positive direction is down rather than up - return self.bottom_height - self.height - - @property - def bottom_height(self): - """torch.Tensor: - A vector with bottom's height of each box in shape (N, ).""" - return self.tensor[:, 1] - - @property - def local_yaw(self): - """torch.Tensor: - A vector with local yaw of each box in shape (N, ). - local_yaw equals to alpha in kitti, which is commonly - used in monocular 3D object detection task, so only - :obj:`CameraInstance3DBoxes` has the property. - """ - yaw = self.yaw - loc = self.gravity_center - local_yaw = yaw2local(yaw, loc) - - return local_yaw - - @property - def gravity_center(self): - """torch.Tensor: A tensor with center of each box in shape (N, 3).""" - bottom_center = self.bottom_center - gravity_center = torch.zeros_like(bottom_center) - gravity_center[:, [0, 2]] = bottom_center[:, [0, 2]] - gravity_center[:, 1] = bottom_center[:, 1] - self.tensor[:, 4] * 0.5 - return gravity_center - - @property - def corners(self): - """torch.Tensor: Coordinates of corners of all the boxes in - shape (N, 8, 3). - - Convert the boxes to in clockwise order, in the form of - (x0y0z0, x0y0z1, x0y1z1, x0y1z0, x1y0z0, x1y0z1, x1y1z1, x1y1z0) - - .. code-block:: none - - front z - / - / - (x0, y0, z1) + ----------- + (x1, y0, z1) - /| / | - / | / | - (x0, y0, z0) + ----------- + + (x1, y1, z1) - | / . | / - | / origin | / - (x0, y1, z0) + ----------- + -------> x right - | (x1, y1, z0) - | - v - down y - """ - if self.tensor.numel() == 0: - return torch.empty([0, 8, 3], device=self.tensor.device) - - dims = self.dims - corners_norm = torch.from_numpy( - np.stack(np.unravel_index(np.arange(8), [2] * 3), axis=1)).to( - device=dims.device, dtype=dims.dtype) - - corners_norm = corners_norm[[0, 1, 3, 2, 4, 5, 7, 6]] - # use relative origin [0.5, 1, 0.5] - corners_norm = corners_norm - dims.new_tensor([0.5, 1, 0.5]) - corners = dims.view([-1, 1, 3]) * corners_norm.reshape([1, 8, 3]) - - corners = rotation_3d_in_axis( - corners, self.tensor[:, 6], axis=self.YAW_AXIS) - corners += self.tensor[:, :3].view(-1, 1, 3) - return corners - - @property - def bev(self): - """torch.Tensor: 2D BEV box of each box with rotation - in XYWHR format, in shape (N, 5).""" - bev = self.tensor[:, [0, 2, 3, 5, 6]].clone() - # positive direction of the gravity axis - # in cam coord system points to the earth - # so the bev yaw angle needs to be reversed - bev[:, -1] = -bev[:, -1] - return bev - - def rotate(self, angle, points=None): - """Rotate boxes with points (optional) with the given angle or rotation - matrix. - - Args: - angle (float | torch.Tensor | np.ndarray): - Rotation angle or rotation matrix. - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to rotate. Defaults to None. 
- - Returns: - tuple or None: When ``points`` is None, the function returns - None, otherwise it returns the rotated points and the - rotation matrix ``rot_mat_T``. - """ - if not isinstance(angle, torch.Tensor): - angle = self.tensor.new_tensor(angle) - - assert angle.shape == torch.Size([3, 3]) or angle.numel() == 1, \ - f'invalid rotation angle shape {angle.shape}' - - if angle.numel() == 1: - self.tensor[:, 0:3], rot_mat_T = rotation_3d_in_axis( - self.tensor[:, 0:3], - angle, - axis=self.YAW_AXIS, - return_mat=True) - else: - rot_mat_T = angle - rot_sin = rot_mat_T[2, 0] - rot_cos = rot_mat_T[0, 0] - angle = np.arctan2(rot_sin, rot_cos) - self.tensor[:, 0:3] = self.tensor[:, 0:3] @ rot_mat_T - - self.tensor[:, 6] += angle - - if points is not None: - if isinstance(points, torch.Tensor): - points[:, :3] = points[:, :3] @ rot_mat_T - elif isinstance(points, np.ndarray): - rot_mat_T = rot_mat_T.cpu().numpy() - points[:, :3] = np.dot(points[:, :3], rot_mat_T) - elif isinstance(points, BasePoints): - points.rotate(rot_mat_T) - else: - raise ValueError - return points, rot_mat_T - - def flip(self, bev_direction='horizontal', points=None): - """Flip the boxes in BEV along given BEV direction. - - In CAM coordinates, it flips the x (horizontal) or z (vertical) axis. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to flip. Defaults to None. - - Returns: - torch.Tensor, numpy.ndarray or None: Flipped points. - """ - assert bev_direction in ('horizontal', 'vertical') - if bev_direction == 'horizontal': - self.tensor[:, 0::7] = -self.tensor[:, 0::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] + np.pi - elif bev_direction == 'vertical': - self.tensor[:, 2::7] = -self.tensor[:, 2::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] - - if points is not None: - assert isinstance(points, (torch.Tensor, np.ndarray, BasePoints)) - if isinstance(points, (torch.Tensor, np.ndarray)): - if bev_direction == 'horizontal': - points[:, 0] = -points[:, 0] - elif bev_direction == 'vertical': - points[:, 2] = -points[:, 2] - elif isinstance(points, BasePoints): - points.flip(bev_direction) - return points - - @classmethod - def height_overlaps(cls, boxes1, boxes2, mode='iou'): - """Calculate height overlaps of two boxes. - - This function calculates the height overlaps between ``boxes1`` and - ``boxes2``, where ``boxes1`` and ``boxes2`` should be in the same type. - - Args: - boxes1 (:obj:`CameraInstance3DBoxes`): Boxes 1 contain N boxes. - boxes2 (:obj:`CameraInstance3DBoxes`): Boxes 2 contain M boxes. - mode (str, optional): Mode of iou calculation. Defaults to 'iou'. - - Returns: - torch.Tensor: Calculated iou of boxes' heights. - """ - assert isinstance(boxes1, CameraInstance3DBoxes) - assert isinstance(boxes2, CameraInstance3DBoxes) - - boxes1_top_height = boxes1.top_height.view(-1, 1) - boxes1_bottom_height = boxes1.bottom_height.view(-1, 1) - boxes2_top_height = boxes2.top_height.view(1, -1) - boxes2_bottom_height = boxes2.bottom_height.view(1, -1) - - # positive direction of the gravity axis - # in cam coord system points to the earth - heighest_of_bottom = torch.min(boxes1_bottom_height, - boxes2_bottom_height) - lowest_of_top = torch.max(boxes1_top_height, boxes2_top_height) - overlaps_h = torch.clamp(heighest_of_bottom - lowest_of_top, min=0) - return overlaps_h - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. 
- - Args: - dst (:obj:`Box3DMode`): The target Box mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from ``src`` coordinates to ``dst`` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BaseInstance3DBoxes`: - The converted box of the same type in the ``dst`` mode. - """ - from .box_3d_mode import Box3DMode - return Box3DMode.convert( - box=self, src=Box3DMode.CAM, dst=dst, rt_mat=rt_mat) - - def points_in_boxes_part(self, points, boxes_override=None): - """Find the box in which each point is. - - Args: - points (torch.Tensor): Points in shape (1, M, 3) or (M, 3), - 3 dimensions are (x, y, z) in LiDAR or depth coordinate. - boxes_override (torch.Tensor, optional): Boxes to override - `self.tensor `. Defaults to None. - - Returns: - torch.Tensor: The index of the box in which - each point is, in shape (M, ). Default value is -1 - (if the point is not enclosed by any box). - """ - from .coord_3d_mode import Coord3DMode - - points_lidar = Coord3DMode.convert(points, Coord3DMode.CAM, - Coord3DMode.LIDAR) - if boxes_override is not None: - boxes_lidar = boxes_override - else: - boxes_lidar = Coord3DMode.convert(self.tensor, Coord3DMode.CAM, - Coord3DMode.LIDAR) - - box_idx = super().points_in_boxes_part(points_lidar, boxes_lidar) - return box_idx - - def points_in_boxes_all(self, points, boxes_override=None): - """Find all boxes in which each point is. - - Args: - points (torch.Tensor): Points in shape (1, M, 3) or (M, 3), - 3 dimensions are (x, y, z) in LiDAR or depth coordinate. - boxes_override (torch.Tensor, optional): Boxes to override - `self.tensor `. Defaults to None. - - Returns: - torch.Tensor: The index of all boxes in which each point is, - in shape (B, M, T). - """ - from .coord_3d_mode import Coord3DMode - - points_lidar = Coord3DMode.convert(points, Coord3DMode.CAM, - Coord3DMode.LIDAR) - if boxes_override is not None: - boxes_lidar = boxes_override - else: - boxes_lidar = Coord3DMode.convert(self.tensor, Coord3DMode.CAM, - Coord3DMode.LIDAR) - - box_idx = super().points_in_boxes_all(points_lidar, boxes_lidar) - return box_idx diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/coord_3d_mode.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/coord_3d_mode.py deleted file mode 100644 index 6309b654..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/coord_3d_mode.py +++ /dev/null @@ -1,234 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import IntEnum, unique - -import numpy as np -import torch - -from ...points import BasePoints, CameraPoints, DepthPoints, LiDARPoints -from .base_box3d import BaseInstance3DBoxes -from .box_3d_mode import Box3DMode - - -@unique -class Coord3DMode(IntEnum): - r"""Enum of different ways to represent a box - and point cloud. - - Coordinates in LiDAR: - - .. code-block:: none - - up z - ^ x front - | / - | / - left y <------ 0 - - The relative coordinate of bottom center in a LiDAR box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - - Coordinates in camera: - - .. code-block:: none - - z front - / - / - 0 ------> x right - | - | - v - down y - - The relative coordinate of bottom center in a CAM box is [0.5, 1.0, 0.5], - and the yaw is around the y axis, thus the rotation axis=1. - - Coordinates in Depth mode: - - .. 
code-block:: none - - up z - ^ y front - | / - | / - 0 ------> x right - - The relative coordinate of bottom center in a DEPTH box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - """ - - LIDAR = 0 - CAM = 1 - DEPTH = 2 - - @staticmethod - def convert(input, src, dst, rt_mat=None, with_yaw=True, is_point=True): - """Convert boxes or points from `src` mode to `dst` mode. - - Args: - input (tuple | list | np.ndarray | torch.Tensor | - :obj:`BaseInstance3DBoxes` | :obj:`BasePoints`): - Can be a k-tuple, k-list or an Nxk array/tensor, where k = 7. - src (:obj:`Box3DMode` | :obj:`Coord3DMode`): The source mode. - dst (:obj:`Box3DMode` | :obj:`Coord3DMode`): The target mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - with_yaw (bool): If `box` is an instance of - :obj:`BaseInstance3DBoxes`, whether or not it has a yaw angle. - Defaults to True. - is_point (bool): If `input` is neither an instance of - :obj:`BaseInstance3DBoxes` nor an instance of - :obj:`BasePoints`, whether or not it is point data. - Defaults to True. - - Returns: - (tuple | list | np.ndarray | torch.Tensor | - :obj:`BaseInstance3DBoxes` | :obj:`BasePoints`): - The converted box of the same type. - """ - if isinstance(input, BaseInstance3DBoxes): - return Coord3DMode.convert_box( - input, src, dst, rt_mat=rt_mat, with_yaw=with_yaw) - elif isinstance(input, BasePoints): - return Coord3DMode.convert_point(input, src, dst, rt_mat=rt_mat) - elif isinstance(input, (tuple, list, np.ndarray, torch.Tensor)): - if is_point: - return Coord3DMode.convert_point( - input, src, dst, rt_mat=rt_mat) - else: - return Coord3DMode.convert_box( - input, src, dst, rt_mat=rt_mat, with_yaw=with_yaw) - else: - raise NotImplementedError - - @staticmethod - def convert_box(box, src, dst, rt_mat=None, with_yaw=True): - """Convert boxes from `src` mode to `dst` mode. - - Args: - box (tuple | list | np.ndarray | - torch.Tensor | :obj:`BaseInstance3DBoxes`): - Can be a k-tuple, k-list or an Nxk array/tensor, where k = 7. - src (:obj:`Box3DMode`): The src Box mode. - dst (:obj:`Box3DMode`): The target Box mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - with_yaw (bool): If `box` is an instance of - :obj:`BaseInstance3DBoxes`, whether or not it has a yaw angle. - Defaults to True. - - Returns: - (tuple | list | np.ndarray | torch.Tensor | - :obj:`BaseInstance3DBoxes`): - The converted box of the same type. - """ - return Box3DMode.convert(box, src, dst, rt_mat=rt_mat) - - @staticmethod - def convert_point(point, src, dst, rt_mat=None): - """Convert points from `src` mode to `dst` mode. - - Args: - point (tuple | list | np.ndarray | - torch.Tensor | :obj:`BasePoints`): - Can be a k-tuple, k-list or an Nxk array/tensor. - src (:obj:`CoordMode`): The src Point mode. - dst (:obj:`CoordMode`): The target Point mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. 
- The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - (tuple | list | np.ndarray | torch.Tensor | :obj:`BasePoints`): - The converted point of the same type. - """ - if src == dst: - return point - - is_numpy = isinstance(point, np.ndarray) - is_InstancePoints = isinstance(point, BasePoints) - single_point = isinstance(point, (list, tuple)) - if single_point: - assert len(point) >= 3, ( - 'CoordMode.convert takes either a k-tuple/list or ' - 'an Nxk array/tensor, where k >= 3') - arr = torch.tensor(point)[None, :] - else: - # avoid modifying the input point - if is_numpy: - arr = torch.from_numpy(np.asarray(point)).clone() - elif is_InstancePoints: - arr = point.tensor.clone() - else: - arr = point.clone() - - # convert point from `src` mode to `dst` mode. - if src == Coord3DMode.LIDAR and dst == Coord3DMode.CAM: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, -1, 0], [0, 0, -1], [1, 0, 0]]) - elif src == Coord3DMode.CAM and dst == Coord3DMode.LIDAR: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, 0, 1], [-1, 0, 0], [0, -1, 0]]) - elif src == Coord3DMode.DEPTH and dst == Coord3DMode.CAM: - if rt_mat is None: - rt_mat = arr.new_tensor([[1, 0, 0], [0, 0, -1], [0, 1, 0]]) - elif src == Coord3DMode.CAM and dst == Coord3DMode.DEPTH: - if rt_mat is None: - rt_mat = arr.new_tensor([[1, 0, 0], [0, 0, 1], [0, -1, 0]]) - elif src == Coord3DMode.LIDAR and dst == Coord3DMode.DEPTH: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, -1, 0], [1, 0, 0], [0, 0, 1]]) - elif src == Coord3DMode.DEPTH and dst == Coord3DMode.LIDAR: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, 1, 0], [-1, 0, 0], [0, 0, 1]]) - else: - raise NotImplementedError( - f'Conversion from Coord3DMode {src} to {dst} ' - 'is not supported yet') - - if not isinstance(rt_mat, torch.Tensor): - rt_mat = arr.new_tensor(rt_mat) - if rt_mat.size(1) == 4: - extended_xyz = torch.cat( - [arr[..., :3], arr.new_ones(arr.size(0), 1)], dim=-1) - xyz = extended_xyz @ rt_mat.t() - else: - xyz = arr[..., :3] @ rt_mat.t() - - remains = arr[..., 3:] - arr = torch.cat([xyz[..., :3], remains], dim=-1) - - # convert arr to the original type - original_type = type(point) - if single_point: - return original_type(arr.flatten().tolist()) - if is_numpy: - return arr.numpy() - elif is_InstancePoints: - if dst == Coord3DMode.CAM: - target_type = CameraPoints - elif dst == Coord3DMode.LIDAR: - target_type = LiDARPoints - elif dst == Coord3DMode.DEPTH: - target_type = DepthPoints - else: - raise NotImplementedError( - f'Conversion to {dst} through {original_type}' - ' is not supported yet') - return target_type( - arr, - points_dim=arr.size(-1), - attribute_dims=point.attribute_dims) - else: - return arr diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/depth_box3d.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/depth_box3d.py deleted file mode 100644 index dd9278bf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/depth_box3d.py +++ /dev/null @@ -1,270 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet3d.core.points import BasePoints -from .base_box3d import BaseInstance3DBoxes -from .utils import rotation_3d_in_axis - - -class DepthInstance3DBoxes(BaseInstance3DBoxes): - """3D boxes of instances in Depth coordinates. - - Coordinates in Depth: - - .. 
code-block:: none - - up z y front (yaw=-0.5*pi) - ^ ^ - | / - | / - 0 ------> x right (yaw=0) - - The relative coordinate of bottom center in a Depth box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - The yaw is 0 at the positive direction of x axis, and decreases from - the positive direction of x to the positive direction of y. - Also note that rotation of DepthInstance3DBoxes is counterclockwise, - which is reverse to the definition of the yaw angle (clockwise). - - A refactor is ongoing to make the three coordinate systems - easier to understand and convert between each other. - - Attributes: - tensor (torch.Tensor): Float matrix of N x box_dim. - box_dim (int): Integer indicates the dimension of a box - Each row is (x, y, z, x_size, y_size, z_size, yaw, ...). - with_yaw (bool): If True, the value of yaw will be set to 0 as minmax - boxes. - """ - YAW_AXIS = 2 - - @property - def gravity_center(self): - """torch.Tensor: A tensor with center of each box in shape (N, 3).""" - bottom_center = self.bottom_center - gravity_center = torch.zeros_like(bottom_center) - gravity_center[:, :2] = bottom_center[:, :2] - gravity_center[:, 2] = bottom_center[:, 2] + self.tensor[:, 5] * 0.5 - return gravity_center - - @property - def corners(self): - """torch.Tensor: Coordinates of corners of all the boxes - in shape (N, 8, 3). - - Convert the boxes to corners in clockwise order, in form of - ``(x0y0z0, x0y0z1, x0y1z1, x0y1z0, x1y0z0, x1y0z1, x1y1z1, x1y1z0)`` - - .. code-block:: none - - up z - front y ^ - / | - / | - (x0, y1, z1) + ----------- + (x1, y1, z1) - /| / | - / | / | - (x0, y0, z1) + ----------- + + (x1, y1, z0) - | / . | / - | / origin | / - (x0, y0, z0) + ----------- + --------> right x - (x1, y0, z0) - """ - if self.tensor.numel() == 0: - return torch.empty([0, 8, 3], device=self.tensor.device) - - dims = self.dims - corners_norm = torch.from_numpy( - np.stack(np.unravel_index(np.arange(8), [2] * 3), axis=1)).to( - device=dims.device, dtype=dims.dtype) - - corners_norm = corners_norm[[0, 1, 3, 2, 4, 5, 7, 6]] - # use relative origin (0.5, 0.5, 0) - corners_norm = corners_norm - dims.new_tensor([0.5, 0.5, 0]) - corners = dims.view([-1, 1, 3]) * corners_norm.reshape([1, 8, 3]) - - # rotate around z axis - corners = rotation_3d_in_axis( - corners, self.tensor[:, 6], axis=self.YAW_AXIS) - corners += self.tensor[:, :3].view(-1, 1, 3) - return corners - - def rotate(self, angle, points=None): - """Rotate boxes with points (optional) with the given angle or rotation - matrix. - - Args: - angle (float | torch.Tensor | np.ndarray): - Rotation angle or rotation matrix. - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to rotate. Defaults to None. - - Returns: - tuple or None: When ``points`` is None, the function returns - None, otherwise it returns the rotated points and the - rotation matrix ``rot_mat_T``. 
- """ - if not isinstance(angle, torch.Tensor): - angle = self.tensor.new_tensor(angle) - - assert angle.shape == torch.Size([3, 3]) or angle.numel() == 1, \ - f'invalid rotation angle shape {angle.shape}' - - if angle.numel() == 1: - self.tensor[:, 0:3], rot_mat_T = rotation_3d_in_axis( - self.tensor[:, 0:3], - angle, - axis=self.YAW_AXIS, - return_mat=True) - else: - rot_mat_T = angle - rot_sin = rot_mat_T[0, 1] - rot_cos = rot_mat_T[0, 0] - angle = np.arctan2(rot_sin, rot_cos) - self.tensor[:, 0:3] = self.tensor[:, 0:3] @ rot_mat_T - - if self.with_yaw: - self.tensor[:, 6] += angle - else: - # for axis-aligned boxes, we take the new - # enclosing axis-aligned boxes after rotation - corners_rot = self.corners @ rot_mat_T - new_x_size = corners_rot[..., 0].max( - dim=1, keepdim=True)[0] - corners_rot[..., 0].min( - dim=1, keepdim=True)[0] - new_y_size = corners_rot[..., 1].max( - dim=1, keepdim=True)[0] - corners_rot[..., 1].min( - dim=1, keepdim=True)[0] - self.tensor[:, 3:5] = torch.cat((new_x_size, new_y_size), dim=-1) - - if points is not None: - if isinstance(points, torch.Tensor): - points[:, :3] = points[:, :3] @ rot_mat_T - elif isinstance(points, np.ndarray): - rot_mat_T = rot_mat_T.cpu().numpy() - points[:, :3] = np.dot(points[:, :3], rot_mat_T) - elif isinstance(points, BasePoints): - points.rotate(rot_mat_T) - else: - raise ValueError - return points, rot_mat_T - - def flip(self, bev_direction='horizontal', points=None): - """Flip the boxes in BEV along given BEV direction. - - In Depth coordinates, it flips x (horizontal) or y (vertical) axis. - - Args: - bev_direction (str, optional): Flip direction - (horizontal or vertical). Defaults to 'horizontal'. - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to flip. Defaults to None. - - Returns: - torch.Tensor, numpy.ndarray or None: Flipped points. - """ - assert bev_direction in ('horizontal', 'vertical') - if bev_direction == 'horizontal': - self.tensor[:, 0::7] = -self.tensor[:, 0::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] + np.pi - elif bev_direction == 'vertical': - self.tensor[:, 1::7] = -self.tensor[:, 1::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] - - if points is not None: - assert isinstance(points, (torch.Tensor, np.ndarray, BasePoints)) - if isinstance(points, (torch.Tensor, np.ndarray)): - if bev_direction == 'horizontal': - points[:, 0] = -points[:, 0] - elif bev_direction == 'vertical': - points[:, 1] = -points[:, 1] - elif isinstance(points, BasePoints): - points.flip(bev_direction) - return points - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`Box3DMode`): The target Box mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from ``src`` coordinates to ``dst`` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`DepthInstance3DBoxes`: - The converted box of the same type in the ``dst`` mode. - """ - from .box_3d_mode import Box3DMode - return Box3DMode.convert( - box=self, src=Box3DMode.DEPTH, dst=dst, rt_mat=rt_mat) - - def enlarged_box(self, extra_width): - """Enlarge the length, width and height boxes. - - Args: - extra_width (float | torch.Tensor): Extra width to enlarge the box. - - Returns: - :obj:`DepthInstance3DBoxes`: Enlarged boxes. 
- """ - enlarged_boxes = self.tensor.clone() - enlarged_boxes[:, 3:6] += extra_width * 2 - # bottom center z minus extra_width - enlarged_boxes[:, 2] -= extra_width - return self.new_box(enlarged_boxes) - - def get_surface_line_center(self): - """Compute surface and line center of bounding boxes. - - Returns: - torch.Tensor: Surface and line center of bounding boxes. - """ - obj_size = self.dims - center = self.gravity_center.view(-1, 1, 3) - batch_size = center.shape[0] - - rot_sin = torch.sin(-self.yaw) - rot_cos = torch.cos(-self.yaw) - rot_mat_T = self.yaw.new_zeros(tuple(list(self.yaw.shape) + [3, 3])) - rot_mat_T[..., 0, 0] = rot_cos - rot_mat_T[..., 0, 1] = -rot_sin - rot_mat_T[..., 1, 0] = rot_sin - rot_mat_T[..., 1, 1] = rot_cos - rot_mat_T[..., 2, 2] = 1 - - # Get the object surface center - offset = obj_size.new_tensor([[0, 0, 1], [0, 0, -1], [0, 1, 0], - [0, -1, 0], [1, 0, 0], [-1, 0, 0]]) - offset = offset.view(1, 6, 3) / 2 - surface_3d = (offset * - obj_size.view(batch_size, 1, 3).repeat(1, 6, 1)).reshape( - -1, 3) - - # Get the object line center - offset = obj_size.new_tensor([[1, 0, 1], [-1, 0, 1], [0, 1, 1], - [0, -1, 1], [1, 0, -1], [-1, 0, -1], - [0, 1, -1], [0, -1, -1], [1, 1, 0], - [1, -1, 0], [-1, 1, 0], [-1, -1, 0]]) - offset = offset.view(1, 12, 3) / 2 - - line_3d = (offset * - obj_size.view(batch_size, 1, 3).repeat(1, 12, 1)).reshape( - -1, 3) - - surface_rot = rot_mat_T.repeat(6, 1, 1) - surface_3d = torch.matmul(surface_3d.unsqueeze(-2), - surface_rot).squeeze(-2) - surface_center = center.repeat(1, 6, 1).reshape(-1, 3) + surface_3d - - line_rot = rot_mat_T.repeat(12, 1, 1) - line_3d = torch.matmul(line_3d.unsqueeze(-2), line_rot).squeeze(-2) - line_center = center.repeat(1, 12, 1).reshape(-1, 3) + line_3d - - return surface_center, line_center diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/lidar_box3d.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/lidar_box3d.py deleted file mode 100644 index 706a6c0d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/lidar_box3d.py +++ /dev/null @@ -1,210 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet3d.core.points import BasePoints -from .base_box3d import BaseInstance3DBoxes -from .utils import rotation_3d_in_axis - - -class LiDARInstance3DBoxes(BaseInstance3DBoxes): - """3D boxes of instances in LIDAR coordinates. - - Coordinates in LiDAR: - - .. code-block:: none - - up z x front (yaw=0) - ^ ^ - | / - | / - (yaw=0.5*pi) left y <------ 0 - - The relative coordinate of bottom center in a LiDAR box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - The yaw is 0 at the positive direction of x axis, and increases from - the positive direction of x to the positive direction of y. - - A refactor is ongoing to make the three coordinate systems - easier to understand and convert between each other. - - Attributes: - tensor (torch.Tensor): Float matrix of N x box_dim. - box_dim (int): Integer indicating the dimension of a box. - Each row is (x, y, z, x_size, y_size, z_size, yaw, ...). - with_yaw (bool): If True, the value of yaw will be set to 0 as minmax - boxes. 
- """ - YAW_AXIS = 2 - - @property - def gravity_center(self): - """torch.Tensor: A tensor with center of each box in shape (N, 3).""" - bottom_center = self.bottom_center - gravity_center = torch.zeros_like(bottom_center) - gravity_center[:, :2] = bottom_center[:, :2] - gravity_center[:, 2] = bottom_center[:, 2] + self.tensor[:, 5] * 0.5 - return gravity_center - - @property - def corners(self): - """torch.Tensor: Coordinates of corners of all the boxes - in shape (N, 8, 3). - - Convert the boxes to corners in clockwise order, in form of - ``(x0y0z0, x0y0z1, x0y1z1, x0y1z0, x1y0z0, x1y0z1, x1y1z1, x1y1z0)`` - - .. code-block:: none - - up z - front x ^ - / | - / | - (x1, y0, z1) + ----------- + (x1, y1, z1) - /| / | - / | / | - (x0, y0, z1) + ----------- + + (x1, y1, z0) - | / . | / - | / origin | / - left y<-------- + ----------- + (x0, y1, z0) - (x0, y0, z0) - """ - if self.tensor.numel() == 0: - return torch.empty([0, 8, 3], device=self.tensor.device) - - dims = self.dims - corners_norm = torch.from_numpy( - np.stack(np.unravel_index(np.arange(8), [2] * 3), axis=1)).to( - device=dims.device, dtype=dims.dtype) - - corners_norm = corners_norm[[0, 1, 3, 2, 4, 5, 7, 6]] - # use relative origin [0.5, 0.5, 0] - corners_norm = corners_norm - dims.new_tensor([0.5, 0.5, 0]) - corners = dims.view([-1, 1, 3]) * corners_norm.reshape([1, 8, 3]) - - # rotate around z axis - corners = rotation_3d_in_axis( - corners, self.tensor[:, 6], axis=self.YAW_AXIS) - corners += self.tensor[:, :3].view(-1, 1, 3) - return corners - - def rotate(self, angle, points=None): - """Rotate boxes with points (optional) with the given angle or rotation - matrix. - - Args: - angles (float | torch.Tensor | np.ndarray): - Rotation angle or rotation matrix. - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to rotate. Defaults to None. - - Returns: - tuple or None: When ``points`` is None, the function returns - None, otherwise it returns the rotated points and the - rotation matrix ``rot_mat_T``. - """ - if not isinstance(angle, torch.Tensor): - angle = self.tensor.new_tensor(angle) - - assert angle.shape == torch.Size([3, 3]) or angle.numel() == 1, \ - f'invalid rotation angle shape {angle.shape}' - - if angle.numel() == 1: - self.tensor[:, 0:3], rot_mat_T = rotation_3d_in_axis( - self.tensor[:, 0:3], - angle, - axis=self.YAW_AXIS, - return_mat=True) - else: - rot_mat_T = angle - rot_sin = rot_mat_T[0, 1] - rot_cos = rot_mat_T[0, 0] - angle = np.arctan2(rot_sin, rot_cos) - self.tensor[:, 0:3] = self.tensor[:, 0:3] @ rot_mat_T - - self.tensor[:, 6] += angle - - if self.tensor.shape[1] == 9: - # rotate velo vector - self.tensor[:, 7:9] = self.tensor[:, 7:9] @ rot_mat_T[:2, :2] - - if points is not None: - if isinstance(points, torch.Tensor): - points[:, :3] = points[:, :3] @ rot_mat_T - elif isinstance(points, np.ndarray): - rot_mat_T = rot_mat_T.cpu().numpy() - points[:, :3] = np.dot(points[:, :3], rot_mat_T) - elif isinstance(points, BasePoints): - points.rotate(rot_mat_T) - else: - raise ValueError - return points, rot_mat_T - - def flip(self, bev_direction='horizontal', points=None): - """Flip the boxes in BEV along given BEV direction. - - In LIDAR coordinates, it flips the y (horizontal) or x (vertical) axis. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to flip. Defaults to None. - - Returns: - torch.Tensor, numpy.ndarray or None: Flipped points. 
- """ - assert bev_direction in ('horizontal', 'vertical') - if bev_direction == 'horizontal': - self.tensor[:, 1::7] = -self.tensor[:, 1::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] - elif bev_direction == 'vertical': - self.tensor[:, 0::7] = -self.tensor[:, 0::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] + np.pi - - if points is not None: - assert isinstance(points, (torch.Tensor, np.ndarray, BasePoints)) - if isinstance(points, (torch.Tensor, np.ndarray)): - if bev_direction == 'horizontal': - points[:, 1] = -points[:, 1] - elif bev_direction == 'vertical': - points[:, 0] = -points[:, 0] - elif isinstance(points, BasePoints): - points.flip(bev_direction) - return points - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`Box3DMode`): the target Box mode - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from ``src`` coordinates to ``dst`` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BaseInstance3DBoxes`: - The converted box of the same type in the ``dst`` mode. - """ - from .box_3d_mode import Box3DMode - return Box3DMode.convert( - box=self, src=Box3DMode.LIDAR, dst=dst, rt_mat=rt_mat) - - def enlarged_box(self, extra_width): - """Enlarge the length, width and height boxes. - - Args: - extra_width (float | torch.Tensor): Extra width to enlarge the box. - - Returns: - :obj:`LiDARInstance3DBoxes`: Enlarged boxes. - """ - enlarged_boxes = self.tensor.clone() - enlarged_boxes[:, 3:6] += extra_width * 2 - # bottom center z minus extra_width - enlarged_boxes[:, 2] -= extra_width - return self.new_box(enlarged_boxes) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/utils.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/utils.py deleted file mode 100644 index 6ebaabe0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/structures/utils.py +++ /dev/null @@ -1,342 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -from logging import warning - -import numpy as np -import torch - -from mmdet3d.core.utils import array_converter - - -@array_converter(apply_to=('val', )) -def limit_period(val, offset=0.5, period=np.pi): - """Limit the value into a period for periodic function. - - Args: - val (torch.Tensor | np.ndarray): The value to be converted. - offset (float, optional): Offset to set the value range. - Defaults to 0.5. - period ([type], optional): Period of the value. Defaults to np.pi. - - Returns: - (torch.Tensor | np.ndarray): Value in the range of - [-offset * period, (1-offset) * period] - """ - limited_val = val - torch.floor(val / period + offset) * period - return limited_val - - -@array_converter(apply_to=('points', 'angles')) -def rotation_3d_in_axis(points, - angles, - axis=0, - return_mat=False, - clockwise=False): - """Rotate points by angles according to axis. - - Args: - points (np.ndarray | torch.Tensor | list | tuple ): - Points of shape (N, M, 3). - angles (np.ndarray | torch.Tensor | list | tuple | float): - Vector of angles in shape (N,) - axis (int, optional): The axis to be rotated. Defaults to 0. - return_mat: Whether or not return the rotation matrix (transposed). - Defaults to False. 
- clockwise: Whether the rotation is clockwise. Defaults to False. - - Raises: - ValueError: when the axis is not in range [0, 1, 2], it will - raise value error. - - Returns: - (torch.Tensor | np.ndarray): Rotated points in shape (N, M, 3). - """ - batch_free = len(points.shape) == 2 - if batch_free: - points = points[None] - - if isinstance(angles, float) or len(angles.shape) == 0: - angles = torch.full(points.shape[:1], angles) - - assert len(points.shape) == 3 and len(angles.shape) == 1 \ - and points.shape[0] == angles.shape[0], f'Incorrect shape of points ' \ - f'angles: {points.shape}, {angles.shape}' - - assert points.shape[-1] in [2, 3], \ - f'Points size should be 2 or 3 instead of {points.shape[-1]}' - - rot_sin = torch.sin(angles) - rot_cos = torch.cos(angles) - ones = torch.ones_like(rot_cos) - zeros = torch.zeros_like(rot_cos) - - if points.shape[-1] == 3: - if axis == 1 or axis == -2: - rot_mat_T = torch.stack([ - torch.stack([rot_cos, zeros, -rot_sin]), - torch.stack([zeros, ones, zeros]), - torch.stack([rot_sin, zeros, rot_cos]) - ]) - elif axis == 2 or axis == -1: - rot_mat_T = torch.stack([ - torch.stack([rot_cos, rot_sin, zeros]), - torch.stack([-rot_sin, rot_cos, zeros]), - torch.stack([zeros, zeros, ones]) - ]) - elif axis == 0 or axis == -3: - rot_mat_T = torch.stack([ - torch.stack([ones, zeros, zeros]), - torch.stack([zeros, rot_cos, rot_sin]), - torch.stack([zeros, -rot_sin, rot_cos]) - ]) - else: - raise ValueError(f'axis should in range ' - f'[-3, -2, -1, 0, 1, 2], got {axis}') - else: - rot_mat_T = torch.stack([ - torch.stack([rot_cos, rot_sin]), - torch.stack([-rot_sin, rot_cos]) - ]) - - if clockwise: - rot_mat_T = rot_mat_T.transpose(0, 1) - - if points.shape[0] == 0: - points_new = points - else: - points_new = torch.einsum('aij,jka->aik', points, rot_mat_T) - - if batch_free: - points_new = points_new.squeeze(0) - - if return_mat: - rot_mat_T = torch.einsum('jka->ajk', rot_mat_T) - if batch_free: - rot_mat_T = rot_mat_T.squeeze(0) - return points_new, rot_mat_T - else: - return points_new - - -@array_converter(apply_to=('boxes_xywhr', )) -def xywhr2xyxyr(boxes_xywhr): - """Convert a rotated boxes in XYWHR format to XYXYR format. - - Args: - boxes_xywhr (torch.Tensor | np.ndarray): Rotated boxes in XYWHR format. - - Returns: - (torch.Tensor | np.ndarray): Converted boxes in XYXYR format. - """ - boxes = torch.zeros_like(boxes_xywhr) - half_w = boxes_xywhr[..., 2] / 2 - half_h = boxes_xywhr[..., 3] / 2 - - boxes[..., 0] = boxes_xywhr[..., 0] - half_w - boxes[..., 1] = boxes_xywhr[..., 1] - half_h - boxes[..., 2] = boxes_xywhr[..., 0] + half_w - boxes[..., 3] = boxes_xywhr[..., 1] + half_h - boxes[..., 4] = boxes_xywhr[..., 4] - return boxes - - -def get_box_type(box_type): - """Get the type and mode of box structure. - - Args: - box_type (str): The type of box structure. - The valid value are "LiDAR", "Camera", or "Depth". - - Raises: - ValueError: A ValueError is raised when `box_type` - does not belong to the three valid types. - - Returns: - tuple: Box type and box mode. 
- """ - from .box_3d_mode import (Box3DMode, CameraInstance3DBoxes, - DepthInstance3DBoxes, LiDARInstance3DBoxes) - box_type_lower = box_type.lower() - if box_type_lower == 'lidar': - box_type_3d = LiDARInstance3DBoxes - box_mode_3d = Box3DMode.LIDAR - elif box_type_lower == 'camera': - box_type_3d = CameraInstance3DBoxes - box_mode_3d = Box3DMode.CAM - elif box_type_lower == 'depth': - box_type_3d = DepthInstance3DBoxes - box_mode_3d = Box3DMode.DEPTH - else: - raise ValueError('Only "box_type" of "camera", "lidar", "depth"' - f' are supported, got {box_type}') - - return box_type_3d, box_mode_3d - - -@array_converter(apply_to=('points_3d', 'proj_mat')) -def points_cam2img(points_3d, proj_mat, with_depth=False): - """Project points in camera coordinates to image coordinates. - - Args: - points_3d (torch.Tensor | np.ndarray): Points in shape (N, 3) - proj_mat (torch.Tensor | np.ndarray): - Transformation matrix between coordinates. - with_depth (bool, optional): Whether to keep depth in the output. - Defaults to False. - - Returns: - (torch.Tensor | np.ndarray): Points in image coordinates, - with shape [N, 2] if `with_depth=False`, else [N, 3]. - """ - points_shape = list(points_3d.shape) - points_shape[-1] = 1 - - assert len(proj_mat.shape) == 2, 'The dimension of the projection'\ - f' matrix should be 2 instead of {len(proj_mat.shape)}.' - d1, d2 = proj_mat.shape[:2] - assert (d1 == 3 and d2 == 3) or (d1 == 3 and d2 == 4) or ( - d1 == 4 and d2 == 4), 'The shape of the projection matrix'\ - f' ({d1}*{d2}) is not supported.' - if d1 == 3: - proj_mat_expanded = torch.eye( - 4, device=proj_mat.device, dtype=proj_mat.dtype) - proj_mat_expanded[:d1, :d2] = proj_mat - proj_mat = proj_mat_expanded - - # previous implementation use new_zeros, new_one yields better results - points_4 = torch.cat([points_3d, points_3d.new_ones(points_shape)], dim=-1) - - point_2d = points_4 @ proj_mat.T - point_2d_res = point_2d[..., :2] / point_2d[..., 2:3] - - if with_depth: - point_2d_res = torch.cat([point_2d_res, point_2d[..., 2:3]], dim=-1) - - return point_2d_res - - -@array_converter(apply_to=('points', 'cam2img')) -def points_img2cam(points, cam2img): - """Project points in image coordinates to camera coordinates. - - Args: - points (torch.Tensor): 2.5D points in 2D images, [N, 3], - 3 corresponds with x, y in the image and depth. - cam2img (torch.Tensor): Camera intrinsic matrix. The shape can be - [3, 3], [3, 4] or [4, 4]. - - Returns: - torch.Tensor: points in 3D space. [N, 3], - 3 corresponds with x, y, z in 3D space. - """ - assert cam2img.shape[0] <= 4 - assert cam2img.shape[1] <= 4 - assert points.shape[1] == 3 - - xys = points[:, :2] - depths = points[:, 2].view(-1, 1) - unnormed_xys = torch.cat([xys * depths, depths], dim=1) - - pad_cam2img = torch.eye(4, dtype=xys.dtype, device=xys.device) - pad_cam2img[:cam2img.shape[0], :cam2img.shape[1]] = cam2img - # inv_pad_cam2img = torch.inverse(pad_cam2img).transpose(0, 1) - - #change for pgd - device = pad_cam2img.device - inv_pad_cam2img = torch.inverse(pad_cam2img.cpu()).transpose(0, 1).cpu() - inv_pad_cam2img = inv_pad_cam2img.to(device) - - # Do operation in homogeneous coordinates. - num_points = unnormed_xys.shape[0] - homo_xys = torch.cat([unnormed_xys, xys.new_ones((num_points, 1))], dim=1) - points3D = torch.mm(homo_xys, inv_pad_cam2img)[:, :3] - - return points3D - - -def mono_cam_box2vis(cam_box): - """This is a post-processing function on the bboxes from Mono-3D task. If - we want to perform projection visualization, we need to: - - 1. 
rotate the box along x-axis for np.pi / 2 (roll) - 2. change orientation from local yaw to global yaw - 3. convert yaw by (np.pi / 2 - yaw) - - After applying this function, we can project and draw it on 2D images. - - Args: - cam_box (:obj:`CameraInstance3DBoxes`): 3D bbox in camera coordinate - system before conversion. Could be gt bbox loaded from dataset - or network prediction output. - - Returns: - :obj:`CameraInstance3DBoxes`: Box after conversion. - """ - warning.warn('DeprecationWarning: The hack of yaw and dimension in the ' - 'monocular 3D detection on nuScenes has been removed. The ' - 'function mono_cam_box2vis will be deprecated.') - from . import CameraInstance3DBoxes - assert isinstance(cam_box, CameraInstance3DBoxes), \ - 'input bbox should be CameraInstance3DBoxes!' - - loc = cam_box.gravity_center - dim = cam_box.dims - yaw = cam_box.yaw - feats = cam_box.tensor[:, 7:] - # rotate along x-axis for np.pi / 2 - # see also here: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/nuscenes_mono_dataset.py#L557 # noqa - dim[:, [1, 2]] = dim[:, [2, 1]] - # change local yaw to global yaw for visualization - # refer to https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/nuscenes_mono_dataset.py#L164-L166 # noqa - yaw += torch.atan2(loc[:, 0], loc[:, 2]) - # convert yaw by (-yaw - np.pi / 2) - # this is because mono 3D box class such as `NuScenesBox` has different - # definition of rotation with our `CameraInstance3DBoxes` - yaw = -yaw - np.pi / 2 - cam_box = torch.cat([loc, dim, yaw[:, None], feats], dim=1) - cam_box = CameraInstance3DBoxes( - cam_box, box_dim=cam_box.shape[-1], origin=(0.5, 0.5, 0.5)) - - return cam_box - - -def get_proj_mat_by_coord_type(img_meta, coord_type): - """Obtain image features using points. - - Args: - img_meta (dict): Meta info. - coord_type (str): 'DEPTH' or 'CAMERA' or 'LIDAR'. - Can be case-insensitive. - - Returns: - torch.Tensor: transformation matrix. - """ - coord_type = coord_type.upper() - mapping = {'LIDAR': 'lidar2img', 'DEPTH': 'depth2img', 'CAMERA': 'cam2img'} - assert coord_type in mapping.keys() - return img_meta[mapping[coord_type]] - - -def yaw2local(yaw, loc): - """Transform global yaw to local yaw (alpha in kitti) in camera - coordinates, ranges from -pi to pi. - - Args: - yaw (torch.Tensor): A vector with local yaw of each box. - shape: (N, ) - loc (torch.Tensor): gravity center of each box. - shape: (N, 3) - - Returns: - torch.Tensor: local yaw (alpha in kitti). - """ - local_yaw = yaw - torch.atan2(loc[:, 0], loc[:, 2]) - larger_idx = (local_yaw > np.pi).nonzero(as_tuple=False) - small_idx = (local_yaw < -np.pi).nonzero(as_tuple=False) - if len(larger_idx) != 0: - local_yaw[larger_idx] -= 2 * np.pi - if len(small_idx) != 0: - local_yaw[small_idx] += 2 * np.pi - - return local_yaw diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/transforms.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/transforms.py deleted file mode 100644 index 8a2eb90f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/bbox/transforms.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def bbox3d_mapping_back(bboxes, scale_factor, flip_horizontal, flip_vertical): - """Map bboxes from testing scale to original image scale. - - Args: - bboxes (:obj:`BaseInstance3DBoxes`): Boxes to be mapped back. - scale_factor (float): Scale factor. - flip_horizontal (bool): Whether to flip horizontally. - flip_vertical (bool): Whether to flip vertically. 
- - Returns: - :obj:`BaseInstance3DBoxes`: Boxes mapped back. - """ - new_bboxes = bboxes.clone() - if flip_horizontal: - new_bboxes.flip('horizontal') - if flip_vertical: - new_bboxes.flip('vertical') - new_bboxes.scale(1 / scale_factor) - - return new_bboxes - - -def bbox3d2roi(bbox_list): - """Convert a list of bounding boxes to roi format. - - Args: - bbox_list (list[torch.Tensor]): A list of bounding boxes - corresponding to a batch of images. - - Returns: - torch.Tensor: Region of interests in shape (n, c), where - the channels are in order of [batch_ind, x, y ...]. - """ - rois_list = [] - for img_id, bboxes in enumerate(bbox_list): - if bboxes.size(0) > 0: - img_inds = bboxes.new_full((bboxes.size(0), 1), img_id) - rois = torch.cat([img_inds, bboxes], dim=-1) - else: - rois = torch.zeros_like(bboxes) - rois_list.append(rois) - rois = torch.cat(rois_list, 0) - return rois - - -def bbox3d2result(bboxes, scores, labels, attrs=None): - """Convert detection results to a list of numpy arrays. - - Args: - bboxes (torch.Tensor): Bounding boxes with shape (N, 5). - labels (torch.Tensor): Labels with shape (N, ). - scores (torch.Tensor): Scores with shape (N, ). - attrs (torch.Tensor, optional): Attributes with shape (N, ). - Defaults to None. - - Returns: - dict[str, torch.Tensor]: Bounding box results in cpu mode. - - - boxes_3d (torch.Tensor): 3D boxes. - - scores (torch.Tensor): Prediction scores. - - labels_3d (torch.Tensor): Box labels. - - attrs_3d (torch.Tensor, optional): Box attributes. - """ - result_dict = dict( - boxes_3d=bboxes.to('cpu'), - scores_3d=scores.cpu(), - labels_3d=labels.cpu()) - - if attrs is not None: - result_dict['attrs_3d'] = attrs.cpu() - - return result_dict diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/__init__.py deleted file mode 100644 index b1d489f3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .indoor_eval import indoor_eval -from .instance_seg_eval import instance_seg_eval -from .kitti_utils import kitti_eval, kitti_eval_coco_style -from .lyft_eval import lyft_eval -from .seg_eval import seg_eval - -__all__ = [ - 'kitti_eval_coco_style', 'kitti_eval', 'indoor_eval', 'lyft_eval', - 'seg_eval', 'instance_seg_eval' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/indoor_eval.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/indoor_eval.py deleted file mode 100644 index 2ff98773..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/indoor_eval.py +++ /dev/null @@ -1,309 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.utils import print_log -from terminaltables import AsciiTable - - -def average_precision(recalls, precisions, mode='area'): - """Calculate average precision (for single or multiple scales). - - Args: - recalls (np.ndarray): Recalls with shape of (num_scales, num_dets) - or (num_dets, ). - precisions (np.ndarray): Precisions with shape of - (num_scales, num_dets) or (num_dets, ). - mode (str): 'area' or '11points', 'area' means calculating the area - under precision-recall curve, '11points' means calculating - the average precision of recalls at [0, 0.1, ..., 1] - - Returns: - float or np.ndarray: Calculated average precision. 
- """ - if recalls.ndim == 1: - recalls = recalls[np.newaxis, :] - precisions = precisions[np.newaxis, :] - - assert recalls.shape == precisions.shape - assert recalls.ndim == 2 - - num_scales = recalls.shape[0] - ap = np.zeros(num_scales, dtype=np.float32) - if mode == 'area': - zeros = np.zeros((num_scales, 1), dtype=recalls.dtype) - ones = np.ones((num_scales, 1), dtype=recalls.dtype) - mrec = np.hstack((zeros, recalls, ones)) - mpre = np.hstack((zeros, precisions, zeros)) - for i in range(mpre.shape[1] - 1, 0, -1): - mpre[:, i - 1] = np.maximum(mpre[:, i - 1], mpre[:, i]) - for i in range(num_scales): - ind = np.where(mrec[i, 1:] != mrec[i, :-1])[0] - ap[i] = np.sum( - (mrec[i, ind + 1] - mrec[i, ind]) * mpre[i, ind + 1]) - elif mode == '11points': - for i in range(num_scales): - for thr in np.arange(0, 1 + 1e-3, 0.1): - precs = precisions[i, recalls[i, :] >= thr] - prec = precs.max() if precs.size > 0 else 0 - ap[i] += prec - ap /= 11 - else: - raise ValueError( - 'Unrecognized mode, only "area" and "11points" are supported') - return ap - - -def eval_det_cls(pred, gt, iou_thr=None): - """Generic functions to compute precision/recall for object detection for a - single class. - - Args: - pred (dict): Predictions mapping from image id to bounding boxes - and scores. - gt (dict): Ground truths mapping from image id to bounding boxes. - iou_thr (list[float]): A list of iou thresholds. - - Return: - tuple (np.ndarray, np.ndarray, float): Recalls, precisions and - average precision. - """ - - # {img_id: {'bbox': box structure, 'det': matched list}} - class_recs = {} - npos = 0 - for img_id in gt.keys(): - cur_gt_num = len(gt[img_id]) - if cur_gt_num != 0: - gt_cur = torch.zeros([cur_gt_num, 7], dtype=torch.float32) - for i in range(cur_gt_num): - gt_cur[i] = gt[img_id][i].tensor - bbox = gt[img_id][0].new_box(gt_cur) - else: - bbox = gt[img_id] - det = [[False] * len(bbox) for i in iou_thr] - npos += len(bbox) - class_recs[img_id] = {'bbox': bbox, 'det': det} - - # construct dets - image_ids = [] - confidence = [] - ious = [] - for img_id in pred.keys(): - cur_num = len(pred[img_id]) - if cur_num == 0: - continue - pred_cur = torch.zeros((cur_num, 7), dtype=torch.float32) - box_idx = 0 - for box, score in pred[img_id]: - image_ids.append(img_id) - confidence.append(score) - pred_cur[box_idx] = box.tensor - box_idx += 1 - pred_cur = box.new_box(pred_cur) - gt_cur = class_recs[img_id]['bbox'] - if len(gt_cur) > 0: - # calculate iou in each image - iou_cur = pred_cur.overlaps(pred_cur, gt_cur) - for i in range(cur_num): - ious.append(iou_cur[i]) - else: - for i in range(cur_num): - ious.append(np.zeros(1)) - - confidence = np.array(confidence) - - # sort by confidence - sorted_ind = np.argsort(-confidence) - image_ids = [image_ids[x] for x in sorted_ind] - ious = [ious[x] for x in sorted_ind] - - # go down dets and mark TPs and FPs - nd = len(image_ids) - tp_thr = [np.zeros(nd) for i in iou_thr] - fp_thr = [np.zeros(nd) for i in iou_thr] - for d in range(nd): - R = class_recs[image_ids[d]] - iou_max = -np.inf - BBGT = R['bbox'] - cur_iou = ious[d] - - if len(BBGT) > 0: - # compute overlaps - for j in range(len(BBGT)): - # iou = get_iou_main(get_iou_func, (bb, BBGT[j,...])) - iou = cur_iou[j] - if iou > iou_max: - iou_max = iou - jmax = j - - for iou_idx, thresh in enumerate(iou_thr): - if iou_max > thresh: - if not R['det'][iou_idx][jmax]: - tp_thr[iou_idx][d] = 1. - R['det'][iou_idx][jmax] = 1 - else: - fp_thr[iou_idx][d] = 1. - else: - fp_thr[iou_idx][d] = 1. 
- - ret = [] - for iou_idx, thresh in enumerate(iou_thr): - # compute precision recall - fp = np.cumsum(fp_thr[iou_idx]) - tp = np.cumsum(tp_thr[iou_idx]) - recall = tp / float(npos) - # avoid divide by zero in case the first detection matches a difficult - # ground truth - precision = tp / np.maximum(tp + fp, np.finfo(np.float64).eps) - ap = average_precision(recall, precision) - ret.append((recall, precision, ap)) - - return ret - - -def eval_map_recall(pred, gt, ovthresh=None): - """Evaluate mAP and recall. - - Generic functions to compute precision/recall for object detection - for multiple classes. - - Args: - pred (dict): Information of detection results, - which maps class_id and predictions. - gt (dict): Information of ground truths, which maps class_id and - ground truths. - ovthresh (list[float], optional): iou threshold. Default: None. - - Return: - tuple[dict]: dict results of recall, AP, and precision for all classes. - """ - - ret_values = {} - for classname in gt.keys(): - if classname in pred: - ret_values[classname] = eval_det_cls(pred[classname], - gt[classname], ovthresh) - recall = [{} for i in ovthresh] - precision = [{} for i in ovthresh] - ap = [{} for i in ovthresh] - - for label in gt.keys(): - for iou_idx, thresh in enumerate(ovthresh): - if label in pred: - recall[iou_idx][label], precision[iou_idx][label], ap[iou_idx][ - label] = ret_values[label][iou_idx] - else: - recall[iou_idx][label] = np.zeros(1) - precision[iou_idx][label] = np.zeros(1) - ap[iou_idx][label] = np.zeros(1) - - return recall, precision, ap - - -def indoor_eval(gt_annos, - dt_annos, - metric, - label2cat, - logger=None, - box_type_3d=None, - box_mode_3d=None): - """Indoor Evaluation. - - Evaluate the result of the detection. - - Args: - gt_annos (list[dict]): Ground truth annotations. - dt_annos (list[dict]): Detection annotations. the dict - includes the following keys - - - labels_3d (torch.Tensor): Labels of boxes. - - boxes_3d (:obj:`BaseInstance3DBoxes`): - 3D bounding boxes in Depth coordinate. - - scores_3d (torch.Tensor): Scores of boxes. - metric (list[float]): IoU thresholds for computing average precisions. - label2cat (dict): Map from label to category. - logger (logging.Logger | str, optional): The way to print the mAP - summary. See `mmdet.utils.print_log()` for details. Default: None. - - Return: - dict[str, float]: Dict of results. 
- """ - assert len(dt_annos) == len(gt_annos) - pred = {} # map {class_id: pred} - gt = {} # map {class_id: gt} - for img_id in range(len(dt_annos)): - # parse detected annotations - det_anno = dt_annos[img_id] - for i in range(len(det_anno['labels_3d'])): - label = det_anno['labels_3d'].numpy()[i] - bbox = det_anno['boxes_3d'].convert_to(box_mode_3d)[i] - score = det_anno['scores_3d'].numpy()[i] - if label not in pred: - pred[int(label)] = {} - if img_id not in pred[label]: - pred[int(label)][img_id] = [] - if label not in gt: - gt[int(label)] = {} - if img_id not in gt[label]: - gt[int(label)][img_id] = [] - pred[int(label)][img_id].append((bbox, score)) - - # parse gt annotations - gt_anno = gt_annos[img_id] - if gt_anno['gt_num'] != 0: - gt_boxes = box_type_3d( - gt_anno['gt_boxes_upright_depth'], - box_dim=gt_anno['gt_boxes_upright_depth'].shape[-1], - origin=(0.5, 0.5, 0.5)).convert_to(box_mode_3d) - labels_3d = gt_anno['class'] - else: - gt_boxes = box_type_3d(np.array([], dtype=np.float32)) - labels_3d = np.array([], dtype=np.int64) - - for i in range(len(labels_3d)): - label = labels_3d[i] - bbox = gt_boxes[i] - if label not in gt: - gt[label] = {} - if img_id not in gt[label]: - gt[label][img_id] = [] - gt[label][img_id].append(bbox) - - rec, prec, ap = eval_map_recall(pred, gt, metric) - ret_dict = dict() - header = ['classes'] - table_columns = [[label2cat[label] - for label in ap[0].keys()] + ['Overall']] - - for i, iou_thresh in enumerate(metric): - header.append(f'AP_{iou_thresh:.2f}') - header.append(f'AR_{iou_thresh:.2f}') - rec_list = [] - for label in ap[i].keys(): - ret_dict[f'{label2cat[label]}_AP_{iou_thresh:.2f}'] = float( - ap[i][label][0]) - ret_dict[f'mAP_{iou_thresh:.2f}'] = float( - np.mean(list(ap[i].values()))) - - table_columns.append(list(map(float, list(ap[i].values())))) - table_columns[-1] += [ret_dict[f'mAP_{iou_thresh:.2f}']] - table_columns[-1] = [f'{x:.4f}' for x in table_columns[-1]] - - for label in rec[i].keys(): - ret_dict[f'{label2cat[label]}_rec_{iou_thresh:.2f}'] = float( - rec[i][label][-1]) - rec_list.append(rec[i][label][-1]) - ret_dict[f'mAR_{iou_thresh:.2f}'] = float(np.mean(rec_list)) - - table_columns.append(list(map(float, rec_list))) - table_columns[-1] += [ret_dict[f'mAR_{iou_thresh:.2f}']] - table_columns[-1] = [f'{x:.4f}' for x in table_columns[-1]] - - table_data = [header] - table_rows = list(zip(*table_columns)) - table_data += table_rows - table = AsciiTable(table_data) - table.inner_footing_row_border = True - print_log('\n' + table.table, logger=logger) - - return ret_dict diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/instance_seg_eval.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/instance_seg_eval.py deleted file mode 100644 index 31f5110a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/instance_seg_eval.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .scannet_utils.evaluate_semantic_instance import scannet_eval - - -def aggregate_predictions(masks, labels, scores, valid_class_ids): - """Maps predictions to ScanNet evaluator format. - - Args: - masks (list[torch.Tensor]): Per scene predicted instance masks. - labels (list[torch.Tensor]): Per scene predicted instance labels. - scores (list[torch.Tensor]): Per scene predicted instance scores. - valid_class_ids (tuple[int]): Ids of valid categories. 
- - Returns: - list[dict]: Per scene aggregated predictions. - """ - infos = [] - for id, (mask, label, score) in enumerate(zip(masks, labels, scores)): - mask = mask.clone().numpy() - label = label.clone().numpy() - score = score.clone().numpy() - info = dict() - n_instances = mask.max() + 1 - for i in range(n_instances): - # match pred_instance['filename'] from assign_instances_for_scan - file_name = f'{id}_{i}' - info[file_name] = dict() - info[file_name]['mask'] = (mask == i).astype(np.int) - info[file_name]['label_id'] = valid_class_ids[label[i]] - info[file_name]['conf'] = score[i] - infos.append(info) - return infos - - -def rename_gt(gt_semantic_masks, gt_instance_masks, valid_class_ids): - """Maps gt instance and semantic masks to instance masks for ScanNet - evaluator. - - Args: - gt_semantic_masks (list[torch.Tensor]): Per scene gt semantic masks. - gt_instance_masks (list[torch.Tensor]): Per scene gt instance masks. - valid_class_ids (tuple[int]): Ids of valid categories. - - Returns: - list[np.array]: Per scene instance masks. - """ - renamed_instance_masks = [] - for semantic_mask, instance_mask in zip(gt_semantic_masks, - gt_instance_masks): - semantic_mask = semantic_mask.clone().numpy() - instance_mask = instance_mask.clone().numpy() - unique = np.unique(instance_mask) - assert len(unique) < 1000 - for i in unique: - semantic_instance = semantic_mask[instance_mask == i] - semantic_unique = np.unique(semantic_instance) - assert len(semantic_unique) == 1 - if semantic_unique[0] < len(valid_class_ids): - instance_mask[ - instance_mask == - i] = 1000 * valid_class_ids[semantic_unique[0]] + i - renamed_instance_masks.append(instance_mask) - return renamed_instance_masks - - -def instance_seg_eval(gt_semantic_masks, - gt_instance_masks, - pred_instance_masks, - pred_instance_labels, - pred_instance_scores, - valid_class_ids, - class_labels, - options=None, - logger=None): - """Instance Segmentation Evaluation. - - Evaluate the result of the instance segmentation. - - Args: - gt_semantic_masks (list[torch.Tensor]): Ground truth semantic masks. - gt_instance_masks (list[torch.Tensor]): Ground truth instance masks. - pred_instance_masks (list[torch.Tensor]): Predicted instance masks. - pred_instance_labels (list[torch.Tensor]): Predicted instance labels. - pred_instance_scores (list[torch.Tensor]): Predicted instance labels. - valid_class_ids (tuple[int]): Ids of valid categories. - class_labels (tuple[str]): Names of valid categories. - options (dict, optional): Additional options. Keys may contain: - `overlaps`, `min_region_sizes`, `distance_threshes`, - `distance_confs`. Default: None. - logger (logging.Logger | str, optional): The way to print the mAP - summary. See `mmdet.utils.print_log()` for details. Default: None. - - Returns: - dict[str, float]: Dict of results. 
- """ - assert len(valid_class_ids) == len(class_labels) - id_to_label = { - valid_class_ids[i]: class_labels[i] - for i in range(len(valid_class_ids)) - } - preds = aggregate_predictions( - masks=pred_instance_masks, - labels=pred_instance_labels, - scores=pred_instance_scores, - valid_class_ids=valid_class_ids) - gts = rename_gt(gt_semantic_masks, gt_instance_masks, valid_class_ids) - metrics = scannet_eval( - preds=preds, - gts=gts, - options=options, - valid_class_ids=valid_class_ids, - class_labels=class_labels, - id_to_label=id_to_label) - header = ['classes', 'AP_0.25', 'AP_0.50', 'AP'] - rows = [] - for label, data in metrics['classes'].items(): - aps = [data['ap25%'], data['ap50%'], data['ap']] - rows.append([label] + [f'{ap:.4f}' for ap in aps]) - aps = metrics['all_ap_25%'], metrics['all_ap_50%'], metrics['all_ap'] - footer = ['Overall'] + [f'{ap:.4f}' for ap in aps] - table = AsciiTable([header] + rows + [footer]) - table.inner_footing_row_border = True - print_log('\n' + table.table, logger=logger) - return metrics diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/kitti_utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/kitti_utils/__init__.py deleted file mode 100644 index 23c1cdf2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/kitti_utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .eval import kitti_eval, kitti_eval_coco_style - -__all__ = ['kitti_eval', 'kitti_eval_coco_style'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/kitti_utils/eval.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/kitti_utils/eval.py deleted file mode 100644 index f8408dfa..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/kitti_utils/eval.py +++ /dev/null @@ -1,950 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import gc -import io as sysio - -import numba -import numpy as np - - -@numba.jit -def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41): - scores.sort() - scores = scores[::-1] - current_recall = 0 - thresholds = [] - for i, score in enumerate(scores): - l_recall = (i + 1) / num_gt - if i < (len(scores) - 1): - r_recall = (i + 2) / num_gt - else: - r_recall = l_recall - if (((r_recall - current_recall) < (current_recall - l_recall)) - and (i < (len(scores) - 1))): - continue - # recall = l_recall - thresholds.append(score) - current_recall += 1 / (num_sample_pts - 1.0) - return thresholds - - -def clean_data(gt_anno, dt_anno, current_class, difficulty): - CLASS_NAMES = ['car', 'pedestrian', 'cyclist'] - MIN_HEIGHT = [40, 25, 25] - MAX_OCCLUSION = [0, 1, 2] - MAX_TRUNCATION = [0.15, 0.3, 0.5] - dc_bboxes, ignored_gt, ignored_dt = [], [], [] - current_cls_name = CLASS_NAMES[current_class].lower() - num_gt = len(gt_anno['name']) - num_dt = len(dt_anno['name']) - num_valid_gt = 0 - for i in range(num_gt): - bbox = gt_anno['bbox'][i] - gt_name = gt_anno['name'][i].lower() - height = bbox[3] - bbox[1] - valid_class = -1 - if (gt_name == current_cls_name): - valid_class = 1 - elif (current_cls_name == 'Pedestrian'.lower() - and 'Person_sitting'.lower() == gt_name): - valid_class = 0 - elif (current_cls_name == 'Car'.lower() and 'Van'.lower() == gt_name): - valid_class = 0 - else: - valid_class = -1 - ignore = False - if ((gt_anno['occluded'][i] > MAX_OCCLUSION[difficulty]) - or (gt_anno['truncated'][i] > MAX_TRUNCATION[difficulty]) - or (height <= MIN_HEIGHT[difficulty])): - ignore = True - if valid_class == 1 and not ignore: - ignored_gt.append(0) - num_valid_gt += 1 - elif (valid_class == 0 or (ignore and (valid_class == 1))): - ignored_gt.append(1) - else: - ignored_gt.append(-1) - # for i in range(num_gt): - if gt_anno['name'][i] == 'DontCare': - dc_bboxes.append(gt_anno['bbox'][i]) - for i in range(num_dt): - if (dt_anno['name'][i].lower() == current_cls_name): - valid_class = 1 - else: - valid_class = -1 - height = abs(dt_anno['bbox'][i, 3] - dt_anno['bbox'][i, 1]) - if height < MIN_HEIGHT[difficulty]: - ignored_dt.append(1) - elif valid_class == 1: - ignored_dt.append(0) - else: - ignored_dt.append(-1) - - return num_valid_gt, ignored_gt, ignored_dt, dc_bboxes - - -@numba.jit(nopython=True) -def image_box_overlap(boxes, query_boxes, criterion=-1): - N = boxes.shape[0] - K = query_boxes.shape[0] - overlaps = np.zeros((N, K), dtype=boxes.dtype) - for k in range(K): - qbox_area = ((query_boxes[k, 2] - query_boxes[k, 0]) * - (query_boxes[k, 3] - query_boxes[k, 1])) - for n in range(N): - iw = ( - min(boxes[n, 2], query_boxes[k, 2]) - - max(boxes[n, 0], query_boxes[k, 0])) - if iw > 0: - ih = ( - min(boxes[n, 3], query_boxes[k, 3]) - - max(boxes[n, 1], query_boxes[k, 1])) - if ih > 0: - if criterion == -1: - ua = ((boxes[n, 2] - boxes[n, 0]) * - (boxes[n, 3] - boxes[n, 1]) + qbox_area - - iw * ih) - elif criterion == 0: - ua = ((boxes[n, 2] - boxes[n, 0]) * - (boxes[n, 3] - boxes[n, 1])) - elif criterion == 1: - ua = qbox_area - else: - ua = 1.0 - overlaps[n, k] = iw * ih / ua - return overlaps - - -def bev_box_overlap(boxes, qboxes, criterion=-1): - from .rotate_iou import rotate_iou_gpu_eval - riou = rotate_iou_gpu_eval(boxes, qboxes, criterion) - return riou - - -@numba.jit(nopython=True, parallel=True) -def d3_box_overlap_kernel(boxes, qboxes, rinc, criterion=-1): - # ONLY support overlap in CAMERA, not lidar. 
- # TODO: change to use prange for parallel mode, should check the difference - N, K = boxes.shape[0], qboxes.shape[0] - for i in numba.prange(N): - for j in numba.prange(K): - if rinc[i, j] > 0: - # iw = (min(boxes[i, 1] + boxes[i, 4], qboxes[j, 1] + - # qboxes[j, 4]) - max(boxes[i, 1], qboxes[j, 1])) - iw = ( - min(boxes[i, 1], qboxes[j, 1]) - - max(boxes[i, 1] - boxes[i, 4], - qboxes[j, 1] - qboxes[j, 4])) - - if iw > 0: - area1 = boxes[i, 3] * boxes[i, 4] * boxes[i, 5] - area2 = qboxes[j, 3] * qboxes[j, 4] * qboxes[j, 5] - inc = iw * rinc[i, j] - if criterion == -1: - ua = (area1 + area2 - inc) - elif criterion == 0: - ua = area1 - elif criterion == 1: - ua = area2 - else: - ua = inc - rinc[i, j] = inc / ua - else: - rinc[i, j] = 0.0 - - -def d3_box_overlap(boxes, qboxes, criterion=-1): - from .rotate_iou import rotate_iou_gpu_eval - rinc = rotate_iou_gpu_eval(boxes[:, [0, 2, 3, 5, 6]], - qboxes[:, [0, 2, 3, 5, 6]], 2) - d3_box_overlap_kernel(boxes, qboxes, rinc, criterion) - return rinc - - -@numba.jit(nopython=True) -def compute_statistics_jit(overlaps, - gt_datas, - dt_datas, - ignored_gt, - ignored_det, - dc_bboxes, - metric, - min_overlap, - thresh=0, - compute_fp=False, - compute_aos=False): - - det_size = dt_datas.shape[0] - gt_size = gt_datas.shape[0] - dt_scores = dt_datas[:, -1] - dt_alphas = dt_datas[:, 4] - gt_alphas = gt_datas[:, 4] - dt_bboxes = dt_datas[:, :4] - # gt_bboxes = gt_datas[:, :4] - - assigned_detection = [False] * det_size - ignored_threshold = [False] * det_size - if compute_fp: - for i in range(det_size): - if (dt_scores[i] < thresh): - ignored_threshold[i] = True - NO_DETECTION = -10000000 - tp, fp, fn, similarity = 0, 0, 0, 0 - # thresholds = [0.0] - # delta = [0.0] - thresholds = np.zeros((gt_size, )) - thresh_idx = 0 - delta = np.zeros((gt_size, )) - delta_idx = 0 - for i in range(gt_size): - if ignored_gt[i] == -1: - continue - det_idx = -1 - valid_detection = NO_DETECTION - max_overlap = 0 - assigned_ignored_det = False - - for j in range(det_size): - if (ignored_det[j] == -1): - continue - if (assigned_detection[j]): - continue - if (ignored_threshold[j]): - continue - overlap = overlaps[j, i] - dt_score = dt_scores[j] - if (not compute_fp and (overlap > min_overlap) - and dt_score > valid_detection): - det_idx = j - valid_detection = dt_score - elif (compute_fp and (overlap > min_overlap) - and (overlap > max_overlap or assigned_ignored_det) - and ignored_det[j] == 0): - max_overlap = overlap - det_idx = j - valid_detection = 1 - assigned_ignored_det = False - elif (compute_fp and (overlap > min_overlap) - and (valid_detection == NO_DETECTION) - and ignored_det[j] == 1): - det_idx = j - valid_detection = 1 - assigned_ignored_det = True - - if (valid_detection == NO_DETECTION) and ignored_gt[i] == 0: - fn += 1 - elif ((valid_detection != NO_DETECTION) - and (ignored_gt[i] == 1 or ignored_det[det_idx] == 1)): - assigned_detection[det_idx] = True - elif valid_detection != NO_DETECTION: - tp += 1 - # thresholds.append(dt_scores[det_idx]) - thresholds[thresh_idx] = dt_scores[det_idx] - thresh_idx += 1 - if compute_aos: - # delta.append(gt_alphas[i] - dt_alphas[det_idx]) - delta[delta_idx] = gt_alphas[i] - dt_alphas[det_idx] - delta_idx += 1 - - assigned_detection[det_idx] = True - if compute_fp: - for i in range(det_size): - if (not (assigned_detection[i] or ignored_det[i] == -1 - or ignored_det[i] == 1 or ignored_threshold[i])): - fp += 1 - nstuff = 0 - if metric == 0: - overlaps_dt_dc = image_box_overlap(dt_bboxes, dc_bboxes, 0) - for i in 
range(dc_bboxes.shape[0]): - for j in range(det_size): - if (assigned_detection[j]): - continue - if (ignored_det[j] == -1 or ignored_det[j] == 1): - continue - if (ignored_threshold[j]): - continue - if overlaps_dt_dc[j, i] > min_overlap: - assigned_detection[j] = True - nstuff += 1 - fp -= nstuff - if compute_aos: - tmp = np.zeros((fp + delta_idx, )) - # tmp = [0] * fp - for i in range(delta_idx): - tmp[i + fp] = (1.0 + np.cos(delta[i])) / 2.0 - # tmp.append((1.0 + np.cos(delta[i])) / 2.0) - # assert len(tmp) == fp + tp - # assert len(delta) == tp - if tp > 0 or fp > 0: - similarity = np.sum(tmp) - else: - similarity = -1 - return tp, fp, fn, similarity, thresholds[:thresh_idx] - - -def get_split_parts(num, num_part): - same_part = num // num_part - remain_num = num % num_part - if remain_num == 0: - return [same_part] * num_part - else: - return [same_part] * num_part + [remain_num] - - -@numba.jit(nopython=True) -def fused_compute_statistics(overlaps, - pr, - gt_nums, - dt_nums, - dc_nums, - gt_datas, - dt_datas, - dontcares, - ignored_gts, - ignored_dets, - metric, - min_overlap, - thresholds, - compute_aos=False): - gt_num = 0 - dt_num = 0 - dc_num = 0 - for i in range(gt_nums.shape[0]): - for t, thresh in enumerate(thresholds): - overlap = overlaps[dt_num:dt_num + dt_nums[i], - gt_num:gt_num + gt_nums[i]] - - gt_data = gt_datas[gt_num:gt_num + gt_nums[i]] - dt_data = dt_datas[dt_num:dt_num + dt_nums[i]] - ignored_gt = ignored_gts[gt_num:gt_num + gt_nums[i]] - ignored_det = ignored_dets[dt_num:dt_num + dt_nums[i]] - dontcare = dontcares[dc_num:dc_num + dc_nums[i]] - tp, fp, fn, similarity, _ = compute_statistics_jit( - overlap, - gt_data, - dt_data, - ignored_gt, - ignored_det, - dontcare, - metric, - min_overlap=min_overlap, - thresh=thresh, - compute_fp=True, - compute_aos=compute_aos) - pr[t, 0] += tp - pr[t, 1] += fp - pr[t, 2] += fn - if similarity != -1: - pr[t, 3] += similarity - gt_num += gt_nums[i] - dt_num += dt_nums[i] - dc_num += dc_nums[i] - - -def calculate_iou_partly(gt_annos, dt_annos, metric, num_parts=50): - """Fast iou algorithm. this function can be used independently to do result - analysis. Must be used in CAMERA coordinate system. - - Args: - gt_annos (dict): Must from get_label_annos() in kitti_common.py. - dt_annos (dict): Must from get_label_annos() in kitti_common.py. - metric (int): Eval type. 0: bbox, 1: bev, 2: 3d. - num_parts (int): A parameter for fast calculate algorithm. 
- """ - assert len(gt_annos) == len(dt_annos) - total_dt_num = np.stack([len(a['name']) for a in dt_annos], 0) - total_gt_num = np.stack([len(a['name']) for a in gt_annos], 0) - num_examples = len(gt_annos) - split_parts = get_split_parts(num_examples, num_parts) - parted_overlaps = [] - example_idx = 0 - - for num_part in split_parts: - gt_annos_part = gt_annos[example_idx:example_idx + num_part] - dt_annos_part = dt_annos[example_idx:example_idx + num_part] - if metric == 0: - gt_boxes = np.concatenate([a['bbox'] for a in gt_annos_part], 0) - dt_boxes = np.concatenate([a['bbox'] for a in dt_annos_part], 0) - overlap_part = image_box_overlap(gt_boxes, dt_boxes) - elif metric == 1: - loc = np.concatenate( - [a['location'][:, [0, 2]] for a in gt_annos_part], 0) - dims = np.concatenate( - [a['dimensions'][:, [0, 2]] for a in gt_annos_part], 0) - rots = np.concatenate([a['rotation_y'] for a in gt_annos_part], 0) - gt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - loc = np.concatenate( - [a['location'][:, [0, 2]] for a in dt_annos_part], 0) - dims = np.concatenate( - [a['dimensions'][:, [0, 2]] for a in dt_annos_part], 0) - rots = np.concatenate([a['rotation_y'] for a in dt_annos_part], 0) - dt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - overlap_part = bev_box_overlap(gt_boxes, - dt_boxes).astype(np.float64) - elif metric == 2: - loc = np.concatenate([a['location'] for a in gt_annos_part], 0) - dims = np.concatenate([a['dimensions'] for a in gt_annos_part], 0) - rots = np.concatenate([a['rotation_y'] for a in gt_annos_part], 0) - gt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - loc = np.concatenate([a['location'] for a in dt_annos_part], 0) - dims = np.concatenate([a['dimensions'] for a in dt_annos_part], 0) - rots = np.concatenate([a['rotation_y'] for a in dt_annos_part], 0) - dt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - overlap_part = d3_box_overlap(gt_boxes, - dt_boxes).astype(np.float64) - else: - raise ValueError('unknown metric') - parted_overlaps.append(overlap_part) - example_idx += num_part - overlaps = [] - example_idx = 0 - for j, num_part in enumerate(split_parts): - gt_annos_part = gt_annos[example_idx:example_idx + num_part] - dt_annos_part = dt_annos[example_idx:example_idx + num_part] - gt_num_idx, dt_num_idx = 0, 0 - for i in range(num_part): - gt_box_num = total_gt_num[example_idx + i] - dt_box_num = total_dt_num[example_idx + i] - overlaps.append( - parted_overlaps[j][gt_num_idx:gt_num_idx + gt_box_num, - dt_num_idx:dt_num_idx + dt_box_num]) - gt_num_idx += gt_box_num - dt_num_idx += dt_box_num - example_idx += num_part - - return overlaps, parted_overlaps, total_gt_num, total_dt_num - - -def _prepare_data(gt_annos, dt_annos, current_class, difficulty): - gt_datas_list = [] - dt_datas_list = [] - total_dc_num = [] - ignored_gts, ignored_dets, dontcares = [], [], [] - total_num_valid_gt = 0 - for i in range(len(gt_annos)): - rets = clean_data(gt_annos[i], dt_annos[i], current_class, difficulty) - num_valid_gt, ignored_gt, ignored_det, dc_bboxes = rets - ignored_gts.append(np.array(ignored_gt, dtype=np.int64)) - ignored_dets.append(np.array(ignored_det, dtype=np.int64)) - if len(dc_bboxes) == 0: - dc_bboxes = np.zeros((0, 4)).astype(np.float64) - else: - dc_bboxes = np.stack(dc_bboxes, 0).astype(np.float64) - total_dc_num.append(dc_bboxes.shape[0]) - dontcares.append(dc_bboxes) - total_num_valid_gt += num_valid_gt - gt_datas = np.concatenate( - [gt_annos[i]['bbox'], 
gt_annos[i]['alpha'][..., np.newaxis]], 1) - dt_datas = np.concatenate([ - dt_annos[i]['bbox'], dt_annos[i]['alpha'][..., np.newaxis], - dt_annos[i]['score'][..., np.newaxis] - ], 1) - gt_datas_list.append(gt_datas) - dt_datas_list.append(dt_datas) - total_dc_num = np.stack(total_dc_num, axis=0) - return (gt_datas_list, dt_datas_list, ignored_gts, ignored_dets, dontcares, - total_dc_num, total_num_valid_gt) - - -def eval_class(gt_annos, - dt_annos, - current_classes, - difficultys, - metric, - min_overlaps, - compute_aos=False, - num_parts=200): - """Kitti eval. support 2d/bev/3d/aos eval. support 0.5:0.05:0.95 coco AP. - - Args: - gt_annos (dict): Must from get_label_annos() in kitti_common.py. - dt_annos (dict): Must from get_label_annos() in kitti_common.py. - current_classes (list[int]): 0: car, 1: pedestrian, 2: cyclist. - difficultys (list[int]): Eval difficulty, 0: easy, 1: normal, 2: hard - metric (int): Eval type. 0: bbox, 1: bev, 2: 3d - min_overlaps (float): Min overlap. format: - [num_overlap, metric, class]. - num_parts (int): A parameter for fast calculate algorithm - - Returns: - dict[str, np.ndarray]: recall, precision and aos - """ - assert len(gt_annos) == len(dt_annos) - num_examples = len(gt_annos) - if num_examples < num_parts: - num_parts = num_examples - split_parts = get_split_parts(num_examples, num_parts) - - rets = calculate_iou_partly(dt_annos, gt_annos, metric, num_parts) - overlaps, parted_overlaps, total_dt_num, total_gt_num = rets - N_SAMPLE_PTS = 41 - num_minoverlap = len(min_overlaps) - num_class = len(current_classes) - num_difficulty = len(difficultys) - precision = np.zeros( - [num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS]) - recall = np.zeros( - [num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS]) - aos = np.zeros([num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS]) - for m, current_class in enumerate(current_classes): - for idx_l, difficulty in enumerate(difficultys): - rets = _prepare_data(gt_annos, dt_annos, current_class, difficulty) - (gt_datas_list, dt_datas_list, ignored_gts, ignored_dets, - dontcares, total_dc_num, total_num_valid_gt) = rets - for k, min_overlap in enumerate(min_overlaps[:, metric, m]): - thresholdss = [] - for i in range(len(gt_annos)): - rets = compute_statistics_jit( - overlaps[i], - gt_datas_list[i], - dt_datas_list[i], - ignored_gts[i], - ignored_dets[i], - dontcares[i], - metric, - min_overlap=min_overlap, - thresh=0.0, - compute_fp=False) - tp, fp, fn, similarity, thresholds = rets - thresholdss += thresholds.tolist() - thresholdss = np.array(thresholdss) - thresholds = get_thresholds(thresholdss, total_num_valid_gt) - thresholds = np.array(thresholds) - pr = np.zeros([len(thresholds), 4]) - idx = 0 - for j, num_part in enumerate(split_parts): - gt_datas_part = np.concatenate( - gt_datas_list[idx:idx + num_part], 0) - dt_datas_part = np.concatenate( - dt_datas_list[idx:idx + num_part], 0) - dc_datas_part = np.concatenate( - dontcares[idx:idx + num_part], 0) - ignored_dets_part = np.concatenate( - ignored_dets[idx:idx + num_part], 0) - ignored_gts_part = np.concatenate( - ignored_gts[idx:idx + num_part], 0) - fused_compute_statistics( - parted_overlaps[j], - pr, - total_gt_num[idx:idx + num_part], - total_dt_num[idx:idx + num_part], - total_dc_num[idx:idx + num_part], - gt_datas_part, - dt_datas_part, - dc_datas_part, - ignored_gts_part, - ignored_dets_part, - metric, - min_overlap=min_overlap, - thresholds=thresholds, - compute_aos=compute_aos) - idx += num_part - for i in range(len(thresholds)): - 
recall[m, idx_l, k, i] = pr[i, 0] / (pr[i, 0] + pr[i, 2]) - precision[m, idx_l, k, i] = pr[i, 0] / ( - pr[i, 0] + pr[i, 1]) - if compute_aos: - aos[m, idx_l, k, i] = pr[i, 3] / (pr[i, 0] + pr[i, 1]) - for i in range(len(thresholds)): - precision[m, idx_l, k, i] = np.max( - precision[m, idx_l, k, i:], axis=-1) - recall[m, idx_l, k, i] = np.max( - recall[m, idx_l, k, i:], axis=-1) - if compute_aos: - aos[m, idx_l, k, i] = np.max( - aos[m, idx_l, k, i:], axis=-1) - ret_dict = { - 'recall': recall, - 'precision': precision, - 'orientation': aos, - } - - # clean temp variables - del overlaps - del parted_overlaps - - gc.collect() - return ret_dict - - -def get_mAP11(prec): - sums = 0 - for i in range(0, prec.shape[-1], 4): - sums = sums + prec[..., i] - return sums / 11 * 100 - - -def get_mAP40(prec): - sums = 0 - for i in range(1, prec.shape[-1]): - sums = sums + prec[..., i] - return sums / 40 * 100 - - -def print_str(value, *arg, sstream=None): - if sstream is None: - sstream = sysio.StringIO() - sstream.truncate(0) - sstream.seek(0) - print(value, *arg, file=sstream) - return sstream.getvalue() - - -def do_eval(gt_annos, - dt_annos, - current_classes, - min_overlaps, - eval_types=['bbox', 'bev', '3d']): - # min_overlaps: [num_minoverlap, metric, num_class] - difficultys = [0, 1, 2] - mAP11_bbox = None - mAP11_aos = None - mAP40_bbox = None - mAP40_aos = None - if 'bbox' in eval_types: - ret = eval_class( - gt_annos, - dt_annos, - current_classes, - difficultys, - 0, - min_overlaps, - compute_aos=('aos' in eval_types)) - # ret: [num_class, num_diff, num_minoverlap, num_sample_points] - mAP11_bbox = get_mAP11(ret['precision']) - mAP40_bbox = get_mAP40(ret['precision']) - if 'aos' in eval_types: - mAP11_aos = get_mAP11(ret['orientation']) - mAP40_aos = get_mAP40(ret['orientation']) - - mAP11_bev = None - mAP40_bev = None - if 'bev' in eval_types: - ret = eval_class(gt_annos, dt_annos, current_classes, difficultys, 1, - min_overlaps) - mAP11_bev = get_mAP11(ret['precision']) - mAP40_bev = get_mAP40(ret['precision']) - - mAP11_3d = None - mAP40_3d = None - if '3d' in eval_types: - ret = eval_class(gt_annos, dt_annos, current_classes, difficultys, 2, - min_overlaps) - mAP11_3d = get_mAP11(ret['precision']) - mAP40_3d = get_mAP40(ret['precision']) - return (mAP11_bbox, mAP11_bev, mAP11_3d, mAP11_aos, mAP40_bbox, mAP40_bev, - mAP40_3d, mAP40_aos) - - -def do_coco_style_eval(gt_annos, dt_annos, current_classes, overlap_ranges, - compute_aos): - # overlap_ranges: [range, metric, num_class] - min_overlaps = np.zeros([10, *overlap_ranges.shape[1:]]) - for i in range(overlap_ranges.shape[1]): - for j in range(overlap_ranges.shape[2]): - min_overlaps[:, i, j] = np.linspace(*overlap_ranges[:, i, j]) - mAP_bbox, mAP_bev, mAP_3d, mAP_aos, _, _, \ - _, _ = do_eval(gt_annos, dt_annos, - current_classes, min_overlaps, - compute_aos) - # ret: [num_class, num_diff, num_minoverlap] - mAP_bbox = mAP_bbox.mean(-1) - mAP_bev = mAP_bev.mean(-1) - mAP_3d = mAP_3d.mean(-1) - if mAP_aos is not None: - mAP_aos = mAP_aos.mean(-1) - return mAP_bbox, mAP_bev, mAP_3d, mAP_aos - - -def kitti_eval(gt_annos, - dt_annos, - current_classes, - eval_types=['bbox', 'bev', '3d']): - """KITTI evaluation. - - Args: - gt_annos (list[dict]): Contain gt information of each sample. - dt_annos (list[dict]): Contain detected information of each sample. - current_classes (list[str]): Classes to evaluation. - eval_types (list[str], optional): Types to eval. - Defaults to ['bbox', 'bev', '3d']. 
- - Returns: - tuple: String and dict of evaluation results. - """ - assert len(eval_types) > 0, 'must contain at least one evaluation type' - if 'aos' in eval_types: - assert 'bbox' in eval_types, 'must evaluate bbox when evaluating aos' - overlap_0_7 = np.array([[0.7, 0.5, 0.5, 0.7, - 0.5], [0.7, 0.5, 0.5, 0.7, 0.5], - [0.7, 0.5, 0.5, 0.7, 0.5]]) - overlap_0_5 = np.array([[0.7, 0.5, 0.5, 0.7, 0.5], - [0.5, 0.25, 0.25, 0.5, 0.25], - [0.5, 0.25, 0.25, 0.5, 0.25]]) - min_overlaps = np.stack([overlap_0_7, overlap_0_5], axis=0) # [2, 3, 5] - class_to_name = { - 0: 'Car', - 1: 'Pedestrian', - 2: 'Cyclist', - 3: 'Van', - 4: 'Person_sitting', - } - name_to_class = {v: n for n, v in class_to_name.items()} - if not isinstance(current_classes, (list, tuple)): - current_classes = [current_classes] - current_classes_int = [] - for curcls in current_classes: - if isinstance(curcls, str): - current_classes_int.append(name_to_class[curcls]) - else: - current_classes_int.append(curcls) - current_classes = current_classes_int - min_overlaps = min_overlaps[:, :, current_classes] - result = '' - # check whether alpha is valid - compute_aos = False - pred_alpha = False - valid_alpha_gt = False - for anno in dt_annos: - mask = (anno['alpha'] != -10) - if anno['alpha'][mask].shape[0] != 0: - pred_alpha = True - break - for anno in gt_annos: - if anno['alpha'][0] != -10: - valid_alpha_gt = True - break - compute_aos = (pred_alpha and valid_alpha_gt) - if compute_aos: - eval_types.append('aos') - - mAP11_bbox, mAP11_bev, mAP11_3d, mAP11_aos, mAP40_bbox, mAP40_bev, \ - mAP40_3d, mAP40_aos = do_eval(gt_annos, dt_annos, - current_classes, min_overlaps, - eval_types) - - ret_dict = {} - difficulty = ['easy', 'moderate', 'hard'] - - # calculate AP11 - result += '\n----------- AP11 Results ------------\n\n' - for j, curcls in enumerate(current_classes): - # mAP threshold array: [num_minoverlap, metric, class] - # mAP result: [num_class, num_diff, num_minoverlap] - curcls_name = class_to_name[curcls] - for i in range(min_overlaps.shape[0]): - # prepare results for print - result += ('{} AP11@{:.2f}, {:.2f}, {:.2f}:\n'.format( - curcls_name, *min_overlaps[i, :, j])) - if mAP11_bbox is not None: - result += 'bbox AP11:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP11_bbox[j, :, i]) - if mAP11_bev is not None: - result += 'bev AP11:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP11_bev[j, :, i]) - if mAP11_3d is not None: - result += '3d AP11:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP11_3d[j, :, i]) - if compute_aos: - result += 'aos AP11:{:.2f}, {:.2f}, {:.2f}\n'.format( - *mAP11_aos[j, :, i]) - - # prepare results for logger - for idx in range(3): - if i == 0: - postfix = f'{difficulty[idx]}_strict' - else: - postfix = f'{difficulty[idx]}_loose' - prefix = f'KITTI/{curcls_name}' - if mAP11_3d is not None: - ret_dict[f'{prefix}_3D_AP11_{postfix}'] =\ - mAP11_3d[j, idx, i] - if mAP11_bev is not None: - ret_dict[f'{prefix}_BEV_AP11_{postfix}'] =\ - mAP11_bev[j, idx, i] - if mAP11_bbox is not None: - ret_dict[f'{prefix}_2D_AP11_{postfix}'] =\ - mAP11_bbox[j, idx, i] - - # calculate mAP11 over all classes if there are multiple classes - if len(current_classes) > 1: - # prepare results for print - result += ('\nOverall AP11@{}, {}, {}:\n'.format(*difficulty)) - if mAP11_bbox is not None: - mAP11_bbox = mAP11_bbox.mean(axis=0) - result += 'bbox AP11:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP11_bbox[:, 0]) - if mAP11_bev is not None: - mAP11_bev = mAP11_bev.mean(axis=0) - result += 'bev AP11:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP11_bev[:, 0]) - 
if mAP11_3d is not None: - mAP11_3d = mAP11_3d.mean(axis=0) - result += '3d AP11:{:.4f}, {:.4f}, {:.4f}\n'.format(*mAP11_3d[:, - 0]) - if compute_aos: - mAP11_aos = mAP11_aos.mean(axis=0) - result += 'aos AP11:{:.2f}, {:.2f}, {:.2f}\n'.format( - *mAP11_aos[:, 0]) - - # prepare results for logger - for idx in range(3): - postfix = f'{difficulty[idx]}' - if mAP11_3d is not None: - ret_dict[f'KITTI/Overall_3D_AP11_{postfix}'] = mAP11_3d[idx, 0] - if mAP11_bev is not None: - ret_dict[f'KITTI/Overall_BEV_AP11_{postfix}'] =\ - mAP11_bev[idx, 0] - if mAP11_bbox is not None: - ret_dict[f'KITTI/Overall_2D_AP11_{postfix}'] =\ - mAP11_bbox[idx, 0] - - # Calculate AP40 - result += '\n----------- AP40 Results ------------\n\n' - for j, curcls in enumerate(current_classes): - # mAP threshold array: [num_minoverlap, metric, class] - # mAP result: [num_class, num_diff, num_minoverlap] - curcls_name = class_to_name[curcls] - for i in range(min_overlaps.shape[0]): - # prepare results for print - result += ('{} AP40@{:.2f}, {:.2f}, {:.2f}:\n'.format( - curcls_name, *min_overlaps[i, :, j])) - if mAP40_bbox is not None: - result += 'bbox AP40:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP40_bbox[j, :, i]) - if mAP40_bev is not None: - result += 'bev AP40:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP40_bev[j, :, i]) - if mAP40_3d is not None: - result += '3d AP40:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP40_3d[j, :, i]) - if compute_aos: - result += 'aos AP40:{:.2f}, {:.2f}, {:.2f}\n'.format( - *mAP40_aos[j, :, i]) - - # prepare results for logger - for idx in range(3): - if i == 0: - postfix = f'{difficulty[idx]}_strict' - else: - postfix = f'{difficulty[idx]}_loose' - prefix = f'KITTI/{curcls_name}' - if mAP40_3d is not None: - ret_dict[f'{prefix}_3D_AP40_{postfix}'] =\ - mAP40_3d[j, idx, i] - if mAP40_bev is not None: - ret_dict[f'{prefix}_BEV_AP40_{postfix}'] =\ - mAP40_bev[j, idx, i] - if mAP40_bbox is not None: - ret_dict[f'{prefix}_2D_AP40_{postfix}'] =\ - mAP40_bbox[j, idx, i] - - # calculate mAP40 over all classes if there are multiple classes - if len(current_classes) > 1: - # prepare results for print - result += ('\nOverall AP40@{}, {}, {}:\n'.format(*difficulty)) - if mAP40_bbox is not None: - mAP40_bbox = mAP40_bbox.mean(axis=0) - result += 'bbox AP40:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP40_bbox[:, 0]) - if mAP40_bev is not None: - mAP40_bev = mAP40_bev.mean(axis=0) - result += 'bev AP40:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP40_bev[:, 0]) - if mAP40_3d is not None: - mAP40_3d = mAP40_3d.mean(axis=0) - result += '3d AP40:{:.4f}, {:.4f}, {:.4f}\n'.format(*mAP40_3d[:, - 0]) - if compute_aos: - mAP40_aos = mAP40_aos.mean(axis=0) - result += 'aos AP40:{:.2f}, {:.2f}, {:.2f}\n'.format( - *mAP40_aos[:, 0]) - - # prepare results for logger - for idx in range(3): - postfix = f'{difficulty[idx]}' - if mAP40_3d is not None: - ret_dict[f'KITTI/Overall_3D_AP40_{postfix}'] = mAP40_3d[idx, 0] - if mAP40_bev is not None: - ret_dict[f'KITTI/Overall_BEV_AP40_{postfix}'] =\ - mAP40_bev[idx, 0] - if mAP40_bbox is not None: - ret_dict[f'KITTI/Overall_2D_AP40_{postfix}'] =\ - mAP40_bbox[idx, 0] - - return result, ret_dict - - -def kitti_eval_coco_style(gt_annos, dt_annos, current_classes): - """coco style evaluation of kitti. - - Args: - gt_annos (list[dict]): Contain gt information of each sample. - dt_annos (list[dict]): Contain detected information of each sample. - current_classes (list[str]): Classes to evaluation. - - Returns: - string: Evaluation results. 
- """ - class_to_name = { - 0: 'Car', - 1: 'Pedestrian', - 2: 'Cyclist', - 3: 'Van', - 4: 'Person_sitting', - } - class_to_range = { - 0: [0.5, 0.95, 10], - 1: [0.25, 0.7, 10], - 2: [0.25, 0.7, 10], - 3: [0.5, 0.95, 10], - 4: [0.25, 0.7, 10], - } - name_to_class = {v: n for n, v in class_to_name.items()} - if not isinstance(current_classes, (list, tuple)): - current_classes = [current_classes] - current_classes_int = [] - for curcls in current_classes: - if isinstance(curcls, str): - current_classes_int.append(name_to_class[curcls]) - else: - current_classes_int.append(curcls) - current_classes = current_classes_int - overlap_ranges = np.zeros([3, 3, len(current_classes)]) - for i, curcls in enumerate(current_classes): - overlap_ranges[:, :, i] = np.array(class_to_range[curcls])[:, - np.newaxis] - result = '' - # check whether alpha is valid - compute_aos = False - for anno in dt_annos: - if anno['alpha'].shape[0] != 0: - if anno['alpha'][0] != -10: - compute_aos = True - break - mAPbbox, mAPbev, mAP3d, mAPaos = do_coco_style_eval( - gt_annos, dt_annos, current_classes, overlap_ranges, compute_aos) - for j, curcls in enumerate(current_classes): - # mAP threshold array: [num_minoverlap, metric, class] - # mAP result: [num_class, num_diff, num_minoverlap] - o_range = np.array(class_to_range[curcls])[[0, 2, 1]] - o_range[1] = (o_range[2] - o_range[0]) / (o_range[1] - 1) - result += print_str((f'{class_to_name[curcls]} ' - 'coco AP@{:.2f}:{:.2f}:{:.2f}:'.format(*o_range))) - result += print_str((f'bbox AP:{mAPbbox[j, 0]:.2f}, ' - f'{mAPbbox[j, 1]:.2f}, ' - f'{mAPbbox[j, 2]:.2f}')) - result += print_str((f'bev AP:{mAPbev[j, 0]:.2f}, ' - f'{mAPbev[j, 1]:.2f}, ' - f'{mAPbev[j, 2]:.2f}')) - result += print_str((f'3d AP:{mAP3d[j, 0]:.2f}, ' - f'{mAP3d[j, 1]:.2f}, ' - f'{mAP3d[j, 2]:.2f}')) - if compute_aos: - result += print_str((f'aos AP:{mAPaos[j, 0]:.2f}, ' - f'{mAPaos[j, 1]:.2f}, ' - f'{mAPaos[j, 2]:.2f}')) - return result diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/kitti_utils/rotate_iou.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/kitti_utils/rotate_iou.py deleted file mode 100644 index 9ed75bf0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/kitti_utils/rotate_iou.py +++ /dev/null @@ -1,379 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-##################### -# Based on https://github.com/hongzhenwang/RRPN-revise -# Licensed under The MIT License -# Author: yanyan, scrin@foxmail.com -##################### -import math - -import numba -import numpy as np -from numba import cuda - - -@numba.jit(nopython=True) -def div_up(m, n): - return m // n + (m % n > 0) - - -@cuda.jit(device=True, inline=True) -def trangle_area(a, b, c): - return ((a[0] - c[0]) * (b[1] - c[1]) - (a[1] - c[1]) * - (b[0] - c[0])) / 2.0 - - -@cuda.jit(device=True, inline=True) -def area(int_pts, num_of_inter): - area_val = 0.0 - for i in range(num_of_inter - 2): - area_val += abs( - trangle_area(int_pts[:2], int_pts[2 * i + 2:2 * i + 4], - int_pts[2 * i + 4:2 * i + 6])) - return area_val - - -@cuda.jit(device=True, inline=True) -def sort_vertex_in_convex_polygon(int_pts, num_of_inter): - if num_of_inter > 0: - center = cuda.local.array((2, ), dtype=numba.float32) - center[:] = 0.0 - for i in range(num_of_inter): - center[0] += int_pts[2 * i] - center[1] += int_pts[2 * i + 1] - center[0] /= num_of_inter - center[1] /= num_of_inter - v = cuda.local.array((2, ), dtype=numba.float32) - vs = cuda.local.array((16, ), dtype=numba.float32) - for i in range(num_of_inter): - v[0] = int_pts[2 * i] - center[0] - v[1] = int_pts[2 * i + 1] - center[1] - d = math.sqrt(v[0] * v[0] + v[1] * v[1]) - v[0] = v[0] / d - v[1] = v[1] / d - if v[1] < 0: - v[0] = -2 - v[0] - vs[i] = v[0] - j = 0 - temp = 0 - for i in range(1, num_of_inter): - if vs[i - 1] > vs[i]: - temp = vs[i] - tx = int_pts[2 * i] - ty = int_pts[2 * i + 1] - j = i - while j > 0 and vs[j - 1] > temp: - vs[j] = vs[j - 1] - int_pts[j * 2] = int_pts[j * 2 - 2] - int_pts[j * 2 + 1] = int_pts[j * 2 - 1] - j -= 1 - - vs[j] = temp - int_pts[j * 2] = tx - int_pts[j * 2 + 1] = ty - - -@cuda.jit(device=True, inline=True) -def line_segment_intersection(pts1, pts2, i, j, temp_pts): - A = cuda.local.array((2, ), dtype=numba.float32) - B = cuda.local.array((2, ), dtype=numba.float32) - C = cuda.local.array((2, ), dtype=numba.float32) - D = cuda.local.array((2, ), dtype=numba.float32) - - A[0] = pts1[2 * i] - A[1] = pts1[2 * i + 1] - - B[0] = pts1[2 * ((i + 1) % 4)] - B[1] = pts1[2 * ((i + 1) % 4) + 1] - - C[0] = pts2[2 * j] - C[1] = pts2[2 * j + 1] - - D[0] = pts2[2 * ((j + 1) % 4)] - D[1] = pts2[2 * ((j + 1) % 4) + 1] - BA0 = B[0] - A[0] - BA1 = B[1] - A[1] - DA0 = D[0] - A[0] - CA0 = C[0] - A[0] - DA1 = D[1] - A[1] - CA1 = C[1] - A[1] - acd = DA1 * CA0 > CA1 * DA0 - bcd = (D[1] - B[1]) * (C[0] - B[0]) > (C[1] - B[1]) * (D[0] - B[0]) - if acd != bcd: - abc = CA1 * BA0 > BA1 * CA0 - abd = DA1 * BA0 > BA1 * DA0 - if abc != abd: - DC0 = D[0] - C[0] - DC1 = D[1] - C[1] - ABBA = A[0] * B[1] - B[0] * A[1] - CDDC = C[0] * D[1] - D[0] * C[1] - DH = BA1 * DC0 - BA0 * DC1 - Dx = ABBA * DC0 - BA0 * CDDC - Dy = ABBA * DC1 - BA1 * CDDC - temp_pts[0] = Dx / DH - temp_pts[1] = Dy / DH - return True - return False - - -@cuda.jit(device=True, inline=True) -def line_segment_intersection_v1(pts1, pts2, i, j, temp_pts): - a = cuda.local.array((2, ), dtype=numba.float32) - b = cuda.local.array((2, ), dtype=numba.float32) - c = cuda.local.array((2, ), dtype=numba.float32) - d = cuda.local.array((2, ), dtype=numba.float32) - - a[0] = pts1[2 * i] - a[1] = pts1[2 * i + 1] - - b[0] = pts1[2 * ((i + 1) % 4)] - b[1] = pts1[2 * ((i + 1) % 4) + 1] - - c[0] = pts2[2 * j] - c[1] = pts2[2 * j + 1] - - d[0] = pts2[2 * ((j + 1) % 4)] - d[1] = pts2[2 * ((j + 1) % 4) + 1] - - area_abc = trangle_area(a, b, c) - area_abd = trangle_area(a, b, d) - - if area_abc * 
area_abd >= 0: - return False - - area_cda = trangle_area(c, d, a) - area_cdb = area_cda + area_abc - area_abd - - if area_cda * area_cdb >= 0: - return False - t = area_cda / (area_abd - area_abc) - - dx = t * (b[0] - a[0]) - dy = t * (b[1] - a[1]) - temp_pts[0] = a[0] + dx - temp_pts[1] = a[1] + dy - return True - - -@cuda.jit(device=True, inline=True) -def point_in_quadrilateral(pt_x, pt_y, corners): - ab0 = corners[2] - corners[0] - ab1 = corners[3] - corners[1] - - ad0 = corners[6] - corners[0] - ad1 = corners[7] - corners[1] - - ap0 = pt_x - corners[0] - ap1 = pt_y - corners[1] - - abab = ab0 * ab0 + ab1 * ab1 - abap = ab0 * ap0 + ab1 * ap1 - adad = ad0 * ad0 + ad1 * ad1 - adap = ad0 * ap0 + ad1 * ap1 - - return abab >= abap and abap >= 0 and adad >= adap and adap >= 0 - - -@cuda.jit(device=True, inline=True) -def quadrilateral_intersection(pts1, pts2, int_pts): - num_of_inter = 0 - for i in range(4): - if point_in_quadrilateral(pts1[2 * i], pts1[2 * i + 1], pts2): - int_pts[num_of_inter * 2] = pts1[2 * i] - int_pts[num_of_inter * 2 + 1] = pts1[2 * i + 1] - num_of_inter += 1 - if point_in_quadrilateral(pts2[2 * i], pts2[2 * i + 1], pts1): - int_pts[num_of_inter * 2] = pts2[2 * i] - int_pts[num_of_inter * 2 + 1] = pts2[2 * i + 1] - num_of_inter += 1 - temp_pts = cuda.local.array((2, ), dtype=numba.float32) - for i in range(4): - for j in range(4): - has_pts = line_segment_intersection(pts1, pts2, i, j, temp_pts) - if has_pts: - int_pts[num_of_inter * 2] = temp_pts[0] - int_pts[num_of_inter * 2 + 1] = temp_pts[1] - num_of_inter += 1 - - return num_of_inter - - -@cuda.jit(device=True, inline=True) -def rbbox_to_corners(corners, rbbox): - # generate clockwise corners and rotate it clockwise - angle = rbbox[4] - a_cos = math.cos(angle) - a_sin = math.sin(angle) - center_x = rbbox[0] - center_y = rbbox[1] - x_d = rbbox[2] - y_d = rbbox[3] - corners_x = cuda.local.array((4, ), dtype=numba.float32) - corners_y = cuda.local.array((4, ), dtype=numba.float32) - corners_x[0] = -x_d / 2 - corners_x[1] = -x_d / 2 - corners_x[2] = x_d / 2 - corners_x[3] = x_d / 2 - corners_y[0] = -y_d / 2 - corners_y[1] = y_d / 2 - corners_y[2] = y_d / 2 - corners_y[3] = -y_d / 2 - for i in range(4): - corners[2 * i] = a_cos * corners_x[i] + a_sin * corners_y[i] + center_x - corners[2 * i + - 1] = -a_sin * corners_x[i] + a_cos * corners_y[i] + center_y - - -@cuda.jit(device=True, inline=True) -def inter(rbbox1, rbbox2): - """Compute intersection of two rotated boxes. - - Args: - rbox1 (np.ndarray, shape=[5]): Rotated 2d box. - rbox2 (np.ndarray, shape=[5]): Rotated 2d box. - - Returns: - float: Intersection of two rotated boxes. - """ - corners1 = cuda.local.array((8, ), dtype=numba.float32) - corners2 = cuda.local.array((8, ), dtype=numba.float32) - intersection_corners = cuda.local.array((16, ), dtype=numba.float32) - - rbbox_to_corners(corners1, rbbox1) - rbbox_to_corners(corners2, rbbox2) - - num_intersection = quadrilateral_intersection(corners1, corners2, - intersection_corners) - sort_vertex_in_convex_polygon(intersection_corners, num_intersection) - # print(intersection_corners.reshape([-1, 2])[:num_intersection]) - - return area(intersection_corners, num_intersection) - - -@cuda.jit(device=True, inline=True) -def devRotateIoUEval(rbox1, rbox2, criterion=-1): - """Compute rotated iou on device. - - Args: - rbox1 (np.ndarray, shape=[5]): Rotated 2d box. - rbox2 (np.ndarray, shape=[5]): Rotated 2d box. - criterion (int, optional): Indicate different type of iou. 
- -1 indicate `area_inter / (area1 + area2 - area_inter)`, - 0 indicate `area_inter / area1`, - 1 indicate `area_inter / area2`. - - Returns: - float: iou between two input boxes. - """ - area1 = rbox1[2] * rbox1[3] - area2 = rbox2[2] * rbox2[3] - area_inter = inter(rbox1, rbox2) - if criterion == -1: - return area_inter / (area1 + area2 - area_inter) - elif criterion == 0: - return area_inter / area1 - elif criterion == 1: - return area_inter / area2 - else: - return area_inter - - -@cuda.jit( - '(int64, int64, float32[:], float32[:], float32[:], int32)', - fastmath=False) -def rotate_iou_kernel_eval(N, - K, - dev_boxes, - dev_query_boxes, - dev_iou, - criterion=-1): - """Kernel of computing rotated IoU. This function is for bev boxes in - camera coordinate system ONLY (the rotation is clockwise). - - Args: - N (int): The number of boxes. - K (int): The number of query boxes. - dev_boxes (np.ndarray): Boxes on device. - dev_query_boxes (np.ndarray): Query boxes on device. - dev_iou (np.ndarray): Computed iou to return. - criterion (int, optional): Indicate different type of iou. - -1 indicate `area_inter / (area1 + area2 - area_inter)`, - 0 indicate `area_inter / area1`, - 1 indicate `area_inter / area2`. - """ - threadsPerBlock = 8 * 8 - row_start = cuda.blockIdx.x - col_start = cuda.blockIdx.y - tx = cuda.threadIdx.x - row_size = min(N - row_start * threadsPerBlock, threadsPerBlock) - col_size = min(K - col_start * threadsPerBlock, threadsPerBlock) - block_boxes = cuda.shared.array(shape=(64 * 5, ), dtype=numba.float32) - block_qboxes = cuda.shared.array(shape=(64 * 5, ), dtype=numba.float32) - - dev_query_box_idx = threadsPerBlock * col_start + tx - dev_box_idx = threadsPerBlock * row_start + tx - if (tx < col_size): - block_qboxes[tx * 5 + 0] = dev_query_boxes[dev_query_box_idx * 5 + 0] - block_qboxes[tx * 5 + 1] = dev_query_boxes[dev_query_box_idx * 5 + 1] - block_qboxes[tx * 5 + 2] = dev_query_boxes[dev_query_box_idx * 5 + 2] - block_qboxes[tx * 5 + 3] = dev_query_boxes[dev_query_box_idx * 5 + 3] - block_qboxes[tx * 5 + 4] = dev_query_boxes[dev_query_box_idx * 5 + 4] - if (tx < row_size): - block_boxes[tx * 5 + 0] = dev_boxes[dev_box_idx * 5 + 0] - block_boxes[tx * 5 + 1] = dev_boxes[dev_box_idx * 5 + 1] - block_boxes[tx * 5 + 2] = dev_boxes[dev_box_idx * 5 + 2] - block_boxes[tx * 5 + 3] = dev_boxes[dev_box_idx * 5 + 3] - block_boxes[tx * 5 + 4] = dev_boxes[dev_box_idx * 5 + 4] - cuda.syncthreads() - if tx < row_size: - for i in range(col_size): - offset = ( - row_start * threadsPerBlock * K + col_start * threadsPerBlock + - tx * K + i) - dev_iou[offset] = devRotateIoUEval(block_qboxes[i * 5:i * 5 + 5], - block_boxes[tx * 5:tx * 5 + 5], - criterion) - - -def rotate_iou_gpu_eval(boxes, query_boxes, criterion=-1, device_id=0): - """Rotated box iou running in gpu. 500x faster than cpu version (take 5ms - in one example with numba.cuda code). convert from [this project]( - https://github.com/hongzhenwang/RRPN-revise/tree/master/lib/rotation). - - This function is for bev boxes in camera coordinate system ONLY - (the rotation is clockwise). - - Args: - boxes (torch.Tensor): rbboxes. format: centers, dims, - angles(clockwise when positive) with the shape of [N, 5]. - query_boxes (torch.FloatTensor, shape=(K, 5)): - rbboxes to compute iou with boxes. - device_id (int, optional): Defaults to 0. Device to use. - criterion (int, optional): Indicate different type of iou. 
- -1 indicate `area_inter / (area1 + area2 - area_inter)`, - 0 indicate `area_inter / area1`, - 1 indicate `area_inter / area2`. - - Returns: - np.ndarray: IoU results. - """ - boxes = boxes.astype(np.float32) - query_boxes = query_boxes.astype(np.float32) - N = boxes.shape[0] - K = query_boxes.shape[0] - iou = np.zeros((N, K), dtype=np.float32) - if N == 0 or K == 0: - return iou - threadsPerBlock = 8 * 8 - cuda.select_device(device_id) - blockspergrid = (div_up(N, threadsPerBlock), div_up(K, threadsPerBlock)) - - stream = cuda.stream() - with stream.auto_synchronize(): - boxes_dev = cuda.to_device(boxes.reshape([-1]), stream) - query_boxes_dev = cuda.to_device(query_boxes.reshape([-1]), stream) - iou_dev = cuda.to_device(iou.reshape([-1]), stream) - rotate_iou_kernel_eval[blockspergrid, threadsPerBlock, - stream](N, K, boxes_dev, query_boxes_dev, - iou_dev, criterion) - iou_dev.copy_to_host(iou.reshape([-1]), stream=stream) - return iou.astype(boxes.dtype) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/lyft_eval.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/lyft_eval.py deleted file mode 100644 index 47c5cd6a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/lyft_eval.py +++ /dev/null @@ -1,285 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -import mmcv -import numpy as np -from lyft_dataset_sdk.eval.detection.mAP_evaluation import (Box3D, get_ap, - get_class_names, - get_ious, - group_by_key, - wrap_in_box) -from mmcv.utils import print_log -from terminaltables import AsciiTable - - -def load_lyft_gts(lyft, data_root, eval_split, logger=None): - """Loads ground truth boxes from database. - - Args: - lyft (:obj:`LyftDataset`): Lyft class in the sdk. - data_root (str): Root of data for reading splits. - eval_split (str): Name of the split for evaluation. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - - Returns: - list[dict]: List of annotation dictionaries. - """ - split_scenes = mmcv.list_from_file( - osp.join(data_root, f'{eval_split}.txt')) - - # Read out all sample_tokens in DB. - sample_tokens_all = [s['token'] for s in lyft.sample] - assert len(sample_tokens_all) > 0, 'Error: Database has no samples!' - - if eval_split == 'test': - # Check that you aren't trying to cheat :) - assert len(lyft.sample_annotation) > 0, \ - 'Error: You are trying to evaluate on the test set \ - but you do not have the annotations!' - - sample_tokens = [] - for sample_token in sample_tokens_all: - scene_token = lyft.get('sample', sample_token)['scene_token'] - scene_record = lyft.get('scene', scene_token) - if scene_record['name'] in split_scenes: - sample_tokens.append(sample_token) - - all_annotations = [] - - print_log('Loading ground truth annotations...', logger=logger) - # Load annotations and filter predictions and annotations. - for sample_token in mmcv.track_iter_progress(sample_tokens): - sample = lyft.get('sample', sample_token) - sample_annotation_tokens = sample['anns'] - for sample_annotation_token in sample_annotation_tokens: - # Get label name in detection task and filter unused labels. 
- sample_annotation = \ - lyft.get('sample_annotation', sample_annotation_token) - detection_name = sample_annotation['category_name'] - if detection_name is None: - continue - annotation = { - 'sample_token': sample_token, - 'translation': sample_annotation['translation'], - 'size': sample_annotation['size'], - 'rotation': sample_annotation['rotation'], - 'name': detection_name, - } - all_annotations.append(annotation) - - return all_annotations - - -def load_lyft_predictions(res_path): - """Load Lyft predictions from json file. - - Args: - res_path (str): Path of result json file recording detections. - - Returns: - list[dict]: List of prediction dictionaries. - """ - predictions = mmcv.load(res_path) - predictions = predictions['results'] - all_preds = [] - for sample_token in predictions.keys(): - all_preds.extend(predictions[sample_token]) - return all_preds - - -def lyft_eval(lyft, data_root, res_path, eval_set, output_dir, logger=None): - """Evaluation API for Lyft dataset. - - Args: - lyft (:obj:`LyftDataset`): Lyft class in the sdk. - data_root (str): Root of data for reading splits. - res_path (str): Path of result json file recording detections. - eval_set (str): Name of the split for evaluation. - output_dir (str): Output directory for output json files. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str, float]: The evaluation results. - """ - # evaluate by lyft metrics - gts = load_lyft_gts(lyft, data_root, eval_set, logger) - predictions = load_lyft_predictions(res_path) - - class_names = get_class_names(gts) - print('Calculating mAP@0.5:0.95...') - - iou_thresholds = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95] - metrics = {} - average_precisions = \ - get_classwise_aps(gts, predictions, class_names, iou_thresholds) - APs_data = [['IOU', 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]] - - mAPs = np.mean(average_precisions, axis=0) - mAPs_cate = np.mean(average_precisions, axis=1) - final_mAP = np.mean(mAPs) - - metrics['average_precisions'] = average_precisions.tolist() - metrics['mAPs'] = mAPs.tolist() - metrics['Final mAP'] = float(final_mAP) - metrics['class_names'] = class_names - metrics['mAPs_cate'] = mAPs_cate.tolist() - - APs_data = [['class', 'mAP@0.5:0.95']] - for i in range(len(class_names)): - row = [class_names[i], round(mAPs_cate[i], 3)] - APs_data.append(row) - APs_data.append(['Overall', round(final_mAP, 3)]) - APs_table = AsciiTable(APs_data, title='mAPs@0.5:0.95') - APs_table.inner_footing_row_border = True - print_log(APs_table.table, logger=logger) - - res_path = osp.join(output_dir, 'lyft_metrics.json') - mmcv.dump(metrics, res_path) - return metrics - - -def get_classwise_aps(gt, predictions, class_names, iou_thresholds): - """Returns an array with an average precision per class. - - Note: Ground truth and predictions should have the following format. - - .. 
code-block:: - - gt = [{ - 'sample_token': '0f0e3ce89d2324d8b45aa55a7b4f8207 - fbb039a550991a5149214f98cec136ac', - 'translation': [974.2811881299899, 1714.6815014457964, - -23.689857123368846], - 'size': [1.796, 4.488, 1.664], - 'rotation': [0.14882026466054782, 0, 0, 0.9888642620837121], - 'name': 'car' - }] - - predictions = [{ - 'sample_token': '0f0e3ce89d2324d8b45aa55a7b4f8207 - fbb039a550991a5149214f98cec136ac', - 'translation': [971.8343488872263, 1713.6816097857359, - -25.82534357061308], - 'size': [2.519726579986132, 7.810161372666739, 3.483438286096803], - 'rotation': [0.10913582721095375, 0.04099572636992043, - 0.01927712319721745, 1.029328402625659], - 'name': 'car', - 'score': 0.3077029437237213 - }] - - Args: - gt (list[dict]): list of dictionaries in the format described below. - predictions (list[dict]): list of dictionaries in the format - described below. - class_names (list[str]): list of the class names. - iou_thresholds (list[float]): IOU thresholds used to calculate - TP / FN - - Returns: - np.ndarray: an array with an average precision per class. - """ - assert all([0 <= iou_th <= 1 for iou_th in iou_thresholds]) - - gt_by_class_name = group_by_key(gt, 'name') - pred_by_class_name = group_by_key(predictions, 'name') - - average_precisions = np.zeros((len(class_names), len(iou_thresholds))) - - for class_id, class_name in enumerate(class_names): - if class_name in pred_by_class_name: - recalls, precisions, average_precision = get_single_class_aps( - gt_by_class_name[class_name], pred_by_class_name[class_name], - iou_thresholds) - average_precisions[class_id, :] = average_precision - - return average_precisions - - -def get_single_class_aps(gt, predictions, iou_thresholds): - """Compute recall and precision for all iou thresholds. Adapted from - LyftDatasetDevkit. - - Args: - gt (list[dict]): list of dictionaries in the format described above. - predictions (list[dict]): list of dictionaries in the format - described below. - iou_thresholds (list[float]): IOU thresholds used to calculate - TP / FN - - Returns: - tuple[np.ndarray]: Returns (recalls, precisions, average precisions) - for each class. 
- """ - num_gts = len(gt) - image_gts = group_by_key(gt, 'sample_token') - image_gts = wrap_in_box(image_gts) - - sample_gt_checked = { - sample_token: np.zeros((len(boxes), len(iou_thresholds))) - for sample_token, boxes in image_gts.items() - } - - predictions = sorted(predictions, key=lambda x: x['score'], reverse=True) - - # go down dets and mark TPs and FPs - num_predictions = len(predictions) - tps = np.zeros((num_predictions, len(iou_thresholds))) - fps = np.zeros((num_predictions, len(iou_thresholds))) - - for prediction_index, prediction in enumerate(predictions): - predicted_box = Box3D(**prediction) - - sample_token = prediction['sample_token'] - - max_overlap = -np.inf - jmax = -1 - - if sample_token in image_gts: - gt_boxes = image_gts[sample_token] - # gt_boxes per sample - gt_checked = sample_gt_checked[sample_token] - # gt flags per sample - else: - gt_boxes = [] - gt_checked = None - - if len(gt_boxes) > 0: - overlaps = get_ious(gt_boxes, predicted_box) - - max_overlap = np.max(overlaps) - - jmax = np.argmax(overlaps) - - for i, iou_threshold in enumerate(iou_thresholds): - if max_overlap > iou_threshold: - if gt_checked[jmax, i] == 0: - tps[prediction_index, i] = 1.0 - gt_checked[jmax, i] = 1 - else: - fps[prediction_index, i] = 1.0 - else: - fps[prediction_index, i] = 1.0 - - # compute precision recall - fps = np.cumsum(fps, axis=0) - tps = np.cumsum(tps, axis=0) - - recalls = tps / float(num_gts) - # avoid divide by zero in case the first detection - # matches a difficult ground truth - precisions = tps / np.maximum(tps + fps, np.finfo(np.float64).eps) - - aps = [] - for i in range(len(iou_thresholds)): - recall = recalls[:, i] - precision = precisions[:, i] - assert np.all(0 <= recall) & np.all(recall <= 1) - assert np.all(0 <= precision) & np.all(precision <= 1) - ap = get_ap(recall, precision) - aps.append(ap) - - aps = np.array(aps) - - return recalls, precisions, aps diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/__init__.py deleted file mode 100644 index c98ea835..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .evaluate_semantic_instance import evaluate_matches, scannet_eval - -__all__ = ['scannet_eval', 'evaluate_matches'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/evaluate_semantic_instance.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/evaluate_semantic_instance.py deleted file mode 100644 index e4b94395..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/evaluate_semantic_instance.py +++ /dev/null @@ -1,347 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# adapted from https://github.com/ScanNet/ScanNet/blob/master/BenchmarkScripts/3d_evaluation/evaluate_semantic_instance.py # noqa -from copy import deepcopy - -import numpy as np - -from . import util_3d - - -def evaluate_matches(matches, class_labels, options): - """Evaluate instance segmentation from matched gt and predicted instances - for all scenes. - - Args: - matches (dict): Contains gt2pred and pred2gt infos for every scene. - class_labels (tuple[str]): Class names. - options (dict): ScanNet evaluator options. See get_options. - - Returns: - np.array: Average precision scores for all thresholds and categories. 
- """ - overlaps = options['overlaps'] - min_region_sizes = [options['min_region_sizes'][0]] - dist_threshes = [options['distance_threshes'][0]] - dist_confs = [options['distance_confs'][0]] - - # results: class x overlap - ap = np.zeros((len(dist_threshes), len(class_labels), len(overlaps)), - np.float) - for di, (min_region_size, distance_thresh, distance_conf) in enumerate( - zip(min_region_sizes, dist_threshes, dist_confs)): - for oi, overlap_th in enumerate(overlaps): - pred_visited = {} - for m in matches: - for label_name in class_labels: - for p in matches[m]['pred'][label_name]: - if 'filename' in p: - pred_visited[p['filename']] = False - for li, label_name in enumerate(class_labels): - y_true = np.empty(0) - y_score = np.empty(0) - hard_false_negatives = 0 - has_gt = False - has_pred = False - for m in matches: - pred_instances = matches[m]['pred'][label_name] - gt_instances = matches[m]['gt'][label_name] - # filter groups in ground truth - gt_instances = [ - gt for gt in gt_instances - if gt['instance_id'] >= 1000 and gt['vert_count'] >= - min_region_size and gt['med_dist'] <= distance_thresh - and gt['dist_conf'] >= distance_conf - ] - if gt_instances: - has_gt = True - if pred_instances: - has_pred = True - - cur_true = np.ones(len(gt_instances)) - cur_score = np.ones(len(gt_instances)) * (-float('inf')) - cur_match = np.zeros(len(gt_instances), dtype=np.bool) - # collect matches - for (gti, gt) in enumerate(gt_instances): - found_match = False - for pred in gt['matched_pred']: - # greedy assignments - if pred_visited[pred['filename']]: - continue - overlap = float(pred['intersection']) / ( - gt['vert_count'] + pred['vert_count'] - - pred['intersection']) - if overlap > overlap_th: - confidence = pred['confidence'] - # if already have a prediction for this gt, - # the prediction with the lower score is automatically a false positive # noqa - if cur_match[gti]: - max_score = max(cur_score[gti], confidence) - min_score = min(cur_score[gti], confidence) - cur_score[gti] = max_score - # append false positive - cur_true = np.append(cur_true, 0) - cur_score = np.append(cur_score, min_score) - cur_match = np.append(cur_match, True) - # otherwise set score - else: - found_match = True - cur_match[gti] = True - cur_score[gti] = confidence - pred_visited[pred['filename']] = True - if not found_match: - hard_false_negatives += 1 - # remove non-matched ground truth instances - cur_true = cur_true[cur_match] - cur_score = cur_score[cur_match] - - # collect non-matched predictions as false positive - for pred in pred_instances: - found_gt = False - for gt in pred['matched_gt']: - overlap = float(gt['intersection']) / ( - gt['vert_count'] + pred['vert_count'] - - gt['intersection']) - if overlap > overlap_th: - found_gt = True - break - if not found_gt: - num_ignore = pred['void_intersection'] - for gt in pred['matched_gt']: - # group? 
- if gt['instance_id'] < 1000: - num_ignore += gt['intersection'] - # small ground truth instances - if gt['vert_count'] < min_region_size or gt[ - 'med_dist'] > distance_thresh or gt[ - 'dist_conf'] < distance_conf: - num_ignore += gt['intersection'] - proportion_ignore = float( - num_ignore) / pred['vert_count'] - # if not ignored append false positive - if proportion_ignore <= overlap_th: - cur_true = np.append(cur_true, 0) - confidence = pred['confidence'] - cur_score = np.append(cur_score, confidence) - - # append to overall results - y_true = np.append(y_true, cur_true) - y_score = np.append(y_score, cur_score) - - # compute average precision - if has_gt and has_pred: - # compute precision recall curve first - - # sorting and cumsum - score_arg_sort = np.argsort(y_score) - y_score_sorted = y_score[score_arg_sort] - y_true_sorted = y_true[score_arg_sort] - y_true_sorted_cumsum = np.cumsum(y_true_sorted) - - # unique thresholds - (thresholds, unique_indices) = np.unique( - y_score_sorted, return_index=True) - num_prec_recall = len(unique_indices) + 1 - - # prepare precision recall - num_examples = len(y_score_sorted) - # follow https://github.com/ScanNet/ScanNet/pull/26 ? # noqa - num_true_examples = y_true_sorted_cumsum[-1] if len( - y_true_sorted_cumsum) > 0 else 0 - precision = np.zeros(num_prec_recall) - recall = np.zeros(num_prec_recall) - - # deal with the first point - y_true_sorted_cumsum = np.append(y_true_sorted_cumsum, 0) - # deal with remaining - for idx_res, idx_scores in enumerate(unique_indices): - cumsum = y_true_sorted_cumsum[idx_scores - 1] - tp = num_true_examples - cumsum - fp = num_examples - idx_scores - tp - fn = cumsum + hard_false_negatives - p = float(tp) / (tp + fp) - r = float(tp) / (tp + fn) - precision[idx_res] = p - recall[idx_res] = r - - # first point in curve is artificial - precision[-1] = 1. - recall[-1] = 0. - - # compute average of precision-recall curve - recall_for_conv = np.copy(recall) - recall_for_conv = np.append(recall_for_conv[0], - recall_for_conv) - recall_for_conv = np.append(recall_for_conv, 0.) - - stepWidths = np.convolve(recall_for_conv, [-0.5, 0, 0.5], - 'valid') - # integrate is now simply a dot product - ap_current = np.dot(precision, stepWidths) - - elif has_gt: - ap_current = 0.0 - else: - ap_current = float('nan') - ap[di, li, oi] = ap_current - return ap - - -def compute_averages(aps, options, class_labels): - """Averages AP scores for all categories. - - Args: - aps (np.array): AP scores for all thresholds and categories. - options (dict): ScanNet evaluator options. See get_options. - class_labels (tuple[str]): Class names. - - Returns: - dict: Overall and per-category AP scores. 
- """ - d_inf = 0 - o50 = np.where(np.isclose(options['overlaps'], 0.5)) - o25 = np.where(np.isclose(options['overlaps'], 0.25)) - o_all_but25 = np.where( - np.logical_not(np.isclose(options['overlaps'], 0.25))) - avg_dict = {} - avg_dict['all_ap'] = np.nanmean(aps[d_inf, :, o_all_but25]) - avg_dict['all_ap_50%'] = np.nanmean(aps[d_inf, :, o50]) - avg_dict['all_ap_25%'] = np.nanmean(aps[d_inf, :, o25]) - avg_dict['classes'] = {} - for (li, label_name) in enumerate(class_labels): - avg_dict['classes'][label_name] = {} - avg_dict['classes'][label_name]['ap'] = np.average(aps[d_inf, li, - o_all_but25]) - avg_dict['classes'][label_name]['ap50%'] = np.average(aps[d_inf, li, - o50]) - avg_dict['classes'][label_name]['ap25%'] = np.average(aps[d_inf, li, - o25]) - return avg_dict - - -def assign_instances_for_scan(pred_info, gt_ids, options, valid_class_ids, - class_labels, id_to_label): - """Assign gt and predicted instances for a single scene. - - Args: - pred_info (dict): Predicted masks, labels and scores. - gt_ids (np.array): Ground truth instance masks. - options (dict): ScanNet evaluator options. See get_options. - valid_class_ids (tuple[int]): Ids of valid categories. - class_labels (tuple[str]): Class names. - id_to_label (dict[int, str]): Mapping of valid class id to class label. - - Returns: - dict: Per class assigned gt to predicted instances. - dict: Per class assigned predicted to gt instances. - """ - # get gt instances - gt_instances = util_3d.get_instances(gt_ids, valid_class_ids, class_labels, - id_to_label) - # associate - gt2pred = deepcopy(gt_instances) - for label in gt2pred: - for gt in gt2pred[label]: - gt['matched_pred'] = [] - pred2gt = {} - for label in class_labels: - pred2gt[label] = [] - num_pred_instances = 0 - # mask of void labels in the ground truth - bool_void = np.logical_not(np.in1d(gt_ids // 1000, valid_class_ids)) - # go through all prediction masks - for pred_mask_file in pred_info: - label_id = int(pred_info[pred_mask_file]['label_id']) - conf = pred_info[pred_mask_file]['conf'] - if not label_id in id_to_label: # noqa E713 - continue - label_name = id_to_label[label_id] - # read the mask - pred_mask = pred_info[pred_mask_file]['mask'] - if len(pred_mask) != len(gt_ids): - raise ValueError('len(pred_mask) != len(gt_ids)') - # convert to binary - pred_mask = np.not_equal(pred_mask, 0) - num = np.count_nonzero(pred_mask) - if num < options['min_region_sizes'][0]: - continue # skip if empty - - pred_instance = {} - pred_instance['filename'] = pred_mask_file - pred_instance['pred_id'] = num_pred_instances - pred_instance['label_id'] = label_id - pred_instance['vert_count'] = num - pred_instance['confidence'] = conf - pred_instance['void_intersection'] = np.count_nonzero( - np.logical_and(bool_void, pred_mask)) - - # matched gt instances - matched_gt = [] - # go through all gt instances with matching label - for (gt_num, gt_inst) in enumerate(gt2pred[label_name]): - intersection = np.count_nonzero( - np.logical_and(gt_ids == gt_inst['instance_id'], pred_mask)) - if intersection > 0: - gt_copy = gt_inst.copy() - pred_copy = pred_instance.copy() - gt_copy['intersection'] = intersection - pred_copy['intersection'] = intersection - matched_gt.append(gt_copy) - gt2pred[label_name][gt_num]['matched_pred'].append(pred_copy) - pred_instance['matched_gt'] = matched_gt - num_pred_instances += 1 - pred2gt[label_name].append(pred_instance) - - return gt2pred, pred2gt - - -def scannet_eval(preds, gts, options, valid_class_ids, class_labels, - id_to_label): - """Evaluate 
instance segmentation in ScanNet protocol. - - Args: - preds (list[dict]): Per scene predictions of mask, label and - confidence. - gts (list[np.array]): Per scene ground truth instance masks. - options (dict): ScanNet evaluator options. See get_options. - valid_class_ids (tuple[int]): Ids of valid categories. - class_labels (tuple[str]): Class names. - id_to_label (dict[int, str]): Mapping of valid class id to class label. - - Returns: - dict: Overall and per-category AP scores. - """ - options = get_options(options) - matches = {} - for i, (pred, gt) in enumerate(zip(preds, gts)): - matches_key = i - # assign gt to predictions - gt2pred, pred2gt = assign_instances_for_scan(pred, gt, options, - valid_class_ids, - class_labels, id_to_label) - matches[matches_key] = {} - matches[matches_key]['gt'] = gt2pred - matches[matches_key]['pred'] = pred2gt - - ap_scores = evaluate_matches(matches, class_labels, options) - avgs = compute_averages(ap_scores, options, class_labels) - return avgs - - -def get_options(options=None): - """Set ScanNet evaluator options. - - Args: - options (dict, optional): Not default options. Default: None. - - Returns: - dict: Updated options with all 4 keys. - """ - assert options is None or isinstance(options, dict) - _options = dict( - overlaps=np.append(np.arange(0.5, 0.95, 0.05), 0.25), - min_region_sizes=np.array([100]), - distance_threshes=np.array([float('inf')]), - distance_confs=np.array([-float('inf')])) - if options is not None: - _options.update(options) - return _options diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/util_3d.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/util_3d.py deleted file mode 100644 index 527d3412..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/scannet_utils/util_3d.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# adapted from https://github.com/ScanNet/ScanNet/blob/master/BenchmarkScripts/util_3d.py # noqa -import json - -import numpy as np - - -class Instance: - """Single instance for ScanNet evaluator. - - Args: - mesh_vert_instances (np.array): Instance ids for each point. - instance_id: Id of single instance. 
- """ - instance_id = 0 - label_id = 0 - vert_count = 0 - med_dist = -1 - dist_conf = 0.0 - - def __init__(self, mesh_vert_instances, instance_id): - if instance_id == -1: - return - self.instance_id = int(instance_id) - self.label_id = int(self.get_label_id(instance_id)) - self.vert_count = int( - self.get_instance_verts(mesh_vert_instances, instance_id)) - - @staticmethod - def get_label_id(instance_id): - return int(instance_id // 1000) - - @staticmethod - def get_instance_verts(mesh_vert_instances, instance_id): - return (mesh_vert_instances == instance_id).sum() - - def to_json(self): - return json.dumps( - self, default=lambda o: o.__dict__, sort_keys=True, indent=4) - - def to_dict(self): - dict = {} - dict['instance_id'] = self.instance_id - dict['label_id'] = self.label_id - dict['vert_count'] = self.vert_count - dict['med_dist'] = self.med_dist - dict['dist_conf'] = self.dist_conf - return dict - - def from_json(self, data): - self.instance_id = int(data['instance_id']) - self.label_id = int(data['label_id']) - self.vert_count = int(data['vert_count']) - if 'med_dist' in data: - self.med_dist = float(data['med_dist']) - self.dist_conf = float(data['dist_conf']) - - def __str__(self): - return '(' + str(self.instance_id) + ')' - - -def get_instances(ids, class_ids, class_labels, id2label): - """Transform gt instance mask to Instance objects. - - Args: - ids (np.array): Instance ids for each point. - class_ids: (tuple[int]): Ids of valid categories. - class_labels (tuple[str]): Class names. - id2label: (dict[int, str]): Mapping of valid class id to class label. - - Returns: - dict [str, list]: Instance objects grouped by class label. - """ - instances = {} - for label in class_labels: - instances[label] = [] - instance_ids = np.unique(ids) - for id in instance_ids: - if id == 0: - continue - inst = Instance(ids, id) - if inst.label_id in class_ids: - instances[id2label[inst.label_id]].append(inst.to_dict()) - return instances diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/seg_eval.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/seg_eval.py deleted file mode 100644 index 4a3166d6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/seg_eval.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - - -def fast_hist(preds, labels, num_classes): - """Compute the confusion matrix for every batch. - - Args: - preds (np.ndarray): Prediction labels of points with shape of - (num_points, ). - labels (np.ndarray): Ground truth labels of points with shape of - (num_points, ). - num_classes (int): number of classes - - Returns: - np.ndarray: Calculated confusion matrix. - """ - - k = (labels >= 0) & (labels < num_classes) - bin_count = np.bincount( - num_classes * labels[k].astype(int) + preds[k], - minlength=num_classes**2) - return bin_count[:num_classes**2].reshape(num_classes, num_classes) - - -def per_class_iou(hist): - """Compute the per class iou. - - Args: - hist(np.ndarray): Overall confusion martix - (num_classes, num_classes ). - - Returns: - np.ndarray: Calculated per class iou - """ - - return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist)) - - -def get_acc(hist): - """Compute the overall accuracy. - - Args: - hist(np.ndarray): Overall confusion martix - (num_classes, num_classes ). 
- - Returns: - float: Calculated overall acc - """ - - return np.diag(hist).sum() / hist.sum() - - -def get_acc_cls(hist): - """Compute the class average accuracy. - - Args: - hist(np.ndarray): Overall confusion martix - (num_classes, num_classes ). - - Returns: - float: Calculated class average acc - """ - - return np.nanmean(np.diag(hist) / hist.sum(axis=1)) - - -def seg_eval(gt_labels, seg_preds, label2cat, ignore_index, logger=None): - """Semantic Segmentation Evaluation. - - Evaluate the result of the Semantic Segmentation. - - Args: - gt_labels (list[torch.Tensor]): Ground truth labels. - seg_preds (list[torch.Tensor]): Predictions. - label2cat (dict): Map from label to category name. - ignore_index (int): Index that will be ignored in evaluation. - logger (logging.Logger | str, optional): The way to print the mAP - summary. See `mmdet.utils.print_log()` for details. Default: None. - - Returns: - dict[str, float]: Dict of results. - """ - assert len(seg_preds) == len(gt_labels) - num_classes = len(label2cat) - - hist_list = [] - for i in range(len(gt_labels)): - gt_seg = gt_labels[i].clone().numpy().astype(np.int) - pred_seg = seg_preds[i].clone().numpy().astype(np.int) - - # filter out ignored points - pred_seg[gt_seg == ignore_index] = -1 - gt_seg[gt_seg == ignore_index] = -1 - - # calculate one instance result - hist_list.append(fast_hist(pred_seg, gt_seg, num_classes)) - - iou = per_class_iou(sum(hist_list)) - miou = np.nanmean(iou) - acc = get_acc(sum(hist_list)) - acc_cls = get_acc_cls(sum(hist_list)) - - header = ['classes'] - for i in range(len(label2cat)): - header.append(label2cat[i]) - header.extend(['miou', 'acc', 'acc_cls']) - - ret_dict = dict() - table_columns = [['results']] - for i in range(len(label2cat)): - ret_dict[label2cat[i]] = float(iou[i]) - table_columns.append([f'{iou[i]:.4f}']) - ret_dict['miou'] = float(miou) - ret_dict['acc'] = float(acc) - ret_dict['acc_cls'] = float(acc_cls) - - table_columns.append([f'{miou:.4f}']) - table_columns.append([f'{acc:.4f}']) - table_columns.append([f'{acc_cls:.4f}']) - - table_data = [header] - table_rows = list(zip(*table_columns)) - table_data += table_rows - table = AsciiTable(table_data) - table.inner_footing_row_border = True - print_log('\n' + table.table, logger=logger) - - return ret_dict diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/waymo_utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/waymo_utils/__init__.py deleted file mode 100644 index 72d3a9bd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/waymo_utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .prediction_kitti_to_waymo import KITTI2Waymo - -__all__ = ['KITTI2Waymo'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/waymo_utils/prediction_kitti_to_waymo.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/waymo_utils/prediction_kitti_to_waymo.py deleted file mode 100644 index 205c24cb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/evaluation/waymo_utils/prediction_kitti_to_waymo.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -r"""Adapted from `Waymo to KITTI converter - `_. 
-""" - -try: - from waymo_open_dataset import dataset_pb2 as open_dataset -except ImportError: - raise ImportError( - 'Please run "pip install waymo-open-dataset-tf-2-1-0==1.2.0" ' - 'to install the official devkit first.') - -from glob import glob -from os.path import join - -import mmcv -import numpy as np -import tensorflow as tf -from waymo_open_dataset import label_pb2 -from waymo_open_dataset.protos import metrics_pb2 - - -class KITTI2Waymo(object): - """KITTI predictions to Waymo converter. - - This class serves as the converter to change predictions from KITTI to - Waymo format. - - Args: - kitti_result_files (list[dict]): Predictions in KITTI format. - waymo_tfrecords_dir (str): Directory to load waymo raw data. - waymo_results_save_dir (str): Directory to save converted predictions - in waymo format (.bin files). - waymo_results_final_path (str): Path to save combined - predictions in waymo format (.bin file), like 'a/b/c.bin'. - prefix (str): Prefix of filename. In general, 0 for training, 1 for - validation and 2 for testing. - workers (str): Number of parallel processes. - """ - - def __init__(self, - kitti_result_files, - waymo_tfrecords_dir, - waymo_results_save_dir, - waymo_results_final_path, - prefix, - workers=64): - - self.kitti_result_files = kitti_result_files - self.waymo_tfrecords_dir = waymo_tfrecords_dir - self.waymo_results_save_dir = waymo_results_save_dir - self.waymo_results_final_path = waymo_results_final_path - self.prefix = prefix - self.workers = int(workers) - self.name2idx = {} - for idx, result in enumerate(kitti_result_files): - if len(result['sample_idx']) > 0: - self.name2idx[str(result['sample_idx'][0])] = idx - - # turn on eager execution for older tensorflow versions - if int(tf.__version__.split('.')[0]) < 2: - tf.enable_eager_execution() - - self.k2w_cls_map = { - 'Car': label_pb2.Label.TYPE_VEHICLE, - 'Pedestrian': label_pb2.Label.TYPE_PEDESTRIAN, - 'Sign': label_pb2.Label.TYPE_SIGN, - 'Cyclist': label_pb2.Label.TYPE_CYCLIST, - } - - self.T_ref_to_front_cam = np.array([[0.0, 0.0, 1.0, 0.0], - [-1.0, 0.0, 0.0, 0.0], - [0.0, -1.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 1.0]]) - - self.get_file_names() - self.create_folder() - - def get_file_names(self): - """Get file names of waymo raw data.""" - self.waymo_tfrecord_pathnames = sorted( - glob(join(self.waymo_tfrecords_dir, '*.tfrecord'))) - print(len(self.waymo_tfrecord_pathnames), 'tfrecords found.') - - def create_folder(self): - """Create folder for data conversion.""" - mmcv.mkdir_or_exist(self.waymo_results_save_dir) - - def parse_objects(self, kitti_result, T_k2w, context_name, - frame_timestamp_micros): - """Parse one prediction with several instances in kitti format and - convert them to `Object` proto. - - Args: - kitti_result (dict): Predictions in kitti format. - - - name (np.ndarray): Class labels of predictions. - - dimensions (np.ndarray): Height, width, length of boxes. - - location (np.ndarray): Bottom center of boxes (x, y, z). - - rotation_y (np.ndarray): Orientation of boxes. - - score (np.ndarray): Scores of predictions. - T_k2w (np.ndarray): Transformation matrix from kitti to waymo. - context_name (str): Context name of the frame. - frame_timestamp_micros (int): Frame timestamp. - - Returns: - :obj:`Object`: Predictions in waymo dataset Object proto. - """ - - def parse_one_object(instance_idx): - """Parse one instance in kitti format and convert them to `Object` - proto. - - Args: - instance_idx (int): Index of the instance to be converted. 
- - Returns: - :obj:`Object`: Predicted instance in waymo dataset - Object proto. - """ - cls = kitti_result['name'][instance_idx] - length = round(kitti_result['dimensions'][instance_idx, 0], 4) - height = round(kitti_result['dimensions'][instance_idx, 1], 4) - width = round(kitti_result['dimensions'][instance_idx, 2], 4) - x = round(kitti_result['location'][instance_idx, 0], 4) - y = round(kitti_result['location'][instance_idx, 1], 4) - z = round(kitti_result['location'][instance_idx, 2], 4) - rotation_y = round(kitti_result['rotation_y'][instance_idx], 4) - score = round(kitti_result['score'][instance_idx], 4) - - # y: downwards; move box origin from bottom center (kitti) to - # true center (waymo) - y -= height / 2 - # frame transformation: kitti -> waymo - x, y, z = self.transform(T_k2w, x, y, z) - - # different conventions - heading = -(rotation_y + np.pi / 2) - while heading < -np.pi: - heading += 2 * np.pi - while heading > np.pi: - heading -= 2 * np.pi - - box = label_pb2.Label.Box() - box.center_x = x - box.center_y = y - box.center_z = z - box.length = length - box.width = width - box.height = height - box.heading = heading - - o = metrics_pb2.Object() - o.object.box.CopyFrom(box) - o.object.type = self.k2w_cls_map[cls] - o.score = score - - o.context_name = context_name - o.frame_timestamp_micros = frame_timestamp_micros - - return o - - objects = metrics_pb2.Objects() - - for instance_idx in range(len(kitti_result['name'])): - o = parse_one_object(instance_idx) - objects.objects.append(o) - - return objects - - def convert_one(self, file_idx): - """Convert action for single file. - - Args: - file_idx (int): Index of the file to be converted. - """ - file_pathname = self.waymo_tfrecord_pathnames[file_idx] - file_data = tf.data.TFRecordDataset(file_pathname, compression_type='') - - for frame_num, frame_data in enumerate(file_data): - frame = open_dataset.Frame() - frame.ParseFromString(bytearray(frame_data.numpy())) - - filename = f'{self.prefix}{file_idx:03d}{frame_num:03d}' - - for camera in frame.context.camera_calibrations: - # FRONT = 1, see dataset.proto for details - if camera.name == 1: - T_front_cam_to_vehicle = np.array( - camera.extrinsic.transform).reshape(4, 4) - - T_k2w = T_front_cam_to_vehicle @ self.T_ref_to_front_cam - - context_name = frame.context.name - frame_timestamp_micros = frame.timestamp_micros - - if filename in self.name2idx: - kitti_result = \ - self.kitti_result_files[self.name2idx[filename]] - objects = self.parse_objects(kitti_result, T_k2w, context_name, - frame_timestamp_micros) - else: - print(filename, 'not found.') - objects = metrics_pb2.Objects() - - with open( - join(self.waymo_results_save_dir, f'{filename}.bin'), - 'wb') as f: - f.write(objects.SerializeToString()) - - def convert(self): - """Convert action.""" - print('Start converting ...') - mmcv.track_parallel_progress(self.convert_one, range(len(self)), - self.workers) - print('\nFinished ...') - - # combine all files into one .bin - pathnames = sorted(glob(join(self.waymo_results_save_dir, '*.bin'))) - combined = self.combine(pathnames) - - with open(self.waymo_results_final_path, 'wb') as f: - f.write(combined.SerializeToString()) - - def __len__(self): - """Length of the filename list.""" - return len(self.waymo_tfrecord_pathnames) - - def transform(self, T, x, y, z): - """Transform the coordinates with matrix T. - - Args: - T (np.ndarray): Transformation matrix. - x(float): Coordinate in x axis. - y(float): Coordinate in y axis. - z(float): Coordinate in z axis. 
- - Returns: - list: Coordinates after transformation. - """ - pt_bef = np.array([x, y, z, 1.0]).reshape(4, 1) - pt_aft = np.matmul(T, pt_bef) - return pt_aft[:3].flatten().tolist() - - def combine(self, pathnames): - """Combine predictions in waymo format for each sample together. - - Args: - pathnames (str): Paths to save predictions. - - Returns: - :obj:`Objects`: Combined predictions in Objects proto. - """ - combined = metrics_pb2.Objects() - - for pathname in pathnames: - objects = metrics_pb2.Objects() - with open(pathname, 'rb') as f: - objects.ParseFromString(f.read()) - for o in objects.objects: - combined.objects.append(o) - - return combined diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/__init__.py deleted file mode 100644 index 73d2d833..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_points import BasePoints -from .cam_points import CameraPoints -from .depth_points import DepthPoints -from .lidar_points import LiDARPoints - -__all__ = ['BasePoints', 'CameraPoints', 'DepthPoints', 'LiDARPoints'] - - -def get_points_type(points_type): - """Get the class of points according to coordinate type. - - Args: - points_type (str): The type of points coordinate. - The valid value are "CAMERA", "LIDAR", or "DEPTH". - - Returns: - class: Points type. - """ - if points_type == 'CAMERA': - points_cls = CameraPoints - elif points_type == 'LIDAR': - points_cls = LiDARPoints - elif points_type == 'DEPTH': - points_cls = DepthPoints - else: - raise ValueError('Only "points_type" of "CAMERA", "LIDAR", or "DEPTH"' - f' are supported, got {points_type}') - - return points_cls diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/base_points.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/base_points.py deleted file mode 100644 index 929fa21e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/base_points.py +++ /dev/null @@ -1,440 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from abc import abstractmethod - -import numpy as np -import torch - -from ..bbox.structures.utils import rotation_3d_in_axis - - -class BasePoints(object): - """Base class for Points. - - Args: - tensor (torch.Tensor | np.ndarray | list): a N x points_dim matrix. - points_dim (int, optional): Number of the dimension of a point. - Each row is (x, y, z). Defaults to 3. - attribute_dims (dict, optional): Dictionary to indicate the - meaning of extra dimension. Defaults to None. - - Attributes: - tensor (torch.Tensor): Float matrix of N x points_dim. - points_dim (int): Integer indicating the dimension of a point. - Each row is (x, y, z, ...). - attribute_dims (bool): Dictionary to indicate the meaning of extra - dimension. Defaults to None. - rotation_axis (int): Default rotation axis for points rotation. 
- """ - - def __init__(self, tensor, points_dim=3, attribute_dims=None): - if isinstance(tensor, torch.Tensor): - device = tensor.device - else: - device = torch.device('cpu') - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=device) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that - # does not depend on the inputs (and consequently confuses jit) - tensor = tensor.reshape((0, points_dim)).to( - dtype=torch.float32, device=device) - assert tensor.dim() == 2 and tensor.size(-1) == \ - points_dim, tensor.size() - - self.tensor = tensor - self.points_dim = points_dim - self.attribute_dims = attribute_dims - self.rotation_axis = 0 - - @property - def coord(self): - """torch.Tensor: Coordinates of each point in shape (N, 3).""" - return self.tensor[:, :3] - - @coord.setter - def coord(self, tensor): - """Set the coordinates of each point.""" - try: - tensor = tensor.reshape(self.shape[0], 3) - except (RuntimeError, ValueError): # for torch.Tensor and np.ndarray - raise ValueError(f'got unexpected shape {tensor.shape}') - if not isinstance(tensor, torch.Tensor): - tensor = self.tensor.new_tensor(tensor) - self.tensor[:, :3] = tensor - - @property - def height(self): - """torch.Tensor: - A vector with height of each point in shape (N, 1), or None.""" - if self.attribute_dims is not None and \ - 'height' in self.attribute_dims.keys(): - return self.tensor[:, self.attribute_dims['height']] - else: - return None - - @height.setter - def height(self, tensor): - """Set the height of each point.""" - try: - tensor = tensor.reshape(self.shape[0]) - except (RuntimeError, ValueError): # for torch.Tensor and np.ndarray - raise ValueError(f'got unexpected shape {tensor.shape}') - if not isinstance(tensor, torch.Tensor): - tensor = self.tensor.new_tensor(tensor) - if self.attribute_dims is not None and \ - 'height' in self.attribute_dims.keys(): - self.tensor[:, self.attribute_dims['height']] = tensor - else: - # add height attribute - if self.attribute_dims is None: - self.attribute_dims = dict() - attr_dim = self.shape[1] - self.tensor = torch.cat([self.tensor, tensor.unsqueeze(1)], dim=1) - self.attribute_dims.update(dict(height=attr_dim)) - self.points_dim += 1 - - @property - def color(self): - """torch.Tensor: - A vector with color of each point in shape (N, 3), or None.""" - if self.attribute_dims is not None and \ - 'color' in self.attribute_dims.keys(): - return self.tensor[:, self.attribute_dims['color']] - else: - return None - - @color.setter - def color(self, tensor): - """Set the color of each point.""" - try: - tensor = tensor.reshape(self.shape[0], 3) - except (RuntimeError, ValueError): # for torch.Tensor and np.ndarray - raise ValueError(f'got unexpected shape {tensor.shape}') - if tensor.max() >= 256 or tensor.min() < 0: - warnings.warn('point got color value beyond [0, 255]') - if not isinstance(tensor, torch.Tensor): - tensor = self.tensor.new_tensor(tensor) - if self.attribute_dims is not None and \ - 'color' in self.attribute_dims.keys(): - self.tensor[:, self.attribute_dims['color']] = tensor - else: - # add color attribute - if self.attribute_dims is None: - self.attribute_dims = dict() - attr_dim = self.shape[1] - self.tensor = torch.cat([self.tensor, tensor], dim=1) - self.attribute_dims.update( - dict(color=[attr_dim, attr_dim + 1, attr_dim + 2])) - self.points_dim += 3 - - @property - def shape(self): - """torch.Shape: Shape of points.""" - return self.tensor.shape - - def shuffle(self): - """Shuffle the points. 
- - Returns: - torch.Tensor: The shuffled index. - """ - idx = torch.randperm(self.__len__(), device=self.tensor.device) - self.tensor = self.tensor[idx] - return idx - - def rotate(self, rotation, axis=None): - """Rotate points with the given rotation matrix or angle. - - Args: - rotation (float | np.ndarray | torch.Tensor): Rotation matrix - or angle. - axis (int, optional): Axis to rotate at. Defaults to None. - """ - if not isinstance(rotation, torch.Tensor): - rotation = self.tensor.new_tensor(rotation) - assert rotation.shape == torch.Size([3, 3]) or \ - rotation.numel() == 1, f'invalid rotation shape {rotation.shape}' - - if axis is None: - axis = self.rotation_axis - - if rotation.numel() == 1: - rotated_points, rot_mat_T = rotation_3d_in_axis( - self.tensor[:, :3][None], rotation, axis=axis, return_mat=True) - self.tensor[:, :3] = rotated_points.squeeze(0) - rot_mat_T = rot_mat_T.squeeze(0) - else: - # rotation.numel() == 9 - self.tensor[:, :3] = self.tensor[:, :3] @ rotation - rot_mat_T = rotation - - return rot_mat_T - - @abstractmethod - def flip(self, bev_direction='horizontal'): - """Flip the points along given BEV direction. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). - """ - pass - - def translate(self, trans_vector): - """Translate points with the given translation vector. - - Args: - trans_vector (np.ndarray, torch.Tensor): Translation - vector of size 3 or nx3. - """ - if not isinstance(trans_vector, torch.Tensor): - trans_vector = self.tensor.new_tensor(trans_vector) - trans_vector = trans_vector.squeeze(0) - if trans_vector.dim() == 1: - assert trans_vector.shape[0] == 3 - elif trans_vector.dim() == 2: - assert trans_vector.shape[0] == self.tensor.shape[0] and \ - trans_vector.shape[1] == 3 - else: - raise NotImplementedError( - f'Unsupported translation vector of shape {trans_vector.shape}' - ) - self.tensor[:, :3] += trans_vector - - def in_range_3d(self, point_range): - """Check whether the points are in the given range. - - Args: - point_range (list | torch.Tensor): The range of point - (x_min, y_min, z_min, x_max, y_max, z_max) - - Note: - In the original implementation of SECOND, checking whether - a box in the range checks whether the points are in a convex - polygon, we try to reduce the burden for simpler cases. - - Returns: - torch.Tensor: A binary vector indicating whether each point is - inside the reference range. - """ - in_range_flags = ((self.tensor[:, 0] > point_range[0]) - & (self.tensor[:, 1] > point_range[1]) - & (self.tensor[:, 2] > point_range[2]) - & (self.tensor[:, 0] < point_range[3]) - & (self.tensor[:, 1] < point_range[4]) - & (self.tensor[:, 2] < point_range[5])) - return in_range_flags - - @property - def bev(self): - """torch.Tensor: BEV of the points in shape (N, 2).""" - return self.tensor[:, [0, 1]] - - def in_range_bev(self, point_range): - """Check whether the points are in the given range. - - Args: - point_range (list | torch.Tensor): The range of point - in order of (x_min, y_min, x_max, y_max). - - Returns: - torch.Tensor: Indicating whether each point is inside - the reference range. - """ - in_range_flags = ((self.bev[:, 0] > point_range[0]) - & (self.bev[:, 1] > point_range[1]) - & (self.bev[:, 0] < point_range[2]) - & (self.bev[:, 1] < point_range[3])) - return in_range_flags - - @abstractmethod - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`CoordMode`): The target Box mode. 
- rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BasePoints`: The converted box of the same type - in the `dst` mode. - """ - pass - - def scale(self, scale_factor): - """Scale the points with horizontal and vertical scaling factors. - - Args: - scale_factors (float): Scale factors to scale the points. - """ - self.tensor[:, :3] *= scale_factor - - def __getitem__(self, item): - """ - Note: - The following usage are allowed: - 1. `new_points = points[3]`: - return a `Points` that contains only one point. - 2. `new_points = points[2:10]`: - return a slice of points. - 3. `new_points = points[vector]`: - where vector is a torch.BoolTensor with `length = len(points)`. - Nonzero elements in the vector will be selected. - 4. `new_points = points[3:11, vector]`: - return a slice of points and attribute dims. - 5. `new_points = points[4:12, 2]`: - return a slice of points with single attribute. - Note that the returned Points might share storage with this Points, - subject to Pytorch's indexing semantics. - - Returns: - :obj:`BasePoints`: A new object of - :class:`BasePoints` after indexing. - """ - original_type = type(self) - if isinstance(item, int): - return original_type( - self.tensor[item].view(1, -1), - points_dim=self.points_dim, - attribute_dims=self.attribute_dims) - elif isinstance(item, tuple) and len(item) == 2: - if isinstance(item[1], slice): - start = 0 if item[1].start is None else item[1].start - stop = self.tensor.shape[1] if \ - item[1].stop is None else item[1].stop - step = 1 if item[1].step is None else item[1].step - item = list(item) - item[1] = list(range(start, stop, step)) - item = tuple(item) - elif isinstance(item[1], int): - item = list(item) - item[1] = [item[1]] - item = tuple(item) - p = self.tensor[item[0], item[1]] - - keep_dims = list( - set(item[1]).intersection(set(range(3, self.tensor.shape[1])))) - if self.attribute_dims is not None: - attribute_dims = self.attribute_dims.copy() - for key in self.attribute_dims.keys(): - cur_attribute_dims = attribute_dims[key] - if isinstance(cur_attribute_dims, int): - cur_attribute_dims = [cur_attribute_dims] - intersect_attr = list( - set(cur_attribute_dims).intersection(set(keep_dims))) - if len(intersect_attr) == 1: - attribute_dims[key] = intersect_attr[0] - elif len(intersect_attr) > 1: - attribute_dims[key] = intersect_attr - else: - attribute_dims.pop(key) - else: - attribute_dims = None - elif isinstance(item, (slice, np.ndarray, torch.Tensor)): - p = self.tensor[item] - attribute_dims = self.attribute_dims - else: - raise NotImplementedError(f'Invalid slice {item}!') - - assert p.dim() == 2, \ - f'Indexing on Points with {item} failed to return a matrix!' - return original_type( - p, points_dim=p.shape[1], attribute_dims=attribute_dims) - - def __len__(self): - """int: Number of points in the current object.""" - return self.tensor.shape[0] - - def __repr__(self): - """str: Return a strings that describes the object.""" - return self.__class__.__name__ + '(\n ' + str(self.tensor) + ')' - - @classmethod - def cat(cls, points_list): - """Concatenate a list of Points into a single Points. - - Args: - points_list (list[:obj:`BasePoints`]): List of points. - - Returns: - :obj:`BasePoints`: The concatenated Points. 
- """ - assert isinstance(points_list, (list, tuple)) - if len(points_list) == 0: - return cls(torch.empty(0)) - assert all(isinstance(points, cls) for points in points_list) - - # use torch.cat (v.s. layers.cat) - # so the returned points never share storage with input - cat_points = cls( - torch.cat([p.tensor for p in points_list], dim=0), - points_dim=points_list[0].tensor.shape[1], - attribute_dims=points_list[0].attribute_dims) - return cat_points - - def to(self, device): - """Convert current points to a specific device. - - Args: - device (str | :obj:`torch.device`): The name of the device. - - Returns: - :obj:`BasePoints`: A new boxes object on the - specific device. - """ - original_type = type(self) - return original_type( - self.tensor.to(device), - points_dim=self.points_dim, - attribute_dims=self.attribute_dims) - - def clone(self): - """Clone the Points. - - Returns: - :obj:`BasePoints`: Box object with the same properties - as self. - """ - original_type = type(self) - return original_type( - self.tensor.clone(), - points_dim=self.points_dim, - attribute_dims=self.attribute_dims) - - @property - def device(self): - """str: The device of the points are on.""" - return self.tensor.device - - def __iter__(self): - """Yield a point as a Tensor of shape (4,) at a time. - - Returns: - torch.Tensor: A point of shape (4,). - """ - yield from self.tensor - - def new_point(self, data): - """Create a new point object with data. - - The new point and its tensor has the similar properties - as self and self.tensor, respectively. - - Args: - data (torch.Tensor | numpy.array | list): Data to be copied. - - Returns: - :obj:`BasePoints`: A new point object with ``data``, - the object's other properties are similar to ``self``. - """ - new_tensor = self.tensor.new_tensor(data) \ - if not isinstance(data, torch.Tensor) else data.to(self.device) - original_type = type(self) - return original_type( - new_tensor, - points_dim=self.points_dim, - attribute_dims=self.attribute_dims) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/cam_points.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/cam_points.py deleted file mode 100644 index a57c3db1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/cam_points.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_points import BasePoints - - -class CameraPoints(BasePoints): - """Points of instances in CAM coordinates. - - Args: - tensor (torch.Tensor | np.ndarray | list): a N x points_dim matrix. - points_dim (int, optional): Number of the dimension of a point. - Each row is (x, y, z). Defaults to 3. - attribute_dims (dict, optional): Dictionary to indicate the - meaning of extra dimension. Defaults to None. - - Attributes: - tensor (torch.Tensor): Float matrix of N x points_dim. - points_dim (int): Integer indicating the dimension of a point. - Each row is (x, y, z, ...). - attribute_dims (bool): Dictionary to indicate the meaning of extra - dimension. Defaults to None. - rotation_axis (int): Default rotation axis for points rotation. - """ - - def __init__(self, tensor, points_dim=3, attribute_dims=None): - super(CameraPoints, self).__init__( - tensor, points_dim=points_dim, attribute_dims=attribute_dims) - self.rotation_axis = 1 - - def flip(self, bev_direction='horizontal'): - """Flip the points along given BEV direction. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). 
- """ - if bev_direction == 'horizontal': - self.tensor[:, 0] = -self.tensor[:, 0] - elif bev_direction == 'vertical': - self.tensor[:, 2] = -self.tensor[:, 2] - - @property - def bev(self): - """torch.Tensor: BEV of the points in shape (N, 2).""" - return self.tensor[:, [0, 2]] - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`CoordMode`): The target Point mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BasePoints`: The converted point of the same type - in the `dst` mode. - """ - from mmdet3d.core.bbox import Coord3DMode - return Coord3DMode.convert_point( - point=self, src=Coord3DMode.CAM, dst=dst, rt_mat=rt_mat) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/depth_points.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/depth_points.py deleted file mode 100644 index 2d9221fb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/depth_points.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_points import BasePoints - - -class DepthPoints(BasePoints): - """Points of instances in DEPTH coordinates. - - Args: - tensor (torch.Tensor | np.ndarray | list): a N x points_dim matrix. - points_dim (int, optional): Number of the dimension of a point. - Each row is (x, y, z). Defaults to 3. - attribute_dims (dict, optional): Dictionary to indicate the - meaning of extra dimension. Defaults to None. - - Attributes: - tensor (torch.Tensor): Float matrix of N x points_dim. - points_dim (int): Integer indicating the dimension of a point. - Each row is (x, y, z, ...). - attribute_dims (bool): Dictionary to indicate the meaning of extra - dimension. Defaults to None. - rotation_axis (int): Default rotation axis for points rotation. - """ - - def __init__(self, tensor, points_dim=3, attribute_dims=None): - super(DepthPoints, self).__init__( - tensor, points_dim=points_dim, attribute_dims=attribute_dims) - self.rotation_axis = 2 - - def flip(self, bev_direction='horizontal'): - """Flip the points along given BEV direction. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). - """ - if bev_direction == 'horizontal': - self.tensor[:, 0] = -self.tensor[:, 0] - elif bev_direction == 'vertical': - self.tensor[:, 1] = -self.tensor[:, 1] - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`CoordMode`): The target Point mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BasePoints`: The converted point of the same type - in the `dst` mode. 
- """ - from mmdet3d.core.bbox import Coord3DMode - return Coord3DMode.convert_point( - point=self, src=Coord3DMode.DEPTH, dst=dst, rt_mat=rt_mat) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/lidar_points.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/lidar_points.py deleted file mode 100644 index ff4f57ab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/points/lidar_points.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_points import BasePoints - - -class LiDARPoints(BasePoints): - """Points of instances in LIDAR coordinates. - - Args: - tensor (torch.Tensor | np.ndarray | list): a N x points_dim matrix. - points_dim (int, optional): Number of the dimension of a point. - Each row is (x, y, z). Defaults to 3. - attribute_dims (dict, optional): Dictionary to indicate the - meaning of extra dimension. Defaults to None. - - Attributes: - tensor (torch.Tensor): Float matrix of N x points_dim. - points_dim (int): Integer indicating the dimension of a point. - Each row is (x, y, z, ...). - attribute_dims (bool): Dictionary to indicate the meaning of extra - dimension. Defaults to None. - rotation_axis (int): Default rotation axis for points rotation. - """ - - def __init__(self, tensor, points_dim=3, attribute_dims=None): - super(LiDARPoints, self).__init__( - tensor, points_dim=points_dim, attribute_dims=attribute_dims) - self.rotation_axis = 2 - - def flip(self, bev_direction='horizontal'): - """Flip the points along given BEV direction. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). - """ - if bev_direction == 'horizontal': - self.tensor[:, 1] = -self.tensor[:, 1] - elif bev_direction == 'vertical': - self.tensor[:, 0] = -self.tensor[:, 0] - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`CoordMode`): The target Point mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BasePoints`: The converted point of the same type - in the `dst` mode. - """ - from mmdet3d.core.bbox import Coord3DMode - return Coord3DMode.convert_point( - point=self, src=Coord3DMode.LIDAR, dst=dst, rt_mat=rt_mat) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/__init__.py deleted file mode 100644 index 2fb534e0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.core.post_processing import (merge_aug_bboxes, merge_aug_masks, - merge_aug_proposals, merge_aug_scores, - multiclass_nms) -from .box3d_nms import (aligned_3d_nms, box3d_multiclass_nms, circle_nms, - nms_bev, nms_normal_bev) -from .merge_augs import merge_aug_bboxes_3d - -__all__ = [ - 'multiclass_nms', 'merge_aug_proposals', 'merge_aug_bboxes', - 'merge_aug_scores', 'merge_aug_masks', 'box3d_multiclass_nms', - 'aligned_3d_nms', 'merge_aug_bboxes_3d', 'circle_nms', 'nms_bev', - 'nms_normal_bev' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/box3d_nms.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/box3d_nms.py deleted file mode 100644 index 2d42085e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/box3d_nms.py +++ /dev/null @@ -1,288 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numba -import numpy as np -import torch -from mmcv.ops import nms, nms_rotated - - -def box3d_multiclass_nms(mlvl_bboxes, - mlvl_bboxes_for_nms, - mlvl_scores, - score_thr, - max_num, - cfg, - mlvl_dir_scores=None, - mlvl_attr_scores=None, - mlvl_bboxes2d=None): - """Multi-class NMS for 3D boxes. The IoU used for NMS is defined as the 2D - IoU between BEV boxes. - - Args: - mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M). - M is the dimensions of boxes. - mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape - (N, 5) ([x1, y1, x2, y2, ry]). N is the number of boxes. - The coordinate system of the BEV boxes is counterclockwise. - mlvl_scores (torch.Tensor): Multi-level boxes with shape - (N, C + 1). N is the number of boxes. C is the number of classes. - score_thr (float): Score threshold to filter boxes with low - confidence. - max_num (int): Maximum number of boxes will be kept. - cfg (dict): Configuration dict of NMS. - mlvl_dir_scores (torch.Tensor, optional): Multi-level scores - of direction classifier. Defaults to None. - mlvl_attr_scores (torch.Tensor, optional): Multi-level scores - of attribute classifier. Defaults to None. - mlvl_bboxes2d (torch.Tensor, optional): Multi-level 2D bounding - boxes. Defaults to None. - - Returns: - tuple[torch.Tensor]: Return results after nms, including 3D - bounding boxes, scores, labels, direction scores, attribute - scores (optional) and 2D bounding boxes (optional). 
- """ - # do multi class nms - # the fg class id range: [0, num_classes-1] - num_classes = mlvl_scores.shape[1] - 1 - bboxes = [] - scores = [] - labels = [] - dir_scores = [] - attr_scores = [] - bboxes2d = [] - for i in range(0, num_classes): - # get bboxes and scores of this class - cls_inds = mlvl_scores[:, i] > score_thr - if not cls_inds.any(): - continue - - _scores = mlvl_scores[cls_inds, i] - _bboxes_for_nms = mlvl_bboxes_for_nms[cls_inds, :] - - if cfg.use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - - selected = nms_func(_bboxes_for_nms, _scores, cfg.nms_thr) - _mlvl_bboxes = mlvl_bboxes[cls_inds, :] - bboxes.append(_mlvl_bboxes[selected]) - scores.append(_scores[selected]) - cls_label = mlvl_bboxes.new_full((len(selected), ), - i, - dtype=torch.long) - labels.append(cls_label) - - if mlvl_dir_scores is not None: - _mlvl_dir_scores = mlvl_dir_scores[cls_inds] - dir_scores.append(_mlvl_dir_scores[selected]) - if mlvl_attr_scores is not None: - _mlvl_attr_scores = mlvl_attr_scores[cls_inds] - attr_scores.append(_mlvl_attr_scores[selected]) - if mlvl_bboxes2d is not None: - _mlvl_bboxes2d = mlvl_bboxes2d[cls_inds] - bboxes2d.append(_mlvl_bboxes2d[selected]) - - if bboxes: - bboxes = torch.cat(bboxes, dim=0) - scores = torch.cat(scores, dim=0) - labels = torch.cat(labels, dim=0) - if mlvl_dir_scores is not None: - dir_scores = torch.cat(dir_scores, dim=0) - if mlvl_attr_scores is not None: - attr_scores = torch.cat(attr_scores, dim=0) - if mlvl_bboxes2d is not None: - bboxes2d = torch.cat(bboxes2d, dim=0) - if bboxes.shape[0] > max_num: - _, inds = scores.sort(descending=True) - inds = inds[:max_num] - bboxes = bboxes[inds, :] - labels = labels[inds] - scores = scores[inds] - if mlvl_dir_scores is not None: - dir_scores = dir_scores[inds] - if mlvl_attr_scores is not None: - attr_scores = attr_scores[inds] - if mlvl_bboxes2d is not None: - bboxes2d = bboxes2d[inds] - else: - bboxes = mlvl_scores.new_zeros((0, mlvl_bboxes.size(-1))) - scores = mlvl_scores.new_zeros((0, )) - labels = mlvl_scores.new_zeros((0, ), dtype=torch.long) - if mlvl_dir_scores is not None: - dir_scores = mlvl_scores.new_zeros((0, )) - if mlvl_attr_scores is not None: - attr_scores = mlvl_scores.new_zeros((0, )) - if mlvl_bboxes2d is not None: - bboxes2d = mlvl_scores.new_zeros((0, 4)) - - results = (bboxes, scores, labels) - - if mlvl_dir_scores is not None: - results = results + (dir_scores, ) - if mlvl_attr_scores is not None: - results = results + (attr_scores, ) - if mlvl_bboxes2d is not None: - results = results + (bboxes2d, ) - - return results - - -def aligned_3d_nms(boxes, scores, classes, thresh): - """3D NMS for aligned boxes. - - Args: - boxes (torch.Tensor): Aligned box with shape [n, 6]. - scores (torch.Tensor): Scores of each box. - classes (torch.Tensor): Class of each box. - thresh (float): IoU threshold for nms. - - Returns: - torch.Tensor: Indices of selected boxes. 
- """ - x1 = boxes[:, 0] - y1 = boxes[:, 1] - z1 = boxes[:, 2] - x2 = boxes[:, 3] - y2 = boxes[:, 4] - z2 = boxes[:, 5] - area = (x2 - x1) * (y2 - y1) * (z2 - z1) - zero = boxes.new_zeros(1, ) - - score_sorted = torch.argsort(scores) - pick = [] - while (score_sorted.shape[0] != 0): - last = score_sorted.shape[0] - i = score_sorted[-1] - pick.append(i) - - xx1 = torch.max(x1[i], x1[score_sorted[:last - 1]]) - yy1 = torch.max(y1[i], y1[score_sorted[:last - 1]]) - zz1 = torch.max(z1[i], z1[score_sorted[:last - 1]]) - xx2 = torch.min(x2[i], x2[score_sorted[:last - 1]]) - yy2 = torch.min(y2[i], y2[score_sorted[:last - 1]]) - zz2 = torch.min(z2[i], z2[score_sorted[:last - 1]]) - classes1 = classes[i] - classes2 = classes[score_sorted[:last - 1]] - inter_l = torch.max(zero, xx2 - xx1) - inter_w = torch.max(zero, yy2 - yy1) - inter_h = torch.max(zero, zz2 - zz1) - - inter = inter_l * inter_w * inter_h - iou = inter / (area[i] + area[score_sorted[:last - 1]] - inter) - iou = iou * (classes1 == classes2).float() - score_sorted = score_sorted[torch.nonzero( - iou <= thresh, as_tuple=False).flatten()] - - indices = boxes.new_tensor(pick, dtype=torch.long) - return indices - - -@numba.jit(nopython=True) -def circle_nms(dets, thresh, post_max_size=83): - """Circular NMS. - - An object is only counted as positive if no other center - with a higher confidence exists within a radius r using a - bird-eye view distance metric. - - Args: - dets (torch.Tensor): Detection results with the shape of [N, 3]. - thresh (float): Value of threshold. - post_max_size (int, optional): Max number of prediction to be kept. - Defaults to 83. - - Returns: - torch.Tensor: Indexes of the detections to be kept. - """ - x1 = dets[:, 0] - y1 = dets[:, 1] - scores = dets[:, 2] - order = scores.argsort()[::-1].astype(np.int32) # highest->lowest - ndets = dets.shape[0] - suppressed = np.zeros((ndets), dtype=np.int32) - keep = [] - for _i in range(ndets): - i = order[_i] # start with highest score box - if suppressed[ - i] == 1: # if any box have enough iou with this, remove it - continue - keep.append(i) - for _j in range(_i + 1, ndets): - j = order[_j] - if suppressed[j] == 1: - continue - # calculate center distance between i and j box - dist = (x1[i] - x1[j])**2 + (y1[i] - y1[j])**2 - - # ovr = inter / areas[j] - if dist <= thresh: - suppressed[j] = 1 - - if post_max_size < len(keep): - return keep[:post_max_size] - - return keep - - -# This function duplicates functionality of mmcv.ops.iou_3d.nms_bev -# from mmcv<=1.5, but using cuda ops from mmcv.ops.nms.nms_rotated. -# Nms api will be unified in mmdetection3d one day. -def nms_bev(boxes, scores, thresh, pre_max_size=None, post_max_size=None): - """NMS function GPU implementation (for BEV boxes). The overlap of two - boxes for IoU calculation is defined as the exact overlapping area of the - two boxes. In this function, one can also set ``pre_max_size`` and - ``post_max_size``. - - Args: - boxes (torch.Tensor): Input boxes with the shape of [N, 5] - ([x1, y1, x2, y2, ry]). - scores (torch.Tensor): Scores of boxes with the shape of [N]. - thresh (float): Overlap threshold of NMS. - pre_max_size (int, optional): Max size of boxes before NMS. - Default: None. - post_max_size (int, optional): Max size of boxes after NMS. - Default: None. - - Returns: - torch.Tensor: Indexes after NMS. 
- """ - assert boxes.size(1) == 5, 'Input boxes shape should be [N, 5]' - order = scores.sort(0, descending=True)[1] - if pre_max_size is not None: - order = order[:pre_max_size] - boxes = boxes[order].contiguous() - scores = scores[order] - - # xyxyr -> back to xywhr - # note: better skip this step before nms_bev call in the future - boxes = torch.stack( - ((boxes[:, 0] + boxes[:, 2]) / 2, (boxes[:, 1] + boxes[:, 3]) / 2, - boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1], boxes[:, 4]), - dim=-1) - - keep = nms_rotated(boxes, scores, thresh)[1] - keep = order[keep] - if post_max_size is not None: - keep = keep[:post_max_size] - return keep - - -# This function duplicates functionality of mmcv.ops.iou_3d.nms_normal_bev -# from mmcv<=1.5, but using cuda ops from mmcv.ops.nms.nms. -# Nms api will be unified in mmdetection3d one day. -def nms_normal_bev(boxes, scores, thresh): - """Normal NMS function GPU implementation (for BEV boxes). The overlap of - two boxes for IoU calculation is defined as the exact overlapping area of - the two boxes WITH their yaw angle set to 0. - - Args: - boxes (torch.Tensor): Input boxes with shape (N, 5). - scores (torch.Tensor): Scores of predicted boxes with shape (N). - thresh (float): Overlap threshold of NMS. - - Returns: - torch.Tensor: Remaining indices with scores in descending order. - """ - assert boxes.shape[1] == 5, 'Input boxes shape should be [N, 5]' - return nms(boxes[:, :-1], scores, thresh)[1] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/merge_augs.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/merge_augs.py deleted file mode 100644 index 0e20dcd5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/post_processing/merge_augs.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.core.post_processing import nms_bev, nms_normal_bev -from ..bbox import bbox3d2result, bbox3d_mapping_back, xywhr2xyxyr - - -def merge_aug_bboxes_3d(aug_results, img_metas, test_cfg): - """Merge augmented detection 3D bboxes and scores. - - Args: - aug_results (list[dict]): The dict of detection results. - The dict contains the following keys - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox. - - scores_3d (torch.Tensor): Detection scores. - - labels_3d (torch.Tensor): Predicted box labels. - img_metas (list[dict]): Meta information of each sample. - test_cfg (dict): Test config. - - Returns: - dict: Bounding boxes results in cpu mode, containing merged results. - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Merged detection bbox. - - scores_3d (torch.Tensor): Merged detection scores. - - labels_3d (torch.Tensor): Merged predicted box labels. 
- """ - - assert len(aug_results) == len(img_metas), \ - '"aug_results" should have the same length as "img_metas", got len(' \ - f'aug_results)={len(aug_results)} and len(img_metas)={len(img_metas)}' - - recovered_bboxes = [] - recovered_scores = [] - recovered_labels = [] - - for bboxes, img_info in zip(aug_results, img_metas): - scale_factor = img_info[0]['pcd_scale_factor'] - pcd_horizontal_flip = img_info[0]['pcd_horizontal_flip'] - pcd_vertical_flip = img_info[0]['pcd_vertical_flip'] - recovered_scores.append(bboxes['scores_3d']) - recovered_labels.append(bboxes['labels_3d']) - bboxes = bbox3d_mapping_back(bboxes['boxes_3d'], scale_factor, - pcd_horizontal_flip, pcd_vertical_flip) - recovered_bboxes.append(bboxes) - - aug_bboxes = recovered_bboxes[0].cat(recovered_bboxes) - aug_bboxes_for_nms = xywhr2xyxyr(aug_bboxes.bev) - aug_scores = torch.cat(recovered_scores, dim=0) - aug_labels = torch.cat(recovered_labels, dim=0) - - # TODO: use a more elegent way to deal with nms - if test_cfg.use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - - merged_bboxes = [] - merged_scores = [] - merged_labels = [] - - # Apply multi-class nms when merge bboxes - if len(aug_labels) == 0: - return bbox3d2result(aug_bboxes, aug_scores, aug_labels) - - for class_id in range(torch.max(aug_labels).item() + 1): - class_inds = (aug_labels == class_id) - bboxes_i = aug_bboxes[class_inds] - bboxes_nms_i = aug_bboxes_for_nms[class_inds, :] - scores_i = aug_scores[class_inds] - labels_i = aug_labels[class_inds] - if len(bboxes_nms_i) == 0: - continue - selected = nms_func(bboxes_nms_i, scores_i, test_cfg.nms_thr) - - merged_bboxes.append(bboxes_i[selected, :]) - merged_scores.append(scores_i[selected]) - merged_labels.append(labels_i[selected]) - - merged_bboxes = merged_bboxes[0].cat(merged_bboxes) - merged_scores = torch.cat(merged_scores, dim=0) - merged_labels = torch.cat(merged_labels, dim=0) - - _, order = merged_scores.sort(0, descending=True) - num = min(test_cfg.max_num, len(aug_bboxes)) - order = order[:num] - - merged_bboxes = merged_bboxes[order] - merged_scores = merged_scores[order] - merged_labels = merged_labels[order] - - return bbox3d2result(merged_bboxes, merged_scores, merged_labels) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/__init__.py deleted file mode 100644 index b2a8deca..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .array_converter import ArrayConverter, array_converter -from .gaussian import (draw_heatmap_gaussian, ellip_gaussian2D, gaussian_2d, - gaussian_radius, get_ellip_gaussian_2D) - -__all__ = [ - 'gaussian_2d', 'gaussian_radius', 'draw_heatmap_gaussian', - 'ArrayConverter', 'array_converter', 'ellip_gaussian2D', - 'get_ellip_gaussian_2D' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/array_converter.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/array_converter.py deleted file mode 100644 index a555aa60..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/array_converter.py +++ /dev/null @@ -1,324 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools -from inspect import getfullargspec - -import numpy as np -import torch - - -def array_converter(to_torch=True, - apply_to=tuple(), - template_arg_name_=None, - recover=True): - """Wrapper function for data-type agnostic processing. 
- - First converts input arrays to PyTorch tensors or NumPy ndarrays - for middle calculation, then convert output to original data-type if - `recover=True`. - - Args: - to_torch (Bool, optional): Whether convert to PyTorch tensors - for middle calculation. Defaults to True. - apply_to (tuple[str], optional): The arguments to which we apply - data-type conversion. Defaults to an empty tuple. - template_arg_name_ (str, optional): Argument serving as the template ( - return arrays should have the same dtype and device - as the template). Defaults to None. If None, we will use the - first argument in `apply_to` as the template argument. - recover (Bool, optional): Whether or not recover the wrapped function - outputs to the `template_arg_name_` type. Defaults to True. - - Raises: - ValueError: When template_arg_name_ is not among all args, or - when apply_to contains an arg which is not among all args, - a ValueError will be raised. When the template argument or - an argument to convert is a list or tuple, and cannot be - converted to a NumPy array, a ValueError will be raised. - TypeError: When the type of the template argument or - an argument to convert does not belong to the above range, - or the contents of such an list-or-tuple-type argument - do not share the same data type, a TypeError is raised. - - Returns: - (function): wrapped function. - - Example: - >>> import torch - >>> import numpy as np - >>> - >>> # Use torch addition for a + b, - >>> # and convert return values to the type of a - >>> @array_converter(apply_to=('a', 'b')) - >>> def simple_add(a, b): - >>> return a + b - >>> - >>> a = np.array([1.1]) - >>> b = np.array([2.2]) - >>> simple_add(a, b) - >>> - >>> # Use numpy addition for a + b, - >>> # and convert return values to the type of b - >>> @array_converter(to_torch=False, apply_to=('a', 'b'), - >>> template_arg_name_='b') - >>> def simple_add(a, b): - >>> return a + b - >>> - >>> simple_add() - >>> - >>> # Use torch funcs for floor(a) if flag=True else ceil(a), - >>> # and return the torch tensor - >>> @array_converter(apply_to=('a',), recover=False) - >>> def floor_or_ceil(a, flag=True): - >>> return torch.floor(a) if flag else torch.ceil(a) - >>> - >>> floor_or_ceil(a, flag=False) - """ - - def array_converter_wrapper(func): - """Outer wrapper for the function.""" - - @functools.wraps(func) - def new_func(*args, **kwargs): - """Inner wrapper for the arguments.""" - if len(apply_to) == 0: - return func(*args, **kwargs) - - func_name = func.__name__ - - arg_spec = getfullargspec(func) - - arg_names = arg_spec.args - arg_num = len(arg_names) - default_arg_values = arg_spec.defaults - if default_arg_values is None: - default_arg_values = [] - no_default_arg_num = len(arg_names) - len(default_arg_values) - - kwonly_arg_names = arg_spec.kwonlyargs - kwonly_default_arg_values = arg_spec.kwonlydefaults - if kwonly_default_arg_values is None: - kwonly_default_arg_values = {} - - all_arg_names = arg_names + kwonly_arg_names - - # in case there are args in the form of *args - if len(args) > arg_num: - named_args = args[:arg_num] - nameless_args = args[arg_num:] - else: - named_args = args - nameless_args = [] - - # template argument data type is used for all array-like arguments - if template_arg_name_ is None: - template_arg_name = apply_to[0] - else: - template_arg_name = template_arg_name_ - - if template_arg_name not in all_arg_names: - raise ValueError(f'{template_arg_name} is not among the ' - f'argument list of function {func_name}') - - # inspect apply_to - for 
arg_to_apply in apply_to: - if arg_to_apply not in all_arg_names: - raise ValueError(f'{arg_to_apply} is not ' - f'an argument of {func_name}') - - new_args = [] - new_kwargs = {} - - converter = ArrayConverter() - target_type = torch.Tensor if to_torch else np.ndarray - - # non-keyword arguments - for i, arg_value in enumerate(named_args): - if arg_names[i] in apply_to: - new_args.append( - converter.convert( - input_array=arg_value, target_type=target_type)) - else: - new_args.append(arg_value) - - if arg_names[i] == template_arg_name: - template_arg_value = arg_value - - kwonly_default_arg_values.update(kwargs) - kwargs = kwonly_default_arg_values - - # keyword arguments and non-keyword arguments using default value - for i in range(len(named_args), len(all_arg_names)): - arg_name = all_arg_names[i] - if arg_name in kwargs: - if arg_name in apply_to: - new_kwargs[arg_name] = converter.convert( - input_array=kwargs[arg_name], - target_type=target_type) - else: - new_kwargs[arg_name] = kwargs[arg_name] - else: - default_value = default_arg_values[i - no_default_arg_num] - if arg_name in apply_to: - new_kwargs[arg_name] = converter.convert( - input_array=default_value, target_type=target_type) - else: - new_kwargs[arg_name] = default_value - if arg_name == template_arg_name: - template_arg_value = kwargs[arg_name] - - # add nameless args provided by *args (if exists) - new_args += nameless_args - - return_values = func(*new_args, **new_kwargs) - converter.set_template(template_arg_value) - - def recursive_recover(input_data): - if isinstance(input_data, (tuple, list)): - new_data = [] - for item in input_data: - new_data.append(recursive_recover(item)) - return tuple(new_data) if isinstance(input_data, - tuple) else new_data - elif isinstance(input_data, dict): - new_data = {} - for k, v in input_data.items(): - new_data[k] = recursive_recover(v) - return new_data - elif isinstance(input_data, (torch.Tensor, np.ndarray)): - return converter.recover(input_data) - else: - return input_data - - if recover: - return recursive_recover(return_values) - else: - return return_values - - return new_func - - return array_converter_wrapper - - -class ArrayConverter: - - SUPPORTED_NON_ARRAY_TYPES = (int, float, np.int8, np.int16, np.int32, - np.int64, np.uint8, np.uint16, np.uint32, - np.uint64, np.float16, np.float32, np.float64) - - def __init__(self, template_array=None): - if template_array is not None: - self.set_template(template_array) - - def set_template(self, array): - """Set template array. - - Args: - array (tuple | list | int | float | np.ndarray | torch.Tensor): - Template array. - - Raises: - ValueError: If input is list or tuple and cannot be converted to - to a NumPy array, a ValueError is raised. - TypeError: If input type does not belong to the above range, - or the contents of a list or tuple do not share the - same data type, a TypeError is raised. 
- """ - self.array_type = type(array) - self.is_num = False - self.device = 'cpu' - - if isinstance(array, np.ndarray): - self.dtype = array.dtype - elif isinstance(array, torch.Tensor): - self.dtype = array.dtype - self.device = array.device - elif isinstance(array, (list, tuple)): - try: - array = np.array(array) - if array.dtype not in self.SUPPORTED_NON_ARRAY_TYPES: - raise TypeError - self.dtype = array.dtype - except (ValueError, TypeError): - print(f'The following list cannot be converted to' - f' a numpy array of supported dtype:\n{array}') - raise - elif isinstance(array, self.SUPPORTED_NON_ARRAY_TYPES): - self.array_type = np.ndarray - self.is_num = True - self.dtype = np.dtype(type(array)) - else: - raise TypeError(f'Template type {self.array_type}' - f' is not supported.') - - def convert(self, input_array, target_type=None, target_array=None): - """Convert input array to target data type. - - Args: - input_array (tuple | list | np.ndarray | - torch.Tensor | int | float ): - Input array. Defaults to None. - target_type ( | , - optional): - Type to which input array is converted. Defaults to None. - target_array (np.ndarray | torch.Tensor, optional): - Template array to which input array is converted. - Defaults to None. - - Raises: - ValueError: If input is list or tuple and cannot be converted to - to a NumPy array, a ValueError is raised. - TypeError: If input type does not belong to the above range, - or the contents of a list or tuple do not share the - same data type, a TypeError is raised. - """ - if isinstance(input_array, (list, tuple)): - try: - input_array = np.array(input_array) - if input_array.dtype not in self.SUPPORTED_NON_ARRAY_TYPES: - raise TypeError - except (ValueError, TypeError): - print(f'The input cannot be converted to' - f' a single-type numpy array:\n{input_array}') - raise - elif isinstance(input_array, self.SUPPORTED_NON_ARRAY_TYPES): - input_array = np.array(input_array) - array_type = type(input_array) - assert target_type is not None or target_array is not None, \ - 'must specify a target' - if target_type is not None: - assert target_type in (np.ndarray, torch.Tensor), \ - 'invalid target type' - if target_type == array_type: - return input_array - elif target_type == np.ndarray: - # default dtype is float32 - converted_array = input_array.cpu().numpy().astype(np.float32) - else: - # default dtype is float32, device is 'cpu' - converted_array = torch.tensor( - input_array, dtype=torch.float32) - else: - assert isinstance(target_array, (np.ndarray, torch.Tensor)), \ - 'invalid target array type' - if isinstance(target_array, array_type): - return input_array - elif isinstance(target_array, np.ndarray): - converted_array = input_array.cpu().numpy().astype( - target_array.dtype) - else: - converted_array = target_array.new_tensor(input_array) - return converted_array - - def recover(self, input_array): - assert isinstance(input_array, (np.ndarray, torch.Tensor)), \ - 'invalid input array type' - if isinstance(input_array, self.array_type): - return input_array - elif isinstance(input_array, torch.Tensor): - converted_array = input_array.cpu().numpy().astype(self.dtype) - else: - converted_array = torch.tensor( - input_array, dtype=self.dtype, device=self.device) - if self.is_num: - converted_array = converted_array.item() - return converted_array diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/gaussian.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/gaussian.py deleted file mode 100644 index 66ccbd9e..00000000 --- 
a/cv/3d_detection/paconv/pytorch/mmdet3d/core/utils/gaussian.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - - -def gaussian_2d(shape, sigma=1): - """Generate gaussian map. - - Args: - shape (list[int]): Shape of the map. - sigma (float, optional): Sigma to generate gaussian map. - Defaults to 1. - - Returns: - np.ndarray: Generated gaussian map. - """ - m, n = [(ss - 1.) / 2. for ss in shape] - y, x = np.ogrid[-m:m + 1, -n:n + 1] - - h = np.exp(-(x * x + y * y) / (2 * sigma * sigma)) - h[h < np.finfo(h.dtype).eps * h.max()] = 0 - return h - - -def draw_heatmap_gaussian(heatmap, center, radius, k=1): - """Get gaussian masked heatmap. - - Args: - heatmap (torch.Tensor): Heatmap to be masked. - center (torch.Tensor): Center coord of the heatmap. - radius (int): Radius of gaussian. - K (int, optional): Multiple of masked_gaussian. Defaults to 1. - - Returns: - torch.Tensor: Masked heatmap. - """ - diameter = 2 * radius + 1 - gaussian = gaussian_2d((diameter, diameter), sigma=diameter / 6) - - x, y = int(center[0]), int(center[1]) - - height, width = heatmap.shape[0:2] - - left, right = min(x, radius), min(width - x, radius + 1) - top, bottom = min(y, radius), min(height - y, radius + 1) - - masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right] - masked_gaussian = torch.from_numpy( - gaussian[radius - top:radius + bottom, - radius - left:radius + right]).to(heatmap.device, - torch.float32) - if min(masked_gaussian.shape) > 0 and min(masked_heatmap.shape) > 0: - torch.max(masked_heatmap, masked_gaussian * k, out=masked_heatmap) - return heatmap - - -def gaussian_radius(det_size, min_overlap=0.5): - """Get radius of gaussian. - - Args: - det_size (tuple[torch.Tensor]): Size of the detection result. - min_overlap (float, optional): Gaussian_overlap. Defaults to 0.5. - - Returns: - torch.Tensor: Computed radius. - """ - height, width = det_size - - a1 = 1 - b1 = (height + width) - c1 = width * height * (1 - min_overlap) / (1 + min_overlap) - sq1 = torch.sqrt(b1**2 - 4 * a1 * c1) - r1 = (b1 + sq1) / 2 - - a2 = 4 - b2 = 2 * (height + width) - c2 = (1 - min_overlap) * width * height - sq2 = torch.sqrt(b2**2 - 4 * a2 * c2) - r2 = (b2 + sq2) / 2 - - a3 = 4 * min_overlap - b3 = -2 * min_overlap * (height + width) - c3 = (min_overlap - 1) * width * height - sq3 = torch.sqrt(b3**2 - 4 * a3 * c3) - r3 = (b3 + sq3) / 2 - return min(r1, r2, r3) - - -def get_ellip_gaussian_2D(heatmap, center, radius_x, radius_y, k=1): - """Generate 2D ellipse gaussian heatmap. - - Args: - heatmap (Tensor): Input heatmap, the gaussian kernel will cover on - it and maintain the max value. - center (list[int]): Coord of gaussian kernel's center. - radius_x (int): X-axis radius of gaussian kernel. - radius_y (int): Y-axis radius of gaussian kernel. - k (int, optional): Coefficient of gaussian kernel. Default: 1. - - Returns: - out_heatmap (Tensor): Updated heatmap covered by gaussian kernel. 
- """ - diameter_x, diameter_y = 2 * radius_x + 1, 2 * radius_y + 1 - gaussian_kernel = ellip_gaussian2D((radius_x, radius_y), - sigma_x=diameter_x / 6, - sigma_y=diameter_y / 6, - dtype=heatmap.dtype, - device=heatmap.device) - - x, y = int(center[0]), int(center[1]) - height, width = heatmap.shape[0:2] - - left, right = min(x, radius_x), min(width - x, radius_x + 1) - top, bottom = min(y, radius_y), min(height - y, radius_y + 1) - - masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right] - masked_gaussian = gaussian_kernel[radius_y - top:radius_y + bottom, - radius_x - left:radius_x + right] - out_heatmap = heatmap - torch.max( - masked_heatmap, - masked_gaussian * k, - out=out_heatmap[y - top:y + bottom, x - left:x + right]) - - return out_heatmap - - -def ellip_gaussian2D(radius, - sigma_x, - sigma_y, - dtype=torch.float32, - device='cpu'): - """Generate 2D ellipse gaussian kernel. - - Args: - radius (tuple(int)): Ellipse radius (radius_x, radius_y) of gaussian - kernel. - sigma_x (int): X-axis sigma of gaussian function. - sigma_y (int): Y-axis sigma of gaussian function. - dtype (torch.dtype, optional): Dtype of gaussian tensor. - Default: torch.float32. - device (str, optional): Device of gaussian tensor. - Default: 'cpu'. - - Returns: - h (Tensor): Gaussian kernel with a - ``(2 * radius_y + 1) * (2 * radius_x + 1)`` shape. - """ - x = torch.arange( - -radius[0], radius[0] + 1, dtype=dtype, device=device).view(1, -1) - y = torch.arange( - -radius[1], radius[1] + 1, dtype=dtype, device=device).view(-1, 1) - - h = (-(x * x) / (2 * sigma_x * sigma_x) - (y * y) / - (2 * sigma_y * sigma_y)).exp() - h[h < torch.finfo(h.dtype).eps * h.max()] = 0 - - return h diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/__init__.py deleted file mode 100644 index bbf1e60f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .show_result import (show_multi_modality_result, show_result, - show_seg_result) - -__all__ = ['show_result', 'show_seg_result', 'show_multi_modality_result'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/image_vis.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/image_vis.py deleted file mode 100644 index 7ac765c2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/image_vis.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import cv2 -import numpy as np -import torch -from matplotlib import pyplot as plt - - -def project_pts_on_img(points, - raw_img, - lidar2img_rt, - max_distance=70, - thickness=-1): - """Project the 3D points cloud on 2D image. - - Args: - points (numpy.array): 3D points cloud (x, y, z) to visualize. - raw_img (numpy.array): The numpy array of image. - lidar2img_rt (numpy.array, shape=[4, 4]): The projection matrix - according to the camera intrinsic parameters. - max_distance (float, optional): the max distance of the points cloud. - Default: 70. - thickness (int, optional): The thickness of 2D points. Default: -1. 
- """ - img = raw_img.copy() - num_points = points.shape[0] - pts_4d = np.concatenate([points[:, :3], np.ones((num_points, 1))], axis=-1) - pts_2d = pts_4d @ lidar2img_rt.T - - # cam_points is Tensor of Nx4 whose last column is 1 - # transform camera coordinate to image coordinate - pts_2d[:, 2] = np.clip(pts_2d[:, 2], a_min=1e-5, a_max=99999) - pts_2d[:, 0] /= pts_2d[:, 2] - pts_2d[:, 1] /= pts_2d[:, 2] - - fov_inds = ((pts_2d[:, 0] < img.shape[1]) - & (pts_2d[:, 0] >= 0) - & (pts_2d[:, 1] < img.shape[0]) - & (pts_2d[:, 1] >= 0)) - - imgfov_pts_2d = pts_2d[fov_inds, :3] # u, v, d - - cmap = plt.cm.get_cmap('hsv', 256) - cmap = np.array([cmap(i) for i in range(256)])[:, :3] * 255 - for i in range(imgfov_pts_2d.shape[0]): - depth = imgfov_pts_2d[i, 2] - color = cmap[np.clip(int(max_distance * 10 / depth), 0, 255), :] - cv2.circle( - img, - center=(int(np.round(imgfov_pts_2d[i, 0])), - int(np.round(imgfov_pts_2d[i, 1]))), - radius=1, - color=tuple(color), - thickness=thickness, - ) - cv2.imshow('project_pts_img', img.astype(np.uint8)) - cv2.waitKey(100) - - -def plot_rect3d_on_img(img, - num_rects, - rect_corners, - color=(0, 255, 0), - thickness=1): - """Plot the boundary lines of 3D rectangular on 2D images. - - Args: - img (numpy.array): The numpy array of image. - num_rects (int): Number of 3D rectangulars. - rect_corners (numpy.array): Coordinates of the corners of 3D - rectangulars. Should be in the shape of [num_rect, 8, 2]. - color (tuple[int], optional): The color to draw bboxes. - Default: (0, 255, 0). - thickness (int, optional): The thickness of bboxes. Default: 1. - """ - line_indices = ((0, 1), (0, 3), (0, 4), (1, 2), (1, 5), (3, 2), (3, 7), - (4, 5), (4, 7), (2, 6), (5, 6), (6, 7)) - for i in range(num_rects): - corners = rect_corners[i].astype(np.int) - for start, end in line_indices: - cv2.line(img, (corners[start, 0], corners[start, 1]), - (corners[end, 0], corners[end, 1]), color, thickness, - cv2.LINE_AA) - - return img.astype(np.uint8) - - -def draw_lidar_bbox3d_on_img(bboxes3d, - raw_img, - lidar2img_rt, - img_metas, - color=(0, 255, 0), - thickness=1): - """Project the 3D bbox on 2D plane and draw on input image. - - Args: - bboxes3d (:obj:`LiDARInstance3DBoxes`): - 3d bbox in lidar coordinate system to visualize. - raw_img (numpy.array): The numpy array of image. - lidar2img_rt (numpy.array, shape=[4, 4]): The projection matrix - according to the camera intrinsic parameters. - img_metas (dict): Useless here. - color (tuple[int], optional): The color to draw bboxes. - Default: (0, 255, 0). - thickness (int, optional): The thickness of bboxes. Default: 1. - """ - img = raw_img.copy() - corners_3d = bboxes3d.corners - num_bbox = corners_3d.shape[0] - pts_4d = np.concatenate( - [corners_3d.reshape(-1, 3), - np.ones((num_bbox * 8, 1))], axis=-1) - lidar2img_rt = copy.deepcopy(lidar2img_rt).reshape(4, 4) - if isinstance(lidar2img_rt, torch.Tensor): - lidar2img_rt = lidar2img_rt.cpu().numpy() - pts_2d = pts_4d @ lidar2img_rt.T - - pts_2d[:, 2] = np.clip(pts_2d[:, 2], a_min=1e-5, a_max=1e5) - pts_2d[:, 0] /= pts_2d[:, 2] - pts_2d[:, 1] /= pts_2d[:, 2] - imgfov_pts_2d = pts_2d[..., :2].reshape(num_bbox, 8, 2) - - return plot_rect3d_on_img(img, num_bbox, imgfov_pts_2d, color, thickness) - - -# TODO: remove third parameter in all functions here in favour of img_metas -def draw_depth_bbox3d_on_img(bboxes3d, - raw_img, - calibs, - img_metas, - color=(0, 255, 0), - thickness=1): - """Project the 3D bbox on 2D plane and draw on input image. 
- - Args: - bboxes3d (:obj:`DepthInstance3DBoxes`, shape=[M, 7]): - 3d bbox in depth coordinate system to visualize. - raw_img (numpy.array): The numpy array of image. - calibs (dict): Camera calibration information, Rt and K. - img_metas (dict): Used in coordinates transformation. - color (tuple[int], optional): The color to draw bboxes. - Default: (0, 255, 0). - thickness (int, optional): The thickness of bboxes. Default: 1. - """ - from mmdet3d.core.bbox import points_cam2img - from mmdet3d.models import apply_3d_transformation - - img = raw_img.copy() - img_metas = copy.deepcopy(img_metas) - corners_3d = bboxes3d.corners - num_bbox = corners_3d.shape[0] - points_3d = corners_3d.reshape(-1, 3) - - # first reverse the data transformations - xyz_depth = apply_3d_transformation( - points_3d, 'DEPTH', img_metas, reverse=True) - - # project to 2d to get image coords (uv) - uv_origin = points_cam2img(xyz_depth, - xyz_depth.new_tensor(img_metas['depth2img'])) - uv_origin = (uv_origin - 1).round() - imgfov_pts_2d = uv_origin[..., :2].reshape(num_bbox, 8, 2).numpy() - - return plot_rect3d_on_img(img, num_bbox, imgfov_pts_2d, color, thickness) - - -def draw_camera_bbox3d_on_img(bboxes3d, - raw_img, - cam2img, - img_metas, - color=(0, 255, 0), - thickness=1): - """Project the 3D bbox on 2D plane and draw on input image. - - Args: - bboxes3d (:obj:`CameraInstance3DBoxes`, shape=[M, 7]): - 3d bbox in camera coordinate system to visualize. - raw_img (numpy.array): The numpy array of image. - cam2img (dict): Camera intrinsic matrix, - denoted as `K` in depth bbox coordinate system. - img_metas (dict): Useless here. - color (tuple[int], optional): The color to draw bboxes. - Default: (0, 255, 0). - thickness (int, optional): The thickness of bboxes. Default: 1. - """ - from mmdet3d.core.bbox import points_cam2img - - img = raw_img.copy() - cam2img = copy.deepcopy(cam2img) - corners_3d = bboxes3d.corners - num_bbox = corners_3d.shape[0] - points_3d = corners_3d.reshape(-1, 3) - if not isinstance(cam2img, torch.Tensor): - cam2img = torch.from_numpy(np.array(cam2img)) - - assert (cam2img.shape == torch.Size([3, 3]) - or cam2img.shape == torch.Size([4, 4])) - cam2img = cam2img.float().cpu() - - # project to 2d to get image coords (uv) - uv_origin = points_cam2img(points_3d, cam2img) - uv_origin = (uv_origin - 1).round() - imgfov_pts_2d = uv_origin[..., :2].reshape(num_bbox, 8, 2).numpy() - - return plot_rect3d_on_img(img, num_bbox, imgfov_pts_2d, color, thickness) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/open3d_vis.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/open3d_vis.py deleted file mode 100644 index c63b6eca..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/open3d_vis.py +++ /dev/null @@ -1,460 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import numpy as np -import torch - -try: - import open3d as o3d - from open3d import geometry -except ImportError: - raise ImportError( - 'Please run "pip install open3d" to install open3d first.') - - -def _draw_points(points, - vis, - points_size=2, - point_color=(0.5, 0.5, 0.5), - mode='xyz'): - """Draw points on visualizer. - - Args: - points (numpy.array | torch.tensor, shape=[N, 3+C]): - points to visualize. - vis (:obj:`open3d.visualization.Visualizer`): open3d visualizer. - points_size (int, optional): the size of points to show on visualizer. - Default: 2. - point_color (tuple[float], optional): the color of points. - Default: (0.5, 0.5, 0.5). 
- mode (str, optional): indicate type of the input points, - available mode ['xyz', 'xyzrgb']. Default: 'xyz'. - - Returns: - tuple: points, color of each point. - """ - vis.get_render_option().point_size = points_size # set points size - if isinstance(points, torch.Tensor): - points = points.cpu().numpy() - - points = points.copy() - pcd = geometry.PointCloud() - if mode == 'xyz': - pcd.points = o3d.utility.Vector3dVector(points[:, :3]) - points_colors = np.tile(np.array(point_color), (points.shape[0], 1)) - elif mode == 'xyzrgb': - pcd.points = o3d.utility.Vector3dVector(points[:, :3]) - points_colors = points[:, 3:6] - # normalize to [0, 1] for open3d drawing - if not ((points_colors >= 0.0) & (points_colors <= 1.0)).all(): - points_colors /= 255.0 - else: - raise NotImplementedError - - pcd.colors = o3d.utility.Vector3dVector(points_colors) - vis.add_geometry(pcd) - - return pcd, points_colors - - -def _draw_bboxes(bbox3d, - vis, - points_colors, - pcd=None, - bbox_color=(0, 1, 0), - points_in_box_color=(1, 0, 0), - rot_axis=2, - center_mode='lidar_bottom', - mode='xyz'): - """Draw bbox on visualizer and change the color of points inside bbox3d. - - Args: - bbox3d (numpy.array | torch.tensor, shape=[M, 7]): - 3d bbox (x, y, z, x_size, y_size, z_size, yaw) to visualize. - vis (:obj:`open3d.visualization.Visualizer`): open3d visualizer. - points_colors (numpy.array): color of each points. - pcd (:obj:`open3d.geometry.PointCloud`, optional): point cloud. - Default: None. - bbox_color (tuple[float], optional): the color of bbox. - Default: (0, 1, 0). - points_in_box_color (tuple[float], optional): - the color of points inside bbox3d. Default: (1, 0, 0). - rot_axis (int, optional): rotation axis of bbox. Default: 2. - center_mode (bool, optional): indicate the center of bbox is - bottom center or gravity center. available mode - ['lidar_bottom', 'camera_bottom']. Default: 'lidar_bottom'. - mode (str, optional): indicate type of the input points, - available mode ['xyz', 'xyzrgb']. Default: 'xyz'. - """ - if isinstance(bbox3d, torch.Tensor): - bbox3d = bbox3d.cpu().numpy() - bbox3d = bbox3d.copy() - - in_box_color = np.array(points_in_box_color) - for i in range(len(bbox3d)): - center = bbox3d[i, 0:3] - dim = bbox3d[i, 3:6] - yaw = np.zeros(3) - yaw[rot_axis] = bbox3d[i, 6] - rot_mat = geometry.get_rotation_matrix_from_xyz(yaw) - - if center_mode == 'lidar_bottom': - center[rot_axis] += dim[ - rot_axis] / 2 # bottom center to gravity center - elif center_mode == 'camera_bottom': - center[rot_axis] -= dim[ - rot_axis] / 2 # bottom center to gravity center - box3d = geometry.OrientedBoundingBox(center, rot_mat, dim) - - line_set = geometry.LineSet.create_from_oriented_bounding_box(box3d) - line_set.paint_uniform_color(bbox_color) - # draw bboxes on visualizer - vis.add_geometry(line_set) - - # change the color of points which are in box - if pcd is not None and mode == 'xyz': - indices = box3d.get_point_indices_within_bounding_box(pcd.points) - points_colors[indices] = in_box_color - - # update points colors - if pcd is not None: - pcd.colors = o3d.utility.Vector3dVector(points_colors) - vis.update_geometry(pcd) - - -def show_pts_boxes(points, - bbox3d=None, - show=True, - save_path=None, - points_size=2, - point_color=(0.5, 0.5, 0.5), - bbox_color=(0, 1, 0), - points_in_box_color=(1, 0, 0), - rot_axis=2, - center_mode='lidar_bottom', - mode='xyz'): - """Draw bbox and points on visualizer. - - Args: - points (numpy.array | torch.tensor, shape=[N, 3+C]): - points to visualize. 
- bbox3d (numpy.array | torch.tensor, shape=[M, 7], optional): - 3D bbox (x, y, z, x_size, y_size, z_size, yaw) to visualize. - Defaults to None. - show (bool, optional): whether to show the visualization results. - Default: True. - save_path (str, optional): path to save visualized results. - Default: None. - points_size (int, optional): the size of points to show on visualizer. - Default: 2. - point_color (tuple[float], optional): the color of points. - Default: (0.5, 0.5, 0.5). - bbox_color (tuple[float], optional): the color of bbox. - Default: (0, 1, 0). - points_in_box_color (tuple[float], optional): - the color of points which are in bbox3d. Default: (1, 0, 0). - rot_axis (int, optional): rotation axis of bbox. Default: 2. - center_mode (bool, optional): indicate the center of bbox is bottom - center or gravity center. available mode - ['lidar_bottom', 'camera_bottom']. Default: 'lidar_bottom'. - mode (str, optional): indicate type of the input points, available - mode ['xyz', 'xyzrgb']. Default: 'xyz'. - """ - # TODO: support score and class info - assert 0 <= rot_axis <= 2 - - # init visualizer - vis = o3d.visualization.Visualizer() - vis.create_window() - mesh_frame = geometry.TriangleMesh.create_coordinate_frame( - size=1, origin=[0, 0, 0]) # create coordinate frame - vis.add_geometry(mesh_frame) - - # draw points - pcd, points_colors = _draw_points(points, vis, points_size, point_color, - mode) - - # draw boxes - if bbox3d is not None: - _draw_bboxes(bbox3d, vis, points_colors, pcd, bbox_color, - points_in_box_color, rot_axis, center_mode, mode) - - if show: - vis.run() - - if save_path is not None: - vis.capture_screen_image(save_path) - - vis.destroy_window() - - -def _draw_bboxes_ind(bbox3d, - vis, - indices, - points_colors, - pcd=None, - bbox_color=(0, 1, 0), - points_in_box_color=(1, 0, 0), - rot_axis=2, - center_mode='lidar_bottom', - mode='xyz'): - """Draw bbox on visualizer and change the color or points inside bbox3d - with indices. - - Args: - bbox3d (numpy.array | torch.tensor, shape=[M, 7]): - 3d bbox (x, y, z, x_size, y_size, z_size, yaw) to visualize. - vis (:obj:`open3d.visualization.Visualizer`): open3d visualizer. - indices (numpy.array | torch.tensor, shape=[N, M]): - indicate which bbox3d that each point lies in. - points_colors (numpy.array): color of each points. - pcd (:obj:`open3d.geometry.PointCloud`, optional): point cloud. - Default: None. - bbox_color (tuple[float], optional): the color of bbox. - Default: (0, 1, 0). - points_in_box_color (tuple[float], optional): - the color of points which are in bbox3d. Default: (1, 0, 0). - rot_axis (int, optional): rotation axis of bbox. Default: 2. - center_mode (bool, optional): indicate the center of bbox is - bottom center or gravity center. available mode - ['lidar_bottom', 'camera_bottom']. Default: 'lidar_bottom'. - mode (str, optional): indicate type of the input points, - available mode ['xyz', 'xyzrgb']. Default: 'xyz'. 
- """ - if isinstance(bbox3d, torch.Tensor): - bbox3d = bbox3d.cpu().numpy() - if isinstance(indices, torch.Tensor): - indices = indices.cpu().numpy() - bbox3d = bbox3d.copy() - - in_box_color = np.array(points_in_box_color) - for i in range(len(bbox3d)): - center = bbox3d[i, 0:3] - dim = bbox3d[i, 3:6] - yaw = np.zeros(3) - # TODO: fix problem of current coordinate system - # dim[0], dim[1] = dim[1], dim[0] # for current coordinate - # yaw[rot_axis] = -(bbox3d[i, 6] - 0.5 * np.pi) - yaw[rot_axis] = -bbox3d[i, 6] - rot_mat = geometry.get_rotation_matrix_from_xyz(yaw) - if center_mode == 'lidar_bottom': - center[rot_axis] += dim[ - rot_axis] / 2 # bottom center to gravity center - elif center_mode == 'camera_bottom': - center[rot_axis] -= dim[ - rot_axis] / 2 # bottom center to gravity center - box3d = geometry.OrientedBoundingBox(center, rot_mat, dim) - - line_set = geometry.LineSet.create_from_oriented_bounding_box(box3d) - line_set.paint_uniform_color(bbox_color) - # draw bboxes on visualizer - vis.add_geometry(line_set) - - # change the color of points which are in box - if pcd is not None and mode == 'xyz': - points_colors[indices[:, i].astype(np.bool)] = in_box_color - - # update points colors - if pcd is not None: - pcd.colors = o3d.utility.Vector3dVector(points_colors) - vis.update_geometry(pcd) - - -def show_pts_index_boxes(points, - bbox3d=None, - show=True, - indices=None, - save_path=None, - points_size=2, - point_color=(0.5, 0.5, 0.5), - bbox_color=(0, 1, 0), - points_in_box_color=(1, 0, 0), - rot_axis=2, - center_mode='lidar_bottom', - mode='xyz'): - """Draw bbox and points on visualizer with indices that indicate which - bbox3d that each point lies in. - - Args: - points (numpy.array | torch.tensor, shape=[N, 3+C]): - points to visualize. - bbox3d (numpy.array | torch.tensor, shape=[M, 7]): - 3D bbox (x, y, z, x_size, y_size, z_size, yaw) to visualize. - Defaults to None. - show (bool, optional): whether to show the visualization results. - Default: True. - indices (numpy.array | torch.tensor, shape=[N, M], optional): - indicate which bbox3d that each point lies in. Default: None. - save_path (str, optional): path to save visualized results. - Default: None. - points_size (int, optional): the size of points to show on visualizer. - Default: 2. - point_color (tuple[float], optional): the color of points. - Default: (0.5, 0.5, 0.5). - bbox_color (tuple[float], optional): the color of bbox. - Default: (0, 1, 0). - points_in_box_color (tuple[float], optional): - the color of points which are in bbox3d. Default: (1, 0, 0). - rot_axis (int, optional): rotation axis of bbox. Default: 2. - center_mode (bool, optional): indicate the center of bbox is - bottom center or gravity center. available mode - ['lidar_bottom', 'camera_bottom']. Default: 'lidar_bottom'. - mode (str, optional): indicate type of the input points, - available mode ['xyz', 'xyzrgb']. Default: 'xyz'. 
- """ - # TODO: support score and class info - assert 0 <= rot_axis <= 2 - - # init visualizer - vis = o3d.visualization.Visualizer() - vis.create_window() - mesh_frame = geometry.TriangleMesh.create_coordinate_frame( - size=1, origin=[0, 0, 0]) # create coordinate frame - vis.add_geometry(mesh_frame) - - # draw points - pcd, points_colors = _draw_points(points, vis, points_size, point_color, - mode) - - # draw boxes - if bbox3d is not None: - _draw_bboxes_ind(bbox3d, vis, indices, points_colors, pcd, bbox_color, - points_in_box_color, rot_axis, center_mode, mode) - - if show: - vis.run() - - if save_path is not None: - vis.capture_screen_image(save_path) - - vis.destroy_window() - - -class Visualizer(object): - r"""Online visualizer implemented with Open3d. - - Args: - points (numpy.array, shape=[N, 3+C]): Points to visualize. The Points - cloud is in mode of Coord3DMode.DEPTH (please refer to - core.structures.coord_3d_mode). - bbox3d (numpy.array, shape=[M, 7], optional): 3D bbox - (x, y, z, x_size, y_size, z_size, yaw) to visualize. - The 3D bbox is in mode of Box3DMode.DEPTH with - gravity_center (please refer to core.structures.box_3d_mode). - Default: None. - save_path (str, optional): path to save visualized results. - Default: None. - points_size (int, optional): the size of points to show on visualizer. - Default: 2. - point_color (tuple[float], optional): the color of points. - Default: (0.5, 0.5, 0.5). - bbox_color (tuple[float], optional): the color of bbox. - Default: (0, 1, 0). - points_in_box_color (tuple[float], optional): - the color of points which are in bbox3d. Default: (1, 0, 0). - rot_axis (int, optional): rotation axis of bbox. Default: 2. - center_mode (bool, optional): indicate the center of bbox is - bottom center or gravity center. available mode - ['lidar_bottom', 'camera_bottom']. Default: 'lidar_bottom'. - mode (str, optional): indicate type of the input points, - available mode ['xyz', 'xyzrgb']. Default: 'xyz'. - """ - - def __init__(self, - points, - bbox3d=None, - save_path=None, - points_size=2, - point_color=(0.5, 0.5, 0.5), - bbox_color=(0, 1, 0), - points_in_box_color=(1, 0, 0), - rot_axis=2, - center_mode='lidar_bottom', - mode='xyz'): - super(Visualizer, self).__init__() - assert 0 <= rot_axis <= 2 - - # init visualizer - self.o3d_visualizer = o3d.visualization.Visualizer() - self.o3d_visualizer.create_window() - mesh_frame = geometry.TriangleMesh.create_coordinate_frame( - size=1, origin=[0, 0, 0]) # create coordinate frame - self.o3d_visualizer.add_geometry(mesh_frame) - - self.points_size = points_size - self.point_color = point_color - self.bbox_color = bbox_color - self.points_in_box_color = points_in_box_color - self.rot_axis = rot_axis - self.center_mode = center_mode - self.mode = mode - self.seg_num = 0 - - # draw points - if points is not None: - self.pcd, self.points_colors = _draw_points( - points, self.o3d_visualizer, points_size, point_color, mode) - - # draw boxes - if bbox3d is not None: - _draw_bboxes(bbox3d, self.o3d_visualizer, self.points_colors, - self.pcd, bbox_color, points_in_box_color, rot_axis, - center_mode, mode) - - def add_bboxes(self, bbox3d, bbox_color=None, points_in_box_color=None): - """Add bounding box to visualizer. - - Args: - bbox3d (numpy.array, shape=[M, 7]): - 3D bbox (x, y, z, x_size, y_size, z_size, yaw) - to be visualized. The 3d bbox is in mode of - Box3DMode.DEPTH with gravity_center (please refer to - core.structures.box_3d_mode). - bbox_color (tuple[float]): the color of bbox. Default: None. 
- points_in_box_color (tuple[float]): the color of points which - are in bbox3d. Default: None. - """ - if bbox_color is None: - bbox_color = self.bbox_color - if points_in_box_color is None: - points_in_box_color = self.points_in_box_color - _draw_bboxes(bbox3d, self.o3d_visualizer, self.points_colors, self.pcd, - bbox_color, points_in_box_color, self.rot_axis, - self.center_mode, self.mode) - - def add_seg_mask(self, seg_mask_colors): - """Add segmentation mask to visualizer via per-point colorization. - - Args: - seg_mask_colors (numpy.array, shape=[N, 6]): - The segmentation mask whose first 3 dims are point coordinates - and last 3 dims are converted colors. - """ - # we can't draw the colors on existing points - # in case gt and pred mask would overlap - # instead we set a large offset along x-axis for each seg mask - self.seg_num += 1 - offset = (np.array(self.pcd.points).max(0) - - np.array(self.pcd.points).min(0))[0] * 1.2 * self.seg_num - mesh_frame = geometry.TriangleMesh.create_coordinate_frame( - size=1, origin=[offset, 0, 0]) # create coordinate frame for seg - self.o3d_visualizer.add_geometry(mesh_frame) - seg_points = copy.deepcopy(seg_mask_colors) - seg_points[:, 0] += offset - _draw_points( - seg_points, self.o3d_visualizer, self.points_size, mode='xyzrgb') - - def show(self, save_path=None): - """Visualize the points cloud. - - Args: - save_path (str, optional): path to save image. Default: None. - """ - - self.o3d_visualizer.run() - - if save_path is not None: - self.o3d_visualizer.capture_screen_image(save_path) - - self.o3d_visualizer.destroy_window() - return diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/show_result.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/show_result.py deleted file mode 100644 index aa732cf4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/visualizer/show_result.py +++ /dev/null @@ -1,291 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -import mmcv -import numpy as np -import trimesh - -from .image_vis import (draw_camera_bbox3d_on_img, draw_depth_bbox3d_on_img, - draw_lidar_bbox3d_on_img) - - -def _write_obj(points, out_filename): - """Write points into ``obj`` format for meshlab visualization. - - Args: - points (np.ndarray): Points in shape (N, dim). - out_filename (str): Filename to be saved. - """ - N = points.shape[0] - fout = open(out_filename, 'w') - for i in range(N): - if points.shape[1] == 6: - c = points[i, 3:].astype(int) - fout.write( - 'v %f %f %f %d %d %d\n' % - (points[i, 0], points[i, 1], points[i, 2], c[0], c[1], c[2])) - - else: - fout.write('v %f %f %f\n' % - (points[i, 0], points[i, 1], points[i, 2])) - fout.close() - - -def _write_oriented_bbox(scene_bbox, out_filename): - """Export oriented (around Z axis) scene bbox to meshes. - - Args: - scene_bbox(list[ndarray] or ndarray): xyz pos of center and - 3 lengths (x_size, y_size, z_size) and heading angle around Z axis. - Y forward, X right, Z upward. heading angle of positive X is 0, - heading angle of positive Y is 90 degrees. - out_filename(str): Filename. 
- """ - - def heading2rotmat(heading_angle): - rotmat = np.zeros((3, 3)) - rotmat[2, 2] = 1 - cosval = np.cos(heading_angle) - sinval = np.sin(heading_angle) - rotmat[0:2, 0:2] = np.array([[cosval, -sinval], [sinval, cosval]]) - return rotmat - - def convert_oriented_box_to_trimesh_fmt(box): - ctr = box[:3] - lengths = box[3:6] - trns = np.eye(4) - trns[0:3, 3] = ctr - trns[3, 3] = 1.0 - trns[0:3, 0:3] = heading2rotmat(box[6]) - box_trimesh_fmt = trimesh.creation.box(lengths, trns) - return box_trimesh_fmt - - if len(scene_bbox) == 0: - scene_bbox = np.zeros((1, 7)) - scene = trimesh.scene.Scene() - for box in scene_bbox: - scene.add_geometry(convert_oriented_box_to_trimesh_fmt(box)) - - mesh_list = trimesh.util.concatenate(scene.dump()) - # save to obj file - trimesh.io.export.export_mesh(mesh_list, out_filename, file_type='obj') - - return - - -def show_result(points, - gt_bboxes, - pred_bboxes, - out_dir, - filename, - show=False, - snapshot=False, - pred_labels=None): - """Convert results into format that is directly readable for meshlab. - - Args: - points (np.ndarray): Points. - gt_bboxes (np.ndarray): Ground truth boxes. - pred_bboxes (np.ndarray): Predicted boxes. - out_dir (str): Path of output directory - filename (str): Filename of the current frame. - show (bool, optional): Visualize the results online. Defaults to False. - snapshot (bool, optional): Whether to save the online results. - Defaults to False. - pred_labels (np.ndarray, optional): Predicted labels of boxes. - Defaults to None. - """ - result_path = osp.join(out_dir, filename) - mmcv.mkdir_or_exist(result_path) - - if show: - from .open3d_vis import Visualizer - - vis = Visualizer(points) - if pred_bboxes is not None: - if pred_labels is None: - vis.add_bboxes(bbox3d=pred_bboxes) - else: - palette = np.random.randint( - 0, 255, size=(pred_labels.max() + 1, 3)) / 256 - labelDict = {} - for j in range(len(pred_labels)): - i = int(pred_labels[j].numpy()) - if labelDict.get(i) is None: - labelDict[i] = [] - labelDict[i].append(pred_bboxes[j]) - for i in labelDict: - vis.add_bboxes( - bbox3d=np.array(labelDict[i]), - bbox_color=palette[i], - points_in_box_color=palette[i]) - - if gt_bboxes is not None: - vis.add_bboxes(bbox3d=gt_bboxes, bbox_color=(0, 0, 1)) - show_path = osp.join(result_path, - f'{filename}_online.png') if snapshot else None - vis.show(show_path) - - if points is not None: - _write_obj(points, osp.join(result_path, f'{filename}_points.obj')) - - if gt_bboxes is not None: - # bottom center to gravity center - gt_bboxes[..., 2] += gt_bboxes[..., 5] / 2 - - _write_oriented_bbox(gt_bboxes, - osp.join(result_path, f'{filename}_gt.obj')) - - if pred_bboxes is not None: - # bottom center to gravity center - pred_bboxes[..., 2] += pred_bboxes[..., 5] / 2 - - _write_oriented_bbox(pred_bboxes, - osp.join(result_path, f'{filename}_pred.obj')) - - -def show_seg_result(points, - gt_seg, - pred_seg, - out_dir, - filename, - palette, - ignore_index=None, - show=False, - snapshot=False): - """Convert results into format that is directly readable for meshlab. - - Args: - points (np.ndarray): Points. - gt_seg (np.ndarray): Ground truth segmentation mask. - pred_seg (np.ndarray): Predicted segmentation mask. - out_dir (str): Path of output directory - filename (str): Filename of the current frame. - palette (np.ndarray): Mapping between class labels and colors. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. Defaults to None. - show (bool, optional): Visualize the results online. 
Defaults to False. - snapshot (bool, optional): Whether to save the online results. - Defaults to False. - """ - # we need 3D coordinates to visualize segmentation mask - if gt_seg is not None or pred_seg is not None: - assert points is not None, \ - '3D coordinates are required for segmentation visualization' - - # filter out ignored points - if gt_seg is not None and ignore_index is not None: - if points is not None: - points = points[gt_seg != ignore_index] - if pred_seg is not None: - pred_seg = pred_seg[gt_seg != ignore_index] - gt_seg = gt_seg[gt_seg != ignore_index] - - if gt_seg is not None: - gt_seg_color = palette[gt_seg] - gt_seg_color = np.concatenate([points[:, :3], gt_seg_color], axis=1) - if pred_seg is not None: - pred_seg_color = palette[pred_seg] - pred_seg_color = np.concatenate([points[:, :3], pred_seg_color], - axis=1) - - result_path = osp.join(out_dir, filename) - mmcv.mkdir_or_exist(result_path) - - # online visualization of segmentation mask - # we show three masks in a row, scene_points, gt_mask, pred_mask - if show: - from .open3d_vis import Visualizer - mode = 'xyzrgb' if points.shape[1] == 6 else 'xyz' - vis = Visualizer(points, mode=mode) - if gt_seg is not None: - vis.add_seg_mask(gt_seg_color) - if pred_seg is not None: - vis.add_seg_mask(pred_seg_color) - show_path = osp.join(result_path, - f'{filename}_online.png') if snapshot else None - vis.show(show_path) - - if points is not None: - _write_obj(points, osp.join(result_path, f'{filename}_points.obj')) - - if gt_seg is not None: - _write_obj(gt_seg_color, osp.join(result_path, f'{filename}_gt.obj')) - - if pred_seg is not None: - _write_obj(pred_seg_color, osp.join(result_path, - f'{filename}_pred.obj')) - - -def show_multi_modality_result(img, - gt_bboxes, - pred_bboxes, - proj_mat, - out_dir, - filename, - box_mode='lidar', - img_metas=None, - show=False, - gt_bbox_color=(61, 102, 255), - pred_bbox_color=(241, 101, 72)): - """Convert multi-modality detection results into 2D results. - - Project the predicted 3D bbox to 2D image plane and visualize them. - - Args: - img (np.ndarray): The numpy array of image in cv2 fashion. - gt_bboxes (:obj:`BaseInstance3DBoxes`): Ground truth boxes. - pred_bboxes (:obj:`BaseInstance3DBoxes`): Predicted boxes. - proj_mat (numpy.array, shape=[4, 4]): The projection matrix - according to the camera intrinsic parameters. - out_dir (str): Path of output directory. - filename (str): Filename of the current frame. - box_mode (str, optional): Coordinate system the boxes are in. - Should be one of 'depth', 'lidar' and 'camera'. - Defaults to 'lidar'. - img_metas (dict, optional): Used in projecting depth bbox. - Defaults to None. - show (bool, optional): Visualize the results online. Defaults to False. - gt_bbox_color (str or tuple(int), optional): Color of bbox lines. - The tuple of color should be in BGR order. Default: (255, 102, 61). - pred_bbox_color (str or tuple(int), optional): Color of bbox lines. - The tuple of color should be in BGR order. Default: (72, 101, 241). 
- """ - if box_mode == 'depth': - draw_bbox = draw_depth_bbox3d_on_img - elif box_mode == 'lidar': - draw_bbox = draw_lidar_bbox3d_on_img - elif box_mode == 'camera': - draw_bbox = draw_camera_bbox3d_on_img - else: - raise NotImplementedError(f'unsupported box mode {box_mode}') - - result_path = osp.join(out_dir, filename) - mmcv.mkdir_or_exist(result_path) - - if show: - show_img = img.copy() - if gt_bboxes is not None: - show_img = draw_bbox( - gt_bboxes, show_img, proj_mat, img_metas, color=gt_bbox_color) - if pred_bboxes is not None: - show_img = draw_bbox( - pred_bboxes, - show_img, - proj_mat, - img_metas, - color=pred_bbox_color) - mmcv.imshow(show_img, win_name='project_bbox3d_img', wait_time=0) - - if img is not None: - mmcv.imwrite(img, osp.join(result_path, f'{filename}_img.png')) - - if gt_bboxes is not None: - gt_img = draw_bbox( - gt_bboxes, img, proj_mat, img_metas, color=gt_bbox_color) - mmcv.imwrite(gt_img, osp.join(result_path, f'{filename}_gt.png')) - - if pred_bboxes is not None: - pred_img = draw_bbox( - pred_bboxes, img, proj_mat, img_metas, color=pred_bbox_color) - mmcv.imwrite(pred_img, osp.join(result_path, f'{filename}_pred.png')) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/__init__.py deleted file mode 100644 index 8d695437..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import build_voxel_generator -from .voxel_generator import VoxelGenerator - -__all__ = ['build_voxel_generator', 'VoxelGenerator'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/builder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/builder.py deleted file mode 100644 index bc663ee4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/builder.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv - -from . import voxel_generator - - -def build_voxel_generator(cfg, **kwargs): - """Builder of voxel generator.""" - if isinstance(cfg, voxel_generator.VoxelGenerator): - return cfg - elif isinstance(cfg, dict): - return mmcv.runner.obj_from_dict( - cfg, voxel_generator, default_args=kwargs) - else: - raise TypeError('Invalid type {} for building a sampler'.format( - type(cfg))) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/voxel_generator.py b/cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/voxel_generator.py deleted file mode 100644 index 404f2cdc..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/core/voxel/voxel_generator.py +++ /dev/null @@ -1,280 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numba -import numpy as np - - -class VoxelGenerator(object): - """Voxel generator in numpy implementation. - - Args: - voxel_size (list[float]): Size of a single voxel - point_cloud_range (list[float]): Range of points - max_num_points (int): Maximum number of points in a single voxel - max_voxels (int, optional): Maximum number of voxels. - Defaults to 20000. 
- """ - - def __init__(self, - voxel_size, - point_cloud_range, - max_num_points, - max_voxels=20000): - - point_cloud_range = np.array(point_cloud_range, dtype=np.float32) - # [0, -40, -3, 70.4, 40, 1] - voxel_size = np.array(voxel_size, dtype=np.float32) - grid_size = (point_cloud_range[3:] - - point_cloud_range[:3]) / voxel_size - grid_size = np.round(grid_size).astype(np.int64) - - self._voxel_size = voxel_size - self._point_cloud_range = point_cloud_range - self._max_num_points = max_num_points - self._max_voxels = max_voxels - self._grid_size = grid_size - - def generate(self, points): - """Generate voxels given points.""" - return points_to_voxel(points, self._voxel_size, - self._point_cloud_range, self._max_num_points, - True, self._max_voxels) - - @property - def voxel_size(self): - """list[float]: Size of a single voxel.""" - return self._voxel_size - - @property - def max_num_points_per_voxel(self): - """int: Maximum number of points per voxel.""" - return self._max_num_points - - @property - def point_cloud_range(self): - """list[float]: Range of point cloud.""" - return self._point_cloud_range - - @property - def grid_size(self): - """np.ndarray: The size of grids.""" - return self._grid_size - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - indent = ' ' * (len(repr_str) + 1) - repr_str += f'(voxel_size={self._voxel_size},\n' - repr_str += indent + 'point_cloud_range=' - repr_str += f'{self._point_cloud_range.tolist()},\n' - repr_str += indent + f'max_num_points={self._max_num_points},\n' - repr_str += indent + f'max_voxels={self._max_voxels},\n' - repr_str += indent + f'grid_size={self._grid_size.tolist()}' - repr_str += ')' - return repr_str - - -def points_to_voxel(points, - voxel_size, - coors_range, - max_points=35, - reverse_index=True, - max_voxels=20000): - """convert kitti points(N, >=3) to voxels. - - Args: - points (np.ndarray): [N, ndim]. points[:, :3] contain xyz points and - points[:, 3:] contain other information such as reflectivity. - voxel_size (list, tuple, np.ndarray): [3] xyz, indicate voxel size - coors_range (list[float | tuple[float] | ndarray]): Voxel range. - format: xyzxyz, minmax - max_points (int): Indicate maximum points contained in a voxel. - reverse_index (bool): Whether return reversed coordinates. - if points has xyz format and reverse_index is True, output - coordinates will be zyx format, but points in features always - xyz format. - max_voxels (int): Maximum number of voxels this function creates. - For second, 20000 is a good choice. Points should be shuffled for - randomness before this function because max_voxels drops points. - - Returns: - tuple[np.ndarray]: - voxels: [M, max_points, ndim] float tensor. only contain points. - coordinates: [M, 3] int32 tensor. - num_points_per_voxel: [M] int32 tensor. - """ - if not isinstance(voxel_size, np.ndarray): - voxel_size = np.array(voxel_size, dtype=points.dtype) - if not isinstance(coors_range, np.ndarray): - coors_range = np.array(coors_range, dtype=points.dtype) - voxelmap_shape = (coors_range[3:] - coors_range[:3]) / voxel_size - voxelmap_shape = tuple(np.round(voxelmap_shape).astype(np.int32).tolist()) - if reverse_index: - voxelmap_shape = voxelmap_shape[::-1] - # don't create large array in jit(nopython=True) code. 
- num_points_per_voxel = np.zeros(shape=(max_voxels, ), dtype=np.int32) - coor_to_voxelidx = -np.ones(shape=voxelmap_shape, dtype=np.int32) - voxels = np.zeros( - shape=(max_voxels, max_points, points.shape[-1]), dtype=points.dtype) - coors = np.zeros(shape=(max_voxels, 3), dtype=np.int32) - if reverse_index: - voxel_num = _points_to_voxel_reverse_kernel( - points, voxel_size, coors_range, num_points_per_voxel, - coor_to_voxelidx, voxels, coors, max_points, max_voxels) - - else: - voxel_num = _points_to_voxel_kernel(points, voxel_size, coors_range, - num_points_per_voxel, - coor_to_voxelidx, voxels, coors, - max_points, max_voxels) - - coors = coors[:voxel_num] - voxels = voxels[:voxel_num] - num_points_per_voxel = num_points_per_voxel[:voxel_num] - - return voxels, coors, num_points_per_voxel - - -@numba.jit(nopython=True) -def _points_to_voxel_reverse_kernel(points, - voxel_size, - coors_range, - num_points_per_voxel, - coor_to_voxelidx, - voxels, - coors, - max_points=35, - max_voxels=20000): - """convert kitti points(N, >=3) to voxels. - - Args: - points (np.ndarray): [N, ndim]. points[:, :3] contain xyz points and - points[:, 3:] contain other information such as reflectivity. - voxel_size (list, tuple, np.ndarray): [3] xyz, indicate voxel size - coors_range (list[float | tuple[float] | ndarray]): Range of voxels. - format: xyzxyz, minmax - num_points_per_voxel (int): Number of points per voxel. - coor_to_voxel_idx (np.ndarray): A voxel grid of shape (D, H, W), - which has the same shape as the complete voxel map. It indicates - the index of each corresponding voxel. - voxels (np.ndarray): Created empty voxels. - coors (np.ndarray): Created coordinates of each voxel. - max_points (int): Indicate maximum points contained in a voxel. - max_voxels (int): Maximum number of voxels this function create. - for second, 20000 is a good choice. Points should be shuffled for - randomness before this function because max_voxels drops points. - - Returns: - tuple[np.ndarray]: - voxels: Shape [M, max_points, ndim], only contain points. - coordinates: Shape [M, 3]. - num_points_per_voxel: Shape [M]. - """ - # put all computations to one loop. - # we shouldn't create large array in main jit code, otherwise - # reduce performance - N = points.shape[0] - # ndim = points.shape[1] - 1 - ndim = 3 - ndim_minus_1 = ndim - 1 - grid_size = (coors_range[3:] - coors_range[:3]) / voxel_size - # np.round(grid_size) - # grid_size = np.round(grid_size).astype(np.int64)(np.int32) - grid_size = np.round(grid_size, 0, grid_size).astype(np.int32) - coor = np.zeros(shape=(3, ), dtype=np.int32) - voxel_num = 0 - failed = False - for i in range(N): - failed = False - for j in range(ndim): - c = np.floor((points[i, j] - coors_range[j]) / voxel_size[j]) - if c < 0 or c >= grid_size[j]: - failed = True - break - coor[ndim_minus_1 - j] = c - if failed: - continue - voxelidx = coor_to_voxelidx[coor[0], coor[1], coor[2]] - if voxelidx == -1: - voxelidx = voxel_num - if voxel_num >= max_voxels: - continue - voxel_num += 1 - coor_to_voxelidx[coor[0], coor[1], coor[2]] = voxelidx - coors[voxelidx] = coor - num = num_points_per_voxel[voxelidx] - if num < max_points: - voxels[voxelidx, num] = points[i] - num_points_per_voxel[voxelidx] += 1 - return voxel_num - - -@numba.jit(nopython=True) -def _points_to_voxel_kernel(points, - voxel_size, - coors_range, - num_points_per_voxel, - coor_to_voxelidx, - voxels, - coors, - max_points=35, - max_voxels=20000): - """convert kitti points(N, >=3) to voxels. 
- - Args: - points (np.ndarray): [N, ndim]. points[:, :3] contain xyz points and - points[:, 3:] contain other information such as reflectivity. - voxel_size (list, tuple, np.ndarray): [3] xyz, indicate voxel size. - coors_range (list[float | tuple[float] | ndarray]): Range of voxels. - format: xyzxyz, minmax - num_points_per_voxel (int): Number of points per voxel. - coor_to_voxel_idx (np.ndarray): A voxel grid of shape (D, H, W), - which has the same shape as the complete voxel map. It indicates - the index of each corresponding voxel. - voxels (np.ndarray): Created empty voxels. - coors (np.ndarray): Created coordinates of each voxel. - max_points (int): Indicate maximum points contained in a voxel. - max_voxels (int): Maximum number of voxels this function create. - for second, 20000 is a good choice. Points should be shuffled for - randomness before this function because max_voxels drops points. - - Returns: - tuple[np.ndarray]: - voxels: Shape [M, max_points, ndim], only contain points. - coordinates: Shape [M, 3]. - num_points_per_voxel: Shape [M]. - """ - N = points.shape[0] - # ndim = points.shape[1] - 1 - ndim = 3 - grid_size = (coors_range[3:] - coors_range[:3]) / voxel_size - # grid_size = np.round(grid_size).astype(np.int64)(np.int32) - grid_size = np.round(grid_size, 0, grid_size).astype(np.int32) - - # lower_bound = coors_range[:3] - # upper_bound = coors_range[3:] - coor = np.zeros(shape=(3, ), dtype=np.int32) - voxel_num = 0 - failed = False - for i in range(N): - failed = False - for j in range(ndim): - c = np.floor((points[i, j] - coors_range[j]) / voxel_size[j]) - if c < 0 or c >= grid_size[j]: - failed = True - break - coor[j] = c - if failed: - continue - voxelidx = coor_to_voxelidx[coor[0], coor[1], coor[2]] - if voxelidx == -1: - voxelidx = voxel_num - if voxel_num >= max_voxels: - continue - voxel_num += 1 - coor_to_voxelidx[coor[0], coor[1], coor[2]] = voxelidx - coors[voxelidx] = coor - num = num_points_per_voxel[voxelidx] - if num < max_points: - voxels[voxelidx, num] = points[i] - num_points_per_voxel[voxelidx] += 1 - return voxel_num diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/__init__.py deleted file mode 100644 index 49cbc6b1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/__init__.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
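The hunks above delete the vendored numpy/numba voxel generator (`mmdet3d/core/voxel/`). For readers who depended on it, below is a minimal sketch of how the removed `VoxelGenerator` interface was typically exercised. This is hypothetical usage reconstructed only from the deleted code: the import path follows the deleted `voxel/__init__.py`, and the parameter values and fake point cloud are illustrative, not part of this patch or any config in this repository.

```python
import numpy as np

# Import path as exported by the deleted mmdet3d/core/voxel/__init__.py
# (no longer available once this patch is applied).
from mmdet3d.core.voxel import VoxelGenerator

voxel_generator = VoxelGenerator(
    voxel_size=[0.05, 0.05, 0.1],                 # single voxel size (x, y, z)
    point_cloud_range=[0, -40, -3, 70.4, 40, 1],  # xyzxyz, min/max
    max_num_points=35,                            # points kept per voxel
    max_voxels=20000)

# Fake point cloud: x, y, z, reflectance. Points outside the range are dropped
# by the kernel, and extra points per voxel beyond max_num_points are ignored.
points = np.random.rand(1000, 4).astype(np.float32)

voxels, coords, num_points_per_voxel = voxel_generator.generate(points)
print(voxels.shape)                # (M, 35, 4) - only valid voxels are returned
print(coords.shape)                # (M, 3), zyx order since reverse_index=True
print(num_points_per_voxel.shape)  # (M,)
```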
-from mmdet.datasets.builder import build_dataloader -from .builder import DATASETS, PIPELINES, build_dataset -from .custom_3d import Custom3DDataset -from .custom_3d_seg import Custom3DSegDataset -from .kitti_dataset import KittiDataset -from .kitti_mono_dataset import KittiMonoDataset -from .lyft_dataset import LyftDataset -from .nuscenes_dataset import NuScenesDataset -from .nuscenes_mono_dataset import NuScenesMonoDataset -# yapf: disable -from .pipelines import (AffineResize, BackgroundPointsFilter, GlobalAlignment, - GlobalRotScaleTrans, IndoorPatchPointSample, - IndoorPointSample, LoadAnnotations3D, - LoadPointsFromDict, LoadPointsFromFile, - LoadPointsFromMultiSweeps, MultiViewWrapper, - NormalizePointsColor, ObjectNameFilter, ObjectNoise, - ObjectRangeFilter, ObjectSample, PointSample, - PointShuffle, PointsRangeFilter, RandomDropPointsColor, - RandomFlip3D, RandomJitterPoints, RandomRotate, - RandomShiftScale, RangeLimitedRandomCrop, - VoxelBasedPointSampler) -# yapf: enable -from .s3dis_dataset import S3DISDataset, S3DISSegDataset -from .scannet_dataset import (ScanNetDataset, ScanNetInstanceSegDataset, - ScanNetSegDataset) -from .semantickitti_dataset import SemanticKITTIDataset -from .sunrgbd_dataset import SUNRGBDDataset -from .utils import get_loading_pipeline -from .waymo_dataset import WaymoDataset - -__all__ = [ - 'KittiDataset', 'KittiMonoDataset', 'build_dataloader', 'DATASETS', - 'build_dataset', 'NuScenesDataset', 'NuScenesMonoDataset', 'LyftDataset', - 'ObjectSample', 'RandomFlip3D', 'ObjectNoise', 'GlobalRotScaleTrans', - 'PointShuffle', 'ObjectRangeFilter', 'PointsRangeFilter', - 'LoadPointsFromFile', 'S3DISSegDataset', 'S3DISDataset', - 'NormalizePointsColor', 'IndoorPatchPointSample', 'IndoorPointSample', - 'PointSample', 'LoadAnnotations3D', 'GlobalAlignment', 'SUNRGBDDataset', - 'ScanNetDataset', 'ScanNetSegDataset', 'ScanNetInstanceSegDataset', - 'SemanticKITTIDataset', 'Custom3DDataset', 'Custom3DSegDataset', - 'LoadPointsFromMultiSweeps', 'WaymoDataset', 'BackgroundPointsFilter', - 'VoxelBasedPointSampler', 'get_loading_pipeline', 'RandomDropPointsColor', - 'RandomJitterPoints', 'ObjectNameFilter', 'AffineResize', - 'RandomShiftScale', 'LoadPointsFromDict', 'PIPELINES', - 'RangeLimitedRandomCrop', 'RandomRotate', 'MultiViewWrapper' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/builder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/builder.py deleted file mode 100644 index 157f6404..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/builder.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
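The deleted `datasets/__init__.py` above re-exported `build_dataset` and the dataset classes, so downstream configs were resolved through that registry. The sketch below shows the config-driven construction the removed builder supported, including recursive resolution of wrapper datasets; it is a hypothetical example (the data paths and config values are placeholders, and it assumes the corresponding info `.pkl` has been generated), not a config shipped with this repository. After this patch, the equivalent entry points come from the upstream mmdet3d package.

```python
# Hypothetical example of resolving a config dict into a dataset object via the
# registry/builder removed by this patch. Paths below are placeholders.
from mmdet3d.datasets import build_dataset

train_cfg = dict(
    type='RepeatDataset',   # wrapper type is matched first by build_dataset
    times=5,
    dataset=dict(
        type='ScanNetSegDataset',
        data_root='./data/scannet/',
        ann_file='./data/scannet/scannet_infos_train.pkl',
        pipeline=None))

# build_dataset builds the inner ScanNetSegDataset, then wraps it in
# RepeatDataset so one epoch iterates the scenes `times` times.
dataset = build_dataset(train_cfg)
print(type(dataset).__name__, len(dataset))
```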
-import platform - -from mmcv.utils import Registry, build_from_cfg - -from mmdet.datasets import DATASETS as MMDET_DATASETS -from mmdet.datasets.builder import _concat_dataset - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - base_soft_limit = rlimit[0] - hard_limit = rlimit[1] - soft_limit = min(max(4096, base_soft_limit), hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -OBJECTSAMPLERS = Registry('Object sampler') -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def build_dataset(cfg, default_args=None): - from mmdet3d.datasets.dataset_wrappers import CBGSDataset - from mmdet.datasets.dataset_wrappers import (ClassBalancedDataset, - ConcatDataset, RepeatDataset) - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'ConcatDataset': - dataset = ConcatDataset( - [build_dataset(c, default_args) for c in cfg['datasets']], - cfg.get('separate_eval', True)) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif cfg['type'] == 'ClassBalancedDataset': - dataset = ClassBalancedDataset( - build_dataset(cfg['dataset'], default_args), cfg['oversample_thr']) - elif cfg['type'] == 'CBGSDataset': - dataset = CBGSDataset(build_dataset(cfg['dataset'], default_args)) - elif isinstance(cfg.get('ann_file'), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - elif cfg['type'] in DATASETS._module_dict.keys(): - dataset = build_from_cfg(cfg, DATASETS, default_args) - else: - dataset = build_from_cfg(cfg, MMDET_DATASETS, default_args) - return dataset diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/custom_3d.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/custom_3d.py deleted file mode 100644 index 9c6e3517..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/custom_3d.py +++ /dev/null @@ -1,448 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import tempfile -import warnings -from os import path as osp - -import mmcv -import numpy as np -from torch.utils.data import Dataset - -from ..core.bbox import get_box_type -from .builder import DATASETS -from .pipelines import Compose -from .utils import extract_result_dict, get_loading_pipeline - - -@DATASETS.register_module() -class Custom3DDataset(Dataset): - """Customized 3D dataset. - - This is the base dataset of SUNRGB-D, ScanNet, nuScenes, and KITTI - dataset. - - .. code-block:: none - - [ - {'sample_idx': - 'lidar_points': {'lidar_path': velodyne_path, - .... - }, - 'annos': {'box_type_3d': (str) 'LiDAR/Camera/Depth' - 'gt_bboxes_3d': (n, 7) - 'gt_names': [list] - .... - } - 'calib': { .....} - 'images': { .....} - } - ] - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR'. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. 
- - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - """ - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - modality=None, - box_type_3d='LiDAR', - filter_empty_gt=True, - test_mode=False, - file_client_args=dict(backend='disk')): - super().__init__() - self.data_root = data_root - self.ann_file = ann_file - self.test_mode = test_mode - self.modality = modality - self.filter_empty_gt = filter_empty_gt - self.box_type_3d, self.box_mode_3d = get_box_type(box_type_3d) - - self.CLASSES = self.get_classes(classes) - self.file_client = mmcv.FileClient(**file_client_args) - self.cat2id = {name: i for i, name in enumerate(self.CLASSES)} - - # load annotations - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(self.ann_file) as local_path: - self.data_infos = self.load_annotations(open(local_path, 'rb')) - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {self.ann_file} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - self.data_infos = self.load_annotations(self.ann_file) - - # process pipeline - if pipeline is not None: - self.pipeline = Compose(pipeline) - - # set group flag for the samplers - if not self.test_mode: - self._set_group_flag() - - def load_annotations(self, ann_file): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations. - """ - # loading data from a file-like object needs file format - return mmcv.load(ann_file, file_format='pkl') - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - file_name (str): Filename of point clouds. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - sample_idx = info['sample_idx'] - pts_filename = osp.join(self.data_root, - info['lidar_points']['lidar_path']) - - input_dict = dict( - pts_filename=pts_filename, - sample_idx=sample_idx, - file_name=pts_filename) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - if self.filter_empty_gt and ~(annos['gt_labels_3d'] != -1).any(): - return None - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: Annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): - 3D ground truth bboxes - - gt_labels_3d (np.ndarray): Labels of ground truths. - - gt_names (list[str]): Class names of ground truths. 
- """ - info = self.data_infos[index] - gt_bboxes_3d = info['annos']['gt_bboxes_3d'] - gt_names_3d = info['annos']['gt_names'] - gt_labels_3d = [] - for cat in gt_names_3d: - if cat in self.CLASSES: - gt_labels_3d.append(self.CLASSES.index(cat)) - else: - gt_labels_3d.append(-1) - gt_labels_3d = np.array(gt_labels_3d) - - # Obtain original box 3d type in info file - ori_box_type_3d = info['annos']['box_type_3d'] - ori_box_type_3d, _ = get_box_type(ori_box_type_3d) - - # turn original box type to target box type - gt_bboxes_3d = ori_box_type_3d( - gt_bboxes_3d, - box_dim=gt_bboxes_3d.shape[-1], - origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - gt_names=gt_names_3d) - return anns_results - - def pre_pipeline(self, results): - """Initialization before data preparation. - - Args: - results (dict): Dict before data preprocessing. - - - img_fields (list): Image fields. - - bbox3d_fields (list): 3D bounding boxes fields. - - pts_mask_fields (list): Mask fields of points. - - pts_seg_fields (list): Mask fields of point segments. - - bbox_fields (list): Fields of bounding boxes. - - mask_fields (list): Fields of masks. - - seg_fields (list): Segment fields. - - box_type_3d (str): 3D box type. - - box_mode_3d (str): 3D box mode. - """ - results['img_fields'] = [] - results['bbox3d_fields'] = [] - results['pts_mask_fields'] = [] - results['pts_seg_fields'] = [] - results['bbox_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - results['box_type_3d'] = self.box_type_3d - results['box_mode_3d'] = self.box_mode_3d - - def prepare_train_data(self, index): - """Training data preparation. - - Args: - index (int): Index for accessing the target data. - - Returns: - dict: Training data dict of the corresponding index. - """ - input_dict = self.get_data_info(index) - if input_dict is None: - return None - self.pre_pipeline(input_dict) - example = self.pipeline(input_dict) - if self.filter_empty_gt and \ - (example is None or - ~(example['gt_labels_3d']._data != -1).any()): - return None - return example - - def prepare_test_data(self, index): - """Prepare data for testing. - - Args: - index (int): Index for accessing the target data. - - Returns: - dict: Testing data dict of the corresponding index. - """ - input_dict = self.get_data_info(index) - self.pre_pipeline(input_dict) - example = self.pipeline(input_dict) - return example - - @classmethod - def get_classes(cls, classes=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - - Return: - list[str]: A list of class names. - """ - if classes is None: - return cls.CLASSES - - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - return class_names - - def format_results(self, - outputs, - pklfile_prefix=None, - submission_prefix=None): - """Format the results to pkl file. - - Args: - outputs (list[dict]): Testing results of the dataset. - pklfile_prefix (str): The prefix of pkl files. 
It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (outputs, tmp_dir), outputs is the detection results, - tmp_dir is the temporal directory created for saving json - files when ``jsonfile_prefix`` is not specified. - """ - if pklfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(tmp_dir.name, 'results') - out = f'{pklfile_prefix}.pkl' - mmcv.dump(outputs, out) - return outputs, tmp_dir - - def evaluate(self, - results, - metric=None, - iou_thr=(0.25, 0.5), - logger=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluate. - - Evaluation in indoor protocol. - - Args: - results (list[dict]): List of results. - metric (str | list[str], optional): Metrics to be evaluated. - Defaults to None. - iou_thr (list[float]): AP IoU thresholds. Defaults to (0.25, 0.5). - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Defaults to None. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict: Evaluation results. - """ - from mmdet3d.core.evaluation import indoor_eval - assert isinstance( - results, list), f'Expect results to be list, got {type(results)}.' - assert len(results) > 0, 'Expect length of results > 0.' - assert len(results) == len(self.data_infos) - assert isinstance( - results[0], dict - ), f'Expect elements in results to be dict, got {type(results[0])}.' - gt_annos = [info['annos'] for info in self.data_infos] - label2cat = {i: cat_id for i, cat_id in enumerate(self.CLASSES)} - ret_dict = indoor_eval( - gt_annos, - results, - iou_thr, - label2cat, - logger=logger, - box_type_3d=self.box_type_3d, - box_mode_3d=self.box_mode_3d) - if show: - self.show(results, out_dir, pipeline=pipeline) - - return ret_dict - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - raise NotImplementedError('_build_default_pipeline is not implemented ' - f'for dataset {self.__class__.__name__}') - - def _get_pipeline(self, pipeline): - """Get data loading pipeline in self.show/evaluate function. - - Args: - pipeline (list[dict]): Input pipeline. If None is given, - get from self.pipeline. - """ - if pipeline is None: - if not hasattr(self, 'pipeline') or self.pipeline is None: - warnings.warn( - 'Use default pipeline for data loading, this may cause ' - 'errors when data is on ceph') - return self._build_default_pipeline() - loading_pipeline = get_loading_pipeline(self.pipeline.transforms) - return Compose(loading_pipeline) - return Compose(pipeline) - - def _extract_data(self, index, pipeline, key, load_annos=False): - """Load data using input pipeline and extract data according to key. - - Args: - index (int): Index for accessing the target data. - pipeline (:obj:`Compose`): Composed data loading pipeline. - key (str | list[str]): One single or a list of data key. - load_annos (bool): Whether to load data annotations. - If True, need to set self.test_mode as False before loading. - - Returns: - np.ndarray | torch.Tensor | list[np.ndarray | torch.Tensor]: - A single or a list of loaded data. - """ - assert pipeline is not None, 'data loading pipeline is not provided' - # when we want to load ground-truth via pipeline (e.g. 
bbox, seg mask) - # we need to set self.test_mode as False so that we have 'annos' - if load_annos: - original_test_mode = self.test_mode - self.test_mode = False - input_dict = self.get_data_info(index) - self.pre_pipeline(input_dict) - example = pipeline(input_dict) - - # extract data items according to keys - if isinstance(key, str): - data = extract_result_dict(example, key) - else: - data = [extract_result_dict(example, k) for k in key] - if load_annos: - self.test_mode = original_test_mode - - return data - - def __len__(self): - """Return the length of data infos. - - Returns: - int: Length of data infos. - """ - return len(self.data_infos) - - def _rand_another(self, idx): - """Randomly get another item with the same flag. - - Returns: - int: Another index of item with the same flag. - """ - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) - - def __getitem__(self, idx): - """Get item from infos according to the given index. - - Returns: - dict: Data dictionary of the corresponding index. - """ - if self.test_mode: - return self.prepare_test_data(idx) - while True: - data = self.prepare_train_data(idx) - if data is None: - idx = self._rand_another(idx) - continue - return data - - def _set_group_flag(self): - """Set flag according to image aspect ratio. - - Images with aspect ratio greater than 1 will be set as group 1, - otherwise group 0. In 3D datasets, they are all the same, thus are all - zeros. - """ - self.flag = np.zeros(len(self), dtype=np.uint8) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/custom_3d_seg.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/custom_3d_seg.py deleted file mode 100644 index e123611d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/custom_3d_seg.py +++ /dev/null @@ -1,465 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import tempfile -import warnings -from os import path as osp - -import mmcv -import numpy as np -from torch.utils.data import Dataset - -from mmseg.datasets import DATASETS as SEG_DATASETS -from .builder import DATASETS -from .pipelines import Compose -from .utils import extract_result_dict, get_loading_pipeline - - -@DATASETS.register_module() -@SEG_DATASETS.register_module() -class Custom3DSegDataset(Dataset): - """Customized 3D dataset for semantic segmentation task. - - This is the base dataset of ScanNet and S3DIS dataset. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - palette (list[list[int]], optional): The palette of segmentation map. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. If None is given, set to len(self.CLASSES) to - be consistent with PointSegClassMapping function in pipeline. - Defaults to None. - scene_idxs (np.ndarray | str, optional): Precomputed index to load - data. For scenes with many points, we may sample it several times. - Defaults to None. 
- """ - # names of all classes data used for the task - CLASSES = None - - # class_ids used for training - VALID_CLASS_IDS = None - - # all possible class_ids in loaded segmentation mask - ALL_CLASS_IDS = None - - # official color for visualization - PALETTE = None - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - palette=None, - modality=None, - test_mode=False, - ignore_index=None, - scene_idxs=None, - file_client_args=dict(backend='disk')): - super().__init__() - self.data_root = data_root - self.ann_file = ann_file - self.test_mode = test_mode - self.modality = modality - self.file_client = mmcv.FileClient(**file_client_args) - - # load annotations - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(self.ann_file) as local_path: - self.data_infos = self.load_annotations(open(local_path, 'rb')) - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {self.ann_file} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - self.data_infos = self.load_annotations(self.ann_file) - - if pipeline is not None: - self.pipeline = Compose(pipeline) - - self.ignore_index = len(self.CLASSES) if \ - ignore_index is None else ignore_index - - self.scene_idxs = self.get_scene_idxs(scene_idxs) - self.CLASSES, self.PALETTE = \ - self.get_classes_and_palette(classes, palette) - - # set group flag for the sampler - if not self.test_mode: - self._set_group_flag() - - def load_annotations(self, ann_file): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations. - """ - # loading data from a file-like object needs file format - return mmcv.load(ann_file, file_format='pkl') - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - file_name (str): Filename of point clouds. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - sample_idx = info['point_cloud']['lidar_idx'] - pts_filename = osp.join(self.data_root, info['pts_path']) - - input_dict = dict( - pts_filename=pts_filename, - sample_idx=sample_idx, - file_name=pts_filename) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - return input_dict - - def pre_pipeline(self, results): - """Initialization before data preparation. - - Args: - results (dict): Dict before data preprocessing. - - - img_fields (list): Image fields. - - pts_mask_fields (list): Mask fields of points. - - pts_seg_fields (list): Mask fields of point segments. - - mask_fields (list): Fields of masks. - - seg_fields (list): Segment fields. - """ - results['img_fields'] = [] - results['pts_mask_fields'] = [] - results['pts_seg_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - results['bbox3d_fields'] = [] - - def prepare_train_data(self, index): - """Training data preparation. - - Args: - index (int): Index for accessing the target data. - - Returns: - dict: Training data dict of the corresponding index. 
- """ - input_dict = self.get_data_info(index) - if input_dict is None: - return None - self.pre_pipeline(input_dict) - example = self.pipeline(input_dict) - return example - - def prepare_test_data(self, index): - """Prepare data for testing. - - Args: - index (int): Index for accessing the target data. - - Returns: - dict: Testing data dict of the corresponding index. - """ - input_dict = self.get_data_info(index) - self.pre_pipeline(input_dict) - example = self.pipeline(input_dict) - return example - - def get_classes_and_palette(self, classes=None, palette=None): - """Get class names of current dataset. - - This function is taken from MMSegmentation. - - Args: - classes (Sequence[str] | str): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - Defaults to None. - palette (Sequence[Sequence[int]]] | np.ndarray): - The palette of segmentation map. If None is given, random - palette will be generated. Defaults to None. - """ - if classes is None: - self.custom_classes = False - # map id in the loaded mask to label used for training - self.label_map = { - cls_id: self.ignore_index - for cls_id in self.ALL_CLASS_IDS - } - self.label_map.update( - {cls_id: i - for i, cls_id in enumerate(self.VALID_CLASS_IDS)}) - # map label to category name - self.label2cat = { - i: cat_name - for i, cat_name in enumerate(self.CLASSES) - } - return self.CLASSES, self.PALETTE - - self.custom_classes = True - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - if self.CLASSES: - if not set(class_names).issubset(self.CLASSES): - raise ValueError('classes is not a subset of CLASSES.') - - # update valid_class_ids - self.VALID_CLASS_IDS = [ - self.VALID_CLASS_IDS[self.CLASSES.index(cls_name)] - for cls_name in class_names - ] - - # dictionary, its keys are the old label ids and its values - # are the new label ids. - # used for changing pixel labels in load_annotations. - self.label_map = { - cls_id: self.ignore_index - for cls_id in self.ALL_CLASS_IDS - } - self.label_map.update( - {cls_id: i - for i, cls_id in enumerate(self.VALID_CLASS_IDS)}) - self.label2cat = { - i: cat_name - for i, cat_name in enumerate(class_names) - } - - # modify palette for visualization - palette = [ - self.PALETTE[self.CLASSES.index(cls_name)] - for cls_name in class_names - ] - - return class_names, palette - - def get_scene_idxs(self, scene_idxs): - """Compute scene_idxs for data sampling. - - We sample more times for scenes with more points. 
- """ - if self.test_mode: - # when testing, we load one whole scene every time - return np.arange(len(self.data_infos)).astype(np.int32) - - # we may need to re-sample different scenes according to scene_idxs - # this is necessary for indoor scene segmentation such as ScanNet - if scene_idxs is None: - scene_idxs = np.arange(len(self.data_infos)) - if isinstance(scene_idxs, str): - with self.file_client.get_local_path(scene_idxs) as local_path: - scene_idxs = np.load(local_path) - else: - scene_idxs = np.array(scene_idxs) - - return scene_idxs.astype(np.int32) - - def format_results(self, - outputs, - pklfile_prefix=None, - submission_prefix=None): - """Format the results to pkl file. - - Args: - outputs (list[dict]): Testing results of the dataset. - pklfile_prefix (str): The prefix of pkl files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (outputs, tmp_dir), outputs is the detection results, - tmp_dir is the temporal directory created for saving json - files when ``jsonfile_prefix`` is not specified. - """ - if pklfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(tmp_dir.name, 'results') - out = f'{pklfile_prefix}.pkl' - mmcv.dump(outputs, out) - return outputs, tmp_dir - - def evaluate(self, - results, - metric=None, - logger=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluate. - - Evaluation in semantic segmentation protocol. - - Args: - results (list[dict]): List of results. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Defaults to None. - show (bool, optional): Whether to visualize. - Defaults to False. - out_dir (str, optional): Path to save the visualization results. - Defaults to None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict: Evaluation results. - """ - from mmdet3d.core.evaluation import seg_eval - assert isinstance( - results, list), f'Expect results to be list, got {type(results)}.' - assert len(results) > 0, 'Expect length of results > 0.' - assert len(results) == len(self.data_infos) - assert isinstance( - results[0], dict - ), f'Expect elements in results to be dict, got {type(results[0])}.' - - load_pipeline = self._get_pipeline(pipeline) - pred_sem_masks = [result['semantic_mask'] for result in results] - gt_sem_masks = [ - self._extract_data( - i, load_pipeline, 'pts_semantic_mask', load_annos=True) - for i in range(len(self.data_infos)) - ] - ret_dict = seg_eval( - gt_sem_masks, - pred_sem_masks, - self.label2cat, - self.ignore_index, - logger=logger) - - if show: - self.show(pred_sem_masks, out_dir, pipeline=pipeline) - - return ret_dict - - def _rand_another(self, idx): - """Randomly get another item with the same flag. - - Returns: - int: Another index of item with the same flag. - """ - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - raise NotImplementedError('_build_default_pipeline is not implemented ' - f'for dataset {self.__class__.__name__}') - - def _get_pipeline(self, pipeline): - """Get data loading pipeline in self.show/evaluate function. - - Args: - pipeline (list[dict]): Input pipeline. If None is given, - get from self.pipeline. 
- """ - if pipeline is None: - if not hasattr(self, 'pipeline') or self.pipeline is None: - warnings.warn( - 'Use default pipeline for data loading, this may cause ' - 'errors when data is on ceph') - return self._build_default_pipeline() - loading_pipeline = get_loading_pipeline(self.pipeline.transforms) - return Compose(loading_pipeline) - return Compose(pipeline) - - def _extract_data(self, index, pipeline, key, load_annos=False): - """Load data using input pipeline and extract data according to key. - - Args: - index (int): Index for accessing the target data. - pipeline (:obj:`Compose`): Composed data loading pipeline. - key (str | list[str]): One single or a list of data key. - load_annos (bool): Whether to load data annotations. - If True, need to set self.test_mode as False before loading. - - Returns: - np.ndarray | torch.Tensor | list[np.ndarray | torch.Tensor]: - A single or a list of loaded data. - """ - assert pipeline is not None, 'data loading pipeline is not provided' - # when we want to load ground-truth via pipeline (e.g. bbox, seg mask) - # we need to set self.test_mode as False so that we have 'annos' - if load_annos: - original_test_mode = self.test_mode - self.test_mode = False - input_dict = self.get_data_info(index) - self.pre_pipeline(input_dict) - example = pipeline(input_dict) - - # extract data items according to keys - if isinstance(key, str): - data = extract_result_dict(example, key) - else: - data = [extract_result_dict(example, k) for k in key] - if load_annos: - self.test_mode = original_test_mode - - return data - - def __len__(self): - """Return the length of scene_idxs. - - Returns: - int: Length of data infos. - """ - return len(self.scene_idxs) - - def __getitem__(self, idx): - """Get item from infos according to the given index. - - In indoor scene segmentation task, each scene contains millions of - points. However, we only sample less than 10k points within a patch - each time. Therefore, we use `scene_idxs` to re-sample different rooms. - - Returns: - dict: Data dictionary of the corresponding index. - """ - scene_idx = self.scene_idxs[idx] # map to scene idx - if self.test_mode: - return self.prepare_test_data(scene_idx) - while True: - data = self.prepare_train_data(scene_idx) - if data is None: - idx = self._rand_another(idx) - scene_idx = self.scene_idxs[idx] # map to scene idx - continue - return data - - def _set_group_flag(self): - """Set flag according to image aspect ratio. - - Images with aspect ratio greater than 1 will be set as group 1, - otherwise group 0. In 3D datasets, they are all the same, thus are all - zeros. - """ - self.flag = np.zeros(len(self), dtype=np.uint8) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/dataset_wrappers.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/dataset_wrappers.py deleted file mode 100644 index 2ae33279..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/dataset_wrappers.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - -from .builder import DATASETS - - -@DATASETS.register_module() -class CBGSDataset(object): - """A wrapper of class sampled dataset with ann_file path. Implementation of - paper `Class-balanced Grouping and Sampling for Point Cloud 3D Object - Detection `_. - - Balance the number of scenes under different classes. - - Args: - dataset (:obj:`CustomDataset`): The dataset to be class sampled. 
- """ - - def __init__(self, dataset): - self.dataset = dataset - self.CLASSES = dataset.CLASSES - self.cat2id = {name: i for i, name in enumerate(self.CLASSES)} - self.sample_indices = self._get_sample_indices() - # self.dataset.data_infos = self.data_infos - if hasattr(self.dataset, 'flag'): - self.flag = np.array( - [self.dataset.flag[ind] for ind in self.sample_indices], - dtype=np.uint8) - - def _get_sample_indices(self): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations after class sampling. - """ - class_sample_idxs = {cat_id: [] for cat_id in self.cat2id.values()} - for idx in range(len(self.dataset)): - sample_cat_ids = self.dataset.get_cat_ids(idx) - for cat_id in sample_cat_ids: - class_sample_idxs[cat_id].append(idx) - duplicated_samples = sum( - [len(v) for _, v in class_sample_idxs.items()]) - class_distribution = { - k: len(v) / duplicated_samples - for k, v in class_sample_idxs.items() - } - - sample_indices = [] - - frac = 1.0 / len(self.CLASSES) - ratios = [frac / v for v in class_distribution.values()] - for cls_inds, ratio in zip(list(class_sample_idxs.values()), ratios): - sample_indices += np.random.choice(cls_inds, - int(len(cls_inds) * - ratio)).tolist() - return sample_indices - - def __getitem__(self, idx): - """Get item from infos according to the given index. - - Returns: - dict: Data dictionary of the corresponding index. - """ - ori_idx = self.sample_indices[idx] - return self.dataset[ori_idx] - - def __len__(self): - """Return the length of data infos. - - Returns: - int: Length of data infos. - """ - return len(self.sample_indices) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti2d_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti2d_dataset.py deleted file mode 100644 index a9439321..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti2d_dataset.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np - -from mmdet.datasets import CustomDataset -from .builder import DATASETS - - -@DATASETS.register_module() -class Kitti2DDataset(CustomDataset): - r"""KITTI 2D Dataset. - - This class serves as the API for experiments on the `KITTI Dataset - `_. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR'. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. 
- """ - - CLASSES = ('car', 'pedestrian', 'cyclist') - """ - Annotation format: - [ - { - 'image': { - 'image_idx': 0, - 'image_path': 'training/image_2/000000.png', - 'image_shape': array([ 370, 1224], dtype=int32) - }, - 'point_cloud': { - 'num_features': 4, - 'velodyne_path': 'training/velodyne/000000.bin' - }, - 'calib': { - 'P0': (4, 4), - 'P1': (4, 4), - 'P2': (4, 4), - 'P3': (4, 4), - 'R0_rect':4x4 np.array, - 'Tr_velo_to_cam': 4x4 np.array, - 'Tr_imu_to_velo': 4x4 np.array - }, - 'annos': { - 'name': (n), - 'truncated': (n), - 'occluded': (n), - 'alpha': (n), - 'bbox': (n, 4), - 'dimensions': (n, 3), - 'location': (n, 3), - 'rotation_y': (n), - 'score': (n), - 'index': array([0], dtype=int32), - 'group_ids': array([0], dtype=int32), - 'difficulty': array([0], dtype=int32), - 'num_points_in_gt': (n), - } - } - ] - """ - - def load_annotations(self, ann_file): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations. - """ - self.data_infos = mmcv.load(ann_file) - self.cat2label = { - cat_name: i - for i, cat_name in enumerate(self.CLASSES) - } - return self.data_infos - - def _filter_imgs(self, min_size=32): - """Filter images without ground truths.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if len(img_info['annos']['name']) > 0: - valid_inds.append(i) - return valid_inds - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: Annotation information consists of the following keys: - - - bboxes (np.ndarray): Ground truth bboxes. - - labels (np.ndarray): Labels of ground truths. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - annos = info['annos'] - gt_names = annos['name'] - gt_bboxes = annos['bbox'] - difficulty = annos['difficulty'] - - # remove classes that is not needed - selected = self.keep_arrays_by_name(gt_names, self.CLASSES) - gt_bboxes = gt_bboxes[selected] - gt_names = gt_names[selected] - difficulty = difficulty[selected] - gt_labels = np.array([self.cat2label[n] for n in gt_names]) - - anns_results = dict( - bboxes=gt_bboxes.astype(np.float32), - labels=gt_labels, - ) - return anns_results - - def prepare_train_img(self, idx): - """Training image preparation. - - Args: - index (int): Index for accessing the target image data. - - Returns: - dict: Training image data dict after preprocessing - corresponding to the index. - """ - img_raw_info = self.data_infos[idx]['image'] - img_info = dict(filename=img_raw_info['image_path']) - ann_info = self.get_ann_info(idx) - if len(ann_info['bboxes']) == 0: - return None - results = dict(img_info=img_info, ann_info=ann_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Prepare data for testing. - - Args: - index (int): Index for accessing the target image data. - - Returns: - dict: Testing image data dict after preprocessing - corresponding to the index. 
- """ - img_raw_info = self.data_infos[idx]['image'] - img_info = dict(filename=img_raw_info['image_path']) - results = dict(img_info=img_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def drop_arrays_by_name(self, gt_names, used_classes): - """Drop irrelevant ground truths by name. - - Args: - gt_names (list[str]): Names of ground truths. - used_classes (list[str]): Classes of interest. - - Returns: - np.ndarray: Indices of ground truths that will be dropped. - """ - inds = [i for i, x in enumerate(gt_names) if x not in used_classes] - inds = np.array(inds, dtype=np.int64) - return inds - - def keep_arrays_by_name(self, gt_names, used_classes): - """Keep useful ground truths by name. - - Args: - gt_names (list[str]): Names of ground truths. - used_classes (list[str]): Classes of interest. - - Returns: - np.ndarray: Indices of ground truths that will be keeped. - """ - inds = [i for i, x in enumerate(gt_names) if x in used_classes] - inds = np.array(inds, dtype=np.int64) - return inds - - def reformat_bbox(self, outputs, out=None): - """Reformat bounding boxes to KITTI 2D styles. - - Args: - outputs (list[np.ndarray]): List of arrays storing the inferenced - bounding boxes and scores. - out (str, optional): The prefix of output file. - Default: None. - - Returns: - list[dict]: A list of dictionaries with the kitti 2D format. - """ - from mmdet3d.core.bbox.transforms import bbox2result_kitti2d - sample_idx = [info['image']['image_idx'] for info in self.data_infos] - result_files = bbox2result_kitti2d(outputs, self.CLASSES, sample_idx, - out) - return result_files - - def evaluate(self, result_files, eval_types=None): - """Evaluation in KITTI protocol. - - Args: - result_files (str): Path of result files. - eval_types (str, optional): Types of evaluation. Default: None. - KITTI dataset only support 'bbox' evaluation type. - - Returns: - tuple (str, dict): Average precision results in str format - and average precision results in dict format. - """ - from mmdet3d.core.evaluation import kitti_eval - eval_types = ['bbox'] if not eval_types else eval_types - assert eval_types in ('bbox', ['bbox' - ]), 'KITTI data set only evaluate bbox' - gt_annos = [info['annos'] for info in self.data_infos] - ap_result_str, ap_dict = kitti_eval( - gt_annos, result_files, self.CLASSES, eval_types=['bbox']) - return ap_result_str, ap_dict diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti_dataset.py deleted file mode 100644 index 48025387..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti_dataset.py +++ /dev/null @@ -1,773 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import os -import tempfile -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.utils import print_log - -from ..core import show_multi_modality_result, show_result -from ..core.bbox import (Box3DMode, CameraInstance3DBoxes, Coord3DMode, - LiDARInstance3DBoxes, points_cam2img) -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class KittiDataset(Custom3DDataset): - r"""KITTI Dataset. - - This class serves as the API for experiments on the `KITTI Dataset - `_. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - split (str): Split of input data. 
- pts_prefix (str, optional): Prefix of points files. - Defaults to 'velodyne'. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - pcd_limit_range (list, optional): The range of point cloud used to - filter invalid predicted boxes. - Default: [0, -40, -3, 70.4, 40, 0.0]. - """ - CLASSES = ('car', 'pedestrian', 'cyclist') - - def __init__(self, - data_root, - ann_file, - split, - pts_prefix='velodyne', - pipeline=None, - classes=None, - modality=None, - box_type_3d='LiDAR', - filter_empty_gt=True, - test_mode=False, - pcd_limit_range=[0, -40, -3, 70.4, 40, 0.0], - **kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - **kwargs) - - self.split = split - self.root_split = os.path.join(self.data_root, split) - assert self.modality is not None - self.pcd_limit_range = pcd_limit_range - self.pts_prefix = pts_prefix - - def _get_pts_filename(self, idx): - """Get point cloud filename according to the given index. - - Args: - index (int): Index of the point cloud file to get. - - Returns: - str: Name of the point cloud file. - """ - pts_filename = osp.join(self.root_split, self.pts_prefix, - f'{idx:06d}.bin') - return pts_filename - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - img_prefix (str): Prefix of image files. - - img_info (dict): Image info. - - lidar2img (list[np.ndarray], optional): Transformations - from lidar to different cameras. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - sample_idx = info['image']['image_idx'] - img_filename = os.path.join(self.data_root, - info['image']['image_path']) - - # TODO: consider use torch.Tensor only - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P2 = info['calib']['P2'].astype(np.float32) - lidar2img = P2 @ rect @ Trv2c - - pts_filename = self._get_pts_filename(sample_idx) - input_dict = dict( - sample_idx=sample_idx, - pts_filename=pts_filename, - img_prefix=None, - img_info=dict(filename=img_filename), - lidar2img=lidar2img) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. 
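The calibration handling in `get_data_info` above composes one matrix that carries a homogeneous LiDAR point all the way to image coordinates. A sketch with dummy identity calibration (real values come from the KITTI calib files):

```python
import numpy as np

# Dummy 4x4 calibration matrices standing in for info['calib'] entries.
R0_rect = np.eye(4, dtype=np.float32)
Tr_velo_to_cam = np.eye(4, dtype=np.float32)
P2 = np.eye(4, dtype=np.float32)

# Same composition as in get_data_info: lidar -> camera -> rectified -> image.
lidar2img = P2 @ R0_rect @ Tr_velo_to_cam

# A point already "in front" of the dummy camera, in homogeneous coordinates.
pt_lidar = np.array([2.0, 1.0, 10.0, 1.0], dtype=np.float32)
pt_img = lidar2img @ pt_lidar
u, v = pt_img[0] / pt_img[2], pt_img[1] / pt_img[2]  # perspective divide
```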
- - Returns: - dict: annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): - 3D ground truth bboxes. - - gt_labels_3d (np.ndarray): Labels of ground truths. - - gt_bboxes (np.ndarray): 2D ground truth bboxes. - - gt_labels (np.ndarray): Labels of ground truths. - - gt_names (list[str]): Class names of ground truths. - - difficulty (int): Difficulty defined by KITTI. - 0, 1, 2 represent xxxxx respectively. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - - if 'plane' in info: - # convert ground plane to velodyne coordinates - reverse = np.linalg.inv(rect @ Trv2c) - - (plane_norm_cam, - plane_off_cam) = (info['plane'][:3], - -info['plane'][:3] * info['plane'][3]) - plane_norm_lidar = \ - (reverse[:3, :3] @ plane_norm_cam[:, None])[:, 0] - plane_off_lidar = ( - reverse[:3, :3] @ plane_off_cam[:, None][:, 0] + - reverse[:3, 3]) - plane_lidar = np.zeros_like(plane_norm_lidar, shape=(4, )) - plane_lidar[:3] = plane_norm_lidar - plane_lidar[3] = -plane_norm_lidar.T @ plane_off_lidar - else: - plane_lidar = None - - difficulty = info['annos']['difficulty'] - annos = info['annos'] - # we need other objects to avoid collision when sample - annos = self.remove_dontcare(annos) - loc = annos['location'] - dims = annos['dimensions'] - rots = annos['rotation_y'] - gt_names = annos['name'] - gt_bboxes_3d = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1).astype(np.float32) - - # convert gt_bboxes_3d to velodyne coordinates - gt_bboxes_3d = CameraInstance3DBoxes(gt_bboxes_3d).convert_to( - self.box_mode_3d, np.linalg.inv(rect @ Trv2c)) - gt_bboxes = annos['bbox'] - - selected = self.drop_arrays_by_name(gt_names, ['DontCare']) - gt_bboxes = gt_bboxes[selected].astype('float32') - gt_names = gt_names[selected] - - gt_labels = [] - for cat in gt_names: - if cat in self.CLASSES: - gt_labels.append(self.CLASSES.index(cat)) - else: - gt_labels.append(-1) - gt_labels = np.array(gt_labels).astype(np.int64) - gt_labels_3d = copy.deepcopy(gt_labels) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - bboxes=gt_bboxes, - labels=gt_labels, - gt_names=gt_names, - plane=plane_lidar, - difficulty=difficulty) - return anns_results - - def drop_arrays_by_name(self, gt_names, used_classes): - """Drop irrelevant ground truths by name. - - Args: - gt_names (list[str]): Names of ground truths. - used_classes (list[str]): Classes of interest. - - Returns: - np.ndarray: Indices of ground truths that will be dropped. - """ - inds = [i for i, x in enumerate(gt_names) if x not in used_classes] - inds = np.array(inds, dtype=np.int64) - return inds - - def keep_arrays_by_name(self, gt_names, used_classes): - """Keep useful ground truths by name. - - Args: - gt_names (list[str]): Names of ground truths. - used_classes (list[str]): Classes of interest. - - Returns: - np.ndarray: Indices of ground truths that will be keeped. - """ - inds = [i for i, x in enumerate(gt_names) if x in used_classes] - inds = np.array(inds, dtype=np.int64) - return inds - - def remove_dontcare(self, ann_info): - """Remove annotations that do not need to be cared. - - Args: - ann_info (dict): Dict of annotation infos. The ``'DontCare'`` - annotations will be removed according to ann_file['name']. - - Returns: - dict: Annotations after filtering. 
- """ - img_filtered_annotations = {} - relevant_annotation_indices = [ - i for i, x in enumerate(ann_info['name']) if x != 'DontCare' - ] - for key in ann_info.keys(): - img_filtered_annotations[key] = ( - ann_info[key][relevant_annotation_indices]) - return img_filtered_annotations - - def format_results(self, - outputs, - pklfile_prefix=None, - submission_prefix=None): - """Format the results to pkl file. - - Args: - outputs (list[dict]): Testing results of the dataset. - pklfile_prefix (str): The prefix of pkl files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str): The prefix of submitted files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". If not specified, a temp file will be created. - Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing - the json filepaths, tmp_dir is the temporal directory created - for saving json files when jsonfile_prefix is not specified. - """ - if pklfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - if not isinstance(outputs[0], dict): - result_files = self.bbox2result_kitti2d(outputs, self.CLASSES, - pklfile_prefix, - submission_prefix) - elif 'pts_bbox' in outputs[0] or 'img_bbox' in outputs[0]: - result_files = dict() - for name in outputs[0]: - results_ = [out[name] for out in outputs] - pklfile_prefix_ = pklfile_prefix + name - if submission_prefix is not None: - submission_prefix_ = submission_prefix + name - else: - submission_prefix_ = None - if 'img' in name: - result_files = self.bbox2result_kitti2d( - results_, self.CLASSES, pklfile_prefix_, - submission_prefix_) - else: - result_files_ = self.bbox2result_kitti( - results_, self.CLASSES, pklfile_prefix_, - submission_prefix_) - result_files[name] = result_files_ - else: - result_files = self.bbox2result_kitti(outputs, self.CLASSES, - pklfile_prefix, - submission_prefix) - return result_files, tmp_dir - - def evaluate(self, - results, - metric=None, - logger=None, - pklfile_prefix=None, - submission_prefix=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluation in KITTI protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str], optional): Metrics to be evaluated. - Default: None. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - pklfile_prefix (str, optional): The prefix of pkl files, including - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str, optional): The prefix of submission data. - If not specified, the submission data will not be generated. - Default: None. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict[str, float]: Results of each evaluation metric. 
- """ - result_files, tmp_dir = self.format_results(results, pklfile_prefix) - from mmdet3d.core.evaluation import kitti_eval - gt_annos = [info['annos'] for info in self.data_infos] - - if isinstance(result_files, dict): - ap_dict = dict() - for name, result_files_ in result_files.items(): - eval_types = ['bbox', 'bev', '3d'] - if 'img' in name: - eval_types = ['bbox'] - ap_result_str, ap_dict_ = kitti_eval( - gt_annos, - result_files_, - self.CLASSES, - eval_types=eval_types) - for ap_type, ap in ap_dict_.items(): - ap_dict[f'{name}/{ap_type}'] = float('{:.4f}'.format(ap)) - - print_log( - f'Results of {name}:\n' + ap_result_str, logger=logger) - - else: - if metric == 'img_bbox': - ap_result_str, ap_dict = kitti_eval( - gt_annos, result_files, self.CLASSES, eval_types=['bbox']) - else: - ap_result_str, ap_dict = kitti_eval(gt_annos, result_files, - self.CLASSES) - print_log('\n' + ap_result_str, logger=logger) - - if tmp_dir is not None: - tmp_dir.cleanup() - if show or out_dir: - self.show(results, out_dir, show=show, pipeline=pipeline) - return ap_dict - - def bbox2result_kitti(self, - net_outputs, - class_names, - pklfile_prefix=None, - submission_prefix=None): - """Convert 3D detection results to kitti format for evaluation and test - submission. - - Args: - net_outputs (list[np.ndarray]): List of array storing the - inferenced bounding boxes and scores. - class_names (list[String]): A list of class names. - pklfile_prefix (str): The prefix of pkl file. - submission_prefix (str): The prefix of submission file. - - Returns: - list[dict]: A list of dictionaries with the kitti format. - """ - assert len(net_outputs) == len(self.data_infos), \ - 'invalid list length of network outputs' - if submission_prefix is not None: - mmcv.mkdir_or_exist(submission_prefix) - - det_annos = [] - print('\nConverting prediction to KITTI format') - for idx, pred_dicts in enumerate( - mmcv.track_iter_progress(net_outputs)): - annos = [] - info = self.data_infos[idx] - sample_idx = info['image']['image_idx'] - image_shape = info['image']['image_shape'][:2] - box_dict = self.convert_valid_bboxes(pred_dicts, info) - anno = { - 'name': [], - 'truncated': [], - 'occluded': [], - 'alpha': [], - 'bbox': [], - 'dimensions': [], - 'location': [], - 'rotation_y': [], - 'score': [] - } - if len(box_dict['bbox']) > 0: - box_2d_preds = box_dict['bbox'] - box_preds = box_dict['box3d_camera'] - scores = box_dict['scores'] - box_preds_lidar = box_dict['box3d_lidar'] - label_preds = box_dict['label_preds'] - - for box, box_lidar, bbox, score, label in zip( - box_preds, box_preds_lidar, box_2d_preds, scores, - label_preds): - bbox[2:] = np.minimum(bbox[2:], image_shape[::-1]) - bbox[:2] = np.maximum(bbox[:2], [0, 0]) - anno['name'].append(class_names[int(label)]) - anno['truncated'].append(0.0) - anno['occluded'].append(0) - anno['alpha'].append( - -np.arctan2(-box_lidar[1], box_lidar[0]) + box[6]) - anno['bbox'].append(bbox) - anno['dimensions'].append(box[3:6]) - anno['location'].append(box[:3]) - anno['rotation_y'].append(box[6]) - anno['score'].append(score) - - anno = {k: np.stack(v) for k, v in anno.items()} - annos.append(anno) - else: - anno = { - 'name': np.array([]), - 'truncated': np.array([]), - 'occluded': np.array([]), - 'alpha': np.array([]), - 'bbox': np.zeros([0, 4]), - 'dimensions': np.zeros([0, 3]), - 'location': np.zeros([0, 3]), - 'rotation_y': np.array([]), - 'score': np.array([]), - } - annos.append(anno) - - if submission_prefix is not None: - curr_file = f'{submission_prefix}/{sample_idx:06d}.txt' 
- with open(curr_file, 'w') as f: - bbox = anno['bbox'] - loc = anno['location'] - dims = anno['dimensions'] # lhw -> hwl - - for idx in range(len(bbox)): - print( - '{} -1 -1 {:.4f} {:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} {:.4f} {:.4f} {:.4f}'.format( - anno['name'][idx], anno['alpha'][idx], - bbox[idx][0], bbox[idx][1], bbox[idx][2], - bbox[idx][3], dims[idx][1], dims[idx][2], - dims[idx][0], loc[idx][0], loc[idx][1], - loc[idx][2], anno['rotation_y'][idx], - anno['score'][idx]), - file=f) - - annos[-1]['sample_idx'] = np.array( - [sample_idx] * len(annos[-1]['score']), dtype=np.int64) - - det_annos += annos - - if pklfile_prefix is not None: - if not pklfile_prefix.endswith(('.pkl', '.pickle')): - out = f'{pklfile_prefix}.pkl' - mmcv.dump(det_annos, out) - print(f'Result is saved to {out}.') - - return det_annos - - def bbox2result_kitti2d(self, - net_outputs, - class_names, - pklfile_prefix=None, - submission_prefix=None): - """Convert 2D detection results to kitti format for evaluation and test - submission. - - Args: - net_outputs (list[np.ndarray]): List of array storing the - inferenced bounding boxes and scores. - class_names (list[String]): A list of class names. - pklfile_prefix (str): The prefix of pkl file. - submission_prefix (str): The prefix of submission file. - - Returns: - list[dict]: A list of dictionaries have the kitti format - """ - assert len(net_outputs) == len(self.data_infos), \ - 'invalid list length of network outputs' - det_annos = [] - print('\nConverting prediction to KITTI format') - for i, bboxes_per_sample in enumerate( - mmcv.track_iter_progress(net_outputs)): - annos = [] - anno = dict( - name=[], - truncated=[], - occluded=[], - alpha=[], - bbox=[], - dimensions=[], - location=[], - rotation_y=[], - score=[]) - sample_idx = self.data_infos[i]['image']['image_idx'] - - num_example = 0 - for label in range(len(bboxes_per_sample)): - bbox = bboxes_per_sample[label] - for i in range(bbox.shape[0]): - anno['name'].append(class_names[int(label)]) - anno['truncated'].append(0.0) - anno['occluded'].append(0) - anno['alpha'].append(0.0) - anno['bbox'].append(bbox[i, :4]) - # set dimensions (height, width, length) to zero - anno['dimensions'].append( - np.zeros(shape=[3], dtype=np.float32)) - # set the 3D translation to (-1000, -1000, -1000) - anno['location'].append( - np.ones(shape=[3], dtype=np.float32) * (-1000.0)) - anno['rotation_y'].append(0.0) - anno['score'].append(bbox[i, 4]) - num_example += 1 - - if num_example == 0: - annos.append( - dict( - name=np.array([]), - truncated=np.array([]), - occluded=np.array([]), - alpha=np.array([]), - bbox=np.zeros([0, 4]), - dimensions=np.zeros([0, 3]), - location=np.zeros([0, 3]), - rotation_y=np.array([]), - score=np.array([]), - )) - else: - anno = {k: np.stack(v) for k, v in anno.items()} - annos.append(anno) - - annos[-1]['sample_idx'] = np.array( - [sample_idx] * num_example, dtype=np.int64) - det_annos += annos - - if pklfile_prefix is not None: - # save file in pkl format - pklfile_path = ( - pklfile_prefix[:-4] if pklfile_prefix.endswith( - ('.pkl', '.pickle')) else pklfile_prefix) - mmcv.dump(det_annos, pklfile_path) - - if submission_prefix is not None: - # save file in submission format - mmcv.mkdir_or_exist(submission_prefix) - print(f'Saving KITTI submission to {submission_prefix}') - for i, anno in enumerate(det_annos): - sample_idx = self.data_infos[i]['image']['image_idx'] - cur_det_file = f'{submission_prefix}/{sample_idx:06d}.txt' - with open(cur_det_file, 'w') as 
f: - bbox = anno['bbox'] - loc = anno['location'] - dims = anno['dimensions'][::-1] # lhw -> hwl - for idx in range(len(bbox)): - print( - '{} -1 -1 {:4f} {:4f} {:4f} {:4f} {:4f} {:4f} ' - '{:4f} {:4f} {:4f} {:4f} {:4f} {:4f} {:4f}'.format( - anno['name'][idx], - anno['alpha'][idx], - *bbox[idx], # 4 float - *dims[idx], # 3 float - *loc[idx], # 3 float - anno['rotation_y'][idx], - anno['score'][idx]), - file=f, - ) - print(f'Result is saved to {submission_prefix}') - - return det_annos - - def convert_valid_bboxes(self, box_dict, info): - """Convert the predicted boxes into valid ones. - - Args: - box_dict (dict): Box dictionaries to be converted. - - - boxes_3d (:obj:`LiDARInstance3DBoxes`): 3D bounding boxes. - - scores_3d (torch.Tensor): Scores of boxes. - - labels_3d (torch.Tensor): Class labels of boxes. - info (dict): Data info. - - Returns: - dict: Valid predicted boxes. - - - bbox (np.ndarray): 2D bounding boxes. - - box3d_camera (np.ndarray): 3D bounding boxes in - camera coordinate. - - box3d_lidar (np.ndarray): 3D bounding boxes in - LiDAR coordinate. - - scores (np.ndarray): Scores of boxes. - - label_preds (np.ndarray): Class label predictions. - - sample_idx (int): Sample index. - """ - # TODO: refactor this function - box_preds = box_dict['boxes_3d'] - scores = box_dict['scores_3d'] - labels = box_dict['labels_3d'] - sample_idx = info['image']['image_idx'] - box_preds.limit_yaw(offset=0.5, period=np.pi * 2) - - if len(box_preds) == 0: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - box3d_lidar=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx) - - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P2 = info['calib']['P2'].astype(np.float32) - img_shape = info['image']['image_shape'] - P2 = box_preds.tensor.new_tensor(P2) - - box_preds_camera = box_preds.convert_to(Box3DMode.CAM, rect @ Trv2c) - - box_corners = box_preds_camera.corners - box_corners_in_image = points_cam2img(box_corners, P2) - # box_corners_in_image: [N, 8, 2] - minxy = torch.min(box_corners_in_image, dim=1)[0] - maxxy = torch.max(box_corners_in_image, dim=1)[0] - box_2d_preds = torch.cat([minxy, maxxy], dim=1) - # Post-processing - # check box_preds_camera - image_shape = box_preds.tensor.new_tensor(img_shape) - valid_cam_inds = ((box_2d_preds[:, 0] < image_shape[1]) & - (box_2d_preds[:, 1] < image_shape[0]) & - (box_2d_preds[:, 2] > 0) & (box_2d_preds[:, 3] > 0)) - # check box_preds - limit_range = box_preds.tensor.new_tensor(self.pcd_limit_range) - valid_pcd_inds = ((box_preds.center > limit_range[:3]) & - (box_preds.center < limit_range[3:])) - valid_inds = valid_cam_inds & valid_pcd_inds.all(-1) - - if valid_inds.sum() > 0: - return dict( - bbox=box_2d_preds[valid_inds, :].numpy(), - box3d_camera=box_preds_camera[valid_inds].tensor.numpy(), - box3d_lidar=box_preds[valid_inds].tensor.numpy(), - scores=scores[valid_inds].numpy(), - label_preds=labels[valid_inds].numpy(), - sample_idx=sample_idx) - else: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - box3d_lidar=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx) - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=dict(backend='disk')), - dict( - type='DefaultFormatBundle3D', - 
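`convert_valid_bboxes` above turns each 3D box into an axis-aligned 2D box by projecting its eight corners and taking the min/max, then keeps only boxes that land inside the image. A sketch of just that step, with fabricated corner projections standing in for `points_cam2img`:

```python
import torch

# Fabricated (N, 8, 2) image-plane corner coordinates for one box.
box_corners_in_image = torch.tensor(
    [[[10., 20.], [50., 20.], [50., 80.], [10., 80.],
      [12., 22.], [52., 22.], [52., 82.], [12., 82.]]])
img_h, img_w = 375, 1242  # typical KITTI image size, used only as an example

# Axis-aligned 2D box = min/max over the projected corners.
minxy = box_corners_in_image.min(dim=1)[0]
maxxy = box_corners_in_image.max(dim=1)[0]
box_2d_preds = torch.cat([minxy, maxxy], dim=1)  # (N, 4): x1, y1, x2, y2

# Same validity test as above: the box must overlap the image.
valid = ((box_2d_preds[:, 0] < img_w) & (box_2d_preds[:, 1] < img_h) &
         (box_2d_preds[:, 2] > 0) & (box_2d_preds[:, 3] > 0))
print(box_2d_preds[valid])  # tensor([[10., 20., 52., 82.]])
```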
class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - if self.modality['use_camera']: - pipeline.insert(0, dict(type='LoadImageFromFile')) - return Compose(pipeline) - - def show(self, results, out_dir, show=True, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Whether to visualize the results online. - Default: False. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' - pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - if 'pts_bbox' in result.keys(): - result = result['pts_bbox'] - data_info = self.data_infos[i] - pts_path = data_info['point_cloud']['velodyne_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points, img_metas, img = self._extract_data( - i, pipeline, ['points', 'img_metas', 'img']) - points = points.numpy() - # for now we convert points into depth mode - points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy() - show_gt_bboxes = Box3DMode.convert(gt_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - pred_bboxes = result['boxes_3d'].tensor.numpy() - show_pred_bboxes = Box3DMode.convert(pred_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - show_result(points, show_gt_bboxes, show_pred_bboxes, out_dir, - file_name, show) - - # multi-modality visualization - if self.modality['use_camera'] and 'lidar2img' in img_metas.keys(): - img = img.numpy() - # need to transpose channel to first dim - img = img.transpose(1, 2, 0) - show_pred_bboxes = LiDARInstance3DBoxes( - pred_bboxes, origin=(0.5, 0.5, 0)) - show_gt_bboxes = LiDARInstance3DBoxes( - gt_bboxes, origin=(0.5, 0.5, 0)) - show_multi_modality_result( - img, - show_gt_bboxes, - show_pred_bboxes, - img_metas['lidar2img'], - out_dir, - file_name, - box_mode='lidar', - show=show) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti_mono_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti_mono_dataset.py deleted file mode 100644 index c669b0af..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/kitti_mono_dataset.py +++ /dev/null @@ -1,569 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import tempfile -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.utils import print_log - -from ..core.bbox import Box3DMode, CameraInstance3DBoxes, points_cam2img -from .builder import DATASETS -from .nuscenes_mono_dataset import NuScenesMonoDataset - - -@DATASETS.register_module() -class KittiMonoDataset(NuScenesMonoDataset): - """Monocular 3D detection on KITTI Dataset. - - Args: - data_root (str): Path of dataset root. - info_file (str): Path of info file. - load_interval (int, optional): Interval of loading the dataset. It is - used to uniformly sample the dataset. Defaults to 1. - with_velocity (bool, optional): Whether include velocity prediction - into the experiments. Defaults to False. - eval_version (str, optional): Configuration version of evaluation. - Defaults to None. - version (str, optional): Dataset version. Defaults to None. - kwargs (dict): Other arguments are the same of NuScenesMonoDataset. 
- """ - - CLASSES = ('Pedestrian', 'Cyclist', 'Car') - - def __init__(self, - data_root, - info_file, - ann_file, - pipeline, - load_interval=1, - with_velocity=False, - eval_version=None, - version=None, - **kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - load_interval=load_interval, - with_velocity=with_velocity, - eval_version=eval_version, - version=version, - **kwargs) - self.anno_infos = mmcv.load(info_file) - self.bbox_code_size = 7 - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - ann_info (list[dict]): Annotation info of an image. - with_mask (bool): Whether to parse mask annotations. - - Returns: - dict: A dict containing the following keys: bboxes, bboxes_ignore, - labels, masks, seg_map. "masks" are raw annotations and not - decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - gt_bboxes_cam3d = [] - centers2d = [] - depths = [] - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - inter_w = max(0, min(x1 + w, img_info['width']) - max(x1, 0)) - inter_h = max(0, min(y1 + h, img_info['height']) - max(y1, 0)) - if inter_w * inter_h == 0: - continue - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann.get('segmentation', None)) - # 3D annotations in camera coordinates - bbox_cam3d = np.array(ann['bbox_cam3d']).reshape(-1, ) - gt_bboxes_cam3d.append(bbox_cam3d) - # 2.5D annotations in camera coordinates - center2d = ann['center2d'][:2] - depth = ann['center2d'][2] - centers2d.append(center2d) - depths.append(depth) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_cam3d: - gt_bboxes_cam3d = np.array(gt_bboxes_cam3d, dtype=np.float32) - centers2d = np.array(centers2d, dtype=np.float32) - depths = np.array(depths, dtype=np.float32) - else: - gt_bboxes_cam3d = np.zeros((0, self.bbox_code_size), - dtype=np.float32) - centers2d = np.zeros((0, 2), dtype=np.float32) - depths = np.zeros((0), dtype=np.float32) - - gt_bboxes_cam3d = CameraInstance3DBoxes( - gt_bboxes_cam3d, - box_dim=gt_bboxes_cam3d.shape[-1], - origin=(0.5, 0.5, 0.5)) - gt_labels_3d = copy.deepcopy(gt_labels) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - seg_map = img_info['filename'].replace('jpg', 'png') - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - gt_bboxes_3d=gt_bboxes_cam3d, - gt_labels_3d=gt_labels_3d, - centers2d=centers2d, - depths=depths, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=seg_map) - - return ann - - def format_results(self, - outputs, - pklfile_prefix=None, - submission_prefix=None): - """Format the results to pkl file. - - Args: - outputs (list[dict]): Testing results of the dataset. - pklfile_prefix (str): The prefix of pkl files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. 
- submission_prefix (str): The prefix of submitted files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". If not specified, a temp file will be created. - Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing - the json filepaths, tmp_dir is the temporal directory created - for saving json files when jsonfile_prefix is not specified. - """ - if pklfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - if not isinstance(outputs[0], dict): - result_files = self.bbox2result_kitti2d(outputs, self.CLASSES, - pklfile_prefix, - submission_prefix) - elif 'pts_bbox' in outputs[0] or 'img_bbox' in outputs[0] or \ - 'img_bbox2d' in outputs[0]: - result_files = dict() - for name in outputs[0]: - results_ = [out[name] for out in outputs] - pklfile_prefix_ = pklfile_prefix + name - if submission_prefix is not None: - submission_prefix_ = submission_prefix + name - else: - submission_prefix_ = None - if '2d' in name: - result_files_ = self.bbox2result_kitti2d( - results_, self.CLASSES, pklfile_prefix_, - submission_prefix_) - else: - result_files_ = self.bbox2result_kitti( - results_, self.CLASSES, pklfile_prefix_, - submission_prefix_) - result_files[name] = result_files_ - else: - result_files = self.bbox2result_kitti(outputs, self.CLASSES, - pklfile_prefix, - submission_prefix) - return result_files, tmp_dir - - def evaluate(self, - results, - metric=None, - logger=None, - pklfile_prefix=None, - submission_prefix=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluation in KITTI protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str], optional): Metrics to be evaluated. - Defaults to None. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - pklfile_prefix (str, optional): The prefix of pkl files, including - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str, optional): The prefix of submission data. - If not specified, the submission data will not be generated. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict[str, float]: Results of each evaluation metric. 
- """ - result_files, tmp_dir = self.format_results(results, pklfile_prefix) - from mmdet3d.core.evaluation import kitti_eval - gt_annos = [info['annos'] for info in self.anno_infos] - - if isinstance(result_files, dict): - ap_dict = dict() - for name, result_files_ in result_files.items(): - eval_types = ['bbox', 'bev', '3d'] - if '2d' in name: - eval_types = ['bbox'] - ap_result_str, ap_dict_ = kitti_eval( - gt_annos, - result_files_, - self.CLASSES, - eval_types=eval_types) - for ap_type, ap in ap_dict_.items(): - ap_dict[f'{name}/{ap_type}'] = float('{:.4f}'.format(ap)) - - print_log( - f'Results of {name}:\n' + ap_result_str, logger=logger) - - else: - if metric == 'img_bbox2d': - ap_result_str, ap_dict = kitti_eval( - gt_annos, result_files, self.CLASSES, eval_types=['bbox']) - else: - ap_result_str, ap_dict = kitti_eval(gt_annos, result_files, - self.CLASSES) - print_log('\n' + ap_result_str, logger=logger) - - if tmp_dir is not None: - tmp_dir.cleanup() - if show or out_dir: - self.show(results, out_dir, show=show, pipeline=pipeline) - return ap_dict - - def bbox2result_kitti(self, - net_outputs, - class_names, - pklfile_prefix=None, - submission_prefix=None): - """Convert 3D detection results to kitti format for evaluation and test - submission. - - Args: - net_outputs (list[np.ndarray]): List of array storing the - inferenced bounding boxes and scores. - class_names (list[String]): A list of class names. - pklfile_prefix (str): The prefix of pkl file. - submission_prefix (str): The prefix of submission file. - - Returns: - list[dict]: A list of dictionaries with the kitti format. - """ - assert len(net_outputs) == len(self.anno_infos) - if submission_prefix is not None: - mmcv.mkdir_or_exist(submission_prefix) - - det_annos = [] - print('\nConverting prediction to KITTI format') - for idx, pred_dicts in enumerate( - mmcv.track_iter_progress(net_outputs)): - annos = [] - info = self.anno_infos[idx] - sample_idx = info['image']['image_idx'] - image_shape = info['image']['image_shape'][:2] - - box_dict = self.convert_valid_bboxes(pred_dicts, info) - anno = { - 'name': [], - 'truncated': [], - 'occluded': [], - 'alpha': [], - 'bbox': [], - 'dimensions': [], - 'location': [], - 'rotation_y': [], - 'score': [] - } - if len(box_dict['bbox']) > 0: - box_2d_preds = box_dict['bbox'] - box_preds = box_dict['box3d_camera'] - scores = box_dict['scores'] - box_preds_lidar = box_dict['box3d_lidar'] - label_preds = box_dict['label_preds'] - - for box, box_lidar, bbox, score, label in zip( - box_preds, box_preds_lidar, box_2d_preds, scores, - label_preds): - bbox[2:] = np.minimum(bbox[2:], image_shape[::-1]) - bbox[:2] = np.maximum(bbox[:2], [0, 0]) - anno['name'].append(class_names[int(label)]) - anno['truncated'].append(0.0) - anno['occluded'].append(0) - anno['alpha'].append(-np.arctan2(box[0], box[2]) + box[6]) - anno['bbox'].append(bbox) - anno['dimensions'].append(box[3:6]) - anno['location'].append(box[:3]) - anno['rotation_y'].append(box[6]) - anno['score'].append(score) - - anno = {k: np.stack(v) for k, v in anno.items()} - annos.append(anno) - - else: - anno = { - 'name': np.array([]), - 'truncated': np.array([]), - 'occluded': np.array([]), - 'alpha': np.array([]), - 'bbox': np.zeros([0, 4]), - 'dimensions': np.zeros([0, 3]), - 'location': np.zeros([0, 3]), - 'rotation_y': np.array([]), - 'score': np.array([]), - } - annos.append(anno) - - if submission_prefix is not None: - curr_file = f'{submission_prefix}/{sample_idx:06d}.txt' - with open(curr_file, 'w') as f: - bbox = anno['bbox'] 
- loc = anno['location'] - dims = anno['dimensions'] # lhw -> hwl - - for idx in range(len(bbox)): - print( - '{} -1 -1 {:.4f} {:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} {:.4f} {:.4f} {:.4f}'.format( - anno['name'][idx], anno['alpha'][idx], - bbox[idx][0], bbox[idx][1], bbox[idx][2], - bbox[idx][3], dims[idx][1], dims[idx][2], - dims[idx][0], loc[idx][0], loc[idx][1], - loc[idx][2], anno['rotation_y'][idx], - anno['score'][idx]), - file=f) - - annos[-1]['sample_idx'] = np.array( - [sample_idx] * len(annos[-1]['score']), dtype=np.int64) - - det_annos += annos - - if pklfile_prefix is not None: - if not pklfile_prefix.endswith(('.pkl', '.pickle')): - out = f'{pklfile_prefix}.pkl' - mmcv.dump(det_annos, out) - print('Result is saved to %s' % out) - - return det_annos - - def bbox2result_kitti2d(self, - net_outputs, - class_names, - pklfile_prefix=None, - submission_prefix=None): - """Convert 2D detection results to kitti format for evaluation and test - submission. - - Args: - net_outputs (list[np.ndarray]): List of array storing the - inferenced bounding boxes and scores. - class_names (list[String]): A list of class names. - pklfile_prefix (str): The prefix of pkl file. - submission_prefix (str): The prefix of submission file. - - Returns: - list[dict]: A list of dictionaries have the kitti format - """ - assert len(net_outputs) == len(self.anno_infos) - - det_annos = [] - print('\nConverting prediction to KITTI format') - for i, bboxes_per_sample in enumerate( - mmcv.track_iter_progress(net_outputs)): - annos = [] - anno = dict( - name=[], - truncated=[], - occluded=[], - alpha=[], - bbox=[], - dimensions=[], - location=[], - rotation_y=[], - score=[]) - sample_idx = self.anno_infos[i]['image']['image_idx'] - - num_example = 0 - for label in range(len(bboxes_per_sample)): - bbox = bboxes_per_sample[label] - for i in range(bbox.shape[0]): - anno['name'].append(class_names[int(label)]) - anno['truncated'].append(0.0) - anno['occluded'].append(0) - anno['alpha'].append(-10) - anno['bbox'].append(bbox[i, :4]) - # set dimensions (height, width, length) to zero - anno['dimensions'].append( - np.zeros(shape=[3], dtype=np.float32)) - # set the 3D translation to (-1000, -1000, -1000) - anno['location'].append( - np.ones(shape=[3], dtype=np.float32) * (-1000.0)) - anno['rotation_y'].append(0.0) - anno['score'].append(bbox[i, 4]) - num_example += 1 - - if num_example == 0: - annos.append( - dict( - name=np.array([]), - truncated=np.array([]), - occluded=np.array([]), - alpha=np.array([]), - bbox=np.zeros([0, 4]), - dimensions=np.zeros([0, 3]), - location=np.zeros([0, 3]), - rotation_y=np.array([]), - score=np.array([]), - )) - else: - anno = {k: np.stack(v) for k, v in anno.items()} - annos.append(anno) - - annos[-1]['sample_idx'] = np.array( - [sample_idx] * num_example, dtype=np.int64) - det_annos += annos - - if pklfile_prefix is not None: - if not pklfile_prefix.endswith(('.pkl', '.pickle')): - out = f'{pklfile_prefix}.pkl' - mmcv.dump(det_annos, out) - print('Result is saved to %s' % out) - - if submission_prefix is not None: - # save file in submission format - mmcv.mkdir_or_exist(submission_prefix) - print(f'Saving KITTI submission to {submission_prefix}') - for i, anno in enumerate(det_annos): - sample_idx = self.anno_infos[i]['image']['image_idx'] - cur_det_file = f'{submission_prefix}/{sample_idx:06d}.txt' - with open(cur_det_file, 'w') as f: - bbox = anno['bbox'] - loc = anno['location'] - dims = anno['dimensions'][::-1] # lhw -> hwl - for idx in 
range(len(bbox)): - print( - '{} -1 -1 {:4f} {:4f} {:4f} {:4f} {:4f} {:4f} ' - '{:4f} {:4f} {:4f} {:4f} {:4f} {:4f} {:4f}'.format( - anno['name'][idx], - anno['alpha'][idx], - *bbox[idx], # 4 float - *dims[idx], # 3 float - *loc[idx], # 3 float - anno['rotation_y'][idx], - anno['score'][idx]), - file=f, - ) - print(f'Result is saved to {submission_prefix}') - - return det_annos - - def convert_valid_bboxes(self, box_dict, info): - """Convert the predicted boxes into valid ones. - - Args: - box_dict (dict): Box dictionaries to be converted. - - boxes_3d (:obj:`CameraInstance3DBoxes`): 3D bounding boxes. - - scores_3d (torch.Tensor): Scores of boxes. - - labels_3d (torch.Tensor): Class labels of boxes. - info (dict): Data info. - - Returns: - dict: Valid predicted boxes. - - bbox (np.ndarray): 2D bounding boxes. - - box3d_camera (np.ndarray): 3D bounding boxes in - camera coordinate. - - scores (np.ndarray): Scores of boxes. - - label_preds (np.ndarray): Class label predictions. - - sample_idx (int): Sample index. - """ - box_preds = box_dict['boxes_3d'] - scores = box_dict['scores_3d'] - labels = box_dict['labels_3d'] - sample_idx = info['image']['image_idx'] - - if len(box_preds) == 0: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx) - - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P2 = info['calib']['P2'].astype(np.float32) - img_shape = info['image']['image_shape'] - P2 = box_preds.tensor.new_tensor(P2) - - box_preds_camera = box_preds - box_preds_lidar = box_preds.convert_to(Box3DMode.LIDAR, - np.linalg.inv(rect @ Trv2c)) - - box_corners = box_preds_camera.corners - box_corners_in_image = points_cam2img(box_corners, P2) - # box_corners_in_image: [N, 8, 2] - minxy = torch.min(box_corners_in_image, dim=1)[0] - maxxy = torch.max(box_corners_in_image, dim=1)[0] - box_2d_preds = torch.cat([minxy, maxxy], dim=1) - # Post-processing - # check box_preds_camera - image_shape = box_preds.tensor.new_tensor(img_shape) - valid_cam_inds = ((box_2d_preds[:, 0] < image_shape[1]) & - (box_2d_preds[:, 1] < image_shape[0]) & - (box_2d_preds[:, 2] > 0) & (box_2d_preds[:, 3] > 0)) - # check box_preds - valid_inds = valid_cam_inds - - if valid_inds.sum() > 0: - return dict( - bbox=box_2d_preds[valid_inds, :].numpy(), - box3d_camera=box_preds_camera[valid_inds].tensor.numpy(), - box3d_lidar=box_preds_lidar[valid_inds].tensor.numpy(), - scores=scores[valid_inds].numpy(), - label_preds=labels[valid_inds].numpy(), - sample_idx=sample_idx) - else: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - box3d_lidar=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/lyft_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/lyft_dataset.py deleted file mode 100644 index 031d86a9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/lyft_dataset.py +++ /dev/null @@ -1,567 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os -import tempfile -from os import path as osp - -import mmcv -import numpy as np -import pandas as pd -from lyft_dataset_sdk.lyftdataset import LyftDataset as Lyft -from lyft_dataset_sdk.utils.data_classes import Box as LyftBox -from pyquaternion import Quaternion - -from mmdet3d.core.evaluation.lyft_eval import lyft_eval -from ..core import show_result -from ..core.bbox import Box3DMode, Coord3DMode, LiDARInstance3DBoxes -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class LyftDataset(Custom3DDataset): - r"""Lyft Dataset. - - This class serves as the API for experiments on the Lyft Dataset. - - Please refer to - ``_ - for data downloading. - - Args: - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - data_root (str): Path of dataset root. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - load_interval (int, optional): Interval of loading the dataset. It is - used to uniformly sample the dataset. Defaults to 1. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - """ # noqa: E501 - NameMapping = { - 'bicycle': 'bicycle', - 'bus': 'bus', - 'car': 'car', - 'emergency_vehicle': 'emergency_vehicle', - 'motorcycle': 'motorcycle', - 'other_vehicle': 'other_vehicle', - 'pedestrian': 'pedestrian', - 'truck': 'truck', - 'animal': 'animal' - } - DefaultAttribute = { - 'car': 'is_stationary', - 'truck': 'is_stationary', - 'bus': 'is_stationary', - 'emergency_vehicle': 'is_stationary', - 'other_vehicle': 'is_stationary', - 'motorcycle': 'is_stationary', - 'bicycle': 'is_stationary', - 'pedestrian': 'is_stationary', - 'animal': 'is_stationary' - } - CLASSES = ('car', 'truck', 'bus', 'emergency_vehicle', 'other_vehicle', - 'motorcycle', 'bicycle', 'pedestrian', 'animal') - - def __init__(self, - ann_file, - pipeline=None, - data_root=None, - classes=None, - load_interval=1, - modality=None, - box_type_3d='LiDAR', - filter_empty_gt=True, - test_mode=False, - **kwargs): - self.load_interval = load_interval - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - **kwargs) - - if self.modality is None: - self.modality = dict( - use_camera=False, - use_lidar=True, - use_radar=False, - use_map=False, - use_external=False, - ) - - def load_annotations(self, ann_file): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations sorted by timestamps. 
- """ - # loading data from a file-like object needs file format - data = mmcv.load(ann_file, file_format='pkl') - data_infos = list(sorted(data['infos'], key=lambda e: e['timestamp'])) - data_infos = data_infos[::self.load_interval] - self.metadata = data['metadata'] - self.version = self.metadata['version'] - return data_infos - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): sample index - - pts_filename (str): filename of point clouds - - sweeps (list[dict]): infos of sweeps - - timestamp (float): sample timestamp - - img_filename (str, optional): image filename - - lidar2img (list[np.ndarray], optional): transformations - from lidar to different cameras - - ann_info (dict): annotation info - """ - info = self.data_infos[index] - - # standard protocol modified from SECOND.Pytorch - input_dict = dict( - sample_idx=info['token'], - pts_filename=info['lidar_path'], - sweeps=info['sweeps'], - timestamp=info['timestamp'] / 1e6, - ) - - if self.modality['use_camera']: - image_paths = [] - lidar2img_rts = [] - for cam_type, cam_info in info['cams'].items(): - image_paths.append(cam_info['data_path']) - # obtain lidar to image transformation matrix - lidar2cam_r = np.linalg.inv(cam_info['sensor2lidar_rotation']) - lidar2cam_t = cam_info[ - 'sensor2lidar_translation'] @ lidar2cam_r.T - lidar2cam_rt = np.eye(4) - lidar2cam_rt[:3, :3] = lidar2cam_r.T - lidar2cam_rt[3, :3] = -lidar2cam_t - intrinsic = cam_info['cam_intrinsic'] - viewpad = np.eye(4) - viewpad[:intrinsic.shape[0], :intrinsic.shape[1]] = intrinsic - lidar2img_rt = (viewpad @ lidar2cam_rt.T) - lidar2img_rts.append(lidar2img_rt) - - input_dict.update( - dict( - img_filename=image_paths, - lidar2img=lidar2img_rts, - )) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: Annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): - 3D ground truth bboxes. - - gt_labels_3d (np.ndarray): Labels of ground truths. - - gt_names (list[str]): Class names of ground truths. - """ - info = self.data_infos[index] - gt_bboxes_3d = info['gt_boxes'] - gt_names_3d = info['gt_names'] - gt_labels_3d = [] - for cat in gt_names_3d: - if cat in self.CLASSES: - gt_labels_3d.append(self.CLASSES.index(cat)) - else: - gt_labels_3d.append(-1) - gt_labels_3d = np.array(gt_labels_3d) - - if 'gt_shape' in info: - gt_shape = info['gt_shape'] - gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_shape], axis=-1) - - # the lyft box center is [0.5, 0.5, 0.5], we change it to be - # the same as KITTI (0.5, 0.5, 0) - gt_bboxes_3d = LiDARInstance3DBoxes( - gt_bboxes_3d, - box_dim=gt_bboxes_3d.shape[-1], - origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - ) - return anns_results - - def _format_bbox(self, results, jsonfile_prefix=None): - """Convert the results to the standard format. - - Args: - results (list[dict]): Testing results of the dataset. - jsonfile_prefix (str): The prefix of the output jsonfile. 
- You can specify the output directory/filename by - modifying the jsonfile_prefix. Default: None. - - Returns: - str: Path of the output json file. - """ - lyft_annos = {} - mapped_class_names = self.CLASSES - - print('Start to convert detection format...') - for sample_id, det in enumerate(mmcv.track_iter_progress(results)): - annos = [] - boxes = output_to_lyft_box(det) - sample_token = self.data_infos[sample_id]['token'] - boxes = lidar_lyft_box_to_global(self.data_infos[sample_id], boxes) - for i, box in enumerate(boxes): - name = mapped_class_names[box.label] - lyft_anno = dict( - sample_token=sample_token, - translation=box.center.tolist(), - size=box.wlh.tolist(), - rotation=box.orientation.elements.tolist(), - name=name, - score=box.score) - annos.append(lyft_anno) - lyft_annos[sample_token] = annos - lyft_submissions = { - 'meta': self.modality, - 'results': lyft_annos, - } - - mmcv.mkdir_or_exist(jsonfile_prefix) - res_path = osp.join(jsonfile_prefix, 'results_lyft.json') - print('Results writes to', res_path) - mmcv.dump(lyft_submissions, res_path) - return res_path - - def _evaluate_single(self, - result_path, - logger=None, - metric='bbox', - result_name='pts_bbox'): - """Evaluation for a single model in Lyft protocol. - - Args: - result_path (str): Path of the result file. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - metric (str, optional): Metric name used for evaluation. - Default: 'bbox'. - result_name (str, optional): Result name in the metric prefix. - Default: 'pts_bbox'. - - Returns: - dict: Dictionary of evaluation details. - """ - - output_dir = osp.join(*osp.split(result_path)[:-1]) - lyft = Lyft( - data_path=osp.join(self.data_root, self.version), - json_path=osp.join(self.data_root, self.version, self.version), - verbose=True) - eval_set_map = { - 'v1.01-train': 'val', - } - metrics = lyft_eval(lyft, self.data_root, result_path, - eval_set_map[self.version], output_dir, logger) - - # record metrics - detail = dict() - metric_prefix = f'{result_name}_Lyft' - - for i, name in enumerate(metrics['class_names']): - AP = float(metrics['mAPs_cate'][i]) - detail[f'{metric_prefix}/{name}_AP'] = AP - - detail[f'{metric_prefix}/mAP'] = metrics['Final mAP'] - return detail - - def format_results(self, results, jsonfile_prefix=None, csv_savepath=None): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[dict]): Testing results of the dataset. - jsonfile_prefix (str): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - csv_savepath (str): The path for saving csv files. - It includes the file path and the csv filename, - e.g., "a/b/filename.csv". If not specified, - the result will not be converted to csv file. - - Returns: - tuple: Returns (result_files, tmp_dir), where `result_files` is a - dict containing the json filepaths, `tmp_dir` is the temporal - directory created for saving json files when - `jsonfile_prefix` is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. 
- format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - # currently the output prediction results could be in two formats - # 1. list of dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...) - # 2. list of dict('pts_bbox' or 'img_bbox': - # dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...)) - # this is a workaround to enable evaluation of both formats on Lyft - # refer to https://github.com/open-mmlab/mmdetection3d/issues/449 - if not ('pts_bbox' in results[0] or 'img_bbox' in results[0]): - result_files = self._format_bbox(results, jsonfile_prefix) - else: - # should take the inner dict out of 'pts_bbox' or 'img_bbox' dict - result_files = dict() - for name in results[0]: - print(f'\nFormating bboxes of {name}') - results_ = [out[name] for out in results] - tmp_file_ = osp.join(jsonfile_prefix, name) - result_files.update( - {name: self._format_bbox(results_, tmp_file_)}) - if csv_savepath is not None: - self.json2csv(result_files['pts_bbox'], csv_savepath) - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - csv_savepath=None, - result_names=['pts_bbox'], - show=False, - out_dir=None, - pipeline=None): - """Evaluation in Lyft protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str], optional): Metrics to be evaluated. - Default: 'bbox'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str, optional): The prefix of json files including - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - csv_savepath (str, optional): The path for saving csv files. - It includes the file path and the csv filename, - e.g., "a/b/filename.csv". If not specified, - the result will not be converted to csv file. - result_names (list[str], optional): Result names in the - metric prefix. Default: ['pts_bbox']. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict[str, float]: Evaluation results. 
- """ - result_files, tmp_dir = self.format_results(results, jsonfile_prefix, - csv_savepath) - - if isinstance(result_files, dict): - results_dict = dict() - for name in result_names: - print(f'Evaluating bboxes of {name}') - ret_dict = self._evaluate_single(result_files[name]) - results_dict.update(ret_dict) - elif isinstance(result_files, str): - results_dict = self._evaluate_single(result_files) - - if tmp_dir is not None: - tmp_dir.cleanup() - - if show or out_dir: - self.show(results, out_dir, show=show, pipeline=pipeline) - return results_dict - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=dict(backend='disk')), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=dict(backend='disk')), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=False, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Whether to visualize the results online. - Default: False. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' - pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - if 'pts_bbox' in result.keys(): - result = result['pts_bbox'] - data_info = self.data_infos[i] - pts_path = data_info['lidar_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points = self._extract_data(i, pipeline, 'points').numpy() - points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - inds = result['scores_3d'] > 0.1 - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy() - show_gt_bboxes = Box3DMode.convert(gt_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - pred_bboxes = result['boxes_3d'][inds].tensor.numpy() - show_pred_bboxes = Box3DMode.convert(pred_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - show_result(points, show_gt_bboxes, show_pred_bboxes, out_dir, - file_name, show) - - def json2csv(self, json_path, csv_savepath): - """Convert the json file to csv format for submission. - - Args: - json_path (str): Path of the result json file. - csv_savepath (str): Path to save the csv file. 
- """ - results = mmcv.load(json_path)['results'] - sample_list_path = osp.join(self.data_root, 'sample_submission.csv') - data = pd.read_csv(sample_list_path) - Id_list = list(data['Id']) - pred_list = list(data['PredictionString']) - cnt = 0 - print('Converting the json to csv...') - for token in results.keys(): - cnt += 1 - predictions = results[token] - prediction_str = '' - for i in range(len(predictions)): - prediction_str += \ - str(predictions[i]['score']) + ' ' + \ - str(predictions[i]['translation'][0]) + ' ' + \ - str(predictions[i]['translation'][1]) + ' ' + \ - str(predictions[i]['translation'][2]) + ' ' + \ - str(predictions[i]['size'][0]) + ' ' + \ - str(predictions[i]['size'][1]) + ' ' + \ - str(predictions[i]['size'][2]) + ' ' + \ - str(Quaternion(list(predictions[i]['rotation'])) - .yaw_pitch_roll[0]) + ' ' + \ - predictions[i]['name'] + ' ' - prediction_str = prediction_str[:-1] - idx = Id_list.index(token) - pred_list[idx] = prediction_str - df = pd.DataFrame({'Id': Id_list, 'PredictionString': pred_list}) - mmcv.mkdir_or_exist(os.path.dirname(csv_savepath)) - df.to_csv(csv_savepath, index=False) - - -def output_to_lyft_box(detection): - """Convert the output to the box class in the Lyft. - - Args: - detection (dict): Detection results. - - Returns: - list[:obj:`LyftBox`]: List of standard LyftBoxes. - """ - box3d = detection['boxes_3d'] - scores = detection['scores_3d'].numpy() - labels = detection['labels_3d'].numpy() - - box_gravity_center = box3d.gravity_center.numpy() - box_dims = box3d.dims.numpy() - box_yaw = box3d.yaw.numpy() - - # our LiDAR coordinate system -> Lyft box coordinate system - lyft_box_dims = box_dims[:, [1, 0, 2]] - - box_list = [] - for i in range(len(box3d)): - quat = Quaternion(axis=[0, 0, 1], radians=box_yaw[i]) - box = LyftBox( - box_gravity_center[i], - lyft_box_dims[i], - quat, - label=labels[i], - score=scores[i]) - box_list.append(box) - return box_list - - -def lidar_lyft_box_to_global(info, boxes): - """Convert the box from ego to global coordinate. - - Args: - info (dict): Info for a specific sample data, including the - calibration information. - boxes (list[:obj:`LyftBox`]): List of predicted LyftBoxes. - - Returns: - list: List of standard LyftBoxes in the global - coordinate. - """ - box_list = [] - for box in boxes: - # Move box to ego vehicle coord system - box.rotate(Quaternion(info['lidar2ego_rotation'])) - box.translate(np.array(info['lidar2ego_translation'])) - # Move box to global coord system - box.rotate(Quaternion(info['ego2global_rotation'])) - box.translate(np.array(info['ego2global_translation'])) - box_list.append(box) - return box_list diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/nuscenes_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/nuscenes_dataset.py deleted file mode 100644 index 1ca82657..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/nuscenes_dataset.py +++ /dev/null @@ -1,654 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import tempfile -from os import path as osp - -import mmcv -import numpy as np -import pyquaternion -from nuscenes.utils.data_classes import Box as NuScenesBox - -from ..core import show_result -from ..core.bbox import Box3DMode, Coord3DMode, LiDARInstance3DBoxes -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class NuScenesDataset(Custom3DDataset): - r"""NuScenes Dataset. - - This class serves as the API for experiments on the NuScenes Dataset. 
- - Please refer to `NuScenes Dataset `_ - for data downloading. - - Args: - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - data_root (str): Path of dataset root. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - load_interval (int, optional): Interval of loading the dataset. It is - used to uniformly sample the dataset. Defaults to 1. - with_velocity (bool, optional): Whether include velocity prediction - into the experiments. Defaults to True. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR' in this dataset. Available options includes. - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - eval_version (bool, optional): Configuration version of evaluation. - Defaults to 'detection_cvpr_2019'. - use_valid_flag (bool, optional): Whether to use `use_valid_flag` key - in the info file as mask to filter gt_boxes and gt_names. - Defaults to False. - """ - NameMapping = { - 'movable_object.barrier': 'barrier', - 'vehicle.bicycle': 'bicycle', - 'vehicle.bus.bendy': 'bus', - 'vehicle.bus.rigid': 'bus', - 'vehicle.car': 'car', - 'vehicle.construction': 'construction_vehicle', - 'vehicle.motorcycle': 'motorcycle', - 'human.pedestrian.adult': 'pedestrian', - 'human.pedestrian.child': 'pedestrian', - 'human.pedestrian.construction_worker': 'pedestrian', - 'human.pedestrian.police_officer': 'pedestrian', - 'movable_object.trafficcone': 'traffic_cone', - 'vehicle.trailer': 'trailer', - 'vehicle.truck': 'truck' - } - DefaultAttribute = { - 'car': 'vehicle.parked', - 'pedestrian': 'pedestrian.moving', - 'trailer': 'vehicle.parked', - 'truck': 'vehicle.parked', - 'bus': 'vehicle.moving', - 'motorcycle': 'cycle.without_rider', - 'construction_vehicle': 'vehicle.parked', - 'bicycle': 'cycle.without_rider', - 'barrier': '', - 'traffic_cone': '', - } - AttrMapping = { - 'cycle.with_rider': 0, - 'cycle.without_rider': 1, - 'pedestrian.moving': 2, - 'pedestrian.standing': 3, - 'pedestrian.sitting_lying_down': 4, - 'vehicle.moving': 5, - 'vehicle.parked': 6, - 'vehicle.stopped': 7, - } - AttrMapping_rev = [ - 'cycle.with_rider', - 'cycle.without_rider', - 'pedestrian.moving', - 'pedestrian.standing', - 'pedestrian.sitting_lying_down', - 'vehicle.moving', - 'vehicle.parked', - 'vehicle.stopped', - ] - # https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/eval/detection/evaluate.py#L222 # noqa - ErrNameMapping = { - 'trans_err': 'mATE', - 'scale_err': 'mASE', - 'orient_err': 'mAOE', - 'vel_err': 'mAVE', - 'attr_err': 'mAAE' - } - CLASSES = ('car', 'truck', 'trailer', 'bus', 'construction_vehicle', - 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', - 'barrier') - - def __init__(self, - ann_file, - pipeline=None, - data_root=None, - classes=None, - load_interval=1, - with_velocity=True, - modality=None, - box_type_3d='LiDAR', - filter_empty_gt=True, - test_mode=False, - eval_version='detection_cvpr_2019', - 
use_valid_flag=False): - self.load_interval = load_interval - self.use_valid_flag = use_valid_flag - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode) - - self.with_velocity = with_velocity - self.eval_version = eval_version - from nuscenes.eval.detection.config import config_factory - self.eval_detection_configs = config_factory(self.eval_version) - if self.modality is None: - self.modality = dict( - use_camera=False, - use_lidar=True, - use_radar=False, - use_map=False, - use_external=False, - ) - - def get_cat_ids(self, idx): - """Get category distribution of single scene. - - Args: - idx (int): Index of the data_info. - - Returns: - dict[list]: for each category, if the current scene - contains such boxes, store a list containing idx, - otherwise, store empty list. - """ - info = self.data_infos[idx] - if self.use_valid_flag: - mask = info['valid_flag'] - gt_names = set(info['gt_names'][mask]) - else: - gt_names = set(info['gt_names']) - - cat_ids = [] - for name in gt_names: - if name in self.CLASSES: - cat_ids.append(self.cat2id[name]) - return cat_ids - - def load_annotations(self, ann_file): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations sorted by timestamps. - """ - data = mmcv.load(ann_file, file_format='pkl') - data_infos = list(sorted(data['infos'], key=lambda e: e['timestamp'])) - data_infos = data_infos[::self.load_interval] - self.metadata = data['metadata'] - self.version = self.metadata['version'] - return data_infos - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - sweeps (list[dict]): Infos of sweeps. - - timestamp (float): Sample timestamp. - - img_filename (str, optional): Image filename. - - lidar2img (list[np.ndarray], optional): Transformations - from lidar to different cameras. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - # standard protocol modified from SECOND.Pytorch - input_dict = dict( - sample_idx=info['token'], - pts_filename=info['lidar_path'], - sweeps=info['sweeps'], - timestamp=info['timestamp'] / 1e6, - ) - - if self.modality['use_camera']: - image_paths = [] - lidar2img_rts = [] - for cam_type, cam_info in info['cams'].items(): - image_paths.append(cam_info['data_path']) - # obtain lidar to image transformation matrix - lidar2cam_r = np.linalg.inv(cam_info['sensor2lidar_rotation']) - lidar2cam_t = cam_info[ - 'sensor2lidar_translation'] @ lidar2cam_r.T - lidar2cam_rt = np.eye(4) - lidar2cam_rt[:3, :3] = lidar2cam_r.T - lidar2cam_rt[3, :3] = -lidar2cam_t - intrinsic = cam_info['cam_intrinsic'] - viewpad = np.eye(4) - viewpad[:intrinsic.shape[0], :intrinsic.shape[1]] = intrinsic - lidar2img_rt = (viewpad @ lidar2cam_rt.T) - lidar2img_rts.append(lidar2img_rt) - - input_dict.update( - dict( - img_filename=image_paths, - lidar2img=lidar2img_rts, - )) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. 
- - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: Annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): - 3D ground truth bboxes - - gt_labels_3d (np.ndarray): Labels of ground truths. - - gt_names (list[str]): Class names of ground truths. - """ - info = self.data_infos[index] - # filter out bbox containing no points - if self.use_valid_flag: - mask = info['valid_flag'] - else: - mask = info['num_lidar_pts'] > 0 - gt_bboxes_3d = info['gt_boxes'][mask] - gt_names_3d = info['gt_names'][mask] - gt_labels_3d = [] - for cat in gt_names_3d: - if cat in self.CLASSES: - gt_labels_3d.append(self.CLASSES.index(cat)) - else: - gt_labels_3d.append(-1) - gt_labels_3d = np.array(gt_labels_3d) - - if self.with_velocity: - gt_velocity = info['gt_velocity'][mask] - nan_mask = np.isnan(gt_velocity[:, 0]) - gt_velocity[nan_mask] = [0.0, 0.0] - gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_velocity], axis=-1) - - # the nuscenes box center is [0.5, 0.5, 0.5], we change it to be - # the same as KITTI (0.5, 0.5, 0) - gt_bboxes_3d = LiDARInstance3DBoxes( - gt_bboxes_3d, - box_dim=gt_bboxes_3d.shape[-1], - origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - gt_names=gt_names_3d) - return anns_results - - def _format_bbox(self, results, jsonfile_prefix=None): - """Convert the results to the standard format. - - Args: - results (list[dict]): Testing results of the dataset. - jsonfile_prefix (str): The prefix of the output jsonfile. - You can specify the output directory/filename by - modifying the jsonfile_prefix. Default: None. - - Returns: - str: Path of the output json file. - """ - nusc_annos = {} - mapped_class_names = self.CLASSES - - print('Start to convert detection format...') - for sample_id, det in enumerate(mmcv.track_iter_progress(results)): - annos = [] - boxes = output_to_nusc_box(det) - sample_token = self.data_infos[sample_id]['token'] - boxes = lidar_nusc_box_to_global(self.data_infos[sample_id], boxes, - mapped_class_names, - self.eval_detection_configs, - self.eval_version) - for i, box in enumerate(boxes): - name = mapped_class_names[box.label] - if np.sqrt(box.velocity[0]**2 + box.velocity[1]**2) > 0.2: - if name in [ - 'car', - 'construction_vehicle', - 'bus', - 'truck', - 'trailer', - ]: - attr = 'vehicle.moving' - elif name in ['bicycle', 'motorcycle']: - attr = 'cycle.with_rider' - else: - attr = NuScenesDataset.DefaultAttribute[name] - else: - if name in ['pedestrian']: - attr = 'pedestrian.standing' - elif name in ['bus']: - attr = 'vehicle.stopped' - else: - attr = NuScenesDataset.DefaultAttribute[name] - - nusc_anno = dict( - sample_token=sample_token, - translation=box.center.tolist(), - size=box.wlh.tolist(), - rotation=box.orientation.elements.tolist(), - velocity=box.velocity[:2].tolist(), - detection_name=name, - detection_score=box.score, - attribute_name=attr) - annos.append(nusc_anno) - nusc_annos[sample_token] = annos - nusc_submissions = { - 'meta': self.modality, - 'results': nusc_annos, - } - - mmcv.mkdir_or_exist(jsonfile_prefix) - res_path = osp.join(jsonfile_prefix, 'results_nusc.json') - print('Results writes to', res_path) - mmcv.dump(nusc_submissions, res_path) - return res_path - - def _evaluate_single(self, - result_path, - logger=None, - metric='bbox', - result_name='pts_bbox'): - """Evaluation for a single model in nuScenes protocol. - - Args: - result_path (str): Path of the result file. 
- logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - metric (str, optional): Metric name used for evaluation. - Default: 'bbox'. - result_name (str, optional): Result name in the metric prefix. - Default: 'pts_bbox'. - - Returns: - dict: Dictionary of evaluation details. - """ - from nuscenes import NuScenes - from nuscenes.eval.detection.evaluate import NuScenesEval - - output_dir = osp.join(*osp.split(result_path)[:-1]) - nusc = NuScenes( - version=self.version, dataroot=self.data_root, verbose=False) - eval_set_map = { - 'v1.0-mini': 'mini_val', - 'v1.0-trainval': 'val', - } - nusc_eval = NuScenesEval( - nusc, - config=self.eval_detection_configs, - result_path=result_path, - eval_set=eval_set_map[self.version], - output_dir=output_dir, - verbose=False) - nusc_eval.main(render_curves=False) - - # record metrics - metrics = mmcv.load(osp.join(output_dir, 'metrics_summary.json')) - detail = dict() - metric_prefix = f'{result_name}_NuScenes' - for name in self.CLASSES: - for k, v in metrics['label_aps'][name].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}_AP_dist_{}'.format(metric_prefix, name, k)] = val - for k, v in metrics['label_tp_errors'][name].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}_{}'.format(metric_prefix, name, k)] = val - for k, v in metrics['tp_errors'].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}'.format(metric_prefix, - self.ErrNameMapping[k])] = val - - detail['{}/NDS'.format(metric_prefix)] = metrics['nd_score'] - detail['{}/mAP'.format(metric_prefix)] = metrics['mean_ap'] - return detail - - def format_results(self, results, jsonfile_prefix=None): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[dict]): Testing results of the dataset. - jsonfile_prefix (str): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: Returns (result_files, tmp_dir), where `result_files` is a - dict containing the json filepaths, `tmp_dir` is the temporal - directory created for saving json files when - `jsonfile_prefix` is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - # currently the output prediction results could be in two formats - # 1. list of dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...) - # 2. 
list of dict('pts_bbox' or 'img_bbox': - # dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...)) - # this is a workaround to enable evaluation of both formats on nuScenes - # refer to https://github.com/open-mmlab/mmdetection3d/issues/449 - if not ('pts_bbox' in results[0] or 'img_bbox' in results[0]): - result_files = self._format_bbox(results, jsonfile_prefix) - else: - # should take the inner dict out of 'pts_bbox' or 'img_bbox' dict - result_files = dict() - for name in results[0]: - print(f'\nFormating bboxes of {name}') - results_ = [out[name] for out in results] - tmp_file_ = osp.join(jsonfile_prefix, name) - result_files.update( - {name: self._format_bbox(results_, tmp_file_)}) - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - result_names=['pts_bbox'], - show=False, - out_dir=None, - pipeline=None): - """Evaluation in nuScenes protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str], optional): Metrics to be evaluated. - Default: 'bbox'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str, optional): The prefix of json files including - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict[str, float]: Results of each evaluation metric. - """ - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - - if isinstance(result_files, dict): - results_dict = dict() - for name in result_names: - print('Evaluating bboxes of {}'.format(name)) - ret_dict = self._evaluate_single(result_files[name]) - results_dict.update(ret_dict) - elif isinstance(result_files, str): - results_dict = self._evaluate_single(result_files) - - if tmp_dir is not None: - tmp_dir.cleanup() - - if show or out_dir: - self.show(results, out_dir, show=show, pipeline=pipeline) - return results_dict - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=dict(backend='disk')), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=dict(backend='disk')), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=False, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Whether to visualize the results online. - Default: False. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' 
- pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - if 'pts_bbox' in result.keys(): - result = result['pts_bbox'] - data_info = self.data_infos[i] - pts_path = data_info['lidar_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points = self._extract_data(i, pipeline, 'points').numpy() - # for now we convert points into depth mode - points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - inds = result['scores_3d'] > 0.1 - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy() - show_gt_bboxes = Box3DMode.convert(gt_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - pred_bboxes = result['boxes_3d'][inds].tensor.numpy() - show_pred_bboxes = Box3DMode.convert(pred_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - show_result(points, show_gt_bboxes, show_pred_bboxes, out_dir, - file_name, show) - - -def output_to_nusc_box(detection): - """Convert the output to the box class in the nuScenes. - - Args: - detection (dict): Detection results. - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox. - - scores_3d (torch.Tensor): Detection scores. - - labels_3d (torch.Tensor): Predicted box labels. - - Returns: - list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes. - """ - box3d = detection['boxes_3d'] - scores = detection['scores_3d'].numpy() - labels = detection['labels_3d'].numpy() - - box_gravity_center = box3d.gravity_center.numpy() - box_dims = box3d.dims.numpy() - box_yaw = box3d.yaw.numpy() - - # our LiDAR coordinate system -> nuScenes box coordinate system - nus_box_dims = box_dims[:, [1, 0, 2]] - - box_list = [] - for i in range(len(box3d)): - quat = pyquaternion.Quaternion(axis=[0, 0, 1], radians=box_yaw[i]) - velocity = (*box3d.tensor[i, 7:9], 0.0) - # velo_val = np.linalg.norm(box3d[i, 7:9]) - # velo_ori = box3d[i, 6] - # velocity = ( - # velo_val * np.cos(velo_ori), velo_val * np.sin(velo_ori), 0.0) - box = NuScenesBox( - box_gravity_center[i], - nus_box_dims[i], - quat, - label=labels[i], - score=scores[i], - velocity=velocity) - box_list.append(box) - return box_list - - -def lidar_nusc_box_to_global(info, - boxes, - classes, - eval_configs, - eval_version='detection_cvpr_2019'): - """Convert the box from ego to global coordinate. - - Args: - info (dict): Info for a specific sample data, including the - calibration information. - boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes. - classes (list[str]): Mapped classes in the evaluation. - eval_configs (object): Evaluation configuration object. - eval_version (str, optional): Evaluation version. - Default: 'detection_cvpr_2019' - - Returns: - list: List of standard NuScenesBoxes in the global - coordinate. - """ - box_list = [] - for box in boxes: - # Move box to ego vehicle coord system - box.rotate(pyquaternion.Quaternion(info['lidar2ego_rotation'])) - box.translate(np.array(info['lidar2ego_translation'])) - # filter det in ego. 
-        cls_range_map = eval_configs.class_range
-        radius = np.linalg.norm(box.center[:2], 2)
-        det_range = cls_range_map[classes[box.label]]
-        if radius > det_range:
-            continue
-        # Move box to global coord system
-        box.rotate(pyquaternion.Quaternion(info['ego2global_rotation']))
-        box.translate(np.array(info['ego2global_translation']))
-        box_list.append(box)
-    return box_list
diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/nuscenes_mono_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/nuscenes_mono_dataset.py
deleted file mode 100644
index c3eb8f1a..00000000
--- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/nuscenes_mono_dataset.py
+++ /dev/null
@@ -1,840 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-import tempfile
-import warnings
-from os import path as osp
-
-import mmcv
-import numpy as np
-import pyquaternion
-import torch
-from nuscenes.utils.data_classes import Box as NuScenesBox
-
-from mmdet3d.core import bbox3d2result, box3d_multiclass_nms, xywhr2xyxyr
-from mmdet.datasets import CocoDataset
-from ..core import show_multi_modality_result
-from ..core.bbox import CameraInstance3DBoxes, get_box_type
-from .builder import DATASETS
-from .pipelines import Compose
-from .utils import extract_result_dict, get_loading_pipeline
-
-
-@DATASETS.register_module()
-class NuScenesMonoDataset(CocoDataset):
-    r"""Monocular 3D detection on NuScenes Dataset.
-
-    This class serves as the API for experiments on the NuScenes Dataset.
-
-    Please refer to `NuScenes Dataset `_
-    for data downloading.
-
-    Args:
-        ann_file (str): Path of annotation file.
-        data_root (str): Path of dataset root.
-        load_interval (int, optional): Interval of loading the dataset. It is
-            used to uniformly sample the dataset. Defaults to 1.
-        with_velocity (bool, optional): Whether include velocity prediction
-            into the experiments. Defaults to True.
-        modality (dict, optional): Modality to specify the sensor data used
-            as input. Defaults to None.
-        box_type_3d (str, optional): Type of 3D box of this dataset.
-            Based on the `box_type_3d`, the dataset will encapsulate the box
-            to its original format then converted them to `box_type_3d`.
-            Defaults to 'Camera' in this class. Available options includes.
-            - 'LiDAR': Box in LiDAR coordinates.
-            - 'Depth': Box in depth coordinates, usually for indoor dataset.
-            - 'Camera': Box in camera coordinates.
-        eval_version (str, optional): Configuration version of evaluation.
-            Defaults to 'detection_cvpr_2019'.
-        use_valid_flag (bool, optional): Whether to use `use_valid_flag` key
-            in the info file as mask to filter gt_boxes and gt_names.
-            Defaults to False.
-        version (str, optional): Dataset version. Defaults to 'v1.0-trainval'.
- """ - CLASSES = ('car', 'truck', 'trailer', 'bus', 'construction_vehicle', - 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', - 'barrier') - DefaultAttribute = { - 'car': 'vehicle.parked', - 'pedestrian': 'pedestrian.moving', - 'trailer': 'vehicle.parked', - 'truck': 'vehicle.parked', - 'bus': 'vehicle.moving', - 'motorcycle': 'cycle.without_rider', - 'construction_vehicle': 'vehicle.parked', - 'bicycle': 'cycle.without_rider', - 'barrier': '', - 'traffic_cone': '', - } - # https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/eval/detection/evaluate.py#L222 # noqa - ErrNameMapping = { - 'trans_err': 'mATE', - 'scale_err': 'mASE', - 'orient_err': 'mAOE', - 'vel_err': 'mAVE', - 'attr_err': 'mAAE' - } - - def __init__(self, - data_root, - ann_file, - pipeline, - load_interval=1, - with_velocity=True, - modality=None, - box_type_3d='Camera', - eval_version='detection_cvpr_2019', - use_valid_flag=False, - version='v1.0-trainval', - classes=None, - img_prefix='', - seg_prefix=None, - proposal_file=None, - test_mode=False, - filter_empty_gt=True, - file_client_args=dict(backend='disk')): - self.ann_file = ann_file - self.data_root = data_root - self.img_prefix = img_prefix - self.seg_prefix = seg_prefix - self.proposal_file = proposal_file - self.test_mode = test_mode - self.filter_empty_gt = filter_empty_gt - self.CLASSES = self.get_classes(classes) - self.file_client = mmcv.FileClient(**file_client_args) - - # load annotations (and proposals) - with self.file_client.get_local_path(self.ann_file) as local_path: - self.data_infos = self.load_annotations(local_path) - - if self.proposal_file is not None: - with self.file_client.get_local_path( - self.proposal_file) as local_path: - self.proposals = self.load_proposals(local_path) - else: - self.proposals = None - - # filter images too small and containing no annotations - if not test_mode: - valid_inds = self._filter_imgs() - self.data_infos = [self.data_infos[i] for i in valid_inds] - if self.proposals is not None: - self.proposals = [self.proposals[i] for i in valid_inds] - # set group flag for the sampler - self._set_group_flag() - - # processing pipeline - self.pipeline = Compose(pipeline) - - self.load_interval = load_interval - self.with_velocity = with_velocity - self.modality = modality - self.box_type_3d, self.box_mode_3d = get_box_type(box_type_3d) - self.eval_version = eval_version - self.use_valid_flag = use_valid_flag - self.bbox_code_size = 9 - self.version = version - if self.eval_version is not None: - from nuscenes.eval.detection.config import config_factory - self.eval_detection_configs = config_factory(self.eval_version) - if self.modality is None: - self.modality = dict( - use_camera=True, - use_lidar=False, - use_radar=False, - use_map=False, - use_external=False) - - def pre_pipeline(self, results): - """Initialization before data preparation. - - Args: - results (dict): Dict before data preprocessing. - - - img_fields (list): Image fields. - - bbox3d_fields (list): 3D bounding boxes fields. - - pts_mask_fields (list): Mask fields of points. - - pts_seg_fields (list): Mask fields of point segments. - - bbox_fields (list): Fields of bounding boxes. - - mask_fields (list): Fields of masks. - - seg_fields (list): Segment fields. - - box_type_3d (str): 3D box type. - - box_mode_3d (str): 3D box mode. 
- """ - results['img_prefix'] = self.img_prefix - results['seg_prefix'] = self.seg_prefix - results['proposal_file'] = self.proposal_file - results['img_fields'] = [] - results['bbox3d_fields'] = [] - results['pts_mask_fields'] = [] - results['pts_seg_fields'] = [] - results['bbox_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - results['box_type_3d'] = self.box_type_3d - results['box_mode_3d'] = self.box_mode_3d - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox annotation. - - Args: - img_info (list[dict]): Image info. - ann_info (list[dict]): Annotation info of an image. - - Returns: - dict: A dict containing the following keys: bboxes, labels, - gt_bboxes_3d, gt_labels_3d, attr_labels, centers2d, - depths, bboxes_ignore, masks, seg_map - """ - gt_bboxes = [] - gt_labels = [] - attr_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - gt_bboxes_cam3d = [] - centers2d = [] - depths = [] - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - inter_w = max(0, min(x1 + w, img_info['width']) - max(x1, 0)) - inter_h = max(0, min(y1 + h, img_info['height']) - max(y1, 0)) - if inter_w * inter_h == 0: - continue - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - attr_labels.append(ann['attribute_id']) - gt_masks_ann.append(ann.get('segmentation', None)) - # 3D annotations in camera coordinates - bbox_cam3d = np.array(ann['bbox_cam3d']).reshape(1, -1) - velo_cam3d = np.array(ann['velo_cam3d']).reshape(1, 2) - nan_mask = np.isnan(velo_cam3d[:, 0]) - velo_cam3d[nan_mask] = [0.0, 0.0] - bbox_cam3d = np.concatenate([bbox_cam3d, velo_cam3d], axis=-1) - gt_bboxes_cam3d.append(bbox_cam3d.squeeze()) - # 2.5D annotations in camera coordinates - center2d = ann['center2d'][:2] - depth = ann['center2d'][2] - centers2d.append(center2d) - depths.append(depth) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - attr_labels = np.array(attr_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - attr_labels = np.array([], dtype=np.int64) - - if gt_bboxes_cam3d: - gt_bboxes_cam3d = np.array(gt_bboxes_cam3d, dtype=np.float32) - centers2d = np.array(centers2d, dtype=np.float32) - depths = np.array(depths, dtype=np.float32) - else: - gt_bboxes_cam3d = np.zeros((0, self.bbox_code_size), - dtype=np.float32) - centers2d = np.zeros((0, 2), dtype=np.float32) - depths = np.zeros((0), dtype=np.float32) - - gt_bboxes_cam3d = CameraInstance3DBoxes( - gt_bboxes_cam3d, - box_dim=gt_bboxes_cam3d.shape[-1], - origin=(0.5, 0.5, 0.5)) - gt_labels_3d = copy.deepcopy(gt_labels) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - seg_map = img_info['filename'].replace('jpg', 'png') - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - gt_bboxes_3d=gt_bboxes_cam3d, - gt_labels_3d=gt_labels_3d, - attr_labels=attr_labels, - centers2d=centers2d, - depths=depths, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=seg_map) - - return ann - - def get_attr_name(self, attr_idx, label_name): - """Get attribute from predicted index. 
-
-        This is a workaround to predict attribute when the predicted velocity
-        is not reliable. We map the predicted attribute index to the one
-        in the attribute set. If it is consistent with the category, we will
-        keep it. Otherwise, we will use the default attribute.
-
-        Args:
-            attr_idx (int): Attribute index.
-            label_name (str): Predicted category name.
-
-        Returns:
-            str: Predicted attribute name.
-        """
-        # TODO: Simplify the variable name
-        AttrMapping_rev2 = [
-            'cycle.with_rider', 'cycle.without_rider', 'pedestrian.moving',
-            'pedestrian.standing', 'pedestrian.sitting_lying_down',
-            'vehicle.moving', 'vehicle.parked', 'vehicle.stopped', 'None'
-        ]
-        if label_name == 'car' or label_name == 'bus' \
-            or label_name == 'truck' or label_name == 'trailer' \
-                or label_name == 'construction_vehicle':
-            if AttrMapping_rev2[attr_idx] == 'vehicle.moving' or \
-                AttrMapping_rev2[attr_idx] == 'vehicle.parked' or \
-                    AttrMapping_rev2[attr_idx] == 'vehicle.stopped':
-                return AttrMapping_rev2[attr_idx]
-            else:
-                return NuScenesMonoDataset.DefaultAttribute[label_name]
-        elif label_name == 'pedestrian':
-            if AttrMapping_rev2[attr_idx] == 'pedestrian.moving' or \
-                AttrMapping_rev2[attr_idx] == 'pedestrian.standing' or \
-                    AttrMapping_rev2[attr_idx] == \
-                    'pedestrian.sitting_lying_down':
-                return AttrMapping_rev2[attr_idx]
-            else:
-                return NuScenesMonoDataset.DefaultAttribute[label_name]
-        elif label_name == 'bicycle' or label_name == 'motorcycle':
-            if AttrMapping_rev2[attr_idx] == 'cycle.with_rider' or \
-                    AttrMapping_rev2[attr_idx] == 'cycle.without_rider':
-                return AttrMapping_rev2[attr_idx]
-            else:
-                return NuScenesMonoDataset.DefaultAttribute[label_name]
-        else:
-            return NuScenesMonoDataset.DefaultAttribute[label_name]
-
-    def _format_bbox(self, results, jsonfile_prefix=None):
-        """Convert the results to the standard format.
-
-        Args:
-            results (list[dict]): Testing results of the dataset.
-            jsonfile_prefix (str): The prefix of the output jsonfile.
-                You can specify the output directory/filename by
-                modifying the jsonfile_prefix. Default: None.
-
-        Returns:
-            str: Path of the output json file.
- """ - nusc_annos = {} - mapped_class_names = self.CLASSES - - print('Start to convert detection format...') - - CAM_NUM = 6 - - for sample_id, det in enumerate(mmcv.track_iter_progress(results)): - - if sample_id % CAM_NUM == 0: - boxes_per_frame = [] - attrs_per_frame = [] - - # need to merge results from images of the same sample - annos = [] - boxes, attrs = output_to_nusc_box(det) - sample_token = self.data_infos[sample_id]['token'] - boxes, attrs = cam_nusc_box_to_global(self.data_infos[sample_id], - boxes, attrs, - mapped_class_names, - self.eval_detection_configs, - self.eval_version) - - boxes_per_frame.extend(boxes) - attrs_per_frame.extend(attrs) - # Remove redundant predictions caused by overlap of images - if (sample_id + 1) % CAM_NUM != 0: - continue - boxes = global_nusc_box_to_cam( - self.data_infos[sample_id + 1 - CAM_NUM], boxes_per_frame, - mapped_class_names, self.eval_detection_configs, - self.eval_version) - cam_boxes3d, scores, labels = nusc_box_to_cam_box3d(boxes) - # box nms 3d over 6 images in a frame - # TODO: move this global setting into config - nms_cfg = dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_pre=4096, - nms_thr=0.05, - score_thr=0.01, - min_bbox_size=0, - max_per_frame=500) - from mmcv import Config - nms_cfg = Config(nms_cfg) - cam_boxes3d_for_nms = xywhr2xyxyr(cam_boxes3d.bev) - boxes3d = cam_boxes3d.tensor - # generate attr scores from attr labels - attrs = labels.new_tensor([attr for attr in attrs_per_frame]) - boxes3d, scores, labels, attrs = box3d_multiclass_nms( - boxes3d, - cam_boxes3d_for_nms, - scores, - nms_cfg.score_thr, - nms_cfg.max_per_frame, - nms_cfg, - mlvl_attr_scores=attrs) - cam_boxes3d = CameraInstance3DBoxes(boxes3d, box_dim=9) - det = bbox3d2result(cam_boxes3d, scores, labels, attrs) - boxes, attrs = output_to_nusc_box(det) - boxes, attrs = cam_nusc_box_to_global( - self.data_infos[sample_id + 1 - CAM_NUM], boxes, attrs, - mapped_class_names, self.eval_detection_configs, - self.eval_version) - - for i, box in enumerate(boxes): - name = mapped_class_names[box.label] - attr = self.get_attr_name(attrs[i], name) - nusc_anno = dict( - sample_token=sample_token, - translation=box.center.tolist(), - size=box.wlh.tolist(), - rotation=box.orientation.elements.tolist(), - velocity=box.velocity[:2].tolist(), - detection_name=name, - detection_score=box.score, - attribute_name=attr) - annos.append(nusc_anno) - # other views results of the same frame should be concatenated - if sample_token in nusc_annos: - nusc_annos[sample_token].extend(annos) - else: - nusc_annos[sample_token] = annos - - nusc_submissions = { - 'meta': self.modality, - 'results': nusc_annos, - } - - mmcv.mkdir_or_exist(jsonfile_prefix) - res_path = osp.join(jsonfile_prefix, 'results_nusc.json') - print('Results writes to', res_path) - mmcv.dump(nusc_submissions, res_path) - return res_path - - def _evaluate_single(self, - result_path, - logger=None, - metric='bbox', - result_name='img_bbox'): - """Evaluation for a single model in nuScenes protocol. - - Args: - result_path (str): Path of the result file. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - metric (str, optional): Metric name used for evaluation. - Default: 'bbox'. - result_name (str, optional): Result name in the metric prefix. - Default: 'img_bbox'. - - Returns: - dict: Dictionary of evaluation details. 
- """ - from nuscenes import NuScenes - from nuscenes.eval.detection.evaluate import NuScenesEval - - output_dir = osp.join(*osp.split(result_path)[:-1]) - nusc = NuScenes( - version=self.version, dataroot=self.data_root, verbose=False) - eval_set_map = { - 'v1.0-mini': 'mini_val', - 'v1.0-trainval': 'val', - } - nusc_eval = NuScenesEval( - nusc, - config=self.eval_detection_configs, - result_path=result_path, - eval_set=eval_set_map[self.version], - output_dir=output_dir, - verbose=False) - nusc_eval.main(render_curves=True) - - # record metrics - metrics = mmcv.load(osp.join(output_dir, 'metrics_summary.json')) - detail = dict() - metric_prefix = f'{result_name}_NuScenes' - for name in self.CLASSES: - for k, v in metrics['label_aps'][name].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}_AP_dist_{}'.format(metric_prefix, name, k)] = val - for k, v in metrics['label_tp_errors'][name].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}_{}'.format(metric_prefix, name, k)] = val - for k, v in metrics['tp_errors'].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}'.format(metric_prefix, - self.ErrNameMapping[k])] = val - - detail['{}/NDS'.format(metric_prefix)] = metrics['nd_score'] - detail['{}/mAP'.format(metric_prefix)] = metrics['mean_ap'] - return detail - - def format_results(self, results, jsonfile_prefix=None, **kwargs): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[tuple | numpy.ndarray]): Testing results of the - dataset. - jsonfile_prefix (str): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing - the json filepaths, tmp_dir is the temporal directory created - for saving json files when jsonfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - # currently the output prediction results could be in two formats - # 1. list of dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...) - # 2. list of dict('pts_bbox' or 'img_bbox': - # dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...)) - # this is a workaround to enable evaluation of both formats on nuScenes - # refer to https://github.com/open-mmlab/mmdetection3d/issues/449 - if not ('pts_bbox' in results[0] or 'img_bbox' in results[0]): - result_files = self._format_bbox(results, jsonfile_prefix) - else: - # should take the inner dict out of 'pts_bbox' or 'img_bbox' dict - result_files = dict() - for name in results[0]: - # not evaluate 2D predictions on nuScenes - if '2d' in name: - continue - print(f'\nFormating bboxes of {name}') - results_ = [out[name] for out in results] - tmp_file_ = osp.join(jsonfile_prefix, name) - result_files.update( - {name: self._format_bbox(results_, tmp_file_)}) - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - result_names=['img_bbox'], - show=False, - out_dir=None, - pipeline=None): - """Evaluation in nuScenes protocol. - - Args: - results (list[dict]): Testing results of the dataset. 
- metric (str | list[str], optional): Metrics to be evaluated. - Default: 'bbox'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - result_names (list[str], optional): Result names in the - metric prefix. Default: ['img_bbox']. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict[str, float]: Results of each evaluation metric. - """ - - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - - if isinstance(result_files, dict): - results_dict = dict() - for name in result_names: - print('Evaluating bboxes of {}'.format(name)) - ret_dict = self._evaluate_single(result_files[name]) - results_dict.update(ret_dict) - elif isinstance(result_files, str): - results_dict = self._evaluate_single(result_files) - - if tmp_dir is not None: - tmp_dir.cleanup() - - if show or out_dir: - self.show(results, out_dir, pipeline=pipeline) - return results_dict - - def _extract_data(self, index, pipeline, key, load_annos=False): - """Load data using input pipeline and extract data according to key. - - Args: - index (int): Index for accessing the target data. - pipeline (:obj:`Compose`): Composed data loading pipeline. - key (str | list[str]): One single or a list of data key. - load_annos (bool): Whether to load data annotations. - If True, need to set self.test_mode as False before loading. - - Returns: - np.ndarray | torch.Tensor | list[np.ndarray | torch.Tensor]: - A single or a list of loaded data. - """ - assert pipeline is not None, 'data loading pipeline is not provided' - img_info = self.data_infos[index] - input_dict = dict(img_info=img_info) - - if load_annos: - ann_info = self.get_ann_info(index) - input_dict.update(dict(ann_info=ann_info)) - - self.pre_pipeline(input_dict) - example = pipeline(input_dict) - - # extract data items according to keys - if isinstance(key, str): - data = extract_result_dict(example, key) - else: - data = [extract_result_dict(example, k) for k in key] - - return data - - def _get_pipeline(self, pipeline): - """Get data loading pipeline in self.show/evaluate function. - - Args: - pipeline (list[dict]): Input pipeline. If None is given, - get from self.pipeline. - """ - if pipeline is None: - if not hasattr(self, 'pipeline') or self.pipeline is None: - warnings.warn( - 'Use default pipeline for data loading, this may cause ' - 'errors when data is on ceph') - return self._build_default_pipeline() - loading_pipeline = get_loading_pipeline(self.pipeline.transforms) - return Compose(loading_pipeline) - return Compose(pipeline) - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['img']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=False, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Whether to visualize the results online. 
- Default: False. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' - pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - if 'img_bbox' in result.keys(): - result = result['img_bbox'] - data_info = self.data_infos[i] - img_path = data_info['file_name'] - file_name = osp.split(img_path)[-1].split('.')[0] - img, img_metas = self._extract_data(i, pipeline, - ['img', 'img_metas']) - # need to transpose channel to first dim - img = img.numpy().transpose(1, 2, 0) - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'] - pred_bboxes = result['boxes_3d'] - show_multi_modality_result( - img, - gt_bboxes, - pred_bboxes, - img_metas['cam2img'], - out_dir, - file_name, - box_mode='camera', - show=show) - - -def output_to_nusc_box(detection): - """Convert the output to the box class in the nuScenes. - - Args: - detection (dict): Detection results. - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox. - - scores_3d (torch.Tensor): Detection scores. - - labels_3d (torch.Tensor): Predicted box labels. - - attrs_3d (torch.Tensor, optional): Predicted attributes. - - Returns: - list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes. - """ - box3d = detection['boxes_3d'] - scores = detection['scores_3d'].numpy() - labels = detection['labels_3d'].numpy() - attrs = None - if 'attrs_3d' in detection: - attrs = detection['attrs_3d'].numpy() - - box_gravity_center = box3d.gravity_center.numpy() - box_dims = box3d.dims.numpy() - box_yaw = box3d.yaw.numpy() - - # convert the dim/rot to nuscbox convention - box_dims[:, [0, 1, 2]] = box_dims[:, [2, 0, 1]] - box_yaw = -box_yaw - - box_list = [] - for i in range(len(box3d)): - q1 = pyquaternion.Quaternion(axis=[0, 0, 1], radians=box_yaw[i]) - q2 = pyquaternion.Quaternion(axis=[1, 0, 0], radians=np.pi / 2) - quat = q2 * q1 - velocity = (box3d.tensor[i, 7], 0.0, box3d.tensor[i, 8]) - box = NuScenesBox( - box_gravity_center[i], - box_dims[i], - quat, - label=labels[i], - score=scores[i], - velocity=velocity) - box_list.append(box) - return box_list, attrs - - -def cam_nusc_box_to_global(info, - boxes, - attrs, - classes, - eval_configs, - eval_version='detection_cvpr_2019'): - """Convert the box from camera to global coordinate. - - Args: - info (dict): Info for a specific sample data, including the - calibration information. - boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes. - classes (list[str]): Mapped classes in the evaluation. - eval_configs (object): Evaluation configuration object. - eval_version (str, optional): Evaluation version. - Default: 'detection_cvpr_2019' - - Returns: - list: List of standard NuScenesBoxes in the global - coordinate. - """ - box_list = [] - attr_list = [] - for (box, attr) in zip(boxes, attrs): - # Move box to ego vehicle coord system - box.rotate(pyquaternion.Quaternion(info['cam2ego_rotation'])) - box.translate(np.array(info['cam2ego_translation'])) - # filter det in ego. 
- cls_range_map = eval_configs.class_range - radius = np.linalg.norm(box.center[:2], 2) - det_range = cls_range_map[classes[box.label]] - if radius > det_range: - continue - # Move box to global coord system - box.rotate(pyquaternion.Quaternion(info['ego2global_rotation'])) - box.translate(np.array(info['ego2global_translation'])) - box_list.append(box) - attr_list.append(attr) - return box_list, attr_list - - -def global_nusc_box_to_cam(info, - boxes, - classes, - eval_configs, - eval_version='detection_cvpr_2019'): - """Convert the box from global to camera coordinate. - - Args: - info (dict): Info for a specific sample data, including the - calibration information. - boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes. - classes (list[str]): Mapped classes in the evaluation. - eval_configs (object): Evaluation configuration object. - eval_version (str, optional): Evaluation version. - Default: 'detection_cvpr_2019' - - Returns: - list: List of standard NuScenesBoxes in the global - coordinate. - """ - box_list = [] - for box in boxes: - # Move box to ego vehicle coord system - box.translate(-np.array(info['ego2global_translation'])) - box.rotate( - pyquaternion.Quaternion(info['ego2global_rotation']).inverse) - # filter det in ego. - cls_range_map = eval_configs.class_range - radius = np.linalg.norm(box.center[:2], 2) - det_range = cls_range_map[classes[box.label]] - if radius > det_range: - continue - # Move box to camera coord system - box.translate(-np.array(info['cam2ego_translation'])) - box.rotate(pyquaternion.Quaternion(info['cam2ego_rotation']).inverse) - box_list.append(box) - return box_list - - -def nusc_box_to_cam_box3d(boxes): - """Convert boxes from :obj:`NuScenesBox` to :obj:`CameraInstance3DBoxes`. - - Args: - boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes. - - Returns: - tuple (:obj:`CameraInstance3DBoxes` | torch.Tensor | torch.Tensor): - Converted 3D bounding boxes, scores and labels. - """ - locs = torch.Tensor([b.center for b in boxes]).view(-1, 3) - dims = torch.Tensor([b.wlh for b in boxes]).view(-1, 3) - rots = torch.Tensor([b.orientation.yaw_pitch_roll[0] - for b in boxes]).view(-1, 1) - velocity = torch.Tensor([b.velocity[0::2] for b in boxes]).view(-1, 2) - - # convert nusbox to cambox convention - dims[:, [0, 1, 2]] = dims[:, [1, 2, 0]] - rots = -rots - - boxes_3d = torch.cat([locs, dims, rots, velocity], dim=1).cuda() - cam_boxes3d = CameraInstance3DBoxes( - boxes_3d, box_dim=9, origin=(0.5, 0.5, 0.5)) - scores = torch.Tensor([b.score for b in boxes]).cuda() - labels = torch.LongTensor([b.label for b in boxes]).cuda() - nms_scores = scores.new_zeros(scores.shape[0], 10 + 1) - indices = labels.new_tensor(list(range(scores.shape[0]))) - nms_scores[indices, labels] = scores - return cam_boxes3d, nms_scores, labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/__init__.py deleted file mode 100644 index 7a5a71d6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
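The inverse chain used here (undo `ego2global`, then undo `cam2ego`) can be written compactly for a single box center; a minimal sketch with NumPy and pyquaternion, handling only the center for brevity:

```python
import numpy as np
from pyquaternion import Quaternion

def global_center_to_cam(center, info):
    # Undo ego -> global, then undo camera -> ego, mirroring the translate /
    # rotate calls above; only the box centre is handled here for brevity.
    c = np.asarray(center, dtype=np.float64)
    c = c - np.asarray(info['ego2global_translation'])
    c = Quaternion(info['ego2global_rotation']).inverse.rotate(c)
    c = c - np.asarray(info['cam2ego_translation'])
    c = Quaternion(info['cam2ego_rotation']).inverse.rotate(c)
    return c
```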
-from .compose import Compose -from .dbsampler import DataBaseSampler -from .formating import Collect3D, DefaultFormatBundle, DefaultFormatBundle3D -from .loading import (LoadAnnotations3D, LoadImageFromFileMono3D, - LoadMultiViewImageFromFiles, LoadPointsFromDict, - LoadPointsFromFile, LoadPointsFromMultiSweeps, - NormalizePointsColor, PointSegClassMapping) -from .test_time_aug import MultiScaleFlipAug3D -# yapf: disable -from .transforms_3d import (AffineResize, BackgroundPointsFilter, - GlobalAlignment, GlobalRotScaleTrans, - IndoorPatchPointSample, IndoorPointSample, - MultiViewWrapper, ObjectNameFilter, ObjectNoise, - ObjectRangeFilter, ObjectSample, PointSample, - PointShuffle, PointsRangeFilter, - RandomDropPointsColor, RandomFlip3D, - RandomJitterPoints, RandomRotate, RandomShiftScale, - RangeLimitedRandomCrop, VoxelBasedPointSampler) - -__all__ = [ - 'ObjectSample', 'RandomFlip3D', 'ObjectNoise', 'GlobalRotScaleTrans', - 'PointShuffle', 'ObjectRangeFilter', 'PointsRangeFilter', 'Collect3D', - 'Compose', 'LoadMultiViewImageFromFiles', 'LoadPointsFromFile', - 'DefaultFormatBundle', 'DefaultFormatBundle3D', 'DataBaseSampler', - 'NormalizePointsColor', 'LoadAnnotations3D', 'IndoorPointSample', - 'PointSample', 'PointSegClassMapping', 'MultiScaleFlipAug3D', - 'LoadPointsFromMultiSweeps', 'BackgroundPointsFilter', - 'VoxelBasedPointSampler', 'GlobalAlignment', 'IndoorPatchPointSample', - 'LoadImageFromFileMono3D', 'ObjectNameFilter', 'RandomDropPointsColor', - 'RandomJitterPoints', 'AffineResize', 'RandomShiftScale', - 'LoadPointsFromDict', 'MultiViewWrapper', 'RandomRotate', - 'RangeLimitedRandomCrop' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/compose.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/compose.py deleted file mode 100644 index 9ab25d9e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/compose.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections - -from mmcv.utils import build_from_cfg - -from mmdet.datasets.builder import PIPELINES as MMDET_PIPELINES -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose: - """Compose multiple transforms sequentially. The pipeline registry of - mmdet3d separates with mmdet, however, sometimes we may need to use mmdet's - pipeline. So the class is rewritten to be able to use pipelines from both - mmdet3d and mmdet. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - _, key = PIPELINES.split_scope_key(transform['type']) - if key in PIPELINES._module_dict.keys(): - transform = build_from_cfg(transform, PIPELINES) - else: - transform = build_from_cfg(transform, MMDET_PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. 
- """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/data_augment_utils.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/data_augment_utils.py deleted file mode 100644 index 21be3c06..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/data_augment_utils.py +++ /dev/null @@ -1,411 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import numba -import numpy as np -from numba.core.errors import NumbaPerformanceWarning - -from mmdet3d.core.bbox import box_np_ops - -warnings.filterwarnings('ignore', category=NumbaPerformanceWarning) - - -@numba.njit -def _rotation_box2d_jit_(corners, angle, rot_mat_T): - """Rotate 2D boxes. - - Args: - corners (np.ndarray): Corners of boxes. - angle (float): Rotation angle. - rot_mat_T (np.ndarray): Transposed rotation matrix. - """ - rot_sin = np.sin(angle) - rot_cos = np.cos(angle) - rot_mat_T[0, 0] = rot_cos - rot_mat_T[0, 1] = rot_sin - rot_mat_T[1, 0] = -rot_sin - rot_mat_T[1, 1] = rot_cos - corners[:] = corners @ rot_mat_T - - -@numba.jit(nopython=True) -def box_collision_test(boxes, qboxes, clockwise=True): - """Box collision test. - - Args: - boxes (np.ndarray): Corners of current boxes. - qboxes (np.ndarray): Boxes to be avoid colliding. - clockwise (bool, optional): Whether the corners are in - clockwise order. Default: True. - """ - N = boxes.shape[0] - K = qboxes.shape[0] - ret = np.zeros((N, K), dtype=np.bool_) - slices = np.array([1, 2, 3, 0]) - lines_boxes = np.stack((boxes, boxes[:, slices, :]), - axis=2) # [N, 4, 2(line), 2(xy)] - lines_qboxes = np.stack((qboxes, qboxes[:, slices, :]), axis=2) - # vec = np.zeros((2,), dtype=boxes.dtype) - boxes_standup = box_np_ops.corner_to_standup_nd_jit(boxes) - qboxes_standup = box_np_ops.corner_to_standup_nd_jit(qboxes) - for i in range(N): - for j in range(K): - # calculate standup first - iw = ( - min(boxes_standup[i, 2], qboxes_standup[j, 2]) - - max(boxes_standup[i, 0], qboxes_standup[j, 0])) - if iw > 0: - ih = ( - min(boxes_standup[i, 3], qboxes_standup[j, 3]) - - max(boxes_standup[i, 1], qboxes_standup[j, 1])) - if ih > 0: - for k in range(4): - for box_l in range(4): - A = lines_boxes[i, k, 0] - B = lines_boxes[i, k, 1] - C = lines_qboxes[j, box_l, 0] - D = lines_qboxes[j, box_l, 1] - acd = (D[1] - A[1]) * (C[0] - - A[0]) > (C[1] - A[1]) * ( - D[0] - A[0]) - bcd = (D[1] - B[1]) * (C[0] - - B[0]) > (C[1] - B[1]) * ( - D[0] - B[0]) - if acd != bcd: - abc = (C[1] - A[1]) * (B[0] - A[0]) > ( - B[1] - A[1]) * ( - C[0] - A[0]) - abd = (D[1] - A[1]) * (B[0] - A[0]) > ( - B[1] - A[1]) * ( - D[0] - A[0]) - if abc != abd: - ret[i, j] = True # collision. - break - if ret[i, j] is True: - break - if ret[i, j] is False: - # now check complete overlap. 
- # box overlap qbox: - box_overlap_qbox = True - for box_l in range(4): # point l in qboxes - for k in range(4): # corner k in boxes - vec = boxes[i, k] - boxes[i, (k + 1) % 4] - if clockwise: - vec = -vec - cross = vec[1] * ( - boxes[i, k, 0] - qboxes[j, box_l, 0]) - cross -= vec[0] * ( - boxes[i, k, 1] - qboxes[j, box_l, 1]) - if cross >= 0: - box_overlap_qbox = False - break - if box_overlap_qbox is False: - break - - if box_overlap_qbox is False: - qbox_overlap_box = True - for box_l in range(4): # point box_l in boxes - for k in range(4): # corner k in qboxes - vec = qboxes[j, k] - qboxes[j, (k + 1) % 4] - if clockwise: - vec = -vec - cross = vec[1] * ( - qboxes[j, k, 0] - boxes[i, box_l, 0]) - cross -= vec[0] * ( - qboxes[j, k, 1] - boxes[i, box_l, 1]) - if cross >= 0: # - qbox_overlap_box = False - break - if qbox_overlap_box is False: - break - if qbox_overlap_box: - ret[i, j] = True # collision. - else: - ret[i, j] = True # collision. - return ret - - -@numba.njit -def noise_per_box(boxes, valid_mask, loc_noises, rot_noises): - """Add noise to every box (only on the horizontal plane). - - Args: - boxes (np.ndarray): Input boxes with shape (N, 5). - valid_mask (np.ndarray): Mask to indicate which boxes are valid - with shape (N). - loc_noises (np.ndarray): Location noises with shape (N, M, 3). - rot_noises (np.ndarray): Rotation noises with shape (N, M). - - Returns: - np.ndarray: Mask to indicate whether the noise is - added successfully (pass the collision test). - """ - num_boxes = boxes.shape[0] - num_tests = loc_noises.shape[1] - box_corners = box_np_ops.box2d_to_corner_jit(boxes) - current_corners = np.zeros((4, 2), dtype=boxes.dtype) - rot_mat_T = np.zeros((2, 2), dtype=boxes.dtype) - success_mask = -np.ones((num_boxes, ), dtype=np.int64) - # print(valid_mask) - for i in range(num_boxes): - if valid_mask[i]: - for j in range(num_tests): - current_corners[:] = box_corners[i] - current_corners -= boxes[i, :2] - _rotation_box2d_jit_(current_corners, rot_noises[i, j], - rot_mat_T) - current_corners += boxes[i, :2] + loc_noises[i, j, :2] - coll_mat = box_collision_test( - current_corners.reshape(1, 4, 2), box_corners) - coll_mat[0, i] = False - # print(coll_mat) - if not coll_mat.any(): - success_mask[i] = j - box_corners[i] = current_corners - break - return success_mask - - -@numba.njit -def noise_per_box_v2_(boxes, valid_mask, loc_noises, rot_noises, - global_rot_noises): - """Add noise to every box (only on the horizontal plane). Version 2 used - when enable global rotations. - - Args: - boxes (np.ndarray): Input boxes with shape (N, 5). - valid_mask (np.ndarray): Mask to indicate which boxes are valid - with shape (N). - loc_noises (np.ndarray): Location noises with shape (N, M, 3). - rot_noises (np.ndarray): Rotation noises with shape (N, M). - - Returns: - np.ndarray: Mask to indicate whether the noise is - added successfully (pass the collision test). 
- """ - num_boxes = boxes.shape[0] - num_tests = loc_noises.shape[1] - box_corners = box_np_ops.box2d_to_corner_jit(boxes) - current_corners = np.zeros((4, 2), dtype=boxes.dtype) - current_box = np.zeros((1, 5), dtype=boxes.dtype) - rot_mat_T = np.zeros((2, 2), dtype=boxes.dtype) - dst_pos = np.zeros((2, ), dtype=boxes.dtype) - success_mask = -np.ones((num_boxes, ), dtype=np.int64) - corners_norm = np.zeros((4, 2), dtype=boxes.dtype) - corners_norm[1, 1] = 1.0 - corners_norm[2] = 1.0 - corners_norm[3, 0] = 1.0 - corners_norm -= np.array([0.5, 0.5], dtype=boxes.dtype) - corners_norm = corners_norm.reshape(4, 2) - for i in range(num_boxes): - if valid_mask[i]: - for j in range(num_tests): - current_box[0, :] = boxes[i] - current_radius = np.sqrt(boxes[i, 0]**2 + boxes[i, 1]**2) - current_grot = np.arctan2(boxes[i, 0], boxes[i, 1]) - dst_grot = current_grot + global_rot_noises[i, j] - dst_pos[0] = current_radius * np.sin(dst_grot) - dst_pos[1] = current_radius * np.cos(dst_grot) - current_box[0, :2] = dst_pos - current_box[0, -1] += (dst_grot - current_grot) - - rot_sin = np.sin(current_box[0, -1]) - rot_cos = np.cos(current_box[0, -1]) - rot_mat_T[0, 0] = rot_cos - rot_mat_T[0, 1] = rot_sin - rot_mat_T[1, 0] = -rot_sin - rot_mat_T[1, 1] = rot_cos - current_corners[:] = current_box[ - 0, 2:4] * corners_norm @ rot_mat_T + current_box[0, :2] - current_corners -= current_box[0, :2] - _rotation_box2d_jit_(current_corners, rot_noises[i, j], - rot_mat_T) - current_corners += current_box[0, :2] + loc_noises[i, j, :2] - coll_mat = box_collision_test( - current_corners.reshape(1, 4, 2), box_corners) - coll_mat[0, i] = False - if not coll_mat.any(): - success_mask[i] = j - box_corners[i] = current_corners - loc_noises[i, j, :2] += (dst_pos - boxes[i, :2]) - rot_noises[i, j] += (dst_grot - current_grot) - break - return success_mask - - -def _select_transform(transform, indices): - """Select transform. - - Args: - transform (np.ndarray): Transforms to select from. - indices (np.ndarray): Mask to indicate which transform to select. - - Returns: - np.ndarray: Selected transforms. - """ - result = np.zeros((transform.shape[0], *transform.shape[2:]), - dtype=transform.dtype) - for i in range(transform.shape[0]): - if indices[i] != -1: - result[i] = transform[i, indices[i]] - return result - - -@numba.njit -def _rotation_matrix_3d_(rot_mat_T, angle, axis): - """Get the 3D rotation matrix. - - Args: - rot_mat_T (np.ndarray): Transposed rotation matrix. - angle (float): Rotation angle. - axis (int): Rotation axis. - """ - rot_sin = np.sin(angle) - rot_cos = np.cos(angle) - rot_mat_T[:] = np.eye(3) - if axis == 1: - rot_mat_T[0, 0] = rot_cos - rot_mat_T[0, 2] = rot_sin - rot_mat_T[2, 0] = -rot_sin - rot_mat_T[2, 2] = rot_cos - elif axis == 2 or axis == -1: - rot_mat_T[0, 0] = rot_cos - rot_mat_T[0, 1] = rot_sin - rot_mat_T[1, 0] = -rot_sin - rot_mat_T[1, 1] = rot_cos - elif axis == 0: - rot_mat_T[1, 1] = rot_cos - rot_mat_T[1, 2] = rot_sin - rot_mat_T[2, 1] = -rot_sin - rot_mat_T[2, 2] = rot_cos - - -@numba.njit -def points_transform_(points, centers, point_masks, loc_transform, - rot_transform, valid_mask): - """Apply transforms to points and box centers. - - Args: - points (np.ndarray): Input points. - centers (np.ndarray): Input box centers. - point_masks (np.ndarray): Mask to indicate which points need - to be transformed. - loc_transform (np.ndarray): Location transform to be applied. - rot_transform (np.ndarray): Rotation transform to be applied. 
- valid_mask (np.ndarray): Mask to indicate which boxes are valid. - """ - num_box = centers.shape[0] - num_points = points.shape[0] - rot_mat_T = np.zeros((num_box, 3, 3), dtype=points.dtype) - for i in range(num_box): - _rotation_matrix_3d_(rot_mat_T[i], rot_transform[i], 2) - for i in range(num_points): - for j in range(num_box): - if valid_mask[j]: - if point_masks[i, j] == 1: - points[i, :3] -= centers[j, :3] - points[i:i + 1, :3] = points[i:i + 1, :3] @ rot_mat_T[j] - points[i, :3] += centers[j, :3] - points[i, :3] += loc_transform[j] - break # only apply first box's transform - - -@numba.njit -def box3d_transform_(boxes, loc_transform, rot_transform, valid_mask): - """Transform 3D boxes. - - Args: - boxes (np.ndarray): 3D boxes to be transformed. - loc_transform (np.ndarray): Location transform to be applied. - rot_transform (np.ndarray): Rotation transform to be applied. - valid_mask (np.ndarray): Mask to indicate which boxes are valid. - """ - num_box = boxes.shape[0] - for i in range(num_box): - if valid_mask[i]: - boxes[i, :3] += loc_transform[i] - boxes[i, 6] += rot_transform[i] - - -def noise_per_object_v3_(gt_boxes, - points=None, - valid_mask=None, - rotation_perturb=np.pi / 4, - center_noise_std=1.0, - global_random_rot_range=np.pi / 4, - num_try=100): - """Random rotate or remove each groundtruth independently. use kitti viewer - to test this function points_transform_ - - Args: - gt_boxes (np.ndarray): Ground truth boxes with shape (N, 7). - points (np.ndarray, optional): Input point cloud with - shape (M, 4). Default: None. - valid_mask (np.ndarray, optional): Mask to indicate which - boxes are valid. Default: None. - rotation_perturb (float, optional): Rotation perturbation. - Default: pi / 4. - center_noise_std (float, optional): Center noise standard deviation. - Default: 1.0. - global_random_rot_range (float, optional): Global random rotation - range. Default: pi/4. - num_try (int, optional): Number of try. Default: 100. - """ - num_boxes = gt_boxes.shape[0] - if not isinstance(rotation_perturb, (list, tuple, np.ndarray)): - rotation_perturb = [-rotation_perturb, rotation_perturb] - if not isinstance(global_random_rot_range, (list, tuple, np.ndarray)): - global_random_rot_range = [ - -global_random_rot_range, global_random_rot_range - ] - enable_grot = np.abs(global_random_rot_range[0] - - global_random_rot_range[1]) >= 1e-3 - - if not isinstance(center_noise_std, (list, tuple, np.ndarray)): - center_noise_std = [ - center_noise_std, center_noise_std, center_noise_std - ] - if valid_mask is None: - valid_mask = np.ones((num_boxes, ), dtype=np.bool_) - center_noise_std = np.array(center_noise_std, dtype=gt_boxes.dtype) - - loc_noises = np.random.normal( - scale=center_noise_std, size=[num_boxes, num_try, 3]) - rot_noises = np.random.uniform( - rotation_perturb[0], rotation_perturb[1], size=[num_boxes, num_try]) - gt_grots = np.arctan2(gt_boxes[:, 0], gt_boxes[:, 1]) - grot_lowers = global_random_rot_range[0] - gt_grots - grot_uppers = global_random_rot_range[1] - gt_grots - global_rot_noises = np.random.uniform( - grot_lowers[..., np.newaxis], - grot_uppers[..., np.newaxis], - size=[num_boxes, num_try]) - - origin = (0.5, 0.5, 0) - gt_box_corners = box_np_ops.center_to_corner_box3d( - gt_boxes[:, :3], - gt_boxes[:, 3:6], - gt_boxes[:, 6], - origin=origin, - axis=2) - - # TODO: rewrite this noise box function? 
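For each box that passes the collision test, the selected noise is applied to its points by rotating them about the box center and shifting them by the sampled offset. A NumPy sketch of that per-box transform for a single box:

```python
import numpy as np

def transform_points_in_box(points, center, loc_noise, rot_noise):
    # Rotate the box's points about the box centre (vertical axis) and then
    # shift them by the sampled location noise; points is (N, 3).
    rot_sin, rot_cos = np.sin(rot_noise), np.cos(rot_noise)
    # same transposed-matrix convention as _rotation_matrix_3d_ with axis=2
    rot_mat_T = np.array([[rot_cos, rot_sin, 0.0],
                          [-rot_sin, rot_cos, 0.0],
                          [0.0, 0.0, 1.0]], dtype=points.dtype)
    pts = points - center            # move into the box frame
    pts = pts @ rot_mat_T            # rotate about the vertical axis
    return pts + center + loc_noise  # move back and apply location noise
```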
- if not enable_grot: - selected_noise = noise_per_box(gt_boxes[:, [0, 1, 3, 4, 6]], - valid_mask, loc_noises, rot_noises) - else: - selected_noise = noise_per_box_v2_(gt_boxes[:, [0, 1, 3, 4, 6]], - valid_mask, loc_noises, rot_noises, - global_rot_noises) - - loc_transforms = _select_transform(loc_noises, selected_noise) - rot_transforms = _select_transform(rot_noises, selected_noise) - surfaces = box_np_ops.corner_to_surfaces_3d_jit(gt_box_corners) - if points is not None: - # TODO: replace this points_in_convex function by my tools? - point_masks = box_np_ops.points_in_convex_polygon_3d_jit( - points[:, :3], surfaces) - points_transform_(points, gt_boxes[:, :3], point_masks, loc_transforms, - rot_transforms, valid_mask) - - box3d_transform_(gt_boxes, loc_transforms, rot_transforms, valid_mask) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/dbsampler.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/dbsampler.py deleted file mode 100644 index ef82c88e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/dbsampler.py +++ /dev/null @@ -1,340 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import os -import warnings - -import mmcv -import numpy as np - -from mmdet3d.core.bbox import box_np_ops -from mmdet3d.datasets.pipelines import data_augment_utils -from ..builder import OBJECTSAMPLERS, PIPELINES - - -class BatchSampler: - """Class for sampling specific category of ground truths. - - Args: - sample_list (list[dict]): List of samples. - name (str, optional): The category of samples. Default: None. - epoch (int, optional): Sampling epoch. Default: None. - shuffle (bool, optional): Whether to shuffle indices. Default: False. - drop_reminder (bool, optional): Drop reminder. Default: False. - """ - - def __init__(self, - sampled_list, - name=None, - epoch=None, - shuffle=True, - drop_reminder=False): - self._sampled_list = sampled_list - self._indices = np.arange(len(sampled_list)) - if shuffle: - np.random.shuffle(self._indices) - self._idx = 0 - self._example_num = len(sampled_list) - self._name = name - self._shuffle = shuffle - self._epoch = epoch - self._epoch_counter = 0 - self._drop_reminder = drop_reminder - - def _sample(self, num): - """Sample specific number of ground truths and return indices. - - Args: - num (int): Sampled number. - - Returns: - list[int]: Indices of sampled ground truths. - """ - if self._idx + num >= self._example_num: - ret = self._indices[self._idx:].copy() - self._reset() - else: - ret = self._indices[self._idx:self._idx + num] - self._idx += num - return ret - - def _reset(self): - """Reset the index of batchsampler to zero.""" - assert self._name is not None - # print("reset", self._name) - if self._shuffle: - np.random.shuffle(self._indices) - self._idx = 0 - - def sample(self, num): - """Sample specific number of ground truths. - - Args: - num (int): Sampled number. - - Returns: - list[dict]: Sampled ground truths. - """ - indices = self._sample(num) - return [self._sampled_list[i] for i in indices] - - -@OBJECTSAMPLERS.register_module() -class DataBaseSampler(object): - """Class for sampling data from the ground truth database. - - Args: - info_path (str): Path of groundtruth database info. - data_root (str): Path of groundtruth database. - rate (float): Rate of actual sampled over maximum sampled number. - prepare (dict): Name of preparation functions and the input value. - sample_groups (dict): Sampled classes and numbers. - classes (list[str], optional): List of classes. 
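A short usage sketch of `BatchSampler` with stand-in database infos; once a category's index list is exhausted it returns the remaining items and reshuffles for the next call:

```python
# Usage sketch of the BatchSampler defined above, with stand-in infos.
car_infos = [dict(name='car', idx=i) for i in range(5)]
sampler = BatchSampler(car_infos, name='car', shuffle=True)

first = sampler.sample(3)    # three randomly ordered car ground truths
second = sampler.sample(3)   # only the two remaining items; the index list
                             # is then reshuffled for the next call
```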
Default: None. - points_loader(dict, optional): Config of points loader. Default: - dict(type='LoadPointsFromFile', load_dim=4, use_dim=[0,1,2,3]) - """ - - def __init__(self, - info_path, - data_root, - rate, - prepare, - sample_groups, - classes=None, - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=[0, 1, 2, 3]), - file_client_args=dict(backend='disk')): - super().__init__() - self.data_root = data_root - self.info_path = info_path - self.rate = rate - self.prepare = prepare - self.classes = classes - self.cat2label = {name: i for i, name in enumerate(classes)} - self.label2cat = {i: name for i, name in enumerate(classes)} - self.points_loader = mmcv.build_from_cfg(points_loader, PIPELINES) - self.file_client = mmcv.FileClient(**file_client_args) - - # load data base infos - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(info_path) as local_path: - # loading data from a file-like object needs file format - db_infos = mmcv.load(open(local_path, 'rb'), file_format='pkl') - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {info_path} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - db_infos = mmcv.load(info_path) - - # filter database infos - from mmdet3d.utils import get_root_logger - logger = get_root_logger() - for k, v in db_infos.items(): - logger.info(f'load {len(v)} {k} database infos') - for prep_func, val in prepare.items(): - db_infos = getattr(self, prep_func)(db_infos, val) - logger.info('After filter database:') - for k, v in db_infos.items(): - logger.info(f'load {len(v)} {k} database infos') - - self.db_infos = db_infos - - # load sample groups - # TODO: more elegant way to load sample groups - self.sample_groups = [] - for name, num in sample_groups.items(): - self.sample_groups.append({name: int(num)}) - - self.group_db_infos = self.db_infos # just use db_infos - self.sample_classes = [] - self.sample_max_nums = [] - for group_info in self.sample_groups: - self.sample_classes += list(group_info.keys()) - self.sample_max_nums += list(group_info.values()) - - self.sampler_dict = {} - for k, v in self.group_db_infos.items(): - self.sampler_dict[k] = BatchSampler(v, k, shuffle=True) - # TODO: No group_sampling currently - - @staticmethod - def filter_by_difficulty(db_infos, removed_difficulty): - """Filter ground truths by difficulties. - - Args: - db_infos (dict): Info of groundtruth database. - removed_difficulty (list): Difficulties that are not qualified. - - Returns: - dict: Info of database after filtering. - """ - new_db_infos = {} - for key, dinfos in db_infos.items(): - new_db_infos[key] = [ - info for info in dinfos - if info['difficulty'] not in removed_difficulty - ] - return new_db_infos - - @staticmethod - def filter_by_min_points(db_infos, min_gt_points_dict): - """Filter ground truths by number of points in the bbox. - - Args: - db_infos (dict): Info of groundtruth database. - min_gt_points_dict (dict): Different number of minimum points - needed for different categories of ground truths. - - Returns: - dict: Info of database after filtering. 
- """ - for name, min_num in min_gt_points_dict.items(): - min_num = int(min_num) - if min_num > 0: - filtered_infos = [] - for info in db_infos[name]: - if info['num_points_in_gt'] >= min_num: - filtered_infos.append(info) - db_infos[name] = filtered_infos - return db_infos - - def sample_all(self, gt_bboxes, gt_labels, img=None, ground_plane=None): - """Sampling all categories of bboxes. - - Args: - gt_bboxes (np.ndarray): Ground truth bounding boxes. - gt_labels (np.ndarray): Ground truth labels of boxes. - - Returns: - dict: Dict of sampled 'pseudo ground truths'. - - - gt_labels_3d (np.ndarray): ground truths labels - of sampled objects. - - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): - sampled ground truth 3D bounding boxes - - points (np.ndarray): sampled points - - group_ids (np.ndarray): ids of sampled ground truths - """ - sampled_num_dict = {} - sample_num_per_class = [] - for class_name, max_sample_num in zip(self.sample_classes, - self.sample_max_nums): - class_label = self.cat2label[class_name] - # sampled_num = int(max_sample_num - - # np.sum([n == class_name for n in gt_names])) - sampled_num = int(max_sample_num - - np.sum([n == class_label for n in gt_labels])) - sampled_num = np.round(self.rate * sampled_num).astype(np.int64) - sampled_num_dict[class_name] = sampled_num - sample_num_per_class.append(sampled_num) - - sampled = [] - sampled_gt_bboxes = [] - avoid_coll_boxes = gt_bboxes - - for class_name, sampled_num in zip(self.sample_classes, - sample_num_per_class): - if sampled_num > 0: - sampled_cls = self.sample_class_v2(class_name, sampled_num, - avoid_coll_boxes) - - sampled += sampled_cls - if len(sampled_cls) > 0: - if len(sampled_cls) == 1: - sampled_gt_box = sampled_cls[0]['box3d_lidar'][ - np.newaxis, ...] - else: - sampled_gt_box = np.stack( - [s['box3d_lidar'] for s in sampled_cls], axis=0) - - sampled_gt_bboxes += [sampled_gt_box] - avoid_coll_boxes = np.concatenate( - [avoid_coll_boxes, sampled_gt_box], axis=0) - - ret = None - if len(sampled) > 0: - sampled_gt_bboxes = np.concatenate(sampled_gt_bboxes, axis=0) - # center = sampled_gt_bboxes[:, 0:3] - - # num_sampled = len(sampled) - s_points_list = [] - count = 0 - for info in sampled: - file_path = os.path.join( - self.data_root, - info['path']) if self.data_root else info['path'] - results = dict(pts_filename=file_path) - s_points = self.points_loader(results)['points'] - s_points.translate(info['box3d_lidar'][:3]) - - count += 1 - - s_points_list.append(s_points) - - gt_labels = np.array([self.cat2label[s['name']] for s in sampled], - dtype=np.long) - - if ground_plane is not None: - xyz = sampled_gt_bboxes[:, :3] - dz = (ground_plane[:3][None, :] * - xyz).sum(-1) + ground_plane[3] - sampled_gt_bboxes[:, 2] -= dz - for i, s_points in enumerate(s_points_list): - s_points.tensor[:, 2].sub_(dz[i]) - - ret = { - 'gt_labels_3d': - gt_labels, - 'gt_bboxes_3d': - sampled_gt_bboxes, - 'points': - s_points_list[0].cat(s_points_list), - 'group_ids': - np.arange(gt_bboxes.shape[0], - gt_bboxes.shape[0] + len(sampled)) - } - - return ret - - def sample_class_v2(self, name, num, gt_bboxes): - """Sampling specific categories of bounding boxes. - - Args: - name (str): Class of objects to be sampled. - num (int): Number of sampled bboxes. - gt_bboxes (np.ndarray): Ground truth boxes. - - Returns: - list[dict]: Valid samples after collision test. 
- """ - sampled = self.sampler_dict[name].sample(num) - sampled = copy.deepcopy(sampled) - num_gt = gt_bboxes.shape[0] - num_sampled = len(sampled) - gt_bboxes_bv = box_np_ops.center_to_corner_box2d( - gt_bboxes[:, 0:2], gt_bboxes[:, 3:5], gt_bboxes[:, 6]) - - sp_boxes = np.stack([i['box3d_lidar'] for i in sampled], axis=0) - boxes = np.concatenate([gt_bboxes, sp_boxes], axis=0).copy() - - sp_boxes_new = boxes[gt_bboxes.shape[0]:] - sp_boxes_bv = box_np_ops.center_to_corner_box2d( - sp_boxes_new[:, 0:2], sp_boxes_new[:, 3:5], sp_boxes_new[:, 6]) - - total_bv = np.concatenate([gt_bboxes_bv, sp_boxes_bv], axis=0) - coll_mat = data_augment_utils.box_collision_test(total_bv, total_bv) - diag = np.arange(total_bv.shape[0]) - coll_mat[diag, diag] = False - - valid_samples = [] - for i in range(num_gt, num_gt + num_sampled): - if coll_mat[i].any(): - coll_mat[i] = False - coll_mat[:, i] = False - else: - valid_samples.append(sampled[i - num_gt]) - return valid_samples diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/formating.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/formating.py deleted file mode 100644 index 94a62e65..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/formating.py +++ /dev/null @@ -1,266 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -from mmcv.parallel import DataContainer as DC - -from mmdet3d.core.bbox import BaseInstance3DBoxes -from mmdet3d.core.points import BasePoints -from mmdet.datasets.pipelines import to_tensor -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class DefaultFormatBundle(object): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img", - "proposals", "gt_bboxes", "gt_labels", "gt_masks" and "gt_semantic_seg". - These fields are formatted as follows. - - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - proposals: (1)to tensor, (2)to DataContainer - - gt_bboxes: (1)to tensor, (2)to DataContainer - - gt_bboxes_ignore: (1)to tensor, (2)to DataContainer - - gt_labels: (1)to tensor, (2)to DataContainer - - gt_masks: (1)to tensor, (2)to DataContainer (cpu_only=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, - (3)to DataContainer (stack=True) - """ - - def __init__(self, ): - return - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with - default bundle. 
- """ - if 'img' in results: - if isinstance(results['img'], list): - # process multiple imgs in single frame - imgs = [img.transpose(2, 0, 1) for img in results['img']] - imgs = np.ascontiguousarray(np.stack(imgs, axis=0)) - results['img'] = DC(to_tensor(imgs), stack=True) - else: - img = np.ascontiguousarray(results['img'].transpose(2, 0, 1)) - results['img'] = DC(to_tensor(img), stack=True) - for key in [ - 'proposals', 'gt_bboxes', 'gt_bboxes_ignore', 'gt_labels', - 'gt_labels_3d', 'attr_labels', 'pts_instance_mask', - 'pts_semantic_mask', 'centers2d', 'depths' - ]: - if key not in results: - continue - if isinstance(results[key], list): - results[key] = DC([to_tensor(res) for res in results[key]]) - else: - results[key] = DC(to_tensor(results[key])) - if 'gt_bboxes_3d' in results: - if isinstance(results['gt_bboxes_3d'], BaseInstance3DBoxes): - results['gt_bboxes_3d'] = DC( - results['gt_bboxes_3d'], cpu_only=True) - else: - results['gt_bboxes_3d'] = DC( - to_tensor(results['gt_bboxes_3d'])) - - if 'gt_masks' in results: - results['gt_masks'] = DC(results['gt_masks'], cpu_only=True) - if 'gt_semantic_seg' in results: - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, ...]), stack=True) - - return results - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class Collect3D(object): - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "proposals", "gt_bboxes", - "gt_bboxes_ignore", "gt_labels", and/or "gt_masks". - - The "img_meta" item is always populated. The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - 'img_shape': shape of the image input to the network as a tuple - (h, w, c). Note that images may be zero padded on the - bottom/right if the batch tensor is larger than this shape. - - 'scale_factor': a float indicating the preprocessing scale - - 'flip': a boolean indicating if image flip transform was used - - 'filename': path to the image file - - 'ori_shape': original shape of the image as a tuple (h, w, c) - - 'pad_shape': image shape after padding - - 'lidar2img': transform from lidar to image - - 'depth2img': transform from depth to image - - 'cam2img': transform from camera to image - - 'pcd_horizontal_flip': a boolean indicating if point cloud is - flipped horizontally - - 'pcd_vertical_flip': a boolean indicating if point cloud is - flipped vertically - - 'box_mode_3d': 3D box mode - - 'box_type_3d': 3D box type - - 'img_norm_cfg': a dict of normalization information: - - mean: per channel mean subtraction - - std: per channel std divisor - - to_rgb: bool indicating if bgr was converted to rgb - - 'pcd_trans': point cloud transformations - - 'sample_idx': sample index - - 'pcd_scale_factor': point cloud scale factor - - 'pcd_rotation': rotation applied to point cloud - - 'pts_filename': path to point cloud file. - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. 
- Default: ('filename', 'ori_shape', 'img_shape', 'lidar2img', - 'depth2img', 'cam2img', 'pad_shape', 'scale_factor', 'flip', - 'pcd_horizontal_flip', 'pcd_vertical_flip', 'box_mode_3d', - 'box_type_3d', 'img_norm_cfg', 'pcd_trans', - 'sample_idx', 'pcd_scale_factor', 'pcd_rotation', 'pts_filename') - """ - - def __init__( - self, - keys, - meta_keys=('filename', 'ori_shape', 'img_shape', 'lidar2img', - 'depth2img', 'cam2img', 'pad_shape', 'scale_factor', 'flip', - 'pcd_horizontal_flip', 'pcd_vertical_flip', 'box_mode_3d', - 'box_type_3d', 'img_norm_cfg', 'pcd_trans', 'sample_idx', - 'pcd_scale_factor', 'pcd_rotation', 'pcd_rotation_angle', - 'pts_filename', 'transformation_3d_flow', 'trans_mat', - 'affine_aug')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to collect. - - Returns: - dict: The result dict contains the following keys - - keys in ``self.keys`` - - ``img_metas`` - """ - data = {} - img_metas = {} - for key in self.meta_keys: - if key in results: - img_metas[key] = results[key] - - data['img_metas'] = DC(img_metas, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - """str: Return a string that describes the module.""" - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' - - -@PIPELINES.register_module() -class DefaultFormatBundle3D(DefaultFormatBundle): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields for voxels, - including "proposals", "gt_bboxes", "gt_labels", "gt_masks" and - "gt_semantic_seg". - These fields are formatted as follows. - - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - proposals: (1)to tensor, (2)to DataContainer - - gt_bboxes: (1)to tensor, (2)to DataContainer - - gt_bboxes_ignore: (1)to tensor, (2)to DataContainer - - gt_labels: (1)to tensor, (2)to DataContainer - """ - - def __init__(self, class_names, with_gt=True, with_label=True): - super(DefaultFormatBundle3D, self).__init__() - self.class_names = class_names - self.with_gt = with_gt - self.with_label = with_label - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with - default bundle. 
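Put together, the two transforms form the tail of a minimal monocular pipeline, mirroring the dataset's default pipeline above (image only, no labels); the class list is a placeholder:

```python
# Minimal monocular test pipeline built from the transforms above.
class_names = ['car', 'truck', 'bus']  # placeholder class list
test_pipeline = [
    dict(type='LoadImageFromFileMono3D'),
    dict(type='DefaultFormatBundle3D', class_names=class_names, with_label=False),
    dict(type='Collect3D', keys=['img']),
]
```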
- """ - # Format 3D data - if 'points' in results: - assert isinstance(results['points'], BasePoints) - results['points'] = DC(results['points'].tensor) - - for key in ['voxels', 'coors', 'voxel_centers', 'num_points']: - if key not in results: - continue - results[key] = DC(to_tensor(results[key]), stack=False) - - if self.with_gt: - # Clean GT bboxes in the final - if 'gt_bboxes_3d_mask' in results: - gt_bboxes_3d_mask = results['gt_bboxes_3d_mask'] - results['gt_bboxes_3d'] = results['gt_bboxes_3d'][ - gt_bboxes_3d_mask] - if 'gt_names_3d' in results: - results['gt_names_3d'] = results['gt_names_3d'][ - gt_bboxes_3d_mask] - if 'centers2d' in results: - results['centers2d'] = results['centers2d'][ - gt_bboxes_3d_mask] - if 'depths' in results: - results['depths'] = results['depths'][gt_bboxes_3d_mask] - if 'gt_bboxes_mask' in results: - gt_bboxes_mask = results['gt_bboxes_mask'] - if 'gt_bboxes' in results: - results['gt_bboxes'] = results['gt_bboxes'][gt_bboxes_mask] - results['gt_names'] = results['gt_names'][gt_bboxes_mask] - if self.with_label: - if 'gt_names' in results and len(results['gt_names']) == 0: - results['gt_labels'] = np.array([], dtype=np.int64) - results['attr_labels'] = np.array([], dtype=np.int64) - elif 'gt_names' in results and isinstance( - results['gt_names'][0], list): - # gt_labels might be a list of list in multi-view setting - results['gt_labels'] = [ - np.array([self.class_names.index(n) for n in res], - dtype=np.int64) for res in results['gt_names'] - ] - elif 'gt_names' in results: - results['gt_labels'] = np.array([ - self.class_names.index(n) for n in results['gt_names'] - ], - dtype=np.int64) - # we still assume one pipeline for one frame LiDAR - # thus, the 3D name is list[string] - if 'gt_names_3d' in results: - results['gt_labels_3d'] = np.array([ - self.class_names.index(n) - for n in results['gt_names_3d'] - ], - dtype=np.int64) - results = super(DefaultFormatBundle3D, self).__call__(results) - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(class_names={self.class_names}, ' - repr_str += f'with_gt={self.with_gt}, with_label={self.with_label})' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/loading.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/loading.py deleted file mode 100644 index bbdcb8ed..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/loading.py +++ /dev/null @@ -1,685 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np - -from mmdet3d.core.points import BasePoints, get_points_type -from mmdet.datasets.pipelines import LoadAnnotations, LoadImageFromFile -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class LoadMultiViewImageFromFiles(object): - """Load multi channel images from a list of separate channel files. - - Expects results['img_filename'] to be a list of filenames. - - Args: - to_float32 (bool, optional): Whether to convert the img to float32. - Defaults to False. - color_type (str, optional): Color type of the file. - Defaults to 'unchanged'. - """ - - def __init__(self, to_float32=False, color_type='unchanged'): - self.to_float32 = to_float32 - self.color_type = color_type - - def __call__(self, results): - """Call function to load multi-view image from files. - - Args: - results (dict): Result dict containing multi-view image filenames. 
- - Returns: - dict: The result dict containing the multi-view image data. - Added keys and values are described below. - - - filename (str): Multi-view image filenames. - - img (np.ndarray): Multi-view image arrays. - - img_shape (tuple[int]): Shape of multi-view image arrays. - - ori_shape (tuple[int]): Shape of original image arrays. - - pad_shape (tuple[int]): Shape of padded image arrays. - - scale_factor (float): Scale factor. - - img_norm_cfg (dict): Normalization configuration of images. - """ - filename = results['img_filename'] - # img is of shape (h, w, c, num_views) - img = np.stack( - [mmcv.imread(name, self.color_type) for name in filename], axis=-1) - if self.to_float32: - img = img.astype(np.float32) - results['filename'] = filename - # unravel to list, see `DefaultFormatBundle` in formatting.py - # which will transpose each image separately and then stack into array - results['img'] = [img[..., i] for i in range(img.shape[-1])] - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(to_float32={self.to_float32}, ' - repr_str += f"color_type='{self.color_type}')" - return repr_str - - -@PIPELINES.register_module() -class LoadImageFromFileMono3D(LoadImageFromFile): - """Load an image from file in monocular 3D object detection. Compared to 2D - detection, additional camera parameters need to be loaded. - - Args: - kwargs (dict): Arguments are the same as those in - :class:`LoadImageFromFile`. - """ - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - super().__call__(results) - results['cam2img'] = results['img_info']['cam_intrinsic'] - return results - - -@PIPELINES.register_module() -class LoadPointsFromMultiSweeps(object): - """Load points from multiple sweeps. - - This is usually used for nuScenes dataset to utilize previous sweeps. - - Args: - sweeps_num (int, optional): Number of sweeps. Defaults to 10. - load_dim (int, optional): Dimension number of the loaded points. - Defaults to 5. - use_dim (list[int], optional): Which dimension to use. - Defaults to [0, 1, 2, 4]. - file_client_args (dict, optional): Config dict of file clients, - refer to - https://github.com/open-mmlab/mmcv/blob/master/mmcv/fileio/file_client.py - for more details. Defaults to dict(backend='disk'). - pad_empty_sweeps (bool, optional): Whether to repeat keyframe when - sweeps is empty. Defaults to False. - remove_close (bool, optional): Whether to remove close points. - Defaults to False. - test_mode (bool, optional): If `test_mode=True`, it will not - randomly sample sweeps but select the nearest N frames. - Defaults to False. 
- """ - - def __init__(self, - sweeps_num=10, - load_dim=5, - use_dim=[0, 1, 2, 4], - file_client_args=dict(backend='disk'), - pad_empty_sweeps=False, - remove_close=False, - test_mode=False): - self.load_dim = load_dim - self.sweeps_num = sweeps_num - self.use_dim = use_dim - self.file_client_args = file_client_args.copy() - self.file_client = None - self.pad_empty_sweeps = pad_empty_sweeps - self.remove_close = remove_close - self.test_mode = test_mode - - def _load_points(self, pts_filename): - """Private function to load point clouds data. - - Args: - pts_filename (str): Filename of point clouds data. - - Returns: - np.ndarray: An array containing point clouds data. - """ - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - try: - pts_bytes = self.file_client.get(pts_filename) - points = np.frombuffer(pts_bytes, dtype=np.float32) - except ConnectionError: - mmcv.check_file_exist(pts_filename) - if pts_filename.endswith('.npy'): - points = np.load(pts_filename) - else: - points = np.fromfile(pts_filename, dtype=np.float32) - return points - - def _remove_close(self, points, radius=1.0): - """Removes point too close within a certain radius from origin. - - Args: - points (np.ndarray | :obj:`BasePoints`): Sweep points. - radius (float, optional): Radius below which points are removed. - Defaults to 1.0. - - Returns: - np.ndarray: Points after removing. - """ - if isinstance(points, np.ndarray): - points_numpy = points - elif isinstance(points, BasePoints): - points_numpy = points.tensor.numpy() - else: - raise NotImplementedError - x_filt = np.abs(points_numpy[:, 0]) < radius - y_filt = np.abs(points_numpy[:, 1]) < radius - not_close = np.logical_not(np.logical_and(x_filt, y_filt)) - return points[not_close] - - def __call__(self, results): - """Call function to load multi-sweep point clouds from files. - - Args: - results (dict): Result dict containing multi-sweep point cloud - filenames. - - Returns: - dict: The result dict containing the multi-sweep points data. - Added key and value are described below. - - - points (np.ndarray | :obj:`BasePoints`): Multi-sweep point - cloud arrays. 
- """ - points = results['points'] - points.tensor[:, 4] = 0 - sweep_points_list = [points] - ts = results['timestamp'] - if self.pad_empty_sweeps and len(results['sweeps']) == 0: - for i in range(self.sweeps_num): - if self.remove_close: - sweep_points_list.append(self._remove_close(points)) - else: - sweep_points_list.append(points) - else: - if len(results['sweeps']) <= self.sweeps_num: - choices = np.arange(len(results['sweeps'])) - elif self.test_mode: - choices = np.arange(self.sweeps_num) - else: - choices = np.random.choice( - len(results['sweeps']), self.sweeps_num, replace=False) - for idx in choices: - sweep = results['sweeps'][idx] - points_sweep = self._load_points(sweep['data_path']) - points_sweep = np.copy(points_sweep).reshape(-1, self.load_dim) - if self.remove_close: - points_sweep = self._remove_close(points_sweep) - sweep_ts = sweep['timestamp'] / 1e6 - points_sweep[:, :3] = points_sweep[:, :3] @ sweep[ - 'sensor2lidar_rotation'].T - points_sweep[:, :3] += sweep['sensor2lidar_translation'] - points_sweep[:, 4] = ts - sweep_ts - points_sweep = points.new_point(points_sweep) - sweep_points_list.append(points_sweep) - - points = points.cat(sweep_points_list) - points = points[:, self.use_dim] - results['points'] = points - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - return f'{self.__class__.__name__}(sweeps_num={self.sweeps_num})' - - -@PIPELINES.register_module() -class PointSegClassMapping(object): - """Map original semantic class to valid category ids. - - Map valid classes as 0~len(valid_cat_ids)-1 and - others as len(valid_cat_ids). - - Args: - valid_cat_ids (tuple[int]): A tuple of valid category. - max_cat_id (int, optional): The max possible cat_id in input - segmentation mask. Defaults to 40. - """ - - def __init__(self, valid_cat_ids, max_cat_id=40): - assert max_cat_id >= np.max(valid_cat_ids), \ - 'max_cat_id should be greater than maximum id in valid_cat_ids' - - self.valid_cat_ids = valid_cat_ids - self.max_cat_id = int(max_cat_id) - - # build cat_id to class index mapping - neg_cls = len(valid_cat_ids) - self.cat_id2class = np.ones( - self.max_cat_id + 1, dtype=np.int) * neg_cls - for cls_idx, cat_id in enumerate(valid_cat_ids): - self.cat_id2class[cat_id] = cls_idx - - def __call__(self, results): - """Call function to map original semantic class to valid category ids. - - Args: - results (dict): Result dict containing point semantic masks. - - Returns: - dict: The result dict containing the mapped category ids. - Updated key and value are described below. - - - pts_semantic_mask (np.ndarray): Mapped semantic masks. - """ - assert 'pts_semantic_mask' in results - pts_semantic_mask = results['pts_semantic_mask'] - - converted_pts_sem_mask = self.cat_id2class[pts_semantic_mask] - - results['pts_semantic_mask'] = converted_pts_sem_mask - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(valid_cat_ids={self.valid_cat_ids}, ' - repr_str += f'max_cat_id={self.max_cat_id})' - return repr_str - - -@PIPELINES.register_module() -class NormalizePointsColor(object): - """Normalize color of points. - - Args: - color_mean (list[float]): Mean color of the point cloud. - """ - - def __init__(self, color_mean): - self.color_mean = color_mean - - def __call__(self, results): - """Call function to normalize color of points. - - Args: - results (dict): Result dict containing point clouds data. 
- - Returns: - dict: The result dict containing the normalized points. - Updated key and value are described below. - - - points (:obj:`BasePoints`): Points after color normalization. - """ - points = results['points'] - assert points.attribute_dims is not None and \ - 'color' in points.attribute_dims.keys(), \ - 'Expect points have color attribute' - if self.color_mean is not None: - points.color = points.color - \ - points.color.new_tensor(self.color_mean) - points.color = points.color / 255.0 - results['points'] = points - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(color_mean={self.color_mean})' - return repr_str - - -@PIPELINES.register_module() -class LoadPointsFromFile(object): - """Load Points From File. - - Load points from file. - - Args: - coord_type (str): The type of coordinates of points cloud. - Available options includes: - - 'LIDAR': Points in LiDAR coordinates. - - 'DEPTH': Points in depth coordinates, usually for indoor dataset. - - 'CAMERA': Points in camera coordinates. - load_dim (int, optional): The dimension of the loaded points. - Defaults to 6. - use_dim (list[int], optional): Which dimensions of the points to use. - Defaults to [0, 1, 2]. For KITTI dataset, set use_dim=4 - or use_dim=[0, 1, 2, 3] to use the intensity dimension. - shift_height (bool, optional): Whether to use shifted height. - Defaults to False. - use_color (bool, optional): Whether to use color features. - Defaults to False. - file_client_args (dict, optional): Config dict of file clients, - refer to - https://github.com/open-mmlab/mmcv/blob/master/mmcv/fileio/file_client.py - for more details. Defaults to dict(backend='disk'). - """ - - def __init__(self, - coord_type, - load_dim=6, - use_dim=[0, 1, 2], - shift_height=False, - use_color=False, - file_client_args=dict(backend='disk')): - self.shift_height = shift_height - self.use_color = use_color - if isinstance(use_dim, int): - use_dim = list(range(use_dim)) - assert max(use_dim) < load_dim, \ - f'Expect all used dimensions < {load_dim}, got {use_dim}' - assert coord_type in ['CAMERA', 'LIDAR', 'DEPTH'] - - self.coord_type = coord_type - self.load_dim = load_dim - self.use_dim = use_dim - self.file_client_args = file_client_args.copy() - self.file_client = None - - def _load_points(self, pts_filename): - """Private function to load point clouds data. - - Args: - pts_filename (str): Filename of point clouds data. - - Returns: - np.ndarray: An array containing point clouds data. - """ - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - try: - pts_bytes = self.file_client.get(pts_filename) - points = np.frombuffer(pts_bytes, dtype=np.float32) - except ConnectionError: - mmcv.check_file_exist(pts_filename) - if pts_filename.endswith('.npy'): - points = np.load(pts_filename) - else: - points = np.fromfile(pts_filename, dtype=np.float32) - - return points - - def __call__(self, results): - """Call function to load points data from file. - - Args: - results (dict): Result dict containing point clouds data. - - Returns: - dict: The result dict containing the point clouds data. - Added key and value are described below. - - - points (:obj:`BasePoints`): Point clouds data. 
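Per the `LoadPointsFromFile` docstring above, KITTI-style binaries carry four values per point, so the loader is usually configured with `load_dim=4` and `use_dim=4`. A sketch of such a config entry; it is only a plain dict, and anything beyond the documented keys is an assumption:

```python
# Illustrative pipeline entry; the keys mirror the documented arguments above.
load_points = dict(
    type='LoadPointsFromFile',
    coord_type='LIDAR',  # KITTI point clouds are expressed in LiDAR coordinates
    load_dim=4,          # x, y, z, intensity
    use_dim=4)           # same as use_dim=[0, 1, 2, 3]
```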
- """ - pts_filename = results['pts_filename'] - points = self._load_points(pts_filename) - points = points.reshape(-1, self.load_dim) - points = points[:, self.use_dim] - attribute_dims = None - - if self.shift_height: - floor_height = np.percentile(points[:, 2], 0.99) - height = points[:, 2] - floor_height - points = np.concatenate( - [points[:, :3], - np.expand_dims(height, 1), points[:, 3:]], 1) - attribute_dims = dict(height=3) - - if self.use_color: - assert len(self.use_dim) >= 6 - if attribute_dims is None: - attribute_dims = dict() - attribute_dims.update( - dict(color=[ - points.shape[1] - 3, - points.shape[1] - 2, - points.shape[1] - 1, - ])) - - points_class = get_points_type(self.coord_type) - points = points_class( - points, points_dim=points.shape[-1], attribute_dims=attribute_dims) - results['points'] = points - - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ + '(' - repr_str += f'shift_height={self.shift_height}, ' - repr_str += f'use_color={self.use_color}, ' - repr_str += f'file_client_args={self.file_client_args}, ' - repr_str += f'load_dim={self.load_dim}, ' - repr_str += f'use_dim={self.use_dim})' - return repr_str - - -@PIPELINES.register_module() -class LoadPointsFromDict(LoadPointsFromFile): - """Load Points From Dict.""" - - def __call__(self, results): - assert 'points' in results - return results - - -@PIPELINES.register_module() -class LoadAnnotations3D(LoadAnnotations): - """Load Annotations3D. - - Load instance mask and semantic mask of points and - encapsulate the items into related fields. - - Args: - with_bbox_3d (bool, optional): Whether to load 3D boxes. - Defaults to True. - with_label_3d (bool, optional): Whether to load 3D labels. - Defaults to True. - with_attr_label (bool, optional): Whether to load attribute label. - Defaults to False. - with_mask_3d (bool, optional): Whether to load 3D instance masks. - for points. Defaults to False. - with_seg_3d (bool, optional): Whether to load 3D semantic masks. - for points. Defaults to False. - with_bbox (bool, optional): Whether to load 2D boxes. - Defaults to False. - with_label (bool, optional): Whether to load 2D labels. - Defaults to False. - with_mask (bool, optional): Whether to load 2D instance masks. - Defaults to False. - with_seg (bool, optional): Whether to load 2D semantic masks. - Defaults to False. - with_bbox_depth (bool, optional): Whether to load 2.5D boxes. - Defaults to False. - poly2mask (bool, optional): Whether to convert polygon annotations - to bitmasks. Defaults to True. - seg_3d_dtype (dtype, optional): Dtype of 3D semantic masks. - Defaults to int64 - file_client_args (dict): Config dict of file clients, refer to - https://github.com/open-mmlab/mmcv/blob/master/mmcv/fileio/file_client.py - for more details. 
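A possible `LoadAnnotations3D` entry for a plain 3D detection pipeline, keeping only the box and label loading that the arguments above default to; the dict below is illustrative and not taken from any config in this repository:

```python
load_annotations = dict(
    type='LoadAnnotations3D',
    with_bbox_3d=True,    # load gt_bboxes_3d into bbox3d_fields
    with_label_3d=True,   # load gt_labels_3d
    with_attr_label=False,
    with_mask_3d=False,
    with_seg_3d=False)
```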
- """ - - def __init__(self, - with_bbox_3d=True, - with_label_3d=True, - with_attr_label=False, - with_mask_3d=False, - with_seg_3d=False, - with_bbox=False, - with_label=False, - with_mask=False, - with_seg=False, - with_bbox_depth=False, - poly2mask=True, - seg_3d_dtype=np.int64, - file_client_args=dict(backend='disk')): - super().__init__( - with_bbox, - with_label, - with_mask, - with_seg, - poly2mask, - file_client_args=file_client_args) - self.with_bbox_3d = with_bbox_3d - self.with_bbox_depth = with_bbox_depth - self.with_label_3d = with_label_3d - self.with_attr_label = with_attr_label - self.with_mask_3d = with_mask_3d - self.with_seg_3d = with_seg_3d - self.seg_3d_dtype = seg_3d_dtype - - def _load_bboxes_3d(self, results): - """Private function to load 3D bounding box annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded 3D bounding box annotations. - """ - results['gt_bboxes_3d'] = results['ann_info']['gt_bboxes_3d'] - results['bbox3d_fields'].append('gt_bboxes_3d') - return results - - def _load_bboxes_depth(self, results): - """Private function to load 2.5D bounding box annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded 2.5D bounding box annotations. - """ - results['centers2d'] = results['ann_info']['centers2d'] - results['depths'] = results['ann_info']['depths'] - return results - - def _load_labels_3d(self, results): - """Private function to load label annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded label annotations. - """ - results['gt_labels_3d'] = results['ann_info']['gt_labels_3d'] - return results - - def _load_attr_labels(self, results): - """Private function to load label annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded label annotations. - """ - results['attr_labels'] = results['ann_info']['attr_labels'] - return results - - def _load_masks_3d(self, results): - """Private function to load 3D mask annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded 3D mask annotations. - """ - pts_instance_mask_path = results['ann_info']['pts_instance_mask_path'] - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - try: - mask_bytes = self.file_client.get(pts_instance_mask_path) - pts_instance_mask = np.frombuffer(mask_bytes, dtype=np.int64) - except ConnectionError: - mmcv.check_file_exist(pts_instance_mask_path) - pts_instance_mask = np.fromfile( - pts_instance_mask_path, dtype=np.int64) - - results['pts_instance_mask'] = pts_instance_mask - results['pts_mask_fields'].append('pts_instance_mask') - return results - - def _load_semantic_seg_3d(self, results): - """Private function to load 3D semantic segmentation annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing the semantic segmentation annotations. 
- """ - pts_semantic_mask_path = results['ann_info']['pts_semantic_mask_path'] - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - try: - mask_bytes = self.file_client.get(pts_semantic_mask_path) - # add .copy() to fix read-only bug - pts_semantic_mask = np.frombuffer( - mask_bytes, dtype=self.seg_3d_dtype).copy() - except ConnectionError: - mmcv.check_file_exist(pts_semantic_mask_path) - pts_semantic_mask = np.fromfile( - pts_semantic_mask_path, dtype=np.int64) - - results['pts_semantic_mask'] = pts_semantic_mask - results['pts_seg_fields'].append('pts_semantic_mask') - return results - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded 3D bounding box, label, mask and - semantic segmentation annotations. - """ - results = super().__call__(results) - if self.with_bbox_3d: - results = self._load_bboxes_3d(results) - if results is None: - return None - if self.with_bbox_depth: - results = self._load_bboxes_depth(results) - if results is None: - return None - if self.with_label_3d: - results = self._load_labels_3d(results) - if self.with_attr_label: - results = self._load_attr_labels(results) - if self.with_mask_3d: - results = self._load_masks_3d(results) - if self.with_seg_3d: - results = self._load_semantic_seg_3d(results) - - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}with_bbox_3d={self.with_bbox_3d}, ' - repr_str += f'{indent_str}with_label_3d={self.with_label_3d}, ' - repr_str += f'{indent_str}with_attr_label={self.with_attr_label}, ' - repr_str += f'{indent_str}with_mask_3d={self.with_mask_3d}, ' - repr_str += f'{indent_str}with_seg_3d={self.with_seg_3d}, ' - repr_str += f'{indent_str}with_bbox={self.with_bbox}, ' - repr_str += f'{indent_str}with_label={self.with_label}, ' - repr_str += f'{indent_str}with_mask={self.with_mask}, ' - repr_str += f'{indent_str}with_seg={self.with_seg}, ' - repr_str += f'{indent_str}with_bbox_depth={self.with_bbox_depth}, ' - repr_str += f'{indent_str}poly2mask={self.poly2mask})' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/test_time_aug.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/test_time_aug.py deleted file mode 100644 index d53f1109..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,229 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from copy import deepcopy - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug: - """Test-time augmentation with multiple scales and flipping. An example - configuration is as followed: - - .. code-block:: - img_scale=[(1333, 400), (1333, 800)], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - .. 
code-block:: - dict( - img=[...], - img_shape=[...], - scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] - flip=[False, True, False, True] - ... - ) - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple] | None): Images scales for resizing. - scale_factor (float | list[float] | None): Scale factors for resizing. - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal", "vertical" and "diagonal". If - flip_direction is a list, multiple flip augmentations will be - applied. It has no effect when flip == False. Default: - "horizontal". - """ - - def __init__(self, - transforms, - img_scale=None, - scale_factor=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - assert (img_scale is None) ^ (scale_factor is None), ( - 'Must have but only one variable can be set') - if img_scale is not None: - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.scale_key = 'scale' - assert mmcv.is_list_of(self.img_scale, tuple) - else: - self.img_scale = scale_factor if isinstance( - scale_factor, list) else [scale_factor] - self.scale_key = 'scale_factor' - - self.flip = flip - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - flip_args = [(False, None)] - if self.flip: - flip_args += [(True, direction) - for direction in self.flip_direction] - for scale in self.img_scale: - for flip, direction in flip_args: - _results = results.copy() - _results[self.scale_key] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str - - -@PIPELINES.register_module() -class MultiScaleFlipAug3D(object): - """Test-time augmentation with multiple scales and flipping. - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple]: Images scales for resizing. - pts_scale_ratio (float | list[float]): Points scale ratios for - resizing. - flip (bool, optional): Whether apply flip augmentation. - Defaults to False. - flip_direction (str | list[str], optional): Flip augmentation - directions for images, options are "horizontal" and "vertical". - If flip_direction is list, multiple flip augmentations will - be applied. It has no effect when ``flip == False``. 
- Defaults to "horizontal". - pcd_horizontal_flip (bool, optional): Whether apply horizontal - flip augmentation to point cloud. Defaults to True. - Note that it works only when 'flip' is turned on. - pcd_vertical_flip (bool, optional): Whether apply vertical flip - augmentation to point cloud. Defaults to True. - Note that it works only when 'flip' is turned on. - """ - - def __init__(self, - transforms, - img_scale, - pts_scale_ratio, - flip=False, - flip_direction='horizontal', - pcd_horizontal_flip=False, - pcd_vertical_flip=False): - self.transforms = Compose(transforms) - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.pts_scale_ratio = pts_scale_ratio \ - if isinstance(pts_scale_ratio, list) else[float(pts_scale_ratio)] - - assert mmcv.is_list_of(self.img_scale, tuple) - assert mmcv.is_list_of(self.pts_scale_ratio, float) - - self.flip = flip - self.pcd_horizontal_flip = pcd_horizontal_flip - self.pcd_vertical_flip = pcd_vertical_flip - - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip and not any([(t['type'] == 'RandomFlip3D' - or t['type'] == 'RandomFlip') - for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to augment common fields in results. - - Args: - results (dict): Result dict contains the data to augment. - - Returns: - dict: The result dict contains the data that is augmented with - different scales and flips. - """ - aug_data = [] - - # modified from `flip_aug = [False, True] if self.flip else [False]` - # to reduce unnecessary scenes when using double flip augmentation - # during test time - flip_aug = [True] if self.flip else [False] - pcd_horizontal_flip_aug = [False, True] \ - if self.flip and self.pcd_horizontal_flip else [False] - pcd_vertical_flip_aug = [False, True] \ - if self.flip and self.pcd_vertical_flip else [False] - for scale in self.img_scale: - for pts_scale_ratio in self.pts_scale_ratio: - for flip in flip_aug: - for pcd_horizontal_flip in pcd_horizontal_flip_aug: - for pcd_vertical_flip in pcd_vertical_flip_aug: - for direction in self.flip_direction: - # results.copy will cause bug - # since it is shallow copy - _results = deepcopy(results) - _results['scale'] = scale - _results['flip'] = flip - _results['pcd_scale_factor'] = \ - pts_scale_ratio - _results['flip_direction'] = direction - _results['pcd_horizontal_flip'] = \ - pcd_horizontal_flip - _results['pcd_vertical_flip'] = \ - pcd_vertical_flip - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'pts_scale_ratio={self.pts_scale_ratio}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/transforms_3d.py 
b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/transforms_3d.py deleted file mode 100644 index 46f4765c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/pipelines/transforms_3d.py +++ /dev/null @@ -1,1855 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -import random -import warnings - -import cv2 -import numpy as np -from mmcv import is_tuple_of -from mmcv.utils import build_from_cfg - -from mmdet3d.core import VoxelGenerator -from mmdet3d.core.bbox import (CameraInstance3DBoxes, DepthInstance3DBoxes, - LiDARInstance3DBoxes, box_np_ops) -from mmdet3d.datasets.pipelines.compose import Compose -from mmdet.datasets.pipelines import RandomCrop, RandomFlip, Rotate -from ..builder import OBJECTSAMPLERS, PIPELINES -from .data_augment_utils import noise_per_object_v3_ - - -@PIPELINES.register_module() -class RandomDropPointsColor(object): - r"""Randomly set the color of points to all zeros. - - Once this transform is executed, all the points' color will be dropped. - Refer to `PAConv `_ for more details. - - Args: - drop_ratio (float, optional): The probability of dropping point colors. - Defaults to 0.2. - """ - - def __init__(self, drop_ratio=0.2): - assert isinstance(drop_ratio, (int, float)) and 0 <= drop_ratio <= 1, \ - f'invalid drop_ratio value {drop_ratio}' - self.drop_ratio = drop_ratio - - def __call__(self, input_dict): - """Call function to drop point colors. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after color dropping, - 'points' key is updated in the result dict. - """ - points = input_dict['points'] - assert points.attribute_dims is not None and \ - 'color' in points.attribute_dims, \ - 'Expect points have color attribute' - - # this if-expression is a bit strange - # `RandomDropPointsColor` is used in training 3D segmentor PAConv - # we discovered in our experiments that, using - # `if np.random.rand() > 1.0 - self.drop_ratio` consistently leads to - # better results than using `if np.random.rand() < self.drop_ratio` - # so we keep this hack in our codebase - if np.random.rand() > 1.0 - self.drop_ratio: - points.color = points.color * 0.0 - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(drop_ratio={self.drop_ratio})' - return repr_str - - -@PIPELINES.register_module() -class RandomFlip3D(RandomFlip): - """Flip the points & bbox. - - If the input dict contains the key "flip", then the flag will be used, - otherwise it will be randomly decided by a ratio specified in the init - method. - - Args: - sync_2d (bool, optional): Whether to apply flip according to the 2D - images. If True, it will apply the same flip as that to 2D images. - If False, it will decide whether to flip randomly and independently - to that of 2D images. Defaults to True. - flip_ratio_bev_horizontal (float, optional): The flipping probability - in horizontal direction. Defaults to 0.0. - flip_ratio_bev_vertical (float, optional): The flipping probability - in vertical direction. Defaults to 0.0. 
- """ - - def __init__(self, - sync_2d=True, - flip_ratio_bev_horizontal=0.0, - flip_ratio_bev_vertical=0.0, - **kwargs): - super(RandomFlip3D, self).__init__( - flip_ratio=flip_ratio_bev_horizontal, **kwargs) - self.sync_2d = sync_2d - self.flip_ratio_bev_vertical = flip_ratio_bev_vertical - if flip_ratio_bev_horizontal is not None: - assert isinstance( - flip_ratio_bev_horizontal, - (int, float)) and 0 <= flip_ratio_bev_horizontal <= 1 - if flip_ratio_bev_vertical is not None: - assert isinstance( - flip_ratio_bev_vertical, - (int, float)) and 0 <= flip_ratio_bev_vertical <= 1 - - def random_flip_data_3d(self, input_dict, direction='horizontal'): - """Flip 3D data randomly. - - Args: - input_dict (dict): Result dict from loading pipeline. - direction (str, optional): Flip direction. - Default: 'horizontal'. - - Returns: - dict: Flipped results, 'points', 'bbox3d_fields' keys are - updated in the result dict. - """ - assert direction in ['horizontal', 'vertical'] - # for semantic segmentation task, only points will be flipped. - if 'bbox3d_fields' not in input_dict: - input_dict['points'].flip(direction) - return - if len(input_dict['bbox3d_fields']) == 0: # test mode - input_dict['bbox3d_fields'].append('empty_box3d') - input_dict['empty_box3d'] = input_dict['box_type_3d']( - np.array([], dtype=np.float32)) - assert len(input_dict['bbox3d_fields']) == 1 - for key in input_dict['bbox3d_fields']: - if 'points' in input_dict: - input_dict['points'] = input_dict[key].flip( - direction, points=input_dict['points']) - else: - input_dict[key].flip(direction) - if 'centers2d' in input_dict: - assert self.sync_2d is True and direction == 'horizontal', \ - 'Only support sync_2d=True and horizontal flip with images' - w = input_dict['ori_shape'][1] - input_dict['centers2d'][..., 0] = \ - w - input_dict['centers2d'][..., 0] - # need to modify the horizontal position of camera center - # along u-axis in the image (flip like centers2d) - # ['cam2img'][0][2] = c_u - # see more details and examples at - # https://github.com/open-mmlab/mmdetection3d/pull/744 - input_dict['cam2img'][0][2] = w - input_dict['cam2img'][0][2] - - def __call__(self, input_dict): - """Call function to flip points, values in the ``bbox3d_fields`` and - also flip 2D image and its annotations. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Flipped results, 'flip', 'flip_direction', - 'pcd_horizontal_flip' and 'pcd_vertical_flip' keys are added - into result dict. 
- """ - # flip 2D image and its annotations - super(RandomFlip3D, self).__call__(input_dict) - - if self.sync_2d: - input_dict['pcd_horizontal_flip'] = input_dict['flip'] - input_dict['pcd_vertical_flip'] = False - else: - if 'pcd_horizontal_flip' not in input_dict: - flip_horizontal = True if np.random.rand( - ) < self.flip_ratio else False - input_dict['pcd_horizontal_flip'] = flip_horizontal - if 'pcd_vertical_flip' not in input_dict: - flip_vertical = True if np.random.rand( - ) < self.flip_ratio_bev_vertical else False - input_dict['pcd_vertical_flip'] = flip_vertical - - if 'transformation_3d_flow' not in input_dict: - input_dict['transformation_3d_flow'] = [] - - if input_dict['pcd_horizontal_flip']: - self.random_flip_data_3d(input_dict, 'horizontal') - input_dict['transformation_3d_flow'].extend(['HF']) - if input_dict['pcd_vertical_flip']: - self.random_flip_data_3d(input_dict, 'vertical') - input_dict['transformation_3d_flow'].extend(['VF']) - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(sync_2d={self.sync_2d},' - repr_str += f' flip_ratio_bev_vertical={self.flip_ratio_bev_vertical})' - return repr_str - - -@PIPELINES.register_module() -class MultiViewWrapper(object): - """Wrap transformation from single-view into multi-view. - - The wrapper processes the images from multi-view one by one. For each - image, it constructs a pseudo dict according to the keys specified by the - 'process_fields' parameter. After the transformation is finished, desired - information can be collected by specifying the keys in the 'collected_keys' - parameter. Multi-view images share the same transformation parameters - but do not share the same magnitude when a random transformation is - conducted. - - Args: - transforms (list[dict]): A list of dict specifying the transformations - for the monocular situation. - process_fields (dict): Desired keys that the transformations should - be conducted on. Default to dict(img_fields=['img']). - collected_keys (list[str]): Collect information in transformation - like rotate angles, crop roi, and flip state. - """ - - def __init__(self, - transforms, - process_fields=dict(img_fields=['img']), - collected_keys=[]): - self.transform = Compose(transforms) - self.collected_keys = collected_keys - self.process_fields = process_fields - - def __call__(self, input_dict): - for key in self.collected_keys: - input_dict[key] = [] - for img_id in range(len(input_dict['img'])): - process_dict = self.process_fields.copy() - for field in self.process_fields: - for key in self.process_fields[field]: - process_dict[key] = input_dict[key][img_id] - process_dict = self.transform(process_dict) - for field in self.process_fields: - for key in self.process_fields[field]: - input_dict[key][img_id] = process_dict[key] - for key in self.collected_keys: - input_dict[key].append(process_dict[key]) - return input_dict - - -@PIPELINES.register_module() -class RangeLimitedRandomCrop(RandomCrop): - """Randomly crop image-view objects under a limitation of range. - - Args: - relative_x_offset_range (tuple[float]): Relative range of random crop - in x direction. (x_min, x_max) in [0, 1.0]. Default to (0.0, 1.0). - relative_y_offset_range (tuple[float]): Relative range of random crop - in y direction. (y_min, y_max) in [0, 1.0]. Default to (0.0, 1.0). 
- """ - - def __init__(self, - relative_x_offset_range=(0.0, 1.0), - relative_y_offset_range=(0.0, 1.0), - **kwargs): - super(RangeLimitedRandomCrop, self).__init__(**kwargs) - for range in [relative_x_offset_range, relative_y_offset_range]: - assert 0 <= range[0] <= range[1] <= 1 - self.relative_x_offset_range = relative_x_offset_range - self.relative_y_offset_range = relative_y_offset_range - - def _crop_data(self, results, crop_size, allow_negative_crop): - """Function to randomly crop images. - - Modified from RandomCrop in mmdet==2.25.0 - - Args: - results (dict): Result dict from loading pipeline. - crop_size (tuple): Expected absolute size after cropping, (h, w). - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - assert crop_size[0] > 0 and crop_size[1] > 0 - for key in results.get('img_fields', ['img']): - img = results[key] - margin_h = max(img.shape[0] - crop_size[0], 0) - margin_w = max(img.shape[1] - crop_size[1], 0) - offset_range_h = (margin_h * self.relative_y_offset_range[0], - margin_h * self.relative_y_offset_range[1] + 1) - offset_h = np.random.randint(*offset_range_h) - offset_range_w = (margin_w * self.relative_x_offset_range[0], - margin_w * self.relative_x_offset_range[1] + 1) - offset_w = np.random.randint(*offset_range_w) - crop_y1, crop_y2 = offset_h, offset_h + crop_size[0] - crop_x1, crop_x2 = offset_w, offset_w + crop_size[1] - - # crop the image - img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] - img_shape = img.shape - results[key] = img - results['crop'] = (crop_x1, crop_y1, crop_x2, crop_y2) - results['img_shape'] = img_shape - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - # e.g. gt_bboxes and gt_bboxes_ignore - bbox_offset = np.array([offset_w, offset_h, offset_w, offset_h], - dtype=np.float32) - bboxes = results[key] - bbox_offset - if self.bbox_clip_border: - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - valid_inds = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - # If the crop does not contain any gt-bbox area and - # allow_negative_crop is False, skip this image. - if (key == 'gt_bboxes' and not valid_inds.any() - and not allow_negative_crop): - return None - results[key] = bboxes[valid_inds, :] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - valid_inds.nonzero()[0]].crop( - np.asarray([crop_x1, crop_y1, crop_x2, crop_y2])) - if self.recompute_bbox: - results[key] = results[mask_key].get_bboxes() - - # crop semantic seg - for key in results.get('seg_fields', []): - results[key] = results[key][crop_y1:crop_y2, crop_x1:crop_x2] - - return results - - -@PIPELINES.register_module() -class RandomRotate(Rotate): - """Randomly rotate images. - - The ratation angle is selected uniformly within the interval specified by - the 'range' parameter. - - Args: - range (tuple[float]): Define the range of random rotation. - (angle_min, angle_max) in angle. 
- """ - - def __init__(self, range, **kwargs): - super(RandomRotate, self).__init__(**kwargs) - self.range = range - - def __call__(self, results): - self.angle = np.random.uniform(self.range[0], self.range[1]) - super(RandomRotate, self).__call__(results) - results['rotate'] = self.angle - return results - - -@PIPELINES.register_module() -class RandomJitterPoints(object): - """Randomly jitter point coordinates. - - Different from the global translation in ``GlobalRotScaleTrans``, here we - apply different noises to each point in a scene. - - Args: - jitter_std (list[float]): The standard deviation of jittering noise. - This applies random noise to all points in a 3D scene, which is - sampled from a gaussian distribution whose standard deviation is - set by ``jitter_std``. Defaults to [0.01, 0.01, 0.01] - clip_range (list[float]): Clip the randomly generated jitter - noise into this range. If None is given, don't perform clipping. - Defaults to [-0.05, 0.05] - - Note: - This transform should only be used in point cloud segmentation tasks - because we don't transform ground-truth bboxes accordingly. - For similar transform in detection task, please refer to `ObjectNoise`. - """ - - def __init__(self, - jitter_std=[0.01, 0.01, 0.01], - clip_range=[-0.05, 0.05]): - seq_types = (list, tuple, np.ndarray) - if not isinstance(jitter_std, seq_types): - assert isinstance(jitter_std, (int, float)), \ - f'unsupported jitter_std type {type(jitter_std)}' - jitter_std = [jitter_std, jitter_std, jitter_std] - self.jitter_std = jitter_std - - if clip_range is not None: - if not isinstance(clip_range, seq_types): - assert isinstance(clip_range, (int, float)), \ - f'unsupported clip_range type {type(clip_range)}' - clip_range = [-clip_range, clip_range] - self.clip_range = clip_range - - def __call__(self, input_dict): - """Call function to jitter all the points in the scene. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after adding noise to each point, - 'points' key is updated in the result dict. - """ - points = input_dict['points'] - jitter_std = np.array(self.jitter_std, dtype=np.float32) - jitter_noise = \ - np.random.randn(points.shape[0], 3) * jitter_std[None, :] - if self.clip_range is not None: - jitter_noise = np.clip(jitter_noise, self.clip_range[0], - self.clip_range[1]) - - points.translate(jitter_noise) - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(jitter_std={self.jitter_std},' - repr_str += f' clip_range={self.clip_range})' - return repr_str - - -@PIPELINES.register_module() -class ObjectSample(object): - """Sample GT objects to the data. - - Args: - db_sampler (dict): Config dict of the database sampler. - sample_2d (bool): Whether to also paste 2D image patch to the images - This should be true when applying multi-modality cut-and-paste. - Defaults to False. - use_ground_plane (bool): Whether to use gound plane to adjust the - 3D labels. - """ - - def __init__(self, db_sampler, sample_2d=False, use_ground_plane=False): - self.sampler_cfg = db_sampler - self.sample_2d = sample_2d - if 'type' not in db_sampler.keys(): - db_sampler['type'] = 'DataBaseSampler' - self.db_sampler = build_from_cfg(db_sampler, OBJECTSAMPLERS) - self.use_ground_plane = use_ground_plane - - @staticmethod - def remove_points_in_boxes(points, boxes): - """Remove the points in the sampled bounding boxes. 
- - Args: - points (:obj:`BasePoints`): Input point cloud array. - boxes (np.ndarray): Sampled ground truth boxes. - - Returns: - np.ndarray: Points with those in the boxes removed. - """ - masks = box_np_ops.points_in_rbbox(points.coord.numpy(), boxes) - points = points[np.logical_not(masks.any(-1))] - return points - - def __call__(self, input_dict): - """Call function to sample ground truth objects to the data. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after object sampling augmentation, - 'points', 'gt_bboxes_3d', 'gt_labels_3d' keys are updated - in the result dict. - """ - gt_bboxes_3d = input_dict['gt_bboxes_3d'] - gt_labels_3d = input_dict['gt_labels_3d'] - - if self.use_ground_plane and 'plane' in input_dict['ann_info']: - ground_plane = input_dict['ann_info']['plane'] - input_dict['plane'] = ground_plane - else: - ground_plane = None - # change to float for blending operation - points = input_dict['points'] - if self.sample_2d: - img = input_dict['img'] - gt_bboxes_2d = input_dict['gt_bboxes'] - # Assume for now 3D & 2D bboxes are the same - sampled_dict = self.db_sampler.sample_all( - gt_bboxes_3d.tensor.numpy(), - gt_labels_3d, - gt_bboxes_2d=gt_bboxes_2d, - img=img) - else: - sampled_dict = self.db_sampler.sample_all( - gt_bboxes_3d.tensor.numpy(), - gt_labels_3d, - img=None, - ground_plane=ground_plane) - - if sampled_dict is not None: - sampled_gt_bboxes_3d = sampled_dict['gt_bboxes_3d'] - sampled_points = sampled_dict['points'] - sampled_gt_labels = sampled_dict['gt_labels_3d'] - - gt_labels_3d = np.concatenate([gt_labels_3d, sampled_gt_labels], - axis=0) - gt_bboxes_3d = gt_bboxes_3d.new_box( - np.concatenate( - [gt_bboxes_3d.tensor.numpy(), sampled_gt_bboxes_3d])) - - points = self.remove_points_in_boxes(points, sampled_gt_bboxes_3d) - # check the points dimension - points = points.cat([sampled_points, points]) - - if self.sample_2d: - sampled_gt_bboxes_2d = sampled_dict['gt_bboxes_2d'] - gt_bboxes_2d = np.concatenate( - [gt_bboxes_2d, sampled_gt_bboxes_2d]).astype(np.float32) - - input_dict['gt_bboxes'] = gt_bboxes_2d - input_dict['img'] = sampled_dict['img'] - - input_dict['gt_bboxes_3d'] = gt_bboxes_3d - input_dict['gt_labels_3d'] = gt_labels_3d.astype(np.int64) - input_dict['points'] = points - - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f' sample_2d={self.sample_2d},' - repr_str += f' data_root={self.sampler_cfg.data_root},' - repr_str += f' info_path={self.sampler_cfg.info_path},' - repr_str += f' rate={self.sampler_cfg.rate},' - repr_str += f' prepare={self.sampler_cfg.prepare},' - repr_str += f' classes={self.sampler_cfg.classes},' - repr_str += f' sample_groups={self.sampler_cfg.sample_groups}' - return repr_str - - -@PIPELINES.register_module() -class ObjectNoise(object): - """Apply noise to each GT objects in the scene. - - Args: - translation_std (list[float], optional): Standard deviation of the - distribution where translation noise are sampled from. - Defaults to [0.25, 0.25, 0.25]. - global_rot_range (list[float], optional): Global rotation to the scene. - Defaults to [0.0, 0.0]. - rot_range (list[float], optional): Object rotation range. - Defaults to [-0.15707963267, 0.15707963267]. - num_try (int, optional): Number of times to try if the noise applied is - invalid. Defaults to 100. 
- """ - - def __init__(self, - translation_std=[0.25, 0.25, 0.25], - global_rot_range=[0.0, 0.0], - rot_range=[-0.15707963267, 0.15707963267], - num_try=100): - self.translation_std = translation_std - self.global_rot_range = global_rot_range - self.rot_range = rot_range - self.num_try = num_try - - def __call__(self, input_dict): - """Call function to apply noise to each ground truth in the scene. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after adding noise to each object, - 'points', 'gt_bboxes_3d' keys are updated in the result dict. - """ - gt_bboxes_3d = input_dict['gt_bboxes_3d'] - points = input_dict['points'] - - # TODO: this is inplace operation - numpy_box = gt_bboxes_3d.tensor.numpy() - numpy_points = points.tensor.numpy() - - noise_per_object_v3_( - numpy_box, - numpy_points, - rotation_perturb=self.rot_range, - center_noise_std=self.translation_std, - global_random_rot_range=self.global_rot_range, - num_try=self.num_try) - - input_dict['gt_bboxes_3d'] = gt_bboxes_3d.new_box(numpy_box) - input_dict['points'] = points.new_point(numpy_points) - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(num_try={self.num_try},' - repr_str += f' translation_std={self.translation_std},' - repr_str += f' global_rot_range={self.global_rot_range},' - repr_str += f' rot_range={self.rot_range})' - return repr_str - - -@PIPELINES.register_module() -class GlobalAlignment(object): - """Apply global alignment to 3D scene points by rotation and translation. - - Args: - rotation_axis (int): Rotation axis for points and bboxes rotation. - - Note: - We do not record the applied rotation and translation as in - GlobalRotScaleTrans. Because usually, we do not need to reverse - the alignment step. - For example, ScanNet 3D detection task uses aligned ground-truth - bounding boxes for evaluation. - """ - - def __init__(self, rotation_axis): - self.rotation_axis = rotation_axis - - def _trans_points(self, input_dict, trans_factor): - """Private function to translate points. - - Args: - input_dict (dict): Result dict from loading pipeline. - trans_factor (np.ndarray): Translation vector to be applied. - - Returns: - dict: Results after translation, 'points' is updated in the dict. - """ - input_dict['points'].translate(trans_factor) - - def _rot_points(self, input_dict, rot_mat): - """Private function to rotate bounding boxes and points. - - Args: - input_dict (dict): Result dict from loading pipeline. - rot_mat (np.ndarray): Rotation matrix to be applied. - - Returns: - dict: Results after rotation, 'points' is updated in the dict. - """ - # input should be rot_mat_T so I transpose it here - input_dict['points'].rotate(rot_mat.T) - - def _check_rot_mat(self, rot_mat): - """Check if rotation matrix is valid for self.rotation_axis. - - Args: - rot_mat (np.ndarray): Rotation matrix to be applied. - """ - is_valid = np.allclose(np.linalg.det(rot_mat), 1.0) - valid_array = np.zeros(3) - valid_array[self.rotation_axis] = 1.0 - is_valid &= (rot_mat[self.rotation_axis, :] == valid_array).all() - is_valid &= (rot_mat[:, self.rotation_axis] == valid_array).all() - assert is_valid, f'invalid rotation matrix {rot_mat}' - - def __call__(self, input_dict): - """Call function to shuffle points. - - Args: - input_dict (dict): Result dict from loading pipeline. 
- - Returns: - dict: Results after global alignment, 'points' and keys in - input_dict['bbox3d_fields'] are updated in the result dict. - """ - assert 'axis_align_matrix' in input_dict['ann_info'].keys(), \ - 'axis_align_matrix is not provided in GlobalAlignment' - - axis_align_matrix = input_dict['ann_info']['axis_align_matrix'] - assert axis_align_matrix.shape == (4, 4), \ - f'invalid shape {axis_align_matrix.shape} for axis_align_matrix' - rot_mat = axis_align_matrix[:3, :3] - trans_vec = axis_align_matrix[:3, -1] - - self._check_rot_mat(rot_mat) - self._rot_points(input_dict, rot_mat) - self._trans_points(input_dict, trans_vec) - - return input_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(rotation_axis={self.rotation_axis})' - return repr_str - - -@PIPELINES.register_module() -class GlobalRotScaleTrans(object): - """Apply global rotation, scaling and translation to a 3D scene. - - Args: - rot_range (list[float], optional): Range of rotation angle. - Defaults to [-0.78539816, 0.78539816] (close to [-pi/4, pi/4]). - scale_ratio_range (list[float], optional): Range of scale ratio. - Defaults to [0.95, 1.05]. - translation_std (list[float], optional): The standard deviation of - translation noise applied to a scene, which - is sampled from a gaussian distribution whose standard deviation - is set by ``translation_std``. Defaults to [0, 0, 0] - shift_height (bool, optional): Whether to shift height. - (the fourth dimension of indoor points) when scaling. - Defaults to False. - """ - - def __init__(self, - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0], - shift_height=False): - seq_types = (list, tuple, np.ndarray) - if not isinstance(rot_range, seq_types): - assert isinstance(rot_range, (int, float)), \ - f'unsupported rot_range type {type(rot_range)}' - rot_range = [-rot_range, rot_range] - self.rot_range = rot_range - - assert isinstance(scale_ratio_range, seq_types), \ - f'unsupported scale_ratio_range type {type(scale_ratio_range)}' - self.scale_ratio_range = scale_ratio_range - - if not isinstance(translation_std, seq_types): - assert isinstance(translation_std, (int, float)), \ - f'unsupported translation_std type {type(translation_std)}' - translation_std = [ - translation_std, translation_std, translation_std - ] - assert all([std >= 0 for std in translation_std]), \ - 'translation_std should be positive' - self.translation_std = translation_std - self.shift_height = shift_height - - def _trans_bbox_points(self, input_dict): - """Private function to translate bounding boxes and points. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after translation, 'points', 'pcd_trans' - and keys in input_dict['bbox3d_fields'] are updated - in the result dict. - """ - translation_std = np.array(self.translation_std, dtype=np.float32) - trans_factor = np.random.normal(scale=translation_std, size=3).T - - input_dict['points'].translate(trans_factor) - input_dict['pcd_trans'] = trans_factor - for key in input_dict['bbox3d_fields']: - input_dict[key].translate(trans_factor) - - def _rot_bbox_points(self, input_dict): - """Private function to rotate bounding boxes and points. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after rotation, 'points', 'pcd_rotation' - and keys in input_dict['bbox3d_fields'] are updated - in the result dict. 
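`GlobalRotScaleTrans` applies rotation, scaling and translation in that order; its `__call__` further down records the flow as `['R', 'S', 'T']`. A config sketch using the defaults documented above, with no tuned values added:

```python
global_rot_scale_trans = dict(
    type='GlobalRotScaleTrans',
    rot_range=[-0.78539816, 0.78539816],  # roughly [-pi/4, pi/4]
    scale_ratio_range=[0.95, 1.05],
    translation_std=[0, 0, 0],            # no translation noise by default
    shift_height=False)
```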
- """ - rotation = self.rot_range - noise_rotation = np.random.uniform(rotation[0], rotation[1]) - - # if no bbox in input_dict, only rotate points - if len(input_dict['bbox3d_fields']) == 0: - rot_mat_T = input_dict['points'].rotate(noise_rotation) - input_dict['pcd_rotation'] = rot_mat_T - input_dict['pcd_rotation_angle'] = noise_rotation - return - - # rotate points with bboxes - for key in input_dict['bbox3d_fields']: - if len(input_dict[key].tensor) != 0: - points, rot_mat_T = input_dict[key].rotate( - noise_rotation, input_dict['points']) - input_dict['points'] = points - input_dict['pcd_rotation'] = rot_mat_T - input_dict['pcd_rotation_angle'] = noise_rotation - - def _scale_bbox_points(self, input_dict): - """Private function to scale bounding boxes and points. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after scaling, 'points'and keys in - input_dict['bbox3d_fields'] are updated in the result dict. - """ - scale = input_dict['pcd_scale_factor'] - points = input_dict['points'] - points.scale(scale) - if self.shift_height: - assert 'height' in points.attribute_dims.keys(), \ - 'setting shift_height=True but points have no height attribute' - points.tensor[:, points.attribute_dims['height']] *= scale - input_dict['points'] = points - - for key in input_dict['bbox3d_fields']: - input_dict[key].scale(scale) - - def _random_scale(self, input_dict): - """Private function to randomly set the scale factor. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after scaling, 'pcd_scale_factor' are updated - in the result dict. - """ - scale_factor = np.random.uniform(self.scale_ratio_range[0], - self.scale_ratio_range[1]) - input_dict['pcd_scale_factor'] = scale_factor - - def __call__(self, input_dict): - """Private function to rotate, scale and translate bounding boxes and - points. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after scaling, 'points', 'pcd_rotation', - 'pcd_scale_factor', 'pcd_trans' and keys in - input_dict['bbox3d_fields'] are updated in the result dict. - """ - if 'transformation_3d_flow' not in input_dict: - input_dict['transformation_3d_flow'] = [] - - self._rot_bbox_points(input_dict) - - if 'pcd_scale_factor' not in input_dict: - self._random_scale(input_dict) - self._scale_bbox_points(input_dict) - - self._trans_bbox_points(input_dict) - - input_dict['transformation_3d_flow'].extend(['R', 'S', 'T']) - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(rot_range={self.rot_range},' - repr_str += f' scale_ratio_range={self.scale_ratio_range},' - repr_str += f' translation_std={self.translation_std},' - repr_str += f' shift_height={self.shift_height})' - return repr_str - - -@PIPELINES.register_module() -class PointShuffle(object): - """Shuffle input points.""" - - def __call__(self, input_dict): - """Call function to shuffle points. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after filtering, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. 
- """ - idx = input_dict['points'].shuffle() - idx = idx.numpy() - - pts_instance_mask = input_dict.get('pts_instance_mask', None) - pts_semantic_mask = input_dict.get('pts_semantic_mask', None) - - if pts_instance_mask is not None: - input_dict['pts_instance_mask'] = pts_instance_mask[idx] - - if pts_semantic_mask is not None: - input_dict['pts_semantic_mask'] = pts_semantic_mask[idx] - - return input_dict - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class ObjectRangeFilter(object): - """Filter objects by the range. - - Args: - point_cloud_range (list[float]): Point cloud range. - """ - - def __init__(self, point_cloud_range): - self.pcd_range = np.array(point_cloud_range, dtype=np.float32) - - def __call__(self, input_dict): - """Call function to filter objects by the range. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after filtering, 'gt_bboxes_3d', 'gt_labels_3d' - keys are updated in the result dict. - """ - # Check points instance type and initialise bev_range - if isinstance(input_dict['gt_bboxes_3d'], - (LiDARInstance3DBoxes, DepthInstance3DBoxes)): - bev_range = self.pcd_range[[0, 1, 3, 4]] - elif isinstance(input_dict['gt_bboxes_3d'], CameraInstance3DBoxes): - bev_range = self.pcd_range[[0, 2, 3, 5]] - - gt_bboxes_3d = input_dict['gt_bboxes_3d'] - gt_labels_3d = input_dict['gt_labels_3d'] - mask = gt_bboxes_3d.in_range_bev(bev_range) - gt_bboxes_3d = gt_bboxes_3d[mask] - # mask is a torch tensor but gt_labels_3d is still numpy array - # using mask to index gt_labels_3d will cause bug when - # len(gt_labels_3d) == 1, where mask=1 will be interpreted - # as gt_labels_3d[1] and cause out of index error - gt_labels_3d = gt_labels_3d[mask.numpy().astype(np.bool)] - - # limit rad to [-pi, pi] - gt_bboxes_3d.limit_yaw(offset=0.5, period=2 * np.pi) - input_dict['gt_bboxes_3d'] = gt_bboxes_3d - input_dict['gt_labels_3d'] = gt_labels_3d - - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(point_cloud_range={self.pcd_range.tolist()})' - return repr_str - - -@PIPELINES.register_module() -class PointsRangeFilter(object): - """Filter points by the range. - - Args: - point_cloud_range (list[float]): Point cloud range. - """ - - def __init__(self, point_cloud_range): - self.pcd_range = np.array(point_cloud_range, dtype=np.float32) - - def __call__(self, input_dict): - """Call function to filter points by the range. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after filtering, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. 
- """ - points = input_dict['points'] - points_mask = points.in_range_3d(self.pcd_range) - clean_points = points[points_mask] - input_dict['points'] = clean_points - points_mask = points_mask.numpy() - - pts_instance_mask = input_dict.get('pts_instance_mask', None) - pts_semantic_mask = input_dict.get('pts_semantic_mask', None) - - if pts_instance_mask is not None: - input_dict['pts_instance_mask'] = pts_instance_mask[points_mask] - - if pts_semantic_mask is not None: - input_dict['pts_semantic_mask'] = pts_semantic_mask[points_mask] - - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(point_cloud_range={self.pcd_range.tolist()})' - return repr_str - - -@PIPELINES.register_module() -class ObjectNameFilter(object): - """Filter GT objects by their names. - - Args: - classes (list[str]): List of class names to be kept for training. - """ - - def __init__(self, classes): - self.classes = classes - self.labels = list(range(len(self.classes))) - - def __call__(self, input_dict): - """Call function to filter objects by their names. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after filtering, 'gt_bboxes_3d', 'gt_labels_3d' - keys are updated in the result dict. - """ - gt_labels_3d = input_dict['gt_labels_3d'] - gt_bboxes_mask = np.array([n in self.labels for n in gt_labels_3d], - dtype=np.bool_) - input_dict['gt_bboxes_3d'] = input_dict['gt_bboxes_3d'][gt_bboxes_mask] - input_dict['gt_labels_3d'] = input_dict['gt_labels_3d'][gt_bboxes_mask] - - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(classes={self.classes})' - return repr_str - - -@PIPELINES.register_module() -class PointSample(object): - """Point sample. - - Sampling data to a certain number. - - Args: - num_points (int): Number of points to be sampled. - sample_range (float, optional): The range where to sample points. - If not None, the points with depth larger than `sample_range` are - prior to be sampled. Defaults to None. - replace (bool, optional): Whether the sampling is with or without - replacement. Defaults to False. - """ - - def __init__(self, num_points, sample_range=None, replace=False): - self.num_points = num_points - self.sample_range = sample_range - self.replace = replace - - def _points_random_sampling(self, - points, - num_samples, - sample_range=None, - replace=False, - return_choices=False): - """Points random sampling. - - Sample points to a certain number. - - Args: - points (np.ndarray | :obj:`BasePoints`): 3D Points. - num_samples (int): Number of samples to be sampled. - sample_range (float, optional): Indicating the range where the - points will be sampled. Defaults to None. - replace (bool, optional): Sampling with or without replacement. - Defaults to None. - return_choices (bool, optional): Whether return choice. - Defaults to False. - Returns: - tuple[np.ndarray] | np.ndarray: - - points (np.ndarray | :obj:`BasePoints`): 3D Points. - - choices (np.ndarray, optional): The generated random samples. 
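`ObjectNameFilter` above keeps only boxes whose label index corresponds to one of the training classes. Restated with plain NumPy arrays; the class names and labels below are invented for the example:

```python
import numpy as np

classes = ['car', 'pedestrian', 'cyclist']
kept_labels = list(range(len(classes)))           # [0, 1, 2]
gt_labels_3d = np.array([0, 2, 5, 1])             # 5 is not a training class
mask = np.array([lbl in kept_labels for lbl in gt_labels_3d], dtype=bool)
print(gt_labels_3d[mask])                         # [0 2 1]
```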
- """ - if not replace: - replace = (points.shape[0] < num_samples) - point_range = range(len(points)) - if sample_range is not None and not replace: - # Only sampling the near points when len(points) >= num_samples - dist = np.linalg.norm(points.tensor, axis=1) - far_inds = np.where(dist >= sample_range)[0] - near_inds = np.where(dist < sample_range)[0] - # in case there are too many far points - if len(far_inds) > num_samples: - far_inds = np.random.choice( - far_inds, num_samples, replace=False) - point_range = near_inds - num_samples -= len(far_inds) - choices = np.random.choice(point_range, num_samples, replace=replace) - if sample_range is not None and not replace: - choices = np.concatenate((far_inds, choices)) - # Shuffle points after sampling - np.random.shuffle(choices) - if return_choices: - return points[choices], choices - else: - return points[choices] - - def __call__(self, results): - """Call function to sample points to in indoor scenes. - - Args: - input_dict (dict): Result dict from loading pipeline. - Returns: - dict: Results after sampling, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. - """ - points = results['points'] - points, choices = self._points_random_sampling( - points, - self.num_points, - self.sample_range, - self.replace, - return_choices=True) - results['points'] = points - - pts_instance_mask = results.get('pts_instance_mask', None) - pts_semantic_mask = results.get('pts_semantic_mask', None) - - if pts_instance_mask is not None: - pts_instance_mask = pts_instance_mask[choices] - results['pts_instance_mask'] = pts_instance_mask - - if pts_semantic_mask is not None: - pts_semantic_mask = pts_semantic_mask[choices] - results['pts_semantic_mask'] = pts_semantic_mask - - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(num_points={self.num_points},' - repr_str += f' sample_range={self.sample_range},' - repr_str += f' replace={self.replace})' - - return repr_str - - -@PIPELINES.register_module() -class IndoorPointSample(PointSample): - """Indoor point sample. - - Sampling data to a certain number. - NOTE: IndoorPointSample is deprecated in favor of PointSample - - Args: - num_points (int): Number of points to be sampled. - """ - - def __init__(self, *args, **kwargs): - warnings.warn( - 'IndoorPointSample is deprecated in favor of PointSample') - super(IndoorPointSample, self).__init__(*args, **kwargs) - - -@PIPELINES.register_module() -class IndoorPatchPointSample(object): - r"""Indoor point sample within a patch. Modified from `PointNet++ `_. - - Sampling data to a certain number for semantic segmentation. - - Args: - num_points (int): Number of points to be sampled. - block_size (float, optional): Size of a block to sample points from. - Defaults to 1.5. - sample_rate (float, optional): Stride used in sliding patch generation. - This parameter is unused in `IndoorPatchPointSample` and thus has - been deprecated. We plan to remove it in the future. - Defaults to None. - ignore_index (int, optional): Label index that won't be used for the - segmentation task. This is set in PointSegClassMapping as neg_cls. - If not None, will be used as a patch selection criterion. - Defaults to None. - use_normalized_coord (bool, optional): Whether to use normalized xyz as - additional features. Defaults to False. - num_try (int, optional): Number of times to try if the patch selected - is invalid. Defaults to 10. 
- enlarge_size (float, optional): Enlarge the sampled patch to - [-block_size / 2 - enlarge_size, block_size / 2 + enlarge_size] as - an augmentation. If None, set it as 0. Defaults to 0.2. - min_unique_num (int, optional): Minimum number of unique points - the sampled patch should contain. If None, use PointNet++'s method - to judge uniqueness. Defaults to None. - eps (float, optional): A value added to patch boundary to guarantee - points coverage. Defaults to 1e-2. - - Note: - This transform should only be used in the training process of point - cloud segmentation tasks. For the sliding patch generation and - inference process in testing, please refer to the `slide_inference` - function of `EncoderDecoder3D` class. - """ - - def __init__(self, - num_points, - block_size=1.5, - sample_rate=None, - ignore_index=None, - use_normalized_coord=False, - num_try=10, - enlarge_size=0.2, - min_unique_num=None, - eps=1e-2): - self.num_points = num_points - self.block_size = block_size - self.ignore_index = ignore_index - self.use_normalized_coord = use_normalized_coord - self.num_try = num_try - self.enlarge_size = enlarge_size if enlarge_size is not None else 0.0 - self.min_unique_num = min_unique_num - self.eps = eps - - if sample_rate is not None: - warnings.warn( - "'sample_rate' has been deprecated and will be removed in " - 'the future. Please remove them from your code.') - - def _input_generation(self, coords, patch_center, coord_max, attributes, - attribute_dims, point_type): - """Generating model input. - - Generate input by subtracting patch center and adding additional - features. Currently support colors and normalized xyz as features. - - Args: - coords (np.ndarray): Sampled 3D Points. - patch_center (np.ndarray): Center coordinate of the selected patch. - coord_max (np.ndarray): Max coordinate of all 3D Points. - attributes (np.ndarray): features of input points. - attribute_dims (dict): Dictionary to indicate the meaning of extra - dimension. - point_type (type): class of input points inherited from BasePoints. - - Returns: - :obj:`BasePoints`: The generated input data. - """ - # subtract patch center, the z dimension is not centered - centered_coords = coords.copy() - centered_coords[:, 0] -= patch_center[0] - centered_coords[:, 1] -= patch_center[1] - - if self.use_normalized_coord: - normalized_coord = coords / coord_max - attributes = np.concatenate([attributes, normalized_coord], axis=1) - if attribute_dims is None: - attribute_dims = dict() - attribute_dims.update( - dict(normalized_coord=[ - attributes.shape[1], attributes.shape[1] + - 1, attributes.shape[1] + 2 - ])) - - points = np.concatenate([centered_coords, attributes], axis=1) - points = point_type( - points, points_dim=points.shape[1], attribute_dims=attribute_dims) - - return points - - def _patch_points_sampling(self, points, sem_mask): - """Patch points sampling. - - First sample a valid patch. - Then sample points within that patch to a certain number. - - Args: - points (:obj:`BasePoints`): 3D Points. - sem_mask (np.ndarray): semantic segmentation mask for input points. - - Returns: - tuple[:obj:`BasePoints`, np.ndarray] | :obj:`BasePoints`: - - - points (:obj:`BasePoints`): 3D Points. - - choices (np.ndarray): The generated random samples. 
- """ - coords = points.coord.numpy() - attributes = points.tensor[:, 3:].numpy() - attribute_dims = points.attribute_dims - point_type = type(points) - - coord_max = np.amax(coords, axis=0) - coord_min = np.amin(coords, axis=0) - - for _ in range(self.num_try): - # random sample a point as patch center - cur_center = coords[np.random.choice(coords.shape[0])] - - # boundary of a patch, which would be enlarged by - # `self.enlarge_size` as an augmentation - cur_max = cur_center + np.array( - [self.block_size / 2.0, self.block_size / 2.0, 0.0]) - cur_min = cur_center - np.array( - [self.block_size / 2.0, self.block_size / 2.0, 0.0]) - cur_max[2] = coord_max[2] - cur_min[2] = coord_min[2] - cur_choice = np.sum( - (coords >= (cur_min - self.enlarge_size)) * - (coords <= (cur_max + self.enlarge_size)), - axis=1) == 3 - - if not cur_choice.any(): # no points in this patch - continue - - cur_coords = coords[cur_choice, :] - cur_sem_mask = sem_mask[cur_choice] - point_idxs = np.where(cur_choice)[0] - mask = np.sum( - (cur_coords >= (cur_min - self.eps)) * (cur_coords <= - (cur_max + self.eps)), - axis=1) == 3 - - # two criteria for patch sampling, adopted from PointNet++ - # 1. selected patch should contain enough unique points - if self.min_unique_num is None: - # use PointNet++'s method as default - # [31, 31, 62] are just some big values used to transform - # coords from 3d array to 1d and then check their uniqueness - # this is used in all the ScanNet code following PointNet++ - vidx = np.ceil( - (cur_coords[mask, :] - cur_min) / (cur_max - cur_min) * - np.array([31.0, 31.0, 62.0])) - vidx = np.unique(vidx[:, 0] * 31.0 * 62.0 + vidx[:, 1] * 62.0 + - vidx[:, 2]) - flag1 = len(vidx) / 31.0 / 31.0 / 62.0 >= 0.02 - else: - # if `min_unique_num` is provided, directly compare with it - flag1 = mask.sum() >= self.min_unique_num - - # 2. selected patch should contain enough annotated points - if self.ignore_index is None: - flag2 = True - else: - flag2 = np.sum(cur_sem_mask != self.ignore_index) / \ - len(cur_sem_mask) >= 0.7 - - if flag1 and flag2: - break - - # sample idx to `self.num_points` - if point_idxs.size >= self.num_points: - # no duplicate in sub-sampling - choices = np.random.choice( - point_idxs, self.num_points, replace=False) - else: - # do not use random choice here to avoid some points not counted - dup = np.random.choice(point_idxs.size, - self.num_points - point_idxs.size) - idx_dup = np.concatenate( - [np.arange(point_idxs.size), - np.array(dup)], 0) - choices = point_idxs[idx_dup] - - # construct model input - points = self._input_generation(coords[choices], cur_center, coord_max, - attributes[choices], attribute_dims, - point_type) - - return points, choices - - def __call__(self, results): - """Call function to sample points to in indoor scenes. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after sampling, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. 
- """ - points = results['points'] - - assert 'pts_semantic_mask' in results.keys(), \ - 'semantic mask should be provided in training and evaluation' - pts_semantic_mask = results['pts_semantic_mask'] - - points, choices = self._patch_points_sampling(points, - pts_semantic_mask) - - results['points'] = points - results['pts_semantic_mask'] = pts_semantic_mask[choices] - pts_instance_mask = results.get('pts_instance_mask', None) - if pts_instance_mask is not None: - results['pts_instance_mask'] = pts_instance_mask[choices] - - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(num_points={self.num_points},' - repr_str += f' block_size={self.block_size},' - repr_str += f' ignore_index={self.ignore_index},' - repr_str += f' use_normalized_coord={self.use_normalized_coord},' - repr_str += f' num_try={self.num_try},' - repr_str += f' enlarge_size={self.enlarge_size},' - repr_str += f' min_unique_num={self.min_unique_num},' - repr_str += f' eps={self.eps})' - return repr_str - - -@PIPELINES.register_module() -class BackgroundPointsFilter(object): - """Filter background points near the bounding box. - - Args: - bbox_enlarge_range (tuple[float], float): Bbox enlarge range. - """ - - def __init__(self, bbox_enlarge_range): - assert (is_tuple_of(bbox_enlarge_range, float) - and len(bbox_enlarge_range) == 3) \ - or isinstance(bbox_enlarge_range, float), \ - f'Invalid arguments bbox_enlarge_range {bbox_enlarge_range}' - - if isinstance(bbox_enlarge_range, float): - bbox_enlarge_range = [bbox_enlarge_range] * 3 - self.bbox_enlarge_range = np.array( - bbox_enlarge_range, dtype=np.float32)[np.newaxis, :] - - def __call__(self, input_dict): - """Call function to filter points by the range. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after filtering, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. - """ - points = input_dict['points'] - gt_bboxes_3d = input_dict['gt_bboxes_3d'] - - # avoid groundtruth being modified - gt_bboxes_3d_np = gt_bboxes_3d.tensor.clone().numpy() - gt_bboxes_3d_np[:, :3] = gt_bboxes_3d.gravity_center.clone().numpy() - - enlarged_gt_bboxes_3d = gt_bboxes_3d_np.copy() - enlarged_gt_bboxes_3d[:, 3:6] += self.bbox_enlarge_range - points_numpy = points.tensor.clone().numpy() - foreground_masks = box_np_ops.points_in_rbbox( - points_numpy, gt_bboxes_3d_np, origin=(0.5, 0.5, 0.5)) - enlarge_foreground_masks = box_np_ops.points_in_rbbox( - points_numpy, enlarged_gt_bboxes_3d, origin=(0.5, 0.5, 0.5)) - foreground_masks = foreground_masks.max(1) - enlarge_foreground_masks = enlarge_foreground_masks.max(1) - valid_masks = ~np.logical_and(~foreground_masks, - enlarge_foreground_masks) - - input_dict['points'] = points[valid_masks] - pts_instance_mask = input_dict.get('pts_instance_mask', None) - if pts_instance_mask is not None: - input_dict['pts_instance_mask'] = pts_instance_mask[valid_masks] - - pts_semantic_mask = input_dict.get('pts_semantic_mask', None) - if pts_semantic_mask is not None: - input_dict['pts_semantic_mask'] = pts_semantic_mask[valid_masks] - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(bbox_enlarge_range={self.bbox_enlarge_range.tolist()})' - return repr_str - - -@PIPELINES.register_module() -class VoxelBasedPointSampler(object): - """Voxel based point sampler. 
- - Apply voxel sampling to multiple sweep points. - - Args: - cur_sweep_cfg (dict): Config for sampling current points. - prev_sweep_cfg (dict): Config for sampling previous points. - time_dim (int): Index that indicate the time dimension - for input points. - """ - - def __init__(self, cur_sweep_cfg, prev_sweep_cfg=None, time_dim=3): - self.cur_voxel_generator = VoxelGenerator(**cur_sweep_cfg) - self.cur_voxel_num = self.cur_voxel_generator._max_voxels - self.time_dim = time_dim - if prev_sweep_cfg is not None: - assert prev_sweep_cfg['max_num_points'] == \ - cur_sweep_cfg['max_num_points'] - self.prev_voxel_generator = VoxelGenerator(**prev_sweep_cfg) - self.prev_voxel_num = self.prev_voxel_generator._max_voxels - else: - self.prev_voxel_generator = None - self.prev_voxel_num = 0 - - def _sample_points(self, points, sampler, point_dim): - """Sample points for each points subset. - - Args: - points (np.ndarray): Points subset to be sampled. - sampler (VoxelGenerator): Voxel based sampler for - each points subset. - point_dim (int): The dimension of each points - - Returns: - np.ndarray: Sampled points. - """ - voxels, coors, num_points_per_voxel = sampler.generate(points) - if voxels.shape[0] < sampler._max_voxels: - padding_points = np.zeros([ - sampler._max_voxels - voxels.shape[0], sampler._max_num_points, - point_dim - ], - dtype=points.dtype) - padding_points[:] = voxels[0] - sample_points = np.concatenate([voxels, padding_points], axis=0) - else: - sample_points = voxels - - return sample_points - - def __call__(self, results): - """Call function to sample points from multiple sweeps. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after sampling, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. - """ - points = results['points'] - original_dim = points.shape[1] - - # TODO: process instance and semantic mask while _max_num_points - # is larger than 1 - # Extend points with seg and mask fields - map_fields2dim = [] - start_dim = original_dim - points_numpy = points.tensor.numpy() - extra_channel = [points_numpy] - for idx, key in enumerate(results['pts_mask_fields']): - map_fields2dim.append((key, idx + start_dim)) - extra_channel.append(results[key][..., None]) - - start_dim += len(results['pts_mask_fields']) - for idx, key in enumerate(results['pts_seg_fields']): - map_fields2dim.append((key, idx + start_dim)) - extra_channel.append(results[key][..., None]) - - points_numpy = np.concatenate(extra_channel, axis=-1) - - # Split points into two part, current sweep points and - # previous sweeps points. - # TODO: support different sampling methods for next sweeps points - # and previous sweeps points. 
- cur_points_flag = (points_numpy[:, self.time_dim] == 0) - cur_sweep_points = points_numpy[cur_points_flag] - prev_sweeps_points = points_numpy[~cur_points_flag] - if prev_sweeps_points.shape[0] == 0: - prev_sweeps_points = cur_sweep_points - - # Shuffle points before sampling - np.random.shuffle(cur_sweep_points) - np.random.shuffle(prev_sweeps_points) - - cur_sweep_points = self._sample_points(cur_sweep_points, - self.cur_voxel_generator, - points_numpy.shape[1]) - if self.prev_voxel_generator is not None: - prev_sweeps_points = self._sample_points(prev_sweeps_points, - self.prev_voxel_generator, - points_numpy.shape[1]) - - points_numpy = np.concatenate( - [cur_sweep_points, prev_sweeps_points], 0) - else: - points_numpy = cur_sweep_points - - if self.cur_voxel_generator._max_num_points == 1: - points_numpy = points_numpy.squeeze(1) - results['points'] = points.new_point(points_numpy[..., :original_dim]) - - # Restore the corresponding seg and mask fields - for key, dim_index in map_fields2dim: - results[key] = points_numpy[..., dim_index] - - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - - def _auto_indent(repr_str, indent): - repr_str = repr_str.split('\n') - repr_str = [' ' * indent + t + '\n' for t in repr_str] - repr_str = ''.join(repr_str)[:-1] - return repr_str - - repr_str = self.__class__.__name__ - indent = 4 - repr_str += '(\n' - repr_str += ' ' * indent + f'num_cur_sweep={self.cur_voxel_num},\n' - repr_str += ' ' * indent + f'num_prev_sweep={self.prev_voxel_num},\n' - repr_str += ' ' * indent + f'time_dim={self.time_dim},\n' - repr_str += ' ' * indent + 'cur_voxel_generator=\n' - repr_str += f'{_auto_indent(repr(self.cur_voxel_generator), 8)},\n' - repr_str += ' ' * indent + 'prev_voxel_generator=\n' - repr_str += f'{_auto_indent(repr(self.prev_voxel_generator), 8)})' - return repr_str - - -@PIPELINES.register_module() -class AffineResize(object): - """Get the affine transform matrices to the target size. - - Different from :class:`RandomAffine` in MMDetection, this class can - calculate the affine transform matrices while resizing the input image - to a fixed size. The affine transform matrices include: 1) matrix - transforming original image to the network input image size. 2) matrix - transforming original image to the network output feature map size. - - Args: - img_scale (tuple): Images scales for resizing. - down_ratio (int): The down ratio of feature map. - Actually the arg should be >= 1. - bbox_clip_border (bool, optional): Whether clip the objects - outside the border of the image. Defaults to True. - """ - - def __init__(self, img_scale, down_ratio, bbox_clip_border=True): - - self.img_scale = img_scale - self.down_ratio = down_ratio - self.bbox_clip_border = bbox_clip_border - - def __call__(self, results): - """Call function to do affine transform to input image and labels. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Results after affine resize, 'affine_aug', 'trans_mat' - keys are added in the result dict. 
- """ - # The results have gone through RandomShiftScale before AffineResize - if 'center' not in results: - img = results['img'] - height, width = img.shape[:2] - center = np.array([width / 2, height / 2], dtype=np.float32) - size = np.array([width, height], dtype=np.float32) - results['affine_aug'] = False - else: - # The results did not go through RandomShiftScale before - # AffineResize - img = results['img'] - center = results['center'] - size = results['size'] - - trans_affine = self._get_transform_matrix(center, size, self.img_scale) - - img = cv2.warpAffine(img, trans_affine[:2, :], self.img_scale) - - if isinstance(self.down_ratio, tuple): - trans_mat = [ - self._get_transform_matrix( - center, size, - (self.img_scale[0] // ratio, self.img_scale[1] // ratio)) - for ratio in self.down_ratio - ] # (3, 3) - else: - trans_mat = self._get_transform_matrix( - center, size, (self.img_scale[0] // self.down_ratio, - self.img_scale[1] // self.down_ratio)) - - results['img'] = img - results['img_shape'] = img.shape - results['pad_shape'] = img.shape - results['trans_mat'] = trans_mat - - self._affine_bboxes(results, trans_affine) - - if 'centers2d' in results: - centers2d = self._affine_transform(results['centers2d'], - trans_affine) - valid_index = (centers2d[:, 0] > - 0) & (centers2d[:, 0] < - self.img_scale[0]) & (centers2d[:, 1] > 0) & ( - centers2d[:, 1] < self.img_scale[1]) - results['centers2d'] = centers2d[valid_index] - - for key in results.get('bbox_fields', []): - if key in ['gt_bboxes']: - results[key] = results[key][valid_index] - if 'gt_labels' in results: - results['gt_labels'] = results['gt_labels'][ - valid_index] - if 'gt_masks' in results: - raise NotImplementedError( - 'AffineResize only supports bbox.') - - for key in results.get('bbox3d_fields', []): - if key in ['gt_bboxes_3d']: - results[key].tensor = results[key].tensor[valid_index] - if 'gt_labels_3d' in results: - results['gt_labels_3d'] = results['gt_labels_3d'][ - valid_index] - - results['depths'] = results['depths'][valid_index] - - return results - - def _affine_bboxes(self, results, matrix): - """Affine transform bboxes to input image. - - Args: - results (dict): Result dict from loading pipeline. - matrix (np.ndarray): Matrix transforming original - image to the network input image size. - shape: (3, 3) - """ - - for key in results.get('bbox_fields', []): - bboxes = results[key] - bboxes[:, :2] = self._affine_transform(bboxes[:, :2], matrix) - bboxes[:, 2:] = self._affine_transform(bboxes[:, 2:], matrix) - if self.bbox_clip_border: - bboxes[:, - [0, 2]] = bboxes[:, - [0, 2]].clip(0, self.img_scale[0] - 1) - bboxes[:, - [1, 3]] = bboxes[:, - [1, 3]].clip(0, self.img_scale[1] - 1) - results[key] = bboxes - - def _affine_transform(self, points, matrix): - """Affine transform bbox points to input image. - - Args: - points (np.ndarray): Points to be transformed. - shape: (N, 2) - matrix (np.ndarray): Affine transform matrix. - shape: (3, 3) - - Returns: - np.ndarray: Transformed points. - """ - num_points = points.shape[0] - hom_points_2d = np.concatenate((points, np.ones((num_points, 1))), - axis=1) - hom_points_2d = hom_points_2d.T - affined_points = np.matmul(matrix, hom_points_2d).T - return affined_points[:, :2] - - def _get_transform_matrix(self, center, scale, output_scale): - """Get affine transform matrix. - - Args: - center (tuple): Center of current image. - scale (tuple): Scale of current image. - output_scale (tuple[float]): The transform target image scales. 
- - Returns: - np.ndarray: Affine transform matrix. - """ - # TODO: further add rot and shift here. - src_w = scale[0] - dst_w = output_scale[0] - dst_h = output_scale[1] - - src_dir = np.array([0, src_w * -0.5]) - dst_dir = np.array([0, dst_w * -0.5]) - - src = np.zeros((3, 2), dtype=np.float32) - dst = np.zeros((3, 2), dtype=np.float32) - src[0, :] = center - src[1, :] = center + src_dir - dst[0, :] = np.array([dst_w * 0.5, dst_h * 0.5]) - dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir - - src[2, :] = self._get_ref_point(src[0, :], src[1, :]) - dst[2, :] = self._get_ref_point(dst[0, :], dst[1, :]) - - get_matrix = cv2.getAffineTransform(src, dst) - - matrix = np.concatenate((get_matrix, [[0., 0., 1.]])) - - return matrix.astype(np.float32) - - def _get_ref_point(self, ref_point1, ref_point2): - """Get reference point to calculate affine transform matrix. - - While using opencv to calculate the affine matrix, we need at least - three corresponding points separately on original image and target - image. Here we use two points to get the the third reference point. - """ - d = ref_point1 - ref_point2 - ref_point3 = ref_point2 + np.array([-d[1], d[0]]) - return ref_point3 - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(img_scale={self.img_scale}, ' - repr_str += f'down_ratio={self.down_ratio}) ' - return repr_str - - -@PIPELINES.register_module() -class RandomShiftScale(object): - """Random shift scale. - - Different from the normal shift and scale function, it doesn't - directly shift or scale image. It can record the shift and scale - infos into loading pipelines. It's designed to be used with - AffineResize together. - - Args: - shift_scale (tuple[float]): Shift and scale range. - aug_prob (float): The shifting and scaling probability. - """ - - def __init__(self, shift_scale, aug_prob): - - self.shift_scale = shift_scale - self.aug_prob = aug_prob - - def __call__(self, results): - """Call function to record random shift and scale infos. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Results after random shift and scale, 'center', 'size' - and 'affine_aug' keys are added in the result dict. - """ - img = results['img'] - - height, width = img.shape[:2] - - center = np.array([width / 2, height / 2], dtype=np.float32) - size = np.array([width, height], dtype=np.float32) - - if random.random() < self.aug_prob: - shift, scale = self.shift_scale[0], self.shift_scale[1] - shift_ranges = np.arange(-shift, shift + 0.1, 0.1) - center[0] += size[0] * random.choice(shift_ranges) - center[1] += size[1] * random.choice(shift_ranges) - scale_ranges = np.arange(1 - scale, 1 + scale + 0.1, 0.1) - size *= random.choice(scale_ranges) - results['affine_aug'] = True - else: - results['affine_aug'] = False - - results['center'] = center - results['size'] = size - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(shift_scale={self.shift_scale}, ' - repr_str += f'aug_prob={self.aug_prob}) ' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/s3dis_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/s3dis_dataset.py deleted file mode 100644 index e38dc7ab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/s3dis_dataset.py +++ /dev/null @@ -1,445 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from os import path as osp - -import numpy as np - -from mmdet3d.core import show_seg_result -from mmdet3d.core.bbox import DepthInstance3DBoxes -from mmseg.datasets import DATASETS as SEG_DATASETS -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .custom_3d_seg import Custom3DSegDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class S3DISDataset(Custom3DDataset): - r"""S3DIS Dataset for Detection Task. - - This class is the inner dataset for S3DIS. Since S3DIS has 6 areas, we - often train on 5 of them and test on the remaining one. The one for - test is Area_5 as suggested in `GSDN `_. - To concatenate 5 areas during training - `mmdet.datasets.dataset_wrappers.ConcatDataset` should be used. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'Depth' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - """ - CLASSES = ('table', 'chair', 'sofa', 'bookcase', 'board') - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - modality=None, - box_type_3d='Depth', - filter_empty_gt=True, - test_mode=False, - *kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - *kwargs) - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`DepthInstance3DBoxes`): - 3D ground truth bboxes - - gt_labels_3d (np.ndarray): Labels of ground truths. - - pts_instance_mask_path (str): Path of instance masks. - - pts_semantic_mask_path (str): Path of semantic masks. 
- """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - if info['annos']['gt_num'] != 0: - gt_bboxes_3d = info['annos']['gt_boxes_upright_depth'].astype( - np.float32) # k, 6 - gt_labels_3d = info['annos']['class'].astype(np.int64) - else: - gt_bboxes_3d = np.zeros((0, 6), dtype=np.float32) - gt_labels_3d = np.zeros((0, ), dtype=np.int64) - - # to target box structure - gt_bboxes_3d = DepthInstance3DBoxes( - gt_bboxes_3d, - box_dim=gt_bboxes_3d.shape[-1], - with_yaw=False, - origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - pts_instance_mask_path = osp.join(self.data_root, - info['pts_instance_mask_path']) - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - pts_instance_mask_path=pts_instance_mask_path, - pts_semantic_mask_path=pts_semantic_mask_path) - return anns_results - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - pts_filename (str): Filename of point clouds. - - file_name (str): Filename of point clouds. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - pts_filename = osp.join(self.data_root, info['pts_path']) - input_dict = dict(pts_filename=pts_filename) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - if self.filter_empty_gt and ~(annos['gt_labels_3d'] != -1).any(): - return None - return input_dict - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - return Compose(pipeline) - - -class _S3DISSegDataset(Custom3DSegDataset): - r"""S3DIS Dataset for Semantic Segmentation Task. - - This class is the inner dataset for S3DIS. Since S3DIS has 6 areas, we - often train on 5 of them and test on the remaining one. - However, there is not a fixed train-test split of S3DIS. People often test - on Area_5 as suggested by `SEGCloud `_. - But many papers also report the average results of 6-fold cross validation - over the 6 areas (e.g. `DGCNN `_). - Therefore, we use an inner dataset for one area, and further use a dataset - wrapper to concat all the provided data in different areas. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - palette (list[list[int]], optional): The palette of segmentation map. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. If None is given, set to len(self.CLASSES). - Defaults to None. - scene_idxs (np.ndarray | str, optional): Precomputed index to load - data. 
For scenes with many points, we may sample it several times. - Defaults to None. - """ - CLASSES = ('ceiling', 'floor', 'wall', 'beam', 'column', 'window', 'door', - 'table', 'chair', 'sofa', 'bookcase', 'board', 'clutter') - - VALID_CLASS_IDS = tuple(range(13)) - - ALL_CLASS_IDS = tuple(range(14)) # possibly with 'stair' class - - PALETTE = [[0, 255, 0], [0, 0, 255], [0, 255, 255], [255, 255, 0], - [255, 0, 255], [100, 100, 255], [200, 200, 100], - [170, 120, 200], [255, 0, 0], [200, 100, 100], [10, 200, 100], - [200, 200, 200], [50, 50, 50]] - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - palette=None, - modality=None, - test_mode=False, - ignore_index=None, - scene_idxs=None, - **kwargs): - - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - palette=palette, - modality=modality, - test_mode=test_mode, - ignore_index=ignore_index, - scene_idxs=scene_idxs, - **kwargs) - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - pts_semantic_mask_path (str): Path of semantic masks. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - anns_results = dict(pts_semantic_mask_path=pts_semantic_mask_path) - return anns_results - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=self.VALID_CLASS_IDS, - max_cat_id=np.max(self.ALL_CLASS_IDS)), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=self.CLASSES), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=True, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Visualize the results online. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' - pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - data_info = self.data_infos[i] - pts_path = data_info['pts_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points, gt_sem_mask = self._extract_data( - i, pipeline, ['points', 'pts_semantic_mask'], load_annos=True) - points = points.numpy() - pred_sem_mask = result['semantic_mask'].numpy() - show_seg_result(points, gt_sem_mask, - pred_sem_mask, out_dir, file_name, - np.array(self.PALETTE), self.ignore_index, show) - - def get_scene_idxs(self, scene_idxs): - """Compute scene_idxs for data sampling. - - We sample more times for scenes with more points. 
- """ - # when testing, we load one whole scene every time - if not self.test_mode and scene_idxs is None: - raise NotImplementedError( - 'please provide re-sampled scene indexes for training') - - return super().get_scene_idxs(scene_idxs) - - -@DATASETS.register_module() -@SEG_DATASETS.register_module() -class S3DISSegDataset(_S3DISSegDataset): - r"""S3DIS Dataset for Semantic Segmentation Task. - - This class serves as the API for experiments on the S3DIS Dataset. - It wraps the provided datasets of different areas. - We don't use `mmdet.datasets.dataset_wrappers.ConcatDataset` because we - need to concat the `scene_idxs` of different areas. - - Please refer to the `google form `_ for - data downloading. - - Args: - data_root (str): Path of dataset root. - ann_files (list[str]): Path of several annotation files. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - palette (list[list[int]], optional): The palette of segmentation map. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. If None is given, set to len(self.CLASSES). - Defaults to None. - scene_idxs (list[np.ndarray] | list[str], optional): Precomputed index - to load data. For scenes with many points, we may sample it several - times. Defaults to None. - """ - - def __init__(self, - data_root, - ann_files, - pipeline=None, - classes=None, - palette=None, - modality=None, - test_mode=False, - ignore_index=None, - scene_idxs=None, - **kwargs): - - # make sure that ann_files and scene_idxs have same length - ann_files = self._check_ann_files(ann_files) - scene_idxs = self._check_scene_idxs(scene_idxs, len(ann_files)) - - # initialize some attributes as datasets[0] - super().__init__( - data_root=data_root, - ann_file=ann_files[0], - pipeline=pipeline, - classes=classes, - palette=palette, - modality=modality, - test_mode=test_mode, - ignore_index=ignore_index, - scene_idxs=scene_idxs[0], - **kwargs) - - datasets = [ - _S3DISSegDataset( - data_root=data_root, - ann_file=ann_files[i], - pipeline=pipeline, - classes=classes, - palette=palette, - modality=modality, - test_mode=test_mode, - ignore_index=ignore_index, - scene_idxs=scene_idxs[i], - **kwargs) for i in range(len(ann_files)) - ] - - # data_infos and scene_idxs need to be concat - self.concat_data_infos([dst.data_infos for dst in datasets]) - self.concat_scene_idxs([dst.scene_idxs for dst in datasets]) - - # set group flag for the sampler - if not self.test_mode: - self._set_group_flag() - - def concat_data_infos(self, data_infos): - """Concat data_infos from several datasets to form self.data_infos. - - Args: - data_infos (list[list[dict]]) - """ - self.data_infos = [ - info for one_data_infos in data_infos for info in one_data_infos - ] - - def concat_scene_idxs(self, scene_idxs): - """Concat scene_idxs from several datasets to form self.scene_idxs. - - Needs to manually add offset to scene_idxs[1, 2, ...]. 
- - Args: - scene_idxs (list[np.ndarray]) - """ - self.scene_idxs = np.array([], dtype=np.int32) - offset = 0 - for one_scene_idxs in scene_idxs: - self.scene_idxs = np.concatenate( - [self.scene_idxs, one_scene_idxs + offset]).astype(np.int32) - offset = np.unique(self.scene_idxs).max() + 1 - - @staticmethod - def _duplicate_to_list(x, num): - """Repeat x `num` times to form a list.""" - return [x for _ in range(num)] - - def _check_ann_files(self, ann_file): - """Make ann_files as list/tuple.""" - # ann_file could be str - if not isinstance(ann_file, (list, tuple)): - ann_file = self._duplicate_to_list(ann_file, 1) - return ann_file - - def _check_scene_idxs(self, scene_idx, num): - """Make scene_idxs as list/tuple.""" - if scene_idx is None: - return self._duplicate_to_list(scene_idx, num) - # scene_idx could be str, np.ndarray, list or tuple - if isinstance(scene_idx, str): # str - return self._duplicate_to_list(scene_idx, num) - if isinstance(scene_idx[0], str): # list of str - return scene_idx - if isinstance(scene_idx[0], (list, tuple, np.ndarray)): # list of idx - return scene_idx - # single idx - return self._duplicate_to_list(scene_idx, num) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/scannet_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/scannet_dataset.py deleted file mode 100644 index 3e691260..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/scannet_dataset.py +++ /dev/null @@ -1,614 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import tempfile -import warnings -from os import path as osp - -import numpy as np - -from mmdet3d.core import instance_seg_eval, show_result, show_seg_result -from mmdet3d.core.bbox import DepthInstance3DBoxes -from mmseg.datasets import DATASETS as SEG_DATASETS -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .custom_3d_seg import Custom3DSegDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class ScanNetDataset(Custom3DDataset): - r"""ScanNet Dataset for Detection Task. - - This class serves as the API for experiments on the ScanNet Dataset. - - Please refer to the `github repo `_ - for data downloading. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'Depth' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. 
- """ - CLASSES = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - modality=dict(use_camera=False, use_depth=True), - box_type_3d='Depth', - filter_empty_gt=True, - test_mode=False, - **kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - **kwargs) - assert 'use_camera' in self.modality and \ - 'use_depth' in self.modality - assert self.modality['use_camera'] or self.modality['use_depth'] - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - file_name (str): Filename of point clouds. - - img_prefix (str, optional): Prefix of image files. - - img_info (dict, optional): Image info. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - sample_idx = info['point_cloud']['lidar_idx'] - pts_filename = osp.join(self.data_root, info['pts_path']) - input_dict = dict(sample_idx=sample_idx) - - if self.modality['use_depth']: - input_dict['pts_filename'] = pts_filename - input_dict['file_name'] = pts_filename - - if self.modality['use_camera']: - img_info = [] - for img_path in info['img_paths']: - img_info.append( - dict(filename=osp.join(self.data_root, img_path))) - intrinsic = info['intrinsics'] - axis_align_matrix = self._get_axis_align_matrix(info) - depth2img = [] - for extrinsic in info['extrinsics']: - depth2img.append( - intrinsic @ np.linalg.inv(axis_align_matrix @ extrinsic)) - - input_dict['img_prefix'] = None - input_dict['img_info'] = img_info - input_dict['depth2img'] = depth2img - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - if self.filter_empty_gt and ~(annos['gt_labels_3d'] != -1).any(): - return None - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`DepthInstance3DBoxes`): - 3D ground truth bboxes - - gt_labels_3d (np.ndarray): Labels of ground truths. - - pts_instance_mask_path (str): Path of instance masks. - - pts_semantic_mask_path (str): Path of semantic masks. - - axis_align_matrix (np.ndarray): Transformation matrix for - global scene alignment. 
- """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - if info['annos']['gt_num'] != 0: - gt_bboxes_3d = info['annos']['gt_boxes_upright_depth'].astype( - np.float32) # k, 6 - gt_labels_3d = info['annos']['class'].astype(np.int64) - else: - gt_bboxes_3d = np.zeros((0, 6), dtype=np.float32) - gt_labels_3d = np.zeros((0, ), dtype=np.int64) - - # to target box structure - gt_bboxes_3d = DepthInstance3DBoxes( - gt_bboxes_3d, - box_dim=gt_bboxes_3d.shape[-1], - with_yaw=False, - origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - pts_instance_mask_path = osp.join(self.data_root, - info['pts_instance_mask_path']) - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - axis_align_matrix = self._get_axis_align_matrix(info) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - pts_instance_mask_path=pts_instance_mask_path, - pts_semantic_mask_path=pts_semantic_mask_path, - axis_align_matrix=axis_align_matrix) - return anns_results - - def prepare_test_data(self, index): - """Prepare data for testing. - - We should take axis_align_matrix from self.data_infos since we need - to align point clouds. - - Args: - index (int): Index for accessing the target data. - - Returns: - dict: Testing data dict of the corresponding index. - """ - input_dict = self.get_data_info(index) - # take the axis_align_matrix from data_infos - input_dict['ann_info'] = dict( - axis_align_matrix=self._get_axis_align_matrix( - self.data_infos[index])) - self.pre_pipeline(input_dict) - example = self.pipeline(input_dict) - return example - - @staticmethod - def _get_axis_align_matrix(info): - """Get axis_align_matrix from info. If not exist, return identity mat. - - Args: - info (dict): one data info term. - - Returns: - np.ndarray: 4x4 transformation matrix. - """ - if 'axis_align_matrix' in info['annos'].keys(): - return info['annos']['axis_align_matrix'].astype(np.float32) - else: - warnings.warn( - 'axis_align_matrix is not found in ScanNet data info, please ' - 'use new pre-process scripts to re-generate ScanNet data') - return np.eye(4).astype(np.float32) - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=True, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Visualize the results online. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' 
- pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - data_info = self.data_infos[i] - pts_path = data_info['pts_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points = self._extract_data(i, pipeline, 'points').numpy() - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy() - pred_bboxes = result['boxes_3d'].tensor.numpy() - show_result(points, gt_bboxes, pred_bboxes, out_dir, file_name, - show) - - -@DATASETS.register_module() -@SEG_DATASETS.register_module() -class ScanNetSegDataset(Custom3DSegDataset): - r"""ScanNet Dataset for Semantic Segmentation Task. - - This class serves as the API for experiments on the ScanNet Dataset. - - Please refer to the `github repo `_ - for data downloading. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - palette (list[list[int]], optional): The palette of segmentation map. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. If None is given, set to len(self.CLASSES). - Defaults to None. - scene_idxs (np.ndarray | str, optional): Precomputed index to load - data. For scenes with many points, we may sample it several times. - Defaults to None. - """ - CLASSES = ('wall', 'floor', 'cabinet', 'bed', 'chair', 'sofa', 'table', - 'door', 'window', 'bookshelf', 'picture', 'counter', 'desk', - 'curtain', 'refrigerator', 'showercurtrain', 'toilet', 'sink', - 'bathtub', 'otherfurniture') - - VALID_CLASS_IDS = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39) - - ALL_CLASS_IDS = tuple(range(41)) - - PALETTE = [ - [174, 199, 232], - [152, 223, 138], - [31, 119, 180], - [255, 187, 120], - [188, 189, 34], - [140, 86, 75], - [255, 152, 150], - [214, 39, 40], - [197, 176, 213], - [148, 103, 189], - [196, 156, 148], - [23, 190, 207], - [247, 182, 210], - [219, 219, 141], - [255, 127, 14], - [158, 218, 229], - [44, 160, 44], - [112, 128, 144], - [227, 119, 194], - [82, 84, 163], - ] - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - palette=None, - modality=None, - test_mode=False, - ignore_index=None, - scene_idxs=None, - **kwargs): - - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - palette=palette, - modality=modality, - test_mode=test_mode, - ignore_index=ignore_index, - scene_idxs=scene_idxs, - **kwargs) - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - pts_semantic_mask_path (str): Path of semantic masks. 
- """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - anns_results = dict(pts_semantic_mask_path=pts_semantic_mask_path) - return anns_results - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=self.VALID_CLASS_IDS, - max_cat_id=np.max(self.ALL_CLASS_IDS)), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=self.CLASSES), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=True, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Visualize the results online. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' - pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - data_info = self.data_infos[i] - pts_path = data_info['pts_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points, gt_sem_mask = self._extract_data( - i, pipeline, ['points', 'pts_semantic_mask'], load_annos=True) - points = points.numpy() - pred_sem_mask = result['semantic_mask'].numpy() - show_seg_result(points, gt_sem_mask, - pred_sem_mask, out_dir, file_name, - np.array(self.PALETTE), self.ignore_index, show) - - def get_scene_idxs(self, scene_idxs): - """Compute scene_idxs for data sampling. - - We sample more times for scenes with more points. - """ - # when testing, we load one whole scene every time - if not self.test_mode and scene_idxs is None: - raise NotImplementedError( - 'please provide re-sampled scene indexes for training') - - return super().get_scene_idxs(scene_idxs) - - def format_results(self, results, txtfile_prefix=None): - r"""Format the results to txt file. Refer to `ScanNet documentation - `_. - - Args: - outputs (list[dict]): Testing results of the dataset. - txtfile_prefix (str): The prefix of saved files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (outputs, tmp_dir), outputs is the detection results, - tmp_dir is the temporal directory created for saving submission - files when ``submission_prefix`` is not specified. 
- """ - import mmcv - - if txtfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - txtfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - mmcv.mkdir_or_exist(txtfile_prefix) - - # need to map network output to original label idx - pred2label = np.zeros(len(self.VALID_CLASS_IDS)).astype(np.int) - for original_label, output_idx in self.label_map.items(): - if output_idx != self.ignore_index: - pred2label[output_idx] = original_label - - outputs = [] - for i, result in enumerate(results): - info = self.data_infos[i] - sample_idx = info['point_cloud']['lidar_idx'] - pred_sem_mask = result['semantic_mask'].numpy().astype(np.int) - pred_label = pred2label[pred_sem_mask] - curr_file = f'{txtfile_prefix}/{sample_idx}.txt' - np.savetxt(curr_file, pred_label, fmt='%d') - outputs.append(dict(seg_mask=pred_label)) - - return outputs, tmp_dir - - -@DATASETS.register_module() -@SEG_DATASETS.register_module() -class ScanNetInstanceSegDataset(Custom3DSegDataset): - CLASSES = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') - - VALID_CLASS_IDS = (3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39) - - ALL_CLASS_IDS = tuple(range(41)) - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - pts_semantic_mask_path (str): Path of semantic masks. - - pts_instance_mask_path (str): Path of instance masks. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - - pts_instance_mask_path = osp.join(self.data_root, - info['pts_instance_mask_path']) - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - anns_results = dict( - pts_instance_mask_path=pts_instance_mask_path, - pts_semantic_mask_path=pts_semantic_mask_path) - return anns_results - - def get_classes_and_palette(self, classes=None, palette=None): - """Get class names of current dataset. Palette is simply ignored for - instance segmentation. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - Defaults to None. - palette (Sequence[Sequence[int]]] | np.ndarray | None): - The palette of segmentation map. If None is given, random - palette will be generated. Defaults to None. 
- """ - if classes is not None: - return classes, None - return self.CLASSES, None - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=True, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=self.VALID_CLASS_IDS, - max_cat_id=40), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=self.CLASSES), - dict( - type='Collect3D', - keys=['points', 'pts_semantic_mask', 'pts_instance_mask']) - ] - return Compose(pipeline) - - def evaluate(self, - results, - metric=None, - options=None, - logger=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluation in instance segmentation protocol. - - Args: - results (list[dict]): List of results. - metric (str | list[str]): Metrics to be evaluated. - options (dict, optional): options for instance_seg_eval. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Defaults to None. - show (bool, optional): Whether to visualize. - Defaults to False. - out_dir (str, optional): Path to save the visualization results. - Defaults to None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict: Evaluation results. - """ - assert isinstance( - results, list), f'Expect results to be list, got {type(results)}.' - assert len(results) > 0, 'Expect length of results > 0.' - assert len(results) == len(self.data_infos) - assert isinstance( - results[0], dict - ), f'Expect elements in results to be dict, got {type(results[0])}.' - - load_pipeline = self._get_pipeline(pipeline) - pred_instance_masks = [result['instance_mask'] for result in results] - pred_instance_labels = [result['instance_label'] for result in results] - pred_instance_scores = [result['instance_score'] for result in results] - gt_semantic_masks, gt_instance_masks = zip(*[ - self._extract_data( - index=i, - pipeline=load_pipeline, - key=['pts_semantic_mask', 'pts_instance_mask'], - load_annos=True) for i in range(len(self.data_infos)) - ]) - ret_dict = instance_seg_eval( - gt_semantic_masks, - gt_instance_masks, - pred_instance_masks, - pred_instance_labels, - pred_instance_scores, - valid_class_ids=self.VALID_CLASS_IDS, - class_labels=self.CLASSES, - options=options, - logger=logger) - - if show: - raise NotImplementedError('show is not implemented for now') - - return ret_dict diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/semantickitti_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/semantickitti_dataset.py deleted file mode 100644 index 03afbe0c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/semantickitti_dataset.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -from .builder import DATASETS -from .custom_3d import Custom3DDataset - - -@DATASETS.register_module() -class SemanticKITTIDataset(Custom3DDataset): - r"""SemanticKITTI Dataset. - - This class serves as the API for experiments on the SemanticKITTI Dataset - Please refer to `_ - for data downloading - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. 
- classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): NO 3D box for this dataset. - You can choose any type - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - """ - CLASSES = ('unlabeled', 'car', 'bicycle', 'motorcycle', 'truck', 'bus', - 'person', 'bicyclist', 'motorcyclist', 'road', 'parking', - 'sidewalk', 'other-ground', 'building', 'fence', 'vegetation', - 'trunck', 'terrian', 'pole', 'traffic-sign') - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - modality=None, - box_type_3d='Lidar', - filter_empty_gt=False, - test_mode=False): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode) - - def get_data_info(self, index): - """Get data info according to the given index. - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - file_name (str): Filename of point clouds. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - sample_idx = info['point_cloud']['lidar_idx'] - pts_filename = osp.join(self.data_root, info['pts_path']) - - input_dict = dict( - pts_filename=pts_filename, - sample_idx=sample_idx, - file_name=pts_filename) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - if self.filter_empty_gt and ~(annos['gt_labels_3d'] != -1).any(): - return None - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - pts_semantic_mask_path (str): Path of semantic masks. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - anns_results = dict(pts_semantic_mask_path=pts_semantic_mask_path) - return anns_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/sunrgbd_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/sunrgbd_dataset.py deleted file mode 100644 index 623ab885..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/sunrgbd_dataset.py +++ /dev/null @@ -1,280 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections import OrderedDict -from os import path as osp - -import numpy as np - -from mmdet3d.core import show_multi_modality_result, show_result -from mmdet3d.core.bbox import DepthInstance3DBoxes -from mmdet.core import eval_map -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class SUNRGBDDataset(Custom3DDataset): - r"""SUNRGBD Dataset. - - This class serves as the API for experiments on the SUNRGBD Dataset. - - See the `download page `_ - for data downloading. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'Depth' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - """ - CLASSES = ('bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser', - 'night_stand', 'bookshelf', 'bathtub') - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - modality=dict(use_camera=True, use_lidar=True), - box_type_3d='Depth', - filter_empty_gt=True, - test_mode=False, - **kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - **kwargs) - assert 'use_camera' in self.modality and \ - 'use_lidar' in self.modality - assert self.modality['use_camera'] or self.modality['use_lidar'] - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str, optional): Filename of point clouds. - - file_name (str, optional): Filename of point clouds. - - img_prefix (str, optional): Prefix of image files. - - img_info (dict, optional): Image info. - - calib (dict, optional): Camera calibration info. - - ann_info (dict): Annotation info. 
- """ - info = self.data_infos[index] - sample_idx = info['point_cloud']['lidar_idx'] - assert info['point_cloud']['lidar_idx'] == info['image']['image_idx'] - input_dict = dict(sample_idx=sample_idx) - - if self.modality['use_lidar']: - pts_filename = osp.join(self.data_root, info['pts_path']) - input_dict['pts_filename'] = pts_filename - input_dict['file_name'] = pts_filename - - if self.modality['use_camera']: - img_filename = osp.join( - osp.join(self.data_root, 'sunrgbd_trainval'), - info['image']['image_path']) - input_dict['img_prefix'] = None - input_dict['img_info'] = dict(filename=img_filename) - calib = info['calib'] - rt_mat = calib['Rt'] - # follow Coord3DMode.convert_point - rt_mat = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0] - ]) @ rt_mat.transpose(1, 0) - depth2img = calib['K'] @ rt_mat - input_dict['depth2img'] = depth2img - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - if self.filter_empty_gt and len(annos['gt_bboxes_3d']) == 0: - return None - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`DepthInstance3DBoxes`): - 3D ground truth bboxes - - gt_labels_3d (np.ndarray): Labels of ground truths. - - pts_instance_mask_path (str): Path of instance masks. - - pts_semantic_mask_path (str): Path of semantic masks. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - if info['annos']['gt_num'] != 0: - gt_bboxes_3d = info['annos']['gt_boxes_upright_depth'].astype( - np.float32) # k, 6 - gt_labels_3d = info['annos']['class'].astype(np.int64) - else: - gt_bboxes_3d = np.zeros((0, 7), dtype=np.float32) - gt_labels_3d = np.zeros((0, ), dtype=np.int64) - - # to target box structure - gt_bboxes_3d = DepthInstance3DBoxes( - gt_bboxes_3d, origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, gt_labels_3d=gt_labels_3d) - - if self.modality['use_camera']: - if info['annos']['gt_num'] != 0: - gt_bboxes_2d = info['annos']['bbox'].astype(np.float32) - else: - gt_bboxes_2d = np.zeros((0, 4), dtype=np.float32) - anns_results['bboxes'] = gt_bboxes_2d - anns_results['labels'] = gt_labels_3d - - return anns_results - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - if self.modality['use_camera']: - pipeline.insert(0, dict(type='LoadImageFromFile')) - return Compose(pipeline) - - def show(self, results, out_dir, show=True, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Visualize the results online. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' 
- pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - data_info = self.data_infos[i] - pts_path = data_info['pts_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points, img_metas, img = self._extract_data( - i, pipeline, ['points', 'img_metas', 'img']) - # scale colors to [0, 255] - points = points.numpy() - points[:, 3:] *= 255 - - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy() - pred_bboxes = result['boxes_3d'].tensor.numpy() - show_result(points, gt_bboxes.copy(), pred_bboxes.copy(), out_dir, - file_name, show) - - # multi-modality visualization - if self.modality['use_camera']: - img = img.numpy() - # need to transpose channel to first dim - img = img.transpose(1, 2, 0) - pred_bboxes = DepthInstance3DBoxes( - pred_bboxes, origin=(0.5, 0.5, 0)) - gt_bboxes = DepthInstance3DBoxes( - gt_bboxes, origin=(0.5, 0.5, 0)) - show_multi_modality_result( - img, - gt_bboxes, - pred_bboxes, - None, - out_dir, - file_name, - box_mode='depth', - img_metas=img_metas, - show=show) - - def evaluate(self, - results, - metric=None, - iou_thr=(0.25, 0.5), - iou_thr_2d=(0.5, ), - logger=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluate. - - Evaluation in indoor protocol. - - Args: - results (list[dict]): List of results. - metric (str | list[str], optional): Metrics to be evaluated. - Default: None. - iou_thr (list[float], optional): AP IoU thresholds for 3D - evaluation. Default: (0.25, 0.5). - iou_thr_2d (list[float], optional): AP IoU thresholds for 2D - evaluation. Default: (0.5, ). - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict: Evaluation results. - """ - # evaluate 3D detection performance - if isinstance(results[0], dict): - return super().evaluate(results, metric, iou_thr, logger, show, - out_dir, pipeline) - # evaluate 2D detection performance - else: - eval_results = OrderedDict() - annotations = [self.get_ann_info(i) for i in range(len(self))] - iou_thr_2d = (iou_thr_2d) if isinstance(iou_thr_2d, - float) else iou_thr_2d - for iou_thr_2d_single in iou_thr_2d: - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=None, - iou_thr=iou_thr_2d_single, - dataset=self.CLASSES, - logger=logger) - eval_results['mAP_' + str(iou_thr_2d_single)] = mean_ap - return eval_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/utils.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/utils.py deleted file mode 100644 index e9cfda12..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/utils.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv - -# yapf: disable -from mmdet3d.datasets.pipelines import (Collect3D, DefaultFormatBundle3D, - LoadAnnotations3D, - LoadImageFromFileMono3D, - LoadMultiViewImageFromFiles, - LoadPointsFromFile, - LoadPointsFromMultiSweeps, - MultiScaleFlipAug3D, - PointSegClassMapping) -from mmdet.datasets.pipelines import LoadImageFromFile, MultiScaleFlipAug -# yapf: enable -from .builder import PIPELINES - - -def is_loading_function(transform): - """Judge whether a transform function is a loading function. - - Note: `MultiScaleFlipAug3D` is a wrapper for multiple pipeline functions, - so we need to search if its inner transforms contain any loading function. 
- - Args: - transform (dict | :obj:`Pipeline`): A transform config or a function. - - Returns: - bool: Whether it is a loading function. None means can't judge. - When transform is `MultiScaleFlipAug3D`, we return None. - """ - # TODO: use more elegant way to distinguish loading modules - loading_functions = (LoadImageFromFile, LoadPointsFromFile, - LoadAnnotations3D, LoadMultiViewImageFromFiles, - LoadPointsFromMultiSweeps, DefaultFormatBundle3D, - Collect3D, LoadImageFromFileMono3D, - PointSegClassMapping) - if isinstance(transform, dict): - obj_cls = PIPELINES.get(transform['type']) - if obj_cls is None: - return False - if obj_cls in loading_functions: - return True - if obj_cls in (MultiScaleFlipAug3D, MultiScaleFlipAug): - return None - elif callable(transform): - if isinstance(transform, loading_functions): - return True - if isinstance(transform, (MultiScaleFlipAug3D, MultiScaleFlipAug)): - return None - return False - - -def get_loading_pipeline(pipeline): - """Only keep loading image, points and annotations related configuration. - - Args: - pipeline (list[dict] | list[:obj:`Pipeline`]): - Data pipeline configs or list of pipeline functions. - - Returns: - list[dict] | list[:obj:`Pipeline`]): The new pipeline list with only - keep loading image, points and annotations related configuration. - - Examples: - >>> pipelines = [ - ... dict(type='LoadPointsFromFile', - ... coord_type='LIDAR', load_dim=4, use_dim=4), - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations3D', - ... with_bbox=True, with_label_3d=True), - ... dict(type='Resize', - ... img_scale=[(640, 192), (2560, 768)], keep_ratio=True), - ... dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - ... dict(type='PointsRangeFilter', - ... point_cloud_range=point_cloud_range), - ... dict(type='ObjectRangeFilter', - ... point_cloud_range=point_cloud_range), - ... dict(type='PointShuffle'), - ... dict(type='Normalize', **img_norm_cfg), - ... dict(type='Pad', size_divisor=32), - ... dict(type='DefaultFormatBundle3D', class_names=class_names), - ... dict(type='Collect3D', - ... keys=['points', 'img', 'gt_bboxes_3d', 'gt_labels_3d']) - ... ] - >>> expected_pipelines = [ - ... dict(type='LoadPointsFromFile', - ... coord_type='LIDAR', load_dim=4, use_dim=4), - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations3D', - ... with_bbox=True, with_label_3d=True), - ... dict(type='DefaultFormatBundle3D', class_names=class_names), - ... dict(type='Collect3D', - ... keys=['points', 'img', 'gt_bboxes_3d', 'gt_labels_3d']) - ... ] - >>> assert expected_pipelines == \ - ... get_loading_pipeline(pipelines) - """ - loading_pipeline = [] - for transform in pipeline: - is_loading = is_loading_function(transform) - if is_loading is None: # MultiScaleFlipAug3D - # extract its inner pipeline - if isinstance(transform, dict): - inner_pipeline = transform.get('transforms', []) - else: - inner_pipeline = transform.transforms.transforms - loading_pipeline.extend(get_loading_pipeline(inner_pipeline)) - elif is_loading: - loading_pipeline.append(transform) - assert len(loading_pipeline) > 0, \ - 'The data pipeline in your config file must include ' \ - 'loading step.' - return loading_pipeline - - -def extract_result_dict(results, key): - """Extract and return the data corresponding to key in result dict. - - ``results`` is a dict output from `pipeline(input_dict)`, which is the - loaded data from ``Dataset`` class. 
- The data terms inside may be wrapped in list, tuple and DataContainer, so - this function essentially extracts data from these wrappers. - - Args: - results (dict): Data loaded using pipeline. - key (str): Key of the desired data. - - Returns: - np.ndarray | torch.Tensor: Data term. - """ - if key not in results.keys(): - return None - # results[key] may be data or list[data] or tuple[data] - # data may be wrapped inside DataContainer - data = results[key] - if isinstance(data, (list, tuple)): - data = data[0] - if isinstance(data, mmcv.parallel.DataContainer): - data = data._data - return data diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/waymo_dataset.py b/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/waymo_dataset.py deleted file mode 100644 index 6e204df9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/datasets/waymo_dataset.py +++ /dev/null @@ -1,549 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import tempfile -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.utils import print_log - -from ..core.bbox import Box3DMode, points_cam2img -from .builder import DATASETS -from .kitti_dataset import KittiDataset - - -@DATASETS.register_module() -class WaymoDataset(KittiDataset): - """Waymo Dataset. - - This class serves as the API for experiments on the Waymo Dataset. - - Please refer to ``_for data downloading. - It is recommended to symlink the dataset root to $MMDETECTION3D/data and - organize them as the doc shows. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - split (str): Split of input data. - pts_prefix (str, optional): Prefix of points files. - Defaults to 'velodyne'. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR' in this dataset. Available options includes - - - 'LiDAR': box in LiDAR coordinates - - 'Depth': box in depth coordinates, usually for indoor dataset - - 'Camera': box in camera coordinates - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - pcd_limit_range (list(float), optional): The range of point cloud used - to filter invalid predicted boxes. - Default: [-85, -85, -5, 85, 85, 5]. 
- """ - - CLASSES = ('Car', 'Cyclist', 'Pedestrian') - - def __init__(self, - data_root, - ann_file, - split, - pts_prefix='velodyne', - pipeline=None, - classes=None, - modality=None, - box_type_3d='LiDAR', - filter_empty_gt=True, - test_mode=False, - load_interval=1, - pcd_limit_range=[-85, -85, -5, 85, 85, 5], - **kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - split=split, - pts_prefix=pts_prefix, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - pcd_limit_range=pcd_limit_range, - **kwargs) - - # to load a subset, just set the load_interval in the dataset config - self.data_infos = self.data_infos[::load_interval] - if hasattr(self, 'flag'): - self.flag = self.flag[::load_interval] - - def _get_pts_filename(self, idx): - pts_filename = osp.join(self.root_split, self.pts_prefix, - f'{idx:07d}.bin') - return pts_filename - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Standard input_dict consists of the - data information. - - - sample_idx (str): sample index - - pts_filename (str): filename of point clouds - - img_prefix (str): prefix of image files - - img_info (dict): image info - - lidar2img (list[np.ndarray], optional): transformations from - lidar to different cameras - - ann_info (dict): annotation info - """ - info = self.data_infos[index] - sample_idx = info['image']['image_idx'] - img_filename = os.path.join(self.data_root, - info['image']['image_path']) - - # TODO: consider use torch.Tensor only - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P0 = info['calib']['P0'].astype(np.float32) - lidar2img = P0 @ rect @ Trv2c - - pts_filename = self._get_pts_filename(sample_idx) - input_dict = dict( - sample_idx=sample_idx, - pts_filename=pts_filename, - img_prefix=None, - img_info=dict(filename=img_filename), - lidar2img=lidar2img) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - - return input_dict - - def format_results(self, - outputs, - pklfile_prefix=None, - submission_prefix=None, - data_format='waymo'): - """Format the results to pkl file. - - Args: - outputs (list[dict]): Testing results of the dataset. - pklfile_prefix (str): The prefix of pkl files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str): The prefix of submitted files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". If not specified, a temp file will be created. - Default: None. - data_format (str, optional): Output data format. - Default: 'waymo'. Another supported choice is 'kitti'. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing - the json filepaths, tmp_dir is the temporal directory created - for saving json files when jsonfile_prefix is not specified. 
- """ - if pklfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - assert ('waymo' in data_format or 'kitti' in data_format), \ - f'invalid data_format {data_format}' - - if (not isinstance(outputs[0], dict)) or 'img_bbox' in outputs[0]: - raise TypeError('Not supported type for reformat results.') - elif 'pts_bbox' in outputs[0]: - result_files = dict() - for name in outputs[0]: - results_ = [out[name] for out in outputs] - pklfile_prefix_ = pklfile_prefix + name - if submission_prefix is not None: - submission_prefix_ = f'{submission_prefix}_{name}' - else: - submission_prefix_ = None - result_files_ = self.bbox2result_kitti(results_, self.CLASSES, - pklfile_prefix_, - submission_prefix_) - result_files[name] = result_files_ - else: - result_files = self.bbox2result_kitti(outputs, self.CLASSES, - pklfile_prefix, - submission_prefix) - if 'waymo' in data_format: - from ..core.evaluation.waymo_utils.prediction_kitti_to_waymo import \ - KITTI2Waymo # noqa - waymo_root = osp.join( - self.data_root.split('kitti_format')[0], 'waymo_format') - if self.split == 'training': - waymo_tfrecords_dir = osp.join(waymo_root, 'validation') - prefix = '1' - elif self.split == 'testing': - waymo_tfrecords_dir = osp.join(waymo_root, 'testing') - prefix = '2' - else: - raise ValueError('Not supported split value.') - save_tmp_dir = tempfile.TemporaryDirectory() - waymo_results_save_dir = save_tmp_dir.name - waymo_results_final_path = f'{pklfile_prefix}.bin' - if 'pts_bbox' in result_files: - converter = KITTI2Waymo(result_files['pts_bbox'], - waymo_tfrecords_dir, - waymo_results_save_dir, - waymo_results_final_path, prefix) - else: - converter = KITTI2Waymo(result_files, waymo_tfrecords_dir, - waymo_results_save_dir, - waymo_results_final_path, prefix) - converter.convert() - save_tmp_dir.cleanup() - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='waymo', - logger=None, - pklfile_prefix=None, - submission_prefix=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluation in KITTI protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str], optional): Metrics to be evaluated. - Default: 'waymo'. Another supported metric is 'kitti'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - pklfile_prefix (str, optional): The prefix of pkl files including - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str, optional): The prefix of submission data. - If not specified, the submission data will not be generated. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. 
- - Returns: - dict[str: float]: results of each evaluation metric - """ - assert ('waymo' in metric or 'kitti' in metric), \ - f'invalid metric {metric}' - if 'kitti' in metric: - result_files, tmp_dir = self.format_results( - results, - pklfile_prefix, - submission_prefix, - data_format='kitti') - from mmdet3d.core.evaluation import kitti_eval - gt_annos = [info['annos'] for info in self.data_infos] - - if isinstance(result_files, dict): - ap_dict = dict() - for name, result_files_ in result_files.items(): - eval_types = ['bev', '3d'] - ap_result_str, ap_dict_ = kitti_eval( - gt_annos, - result_files_, - self.CLASSES, - eval_types=eval_types) - for ap_type, ap in ap_dict_.items(): - ap_dict[f'{name}/{ap_type}'] = float( - '{:.4f}'.format(ap)) - - print_log( - f'Results of {name}:\n' + ap_result_str, logger=logger) - - else: - ap_result_str, ap_dict = kitti_eval( - gt_annos, - result_files, - self.CLASSES, - eval_types=['bev', '3d']) - print_log('\n' + ap_result_str, logger=logger) - if 'waymo' in metric: - waymo_root = osp.join( - self.data_root.split('kitti_format')[0], 'waymo_format') - if pklfile_prefix is None: - eval_tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(eval_tmp_dir.name, 'results') - else: - eval_tmp_dir = None - result_files, tmp_dir = self.format_results( - results, - pklfile_prefix, - submission_prefix, - data_format='waymo') - import subprocess - ret_bytes = subprocess.check_output( - 'mmdet3d/core/evaluation/waymo_utils/' + - f'compute_detection_metrics_main {pklfile_prefix}.bin ' + - f'{waymo_root}/gt.bin', - shell=True) - ret_texts = ret_bytes.decode('utf-8') - print_log(ret_texts) - # parse the text to get ap_dict - ap_dict = { - 'Vehicle/L1 mAP': 0, - 'Vehicle/L1 mAPH': 0, - 'Vehicle/L2 mAP': 0, - 'Vehicle/L2 mAPH': 0, - 'Pedestrian/L1 mAP': 0, - 'Pedestrian/L1 mAPH': 0, - 'Pedestrian/L2 mAP': 0, - 'Pedestrian/L2 mAPH': 0, - 'Sign/L1 mAP': 0, - 'Sign/L1 mAPH': 0, - 'Sign/L2 mAP': 0, - 'Sign/L2 mAPH': 0, - 'Cyclist/L1 mAP': 0, - 'Cyclist/L1 mAPH': 0, - 'Cyclist/L2 mAP': 0, - 'Cyclist/L2 mAPH': 0, - 'Overall/L1 mAP': 0, - 'Overall/L1 mAPH': 0, - 'Overall/L2 mAP': 0, - 'Overall/L2 mAPH': 0 - } - mAP_splits = ret_texts.split('mAP ') - mAPH_splits = ret_texts.split('mAPH ') - for idx, key in enumerate(ap_dict.keys()): - split_idx = int(idx / 2) + 1 - if idx % 2 == 0: # mAP - ap_dict[key] = float(mAP_splits[split_idx].split(']')[0]) - else: # mAPH - ap_dict[key] = float(mAPH_splits[split_idx].split(']')[0]) - ap_dict['Overall/L1 mAP'] = \ - (ap_dict['Vehicle/L1 mAP'] + ap_dict['Pedestrian/L1 mAP'] + - ap_dict['Cyclist/L1 mAP']) / 3 - ap_dict['Overall/L1 mAPH'] = \ - (ap_dict['Vehicle/L1 mAPH'] + ap_dict['Pedestrian/L1 mAPH'] + - ap_dict['Cyclist/L1 mAPH']) / 3 - ap_dict['Overall/L2 mAP'] = \ - (ap_dict['Vehicle/L2 mAP'] + ap_dict['Pedestrian/L2 mAP'] + - ap_dict['Cyclist/L2 mAP']) / 3 - ap_dict['Overall/L2 mAPH'] = \ - (ap_dict['Vehicle/L2 mAPH'] + ap_dict['Pedestrian/L2 mAPH'] + - ap_dict['Cyclist/L2 mAPH']) / 3 - if eval_tmp_dir is not None: - eval_tmp_dir.cleanup() - - if tmp_dir is not None: - tmp_dir.cleanup() - - if show or out_dir: - self.show(results, out_dir, show=show, pipeline=pipeline) - return ap_dict - - def bbox2result_kitti(self, - net_outputs, - class_names, - pklfile_prefix=None, - submission_prefix=None): - """Convert results to kitti format for evaluation and test submission. 
- - Args: - net_outputs (List[np.ndarray]): list of array storing the - bbox and score - class_nanes (List[String]): A list of class names - pklfile_prefix (str): The prefix of pkl file. - submission_prefix (str): The prefix of submission file. - - Returns: - List[dict]: A list of dict have the kitti 3d format - """ - assert len(net_outputs) == len(self.data_infos), \ - 'invalid list length of network outputs' - if submission_prefix is not None: - mmcv.mkdir_or_exist(submission_prefix) - - det_annos = [] - print('\nConverting prediction to KITTI format') - for idx, pred_dicts in enumerate( - mmcv.track_iter_progress(net_outputs)): - annos = [] - info = self.data_infos[idx] - sample_idx = info['image']['image_idx'] - image_shape = info['image']['image_shape'][:2] - - box_dict = self.convert_valid_bboxes(pred_dicts, info) - if len(box_dict['bbox']) > 0: - box_2d_preds = box_dict['bbox'] - box_preds = box_dict['box3d_camera'] - scores = box_dict['scores'] - box_preds_lidar = box_dict['box3d_lidar'] - label_preds = box_dict['label_preds'] - - anno = { - 'name': [], - 'truncated': [], - 'occluded': [], - 'alpha': [], - 'bbox': [], - 'dimensions': [], - 'location': [], - 'rotation_y': [], - 'score': [] - } - - for box, box_lidar, bbox, score, label in zip( - box_preds, box_preds_lidar, box_2d_preds, scores, - label_preds): - bbox[2:] = np.minimum(bbox[2:], image_shape[::-1]) - bbox[:2] = np.maximum(bbox[:2], [0, 0]) - anno['name'].append(class_names[int(label)]) - anno['truncated'].append(0.0) - anno['occluded'].append(0) - anno['alpha'].append( - -np.arctan2(-box_lidar[1], box_lidar[0]) + box[6]) - anno['bbox'].append(bbox) - anno['dimensions'].append(box[3:6]) - anno['location'].append(box[:3]) - anno['rotation_y'].append(box[6]) - anno['score'].append(score) - - anno = {k: np.stack(v) for k, v in anno.items()} - annos.append(anno) - - if submission_prefix is not None: - curr_file = f'{submission_prefix}/{sample_idx:07d}.txt' - with open(curr_file, 'w') as f: - bbox = anno['bbox'] - loc = anno['location'] - dims = anno['dimensions'] # lhw -> hwl - - for idx in range(len(bbox)): - print( - '{} -1 -1 {:.4f} {:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} {:.4f} {:.4f} {:.4f}'. - format(anno['name'][idx], anno['alpha'][idx], - bbox[idx][0], bbox[idx][1], - bbox[idx][2], bbox[idx][3], - dims[idx][1], dims[idx][2], - dims[idx][0], loc[idx][0], loc[idx][1], - loc[idx][2], anno['rotation_y'][idx], - anno['score'][idx]), - file=f) - else: - annos.append({ - 'name': np.array([]), - 'truncated': np.array([]), - 'occluded': np.array([]), - 'alpha': np.array([]), - 'bbox': np.zeros([0, 4]), - 'dimensions': np.zeros([0, 3]), - 'location': np.zeros([0, 3]), - 'rotation_y': np.array([]), - 'score': np.array([]), - }) - annos[-1]['sample_idx'] = np.array( - [sample_idx] * len(annos[-1]['score']), dtype=np.int64) - - det_annos += annos - - if pklfile_prefix is not None: - if not pklfile_prefix.endswith(('.pkl', '.pickle')): - out = f'{pklfile_prefix}.pkl' - mmcv.dump(det_annos, out) - print(f'Result is saved to {out}.') - - return det_annos - - def convert_valid_bboxes(self, box_dict, info): - """Convert the boxes into valid format. - - Args: - box_dict (dict): Bounding boxes to be converted. - - - boxes_3d (:obj:``LiDARInstance3DBoxes``): 3D bounding boxes. - - scores_3d (np.ndarray): Scores of predicted boxes. - - labels_3d (np.ndarray): Class labels of predicted boxes. - info (dict): Dataset information dictionary. - - Returns: - dict: Valid boxes after conversion. 
- - - bbox (np.ndarray): 2D bounding boxes (in camera 0). - - box3d_camera (np.ndarray): 3D boxes in camera coordinates. - - box3d_lidar (np.ndarray): 3D boxes in lidar coordinates. - - scores (np.ndarray): Scores of predicted boxes. - - label_preds (np.ndarray): Class labels of predicted boxes. - - sample_idx (np.ndarray): Sample index. - """ - # TODO: refactor this function - box_preds = box_dict['boxes_3d'] - scores = box_dict['scores_3d'] - labels = box_dict['labels_3d'] - sample_idx = info['image']['image_idx'] - box_preds.limit_yaw(offset=0.5, period=np.pi * 2) - - if len(box_preds) == 0: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - box3d_lidar=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx) - - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P0 = info['calib']['P0'].astype(np.float32) - P0 = box_preds.tensor.new_tensor(P0) - - box_preds_camera = box_preds.convert_to(Box3DMode.CAM, rect @ Trv2c) - - box_corners = box_preds_camera.corners - box_corners_in_image = points_cam2img(box_corners, P0) - # box_corners_in_image: [N, 8, 2] - minxy = torch.min(box_corners_in_image, dim=1)[0] - maxxy = torch.max(box_corners_in_image, dim=1)[0] - box_2d_preds = torch.cat([minxy, maxxy], dim=1) - # Post-processing - # check box_preds - limit_range = box_preds.tensor.new_tensor(self.pcd_limit_range) - valid_pcd_inds = ((box_preds.center > limit_range[:3]) & - (box_preds.center < limit_range[3:])) - valid_inds = valid_pcd_inds.all(-1) - - if valid_inds.sum() > 0: - return dict( - bbox=box_2d_preds[valid_inds, :].numpy(), - box3d_camera=box_preds_camera[valid_inds].tensor.numpy(), - box3d_lidar=box_preds[valid_inds].tensor.numpy(), - scores=scores[valid_inds].numpy(), - label_preds=labels[valid_inds].numpy(), - sample_idx=sample_idx, - ) - else: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - box3d_lidar=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx, - ) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/__init__.py deleted file mode 100644 index 7c7e8fc6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
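The `evaluate()` method of the `WaymoDataset` deleted above shells out to the Waymo `compute_detection_metrics_main` binary and recovers the per-class numbers by splitting the printed report on the literal tokens `'mAP '` and `'mAPH '`. The standalone sketch below reproduces just that parsing step on a made-up two-line report; the sample text and the key list are illustrative, not real benchmark output:

```python
# Sketch of the text parsing used by the deleted WaymoDataset.evaluate().
# The metrics binary prints lines roughly of the form
#   OBJECT_TYPE_TYPE_VEHICLE_LEVEL_1: [mAP 0.71] [mAPH 0.69]
# and the parser recovers the numbers purely by string splitting.
sample = ('OBJECT_TYPE_TYPE_VEHICLE_LEVEL_1: [mAP 0.71] [mAPH 0.69]\n'
          'OBJECT_TYPE_TYPE_VEHICLE_LEVEL_2: [mAP 0.63] [mAPH 0.61]\n')

keys = ['Vehicle/L1 mAP', 'Vehicle/L1 mAPH', 'Vehicle/L2 mAP', 'Vehicle/L2 mAPH']
map_splits = sample.split('mAP ')    # 'mAP ' (trailing space) never matches inside 'mAPH '
maph_splits = sample.split('mAPH ')
ap_dict = {}
for idx, key in enumerate(keys):
    split_idx = idx // 2 + 1         # one chunk of the split per report line
    if idx % 2 == 0:                 # even positions in `keys` are mAP values
        ap_dict[key] = float(map_splits[split_idx].split(']')[0])
    else:                            # odd positions are mAPH values
        ap_dict[key] = float(maph_splits[split_idx].split(']')[0])
print(ap_dict)  # {'Vehicle/L1 mAP': 0.71, 'Vehicle/L1 mAPH': 0.69, ...}
```

The removed code applies the same splitting to the full report (Vehicle, Pedestrian, Sign and Cyclist at both levels), then overwrites the Overall entries with the mean of the Vehicle, Pedestrian and Cyclist values.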
-from .backbones import * # noqa: F401,F403 -from .builder import (BACKBONES, DETECTORS, FUSION_LAYERS, HEADS, LOSSES, - MIDDLE_ENCODERS, NECKS, ROI_EXTRACTORS, SEGMENTORS, - SHARED_HEADS, VOXEL_ENCODERS, build_backbone, - build_detector, build_fusion_layer, build_head, - build_loss, build_middle_encoder, build_model, - build_neck, build_roi_extractor, build_shared_head, - build_voxel_encoder) -from .decode_heads import * # noqa: F401,F403 -from .dense_heads import * # noqa: F401,F403 -from .detectors import * # noqa: F401,F403 -from .fusion_layers import * # noqa: F401,F403 -from .losses import * # noqa: F401,F403 -from .middle_encoders import * # noqa: F401,F403 -from .model_utils import * # noqa: F401,F403 -from .necks import * # noqa: F401,F403 -from .roi_heads import * # noqa: F401,F403 -from .segmentors import * # noqa: F401,F403 -from .voxel_encoders import * # noqa: F401,F403 - -__all__ = [ - 'BACKBONES', 'NECKS', 'ROI_EXTRACTORS', 'SHARED_HEADS', 'HEADS', 'LOSSES', - 'DETECTORS', 'SEGMENTORS', 'VOXEL_ENCODERS', 'MIDDLE_ENCODERS', - 'FUSION_LAYERS', 'build_backbone', 'build_neck', 'build_roi_extractor', - 'build_shared_head', 'build_head', 'build_loss', 'build_detector', - 'build_fusion_layer', 'build_model', 'build_middle_encoder', - 'build_voxel_encoder' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/__init__.py deleted file mode 100644 index d51c16d2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.backbones import SSDVGG, HRNet, ResNet, ResNetV1d, ResNeXt -from .dgcnn import DGCNNBackbone -from .dla import DLANet -from .mink_resnet import MinkResNet -from .multi_backbone import MultiBackbone -from .nostem_regnet import NoStemRegNet -from .pointnet2_sa_msg import PointNet2SAMSG -from .pointnet2_sa_ssg import PointNet2SASSG -from .second import SECOND - -__all__ = [ - 'ResNet', 'ResNetV1d', 'ResNeXt', 'SSDVGG', 'HRNet', 'NoStemRegNet', - 'SECOND', 'DGCNNBackbone', 'PointNet2SASSG', 'PointNet2SAMSG', - 'MultiBackbone', 'DLANet', 'MinkResNet' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/base_pointnet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/base_pointnet.py deleted file mode 100644 index 31439e6a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/base_pointnet.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from abc import ABCMeta - -from mmcv.runner import BaseModule - - -class BasePointNet(BaseModule, metaclass=ABCMeta): - """Base class for PointNet.""" - - def __init__(self, init_cfg=None, pretrained=None): - super(BasePointNet, self).__init__(init_cfg) - self.fp16_enabled = False - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - @staticmethod - def _split_point_feats(points): - """Split coordinates and features of input points. - - Args: - points (torch.Tensor): Point coordinates with features, - with shape (B, N, 3 + input_feature_dim). - - Returns: - torch.Tensor: Coordinates of input points. - torch.Tensor: Features of input points. 
- """ - xyz = points[..., 0:3].contiguous() - if points.size(-1) > 3: - features = points[..., 3:].transpose(1, 2).contiguous() - else: - features = None - - return xyz, features diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/dgcnn.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/dgcnn.py deleted file mode 100644 index 20e82d9c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/dgcnn.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner import BaseModule, auto_fp16 -from torch import nn as nn - -from mmdet3d.ops import DGCNNFAModule, DGCNNGFModule -from ..builder import BACKBONES - - -@BACKBONES.register_module() -class DGCNNBackbone(BaseModule): - """Backbone network for DGCNN. - - Args: - in_channels (int): Input channels of point cloud. - num_samples (tuple[int], optional): The number of samples for knn or - ball query in each graph feature (GF) module. - Defaults to (20, 20, 20). - knn_modes (tuple[str], optional): Mode of KNN of each knn module. - Defaults to ('D-KNN', 'F-KNN', 'F-KNN'). - radius (tuple[float], optional): Sampling radii of each GF module. - Defaults to (None, None, None). - gf_channels (tuple[tuple[int]], optional): Out channels of each mlp in - GF module. Defaults to ((64, 64), (64, 64), (64, )). - fa_channels (tuple[int], optional): Out channels of each mlp in FA - module. Defaults to (1024, ). - act_cfg (dict, optional): Config of activation layer. - Defaults to dict(type='ReLU'). - init_cfg (dict, optional): Initialization config. - Defaults to None. - """ - - def __init__(self, - in_channels, - num_samples=(20, 20, 20), - knn_modes=('D-KNN', 'F-KNN', 'F-KNN'), - radius=(None, None, None), - gf_channels=((64, 64), (64, 64), (64, )), - fa_channels=(1024, ), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.num_gf = len(gf_channels) - - assert len(num_samples) == len(knn_modes) == len(radius) == len( - gf_channels), 'Num_samples, knn_modes, radius and gf_channels \ - should have the same length.' - - self.GF_modules = nn.ModuleList() - gf_in_channel = in_channels * 2 - skip_channel_list = [gf_in_channel] # input channel list - - for gf_index in range(self.num_gf): - cur_gf_mlps = list(gf_channels[gf_index]) - cur_gf_mlps = [gf_in_channel] + cur_gf_mlps - gf_out_channel = cur_gf_mlps[-1] - - self.GF_modules.append( - DGCNNGFModule( - mlp_channels=cur_gf_mlps, - num_sample=num_samples[gf_index], - knn_mode=knn_modes[gf_index], - radius=radius[gf_index], - act_cfg=act_cfg)) - skip_channel_list.append(gf_out_channel) - gf_in_channel = gf_out_channel * 2 - - fa_in_channel = sum(skip_channel_list[1:]) - cur_fa_mlps = list(fa_channels) - cur_fa_mlps = [fa_in_channel] + cur_fa_mlps - - self.FA_module = DGCNNFAModule( - mlp_channels=cur_fa_mlps, act_cfg=act_cfg) - - @auto_fp16(apply_to=('points', )) - def forward(self, points): - """Forward pass. - - Args: - points (torch.Tensor): point coordinates with features, - with shape (B, N, in_channels). - - Returns: - dict[str, list[torch.Tensor]]: Outputs after graph feature (GF) and - feature aggregation (FA) modules. - - - gf_points (list[torch.Tensor]): Outputs after each GF module. - - fa_points (torch.Tensor): Outputs after FA module. 
- """ - gf_points = [points] - - for i in range(self.num_gf): - cur_points = self.GF_modules[i](gf_points[i]) - gf_points.append(cur_points) - - fa_points = self.FA_module(gf_points) - - out = dict(gf_points=gf_points, fa_points=fa_points) - return out diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/dla.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/dla.py deleted file mode 100644 index a5479091..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/dla.py +++ /dev/null @@ -1,446 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule -from torch import nn - -from ..builder import BACKBONES - - -def dla_build_norm_layer(cfg, num_features): - """Build normalization layer specially designed for DLANet. - - Args: - cfg (dict): The norm layer config, which should contain: - - - type (str): Layer type. - - layer args: Args needed to instantiate a norm layer. - - requires_grad (bool, optional): Whether stop gradient updates. - num_features (int): Number of input channels. - - - Returns: - Function: Build normalization layer in mmcv. - """ - cfg_ = cfg.copy() - if cfg_['type'] == 'GN': - if num_features % 32 == 0: - return build_norm_layer(cfg_, num_features) - else: - assert 'num_groups' in cfg_ - cfg_['num_groups'] = cfg_['num_groups'] // 2 - return build_norm_layer(cfg_, num_features) - else: - return build_norm_layer(cfg_, num_features) - - -class BasicBlock(BaseModule): - """BasicBlock in DLANet. - - Args: - in_channels (int): Input feature channel. - out_channels (int): Output feature channel. - norm_cfg (dict): Dictionary to construct and config - norm layer. - conv_cfg (dict): Dictionary to construct and config - conv layer. - stride (int, optional): Conv stride. Default: 1. - dilation (int, optional): Conv dilation. Default: 1. - init_cfg (dict, optional): Initialization config. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - norm_cfg, - conv_cfg, - stride=1, - dilation=1, - init_cfg=None): - super(BasicBlock, self).__init__(init_cfg) - self.conv1 = build_conv_layer( - conv_cfg, - in_channels, - out_channels, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.norm1 = dla_build_norm_layer(norm_cfg, out_channels)[1] - self.relu = nn.ReLU(inplace=True) - self.conv2 = build_conv_layer( - conv_cfg, - out_channels, - out_channels, - 3, - stride=1, - padding=dilation, - dilation=dilation, - bias=False) - self.norm2 = dla_build_norm_layer(norm_cfg, out_channels)[1] - self.stride = stride - - def forward(self, x, identity=None): - """Forward function.""" - - if identity is None: - identity = x - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - out = self.conv2(out) - out = self.norm2(out) - out += identity - out = self.relu(out) - - return out - - -class Root(BaseModule): - """Root in DLANet. - - Args: - in_channels (int): Input feature channel. - out_channels (int): Output feature channel. - norm_cfg (dict): Dictionary to construct and config - norm layer. - conv_cfg (dict): Dictionary to construct and config - conv layer. - kernel_size (int): Size of convolution kernel. - add_identity (bool): Whether to add identity in root. - init_cfg (dict, optional): Initialization config. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - norm_cfg, - conv_cfg, - kernel_size, - add_identity, - init_cfg=None): - super(Root, self).__init__(init_cfg) - self.conv = build_conv_layer( - conv_cfg, - in_channels, - out_channels, - 1, - stride=1, - padding=(kernel_size - 1) // 2, - bias=False) - self.norm = dla_build_norm_layer(norm_cfg, out_channels)[1] - self.relu = nn.ReLU(inplace=True) - self.add_identity = add_identity - - def forward(self, feat_list): - """Forward function. - - Args: - feat_list (list[torch.Tensor]): Output features from - multiple layers. - """ - children = feat_list - x = self.conv(torch.cat(feat_list, 1)) - x = self.norm(x) - if self.add_identity: - x += children[0] - x = self.relu(x) - - return x - - -class Tree(BaseModule): - """Tree in DLANet. - - Args: - levels (int): The level of the tree. - block (nn.Module): The block module in tree. - in_channels: Input feature channel. - out_channels: Output feature channel. - norm_cfg (dict): Dictionary to construct and config - norm layer. - conv_cfg (dict): Dictionary to construct and config - conv layer. - stride (int, optional): Convolution stride. - Default: 1. - level_root (bool, optional): whether belongs to the - root layer. - root_dim (int, optional): Root input feature channel. - root_kernel_size (int, optional): Size of root - convolution kernel. Default: 1. - dilation (int, optional): Conv dilation. Default: 1. - add_identity (bool, optional): Whether to add - identity in root. Default: False. - init_cfg (dict, optional): Initialization config. - Default: None. - """ - - def __init__(self, - levels, - block, - in_channels, - out_channels, - norm_cfg, - conv_cfg, - stride=1, - level_root=False, - root_dim=None, - root_kernel_size=1, - dilation=1, - add_identity=False, - init_cfg=None): - super(Tree, self).__init__(init_cfg) - if root_dim is None: - root_dim = 2 * out_channels - if level_root: - root_dim += in_channels - if levels == 1: - self.root = Root(root_dim, out_channels, norm_cfg, conv_cfg, - root_kernel_size, add_identity) - self.tree1 = block( - in_channels, - out_channels, - norm_cfg, - conv_cfg, - stride, - dilation=dilation) - self.tree2 = block( - out_channels, - out_channels, - norm_cfg, - conv_cfg, - 1, - dilation=dilation) - else: - self.tree1 = Tree( - levels - 1, - block, - in_channels, - out_channels, - norm_cfg, - conv_cfg, - stride, - root_dim=None, - root_kernel_size=root_kernel_size, - dilation=dilation, - add_identity=add_identity) - self.tree2 = Tree( - levels - 1, - block, - out_channels, - out_channels, - norm_cfg, - conv_cfg, - root_dim=root_dim + out_channels, - root_kernel_size=root_kernel_size, - dilation=dilation, - add_identity=add_identity) - self.level_root = level_root - self.root_dim = root_dim - self.downsample = None - self.project = None - self.levels = levels - if stride > 1: - self.downsample = nn.MaxPool2d(stride, stride=stride) - if in_channels != out_channels: - self.project = nn.Sequential( - build_conv_layer( - conv_cfg, - in_channels, - out_channels, - 1, - stride=1, - bias=False), - dla_build_norm_layer(norm_cfg, out_channels)[1]) - - def forward(self, x, identity=None, children=None): - children = [] if children is None else children - bottom = self.downsample(x) if self.downsample else x - identity = self.project(bottom) if self.project else bottom - if self.level_root: - children.append(bottom) - x1 = self.tree1(x, identity) - if self.levels == 1: - x2 = self.tree2(x1) - feat_list = [x2, x1] + children - x = self.root(feat_list) - else: - 
children.append(x1) - x = self.tree2(x1, children=children) - return x - - -@BACKBONES.register_module() -class DLANet(BaseModule): - r"""`DLA backbone `_. - - Args: - depth (int): Depth of DLA. Default: 34. - in_channels (int, optional): Number of input image channels. - Default: 3. - norm_cfg (dict, optional): Dictionary to construct and config - norm layer. Default: None. - conv_cfg (dict, optional): Dictionary to construct and config - conv layer. Default: None. - layer_with_level_root (list[bool], optional): Whether to apply - level_root in each DLA layer, this is only used for - tree levels. Default: (False, True, True, True). - with_identity_root (bool, optional): Whether to add identity - in root layer. Default: False. - pretrained (str, optional): model pretrained path. - Default: None. - init_cfg (dict or list[dict], optional): Initialization - config dict. Default: None - """ - arch_settings = { - 34: (BasicBlock, (1, 1, 1, 2, 2, 1), (16, 32, 64, 128, 256, 512)), - } - - def __init__(self, - depth, - in_channels=3, - out_indices=(0, 1, 2, 3, 4, 5), - frozen_stages=-1, - norm_cfg=None, - conv_cfg=None, - layer_with_level_root=(False, True, True, True), - with_identity_root=False, - pretrained=None, - init_cfg=None): - super(DLANet, self).__init__(init_cfg) - if depth not in self.arch_settings: - raise KeyError(f'invalida depth {depth} for DLA') - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - - block, levels, channels = self.arch_settings[depth] - self.channels = channels - self.num_levels = len(levels) - self.frozen_stages = frozen_stages - self.out_indices = out_indices - assert max(out_indices) < self.num_levels - self.base_layer = nn.Sequential( - build_conv_layer( - conv_cfg, - in_channels, - channels[0], - 7, - stride=1, - padding=3, - bias=False), - dla_build_norm_layer(norm_cfg, channels[0])[1], - nn.ReLU(inplace=True)) - - # DLANet first uses two conv layers then uses several - # Tree layers - for i in range(2): - level_layer = self._make_conv_level( - channels[0], - channels[i], - levels[i], - norm_cfg, - conv_cfg, - stride=i + 1) - layer_name = f'level{i}' - self.add_module(layer_name, level_layer) - - for i in range(2, self.num_levels): - dla_layer = Tree( - levels[i], - block, - channels[i - 1], - channels[i], - norm_cfg, - conv_cfg, - 2, - level_root=layer_with_level_root[i - 2], - add_identity=with_identity_root) - layer_name = f'level{i}' - self.add_module(layer_name, dla_layer) - - self._freeze_stages() - - def _make_conv_level(self, - in_channels, - out_channels, - num_convs, - norm_cfg, - conv_cfg, - stride=1, - dilation=1): - """Conv modules. - - Args: - in_channels (int): Input feature channel. - out_channels (int): Output feature channel. - num_convs (int): Number of Conv module. - norm_cfg (dict): Dictionary to construct and config - norm layer. - conv_cfg (dict): Dictionary to construct and config - conv layer. - stride (int, optional): Conv stride. Default: 1. - dilation (int, optional): Conv dilation. Default: 1. 
- """ - modules = [] - for i in range(num_convs): - modules.extend([ - build_conv_layer( - conv_cfg, - in_channels, - out_channels, - 3, - stride=stride if i == 0 else 1, - padding=dilation, - bias=False, - dilation=dilation), - dla_build_norm_layer(norm_cfg, out_channels)[1], - nn.ReLU(inplace=True) - ]) - in_channels = out_channels - return nn.Sequential(*modules) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.base_layer.eval() - for param in self.base_layer.parameters(): - param.requires_grad = False - - for i in range(2): - m = getattr(self, f'level{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'level{i+1}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def forward(self, x): - outs = [] - x = self.base_layer(x) - for i in range(self.num_levels): - x = getattr(self, 'level{}'.format(i))(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/mink_resnet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/mink_resnet.py deleted file mode 100644 index 35a79ce2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/mink_resnet.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Follow https://github.com/NVIDIA/MinkowskiEngine/blob/master/examples/resnet.py # noqa -# and mmcv.cnn.ResNet -try: - import MinkowskiEngine as ME - from MinkowskiEngine.modules.resnet_block import BasicBlock, Bottleneck -except ImportError: - import warnings - warnings.warn( - 'Please follow `getting_started.md` to install MinkowskiEngine.`') - # blocks are used in the static part of MinkResNet - BasicBlock, Bottleneck = None, None - -import torch.nn as nn - -from mmdet3d.models.builder import BACKBONES - - -@BACKBONES.register_module() -class MinkResNet(nn.Module): - r"""Minkowski ResNet backbone. See `4D Spatio-Temporal ConvNets - `_ for more details. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (ont): Number of input channels, 3 for RGB. - num_stages (int, optional): Resnet stages. Default: 4. - pool (bool, optional): Add max pooling after first conv if True. - Default: True. - """ - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, depth, in_channels, num_stages=4, pool=True): - super(MinkResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - assert 4 >= num_stages >= 1 - block, stage_blocks = self.arch_settings[depth] - stage_blocks = stage_blocks[:num_stages] - self.num_stages = num_stages - self.pool = pool - - self.inplanes = 64 - self.conv1 = ME.MinkowskiConvolution( - in_channels, self.inplanes, kernel_size=3, stride=2, dimension=3) - # May be BatchNorm is better, but we follow original implementation. 
- self.norm1 = ME.MinkowskiInstanceNorm(self.inplanes) - self.relu = ME.MinkowskiReLU(inplace=True) - if self.pool: - self.maxpool = ME.MinkowskiMaxPooling( - kernel_size=2, stride=2, dimension=3) - - for i, num_blocks in enumerate(stage_blocks): - setattr( - self, f'layer{i}', - self._make_layer(block, 64 * 2**i, stage_blocks[i], stride=2)) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, ME.MinkowskiConvolution): - ME.utils.kaiming_normal_( - m.kernel, mode='fan_out', nonlinearity='relu') - - if isinstance(m, ME.MinkowskiBatchNorm): - nn.init.constant_(m.bn.weight, 1) - nn.init.constant_(m.bn.bias, 0) - - def _make_layer(self, block, planes, blocks, stride): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - ME.MinkowskiConvolution( - self.inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - dimension=3), - ME.MinkowskiBatchNorm(planes * block.expansion)) - layers = [] - layers.append( - block( - self.inplanes, - planes, - stride=stride, - downsample=downsample, - dimension=3)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, stride=1, dimension=3)) - return nn.Sequential(*layers) - - def forward(self, x): - """Forward pass of ResNet. - - Args: - x (ME.SparseTensor): Input sparse tensor. - - Returns: - list[ME.SparseTensor]: Output sparse tensors. - """ - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - if self.pool: - x = self.maxpool(x) - outs = [] - for i in range(self.num_stages): - x = getattr(self, f'layer{i}')(x) - outs.append(x) - return outs diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/multi_backbone.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/multi_backbone.py deleted file mode 100644 index ed04ecdd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/multi_backbone.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import torch -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16 -from torch import nn as nn - -from ..builder import BACKBONES, build_backbone - - -@BACKBONES.register_module() -class MultiBackbone(BaseModule): - """MultiBackbone with different configs. - - Args: - num_streams (int): The number of backbones. - backbones (list or dict): A list of backbone configs. - aggregation_mlp_channels (list[int]): Specify the mlp layers - for feature aggregation. - conv_cfg (dict): Config dict of convolutional layers. - norm_cfg (dict): Config dict of normalization layers. - act_cfg (dict): Config dict of activation layers. - suffixes (list): A list of suffixes to rename the return dict - for each backbone. - """ - - def __init__(self, - num_streams, - backbones, - aggregation_mlp_channels=None, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-5, momentum=0.01), - act_cfg=dict(type='ReLU'), - suffixes=('net0', 'net1'), - init_cfg=None, - pretrained=None, - **kwargs): - super().__init__(init_cfg=init_cfg) - assert isinstance(backbones, dict) or isinstance(backbones, list) - if isinstance(backbones, dict): - backbones_list = [] - for ind in range(num_streams): - backbones_list.append(copy.deepcopy(backbones)) - backbones = backbones_list - - assert len(backbones) == num_streams - assert len(suffixes) == num_streams - - self.backbone_list = nn.ModuleList() - # Rename the ret_dict with different suffixs. 
- self.suffixes = suffixes - - out_channels = 0 - - for backbone_cfg in backbones: - out_channels += backbone_cfg['fp_channels'][-1][-1] - self.backbone_list.append(build_backbone(backbone_cfg)) - - # Feature aggregation layers - if aggregation_mlp_channels is None: - aggregation_mlp_channels = [ - out_channels, out_channels // 2, - out_channels // len(self.backbone_list) - ] - else: - aggregation_mlp_channels.insert(0, out_channels) - - self.aggregation_layers = nn.Sequential() - for i in range(len(aggregation_mlp_channels) - 1): - self.aggregation_layers.add_module( - f'layer{i}', - ConvModule( - aggregation_mlp_channels[i], - aggregation_mlp_channels[i + 1], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=True, - inplace=True)) - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - @auto_fp16() - def forward(self, points): - """Forward pass. - - Args: - points (torch.Tensor): point coordinates with features, - with shape (B, N, 3 + input_feature_dim). - - Returns: - dict[str, list[torch.Tensor]]: Outputs from multiple backbones. - - - fp_xyz[suffix] (list[torch.Tensor]): The coordinates of - each fp features. - - fp_features[suffix] (list[torch.Tensor]): The features - from each Feature Propagate Layers. - - fp_indices[suffix] (list[torch.Tensor]): Indices of the - input points. - - hd_feature (torch.Tensor): The aggregation feature - from multiple backbones. - """ - ret = {} - fp_features = [] - for ind in range(len(self.backbone_list)): - cur_ret = self.backbone_list[ind](points) - cur_suffix = self.suffixes[ind] - fp_features.append(cur_ret['fp_features'][-1]) - if cur_suffix != '': - for k in cur_ret.keys(): - cur_ret[k + '_' + cur_suffix] = cur_ret.pop(k) - ret.update(cur_ret) - - # Combine the features here - hd_feature = torch.cat(fp_features, dim=1) - hd_feature = self.aggregation_layers(hd_feature) - ret['hd_feature'] = hd_feature - return ret diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/nostem_regnet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/nostem_regnet.py deleted file mode 100644 index 30905083..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/nostem_regnet.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.backbones import RegNet -from ..builder import BACKBONES - - -@BACKBONES.register_module() -class NoStemRegNet(RegNet): - """RegNet backbone without Stem for 3D detection. - - More details can be found in `paper `_ . - - Args: - arch (dict): The parameter of RegNets. - - w0 (int): Initial width. - - wa (float): Slope of width. - - wm (float): Quantization parameter to quantize the width. - - depth (int): Depth of the backbone. - - group_w (int): Width of group. - - bot_mul (float): Bottleneck ratio, i.e. expansion of bottleneck. - strides (Sequence[int]): Strides of the first block of each stage. - base_channels (int): Base channels after stem layer. - in_channels (int): Number of input image channels. Normally 3. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. 
If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from mmdet3d.models import NoStemRegNet - >>> import torch - >>> self = NoStemRegNet( - arch=dict( - w0=88, - wa=26.31, - wm=2.25, - group_w=48, - depth=25, - bot_mul=1.0)) - >>> self.eval() - >>> inputs = torch.rand(1, 64, 16, 16) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 96, 8, 8) - (1, 192, 4, 4) - (1, 432, 2, 2) - (1, 1008, 1, 1) - """ - - def __init__(self, arch, init_cfg=None, **kwargs): - super(NoStemRegNet, self).__init__(arch, init_cfg=init_cfg, **kwargs) - - def _make_stem_layer(self, in_channels, base_channels): - """Override the original function that do not initialize a stem layer - since 3D detector's voxel encoder works like a stem layer.""" - return - - def forward(self, x): - """Forward function of backbone. - - Args: - x (torch.Tensor): Features in shape (N, C, H, W). - - Returns: - tuple[torch.Tensor]: Multi-scale features. - """ - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/pointnet2_sa_msg.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/pointnet2_sa_msg.py deleted file mode 100644 index f6b1e47b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/pointnet2_sa_msg.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.runner import auto_fp16 -from torch import nn as nn - -from mmdet3d.ops import build_sa_module -from ..builder import BACKBONES -from .base_pointnet import BasePointNet - - -@BACKBONES.register_module() -class PointNet2SAMSG(BasePointNet): - """PointNet2 with Multi-scale grouping. - - Args: - in_channels (int): Input channels of point cloud. - num_points (tuple[int]): The number of points which each SA - module samples. - radii (tuple[float]): Sampling radii of each SA module. - num_samples (tuple[int]): The number of samples for ball - query in each SA module. - sa_channels (tuple[tuple[int]]): Out channels of each mlp in SA module. - aggregation_channels (tuple[int]): Out channels of aggregation - multi-scale grouping features. - fps_mods (tuple[int]): Mod of FPS for each SA module. - fps_sample_range_lists (tuple[tuple[int]]): The number of sampling - points which each SA module samples. - dilated_group (tuple[bool]): Whether to use dilated ball query for - out_indices (Sequence[int]): Output from which stages. - norm_cfg (dict): Config of normalization layer. - sa_cfg (dict): Config of set abstraction module, which may contain - the following keys and values: - - - pool_mod (str): Pool method ('max' or 'avg') for SA modules. 
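The `NoStemRegNet` backbone deleted above reuses a 2D RegNet but turns `_make_stem_layer` into a no-op, because the voxel encoder already plays the stem's role on BEV features. A minimal sketch of that "skip the stem via subclassing" idea, using a toy base class instead of the real mmdet `RegNet` (all class names here are illustrative only):

```python
import torch
from torch import nn


class TinyBackbone(nn.Module):
    """Toy base backbone: a stem on raw images followed by one strided stage."""

    def __init__(self, in_channels=64, stage_channels=96):
        super().__init__()
        self._make_stem_layer(in_channels)
        self.stage = nn.Sequential(
            nn.Conv2d(in_channels, stage_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True))

    def _make_stem_layer(self, in_channels):
        # Regular image backbones downsample raw RGB input here.
        self.stem = nn.Sequential(
            nn.Conv2d(3, in_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        x = self.stem(x) if hasattr(self, 'stem') else x
        return self.stage(x)


class NoStemTinyBackbone(TinyBackbone):
    """BEV features arrive already encoded, so the stem is deliberately skipped."""

    def _make_stem_layer(self, in_channels):
        return  # no-op, mirroring NoStemRegNet._make_stem_layer


bev_features = torch.rand(1, 64, 16, 16)           # output of a voxel encoder
print(NoStemTinyBackbone()(bev_features).shape)    # torch.Size([1, 96, 8, 8])
```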
- - use_xyz (bool): Whether to use xyz as a part of features. - - normalize_xyz (bool): Whether to normalize xyz with radii in - each SA module. - """ - - def __init__(self, - in_channels, - num_points=(2048, 1024, 512, 256), - radii=((0.2, 0.4, 0.8), (0.4, 0.8, 1.6), (1.6, 3.2, 4.8)), - num_samples=((32, 32, 64), (32, 32, 64), (32, 32, 32)), - sa_channels=(((16, 16, 32), (16, 16, 32), (32, 32, 64)), - ((64, 64, 128), (64, 64, 128), (64, 96, 128)), - ((128, 128, 256), (128, 192, 256), (128, 256, - 256))), - aggregation_channels=(64, 128, 256), - fps_mods=(('D-FPS'), ('FS'), ('F-FPS', 'D-FPS')), - fps_sample_range_lists=((-1), (-1), (512, -1)), - dilated_group=(True, True, True), - out_indices=(2, ), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModuleMSG', - pool_mod='max', - use_xyz=True, - normalize_xyz=False), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.num_sa = len(sa_channels) - self.out_indices = out_indices - assert max(out_indices) < self.num_sa - assert len(num_points) == len(radii) == len(num_samples) == len( - sa_channels) - if aggregation_channels is not None: - assert len(sa_channels) == len(aggregation_channels) - else: - aggregation_channels = [None] * len(sa_channels) - - self.SA_modules = nn.ModuleList() - self.aggregation_mlps = nn.ModuleList() - sa_in_channel = in_channels - 3 # number of channels without xyz - skip_channel_list = [sa_in_channel] - - for sa_index in range(self.num_sa): - cur_sa_mlps = list(sa_channels[sa_index]) - sa_out_channel = 0 - for radius_index in range(len(radii[sa_index])): - cur_sa_mlps[radius_index] = [sa_in_channel] + list( - cur_sa_mlps[radius_index]) - sa_out_channel += cur_sa_mlps[radius_index][-1] - - if isinstance(fps_mods[sa_index], tuple): - cur_fps_mod = list(fps_mods[sa_index]) - else: - cur_fps_mod = list([fps_mods[sa_index]]) - - if isinstance(fps_sample_range_lists[sa_index], tuple): - cur_fps_sample_range_list = list( - fps_sample_range_lists[sa_index]) - else: - cur_fps_sample_range_list = list( - [fps_sample_range_lists[sa_index]]) - - self.SA_modules.append( - build_sa_module( - num_point=num_points[sa_index], - radii=radii[sa_index], - sample_nums=num_samples[sa_index], - mlp_channels=cur_sa_mlps, - fps_mod=cur_fps_mod, - fps_sample_range_list=cur_fps_sample_range_list, - dilated_group=dilated_group[sa_index], - norm_cfg=norm_cfg, - cfg=sa_cfg, - bias=True)) - skip_channel_list.append(sa_out_channel) - - cur_aggregation_channel = aggregation_channels[sa_index] - if cur_aggregation_channel is None: - self.aggregation_mlps.append(None) - sa_in_channel = sa_out_channel - else: - self.aggregation_mlps.append( - ConvModule( - sa_out_channel, - cur_aggregation_channel, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - kernel_size=1, - bias=True)) - sa_in_channel = cur_aggregation_channel - - @auto_fp16(apply_to=('points', )) - def forward(self, points): - """Forward pass. - - Args: - points (torch.Tensor): point coordinates with features, - with shape (B, N, 3 + input_feature_dim). - - Returns: - dict[str, torch.Tensor]: Outputs of the last SA module. - - - sa_xyz (torch.Tensor): The coordinates of sa features. - - sa_features (torch.Tensor): The features from the - last Set Aggregation Layers. - - sa_indices (torch.Tensor): Indices of the - input points. 
- """ - xyz, features = self._split_point_feats(points) - - batch, num_points = xyz.shape[:2] - indices = xyz.new_tensor(range(num_points)).unsqueeze(0).repeat( - batch, 1).long() - - sa_xyz = [xyz] - sa_features = [features] - sa_indices = [indices] - - out_sa_xyz = [xyz] - out_sa_features = [features] - out_sa_indices = [indices] - - for i in range(self.num_sa): - cur_xyz, cur_features, cur_indices = self.SA_modules[i]( - sa_xyz[i], sa_features[i]) - if self.aggregation_mlps[i] is not None: - cur_features = self.aggregation_mlps[i](cur_features) - sa_xyz.append(cur_xyz) - sa_features.append(cur_features) - sa_indices.append( - torch.gather(sa_indices[-1], 1, cur_indices.long())) - if i in self.out_indices: - out_sa_xyz.append(sa_xyz[-1]) - out_sa_features.append(sa_features[-1]) - out_sa_indices.append(sa_indices[-1]) - - return dict( - sa_xyz=out_sa_xyz, - sa_features=out_sa_features, - sa_indices=out_sa_indices) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/pointnet2_sa_ssg.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/pointnet2_sa_ssg.py deleted file mode 100644 index c7b41526..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/pointnet2_sa_ssg.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import auto_fp16 -from torch import nn as nn - -from mmdet3d.ops import PointFPModule, build_sa_module -from ..builder import BACKBONES -from .base_pointnet import BasePointNet - - -@BACKBONES.register_module() -class PointNet2SASSG(BasePointNet): - """PointNet2 with Single-scale grouping. - - Args: - in_channels (int): Input channels of point cloud. - num_points (tuple[int]): The number of points which each SA - module samples. - radius (tuple[float]): Sampling radii of each SA module. - num_samples (tuple[int]): The number of samples for ball - query in each SA module. - sa_channels (tuple[tuple[int]]): Out channels of each mlp in SA module. - fp_channels (tuple[tuple[int]]): Out channels of each mlp in FP module. - norm_cfg (dict): Config of normalization layer. - sa_cfg (dict): Config of set abstraction module, which may contain - the following keys and values: - - - pool_mod (str): Pool method ('max' or 'avg') for SA modules. - - use_xyz (bool): Whether to use xyz as a part of features. - - normalize_xyz (bool): Whether to normalize xyz with radii in - each SA module. 
- """ - - def __init__(self, - in_channels, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((64, 64, 128), (128, 128, 256), (128, 128, 256), - (128, 128, 256)), - fp_channels=((256, 256), (256, 256)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.num_sa = len(sa_channels) - self.num_fp = len(fp_channels) - - assert len(num_points) == len(radius) == len(num_samples) == len( - sa_channels) - assert len(sa_channels) >= len(fp_channels) - - self.SA_modules = nn.ModuleList() - sa_in_channel = in_channels - 3 # number of channels without xyz - skip_channel_list = [sa_in_channel] - - for sa_index in range(self.num_sa): - cur_sa_mlps = list(sa_channels[sa_index]) - cur_sa_mlps = [sa_in_channel] + cur_sa_mlps - sa_out_channel = cur_sa_mlps[-1] - - self.SA_modules.append( - build_sa_module( - num_point=num_points[sa_index], - radius=radius[sa_index], - num_sample=num_samples[sa_index], - mlp_channels=cur_sa_mlps, - norm_cfg=norm_cfg, - cfg=sa_cfg)) - skip_channel_list.append(sa_out_channel) - sa_in_channel = sa_out_channel - - self.FP_modules = nn.ModuleList() - - fp_source_channel = skip_channel_list.pop() - fp_target_channel = skip_channel_list.pop() - for fp_index in range(len(fp_channels)): - cur_fp_mlps = list(fp_channels[fp_index]) - cur_fp_mlps = [fp_source_channel + fp_target_channel] + cur_fp_mlps - self.FP_modules.append(PointFPModule(mlp_channels=cur_fp_mlps)) - if fp_index != len(fp_channels) - 1: - fp_source_channel = cur_fp_mlps[-1] - fp_target_channel = skip_channel_list.pop() - - @auto_fp16(apply_to=('points', )) - def forward(self, points): - """Forward pass. - - Args: - points (torch.Tensor): point coordinates with features, - with shape (B, N, 3 + input_feature_dim). - - Returns: - dict[str, list[torch.Tensor]]: Outputs after SA and FP modules. - - - fp_xyz (list[torch.Tensor]): The coordinates of - each fp features. - - fp_features (list[torch.Tensor]): The features - from each Feature Propagate Layers. - - fp_indices (list[torch.Tensor]): Indices of the - input points. 
- """ - xyz, features = self._split_point_feats(points) - - batch, num_points = xyz.shape[:2] - indices = xyz.new_tensor(range(num_points)).unsqueeze(0).repeat( - batch, 1).long() - - sa_xyz = [xyz] - sa_features = [features] - sa_indices = [indices] - - for i in range(self.num_sa): - cur_xyz, cur_features, cur_indices = self.SA_modules[i]( - sa_xyz[i], sa_features[i]) - sa_xyz.append(cur_xyz) - sa_features.append(cur_features) - sa_indices.append( - torch.gather(sa_indices[-1], 1, cur_indices.long())) - - fp_xyz = [sa_xyz[-1]] - fp_features = [sa_features[-1]] - fp_indices = [sa_indices[-1]] - - for i in range(self.num_fp): - fp_features.append(self.FP_modules[i]( - sa_xyz[self.num_sa - i - 1], sa_xyz[self.num_sa - i], - sa_features[self.num_sa - i - 1], fp_features[-1])) - fp_xyz.append(sa_xyz[self.num_sa - i - 1]) - fp_indices.append(sa_indices[self.num_sa - i - 1]) - - ret = dict( - fp_xyz=fp_xyz, - fp_features=fp_features, - fp_indices=fp_indices, - sa_xyz=sa_xyz, - sa_features=sa_features, - sa_indices=sa_indices) - return ret diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/second.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/second.py deleted file mode 100644 index 680dbbec..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/backbones/second.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule -from torch import nn as nn - -from ..builder import BACKBONES - - -@BACKBONES.register_module() -class SECOND(BaseModule): - """Backbone network for SECOND/PointPillars/PartA2/MVXNet. - - Args: - in_channels (int): Input channels. - out_channels (list[int]): Output channels for multi-scale feature maps. - layer_nums (list[int]): Number of layers in each stage. - layer_strides (list[int]): Strides of each stage. - norm_cfg (dict): Config dict of normalization layers. - conv_cfg (dict): Config dict of convolutional layers. - """ - - def __init__(self, - in_channels=128, - out_channels=[128, 128, 256], - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - conv_cfg=dict(type='Conv2d', bias=False), - init_cfg=None, - pretrained=None): - super(SECOND, self).__init__(init_cfg=init_cfg) - assert len(layer_strides) == len(layer_nums) - assert len(out_channels) == len(layer_nums) - - in_filters = [in_channels, *out_channels[:-1]] - # note that when stride > 1, conv2d with same padding isn't - # equal to pad-conv2d. we should use pad-conv2d. 
- blocks = [] - for i, layer_num in enumerate(layer_nums): - block = [ - build_conv_layer( - conv_cfg, - in_filters[i], - out_channels[i], - 3, - stride=layer_strides[i], - padding=1), - build_norm_layer(norm_cfg, out_channels[i])[1], - nn.ReLU(inplace=True), - ] - for j in range(layer_num): - block.append( - build_conv_layer( - conv_cfg, - out_channels[i], - out_channels[i], - 3, - padding=1)) - block.append(build_norm_layer(norm_cfg, out_channels[i])[1]) - block.append(nn.ReLU(inplace=True)) - - block = nn.Sequential(*block) - blocks.append(block) - - self.blocks = nn.ModuleList(blocks) - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - else: - self.init_cfg = dict(type='Kaiming', layer='Conv2d') - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): Input with shape (N, C, H, W). - - Returns: - tuple[torch.Tensor]: Multi-scale features. - """ - outs = [] - for i in range(len(self.blocks)): - x = self.blocks[i](x) - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/builder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/builder.py deleted file mode 100644 index fb8b8c23..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/builder.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -from mmcv.cnn import MODELS as MMCV_MODELS -from mmcv.utils import Registry - -from mmdet.models.builder import BACKBONES as MMDET_BACKBONES -from mmdet.models.builder import DETECTORS as MMDET_DETECTORS -from mmdet.models.builder import HEADS as MMDET_HEADS -from mmdet.models.builder import LOSSES as MMDET_LOSSES -from mmdet.models.builder import NECKS as MMDET_NECKS -from mmdet.models.builder import ROI_EXTRACTORS as MMDET_ROI_EXTRACTORS -from mmdet.models.builder import SHARED_HEADS as MMDET_SHARED_HEADS -from mmseg.models.builder import LOSSES as MMSEG_LOSSES - -MODELS = Registry('models', parent=MMCV_MODELS) - -BACKBONES = MODELS -NECKS = MODELS -ROI_EXTRACTORS = MODELS -SHARED_HEADS = MODELS -HEADS = MODELS -LOSSES = MODELS -DETECTORS = MODELS -VOXEL_ENCODERS = MODELS -MIDDLE_ENCODERS = MODELS -FUSION_LAYERS = MODELS -SEGMENTORS = MODELS - - -def build_backbone(cfg): - """Build backbone.""" - if cfg['type'] in BACKBONES._module_dict.keys(): - return BACKBONES.build(cfg) - else: - return MMDET_BACKBONES.build(cfg) - - -def build_neck(cfg): - """Build neck.""" - if cfg['type'] in NECKS._module_dict.keys(): - return NECKS.build(cfg) - else: - return MMDET_NECKS.build(cfg) - - -def build_roi_extractor(cfg): - """Build RoI feature extractor.""" - if cfg['type'] in ROI_EXTRACTORS._module_dict.keys(): - return ROI_EXTRACTORS.build(cfg) - else: - return MMDET_ROI_EXTRACTORS.build(cfg) - - -def build_shared_head(cfg): - """Build shared head of detector.""" - if cfg['type'] in SHARED_HEADS._module_dict.keys(): - return SHARED_HEADS.build(cfg) - else: - return MMDET_SHARED_HEADS.build(cfg) - - -def build_head(cfg): - """Build head.""" - if cfg['type'] in HEADS._module_dict.keys(): - return HEADS.build(cfg) - else: - return MMDET_HEADS.build(cfg) - - -def build_loss(cfg): - """Build loss function.""" - if cfg['type'] in LOSSES._module_dict.keys(): - return LOSSES.build(cfg) - elif cfg['type'] in MMDET_LOSSES._module_dict.keys(): - 
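Each SECOND stage built above is one strided 3x3 conv followed by `layer_num` stride-1 convs, every conv paired with a norm layer and ReLU, and the forward pass collects one feature map per stage. A stripped-down plain-PyTorch version of that pattern, with fixed BatchNorm/ReLU instead of configurable `conv_cfg`/`norm_cfg` (a sketch, not the mmdet3d implementation):

```python
import torch
from torch import nn


def make_second_stage(in_channels, out_channels, num_layers, stride):
    """One SECOND stage: a strided conv, then `num_layers` stride-1 convs."""
    layers = [nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False),
              nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True)]
    for _ in range(num_layers):
        layers += [nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False),
                   nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)


class TinySECOND(nn.Module):
    def __init__(self, in_channels=128, out_channels=(128, 128, 256),
                 layer_nums=(3, 5, 5), strides=(2, 2, 2)):
        super().__init__()
        in_filters = (in_channels, *out_channels[:-1])
        self.blocks = nn.ModuleList(
            make_second_stage(i, o, n, s)
            for i, o, n, s in zip(in_filters, out_channels, layer_nums, strides))

    def forward(self, x):
        outs = []
        for block in self.blocks:     # keep one feature map per stage (multi-scale)
            x = block(x)
            outs.append(x)
        return tuple(outs)


feats = TinySECOND()(torch.rand(1, 128, 64, 64))
print([tuple(f.shape) for f in feats])  # [(1,128,32,32), (1,128,16,16), (1,256,8,8)]
```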
return MMDET_LOSSES.build(cfg) - else: - return MMSEG_LOSSES.build(cfg) - - -def build_detector(cfg, train_cfg=None, test_cfg=None): - """Build detector.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - if cfg['type'] in DETECTORS._module_dict.keys(): - return DETECTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) - else: - return MMDET_DETECTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) - - -def build_segmentor(cfg, train_cfg=None, test_cfg=None): - """Build segmentor.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return SEGMENTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) - - -def build_model(cfg, train_cfg=None, test_cfg=None): - """A function warpper for building 3D detector or segmentor according to - cfg. - - Should be deprecated in the future. - """ - if cfg.type in ['EncoderDecoder3D']: - return build_segmentor(cfg, train_cfg=train_cfg, test_cfg=test_cfg) - else: - return build_detector(cfg, train_cfg=train_cfg, test_cfg=test_cfg) - - -def build_voxel_encoder(cfg): - """Build voxel encoder.""" - return VOXEL_ENCODERS.build(cfg) - - -def build_middle_encoder(cfg): - """Build middle level encoder.""" - return MIDDLE_ENCODERS.build(cfg) - - -def build_fusion_layer(cfg): - """Build fusion layer.""" - return FUSION_LAYERS.build(cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/__init__.py deleted file mode 100644 index 2e86c7c8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .dgcnn_head import DGCNNHead -from .paconv_head import PAConvHead -from .pointnet2_head import PointNet2Head - -__all__ = ['PointNet2Head', 'DGCNNHead', 'PAConvHead'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/decode_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/decode_head.py deleted file mode 100644 index 6ccbfe0e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/decode_head.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from mmcv.cnn import normal_init -from mmcv.runner import BaseModule, auto_fp16, force_fp32 -from torch import nn as nn - -from mmseg.models.builder import build_loss - - -class Base3DDecodeHead(BaseModule, metaclass=ABCMeta): - """Base class for BaseDecodeHead. - - Args: - channels (int): Channels after modules, before conv_seg. - num_classes (int): Number of classes. - dropout_ratio (float, optional): Ratio of dropout layer. Default: 0.5. - conv_cfg (dict, optional): Config of conv layers. - Default: dict(type='Conv1d'). 
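The builder functions deleted above all follow one dispatch rule: look the config's `type` up in the 3D registry first and fall back to the 2D registry (mmdet, or mmseg for losses) otherwise. A tiny sketch of that fallback with a minimal stand-in registry, independent of mmcv's actual `Registry` class; the names below are illustrative only.

```python
class Registry:
    """Minimal stand-in for mmcv's Registry: maps a type name to a class."""

    def __init__(self, name):
        self.name, self._module_dict = name, {}

    def register(self, cls):
        self._module_dict[cls.__name__] = cls
        return cls

    def build(self, cfg):
        cfg = dict(cfg)
        return self._module_dict[cfg.pop('type')](**cfg)


BACKBONES_3D, BACKBONES_2D = Registry('3d_backbones'), Registry('2d_backbones')


@BACKBONES_2D.register
class ResNet:
    def __init__(self, depth):
        self.depth = depth


@BACKBONES_3D.register
class PointNet2SASSG:
    def __init__(self, in_channels):
        self.in_channels = in_channels


def build_backbone(cfg):
    """Prefer the 3D registry, fall back to the 2D one (mirrors mmdet3d's builder)."""
    if cfg['type'] in BACKBONES_3D._module_dict:
        return BACKBONES_3D.build(cfg)
    return BACKBONES_2D.build(cfg)


print(type(build_backbone(dict(type='PointNet2SASSG', in_channels=6))).__name__)
print(type(build_backbone(dict(type='ResNet', depth=50))).__name__)
```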
- norm_cfg (dict, optional): Config of norm layers. - Default: dict(type='BN1d'). - act_cfg (dict, optional): Config of activation layers. - Default: dict(type='ReLU'). - loss_decode (dict, optional): Config of decode loss. - Default: dict(type='CrossEntropyLoss'). - ignore_index (int, optional): The label index to be ignored. - When using masked BCE loss, ignore_index should be set to None. - Default: 255. - """ - - def __init__(self, - channels, - num_classes, - dropout_ratio=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - class_weight=None, - loss_weight=1.0), - ignore_index=255, - init_cfg=None): - super(Base3DDecodeHead, self).__init__(init_cfg=init_cfg) - self.channels = channels - self.num_classes = num_classes - self.dropout_ratio = dropout_ratio - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.loss_decode = build_loss(loss_decode) - self.ignore_index = ignore_index - - self.conv_seg = nn.Conv1d(channels, num_classes, kernel_size=1) - if dropout_ratio > 0: - self.dropout = nn.Dropout(dropout_ratio) - else: - self.dropout = None - self.fp16_enabled = False - - def init_weights(self): - """Initialize weights of classification layer.""" - super().init_weights() - normal_init(self.conv_seg, mean=0, std=0.01) - - @auto_fp16() - @abstractmethod - def forward(self, inputs): - """Placeholder of forward function.""" - pass - - def forward_train(self, inputs, img_metas, pts_semantic_mask, train_cfg): - """Forward function for training. - - Args: - inputs (list[torch.Tensor]): List of multi-level point features. - img_metas (list[dict]): Meta information of each sample. - pts_semantic_mask (torch.Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - seg_logits = self.forward(inputs) - losses = self.losses(seg_logits, pts_semantic_mask) - return losses - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level point features. - img_metas (list[dict]): Meta information of each sample. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. - """ - return self.forward(inputs) - - def cls_seg(self, feat): - """Classify each points.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.conv_seg(feat) - return output - - @force_fp32(apply_to=('seg_logit', )) - def losses(self, seg_logit, seg_label): - """Compute semantic segmentation loss. - - Args: - seg_logit (torch.Tensor): Predicted per-point segmentation logits - of shape [B, num_classes, N]. - seg_label (torch.Tensor): Ground-truth segmentation label of - shape [B, N]. - """ - loss = dict() - loss['loss_sem_seg'] = self.loss_decode( - seg_logit, seg_label, ignore_index=self.ignore_index) - return loss diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/dgcnn_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/dgcnn_head.py deleted file mode 100644 index 1249b3d1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/dgcnn_head.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
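The decode-head base class above boils down to optional dropout followed by a 1x1 `Conv1d` classifier over per-point features, plus a cross-entropy loss that ignores a padding label. A compact self-contained version of that head and its loss (a sketch with fixed layers, not the configurable mmdet3d head):

```python
import torch
from torch import nn
import torch.nn.functional as F


class TinyPointSegHead(nn.Module):
    """Per-point classifier: features (B, C, N) -> logits (B, num_classes, N)."""

    def __init__(self, channels, num_classes, dropout_ratio=0.5, ignore_index=255):
        super().__init__()
        self.dropout = nn.Dropout(dropout_ratio) if dropout_ratio > 0 else nn.Identity()
        self.conv_seg = nn.Conv1d(channels, num_classes, kernel_size=1)
        self.ignore_index = ignore_index

    def forward(self, feat):
        return self.conv_seg(self.dropout(feat))

    def loss(self, seg_logit, seg_label):
        # Cross-entropy accepts (B, C, N) logits against (B, N) integer labels.
        return {'loss_sem_seg': F.cross_entropy(
            seg_logit, seg_label, ignore_index=self.ignore_index)}


head = TinyPointSegHead(channels=128, num_classes=13)
feats = torch.rand(2, 128, 4096)                  # per-point features from a backbone
labels = torch.randint(0, 13, (2, 4096))
labels[:, :100] = 255                             # e.g. padded points to ignore
print(head.loss(head(feats), labels))
```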
-from mmcv.cnn.bricks import ConvModule - -from mmdet3d.ops import DGCNNFPModule -from ..builder import HEADS -from .decode_head import Base3DDecodeHead - - -@HEADS.register_module() -class DGCNNHead(Base3DDecodeHead): - r"""DGCNN decoder head. - - Decoder head used in `DGCNN `_. - Refer to the - `reimplementation code `_. - - Args: - fp_channels (tuple[int], optional): Tuple of mlp channels in feature - propagation (FP) modules. Defaults to (1216, 512). - """ - - def __init__(self, fp_channels=(1216, 512), **kwargs): - super(DGCNNHead, self).__init__(**kwargs) - - self.FP_module = DGCNNFPModule( - mlp_channels=fp_channels, act_cfg=self.act_cfg) - - # https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_sem_seg.py#L40 - self.pre_seg_conv = ConvModule( - fp_channels[-1], - self.channels, - kernel_size=1, - bias=False, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: points for decoder. - """ - fa_points = feat_dict['fa_points'] - - return fa_points - - def forward(self, feat_dict): - """Forward pass. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Segmentation map of shape [B, num_classes, N]. - """ - fa_points = self._extract_input(feat_dict) - - fp_points = self.FP_module(fa_points) - fp_points = fp_points.transpose(1, 2).contiguous() - output = self.pre_seg_conv(fp_points) - output = self.cls_seg(output) - - return output diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/paconv_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/paconv_head.py deleted file mode 100644 index 63cc3fdb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/paconv_head.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn.bricks import ConvModule - -from ..builder import HEADS -from .pointnet2_head import PointNet2Head - - -@HEADS.register_module() -class PAConvHead(PointNet2Head): - r"""PAConv decoder head. - - Decoder head used in `PAConv `_. - Refer to the `official code `_. - - Args: - fp_channels (tuple[tuple[int]]): Tuple of mlp channels in FP modules. - fp_norm_cfg (dict): Config of norm layers used in FP modules. - Default: dict(type='BN2d'). - """ - - def __init__(self, - fp_channels=((768, 256, 256), (384, 256, 256), - (320, 256, 128), (128 + 6, 128, 128, 128)), - fp_norm_cfg=dict(type='BN2d'), - **kwargs): - super(PAConvHead, self).__init__(fp_channels, fp_norm_cfg, **kwargs) - - # https://github.com/CVMI-Lab/PAConv/blob/main/scene_seg/model/pointnet2/pointnet2_paconv_seg.py#L53 - # PointNet++'s decoder conv has bias while PAConv's doesn't have - # so we need to rebuild it here - self.pre_seg_conv = ConvModule( - fp_channels[-1][-1], - self.channels, - kernel_size=1, - bias=False, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, feat_dict): - """Forward pass. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Segmentation map of shape [B, num_classes, N]. 
- """ - sa_xyz, sa_features = self._extract_input(feat_dict) - - # PointNet++ doesn't use the first level of `sa_features` as input - # while PAConv inputs it through skip-connection - fp_feature = sa_features[-1] - - for i in range(self.num_fp): - # consume the points in a bottom-up manner - fp_feature = self.FP_modules[i](sa_xyz[-(i + 2)], sa_xyz[-(i + 1)], - sa_features[-(i + 2)], fp_feature) - - output = self.pre_seg_conv(fp_feature) - output = self.cls_seg(output) - - return output diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/pointnet2_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/pointnet2_head.py deleted file mode 100644 index 28b677e0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/decode_heads/pointnet2_head.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn.bricks import ConvModule -from torch import nn as nn - -from mmdet3d.ops import PointFPModule -from ..builder import HEADS -from .decode_head import Base3DDecodeHead - - -@HEADS.register_module() -class PointNet2Head(Base3DDecodeHead): - r"""PointNet2 decoder head. - - Decoder head used in `PointNet++ `_. - Refer to the `official code `_. - - Args: - fp_channels (tuple[tuple[int]]): Tuple of mlp channels in FP modules. - fp_norm_cfg (dict): Config of norm layers used in FP modules. - Default: dict(type='BN2d'). - """ - - def __init__(self, - fp_channels=((768, 256, 256), (384, 256, 256), - (320, 256, 128), (128, 128, 128, 128)), - fp_norm_cfg=dict(type='BN2d'), - **kwargs): - super(PointNet2Head, self).__init__(**kwargs) - - self.num_fp = len(fp_channels) - self.FP_modules = nn.ModuleList() - for cur_fp_mlps in fp_channels: - self.FP_modules.append( - PointFPModule(mlp_channels=cur_fp_mlps, norm_cfg=fp_norm_cfg)) - - # https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_sem_seg.py#L40 - self.pre_seg_conv = ConvModule( - fp_channels[-1][-1], - self.channels, - kernel_size=1, - bias=True, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - list[torch.Tensor]: Coordinates of multiple levels of points. - list[torch.Tensor]: Features of multiple levels of points. - """ - sa_xyz = feat_dict['sa_xyz'] - sa_features = feat_dict['sa_features'] - assert len(sa_xyz) == len(sa_features) - - return sa_xyz, sa_features - - def forward(self, feat_dict): - """Forward pass. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Segmentation map of shape [B, num_classes, N]. - """ - sa_xyz, sa_features = self._extract_input(feat_dict) - - # https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_sem_seg.py#L24 - sa_features[0] = None - - fp_feature = sa_features[-1] - - for i in range(self.num_fp): - # consume the points in a bottom-up manner - fp_feature = self.FP_modules[i](sa_xyz[-(i + 2)], sa_xyz[-(i + 1)], - sa_features[-(i + 2)], fp_feature) - output = self.pre_seg_conv(fp_feature) - output = self.cls_seg(output) - - return output diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/__init__.py deleted file mode 100644 index 25008c95..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .anchor3d_head import Anchor3DHead -from .anchor_free_mono3d_head import AnchorFreeMono3DHead -from .base_conv_bbox_head import BaseConvBboxHead -from .base_mono3d_dense_head import BaseMono3DDenseHead -from .centerpoint_head import CenterHead -from .fcos_mono3d_head import FCOSMono3DHead -from .free_anchor3d_head import FreeAnchor3DHead -from .groupfree3d_head import GroupFree3DHead -from .monoflex_head import MonoFlexHead -from .parta2_rpn_head import PartA2RPNHead -from .pgd_head import PGDHead -from .point_rpn_head import PointRPNHead -from .shape_aware_head import ShapeAwareHead -from .smoke_mono3d_head import SMOKEMono3DHead -from .ssd_3d_head import SSD3DHead -from .vote_head import VoteHead - -__all__ = [ - 'Anchor3DHead', 'FreeAnchor3DHead', 'PartA2RPNHead', 'VoteHead', - 'SSD3DHead', 'BaseConvBboxHead', 'CenterHead', 'ShapeAwareHead', - 'BaseMono3DDenseHead', 'AnchorFreeMono3DHead', 'FCOSMono3DHead', - 'GroupFree3DHead', 'PointRPNHead', 'SMOKEMono3DHead', 'PGDHead', - 'MonoFlexHead' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/anchor3d_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/anchor3d_head.py deleted file mode 100644 index b7472645..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/anchor3d_head.py +++ /dev/null @@ -1,516 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn - -from mmdet3d.core import (PseudoSampler, box3d_multiclass_nms, limit_period, - xywhr2xyxyr) -from mmdet.core import (build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, multi_apply) -from ..builder import HEADS, build_loss -from .train_mixins import AnchorTrainMixin - - -@HEADS.register_module() -class Anchor3DHead(BaseModule, AnchorTrainMixin): - """Anchor head for SECOND/PointPillars/MVXNet/PartA2. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - train_cfg (dict): Train configs. - test_cfg (dict): Test configs. - feat_channels (int): Number of channels of the feature map. - use_direction_classifier (bool): Whether to add a direction classifier. - anchor_generator(dict): Config dict of anchor generator. - assigner_per_size (bool): Whether to do assignment for each separate - anchor size. - assign_per_class (bool): Whether to do assignment for each class. - diff_rad_by_sin (bool): Whether to change the difference into sin - difference for box regression loss. - dir_offset (float | int): The offset of BEV rotation angles. - (TODO: may be moved into box coder) - dir_limit_offset (float | int): The limited range of BEV - rotation angles. (TODO: may be moved into box coder) - bbox_coder (dict): Config dict of box coders. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - loss_dir (dict): Config of direction classifier loss. 
- """ - - def __init__(self, - num_classes, - in_channels, - train_cfg, - test_cfg, - feat_channels=256, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - range=[0, -39.68, -1.78, 69.12, 39.68, -1.78], - strides=[2], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - custom_values=[], - reshape_out=False), - assigner_per_size=False, - assign_per_class=False, - diff_rad_by_sin=True, - dir_offset=-np.pi / 2, - dir_limit_offset=0, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict(type='CrossEntropyLoss', loss_weight=0.2), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.diff_rad_by_sin = diff_rad_by_sin - self.use_direction_classifier = use_direction_classifier - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.assigner_per_size = assigner_per_size - self.assign_per_class = assign_per_class - self.dir_offset = dir_offset - self.dir_limit_offset = dir_limit_offset - import warnings - warnings.warn( - 'dir_offset and dir_limit_offset will be depressed and be ' - 'incorporated into box coder in the future') - self.fp16_enabled = False - - # build anchor generator - self.anchor_generator = build_prior_generator(anchor_generator) - # In 3D detection, the anchor stride is connected with anchor size - self.num_anchors = self.anchor_generator.num_base_anchors - # build box coder - self.bbox_coder = build_bbox_coder(bbox_coder) - self.box_code_size = self.bbox_coder.code_size - - # build loss function - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - self.sampling = loss_cls['type'] not in ['FocalLoss', 'GHMC'] - if not self.use_sigmoid_cls: - self.num_classes += 1 - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_dir = build_loss(loss_dir) - self.fp16_enabled = False - - self._init_layers() - self._init_assigner_sampler() - - if init_cfg is None: - self.init_cfg = dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', name='conv_cls', std=0.01, bias_prob=0.01)) - - def _init_assigner_sampler(self): - """Initialize the target assigner and sampler of the head.""" - if self.train_cfg is None: - return - - if self.sampling: - self.bbox_sampler = build_sampler(self.train_cfg.sampler) - else: - self.bbox_sampler = PseudoSampler() - if isinstance(self.train_cfg.assigner, dict): - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - elif isinstance(self.train_cfg.assigner, list): - self.bbox_assigner = [ - build_assigner(res) for res in self.train_cfg.assigner - ] - - def _init_layers(self): - """Initialize neural network layers of the head.""" - self.cls_out_channels = self.num_anchors * self.num_classes - self.conv_cls = nn.Conv2d(self.feat_channels, self.cls_out_channels, 1) - self.conv_reg = nn.Conv2d(self.feat_channels, - self.num_anchors * self.box_code_size, 1) - if self.use_direction_classifier: - self.conv_dir_cls = nn.Conv2d(self.feat_channels, - self.num_anchors * 2, 1) - - def forward_single(self, x): - """Forward function on a single-scale feature map. - - Args: - x (torch.Tensor): Input features. - - Returns: - tuple[torch.Tensor]: Contain score of each class, bbox - regression and direction classification predictions. 
- """ - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - dir_cls_preds = None - if self.use_direction_classifier: - dir_cls_preds = self.conv_dir_cls(x) - return cls_score, bbox_pred, dir_cls_preds - - def forward(self, feats): - """Forward pass. - - Args: - feats (list[torch.Tensor]): Multi-level features, e.g., - features produced by FPN. - - Returns: - tuple[list[torch.Tensor]]: Multi-level class score, bbox - and direction predictions. - """ - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, input_metas, device='cuda'): - """Get anchors according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - input_metas (list[dict]): contain pcd and img's meta info. - device (str): device of current module. - - Returns: - list[list[torch.Tensor]]: Anchors of each image, valid flags - of each image. - """ - num_imgs = len(input_metas) - # since feature map sizes of all images are the same, we only compute - # anchors for one time - multi_level_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device=device) - anchor_list = [multi_level_anchors for _ in range(num_imgs)] - return anchor_list - - def loss_single(self, cls_score, bbox_pred, dir_cls_preds, labels, - label_weights, bbox_targets, bbox_weights, dir_targets, - dir_weights, num_total_samples): - """Calculate loss of Single-level results. - - Args: - cls_score (torch.Tensor): Class score in single-level. - bbox_pred (torch.Tensor): Bbox prediction in single-level. - dir_cls_preds (torch.Tensor): Predictions of direction class - in single-level. - labels (torch.Tensor): Labels of class. - label_weights (torch.Tensor): Weights of class loss. - bbox_targets (torch.Tensor): Targets of bbox predictions. - bbox_weights (torch.Tensor): Weights of bbox loss. - dir_targets (torch.Tensor): Targets of direction predictions. - dir_weights (torch.Tensor): Weights of direction loss. - num_total_samples (int): The number of valid samples. - - Returns: - tuple[torch.Tensor]: Losses of class, bbox - and direction, respectively. 
- """ - # classification loss - if num_total_samples is None: - num_total_samples = int(cls_score.shape[0]) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, 1).reshape(-1, self.num_classes) - assert labels.max().item() <= self.num_classes - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - - # regression loss - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, self.box_code_size) - bbox_targets = bbox_targets.reshape(-1, self.box_code_size) - bbox_weights = bbox_weights.reshape(-1, self.box_code_size) - - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero( - as_tuple=False).reshape(-1) - num_pos = len(pos_inds) - - pos_bbox_pred = bbox_pred[pos_inds] - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_weights = bbox_weights[pos_inds] - - # dir loss - if self.use_direction_classifier: - dir_cls_preds = dir_cls_preds.permute(0, 2, 3, 1).reshape(-1, 2) - dir_targets = dir_targets.reshape(-1) - dir_weights = dir_weights.reshape(-1) - pos_dir_cls_preds = dir_cls_preds[pos_inds] - pos_dir_targets = dir_targets[pos_inds] - pos_dir_weights = dir_weights[pos_inds] - - if num_pos > 0: - code_weight = self.train_cfg.get('code_weight', None) - if code_weight: - pos_bbox_weights = pos_bbox_weights * bbox_weights.new_tensor( - code_weight) - if self.diff_rad_by_sin: - pos_bbox_pred, pos_bbox_targets = self.add_sin_difference( - pos_bbox_pred, pos_bbox_targets) - loss_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_targets, - pos_bbox_weights, - avg_factor=num_total_samples) - - # direction classification loss - loss_dir = None - if self.use_direction_classifier: - loss_dir = self.loss_dir( - pos_dir_cls_preds, - pos_dir_targets, - pos_dir_weights, - avg_factor=num_total_samples) - else: - loss_bbox = pos_bbox_pred.sum() - if self.use_direction_classifier: - loss_dir = pos_dir_cls_preds.sum() - - return loss_cls, loss_bbox, loss_dir - - @staticmethod - def add_sin_difference(boxes1, boxes2): - """Convert the rotation difference to difference in sine function. - - Args: - boxes1 (torch.Tensor): Original Boxes in shape (NxC), where C>=7 - and the 7th dimension is rotation dimension. - boxes2 (torch.Tensor): Target boxes in shape (NxC), where C>=7 and - the 7th dimension is rotation dimension. - - Returns: - tuple[torch.Tensor]: ``boxes1`` and ``boxes2`` whose 7th - dimensions are changed. - """ - rad_pred_encoding = torch.sin(boxes1[..., 6:7]) * torch.cos( - boxes2[..., 6:7]) - rad_tg_encoding = torch.cos(boxes1[..., 6:7]) * torch.sin(boxes2[..., - 6:7]) - boxes1 = torch.cat( - [boxes1[..., :6], rad_pred_encoding, boxes1[..., 7:]], dim=-1) - boxes2 = torch.cat([boxes2[..., :6], rad_tg_encoding, boxes2[..., 7:]], - dim=-1) - return boxes1, boxes2 - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - gt_bboxes, - gt_labels, - input_metas, - gt_bboxes_ignore=None): - """Calculate losses. - - Args: - cls_scores (list[torch.Tensor]): Multi-level class scores. - bbox_preds (list[torch.Tensor]): Multi-level bbox predictions. - dir_cls_preds (list[torch.Tensor]): Multi-level direction - class predictions. - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): Gt bboxes - of each sample. - gt_labels (list[torch.Tensor]): Gt labels of each sample. - input_metas (list[dict]): Contain pcd and img's meta info. 
- gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding boxes to ignore. - - Returns: - dict[str, list[torch.Tensor]]: Classification, bbox, and - direction losses of each level. - - - loss_cls (list[torch.Tensor]): Classification losses. - - loss_bbox (list[torch.Tensor]): Box regression losses. - - loss_dir (list[torch.Tensor]): Direction classification - losses. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - device = cls_scores[0].device - anchor_list = self.get_anchors( - featmap_sizes, input_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.anchor_target_3d( - anchor_list, - gt_bboxes, - input_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - num_classes=self.num_classes, - label_channels=label_channels, - sampling=self.sampling) - - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - dir_targets_list, dir_weights_list, num_total_pos, - num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # num_total_samples = None - losses_cls, losses_bbox, losses_dir = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - dir_cls_preds, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - dir_targets_list, - dir_weights_list, - num_total_samples=num_total_samples) - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_dir=losses_dir) - - def get_bboxes(self, - cls_scores, - bbox_preds, - dir_cls_preds, - input_metas, - cfg=None, - rescale=False): - """Get bboxes of anchor head. - - Args: - cls_scores (list[torch.Tensor]): Multi-level class scores. - bbox_preds (list[torch.Tensor]): Multi-level bbox predictions. - dir_cls_preds (list[torch.Tensor]): Multi-level direction - class predictions. - input_metas (list[dict]): Contain pcd and img's meta info. - cfg (:obj:`ConfigDict`): Training or testing config. - rescale (list[torch.Tensor]): Whether th rescale bbox. - - Returns: - list[tuple]: Prediction resultes of batches. - """ - assert len(cls_scores) == len(bbox_preds) - assert len(cls_scores) == len(dir_cls_preds) - num_levels = len(cls_scores) - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - device = cls_scores[0].device - mlvl_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device=device) - mlvl_anchors = [ - anchor.reshape(-1, self.box_code_size) for anchor in mlvl_anchors - ] - - result_list = [] - for img_id in range(len(input_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - dir_cls_pred_list = [ - dir_cls_preds[i][img_id].detach() for i in range(num_levels) - ] - - input_meta = input_metas[img_id] - proposals = self.get_bboxes_single(cls_score_list, bbox_pred_list, - dir_cls_pred_list, mlvl_anchors, - input_meta, cfg, rescale) - result_list.append(proposals) - return result_list - - def get_bboxes_single(self, - cls_scores, - bbox_preds, - dir_cls_preds, - mlvl_anchors, - input_meta, - cfg=None, - rescale=False): - """Get bboxes of single branch. - - Args: - cls_scores (torch.Tensor): Class score in single batch. - bbox_preds (torch.Tensor): Bbox prediction in single batch. 
- dir_cls_preds (torch.Tensor): Predictions of direction class - in single batch. - mlvl_anchors (List[torch.Tensor]): Multi-level anchors - in single batch. - input_meta (list[dict]): Contain pcd and img's meta info. - cfg (:obj:`ConfigDict`): Training or testing config. - rescale (list[torch.Tensor]): whether th rescale bbox. - - Returns: - tuple: Contain predictions of single batch. - - - bboxes (:obj:`BaseInstance3DBoxes`): Predicted 3d bboxes. - - scores (torch.Tensor): Class score of each bbox. - - labels (torch.Tensor): Label of each bbox. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_dir_scores = [] - for cls_score, bbox_pred, dir_cls_pred, anchors in zip( - cls_scores, bbox_preds, dir_cls_preds, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - assert cls_score.size()[-2:] == dir_cls_pred.size()[-2:] - dir_cls_pred = dir_cls_pred.permute(1, 2, 0).reshape(-1, 2) - dir_cls_score = torch.max(dir_cls_pred, dim=-1)[1] - - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.num_classes) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, - 0).reshape(-1, self.box_code_size) - - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - dir_cls_score = dir_cls_score[topk_inds] - - bboxes = self.bbox_coder.decode(anchors, bbox_pred) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_dir_scores.append(dir_cls_score) - - mlvl_bboxes = torch.cat(mlvl_bboxes) - mlvl_bboxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - mlvl_bboxes, box_dim=self.box_code_size).bev) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_dir_scores = torch.cat(mlvl_dir_scores) - - if self.use_sigmoid_cls: - # Add a dummy background class to the front when using sigmoid - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - - score_thr = cfg.get('score_thr', 0) - results = box3d_multiclass_nms(mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_scores, score_thr, cfg.max_num, - cfg, mlvl_dir_scores) - bboxes, scores, labels, dir_scores = results - if bboxes.shape[0] > 0: - dir_rot = limit_period(bboxes[..., 6] - self.dir_offset, - self.dir_limit_offset, np.pi) - bboxes[..., 6] = ( - dir_rot + self.dir_offset + - np.pi * dir_scores.to(bboxes.dtype)) - bboxes = input_meta['box_type_3d'](bboxes, box_dim=self.box_code_size) - return bboxes, scores, labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/anchor_free_mono3d_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/anchor_free_mono3d_head.py deleted file mode 100644 index e9b27d0b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/anchor_free_mono3d_head.py +++ /dev/null @@ -1,534 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
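Before NMS, `get_bboxes_single` above keeps only the `nms_pre` highest-scoring anchors per level, indexing anchors, box deltas and direction scores with the same top-k indices. The same filtering step isolated with random tensors (a sketch, using made-up sizes):

```python
import torch

num_anchors, num_classes, box_code_size, nms_pre = 1000, 3, 7, 100

scores = torch.rand(num_anchors, num_classes).sigmoid()
anchors = torch.rand(num_anchors, box_code_size)
bbox_pred = torch.rand(num_anchors, box_code_size)
dir_cls_score = torch.randint(0, 2, (num_anchors,))

if nms_pre > 0 and scores.shape[0] > nms_pre:
    # Rank anchors by their best class score and keep the top `nms_pre`.
    max_scores, _ = scores.max(dim=1)
    _, topk_inds = max_scores.topk(nms_pre)
    anchors = anchors[topk_inds, :]
    bbox_pred = bbox_pred[topk_inds, :]
    scores = scores[topk_inds, :]
    dir_cls_score = dir_cls_score[topk_inds]

print(anchors.shape, bbox_pred.shape, scores.shape, dir_cls_score.shape)
# torch.Size([100, 7]) torch.Size([100, 7]) torch.Size([100, 3]) torch.Size([100])
```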
-from abc import abstractmethod - -import torch -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init -from mmcv.runner import force_fp32 -from torch import nn as nn - -from mmdet.core import multi_apply -from ..builder import HEADS, build_loss -from .base_mono3d_dense_head import BaseMono3DDenseHead - - -@HEADS.register_module() -class AnchorFreeMono3DHead(BaseMono3DDenseHead): - """Anchor-free head for monocular 3D object detection. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int, optional): Number of hidden channels. - Used in child classes. Defaults to 256. - stacked_convs (int, optional): Number of stacking convs of the head. - strides (tuple, optional): Downsample factor of each feature map. - dcn_on_last_conv (bool, optional): If true, use dcn in the last - layer of towers. Default: False. - conv_bias (bool | str, optional): If specified as `auto`, it will be - decided by the norm_cfg. Bias of conv will be set as True - if `norm_cfg` is None, otherwise False. Default: 'auto'. - background_label (int, optional): Label ID of background, - set as 0 for RPN and num_classes for other heads. - It will automatically set as `num_classes` if None is given. - use_direction_classifier (bool, optional): - Whether to add a direction classifier. - diff_rad_by_sin (bool, optional): Whether to change the difference - into sin difference for box regression loss. Defaults to True. - dir_offset (float, optional): Parameter used in direction - classification. Defaults to 0. - dir_limit_offset (float, optional): Parameter used in direction - classification. Defaults to 0. - loss_cls (dict, optional): Config of classification loss. - loss_bbox (dict, optional): Config of localization loss. - loss_dir (dict, optional): Config of direction classifier loss. - loss_attr (dict, optional): Config of attribute classifier loss, - which is only active when `pred_attrs=True`. - bbox_code_size (int, optional): Dimensions of predicted bounding boxes. - pred_attrs (bool, optional): Whether to predict attributes. - Defaults to False. - num_attrs (int, optional): The number of attributes to be predicted. - Default: 9. - pred_velo (bool, optional): Whether to predict velocity. - Defaults to False. - pred_bbox2d (bool, optional): Whether to predict 2D boxes. - Defaults to False. - group_reg_dims (tuple[int], optional): The dimension of each regression - target group. Default: (2, 1, 3, 1, 2). - cls_branch (tuple[int], optional): Channels for classification branch. - Default: (128, 64). - reg_branch (tuple[tuple], optional): Channels for regression branch. - Default: ( - (128, 64), # offset - (128, 64), # depth - (64, ), # size - (64, ), # rot - () # velo - ), - dir_branch (tuple[int], optional): Channels for direction - classification branch. Default: (64, ). - attr_branch (tuple[int], optional): Channels for classification branch. - Default: (64, ). - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - train_cfg (dict, optional): Training config of anchor head. - test_cfg (dict, optional): Testing config of anchor head. 
- """ # noqa: W605 - - _version = 1 - - def __init__( - self, - num_classes, - in_channels, - feat_channels=256, - stacked_convs=4, - strides=(4, 8, 16, 32, 64), - dcn_on_last_conv=False, - conv_bias='auto', - background_label=None, - use_direction_classifier=True, - diff_rad_by_sin=True, - dir_offset=0, - dir_limit_offset=0, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_attr=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - bbox_code_size=9, # For nuscenes - pred_attrs=False, - num_attrs=9, # For nuscenes - pred_velo=False, - pred_bbox2d=False, - group_reg_dims=(2, 1, 3, 1, 2), # offset, depth, size, rot, velo, - cls_branch=(128, 64), - reg_branch=( - (128, 64), # offset - (128, 64), # depth - (64, ), # size - (64, ), # rot - () # velo - ), - dir_branch=(64, ), - attr_branch=(64, ), - conv_cfg=None, - norm_cfg=None, - train_cfg=None, - test_cfg=None, - init_cfg=None): - super(AnchorFreeMono3DHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.cls_out_channels = num_classes - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.dcn_on_last_conv = dcn_on_last_conv - assert conv_bias == 'auto' or isinstance(conv_bias, bool) - self.conv_bias = conv_bias - self.use_direction_classifier = use_direction_classifier - self.diff_rad_by_sin = diff_rad_by_sin - self.dir_offset = dir_offset - self.dir_limit_offset = dir_limit_offset - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_dir = build_loss(loss_dir) - self.bbox_code_size = bbox_code_size - self.group_reg_dims = list(group_reg_dims) - self.cls_branch = cls_branch - self.reg_branch = reg_branch - assert len(reg_branch) == len(group_reg_dims), 'The number of '\ - 'element in reg_branch and group_reg_dims should be the same.' 
- self.pred_velo = pred_velo - self.pred_bbox2d = pred_bbox2d - self.out_channels = [] - for reg_branch_channels in reg_branch: - if len(reg_branch_channels) > 0: - self.out_channels.append(reg_branch_channels[-1]) - else: - self.out_channels.append(-1) - self.dir_branch = dir_branch - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - self.background_label = ( - num_classes if background_label is None else background_label) - # background_label should be either 0 or num_classes - assert (self.background_label == 0 - or self.background_label == num_classes) - self.pred_attrs = pred_attrs - self.attr_background_label = -1 - self.num_attrs = num_attrs - if self.pred_attrs: - self.attr_background_label = num_attrs - self.loss_attr = build_loss(loss_attr) - self.attr_branch = attr_branch - - self._init_layers() - - def _init_layers(self): - """Initialize layers of the head.""" - self._init_cls_convs() - self._init_reg_convs() - self._init_predictor() - - def _init_cls_convs(self): - """Initialize classification conv layers of the head.""" - self.cls_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_reg_convs(self): - """Initialize bbox regression conv layers of the head.""" - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_branch(self, conv_channels=(64), conv_strides=(1)): - """Initialize conv layers as a prediction branch.""" - conv_before_pred = nn.ModuleList() - if isinstance(conv_channels, int): - conv_channels = [self.feat_channels] + [conv_channels] - conv_strides = [conv_strides] - else: - conv_channels = [self.feat_channels] + list(conv_channels) - conv_strides = list(conv_strides) - for i in range(len(conv_strides)): - conv_before_pred.append( - ConvModule( - conv_channels[i], - conv_channels[i + 1], - 3, - stride=conv_strides[i], - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - return conv_before_pred - - def _init_predictor(self): - """Initialize predictor layers of the head.""" - self.conv_cls_prev = self._init_branch( - conv_channels=self.cls_branch, - conv_strides=(1, ) * len(self.cls_branch)) - self.conv_cls = nn.Conv2d(self.cls_branch[-1], self.cls_out_channels, - 1) - self.conv_reg_prevs = nn.ModuleList() - self.conv_regs = nn.ModuleList() - for i in range(len(self.group_reg_dims)): - reg_dim = self.group_reg_dims[i] - reg_branch_channels = self.reg_branch[i] - out_channel = self.out_channels[i] - if len(reg_branch_channels) > 0: - self.conv_reg_prevs.append( - self._init_branch( - conv_channels=reg_branch_channels, - conv_strides=(1, ) * len(reg_branch_channels))) - self.conv_regs.append(nn.Conv2d(out_channel, reg_dim, 1)) - else: - self.conv_reg_prevs.append(None) - 
self.conv_regs.append( - nn.Conv2d(self.feat_channels, reg_dim, 1)) - if self.use_direction_classifier: - self.conv_dir_cls_prev = self._init_branch( - conv_channels=self.dir_branch, - conv_strides=(1, ) * len(self.dir_branch)) - self.conv_dir_cls = nn.Conv2d(self.dir_branch[-1], 2, 1) - if self.pred_attrs: - self.conv_attr_prev = self._init_branch( - conv_channels=self.attr_branch, - conv_strides=(1, ) * len(self.attr_branch)) - self.conv_attr = nn.Conv2d(self.attr_branch[-1], self.num_attrs, 1) - - def init_weights(self): - """Initialize weights of the head. - - We currently still use the customized defined init_weights because the - default init of DCN triggered by the init_cfg will init - conv_offset.weight, which mistakenly affects the training stability. - """ - for modules in [self.cls_convs, self.reg_convs, self.conv_cls_prev]: - for m in modules: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - for conv_reg_prev in self.conv_reg_prevs: - if conv_reg_prev is None: - continue - for m in conv_reg_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - if self.use_direction_classifier: - for m in self.conv_dir_cls_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - if self.pred_attrs: - for m in self.conv_attr_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - bias_cls = bias_init_with_prob(0.01) - normal_init(self.conv_cls, std=0.01, bias=bias_cls) - for conv_reg in self.conv_regs: - normal_init(conv_reg, std=0.01) - if self.use_direction_classifier: - normal_init(self.conv_dir_cls, std=0.01, bias=bias_cls) - if self.pred_attrs: - normal_init(self.conv_attr, std=0.01, bias=bias_cls) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually contain classification scores, bbox predictions, - and direction class predictions. - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - attr_preds (list[Tensor]): Attribute scores for each scale - level, each is a 4D-tensor, the channel number is - num_points * num_attrs. - """ - return multi_apply(self.forward_single, feats)[:5] - - def forward_single(self, x): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - - Returns: - tuple: Scores for each class, bbox predictions, direction class, - and attributes, features after classification and regression - conv layers, some models needs these features like FCOS. 
- """ - cls_feat = x - reg_feat = x - - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - # clone the cls_feat for reusing the feature map afterwards - clone_cls_feat = cls_feat.clone() - for conv_cls_prev_layer in self.conv_cls_prev: - clone_cls_feat = conv_cls_prev_layer(clone_cls_feat) - cls_score = self.conv_cls(clone_cls_feat) - - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = [] - for i in range(len(self.group_reg_dims)): - # clone the reg_feat for reusing the feature map afterwards - clone_reg_feat = reg_feat.clone() - if len(self.reg_branch[i]) > 0: - for conv_reg_prev_layer in self.conv_reg_prevs[i]: - clone_reg_feat = conv_reg_prev_layer(clone_reg_feat) - bbox_pred.append(self.conv_regs[i](clone_reg_feat)) - bbox_pred = torch.cat(bbox_pred, dim=1) - - dir_cls_pred = None - if self.use_direction_classifier: - clone_reg_feat = reg_feat.clone() - for conv_dir_cls_prev_layer in self.conv_dir_cls_prev: - clone_reg_feat = conv_dir_cls_prev_layer(clone_reg_feat) - dir_cls_pred = self.conv_dir_cls(clone_reg_feat) - - attr_pred = None - if self.pred_attrs: - # clone the cls_feat for reusing the feature map afterwards - clone_cls_feat = cls_feat.clone() - for conv_attr_prev_layer in self.conv_attr_prev: - clone_cls_feat = conv_attr_prev_layer(clone_cls_feat) - attr_pred = self.conv_attr(clone_cls_feat) - - return cls_score, bbox_pred, dir_cls_pred, attr_pred, cls_feat, \ - reg_feat - - @abstractmethod - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - attr_preds, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - attr_preds (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_attrs. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_3d (list[Tensor]): 3D Ground truth bboxes for each - image with shape (num_gts, bbox_code_size). - gt_labels_3d (list[Tensor]): 3D class indices of each box. - centers2d (list[Tensor]): Projected 3D centers onto 2D images. - depths (list[Tensor]): Depth of projected centers on 2D images. - attr_labels (list[Tensor], optional): Attribute indices - corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - """ - - raise NotImplementedError - - @abstractmethod - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - dir_cls_preds, - attr_preds, - img_metas, - cfg=None, - rescale=None): - """Transform network output for a batch into bbox predictions. 
- - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * bbox_code_size, H, W) - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - attr_preds (list[Tensor]): Attribute scores for each scale level - Has shape (N, num_points * num_attrs, H, W) - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space - """ - - raise NotImplementedError - - @abstractmethod - def get_targets(self, points, gt_bboxes_list, gt_labels_list, - gt_bboxes_3d_list, gt_labels_3d_list, centers2d_list, - depths_list, attr_labels_list): - """Compute regression, classification and centerss targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - gt_bboxes_3d_list (list[Tensor]): 3D Ground truth bboxes of each - image, each has shape (num_gt, bbox_code_size). - gt_labels_3d_list (list[Tensor]): 3D Ground truth labels of each - box, each has shape (num_gt,). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - each has shape (num_gt, 2). - depths_list (list[Tensor]): Depth of projected 3D centers onto 2D - image, each has shape (num_gt, 1). - attr_labels_list (list[Tensor]): Attribute labels of each box, - each has shape (num_gt,). - """ - raise NotImplementedError - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points of a single scale level.""" - h, w = featmap_size - x_range = torch.arange(w, dtype=dtype, device=device) - y_range = torch.arange(h, dtype=dtype, device=device) - y, x = torch.meshgrid(y_range, x_range) - if flatten: - y = y.flatten() - x = x.flatten() - return y, x - - def get_points(self, featmap_sizes, dtype, device, flatten=False): - """Get points according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - dtype (torch.dtype): Type of points. - device (torch.device): Device of points. - - Returns: - tuple: points of each image. - """ - mlvl_points = [] - for i in range(len(featmap_sizes)): - mlvl_points.append( - self._get_points_single(featmap_sizes[i], self.strides[i], - dtype, device, flatten)) - return mlvl_points diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/base_conv_bbox_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/base_conv_bbox_head.py deleted file mode 100644 index ec5eaa61..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/base_conv_bbox_head.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule -from mmcv.cnn.bricks import build_conv_layer -from mmcv.runner import BaseModule -from torch import nn as nn - -from ..builder import HEADS - - -@HEADS.register_module() -class BaseConvBboxHead(BaseModule): - r"""More general bbox head, with shared conv layers and two optional - separated branches. - - .. 
code-block:: none - - /-> cls convs -> cls_score - shared convs - \-> reg convs -> bbox_pred - """ - - def __init__(self, - in_channels=0, - shared_conv_channels=(), - cls_conv_channels=(), - num_cls_out_channels=0, - reg_conv_channels=(), - num_reg_out_channels=0, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - bias='auto', - init_cfg=None, - *args, - **kwargs): - super(BaseConvBboxHead, self).__init__( - init_cfg=init_cfg, *args, **kwargs) - assert in_channels > 0 - assert num_cls_out_channels > 0 - assert num_reg_out_channels > 0 - self.in_channels = in_channels - self.shared_conv_channels = shared_conv_channels - self.cls_conv_channels = cls_conv_channels - self.num_cls_out_channels = num_cls_out_channels - self.reg_conv_channels = reg_conv_channels - self.num_reg_out_channels = num_reg_out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.bias = bias - - # add shared convs - if len(self.shared_conv_channels) > 0: - self.shared_convs = self._add_conv_branch( - self.in_channels, self.shared_conv_channels) - out_channels = self.shared_conv_channels[-1] - else: - out_channels = self.in_channels - - # add cls specific branch - prev_channel = out_channels - if len(self.cls_conv_channels) > 0: - self.cls_convs = self._add_conv_branch(prev_channel, - self.cls_conv_channels) - prev_channel = self.cls_conv_channels[-1] - - self.conv_cls = build_conv_layer( - conv_cfg, - in_channels=prev_channel, - out_channels=num_cls_out_channels, - kernel_size=1) - # add reg specific branch - prev_channel = out_channels - if len(self.reg_conv_channels) > 0: - self.reg_convs = self._add_conv_branch(prev_channel, - self.reg_conv_channels) - prev_channel = self.reg_conv_channels[-1] - - self.conv_reg = build_conv_layer( - conv_cfg, - in_channels=prev_channel, - out_channels=num_reg_out_channels, - kernel_size=1) - - def _add_conv_branch(self, in_channels, conv_channels): - """Add shared or separable branch.""" - conv_spec = [in_channels] + list(conv_channels) - # add branch specific conv layers - conv_layers = nn.Sequential() - for i in range(len(conv_spec) - 1): - conv_layers.add_module( - f'layer{i}', - ConvModule( - conv_spec[i], - conv_spec[i + 1], - kernel_size=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=self.bias, - inplace=True)) - return conv_layers - - def forward(self, feats): - """Forward. - - Args: - feats (Tensor): Input features - - Returns: - Tensor: Class scores predictions - Tensor: Regression predictions - """ - # shared part - if len(self.shared_conv_channels) > 0: - x = self.shared_convs(feats) - - # separate branches - x_cls = x - x_reg = x - - if len(self.cls_conv_channels) > 0: - x_cls = self.cls_convs(x_cls) - cls_score = self.conv_cls(x_cls) - - if len(self.reg_conv_channels) > 0: - x_reg = self.reg_convs(x_reg) - bbox_pred = self.conv_reg(x_reg) - - return cls_score, bbox_pred diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/base_mono3d_dense_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/base_mono3d_dense_head.py deleted file mode 100644 index 24444730..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/base_mono3d_dense_head.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
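For reference, a rough standalone sketch of the shared-conv plus two-branch layout of the `BaseConvBboxHead` removed above, using plain `Conv1d`/`BatchNorm1d` in place of `ConvModule`; the channel sizes are illustrative, not taken from a real config:

```python
import torch
from torch import nn

in_channels, shared, num_cls_out, num_reg_out = 128, 128, 18, 77  # illustrative sizes
shared_convs = nn.Sequential(nn.Conv1d(in_channels, shared, 1),
                             nn.BatchNorm1d(shared), nn.ReLU(inplace=True))
conv_cls = nn.Conv1d(shared, num_cls_out, 1)   # -> cls_score
conv_reg = nn.Conv1d(shared, num_reg_out, 1)   # -> bbox_pred

feats = torch.randn(2, in_channels, 256)       # (batch, channels, num proposals)
x = shared_convs(feats)
print(conv_cls(x).shape, conv_reg(x).shape)    # [2, 18, 256] [2, 77, 256]
```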
-from abc import ABCMeta, abstractmethod - -from mmcv.runner import BaseModule - - -class BaseMono3DDenseHead(BaseModule, metaclass=ABCMeta): - """Base class for Monocular 3D DenseHeads.""" - - def __init__(self, init_cfg=None): - super(BaseMono3DDenseHead, self).__init__(init_cfg=init_cfg) - - @abstractmethod - def loss(self, **kwargs): - """Compute losses of the head.""" - pass - - @abstractmethod - def get_bboxes(self, **kwargs): - """Transform network output for a batch into bbox predictions.""" - pass - - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_3d=None, - gt_labels_3d=None, - centers2d=None, - depths=None, - attr_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (list[Tensor]): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_3d (list[Tensor]): 3D ground truth bboxes of the image, - shape (num_gts, self.bbox_code_size). - gt_labels_3d (list[Tensor]): 3D ground truth labels of each box, - shape (num_gts,). - centers2d (list[Tensor]): Projected 3D center of each box, - shape (num_gts, 2). - depths (list[Tensor]): Depth of projected 3D center of each box, - shape (num_gts,). - attr_labels (list[Tensor]): Attribute labels of each box, - shape (num_gts,). - gt_bboxes_ignore (list[Tensor]): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple: - losses: (dict[str, Tensor]): A dictionary of loss components. - proposal_list (list[Tensor]): Proposals of each image. - """ - outs = self(x) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, gt_bboxes_3d, centers2d, depths, - attr_labels, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels, - img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg) - return losses, proposal_list diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/centerpoint_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/centerpoint_head.py deleted file mode 100644 index 2cf758bd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/centerpoint_head.py +++ /dev/null @@ -1,830 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import torch -from mmcv.cnn import ConvModule, build_conv_layer -from mmcv.runner import BaseModule, force_fp32 -from torch import nn - -from mmdet3d.core import (circle_nms, draw_heatmap_gaussian, gaussian_radius, - xywhr2xyxyr) -from mmdet3d.core.post_processing import nms_bev -from mmdet3d.models import builder -from mmdet3d.models.utils import clip_sigmoid -from mmdet.core import build_bbox_coder, multi_apply -from ..builder import HEADS, build_loss - - -@HEADS.register_module() -class SeparateHead(BaseModule): - """SeparateHead for CenterHead. - - Args: - in_channels (int): Input channels for conv_layer. - heads (dict): Conv information. - head_conv (int, optional): Output channels. - Default: 64. - final_kernel (int, optional): Kernel size for the last conv layer. - Default: 1. 
- init_bias (float, optional): Initial bias. Default: -2.19. - conv_cfg (dict, optional): Config of conv layer. - Default: dict(type='Conv2d') - norm_cfg (dict, optional): Config of norm layer. - Default: dict(type='BN2d'). - bias (str, optional): Type of bias. Default: 'auto'. - """ - - def __init__(self, - in_channels, - heads, - head_conv=64, - final_kernel=1, - init_bias=-2.19, - conv_cfg=dict(type='Conv2d'), - norm_cfg=dict(type='BN2d'), - bias='auto', - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(SeparateHead, self).__init__(init_cfg=init_cfg) - self.heads = heads - self.init_bias = init_bias - for head in self.heads: - classes, num_conv = self.heads[head] - - conv_layers = [] - c_in = in_channels - for i in range(num_conv - 1): - conv_layers.append( - ConvModule( - c_in, - head_conv, - kernel_size=final_kernel, - stride=1, - padding=final_kernel // 2, - bias=bias, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) - c_in = head_conv - - conv_layers.append( - build_conv_layer( - conv_cfg, - head_conv, - classes, - kernel_size=final_kernel, - stride=1, - padding=final_kernel // 2, - bias=True)) - conv_layers = nn.Sequential(*conv_layers) - - self.__setattr__(head, conv_layers) - - if init_cfg is None: - self.init_cfg = dict(type='Kaiming', layer='Conv2d') - - def init_weights(self): - """Initialize weights.""" - super().init_weights() - for head in self.heads: - if head == 'heatmap': - self.__getattr__(head)[-1].bias.data.fill_(self.init_bias) - - def forward(self, x): - """Forward function for SepHead. - - Args: - x (torch.Tensor): Input feature map with the shape of - [B, 512, 128, 128]. - - Returns: - dict[str: torch.Tensor]: contains the following keys: - - -reg (torch.Tensor): 2D regression value with the - shape of [B, 2, H, W]. - -height (torch.Tensor): Height value with the - shape of [B, 1, H, W]. - -dim (torch.Tensor): Size value with the shape - of [B, 3, H, W]. - -rot (torch.Tensor): Rotation value with the - shape of [B, 2, H, W]. - -vel (torch.Tensor): Velocity value with the - shape of [B, 2, H, W]. - -heatmap (torch.Tensor): Heatmap with the shape of - [B, N, H, W]. - """ - ret_dict = dict() - for head in self.heads: - ret_dict[head] = self.__getattr__(head)(x) - - return ret_dict - - -@HEADS.register_module() -class DCNSeparateHead(BaseModule): - r"""DCNSeparateHead for CenterHead. - - .. code-block:: none - /-----> DCN for heatmap task -----> heatmap task. - feature - \-----> DCN for regression tasks -----> regression tasks - - Args: - in_channels (int): Input channels for conv_layer. - num_cls (int): Number of classes. - heads (dict): Conv information. - dcn_config (dict): Config of dcn layer. - head_conv (int, optional): Output channels. - Default: 64. - final_kernel (int, optional): Kernel size for the last conv - layer. Default: 1. - init_bias (float, optional): Initial bias. Default: -2.19. - conv_cfg (dict, optional): Config of conv layer. - Default: dict(type='Conv2d') - norm_cfg (dict, optional): Config of norm layer. - Default: dict(type='BN2d'). - bias (str, optional): Type of bias. Default: 'auto'. 
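A simplified standalone sketch of the per-target layout in the `SeparateHead` removed above: one small conv stack per output, each producing its own map from the shared feature. Here the head spec is reduced to output channels only; the real head also carries a conv count per target.

```python
import torch
from torch import nn

heads = dict(reg=2, height=1, dim=3, rot=2, vel=2, heatmap=3)  # output channels only
in_channels, head_conv = 64, 64

branches = nn.ModuleDict({
    name: nn.Sequential(nn.Conv2d(in_channels, head_conv, 3, padding=1),
                        nn.ReLU(inplace=True),
                        nn.Conv2d(head_conv, out_ch, 1))
    for name, out_ch in heads.items()
})

x = torch.randn(2, in_channels, 32, 32)
out = {name: branch(x) for name, branch in branches.items()}
print(out['heatmap'].shape)  # torch.Size([2, 3, 32, 32])
```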
- """ # noqa: W605 - - def __init__(self, - in_channels, - num_cls, - heads, - dcn_config, - head_conv=64, - final_kernel=1, - init_bias=-2.19, - conv_cfg=dict(type='Conv2d'), - norm_cfg=dict(type='BN2d'), - bias='auto', - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(DCNSeparateHead, self).__init__(init_cfg=init_cfg) - if 'heatmap' in heads: - heads.pop('heatmap') - # feature adaptation with dcn - # use separate features for classification / regression - self.feature_adapt_cls = build_conv_layer(dcn_config) - - self.feature_adapt_reg = build_conv_layer(dcn_config) - - # heatmap prediction head - cls_head = [ - ConvModule( - in_channels, - head_conv, - kernel_size=3, - padding=1, - conv_cfg=conv_cfg, - bias=bias, - norm_cfg=norm_cfg), - build_conv_layer( - conv_cfg, - head_conv, - num_cls, - kernel_size=3, - stride=1, - padding=1, - bias=bias) - ] - self.cls_head = nn.Sequential(*cls_head) - self.init_bias = init_bias - # other regression target - self.task_head = SeparateHead( - in_channels, - heads, - head_conv=head_conv, - final_kernel=final_kernel, - bias=bias) - if init_cfg is None: - self.init_cfg = dict(type='Kaiming', layer='Conv2d') - - def init_weights(self): - """Initialize weights.""" - super().init_weights() - self.cls_head[-1].bias.data.fill_(self.init_bias) - - def forward(self, x): - """Forward function for DCNSepHead. - - Args: - x (torch.Tensor): Input feature map with the shape of - [B, 512, 128, 128]. - - Returns: - dict[str: torch.Tensor]: contains the following keys: - - -reg (torch.Tensor): 2D regression value with the - shape of [B, 2, H, W]. - -height (torch.Tensor): Height value with the - shape of [B, 1, H, W]. - -dim (torch.Tensor): Size value with the shape - of [B, 3, H, W]. - -rot (torch.Tensor): Rotation value with the - shape of [B, 2, H, W]. - -vel (torch.Tensor): Velocity value with the - shape of [B, 2, H, W]. - -heatmap (torch.Tensor): Heatmap with the shape of - [B, N, H, W]. - """ - center_feat = self.feature_adapt_cls(x) - reg_feat = self.feature_adapt_reg(x) - - cls_score = self.cls_head(center_feat) - ret = self.task_head(reg_feat) - ret['heatmap'] = cls_score - - return ret - - -@HEADS.register_module() -class CenterHead(BaseModule): - """CenterHead for CenterPoint. - - Args: - in_channels (list[int] | int, optional): Channels of the input - feature map. Default: [128]. - tasks (list[dict], optional): Task information including class number - and class names. Default: None. - train_cfg (dict, optional): Train-time configs. Default: None. - test_cfg (dict, optional): Test-time configs. Default: None. - bbox_coder (dict, optional): Bbox coder configs. Default: None. - common_heads (dict, optional): Conv information for common heads. - Default: dict(). - loss_cls (dict, optional): Config of classification loss function. - Default: dict(type='GaussianFocalLoss', reduction='mean'). - loss_bbox (dict, optional): Config of regression loss function. - Default: dict(type='L1Loss', reduction='none'). - separate_head (dict, optional): Config of separate head. Default: dict( - type='SeparateHead', init_bias=-2.19, final_kernel=3) - share_conv_channel (int, optional): Output channels for share_conv - layer. Default: 64. - num_heatmap_convs (int, optional): Number of conv layers for heatmap - conv layer. Default: 2. - conv_cfg (dict, optional): Config of conv layer. - Default: dict(type='Conv2d') - norm_cfg (dict, optional): Config of norm layer. 
- Default: dict(type='BN2d'). - bias (str, optional): Type of bias. Default: 'auto'. - """ - - def __init__(self, - in_channels=[128], - tasks=None, - train_cfg=None, - test_cfg=None, - bbox_coder=None, - common_heads=dict(), - loss_cls=dict(type='GaussianFocalLoss', reduction='mean'), - loss_bbox=dict( - type='L1Loss', reduction='none', loss_weight=0.25), - separate_head=dict( - type='SeparateHead', init_bias=-2.19, final_kernel=3), - share_conv_channel=64, - num_heatmap_convs=2, - conv_cfg=dict(type='Conv2d'), - norm_cfg=dict(type='BN2d'), - bias='auto', - norm_bbox=True, - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(CenterHead, self).__init__(init_cfg=init_cfg) - - num_classes = [len(t['class_names']) for t in tasks] - self.class_names = [t['class_names'] for t in tasks] - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.in_channels = in_channels - self.num_classes = num_classes - self.norm_bbox = norm_bbox - - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.bbox_coder = build_bbox_coder(bbox_coder) - self.num_anchor_per_locs = [n for n in num_classes] - self.fp16_enabled = False - - # a shared convolution - self.shared_conv = ConvModule( - in_channels, - share_conv_channel, - kernel_size=3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=bias) - - self.task_heads = nn.ModuleList() - - for num_cls in num_classes: - heads = copy.deepcopy(common_heads) - heads.update(dict(heatmap=(num_cls, num_heatmap_convs))) - separate_head.update( - in_channels=share_conv_channel, heads=heads, num_cls=num_cls) - self.task_heads.append(builder.build_head(separate_head)) - - def forward_single(self, x): - """Forward function for CenterPoint. - - Args: - x (torch.Tensor): Input feature map with the shape of - [B, 512, 128, 128]. - - Returns: - list[dict]: Output results for tasks. - """ - ret_dicts = [] - - x = self.shared_conv(x) - - for task in self.task_heads: - ret_dicts.append(task(x)) - - return ret_dicts - - def forward(self, feats): - """Forward pass. - - Args: - feats (list[torch.Tensor]): Multi-level features, e.g., - features produced by FPN. - - Returns: - tuple(list[dict]): Output results for tasks. - """ - return multi_apply(self.forward_single, feats) - - def _gather_feat(self, feat, ind, mask=None): - """Gather feature map. - - Given feature map and index, return indexed feature map. - - Args: - feat (torch.tensor): Feature map with the shape of [B, H*W, 10]. - ind (torch.Tensor): Index of the ground truth boxes with the - shape of [B, max_obj]. - mask (torch.Tensor, optional): Mask of the feature map with the - shape of [B, max_obj]. Default: None. - - Returns: - torch.Tensor: Feature map after gathering with the shape - of [B, max_obj, 10]. - """ - dim = feat.size(2) - ind = ind.unsqueeze(2).expand(ind.size(0), ind.size(1), dim) - feat = feat.gather(1, ind) - if mask is not None: - mask = mask.unsqueeze(2).expand_as(feat) - feat = feat[mask] - feat = feat.view(-1, dim) - return feat - - def get_targets(self, gt_bboxes_3d, gt_labels_3d): - """Generate targets. - - How each output is transformed: - - Each nested list is transposed so that all same-index elements in - each sub-list (1, ..., N) become the new sub-lists. - [ [a0, a1, a2, ... ], [b0, b1, b2, ... ], ... ] - ==> [ [a0, b0, ... ], [a1, b1, ... ], [a2, b2, ... 
] ] - - The new transposed nested list is converted into a list of N - tensors generated by concatenating tensors in the new sub-lists. - [ tensor0, tensor1, tensor2, ... ] - - Args: - gt_bboxes_3d (list[:obj:`LiDARInstance3DBoxes`]): Ground - truth gt boxes. - gt_labels_3d (list[torch.Tensor]): Labels of boxes. - - Returns: - Returns: - tuple[list[torch.Tensor]]: Tuple of target including - the following results in order. - - - list[torch.Tensor]: Heatmap scores. - - list[torch.Tensor]: Ground truth boxes. - - list[torch.Tensor]: Indexes indicating the - position of the valid boxes. - - list[torch.Tensor]: Masks indicating which - boxes are valid. - """ - heatmaps, anno_boxes, inds, masks = multi_apply( - self.get_targets_single, gt_bboxes_3d, gt_labels_3d) - # Transpose heatmaps - heatmaps = list(map(list, zip(*heatmaps))) - heatmaps = [torch.stack(hms_) for hms_ in heatmaps] - # Transpose anno_boxes - anno_boxes = list(map(list, zip(*anno_boxes))) - anno_boxes = [torch.stack(anno_boxes_) for anno_boxes_ in anno_boxes] - # Transpose inds - inds = list(map(list, zip(*inds))) - inds = [torch.stack(inds_) for inds_ in inds] - # Transpose inds - masks = list(map(list, zip(*masks))) - masks = [torch.stack(masks_) for masks_ in masks] - return heatmaps, anno_boxes, inds, masks - - def get_targets_single(self, gt_bboxes_3d, gt_labels_3d): - """Generate training targets for a single sample. - - Args: - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): Ground truth gt boxes. - gt_labels_3d (torch.Tensor): Labels of boxes. - - Returns: - tuple[list[torch.Tensor]]: Tuple of target including - the following results in order. - - - list[torch.Tensor]: Heatmap scores. - - list[torch.Tensor]: Ground truth boxes. - - list[torch.Tensor]: Indexes indicating the position - of the valid boxes. - - list[torch.Tensor]: Masks indicating which boxes - are valid. - """ - device = gt_labels_3d.device - gt_bboxes_3d = torch.cat( - (gt_bboxes_3d.gravity_center, gt_bboxes_3d.tensor[:, 3:]), - dim=1).to(device) - max_objs = self.train_cfg['max_objs'] * self.train_cfg['dense_reg'] - grid_size = torch.tensor(self.train_cfg['grid_size']) - pc_range = torch.tensor(self.train_cfg['point_cloud_range']) - voxel_size = torch.tensor(self.train_cfg['voxel_size']) - - feature_map_size = grid_size[:2] // self.train_cfg['out_size_factor'] - - # reorganize the gt_dict by tasks - task_masks = [] - flag = 0 - for class_name in self.class_names: - task_masks.append([ - torch.where(gt_labels_3d == class_name.index(i) + flag) - for i in class_name - ]) - flag += len(class_name) - - task_boxes = [] - task_classes = [] - flag2 = 0 - for idx, mask in enumerate(task_masks): - task_box = [] - task_class = [] - for m in mask: - task_box.append(gt_bboxes_3d[m]) - # 0 is background for each task, so we need to add 1 here. 
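The `get_targets` transposition above regroups per-sample, per-task target lists into per-task batches. A tiny standalone illustration of the same `zip`/`stack` pattern:

```python
import torch

per_sample = [
    [torch.zeros(128, 128), torch.zeros(128, 128)],  # sample 0: task 0, task 1
    [torch.zeros(128, 128), torch.zeros(128, 128)],  # sample 1: task 0, task 1
]
per_task = list(map(list, zip(*per_sample)))          # [task][sample]
batched = [torch.stack(t) for t in per_task]
print([b.shape for b in batched])  # [torch.Size([2, 128, 128]), torch.Size([2, 128, 128])]
```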
- task_class.append(gt_labels_3d[m] + 1 - flag2) - task_boxes.append(torch.cat(task_box, axis=0).to(device)) - task_classes.append(torch.cat(task_class).long().to(device)) - flag2 += len(mask) - draw_gaussian = draw_heatmap_gaussian - heatmaps, anno_boxes, inds, masks = [], [], [], [] - - for idx, task_head in enumerate(self.task_heads): - heatmap = gt_bboxes_3d.new_zeros( - (len(self.class_names[idx]), feature_map_size[1], - feature_map_size[0])) - - anno_box = gt_bboxes_3d.new_zeros((max_objs, 10), - dtype=torch.float32) - - ind = gt_labels_3d.new_zeros((max_objs), dtype=torch.int64) - mask = gt_bboxes_3d.new_zeros((max_objs), dtype=torch.uint8) - - num_objs = min(task_boxes[idx].shape[0], max_objs) - - for k in range(num_objs): - cls_id = task_classes[idx][k] - 1 - - width = task_boxes[idx][k][3] - length = task_boxes[idx][k][4] - width = width / voxel_size[0] / self.train_cfg[ - 'out_size_factor'] - length = length / voxel_size[1] / self.train_cfg[ - 'out_size_factor'] - - if width > 0 and length > 0: - radius = gaussian_radius( - (length, width), - min_overlap=self.train_cfg['gaussian_overlap']) - radius = max(self.train_cfg['min_radius'], int(radius)) - - # be really careful for the coordinate system of - # your box annotation. - x, y, z = task_boxes[idx][k][0], task_boxes[idx][k][ - 1], task_boxes[idx][k][2] - - coor_x = ( - x - pc_range[0] - ) / voxel_size[0] / self.train_cfg['out_size_factor'] - coor_y = ( - y - pc_range[1] - ) / voxel_size[1] / self.train_cfg['out_size_factor'] - - center = torch.tensor([coor_x, coor_y], - dtype=torch.float32, - device=device) - center_int = center.to(torch.int32) - - # throw out not in range objects to avoid out of array - # area when creating the heatmap - if not (0 <= center_int[0] < feature_map_size[0] - and 0 <= center_int[1] < feature_map_size[1]): - continue - - draw_gaussian(heatmap[cls_id], center_int, radius) - - new_idx = k - x, y = center_int[0], center_int[1] - - assert (y * feature_map_size[0] + x < - feature_map_size[0] * feature_map_size[1]) - - ind[new_idx] = y * feature_map_size[0] + x - mask[new_idx] = 1 - # TODO: support other outdoor dataset - vx, vy = task_boxes[idx][k][7:] - rot = task_boxes[idx][k][6] - box_dim = task_boxes[idx][k][3:6] - if self.norm_bbox: - box_dim = box_dim.log() - anno_box[new_idx] = torch.cat([ - center - torch.tensor([x, y], device=device), - z.unsqueeze(0), box_dim, - torch.sin(rot).unsqueeze(0), - torch.cos(rot).unsqueeze(0), - vx.unsqueeze(0), - vy.unsqueeze(0) - ]) - - heatmaps.append(heatmap) - anno_boxes.append(anno_box) - masks.append(mask) - inds.append(ind) - return heatmaps, anno_boxes, inds, masks - - @force_fp32(apply_to=('preds_dicts')) - def loss(self, gt_bboxes_3d, gt_labels_3d, preds_dicts, **kwargs): - """Loss function for CenterHead. - - Args: - gt_bboxes_3d (list[:obj:`LiDARInstance3DBoxes`]): Ground - truth gt boxes. - gt_labels_3d (list[torch.Tensor]): Labels of boxes. - preds_dicts (dict): Output of forward function. - - Returns: - dict[str:torch.Tensor]: Loss of heatmap and bbox of each task. 
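The target generation above relies on `gaussian_radius` and `draw_heatmap_gaussian` from mmdet3d. As a minimal stand-in (not the library implementation), drawing a Gaussian peak onto a class heatmap looks roughly like this:

```python
import torch

def draw_gaussian(heatmap, center, radius):
    # Place a 2D Gaussian of the given radius at an integer (x, y) center,
    # keeping the element-wise maximum with whatever is already drawn.
    diameter = 2 * radius + 1
    sigma = diameter / 6.0
    coords = torch.arange(diameter, dtype=torch.float32) - radius
    g = torch.exp(-(coords[None, :] ** 2 + coords[:, None] ** 2) / (2 * sigma ** 2))
    x, y = int(center[0]), int(center[1])
    h, w = heatmap.shape
    left, right = min(x, radius), min(w - x, radius + 1)
    top, bottom = min(y, radius), min(h - y, radius + 1)
    heatmap[y - top:y + bottom, x - left:x + right] = torch.maximum(
        heatmap[y - top:y + bottom, x - left:x + right],
        g[radius - top:radius + bottom, radius - left:radius + right])
    return heatmap

hm = draw_gaussian(torch.zeros(16, 16), center=(8, 5), radius=2)
print(hm.max())  # tensor(1.)
```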
- """ - heatmaps, anno_boxes, inds, masks = self.get_targets( - gt_bboxes_3d, gt_labels_3d) - loss_dict = dict() - for task_id, preds_dict in enumerate(preds_dicts): - # heatmap focal loss - preds_dict[0]['heatmap'] = clip_sigmoid(preds_dict[0]['heatmap']) - num_pos = heatmaps[task_id].eq(1).float().sum().item() - loss_heatmap = self.loss_cls( - preds_dict[0]['heatmap'], - heatmaps[task_id], - avg_factor=max(num_pos, 1)) - target_box = anno_boxes[task_id] - # reconstruct the anno_box from multiple reg heads - preds_dict[0]['anno_box'] = torch.cat( - (preds_dict[0]['reg'], preds_dict[0]['height'], - preds_dict[0]['dim'], preds_dict[0]['rot'], - preds_dict[0]['vel']), - dim=1) - - # Regression loss for dimension, offset, height, rotation - ind = inds[task_id] - num = masks[task_id].float().sum() - pred = preds_dict[0]['anno_box'].permute(0, 2, 3, 1).contiguous() - pred = pred.view(pred.size(0), -1, pred.size(3)) - pred = self._gather_feat(pred, ind) - mask = masks[task_id].unsqueeze(2).expand_as(target_box).float() - isnotnan = (~torch.isnan(target_box)).float() - mask *= isnotnan - - code_weights = self.train_cfg.get('code_weights', None) - bbox_weights = mask * mask.new_tensor(code_weights) - loss_bbox = self.loss_bbox( - pred, target_box, bbox_weights, avg_factor=(num + 1e-4)) - loss_dict[f'task{task_id}.loss_heatmap'] = loss_heatmap - loss_dict[f'task{task_id}.loss_bbox'] = loss_bbox - return loss_dict - - def get_bboxes(self, preds_dicts, img_metas, img=None, rescale=False): - """Generate bboxes from bbox head predictions. - - Args: - preds_dicts (tuple[list[dict]]): Prediction results. - img_metas (list[dict]): Point cloud and image's meta info. - - Returns: - list[dict]: Decoded bbox, scores and labels after nms. - """ - rets = [] - for task_id, preds_dict in enumerate(preds_dicts): - num_class_with_bg = self.num_classes[task_id] - batch_size = preds_dict[0]['heatmap'].shape[0] - batch_heatmap = preds_dict[0]['heatmap'].sigmoid() - - batch_reg = preds_dict[0]['reg'] - batch_hei = preds_dict[0]['height'] - - if self.norm_bbox: - batch_dim = torch.exp(preds_dict[0]['dim']) - else: - batch_dim = preds_dict[0]['dim'] - - batch_rots = preds_dict[0]['rot'][:, 0].unsqueeze(1) - batch_rotc = preds_dict[0]['rot'][:, 1].unsqueeze(1) - - if 'vel' in preds_dict[0]: - batch_vel = preds_dict[0]['vel'] - else: - batch_vel = None - temp = self.bbox_coder.decode( - batch_heatmap, - batch_rots, - batch_rotc, - batch_hei, - batch_dim, - batch_vel, - reg=batch_reg, - task_id=task_id) - assert self.test_cfg['nms_type'] in ['circle', 'rotate'] - batch_reg_preds = [box['bboxes'] for box in temp] - batch_cls_preds = [box['scores'] for box in temp] - batch_cls_labels = [box['labels'] for box in temp] - if self.test_cfg['nms_type'] == 'circle': - ret_task = [] - for i in range(batch_size): - boxes3d = temp[i]['bboxes'] - scores = temp[i]['scores'] - labels = temp[i]['labels'] - centers = boxes3d[:, [0, 1]] - boxes = torch.cat([centers, scores.view(-1, 1)], dim=1) - keep = torch.tensor( - circle_nms( - boxes.detach().cpu().numpy(), - self.test_cfg['min_radius'][task_id], - post_max_size=self.test_cfg['post_max_size']), - dtype=torch.long, - device=boxes.device) - - boxes3d = boxes3d[keep] - scores = scores[keep] - labels = labels[keep] - ret = dict(bboxes=boxes3d, scores=scores, labels=labels) - ret_task.append(ret) - rets.append(ret_task) - else: - rets.append( - self.get_task_detections(num_class_with_bg, - batch_cls_preds, batch_reg_preds, - batch_cls_labels, img_metas)) - - # Merge branches results - 
num_samples = len(rets[0]) - - ret_list = [] - for i in range(num_samples): - for k in rets[0][i].keys(): - if k == 'bboxes': - bboxes = torch.cat([ret[i][k] for ret in rets]) - bboxes[:, 2] = bboxes[:, 2] - bboxes[:, 5] * 0.5 - bboxes = img_metas[i]['box_type_3d']( - bboxes, self.bbox_coder.code_size) - elif k == 'scores': - scores = torch.cat([ret[i][k] for ret in rets]) - elif k == 'labels': - flag = 0 - for j, num_class in enumerate(self.num_classes): - rets[j][i][k] += flag - flag += num_class - labels = torch.cat([ret[i][k].int() for ret in rets]) - ret_list.append([bboxes, scores, labels]) - return ret_list - - def get_task_detections(self, num_class_with_bg, batch_cls_preds, - batch_reg_preds, batch_cls_labels, img_metas): - """Rotate nms for each task. - - Args: - num_class_with_bg (int): Number of classes for the current task. - batch_cls_preds (list[torch.Tensor]): Prediction score with the - shape of [N]. - batch_reg_preds (list[torch.Tensor]): Prediction bbox with the - shape of [N, 9]. - batch_cls_labels (list[torch.Tensor]): Prediction label with the - shape of [N]. - img_metas (list[dict]): Meta information of each sample. - - Returns: - list[dict[str: torch.Tensor]]: contains the following keys: - - -bboxes (torch.Tensor): Prediction bboxes after nms with the - shape of [N, 9]. - -scores (torch.Tensor): Prediction scores after nms with the - shape of [N]. - -labels (torch.Tensor): Prediction labels after nms with the - shape of [N]. - """ - predictions_dicts = [] - post_center_range = self.test_cfg['post_center_limit_range'] - if len(post_center_range) > 0: - post_center_range = torch.tensor( - post_center_range, - dtype=batch_reg_preds[0].dtype, - device=batch_reg_preds[0].device) - - for i, (box_preds, cls_preds, cls_labels) in enumerate( - zip(batch_reg_preds, batch_cls_preds, batch_cls_labels)): - - # Apply NMS in bird eye view - - # get the highest score per prediction, then apply nms - # to remove overlapped box. - if num_class_with_bg == 1: - top_scores = cls_preds.squeeze(-1) - top_labels = torch.zeros( - cls_preds.shape[0], - device=cls_preds.device, - dtype=torch.long) - - else: - top_labels = cls_labels.long() - top_scores = cls_preds.squeeze(-1) - - if self.test_cfg['score_threshold'] > 0.0: - thresh = torch.tensor( - [self.test_cfg['score_threshold']], - device=cls_preds.device).type_as(cls_preds) - top_scores_keep = top_scores >= thresh - top_scores = top_scores.masked_select(top_scores_keep) - - if top_scores.shape[0] != 0: - if self.test_cfg['score_threshold'] > 0.0: - box_preds = box_preds[top_scores_keep] - top_labels = top_labels[top_scores_keep] - - boxes_for_nms = xywhr2xyxyr(img_metas[i]['box_type_3d']( - box_preds[:, :], self.bbox_coder.code_size).bev) - # the nms in 3d detection just remove overlap boxes. - - selected = nms_bev( - boxes_for_nms, - top_scores, - thresh=self.test_cfg['nms_thr'], - pre_max_size=self.test_cfg['pre_max_size'], - post_max_size=self.test_cfg['post_max_size']) - else: - selected = [] - - # if selected is not None: - selected_boxes = box_preds[selected] - selected_labels = top_labels[selected] - selected_scores = top_scores[selected] - - # finally generate predictions. 
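Before the BEV NMS above, predictions are filtered by the configured score threshold with a boolean mask. A toy standalone version of that filtering step:

```python
import torch

scores = torch.tensor([0.90, 0.05, 0.40, 0.20])
boxes = torch.randn(4, 9)
score_threshold = 0.1

keep = scores >= score_threshold
top_scores, box_preds = scores[keep], boxes[keep]
print(box_preds.shape, top_scores)  # torch.Size([3, 9]) tensor([0.9000, 0.4000, 0.2000])
```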
- if selected_boxes.shape[0] != 0: - box_preds = selected_boxes - scores = selected_scores - label_preds = selected_labels - final_box_preds = box_preds - final_scores = scores - final_labels = label_preds - if post_center_range is not None: - mask = (final_box_preds[:, :3] >= - post_center_range[:3]).all(1) - mask &= (final_box_preds[:, :3] <= - post_center_range[3:]).all(1) - predictions_dict = dict( - bboxes=final_box_preds[mask], - scores=final_scores[mask], - labels=final_labels[mask]) - else: - predictions_dict = dict( - bboxes=final_box_preds, - scores=final_scores, - labels=final_labels) - else: - dtype = batch_reg_preds[0].dtype - device = batch_reg_preds[0].device - predictions_dict = dict( - bboxes=torch.zeros([0, self.bbox_coder.code_size], - dtype=dtype, - device=device), - scores=torch.zeros([0], dtype=dtype, device=device), - labels=torch.zeros([0], - dtype=top_labels.dtype, - device=device)) - - predictions_dicts.append(predictions_dict) - return predictions_dicts diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/fcos_mono3d_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/fcos_mono3d_head.py deleted file mode 100644 index d0aa29f8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/fcos_mono3d_head.py +++ /dev/null @@ -1,956 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from logging import warning - -import numpy as np -import torch -from mmcv.cnn import Scale, normal_init -from mmcv.runner import force_fp32 -from torch import nn as nn - -from mmdet3d.core import (box3d_multiclass_nms, limit_period, points_img2cam, - xywhr2xyxyr) -from mmdet.core import multi_apply -from mmdet.core.bbox.builder import build_bbox_coder -from ..builder import HEADS, build_loss -from .anchor_free_mono3d_head import AnchorFreeMono3DHead - -INF = 1e8 - - -@HEADS.register_module() -class FCOSMono3DHead(AnchorFreeMono3DHead): - """Anchor-free head used in FCOS3D. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - regress_ranges (tuple[tuple[int, int]], optional): Regress range of multiple - level points. - center_sampling (bool, optional): If true, use center sampling. Default: True. - center_sample_radius (float, optional): Radius of center sampling. Default: 1.5. - norm_on_bbox (bool, optional): If true, normalize the regression targets - with FPN strides. Default: True. - centerness_on_reg (bool, optional): If true, position centerness on the - regress branch. Please refer to https://github.com/tianzhi0549/FCOS/issues/89#issuecomment-516877042. - Default: True. - centerness_alpha (int, optional): Parameter used to adjust the intensity - attenuation from the center to the periphery. Default: 2.5. - loss_cls (dict, optional): Config of classification loss. - loss_bbox (dict, optional): Config of localization loss. - loss_dir (dict, optional): Config of direction classification loss. - loss_attr (dict, optional): Config of attribute classification loss. - loss_centerness (dict, optional): Config of centerness loss. - norm_cfg (dict, optional): dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, requires_grad=True). - centerness_branch (tuple[int], optional): Channels for centerness branch. - Default: (64, ). 
- """ # noqa: E501 - - def __init__(self, - regress_ranges=((-1, 48), (48, 96), (96, 192), (192, 384), - (384, INF)), - center_sampling=True, - center_sample_radius=1.5, - norm_on_bbox=True, - centerness_on_reg=True, - centerness_alpha=2.5, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_attr=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_centerness=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - bbox_coder=dict(type='FCOS3DBBoxCoder', code_size=9), - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - centerness_branch=(64, ), - init_cfg=None, - **kwargs): - self.regress_ranges = regress_ranges - self.center_sampling = center_sampling - self.center_sample_radius = center_sample_radius - self.norm_on_bbox = norm_on_bbox - self.centerness_on_reg = centerness_on_reg - self.centerness_alpha = centerness_alpha - self.centerness_branch = centerness_branch - super().__init__( - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_dir=loss_dir, - loss_attr=loss_attr, - norm_cfg=norm_cfg, - init_cfg=init_cfg, - **kwargs) - self.loss_centerness = build_loss(loss_centerness) - bbox_coder['code_size'] = self.bbox_code_size - self.bbox_coder = build_bbox_coder(bbox_coder) - - def _init_layers(self): - """Initialize layers of the head.""" - super()._init_layers() - self.conv_centerness_prev = self._init_branch( - conv_channels=self.centerness_branch, - conv_strides=(1, ) * len(self.centerness_branch)) - self.conv_centerness = nn.Conv2d(self.centerness_branch[-1], 1, 1) - self.scale_dim = 3 # only for offset, depth and size regression - self.scales = nn.ModuleList([ - nn.ModuleList([Scale(1.0) for _ in range(self.scale_dim)]) - for _ in self.strides - ]) - - def init_weights(self): - """Initialize weights of the head. - - We currently still use the customized init_weights because the default - init of DCN triggered by the init_cfg will init conv_offset.weight, - which mistakenly affects the training stability. - """ - super().init_weights() - for m in self.conv_centerness_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - normal_init(self.conv_centerness, std=0.01) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2). - attr_preds (list[Tensor]): Attribute scores for each scale - level, each is a 4D-tensor, the channel number is - num_points * num_attrs. - centernesses (list[Tensor]): Centerness for each scale level, - each is a 4D-tensor, the channel number is num_points * 1. 
- """ - # Note: we use [:5] to filter feats and only return predictions - return multi_apply(self.forward_single, feats, self.scales, - self.strides)[:5] - - def forward_single(self, x, scale, stride): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - stride (int): The corresponding stride for feature maps, only - used to normalize the bbox prediction when self.norm_on_bbox - is True. - - Returns: - tuple: scores for each class, bbox and direction class - predictions, centerness predictions of input feature maps. - """ - cls_score, bbox_pred, dir_cls_pred, attr_pred, cls_feat, reg_feat = \ - super().forward_single(x) - - if self.centerness_on_reg: - clone_reg_feat = reg_feat.clone() - for conv_centerness_prev_layer in self.conv_centerness_prev: - clone_reg_feat = conv_centerness_prev_layer(clone_reg_feat) - centerness = self.conv_centerness(clone_reg_feat) - else: - clone_cls_feat = cls_feat.clone() - for conv_centerness_prev_layer in self.conv_centerness_prev: - clone_cls_feat = conv_centerness_prev_layer(clone_cls_feat) - centerness = self.conv_centerness(clone_cls_feat) - - bbox_pred = self.bbox_coder.decode(bbox_pred, scale, stride, - self.training, cls_score) - - return cls_score, bbox_pred, dir_cls_pred, attr_pred, centerness, \ - cls_feat, reg_feat - - @staticmethod - def add_sin_difference(boxes1, boxes2): - """Convert the rotation difference to difference in sine function. - - Args: - boxes1 (torch.Tensor): Original Boxes in shape (NxC), where C>=7 - and the 7th dimension is rotation dimension. - boxes2 (torch.Tensor): Target boxes in shape (NxC), where C>=7 and - the 7th dimension is rotation dimension. - - Returns: - tuple[torch.Tensor]: ``boxes1`` and ``boxes2`` whose 7th - dimensions are changed. - """ - rad_pred_encoding = torch.sin(boxes1[..., 6:7]) * torch.cos( - boxes2[..., 6:7]) - rad_tg_encoding = torch.cos(boxes1[..., 6:7]) * torch.sin(boxes2[..., - 6:7]) - boxes1 = torch.cat( - [boxes1[..., :6], rad_pred_encoding, boxes1[..., 7:]], dim=-1) - boxes2 = torch.cat([boxes2[..., :6], rad_tg_encoding, boxes2[..., 7:]], - dim=-1) - return boxes1, boxes2 - - @staticmethod - def get_direction_target(reg_targets, - dir_offset=0, - dir_limit_offset=0.0, - num_bins=2, - one_hot=True): - """Encode direction to 0 ~ num_bins-1. - - Args: - reg_targets (torch.Tensor): Bbox regression targets. - dir_offset (int, optional): Direction offset. Default to 0. - dir_limit_offset (float, optional): Offset to set the direction - range. Default to 0.0. - num_bins (int, optional): Number of bins to divide 2*PI. - Default to 2. - one_hot (bool, optional): Whether to encode as one hot. - Default to True. - - Returns: - torch.Tensor: Encoded direction targets. 
- """ - rot_gt = reg_targets[..., 6] - offset_rot = limit_period(rot_gt - dir_offset, dir_limit_offset, - 2 * np.pi) - dir_cls_targets = torch.floor(offset_rot / - (2 * np.pi / num_bins)).long() - dir_cls_targets = torch.clamp(dir_cls_targets, min=0, max=num_bins - 1) - if one_hot: - dir_targets = torch.zeros( - *list(dir_cls_targets.shape), - num_bins, - dtype=reg_targets.dtype, - device=dir_cls_targets.device) - dir_targets.scatter_(dir_cls_targets.unsqueeze(dim=-1).long(), 1.0) - dir_cls_targets = dir_targets - return dir_cls_targets - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds', 'attr_preds', - 'centernesses')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - attr_preds, - centernesses, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - attr_preds (list[Tensor]): Attribute scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_attrs. - centernesses (list[Tensor]): Centerness for each scale level, each - is a 4D-tensor, the channel number is num_points * 1. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_3d (list[Tensor]): 3D boxes ground truth with shape of - (num_gts, code_size). - gt_labels_3d (list[Tensor]): same as gt_labels - centers2d (list[Tensor]): 2D centers on the image with shape of - (num_gts, 2). - depths (list[Tensor]): Depth ground truth with shape of - (num_gts, ). - attr_labels (list[Tensor]): Attributes indices of each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - assert len(cls_scores) == len(bbox_preds) == len(centernesses) == len( - attr_preds) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.get_points(featmap_sizes, bbox_preds[0].dtype, - bbox_preds[0].device) - labels_3d, bbox_targets_3d, centerness_targets, attr_targets = \ - self.get_targets( - all_level_points, gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels) - - num_imgs = cls_scores[0].size(0) - # flatten cls_scores, bbox_preds, dir_cls_preds and centerness - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, sum(self.group_reg_dims)) - for bbox_pred in bbox_preds - ] - flatten_dir_cls_preds = [ - dir_cls_pred.permute(0, 2, 3, 1).reshape(-1, 2) - for dir_cls_pred in dir_cls_preds - ] - flatten_centerness = [ - centerness.permute(0, 2, 3, 1).reshape(-1) - for centerness in centernesses - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_dir_cls_preds = torch.cat(flatten_dir_cls_preds) - flatten_centerness = torch.cat(flatten_centerness) - flatten_labels_3d = torch.cat(labels_3d) - flatten_bbox_targets_3d = torch.cat(bbox_targets_3d) - flatten_centerness_targets = torch.cat(centerness_targets) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((flatten_labels_3d >= 0) - & (flatten_labels_3d < bg_class_ind)).nonzero().reshape(-1) - num_pos = len(pos_inds) - - loss_cls = self.loss_cls( - flatten_cls_scores, - flatten_labels_3d, - avg_factor=num_pos + num_imgs) # avoid num_pos is 0 - - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_dir_cls_preds = flatten_dir_cls_preds[pos_inds] - pos_centerness = flatten_centerness[pos_inds] - - if self.pred_attrs: - flatten_attr_preds = [ - attr_pred.permute(0, 2, 3, 1).reshape(-1, self.num_attrs) - for attr_pred in attr_preds - ] - flatten_attr_preds = torch.cat(flatten_attr_preds) - flatten_attr_targets = torch.cat(attr_targets) - pos_attr_preds = flatten_attr_preds[pos_inds] - - if num_pos > 0: - pos_bbox_targets_3d = flatten_bbox_targets_3d[pos_inds] - pos_centerness_targets = flatten_centerness_targets[pos_inds] - if self.pred_attrs: - pos_attr_targets = flatten_attr_targets[pos_inds] - bbox_weights = pos_centerness_targets.new_ones( - len(pos_centerness_targets), sum(self.group_reg_dims)) - equal_weights = pos_centerness_targets.new_ones( - pos_centerness_targets.shape) - - code_weight = self.train_cfg.get('code_weight', None) - if code_weight: - assert len(code_weight) == sum(self.group_reg_dims) - bbox_weights = bbox_weights * bbox_weights.new_tensor( - code_weight) - - if self.use_direction_classifier: - pos_dir_cls_targets = self.get_direction_target( - pos_bbox_targets_3d, - self.dir_offset, - self.dir_limit_offset, - one_hot=False) - - if self.diff_rad_by_sin: - pos_bbox_preds, pos_bbox_targets_3d = self.add_sin_difference( - pos_bbox_preds, pos_bbox_targets_3d) - - loss_offset = self.loss_bbox( - pos_bbox_preds[:, :2], - pos_bbox_targets_3d[:, :2], - weight=bbox_weights[:, :2], - avg_factor=equal_weights.sum()) - loss_depth = self.loss_bbox( - pos_bbox_preds[:, 2], - pos_bbox_targets_3d[:, 2], - weight=bbox_weights[:, 2], - avg_factor=equal_weights.sum()) - loss_size = self.loss_bbox( - pos_bbox_preds[:, 3:6], - pos_bbox_targets_3d[:, 3:6], - weight=bbox_weights[:, 3:6], - 
avg_factor=equal_weights.sum()) - loss_rotsin = self.loss_bbox( - pos_bbox_preds[:, 6], - pos_bbox_targets_3d[:, 6], - weight=bbox_weights[:, 6], - avg_factor=equal_weights.sum()) - loss_velo = None - if self.pred_velo: - loss_velo = self.loss_bbox( - pos_bbox_preds[:, 7:9], - pos_bbox_targets_3d[:, 7:9], - weight=bbox_weights[:, 7:9], - avg_factor=equal_weights.sum()) - - loss_centerness = self.loss_centerness(pos_centerness, - pos_centerness_targets) - - # direction classification loss - loss_dir = None - # TODO: add more check for use_direction_classifier - if self.use_direction_classifier: - loss_dir = self.loss_dir( - pos_dir_cls_preds, - pos_dir_cls_targets, - equal_weights, - avg_factor=equal_weights.sum()) - - # attribute classification loss - loss_attr = None - if self.pred_attrs: - loss_attr = self.loss_attr( - pos_attr_preds, - pos_attr_targets, - pos_centerness_targets, - avg_factor=pos_centerness_targets.sum()) - - else: - # need absolute due to possible negative delta x/y - loss_offset = pos_bbox_preds[:, :2].sum() - loss_depth = pos_bbox_preds[:, 2].sum() - loss_size = pos_bbox_preds[:, 3:6].sum() - loss_rotsin = pos_bbox_preds[:, 6].sum() - loss_velo = None - if self.pred_velo: - loss_velo = pos_bbox_preds[:, 7:9].sum() - loss_centerness = pos_centerness.sum() - loss_dir = None - if self.use_direction_classifier: - loss_dir = pos_dir_cls_preds.sum() - loss_attr = None - if self.pred_attrs: - loss_attr = pos_attr_preds.sum() - - loss_dict = dict( - loss_cls=loss_cls, - loss_offset=loss_offset, - loss_depth=loss_depth, - loss_size=loss_size, - loss_rotsin=loss_rotsin, - loss_centerness=loss_centerness) - - if loss_velo is not None: - loss_dict['loss_velo'] = loss_velo - - if loss_dir is not None: - loss_dict['loss_dir'] = loss_dir - - if loss_attr is not None: - loss_dict['loss_attr'] = loss_attr - - return loss_dict - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds', 'attr_preds', - 'centernesses')) - def get_bboxes(self, - cls_scores, - bbox_preds, - dir_cls_preds, - attr_preds, - centernesses, - img_metas, - cfg=None, - rescale=None): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W) - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - attr_preds (list[Tensor]): Attribute scores for each scale level - Has shape (N, num_points * num_attrs, H, W) - centernesses (list[Tensor]): Centerness for each scale level with - shape (N, num_points * 1, H, W) - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. The second item is a - (n,) tensor where each item is the predicted class label of - the corresponding box. 
- """ - assert len(cls_scores) == len(bbox_preds) == len(dir_cls_preds) == \ - len(centernesses) == len(attr_preds) - num_levels = len(cls_scores) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - mlvl_points = self.get_points(featmap_sizes, bbox_preds[0].dtype, - bbox_preds[0].device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - if self.use_direction_classifier: - dir_cls_pred_list = [ - dir_cls_preds[i][img_id].detach() - for i in range(num_levels) - ] - else: - dir_cls_pred_list = [ - cls_scores[i][img_id].new_full( - [2, *cls_scores[i][img_id].shape[1:]], 0).detach() - for i in range(num_levels) - ] - if self.pred_attrs: - attr_pred_list = [ - attr_preds[i][img_id].detach() for i in range(num_levels) - ] - else: - attr_pred_list = [ - cls_scores[i][img_id].new_full( - [self.num_attrs, *cls_scores[i][img_id].shape[1:]], - self.attr_background_label).detach() - for i in range(num_levels) - ] - centerness_pred_list = [ - centernesses[i][img_id].detach() for i in range(num_levels) - ] - input_meta = img_metas[img_id] - det_bboxes = self._get_bboxes_single( - cls_score_list, bbox_pred_list, dir_cls_pred_list, - attr_pred_list, centerness_pred_list, mlvl_points, input_meta, - cfg, rescale) - result_list.append(det_bboxes) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - dir_cls_preds, - attr_preds, - centernesses, - mlvl_points, - input_meta, - cfg, - rescale=False): - """Transform outputs for a single batch item into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for a single scale level - Has shape (num_points * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for a single scale - level with shape (num_points * bbox_code_size, H, W). - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on a single scale level with shape - (num_points * 2, H, W) - attr_preds (list[Tensor]): Attribute scores for each scale level - Has shape (N, num_points * num_attrs, H, W) - centernesses (list[Tensor]): Centerness for a single scale level - with shape (num_points, H, W). - mlvl_points (list[Tensor]): Box reference for a single scale level - with shape (num_total_points, 2). - input_meta (dict): Metadata of input image. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - - Returns: - tuples[Tensor]: Predicted 3D boxes, scores, labels and attributes. 
- """ - view = np.array(input_meta['cam2img']) - scale_factor = input_meta['scale_factor'] - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_points) - mlvl_centers2d = [] - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_dir_scores = [] - mlvl_attr_scores = [] - mlvl_centerness = [] - - for cls_score, bbox_pred, dir_cls_pred, attr_pred, centerness, \ - points in zip(cls_scores, bbox_preds, dir_cls_preds, - attr_preds, centernesses, mlvl_points): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - dir_cls_pred = dir_cls_pred.permute(1, 2, 0).reshape(-1, 2) - dir_cls_score = torch.max(dir_cls_pred, dim=-1)[1] - attr_pred = attr_pred.permute(1, 2, 0).reshape(-1, self.num_attrs) - attr_score = torch.max(attr_pred, dim=-1)[1] - centerness = centerness.permute(1, 2, 0).reshape(-1).sigmoid() - - bbox_pred = bbox_pred.permute(1, 2, - 0).reshape(-1, - sum(self.group_reg_dims)) - bbox_pred = bbox_pred[:, :self.bbox_code_size] - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - max_scores, _ = (scores * centerness[:, None]).max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - points = points[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - dir_cls_pred = dir_cls_pred[topk_inds, :] - centerness = centerness[topk_inds] - dir_cls_score = dir_cls_score[topk_inds] - attr_score = attr_score[topk_inds] - # change the offset to actual center predictions - bbox_pred[:, :2] = points - bbox_pred[:, :2] - if rescale: - bbox_pred[:, :2] /= bbox_pred[:, :2].new_tensor(scale_factor) - pred_center2d = bbox_pred[:, :3].clone() - bbox_pred[:, :3] = points_img2cam(bbox_pred[:, :3], view) - mlvl_centers2d.append(pred_center2d) - mlvl_bboxes.append(bbox_pred) - mlvl_scores.append(scores) - mlvl_dir_scores.append(dir_cls_score) - mlvl_attr_scores.append(attr_score) - mlvl_centerness.append(centerness) - - mlvl_centers2d = torch.cat(mlvl_centers2d) - mlvl_bboxes = torch.cat(mlvl_bboxes) - mlvl_dir_scores = torch.cat(mlvl_dir_scores) - - # change local yaw to global yaw for 3D nms - cam2img = mlvl_centers2d.new_zeros((4, 4)) - cam2img[:view.shape[0], :view.shape[1]] = \ - mlvl_centers2d.new_tensor(view) - mlvl_bboxes = self.bbox_coder.decode_yaw(mlvl_bboxes, mlvl_centers2d, - mlvl_dir_scores, - self.dir_offset, cam2img) - - mlvl_bboxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - mlvl_bboxes, box_dim=self.bbox_code_size, - origin=(0.5, 0.5, 0.5)).bev) - - mlvl_scores = torch.cat(mlvl_scores) - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - mlvl_attr_scores = torch.cat(mlvl_attr_scores) - mlvl_centerness = torch.cat(mlvl_centerness) - # no scale_factors in box3d_multiclass_nms - # Then we multiply it from outside - mlvl_nms_scores = mlvl_scores * mlvl_centerness[:, None] - results = box3d_multiclass_nms(mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_nms_scores, cfg.score_thr, - cfg.max_per_img, cfg, mlvl_dir_scores, - mlvl_attr_scores) - bboxes, scores, labels, dir_scores, attrs = results - attrs = attrs.to(labels.dtype) # change data type to int - bboxes = input_meta['box_type_3d']( - bboxes, box_dim=self.bbox_code_size, origin=(0.5, 0.5, 0.5)) - # Note that the predictions use origin (0.5, 0.5, 0.5) - # Due to the ground truth centers2d are the gravity 
center of objects - # v0.10.0 fix inplace operation to the input tensor of cam_box3d - # So here we also need to add origin=(0.5, 0.5, 0.5) - if not self.pred_attrs: - attrs = None - - return bboxes, scores, labels, attrs - - @staticmethod - def pts2Dto3D(points, view): - """ - Args: - points (torch.Tensor): points in 2D images, [N, 3], - 3 corresponds with x, y in the image and depth. - view (np.ndarray): camera intrinsic, [3, 3] - - Returns: - torch.Tensor: points in 3D space. [N, 3], - 3 corresponds with x, y, z in 3D space. - """ - warning.warn('DeprecationWarning: This static method has been moved ' - 'out of this class to mmdet3d/core. The function ' - 'pts2Dto3D will be deprecated.') - - assert view.shape[0] <= 4 - assert view.shape[1] <= 4 - assert points.shape[1] == 3 - - points2D = points[:, :2] - depths = points[:, 2].view(-1, 1) - unnorm_points2D = torch.cat([points2D * depths, depths], dim=1) - - viewpad = torch.eye(4, dtype=points2D.dtype, device=points2D.device) - viewpad[:view.shape[0], :view.shape[1]] = points2D.new_tensor(view) - inv_viewpad = torch.inverse(viewpad).transpose(0, 1) - - # Do operation in homogeneous coordinates. - nbr_points = unnorm_points2D.shape[0] - homo_points2D = torch.cat( - [unnorm_points2D, - points2D.new_ones((nbr_points, 1))], dim=1) - points3D = torch.mm(homo_points2D, inv_viewpad)[:, :3] - - return points3D - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points according to feature map sizes.""" - y, x = super()._get_points_single(featmap_size, stride, dtype, device) - points = torch.stack((x.reshape(-1) * stride, y.reshape(-1) * stride), - dim=-1) + stride // 2 - return points - - def get_targets(self, points, gt_bboxes_list, gt_labels_list, - gt_bboxes_3d_list, gt_labels_3d_list, centers2d_list, - depths_list, attr_labels_list): - """Compute regression, classification and centerss targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - gt_bboxes_3d_list (list[Tensor]): 3D Ground truth bboxes of each - image, each has shape (num_gt, bbox_code_size). - gt_labels_3d_list (list[Tensor]): 3D Ground truth labels of each - box, each has shape (num_gt,). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - each has shape (num_gt, 2). - depths_list (list[Tensor]): Depth of projected 3D centers onto 2D - image, each has shape (num_gt, 1). - attr_labels_list (list[Tensor]): Attribute labels of each box, - each has shape (num_gt,). - - Returns: - tuple: - concat_lvl_labels (list[Tensor]): Labels of each level. - concat_lvl_bbox_targets (list[Tensor]): BBox targets of each - level. 
- """ - assert len(points) == len(self.regress_ranges) - num_levels = len(points) - # expand regress ranges to align with points - expanded_regress_ranges = [ - points[i].new_tensor(self.regress_ranges[i])[None].expand_as( - points[i]) for i in range(num_levels) - ] - # concat all levels points and regress ranges - concat_regress_ranges = torch.cat(expanded_regress_ranges, dim=0) - concat_points = torch.cat(points, dim=0) - - # the number of points per img, per lvl - num_points = [center.size(0) for center in points] - - if attr_labels_list is None: - attr_labels_list = [ - gt_labels.new_full(gt_labels.shape, self.attr_background_label) - for gt_labels in gt_labels_list - ] - - # get labels and bbox_targets of each image - _, _, labels_3d_list, bbox_targets_3d_list, centerness_targets_list, \ - attr_targets_list = multi_apply( - self._get_target_single, - gt_bboxes_list, - gt_labels_list, - gt_bboxes_3d_list, - gt_labels_3d_list, - centers2d_list, - depths_list, - attr_labels_list, - points=concat_points, - regress_ranges=concat_regress_ranges, - num_points_per_lvl=num_points) - - # split to per img, per level - labels_3d_list = [ - labels_3d.split(num_points, 0) for labels_3d in labels_3d_list - ] - bbox_targets_3d_list = [ - bbox_targets_3d.split(num_points, 0) - for bbox_targets_3d in bbox_targets_3d_list - ] - centerness_targets_list = [ - centerness_targets.split(num_points, 0) - for centerness_targets in centerness_targets_list - ] - attr_targets_list = [ - attr_targets.split(num_points, 0) - for attr_targets in attr_targets_list - ] - - # concat per level image - concat_lvl_labels_3d = [] - concat_lvl_bbox_targets_3d = [] - concat_lvl_centerness_targets = [] - concat_lvl_attr_targets = [] - for i in range(num_levels): - concat_lvl_labels_3d.append( - torch.cat([labels[i] for labels in labels_3d_list])) - concat_lvl_centerness_targets.append( - torch.cat([ - centerness_targets[i] - for centerness_targets in centerness_targets_list - ])) - bbox_targets_3d = torch.cat([ - bbox_targets_3d[i] for bbox_targets_3d in bbox_targets_3d_list - ]) - concat_lvl_attr_targets.append( - torch.cat( - [attr_targets[i] for attr_targets in attr_targets_list])) - if self.norm_on_bbox: - bbox_targets_3d[:, : - 2] = bbox_targets_3d[:, :2] / self.strides[i] - concat_lvl_bbox_targets_3d.append(bbox_targets_3d) - return concat_lvl_labels_3d, concat_lvl_bbox_targets_3d, \ - concat_lvl_centerness_targets, concat_lvl_attr_targets - - def _get_target_single(self, gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels, - points, regress_ranges, num_points_per_lvl): - """Compute regression and classification targets for a single image.""" - num_points = points.size(0) - num_gts = gt_labels.size(0) - if not isinstance(gt_bboxes_3d, torch.Tensor): - gt_bboxes_3d = gt_bboxes_3d.tensor.to(gt_bboxes.device) - if num_gts == 0: - return gt_labels.new_full((num_points,), self.background_label), \ - gt_bboxes.new_zeros((num_points, 4)), \ - gt_labels_3d.new_full( - (num_points,), self.background_label), \ - gt_bboxes_3d.new_zeros((num_points, self.bbox_code_size)), \ - gt_bboxes_3d.new_zeros((num_points,)), \ - attr_labels.new_full( - (num_points,), self.attr_background_label) - - # change orientation to local yaw - gt_bboxes_3d[..., 6] = -torch.atan2( - gt_bboxes_3d[..., 0], gt_bboxes_3d[..., 2]) + gt_bboxes_3d[..., 6] - - areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1]) - areas = areas[None].repeat(num_points, 1) - regress_ranges = regress_ranges[:, None, 
:].expand( - num_points, num_gts, 2) - gt_bboxes = gt_bboxes[None].expand(num_points, num_gts, 4) - centers2d = centers2d[None].expand(num_points, num_gts, 2) - gt_bboxes_3d = gt_bboxes_3d[None].expand(num_points, num_gts, - self.bbox_code_size) - depths = depths[None, :, None].expand(num_points, num_gts, 1) - xs, ys = points[:, 0], points[:, 1] - xs = xs[:, None].expand(num_points, num_gts) - ys = ys[:, None].expand(num_points, num_gts) - - delta_xs = (xs - centers2d[..., 0])[..., None] - delta_ys = (ys - centers2d[..., 1])[..., None] - bbox_targets_3d = torch.cat( - (delta_xs, delta_ys, depths, gt_bboxes_3d[..., 3:]), dim=-1) - - left = xs - gt_bboxes[..., 0] - right = gt_bboxes[..., 2] - xs - top = ys - gt_bboxes[..., 1] - bottom = gt_bboxes[..., 3] - ys - bbox_targets = torch.stack((left, top, right, bottom), -1) - - assert self.center_sampling is True, 'Setting center_sampling to '\ - 'False has not been implemented for FCOS3D.' - # condition1: inside a `center bbox` - radius = self.center_sample_radius - center_xs = centers2d[..., 0] - center_ys = centers2d[..., 1] - center_gts = torch.zeros_like(gt_bboxes) - stride = center_xs.new_zeros(center_xs.shape) - - # project the points on current lvl back to the `original` sizes - lvl_begin = 0 - for lvl_idx, num_points_lvl in enumerate(num_points_per_lvl): - lvl_end = lvl_begin + num_points_lvl - stride[lvl_begin:lvl_end] = self.strides[lvl_idx] * radius - lvl_begin = lvl_end - - center_gts[..., 0] = center_xs - stride - center_gts[..., 1] = center_ys - stride - center_gts[..., 2] = center_xs + stride - center_gts[..., 3] = center_ys + stride - - cb_dist_left = xs - center_gts[..., 0] - cb_dist_right = center_gts[..., 2] - xs - cb_dist_top = ys - center_gts[..., 1] - cb_dist_bottom = center_gts[..., 3] - ys - center_bbox = torch.stack( - (cb_dist_left, cb_dist_top, cb_dist_right, cb_dist_bottom), -1) - inside_gt_bbox_mask = center_bbox.min(-1)[0] > 0 - - # condition2: limit the regression range for each location - max_regress_distance = bbox_targets.max(-1)[0] - inside_regress_range = ( - (max_regress_distance >= regress_ranges[..., 0]) - & (max_regress_distance <= regress_ranges[..., 1])) - - # center-based criterion to deal with ambiguity - dists = torch.sqrt(torch.sum(bbox_targets_3d[..., :2]**2, dim=-1)) - dists[inside_gt_bbox_mask == 0] = INF - dists[inside_regress_range == 0] = INF - min_dist, min_dist_inds = dists.min(dim=1) - - labels = gt_labels[min_dist_inds] - labels_3d = gt_labels_3d[min_dist_inds] - attr_labels = attr_labels[min_dist_inds] - labels[min_dist == INF] = self.background_label # set as BG - labels_3d[min_dist == INF] = self.background_label # set as BG - attr_labels[min_dist == INF] = self.attr_background_label - - bbox_targets = bbox_targets[range(num_points), min_dist_inds] - bbox_targets_3d = bbox_targets_3d[range(num_points), min_dist_inds] - relative_dists = torch.sqrt( - torch.sum(bbox_targets_3d[..., :2]**2, - dim=-1)) / (1.414 * stride[:, 0]) - # [N, 1] / [N, 1] - centerness_targets = torch.exp(-self.centerness_alpha * relative_dists) - - return labels, bbox_targets, labels_3d, bbox_targets_3d, \ - centerness_targets, attr_labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/free_anchor3d_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/free_anchor3d_head.py deleted file mode 100644 index a56f2c7c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/free_anchor3d_head.py +++ /dev/null @@ -1,285 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import torch -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from mmdet3d.core.bbox import bbox_overlaps_nearest_3d -from ..builder import HEADS -from .anchor3d_head import Anchor3DHead -from .train_mixins import get_direction_target - - -@HEADS.register_module() -class FreeAnchor3DHead(Anchor3DHead): - r"""`FreeAnchor `_ head for 3D detection. - - Note: - This implementation is directly modified from the `mmdet implementation - `_. - We find it also works on 3D detection with minor modification, i.e., - different hyper-parameters and a additional direction classifier. - - Args: - pre_anchor_topk (int): Number of boxes that be token in each bag. - bbox_thr (float): The threshold of the saturated linear function. It is - usually the same with the IoU threshold used in NMS. - gamma (float): Gamma parameter in focal loss. - alpha (float): Alpha parameter in focal loss. - kwargs (dict): Other arguments are the same as those in :class:`Anchor3DHead`. - """ # noqa: E501 - - def __init__(self, - pre_anchor_topk=50, - bbox_thr=0.6, - gamma=2.0, - alpha=0.5, - init_cfg=None, - **kwargs): - super().__init__(init_cfg=init_cfg, **kwargs) - self.pre_anchor_topk = pre_anchor_topk - self.bbox_thr = bbox_thr - self.gamma = gamma - self.alpha = alpha - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - gt_bboxes, - gt_labels, - input_metas, - gt_bboxes_ignore=None): - """Calculate loss of FreeAnchor head. - - Args: - cls_scores (list[torch.Tensor]): Classification scores of - different samples. - bbox_preds (list[torch.Tensor]): Box predictions of - different samples - dir_cls_preds (list[torch.Tensor]): Direction predictions of - different samples - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): Ground truth boxes. - gt_labels (list[torch.Tensor]): Ground truth labels. - input_metas (list[dict]): List of input meta information. - gt_bboxes_ignore (list[:obj:`BaseInstance3DBoxes`], optional): - Ground truth boxes that should be ignored. Defaults to None. - - Returns: - dict[str, torch.Tensor]: Loss items. - - - positive_bag_loss (torch.Tensor): Loss of positive samples. - - negative_bag_loss (torch.Tensor): Loss of negative samples. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - anchor_list = self.get_anchors(featmap_sizes, input_metas) - anchors = [torch.cat(anchor) for anchor in anchor_list] - - # concatenate each level - cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape( - cls_score.size(0), -1, self.num_classes) - for cls_score in cls_scores - ] - bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape( - bbox_pred.size(0), -1, self.box_code_size) - for bbox_pred in bbox_preds - ] - dir_cls_preds = [ - dir_cls_pred.permute(0, 2, 3, - 1).reshape(dir_cls_pred.size(0), -1, 2) - for dir_cls_pred in dir_cls_preds - ] - - cls_scores = torch.cat(cls_scores, dim=1) - bbox_preds = torch.cat(bbox_preds, dim=1) - dir_cls_preds = torch.cat(dir_cls_preds, dim=1) - - cls_prob = torch.sigmoid(cls_scores) - box_prob = [] - num_pos = 0 - positive_losses = [] - for _, (anchors_, gt_labels_, gt_bboxes_, cls_prob_, bbox_preds_, - dir_cls_preds_) in enumerate( - zip(anchors, gt_labels, gt_bboxes, cls_prob, bbox_preds, - dir_cls_preds)): - - gt_bboxes_ = gt_bboxes_.tensor.to(anchors_.device) - - with torch.no_grad(): - # box_localization: a_{j}^{loc}, shape: [j, 4] - pred_boxes = self.bbox_coder.decode(anchors_, bbox_preds_) - - # object_box_iou: IoU_{ij}^{loc}, shape: [i, j] - object_box_iou = bbox_overlaps_nearest_3d( - gt_bboxes_, pred_boxes) - - # object_box_prob: P{a_{j} -> b_{i}}, shape: [i, j] - t1 = self.bbox_thr - t2 = object_box_iou.max( - dim=1, keepdim=True).values.clamp(min=t1 + 1e-6) - object_box_prob = ((object_box_iou - t1) / (t2 - t1)).clamp( - min=0, max=1) - - # object_cls_box_prob: P{a_{j} -> b_{i}}, shape: [i, c, j] - num_obj = gt_labels_.size(0) - indices = torch.stack( - [torch.arange(num_obj).type_as(gt_labels_), gt_labels_], - dim=0) - - object_cls_box_prob = torch.sparse_coo_tensor( - indices, object_box_prob) - - # image_box_iou: P{a_{j} \in A_{+}}, shape: [c, j] - """ - from "start" to "end" implement: - image_box_iou = torch.sparse.max(object_cls_box_prob, - dim=0).t() - - """ - # start - box_cls_prob = torch.sparse.sum( - object_cls_box_prob, dim=0).to_dense() - - indices = torch.nonzero(box_cls_prob, as_tuple=False).t_() - if indices.numel() == 0: - image_box_prob = torch.zeros( - anchors_.size(0), - self.num_classes).type_as(object_box_prob) - else: - nonzero_box_prob = torch.where( - (gt_labels_.unsqueeze(dim=-1) == indices[0]), - object_box_prob[:, indices[1]], - torch.tensor( - [0]).type_as(object_box_prob)).max(dim=0).values - - # upmap to shape [j, c] - image_box_prob = torch.sparse_coo_tensor( - indices.flip([0]), - nonzero_box_prob, - size=(anchors_.size(0), self.num_classes)).to_dense() - # end - - box_prob.append(image_box_prob) - - # construct bags for objects - match_quality_matrix = bbox_overlaps_nearest_3d( - gt_bboxes_, anchors_) - _, matched = torch.topk( - match_quality_matrix, - self.pre_anchor_topk, - dim=1, - sorted=False) - del match_quality_matrix - - # matched_cls_prob: P_{ij}^{cls} - matched_cls_prob = torch.gather( - cls_prob_[matched], 2, - gt_labels_.view(-1, 1, 1).repeat(1, self.pre_anchor_topk, - 1)).squeeze(2) - - # matched_box_prob: P_{ij}^{loc} - matched_anchors = anchors_[matched] - matched_object_targets = self.bbox_coder.encode( - matched_anchors, - gt_bboxes_.unsqueeze(dim=1).expand_as(matched_anchors)) - - # direction classification loss - loss_dir = None - if self.use_direction_classifier: - # also calculate direction prob: P_{ij}^{dir} - matched_dir_targets = 
get_direction_target( - matched_anchors, - matched_object_targets, - self.dir_offset, - self.dir_limit_offset, - one_hot=False) - loss_dir = self.loss_dir( - dir_cls_preds_[matched].transpose(-2, -1), - matched_dir_targets, - reduction_override='none') - - # generate bbox weights - if self.diff_rad_by_sin: - bbox_preds_[matched], matched_object_targets = \ - self.add_sin_difference( - bbox_preds_[matched], matched_object_targets) - bbox_weights = matched_anchors.new_ones(matched_anchors.size()) - # Use pop is not right, check performance - code_weight = self.train_cfg.get('code_weight', None) - if code_weight: - bbox_weights = bbox_weights * bbox_weights.new_tensor( - code_weight) - loss_bbox = self.loss_bbox( - bbox_preds_[matched], - matched_object_targets, - bbox_weights, - reduction_override='none').sum(-1) - - if loss_dir is not None: - loss_bbox += loss_dir - matched_box_prob = torch.exp(-loss_bbox) - - # positive_losses: {-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )} - num_pos += len(gt_bboxes_) - positive_losses.append( - self.positive_bag_loss(matched_cls_prob, matched_box_prob)) - - positive_loss = torch.cat(positive_losses).sum() / max(1, num_pos) - - # box_prob: P{a_{j} \in A_{+}} - box_prob = torch.stack(box_prob, dim=0) - - # negative_loss: - # \sum_{j}{ FL((1 - P{a_{j} \in A_{+}}) * (1 - P_{j}^{bg})) } / n||B|| - negative_loss = self.negative_bag_loss(cls_prob, box_prob).sum() / max( - 1, num_pos * self.pre_anchor_topk) - - losses = { - 'positive_bag_loss': positive_loss, - 'negative_bag_loss': negative_loss - } - return losses - - def positive_bag_loss(self, matched_cls_prob, matched_box_prob): - """Generate positive bag loss. - - Args: - matched_cls_prob (torch.Tensor): Classification probability - of matched positive samples. - matched_box_prob (torch.Tensor): Bounding box probability - of matched positive samples. - - Returns: - torch.Tensor: Loss of positive samples. - """ - # bag_prob = Mean-max(matched_prob) - matched_prob = matched_cls_prob * matched_box_prob - weight = 1 / torch.clamp(1 - matched_prob, 1e-12, None) - weight /= weight.sum(dim=1).unsqueeze(dim=-1) - bag_prob = (weight * matched_prob).sum(dim=1) - # positive_bag_loss = -self.alpha * log(bag_prob) - bag_prob = bag_prob.clamp(0, 1) # to avoid bug of BCE, check - return self.alpha * F.binary_cross_entropy( - bag_prob, torch.ones_like(bag_prob), reduction='none') - - def negative_bag_loss(self, cls_prob, box_prob): - """Generate negative bag loss. - - Args: - cls_prob (torch.Tensor): Classification probability - of negative samples. - box_prob (torch.Tensor): Bounding box probability - of negative samples. - - Returns: - torch.Tensor: Loss of negative samples. - """ - prob = cls_prob * (1 - box_prob) - prob = prob.clamp(0, 1) # to avoid bug of BCE, check - negative_bag_loss = prob**self.gamma * F.binary_cross_entropy( - prob, torch.zeros_like(prob), reduction='none') - return (1 - self.alpha) * negative_bag_loss diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/groupfree3d_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/groupfree3d_head.py deleted file mode 100644 index b76cb05a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/groupfree3d_head.py +++ /dev/null @@ -1,994 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
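
The `FreeAnchor3DHead` removed above builds its training signal from two "bag" losses: a mean-max positive bag loss over the top-k anchors matched to each ground-truth box, and a focal-style negative bag loss over the remaining anchors. For reference, here is a minimal, self-contained sketch of those two terms; it assumes PyTorch, restates the head's methods as free functions, and reuses the head's default hyper-parameters (`alpha=0.5`, `gamma=2.0`). The function names and the toy shapes in the smoke test are illustrative only, not part of the deleted file.

```python
import torch
import torch.nn.functional as F


def positive_bag_loss(matched_cls_prob, matched_box_prob, alpha=0.5):
    """Mean-max positive bag loss.

    Both inputs have shape (num_gt, pre_anchor_topk): classification and
    localization probabilities of the top-k anchors matched to each GT box.
    """
    matched_prob = matched_cls_prob * matched_box_prob
    # anchors whose joint probability is close to 1 dominate the bag (mean-max)
    weight = 1 / torch.clamp(1 - matched_prob, 1e-12, None)
    weight = weight / weight.sum(dim=1, keepdim=True)
    bag_prob = (weight * matched_prob).sum(dim=1).clamp(0, 1)
    # -alpha * log(bag_prob), written as BCE against an all-ones target
    return alpha * F.binary_cross_entropy(
        bag_prob, torch.ones_like(bag_prob), reduction='none')


def negative_bag_loss(cls_prob, box_prob, gamma=2.0, alpha=0.5):
    """Focal-style penalty on anchors unlikely to belong to any positive bag."""
    prob = (cls_prob * (1 - box_prob)).clamp(0, 1)
    return (1 - alpha) * prob ** gamma * F.binary_cross_entropy(
        prob, torch.zeros_like(prob), reduction='none')


# smoke test with toy shapes matching the head's usage
cls_p = torch.rand(4, 50)                 # (num_gt, pre_anchor_topk)
box_p = torch.rand(4, 50)
print(positive_bag_loss(cls_p, box_p).shape)          # torch.Size([4])
print(negative_bag_loss(torch.rand(2, 1000, 3),
                        torch.rand(2, 1000, 3)).shape)  # (B, num_anchors, C)
```

In the deleted head, `matched_box_prob` is obtained as `exp(-loss_bbox)` over the top-k matched anchors, so the positive term couples classification confidence with localization quality for each ground-truth bag.
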
-import copy - -import numpy as np -import torch -from mmcv import ConfigDict -from mmcv.cnn import ConvModule, xavier_init -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer) -from mmcv.ops import PointsSampler as Points_Sampler -from mmcv.ops import gather_points -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.core.post_processing import aligned_3d_nms -from mmdet.core import build_bbox_coder, multi_apply -from ..builder import HEADS, build_loss -from .base_conv_bbox_head import BaseConvBboxHead - -EPS = 1e-6 - - -class PointsObjClsModule(BaseModule): - """object candidate point prediction from seed point features. - - Args: - in_channel (int): number of channels of seed point features. - num_convs (int, optional): number of conv layers. - Default: 3. - conv_cfg (dict, optional): Config of convolution. - Default: dict(type='Conv1d'). - norm_cfg (dict, optional): Config of normalization. - Default: dict(type='BN1d'). - act_cfg (dict, optional): Config of activation. - Default: dict(type='ReLU'). - """ - - def __init__(self, - in_channel, - num_convs=3, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - conv_channels = [in_channel for _ in range(num_convs - 1)] - conv_channels.append(1) - - self.mlp = nn.Sequential() - prev_channels = in_channel - for i in range(num_convs): - self.mlp.add_module( - f'layer{i}', - ConvModule( - prev_channels, - conv_channels[i], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg if i < num_convs - 1 else None, - act_cfg=act_cfg if i < num_convs - 1 else None, - bias=True, - inplace=True)) - prev_channels = conv_channels[i] - - def forward(self, seed_features): - """Forward pass. - - Args: - seed_features (torch.Tensor): seed features, dims: - (batch_size, feature_dim, num_seed) - - Returns: - torch.Tensor: objectness logits, dim: - (batch_size, 1, num_seed) - """ - return self.mlp(seed_features) - - -class GeneralSamplingModule(nn.Module): - """Sampling Points. - - Sampling points with given index. - """ - - def forward(self, xyz, features, sample_inds): - """Forward pass. - - Args: - xyz: (B, N, 3) the coordinates of the features. - features (Tensor): (B, C, N) features to sample. - sample_inds (Tensor): (B, M) the given index, - where M is the number of points. - - Returns: - Tensor: (B, M, 3) coordinates of sampled features - Tensor: (B, C, M) the sampled features. - Tensor: (B, M) the given index. - """ - xyz_t = xyz.transpose(1, 2).contiguous() - new_xyz = gather_points(xyz_t, sample_inds).transpose(1, - 2).contiguous() - new_features = gather_points(features, sample_inds).contiguous() - - return new_xyz, new_features, sample_inds - - -@HEADS.register_module() -class GroupFree3DHead(BaseModule): - r"""Bbox head of `Group-Free 3D `_. - - Args: - num_classes (int): The number of class. - in_channels (int): The dims of input features from backbone. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for encoding and - decoding boxes. - num_decoder_layers (int): The number of transformer decoder layers. - transformerlayers (dict): Config for transformer decoder. - train_cfg (dict): Config for training. - test_cfg (dict): Config for testing. - num_proposal (int): The number of initial sampling candidates. - pred_layer_cfg (dict): Config of classfication and regression - prediction layers. 
- size_cls_agnostic (bool): Whether the predicted size is class-agnostic. - gt_per_seed (int): the number of candidate instance each point belongs - to. - sampling_objectness_loss (dict): Config of initial sampling - objectness loss. - objectness_loss (dict): Config of objectness loss. - center_loss (dict): Config of center loss. - dir_class_loss (dict): Config of direction classification loss. - dir_res_loss (dict): Config of direction residual regression loss. - size_class_loss (dict): Config of size classification loss. - size_res_loss (dict): Config of size residual regression loss. - size_reg_loss (dict): Config of class-agnostic size regression loss. - semantic_loss (dict): Config of point-wise semantic segmentation loss. - """ - - def __init__(self, - num_classes, - in_channels, - bbox_coder, - num_decoder_layers, - transformerlayers, - decoder_self_posembeds=dict( - type='ConvBNPositionalEncoding', - input_channel=6, - num_pos_feats=288), - decoder_cross_posembeds=dict( - type='ConvBNPositionalEncoding', - input_channel=3, - num_pos_feats=288), - train_cfg=None, - test_cfg=None, - num_proposal=128, - pred_layer_cfg=None, - size_cls_agnostic=True, - gt_per_seed=3, - sampling_objectness_loss=None, - objectness_loss=None, - center_loss=None, - dir_class_loss=None, - dir_res_loss=None, - size_class_loss=None, - size_res_loss=None, - size_reg_loss=None, - semantic_loss=None, - init_cfg=None): - super(GroupFree3DHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.num_proposal = num_proposal - self.in_channels = in_channels - self.num_decoder_layers = num_decoder_layers - self.size_cls_agnostic = size_cls_agnostic - self.gt_per_seed = gt_per_seed - - # Transformer decoder layers - if isinstance(transformerlayers, ConfigDict): - transformerlayers = [ - copy.deepcopy(transformerlayers) - for _ in range(num_decoder_layers) - ] - else: - assert isinstance(transformerlayers, list) and \ - len(transformerlayers) == num_decoder_layers - self.decoder_layers = nn.ModuleList() - for i in range(self.num_decoder_layers): - self.decoder_layers.append( - build_transformer_layer(transformerlayers[i])) - self.embed_dims = self.decoder_layers[0].embed_dims - assert self.embed_dims == decoder_self_posembeds['num_pos_feats'] - assert self.embed_dims == decoder_cross_posembeds['num_pos_feats'] - - # bbox_coder - self.bbox_coder = build_bbox_coder(bbox_coder) - self.num_sizes = self.bbox_coder.num_sizes - self.num_dir_bins = self.bbox_coder.num_dir_bins - - # Initial object candidate sampling - self.gsample_module = GeneralSamplingModule() - self.fps_module = Points_Sampler([self.num_proposal]) - self.points_obj_cls = PointsObjClsModule(self.in_channels) - - self.fp16_enabled = False - - # initial candidate prediction - self.conv_pred = BaseConvBboxHead( - **pred_layer_cfg, - num_cls_out_channels=self._get_cls_out_channels(), - num_reg_out_channels=self._get_reg_out_channels()) - - # query proj and key proj - self.decoder_query_proj = nn.Conv1d( - self.embed_dims, self.embed_dims, kernel_size=1) - self.decoder_key_proj = nn.Conv1d( - self.embed_dims, self.embed_dims, kernel_size=1) - - # query position embed - self.decoder_self_posembeds = nn.ModuleList() - for _ in range(self.num_decoder_layers): - self.decoder_self_posembeds.append( - build_positional_encoding(decoder_self_posembeds)) - # key position embed - self.decoder_cross_posembeds = nn.ModuleList() - for _ in range(self.num_decoder_layers): - 
self.decoder_cross_posembeds.append( - build_positional_encoding(decoder_cross_posembeds)) - - # Prediction Head - self.prediction_heads = nn.ModuleList() - for i in range(self.num_decoder_layers): - self.prediction_heads.append( - BaseConvBboxHead( - **pred_layer_cfg, - num_cls_out_channels=self._get_cls_out_channels(), - num_reg_out_channels=self._get_reg_out_channels())) - - self.sampling_objectness_loss = build_loss(sampling_objectness_loss) - self.objectness_loss = build_loss(objectness_loss) - self.center_loss = build_loss(center_loss) - self.dir_res_loss = build_loss(dir_res_loss) - self.dir_class_loss = build_loss(dir_class_loss) - self.semantic_loss = build_loss(semantic_loss) - if self.size_cls_agnostic: - self.size_reg_loss = build_loss(size_reg_loss) - else: - self.size_res_loss = build_loss(size_res_loss) - self.size_class_loss = build_loss(size_class_loss) - - def init_weights(self): - """Initialize weights of transformer decoder in GroupFree3DHead.""" - # initialize transformer - for m in self.decoder_layers.parameters(): - if m.dim() > 1: - xavier_init(m, distribution='uniform') - for m in self.decoder_self_posembeds.parameters(): - if m.dim() > 1: - xavier_init(m, distribution='uniform') - for m in self.decoder_cross_posembeds.parameters(): - if m.dim() > 1: - xavier_init(m, distribution='uniform') - - def _get_cls_out_channels(self): - """Return the channel number of classification outputs.""" - # Class numbers (k) + objectness (1) - return self.num_classes + 1 - - def _get_reg_out_channels(self): - """Return the channel number of regression outputs.""" - # center residual (3), - # heading class+residual (num_dir_bins*2), - # size class+residual(num_sizes*4 or 3) - if self.size_cls_agnostic: - return 6 + self.num_dir_bins * 2 - else: - return 3 + self.num_dir_bins * 2 + self.num_sizes * 4 - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Coordinates of input points. - torch.Tensor: Features of input points. - torch.Tensor: Indices of input points. - """ - - seed_points = feat_dict['fp_xyz'][-1] - seed_features = feat_dict['fp_features'][-1] - seed_indices = feat_dict['fp_indices'][-1] - - return seed_points, seed_features, seed_indices - - def forward(self, feat_dict, sample_mod): - """Forward pass. - - Note: - The forward of GroupFree3DHead is divided into 2 steps: - - 1. Initial object candidates sampling. - 2. Iterative object box prediction by transformer decoder. - - Args: - feat_dict (dict): Feature dict from backbone. - sample_mod (str): sample mode for initial candidates sampling. - - Returns: - results (dict): Predictions of GroupFree3D head. - """ - assert sample_mod in ['fps', 'kps'] - - seed_xyz, seed_features, seed_indices = self._extract_input(feat_dict) - - results = dict( - seed_points=seed_xyz, - seed_features=seed_features, - seed_indices=seed_indices) - - # 1. Initial object candidates sampling. 
- if sample_mod == 'fps': - sample_inds = self.fps_module(seed_xyz, seed_features) - elif sample_mod == 'kps': - points_obj_cls_logits = self.points_obj_cls( - seed_features) # (batch_size, 1, num_seed) - points_obj_cls_scores = points_obj_cls_logits.sigmoid().squeeze(1) - sample_inds = torch.topk(points_obj_cls_scores, - self.num_proposal)[1].int() - results['seeds_obj_cls_logits'] = points_obj_cls_logits - else: - raise NotImplementedError( - f'Sample mode {sample_mod} is not supported!') - - candidate_xyz, candidate_features, sample_inds = self.gsample_module( - seed_xyz, seed_features, sample_inds) - - results['query_points_xyz'] = candidate_xyz # (B, M, 3) - results['query_points_feature'] = candidate_features # (B, C, M) - results['query_points_sample_inds'] = sample_inds.long() # (B, M) - - prefix = 'proposal.' - cls_predictions, reg_predictions = self.conv_pred(candidate_features) - decode_res = self.bbox_coder.split_pred(cls_predictions, - reg_predictions, candidate_xyz, - prefix) - - results.update(decode_res) - bbox3d = self.bbox_coder.decode(results, prefix) - - # 2. Iterative object box prediction by transformer decoder. - base_bbox3d = bbox3d[:, :, :6].detach().clone() - - query = self.decoder_query_proj(candidate_features).permute(2, 0, 1) - key = self.decoder_key_proj(seed_features).permute(2, 0, 1) - value = key - - # transformer decoder - results['num_decoder_layers'] = 0 - for i in range(self.num_decoder_layers): - prefix = f's{i}.' - - query_pos = self.decoder_self_posembeds[i](base_bbox3d).permute( - 2, 0, 1) - key_pos = self.decoder_cross_posembeds[i](seed_xyz).permute( - 2, 0, 1) - - query = self.decoder_layers[i]( - query, key, value, query_pos=query_pos, - key_pos=key_pos).permute(1, 2, 0) - - results[f'{prefix}query'] = query - - cls_predictions, reg_predictions = self.prediction_heads[i](query) - decode_res = self.bbox_coder.split_pred(cls_predictions, - reg_predictions, - candidate_xyz, prefix) - # TODO: should save bbox3d instead of decode_res? - results.update(decode_res) - - bbox3d = self.bbox_coder.decode(results, prefix) - results[f'{prefix}bbox3d'] = bbox3d - base_bbox3d = bbox3d[:, :, :6].detach().clone() - query = query.permute(2, 0, 1) - - results['num_decoder_layers'] += 1 - - return results - - @force_fp32(apply_to=('bbox_preds', )) - def loss(self, - bbox_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - img_metas=None, - gt_bboxes_ignore=None, - ret_target=False): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of vote head. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - img_metas (list[dict]): Contain pcd and img's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - ret_target (Bool): Return targets or not. - - Returns: - dict: Losses of GroupFree3D. 
- """ - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - bbox_preds) - (sampling_targets, sampling_weights, assigned_size_targets, - size_class_targets, size_res_targets, dir_class_targets, - dir_res_targets, center_targets, assigned_center_targets, - mask_targets, valid_gt_masks, objectness_targets, objectness_weights, - box_loss_weights, valid_gt_weights) = targets - - batch_size, proposal_num = size_class_targets.shape[:2] - - losses = dict() - - # calculate objectness classification loss - sampling_obj_score = bbox_preds['seeds_obj_cls_logits'].reshape(-1, 1) - sampling_objectness_loss = self.sampling_objectness_loss( - sampling_obj_score, - 1 - sampling_targets.reshape(-1), - sampling_weights.reshape(-1), - avg_factor=batch_size) - losses['sampling_objectness_loss'] = sampling_objectness_loss - - prefixes = ['proposal.'] + [ - f's{i}.' for i in range(bbox_preds['num_decoder_layers']) - ] - num_stages = len(prefixes) - for prefix in prefixes: - - # calculate objectness loss - obj_score = bbox_preds[f'{prefix}obj_scores'].transpose(2, 1) - objectness_loss = self.objectness_loss( - obj_score.reshape(-1, 1), - 1 - objectness_targets.reshape(-1), - objectness_weights.reshape(-1), - avg_factor=batch_size) - losses[f'{prefix}objectness_loss'] = objectness_loss / num_stages - - # calculate center loss - box_loss_weights_expand = box_loss_weights.unsqueeze(-1).expand( - -1, -1, 3) - center_loss = self.center_loss( - bbox_preds[f'{prefix}center'], - assigned_center_targets, - weight=box_loss_weights_expand) - losses[f'{prefix}center_loss'] = center_loss / num_stages - - # calculate direction class loss - dir_class_loss = self.dir_class_loss( - bbox_preds[f'{prefix}dir_class'].transpose(2, 1), - dir_class_targets, - weight=box_loss_weights) - losses[f'{prefix}dir_class_loss'] = dir_class_loss / num_stages - - # calculate direction residual loss - heading_label_one_hot = size_class_targets.new_zeros( - (batch_size, proposal_num, self.num_dir_bins)) - heading_label_one_hot.scatter_(2, dir_class_targets.unsqueeze(-1), - 1) - dir_res_norm = torch.sum( - bbox_preds[f'{prefix}dir_res_norm'] * heading_label_one_hot, - -1) - dir_res_loss = self.dir_res_loss( - dir_res_norm, dir_res_targets, weight=box_loss_weights) - losses[f'{prefix}dir_res_loss'] = dir_res_loss / num_stages - - if self.size_cls_agnostic: - # calculate class-agnostic size loss - size_reg_loss = self.size_reg_loss( - bbox_preds[f'{prefix}size'], - assigned_size_targets, - weight=box_loss_weights_expand) - losses[f'{prefix}size_reg_loss'] = size_reg_loss / num_stages - - else: - # calculate size class loss - size_class_loss = self.size_class_loss( - bbox_preds[f'{prefix}size_class'].transpose(2, 1), - size_class_targets, - weight=box_loss_weights) - losses[ - f'{prefix}size_class_loss'] = size_class_loss / num_stages - - # calculate size residual loss - one_hot_size_targets = size_class_targets.new_zeros( - (batch_size, proposal_num, self.num_sizes)) - one_hot_size_targets.scatter_(2, - size_class_targets.unsqueeze(-1), - 1) - one_hot_size_targets_expand = one_hot_size_targets.unsqueeze( - -1).expand(-1, -1, -1, 3).contiguous() - size_residual_norm = torch.sum( - bbox_preds[f'{prefix}size_res_norm'] * - one_hot_size_targets_expand, 2) - box_loss_weights_expand = box_loss_weights.unsqueeze( - -1).expand(-1, -1, 3) - size_res_loss = self.size_res_loss( - size_residual_norm, - size_res_targets, - weight=box_loss_weights_expand) - losses[f'{prefix}size_res_loss'] = size_res_loss / 
num_stages - - # calculate semantic loss - semantic_loss = self.semantic_loss( - bbox_preds[f'{prefix}sem_scores'].transpose(2, 1), - mask_targets, - weight=box_loss_weights) - losses[f'{prefix}semantic_loss'] = semantic_loss / num_stages - - if ret_target: - losses['targets'] = targets - - return losses - - def get_targets(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - bbox_preds=None, - max_gt_num=64): - """Generate targets of GroupFree3D head. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - pts_semantic_mask (list[torch.Tensor]): Point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): Point-wise instance - label of each batch. - bbox_preds (torch.Tensor): Bounding box predictions of vote head. - max_gt_num (int): Max number of GTs for single batch. - - Returns: - tuple[torch.Tensor]: Targets of GroupFree3D head. - """ - # find empty example - valid_gt_masks = list() - gt_num = list() - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - valid_gt_masks.append(gt_labels_3d[index].new_zeros(1)) - gt_num.append(1) - else: - valid_gt_masks.append(gt_labels_3d[index].new_ones( - gt_labels_3d[index].shape)) - gt_num.append(gt_labels_3d[index].shape[0]) - # max_gt_num = max(gt_num) - - max_gt_nums = [max_gt_num for _ in range(len(gt_labels_3d))] - - if pts_semantic_mask is None: - pts_semantic_mask = [None for i in range(len(gt_labels_3d))] - pts_instance_mask = [None for i in range(len(gt_labels_3d))] - - seed_points = [ - bbox_preds['seed_points'][i] for i in range(len(gt_labels_3d)) - ] - - seed_indices = [ - bbox_preds['seed_indices'][i] for i in range(len(gt_labels_3d)) - ] - - candidate_indices = [ - bbox_preds['query_points_sample_inds'][i] - for i in range(len(gt_labels_3d)) - ] - - (sampling_targets, assigned_size_targets, size_class_targets, - size_res_targets, dir_class_targets, dir_res_targets, center_targets, - assigned_center_targets, mask_targets, objectness_targets, - objectness_masks) = multi_apply(self.get_targets_single, points, - gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - max_gt_nums, seed_points, - seed_indices, candidate_indices) - - # pad targets as original code of GroupFree3D. 
- for index in range(len(gt_labels_3d)): - pad_num = max_gt_num - gt_labels_3d[index].shape[0] - valid_gt_masks[index] = F.pad(valid_gt_masks[index], (0, pad_num)) - - sampling_targets = torch.stack(sampling_targets) - sampling_weights = (sampling_targets >= 0).float() - sampling_normalizer = sampling_weights.sum(dim=1, keepdim=True).float() - sampling_weights /= sampling_normalizer.clamp(min=1.0) - - assigned_size_targets = torch.stack(assigned_size_targets) - center_targets = torch.stack(center_targets) - valid_gt_masks = torch.stack(valid_gt_masks) - - assigned_center_targets = torch.stack(assigned_center_targets) - objectness_targets = torch.stack(objectness_targets) - - objectness_weights = torch.stack(objectness_masks) - cls_normalizer = objectness_weights.sum(dim=1, keepdim=True).float() - objectness_weights /= cls_normalizer.clamp(min=1.0) - - box_loss_weights = objectness_targets.float() / ( - objectness_targets.sum().float() + EPS) - - valid_gt_weights = valid_gt_masks.float() / ( - valid_gt_masks.sum().float() + EPS) - - dir_class_targets = torch.stack(dir_class_targets) - dir_res_targets = torch.stack(dir_res_targets) - size_class_targets = torch.stack(size_class_targets) - size_res_targets = torch.stack(size_res_targets) - mask_targets = torch.stack(mask_targets) - - return (sampling_targets, sampling_weights, assigned_size_targets, - size_class_targets, size_res_targets, dir_class_targets, - dir_res_targets, center_targets, assigned_center_targets, - mask_targets, valid_gt_masks, objectness_targets, - objectness_weights, box_loss_weights, valid_gt_weights) - - def get_targets_single(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - max_gt_nums=None, - seed_points=None, - seed_indices=None, - candidate_indices=None, - seed_points_obj_topk=4): - """Generate targets of GroupFree3D head for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. - gt_labels_3d (torch.Tensor): Labels of each batch. - pts_semantic_mask (torch.Tensor): Point-wise semantic - label of each batch. - pts_instance_mask (torch.Tensor): Point-wise instance - label of each batch. - max_gt_nums (int): Max number of GTs for single batch. - seed_points (torch.Tensor): Coordinates of seed points. - seed_indices (torch.Tensor): Indices of seed points. - candidate_indices (torch.Tensor): Indices of object candidates. - seed_points_obj_topk (int): k value of k-Closest Points Sampling. - - Returns: - tuple[torch.Tensor]: Targets of GroupFree3D head. 
- """ - - assert self.bbox_coder.with_rot or pts_semantic_mask is not None - - gt_bboxes_3d = gt_bboxes_3d.to(points.device) - - # generate center, dir, size target - (center_targets, size_targets, size_class_targets, size_res_targets, - dir_class_targets, - dir_res_targets) = self.bbox_coder.encode(gt_bboxes_3d, gt_labels_3d) - - # pad targets as original code of GroupFree3D - pad_num = max_gt_nums - gt_labels_3d.shape[0] - box_label_mask = points.new_zeros([max_gt_nums]) - box_label_mask[:gt_labels_3d.shape[0]] = 1 - - gt_bboxes_pad = F.pad(gt_bboxes_3d.tensor, (0, 0, 0, pad_num)) - gt_bboxes_pad[gt_labels_3d.shape[0]:, 0:3] += 1000 - gt_bboxes_3d = gt_bboxes_3d.new_box(gt_bboxes_pad) - - gt_labels_3d = F.pad(gt_labels_3d, (0, pad_num)) - - center_targets = F.pad(center_targets, (0, 0, 0, pad_num), value=1000) - size_targets = F.pad(size_targets, (0, 0, 0, pad_num)) - size_class_targets = F.pad(size_class_targets, (0, pad_num)) - size_res_targets = F.pad(size_res_targets, (0, 0, 0, pad_num)) - dir_class_targets = F.pad(dir_class_targets, (0, pad_num)) - dir_res_targets = F.pad(dir_res_targets, (0, pad_num)) - - # 0. generate pts_instance_label and pts_obj_mask - num_points = points.shape[0] - pts_obj_mask = points.new_zeros([num_points], dtype=torch.long) - pts_instance_label = points.new_zeros([num_points], - dtype=torch.long) - 1 - - if self.bbox_coder.with_rot: - vote_targets = points.new_zeros([num_points, 4 * self.gt_per_seed]) - vote_target_idx = points.new_zeros([num_points], dtype=torch.long) - box_indices_all = gt_bboxes_3d.points_in_boxes_part(points) - for i in range(gt_labels_3d.shape[0]): - box_indices = box_indices_all[:, i] - indices = torch.nonzero( - box_indices, as_tuple=False).squeeze(-1) - selected_points = points[indices] - pts_obj_mask[indices] = 1 - vote_targets_tmp = vote_targets[indices] - votes = gt_bboxes_3d.gravity_center[i].unsqueeze( - 0) - selected_points[:, :3] - - for j in range(self.gt_per_seed): - column_indices = torch.nonzero( - vote_target_idx[indices] == j, - as_tuple=False).squeeze(-1) - vote_targets_tmp[column_indices, - int(j * 3):int(j * 3 + - 3)] = votes[column_indices] - vote_targets_tmp[column_indices, - j + 3 * self.gt_per_seed] = i - if j == 0: - vote_targets_tmp[ - column_indices, :3 * - self.gt_per_seed] = votes[column_indices].repeat( - 1, self.gt_per_seed) - vote_targets_tmp[column_indices, - 3 * self.gt_per_seed:] = i - - vote_targets[indices] = vote_targets_tmp - vote_target_idx[indices] = torch.clamp( - vote_target_idx[indices] + 1, max=2) - - dist = points.new_zeros([num_points, self.gt_per_seed]) + 1000 - for j in range(self.gt_per_seed): - dist[:, j] = (vote_targets[:, 3 * j:3 * j + 3]**2).sum(-1) - - instance_indices = torch.argmin( - dist, dim=-1).unsqueeze(-1) + 3 * self.gt_per_seed - instance_lable = torch.gather(vote_targets, 1, - instance_indices).squeeze(-1) - pts_instance_label = instance_lable.long() - pts_instance_label[pts_obj_mask == 0] = -1 - - elif pts_semantic_mask is not None: - for i in torch.unique(pts_instance_mask): - indices = torch.nonzero( - pts_instance_mask == i, as_tuple=False).squeeze(-1) - - if pts_semantic_mask[indices[0]] < self.num_classes: - selected_points = points[indices, :3] - center = 0.5 * ( - selected_points.min(0)[0] + selected_points.max(0)[0]) - - delta_xyz = center - center_targets - instance_lable = torch.argmin((delta_xyz**2).sum(-1)) - pts_instance_label[indices] = instance_lable - pts_obj_mask[indices] = 1 - - else: - raise NotImplementedError - - # 1. 
generate objectness targets in sampling head - gt_num = gt_labels_3d.shape[0] - num_seed = seed_points.shape[0] - num_candidate = candidate_indices.shape[0] - - object_assignment = torch.gather(pts_instance_label, 0, seed_indices) - # set background points to the last gt bbox as original code - object_assignment[object_assignment < 0] = gt_num - 1 - object_assignment_one_hot = gt_bboxes_3d.tensor.new_zeros( - (num_seed, gt_num)) - object_assignment_one_hot.scatter_(1, object_assignment.unsqueeze(-1), - 1) # (num_seed, gt_num) - - delta_xyz = seed_points.unsqueeze( - 1) - gt_bboxes_3d.gravity_center.unsqueeze( - 0) # (num_seed, gt_num, 3) - delta_xyz = delta_xyz / (gt_bboxes_3d.dims.unsqueeze(0) + EPS) - - new_dist = torch.sum(delta_xyz**2, dim=-1) - euclidean_dist1 = torch.sqrt(new_dist + EPS) - euclidean_dist1 = euclidean_dist1 * object_assignment_one_hot + 100 * ( - 1 - object_assignment_one_hot) - # (gt_num, num_seed) - euclidean_dist1 = euclidean_dist1.permute(1, 0) - - # gt_num x topk - topk_inds = torch.topk( - euclidean_dist1, - seed_points_obj_topk, - largest=False)[1] * box_label_mask[:, None] + \ - (box_label_mask[:, None] - 1) - topk_inds = topk_inds.long() - topk_inds = topk_inds.view(-1).contiguous() - - sampling_targets = torch.zeros( - num_seed + 1, dtype=torch.long).to(points.device) - sampling_targets[topk_inds] = 1 - sampling_targets = sampling_targets[:num_seed] - # pts_instance_label - objectness_label_mask = torch.gather(pts_instance_label, 0, - seed_indices) # num_seed - sampling_targets[objectness_label_mask < 0] = 0 - - # 2. objectness target - seed_obj_gt = torch.gather(pts_obj_mask, 0, seed_indices) # num_seed - objectness_targets = torch.gather(seed_obj_gt, 0, - candidate_indices) # num_candidate - - # 3. box target - seed_instance_label = torch.gather(pts_instance_label, 0, - seed_indices) # num_seed - query_points_instance_label = torch.gather( - seed_instance_label, 0, candidate_indices) # num_candidate - - # Set assignment - # (num_candidate, ) with values in 0,1,...,gt_num-1 - assignment = query_points_instance_label - # set background points to the last gt bbox as original code - assignment[assignment < 0] = gt_num - 1 - assignment_expand = assignment.unsqueeze(1).expand(-1, 3) - - assigned_center_targets = center_targets[assignment] - assigned_size_targets = size_targets[assignment] - - dir_class_targets = dir_class_targets[assignment] - dir_res_targets = dir_res_targets[assignment] - dir_res_targets /= (np.pi / self.num_dir_bins) - - size_class_targets = size_class_targets[assignment] - size_res_targets = \ - torch.gather(size_res_targets, 0, assignment_expand) - one_hot_size_targets = gt_bboxes_3d.tensor.new_zeros( - (num_candidate, self.num_sizes)) - one_hot_size_targets.scatter_(1, size_class_targets.unsqueeze(-1), 1) - one_hot_size_targets = one_hot_size_targets.unsqueeze(-1).expand( - -1, -1, 3) # (num_candidate,num_size_cluster,3) - mean_sizes = size_res_targets.new_tensor( - self.bbox_coder.mean_sizes).unsqueeze(0) - pos_mean_sizes = torch.sum(one_hot_size_targets * mean_sizes, 1) - size_res_targets /= pos_mean_sizes - - mask_targets = gt_labels_3d[assignment].long() - - objectness_masks = points.new_ones((num_candidate)) - - return (sampling_targets, assigned_size_targets, size_class_targets, - size_res_targets, dir_class_targets, dir_res_targets, - center_targets, assigned_center_targets, mask_targets, - objectness_targets, objectness_masks) - - def get_bboxes(self, - points, - bbox_preds, - input_metas, - rescale=False, - use_nms=True): - 
"""Generate bboxes from GroupFree3D head predictions. - - Args: - points (torch.Tensor): Input points. - bbox_preds (dict): Predictions from GroupFree3D head. - input_metas (list[dict]): Point cloud and image's meta info. - rescale (bool): Whether to rescale bboxes. - use_nms (bool): Whether to apply NMS, skip nms postprocessing - while using GroupFree3D head in rpn stage. - - Returns: - list[tuple[torch.Tensor]]: Bounding boxes, scores and labels. - """ - # support multi-stage predictions - assert self.test_cfg['prediction_stages'] in \ - ['last', 'all', 'last_three'] - - prefixes = list() - if self.test_cfg['prediction_stages'] == 'last': - prefixes = [f's{self.num_decoder_layers - 1}.'] - elif self.test_cfg['prediction_stages'] == 'all': - prefixes = ['proposal.'] + \ - [f's{i}.' for i in range(self.num_decoder_layers)] - elif self.test_cfg['prediction_stages'] == 'last_three': - prefixes = [ - f's{i}.' for i in range(self.num_decoder_layers - - 3, self.num_decoder_layers) - ] - else: - raise NotImplementedError - - obj_scores = list() - sem_scores = list() - bbox3d = list() - for prefix in prefixes: - # decode boxes - obj_score = bbox_preds[f'{prefix}obj_scores'][..., -1].sigmoid() - sem_score = bbox_preds[f'{prefix}sem_scores'].softmax(-1) - bbox = self.bbox_coder.decode(bbox_preds, prefix) - obj_scores.append(obj_score) - sem_scores.append(sem_score) - bbox3d.append(bbox) - - obj_scores = torch.cat(obj_scores, dim=1) - sem_scores = torch.cat(sem_scores, dim=1) - bbox3d = torch.cat(bbox3d, dim=1) - - if use_nms: - batch_size = bbox3d.shape[0] - results = list() - for b in range(batch_size): - bbox_selected, score_selected, labels = \ - self.multiclass_nms_single(obj_scores[b], sem_scores[b], - bbox3d[b], points[b, ..., :3], - input_metas[b]) - bbox = input_metas[b]['box_type_3d']( - bbox_selected, - box_dim=bbox_selected.shape[-1], - with_yaw=self.bbox_coder.with_rot) - results.append((bbox, score_selected, labels)) - - return results - else: - return bbox3d - - def multiclass_nms_single(self, obj_scores, sem_scores, bbox, points, - input_meta): - """Multi-class nms in single batch. - - Args: - obj_scores (torch.Tensor): Objectness score of bounding boxes. - sem_scores (torch.Tensor): semantic class score of bounding boxes. - bbox (torch.Tensor): Predicted bounding boxes. - points (torch.Tensor): Input points. - input_meta (dict): Point cloud and image's meta info. - - Returns: - tuple[torch.Tensor]: Bounding boxes, scores and labels. 
- """ - bbox = input_meta['box_type_3d']( - bbox, - box_dim=bbox.shape[-1], - with_yaw=self.bbox_coder.with_rot, - origin=(0.5, 0.5, 0.5)) - box_indices = bbox.points_in_boxes_all(points) - - corner3d = bbox.corners - minmax_box3d = corner3d.new(torch.Size((corner3d.shape[0], 6))) - minmax_box3d[:, :3] = torch.min(corner3d, dim=1)[0] - minmax_box3d[:, 3:] = torch.max(corner3d, dim=1)[0] - - nonempty_box_mask = box_indices.T.sum(1) > 5 - - bbox_classes = torch.argmax(sem_scores, -1) - nms_selected = aligned_3d_nms(minmax_box3d[nonempty_box_mask], - obj_scores[nonempty_box_mask], - bbox_classes[nonempty_box_mask], - self.test_cfg.nms_thr) - - # filter empty boxes and boxes with low score - scores_mask = (obj_scores > self.test_cfg.score_thr) - nonempty_box_inds = torch.nonzero( - nonempty_box_mask, as_tuple=False).flatten() - nonempty_mask = torch.zeros_like(bbox_classes).scatter( - 0, nonempty_box_inds[nms_selected], 1) - selected = (nonempty_mask.bool() & scores_mask.bool()) - - if self.test_cfg.per_class_proposal: - bbox_selected, score_selected, labels = [], [], [] - for k in range(sem_scores.shape[-1]): - bbox_selected.append(bbox[selected].tensor) - score_selected.append(obj_scores[selected] * - sem_scores[selected][:, k]) - labels.append( - torch.zeros_like(bbox_classes[selected]).fill_(k)) - bbox_selected = torch.cat(bbox_selected, 0) - score_selected = torch.cat(score_selected, 0) - labels = torch.cat(labels, 0) - else: - bbox_selected = bbox[selected].tensor - score_selected = obj_scores[selected] - labels = bbox_classes[selected] - - return bbox_selected, score_selected, labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/monoflex_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/monoflex_head.py deleted file mode 100644 index 2253c758..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/monoflex_head.py +++ /dev/null @@ -1,771 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import xavier_init -from torch import nn as nn - -from mmdet3d.core.utils import get_ellip_gaussian_2D -from mmdet3d.models.model_utils import EdgeFusionModule -from mmdet3d.models.utils import (filter_outside_objs, get_edge_indices, - get_keypoints, handle_proj_objs) -from mmdet.core import multi_apply -from mmdet.core.bbox.builder import build_bbox_coder -from mmdet.models.utils import gaussian_radius, gen_gaussian_target -from mmdet.models.utils.gaussian_target import (get_local_maximum, - get_topk_from_heatmap, - transpose_and_gather_feat) -from ..builder import HEADS, build_loss -from .anchor_free_mono3d_head import AnchorFreeMono3DHead - - -@HEADS.register_module() -class MonoFlexHead(AnchorFreeMono3DHead): - r"""MonoFlex head used in `MonoFlex `_ - - .. code-block:: none - - / --> 3 x 3 conv --> 1 x 1 conv --> [edge fusion] --> cls - | - | --> 3 x 3 conv --> 1 x 1 conv --> 2d bbox - | - | --> 3 x 3 conv --> 1 x 1 conv --> [edge fusion] --> 2d offsets - | - | --> 3 x 3 conv --> 1 x 1 conv --> keypoints offsets - | - | --> 3 x 3 conv --> 1 x 1 conv --> keypoints uncertainty - feature - | --> 3 x 3 conv --> 1 x 1 conv --> keypoints uncertainty - | - | --> 3 x 3 conv --> 1 x 1 conv --> 3d dimensions - | - | |--- 1 x 1 conv --> ori cls - | --> 3 x 3 conv --| - | |--- 1 x 1 conv --> ori offsets - | - | --> 3 x 3 conv --> 1 x 1 conv --> depth - | - \ --> 3 x 3 conv --> 1 x 1 conv --> depth uncertainty - - Args: - use_edge_fusion (bool): Whether to use edge fusion module while - feature extraction. 
- edge_fusion_inds (list[tuple]): Indices of feature to use edge fusion. - edge_heatmap_ratio (float): Ratio of generating target heatmap. - filter_outside_objs (bool, optional): Whether to filter the - outside objects. Default: True. - loss_cls (dict, optional): Config of classification loss. - Default: loss_cls=dict(type='GaussionFocalLoss', loss_weight=1.0). - loss_bbox (dict, optional): Config of localization loss. - Default: loss_bbox=dict(type='IOULoss', loss_weight=10.0). - loss_dir (dict, optional): Config of direction classification loss. - Default: dict(type='MultibinLoss', loss_weight=0.1). - loss_keypoints (dict, optional): Config of keypoints loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_dims: (dict, optional): Config of dimensions loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_offsets2d: (dict, optional): Config of offsets2d loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_direct_depth: (dict, optional): Config of directly regression depth loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_keypoints_depth: (dict, optional): Config of keypoints decoded depth loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_combined_depth: (dict, optional): Config of combined depth loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_attr (dict, optional): Config of attribute classification loss. - In MonoFlex, Default: None. - bbox_coder (dict, optional): Bbox coder for encoding and decoding boxes. - Default: dict(type='MonoFlexCoder', code_size=7). - norm_cfg (dict, optional): Dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, requires_grad=True). - init_cfg (dict): Initialization config dict. Default: None. - """ # noqa: E501 - - def __init__(self, - num_classes, - in_channels, - use_edge_fusion, - edge_fusion_inds, - edge_heatmap_ratio, - filter_outside_objs=True, - loss_cls=dict(type='GaussianFocalLoss', loss_weight=1.0), - loss_bbox=dict(type='IoULoss', loss_weight=0.1), - loss_dir=dict(type='MultiBinLoss', loss_weight=0.1), - loss_keypoints=dict(type='L1Loss', loss_weight=0.1), - loss_dims=dict(type='L1Loss', loss_weight=0.1), - loss_offsets2d=dict(type='L1Loss', loss_weight=0.1), - loss_direct_depth=dict(type='L1Loss', loss_weight=0.1), - loss_keypoints_depth=dict(type='L1Loss', loss_weight=0.1), - loss_combined_depth=dict(type='L1Loss', loss_weight=0.1), - loss_attr=None, - bbox_coder=dict(type='MonoFlexCoder', code_size=7), - norm_cfg=dict(type='BN'), - init_cfg=None, - init_bias=-2.19, - **kwargs): - self.use_edge_fusion = use_edge_fusion - self.edge_fusion_inds = edge_fusion_inds - super().__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_dir=loss_dir, - loss_attr=loss_attr, - norm_cfg=norm_cfg, - init_cfg=init_cfg, - **kwargs) - self.filter_outside_objs = filter_outside_objs - self.edge_heatmap_ratio = edge_heatmap_ratio - self.init_bias = init_bias - self.loss_dir = build_loss(loss_dir) - self.loss_keypoints = build_loss(loss_keypoints) - self.loss_dims = build_loss(loss_dims) - self.loss_offsets2d = build_loss(loss_offsets2d) - self.loss_direct_depth = build_loss(loss_direct_depth) - self.loss_keypoints_depth = build_loss(loss_keypoints_depth) - self.loss_combined_depth = build_loss(loss_combined_depth) - self.bbox_coder = build_bbox_coder(bbox_coder) - - def _init_edge_module(self): - """Initialize edge fusion module for feature extraction.""" - self.edge_fuse_cls = EdgeFusionModule(self.num_classes, 256) - for i in 
range(len(self.edge_fusion_inds)): - reg_inds, out_inds = self.edge_fusion_inds[i] - out_channels = self.group_reg_dims[reg_inds][out_inds] - fusion_layer = EdgeFusionModule(out_channels, 256) - layer_name = f'edge_fuse_reg_{reg_inds}_{out_inds}' - self.add_module(layer_name, fusion_layer) - - def init_weights(self): - """Initialize weights.""" - super().init_weights() - self.conv_cls.bias.data.fill_(self.init_bias) - xavier_init(self.conv_regs[4][0], gain=0.01) - xavier_init(self.conv_regs[7][0], gain=0.01) - for m in self.conv_regs.modules(): - if isinstance(m, nn.Conv2d): - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def _init_predictor(self): - """Initialize predictor layers of the head.""" - self.conv_cls_prev = self._init_branch( - conv_channels=self.cls_branch, - conv_strides=(1, ) * len(self.cls_branch)) - self.conv_cls = nn.Conv2d(self.cls_branch[-1], self.cls_out_channels, - 1) - # init regression head - self.conv_reg_prevs = nn.ModuleList() - # init output head - self.conv_regs = nn.ModuleList() - # group_reg_dims: - # ((4, ), (2, ), (20, ), (3, ), (3, ), (8, 8), (1, ), (1, )) - for i in range(len(self.group_reg_dims)): - reg_dims = self.group_reg_dims[i] - reg_branch_channels = self.reg_branch[i] - out_channel = self.out_channels[i] - reg_list = nn.ModuleList() - if len(reg_branch_channels) > 0: - self.conv_reg_prevs.append( - self._init_branch( - conv_channels=reg_branch_channels, - conv_strides=(1, ) * len(reg_branch_channels))) - for reg_dim in reg_dims: - reg_list.append(nn.Conv2d(out_channel, reg_dim, 1)) - self.conv_regs.append(reg_list) - else: - self.conv_reg_prevs.append(None) - for reg_dim in reg_dims: - reg_list.append(nn.Conv2d(self.feat_channels, reg_dim, 1)) - self.conv_regs.append(reg_list) - - def _init_layers(self): - """Initialize layers of the head.""" - self._init_predictor() - if self.use_edge_fusion: - self._init_edge_module() - - def forward_train(self, x, input_metas, gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels, - gt_bboxes_ignore, proposal_cfg, **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (list[Tensor]): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_3d (list[Tensor]): 3D ground truth bboxes of the image, - shape (num_gts, self.bbox_code_size). - gt_labels_3d (list[Tensor]): 3D ground truth labels of each box, - shape (num_gts,). - centers2d (list[Tensor]): Projected 3D center of each box, - shape (num_gts, 2). - depths (list[Tensor]): Depth of projected 3D center of each box, - shape (num_gts,). - attr_labels (list[Tensor]): Attribute labels of each box, - shape (num_gts,). - gt_bboxes_ignore (list[Tensor]): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - Returns: - tuple: - losses: (dict[str, Tensor]): A dictionary of loss components. - proposal_list (list[Tensor]): Proposals of each image. 
- """ - outs = self(x, input_metas) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, gt_bboxes_3d, centers2d, depths, - attr_labels, input_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels, - input_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes( - *outs, input_metas, cfg=proposal_cfg) - return losses, proposal_list - - def forward(self, feats, input_metas): - """Forward features from the upstream network. - - Args: - feats (list[Tensor]): Features from the upstream network, each is - a 4D-tensor. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - """ - mlvl_input_metas = [input_metas for i in range(len(feats))] - return multi_apply(self.forward_single, feats, mlvl_input_metas) - - def forward_single(self, x, input_metas): - """Forward features of a single scale level. - - Args: - x (Tensor): Feature maps from a specific FPN feature level. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple: Scores for each class, bbox predictions. - """ - img_h, img_w = input_metas[0]['pad_shape'][:2] - batch_size, _, feat_h, feat_w = x.shape - downsample_ratio = img_h / feat_h - - for conv_cls_prev_layer in self.conv_cls_prev: - cls_feat = conv_cls_prev_layer(x) - out_cls = self.conv_cls(cls_feat) - - if self.use_edge_fusion: - # calculate the edge indices for the batch data - edge_indices_list = get_edge_indices( - input_metas, downsample_ratio, device=x.device) - edge_lens = [ - edge_indices.shape[0] for edge_indices in edge_indices_list - ] - max_edge_len = max(edge_lens) - edge_indices = x.new_zeros((batch_size, max_edge_len, 2), - dtype=torch.long) - for i in range(batch_size): - edge_indices[i, :edge_lens[i]] = edge_indices_list[i] - # cls feature map edge fusion - out_cls = self.edge_fuse_cls(cls_feat, out_cls, edge_indices, - edge_lens, feat_h, feat_w) - - bbox_pred = [] - - for i in range(len(self.group_reg_dims)): - reg_feat = x.clone() - # feature regression head - if len(self.reg_branch[i]) > 0: - for conv_reg_prev_layer in self.conv_reg_prevs[i]: - reg_feat = conv_reg_prev_layer(reg_feat) - - for j, conv_reg in enumerate(self.conv_regs[i]): - out_reg = conv_reg(reg_feat) - # Use Edge Fusion Module - if self.use_edge_fusion and (i, j) in self.edge_fusion_inds: - # reg feature map edge fusion - out_reg = getattr(self, 'edge_fuse_reg_{}_{}'.format( - i, j))(reg_feat, out_reg, edge_indices, edge_lens, - feat_h, feat_w) - bbox_pred.append(out_reg) - - bbox_pred = torch.cat(bbox_pred, dim=1) - cls_score = out_cls.sigmoid() # turn to 0-1 - cls_score = cls_score.clamp(min=1e-4, max=1 - 1e-4) - - return cls_score, bbox_pred - - def get_bboxes(self, cls_scores, bbox_preds, input_metas): - """Generate bboxes from bbox head predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level. - bbox_preds (list[Tensor]): Box regression for each scale. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. 
- rescale (bool): If True, return boxes in original image space. - Returns: - list[tuple[:obj:`CameraInstance3DBoxes`, Tensor, Tensor, None]]: - Each item in result_list is 4-tuple. - """ - assert len(cls_scores) == len(bbox_preds) == 1 - cam2imgs = torch.stack([ - cls_scores[0].new_tensor(input_meta['cam2img']) - for input_meta in input_metas - ]) - batch_bboxes, batch_scores, batch_topk_labels = self.decode_heatmap( - cls_scores[0], - bbox_preds[0], - input_metas, - cam2imgs=cam2imgs, - topk=100, - kernel=3) - - result_list = [] - for img_id in range(len(input_metas)): - - bboxes = batch_bboxes[img_id] - scores = batch_scores[img_id] - labels = batch_topk_labels[img_id] - - keep_idx = scores > 0.25 - bboxes = bboxes[keep_idx] - scores = scores[keep_idx] - labels = labels[keep_idx] - - bboxes = input_metas[img_id]['box_type_3d']( - bboxes, box_dim=self.bbox_code_size, origin=(0.5, 0.5, 0.5)) - attrs = None - result_list.append((bboxes, scores, labels, attrs)) - - return result_list - - def decode_heatmap(self, - cls_score, - reg_pred, - input_metas, - cam2imgs, - topk=100, - kernel=3): - """Transform outputs into detections raw bbox predictions. - - Args: - class_score (Tensor): Center predict heatmap, - shape (B, num_classes, H, W). - reg_pred (Tensor): Box regression map. - shape (B, channel, H , W). - input_metas (List[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cam2imgs (Tensor): Camera intrinsic matrix. - shape (N, 4, 4) - topk (int, optional): Get top k center keypoints from heatmap. - Default 100. - kernel (int, optional): Max pooling kernel for extract local - maximum pixels. Default 3. - - Returns: - tuple[torch.Tensor]: Decoded output of SMOKEHead, containing - the following Tensors: - - batch_bboxes (Tensor): Coords of each 3D box. - shape (B, k, 7) - - batch_scores (Tensor): Scores of each 3D box. - shape (B, k) - - batch_topk_labels (Tensor): Categories of each 3D box. - shape (B, k) - """ - img_h, img_w = input_metas[0]['pad_shape'][:2] - batch_size, _, feat_h, feat_w = cls_score.shape - - downsample_ratio = img_h / feat_h - center_heatmap_pred = get_local_maximum(cls_score, kernel=kernel) - - *batch_dets, topk_ys, topk_xs = get_topk_from_heatmap( - center_heatmap_pred, k=topk) - batch_scores, batch_index, batch_topk_labels = batch_dets - - regression = transpose_and_gather_feat(reg_pred, batch_index) - regression = regression.view(-1, 8) - - pred_base_centers2d = torch.cat( - [topk_xs.view(-1, 1), - topk_ys.view(-1, 1).float()], dim=1) - preds = self.bbox_coder.decode(regression, batch_topk_labels, - downsample_ratio, cam2imgs) - pred_locations = self.bbox_coder.decode_location( - pred_base_centers2d, preds['offsets2d'], preds['combined_depth'], - cam2imgs, downsample_ratio) - pred_yaws = self.bbox_coder.decode_orientation( - preds['orientations']).unsqueeze(-1) - pred_dims = preds['dimensions'] - batch_bboxes = torch.cat((pred_locations, pred_dims, pred_yaws), dim=1) - batch_bboxes = batch_bboxes.view(batch_size, -1, self.bbox_code_size) - return batch_bboxes, batch_scores, batch_topk_labels - - def get_predictions(self, pred_reg, labels3d, centers2d, reg_mask, - batch_indices, input_metas, downsample_ratio): - """Prepare predictions for computing loss. - - Args: - pred_reg (Tensor): Box regression map. - shape (B, channel, H , W). - labels3d (Tensor): Labels of each 3D box. - shape (B * max_objs, ) - centers2d (Tensor): Coords of each projected 3D box - center on image. 
shape (N, 2) - reg_mask (Tensor): Indexes of the existence of the 3D box. - shape (B * max_objs, ) - batch_indices (Tenosr): Batch indices of the 3D box. - shape (N, 3) - input_metas (list[dict]): Meta information of each image, - e.g., image size, scaling factor, etc. - downsample_ratio (int): The stride of feature map. - - Returns: - dict: The predictions for computing loss. - """ - batch, channel = pred_reg.shape[0], pred_reg.shape[1] - w = pred_reg.shape[3] - cam2imgs = torch.stack([ - centers2d.new_tensor(input_meta['cam2img']) - for input_meta in input_metas - ]) - # (batch_size, 4, 4) -> (N, 4, 4) - cam2imgs = cam2imgs[batch_indices, :, :] - centers2d_inds = centers2d[:, 1] * w + centers2d[:, 0] - centers2d_inds = centers2d_inds.view(batch, -1) - pred_regression = transpose_and_gather_feat(pred_reg, centers2d_inds) - pred_regression_pois = pred_regression.view(-1, channel)[reg_mask] - preds = self.bbox_coder.decode(pred_regression_pois, labels3d, - downsample_ratio, cam2imgs) - - return preds - - def get_targets(self, gt_bboxes_list, gt_labels_list, gt_bboxes_3d_list, - gt_labels_3d_list, centers2d_list, depths_list, feat_shape, - img_shape, input_metas): - """Get training targets for batch images. -`` - Args: - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each - image, shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each - box, shape (num_gt,). - gt_bboxes_3d_list (list[:obj:`CameraInstance3DBoxes`]): 3D - Ground truth bboxes of each image, - shape (num_gt, bbox_code_size). - gt_labels_3d_list (list[Tensor]): 3D Ground truth labels of - each box, shape (num_gt,). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D - image, shape (num_gt, 2). - depths_list (list[Tensor]): Depth of projected 3D centers onto 2D - image, each has shape (num_gt, 1). - feat_shape (tuple[int]): Feature map shape with value, - shape (B, _, H, W). - img_shape (tuple[int]): Image shape in [h, w] format. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple[Tensor, dict]: The Tensor value is the targets of - center heatmap, the dict has components below: - - base_centers2d_target (Tensor): Coords of each projected 3D box - center on image. shape (B * max_objs, 2), [dtype: int] - - labels3d (Tensor): Labels of each 3D box. - shape (N, ) - - reg_mask (Tensor): Mask of the existence of the 3D box. - shape (B * max_objs, ) - - batch_indices (Tensor): Batch id of the 3D box. - shape (N, ) - - depth_target (Tensor): Depth target of each 3D box. - shape (N, ) - - keypoints2d_target (Tensor): Keypoints of each projected 3D box - on image. shape (N, 10, 2) - - keypoints_mask (Tensor): Keypoints mask of each projected 3D - box on image. shape (N, 10) - - keypoints_depth_mask (Tensor): Depths decoded from keypoints - of each 3D box. shape (N, 3) - - orientations_target (Tensor): Orientation (encoded local yaw) - target of each 3D box. shape (N, ) - - offsets2d_target (Tensor): Offsets target of each projected - 3D box. shape (N, 2) - - dimensions_target (Tensor): Dimensions target of each 3D box. - shape (N, 3) - - downsample_ratio (int): The stride of feature map. - """ - - img_h, img_w = img_shape[:2] - batch_size, _, feat_h, feat_w = feat_shape - - width_ratio = float(feat_w / img_w) # 1/4 - height_ratio = float(feat_h / img_h) # 1/4 - - assert width_ratio == height_ratio - - # Whether to filter the objects which are not in FOV. 
- if self.filter_outside_objs: - filter_outside_objs(gt_bboxes_list, gt_labels_list, - gt_bboxes_3d_list, gt_labels_3d_list, - centers2d_list, input_metas) - - # transform centers2d to base centers2d for regression and - # heatmap generation. - # centers2d = int(base_centers2d) + offsets2d - base_centers2d_list, offsets2d_list, trunc_mask_list = \ - handle_proj_objs(centers2d_list, gt_bboxes_list, input_metas) - - keypoints2d_list, keypoints_mask_list, keypoints_depth_mask_list = \ - get_keypoints(gt_bboxes_3d_list, centers2d_list, input_metas) - - center_heatmap_target = gt_bboxes_list[-1].new_zeros( - [batch_size, self.num_classes, feat_h, feat_w]) - - for batch_id in range(batch_size): - # project gt_bboxes from input image to feat map - gt_bboxes = gt_bboxes_list[batch_id] * width_ratio - gt_labels = gt_labels_list[batch_id] - - # project base centers2d from input image to feat map - gt_base_centers2d = base_centers2d_list[batch_id] * width_ratio - trunc_masks = trunc_mask_list[batch_id] - - for j, base_center2d in enumerate(gt_base_centers2d): - if trunc_masks[j]: - # for outside objects, generate ellipse heatmap - base_center2d_x_int, base_center2d_y_int = \ - base_center2d.int() - scale_box_w = min(base_center2d_x_int - gt_bboxes[j][0], - gt_bboxes[j][2] - base_center2d_x_int) - scale_box_h = min(base_center2d_y_int - gt_bboxes[j][1], - gt_bboxes[j][3] - base_center2d_y_int) - radius_x = scale_box_w * self.edge_heatmap_ratio - radius_y = scale_box_h * self.edge_heatmap_ratio - radius_x, radius_y = max(0, int(radius_x)), max( - 0, int(radius_y)) - assert min(radius_x, radius_y) == 0 - ind = gt_labels[j] - get_ellip_gaussian_2D( - center_heatmap_target[batch_id, ind], - [base_center2d_x_int, base_center2d_y_int], radius_x, - radius_y) - else: - base_center2d_x_int, base_center2d_y_int = \ - base_center2d.int() - scale_box_h = (gt_bboxes[j][3] - gt_bboxes[j][1]) - scale_box_w = (gt_bboxes[j][2] - gt_bboxes[j][0]) - radius = gaussian_radius([scale_box_h, scale_box_w], - min_overlap=0.7) - radius = max(0, int(radius)) - ind = gt_labels[j] - gen_gaussian_target( - center_heatmap_target[batch_id, ind], - [base_center2d_x_int, base_center2d_y_int], radius) - - avg_factor = max(1, center_heatmap_target.eq(1).sum()) - num_ctrs = [centers2d.shape[0] for centers2d in centers2d_list] - max_objs = max(num_ctrs) - batch_indices = [ - centers2d_list[0].new_full((num_ctrs[i], ), i) - for i in range(batch_size) - ] - batch_indices = torch.cat(batch_indices, dim=0) - reg_mask = torch.zeros( - (batch_size, max_objs), - dtype=torch.bool).to(base_centers2d_list[0].device) - gt_bboxes_3d = input_metas['box_type_3d'].cat(gt_bboxes_3d_list) - gt_bboxes_3d = gt_bboxes_3d.to(base_centers2d_list[0].device) - - # encode original local yaw to multibin format - orienations_target = self.bbox_coder.encode(gt_bboxes_3d) - - batch_base_centers2d = base_centers2d_list[0].new_zeros( - (batch_size, max_objs, 2)) - - for i in range(batch_size): - reg_mask[i, :num_ctrs[i]] = 1 - batch_base_centers2d[i, :num_ctrs[i]] = base_centers2d_list[i] - - flatten_reg_mask = reg_mask.flatten() - - # transform base centers2d from input scale to output scale - batch_base_centers2d = batch_base_centers2d.view(-1, 2) * width_ratio - - dimensions_target = gt_bboxes_3d.tensor[:, 3:6] - labels_3d = torch.cat(gt_labels_3d_list) - keypoints2d_target = torch.cat(keypoints2d_list) - keypoints_mask = torch.cat(keypoints_mask_list) - keypoints_depth_mask = torch.cat(keypoints_depth_mask_list) - offsets2d_target = torch.cat(offsets2d_list) - 
bboxes2d = torch.cat(gt_bboxes_list) - - # transform FCOS style bbox into [x1, y1, x2, y2] format. - bboxes2d_target = torch.cat([bboxes2d[:, 0:2] * -1, bboxes2d[:, 2:]], - dim=-1) - depths = torch.cat(depths_list) - - target_labels = dict( - base_centers2d_target=batch_base_centers2d.int(), - labels3d=labels_3d, - reg_mask=flatten_reg_mask, - batch_indices=batch_indices, - bboxes2d_target=bboxes2d_target, - depth_target=depths, - keypoints2d_target=keypoints2d_target, - keypoints_mask=keypoints_mask, - keypoints_depth_mask=keypoints_depth_mask, - orienations_target=orienations_target, - offsets2d_target=offsets2d_target, - dimensions_target=dimensions_target, - downsample_ratio=1 / width_ratio) - - return center_heatmap_target, avg_factor, target_labels - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels, - input_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level. - shape (num_gt, 4). - bbox_preds (list[Tensor]): Box dims is a 4D-tensor, the channel - number is bbox_code_size. - shape (B, 7, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image. - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box. - shape (num_gts, ). - gt_bboxes_3d (list[:obj:`CameraInstance3DBoxes`]): 3D boxes ground - truth. it is the flipped gt_bboxes - gt_labels_3d (list[Tensor]): Same as gt_labels. - centers2d (list[Tensor]): 2D centers on the image. - shape (num_gts, 2). - depths (list[Tensor]): Depth ground truth. - shape (num_gts, ). - attr_labels (list[Tensor]): Attributes indices of each box. - In kitti it's None. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) == 1 - assert attr_labels is None - assert gt_bboxes_ignore is None - center2d_heatmap = cls_scores[0] - pred_reg = bbox_preds[0] - - center2d_heatmap_target, avg_factor, target_labels = \ - self.get_targets(gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, - center2d_heatmap.shape, - input_metas[0]['pad_shape'], - input_metas) - - preds = self.get_predictions( - pred_reg=pred_reg, - labels3d=target_labels['labels3d'], - centers2d=target_labels['base_centers2d_target'], - reg_mask=target_labels['reg_mask'], - batch_indices=target_labels['batch_indices'], - input_metas=input_metas, - downsample_ratio=target_labels['downsample_ratio']) - - # heatmap loss - loss_cls = self.loss_cls( - center2d_heatmap, center2d_heatmap_target, avg_factor=avg_factor) - - # bbox2d regression loss - loss_bbox = self.loss_bbox(preds['bboxes2d'], - target_labels['bboxes2d_target']) - - # keypoints loss, the keypoints in predictions and target are all - # local coordinates. 
Check the mask dtype should be bool, not int - # or float to ensure the indexing is bool index - keypoints2d_mask = target_labels['keypoints2d_mask'] - loss_keypoints = self.loss_keypoints( - preds['keypoints2d'][keypoints2d_mask], - target_labels['keypoints2d_target'][keypoints2d_mask]) - - # orientations loss - loss_dir = self.loss_dir(preds['orientations'], - target_labels['orientations_target']) - - # dimensions loss - loss_dims = self.loss_dims(preds['dimensions'], - target_labels['dimensions_target']) - - # offsets for center heatmap - loss_offsets2d = self.loss_offsets2d(preds['offsets2d'], - target_labels['offsets2d_target']) - - # directly regressed depth loss with direct depth uncertainty loss - direct_depth_weights = torch.exp(-preds['direct_depth_uncertainty']) - loss_weight_1 = self.loss_direct_depth.loss_weight - loss_direct_depth = self.loss_direct_depth( - preds['direct_depth'], target_labels['depth_target'], - direct_depth_weights) - loss_uncertainty_1 =\ - preds['direct_depth_uncertainty'] * loss_weight_1 - loss_direct_depth = loss_direct_depth + loss_uncertainty_1.mean() - - # keypoints decoded depth loss with keypoints depth uncertainty loss - depth_mask = target_labels['keypoints_depth_mask'] - depth_target = target_labels['depth_target'].unsqueeze(-1).repeat(1, 3) - valid_keypoints_depth_uncertainty = preds[ - 'keypoints_depth_uncertainty'][depth_mask] - valid_keypoints_depth_weights = torch.exp( - -valid_keypoints_depth_uncertainty) - loss_keypoints_depth = self.loss_keypoint_depth( - preds['keypoints_depth'][depth_mask], depth_target[depth_mask], - valid_keypoints_depth_weights) - loss_weight_2 = self.loss_keypoints_depth.loss_weight - loss_uncertainty_2 =\ - valid_keypoints_depth_uncertainty * loss_weight_2 - loss_keypoints_depth = loss_keypoints_depth + loss_uncertainty_2.mean() - - # combined depth loss for optimiaze the uncertainty - loss_combined_depth = self.loss_combined_depth( - preds['combined_depth'], target_labels['depth_target']) - - loss_dict = dict( - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_keypoints=loss_keypoints, - loss_dir=loss_dir, - loss_dims=loss_dims, - loss_offsets2d=loss_offsets2d, - loss_direct_depth=loss_direct_depth, - loss_keypoints_depth=loss_keypoints_depth, - loss_combined_depth=loss_combined_depth) - - return loss_dict diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/parta2_rpn_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/parta2_rpn_head.py deleted file mode 100644 index a57e1a12..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/parta2_rpn_head.py +++ /dev/null @@ -1,310 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.runner import force_fp32 - -from mmdet3d.core import limit_period, xywhr2xyxyr -from mmdet3d.core.post_processing import nms_bev, nms_normal_bev -from ..builder import HEADS -from .anchor3d_head import Anchor3DHead - - -@HEADS.register_module() -class PartA2RPNHead(Anchor3DHead): - """RPN head for PartA2. - - Note: - The main difference between the PartA2 RPN head and the Anchor3DHead - lies in their output during inference. PartA2 RPN head further returns - the original classification score for the second stage since the bbox - head in RoI head does not do classification task. - - Different from RPN heads in 2D detectors, this RPN head does - multi-class classification task and uses FocalLoss like the SECOND and - PointPillars do. 
But this head uses class agnostic nms rather than - multi-class nms. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - train_cfg (dict): Train configs. - test_cfg (dict): Test configs. - feat_channels (int): Number of channels of the feature map. - use_direction_classifier (bool): Whether to add a direction classifier. - anchor_generator(dict): Config dict of anchor generator. - assigner_per_size (bool): Whether to do assignment for each separate - anchor size. - assign_per_class (bool): Whether to do assignment for each class. - diff_rad_by_sin (bool): Whether to change the difference into sin - difference for box regression loss. - dir_offset (float | int): The offset of BEV rotation angles - (TODO: may be moved into box coder) - dir_limit_offset (float | int): The limited range of BEV - rotation angles. (TODO: may be moved into box coder) - bbox_coder (dict): Config dict of box coders. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - loss_dir (dict): Config of direction classifier loss. - """ - - def __init__(self, - num_classes, - in_channels, - train_cfg, - test_cfg, - feat_channels=256, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - range=[0, -39.68, -1.78, 69.12, 39.68, -1.78], - strides=[2], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - custom_values=[], - reshape_out=False), - assigner_per_size=False, - assign_per_class=False, - diff_rad_by_sin=True, - dir_offset=-np.pi / 2, - dir_limit_offset=0, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict(type='CrossEntropyLoss', loss_weight=0.2), - init_cfg=None): - super().__init__(num_classes, in_channels, train_cfg, test_cfg, - feat_channels, use_direction_classifier, - anchor_generator, assigner_per_size, assign_per_class, - diff_rad_by_sin, dir_offset, dir_limit_offset, - bbox_coder, loss_cls, loss_bbox, loss_dir, init_cfg) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - gt_bboxes, - gt_labels, - input_metas, - gt_bboxes_ignore=None): - """Calculate losses. - - Args: - cls_scores (list[torch.Tensor]): Multi-level class scores. - bbox_preds (list[torch.Tensor]): Multi-level bbox predictions. - dir_cls_preds (list[torch.Tensor]): Multi-level direction - class predictions. - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): Ground truth boxes - of each sample. - gt_labels (list[torch.Tensor]): Labels of each sample. - input_metas (list[dict]): Point cloud and image's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict[str, list[torch.Tensor]]: Classification, bbox, and - direction losses of each level. - - - loss_rpn_cls (list[torch.Tensor]): Classification losses. - - loss_rpn_bbox (list[torch.Tensor]): Box regression losses. - - loss_rpn_dir (list[torch.Tensor]): Direction classification - losses. 
- """ - loss_dict = super().loss(cls_scores, bbox_preds, dir_cls_preds, - gt_bboxes, gt_labels, input_metas, - gt_bboxes_ignore) - # change the loss key names to avoid conflict - return dict( - loss_rpn_cls=loss_dict['loss_cls'], - loss_rpn_bbox=loss_dict['loss_bbox'], - loss_rpn_dir=loss_dict['loss_dir']) - - def get_bboxes_single(self, - cls_scores, - bbox_preds, - dir_cls_preds, - mlvl_anchors, - input_meta, - cfg, - rescale=False): - """Get bboxes of single branch. - - Args: - cls_scores (torch.Tensor): Class score in single batch. - bbox_preds (torch.Tensor): Bbox prediction in single batch. - dir_cls_preds (torch.Tensor): Predictions of direction class - in single batch. - mlvl_anchors (List[torch.Tensor]): Multi-level anchors - in single batch. - input_meta (list[dict]): Contain pcd and img's meta info. - cfg (:obj:`ConfigDict`): Training or testing config. - rescale (list[torch.Tensor]): whether th rescale bbox. - - Returns: - dict: Predictions of single batch containing the following keys: - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Predicted 3d bboxes. - - scores_3d (torch.Tensor): Score of each bbox. - - labels_3d (torch.Tensor): Label of each bbox. - - cls_preds (torch.Tensor): Class score of each bbox. - """ - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_max_scores = [] - mlvl_label_pred = [] - mlvl_dir_scores = [] - mlvl_cls_score = [] - for cls_score, bbox_pred, dir_cls_pred, anchors in zip( - cls_scores, bbox_preds, dir_cls_preds, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - assert cls_score.size()[-2:] == dir_cls_pred.size()[-2:] - dir_cls_pred = dir_cls_pred.permute(1, 2, 0).reshape(-1, 2) - dir_cls_score = torch.max(dir_cls_pred, dim=-1)[1] - - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.num_classes) - - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, - 0).reshape(-1, self.box_code_size) - - nms_pre = cfg.get('nms_pre', -1) - if self.use_sigmoid_cls: - max_scores, pred_labels = scores.max(dim=1) - else: - max_scores, pred_labels = scores[:, :-1].max(dim=1) - # get topk - if nms_pre > 0 and scores.shape[0] > nms_pre: - topk_scores, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - max_scores = topk_scores - cls_score = scores[topk_inds, :] - dir_cls_score = dir_cls_score[topk_inds] - pred_labels = pred_labels[topk_inds] - - bboxes = self.bbox_coder.decode(anchors, bbox_pred) - mlvl_bboxes.append(bboxes) - mlvl_max_scores.append(max_scores) - mlvl_cls_score.append(cls_score) - mlvl_label_pred.append(pred_labels) - mlvl_dir_scores.append(dir_cls_score) - - mlvl_bboxes = torch.cat(mlvl_bboxes) - mlvl_bboxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - mlvl_bboxes, box_dim=self.box_code_size).bev) - mlvl_max_scores = torch.cat(mlvl_max_scores) - mlvl_label_pred = torch.cat(mlvl_label_pred) - mlvl_dir_scores = torch.cat(mlvl_dir_scores) - # shape [k, num_class] before sigmoid - # PartA2 need to keep raw classification score - # because the bbox head in the second stage does not have - # classification branch, - # roi head need this score as classification score - mlvl_cls_score = torch.cat(mlvl_cls_score) - - score_thr = cfg.get('score_thr', 0) - result = self.class_agnostic_nms(mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_max_scores, mlvl_label_pred, - mlvl_cls_score, mlvl_dir_scores, - score_thr, cfg.nms_post, cfg, - input_meta) - - return 
result - - def class_agnostic_nms(self, mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_max_scores, mlvl_label_pred, mlvl_cls_score, - mlvl_dir_scores, score_thr, max_num, cfg, - input_meta): - """Class agnostic nms for single batch. - - Args: - mlvl_bboxes (torch.Tensor): Bboxes from Multi-level. - mlvl_bboxes_for_nms (torch.Tensor): Bboxes for nms - (bev or minmax boxes) from Multi-level. - mlvl_max_scores (torch.Tensor): Max scores of Multi-level bbox. - mlvl_label_pred (torch.Tensor): Class predictions - of Multi-level bbox. - mlvl_cls_score (torch.Tensor): Class scores of - Multi-level bbox. - mlvl_dir_scores (torch.Tensor): Direction scores of - Multi-level bbox. - score_thr (int): Score threshold. - max_num (int): Max number of bboxes after nms. - cfg (:obj:`ConfigDict`): Training or testing config. - input_meta (dict): Contain pcd and img's meta info. - - Returns: - dict: Predictions of single batch. Contain the keys: - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Predicted 3d bboxes. - - scores_3d (torch.Tensor): Score of each bbox. - - labels_3d (torch.Tensor): Label of each bbox. - - cls_preds (torch.Tensor): Class score of each bbox. - """ - bboxes = [] - scores = [] - labels = [] - dir_scores = [] - cls_scores = [] - score_thr_inds = mlvl_max_scores > score_thr - _scores = mlvl_max_scores[score_thr_inds] - _bboxes_for_nms = mlvl_bboxes_for_nms[score_thr_inds, :] - if cfg.use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - selected = nms_func(_bboxes_for_nms, _scores, cfg.nms_thr) - - _mlvl_bboxes = mlvl_bboxes[score_thr_inds, :] - _mlvl_dir_scores = mlvl_dir_scores[score_thr_inds] - _mlvl_label_pred = mlvl_label_pred[score_thr_inds] - _mlvl_cls_score = mlvl_cls_score[score_thr_inds] - - if len(selected) > 0: - bboxes.append(_mlvl_bboxes[selected]) - scores.append(_scores[selected]) - labels.append(_mlvl_label_pred[selected]) - cls_scores.append(_mlvl_cls_score[selected]) - dir_scores.append(_mlvl_dir_scores[selected]) - dir_rot = limit_period(bboxes[-1][..., 6] - self.dir_offset, - self.dir_limit_offset, np.pi) - bboxes[-1][..., 6] = ( - dir_rot + self.dir_offset + - np.pi * dir_scores[-1].to(bboxes[-1].dtype)) - - if bboxes: - bboxes = torch.cat(bboxes, dim=0) - scores = torch.cat(scores, dim=0) - cls_scores = torch.cat(cls_scores, dim=0) - labels = torch.cat(labels, dim=0) - if bboxes.shape[0] > max_num: - _, inds = scores.sort(descending=True) - inds = inds[:max_num] - bboxes = bboxes[inds, :] - labels = labels[inds] - scores = scores[inds] - cls_scores = cls_scores[inds] - bboxes = input_meta['box_type_3d']( - bboxes, box_dim=self.box_code_size) - return dict( - boxes_3d=bboxes, - scores_3d=scores, - labels_3d=labels, - cls_preds=cls_scores # raw scores [max_num, cls_num] - ) - else: - return dict( - boxes_3d=input_meta['box_type_3d']( - mlvl_bboxes.new_zeros([0, self.box_code_size]), - box_dim=self.box_code_size), - scores_3d=mlvl_bboxes.new_zeros([0]), - labels_3d=mlvl_bboxes.new_zeros([0]), - cls_preds=mlvl_bboxes.new_zeros([0, mlvl_cls_score.shape[-1]])) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/pgd_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/pgd_head.py deleted file mode 100644 index d9bfadb0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/pgd_head.py +++ /dev/null @@ -1,1229 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch -from mmcv.cnn import Scale, bias_init_with_prob, normal_init -from mmcv.runner import force_fp32 -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.core import box3d_multiclass_nms, xywhr2xyxyr -from mmdet3d.core.bbox import points_cam2img, points_img2cam -from mmdet.core import distance2bbox, multi_apply -from ..builder import HEADS, build_loss -from .fcos_mono3d_head import FCOSMono3DHead - - -@HEADS.register_module() -class PGDHead(FCOSMono3DHead): - r"""Anchor-free head used in `PGD `_. - - Args: - use_depth_classifer (bool, optional): Whether to use depth classifier. - Defaults to True. - use_only_reg_proj (bool, optional): Whether to use only direct - regressed depth in the re-projection (to make the network easier - to learn). Defaults to False. - weight_dim (int, optional): Dimension of the location-aware weight - map. Defaults to -1. - weight_branch (tuple[tuple[int]], optional): Feature map channels of - the convolutional branch for weight map. Defaults to ((256, ), ). - depth_branch (tuple[int], optional): Feature map channels of the - branch for probabilistic depth estimation. Defaults to (64, ), - depth_range (tuple[float], optional): Range of depth estimation. - Defaults to (0, 70), - depth_unit (int, optional): Unit of depth range division. Defaults to - 10. - division (str, optional): Depth division method. Options include - 'uniform', 'linear', 'log', 'loguniform'. Defaults to 'uniform'. - depth_bins (int, optional): Discrete bins of depth division. Defaults - to 8. - loss_depth (dict, optional): Depth loss. Defaults to dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0). - loss_bbox2d (dict, optional): Loss for 2D box estimation. Defaults to - dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0). - loss_consistency (dict, optional): Consistency loss. Defaults to - dict(type='GIoULoss', loss_weight=1.0), - pred_velo (bool, optional): Whether to predict velocity. Defaults to - False. - pred_bbox2d (bool, optional): Whether to predict 2D bounding boxes. - Defaults to True. - pred_keypoints (bool, optional): Whether to predict keypoints. - Defaults to False, - bbox_coder (dict, optional): Bounding box coder. Defaults to - dict(type='PGDBBoxCoder', base_depths=((28.01, 16.32), ), - base_dims=((0.8, 1.73, 0.6), (1.76, 1.73, 0.6), (3.9, 1.56, 1.6)), - code_size=7). 
- """ - - def __init__(self, - use_depth_classifier=True, - use_onlyreg_proj=False, - weight_dim=-1, - weight_branch=((256, ), ), - depth_branch=(64, ), - depth_range=(0, 70), - depth_unit=10, - division='uniform', - depth_bins=8, - loss_depth=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_bbox2d=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_consistency=dict(type='GIoULoss', loss_weight=1.0), - pred_bbox2d=True, - pred_keypoints=False, - bbox_coder=dict( - type='PGDBBoxCoder', - base_depths=((28.01, 16.32), ), - base_dims=((0.8, 1.73, 0.6), (1.76, 1.73, 0.6), - (3.9, 1.56, 1.6)), - code_size=7), - **kwargs): - self.use_depth_classifier = use_depth_classifier - self.use_onlyreg_proj = use_onlyreg_proj - self.depth_branch = depth_branch - self.pred_keypoints = pred_keypoints - self.weight_dim = weight_dim - self.weight_branch = weight_branch - self.weight_out_channels = [] - for weight_branch_channels in weight_branch: - if len(weight_branch_channels) > 0: - self.weight_out_channels.append(weight_branch_channels[-1]) - else: - self.weight_out_channels.append(-1) - self.depth_range = depth_range - self.depth_unit = depth_unit - self.division = division - if self.division == 'uniform': - self.num_depth_cls = int( - (depth_range[1] - depth_range[0]) / depth_unit) + 1 - if self.num_depth_cls != depth_bins: - print('Warning: The number of bins computed from ' + - 'depth_unit is different from given parameter! ' + - 'Depth_unit will be considered with priority in ' + - 'Uniform Division.') - else: - self.num_depth_cls = depth_bins - super().__init__( - pred_bbox2d=pred_bbox2d, bbox_coder=bbox_coder, **kwargs) - self.loss_depth = build_loss(loss_depth) - if self.pred_bbox2d: - self.loss_bbox2d = build_loss(loss_bbox2d) - self.loss_consistency = build_loss(loss_consistency) - if self.pred_keypoints: - self.kpts_start = 9 if self.pred_velo else 7 - - def _init_layers(self): - """Initialize layers of the head.""" - super()._init_layers() - if self.pred_bbox2d: - self.scale_dim += 1 - if self.pred_keypoints: - self.scale_dim += 1 - self.scales = nn.ModuleList([ - nn.ModuleList([Scale(1.0) for _ in range(self.scale_dim)]) - for _ in self.strides - ]) - - def _init_predictor(self): - """Initialize predictor layers of the head.""" - super()._init_predictor() - - if self.use_depth_classifier: - self.conv_depth_cls_prev = self._init_branch( - conv_channels=self.depth_branch, - conv_strides=(1, ) * len(self.depth_branch)) - self.conv_depth_cls = nn.Conv2d(self.depth_branch[-1], - self.num_depth_cls, 1) - # Data-agnostic single param lambda for local depth fusion - self.fuse_lambda = nn.Parameter(torch.tensor(10e-5)) - - if self.weight_dim != -1: - self.conv_weight_prevs = nn.ModuleList() - self.conv_weights = nn.ModuleList() - for i in range(self.weight_dim): - weight_branch_channels = self.weight_branch[i] - weight_out_channel = self.weight_out_channels[i] - if len(weight_branch_channels) > 0: - self.conv_weight_prevs.append( - self._init_branch( - conv_channels=weight_branch_channels, - conv_strides=(1, ) * len(weight_branch_channels))) - self.conv_weights.append( - nn.Conv2d(weight_out_channel, 1, 1)) - else: - self.conv_weight_prevs.append(None) - self.conv_weights.append( - nn.Conv2d(self.feat_channels, 1, 1)) - - def init_weights(self): - """Initialize weights of the head. 
- - We currently still use the customized defined init_weights because the - default init of DCN triggered by the init_cfg will init - conv_offset.weight, which mistakenly affects the training stability. - """ - super().init_weights() - - bias_cls = bias_init_with_prob(0.01) - if self.use_depth_classifier: - for m in self.conv_depth_cls_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - normal_init(self.conv_depth_cls, std=0.01, bias=bias_cls) - - if self.weight_dim != -1: - for conv_weight_prev in self.conv_weight_prevs: - if conv_weight_prev is None: - continue - for m in conv_weight_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - for conv_weight in self.conv_weights: - normal_init(conv_weight, std=0.01) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2). - weight (list[Tensor]): Location-aware weight maps on each - scale level, each is a 4D-tensor, the channel number is - num_points * 1. - depth_cls_preds (list[Tensor]): Box scores for depth class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * self.num_depth_cls. - attr_preds (list[Tensor]): Attribute scores for each scale - level, each is a 4D-tensor, the channel number is - num_points * num_attrs. - centernesses (list[Tensor]): Centerness for each scale level, - each is a 4D-tensor, the channel number is num_points * 1. - """ - return multi_apply(self.forward_single, feats, self.scales, - self.strides) - - def forward_single(self, x, scale, stride): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - stride (int): The corresponding stride for feature maps, only - used to normalize the bbox prediction when self.norm_on_bbox - is True. - - Returns: - tuple: scores for each class, bbox and direction class - predictions, depth class predictions, location-aware weights, - attribute and centerness predictions of input feature maps. 
- """ - cls_score, bbox_pred, dir_cls_pred, attr_pred, centerness, cls_feat, \ - reg_feat = super().forward_single(x, scale, stride) - - max_regress_range = stride * self.regress_ranges[0][1] / \ - self.strides[0] - bbox_pred = self.bbox_coder.decode_2d(bbox_pred, scale, stride, - max_regress_range, self.training, - self.pred_keypoints, - self.pred_bbox2d) - - depth_cls_pred = None - if self.use_depth_classifier: - clone_reg_feat = reg_feat.clone() - for conv_depth_cls_prev_layer in self.conv_depth_cls_prev: - clone_reg_feat = conv_depth_cls_prev_layer(clone_reg_feat) - depth_cls_pred = self.conv_depth_cls(clone_reg_feat) - - weight = None - if self.weight_dim != -1: - weight = [] - for i in range(self.weight_dim): - clone_reg_feat = reg_feat.clone() - if len(self.weight_branch[i]) > 0: - for conv_weight_prev_layer in self.conv_weight_prevs[i]: - clone_reg_feat = conv_weight_prev_layer(clone_reg_feat) - weight.append(self.conv_weights[i](clone_reg_feat)) - weight = torch.cat(weight, dim=1) - - return cls_score, bbox_pred, dir_cls_pred, depth_cls_pred, weight, \ - attr_pred, centerness - - def get_proj_bbox2d(self, - bbox_preds, - pos_dir_cls_preds, - labels_3d, - bbox_targets_3d, - pos_points, - pos_inds, - img_metas, - pos_depth_cls_preds=None, - pos_weights=None, - pos_cls_scores=None, - with_kpts=False): - """Decode box predictions and get projected 2D attributes. - - Args: - bbox_preds (list[Tensor]): Box predictions for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - pos_dir_cls_preds (Tensor): Box scores for direction class - predictions of positive boxes on all the scale levels in shape - (num_pos_points, 2). - labels_3d (list[Tensor]): 3D box category labels for each scale - level, each is a 4D-tensor. - bbox_targets_3d (list[Tensor]): 3D box targets for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - pos_points (Tensor): Foreground points. - pos_inds (Tensor): Index of foreground points from flattened - tensors. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - pos_depth_cls_preds (Tensor, optional): Probabilistic depth map of - positive boxes on all the scale levels in shape - (num_pos_points, self.num_depth_cls). Defaults to None. - pos_weights (Tensor, optional): Location-aware weights of positive - boxes in shape (num_pos_points, self.weight_dim). Defaults to - None. - pos_cls_scores (Tensor, optional): Classification scores of - positive boxes in shape (num_pos_points, self.num_classes). - Defaults to None. - with_kpts (bool, optional): Whether to output keypoints targets. - Defaults to False. - - Returns: - tuple[Tensor]: Exterior 2D boxes from projected 3D boxes, - predicted 2D boxes and keypoint targets (if necessary). 
- """ - views = [np.array(img_meta['cam2img']) for img_meta in img_metas] - num_imgs = len(img_metas) - img_idx = [] - for label in labels_3d: - for idx in range(num_imgs): - img_idx.append( - labels_3d[0].new_ones(int(len(label) / num_imgs)) * idx) - img_idx = torch.cat(img_idx) - pos_img_idx = img_idx[pos_inds] - - flatten_strided_bbox_preds = [] - flatten_strided_bbox2d_preds = [] - flatten_bbox_targets_3d = [] - flatten_strides = [] - - for stride_idx, bbox_pred in enumerate(bbox_preds): - flatten_bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape( - -1, sum(self.group_reg_dims)) - flatten_bbox_pred[:, :2] *= self.strides[stride_idx] - flatten_bbox_pred[:, -4:] *= self.strides[stride_idx] - flatten_strided_bbox_preds.append( - flatten_bbox_pred[:, :self.bbox_coder.bbox_code_size]) - flatten_strided_bbox2d_preds.append(flatten_bbox_pred[:, -4:]) - - bbox_target_3d = bbox_targets_3d[stride_idx].clone() - bbox_target_3d[:, :2] *= self.strides[stride_idx] - bbox_target_3d[:, -4:] *= self.strides[stride_idx] - flatten_bbox_targets_3d.append(bbox_target_3d) - - flatten_stride = flatten_bbox_pred.new_ones( - *flatten_bbox_pred.shape[:-1], 1) * self.strides[stride_idx] - flatten_strides.append(flatten_stride) - - flatten_strided_bbox_preds = torch.cat(flatten_strided_bbox_preds) - flatten_strided_bbox2d_preds = torch.cat(flatten_strided_bbox2d_preds) - flatten_bbox_targets_3d = torch.cat(flatten_bbox_targets_3d) - flatten_strides = torch.cat(flatten_strides) - pos_strided_bbox_preds = flatten_strided_bbox_preds[pos_inds] - pos_strided_bbox2d_preds = flatten_strided_bbox2d_preds[pos_inds] - pos_bbox_targets_3d = flatten_bbox_targets_3d[pos_inds] - pos_strides = flatten_strides[pos_inds] - - pos_decoded_bbox2d_preds = distance2bbox(pos_points, - pos_strided_bbox2d_preds) - - pos_strided_bbox_preds[:, :2] = \ - pos_points - pos_strided_bbox_preds[:, :2] - pos_bbox_targets_3d[:, :2] = \ - pos_points - pos_bbox_targets_3d[:, :2] - - if self.use_depth_classifier and (not self.use_onlyreg_proj): - pos_prob_depth_preds = self.bbox_coder.decode_prob_depth( - pos_depth_cls_preds, self.depth_range, self.depth_unit, - self.division, self.num_depth_cls) - sig_alpha = torch.sigmoid(self.fuse_lambda) - pos_strided_bbox_preds[:, 2] = \ - sig_alpha * pos_strided_bbox_preds.clone()[:, 2] + \ - (1 - sig_alpha) * pos_prob_depth_preds - - box_corners_in_image = pos_strided_bbox_preds.new_zeros( - (*pos_strided_bbox_preds.shape[:-1], 8, 2)) - box_corners_in_image_gt = pos_strided_bbox_preds.new_zeros( - (*pos_strided_bbox_preds.shape[:-1], 8, 2)) - - for idx in range(num_imgs): - mask = (pos_img_idx == idx) - if pos_strided_bbox_preds[mask].shape[0] == 0: - continue - cam2img = torch.eye( - 4, - dtype=pos_strided_bbox_preds.dtype, - device=pos_strided_bbox_preds.device) - view_shape = views[idx].shape - cam2img[:view_shape[0], :view_shape[1]] = \ - pos_strided_bbox_preds.new_tensor(views[idx]) - - centers2d_preds = pos_strided_bbox_preds.clone()[mask, :2] - centers2d_targets = pos_bbox_targets_3d.clone()[mask, :2] - centers3d_targets = points_img2cam(pos_bbox_targets_3d[mask, :3], - views[idx]) - - # use predicted depth to re-project the 2.5D centers - pos_strided_bbox_preds[mask, :3] = points_img2cam( - pos_strided_bbox_preds[mask, :3], views[idx]) - pos_bbox_targets_3d[mask, :3] = centers3d_targets - - # depth fixed when computing re-project 3D bboxes - pos_strided_bbox_preds[mask, 2] = \ - pos_bbox_targets_3d.clone()[mask, 2] - - # decode yaws - if self.use_direction_classifier: - pos_dir_cls_scores = torch.max( 
- pos_dir_cls_preds[mask], dim=-1)[1] - pos_strided_bbox_preds[mask] = self.bbox_coder.decode_yaw( - pos_strided_bbox_preds[mask], centers2d_preds, - pos_dir_cls_scores, self.dir_offset, cam2img) - pos_bbox_targets_3d[mask, 6] = torch.atan2( - centers2d_targets[:, 0] - cam2img[0, 2], - cam2img[0, 0]) + pos_bbox_targets_3d[mask, 6] - - corners = img_metas[0]['box_type_3d']( - pos_strided_bbox_preds[mask], - box_dim=self.bbox_coder.bbox_code_size, - origin=(0.5, 0.5, 0.5)).corners - box_corners_in_image[mask] = points_cam2img(corners, cam2img) - - corners_gt = img_metas[0]['box_type_3d']( - pos_bbox_targets_3d[mask, :self.bbox_code_size], - box_dim=self.bbox_coder.bbox_code_size, - origin=(0.5, 0.5, 0.5)).corners - box_corners_in_image_gt[mask] = points_cam2img(corners_gt, cam2img) - - minxy = torch.min(box_corners_in_image, dim=1)[0] - maxxy = torch.max(box_corners_in_image, dim=1)[0] - proj_bbox2d_preds = torch.cat([minxy, maxxy], dim=1) - - outputs = (proj_bbox2d_preds, pos_decoded_bbox2d_preds) - - if with_kpts: - norm_strides = pos_strides * self.regress_ranges[0][1] / \ - self.strides[0] - kpts_targets = box_corners_in_image_gt - pos_points[..., None, :] - kpts_targets = kpts_targets.view( - (*pos_strided_bbox_preds.shape[:-1], 16)) - kpts_targets /= norm_strides - - outputs += (kpts_targets, ) - - return outputs - - def get_pos_predictions(self, bbox_preds, dir_cls_preds, depth_cls_preds, - weights, attr_preds, centernesses, pos_inds, - img_metas): - """Flatten predictions and get positive ones. - - Args: - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - depth_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * self.num_depth_cls. - attr_preds (list[Tensor]): Attribute scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_attrs. - centernesses (list[Tensor]): Centerness for each scale level, each - is a 4D-tensor, the channel number is num_points * 1. - pos_inds (Tensor): Index of foreground points from flattened - tensors. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple[Tensor]: Box predictions, direction classes, probabilistic - depth maps, location-aware weight maps, attributes and - centerness predictions. 
- """ - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, sum(self.group_reg_dims)) - for bbox_pred in bbox_preds - ] - flatten_dir_cls_preds = [ - dir_cls_pred.permute(0, 2, 3, 1).reshape(-1, 2) - for dir_cls_pred in dir_cls_preds - ] - flatten_centerness = [ - centerness.permute(0, 2, 3, 1).reshape(-1) - for centerness in centernesses - ] - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_dir_cls_preds = torch.cat(flatten_dir_cls_preds) - flatten_centerness = torch.cat(flatten_centerness) - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_dir_cls_preds = flatten_dir_cls_preds[pos_inds] - pos_centerness = flatten_centerness[pos_inds] - - pos_depth_cls_preds = None - if self.use_depth_classifier: - flatten_depth_cls_preds = [ - depth_cls_pred.permute(0, 2, 3, - 1).reshape(-1, self.num_depth_cls) - for depth_cls_pred in depth_cls_preds - ] - flatten_depth_cls_preds = torch.cat(flatten_depth_cls_preds) - pos_depth_cls_preds = flatten_depth_cls_preds[pos_inds] - - pos_weights = None - if self.weight_dim != -1: - flatten_weights = [ - weight.permute(0, 2, 3, 1).reshape(-1, self.weight_dim) - for weight in weights - ] - flatten_weights = torch.cat(flatten_weights) - pos_weights = flatten_weights[pos_inds] - - pos_attr_preds = None - if self.pred_attrs: - flatten_attr_preds = [ - attr_pred.permute(0, 2, 3, 1).reshape(-1, self.num_attrs) - for attr_pred in attr_preds - ] - flatten_attr_preds = torch.cat(flatten_attr_preds) - pos_attr_preds = flatten_attr_preds[pos_inds] - - return pos_bbox_preds, pos_dir_cls_preds, pos_depth_cls_preds, \ - pos_weights, pos_attr_preds, pos_centerness - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds', - 'depth_cls_preds', 'weights', 'attr_preds', 'centernesses')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - depth_cls_preds, - weights, - attr_preds, - centernesses, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - depth_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * self.num_depth_cls. - weights (list[Tensor]): Location-aware weights for each scale - level, each is a 4D-tensor, the channel number is - num_points * self.weight_dim. - attr_preds (list[Tensor]): Attribute scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_attrs. - centernesses (list[Tensor]): Centerness for each scale level, each - is a 4D-tensor, the channel number is num_points * 1. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_3d (list[Tensor]): 3D boxes ground truth with shape of - (num_gts, code_size). - gt_labels_3d (list[Tensor]): same as gt_labels - centers2d (list[Tensor]): 2D centers on the image with shape of - (num_gts, 2). 
- depths (list[Tensor]): Depth ground truth with shape of - (num_gts, ). - attr_labels (list[Tensor]): Attributes indices of each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): specify which bounding boxes can - be ignored when computing the loss. Defaults to None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) == len(dir_cls_preds) == \ - len(depth_cls_preds) == len(weights) == len(centernesses) == \ - len(attr_preds), 'The length of cls_scores, bbox_preds, ' \ - 'dir_cls_preds, depth_cls_preds, weights, centernesses, and' \ - f'attr_preds: {len(cls_scores)}, {len(bbox_preds)}, ' \ - f'{len(dir_cls_preds)}, {len(depth_cls_preds)}, {len(weights)}' \ - f'{len(centernesses)}, {len(attr_preds)} are inconsistent.' - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.get_points(featmap_sizes, bbox_preds[0].dtype, - bbox_preds[0].device) - labels_3d, bbox_targets_3d, centerness_targets, attr_targets = \ - self.get_targets( - all_level_points, gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels) - - num_imgs = cls_scores[0].size(0) - # flatten cls_scores and targets - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_labels_3d = torch.cat(labels_3d) - flatten_bbox_targets_3d = torch.cat(bbox_targets_3d) - flatten_centerness_targets = torch.cat(centerness_targets) - flatten_points = torch.cat( - [points.repeat(num_imgs, 1) for points in all_level_points]) - if self.pred_attrs: - flatten_attr_targets = torch.cat(attr_targets) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((flatten_labels_3d >= 0) - & (flatten_labels_3d < bg_class_ind)).nonzero().reshape(-1) - num_pos = len(pos_inds) - - loss_dict = dict() - - loss_dict['loss_cls'] = self.loss_cls( - flatten_cls_scores, - flatten_labels_3d, - avg_factor=num_pos + num_imgs) # avoid num_pos is 0 - - pos_bbox_preds, pos_dir_cls_preds, pos_depth_cls_preds, pos_weights, \ - pos_attr_preds, pos_centerness = self.get_pos_predictions( - bbox_preds, dir_cls_preds, depth_cls_preds, weights, - attr_preds, centernesses, pos_inds, img_metas) - - if num_pos > 0: - pos_bbox_targets_3d = flatten_bbox_targets_3d[pos_inds] - pos_centerness_targets = flatten_centerness_targets[pos_inds] - pos_points = flatten_points[pos_inds] - if self.pred_attrs: - pos_attr_targets = flatten_attr_targets[pos_inds] - if self.use_direction_classifier: - pos_dir_cls_targets = self.get_direction_target( - pos_bbox_targets_3d, self.dir_offset, one_hot=False) - - bbox_weights = pos_centerness_targets.new_ones( - len(pos_centerness_targets), sum(self.group_reg_dims)) - equal_weights = pos_centerness_targets.new_ones( - pos_centerness_targets.shape) - code_weight = self.train_cfg.get('code_weight', None) - if code_weight: - assert len(code_weight) == sum(self.group_reg_dims) - bbox_weights = bbox_weights * bbox_weights.new_tensor( - code_weight) - - if self.diff_rad_by_sin: - pos_bbox_preds, pos_bbox_targets_3d = self.add_sin_difference( - pos_bbox_preds, pos_bbox_targets_3d) - - loss_dict['loss_offset'] = self.loss_bbox( - pos_bbox_preds[:, :2], - pos_bbox_targets_3d[:, :2], - weight=bbox_weights[:, :2], - avg_factor=equal_weights.sum()) - loss_dict['loss_size'] = 
self.loss_bbox( - pos_bbox_preds[:, 3:6], - pos_bbox_targets_3d[:, 3:6], - weight=bbox_weights[:, 3:6], - avg_factor=equal_weights.sum()) - loss_dict['loss_rotsin'] = self.loss_bbox( - pos_bbox_preds[:, 6], - pos_bbox_targets_3d[:, 6], - weight=bbox_weights[:, 6], - avg_factor=equal_weights.sum()) - if self.pred_velo: - loss_dict['loss_velo'] = self.loss_bbox( - pos_bbox_preds[:, 7:9], - pos_bbox_targets_3d[:, 7:9], - weight=bbox_weights[:, 7:9], - avg_factor=equal_weights.sum()) - - proj_bbox2d_inputs = (bbox_preds, pos_dir_cls_preds, labels_3d, - bbox_targets_3d, pos_points, pos_inds, - img_metas) - - # direction classification loss - # TODO: add more check for use_direction_classifier - if self.use_direction_classifier: - loss_dict['loss_dir'] = self.loss_dir( - pos_dir_cls_preds, - pos_dir_cls_targets, - equal_weights, - avg_factor=equal_weights.sum()) - - # init depth loss with the one computed from direct regression - loss_dict['loss_depth'] = self.loss_bbox( - pos_bbox_preds[:, 2], - pos_bbox_targets_3d[:, 2], - weight=bbox_weights[:, 2], - avg_factor=equal_weights.sum()) - # depth classification loss - if self.use_depth_classifier: - pos_prob_depth_preds = self.bbox_coder.decode_prob_depth( - pos_depth_cls_preds, self.depth_range, self.depth_unit, - self.division, self.num_depth_cls) - sig_alpha = torch.sigmoid(self.fuse_lambda) - if self.weight_dim != -1: - loss_fuse_depth = self.loss_depth( - sig_alpha * pos_bbox_preds[:, 2] + - (1 - sig_alpha) * pos_prob_depth_preds, - pos_bbox_targets_3d[:, 2], - sigma=pos_weights[:, 0], - weight=bbox_weights[:, 2], - avg_factor=equal_weights.sum()) - else: - loss_fuse_depth = self.loss_depth( - sig_alpha * pos_bbox_preds[:, 2] + - (1 - sig_alpha) * pos_prob_depth_preds, - pos_bbox_targets_3d[:, 2], - weight=bbox_weights[:, 2], - avg_factor=equal_weights.sum()) - loss_dict['loss_depth'] = loss_fuse_depth - - proj_bbox2d_inputs += (pos_depth_cls_preds, ) - - if self.pred_keypoints: - # use smoothL1 to compute consistency loss for keypoints - # normalize the offsets with strides - proj_bbox2d_preds, pos_decoded_bbox2d_preds, kpts_targets = \ - self.get_proj_bbox2d(*proj_bbox2d_inputs, with_kpts=True) - loss_dict['loss_kpts'] = self.loss_bbox( - pos_bbox_preds[:, self.kpts_start:self.kpts_start + 16], - kpts_targets, - weight=bbox_weights[:, - self.kpts_start:self.kpts_start + 16], - avg_factor=equal_weights.sum()) - - if self.pred_bbox2d: - loss_dict['loss_bbox2d'] = self.loss_bbox2d( - pos_bbox_preds[:, -4:], - pos_bbox_targets_3d[:, -4:], - weight=bbox_weights[:, -4:], - avg_factor=equal_weights.sum()) - if not self.pred_keypoints: - proj_bbox2d_preds, pos_decoded_bbox2d_preds = \ - self.get_proj_bbox2d(*proj_bbox2d_inputs) - loss_dict['loss_consistency'] = self.loss_consistency( - proj_bbox2d_preds, - pos_decoded_bbox2d_preds, - weight=bbox_weights[:, -4:], - avg_factor=equal_weights.sum()) - - loss_dict['loss_centerness'] = self.loss_centerness( - pos_centerness, pos_centerness_targets) - - # attribute classification loss - if self.pred_attrs: - loss_dict['loss_attr'] = self.loss_attr( - pos_attr_preds, - pos_attr_targets, - pos_centerness_targets, - avg_factor=pos_centerness_targets.sum()) - - else: - # need absolute due to possible negative delta x/y - loss_dict['loss_offset'] = pos_bbox_preds[:, :2].sum() - loss_dict['loss_size'] = pos_bbox_preds[:, 3:6].sum() - loss_dict['loss_rotsin'] = pos_bbox_preds[:, 6].sum() - loss_dict['loss_depth'] = pos_bbox_preds[:, 2].sum() - if self.pred_velo: - loss_dict['loss_velo'] = pos_bbox_preds[:, 
7:9].sum() - if self.pred_keypoints: - loss_dict['loss_kpts'] = pos_bbox_preds[:, - self.kpts_start:self. - kpts_start + 16].sum() - if self.pred_bbox2d: - loss_dict['loss_bbox2d'] = pos_bbox_preds[:, -4:].sum() - loss_dict['loss_consistency'] = pos_bbox_preds[:, -4:].sum() - loss_dict['loss_centerness'] = pos_centerness.sum() - if self.use_direction_classifier: - loss_dict['loss_dir'] = pos_dir_cls_preds.sum() - if self.use_depth_classifier: - sig_alpha = torch.sigmoid(self.fuse_lambda) - loss_fuse_depth = \ - sig_alpha * pos_bbox_preds[:, 2].sum() + \ - (1 - sig_alpha) * pos_depth_cls_preds.sum() - if self.weight_dim != -1: - loss_fuse_depth *= torch.exp(-pos_weights[:, 0].sum()) - loss_dict['loss_depth'] = loss_fuse_depth - if self.pred_attrs: - loss_dict['loss_attr'] = pos_attr_preds.sum() - - return loss_dict - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds', - 'depth_cls_preds', 'weights', 'attr_preds', 'centernesses')) - def get_bboxes(self, - cls_scores, - bbox_preds, - dir_cls_preds, - depth_cls_preds, - weights, - attr_preds, - centernesses, - img_metas, - cfg=None, - rescale=None): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W) - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - depth_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * self.num_depth_cls. - weights (list[Tensor]): Location-aware weights for each scale - level, each is a 4D-tensor, the channel number is - num_points * self.weight_dim. - attr_preds (list[Tensor]): Attribute scores for each scale level - Has shape (N, num_points * num_attrs, H, W) - centernesses (list[Tensor]): Centerness for each scale level with - shape (N, num_points * 1, H, W) - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config, optional): Test / postprocessing configuration, - if None, test_cfg would be used. Defaults to None. - rescale (bool, optional): If True, return boxes in original image - space. Defaults to None. - - Returns: - list[tuple[Tensor]]: Each item in result_list is a tuple, which - consists of predicted 3D boxes, scores, labels, attributes and - 2D boxes (if necessary). - """ - assert len(cls_scores) == len(bbox_preds) == len(dir_cls_preds) == \ - len(depth_cls_preds) == len(weights) == len(centernesses) == \ - len(attr_preds), 'The length of cls_scores, bbox_preds, ' \ - 'dir_cls_preds, depth_cls_preds, weights, centernesses, and' \ - f'attr_preds: {len(cls_scores)}, {len(bbox_preds)}, ' \ - f'{len(dir_cls_preds)}, {len(depth_cls_preds)}, {len(weights)}' \ - f'{len(centernesses)}, {len(attr_preds)} are inconsistent.' 
- num_levels = len(cls_scores) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - mlvl_points = self.get_points(featmap_sizes, bbox_preds[0].dtype, - bbox_preds[0].device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - if self.use_direction_classifier: - dir_cls_pred_list = [ - dir_cls_preds[i][img_id].detach() - for i in range(num_levels) - ] - else: - dir_cls_pred_list = [ - cls_scores[i][img_id].new_full( - [2, *cls_scores[i][img_id].shape[1:]], 0).detach() - for i in range(num_levels) - ] - if self.use_depth_classifier: - depth_cls_pred_list = [ - depth_cls_preds[i][img_id].detach() - for i in range(num_levels) - ] - else: - depth_cls_pred_list = [ - cls_scores[i][img_id].new_full( - [self.num_depth_cls, *cls_scores[i][img_id].shape[1:]], - 0).detach() for i in range(num_levels) - ] - if self.weight_dim != -1: - weight_list = [ - weights[i][img_id].detach() for i in range(num_levels) - ] - else: - weight_list = [ - cls_scores[i][img_id].new_full( - [1, *cls_scores[i][img_id].shape[1:]], 0).detach() - for i in range(num_levels) - ] - if self.pred_attrs: - attr_pred_list = [ - attr_preds[i][img_id].detach() for i in range(num_levels) - ] - else: - attr_pred_list = [ - cls_scores[i][img_id].new_full( - [self.num_attrs, *cls_scores[i][img_id].shape[1:]], - self.attr_background_label).detach() - for i in range(num_levels) - ] - centerness_pred_list = [ - centernesses[i][img_id].detach() for i in range(num_levels) - ] - input_meta = img_metas[img_id] - det_bboxes = self._get_bboxes_single( - cls_score_list, bbox_pred_list, dir_cls_pred_list, - depth_cls_pred_list, weight_list, attr_pred_list, - centerness_pred_list, mlvl_points, input_meta, cfg, rescale) - result_list.append(det_bboxes) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - dir_cls_preds, - depth_cls_preds, - weights, - attr_preds, - centernesses, - mlvl_points, - input_meta, - cfg, - rescale=False): - """Transform outputs for a single batch item into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for a single scale level - Has shape (num_points * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for a single scale - level with shape (num_points * bbox_code_size, H, W). - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on a single scale level with shape - (num_points * 2, H, W) - depth_cls_preds (list[Tensor]): Box scores for probabilistic depth - predictions on a single scale level with shape - (num_points * self.num_depth_cls, H, W) - weights (list[Tensor]): Location-aware weight maps on a single - scale level with shape (num_points * self.weight_dim, H, W). - attr_preds (list[Tensor]): Attribute scores for each scale level - Has shape (N, num_points * num_attrs, H, W) - centernesses (list[Tensor]): Centerness for a single scale level - with shape (num_points, H, W). - mlvl_points (list[Tensor]): Box reference for a single scale level - with shape (num_total_points, 2). - input_meta (dict): Metadata of input image. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool, optional): If True, return boxes in original image - space. Defaults to False. - - Returns: - tuples[Tensor]: Predicted 3D boxes, scores, labels, attributes and - 2D boxes (if necessary). 
- """ - view = np.array(input_meta['cam2img']) - scale_factor = input_meta['scale_factor'] - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_points) - mlvl_centers2d = [] - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_dir_scores = [] - mlvl_attr_scores = [] - mlvl_centerness = [] - mlvl_depth_cls_scores = [] - mlvl_depth_uncertainty = [] - mlvl_bboxes2d = None - if self.pred_bbox2d: - mlvl_bboxes2d = [] - - for cls_score, bbox_pred, dir_cls_pred, depth_cls_pred, weight, \ - attr_pred, centerness, points in zip( - cls_scores, bbox_preds, dir_cls_preds, depth_cls_preds, - weights, attr_preds, centernesses, mlvl_points): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - dir_cls_pred = dir_cls_pred.permute(1, 2, 0).reshape(-1, 2) - dir_cls_score = torch.max(dir_cls_pred, dim=-1)[1] - depth_cls_pred = depth_cls_pred.permute(1, 2, 0).reshape( - -1, self.num_depth_cls) - depth_cls_score = F.softmax( - depth_cls_pred, dim=-1).topk( - k=2, dim=-1)[0].mean(dim=-1) - if self.weight_dim != -1: - weight = weight.permute(1, 2, 0).reshape(-1, self.weight_dim) - else: - weight = weight.permute(1, 2, 0).reshape(-1, 1) - depth_uncertainty = torch.exp(-weight[:, -1]) - attr_pred = attr_pred.permute(1, 2, 0).reshape(-1, self.num_attrs) - attr_score = torch.max(attr_pred, dim=-1)[1] - centerness = centerness.permute(1, 2, 0).reshape(-1).sigmoid() - - bbox_pred = bbox_pred.permute(1, 2, - 0).reshape(-1, - sum(self.group_reg_dims)) - bbox_pred3d = bbox_pred[:, :self.bbox_coder.bbox_code_size] - if self.pred_bbox2d: - bbox_pred2d = bbox_pred[:, -4:] - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - merged_scores = scores * centerness[:, None] - if self.use_depth_classifier: - merged_scores *= depth_cls_score[:, None] - if self.weight_dim != -1: - merged_scores *= depth_uncertainty[:, None] - max_scores, _ = merged_scores.max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - points = points[topk_inds, :] - bbox_pred3d = bbox_pred3d[topk_inds, :] - scores = scores[topk_inds, :] - dir_cls_pred = dir_cls_pred[topk_inds, :] - depth_cls_pred = depth_cls_pred[topk_inds, :] - centerness = centerness[topk_inds] - dir_cls_score = dir_cls_score[topk_inds] - depth_cls_score = depth_cls_score[topk_inds] - depth_uncertainty = depth_uncertainty[topk_inds] - attr_score = attr_score[topk_inds] - if self.pred_bbox2d: - bbox_pred2d = bbox_pred2d[topk_inds, :] - # change the offset to actual center predictions - bbox_pred3d[:, :2] = points - bbox_pred3d[:, :2] - if rescale: - bbox_pred3d[:, :2] /= bbox_pred3d[:, :2].new_tensor( - scale_factor) - if self.pred_bbox2d: - bbox_pred2d /= bbox_pred2d.new_tensor(scale_factor) - if self.use_depth_classifier: - prob_depth_pred = self.bbox_coder.decode_prob_depth( - depth_cls_pred, self.depth_range, self.depth_unit, - self.division, self.num_depth_cls) - sig_alpha = torch.sigmoid(self.fuse_lambda) - bbox_pred3d[:, 2] = sig_alpha * bbox_pred3d[:, 2] + \ - (1 - sig_alpha) * prob_depth_pred - pred_center2d = bbox_pred3d[:, :3].clone() - bbox_pred3d[:, :3] = points_img2cam(bbox_pred3d[:, :3], view) - mlvl_centers2d.append(pred_center2d) - mlvl_bboxes.append(bbox_pred3d) - mlvl_scores.append(scores) - mlvl_dir_scores.append(dir_cls_score) - mlvl_depth_cls_scores.append(depth_cls_score) - mlvl_attr_scores.append(attr_score) - mlvl_centerness.append(centerness) - mlvl_depth_uncertainty.append(depth_uncertainty) - if 
self.pred_bbox2d: - bbox_pred2d = distance2bbox( - points, bbox_pred2d, max_shape=input_meta['img_shape']) - mlvl_bboxes2d.append(bbox_pred2d) - - mlvl_centers2d = torch.cat(mlvl_centers2d) - mlvl_bboxes = torch.cat(mlvl_bboxes) - mlvl_dir_scores = torch.cat(mlvl_dir_scores) - if self.pred_bbox2d: - mlvl_bboxes2d = torch.cat(mlvl_bboxes2d) - - # change local yaw to global yaw for 3D nms - cam2img = torch.eye( - 4, dtype=mlvl_centers2d.dtype, device=mlvl_centers2d.device) - cam2img[:view.shape[0], :view.shape[1]] = \ - mlvl_centers2d.new_tensor(view) - mlvl_bboxes = self.bbox_coder.decode_yaw(mlvl_bboxes, mlvl_centers2d, - mlvl_dir_scores, - self.dir_offset, cam2img) - - mlvl_bboxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - mlvl_bboxes, - box_dim=self.bbox_coder.bbox_code_size, - origin=(0.5, 0.5, 0.5)).bev) - - mlvl_scores = torch.cat(mlvl_scores) - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - mlvl_attr_scores = torch.cat(mlvl_attr_scores) - mlvl_centerness = torch.cat(mlvl_centerness) - # no scale_factors in box3d_multiclass_nms - # Then we multiply it from outside - mlvl_nms_scores = mlvl_scores * mlvl_centerness[:, None] - if self.use_depth_classifier: # multiply the depth confidence - mlvl_depth_cls_scores = torch.cat(mlvl_depth_cls_scores) - mlvl_nms_scores *= mlvl_depth_cls_scores[:, None] - if self.weight_dim != -1: - mlvl_depth_uncertainty = torch.cat(mlvl_depth_uncertainty) - mlvl_nms_scores *= mlvl_depth_uncertainty[:, None] - results = box3d_multiclass_nms(mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_nms_scores, cfg.score_thr, - cfg.max_per_img, cfg, mlvl_dir_scores, - mlvl_attr_scores, mlvl_bboxes2d) - bboxes, scores, labels, dir_scores, attrs = results[0:5] - attrs = attrs.to(labels.dtype) # change data type to int - bboxes = input_meta['box_type_3d']( - bboxes, - box_dim=self.bbox_coder.bbox_code_size, - origin=(0.5, 0.5, 0.5)) - # Note that the predictions use origin (0.5, 0.5, 0.5) - # Due to the ground truth centers2d are the gravity center of objects - # v0.10.0 fix inplace operation to the input tensor of cam_box3d - # So here we also need to add origin=(0.5, 0.5, 0.5) - if not self.pred_attrs: - attrs = None - - outputs = (bboxes, scores, labels, attrs) - if self.pred_bbox2d: - bboxes2d = results[-1] - bboxes2d = torch.cat([bboxes2d, scores[:, None]], dim=1) - outputs = outputs + (bboxes2d, ) - - return outputs - - def get_targets(self, points, gt_bboxes_list, gt_labels_list, - gt_bboxes_3d_list, gt_labels_3d_list, centers2d_list, - depths_list, attr_labels_list): - """Compute regression, classification and centerss targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - gt_bboxes_3d_list (list[Tensor]): 3D Ground truth bboxes of each - image, each has shape (num_gt, bbox_code_size). - gt_labels_3d_list (list[Tensor]): 3D Ground truth labels of each - box, each has shape (num_gt,). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - each has shape (num_gt, 2). - depths_list (list[Tensor]): Depth of projected 3D centers onto 2D - image, each has shape (num_gt, 1). 
- attr_labels_list (list[Tensor]): Attribute labels of each box, - each has shape (num_gt,). - - Returns: - tuple: - concat_lvl_labels (list[Tensor]): Labels of each level. \ - concat_lvl_bbox_targets (list[Tensor]): BBox targets of each \ - level. - """ - assert len(points) == len(self.regress_ranges) - num_levels = len(points) - # expand regress ranges to align with points - expanded_regress_ranges = [ - points[i].new_tensor(self.regress_ranges[i])[None].expand_as( - points[i]) for i in range(num_levels) - ] - # concat all levels points and regress ranges - concat_regress_ranges = torch.cat(expanded_regress_ranges, dim=0) - concat_points = torch.cat(points, dim=0) - - # the number of points per img, per lvl - num_points = [center.size(0) for center in points] - - if attr_labels_list is None: - attr_labels_list = [ - gt_labels.new_full(gt_labels.shape, self.attr_background_label) - for gt_labels in gt_labels_list - ] - - # get labels and bbox_targets of each image - _, bbox_targets_list, labels_3d_list, bbox_targets_3d_list, \ - centerness_targets_list, attr_targets_list = multi_apply( - self._get_target_single, - gt_bboxes_list, - gt_labels_list, - gt_bboxes_3d_list, - gt_labels_3d_list, - centers2d_list, - depths_list, - attr_labels_list, - points=concat_points, - regress_ranges=concat_regress_ranges, - num_points_per_lvl=num_points) - - # split to per img, per level - bbox_targets_list = [ - bbox_targets.split(num_points, 0) - for bbox_targets in bbox_targets_list - ] - labels_3d_list = [ - labels_3d.split(num_points, 0) for labels_3d in labels_3d_list - ] - bbox_targets_3d_list = [ - bbox_targets_3d.split(num_points, 0) - for bbox_targets_3d in bbox_targets_3d_list - ] - centerness_targets_list = [ - centerness_targets.split(num_points, 0) - for centerness_targets in centerness_targets_list - ] - attr_targets_list = [ - attr_targets.split(num_points, 0) - for attr_targets in attr_targets_list - ] - - # concat per level image - concat_lvl_labels_3d = [] - concat_lvl_bbox_targets_3d = [] - concat_lvl_centerness_targets = [] - concat_lvl_attr_targets = [] - for i in range(num_levels): - concat_lvl_labels_3d.append( - torch.cat([labels[i] for labels in labels_3d_list])) - concat_lvl_centerness_targets.append( - torch.cat([ - centerness_targets[i] - for centerness_targets in centerness_targets_list - ])) - bbox_targets_3d = torch.cat([ - bbox_targets_3d[i] for bbox_targets_3d in bbox_targets_3d_list - ]) - if self.pred_bbox2d: - bbox_targets = torch.cat( - [bbox_targets[i] for bbox_targets in bbox_targets_list]) - bbox_targets_3d = torch.cat([bbox_targets_3d, bbox_targets], - dim=1) - concat_lvl_attr_targets.append( - torch.cat( - [attr_targets[i] for attr_targets in attr_targets_list])) - if self.norm_on_bbox: - bbox_targets_3d[:, :2] = \ - bbox_targets_3d[:, :2] / self.strides[i] - if self.pred_bbox2d: - bbox_targets_3d[:, -4:] = \ - bbox_targets_3d[:, -4:] / self.strides[i] - concat_lvl_bbox_targets_3d.append(bbox_targets_3d) - return concat_lvl_labels_3d, concat_lvl_bbox_targets_3d, \ - concat_lvl_centerness_targets, concat_lvl_attr_targets diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/point_rpn_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/point_rpn_head.py deleted file mode 100644 index 546cf166..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/point_rpn_head.py +++ /dev/null @@ -1,381 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn - -from mmdet3d.core import xywhr2xyxyr -from mmdet3d.core.bbox.structures import (DepthInstance3DBoxes, - LiDARInstance3DBoxes) -from mmdet3d.core.post_processing import nms_bev, nms_normal_bev -from mmdet.core import build_bbox_coder, multi_apply -from ..builder import HEADS, build_loss - - -@HEADS.register_module() -class PointRPNHead(BaseModule): - """RPN module for PointRCNN. - - Args: - num_classes (int): Number of classes. - train_cfg (dict): Train configs. - test_cfg (dict): Test configs. - pred_layer_cfg (dict, optional): Config of classification and - regression prediction layers. Defaults to None. - enlarge_width (float, optional): Enlarge bbox for each side to ignore - close points. Defaults to 0.1. - cls_loss (dict, optional): Config of direction classification loss. - Defaults to None. - bbox_loss (dict, optional): Config of localization loss. - Defaults to None. - bbox_coder (dict, optional): Config dict of box coders. - Defaults to None. - init_cfg (dict, optional): Config of initialization. Defaults to None. - """ - - def __init__(self, - num_classes, - train_cfg, - test_cfg, - pred_layer_cfg=None, - enlarge_width=0.1, - cls_loss=None, - bbox_loss=None, - bbox_coder=None, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.enlarge_width = enlarge_width - - # build loss function - self.bbox_loss = build_loss(bbox_loss) - self.cls_loss = build_loss(cls_loss) - - # build box coder - self.bbox_coder = build_bbox_coder(bbox_coder) - - # build pred conv - self.cls_layers = self._make_fc_layers( - fc_cfg=pred_layer_cfg.cls_linear_channels, - input_channels=pred_layer_cfg.in_channels, - output_channels=self._get_cls_out_channels()) - - self.reg_layers = self._make_fc_layers( - fc_cfg=pred_layer_cfg.reg_linear_channels, - input_channels=pred_layer_cfg.in_channels, - output_channels=self._get_reg_out_channels()) - - def _make_fc_layers(self, fc_cfg, input_channels, output_channels): - """Make fully connect layers. - - Args: - fc_cfg (dict): Config of fully connect. - input_channels (int): Input channels for fc_layers. - output_channels (int): Input channels for fc_layers. - - Returns: - nn.Sequential: Fully connect layers. - """ - fc_layers = [] - c_in = input_channels - for k in range(0, fc_cfg.__len__()): - fc_layers.extend([ - nn.Linear(c_in, fc_cfg[k], bias=False), - nn.BatchNorm1d(fc_cfg[k]), - nn.ReLU(), - ]) - c_in = fc_cfg[k] - fc_layers.append(nn.Linear(c_in, output_channels, bias=True)) - return nn.Sequential(*fc_layers) - - def _get_cls_out_channels(self): - """Return the channel number of classification outputs.""" - # Class numbers (k) + objectness (1) - return self.num_classes - - def _get_reg_out_channels(self): - """Return the channel number of regression outputs.""" - # Bbox classification and regression - # (center residual (3), size regression (3) - # torch.cos(yaw) (1), torch.sin(yaw) (1) - return self.bbox_coder.code_size - - def forward(self, feat_dict): - """Forward pass. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - tuple[list[torch.Tensor]]: Predicted boxes and classification - scores. 
- """ - point_features = feat_dict['fp_features'] - point_features = point_features.permute(0, 2, 1).contiguous() - batch_size = point_features.shape[0] - feat_cls = point_features.view(-1, point_features.shape[-1]) - feat_reg = point_features.view(-1, point_features.shape[-1]) - - point_cls_preds = self.cls_layers(feat_cls).reshape( - batch_size, -1, self._get_cls_out_channels()) - point_box_preds = self.reg_layers(feat_reg).reshape( - batch_size, -1, self._get_reg_out_channels()) - return point_box_preds, point_cls_preds - - @force_fp32(apply_to=('bbox_preds')) - def loss(self, - bbox_preds, - cls_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - img_metas=None): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of PointRCNN RPN_Head. - cls_preds (dict): Classification from forward of PointRCNN - RPN_Head. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - img_metas (list[dict], Optional): Contain pcd and img's meta info. - Defaults to None. - - Returns: - dict: Losses of PointRCNN RPN module. - """ - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d) - (bbox_targets, mask_targets, positive_mask, negative_mask, - box_loss_weights, point_targets) = targets - - # bbox loss - bbox_loss = self.bbox_loss(bbox_preds, bbox_targets, - box_loss_weights.unsqueeze(-1)) - # calculate semantic loss - semantic_points = cls_preds.reshape(-1, self.num_classes) - semantic_targets = mask_targets - semantic_targets[negative_mask] = self.num_classes - semantic_points_label = semantic_targets - # for ignore, but now we do not have ignored label - semantic_loss_weight = negative_mask.float() + positive_mask.float() - semantic_loss = self.cls_loss(semantic_points, - semantic_points_label.reshape(-1), - semantic_loss_weight.reshape(-1)) - semantic_loss /= positive_mask.float().sum() - losses = dict(bbox_loss=bbox_loss, semantic_loss=semantic_loss) - - return losses - - def get_targets(self, points, gt_bboxes_3d, gt_labels_3d): - """Generate targets of PointRCNN RPN head. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - - Returns: - tuple[torch.Tensor]: Targets of PointRCNN RPN head. - """ - # find empty example - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - - (bbox_targets, mask_targets, positive_mask, negative_mask, - point_targets) = multi_apply(self.get_targets_single, points, - gt_bboxes_3d, gt_labels_3d) - - bbox_targets = torch.stack(bbox_targets) - mask_targets = torch.stack(mask_targets) - positive_mask = torch.stack(positive_mask) - negative_mask = torch.stack(negative_mask) - box_loss_weights = positive_mask / (positive_mask.sum() + 1e-6) - - return (bbox_targets, mask_targets, positive_mask, negative_mask, - box_loss_weights, point_targets) - - def get_targets_single(self, points, gt_bboxes_3d, gt_labels_3d): - """Generate targets of PointRCNN RPN head for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. 
- gt_labels_3d (torch.Tensor): Labels of each batch. - - Returns: - tuple[torch.Tensor]: Targets of ssd3d head. - """ - gt_bboxes_3d = gt_bboxes_3d.to(points.device) - - valid_gt = gt_labels_3d != -1 - gt_bboxes_3d = gt_bboxes_3d[valid_gt] - gt_labels_3d = gt_labels_3d[valid_gt] - - # transform the bbox coordinate to the point cloud coordinate - gt_bboxes_3d_tensor = gt_bboxes_3d.tensor.clone() - gt_bboxes_3d_tensor[..., 2] += gt_bboxes_3d_tensor[..., 5] / 2 - - points_mask, assignment = self._assign_targets_by_points_inside( - gt_bboxes_3d, points) - gt_bboxes_3d_tensor = gt_bboxes_3d_tensor[assignment] - mask_targets = gt_labels_3d[assignment] - - bbox_targets = self.bbox_coder.encode(gt_bboxes_3d_tensor, - points[..., 0:3], mask_targets) - - positive_mask = (points_mask.max(1)[0] > 0) - # add ignore_mask - extend_gt_bboxes_3d = gt_bboxes_3d.enlarged_box(self.enlarge_width) - points_mask, _ = self._assign_targets_by_points_inside( - extend_gt_bboxes_3d, points) - negative_mask = (points_mask.max(1)[0] == 0) - - point_targets = points[..., 0:3] - return (bbox_targets, mask_targets, positive_mask, negative_mask, - point_targets) - - def get_bboxes(self, - points, - bbox_preds, - cls_preds, - input_metas, - rescale=False): - """Generate bboxes from RPN head predictions. - - Args: - points (torch.Tensor): Input points. - bbox_preds (dict): Regression predictions from PointRCNN head. - cls_preds (dict): Class scores predictions from PointRCNN head. - input_metas (list[dict]): Point cloud and image's meta info. - rescale (bool, optional): Whether to rescale bboxes. - Defaults to False. - - Returns: - list[tuple[torch.Tensor]]: Bounding boxes, scores and labels. - """ - sem_scores = cls_preds.sigmoid() - obj_scores = sem_scores.max(-1)[0] - object_class = sem_scores.argmax(dim=-1) - - batch_size = sem_scores.shape[0] - results = list() - for b in range(batch_size): - bbox3d = self.bbox_coder.decode(bbox_preds[b], points[b, ..., :3], - object_class[b]) - bbox_selected, score_selected, labels, cls_preds_selected = \ - self.class_agnostic_nms(obj_scores[b], sem_scores[b], bbox3d, - points[b, ..., :3], input_metas[b]) - bbox = input_metas[b]['box_type_3d']( - bbox_selected.clone(), - box_dim=bbox_selected.shape[-1], - with_yaw=True) - results.append((bbox, score_selected, labels, cls_preds_selected)) - return results - - def class_agnostic_nms(self, obj_scores, sem_scores, bbox, points, - input_meta): - """Class agnostic nms. - - Args: - obj_scores (torch.Tensor): Objectness score of bounding boxes. - sem_scores (torch.Tensor): Semantic class score of bounding boxes. - bbox (torch.Tensor): Predicted bounding boxes. - - Returns: - tuple[torch.Tensor]: Bounding boxes, scores and labels. 
- """ - nms_cfg = self.test_cfg.nms_cfg if not self.training \ - else self.train_cfg.nms_cfg - if nms_cfg.use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - - num_bbox = bbox.shape[0] - bbox = input_meta['box_type_3d']( - bbox.clone(), - box_dim=bbox.shape[-1], - with_yaw=True, - origin=(0.5, 0.5, 0.5)) - - if isinstance(bbox, LiDARInstance3DBoxes): - box_idx = bbox.points_in_boxes(points) - box_indices = box_idx.new_zeros([num_bbox + 1]) - box_idx[box_idx == -1] = num_bbox - box_indices.scatter_add_(0, box_idx.long(), - box_idx.new_ones(box_idx.shape)) - box_indices = box_indices[:-1] - nonempty_box_mask = box_indices >= 0 - elif isinstance(bbox, DepthInstance3DBoxes): - box_indices = bbox.points_in_boxes(points) - nonempty_box_mask = box_indices.T.sum(1) >= 0 - else: - raise NotImplementedError('Unsupported bbox type!') - - bbox = bbox[nonempty_box_mask] - - if self.test_cfg.score_thr is not None: - score_thr = self.test_cfg.score_thr - keep = (obj_scores >= score_thr) - obj_scores = obj_scores[keep] - sem_scores = sem_scores[keep] - bbox = bbox.tensor[keep] - - if obj_scores.shape[0] > 0: - topk = min(nms_cfg.nms_pre, obj_scores.shape[0]) - obj_scores_nms, indices = torch.topk(obj_scores, k=topk) - bbox_for_nms = xywhr2xyxyr(bbox[indices].bev) - sem_scores_nms = sem_scores[indices] - - keep = nms_func(bbox_for_nms, obj_scores_nms, nms_cfg.iou_thr) - keep = keep[:nms_cfg.nms_post] - - bbox_selected = bbox.tensor[indices][keep] - score_selected = obj_scores_nms[keep] - cls_preds = sem_scores_nms[keep] - labels = torch.argmax(cls_preds, -1) - else: - bbox_selected = bbox.tensor - score_selected = obj_scores.new_zeros([0]) - labels = obj_scores.new_zeros([0]) - cls_preds = obj_scores.new_zeros([0, sem_scores.shape[-1]]) - - return bbox_selected, score_selected, labels, cls_preds - - def _assign_targets_by_points_inside(self, bboxes_3d, points): - """Compute assignment by checking whether point is inside bbox. - - Args: - bboxes_3d (:obj:`BaseInstance3DBoxes`): Instance of bounding boxes. - points (torch.Tensor): Points of a batch. - - Returns: - tuple[torch.Tensor]: Flags indicating whether each point is - inside bbox and the index of box where each point are in. - """ - # TODO: align points_in_boxes function in each box_structures - num_bbox = bboxes_3d.tensor.shape[0] - if isinstance(bboxes_3d, LiDARInstance3DBoxes): - assignment = bboxes_3d.points_in_boxes(points[:, 0:3]).long() - points_mask = assignment.new_zeros( - [assignment.shape[0], num_bbox + 1]) - assignment[assignment == -1] = num_bbox - points_mask.scatter_(1, assignment.unsqueeze(1), 1) - points_mask = points_mask[:, :-1] - assignment[assignment == num_bbox] = num_bbox - 1 - elif isinstance(bboxes_3d, DepthInstance3DBoxes): - points_mask = bboxes_3d.points_in_boxes(points) - assignment = points_mask.argmax(dim=-1) - else: - raise NotImplementedError('Unsupported bbox type!') - - return points_mask, assignment diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/shape_aware_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/shape_aware_head.py deleted file mode 100644 index 6c555718..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/shape_aware_head.py +++ /dev/null @@ -1,515 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import numpy as np -import torch -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch import nn as nn - -from mmdet3d.core import box3d_multiclass_nms, limit_period, xywhr2xyxyr -from mmdet.core import multi_apply -from ..builder import HEADS, build_head -from .anchor3d_head import Anchor3DHead - - -@HEADS.register_module() -class BaseShapeHead(BaseModule): - """Base Shape-aware Head in Shape Signature Network. - - Note: - This base shape-aware grouping head uses default settings for small - objects. For large and huge objects, it is recommended to use - heavier heads, like (64, 64, 64) and (128, 128, 64, 64, 64) in - shared conv channels, (2, 1, 1) and (2, 1, 2, 1, 1) in shared - conv strides. For tiny objects, we can use smaller heads, like - (32, 32) channels and (1, 1) strides. - - Args: - num_cls (int): Number of classes. - num_base_anchors (int): Number of anchors per location. - box_code_size (int): The dimension of boxes to be encoded. - in_channels (int): Input channels for convolutional layers. - shared_conv_channels (tuple, optional): Channels for shared - convolutional layers. Default: (64, 64). - shared_conv_strides (tuple, optional): Strides for shared - convolutional layers. Default: (1, 1). - use_direction_classifier (bool, optional): Whether to use direction - classifier. Default: True. - conv_cfg (dict, optional): Config of conv layer. - Default: dict(type='Conv2d') - norm_cfg (dict, optional): Config of norm layer. - Default: dict(type='BN2d'). - bias (bool | str, optional): Type of bias. Default: False. - """ - - def __init__(self, - num_cls, - num_base_anchors, - box_code_size, - in_channels, - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - use_direction_classifier=True, - conv_cfg=dict(type='Conv2d'), - norm_cfg=dict(type='BN2d'), - bias=False, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.num_cls = num_cls - self.num_base_anchors = num_base_anchors - self.use_direction_classifier = use_direction_classifier - self.box_code_size = box_code_size - - assert len(shared_conv_channels) == len(shared_conv_strides), \ - 'Lengths of channels and strides list should be equal.' 
- - self.shared_conv_channels = [in_channels] + list(shared_conv_channels) - self.shared_conv_strides = list(shared_conv_strides) - - shared_conv = [] - for i in range(len(self.shared_conv_strides)): - shared_conv.append( - ConvModule( - self.shared_conv_channels[i], - self.shared_conv_channels[i + 1], - kernel_size=3, - stride=self.shared_conv_strides[i], - padding=1, - conv_cfg=conv_cfg, - bias=bias, - norm_cfg=norm_cfg)) - - self.shared_conv = nn.Sequential(*shared_conv) - - out_channels = self.shared_conv_channels[-1] - self.conv_cls = nn.Conv2d(out_channels, num_base_anchors * num_cls, 1) - self.conv_reg = nn.Conv2d(out_channels, - num_base_anchors * box_code_size, 1) - - if use_direction_classifier: - self.conv_dir_cls = nn.Conv2d(out_channels, num_base_anchors * 2, - 1) - if init_cfg is None: - if use_direction_classifier: - self.init_cfg = dict( - type='Kaiming', - layer='Conv2d', - override=[ - dict(type='Normal', name='conv_reg', std=0.01), - dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01), - dict( - type='Normal', - name='conv_dir_cls', - std=0.01, - bias_prob=0.01) - ]) - else: - self.init_cfg = dict( - type='Kaiming', - layer='Conv2d', - override=[ - dict(type='Normal', name='conv_reg', std=0.01), - dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01) - ]) - - def forward(self, x): - """Forward function for SmallHead. - - Args: - x (torch.Tensor): Input feature map with the shape of - [B, C, H, W]. - - Returns: - dict[torch.Tensor]: Contain score of each class, bbox - regression and direction classification predictions. - Note that all the returned tensors are reshaped as - [bs*num_base_anchors*H*W, num_cls/box_code_size/dir_bins]. - It is more convenient to concat anchors for different - classes even though they have different feature map sizes. - """ - x = self.shared_conv(x) - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - featmap_size = bbox_pred.shape[-2:] - H, W = featmap_size - B = bbox_pred.shape[0] - cls_score = cls_score.view(-1, self.num_base_anchors, self.num_cls, H, - W).permute(0, 1, 3, 4, - 2).reshape(B, -1, self.num_cls) - bbox_pred = bbox_pred.view(-1, self.num_base_anchors, - self.box_code_size, H, W).permute( - 0, 1, 3, 4, - 2).reshape(B, -1, self.box_code_size) - - dir_cls_preds = None - if self.use_direction_classifier: - dir_cls_preds = self.conv_dir_cls(x) - dir_cls_preds = dir_cls_preds.view(-1, self.num_base_anchors, 2, H, - W).permute(0, 1, 3, 4, - 2).reshape(B, -1, 2) - ret = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - dir_cls_preds=dir_cls_preds, - featmap_size=featmap_size) - return ret - - -@HEADS.register_module() -class ShapeAwareHead(Anchor3DHead): - """Shape-aware grouping head for SSN. - - Args: - tasks (dict): Shape-aware groups of multi-class objects. - assign_per_class (bool, optional): Whether to do assignment for each - class. Default: True. - kwargs (dict): Other arguments are the same as those in - :class:`Anchor3DHead`. 
- """ - - def __init__(self, tasks, assign_per_class=True, init_cfg=None, **kwargs): - self.tasks = tasks - self.featmap_sizes = [] - super().__init__( - assign_per_class=assign_per_class, init_cfg=init_cfg, **kwargs) - - def init_weights(self): - if not self._is_init: - for m in self.heads: - if hasattr(m, 'init_weights'): - m.init_weights() - self._is_init = True - else: - warnings.warn(f'init_weights of {self.__class__.__name__} has ' - f'been called more than once.') - - def _init_layers(self): - """Initialize neural network layers of the head.""" - self.heads = nn.ModuleList() - cls_ptr = 0 - for task in self.tasks: - sizes = self.anchor_generator.sizes[cls_ptr:cls_ptr + - task['num_class']] - num_size = torch.tensor(sizes).reshape(-1, 3).size(0) - num_rot = len(self.anchor_generator.rotations) - num_base_anchors = num_rot * num_size - branch = dict( - type='BaseShapeHead', - num_cls=self.num_classes, - num_base_anchors=num_base_anchors, - box_code_size=self.box_code_size, - in_channels=self.in_channels, - shared_conv_channels=task['shared_conv_channels'], - shared_conv_strides=task['shared_conv_strides']) - self.heads.append(build_head(branch)) - cls_ptr += task['num_class'] - - def forward_single(self, x): - """Forward function on a single-scale feature map. - - Args: - x (torch.Tensor): Input features. - Returns: - tuple[torch.Tensor]: Contain score of each class, bbox - regression and direction classification predictions. - """ - results = [] - - for head in self.heads: - results.append(head(x)) - - cls_score = torch.cat([result['cls_score'] for result in results], - dim=1) - bbox_pred = torch.cat([result['bbox_pred'] for result in results], - dim=1) - dir_cls_preds = None - if self.use_direction_classifier: - dir_cls_preds = torch.cat( - [result['dir_cls_preds'] for result in results], dim=1) - - self.featmap_sizes = [] - for i, task in enumerate(self.tasks): - for _ in range(task['num_class']): - self.featmap_sizes.append(results[i]['featmap_size']) - assert len(self.featmap_sizes) == len(self.anchor_generator.ranges), \ - 'Length of feature map sizes must be equal to length of ' + \ - 'different ranges of anchor generator.' - - return cls_score, bbox_pred, dir_cls_preds - - def loss_single(self, cls_score, bbox_pred, dir_cls_preds, labels, - label_weights, bbox_targets, bbox_weights, dir_targets, - dir_weights, num_total_samples): - """Calculate loss of Single-level results. - - Args: - cls_score (torch.Tensor): Class score in single-level. - bbox_pred (torch.Tensor): Bbox prediction in single-level. - dir_cls_preds (torch.Tensor): Predictions of direction class - in single-level. - labels (torch.Tensor): Labels of class. - label_weights (torch.Tensor): Weights of class loss. - bbox_targets (torch.Tensor): Targets of bbox predictions. - bbox_weights (torch.Tensor): Weights of bbox loss. - dir_targets (torch.Tensor): Targets of direction predictions. - dir_weights (torch.Tensor): Weights of direction loss. - num_total_samples (int): The number of valid samples. - - Returns: - tuple[torch.Tensor]: Losses of class, bbox - and direction, respectively. 
- """ - # classification loss - if num_total_samples is None: - num_total_samples = int(cls_score.shape[0]) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.reshape(-1, self.num_classes) - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - - # regression loss - bbox_targets = bbox_targets.reshape(-1, self.box_code_size) - bbox_weights = bbox_weights.reshape(-1, self.box_code_size) - code_weight = self.train_cfg.get('code_weight', None) - - if code_weight: - bbox_weights = bbox_weights * bbox_weights.new_tensor(code_weight) - bbox_pred = bbox_pred.reshape(-1, self.box_code_size) - if self.diff_rad_by_sin: - bbox_pred, bbox_targets = self.add_sin_difference( - bbox_pred, bbox_targets) - loss_bbox = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - - # direction classification loss - loss_dir = None - if self.use_direction_classifier: - dir_cls_preds = dir_cls_preds.reshape(-1, 2) - dir_targets = dir_targets.reshape(-1) - dir_weights = dir_weights.reshape(-1) - loss_dir = self.loss_dir( - dir_cls_preds, - dir_targets, - dir_weights, - avg_factor=num_total_samples) - - return loss_cls, loss_bbox, loss_dir - - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - gt_bboxes, - gt_labels, - input_metas, - gt_bboxes_ignore=None): - """Calculate losses. - - Args: - cls_scores (list[torch.Tensor]): Multi-level class scores. - bbox_preds (list[torch.Tensor]): Multi-level bbox predictions. - dir_cls_preds (list[torch.Tensor]): Multi-level direction - class predictions. - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): Gt bboxes - of each sample. - gt_labels (list[torch.Tensor]): Gt labels of each sample. - input_metas (list[dict]): Contain pcd and img's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict[str, list[torch.Tensor]]: Classification, bbox, and - direction losses of each level. - - - loss_cls (list[torch.Tensor]): Classification losses. - - loss_bbox (list[torch.Tensor]): Box regression losses. - - loss_dir (list[torch.Tensor]): Direction classification - losses. - """ - device = cls_scores[0].device - anchor_list = self.get_anchors( - self.featmap_sizes, input_metas, device=device) - cls_reg_targets = self.anchor_target_3d( - anchor_list, - gt_bboxes, - input_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - num_classes=self.num_classes, - sampling=self.sampling) - - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - dir_targets_list, dir_weights_list, num_total_pos, - num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # num_total_samples = None - losses_cls, losses_bbox, losses_dir = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - dir_cls_preds, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - dir_targets_list, - dir_weights_list, - num_total_samples=num_total_samples) - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_dir=losses_dir) - - def get_bboxes(self, - cls_scores, - bbox_preds, - dir_cls_preds, - input_metas, - cfg=None, - rescale=False): - """Get bboxes of anchor head. - - Args: - cls_scores (list[torch.Tensor]): Multi-level class scores. - bbox_preds (list[torch.Tensor]): Multi-level bbox predictions. 
- dir_cls_preds (list[torch.Tensor]): Multi-level direction - class predictions. - input_metas (list[dict]): Contain pcd and img's meta info. - cfg (:obj:`ConfigDict`, optional): Training or testing config. - Default: None. - rescale (list[torch.Tensor], optional): Whether to rescale bbox. - Default: False. - - Returns: - list[tuple]: Prediction resultes of batches. - """ - assert len(cls_scores) == len(bbox_preds) - assert len(cls_scores) == len(dir_cls_preds) - num_levels = len(cls_scores) - assert num_levels == 1, 'Only support single level inference.' - device = cls_scores[0].device - mlvl_anchors = self.anchor_generator.grid_anchors( - self.featmap_sizes, device=device) - # `anchor` is a list of anchors for different classes - mlvl_anchors = [torch.cat(anchor, dim=0) for anchor in mlvl_anchors] - - result_list = [] - for img_id in range(len(input_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - dir_cls_pred_list = [ - dir_cls_preds[i][img_id].detach() for i in range(num_levels) - ] - - input_meta = input_metas[img_id] - proposals = self.get_bboxes_single(cls_score_list, bbox_pred_list, - dir_cls_pred_list, mlvl_anchors, - input_meta, cfg, rescale) - result_list.append(proposals) - return result_list - - def get_bboxes_single(self, - cls_scores, - bbox_preds, - dir_cls_preds, - mlvl_anchors, - input_meta, - cfg=None, - rescale=False): - """Get bboxes of single branch. - - Args: - cls_scores (torch.Tensor): Class score in single batch. - bbox_preds (torch.Tensor): Bbox prediction in single batch. - dir_cls_preds (torch.Tensor): Predictions of direction class - in single batch. - mlvl_anchors (List[torch.Tensor]): Multi-level anchors - in single batch. - input_meta (list[dict]): Contain pcd and img's meta info. - cfg (:obj:`ConfigDict`): Training or testing config. - rescale (list[torch.Tensor], optional): whether to rescale bbox. - Default: False. - - Returns: - tuple: Contain predictions of single batch. - - - bboxes (:obj:`BaseInstance3DBoxes`): Predicted 3d bboxes. - - scores (torch.Tensor): Class score of each bbox. - - labels (torch.Tensor): Label of each bbox. 
- """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_dir_scores = [] - for cls_score, bbox_pred, dir_cls_pred, anchors in zip( - cls_scores, bbox_preds, dir_cls_preds, mlvl_anchors): - assert cls_score.size()[-2] == bbox_pred.size()[-2] - assert cls_score.size()[-2] == dir_cls_pred.size()[-2] - dir_cls_score = torch.max(dir_cls_pred, dim=-1)[1] - - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - dir_cls_score = dir_cls_score[topk_inds] - - bboxes = self.bbox_coder.decode(anchors, bbox_pred) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_dir_scores.append(dir_cls_score) - - mlvl_bboxes = torch.cat(mlvl_bboxes) - mlvl_bboxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - mlvl_bboxes, box_dim=self.box_code_size).bev) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_dir_scores = torch.cat(mlvl_dir_scores) - - if self.use_sigmoid_cls: - # Add a dummy background class to the front when using sigmoid - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - - score_thr = cfg.get('score_thr', 0) - results = box3d_multiclass_nms(mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_scores, score_thr, cfg.max_num, - cfg, mlvl_dir_scores) - bboxes, scores, labels, dir_scores = results - if bboxes.shape[0] > 0: - dir_rot = limit_period(bboxes[..., 6] - self.dir_offset, - self.dir_limit_offset, np.pi) - bboxes[..., 6] = ( - dir_rot + self.dir_offset + - np.pi * dir_scores.to(bboxes.dtype)) - bboxes = input_meta['box_type_3d'](bboxes, box_dim=self.box_code_size) - return bboxes, scores, labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/smoke_mono3d_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/smoke_mono3d_head.py deleted file mode 100644 index 3459e092..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/smoke_mono3d_head.py +++ /dev/null @@ -1,516 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn import functional as F - -from mmdet.core import multi_apply -from mmdet.core.bbox.builder import build_bbox_coder -from mmdet.models.utils import gaussian_radius, gen_gaussian_target -from mmdet.models.utils.gaussian_target import (get_local_maximum, - get_topk_from_heatmap, - transpose_and_gather_feat) -from ..builder import HEADS -from .anchor_free_mono3d_head import AnchorFreeMono3DHead - - -@HEADS.register_module() -class SMOKEMono3DHead(AnchorFreeMono3DHead): - r"""Anchor-free head used in `SMOKE `_ - - .. code-block:: none - - /-----> 3*3 conv -----> 1*1 conv -----> cls - feature - \-----> 3*3 conv -----> 1*1 conv -----> reg - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - dim_channel (list[int]): indices of dimension offset preds in - regression heatmap channels. - ori_channel (list[int]): indices of orientation offset pred in - regression heatmap channels. 
- bbox_coder (:obj:`CameraInstance3DBoxes`): Bbox coder - for encoding and decoding boxes. - loss_cls (dict, optional): Config of classification loss. - Default: loss_cls=dict(type='GaussionFocalLoss', loss_weight=1.0). - loss_bbox (dict, optional): Config of localization loss. - Default: loss_bbox=dict(type='L1Loss', loss_weight=10.0). - loss_dir (dict, optional): Config of direction classification loss. - In SMOKE, Default: None. - loss_attr (dict, optional): Config of attribute classification loss. - In SMOKE, Default: None. - loss_centerness (dict): Config of centerness loss. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, requires_grad=True). - init_cfg (dict): Initialization config dict. Default: None. - """ # noqa: E501 - - def __init__(self, - num_classes, - in_channels, - dim_channel, - ori_channel, - bbox_coder, - loss_cls=dict(type='GaussionFocalLoss', loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=0.1), - loss_dir=None, - loss_attr=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - init_cfg=None, - **kwargs): - super().__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_dir=loss_dir, - loss_attr=loss_attr, - norm_cfg=norm_cfg, - init_cfg=init_cfg, - **kwargs) - self.dim_channel = dim_channel - self.ori_channel = ori_channel - self.bbox_coder = build_bbox_coder(bbox_coder) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - """ - return multi_apply(self.forward_single, feats) - - def forward_single(self, x): - """Forward features of a single scale level. - - Args: - x (Tensor): Input feature map. - - Returns: - tuple: Scores for each class, bbox of input feature maps. - """ - cls_score, bbox_pred, dir_cls_pred, attr_pred, cls_feat, reg_feat = \ - super().forward_single(x) - cls_score = cls_score.sigmoid() # turn to 0-1 - cls_score = cls_score.clamp(min=1e-4, max=1 - 1e-4) - # (N, C, H, W) - offset_dims = bbox_pred[:, self.dim_channel, ...] - bbox_pred[:, self.dim_channel, ...] = offset_dims.sigmoid() - 0.5 - # (N, C, H, W) - vector_ori = bbox_pred[:, self.ori_channel, ...] - bbox_pred[:, self.ori_channel, ...] = F.normalize(vector_ori) - return cls_score, bbox_pred - - def get_bboxes(self, cls_scores, bbox_preds, img_metas, rescale=None): - """Generate bboxes from bbox head predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level. - bbox_preds (list[Tensor]): Box regression for each scale. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - - Returns: - list[tuple[:obj:`CameraInstance3DBoxes`, Tensor, Tensor, None]]: - Each item in result_list is 4-tuple. 
- """ - assert len(cls_scores) == len(bbox_preds) == 1 - cam2imgs = torch.stack([ - cls_scores[0].new_tensor(img_meta['cam2img']) - for img_meta in img_metas - ]) - trans_mats = torch.stack([ - cls_scores[0].new_tensor(img_meta['trans_mat']) - for img_meta in img_metas - ]) - batch_bboxes, batch_scores, batch_topk_labels = self.decode_heatmap( - cls_scores[0], - bbox_preds[0], - img_metas, - cam2imgs=cam2imgs, - trans_mats=trans_mats, - topk=100, - kernel=3) - - result_list = [] - for img_id in range(len(img_metas)): - - bboxes = batch_bboxes[img_id] - scores = batch_scores[img_id] - labels = batch_topk_labels[img_id] - - keep_idx = scores > 0.25 - bboxes = bboxes[keep_idx] - scores = scores[keep_idx] - labels = labels[keep_idx] - - bboxes = img_metas[img_id]['box_type_3d']( - bboxes, box_dim=self.bbox_code_size, origin=(0.5, 0.5, 0.5)) - attrs = None - result_list.append((bboxes, scores, labels, attrs)) - - return result_list - - def decode_heatmap(self, - cls_score, - reg_pred, - img_metas, - cam2imgs, - trans_mats, - topk=100, - kernel=3): - """Transform outputs into detections raw bbox predictions. - - Args: - class_score (Tensor): Center predict heatmap, - shape (B, num_classes, H, W). - reg_pred (Tensor): Box regression map. - shape (B, channel, H , W). - img_metas (List[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cam2imgs (Tensor): Camera intrinsic matrixs. - shape (B, 4, 4) - trans_mats (Tensor): Transformation matrix from original image - to feature map. - shape: (batch, 3, 3) - topk (int): Get top k center keypoints from heatmap. Default 100. - kernel (int): Max pooling kernel for extract local maximum pixels. - Default 3. - - Returns: - tuple[torch.Tensor]: Decoded output of SMOKEHead, containing - the following Tensors: - - batch_bboxes (Tensor): Coords of each 3D box. - shape (B, k, 7) - - batch_scores (Tensor): Scores of each 3D box. - shape (B, k) - - batch_topk_labels (Tensor): Categories of each 3D box. - shape (B, k) - """ - img_h, img_w = img_metas[0]['pad_shape'][:2] - bs, _, feat_h, feat_w = cls_score.shape - - center_heatmap_pred = get_local_maximum(cls_score, kernel=kernel) - - *batch_dets, topk_ys, topk_xs = get_topk_from_heatmap( - center_heatmap_pred, k=topk) - batch_scores, batch_index, batch_topk_labels = batch_dets - - regression = transpose_and_gather_feat(reg_pred, batch_index) - regression = regression.view(-1, 8) - - points = torch.cat([topk_xs.view(-1, 1), - topk_ys.view(-1, 1).float()], - dim=1) - locations, dimensions, orientations = self.bbox_coder.decode( - regression, points, batch_topk_labels, cam2imgs, trans_mats) - - batch_bboxes = torch.cat((locations, dimensions, orientations), dim=1) - batch_bboxes = batch_bboxes.view(bs, -1, self.bbox_code_size) - return batch_bboxes, batch_scores, batch_topk_labels - - def get_predictions(self, labels3d, centers2d, gt_locations, gt_dimensions, - gt_orientations, indices, img_metas, pred_reg): - """Prepare predictions for computing loss. - - Args: - labels3d (Tensor): Labels of each 3D box. - shape (B, max_objs, ) - centers2d (Tensor): Coords of each projected 3D box - center on image. shape (B * max_objs, 2) - gt_locations (Tensor): Coords of each 3D box's location. - shape (B * max_objs, 3) - gt_dimensions (Tensor): Dimensions of each 3D box. - shape (N, 3) - gt_orientations (Tensor): Orientation(yaw) of each 3D box. - shape (N, 1) - indices (Tensor): Indices of the existence of the 3D box. 
- shape (B * max_objs, ) - img_metas (list[dict]): Meta information of each image, - e.g., image size, scaling factor, etc. - pre_reg (Tensor): Box regression map. - shape (B, channel, H , W). - - Returns: - dict: the dict has components below: - - bbox3d_yaws (:obj:`CameraInstance3DBoxes`): - bbox calculated using pred orientations. - - bbox3d_dims (:obj:`CameraInstance3DBoxes`): - bbox calculated using pred dimensions. - - bbox3d_locs (:obj:`CameraInstance3DBoxes`): - bbox calculated using pred locations. - """ - batch, channel = pred_reg.shape[0], pred_reg.shape[1] - w = pred_reg.shape[3] - cam2imgs = torch.stack([ - gt_locations.new_tensor(img_meta['cam2img']) - for img_meta in img_metas - ]) - trans_mats = torch.stack([ - gt_locations.new_tensor(img_meta['trans_mat']) - for img_meta in img_metas - ]) - centers2d_inds = centers2d[:, 1] * w + centers2d[:, 0] - centers2d_inds = centers2d_inds.view(batch, -1) - pred_regression = transpose_and_gather_feat(pred_reg, centers2d_inds) - pred_regression_pois = pred_regression.view(-1, channel) - locations, dimensions, orientations = self.bbox_coder.decode( - pred_regression_pois, centers2d, labels3d, cam2imgs, trans_mats, - gt_locations) - - locations, dimensions, orientations = locations[indices], dimensions[ - indices], orientations[indices] - - locations[:, 1] += dimensions[:, 1] / 2 - - gt_locations = gt_locations[indices] - - assert len(locations) == len(gt_locations) - assert len(dimensions) == len(gt_dimensions) - assert len(orientations) == len(gt_orientations) - bbox3d_yaws = self.bbox_coder.encode(gt_locations, gt_dimensions, - orientations, img_metas) - bbox3d_dims = self.bbox_coder.encode(gt_locations, dimensions, - gt_orientations, img_metas) - bbox3d_locs = self.bbox_coder.encode(locations, gt_dimensions, - gt_orientations, img_metas) - - pred_bboxes = dict(ori=bbox3d_yaws, dim=bbox3d_dims, loc=bbox3d_locs) - - return pred_bboxes - - def get_targets(self, gt_bboxes, gt_labels, gt_bboxes_3d, gt_labels_3d, - centers2d, feat_shape, img_shape, img_metas): - """Get training targets for batch images. - - Args: - gt_bboxes (list[Tensor]): Ground truth bboxes of each image, - shape (num_gt, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, - shape (num_gt,). - gt_bboxes_3d (list[:obj:`CameraInstance3DBoxes`]): 3D Ground - truth bboxes of each image, - shape (num_gt, bbox_code_size). - gt_labels_3d (list[Tensor]): 3D Ground truth labels of each - box, shape (num_gt,). - centers2d (list[Tensor]): Projected 3D centers onto 2D image, - shape (num_gt, 2). - feat_shape (tuple[int]): Feature map shape with value, - shape (B, _, H, W). - img_shape (tuple[int]): Image shape in [h, w] format. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple[Tensor, dict]: The Tensor value is the targets of - center heatmap, the dict has components below: - - gt_centers2d (Tensor): Coords of each projected 3D box - center on image. shape (B * max_objs, 2) - - gt_labels3d (Tensor): Labels of each 3D box. - shape (B, max_objs, ) - - indices (Tensor): Indices of the existence of the 3D box. - shape (B * max_objs, ) - - affine_indices (Tensor): Indices of the affine of the 3D box. - shape (N, ) - - gt_locs (Tensor): Coords of each 3D box's location. - shape (N, 3) - - gt_dims (Tensor): Dimensions of each 3D box. - shape (N, 3) - - gt_yaws (Tensor): Orientation(yaw) of each 3D box. - shape (N, 1) - - gt_cors (Tensor): Coords of the corners of each 3D box. 
- shape (N, 8, 3) - """ - - reg_mask = torch.stack([ - gt_bboxes[0].new_tensor( - not img_meta['affine_aug'], dtype=torch.bool) - for img_meta in img_metas - ]) - - img_h, img_w = img_shape[:2] - bs, _, feat_h, feat_w = feat_shape - - width_ratio = float(feat_w / img_w) # 1/4 - height_ratio = float(feat_h / img_h) # 1/4 - - assert width_ratio == height_ratio - - center_heatmap_target = gt_bboxes[-1].new_zeros( - [bs, self.num_classes, feat_h, feat_w]) - - gt_centers2d = centers2d.copy() - - for batch_id in range(bs): - gt_bbox = gt_bboxes[batch_id] - gt_label = gt_labels[batch_id] - # project centers2d from input image to feat map - gt_center2d = gt_centers2d[batch_id] * width_ratio - - for j, center in enumerate(gt_center2d): - center_x_int, center_y_int = center.int() - scale_box_h = (gt_bbox[j][3] - gt_bbox[j][1]) * height_ratio - scale_box_w = (gt_bbox[j][2] - gt_bbox[j][0]) * width_ratio - radius = gaussian_radius([scale_box_h, scale_box_w], - min_overlap=0.7) - radius = max(0, int(radius)) - ind = gt_label[j] - gen_gaussian_target(center_heatmap_target[batch_id, ind], - [center_x_int, center_y_int], radius) - - avg_factor = max(1, center_heatmap_target.eq(1).sum()) - num_ctrs = [center2d.shape[0] for center2d in centers2d] - max_objs = max(num_ctrs) - - reg_inds = torch.cat( - [reg_mask[i].repeat(num_ctrs[i]) for i in range(bs)]) - - inds = torch.zeros((bs, max_objs), - dtype=torch.bool).to(centers2d[0].device) - - # put gt 3d bboxes to gpu - gt_bboxes_3d = [ - gt_bbox_3d.to(centers2d[0].device) for gt_bbox_3d in gt_bboxes_3d - ] - - batch_centers2d = centers2d[0].new_zeros((bs, max_objs, 2)) - batch_labels_3d = gt_labels_3d[0].new_zeros((bs, max_objs)) - batch_gt_locations = \ - gt_bboxes_3d[0].tensor.new_zeros((bs, max_objs, 3)) - for i in range(bs): - inds[i, :num_ctrs[i]] = 1 - batch_centers2d[i, :num_ctrs[i]] = centers2d[i] - batch_labels_3d[i, :num_ctrs[i]] = gt_labels_3d[i] - batch_gt_locations[i, :num_ctrs[i]] = \ - gt_bboxes_3d[i].tensor[:, :3] - - inds = inds.flatten() - batch_centers2d = batch_centers2d.view(-1, 2) * width_ratio - batch_gt_locations = batch_gt_locations.view(-1, 3) - - # filter the empty image, without gt_bboxes_3d - gt_bboxes_3d = [ - gt_bbox_3d for gt_bbox_3d in gt_bboxes_3d - if gt_bbox_3d.tensor.shape[0] > 0 - ] - - gt_dimensions = torch.cat( - [gt_bbox_3d.tensor[:, 3:6] for gt_bbox_3d in gt_bboxes_3d]) - gt_orientations = torch.cat([ - gt_bbox_3d.tensor[:, 6].unsqueeze(-1) - for gt_bbox_3d in gt_bboxes_3d - ]) - gt_corners = torch.cat( - [gt_bbox_3d.corners for gt_bbox_3d in gt_bboxes_3d]) - - target_labels = dict( - gt_centers2d=batch_centers2d.long(), - gt_labels3d=batch_labels_3d, - indices=inds, - reg_indices=reg_inds, - gt_locs=batch_gt_locations, - gt_dims=gt_dimensions, - gt_yaws=gt_orientations, - gt_cors=gt_corners) - - return center_heatmap_target, avg_factor, target_labels - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level. - shape (num_gt, 4). - bbox_preds (list[Tensor]): Box dims is a 4D-tensor, the channel - number is bbox_code_size. - shape (B, 7, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image. - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box. - shape (num_gts, ). 
- gt_bboxes_3d (list[:obj:`CameraInstance3DBoxes`]): 3D boxes ground - truth. it is the flipped gt_bboxes - gt_labels_3d (list[Tensor]): Same as gt_labels. - centers2d (list[Tensor]): 2D centers on the image. - shape (num_gts, 2). - depths (list[Tensor]): Depth ground truth. - shape (num_gts, ). - attr_labels (list[Tensor]): Attributes indices of each box. - In kitti it's None. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) == 1 - assert attr_labels is None - assert gt_bboxes_ignore is None - center2d_heatmap = cls_scores[0] - pred_reg = bbox_preds[0] - - center2d_heatmap_target, avg_factor, target_labels = \ - self.get_targets(gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, - center2d_heatmap.shape, - img_metas[0]['pad_shape'], - img_metas) - - pred_bboxes = self.get_predictions( - labels3d=target_labels['gt_labels3d'], - centers2d=target_labels['gt_centers2d'], - gt_locations=target_labels['gt_locs'], - gt_dimensions=target_labels['gt_dims'], - gt_orientations=target_labels['gt_yaws'], - indices=target_labels['indices'], - img_metas=img_metas, - pred_reg=pred_reg) - - loss_cls = self.loss_cls( - center2d_heatmap, center2d_heatmap_target, avg_factor=avg_factor) - - reg_inds = target_labels['reg_indices'] - - loss_bbox_oris = self.loss_bbox( - pred_bboxes['ori'].corners[reg_inds, ...], - target_labels['gt_cors'][reg_inds, ...]) - - loss_bbox_dims = self.loss_bbox( - pred_bboxes['dim'].corners[reg_inds, ...], - target_labels['gt_cors'][reg_inds, ...]) - - loss_bbox_locs = self.loss_bbox( - pred_bboxes['loc'].corners[reg_inds, ...], - target_labels['gt_cors'][reg_inds, ...]) - - loss_bbox = loss_bbox_dims + loss_bbox_locs + loss_bbox_oris - - loss_dict = dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - return loss_dict diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/ssd_3d_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/ssd_3d_head.py deleted file mode 100644 index c20c4b12..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/ssd_3d_head.py +++ /dev/null @@ -1,557 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops.nms import batched_nms -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from mmdet3d.core.bbox.structures import (DepthInstance3DBoxes, - LiDARInstance3DBoxes, - rotation_3d_in_axis) -from mmdet.core import multi_apply -from ..builder import HEADS, build_loss -from .vote_head import VoteHead - - -@HEADS.register_module() -class SSD3DHead(VoteHead): - r"""Bbox head of `3DSSD `_. - - Args: - num_classes (int): The number of class. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for encoding and - decoding boxes. - in_channels (int): The number of input feature channel. - train_cfg (dict): Config for training. - test_cfg (dict): Config for testing. - vote_module_cfg (dict): Config of VoteModule for point-wise votes. - vote_aggregation_cfg (dict): Config of vote aggregation layer. - pred_layer_cfg (dict): Config of classfication and regression - prediction layers. - conv_cfg (dict): Config of convolution in prediction layer. - norm_cfg (dict): Config of BN in prediction layer. - act_cfg (dict): Config of activation in prediction layer. 
- objectness_loss (dict): Config of objectness loss. - center_loss (dict): Config of center loss. - dir_class_loss (dict): Config of direction classification loss. - dir_res_loss (dict): Config of direction residual regression loss. - size_res_loss (dict): Config of size residual regression loss. - corner_loss (dict): Config of bbox corners regression loss. - vote_loss (dict): Config of candidate points regression loss. - """ - - def __init__(self, - num_classes, - bbox_coder, - in_channels=256, - train_cfg=None, - test_cfg=None, - vote_module_cfg=None, - vote_aggregation_cfg=None, - pred_layer_cfg=None, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - objectness_loss=None, - center_loss=None, - dir_class_loss=None, - dir_res_loss=None, - size_res_loss=None, - corner_loss=None, - vote_loss=None, - init_cfg=None): - super(SSD3DHead, self).__init__( - num_classes, - bbox_coder, - train_cfg=train_cfg, - test_cfg=test_cfg, - vote_module_cfg=vote_module_cfg, - vote_aggregation_cfg=vote_aggregation_cfg, - pred_layer_cfg=pred_layer_cfg, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - objectness_loss=objectness_loss, - center_loss=center_loss, - dir_class_loss=dir_class_loss, - dir_res_loss=dir_res_loss, - size_class_loss=None, - size_res_loss=size_res_loss, - semantic_loss=None, - init_cfg=init_cfg) - - self.corner_loss = build_loss(corner_loss) - self.vote_loss = build_loss(vote_loss) - self.num_candidates = vote_module_cfg['num_points'] - - def _get_cls_out_channels(self): - """Return the channel number of classification outputs.""" - # Class numbers (k) + objectness (1) - return self.num_classes - - def _get_reg_out_channels(self): - """Return the channel number of regression outputs.""" - # Bbox classification and regression - # (center residual (3), size regression (3) - # heading class+residual (num_dir_bins*2)), - return 3 + 3 + self.num_dir_bins * 2 - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Coordinates of input points. - torch.Tensor: Features of input points. - torch.Tensor: Indices of input points. - """ - seed_points = feat_dict['sa_xyz'][-1] - seed_features = feat_dict['sa_features'][-1] - seed_indices = feat_dict['sa_indices'][-1] - - return seed_points, seed_features, seed_indices - - @force_fp32(apply_to=('bbox_preds', )) - def loss(self, - bbox_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - img_metas=None, - gt_bboxes_ignore=None): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of SSD3DHead. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - img_metas (list[dict]): Contain pcd and img's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict: Losses of 3DSSD. 
- """ - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - bbox_preds) - (vote_targets, center_targets, size_res_targets, dir_class_targets, - dir_res_targets, mask_targets, centerness_targets, corner3d_targets, - vote_mask, positive_mask, negative_mask, centerness_weights, - box_loss_weights, heading_res_loss_weight) = targets - - # calculate centerness loss - centerness_loss = self.objectness_loss( - bbox_preds['obj_scores'].transpose(2, 1), - centerness_targets, - weight=centerness_weights) - - # calculate center loss - center_loss = self.center_loss( - bbox_preds['center_offset'], - center_targets, - weight=box_loss_weights.unsqueeze(-1)) - - # calculate direction class loss - dir_class_loss = self.dir_class_loss( - bbox_preds['dir_class'].transpose(1, 2), - dir_class_targets, - weight=box_loss_weights) - - # calculate direction residual loss - dir_res_loss = self.dir_res_loss( - bbox_preds['dir_res_norm'], - dir_res_targets.unsqueeze(-1).repeat(1, 1, self.num_dir_bins), - weight=heading_res_loss_weight) - - # calculate size residual loss - size_loss = self.size_res_loss( - bbox_preds['size'], - size_res_targets, - weight=box_loss_weights.unsqueeze(-1)) - - # calculate corner loss - one_hot_dir_class_targets = dir_class_targets.new_zeros( - bbox_preds['dir_class'].shape) - one_hot_dir_class_targets.scatter_(2, dir_class_targets.unsqueeze(-1), - 1) - pred_bbox3d = self.bbox_coder.decode( - dict( - center=bbox_preds['center'], - dir_res=bbox_preds['dir_res'], - dir_class=one_hot_dir_class_targets, - size=bbox_preds['size'])) - pred_bbox3d = pred_bbox3d.reshape(-1, pred_bbox3d.shape[-1]) - pred_bbox3d = img_metas[0]['box_type_3d']( - pred_bbox3d.clone(), - box_dim=pred_bbox3d.shape[-1], - with_yaw=self.bbox_coder.with_rot, - origin=(0.5, 0.5, 0.5)) - pred_corners3d = pred_bbox3d.corners.reshape(-1, 8, 3) - corner_loss = self.corner_loss( - pred_corners3d, - corner3d_targets.reshape(-1, 8, 3), - weight=box_loss_weights.view(-1, 1, 1)) - - # calculate vote loss - vote_loss = self.vote_loss( - bbox_preds['vote_offset'].transpose(1, 2), - vote_targets, - weight=vote_mask.unsqueeze(-1)) - - losses = dict( - centerness_loss=centerness_loss, - center_loss=center_loss, - dir_class_loss=dir_class_loss, - dir_res_loss=dir_res_loss, - size_res_loss=size_loss, - corner_loss=corner_loss, - vote_loss=vote_loss) - - return losses - - def get_targets(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - bbox_preds=None): - """Generate targets of ssd3d head. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - pts_semantic_mask (list[torch.Tensor]): Point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): Point-wise instance - label of each batch. - bbox_preds (torch.Tensor): Bounding box predictions of ssd3d head. - - Returns: - tuple[torch.Tensor]: Targets of ssd3d head. 
- """ - # find empty example - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - - if pts_semantic_mask is None: - pts_semantic_mask = [None for i in range(len(gt_labels_3d))] - pts_instance_mask = [None for i in range(len(gt_labels_3d))] - - aggregated_points = [ - bbox_preds['aggregated_points'][i] - for i in range(len(gt_labels_3d)) - ] - - seed_points = [ - bbox_preds['seed_points'][i, :self.num_candidates].detach() - for i in range(len(gt_labels_3d)) - ] - - (vote_targets, center_targets, size_res_targets, dir_class_targets, - dir_res_targets, mask_targets, centerness_targets, corner3d_targets, - vote_mask, positive_mask, negative_mask) = multi_apply( - self.get_targets_single, points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, aggregated_points, - seed_points) - - center_targets = torch.stack(center_targets) - positive_mask = torch.stack(positive_mask) - negative_mask = torch.stack(negative_mask) - dir_class_targets = torch.stack(dir_class_targets) - dir_res_targets = torch.stack(dir_res_targets) - size_res_targets = torch.stack(size_res_targets) - mask_targets = torch.stack(mask_targets) - centerness_targets = torch.stack(centerness_targets).detach() - corner3d_targets = torch.stack(corner3d_targets) - vote_targets = torch.stack(vote_targets) - vote_mask = torch.stack(vote_mask) - - center_targets -= bbox_preds['aggregated_points'] - - centerness_weights = (positive_mask + - negative_mask).unsqueeze(-1).repeat( - 1, 1, self.num_classes).float() - centerness_weights = centerness_weights / \ - (centerness_weights.sum() + 1e-6) - vote_mask = vote_mask / (vote_mask.sum() + 1e-6) - - box_loss_weights = positive_mask / (positive_mask.sum() + 1e-6) - - batch_size, proposal_num = dir_class_targets.shape[:2] - heading_label_one_hot = dir_class_targets.new_zeros( - (batch_size, proposal_num, self.num_dir_bins)) - heading_label_one_hot.scatter_(2, dir_class_targets.unsqueeze(-1), 1) - heading_res_loss_weight = heading_label_one_hot * \ - box_loss_weights.unsqueeze(-1) - - return (vote_targets, center_targets, size_res_targets, - dir_class_targets, dir_res_targets, mask_targets, - centerness_targets, corner3d_targets, vote_mask, positive_mask, - negative_mask, centerness_weights, box_loss_weights, - heading_res_loss_weight) - - def get_targets_single(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - aggregated_points=None, - seed_points=None): - """Generate targets of ssd3d head for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. - gt_labels_3d (torch.Tensor): Labels of each batch. - pts_semantic_mask (torch.Tensor): Point-wise semantic - label of each batch. - pts_instance_mask (torch.Tensor): Point-wise instance - label of each batch. - aggregated_points (torch.Tensor): Aggregated points from - candidate points layer. - seed_points (torch.Tensor): Seed points of candidate points. - - Returns: - tuple[torch.Tensor]: Targets of ssd3d head. 
- """ - assert self.bbox_coder.with_rot or pts_semantic_mask is not None - gt_bboxes_3d = gt_bboxes_3d.to(points.device) - valid_gt = gt_labels_3d != -1 - gt_bboxes_3d = gt_bboxes_3d[valid_gt] - gt_labels_3d = gt_labels_3d[valid_gt] - - # Generate fake GT for empty scene - if valid_gt.sum() == 0: - vote_targets = points.new_zeros(self.num_candidates, 3) - center_targets = points.new_zeros(self.num_candidates, 3) - size_res_targets = points.new_zeros(self.num_candidates, 3) - dir_class_targets = points.new_zeros( - self.num_candidates, dtype=torch.int64) - dir_res_targets = points.new_zeros(self.num_candidates) - mask_targets = points.new_zeros( - self.num_candidates, dtype=torch.int64) - centerness_targets = points.new_zeros(self.num_candidates, - self.num_classes) - corner3d_targets = points.new_zeros(self.num_candidates, 8, 3) - vote_mask = points.new_zeros(self.num_candidates, dtype=torch.bool) - positive_mask = points.new_zeros( - self.num_candidates, dtype=torch.bool) - negative_mask = points.new_ones( - self.num_candidates, dtype=torch.bool) - return (vote_targets, center_targets, size_res_targets, - dir_class_targets, dir_res_targets, mask_targets, - centerness_targets, corner3d_targets, vote_mask, - positive_mask, negative_mask) - - gt_corner3d = gt_bboxes_3d.corners - - (center_targets, size_targets, dir_class_targets, - dir_res_targets) = self.bbox_coder.encode(gt_bboxes_3d, gt_labels_3d) - - points_mask, assignment = self._assign_targets_by_points_inside( - gt_bboxes_3d, aggregated_points) - - center_targets = center_targets[assignment] - size_res_targets = size_targets[assignment] - mask_targets = gt_labels_3d[assignment] - dir_class_targets = dir_class_targets[assignment] - dir_res_targets = dir_res_targets[assignment] - corner3d_targets = gt_corner3d[assignment] - - top_center_targets = center_targets.clone() - top_center_targets[:, 2] += size_res_targets[:, 2] - dist = torch.norm(aggregated_points - top_center_targets, dim=1) - dist_mask = dist < self.train_cfg.pos_distance_thr - positive_mask = (points_mask.max(1)[0] > 0) * dist_mask - negative_mask = (points_mask.max(1)[0] == 0) - - # Centerness loss targets - canonical_xyz = aggregated_points - center_targets - if self.bbox_coder.with_rot: - # TODO: Align points rotation implementation of - # LiDARInstance3DBoxes and DepthInstance3DBoxes - canonical_xyz = rotation_3d_in_axis( - canonical_xyz.unsqueeze(0).transpose(0, 1), - -gt_bboxes_3d.yaw[assignment], - axis=2).squeeze(1) - distance_front = torch.clamp( - size_res_targets[:, 0] - canonical_xyz[:, 0], min=0) - distance_back = torch.clamp( - size_res_targets[:, 0] + canonical_xyz[:, 0], min=0) - distance_left = torch.clamp( - size_res_targets[:, 1] - canonical_xyz[:, 1], min=0) - distance_right = torch.clamp( - size_res_targets[:, 1] + canonical_xyz[:, 1], min=0) - distance_top = torch.clamp( - size_res_targets[:, 2] - canonical_xyz[:, 2], min=0) - distance_bottom = torch.clamp( - size_res_targets[:, 2] + canonical_xyz[:, 2], min=0) - - centerness_l = torch.min(distance_front, distance_back) / torch.max( - distance_front, distance_back) - centerness_w = torch.min(distance_left, distance_right) / torch.max( - distance_left, distance_right) - centerness_h = torch.min(distance_bottom, distance_top) / torch.max( - distance_bottom, distance_top) - centerness_targets = torch.clamp( - centerness_l * centerness_w * centerness_h, min=0) - centerness_targets = centerness_targets.pow(1 / 3.0) - centerness_targets = torch.clamp(centerness_targets, min=0, max=1) - - proposal_num = 
centerness_targets.shape[0] - one_hot_centerness_targets = centerness_targets.new_zeros( - (proposal_num, self.num_classes)) - one_hot_centerness_targets.scatter_(1, mask_targets.unsqueeze(-1), 1) - centerness_targets = centerness_targets.unsqueeze( - 1) * one_hot_centerness_targets - - # Vote loss targets - enlarged_gt_bboxes_3d = gt_bboxes_3d.enlarged_box( - self.train_cfg.expand_dims_length) - enlarged_gt_bboxes_3d.tensor[:, 2] -= self.train_cfg.expand_dims_length - vote_mask, vote_assignment = self._assign_targets_by_points_inside( - enlarged_gt_bboxes_3d, seed_points) - - vote_targets = gt_bboxes_3d.gravity_center - vote_targets = vote_targets[vote_assignment] - seed_points - vote_mask = vote_mask.max(1)[0] > 0 - - return (vote_targets, center_targets, size_res_targets, - dir_class_targets, dir_res_targets, mask_targets, - centerness_targets, corner3d_targets, vote_mask, positive_mask, - negative_mask) - - def get_bboxes(self, points, bbox_preds, input_metas, rescale=False): - """Generate bboxes from 3DSSD head predictions. - - Args: - points (torch.Tensor): Input points. - bbox_preds (dict): Predictions from sdd3d head. - input_metas (list[dict]): Point cloud and image's meta info. - rescale (bool): Whether to rescale bboxes. - - Returns: - list[tuple[torch.Tensor]]: Bounding boxes, scores and labels. - """ - # decode boxes - sem_scores = F.sigmoid(bbox_preds['obj_scores']).transpose(1, 2) - obj_scores = sem_scores.max(-1)[0] - bbox3d = self.bbox_coder.decode(bbox_preds) - - batch_size = bbox3d.shape[0] - results = list() - - for b in range(batch_size): - bbox_selected, score_selected, labels = self.multiclass_nms_single( - obj_scores[b], sem_scores[b], bbox3d[b], points[b, ..., :3], - input_metas[b]) - - bbox = input_metas[b]['box_type_3d']( - bbox_selected.clone(), - box_dim=bbox_selected.shape[-1], - with_yaw=self.bbox_coder.with_rot) - results.append((bbox, score_selected, labels)) - - return results - - def multiclass_nms_single(self, obj_scores, sem_scores, bbox, points, - input_meta): - """Multi-class nms in single batch. - - Args: - obj_scores (torch.Tensor): Objectness score of bounding boxes. - sem_scores (torch.Tensor): Semantic class score of bounding boxes. - bbox (torch.Tensor): Predicted bounding boxes. - points (torch.Tensor): Input points. - input_meta (dict): Point cloud and image's meta info. - - Returns: - tuple[torch.Tensor]: Bounding boxes, scores and labels. 
- """ - bbox = input_meta['box_type_3d']( - bbox.clone(), - box_dim=bbox.shape[-1], - with_yaw=self.bbox_coder.with_rot, - origin=(0.5, 0.5, 0.5)) - - if isinstance(bbox, (LiDARInstance3DBoxes, DepthInstance3DBoxes)): - box_indices = bbox.points_in_boxes_all(points) - nonempty_box_mask = box_indices.T.sum(1) >= 0 - else: - raise NotImplementedError('Unsupported bbox type!') - - corner3d = bbox.corners - minmax_box3d = corner3d.new(torch.Size((corner3d.shape[0], 6))) - minmax_box3d[:, :3] = torch.min(corner3d, dim=1)[0] - minmax_box3d[:, 3:] = torch.max(corner3d, dim=1)[0] - - bbox_classes = torch.argmax(sem_scores, -1) - nms_keep = batched_nms( - minmax_box3d[nonempty_box_mask][:, [0, 1, 3, 4]], - obj_scores[nonempty_box_mask], bbox_classes[nonempty_box_mask], - self.test_cfg.nms_cfg)[1] - - if nms_keep.shape[0] > self.test_cfg.max_output_num: - nms_keep = nms_keep[:self.test_cfg.max_output_num] - - # filter empty boxes and boxes with low score - scores_mask = (obj_scores >= self.test_cfg.score_thr) - nonempty_box_inds = torch.nonzero( - nonempty_box_mask, as_tuple=False).flatten() - nonempty_mask = torch.zeros_like(bbox_classes).scatter( - 0, nonempty_box_inds[nms_keep], 1) - selected = (nonempty_mask.bool() & scores_mask.bool()) - - if self.test_cfg.per_class_proposal: - bbox_selected, score_selected, labels = [], [], [] - for k in range(sem_scores.shape[-1]): - bbox_selected.append(bbox[selected].tensor) - score_selected.append(obj_scores[selected]) - labels.append( - torch.zeros_like(bbox_classes[selected]).fill_(k)) - bbox_selected = torch.cat(bbox_selected, 0) - score_selected = torch.cat(score_selected, 0) - labels = torch.cat(labels, 0) - else: - bbox_selected = bbox[selected].tensor - score_selected = obj_scores[selected] - labels = bbox_classes[selected] - - return bbox_selected, score_selected, labels - - def _assign_targets_by_points_inside(self, bboxes_3d, points): - """Compute assignment by checking whether point is inside bbox. - - Args: - bboxes_3d (BaseInstance3DBoxes): Instance of bounding boxes. - points (torch.Tensor): Points of a batch. - - Returns: - tuple[torch.Tensor]: Flags indicating whether each point is - inside bbox and the index of box where each point are in. - """ - if isinstance(bboxes_3d, (LiDARInstance3DBoxes, DepthInstance3DBoxes)): - points_mask = bboxes_3d.points_in_boxes_all(points) - assignment = points_mask.argmax(dim=-1) - else: - raise NotImplementedError('Unsupported bbox type!') - - return points_mask, assignment diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/train_mixins.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/train_mixins.py deleted file mode 100644 index 90c9cbbf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/train_mixins.py +++ /dev/null @@ -1,349 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet3d.core import limit_period -from mmdet.core import images_to_levels, multi_apply - - -class AnchorTrainMixin(object): - """Mixin class for target assigning of dense heads.""" - - def anchor_target_3d(self, - anchor_list, - gt_bboxes_list, - input_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - num_classes=1, - sampling=True): - """Compute regression and classification targets for anchors. - - Args: - anchor_list (list[list]): Multi level anchors of each image. - gt_bboxes_list (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each image. 
- input_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list): Ignore list of gt bboxes. - gt_labels_list (list[torch.Tensor]): Gt labels of batches. - label_channels (int): The channel of labels. - num_classes (int): The number of classes. - sampling (bool): Whether to sample anchors. - - Returns: - tuple (list, list, list, list, list, list, int, int): - Anchor targets, including labels, label weights, - bbox targets, bbox weights, direction targets, - direction weights, number of positive anchors and - number of negative anchors. - """ - num_imgs = len(input_metas) - assert len(anchor_list) == num_imgs - - if isinstance(anchor_list[0][0], list): - # sizes of anchors are different - # anchor number of a single level - num_level_anchors = [ - sum([anchor.size(0) for anchor in anchors]) - for anchors in anchor_list[0] - ] - for i in range(num_imgs): - anchor_list[i] = anchor_list[i][0] - else: - # anchor number of multi levels - num_level_anchors = [ - anchors.view(-1, self.box_code_size).size(0) - for anchors in anchor_list[0] - ] - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - anchor_list[i] = torch.cat(anchor_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - all_dir_targets, all_dir_weights, pos_inds_list, - neg_inds_list) = multi_apply( - self.anchor_target_3d_single, - anchor_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - input_metas, - label_channels=label_channels, - num_classes=num_classes, - sampling=sampling) - - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - dir_targets_list = images_to_levels(all_dir_targets, num_level_anchors) - dir_weights_list = images_to_levels(all_dir_weights, num_level_anchors) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, dir_targets_list, dir_weights_list, - num_total_pos, num_total_neg) - - def anchor_target_3d_single(self, - anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - input_meta, - label_channels=1, - num_classes=1, - sampling=True): - """Compute targets of anchors in single batch. - - Args: - anchors (torch.Tensor): Concatenated multi-level anchor. - gt_bboxes (:obj:`BaseInstance3DBoxes`): Gt bboxes. - gt_bboxes_ignore (torch.Tensor): Ignored gt bboxes. - gt_labels (torch.Tensor): Gt class labels. - input_meta (dict): Meta info of each image. - label_channels (int): The channel of labels. - num_classes (int): The number of classes. - sampling (bool): Whether to sample anchors. - - Returns: - tuple[torch.Tensor]: Anchor targets. 
- """ - if isinstance(self.bbox_assigner, - list) and (not isinstance(anchors, list)): - feat_size = anchors.size(0) * anchors.size(1) * anchors.size(2) - rot_angles = anchors.size(-2) - assert len(self.bbox_assigner) == anchors.size(-3) - (total_labels, total_label_weights, total_bbox_targets, - total_bbox_weights, total_dir_targets, total_dir_weights, - total_pos_inds, total_neg_inds) = [], [], [], [], [], [], [], [] - current_anchor_num = 0 - for i, assigner in enumerate(self.bbox_assigner): - current_anchors = anchors[..., i, :, :].reshape( - -1, self.box_code_size) - current_anchor_num += current_anchors.size(0) - if self.assign_per_class: - gt_per_cls = (gt_labels == i) - anchor_targets = self.anchor_target_single_assigner( - assigner, current_anchors, gt_bboxes[gt_per_cls, :], - gt_bboxes_ignore, gt_labels[gt_per_cls], input_meta, - num_classes, sampling) - else: - anchor_targets = self.anchor_target_single_assigner( - assigner, current_anchors, gt_bboxes, gt_bboxes_ignore, - gt_labels, input_meta, num_classes, sampling) - - (labels, label_weights, bbox_targets, bbox_weights, - dir_targets, dir_weights, pos_inds, neg_inds) = anchor_targets - total_labels.append(labels.reshape(feat_size, 1, rot_angles)) - total_label_weights.append( - label_weights.reshape(feat_size, 1, rot_angles)) - total_bbox_targets.append( - bbox_targets.reshape(feat_size, 1, rot_angles, - anchors.size(-1))) - total_bbox_weights.append( - bbox_weights.reshape(feat_size, 1, rot_angles, - anchors.size(-1))) - total_dir_targets.append( - dir_targets.reshape(feat_size, 1, rot_angles)) - total_dir_weights.append( - dir_weights.reshape(feat_size, 1, rot_angles)) - total_pos_inds.append(pos_inds) - total_neg_inds.append(neg_inds) - - total_labels = torch.cat(total_labels, dim=-2).reshape(-1) - total_label_weights = torch.cat( - total_label_weights, dim=-2).reshape(-1) - total_bbox_targets = torch.cat( - total_bbox_targets, dim=-3).reshape(-1, anchors.size(-1)) - total_bbox_weights = torch.cat( - total_bbox_weights, dim=-3).reshape(-1, anchors.size(-1)) - total_dir_targets = torch.cat( - total_dir_targets, dim=-2).reshape(-1) - total_dir_weights = torch.cat( - total_dir_weights, dim=-2).reshape(-1) - total_pos_inds = torch.cat(total_pos_inds, dim=0).reshape(-1) - total_neg_inds = torch.cat(total_neg_inds, dim=0).reshape(-1) - return (total_labels, total_label_weights, total_bbox_targets, - total_bbox_weights, total_dir_targets, total_dir_weights, - total_pos_inds, total_neg_inds) - elif isinstance(self.bbox_assigner, list) and isinstance( - anchors, list): - # class-aware anchors with different feature map sizes - assert len(self.bbox_assigner) == len(anchors), \ - 'The number of bbox assigners and anchors should be the same.' 
- (total_labels, total_label_weights, total_bbox_targets, - total_bbox_weights, total_dir_targets, total_dir_weights, - total_pos_inds, total_neg_inds) = [], [], [], [], [], [], [], [] - current_anchor_num = 0 - for i, assigner in enumerate(self.bbox_assigner): - current_anchors = anchors[i] - current_anchor_num += current_anchors.size(0) - if self.assign_per_class: - gt_per_cls = (gt_labels == i) - anchor_targets = self.anchor_target_single_assigner( - assigner, current_anchors, gt_bboxes[gt_per_cls, :], - gt_bboxes_ignore, gt_labels[gt_per_cls], input_meta, - num_classes, sampling) - else: - anchor_targets = self.anchor_target_single_assigner( - assigner, current_anchors, gt_bboxes, gt_bboxes_ignore, - gt_labels, input_meta, num_classes, sampling) - - (labels, label_weights, bbox_targets, bbox_weights, - dir_targets, dir_weights, pos_inds, neg_inds) = anchor_targets - total_labels.append(labels) - total_label_weights.append(label_weights) - total_bbox_targets.append( - bbox_targets.reshape(-1, anchors[i].size(-1))) - total_bbox_weights.append( - bbox_weights.reshape(-1, anchors[i].size(-1))) - total_dir_targets.append(dir_targets) - total_dir_weights.append(dir_weights) - total_pos_inds.append(pos_inds) - total_neg_inds.append(neg_inds) - - total_labels = torch.cat(total_labels, dim=0) - total_label_weights = torch.cat(total_label_weights, dim=0) - total_bbox_targets = torch.cat(total_bbox_targets, dim=0) - total_bbox_weights = torch.cat(total_bbox_weights, dim=0) - total_dir_targets = torch.cat(total_dir_targets, dim=0) - total_dir_weights = torch.cat(total_dir_weights, dim=0) - total_pos_inds = torch.cat(total_pos_inds, dim=0) - total_neg_inds = torch.cat(total_neg_inds, dim=0) - return (total_labels, total_label_weights, total_bbox_targets, - total_bbox_weights, total_dir_targets, total_dir_weights, - total_pos_inds, total_neg_inds) - else: - return self.anchor_target_single_assigner(self.bbox_assigner, - anchors, gt_bboxes, - gt_bboxes_ignore, - gt_labels, input_meta, - num_classes, sampling) - - def anchor_target_single_assigner(self, - bbox_assigner, - anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - input_meta, - num_classes=1, - sampling=True): - """Assign anchors and encode positive anchors. - - Args: - bbox_assigner (BaseAssigner): assign positive and negative boxes. - anchors (torch.Tensor): Concatenated multi-level anchor. - gt_bboxes (:obj:`BaseInstance3DBoxes`): Gt bboxes. - gt_bboxes_ignore (torch.Tensor): Ignored gt bboxes. - gt_labels (torch.Tensor): Gt class labels. - input_meta (dict): Meta info of each image. - num_classes (int): The number of classes. - sampling (bool): Whether to sample anchors. - - Returns: - tuple[torch.Tensor]: Anchor targets. 
- """ - anchors = anchors.reshape(-1, anchors.size(-1)) - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - dir_targets = anchors.new_zeros((anchors.shape[0]), dtype=torch.long) - dir_weights = anchors.new_zeros((anchors.shape[0]), dtype=torch.float) - labels = anchors.new_zeros(num_valid_anchors, dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - if len(gt_bboxes) > 0: - if not isinstance(gt_bboxes, torch.Tensor): - gt_bboxes = gt_bboxes.tensor.to(anchors.device) - assign_result = bbox_assigner.assign(anchors, gt_bboxes, - gt_bboxes_ignore, gt_labels) - sampling_result = self.bbox_sampler.sample(assign_result, anchors, - gt_bboxes) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - else: - pos_inds = torch.nonzero( - anchors.new_zeros((anchors.shape[0], ), dtype=torch.bool) > 0, - as_tuple=False).squeeze(-1).unique() - neg_inds = torch.nonzero( - anchors.new_zeros((anchors.shape[0], ), dtype=torch.bool) == 0, - as_tuple=False).squeeze(-1).unique() - - if gt_labels is not None: - labels += num_classes - if len(pos_inds) > 0: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - pos_dir_targets = get_direction_target( - sampling_result.pos_bboxes, - pos_bbox_targets, - self.dir_offset, - self.dir_limit_offset, - one_hot=False) - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - dir_targets[pos_inds] = pos_dir_targets - dir_weights[pos_inds] = 1.0 - - if gt_labels is None: - labels[pos_inds] = 1 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - return (labels, label_weights, bbox_targets, bbox_weights, dir_targets, - dir_weights, pos_inds, neg_inds) - - -def get_direction_target(anchors, - reg_targets, - dir_offset=0, - dir_limit_offset=0, - num_bins=2, - one_hot=True): - """Encode direction to 0 ~ num_bins-1. - - Args: - anchors (torch.Tensor): Concatenated multi-level anchor. - reg_targets (torch.Tensor): Bbox regression targets. - dir_offset (int): Direction offset. - num_bins (int): Number of bins to divide 2*PI. - one_hot (bool): Whether to encode as one hot. - - Returns: - torch.Tensor: Encoded direction targets. - """ - rot_gt = reg_targets[..., 6] + anchors[..., 6] - offset_rot = limit_period(rot_gt - dir_offset, dir_limit_offset, 2 * np.pi) - dir_cls_targets = torch.floor(offset_rot / (2 * np.pi / num_bins)).long() - dir_cls_targets = torch.clamp(dir_cls_targets, min=0, max=num_bins - 1) - if one_hot: - dir_targets = torch.zeros( - *list(dir_cls_targets.shape), - num_bins, - dtype=anchors.dtype, - device=dir_cls_targets.device) - dir_targets.scatter_(dir_cls_targets.unsqueeze(dim=-1).long(), 1.0) - dir_cls_targets = dir_targets - return dir_cls_targets diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/vote_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/vote_head.py deleted file mode 100644 index 53b1154f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/dense_heads/vote_head.py +++ /dev/null @@ -1,663 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch -from mmcv.ops import furthest_point_sample -from mmcv.runner import BaseModule, force_fp32 -from torch.nn import functional as F - -from mmdet3d.core.post_processing import aligned_3d_nms -from mmdet3d.models.losses import chamfer_distance -from mmdet3d.models.model_utils import VoteModule -from mmdet3d.ops import build_sa_module -from mmdet.core import build_bbox_coder, multi_apply -from ..builder import HEADS, build_loss -from .base_conv_bbox_head import BaseConvBboxHead - - -@HEADS.register_module() -class VoteHead(BaseModule): - r"""Bbox head of `Votenet `_. - - Args: - num_classes (int): The number of class. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for encoding and - decoding boxes. - train_cfg (dict): Config for training. - test_cfg (dict): Config for testing. - vote_module_cfg (dict): Config of VoteModule for point-wise votes. - vote_aggregation_cfg (dict): Config of vote aggregation layer. - pred_layer_cfg (dict): Config of classfication and regression - prediction layers. - conv_cfg (dict): Config of convolution in prediction layer. - norm_cfg (dict): Config of BN in prediction layer. - objectness_loss (dict): Config of objectness loss. - center_loss (dict): Config of center loss. - dir_class_loss (dict): Config of direction classification loss. - dir_res_loss (dict): Config of direction residual regression loss. - size_class_loss (dict): Config of size classification loss. - size_res_loss (dict): Config of size residual regression loss. - semantic_loss (dict): Config of point-wise semantic segmentation loss. - """ - - def __init__(self, - num_classes, - bbox_coder, - train_cfg=None, - test_cfg=None, - vote_module_cfg=None, - vote_aggregation_cfg=None, - pred_layer_cfg=None, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=None, - center_loss=None, - dir_class_loss=None, - dir_res_loss=None, - size_class_loss=None, - size_res_loss=None, - semantic_loss=None, - iou_loss=None, - init_cfg=None): - super(VoteHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.gt_per_seed = vote_module_cfg['gt_per_seed'] - self.num_proposal = vote_aggregation_cfg['num_point'] - - self.objectness_loss = build_loss(objectness_loss) - self.center_loss = build_loss(center_loss) - self.dir_res_loss = build_loss(dir_res_loss) - self.dir_class_loss = build_loss(dir_class_loss) - self.size_res_loss = build_loss(size_res_loss) - if size_class_loss is not None: - self.size_class_loss = build_loss(size_class_loss) - if semantic_loss is not None: - self.semantic_loss = build_loss(semantic_loss) - if iou_loss is not None: - self.iou_loss = build_loss(iou_loss) - else: - self.iou_loss = None - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.num_sizes = self.bbox_coder.num_sizes - self.num_dir_bins = self.bbox_coder.num_dir_bins - - self.vote_module = VoteModule(**vote_module_cfg) - self.vote_aggregation = build_sa_module(vote_aggregation_cfg) - self.fp16_enabled = False - - # Bbox classification and regression - self.conv_pred = BaseConvBboxHead( - **pred_layer_cfg, - num_cls_out_channels=self._get_cls_out_channels(), - num_reg_out_channels=self._get_reg_out_channels()) - - def _get_cls_out_channels(self): - """Return the channel number of classification outputs.""" - # Class numbers (k) + objectness (2) - return self.num_classes + 2 - - def _get_reg_out_channels(self): - """Return the channel number of regression outputs.""" - # Objectness scores (2), 
center residual (3), - # heading class+residual (num_dir_bins*2), - # size class+residual(num_sizes*4) - return 3 + self.num_dir_bins * 2 + self.num_sizes * 4 - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Coordinates of input points. - torch.Tensor: Features of input points. - torch.Tensor: Indices of input points. - """ - - # for imvotenet - if 'seed_points' in feat_dict and \ - 'seed_features' in feat_dict and \ - 'seed_indices' in feat_dict: - seed_points = feat_dict['seed_points'] - seed_features = feat_dict['seed_features'] - seed_indices = feat_dict['seed_indices'] - # for votenet - else: - seed_points = feat_dict['fp_xyz'][-1] - seed_features = feat_dict['fp_features'][-1] - seed_indices = feat_dict['fp_indices'][-1] - - return seed_points, seed_features, seed_indices - - def forward(self, feat_dict, sample_mod): - """Forward pass. - - Note: - The forward of VoteHead is divided into 4 steps: - - 1. Generate vote_points from seed_points. - 2. Aggregate vote_points. - 3. Predict bbox and score. - 4. Decode predictions. - - Args: - feat_dict (dict): Feature dict from backbone. - sample_mod (str): Sample mode for vote aggregation layer. - valid modes are "vote", "seed", "random" and "spec". - - Returns: - dict: Predictions of vote head. - """ - assert sample_mod in ['vote', 'seed', 'random', 'spec'] - - seed_points, seed_features, seed_indices = self._extract_input( - feat_dict) - - # 1. generate vote_points from seed_points - vote_points, vote_features, vote_offset = self.vote_module( - seed_points, seed_features) - results = dict( - seed_points=seed_points, - seed_indices=seed_indices, - vote_points=vote_points, - vote_features=vote_features, - vote_offset=vote_offset) - - # 2. aggregate vote_points - if sample_mod == 'vote': - # use fps in vote_aggregation - aggregation_inputs = dict( - points_xyz=vote_points, features=vote_features) - elif sample_mod == 'seed': - # FPS on seed and choose the votes corresponding to the seeds - sample_indices = furthest_point_sample(seed_points, - self.num_proposal) - aggregation_inputs = dict( - points_xyz=vote_points, - features=vote_features, - indices=sample_indices) - elif sample_mod == 'random': - # Random sampling from the votes - batch_size, num_seed = seed_points.shape[:2] - sample_indices = seed_points.new_tensor( - torch.randint(0, num_seed, (batch_size, self.num_proposal)), - dtype=torch.int32) - aggregation_inputs = dict( - points_xyz=vote_points, - features=vote_features, - indices=sample_indices) - elif sample_mod == 'spec': - # Specify the new center in vote_aggregation - aggregation_inputs = dict( - points_xyz=seed_points, - features=seed_features, - target_xyz=vote_points) - else: - raise NotImplementedError( - f'Sample mode {sample_mod} is not supported!') - - vote_aggregation_ret = self.vote_aggregation(**aggregation_inputs) - aggregated_points, features, aggregated_indices = vote_aggregation_ret - - results['aggregated_points'] = aggregated_points - results['aggregated_features'] = features - results['aggregated_indices'] = aggregated_indices - - # 3. predict bbox and score - cls_predictions, reg_predictions = self.conv_pred(features) - - # 4. 
decode predictions - decode_res = self.bbox_coder.split_pred(cls_predictions, - reg_predictions, - aggregated_points) - - results.update(decode_res) - - return results - - @force_fp32(apply_to=('bbox_preds', )) - def loss(self, - bbox_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - img_metas=None, - gt_bboxes_ignore=None, - ret_target=False): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of vote head. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - img_metas (list[dict]): Contain pcd and img's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - ret_target (Bool): Return targets or not. - - Returns: - dict: Losses of Votenet. - """ - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - bbox_preds) - (vote_targets, vote_target_masks, size_class_targets, size_res_targets, - dir_class_targets, dir_res_targets, center_targets, - assigned_center_targets, mask_targets, valid_gt_masks, - objectness_targets, objectness_weights, box_loss_weights, - valid_gt_weights) = targets - - # calculate vote loss - vote_loss = self.vote_module.get_loss(bbox_preds['seed_points'], - bbox_preds['vote_points'], - bbox_preds['seed_indices'], - vote_target_masks, vote_targets) - - # calculate objectness loss - objectness_loss = self.objectness_loss( - bbox_preds['obj_scores'].transpose(2, 1), - objectness_targets, - weight=objectness_weights) - - # calculate center loss - source2target_loss, target2source_loss = self.center_loss( - bbox_preds['center'], - center_targets, - src_weight=box_loss_weights, - dst_weight=valid_gt_weights) - center_loss = source2target_loss + target2source_loss - - # calculate direction class loss - dir_class_loss = self.dir_class_loss( - bbox_preds['dir_class'].transpose(2, 1), - dir_class_targets, - weight=box_loss_weights) - - # calculate direction residual loss - batch_size, proposal_num = size_class_targets.shape[:2] - heading_label_one_hot = vote_targets.new_zeros( - (batch_size, proposal_num, self.num_dir_bins)) - heading_label_one_hot.scatter_(2, dir_class_targets.unsqueeze(-1), 1) - dir_res_norm = torch.sum( - bbox_preds['dir_res_norm'] * heading_label_one_hot, -1) - dir_res_loss = self.dir_res_loss( - dir_res_norm, dir_res_targets, weight=box_loss_weights) - - # calculate size class loss - size_class_loss = self.size_class_loss( - bbox_preds['size_class'].transpose(2, 1), - size_class_targets, - weight=box_loss_weights) - - # calculate size residual loss - one_hot_size_targets = vote_targets.new_zeros( - (batch_size, proposal_num, self.num_sizes)) - one_hot_size_targets.scatter_(2, size_class_targets.unsqueeze(-1), 1) - one_hot_size_targets_expand = one_hot_size_targets.unsqueeze( - -1).repeat(1, 1, 1, 3).contiguous() - size_residual_norm = torch.sum( - bbox_preds['size_res_norm'] * one_hot_size_targets_expand, 2) - box_loss_weights_expand = box_loss_weights.unsqueeze(-1).repeat( - 1, 1, 3) - size_res_loss = self.size_res_loss( - size_residual_norm, - size_res_targets, - weight=box_loss_weights_expand) - - # calculate semantic loss - semantic_loss = self.semantic_loss( - bbox_preds['sem_scores'].transpose(2, 1), - mask_targets, - 
weight=box_loss_weights) - - losses = dict( - vote_loss=vote_loss, - objectness_loss=objectness_loss, - semantic_loss=semantic_loss, - center_loss=center_loss, - dir_class_loss=dir_class_loss, - dir_res_loss=dir_res_loss, - size_class_loss=size_class_loss, - size_res_loss=size_res_loss) - - if self.iou_loss: - corners_pred = self.bbox_coder.decode_corners( - bbox_preds['center'], size_residual_norm, - one_hot_size_targets_expand) - corners_target = self.bbox_coder.decode_corners( - assigned_center_targets, size_res_targets, - one_hot_size_targets_expand) - iou_loss = self.iou_loss( - corners_pred, corners_target, weight=box_loss_weights) - losses['iou_loss'] = iou_loss - - if ret_target: - losses['targets'] = targets - - return losses - - def get_targets(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - bbox_preds=None): - """Generate targets of vote head. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - pts_semantic_mask (list[torch.Tensor]): Point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): Point-wise instance - label of each batch. - bbox_preds (torch.Tensor): Bounding box predictions of vote head. - - Returns: - tuple[torch.Tensor]: Targets of vote head. - """ - # find empty example - valid_gt_masks = list() - gt_num = list() - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - valid_gt_masks.append(gt_labels_3d[index].new_zeros(1)) - gt_num.append(1) - else: - valid_gt_masks.append(gt_labels_3d[index].new_ones( - gt_labels_3d[index].shape)) - gt_num.append(gt_labels_3d[index].shape[0]) - max_gt_num = max(gt_num) - - if pts_semantic_mask is None: - pts_semantic_mask = [None for i in range(len(gt_labels_3d))] - pts_instance_mask = [None for i in range(len(gt_labels_3d))] - - aggregated_points = [ - bbox_preds['aggregated_points'][i] - for i in range(len(gt_labels_3d)) - ] - - (vote_targets, vote_target_masks, size_class_targets, size_res_targets, - dir_class_targets, dir_res_targets, center_targets, - assigned_center_targets, mask_targets, objectness_targets, - objectness_masks) = multi_apply(self.get_targets_single, points, - gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - aggregated_points) - - # pad targets as original code of votenet. 
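The VoteHead forward earlier in this hunk is documented as four steps: generate votes from seeds, aggregate votes, predict boxes, decode. A minimal standalone sketch of the first two steps, with assumed tensor shapes and a plain random sampler standing in for the real FPS-based aggregation module:

```python
import torch

def vote_and_aggregate(seed_xyz, seed_feats, offset_net, num_proposal):
    """seed_xyz: (B, N, 3), seed_feats: (B, C, N)."""
    # step 1: every seed predicts an offset towards an object centre
    offsets = offset_net(seed_feats).transpose(1, 2)            # (B, N, 3)
    vote_xyz = seed_xyz + offsets
    # step 2: sample a fixed number of votes as proposal centres
    # (random sampling stands in for the real furthest-point sampling)
    idx = torch.randint(0, vote_xyz.shape[1], (vote_xyz.shape[0], num_proposal))
    centres = vote_xyz.gather(1, idx.unsqueeze(-1).expand(-1, -1, 3))
    return vote_xyz, centres

offset_net = torch.nn.Conv1d(256, 3, kernel_size=1)
votes, centres = vote_and_aggregate(torch.rand(2, 1024, 3),
                                    torch.rand(2, 256, 1024), offset_net, 256)
```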
-        for index in range(len(gt_labels_3d)):
-            pad_num = max_gt_num - gt_labels_3d[index].shape[0]
-            center_targets[index] = F.pad(center_targets[index],
-                                          (0, 0, 0, pad_num))
-            valid_gt_masks[index] = F.pad(valid_gt_masks[index], (0, pad_num))
-
-        vote_targets = torch.stack(vote_targets)
-        vote_target_masks = torch.stack(vote_target_masks)
-        center_targets = torch.stack(center_targets)
-        valid_gt_masks = torch.stack(valid_gt_masks)
-
-        assigned_center_targets = torch.stack(assigned_center_targets)
-        objectness_targets = torch.stack(objectness_targets)
-        objectness_weights = torch.stack(objectness_masks)
-        objectness_weights /= (torch.sum(objectness_weights) + 1e-6)
-        box_loss_weights = objectness_targets.float() / (
-            torch.sum(objectness_targets).float() + 1e-6)
-        valid_gt_weights = valid_gt_masks.float() / (
-            torch.sum(valid_gt_masks.float()) + 1e-6)
-        dir_class_targets = torch.stack(dir_class_targets)
-        dir_res_targets = torch.stack(dir_res_targets)
-        size_class_targets = torch.stack(size_class_targets)
-        size_res_targets = torch.stack(size_res_targets)
-        mask_targets = torch.stack(mask_targets)
-
-        return (vote_targets, vote_target_masks, size_class_targets,
-                size_res_targets, dir_class_targets, dir_res_targets,
-                center_targets, assigned_center_targets, mask_targets,
-                valid_gt_masks, objectness_targets, objectness_weights,
-                box_loss_weights, valid_gt_weights)
-
-    def get_targets_single(self,
-                           points,
-                           gt_bboxes_3d,
-                           gt_labels_3d,
-                           pts_semantic_mask=None,
-                           pts_instance_mask=None,
-                           aggregated_points=None):
-        """Generate targets of vote head for single batch.
-
-        Args:
-            points (torch.Tensor): Points of each batch.
-            gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth
-                boxes of each batch.
-            gt_labels_3d (torch.Tensor): Labels of each batch.
-            pts_semantic_mask (torch.Tensor): Point-wise semantic
-                label of each batch.
-            pts_instance_mask (torch.Tensor): Point-wise instance
-                label of each batch.
-            aggregated_points (torch.Tensor): Aggregated points from
-                vote aggregation layer.
-
-        Returns:
-            tuple[torch.Tensor]: Targets of vote head.
- """ - assert self.bbox_coder.with_rot or pts_semantic_mask is not None - - gt_bboxes_3d = gt_bboxes_3d.to(points.device) - - # generate votes target - num_points = points.shape[0] - if self.bbox_coder.with_rot: - vote_targets = points.new_zeros([num_points, 3 * self.gt_per_seed]) - vote_target_masks = points.new_zeros([num_points], - dtype=torch.long) - vote_target_idx = points.new_zeros([num_points], dtype=torch.long) - box_indices_all = gt_bboxes_3d.points_in_boxes_all(points) - for i in range(gt_labels_3d.shape[0]): - box_indices = box_indices_all[:, i] - indices = torch.nonzero( - box_indices, as_tuple=False).squeeze(-1) - selected_points = points[indices] - vote_target_masks[indices] = 1 - vote_targets_tmp = vote_targets[indices] - votes = gt_bboxes_3d.gravity_center[i].unsqueeze( - 0) - selected_points[:, :3] - - for j in range(self.gt_per_seed): - column_indices = torch.nonzero( - vote_target_idx[indices] == j, - as_tuple=False).squeeze(-1) - vote_targets_tmp[column_indices, - int(j * 3):int(j * 3 + - 3)] = votes[column_indices] - if j == 0: - vote_targets_tmp[column_indices] = votes[ - column_indices].repeat(1, self.gt_per_seed) - - vote_targets[indices] = vote_targets_tmp - vote_target_idx[indices] = torch.clamp( - vote_target_idx[indices] + 1, max=2) - elif pts_semantic_mask is not None: - vote_targets = points.new_zeros([num_points, 3]) - vote_target_masks = points.new_zeros([num_points], - dtype=torch.long) - - for i in torch.unique(pts_instance_mask): - indices = torch.nonzero( - pts_instance_mask == i, as_tuple=False).squeeze(-1) - if pts_semantic_mask[indices[0]] < self.num_classes: - selected_points = points[indices, :3] - center = 0.5 * ( - selected_points.min(0)[0] + selected_points.max(0)[0]) - vote_targets[indices, :] = center - selected_points - vote_target_masks[indices] = 1 - vote_targets = vote_targets.repeat((1, self.gt_per_seed)) - else: - raise NotImplementedError - - (center_targets, size_class_targets, size_res_targets, - dir_class_targets, - dir_res_targets) = self.bbox_coder.encode(gt_bboxes_3d, gt_labels_3d) - - proposal_num = aggregated_points.shape[0] - distance1, _, assignment, _ = chamfer_distance( - aggregated_points.unsqueeze(0), - center_targets.unsqueeze(0), - reduction='none') - assignment = assignment.squeeze(0) - euclidean_distance1 = torch.sqrt(distance1.squeeze(0) + 1e-6) - - objectness_targets = points.new_zeros((proposal_num), dtype=torch.long) - objectness_targets[ - euclidean_distance1 < self.train_cfg['pos_distance_thr']] = 1 - - objectness_masks = points.new_zeros((proposal_num)) - objectness_masks[ - euclidean_distance1 < self.train_cfg['pos_distance_thr']] = 1.0 - objectness_masks[ - euclidean_distance1 > self.train_cfg['neg_distance_thr']] = 1.0 - - dir_class_targets = dir_class_targets[assignment] - dir_res_targets = dir_res_targets[assignment] - dir_res_targets /= (np.pi / self.num_dir_bins) - size_class_targets = size_class_targets[assignment] - size_res_targets = size_res_targets[assignment] - - one_hot_size_targets = gt_bboxes_3d.tensor.new_zeros( - (proposal_num, self.num_sizes)) - one_hot_size_targets.scatter_(1, size_class_targets.unsqueeze(-1), 1) - one_hot_size_targets = one_hot_size_targets.unsqueeze(-1).repeat( - 1, 1, 3) - mean_sizes = size_res_targets.new_tensor( - self.bbox_coder.mean_sizes).unsqueeze(0) - pos_mean_sizes = torch.sum(one_hot_size_targets * mean_sizes, 1) - size_res_targets /= pos_mean_sizes - - mask_targets = gt_labels_3d[assignment] - assigned_center_targets = center_targets[assignment] - - return 
(vote_targets, vote_target_masks, size_class_targets, - size_res_targets, dir_class_targets, - dir_res_targets, center_targets, assigned_center_targets, - mask_targets.long(), objectness_targets, objectness_masks) - - def get_bboxes(self, - points, - bbox_preds, - input_metas, - rescale=False, - use_nms=True): - """Generate bboxes from vote head predictions. - - Args: - points (torch.Tensor): Input points. - bbox_preds (dict): Predictions from vote head. - input_metas (list[dict]): Point cloud and image's meta info. - rescale (bool): Whether to rescale bboxes. - use_nms (bool): Whether to apply NMS, skip nms postprocessing - while using vote head in rpn stage. - - Returns: - list[tuple[torch.Tensor]]: Bounding boxes, scores and labels. - """ - # decode boxes - obj_scores = F.softmax(bbox_preds['obj_scores'], dim=-1)[..., -1] - sem_scores = F.softmax(bbox_preds['sem_scores'], dim=-1) - bbox3d = self.bbox_coder.decode(bbox_preds) - - if use_nms: - batch_size = bbox3d.shape[0] - results = list() - for b in range(batch_size): - bbox_selected, score_selected, labels = \ - self.multiclass_nms_single(obj_scores[b], sem_scores[b], - bbox3d[b], points[b, ..., :3], - input_metas[b]) - bbox = input_metas[b]['box_type_3d']( - bbox_selected, - box_dim=bbox_selected.shape[-1], - with_yaw=self.bbox_coder.with_rot) - results.append((bbox, score_selected, labels)) - - return results - else: - return bbox3d - - def multiclass_nms_single(self, obj_scores, sem_scores, bbox, points, - input_meta): - """Multi-class nms in single batch. - - Args: - obj_scores (torch.Tensor): Objectness score of bounding boxes. - sem_scores (torch.Tensor): semantic class score of bounding boxes. - bbox (torch.Tensor): Predicted bounding boxes. - points (torch.Tensor): Input points. - input_meta (dict): Point cloud and image's meta info. - - Returns: - tuple[torch.Tensor]: Bounding boxes, scores and labels. 
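get_targets_single above matches every proposal centre to its nearest ground-truth centre and thresholds that distance to build objectness targets and weights. A compact sketch of the same rule, using torch.cdist in place of the CUDA chamfer-distance op; the threshold values and names are illustrative assumptions:

```python
import torch

def assign_by_nearest_center(proposal_xyz, gt_xyz, pos_thr=0.3, neg_thr=0.6):
    """proposal_xyz: (P, 3), gt_xyz: (G, 3)."""
    dist = torch.cdist(proposal_xyz, gt_xyz)     # (P, G) pairwise distances
    min_dist, assignment = dist.min(dim=1)       # nearest GT per proposal
    objectness = (min_dist < pos_thr).long()     # close proposals become positives
    # supervise clear positives and clear negatives, ignore the ambiguous band
    weights = ((min_dist < pos_thr) | (min_dist > neg_thr)).float()
    return assignment, objectness, weights

assignment, obj, w = assign_by_nearest_center(torch.rand(256, 3), torch.rand(10, 3))
```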
- """ - bbox = input_meta['box_type_3d']( - bbox, - box_dim=bbox.shape[-1], - with_yaw=self.bbox_coder.with_rot, - origin=(0.5, 0.5, 0.5)) - box_indices = bbox.points_in_boxes_all(points) - - corner3d = bbox.corners - minmax_box3d = corner3d.new(torch.Size((corner3d.shape[0], 6))) - minmax_box3d[:, :3] = torch.min(corner3d, dim=1)[0] - minmax_box3d[:, 3:] = torch.max(corner3d, dim=1)[0] - - nonempty_box_mask = box_indices.T.sum(1) > 5 - - bbox_classes = torch.argmax(sem_scores, -1) - nms_selected = aligned_3d_nms(minmax_box3d[nonempty_box_mask], - obj_scores[nonempty_box_mask], - bbox_classes[nonempty_box_mask], - self.test_cfg.nms_thr) - - # filter empty boxes and boxes with low score - scores_mask = (obj_scores > self.test_cfg.score_thr) - nonempty_box_inds = torch.nonzero( - nonempty_box_mask, as_tuple=False).flatten() - nonempty_mask = torch.zeros_like(bbox_classes).scatter( - 0, nonempty_box_inds[nms_selected], 1) - selected = (nonempty_mask.bool() & scores_mask.bool()) - - if self.test_cfg.per_class_proposal: - bbox_selected, score_selected, labels = [], [], [] - for k in range(sem_scores.shape[-1]): - bbox_selected.append(bbox[selected].tensor) - score_selected.append(obj_scores[selected] * - sem_scores[selected][:, k]) - labels.append( - torch.zeros_like(bbox_classes[selected]).fill_(k)) - bbox_selected = torch.cat(bbox_selected, 0) - score_selected = torch.cat(score_selected, 0) - labels = torch.cat(labels, 0) - else: - bbox_selected = bbox[selected].tensor - score_selected = obj_scores[selected] - labels = bbox_classes[selected] - - return bbox_selected, score_selected, labels diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/__init__.py deleted file mode 100644 index 1924b123..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base import Base3DDetector -from .centerpoint import CenterPoint -from .dynamic_voxelnet import DynamicVoxelNet -from .fcos_mono3d import FCOSMono3D -from .groupfree3dnet import GroupFree3DNet -from .h3dnet import H3DNet -from .imvotenet import ImVoteNet -from .imvoxelnet import ImVoxelNet -from .mvx_faster_rcnn import DynamicMVXFasterRCNN, MVXFasterRCNN -from .mvx_two_stage import MVXTwoStageDetector -from .parta2 import PartA2 -from .point_rcnn import PointRCNN -from .sassd import SASSD -from .single_stage_mono3d import SingleStageMono3DDetector -from .smoke_mono3d import SMOKEMono3D -from .ssd3dnet import SSD3DNet -from .votenet import VoteNet -from .voxelnet import VoxelNet - -__all__ = [ - 'Base3DDetector', 'VoxelNet', 'DynamicVoxelNet', 'MVXTwoStageDetector', - 'DynamicMVXFasterRCNN', 'MVXFasterRCNN', 'PartA2', 'VoteNet', 'H3DNet', - 'CenterPoint', 'SSD3DNet', 'ImVoteNet', 'SingleStageMono3DDetector', - 'FCOSMono3D', 'ImVoxelNet', 'GroupFree3DNet', 'PointRCNN', 'SMOKEMono3D', - 'SASSD' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/base.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/base.py deleted file mode 100644 index 4985c1dc..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/base.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from os import path as osp - -import mmcv -import torch -from mmcv.parallel import DataContainer as DC -from mmcv.runner import auto_fp16 - -from mmdet3d.core import Box3DMode, Coord3DMode, show_result -from mmdet.models.detectors import BaseDetector - - -class Base3DDetector(BaseDetector): - """Base class for detectors.""" - - def forward_test(self, points, img_metas, img=None, **kwargs): - """ - Args: - points (list[torch.Tensor]): the outer list indicates test-time - augmentations and inner torch.Tensor should have a shape NxC, - which contains all points in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch - img (list[torch.Tensor], optional): the outer - list indicates test-time augmentations and inner - torch.Tensor should have a shape NxCxHxW, which contains - all images in the batch. Defaults to None. - """ - for var, name in [(points, 'points'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError('{} must be a list, but got {}'.format( - name, type(var))) - - num_augs = len(points) - if num_augs != len(img_metas): - raise ValueError( - 'num of augmentations ({}) != num of image meta ({})'.format( - len(points), len(img_metas))) - - if num_augs == 1: - img = [img] if img is None else img - return self.simple_test(points[0], img_metas[0], img[0], **kwargs) - else: - return self.aug_test(points, img_metas, img, **kwargs) - - @auto_fp16(apply_to=('img', 'points')) - def forward(self, return_loss=True, **kwargs): - """Calls either forward_train or forward_test depending on whether - return_loss=True. - - Note this setting will change the expected inputs. When - `return_loss=True`, img and img_metas are single-nested (i.e. - torch.Tensor and list[dict]), and when `resturn_loss=False`, img and - img_metas should be double nested (i.e. list[torch.Tensor], - list[list[dict]]), with the outer list indicating test time - augmentations. - """ - if return_loss: - return self.forward_train(**kwargs) - else: - return self.forward_test(**kwargs) - - def show_results(self, data, result, out_dir, show=False, score_thr=None): - """Results visualization. - - Args: - data (list[dict]): Input points and the information of the sample. - result (list[dict]): Prediction results. - out_dir (str): Output directory of visualization result. - show (bool, optional): Determines whether you are - going to show result by open3d. - Defaults to False. - score_thr (float, optional): Score threshold of bounding boxes. - Default to None. - """ - for batch_id in range(len(result)): - if isinstance(data['points'][0], DC): - points = data['points'][0]._data[0][batch_id].numpy() - elif mmcv.is_list_of(data['points'][0], torch.Tensor): - points = data['points'][0][batch_id] - else: - ValueError(f"Unsupported data type {type(data['points'][0])} " - f'for visualization!') - if isinstance(data['img_metas'][0], DC): - pts_filename = data['img_metas'][0]._data[0][batch_id][ - 'pts_filename'] - box_mode_3d = data['img_metas'][0]._data[0][batch_id][ - 'box_mode_3d'] - elif mmcv.is_list_of(data['img_metas'][0], dict): - pts_filename = data['img_metas'][0][batch_id]['pts_filename'] - box_mode_3d = data['img_metas'][0][batch_id]['box_mode_3d'] - else: - ValueError( - f"Unsupported data type {type(data['img_metas'][0])} " - f'for visualization!') - file_name = osp.split(pts_filename)[-1].split('.')[0] - - assert out_dir is not None, 'Expect out_dir, got none.' 
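Base3DDetector.forward above dispatches on return_loss, with singly nested inputs for training and doubly nested, augmentation-first inputs for testing. A minimal toy sketch of that dispatch; the tiny model and argument names are illustrative, not the mmdet3d API:

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(3, 1)

    def forward_train(self, points, img_metas):
        # training path: points is a plain (N, C) tensor, img_metas a list[dict]
        return {'loss': self.fc(points).mean()}

    def forward_test(self, points, img_metas):
        # testing path: outer list = test-time augmentations
        return [self.fc(aug).sigmoid() for aug in points]

    def forward(self, return_loss=True, **kwargs):
        return self.forward_train(**kwargs) if return_loss else self.forward_test(**kwargs)

model = TinyDetector()
losses = model(return_loss=True, points=torch.rand(4, 3), img_metas=[{}] * 4)
preds = model(return_loss=False, points=[torch.rand(4, 3)], img_metas=[[{}] * 4])
```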
- - pred_bboxes = result[batch_id]['boxes_3d'] - pred_labels = result[batch_id]['labels_3d'] - - if score_thr is not None: - mask = result[batch_id]['scores_3d'] > score_thr - pred_bboxes = pred_bboxes[mask] - pred_labels = pred_labels[mask] - - # for now we convert points and bbox into depth mode - if (box_mode_3d == Box3DMode.CAM) or (box_mode_3d - == Box3DMode.LIDAR): - points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - pred_bboxes = Box3DMode.convert(pred_bboxes, box_mode_3d, - Box3DMode.DEPTH) - elif box_mode_3d != Box3DMode.DEPTH: - ValueError( - f'Unsupported box_mode_3d {box_mode_3d} for conversion!') - pred_bboxes = pred_bboxes.tensor.cpu().numpy() - show_result( - points, - None, - pred_bboxes, - out_dir, - file_name, - show=show, - pred_labels=pred_labels) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/centerpoint.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/centerpoint.py deleted file mode 100644 index 290af5be..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/centerpoint.py +++ /dev/null @@ -1,196 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from ..builder import DETECTORS -from .mvx_two_stage import MVXTwoStageDetector - - -@DETECTORS.register_module() -class CenterPoint(MVXTwoStageDetector): - """Base class of Multi-modality VoxelNet.""" - - def __init__(self, - pts_voxel_layer=None, - pts_voxel_encoder=None, - pts_middle_encoder=None, - pts_fusion_layer=None, - img_backbone=None, - pts_backbone=None, - img_neck=None, - pts_neck=None, - pts_bbox_head=None, - img_roi_head=None, - img_rpn_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(CenterPoint, - self).__init__(pts_voxel_layer, pts_voxel_encoder, - pts_middle_encoder, pts_fusion_layer, - img_backbone, pts_backbone, img_neck, pts_neck, - pts_bbox_head, img_roi_head, img_rpn_head, - train_cfg, test_cfg, pretrained, init_cfg) - - def extract_pts_feat(self, pts, img_feats, img_metas): - """Extract features of points.""" - if not self.with_pts_bbox: - return None - voxels, num_points, coors = self.voxelize(pts) - - voxel_features = self.pts_voxel_encoder(voxels, num_points, coors) - batch_size = coors[-1, 0] + 1 - x = self.pts_middle_encoder(voxel_features, coors, batch_size) - x = self.pts_backbone(x) - if self.with_pts_neck: - x = self.pts_neck(x) - return x - - def forward_pts_train(self, - pts_feats, - gt_bboxes_3d, - gt_labels_3d, - img_metas, - gt_bboxes_ignore=None): - """Forward function for point cloud branch. - - Args: - pts_feats (list[torch.Tensor]): Features of point cloud branch - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - gt_labels_3d (list[torch.Tensor]): Ground truth labels for - boxes of each sampole - img_metas (list[dict]): Meta information of samples. - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - - Returns: - dict: Losses of each branch. 
- """ - outs = self.pts_bbox_head(pts_feats) - loss_inputs = [gt_bboxes_3d, gt_labels_3d, outs] - losses = self.pts_bbox_head.loss(*loss_inputs) - return losses - - def simple_test_pts(self, x, img_metas, rescale=False): - """Test function of point cloud branch.""" - outs = self.pts_bbox_head(x) - bbox_list = self.pts_bbox_head.get_bboxes( - outs, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test_pts(self, feats, img_metas, rescale=False): - """Test function of point cloud branch with augmentaiton. - - The function implementation process is as follows: - - - step 1: map features back for double-flip augmentation. - - step 2: merge all features and generate boxes. - - step 3: map boxes back for scale augmentation. - - step 4: merge results. - - Args: - feats (list[torch.Tensor]): Feature of point cloud. - img_metas (list[dict]): Meta information of samples. - rescale (bool, optional): Whether to rescale bboxes. - Default: False. - - Returns: - dict: Returned bboxes consists of the following keys: - - - boxes_3d (:obj:`LiDARInstance3DBoxes`): Predicted bboxes. - - scores_3d (torch.Tensor): Scores of predicted boxes. - - labels_3d (torch.Tensor): Labels of predicted boxes. - """ - # only support aug_test for one sample - outs_list = [] - for x, img_meta in zip(feats, img_metas): - outs = self.pts_bbox_head(x) - # merge augmented outputs before decoding bboxes - for task_id, out in enumerate(outs): - for key in out[0].keys(): - if img_meta[0]['pcd_horizontal_flip']: - outs[task_id][0][key] = torch.flip( - outs[task_id][0][key], dims=[2]) - if key == 'reg': - outs[task_id][0][key][:, 1, ...] = 1 - outs[ - task_id][0][key][:, 1, ...] - elif key == 'rot': - outs[task_id][0][ - key][:, 0, - ...] = -outs[task_id][0][key][:, 0, ...] - elif key == 'vel': - outs[task_id][0][ - key][:, 1, - ...] = -outs[task_id][0][key][:, 1, ...] - if img_meta[0]['pcd_vertical_flip']: - outs[task_id][0][key] = torch.flip( - outs[task_id][0][key], dims=[3]) - if key == 'reg': - outs[task_id][0][key][:, 0, ...] = 1 - outs[ - task_id][0][key][:, 0, ...] - elif key == 'rot': - outs[task_id][0][ - key][:, 1, - ...] = -outs[task_id][0][key][:, 1, ...] - elif key == 'vel': - outs[task_id][0][ - key][:, 0, - ...] = -outs[task_id][0][key][:, 0, ...] 
- - outs_list.append(outs) - - preds_dicts = dict() - scale_img_metas = [] - - # concat outputs sharing the same pcd_scale_factor - for i, (img_meta, outs) in enumerate(zip(img_metas, outs_list)): - pcd_scale_factor = img_meta[0]['pcd_scale_factor'] - if pcd_scale_factor not in preds_dicts.keys(): - preds_dicts[pcd_scale_factor] = outs - scale_img_metas.append(img_meta) - else: - for task_id, out in enumerate(outs): - for key in out[0].keys(): - preds_dicts[pcd_scale_factor][task_id][0][key] += out[ - 0][key] - - aug_bboxes = [] - - for pcd_scale_factor, preds_dict in preds_dicts.items(): - for task_id, pred_dict in enumerate(preds_dict): - # merge outputs with different flips before decoding bboxes - for key in pred_dict[0].keys(): - preds_dict[task_id][0][key] /= len(outs_list) / len( - preds_dicts.keys()) - bbox_list = self.pts_bbox_head.get_bboxes( - preds_dict, img_metas[0], rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - if len(preds_dicts.keys()) > 1: - # merge outputs with different scales after decoding bboxes - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, scale_img_metas, - self.pts_bbox_head.test_cfg) - return merged_bboxes - else: - for key in bbox_list[0].keys(): - bbox_list[0][key] = bbox_list[0][key].to('cpu') - return bbox_list[0] - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test function with augmentaiton.""" - img_feats, pts_feats = self.extract_feats(points, img_metas, imgs) - bbox_list = dict() - if pts_feats and self.with_pts_bbox: - pts_bbox = self.aug_test_pts(pts_feats, img_metas, rescale) - bbox_list.update(pts_bbox=pts_bbox) - return [bbox_list] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/dynamic_voxelnet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/dynamic_voxelnet.py deleted file mode 100644 index c4226ecd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/dynamic_voxelnet.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from ..builder import DETECTORS -from .voxelnet import VoxelNet - - -@DETECTORS.register_module() -class DynamicVoxelNet(VoxelNet): - r"""VoxelNet using `dynamic voxelization `_. - """ - - def __init__(self, - voxel_layer, - voxel_encoder, - middle_encoder, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(DynamicVoxelNet, self).__init__( - voxel_layer=voxel_layer, - voxel_encoder=voxel_encoder, - middle_encoder=middle_encoder, - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def extract_feat(self, points, img_metas): - """Extract features from points.""" - voxels, coors = self.voxelize(points) - voxel_features, feature_coors = self.voxel_encoder(voxels, coors) - batch_size = coors[-1, 0].item() + 1 - x = self.middle_encoder(voxel_features, feature_coors, batch_size) - x = self.backbone(x) - if self.with_neck: - x = self.neck(x) - return x - - @torch.no_grad() - @force_fp32() - def voxelize(self, points): - """Apply dynamic voxelization to points. - - Args: - points (list[torch.Tensor]): Points of each sample. - - Returns: - tuple[torch.Tensor]: Concatenated points and coordinates. 
- """ - coors = [] - # dynamic voxelization only provide a coors mapping - for res in points: - res_coors = self.voxel_layer(res) - coors.append(res_coors) - points = torch.cat(points, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - return points, coors_batch diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/fcos_mono3d.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/fcos_mono3d.py deleted file mode 100644 index 5baed7b8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/fcos_mono3d.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage_mono3d import SingleStageMono3DDetector - - -@DETECTORS.register_module() -class FCOSMono3D(SingleStageMono3DDetector): - r"""`FCOS3D `_ for monocular 3D object detection. - - Currently please refer to our entry on the - `leaderboard `_. - """ # noqa: E501 - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(FCOSMono3D, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/groupfree3dnet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/groupfree3dnet.py deleted file mode 100644 index 71bd002f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/groupfree3dnet.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from ..builder import DETECTORS -from .single_stage import SingleStage3DDetector - - -@DETECTORS.register_module() -class GroupFree3DNet(SingleStage3DDetector): - """`Group-Free 3D `_.""" - - def __init__(self, - backbone, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(GroupFree3DNet, self).__init__( - backbone=backbone, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - gt_bboxes_ignore=None): - """Forward of training. - - Args: - points (list[torch.Tensor]): Points of each batch. - img_metas (list): Image metas. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): gt bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): gt class labels of each batch. - pts_semantic_mask (list[torch.Tensor]): point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): point-wise instance - label of each batch. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict[str: torch.Tensor]: Losses. - """ - # TODO: refactor votenet series to reduce redundant codes. - points_cat = torch.stack(points) - - x = self.extract_feat(points_cat) - bbox_preds = self.bbox_head(x, self.train_cfg.sample_mod) - loss_inputs = (points, gt_bboxes_3d, gt_labels_3d, pts_semantic_mask, - pts_instance_mask, img_metas) - losses = self.bbox_head.loss( - bbox_preds, *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Forward of testing. - - Args: - points (list[torch.Tensor]): Points of each sample. - img_metas (list): Image metas. - rescale (bool): Whether to rescale results. 
- Returns: - list: Predicted 3d boxes. - """ - points_cat = torch.stack(points) - - x = self.extract_feat(points_cat) - bbox_preds = self.bbox_head(x, self.test_cfg.sample_mod) - bbox_list = self.bbox_head.get_bboxes( - points_cat, bbox_preds, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test with augmentation.""" - points_cat = [torch.stack(pts) for pts in points] - feats = self.extract_feats(points_cat, img_metas) - - # only support aug_test for one sample - aug_bboxes = [] - for x, pts_cat, img_meta in zip(feats, points_cat, img_metas): - bbox_preds = self.bbox_head(x, self.test_cfg.sample_mod) - bbox_list = self.bbox_head.get_bboxes( - pts_cat, bbox_preds, img_meta, rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/h3dnet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/h3dnet.py deleted file mode 100644 index 033a9a1a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/h3dnet.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.core import merge_aug_bboxes_3d -from ..builder import DETECTORS -from .two_stage import TwoStage3DDetector - - -@DETECTORS.register_module() -class H3DNet(TwoStage3DDetector): - r"""H3DNet model. - - Please refer to the `paper `_ - """ - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(H3DNet, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - gt_bboxes_ignore=None): - """Forward of training. - - Args: - points (list[torch.Tensor]): Points of each batch. - img_metas (list): Image metas. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): gt bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): gt class labels of each batch. - pts_semantic_mask (list[torch.Tensor]): point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): point-wise instance - label of each batch. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict: Losses. 
- """ - points_cat = torch.stack(points) - - feats_dict = self.extract_feat(points_cat) - feats_dict['fp_xyz'] = [feats_dict['fp_xyz_net0'][-1]] - feats_dict['fp_features'] = [feats_dict['hd_feature']] - feats_dict['fp_indices'] = [feats_dict['fp_indices_net0'][-1]] - - losses = dict() - if self.with_rpn: - rpn_outs = self.rpn_head(feats_dict, self.train_cfg.rpn.sample_mod) - feats_dict.update(rpn_outs) - - rpn_loss_inputs = (points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, img_metas) - rpn_losses = self.rpn_head.loss( - rpn_outs, - *rpn_loss_inputs, - gt_bboxes_ignore=gt_bboxes_ignore, - ret_target=True) - feats_dict['targets'] = rpn_losses.pop('targets') - losses.update(rpn_losses) - - # Generate rpn proposals - proposal_cfg = self.train_cfg.get('rpn_proposal', - self.test_cfg.rpn) - proposal_inputs = (points, rpn_outs, img_metas) - proposal_list = self.rpn_head.get_bboxes( - *proposal_inputs, use_nms=proposal_cfg.use_nms) - feats_dict['proposal_list'] = proposal_list - else: - raise NotImplementedError - - roi_losses = self.roi_head.forward_train(feats_dict, img_metas, points, - gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, - pts_instance_mask, - gt_bboxes_ignore) - losses.update(roi_losses) - - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Forward of testing. - - Args: - points (list[torch.Tensor]): Points of each sample. - img_metas (list): Image metas. - rescale (bool): Whether to rescale results. - - Returns: - list: Predicted 3d boxes. - """ - points_cat = torch.stack(points) - - feats_dict = self.extract_feat(points_cat) - feats_dict['fp_xyz'] = [feats_dict['fp_xyz_net0'][-1]] - feats_dict['fp_features'] = [feats_dict['hd_feature']] - feats_dict['fp_indices'] = [feats_dict['fp_indices_net0'][-1]] - - if self.with_rpn: - proposal_cfg = self.test_cfg.rpn - rpn_outs = self.rpn_head(feats_dict, proposal_cfg.sample_mod) - feats_dict.update(rpn_outs) - # Generate rpn proposals - proposal_list = self.rpn_head.get_bboxes( - points, rpn_outs, img_metas, use_nms=proposal_cfg.use_nms) - feats_dict['proposal_list'] = proposal_list - else: - raise NotImplementedError - - return self.roi_head.simple_test( - feats_dict, img_metas, points_cat, rescale=rescale) - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test with augmentation.""" - points_cat = [torch.stack(pts) for pts in points] - feats_dict = self.extract_feats(points_cat, img_metas) - for feat_dict in feats_dict: - feat_dict['fp_xyz'] = [feat_dict['fp_xyz_net0'][-1]] - feat_dict['fp_features'] = [feat_dict['hd_feature']] - feat_dict['fp_indices'] = [feat_dict['fp_indices_net0'][-1]] - - # only support aug_test for one sample - aug_bboxes = [] - for feat_dict, pts_cat, img_meta in zip(feats_dict, points_cat, - img_metas): - if self.with_rpn: - proposal_cfg = self.test_cfg.rpn - rpn_outs = self.rpn_head(feat_dict, proposal_cfg.sample_mod) - feat_dict.update(rpn_outs) - # Generate rpn proposals - proposal_list = self.rpn_head.get_bboxes( - points, rpn_outs, img_metas, use_nms=proposal_cfg.use_nms) - feat_dict['proposal_list'] = proposal_list - else: - raise NotImplementedError - - bbox_results = self.roi_head.simple_test( - feat_dict, - self.test_cfg.rcnn.sample_mod, - img_meta, - pts_cat, - rescale=rescale) - aug_bboxes.append(bbox_results) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] - - def 
extract_feats(self, points, img_metas): - """Extract features of multiple samples.""" - return [ - self.extract_feat(pts, img_meta) - for pts, img_meta in zip(points, img_metas) - ] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/imvotenet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/imvotenet.py deleted file mode 100644 index 9f48b817..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/imvotenet.py +++ /dev/null @@ -1,819 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import numpy as np -import torch - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from mmdet3d.models.utils import MLP -from .. import builder -from ..builder import DETECTORS -from .base import Base3DDetector - - -def sample_valid_seeds(mask, num_sampled_seed=1024): - r"""Randomly sample seeds from all imvotes. - - Modified from ``_ - - Args: - mask (torch.Tensor): Bool tensor in shape ( - seed_num*max_imvote_per_pixel), indicates - whether this imvote corresponds to a 2D bbox. - num_sampled_seed (int): How many to sample from all imvotes. - - Returns: - torch.Tensor: Indices with shape (num_sampled_seed). - """ # noqa: E501 - device = mask.device - batch_size = mask.shape[0] - sample_inds = mask.new_zeros((batch_size, num_sampled_seed), - dtype=torch.int64) - for bidx in range(batch_size): - # return index of non zero elements - valid_inds = torch.nonzero(mask[bidx, :]).squeeze(-1) - if len(valid_inds) < num_sampled_seed: - # compute set t1 - t2 - t1 = torch.arange(num_sampled_seed, device=device) - t2 = valid_inds % num_sampled_seed - combined = torch.cat((t1, t2)) - uniques, counts = combined.unique(return_counts=True) - difference = uniques[counts == 1] - - rand_inds = torch.randperm( - len(difference), - device=device)[:num_sampled_seed - len(valid_inds)] - cur_sample_inds = difference[rand_inds] - cur_sample_inds = torch.cat((valid_inds, cur_sample_inds)) - else: - rand_inds = torch.randperm( - len(valid_inds), device=device)[:num_sampled_seed] - cur_sample_inds = valid_inds[rand_inds] - sample_inds[bidx, :] = cur_sample_inds - return sample_inds - - -@DETECTORS.register_module() -class ImVoteNet(Base3DDetector): - r"""`ImVoteNet `_ for 3D detection.""" - - def __init__(self, - pts_backbone=None, - pts_bbox_heads=None, - pts_neck=None, - img_backbone=None, - img_neck=None, - img_roi_head=None, - img_rpn_head=None, - img_mlp=None, - freeze_img_branch=False, - fusion_layer=None, - num_sampled_seed=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - - super(ImVoteNet, self).__init__(init_cfg=init_cfg) - - # point branch - if pts_backbone is not None: - self.pts_backbone = builder.build_backbone(pts_backbone) - if pts_neck is not None: - self.pts_neck = builder.build_neck(pts_neck) - if pts_bbox_heads is not None: - pts_bbox_head_common = pts_bbox_heads.common - pts_bbox_head_common.update( - train_cfg=train_cfg.pts if train_cfg is not None else None) - pts_bbox_head_common.update(test_cfg=test_cfg.pts) - pts_bbox_head_joint = pts_bbox_head_common.copy() - pts_bbox_head_joint.update(pts_bbox_heads.joint) - pts_bbox_head_pts = pts_bbox_head_common.copy() - pts_bbox_head_pts.update(pts_bbox_heads.pts) - pts_bbox_head_img = pts_bbox_head_common.copy() - pts_bbox_head_img.update(pts_bbox_heads.img) - - self.pts_bbox_head_joint = builder.build_head(pts_bbox_head_joint) - self.pts_bbox_head_pts = builder.build_head(pts_bbox_head_pts) - self.pts_bbox_head_img = builder.build_head(pts_bbox_head_img) - 
self.pts_bbox_heads = [ - self.pts_bbox_head_joint, self.pts_bbox_head_pts, - self.pts_bbox_head_img - ] - self.loss_weights = pts_bbox_heads.loss_weights - - # image branch - if img_backbone: - self.img_backbone = builder.build_backbone(img_backbone) - if img_neck is not None: - self.img_neck = builder.build_neck(img_neck) - if img_rpn_head is not None: - rpn_train_cfg = train_cfg.img_rpn if train_cfg \ - is not None else None - img_rpn_head_ = img_rpn_head.copy() - img_rpn_head_.update( - train_cfg=rpn_train_cfg, test_cfg=test_cfg.img_rpn) - self.img_rpn_head = builder.build_head(img_rpn_head_) - if img_roi_head is not None: - rcnn_train_cfg = train_cfg.img_rcnn if train_cfg \ - is not None else None - img_roi_head.update( - train_cfg=rcnn_train_cfg, test_cfg=test_cfg.img_rcnn) - self.img_roi_head = builder.build_head(img_roi_head) - - # fusion - if fusion_layer is not None: - self.fusion_layer = builder.build_fusion_layer(fusion_layer) - self.max_imvote_per_pixel = fusion_layer.max_imvote_per_pixel - - self.freeze_img_branch = freeze_img_branch - if freeze_img_branch: - self.freeze_img_branch_params() - - if img_mlp is not None: - self.img_mlp = MLP(**img_mlp) - - self.num_sampled_seed = num_sampled_seed - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if pretrained is None: - img_pretrained = None - pts_pretrained = None - elif isinstance(pretrained, dict): - img_pretrained = pretrained.get('img', None) - pts_pretrained = pretrained.get('pts', None) - else: - raise ValueError( - f'pretrained should be a dict, got {type(pretrained)}') - - if self.with_img_backbone: - if img_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg.') - self.img_backbone.init_cfg = dict( - type='Pretrained', checkpoint=img_pretrained) - if self.with_img_roi_head: - if img_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg.') - self.img_roi_head.init_cfg = dict( - type='Pretrained', checkpoint=img_pretrained) - - if self.with_pts_backbone: - if img_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg.') - self.pts_backbone.init_cfg = dict( - type='Pretrained', checkpoint=pts_pretrained) - - def freeze_img_branch_params(self): - """Freeze all image branch parameters.""" - if self.with_img_bbox_head: - for param in self.img_bbox_head.parameters(): - param.requires_grad = False - if self.with_img_backbone: - for param in self.img_backbone.parameters(): - param.requires_grad = False - if self.with_img_neck: - for param in self.img_neck.parameters(): - param.requires_grad = False - if self.with_img_rpn: - for param in self.img_rpn_head.parameters(): - param.requires_grad = False - if self.with_img_roi_head: - for param in self.img_roi_head.parameters(): - param.requires_grad = False - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Overload in order to load img network ckpts into img branch.""" - module_names = ['backbone', 'neck', 'roi_head', 'rpn_head'] - for key in list(state_dict): - for module_name in module_names: - if key.startswith(module_name) and ('img_' + - key) not in state_dict: - state_dict['img_' + key] = state_dict.pop(key) - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) - - def train(self, mode=True): - 
"""Overload in order to keep image branch modules in eval mode.""" - super(ImVoteNet, self).train(mode) - if self.freeze_img_branch: - if self.with_img_bbox_head: - self.img_bbox_head.eval() - if self.with_img_backbone: - self.img_backbone.eval() - if self.with_img_neck: - self.img_neck.eval() - if self.with_img_rpn: - self.img_rpn_head.eval() - if self.with_img_roi_head: - self.img_roi_head.eval() - - @property - def with_img_bbox(self): - """bool: Whether the detector has a 2D image box head.""" - return ((hasattr(self, 'img_roi_head') and self.img_roi_head.with_bbox) - or (hasattr(self, 'img_bbox_head') - and self.img_bbox_head is not None)) - - @property - def with_img_bbox_head(self): - """bool: Whether the detector has a 2D image box head (not roi).""" - return hasattr(self, - 'img_bbox_head') and self.img_bbox_head is not None - - @property - def with_img_backbone(self): - """bool: Whether the detector has a 2D image backbone.""" - return hasattr(self, 'img_backbone') and self.img_backbone is not None - - @property - def with_img_neck(self): - """bool: Whether the detector has a neck in image branch.""" - return hasattr(self, 'img_neck') and self.img_neck is not None - - @property - def with_img_rpn(self): - """bool: Whether the detector has a 2D RPN in image detector branch.""" - return hasattr(self, 'img_rpn_head') and self.img_rpn_head is not None - - @property - def with_img_roi_head(self): - """bool: Whether the detector has a RoI Head in image branch.""" - return hasattr(self, 'img_roi_head') and self.img_roi_head is not None - - @property - def with_pts_bbox(self): - """bool: Whether the detector has a 3D box head.""" - return hasattr(self, - 'pts_bbox_head') and self.pts_bbox_head is not None - - @property - def with_pts_backbone(self): - """bool: Whether the detector has a 3D backbone.""" - return hasattr(self, 'pts_backbone') and self.pts_backbone is not None - - @property - def with_pts_neck(self): - """bool: Whether the detector has a neck in 3D detector branch.""" - return hasattr(self, 'pts_neck') and self.pts_neck is not None - - def extract_feat(self, imgs): - """Just to inherit from abstract method.""" - pass - - def extract_img_feat(self, img): - """Directly extract features from the img backbone+neck.""" - x = self.img_backbone(img) - if self.with_img_neck: - x = self.img_neck(x) - return x - - def extract_img_feats(self, imgs): - """Extract features from multiple images. - - Args: - imgs (list[torch.Tensor]): A list of images. The images are - augmented from the same image but in different ways. - - Returns: - list[torch.Tensor]: Features of different images - """ - - assert isinstance(imgs, list) - return [self.extract_img_feat(img) for img in imgs] - - def extract_pts_feat(self, pts): - """Extract features of points.""" - x = self.pts_backbone(pts) - if self.with_pts_neck: - x = self.pts_neck(x) - - seed_points = x['fp_xyz'][-1] - seed_features = x['fp_features'][-1] - seed_indices = x['fp_indices'][-1] - - return (seed_points, seed_features, seed_indices) - - def extract_pts_feats(self, pts): - """Extract features of points from multiple samples.""" - assert isinstance(pts, list) - return [self.extract_pts_feat(pt) for pt in pts] - - @torch.no_grad() - def extract_bboxes_2d(self, - img, - img_metas, - train=True, - bboxes_2d=None, - **kwargs): - """Extract bounding boxes from 2d detector. - - Args: - img (torch.Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. 
- img_metas (list[dict]): Image meta info. - train (bool): train-time or not. - bboxes_2d (list[torch.Tensor]): provided 2d bboxes, - not supported yet. - - Return: - list[torch.Tensor]: a list of processed 2d bounding boxes. - """ - if bboxes_2d is None: - x = self.extract_img_feat(img) - proposal_list = self.img_rpn_head.simple_test_rpn(x, img_metas) - rets = self.img_roi_head.simple_test( - x, proposal_list, img_metas, rescale=False) - - rets_processed = [] - for ret in rets: - tmp = np.concatenate(ret, axis=0) - sem_class = img.new_zeros((len(tmp))) - start = 0 - for i, bboxes in enumerate(ret): - sem_class[start:start + len(bboxes)] = i - start += len(bboxes) - ret = img.new_tensor(tmp) - - # append class index - ret = torch.cat([ret, sem_class[:, None]], dim=-1) - inds = torch.argsort(ret[:, 4], descending=True) - ret = ret.index_select(0, inds) - - # drop half bboxes during training for better generalization - if train: - rand_drop = torch.randperm(len(ret))[:(len(ret) + 1) // 2] - rand_drop = torch.sort(rand_drop)[0] - ret = ret[rand_drop] - - rets_processed.append(ret.float()) - return rets_processed - else: - rets_processed = [] - for ret in bboxes_2d: - if len(ret) > 0 and train: - rand_drop = torch.randperm(len(ret))[:(len(ret) + 1) // 2] - rand_drop = torch.sort(rand_drop)[0] - ret = ret[rand_drop] - rets_processed.append(ret.float()) - return rets_processed - - def forward_train(self, - points=None, - img=None, - img_metas=None, - gt_bboxes=None, - gt_labels=None, - gt_bboxes_ignore=None, - gt_masks=None, - proposals=None, - bboxes_2d=None, - gt_bboxes_3d=None, - gt_labels_3d=None, - pts_semantic_mask=None, - pts_instance_mask=None, - **kwargs): - """Forwarding of train for image branch pretrain or stage 2 train. - - Args: - points (list[torch.Tensor]): Points of each batch. - img (torch.Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): list of image and point cloud meta info - dict. For example, keys include 'ori_shape', 'img_norm_cfg', - and 'transformation_3d_flow'. For details on the values of - the keys see `mmdet/datasets/pipelines/formatting.py:Collect`. - gt_bboxes (list[torch.Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[torch.Tensor]): class indices for each - 2d bounding box. - gt_bboxes_ignore (list[torch.Tensor]): specify which - 2d bounding boxes can be ignored when computing the loss. - gt_masks (torch.Tensor): true segmentation masks for each - 2d bbox, used if the architecture supports a segmentation task. - proposals: override rpn proposals (2d) with custom proposals. - Use when `with_rpn` is False. - bboxes_2d (list[torch.Tensor]): provided 2d bboxes, - not supported yet. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): 3d gt bboxes. - gt_labels_3d (list[torch.Tensor]): gt class labels for 3d bboxes. - pts_semantic_mask (list[torch.Tensor]): point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): point-wise instance - label of each batch. - - Returns: - dict[str, torch.Tensor]: a dictionary of loss components. 
- """ - if points is None: - x = self.extract_img_feat(img) - losses = dict() - - # RPN forward and loss - if self.with_img_rpn: - proposal_cfg = self.train_cfg.get('img_rpn_proposal', - self.test_cfg.img_rpn) - rpn_losses, proposal_list = self.img_rpn_head.forward_train( - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=gt_bboxes_ignore, - proposal_cfg=proposal_cfg) - losses.update(rpn_losses) - else: - proposal_list = proposals - - roi_losses = self.img_roi_head.forward_train( - x, img_metas, proposal_list, gt_bboxes, gt_labels, - gt_bboxes_ignore, gt_masks, **kwargs) - losses.update(roi_losses) - return losses - else: - bboxes_2d = self.extract_bboxes_2d( - img, img_metas, bboxes_2d=bboxes_2d, **kwargs) - - points = torch.stack(points) - seeds_3d, seed_3d_features, seed_indices = \ - self.extract_pts_feat(points) - - img_features, masks = self.fusion_layer(img, bboxes_2d, seeds_3d, - img_metas) - - inds = sample_valid_seeds(masks, self.num_sampled_seed) - batch_size, img_feat_size = img_features.shape[:2] - pts_feat_size = seed_3d_features.shape[1] - inds_img = inds.view(batch_size, 1, - -1).expand(-1, img_feat_size, -1) - img_features = img_features.gather(-1, inds_img) - inds = inds % inds.shape[1] - inds_seed_xyz = inds.view(batch_size, -1, 1).expand(-1, -1, 3) - seeds_3d = seeds_3d.gather(1, inds_seed_xyz) - inds_seed_feats = inds.view(batch_size, 1, - -1).expand(-1, pts_feat_size, -1) - seed_3d_features = seed_3d_features.gather(-1, inds_seed_feats) - seed_indices = seed_indices.gather(1, inds) - - img_features = self.img_mlp(img_features) - fused_features = torch.cat([seed_3d_features, img_features], dim=1) - - feat_dict_joint = dict( - seed_points=seeds_3d, - seed_features=fused_features, - seed_indices=seed_indices) - feat_dict_pts = dict( - seed_points=seeds_3d, - seed_features=seed_3d_features, - seed_indices=seed_indices) - feat_dict_img = dict( - seed_points=seeds_3d, - seed_features=img_features, - seed_indices=seed_indices) - - loss_inputs = (points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, img_metas) - bbox_preds_joints = self.pts_bbox_head_joint( - feat_dict_joint, self.train_cfg.pts.sample_mod) - bbox_preds_pts = self.pts_bbox_head_pts( - feat_dict_pts, self.train_cfg.pts.sample_mod) - bbox_preds_img = self.pts_bbox_head_img( - feat_dict_img, self.train_cfg.pts.sample_mod) - losses_towers = [] - losses_joint = self.pts_bbox_head_joint.loss( - bbox_preds_joints, - *loss_inputs, - gt_bboxes_ignore=gt_bboxes_ignore) - losses_pts = self.pts_bbox_head_pts.loss( - bbox_preds_pts, - *loss_inputs, - gt_bboxes_ignore=gt_bboxes_ignore) - losses_img = self.pts_bbox_head_img.loss( - bbox_preds_img, - *loss_inputs, - gt_bboxes_ignore=gt_bboxes_ignore) - losses_towers.append(losses_joint) - losses_towers.append(losses_pts) - losses_towers.append(losses_img) - combined_losses = dict() - for loss_term in losses_joint: - if 'loss' in loss_term: - combined_losses[loss_term] = 0 - for i in range(len(losses_towers)): - combined_losses[loss_term] += \ - losses_towers[i][loss_term] * \ - self.loss_weights[i] - else: - # only save the metric of the joint head - # if it is not a loss - combined_losses[loss_term] = \ - losses_towers[0][loss_term] - - return combined_losses - - def forward_test(self, - points=None, - img_metas=None, - img=None, - bboxes_2d=None, - **kwargs): - """Forwarding of test for image branch pretrain or stage 2 train. 
- - Args: - points (list[list[torch.Tensor]], optional): the outer - list indicates test-time augmentations and the inner - list contains all points in the batch, where each Tensor - should have a shape NxC. Defaults to None. - img_metas (list[list[dict]], optional): the outer list - indicates test-time augs (multiscale, flip, etc.) - and the inner list indicates images in a batch. - Defaults to None. - img (list[list[torch.Tensor]], optional): the outer - list indicates test-time augmentations and inner Tensor - should have a shape NxCxHxW, which contains all images - in the batch. Defaults to None. Defaults to None. - bboxes_2d (list[list[torch.Tensor]], optional): - Provided 2d bboxes, not supported yet. Defaults to None. - - Returns: - list[list[torch.Tensor]]|list[dict]: Predicted 2d or 3d boxes. - """ - if points is None: - for var, name in [(img, 'img'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError( - f'{name} must be a list, but got {type(var)}') - - num_augs = len(img) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(img)}) ' - f'!= num of image meta ({len(img_metas)})') - - if num_augs == 1: - # proposals (List[List[Tensor]]): the outer list indicates - # test-time augs (multiscale, flip, etc.) and the inner list - # indicates images in a batch. - # The Tensor should have a shape Px4, where P is the number of - # proposals. - if 'proposals' in kwargs: - kwargs['proposals'] = kwargs['proposals'][0] - return self.simple_test_img_only( - img=img[0], img_metas=img_metas[0], **kwargs) - else: - assert img[0].size(0) == 1, 'aug test does not support ' \ - 'inference with batch size ' \ - f'{img[0].size(0)}' - # TODO: support test augmentation for predefined proposals - assert 'proposals' not in kwargs - return self.aug_test_img_only( - img=img, img_metas=img_metas, **kwargs) - - else: - for var, name in [(points, 'points'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError('{} must be a list, but got {}'.format( - name, type(var))) - - num_augs = len(points) - if num_augs != len(img_metas): - raise ValueError( - 'num of augmentations ({}) != num of image meta ({})'. - format(len(points), len(img_metas))) - - if num_augs == 1: - return self.simple_test( - points[0], - img_metas[0], - img[0], - bboxes_2d=bboxes_2d[0] if bboxes_2d is not None else None, - **kwargs) - else: - return self.aug_test(points, img_metas, img, bboxes_2d, - **kwargs) - - def simple_test_img_only(self, - img, - img_metas, - proposals=None, - rescale=False): - r"""Test without augmentation, image network pretrain. May refer to - ``_. - - Args: - img (torch.Tensor): Should have a shape NxCxHxW, which contains - all images in the batch. - img_metas (list[dict]): - proposals (list[Tensor], optional): override rpn proposals - with custom proposals. Defaults to None. - rescale (bool, optional): Whether or not rescale bboxes to the - original shape of input image. Defaults to False. - - Returns: - list[list[torch.Tensor]]: Predicted 2d boxes. - """ # noqa: E501 - assert self.with_img_bbox, 'Img bbox head must be implemented.' - assert self.with_img_backbone, 'Img backbone must be implemented.' - assert self.with_img_rpn, 'Img rpn must be implemented.' - assert self.with_img_roi_head, 'Img roi head must be implemented.' 
- - x = self.extract_img_feat(img) - - if proposals is None: - proposal_list = self.img_rpn_head.simple_test_rpn(x, img_metas) - else: - proposal_list = proposals - - ret = self.img_roi_head.simple_test( - x, proposal_list, img_metas, rescale=rescale) - - return ret - - def simple_test(self, - points=None, - img_metas=None, - img=None, - bboxes_2d=None, - rescale=False, - **kwargs): - """Test without augmentation, stage 2. - - Args: - points (list[torch.Tensor], optional): Elements in the list - should have a shape NxC, the list indicates all point-clouds - in the batch. Defaults to None. - img_metas (list[dict], optional): List indicates - images in a batch. Defaults to None. - img (torch.Tensor, optional): Should have a shape NxCxHxW, - which contains all images in the batch. Defaults to None. - bboxes_2d (list[torch.Tensor], optional): - Provided 2d bboxes, not supported yet. Defaults to None. - rescale (bool, optional): Whether or not rescale bboxes. - Defaults to False. - - Returns: - list[dict]: Predicted 3d boxes. - """ - bboxes_2d = self.extract_bboxes_2d( - img, img_metas, train=False, bboxes_2d=bboxes_2d, **kwargs) - - points = torch.stack(points) - seeds_3d, seed_3d_features, seed_indices = \ - self.extract_pts_feat(points) - - img_features, masks = self.fusion_layer(img, bboxes_2d, seeds_3d, - img_metas) - - inds = sample_valid_seeds(masks, self.num_sampled_seed) - batch_size, img_feat_size = img_features.shape[:2] - pts_feat_size = seed_3d_features.shape[1] - inds_img = inds.view(batch_size, 1, -1).expand(-1, img_feat_size, -1) - img_features = img_features.gather(-1, inds_img) - inds = inds % inds.shape[1] - inds_seed_xyz = inds.view(batch_size, -1, 1).expand(-1, -1, 3) - seeds_3d = seeds_3d.gather(1, inds_seed_xyz) - inds_seed_feats = inds.view(batch_size, 1, - -1).expand(-1, pts_feat_size, -1) - seed_3d_features = seed_3d_features.gather(-1, inds_seed_feats) - seed_indices = seed_indices.gather(1, inds) - - img_features = self.img_mlp(img_features) - - fused_features = torch.cat([seed_3d_features, img_features], dim=1) - - feat_dict = dict( - seed_points=seeds_3d, - seed_features=fused_features, - seed_indices=seed_indices) - bbox_preds = self.pts_bbox_head_joint(feat_dict, - self.test_cfg.pts.sample_mod) - bbox_list = self.pts_bbox_head_joint.get_bboxes( - points, bbox_preds, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test_img_only(self, img, img_metas, rescale=False): - r"""Test function with augmentation, image network pretrain. May refer - to ``_. - - Args: - img (list[list[torch.Tensor]], optional): the outer - list indicates test-time augmentations and inner Tensor - should have a shape NxCxHxW, which contains all images - in the batch. Defaults to None. Defaults to None. - img_metas (list[list[dict]], optional): the outer list - indicates test-time augs (multiscale, flip, etc.) - and the inner list indicates images in a batch. - Defaults to None. - rescale (bool, optional): Whether or not rescale bboxes to the - original shape of input image. If rescale is False, then - returned bboxes and masks will fit the scale of imgs[0]. - Defaults to None. - - Returns: - list[list[torch.Tensor]]: Predicted 2d boxes. - """ # noqa: E501 - assert self.with_img_bbox, 'Img bbox head must be implemented.' - assert self.with_img_backbone, 'Img backbone must be implemented.' - assert self.with_img_rpn, 'Img rpn must be implemented.' 
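The stage-2 test path above gathers the sampled seeds' image features and point features by index and concatenates them channel-wise before the joint head. A small sketch of that gather-and-fuse step; the shapes and names are assumptions:

```python
import torch

def gather_and_fuse(seed_xyz, pts_feats, img_feats, inds):
    """seed_xyz: (B, N, 3), pts_feats/img_feats: (B, C*, N), inds: (B, K)."""
    xyz = seed_xyz.gather(1, inds.unsqueeze(-1).expand(-1, -1, 3))
    pts = pts_feats.gather(2, inds.unsqueeze(1).expand(-1, pts_feats.size(1), -1))
    img = img_feats.gather(2, inds.unsqueeze(1).expand(-1, img_feats.size(1), -1))
    fused = torch.cat([pts, img], dim=1)          # (B, C_pts + C_img, K)
    return xyz, fused

xyz, fused = gather_and_fuse(torch.rand(2, 1024, 3), torch.rand(2, 256, 1024),
                             torch.rand(2, 128, 1024), torch.randint(0, 1024, (2, 64)))
```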
- assert self.with_img_roi_head, 'Img roi head must be implemented.' - - x = self.extract_img_feats(img) - proposal_list = self.img_rpn_head.aug_test_rpn(x, img_metas) - - return self.img_roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) - - def aug_test(self, - points=None, - img_metas=None, - imgs=None, - bboxes_2d=None, - rescale=False, - **kwargs): - """Test function with augmentation, stage 2. - - Args: - points (list[list[torch.Tensor]], optional): the outer - list indicates test-time augmentations and the inner - list contains all points in the batch, where each Tensor - should have a shape NxC. Defaults to None. - img_metas (list[list[dict]], optional): the outer list - indicates test-time augs (multiscale, flip, etc.) - and the inner list indicates images in a batch. - Defaults to None. - imgs (list[list[torch.Tensor]], optional): the outer - list indicates test-time augmentations and inner Tensor - should have a shape NxCxHxW, which contains all images - in the batch. Defaults to None. Defaults to None. - bboxes_2d (list[list[torch.Tensor]], optional): - Provided 2d bboxes, not supported yet. Defaults to None. - rescale (bool, optional): Whether or not rescale bboxes. - Defaults to False. - - Returns: - list[dict]: Predicted 3d boxes. - """ - points_cat = [torch.stack(pts) for pts in points] - feats = self.extract_pts_feats(points_cat, img_metas) - - # only support aug_test for one sample - aug_bboxes = [] - for x, pts_cat, img_meta, bbox_2d, img in zip(feats, points_cat, - img_metas, bboxes_2d, - imgs): - - bbox_2d = self.extract_bboxes_2d( - img, img_metas, train=False, bboxes_2d=bbox_2d, **kwargs) - - seeds_3d, seed_3d_features, seed_indices = x - - img_features, masks = self.fusion_layer(img, bbox_2d, seeds_3d, - img_metas) - - inds = sample_valid_seeds(masks, self.num_sampled_seed) - batch_size, img_feat_size = img_features.shape[:2] - pts_feat_size = seed_3d_features.shape[1] - inds_img = inds.view(batch_size, 1, - -1).expand(-1, img_feat_size, -1) - img_features = img_features.gather(-1, inds_img) - inds = inds % inds.shape[1] - inds_seed_xyz = inds.view(batch_size, -1, 1).expand(-1, -1, 3) - seeds_3d = seeds_3d.gather(1, inds_seed_xyz) - inds_seed_feats = inds.view(batch_size, 1, - -1).expand(-1, pts_feat_size, -1) - seed_3d_features = seed_3d_features.gather(-1, inds_seed_feats) - seed_indices = seed_indices.gather(1, inds) - - img_features = self.img_mlp(img_features) - - fused_features = torch.cat([seed_3d_features, img_features], dim=1) - - feat_dict = dict( - seed_points=seeds_3d, - seed_features=fused_features, - seed_indices=seed_indices) - bbox_preds = self.pts_bbox_head_joint(feat_dict, - self.test_cfg.pts.sample_mod) - bbox_list = self.pts_bbox_head_joint.get_bboxes( - pts_cat, bbox_preds, img_metas, rescale=rescale) - - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/imvoxelnet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/imvoxelnet.py deleted file mode 100644 index ca65b337..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/imvoxelnet.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from mmdet3d.core import bbox3d2result, build_prior_generator -from mmdet3d.models.fusion_layers.point_fusion import point_sample -from mmdet.models.detectors import BaseDetector -from ..builder import DETECTORS, build_backbone, build_head, build_neck - - -@DETECTORS.register_module() -class ImVoxelNet(BaseDetector): - r"""`ImVoxelNet `_.""" - - def __init__(self, - backbone, - neck, - neck_3d, - bbox_head, - n_voxels, - anchor_generator, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.backbone = build_backbone(backbone) - self.neck = build_neck(neck) - self.neck_3d = build_neck(neck_3d) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.n_voxels = n_voxels - self.anchor_generator = build_prior_generator(anchor_generator) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img, img_metas): - """Extract 3d features from the backbone -> fpn -> 3d projection. - - Args: - img (torch.Tensor): Input images of shape (N, C_in, H, W). - img_metas (list): Image metas. - - Returns: - torch.Tensor: of shape (N, C_out, N_x, N_y, N_z) - """ - x = self.backbone(img) - x = self.neck(x)[0] - points = self.anchor_generator.grid_anchors( - [self.n_voxels[::-1]], device=img.device)[0][:, :3] - volumes = [] - for feature, img_meta in zip(x, img_metas): - img_scale_factor = ( - points.new_tensor(img_meta['scale_factor'][:2]) - if 'scale_factor' in img_meta.keys() else 1) - img_flip = img_meta['flip'] if 'flip' in img_meta.keys() else False - img_crop_offset = ( - points.new_tensor(img_meta['img_crop_offset']) - if 'img_crop_offset' in img_meta.keys() else 0) - volume = point_sample( - img_meta, - img_features=feature[None, ...], - points=points, - proj_mat=points.new_tensor(img_meta['lidar2img']), - coord_type='LIDAR', - img_scale_factor=img_scale_factor, - img_crop_offset=img_crop_offset, - img_flip=img_flip, - img_pad_shape=img.shape[-2:], - img_shape=img_meta['img_shape'][:2], - aligned=False) - volumes.append( - volume.reshape(self.n_voxels[::-1] + [-1]).permute(3, 2, 1, 0)) - x = torch.stack(volumes) - x = self.neck_3d(x) - return x - - def forward_train(self, img, img_metas, gt_bboxes_3d, gt_labels_3d, - **kwargs): - """Forward of training. - - Args: - img (torch.Tensor): Input images of shape (N, C_in, H, W). - img_metas (list): Image metas. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): gt bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): gt class labels of each batch. - - Returns: - dict[str, torch.Tensor]: A dictionary of loss components. - """ - x = self.extract_feat(img, img_metas) - x = self.bbox_head(x) - losses = self.bbox_head.loss(*x, gt_bboxes_3d, gt_labels_3d, img_metas) - return losses - - def forward_test(self, img, img_metas, **kwargs): - """Forward of testing. - - Args: - img (torch.Tensor): Input images of shape (N, C_in, H, W). - img_metas (list): Image metas. - - Returns: - list[dict]: Predicted 3d boxes. - """ - # not supporting aug_test for now - return self.simple_test(img, img_metas) - - def simple_test(self, img, img_metas): - """Test without augmentations. - - Args: - img (torch.Tensor): Input images of shape (N, C_in, H, W). - img_metas (list): Image metas. - - Returns: - list[dict]: Predicted 3d boxes. 
- """ - x = self.extract_feat(img, img_metas) - x = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes(*x, img_metas) - bbox_results = [ - bbox3d2result(det_bboxes, det_scores, det_labels) - for det_bboxes, det_scores, det_labels in bbox_list - ] - return bbox_results - - def aug_test(self, imgs, img_metas, **kwargs): - """Test with augmentations. - - Args: - imgs (list[torch.Tensor]): Input images of shape (N, C_in, H, W). - img_metas (list): Image metas. - - Returns: - list[dict]: Predicted 3d boxes. - """ - raise NotImplementedError diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/mvx_faster_rcnn.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/mvx_faster_rcnn.py deleted file mode 100644 index 07efad6a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/mvx_faster_rcnn.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from ..builder import DETECTORS -from .mvx_two_stage import MVXTwoStageDetector - - -@DETECTORS.register_module() -class MVXFasterRCNN(MVXTwoStageDetector): - """Multi-modality VoxelNet using Faster R-CNN.""" - - def __init__(self, **kwargs): - super(MVXFasterRCNN, self).__init__(**kwargs) - - -@DETECTORS.register_module() -class DynamicMVXFasterRCNN(MVXTwoStageDetector): - """Multi-modality VoxelNet using Faster R-CNN and dynamic voxelization.""" - - def __init__(self, **kwargs): - super(DynamicMVXFasterRCNN, self).__init__(**kwargs) - - @torch.no_grad() - @force_fp32() - def voxelize(self, points): - """Apply dynamic voxelization to points. - - Args: - points (list[torch.Tensor]): Points of each sample. - - Returns: - tuple[torch.Tensor]: Concatenated points and coordinates. - """ - coors = [] - # dynamic voxelization only provide a coors mapping - for res in points: - res_coors = self.pts_voxel_layer(res) - coors.append(res_coors) - points = torch.cat(points, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - return points, coors_batch - - def extract_pts_feat(self, points, img_feats, img_metas): - """Extract point features.""" - if not self.with_pts_bbox: - return None - voxels, coors = self.voxelize(points) - voxel_features, feature_coors = self.pts_voxel_encoder( - voxels, coors, points, img_feats, img_metas) - batch_size = coors[-1, 0] + 1 - x = self.pts_middle_encoder(voxel_features, feature_coors, batch_size) - x = self.pts_backbone(x) - if self.with_pts_neck: - x = self.pts_neck(x) - return x diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/mvx_two_stage.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/mvx_two_stage.py deleted file mode 100644 index 1eba10df..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/mvx_two_stage.py +++ /dev/null @@ -1,503 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from os import path as osp - -import mmcv -import torch -from mmcv.ops import Voxelization -from mmcv.parallel import DataContainer as DC -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from mmdet3d.core import (Box3DMode, Coord3DMode, bbox3d2result, - merge_aug_bboxes_3d, show_result) -from mmdet.core import multi_apply -from .. 
import builder -from ..builder import DETECTORS -from .base import Base3DDetector - - -@DETECTORS.register_module() -class MVXTwoStageDetector(Base3DDetector): - """Base class of Multi-modality VoxelNet.""" - - def __init__(self, - pts_voxel_layer=None, - pts_voxel_encoder=None, - pts_middle_encoder=None, - pts_fusion_layer=None, - img_backbone=None, - pts_backbone=None, - img_neck=None, - pts_neck=None, - pts_bbox_head=None, - img_roi_head=None, - img_rpn_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(MVXTwoStageDetector, self).__init__(init_cfg=init_cfg) - - if pts_voxel_layer: - self.pts_voxel_layer = Voxelization(**pts_voxel_layer) - if pts_voxel_encoder: - self.pts_voxel_encoder = builder.build_voxel_encoder( - pts_voxel_encoder) - if pts_middle_encoder: - self.pts_middle_encoder = builder.build_middle_encoder( - pts_middle_encoder) - if pts_backbone: - self.pts_backbone = builder.build_backbone(pts_backbone) - if pts_fusion_layer: - self.pts_fusion_layer = builder.build_fusion_layer( - pts_fusion_layer) - if pts_neck is not None: - self.pts_neck = builder.build_neck(pts_neck) - if pts_bbox_head: - pts_train_cfg = train_cfg.pts if train_cfg else None - pts_bbox_head.update(train_cfg=pts_train_cfg) - pts_test_cfg = test_cfg.pts if test_cfg else None - pts_bbox_head.update(test_cfg=pts_test_cfg) - self.pts_bbox_head = builder.build_head(pts_bbox_head) - - if img_backbone: - self.img_backbone = builder.build_backbone(img_backbone) - if img_neck is not None: - self.img_neck = builder.build_neck(img_neck) - if img_rpn_head is not None: - self.img_rpn_head = builder.build_head(img_rpn_head) - if img_roi_head is not None: - self.img_roi_head = builder.build_head(img_roi_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if pretrained is None: - img_pretrained = None - pts_pretrained = None - elif isinstance(pretrained, dict): - img_pretrained = pretrained.get('img', None) - pts_pretrained = pretrained.get('pts', None) - else: - raise ValueError( - f'pretrained should be a dict, got {type(pretrained)}') - - if self.with_img_backbone: - if img_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg.') - self.img_backbone.init_cfg = dict( - type='Pretrained', checkpoint=img_pretrained) - if self.with_img_roi_head: - if img_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg.') - self.img_roi_head.init_cfg = dict( - type='Pretrained', checkpoint=img_pretrained) - if self.with_pts_backbone: - if pts_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg') - self.pts_backbone.init_cfg = dict( - type='Pretrained', checkpoint=pts_pretrained) - - @property - def with_img_shared_head(self): - """bool: Whether the detector has a shared head in image branch.""" - return hasattr(self, - 'img_shared_head') and self.img_shared_head is not None - - @property - def with_pts_bbox(self): - """bool: Whether the detector has a 3D box head.""" - return hasattr(self, - 'pts_bbox_head') and self.pts_bbox_head is not None - - @property - def with_img_bbox(self): - """bool: Whether the detector has a 2D image box head.""" - return hasattr(self, - 'img_bbox_head') and self.img_bbox_head is not None - - @property - def with_img_backbone(self): - """bool: Whether the detector has a 2D image backbone.""" - return hasattr(self, 
'img_backbone') and self.img_backbone is not None - - @property - def with_pts_backbone(self): - """bool: Whether the detector has a 3D backbone.""" - return hasattr(self, 'pts_backbone') and self.pts_backbone is not None - - @property - def with_fusion(self): - """bool: Whether the detector has a fusion layer.""" - return hasattr(self, - 'pts_fusion_layer') and self.fusion_layer is not None - - @property - def with_img_neck(self): - """bool: Whether the detector has a neck in image branch.""" - return hasattr(self, 'img_neck') and self.img_neck is not None - - @property - def with_pts_neck(self): - """bool: Whether the detector has a neck in 3D detector branch.""" - return hasattr(self, 'pts_neck') and self.pts_neck is not None - - @property - def with_img_rpn(self): - """bool: Whether the detector has a 2D RPN in image detector branch.""" - return hasattr(self, 'img_rpn_head') and self.img_rpn_head is not None - - @property - def with_img_roi_head(self): - """bool: Whether the detector has a RoI Head in image branch.""" - return hasattr(self, 'img_roi_head') and self.img_roi_head is not None - - @property - def with_voxel_encoder(self): - """bool: Whether the detector has a voxel encoder.""" - return hasattr(self, - 'voxel_encoder') and self.voxel_encoder is not None - - @property - def with_middle_encoder(self): - """bool: Whether the detector has a middle encoder.""" - return hasattr(self, - 'middle_encoder') and self.middle_encoder is not None - - def extract_img_feat(self, img, img_metas): - """Extract features of images.""" - if self.with_img_backbone and img is not None: - input_shape = img.shape[-2:] - # update real input shape of each single img - for img_meta in img_metas: - img_meta.update(input_shape=input_shape) - - if img.dim() == 5 and img.size(0) == 1: - img.squeeze_() - elif img.dim() == 5 and img.size(0) > 1: - B, N, C, H, W = img.size() - img = img.view(B * N, C, H, W) - img_feats = self.img_backbone(img) - else: - return None - if self.with_img_neck: - img_feats = self.img_neck(img_feats) - return img_feats - - def extract_pts_feat(self, pts, img_feats, img_metas): - """Extract features of points.""" - if not self.with_pts_bbox: - return None - voxels, num_points, coors = self.voxelize(pts) - voxel_features = self.pts_voxel_encoder(voxels, num_points, coors, - img_feats, img_metas) - batch_size = coors[-1, 0] + 1 - x = self.pts_middle_encoder(voxel_features, coors, batch_size) - x = self.pts_backbone(x) - if self.with_pts_neck: - x = self.pts_neck(x) - return x - - def extract_feat(self, points, img, img_metas): - """Extract features from images and points.""" - img_feats = self.extract_img_feat(img, img_metas) - pts_feats = self.extract_pts_feat(points, img_feats, img_metas) - return (img_feats, pts_feats) - - @torch.no_grad() - @force_fp32() - def voxelize(self, points): - """Apply dynamic voxelization to points. - - Args: - points (list[torch.Tensor]): Points of each sample. - - Returns: - tuple[torch.Tensor]: Concatenated points, number of points - per voxel, and coordinates. 
- """ - voxels, coors, num_points = [], [], [] - for res in points: - res_voxels, res_coors, res_num_points = self.pts_voxel_layer(res) - voxels.append(res_voxels) - coors.append(res_coors) - num_points.append(res_num_points) - voxels = torch.cat(voxels, dim=0) - num_points = torch.cat(num_points, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - return voxels, num_points, coors_batch - - def forward_train(self, - points=None, - img_metas=None, - gt_bboxes_3d=None, - gt_labels_3d=None, - gt_labels=None, - gt_bboxes=None, - img=None, - proposals=None, - gt_bboxes_ignore=None): - """Forward training function. - - Args: - points (list[torch.Tensor], optional): Points of each sample. - Defaults to None. - img_metas (list[dict], optional): Meta information of each sample. - Defaults to None. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`], optional): - Ground truth 3D boxes. Defaults to None. - gt_labels_3d (list[torch.Tensor], optional): Ground truth labels - of 3D boxes. Defaults to None. - gt_labels (list[torch.Tensor], optional): Ground truth labels - of 2D boxes in images. Defaults to None. - gt_bboxes (list[torch.Tensor], optional): Ground truth 2D boxes in - images. Defaults to None. - img (torch.Tensor, optional): Images of each sample with shape - (N, C, H, W). Defaults to None. - proposals ([list[torch.Tensor], optional): Predicted proposals - used for training Fast RCNN. Defaults to None. - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - 2D boxes in images to be ignored. Defaults to None. - - Returns: - dict: Losses of different branches. - """ - img_feats, pts_feats = self.extract_feat( - points, img=img, img_metas=img_metas) - losses = dict() - if pts_feats: - losses_pts = self.forward_pts_train(pts_feats, gt_bboxes_3d, - gt_labels_3d, img_metas, - gt_bboxes_ignore) - losses.update(losses_pts) - if img_feats: - losses_img = self.forward_img_train( - img_feats, - img_metas=img_metas, - gt_bboxes=gt_bboxes, - gt_labels=gt_labels, - gt_bboxes_ignore=gt_bboxes_ignore, - proposals=proposals) - losses.update(losses_img) - return losses - - def forward_pts_train(self, - pts_feats, - gt_bboxes_3d, - gt_labels_3d, - img_metas, - gt_bboxes_ignore=None): - """Forward function for point cloud branch. - - Args: - pts_feats (list[torch.Tensor]): Features of point cloud branch - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - gt_labels_3d (list[torch.Tensor]): Ground truth labels for - boxes of each sampole - img_metas (list[dict]): Meta information of samples. - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - - Returns: - dict: Losses of each branch. - """ - outs = self.pts_bbox_head(pts_feats) - loss_inputs = outs + (gt_bboxes_3d, gt_labels_3d, img_metas) - losses = self.pts_bbox_head.loss( - *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - def forward_img_train(self, - x, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - proposals=None, - **kwargs): - """Forward function for image branch. - - This function works similar to the forward function of Faster R-CNN. - - Args: - x (list[torch.Tensor]): Image features of shape (B, C, H, W) - of multiple levels. - img_metas (list[dict]): Meta information of images. - gt_bboxes (list[torch.Tensor]): Ground truth boxes of each image - sample. 
- gt_labels (list[torch.Tensor]): Ground truth labels of boxes. - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - proposals (list[torch.Tensor], optional): Proposals of each sample. - Defaults to None. - - Returns: - dict: Losses of each branch. - """ - losses = dict() - # RPN forward and loss - if self.with_img_rpn: - rpn_outs = self.img_rpn_head(x) - rpn_loss_inputs = rpn_outs + (gt_bboxes, img_metas, - self.train_cfg.img_rpn) - rpn_losses = self.img_rpn_head.loss( - *rpn_loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - losses.update(rpn_losses) - - proposal_cfg = self.train_cfg.get('img_rpn_proposal', - self.test_cfg.img_rpn) - proposal_inputs = rpn_outs + (img_metas, proposal_cfg) - proposal_list = self.img_rpn_head.get_bboxes(*proposal_inputs) - else: - proposal_list = proposals - - # bbox head forward and loss - if self.with_img_bbox: - # bbox head forward and loss - img_roi_losses = self.img_roi_head.forward_train( - x, img_metas, proposal_list, gt_bboxes, gt_labels, - gt_bboxes_ignore, **kwargs) - losses.update(img_roi_losses) - - return losses - - def simple_test_img(self, x, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - if proposals is None: - proposal_list = self.simple_test_rpn(x, img_metas, - self.test_cfg.img_rpn) - else: - proposal_list = proposals - - return self.img_roi_head.simple_test( - x, proposal_list, img_metas, rescale=rescale) - - def simple_test_rpn(self, x, img_metas, rpn_test_cfg): - """RPN test function.""" - rpn_outs = self.img_rpn_head(x) - proposal_inputs = rpn_outs + (img_metas, rpn_test_cfg) - proposal_list = self.img_rpn_head.get_bboxes(*proposal_inputs) - return proposal_list - - def simple_test_pts(self, x, img_metas, rescale=False): - """Test function of point cloud branch.""" - outs = self.pts_bbox_head(x) - bbox_list = self.pts_bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def simple_test(self, points, img_metas, img=None, rescale=False): - """Test function without augmentaiton.""" - img_feats, pts_feats = self.extract_feat( - points, img=img, img_metas=img_metas) - - bbox_list = [dict() for i in range(len(img_metas))] - if pts_feats and self.with_pts_bbox: - bbox_pts = self.simple_test_pts( - pts_feats, img_metas, rescale=rescale) - for result_dict, pts_bbox in zip(bbox_list, bbox_pts): - result_dict['pts_bbox'] = pts_bbox - if img_feats and self.with_img_bbox: - bbox_img = self.simple_test_img( - img_feats, img_metas, rescale=rescale) - for result_dict, img_bbox in zip(bbox_list, bbox_img): - result_dict['img_bbox'] = img_bbox - return bbox_list - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test function with augmentaiton.""" - img_feats, pts_feats = self.extract_feats(points, img_metas, imgs) - - bbox_list = dict() - if pts_feats and self.with_pts_bbox: - bbox_pts = self.aug_test_pts(pts_feats, img_metas, rescale) - bbox_list.update(pts_bbox=bbox_pts) - return [bbox_list] - - def extract_feats(self, points, img_metas, imgs=None): - """Extract point and image features of multiple samples.""" - if imgs is None: - imgs = [None] * len(img_metas) - img_feats, pts_feats = multi_apply(self.extract_feat, points, imgs, - img_metas) - return img_feats, pts_feats - - def aug_test_pts(self, feats, img_metas, rescale=False): - """Test function of point cloud branch with augmentaiton.""" - # only 
support aug_test for one sample - aug_bboxes = [] - for x, img_meta in zip(feats, img_metas): - outs = self.pts_bbox_head(x) - bbox_list = self.pts_bbox_head.get_bboxes( - *outs, img_meta, rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.pts_bbox_head.test_cfg) - return merged_bboxes - - def show_results(self, data, result, out_dir): - """Results visualization. - - Args: - data (dict): Input points and the information of the sample. - result (dict): Prediction results. - out_dir (str): Output directory of visualization result. - """ - for batch_id in range(len(result)): - if isinstance(data['points'][0], DC): - points = data['points'][0]._data[0][batch_id].numpy() - elif mmcv.is_list_of(data['points'][0], torch.Tensor): - points = data['points'][0][batch_id] - else: - ValueError(f"Unsupported data type {type(data['points'][0])} " - f'for visualization!') - if isinstance(data['img_metas'][0], DC): - pts_filename = data['img_metas'][0]._data[0][batch_id][ - 'pts_filename'] - box_mode_3d = data['img_metas'][0]._data[0][batch_id][ - 'box_mode_3d'] - elif mmcv.is_list_of(data['img_metas'][0], dict): - pts_filename = data['img_metas'][0][batch_id]['pts_filename'] - box_mode_3d = data['img_metas'][0][batch_id]['box_mode_3d'] - else: - ValueError( - f"Unsupported data type {type(data['img_metas'][0])} " - f'for visualization!') - file_name = osp.split(pts_filename)[-1].split('.')[0] - - assert out_dir is not None, 'Expect out_dir, got none.' - inds = result[batch_id]['pts_bbox']['scores_3d'] > 0.1 - pred_bboxes = result[batch_id]['pts_bbox']['boxes_3d'][inds] - - # for now we convert points and bbox into depth mode - if (box_mode_3d == Box3DMode.CAM) or (box_mode_3d - == Box3DMode.LIDAR): - points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - pred_bboxes = Box3DMode.convert(pred_bboxes, box_mode_3d, - Box3DMode.DEPTH) - elif box_mode_3d != Box3DMode.DEPTH: - ValueError( - f'Unsupported box_mode_3d {box_mode_3d} for conversion!') - - pred_bboxes = pred_bboxes.tensor.cpu().numpy() - show_result(points, None, pred_bboxes, out_dir, file_name) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/parta2.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/parta2.py deleted file mode 100644 index 459a9158..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/parta2.py +++ /dev/null @@ -1,151 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import Voxelization -from torch.nn import functional as F - -from .. import builder -from ..builder import DETECTORS -from .two_stage import TwoStage3DDetector - - -@DETECTORS.register_module() -class PartA2(TwoStage3DDetector): - r"""Part-A2 detector. 
- - Please refer to the `paper `_ - """ - - def __init__(self, - voxel_layer, - voxel_encoder, - middle_encoder, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(PartA2, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - self.voxel_layer = Voxelization(**voxel_layer) - self.voxel_encoder = builder.build_voxel_encoder(voxel_encoder) - self.middle_encoder = builder.build_middle_encoder(middle_encoder) - - def extract_feat(self, points, img_metas): - """Extract features from points.""" - voxel_dict = self.voxelize(points) - voxel_features = self.voxel_encoder(voxel_dict['voxels'], - voxel_dict['num_points'], - voxel_dict['coors']) - batch_size = voxel_dict['coors'][-1, 0].item() + 1 - feats_dict = self.middle_encoder(voxel_features, voxel_dict['coors'], - batch_size) - x = self.backbone(feats_dict['spatial_features']) - if self.with_neck: - neck_feats = self.neck(x) - feats_dict.update({'neck_feats': neck_feats}) - return feats_dict, voxel_dict - - @torch.no_grad() - def voxelize(self, points): - """Apply hard voxelization to points.""" - voxels, coors, num_points, voxel_centers = [], [], [], [] - for res in points: - res_voxels, res_coors, res_num_points = self.voxel_layer(res) - res_voxel_centers = ( - res_coors[:, [2, 1, 0]] + 0.5) * res_voxels.new_tensor( - self.voxel_layer.voxel_size) + res_voxels.new_tensor( - self.voxel_layer.point_cloud_range[0:3]) - voxels.append(res_voxels) - coors.append(res_coors) - num_points.append(res_num_points) - voxel_centers.append(res_voxel_centers) - - voxels = torch.cat(voxels, dim=0) - num_points = torch.cat(num_points, dim=0) - voxel_centers = torch.cat(voxel_centers, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - - voxel_dict = dict( - voxels=voxels, - num_points=num_points, - coors=coors_batch, - voxel_centers=voxel_centers) - return voxel_dict - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - gt_bboxes_ignore=None, - proposals=None): - """Training forward function. - - Args: - points (list[torch.Tensor]): Point cloud of each sample. - img_metas (list[dict]): Meta information of each sample - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - gt_labels_3d (list[torch.Tensor]): Ground truth labels for - boxes of each sampole - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - - Returns: - dict: Losses of each branch. 
- """ - feats_dict, voxels_dict = self.extract_feat(points, img_metas) - - losses = dict() - - if self.with_rpn: - rpn_outs = self.rpn_head(feats_dict['neck_feats']) - rpn_loss_inputs = rpn_outs + (gt_bboxes_3d, gt_labels_3d, - img_metas) - rpn_losses = self.rpn_head.loss( - *rpn_loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - losses.update(rpn_losses) - - proposal_cfg = self.train_cfg.get('rpn_proposal', - self.test_cfg.rpn) - proposal_inputs = rpn_outs + (img_metas, proposal_cfg) - proposal_list = self.rpn_head.get_bboxes(*proposal_inputs) - else: - proposal_list = proposals - - roi_losses = self.roi_head.forward_train(feats_dict, voxels_dict, - img_metas, proposal_list, - gt_bboxes_3d, gt_labels_3d) - - losses.update(roi_losses) - - return losses - - def simple_test(self, points, img_metas, proposals=None, rescale=False): - """Test function without augmentaiton.""" - feats_dict, voxels_dict = self.extract_feat(points, img_metas) - - if self.with_rpn: - rpn_outs = self.rpn_head(feats_dict['neck_feats']) - proposal_cfg = self.test_cfg.rpn - bbox_inputs = rpn_outs + (img_metas, proposal_cfg) - proposal_list = self.rpn_head.get_bboxes(*bbox_inputs) - else: - proposal_list = proposals - - return self.roi_head.simple_test(feats_dict, voxels_dict, img_metas, - proposal_list) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/point_rcnn.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/point_rcnn.py deleted file mode 100644 index 31c86938..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/point_rcnn.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import DETECTORS -from .two_stage import TwoStage3DDetector - - -@DETECTORS.register_module() -class PointRCNN(TwoStage3DDetector): - r"""PointRCNN detector. - - Please refer to the `PointRCNN `_ - - Args: - backbone (dict): Config dict of detector's backbone. - neck (dict, optional): Config dict of neck. Defaults to None. - rpn_head (dict, optional): Config of RPN head. Defaults to None. - roi_head (dict, optional): Config of ROI head. Defaults to None. - train_cfg (dict, optional): Train configs. Defaults to None. - test_cfg (dict, optional): Test configs. Defaults to None. - pretrained (str, optional): Model pretrained path. Defaults to None. - init_cfg (dict, optional): Config of initialization. Defaults to None. - """ - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(PointRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def extract_feat(self, points): - """Directly extract features from the backbone+neck. - - Args: - points (torch.Tensor): Input points. - - Returns: - dict: Features from the backbone+neck - """ - x = self.backbone(points) - - if self.with_neck: - x = self.neck(x) - return x - - def forward_train(self, points, img_metas, gt_bboxes_3d, gt_labels_3d): - """Forward of training. - - Args: - points (list[torch.Tensor]): Points of each batch. - img_metas (list[dict]): Meta information of each sample. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): gt bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): gt class labels of each batch. - - Returns: - dict: Losses. 
- """ - losses = dict() - points_cat = torch.stack(points) - x = self.extract_feat(points_cat) - - # features for rcnn - backbone_feats = x['fp_features'].clone() - backbone_xyz = x['fp_xyz'].clone() - rcnn_feats = {'features': backbone_feats, 'points': backbone_xyz} - - bbox_preds, cls_preds = self.rpn_head(x) - - rpn_loss = self.rpn_head.loss( - bbox_preds=bbox_preds, - cls_preds=cls_preds, - points=points, - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - img_metas=img_metas) - losses.update(rpn_loss) - - bbox_list = self.rpn_head.get_bboxes(points_cat, bbox_preds, cls_preds, - img_metas) - proposal_list = [ - dict( - boxes_3d=bboxes, - scores_3d=scores, - labels_3d=labels, - cls_preds=preds_cls) - for bboxes, scores, labels, preds_cls in bbox_list - ] - rcnn_feats.update({'points_cls_preds': cls_preds}) - - roi_losses = self.roi_head.forward_train(rcnn_feats, img_metas, - proposal_list, gt_bboxes_3d, - gt_labels_3d) - losses.update(roi_losses) - - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Forward of testing. - - Args: - points (list[torch.Tensor]): Points of each sample. - img_metas (list[dict]): Image metas. - imgs (list[torch.Tensor], optional): Images of each sample. - Defaults to None. - rescale (bool, optional): Whether to rescale results. - Defaults to False. - - Returns: - list: Predicted 3d boxes. - """ - points_cat = torch.stack(points) - - x = self.extract_feat(points_cat) - # features for rcnn - backbone_feats = x['fp_features'].clone() - backbone_xyz = x['fp_xyz'].clone() - rcnn_feats = {'features': backbone_feats, 'points': backbone_xyz} - bbox_preds, cls_preds = self.rpn_head(x) - rcnn_feats.update({'points_cls_preds': cls_preds}) - - bbox_list = self.rpn_head.get_bboxes( - points_cat, bbox_preds, cls_preds, img_metas, rescale=rescale) - - proposal_list = [ - dict( - boxes_3d=bboxes, - scores_3d=scores, - labels_3d=labels, - cls_preds=preds_cls) - for bboxes, scores, labels, preds_cls in bbox_list - ] - bbox_results = self.roi_head.simple_test(rcnn_feats, img_metas, - proposal_list) - - return bbox_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/sassd.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/sassd.py deleted file mode 100644 index 2151c4e0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/sassd.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import Voxelization -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from mmdet.models.builder import DETECTORS -from .. 
import builder -from .single_stage import SingleStage3DDetector - - -@DETECTORS.register_module() -class SASSD(SingleStage3DDetector): - r"""`SASSD ` _ for 3D detection.""" - - def __init__(self, - voxel_layer, - voxel_encoder, - middle_encoder, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super(SASSD, self).__init__( - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg, - pretrained=pretrained) - - self.voxel_layer = Voxelization(**voxel_layer) - self.voxel_encoder = builder.build_voxel_encoder(voxel_encoder) - self.middle_encoder = builder.build_middle_encoder(middle_encoder) - - def extract_feat(self, points, img_metas=None, test_mode=False): - """Extract features from points.""" - voxels, num_points, coors = self.voxelize(points) - voxel_features = self.voxel_encoder(voxels, num_points, coors) - batch_size = coors[-1, 0].item() + 1 - x, point_misc = self.middle_encoder(voxel_features, coors, batch_size, - test_mode) - x = self.backbone(x) - if self.with_neck: - x = self.neck(x) - return x, point_misc - - @torch.no_grad() - @force_fp32() - def voxelize(self, points): - """Apply hard voxelization to points.""" - voxels, coors, num_points = [], [], [] - for res in points: - res_voxels, res_coors, res_num_points = self.voxel_layer(res) - voxels.append(res_voxels) - coors.append(res_coors) - num_points.append(res_num_points) - voxels = torch.cat(voxels, dim=0) - num_points = torch.cat(num_points, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - return voxels, num_points, coors_batch - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - gt_bboxes_ignore=None): - """Training forward function. - - Args: - points (list[torch.Tensor]): Point cloud of each sample. - img_metas (list[dict]): Meta information of each sample - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - gt_labels_3d (list[torch.Tensor]): Ground truth labels for - boxes of each sampole - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - - Returns: - dict: Losses of each branch. 
- """ - - x, point_misc = self.extract_feat(points, img_metas, test_mode=False) - aux_loss = self.middle_encoder.aux_loss(*point_misc, gt_bboxes_3d) - - outs = self.bbox_head(x) - loss_inputs = outs + (gt_bboxes_3d, gt_labels_3d, img_metas) - losses = self.bbox_head.loss( - *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - losses.update(aux_loss) - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Test function without augmentaiton.""" - x, _ = self.extract_feat(points, img_metas, test_mode=True) - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test function with augmentaiton.""" - feats = self.extract_feats(points, img_metas, test_mode=True) - - # only support aug_test for one sample - aug_bboxes = [] - for x, img_meta in zip(feats, img_metas): - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, img_meta, rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/single_stage.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/single_stage.py deleted file mode 100644 index 11f84799..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/single_stage.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import Base3DDetector - - -@DETECTORS.register_module() -class SingleStage3DDetector(Base3DDetector): - """SingleStage3DDetector. - - This class serves as a base class for single-stage 3D detectors. - - Args: - backbone (dict): Config dict of detector's backbone. - neck (dict, optional): Config dict of neck. Defaults to None. - bbox_head (dict, optional): Config dict of box head. Defaults to None. - train_cfg (dict, optional): Config dict of training hyper-parameters. - Defaults to None. - test_cfg (dict, optional): Config dict of test hyper-parameters. - Defaults to None. - pretrained (str, optional): Path of pretrained models. - Defaults to None. - """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super(SingleStage3DDetector, self).__init__(init_cfg) - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def forward_dummy(self, points): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - x = self.extract_feat(points) - try: - sample_mod = self.train_cfg.sample_mod - outs = self.bbox_head(x, sample_mod) - except AttributeError: - outs = self.bbox_head(x) - return outs - - def extract_feat(self, points, img_metas=None): - """Directly extract features from the backbone+neck. 
- - Args: - points (torch.Tensor): Input points. - """ - x = self.backbone(points) - if self.with_neck: - x = self.neck(x) - return x - - def extract_feats(self, points, img_metas): - """Extract features of multiple samples.""" - return [ - self.extract_feat(pts, img_meta) - for pts, img_meta in zip(points, img_metas) - ] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/single_stage_mono3d.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/single_stage_mono3d.py deleted file mode 100644 index 464fab04..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/single_stage_mono3d.py +++ /dev/null @@ -1,250 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC - -from mmdet3d.core import (CameraInstance3DBoxes, bbox3d2result, - show_multi_modality_result) -from mmdet.models.detectors import SingleStageDetector -from ..builder import DETECTORS, build_backbone, build_head, build_neck - - -@DETECTORS.register_module() -class SingleStageMono3DDetector(SingleStageDetector): - """Base class for monocular 3D single-stage detectors. - - Single-stage detectors directly and densely predict bounding boxes on the - output features of the backbone+neck. - """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(SingleStageDetector, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feats(self, imgs): - """Directly extract features from the backbone+neck.""" - assert isinstance(imgs, list) - return [self.extract_feat(img) for img in imgs] - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels=None, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_3d (list[Tensor]): Each item are the 3D truth boxes for - each image in [x, y, z, x_size, y_size, z_size, yaw, vx, vy] - format. - gt_labels_3d (list[Tensor]): 3D class indices corresponding to - each box. - centers2d (list[Tensor]): Projected 3D centers onto 2D images. - depths (list[Tensor]): Depth of projected centers on 2D images. - attr_labels (list[Tensor], optional): Attribute indices - corresponding to each box - gt_bboxes_ignore (list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - x = self.extract_feat(img) - losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes, - gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, - attr_labels, gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - bbox_outputs = self.bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - - if self.bbox_head.pred_bbox2d: - from mmdet.core import bbox2result - bbox2d_img = [ - bbox2result(bboxes2d, labels, self.bbox_head.num_classes) - for bboxes, scores, labels, attrs, bboxes2d in bbox_outputs - ] - bbox_outputs = [bbox_outputs[0][:-1]] - - bbox_img = [ - bbox3d2result(bboxes, scores, labels, attrs) - for bboxes, scores, labels, attrs in bbox_outputs - ] - - bbox_list = [dict() for i in range(len(img_metas))] - for result_dict, img_bbox in zip(bbox_list, bbox_img): - result_dict['img_bbox'] = img_bbox - if self.bbox_head.pred_bbox2d: - for result_dict, img_bbox2d in zip(bbox_list, bbox2d_img): - result_dict['img_bbox2d'] = img_bbox2d - return bbox_list - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation.""" - feats = self.extract_feats(imgs) - - # only support aug_test for one sample - outs_list = [self.bbox_head(x) for x in feats] - for i, img_meta in enumerate(img_metas): - if img_meta[0]['pcd_horizontal_flip']: - for j in range(len(outs_list[i])): # for each prediction - if outs_list[i][j][0] is None: - continue - for k in range(len(outs_list[i][j])): - # every stride of featmap - outs_list[i][j][k] = torch.flip( - outs_list[i][j][k], dims=[3]) - reg = outs_list[i][1] - for reg_feat in reg: - # offset_x - reg_feat[:, 0, :, :] = 1 - reg_feat[:, 0, :, :] - # velo_x - if self.bbox_head.pred_velo: - reg_feat[:, 7, :, :] = -reg_feat[:, 7, :, :] - # rotation - reg_feat[:, 6, :, :] = -reg_feat[:, 6, :, :] + np.pi - - merged_outs = [] - for i in range(len(outs_list[0])): # for each prediction - merged_feats = [] - for j in range(len(outs_list[0][i])): - if outs_list[0][i][0] is None: - merged_feats.append(None) - continue - # for each stride of featmap - avg_feats = torch.mean( - torch.cat([x[i][j] for x in outs_list]), - dim=0, - keepdim=True) - if i == 1: # regression predictions - # rot/velo/2d det keeps the original - avg_feats[:, 6:, :, :] = \ - outs_list[0][i][j][:, 6:, :, :] - if i == 2: - # dir_cls keeps the original - avg_feats = outs_list[0][i][j] - merged_feats.append(avg_feats) - merged_outs.append(merged_feats) - merged_outs = tuple(merged_outs) - - bbox_outputs = self.bbox_head.get_bboxes( - *merged_outs, img_metas[0], rescale=rescale) - if self.bbox_head.pred_bbox2d: - from mmdet.core import bbox2result - bbox2d_img = [ - bbox2result(bboxes2d, labels, self.bbox_head.num_classes) - for bboxes, scores, labels, attrs, bboxes2d in bbox_outputs - ] - bbox_outputs = [bbox_outputs[0][:-1]] - - bbox_img = [ - bbox3d2result(bboxes, scores, labels, attrs) - for bboxes, scores, labels, attrs in bbox_outputs - ] - - bbox_list = dict() - bbox_list.update(img_bbox=bbox_img[0]) - if self.bbox_head.pred_bbox2d: - 
bbox_list.update(img_bbox2d=bbox2d_img[0]) - - return [bbox_list] - - def show_results(self, data, result, out_dir, show=False, score_thr=None): - """Results visualization. - - Args: - data (list[dict]): Input images and the information of the sample. - result (list[dict]): Prediction results. - out_dir (str): Output directory of visualization result. - show (bool, optional): Determines whether you are - going to show result by open3d. - Defaults to False. - TODO: implement score_thr of single_stage_mono3d. - score_thr (float, optional): Score threshold of bounding boxes. - Default to None. - Not implemented yet, but it is here for unification. - """ - for batch_id in range(len(result)): - if isinstance(data['img_metas'][0], DC): - img_filename = data['img_metas'][0]._data[0][batch_id][ - 'filename'] - cam2img = data['img_metas'][0]._data[0][batch_id]['cam2img'] - elif mmcv.is_list_of(data['img_metas'][0], dict): - img_filename = data['img_metas'][0][batch_id]['filename'] - cam2img = data['img_metas'][0][batch_id]['cam2img'] - else: - ValueError( - f"Unsupported data type {type(data['img_metas'][0])} " - f'for visualization!') - img = mmcv.imread(img_filename) - file_name = osp.split(img_filename)[-1].split('.')[0] - - assert out_dir is not None, 'Expect out_dir, got none.' - - pred_bboxes = result[batch_id]['img_bbox']['boxes_3d'] - assert isinstance(pred_bboxes, CameraInstance3DBoxes), \ - f'unsupported predicted bbox type {type(pred_bboxes)}' - - show_multi_modality_result( - img, - None, - pred_bboxes, - cam2img, - out_dir, - file_name, - 'camera', - show=show) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/smoke_mono3d.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/smoke_mono3d.py deleted file mode 100644 index 241187fa..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/smoke_mono3d.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage_mono3d import SingleStageMono3DDetector - - -@DETECTORS.register_module() -class SMOKEMono3D(SingleStageMono3DDetector): - r"""SMOKE `_ for monocular 3D object - detection. - - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(SMOKEMono3D, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/ssd3dnet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/ssd3dnet.py deleted file mode 100644 index fd5e310c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/ssd3dnet.py +++ /dev/null @@ -1,26 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .votenet import VoteNet - - -@DETECTORS.register_module() -class SSD3DNet(VoteNet): - """3DSSDNet model. 
- - https://arxiv.org/abs/2002.10187.pdf - """ - - def __init__(self, - backbone, - bbox_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super(SSD3DNet, self).__init__( - backbone=backbone, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg, - pretrained=pretrained) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/two_stage.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/two_stage.py deleted file mode 100644 index 707f706d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/two_stage.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -from mmdet.models import TwoStageDetector -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import Base3DDetector - - -@DETECTORS.register_module() -class TwoStage3DDetector(Base3DDetector, TwoStageDetector): - """Base class of two-stage 3D detector. - - It inherits original ``:class:TwoStageDetector`` and - ``:class:Base3DDetector``. This class could serve as a base class for all - two-stage 3D detectors. - """ - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(TwoStageDetector, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if neck is not None: - self.neck = build_neck(neck) - - if rpn_head is not None: - rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None - rpn_head_ = rpn_head.copy() - rpn_head_.update(train_cfg=rpn_train_cfg, test_cfg=test_cfg.rpn) - self.rpn_head = build_head(rpn_head_) - - if roi_head is not None: - # update train and test cfg here for now - # TODO: refactor assigner & sampler - rcnn_train_cfg = train_cfg.rcnn if train_cfg is not None else None - roi_head.update(train_cfg=rcnn_train_cfg) - roi_head.update(test_cfg=test_cfg.rcnn) - roi_head.pretrained = pretrained - self.roi_head = build_head(roi_head) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/votenet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/votenet.py deleted file mode 100644 index 41e41449..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/votenet.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from ..builder import DETECTORS -from .single_stage import SingleStage3DDetector - - -@DETECTORS.register_module() -class VoteNet(SingleStage3DDetector): - r"""`VoteNet `_ for 3D detection.""" - - def __init__(self, - backbone, - bbox_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super(VoteNet, self).__init__( - backbone=backbone, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=None, - pretrained=pretrained) - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - gt_bboxes_ignore=None): - """Forward of training. - - Args: - points (list[torch.Tensor]): Points of each batch. - img_metas (list): Image metas. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): gt bboxes of each batch. 
- gt_labels_3d (list[torch.Tensor]): gt class labels of each batch. - pts_semantic_mask (list[torch.Tensor]): point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): point-wise instance - label of each batch. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict: Losses. - """ - points_cat = torch.stack(points) - - x = self.extract_feat(points_cat) - bbox_preds = self.bbox_head(x, self.train_cfg.sample_mod) - loss_inputs = (points, gt_bboxes_3d, gt_labels_3d, pts_semantic_mask, - pts_instance_mask, img_metas) - losses = self.bbox_head.loss( - bbox_preds, *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Forward of testing. - - Args: - points (list[torch.Tensor]): Points of each sample. - img_metas (list): Image metas. - rescale (bool): Whether to rescale results. - - Returns: - list: Predicted 3d boxes. - """ - points_cat = torch.stack(points) - - x = self.extract_feat(points_cat) - bbox_preds = self.bbox_head(x, self.test_cfg.sample_mod) - bbox_list = self.bbox_head.get_bboxes( - points_cat, bbox_preds, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test with augmentation.""" - points_cat = [torch.stack(pts) for pts in points] - feats = self.extract_feats(points_cat, img_metas) - - # only support aug_test for one sample - aug_bboxes = [] - for x, pts_cat, img_meta in zip(feats, points_cat, img_metas): - bbox_preds = self.bbox_head(x, self.test_cfg.sample_mod) - bbox_list = self.bbox_head.get_bboxes( - pts_cat, bbox_preds, img_meta, rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/voxelnet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/voxelnet.py deleted file mode 100644 index 9276b7d8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/detectors/voxelnet.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import Voxelization -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from .. 
import builder -from ..builder import DETECTORS -from .single_stage import SingleStage3DDetector - - -@DETECTORS.register_module() -class VoxelNet(SingleStage3DDetector): - r"""`VoxelNet `_ for 3D detection.""" - - def __init__(self, - voxel_layer, - voxel_encoder, - middle_encoder, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super(VoxelNet, self).__init__( - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg, - pretrained=pretrained) - self.voxel_layer = Voxelization(**voxel_layer) - self.voxel_encoder = builder.build_voxel_encoder(voxel_encoder) - self.middle_encoder = builder.build_middle_encoder(middle_encoder) - - def extract_feat(self, points, img_metas=None): - """Extract features from points.""" - voxels, num_points, coors = self.voxelize(points) - voxel_features = self.voxel_encoder(voxels, num_points, coors) - batch_size = coors[-1, 0].item() + 1 - x = self.middle_encoder(voxel_features, coors, batch_size) - x = self.backbone(x) - if self.with_neck: - x = self.neck(x) - return x - - @torch.no_grad() - @force_fp32() - def voxelize(self, points): - """Apply hard voxelization to points.""" - voxels, coors, num_points = [], [], [] - for res in points: - res_voxels, res_coors, res_num_points = self.voxel_layer(res) - voxels.append(res_voxels) - coors.append(res_coors) - num_points.append(res_num_points) - voxels = torch.cat(voxels, dim=0) - num_points = torch.cat(num_points, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - return voxels, num_points, coors_batch - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - gt_bboxes_ignore=None): - """Training forward function. - - Args: - points (list[torch.Tensor]): Point cloud of each sample. - img_metas (list[dict]): Meta information of each sample - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - gt_labels_3d (list[torch.Tensor]): Ground truth labels for - boxes of each sampole - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - - Returns: - dict: Losses of each branch. 
- """ - x = self.extract_feat(points, img_metas) - outs = self.bbox_head(x) - loss_inputs = outs + (gt_bboxes_3d, gt_labels_3d, img_metas) - losses = self.bbox_head.loss( - *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Test function without augmentaiton.""" - x = self.extract_feat(points, img_metas) - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test function with augmentaiton.""" - feats = self.extract_feats(points, img_metas) - - # only support aug_test for one sample - aug_bboxes = [] - for x, img_meta in zip(feats, img_metas): - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, img_meta, rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/__init__.py deleted file mode 100644 index 6df4741d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .coord_transform import (apply_3d_transformation, bbox_2d_transform, - coord_2d_transform) -from .point_fusion import PointFusion -from .vote_fusion import VoteFusion - -__all__ = [ - 'PointFusion', 'VoteFusion', 'apply_3d_transformation', - 'bbox_2d_transform', 'coord_2d_transform' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/coord_transform.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/coord_transform.py deleted file mode 100644 index 9c6929b0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/coord_transform.py +++ /dev/null @@ -1,222 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -from functools import partial - -import torch - -from mmdet3d.core.points import get_points_type - - -def apply_3d_transformation(pcd, coord_type, img_meta, reverse=False): - """Apply transformation to input point cloud. - - Args: - pcd (torch.Tensor): The point cloud to be transformed. - coord_type (str): 'DEPTH' or 'CAMERA' or 'LIDAR'. - img_meta(dict): Meta info regarding data transformation. - reverse (bool): Reversed transformation or not. - - Note: - The elements in img_meta['transformation_3d_flow']: - "T" stands for translation; - "S" stands for scale; - "R" stands for rotation; - "HF" stands for horizontal flip; - "VF" stands for vertical flip. - - Returns: - torch.Tensor: The transformed point cloud. - """ - - dtype = pcd.dtype - device = pcd.device - - pcd_rotate_mat = ( - torch.tensor(img_meta['pcd_rotation'], dtype=dtype, device=device) - if 'pcd_rotation' in img_meta else torch.eye( - 3, dtype=dtype, device=device)) - - pcd_scale_factor = ( - img_meta['pcd_scale_factor'] if 'pcd_scale_factor' in img_meta else 1.) 
- - pcd_trans_factor = ( - torch.tensor(img_meta['pcd_trans'], dtype=dtype, device=device) - if 'pcd_trans' in img_meta else torch.zeros( - (3), dtype=dtype, device=device)) - - pcd_horizontal_flip = img_meta[ - 'pcd_horizontal_flip'] if 'pcd_horizontal_flip' in \ - img_meta else False - - pcd_vertical_flip = img_meta[ - 'pcd_vertical_flip'] if 'pcd_vertical_flip' in \ - img_meta else False - - flow = img_meta['transformation_3d_flow'] \ - if 'transformation_3d_flow' in img_meta else [] - - pcd = pcd.clone() # prevent inplace modification - pcd = get_points_type(coord_type)(pcd) - - horizontal_flip_func = partial(pcd.flip, bev_direction='horizontal') \ - if pcd_horizontal_flip else lambda: None - vertical_flip_func = partial(pcd.flip, bev_direction='vertical') \ - if pcd_vertical_flip else lambda: None - if reverse: - scale_func = partial(pcd.scale, scale_factor=1.0 / pcd_scale_factor) - translate_func = partial(pcd.translate, trans_vector=-pcd_trans_factor) - # pcd_rotate_mat @ pcd_rotate_mat.inverse() is not - # exactly an identity matrix - # use angle to create the inverse rot matrix neither. - # rotate_func = partial(pcd.rotate, rotation=pcd_rotate_mat.inverse()) - - device = pcd_rotate_mat.device - rotation_ = pcd_rotate_mat.cpu().inverse().to(device) - rotate_func = partial(pcd.rotate, rotation=rotation_) - - # reverse the pipeline - flow = flow[::-1] - else: - scale_func = partial(pcd.scale, scale_factor=pcd_scale_factor) - translate_func = partial(pcd.translate, trans_vector=pcd_trans_factor) - rotate_func = partial(pcd.rotate, rotation=pcd_rotate_mat) - - flow_mapping = { - 'T': translate_func, - 'S': scale_func, - 'R': rotate_func, - 'HF': horizontal_flip_func, - 'VF': vertical_flip_func - } - for op in flow: - assert op in flow_mapping, f'This 3D data '\ - f'transformation op ({op}) is not supported' - func = flow_mapping[op] - func() - - return pcd.coord - - -def extract_2d_info(img_meta, tensor): - """Extract image augmentation information from img_meta. - - Args: - img_meta(dict): Meta info regarding data transformation. - tensor(torch.Tensor): Input tensor used to create new ones. - - Returns: - (int, int, int, int, torch.Tensor, bool, torch.Tensor): - The extracted information. - """ - img_shape = img_meta['img_shape'] - ori_shape = img_meta['ori_shape'] - img_h, img_w, _ = img_shape - ori_h, ori_w, _ = ori_shape - - img_scale_factor = ( - tensor.new_tensor(img_meta['scale_factor'][:2]) - if 'scale_factor' in img_meta else tensor.new_tensor([1.0, 1.0])) - img_flip = img_meta['flip'] if 'flip' in img_meta else False - img_crop_offset = ( - tensor.new_tensor(img_meta['img_crop_offset']) - if 'img_crop_offset' in img_meta else tensor.new_tensor([0.0, 0.0])) - - return (img_h, img_w, ori_h, ori_w, img_scale_factor, img_flip, - img_crop_offset) - - -def bbox_2d_transform(img_meta, bbox_2d, ori2new): - """Transform 2d bbox according to img_meta. - - Args: - img_meta(dict): Meta info regarding data transformation. - bbox_2d (torch.Tensor): Shape (..., >4) - The input 2d bboxes to transform. - ori2new (bool): Origin img coord system to new or not. - - Returns: - torch.Tensor: The transformed 2d bboxes. 
- """ - - img_h, img_w, ori_h, ori_w, img_scale_factor, img_flip, \ - img_crop_offset = extract_2d_info(img_meta, bbox_2d) - - bbox_2d_new = bbox_2d.clone() - - if ori2new: - bbox_2d_new[:, 0] = bbox_2d_new[:, 0] * img_scale_factor[0] - bbox_2d_new[:, 2] = bbox_2d_new[:, 2] * img_scale_factor[0] - bbox_2d_new[:, 1] = bbox_2d_new[:, 1] * img_scale_factor[1] - bbox_2d_new[:, 3] = bbox_2d_new[:, 3] * img_scale_factor[1] - - bbox_2d_new[:, 0] = bbox_2d_new[:, 0] + img_crop_offset[0] - bbox_2d_new[:, 2] = bbox_2d_new[:, 2] + img_crop_offset[0] - bbox_2d_new[:, 1] = bbox_2d_new[:, 1] + img_crop_offset[1] - bbox_2d_new[:, 3] = bbox_2d_new[:, 3] + img_crop_offset[1] - - if img_flip: - bbox_2d_r = img_w - bbox_2d_new[:, 0] - bbox_2d_l = img_w - bbox_2d_new[:, 2] - bbox_2d_new[:, 0] = bbox_2d_l - bbox_2d_new[:, 2] = bbox_2d_r - else: - if img_flip: - bbox_2d_r = img_w - bbox_2d_new[:, 0] - bbox_2d_l = img_w - bbox_2d_new[:, 2] - bbox_2d_new[:, 0] = bbox_2d_l - bbox_2d_new[:, 2] = bbox_2d_r - - bbox_2d_new[:, 0] = bbox_2d_new[:, 0] - img_crop_offset[0] - bbox_2d_new[:, 2] = bbox_2d_new[:, 2] - img_crop_offset[0] - bbox_2d_new[:, 1] = bbox_2d_new[:, 1] - img_crop_offset[1] - bbox_2d_new[:, 3] = bbox_2d_new[:, 3] - img_crop_offset[1] - - bbox_2d_new[:, 0] = bbox_2d_new[:, 0] / img_scale_factor[0] - bbox_2d_new[:, 2] = bbox_2d_new[:, 2] / img_scale_factor[0] - bbox_2d_new[:, 1] = bbox_2d_new[:, 1] / img_scale_factor[1] - bbox_2d_new[:, 3] = bbox_2d_new[:, 3] / img_scale_factor[1] - - return bbox_2d_new - - -def coord_2d_transform(img_meta, coord_2d, ori2new): - """Transform 2d pixel coordinates according to img_meta. - - Args: - img_meta(dict): Meta info regarding data transformation. - coord_2d (torch.Tensor): Shape (..., 2) - The input 2d coords to transform. - ori2new (bool): Origin img coord system to new or not. - - Returns: - torch.Tensor: The transformed 2d coordinates. - """ - - img_h, img_w, ori_h, ori_w, img_scale_factor, img_flip, \ - img_crop_offset = extract_2d_info(img_meta, coord_2d) - - coord_2d_new = coord_2d.clone() - - if ori2new: - # TODO here we assume this order of transformation - coord_2d_new[..., 0] = coord_2d_new[..., 0] * img_scale_factor[0] - coord_2d_new[..., 1] = coord_2d_new[..., 1] * img_scale_factor[1] - - coord_2d_new[..., 0] += img_crop_offset[0] - coord_2d_new[..., 1] += img_crop_offset[1] - - # flip uv coordinates and bbox - if img_flip: - coord_2d_new[..., 0] = img_w - coord_2d_new[..., 0] - else: - if img_flip: - coord_2d_new[..., 0] = img_w - coord_2d_new[..., 0] - - coord_2d_new[..., 0] -= img_crop_offset[0] - coord_2d_new[..., 1] -= img_crop_offset[1] - - coord_2d_new[..., 0] = coord_2d_new[..., 0] / img_scale_factor[0] - coord_2d_new[..., 1] = coord_2d_new[..., 1] / img_scale_factor[1] - - return coord_2d_new diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/point_fusion.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/point_fusion.py deleted file mode 100644 index 97b41777..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/point_fusion.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.core.bbox.structures import (get_proj_mat_by_coord_type, - points_cam2img) -from ..builder import FUSION_LAYERS -from . 
import apply_3d_transformation - - -def point_sample(img_meta, - img_features, - points, - proj_mat, - coord_type, - img_scale_factor, - img_crop_offset, - img_flip, - img_pad_shape, - img_shape, - aligned=True, - padding_mode='zeros', - align_corners=True): - """Obtain image features using points. - - Args: - img_meta (dict): Meta info. - img_features (torch.Tensor): 1 x C x H x W image features. - points (torch.Tensor): Nx3 point cloud in LiDAR coordinates. - proj_mat (torch.Tensor): 4x4 transformation matrix. - coord_type (str): 'DEPTH' or 'CAMERA' or 'LIDAR'. - img_scale_factor (torch.Tensor): Scale factor with shape of - (w_scale, h_scale). - img_crop_offset (torch.Tensor): Crop offset used to crop - image during data augmentation with shape of (w_offset, h_offset). - img_flip (bool): Whether the image is flipped. - img_pad_shape (tuple[int]): int tuple indicates the h & w after - padding, this is necessary to obtain features in feature map. - img_shape (tuple[int]): int tuple indicates the h & w before padding - after scaling, this is necessary for flipping coordinates. - aligned (bool, optional): Whether use bilinear interpolation when - sampling image features for each point. Defaults to True. - padding_mode (str, optional): Padding mode when padding values for - features of out-of-image points. Defaults to 'zeros'. - align_corners (bool, optional): Whether to align corners when - sampling image features for each point. Defaults to True. - - Returns: - torch.Tensor: NxC image features sampled by point coordinates. - """ - - # apply transformation based on info in img_meta - points = apply_3d_transformation( - points, coord_type, img_meta, reverse=True) - - # project points to camera coordinate - pts_2d = points_cam2img(points, proj_mat) - - # img transformation: scale -> crop -> flip - # the image is resized by img_scale_factor - img_coors = pts_2d[:, 0:2] * img_scale_factor # Nx2 - img_coors -= img_crop_offset - - # grid sample, the valid grid range should be in [-1,1] - coor_x, coor_y = torch.split(img_coors, 1, dim=1) # each is Nx1 - - if img_flip: - # by default we take it as horizontal flip - # use img_shape before padding for flip - orig_h, orig_w = img_shape - coor_x = orig_w - coor_x - - h, w = img_pad_shape - coor_y = coor_y / h * 2 - 1 - coor_x = coor_x / w * 2 - 1 - grid = torch.cat([coor_x, coor_y], - dim=1).unsqueeze(0).unsqueeze(0) # Nx2 -> 1x1xNx2 - - # align_corner=True provides higher performance - mode = 'bilinear' if aligned else 'nearest' - point_features = F.grid_sample( - img_features, - grid, - mode=mode, - padding_mode=padding_mode, - align_corners=align_corners) # 1xCx1xN feats - - return point_features.squeeze().t() - - -@FUSION_LAYERS.register_module() -class PointFusion(BaseModule): - """Fuse image features from multi-scale features. - - Args: - img_channels (list[int] | int): Channels of image features. - It could be a list if the input is multi-scale image features. - pts_channels (int): Channels of point features - mid_channels (int): Channels of middle layers - out_channels (int): Channels of output fused features - img_levels (int, optional): Number of image levels. Defaults to 3. - coord_type (str): 'DEPTH' or 'CAMERA' or 'LIDAR'. - Defaults to 'LIDAR'. - conv_cfg (dict, optional): Dict config of conv layers of middle - layers. Defaults to None. - norm_cfg (dict, optional): Dict config of norm layers of middle - layers. Defaults to None. - act_cfg (dict, optional): Dict config of activatation layers. - Defaults to None. 
- activate_out (bool, optional): Whether to apply relu activation - to output features. Defaults to True. - fuse_out (bool, optional): Whether apply conv layer to the fused - features. Defaults to False. - dropout_ratio (int, float, optional): Dropout ratio of image - features to prevent overfitting. Defaults to 0. - aligned (bool, optional): Whether apply aligned feature fusion. - Defaults to True. - align_corners (bool, optional): Whether to align corner when - sampling features according to points. Defaults to True. - padding_mode (str, optional): Mode used to pad the features of - points that do not have corresponding image features. - Defaults to 'zeros'. - lateral_conv (bool, optional): Whether to apply lateral convs - to image features. Defaults to True. - """ - - def __init__(self, - img_channels, - pts_channels, - mid_channels, - out_channels, - img_levels=3, - coord_type='LIDAR', - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - init_cfg=None, - activate_out=True, - fuse_out=False, - dropout_ratio=0, - aligned=True, - align_corners=True, - padding_mode='zeros', - lateral_conv=True): - super(PointFusion, self).__init__(init_cfg=init_cfg) - if isinstance(img_levels, int): - img_levels = [img_levels] - if isinstance(img_channels, int): - img_channels = [img_channels] * len(img_levels) - assert isinstance(img_levels, list) - assert isinstance(img_channels, list) - assert len(img_channels) == len(img_levels) - - self.img_levels = img_levels - self.coord_type = coord_type - self.act_cfg = act_cfg - self.activate_out = activate_out - self.fuse_out = fuse_out - self.dropout_ratio = dropout_ratio - self.img_channels = img_channels - self.aligned = aligned - self.align_corners = align_corners - self.padding_mode = padding_mode - - self.lateral_convs = None - if lateral_conv: - self.lateral_convs = nn.ModuleList() - for i in range(len(img_channels)): - l_conv = ConvModule( - img_channels[i], - mid_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=self.act_cfg, - inplace=False) - self.lateral_convs.append(l_conv) - self.img_transform = nn.Sequential( - nn.Linear(mid_channels * len(img_channels), out_channels), - nn.BatchNorm1d(out_channels, eps=1e-3, momentum=0.01), - ) - else: - self.img_transform = nn.Sequential( - nn.Linear(sum(img_channels), out_channels), - nn.BatchNorm1d(out_channels, eps=1e-3, momentum=0.01), - ) - self.pts_transform = nn.Sequential( - nn.Linear(pts_channels, out_channels), - nn.BatchNorm1d(out_channels, eps=1e-3, momentum=0.01), - ) - - if self.fuse_out: - self.fuse_conv = nn.Sequential( - nn.Linear(mid_channels, out_channels), - # For pts the BN is initialized differently by default - # TODO: check whether this is necessary - nn.BatchNorm1d(out_channels, eps=1e-3, momentum=0.01), - nn.ReLU(inplace=False)) - - if init_cfg is None: - self.init_cfg = [ - dict(type='Xavier', layer='Conv2d', distribution='uniform'), - dict(type='Xavier', layer='Linear', distribution='uniform') - ] - - def forward(self, img_feats, pts, pts_feats, img_metas): - """Forward function. - - Args: - img_feats (list[torch.Tensor]): Image features. - pts: [list[torch.Tensor]]: A batch of points with shape N x 3. - pts_feats (torch.Tensor): A tensor consist of point features of the - total batch. - img_metas (list[dict]): Meta information of images. - - Returns: - torch.Tensor: Fused features of each point. 
- """ - img_pts = self.obtain_mlvl_feats(img_feats, pts, img_metas) - img_pre_fuse = self.img_transform(img_pts) - if self.training and self.dropout_ratio > 0: - img_pre_fuse = F.dropout(img_pre_fuse, self.dropout_ratio) - pts_pre_fuse = self.pts_transform(pts_feats) - - fuse_out = img_pre_fuse + pts_pre_fuse - if self.activate_out: - fuse_out = F.relu(fuse_out) - if self.fuse_out: - fuse_out = self.fuse_conv(fuse_out) - - return fuse_out - - def obtain_mlvl_feats(self, img_feats, pts, img_metas): - """Obtain multi-level features for each point. - - Args: - img_feats (list(torch.Tensor)): Multi-scale image features produced - by image backbone in shape (N, C, H, W). - pts (list[torch.Tensor]): Points of each sample. - img_metas (list[dict]): Meta information for each sample. - - Returns: - torch.Tensor: Corresponding image features of each point. - """ - if self.lateral_convs is not None: - img_ins = [ - lateral_conv(img_feats[i]) - for i, lateral_conv in zip(self.img_levels, self.lateral_convs) - ] - else: - img_ins = img_feats - img_feats_per_point = [] - # Sample multi-level features - for i in range(len(img_metas)): - mlvl_img_feats = [] - for level in range(len(self.img_levels)): - mlvl_img_feats.append( - self.sample_single(img_ins[level][i:i + 1], pts[i][:, :3], - img_metas[i])) - mlvl_img_feats = torch.cat(mlvl_img_feats, dim=-1) - img_feats_per_point.append(mlvl_img_feats) - - img_pts = torch.cat(img_feats_per_point, dim=0) - return img_pts - - def sample_single(self, img_feats, pts, img_meta): - """Sample features from single level image feature map. - - Args: - img_feats (torch.Tensor): Image feature map in shape - (1, C, H, W). - pts (torch.Tensor): Points of a single sample. - img_meta (dict): Meta information of the single sample. - - Returns: - torch.Tensor: Single level image features of each point. - """ - # TODO: image transformation also extracted - img_scale_factor = ( - pts.new_tensor(img_meta['scale_factor'][:2]) - if 'scale_factor' in img_meta.keys() else 1) - img_flip = img_meta['flip'] if 'flip' in img_meta.keys() else False - img_crop_offset = ( - pts.new_tensor(img_meta['img_crop_offset']) - if 'img_crop_offset' in img_meta.keys() else 0) - proj_mat = get_proj_mat_by_coord_type(img_meta, self.coord_type) - img_pts = point_sample( - img_meta=img_meta, - img_features=img_feats, - points=pts, - proj_mat=pts.new_tensor(proj_mat), - coord_type=self.coord_type, - img_scale_factor=img_scale_factor, - img_crop_offset=img_crop_offset, - img_flip=img_flip, - img_pad_shape=img_meta['input_shape'][:2], - img_shape=img_meta['img_shape'][:2], - aligned=self.aligned, - padding_mode=self.padding_mode, - align_corners=self.align_corners, - ) - return img_pts diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/vote_fusion.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/vote_fusion.py deleted file mode 100644 index 3633e4d2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/fusion_layers/vote_fusion.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn - -from mmdet3d.core.bbox import points_cam2img -from ..builder import FUSION_LAYERS -from . import apply_3d_transformation, bbox_2d_transform, coord_2d_transform - -EPS = 1e-6 - - -@FUSION_LAYERS.register_module() -class VoteFusion(nn.Module): - """Fuse 2d features from 3d seeds. - - Args: - num_classes (int): number of classes. - max_imvote_per_pixel (int): max number of imvotes. 
- """ - - def __init__(self, num_classes=10, max_imvote_per_pixel=3): - super(VoteFusion, self).__init__() - self.num_classes = num_classes - self.max_imvote_per_pixel = max_imvote_per_pixel - - def forward(self, imgs, bboxes_2d_rescaled, seeds_3d_depth, img_metas): - """Forward function. - - Args: - imgs (list[torch.Tensor]): Image features. - bboxes_2d_rescaled (list[torch.Tensor]): 2D bboxes. - seeds_3d_depth (torch.Tensor): 3D seeds. - img_metas (list[dict]): Meta information of images. - - Returns: - torch.Tensor: Concatenated cues of each point. - torch.Tensor: Validity mask of each feature. - """ - img_features = [] - masks = [] - for i, data in enumerate( - zip(imgs, bboxes_2d_rescaled, seeds_3d_depth, img_metas)): - img, bbox_2d_rescaled, seed_3d_depth, img_meta = data - bbox_num = bbox_2d_rescaled.shape[0] - seed_num = seed_3d_depth.shape[0] - - img_shape = img_meta['img_shape'] - img_h, img_w, _ = img_shape - - # first reverse the data transformations - xyz_depth = apply_3d_transformation( - seed_3d_depth, 'DEPTH', img_meta, reverse=True) - - # project points from depth to image - depth2img = xyz_depth.new_tensor(img_meta['depth2img']) - uvz_origin = points_cam2img(xyz_depth, depth2img, True) - z_cam = uvz_origin[..., 2] - uv_origin = (uvz_origin[..., :2] - 1).round() - - # rescale 2d coordinates and bboxes - uv_rescaled = coord_2d_transform(img_meta, uv_origin, True) - bbox_2d_origin = bbox_2d_transform(img_meta, bbox_2d_rescaled, - False) - - if bbox_num == 0: - imvote_num = seed_num * self.max_imvote_per_pixel - - # use zero features - two_cues = torch.zeros((15, imvote_num), - device=seed_3d_depth.device) - mask_zero = torch.zeros( - imvote_num - seed_num, device=seed_3d_depth.device).bool() - mask_one = torch.ones( - seed_num, device=seed_3d_depth.device).bool() - mask = torch.cat([mask_one, mask_zero], dim=0) - else: - # expand bboxes and seeds - bbox_expanded = bbox_2d_origin.view(1, bbox_num, -1).expand( - seed_num, -1, -1) - seed_2d_expanded = uv_origin.view(seed_num, 1, - -1).expand(-1, bbox_num, -1) - seed_2d_expanded_x, seed_2d_expanded_y = \ - seed_2d_expanded.split(1, dim=-1) - - bbox_expanded_l, bbox_expanded_t, bbox_expanded_r, \ - bbox_expanded_b, bbox_expanded_conf, bbox_expanded_cls = \ - bbox_expanded.split(1, dim=-1) - bbox_expanded_midx = (bbox_expanded_l + bbox_expanded_r) / 2 - bbox_expanded_midy = (bbox_expanded_t + bbox_expanded_b) / 2 - - seed_2d_in_bbox_x = (seed_2d_expanded_x > bbox_expanded_l) * \ - (seed_2d_expanded_x < bbox_expanded_r) - seed_2d_in_bbox_y = (seed_2d_expanded_y > bbox_expanded_t) * \ - (seed_2d_expanded_y < bbox_expanded_b) - seed_2d_in_bbox = seed_2d_in_bbox_x * seed_2d_in_bbox_y - - # semantic cues, dim=class_num - sem_cue = torch.zeros_like(bbox_expanded_conf).expand( - -1, -1, self.num_classes) - sem_cue = sem_cue.scatter(-1, bbox_expanded_cls.long(), - bbox_expanded_conf) - - # bbox center - uv - delta_u = bbox_expanded_midx - seed_2d_expanded_x - delta_v = bbox_expanded_midy - seed_2d_expanded_y - - seed_3d_expanded = seed_3d_depth.view(seed_num, 1, -1).expand( - -1, bbox_num, -1) - - z_cam = z_cam.view(seed_num, 1, 1).expand(-1, bbox_num, -1) - imvote = torch.cat( - [delta_u, delta_v, - torch.zeros_like(delta_v)], dim=-1).view(-1, 3) - imvote = imvote * z_cam.reshape(-1, 1) - imvote = imvote @ torch.inverse(depth2img.t()) - - # apply transformation to lifted imvotes - imvote = apply_3d_transformation( - imvote, 'DEPTH', img_meta, reverse=False) - - seed_3d_expanded = seed_3d_expanded.reshape(imvote.shape) - - # ray angle - 
ray_angle = seed_3d_expanded + imvote - ray_angle /= torch.sqrt(torch.sum(ray_angle**2, -1) + - EPS).unsqueeze(-1) - - # imvote lifted to 3d - xz = ray_angle[:, [0, 2]] / (ray_angle[:, [1]] + EPS) \ - * seed_3d_expanded[:, [1]] - seed_3d_expanded[:, [0, 2]] - - # geometric cues, dim=5 - geo_cue = torch.cat([xz, ray_angle], - dim=-1).view(seed_num, -1, 5) - - two_cues = torch.cat([geo_cue, sem_cue], dim=-1) - # mask to 0 if seed not in bbox - two_cues = two_cues * seed_2d_in_bbox.float() - - feature_size = two_cues.shape[-1] - # if bbox number is too small, append zeros - if bbox_num < self.max_imvote_per_pixel: - append_num = self.max_imvote_per_pixel - bbox_num - append_zeros = torch.zeros( - (seed_num, append_num, 1), - device=seed_2d_in_bbox.device).bool() - seed_2d_in_bbox = torch.cat( - [seed_2d_in_bbox, append_zeros], dim=1) - append_zeros = torch.zeros( - (seed_num, append_num, feature_size), - device=two_cues.device) - two_cues = torch.cat([two_cues, append_zeros], dim=1) - append_zeros = torch.zeros((seed_num, append_num, 1), - device=two_cues.device) - bbox_expanded_conf = torch.cat( - [bbox_expanded_conf, append_zeros], dim=1) - - # sort the valid seed-bbox pair according to confidence - pair_score = seed_2d_in_bbox.float() + bbox_expanded_conf - # and find the largests - mask, indices = pair_score.topk( - self.max_imvote_per_pixel, - dim=1, - largest=True, - sorted=True) - - indices_img = indices.expand(-1, -1, feature_size) - two_cues = two_cues.gather(dim=1, index=indices_img) - two_cues = two_cues.transpose(1, 0) - two_cues = two_cues.reshape(-1, feature_size).transpose( - 1, 0).contiguous() - - # since conf is ~ (0, 1), floor gives us validity - mask = mask.floor().int() - mask = mask.transpose(1, 0).reshape(-1).bool() - - # clear the padding - img = img[:, :img_shape[0], :img_shape[1]] - img_flatten = img.reshape(3, -1).float() - img_flatten /= 255. - - # take the normalized pixel value as texture cue - uv_rescaled[:, 0] = torch.clamp(uv_rescaled[:, 0].round(), 0, - img_shape[1] - 1) - uv_rescaled[:, 1] = torch.clamp(uv_rescaled[:, 1].round(), 0, - img_shape[0] - 1) - uv_flatten = uv_rescaled[:, 1].round() * \ - img_shape[1] + uv_rescaled[:, 0].round() - uv_expanded = uv_flatten.unsqueeze(0).expand(3, -1).long() - txt_cue = torch.gather(img_flatten, dim=-1, index=uv_expanded) - txt_cue = txt_cue.unsqueeze(1).expand(-1, - self.max_imvote_per_pixel, - -1).reshape(3, -1) - - # append texture cue - img_feature = torch.cat([two_cues, txt_cue], dim=0) - img_features.append(img_feature) - masks.append(mask) - - return torch.stack(img_features, 0), torch.stack(masks, 0) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/__init__.py deleted file mode 100644 index dcdc69ab..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.models.losses import FocalLoss, SmoothL1Loss, binary_cross_entropy -from .axis_aligned_iou_loss import AxisAlignedIoULoss, axis_aligned_iou_loss -from .chamfer_distance import ChamferDistance, chamfer_distance -from .multibin_loss import MultiBinLoss -from .paconv_regularization_loss import PAConvRegularizationLoss -from .uncertain_smooth_l1_loss import UncertainL1Loss, UncertainSmoothL1Loss - -__all__ = [ - 'FocalLoss', 'SmoothL1Loss', 'binary_cross_entropy', 'ChamferDistance', - 'chamfer_distance', 'axis_aligned_iou_loss', 'AxisAlignedIoULoss', - 'PAConvRegularizationLoss', 'UncertainL1Loss', 'UncertainSmoothL1Loss', - 'MultiBinLoss' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/axis_aligned_iou_loss.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/axis_aligned_iou_loss.py deleted file mode 100644 index 428d7bb8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/axis_aligned_iou_loss.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn - -from mmdet.models.losses.utils import weighted_loss -from ...core.bbox import AxisAlignedBboxOverlaps3D -from ..builder import LOSSES - - -@weighted_loss -def axis_aligned_iou_loss(pred, target): - """Calculate the IoU loss (1-IoU) of two set of axis aligned bounding - boxes. Note that predictions and targets are one-to-one corresponded. - - Args: - pred (torch.Tensor): Bbox predictions with shape [..., 3]. - target (torch.Tensor): Bbox targets (gt) with shape [..., 3]. - - Returns: - torch.Tensor: IoU loss between predictions and targets. - """ - - axis_aligned_iou = AxisAlignedBboxOverlaps3D()( - pred, target, is_aligned=True) - iou_loss = 1 - axis_aligned_iou - return iou_loss - - -@LOSSES.register_module() -class AxisAlignedIoULoss(nn.Module): - """Calculate the IoU loss (1-IoU) of axis aligned bounding boxes. - - Args: - reduction (str): Method to reduce losses. - The valid reduction method are none, sum or mean. - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(AxisAlignedIoULoss, self).__init__() - assert reduction in ['none', 'sum', 'mean'] - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function of loss calculation. - - Args: - pred (torch.Tensor): Bbox predictions with shape [..., 3]. - target (torch.Tensor): Bbox targets (gt) with shape [..., 3]. - weight (torch.Tensor | float, optional): Weight of loss. - Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): Method to reduce losses. - The valid reduction method are 'none', 'sum' or 'mean'. - Defaults to None. - - Returns: - torch.Tensor: IoU loss between predictions and targets. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if (weight is not None) and (not torch.any(weight > 0)) and ( - reduction != 'none'): - return (pred * weight).sum() - return axis_aligned_iou_loss( - pred, - target, - weight=weight, - avg_factor=avg_factor, - reduction=reduction) * self.loss_weight diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/chamfer_distance.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/chamfer_distance.py deleted file mode 100644 index 8ad109d7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/chamfer_distance.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn -from torch.nn.functional import l1_loss, mse_loss, smooth_l1_loss - -from ..builder import LOSSES - - -def chamfer_distance(src, - dst, - src_weight=1.0, - dst_weight=1.0, - criterion_mode='l2', - reduction='mean'): - """Calculate Chamfer Distance of two sets. - - Args: - src (torch.Tensor): Source set with shape [B, N, C] to - calculate Chamfer Distance. - dst (torch.Tensor): Destination set with shape [B, M, C] to - calculate Chamfer Distance. - src_weight (torch.Tensor or float): Weight of source loss. - dst_weight (torch.Tensor or float): Weight of destination loss. - criterion_mode (str): Criterion mode to calculate distance. - The valid modes are smooth_l1, l1 or l2. - reduction (str): Method to reduce losses. - The valid reduction method are 'none', 'sum' or 'mean'. - - Returns: - tuple: Source and Destination loss with the corresponding indices. - - - loss_src (torch.Tensor): The min distance - from source to destination. - - loss_dst (torch.Tensor): The min distance - from destination to source. - - indices1 (torch.Tensor): Index the min distance point - for each point in source to destination. - - indices2 (torch.Tensor): Index the min distance point - for each point in destination to source. - """ - - if criterion_mode == 'smooth_l1': - criterion = smooth_l1_loss - elif criterion_mode == 'l1': - criterion = l1_loss - elif criterion_mode == 'l2': - criterion = mse_loss - else: - raise NotImplementedError - - src_expand = src.unsqueeze(2).repeat(1, 1, dst.shape[1], 1) - dst_expand = dst.unsqueeze(1).repeat(1, src.shape[1], 1, 1) - - distance = criterion(src_expand, dst_expand, reduction='none').sum(-1) - src2dst_distance, indices1 = torch.min(distance, dim=2) # (B,N) - dst2src_distance, indices2 = torch.min(distance, dim=1) # (B,M) - - loss_src = (src2dst_distance * src_weight) - loss_dst = (dst2src_distance * dst_weight) - - if reduction == 'sum': - loss_src = torch.sum(loss_src) - loss_dst = torch.sum(loss_dst) - elif reduction == 'mean': - loss_src = torch.mean(loss_src) - loss_dst = torch.mean(loss_dst) - elif reduction == 'none': - pass - else: - raise NotImplementedError - - return loss_src, loss_dst, indices1, indices2 - - -@LOSSES.register_module() -class ChamferDistance(nn.Module): - """Calculate Chamfer Distance of two sets. - - Args: - mode (str): Criterion mode to calculate distance. - The valid modes are smooth_l1, l1 or l2. - reduction (str): Method to reduce losses. - The valid reduction method are none, sum or mean. - loss_src_weight (float): Weight of loss_source. - loss_dst_weight (float): Weight of loss_target. 
- """ - - def __init__(self, - mode='l2', - reduction='mean', - loss_src_weight=1.0, - loss_dst_weight=1.0): - super(ChamferDistance, self).__init__() - - assert mode in ['smooth_l1', 'l1', 'l2'] - assert reduction in ['none', 'sum', 'mean'] - self.mode = mode - self.reduction = reduction - self.loss_src_weight = loss_src_weight - self.loss_dst_weight = loss_dst_weight - - def forward(self, - source, - target, - src_weight=1.0, - dst_weight=1.0, - reduction_override=None, - return_indices=False, - **kwargs): - """Forward function of loss calculation. - - Args: - source (torch.Tensor): Source set with shape [B, N, C] to - calculate Chamfer Distance. - target (torch.Tensor): Destination set with shape [B, M, C] to - calculate Chamfer Distance. - src_weight (torch.Tensor | float, optional): - Weight of source loss. Defaults to 1.0. - dst_weight (torch.Tensor | float, optional): - Weight of destination loss. Defaults to 1.0. - reduction_override (str, optional): Method to reduce losses. - The valid reduction method are 'none', 'sum' or 'mean'. - Defaults to None. - return_indices (bool, optional): Whether to return indices. - Defaults to False. - - Returns: - tuple[torch.Tensor]: If ``return_indices=True``, return losses of - source and target with their corresponding indices in the - order of ``(loss_source, loss_target, indices1, indices2)``. - If ``return_indices=False``, return - ``(loss_source, loss_target)``. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - - loss_source, loss_target, indices1, indices2 = chamfer_distance( - source, target, src_weight, dst_weight, self.mode, reduction) - - loss_source *= self.loss_src_weight - loss_target *= self.loss_dst_weight - - if return_indices: - return loss_source, loss_target, indices1, indices2 - else: - return loss_source, loss_target diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/multibin_loss.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/multibin_loss.py deleted file mode 100644 index 461a19cf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/multibin_loss.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn -from torch.nn import functional as F - -from mmdet.models.losses.utils import weighted_loss -from ..builder import LOSSES - - -@weighted_loss -def multibin_loss(pred_orientations, gt_orientations, num_dir_bins=4): - """Multi-Bin Loss. - - Args: - pred_orientations(torch.Tensor): Predicted local vector - orientation in [axis_cls, head_cls, sin, cos] format. - shape (N, num_dir_bins * 4) - gt_orientations(torch.Tensor): Corresponding gt bboxes, - shape (N, num_dir_bins * 2). - num_dir_bins(int, optional): Number of bins to encode - direction angle. - Defaults: 4. - - Return: - torch.Tensor: Loss tensor. 
- """ - cls_losses = 0 - reg_losses = 0 - reg_cnt = 0 - for i in range(num_dir_bins): - # bin cls loss - cls_ce_loss = F.cross_entropy( - pred_orientations[:, (i * 2):(i * 2 + 2)], - gt_orientations[:, i].long(), - reduction='mean') - # regression loss - valid_mask_i = (gt_orientations[:, i] == 1) - cls_losses += cls_ce_loss - if valid_mask_i.sum() > 0: - start = num_dir_bins * 2 + i * 2 - end = start + 2 - pred_offset = F.normalize(pred_orientations[valid_mask_i, - start:end]) - gt_offset_sin = torch.sin(gt_orientations[valid_mask_i, - num_dir_bins + i]) - gt_offset_cos = torch.cos(gt_orientations[valid_mask_i, - num_dir_bins + i]) - reg_loss = \ - F.l1_loss(pred_offset[:, 0], gt_offset_sin, - reduction='none') + \ - F.l1_loss(pred_offset[:, 1], gt_offset_cos, - reduction='none') - - reg_losses += reg_loss.sum() - reg_cnt += valid_mask_i.sum() - - return cls_losses / num_dir_bins + reg_losses / reg_cnt - - -@LOSSES.register_module() -class MultiBinLoss(nn.Module): - """Multi-Bin Loss for orientation. - - Args: - reduction (str, optional): The method to reduce the loss. - Options are 'none', 'mean' and 'sum'. Defaults to 'none'. - loss_weight (float, optional): The weight of loss. Defaults - to 1.0. - """ - - def __init__(self, reduction='none', loss_weight=1.0): - super(MultiBinLoss, self).__init__() - assert reduction in ['none', 'sum', 'mean'] - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, pred, target, num_dir_bins, reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - num_dir_bins (int): Number of bins to encode direction angle. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss = self.loss_weight * multibin_loss( - pred, target, num_dir_bins=num_dir_bins, reduction=reduction) - return loss diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/paconv_regularization_loss.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/paconv_regularization_loss.py deleted file mode 100644 index 20017909..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/paconv_regularization_loss.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn - -from mmdet3d.ops import PAConv, PAConvCUDA -from mmdet.models.losses.utils import weight_reduce_loss -from ..builder import LOSSES - - -def weight_correlation(conv): - """Calculate correlations between kernel weights in Conv's weight bank as - regularization loss. The cosine similarity is used as metrics. - - Args: - conv (nn.Module): A Conv modules to be regularized. - Currently we only support `PAConv` and `PAConvCUDA`. - - Returns: - torch.Tensor: Correlations between each kernel weights in weight bank. 
- """ - assert isinstance(conv, (PAConv, PAConvCUDA)), \ - f'unsupported module type {type(conv)}' - kernels = conv.weight_bank # [C_in, num_kernels * C_out] - in_channels = conv.in_channels - out_channels = conv.out_channels - num_kernels = conv.num_kernels - - # [num_kernels, Cin * Cout] - flatten_kernels = kernels.view(in_channels, num_kernels, out_channels).\ - permute(1, 0, 2).reshape(num_kernels, -1) - # [num_kernels, num_kernels] - inner_product = torch.matmul(flatten_kernels, flatten_kernels.T) - # [num_kernels, 1] - kernel_norms = torch.sum(flatten_kernels**2, dim=-1, keepdim=True)**0.5 - # [num_kernels, num_kernels] - kernel_norms = torch.matmul(kernel_norms, kernel_norms.T) - cosine_sims = inner_product / kernel_norms - # take upper triangular part excluding diagonal since we only compute - # correlation between different kernels once - # the square is to ensure positive loss, refer to: - # https://github.com/CVMI-Lab/PAConv/blob/main/scene_seg/tool/train.py#L208 - corr = torch.sum(torch.triu(cosine_sims, diagonal=1)**2) - - return corr - - -def paconv_regularization_loss(modules, reduction): - """Computes correlation loss of PAConv weight kernels as regularization. - - Args: - modules (List[nn.Module] | :obj:`generator`): - A list or a python generator of torch.nn.Modules. - reduction (str): Method to reduce losses among PAConv modules. - The valid reduction method are none, sum or mean. - - Returns: - torch.Tensor: Correlation loss of kernel weights. - """ - corr_loss = [] - for module in modules: - if isinstance(module, (PAConv, PAConvCUDA)): - corr_loss.append(weight_correlation(module)) - corr_loss = torch.stack(corr_loss) - - # perform reduction - corr_loss = weight_reduce_loss(corr_loss, reduction=reduction) - - return corr_loss - - -@LOSSES.register_module() -class PAConvRegularizationLoss(nn.Module): - """Calculate correlation loss of kernel weights in PAConv's weight bank. - - This is used as a regularization term in PAConv model training. - - Args: - reduction (str): Method to reduce losses. The reduction is performed - among all PAConv modules instead of prediction tensors. - The valid reduction method are none, sum or mean. - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(PAConvRegularizationLoss, self).__init__() - assert reduction in ['none', 'sum', 'mean'] - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, modules, reduction_override=None, **kwargs): - """Forward function of loss calculation. - - Args: - modules (List[nn.Module] | :obj:`generator`): - A list or a python generator of torch.nn.Modules. - reduction_override (str, optional): Method to reduce losses. - The valid reduction method are 'none', 'sum' or 'mean'. - Defaults to None. - - Returns: - torch.Tensor: Correlation loss of kernel weights. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - - return self.loss_weight * paconv_regularization_loss( - modules, reduction=reduction) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/uncertain_smooth_l1_loss.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/uncertain_smooth_l1_loss.py deleted file mode 100644 index e80c08f1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/losses/uncertain_smooth_l1_loss.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from torch import nn as nn - -from mmdet.models.losses.utils import weighted_loss -from ..builder import LOSSES - - -@weighted_loss -def uncertain_smooth_l1_loss(pred, target, sigma, alpha=1.0, beta=1.0): - """Smooth L1 loss with uncertainty. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - sigma (torch.Tensor): The sigma for uncertainty. - alpha (float, optional): The coefficient of log(sigma). - Defaults to 1.0. - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - - Returns: - torch.Tensor: Calculated loss - """ - assert beta > 0 - assert target.numel() > 0 - assert pred.size() == target.size() == sigma.size(), 'The size of pred ' \ - f'{pred.size()}, target {target.size()}, and sigma {sigma.size()} ' \ - 'are inconsistent.' - diff = torch.abs(pred - target) - loss = torch.where(diff < beta, 0.5 * diff * diff / beta, - diff - 0.5 * beta) - loss = torch.exp(-sigma) * loss + alpha * sigma - - return loss - - -@weighted_loss -def uncertain_l1_loss(pred, target, sigma, alpha=1.0): - """L1 loss with uncertainty. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - sigma (torch.Tensor): The sigma for uncertainty. - alpha (float, optional): The coefficient of log(sigma). - Defaults to 1.0. - - Returns: - torch.Tensor: Calculated loss - """ - assert target.numel() > 0 - assert pred.size() == target.size() == sigma.size(), 'The size of pred ' \ - f'{pred.size()}, target {target.size()}, and sigma {sigma.size()} ' \ - 'are inconsistent.' - loss = torch.abs(pred - target) - loss = torch.exp(-sigma) * loss + alpha * sigma - return loss - - -@LOSSES.register_module() -class UncertainSmoothL1Loss(nn.Module): - r"""Smooth L1 loss with uncertainty. - - Please refer to `PGD `_ and - `Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry - and Semantics `_ for more details. - - Args: - alpha (float, optional): The coefficient of log(sigma). - Defaults to 1.0. - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - reduction (str, optional): The method to reduce the loss. - Options are 'none', 'mean' and 'sum'. Defaults to 'mean'. - loss_weight (float, optional): The weight of loss. Defaults to 1.0 - """ - - def __init__(self, alpha=1.0, beta=1.0, reduction='mean', loss_weight=1.0): - super(UncertainSmoothL1Loss, self).__init__() - assert reduction in ['none', 'sum', 'mean'] - self.alpha = alpha - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - sigma, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - sigma (torch.Tensor): The sigma for uncertainty. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * uncertain_smooth_l1_loss( - pred, - target, - weight, - sigma=sigma, - alpha=self.alpha, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox - - -@LOSSES.register_module() -class UncertainL1Loss(nn.Module): - """L1 loss with uncertainty. - - Args: - alpha (float, optional): The coefficient of log(sigma). - Defaults to 1.0. - reduction (str, optional): The method to reduce the loss. - Options are 'none', 'mean' and 'sum'. Defaults to 'mean'. - loss_weight (float, optional): The weight of loss. Defaults to 1.0. - """ - - def __init__(self, alpha=1.0, reduction='mean', loss_weight=1.0): - super(UncertainL1Loss, self).__init__() - assert reduction in ['none', 'sum', 'mean'] - self.alpha = alpha - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - sigma, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - sigma (torch.Tensor): The sigma for uncertainty. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * uncertain_l1_loss( - pred, - target, - weight, - sigma=sigma, - alpha=self.alpha, - reduction=reduction, - avg_factor=avg_factor) - return loss_bbox diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/__init__.py deleted file mode 100644 index 1e7bb638..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -from .pillar_scatter import PointPillarsScatter -# from .sparse_encoder import SparseEncoder, SparseEncoderSASSD -# from .sparse_unet import SparseUNet - -__all__ = [ - 'PointPillarsScatter' -] - -# __all__ = [ -# 'PointPillarsScatter', 'SparseEncoder', 'SparseEncoderSASSD', 'SparseUNet' -# ] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/pillar_scatter.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/pillar_scatter.py deleted file mode 100644 index 725ce290..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/pillar_scatter.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import auto_fp16 -from torch import nn - -from ..builder import MIDDLE_ENCODERS - - -@MIDDLE_ENCODERS.register_module() -class PointPillarsScatter(nn.Module): - """Point Pillar's Scatter. - - Converts learned features from dense tensor to sparse pseudo image. - - Args: - in_channels (int): Channels of input features. - output_shape (list[int]): Required output shape of features. 
- """ - - def __init__(self, in_channels, output_shape): - super().__init__() - self.output_shape = output_shape - self.ny = output_shape[0] - self.nx = output_shape[1] - self.in_channels = in_channels - self.fp16_enabled = False - - @auto_fp16(apply_to=('voxel_features', )) - def forward(self, voxel_features, coors, batch_size=None): - """Foraward function to scatter features.""" - # TODO: rewrite the function in a batch manner - # no need to deal with different batch cases - if batch_size is not None: - return self.forward_batch(voxel_features, coors, batch_size) - else: - return self.forward_single(voxel_features, coors) - - def forward_single(self, voxel_features, coors): - """Scatter features of single sample. - - Args: - voxel_features (torch.Tensor): Voxel features in shape (N, M, C). - coors (torch.Tensor): Coordinates of each voxel. - The first column indicates the sample ID. - """ - # Create the canvas for this sample - canvas = torch.zeros( - self.in_channels, - self.nx * self.ny, - dtype=voxel_features.dtype, - device=voxel_features.device) - - indices = coors[:, 2] * self.nx + coors[:, 3] - indices = indices.long() - voxels = voxel_features.t() - # Now scatter the blob back to the canvas. - canvas[:, indices] = voxels - # Undo the column stacking to final 4-dim tensor - canvas = canvas.view(1, self.in_channels, self.ny, self.nx) - return canvas - - def forward_batch(self, voxel_features, coors, batch_size): - """Scatter features of single sample. - - Args: - voxel_features (torch.Tensor): Voxel features in shape (N, M, C). - coors (torch.Tensor): Coordinates of each voxel in shape (N, 4). - The first column indicates the sample ID. - batch_size (int): Number of samples in the current batch. - """ - # batch_canvas will be the final output. - batch_canvas = [] - for batch_itt in range(batch_size): - # Create the canvas for this sample - canvas = torch.zeros( - self.in_channels, - self.nx * self.ny, - dtype=voxel_features.dtype, - device=voxel_features.device) - - # Only include non-empty pillars - batch_mask = coors[:, 0] == batch_itt - this_coors = coors[batch_mask, :] - indices = this_coors[:, 2] * self.nx + this_coors[:, 3] - indices = indices.type(torch.long) - voxels = voxel_features[batch_mask, :] - voxels = voxels.t() - - # Now scatter the blob back to the canvas. - canvas[:, indices] = voxels - - # Append to a list for later stacking. - batch_canvas.append(canvas) - - # Stack to 3-dim tensor (batch-size, in_channels, nrows*ncols) - batch_canvas = torch.stack(batch_canvas, 0) - - # Undo the column stacking to final 4-dim tensor - batch_canvas = batch_canvas.view(batch_size, self.in_channels, self.ny, - self.nx) - - return batch_canvas diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/sparse_encoder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/sparse_encoder.py deleted file mode 100644 index 83a7a301..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/sparse_encoder.py +++ /dev/null @@ -1,491 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from mmcv.ops import points_in_boxes_all, three_interpolate, three_nn -from mmcv.runner import auto_fp16 -from torch import nn as nn - -from mmdet3d.ops import SparseBasicBlock, make_sparse_convmodule -from mmdet3d.ops.spconv import IS_SPCONV2_AVAILABLE -from mmdet.models.losses import sigmoid_focal_loss, smooth_l1_loss -from ..builder import MIDDLE_ENCODERS - -if IS_SPCONV2_AVAILABLE: - from spconv.pytorch import SparseConvTensor, SparseSequential -else: - from mmcv.ops import SparseConvTensor, SparseSequential - - -@MIDDLE_ENCODERS.register_module() -class SparseEncoder(nn.Module): - r"""Sparse encoder for SECOND and Part-A2. - - Args: - in_channels (int): The number of input channels. - sparse_shape (list[int]): The sparse shape of input tensor. - order (list[str], optional): Order of conv module. - Defaults to ('conv', 'norm', 'act'). - norm_cfg (dict, optional): Config of normalization layer. Defaults to - dict(type='BN1d', eps=1e-3, momentum=0.01). - base_channels (int, optional): Out channels for conv_input layer. - Defaults to 16. - output_channels (int, optional): Out channels for conv_out layer. - Defaults to 128. - encoder_channels (tuple[tuple[int]], optional): - Convolutional channels of each encode block. - Defaults to ((16, ), (32, 32, 32), (64, 64, 64), (64, 64, 64)). - encoder_paddings (tuple[tuple[int]], optional): - Paddings of each encode block. - Defaults to ((1, ), (1, 1, 1), (1, 1, 1), ((0, 1, 1), 1, 1)). - block_type (str, optional): Type of the block to use. - Defaults to 'conv_module'. - """ - - def __init__(self, - in_channels, - sparse_shape, - order=('conv', 'norm', 'act'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - base_channels=16, - output_channels=128, - encoder_channels=((16, ), (32, 32, 32), (64, 64, 64), (64, 64, - 64)), - encoder_paddings=((1, ), (1, 1, 1), (1, 1, 1), ((0, 1, 1), 1, - 1)), - block_type='conv_module'): - super().__init__() - assert block_type in ['conv_module', 'basicblock'] - self.sparse_shape = sparse_shape - self.in_channels = in_channels - self.order = order - self.base_channels = base_channels - self.output_channels = output_channels - self.encoder_channels = encoder_channels - self.encoder_paddings = encoder_paddings - self.stage_num = len(self.encoder_channels) - self.fp16_enabled = False - # Spconv init all weight on its own - - assert isinstance(order, tuple) and len(order) == 3 - assert set(order) == {'conv', 'norm', 'act'} - - if self.order[0] != 'conv': # pre activate - self.conv_input = make_sparse_convmodule( - in_channels, - self.base_channels, - 3, - norm_cfg=norm_cfg, - padding=1, - indice_key='subm1', - conv_type='SubMConv3d', - order=('conv', )) - else: # post activate - self.conv_input = make_sparse_convmodule( - in_channels, - self.base_channels, - 3, - norm_cfg=norm_cfg, - padding=1, - indice_key='subm1', - conv_type='SubMConv3d') - - encoder_out_channels = self.make_encoder_layers( - make_sparse_convmodule, - norm_cfg, - self.base_channels, - block_type=block_type) - - self.conv_out = make_sparse_convmodule( - encoder_out_channels, - self.output_channels, - kernel_size=(3, 1, 1), - stride=(2, 1, 1), - norm_cfg=norm_cfg, - padding=0, - indice_key='spconv_down2', - conv_type='SparseConv3d') - - @auto_fp16(apply_to=('voxel_features', )) - def forward(self, voxel_features, coors, batch_size): - """Forward of SparseEncoder. - - Args: - voxel_features (torch.Tensor): Voxel features in shape (N, C). 
- coors (torch.Tensor): Coordinates in shape (N, 4), - the columns in the order of (batch_idx, z_idx, y_idx, x_idx). - batch_size (int): Batch size. - - Returns: - dict: Backbone features. - """ - coors = coors.int() - input_sp_tensor = SparseConvTensor(voxel_features, coors, - self.sparse_shape, batch_size) - x = self.conv_input(input_sp_tensor) - - encode_features = [] - for encoder_layer in self.encoder_layers: - x = encoder_layer(x) - encode_features.append(x) - - # for detection head - # [200, 176, 5] -> [200, 176, 2] - out = self.conv_out(encode_features[-1]) - spatial_features = out.dense() - - N, C, D, H, W = spatial_features.shape - spatial_features = spatial_features.view(N, C * D, H, W) - - return spatial_features - - def make_encoder_layers(self, - make_block, - norm_cfg, - in_channels, - block_type='conv_module', - conv_cfg=dict(type='SubMConv3d')): - """make encoder layers using sparse convs. - - Args: - make_block (method): A bounded function to build blocks. - norm_cfg (dict[str]): Config of normalization layer. - in_channels (int): The number of encoder input channels. - block_type (str, optional): Type of the block to use. - Defaults to 'conv_module'. - conv_cfg (dict, optional): Config of conv layer. Defaults to - dict(type='SubMConv3d'). - - Returns: - int: The number of encoder output channels. - """ - assert block_type in ['conv_module', 'basicblock'] - self.encoder_layers = SparseSequential() - - for i, blocks in enumerate(self.encoder_channels): - blocks_list = [] - for j, out_channels in enumerate(tuple(blocks)): - padding = tuple(self.encoder_paddings[i])[j] - # each stage started with a spconv layer - # except the first stage - if i != 0 and j == 0 and block_type == 'conv_module': - blocks_list.append( - make_block( - in_channels, - out_channels, - 3, - norm_cfg=norm_cfg, - stride=2, - padding=padding, - indice_key=f'spconv{i + 1}', - conv_type='SparseConv3d')) - elif block_type == 'basicblock': - if j == len(blocks) - 1 and i != len( - self.encoder_channels) - 1: - blocks_list.append( - make_block( - in_channels, - out_channels, - 3, - norm_cfg=norm_cfg, - stride=2, - padding=padding, - indice_key=f'spconv{i + 1}', - conv_type='SparseConv3d')) - else: - blocks_list.append( - SparseBasicBlock( - out_channels, - out_channels, - norm_cfg=norm_cfg, - conv_cfg=conv_cfg)) - else: - blocks_list.append( - make_block( - in_channels, - out_channels, - 3, - norm_cfg=norm_cfg, - padding=padding, - indice_key=f'subm{i + 1}', - conv_type='SubMConv3d')) - in_channels = out_channels - stage_name = f'encoder_layer{i + 1}' - stage_layers = SparseSequential(*blocks_list) - self.encoder_layers.add_module(stage_name, stage_layers) - return out_channels - - -@MIDDLE_ENCODERS.register_module() -class SparseEncoderSASSD(SparseEncoder): - r"""Sparse encoder for `SASSD `_ - - Args: - in_channels (int): The number of input channels. - sparse_shape (list[int]): The sparse shape of input tensor. - order (list[str], optional): Order of conv module. - Defaults to ('conv', 'norm', 'act'). - norm_cfg (dict, optional): Config of normalization layer. Defaults to - dict(type='BN1d', eps=1e-3, momentum=0.01). - base_channels (int, optional): Out channels for conv_input layer. - Defaults to 16. - output_channels (int, optional): Out channels for conv_out layer. - Defaults to 128. - encoder_channels (tuple[tuple[int]], optional): - Convolutional channels of each encode block. - Defaults to ((16, ), (32, 32, 32), (64, 64, 64), (64, 64, 64)). 
- encoder_paddings (tuple[tuple[int]], optional): - Paddings of each encode block. - Defaults to ((1, ), (1, 1, 1), (1, 1, 1), ((0, 1, 1), 1, 1)). - block_type (str, optional): Type of the block to use. - Defaults to 'conv_module'. - """ - - def __init__(self, - in_channels, - sparse_shape, - order=('conv', 'norm', 'act'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - base_channels=16, - output_channels=128, - encoder_channels=((16, ), (32, 32, 32), (64, 64, 64), (64, 64, - 64)), - encoder_paddings=((1, ), (1, 1, 1), (1, 1, 1), ((0, 1, 1), 1, - 1)), - block_type='conv_module'): - super(SparseEncoderSASSD, self).__init__( - in_channels=in_channels, - sparse_shape=sparse_shape, - order=order, - norm_cfg=norm_cfg, - base_channels=base_channels, - output_channels=output_channels, - encoder_channels=encoder_channels, - encoder_paddings=encoder_paddings, - block_type=block_type) - - self.point_fc = nn.Linear(112, 64, bias=False) - self.point_cls = nn.Linear(64, 1, bias=False) - self.point_reg = nn.Linear(64, 3, bias=False) - - @auto_fp16(apply_to=('voxel_features', )) - def forward(self, voxel_features, coors, batch_size, test_mode=False): - """Forward of SparseEncoder. - - Args: - voxel_features (torch.Tensor): Voxel features in shape (N, C). - coors (torch.Tensor): Coordinates in shape (N, 4), - the columns in the order of (batch_idx, z_idx, y_idx, x_idx). - batch_size (int): Batch size. - test_mode (bool, optional): Whether in test mode. - Defaults to False. - - Returns: - dict: Backbone features. - tuple[torch.Tensor]: Mean feature value of the points, - Classificaion result of the points, - Regression offsets of the points. - """ - coors = coors.int() - input_sp_tensor = SparseConvTensor(voxel_features, coors, - self.sparse_shape, batch_size) - x = self.conv_input(input_sp_tensor) - - encode_features = [] - for encoder_layer in self.encoder_layers: - x = encoder_layer(x) - encode_features.append(x) - - # for detection head - # [200, 176, 5] -> [200, 176, 2] - out = self.conv_out(encode_features[-1]) - spatial_features = out.dense() - - N, C, D, H, W = spatial_features.shape - spatial_features = spatial_features.view(N, C * D, H, W) - - if test_mode: - return spatial_features, None - - points_mean = torch.zeros_like(voxel_features) - points_mean[:, 0] = coors[:, 0] - points_mean[:, 1:] = voxel_features[:, :3] - - # auxiliary network - p0 = self.make_auxiliary_points( - encode_features[0], - points_mean, - offset=(0, -40., -3.), - voxel_size=(.1, .1, .2)) - - p1 = self.make_auxiliary_points( - encode_features[1], - points_mean, - offset=(0, -40., -3.), - voxel_size=(.2, .2, .4)) - - p2 = self.make_auxiliary_points( - encode_features[2], - points_mean, - offset=(0, -40., -3.), - voxel_size=(.4, .4, .8)) - - pointwise = torch.cat([p0, p1, p2], dim=-1) - pointwise = self.point_fc(pointwise) - point_cls = self.point_cls(pointwise) - point_reg = self.point_reg(pointwise) - point_misc = (points_mean, point_cls, point_reg) - - return spatial_features, point_misc - - def get_auxiliary_targets(self, nxyz, gt_boxes3d, enlarge=1.0): - """Get auxiliary target. - - Args: - nxyz (torch.Tensor): Mean features of the points. - gt_boxes3d (torch.Tensor): Coordinates in shape (N, 4), - the columns in the order of (batch_idx, z_idx, y_idx, x_idx). - enlarge (int, optional): Enlaged scale. Defaults to 1.0. - - Returns: - tuple[torch.Tensor]: Label of the points and - center offsets of the points. 
- """ - center_offsets = list() - pts_labels = list() - for i in range(len(gt_boxes3d)): - boxes3d = gt_boxes3d[i].tensor.cpu() - idx = torch.nonzero(nxyz[:, 0] == i).view(-1) - new_xyz = nxyz[idx, 1:].cpu() - - boxes3d[:, 3:6] *= enlarge - - pts_in_flag, center_offset = self.calculate_pts_offsets( - new_xyz, boxes3d) - pts_label = pts_in_flag.max(0)[0].byte() - pts_labels.append(pts_label) - center_offsets.append(center_offset) - - center_offsets = torch.cat(center_offsets).cuda() - pts_labels = torch.cat(pts_labels).to(center_offsets.device) - - return pts_labels, center_offsets - - def calculate_pts_offsets(self, points, boxes): - """Find all boxes in which each point is, as well as the offsets from - the box centers. - - Args: - points (torch.Tensor): [M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - tuple[torch.Tensor]: Point indices of boxes with the shape of - (T, M). Default background = 0. - And offsets from the box centers of points, - if it belows to the box, with the shape of (M, 3). - Default background = 0. - """ - boxes_num = len(boxes) - pts_num = len(points) - points = points.cuda() - boxes = boxes.to(points.device) - - box_idxs_of_pts = points_in_boxes_all(points[None, ...], boxes[None, - ...]) - - pts_indices = box_idxs_of_pts.squeeze(0).transpose(0, 1) - - center_offsets = torch.zeros_like(points).to(points.device) - - for i in range(boxes_num): - for j in range(pts_num): - if pts_indices[i][j] == 1: - center_offsets[j][0] = points[j][0] - boxes[i][0] - center_offsets[j][1] = points[j][1] - boxes[i][1] - center_offsets[j][2] = ( - points[j][2] - (boxes[i][2] + boxes[i][2] / 2.0)) - return pts_indices.cpu(), center_offsets.cpu() - - def aux_loss(self, points, point_cls, point_reg, gt_bboxes): - """Calculate auxiliary loss. - - Args: - points (torch.Tensor): Mean feature value of the points. - point_cls (torch.Tensor): Classificaion result of the points. - point_reg (torch.Tensor): Regression offsets of the points. - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - - Returns: - dict: Backbone features. - """ - num_boxes = len(gt_bboxes) - - pts_labels, center_targets = self.get_auxiliary_targets( - points, gt_bboxes) - - rpn_cls_target = pts_labels.long() - pos = (pts_labels > 0).float() - neg = (pts_labels == 0).float() - - pos_normalizer = pos.sum().clamp(min=1.0) - - cls_weights = pos + neg - reg_weights = pos - reg_weights = reg_weights / pos_normalizer - - aux_loss_cls = sigmoid_focal_loss( - point_cls, - rpn_cls_target, - weight=cls_weights, - avg_factor=pos_normalizer) - - aux_loss_cls /= num_boxes - - weight = reg_weights[..., None] - aux_loss_reg = smooth_l1_loss(point_reg, center_targets, beta=1 / 9.) - aux_loss_reg = torch.sum(aux_loss_reg * weight)[None] - aux_loss_reg /= num_boxes - - aux_loss_cls, aux_loss_reg = [aux_loss_cls], [aux_loss_reg] - - return dict(aux_loss_cls=aux_loss_cls, aux_loss_reg=aux_loss_reg) - - def make_auxiliary_points(self, - source_tensor, - target, - offset=(0., -40., -3.), - voxel_size=(.05, .05, .1)): - """Make auxiliary points for loss computation. - - Args: - source_tensor (torch.Tensor): (M, C) features to be propigated. - target (torch.Tensor): (N, 4) bxyz positions of the - target features. - offset (tuple[float], optional): Voxelization offset. - Defaults to (0., -40., -3.) - voxel_size (tuple[float], optional): Voxelization size. 
- Defaults to (.05, .05, .1) - - Returns: - torch.Tensor: (N, C) tensor of the features of the target features. - """ - # Tansfer tensor to points - source = source_tensor.indices.float() - offset = torch.Tensor(offset).to(source.device) - voxel_size = torch.Tensor(voxel_size).to(source.device) - source[:, 1:] = ( - source[:, [3, 2, 1]] * voxel_size + offset + .5 * voxel_size) - - source_feats = source_tensor.features[None, ...].transpose(1, 2) - - # Interplate auxiliary points - dist, idx = three_nn(target[None, ...], source[None, ...]) - dist_recip = 1.0 / (dist + 1e-8) - norm = torch.sum(dist_recip, dim=2, keepdim=True) - weight = dist_recip / norm - new_features = three_interpolate(source_feats.contiguous(), idx, - weight) - - return new_features.squeeze(0).transpose(0, 1) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/sparse_unet.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/sparse_unet.py deleted file mode 100644 index 005e34eb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/middle_encoders/sparse_unet.py +++ /dev/null @@ -1,300 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.ops.spconv import IS_SPCONV2_AVAILABLE - -if IS_SPCONV2_AVAILABLE: - from spconv.pytorch import SparseConvTensor, SparseSequential -else: - from mmcv.ops import SparseConvTensor, SparseSequential - -from mmcv.runner import BaseModule, auto_fp16 - -from mmdet3d.ops import SparseBasicBlock, make_sparse_convmodule -from mmdet3d.ops.sparse_block import replace_feature -from ..builder import MIDDLE_ENCODERS - - -@MIDDLE_ENCODERS.register_module() -class SparseUNet(BaseModule): - r"""SparseUNet for PartA^2. - - See the `paper `_ for more details. - - Args: - in_channels (int): The number of input channels. - sparse_shape (list[int]): The sparse shape of input tensor. - norm_cfg (dict): Config of normalization layer. - base_channels (int): Out channels for conv_input layer. - output_channels (int): Out channels for conv_out layer. - encoder_channels (tuple[tuple[int]]): - Convolutional channels of each encode block. - encoder_paddings (tuple[tuple[int]]): Paddings of each encode block. - decoder_channels (tuple[tuple[int]]): - Convolutional channels of each decode block. - decoder_paddings (tuple[tuple[int]]): Paddings of each decode block. 
- """ - - def __init__(self, - in_channels, - sparse_shape, - order=('conv', 'norm', 'act'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - base_channels=16, - output_channels=128, - encoder_channels=((16, ), (32, 32, 32), (64, 64, 64), (64, 64, - 64)), - encoder_paddings=((1, ), (1, 1, 1), (1, 1, 1), ((0, 1, 1), 1, - 1)), - decoder_channels=((64, 64, 64), (64, 64, 32), (32, 32, 16), - (16, 16, 16)), - decoder_paddings=((1, 0), (1, 0), (0, 0), (0, 1)), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.sparse_shape = sparse_shape - self.in_channels = in_channels - self.order = order - self.base_channels = base_channels - self.output_channels = output_channels - self.encoder_channels = encoder_channels - self.encoder_paddings = encoder_paddings - self.decoder_channels = decoder_channels - self.decoder_paddings = decoder_paddings - self.stage_num = len(self.encoder_channels) - self.fp16_enabled = False - # Spconv init all weight on its own - - assert isinstance(order, tuple) and len(order) == 3 - assert set(order) == {'conv', 'norm', 'act'} - - if self.order[0] != 'conv': # pre activate - self.conv_input = make_sparse_convmodule( - in_channels, - self.base_channels, - 3, - norm_cfg=norm_cfg, - padding=1, - indice_key='subm1', - conv_type='SubMConv3d', - order=('conv', )) - else: # post activate - self.conv_input = make_sparse_convmodule( - in_channels, - self.base_channels, - 3, - norm_cfg=norm_cfg, - padding=1, - indice_key='subm1', - conv_type='SubMConv3d') - - encoder_out_channels = self.make_encoder_layers( - make_sparse_convmodule, norm_cfg, self.base_channels) - self.make_decoder_layers(make_sparse_convmodule, norm_cfg, - encoder_out_channels) - - self.conv_out = make_sparse_convmodule( - encoder_out_channels, - self.output_channels, - kernel_size=(3, 1, 1), - stride=(2, 1, 1), - norm_cfg=norm_cfg, - padding=0, - indice_key='spconv_down2', - conv_type='SparseConv3d') - - @auto_fp16(apply_to=('voxel_features', )) - def forward(self, voxel_features, coors, batch_size): - """Forward of SparseUNet. - - Args: - voxel_features (torch.float32): Voxel features in shape [N, C]. - coors (torch.int32): Coordinates in shape [N, 4], - the columns in the order of (batch_idx, z_idx, y_idx, x_idx). - batch_size (int): Batch size. - - Returns: - dict[str, torch.Tensor]: Backbone features. 
- """ - coors = coors.int() - input_sp_tensor = SparseConvTensor(voxel_features, coors, - self.sparse_shape, batch_size) - x = self.conv_input(input_sp_tensor) - - encode_features = [] - for encoder_layer in self.encoder_layers: - x = encoder_layer(x) - encode_features.append(x) - - # for detection head - # [200, 176, 5] -> [200, 176, 2] - out = self.conv_out(encode_features[-1]) - spatial_features = out.dense() - - N, C, D, H, W = spatial_features.shape - spatial_features = spatial_features.view(N, C * D, H, W) - - # for segmentation head, with output shape: - # [400, 352, 11] <- [200, 176, 5] - # [800, 704, 21] <- [400, 352, 11] - # [1600, 1408, 41] <- [800, 704, 21] - # [1600, 1408, 41] <- [1600, 1408, 41] - decode_features = [] - x = encode_features[-1] - for i in range(self.stage_num, 0, -1): - x = self.decoder_layer_forward(encode_features[i - 1], x, - getattr(self, f'lateral_layer{i}'), - getattr(self, f'merge_layer{i}'), - getattr(self, f'upsample_layer{i}')) - decode_features.append(x) - - seg_features = decode_features[-1].features - - ret = dict( - spatial_features=spatial_features, seg_features=seg_features) - - return ret - - def decoder_layer_forward(self, x_lateral, x_bottom, lateral_layer, - merge_layer, upsample_layer): - """Forward of upsample and residual block. - - Args: - x_lateral (:obj:`SparseConvTensor`): Lateral tensor. - x_bottom (:obj:`SparseConvTensor`): Feature from bottom layer. - lateral_layer (SparseBasicBlock): Convolution for lateral tensor. - merge_layer (SparseSequential): Convolution for merging features. - upsample_layer (SparseSequential): Convolution for upsampling. - - Returns: - :obj:`SparseConvTensor`: Upsampled feature. - """ - x = lateral_layer(x_lateral) - x = replace_feature(x, torch.cat((x_bottom.features, x.features), - dim=1)) - x_merge = merge_layer(x) - x = self.reduce_channel(x, x_merge.features.shape[1]) - x = replace_feature(x, x_merge.features + x.features) - x = upsample_layer(x) - return x - - @staticmethod - def reduce_channel(x, out_channels): - """reduce channel for element-wise addition. - - Args: - x (:obj:`SparseConvTensor`): Sparse tensor, ``x.features`` - are in shape (N, C1). - out_channels (int): The number of channel after reduction. - - Returns: - :obj:`SparseConvTensor`: Channel reduced feature. - """ - features = x.features - n, in_channels = features.shape - assert (in_channels % out_channels - == 0) and (in_channels >= out_channels) - x = replace_feature(x, features.view(n, out_channels, -1).sum(dim=2)) - return x - - def make_encoder_layers(self, make_block, norm_cfg, in_channels): - """make encoder layers using sparse convs. - - Args: - make_block (method): A bounded function to build blocks. - norm_cfg (dict[str]): Config of normalization layer. - in_channels (int): The number of encoder input channels. - - Returns: - int: The number of encoder output channels. 
- """ - self.encoder_layers = SparseSequential() - - for i, blocks in enumerate(self.encoder_channels): - blocks_list = [] - for j, out_channels in enumerate(tuple(blocks)): - padding = tuple(self.encoder_paddings[i])[j] - # each stage started with a spconv layer - # except the first stage - if i != 0 and j == 0: - blocks_list.append( - make_block( - in_channels, - out_channels, - 3, - norm_cfg=norm_cfg, - stride=2, - padding=padding, - indice_key=f'spconv{i + 1}', - conv_type='SparseConv3d')) - else: - blocks_list.append( - make_block( - in_channels, - out_channels, - 3, - norm_cfg=norm_cfg, - padding=padding, - indice_key=f'subm{i + 1}', - conv_type='SubMConv3d')) - in_channels = out_channels - stage_name = f'encoder_layer{i + 1}' - stage_layers = SparseSequential(*blocks_list) - self.encoder_layers.add_module(stage_name, stage_layers) - return out_channels - - def make_decoder_layers(self, make_block, norm_cfg, in_channels): - """make decoder layers using sparse convs. - - Args: - make_block (method): A bounded function to build blocks. - norm_cfg (dict[str]): Config of normalization layer. - in_channels (int): The number of encoder input channels. - - Returns: - int: The number of encoder output channels. - """ - block_num = len(self.decoder_channels) - for i, block_channels in enumerate(self.decoder_channels): - paddings = self.decoder_paddings[i] - setattr( - self, f'lateral_layer{block_num - i}', - SparseBasicBlock( - in_channels, - block_channels[0], - conv_cfg=dict( - type='SubMConv3d', indice_key=f'subm{block_num - i}'), - norm_cfg=norm_cfg)) - setattr( - self, f'merge_layer{block_num - i}', - make_block( - in_channels * 2, - block_channels[1], - 3, - norm_cfg=norm_cfg, - padding=paddings[0], - indice_key=f'subm{block_num - i}', - conv_type='SubMConv3d')) - if block_num - i != 1: - setattr( - self, f'upsample_layer{block_num - i}', - make_block( - in_channels, - block_channels[2], - 3, - norm_cfg=norm_cfg, - indice_key=f'spconv{block_num - i}', - conv_type='SparseInverseConv3d')) - else: - # use submanifold conv instead of inverse conv - # in the last block - setattr( - self, f'upsample_layer{block_num - i}', - make_block( - in_channels, - block_channels[2], - 3, - norm_cfg=norm_cfg, - padding=paddings[1], - indice_key='subm1', - conv_type='SubMConv3d')) - in_channels = block_channels[2] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/__init__.py deleted file mode 100644 index 34df79a2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .edge_fusion_module import EdgeFusionModule -from .transformer import GroupFree3DMHA -from .vote_module import VoteModule - -__all__ = ['VoteModule', 'GroupFree3DMHA', 'EdgeFusionModule'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/edge_fusion_module.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/edge_fusion_module.py deleted file mode 100644 index 2d9e09ee..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/edge_fusion_module.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch import nn as nn -from torch.nn import functional as F - - -class EdgeFusionModule(BaseModule): - """Edge Fusion Module for feature map. 
- - Args: - out_channels (int): The number of output channels. - feat_channels (int): The number of channels in feature map - during edge feature fusion. - kernel_size (int, optional): Kernel size of convolution. - Default: 3. - act_cfg (dict, optional): Config of activation. - Default: dict(type='ReLU'). - norm_cfg (dict, optional): Config of normalization. - Default: dict(type='BN1d')). - """ - - def __init__(self, - out_channels, - feat_channels, - kernel_size=3, - act_cfg=dict(type='ReLU'), - norm_cfg=dict(type='BN1d')): - super().__init__() - self.edge_convs = nn.Sequential( - ConvModule( - feat_channels, - feat_channels, - kernel_size=kernel_size, - padding=kernel_size // 2, - conv_cfg=dict(type='Conv1d'), - norm_cfg=norm_cfg, - act_cfg=act_cfg), - nn.Conv1d(feat_channels, out_channels, kernel_size=1)) - self.feat_channels = feat_channels - - def forward(self, features, fused_features, edge_indices, edge_lens, - output_h, output_w): - """Forward pass. - - Args: - features (torch.Tensor): Different representative features - for fusion. - fused_features (torch.Tensor): Different representative - features to be fused. - edge_indices (torch.Tensor): Batch image edge indices. - edge_lens (list[int]): List of edge length of each image. - output_h (int): Height of output feature map. - output_w (int): Width of output feature map. - - Returns: - torch.Tensor: Fused feature maps. - """ - batch_size = features.shape[0] - # normalize - grid_edge_indices = edge_indices.view(batch_size, -1, 1, 2).float() - grid_edge_indices[..., 0] = \ - grid_edge_indices[..., 0] / (output_w - 1) * 2 - 1 - grid_edge_indices[..., 1] = \ - grid_edge_indices[..., 1] / (output_h - 1) * 2 - 1 - - # apply edge fusion - edge_features = F.grid_sample( - features, grid_edge_indices, align_corners=True).squeeze(-1) - edge_output = self.edge_convs(edge_features) - - for k in range(batch_size): - edge_indice_k = edge_indices[k, :edge_lens[k]] - fused_features[k, :, edge_indice_k[:, 1], - edge_indice_k[:, 0]] += edge_output[ - k, :, :edge_lens[k]] - - return fused_features diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/transformer.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/transformer.py deleted file mode 100644 index 4f9a833e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/transformer.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn.bricks.registry import ATTENTION -from mmcv.cnn.bricks.transformer import POSITIONAL_ENCODING, MultiheadAttention -from torch import nn as nn - - -@ATTENTION.register_module() -class GroupFree3DMHA(MultiheadAttention): - """A warpper for torch.nn.MultiheadAttention for GroupFree3D. - - This module implements MultiheadAttention with identity connection, - and positional encoding used in DETR is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. Same as - `nn.MultiheadAttention`. - attn_drop (float, optional): A Dropout layer on attn_output_weights. - Defaults to 0.0. - proj_drop (float, optional): A Dropout layer. Defaults to 0.0. - dropout_layer (obj:`ConfigDict`, optional): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`, optional): The Config for - initialization. Default: None. - batch_first (bool, optional): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Defaults to False. 
- """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=dict(type='DropOut', drop_prob=0.), - init_cfg=None, - batch_first=False, - **kwargs): - super().__init__(embed_dims, num_heads, attn_drop, proj_drop, - dropout_layer, init_cfg, batch_first, **kwargs) - - def forward(self, - query, - key, - value, - identity, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `GroupFree3DMHA`. - - **kwargs allow passing a more general data flow when combining - with other operations in `transformerlayer`. - - Args: - query (Tensor): The input query with shape [num_queries, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - If None, the ``query`` will be used. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. - If None, the `key` will be used. - identity (Tensor): This tensor, with the same shape as x, - will be used for the identity link. If None, `x` will be used. - query_pos (Tensor, optional): The positional encoding for query, - with the same shape as `x`. Defaults to None. - If not None, it will be added to `x` before forward function. - key_pos (Tensor, optional): The positional encoding for `key`, - with the same shape as `key`. Defaults to None. If not None, - it will be added to `key` before forward function. If None, - and `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. Defaults to None. - attn_mask (Tensor, optional): ByteTensor mask with shape - [num_queries, num_keys]. - Same in `nn.MultiheadAttention.forward`. Defaults to None. - key_padding_mask (Tensor, optional): ByteTensor with shape - [bs, num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - - Returns: - Tensor: forwarded results with shape [num_queries, bs, embed_dims]. - """ - - if hasattr(self, 'operation_name'): - if self.operation_name == 'self_attn': - value = value + query_pos - elif self.operation_name == 'cross_attn': - value = value + key_pos - else: - raise NotImplementedError( - f'{self.__class__.name} ' - f"can't be used as {self.operation_name}") - else: - value = value + query_pos - - return super(GroupFree3DMHA, self).forward( - query=query, - key=key, - value=value, - identity=identity, - query_pos=query_pos, - key_pos=key_pos, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask, - **kwargs) - - -@POSITIONAL_ENCODING.register_module() -class ConvBNPositionalEncoding(nn.Module): - """Absolute position embedding with Conv learning. - - Args: - input_channel (int): input features dim. - num_pos_feats (int, optional): output position features dim. - Defaults to 288 to be consistent with seed features dim. - """ - - def __init__(self, input_channel, num_pos_feats=288): - super().__init__() - self.position_embedding_head = nn.Sequential( - nn.Conv1d(input_channel, num_pos_feats, kernel_size=1), - nn.BatchNorm1d(num_pos_feats), nn.ReLU(inplace=True), - nn.Conv1d(num_pos_feats, num_pos_feats, kernel_size=1)) - - def forward(self, xyz): - """Forward pass. - - Args: - xyz (Tensor): (B, N, 3) the coordinates to embed. - - Returns: - Tensor: (B, num_pos_feats, N) the embedded position features. 
- """ - xyz = xyz.permute(0, 2, 1) - position_embedding = self.position_embedding_head(xyz) - return position_embedding diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/vote_module.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/vote_module.py deleted file mode 100644 index 5cc52ad9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/model_utils/vote_module.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv import is_tuple_of -from mmcv.cnn import ConvModule -from torch import nn as nn - -from mmdet3d.models.builder import build_loss - - -class VoteModule(nn.Module): - """Vote module. - - Generate votes from seed point features. - - Args: - in_channels (int): Number of channels of seed point features. - vote_per_seed (int, optional): Number of votes generated from - each seed point. Default: 1. - gt_per_seed (int, optional): Number of ground truth votes generated - from each seed point. Default: 3. - num_points (int, optional): Number of points to be used for voting. - Default: 1. - conv_channels (tuple[int], optional): Out channels of vote - generating convolution. Default: (16, 16). - conv_cfg (dict, optional): Config of convolution. - Default: dict(type='Conv1d'). - norm_cfg (dict, optional): Config of normalization. - Default: dict(type='BN1d'). - norm_feats (bool, optional): Whether to normalize features. - Default: True. - with_res_feat (bool, optional): Whether to predict residual features. - Default: True. - vote_xyz_range (list[float], optional): - The range of points translation. Default: None. - vote_loss (dict, optional): Config of vote loss. Default: None. - """ - - def __init__(self, - in_channels, - vote_per_seed=1, - gt_per_seed=3, - num_points=-1, - conv_channels=(16, 16), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - norm_feats=True, - with_res_feat=True, - vote_xyz_range=None, - vote_loss=None): - super().__init__() - self.in_channels = in_channels - self.vote_per_seed = vote_per_seed - self.gt_per_seed = gt_per_seed - self.num_points = num_points - self.norm_feats = norm_feats - self.with_res_feat = with_res_feat - - assert vote_xyz_range is None or is_tuple_of(vote_xyz_range, float) - self.vote_xyz_range = vote_xyz_range - - if vote_loss is not None: - self.vote_loss = build_loss(vote_loss) - - prev_channels = in_channels - vote_conv_list = list() - for k in range(len(conv_channels)): - vote_conv_list.append( - ConvModule( - prev_channels, - conv_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=True, - inplace=True)) - prev_channels = conv_channels[k] - self.vote_conv = nn.Sequential(*vote_conv_list) - - # conv_out predicts coordinate and residual features - if with_res_feat: - out_channel = (3 + in_channels) * self.vote_per_seed - else: - out_channel = 3 * self.vote_per_seed - self.conv_out = nn.Conv1d(prev_channels, out_channel, 1) - - def forward(self, seed_points, seed_feats): - """forward. - - Args: - seed_points (torch.Tensor): Coordinate of the seed - points in shape (B, N, 3). - seed_feats (torch.Tensor): Features of the seed points in shape - (B, C, N). - - Returns: - tuple[torch.Tensor]: - - - vote_points: Voted xyz based on the seed points - with shape (B, M, 3), ``M=num_seed*vote_per_seed``. - - vote_features: Voted features based on the seed points with - shape (B, C, M) where ``M=num_seed*vote_per_seed``, - ``C=vote_feature_dim``. 
- """ - if self.num_points != -1: - assert self.num_points < seed_points.shape[1], \ - f'Number of vote points ({self.num_points}) should be '\ - f'smaller than seed points size ({seed_points.shape[1]})' - seed_points = seed_points[:, :self.num_points] - seed_feats = seed_feats[..., :self.num_points] - - batch_size, feat_channels, num_seed = seed_feats.shape - num_vote = num_seed * self.vote_per_seed - x = self.vote_conv(seed_feats) - # (batch_size, (3+out_dim)*vote_per_seed, num_seed) - votes = self.conv_out(x) - - votes = votes.transpose(2, 1).view(batch_size, num_seed, - self.vote_per_seed, -1) - - offset = votes[:, :, :, 0:3] - if self.vote_xyz_range is not None: - limited_offset_list = [] - for axis in range(len(self.vote_xyz_range)): - limited_offset_list.append(offset[..., axis].clamp( - min=-self.vote_xyz_range[axis], - max=self.vote_xyz_range[axis])) - limited_offset = torch.stack(limited_offset_list, -1) - vote_points = (seed_points.unsqueeze(2) + - limited_offset).contiguous() - else: - vote_points = (seed_points.unsqueeze(2) + offset).contiguous() - vote_points = vote_points.view(batch_size, num_vote, 3) - offset = offset.reshape(batch_size, num_vote, 3).transpose(2, 1) - - if self.with_res_feat: - res_feats = votes[:, :, :, 3:] - vote_feats = (seed_feats.transpose(2, 1).unsqueeze(2) + - res_feats).contiguous() - vote_feats = vote_feats.view(batch_size, - num_vote, feat_channels).transpose( - 2, 1).contiguous() - - if self.norm_feats: - features_norm = torch.norm(vote_feats, p=2, dim=1) - vote_feats = vote_feats.div(features_norm.unsqueeze(1)) - else: - vote_feats = seed_feats - return vote_points, vote_feats, offset - - def get_loss(self, seed_points, vote_points, seed_indices, - vote_targets_mask, vote_targets): - """Calculate loss of voting module. - - Args: - seed_points (torch.Tensor): Coordinate of the seed points. - vote_points (torch.Tensor): Coordinate of the vote points. - seed_indices (torch.Tensor): Indices of seed points in raw points. - vote_targets_mask (torch.Tensor): Mask of valid vote targets. - vote_targets (torch.Tensor): Targets of votes. - - Returns: - torch.Tensor: Weighted vote loss. - """ - batch_size, num_seed = seed_points.shape[:2] - - seed_gt_votes_mask = torch.gather(vote_targets_mask, 1, - seed_indices).float() - - seed_indices_expand = seed_indices.unsqueeze(-1).repeat( - 1, 1, 3 * self.gt_per_seed) - seed_gt_votes = torch.gather(vote_targets, 1, seed_indices_expand) - seed_gt_votes += seed_points.repeat(1, 1, self.gt_per_seed) - - weight = seed_gt_votes_mask / (torch.sum(seed_gt_votes_mask) + 1e-6) - distance = self.vote_loss( - vote_points.view(batch_size * num_seed, -1, 3), - seed_gt_votes.view(batch_size * num_seed, -1, 3), - dst_weight=weight.view(batch_size * num_seed, 1))[1] - vote_loss = torch.sum(torch.min(distance, dim=1)[0]) - - return vote_loss diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/__init__.py deleted file mode 100644 index 5443d357..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.models.necks.fpn import FPN -from .dla_neck import DLANeck -from .imvoxel_neck import OutdoorImVoxelNeck -from .pointnet2_fp_neck import PointNetFPNeck -from .second_fpn import SECONDFPN - -__all__ = [ - 'FPN', 'SECONDFPN', 'OutdoorImVoxelNeck', 'PointNetFPNeck', 'DLANeck' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/dla_neck.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/dla_neck.py deleted file mode 100644 index c32e8bb8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/dla_neck.py +++ /dev/null @@ -1,233 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import numpy as np -from mmcv.cnn import ConvModule, build_conv_layer -from mmcv.runner import BaseModule -from torch import nn as nn - -from ..builder import NECKS - - -def fill_up_weights(up): - """Simulated bilinear upsampling kernel. - - Args: - up (nn.Module): ConvTranspose2d module. - """ - w = up.weight.data - f = math.ceil(w.size(2) / 2) - c = (2 * f - 1 - f % 2) / (2. * f) - for i in range(w.size(2)): - for j in range(w.size(3)): - w[0, 0, i, j] = \ - (1 - math.fabs(i / f - c)) * (1 - math.fabs(j / f - c)) - for c in range(1, w.size(0)): - w[c, 0, :, :] = w[0, 0, :, :] - - -class IDAUpsample(BaseModule): - """Iterative Deep Aggregation (IDA) Upsampling module to upsample features - of different scales to a similar scale. - - Args: - out_channels (int): Number of output channels for DeformConv. - in_channels (List[int]): List of input channels of multi-scale - feature maps. - kernel_sizes (List[int]): List of size of the convolving - kernel of different scales. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - use_dcn (bool, optional): If True, use DCNv2. Default: True. - """ - - def __init__( - self, - out_channels, - in_channels, - kernel_sizes, - norm_cfg=None, - use_dcn=True, - init_cfg=None, - ): - super(IDAUpsample, self).__init__(init_cfg) - self.use_dcn = use_dcn - self.projs = nn.ModuleList() - self.ups = nn.ModuleList() - self.nodes = nn.ModuleList() - - for i in range(1, len(in_channels)): - in_channel = in_channels[i] - up_kernel_size = int(kernel_sizes[i]) - proj = ConvModule( - in_channel, - out_channels, - 3, - padding=1, - bias=True, - conv_cfg=dict(type='DCNv2') if self.use_dcn else None, - norm_cfg=norm_cfg) - node = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - bias=True, - conv_cfg=dict(type='DCNv2') if self.use_dcn else None, - norm_cfg=norm_cfg) - up = build_conv_layer( - dict(type='deconv'), - out_channels, - out_channels, - up_kernel_size * 2, - stride=up_kernel_size, - padding=up_kernel_size // 2, - output_padding=0, - groups=out_channels, - bias=False) - - self.projs.append(proj) - self.ups.append(up) - self.nodes.append(node) - - def forward(self, mlvl_features, start_level, end_level): - """Forward function. - - Args: - mlvl_features (list[torch.Tensor]): Features from multiple layers. - start_level (int): Start layer for feature upsampling. - end_level (int): End layer for feature upsampling. 
- """ - for i in range(start_level, end_level - 1): - upsample = self.ups[i - start_level] - project = self.projs[i - start_level] - mlvl_features[i + 1] = upsample(project(mlvl_features[i + 1])) - node = self.nodes[i - start_level] - mlvl_features[i + 1] = node(mlvl_features[i + 1] + - mlvl_features[i]) - - -class DLAUpsample(BaseModule): - """Deep Layer Aggregation (DLA) Upsampling module for different scales - feature extraction, upsampling and fusion, It consists of groups of - IDAupsample modules. - - Args: - start_level (int): The start layer. - channels (List[int]): List of input channels of multi-scale - feature maps. - scales(List[int]): List of scale of different layers' feature. - in_channels (NoneType, optional): List of input channels of - different scales. Default: None. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - use_dcn (bool, optional): Whether to use dcn in IDAup module. - Default: True. - """ - - def __init__(self, - start_level, - channels, - scales, - in_channels=None, - norm_cfg=None, - use_dcn=True, - init_cfg=None): - super(DLAUpsample, self).__init__(init_cfg) - self.start_level = start_level - if in_channels is None: - in_channels = channels - self.channels = channels - channels = list(channels) - scales = np.array(scales, dtype=int) - for i in range(len(channels) - 1): - j = -i - 2 - setattr( - self, 'ida_{}'.format(i), - IDAUpsample(channels[j], in_channels[j:], - scales[j:] // scales[j], norm_cfg, use_dcn)) - scales[j + 1:] = scales[j] - in_channels[j + 1:] = [channels[j] for _ in channels[j + 1:]] - - def forward(self, mlvl_features): - """Forward function. - - Args: - mlvl_features(list[torch.Tensor]): Features from multi-scale - layers. - - Returns: - tuple[torch.Tensor]: Up-sampled features of different layers. - """ - outs = [mlvl_features[-1]] - for i in range(len(mlvl_features) - self.start_level - 1): - ida = getattr(self, 'ida_{}'.format(i)) - ida(mlvl_features, len(mlvl_features) - i - 2, len(mlvl_features)) - outs.insert(0, mlvl_features[-1]) - return outs - - -@NECKS.register_module() -class DLANeck(BaseModule): - """DLA Neck. - - Args: - in_channels (list[int], optional): List of input channels - of multi-scale feature map. - start_level (int, optional): The scale level where upsampling - starts. Default: 2. - end_level (int, optional): The scale level where upsampling - ends. Default: 5. - norm_cfg (dict, optional): Config dict for normalization - layer. Default: None. - use_dcn (bool, optional): Whether to use dcn in IDAup module. - Default: True. 
- """ - - def __init__(self, - in_channels=[16, 32, 64, 128, 256, 512], - start_level=2, - end_level=5, - norm_cfg=None, - use_dcn=True, - init_cfg=None): - super(DLANeck, self).__init__(init_cfg) - self.start_level = start_level - self.end_level = end_level - scales = [2**i for i in range(len(in_channels[self.start_level:]))] - self.dla_up = DLAUpsample( - start_level=self.start_level, - channels=in_channels[self.start_level:], - scales=scales, - norm_cfg=norm_cfg, - use_dcn=use_dcn) - self.ida_up = IDAUpsample( - in_channels[self.start_level], - in_channels[self.start_level:self.end_level], - [2**i for i in range(self.end_level - self.start_level)], norm_cfg, - use_dcn) - - def forward(self, x): - mlvl_features = [x[i] for i in range(len(x))] - mlvl_features = self.dla_up(mlvl_features) - outs = [] - for i in range(self.end_level - self.start_level): - outs.append(mlvl_features[i].clone()) - self.ida_up(outs, 0, len(outs)) - return [outs[-1]] - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.ConvTranspose2d): - # In order to be consistent with the source code, - # reset the ConvTranspose2d initialization parameters - m.reset_parameters() - # Simulated bilinear upsampling kernel - fill_up_weights(m) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Conv2d): - # In order to be consistent with the source code, - # reset the Conv2d initialization parameters - m.reset_parameters() diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/imvoxel_neck.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/imvoxel_neck.py deleted file mode 100644 index 88814916..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/imvoxel_neck.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule -from torch import nn - -from ..builder import NECKS - - -@NECKS.register_module() -class OutdoorImVoxelNeck(nn.Module): - """Neck for ImVoxelNet outdoor scenario. - - Args: - in_channels (int): Input channels of multi-scale feature map. - out_channels (int): Output channels of multi-scale feature map. - """ - - def __init__(self, in_channels, out_channels): - super().__init__() - self.model = nn.Sequential( - ResModule(in_channels), - ConvModule( - in_channels=in_channels, - out_channels=in_channels * 2, - kernel_size=3, - stride=(1, 1, 2), - padding=1, - conv_cfg=dict(type='Conv3d'), - norm_cfg=dict(type='BN3d'), - act_cfg=dict(type='ReLU', inplace=True)), - ResModule(in_channels * 2), - ConvModule( - in_channels=in_channels * 2, - out_channels=in_channels * 4, - kernel_size=3, - stride=(1, 1, 2), - padding=1, - conv_cfg=dict(type='Conv3d'), - norm_cfg=dict(type='BN3d'), - act_cfg=dict(type='ReLU', inplace=True)), - ResModule(in_channels * 4), - ConvModule( - in_channels=in_channels * 4, - out_channels=out_channels, - kernel_size=3, - padding=(1, 1, 0), - conv_cfg=dict(type='Conv3d'), - norm_cfg=dict(type='BN3d'), - act_cfg=dict(type='ReLU', inplace=True))) - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): of shape (N, C_in, N_x, N_y, N_z). - - Returns: - list[torch.Tensor]: of shape (N, C_out, N_y, N_x). - """ - x = self.model.forward(x) - assert x.shape[-1] == 1 - # Anchor3DHead axis order is (y, x). - return [x[..., 0].transpose(-1, -2)] - - def init_weights(self): - """Initialize weights of neck.""" - pass - - -class ResModule(nn.Module): - """3d residual block for ImVoxelNeck. 
- - Args: - n_channels (int): Input channels of a feature map. - """ - - def __init__(self, n_channels): - super().__init__() - self.conv0 = ConvModule( - in_channels=n_channels, - out_channels=n_channels, - kernel_size=3, - padding=1, - conv_cfg=dict(type='Conv3d'), - norm_cfg=dict(type='BN3d'), - act_cfg=dict(type='ReLU', inplace=True)) - self.conv1 = ConvModule( - in_channels=n_channels, - out_channels=n_channels, - kernel_size=3, - padding=1, - conv_cfg=dict(type='Conv3d'), - norm_cfg=dict(type='BN3d'), - act_cfg=None) - self.activation = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): of shape (N, C, N_x, N_y, N_z). - - Returns: - torch.Tensor: 5d feature map. - """ - identity = x - x = self.conv0(x) - x = self.conv1(x) - x = identity + x - x = self.activation(x) - return x diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/pointnet2_fp_neck.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/pointnet2_fp_neck.py deleted file mode 100644 index 62db0c10..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/pointnet2_fp_neck.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner import BaseModule -from torch import nn as nn - -from mmdet3d.ops import PointFPModule -from ..builder import NECKS - - -@NECKS.register_module() -class PointNetFPNeck(BaseModule): - r"""PointNet FP Module used in PointRCNN. - - Refer to the `official code `_. - - .. code-block:: none - - sa_n ---------------------------------------- - | - ... --------------------------------- | - | | - sa_1 ------------- | | - | | | - sa_0 -> fp_0 -> fp_module ->fp_1 -> ... -> fp_module -> fp_n - - sa_n including sa_xyz (torch.Tensor) and sa_features (torch.Tensor) - fp_n including fp_xyz (torch.Tensor) and fp_features (torch.Tensor) - - Args: - fp_channels (tuple[tuple[int]]): Tuple of mlp channels in FP modules. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, fp_channels, init_cfg=None): - super(PointNetFPNeck, self).__init__(init_cfg=init_cfg) - - self.num_fp = len(fp_channels) - self.FP_modules = nn.ModuleList() - for cur_fp_mlps in fp_channels: - self.FP_modules.append(PointFPModule(mlp_channels=cur_fp_mlps)) - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone, which may contain - the following keys and values: - - - sa_xyz (list[torch.Tensor]): Points of each sa module - in shape (N, 3). - - sa_features (list[torch.Tensor]): Output features of - each sa module in shape (N, M). - - Returns: - list[torch.Tensor]: Coordinates of multiple levels of points. - list[torch.Tensor]: Features of multiple levels of points. - """ - sa_xyz = feat_dict['sa_xyz'] - sa_features = feat_dict['sa_features'] - assert len(sa_xyz) == len(sa_features) - - return sa_xyz, sa_features - - def forward(self, feat_dict): - """Forward pass. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - dict[str, torch.Tensor]: Outputs of the Neck. - - - fp_xyz (torch.Tensor): The coordinates of fp features. - - fp_features (torch.Tensor): The features from the last - feature propagation layers. 
- """ - sa_xyz, sa_features = self._extract_input(feat_dict) - - fp_feature = sa_features[-1] - fp_xyz = sa_xyz[-1] - - for i in range(self.num_fp): - # consume the points in a bottom-up manner - fp_feature = self.FP_modules[i](sa_xyz[-(i + 2)], sa_xyz[-(i + 1)], - sa_features[-(i + 2)], fp_feature) - fp_xyz = sa_xyz[-(i + 2)] - - ret = dict(fp_xyz=fp_xyz, fp_features=fp_feature) - return ret diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/second_fpn.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/second_fpn.py deleted file mode 100644 index ef1b3de6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/necks/second_fpn.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.cnn import build_conv_layer, build_norm_layer, build_upsample_layer -from mmcv.runner import BaseModule, auto_fp16 -from torch import nn as nn - -from ..builder import NECKS - - -@NECKS.register_module() -class SECONDFPN(BaseModule): - """FPN used in SECOND/PointPillars/PartA2/MVXNet. - - Args: - in_channels (list[int]): Input channels of multi-scale feature maps. - out_channels (list[int]): Output channels of feature maps. - upsample_strides (list[int]): Strides used to upsample the - feature maps. - norm_cfg (dict): Config dict of normalization layers. - upsample_cfg (dict): Config dict of upsample layers. - conv_cfg (dict): Config dict of conv layers. - use_conv_for_no_stride (bool): Whether to use conv when stride is 1. - """ - - def __init__(self, - in_channels=[128, 128, 256], - out_channels=[256, 256, 256], - upsample_strides=[1, 2, 4], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - upsample_cfg=dict(type='deconv', bias=False), - conv_cfg=dict(type='Conv2d', bias=False), - use_conv_for_no_stride=False, - init_cfg=None): - # if for GroupNorm, - # cfg is dict(type='GN', num_groups=num_groups, eps=1e-3, affine=True) - super(SECONDFPN, self).__init__(init_cfg=init_cfg) - assert len(out_channels) == len(upsample_strides) == len(in_channels) - self.in_channels = in_channels - self.out_channels = out_channels - self.fp16_enabled = False - - deblocks = [] - for i, out_channel in enumerate(out_channels): - stride = upsample_strides[i] - if stride > 1 or (stride == 1 and not use_conv_for_no_stride): - upsample_layer = build_upsample_layer( - upsample_cfg, - in_channels=in_channels[i], - out_channels=out_channel, - kernel_size=upsample_strides[i], - stride=upsample_strides[i]) - else: - stride = np.round(1 / stride).astype(np.int64) - upsample_layer = build_conv_layer( - conv_cfg, - in_channels=in_channels[i], - out_channels=out_channel, - kernel_size=stride, - stride=stride) - - deblock = nn.Sequential(upsample_layer, - build_norm_layer(norm_cfg, out_channel)[1], - nn.ReLU(inplace=True)) - deblocks.append(deblock) - self.deblocks = nn.ModuleList(deblocks) - - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='ConvTranspose2d'), - dict(type='Constant', layer='NaiveSyncBatchNorm2d', val=1.0) - ] - - @auto_fp16() - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): 4D Tensor in (N, C, H, W) shape. - - Returns: - list[torch.Tensor]: Multi-level feature maps. 
- """ - assert len(x) == len(self.in_channels) - ups = [deblock(x[i]) for i, deblock in enumerate(self.deblocks)] - - if len(ups) > 1: - out = torch.cat(ups, dim=1) - else: - out = ups[0] - return [out] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/__init__.py deleted file mode 100644 index 1cc4dc6e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -from .base_3droi_head import Base3DRoIHead -# from .bbox_heads import PartA2BboxHead -from .h3d_roi_head import H3DRoIHead -from .mask_heads import PointwiseSemanticHead, PrimitiveHead -from .part_aggregation_roi_head import PartAggregationROIHead -from .point_rcnn_roi_head import PointRCNNRoIHead -from .roi_extractors import Single3DRoIAwareExtractor, SingleRoIExtractor - -__all__ = [ - 'Base3DRoIHead', 'PartAggregationROIHead', 'PointwiseSemanticHead', - 'Single3DRoIAwareExtractor', 'SingleRoIExtractor', - 'H3DRoIHead', 'PrimitiveHead', 'PointRCNNRoIHead' -] - -# __all__ = [ -# 'Base3DRoIHead', 'PartAggregationROIHead', 'PointwiseSemanticHead', -# 'Single3DRoIAwareExtractor', 'PartA2BboxHead', 'SingleRoIExtractor', -# 'H3DRoIHead', 'PrimitiveHead', 'PointRCNNRoIHead' -# ] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/base_3droi_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/base_3droi_head.py deleted file mode 100644 index e1816ff6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/base_3droi_head.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from mmcv.runner import BaseModule - - -class Base3DRoIHead(BaseModule, metaclass=ABCMeta): - """Base class for 3d RoIHeads.""" - - def __init__(self, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(Base3DRoIHead, self).__init__(init_cfg=init_cfg) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if bbox_head is not None: - self.init_bbox_head(bbox_head) - - if mask_head is not None: - self.init_mask_head(mask_roi_extractor, mask_head) - - self.init_assigner_sampler() - - @property - def with_bbox(self): - """bool: whether the RoIHead has box head""" - return hasattr(self, 'bbox_head') and self.bbox_head is not None - - @property - def with_mask(self): - """bool: whether the RoIHead has mask head""" - return hasattr(self, 'mask_head') and self.mask_head is not None - - @abstractmethod - def init_bbox_head(self): - """Initialize the box head.""" - pass - - @abstractmethod - def init_mask_head(self): - """Initialize maek head.""" - pass - - @abstractmethod - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - pass - - @abstractmethod - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - **kwargs): - """Forward function during training. - - Args: - x (dict): Contains features from the first stage. - img_metas (list[dict]): Meta info of each image. - proposal_list (list[dict]): Proposal information from rpn. - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): - GT bboxes of each sample. The bboxes are encapsulated - by 3D box structures. 
- gt_labels (list[torch.LongTensor]): GT labels of each sample. - gt_bboxes_ignore (list[torch.Tensor], optional): - Ground truth boxes to be ignored. - - Returns: - dict[str, torch.Tensor]: Losses from each head. - """ - pass - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False, - **kwargs): - """Test without augmentation.""" - pass - - def aug_test(self, x, proposal_list, img_metas, rescale=False, **kwargs): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - pass diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/__init__.py deleted file mode 100644 index fd7a6b04..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.roi_heads.bbox_heads import (BBoxHead, ConvFCBBoxHead, - DoubleConvFCBBoxHead, - Shared2FCBBoxHead, - Shared4Conv1FCBBoxHead) -from .h3d_bbox_head import H3DBboxHead -from .parta2_bbox_head import PartA2BboxHead -from .point_rcnn_bbox_head import PointRCNNBboxHead - -__all__ = [ - 'BBoxHead', 'ConvFCBBoxHead', 'Shared2FCBBoxHead', - 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', 'PartA2BboxHead', - 'H3DBboxHead', 'PointRCNNBboxHead' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/h3d_bbox_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/h3d_bbox_head.py deleted file mode 100644 index a8bd11a2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/h3d_bbox_head.py +++ /dev/null @@ -1,925 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.core.bbox import DepthInstance3DBoxes -from mmdet3d.core.post_processing import aligned_3d_nms -from mmdet3d.models.builder import HEADS, build_loss -from mmdet3d.models.losses import chamfer_distance -from mmdet3d.ops import build_sa_module -from mmdet.core import build_bbox_coder, multi_apply - - -@HEADS.register_module() -class H3DBboxHead(BaseModule): - r"""Bbox head of `H3DNet `_. - - Args: - num_classes (int): The number of classes. - surface_matching_cfg (dict): Config for surface primitive matching. - line_matching_cfg (dict): Config for line primitive matching. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for encoding and - decoding boxes. - train_cfg (dict): Config for training. - test_cfg (dict): Config for testing. - gt_per_seed (int): Number of ground truth votes generated - from each seed point. - num_proposal (int): Number of proposal votes generated. - feat_channels (tuple[int]): Convolution channels of - prediction layer. - primitive_feat_refine_streams (int): The number of mlps to - refine primitive feature. - primitive_refine_channels (tuple[int]): Convolution channels of - prediction layer. - upper_thresh (float): Threshold for line matching. - surface_thresh (float): Threshold for surface matching. - line_thresh (float): Threshold for line matching. - conv_cfg (dict): Config of convolution in prediction layer. - norm_cfg (dict): Config of BN in prediction layer. - objectness_loss (dict): Config of objectness loss. - center_loss (dict): Config of center loss. 
- dir_class_loss (dict): Config of direction classification loss. - dir_res_loss (dict): Config of direction residual regression loss. - size_class_loss (dict): Config of size classification loss. - size_res_loss (dict): Config of size residual regression loss. - semantic_loss (dict): Config of point-wise semantic segmentation loss. - cues_objectness_loss (dict): Config of cues objectness loss. - cues_semantic_loss (dict): Config of cues semantic loss. - proposal_objectness_loss (dict): Config of proposal objectness - loss. - primitive_center_loss (dict): Config of primitive center regression - loss. - """ - - def __init__(self, - num_classes, - suface_matching_cfg, - line_matching_cfg, - bbox_coder, - train_cfg=None, - test_cfg=None, - gt_per_seed=1, - num_proposal=256, - feat_channels=(128, 128), - primitive_feat_refine_streams=2, - primitive_refine_channels=[128, 128, 128], - upper_thresh=100.0, - surface_thresh=0.5, - line_thresh=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=None, - center_loss=None, - dir_class_loss=None, - dir_res_loss=None, - size_class_loss=None, - size_res_loss=None, - semantic_loss=None, - cues_objectness_loss=None, - cues_semantic_loss=None, - proposal_objectness_loss=None, - primitive_center_loss=None, - init_cfg=None): - super(H3DBboxHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.gt_per_seed = gt_per_seed - self.num_proposal = num_proposal - self.with_angle = bbox_coder['with_rot'] - self.upper_thresh = upper_thresh - self.surface_thresh = surface_thresh - self.line_thresh = line_thresh - - self.objectness_loss = build_loss(objectness_loss) - self.center_loss = build_loss(center_loss) - self.dir_class_loss = build_loss(dir_class_loss) - self.dir_res_loss = build_loss(dir_res_loss) - self.size_class_loss = build_loss(size_class_loss) - self.size_res_loss = build_loss(size_res_loss) - self.semantic_loss = build_loss(semantic_loss) - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.num_sizes = self.bbox_coder.num_sizes - self.num_dir_bins = self.bbox_coder.num_dir_bins - - self.cues_objectness_loss = build_loss(cues_objectness_loss) - self.cues_semantic_loss = build_loss(cues_semantic_loss) - self.proposal_objectness_loss = build_loss(proposal_objectness_loss) - self.primitive_center_loss = build_loss(primitive_center_loss) - - assert suface_matching_cfg['mlp_channels'][-1] == \ - line_matching_cfg['mlp_channels'][-1] - - # surface center matching - self.surface_center_matcher = build_sa_module(suface_matching_cfg) - # line center matching - self.line_center_matcher = build_sa_module(line_matching_cfg) - - # Compute the matching scores - matching_feat_dims = suface_matching_cfg['mlp_channels'][-1] - self.matching_conv = ConvModule( - matching_feat_dims, - matching_feat_dims, - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True) - self.matching_pred = nn.Conv1d(matching_feat_dims, 2, 1) - - # Compute the semantic matching scores - self.semantic_matching_conv = ConvModule( - matching_feat_dims, - matching_feat_dims, - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True) - self.semantic_matching_pred = nn.Conv1d(matching_feat_dims, 2, 1) - - # Surface feature aggregation - self.surface_feats_aggregation = list() - for k in range(primitive_feat_refine_streams): - self.surface_feats_aggregation.append( - ConvModule( - matching_feat_dims, - matching_feat_dims, - 1, - 
padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True)) - self.surface_feats_aggregation = nn.Sequential( - *self.surface_feats_aggregation) - - # Line feature aggregation - self.line_feats_aggregation = list() - for k in range(primitive_feat_refine_streams): - self.line_feats_aggregation.append( - ConvModule( - matching_feat_dims, - matching_feat_dims, - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True)) - self.line_feats_aggregation = nn.Sequential( - *self.line_feats_aggregation) - - # surface center(6) + line center(12) - prev_channel = 18 * matching_feat_dims - self.bbox_pred = nn.ModuleList() - for k in range(len(primitive_refine_channels)): - self.bbox_pred.append( - ConvModule( - prev_channel, - primitive_refine_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=False)) - prev_channel = primitive_refine_channels[k] - - # Final object detection - # Objectness scores (2), center residual (3), - # heading class+residual (num_heading_bin*2), size class + - # residual(num_size_cluster*4) - conv_out_channel = (2 + 3 + bbox_coder['num_dir_bins'] * 2 + - bbox_coder['num_sizes'] * 4 + self.num_classes) - self.bbox_pred.append(nn.Conv1d(prev_channel, conv_out_channel, 1)) - - def forward(self, feats_dict, sample_mod): - """Forward pass. - - Args: - feats_dict (dict): Feature dict from backbone. - sample_mod (str): Sample mode for vote aggregation layer. - valid modes are "vote", "seed" and "random". - - Returns: - dict: Predictions of vote head. - """ - ret_dict = {} - aggregated_points = feats_dict['aggregated_points'] - original_feature = feats_dict['aggregated_features'] - batch_size = original_feature.shape[0] - object_proposal = original_feature.shape[2] - - # Extract surface center, features and semantic predictions - z_center = feats_dict['pred_z_center'] - xy_center = feats_dict['pred_xy_center'] - z_semantic = feats_dict['sem_cls_scores_z'] - xy_semantic = feats_dict['sem_cls_scores_xy'] - z_feature = feats_dict['aggregated_features_z'] - xy_feature = feats_dict['aggregated_features_xy'] - # Extract line points and features - line_center = feats_dict['pred_line_center'] - line_feature = feats_dict['aggregated_features_line'] - - surface_center_pred = torch.cat((z_center, xy_center), dim=1) - ret_dict['surface_center_pred'] = surface_center_pred - ret_dict['surface_sem_pred'] = torch.cat((z_semantic, xy_semantic), - dim=1) - - # Extract the surface and line centers of rpn proposals - rpn_proposals = feats_dict['proposal_list'] - rpn_proposals_bbox = DepthInstance3DBoxes( - rpn_proposals.reshape(-1, 7).clone(), - box_dim=rpn_proposals.shape[-1], - with_yaw=self.with_angle, - origin=(0.5, 0.5, 0.5)) - - obj_surface_center, obj_line_center = \ - rpn_proposals_bbox.get_surface_line_center() - obj_surface_center = obj_surface_center.reshape( - batch_size, -1, 6, 3).transpose(1, 2).reshape(batch_size, -1, 3) - obj_line_center = obj_line_center.reshape(batch_size, -1, 12, - 3).transpose(1, 2).reshape( - batch_size, -1, 3) - ret_dict['surface_center_object'] = obj_surface_center - ret_dict['line_center_object'] = obj_line_center - - # aggregate primitive z and xy features to rpn proposals - surface_center_feature_pred = torch.cat((z_feature, xy_feature), dim=2) - surface_center_feature_pred = torch.cat( - (surface_center_feature_pred.new_zeros( - (batch_size, 6, surface_center_feature_pred.shape[2])), - surface_center_feature_pred), - dim=1) - - surface_xyz, surface_features, _ = 
self.surface_center_matcher( - surface_center_pred, - surface_center_feature_pred, - target_xyz=obj_surface_center) - - # aggregate primitive line features to rpn proposals - line_feature = torch.cat((line_feature.new_zeros( - (batch_size, 12, line_feature.shape[2])), line_feature), - dim=1) - line_xyz, line_features, _ = self.line_center_matcher( - line_center, line_feature, target_xyz=obj_line_center) - - # combine the surface and line features - combine_features = torch.cat((surface_features, line_features), dim=2) - - matching_features = self.matching_conv(combine_features) - matching_score = self.matching_pred(matching_features) - ret_dict['matching_score'] = matching_score.transpose(2, 1) - - semantic_matching_features = self.semantic_matching_conv( - combine_features) - semantic_matching_score = self.semantic_matching_pred( - semantic_matching_features) - ret_dict['semantic_matching_score'] = \ - semantic_matching_score.transpose(2, 1) - - surface_features = self.surface_feats_aggregation(surface_features) - line_features = self.line_feats_aggregation(line_features) - - # Combine all surface and line features - surface_features = surface_features.view(batch_size, -1, - object_proposal) - line_features = line_features.view(batch_size, -1, object_proposal) - - combine_feature = torch.cat((surface_features, line_features), dim=1) - - # Final bbox predictions - bbox_predictions = self.bbox_pred[0](combine_feature) - bbox_predictions += original_feature - for conv_module in self.bbox_pred[1:]: - bbox_predictions = conv_module(bbox_predictions) - - refine_decode_res = self.bbox_coder.split_pred( - bbox_predictions[:, :self.num_classes + 2], - bbox_predictions[:, self.num_classes + 2:], aggregated_points) - for key in refine_decode_res.keys(): - ret_dict[key + '_optimized'] = refine_decode_res[key] - return ret_dict - - def loss(self, - bbox_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - img_metas=None, - rpn_targets=None, - gt_bboxes_ignore=None): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of h3d bbox head. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - img_metas (list[dict]): Contain pcd and img's meta info. - rpn_targets (Tuple) : Targets generated by rpn head. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict: Losses of H3dnet. 
- """ - (vote_targets, vote_target_masks, size_class_targets, size_res_targets, - dir_class_targets, dir_res_targets, center_targets, _, mask_targets, - valid_gt_masks, objectness_targets, objectness_weights, - box_loss_weights, valid_gt_weights) = rpn_targets - - losses = {} - - # calculate refined proposal loss - refined_proposal_loss = self.get_proposal_stage_loss( - bbox_preds, - size_class_targets, - size_res_targets, - dir_class_targets, - dir_res_targets, - center_targets, - mask_targets, - objectness_targets, - objectness_weights, - box_loss_weights, - valid_gt_weights, - suffix='_optimized') - for key in refined_proposal_loss.keys(): - losses[key + '_optimized'] = refined_proposal_loss[key] - - bbox3d_optimized = self.bbox_coder.decode( - bbox_preds, suffix='_optimized') - - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - bbox_preds) - - (cues_objectness_label, cues_sem_label, proposal_objectness_label, - cues_mask, cues_match_mask, proposal_objectness_mask, - cues_matching_label, obj_surface_line_center) = targets - - # match scores for each geometric primitive - objectness_scores = bbox_preds['matching_score'] - # match scores for the semantics of primitives - objectness_scores_sem = bbox_preds['semantic_matching_score'] - - primitive_objectness_loss = self.cues_objectness_loss( - objectness_scores.transpose(2, 1), - cues_objectness_label, - weight=cues_mask, - avg_factor=cues_mask.sum() + 1e-6) - - primitive_sem_loss = self.cues_semantic_loss( - objectness_scores_sem.transpose(2, 1), - cues_sem_label, - weight=cues_mask, - avg_factor=cues_mask.sum() + 1e-6) - - objectness_scores = bbox_preds['obj_scores_optimized'] - objectness_loss_refine = self.proposal_objectness_loss( - objectness_scores.transpose(2, 1), proposal_objectness_label) - primitive_matching_loss = (objectness_loss_refine * - cues_match_mask).sum() / ( - cues_match_mask.sum() + 1e-6) * 0.5 - primitive_sem_matching_loss = ( - objectness_loss_refine * proposal_objectness_mask).sum() / ( - proposal_objectness_mask.sum() + 1e-6) * 0.5 - - # Get the object surface center here - batch_size, object_proposal = bbox3d_optimized.shape[:2] - refined_bbox = DepthInstance3DBoxes( - bbox3d_optimized.reshape(-1, 7).clone(), - box_dim=bbox3d_optimized.shape[-1], - with_yaw=self.with_angle, - origin=(0.5, 0.5, 0.5)) - - pred_obj_surface_center, pred_obj_line_center = \ - refined_bbox.get_surface_line_center() - pred_obj_surface_center = pred_obj_surface_center.reshape( - batch_size, -1, 6, 3).transpose(1, 2).reshape(batch_size, -1, 3) - pred_obj_line_center = pred_obj_line_center.reshape( - batch_size, -1, 12, 3).transpose(1, 2).reshape(batch_size, -1, 3) - pred_surface_line_center = torch.cat( - (pred_obj_surface_center, pred_obj_line_center), 1) - - square_dist = self.primitive_center_loss(pred_surface_line_center, - obj_surface_line_center) - - match_dist = torch.sqrt(square_dist.sum(dim=-1) + 1e-6) - primitive_centroid_reg_loss = torch.sum( - match_dist * cues_matching_label) / ( - cues_matching_label.sum() + 1e-6) - - refined_loss = dict( - primitive_objectness_loss=primitive_objectness_loss, - primitive_sem_loss=primitive_sem_loss, - primitive_matching_loss=primitive_matching_loss, - primitive_sem_matching_loss=primitive_sem_matching_loss, - primitive_centroid_reg_loss=primitive_centroid_reg_loss) - - losses.update(refined_loss) - - return losses - - def get_bboxes(self, - points, - bbox_preds, - input_metas, - rescale=False, - suffix=''): - """Generate bboxes from 
vote head predictions. - - Args: - points (torch.Tensor): Input points. - bbox_preds (dict): Predictions from vote head. - input_metas (list[dict]): Point cloud and image's meta info. - rescale (bool): Whether to rescale bboxes. - - Returns: - list[tuple[torch.Tensor]]: Bounding boxes, scores and labels. - """ - # decode boxes - obj_scores = F.softmax( - bbox_preds['obj_scores' + suffix], dim=-1)[..., -1] - - sem_scores = F.softmax(bbox_preds['sem_scores'], dim=-1) - - prediction_collection = {} - prediction_collection['center'] = bbox_preds['center' + suffix] - prediction_collection['dir_class'] = bbox_preds['dir_class'] - prediction_collection['dir_res'] = bbox_preds['dir_res' + suffix] - prediction_collection['size_class'] = bbox_preds['size_class'] - prediction_collection['size_res'] = bbox_preds['size_res' + suffix] - - bbox3d = self.bbox_coder.decode(prediction_collection) - - batch_size = bbox3d.shape[0] - results = list() - for b in range(batch_size): - bbox_selected, score_selected, labels = self.multiclass_nms_single( - obj_scores[b], sem_scores[b], bbox3d[b], points[b, ..., :3], - input_metas[b]) - bbox = input_metas[b]['box_type_3d']( - bbox_selected, - box_dim=bbox_selected.shape[-1], - with_yaw=self.bbox_coder.with_rot) - results.append((bbox, score_selected, labels)) - - return results - - def multiclass_nms_single(self, obj_scores, sem_scores, bbox, points, - input_meta): - """Multi-class nms in single batch. - - Args: - obj_scores (torch.Tensor): Objectness score of bounding boxes. - sem_scores (torch.Tensor): semantic class score of bounding boxes. - bbox (torch.Tensor): Predicted bounding boxes. - points (torch.Tensor): Input points. - input_meta (dict): Point cloud and image's meta info. - - Returns: - tuple[torch.Tensor]: Bounding boxes, scores and labels. 
- """ - bbox = input_meta['box_type_3d']( - bbox, - box_dim=bbox.shape[-1], - with_yaw=self.bbox_coder.with_rot, - origin=(0.5, 0.5, 0.5)) - box_indices = bbox.points_in_boxes_all(points) - - corner3d = bbox.corners - minmax_box3d = corner3d.new(torch.Size((corner3d.shape[0], 6))) - minmax_box3d[:, :3] = torch.min(corner3d, dim=1)[0] - minmax_box3d[:, 3:] = torch.max(corner3d, dim=1)[0] - - nonempty_box_mask = box_indices.T.sum(1) > 5 - - bbox_classes = torch.argmax(sem_scores, -1) - nms_selected = aligned_3d_nms(minmax_box3d[nonempty_box_mask], - obj_scores[nonempty_box_mask], - bbox_classes[nonempty_box_mask], - self.test_cfg.nms_thr) - - # filter empty boxes and boxes with low score - scores_mask = (obj_scores > self.test_cfg.score_thr) - nonempty_box_inds = torch.nonzero( - nonempty_box_mask, as_tuple=False).flatten() - nonempty_mask = torch.zeros_like(bbox_classes).scatter( - 0, nonempty_box_inds[nms_selected], 1) - selected = (nonempty_mask.bool() & scores_mask.bool()) - - if self.test_cfg.per_class_proposal: - bbox_selected, score_selected, labels = [], [], [] - for k in range(sem_scores.shape[-1]): - bbox_selected.append(bbox[selected].tensor) - score_selected.append(obj_scores[selected] * - sem_scores[selected][:, k]) - labels.append( - torch.zeros_like(bbox_classes[selected]).fill_(k)) - bbox_selected = torch.cat(bbox_selected, 0) - score_selected = torch.cat(score_selected, 0) - labels = torch.cat(labels, 0) - else: - bbox_selected = bbox[selected].tensor - score_selected = obj_scores[selected] - labels = bbox_classes[selected] - - return bbox_selected, score_selected, labels - - def get_proposal_stage_loss(self, - bbox_preds, - size_class_targets, - size_res_targets, - dir_class_targets, - dir_res_targets, - center_targets, - mask_targets, - objectness_targets, - objectness_weights, - box_loss_weights, - valid_gt_weights, - suffix=''): - """Compute loss for the aggregation module. - - Args: - bbox_preds (dict): Predictions from forward of vote head. - size_class_targets (torch.Tensor): Ground truth - size class of each prediction bounding box. - size_res_targets (torch.Tensor): Ground truth - size residual of each prediction bounding box. - dir_class_targets (torch.Tensor): Ground truth - direction class of each prediction bounding box. - dir_res_targets (torch.Tensor): Ground truth - direction residual of each prediction bounding box. - center_targets (torch.Tensor): Ground truth center - of each prediction bounding box. - mask_targets (torch.Tensor): Validation of each - prediction bounding box. - objectness_targets (torch.Tensor): Ground truth - objectness label of each prediction bounding box. - objectness_weights (torch.Tensor): Weights of objectness - loss for each prediction bounding box. - box_loss_weights (torch.Tensor): Weights of regression - loss for each prediction bounding box. - valid_gt_weights (torch.Tensor): Validation of each - ground truth bounding box. - - Returns: - dict: Losses of aggregation module. 
- """ - # calculate objectness loss - objectness_loss = self.objectness_loss( - bbox_preds['obj_scores' + suffix].transpose(2, 1), - objectness_targets, - weight=objectness_weights) - - # calculate center loss - source2target_loss, target2source_loss = self.center_loss( - bbox_preds['center' + suffix], - center_targets, - src_weight=box_loss_weights, - dst_weight=valid_gt_weights) - center_loss = source2target_loss + target2source_loss - - # calculate direction class loss - dir_class_loss = self.dir_class_loss( - bbox_preds['dir_class' + suffix].transpose(2, 1), - dir_class_targets, - weight=box_loss_weights) - - # calculate direction residual loss - batch_size, proposal_num = size_class_targets.shape[:2] - heading_label_one_hot = dir_class_targets.new_zeros( - (batch_size, proposal_num, self.num_dir_bins)) - heading_label_one_hot.scatter_(2, dir_class_targets.unsqueeze(-1), 1) - dir_res_norm = (bbox_preds['dir_res_norm' + suffix] * - heading_label_one_hot).sum(dim=-1) - dir_res_loss = self.dir_res_loss( - dir_res_norm, dir_res_targets, weight=box_loss_weights) - - # calculate size class loss - size_class_loss = self.size_class_loss( - bbox_preds['size_class' + suffix].transpose(2, 1), - size_class_targets, - weight=box_loss_weights) - - # calculate size residual loss - one_hot_size_targets = box_loss_weights.new_zeros( - (batch_size, proposal_num, self.num_sizes)) - one_hot_size_targets.scatter_(2, size_class_targets.unsqueeze(-1), 1) - one_hot_size_targets_expand = one_hot_size_targets.unsqueeze( - -1).repeat(1, 1, 1, 3) - size_residual_norm = (bbox_preds['size_res_norm' + suffix] * - one_hot_size_targets_expand).sum(dim=2) - box_loss_weights_expand = box_loss_weights.unsqueeze(-1).repeat( - 1, 1, 3) - size_res_loss = self.size_res_loss( - size_residual_norm, - size_res_targets, - weight=box_loss_weights_expand) - - # calculate semantic loss - semantic_loss = self.semantic_loss( - bbox_preds['sem_scores' + suffix].transpose(2, 1), - mask_targets, - weight=box_loss_weights) - - losses = dict( - objectness_loss=objectness_loss, - semantic_loss=semantic_loss, - center_loss=center_loss, - dir_class_loss=dir_class_loss, - dir_res_loss=dir_res_loss, - size_class_loss=size_class_loss, - size_res_loss=size_res_loss) - - return losses - - def get_targets(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - bbox_preds=None): - """Generate targets of proposal module. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - pts_semantic_mask (list[torch.Tensor]): Point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): Point-wise instance - label of each batch. - bbox_preds (torch.Tensor): Bounding box predictions of vote head. - - Returns: - tuple[torch.Tensor]: Targets of proposal module. 
- """ - # find empty example - valid_gt_masks = list() - gt_num = list() - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - valid_gt_masks.append(gt_labels_3d[index].new_zeros(1)) - gt_num.append(1) - else: - valid_gt_masks.append(gt_labels_3d[index].new_ones( - gt_labels_3d[index].shape)) - gt_num.append(gt_labels_3d[index].shape[0]) - - if pts_semantic_mask is None: - pts_semantic_mask = [None for i in range(len(gt_labels_3d))] - pts_instance_mask = [None for i in range(len(gt_labels_3d))] - - aggregated_points = [ - bbox_preds['aggregated_points'][i] - for i in range(len(gt_labels_3d)) - ] - - surface_center_pred = [ - bbox_preds['surface_center_pred'][i] - for i in range(len(gt_labels_3d)) - ] - - line_center_pred = [ - bbox_preds['pred_line_center'][i] - for i in range(len(gt_labels_3d)) - ] - - surface_center_object = [ - bbox_preds['surface_center_object'][i] - for i in range(len(gt_labels_3d)) - ] - - line_center_object = [ - bbox_preds['line_center_object'][i] - for i in range(len(gt_labels_3d)) - ] - - surface_sem_pred = [ - bbox_preds['surface_sem_pred'][i] - for i in range(len(gt_labels_3d)) - ] - - line_sem_pred = [ - bbox_preds['sem_cls_scores_line'][i] - for i in range(len(gt_labels_3d)) - ] - - (cues_objectness_label, cues_sem_label, proposal_objectness_label, - cues_mask, cues_match_mask, proposal_objectness_mask, - cues_matching_label, obj_surface_line_center) = multi_apply( - self.get_targets_single, points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, aggregated_points, - surface_center_pred, line_center_pred, surface_center_object, - line_center_object, surface_sem_pred, line_sem_pred) - - cues_objectness_label = torch.stack(cues_objectness_label) - cues_sem_label = torch.stack(cues_sem_label) - proposal_objectness_label = torch.stack(proposal_objectness_label) - cues_mask = torch.stack(cues_mask) - cues_match_mask = torch.stack(cues_match_mask) - proposal_objectness_mask = torch.stack(proposal_objectness_mask) - cues_matching_label = torch.stack(cues_matching_label) - obj_surface_line_center = torch.stack(obj_surface_line_center) - - return (cues_objectness_label, cues_sem_label, - proposal_objectness_label, cues_mask, cues_match_mask, - proposal_objectness_mask, cues_matching_label, - obj_surface_line_center) - - def get_targets_single(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - aggregated_points=None, - pred_surface_center=None, - pred_line_center=None, - pred_obj_surface_center=None, - pred_obj_line_center=None, - pred_surface_sem=None, - pred_line_sem=None): - """Generate targets for primitive cues for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. - gt_labels_3d (torch.Tensor): Labels of each batch. - pts_semantic_mask (torch.Tensor): Point-wise semantic - label of each batch. - pts_instance_mask (torch.Tensor): Point-wise instance - label of each batch. - aggregated_points (torch.Tensor): Aggregated points from - vote aggregation layer. - pred_surface_center (torch.Tensor): Prediction of surface center. - pred_line_center (torch.Tensor): Prediction of line center. 
- pred_obj_surface_center (torch.Tensor): Objectness prediction - of surface center. - pred_obj_line_center (torch.Tensor): Objectness prediction of - line center. - pred_surface_sem (torch.Tensor): Semantic prediction of - surface center. - pred_line_sem (torch.Tensor): Semantic prediction of line center. - Returns: - tuple[torch.Tensor]: Targets for primitive cues. - """ - device = points.device - gt_bboxes_3d = gt_bboxes_3d.to(device) - num_proposals = aggregated_points.shape[0] - gt_center = gt_bboxes_3d.gravity_center - - dist1, dist2, ind1, _ = chamfer_distance( - aggregated_points.unsqueeze(0), - gt_center.unsqueeze(0), - reduction='none') - # Set assignment - object_assignment = ind1.squeeze(0) - - # Generate objectness label and mask - # objectness_label: 1 if pred object center is within - # self.train_cfg['near_threshold'] of any GT object - # objectness_mask: 0 if pred object center is in gray - # zone (DONOTCARE), 1 otherwise - euclidean_dist1 = torch.sqrt(dist1.squeeze(0) + 1e-6) - proposal_objectness_label = euclidean_dist1.new_zeros( - num_proposals, dtype=torch.long) - proposal_objectness_mask = euclidean_dist1.new_zeros(num_proposals) - - gt_sem = gt_labels_3d[object_assignment] - - obj_surface_center, obj_line_center = \ - gt_bboxes_3d.get_surface_line_center() - obj_surface_center = obj_surface_center.reshape(-1, 6, - 3).transpose(0, 1) - obj_line_center = obj_line_center.reshape(-1, 12, 3).transpose(0, 1) - obj_surface_center = obj_surface_center[:, object_assignment].reshape( - 1, -1, 3) - obj_line_center = obj_line_center[:, - object_assignment].reshape(1, -1, 3) - - surface_sem = torch.argmax(pred_surface_sem, dim=1).float() - line_sem = torch.argmax(pred_line_sem, dim=1).float() - - dist_surface, _, surface_ind, _ = chamfer_distance( - obj_surface_center, - pred_surface_center.unsqueeze(0), - reduction='none') - dist_line, _, line_ind, _ = chamfer_distance( - obj_line_center, pred_line_center.unsqueeze(0), reduction='none') - - surface_sel = pred_surface_center[surface_ind.squeeze(0)] - line_sel = pred_line_center[line_ind.squeeze(0)] - surface_sel_sem = surface_sem[surface_ind.squeeze(0)] - line_sel_sem = line_sem[line_ind.squeeze(0)] - - surface_sel_sem_gt = gt_sem.repeat(6).float() - line_sel_sem_gt = gt_sem.repeat(12).float() - - euclidean_dist_surface = torch.sqrt(dist_surface.squeeze(0) + 1e-6) - euclidean_dist_line = torch.sqrt(dist_line.squeeze(0) + 1e-6) - objectness_label_surface = euclidean_dist_line.new_zeros( - num_proposals * 6, dtype=torch.long) - objectness_mask_surface = euclidean_dist_line.new_zeros(num_proposals * - 6) - objectness_label_line = euclidean_dist_line.new_zeros( - num_proposals * 12, dtype=torch.long) - objectness_mask_line = euclidean_dist_line.new_zeros(num_proposals * - 12) - objectness_label_surface_sem = euclidean_dist_line.new_zeros( - num_proposals * 6, dtype=torch.long) - objectness_label_line_sem = euclidean_dist_line.new_zeros( - num_proposals * 12, dtype=torch.long) - - euclidean_dist_obj_surface = torch.sqrt(( - (pred_obj_surface_center - surface_sel)**2).sum(dim=-1) + 1e-6) - euclidean_dist_obj_line = torch.sqrt( - torch.sum((pred_obj_line_center - line_sel)**2, dim=-1) + 1e-6) - - # Objectness score just with centers - proposal_objectness_label[ - euclidean_dist1 < self.train_cfg['near_threshold']] = 1 - proposal_objectness_mask[ - euclidean_dist1 < self.train_cfg['near_threshold']] = 1 - proposal_objectness_mask[ - euclidean_dist1 > self.train_cfg['far_threshold']] = 1 - - objectness_label_surface[ - 
(euclidean_dist_obj_surface < - self.train_cfg['label_surface_threshold']) * - (euclidean_dist_surface < - self.train_cfg['mask_surface_threshold'])] = 1 - objectness_label_surface_sem[ - (euclidean_dist_obj_surface < - self.train_cfg['label_surface_threshold']) * - (euclidean_dist_surface < self.train_cfg['mask_surface_threshold']) - * (surface_sel_sem == surface_sel_sem_gt)] = 1 - - objectness_label_line[ - (euclidean_dist_obj_line < self.train_cfg['label_line_threshold']) - * - (euclidean_dist_line < self.train_cfg['mask_line_threshold'])] = 1 - objectness_label_line_sem[ - (euclidean_dist_obj_line < self.train_cfg['label_line_threshold']) - * (euclidean_dist_line < self.train_cfg['mask_line_threshold']) * - (line_sel_sem == line_sel_sem_gt)] = 1 - - objectness_label_surface_obj = proposal_objectness_label.repeat(6) - objectness_mask_surface_obj = proposal_objectness_mask.repeat(6) - objectness_label_line_obj = proposal_objectness_label.repeat(12) - objectness_mask_line_obj = proposal_objectness_mask.repeat(12) - - objectness_mask_surface = objectness_mask_surface_obj - objectness_mask_line = objectness_mask_line_obj - - cues_objectness_label = torch.cat( - (objectness_label_surface, objectness_label_line), 0) - cues_sem_label = torch.cat( - (objectness_label_surface_sem, objectness_label_line_sem), 0) - cues_mask = torch.cat((objectness_mask_surface, objectness_mask_line), - 0) - - objectness_label_surface *= objectness_label_surface_obj - objectness_label_line *= objectness_label_line_obj - cues_matching_label = torch.cat( - (objectness_label_surface, objectness_label_line), 0) - - objectness_label_surface_sem *= objectness_label_surface_obj - objectness_label_line_sem *= objectness_label_line_obj - - cues_match_mask = (torch.sum( - cues_objectness_label.view(18, num_proposals), dim=0) >= - 1).float() - - obj_surface_line_center = torch.cat( - (obj_surface_center, obj_line_center), 1).squeeze(0) - - return (cues_objectness_label, cues_sem_label, - proposal_objectness_label, cues_mask, cues_match_mask, - proposal_objectness_mask, cues_matching_label, - obj_surface_line_center) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/parta2_bbox_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/parta2_bbox_head.py deleted file mode 100644 index 6f5ea722..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/parta2_bbox_head.py +++ /dev/null @@ -1,629 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.cnn import ConvModule, normal_init - -from mmdet3d.ops.spconv import IS_SPCONV2_AVAILABLE - -if IS_SPCONV2_AVAILABLE: - from spconv.pytorch import (SparseConvTensor, SparseMaxPool3d, - SparseSequential) -else: - from mmcv.ops import SparseConvTensor, SparseMaxPool3d, SparseSequential - -from mmcv.runner import BaseModule -from torch import nn as nn - -from mmdet3d.core.bbox.structures import (LiDARInstance3DBoxes, - rotation_3d_in_axis, xywhr2xyxyr) -from mmdet3d.core.post_processing import nms_bev, nms_normal_bev -from mmdet3d.models.builder import HEADS, build_loss -from mmdet3d.ops import make_sparse_convmodule -from mmdet.core import build_bbox_coder, multi_apply - - -@HEADS.register_module() -class PartA2BboxHead(BaseModule): - """PartA2 RoI head. - - Args: - num_classes (int): The number of classes to prediction. - seg_in_channels (int): Input channels of segmentation - convolution layer. - part_in_channels (int): Input channels of part convolution layer. 
- seg_conv_channels (list(int)): Out channels of each - segmentation convolution layer. - part_conv_channels (list(int)): Out channels of each - part convolution layer. - merge_conv_channels (list(int)): Out channels of each - feature merged convolution layer. - down_conv_channels (list(int)): Out channels of each - downsampled convolution layer. - shared_fc_channels (list(int)): Out channels of each shared fc layer. - cls_channels (list(int)): Out channels of each classification layer. - reg_channels (list(int)): Out channels of each regression layer. - dropout_ratio (float): Dropout ratio of classification and - regression layers. - roi_feat_size (int): The size of pooled roi features. - with_corner_loss (bool): Whether to use corner loss or not. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for box head. - conv_cfg (dict): Config dict of convolutional layers - norm_cfg (dict): Config dict of normalization layers - loss_bbox (dict): Config dict of box regression loss. - loss_cls (dict): Config dict of classifacation loss. - """ - - def __init__(self, - num_classes, - seg_in_channels, - part_in_channels, - seg_conv_channels=None, - part_conv_channels=None, - merge_conv_channels=None, - down_conv_channels=None, - shared_fc_channels=None, - cls_channels=None, - reg_channels=None, - dropout_ratio=0.1, - roi_feat_size=14, - with_corner_loss=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='none', - loss_weight=1.0), - init_cfg=None): - super(PartA2BboxHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.with_corner_loss = with_corner_loss - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_bbox = build_loss(loss_bbox) - self.loss_cls = build_loss(loss_cls) - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - - assert down_conv_channels[-1] == shared_fc_channels[0] - - # init layers - part_channel_last = part_in_channels - part_conv = [] - for i, channel in enumerate(part_conv_channels): - part_conv.append( - make_sparse_convmodule( - part_channel_last, - channel, - 3, - padding=1, - norm_cfg=norm_cfg, - indice_key=f'rcnn_part{i}', - conv_type='SubMConv3d')) - part_channel_last = channel - self.part_conv = SparseSequential(*part_conv) - - seg_channel_last = seg_in_channels - seg_conv = [] - for i, channel in enumerate(seg_conv_channels): - seg_conv.append( - make_sparse_convmodule( - seg_channel_last, - channel, - 3, - padding=1, - norm_cfg=norm_cfg, - indice_key=f'rcnn_seg{i}', - conv_type='SubMConv3d')) - seg_channel_last = channel - self.seg_conv = SparseSequential(*seg_conv) - - self.conv_down = SparseSequential() - - merge_conv_channel_last = part_channel_last + seg_channel_last - merge_conv = [] - for i, channel in enumerate(merge_conv_channels): - merge_conv.append( - make_sparse_convmodule( - merge_conv_channel_last, - channel, - 3, - padding=1, - norm_cfg=norm_cfg, - indice_key='rcnn_down0')) - merge_conv_channel_last = channel - - down_conv_channel_last = merge_conv_channel_last - conv_down = [] - for i, channel in enumerate(down_conv_channels): - conv_down.append( - make_sparse_convmodule( - down_conv_channel_last, - channel, - 3, - padding=1, - norm_cfg=norm_cfg, - indice_key='rcnn_down1')) - down_conv_channel_last = channel - - self.conv_down.add_module('merge_conv', 
SparseSequential(*merge_conv)) - self.conv_down.add_module('max_pool3d', - SparseMaxPool3d(kernel_size=2, stride=2)) - self.conv_down.add_module('down_conv', SparseSequential(*conv_down)) - - shared_fc_list = [] - pool_size = roi_feat_size // 2 - pre_channel = shared_fc_channels[0] * pool_size**3 - for k in range(1, len(shared_fc_channels)): - shared_fc_list.append( - ConvModule( - pre_channel, - shared_fc_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - inplace=True)) - pre_channel = shared_fc_channels[k] - - if k != len(shared_fc_channels) - 1 and dropout_ratio > 0: - shared_fc_list.append(nn.Dropout(dropout_ratio)) - - self.shared_fc = nn.Sequential(*shared_fc_list) - - # Classification layer - channel_in = shared_fc_channels[-1] - cls_channel = 1 - cls_layers = [] - pre_channel = channel_in - for k in range(0, len(cls_channels)): - cls_layers.append( - ConvModule( - pre_channel, - cls_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - inplace=True)) - pre_channel = cls_channels[k] - cls_layers.append( - ConvModule( - pre_channel, - cls_channel, - 1, - padding=0, - conv_cfg=conv_cfg, - act_cfg=None)) - if dropout_ratio >= 0: - cls_layers.insert(1, nn.Dropout(dropout_ratio)) - - self.conv_cls = nn.Sequential(*cls_layers) - - # Regression layer - reg_layers = [] - pre_channel = channel_in - for k in range(0, len(reg_channels)): - reg_layers.append( - ConvModule( - pre_channel, - reg_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - inplace=True)) - pre_channel = reg_channels[k] - reg_layers.append( - ConvModule( - pre_channel, - self.bbox_coder.code_size, - 1, - padding=0, - conv_cfg=conv_cfg, - act_cfg=None)) - if dropout_ratio >= 0: - reg_layers.insert(1, nn.Dropout(dropout_ratio)) - - self.conv_reg = nn.Sequential(*reg_layers) - - if init_cfg is None: - self.init_cfg = dict( - type='Xavier', - layer=['Conv2d', 'Conv1d'], - distribution='uniform') - - def init_weights(self): - super().init_weights() - normal_init(self.conv_reg[-1].conv, mean=0, std=0.001) - - def forward(self, seg_feats, part_feats): - """Forward pass. - - Args: - seg_feats (torch.Tensor): Point-wise semantic features. - part_feats (torch.Tensor): Point-wise part prediction features. - - Returns: - tuple[torch.Tensor]: Score of class and bbox predictions. 
- """ - # (B * N, out_x, out_y, out_z, 4) - rcnn_batch_size = part_feats.shape[0] - - # transform to sparse tensors - sparse_shape = part_feats.shape[1:4] - # (non_empty_num, 4) ==> [bs_idx, x_idx, y_idx, z_idx] - sparse_idx = part_feats.sum(dim=-1).nonzero(as_tuple=False) - - part_features = part_feats[sparse_idx[:, 0], sparse_idx[:, 1], - sparse_idx[:, 2], sparse_idx[:, 3]] - seg_features = seg_feats[sparse_idx[:, 0], sparse_idx[:, 1], - sparse_idx[:, 2], sparse_idx[:, 3]] - coords = sparse_idx.int().contiguous() - part_features = SparseConvTensor(part_features, coords, sparse_shape, - rcnn_batch_size) - seg_features = SparseConvTensor(seg_features, coords, sparse_shape, - rcnn_batch_size) - - # forward rcnn network - x_part = self.part_conv(part_features) - x_rpn = self.seg_conv(seg_features) - - merged_feature = torch.cat((x_rpn.features, x_part.features), - dim=1) # (N, C) - shared_feature = SparseConvTensor(merged_feature, coords, sparse_shape, - rcnn_batch_size) - - x = self.conv_down(shared_feature) - - shared_feature = x.dense().view(rcnn_batch_size, -1, 1) - - shared_feature = self.shared_fc(shared_feature) - - cls_score = self.conv_cls(shared_feature).transpose( - 1, 2).contiguous().squeeze(dim=1) # (B, 1) - bbox_pred = self.conv_reg(shared_feature).transpose( - 1, 2).contiguous().squeeze(dim=1) # (B, C) - - return cls_score, bbox_pred - - def loss(self, cls_score, bbox_pred, rois, labels, bbox_targets, - pos_gt_bboxes, reg_mask, label_weights, bbox_weights): - """Computing losses. - - Args: - cls_score (torch.Tensor): Scores of each roi. - bbox_pred (torch.Tensor): Predictions of bboxes. - rois (torch.Tensor): Roi bboxes. - labels (torch.Tensor): Labels of class. - bbox_targets (torch.Tensor): Target of positive bboxes. - pos_gt_bboxes (torch.Tensor): Ground truths of positive bboxes. - reg_mask (torch.Tensor): Mask for positive bboxes. - label_weights (torch.Tensor): Weights of class loss. - bbox_weights (torch.Tensor): Weights of bbox loss. - - Returns: - dict: Computed losses. - - - loss_cls (torch.Tensor): Loss of classes. - - loss_bbox (torch.Tensor): Loss of bboxes. - - loss_corner (torch.Tensor): Loss of corners. 
- """ - losses = dict() - rcnn_batch_size = cls_score.shape[0] - - # calculate class loss - cls_flat = cls_score.view(-1) - loss_cls = self.loss_cls(cls_flat, labels, label_weights) - losses['loss_cls'] = loss_cls - - # calculate regression loss - code_size = self.bbox_coder.code_size - pos_inds = (reg_mask > 0) - if pos_inds.any() == 0: - # fake a part loss - losses['loss_bbox'] = loss_cls.new_tensor(0) - if self.with_corner_loss: - losses['loss_corner'] = loss_cls.new_tensor(0) - else: - pos_bbox_pred = bbox_pred.view(rcnn_batch_size, -1)[pos_inds] - bbox_weights_flat = bbox_weights[pos_inds].view(-1, 1).repeat( - 1, pos_bbox_pred.shape[-1]) - loss_bbox = self.loss_bbox( - pos_bbox_pred.unsqueeze(dim=0), bbox_targets.unsqueeze(dim=0), - bbox_weights_flat.unsqueeze(dim=0)) - losses['loss_bbox'] = loss_bbox - - if self.with_corner_loss: - pos_roi_boxes3d = rois[..., 1:].view(-1, code_size)[pos_inds] - pos_roi_boxes3d = pos_roi_boxes3d.view(-1, code_size) - batch_anchors = pos_roi_boxes3d.clone().detach() - pos_rois_rotation = pos_roi_boxes3d[..., 6].view(-1) - roi_xyz = pos_roi_boxes3d[..., 0:3].view(-1, 3) - batch_anchors[..., 0:3] = 0 - # decode boxes - pred_boxes3d = self.bbox_coder.decode( - batch_anchors, - pos_bbox_pred.view(-1, code_size)).view(-1, code_size) - - pred_boxes3d[..., 0:3] = rotation_3d_in_axis( - pred_boxes3d[..., 0:3].unsqueeze(1), - pos_rois_rotation, - axis=2).squeeze(1) - - pred_boxes3d[:, 0:3] += roi_xyz - - # calculate corner loss - loss_corner = self.get_corner_loss_lidar( - pred_boxes3d, pos_gt_bboxes) - losses['loss_corner'] = loss_corner - - return losses - - def get_targets(self, sampling_results, rcnn_train_cfg, concat=True): - """Generate targets. - - Args: - sampling_results (list[:obj:`SamplingResult`]): - Sampled results from rois. - rcnn_train_cfg (:obj:`ConfigDict`): Training config of rcnn. - concat (bool): Whether to concatenate targets between batches. - - Returns: - tuple[torch.Tensor]: Targets of boxes and class prediction. - """ - pos_bboxes_list = [res.pos_bboxes for res in sampling_results] - pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results] - iou_list = [res.iou for res in sampling_results] - targets = multi_apply( - self._get_target_single, - pos_bboxes_list, - pos_gt_bboxes_list, - iou_list, - cfg=rcnn_train_cfg) - - (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) = targets - - if concat: - label = torch.cat(label, 0) - bbox_targets = torch.cat(bbox_targets, 0) - pos_gt_bboxes = torch.cat(pos_gt_bboxes, 0) - reg_mask = torch.cat(reg_mask, 0) - - label_weights = torch.cat(label_weights, 0) - label_weights /= torch.clamp(label_weights.sum(), min=1.0) - - bbox_weights = torch.cat(bbox_weights, 0) - bbox_weights /= torch.clamp(bbox_weights.sum(), min=1.0) - - return (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - - def _get_target_single(self, pos_bboxes, pos_gt_bboxes, ious, cfg): - """Generate training targets for a single sample. - - Args: - pos_bboxes (torch.Tensor): Positive boxes with shape - (N, 7). - pos_gt_bboxes (torch.Tensor): Ground truth boxes with shape - (M, 7). - ious (torch.Tensor): IoU between `pos_bboxes` and `pos_gt_bboxes` - in shape (N, M). - cfg (dict): Training configs. - - Returns: - tuple[torch.Tensor]: Target for positive boxes. 
- (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - """ - cls_pos_mask = ious > cfg.cls_pos_thr - cls_neg_mask = ious < cfg.cls_neg_thr - interval_mask = (cls_pos_mask == 0) & (cls_neg_mask == 0) - - # iou regression target - label = (cls_pos_mask > 0).float() - label[interval_mask] = ious[interval_mask] * 2 - 0.5 - # label weights - label_weights = (label >= 0).float() - - # box regression target - reg_mask = pos_bboxes.new_zeros(ious.size(0)).long() - reg_mask[0:pos_gt_bboxes.size(0)] = 1 - bbox_weights = (reg_mask > 0).float() - if reg_mask.bool().any(): - pos_gt_bboxes_ct = pos_gt_bboxes.clone().detach() - roi_center = pos_bboxes[..., 0:3] - roi_ry = pos_bboxes[..., 6] % (2 * np.pi) - - # canonical transformation - pos_gt_bboxes_ct[..., 0:3] -= roi_center - pos_gt_bboxes_ct[..., 6] -= roi_ry - pos_gt_bboxes_ct[..., 0:3] = rotation_3d_in_axis( - pos_gt_bboxes_ct[..., 0:3].unsqueeze(1), -roi_ry, - axis=2).squeeze(1) - - # flip orientation if rois have opposite orientation - ry_label = pos_gt_bboxes_ct[..., 6] % (2 * np.pi) # 0 ~ 2pi - opposite_flag = (ry_label > np.pi * 0.5) & (ry_label < np.pi * 1.5) - ry_label[opposite_flag] = (ry_label[opposite_flag] + np.pi) % ( - 2 * np.pi) # (0 ~ pi/2, 3pi/2 ~ 2pi) - flag = ry_label > np.pi - ry_label[flag] = ry_label[flag] - np.pi * 2 # (-pi/2, pi/2) - ry_label = torch.clamp(ry_label, min=-np.pi / 2, max=np.pi / 2) - pos_gt_bboxes_ct[..., 6] = ry_label - - rois_anchor = pos_bboxes.clone().detach() - rois_anchor[:, 0:3] = 0 - rois_anchor[:, 6] = 0 - bbox_targets = self.bbox_coder.encode(rois_anchor, - pos_gt_bboxes_ct) - else: - # no fg bbox - bbox_targets = pos_gt_bboxes.new_empty((0, 7)) - - return (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - - def get_corner_loss_lidar(self, pred_bbox3d, gt_bbox3d, delta=1.0): - """Calculate corner loss of given boxes. - - Args: - pred_bbox3d (torch.FloatTensor): Predicted boxes in shape (N, 7). - gt_bbox3d (torch.FloatTensor): Ground truth boxes in shape (N, 7). - delta (float, optional): huber loss threshold. Defaults to 1.0 - - Returns: - torch.FloatTensor: Calculated corner loss in shape (N). - """ - assert pred_bbox3d.shape[0] == gt_bbox3d.shape[0] - - # This is a little bit hack here because we assume the box for - # Part-A2 is in LiDAR coordinates - gt_boxes_structure = LiDARInstance3DBoxes(gt_bbox3d) - pred_box_corners = LiDARInstance3DBoxes(pred_bbox3d).corners - gt_box_corners = gt_boxes_structure.corners - - # This flip only changes the heading direction of GT boxes - gt_bbox3d_flip = gt_boxes_structure.clone() - gt_bbox3d_flip.tensor[:, 6] += np.pi - gt_box_corners_flip = gt_bbox3d_flip.corners - - corner_dist = torch.min( - torch.norm(pred_box_corners - gt_box_corners, dim=2), - torch.norm(pred_box_corners - gt_box_corners_flip, - dim=2)) # (N, 8) - # huber loss - abs_error = corner_dist.abs() - quadratic = abs_error.clamp(max=delta) - linear = (abs_error - quadratic) - corner_loss = 0.5 * quadratic**2 + delta * linear - - return corner_loss.mean(dim=1) - - def get_bboxes(self, - rois, - cls_score, - bbox_pred, - class_labels, - class_pred, - img_metas, - cfg=None): - """Generate bboxes from bbox head predictions. - - Args: - rois (torch.Tensor): Roi bounding boxes. - cls_score (torch.Tensor): Scores of bounding boxes. - bbox_pred (torch.Tensor): Bounding boxes predictions - class_labels (torch.Tensor): Label of classes - class_pred (torch.Tensor): Score for nms. - img_metas (list[dict]): Point cloud and image's meta info. 
- cfg (:obj:`ConfigDict`): Testing config. - - Returns: - list[tuple]: Decoded bbox, scores and labels after nms. - """ - roi_batch_id = rois[..., 0] - roi_boxes = rois[..., 1:] # boxes without batch id - batch_size = int(roi_batch_id.max().item() + 1) - - # decode boxes - roi_ry = roi_boxes[..., 6].view(-1) - roi_xyz = roi_boxes[..., 0:3].view(-1, 3) - local_roi_boxes = roi_boxes.clone().detach() - local_roi_boxes[..., 0:3] = 0 - rcnn_boxes3d = self.bbox_coder.decode(local_roi_boxes, bbox_pred) - rcnn_boxes3d[..., 0:3] = rotation_3d_in_axis( - rcnn_boxes3d[..., 0:3].unsqueeze(1), roi_ry, axis=2).squeeze(1) - rcnn_boxes3d[:, 0:3] += roi_xyz - - # post processing - result_list = [] - for batch_id in range(batch_size): - cur_class_labels = class_labels[batch_id] - cur_cls_score = cls_score[roi_batch_id == batch_id].view(-1) - - cur_box_prob = class_pred[batch_id] - cur_rcnn_boxes3d = rcnn_boxes3d[roi_batch_id == batch_id] - keep = self.multi_class_nms(cur_box_prob, cur_rcnn_boxes3d, - cfg.score_thr, cfg.nms_thr, - img_metas[batch_id], - cfg.use_rotate_nms) - selected_bboxes = cur_rcnn_boxes3d[keep] - selected_label_preds = cur_class_labels[keep] - selected_scores = cur_cls_score[keep] - - result_list.append( - (img_metas[batch_id]['box_type_3d'](selected_bboxes, - self.bbox_coder.code_size), - selected_scores, selected_label_preds)) - return result_list - - def multi_class_nms(self, - box_probs, - box_preds, - score_thr, - nms_thr, - input_meta, - use_rotate_nms=True): - """Multi-class NMS for box head. - - Note: - This function has large overlap with the `box3d_multiclass_nms` - implemented in `mmdet3d.core.post_processing`. We are considering - merging these two functions in the future. - - Args: - box_probs (torch.Tensor): Predicted boxes probabitilies in - shape (N,). - box_preds (torch.Tensor): Predicted boxes in shape (N, 7+C). - score_thr (float): Threshold of scores. - nms_thr (float): Threshold for NMS. - input_meta (dict): Meta information of the current sample. - use_rotate_nms (bool, optional): Whether to use rotated nms. - Defaults to True. - - Returns: - torch.Tensor: Selected indices. 
- """ - if use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - - assert box_probs.shape[ - 1] == self.num_classes, f'box_probs shape: {str(box_probs.shape)}' - selected_list = [] - selected_labels = [] - boxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - box_preds, self.bbox_coder.code_size).bev) - - score_thresh = score_thr if isinstance( - score_thr, list) else [score_thr for x in range(self.num_classes)] - nms_thresh = nms_thr if isinstance( - nms_thr, list) else [nms_thr for x in range(self.num_classes)] - for k in range(0, self.num_classes): - class_scores_keep = box_probs[:, k] >= score_thresh[k] - - if class_scores_keep.int().sum() > 0: - original_idxs = class_scores_keep.nonzero( - as_tuple=False).view(-1) - cur_boxes_for_nms = boxes_for_nms[class_scores_keep] - cur_rank_scores = box_probs[class_scores_keep, k] - - cur_selected = nms_func(cur_boxes_for_nms, cur_rank_scores, - nms_thresh[k]) - - if cur_selected.shape[0] == 0: - continue - selected_list.append(original_idxs[cur_selected]) - selected_labels.append( - torch.full([cur_selected.shape[0]], - k + 1, - dtype=torch.int64, - device=box_preds.device)) - - keep = torch.cat( - selected_list, dim=0) if len(selected_list) > 0 else [] - return keep diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/point_rcnn_bbox_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/point_rcnn_bbox_head.py deleted file mode 100644 index df469215..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/bbox_heads/point_rcnn_bbox_head.py +++ /dev/null @@ -1,575 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.cnn import ConvModule, normal_init -from mmcv.cnn.bricks import build_conv_layer -from mmcv.runner import BaseModule -from torch import nn as nn - -from mmdet3d.core.bbox.structures import (LiDARInstance3DBoxes, - rotation_3d_in_axis, xywhr2xyxyr) -from mmdet3d.core.post_processing import nms_bev, nms_normal_bev -from mmdet3d.models.builder import HEADS, build_loss -from mmdet3d.ops import build_sa_module -from mmdet.core import build_bbox_coder, multi_apply - - -@HEADS.register_module() -class PointRCNNBboxHead(BaseModule): - """PointRCNN RoI Bbox head. - - Args: - num_classes (int): The number of classes to prediction. - in_channels (int): Input channels of point features. - mlp_channels (list[int]): the number of mlp channels - pred_layer_cfg (dict, optional): Config of classfication and - regression prediction layers. Defaults to None. - num_points (tuple, optional): The number of points which each SA - module samples. Defaults to (128, 32, -1). - radius (tuple, optional): Sampling radius of each SA module. - Defaults to (0.2, 0.4, 100). - num_samples (tuple, optional): The number of samples for ball query - in each SA module. Defaults to (64, 64, 64). - sa_channels (tuple, optional): Out channels of each mlp in SA module. - Defaults to ((128, 128, 128), (128, 128, 256), (256, 256, 512)). - bbox_coder (dict, optional): Config dict of box coders. - Defaults to dict(type='DeltaXYZWLHRBBoxCoder'). - sa_cfg (dict, optional): Config of set abstraction module, which may - contain the following keys and values: - - - pool_mod (str): Pool method ('max' or 'avg') for SA modules. - - use_xyz (bool): Whether to use xyz as a part of features. - - normalize_xyz (bool): Whether to normalize xyz with radii in - each SA module. - Defaults to dict(type='PointSAModule', pool_mod='max', - use_xyz=True). 
- conv_cfg (dict, optional): Config dict of convolutional layers. - Defaults to dict(type='Conv1d'). - norm_cfg (dict, optional): Config dict of normalization layers. - Defaults to dict(type='BN1d'). - act_cfg (dict, optional): Config dict of activation layers. - Defaults to dict(type='ReLU'). - bias (str, optional): Type of bias. Defaults to 'auto'. - loss_bbox (dict, optional): Config of regression loss function. - Defaults to dict(type='SmoothL1Loss', beta=1.0 / 9.0, - reduction='sum', loss_weight=1.0). - loss_cls (dict, optional): Config of classification loss function. - Defaults to dict(type='CrossEntropyLoss', use_sigmoid=True, - reduction='sum', loss_weight=1.0). - with_corner_loss (bool, optional): Whether using corner loss. - Defaults to True. - init_cfg (dict, optional): Config of initialization. Defaults to None. - """ - - def __init__( - self, - num_classes, - in_channels, - mlp_channels, - pred_layer_cfg=None, - num_points=(128, 32, -1), - radius=(0.2, 0.4, 100), - num_samples=(64, 64, 64), - sa_channels=((128, 128, 128), (128, 128, 256), (256, 256, 512)), - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - sa_cfg=dict(type='PointSAModule', pool_mod='max', use_xyz=True), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - bias='auto', - loss_bbox=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0), - with_corner_loss=True, - init_cfg=None): - super(PointRCNNBboxHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.num_sa = len(sa_channels) - self.with_corner_loss = with_corner_loss - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.bias = bias - - self.loss_bbox = build_loss(loss_bbox) - self.loss_cls = build_loss(loss_cls) - self.bbox_coder = build_bbox_coder(bbox_coder) - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - - self.in_channels = in_channels - mlp_channels = [self.in_channels] + mlp_channels - shared_mlps = nn.Sequential() - for i in range(len(mlp_channels) - 1): - shared_mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - inplace=False, - conv_cfg=dict(type='Conv2d'))) - self.xyz_up_layer = nn.Sequential(*shared_mlps) - - c_out = mlp_channels[-1] - self.merge_down_layer = ConvModule( - c_out * 2, - c_out, - kernel_size=(1, 1), - stride=(1, 1), - inplace=False, - conv_cfg=dict(type='Conv2d')) - - pre_channels = c_out - - self.SA_modules = nn.ModuleList() - sa_in_channel = pre_channels - - for sa_index in range(self.num_sa): - cur_sa_mlps = list(sa_channels[sa_index]) - cur_sa_mlps = [sa_in_channel] + cur_sa_mlps - sa_out_channel = cur_sa_mlps[-1] - - cur_num_points = num_points[sa_index] - if cur_num_points <= 0: - cur_num_points = None - self.SA_modules.append( - build_sa_module( - num_point=cur_num_points, - radius=radius[sa_index], - num_sample=num_samples[sa_index], - mlp_channels=cur_sa_mlps, - cfg=sa_cfg)) - sa_in_channel = sa_out_channel - self.cls_convs = self._add_conv_branch( - pred_layer_cfg.in_channels, pred_layer_cfg.cls_conv_channels) - self.reg_convs = self._add_conv_branch( - pred_layer_cfg.in_channels, pred_layer_cfg.reg_conv_channels) - - prev_channel = pred_layer_cfg.cls_conv_channels[-1] - self.conv_cls = build_conv_layer( - self.conv_cfg, - in_channels=prev_channel, - out_channels=self.num_classes, - kernel_size=1) - 
prev_channel = pred_layer_cfg.reg_conv_channels[-1] - self.conv_reg = build_conv_layer( - self.conv_cfg, - in_channels=prev_channel, - out_channels=self.bbox_coder.code_size * self.num_classes, - kernel_size=1) - - if init_cfg is None: - self.init_cfg = dict(type='Xavier', layer=['Conv2d', 'Conv1d']) - - def _add_conv_branch(self, in_channels, conv_channels): - """Add shared or separable branch. - - Args: - in_channels (int): Input feature channel. - conv_channels (tuple): Middle feature channels. - """ - conv_spec = [in_channels] + list(conv_channels) - # add branch specific conv layers - conv_layers = nn.Sequential() - for i in range(len(conv_spec) - 1): - conv_layers.add_module( - f'layer{i}', - ConvModule( - conv_spec[i], - conv_spec[i + 1], - kernel_size=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=self.bias, - inplace=True)) - return conv_layers - - def init_weights(self): - """Initialize weights of the head.""" - super().init_weights() - for m in self.modules(): - if isinstance(m, nn.Conv2d) or isinstance(m, nn.Conv1d): - if m.bias is not None: - nn.init.constant_(m.bias, 0) - normal_init(self.conv_reg.weight, mean=0, std=0.001) - - def forward(self, feats): - """Forward pass. - - Args: - feats (torch.Torch): Features from RCNN modules. - - Returns: - tuple[torch.Tensor]: Score of class and bbox predictions. - """ - input_data = feats.clone().detach() - xyz_input = input_data[..., 0:self.in_channels].transpose( - 1, 2).unsqueeze(dim=3).contiguous().clone().detach() - xyz_features = self.xyz_up_layer(xyz_input) - rpn_features = input_data[..., self.in_channels:].transpose( - 1, 2).unsqueeze(dim=3) - merged_features = torch.cat((xyz_features, rpn_features), dim=1) - merged_features = self.merge_down_layer(merged_features) - l_xyz, l_features = [input_data[..., 0:3].contiguous()], \ - [merged_features.squeeze(dim=3)] - for i in range(len(self.SA_modules)): - li_xyz, li_features, cur_indices = \ - self.SA_modules[i](l_xyz[i], l_features[i]) - l_xyz.append(li_xyz) - l_features.append(li_features) - - shared_features = l_features[-1] - x_cls = shared_features - x_reg = shared_features - x_cls = self.cls_convs(x_cls) - rcnn_cls = self.conv_cls(x_cls) - x_reg = self.reg_convs(x_reg) - rcnn_reg = self.conv_reg(x_reg) - rcnn_cls = rcnn_cls.transpose(1, 2).contiguous().squeeze(dim=1) - rcnn_reg = rcnn_reg.transpose(1, 2).contiguous().squeeze(dim=1) - return rcnn_cls, rcnn_reg - - def loss(self, cls_score, bbox_pred, rois, labels, bbox_targets, - pos_gt_bboxes, reg_mask, label_weights, bbox_weights): - """Computing losses. - - Args: - cls_score (torch.Tensor): Scores of each RoI. - bbox_pred (torch.Tensor): Predictions of bboxes. - rois (torch.Tensor): RoI bboxes. - labels (torch.Tensor): Labels of class. - bbox_targets (torch.Tensor): Target of positive bboxes. - pos_gt_bboxes (torch.Tensor): Ground truths of positive bboxes. - reg_mask (torch.Tensor): Mask for positive bboxes. - label_weights (torch.Tensor): Weights of class loss. - bbox_weights (torch.Tensor): Weights of bbox loss. - - Returns: - dict: Computed losses. - - - loss_cls (torch.Tensor): Loss of classes. - - loss_bbox (torch.Tensor): Loss of bboxes. - - loss_corner (torch.Tensor): Loss of corners. 
- """ - losses = dict() - rcnn_batch_size = cls_score.shape[0] - # calculate class loss - cls_flat = cls_score.view(-1) - loss_cls = self.loss_cls(cls_flat, labels, label_weights) - losses['loss_cls'] = loss_cls - - # calculate regression loss - code_size = self.bbox_coder.code_size - pos_inds = (reg_mask > 0) - - pos_bbox_pred = bbox_pred.view(rcnn_batch_size, -1)[pos_inds].clone() - bbox_weights_flat = bbox_weights[pos_inds].view(-1, 1).repeat( - 1, pos_bbox_pred.shape[-1]) - loss_bbox = self.loss_bbox( - pos_bbox_pred.unsqueeze(dim=0), - bbox_targets.unsqueeze(dim=0).detach(), - bbox_weights_flat.unsqueeze(dim=0)) - losses['loss_bbox'] = loss_bbox - - if pos_inds.any() != 0 and self.with_corner_loss: - rois = rois.detach() - pos_roi_boxes3d = rois[..., 1:].view(-1, code_size)[pos_inds] - pos_roi_boxes3d = pos_roi_boxes3d.view(-1, code_size) - batch_anchors = pos_roi_boxes3d.clone().detach() - pos_rois_rotation = pos_roi_boxes3d[..., 6].view(-1) - roi_xyz = pos_roi_boxes3d[..., 0:3].view(-1, 3) - batch_anchors[..., 0:3] = 0 - # decode boxes - pred_boxes3d = self.bbox_coder.decode( - batch_anchors, - pos_bbox_pred.view(-1, code_size)).view(-1, code_size) - - pred_boxes3d[..., 0:3] = rotation_3d_in_axis( - pred_boxes3d[..., 0:3].unsqueeze(1), (pos_rois_rotation), - axis=2).squeeze(1) - - pred_boxes3d[:, 0:3] += roi_xyz - - # calculate corner loss - loss_corner = self.get_corner_loss_lidar(pred_boxes3d, - pos_gt_bboxes) - - losses['loss_corner'] = loss_corner - else: - losses['loss_corner'] = loss_cls.new_tensor(0) - - return losses - - def get_corner_loss_lidar(self, pred_bbox3d, gt_bbox3d, delta=1.0): - """Calculate corner loss of given boxes. - - Args: - pred_bbox3d (torch.FloatTensor): Predicted boxes in shape (N, 7). - gt_bbox3d (torch.FloatTensor): Ground truth boxes in shape (N, 7). - delta (float, optional): huber loss threshold. Defaults to 1.0 - - Returns: - torch.FloatTensor: Calculated corner loss in shape (N). - """ - assert pred_bbox3d.shape[0] == gt_bbox3d.shape[0] - - # This is a little bit hack here because we assume the box for - # PointRCNN is in LiDAR coordinates - - gt_boxes_structure = LiDARInstance3DBoxes(gt_bbox3d) - pred_box_corners = LiDARInstance3DBoxes(pred_bbox3d).corners - gt_box_corners = gt_boxes_structure.corners - - # This flip only changes the heading direction of GT boxes - gt_bbox3d_flip = gt_boxes_structure.clone() - gt_bbox3d_flip.tensor[:, 6] += np.pi - gt_box_corners_flip = gt_bbox3d_flip.corners - - corner_dist = torch.min( - torch.norm(pred_box_corners - gt_box_corners, dim=2), - torch.norm(pred_box_corners - gt_box_corners_flip, dim=2)) - # huber loss - abs_error = corner_dist.abs() - quadratic = abs_error.clamp(max=delta) - linear = (abs_error - quadratic) - corner_loss = 0.5 * quadratic**2 + delta * linear - return corner_loss.mean(dim=1) - - def get_targets(self, sampling_results, rcnn_train_cfg, concat=True): - """Generate targets. - - Args: - sampling_results (list[:obj:`SamplingResult`]): - Sampled results from rois. - rcnn_train_cfg (:obj:`ConfigDict`): Training config of rcnn. - concat (bool, optional): Whether to concatenate targets between - batches. Defaults to True. - - Returns: - tuple[torch.Tensor]: Targets of boxes and class prediction. 
- """ - pos_bboxes_list = [res.pos_bboxes for res in sampling_results] - pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results] - iou_list = [res.iou for res in sampling_results] - targets = multi_apply( - self._get_target_single, - pos_bboxes_list, - pos_gt_bboxes_list, - iou_list, - cfg=rcnn_train_cfg) - (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) = targets - - if concat: - label = torch.cat(label, 0) - bbox_targets = torch.cat(bbox_targets, 0) - pos_gt_bboxes = torch.cat(pos_gt_bboxes, 0) - reg_mask = torch.cat(reg_mask, 0) - - label_weights = torch.cat(label_weights, 0) - label_weights /= torch.clamp(label_weights.sum(), min=1.0) - - bbox_weights = torch.cat(bbox_weights, 0) - bbox_weights /= torch.clamp(bbox_weights.sum(), min=1.0) - - return (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - - def _get_target_single(self, pos_bboxes, pos_gt_bboxes, ious, cfg): - """Generate training targets for a single sample. - - Args: - pos_bboxes (torch.Tensor): Positive boxes with shape - (N, 7). - pos_gt_bboxes (torch.Tensor): Ground truth boxes with shape - (M, 7). - ious (torch.Tensor): IoU between `pos_bboxes` and `pos_gt_bboxes` - in shape (N, M). - cfg (dict): Training configs. - - Returns: - tuple[torch.Tensor]: Target for positive boxes. - (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - """ - cls_pos_mask = ious > cfg.cls_pos_thr - cls_neg_mask = ious < cfg.cls_neg_thr - interval_mask = (cls_pos_mask == 0) & (cls_neg_mask == 0) - # iou regression target - label = (cls_pos_mask > 0).float() - label[interval_mask] = (ious[interval_mask] - cfg.cls_neg_thr) / \ - (cfg.cls_pos_thr - cfg.cls_neg_thr) - # label weights - label_weights = (label >= 0).float() - # box regression target - reg_mask = pos_bboxes.new_zeros(ious.size(0)).long() - reg_mask[0:pos_gt_bboxes.size(0)] = 1 - bbox_weights = (reg_mask > 0).float() - if reg_mask.bool().any(): - pos_gt_bboxes_ct = pos_gt_bboxes.clone().detach() - roi_center = pos_bboxes[..., 0:3] - roi_ry = pos_bboxes[..., 6] % (2 * np.pi) - - # canonical transformation - pos_gt_bboxes_ct[..., 0:3] -= roi_center - pos_gt_bboxes_ct[..., 6] -= roi_ry - pos_gt_bboxes_ct[..., 0:3] = rotation_3d_in_axis( - pos_gt_bboxes_ct[..., 0:3].unsqueeze(1), -(roi_ry), - axis=2).squeeze(1) - - # flip orientation if gt have opposite orientation - ry_label = pos_gt_bboxes_ct[..., 6] % (2 * np.pi) # 0 ~ 2pi - is_opposite = (ry_label > np.pi * 0.5) & (ry_label < np.pi * 1.5) - ry_label[is_opposite] = (ry_label[is_opposite] + np.pi) % ( - 2 * np.pi) # (0 ~ pi/2, 3pi/2 ~ 2pi) - flag = ry_label > np.pi - ry_label[flag] = ry_label[flag] - np.pi * 2 # (-pi/2, pi/2) - ry_label = torch.clamp(ry_label, min=-np.pi / 2, max=np.pi / 2) - pos_gt_bboxes_ct[..., 6] = ry_label - - rois_anchor = pos_bboxes.clone().detach() - rois_anchor[:, 0:3] = 0 - rois_anchor[:, 6] = 0 - bbox_targets = self.bbox_coder.encode(rois_anchor, - pos_gt_bboxes_ct) - else: - # no fg bbox - bbox_targets = pos_gt_bboxes.new_empty((0, 7)) - - return (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - - def get_bboxes(self, - rois, - cls_score, - bbox_pred, - class_labels, - img_metas, - cfg=None): - """Generate bboxes from bbox head predictions. - - Args: - rois (torch.Tensor): RoI bounding boxes. - cls_score (torch.Tensor): Scores of bounding boxes. 
- bbox_pred (torch.Tensor): Bounding boxes predictions - class_labels (torch.Tensor): Label of classes - img_metas (list[dict]): Point cloud and image's meta info. - cfg (:obj:`ConfigDict`, optional): Testing config. - Defaults to None. - - Returns: - list[tuple]: Decoded bbox, scores and labels after nms. - """ - roi_batch_id = rois[..., 0] - roi_boxes = rois[..., 1:] # boxes without batch id - batch_size = int(roi_batch_id.max().item() + 1) - - # decode boxes - roi_ry = roi_boxes[..., 6].view(-1) - roi_xyz = roi_boxes[..., 0:3].view(-1, 3) - local_roi_boxes = roi_boxes.clone().detach() - local_roi_boxes[..., 0:3] = 0 - rcnn_boxes3d = self.bbox_coder.decode(local_roi_boxes, bbox_pred) - rcnn_boxes3d[..., 0:3] = rotation_3d_in_axis( - rcnn_boxes3d[..., 0:3].unsqueeze(1), roi_ry, axis=2).squeeze(1) - rcnn_boxes3d[:, 0:3] += roi_xyz - - # post processing - result_list = [] - for batch_id in range(batch_size): - cur_class_labels = class_labels[batch_id] - cur_cls_score = cls_score[roi_batch_id == batch_id].view(-1) - - cur_box_prob = cur_cls_score.unsqueeze(1) - cur_rcnn_boxes3d = rcnn_boxes3d[roi_batch_id == batch_id] - keep = self.multi_class_nms(cur_box_prob, cur_rcnn_boxes3d, - cfg.score_thr, cfg.nms_thr, - img_metas[batch_id], - cfg.use_rotate_nms) - selected_bboxes = cur_rcnn_boxes3d[keep] - selected_label_preds = cur_class_labels[keep] - selected_scores = cur_cls_score[keep] - - result_list.append( - (img_metas[batch_id]['box_type_3d'](selected_bboxes, - self.bbox_coder.code_size), - selected_scores, selected_label_preds)) - return result_list - - def multi_class_nms(self, - box_probs, - box_preds, - score_thr, - nms_thr, - input_meta, - use_rotate_nms=True): - """Multi-class NMS for box head. - - Note: - This function has large overlap with the `box3d_multiclass_nms` - implemented in `mmdet3d.core.post_processing`. We are considering - merging these two functions in the future. - - Args: - box_probs (torch.Tensor): Predicted boxes probabilities in - shape (N,). - box_preds (torch.Tensor): Predicted boxes in shape (N, 7+C). - score_thr (float): Threshold of scores. - nms_thr (float): Threshold for NMS. - input_meta (dict): Meta information of the current sample. - use_rotate_nms (bool, optional): Whether to use rotated nms. - Defaults to True. - - Returns: - torch.Tensor: Selected indices. 
- """ - if use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - - assert box_probs.shape[ - 1] == self.num_classes, f'box_probs shape: {str(box_probs.shape)}' - selected_list = [] - selected_labels = [] - boxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - box_preds, self.bbox_coder.code_size).bev) - - score_thresh = score_thr if isinstance( - score_thr, list) else [score_thr for x in range(self.num_classes)] - nms_thresh = nms_thr if isinstance( - nms_thr, list) else [nms_thr for x in range(self.num_classes)] - for k in range(0, self.num_classes): - class_scores_keep = box_probs[:, k] >= score_thresh[k] - - if class_scores_keep.int().sum() > 0: - original_idxs = class_scores_keep.nonzero( - as_tuple=False).view(-1) - cur_boxes_for_nms = boxes_for_nms[class_scores_keep] - cur_rank_scores = box_probs[class_scores_keep, k] - - cur_selected = nms_func(cur_boxes_for_nms, cur_rank_scores, - nms_thresh[k]) - - if cur_selected.shape[0] == 0: - continue - selected_list.append(original_idxs[cur_selected]) - selected_labels.append( - torch.full([cur_selected.shape[0]], - k + 1, - dtype=torch.int64, - device=box_preds.device)) - - keep = torch.cat( - selected_list, dim=0) if len(selected_list) > 0 else [] - return keep diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/h3d_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/h3d_roi_head.py deleted file mode 100644 index b6b95972..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/h3d_roi_head.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet3d.core.bbox import bbox3d2result -from ..builder import HEADS, build_head -from .base_3droi_head import Base3DRoIHead - - -@HEADS.register_module() -class H3DRoIHead(Base3DRoIHead): - """H3D roi head for H3DNet. - - Args: - primitive_list (List): Configs of primitive heads. - bbox_head (ConfigDict): Config of bbox_head. - train_cfg (ConfigDict): Training config. - test_cfg (ConfigDict): Testing config. - """ - - def __init__(self, - primitive_list, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(H3DRoIHead, self).__init__( - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - # Primitive module - assert len(primitive_list) == 3 - self.primitive_z = build_head(primitive_list[0]) - self.primitive_xy = build_head(primitive_list[1]) - self.primitive_line = build_head(primitive_list[2]) - - def init_mask_head(self): - """Initialize mask head, skip since ``H3DROIHead`` does not have - one.""" - pass - - def init_bbox_head(self, bbox_head): - """Initialize box head.""" - bbox_head['train_cfg'] = self.train_cfg - bbox_head['test_cfg'] = self.test_cfg - self.bbox_head = build_head(bbox_head) - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - pass - - def forward_train(self, - feats_dict, - img_metas, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask, - pts_instance_mask, - gt_bboxes_ignore=None): - """Training forward function of PartAggregationROIHead. - - Args: - feats_dict (dict): Contains features from the first stage. - img_metas (list[dict]): Contain pcd and img's meta info. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. 
- pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding boxes to ignore. - - Returns: - dict: losses from each head. - """ - losses = dict() - - sample_mod = self.train_cfg.sample_mod - assert sample_mod in ['vote', 'seed', 'random'] - result_z = self.primitive_z(feats_dict, sample_mod) - feats_dict.update(result_z) - - result_xy = self.primitive_xy(feats_dict, sample_mod) - feats_dict.update(result_xy) - - result_line = self.primitive_line(feats_dict, sample_mod) - feats_dict.update(result_line) - - primitive_loss_inputs = (feats_dict, points, gt_bboxes_3d, - gt_labels_3d, pts_semantic_mask, - pts_instance_mask, img_metas, - gt_bboxes_ignore) - - loss_z = self.primitive_z.loss(*primitive_loss_inputs) - losses.update(loss_z) - - loss_xy = self.primitive_xy.loss(*primitive_loss_inputs) - losses.update(loss_xy) - - loss_line = self.primitive_line.loss(*primitive_loss_inputs) - losses.update(loss_line) - - targets = feats_dict.pop('targets') - - bbox_results = self.bbox_head(feats_dict, sample_mod) - - feats_dict.update(bbox_results) - bbox_loss = self.bbox_head.loss(feats_dict, points, gt_bboxes_3d, - gt_labels_3d, pts_semantic_mask, - pts_instance_mask, img_metas, targets, - gt_bboxes_ignore) - losses.update(bbox_loss) - - return losses - - def simple_test(self, feats_dict, img_metas, points, rescale=False): - """Simple testing forward function of PartAggregationROIHead. - - Note: - This function assumes that the batch size is 1 - - Args: - feats_dict (dict): Contains features from the first stage. - img_metas (list[dict]): Contain pcd and img's meta info. - points (torch.Tensor): Input points. - rescale (bool): Whether to rescale results. - - Returns: - dict: Bbox results of one frame. - """ - sample_mod = self.test_cfg.sample_mod - assert sample_mod in ['vote', 'seed', 'random'] - - result_z = self.primitive_z(feats_dict, sample_mod) - feats_dict.update(result_z) - - result_xy = self.primitive_xy(feats_dict, sample_mod) - feats_dict.update(result_xy) - - result_line = self.primitive_line(feats_dict, sample_mod) - feats_dict.update(result_line) - - bbox_preds = self.bbox_head(feats_dict, sample_mod) - feats_dict.update(bbox_preds) - bbox_list = self.bbox_head.get_bboxes( - points, - feats_dict, - img_metas, - rescale=rescale, - suffix='_optimized') - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/__init__.py deleted file mode 100644 index 0aa11569..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .pointwise_semantic_head import PointwiseSemanticHead -from .primitive_head import PrimitiveHead - -__all__ = ['PointwiseSemanticHead', 'PrimitiveHead'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/pointwise_semantic_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/pointwise_semantic_head.py deleted file mode 100644 index fc0bcf5b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/pointwise_semantic_head.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from mmcv.runner import BaseModule -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.core.bbox.structures import rotation_3d_in_axis -from mmdet3d.models.builder import HEADS, build_loss -from mmdet.core import multi_apply - - -@HEADS.register_module() -class PointwiseSemanticHead(BaseModule): - """Semantic segmentation head for point-wise segmentation. - - Predict point-wise segmentation and part regression results for PartA2. - See `paper `_ for more details. - - Args: - in_channels (int): The number of input channel. - num_classes (int): The number of class. - extra_width (float): Boxes enlarge width. - loss_seg (dict): Config of segmentation loss. - loss_part (dict): Config of part prediction loss. - """ - - def __init__(self, - in_channels, - num_classes=3, - extra_width=0.2, - seg_score_thr=0.3, - init_cfg=None, - loss_seg=dict( - type='FocalLoss', - use_sigmoid=True, - reduction='sum', - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_part=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0)): - super(PointwiseSemanticHead, self).__init__(init_cfg=init_cfg) - self.extra_width = extra_width - self.num_classes = num_classes - self.seg_score_thr = seg_score_thr - self.seg_cls_layer = nn.Linear(in_channels, 1, bias=True) - self.seg_reg_layer = nn.Linear(in_channels, 3, bias=True) - - self.loss_seg = build_loss(loss_seg) - self.loss_part = build_loss(loss_part) - - def forward(self, x): - """Forward pass. - - Args: - x (torch.Tensor): Features from the first stage. - - Returns: - dict: Part features, segmentation and part predictions. - - - seg_preds (torch.Tensor): Segment predictions. - - part_preds (torch.Tensor): Part predictions. - - part_feats (torch.Tensor): Feature predictions. - """ - seg_preds = self.seg_cls_layer(x) # (N, 1) - part_preds = self.seg_reg_layer(x) # (N, 3) - - seg_scores = torch.sigmoid(seg_preds).detach() - seg_mask = (seg_scores > self.seg_score_thr) - - part_offsets = torch.sigmoid(part_preds).clone().detach() - part_offsets[seg_mask.view(-1) == 0] = 0 - part_feats = torch.cat((part_offsets, seg_scores), - dim=-1) # shape (npoints, 4) - return dict( - seg_preds=seg_preds, part_preds=part_preds, part_feats=part_feats) - - def get_targets_single(self, voxel_centers, gt_bboxes_3d, gt_labels_3d): - """generate segmentation and part prediction targets for a single - sample. - - Args: - voxel_centers (torch.Tensor): The center of voxels in shape - (voxel_num, 3). - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth boxes in - shape (box_num, 7). - gt_labels_3d (torch.Tensor): Class labels of ground truths in - shape (box_num). 
- - Returns: - tuple[torch.Tensor]: Segmentation targets with shape [voxel_num] - part prediction targets with shape [voxel_num, 3] - """ - gt_bboxes_3d = gt_bboxes_3d.to(voxel_centers.device) - enlarged_gt_boxes = gt_bboxes_3d.enlarged_box(self.extra_width) - - part_targets = voxel_centers.new_zeros((voxel_centers.shape[0], 3), - dtype=torch.float32) - box_idx = gt_bboxes_3d.points_in_boxes_part(voxel_centers) - enlarge_box_idx = enlarged_gt_boxes.points_in_boxes_part( - voxel_centers).long() - - gt_labels_pad = F.pad( - gt_labels_3d, (1, 0), mode='constant', value=self.num_classes) - seg_targets = gt_labels_pad[(box_idx.long() + 1)] - fg_pt_flag = box_idx > -1 - ignore_flag = fg_pt_flag ^ (enlarge_box_idx > -1) - seg_targets[ignore_flag] = -1 - - for k in range(len(gt_bboxes_3d)): - k_box_flag = box_idx == k - # no point in current box (caused by velodyne reduce) - if not k_box_flag.any(): - continue - fg_voxels = voxel_centers[k_box_flag] - transformed_voxels = fg_voxels - gt_bboxes_3d.bottom_center[k] - transformed_voxels = rotation_3d_in_axis( - transformed_voxels.unsqueeze(0), - -gt_bboxes_3d.yaw[k].view(1), - axis=2) - part_targets[k_box_flag] = transformed_voxels / gt_bboxes_3d.dims[ - k] + voxel_centers.new_tensor([0.5, 0.5, 0]) - - part_targets = torch.clamp(part_targets, min=0) - return seg_targets, part_targets - - def get_targets(self, voxels_dict, gt_bboxes_3d, gt_labels_3d): - """generate segmentation and part prediction targets. - - Args: - voxel_centers (torch.Tensor): The center of voxels in shape - (voxel_num, 3). - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth boxes in - shape (box_num, 7). - gt_labels_3d (torch.Tensor): Class labels of ground truths in - shape (box_num). - - Returns: - dict: Prediction targets - - - seg_targets (torch.Tensor): Segmentation targets - with shape [voxel_num]. - - part_targets (torch.Tensor): Part prediction targets - with shape [voxel_num, 3]. - """ - batch_size = len(gt_labels_3d) - voxel_center_list = [] - for idx in range(batch_size): - coords_idx = voxels_dict['coors'][:, 0] == idx - voxel_center_list.append(voxels_dict['voxel_centers'][coords_idx]) - - seg_targets, part_targets = multi_apply(self.get_targets_single, - voxel_center_list, - gt_bboxes_3d, gt_labels_3d) - seg_targets = torch.cat(seg_targets, dim=0) - part_targets = torch.cat(part_targets, dim=0) - return dict(seg_targets=seg_targets, part_targets=part_targets) - - def loss(self, semantic_results, semantic_targets): - """Calculate point-wise segmentation and part prediction losses. - - Args: - semantic_results (dict): Results from semantic head. - - - seg_preds: Segmentation predictions. - - part_preds: Part predictions. - - semantic_targets (dict): Targets of semantic results. - - - seg_preds: Segmentation targets. - - part_preds: Part targets. - - Returns: - dict: Loss of segmentation and part prediction. - - - loss_seg (torch.Tensor): Segmentation prediction loss. - - loss_part (torch.Tensor): Part prediction loss. 
- """ - seg_preds = semantic_results['seg_preds'] - part_preds = semantic_results['part_preds'] - seg_targets = semantic_targets['seg_targets'] - part_targets = semantic_targets['part_targets'] - - pos_mask = (seg_targets > -1) & (seg_targets < self.num_classes) - binary_seg_target = pos_mask.long() - pos = pos_mask.float() - neg = (seg_targets == self.num_classes).float() - seg_weights = pos + neg - pos_normalizer = pos.sum() - seg_weights = seg_weights / torch.clamp(pos_normalizer, min=1.0) - loss_seg = self.loss_seg(seg_preds, binary_seg_target, seg_weights) - - if pos_normalizer > 0: - loss_part = self.loss_part(part_preds[pos_mask], - part_targets[pos_mask]) - else: - # fake a part loss - loss_part = loss_seg.new_tensor(0) - - return dict(loss_seg=loss_seg, loss_part=loss_part) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/primitive_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/primitive_head.py deleted file mode 100644 index 4c9c28b3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/mask_heads/primitive_head.py +++ /dev/null @@ -1,966 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.ops import furthest_point_sample -from mmcv.runner import BaseModule -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.models.builder import HEADS, build_loss -from mmdet3d.models.model_utils import VoteModule -from mmdet3d.ops import build_sa_module -from mmdet.core import multi_apply - - -@HEADS.register_module() -class PrimitiveHead(BaseModule): - r"""Primitive head of `H3DNet `_. - - Args: - num_dims (int): The dimension of primitive semantic information. - num_classes (int): The number of class. - primitive_mode (str): The mode of primitive module, - available mode ['z', 'xy', 'line']. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for encoding and - decoding boxes. - train_cfg (dict): Config for training. - test_cfg (dict): Config for testing. - vote_module_cfg (dict): Config of VoteModule for point-wise votes. - vote_aggregation_cfg (dict): Config of vote aggregation layer. - feat_channels (tuple[int]): Convolution channels of - prediction layer. - upper_thresh (float): Threshold for line matching. - surface_thresh (float): Threshold for surface matching. - conv_cfg (dict): Config of convolution in prediction layer. - norm_cfg (dict): Config of BN in prediction layer. - objectness_loss (dict): Config of objectness loss. - center_loss (dict): Config of center loss. - semantic_loss (dict): Config of point-wise semantic segmentation loss. - """ - - def __init__(self, - num_dims, - num_classes, - primitive_mode, - train_cfg=None, - test_cfg=None, - vote_module_cfg=None, - vote_aggregation_cfg=None, - feat_channels=(128, 128), - upper_thresh=100.0, - surface_thresh=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=None, - center_loss=None, - semantic_reg_loss=None, - semantic_cls_loss=None, - init_cfg=None): - super(PrimitiveHead, self).__init__(init_cfg=init_cfg) - assert primitive_mode in ['z', 'xy', 'line'] - # The dimension of primitive semantic information. 
- self.num_dims = num_dims - self.num_classes = num_classes - self.primitive_mode = primitive_mode - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.gt_per_seed = vote_module_cfg['gt_per_seed'] - self.num_proposal = vote_aggregation_cfg['num_point'] - self.upper_thresh = upper_thresh - self.surface_thresh = surface_thresh - - self.objectness_loss = build_loss(objectness_loss) - self.center_loss = build_loss(center_loss) - self.semantic_reg_loss = build_loss(semantic_reg_loss) - self.semantic_cls_loss = build_loss(semantic_cls_loss) - - assert vote_aggregation_cfg['mlp_channels'][0] == vote_module_cfg[ - 'in_channels'] - - # Primitive existence flag prediction - self.flag_conv = ConvModule( - vote_module_cfg['conv_channels'][-1], - vote_module_cfg['conv_channels'][-1] // 2, - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True) - self.flag_pred = torch.nn.Conv1d( - vote_module_cfg['conv_channels'][-1] // 2, 2, 1) - - self.vote_module = VoteModule(**vote_module_cfg) - self.vote_aggregation = build_sa_module(vote_aggregation_cfg) - - prev_channel = vote_aggregation_cfg['mlp_channels'][-1] - conv_pred_list = list() - for k in range(len(feat_channels)): - conv_pred_list.append( - ConvModule( - prev_channel, - feat_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True)) - prev_channel = feat_channels[k] - self.conv_pred = nn.Sequential(*conv_pred_list) - - conv_out_channel = 3 + num_dims + num_classes - self.conv_pred.add_module('conv_out', - nn.Conv1d(prev_channel, conv_out_channel, 1)) - - def forward(self, feats_dict, sample_mod): - """Forward pass. - - Args: - feats_dict (dict): Feature dict from backbone. - sample_mod (str): Sample mode for vote aggregation layer. - valid modes are "vote", "seed" and "random". - - Returns: - dict: Predictions of primitive head. - """ - assert sample_mod in ['vote', 'seed', 'random'] - - seed_points = feats_dict['fp_xyz_net0'][-1] - seed_features = feats_dict['hd_feature'] - results = {} - - primitive_flag = self.flag_conv(seed_features) - primitive_flag = self.flag_pred(primitive_flag) - - results['pred_flag_' + self.primitive_mode] = primitive_flag - - # 1. generate vote_points from seed_points - vote_points, vote_features, _ = self.vote_module( - seed_points, seed_features) - results['vote_' + self.primitive_mode] = vote_points - results['vote_features_' + self.primitive_mode] = vote_features - - # 2. aggregate vote_points - if sample_mod == 'vote': - # use fps in vote_aggregation - sample_indices = None - elif sample_mod == 'seed': - # FPS on seed and choose the votes corresponding to the seeds - sample_indices = furthest_point_sample(seed_points, - self.num_proposal) - elif sample_mod == 'random': - # Random sampling from the votes - batch_size, num_seed = seed_points.shape[:2] - sample_indices = torch.randint( - 0, - num_seed, (batch_size, self.num_proposal), - dtype=torch.int32, - device=seed_points.device) - else: - raise NotImplementedError('Unsupported sample mod!') - - vote_aggregation_ret = self.vote_aggregation(vote_points, - vote_features, - sample_indices) - aggregated_points, features, aggregated_indices = vote_aggregation_ret - results['aggregated_points_' + self.primitive_mode] = aggregated_points - results['aggregated_features_' + self.primitive_mode] = features - results['aggregated_indices_' + - self.primitive_mode] = aggregated_indices - - # 3. predict primitive offsets and semantic information - predictions = self.conv_pred(features) - - # 4. 
decode predictions - decode_ret = self.primitive_decode_scores(predictions, - aggregated_points) - results.update(decode_ret) - - center, pred_ind = self.get_primitive_center( - primitive_flag, decode_ret['center_' + self.primitive_mode]) - - results['pred_' + self.primitive_mode + '_ind'] = pred_ind - results['pred_' + self.primitive_mode + '_center'] = center - return results - - def loss(self, - bbox_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - img_metas=None, - gt_bboxes_ignore=None): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of primitive head. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - img_metas (list[dict]): Contain pcd and img's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict: Losses of Primitive Head. - """ - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - bbox_preds) - - (point_mask, point_offset, gt_primitive_center, gt_primitive_semantic, - gt_sem_cls_label, gt_primitive_mask) = targets - - losses = {} - # Compute the loss of primitive existence flag - pred_flag = bbox_preds['pred_flag_' + self.primitive_mode] - flag_loss = self.objectness_loss(pred_flag, gt_primitive_mask.long()) - losses['flag_loss_' + self.primitive_mode] = flag_loss - - # calculate vote loss - vote_loss = self.vote_module.get_loss( - bbox_preds['seed_points'], - bbox_preds['vote_' + self.primitive_mode], - bbox_preds['seed_indices'], point_mask, point_offset) - losses['vote_loss_' + self.primitive_mode] = vote_loss - - num_proposal = bbox_preds['aggregated_points_' + - self.primitive_mode].shape[1] - primitive_center = bbox_preds['center_' + self.primitive_mode] - if self.primitive_mode != 'line': - primitive_semantic = bbox_preds['size_residuals_' + - self.primitive_mode].contiguous() - else: - primitive_semantic = None - semancitc_scores = bbox_preds['sem_cls_scores_' + - self.primitive_mode].transpose(2, 1) - - gt_primitive_mask = gt_primitive_mask / \ - (gt_primitive_mask.sum() + 1e-6) - center_loss, size_loss, sem_cls_loss = self.compute_primitive_loss( - primitive_center, primitive_semantic, semancitc_scores, - num_proposal, gt_primitive_center, gt_primitive_semantic, - gt_sem_cls_label, gt_primitive_mask) - losses['center_loss_' + self.primitive_mode] = center_loss - losses['size_loss_' + self.primitive_mode] = size_loss - losses['sem_loss_' + self.primitive_mode] = sem_cls_loss - - return losses - - def get_targets(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - bbox_preds=None): - """Generate targets of primitive head. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - pts_semantic_mask (list[torch.Tensor]): Point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): Point-wise instance - label of each batch. - bbox_preds (dict): Predictions from forward of primitive head. - - Returns: - tuple[torch.Tensor]: Targets of primitive head. 
- """ - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - - if pts_semantic_mask is None: - pts_semantic_mask = [None for i in range(len(gt_labels_3d))] - pts_instance_mask = [None for i in range(len(gt_labels_3d))] - - (point_mask, point_sem, - point_offset) = multi_apply(self.get_targets_single, points, - gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask) - - point_mask = torch.stack(point_mask) - point_sem = torch.stack(point_sem) - point_offset = torch.stack(point_offset) - - batch_size = point_mask.shape[0] - num_proposal = bbox_preds['aggregated_points_' + - self.primitive_mode].shape[1] - num_seed = bbox_preds['seed_points'].shape[1] - seed_inds = bbox_preds['seed_indices'].long() - seed_inds_expand = seed_inds.view(batch_size, num_seed, - 1).repeat(1, 1, 3) - seed_gt_votes = torch.gather(point_offset, 1, seed_inds_expand) - seed_gt_votes += bbox_preds['seed_points'] - gt_primitive_center = seed_gt_votes.view(batch_size * num_proposal, 1, - 3) - - seed_inds_expand_sem = seed_inds.view(batch_size, num_seed, 1).repeat( - 1, 1, 4 + self.num_dims) - seed_gt_sem = torch.gather(point_sem, 1, seed_inds_expand_sem) - gt_primitive_semantic = seed_gt_sem[:, :, 3:3 + self.num_dims].view( - batch_size * num_proposal, 1, self.num_dims).contiguous() - - gt_sem_cls_label = seed_gt_sem[:, :, -1].long() - - gt_votes_mask = torch.gather(point_mask, 1, seed_inds) - - return (point_mask, point_offset, gt_primitive_center, - gt_primitive_semantic, gt_sem_cls_label, gt_votes_mask) - - def get_targets_single(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None): - """Generate targets of primitive head for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. - gt_labels_3d (torch.Tensor): Labels of each batch. - pts_semantic_mask (torch.Tensor): Point-wise semantic - label of each batch. - pts_instance_mask (torch.Tensor): Point-wise instance - label of each batch. - - Returns: - tuple[torch.Tensor]: Targets of primitive head. 
- """ - gt_bboxes_3d = gt_bboxes_3d.to(points.device) - num_points = points.shape[0] - - point_mask = points.new_zeros(num_points) - # Offset to the primitive center - point_offset = points.new_zeros([num_points, 3]) - # Semantic information of primitive center - point_sem = points.new_zeros([num_points, 3 + self.num_dims + 1]) - - # Generate pts_semantic_mask and pts_instance_mask when they are None - if pts_semantic_mask is None or pts_instance_mask is None: - points2box_mask = gt_bboxes_3d.points_in_boxes_all(points) - assignment = points2box_mask.argmax(1) - background_mask = points2box_mask.max(1)[0] == 0 - - if pts_semantic_mask is None: - pts_semantic_mask = gt_labels_3d[assignment] - pts_semantic_mask[background_mask] = self.num_classes - - if pts_instance_mask is None: - pts_instance_mask = assignment - pts_instance_mask[background_mask] = gt_labels_3d.shape[0] - - instance_flag = torch.nonzero( - pts_semantic_mask != self.num_classes, as_tuple=False).squeeze(1) - instance_labels = pts_instance_mask[instance_flag].unique() - - with_yaw = gt_bboxes_3d.with_yaw - for i, i_instance in enumerate(instance_labels): - indices = instance_flag[pts_instance_mask[instance_flag] == - i_instance] - coords = points[indices, :3] - cur_cls_label = pts_semantic_mask[indices][0] - - # Bbox Corners - cur_corners = gt_bboxes_3d.corners[i] - - plane_lower_temp = points.new_tensor( - [0, 0, 1, -cur_corners[7, -1]]) - upper_points = cur_corners[[1, 2, 5, 6]] - refined_distance = (upper_points * plane_lower_temp[:3]).sum(dim=1) - - if self.check_horizon(upper_points) and \ - plane_lower_temp[0] + plane_lower_temp[1] < \ - self.train_cfg['lower_thresh']: - plane_lower = points.new_tensor( - [0, 0, 1, plane_lower_temp[-1]]) - plane_upper = points.new_tensor( - [0, 0, 1, -torch.mean(refined_distance)]) - else: - raise NotImplementedError('Only horizontal plane is support!') - - if self.check_dist(plane_upper, upper_points) is False: - raise NotImplementedError( - 'Mean distance to plane should be lower than thresh!') - - # Get the boundary points here - point2plane_dist, selected = self.match_point2plane( - plane_lower, coords) - - # Get bottom four lines - if self.primitive_mode == 'line': - point2line_matching = self.match_point2line( - coords[selected], cur_corners, with_yaw, mode='bottom') - - point_mask, point_offset, point_sem = \ - self._assign_primitive_line_targets(point_mask, - point_offset, - point_sem, - coords[selected], - indices[selected], - cur_cls_label, - point2line_matching, - cur_corners, - [1, 1, 0, 0], - with_yaw, - mode='bottom') - - # Set the surface labels here - if self.primitive_mode == 'z' and \ - selected.sum() > self.train_cfg['num_point'] and \ - point2plane_dist[selected].var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets(point_mask, - point_offset, - point_sem, - coords[selected], - indices[selected], - cur_cls_label, - cur_corners, - with_yaw, - mode='bottom') - - # Get the boundary points here - point2plane_dist, selected = self.match_point2plane( - plane_upper, coords) - - # Get top four lines - if self.primitive_mode == 'line': - point2line_matching = self.match_point2line( - coords[selected], cur_corners, with_yaw, mode='top') - - point_mask, point_offset, point_sem = \ - self._assign_primitive_line_targets(point_mask, - point_offset, - point_sem, - coords[selected], - indices[selected], - cur_cls_label, - point2line_matching, - cur_corners, - [1, 1, 0, 0], - with_yaw, - mode='top') - - if 
self.primitive_mode == 'z' and \ - selected.sum() > self.train_cfg['num_point'] and \ - point2plane_dist[selected].var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets(point_mask, - point_offset, - point_sem, - coords[selected], - indices[selected], - cur_cls_label, - cur_corners, - with_yaw, - mode='top') - - # Get left two lines - plane_left_temp = self._get_plane_fomulation( - cur_corners[2] - cur_corners[3], - cur_corners[3] - cur_corners[0], cur_corners[0]) - - right_points = cur_corners[[4, 5, 7, 6]] - plane_left_temp /= torch.norm(plane_left_temp[:3]) - refined_distance = (right_points * plane_left_temp[:3]).sum(dim=1) - - if plane_left_temp[2] < self.train_cfg['lower_thresh']: - plane_left = plane_left_temp - plane_right = points.new_tensor([ - plane_left_temp[0], plane_left_temp[1], plane_left_temp[2], - -refined_distance.mean() - ]) - else: - raise NotImplementedError( - 'Normal vector of the plane should be horizontal!') - - # Get the boundary points here - point2plane_dist, selected = self.match_point2plane( - plane_left, coords) - - # Get left four lines - if self.primitive_mode == 'line': - point2line_matching = self.match_point2line( - coords[selected], cur_corners, with_yaw, mode='left') - point_mask, point_offset, point_sem = \ - self._assign_primitive_line_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - point2line_matching[2:], cur_corners, [2, 2], - with_yaw, mode='left') - - if self.primitive_mode == 'xy' and \ - selected.sum() > self.train_cfg['num_point'] and \ - point2plane_dist[selected].var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - cur_corners, with_yaw, mode='left') - - # Get the boundary points here - point2plane_dist, selected = self.match_point2plane( - plane_right, coords) - - # Get right four lines - if self.primitive_mode == 'line': - point2line_matching = self.match_point2line( - coords[selected], cur_corners, with_yaw, mode='right') - - point_mask, point_offset, point_sem = \ - self._assign_primitive_line_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - point2line_matching[2:], cur_corners, [2, 2], - with_yaw, mode='right') - - if self.primitive_mode == 'xy' and \ - selected.sum() > self.train_cfg['num_point'] and \ - point2plane_dist[selected].var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - cur_corners, with_yaw, mode='right') - - plane_front_temp = self._get_plane_fomulation( - cur_corners[0] - cur_corners[4], - cur_corners[4] - cur_corners[5], cur_corners[5]) - - back_points = cur_corners[[3, 2, 7, 6]] - plane_front_temp /= torch.norm(plane_front_temp[:3]) - refined_distance = (back_points * plane_front_temp[:3]).sum(dim=1) - - if plane_front_temp[2] < self.train_cfg['lower_thresh']: - plane_front = plane_front_temp - plane_back = points.new_tensor([ - plane_front_temp[0], plane_front_temp[1], - plane_front_temp[2], -torch.mean(refined_distance) - ]) - else: - raise NotImplementedError( - 'Normal vector of the plane should be horizontal!') - - # Get the boundary points here - point2plane_dist, selected = 
self.match_point2plane( - plane_front, coords) - - if self.primitive_mode == 'xy' and \ - selected.sum() > self.train_cfg['num_point'] and \ - (point2plane_dist[selected]).var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - cur_corners, with_yaw, mode='front') - - # Get the boundary points here - point2plane_dist, selected = self.match_point2plane( - plane_back, coords) - - if self.primitive_mode == 'xy' and \ - selected.sum() > self.train_cfg['num_point'] and \ - point2plane_dist[selected].var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - cur_corners, with_yaw, mode='back') - - return (point_mask, point_sem, point_offset) - - def primitive_decode_scores(self, predictions, aggregated_points): - """Decode predicted parts to primitive head. - - Args: - predictions (torch.Tensor): primitive pridictions of each batch. - aggregated_points (torch.Tensor): The aggregated points - of vote stage. - - Returns: - Dict: Predictions of primitive head, including center, - semantic size and semantic scores. - """ - - ret_dict = {} - pred_transposed = predictions.transpose(2, 1) - - center = aggregated_points + pred_transposed[:, :, 0:3] - ret_dict['center_' + self.primitive_mode] = center - - if self.primitive_mode in ['z', 'xy']: - ret_dict['size_residuals_' + self.primitive_mode] = \ - pred_transposed[:, :, 3:3 + self.num_dims] - - ret_dict['sem_cls_scores_' + self.primitive_mode] = \ - pred_transposed[:, :, 3 + self.num_dims:] - - return ret_dict - - def check_horizon(self, points): - """Check whether is a horizontal plane. - - Args: - points (torch.Tensor): Points of input. - - Returns: - Bool: Flag of result. - """ - return (points[0][-1] == points[1][-1]) and \ - (points[1][-1] == points[2][-1]) and \ - (points[2][-1] == points[3][-1]) - - def check_dist(self, plane_equ, points): - """Whether the mean of points to plane distance is lower than thresh. - - Args: - plane_equ (torch.Tensor): Plane to be checked. - points (torch.Tensor): Points to be checked. - - Returns: - Tuple: Flag of result. - """ - return (points[:, 2] + - plane_equ[-1]).sum() / 4.0 < self.train_cfg['lower_thresh'] - - def point2line_dist(self, points, pts_a, pts_b): - """Calculate the distance from point to line. - - Args: - points (torch.Tensor): Points of input. - pts_a (torch.Tensor): Point on the specific line. - pts_b (torch.Tensor): Point on the specific line. - - Returns: - torch.Tensor: Distance between each point to line. - """ - line_a2b = pts_b - pts_a - line_a2pts = points - pts_a - length = (line_a2pts * line_a2b.view(1, 3)).sum(1) / \ - line_a2b.norm() - dist = (line_a2pts.norm(dim=1)**2 - length**2).sqrt() - - return dist - - def match_point2line(self, points, corners, with_yaw, mode='bottom'): - """Match points to corresponding line. - - Args: - points (torch.Tensor): Points of input. - corners (torch.Tensor): Eight corners of a bounding box. - with_yaw (Bool): Whether the boundind box is with rotation. - mode (str, optional): Specify which line should be matched, - available mode are ('bottom', 'top', 'left', 'right'). - Defaults to 'bottom'. - - Returns: - Tuple: Flag of matching correspondence. 
- """ - if with_yaw: - corners_pair = { - 'bottom': [[0, 3], [4, 7], [0, 4], [3, 7]], - 'top': [[1, 2], [5, 6], [1, 5], [2, 6]], - 'left': [[0, 1], [3, 2], [0, 1], [3, 2]], - 'right': [[4, 5], [7, 6], [4, 5], [7, 6]] - } - selected_list = [] - for pair_index in corners_pair[mode]: - selected = self.point2line_dist( - points, corners[pair_index[0]], corners[pair_index[1]]) \ - < self.train_cfg['line_thresh'] - selected_list.append(selected) - else: - xmin, ymin, _ = corners.min(0)[0] - xmax, ymax, _ = corners.max(0)[0] - sel1 = torch.abs(points[:, 0] - - xmin) < self.train_cfg['line_thresh'] - sel2 = torch.abs(points[:, 0] - - xmax) < self.train_cfg['line_thresh'] - sel3 = torch.abs(points[:, 1] - - ymin) < self.train_cfg['line_thresh'] - sel4 = torch.abs(points[:, 1] - - ymax) < self.train_cfg['line_thresh'] - selected_list = [sel1, sel2, sel3, sel4] - return selected_list - - def match_point2plane(self, plane, points): - """Match points to plane. - - Args: - plane (torch.Tensor): Equation of the plane. - points (torch.Tensor): Points of input. - - Returns: - Tuple: Distance of each point to the plane and - flag of matching correspondence. - """ - point2plane_dist = torch.abs((points * plane[:3]).sum(dim=1) + - plane[-1]) - min_dist = point2plane_dist.min() - selected = torch.abs(point2plane_dist - - min_dist) < self.train_cfg['dist_thresh'] - return point2plane_dist, selected - - def compute_primitive_loss(self, primitive_center, primitive_semantic, - semantic_scores, num_proposal, - gt_primitive_center, gt_primitive_semantic, - gt_sem_cls_label, gt_primitive_mask): - """Compute loss of primitive module. - - Args: - primitive_center (torch.Tensor): Pridictions of primitive center. - primitive_semantic (torch.Tensor): Pridictions of primitive - semantic. - semantic_scores (torch.Tensor): Pridictions of primitive - semantic scores. - num_proposal (int): The number of primitive proposal. - gt_primitive_center (torch.Tensor): Ground truth of - primitive center. - gt_votes_sem (torch.Tensor): Ground truth of primitive semantic. - gt_sem_cls_label (torch.Tensor): Ground truth of primitive - semantic class. - gt_primitive_mask (torch.Tensor): Ground truth of primitive mask. - - Returns: - Tuple: Loss of primitive module. - """ - batch_size = primitive_center.shape[0] - vote_xyz_reshape = primitive_center.view(batch_size * num_proposal, -1, - 3) - - center_loss = self.center_loss( - vote_xyz_reshape, - gt_primitive_center, - dst_weight=gt_primitive_mask.view(batch_size * num_proposal, 1))[1] - - if self.primitive_mode != 'line': - size_xyz_reshape = primitive_semantic.view( - batch_size * num_proposal, -1, self.num_dims).contiguous() - size_loss = self.semantic_reg_loss( - size_xyz_reshape, - gt_primitive_semantic, - dst_weight=gt_primitive_mask.view(batch_size * num_proposal, - 1))[1] - else: - size_loss = center_loss.new_tensor(0.0) - - # Semantic cls loss - sem_cls_loss = self.semantic_cls_loss( - semantic_scores, gt_sem_cls_label, weight=gt_primitive_mask) - - return center_loss, size_loss, sem_cls_loss - - def get_primitive_center(self, pred_flag, center): - """Generate primitive center from predictions. - - Args: - pred_flag (torch.Tensor): Scores of primitive center. - center (torch.Tensor): Pridictions of primitive center. - - Returns: - Tuple: Primitive center and the prediction indices. 
- """ - ind_normal = F.softmax(pred_flag, dim=1) - pred_indices = (ind_normal[:, 1, :] > - self.surface_thresh).detach().float() - selected = (ind_normal[:, 1, :] <= - self.surface_thresh).detach().float() - offset = torch.ones_like(center) * self.upper_thresh - center = center + offset * selected.unsqueeze(-1) - return center, pred_indices - - def _assign_primitive_line_targets(self, - point_mask, - point_offset, - point_sem, - coords, - indices, - cls_label, - point2line_matching, - corners, - center_axises, - with_yaw, - mode='bottom'): - """Generate targets of line primitive. - - Args: - point_mask (torch.Tensor): Tensor to store the ground - truth of mask. - point_offset (torch.Tensor): Tensor to store the ground - truth of offset. - point_sem (torch.Tensor): Tensor to store the ground - truth of semantic. - coords (torch.Tensor): The selected points. - indices (torch.Tensor): Indices of the selected points. - cls_label (int): Class label of the ground truth bounding box. - point2line_matching (torch.Tensor): Flag indicate that - matching line of each point. - corners (torch.Tensor): Corners of the ground truth bounding box. - center_axises (list[int]): Indicate in which axis the line center - should be refined. - with_yaw (Bool): Whether the boundind box is with rotation. - mode (str, optional): Specify which line should be matched, - available mode are ('bottom', 'top', 'left', 'right'). - Defaults to 'bottom'. - - Returns: - Tuple: Targets of the line primitive. - """ - corners_pair = { - 'bottom': [[0, 3], [4, 7], [0, 4], [3, 7]], - 'top': [[1, 2], [5, 6], [1, 5], [2, 6]], - 'left': [[0, 1], [3, 2]], - 'right': [[4, 5], [7, 6]] - } - corners_pair = corners_pair[mode] - assert len(corners_pair) == len(point2line_matching) == len( - center_axises) - for line_select, center_axis, pair_index in zip( - point2line_matching, center_axises, corners_pair): - if line_select.sum() > self.train_cfg['num_point_line']: - point_mask[indices[line_select]] = 1.0 - - if with_yaw: - line_center = (corners[pair_index[0]] + - corners[pair_index[1]]) / 2 - else: - line_center = coords[line_select].mean(dim=0) - line_center[center_axis] = corners[:, center_axis].mean() - - point_offset[indices[line_select]] = \ - line_center - coords[line_select] - point_sem[indices[line_select]] = \ - point_sem.new_tensor([line_center[0], line_center[1], - line_center[2], cls_label]) - return point_mask, point_offset, point_sem - - def _assign_primitive_surface_targets(self, - point_mask, - point_offset, - point_sem, - coords, - indices, - cls_label, - corners, - with_yaw, - mode='bottom'): - """Generate targets for primitive z and primitive xy. - - Args: - point_mask (torch.Tensor): Tensor to store the ground - truth of mask. - point_offset (torch.Tensor): Tensor to store the ground - truth of offset. - point_sem (torch.Tensor): Tensor to store the ground - truth of semantic. - coords (torch.Tensor): The selected points. - indices (torch.Tensor): Indices of the selected points. - cls_label (int): Class label of the ground truth bounding box. - corners (torch.Tensor): Corners of the ground truth bounding box. - with_yaw (Bool): Whether the boundind box is with rotation. - mode (str, optional): Specify which line should be matched, - available mode are ('bottom', 'top', 'left', 'right', - 'front', 'back'). - Defaults to 'bottom'. - - Returns: - Tuple: Targets of the center primitive. 
- """ - point_mask[indices] = 1.0 - corners_pair = { - 'bottom': [0, 7], - 'top': [1, 6], - 'left': [0, 1], - 'right': [4, 5], - 'front': [0, 1], - 'back': [3, 2] - } - pair_index = corners_pair[mode] - if self.primitive_mode == 'z': - if with_yaw: - center = (corners[pair_index[0]] + - corners[pair_index[1]]) / 2.0 - center[2] = coords[:, 2].mean() - point_sem[indices] = point_sem.new_tensor([ - center[0], center[1], - center[2], (corners[4] - corners[0]).norm(), - (corners[3] - corners[0]).norm(), cls_label - ]) - else: - center = point_mask.new_tensor([ - corners[:, 0].mean(), corners[:, 1].mean(), - coords[:, 2].mean() - ]) - point_sem[indices] = point_sem.new_tensor([ - center[0], center[1], center[2], - corners[:, 0].max() - corners[:, 0].min(), - corners[:, 1].max() - corners[:, 1].min(), cls_label - ]) - elif self.primitive_mode == 'xy': - if with_yaw: - center = coords.mean(0) - center[2] = (corners[pair_index[0], 2] + - corners[pair_index[1], 2]) / 2.0 - point_sem[indices] = point_sem.new_tensor([ - center[0], center[1], center[2], - corners[pair_index[1], 2] - corners[pair_index[0], 2], - cls_label - ]) - else: - center = point_mask.new_tensor([ - coords[:, 0].mean(), coords[:, 1].mean(), - corners[:, 2].mean() - ]) - point_sem[indices] = point_sem.new_tensor([ - center[0], center[1], center[2], - corners[:, 2].max() - corners[:, 2].min(), cls_label - ]) - point_offset[indices] = center - coords - return point_mask, point_offset, point_sem - - def _get_plane_fomulation(self, vector1, vector2, point): - """Compute the equation of the plane. - - Args: - vector1 (torch.Tensor): Parallel vector of the plane. - vector2 (torch.Tensor): Parallel vector of the plane. - point (torch.Tensor): Point on the plane. - - Returns: - torch.Tensor: Equation of the plane. - """ - surface_norm = torch.cross(vector1, vector2) - surface_dis = -torch.dot(surface_norm, point) - plane = point.new_tensor( - [surface_norm[0], surface_norm[1], surface_norm[2], surface_dis]) - return plane diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/part_aggregation_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/part_aggregation_roi_head.py deleted file mode 100644 index a3e49eae..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/part_aggregation_roi_head.py +++ /dev/null @@ -1,325 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -from torch.nn import functional as F - -from mmdet3d.core import AssignResult -from mmdet3d.core.bbox import bbox3d2result, bbox3d2roi -from mmdet.core import build_assigner, build_sampler -from ..builder import HEADS, build_head, build_roi_extractor -from .base_3droi_head import Base3DRoIHead - - -@HEADS.register_module() -class PartAggregationROIHead(Base3DRoIHead): - """Part aggregation roi head for PartA2. - - Args: - semantic_head (ConfigDict): Config of semantic head. - num_classes (int): The number of classes. - seg_roi_extractor (ConfigDict): Config of seg_roi_extractor. - part_roi_extractor (ConfigDict): Config of part_roi_extractor. - bbox_head (ConfigDict): Config of bbox_head. - train_cfg (ConfigDict): Training config. - test_cfg (ConfigDict): Testing config. 
- """ - - def __init__(self, - semantic_head, - num_classes=3, - seg_roi_extractor=None, - part_roi_extractor=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(PartAggregationROIHead, self).__init__( - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg) - self.num_classes = num_classes - assert semantic_head is not None - self.semantic_head = build_head(semantic_head) - - if seg_roi_extractor is not None: - self.seg_roi_extractor = build_roi_extractor(seg_roi_extractor) - if part_roi_extractor is not None: - self.part_roi_extractor = build_roi_extractor(part_roi_extractor) - - self.init_assigner_sampler() - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - def init_mask_head(self): - """Initialize mask head, skip since ``PartAggregationROIHead`` does not - have one.""" - pass - - def init_bbox_head(self, bbox_head): - """Initialize box head.""" - self.bbox_head = build_head(bbox_head) - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - self.bbox_assigner = None - self.bbox_sampler = None - if self.train_cfg: - if isinstance(self.train_cfg.assigner, dict): - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - elif isinstance(self.train_cfg.assigner, list): - self.bbox_assigner = [ - build_assigner(res) for res in self.train_cfg.assigner - ] - self.bbox_sampler = build_sampler(self.train_cfg.sampler) - - @property - def with_semantic(self): - """bool: whether the head has semantic branch""" - return hasattr(self, - 'semantic_head') and self.semantic_head is not None - - def forward_train(self, feats_dict, voxels_dict, img_metas, proposal_list, - gt_bboxes_3d, gt_labels_3d): - """Training forward function of PartAggregationROIHead. - - Args: - feats_dict (dict): Contains features from the first stage. - voxels_dict (dict): Contains information of voxels. - img_metas (list[dict]): Meta info of each image. - proposal_list (list[dict]): Proposal information from rpn. - The dictionary should contain the following keys: - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Proposal bboxes - - labels_3d (torch.Tensor): Labels of proposals - - cls_preds (torch.Tensor): Original scores of proposals - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): - GT bboxes of each sample. The bboxes are encapsulated - by 3D box structures. - gt_labels_3d (list[LongTensor]): GT labels of each sample. - - Returns: - dict: losses from each head. - - - loss_semantic (torch.Tensor): loss of semantic head - - loss_bbox (torch.Tensor): loss of bboxes - """ - losses = dict() - if self.with_semantic: - semantic_results = self._semantic_forward_train( - feats_dict['seg_features'], voxels_dict, gt_bboxes_3d, - gt_labels_3d) - losses.update(semantic_results['loss_semantic']) - - sample_results = self._assign_and_sample(proposal_list, gt_bboxes_3d, - gt_labels_3d) - if self.with_bbox: - bbox_results = self._bbox_forward_train( - feats_dict['seg_features'], semantic_results['part_feats'], - voxels_dict, sample_results) - losses.update(bbox_results['loss_bbox']) - - return losses - - def simple_test(self, feats_dict, voxels_dict, img_metas, proposal_list, - **kwargs): - """Simple testing forward function of PartAggregationROIHead. 
- - Note: - This function assumes that the batch size is 1 - - Args: - feats_dict (dict): Contains features from the first stage. - voxels_dict (dict): Contains information of voxels. - img_metas (list[dict]): Meta info of each image. - proposal_list (list[dict]): Proposal information from rpn. - - Returns: - dict: Bbox results of one frame. - """ - assert self.with_bbox, 'Bbox head must be implemented.' - assert self.with_semantic - - semantic_results = self.semantic_head(feats_dict['seg_features']) - - rois = bbox3d2roi([res['boxes_3d'].tensor for res in proposal_list]) - labels_3d = [res['labels_3d'] for res in proposal_list] - cls_preds = [res['cls_preds'] for res in proposal_list] - bbox_results = self._bbox_forward(feats_dict['seg_features'], - semantic_results['part_feats'], - voxels_dict, rois) - - bbox_list = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - bbox_results['bbox_pred'], - labels_3d, - cls_preds, - img_metas, - cfg=self.test_cfg) - - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def _bbox_forward_train(self, seg_feats, part_feats, voxels_dict, - sampling_results): - """Forward training function of roi_extractor and bbox_head. - - Args: - seg_feats (torch.Tensor): Point-wise semantic features. - part_feats (torch.Tensor): Point-wise part prediction features. - voxels_dict (dict): Contains information of voxels. - sampling_results (:obj:`SamplingResult`): Sampled results used - for training. - - Returns: - dict: Forward results including losses and predictions. - """ - rois = bbox3d2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(seg_feats, part_feats, voxels_dict, - rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, - self.train_cfg) - loss_bbox = self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def _bbox_forward(self, seg_feats, part_feats, voxels_dict, rois): - """Forward function of roi_extractor and bbox_head used in both - training and testing. - - Args: - seg_feats (torch.Tensor): Point-wise semantic features. - part_feats (torch.Tensor): Point-wise part prediction features. - voxels_dict (dict): Contains information of voxels. - rois (Tensor): Roi boxes. - - Returns: - dict: Contains predictions of bbox_head and - features of roi_extractor. - """ - pooled_seg_feats = self.seg_roi_extractor(seg_feats, - voxels_dict['voxel_centers'], - voxels_dict['coors'][..., 0], - rois) - pooled_part_feats = self.part_roi_extractor( - part_feats, voxels_dict['voxel_centers'], - voxels_dict['coors'][..., 0], rois) - cls_score, bbox_pred = self.bbox_head(pooled_seg_feats, - pooled_part_feats) - - bbox_results = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - pooled_seg_feats=pooled_seg_feats, - pooled_part_feats=pooled_part_feats) - return bbox_results - - def _assign_and_sample(self, proposal_list, gt_bboxes_3d, gt_labels_3d): - """Assign and sample proposals for training. - - Args: - proposal_list (list[dict]): Proposals produced by RPN. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes. - gt_labels_3d (list[torch.Tensor]): Ground truth labels - - Returns: - list[:obj:`SamplingResult`]: Sampled results of each training - sample. 
- """ - sampling_results = [] - # bbox assign - for batch_idx in range(len(proposal_list)): - cur_proposal_list = proposal_list[batch_idx] - cur_boxes = cur_proposal_list['boxes_3d'] - cur_labels_3d = cur_proposal_list['labels_3d'] - cur_gt_bboxes = gt_bboxes_3d[batch_idx].to(cur_boxes.device) - cur_gt_labels = gt_labels_3d[batch_idx] - - batch_num_gts = 0 - # 0 is bg - batch_gt_indis = cur_gt_labels.new_full((len(cur_boxes), ), 0) - batch_max_overlaps = cur_boxes.tensor.new_zeros(len(cur_boxes)) - # -1 is bg - batch_gt_labels = cur_gt_labels.new_full((len(cur_boxes), ), -1) - - # each class may have its own assigner - if isinstance(self.bbox_assigner, list): - for i, assigner in enumerate(self.bbox_assigner): - gt_per_cls = (cur_gt_labels == i) - pred_per_cls = (cur_labels_3d == i) - cur_assign_res = assigner.assign( - cur_boxes.tensor[pred_per_cls], - cur_gt_bboxes.tensor[gt_per_cls], - gt_labels=cur_gt_labels[gt_per_cls]) - # gather assign_results in different class into one result - batch_num_gts += cur_assign_res.num_gts - # gt inds (1-based) - gt_inds_arange_pad = gt_per_cls.nonzero( - as_tuple=False).view(-1) + 1 - # pad 0 for indice unassigned - gt_inds_arange_pad = F.pad( - gt_inds_arange_pad, (1, 0), mode='constant', value=0) - # pad -1 for indice ignore - gt_inds_arange_pad = F.pad( - gt_inds_arange_pad, (1, 0), mode='constant', value=-1) - # convert to 0~gt_num+2 for indices - gt_inds_arange_pad += 1 - # now 0 is bg, >1 is fg in batch_gt_indis - batch_gt_indis[pred_per_cls] = gt_inds_arange_pad[ - cur_assign_res.gt_inds + 1] - 1 - batch_max_overlaps[ - pred_per_cls] = cur_assign_res.max_overlaps - batch_gt_labels[pred_per_cls] = cur_assign_res.labels - - assign_result = AssignResult(batch_num_gts, batch_gt_indis, - batch_max_overlaps, - batch_gt_labels) - else: # for single class - assign_result = self.bbox_assigner.assign( - cur_boxes.tensor, - cur_gt_bboxes.tensor, - gt_labels=cur_gt_labels) - # sample boxes - sampling_result = self.bbox_sampler.sample(assign_result, - cur_boxes.tensor, - cur_gt_bboxes.tensor, - cur_gt_labels) - sampling_results.append(sampling_result) - return sampling_results - - def _semantic_forward_train(self, x, voxels_dict, gt_bboxes_3d, - gt_labels_3d): - """Train semantic head. - - Args: - x (torch.Tensor): Point-wise semantic features for segmentation - voxels_dict (dict): Contains information of voxels. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes. - gt_labels_3d (list[torch.Tensor]): Ground truth labels - - Returns: - dict: Segmentation results including losses - """ - semantic_results = self.semantic_head(x) - semantic_targets = self.semantic_head.get_targets( - voxels_dict, gt_bboxes_3d, gt_labels_3d) - loss_semantic = self.semantic_head.loss(semantic_results, - semantic_targets) - semantic_results.update(loss_semantic=loss_semantic) - return semantic_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/point_rcnn_roi_head.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/point_rcnn_roi_head.py deleted file mode 100644 index acf7c16d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/point_rcnn_roi_head.py +++ /dev/null @@ -1,286 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from torch.nn import functional as F - -from mmdet3d.core import AssignResult -from mmdet3d.core.bbox import bbox3d2result, bbox3d2roi -from mmdet.core import build_assigner, build_sampler -from ..builder import HEADS, build_head, build_roi_extractor -from .base_3droi_head import Base3DRoIHead - - -@HEADS.register_module() -class PointRCNNRoIHead(Base3DRoIHead): - """RoI head for PointRCNN. - - Args: - bbox_head (dict): Config of bbox_head. - point_roi_extractor (dict): Config of RoI extractor. - train_cfg (dict): Train configs. - test_cfg (dict): Test configs. - depth_normalizer (float, optional): Normalize depth feature. - Defaults to 70.0. - init_cfg (dict, optional): Config of initialization. Defaults to None. - """ - - def __init__(self, - bbox_head, - point_roi_extractor, - train_cfg, - test_cfg, - depth_normalizer=70.0, - pretrained=None, - init_cfg=None): - super(PointRCNNRoIHead, self).__init__( - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - self.depth_normalizer = depth_normalizer - - if point_roi_extractor is not None: - self.point_roi_extractor = build_roi_extractor(point_roi_extractor) - - self.init_assigner_sampler() - - def init_bbox_head(self, bbox_head): - """Initialize box head. - - Args: - bbox_head (dict): Config dict of RoI Head. - """ - self.bbox_head = build_head(bbox_head) - - def init_mask_head(self): - """Initialize maek head.""" - pass - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - self.bbox_assigner = None - self.bbox_sampler = None - if self.train_cfg: - if isinstance(self.train_cfg.assigner, dict): - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - elif isinstance(self.train_cfg.assigner, list): - self.bbox_assigner = [ - build_assigner(res) for res in self.train_cfg.assigner - ] - self.bbox_sampler = build_sampler(self.train_cfg.sampler) - - def forward_train(self, feats_dict, input_metas, proposal_list, - gt_bboxes_3d, gt_labels_3d): - """Training forward function of PointRCNNRoIHead. - - Args: - feats_dict (dict): Contains features from the first stage. - imput_metas (list[dict]): Meta info of each input. - proposal_list (list[dict]): Proposal information from rpn. - The dictionary should contain the following keys: - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Proposal bboxes - - labels_3d (torch.Tensor): Labels of proposals - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): - GT bboxes of each sample. The bboxes are encapsulated - by 3D box structures. - gt_labels_3d (list[LongTensor]): GT labels of each sample. - - Returns: - dict: Losses from RoI RCNN head. 
- - loss_bbox (torch.Tensor): Loss of bboxes - """ - features = feats_dict['features'] - points = feats_dict['points'] - point_cls_preds = feats_dict['points_cls_preds'] - sem_scores = point_cls_preds.sigmoid() - point_scores = sem_scores.max(-1)[0] - - sample_results = self._assign_and_sample(proposal_list, gt_bboxes_3d, - gt_labels_3d) - - # concat the depth, semantic features and backbone features - features = features.transpose(1, 2).contiguous() - point_depths = points.norm(dim=2) / self.depth_normalizer - 0.5 - features_list = [ - point_scores.unsqueeze(2), - point_depths.unsqueeze(2), features - ] - features = torch.cat(features_list, dim=2) - - bbox_results = self._bbox_forward_train(features, points, - sample_results) - losses = dict() - losses.update(bbox_results['loss_bbox']) - - return losses - - def simple_test(self, feats_dict, img_metas, proposal_list, **kwargs): - """Simple testing forward function of PointRCNNRoIHead. - - Note: - This function assumes that the batch size is 1 - - Args: - feats_dict (dict): Contains features from the first stage. - img_metas (list[dict]): Meta info of each image. - proposal_list (list[dict]): Proposal information from rpn. - - Returns: - dict: Bbox results of one frame. - """ - rois = bbox3d2roi([res['boxes_3d'].tensor for res in proposal_list]) - labels_3d = [res['labels_3d'] for res in proposal_list] - - features = feats_dict['features'] - points = feats_dict['points'] - point_cls_preds = feats_dict['points_cls_preds'] - sem_scores = point_cls_preds.sigmoid() - point_scores = sem_scores.max(-1)[0] - - features = features.transpose(1, 2).contiguous() - point_depths = points.norm(dim=2) / self.depth_normalizer - 0.5 - features_list = [ - point_scores.unsqueeze(2), - point_depths.unsqueeze(2), features - ] - - features = torch.cat(features_list, dim=2) - batch_size = features.shape[0] - bbox_results = self._bbox_forward(features, points, batch_size, rois) - object_score = bbox_results['cls_score'].sigmoid() - bbox_list = self.bbox_head.get_bboxes( - rois, - object_score, - bbox_results['bbox_pred'], - labels_3d, - img_metas, - cfg=self.test_cfg) - - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def _bbox_forward_train(self, features, points, sampling_results): - """Forward training function of roi_extractor and bbox_head. - - Args: - features (torch.Tensor): Backbone features with depth and \ - semantic features. - points (torch.Tensor): Pointcloud. - sampling_results (:obj:`SamplingResult`): Sampled results used - for training. - - Returns: - dict: Forward results including losses and predictions. - """ - rois = bbox3d2roi([res.bboxes for res in sampling_results]) - batch_size = features.shape[0] - bbox_results = self._bbox_forward(features, points, batch_size, rois) - bbox_targets = self.bbox_head.get_targets(sampling_results, - self.train_cfg) - - loss_bbox = self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def _bbox_forward(self, features, points, batch_size, rois): - """Forward function of roi_extractor and bbox_head used in both - training and testing. - - Args: - features (torch.Tensor): Backbone features with depth and - semantic features. - points (torch.Tensor): Pointcloud. - batch_size (int): Batch size. - rois (torch.Tensor): RoI boxes. - - Returns: - dict: Contains predictions of bbox_head and - features of roi_extractor. 
- """ - pooled_point_feats = self.point_roi_extractor(features, points, - batch_size, rois) - - cls_score, bbox_pred = self.bbox_head(pooled_point_feats) - bbox_results = dict(cls_score=cls_score, bbox_pred=bbox_pred) - return bbox_results - - def _assign_and_sample(self, proposal_list, gt_bboxes_3d, gt_labels_3d): - """Assign and sample proposals for training. - - Args: - proposal_list (list[dict]): Proposals produced by RPN. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes. - gt_labels_3d (list[torch.Tensor]): Ground truth labels - - Returns: - list[:obj:`SamplingResult`]: Sampled results of each training - sample. - """ - sampling_results = [] - # bbox assign - for batch_idx in range(len(proposal_list)): - cur_proposal_list = proposal_list[batch_idx] - cur_boxes = cur_proposal_list['boxes_3d'] - cur_labels_3d = cur_proposal_list['labels_3d'] - cur_gt_bboxes = gt_bboxes_3d[batch_idx].to(cur_boxes.device) - cur_gt_labels = gt_labels_3d[batch_idx] - batch_num_gts = 0 - # 0 is bg - batch_gt_indis = cur_gt_labels.new_full((len(cur_boxes), ), 0) - batch_max_overlaps = cur_boxes.tensor.new_zeros(len(cur_boxes)) - # -1 is bg - batch_gt_labels = cur_gt_labels.new_full((len(cur_boxes), ), -1) - - # each class may have its own assigner - if isinstance(self.bbox_assigner, list): - for i, assigner in enumerate(self.bbox_assigner): - gt_per_cls = (cur_gt_labels == i) - pred_per_cls = (cur_labels_3d == i) - cur_assign_res = assigner.assign( - cur_boxes.tensor[pred_per_cls], - cur_gt_bboxes.tensor[gt_per_cls], - gt_labels=cur_gt_labels[gt_per_cls]) - # gather assign_results in different class into one result - batch_num_gts += cur_assign_res.num_gts - # gt inds (1-based) - gt_inds_arange_pad = gt_per_cls.nonzero( - as_tuple=False).view(-1) + 1 - # pad 0 for indice unassigned - gt_inds_arange_pad = F.pad( - gt_inds_arange_pad, (1, 0), mode='constant', value=0) - # pad -1 for indice ignore - gt_inds_arange_pad = F.pad( - gt_inds_arange_pad, (1, 0), mode='constant', value=-1) - # convert to 0~gt_num+2 for indices - gt_inds_arange_pad += 1 - # now 0 is bg, >1 is fg in batch_gt_indis - batch_gt_indis[pred_per_cls] = gt_inds_arange_pad[ - cur_assign_res.gt_inds + 1] - 1 - batch_max_overlaps[ - pred_per_cls] = cur_assign_res.max_overlaps - batch_gt_labels[pred_per_cls] = cur_assign_res.labels - - assign_result = AssignResult(batch_num_gts, batch_gt_indis, - batch_max_overlaps, - batch_gt_labels) - else: # for single class - assign_result = self.bbox_assigner.assign( - cur_boxes.tensor, - cur_gt_bboxes.tensor, - gt_labels=cur_gt_labels) - - # sample boxes - sampling_result = self.bbox_sampler.sample(assign_result, - cur_boxes.tensor, - cur_gt_bboxes.tensor, - cur_gt_labels) - sampling_results.append(sampling_result) - return sampling_results diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/__init__.py deleted file mode 100644 index 70c28812..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.models.roi_heads.roi_extractors import SingleRoIExtractor -from .single_roiaware_extractor import Single3DRoIAwareExtractor -from .single_roipoint_extractor import Single3DRoIPointExtractor - -__all__ = [ - 'SingleRoIExtractor', 'Single3DRoIAwareExtractor', - 'Single3DRoIPointExtractor' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/single_roiaware_extractor.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/single_roiaware_extractor.py deleted file mode 100644 index c27a0047..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/single_roiaware_extractor.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv import ops -from mmcv.runner import BaseModule - -from mmdet3d.models.builder import ROI_EXTRACTORS - - -@ROI_EXTRACTORS.register_module() -class Single3DRoIAwareExtractor(BaseModule): - """Point-wise roi-aware Extractor. - - Extract Point-wise roi features. - - Args: - roi_layer (dict): The config of roi layer. - """ - - def __init__(self, roi_layer=None, init_cfg=None): - super(Single3DRoIAwareExtractor, self).__init__(init_cfg=init_cfg) - self.roi_layer = self.build_roi_layers(roi_layer) - - def build_roi_layers(self, layer_cfg): - """Build roi layers using `layer_cfg`""" - cfg = layer_cfg.copy() - layer_type = cfg.pop('type') - assert hasattr(ops, layer_type) - layer_cls = getattr(ops, layer_type) - roi_layers = layer_cls(**cfg) - return roi_layers - - def forward(self, feats, coordinate, batch_inds, rois): - """Extract point-wise roi features. - - Args: - feats (torch.FloatTensor): Point-wise features with - shape (batch, npoints, channels) for pooling. - coordinate (torch.FloatTensor): Coordinate of each point. - batch_inds (torch.LongTensor): Indicate the batch of each point. - rois (torch.FloatTensor): Roi boxes with batch indices. - - Returns: - torch.FloatTensor: Pooled features - """ - pooled_roi_feats = [] - for batch_idx in range(int(batch_inds.max()) + 1): - roi_inds = (rois[..., 0].int() == batch_idx) - coors_inds = (batch_inds.int() == batch_idx) - pooled_roi_feat = self.roi_layer(rois[..., 1:][roi_inds], - coordinate[coors_inds], - feats[coors_inds]) - pooled_roi_feats.append(pooled_roi_feat) - pooled_roi_feats = torch.cat(pooled_roi_feats, 0) - return pooled_roi_feats diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/single_roipoint_extractor.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/single_roipoint_extractor.py deleted file mode 100644 index 4983a01e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/roi_heads/roi_extractors/single_roipoint_extractor.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv import ops -from torch import nn as nn - -from mmdet3d.core.bbox.structures import rotation_3d_in_axis -from mmdet3d.models.builder import ROI_EXTRACTORS - - -@ROI_EXTRACTORS.register_module() -class Single3DRoIPointExtractor(nn.Module): - """Point-wise roi-aware Extractor. - - Extract Point-wise roi features. - - Args: - roi_layer (dict): The config of roi layer. 
- """ - - def __init__(self, roi_layer=None): - super(Single3DRoIPointExtractor, self).__init__() - self.roi_layer = self.build_roi_layers(roi_layer) - - def build_roi_layers(self, layer_cfg): - """Build roi layers using `layer_cfg`""" - cfg = layer_cfg.copy() - layer_type = cfg.pop('type') - assert hasattr(ops, layer_type) - layer_cls = getattr(ops, layer_type) - roi_layers = layer_cls(**cfg) - return roi_layers - - def forward(self, feats, coordinate, batch_inds, rois): - """Extract point-wise roi features. - - Args: - feats (torch.FloatTensor): Point-wise features with - shape (batch, npoints, channels) for pooling. - coordinate (torch.FloatTensor): Coordinate of each point. - batch_inds (torch.LongTensor): Indicate the batch of each point. - rois (torch.FloatTensor): Roi boxes with batch indices. - - Returns: - torch.FloatTensor: Pooled features - """ - rois = rois[..., 1:] - rois = rois.view(batch_inds, -1, rois.shape[-1]) - with torch.no_grad(): - pooled_roi_feat, pooled_empty_flag = self.roi_layer( - coordinate, feats, rois) - - # canonical transformation - roi_center = rois[:, :, 0:3] - pooled_roi_feat[:, :, :, 0:3] -= roi_center.unsqueeze(dim=2) - pooled_roi_feat = pooled_roi_feat.view(-1, - pooled_roi_feat.shape[-2], - pooled_roi_feat.shape[-1]) - pooled_roi_feat[:, :, 0:3] = rotation_3d_in_axis( - pooled_roi_feat[:, :, 0:3], - -(rois.view(-1, rois.shape[-1])[:, 6]), - axis=2) - pooled_roi_feat[pooled_empty_flag.view(-1) > 0] = 0 - - return pooled_roi_feat diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/__init__.py deleted file mode 100644 index 29fbc33e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base import Base3DSegmentor -from .encoder_decoder import EncoderDecoder3D - -__all__ = ['Base3DSegmentor', 'EncoderDecoder3D'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/base.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/base.py deleted file mode 100644 index 99136983..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/base.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC -from mmcv.runner import auto_fp16 - -from mmdet3d.core import show_seg_result -from mmseg.models.segmentors import BaseSegmentor - - -class Base3DSegmentor(BaseSegmentor): - """Base class for 3D segmentors. - - The main difference with `BaseSegmentor` is that we modify the keys in - data_dict and use a 3D seg specific visualization function. - """ - - @property - def with_regularization_loss(self): - """bool: whether the segmentor has regularization loss for weight""" - return hasattr(self, 'loss_regularization') and \ - self.loss_regularization is not None - - def forward_test(self, points, img_metas, **kwargs): - """Calls either simple_test or aug_test depending on the length of - outer list of points. If len(points) == 1, call simple_test. Otherwise - call aug_test to aggregate the test results by e.g. voting. - - Args: - points (list[list[torch.Tensor]]): the outer list indicates - test-time augmentations and inner torch.Tensor should have a - shape BXNxC, which contains all points in the batch. 
- img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - """ - for var, name in [(points, 'points'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(points) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(points)}) != ' - f'num of image meta ({len(img_metas)})') - - if num_augs == 1: - return self.simple_test(points[0], img_metas[0], **kwargs) - else: - return self.aug_test(points, img_metas, **kwargs) - - @auto_fp16(apply_to=('points')) - def forward(self, return_loss=True, **kwargs): - """Calls either forward_train or forward_test depending on whether - return_loss=True. - - Note this setting will change the expected inputs. When - `return_loss=True`, point and img_metas are single-nested (i.e. - torch.Tensor and list[dict]), and when `resturn_loss=False`, point and - img_metas should be double nested (i.e. list[torch.Tensor], - list[list[dict]]), with the outer list indicating test time - augmentations. - """ - if return_loss: - return self.forward_train(**kwargs) - else: - return self.forward_test(**kwargs) - - def show_results(self, - data, - result, - palette=None, - out_dir=None, - ignore_index=None, - show=False, - score_thr=None): - """Results visualization. - - Args: - data (list[dict]): Input points and the information of the sample. - result (list[dict]): Prediction results. - palette (list[list[int]]] | np.ndarray): The palette of - segmentation map. If None is given, random palette will be - generated. Default: None - out_dir (str): Output directory of visualization result. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. If None is given, set to len(self.CLASSES). - Defaults to None. - show (bool, optional): Determines whether you are - going to show result by open3d. - Defaults to False. - TODO: implement score_thr of Base3DSegmentor. - score_thr (float, optional): Score threshold of bounding boxes. - Default to None. - Not implemented yet, but it is here for unification. - """ - assert out_dir is not None, 'Expect out_dir, got none.' 
- if palette is None: - if self.PALETTE is None: - palette = np.random.randint( - 0, 255, size=(len(self.CLASSES), 3)) - else: - palette = self.PALETTE - palette = np.array(palette) - for batch_id in range(len(result)): - if isinstance(data['points'][0], DC): - points = data['points'][0]._data[0][batch_id].numpy() - elif mmcv.is_list_of(data['points'][0], torch.Tensor): - points = data['points'][0][batch_id] - else: - ValueError(f"Unsupported data type {type(data['points'][0])} " - f'for visualization!') - if isinstance(data['img_metas'][0], DC): - pts_filename = data['img_metas'][0]._data[0][batch_id][ - 'pts_filename'] - elif mmcv.is_list_of(data['img_metas'][0], dict): - pts_filename = data['img_metas'][0][batch_id]['pts_filename'] - else: - ValueError( - f"Unsupported data type {type(data['img_metas'][0])} " - f'for visualization!') - file_name = osp.split(pts_filename)[-1].split('.')[0] - - pred_sem_mask = result[batch_id]['semantic_mask'].cpu().numpy() - - show_seg_result( - points, - None, - pred_sem_mask, - out_dir, - file_name, - palette, - ignore_index, - show=show) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/encoder_decoder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/encoder_decoder.py deleted file mode 100644 index 1a4fee93..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/segmentors/encoder_decoder.py +++ /dev/null @@ -1,454 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from torch import nn as nn -from torch.nn import functional as F - -from mmseg.core import add_prefix -from ..builder import (SEGMENTORS, build_backbone, build_head, build_loss, - build_neck) -from .base import Base3DSegmentor - - -@SEGMENTORS.register_module() -class EncoderDecoder3D(Base3DSegmentor): - """3D Encoder Decoder segmentors. - - EncoderDecoder typically consists of backbone, decode_head, auxiliary_head. - Note that auxiliary_head is only used for deep supervision during training, - which could be thrown during inference. 
- """ - - def __init__(self, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - loss_regularization=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(EncoderDecoder3D, self).__init__(init_cfg=init_cfg) - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - self._init_decode_head(decode_head) - self._init_auxiliary_head(auxiliary_head) - self._init_loss_regularization(loss_regularization) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - assert self.with_decode_head, \ - '3D EncoderDecoder Segmentor should have a decode_head' - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - self.decode_head = build_head(decode_head) - self.num_classes = self.decode_head.num_classes - - def _init_auxiliary_head(self, auxiliary_head): - """Initialize ``auxiliary_head``""" - if auxiliary_head is not None: - if isinstance(auxiliary_head, list): - self.auxiliary_head = nn.ModuleList() - for head_cfg in auxiliary_head: - self.auxiliary_head.append(build_head(head_cfg)) - else: - self.auxiliary_head = build_head(auxiliary_head) - - def _init_loss_regularization(self, loss_regularization): - """Initialize ``loss_regularization``""" - if loss_regularization is not None: - if isinstance(loss_regularization, list): - self.loss_regularization = nn.ModuleList() - for loss_cfg in loss_regularization: - self.loss_regularization.append(build_loss(loss_cfg)) - else: - self.loss_regularization = build_loss(loss_regularization) - - def extract_feat(self, points): - """Extract features from points.""" - x = self.backbone(points) - if self.with_neck: - x = self.neck(x) - return x - - def encode_decode(self, points, img_metas): - """Encode points with backbone and decode into a semantic segmentation - map of the same size as input. - - Args: - points (torch.Tensor): Input points of shape [B, N, 3+C]. - img_metas (list[dict]): Meta information of each sample. - - Returns: - torch.Tensor: Segmentation logits of shape [B, num_classes, N]. 
- """ - x = self.extract_feat(points) - out = self._decode_head_forward_test(x, img_metas) - return out - - def _decode_head_forward_train(self, x, img_metas, pts_semantic_mask): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - loss_decode = self.decode_head.forward_train(x, img_metas, - pts_semantic_mask, - self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode')) - return losses - - def _decode_head_forward_test(self, x, img_metas): - """Run forward function and calculate loss for decode head in - inference.""" - seg_logits = self.decode_head.forward_test(x, img_metas, self.test_cfg) - return seg_logits - - def _auxiliary_head_forward_train(self, x, img_metas, pts_semantic_mask): - """Run forward function and calculate loss for auxiliary head in - training.""" - losses = dict() - if isinstance(self.auxiliary_head, nn.ModuleList): - for idx, aux_head in enumerate(self.auxiliary_head): - loss_aux = aux_head.forward_train(x, img_metas, - pts_semantic_mask, - self.train_cfg) - losses.update(add_prefix(loss_aux, f'aux_{idx}')) - else: - loss_aux = self.auxiliary_head.forward_train( - x, img_metas, pts_semantic_mask, self.train_cfg) - losses.update(add_prefix(loss_aux, 'aux')) - - return losses - - def _loss_regularization_forward_train(self): - """Calculate regularization loss for model weight in training.""" - losses = dict() - if isinstance(self.loss_regularization, nn.ModuleList): - for idx, regularize_loss in enumerate(self.loss_regularization): - loss_regularize = dict( - loss_regularize=regularize_loss(self.modules())) - losses.update(add_prefix(loss_regularize, f'regularize_{idx}')) - else: - loss_regularize = dict( - loss_regularize=self.loss_regularization(self.modules())) - losses.update(add_prefix(loss_regularize, 'regularize')) - - return losses - - def forward_dummy(self, points): - """Dummy forward function.""" - seg_logit = self.encode_decode(points, None) - - return seg_logit - - def forward_train(self, points, img_metas, pts_semantic_mask): - """Forward function for training. - - Args: - points (list[torch.Tensor]): List of points of shape [N, C]. - img_metas (list): Image metas. - pts_semantic_mask (list[torch.Tensor]): List of point-wise semantic - labels of shape [N]. - - Returns: - dict[str, Tensor]: Losses. - """ - points_cat = torch.stack(points) - pts_semantic_mask_cat = torch.stack(pts_semantic_mask) - - # extract features using backbone - x = self.extract_feat(points_cat) - - losses = dict() - - loss_decode = self._decode_head_forward_train(x, img_metas, - pts_semantic_mask_cat) - losses.update(loss_decode) - - if self.with_auxiliary_head: - loss_aux = self._auxiliary_head_forward_train( - x, img_metas, pts_semantic_mask_cat) - losses.update(loss_aux) - - if self.with_regularization_loss: - loss_regularize = self._loss_regularization_forward_train() - losses.update(loss_regularize) - - return losses - - @staticmethod - def _input_generation(coords, - patch_center, - coord_max, - feats, - use_normalized_coord=False): - """Generating model input. - - Generate input by subtracting patch center and adding additional - features. Currently support colors and normalized xyz as features. - - Args: - coords (torch.Tensor): Sampled 3D point coordinate of shape [S, 3]. - patch_center (torch.Tensor): Center coordinate of the patch. - coord_max (torch.Tensor): Max coordinate of all 3D points. - feats (torch.Tensor): Features of sampled points of shape [S, C]. 
- use_normalized_coord (bool, optional): Whether to use normalized - xyz as additional features. Defaults to False. - - Returns: - torch.Tensor: The generated input data of shape [S, 3+C']. - """ - # subtract patch center, the z dimension is not centered - centered_coords = coords.clone() - centered_coords[:, 0] -= patch_center[0] - centered_coords[:, 1] -= patch_center[1] - - # normalized coordinates as extra features - if use_normalized_coord: - normalized_coord = coords / coord_max - feats = torch.cat([feats, normalized_coord], dim=1) - - points = torch.cat([centered_coords, feats], dim=1) - - return points - - def _sliding_patch_generation(self, - points, - num_points, - block_size, - sample_rate=0.5, - use_normalized_coord=False, - eps=1e-3): - """Sampling points in a sliding window fashion. - - First sample patches to cover all the input points. - Then sample points in each patch to batch points of a certain number. - - Args: - points (torch.Tensor): Input points of shape [N, 3+C]. - num_points (int): Number of points to be sampled in each patch. - block_size (float, optional): Size of a patch to sample. - sample_rate (float, optional): Stride used in sliding patch. - Defaults to 0.5. - use_normalized_coord (bool, optional): Whether to use normalized - xyz as additional features. Defaults to False. - eps (float, optional): A value added to patch boundary to guarantee - points coverage. Defaults to 1e-3. - - Returns: - np.ndarray | np.ndarray: - - - patch_points (torch.Tensor): Points of different patches of - shape [K, N, 3+C]. - - patch_idxs (torch.Tensor): Index of each point in - `patch_points`, of shape [K, N]. - """ - device = points.device - # we assume the first three dims are points' 3D coordinates - # and the rest dims are their per-point features - coords = points[:, :3] - feats = points[:, 3:] - - coord_max = coords.max(0)[0] - coord_min = coords.min(0)[0] - stride = block_size * sample_rate - num_grid_x = int( - torch.ceil((coord_max[0] - coord_min[0] - block_size) / - stride).item() + 1) - num_grid_y = int( - torch.ceil((coord_max[1] - coord_min[1] - block_size) / - stride).item() + 1) - - patch_points, patch_idxs = [], [] - for idx_y in range(num_grid_y): - s_y = coord_min[1] + idx_y * stride - e_y = torch.min(s_y + block_size, coord_max[1]) - s_y = e_y - block_size - for idx_x in range(num_grid_x): - s_x = coord_min[0] + idx_x * stride - e_x = torch.min(s_x + block_size, coord_max[0]) - s_x = e_x - block_size - - # extract points within this patch - cur_min = torch.tensor([s_x, s_y, coord_min[2]]).to(device) - cur_max = torch.tensor([e_x, e_y, coord_max[2]]).to(device) - cur_choice = ((coords >= cur_min - eps) & - (coords <= cur_max + eps)).all(dim=1) - - if not cur_choice.any(): # no points in this patch - continue - - # sample points in this patch to multiple batches - cur_center = cur_min + block_size / 2.0 - point_idxs = torch.nonzero(cur_choice, as_tuple=True)[0] - num_batch = int(np.ceil(point_idxs.shape[0] / num_points)) - point_size = int(num_batch * num_points) - replace = point_size > 2 * point_idxs.shape[0] - num_repeat = point_size - point_idxs.shape[0] - if replace: # duplicate - point_idxs_repeat = point_idxs[torch.randint( - 0, point_idxs.shape[0], - size=(num_repeat, )).to(device)] - else: - point_idxs_repeat = point_idxs[torch.randperm( - point_idxs.shape[0])[:num_repeat]] - - choices = torch.cat([point_idxs, point_idxs_repeat], dim=0) - choices = choices[torch.randperm(choices.shape[0])] - - # construct model input - point_batches = 
self._input_generation( - coords[choices], - cur_center, - coord_max, - feats[choices], - use_normalized_coord=use_normalized_coord) - - patch_points.append(point_batches) - patch_idxs.append(choices) - - patch_points = torch.cat(patch_points, dim=0) - patch_idxs = torch.cat(patch_idxs, dim=0) - - # make sure all points are sampled at least once - assert torch.unique(patch_idxs).shape[0] == points.shape[0], \ - 'some points are not sampled in sliding inference' - - return patch_points, patch_idxs - - def slide_inference(self, point, img_meta, rescale): - """Inference by sliding-window with overlap. - - Args: - point (torch.Tensor): Input points of shape [N, 3+C]. - img_meta (dict): Meta information of input sample. - rescale (bool): Whether transform to original number of points. - Will be used for voxelization based segmentors. - - Returns: - Tensor: The output segmentation map of shape [num_classes, N]. - """ - num_points = self.test_cfg.num_points - block_size = self.test_cfg.block_size - sample_rate = self.test_cfg.sample_rate - use_normalized_coord = self.test_cfg.use_normalized_coord - batch_size = self.test_cfg.batch_size * num_points - - # patch_points is of shape [K*N, 3+C], patch_idxs is of shape [K*N] - patch_points, patch_idxs = self._sliding_patch_generation( - point, num_points, block_size, sample_rate, use_normalized_coord) - feats_dim = patch_points.shape[1] - seg_logits = [] # save patch predictions - - for batch_idx in range(0, patch_points.shape[0], batch_size): - batch_points = patch_points[batch_idx:batch_idx + batch_size] - batch_points = batch_points.view(-1, num_points, feats_dim) - # batch_seg_logit is of shape [B, num_classes, N] - batch_seg_logit = self.encode_decode(batch_points, img_meta) - batch_seg_logit = batch_seg_logit.transpose(1, 2).contiguous() - seg_logits.append(batch_seg_logit.view(-1, self.num_classes)) - - # aggregate per-point logits by indexing sum and dividing count - seg_logits = torch.cat(seg_logits, dim=0) # [K*N, num_classes] - expand_patch_idxs = patch_idxs.unsqueeze(1).repeat(1, self.num_classes) - preds = point.new_zeros((point.shape[0], self.num_classes)).\ - scatter_add_(dim=0, index=expand_patch_idxs, src=seg_logits) - count_mat = torch.bincount(patch_idxs) - preds = preds / count_mat[:, None] - - # TODO: if rescale and voxelization segmentor - - return preds.transpose(0, 1) # to [num_classes, K*N] - - def whole_inference(self, points, img_metas, rescale): - """Inference with full scene (one forward pass without sliding).""" - seg_logit = self.encode_decode(points, img_metas) - # TODO: if rescale and voxelization segmentor - return seg_logit - - def inference(self, points, img_metas, rescale): - """Inference with slide/whole style. - - Args: - points (torch.Tensor): Input points of shape [B, N, 3+C]. - img_metas (list[dict]): Meta information of each sample. - rescale (bool): Whether transform to original number of points. - Will be used for voxelization based segmentors. - - Returns: - Tensor: The output segmentation map. - """ - assert self.test_cfg.mode in ['slide', 'whole'] - if self.test_cfg.mode == 'slide': - seg_logit = torch.stack([ - self.slide_inference(point, img_meta, rescale) - for point, img_meta in zip(points, img_metas) - ], 0) - else: - seg_logit = self.whole_inference(points, img_metas, rescale) - output = F.softmax(seg_logit, dim=1) - return output - - def simple_test(self, points, img_metas, rescale=True): - """Simple test with single scene. - - Args: - points (list[torch.Tensor]): List of points of shape [N, 3+C]. 
- img_metas (list[dict]): Meta information of each sample. - rescale (bool): Whether transform to original number of points. - Will be used for voxelization based segmentors. - Defaults to True. - - Returns: - list[dict]: The output prediction result with following keys: - - - semantic_mask (Tensor): Segmentation mask of shape [N]. - """ - # 3D segmentation requires per-point prediction, so it's impossible - # to use down-sampling to get a batch of scenes with same num_points - # therefore, we only support testing one scene every time - seg_pred = [] - for point, img_meta in zip(points, img_metas): - seg_prob = self.inference(point.unsqueeze(0), [img_meta], - rescale)[0] - seg_map = seg_prob.argmax(0) # [N] - # to cpu tensor for consistency with det3d - seg_map = seg_map.cpu() - seg_pred.append(seg_map) - # warp in dict - seg_pred = [dict(semantic_mask=seg_map) for seg_map in seg_pred] - return seg_pred - - def aug_test(self, points, img_metas, rescale=True): - """Test with augmentations. - - Args: - points (list[torch.Tensor]): List of points of shape [B, N, 3+C]. - img_metas (list[list[dict]]): Meta information of each sample. - Outer list are different samples while inner is different augs. - rescale (bool): Whether transform to original number of points. - Will be used for voxelization based segmentors. - Defaults to True. - - Returns: - list[dict]: The output prediction result with following keys: - - - semantic_mask (Tensor): Segmentation mask of shape [N]. - """ - # in aug_test, one scene going through different augmentations could - # have the same number of points and are stacked as a batch - # to save memory, we get augmented seg logit inplace - seg_pred = [] - for point, img_meta in zip(points, img_metas): - seg_prob = self.inference(point, img_meta, rescale) - seg_prob = seg_prob.mean(0) # [num_classes, N] - seg_map = seg_prob.argmax(0) # [N] - # to cpu tensor for consistency with det3d - seg_map = seg_map.cpu() - seg_pred.append(seg_map) - # warp in dict - seg_pred = [dict(semantic_mask=seg_map) for seg_map in seg_pred] - return seg_pred diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/__init__.py deleted file mode 100644 index 92a0499a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .clip_sigmoid import clip_sigmoid -from .edge_indices import get_edge_indices -from .gen_keypoints import get_keypoints -from .handle_objs import filter_outside_objs, handle_proj_objs -from .mlp import MLP - -__all__ = [ - 'clip_sigmoid', 'MLP', 'get_edge_indices', 'filter_outside_objs', - 'handle_proj_objs', 'get_keypoints' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/clip_sigmoid.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/clip_sigmoid.py deleted file mode 100644 index 3afd4edb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/clip_sigmoid.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def clip_sigmoid(x, eps=1e-4): - """Sigmoid function for input feature. - - Args: - x (torch.Tensor): Input feature map with the shape of [B, N, H, W]. - eps (float, optional): Lower bound of the range to be clamped to. - Defaults to 1e-4. - - Returns: - torch.Tensor: Feature map after sigmoid. 
- """ - y = torch.clamp(x.sigmoid_(), min=eps, max=1 - eps) - return y diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/edge_indices.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/edge_indices.py deleted file mode 100644 index 5dcb71fe..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/edge_indices.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - - -def get_edge_indices(img_metas, - downsample_ratio, - step=1, - pad_mode='default', - dtype=np.float32, - device='cpu'): - """Function to filter the objects label outside the image. - The edge_indices are generated using numpy on cpu rather - than on CUDA due to the latency issue. When batch size = 8, - this function with numpy array is ~8 times faster than that - with CUDA tensor (0.09s and 0.72s in 100 runs). - - Args: - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - downsample_ratio (int): Downsample ratio of output feature, - step (int, optional): Step size used for generateing - edge indices. Default: 1. - pad_mode (str, optional): Padding mode during data pipeline. - Default: 'default'. - dtype (torch.dtype, optional): Dtype of edge indices tensor. - Default: np.float32. - device (str, optional): Device of edge indices tensor. - Default: 'cpu'. - - Returns: - list[Tensor]: Edge indices for each image in batch data. - """ - edge_indices_list = [] - for i in range(len(img_metas)): - img_shape = img_metas[i]['img_shape'] - pad_shape = img_metas[i]['pad_shape'] - h, w = img_shape[:2] - pad_h, pad_w = pad_shape - edge_indices = [] - - if pad_mode == 'default': - x_min = 0 - y_min = 0 - x_max = (w - 1) // downsample_ratio - y_max = (h - 1) // downsample_ratio - elif pad_mode == 'center': - x_min = np.ceil((pad_w - w) / 2 * downsample_ratio) - y_min = np.ceil((pad_h - h) / 2 * downsample_ratio) - x_max = x_min + w // downsample_ratio - y_max = y_min + h // downsample_ratio - else: - raise NotImplementedError - - # left - y = np.arange(y_min, y_max, step, dtype=dtype) - x = np.ones(len(y)) * x_min - - edge_indices_edge = np.stack((x, y), axis=1) - edge_indices.append(edge_indices_edge) - - # bottom - x = np.arange(x_min, x_max, step, dtype=dtype) - y = np.ones(len(x)) * y_max - - edge_indices_edge = np.stack((x, y), axis=1) - edge_indices.append(edge_indices_edge) - - # right - y = np.arange(y_max, y_min, -step, dtype=dtype) - x = np.ones(len(y)) * x_max - - edge_indices_edge = np.stack((x, y), axis=1) - edge_indices.append(edge_indices_edge) - - # top - x = np.arange(x_max, x_min, -step, dtype=dtype) - y = np.ones(len(x)) * y_min - - edge_indices_edge = np.stack((x, y), axis=1) - edge_indices.append(edge_indices_edge) - - edge_indices = \ - np.concatenate([index for index in edge_indices], axis=0) - edge_indices = torch.from_numpy(edge_indices).to(device).long() - edge_indices_list.append(edge_indices) - - return edge_indices_list diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/gen_keypoints.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/gen_keypoints.py deleted file mode 100644 index 8c7909b8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/gen_keypoints.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from mmdet3d.core.bbox import points_cam2img - - -def get_keypoints(gt_bboxes_3d_list, - centers2d_list, - img_metas, - use_local_coords=True): - """Function to filter the objects label outside the image. - - Args: - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - shape (num_gt, 4). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - shape (num_gt, 2). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - use_local_coords (bool, optional): Wheher to use local coordinates - for keypoints. Default: True. - - Returns: - tuple[list[Tensor]]: It contains two elements, the first is the - keypoints for each projected 2D bbox in batch data. The second is - the visible mask of depth calculated by keypoints. - """ - - assert len(gt_bboxes_3d_list) == len(centers2d_list) - bs = len(gt_bboxes_3d_list) - keypoints2d_list = [] - keypoints_depth_mask_list = [] - - for i in range(bs): - gt_bboxes_3d = gt_bboxes_3d_list[i] - centers2d = centers2d_list[i] - img_shape = img_metas[i]['img_shape'] - cam2img = img_metas[i]['cam2img'] - h, w = img_shape[:2] - # (N, 8, 3) - corners3d = gt_bboxes_3d.corners - top_centers3d = torch.mean(corners3d[:, [0, 1, 4, 5], :], dim=1) - bot_centers3d = torch.mean(corners3d[:, [2, 3, 6, 7], :], dim=1) - # (N, 2, 3) - top_bot_centers3d = torch.stack((top_centers3d, bot_centers3d), dim=1) - keypoints3d = torch.cat((corners3d, top_bot_centers3d), dim=1) - # (N, 10, 2) - keypoints2d = points_cam2img(keypoints3d, cam2img) - - # keypoints mask: keypoints must be inside - # the image and in front of the camera - keypoints_x_visible = (keypoints2d[..., 0] >= 0) & ( - keypoints2d[..., 0] <= w - 1) - keypoints_y_visible = (keypoints2d[..., 1] >= 0) & ( - keypoints2d[..., 1] <= h - 1) - keypoints_z_visible = (keypoints3d[..., -1] > 0) - - # (N, 1O) - keypoints_visible = keypoints_x_visible & \ - keypoints_y_visible & keypoints_z_visible - # center, diag-02, diag-13 - keypoints_depth_valid = torch.stack( - (keypoints_visible[:, [8, 9]].all(dim=1), - keypoints_visible[:, [0, 3, 5, 6]].all(dim=1), - keypoints_visible[:, [1, 2, 4, 7]].all(dim=1)), - dim=1) - keypoints_visible = keypoints_visible.float() - - if use_local_coords: - keypoints2d = torch.cat((keypoints2d - centers2d.unsqueeze(1), - keypoints_visible.unsqueeze(-1)), - dim=2) - else: - keypoints2d = torch.cat( - (keypoints2d, keypoints_visible.unsqueeze(-1)), dim=2) - - keypoints2d_list.append(keypoints2d) - keypoints_depth_mask_list.append(keypoints_depth_valid) - - return (keypoints2d_list, keypoints_depth_mask_list) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/handle_objs.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/handle_objs.py deleted file mode 100644 index 25fd793a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/handle_objs.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def filter_outside_objs(gt_bboxes_list, gt_labels_list, gt_bboxes_3d_list, - gt_labels_3d_list, centers2d_list, img_metas): - """Function to filter the objects label outside the image. - - Args: - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - gt_bboxes_3d_list (list[Tensor]): 3D Ground truth bboxes of each - image, each has shape (num_gt, bbox_code_size). 
- gt_labels_3d_list (list[Tensor]): 3D Ground truth labels of each - box, each has shape (num_gt,). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - each has shape (num_gt, 2). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - """ - bs = len(centers2d_list) - - for i in range(bs): - centers2d = centers2d_list[i].clone() - img_shape = img_metas[i]['img_shape'] - keep_inds = (centers2d[:, 0] > 0) & \ - (centers2d[:, 0] < img_shape[1]) & \ - (centers2d[:, 1] > 0) & \ - (centers2d[:, 1] < img_shape[0]) - centers2d_list[i] = centers2d[keep_inds] - gt_labels_list[i] = gt_labels_list[i][keep_inds] - gt_bboxes_list[i] = gt_bboxes_list[i][keep_inds] - gt_bboxes_3d_list[i].tensor = gt_bboxes_3d_list[i].tensor[keep_inds] - gt_labels_3d_list[i] = gt_labels_3d_list[i][keep_inds] - - -def get_centers2d_target(centers2d, centers, img_shape): - """Function to get target centers2d. - - Args: - centers2d (Tensor): Projected 3D centers onto 2D images. - centers (Tensor): Centers of 2d gt bboxes. - img_shape (tuple): Resized image shape. - - Returns: - torch.Tensor: Projected 3D centers (centers2D) target. - """ - N = centers2d.shape[0] - h, w = img_shape[:2] - valid_intersects = centers2d.new_zeros((N, 2)) - a = (centers[:, 1] - centers2d[:, 1]) / (centers[:, 0] - centers2d[:, 0]) - b = centers[:, 1] - a * centers[:, 0] - left_y = b - right_y = (w - 1) * a + b - top_x = -b / a - bottom_x = (h - 1 - b) / a - - left_coors = torch.stack((left_y.new_zeros(N, ), left_y), dim=1) - right_coors = torch.stack((right_y.new_full((N, ), w - 1), right_y), dim=1) - top_coors = torch.stack((top_x, top_x.new_zeros(N, )), dim=1) - bottom_coors = torch.stack((bottom_x, bottom_x.new_full((N, ), h - 1)), - dim=1) - - intersects = torch.stack( - [left_coors, right_coors, top_coors, bottom_coors], dim=1) - intersects_x = intersects[:, :, 0] - intersects_y = intersects[:, :, 1] - inds = (intersects_x >= 0) & (intersects_x <= - w - 1) & (intersects_y >= 0) & ( - intersects_y <= h - 1) - valid_intersects = intersects[inds].reshape(N, 2, 2) - dist = torch.norm(valid_intersects - centers2d.unsqueeze(1), dim=2) - min_idx = torch.argmin(dist, dim=1) - - min_idx = min_idx.unsqueeze(-1).unsqueeze(-1).expand(-1, -1, 2) - centers2d_target = valid_intersects.gather(dim=1, index=min_idx).squeeze(1) - - return centers2d_target - - -def handle_proj_objs(centers2d_list, gt_bboxes_list, img_metas): - """Function to handle projected object centers2d, generate target - centers2d. - - Args: - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - shape (num_gt, 4). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - shape (num_gt, 2). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple[list[Tensor]]: It contains three elements. The first is the - target centers2d after handling the truncated objects. The second - is the offsets between target centers2d and round int dtype - centers2d,and the last is the truncation mask for each object in - batch data. - """ - bs = len(centers2d_list) - centers2d_target_list = [] - trunc_mask_list = [] - offsets2d_list = [] - # for now, only pad mode that img is padded by right and - # bottom side is supported. 
- for i in range(bs): - centers2d = centers2d_list[i] - gt_bbox = gt_bboxes_list[i] - img_shape = img_metas[i]['img_shape'] - centers2d_target = centers2d.clone() - inside_inds = (centers2d[:, 0] > 0) & \ - (centers2d[:, 0] < img_shape[1]) & \ - (centers2d[:, 1] > 0) & \ - (centers2d[:, 1] < img_shape[0]) - outside_inds = ~inside_inds - - # if there are outside objects - if outside_inds.any(): - centers = (gt_bbox[:, :2] + gt_bbox[:, 2:]) / 2 - outside_centers2d = centers2d[outside_inds] - match_centers = centers[outside_inds] - target_outside_centers2d = get_centers2d_target( - outside_centers2d, match_centers, img_shape) - centers2d_target[outside_inds] = target_outside_centers2d - - offsets2d = centers2d - centers2d_target.round().int() - trunc_mask = outside_inds - - centers2d_target_list.append(centers2d_target) - trunc_mask_list.append(trunc_mask) - offsets2d_list.append(offsets2d) - - return (centers2d_target_list, offsets2d_list, trunc_mask_list) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/mlp.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/mlp.py deleted file mode 100644 index 0b499bb4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/utils/mlp.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch import nn as nn - - -class MLP(BaseModule): - """A simple MLP module. - - Pass features (B, C, N) through an MLP. - - Args: - in_channels (int, optional): Number of channels of input features. - Default: 18. - conv_channels (tuple[int], optional): Out channels of the convolution. - Default: (256, 256). - conv_cfg (dict, optional): Config of convolution. - Default: dict(type='Conv1d'). - norm_cfg (dict, optional): Config of normalization. - Default: dict(type='BN1d'). - act_cfg (dict, optional): Config of activation. - Default: dict(type='ReLU'). - """ - - def __init__(self, - in_channel=18, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.mlp = nn.Sequential() - prev_channels = in_channel - for i, conv_channel in enumerate(conv_channels): - self.mlp.add_module( - f'layer{i}', - ConvModule( - prev_channels, - conv_channels[i], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=True, - inplace=True)) - prev_channels = conv_channels[i] - - def forward(self, img_features): - return self.mlp(img_features) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/__init__.py deleted file mode 100644 index 2926a834..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
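The `MLP` module above is simply a stack of 1x1 `Conv1d` + `BN1d` + `ReLU` blocks, i.e. a shared per-point MLP applied to features of shape (B, C, N). A functionally equivalent plain-PyTorch sketch with hypothetical shapes and the default `in_channel=18`, `conv_channels=(256, 256)`:

```python
import torch
from torch import nn

mlp = nn.Sequential(
    nn.Conv1d(18, 256, 1), nn.BatchNorm1d(256), nn.ReLU(inplace=True),
    nn.Conv1d(256, 256, 1), nn.BatchNorm1d(256), nn.ReLU(inplace=True),
)
x = torch.rand(4, 18, 1024)   # (batch, channels, num_points)
print(mlp(x).shape)           # torch.Size([4, 256, 1024])
```

Using 1x1 convolutions instead of `nn.Linear` keeps the channel dimension in place, so the same weights are applied independently to every point without reshaping.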
-from .pillar_encoder import DynamicPillarFeatureNet, PillarFeatureNet -from .voxel_encoder import DynamicSimpleVFE, DynamicVFE, HardSimpleVFE, HardVFE - -__all__ = [ - 'PillarFeatureNet', 'DynamicPillarFeatureNet', 'HardVFE', 'DynamicVFE', - 'HardSimpleVFE', 'DynamicSimpleVFE' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/pillar_encoder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/pillar_encoder.py deleted file mode 100644 index 39bdc728..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/pillar_encoder.py +++ /dev/null @@ -1,323 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import build_norm_layer -from mmcv.ops import DynamicScatter -from mmcv.runner import force_fp32 -from torch import nn - -from ..builder import VOXEL_ENCODERS -from .utils import PFNLayer, get_paddings_indicator - - -@VOXEL_ENCODERS.register_module() -class PillarFeatureNet(nn.Module): - """Pillar Feature Net. - - The network prepares the pillar features and performs forward pass - through PFNLayers. - - Args: - in_channels (int, optional): Number of input features, - either x, y, z or x, y, z, r. Defaults to 4. - feat_channels (tuple, optional): Number of features in each of the - N PFNLayers. Defaults to (64, ). - with_distance (bool, optional): Whether to include Euclidean distance - to points. Defaults to False. - with_cluster_center (bool, optional): [description]. Defaults to True. - with_voxel_center (bool, optional): [description]. Defaults to True. - voxel_size (tuple[float], optional): Size of voxels, only utilize x - and y size. Defaults to (0.2, 0.2, 4). - point_cloud_range (tuple[float], optional): Point cloud range, only - utilizes x and y min. Defaults to (0, -40, -3, 70.4, 40, 1). - norm_cfg ([type], optional): [description]. - Defaults to dict(type='BN1d', eps=1e-3, momentum=0.01). - mode (str, optional): The mode to gather point features. Options are - 'max' or 'avg'. Defaults to 'max'. - legacy (bool, optional): Whether to use the new behavior or - the original behavior. Defaults to True. 
- """ - - def __init__(self, - in_channels=4, - feat_channels=(64, ), - with_distance=False, - with_cluster_center=True, - with_voxel_center=True, - voxel_size=(0.2, 0.2, 4), - point_cloud_range=(0, -40, -3, 70.4, 40, 1), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - mode='max', - legacy=True): - super(PillarFeatureNet, self).__init__() - assert len(feat_channels) > 0 - self.legacy = legacy - if with_cluster_center: - in_channels += 3 - if with_voxel_center: - in_channels += 3 - if with_distance: - in_channels += 1 - self._with_distance = with_distance - self._with_cluster_center = with_cluster_center - self._with_voxel_center = with_voxel_center - self.fp16_enabled = False - # Create PillarFeatureNet layers - self.in_channels = in_channels - feat_channels = [in_channels] + list(feat_channels) - pfn_layers = [] - for i in range(len(feat_channels) - 1): - in_filters = feat_channels[i] - out_filters = feat_channels[i + 1] - if i < len(feat_channels) - 2: - last_layer = False - else: - last_layer = True - pfn_layers.append( - PFNLayer( - in_filters, - out_filters, - norm_cfg=norm_cfg, - last_layer=last_layer, - mode=mode)) - self.pfn_layers = nn.ModuleList(pfn_layers) - - # Need pillar (voxel) size and x/y offset in order to calculate offset - self.vx = voxel_size[0] - self.vy = voxel_size[1] - self.vz = voxel_size[2] - self.x_offset = self.vx / 2 + point_cloud_range[0] - self.y_offset = self.vy / 2 + point_cloud_range[1] - self.z_offset = self.vz / 2 + point_cloud_range[2] - self.point_cloud_range = point_cloud_range - - @force_fp32(out_fp16=True) - def forward(self, features, num_points, coors): - """Forward function. - - Args: - features (torch.Tensor): Point features or raw points in shape - (N, M, C). - num_points (torch.Tensor): Number of points in each pillar. - coors (torch.Tensor): Coordinates of each voxel. - - Returns: - torch.Tensor: Features of pillars. - """ - features_ls = [features] - # Find distance of x, y, and z from cluster center - if self._with_cluster_center: - points_mean = features[:, :, :3].sum( - dim=1, keepdim=True) / num_points.type_as(features).view( - -1, 1, 1) - f_cluster = features[:, :, :3] - points_mean - features_ls.append(f_cluster) - - # Find distance of x, y, and z from pillar center - dtype = features.dtype - if self._with_voxel_center: - if not self.legacy: - f_center = torch.zeros_like(features[:, :, :3]) - f_center[:, :, 0] = features[:, :, 0] - ( - coors[:, 3].to(dtype).unsqueeze(1) * self.vx + - self.x_offset) - f_center[:, :, 1] = features[:, :, 1] - ( - coors[:, 2].to(dtype).unsqueeze(1) * self.vy + - self.y_offset) - f_center[:, :, 2] = features[:, :, 2] - ( - coors[:, 1].to(dtype).unsqueeze(1) * self.vz + - self.z_offset) - else: - f_center = features[:, :, :3] - f_center[:, :, 0] = f_center[:, :, 0] - ( - coors[:, 3].type_as(features).unsqueeze(1) * self.vx + - self.x_offset) - f_center[:, :, 1] = f_center[:, :, 1] - ( - coors[:, 2].type_as(features).unsqueeze(1) * self.vy + - self.y_offset) - f_center[:, :, 2] = f_center[:, :, 2] - ( - coors[:, 1].type_as(features).unsqueeze(1) * self.vz + - self.z_offset) - features_ls.append(f_center) - - if self._with_distance: - points_dist = torch.norm(features[:, :, :3], 2, 2, keepdim=True) - features_ls.append(points_dist) - - # Combine together feature decorations - features = torch.cat(features_ls, dim=-1) - # The feature decorations were calculated without regard to whether - # pillar was empty. Need to ensure that - # empty pillars remain set to zeros. 
- voxel_count = features.shape[1] - mask = get_paddings_indicator(num_points, voxel_count, axis=0) - mask = torch.unsqueeze(mask, -1).type_as(features) - features *= mask - - for pfn in self.pfn_layers: - features = pfn(features, num_points) - - return features.squeeze(1) - - -@VOXEL_ENCODERS.register_module() -class DynamicPillarFeatureNet(PillarFeatureNet): - """Pillar Feature Net using dynamic voxelization. - - The network prepares the pillar features and performs forward pass - through PFNLayers. The main difference is that it is used for - dynamic voxels, which contains different number of points inside a voxel - without limits. - - Args: - in_channels (int, optional): Number of input features, - either x, y, z or x, y, z, r. Defaults to 4. - feat_channels (tuple, optional): Number of features in each of the - N PFNLayers. Defaults to (64, ). - with_distance (bool, optional): Whether to include Euclidean distance - to points. Defaults to False. - with_cluster_center (bool, optional): [description]. Defaults to True. - with_voxel_center (bool, optional): [description]. Defaults to True. - voxel_size (tuple[float], optional): Size of voxels, only utilize x - and y size. Defaults to (0.2, 0.2, 4). - point_cloud_range (tuple[float], optional): Point cloud range, only - utilizes x and y min. Defaults to (0, -40, -3, 70.4, 40, 1). - norm_cfg ([type], optional): [description]. - Defaults to dict(type='BN1d', eps=1e-3, momentum=0.01). - mode (str, optional): The mode to gather point features. Options are - 'max' or 'avg'. Defaults to 'max'. - legacy (bool, optional): Whether to use the new behavior or - the original behavior. Defaults to True. - """ - - def __init__(self, - in_channels=4, - feat_channels=(64, ), - with_distance=False, - with_cluster_center=True, - with_voxel_center=True, - voxel_size=(0.2, 0.2, 4), - point_cloud_range=(0, -40, -3, 70.4, 40, 1), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - mode='max', - legacy=True): - super(DynamicPillarFeatureNet, self).__init__( - in_channels, - feat_channels, - with_distance, - with_cluster_center=with_cluster_center, - with_voxel_center=with_voxel_center, - voxel_size=voxel_size, - point_cloud_range=point_cloud_range, - norm_cfg=norm_cfg, - mode=mode, - legacy=legacy) - self.fp16_enabled = False - feat_channels = [self.in_channels] + list(feat_channels) - pfn_layers = [] - # TODO: currently only support one PFNLayer - - for i in range(len(feat_channels) - 1): - in_filters = feat_channels[i] - out_filters = feat_channels[i + 1] - if i > 0: - in_filters *= 2 - norm_name, norm_layer = build_norm_layer(norm_cfg, out_filters) - pfn_layers.append( - nn.Sequential( - nn.Linear(in_filters, out_filters, bias=False), norm_layer, - nn.ReLU(inplace=True))) - self.num_pfn = len(pfn_layers) - self.pfn_layers = nn.ModuleList(pfn_layers) - self.pfn_scatter = DynamicScatter(voxel_size, point_cloud_range, - (mode != 'max')) - self.cluster_scatter = DynamicScatter( - voxel_size, point_cloud_range, average_points=True) - - def map_voxel_center_to_point(self, pts_coors, voxel_mean, voxel_coors): - """Map the centers of voxels to its corresponding points. - - Args: - pts_coors (torch.Tensor): The coordinates of each points, shape - (M, 3), where M is the number of points. - voxel_mean (torch.Tensor): The mean or aggregated features of a - voxel, shape (N, C), where N is the number of voxels. - voxel_coors (torch.Tensor): The coordinates of each voxel. 
- - Returns: - torch.Tensor: Corresponding voxel centers of each points, shape - (M, C), where M is the number of points. - """ - # Step 1: scatter voxel into canvas - # Calculate necessary things for canvas creation - canvas_y = int( - (self.point_cloud_range[4] - self.point_cloud_range[1]) / self.vy) - canvas_x = int( - (self.point_cloud_range[3] - self.point_cloud_range[0]) / self.vx) - canvas_channel = voxel_mean.size(1) - batch_size = pts_coors[-1, 0] + 1 - canvas_len = canvas_y * canvas_x * batch_size - # Create the canvas for this sample - canvas = voxel_mean.new_zeros(canvas_channel, canvas_len) - # Only include non-empty pillars - indices = ( - voxel_coors[:, 0] * canvas_y * canvas_x + - voxel_coors[:, 2] * canvas_x + voxel_coors[:, 3]) - # Scatter the blob back to the canvas - canvas[:, indices.long()] = voxel_mean.t() - - # Step 2: get voxel mean for each point - voxel_index = ( - pts_coors[:, 0] * canvas_y * canvas_x + - pts_coors[:, 2] * canvas_x + pts_coors[:, 3]) - center_per_point = canvas[:, voxel_index.long()].t() - return center_per_point - - @force_fp32(out_fp16=True) - def forward(self, features, coors): - """Forward function. - - Args: - features (torch.Tensor): Point features or raw points in shape - (N, M, C). - coors (torch.Tensor): Coordinates of each voxel - - Returns: - torch.Tensor: Features of pillars. - """ - features_ls = [features] - # Find distance of x, y, and z from cluster center - if self._with_cluster_center: - voxel_mean, mean_coors = self.cluster_scatter(features, coors) - points_mean = self.map_voxel_center_to_point( - coors, voxel_mean, mean_coors) - # TODO: maybe also do cluster for reflectivity - f_cluster = features[:, :3] - points_mean[:, :3] - features_ls.append(f_cluster) - - # Find distance of x, y, and z from pillar center - if self._with_voxel_center: - f_center = features.new_zeros(size=(features.size(0), 3)) - f_center[:, 0] = features[:, 0] - ( - coors[:, 3].type_as(features) * self.vx + self.x_offset) - f_center[:, 1] = features[:, 1] - ( - coors[:, 2].type_as(features) * self.vy + self.y_offset) - f_center[:, 2] = features[:, 2] - ( - coors[:, 1].type_as(features) * self.vz + self.z_offset) - features_ls.append(f_center) - - if self._with_distance: - points_dist = torch.norm(features[:, :3], 2, 1, keepdim=True) - features_ls.append(points_dist) - - # Combine together feature decorations - features = torch.cat(features_ls, dim=-1) - for i, pfn in enumerate(self.pfn_layers): - point_feats = pfn(features) - voxel_feats, voxel_coors = self.pfn_scatter(point_feats, coors) - if i != len(self.pfn_layers) - 1: - # need to concat voxel feats if it is not the last pfn - feat_per_point = self.map_voxel_center_to_point( - coors, voxel_feats, voxel_coors) - features = torch.cat([point_feats, feat_per_point], dim=1) - - return voxel_feats, voxel_coors diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/utils.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/utils.py deleted file mode 100644 index 8c54fc2d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/utils.py +++ /dev/null @@ -1,182 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import build_norm_layer -from mmcv.runner import auto_fp16 -from torch import nn -from torch.nn import functional as F - - -def get_paddings_indicator(actual_num, max_num, axis=0): - """Create boolean mask by actually number of a padded tensor. 
- - Args: - actual_num (torch.Tensor): Actual number of points in each voxel. - max_num (int): Max number of points in each voxel - - Returns: - torch.Tensor: Mask indicates which points are valid inside a voxel. - """ - actual_num = torch.unsqueeze(actual_num, axis + 1) - # tiled_actual_num: [N, M, 1] - max_num_shape = [1] * len(actual_num.shape) - max_num_shape[axis + 1] = -1 - max_num = torch.arange( - max_num, dtype=torch.int, device=actual_num.device).view(max_num_shape) - # tiled_actual_num: [[3,3,3,3,3], [4,4,4,4,4], [2,2,2,2,2]] - # tiled_max_num: [[0,1,2,3,4], [0,1,2,3,4], [0,1,2,3,4]] - paddings_indicator = actual_num.int() > max_num - # paddings_indicator shape: [batch_size, max_num] - return paddings_indicator - - -class VFELayer(nn.Module): - """Voxel Feature Encoder layer. - - The voxel encoder is composed of a series of these layers. - This module do not support average pooling and only support to use - max pooling to gather features inside a VFE. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - norm_cfg (dict): Config dict of normalization layers - max_out (bool): Whether aggregate the features of points inside - each voxel and only return voxel features. - cat_max (bool): Whether concatenate the aggregated features - and pointwise features. - """ - - def __init__(self, - in_channels, - out_channels, - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - max_out=True, - cat_max=True): - super(VFELayer, self).__init__() - self.fp16_enabled = False - self.cat_max = cat_max - self.max_out = max_out - # self.units = int(out_channels / 2) - - self.norm = build_norm_layer(norm_cfg, out_channels)[1] - self.linear = nn.Linear(in_channels, out_channels, bias=False) - - @auto_fp16(apply_to=('inputs'), out_fp32=True) - def forward(self, inputs): - """Forward function. - - Args: - inputs (torch.Tensor): Voxels features of shape (N, M, C). - N is the number of voxels, M is the number of points in - voxels, C is the number of channels of point features. - - Returns: - torch.Tensor: Voxel features. There are three mode under which the - features have different meaning. - - `max_out=False`: Return point-wise features in - shape (N, M, C). - - `max_out=True` and `cat_max=False`: Return aggregated - voxel features in shape (N, C) - - `max_out=True` and `cat_max=True`: Return concatenated - point-wise features in shape (N, M, C). - """ - # [K, T, 7] tensordot [7, units] = [K, T, units] - voxel_count = inputs.shape[1] - - x = self.linear(inputs) - x = self.norm(x.permute(0, 2, 1).contiguous()).permute(0, 2, - 1).contiguous() - pointwise = F.relu(x) - # [K, T, units] - if self.max_out: - aggregated = torch.max(pointwise, dim=1, keepdim=True)[0] - else: - # this is for fusion layer - return pointwise - - if not self.cat_max: - return aggregated.squeeze(1) - else: - # [K, 1, units] - repeated = aggregated.repeat(1, voxel_count, 1) - concatenated = torch.cat([pointwise, repeated], dim=2) - # [K, T, 2 * units] - return concatenated - - -class PFNLayer(nn.Module): - """Pillar Feature Net Layer. - - The Pillar Feature Net is composed of a series of these layers, but the - PointPillars paper results only used a single PFNLayer. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - norm_cfg (dict, optional): Config dict of normalization layers. - Defaults to dict(type='BN1d', eps=1e-3, momentum=0.01). - last_layer (bool, optional): If last_layer, there is no - concatenation of features. 
Defaults to False. - mode (str, optional): Pooling model to gather features inside voxels. - Defaults to 'max'. - """ - - def __init__(self, - in_channels, - out_channels, - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - last_layer=False, - mode='max'): - - super().__init__() - self.fp16_enabled = False - self.name = 'PFNLayer' - self.last_vfe = last_layer - if not self.last_vfe: - out_channels = out_channels // 2 - self.units = out_channels - - self.norm = build_norm_layer(norm_cfg, self.units)[1] - self.linear = nn.Linear(in_channels, self.units, bias=False) - - assert mode in ['max', 'avg'] - self.mode = mode - - @auto_fp16(apply_to=('inputs'), out_fp32=True) - def forward(self, inputs, num_voxels=None, aligned_distance=None): - """Forward function. - - Args: - inputs (torch.Tensor): Pillar/Voxel inputs with shape (N, M, C). - N is the number of voxels, M is the number of points in - voxels, C is the number of channels of point features. - num_voxels (torch.Tensor, optional): Number of points in each - voxel. Defaults to None. - aligned_distance (torch.Tensor, optional): The distance of - each points to the voxel center. Defaults to None. - - Returns: - torch.Tensor: Features of Pillars. - """ - x = self.linear(inputs) - x = self.norm(x.permute(0, 2, 1).contiguous()).permute(0, 2, - 1).contiguous() - x = F.relu(x) - - if self.mode == 'max': - if aligned_distance is not None: - x = x.mul(aligned_distance.unsqueeze(-1)) - x_max = torch.max(x, dim=1, keepdim=True)[0] - elif self.mode == 'avg': - if aligned_distance is not None: - x = x.mul(aligned_distance.unsqueeze(-1)) - x_max = x.sum( - dim=1, keepdim=True) / num_voxels.type_as(inputs).view( - -1, 1, 1) - - if self.last_vfe: - return x_max - else: - x_repeat = x_max.repeat(1, inputs.shape[1], 1) - x_concatenated = torch.cat([x, x_repeat], dim=2) - return x_concatenated diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/voxel_encoder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/voxel_encoder.py deleted file mode 100644 index 9f3cf53d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/models/voxel_encoders/voxel_encoder.py +++ /dev/null @@ -1,489 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import build_norm_layer -from mmcv.ops import DynamicScatter -from mmcv.runner import force_fp32 -from torch import nn - -from .. import builder -from ..builder import VOXEL_ENCODERS -from .utils import VFELayer, get_paddings_indicator - - -@VOXEL_ENCODERS.register_module() -class HardSimpleVFE(nn.Module): - """Simple voxel feature encoder used in SECOND. - - It simply averages the values of points in a voxel. - - Args: - num_features (int, optional): Number of features to use. Default: 4. - """ - - def __init__(self, num_features=4): - super(HardSimpleVFE, self).__init__() - self.num_features = num_features - self.fp16_enabled = False - - @force_fp32(out_fp16=True) - def forward(self, features, num_points, coors): - """Forward function. - - Args: - features (torch.Tensor): Point features in shape - (N, M, 3(4)). N is the number of voxels and M is the maximum - number of points inside a single voxel. - num_points (torch.Tensor): Number of points in each voxel, - shape (N, ). - coors (torch.Tensor): Coordinates of voxels. 
- - Returns: - torch.Tensor: Mean of points inside each voxel in shape (N, 3(4)) - """ - points_mean = features[:, :, :self.num_features].sum( - dim=1, keepdim=False) / num_points.type_as(features).view(-1, 1) - return points_mean.contiguous() - - -@VOXEL_ENCODERS.register_module() -class DynamicSimpleVFE(nn.Module): - """Simple dynamic voxel feature encoder used in DV-SECOND. - - It simply averages the values of points in a voxel. - But the number of points in a voxel is dynamic and varies. - - Args: - voxel_size (tupe[float]): Size of a single voxel - point_cloud_range (tuple[float]): Range of the point cloud and voxels - """ - - def __init__(self, - voxel_size=(0.2, 0.2, 4), - point_cloud_range=(0, -40, -3, 70.4, 40, 1)): - super(DynamicSimpleVFE, self).__init__() - self.scatter = DynamicScatter(voxel_size, point_cloud_range, True) - self.fp16_enabled = False - - @torch.no_grad() - @force_fp32(out_fp16=True) - def forward(self, features, coors): - """Forward function. - - Args: - features (torch.Tensor): Point features in shape - (N, 3(4)). N is the number of points. - coors (torch.Tensor): Coordinates of voxels. - - Returns: - torch.Tensor: Mean of points inside each voxel in shape (M, 3(4)). - M is the number of voxels. - """ - # This function is used from the start of the voxelnet - # num_points: [concated_num_points] - features, features_coors = self.scatter(features, coors) - return features, features_coors - - -@VOXEL_ENCODERS.register_module() -class DynamicVFE(nn.Module): - """Dynamic Voxel feature encoder used in DV-SECOND. - - It encodes features of voxels and their points. It could also fuse - image feature into voxel features in a point-wise manner. - The number of points inside the voxel varies. - - Args: - in_channels (int, optional): Input channels of VFE. Defaults to 4. - feat_channels (list(int), optional): Channels of features in VFE. - with_distance (bool, optional): Whether to use the L2 distance of - points to the origin point. Defaults to False. - with_cluster_center (bool, optional): Whether to use the distance - to cluster center of points inside a voxel. Defaults to False. - with_voxel_center (bool, optional): Whether to use the distance - to center of voxel for each points inside a voxel. - Defaults to False. - voxel_size (tuple[float], optional): Size of a single voxel. - Defaults to (0.2, 0.2, 4). - point_cloud_range (tuple[float], optional): The range of points - or voxels. Defaults to (0, -40, -3, 70.4, 40, 1). - norm_cfg (dict, optional): Config dict of normalization layers. - mode (str, optional): The mode when pooling features of points - inside a voxel. Available options include 'max' and 'avg'. - Defaults to 'max'. - fusion_layer (dict, optional): The config dict of fusion - layer used in multi-modal detectors. Defaults to None. - return_point_feats (bool, optional): Whether to return the features - of each points. Defaults to False. 
- """ - - def __init__(self, - in_channels=4, - feat_channels=[], - with_distance=False, - with_cluster_center=False, - with_voxel_center=False, - voxel_size=(0.2, 0.2, 4), - point_cloud_range=(0, -40, -3, 70.4, 40, 1), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - mode='max', - fusion_layer=None, - return_point_feats=False): - super(DynamicVFE, self).__init__() - assert mode in ['avg', 'max'] - assert len(feat_channels) > 0 - if with_cluster_center: - in_channels += 3 - if with_voxel_center: - in_channels += 3 - if with_distance: - in_channels += 1 - self.in_channels = in_channels - self._with_distance = with_distance - self._with_cluster_center = with_cluster_center - self._with_voxel_center = with_voxel_center - self.return_point_feats = return_point_feats - self.fp16_enabled = False - - # Need pillar (voxel) size and x/y offset in order to calculate offset - self.vx = voxel_size[0] - self.vy = voxel_size[1] - self.vz = voxel_size[2] - self.x_offset = self.vx / 2 + point_cloud_range[0] - self.y_offset = self.vy / 2 + point_cloud_range[1] - self.z_offset = self.vz / 2 + point_cloud_range[2] - self.point_cloud_range = point_cloud_range - self.scatter = DynamicScatter(voxel_size, point_cloud_range, True) - - feat_channels = [self.in_channels] + list(feat_channels) - vfe_layers = [] - for i in range(len(feat_channels) - 1): - in_filters = feat_channels[i] - out_filters = feat_channels[i + 1] - if i > 0: - in_filters *= 2 - norm_name, norm_layer = build_norm_layer(norm_cfg, out_filters) - vfe_layers.append( - nn.Sequential( - nn.Linear(in_filters, out_filters, bias=False), norm_layer, - nn.ReLU(inplace=True))) - self.vfe_layers = nn.ModuleList(vfe_layers) - self.num_vfe = len(vfe_layers) - self.vfe_scatter = DynamicScatter(voxel_size, point_cloud_range, - (mode != 'max')) - self.cluster_scatter = DynamicScatter( - voxel_size, point_cloud_range, average_points=True) - self.fusion_layer = None - if fusion_layer is not None: - self.fusion_layer = builder.build_fusion_layer(fusion_layer) - - def map_voxel_center_to_point(self, pts_coors, voxel_mean, voxel_coors): - """Map voxel features to its corresponding points. - - Args: - pts_coors (torch.Tensor): Voxel coordinate of each point. - voxel_mean (torch.Tensor): Voxel features to be mapped. - voxel_coors (torch.Tensor): Coordinates of valid voxels - - Returns: - torch.Tensor: Features or centers of each point. 
- """ - # Step 1: scatter voxel into canvas - # Calculate necessary things for canvas creation - canvas_z = int( - (self.point_cloud_range[5] - self.point_cloud_range[2]) / self.vz) - canvas_y = int( - (self.point_cloud_range[4] - self.point_cloud_range[1]) / self.vy) - canvas_x = int( - (self.point_cloud_range[3] - self.point_cloud_range[0]) / self.vx) - # canvas_channel = voxel_mean.size(1) - batch_size = pts_coors[-1, 0] + 1 - canvas_len = canvas_z * canvas_y * canvas_x * batch_size - # Create the canvas for this sample - canvas = voxel_mean.new_zeros(canvas_len, dtype=torch.long) - # Only include non-empty pillars - indices = ( - voxel_coors[:, 0] * canvas_z * canvas_y * canvas_x + - voxel_coors[:, 1] * canvas_y * canvas_x + - voxel_coors[:, 2] * canvas_x + voxel_coors[:, 3]) - # Scatter the blob back to the canvas - canvas[indices.long()] = torch.arange( - start=0, end=voxel_mean.size(0), device=voxel_mean.device) - - # Step 2: get voxel mean for each point - voxel_index = ( - pts_coors[:, 0] * canvas_z * canvas_y * canvas_x + - pts_coors[:, 1] * canvas_y * canvas_x + - pts_coors[:, 2] * canvas_x + pts_coors[:, 3]) - voxel_inds = canvas[voxel_index.long()] - center_per_point = voxel_mean[voxel_inds, ...] - return center_per_point - - @force_fp32(out_fp16=True) - def forward(self, - features, - coors, - points=None, - img_feats=None, - img_metas=None): - """Forward functions. - - Args: - features (torch.Tensor): Features of voxels, shape is NxC. - coors (torch.Tensor): Coordinates of voxels, shape is Nx(1+NDim). - points (list[torch.Tensor], optional): Raw points used to guide the - multi-modality fusion. Defaults to None. - img_feats (list[torch.Tensor], optional): Image features used for - multi-modality fusion. Defaults to None. - img_metas (dict, optional): [description]. Defaults to None. - - Returns: - tuple: If `return_point_feats` is False, returns voxel features and - its coordinates. If `return_point_feats` is True, returns - feature of each points inside voxels. 
- """ - features_ls = [features] - # Find distance of x, y, and z from cluster center - if self._with_cluster_center: - voxel_mean, mean_coors = self.cluster_scatter(features, coors) - points_mean = self.map_voxel_center_to_point( - coors, voxel_mean, mean_coors) - # TODO: maybe also do cluster for reflectivity - f_cluster = features[:, :3] - points_mean[:, :3] - features_ls.append(f_cluster) - - # Find distance of x, y, and z from pillar center - if self._with_voxel_center: - f_center = features.new_zeros(size=(features.size(0), 3)) - f_center[:, 0] = features[:, 0] - ( - coors[:, 3].type_as(features) * self.vx + self.x_offset) - f_center[:, 1] = features[:, 1] - ( - coors[:, 2].type_as(features) * self.vy + self.y_offset) - f_center[:, 2] = features[:, 2] - ( - coors[:, 1].type_as(features) * self.vz + self.z_offset) - features_ls.append(f_center) - - if self._with_distance: - points_dist = torch.norm(features[:, :3], 2, 1, keepdim=True) - features_ls.append(points_dist) - - # Combine together feature decorations - features = torch.cat(features_ls, dim=-1) - for i, vfe in enumerate(self.vfe_layers): - point_feats = vfe(features) - if (i == len(self.vfe_layers) - 1 and self.fusion_layer is not None - and img_feats is not None): - point_feats = self.fusion_layer(img_feats, points, point_feats, - img_metas) - voxel_feats, voxel_coors = self.vfe_scatter(point_feats, coors) - if i != len(self.vfe_layers) - 1: - # need to concat voxel feats if it is not the last vfe - feat_per_point = self.map_voxel_center_to_point( - coors, voxel_feats, voxel_coors) - features = torch.cat([point_feats, feat_per_point], dim=1) - - if self.return_point_feats: - return point_feats - return voxel_feats, voxel_coors - - -@VOXEL_ENCODERS.register_module() -class HardVFE(nn.Module): - """Voxel feature encoder used in DV-SECOND. - - It encodes features of voxels and their points. It could also fuse - image feature into voxel features in a point-wise manner. - - Args: - in_channels (int, optional): Input channels of VFE. Defaults to 4. - feat_channels (list(int), optional): Channels of features in VFE. - with_distance (bool, optional): Whether to use the L2 distance - of points to the origin point. Defaults to False. - with_cluster_center (bool, optional): Whether to use the distance - to cluster center of points inside a voxel. Defaults to False. - with_voxel_center (bool, optional): Whether to use the distance to - center of voxel for each points inside a voxel. Defaults to False. - voxel_size (tuple[float], optional): Size of a single voxel. - Defaults to (0.2, 0.2, 4). - point_cloud_range (tuple[float], optional): The range of points - or voxels. Defaults to (0, -40, -3, 70.4, 40, 1). - norm_cfg (dict, optional): Config dict of normalization layers. - mode (str, optional): The mode when pooling features of points inside a - voxel. Available options include 'max' and 'avg'. - Defaults to 'max'. - fusion_layer (dict, optional): The config dict of fusion layer - used in multi-modal detectors. Defaults to None. - return_point_feats (bool, optional): Whether to return the - features of each points. Defaults to False. 
- """ - - def __init__(self, - in_channels=4, - feat_channels=[], - with_distance=False, - with_cluster_center=False, - with_voxel_center=False, - voxel_size=(0.2, 0.2, 4), - point_cloud_range=(0, -40, -3, 70.4, 40, 1), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - mode='max', - fusion_layer=None, - return_point_feats=False): - super(HardVFE, self).__init__() - assert len(feat_channels) > 0 - if with_cluster_center: - in_channels += 3 - if with_voxel_center: - in_channels += 3 - if with_distance: - in_channels += 1 - self.in_channels = in_channels - self._with_distance = with_distance - self._with_cluster_center = with_cluster_center - self._with_voxel_center = with_voxel_center - self.return_point_feats = return_point_feats - self.fp16_enabled = False - - # Need pillar (voxel) size and x/y offset to calculate pillar offset - self.vx = voxel_size[0] - self.vy = voxel_size[1] - self.vz = voxel_size[2] - self.x_offset = self.vx / 2 + point_cloud_range[0] - self.y_offset = self.vy / 2 + point_cloud_range[1] - self.z_offset = self.vz / 2 + point_cloud_range[2] - self.point_cloud_range = point_cloud_range - self.scatter = DynamicScatter(voxel_size, point_cloud_range, True) - - feat_channels = [self.in_channels] + list(feat_channels) - vfe_layers = [] - for i in range(len(feat_channels) - 1): - in_filters = feat_channels[i] - out_filters = feat_channels[i + 1] - if i > 0: - in_filters *= 2 - # TODO: pass norm_cfg to VFE - # norm_name, norm_layer = build_norm_layer(norm_cfg, out_filters) - if i == (len(feat_channels) - 2): - cat_max = False - max_out = True - if fusion_layer: - max_out = False - else: - max_out = True - cat_max = True - vfe_layers.append( - VFELayer( - in_filters, - out_filters, - norm_cfg=norm_cfg, - max_out=max_out, - cat_max=cat_max)) - self.vfe_layers = nn.ModuleList(vfe_layers) - self.num_vfe = len(vfe_layers) - - self.fusion_layer = None - if fusion_layer is not None: - self.fusion_layer = builder.build_fusion_layer(fusion_layer) - - @force_fp32(out_fp16=True) - def forward(self, - features, - num_points, - coors, - img_feats=None, - img_metas=None): - """Forward functions. - - Args: - features (torch.Tensor): Features of voxels, shape is MxNxC. - num_points (torch.Tensor): Number of points in each voxel. - coors (torch.Tensor): Coordinates of voxels, shape is Mx(1+NDim). - img_feats (list[torch.Tensor], optional): Image features used for - multi-modality fusion. Defaults to None. - img_metas (dict, optional): [description]. Defaults to None. - - Returns: - tuple: If `return_point_feats` is False, returns voxel features and - its coordinates. If `return_point_feats` is True, returns - feature of each points inside voxels. 
- """ - features_ls = [features] - # Find distance of x, y, and z from cluster center - if self._with_cluster_center: - points_mean = ( - features[:, :, :3].sum(dim=1, keepdim=True) / - num_points.type_as(features).view(-1, 1, 1)) - # TODO: maybe also do cluster for reflectivity - f_cluster = features[:, :, :3] - points_mean - features_ls.append(f_cluster) - - # Find distance of x, y, and z from pillar center - if self._with_voxel_center: - f_center = features.new_zeros( - size=(features.size(0), features.size(1), 3)) - f_center[:, :, 0] = features[:, :, 0] - ( - coors[:, 3].type_as(features).unsqueeze(1) * self.vx + - self.x_offset) - f_center[:, :, 1] = features[:, :, 1] - ( - coors[:, 2].type_as(features).unsqueeze(1) * self.vy + - self.y_offset) - f_center[:, :, 2] = features[:, :, 2] - ( - coors[:, 1].type_as(features).unsqueeze(1) * self.vz + - self.z_offset) - features_ls.append(f_center) - - if self._with_distance: - points_dist = torch.norm(features[:, :, :3], 2, 2, keepdim=True) - features_ls.append(points_dist) - - # Combine together feature decorations - voxel_feats = torch.cat(features_ls, dim=-1) - # The feature decorations were calculated without regard to whether - # pillar was empty. - # Need to ensure that empty voxels remain set to zeros. - voxel_count = voxel_feats.shape[1] - mask = get_paddings_indicator(num_points, voxel_count, axis=0) - voxel_feats *= mask.unsqueeze(-1).type_as(voxel_feats) - - for i, vfe in enumerate(self.vfe_layers): - voxel_feats = vfe(voxel_feats) - - if (self.fusion_layer is not None and img_feats is not None): - voxel_feats = self.fusion_with_mask(features, mask, voxel_feats, - coors, img_feats, img_metas) - - return voxel_feats - - def fusion_with_mask(self, features, mask, voxel_feats, coors, img_feats, - img_metas): - """Fuse image and point features with mask. - - Args: - features (torch.Tensor): Features of voxel, usually it is the - values of points in voxels. - mask (torch.Tensor): Mask indicates valid features in each voxel. - voxel_feats (torch.Tensor): Features of voxels. - coors (torch.Tensor): Coordinates of each single voxel. - img_feats (list[torch.Tensor]): Multi-scale feature maps of image. - img_metas (list(dict)): Meta information of image and points. - - Returns: - torch.Tensor: Fused features of each voxel. - """ - # the features is consist of a batch of points - batch_size = coors[-1, 0] + 1 - points = [] - for i in range(batch_size): - single_mask = (coors[:, 0] == i) - points.append(features[single_mask][mask[single_mask]]) - - point_feats = voxel_feats[mask] - point_feats = self.fusion_layer(img_feats, points, point_feats, - img_metas) - - voxel_canvas = voxel_feats.new_zeros( - size=(voxel_feats.size(0), voxel_feats.size(1), - point_feats.size(-1))) - voxel_canvas[mask] = point_feats - out = torch.max(voxel_canvas, dim=1)[0] - - return out diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/__init__.py deleted file mode 100644 index c96e954a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/__init__.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-from mmcv.ops import (RoIAlign, SigmoidFocalLoss, get_compiler_version, - get_compiling_cuda_version, nms, roi_align, - sigmoid_focal_loss) -from mmcv.ops.assign_score_withk import assign_score_withk -from mmcv.ops.ball_query import ball_query -from mmcv.ops.furthest_point_sample import (furthest_point_sample, - furthest_point_sample_with_dist) -from mmcv.ops.gather_points import gather_points -from mmcv.ops.group_points import GroupAll, QueryAndGroup, grouping_operation -from mmcv.ops.knn import knn -from mmcv.ops.points_in_boxes import (points_in_boxes_all, points_in_boxes_cpu, - points_in_boxes_part) -from mmcv.ops.points_sampler import PointsSampler as Points_Sampler -from mmcv.ops.roiaware_pool3d import RoIAwarePool3d -from mmcv.ops.roipoint_pool3d import RoIPointPool3d -from mmcv.ops.scatter_points import DynamicScatter, dynamic_scatter -from mmcv.ops.three_interpolate import three_interpolate -from mmcv.ops.three_nn import three_nn -from mmcv.ops.voxelize import Voxelization, voxelization - -from .dgcnn_modules import DGCNNFAModule, DGCNNFPModule, DGCNNGFModule -from .norm import NaiveSyncBatchNorm1d, NaiveSyncBatchNorm2d -from .paconv import PAConv, PAConvCUDA -from .pointnet_modules import (PAConvCUDASAModule, PAConvCUDASAModuleMSG, - PAConvSAModule, PAConvSAModuleMSG, - PointFPModule, PointSAModule, PointSAModuleMSG, - build_sa_module) -# from .sparse_block import (SparseBasicBlock, SparseBottleneck, -# make_sparse_convmodule) - -__all__ = [ - 'nms', 'soft_nms', 'RoIAlign', 'roi_align', 'get_compiler_version', - 'get_compiling_cuda_version', 'NaiveSyncBatchNorm1d', - 'NaiveSyncBatchNorm2d', 'batched_nms', 'Voxelization', 'voxelization', - 'dynamic_scatter', 'DynamicScatter', 'sigmoid_focal_loss', - 'SigmoidFocalLoss', 'SparseBasicBlock', 'SparseBottleneck', - 'RoIAwarePool3d', 'points_in_boxes_part', 'points_in_boxes_cpu', - 'make_sparse_convmodule', 'ball_query', 'knn', 'furthest_point_sample', - 'furthest_point_sample_with_dist', 'three_interpolate', 'three_nn', - 'gather_points', 'grouping_operation', 'GroupAll', 'QueryAndGroup', - 'PointSAModule', 'PointSAModuleMSG', 'PointFPModule', 'DGCNNFPModule', - 'DGCNNGFModule', 'DGCNNFAModule', 'points_in_boxes_all', - 'get_compiler_version', 'assign_score_withk', 'get_compiling_cuda_version', - 'Points_Sampler', 'build_sa_module', 'PAConv', 'PAConvCUDA', - 'PAConvSAModuleMSG', 'PAConvSAModule', 'PAConvCUDASAModule', - 'PAConvCUDASAModuleMSG', 'RoIPointPool3d' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/__init__.py deleted file mode 100644 index 67beb090..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .dgcnn_fa_module import DGCNNFAModule -from .dgcnn_fp_module import DGCNNFPModule -from .dgcnn_gf_module import DGCNNGFModule - -__all__ = ['DGCNNFAModule', 'DGCNNFPModule', 'DGCNNGFModule'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_fa_module.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_fa_module.py deleted file mode 100644 index b0975e69..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_fa_module.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn - - -class DGCNNFAModule(BaseModule): - """Point feature aggregation module used in DGCNN. - - Aggregate all the features of points. - - Args: - mlp_channels (list[int]): List of mlp channels. - norm_cfg (dict, optional): Type of normalization method. - Defaults to dict(type='BN1d'). - act_cfg (dict, optional): Type of activation method. - Defaults to dict(type='ReLU'). - init_cfg (dict, optional): Initialization config. Defaults to None. - """ - - def __init__(self, - mlp_channels, - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.fp16_enabled = False - self.mlps = nn.Sequential() - for i in range(len(mlp_channels) - 1): - self.mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, ), - stride=(1, ), - conv_cfg=dict(type='Conv1d'), - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - @force_fp32() - def forward(self, points): - """forward. - - Args: - points (List[Tensor]): tensor of the features to be aggregated. - - Returns: - Tensor: (B, N, M) M = mlp[-1], tensor of the output points. - """ - - if len(points) > 1: - new_points = torch.cat(points[1:], dim=-1) - new_points = new_points.transpose(1, 2).contiguous() # (B, C, N) - new_points_copy = new_points - - new_points = self.mlps(new_points) - - new_fa_points = new_points.max(dim=-1, keepdim=True)[0] - new_fa_points = new_fa_points.repeat(1, 1, new_points.shape[-1]) - - new_points = torch.cat([new_fa_points, new_points_copy], dim=1) - new_points = new_points.transpose(1, 2).contiguous() - else: - new_points = points - - return new_points diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_fp_module.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_fp_module.py deleted file mode 100644 index c871721b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_fp_module.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn - - -class DGCNNFPModule(BaseModule): - """Point feature propagation module used in DGCNN. - - Propagate the features from one set to another. - - Args: - mlp_channels (list[int]): List of mlp channels. - norm_cfg (dict, optional): Type of activation method. - Defaults to dict(type='BN1d'). - act_cfg (dict, optional): Type of activation method. - Defaults to dict(type='ReLU'). - init_cfg (dict, optional): Initialization config. Defaults to None. - """ - - def __init__(self, - mlp_channels, - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.fp16_enabled = False - self.mlps = nn.Sequential() - for i in range(len(mlp_channels) - 1): - self.mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, ), - stride=(1, ), - conv_cfg=dict(type='Conv1d'), - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - @force_fp32() - def forward(self, points): - """forward. - - Args: - points (Tensor): (B, N, C) tensor of the input points. - - Returns: - Tensor: (B, N, M) M = mlp[-1], tensor of the new points. 
- """ - - if points is not None: - new_points = points.transpose(1, 2).contiguous() # (B, C, N) - new_points = self.mlps(new_points) - new_points = new_points.transpose(1, 2).contiguous() - else: - new_points = points - - return new_points diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_gf_module.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_gf_module.py deleted file mode 100644 index 96785e7e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/dgcnn_modules/dgcnn_gf_module.py +++ /dev/null @@ -1,221 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.ops.group_points import GroupAll, QueryAndGroup, grouping_operation -from torch import nn as nn -from torch.nn import functional as F - - -class BaseDGCNNGFModule(nn.Module): - """Base module for point graph feature module used in DGCNN. - - Args: - radii (list[float]): List of radius in each knn or ball query. - sample_nums (list[int]): Number of samples in each knn or ball query. - mlp_channels (list[list[int]]): Specify of the dgcnn before - the global pooling for each graph feature module. - knn_modes (list[str], optional): Type of KNN method, valid mode - ['F-KNN', 'D-KNN'], Defaults to ['F-KNN']. - dilated_group (bool, optional): Whether to use dilated ball query. - Defaults to False. - use_xyz (bool, optional): Whether to use xyz as point features. - Defaults to True. - pool_mode (str, optional): Type of pooling method. Defaults to 'max'. - normalize_xyz (bool, optional): If ball query, whether to normalize - local XYZ with radius. Defaults to False. - grouper_return_grouped_xyz (bool, optional): Whether to return grouped - xyz in `QueryAndGroup`. Defaults to False. - grouper_return_grouped_idx (bool, optional): Whether to return grouped - idx in `QueryAndGroup`. Defaults to False. - """ - - def __init__(self, - radii, - sample_nums, - mlp_channels, - knn_modes=['F-KNN'], - dilated_group=False, - use_xyz=True, - pool_mode='max', - normalize_xyz=False, - grouper_return_grouped_xyz=False, - grouper_return_grouped_idx=False): - super(BaseDGCNNGFModule, self).__init__() - - assert len(sample_nums) == len( - mlp_channels - ), 'Num_samples and mlp_channels should have the same length.' - assert pool_mode in ['max', 'avg' - ], "Pool_mode should be one of ['max', 'avg']." - assert isinstance(knn_modes, list) or isinstance( - knn_modes, tuple), 'The type of knn_modes should be list or tuple.' - - if isinstance(mlp_channels, tuple): - mlp_channels = list(map(list, mlp_channels)) - self.mlp_channels = mlp_channels - - self.pool_mode = pool_mode - self.groupers = nn.ModuleList() - self.mlps = nn.ModuleList() - self.knn_modes = knn_modes - - for i in range(len(sample_nums)): - sample_num = sample_nums[i] - if sample_num is not None: - if self.knn_modes[i] == 'D-KNN': - grouper = QueryAndGroup( - radii[i], - sample_num, - use_xyz=use_xyz, - normalize_xyz=normalize_xyz, - return_grouped_xyz=grouper_return_grouped_xyz, - return_grouped_idx=True) - else: - grouper = QueryAndGroup( - radii[i], - sample_num, - use_xyz=use_xyz, - normalize_xyz=normalize_xyz, - return_grouped_xyz=grouper_return_grouped_xyz, - return_grouped_idx=grouper_return_grouped_idx) - else: - grouper = GroupAll(use_xyz) - self.groupers.append(grouper) - - def _pool_features(self, features): - """Perform feature aggregation using pooling operation. - - Args: - features (torch.Tensor): (B, C, N, K) - Features of locally grouped points before pooling. 
- - Returns: - torch.Tensor: (B, C, N) - Pooled features aggregating local information. - """ - if self.pool_mode == 'max': - # (B, C, N, 1) - new_features = F.max_pool2d( - features, kernel_size=[1, features.size(3)]) - elif self.pool_mode == 'avg': - # (B, C, N, 1) - new_features = F.avg_pool2d( - features, kernel_size=[1, features.size(3)]) - else: - raise NotImplementedError - - return new_features.squeeze(-1).contiguous() - - def forward(self, points): - """forward. - - Args: - points (Tensor): (B, N, C) input points. - - Returns: - List[Tensor]: (B, N, C1) new points generated from each graph - feature module. - """ - new_points_list = [points] - - for i in range(len(self.groupers)): - - new_points = new_points_list[i] - new_points_trans = new_points.transpose( - 1, 2).contiguous() # (B, C, N) - - if self.knn_modes[i] == 'D-KNN': - # (B, N, C) -> (B, N, K) - idx = self.groupers[i](new_points[..., -3:].contiguous(), - new_points[..., -3:].contiguous())[-1] - - grouped_results = grouping_operation( - new_points_trans, idx) # (B, C, N) -> (B, C, N, K) - grouped_results -= new_points_trans.unsqueeze(-1) - else: - grouped_results = self.groupers[i]( - new_points, new_points) # (B, N, C) -> (B, C, N, K) - - new_points = new_points_trans.unsqueeze(-1).repeat( - 1, 1, 1, grouped_results.shape[-1]) - new_points = torch.cat([grouped_results, new_points], dim=1) - - # (B, mlp[-1], N, K) - new_points = self.mlps[i](new_points) - - # (B, mlp[-1], N) - new_points = self._pool_features(new_points) - new_points = new_points.transpose(1, 2).contiguous() - new_points_list.append(new_points) - - return new_points - - -class DGCNNGFModule(BaseDGCNNGFModule): - """Point graph feature module used in DGCNN. - - Args: - mlp_channels (list[int]): Specify of the dgcnn before - the global pooling for each graph feature module. - num_sample (int, optional): Number of samples in each knn or ball - query. Defaults to None. - knn_mode (str, optional): Type of KNN method, valid mode - ['F-KNN', 'D-KNN']. Defaults to 'F-KNN'. - radius (float, optional): Radius to group with. - Defaults to None. - dilated_group (bool, optional): Whether to use dilated ball query. - Defaults to False. - norm_cfg (dict, optional): Type of normalization method. - Defaults to dict(type='BN2d'). - act_cfg (dict, optional): Type of activation method. - Defaults to dict(type='ReLU'). - use_xyz (bool, optional): Whether to use xyz as point features. - Defaults to True. - pool_mode (str, optional): Type of pooling method. - Defaults to 'max'. - normalize_xyz (bool, optional): If ball query, whether to normalize - local XYZ with radius. Defaults to False. - bias (bool | str, optional): If specified as `auto`, it will be decided - by the norm_cfg. Bias will be set as True if `norm_cfg` is None, - otherwise False. Defaults to 'auto'. 
- """ - - def __init__(self, - mlp_channels, - num_sample=None, - knn_mode='F-KNN', - radius=None, - dilated_group=False, - norm_cfg=dict(type='BN2d'), - act_cfg=dict(type='ReLU'), - use_xyz=True, - pool_mode='max', - normalize_xyz=False, - bias='auto'): - super(DGCNNGFModule, self).__init__( - mlp_channels=[mlp_channels], - sample_nums=[num_sample], - knn_modes=[knn_mode], - radii=[radius], - use_xyz=use_xyz, - pool_mode=pool_mode, - normalize_xyz=normalize_xyz, - dilated_group=dilated_group) - - for i in range(len(self.mlp_channels)): - mlp_channel = self.mlp_channels[i] - - mlp = nn.Sequential() - for i in range(len(mlp_channel) - 1): - mlp.add_module( - f'layer{i}', - ConvModule( - mlp_channel[i], - mlp_channel[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - conv_cfg=dict(type='Conv2d'), - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=bias)) - self.mlps.append(mlp) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/norm.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/norm.py deleted file mode 100644 index 98ec7f11..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/norm.py +++ /dev/null @@ -1,163 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import NORM_LAYERS -from mmcv.runner import force_fp32 -from torch import distributed as dist -from torch import nn as nn -from torch.autograd.function import Function - - -class AllReduce(Function): - - @staticmethod - def forward(ctx, input): - input_list = [ - torch.zeros_like(input) for k in range(dist.get_world_size()) - ] - # Use allgather instead of allreduce in-place operations is unreliable - dist.all_gather(input_list, input, async_op=False) - inputs = torch.stack(input_list, dim=0) - return torch.sum(inputs, dim=0) - - @staticmethod - def backward(ctx, grad_output): - dist.all_reduce(grad_output, async_op=False) - return grad_output - - -@NORM_LAYERS.register_module('naiveSyncBN1d') -class NaiveSyncBatchNorm1d(nn.BatchNorm1d): - """Synchronized Batch Normalization for 3D Tensors. - - Note: - This implementation is modified from - https://github.com/facebookresearch/detectron2/ - - `torch.nn.SyncBatchNorm` has known unknown bugs. - It produces significantly worse AP (and sometimes goes NaN) - when the batch size on each worker is quite different - (e.g., when scale augmentation is used). - In 3D detection, different workers has points of different shapes, - which also cause instability. - - Use this implementation before `nn.SyncBatchNorm` is fixed. - It is slower than `nn.SyncBatchNorm`. - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.fp16_enabled = False - - # customized normalization layer still needs this decorator - # to force the input to be fp32 and the output to be fp16 - # TODO: make mmcv fp16 utils handle customized norm layers - @force_fp32(out_fp16=True) - def forward(self, input): - """ - Args: - input (tensor): Has shape (N, C) or (N, C, L), where N is - the batch size, C is the number of features or - channels, and L is the sequence length - - Returns: - tensor: Has shape (N, C) or (N, C, L), has same shape - as input. 
- """ - assert input.dtype == torch.float32, \ - f'input should be in float32 type, got {input.dtype}' - using_dist = dist.is_available() and dist.is_initialized() - if (not using_dist) or dist.get_world_size() == 1 \ - or not self.training: - return super().forward(input) - assert input.shape[0] > 0, 'SyncBN does not support empty inputs' - is_two_dim = input.dim() == 2 - if is_two_dim: - input = input.unsqueeze(2) - - C = input.shape[1] - mean = torch.mean(input, dim=[0, 2]) - meansqr = torch.mean(input * input, dim=[0, 2]) - - vec = torch.cat([mean, meansqr], dim=0) - vec = AllReduce.apply(vec) * (1.0 / dist.get_world_size()) - - mean, meansqr = torch.split(vec, C) - var = meansqr - mean * mean - self.running_mean += self.momentum * ( - mean.detach() - self.running_mean) - self.running_var += self.momentum * (var.detach() - self.running_var) - - invstd = torch.rsqrt(var + self.eps) - scale = self.weight * invstd - bias = self.bias - mean * scale - scale = scale.reshape(1, -1, 1) - bias = bias.reshape(1, -1, 1) - output = input * scale + bias - if is_two_dim: - output = output.squeeze(2) - return output - - -@NORM_LAYERS.register_module('naiveSyncBN2d') -class NaiveSyncBatchNorm2d(nn.BatchNorm2d): - """Synchronized Batch Normalization for 4D Tensors. - - Note: - This implementation is modified from - https://github.com/facebookresearch/detectron2/ - - `torch.nn.SyncBatchNorm` has known unknown bugs. - It produces significantly worse AP (and sometimes goes NaN) - when the batch size on each worker is quite different - (e.g., when scale augmentation is used). - This phenomenon also occurs when the multi-modality feature fusion - modules of multi-modality detectors use SyncBN. - - Use this implementation before `nn.SyncBatchNorm` is fixed. - It is slower than `nn.SyncBatchNorm`. - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.fp16_enabled = False - - # customized normalization layer still needs this decorator - # to force the input to be fp32 and the output to be fp16 - # TODO: make mmcv fp16 utils handle customized norm layers - @force_fp32(out_fp16=True) - def forward(self, input): - """ - Args: - Input (tensor): Feature has shape (N, C, H, W). - - Returns: - tensor: Has shape (N, C, H, W), same shape as input. 
- """ - assert input.dtype == torch.float32, \ - f'input should be in float32 type, got {input.dtype}' - using_dist = dist.is_available() and dist.is_initialized() - if (not using_dist) or \ - dist.get_world_size() == 1 or \ - not self.training: - return super().forward(input) - - assert input.shape[0] > 0, 'SyncBN does not support empty inputs' - C = input.shape[1] - mean = torch.mean(input, dim=[0, 2, 3]) - meansqr = torch.mean(input * input, dim=[0, 2, 3]) - - vec = torch.cat([mean, meansqr], dim=0) - vec = AllReduce.apply(vec) * (1.0 / dist.get_world_size()) - - mean, meansqr = torch.split(vec, C) - var = meansqr - mean * mean - self.running_mean += self.momentum * ( - mean.detach() - self.running_mean) - self.running_var += self.momentum * (var.detach() - self.running_var) - - invstd = torch.rsqrt(var + self.eps) - scale = self.weight * invstd - bias = self.bias - mean * scale - scale = scale.reshape(1, -1, 1, 1) - bias = bias.reshape(1, -1, 1, 1) - return input * scale + bias diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/__init__.py deleted file mode 100644 index d71c7660..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .paconv import PAConv, PAConvCUDA - -__all__ = ['PAConv', 'PAConvCUDA'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/paconv.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/paconv.py deleted file mode 100644 index bda8bfe3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/paconv.py +++ /dev/null @@ -1,392 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import torch -from mmcv.cnn import (ConvModule, build_activation_layer, build_norm_layer, - constant_init) -from mmcv.ops import assign_score_withk as assign_score_cuda -from torch import nn as nn -from torch.nn import functional as F - -from .utils import assign_kernel_withoutk, assign_score, calc_euclidian_dist - - -class ScoreNet(nn.Module): - r"""ScoreNet that outputs coefficient scores to assemble kernel weights in - the weight bank according to the relative position of point pairs. - - Args: - mlp_channels (List[int]): Hidden unit sizes of SharedMLP layers. - last_bn (bool, optional): Whether to use BN on the last output of mlps. - Defaults to False. - score_norm (str, optional): Normalization function of output scores. - Can be 'softmax', 'sigmoid' or 'identity'. Defaults to 'softmax'. - temp_factor (float, optional): Temperature factor to scale the output - scores before softmax. Defaults to 1.0. - norm_cfg (dict, optional): Type of normalization method. - Defaults to dict(type='BN2d'). - bias (bool | str, optional): If specified as `auto`, it will be decided - by the norm_cfg. Bias will be set as True if `norm_cfg` is None, - otherwise False. Defaults to 'auto'. - - Note: - The official code applies xavier_init to all Conv layers in ScoreNet, - see `PAConv `_. However in our experiments, we - did not find much difference in applying such xavier initialization - or not. So we neglect this initialization in our implementation. 
- """ - - def __init__(self, - mlp_channels, - last_bn=False, - score_norm='softmax', - temp_factor=1.0, - norm_cfg=dict(type='BN2d'), - bias='auto'): - super(ScoreNet, self).__init__() - - assert score_norm in ['softmax', 'sigmoid', 'identity'], \ - f'unsupported score_norm function {score_norm}' - - self.score_norm = score_norm - self.temp_factor = temp_factor - - self.mlps = nn.Sequential() - for i in range(len(mlp_channels) - 2): - self.mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - conv_cfg=dict(type='Conv2d'), - norm_cfg=norm_cfg, - bias=bias)) - - # for the last mlp that outputs scores, no relu and possibly no bn - i = len(mlp_channels) - 2 - self.mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - conv_cfg=dict(type='Conv2d'), - norm_cfg=norm_cfg if last_bn else None, - act_cfg=None, - bias=bias)) - - def forward(self, xyz_features): - """Forward. - - Args: - xyz_features (torch.Tensor): (B, C, N, K), features constructed - from xyz coordinates of point pairs. May contain relative - positions, Euclidean distance, etc. - - Returns: - torch.Tensor: (B, N, K, M), predicted scores for `M` kernels. - """ - scores = self.mlps(xyz_features) # (B, M, N, K) - - # perform score normalization - if self.score_norm == 'softmax': - scores = F.softmax(scores / self.temp_factor, dim=1) - elif self.score_norm == 'sigmoid': - scores = torch.sigmoid(scores / self.temp_factor) - else: # 'identity' - scores = scores - - scores = scores.permute(0, 2, 3, 1) # (B, N, K, M) - - return scores - - -class PAConv(nn.Module): - """Non-CUDA version of PAConv. - - PAConv stores a trainable weight bank containing several kernel weights. - Given input points and features, it computes coefficient scores to assemble - those kernels to form conv kernels, and then runs convolution on the input. - - Args: - in_channels (int): Input channels of point features. - out_channels (int): Output channels of point features. - num_kernels (int): Number of kernel weights in the weight bank. - norm_cfg (dict, optional): Type of normalization method. - Defaults to dict(type='BN2d', momentum=0.1). - act_cfg (dict, optional): Type of activation method. - Defaults to dict(type='ReLU', inplace=True). - scorenet_input (str, optional): Type of input to ScoreNet. - Can be 'identity', 'w_neighbor' or 'w_neighbor_dist'. - Defaults to 'w_neighbor_dist'. - weight_bank_init (str, optional): Init method of weight bank kernels. - Can be 'kaiming' or 'xavier'. Defaults to 'kaiming'. - kernel_input (str, optional): Input features to be multiplied with - kernel weights. Can be 'identity' or 'w_neighbor'. - Defaults to 'w_neighbor'. - scorenet_cfg (dict, optional): Config of the ScoreNet module, which - may contain the following keys and values: - - - mlp_channels (List[int]): Hidden units of MLPs. - - score_norm (str): Normalization function of output scores. - Can be 'softmax', 'sigmoid' or 'identity'. - - temp_factor (float): Temperature factor to scale the output - scores before softmax. - - last_bn (bool): Whether to use BN on the last output of mlps. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_kernels, - norm_cfg=dict(type='BN2d', momentum=0.1), - act_cfg=dict(type='ReLU', inplace=True), - scorenet_input='w_neighbor_dist', - weight_bank_init='kaiming', - kernel_input='w_neighbor', - scorenet_cfg=dict( - mlp_channels=[16, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConv, self).__init__() - - # determine weight kernel size according to used features - if kernel_input == 'identity': - # only use grouped_features - kernel_mul = 1 - elif kernel_input == 'w_neighbor': - # concat of (grouped_features - center_features, grouped_features) - kernel_mul = 2 - else: - raise NotImplementedError( - f'unsupported kernel_input {kernel_input}') - self.kernel_input = kernel_input - in_channels = kernel_mul * in_channels - - # determine mlp channels in ScoreNet according to used xyz features - if scorenet_input == 'identity': - # only use relative position (grouped_xyz - center_xyz) - self.scorenet_in_channels = 3 - elif scorenet_input == 'w_neighbor': - # (grouped_xyz - center_xyz, grouped_xyz) - self.scorenet_in_channels = 6 - elif scorenet_input == 'w_neighbor_dist': - # (center_xyz, grouped_xyz - center_xyz, Euclidean distance) - self.scorenet_in_channels = 7 - else: - raise NotImplementedError( - f'unsupported scorenet_input {scorenet_input}') - self.scorenet_input = scorenet_input - - # construct kernel weights in weight bank - # self.weight_bank is of shape [C, num_kernels * out_c] - # where C can be in_c or (2 * in_c) - if weight_bank_init == 'kaiming': - weight_init = nn.init.kaiming_normal_ - elif weight_bank_init == 'xavier': - weight_init = nn.init.xavier_normal_ - else: - raise NotImplementedError( - f'unsupported weight bank init method {weight_bank_init}') - - self.num_kernels = num_kernels # the parameter `m` in the paper - weight_bank = weight_init( - torch.empty(self.num_kernels, in_channels, out_channels)) - weight_bank = weight_bank.permute(1, 0, 2).reshape( - in_channels, self.num_kernels * out_channels).contiguous() - self.weight_bank = nn.Parameter(weight_bank, requires_grad=True) - - # construct ScoreNet - scorenet_cfg_ = copy.deepcopy(scorenet_cfg) - scorenet_cfg_['mlp_channels'].insert(0, self.scorenet_in_channels) - scorenet_cfg_['mlp_channels'].append(self.num_kernels) - self.scorenet = ScoreNet(**scorenet_cfg_) - - self.bn = build_norm_layer(norm_cfg, out_channels)[1] if \ - norm_cfg is not None else None - self.activate = build_activation_layer(act_cfg) if \ - act_cfg is not None else None - - # set some basic attributes of Conv layers - self.in_channels = in_channels - self.out_channels = out_channels - - self.init_weights() - - def init_weights(self): - """Initialize weights of shared MLP layers and BN layers.""" - if self.bn is not None: - constant_init(self.bn, val=1, bias=0) - - def _prepare_scorenet_input(self, points_xyz): - """Prepare input point pairs features for self.ScoreNet. - - Args: - points_xyz (torch.Tensor): (B, 3, npoint, K) - Coordinates of the grouped points. - - Returns: - torch.Tensor: (B, C, npoint, K) - The generated features per point pair. 
- """ - B, _, npoint, K = points_xyz.size() - center_xyz = points_xyz[..., :1].repeat(1, 1, 1, K) - xyz_diff = points_xyz - center_xyz # [B, 3, npoint, K] - if self.scorenet_input == 'identity': - xyz_features = xyz_diff - elif self.scorenet_input == 'w_neighbor': - xyz_features = torch.cat((xyz_diff, points_xyz), dim=1) - else: # w_neighbor_dist - euclidian_dist = calc_euclidian_dist( - center_xyz.permute(0, 2, 3, 1).reshape(B * npoint * K, 3), - points_xyz.permute(0, 2, 3, 1).reshape(B * npoint * K, 3)).\ - reshape(B, 1, npoint, K) - xyz_features = torch.cat((center_xyz, xyz_diff, euclidian_dist), - dim=1) - return xyz_features - - def forward(self, inputs): - """Forward. - - Args: - inputs (tuple(torch.Tensor)): - - - features (torch.Tensor): (B, in_c, npoint, K) - Features of the queried points. - - points_xyz (torch.Tensor): (B, 3, npoint, K) - Coordinates of the grouped points. - - Returns: - Tuple[torch.Tensor]: - - - new_features: (B, out_c, npoint, K), features after PAConv. - - points_xyz: same as input. - """ - features, points_xyz = inputs - B, _, npoint, K = features.size() - - if self.kernel_input == 'w_neighbor': - center_features = features[..., :1].repeat(1, 1, 1, K) - features_diff = features - center_features - # to (B, 2 * in_c, npoint, K) - features = torch.cat((features_diff, features), dim=1) - - # prepare features for between each point and its grouping center - xyz_features = self._prepare_scorenet_input(points_xyz) - - # scores to assemble kernel weights - scores = self.scorenet(xyz_features) # [B, npoint, K, m] - - # first compute out features over all kernels - # features is [B, C, npoint, K], weight_bank is [C, m * out_c] - new_features = torch.matmul( - features.permute(0, 2, 3, 1), - self.weight_bank).view(B, npoint, K, self.num_kernels, - -1) # [B, npoint, K, m, out_c] - - # then aggregate using scores - new_features = assign_score(scores, new_features) - # to [B, out_c, npoint, K] - new_features = new_features.permute(0, 3, 1, 2).contiguous() - - if self.bn is not None: - new_features = self.bn(new_features) - if self.activate is not None: - new_features = self.activate(new_features) - - # in order to keep input output consistency - # so that we can wrap PAConv in Sequential - return (new_features, points_xyz) - - -class PAConvCUDA(PAConv): - """CUDA version of PAConv that implements a cuda op to efficiently perform - kernel assembling. - - Different from vanilla PAConv, the input features of this function is not - grouped by centers. Instead, they will be queried on-the-fly by the - additional input `points_idx`. This avoids the large intermediate matrix. - See the `paper `_ appendix Sec. D for - more detailed descriptions. - """ - - def __init__(self, - in_channels, - out_channels, - num_kernels, - norm_cfg=dict(type='BN2d', momentum=0.1), - act_cfg=dict(type='ReLU', inplace=True), - scorenet_input='w_neighbor_dist', - weight_bank_init='kaiming', - kernel_input='w_neighbor', - scorenet_cfg=dict( - mlp_channels=[8, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConvCUDA, self).__init__( - in_channels=in_channels, - out_channels=out_channels, - num_kernels=num_kernels, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - scorenet_input=scorenet_input, - weight_bank_init=weight_bank_init, - kernel_input=kernel_input, - scorenet_cfg=scorenet_cfg) - - assert self.kernel_input == 'w_neighbor', \ - 'CUDA implemented PAConv only supports w_neighbor kernel_input' - - def forward(self, inputs): - """Forward. 
- - Args: - inputs (tuple(torch.Tensor)): - - - features (torch.Tensor): (B, in_c, N) - Features of all points in the current point cloud. - Different from non-CUDA version PAConv, here the features - are not grouped by each center to form a K dim. - - points_xyz (torch.Tensor): (B, 3, npoint, K) - Coordinates of the grouped points. - - points_idx (torch.Tensor): (B, npoint, K) - Index of the grouped points. - - Returns: - Tuple[torch.Tensor]: - - - new_features: (B, out_c, npoint, K), features after PAConv. - - points_xyz: same as input. - - points_idx: same as input. - """ - features, points_xyz, points_idx = inputs - - # prepare features for between each point and its grouping center - xyz_features = self._prepare_scorenet_input(points_xyz) - - # scores to assemble kernel weights - scores = self.scorenet(xyz_features) # [B, npoint, K, m] - - # pre-compute features for points and centers separately - # features is [B, in_c, N], weight_bank is [C, m * out_dim] - point_feat, center_feat = assign_kernel_withoutk( - features, self.weight_bank, self.num_kernels) - - # aggregate features using custom cuda op - new_features = assign_score_cuda( - scores, point_feat, center_feat, points_idx, - 'sum').contiguous() # [B, out_c, npoint, K] - - if self.bn is not None: - new_features = self.bn(new_features) - if self.activate is not None: - new_features = self.activate(new_features) - - # in order to keep input output consistency - return (new_features, points_xyz, points_idx) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/utils.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/utils.py deleted file mode 100644 index 68e71d51..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/paconv/utils.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def calc_euclidian_dist(xyz1, xyz2): - """Calculate the Euclidean distance between two sets of points. - - Args: - xyz1 (torch.Tensor): (N, 3), the first set of points. - xyz2 (torch.Tensor): (N, 3), the second set of points. - - Returns: - torch.Tensor: (N, ), the Euclidean distance between each point pair. - """ - assert xyz1.shape[0] == xyz2.shape[0], 'number of points are not the same' - assert xyz1.shape[1] == xyz2.shape[1] == 3, \ - 'points coordinates dimension is not 3' - return torch.norm(xyz1 - xyz2, dim=-1) - - -def assign_score(scores, point_features): - """Perform weighted sum to aggregate output features according to scores. - This function is used in non-CUDA version of PAConv. - - Compared to the cuda op assigh_score_withk, this pytorch implementation - pre-computes output features for the neighbors of all centers, and then - performs aggregation. It consumes more GPU memories. - - Args: - scores (torch.Tensor): (B, npoint, K, M), predicted scores to - aggregate weight matrices in the weight bank. - `npoint` is the number of sampled centers. - `K` is the number of queried neighbors. - `M` is the number of weight matrices in the weight bank. - point_features (torch.Tensor): (B, npoint, K, M, out_dim) - Pre-computed point features to be aggregated. - - Returns: - torch.Tensor: (B, npoint, K, out_dim), the aggregated features. - """ - B, npoint, K, M = scores.size() - scores = scores.view(B, npoint, K, 1, M) - output = torch.matmul(scores, point_features).view(B, npoint, K, -1) - return output - - -def assign_kernel_withoutk(features, kernels, M): - """Pre-compute features with weight matrices in weight bank. 
This function - is used before cuda op assign_score_withk in CUDA version PAConv. - - Args: - features (torch.Tensor): (B, in_dim, N), input features of all points. - `N` is the number of points in current point cloud. - kernels (torch.Tensor): (2 * in_dim, M * out_dim), weight matrices in - the weight bank, transformed from (M, 2 * in_dim, out_dim). - `2 * in_dim` is because the input features are concatenation of - (point_features - center_features, point_features). - M (int): Number of weight matrices in the weight bank. - - Returns: - Tuple[torch.Tensor]: both of shape (B, N, M, out_dim): - - - point_features: Pre-computed features for points. - - center_features: Pre-computed features for centers. - """ - B, in_dim, N = features.size() - feat_trans = features.permute(0, 2, 1) # [B, N, in_dim] - out_feat_half1 = torch.matmul(feat_trans, kernels[:in_dim]).view( - B, N, M, -1) # [B, N, M, out_dim] - out_feat_half2 = torch.matmul(feat_trans, kernels[in_dim:]).view( - B, N, M, -1) # [B, N, M, out_dim] - - # TODO: why this hard-coded if condition? - # when the network input is only xyz without additional features - # xyz will be used as features, so that features.size(1) == 3 % 2 != 0 - # we need to compensate center_features because otherwise - # `point_features - center_features` will result in all zeros? - if features.size(1) % 2 != 0: - out_feat_half_coord = torch.matmul( - feat_trans[:, :, :3], # [B, N, 3] - kernels[in_dim:in_dim + 3]).view(B, N, M, -1) # [B, N, M, out_dim] - else: - out_feat_half_coord = torch.zeros_like(out_feat_half2) - - point_features = out_feat_half1 + out_feat_half2 - center_features = out_feat_half1 + out_feat_half_coord - return point_features, center_features diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/__init__.py deleted file mode 100644 index 99b08eb8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import build_sa_module -from .paconv_sa_module import (PAConvCUDASAModule, PAConvCUDASAModuleMSG, - PAConvSAModule, PAConvSAModuleMSG) -from .point_fp_module import PointFPModule -from .point_sa_module import PointSAModule, PointSAModuleMSG - -__all__ = [ - 'build_sa_module', 'PointSAModuleMSG', 'PointSAModule', 'PointFPModule', - 'PAConvSAModule', 'PAConvSAModuleMSG', 'PAConvCUDASAModule', - 'PAConvCUDASAModuleMSG' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/builder.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/builder.py deleted file mode 100644 index 6631cb42..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/builder.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry - -SA_MODULES = Registry('point_sa_module') - - -def build_sa_module(cfg, *args, **kwargs): - """Build PointNet2 set abstraction (SA) module. - - Args: - cfg (None or dict): The SA module config, which should contain: - - type (str): Module type. - - module args: Args needed to instantiate an SA module. - args (argument list): Arguments passed to the `__init__` - method of the corresponding module. - kwargs (keyword arguments): Keyword arguments passed to the `__init__` - method of the corresponding SA module . - - Returns: - nn.Module: Created SA module. 
- """ - if cfg is None: - cfg_ = dict(type='PointSAModule') - else: - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - module_type = cfg_.pop('type') - if module_type not in SA_MODULES: - raise KeyError(f'Unrecognized module type {module_type}') - else: - sa_module = SA_MODULES.get(module_type) - - module = sa_module(*args, **kwargs, **cfg_) - - return module diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/paconv_sa_module.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/paconv_sa_module.py deleted file mode 100644 index 361ecbb2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/paconv_sa_module.py +++ /dev/null @@ -1,342 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn - -from mmdet3d.ops import PAConv, PAConvCUDA -from .builder import SA_MODULES -from .point_sa_module import BasePointSAModule - - -@SA_MODULES.register_module() -class PAConvSAModuleMSG(BasePointSAModule): - r"""Point set abstraction module with multi-scale grouping (MSG) used in - PAConv networks. - - Replace the MLPs in `PointSAModuleMSG` with PAConv layers. - See the `paper `_ for more details. - - Args: - paconv_num_kernels (list[list[int]]): Number of kernel weights in the - weight banks of each layer's PAConv. - paconv_kernel_input (str, optional): Input features to be multiplied - with kernel weights. Can be 'identity' or 'w_neighbor'. - Defaults to 'w_neighbor'. - scorenet_input (str, optional): Type of the input to ScoreNet. - Defaults to 'w_neighbor_dist'. Can be the following values: - - - 'identity': Use xyz coordinates as input. - - 'w_neighbor': Use xyz coordinates and the difference with center - points as input. - - 'w_neighbor_dist': Use xyz coordinates, the difference with - center points and the Euclidean distance as input. - - scorenet_cfg (dict, optional): Config of the ScoreNet module, which - may contain the following keys and values: - - - mlp_channels (List[int]): Hidden units of MLPs. - - score_norm (str): Normalization function of output scores. - Can be 'softmax', 'sigmoid' or 'identity'. - - temp_factor (float): Temperature factor to scale the output - scores before softmax. - - last_bn (bool): Whether to use BN on the last output of mlps. 
- """ - - def __init__(self, - num_point, - radii, - sample_nums, - mlp_channels, - paconv_num_kernels, - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - dilated_group=False, - norm_cfg=dict(type='BN2d', momentum=0.1), - use_xyz=True, - pool_mod='max', - normalize_xyz=False, - bias='auto', - paconv_kernel_input='w_neighbor', - scorenet_input='w_neighbor_dist', - scorenet_cfg=dict( - mlp_channels=[16, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConvSAModuleMSG, self).__init__( - num_point=num_point, - radii=radii, - sample_nums=sample_nums, - mlp_channels=mlp_channels, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - dilated_group=dilated_group, - use_xyz=use_xyz, - pool_mod=pool_mod, - normalize_xyz=normalize_xyz, - grouper_return_grouped_xyz=True) - - assert len(paconv_num_kernels) == len(mlp_channels) - for i in range(len(mlp_channels)): - assert len(paconv_num_kernels[i]) == len(mlp_channels[i]) - 1, \ - 'PAConv number of kernel weights wrong' - - # in PAConv, bias only exists in ScoreNet - scorenet_cfg['bias'] = bias - - for i in range(len(self.mlp_channels)): - mlp_channel = self.mlp_channels[i] - if use_xyz: - mlp_channel[0] += 3 - - num_kernels = paconv_num_kernels[i] - - mlp = nn.Sequential() - for i in range(len(mlp_channel) - 1): - mlp.add_module( - f'layer{i}', - PAConv( - mlp_channel[i], - mlp_channel[i + 1], - num_kernels[i], - norm_cfg=norm_cfg, - kernel_input=paconv_kernel_input, - scorenet_input=scorenet_input, - scorenet_cfg=scorenet_cfg)) - self.mlps.append(mlp) - - -@SA_MODULES.register_module() -class PAConvSAModule(PAConvSAModuleMSG): - r"""Point set abstraction module with single-scale grouping (SSG) used in - PAConv networks. - - Replace the MLPs in `PointSAModule` with PAConv layers. See the `paper - `_ for more details. - """ - - def __init__(self, - mlp_channels, - paconv_num_kernels, - num_point=None, - radius=None, - num_sample=None, - norm_cfg=dict(type='BN2d', momentum=0.1), - use_xyz=True, - pool_mod='max', - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - normalize_xyz=False, - paconv_kernel_input='w_neighbor', - scorenet_input='w_neighbor_dist', - scorenet_cfg=dict( - mlp_channels=[16, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConvSAModule, self).__init__( - mlp_channels=[mlp_channels], - paconv_num_kernels=[paconv_num_kernels], - num_point=num_point, - radii=[radius], - sample_nums=[num_sample], - norm_cfg=norm_cfg, - use_xyz=use_xyz, - pool_mod=pool_mod, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - normalize_xyz=normalize_xyz, - paconv_kernel_input=paconv_kernel_input, - scorenet_input=scorenet_input, - scorenet_cfg=scorenet_cfg) - - -@SA_MODULES.register_module() -class PAConvCUDASAModuleMSG(BasePointSAModule): - r"""Point set abstraction module with multi-scale grouping (MSG) used in - PAConv networks. - - Replace the non CUDA version PAConv with CUDA implemented PAConv for - efficient computation. See the `paper `_ - for more details. 
- """ - - def __init__(self, - num_point, - radii, - sample_nums, - mlp_channels, - paconv_num_kernels, - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - dilated_group=False, - norm_cfg=dict(type='BN2d', momentum=0.1), - use_xyz=True, - pool_mod='max', - normalize_xyz=False, - bias='auto', - paconv_kernel_input='w_neighbor', - scorenet_input='w_neighbor_dist', - scorenet_cfg=dict( - mlp_channels=[8, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConvCUDASAModuleMSG, self).__init__( - num_point=num_point, - radii=radii, - sample_nums=sample_nums, - mlp_channels=mlp_channels, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - dilated_group=dilated_group, - use_xyz=use_xyz, - pool_mod=pool_mod, - normalize_xyz=normalize_xyz, - grouper_return_grouped_xyz=True, - grouper_return_grouped_idx=True) - - assert len(paconv_num_kernels) == len(mlp_channels) - for i in range(len(mlp_channels)): - assert len(paconv_num_kernels[i]) == len(mlp_channels[i]) - 1, \ - 'PAConv number of kernel weights wrong' - - # in PAConv, bias only exists in ScoreNet - scorenet_cfg['bias'] = bias - - # we need to manually concat xyz for CUDA implemented PAConv - self.use_xyz = use_xyz - - for i in range(len(self.mlp_channels)): - mlp_channel = self.mlp_channels[i] - if use_xyz: - mlp_channel[0] += 3 - - num_kernels = paconv_num_kernels[i] - - # can't use `nn.Sequential` for PAConvCUDA because its input and - # output have different shapes - mlp = nn.ModuleList() - for i in range(len(mlp_channel) - 1): - mlp.append( - PAConvCUDA( - mlp_channel[i], - mlp_channel[i + 1], - num_kernels[i], - norm_cfg=norm_cfg, - kernel_input=paconv_kernel_input, - scorenet_input=scorenet_input, - scorenet_cfg=scorenet_cfg)) - self.mlps.append(mlp) - - def forward( - self, - points_xyz, - features=None, - indices=None, - target_xyz=None, - ): - """forward. - - Args: - points_xyz (Tensor): (B, N, 3) xyz coordinates of the features. - features (Tensor, optional): (B, C, N) features of each point. - Default: None. - indices (Tensor, optional): (B, num_point) Index of the features. - Default: None. - target_xyz (Tensor, optional): (B, M, 3) new coords of the outputs. - Default: None. - - Returns: - Tensor: (B, M, 3) where M is the number of points. - New features xyz. - Tensor: (B, M, sum_k(mlps[k][-1])) where M is the number - of points. New feature descriptors. - Tensor: (B, M) where M is the number of points. - Index of the features. 
- """ - new_features_list = [] - - # sample points, (B, num_point, 3), (B, num_point) - new_xyz, indices = self._sample_points(points_xyz, features, indices, - target_xyz) - - for i in range(len(self.groupers)): - xyz = points_xyz - new_features = features - for j in range(len(self.mlps[i])): - # we don't use grouped_features here to avoid large GPU memory - # _, (B, 3, num_point, nsample), (B, num_point, nsample) - _, grouped_xyz, grouped_idx = self.groupers[i](xyz, new_xyz, - new_features) - - # concat xyz as additional features - if self.use_xyz and j == 0: - # (B, C+3, N) - new_features = torch.cat( - (points_xyz.permute(0, 2, 1), new_features), dim=1) - - # (B, out_c, num_point, nsample) - grouped_new_features = self.mlps[i][j]( - (new_features, grouped_xyz, grouped_idx.long()))[0] - - # different from PointNet++ and non CUDA version of PAConv - # CUDA version of PAConv needs to aggregate local features - # every time after it passes through a Conv layer - # in order to transform to valid input shape - # (B, out_c, num_point) - new_features = self._pool_features(grouped_new_features) - - # constrain the points to be grouped for next PAConv layer - # because new_features only contains sampled centers now - # (B, num_point, 3) - xyz = new_xyz - - new_features_list.append(new_features) - - return new_xyz, torch.cat(new_features_list, dim=1), indices - - -@SA_MODULES.register_module() -class PAConvCUDASAModule(PAConvCUDASAModuleMSG): - r"""Point set abstraction module with single-scale grouping (SSG) used in - PAConv networks. - - Replace the non CUDA version PAConv with CUDA implemented PAConv for - efficient computation. See the `paper `_ - for more details. - """ - - def __init__(self, - mlp_channels, - paconv_num_kernels, - num_point=None, - radius=None, - num_sample=None, - norm_cfg=dict(type='BN2d', momentum=0.1), - use_xyz=True, - pool_mod='max', - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - normalize_xyz=False, - paconv_kernel_input='w_neighbor', - scorenet_input='w_neighbor_dist', - scorenet_cfg=dict( - mlp_channels=[8, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConvCUDASAModule, self).__init__( - mlp_channels=[mlp_channels], - paconv_num_kernels=[paconv_num_kernels], - num_point=num_point, - radii=[radius], - sample_nums=[num_sample], - norm_cfg=norm_cfg, - use_xyz=use_xyz, - pool_mod=pool_mod, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - normalize_xyz=normalize_xyz, - paconv_kernel_input=paconv_kernel_input, - scorenet_input=scorenet_input, - scorenet_cfg=scorenet_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/point_fp_module.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/point_fp_module.py deleted file mode 100644 index 1bc833e0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/point_fp_module.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List - -import torch -from mmcv.cnn import ConvModule -from mmcv.ops import three_interpolate, three_nn -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn - - -class PointFPModule(BaseModule): - """Point feature propagation module used in PointNets. - - Propagate the features from one set to another. - - Args: - mlp_channels (list[int]): List of mlp channels. - norm_cfg (dict, optional): Type of normalization method. - Default: dict(type='BN2d'). 
- """ - - def __init__(self, - mlp_channels: List[int], - norm_cfg: dict = dict(type='BN2d'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.fp16_enabled = False - self.mlps = nn.Sequential() - for i in range(len(mlp_channels) - 1): - self.mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - conv_cfg=dict(type='Conv2d'), - norm_cfg=norm_cfg)) - - @force_fp32() - def forward(self, target: torch.Tensor, source: torch.Tensor, - target_feats: torch.Tensor, - source_feats: torch.Tensor) -> torch.Tensor: - """forward. - - Args: - target (Tensor): (B, n, 3) tensor of the xyz positions of - the target features. - source (Tensor): (B, m, 3) tensor of the xyz positions of - the source features. - target_feats (Tensor): (B, C1, n) tensor of the features to be - propagated to. - source_feats (Tensor): (B, C2, m) tensor of features - to be propagated. - - Return: - Tensor: (B, M, N) M = mlp[-1], tensor of the target features. - """ - if source is not None: - dist, idx = three_nn(target, source) - dist_reciprocal = 1.0 / (dist + 1e-8) - norm = torch.sum(dist_reciprocal, dim=2, keepdim=True) - weight = dist_reciprocal / norm - - interpolated_feats = three_interpolate(source_feats, idx, weight) - else: - interpolated_feats = source_feats.expand(*source_feats.size()[0:2], - target.size(1)) - - if target_feats is not None: - new_features = torch.cat([interpolated_feats, target_feats], - dim=1) # (B, C2 + C1, n) - else: - new_features = interpolated_feats - - new_features = new_features.unsqueeze(-1) - new_features = self.mlps(new_features) - - return new_features.squeeze(-1) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/point_sa_module.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/point_sa_module.py deleted file mode 100644 index e33377fc..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/pointnet_modules/point_sa_module.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.ops import GroupAll -from mmcv.ops import PointsSampler as Points_Sampler -from mmcv.ops import QueryAndGroup, gather_points -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.ops import PAConv -from .builder import SA_MODULES - - -class BasePointSAModule(nn.Module): - """Base module for point set abstraction module used in PointNets. - - Args: - num_point (int): Number of points. - radii (list[float]): List of radius in each ball query. - sample_nums (list[int]): Number of samples in each ball query. - mlp_channels (list[list[int]]): Specify of the pointnet before - the global pooling for each scale. - fps_mod (list[str], optional): Type of FPS method, valid mod - ['F-FPS', 'D-FPS', 'FS'], Default: ['D-FPS']. - F-FPS: using feature distances for FPS. - D-FPS: using Euclidean distances of points for FPS. - FS: using F-FPS and D-FPS simultaneously. - fps_sample_range_list (list[int], optional): - Range of points to apply FPS. Default: [-1]. - dilated_group (bool, optional): Whether to use dilated ball query. - Default: False. - use_xyz (bool, optional): Whether to use xyz. - Default: True. - pool_mod (str, optional): Type of pooling method. - Default: 'max_pool'. - normalize_xyz (bool, optional): Whether to normalize local XYZ - with radius. Default: False. - grouper_return_grouped_xyz (bool, optional): Whether to return - grouped xyz in `QueryAndGroup`. Defaults to False. 
- grouper_return_grouped_idx (bool, optional): Whether to return - grouped idx in `QueryAndGroup`. Defaults to False. - """ - - def __init__(self, - num_point, - radii, - sample_nums, - mlp_channels, - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - dilated_group=False, - use_xyz=True, - pool_mod='max', - normalize_xyz=False, - grouper_return_grouped_xyz=False, - grouper_return_grouped_idx=False): - super(BasePointSAModule, self).__init__() - - assert len(radii) == len(sample_nums) == len(mlp_channels) - assert pool_mod in ['max', 'avg'] - assert isinstance(fps_mod, list) or isinstance(fps_mod, tuple) - assert isinstance(fps_sample_range_list, list) or isinstance( - fps_sample_range_list, tuple) - assert len(fps_mod) == len(fps_sample_range_list) - - if isinstance(mlp_channels, tuple): - mlp_channels = list(map(list, mlp_channels)) - self.mlp_channels = mlp_channels - - if isinstance(num_point, int): - self.num_point = [num_point] - elif isinstance(num_point, list) or isinstance(num_point, tuple): - self.num_point = num_point - elif num_point is None: - self.num_point = None - else: - raise NotImplementedError('Error type of num_point!') - - self.pool_mod = pool_mod - self.groupers = nn.ModuleList() - self.mlps = nn.ModuleList() - self.fps_mod_list = fps_mod - self.fps_sample_range_list = fps_sample_range_list - - if self.num_point is not None: - self.points_sampler = Points_Sampler(self.num_point, - self.fps_mod_list, - self.fps_sample_range_list) - else: - self.points_sampler = None - - for i in range(len(radii)): - radius = radii[i] - sample_num = sample_nums[i] - if num_point is not None: - if dilated_group and i != 0: - min_radius = radii[i - 1] - else: - min_radius = 0 - grouper = QueryAndGroup( - radius, - sample_num, - min_radius=min_radius, - use_xyz=use_xyz, - normalize_xyz=normalize_xyz, - return_grouped_xyz=grouper_return_grouped_xyz, - return_grouped_idx=grouper_return_grouped_idx) - else: - grouper = GroupAll(use_xyz) - self.groupers.append(grouper) - - def _sample_points(self, points_xyz, features, indices, target_xyz): - """Perform point sampling based on inputs. - - If `indices` is specified, directly sample corresponding points. - Else if `target_xyz` is specified, use is as sampled points. - Otherwise sample points using `self.points_sampler`. - - Args: - points_xyz (Tensor): (B, N, 3) xyz coordinates of the features. - features (Tensor): (B, C, N) features of each point. - indices (Tensor): (B, num_point) Index of the features. - target_xyz (Tensor): (B, M, 3) new_xyz coordinates of the outputs. - - Returns: - Tensor: (B, num_point, 3) sampled xyz coordinates of points. - Tensor: (B, num_point) sampled points' index. - """ - xyz_flipped = points_xyz.transpose(1, 2).contiguous() - if indices is not None: - assert (indices.shape[1] == self.num_point[0]) - new_xyz = gather_points(xyz_flipped, indices).transpose( - 1, 2).contiguous() if self.num_point is not None else None - elif target_xyz is not None: - new_xyz = target_xyz.contiguous() - else: - if self.num_point is not None: - indices = self.points_sampler(points_xyz, features) - new_xyz = gather_points(xyz_flipped, - indices).transpose(1, 2).contiguous() - else: - new_xyz = None - - return new_xyz, indices - - def _pool_features(self, features): - """Perform feature aggregation using pooling operation. - - Args: - features (torch.Tensor): (B, C, N, K) - Features of locally grouped points before pooling. - - Returns: - torch.Tensor: (B, C, N) - Pooled features aggregating local information. 
- """ - if self.pool_mod == 'max': - # (B, C, N, 1) - new_features = F.max_pool2d( - features, kernel_size=[1, features.size(3)]) - elif self.pool_mod == 'avg': - # (B, C, N, 1) - new_features = F.avg_pool2d( - features, kernel_size=[1, features.size(3)]) - else: - raise NotImplementedError - - return new_features.squeeze(-1).contiguous() - - def forward( - self, - points_xyz, - features=None, - indices=None, - target_xyz=None, - ): - """forward. - - Args: - points_xyz (Tensor): (B, N, 3) xyz coordinates of the features. - features (Tensor, optional): (B, C, N) features of each point. - Default: None. - indices (Tensor, optional): (B, num_point) Index of the features. - Default: None. - target_xyz (Tensor, optional): (B, M, 3) new coords of the outputs. - Default: None. - - Returns: - Tensor: (B, M, 3) where M is the number of points. - New features xyz. - Tensor: (B, M, sum_k(mlps[k][-1])) where M is the number - of points. New feature descriptors. - Tensor: (B, M) where M is the number of points. - Index of the features. - """ - new_features_list = [] - - # sample points, (B, num_point, 3), (B, num_point) - new_xyz, indices = self._sample_points(points_xyz, features, indices, - target_xyz) - - for i in range(len(self.groupers)): - # grouped_results may contain: - # - grouped_features: (B, C, num_point, nsample) - # - grouped_xyz: (B, 3, num_point, nsample) - # - grouped_idx: (B, num_point, nsample) - grouped_results = self.groupers[i](points_xyz, new_xyz, features) - - # (B, mlp[-1], num_point, nsample) - new_features = self.mlps[i](grouped_results) - - # this is a bit hack because PAConv outputs two values - # we take the first one as feature - if isinstance(self.mlps[i][0], PAConv): - assert isinstance(new_features, tuple) - new_features = new_features[0] - - # (B, mlp[-1], num_point) - new_features = self._pool_features(new_features) - new_features_list.append(new_features) - - return new_xyz, torch.cat(new_features_list, dim=1), indices - - -@SA_MODULES.register_module() -class PointSAModuleMSG(BasePointSAModule): - """Point set abstraction module with multi-scale grouping (MSG) used in - PointNets. - - Args: - num_point (int): Number of points. - radii (list[float]): List of radius in each ball query. - sample_nums (list[int]): Number of samples in each ball query. - mlp_channels (list[list[int]]): Specify of the pointnet before - the global pooling for each scale. - fps_mod (list[str], optional): Type of FPS method, valid mod - ['F-FPS', 'D-FPS', 'FS'], Default: ['D-FPS']. - F-FPS: using feature distances for FPS. - D-FPS: using Euclidean distances of points for FPS. - FS: using F-FPS and D-FPS simultaneously. - fps_sample_range_list (list[int], optional): Range of points to - apply FPS. Default: [-1]. - dilated_group (bool, optional): Whether to use dilated ball query. - Default: False. - norm_cfg (dict, optional): Type of normalization method. - Default: dict(type='BN2d'). - use_xyz (bool, optional): Whether to use xyz. - Default: True. - pool_mod (str, optional): Type of pooling method. - Default: 'max_pool'. - normalize_xyz (bool, optional): Whether to normalize local XYZ - with radius. Default: False. - bias (bool | str, optional): If specified as `auto`, it will be - decided by `norm_cfg`. `bias` will be set as True if - `norm_cfg` is None, otherwise False. Default: 'auto'. 
- """ - - def __init__(self, - num_point, - radii, - sample_nums, - mlp_channels, - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - dilated_group=False, - norm_cfg=dict(type='BN2d'), - use_xyz=True, - pool_mod='max', - normalize_xyz=False, - bias='auto'): - super(PointSAModuleMSG, self).__init__( - num_point=num_point, - radii=radii, - sample_nums=sample_nums, - mlp_channels=mlp_channels, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - dilated_group=dilated_group, - use_xyz=use_xyz, - pool_mod=pool_mod, - normalize_xyz=normalize_xyz) - - for i in range(len(self.mlp_channels)): - mlp_channel = self.mlp_channels[i] - if use_xyz: - mlp_channel[0] += 3 - - mlp = nn.Sequential() - for i in range(len(mlp_channel) - 1): - mlp.add_module( - f'layer{i}', - ConvModule( - mlp_channel[i], - mlp_channel[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - conv_cfg=dict(type='Conv2d'), - norm_cfg=norm_cfg, - bias=bias)) - self.mlps.append(mlp) - - -@SA_MODULES.register_module() -class PointSAModule(PointSAModuleMSG): - """Point set abstraction module with single-scale grouping (SSG) used in - PointNets. - - Args: - mlp_channels (list[int]): Specify of the pointnet before - the global pooling for each scale. - num_point (int, optional): Number of points. - Default: None. - radius (float, optional): Radius to group with. - Default: None. - num_sample (int, optional): Number of samples in each ball query. - Default: None. - norm_cfg (dict, optional): Type of normalization method. - Default: dict(type='BN2d'). - use_xyz (bool, optional): Whether to use xyz. - Default: True. - pool_mod (str, optional): Type of pooling method. - Default: 'max_pool'. - fps_mod (list[str], optional): Type of FPS method, valid mod - ['F-FPS', 'D-FPS', 'FS'], Default: ['D-FPS']. - fps_sample_range_list (list[int], optional): Range of points - to apply FPS. Default: [-1]. - normalize_xyz (bool, optional): Whether to normalize local XYZ - with radius. Default: False. - """ - - def __init__(self, - mlp_channels, - num_point=None, - radius=None, - num_sample=None, - norm_cfg=dict(type='BN2d'), - use_xyz=True, - pool_mod='max', - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - normalize_xyz=False): - super(PointSAModule, self).__init__( - mlp_channels=[mlp_channels], - num_point=num_point, - radii=[radius], - sample_nums=[num_sample], - norm_cfg=norm_cfg, - use_xyz=use_xyz, - pool_mod=pool_mod, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - normalize_xyz=normalize_xyz) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/sparse_block.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/sparse_block.py deleted file mode 100644 index 03b18e2e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/sparse_block.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import build_conv_layer, build_norm_layer -from torch import nn - -from mmdet.models.backbones.resnet import BasicBlock, Bottleneck -from .spconv import IS_SPCONV2_AVAILABLE - -if IS_SPCONV2_AVAILABLE: - from spconv.pytorch import SparseModule, SparseSequential -else: - from mmcv.ops import SparseModule, SparseSequential - - -def replace_feature(out, new_features): - if 'replace_feature' in out.__dir__(): - # spconv 2.x behaviour - return out.replace_feature(new_features) - else: - out.features = new_features - return out - - -class SparseBottleneck(Bottleneck, SparseModule): - """Sparse bottleneck block for PartA^2. - - Bottleneck block implemented with submanifold sparse convolution. 
- - Args: - inplanes (int): inplanes of block. - planes (int): planes of block. - stride (int, optional): stride of the first block. Default: 1. - downsample (Module, optional): down sample module for block. - conv_cfg (dict, optional): dictionary to construct and config conv - layer. Default: None. - norm_cfg (dict, optional): dictionary to construct and config norm - layer. Default: dict(type='BN'). - """ - - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - downsample=None, - conv_cfg=None, - norm_cfg=None): - - SparseModule.__init__(self) - Bottleneck.__init__( - self, - inplanes, - planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg) - - def forward(self, x): - identity = x.features - - out = self.conv1(x) - out = replace_feature(out, self.bn1(out.features)) - out = replace_feature(out, self.relu(out.features)) - - out = self.conv2(out) - out = replace_feature(out, self.bn2(out.features)) - out = replace_feature(out, self.relu(out.features)) - - out = self.conv3(out) - out = replace_feature(out, self.bn3(out.features)) - - if self.downsample is not None: - identity = self.downsample(x) - - out = replace_feature(out, out.features + identity) - out = replace_feature(out, self.relu(out.features)) - - return out - - -class SparseBasicBlock(BasicBlock, SparseModule): - """Sparse basic block for PartA^2. - - Sparse basic block implemented with submanifold sparse convolution. - - Args: - inplanes (int): inplanes of block. - planes (int): planes of block. - stride (int, optional): stride of the first block. Default: 1. - downsample (Module, optional): down sample module for block. - conv_cfg (dict, optional): dictionary to construct and config conv - layer. Default: None. - norm_cfg (dict, optional): dictionary to construct and config norm - layer. Default: dict(type='BN'). - """ - - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - downsample=None, - conv_cfg=None, - norm_cfg=None): - SparseModule.__init__(self) - BasicBlock.__init__( - self, - inplanes, - planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg) - - def forward(self, x): - identity = x.features - - assert x.features.dim() == 2, f'x.features.dim()={x.features.dim()}' - out = self.conv1(x) - out = replace_feature(out, self.norm1(out.features)) - out = replace_feature(out, self.relu(out.features)) - - out = self.conv2(out) - out = replace_feature(out, self.norm2(out.features)) - - if self.downsample is not None: - identity = self.downsample(x) - - out = replace_feature(out, out.features + identity) - out = replace_feature(out, self.relu(out.features)) - - return out - - -def make_sparse_convmodule(in_channels, - out_channels, - kernel_size, - indice_key, - stride=1, - padding=0, - conv_type='SubMConv3d', - norm_cfg=None, - order=('conv', 'norm', 'act')): - """Make sparse convolution module. - - Args: - in_channels (int): the number of input channels - out_channels (int): the number of out channels - kernel_size (int|tuple(int)): kernel size of convolution - indice_key (str): the indice key used for sparse tensor - stride (int|tuple(int)): the stride of convolution - padding (int or list[int]): the padding number of input - conv_type (str): sparse conv type in spconv - norm_cfg (dict[str]): config of normalization layer - order (tuple[str]): The order of conv/norm/activation layers. It is a - sequence of "conv", "norm" and "act". Common examples are - ("conv", "norm", "act") and ("act", "conv", "norm"). 
- - Returns: - spconv.SparseSequential: sparse convolution module. - """ - assert isinstance(order, tuple) and len(order) <= 3 - assert set(order) | {'conv', 'norm', 'act'} == {'conv', 'norm', 'act'} - - conv_cfg = dict(type=conv_type, indice_key=indice_key) - - layers = list() - for layer in order: - if layer == 'conv': - if conv_type not in [ - 'SparseInverseConv3d', 'SparseInverseConv2d', - 'SparseInverseConv1d' - ]: - layers.append( - build_conv_layer( - conv_cfg, - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - bias=False)) - else: - layers.append( - build_conv_layer( - conv_cfg, - in_channels, - out_channels, - kernel_size, - bias=False)) - elif layer == 'norm': - layers.append(build_norm_layer(norm_cfg, out_channels)[1]) - elif layer == 'act': - layers.append(nn.ReLU(inplace=True)) - - layers = SparseSequential(*layers) - return layers diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/__init__.py deleted file mode 100644 index 561e5024..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .overwrite_spconv.write_spconv2 import register_spconv2 - -try: - import spconv -except ImportError: - IS_SPCONV2_AVAILABLE = False -else: - if hasattr(spconv, '__version__') and spconv.__version__ >= '2.0.0': - IS_SPCONV2_AVAILABLE = register_spconv2() - else: - IS_SPCONV2_AVAILABLE = False - -__all__ = ['IS_SPCONV2_AVAILABLE'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/overwrite_spconv/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/overwrite_spconv/__init__.py deleted file mode 100644 index 2e93d9ca..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/overwrite_spconv/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .write_spconv2 import register_spconv2 - -__all__ = ['register_spconv2'] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/overwrite_spconv/write_spconv2.py b/cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/overwrite_spconv/write_spconv2.py deleted file mode 100644 index 237051eb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/ops/spconv/overwrite_spconv/write_spconv2.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import itertools - -from mmcv.cnn.bricks.registry import CONV_LAYERS -from torch.nn.parameter import Parameter - - -def register_spconv2(): - """This func registers spconv2.0 spconv ops to overwrite the default mmcv - spconv ops.""" - try: - from spconv.pytorch import (SparseConv2d, SparseConv3d, SparseConv4d, - SparseConvTranspose2d, - SparseConvTranspose3d, SparseInverseConv2d, - SparseInverseConv3d, SparseModule, - SubMConv2d, SubMConv3d, SubMConv4d) - except ImportError: - return False - else: - CONV_LAYERS._register_module(SparseConv2d, 'SparseConv2d', force=True) - CONV_LAYERS._register_module(SparseConv3d, 'SparseConv3d', force=True) - CONV_LAYERS._register_module(SparseConv4d, 'SparseConv4d', force=True) - - CONV_LAYERS._register_module( - SparseConvTranspose2d, 'SparseConvTranspose2d', force=True) - CONV_LAYERS._register_module( - SparseConvTranspose3d, 'SparseConvTranspose3d', force=True) - - CONV_LAYERS._register_module( - SparseInverseConv2d, 'SparseInverseConv2d', force=True) - CONV_LAYERS._register_module( - SparseInverseConv3d, 'SparseInverseConv3d', force=True) - - CONV_LAYERS._register_module(SubMConv2d, 'SubMConv2d', force=True) - CONV_LAYERS._register_module(SubMConv3d, 'SubMConv3d', force=True) - CONV_LAYERS._register_module(SubMConv4d, 'SubMConv4d', force=True) - SparseModule._load_from_state_dict = _load_from_state_dict - SparseModule._save_to_state_dict = _save_to_state_dict - return True - - -def _save_to_state_dict(self, destination, prefix, keep_vars): - """Rewrite this func to compat the convolutional kernel weights between - spconv 1.x in MMCV and 2.x in spconv2.x. - - Kernel weights in MMCV spconv has shape in (D,H,W,in_channel,out_channel) , - while those in spcon2.x is in (out_channel,D,H,W,in_channel). - """ - for name, param in self._parameters.items(): - if param is not None: - param = param if keep_vars else param.detach() - if name == 'weight': - dims = list(range(1, len(param.shape))) + [0] - param = param.permute(*dims) - destination[prefix + name] = param - for name, buf in self._buffers.items(): - if buf is not None and name not in self._non_persistent_buffers_set: - destination[prefix + name] = buf if keep_vars else buf.detach() - - -def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Rewrite this func to compat the convolutional kernel weights between - spconv 1.x in MMCV and 2.x in spconv2.x. - - Kernel weights in MMCV spconv has shape in (D,H,W,in_channel,out_channel) , - while those in spcon2.x is in (out_channel,D,H,W,in_channel). 
- """ - for hook in self._load_state_dict_pre_hooks.values(): - hook(state_dict, prefix, local_metadata, strict, missing_keys, - unexpected_keys, error_msgs) - - local_name_params = itertools.chain(self._parameters.items(), - self._buffers.items()) - local_state = {k: v.data for k, v in local_name_params if v is not None} - - for name, param in local_state.items(): - key = prefix + name - if key in state_dict: - input_param = state_dict[key] - - # Backward compatibility: loading 1-dim tensor from - # 0.3.* to version 0.4+ - if len(param.shape) == 0 and len(input_param.shape) == 1: - input_param = input_param[0] - dims = [len(input_param.shape) - 1] + list( - range(len(input_param.shape) - 1)) - input_param = input_param.permute(*dims) - if input_param.shape != param.shape: - # local shape should match the one in checkpoint - error_msgs.append( - f'size mismatch for {key}: copying a param with ' - f'shape {key, input_param.shape} from checkpoint,' - f'the shape in current model is {param.shape}.') - continue - - if isinstance(input_param, Parameter): - # backwards compatibility for serialized parameters - input_param = input_param.data - try: - param.copy_(input_param) - except Exception: - error_msgs.append( - f'While copying the parameter named "{key}", whose ' - f'dimensions in the model are {param.size()} and whose ' - f'dimensions in the checkpoint are {input_param.size()}.') - elif strict: - missing_keys.append(key) - - if strict: - for key, input_param in state_dict.items(): - if key.startswith(prefix): - input_name = key[len(prefix):] - input_name = input_name.split( - '.', 1)[0] # get the name of param/buffer/child - if input_name not in self._modules \ - and input_name not in local_state: - unexpected_keys.append(key) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmdet3d/utils/__init__.py deleted file mode 100644 index ad599618..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry, build_from_cfg, print_log - -from .collect_env import collect_env -from .compat_cfg import compat_cfg -from .logger import get_root_logger -from .misc import find_latest_checkpoint -from .setup_env import setup_multi_processes - -__all__ = [ - 'Registry', 'build_from_cfg', 'get_root_logger', 'collect_env', - 'print_log', 'setup_multi_processes', 'find_latest_checkpoint', - 'compat_cfg' -] diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/collect_env.py b/cv/3d_detection/paconv/pytorch/mmdet3d/utils/collect_env.py deleted file mode 100644 index c10d01a0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/collect_env.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-from mmcv.utils import collect_env as collect_base_env -from mmcv.utils import get_git_hash - -import mmdet -import mmdet3d -import mmseg -# from mmdet3d.ops.spconv import IS_SPCONV2_AVAILABLE - - -def collect_env(): - """Collect the information of the running environments.""" - env_info = collect_base_env() - env_info['MMDetection'] = mmdet.__version__ - env_info['MMSegmentation'] = mmseg.__version__ - env_info['MMDetection3D'] = mmdet3d.__version__ + '+' + get_git_hash()[:7] - # env_info['spconv2.0'] = IS_SPCONV2_AVAILABLE - return env_info - - -if __name__ == '__main__': - for name, val in collect_env().items(): - print(f'{name}: {val}') diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/compat_cfg.py b/cv/3d_detection/paconv/pytorch/mmdet3d/utils/compat_cfg.py deleted file mode 100644 index 05aa37dc..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/compat_cfg.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -from mmcv import ConfigDict - - -def compat_cfg(cfg): - """This function would modify some filed to keep the compatibility of - config. - - For example, it will move some args which will be deprecated to the correct - fields. - """ - cfg = copy.deepcopy(cfg) - cfg = compat_imgs_per_gpu(cfg) - cfg = compat_loader_args(cfg) - cfg = compat_runner_args(cfg) - return cfg - - -def compat_runner_args(cfg): - if 'runner' not in cfg: - cfg.runner = ConfigDict({ - 'type': 'EpochBasedRunner', - 'max_epochs': cfg.total_epochs - }) - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - else: - if 'total_epochs' in cfg: - assert cfg.total_epochs == cfg.runner.max_epochs - return cfg - - -def compat_imgs_per_gpu(cfg): - cfg = copy.deepcopy(cfg) - if 'imgs_per_gpu' in cfg.data: - warnings.warn('"imgs_per_gpu" is deprecated in MMDet V2.0. ' - 'Please use "samples_per_gpu" instead') - if 'samples_per_gpu' in cfg.data: - warnings.warn( - f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and ' - f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"' - f'={cfg.data.imgs_per_gpu} is used in this experiments') - else: - warnings.warn('Automatically set "samples_per_gpu"="imgs_per_gpu"=' - f'{cfg.data.imgs_per_gpu} in this experiments') - cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu - return cfg - - -def compat_loader_args(cfg): - """Deprecated sample_per_gpu in cfg.data.""" - - cfg = copy.deepcopy(cfg) - if 'train_dataloader' not in cfg.data: - cfg.data['train_dataloader'] = ConfigDict() - if 'val_dataloader' not in cfg.data: - cfg.data['val_dataloader'] = ConfigDict() - if 'test_dataloader' not in cfg.data: - cfg.data['test_dataloader'] = ConfigDict() - - # special process for train_dataloader - if 'samples_per_gpu' in cfg.data: - - samples_per_gpu = cfg.data.pop('samples_per_gpu') - assert 'samples_per_gpu' not in \ - cfg.data.train_dataloader, ('`samples_per_gpu` are set ' - 'in `data` field and ` ' - 'data.train_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.train_dataloader`. ') - cfg.data.train_dataloader['samples_per_gpu'] = samples_per_gpu - - if 'persistent_workers' in cfg.data: - - persistent_workers = cfg.data.pop('persistent_workers') - assert 'persistent_workers' not in \ - cfg.data.train_dataloader, ('`persistent_workers` are set ' - 'in `data` field and ` ' - 'data.train_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.train_dataloader`. 
') - cfg.data.train_dataloader['persistent_workers'] = persistent_workers - - if 'workers_per_gpu' in cfg.data: - - workers_per_gpu = cfg.data.pop('workers_per_gpu') - cfg.data.train_dataloader['workers_per_gpu'] = workers_per_gpu - cfg.data.val_dataloader['workers_per_gpu'] = workers_per_gpu - cfg.data.test_dataloader['workers_per_gpu'] = workers_per_gpu - - # special process for val_dataloader - if 'samples_per_gpu' in cfg.data.val: - # keep default value of `sample_per_gpu` is 1 - assert 'samples_per_gpu' not in \ - cfg.data.val_dataloader, ('`samples_per_gpu` are set ' - 'in `data.val` field and ` ' - 'data.val_dataloader` at ' - 'the same time. ' - 'Please only set it in ' - '`data.val_dataloader`. ') - cfg.data.val_dataloader['samples_per_gpu'] = \ - cfg.data.val.pop('samples_per_gpu') - # special process for val_dataloader - - # in case the test dataset is concatenated - if isinstance(cfg.data.test, dict): - if 'samples_per_gpu' in cfg.data.test: - assert 'samples_per_gpu' not in \ - cfg.data.test_dataloader, ('`samples_per_gpu` are set ' - 'in `data.test` field and ` ' - 'data.test_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.test_dataloader`. ') - - cfg.data.test_dataloader['samples_per_gpu'] = \ - cfg.data.test.pop('samples_per_gpu') - - elif isinstance(cfg.data.test, list): - for ds_cfg in cfg.data.test: - if 'samples_per_gpu' in ds_cfg: - assert 'samples_per_gpu' not in \ - cfg.data.test_dataloader, ('`samples_per_gpu` are set ' - 'in `data.test` field and ` ' - 'data.test_dataloader` at' - ' the same time. ' - 'Please only set it in ' - '`data.test_dataloader`. ') - samples_per_gpu = max( - [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test]) - cfg.data.test_dataloader['samples_per_gpu'] = samples_per_gpu - - return cfg diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/logger.py b/cv/3d_detection/paconv/pytorch/mmdet3d/utils/logger.py deleted file mode 100644 index 14295d1a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/logger.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -from mmcv.utils import get_logger - - -def get_root_logger(log_file=None, log_level=logging.INFO, name='mmdet3d'): - """Get root logger and add a keyword filter to it. - - The logger will be initialized if it has not been initialized. By default a - StreamHandler will be added. If `log_file` is specified, a FileHandler will - also be added. The name of the root logger is the top-level package name, - e.g., "mmdet3d". - - Args: - log_file (str, optional): File path of log. Defaults to None. - log_level (int, optional): The level of logger. - Defaults to logging.INFO. - name (str, optional): The name of the root logger, also used as a - filter keyword. Defaults to 'mmdet3d'. - - Returns: - :obj:`logging.Logger`: The obtained logger - """ - logger = get_logger(name=name, log_file=log_file, log_level=log_level) - - # add a logging filter - logging_filter = logging.Filter(name) - logging_filter.filter = lambda record: record.find(name) != -1 - - return logger diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/misc.py b/cv/3d_detection/paconv/pytorch/mmdet3d/utils/misc.py deleted file mode 100644 index 08af0484..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/misc.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import glob -import os.path as osp -import warnings - - -def find_latest_checkpoint(path, suffix='pth'): - """Find the latest checkpoint from the working directory. This function is - copied from mmdetection. - - Args: - path(str): The path to find checkpoints. - suffix(str): File extension. - Defaults to pth. - - Returns: - latest_path(str | None): File path of the latest checkpoint. - References: - .. [1] https://github.com/microsoft/SoftTeacher - /blob/main/ssod/utils/patch.py - """ - if not osp.exists(path): - warnings.warn('The path of checkpoints does not exist.') - return None - if osp.exists(osp.join(path, f'latest.{suffix}')): - return osp.join(path, f'latest.{suffix}') - - checkpoints = glob.glob(osp.join(path, f'*.{suffix}')) - if len(checkpoints) == 0: - warnings.warn('There are no checkpoints in the path.') - return None - latest = -1 - latest_path = None - for checkpoint in checkpoints: - count = int(osp.basename(checkpoint).split('_')[-1].split('.')[0]) - if count > latest: - latest = count - latest_path = checkpoint - return latest_path diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/setup_env.py b/cv/3d_detection/paconv/pytorch/mmdet3d/utils/setup_env.py deleted file mode 100644 index 8812cb71..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/utils/setup_env.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import platform -import warnings - -import cv2 -from torch import multiprocessing as mp - - -def setup_multi_processes(cfg): - """Setup multi-processing environment variables.""" - # set multi-process start method as `fork` to speed up the training - if platform.system() != 'Windows': - mp_start_method = cfg.get('mp_start_method', 'fork') - current_method = mp.get_start_method(allow_none=True) - if current_method is not None and current_method != mp_start_method: - warnings.warn( - f'Multi-processing start method `{mp_start_method}` is ' - f'different from the previous setting `{current_method}`.' - f'It will be force set to `{mp_start_method}`. 
You can change ' - f'this behavior by changing `mp_start_method` in your config.') - mp.set_start_method(mp_start_method, force=True) - - # disable opencv multithreading to avoid system being overloaded - opencv_num_threads = cfg.get('opencv_num_threads', 0) - cv2.setNumThreads(opencv_num_threads) - - # setup OMP threads - # This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py # noqa - workers_per_gpu = cfg.data.get('workers_per_gpu', 1) - if 'train_dataloader' in cfg.data: - workers_per_gpu = \ - max(cfg.data.train_dataloader.get('workers_per_gpu', 1), - workers_per_gpu) - - if 'OMP_NUM_THREADS' not in os.environ and workers_per_gpu > 1: - omp_num_threads = 1 - warnings.warn( - f'Setting OMP_NUM_THREADS environment variable for each process ' - f'to be {omp_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['OMP_NUM_THREADS'] = str(omp_num_threads) - - # setup MKL threads - if 'MKL_NUM_THREADS' not in os.environ and workers_per_gpu > 1: - mkl_num_threads = 1 - warnings.warn( - f'Setting MKL_NUM_THREADS environment variable for each process ' - f'to be {mkl_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['MKL_NUM_THREADS'] = str(mkl_num_threads) diff --git a/cv/3d_detection/paconv/pytorch/mmdet3d/version.py b/cv/3d_detection/paconv/pytorch/mmdet3d/version.py deleted file mode 100644 index c95fbedd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmdet3d/version.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. - -__version__ = '1.0.0rc3' -short_version = __version__ - - -def parse_version_info(version_str): - version_info = [] - for x in version_str.split('.'): - if x.isdigit(): - version_info.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - version_info.append(int(patch_version[0])) - version_info.append(f'rc{patch_version[1]}') - return tuple(version_info) - - -version_info = parse_version_info(__version__) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/__init__.py deleted file mode 100644 index eb0a5f4e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/__init__.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -from packaging.version import parse - -from .version import __version__, version_info - -MMCV_MIN = '1.3.13' -MMCV_MAX = '1.6.0' - - -def digit_version(version_str: str, length: int = 4): - """Convert a version string into a tuple of integers. - - This method is usually used for comparing two versions. For pre-release - versions: alpha < beta < rc. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. - - Returns: - tuple[int]: The version info in digits (integers). 
- """ - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - mapping = {'a': -3, 'b': -2, 'rc': -1} - val = -4 - # version.pre can be None - if version.pre: - if version.pre[0] not in mapping: - warnings.warn(f'unknown prerelease version {version.pre[0]}, ' - 'version checking may go wrong') - else: - val = mapping[version.pre[0]] - release.extend([val, version.pre[-1]]) - else: - release.extend([val, 0]) - - elif version.is_postrelease: - release.extend([1, version.post]) - else: - release.extend([0, 0]) - return tuple(release) - - -mmcv_min_version = digit_version(MMCV_MIN) -mmcv_max_version = digit_version(MMCV_MAX) -mmcv_version = digit_version(mmcv.__version__) - - -# assert (mmcv_min_version <= mmcv_version <= mmcv_max_version), \ -# f'MMCV=={mmcv.__version__} is used but incompatible. ' \ -# f'Please install mmcv>={mmcv_min_version}, <={mmcv_max_version}.' - -__all__ = ['__version__', 'version_info', 'digit_version'] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/apis/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/apis/__init__.py deleted file mode 100644 index c6881805..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/apis/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .inference import inference_segmentor, init_segmentor, show_result_pyplot -from .test import multi_gpu_test, single_gpu_test -from .train import (get_root_logger, init_random_seed, set_random_seed, - train_segmentor) - -__all__ = [ - 'get_root_logger', 'set_random_seed', 'train_segmentor', 'init_segmentor', - 'inference_segmentor', 'multi_gpu_test', 'single_gpu_test', - 'show_result_pyplot', 'init_random_seed' -] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/apis/inference.py b/cv/3d_detection/paconv/pytorch/mmseg/apis/inference.py deleted file mode 100644 index 90694380..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/apis/inference.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import matplotlib.pyplot as plt -import mmcv -import torch -from mmcv.parallel import collate, scatter -from mmcv.runner import load_checkpoint - -from mmseg.datasets.pipelines import Compose -from mmseg.models import build_segmentor - - -def init_segmentor(config, checkpoint=None, device='cuda:0'): - """Initialize a segmentor from config file. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str, optional) CPU/CUDA device option. Default 'cuda:0'. - Use 'cpu' for loading model on CPU. - Returns: - nn.Module: The constructed segmentor. 
- """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - 'but got {}'.format(type(config))) - config.model.pretrained = None - config.model.train_cfg = None - model = build_segmentor(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - model.CLASSES = checkpoint['meta']['CLASSES'] - model.PALETTE = checkpoint['meta']['PALETTE'] - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -class LoadImage: - """A simple pipeline to load image.""" - - def __call__(self, results): - """Call function to load images into results. - - Args: - results (dict): A result dict contains the file name - of the image to be read. - - Returns: - dict: ``results`` will be returned containing loaded image. - """ - - if isinstance(results['img'], str): - results['filename'] = results['img'] - results['ori_filename'] = results['img'] - else: - results['filename'] = None - results['ori_filename'] = None - img = mmcv.imread(results['img']) - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - return results - - -def inference_segmentor(model, img): - """Inference image(s) with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - imgs (str/ndarray or list[str/ndarray]): Either image files or loaded - images. - - Returns: - (list[Tensor]): The segmentation result. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = [LoadImage()] + cfg.data.test.pipeline[1:] - test_pipeline = Compose(test_pipeline) - # prepare data - data = dict(img=img) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - data['img_metas'] = [i.data[0] for i in data['img_metas']] - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result - - -def show_result_pyplot(model, - img, - result, - palette=None, - fig_size=(15, 10), - opacity=0.5, - title='', - block=True): - """Visualize the segmentation results on the image. - - Args: - model (nn.Module): The loaded segmentor. - img (str or np.ndarray): Image filename or loaded image. - result (list): The segmentation result. - palette (list[list[int]]] | None): The palette of segmentation - map. If None is given, random palette will be generated. - Default: None - fig_size (tuple): Figure size of the pyplot figure. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - title (str): The title of pyplot figure. - Default is ''. - block (bool): Whether to block the pyplot figure. - Default is True. - """ - if hasattr(model, 'module'): - model = model.module - img = model.show_result( - img, result, palette=palette, show=False, opacity=opacity) - plt.figure(figsize=fig_size) - plt.imshow(mmcv.bgr2rgb(img)) - plt.title(title) - plt.tight_layout() - plt.show(block=block) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/apis/test.py b/cv/3d_detection/paconv/pytorch/mmseg/apis/test.py deleted file mode 100644 index cc4fcc97..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/apis/test.py +++ /dev/null @@ -1,233 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import os.path as osp -import tempfile -import warnings - -import mmcv -import numpy as np -import torch -from mmcv.engine import collect_results_cpu, collect_results_gpu -from mmcv.image import tensor2imgs -from mmcv.runner import get_dist_info - - -def np2tmp(array, temp_file_name=None, tmpdir=None): - """Save ndarray to local numpy file. - - Args: - array (ndarray): Ndarray to save. - temp_file_name (str): Numpy file name. If 'temp_file_name=None', this - function will generate a file name with tempfile.NamedTemporaryFile - to save ndarray. Default: None. - tmpdir (str): Temporary directory to save Ndarray files. Default: None. - Returns: - str: The numpy file name. - """ - - if temp_file_name is None: - temp_file_name = tempfile.NamedTemporaryFile( - suffix='.npy', delete=False, dir=tmpdir).name - np.save(temp_file_name, array) - return temp_file_name - - -def single_gpu_test(model, - data_loader, - show=False, - out_dir=None, - efficient_test=False, - opacity=0.5, - pre_eval=False, - format_only=False, - format_args={}): - """Test with single GPU by progressive mode. - - Args: - model (nn.Module): Model to be tested. - data_loader (utils.data.Dataloader): Pytorch data loader. - show (bool): Whether show results during inference. Default: False. - out_dir (str, optional): If specified, the results will be dumped into - the directory to save output results. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Mutually exclusive with - pre_eval and format_results. Default: False. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - pre_eval (bool): Use dataset.pre_eval() function to generate - pre_results for metric evaluation. Mutually exclusive with - efficient_test and format_results. Default: False. - format_only (bool): Only format result for results commit. - Mutually exclusive with pre_eval and efficient_test. - Default: False. - format_args (dict): The args for format_results. Default: {}. - Returns: - list: list of evaluation pre-results or list of save file names. - """ - if efficient_test: - warnings.warn( - 'DeprecationWarning: ``efficient_test`` will be deprecated, the ' - 'evaluation is CPU memory friendly with pre_eval=True') - mmcv.mkdir_or_exist('.efficient_test') - # when none of them is set true, return segmentation results as - # a list of np.array. - assert [efficient_test, pre_eval, format_only].count(True) <= 1, \ - '``efficient_test``, ``pre_eval`` and ``format_only`` are mutually ' \ - 'exclusive, only one of them could be true .' - - model.eval() - results = [] - dataset = data_loader.dataset - prog_bar = mmcv.ProgressBar(len(dataset)) - # The pipeline about how the data_loader retrieval samples from dataset: - # sampler -> batch_sampler -> indices - # The indices are passed to dataset_fetcher to get data from dataset. 
- # data_fetcher -> collate_fn(dataset[index]) -> data_sample - # we use batch_sampler to get correct data idx - loader_indices = data_loader.batch_sampler - - for batch_indices, data in zip(loader_indices, data_loader): - with torch.no_grad(): - result = model(return_loss=False, **data) - - if show or out_dir: - img_tensor = data['img'][0] - img_metas = data['img_metas'][0].data[0] - imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg']) - assert len(imgs) == len(img_metas) - - for img, img_meta in zip(imgs, img_metas): - h, w, _ = img_meta['img_shape'] - img_show = img[:h, :w, :] - - ori_h, ori_w = img_meta['ori_shape'][:-1] - img_show = mmcv.imresize(img_show, (ori_w, ori_h)) - - if out_dir: - out_file = osp.join(out_dir, img_meta['ori_filename']) - else: - out_file = None - - model.module.show_result( - img_show, - result, - palette=dataset.PALETTE, - show=show, - out_file=out_file, - opacity=opacity) - - if efficient_test: - result = [np2tmp(_, tmpdir='.efficient_test') for _ in result] - - if format_only: - result = dataset.format_results( - result, indices=batch_indices, **format_args) - if pre_eval: - # TODO: adapt samples_per_gpu > 1. - # only samples_per_gpu=1 valid now - result = dataset.pre_eval(result, indices=batch_indices) - results.extend(result) - else: - results.extend(result) - - batch_size = len(result) - for _ in range(batch_size): - prog_bar.update() - - return results - - -def multi_gpu_test(model, - data_loader, - tmpdir=None, - gpu_collect=False, - efficient_test=False, - pre_eval=False, - format_only=False, - format_args={}): - """Test model with multiple gpus by progressive mode. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting 'gpu_collect=True' - it encodes results to gpu tensors and use gpu communication for results - collection. On cpu mode it saves the results on different gpus to 'tmpdir' - and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. - data_loader (utils.data.Dataloader): Pytorch data loader. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. The same path is used for efficient - test. Default: None. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - Default: False. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Mutually exclusive with - pre_eval and format_results. Default: False. - pre_eval (bool): Use dataset.pre_eval() function to generate - pre_results for metric evaluation. Mutually exclusive with - efficient_test and format_results. Default: False. - format_only (bool): Only format result for results commit. - Mutually exclusive with pre_eval and efficient_test. - Default: False. - format_args (dict): The args for format_results. Default: {}. - - Returns: - list: list of evaluation pre-results or list of save file names. - """ - if efficient_test: - warnings.warn( - 'DeprecationWarning: ``efficient_test`` will be deprecated, the ' - 'evaluation is CPU memory friendly with pre_eval=True') - mmcv.mkdir_or_exist('.efficient_test') - # when none of them is set true, return segmentation results as - # a list of np.array. - assert [efficient_test, pre_eval, format_only].count(True) <= 1, \ - '``efficient_test``, ``pre_eval`` and ``format_only`` are mutually ' \ - 'exclusive, only one of them could be true .' 
- - model.eval() - results = [] - dataset = data_loader.dataset - # The pipeline about how the data_loader retrieval samples from dataset: - # sampler -> batch_sampler -> indices - # The indices are passed to dataset_fetcher to get data from dataset. - # data_fetcher -> collate_fn(dataset[index]) -> data_sample - # we use batch_sampler to get correct data idx - - # batch_sampler based on DistributedSampler, the indices only point to data - # samples of related machine. - loader_indices = data_loader.batch_sampler - - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - - for batch_indices, data in zip(loader_indices, data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - - if efficient_test: - result = [np2tmp(_, tmpdir='.efficient_test') for _ in result] - - if format_only: - result = dataset.format_results( - result, indices=batch_indices, **format_args) - if pre_eval: - # TODO: adapt samples_per_gpu > 1. - # only samples_per_gpu=1 valid now - result = dataset.pre_eval(result, indices=batch_indices) - - results.extend(result) - - if rank == 0: - batch_size = len(result) * world_size - for _ in range(batch_size): - prog_bar.update() - - # collect results from all ranks - if gpu_collect: - results = collect_results_gpu(results, len(dataset)) - else: - results = collect_results_cpu(results, len(dataset), tmpdir) - return results diff --git a/cv/3d_detection/paconv/pytorch/mmseg/apis/train.py b/cv/3d_detection/paconv/pytorch/mmseg/apis/train.py deleted file mode 100644 index 7e1096bc..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/apis/train.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import random -import warnings - -import numpy as np -import torch -import torch.distributed as dist -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import HOOKS, build_optimizer, build_runner, get_dist_info -from mmcv.utils import build_from_cfg - -from mmseg.core import DistEvalHook, EvalHook -from mmseg.datasets import build_dataloader, build_dataset -from mmseg.utils import get_root_logger - - -def init_random_seed(seed=None, device='cuda'): - """Initialize random seed. - - If the seed is not set, the seed will be automatically randomized, - and then broadcast to all processes to prevent some potential bugs. - Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - Returns: - int: Seed to be used. - """ - if seed is not None: - return seed - - # Make sure all ranks share the same random seed to prevent - # some potential bugs. Please refer to - # https://github.com/open-mmlab/mmdetection/issues/6339 - rank, world_size = get_dist_info() - seed = np.random.randint(2**31) - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. 
- """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def train_segmentor(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - """Launch segmentor training.""" - logger = get_root_logger(cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - data_loaders = [ - build_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # cfg.gpus will be ignored if distributed - len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - drop_last=True) for ds in dataset - ] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - # build runner - optimizer = build_optimizer(model, cfg.optimizer) - - if cfg.get('runner') is None: - cfg.runner = {'type': 'IterBasedRunner', 'max_iters': cfg.total_iters} - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - batch_processor=None, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # register hooks - runner.register_training_hooks(cfg.lr_config, cfg.optimizer_config, - cfg.checkpoint_config, cfg.log_config, - cfg.get('momentum_config', None)) - - # an ugly walkaround to make the .log and .log.json filenames the same - runner.timestamp = timestamp - - # register eval hooks - if validate: - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_dataloader( - val_dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' - eval_hook = DistEvalHook if distributed else EvalHook - # In this PR (https://github.com/open-mmlab/mmcv/pull/1193), the - # priority of IterTimerHook has been modified from 'NORMAL' to 'LOW'. 
- runner.register_hook( - eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - # user-defined hooks - if cfg.get('custom_hooks', None): - custom_hooks = cfg.custom_hooks - assert isinstance(custom_hooks, list), \ - f'custom_hooks expect list type, but got {type(custom_hooks)}' - for hook_cfg in cfg.custom_hooks: - assert isinstance(hook_cfg, dict), \ - 'Each item in custom_hooks expects dict type, but got ' \ - f'{type(hook_cfg)}' - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = build_from_cfg(hook_cfg, HOOKS) - runner.register_hook(hook, priority=priority) - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/core/__init__.py deleted file mode 100644 index 40227861..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .evaluation import * # noqa: F401, F403 -from .seg import * # noqa: F401, F403 -from .utils import * # noqa: F401, F403 diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/__init__.py deleted file mode 100644 index 3d16d17e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .class_names import get_classes, get_palette -from .eval_hooks import DistEvalHook, EvalHook -from .metrics import (eval_metrics, intersect_and_union, mean_dice, - mean_fscore, mean_iou, pre_eval_to_metrics) - -__all__ = [ - 'EvalHook', 'DistEvalHook', 'mean_dice', 'mean_iou', 'mean_fscore', - 'eval_metrics', 'get_classes', 'get_palette', 'pre_eval_to_metrics', - 'intersect_and_union' -] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/class_names.py b/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/class_names.py deleted file mode 100644 index 4527fbaf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/class_names.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import mmcv - - -def cityscapes_classes(): - """Cityscapes class names for external use.""" - return [ - 'road', 'sidewalk', 'building', 'wall', 'fence', 'pole', - 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle' - ] - - -def ade_classes(): - """ADE20K class names for external use.""" - return [ - 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', - 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', - 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', - 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', - 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', - 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', - 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', - 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', - 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', - 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', - 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', - 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', - 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', - 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', - 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', - 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', - 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', - 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', - 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', - 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', - 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', - 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', - 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', - 'clock', 'flag' - ] - - -def voc_classes(): - """Pascal VOC class names for external use.""" - return [ - 'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', - 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', - 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', - 'tvmonitor' - ] - - -def cityscapes_palette(): - """Cityscapes palette for external use.""" - return [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], - [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], - [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], - [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], [0, 80, 100], - [0, 0, 230], [119, 11, 32]] - - -def ade_palette(): - """ADE20K palette for external use.""" - return [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 
255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], - [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], - [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - -def voc_palette(): - """Pascal VOC palette for external use.""" - return [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128], - [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], - [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128], - [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0], - [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]] - - -dataset_aliases = { - 'cityscapes': ['cityscapes'], - 'ade': ['ade', 'ade20k'], - 'voc': ['voc', 'pascal_voc', 'voc12', 'voc12aug'] -} - - -def get_classes(dataset): - """Get class names of a dataset.""" - alias2name = {} - for name, aliases in dataset_aliases.items(): - for alias in aliases: - alias2name[alias] = name - - if mmcv.is_str(dataset): - if dataset in alias2name: - labels = eval(alias2name[dataset] + '_classes()') - else: - raise ValueError(f'Unrecognized dataset: {dataset}') - else: - raise TypeError(f'dataset must a str, but got {type(dataset)}') - return labels - - -def get_palette(dataset): - """Get class palette (RGB) of a dataset.""" - alias2name = {} - for name, aliases in dataset_aliases.items(): - for alias in aliases: - alias2name[alias] = name - - if mmcv.is_str(dataset): - if dataset in alias2name: - labels = eval(alias2name[dataset] + '_palette()') - else: - raise ValueError(f'Unrecognized dataset: {dataset}') - else: - raise TypeError(f'dataset must a str, but got {type(dataset)}') - return labels diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/eval_hooks.py b/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/eval_hooks.py deleted file mode 100644 index 952db3b0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/eval_hooks.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -import warnings - -import torch.distributed as dist -from mmcv.runner import DistEvalHook as _DistEvalHook -from mmcv.runner import EvalHook as _EvalHook -from torch.nn.modules.batchnorm import _BatchNorm - - -class EvalHook(_EvalHook): - """Single GPU EvalHook, with efficient test support. - - Args: - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: False. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - pre_eval (bool): Whether to use progressive mode to evaluate model. - Default: False. - Returns: - list: The prediction results. - """ - - greater_keys = ['mIoU', 'mAcc', 'aAcc'] - - def __init__(self, - *args, - by_epoch=False, - efficient_test=False, - pre_eval=False, - **kwargs): - super().__init__(*args, by_epoch=by_epoch, **kwargs) - self.pre_eval = pre_eval - if efficient_test: - warnings.warn( - 'DeprecationWarning: ``efficient_test`` for evaluation hook ' - 'is deprecated, the evaluation hook is CPU memory friendly ' - 'with ``pre_eval=True`` as argument for ``single_gpu_test()`` ' - 'function') - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - if not self._should_evaluate(runner): - return - - from mmseg.apis import single_gpu_test - results = single_gpu_test( - runner.model, self.dataloader, show=False, pre_eval=self.pre_eval) - runner.log_buffer.clear() - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - if self.save_best: - self._save_ckpt(runner, key_score) - - -class DistEvalHook(_DistEvalHook): - """Distributed EvalHook, with efficient test support. - - Args: - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: False. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - pre_eval (bool): Whether to use progressive mode to evaluate model. - Default: False. - Returns: - list: The prediction results. - """ - - greater_keys = ['mIoU', 'mAcc', 'aAcc'] - - def __init__(self, - *args, - by_epoch=False, - efficient_test=False, - pre_eval=False, - **kwargs): - super().__init__(*args, by_epoch=by_epoch, **kwargs) - self.pre_eval = pre_eval - if efficient_test: - warnings.warn( - 'DeprecationWarning: ``efficient_test`` for evaluation hook ' - 'is deprecated, the evaluation hook is CPU memory friendly ' - 'with ``pre_eval=True`` as argument for ``multi_gpu_test()`` ' - 'function') - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - # Synchronization of BatchNorm's buffer (running_mean - # and running_var) is not supported in the DDP of pytorch, - # which may cause the inconsistent performance of models in - # different ranks, so we broadcast BatchNorm's buffers - # of rank 0 to other ranks to avoid this. 
- if self.broadcast_bn_buffer: - model = runner.model - for name, module in model.named_modules(): - if isinstance(module, - _BatchNorm) and module.track_running_stats: - dist.broadcast(module.running_var, 0) - dist.broadcast(module.running_mean, 0) - - if not self._should_evaluate(runner): - return - - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - - from mmseg.apis import multi_gpu_test - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect, - pre_eval=self.pre_eval) - - runner.log_buffer.clear() - - if runner.rank == 0: - print('\n') - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - - if self.save_best: - self._save_ckpt(runner, key_score) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/metrics.py b/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/metrics.py deleted file mode 100644 index a1c0908e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/evaluation/metrics.py +++ /dev/null @@ -1,395 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections import OrderedDict - -import mmcv -import numpy as np -import torch - - -def f_score(precision, recall, beta=1): - """calculate the f-score value. - - Args: - precision (float | torch.Tensor): The precision value. - recall (float | torch.Tensor): The recall value. - beta (int): Determines the weight of recall in the combined score. - Default: False. - - Returns: - [torch.tensor]: The f-score value. - """ - score = (1 + beta**2) * (precision * recall) / ( - (beta**2 * precision) + recall) - return score - - -def intersect_and_union(pred_label, - label, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate intersection and Union. - - Args: - pred_label (ndarray | str): Prediction segmentation map - or predict result filename. - label (ndarray | str): Ground truth segmentation map - or label filename. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. The parameter will - work only when label is str. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. The parameter will - work only when label is str. Default: False. - - Returns: - torch.Tensor: The intersection of prediction and ground truth - histogram on all classes. - torch.Tensor: The union of prediction and ground truth histogram on - all classes. - torch.Tensor: The prediction histogram on all classes. - torch.Tensor: The ground truth histogram on all classes. 
- """ - - if isinstance(pred_label, str): - pred_label = torch.from_numpy(np.load(pred_label)) - else: - pred_label = torch.from_numpy((pred_label)) - - if isinstance(label, str): - label = torch.from_numpy( - mmcv.imread(label, flag='unchanged', backend='pillow')) - else: - label = torch.from_numpy(label) - - if label_map is not None: - for old_id, new_id in label_map.items(): - label[label == old_id] = new_id - if reduce_zero_label: - label[label == 0] = 255 - label = label - 1 - label[label == 254] = 255 - - mask = (label != ignore_index) - pred_label = pred_label[mask] - label = label[mask] - - intersect = pred_label[pred_label == label] - area_intersect = torch.histc( - intersect.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_pred_label = torch.histc( - pred_label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_label = torch.histc( - label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_union = area_pred_label + area_label - area_intersect - return area_intersect, area_union, area_pred_label, area_label - - -def total_intersect_and_union(results, - gt_seg_maps, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate Total Intersection and Union. - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str] | Iterables): list of ground - truth segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - - Returns: - ndarray: The intersection of prediction and ground truth histogram - on all classes. - ndarray: The union of prediction and ground truth histogram on all - classes. - ndarray: The prediction histogram on all classes. - ndarray: The ground truth histogram on all classes. - """ - total_area_intersect = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_union = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_pred_label = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_label = torch.zeros((num_classes, ), dtype=torch.float64) - for result, gt_seg_map in zip(results, gt_seg_maps): - area_intersect, area_union, area_pred_label, area_label = \ - intersect_and_union( - result, gt_seg_map, num_classes, ignore_index, - label_map, reduce_zero_label) - total_area_intersect += area_intersect - total_area_union += area_union - total_area_pred_label += area_pred_label - total_area_label += area_label - return total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label - - -def mean_iou(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). 
- reduce_zero_label (bool): Whether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category IoU, shape (num_classes, ). - """ - iou_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mIoU'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return iou_result - - -def mean_dice(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Dice (mDice) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: Default metrics. - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category dice, shape (num_classes, ). - """ - - dice_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mDice'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return dice_result - - -def mean_fscore(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - beta (int): Determines the weight of recall in the combined score. - Default: False. - - - Returns: - dict[str, float | ndarray]: Default metrics. - float: Overall accuracy on all images. - ndarray: Per category recall, shape (num_classes, ). - ndarray: Per category precision, shape (num_classes, ). - ndarray: Per category f-score, shape (num_classes, ). 
- """ - fscore_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mFscore'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label, - beta=beta) - return fscore_result - - -def eval_metrics(results, - gt_seg_maps, - num_classes, - ignore_index, - metrics=['mIoU'], - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate evaluation metrics - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str] | Iterables): list of ground - truth segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - Returns: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category evaluation metrics, shape (num_classes, ). - """ - - total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label = total_intersect_and_union( - results, gt_seg_maps, num_classes, ignore_index, label_map, - reduce_zero_label) - ret_metrics = total_area_to_metrics(total_area_intersect, total_area_union, - total_area_pred_label, - total_area_label, metrics, nan_to_num, - beta) - - return ret_metrics - - -def pre_eval_to_metrics(pre_eval_results, - metrics=['mIoU'], - nan_to_num=None, - beta=1): - """Convert pre-eval results to metrics. - - Args: - pre_eval_results (list[tuple[torch.Tensor]]): per image eval results - for computing evaluation metric - metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - Returns: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category evaluation metrics, shape (num_classes, ). - """ - - # convert list of tuples to tuple of lists, e.g. - # [(A_1, B_1, C_1, D_1), ..., (A_n, B_n, C_n, D_n)] to - # ([A_1, ..., A_n], ..., [D_1, ..., D_n]) - pre_eval_results = tuple(zip(*pre_eval_results)) - assert len(pre_eval_results) == 4 - - total_area_intersect = sum(pre_eval_results[0]) - total_area_union = sum(pre_eval_results[1]) - total_area_pred_label = sum(pre_eval_results[2]) - total_area_label = sum(pre_eval_results[3]) - - ret_metrics = total_area_to_metrics(total_area_intersect, total_area_union, - total_area_pred_label, - total_area_label, metrics, nan_to_num, - beta) - - return ret_metrics - - -def total_area_to_metrics(total_area_intersect, - total_area_union, - total_area_pred_label, - total_area_label, - metrics=['mIoU'], - nan_to_num=None, - beta=1): - """Calculate evaluation metrics - Args: - total_area_intersect (ndarray): The intersection of prediction and - ground truth histogram on all classes. - total_area_union (ndarray): The union of prediction and ground truth - histogram on all classes. - total_area_pred_label (ndarray): The prediction histogram on all - classes. 
- total_area_label (ndarray): The ground truth histogram on all classes. - metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - Returns: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category evaluation metrics, shape (num_classes, ). - """ - if isinstance(metrics, str): - metrics = [metrics] - allowed_metrics = ['mIoU', 'mDice', 'mFscore'] - if not set(metrics).issubset(set(allowed_metrics)): - raise KeyError('metrics {} is not supported'.format(metrics)) - - all_acc = total_area_intersect.sum() / total_area_label.sum() - ret_metrics = OrderedDict({'aAcc': all_acc}) - for metric in metrics: - if metric == 'mIoU': - iou = total_area_intersect / total_area_union - acc = total_area_intersect / total_area_label - ret_metrics['IoU'] = iou - ret_metrics['Acc'] = acc - elif metric == 'mDice': - dice = 2 * total_area_intersect / ( - total_area_pred_label + total_area_label) - acc = total_area_intersect / total_area_label - ret_metrics['Dice'] = dice - ret_metrics['Acc'] = acc - elif metric == 'mFscore': - precision = total_area_intersect / total_area_pred_label - recall = total_area_intersect / total_area_label - f_value = torch.tensor( - [f_score(x[0], x[1], beta) for x in zip(precision, recall)]) - ret_metrics['Fscore'] = f_value - ret_metrics['Precision'] = precision - ret_metrics['Recall'] = recall - - ret_metrics = { - metric: value.numpy() - for metric, value in ret_metrics.items() - } - if nan_to_num is not None: - ret_metrics = OrderedDict({ - metric: np.nan_to_num(metric_value, nan=nan_to_num) - for metric, metric_value in ret_metrics.items() - }) - return ret_metrics diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/seg/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/core/seg/__init__.py deleted file mode 100644 index 5206b96b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/seg/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import build_pixel_sampler -from .sampler import BasePixelSampler, OHEMPixelSampler - -__all__ = ['build_pixel_sampler', 'BasePixelSampler', 'OHEMPixelSampler'] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/seg/builder.py b/cv/3d_detection/paconv/pytorch/mmseg/core/seg/builder.py deleted file mode 100644 index 1cecd347..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/seg/builder.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry, build_from_cfg - -PIXEL_SAMPLERS = Registry('pixel sampler') - - -def build_pixel_sampler(cfg, **default_args): - """Build pixel sampler for segmentation map.""" - return build_from_cfg(cfg, PIXEL_SAMPLERS, default_args) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/__init__.py deleted file mode 100644 index 5a764856..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base_pixel_sampler import BasePixelSampler -from .ohem_pixel_sampler import OHEMPixelSampler - -__all__ = ['BasePixelSampler', 'OHEMPixelSampler'] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/base_pixel_sampler.py b/cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/base_pixel_sampler.py deleted file mode 100644 index 03672cd4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/base_pixel_sampler.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BasePixelSampler(metaclass=ABCMeta): - """Base class of pixel sampler.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def sample(self, seg_logit, seg_label): - """Placeholder for sample function.""" diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/ohem_pixel_sampler.py b/cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/ohem_pixel_sampler.py deleted file mode 100644 index 833a2876..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/seg/sampler/ohem_pixel_sampler.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import PIXEL_SAMPLERS -from .base_pixel_sampler import BasePixelSampler - - -@PIXEL_SAMPLERS.register_module() -class OHEMPixelSampler(BasePixelSampler): - """Online Hard Example Mining Sampler for segmentation. - - Args: - context (nn.Module): The context of sampler, subclass of - :obj:`BaseDecodeHead`. - thresh (float, optional): The threshold for hard example selection. - Below which, are prediction with low confidence. If not - specified, the hard examples will be pixels of top ``min_kept`` - loss. Default: None. - min_kept (int, optional): The minimum number of predictions to keep. - Default: 100000. - """ - - def __init__(self, context, thresh=None, min_kept=100000): - super(OHEMPixelSampler, self).__init__() - self.context = context - assert min_kept > 1 - self.thresh = thresh - self.min_kept = min_kept - - def sample(self, seg_logit, seg_label): - """Sample pixels that have high loss or with low prediction confidence. - - Args: - seg_logit (torch.Tensor): segmentation logits, shape (N, C, H, W) - seg_label (torch.Tensor): segmentation label, shape (N, 1, H, W) - - Returns: - torch.Tensor: segmentation weight, shape (N, H, W) - """ - with torch.no_grad(): - assert seg_logit.shape[2:] == seg_label.shape[2:] - assert seg_label.shape[1] == 1 - seg_label = seg_label.squeeze(1).long() - batch_kept = self.min_kept * seg_label.size(0) - valid_mask = seg_label != self.context.ignore_index - seg_weight = seg_logit.new_zeros(size=seg_label.size()) - valid_seg_weight = seg_weight[valid_mask] - if self.thresh is not None: - seg_prob = F.softmax(seg_logit, dim=1) - - tmp_seg_label = seg_label.clone().unsqueeze(1) - tmp_seg_label[tmp_seg_label == self.context.ignore_index] = 0 - seg_prob = seg_prob.gather(1, tmp_seg_label).squeeze(1) - sort_prob, sort_indices = seg_prob[valid_mask].sort() - - if sort_prob.numel() > 0: - min_threshold = sort_prob[min(batch_kept, - sort_prob.numel() - 1)] - else: - min_threshold = 0.0 - threshold = max(min_threshold, self.thresh) - valid_seg_weight[seg_prob[valid_mask] < threshold] = 1. 
- else: - if not isinstance(self.context.loss_decode, nn.ModuleList): - losses_decode = [self.context.loss_decode] - else: - losses_decode = self.context.loss_decode - losses = 0.0 - for loss_module in losses_decode: - losses += loss_module( - seg_logit, - seg_label, - weight=None, - ignore_index=self.context.ignore_index, - reduction_override='none') - - # faster than topk according to https://github.com/pytorch/pytorch/issues/22812 # noqa - _, sort_indices = losses[valid_mask].sort(descending=True) - valid_seg_weight[sort_indices[:batch_kept]] = 1. - - seg_weight[valid_mask] = valid_seg_weight - - return seg_weight diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/core/utils/__init__.py deleted file mode 100644 index be9de558..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .misc import add_prefix - -__all__ = ['add_prefix'] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/core/utils/misc.py b/cv/3d_detection/paconv/pytorch/mmseg/core/utils/misc.py deleted file mode 100644 index 282bb8d9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/core/utils/misc.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -def add_prefix(inputs, prefix): - """Add prefix for dict. - - Args: - inputs (dict): The input dict with str keys. - prefix (str): The prefix to add. - - Returns: - - dict: The dict with keys updated with ``prefix``. - """ - - outputs = dict() - for name, value in inputs.items(): - outputs[f'{prefix}.{name}'] = value - - return outputs diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/__init__.py deleted file mode 100644 index c115ab79..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .ade import ADE20KDataset -from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset -from .chase_db1 import ChaseDB1Dataset -from .cityscapes import CityscapesDataset -from .coco_stuff import COCOStuffDataset -from .custom import CustomDataset -from .dark_zurich import DarkZurichDataset -from .dataset_wrappers import ConcatDataset, RepeatDataset -from .drive import DRIVEDataset -from .hrf import HRFDataset -from .loveda import LoveDADataset -from .night_driving import NightDrivingDataset -from .pascal_context import PascalContextDataset, PascalContextDataset59 -from .stare import STAREDataset -from .voc import PascalVOCDataset - -__all__ = [ - 'CustomDataset', 'build_dataloader', 'ConcatDataset', 'RepeatDataset', - 'DATASETS', 'build_dataset', 'PIPELINES', 'CityscapesDataset', - 'PascalVOCDataset', 'ADE20KDataset', 'PascalContextDataset', - 'PascalContextDataset59', 'ChaseDB1Dataset', 'DRIVEDataset', 'HRFDataset', - 'STAREDataset', 'DarkZurichDataset', 'NightDrivingDataset', - 'COCOStuffDataset', 'LoveDADataset' -] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/ade.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/ade.py deleted file mode 100644 index db94cebd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/ade.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp - -import mmcv -import numpy as np -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class ADE20KDataset(CustomDataset): - """ADE20K dataset. - - In segmentation map annotation for ADE20K, 0 stands for background, which - is not included in 150 categories. ``reduce_zero_label`` is fixed to True. - The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is fixed to - '.png'. - """ - CLASSES = ( - 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', - 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', - 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', - 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', - 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', - 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', - 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', - 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', - 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', - 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', - 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', - 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', - 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', - 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', - 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', - 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', - 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', - 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', - 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', - 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', - 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', - 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', - 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', - 'clock', 'flag') - - PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], - [0, 112, 
255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], - [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - def __init__(self, **kwargs): - super(ADE20KDataset, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - reduce_zero_label=True, - **kwargs) - - def results2img(self, results, imgfile_prefix, to_label_id, indices=None): - """Write the segmentation results to images. - - Args: - results (list[ndarray]): Testing results of the - dataset. - imgfile_prefix (str): The filename prefix of the png files. - If the prefix is "somepath/xxx", - the png files will be named "somepath/xxx.png". - to_label_id (bool): whether convert output to label_id for - submission. - indices (list[int], optional): Indices of input results, if not - set, all the indices of the dataset will be used. - Default: None. - - Returns: - list[str: str]: result txt files which contains corresponding - semantic segmentation images. - """ - if indices is None: - indices = list(range(len(self))) - - mmcv.mkdir_or_exist(imgfile_prefix) - result_files = [] - for result, idx in zip(results, indices): - - filename = self.img_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - - png_filename = osp.join(imgfile_prefix, f'{basename}.png') - - # The index range of official requirement is from 0 to 150. - # But the index range of output is from 0 to 149. - # That is because we set reduce_zero_label=True. - result = result + 1 - - output = Image.fromarray(result.astype(np.uint8)) - output.save(png_filename) - result_files.append(png_filename) - - return result_files - - def format_results(self, - results, - imgfile_prefix, - to_label_id=True, - indices=None): - """Format the results into dir (standard format for ade20k evaluation). - - Args: - results (list): Testing results of the dataset. - imgfile_prefix (str | None): The prefix of images files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". - to_label_id (bool): whether convert output to label_id for - submission. Default: False - indices (list[int], optional): Indices of input results, if not - set, all the indices of the dataset will be used. - Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a list containing - the image paths, tmp_dir is the temporal directory created - for saving json/png files when img_prefix is not specified. - """ - - if indices is None: - indices = list(range(len(self))) - - assert isinstance(results, list), 'results must be a list.' - assert isinstance(indices, list), 'indices must be a list.' 
- - result_files = self.results2img(results, imgfile_prefix, to_label_id, - indices) - return result_files diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/builder.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/builder.py deleted file mode 100644 index 7ab64595..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/builder.py +++ /dev/null @@ -1,182 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import platform -import random -from functools import partial - -import numpy as np -import torch -from mmcv.parallel import collate -from mmcv.runner import get_dist_info -from mmcv.utils import Registry, build_from_cfg, digit_version -from torch.utils.data import DataLoader, DistributedSampler - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - base_soft_limit = rlimit[0] - hard_limit = rlimit[1] - soft_limit = min(max(4096, base_soft_limit), hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def _concat_dataset(cfg, default_args=None): - """Build :obj:`ConcatDataset by.""" - from .dataset_wrappers import ConcatDataset - img_dir = cfg['img_dir'] - ann_dir = cfg.get('ann_dir', None) - split = cfg.get('split', None) - # pop 'separate_eval' since it is not a valid key for common datasets. - separate_eval = cfg.pop('separate_eval', True) - num_img_dir = len(img_dir) if isinstance(img_dir, (list, tuple)) else 1 - if ann_dir is not None: - num_ann_dir = len(ann_dir) if isinstance(ann_dir, (list, tuple)) else 1 - else: - num_ann_dir = 0 - if split is not None: - num_split = len(split) if isinstance(split, (list, tuple)) else 1 - else: - num_split = 0 - if num_img_dir > 1: - assert num_img_dir == num_ann_dir or num_ann_dir == 0 - assert num_img_dir == num_split or num_split == 0 - else: - assert num_split == num_ann_dir or num_ann_dir <= 1 - num_dset = max(num_split, num_img_dir) - - datasets = [] - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - if isinstance(img_dir, (list, tuple)): - data_cfg['img_dir'] = img_dir[i] - if isinstance(ann_dir, (list, tuple)): - data_cfg['ann_dir'] = ann_dir[i] - if isinstance(split, (list, tuple)): - data_cfg['split'] = split[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets, separate_eval) - - -def build_dataset(cfg, default_args=None): - """Build datasets.""" - from .dataset_wrappers import ConcatDataset, RepeatDataset - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif isinstance(cfg.get('img_dir'), (list, tuple)) or isinstance( - cfg.get('split', None), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - drop_last=False, - pin_memory=True, - persistent_workers=True, - **kwargs): - """Build PyTorch DataLoader. - - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (Dataset): A PyTorch dataset. 
- samples_per_gpu (int): Number of training samples on each GPU, i.e., - batch size of each GPU. - workers_per_gpu (int): How many subprocesses to use for data loading - for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed training. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - seed (int | None): Seed to be used. Default: None. - drop_last (bool): Whether to drop the last incomplete batch in epoch. - Default: False - pin_memory (bool): Whether to use pin_memory in DataLoader. - Default: True - persistent_workers (bool): If True, the data loader will not shutdown - the worker processes after a dataset has been consumed once. - This allows to maintain the workers Dataset instances alive. - The argument also has effect in PyTorch>=1.7.0. - Default: True - kwargs: any keyword argument to be used to initialize DataLoader - - Returns: - DataLoader: A PyTorch dataloader. - """ - rank, world_size = get_dist_info() - if dist: - sampler = DistributedSampler( - dataset, world_size, rank, shuffle=shuffle) - shuffle = False - batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - sampler = None - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - if digit_version(torch.__version__) >= digit_version('1.8.0'): - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=pin_memory, - shuffle=shuffle, - worker_init_fn=init_fn, - drop_last=drop_last, - persistent_workers=persistent_workers, - **kwargs) - else: - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=pin_memory, - shuffle=shuffle, - worker_init_fn=init_fn, - drop_last=drop_last, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - """Worker init func for dataloader. - - The seed of each worker equals to num_worker * rank + worker_id + user_seed - - Args: - worker_id (int): Worker id. - num_workers (int): Number of workers. - rank (int): The rank of current process. - seed (int): The random seed to use. - """ - - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/chase_db1.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/chase_db1.py deleted file mode 100644 index 7f14b2da..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/chase_db1.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class ChaseDB1Dataset(CustomDataset): - """Chase_db1 dataset. - - In segmentation map annotation for Chase_db1, 0 stands for background, - which is included in 2 categories. ``reduce_zero_label`` is fixed to False. - The ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '_1stHO.png'. 
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(ChaseDB1Dataset, self).__init__( - img_suffix='.png', - seg_map_suffix='_1stHO.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/cityscapes.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/cityscapes.py deleted file mode 100644 index ed633d00..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/cityscapes.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -import mmcv -import numpy as np -from mmcv.utils import print_log -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class CityscapesDataset(CustomDataset): - """Cityscapes dataset. - - The ``img_suffix`` is fixed to '_leftImg8bit.png' and ``seg_map_suffix`` is - fixed to '_gtFine_labelTrainIds.png' for Cityscapes dataset. - """ - - CLASSES = ('road', 'sidewalk', 'building', 'wall', 'fence', 'pole', - 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle') - - PALETTE = [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], - [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], - [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], - [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], - [0, 80, 100], [0, 0, 230], [119, 11, 32]] - - def __init__(self, - img_suffix='_leftImg8bit.png', - seg_map_suffix='_gtFine_labelTrainIds.png', - **kwargs): - super(CityscapesDataset, self).__init__( - img_suffix=img_suffix, seg_map_suffix=seg_map_suffix, **kwargs) - - @staticmethod - def _convert_to_label_id(result): - """Convert trainId to id for cityscapes.""" - if isinstance(result, str): - result = np.load(result) - import cityscapesscripts.helpers.labels as CSLabels - result_copy = result.copy() - for trainId, label in CSLabels.trainId2label.items(): - result_copy[result == trainId] = label.id - - return result_copy - - def results2img(self, results, imgfile_prefix, to_label_id, indices=None): - """Write the segmentation results to images. - - Args: - results (list[ndarray]): Testing results of the - dataset. - imgfile_prefix (str): The filename prefix of the png files. - If the prefix is "somepath/xxx", - the png files will be named "somepath/xxx.png". - to_label_id (bool): whether convert output to label_id for - submission. - indices (list[int], optional): Indices of input results, - if not set, all the indices of the dataset will be used. - Default: None. - - Returns: - list[str: str]: result txt files which contains corresponding - semantic segmentation images. 
- """ - if indices is None: - indices = list(range(len(self))) - - mmcv.mkdir_or_exist(imgfile_prefix) - result_files = [] - for result, idx in zip(results, indices): - if to_label_id: - result = self._convert_to_label_id(result) - filename = self.img_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - - png_filename = osp.join(imgfile_prefix, f'{basename}.png') - - output = Image.fromarray(result.astype(np.uint8)).convert('P') - import cityscapesscripts.helpers.labels as CSLabels - palette = np.zeros((len(CSLabels.id2label), 3), dtype=np.uint8) - for label_id, label in CSLabels.id2label.items(): - palette[label_id] = label.color - - output.putpalette(palette) - output.save(png_filename) - result_files.append(png_filename) - - return result_files - - def format_results(self, - results, - imgfile_prefix, - to_label_id=True, - indices=None): - """Format the results into dir (standard format for Cityscapes - evaluation). - - Args: - results (list): Testing results of the dataset. - imgfile_prefix (str): The prefix of images files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". - to_label_id (bool): whether convert output to label_id for - submission. Default: False - indices (list[int], optional): Indices of input results, - if not set, all the indices of the dataset will be used. - Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a list containing - the image paths, tmp_dir is the temporal directory created - for saving json/png files when img_prefix is not specified. - """ - if indices is None: - indices = list(range(len(self))) - - assert isinstance(results, list), 'results must be a list.' - assert isinstance(indices, list), 'indices must be a list.' - - result_files = self.results2img(results, imgfile_prefix, to_label_id, - indices) - - return result_files - - def evaluate(self, - results, - metric='mIoU', - logger=None, - imgfile_prefix=None): - """Evaluation in Cityscapes/default protocol. - - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - imgfile_prefix (str | None): The prefix of output image file, - for cityscapes evaluation only. It includes the file path and - the prefix of filename, e.g., "a/b/prefix". - If results are evaluated with cityscapes protocol, it would be - the prefix of output png files. The output files would be - png images under folder "a/b/prefix/xxx.png", where "xxx" is - the image name of cityscapes. If not specified, a temp file - will be created for evaluation. - Default: None. - - Returns: - dict[str, float]: Cityscapes/default metrics. - """ - - eval_results = dict() - metrics = metric.copy() if isinstance(metric, list) else [metric] - if 'cityscapes' in metrics: - eval_results.update( - self._evaluate_cityscapes(results, logger, imgfile_prefix)) - metrics.remove('cityscapes') - if len(metrics) > 0: - eval_results.update( - super(CityscapesDataset, - self).evaluate(results, metrics, logger)) - - return eval_results - - def _evaluate_cityscapes(self, results, logger, imgfile_prefix): - """Evaluation in Cityscapes protocol. - - Args: - results (list): Testing results of the dataset. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. 
- imgfile_prefix (str | None): The prefix of output image file - - Returns: - dict[str: float]: Cityscapes evaluation results. - """ - try: - import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as CSEval # noqa - except ImportError: - raise ImportError('Please run "pip install cityscapesscripts" to ' - 'install cityscapesscripts first.') - msg = 'Evaluating in Cityscapes style' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - result_dir = imgfile_prefix - - eval_results = dict() - print_log(f'Evaluating results under {result_dir} ...', logger=logger) - - CSEval.args.evalInstLevelScore = True - CSEval.args.predictionPath = osp.abspath(result_dir) - CSEval.args.evalPixelAccuracy = True - CSEval.args.JSONOutput = False - - seg_map_list = [] - pred_list = [] - - # when evaluating with official cityscapesscripts, - # **_gtFine_labelIds.png is used - for seg_map in mmcv.scandir( - self.ann_dir, 'gtFine_labelIds.png', recursive=True): - seg_map_list.append(osp.join(self.ann_dir, seg_map)) - pred_list.append(CSEval.getPrediction(CSEval.args, seg_map)) - - eval_results.update( - CSEval.evaluateImgLists(pred_list, seg_map_list, CSEval.args)) - - return eval_results diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/coco_stuff.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/coco_stuff.py deleted file mode 100644 index 546a0142..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/coco_stuff.py +++ /dev/null @@ -1,93 +0,0 @@ -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class COCOStuffDataset(CustomDataset): - """COCO-Stuff dataset. - - In segmentation map annotation for COCO-Stuff, Train-IDs of the 10k version - are from 1 to 171, where 0 is the ignore index, and Train-ID of COCO Stuff - 164k is from 0 to 170, where 255 is the ignore index. So, they are all 171 - semantic categories. ``reduce_zero_label`` is set to True and False for the - 10k and 164k versions, respectively. The ``img_suffix`` is fixed to '.jpg', - and ``seg_map_suffix`` is fixed to '.png'. 
- """ - CLASSES = ( - 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', - 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', - 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', - 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', - 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', - 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', - 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', - 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', - 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', - 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', - 'scissors', 'teddy bear', 'hair drier', 'toothbrush', 'banner', - 'blanket', 'branch', 'bridge', 'building-other', 'bush', 'cabinet', - 'cage', 'cardboard', 'carpet', 'ceiling-other', 'ceiling-tile', - 'cloth', 'clothes', 'clouds', 'counter', 'cupboard', 'curtain', - 'desk-stuff', 'dirt', 'door-stuff', 'fence', 'floor-marble', - 'floor-other', 'floor-stone', 'floor-tile', 'floor-wood', - 'flower', 'fog', 'food-other', 'fruit', 'furniture-other', 'grass', - 'gravel', 'ground-other', 'hill', 'house', 'leaves', 'light', 'mat', - 'metal', 'mirror-stuff', 'moss', 'mountain', 'mud', 'napkin', 'net', - 'paper', 'pavement', 'pillow', 'plant-other', 'plastic', 'platform', - 'playingfield', 'railing', 'railroad', 'river', 'road', 'rock', 'roof', - 'rug', 'salad', 'sand', 'sea', 'shelf', 'sky-other', 'skyscraper', - 'snow', 'solid-other', 'stairs', 'stone', 'straw', 'structural-other', - 'table', 'tent', 'textile-other', 'towel', 'tree', 'vegetable', - 'wall-brick', 'wall-concrete', 'wall-other', 'wall-panel', - 'wall-stone', 'wall-tile', 'wall-wood', 'water-other', 'waterdrops', - 'window-blind', 'window-other', 'wood') - - PALETTE = [[0, 192, 64], [0, 192, 64], [0, 64, 96], [128, 192, 192], - [0, 64, 64], [0, 192, 224], [0, 192, 192], [128, 192, 64], - [0, 192, 96], [128, 192, 64], [128, 32, 192], [0, 0, 224], - [0, 0, 64], [0, 160, 192], [128, 0, 96], [128, 0, 192], - [0, 32, 192], [128, 128, 224], [0, 0, 192], [128, 160, 192], - [128, 128, 0], [128, 0, 32], [128, 32, 0], [128, 0, 128], - [64, 128, 32], [0, 160, 0], [0, 0, 0], [192, 128, 160], - [0, 32, 0], [0, 128, 128], [64, 128, 160], [128, 160, 0], - [0, 128, 0], [192, 128, 32], [128, 96, 128], [0, 0, 128], - [64, 0, 32], [0, 224, 128], [128, 0, 0], [192, 0, 160], - [0, 96, 128], [128, 128, 128], [64, 0, 160], [128, 224, 128], - [128, 128, 64], [192, 0, 32], [128, 96, 0], [128, 0, 192], - [0, 128, 32], [64, 224, 0], [0, 0, 64], [128, 128, 160], - [64, 96, 0], [0, 128, 192], [0, 128, 160], [192, 224, 0], - [0, 128, 64], [128, 128, 32], [192, 32, 128], [0, 64, 192], - [0, 0, 32], [64, 160, 128], [128, 64, 64], [128, 0, 160], - [64, 32, 128], [128, 192, 192], [0, 0, 160], [192, 160, 128], - [128, 192, 0], [128, 0, 96], [192, 32, 0], [128, 64, 128], - [64, 128, 96], [64, 160, 0], [0, 64, 0], [192, 128, 224], - [64, 32, 0], [0, 192, 128], [64, 128, 224], [192, 160, 0], - [0, 192, 0], [192, 128, 96], [192, 96, 128], [0, 64, 128], - [64, 0, 96], [64, 224, 128], [128, 64, 0], [192, 0, 224], - [64, 96, 128], [128, 192, 128], [64, 0, 224], [192, 224, 128], - [128, 192, 64], [192, 0, 96], [192, 96, 0], [128, 64, 192], - [0, 128, 96], [0, 224, 0], [64, 64, 64], [128, 128, 224], - [0, 96, 0], [64, 192, 192], [0, 128, 
224], [128, 224, 0], - [64, 192, 64], [128, 128, 96], [128, 32, 128], [64, 0, 192], - [0, 64, 96], [0, 160, 128], [192, 0, 64], [128, 64, 224], - [0, 32, 128], [192, 128, 192], [0, 64, 224], [128, 160, 128], - [192, 128, 0], [128, 64, 32], [128, 32, 64], [192, 0, 128], - [64, 192, 32], [0, 160, 64], [64, 0, 0], [192, 192, 160], - [0, 32, 64], [64, 128, 128], [64, 192, 160], [128, 160, 64], - [64, 128, 0], [192, 192, 32], [128, 96, 192], [64, 0, 128], - [64, 64, 32], [0, 224, 192], [192, 0, 0], [192, 64, 160], - [0, 96, 192], [192, 128, 128], [64, 64, 160], [128, 224, 192], - [192, 128, 64], [192, 64, 32], [128, 96, 64], [192, 0, 192], - [0, 192, 32], [64, 224, 64], [64, 0, 64], [128, 192, 160], - [64, 96, 64], [64, 128, 192], [0, 192, 160], [192, 224, 64], - [64, 128, 64], [128, 192, 32], [192, 32, 192], [64, 64, 192], - [0, 64, 32], [64, 160, 192], [192, 64, 64], [128, 64, 160], - [64, 32, 192], [192, 192, 192], [0, 64, 160], [192, 160, 192], - [192, 192, 0], [128, 64, 96], [192, 32, 64], [192, 64, 128], - [64, 192, 96], [64, 160, 64], [64, 64, 0]] - - def __init__(self, **kwargs): - super(COCOStuffDataset, self).__init__( - img_suffix='.jpg', seg_map_suffix='_labelTrainIds.png', **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/custom.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/custom.py deleted file mode 100644 index 872b2b84..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/custom.py +++ /dev/null @@ -1,457 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings -from collections import OrderedDict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from prettytable import PrettyTable -from torch.utils.data import Dataset - -from mmseg.core import eval_metrics, intersect_and_union, pre_eval_to_metrics -from mmseg.utils import get_root_logger -from .builder import DATASETS -from .pipelines import Compose, LoadAnnotations - - -@DATASETS.register_module() -class CustomDataset(Dataset): - """Custom dataset for semantic segmentation. An example of file structure - is as followed. - - .. code-block:: none - - ├── data - │ ├── my_dataset - │ │ ├── img_dir - │ │ │ ├── train - │ │ │ │ ├── xxx{img_suffix} - │ │ │ │ ├── yyy{img_suffix} - │ │ │ │ ├── zzz{img_suffix} - │ │ │ ├── val - │ │ ├── ann_dir - │ │ │ ├── train - │ │ │ │ ├── xxx{seg_map_suffix} - │ │ │ │ ├── yyy{seg_map_suffix} - │ │ │ │ ├── zzz{seg_map_suffix} - │ │ │ ├── val - - The img/gt_semantic_seg pair of CustomDataset should be of the same - except suffix. A valid img/gt_semantic_seg filename pair should be like - ``xxx{img_suffix}`` and ``xxx{seg_map_suffix}`` (extension is also included - in the suffix). If split is given, then ``xxx`` is specified in txt file. - Otherwise, all files in ``img_dir/``and ``ann_dir`` will be loaded. - Please refer to ``docs/tutorials/new_dataset.md`` for more details. - - - Args: - pipeline (list[dict]): Processing pipeline - img_dir (str): Path to image directory - img_suffix (str): Suffix of images. Default: '.jpg' - ann_dir (str, optional): Path to annotation directory. Default: None - seg_map_suffix (str): Suffix of segmentation maps. Default: '.png' - split (str, optional): Split txt file. If split is specified, only - file with suffix in the splits will be loaded. Otherwise, all - images in img_dir/ann_dir will be loaded. Default: None - data_root (str, optional): Data root for img_dir/ann_dir. Default: - None. - test_mode (bool): If test_mode=True, gt wouldn't be loaded. 
- ignore_index (int): The label index to be ignored. Default: 255 - reduce_zero_label (bool): Whether to mark label zero as ignored. - Default: False - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. Default: None. - palette (Sequence[Sequence[int]]] | np.ndarray | None): - The palette of segmentation map. If None is given, and - self.PALETTE is None, random palette will be generated. - Default: None - gt_seg_map_loader_cfg (dict, optional): build LoadAnnotations to - load gt for evaluation, load from disk by default. Default: None. - """ - - CLASSES = None - - PALETTE = None - - def __init__(self, - pipeline, - img_dir, - img_suffix='.jpg', - ann_dir=None, - seg_map_suffix='.png', - split=None, - data_root=None, - test_mode=False, - ignore_index=255, - reduce_zero_label=False, - classes=None, - palette=None, - gt_seg_map_loader_cfg=None): - self.pipeline = Compose(pipeline) - self.img_dir = img_dir - self.img_suffix = img_suffix - self.ann_dir = ann_dir - self.seg_map_suffix = seg_map_suffix - self.split = split - self.data_root = data_root - self.test_mode = test_mode - self.ignore_index = ignore_index - self.reduce_zero_label = reduce_zero_label - self.label_map = None - self.CLASSES, self.PALETTE = self.get_classes_and_palette( - classes, palette) - self.gt_seg_map_loader = LoadAnnotations( - ) if gt_seg_map_loader_cfg is None else LoadAnnotations( - **gt_seg_map_loader_cfg) - - if test_mode: - assert self.CLASSES is not None, \ - '`cls.CLASSES` or `classes` should be specified when testing' - - # join paths if data_root is specified - if self.data_root is not None: - if not osp.isabs(self.img_dir): - self.img_dir = osp.join(self.data_root, self.img_dir) - if not (self.ann_dir is None or osp.isabs(self.ann_dir)): - self.ann_dir = osp.join(self.data_root, self.ann_dir) - if not (self.split is None or osp.isabs(self.split)): - self.split = osp.join(self.data_root, self.split) - - # load annotations - self.img_infos = self.load_annotations(self.img_dir, self.img_suffix, - self.ann_dir, - self.seg_map_suffix, self.split) - - def __len__(self): - """Total number of samples of data.""" - return len(self.img_infos) - - def load_annotations(self, img_dir, img_suffix, ann_dir, seg_map_suffix, - split): - """Load annotation from directory. - - Args: - img_dir (str): Path to image directory - img_suffix (str): Suffix of images. - ann_dir (str|None): Path to annotation directory. - seg_map_suffix (str|None): Suffix of segmentation maps. - split (str|None): Split txt file. If split is specified, only file - with suffix in the splits will be loaded. Otherwise, all images - in img_dir/ann_dir will be loaded. Default: None - - Returns: - list[dict]: All image info of dataset. - """ - - img_infos = [] - if split is not None: - with open(split) as f: - for line in f: - img_name = line.strip() - img_info = dict(filename=img_name + img_suffix) - if ann_dir is not None: - seg_map = img_name + seg_map_suffix - img_info['ann'] = dict(seg_map=seg_map) - img_infos.append(img_info) - else: - for img in mmcv.scandir(img_dir, img_suffix, recursive=True): - img_info = dict(filename=img) - if ann_dir is not None: - seg_map = img.replace(img_suffix, seg_map_suffix) - img_info['ann'] = dict(seg_map=seg_map) - img_infos.append(img_info) - img_infos = sorted(img_infos, key=lambda x: x['filename']) - - print_log(f'Loaded {len(img_infos)} images', logger=get_root_logger()) - return img_infos - - def get_ann_info(self, idx): - """Get annotation by index. 
- - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.img_infos[idx]['ann'] - - def pre_pipeline(self, results): - """Prepare results dict for pipeline.""" - results['seg_fields'] = [] - results['img_prefix'] = self.img_dir - results['seg_prefix'] = self.ann_dir - if self.custom_classes: - results['label_map'] = self.label_map - - def __getitem__(self, idx): - """Get training/test data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training/test data (with annotation if `test_mode` is set - False). - """ - - if self.test_mode: - return self.prepare_test_img(idx) - else: - return self.prepare_train_img(idx) - - def prepare_train_img(self, idx): - """Get training data and annotations after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training data and annotation after pipeline with new keys - introduced by pipeline. - """ - - img_info = self.img_infos[idx] - ann_info = self.get_ann_info(idx) - results = dict(img_info=img_info, ann_info=ann_info) - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Get testing data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Testing data after pipeline with new keys introduced by - pipeline. - """ - - img_info = self.img_infos[idx] - results = dict(img_info=img_info) - self.pre_pipeline(results) - return self.pipeline(results) - - def format_results(self, results, imgfile_prefix, indices=None, **kwargs): - """Place holder to format result to dataset specific output.""" - raise NotImplementedError - - def get_gt_seg_map_by_idx(self, index): - """Get one ground truth segmentation map for evaluation.""" - ann_info = self.get_ann_info(index) - results = dict(ann_info=ann_info) - self.pre_pipeline(results) - self.gt_seg_map_loader(results) - return results['gt_semantic_seg'] - - def get_gt_seg_maps(self, efficient_test=None): - """Get ground truth segmentation maps for evaluation.""" - if efficient_test is not None: - warnings.warn( - 'DeprecationWarning: ``efficient_test`` has been deprecated ' - 'since MMSeg v0.16, the ``get_gt_seg_maps()`` is CPU memory ' - 'friendly by default. ') - - for idx in range(len(self)): - ann_info = self.get_ann_info(idx) - results = dict(ann_info=ann_info) - self.pre_pipeline(results) - self.gt_seg_map_loader(results) - yield results['gt_semantic_seg'] - - def pre_eval(self, preds, indices): - """Collect eval result from each iteration. - - Args: - preds (list[torch.Tensor] | torch.Tensor): the segmentation logit - after argmax, shape (N, H, W). - indices (list[int] | int): the prediction related ground truth - indices. - - Returns: - list[torch.Tensor]: (area_intersect, area_union, area_prediction, - area_ground_truth). - """ - # In order to compat with batch inference - if not isinstance(indices, list): - indices = [indices] - if not isinstance(preds, list): - preds = [preds] - - pre_eval_results = [] - - for pred, index in zip(preds, indices): - seg_map = self.get_gt_seg_map_by_idx(index) - pre_eval_results.append( - intersect_and_union(pred, seg_map, len(self.CLASSES), - self.ignore_index, self.label_map, - self.reduce_zero_label)) - - return pre_eval_results - - def get_classes_and_palette(self, classes=None, palette=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. 
The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - palette (Sequence[Sequence[int]]] | np.ndarray | None): - The palette of segmentation map. If None is given, random - palette will be generated. Default: None - """ - if classes is None: - self.custom_classes = False - return self.CLASSES, self.PALETTE - - self.custom_classes = True - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - if self.CLASSES: - if not set(class_names).issubset(self.CLASSES): - raise ValueError('classes is not a subset of CLASSES.') - - # dictionary, its keys are the old label ids and its values - # are the new label ids. - # used for changing pixel labels in load_annotations. - self.label_map = {} - for i, c in enumerate(self.CLASSES): - if c not in class_names: - self.label_map[i] = -1 - else: - self.label_map[i] = class_names.index(c) - - palette = self.get_palette_for_custom_classes(class_names, palette) - - return class_names, palette - - def get_palette_for_custom_classes(self, class_names, palette=None): - - if self.label_map is not None: - # return subset of palette - palette = [] - for old_id, new_id in sorted( - self.label_map.items(), key=lambda x: x[1]): - if new_id != -1: - palette.append(self.PALETTE[old_id]) - palette = type(self.PALETTE)(palette) - - elif palette is None: - if self.PALETTE is None: - palette = np.random.randint(0, 255, size=(len(class_names), 3)) - else: - palette = self.PALETTE - - return palette - - def evaluate(self, - results, - metric='mIoU', - logger=None, - gt_seg_maps=None, - **kwargs): - """Evaluate the dataset. - - Args: - results (list[tuple[torch.Tensor]] | list[str]): per image pre_eval - results or predict segmentation map for computing evaluation - metric. - metric (str | list[str]): Metrics to be evaluated. 'mIoU', - 'mDice' and 'mFscore' are supported. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - gt_seg_maps (generator[ndarray]): Custom gt seg maps as input, - used in ConcatDataset - - Returns: - dict[str, float]: Default metrics. - """ - if isinstance(metric, str): - metric = [metric] - allowed_metrics = ['mIoU', 'mDice', 'mFscore'] - if not set(metric).issubset(set(allowed_metrics)): - raise KeyError('metric {} is not supported'.format(metric)) - - eval_results = {} - # test a list of files - if mmcv.is_list_of(results, np.ndarray) or mmcv.is_list_of( - results, str): - if gt_seg_maps is None: - gt_seg_maps = self.get_gt_seg_maps() - num_classes = len(self.CLASSES) - ret_metrics = eval_metrics( - results, - gt_seg_maps, - num_classes, - self.ignore_index, - metric, - label_map=self.label_map, - reduce_zero_label=self.reduce_zero_label) - # test a list of pre_eval_results - else: - ret_metrics = pre_eval_to_metrics(results, metric) - - # Because dataset.CLASSES is required for per-eval. 
- if self.CLASSES is None: - class_names = tuple(range(num_classes)) - else: - class_names = self.CLASSES - - # summary table - ret_metrics_summary = OrderedDict({ - ret_metric: np.round(np.nanmean(ret_metric_value) * 100, 2) - for ret_metric, ret_metric_value in ret_metrics.items() - }) - - # each class table - ret_metrics.pop('aAcc', None) - ret_metrics_class = OrderedDict({ - ret_metric: np.round(ret_metric_value * 100, 2) - for ret_metric, ret_metric_value in ret_metrics.items() - }) - ret_metrics_class.update({'Class': class_names}) - ret_metrics_class.move_to_end('Class', last=False) - - # for logger - class_table_data = PrettyTable() - for key, val in ret_metrics_class.items(): - class_table_data.add_column(key, val) - - summary_table_data = PrettyTable() - for key, val in ret_metrics_summary.items(): - if key == 'aAcc': - summary_table_data.add_column(key, [val]) - else: - summary_table_data.add_column('m' + key, [val]) - - print_log('per class results:', logger) - print_log('\n' + class_table_data.get_string(), logger=logger) - print_log('Summary:', logger) - print_log('\n' + summary_table_data.get_string(), logger=logger) - - # each metric dict - for key, value in ret_metrics_summary.items(): - if key == 'aAcc': - eval_results[key] = value / 100.0 - else: - eval_results['m' + key] = value / 100.0 - - ret_metrics_class.pop('Class', None) - for key, value in ret_metrics_class.items(): - eval_results.update({ - key + '.' + str(name): value[idx] / 100.0 - for idx, name in enumerate(class_names) - }) - - return eval_results diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/dark_zurich.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/dark_zurich.py deleted file mode 100644 index efc088f3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/dark_zurich.py +++ /dev/null @@ -1,13 +0,0 @@ -from .builder import DATASETS -from .cityscapes import CityscapesDataset - - -@DATASETS.register_module() -class DarkZurichDataset(CityscapesDataset): - """DarkZurichDataset dataset.""" - - def __init__(self, **kwargs): - super().__init__( - img_suffix='_rgb_anon.png', - seg_map_suffix='_gt_labelTrainIds.png', - **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/dataset_wrappers.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/dataset_wrappers.py deleted file mode 100644 index 0349332e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/dataset_wrappers.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import bisect -from itertools import chain - -import mmcv -import numpy as np -from mmcv.utils import print_log -from torch.utils.data.dataset import ConcatDataset as _ConcatDataset - -from .builder import DATASETS -from .cityscapes import CityscapesDataset - - -@DATASETS.register_module() -class ConcatDataset(_ConcatDataset): - """A wrapper of concatenated dataset. - - Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but - support evaluation and formatting results - - Args: - datasets (list[:obj:`Dataset`]): A list of datasets. - separate_eval (bool): Whether to evaluate the concatenated - dataset results separately, Defaults to True. 
- """ - - def __init__(self, datasets, separate_eval=True): - super(ConcatDataset, self).__init__(datasets) - self.CLASSES = datasets[0].CLASSES - self.PALETTE = datasets[0].PALETTE - self.separate_eval = separate_eval - assert separate_eval in [True, False], \ - f'separate_eval can only be True or False,' \ - f'but get {separate_eval}' - if any([isinstance(ds, CityscapesDataset) for ds in datasets]): - raise NotImplementedError( - 'Evaluating ConcatDataset containing CityscapesDataset' - 'is not supported!') - - def evaluate(self, results, logger=None, **kwargs): - """Evaluate the results. - - Args: - results (list[tuple[torch.Tensor]] | list[str]]): per image - pre_eval results or predict segmentation map for - computing evaluation metric. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: evaluate results of the total dataset - or each separate - dataset if `self.separate_eval=True`. - """ - assert len(results) == self.cumulative_sizes[-1], \ - ('Dataset and results have different sizes: ' - f'{self.cumulative_sizes[-1]} v.s. {len(results)}') - - # Check whether all the datasets support evaluation - for dataset in self.datasets: - assert hasattr(dataset, 'evaluate'), \ - f'{type(dataset)} does not implement evaluate function' - - if self.separate_eval: - dataset_idx = -1 - total_eval_results = dict() - for size, dataset in zip(self.cumulative_sizes, self.datasets): - start_idx = 0 if dataset_idx == -1 else \ - self.cumulative_sizes[dataset_idx] - end_idx = self.cumulative_sizes[dataset_idx + 1] - - results_per_dataset = results[start_idx:end_idx] - print_log( - f'\nEvaluateing {dataset.img_dir} with ' - f'{len(results_per_dataset)} images now', - logger=logger) - - eval_results_per_dataset = dataset.evaluate( - results_per_dataset, logger=logger, **kwargs) - dataset_idx += 1 - for k, v in eval_results_per_dataset.items(): - total_eval_results.update({f'{dataset_idx}_{k}': v}) - - return total_eval_results - - if len(set([type(ds) for ds in self.datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types when ' - 'self.separate_eval=False') - else: - if mmcv.is_list_of(results, np.ndarray) or mmcv.is_list_of( - results, str): - # merge the generators of gt_seg_maps - gt_seg_maps = chain( - *[dataset.get_gt_seg_maps() for dataset in self.datasets]) - else: - # if the results are `pre_eval` results, - # we do not need gt_seg_maps to evaluate - gt_seg_maps = None - eval_results = self.datasets[0].evaluate( - results, gt_seg_maps=gt_seg_maps, logger=logger, **kwargs) - return eval_results - - def get_dataset_idx_and_sample_idx(self, indice): - """Return dataset and sample index when given an indice of - ConcatDataset. 
- - Args: - indice (int): indice of sample in ConcatDataset - - Returns: - int: the index of sub dataset the sample belong to - int: the index of sample in its corresponding subset - """ - if indice < 0: - if -indice > len(self): - raise ValueError( - 'absolute value of index should not exceed dataset length') - indice = len(self) + indice - dataset_idx = bisect.bisect_right(self.cumulative_sizes, indice) - if dataset_idx == 0: - sample_idx = indice - else: - sample_idx = indice - self.cumulative_sizes[dataset_idx - 1] - return dataset_idx, sample_idx - - def format_results(self, results, imgfile_prefix, indices=None, **kwargs): - """format result for every sample of ConcatDataset.""" - if indices is None: - indices = list(range(len(self))) - - assert isinstance(results, list), 'results must be a list.' - assert isinstance(indices, list), 'indices must be a list.' - - ret_res = [] - for i, indice in enumerate(indices): - dataset_idx, sample_idx = self.get_dataset_idx_and_sample_idx( - indice) - res = self.datasets[dataset_idx].format_results( - [results[i]], - imgfile_prefix + f'/{dataset_idx}', - indices=[sample_idx], - **kwargs) - ret_res.append(res) - return sum(ret_res, []) - - def pre_eval(self, preds, indices): - """do pre eval for every sample of ConcatDataset.""" - # In order to compat with batch inference - if not isinstance(indices, list): - indices = [indices] - if not isinstance(preds, list): - preds = [preds] - ret_res = [] - for i, indice in enumerate(indices): - dataset_idx, sample_idx = self.get_dataset_idx_and_sample_idx( - indice) - res = self.datasets[dataset_idx].pre_eval(preds[i], sample_idx) - ret_res.append(res) - return sum(ret_res, []) - - -@DATASETS.register_module() -class RepeatDataset(object): - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. - """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - self.CLASSES = dataset.CLASSES - self.PALETTE = dataset.PALETTE - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - """Get item from original dataset.""" - return self.dataset[idx % self._ori_len] - - def __len__(self): - """The length is multiplied by ``times``""" - return self.times * self._ori_len diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/drive.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/drive.py deleted file mode 100644 index 65099114..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/drive.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class DRIVEDataset(CustomDataset): - """DRIVE dataset. - - In segmentation map annotation for DRIVE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '_manual1.png'. 
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(DRIVEDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='_manual1.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/hrf.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/hrf.py deleted file mode 100644 index e4e10aea..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/hrf.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class HRFDataset(CustomDataset): - """HRF dataset. - - In segmentation map annotation for HRF, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(HRFDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/loveda.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/loveda.py deleted file mode 100644 index 90d654f6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/loveda.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -import mmcv -import numpy as np -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class LoveDADataset(CustomDataset): - """LoveDA dataset. - - In segmentation map annotation for LoveDA, 0 is the ignore index. - ``reduce_zero_label`` should be set to True. The ``img_suffix`` and - ``seg_map_suffix`` are both fixed to '.png'. - """ - CLASSES = ('background', 'building', 'road', 'water', 'barren', 'forest', - 'agricultural') - - PALETTE = [[255, 255, 255], [255, 0, 0], [255, 255, 0], [0, 0, 255], - [159, 129, 183], [0, 255, 0], [255, 195, 128]] - - def __init__(self, **kwargs): - super(LoveDADataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.png', - reduce_zero_label=True, - **kwargs) - - def results2img(self, results, imgfile_prefix, indices=None): - """Write the segmentation results to images. - - Args: - results (list[ndarray]): Testing results of the - dataset. - imgfile_prefix (str): The filename prefix of the png files. - If the prefix is "somepath/xxx", - the png files will be named "somepath/xxx.png". - indices (list[int], optional): Indices of input results, if not - set, all the indices of the dataset will be used. - Default: None. - - Returns: - list[str: str]: result txt files which contains corresponding - semantic segmentation images. - """ - - mmcv.mkdir_or_exist(imgfile_prefix) - result_files = [] - for result, idx in zip(results, indices): - - filename = self.img_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - - png_filename = osp.join(imgfile_prefix, f'{basename}.png') - - # The index range of official requirement is from 0 to 6. 
- output = Image.fromarray(result.astype(np.uint8)) - output.save(png_filename) - result_files.append(png_filename) - - return result_files - - def format_results(self, results, imgfile_prefix, indices=None): - """Format the results into dir (standard format for LoveDA evaluation). - - Args: - results (list): Testing results of the dataset. - imgfile_prefix (str): The prefix of images files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". - indices (list[int], optional): Indices of input results, - if not set, all the indices of the dataset will be used. - Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a list containing - the image paths, tmp_dir is the temporal directory created - for saving json/png files when img_prefix is not specified. - """ - if indices is None: - indices = list(range(len(self))) - - assert isinstance(results, list), 'results must be a list.' - assert isinstance(indices, list), 'indices must be a list.' - - result_files = self.results2img(results, imgfile_prefix, indices) - - return result_files diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/night_driving.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/night_driving.py deleted file mode 100644 index a9289a27..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/night_driving.py +++ /dev/null @@ -1,13 +0,0 @@ -from .builder import DATASETS -from .cityscapes import CityscapesDataset - - -@DATASETS.register_module() -class NightDrivingDataset(CityscapesDataset): - """NightDrivingDataset dataset.""" - - def __init__(self, **kwargs): - super().__init__( - img_suffix='_leftImg8bit.png', - seg_map_suffix='_gtCoarse_labelTrainIds.png', - **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pascal_context.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/pascal_context.py deleted file mode 100644 index 1e7a09d7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pascal_context.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class PascalContextDataset(CustomDataset): - """PascalContext dataset. - - In segmentation map annotation for PascalContext, 0 stands for background, - which is included in 60 categories. ``reduce_zero_label`` is fixed to - False. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is - fixed to '.png'. - - Args: - split (str): Split txt file for PascalContext. 
- """ - - CLASSES = ('background', 'aeroplane', 'bag', 'bed', 'bedclothes', 'bench', - 'bicycle', 'bird', 'boat', 'book', 'bottle', 'building', 'bus', - 'cabinet', 'car', 'cat', 'ceiling', 'chair', 'cloth', - 'computer', 'cow', 'cup', 'curtain', 'dog', 'door', 'fence', - 'floor', 'flower', 'food', 'grass', 'ground', 'horse', - 'keyboard', 'light', 'motorbike', 'mountain', 'mouse', 'person', - 'plate', 'platform', 'pottedplant', 'road', 'rock', 'sheep', - 'shelves', 'sidewalk', 'sign', 'sky', 'snow', 'sofa', 'table', - 'track', 'train', 'tree', 'truck', 'tvmonitor', 'wall', 'water', - 'window', 'wood') - - PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255]] - - def __init__(self, split, **kwargs): - super(PascalContextDataset, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - split=split, - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) and self.split is not None - - -@DATASETS.register_module() -class PascalContextDataset59(CustomDataset): - """PascalContext dataset. - - In segmentation map annotation for PascalContext, 0 stands for background, - which is included in 60 categories. ``reduce_zero_label`` is fixed to - False. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is - fixed to '.png'. - - Args: - split (str): Split txt file for PascalContext. 
- """ - - CLASSES = ('aeroplane', 'bag', 'bed', 'bedclothes', 'bench', 'bicycle', - 'bird', 'boat', 'book', 'bottle', 'building', 'bus', 'cabinet', - 'car', 'cat', 'ceiling', 'chair', 'cloth', 'computer', 'cow', - 'cup', 'curtain', 'dog', 'door', 'fence', 'floor', 'flower', - 'food', 'grass', 'ground', 'horse', 'keyboard', 'light', - 'motorbike', 'mountain', 'mouse', 'person', 'plate', 'platform', - 'pottedplant', 'road', 'rock', 'sheep', 'shelves', 'sidewalk', - 'sign', 'sky', 'snow', 'sofa', 'table', 'track', 'train', - 'tree', 'truck', 'tvmonitor', 'wall', 'water', 'window', 'wood') - - PALETTE = [[180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], - [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230], - [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], - [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], - [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200], - [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], - [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92], - [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6], - [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], - [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8], - [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], - [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140], - [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], - [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], - [0, 235, 255], [0, 173, 255], [31, 0, 255]] - - def __init__(self, split, **kwargs): - super(PascalContextDataset59, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - split=split, - reduce_zero_label=True, - **kwargs) - assert osp.exists(self.img_dir) and self.split is not None diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/__init__.py deleted file mode 100644 index 91d9e474..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .compose import Compose -from .formatting import (Collect, ImageToTensor, ToDataContainer, ToTensor, - Transpose, to_tensor) -from .loading import LoadAnnotations, LoadImageFromFile -from .test_time_aug import MultiScaleFlipAug -from .transforms import (CLAHE, AdjustGamma, Normalize, Pad, - PhotoMetricDistortion, RandomCrop, RandomCutOut, - RandomFlip, RandomRotate, Rerange, Resize, RGB2Gray, - SegRescale) - -__all__ = [ - 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer', - 'Transpose', 'Collect', 'LoadAnnotations', 'LoadImageFromFile', - 'MultiScaleFlipAug', 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', - 'Normalize', 'SegRescale', 'PhotoMetricDistortion', 'RandomRotate', - 'AdjustGamma', 'CLAHE', 'Rerange', 'RGB2Gray', 'RandomCutOut' -] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/compose.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/compose.py deleted file mode 100644 index 30280c13..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/compose.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections - -from mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose(object): - """Compose multiple transforms sequentially. 
- - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. - """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/formating.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/formating.py deleted file mode 100644 index f6e53bfe..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/formating.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# flake8: noqa -import warnings - -from .formatting import * - -warnings.warn('DeprecationWarning: mmseg.datasets.pipelines.formating will be ' - 'deprecated in 2021, please replace it with ' - 'mmseg.datasets.pipelines.formatting.') diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/formatting.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/formatting.py deleted file mode 100644 index 4e057c1b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/formatting.py +++ /dev/null @@ -1,289 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Sequence - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC - -from ..builder import PIPELINES - - -def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. - - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, - :class:`Sequence`, :class:`int` and :class:`float`. - - Args: - data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to - be converted. - """ - - if isinstance(data, torch.Tensor): - return data - elif isinstance(data, np.ndarray): - return torch.from_numpy(data) - elif isinstance(data, Sequence) and not mmcv.is_str(data): - return torch.tensor(data) - elif isinstance(data, int): - return torch.LongTensor([data]) - elif isinstance(data, float): - return torch.FloatTensor([data]) - else: - raise TypeError(f'type {type(data)} cannot be converted to tensor.') - - -@PIPELINES.register_module() -class ToTensor(object): - """Convert some results to :obj:`torch.Tensor` by given keys. - - Args: - keys (Sequence[str]): Keys that need to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert data in results to :obj:`torch.Tensor`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted - to :obj:`torch.Tensor`. 
- """ - - for key in self.keys: - results[key] = to_tensor(results[key]) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class ImageToTensor(object): - """Convert image to :obj:`torch.Tensor` by given keys. - - The dimension order of input image is (H, W, C). The pipeline will convert - it to (C, H, W). If only 2 dimension (H, W) is given, the output would be - (1, H, W). - - Args: - keys (Sequence[str]): Key of images to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - - for key in self.keys: - img = results[key] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - results[key] = to_tensor(img.transpose(2, 0, 1)) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class Transpose(object): - """Transpose some results by given keys. - - Args: - keys (Sequence[str]): Keys of results to be transposed. - order (Sequence[int]): Order of transpose. - """ - - def __init__(self, keys, order): - self.keys = keys - self.order = order - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - - for key in self.keys: - results[key] = results[key].transpose(self.order) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, order={self.order})' - - -@PIPELINES.register_module() -class ToDataContainer(object): - """Convert results to :obj:`mmcv.DataContainer` by given fields. - - Args: - fields (Sequence[dict]): Each field is a dict like - ``dict(key='xxx', **kwargs)``. The ``key`` in result will - be converted to :obj:`mmcv.DataContainer` with ``**kwargs``. - Default: ``(dict(key='img', stack=True), - dict(key='gt_semantic_seg'))``. - """ - - def __init__(self, - fields=(dict(key='img', - stack=True), dict(key='gt_semantic_seg'))): - self.fields = fields - - def __call__(self, results): - """Call function to convert data in results to - :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted to - :obj:`mmcv.DataContainer`. - """ - - for field in self.fields: - field = field.copy() - key = field.pop('key') - results[key] = DC(results[key], **field) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(fields={self.fields})' - - -@PIPELINES.register_module() -class DefaultFormatBundle(object): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img" - and "gt_semantic_seg". These fields are formatted as follows. 
- - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, - (3)to DataContainer (stack=True) - """ - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with - default bundle. - """ - - if 'img' in results: - img = results['img'] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - results['img'] = DC(to_tensor(img), stack=True) - if 'gt_semantic_seg' in results: - # convert to long - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, - ...].astype(np.int64)), - stack=True) - return results - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class Collect(object): - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "gt_semantic_seg". - - The "img_meta" item is always populated. The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - "img_shape": shape of the image input to the network as a tuple - (h, w, c). Note that images may be zero padded on the bottom/right - if the batch tensor is larger than this shape. - - - "scale_factor": a float indicating the preprocessing scale - - - "flip": a boolean indicating if image flip transform was used - - - "filename": path to the image file - - - "ori_shape": original shape of the image as a tuple (h, w, c) - - - "pad_shape": image shape after padding - - - "img_norm_cfg": a dict of normalization information: - - mean - per channel mean subtraction - - std - per channel std divisor - - to_rgb - bool indicating if bgr was converted to rgb - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. - Default: (``filename``, ``ori_filename``, ``ori_shape``, - ``img_shape``, ``pad_shape``, ``scale_factor``, ``flip``, - ``flip_direction``, ``img_norm_cfg``) - """ - - def __init__(self, - keys, - meta_keys=('filename', 'ori_filename', 'ori_shape', - 'img_shape', 'pad_shape', 'scale_factor', 'flip', - 'flip_direction', 'img_norm_cfg')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:mmcv.DataContainer. - - Args: - results (dict): Result dict contains the data to collect. - - Returns: - dict: The result dict contains the following keys - - keys in``self.keys`` - - ``img_metas`` - """ - - data = {} - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data['img_metas'] = DC(img_meta, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/loading.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/loading.py deleted file mode 100644 index e1c82bd3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/loading.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import os.path as osp - -import mmcv -import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class LoadImageFromFile(object): - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'cv2' - """ - - def __init__(self, - to_float32=False, - color_type='color', - file_client_args=dict(backend='disk'), - imdecode_backend='cv2'): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('img_prefix') is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, backend=self.imdecode_backend) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(to_float32={self.to_float32},' - repr_str += f"color_type='{self.color_type}'," - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations(object): - """Load annotations for semantic segmentation. - - Args: - reduce_zero_label (bool): Whether reduce all label value by 1. - Usually used for datasets where 0 is background label. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. 
Default: - 'pillow' - """ - - def __init__(self, - reduce_zero_label=False, - file_client_args=dict(backend='disk'), - imdecode_backend='pillow'): - self.reduce_zero_label = reduce_zero_label - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('seg_prefix', None) is not None: - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - else: - filename = results['ann_info']['seg_map'] - img_bytes = self.file_client.get(filename) - gt_semantic_seg = mmcv.imfrombytes( - img_bytes, flag='unchanged', - backend=self.imdecode_backend).squeeze().astype(np.uint8) - # modify if custom classes - if results.get('label_map', None) is not None: - for old_id, new_id in results['label_map'].items(): - gt_semantic_seg[gt_semantic_seg == old_id] = new_id - # reduce zero_label - if self.reduce_zero_label: - # avoid using underflow conversion - gt_semantic_seg[gt_semantic_seg == 0] = 255 - gt_semantic_seg = gt_semantic_seg - 1 - gt_semantic_seg[gt_semantic_seg == 254] = 255 - results['gt_semantic_seg'] = gt_semantic_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(reduce_zero_label={self.reduce_zero_label},' - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/test_time_aug.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/test_time_aug.py deleted file mode 100644 index 5c17cbbb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug(object): - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. code-block:: - - img_scale=(2048, 1024), - img_ratios=[0.5, 1.0], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1024, 512), (1024, 512), (2048, 1024), (2048, 1024)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (None | tuple | list[tuple]): Images scales for resizing. - img_ratios (float | list[float]): Image ratios for resizing - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal" and "vertical". If flip_direction is list, - multiple flip augmentations will be applied. - It has no effect when flip == False. Default: "horizontal". 
- """ - - def __init__(self, - transforms, - img_scale, - img_ratios=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - if img_ratios is not None: - img_ratios = img_ratios if isinstance(img_ratios, - list) else [img_ratios] - assert mmcv.is_list_of(img_ratios, float) - if img_scale is None: - # mode 1: given img_scale=None and a range of image ratio - self.img_scale = None - assert mmcv.is_list_of(img_ratios, float) - elif isinstance(img_scale, tuple) and mmcv.is_list_of( - img_ratios, float): - assert len(img_scale) == 2 - # mode 2: given a scale and a range of image ratio - self.img_scale = [(int(img_scale[0] * ratio), - int(img_scale[1] * ratio)) - for ratio in img_ratios] - else: - # mode 3: given multiple scales - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) or self.img_scale is None - self.flip = flip - self.img_ratios = img_ratios - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - if self.img_scale is None and mmcv.is_list_of(self.img_ratios, float): - h, w = results['img'].shape[:2] - img_scale = [(int(w * ratio), int(h * ratio)) - for ratio in self.img_ratios] - else: - img_scale = self.img_scale - flip_aug = [False, True] if self.flip else [False] - for scale in img_scale: - for flip in flip_aug: - for direction in self.flip_direction: - _results = results.copy() - _results['scale'] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip})' - repr_str += f'flip_direction={self.flip_direction}' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/transforms.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/transforms.py deleted file mode 100644 index 567c960a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/pipelines/transforms.py +++ /dev/null @@ -1,1042 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -from mmcv.utils import deprecated_api_warning, is_tuple_of -from numpy import random - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class ResizeToMultiple(object): - """Resize images & seg to multiple of divisor. - - Args: - size_divisor (int): images and gt seg maps need to resize to multiple - of size_divisor. Default: 32. - interpolation (str, optional): The interpolation mode of image resize. 
- Default: None - """ - - def __init__(self, size_divisor=32, interpolation=None): - self.size_divisor = size_divisor - self.interpolation = interpolation - - def __call__(self, results): - """Call function to resize images, semantic segmentation map to - multiple of size divisor. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape' keys are updated. - """ - # Align image to multiple of size divisor. - img = results['img'] - img = mmcv.imresize_to_multiple( - img, - self.size_divisor, - scale_factor=1, - interpolation=self.interpolation - if self.interpolation else 'bilinear') - - results['img'] = img - results['img_shape'] = img.shape - results['pad_shape'] = img.shape - - # Align segmentation map to multiple of size divisor. - for key in results.get('seg_fields', []): - gt_seg = results[key] - gt_seg = mmcv.imresize_to_multiple( - gt_seg, - self.size_divisor, - scale_factor=1, - interpolation='nearest') - results[key] = gt_seg - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(size_divisor={self.size_divisor}, ' - f'interpolation={self.interpolation})') - return repr_str - - -@PIPELINES.register_module() -class Resize(object): - """Resize images & seg. - - This transform resizes the input image to some scale. If the input dict - contains the key "scale", then the scale in the input dict is used, - otherwise the specified scale in the init method is used. - - ``img_scale`` can be None, a tuple (single-scale) or a list of tuple - (multi-scale). There are 4 multiscale modes: - - - ``ratio_range is not None``: - 1. When img_scale is None, img_scale is the shape of image in results - (img_scale = results['img'].shape[:2]) and the image is resized based - on the original size. (mode 1) - 2. When img_scale is a tuple (single-scale), randomly sample a ratio from - the ratio range and multiply it with the image scale. (mode 2) - - - ``ratio_range is None and multiscale_mode == "range"``: randomly sample a - scale from the a range. (mode 3) - - - ``ratio_range is None and multiscale_mode == "value"``: randomly sample a - scale from multiple scales. (mode 4) - - Args: - img_scale (tuple or list[tuple]): Images scales for resizing. - Default:None. - multiscale_mode (str): Either "range" or "value". - Default: 'range' - ratio_range (tuple[float]): (min_ratio, max_ratio). - Default: None - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. Default: True - """ - - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=True): - if img_scale is None: - self.img_scale = None - else: - if isinstance(img_scale, list): - self.img_scale = img_scale - else: - self.img_scale = [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) - - if ratio_range is not None: - # mode 1: given img_scale=None and a range of image ratio - # mode 2: given a scale and a range of image ratio - assert self.img_scale is None or len(self.img_scale) == 1 - else: - # mode 3 and 4: given multiple scales or a range of scales - assert multiscale_mode in ['value', 'range'] - - self.multiscale_mode = multiscale_mode - self.ratio_range = ratio_range - self.keep_ratio = keep_ratio - - @staticmethod - def random_select(img_scales): - """Randomly select an img_scale from given candidates. - - Args: - img_scales (list[tuple]): Images scales for selection. 
- - Returns: - (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, - where ``img_scale`` is the selected image scale and - ``scale_idx`` is the selected index in the given candidates. - """ - - assert mmcv.is_list_of(img_scales, tuple) - scale_idx = np.random.randint(len(img_scales)) - img_scale = img_scales[scale_idx] - return img_scale, scale_idx - - @staticmethod - def random_sample(img_scales): - """Randomly sample an img_scale when ``multiscale_mode=='range'``. - - Args: - img_scales (list[tuple]): Images scale range for sampling. - There must be two tuples in img_scales, which specify the lower - and upper bound of image scales. - - Returns: - (tuple, None): Returns a tuple ``(img_scale, None)``, where - ``img_scale`` is sampled scale and None is just a placeholder - to be consistent with :func:`random_select`. - """ - - assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 - img_scale_long = [max(s) for s in img_scales] - img_scale_short = [min(s) for s in img_scales] - long_edge = np.random.randint( - min(img_scale_long), - max(img_scale_long) + 1) - short_edge = np.random.randint( - min(img_scale_short), - max(img_scale_short) + 1) - img_scale = (long_edge, short_edge) - return img_scale, None - - @staticmethod - def random_sample_ratio(img_scale, ratio_range): - """Randomly sample an img_scale when ``ratio_range`` is specified. - - A ratio will be randomly sampled from the range specified by - ``ratio_range``. Then it would be multiplied with ``img_scale`` to - generate sampled scale. - - Args: - img_scale (tuple): Images scale base to multiply with ratio. - ratio_range (tuple[float]): The minimum and maximum ratio to scale - the ``img_scale``. - - Returns: - (tuple, None): Returns a tuple ``(scale, None)``, where - ``scale`` is sampled ratio multiplied with ``img_scale`` and - None is just a placeholder to be consistent with - :func:`random_select`. - """ - - assert isinstance(img_scale, tuple) and len(img_scale) == 2 - min_ratio, max_ratio = ratio_range - assert min_ratio <= max_ratio - ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio - scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) - return scale, None - - def _random_scale(self, results): - """Randomly sample an img_scale according to ``ratio_range`` and - ``multiscale_mode``. - - If ``ratio_range`` is specified, a ratio will be sampled and be - multiplied with ``img_scale``. - If multiple scales are specified by ``img_scale``, a scale will be - sampled according to ``multiscale_mode``. - Otherwise, single scale will be used. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: Two new keys 'scale` and 'scale_idx` are added into - ``results``, which would be used by subsequent pipelines. 
- """ - - if self.ratio_range is not None: - if self.img_scale is None: - h, w = results['img'].shape[:2] - scale, scale_idx = self.random_sample_ratio((w, h), - self.ratio_range) - else: - scale, scale_idx = self.random_sample_ratio( - self.img_scale[0], self.ratio_range) - elif len(self.img_scale) == 1: - scale, scale_idx = self.img_scale[0], 0 - elif self.multiscale_mode == 'range': - scale, scale_idx = self.random_sample(self.img_scale) - elif self.multiscale_mode == 'value': - scale, scale_idx = self.random_select(self.img_scale) - else: - raise NotImplementedError - - results['scale'] = scale - results['scale_idx'] = scale_idx - - def _resize_img(self, results): - """Resize images with ``results['scale']``.""" - if self.keep_ratio: - img, scale_factor = mmcv.imrescale( - results['img'], results['scale'], return_scale=True) - # the w_scale and h_scale has minor difference - # a real fix should be done in the mmcv.imrescale in the future - new_h, new_w = img.shape[:2] - h, w = results['img'].shape[:2] - w_scale = new_w / w - h_scale = new_h / h - else: - img, w_scale, h_scale = mmcv.imresize( - results['img'], results['scale'], return_scale=True) - scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], - dtype=np.float32) - results['img'] = img - results['img_shape'] = img.shape - results['pad_shape'] = img.shape # in case that there is no padding - results['scale_factor'] = scale_factor - results['keep_ratio'] = self.keep_ratio - - def _resize_seg(self, results): - """Resize semantic segmentation map with ``results['scale']``.""" - for key in results.get('seg_fields', []): - if self.keep_ratio: - gt_seg = mmcv.imrescale( - results[key], results['scale'], interpolation='nearest') - else: - gt_seg = mmcv.imresize( - results[key], results['scale'], interpolation='nearest') - results[key] = gt_seg - - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', - 'keep_ratio' keys are added into result dict. - """ - - if 'scale' not in results: - self._random_scale(results) - self._resize_img(results) - self._resize_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(img_scale={self.img_scale}, ' - f'multiscale_mode={self.multiscale_mode}, ' - f'ratio_range={self.ratio_range}, ' - f'keep_ratio={self.keep_ratio})') - return repr_str - - -@PIPELINES.register_module() -class RandomFlip(object): - """Flip the image & seg. - - If the input dict contains the key "flip", then the flag will be used, - otherwise it will be randomly decided by a ratio specified in the init - method. - - Args: - prob (float, optional): The flipping probability. Default: None. - direction(str, optional): The flipping direction. Options are - 'horizontal' and 'vertical'. Default: 'horizontal'. - """ - - @deprecated_api_warning({'flip_ratio': 'prob'}, cls_name='RandomFlip') - def __init__(self, prob=None, direction='horizontal'): - self.prob = prob - self.direction = direction - if prob is not None: - assert prob >= 0 and prob <= 1 - assert direction in ['horizontal', 'vertical'] - - def __call__(self, results): - """Call function to flip bounding boxes, masks, semantic segmentation - maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Flipped results, 'flip', 'flip_direction' keys are added into - result dict. 
- """ - - if 'flip' not in results: - flip = True if np.random.rand() < self.prob else False - results['flip'] = flip - if 'flip_direction' not in results: - results['flip_direction'] = self.direction - if results['flip']: - # flip image - results['img'] = mmcv.imflip( - results['img'], direction=results['flip_direction']) - - # flip segs - for key in results.get('seg_fields', []): - # use copy() to make numpy stride positive - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']).copy() - return results - - def __repr__(self): - return self.__class__.__name__ + f'(prob={self.prob})' - - -@PIPELINES.register_module() -class Pad(object): - """Pad the image & mask. - - There are two padding modes: (1) pad to a fixed size and (2) pad to the - minimum size that is divisible by some number. - Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor", - - Args: - size (tuple, optional): Fixed padding size. - size_divisor (int, optional): The divisor of padded size. - pad_val (float, optional): Padding value. Default: 0. - seg_pad_val (float, optional): Padding value of segmentation map. - Default: 255. - """ - - def __init__(self, - size=None, - size_divisor=None, - pad_val=0, - seg_pad_val=255): - self.size = size - self.size_divisor = size_divisor - self.pad_val = pad_val - self.seg_pad_val = seg_pad_val - # only one of size and size_divisor should be valid - assert size is not None or size_divisor is not None - assert size is None or size_divisor is None - - def _pad_img(self, results): - """Pad images according to ``self.size``.""" - if self.size is not None: - padded_img = mmcv.impad( - results['img'], shape=self.size, pad_val=self.pad_val) - elif self.size_divisor is not None: - padded_img = mmcv.impad_to_multiple( - results['img'], self.size_divisor, pad_val=self.pad_val) - results['img'] = padded_img - results['pad_shape'] = padded_img.shape - results['pad_fixed_size'] = self.size - results['pad_size_divisor'] = self.size_divisor - - def _pad_seg(self, results): - """Pad masks according to ``results['pad_shape']``.""" - for key in results.get('seg_fields', []): - results[key] = mmcv.impad( - results[key], - shape=results['pad_shape'][:2], - pad_val=self.seg_pad_val) - - def __call__(self, results): - """Call function to pad images, masks, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Updated result dict. - """ - - self._pad_img(results) - self._pad_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(size={self.size}, size_divisor={self.size_divisor}, ' \ - f'pad_val={self.pad_val})' - return repr_str - - -@PIPELINES.register_module() -class Normalize(object): - """Normalize the image. - - Added key is "img_norm_cfg". - - Args: - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB, - default is true. - """ - - def __init__(self, mean, std, to_rgb=True): - self.mean = np.array(mean, dtype=np.float32) - self.std = np.array(std, dtype=np.float32) - self.to_rgb = to_rgb - - def __call__(self, results): - """Call function to normalize images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Normalized results, 'img_norm_cfg' key is added into - result dict. 
- """ - - results['img'] = mmcv.imnormalize(results['img'], self.mean, self.std, - self.to_rgb) - results['img_norm_cfg'] = dict( - mean=self.mean, std=self.std, to_rgb=self.to_rgb) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, std={self.std}, to_rgb=' \ - f'{self.to_rgb})' - return repr_str - - -@PIPELINES.register_module() -class Rerange(object): - """Rerange the image pixel value. - - Args: - min_value (float or int): Minimum value of the reranged image. - Default: 0. - max_value (float or int): Maximum value of the reranged image. - Default: 255. - """ - - def __init__(self, min_value=0, max_value=255): - assert isinstance(min_value, float) or isinstance(min_value, int) - assert isinstance(max_value, float) or isinstance(max_value, int) - assert min_value < max_value - self.min_value = min_value - self.max_value = max_value - - def __call__(self, results): - """Call function to rerange images. - - Args: - results (dict): Result dict from loading pipeline. - Returns: - dict: Reranged results. - """ - - img = results['img'] - img_min_value = np.min(img) - img_max_value = np.max(img) - - assert img_min_value < img_max_value - # rerange to [0, 1] - img = (img - img_min_value) / (img_max_value - img_min_value) - # rerange to [min_value, max_value] - img = img * (self.max_value - self.min_value) + self.min_value - results['img'] = img - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(min_value={self.min_value}, max_value={self.max_value})' - return repr_str - - -@PIPELINES.register_module() -class CLAHE(object): - """Use CLAHE method to process the image. - - See `ZUIDERVELD,K. Contrast Limited Adaptive Histogram Equalization[J]. - Graphics Gems, 1994:474-485.` for more information. - - Args: - clip_limit (float): Threshold for contrast limiting. Default: 40.0. - tile_grid_size (tuple[int]): Size of grid for histogram equalization. - Input image will be divided into equally sized rectangular tiles. - It defines the number of tiles in row and column. Default: (8, 8). - """ - - def __init__(self, clip_limit=40.0, tile_grid_size=(8, 8)): - assert isinstance(clip_limit, (float, int)) - self.clip_limit = clip_limit - assert is_tuple_of(tile_grid_size, int) - assert len(tile_grid_size) == 2 - self.tile_grid_size = tile_grid_size - - def __call__(self, results): - """Call function to Use CLAHE method process images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Processed results. - """ - - for i in range(results['img'].shape[2]): - results['img'][:, :, i] = mmcv.clahe( - np.array(results['img'][:, :, i], dtype=np.uint8), - self.clip_limit, self.tile_grid_size) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(clip_limit={self.clip_limit}, '\ - f'tile_grid_size={self.tile_grid_size})' - return repr_str - - -@PIPELINES.register_module() -class RandomCrop(object): - """Random crop the image & seg. - - Args: - crop_size (tuple): Expected size after cropping, (h, w). - cat_max_ratio (float): The maximum ratio that single category could - occupy. 
- """ - - def __init__(self, crop_size, cat_max_ratio=1., ignore_index=255): - assert crop_size[0] > 0 and crop_size[1] > 0 - self.crop_size = crop_size - self.cat_max_ratio = cat_max_ratio - self.ignore_index = ignore_index - - def get_crop_bbox(self, img): - """Randomly get a crop bounding box.""" - margin_h = max(img.shape[0] - self.crop_size[0], 0) - margin_w = max(img.shape[1] - self.crop_size[1], 0) - offset_h = np.random.randint(0, margin_h + 1) - offset_w = np.random.randint(0, margin_w + 1) - crop_y1, crop_y2 = offset_h, offset_h + self.crop_size[0] - crop_x1, crop_x2 = offset_w, offset_w + self.crop_size[1] - - return crop_y1, crop_y2, crop_x1, crop_x2 - - def crop(self, img, crop_bbox): - """Crop from ``img``""" - crop_y1, crop_y2, crop_x1, crop_x2 = crop_bbox - img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] - return img - - def __call__(self, results): - """Call function to randomly crop images, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - - img = results['img'] - crop_bbox = self.get_crop_bbox(img) - if self.cat_max_ratio < 1.: - # Repeat 10 times - for _ in range(10): - seg_temp = self.crop(results['gt_semantic_seg'], crop_bbox) - labels, cnt = np.unique(seg_temp, return_counts=True) - cnt = cnt[labels != self.ignore_index] - if len(cnt) > 1 and np.max(cnt) / np.sum( - cnt) < self.cat_max_ratio: - break - crop_bbox = self.get_crop_bbox(img) - - # crop the image - img = self.crop(img, crop_bbox) - img_shape = img.shape - results['img'] = img - results['img_shape'] = img_shape - - # crop semantic seg - for key in results.get('seg_fields', []): - results[key] = self.crop(results[key], crop_bbox) - - return results - - def __repr__(self): - return self.__class__.__name__ + f'(crop_size={self.crop_size})' - - -@PIPELINES.register_module() -class RandomRotate(object): - """Rotate the image & seg. - - Args: - prob (float): The rotation probability. - degree (float, tuple[float]): Range of degrees to select from. If - degree is a number instead of tuple like (min, max), - the range of degree will be (``-degree``, ``+degree``) - pad_val (float, optional): Padding value of image. Default: 0. - seg_pad_val (float, optional): Padding value of segmentation map. - Default: 255. - center (tuple[float], optional): Center point (w, h) of the rotation in - the source image. If not specified, the center of the image will be - used. Default: None. - auto_bound (bool): Whether to adjust the image size to cover the whole - rotated image. Default: False - """ - - def __init__(self, - prob, - degree, - pad_val=0, - seg_pad_val=255, - center=None, - auto_bound=False): - self.prob = prob - assert prob >= 0 and prob <= 1 - if isinstance(degree, (float, int)): - assert degree > 0, f'degree {degree} should be positive' - self.degree = (-degree, degree) - else: - self.degree = degree - assert len(self.degree) == 2, f'degree {self.degree} should be a ' \ - f'tuple of (min, max)' - self.pal_val = pad_val - self.seg_pad_val = seg_pad_val - self.center = center - self.auto_bound = auto_bound - - def __call__(self, results): - """Call function to rotate image, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Rotated results. 
- """ - - rotate = True if np.random.rand() < self.prob else False - degree = np.random.uniform(min(*self.degree), max(*self.degree)) - if rotate: - # rotate image - results['img'] = mmcv.imrotate( - results['img'], - angle=degree, - border_value=self.pal_val, - center=self.center, - auto_bound=self.auto_bound) - - # rotate segs - for key in results.get('seg_fields', []): - results[key] = mmcv.imrotate( - results[key], - angle=degree, - border_value=self.seg_pad_val, - center=self.center, - auto_bound=self.auto_bound, - interpolation='nearest') - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(prob={self.prob}, ' \ - f'degree={self.degree}, ' \ - f'pad_val={self.pal_val}, ' \ - f'seg_pad_val={self.seg_pad_val}, ' \ - f'center={self.center}, ' \ - f'auto_bound={self.auto_bound})' - return repr_str - - -@PIPELINES.register_module() -class RGB2Gray(object): - """Convert RGB image to grayscale image. - - This transform calculate the weighted mean of input image channels with - ``weights`` and then expand the channels to ``out_channels``. When - ``out_channels`` is None, the number of output channels is the same as - input channels. - - Args: - out_channels (int): Expected number of output channels after - transforming. Default: None. - weights (tuple[float]): The weights to calculate the weighted mean. - Default: (0.299, 0.587, 0.114). - """ - - def __init__(self, out_channels=None, weights=(0.299, 0.587, 0.114)): - assert out_channels is None or out_channels > 0 - self.out_channels = out_channels - assert isinstance(weights, tuple) - for item in weights: - assert isinstance(item, (float, int)) - self.weights = weights - - def __call__(self, results): - """Call function to convert RGB image to grayscale image. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with grayscale image. - """ - img = results['img'] - assert len(img.shape) == 3 - assert img.shape[2] == len(self.weights) - weights = np.array(self.weights).reshape((1, 1, -1)) - img = (img * weights).sum(2, keepdims=True) - if self.out_channels is None: - img = img.repeat(weights.shape[2], axis=2) - else: - img = img.repeat(self.out_channels, axis=2) - - results['img'] = img - results['img_shape'] = img.shape - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(out_channels={self.out_channels}, ' \ - f'weights={self.weights})' - return repr_str - - -@PIPELINES.register_module() -class AdjustGamma(object): - """Using gamma correction to process the image. - - Args: - gamma (float or int): Gamma value used in gamma correction. - Default: 1.0. - """ - - def __init__(self, gamma=1.0): - assert isinstance(gamma, float) or isinstance(gamma, int) - assert gamma > 0 - self.gamma = gamma - inv_gamma = 1.0 / gamma - self.table = np.array([(i / 255.0)**inv_gamma * 255 - for i in np.arange(256)]).astype('uint8') - - def __call__(self, results): - """Call function to process the image with gamma correction. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Processed results. - """ - - results['img'] = mmcv.lut_transform( - np.array(results['img'], dtype=np.uint8), self.table) - - return results - - def __repr__(self): - return self.__class__.__name__ + f'(gamma={self.gamma})' - - -@PIPELINES.register_module() -class SegRescale(object): - """Rescale semantic segmentation maps. - - Args: - scale_factor (float): The scale factor of the final output. 
- """ - - def __init__(self, scale_factor=1): - self.scale_factor = scale_factor - - def __call__(self, results): - """Call function to scale the semantic segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with semantic segmentation map scaled. - """ - for key in results.get('seg_fields', []): - if self.scale_factor != 1: - results[key] = mmcv.imrescale( - results[key], self.scale_factor, interpolation='nearest') - return results - - def __repr__(self): - return self.__class__.__name__ + f'(scale_factor={self.scale_factor})' - - -@PIPELINES.register_module() -class PhotoMetricDistortion(object): - """Apply photometric distortion to image sequentially, every transformation - is applied with a probability of 0.5. The position of random contrast is in - second or second to last. - - 1. random brightness - 2. random contrast (mode 0) - 3. convert color from BGR to HSV - 4. random saturation - 5. random hue - 6. convert color from HSV to BGR - 7. random contrast (mode 1) - - Args: - brightness_delta (int): delta of brightness. - contrast_range (tuple): range of contrast. - saturation_range (tuple): range of saturation. - hue_delta (int): delta of hue. - """ - - def __init__(self, - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18): - self.brightness_delta = brightness_delta - self.contrast_lower, self.contrast_upper = contrast_range - self.saturation_lower, self.saturation_upper = saturation_range - self.hue_delta = hue_delta - - def convert(self, img, alpha=1, beta=0): - """Multiple with alpha and add beat with clip.""" - img = img.astype(np.float32) * alpha + beta - img = np.clip(img, 0, 255) - return img.astype(np.uint8) - - def brightness(self, img): - """Brightness distortion.""" - if random.randint(2): - return self.convert( - img, - beta=random.uniform(-self.brightness_delta, - self.brightness_delta)) - return img - - def contrast(self, img): - """Contrast distortion.""" - if random.randint(2): - return self.convert( - img, - alpha=random.uniform(self.contrast_lower, self.contrast_upper)) - return img - - def saturation(self, img): - """Saturation distortion.""" - if random.randint(2): - img = mmcv.bgr2hsv(img) - img[:, :, 1] = self.convert( - img[:, :, 1], - alpha=random.uniform(self.saturation_lower, - self.saturation_upper)) - img = mmcv.hsv2bgr(img) - return img - - def hue(self, img): - """Hue distortion.""" - if random.randint(2): - img = mmcv.bgr2hsv(img) - img[:, :, - 0] = (img[:, :, 0].astype(int) + - random.randint(-self.hue_delta, self.hue_delta)) % 180 - img = mmcv.hsv2bgr(img) - return img - - def __call__(self, results): - """Call function to perform photometric distortion on images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images distorted. 
- """ - - img = results['img'] - # random brightness - img = self.brightness(img) - - # mode == 0 --> do random contrast first - # mode == 1 --> do random contrast last - mode = random.randint(2) - if mode == 1: - img = self.contrast(img) - - # random saturation - img = self.saturation(img) - - # random hue - img = self.hue(img) - - # random contrast - if mode == 0: - img = self.contrast(img) - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(brightness_delta={self.brightness_delta}, ' - f'contrast_range=({self.contrast_lower}, ' - f'{self.contrast_upper}), ' - f'saturation_range=({self.saturation_lower}, ' - f'{self.saturation_upper}), ' - f'hue_delta={self.hue_delta})') - return repr_str - - -@PIPELINES.register_module() -class RandomCutOut(object): - """CutOut operation. - - Randomly drop some regions of image used in - `Cutout `_. - Args: - prob (float): cutout probability. - n_holes (int | tuple[int, int]): Number of regions to be dropped. - If it is given as a list, number of holes will be randomly - selected from the closed interval [`n_holes[0]`, `n_holes[1]`]. - cutout_shape (tuple[int, int] | list[tuple[int, int]]): The candidate - shape of dropped regions. It can be `tuple[int, int]` to use a - fixed cutout shape, or `list[tuple[int, int]]` to randomly choose - shape from the list. - cutout_ratio (tuple[float, float] | list[tuple[float, float]]): The - candidate ratio of dropped regions. It can be `tuple[float, float]` - to use a fixed ratio or `list[tuple[float, float]]` to randomly - choose ratio from the list. Please note that `cutout_shape` - and `cutout_ratio` cannot be both given at the same time. - fill_in (tuple[float, float, float] | tuple[int, int, int]): The value - of pixel to fill in the dropped regions. Default: (0, 0, 0). - seg_fill_in (int): The labels of pixel to fill in the dropped regions. - If seg_fill_in is None, skip. Default: None. - """ - - def __init__(self, - prob, - n_holes, - cutout_shape=None, - cutout_ratio=None, - fill_in=(0, 0, 0), - seg_fill_in=None): - - assert 0 <= prob and prob <= 1 - assert (cutout_shape is None) ^ (cutout_ratio is None), \ - 'Either cutout_shape or cutout_ratio should be specified.' 
- assert (isinstance(cutout_shape, (list, tuple)) - or isinstance(cutout_ratio, (list, tuple))) - if isinstance(n_holes, tuple): - assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1] - else: - n_holes = (n_holes, n_holes) - if seg_fill_in is not None: - assert (isinstance(seg_fill_in, int) and 0 <= seg_fill_in - and seg_fill_in <= 255) - self.prob = prob - self.n_holes = n_holes - self.fill_in = fill_in - self.seg_fill_in = seg_fill_in - self.with_ratio = cutout_ratio is not None - self.candidates = cutout_ratio if self.with_ratio else cutout_shape - if not isinstance(self.candidates, list): - self.candidates = [self.candidates] - - def __call__(self, results): - """Call function to drop some regions of image.""" - cutout = True if np.random.rand() < self.prob else False - if cutout: - h, w, c = results['img'].shape - n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1) - for _ in range(n_holes): - x1 = np.random.randint(0, w) - y1 = np.random.randint(0, h) - index = np.random.randint(0, len(self.candidates)) - if not self.with_ratio: - cutout_w, cutout_h = self.candidates[index] - else: - cutout_w = int(self.candidates[index][0] * w) - cutout_h = int(self.candidates[index][1] * h) - - x2 = np.clip(x1 + cutout_w, 0, w) - y2 = np.clip(y1 + cutout_h, 0, h) - results['img'][y1:y2, x1:x2, :] = self.fill_in - - if self.seg_fill_in is not None: - for key in results.get('seg_fields', []): - results[key][y1:y2, x1:x2] = self.seg_fill_in - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(prob={self.prob}, ' - repr_str += f'n_holes={self.n_holes}, ' - repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio - else f'cutout_shape={self.candidates}, ') - repr_str += f'fill_in={self.fill_in}, ' - repr_str += f'seg_fill_in={self.seg_fill_in})' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/stare.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/stare.py deleted file mode 100644 index a24d1d95..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/stare.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class STAREDataset(CustomDataset): - """STARE dataset. - - In segmentation map annotation for STARE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.ah.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(STAREDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.ah.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/datasets/voc.py b/cv/3d_detection/paconv/pytorch/mmseg/datasets/voc.py deleted file mode 100644 index 3cec9e35..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/datasets/voc.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class PascalVOCDataset(CustomDataset): - """Pascal VOC dataset. - - Args: - split (str): Split txt file for Pascal VOC. 
- """ - - CLASSES = ('background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', - 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', - 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', - 'train', 'tvmonitor') - - PALETTE = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128], - [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], - [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128], - [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0], - [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]] - - def __init__(self, split, **kwargs): - super(PascalVOCDataset, self).__init__( - img_suffix='.jpg', seg_map_suffix='.png', split=split, **kwargs) - assert osp.exists(self.img_dir) and self.split is not None diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/models/__init__.py deleted file mode 100644 index 87d8108e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .backbones import * # noqa: F401,F403 -from .builder import (BACKBONES, HEADS, LOSSES, SEGMENTORS, build_backbone, - build_head, build_loss, build_segmentor) -from .decode_heads import * # noqa: F401,F403 -from .losses import * # noqa: F401,F403 -from .necks import * # noqa: F401,F403 -from .segmentors import * # noqa: F401,F403 - -__all__ = [ - 'BACKBONES', 'HEADS', 'LOSSES', 'SEGMENTORS', 'build_backbone', - 'build_head', 'build_loss', 'build_segmentor' -] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/__init__.py deleted file mode 100644 index 434378e9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .bisenetv1 import BiSeNetV1 -from .bisenetv2 import BiSeNetV2 -from .cgnet import CGNet -from .erfnet import ERFNet -from .fast_scnn import FastSCNN -from .hrnet import HRNet -from .icnet import ICNet -from .mit import MixVisionTransformer -from .mobilenet_v2 import MobileNetV2 -from .mobilenet_v3 import MobileNetV3 -from .resnest import ResNeSt -from .resnet import ResNet, ResNetV1c, ResNetV1d -from .resnext import ResNeXt -from .stdc import STDCContextPathNet, STDCNet -from .swin import SwinTransformer -from .timm_backbone import TIMMBackbone -from .twins import PCPVT, SVT -from .unet import UNet -from .vit import VisionTransformer - -__all__ = [ - 'ResNet', 'ResNetV1c', 'ResNetV1d', 'ResNeXt', 'HRNet', 'FastSCNN', - 'ResNeSt', 'MobileNetV2', 'UNet', 'CGNet', 'MobileNetV3', - 'VisionTransformer', 'SwinTransformer', 'MixVisionTransformer', - 'BiSeNetV1', 'BiSeNetV2', 'ICNet', 'TIMMBackbone', 'ERFNet', 'PCPVT', - 'SVT', 'STDCNet', 'STDCContextPathNet' -] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/bisenetv1.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/bisenetv1.py deleted file mode 100644 index 4beb7b39..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/bisenetv1.py +++ /dev/null @@ -1,332 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import BACKBONES, build_backbone - - -class SpatialPath(BaseModule): - """Spatial Path to preserve the spatial size of the original input image - and encode affluent spatial information. - - Args: - in_channels(int): The number of channels of input - image. Default: 3. - num_channels (Tuple[int]): The number of channels of - each layers in Spatial Path. - Default: (64, 64, 64, 128). - Returns: - x (torch.Tensor): Feature map for Feature Fusion Module. - """ - - def __init__(self, - in_channels=3, - num_channels=(64, 64, 64, 128), - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(SpatialPath, self).__init__(init_cfg=init_cfg) - assert len(num_channels) == 4, 'Length of input channels \ - of Spatial Path must be 4!' - - self.layers = [] - for i in range(len(num_channels)): - layer_name = f'layer{i + 1}' - self.layers.append(layer_name) - if i == 0: - self.add_module( - layer_name, - ConvModule( - in_channels=in_channels, - out_channels=num_channels[i], - kernel_size=7, - stride=2, - padding=3, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - elif i == len(num_channels) - 1: - self.add_module( - layer_name, - ConvModule( - in_channels=num_channels[i - 1], - out_channels=num_channels[i], - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - else: - self.add_module( - layer_name, - ConvModule( - in_channels=num_channels[i - 1], - out_channels=num_channels[i], - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, x): - for i, layer_name in enumerate(self.layers): - layer_stage = getattr(self, layer_name) - x = layer_stage(x) - return x - - -class AttentionRefinementModule(BaseModule): - """Attention Refinement Module (ARM) to refine the features of each stage. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - Returns: - x_out (torch.Tensor): Feature map of Attention Refinement Module. - """ - - def __init__(self, - in_channels, - out_channel, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(AttentionRefinementModule, self).__init__(init_cfg=init_cfg) - self.conv_layer = ConvModule( - in_channels=in_channels, - out_channels=out_channel, - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.atten_conv_layer = nn.Sequential( - nn.AdaptiveAvgPool2d((1, 1)), - ConvModule( - in_channels=out_channel, - out_channels=out_channel, - kernel_size=1, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None), nn.Sigmoid()) - - def forward(self, x): - x = self.conv_layer(x) - x_atten = self.atten_conv_layer(x) - x_out = x * x_atten - return x_out - - -class ContextPath(BaseModule): - """Context Path to provide sufficient receptive field. - - Args: - backbone_cfg:(dict): Config of backbone of - Context Path. - context_channels (Tuple[int]): The number of channel numbers - of various modules in Context Path. - Default: (128, 256, 512). - align_corners (bool, optional): The align_corners argument of - resize operation. Default: False. - Returns: - x_16_up, x_32_up (torch.Tensor, torch.Tensor): Two feature maps - undergoing upsampling from 1/16 and 1/32 downsampling - feature maps. 
These two feature maps are used for Feature - Fusion Module and Auxiliary Head. - """ - - def __init__(self, - backbone_cfg, - context_channels=(128, 256, 512), - align_corners=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(ContextPath, self).__init__(init_cfg=init_cfg) - assert len(context_channels) == 3, 'Length of input channels \ - of Context Path must be 3!' - - self.backbone = build_backbone(backbone_cfg) - - self.align_corners = align_corners - self.arm16 = AttentionRefinementModule(context_channels[1], - context_channels[0]) - self.arm32 = AttentionRefinementModule(context_channels[2], - context_channels[0]) - self.conv_head32 = ConvModule( - in_channels=context_channels[0], - out_channels=context_channels[0], - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.conv_head16 = ConvModule( - in_channels=context_channels[0], - out_channels=context_channels[0], - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.gap_conv = nn.Sequential( - nn.AdaptiveAvgPool2d((1, 1)), - ConvModule( - in_channels=context_channels[2], - out_channels=context_channels[0], - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, x): - x_4, x_8, x_16, x_32 = self.backbone(x) - x_gap = self.gap_conv(x_32) - - x_32_arm = self.arm32(x_32) - x_32_sum = x_32_arm + x_gap - x_32_up = resize(input=x_32_sum, size=x_16.shape[2:], mode='nearest') - x_32_up = self.conv_head32(x_32_up) - - x_16_arm = self.arm16(x_16) - x_16_sum = x_16_arm + x_32_up - x_16_up = resize(input=x_16_sum, size=x_8.shape[2:], mode='nearest') - x_16_up = self.conv_head16(x_16_up) - - return x_16_up, x_32_up - - -class FeatureFusionModule(BaseModule): - """Feature Fusion Module to fuse low level output feature of Spatial Path - and high level output feature of Context Path. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - Returns: - x_out (torch.Tensor): Feature map of Feature Fusion Module. - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(FeatureFusionModule, self).__init__(init_cfg=init_cfg) - self.conv1 = ConvModule( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.gap = nn.AdaptiveAvgPool2d((1, 1)) - self.conv_atten = nn.Sequential( - ConvModule( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), nn.Sigmoid()) - - def forward(self, x_sp, x_cp): - x_concat = torch.cat([x_sp, x_cp], dim=1) - x_fuse = self.conv1(x_concat) - x_atten = self.gap(x_fuse) - # Note: No BN and more 1x1 conv in paper. - x_atten = self.conv_atten(x_atten) - x_atten = x_fuse * x_atten - x_out = x_atten + x_fuse - return x_out - - -@BACKBONES.register_module() -class BiSeNetV1(BaseModule): - """BiSeNetV1 backbone. - - This backbone is the implementation of `BiSeNet: Bilateral - Segmentation Network for Real-time Semantic - Segmentation `_. - - Args: - backbone_cfg:(dict): Config of backbone of - Context Path. - in_channels (int): The number of channels of input - image. Default: 3. 
- spatial_channels (Tuple[int]): Size of channel numbers of - various layers in Spatial Path. - Default: (64, 64, 64, 128). - context_channels (Tuple[int]): Size of channel numbers of - various modules in Context Path. - Default: (128, 256, 512). - out_indices (Tuple[int] | int, optional): Output from which stages. - Default: (0, 1, 2). - align_corners (bool, optional): The align_corners argument of - resize operation in Bilateral Guided Aggregation Layer. - Default: False. - out_channels(int): The number of channels of output. - It must be the same with `in_channels` of decode_head. - Default: 256. - """ - - def __init__(self, - backbone_cfg, - in_channels=3, - spatial_channels=(64, 64, 64, 128), - context_channels=(128, 256, 512), - out_indices=(0, 1, 2), - align_corners=False, - out_channels=256, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='ReLU'), - init_cfg=None): - - super(BiSeNetV1, self).__init__(init_cfg=init_cfg) - assert len(spatial_channels) == 4, 'Length of input channels \ - of Spatial Path must be 4!' - - assert len(context_channels) == 3, 'Length of input channels \ - of Context Path must be 3!' - - self.out_indices = out_indices - self.align_corners = align_corners - self.context_path = ContextPath(backbone_cfg, context_channels, - self.align_corners) - self.spatial_path = SpatialPath(in_channels, spatial_channels) - self.ffm = FeatureFusionModule(context_channels[1], out_channels) - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - def forward(self, x): - # stole refactoring code from Coin Cheung, thanks - x_context8, x_context16 = self.context_path(x) - x_spatial = self.spatial_path(x) - x_fuse = self.ffm(x_spatial, x_context8) - - outs = [x_fuse, x_context8, x_context16] - outs = [outs[i] for i in self.out_indices] - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/bisenetv2.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/bisenetv2.py deleted file mode 100644 index d908b321..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/bisenetv2.py +++ /dev/null @@ -1,622 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import (ConvModule, DepthwiseSeparableConvModule, - build_activation_layer, build_norm_layer) -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import BACKBONES - - -class DetailBranch(BaseModule): - """Detail Branch with wide channels and shallow layers to capture low-level - details and generate high-resolution feature representation. - - Args: - detail_channels (Tuple[int]): Size of channel numbers of each stage - in Detail Branch, in paper it has 3 stages. - Default: (64, 64, 128). - in_channels (int): Number of channels of input image. Default: 3. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - x (torch.Tensor): Feature map of Detail Branch. 
- """ - - def __init__(self, - detail_channels=(64, 64, 128), - in_channels=3, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(DetailBranch, self).__init__(init_cfg=init_cfg) - detail_branch = [] - for i in range(len(detail_channels)): - if i == 0: - detail_branch.append( - nn.Sequential( - ConvModule( - in_channels=in_channels, - out_channels=detail_channels[i], - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ConvModule( - in_channels=detail_channels[i], - out_channels=detail_channels[i], - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg))) - else: - detail_branch.append( - nn.Sequential( - ConvModule( - in_channels=detail_channels[i - 1], - out_channels=detail_channels[i], - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ConvModule( - in_channels=detail_channels[i], - out_channels=detail_channels[i], - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ConvModule( - in_channels=detail_channels[i], - out_channels=detail_channels[i], - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg))) - self.detail_branch = nn.ModuleList(detail_branch) - - def forward(self, x): - for stage in self.detail_branch: - x = stage(x) - return x - - -class StemBlock(BaseModule): - """Stem Block at the beginning of Semantic Branch. - - Args: - in_channels (int): Number of input channels. - Default: 3. - out_channels (int): Number of output channels. - Default: 16. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - x (torch.Tensor): First feature map in Semantic Branch. - """ - - def __init__(self, - in_channels=3, - out_channels=16, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(StemBlock, self).__init__(init_cfg=init_cfg) - - self.conv_first = ConvModule( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.convs = nn.Sequential( - ConvModule( - in_channels=out_channels, - out_channels=out_channels // 2, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ConvModule( - in_channels=out_channels // 2, - out_channels=out_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.pool = nn.MaxPool2d( - kernel_size=3, stride=2, padding=1, ceil_mode=False) - self.fuse_last = ConvModule( - in_channels=out_channels * 2, - out_channels=out_channels, - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - x = self.conv_first(x) - x_left = self.convs(x) - x_right = self.pool(x) - x = self.fuse_last(torch.cat([x_left, x_right], dim=1)) - return x - - -class GELayer(BaseModule): - """Gather-and-Expansion Layer. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - exp_ratio (int): Expansion ratio for middle channels. - Default: 6. 
- stride (int): Stride of GELayer. Default: 1 - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - x (torch.Tensor): Intermediate feature map in - Semantic Branch. - """ - - def __init__(self, - in_channels, - out_channels, - exp_ratio=6, - stride=1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(GELayer, self).__init__(init_cfg=init_cfg) - mid_channel = in_channels * exp_ratio - self.conv1 = ConvModule( - in_channels=in_channels, - out_channels=in_channels, - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - if stride == 1: - self.dwconv = nn.Sequential( - # ReLU in ConvModule not shown in paper - ConvModule( - in_channels=in_channels, - out_channels=mid_channel, - kernel_size=3, - stride=stride, - padding=1, - groups=in_channels, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.shortcut = None - else: - self.dwconv = nn.Sequential( - ConvModule( - in_channels=in_channels, - out_channels=mid_channel, - kernel_size=3, - stride=stride, - padding=1, - groups=in_channels, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None), - # ReLU in ConvModule not shown in paper - ConvModule( - in_channels=mid_channel, - out_channels=mid_channel, - kernel_size=3, - stride=1, - padding=1, - groups=mid_channel, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ) - self.shortcut = nn.Sequential( - DepthwiseSeparableConvModule( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - stride=stride, - padding=1, - dw_norm_cfg=norm_cfg, - dw_act_cfg=None, - pw_norm_cfg=norm_cfg, - pw_act_cfg=None, - )) - - self.conv2 = nn.Sequential( - ConvModule( - in_channels=mid_channel, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None, - )) - - self.act = build_activation_layer(act_cfg) - - def forward(self, x): - identity = x - x = self.conv1(x) - x = self.dwconv(x) - x = self.conv2(x) - if self.shortcut is not None: - shortcut = self.shortcut(identity) - x = x + shortcut - else: - x = x + identity - x = self.act(x) - return x - - -class CEBlock(BaseModule): - """Context Embedding Block for large receptive filed in Semantic Branch. - - Args: - in_channels (int): Number of input channels. - Default: 3. - out_channels (int): Number of output channels. - Default: 16. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - x (torch.Tensor): Last feature map in Semantic Branch. 
- """ - - def __init__(self, - in_channels=3, - out_channels=16, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(CEBlock, self).__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - self.gap = nn.Sequential( - nn.AdaptiveAvgPool2d((1, 1)), - build_norm_layer(norm_cfg, self.in_channels)[1]) - self.conv_gap = ConvModule( - in_channels=self.in_channels, - out_channels=self.out_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - # Note: in paper here is naive conv2d, no bn-relu - self.conv_last = ConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - identity = x - x = self.gap(x) - x = self.conv_gap(x) - x = identity + x - x = self.conv_last(x) - return x - - -class SemanticBranch(BaseModule): - """Semantic Branch which is lightweight with narrow channels and deep - layers to obtain high-level semantic context. - - Args: - semantic_channels(Tuple[int]): Size of channel numbers of - various stages in Semantic Branch. - Default: (16, 32, 64, 128). - in_channels (int): Number of channels of input image. Default: 3. - exp_ratio (int): Expansion ratio for middle channels. - Default: 6. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - semantic_outs (List[torch.Tensor]): List of several feature maps - for auxiliary heads (Booster) and Bilateral - Guided Aggregation Layer. - """ - - def __init__(self, - semantic_channels=(16, 32, 64, 128), - in_channels=3, - exp_ratio=6, - init_cfg=None): - super(SemanticBranch, self).__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.semantic_channels = semantic_channels - self.semantic_stages = [] - for i in range(len(semantic_channels)): - stage_name = f'stage{i + 1}' - self.semantic_stages.append(stage_name) - if i == 0: - self.add_module( - stage_name, - StemBlock(self.in_channels, semantic_channels[i])) - elif i == (len(semantic_channels) - 1): - self.add_module( - stage_name, - nn.Sequential( - GELayer(semantic_channels[i - 1], semantic_channels[i], - exp_ratio, 2), - GELayer(semantic_channels[i], semantic_channels[i], - exp_ratio, 1), - GELayer(semantic_channels[i], semantic_channels[i], - exp_ratio, 1), - GELayer(semantic_channels[i], semantic_channels[i], - exp_ratio, 1))) - else: - self.add_module( - stage_name, - nn.Sequential( - GELayer(semantic_channels[i - 1], semantic_channels[i], - exp_ratio, 2), - GELayer(semantic_channels[i], semantic_channels[i], - exp_ratio, 1))) - - self.add_module(f'stage{len(semantic_channels)}_CEBlock', - CEBlock(semantic_channels[-1], semantic_channels[-1])) - self.semantic_stages.append(f'stage{len(semantic_channels)}_CEBlock') - - def forward(self, x): - semantic_outs = [] - for stage_name in self.semantic_stages: - semantic_stage = getattr(self, stage_name) - x = semantic_stage(x) - semantic_outs.append(x) - return semantic_outs - - -class BGALayer(BaseModule): - """Bilateral Guided Aggregation Layer to fuse the complementary information - from both Detail Branch and Semantic Branch. - - Args: - out_channels (int): Number of output channels. - Default: 128. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - conv_cfg (dict | None): Config of conv layers. - Default: None. 
- norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - output (torch.Tensor): Output feature map for Segment heads. - """ - - def __init__(self, - out_channels=128, - align_corners=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(BGALayer, self).__init__(init_cfg=init_cfg) - self.out_channels = out_channels - self.align_corners = align_corners - self.detail_dwconv = nn.Sequential( - DepthwiseSeparableConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - padding=1, - dw_norm_cfg=norm_cfg, - dw_act_cfg=None, - pw_norm_cfg=None, - pw_act_cfg=None, - )) - self.detail_down = nn.Sequential( - ConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None), - nn.AvgPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=False)) - self.semantic_conv = nn.Sequential( - ConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None)) - self.semantic_dwconv = nn.Sequential( - DepthwiseSeparableConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - padding=1, - dw_norm_cfg=norm_cfg, - dw_act_cfg=None, - pw_norm_cfg=None, - pw_act_cfg=None, - )) - self.conv = ConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - padding=1, - inplace=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - ) - - def forward(self, x_d, x_s): - detail_dwconv = self.detail_dwconv(x_d) - detail_down = self.detail_down(x_d) - semantic_conv = self.semantic_conv(x_s) - semantic_dwconv = self.semantic_dwconv(x_s) - semantic_conv = resize( - input=semantic_conv, - size=detail_dwconv.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - fuse_1 = detail_dwconv * torch.sigmoid(semantic_conv) - fuse_2 = detail_down * torch.sigmoid(semantic_dwconv) - fuse_2 = resize( - input=fuse_2, - size=fuse_1.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - output = self.conv(fuse_1 + fuse_2) - return output - - -@BACKBONES.register_module() -class BiSeNetV2(BaseModule): - """BiSeNetV2: Bilateral Network with Guided Aggregation for - Real-time Semantic Segmentation. - - This backbone is the implementation of - `BiSeNetV2 `_. - - Args: - in_channels (int): Number of channel of input image. Default: 3. - detail_channels (Tuple[int], optional): Channels of each stage - in Detail Branch. Default: (64, 64, 128). - semantic_channels (Tuple[int], optional): Channels of each stage - in Semantic Branch. Default: (16, 32, 64, 128). - See Table 1 and Figure 3 of paper for more details. - semantic_expansion_ratio (int, optional): The expansion factor - expanding channel number of middle channels in Semantic Branch. - Default: 6. - bga_channels (int, optional): Number of middle channels in - Bilateral Guided Aggregation Layer. Default: 128. - out_indices (Tuple[int] | int, optional): Output from which stages. - Default: (0, 1, 2, 3, 4). 
- align_corners (bool, optional): The align_corners argument of - resize operation in Bilateral Guided Aggregation Layer. - Default: False. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels=3, - detail_channels=(64, 64, 128), - semantic_channels=(16, 32, 64, 128), - semantic_expansion_ratio=6, - bga_channels=128, - out_indices=(0, 1, 2, 3, 4), - align_corners=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - if init_cfg is None: - init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) - ] - super(BiSeNetV2, self).__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_indices = out_indices - self.detail_channels = detail_channels - self.semantic_channels = semantic_channels - self.semantic_expansion_ratio = semantic_expansion_ratio - self.bga_channels = bga_channels - self.align_corners = align_corners - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.detail = DetailBranch(self.detail_channels, self.in_channels) - self.semantic = SemanticBranch(self.semantic_channels, - self.in_channels, - self.semantic_expansion_ratio) - self.bga = BGALayer(self.bga_channels, self.align_corners) - - def forward(self, x): - # stole refactoring code from Coin Cheung, thanks - x_detail = self.detail(x) - x_semantic_lst = self.semantic(x) - x_head = self.bga(x_detail, x_semantic_lst[-1]) - outs = [x_head] + x_semantic_lst[:-1] - outs = [outs[i] for i in self.out_indices] - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/cgnet.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/cgnet.py deleted file mode 100644 index 168194c1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/cgnet.py +++ /dev/null @@ -1,372 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import ConvModule, build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule -from mmcv.utils.parrots_wrapper import _BatchNorm - -from ..builder import BACKBONES - - -class GlobalContextExtractor(nn.Module): - """Global Context Extractor for CGNet. - - This class is employed to refine the joint feature of both local feature - and surrounding context. - - Args: - channel (int): Number of input feature channels. - reduction (int): Reductions for global context extractor. Default: 16. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. 
- """ - - def __init__(self, channel, reduction=16, with_cp=False): - super(GlobalContextExtractor, self).__init__() - self.channel = channel - self.reduction = reduction - assert reduction >= 1 and channel >= reduction - self.with_cp = with_cp - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Sequential( - nn.Linear(channel, channel // reduction), nn.ReLU(inplace=True), - nn.Linear(channel // reduction, channel), nn.Sigmoid()) - - def forward(self, x): - - def _inner_forward(x): - num_batch, num_channel = x.size()[:2] - y = self.avg_pool(x).view(num_batch, num_channel) - y = self.fc(y).view(num_batch, num_channel, 1, 1) - return x * y - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class ContextGuidedBlock(nn.Module): - """Context Guided Block for CGNet. - - This class consists of four components: local feature extractor, - surrounding feature extractor, joint feature extractor and global - context extractor. - - Args: - in_channels (int): Number of input feature channels. - out_channels (int): Number of output feature channels. - dilation (int): Dilation rate for surrounding context extractor. - Default: 2. - reduction (int): Reduction for global context extractor. Default: 16. - skip_connect (bool): Add input to output or not. Default: True. - downsample (bool): Downsample the input to 1/2 or not. Default: False. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN', requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='PReLU'). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. 
- """ - - def __init__(self, - in_channels, - out_channels, - dilation=2, - reduction=16, - skip_connect=True, - downsample=False, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='PReLU'), - with_cp=False): - super(ContextGuidedBlock, self).__init__() - self.with_cp = with_cp - self.downsample = downsample - - channels = out_channels if downsample else out_channels // 2 - if 'type' in act_cfg and act_cfg['type'] == 'PReLU': - act_cfg['num_parameters'] = channels - kernel_size = 3 if downsample else 1 - stride = 2 if downsample else 1 - padding = (kernel_size - 1) // 2 - - self.conv1x1 = ConvModule( - in_channels, - channels, - kernel_size, - stride, - padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - self.f_loc = build_conv_layer( - conv_cfg, - channels, - channels, - kernel_size=3, - padding=1, - groups=channels, - bias=False) - self.f_sur = build_conv_layer( - conv_cfg, - channels, - channels, - kernel_size=3, - padding=dilation, - groups=channels, - dilation=dilation, - bias=False) - - self.bn = build_norm_layer(norm_cfg, 2 * channels)[1] - self.activate = nn.PReLU(2 * channels) - - if downsample: - self.bottleneck = build_conv_layer( - conv_cfg, - 2 * channels, - out_channels, - kernel_size=1, - bias=False) - - self.skip_connect = skip_connect and not downsample - self.f_glo = GlobalContextExtractor(out_channels, reduction, with_cp) - - def forward(self, x): - - def _inner_forward(x): - out = self.conv1x1(x) - loc = self.f_loc(out) - sur = self.f_sur(out) - - joi_feat = torch.cat([loc, sur], 1) # the joint feature - joi_feat = self.bn(joi_feat) - joi_feat = self.activate(joi_feat) - if self.downsample: - joi_feat = self.bottleneck(joi_feat) # channel = out_channels - # f_glo is employed to refine the joint feature - out = self.f_glo(joi_feat) - - if self.skip_connect: - return x + out - else: - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class InputInjection(nn.Module): - """Downsampling module for CGNet.""" - - def __init__(self, num_downsampling): - super(InputInjection, self).__init__() - self.pool = nn.ModuleList() - for i in range(num_downsampling): - self.pool.append(nn.AvgPool2d(3, stride=2, padding=1)) - - def forward(self, x): - for pool in self.pool: - x = pool(x) - return x - - -@BACKBONES.register_module() -class CGNet(BaseModule): - """CGNet backbone. - - This backbone is the implementation of `A Light-weight Context Guided - Network for Semantic Segmentation `_. - - Args: - in_channels (int): Number of input image channels. Normally 3. - num_channels (tuple[int]): Numbers of feature channels at each stages. - Default: (32, 64, 128). - num_blocks (tuple[int]): Numbers of CG blocks at stage 1 and stage 2. - Default: (3, 21). - dilations (tuple[int]): Dilation rate for surrounding context - extractors at stage 1 and stage 2. Default: (2, 4). - reductions (tuple[int]): Reductions for global context extractors at - stage 1 and stage 2. Default: (8, 16). - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN', requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='PReLU'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. 
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels=3, - num_channels=(32, 64, 128), - num_blocks=(3, 21), - dilations=(2, 4), - reductions=(8, 16), - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='PReLU'), - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - - super(CGNet, self).__init__(init_cfg) - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer=['Conv2d', 'Linear']), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']), - dict(type='Constant', val=0, layer='PReLU') - ] - else: - raise TypeError('pretrained must be a str or None') - - self.in_channels = in_channels - self.num_channels = num_channels - assert isinstance(self.num_channels, tuple) and len( - self.num_channels) == 3 - self.num_blocks = num_blocks - assert isinstance(self.num_blocks, tuple) and len(self.num_blocks) == 2 - self.dilations = dilations - assert isinstance(self.dilations, tuple) and len(self.dilations) == 2 - self.reductions = reductions - assert isinstance(self.reductions, tuple) and len(self.reductions) == 2 - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - if 'type' in self.act_cfg and self.act_cfg['type'] == 'PReLU': - self.act_cfg['num_parameters'] = num_channels[0] - self.norm_eval = norm_eval - self.with_cp = with_cp - - cur_channels = in_channels - self.stem = nn.ModuleList() - for i in range(3): - self.stem.append( - ConvModule( - cur_channels, - num_channels[0], - 3, - 2 if i == 0 else 1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - cur_channels = num_channels[0] - - self.inject_2x = InputInjection(1) # down-sample for Input, factor=2 - self.inject_4x = InputInjection(2) # down-sample for Input, factor=4 - - cur_channels += in_channels - self.norm_prelu_0 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - # stage 1 - self.level1 = nn.ModuleList() - for i in range(num_blocks[0]): - self.level1.append( - ContextGuidedBlock( - cur_channels if i == 0 else num_channels[1], - num_channels[1], - dilations[0], - reductions[0], - downsample=(i == 0), - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - with_cp=with_cp)) # CG block - - cur_channels = 2 * num_channels[1] + in_channels - self.norm_prelu_1 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - # stage 2 - self.level2 = nn.ModuleList() - for i in range(num_blocks[1]): - self.level2.append( - ContextGuidedBlock( - cur_channels if i == 0 else num_channels[2], - num_channels[2], - dilations[1], - reductions[1], - downsample=(i == 0), - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - with_cp=with_cp)) # CG block - - cur_channels = 2 * num_channels[2] - self.norm_prelu_2 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - def forward(self, x): - 
output = [] - - # stage 0 - inp_2x = self.inject_2x(x) - inp_4x = self.inject_4x(x) - for layer in self.stem: - x = layer(x) - x = self.norm_prelu_0(torch.cat([x, inp_2x], 1)) - output.append(x) - - # stage 1 - for i, layer in enumerate(self.level1): - x = layer(x) - if i == 0: - down1 = x - x = self.norm_prelu_1(torch.cat([x, down1, inp_4x], 1)) - output.append(x) - - # stage 2 - for i, layer in enumerate(self.level2): - x = layer(x) - if i == 0: - down2 = x - x = self.norm_prelu_2(torch.cat([down2, x], 1)) - output.append(x) - - return output - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(CGNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/erfnet.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/erfnet.py deleted file mode 100644 index 8921c18f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/erfnet.py +++ /dev/null @@ -1,329 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import build_activation_layer, build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import BACKBONES - - -class DownsamplerBlock(BaseModule): - """Downsampler block of ERFNet. - - This module is a little different from basical ConvModule. - The features from Conv and MaxPool layers are - concatenated before BatchNorm. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', eps=1e-3), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(DownsamplerBlock, self).__init__(init_cfg=init_cfg) - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.conv = build_conv_layer( - self.conv_cfg, - in_channels, - out_channels - in_channels, - kernel_size=3, - stride=2, - padding=1) - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - self.bn = build_norm_layer(self.norm_cfg, out_channels)[1] - self.act = build_activation_layer(self.act_cfg) - - def forward(self, input): - conv_out = self.conv(input) - pool_out = self.pool(input) - pool_out = resize( - input=pool_out, - size=conv_out.size()[2:], - mode='bilinear', - align_corners=False) - output = torch.cat([conv_out, pool_out], 1) - output = self.bn(output) - output = self.act(output) - return output - - -class NonBottleneck1d(BaseModule): - """Non-bottleneck block of ERFNet. - - Args: - channels (int): Number of channels in Non-bottleneck block. - drop_rate (float): Probability of an element to be zeroed. - Default 0. - dilation (int): Dilation rate for last two conv layers. - Default 1. - num_conv_layer (int): Number of 3x1 and 1x3 convolution layers. - Default 2. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. 
- Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - channels, - drop_rate=0, - dilation=1, - num_conv_layer=2, - conv_cfg=None, - norm_cfg=dict(type='BN', eps=1e-3), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(NonBottleneck1d, self).__init__(init_cfg=init_cfg) - - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.act = build_activation_layer(self.act_cfg) - - self.convs_layers = nn.ModuleList() - for conv_layer in range(num_conv_layer): - first_conv_padding = (1, 0) if conv_layer == 0 else (dilation, 0) - first_conv_dilation = 1 if conv_layer == 0 else (dilation, 1) - second_conv_padding = (0, 1) if conv_layer == 0 else (0, dilation) - second_conv_dilation = 1 if conv_layer == 0 else (1, dilation) - - self.convs_layers.append( - build_conv_layer( - self.conv_cfg, - channels, - channels, - kernel_size=(3, 1), - stride=1, - padding=first_conv_padding, - bias=True, - dilation=first_conv_dilation)) - self.convs_layers.append(self.act) - self.convs_layers.append( - build_conv_layer( - self.conv_cfg, - channels, - channels, - kernel_size=(1, 3), - stride=1, - padding=second_conv_padding, - bias=True, - dilation=second_conv_dilation)) - self.convs_layers.append( - build_norm_layer(self.norm_cfg, channels)[1]) - if conv_layer == 0: - self.convs_layers.append(self.act) - else: - self.convs_layers.append(nn.Dropout(p=drop_rate)) - - def forward(self, input): - output = input - for conv in self.convs_layers: - output = conv(output) - output = self.act(output + input) - return output - - -class UpsamplerBlock(BaseModule): - """Upsampler block of ERFNet. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', eps=1e-3), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(UpsamplerBlock, self).__init__(init_cfg=init_cfg) - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.conv = nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - stride=2, - padding=1, - output_padding=1, - bias=True) - self.bn = build_norm_layer(self.norm_cfg, out_channels)[1] - self.act = build_activation_layer(self.act_cfg) - - def forward(self, input): - output = self.conv(input) - output = self.bn(output) - output = self.act(output) - return output - - -@BACKBONES.register_module() -class ERFNet(BaseModule): - """ERFNet backbone. - - This backbone is the implementation of `ERFNet: Efficient Residual - Factorized ConvNet for Real-time SemanticSegmentation - `_. - - Args: - in_channels (int): The number of channels of input - image. Default: 3. - enc_downsample_channels (Tuple[int]): Size of channel - numbers of various Downsampler block in encoder. - Default: (16, 64, 128). - enc_stage_non_bottlenecks (Tuple[int]): Number of stages of - Non-bottleneck block in encoder. - Default: (5, 8). - enc_non_bottleneck_dilations (Tuple[int]): Dilation rate of each - stage of Non-bottleneck block of encoder. - Default: (2, 4, 8, 16). 
- enc_non_bottleneck_channels (Tuple[int]): Size of channel - numbers of various Non-bottleneck block in encoder. - Default: (64, 128). - dec_upsample_channels (Tuple[int]): Size of channel numbers of - various Deconvolution block in decoder. - Default: (64, 16). - dec_stages_non_bottleneck (Tuple[int]): Number of stages of - Non-bottleneck block in decoder. - Default: (2, 2). - dec_non_bottleneck_channels (Tuple[int]): Size of channel - numbers of various Non-bottleneck block in decoder. - Default: (64, 16). - drop_rate (float): Probability of an element to be zeroed. - Default 0.1. - """ - - def __init__(self, - in_channels=3, - enc_downsample_channels=(16, 64, 128), - enc_stage_non_bottlenecks=(5, 8), - enc_non_bottleneck_dilations=(2, 4, 8, 16), - enc_non_bottleneck_channels=(64, 128), - dec_upsample_channels=(64, 16), - dec_stages_non_bottleneck=(2, 2), - dec_non_bottleneck_channels=(64, 16), - dropout_ratio=0.1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='ReLU'), - init_cfg=None): - - super(ERFNet, self).__init__(init_cfg=init_cfg) - assert len(enc_downsample_channels) \ - == len(dec_upsample_channels)+1, 'Number of downsample\ - block of encoder does not \ - match number of upsample block of decoder!' - assert len(enc_downsample_channels) \ - == len(enc_stage_non_bottlenecks)+1, 'Number of \ - downsample block of encoder does not match \ - number of Non-bottleneck block of encoder!' - assert len(enc_downsample_channels) \ - == len(enc_non_bottleneck_channels)+1, 'Number of \ - downsample block of encoder does not match \ - number of channels of Non-bottleneck block of encoder!' - assert enc_stage_non_bottlenecks[-1] \ - % len(enc_non_bottleneck_dilations) == 0, 'Number of \ - Non-bottleneck block of encoder does not match \ - number of Non-bottleneck block of encoder!' - assert len(dec_upsample_channels) \ - == len(dec_stages_non_bottleneck), 'Number of \ - upsample block of decoder does not match \ - number of Non-bottleneck block of decoder!' - assert len(dec_stages_non_bottleneck) \ - == len(dec_non_bottleneck_channels), 'Number of \ - Non-bottleneck block of decoder does not match \ - number of channels of Non-bottleneck block of decoder!' - - self.in_channels = in_channels - self.enc_downsample_channels = enc_downsample_channels - self.enc_stage_non_bottlenecks = enc_stage_non_bottlenecks - self.enc_non_bottleneck_dilations = enc_non_bottleneck_dilations - self.enc_non_bottleneck_channels = enc_non_bottleneck_channels - self.dec_upsample_channels = dec_upsample_channels - self.dec_stages_non_bottleneck = dec_stages_non_bottleneck - self.dec_non_bottleneck_channels = dec_non_bottleneck_channels - self.dropout_ratio = dropout_ratio - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.encoder.append( - DownsamplerBlock(self.in_channels, enc_downsample_channels[0])) - - for i in range(len(enc_downsample_channels) - 1): - self.encoder.append( - DownsamplerBlock(enc_downsample_channels[i], - enc_downsample_channels[i + 1])) - # Last part of encoder is some dilated NonBottleneck1d blocks. 
- if i == len(enc_downsample_channels) - 2: - iteration_times = int(enc_stage_non_bottlenecks[-1] / - len(enc_non_bottleneck_dilations)) - for j in range(iteration_times): - for k in range(len(enc_non_bottleneck_dilations)): - self.encoder.append( - NonBottleneck1d(enc_downsample_channels[-1], - self.dropout_ratio, - enc_non_bottleneck_dilations[k])) - else: - for j in range(enc_stage_non_bottlenecks[i]): - self.encoder.append( - NonBottleneck1d(enc_downsample_channels[i + 1], - self.dropout_ratio)) - - for i in range(len(dec_upsample_channels)): - if i == 0: - self.decoder.append( - UpsamplerBlock(enc_downsample_channels[-1], - dec_non_bottleneck_channels[i])) - else: - self.decoder.append( - UpsamplerBlock(dec_non_bottleneck_channels[i - 1], - dec_non_bottleneck_channels[i])) - for j in range(dec_stages_non_bottleneck[i]): - self.decoder.append( - NonBottleneck1d(dec_non_bottleneck_channels[i])) - - def forward(self, x): - for enc in self.encoder: - x = enc(x) - for dec in self.decoder: - x = dec(x) - return [x] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/fast_scnn.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/fast_scnn.py deleted file mode 100644 index cbfbcaf4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/fast_scnn.py +++ /dev/null @@ -1,409 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule - -from mmseg.models.decode_heads.psp_head import PPM -from mmseg.ops import resize -from ..builder import BACKBONES -from ..utils import InvertedResidual - - -class LearningToDownsample(nn.Module): - """Learning to downsample module. - - Args: - in_channels (int): Number of input channels. - dw_channels (tuple[int]): Number of output channels of the first and - the second depthwise conv (dwconv) layers. - out_channels (int): Number of output channels of the whole - 'learning to downsample' module. - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - act_cfg (dict): Config of activation layers. Default: - dict(type='ReLU') - dw_act_cfg (dict): In DepthwiseSeparableConvModule, activation config - of depthwise ConvModule. If it is 'default', it will be the same - as `act_cfg`. Default: None. - """ - - def __init__(self, - in_channels, - dw_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - dw_act_cfg=None): - super(LearningToDownsample, self).__init__() - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.dw_act_cfg = dw_act_cfg - dw_channels1 = dw_channels[0] - dw_channels2 = dw_channels[1] - - self.conv = ConvModule( - in_channels, - dw_channels1, - 3, - stride=2, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.dsconv1 = DepthwiseSeparableConvModule( - dw_channels1, - dw_channels2, - kernel_size=3, - stride=2, - padding=1, - norm_cfg=self.norm_cfg, - dw_act_cfg=self.dw_act_cfg) - - self.dsconv2 = DepthwiseSeparableConvModule( - dw_channels2, - out_channels, - kernel_size=3, - stride=2, - padding=1, - norm_cfg=self.norm_cfg, - dw_act_cfg=self.dw_act_cfg) - - def forward(self, x): - x = self.conv(x) - x = self.dsconv1(x) - x = self.dsconv2(x) - return x - - -class GlobalFeatureExtractor(nn.Module): - """Global feature extractor module. 
- - Args: - in_channels (int): Number of input channels of the GFE module. - Default: 64 - block_channels (tuple[int]): Tuple of ints. Each int specifies the - number of output channels of each Inverted Residual module. - Default: (64, 96, 128) - out_channels(int): Number of output channels of the GFE module. - Default: 128 - expand_ratio (int): Adjusts number of channels of the hidden layer - in InvertedResidual by this amount. - Default: 6 - num_blocks (tuple[int]): Tuple of ints. Each int specifies the - number of times each Inverted Residual module is repeated. - The repeated Inverted Residual modules are called a 'group'. - Default: (3, 3, 3) - strides (tuple[int]): Tuple of ints. Each int specifies - the downsampling factor of each 'group'. - Default: (2, 2, 1) - pool_scales (tuple[int]): Tuple of ints. Each int specifies - the parameter required in 'global average pooling' within PPM. - Default: (1, 2, 3, 6) - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - act_cfg (dict): Config of activation layers. Default: - dict(type='ReLU') - align_corners (bool): align_corners argument of F.interpolate. - Default: False - """ - - def __init__(self, - in_channels=64, - block_channels=(64, 96, 128), - out_channels=128, - expand_ratio=6, - num_blocks=(3, 3, 3), - strides=(2, 2, 1), - pool_scales=(1, 2, 3, 6), - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False): - super(GlobalFeatureExtractor, self).__init__() - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - assert len(block_channels) == len(num_blocks) == 3 - self.bottleneck1 = self._make_layer(in_channels, block_channels[0], - num_blocks[0], strides[0], - expand_ratio) - self.bottleneck2 = self._make_layer(block_channels[0], - block_channels[1], num_blocks[1], - strides[1], expand_ratio) - self.bottleneck3 = self._make_layer(block_channels[1], - block_channels[2], num_blocks[2], - strides[2], expand_ratio) - self.ppm = PPM( - pool_scales, - block_channels[2], - block_channels[2] // 4, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=align_corners) - - self.out = ConvModule( - block_channels[2] * 2, - out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def _make_layer(self, - in_channels, - out_channels, - blocks, - stride=1, - expand_ratio=6): - layers = [ - InvertedResidual( - in_channels, - out_channels, - stride, - expand_ratio, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - ] - for i in range(1, blocks): - layers.append( - InvertedResidual( - out_channels, - out_channels, - 1, - expand_ratio, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - return nn.Sequential(*layers) - - def forward(self, x): - x = self.bottleneck1(x) - x = self.bottleneck2(x) - x = self.bottleneck3(x) - x = torch.cat([x, *self.ppm(x)], dim=1) - x = self.out(x) - return x - - -class FeatureFusionModule(nn.Module): - """Feature fusion module. - - Args: - higher_in_channels (int): Number of input channels of the - higher-resolution branch. - lower_in_channels (int): Number of input channels of the - lower-resolution branch. - out_channels (int): Number of output channels. - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - dwconv_act_cfg (dict): Config of activation layers in 3x3 conv. 
- Default: dict(type='ReLU'). - conv_act_cfg (dict): Config of activation layers in the two 1x1 conv. - Default: None. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - """ - - def __init__(self, - higher_in_channels, - lower_in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dwconv_act_cfg=dict(type='ReLU'), - conv_act_cfg=None, - align_corners=False): - super(FeatureFusionModule, self).__init__() - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dwconv_act_cfg = dwconv_act_cfg - self.conv_act_cfg = conv_act_cfg - self.align_corners = align_corners - self.dwconv = ConvModule( - lower_in_channels, - out_channels, - 3, - padding=1, - groups=out_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.dwconv_act_cfg) - self.conv_lower_res = ConvModule( - out_channels, - out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.conv_act_cfg) - - self.conv_higher_res = ConvModule( - higher_in_channels, - out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.conv_act_cfg) - - self.relu = nn.ReLU(True) - - def forward(self, higher_res_feature, lower_res_feature): - lower_res_feature = resize( - lower_res_feature, - size=higher_res_feature.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - lower_res_feature = self.dwconv(lower_res_feature) - lower_res_feature = self.conv_lower_res(lower_res_feature) - - higher_res_feature = self.conv_higher_res(higher_res_feature) - out = higher_res_feature + lower_res_feature - return self.relu(out) - - -@BACKBONES.register_module() -class FastSCNN(BaseModule): - """Fast-SCNN Backbone. - - This backbone is the implementation of `Fast-SCNN: Fast Semantic - Segmentation Network `_. - - Args: - in_channels (int): Number of input image channels. Default: 3. - downsample_dw_channels (tuple[int]): Number of output channels after - the first conv layer & the second conv layer in - Learning-To-Downsample (LTD) module. - Default: (32, 48). - global_in_channels (int): Number of input channels of - Global Feature Extractor(GFE). - Equal to number of output channels of LTD. - Default: 64. - global_block_channels (tuple[int]): Tuple of integers that describe - the output channels for each of the MobileNet-v2 bottleneck - residual blocks in GFE. - Default: (64, 96, 128). - global_block_strides (tuple[int]): Tuple of integers - that describe the strides (downsampling factors) for each of the - MobileNet-v2 bottleneck residual blocks in GFE. - Default: (2, 2, 1). - global_out_channels (int): Number of output channels of GFE. - Default: 128. - higher_in_channels (int): Number of input channels of the higher - resolution branch in FFM. - Equal to global_in_channels. - Default: 64. - lower_in_channels (int): Number of input channels of the lower - resolution branch in FFM. - Equal to global_out_channels. - Default: 128. - fusion_out_channels (int): Number of output channels of FFM. - Default: 128. - out_indices (tuple): Tuple of indices of list - [higher_res_features, lower_res_features, fusion_output]. - Often set to (0,1,2) to enable aux. heads. - Default: (0, 1, 2). - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - act_cfg (dict): Config of activation layers. Default: - dict(type='ReLU') - align_corners (bool): align_corners argument of F.interpolate. 
- Default: False - dw_act_cfg (dict): In DepthwiseSeparableConvModule, activation config - of depthwise ConvModule. If it is 'default', it will be the same - as `act_cfg`. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels=3, - downsample_dw_channels=(32, 48), - global_in_channels=64, - global_block_channels=(64, 96, 128), - global_block_strides=(2, 2, 1), - global_out_channels=128, - higher_in_channels=64, - lower_in_channels=128, - fusion_out_channels=128, - out_indices=(0, 1, 2), - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False, - dw_act_cfg=None, - init_cfg=None): - - super(FastSCNN, self).__init__(init_cfg) - - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) - ] - - if global_in_channels != higher_in_channels: - raise AssertionError('Global Input Channels must be the same \ - with Higher Input Channels!') - elif global_out_channels != lower_in_channels: - raise AssertionError('Global Output Channels must be the same \ - with Lower Input Channels!') - - self.in_channels = in_channels - self.downsample_dw_channels1 = downsample_dw_channels[0] - self.downsample_dw_channels2 = downsample_dw_channels[1] - self.global_in_channels = global_in_channels - self.global_block_channels = global_block_channels - self.global_block_strides = global_block_strides - self.global_out_channels = global_out_channels - self.higher_in_channels = higher_in_channels - self.lower_in_channels = lower_in_channels - self.fusion_out_channels = fusion_out_channels - self.out_indices = out_indices - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.align_corners = align_corners - self.learning_to_downsample = LearningToDownsample( - in_channels, - downsample_dw_channels, - global_in_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - dw_act_cfg=dw_act_cfg) - self.global_feature_extractor = GlobalFeatureExtractor( - global_in_channels, - global_block_channels, - global_out_channels, - strides=self.global_block_strides, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.feature_fusion = FeatureFusionModule( - higher_in_channels, - lower_in_channels, - fusion_out_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dwconv_act_cfg=self.act_cfg, - align_corners=self.align_corners) - - def forward(self, x): - higher_res_features = self.learning_to_downsample(x) - lower_res_features = self.global_feature_extractor(higher_res_features) - fusion_output = self.feature_fusion(higher_res_features, - lower_res_features) - - outs = [higher_res_features, lower_res_features, fusion_output] - outs = [outs[i] for i in self.out_indices] - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/hrnet.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/hrnet.py deleted file mode 100644 index 90feadcf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/hrnet.py +++ /dev/null @@ -1,642 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch.nn as nn -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule, ModuleList, Sequential -from mmcv.utils.parrots_wrapper import _BatchNorm - -from mmseg.ops import Upsample, resize -from ..builder import BACKBONES -from .resnet import BasicBlock, Bottleneck - - -class HRModule(BaseModule): - """High-Resolution Module for HRNet. - - In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/Exchange - is in this module. - """ - - def __init__(self, - num_branches, - blocks, - num_blocks, - in_channels, - num_channels, - multiscale_output=True, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - block_init_cfg=None, - init_cfg=None): - super(HRModule, self).__init__(init_cfg) - self.block_init_cfg = block_init_cfg - self._check_branches(num_branches, num_blocks, in_channels, - num_channels) - - self.in_channels = in_channels - self.num_branches = num_branches - - self.multiscale_output = multiscale_output - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - self.with_cp = with_cp - self.branches = self._make_branches(num_branches, blocks, num_blocks, - num_channels) - self.fuse_layers = self._make_fuse_layers() - self.relu = nn.ReLU(inplace=False) - - def _check_branches(self, num_branches, num_blocks, in_channels, - num_channels): - """Check branches configuration.""" - if num_branches != len(num_blocks): - error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_BLOCKS(' \ - f'{len(num_blocks)})' - raise ValueError(error_msg) - - if num_branches != len(num_channels): - error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_CHANNELS(' \ - f'{len(num_channels)})' - raise ValueError(error_msg) - - if num_branches != len(in_channels): - error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_INCHANNELS(' \ - f'{len(in_channels)})' - raise ValueError(error_msg) - - def _make_one_branch(self, - branch_index, - block, - num_blocks, - num_channels, - stride=1): - """Build one branch.""" - downsample = None - if stride != 1 or \ - self.in_channels[branch_index] != \ - num_channels[branch_index] * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - self.in_channels[branch_index], - num_channels[branch_index] * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, num_channels[branch_index] * - block.expansion)[1]) - - layers = [] - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=self.block_init_cfg)) - self.in_channels[branch_index] = \ - num_channels[branch_index] * block.expansion - for i in range(1, num_blocks[branch_index]): - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=self.block_init_cfg)) - - return Sequential(*layers) - - def _make_branches(self, num_branches, block, num_blocks, num_channels): - """Build multiple branch.""" - branches = [] - - for i in range(num_branches): - branches.append( - self._make_one_branch(i, block, num_blocks, num_channels)) - - return ModuleList(branches) - - def _make_fuse_layers(self): - """Build fuse layer.""" - if self.num_branches == 1: - return None - - num_branches = self.num_branches - in_channels = self.in_channels - fuse_layers = [] - num_out_branches = num_branches if self.multiscale_output 
else 1 - for i in range(num_out_branches): - fuse_layer = [] - for j in range(num_branches): - if j > i: - fuse_layer.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=1, - stride=1, - padding=0, - bias=False), - build_norm_layer(self.norm_cfg, in_channels[i])[1], - # we set align_corners=False for HRNet - Upsample( - scale_factor=2**(j - i), - mode='bilinear', - align_corners=False))) - elif j == i: - fuse_layer.append(None) - else: - conv_downsamples = [] - for k in range(i - j): - if k == i - j - 1: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[i])[1])) - else: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[j], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[j])[1], - nn.ReLU(inplace=False))) - fuse_layer.append(nn.Sequential(*conv_downsamples)) - fuse_layers.append(nn.ModuleList(fuse_layer)) - - return nn.ModuleList(fuse_layers) - - def forward(self, x): - """Forward function.""" - if self.num_branches == 1: - return [self.branches[0](x[0])] - - for i in range(self.num_branches): - x[i] = self.branches[i](x[i]) - - x_fuse = [] - for i in range(len(self.fuse_layers)): - y = 0 - for j in range(self.num_branches): - if i == j: - y += x[j] - elif j > i: - y = y + resize( - self.fuse_layers[i][j](x[j]), - size=x[i].shape[2:], - mode='bilinear', - align_corners=False) - else: - y += self.fuse_layers[i][j](x[j]) - x_fuse.append(self.relu(y)) - return x_fuse - - -@BACKBONES.register_module() -class HRNet(BaseModule): - """HRNet backbone. - - This backbone is the implementation of `High-Resolution Representations - for Labeling Pixels and Regions `_. - - Args: - extra (dict): Detailed configuration for each stage of HRNet. - There must be 4 stages, the configuration for each stage must have - 5 keys: - - - num_modules (int): The number of HRModule in this stage. - - num_branches (int): The number of branches in the HRModule. - - block (str): The type of convolution block. - - num_blocks (tuple): The number of blocks in each branch. - The length must be equal to num_branches. - - num_channels (tuple): The number of channels in each branch. - The length must be equal to num_branches. - in_channels (int): Number of input image channels. Normally 3. - conv_cfg (dict): Dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Use `BN` by default. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. Default: -1. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. Default: False. - multiscale_output (bool): Whether to output multi-level features - produced by multiple branches. If False, only the first level - feature will be output. Default: True. - pretrained (str, optional): Model pretrained path. Default: None. 
- init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - - Example: - >>> from mmseg.models import HRNet - >>> import torch - >>> extra = dict( - >>> stage1=dict( - >>> num_modules=1, - >>> num_branches=1, - >>> block='BOTTLENECK', - >>> num_blocks=(4, ), - >>> num_channels=(64, )), - >>> stage2=dict( - >>> num_modules=1, - >>> num_branches=2, - >>> block='BASIC', - >>> num_blocks=(4, 4), - >>> num_channels=(32, 64)), - >>> stage3=dict( - >>> num_modules=4, - >>> num_branches=3, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4), - >>> num_channels=(32, 64, 128)), - >>> stage4=dict( - >>> num_modules=3, - >>> num_branches=4, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4, 4), - >>> num_channels=(32, 64, 128, 256))) - >>> self = HRNet(extra, in_channels=1) - >>> self.eval() - >>> inputs = torch.rand(1, 1, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 32, 8, 8) - (1, 64, 4, 4) - (1, 128, 2, 2) - (1, 256, 1, 1) - """ - - blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck} - - def __init__(self, - extra, - in_channels=3, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=False, - with_cp=False, - frozen_stages=-1, - zero_init_residual=False, - multiscale_output=True, - pretrained=None, - init_cfg=None): - super(HRNet, self).__init__(init_cfg) - - self.pretrained = pretrained - self.zero_init_residual = zero_init_residual - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - # Assert configurations of 4 stages are in extra - assert 'stage1' in extra and 'stage2' in extra \ - and 'stage3' in extra and 'stage4' in extra - # Assert whether the length of `num_blocks` and `num_channels` are - # equal to `num_branches` - for i in range(4): - cfg = extra[f'stage{i + 1}'] - assert len(cfg['num_blocks']) == cfg['num_branches'] and \ - len(cfg['num_channels']) == cfg['num_branches'] - - self.extra = extra - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - self.frozen_stages = frozen_stages - - # stem net - self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1) - self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2) - - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - self.conv_cfg, - 64, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.relu = nn.ReLU(inplace=True) - - # stage 1 - self.stage1_cfg = self.extra['stage1'] - num_channels = self.stage1_cfg['num_channels'][0] - block_type = self.stage1_cfg['block'] - num_blocks = self.stage1_cfg['num_blocks'][0] - - block = self.blocks_dict[block_type] - stage1_out_channels = num_channels * block.expansion - self.layer1 = self._make_layer(block, 64, num_channels, num_blocks) - - # stage 2 - self.stage2_cfg = 
self.extra['stage2'] - num_channels = self.stage2_cfg['num_channels'] - block_type = self.stage2_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition1 = self._make_transition_layer([stage1_out_channels], - num_channels) - self.stage2, pre_stage_channels = self._make_stage( - self.stage2_cfg, num_channels) - - # stage 3 - self.stage3_cfg = self.extra['stage3'] - num_channels = self.stage3_cfg['num_channels'] - block_type = self.stage3_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition2 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage3, pre_stage_channels = self._make_stage( - self.stage3_cfg, num_channels) - - # stage 4 - self.stage4_cfg = self.extra['stage4'] - num_channels = self.stage4_cfg['num_channels'] - block_type = self.stage4_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition3 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage4, pre_stage_channels = self._make_stage( - self.stage4_cfg, num_channels, multiscale_output=multiscale_output) - - self._freeze_stages() - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: the normalization layer named "norm2" """ - return getattr(self, self.norm2_name) - - def _make_transition_layer(self, num_channels_pre_layer, - num_channels_cur_layer): - """Make transition layer.""" - num_branches_cur = len(num_channels_cur_layer) - num_branches_pre = len(num_channels_pre_layer) - - transition_layers = [] - for i in range(num_branches_cur): - if i < num_branches_pre: - if num_channels_cur_layer[i] != num_channels_pre_layer[i]: - transition_layers.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - num_channels_pre_layer[i], - num_channels_cur_layer[i], - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - num_channels_cur_layer[i])[1], - nn.ReLU(inplace=True))) - else: - transition_layers.append(None) - else: - conv_downsamples = [] - for j in range(i + 1 - num_branches_pre): - in_channels = num_channels_pre_layer[-1] - out_channels = num_channels_cur_layer[i] \ - if j == i - num_branches_pre else in_channels - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - out_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, out_channels)[1], - nn.ReLU(inplace=True))) - transition_layers.append(nn.Sequential(*conv_downsamples)) - - return nn.ModuleList(transition_layers) - - def _make_layer(self, block, inplanes, planes, blocks, stride=1): - """Make each layer.""" - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, planes * block.expansion)[1]) - - layers = [] - block_init_cfg = None - if self.pretrained is None and not hasattr( - self, 'init_cfg') and self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', 
val=0, override=dict(name='norm3')) - - layers.append( - block( - inplanes, - planes, - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=block_init_cfg)) - inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append( - block( - inplanes, - planes, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=block_init_cfg)) - - return Sequential(*layers) - - def _make_stage(self, layer_config, in_channels, multiscale_output=True): - """Make each stage.""" - num_modules = layer_config['num_modules'] - num_branches = layer_config['num_branches'] - num_blocks = layer_config['num_blocks'] - num_channels = layer_config['num_channels'] - block = self.blocks_dict[layer_config['block']] - - hr_modules = [] - block_init_cfg = None - if self.pretrained is None and not hasattr( - self, 'init_cfg') and self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm3')) - - for i in range(num_modules): - # multi_scale_output is only used for the last module - if not multiscale_output and i == num_modules - 1: - reset_multiscale_output = False - else: - reset_multiscale_output = True - - hr_modules.append( - HRModule( - num_branches, - block, - num_blocks, - in_channels, - num_channels, - reset_multiscale_output, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - block_init_cfg=block_init_cfg)) - - return Sequential(*hr_modules), in_channels - - def _freeze_stages(self): - """Freeze stages param and norm stats.""" - if self.frozen_stages >= 0: - - self.norm1.eval() - self.norm2.eval() - for m in [self.conv1, self.norm1, self.conv2, self.norm2]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - if i == 1: - m = getattr(self, f'layer{i}') - t = getattr(self, f'transition{i}') - elif i == 4: - m = getattr(self, f'stage{i}') - else: - m = getattr(self, f'stage{i}') - t = getattr(self, f'transition{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - t.eval() - for param in t.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.conv2(x) - x = self.norm2(x) - x = self.relu(x) - x = self.layer1(x) - - x_list = [] - for i in range(self.stage2_cfg['num_branches']): - if self.transition1[i] is not None: - x_list.append(self.transition1[i](x)) - else: - x_list.append(x) - y_list = self.stage2(x_list) - - x_list = [] - for i in range(self.stage3_cfg['num_branches']): - if self.transition2[i] is not None: - x_list.append(self.transition2[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage3(x_list) - - x_list = [] - for i in range(self.stage4_cfg['num_branches']): - if self.transition3[i] is not None: - x_list.append(self.transition3[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage4(x_list) - - return y_list - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(HRNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git 
a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/icnet.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/icnet.py deleted file mode 100644 index 10e54278..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/icnet.py +++ /dev/null @@ -1,165 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import BACKBONES, build_backbone -from ..decode_heads.psp_head import PPM - - -@BACKBONES.register_module() -class ICNet(BaseModule): - """ICNet for Real-Time Semantic Segmentation on High-Resolution Images. - - This backbone is the implementation of - `ICNet `_. - - Args: - backbone_cfg (dict): Config dict to build backbone. Usually it is - ResNet but it can also be other backbones. - in_channels (int): The number of input image channels. Default: 3. - layer_channels (Sequence[int]): The numbers of feature channels at - layer 2 and layer 4 in ResNet. It can also be other backbones. - Default: (512, 2048). - light_branch_middle_channels (int): The number of channels of the - middle layer in light branch. Default: 32. - psp_out_channels (int): The number of channels of the output of PSP - module. Default: 512. - out_channels (Sequence[int]): The numbers of output feature channels - at each branches. Default: (64, 256, 256). - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. Default: (1, 2, 3, 6). - conv_cfg (dict): Dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN'). - act_cfg (dict): Dictionary to construct and config act layer. - Default: dict(type='ReLU'). - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - backbone_cfg, - in_channels=3, - layer_channels=(512, 2048), - light_branch_middle_channels=32, - psp_out_channels=512, - out_channels=(64, 256, 256), - pool_scales=(1, 2, 3, 6), - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='ReLU'), - align_corners=False, - init_cfg=None): - if backbone_cfg is None: - raise TypeError('backbone_cfg must be passed from config file!') - if init_cfg is None: - init_cfg = [ - dict(type='Kaiming', mode='fan_out', layer='Conv2d'), - dict(type='Constant', val=1, layer='_BatchNorm'), - dict(type='Normal', mean=0.01, layer='Linear') - ] - super(ICNet, self).__init__(init_cfg=init_cfg) - self.align_corners = align_corners - self.backbone = build_backbone(backbone_cfg) - - # Note: Default `ceil_mode` is false in nn.MaxPool2d, set - # `ceil_mode=True` to keep information in the corner of feature map. 
- self.backbone.maxpool = nn.MaxPool2d( - kernel_size=3, stride=2, padding=1, ceil_mode=True) - - self.psp_modules = PPM( - pool_scales=pool_scales, - in_channels=layer_channels[1], - channels=psp_out_channels, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - align_corners=align_corners) - - self.psp_bottleneck = ConvModule( - layer_channels[1] + len(pool_scales) * psp_out_channels, - psp_out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - self.conv_sub1 = nn.Sequential( - ConvModule( - in_channels=in_channels, - out_channels=light_branch_middle_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg), - ConvModule( - in_channels=light_branch_middle_channels, - out_channels=light_branch_middle_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg), - ConvModule( - in_channels=light_branch_middle_channels, - out_channels=out_channels[0], - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) - - self.conv_sub2 = ConvModule( - layer_channels[0], - out_channels[1], - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg) - - self.conv_sub4 = ConvModule( - psp_out_channels, - out_channels[2], - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg) - - def forward(self, x): - output = [] - - # sub 1 - output.append(self.conv_sub1(x)) - - # sub 2 - x = resize( - x, - scale_factor=0.5, - mode='bilinear', - align_corners=self.align_corners) - x = self.backbone.stem(x) - x = self.backbone.maxpool(x) - x = self.backbone.layer1(x) - x = self.backbone.layer2(x) - output.append(self.conv_sub2(x)) - - # sub 4 - x = resize( - x, - scale_factor=0.5, - mode='bilinear', - align_corners=self.align_corners) - x = self.backbone.layer3(x) - x = self.backbone.layer4(x) - psp_outs = self.psp_modules(x) + [x] - psp_outs = torch.cat(psp_outs, dim=1) - x = self.psp_bottleneck(psp_outs) - - output.append(self.conv_sub4(x)) - - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mit.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mit.py deleted file mode 100644 index c97213a4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mit.py +++ /dev/null @@ -1,431 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -import warnings - -import torch -import torch.nn as nn -from mmcv.cnn import Conv2d, build_activation_layer, build_norm_layer -from mmcv.cnn.bricks.drop import build_dropout -from mmcv.cnn.bricks.transformer import MultiheadAttention -from mmcv.cnn.utils.weight_init import (constant_init, normal_init, - trunc_normal_init) -from mmcv.runner import BaseModule, ModuleList, Sequential - -from ..builder import BACKBONES -from ..utils import PatchEmbed, nchw_to_nlc, nlc_to_nchw - - -class MixFFN(BaseModule): - """An implementation of MixFFN of Segformer. - - The differences between MixFFN & FFN: - 1. Use 1X1 Conv to replace Linear layer. - 2. Introduce 3X3 Conv to encode positional information. - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. Defaults: 256. - feedforward_channels (int): The hidden dimension of FFNs. - Defaults: 1024. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='ReLU') - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. 
- Default: None. - """ - - def __init__(self, - embed_dims, - feedforward_channels, - act_cfg=dict(type='GELU'), - ffn_drop=0., - dropout_layer=None, - init_cfg=None): - super(MixFFN, self).__init__(init_cfg) - - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.act_cfg = act_cfg - self.activate = build_activation_layer(act_cfg) - - in_channels = embed_dims - fc1 = Conv2d( - in_channels=in_channels, - out_channels=feedforward_channels, - kernel_size=1, - stride=1, - bias=True) - # 3x3 depth wise conv to provide positional encode information - pe_conv = Conv2d( - in_channels=feedforward_channels, - out_channels=feedforward_channels, - kernel_size=3, - stride=1, - padding=(3 - 1) // 2, - bias=True, - groups=feedforward_channels) - fc2 = Conv2d( - in_channels=feedforward_channels, - out_channels=in_channels, - kernel_size=1, - stride=1, - bias=True) - drop = nn.Dropout(ffn_drop) - layers = [fc1, pe_conv, self.activate, drop, fc2, drop] - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - - def forward(self, x, hw_shape, identity=None): - out = nlc_to_nchw(x, hw_shape) - out = self.layers(out) - out = nchw_to_nlc(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -class EfficientMultiheadAttention(MultiheadAttention): - """An implementation of Efficient Multi-head Attention of Segformer. - - This module is modified from MultiheadAttention which is a module from - mmcv.cnn.bricks.transformer. - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Default: False. - qkv_bias (bool): enable bias for qkv if True. Default True. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - sr_ratio (int): The ratio of spatial reduction of Efficient Multi-head - Attention of Segformer. Default: 1. - """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=None, - init_cfg=None, - batch_first=True, - qkv_bias=False, - norm_cfg=dict(type='LN'), - sr_ratio=1): - super().__init__( - embed_dims, - num_heads, - attn_drop, - proj_drop, - dropout_layer=dropout_layer, - init_cfg=init_cfg, - batch_first=batch_first, - bias=qkv_bias) - - self.sr_ratio = sr_ratio - if sr_ratio > 1: - self.sr = Conv2d( - in_channels=embed_dims, - out_channels=embed_dims, - kernel_size=sr_ratio, - stride=sr_ratio) - # The ret[0] of build_norm_layer is norm name. - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - - # handle the BC-breaking from https://github.com/open-mmlab/mmcv/pull/1418 # noqa - from mmseg import digit_version, mmcv_version - if mmcv_version < digit_version('1.3.17'): - warnings.warn('The legacy version of forward function in' - 'EfficientMultiheadAttention is deprecated in' - 'mmcv>=1.3.17 and will no longer support in the' - 'future. 
Please upgrade your mmcv.') - self.forward = self.legacy_forward - - def forward(self, x, hw_shape, identity=None): - - x_q = x - if self.sr_ratio > 1: - x_kv = nlc_to_nchw(x, hw_shape) - x_kv = self.sr(x_kv) - x_kv = nchw_to_nlc(x_kv) - x_kv = self.norm(x_kv) - else: - x_kv = x - - if identity is None: - identity = x_q - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - x_q = x_q.transpose(0, 1) - x_kv = x_kv.transpose(0, 1) - - out = self.attn(query=x_q, key=x_kv, value=x_kv)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - def legacy_forward(self, x, hw_shape, identity=None): - """multi head attention forward in mmcv version < 1.3.17.""" - - x_q = x - if self.sr_ratio > 1: - x_kv = nlc_to_nchw(x, hw_shape) - x_kv = self.sr(x_kv) - x_kv = nchw_to_nlc(x_kv) - x_kv = self.norm(x_kv) - else: - x_kv = x - - if identity is None: - identity = x_q - - # `need_weights=True` will let nn.MultiHeadAttention - # `return attn_output, attn_output_weights.sum(dim=1) / num_heads` - # The `attn_output_weights.sum(dim=1)` may cause cuda error. So, we set - # `need_weights=False` to ignore `attn_output_weights.sum(dim=1)`. - # This issue - `https://github.com/pytorch/pytorch/issues/37583` report - # the error that large scale tensor sum operation may cause cuda error. - out = self.attn(query=x_q, key=x_kv, value=x_kv, need_weights=False)[0] - - return identity + self.dropout_layer(self.proj_drop(out)) - - -class TransformerEncoderLayer(BaseModule): - """Implements one encoder layer in Segformer. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - drop_rate (float): Probability of an element to be zeroed. - after the feed forward layer. Default 0.0. - attn_drop_rate (float): The drop out rate for attention layer. - Default 0.0. - drop_path_rate (float): stochastic depth rate. Default 0.0. - qkv_bias (bool): enable bias for qkv if True. - Default: True. - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Default: False. - init_cfg (dict, optional): Initialization config dict. - Default:None. - sr_ratio (int): The ratio of spatial reduction of Efficient Multi-head - Attention of Segformer. Default: 1. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - qkv_bias=True, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - batch_first=True, - sr_ratio=1): - super(TransformerEncoderLayer, self).__init__() - - # The ret[0] of build_norm_layer is norm name. 
- self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] - - self.attn = EfficientMultiheadAttention( - embed_dims=embed_dims, - num_heads=num_heads, - attn_drop=attn_drop_rate, - proj_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - batch_first=batch_first, - qkv_bias=qkv_bias, - norm_cfg=norm_cfg, - sr_ratio=sr_ratio) - - # The ret[0] of build_norm_layer is norm name. - self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] - - self.ffn = MixFFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg) - - def forward(self, x, hw_shape): - x = self.attn(self.norm1(x), hw_shape, identity=x) - x = self.ffn(self.norm2(x), hw_shape, identity=x) - return x - - -@BACKBONES.register_module() -class MixVisionTransformer(BaseModule): - """The backbone of Segformer. - - This backbone is the implementation of `SegFormer: Simple and - Efficient Design for Semantic Segmentation with - Transformers `_. - Args: - in_channels (int): Number of input channels. Default: 3. - embed_dims (int): Embedding dimension. Default: 768. - num_stags (int): The num of stages. Default: 4. - num_layers (Sequence[int]): The layer number of each transformer encode - layer. Default: [3, 4, 6, 3]. - num_heads (Sequence[int]): The attention heads of each transformer - encode layer. Default: [1, 2, 4, 8]. - patch_sizes (Sequence[int]): The patch_size of each overlapped patch - embedding. Default: [7, 3, 3, 3]. - strides (Sequence[int]): The stride of each overlapped patch embedding. - Default: [4, 2, 2, 2]. - sr_ratios (Sequence[int]): The spatial reduction rate of each - transformer encode layer. Default: [8, 4, 2, 1]. - out_indices (Sequence[int] | int): Output from which stages. - Default: (0, 1, 2, 3). - mlp_ratio (int): ratio of mlp hidden dim to embedding dim. - Default: 4. - qkv_bias (bool): Enable bias for qkv if True. Default: True. - drop_rate (float): Probability of an element to be zeroed. - Default 0.0 - attn_drop_rate (float): The drop out rate for attention layer. - Default 0.0 - drop_path_rate (float): stochastic depth rate. Default 0.0 - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN') - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - pretrained (str, optional): model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - in_channels=3, - embed_dims=64, - num_stages=4, - num_layers=[3, 4, 6, 3], - num_heads=[1, 2, 4, 8], - patch_sizes=[7, 3, 3, 3], - strides=[4, 2, 2, 2], - sr_ratios=[8, 4, 2, 1], - out_indices=(0, 1, 2, 3), - mlp_ratio=4, - qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN', eps=1e-6), - pretrained=None, - init_cfg=None): - super(MixVisionTransformer, self).__init__(init_cfg=init_cfg) - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be set at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is not None: - raise TypeError('pretrained must be a str or None') - - self.embed_dims = embed_dims - self.num_stages = num_stages - self.num_layers = num_layers - self.num_heads = num_heads - self.patch_sizes = patch_sizes - self.strides = strides - self.sr_ratios = sr_ratios - assert num_stages == len(num_layers) == len(num_heads) \ - == len(patch_sizes) == len(strides) == len(sr_ratios) - - self.out_indices = out_indices - assert max(out_indices) < self.num_stages - - # transformer encoder - dpr = [ - x.item() - for x in torch.linspace(0, drop_path_rate, sum(num_layers)) - ] # stochastic num_layer decay rule - - cur = 0 - self.layers = ModuleList() - for i, num_layer in enumerate(num_layers): - embed_dims_i = embed_dims * num_heads[i] - patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims_i, - kernel_size=patch_sizes[i], - stride=strides[i], - padding=patch_sizes[i] // 2, - norm_cfg=norm_cfg) - layer = ModuleList([ - TransformerEncoderLayer( - embed_dims=embed_dims_i, - num_heads=num_heads[i], - feedforward_channels=mlp_ratio * embed_dims_i, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[cur + idx], - qkv_bias=qkv_bias, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - sr_ratio=sr_ratios[i]) for idx in range(num_layer) - ]) - in_channels = embed_dims_i - # The ret[0] of build_norm_layer is norm name. - norm = build_norm_layer(norm_cfg, embed_dims_i)[1] - self.layers.append(ModuleList([patch_embed, layer, norm])) - cur += num_layer - - def init_weights(self): - if self.init_cfg is None: - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) - elif isinstance(m, nn.LayerNorm): - constant_init(m, val=1.0, bias=0.) - elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[ - 1] * m.out_channels - fan_out //= m.groups - normal_init( - m, mean=0, std=math.sqrt(2.0 / fan_out), bias=0) - else: - super(MixVisionTransformer, self).init_weights() - - def forward(self, x): - outs = [] - - for i, layer in enumerate(self.layers): - x, hw_shape = layer[0](x) - for block in layer[1]: - x = block(x, hw_shape) - x = layer[2](x) - x = nlc_to_nchw(x, hw_shape) - if i in self.out_indices: - outs.append(x) - - return outs diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mobilenet_v2.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mobilenet_v2.py deleted file mode 100644 index cbb9c6cd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mobilenet_v2.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidual, make_divisible - - -@BACKBONES.register_module() -class MobileNetV2(BaseModule): - """MobileNetV2 backbone. - - This backbone is the implementation of - `MobileNetV2: Inverted Residuals and Linear Bottlenecks - `_. - - Args: - widen_factor (float): Width multiplier, multiply number of - channels in each layer by this amount. Default: 1.0. - strides (Sequence[int], optional): Strides of the first block of each - layer. If not specified, default config in ``arch_setting`` will - be used. - dilations (Sequence[int]): Dilation of each layer. - out_indices (None or Sequence[int]): Output from which stages. - Default: (7, ). - frozen_stages (int): Stages to be frozen (all param fixed). - Default: -1, which means not freezing any parameters. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU6'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - # Parameters to build layers. 3 parameters are needed to construct a - # layer, from left to right: expand_ratio, channel, num_blocks. - arch_settings = [[1, 16, 1], [6, 24, 2], [6, 32, 3], [6, 64, 4], - [6, 96, 3], [6, 160, 3], [6, 320, 1]] - - def __init__(self, - widen_factor=1., - strides=(1, 2, 2, 2, 1, 2, 1), - dilations=(1, 1, 1, 1, 1, 1, 1), - out_indices=(1, 2, 4, 6), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU6'), - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - super(MobileNetV2, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - self.widen_factor = widen_factor - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == len(self.arch_settings) - self.out_indices = out_indices - for index in out_indices: - if index not in range(0, 7): - raise ValueError('the item in out_indices must in ' - f'range(0, 7). But received {index}') - - if frozen_stages not in range(-1, 7): - raise ValueError('frozen_stages must be in range(-1, 7). 
' - f'But received {frozen_stages}') - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - - self.in_channels = make_divisible(32 * widen_factor, 8) - - self.conv1 = ConvModule( - in_channels=3, - out_channels=self.in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.layers = [] - - for i, layer_cfg in enumerate(self.arch_settings): - expand_ratio, channel, num_blocks = layer_cfg - stride = self.strides[i] - dilation = self.dilations[i] - out_channels = make_divisible(channel * widen_factor, 8) - inverted_res_layer = self.make_layer( - out_channels=out_channels, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - expand_ratio=expand_ratio) - layer_name = f'layer{i + 1}' - self.add_module(layer_name, inverted_res_layer) - self.layers.append(layer_name) - - def make_layer(self, out_channels, num_blocks, stride, dilation, - expand_ratio): - """Stack InvertedResidual blocks to build a layer for MobileNetV2. - - Args: - out_channels (int): out_channels of block. - num_blocks (int): Number of blocks. - stride (int): Stride of the first block. - dilation (int): Dilation of the first block. - expand_ratio (int): Expand the number of channels of the - hidden layer in InvertedResidual by this ratio. - """ - layers = [] - for i in range(num_blocks): - layers.append( - InvertedResidual( - self.in_channels, - out_channels, - stride if i == 0 else 1, - expand_ratio=expand_ratio, - dilation=dilation if i == 0 else 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - with_cp=self.with_cp)) - self.in_channels = out_channels - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for i in range(1, self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(MobileNetV2, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mobilenet_v3.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mobilenet_v3.py deleted file mode 100644 index dd3d6eb1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/mobilenet_v3.py +++ /dev/null @@ -1,267 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -from mmcv.cnn import ConvModule -from mmcv.cnn.bricks import Conv2dAdaptivePadding -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidualV3 as InvertedResidual - - -@BACKBONES.register_module() -class MobileNetV3(BaseModule): - """MobileNetV3 backbone. - - This backbone is the improved implementation of `Searching for MobileNetV3 - `_. - - Args: - arch (str): Architecture of mobilnetv3, from {'small', 'large'}. - Default: 'small'. 
- conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - out_indices (tuple[int]): Output from which layer. - Default: (0, 1, 12). - frozen_stages (int): Stages to be frozen (all param fixed). - Default: -1, which means not freezing any parameters. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save - some memory while slowing down the training speed. - Default: False. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - # Parameters to build each block: - # [kernel size, mid channels, out channels, with_se, act type, stride] - arch_settings = { - 'small': [[3, 16, 16, True, 'ReLU', 2], # block0 layer1 os=4 - [3, 72, 24, False, 'ReLU', 2], # block1 layer2 os=8 - [3, 88, 24, False, 'ReLU', 1], - [5, 96, 40, True, 'HSwish', 2], # block2 layer4 os=16 - [5, 240, 40, True, 'HSwish', 1], - [5, 240, 40, True, 'HSwish', 1], - [5, 120, 48, True, 'HSwish', 1], # block3 layer7 os=16 - [5, 144, 48, True, 'HSwish', 1], - [5, 288, 96, True, 'HSwish', 2], # block4 layer9 os=32 - [5, 576, 96, True, 'HSwish', 1], - [5, 576, 96, True, 'HSwish', 1]], - 'large': [[3, 16, 16, False, 'ReLU', 1], # block0 layer1 os=2 - [3, 64, 24, False, 'ReLU', 2], # block1 layer2 os=4 - [3, 72, 24, False, 'ReLU', 1], - [5, 72, 40, True, 'ReLU', 2], # block2 layer4 os=8 - [5, 120, 40, True, 'ReLU', 1], - [5, 120, 40, True, 'ReLU', 1], - [3, 240, 80, False, 'HSwish', 2], # block3 layer7 os=16 - [3, 200, 80, False, 'HSwish', 1], - [3, 184, 80, False, 'HSwish', 1], - [3, 184, 80, False, 'HSwish', 1], - [3, 480, 112, True, 'HSwish', 1], # block4 layer11 os=16 - [3, 672, 112, True, 'HSwish', 1], - [5, 672, 160, True, 'HSwish', 2], # block5 layer13 os=32 - [5, 960, 160, True, 'HSwish', 1], - [5, 960, 160, True, 'HSwish', 1]] - } # yapf: disable - - def __init__(self, - arch='small', - conv_cfg=None, - norm_cfg=dict(type='BN'), - out_indices=(0, 1, 12), - frozen_stages=-1, - reduction_factor=1, - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - super(MobileNetV3, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - assert arch in self.arch_settings - assert isinstance(reduction_factor, int) and reduction_factor > 0 - assert mmcv.is_tuple_of(out_indices, int) - for index in out_indices: - if index not in range(0, len(self.arch_settings[arch]) + 2): - raise ValueError( - 'the item in out_indices must in ' - f'range(0, {len(self.arch_settings[arch])+2}). 
' - f'But received {index}') - - if frozen_stages not in range(-1, len(self.arch_settings[arch]) + 2): - raise ValueError('frozen_stages must be in range(-1, ' - f'{len(self.arch_settings[arch])+2}). ' - f'But received {frozen_stages}') - self.arch = arch - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.reduction_factor = reduction_factor - self.norm_eval = norm_eval - self.with_cp = with_cp - self.layers = self._make_layer() - - def _make_layer(self): - layers = [] - - # build the first layer (layer0) - in_channels = 16 - layer = ConvModule( - in_channels=3, - out_channels=in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=dict(type='Conv2dAdaptivePadding'), - norm_cfg=self.norm_cfg, - act_cfg=dict(type='HSwish')) - self.add_module('layer0', layer) - layers.append('layer0') - - layer_setting = self.arch_settings[self.arch] - for i, params in enumerate(layer_setting): - (kernel_size, mid_channels, out_channels, with_se, act, - stride) = params - - if self.arch == 'large' and i >= 12 or self.arch == 'small' and \ - i >= 8: - mid_channels = mid_channels // self.reduction_factor - out_channels = out_channels // self.reduction_factor - - if with_se: - se_cfg = dict( - channels=mid_channels, - ratio=4, - act_cfg=(dict(type='ReLU'), - dict(type='HSigmoid', bias=3.0, divisor=6.0))) - else: - se_cfg = None - - layer = InvertedResidual( - in_channels=in_channels, - out_channels=out_channels, - mid_channels=mid_channels, - kernel_size=kernel_size, - stride=stride, - se_cfg=se_cfg, - with_expand_conv=(in_channels != mid_channels), - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=dict(type=act), - with_cp=self.with_cp) - in_channels = out_channels - layer_name = 'layer{}'.format(i + 1) - self.add_module(layer_name, layer) - layers.append(layer_name) - - # build the last layer - # block5 layer12 os=32 for small model - # block6 layer16 os=32 for large model - layer = ConvModule( - in_channels=in_channels, - out_channels=576 if self.arch == 'small' else 960, - kernel_size=1, - stride=1, - dilation=4, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=dict(type='HSwish')) - layer_name = 'layer{}'.format(len(layer_setting) + 1) - self.add_module(layer_name, layer) - layers.append(layer_name) - - # next, convert backbone MobileNetV3 to a semantic segmentation version - if self.arch == 'small': - self.layer4.depthwise_conv.conv.stride = (1, 1) - self.layer9.depthwise_conv.conv.stride = (1, 1) - for i in range(4, len(layers)): - layer = getattr(self, layers[i]) - if isinstance(layer, InvertedResidual): - modified_module = layer.depthwise_conv.conv - else: - modified_module = layer.conv - - if i < 9: - modified_module.dilation = (2, 2) - pad = 2 - else: - modified_module.dilation = (4, 4) - pad = 4 - - if not isinstance(modified_module, Conv2dAdaptivePadding): - # Adjust padding - pad *= (modified_module.kernel_size[0] - 1) // 2 - modified_module.padding = (pad, pad) - else: - self.layer7.depthwise_conv.conv.stride = (1, 1) - self.layer13.depthwise_conv.conv.stride = (1, 1) - for i in range(7, len(layers)): - layer = getattr(self, layers[i]) - if isinstance(layer, InvertedResidual): - modified_module = layer.depthwise_conv.conv - else: - modified_module = layer.conv - - if i < 13: - modified_module.dilation = (2, 2) - pad = 2 - else: - modified_module.dilation = (4, 4) - pad = 4 - - if not isinstance(modified_module, Conv2dAdaptivePadding): - # Adjust padding - pad *= 
(modified_module.kernel_size[0] - 1) // 2 - modified_module.padding = (pad, pad) - - return layers - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - return outs - - def _freeze_stages(self): - for i in range(self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(MobileNetV3, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnest.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnest.py deleted file mode 100644 index 91952c2c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnest.py +++ /dev/null @@ -1,318 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNetV1d - - -class RSoftmax(nn.Module): - """Radix Softmax module in ``SplitAttentionConv2d``. - - Args: - radix (int): Radix of input. - groups (int): Groups of input. - """ - - def __init__(self, radix, groups): - super().__init__() - self.radix = radix - self.groups = groups - - def forward(self, x): - batch = x.size(0) - if self.radix > 1: - x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2) - x = F.softmax(x, dim=1) - x = x.reshape(batch, -1) - else: - x = torch.sigmoid(x) - return x - - -class SplitAttentionConv2d(nn.Module): - """Split-Attention Conv2d in ResNeSt. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int | tuple[int]): Same as nn.Conv2d. - stride (int | tuple[int]): Same as nn.Conv2d. - padding (int | tuple[int]): Same as nn.Conv2d. - dilation (int | tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels. Default: 4. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - dcn (dict): Config dict for DCN. Default: None. 
- """ - - def __init__(self, - in_channels, - channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - radix=2, - reduction_factor=4, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None): - super(SplitAttentionConv2d, self).__init__() - inter_channels = max(in_channels * radix // reduction_factor, 32) - self.radix = radix - self.groups = groups - self.channels = channels - self.with_dcn = dcn is not None - self.dcn = dcn - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_dcn and not fallback_on_stride: - assert conv_cfg is None, 'conv_cfg must be None for DCN' - conv_cfg = dcn - self.conv = build_conv_layer( - conv_cfg, - in_channels, - channels * radix, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups * radix, - bias=False) - self.norm0_name, norm0 = build_norm_layer( - norm_cfg, channels * radix, postfix=0) - self.add_module(self.norm0_name, norm0) - self.relu = nn.ReLU(inplace=True) - self.fc1 = build_conv_layer( - None, channels, inter_channels, 1, groups=self.groups) - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, inter_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.fc2 = build_conv_layer( - None, inter_channels, channels * radix, 1, groups=self.groups) - self.rsoftmax = RSoftmax(radix, groups) - - @property - def norm0(self): - """nn.Module: the normalization layer named "norm0" """ - return getattr(self, self.norm0_name) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def forward(self, x): - x = self.conv(x) - x = self.norm0(x) - x = self.relu(x) - - batch, rchannel = x.shape[:2] - batch = x.size(0) - if self.radix > 1: - splits = x.view(batch, self.radix, -1, *x.shape[2:]) - gap = splits.sum(dim=1) - else: - gap = x - gap = F.adaptive_avg_pool2d(gap, 1) - gap = self.fc1(gap) - - gap = self.norm1(gap) - gap = self.relu(gap) - - atten = self.fc2(gap) - atten = self.rsoftmax(atten).view(batch, -1, 1, 1) - - if self.radix > 1: - attens = atten.view(batch, self.radix, -1, *atten.shape[2:]) - out = torch.sum(attens * splits, dim=1) - else: - out = atten * x - return out.contiguous() - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeSt. - - Args: - inplane (int): Input planes of this block. - planes (int): Middle planes of this block. - groups (int): Groups of conv2. - width_per_group (int): Width per group of conv2. 64x4d indicates - ``groups=64, width_per_group=4`` and 32x8d indicates - ``groups=32, width_per_group=8``. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Key word arguments for base class. 
- """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - """Bottleneck block for ResNeSt.""" - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.avg_down_stride = avg_down_stride and self.conv2_stride > 1 - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - self.with_modulated_dcn = False - self.conv2 = SplitAttentionConv2d( - width, - width, - kernel_size=3, - stride=1 if self.avg_down_stride else self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - radix=radix, - reduction_factor=reduction_factor, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=self.dcn) - delattr(self, self.norm2_name) - - if self.avg_down_stride: - self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - def forward(self, x): - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - - if self.avg_down_stride: - out = self.avd_layer(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNeSt(ResNetV1d): - """ResNeSt backbone. - - This backbone is the implementation of `ResNeSt: - Split-Attention Networks `_. - - Args: - groups (int): Number of groups of Bottleneck. Default: 1 - base_width (int): Base width of Bottleneck. Default: 4 - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Keyword arguments for ResNet. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)), - 200: (Bottleneck, (3, 24, 36, 3)) - } - - def __init__(self, - groups=1, - base_width=4, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - self.groups = groups - self.base_width = base_width - self.radix = radix - self.reduction_factor = reduction_factor - self.avg_down_stride = avg_down_stride - super(ResNeSt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - radix=self.radix, - reduction_factor=self.reduction_factor, - avg_down_stride=self.avg_down_stride, - **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnet.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnet.py deleted file mode 100644 index e8b961d5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnet.py +++ /dev/null @@ -1,714 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer, build_plugin_layer -from mmcv.runner import BaseModule -from mmcv.utils.parrots_wrapper import _BatchNorm - -from ..builder import BACKBONES -from ..utils import ResLayer - - -class BasicBlock(BaseModule): - """Basic block for ResNet.""" - - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - super(BasicBlock, self).__init__(init_cfg) - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=False) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Bottleneck(BaseModule): - """Bottleneck block for ResNet. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is - "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - super(Bottleneck, self).__init__(init_cfg) - assert style in ['pytorch', 'caffe'] - assert dcn is None or isinstance(dcn, dict) - assert plugins is None or isinstance(plugins, list) - if plugins is not None: - allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] - assert all(p['position'] in allowed_position for p in plugins) - - self.inplanes = inplanes - self.planes = planes - self.stride = stride - self.dilation = dilation - self.style = style - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dcn = dcn - self.with_dcn = dcn is not None - self.plugins = plugins - self.with_plugins = plugins is not None - - if self.with_plugins: - # collect plugins for conv1/conv2/conv3 - self.after_conv1_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_conv3_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv3' - ] - - if self.style == 'pytorch': - self.conv1_stride = 1 - self.conv2_stride = stride - else: - self.conv1_stride = stride - self.conv2_stride = 1 - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - norm_cfg, planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - conv_cfg, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - dcn, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - conv_cfg, - planes, - planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - - if self.with_plugins: - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - planes * self.expansion, self.after_conv3_plugins) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. 
- """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - """Forward function for plugins.""" - out = x - for name in plugin_names: - out = getattr(self, name)(x) - return out - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - @property - def norm3(self): - """nn.Module: normalization layer after the third convolution layer""" - return getattr(self, self.norm3_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNet(BaseModule): - """ResNet backbone. - - This backbone is the improved implementation of `Deep Residual Learning - for Image Recognition `_. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Default: 3. - stem_channels (int): Number of stem channels. Default: 64. - base_channels (int): Number of base channels of res layer. Default: 64. - num_stages (int): Resnet stages, normally 4. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - Default: (1, 2, 2, 2). - dilations (Sequence[int]): Dilation of each stage. - Default: (1, 1, 1, 1). - out_indices (Sequence[int]): Output from which stages. - Default: (0, 1, 2, 3). - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. Default: 'pytorch'. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv. - Default: False. - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. Default: -1. - conv_cfg (dict | None): Dictionary to construct and config conv layer. - When conv_cfg is None, cfg will be set to dict(type='Conv2d'). - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. 
- dcn (dict | None): Dictionary to construct and config DCN conv layer. - When dcn is not None, conv_cfg must be None. Default: None. - stage_with_dcn (Sequence[bool]): Whether to set DCN conv for each - stage. The length of stage_with_dcn is equal to num_stages. - Default: (False, False, False, False). - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - - position (str, required): Position inside block to insert plugin, - options: 'after_conv1', 'after_conv2', 'after_conv3'. - - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - Default: None. - multi_grid (Sequence[int]|None): Multi grid dilation rates of last - stage. Default: None. - contract_dilation (bool): Whether contract first dilation of each layer - Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. Default: True. - pretrained (str, optional): model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - - Example: - >>> from mmseg.models import ResNet - >>> import torch - >>> self = ResNet(depth=18) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 64, 8, 8) - (1, 128, 4, 4) - (1, 256, 2, 2) - (1, 512, 1, 1) - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - in_channels=3, - stem_channels=64, - base_channels=64, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=False, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - multi_grid=None, - contract_dilation=False, - with_cp=False, - zero_init_residual=True, - pretrained=None, - init_cfg=None): - super(ResNet, self).__init__(init_cfg) - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - - self.pretrained = pretrained - self.zero_init_residual = zero_init_residual - block_init_cfg = None - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - block = self.arch_settings[depth][0] - if self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm3')) - else: - raise TypeError('pretrained must be a str or None') - - self.depth = depth - self.stem_channels = stem_channels - self.base_channels = 
base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.multi_grid = multi_grid - self.contract_dilation = contract_dilation - self.block, stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - self.inplanes = stem_channels - - self._make_stem_layer(in_channels, stem_channels) - - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = strides[i] - dilation = dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - if plugins is not None: - stage_plugins = self.make_stage_plugins(plugins, i) - else: - stage_plugins = None - # multi grid is applied to last layer only - stage_multi_grid = multi_grid if i == len( - self.stage_blocks) - 1 else None - planes = base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=stage_plugins, - multi_grid=stage_multi_grid, - contract_dilation=contract_dilation, - init_cfg=block_init_cfg) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i+1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = self.block.expansion * base_channels * 2**( - len(self.stage_blocks) - 1) - - def make_stage_plugins(self, plugins, stage_idx): - """make plugins for ResNet 'stage_idx'th stage . - - Currently we support to insert 'context_block', - 'empirical_attention_block', 'nonlocal_block' into the backbone like - ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of - Bottleneck. - - An example of plugins format could be : - >>> plugins=[ - ... dict(cfg=dict(type='xxx', arg1='xxx'), - ... stages=(False, True, True, True), - ... position='after_conv2'), - ... dict(cfg=dict(type='yyy'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='1'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='2'), - ... stages=(True, True, True, True), - ... position='after_conv3') - ... ] - >>> self = ResNet(depth=18) - >>> stage_plugins = self.make_stage_plugins(plugins, 0) - >>> assert len(stage_plugins) == 3 - - Suppose 'stage_idx=0', the structure of blocks in the stage would be: - conv1-> conv2->conv3->yyy->zzz1->zzz2 - Suppose 'stage_idx=1', the structure of blocks in the stage would be: - conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 - - If stages is missing, the plugin would be applied to all stages. - - Args: - plugins (list[dict]): List of plugins cfg to build. The postfix is - required if multiple same type plugins are inserted. 
- stage_idx (int): Index of stage to build - - Returns: - list[dict]: Plugins for current stage - """ - stage_plugins = [] - for plugin in plugins: - plugin = plugin.copy() - stages = plugin.pop('stages', None) - assert stages is None or len(stages) == self.num_stages - # whether to insert plugin into current stage - if stages is None or stages[stage_idx]: - stage_plugins.append(plugin) - - return stage_plugins - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer(**kwargs) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def _make_stem_layer(self, in_channels, stem_channels): - """Make stem layer for ResNet.""" - if self.deep_stem: - self.stem = nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels)[1], - nn.ReLU(inplace=True)) - else: - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, stem_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - def _freeze_stages(self): - """Freeze stages param and norm stats.""" - if self.frozen_stages >= 0: - if self.deep_stem: - self.stem.eval() - for param in self.stem.parameters(): - param.requires_grad = False - else: - self.norm1.eval() - for m in [self.conv1, self.norm1]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'layer{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(ResNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - -@BACKBONES.register_module() -class ResNetV1c(ResNet): - """ResNetV1c variant described in [1]_. - - Compared with default ResNet(ResNetV1b), ResNetV1c replaces the 7x7 conv in - the input stem with three 3x3 convs. For more details please refer to `Bag - of Tricks for Image Classification with Convolutional Neural Networks - `_. 
- """ - - def __init__(self, **kwargs): - super(ResNetV1c, self).__init__( - deep_stem=True, avg_down=False, **kwargs) - - -@BACKBONES.register_module() -class ResNetV1d(ResNet): - """ResNetV1d variant described in [1]_. - - Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in - the input stem with three 3x3 convs. And in the downsampling block, a 2x2 - avg_pool with stride 2 is added before conv, whose stride is changed to 1. - """ - - def __init__(self, **kwargs): - super(ResNetV1d, self).__init__( - deep_stem=True, avg_down=True, **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnext.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnext.py deleted file mode 100644 index 805c27bf..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/resnext.py +++ /dev/null @@ -1,150 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is - "caffe", the stride-two layer is the first 1x1 conv layer. - """ - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@BACKBONES.register_module() -class ResNeXt(ResNet): - """ResNeXt backbone. - - This backbone is the implementation of `Aggregated - Residual Transformations for Deep Neural - Networks `_. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Normally 3. - num_stages (int): Resnet stages, normally 4. - groups (int): Group of resnext. - base_width (int): Base width of resnext. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. 
- out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from mmseg.models import ResNeXt - >>> import torch - >>> self = ResNeXt(depth=50) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 256, 8, 8) - (1, 512, 4, 4) - (1, 1024, 2, 2) - (1, 2048, 1, 1) - """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/stdc.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/stdc.py deleted file mode 100644 index 04f2f7a2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/stdc.py +++ /dev/null @@ -1,422 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Modified from https://github.com/MichaelFan01/STDC-Seg.""" -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner.base_module import BaseModule, ModuleList, Sequential - -from mmseg.ops import resize -from ..builder import BACKBONES, build_backbone -from .bisenetv1 import AttentionRefinementModule - - -class STDCModule(BaseModule): - """STDCModule. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels before scaling. - stride (int): The number of stride for the first conv layer. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): The activation config for conv layers. - num_convs (int): Numbers of conv layers. - fusion_type (str): Type of fusion operation. Default: 'add'. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - stride, - norm_cfg=None, - act_cfg=None, - num_convs=4, - fusion_type='add', - init_cfg=None): - super(STDCModule, self).__init__(init_cfg=init_cfg) - assert num_convs > 1 - assert fusion_type in ['add', 'cat'] - self.stride = stride - self.with_downsample = True if self.stride == 2 else False - self.fusion_type = fusion_type - - self.layers = ModuleList() - conv_0 = ConvModule( - in_channels, out_channels // 2, kernel_size=1, norm_cfg=norm_cfg) - - if self.with_downsample: - self.downsample = ConvModule( - out_channels // 2, - out_channels // 2, - kernel_size=3, - stride=2, - padding=1, - groups=out_channels // 2, - norm_cfg=norm_cfg, - act_cfg=None) - - if self.fusion_type == 'add': - self.layers.append(nn.Sequential(conv_0, self.downsample)) - self.skip = Sequential( - ConvModule( - in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=1, - groups=in_channels, - norm_cfg=norm_cfg, - act_cfg=None), - ConvModule( - in_channels, - out_channels, - 1, - norm_cfg=norm_cfg, - act_cfg=None)) - else: - self.layers.append(conv_0) - self.skip = nn.AvgPool2d(kernel_size=3, stride=2, padding=1) - else: - self.layers.append(conv_0) - - for i in range(1, num_convs): - out_factor = 2**(i + 1) if i != num_convs - 1 else 2**i - self.layers.append( - ConvModule( - out_channels // 2**i, - out_channels // out_factor, - kernel_size=3, - stride=1, - padding=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, inputs): - if self.fusion_type == 'add': - out = self.forward_add(inputs) - else: - out = self.forward_cat(inputs) - return out - - def forward_add(self, inputs): - layer_outputs = [] - x = inputs.clone() - for layer in self.layers: - x = layer(x) - layer_outputs.append(x) - if self.with_downsample: - inputs = self.skip(inputs) - - return torch.cat(layer_outputs, dim=1) + inputs - - def forward_cat(self, inputs): - x0 = self.layers[0](inputs) - layer_outputs = [x0] - for i, layer in enumerate(self.layers[1:]): - if i == 0: - if self.with_downsample: - x = layer(self.downsample(x0)) - else: - x = layer(x0) - else: - x = layer(x) - layer_outputs.append(x) - if self.with_downsample: - layer_outputs[0] = self.skip(x0) - return torch.cat(layer_outputs, dim=1) - - -class FeatureFusionModule(BaseModule): - """Feature Fusion Module. This module is different from FeatureFusionModule - in BiSeNetV1. It uses two ConvModules in `self.attention` whose inter - channel number is calculated by given `scale_factor`, while - FeatureFusionModule in BiSeNetV1 only uses one ConvModule in - `self.conv_atten`. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - scale_factor (int): The number of channel scale factor. - Default: 4. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): The activation config for conv layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - scale_factor=4, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(FeatureFusionModule, self).__init__(init_cfg=init_cfg) - channels = out_channels // scale_factor - self.conv0 = ConvModule( - in_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=act_cfg) - self.attention = nn.Sequential( - nn.AdaptiveAvgPool2d((1, 1)), - ConvModule( - out_channels, - channels, - 1, - norm_cfg=None, - bias=False, - act_cfg=act_cfg), - ConvModule( - channels, - out_channels, - 1, - norm_cfg=None, - bias=False, - act_cfg=None), nn.Sigmoid()) - - def forward(self, spatial_inputs, context_inputs): - inputs = torch.cat([spatial_inputs, context_inputs], dim=1) - x = self.conv0(inputs) - attn = self.attention(x) - x_attn = x * attn - return x_attn + x - - -@BACKBONES.register_module() -class STDCNet(BaseModule): - """This backbone is the implementation of `Rethinking BiSeNet For Real-time - Semantic Segmentation `_. - - Args: - stdc_type (int): The type of backbone structure, - `STDCNet1` and`STDCNet2` denotes two main backbones in paper, - whose FLOPs is 813M and 1446M, respectively. - in_channels (int): The num of input_channels. - channels (tuple[int]): The output channels for each stage. - bottleneck_type (str): The type of STDC Module type, the value must - be 'add' or 'cat'. - norm_cfg (dict): Config dict for normalization layer. - act_cfg (dict): The activation config for conv layers. - num_convs (int): Numbers of conv layer at each STDC Module. - Default: 4. - with_final_conv (bool): Whether add a conv layer at the Module output. - Default: True. - pretrained (str, optional): Model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - - Example: - >>> import torch - >>> stdc_type = 'STDCNet1' - >>> in_channels = 3 - >>> channels = (32, 64, 256, 512, 1024) - >>> bottleneck_type = 'cat' - >>> inputs = torch.rand(1, 3, 1024, 2048) - >>> self = STDCNet(stdc_type, in_channels, - ... channels, bottleneck_type).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 256, 128, 256]) - outputs[1].shape = torch.Size([1, 512, 64, 128]) - outputs[2].shape = torch.Size([1, 1024, 32, 64]) - """ - - arch_settings = { - 'STDCNet1': [(2, 1), (2, 1), (2, 1)], - 'STDCNet2': [(2, 1, 1, 1), (2, 1, 1, 1, 1), (2, 1, 1)] - } - - def __init__(self, - stdc_type, - in_channels, - channels, - bottleneck_type, - norm_cfg, - act_cfg, - num_convs=4, - with_final_conv=False, - pretrained=None, - init_cfg=None): - super(STDCNet, self).__init__(init_cfg=init_cfg) - assert stdc_type in self.arch_settings, \ - f'invalid structure {stdc_type} for STDCNet.' - assert bottleneck_type in ['add', 'cat'],\ - f'bottleneck_type must be `add` or `cat`, got {bottleneck_type}' - - assert len(channels) == 5,\ - f'invalid channels length {len(channels)} for STDCNet.' 
- - self.in_channels = in_channels - self.channels = channels - self.stage_strides = self.arch_settings[stdc_type] - self.prtrained = pretrained - self.num_convs = num_convs - self.with_final_conv = with_final_conv - - self.stages = ModuleList([ - ConvModule( - self.in_channels, - self.channels[0], - kernel_size=3, - stride=2, - padding=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ConvModule( - self.channels[0], - self.channels[1], - kernel_size=3, - stride=2, - padding=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - ]) - # `self.num_shallow_features` is the number of shallow modules in - # `STDCNet`, which is noted as `Stage1` and `Stage2` in original paper. - # They are both not used for following modules like Attention - # Refinement Module and Feature Fusion Module. - # Thus they would be cut from `outs`. Please refer to Figure 4 - # of original paper for more details. - self.num_shallow_features = len(self.stages) - - for strides in self.stage_strides: - idx = len(self.stages) - 1 - self.stages.append( - self._make_stage(self.channels[idx], self.channels[idx + 1], - strides, norm_cfg, act_cfg, bottleneck_type)) - # After appending, `self.stages` is a ModuleList including several - # shallow modules and STDCModules. - # (len(self.stages) == - # self.num_shallow_features + len(self.stage_strides)) - if self.with_final_conv: - self.final_conv = ConvModule( - self.channels[-1], - max(1024, self.channels[-1]), - 1, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def _make_stage(self, in_channels, out_channels, strides, norm_cfg, - act_cfg, bottleneck_type): - layers = [] - for i, stride in enumerate(strides): - layers.append( - STDCModule( - in_channels if i == 0 else out_channels, - out_channels, - stride, - norm_cfg, - act_cfg, - num_convs=self.num_convs, - fusion_type=bottleneck_type)) - return Sequential(*layers) - - def forward(self, x): - outs = [] - for stage in self.stages: - x = stage(x) - outs.append(x) - if self.with_final_conv: - outs[-1] = self.final_conv(outs[-1]) - outs = outs[self.num_shallow_features:] - return tuple(outs) - - -@BACKBONES.register_module() -class STDCContextPathNet(BaseModule): - """STDCNet with Context Path. The `outs` below is a list of three feature - maps from deep to shallow, whose height and width is from small to big, - respectively. The biggest feature map of `outs` is outputted for - `STDCHead`, where Detail Loss would be calculated by Detail Ground-truth. - The other two feature maps are used for Attention Refinement Module, - respectively. Besides, the biggest feature map of `outs` and the last - output of Attention Refinement Module are concatenated for Feature Fusion - Module. Then, this fusion feature map `feat_fuse` would be outputted for - `decode_head`. More details please refer to Figure 4 of original paper. - - Args: - backbone_cfg (dict): Config dict for stdc backbone. - last_in_channels (tuple(int)), The number of channels of last - two feature maps from stdc backbone. Default: (1024, 512). - out_channels (int): The channels of output feature maps. - Default: 128. - ffm_cfg (dict): Config dict for Feature Fusion Module. Default: - `dict(in_channels=512, out_channels=256, scale_factor=4)`. - upsample_mode (str): Algorithm used for upsampling: - ``'nearest'`` | ``'linear'`` | ``'bilinear'`` | ``'bicubic'`` | - ``'trilinear'``. Default: ``'nearest'``. - align_corners (str): align_corners argument of F.interpolate. It - must be `None` if upsample_mode is ``'nearest'``. Default: None. - norm_cfg (dict): Config dict for normalization layer. 
- Default: dict(type='BN'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - - Return: - outputs (tuple): The tuple of list of output feature map for - auxiliary heads and decoder head. - """ - - def __init__(self, - backbone_cfg, - last_in_channels=(1024, 512), - out_channels=128, - ffm_cfg=dict( - in_channels=512, out_channels=256, scale_factor=4), - upsample_mode='nearest', - align_corners=None, - norm_cfg=dict(type='BN'), - init_cfg=None): - super(STDCContextPathNet, self).__init__(init_cfg=init_cfg) - self.backbone = build_backbone(backbone_cfg) - self.arms = ModuleList() - self.convs = ModuleList() - for channels in last_in_channels: - self.arms.append(AttentionRefinementModule(channels, out_channels)) - self.convs.append( - ConvModule( - out_channels, - out_channels, - 3, - padding=1, - norm_cfg=norm_cfg)) - self.conv_avg = ConvModule( - last_in_channels[0], out_channels, 1, norm_cfg=norm_cfg) - - self.ffm = FeatureFusionModule(**ffm_cfg) - - self.upsample_mode = upsample_mode - self.align_corners = align_corners - - def forward(self, x): - outs = list(self.backbone(x)) - avg = F.adaptive_avg_pool2d(outs[-1], 1) - avg_feat = self.conv_avg(avg) - - feature_up = resize( - avg_feat, - size=outs[-1].shape[2:], - mode=self.upsample_mode, - align_corners=self.align_corners) - arms_out = [] - for i in range(len(self.arms)): - x_arm = self.arms[i](outs[len(outs) - 1 - i]) + feature_up - feature_up = resize( - x_arm, - size=outs[len(outs) - 1 - i - 1].shape[2:], - mode=self.upsample_mode, - align_corners=self.align_corners) - feature_up = self.convs[i](feature_up) - arms_out.append(feature_up) - - feat_fuse = self.ffm(outs[0], arms_out[1]) - - # The `outputs` has four feature maps. - # `outs[0]` is outputted for `STDCHead` auxiliary head. - # Two feature maps of `arms_out` are outputted for auxiliary head. - # `feat_fuse` is outputted for decoder head. - outputs = [outs[0]] + list(arms_out) + [feat_fuse] - return tuple(outputs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/swin.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/swin.py deleted file mode 100644 index a360ab01..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/swin.py +++ /dev/null @@ -1,755 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from collections import OrderedDict -from copy import deepcopy - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_norm_layer -from mmcv.cnn.bricks.transformer import FFN, build_dropout -from mmcv.cnn.utils.weight_init import (constant_init, trunc_normal_, - trunc_normal_init) -from mmcv.runner import BaseModule, ModuleList, _load_checkpoint -from mmcv.utils import to_2tuple - -from ...utils import get_root_logger -from ..builder import BACKBONES -from ..utils.embed import PatchEmbed, PatchMerging - - -class WindowMSA(BaseModule): - """Window based multi-head self-attention (W-MSA) module with relative - position bias. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (tuple[int]): The height and width of the window. - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. 
- Default: 0.0 - proj_drop_rate (float, optional): Dropout ratio of output. Default: 0. - init_cfg (dict | None, optional): The Config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - window_size, - qkv_bias=True, - qk_scale=None, - attn_drop_rate=0., - proj_drop_rate=0., - init_cfg=None): - - super().__init__(init_cfg=init_cfg) - self.embed_dims = embed_dims - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_embed_dims = embed_dims // num_heads - self.scale = qk_scale or head_embed_dims**-0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), - num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # About 2x faster than original impl - Wh, Ww = self.window_size - rel_index_coords = self.double_step_seq(2 * Ww - 1, Wh, 1, Ww) - rel_position_index = rel_index_coords + rel_index_coords.T - rel_position_index = rel_position_index.flip(1).contiguous() - self.register_buffer('relative_position_index', rel_position_index) - - self.qkv = nn.Linear(embed_dims, embed_dims * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop_rate) - self.proj = nn.Linear(embed_dims, embed_dims) - self.proj_drop = nn.Dropout(proj_drop_rate) - - self.softmax = nn.Softmax(dim=-1) - - def init_weights(self): - trunc_normal_(self.relative_position_bias_table, std=0.02) - - def forward(self, x, mask=None): - """ - Args: - - x (tensor): input features with shape of (num_windows*B, N, C) - mask (tensor | None, Optional): mask with shape of (num_windows, - Wh*Ww, Wh*Ww), value should be between (-inf, 0]. - """ - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, - C // self.num_heads).permute(2, 0, 3, 1, 4) - # make torchscript happy (cannot use tensor as tuple) - q, k, v = qkv[0], qkv[1], qkv[2] - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], - self.window_size[0] * self.window_size[1], - -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B // nW, nW, self.num_heads, N, - N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - @staticmethod - def double_step_seq(step1, len1, step2, len2): - seq1 = torch.arange(0, step1 * len1, step1) - seq2 = torch.arange(0, step2 * len2, step2) - return (seq1[:, None] + seq2[None, :]).reshape(1, -1) - - -class ShiftWindowMSA(BaseModule): - """Shifted Window Multihead Self-Attention Module. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): The height and width of the window. - shift_size (int, optional): The shift step of each window towards - right-bottom. If zero, act as regular window-msa. Defaults to 0. - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Defaults: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. 
- Defaults: 0. - proj_drop_rate (float, optional): Dropout ratio of output. - Defaults: 0. - dropout_layer (dict, optional): The dropout_layer used before output. - Defaults: dict(type='DropPath', drop_prob=0.). - init_cfg (dict, optional): The extra config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - window_size, - shift_size=0, - qkv_bias=True, - qk_scale=None, - attn_drop_rate=0, - proj_drop_rate=0, - dropout_layer=dict(type='DropPath', drop_prob=0.), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - - self.window_size = window_size - self.shift_size = shift_size - assert 0 <= self.shift_size < self.window_size - - self.w_msa = WindowMSA( - embed_dims=embed_dims, - num_heads=num_heads, - window_size=to_2tuple(window_size), - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop_rate=attn_drop_rate, - proj_drop_rate=proj_drop_rate, - init_cfg=None) - - self.drop = build_dropout(dropout_layer) - - def forward(self, query, hw_shape): - B, L, C = query.shape - H, W = hw_shape - assert L == H * W, 'input feature has wrong size' - query = query.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - query = F.pad(query, (0, 0, 0, pad_r, 0, pad_b)) - H_pad, W_pad = query.shape[1], query.shape[2] - - # cyclic shift - if self.shift_size > 0: - shifted_query = torch.roll( - query, - shifts=(-self.shift_size, -self.shift_size), - dims=(1, 2)) - - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H_pad, W_pad, 1), device=query.device) - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, - -self.shift_size), slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, - -self.shift_size), slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - # nW, window_size, window_size, 1 - mask_windows = self.window_partition(img_mask) - mask_windows = mask_windows.view( - -1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, - float(-100.0)).masked_fill( - attn_mask == 0, float(0.0)) - else: - shifted_query = query - attn_mask = None - - # nW*B, window_size, window_size, C - query_windows = self.window_partition(shifted_query) - # nW*B, window_size*window_size, C - query_windows = query_windows.view(-1, self.window_size**2, C) - - # W-MSA/SW-MSA (nW*B, window_size*window_size, C) - attn_windows = self.w_msa(query_windows, mask=attn_mask) - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, - self.window_size, C) - - # B H' W' C - shifted_x = self.window_reverse(attn_windows, H_pad, W_pad) - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll( - shifted_x, - shifts=(self.shift_size, self.shift_size), - dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - x = self.drop(x) - return x - - def window_reverse(self, windows, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - window_size = self.window_size - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, 
window_size, - window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - def window_partition(self, x): - """ - Args: - x: (B, H, W, C) - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - window_size = self.window_size - x = x.view(B, H // window_size, window_size, W // window_size, - window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous() - windows = windows.view(-1, window_size, window_size, C) - return windows - - -class SwinBlock(BaseModule): - """" - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - window_size (int, optional): The local window scale. Default: 7. - shift (bool, optional): whether to shift window or not. Default False. - qkv_bias (bool, optional): enable bias for qkv if True. Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - drop_rate (float, optional): Dropout rate. Default: 0. - attn_drop_rate (float, optional): Attention dropout rate. Default: 0. - drop_path_rate (float, optional): Stochastic depth rate. Default: 0. - act_cfg (dict, optional): The config dict of activation function. - Default: dict(type='GELU'). - norm_cfg (dict, optional): The config dict of normalization. - Default: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - init_cfg (dict | list | None, optional): The init config. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - window_size=7, - shift=False, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - init_cfg=None): - - super(SwinBlock, self).__init__(init_cfg=init_cfg) - - self.with_cp = with_cp - - self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] - self.attn = ShiftWindowMSA( - embed_dims=embed_dims, - num_heads=num_heads, - window_size=window_size, - shift_size=window_size // 2 if shift else 0, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop_rate=attn_drop_rate, - proj_drop_rate=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - init_cfg=None) - - self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=2, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg, - add_identity=True, - init_cfg=None) - - def forward(self, x, hw_shape): - - def _inner_forward(x): - identity = x - x = self.norm1(x) - x = self.attn(x, hw_shape) - - x = x + identity - - identity = x - x = self.norm2(x) - x = self.ffn(x, identity=identity) - - return x - - if self.with_cp and x.requires_grad: - x = cp.checkpoint(_inner_forward, x) - else: - x = _inner_forward(x) - - return x - - -class SwinBlockSequence(BaseModule): - """Implements one stage in Swin Transformer. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - depth (int): The number of blocks in this stage. - window_size (int, optional): The local window scale. Default: 7. - qkv_bias (bool, optional): enable bias for qkv if True. Default: True. 
- qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - drop_rate (float, optional): Dropout rate. Default: 0. - attn_drop_rate (float, optional): Attention dropout rate. Default: 0. - drop_path_rate (float | list[float], optional): Stochastic depth - rate. Default: 0. - downsample (BaseModule | None, optional): The downsample operation - module. Default: None. - act_cfg (dict, optional): The config dict of activation function. - Default: dict(type='GELU'). - norm_cfg (dict, optional): The config dict of normalization. - Default: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - init_cfg (dict | list | None, optional): The init config. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - depth, - window_size=7, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - downsample=None, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - - if isinstance(drop_path_rate, list): - drop_path_rates = drop_path_rate - assert len(drop_path_rates) == depth - else: - drop_path_rates = [deepcopy(drop_path_rate) for _ in range(depth)] - - self.blocks = ModuleList() - for i in range(depth): - block = SwinBlock( - embed_dims=embed_dims, - num_heads=num_heads, - feedforward_channels=feedforward_channels, - window_size=window_size, - shift=False if i % 2 == 0 else True, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=drop_path_rates[i], - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp, - init_cfg=None) - self.blocks.append(block) - - self.downsample = downsample - - def forward(self, x, hw_shape): - for block in self.blocks: - x = block(x, hw_shape) - - if self.downsample: - x_down, down_hw_shape = self.downsample(x, hw_shape) - return x_down, down_hw_shape, x, hw_shape - else: - return x, hw_shape, x, hw_shape - - -@BACKBONES.register_module() -class SwinTransformer(BaseModule): - """Swin Transformer backbone. - - This backbone is the implementation of `Swin Transformer: - Hierarchical Vision Transformer using Shifted - Windows `_. - Inspiration from https://github.com/microsoft/Swin-Transformer. - - Args: - pretrain_img_size (int | tuple[int]): The size of input image when - pretrain. Defaults: 224. - in_channels (int): The num of input channels. - Defaults: 3. - embed_dims (int): The feature dimension. Default: 96. - patch_size (int | tuple[int]): Patch size. Default: 4. - window_size (int): Window size. Default: 7. - mlp_ratio (int): Ratio of mlp hidden dim to embedding dim. - Default: 4. - depths (tuple[int]): Depths of each Swin Transformer stage. - Default: (2, 2, 6, 2). - num_heads (tuple[int]): Parallel attention heads of each Swin - Transformer stage. Default: (3, 6, 12, 24). - strides (tuple[int]): The patch merging or patch embedding stride of - each Swin Transformer stage. (In swin, we set kernel size equal to - stride.) Default: (4, 2, 2, 2). - out_indices (tuple[int]): Output from which stages. - Default: (0, 1, 2, 3). - qkv_bias (bool, optional): If True, add a learnable bias to query, key, - value. Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. 
- patch_norm (bool): If add a norm layer for patch embed and patch - merging. Default: True. - drop_rate (float): Dropout rate. Defaults: 0. - attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Defaults: 0.1. - use_abs_pos_embed (bool): If True, add absolute position embedding to - the patch embedding. Defaults: False. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LN'). - norm_cfg (dict): Config dict for normalization layer at - output of backone. Defaults: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - pretrained (str, optional): model pretrained path. Default: None. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - pretrain_img_size=224, - in_channels=3, - embed_dims=96, - patch_size=4, - window_size=7, - mlp_ratio=4, - depths=(2, 2, 6, 2), - num_heads=(3, 6, 12, 24), - strides=(4, 2, 2, 2), - out_indices=(0, 1, 2, 3), - qkv_bias=True, - qk_scale=None, - patch_norm=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1, - use_abs_pos_embed=False, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - pretrained=None, - frozen_stages=-1, - init_cfg=None): - self.frozen_stages = frozen_stages - - if isinstance(pretrain_img_size, int): - pretrain_img_size = to_2tuple(pretrain_img_size) - elif isinstance(pretrain_img_size, tuple): - if len(pretrain_img_size) == 1: - pretrain_img_size = to_2tuple(pretrain_img_size[0]) - assert len(pretrain_img_size) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(pretrain_img_size)}' - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - init_cfg = init_cfg - else: - raise TypeError('pretrained must be a str or None') - - super(SwinTransformer, self).__init__(init_cfg=init_cfg) - - num_layers = len(depths) - self.out_indices = out_indices - self.use_abs_pos_embed = use_abs_pos_embed - - assert strides[0] == patch_size, 'Use non-overlapping patch embed.' 
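A small sketch of the stage resolutions this configuration produces, assuming the Swin-T style defaults documented above (224 pretrain input, patch size 4, strides (4, 2, 2, 2), embed dims 96); the loop is illustrative and not part of the backbone:

```python
img_size, patch_size = 224, 4    # hypothetical pretrain input and patch size
strides = (4, 2, 2, 2)           # patch embed stride, then patch merging strides
embed_dims = 96
assert strides[0] == patch_size  # non-overlapping patch embed, as asserted above

side, channels = img_size, embed_dims
for i, s in enumerate(strides):
    side //= s                   # spatial downsampling per stage
    print(f'stage {i}: {channels} x {side} x {side}')
    channels *= 2                # channels double at each patch merging
# stage 0: 96 x 56 x 56, stage 1: 192 x 28 x 28,
# stage 2: 384 x 14 x 14, stage 3: 768 x 7 x 7
```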
- - self.patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims, - conv_type='Conv2d', - kernel_size=patch_size, - stride=strides[0], - padding='corner', - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None) - - if self.use_abs_pos_embed: - patch_row = pretrain_img_size[0] // patch_size - patch_col = pretrain_img_size[1] // patch_size - num_patches = patch_row * patch_col - self.absolute_pos_embed = nn.Parameter( - torch.zeros((1, num_patches, embed_dims))) - - self.drop_after_pos = nn.Dropout(p=drop_rate) - - # set stochastic depth decay rule - total_depth = sum(depths) - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, total_depth) - ] - - self.stages = ModuleList() - in_channels = embed_dims - for i in range(num_layers): - if i < num_layers - 1: - downsample = PatchMerging( - in_channels=in_channels, - out_channels=2 * in_channels, - stride=strides[i + 1], - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None) - else: - downsample = None - - stage = SwinBlockSequence( - embed_dims=in_channels, - num_heads=num_heads[i], - feedforward_channels=mlp_ratio * in_channels, - depth=depths[i], - window_size=window_size, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[sum(depths[:i]):sum(depths[:i + 1])], - downsample=downsample, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp, - init_cfg=None) - self.stages.append(stage) - if downsample: - in_channels = downsample.out_channels - - self.num_features = [int(embed_dims * 2**i) for i in range(num_layers)] - # Add a norm layer for each output - for i in out_indices: - layer = build_norm_layer(norm_cfg, self.num_features[i])[1] - layer_name = f'norm{i}' - self.add_module(layer_name, layer) - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - if self.use_abs_pos_embed: - self.absolute_pos_embed.requires_grad = False - self.drop_after_pos.eval() - - for i in range(1, self.frozen_stages + 1): - - if (i - 1) in self.out_indices: - norm_layer = getattr(self, f'norm{i-1}') - norm_layer.eval() - for param in norm_layer.parameters(): - param.requires_grad = False - - m = self.stages[i - 1] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self): - logger = get_root_logger() - if self.init_cfg is None: - logger.warn(f'No pre-trained weights for ' - f'{self.__class__.__name__}, ' - f'training start from scratch') - if self.use_abs_pos_embed: - trunc_normal_(self.absolute_pos_embed, std=0.02) - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) - elif isinstance(m, nn.LayerNorm): - constant_init(m, val=1.0, bias=0.) 
- else: - assert 'checkpoint' in self.init_cfg, f'Only support ' \ - f'specify `Pretrained` in ' \ - f'`init_cfg` in ' \ - f'{self.__class__.__name__} ' - ckpt = _load_checkpoint( - self.init_cfg['checkpoint'], logger=logger, map_location='cpu') - if 'state_dict' in ckpt: - _state_dict = ckpt['state_dict'] - elif 'model' in ckpt: - _state_dict = ckpt['model'] - else: - _state_dict = ckpt - - state_dict = OrderedDict() - for k, v in _state_dict.items(): - if k.startswith('backbone.'): - state_dict[k[9:]] = v - else: - state_dict[k] = v - - # strip prefix of state_dict - if list(state_dict.keys())[0].startswith('module.'): - state_dict = {k[7:]: v for k, v in state_dict.items()} - - # reshape absolute position embedding - if state_dict.get('absolute_pos_embed') is not None: - absolute_pos_embed = state_dict['absolute_pos_embed'] - N1, L, C1 = absolute_pos_embed.size() - N2, C2, H, W = self.absolute_pos_embed.size() - if N1 != N2 or C1 != C2 or L != H * W: - logger.warning('Error in loading absolute_pos_embed, pass') - else: - state_dict['absolute_pos_embed'] = absolute_pos_embed.view( - N2, H, W, C2).permute(0, 3, 1, 2).contiguous() - - # interpolate position bias table if needed - relative_position_bias_table_keys = [ - k for k in state_dict.keys() - if 'relative_position_bias_table' in k - ] - for table_key in relative_position_bias_table_keys: - table_pretrained = state_dict[table_key] - table_current = self.state_dict()[table_key] - L1, nH1 = table_pretrained.size() - L2, nH2 = table_current.size() - if nH1 != nH2: - logger.warning(f'Error in loading {table_key}, pass') - elif L1 != L2: - S1 = int(L1**0.5) - S2 = int(L2**0.5) - table_pretrained_resized = F.interpolate( - table_pretrained.permute(1, 0).reshape(1, nH1, S1, S1), - size=(S2, S2), - mode='bicubic') - state_dict[table_key] = table_pretrained_resized.view( - nH2, L2).permute(1, 0).contiguous() - - # load state_dict - self.load_state_dict(state_dict, False) - - def forward(self, x): - x, hw_shape = self.patch_embed(x) - - if self.use_abs_pos_embed: - x = x + self.absolute_pos_embed - x = self.drop_after_pos(x) - - outs = [] - for i, stage in enumerate(self.stages): - x, hw_shape, out, out_hw_shape = stage(x, hw_shape) - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - out = norm_layer(out) - out = out.view(-1, *out_hw_shape, - self.num_features[i]).permute(0, 3, 1, - 2).contiguous() - outs.append(out) - - return outs diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/timm_backbone.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/timm_backbone.py deleted file mode 100644 index 01b29fc5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/timm_backbone.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -try: - import timm -except ImportError: - timm = None - -from mmcv.cnn.bricks.registry import NORM_LAYERS -from mmcv.runner import BaseModule - -from ..builder import BACKBONES - - -@BACKBONES.register_module() -class TIMMBackbone(BaseModule): - """Wrapper to use backbones from timm library. More details can be found in - `timm `_ . - - Args: - model_name (str): Name of timm model to instantiate. - pretrained (bool): Load pretrained weights if True. - checkpoint_path (str): Path of checkpoint to load after - model is initialized. - in_channels (int): Number of input image channels. Default: 3. - init_cfg (dict, optional): Initialization config dict - **kwargs: Other timm & model specific arguments. 
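The relative position bias table resizing performed in the checkpoint-loading code above can be exercised in isolation; a hedged sketch with hypothetical window sizes (7 pretrained, 12 target) and 3 attention heads:

```python
import torch
import torch.nn.functional as F

# 2*7-1 = 13 and 2*12-1 = 23 relative offsets per axis (hypothetical sizes)
nH, S1, S2 = 3, 13, 23
table_pretrained = torch.randn(S1 * S1, nH)           # (L1, nH)

resized = F.interpolate(
    table_pretrained.permute(1, 0).reshape(1, nH, S1, S1),
    size=(S2, S2), mode='bicubic')                     # (1, nH, S2, S2)
table_resized = resized.view(nH, S2 * S2).permute(1, 0).contiguous()

assert table_resized.shape == (S2 * S2, nH)            # matches the target table
```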
- """ - - def __init__( - self, - model_name, - features_only=True, - pretrained=True, - checkpoint_path='', - in_channels=3, - init_cfg=None, - **kwargs, - ): - if timm is None: - raise RuntimeError('timm is not installed') - super(TIMMBackbone, self).__init__(init_cfg) - if 'norm_layer' in kwargs: - kwargs['norm_layer'] = NORM_LAYERS.get(kwargs['norm_layer']) - self.timm_model = timm.create_model( - model_name=model_name, - features_only=features_only, - pretrained=pretrained, - in_chans=in_channels, - checkpoint_path=checkpoint_path, - **kwargs, - ) - - # Make unused parameters None - self.timm_model.global_pool = None - self.timm_model.fc = None - self.timm_model.classifier = None - - # Hack to use pretrained weights from timm - if pretrained or checkpoint_path: - self._is_init = True - - def forward(self, x): - features = self.timm_model(x) - return features diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/twins.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/twins.py deleted file mode 100644 index b41325b8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/twins.py +++ /dev/null @@ -1,587 +0,0 @@ -import math -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import build_norm_layer -from mmcv.cnn.bricks.drop import build_dropout -from mmcv.cnn.bricks.transformer import FFN -from mmcv.cnn.utils.weight_init import (constant_init, normal_init, - trunc_normal_init) -from mmcv.runner import BaseModule, ModuleList -from torch.nn.modules.batchnorm import _BatchNorm - -from mmseg.models.backbones.mit import EfficientMultiheadAttention -from mmseg.models.builder import BACKBONES -from ..utils.embed import PatchEmbed - - -class GlobalSubsampledAttention(EfficientMultiheadAttention): - """Global Sub-sampled Attention (Spatial Reduction Attention) - - This module is modified from EfficientMultiheadAttention, - which is a module from mmseg.models.backbones.mit.py. - Specifically, there is no difference between - `GlobalSubsampledAttention` and `EfficientMultiheadAttention`, - `GlobalSubsampledAttention` is built as a brand new class - because it is renamed as `Global sub-sampled attention (GSA)` - in paper. - - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. Default: None. - batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dims) - or (n, batch, embed_dims). Default: False. - qkv_bias (bool): enable bias for qkv if True. Default: True. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - sr_ratio (int): The ratio of spatial reduction of GSA of PCPVT. - Default: 1. - init_cfg (dict, optional): The Config for initialization. - Defaults to None. 
- """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=None, - batch_first=True, - qkv_bias=True, - norm_cfg=dict(type='LN'), - sr_ratio=1, - init_cfg=None): - super(GlobalSubsampledAttention, self).__init__( - embed_dims, - num_heads, - attn_drop=attn_drop, - proj_drop=proj_drop, - dropout_layer=dropout_layer, - batch_first=batch_first, - qkv_bias=qkv_bias, - norm_cfg=norm_cfg, - sr_ratio=sr_ratio, - init_cfg=init_cfg) - - -class GSAEncoderLayer(BaseModule): - """Implements one encoder layer with GSA. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - drop_rate (float): Probability of an element to be zeroed - after the feed forward layer. Default: 0.0. - attn_drop_rate (float): The drop out rate for attention layer. - Default: 0.0. - drop_path_rate (float): Stochastic depth rate. Default 0.0. - num_fcs (int): The number of fully-connected layers for FFNs. - Default: 2. - qkv_bias (bool): Enable bias for qkv if True. Default: True - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - sr_ratio (float): Kernel_size of conv in Attention modules. Default: 1. - init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - num_fcs=2, - qkv_bias=True, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - sr_ratio=1., - init_cfg=None): - super(GSAEncoderLayer, self).__init__(init_cfg=init_cfg) - - self.norm1 = build_norm_layer(norm_cfg, embed_dims, postfix=1)[1] - self.attn = GlobalSubsampledAttention( - embed_dims=embed_dims, - num_heads=num_heads, - attn_drop=attn_drop_rate, - proj_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - qkv_bias=qkv_bias, - norm_cfg=norm_cfg, - sr_ratio=sr_ratio) - - self.norm2 = build_norm_layer(norm_cfg, embed_dims, postfix=2)[1] - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=num_fcs, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg, - add_identity=False) - - self.drop_path = build_dropout( - dict(type='DropPath', drop_prob=drop_path_rate) - ) if drop_path_rate > 0. else nn.Identity() - - def forward(self, x, hw_shape): - x = x + self.drop_path(self.attn(self.norm1(x), hw_shape, identity=0.)) - x = x + self.drop_path(self.ffn(self.norm2(x))) - return x - - -class LocallyGroupedSelfAttention(BaseModule): - """Locally-grouped Self Attention (LSA) module. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. Default: 8 - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: False. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. - Default: 0.0 - proj_drop_rate (float, optional): Dropout ratio of output. Default: 0. - window_size(int): Window size of LSA. Default: 1. - init_cfg (dict, optional): The Config for initialization. - Defaults to None. 
- """ - - def __init__(self, - embed_dims, - num_heads=8, - qkv_bias=False, - qk_scale=None, - attn_drop_rate=0., - proj_drop_rate=0., - window_size=1, - init_cfg=None): - super(LocallyGroupedSelfAttention, self).__init__(init_cfg=init_cfg) - - assert embed_dims % num_heads == 0, f'dim {embed_dims} should be ' \ - f'divided by num_heads ' \ - f'{num_heads}.' - self.embed_dims = embed_dims - self.num_heads = num_heads - head_dim = embed_dims // num_heads - self.scale = qk_scale or head_dim**-0.5 - - self.qkv = nn.Linear(embed_dims, embed_dims * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop_rate) - self.proj = nn.Linear(embed_dims, embed_dims) - self.proj_drop = nn.Dropout(proj_drop_rate) - self.window_size = window_size - - def forward(self, x, hw_shape): - b, n, c = x.shape - h, w = hw_shape - x = x.view(b, h, w, c) - - # pad feature maps to multiples of Local-groups - pad_l = pad_t = 0 - pad_r = (self.window_size - w % self.window_size) % self.window_size - pad_b = (self.window_size - h % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - - # calculate attention mask for LSA - Hp, Wp = x.shape[1:-1] - _h, _w = Hp // self.window_size, Wp // self.window_size - mask = torch.zeros((1, Hp, Wp), device=x.device) - mask[:, -pad_b:, :].fill_(1) - mask[:, :, -pad_r:].fill_(1) - - # [B, _h, _w, window_size, window_size, C] - x = x.reshape(b, _h, self.window_size, _w, self.window_size, - c).transpose(2, 3) - mask = mask.reshape(1, _h, self.window_size, _w, - self.window_size).transpose(2, 3).reshape( - 1, _h * _w, - self.window_size * self.window_size) - # [1, _h*_w, window_size*window_size, window_size*window_size] - attn_mask = mask.unsqueeze(2) - mask.unsqueeze(3) - attn_mask = attn_mask.masked_fill(attn_mask != 0, - float(-1000.0)).masked_fill( - attn_mask == 0, float(0.0)) - - # [3, B, _w*_h, nhead, window_size*window_size, dim] - qkv = self.qkv(x).reshape(b, _h * _w, - self.window_size * self.window_size, 3, - self.num_heads, c // self.num_heads).permute( - 3, 0, 1, 4, 2, 5) - q, k, v = qkv[0], qkv[1], qkv[2] - # [B, _h*_w, n_head, window_size*window_size, window_size*window_size] - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn + attn_mask.unsqueeze(2) - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - attn = (attn @ v).transpose(2, 3).reshape(b, _h, _w, self.window_size, - self.window_size, c) - x = attn.transpose(2, 3).reshape(b, _h * self.window_size, - _w * self.window_size, c) - if pad_r > 0 or pad_b > 0: - x = x[:, :h, :w, :].contiguous() - - x = x.reshape(b, n, c) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class LSAEncoderLayer(BaseModule): - """Implements one encoder layer in Twins-SVT. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - drop_rate (float): Probability of an element to be zeroed - after the feed forward layer. Default: 0.0. - attn_drop_rate (float, optional): Dropout ratio of attention weight. - Default: 0.0 - drop_path_rate (float): Stochastic depth rate. Default 0.0. - num_fcs (int): The number of fully-connected layers for FFNs. - Default: 2. - qkv_bias (bool): Enable bias for qkv if True. Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). 
- window_size (int): Window size of LSA. Default: 1. - init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - num_fcs=2, - qkv_bias=True, - qk_scale=None, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - window_size=1, - init_cfg=None): - - super(LSAEncoderLayer, self).__init__(init_cfg=init_cfg) - - self.norm1 = build_norm_layer(norm_cfg, embed_dims, postfix=1)[1] - self.attn = LocallyGroupedSelfAttention(embed_dims, num_heads, - qkv_bias, qk_scale, - attn_drop_rate, drop_rate, - window_size) - - self.norm2 = build_norm_layer(norm_cfg, embed_dims, postfix=2)[1] - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=num_fcs, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg, - add_identity=False) - - self.drop_path = build_dropout( - dict(type='DropPath', drop_prob=drop_path_rate) - ) if drop_path_rate > 0. else nn.Identity() - - def forward(self, x, hw_shape): - x = x + self.drop_path(self.attn(self.norm1(x), hw_shape)) - x = x + self.drop_path(self.ffn(self.norm2(x))) - return x - - -class ConditionalPositionEncoding(BaseModule): - """The Conditional Position Encoding (CPE) module. - - The CPE is the implementation of 'Conditional Positional Encodings - for Vision Transformers '_. - - Args: - in_channels (int): Number of input channels. - embed_dims (int): The feature dimension. Default: 768. - stride (int): Stride of conv layer. Default: 1. - """ - - def __init__(self, in_channels, embed_dims=768, stride=1, init_cfg=None): - super(ConditionalPositionEncoding, self).__init__(init_cfg=init_cfg) - self.proj = nn.Conv2d( - in_channels, - embed_dims, - kernel_size=3, - stride=stride, - padding=1, - bias=True, - groups=embed_dims) - self.stride = stride - - def forward(self, x, hw_shape): - b, n, c = x.shape - h, w = hw_shape - feat_token = x - cnn_feat = feat_token.transpose(1, 2).view(b, c, h, w) - if self.stride == 1: - x = self.proj(cnn_feat) + cnn_feat - else: - x = self.proj(cnn_feat) - x = x.flatten(2).transpose(1, 2) - return x - - -@BACKBONES.register_module() -class PCPVT(BaseModule): - """The backbone of Twins-PCPVT. - - This backbone is the implementation of `Twins: Revisiting the Design - of Spatial Attention in Vision Transformers - `_. - - Args: - in_channels (int): Number of input channels. Default: 3. - embed_dims (list): Embedding dimension. Default: [64, 128, 256, 512]. - patch_sizes (list): The patch sizes. Default: [4, 2, 2, 2]. - strides (list): The strides. Default: [4, 2, 2, 2]. - num_heads (int): Number of attention heads. Default: [1, 2, 4, 8]. - mlp_ratios (int): Ratio of mlp hidden dim to embedding dim. - Default: [4, 4, 4, 4]. - out_indices (tuple[int]): Output from which stages. - Default: (0, 1, 2, 3). - qkv_bias (bool): Enable bias for qkv if True. Default: False. - drop_rate (float): Probability of an element to be zeroed. - Default 0. - attn_drop_rate (float): The drop out rate for attention layer. - Default 0.0 - drop_path_rate (float): Stochastic depth rate. Default 0.0 - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN') - depths (list): Depths of each stage. Default [3, 4, 6, 3] - sr_ratios (list): Kernel_size of conv in each Attn module in - Transformer encoder layer. Default: [8, 4, 2, 1]. - norm_after_stage(bool): Add extra norm. Default False. 
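The ConditionalPositionEncoding module deleted above amounts to a depthwise 3x3 convolution over the token grid plus a residual connection (for stride 1). A minimal standalone sketch of that idea, with illustrative names and sizes:

```python
# Depthwise conv over the reshaped token grid acts as an implicit position encoding.
import torch
import torch.nn as nn

class TinyCPE(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x, hw_shape):
        b, n, c = x.shape
        h, w = hw_shape
        feat = x.transpose(1, 2).view(b, c, h, w)
        feat = self.proj(feat) + feat          # conv + residual (stride == 1 case)
        return feat.flatten(2).transpose(1, 2)

tokens = torch.randn(2, 8 * 8, 64)
print(TinyCPE(64)(tokens, (8, 8)).shape)       # torch.Size([2, 64, 64])
```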
- init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - in_channels=3, - embed_dims=[64, 128, 256, 512], - patch_sizes=[4, 2, 2, 2], - strides=[4, 2, 2, 2], - num_heads=[1, 2, 4, 8], - mlp_ratios=[4, 4, 4, 4], - out_indices=(0, 1, 2, 3), - qkv_bias=False, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - norm_cfg=dict(type='LN'), - depths=[3, 4, 6, 3], - sr_ratios=[8, 4, 2, 1], - norm_after_stage=False, - pretrained=None, - init_cfg=None): - super(PCPVT, self).__init__(init_cfg=init_cfg) - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be set at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is not None: - raise TypeError('pretrained must be a str or None') - self.depths = depths - - # patch_embed - self.patch_embeds = ModuleList() - self.position_encoding_drops = ModuleList() - self.layers = ModuleList() - - for i in range(len(depths)): - self.patch_embeds.append( - PatchEmbed( - in_channels=in_channels if i == 0 else embed_dims[i - 1], - embed_dims=embed_dims[i], - conv_type='Conv2d', - kernel_size=patch_sizes[i], - stride=strides[i], - padding='corner', - norm_cfg=norm_cfg)) - - self.position_encoding_drops.append(nn.Dropout(p=drop_rate)) - - self.position_encodings = ModuleList([ - ConditionalPositionEncoding(embed_dim, embed_dim) - for embed_dim in embed_dims - ]) - - # transformer encoder - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) - ] # stochastic depth decay rule - cur = 0 - - for k in range(len(depths)): - _block = ModuleList([ - GSAEncoderLayer( - embed_dims=embed_dims[k], - num_heads=num_heads[k], - feedforward_channels=mlp_ratios[k] * embed_dims[k], - attn_drop_rate=attn_drop_rate, - drop_rate=drop_rate, - drop_path_rate=dpr[cur + i], - num_fcs=2, - qkv_bias=qkv_bias, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - sr_ratio=sr_ratios[k]) for i in range(depths[k]) - ]) - self.layers.append(_block) - cur += depths[k] - - self.norm_name, norm = build_norm_layer( - norm_cfg, embed_dims[-1], postfix=1) - - self.out_indices = out_indices - self.norm_after_stage = norm_after_stage - if self.norm_after_stage: - self.norm_list = ModuleList() - for dim in embed_dims: - self.norm_list.append(build_norm_layer(norm_cfg, dim)[1]) - - def init_weights(self): - if self.init_cfg is not None: - super(PCPVT, self).init_weights() - else: - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) - elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): - constant_init(m, val=1.0, bias=0.) 
- elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[ - 1] * m.out_channels - fan_out //= m.groups - normal_init( - m, mean=0, std=math.sqrt(2.0 / fan_out), bias=0) - - def forward(self, x): - outputs = list() - - b = x.shape[0] - - for i in range(len(self.depths)): - x, hw_shape = self.patch_embeds[i](x) - h, w = hw_shape - x = self.position_encoding_drops[i](x) - for j, blk in enumerate(self.layers[i]): - x = blk(x, hw_shape) - if j == 0: - x = self.position_encodings[i](x, hw_shape) - if self.norm_after_stage: - x = self.norm_list[i](x) - x = x.reshape(b, h, w, -1).permute(0, 3, 1, 2).contiguous() - - if i in self.out_indices: - outputs.append(x) - - return tuple(outputs) - - -@BACKBONES.register_module() -class SVT(PCPVT): - """The backbone of Twins-SVT. - - This backbone is the implementation of `Twins: Revisiting the Design - of Spatial Attention in Vision Transformers - `_. - - Args: - in_channels (int): Number of input channels. Default: 3. - embed_dims (list): Embedding dimension. Default: [64, 128, 256, 512]. - patch_sizes (list): The patch sizes. Default: [4, 2, 2, 2]. - strides (list): The strides. Default: [4, 2, 2, 2]. - num_heads (int): Number of attention heads. Default: [1, 2, 4]. - mlp_ratios (int): Ratio of mlp hidden dim to embedding dim. - Default: [4, 4, 4]. - out_indices (tuple[int]): Output from which stages. - Default: (0, 1, 2, 3). - qkv_bias (bool): Enable bias for qkv if True. Default: False. - drop_rate (float): Dropout rate. Default 0. - attn_drop_rate (float): Dropout ratio of attention weight. - Default 0.0 - drop_path_rate (float): Stochastic depth rate. Default 0.2. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN') - depths (list): Depths of each stage. Default [4, 4, 4]. - sr_ratios (list): Kernel_size of conv in each Attn module in - Transformer encoder layer. Default: [4, 2, 1]. - windiow_sizes (list): Window size of LSA. Default: [7, 7, 7], - input_features_slice(bool): Input features need slice. Default: False. - norm_after_stage(bool): Add extra norm. Default False. - strides (list): Strides in patch-Embedding modules. Default: (2, 2, 2) - init_cfg (dict, optional): The Config for initialization. - Defaults to None. 
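The Conv2d branch deleted above applies a fan-out (He-style) normal initialisation. The same rule written with plain `torch.nn.init` instead of mmcv's `normal_init`, as an illustrative sketch:

```python
import math
import torch.nn as nn

def init_conv_fan_out(m: nn.Conv2d) -> None:
    # std = sqrt(2 / fan_out), with fan_out = k_h * k_w * out_channels / groups
    fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
    fan_out //= m.groups
    nn.init.normal_(m.weight, mean=0.0, std=math.sqrt(2.0 / fan_out))
    if m.bias is not None:
        nn.init.zeros_(m.bias)

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
init_conv_fan_out(conv)
print(conv.weight.std().item())   # roughly sqrt(2 / (3*3*128)) ~= 0.042
```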
- """ - - def __init__(self, - in_channels=3, - embed_dims=[64, 128, 256], - patch_sizes=[4, 2, 2, 2], - strides=[4, 2, 2, 2], - num_heads=[1, 2, 4], - mlp_ratios=[4, 4, 4], - out_indices=(0, 1, 2, 3), - qkv_bias=False, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - norm_cfg=dict(type='LN'), - depths=[4, 4, 4], - sr_ratios=[4, 2, 1], - windiow_sizes=[7, 7, 7], - norm_after_stage=True, - pretrained=None, - init_cfg=None): - super(SVT, self).__init__(in_channels, embed_dims, patch_sizes, - strides, num_heads, mlp_ratios, out_indices, - qkv_bias, drop_rate, attn_drop_rate, - drop_path_rate, norm_cfg, depths, sr_ratios, - norm_after_stage, pretrained, init_cfg) - # transformer encoder - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) - ] # stochastic depth decay rule - - for k in range(len(depths)): - for i in range(depths[k]): - if i % 2 == 0: - self.layers[k][i] = \ - LSAEncoderLayer( - embed_dims=embed_dims[k], - num_heads=num_heads[k], - feedforward_channels=mlp_ratios[k] * embed_dims[k], - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[sum(depths[:k])+i], - qkv_bias=qkv_bias, - window_size=windiow_sizes[k]) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/unet.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/unet.py deleted file mode 100644 index c2d33667..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/unet.py +++ /dev/null @@ -1,438 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import (UPSAMPLE_LAYERS, ConvModule, build_activation_layer, - build_norm_layer) -from mmcv.runner import BaseModule -from mmcv.utils.parrots_wrapper import _BatchNorm - -from mmseg.ops import Upsample -from ..builder import BACKBONES -from ..utils import UpConvBlock - - -class BasicConvBlock(nn.Module): - """Basic convolutional block for UNet. - - This module consists of several plain convolutional layers. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers. Default: 2. - stride (int): Whether use stride convolution to downsample - the input feature map. If stride=2, it only uses stride convolution - in the first convolutional layer to downsample the input feature - map. Options are 1 or 2. Default: 1. - dilation (int): Whether use dilated convolution to expand the - receptive field. Set dilation rate of each convolutional layer and - the dilation rate of the first convolutional layer is always 1. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - dcn=None, - plugins=None): - super(BasicConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' 
- assert plugins is None, 'Not implemented yet.' - - self.with_cp = with_cp - convs = [] - for i in range(num_convs): - convs.append( - ConvModule( - in_channels=in_channels if i == 0 else out_channels, - out_channels=out_channels, - kernel_size=3, - stride=stride if i == 0 else 1, - dilation=1 if i == 0 else dilation, - padding=1 if i == 0 else dilation, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - self.convs = nn.Sequential(*convs) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.convs, x) - else: - out = self.convs(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class DeconvModule(nn.Module): - """Deconvolution upsample module in decoder for UNet (2X upsample). - - This module uses deconvolution to upsample feature map in the decoder - of UNet. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - kernel_size (int): Kernel size of the convolutional layer. Default: 4. - """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - kernel_size=4, - scale_factor=2): - super(DeconvModule, self).__init__() - - assert (kernel_size - scale_factor >= 0) and\ - (kernel_size - scale_factor) % 2 == 0,\ - f'kernel_size should be greater than or equal to scale_factor '\ - f'and (kernel_size - scale_factor) should be even numbers, '\ - f'while the kernel size is {kernel_size} and scale_factor is '\ - f'{scale_factor}.' - - stride = scale_factor - padding = (kernel_size - scale_factor) // 2 - self.with_cp = with_cp - deconv = nn.ConvTranspose2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding) - - norm_name, norm = build_norm_layer(norm_cfg, out_channels) - activate = build_activation_layer(act_cfg) - self.deconv_upsamping = nn.Sequential(deconv, norm, activate) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.deconv_upsamping, x) - else: - out = self.deconv_upsamping(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class InterpConv(nn.Module): - """Interpolation upsample module in decoder for UNet. - - This module uses interpolation to upsample feature map in the decoder - of UNet. It consists of one interpolation upsample layer and one - convolutional layer. It can be one interpolation upsample layer followed - by one convolutional layer (conv_first=False) or one convolutional layer - followed by one interpolation upsample layer (conv_first=True). - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. 
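The DeconvModule deleted above relies on `stride == scale_factor` and `padding == (kernel_size - scale_factor) // 2` to get an exact integer upsample. A quick standalone check of that relation (channel counts illustrative):

```python
import torch
import torch.nn as nn

kernel_size, scale_factor = 4, 2
deconv = nn.ConvTranspose2d(
    64, 32,
    kernel_size=kernel_size,
    stride=scale_factor,
    padding=(kernel_size - scale_factor) // 2)

x = torch.randn(1, 64, 16, 16)
print(deconv(x).shape)   # torch.Size([1, 32, 32, 32]) -- exactly 2x spatial
```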
- conv_first (bool): Whether convolutional layer or interpolation - upsample layer first. Default: False. It means interpolation - upsample layer followed by one convolutional layer. - kernel_size (int): Kernel size of the convolutional layer. Default: 1. - stride (int): Stride of the convolutional layer. Default: 1. - padding (int): Padding of the convolutional layer. Default: 1. - upsample_cfg (dict): Interpolation config of the upsample layer. - Default: dict( - scale_factor=2, mode='bilinear', align_corners=False). - """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - conv_cfg=None, - conv_first=False, - kernel_size=1, - stride=1, - padding=0, - upsample_cfg=dict( - scale_factor=2, mode='bilinear', align_corners=False)): - super(InterpConv, self).__init__() - - self.with_cp = with_cp - conv = ConvModule( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - upsample = Upsample(**upsample_cfg) - if conv_first: - self.interp_upsample = nn.Sequential(conv, upsample) - else: - self.interp_upsample = nn.Sequential(upsample, conv) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.interp_upsample, x) - else: - out = self.interp_upsample(x) - return out - - -@BACKBONES.register_module() -class UNet(BaseModule): - """UNet backbone. - - This backbone is the implementation of `U-Net: Convolutional Networks - for Biomedical Image Segmentation `_. - - Args: - in_channels (int): Number of input image channels. Default" 3. - base_channels (int): Number of base channels of each stage. - The output channels of the first stage. Default: 64. - num_stages (int): Number of stages in encoder, normally 5. Default: 5. - strides (Sequence[int 1 | 2]): Strides of each stage in encoder. - len(strides) is equal to num_stages. Normally the stride of the - first stage in encoder is 1. If strides[i]=2, it uses stride - convolution to downsample in the correspondence encoder stage. - Default: (1, 1, 1, 1, 1). - enc_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence encoder stage. - Default: (2, 2, 2, 2, 2). - dec_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence decoder stage. - Default: (2, 2, 2, 2). - downsamples (Sequence[int]): Whether use MaxPool to downsample the - feature map after the first stage of encoder - (stages: [1, num_stages)). If the correspondence encoder stage use - stride convolution (strides[i]=2), it will never use MaxPool to - downsample, even downsamples[i-1]=True. - Default: (True, True, True, True). - enc_dilations (Sequence[int]): Dilation rate of each stage in encoder. - Default: (1, 1, 1, 1, 1). - dec_dilations (Sequence[int]): Dilation rate of each stage in decoder. - Default: (1, 1, 1, 1). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). 
- norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Notice: - The input image size should be divisible by the whole downsample rate - of the encoder. More detail of the whole downsample rate can be found - in UNet._check_input_divisible. - """ - - def __init__(self, - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False, - dcn=None, - plugins=None, - pretrained=None, - init_cfg=None): - super(UNet, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert len(strides) == num_stages, \ - 'The length of strides should be equal to num_stages, '\ - f'while the strides is {strides}, the length of '\ - f'strides is {len(strides)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_num_convs) == num_stages, \ - 'The length of enc_num_convs should be equal to num_stages, '\ - f'while the enc_num_convs is {enc_num_convs}, the length of '\ - f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_num_convs) == (num_stages-1), \ - 'The length of dec_num_convs should be equal to (num_stages-1), '\ - f'while the dec_num_convs is {dec_num_convs}, the length of '\ - f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(downsamples) == (num_stages-1), \ - 'The length of downsamples should be equal to (num_stages-1), '\ - f'while the downsamples is {downsamples}, the length of '\ - f'downsamples is {len(downsamples)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_dilations) == num_stages, \ - 'The length of enc_dilations should be equal to num_stages, '\ - f'while the enc_dilations is {enc_dilations}, the length of '\ - f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_dilations) == (num_stages-1), \ - 'The length of dec_dilations should be equal to (num_stages-1), '\ - f'while the dec_dilations is {dec_dilations}, the length of '\ - f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\ - f'{num_stages}.' 
- self.num_stages = num_stages - self.strides = strides - self.downsamples = downsamples - self.norm_eval = norm_eval - self.base_channels = base_channels - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - - for i in range(num_stages): - enc_conv_block = [] - if i != 0: - if strides[i] == 1 and downsamples[i - 1]: - enc_conv_block.append(nn.MaxPool2d(kernel_size=2)) - upsample = (strides[i] != 1 or downsamples[i - 1]) - self.decoder.append( - UpConvBlock( - conv_block=BasicConvBlock, - in_channels=base_channels * 2**i, - skip_channels=base_channels * 2**(i - 1), - out_channels=base_channels * 2**(i - 1), - num_convs=dec_num_convs[i - 1], - stride=1, - dilation=dec_dilations[i - 1], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - upsample_cfg=upsample_cfg if upsample else None, - dcn=None, - plugins=None)) - - enc_conv_block.append( - BasicConvBlock( - in_channels=in_channels, - out_channels=base_channels * 2**i, - num_convs=enc_num_convs[i], - stride=strides[i], - dilation=enc_dilations[i], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None)) - self.encoder.append((nn.Sequential(*enc_conv_block))) - in_channels = base_channels * 2**i - - def forward(self, x): - self._check_input_divisible(x) - enc_outs = [] - for enc in self.encoder: - x = enc(x) - enc_outs.append(x) - dec_outs = [x] - for i in reversed(range(len(self.decoder))): - x = self.decoder[i](enc_outs[i], x) - dec_outs.append(x) - - return dec_outs - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(UNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - def _check_input_divisible(self, x): - h, w = x.shape[-2:] - whole_downsample_rate = 1 - for i in range(1, self.num_stages): - if self.strides[i] == 2 or self.downsamples[i - 1]: - whole_downsample_rate *= 2 - assert (h % whole_downsample_rate == 0) \ - and (w % whole_downsample_rate == 0),\ - f'The input image size {(h, w)} should be divisible by the whole '\ - f'downsample rate {whole_downsample_rate}, when num_stages is '\ - f'{self.num_stages}, strides is {self.strides}, and downsamples '\ - f'is {self.downsamples}.' diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/vit.py b/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/vit.py deleted file mode 100644 index 96565250..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/backbones/vit.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -import warnings - -import torch -import torch.nn as nn -from mmcv.cnn import build_norm_layer -from mmcv.cnn.bricks.transformer import FFN, MultiheadAttention -from mmcv.cnn.utils.weight_init import (constant_init, kaiming_init, - trunc_normal_) -from mmcv.runner import BaseModule, ModuleList, _load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm -from torch.nn.modules.utils import _pair as to_2tuple - -from mmseg.ops import resize -from mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import PatchEmbed - - -class TransformerEncoderLayer(BaseModule): - """Implements one encoder layer in Vision Transformer. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. 
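The `_check_input_divisible` helper deleted above requires the input height and width to be multiples of the encoder's total downsample rate. A standalone sketch of that check with the default UNet settings:

```python
# Product of 2 for every encoder stage that downsamples (stride 2 or MaxPool).
strides = (1, 1, 1, 1, 1)
downsamples = (True, True, True, True)

whole_downsample_rate = 1
for i in range(1, len(strides)):
    if strides[i] == 2 or downsamples[i - 1]:
        whole_downsample_rate *= 2
print(whole_downsample_rate)        # 16 with the defaults above

h, w = 256, 256
assert h % whole_downsample_rate == 0 and w % whole_downsample_rate == 0
```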
- drop_rate (float): Probability of an element to be zeroed - after the feed forward layer. Default: 0.0. - attn_drop_rate (float): The drop out rate for attention layer. - Default: 0.0. - drop_path_rate (float): stochastic depth rate. Default 0.0. - num_fcs (int): The number of fully-connected layers for FFNs. - Default: 2. - qkv_bias (bool): enable bias for qkv if True. Default: True - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Default: True. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - num_fcs=2, - qkv_bias=True, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - batch_first=True): - super(TransformerEncoderLayer, self).__init__() - - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, embed_dims, postfix=1) - self.add_module(self.norm1_name, norm1) - - self.attn = MultiheadAttention( - embed_dims=embed_dims, - num_heads=num_heads, - attn_drop=attn_drop_rate, - proj_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - batch_first=batch_first, - bias=qkv_bias) - - self.norm2_name, norm2 = build_norm_layer( - norm_cfg, embed_dims, postfix=2) - self.add_module(self.norm2_name, norm2) - - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=num_fcs, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg) - - @property - def norm1(self): - return getattr(self, self.norm1_name) - - @property - def norm2(self): - return getattr(self, self.norm2_name) - - def forward(self, x): - x = self.attn(self.norm1(x), identity=x) - x = self.ffn(self.norm2(x), identity=x) - return x - - -@BACKBONES.register_module() -class VisionTransformer(BaseModule): - """Vision Transformer. - - This backbone is the implementation of `An Image is Worth 16x16 Words: - Transformers for Image Recognition at - Scale `_. - - Args: - img_size (int | tuple): Input image size. Default: 224. - patch_size (int): The patch size. Default: 16. - in_channels (int): Number of input channels. Default: 3. - embed_dims (int): embedding dimension. Default: 768. - num_layers (int): depth of transformer. Default: 12. - num_heads (int): number of attention heads. Default: 12. - mlp_ratio (int): ratio of mlp hidden dim to embedding dim. - Default: 4. - out_indices (list | tuple | int): Output from which stages. - Default: -1. - qkv_bias (bool): enable bias for qkv if True. Default: True. - drop_rate (float): Probability of an element to be zeroed. - Default 0.0 - attn_drop_rate (float): The drop out rate for attention layer. - Default 0.0 - drop_path_rate (float): stochastic depth rate. Default 0.0 - with_cls_token (bool): Whether concatenating class token into image - tokens as transformer input. Default: True. - output_cls_token (bool): Whether output the cls_token. If set True, - `with_cls_token` must be True. Default: False. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN') - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - patch_norm (bool): Whether to add a norm in PatchEmbed Block. - Default: False. - final_norm (bool): Whether to add a additional layer to normalize - final feature map. Default: False. 
- interpolate_mode (str): Select the interpolate mode for position - embeding vector resize. Default: bicubic. - num_fcs (int): The number of fully-connected layers for FFNs. - Default: 2. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save - some memory while slowing down the training speed. Default: False. - pretrained (str, optional): model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - img_size=224, - patch_size=16, - in_channels=3, - embed_dims=768, - num_layers=12, - num_heads=12, - mlp_ratio=4, - out_indices=-1, - qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - with_cls_token=True, - output_cls_token=False, - norm_cfg=dict(type='LN'), - act_cfg=dict(type='GELU'), - patch_norm=False, - final_norm=False, - interpolate_mode='bicubic', - num_fcs=2, - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - super(VisionTransformer, self).__init__(init_cfg=init_cfg) - - if isinstance(img_size, int): - img_size = to_2tuple(img_size) - elif isinstance(img_size, tuple): - if len(img_size) == 1: - img_size = to_2tuple(img_size[0]) - assert len(img_size) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(img_size)}' - - if output_cls_token: - assert with_cls_token is True, f'with_cls_token must be True if' \ - f'set output_cls_token to True, but got {with_cls_token}' - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be set at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is not None: - raise TypeError('pretrained must be a str or None') - - self.img_size = img_size - self.patch_size = patch_size - self.interpolate_mode = interpolate_mode - self.norm_eval = norm_eval - self.with_cp = with_cp - self.pretrained = pretrained - - self.patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims, - conv_type='Conv2d', - kernel_size=patch_size, - stride=patch_size, - padding='corner', - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None, - ) - - num_patches = (img_size[0] // patch_size) * \ - (img_size[1] // patch_size) - - self.with_cls_token = with_cls_token - self.output_cls_token = output_cls_token - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dims)) - self.pos_embed = nn.Parameter( - torch.zeros(1, num_patches + 1, embed_dims)) - self.drop_after_pos = nn.Dropout(p=drop_rate) - - if isinstance(out_indices, int): - if out_indices == -1: - out_indices = num_layers - 1 - self.out_indices = [out_indices] - elif isinstance(out_indices, list) or isinstance(out_indices, tuple): - self.out_indices = out_indices - else: - raise TypeError('out_indices must be type of int, list or tuple') - - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, num_layers) - ] # stochastic depth decay rule - - self.layers = ModuleList() - for i in range(num_layers): - self.layers.append( - TransformerEncoderLayer( - embed_dims=embed_dims, - num_heads=num_heads, - feedforward_channels=mlp_ratio * embed_dims, - attn_drop_rate=attn_drop_rate, - drop_rate=drop_rate, - drop_path_rate=dpr[i], - num_fcs=num_fcs, - 
qkv_bias=qkv_bias, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - batch_first=True)) - - self.final_norm = final_norm - if final_norm: - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, embed_dims, postfix=1) - self.add_module(self.norm1_name, norm1) - - @property - def norm1(self): - return getattr(self, self.norm1_name) - - def init_weights(self): - if (isinstance(self.init_cfg, dict) - and self.init_cfg.get('type') == 'Pretrained'): - logger = get_root_logger() - checkpoint = _load_checkpoint( - self.init_cfg['checkpoint'], logger=logger, map_location='cpu') - - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - if 'pos_embed' in state_dict.keys(): - if self.pos_embed.shape != state_dict['pos_embed'].shape: - logger.info(msg=f'Resize the pos_embed shape from ' - f'{state_dict["pos_embed"].shape} to ' - f'{self.pos_embed.shape}') - h, w = self.img_size - pos_size = int( - math.sqrt(state_dict['pos_embed'].shape[1] - 1)) - state_dict['pos_embed'] = self.resize_pos_embed( - state_dict['pos_embed'], - (h // self.patch_size, w // self.patch_size), - (pos_size, pos_size), self.interpolate_mode) - - self.load_state_dict(state_dict, False) - elif self.init_cfg is not None: - super(VisionTransformer, self).init_weights() - else: - # We only implement the 'jax_impl' initialization implemented at - # https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py#L353 # noqa: E501 - trunc_normal_(self.pos_embed, std=.02) - trunc_normal_(self.cls_token, std=.02) - for n, m in self.named_modules(): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if m.bias is not None: - if 'ffn' in n: - nn.init.normal_(m.bias, mean=0., std=1e-6) - else: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Conv2d): - kaiming_init(m, mode='fan_in', bias=0.) - elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): - constant_init(m, val=1.0, bias=0.) - - def _pos_embeding(self, patched_img, hw_shape, pos_embed): - """Positiong embeding method. - - Resize the pos_embed, if the input image size doesn't match - the training size. - Args: - patched_img (torch.Tensor): The patched image, it should be - shape of [B, L1, C]. - hw_shape (tuple): The downsampled image resolution. - pos_embed (torch.Tensor): The pos_embed weighs, it should be - shape of [B, L2, c]. - Return: - torch.Tensor: The pos encoded image feature. - """ - assert patched_img.ndim == 3 and pos_embed.ndim == 3, \ - 'the shapes of patched_img and pos_embed must be [B, L, C]' - x_len, pos_len = patched_img.shape[1], pos_embed.shape[1] - if x_len != pos_len: - if pos_len == (self.img_size[0] // self.patch_size) * ( - self.img_size[1] // self.patch_size) + 1: - pos_h = self.img_size[0] // self.patch_size - pos_w = self.img_size[1] // self.patch_size - else: - raise ValueError( - 'Unexpected shape of pos_embed, got {}.'.format( - pos_embed.shape)) - pos_embed = self.resize_pos_embed(pos_embed, hw_shape, - (pos_h, pos_w), - self.interpolate_mode) - return self.drop_after_pos(patched_img + pos_embed) - - @staticmethod - def resize_pos_embed(pos_embed, input_shpae, pos_shape, mode): - """Resize pos_embed weights. - - Resize pos_embed using bicubic interpolate method. - Args: - pos_embed (torch.Tensor): Position embedding weights. - input_shpae (tuple): Tuple for (downsampled input image height, - downsampled input image width). - pos_shape (tuple): The resolution of downsampled origin training - image. 
- mode (str): Algorithm used for upsampling: - ``'nearest'`` | ``'linear'`` | ``'bilinear'`` | ``'bicubic'`` | - ``'trilinear'``. Default: ``'nearest'`` - Return: - torch.Tensor: The resized pos_embed of shape [B, L_new, C] - """ - assert pos_embed.ndim == 3, 'shape of pos_embed must be [B, L, C]' - pos_h, pos_w = pos_shape - cls_token_weight = pos_embed[:, 0] - pos_embed_weight = pos_embed[:, (-1 * pos_h * pos_w):] - pos_embed_weight = pos_embed_weight.reshape( - 1, pos_h, pos_w, pos_embed.shape[2]).permute(0, 3, 1, 2) - pos_embed_weight = resize( - pos_embed_weight, size=input_shpae, align_corners=False, mode=mode) - cls_token_weight = cls_token_weight.unsqueeze(1) - pos_embed_weight = torch.flatten(pos_embed_weight, 2).transpose(1, 2) - pos_embed = torch.cat((cls_token_weight, pos_embed_weight), dim=1) - return pos_embed - - def forward(self, inputs): - B = inputs.shape[0] - - x, hw_shape = self.patch_embed(inputs) - - # stole cls_tokens impl from Phil Wang, thanks - cls_tokens = self.cls_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, x), dim=1) - x = self._pos_embeding(x, hw_shape, self.pos_embed) - - if not self.with_cls_token: - # Remove class token for transformer encoder input - x = x[:, 1:] - - outs = [] - for i, layer in enumerate(self.layers): - x = layer(x) - if i == len(self.layers) - 1: - if self.final_norm: - x = self.norm1(x) - if i in self.out_indices: - if self.with_cls_token: - # Remove class token and reshape token for decoder head - out = x[:, 1:] - else: - out = x - B, _, C = out.shape - out = out.reshape(B, hw_shape[0], hw_shape[1], - C).permute(0, 3, 1, 2).contiguous() - if self.output_cls_token: - out = [out, x[:, 0]] - outs.append(out) - - return tuple(outs) - - def train(self, mode=True): - super(VisionTransformer, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.LayerNorm): - m.eval() diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/builder.py b/cv/3d_detection/paconv/pytorch/mmseg/models/builder.py deleted file mode 100644 index 5e18e4e6..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/builder.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
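The `resize_pos_embed` method deleted above keeps the cls token, interpolates the patch-grid part of the position embedding, and re-attaches the cls token. A minimal sketch with illustrative grid sizes:

```python
import torch
import torch.nn.functional as F

embed_dims, old_grid, new_grid = 768, (14, 14), (32, 32)
pos_embed = torch.randn(1, old_grid[0] * old_grid[1] + 1, embed_dims)

cls_tok = pos_embed[:, :1]
grid = pos_embed[:, 1:].reshape(1, *old_grid, embed_dims).permute(0, 3, 1, 2)
grid = F.interpolate(grid, size=new_grid, mode='bicubic', align_corners=False)
grid = grid.flatten(2).transpose(1, 2)
new_pos_embed = torch.cat((cls_tok, grid), dim=1)
print(new_pos_embed.shape)          # torch.Size([1, 1025, 768])
```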
-import warnings - -from mmcv.cnn import MODELS as MMCV_MODELS -from mmcv.cnn.bricks.registry import ATTENTION as MMCV_ATTENTION -from mmcv.utils import Registry - -MODELS = Registry('models', parent=MMCV_MODELS) -ATTENTION = Registry('attention', parent=MMCV_ATTENTION) - -BACKBONES = MODELS -NECKS = MODELS -HEADS = MODELS -LOSSES = MODELS -SEGMENTORS = MODELS - - -def build_backbone(cfg): - """Build backbone.""" - return BACKBONES.build(cfg) - - -def build_neck(cfg): - """Build neck.""" - return NECKS.build(cfg) - - -def build_head(cfg): - """Build head.""" - return HEADS.build(cfg) - - -def build_loss(cfg): - """Build loss.""" - return LOSSES.build(cfg) - - -def build_segmentor(cfg, train_cfg=None, test_cfg=None): - """Build segmentor.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return SEGMENTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/__init__.py deleted file mode 100644 index b5375a1f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/__init__.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .ann_head import ANNHead -from .apc_head import APCHead -from .aspp_head import ASPPHead -from .cc_head import CCHead -from .da_head import DAHead -from .dm_head import DMHead -from .dnl_head import DNLHead -from .dpt_head import DPTHead -from .ema_head import EMAHead -from .enc_head import EncHead -from .fcn_head import FCNHead -from .fpn_head import FPNHead -from .gc_head import GCHead -from .isa_head import ISAHead -from .lraspp_head import LRASPPHead -from .nl_head import NLHead -from .ocr_head import OCRHead -from .point_head import PointHead -from .psa_head import PSAHead -from .psp_head import PSPHead -from .segformer_head import SegformerHead -from .sep_aspp_head import DepthwiseSeparableASPPHead -from .sep_fcn_head import DepthwiseSeparableFCNHead -from .setr_mla_head import SETRMLAHead -from .setr_up_head import SETRUPHead -from .stdc_head import STDCHead -from .uper_head import UPerHead - -__all__ = [ - 'FCNHead', 'PSPHead', 'ASPPHead', 'PSAHead', 'NLHead', 'GCHead', 'CCHead', - 'UPerHead', 'DepthwiseSeparableASPPHead', 'ANNHead', 'DAHead', 'OCRHead', - 'EncHead', 'DepthwiseSeparableFCNHead', 'FPNHead', 'EMAHead', 'DNLHead', - 'PointHead', 'APCHead', 'DMHead', 'LRASPPHead', 'SETRUPHead', - 'SETRMLAHead', 'DPTHead', 'SETRMLAHead', 'SegformerHead', 'ISAHead', - 'STDCHead' -] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ann_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ann_head.py deleted file mode 100644 index c8d882e3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ann_head.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
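The builder deleted above is a thin layer over mmcv's registry mechanism: modules register themselves and are then built from config dicts. A small sketch of that pattern, assuming mmcv 1.x is installed (class and config names are illustrative):

```python
from mmcv.utils import Registry
import torch.nn as nn

BACKBONES = Registry('backbone')

@BACKBONES.register_module()
class TinyBackbone(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.stem = nn.Conv2d(3, width, 3, padding=1)

    def forward(self, x):
        return self.stem(x)

cfg = dict(type='TinyBackbone', width=64)
backbone = BACKBONES.build(cfg)       # same call the deleted build_backbone() makes
print(type(backbone).__name__)        # TinyBackbone
```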
-import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class PPMConcat(nn.ModuleList): - """Pyramid Pooling Module that only concat the features of each layer. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. - """ - - def __init__(self, pool_scales=(1, 3, 6, 8)): - super(PPMConcat, self).__init__( - [nn.AdaptiveAvgPool2d(pool_scale) for pool_scale in pool_scales]) - - def forward(self, feats): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(feats) - ppm_outs.append(ppm_out.view(*feats.shape[:2], -1)) - concat_outs = torch.cat(ppm_outs, dim=2) - return concat_outs - - -class SelfAttentionBlock(_SelfAttentionBlock): - """Make a ANN used SelfAttentionBlock. - - Args: - low_in_channels (int): Input channels of lower level feature, - which is the key feature for self-attention. - high_in_channels (int): Input channels of higher level feature, - which is the query feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - share_key_query (bool): Whether share projection weight between key - and query projection. - query_scale (int): The scale of query feature map. - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. - """ - - def __init__(self, low_in_channels, high_in_channels, channels, - out_channels, share_key_query, query_scale, key_pool_scales, - conv_cfg, norm_cfg, act_cfg): - key_psp = PPMConcat(key_pool_scales) - if query_scale > 1: - query_downsample = nn.MaxPool2d(kernel_size=query_scale) - else: - query_downsample = None - super(SelfAttentionBlock, self).__init__( - key_in_channels=low_in_channels, - query_in_channels=high_in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=share_key_query, - query_downsample=query_downsample, - key_downsample=key_psp, - key_query_num_convs=1, - key_query_norm=True, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=True, - with_out=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - -class AFNB(nn.Module): - """Asymmetric Fusion Non-local Block(AFNB) - - Args: - low_in_channels (int): Input channels of lower level feature, - which is the key feature for self-attention. - high_in_channels (int): Input channels of higher level feature, - which is the query feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - and query projection. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. 
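The PPMConcat module deleted above pools the feature map at several scales and concatenates the flattened results along the spatial axis. A standalone shape sketch (feature sizes illustrative):

```python
import torch
import torch.nn as nn

pool_scales = (1, 3, 6, 8)
pools = nn.ModuleList([nn.AdaptiveAvgPool2d(s) for s in pool_scales])

feats = torch.randn(2, 256, 32, 32)
outs = [p(feats).view(*feats.shape[:2], -1) for p in pools]
print(torch.cat(outs, dim=2).shape)   # torch.Size([2, 256, 110]) = 1 + 9 + 36 + 64
```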
- """ - - def __init__(self, low_in_channels, high_in_channels, channels, - out_channels, query_scales, key_pool_scales, conv_cfg, - norm_cfg, act_cfg): - super(AFNB, self).__init__() - self.stages = nn.ModuleList() - for query_scale in query_scales: - self.stages.append( - SelfAttentionBlock( - low_in_channels=low_in_channels, - high_in_channels=high_in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=False, - query_scale=query_scale, - key_pool_scales=key_pool_scales, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottleneck = ConvModule( - out_channels + high_in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, low_feats, high_feats): - """Forward function.""" - priors = [stage(high_feats, low_feats) for stage in self.stages] - context = torch.stack(priors, dim=0).sum(dim=0) - output = self.bottleneck(torch.cat([context, high_feats], 1)) - return output - - -class APNB(nn.Module): - """Asymmetric Pyramid Non-local Block (APNB) - - Args: - in_channels (int): Input channels of key/query feature, - which is the key feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. - """ - - def __init__(self, in_channels, channels, out_channels, query_scales, - key_pool_scales, conv_cfg, norm_cfg, act_cfg): - super(APNB, self).__init__() - self.stages = nn.ModuleList() - for query_scale in query_scales: - self.stages.append( - SelfAttentionBlock( - low_in_channels=in_channels, - high_in_channels=in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=True, - query_scale=query_scale, - key_pool_scales=key_pool_scales, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottleneck = ConvModule( - 2 * in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, feats): - """Forward function.""" - priors = [stage(feats, feats) for stage in self.stages] - context = torch.stack(priors, dim=0).sum(dim=0) - output = self.bottleneck(torch.cat([context, feats], 1)) - return output - - -@HEADS.register_module() -class ANNHead(BaseDecodeHead): - """Asymmetric Non-local Neural Networks for Semantic Segmentation. - - This head is the implementation of `ANNNet - `_. - - Args: - project_channels (int): Projection channels for Nonlocal. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): The pooling scales of key feature map. - Default: (1, 3, 6, 8). 
- """ - - def __init__(self, - project_channels, - query_scales=(1, ), - key_pool_scales=(1, 3, 6, 8), - **kwargs): - super(ANNHead, self).__init__( - input_transform='multiple_select', **kwargs) - assert len(self.in_channels) == 2 - low_in_channels, high_in_channels = self.in_channels - self.project_channels = project_channels - self.fusion = AFNB( - low_in_channels=low_in_channels, - high_in_channels=high_in_channels, - out_channels=high_in_channels, - channels=project_channels, - query_scales=query_scales, - key_pool_scales=key_pool_scales, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - high_in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.context = APNB( - in_channels=self.channels, - out_channels=self.channels, - channels=project_channels, - query_scales=query_scales, - key_pool_scales=key_pool_scales, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - low_feats, high_feats = self._transform_inputs(inputs) - output = self.fusion(low_feats, high_feats) - output = self.dropout(output) - output = self.bottleneck(output) - output = self.context(output) - output = self.cls_seg(output) - - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/apc_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/apc_head.py deleted file mode 100644 index 3198fd18..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/apc_head.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class ACM(nn.Module): - """Adaptive Context Module used in APCNet. - - Args: - pool_scale (int): Pooling scale used in Adaptive Context - Module to extract region features. - fusion (bool): Add one conv to fuse residual feature. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict | None): Config of conv layers. - norm_cfg (dict | None): Config of norm layers. - act_cfg (dict): Config of activation layers. 
- """ - - def __init__(self, pool_scale, fusion, in_channels, channels, conv_cfg, - norm_cfg, act_cfg): - super(ACM, self).__init__() - self.pool_scale = pool_scale - self.fusion = fusion - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.pooled_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.input_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.global_info = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.gla = nn.Conv2d(self.channels, self.pool_scale**2, 1, 1, 0) - - self.residual_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - if self.fusion: - self.fusion_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, x): - """Forward function.""" - pooled_x = F.adaptive_avg_pool2d(x, self.pool_scale) - # [batch_size, channels, h, w] - x = self.input_redu_conv(x) - # [batch_size, channels, pool_scale, pool_scale] - pooled_x = self.pooled_redu_conv(pooled_x) - batch_size = x.size(0) - # [batch_size, pool_scale * pool_scale, channels] - pooled_x = pooled_x.view(batch_size, self.channels, - -1).permute(0, 2, 1).contiguous() - # [batch_size, h * w, pool_scale * pool_scale] - affinity_matrix = self.gla(x + resize( - self.global_info(F.adaptive_avg_pool2d(x, 1)), size=x.shape[2:]) - ).permute(0, 2, 3, 1).reshape( - batch_size, -1, self.pool_scale**2) - affinity_matrix = F.sigmoid(affinity_matrix) - # [batch_size, h * w, channels] - z_out = torch.matmul(affinity_matrix, pooled_x) - # [batch_size, channels, h * w] - z_out = z_out.permute(0, 2, 1).contiguous() - # [batch_size, channels, h, w] - z_out = z_out.view(batch_size, self.channels, x.size(2), x.size(3)) - z_out = self.residual_conv(z_out) - z_out = F.relu(z_out + x) - if self.fusion: - z_out = self.fusion_conv(z_out) - - return z_out - - -@HEADS.register_module() -class APCHead(BaseDecodeHead): - """Adaptive Pyramid Context Network for Semantic Segmentation. - - This head is the implementation of - `APCNet `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Adaptive Context - Module. Default: (1, 2, 3, 6). - fusion (bool): Add one conv to fuse residual feature. 
- """ - - def __init__(self, pool_scales=(1, 2, 3, 6), fusion=True, **kwargs): - super(APCHead, self).__init__(**kwargs) - assert isinstance(pool_scales, (list, tuple)) - self.pool_scales = pool_scales - self.fusion = fusion - acm_modules = [] - for pool_scale in self.pool_scales: - acm_modules.append( - ACM(pool_scale, - self.fusion, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.acm_modules = nn.ModuleList(acm_modules) - self.bottleneck = ConvModule( - self.in_channels + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - acm_outs = [x] - for acm_module in self.acm_modules: - acm_outs.append(acm_module(x)) - acm_outs = torch.cat(acm_outs, dim=1) - output = self.bottleneck(acm_outs) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/aspp_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/aspp_head.py deleted file mode 100644 index 1fbd1bc8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/aspp_head.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class ASPPModule(nn.ModuleList): - """Atrous Spatial Pyramid Pooling (ASPP) Module. - - Args: - dilations (tuple[int]): Dilation rate of each layer. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - """ - - def __init__(self, dilations, in_channels, channels, conv_cfg, norm_cfg, - act_cfg): - super(ASPPModule, self).__init__() - self.dilations = dilations - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - for dilation in dilations: - self.append( - ConvModule( - self.in_channels, - self.channels, - 1 if dilation == 1 else 3, - dilation=dilation, - padding=0 if dilation == 1 else dilation, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - def forward(self, x): - """Forward function.""" - aspp_outs = [] - for aspp_module in self: - aspp_outs.append(aspp_module(x)) - - return aspp_outs - - -@HEADS.register_module() -class ASPPHead(BaseDecodeHead): - """Rethinking Atrous Convolution for Semantic Image Segmentation. - - This head is the implementation of `DeepLabV3 - `_. - - Args: - dilations (tuple[int]): Dilation rates for ASPP module. - Default: (1, 6, 12, 18). 
- """ - - def __init__(self, dilations=(1, 6, 12, 18), **kwargs): - super(ASPPHead, self).__init__(**kwargs) - assert isinstance(dilations, (list, tuple)) - self.dilations = dilations - self.image_pool = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.aspp_modules = ASPPModule( - dilations, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - (len(dilations) + 1) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - aspp_outs = [ - resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ] - aspp_outs.extend(self.aspp_modules(x)) - aspp_outs = torch.cat(aspp_outs, dim=1) - output = self.bottleneck(aspp_outs) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/cascade_decode_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/cascade_decode_head.py deleted file mode 100644 index f7c3da0d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/cascade_decode_head.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from .decode_head import BaseDecodeHead - - -class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta): - """Base class for cascade decode head used in - :class:`CascadeEncoderDecoder.""" - - def __init__(self, *args, **kwargs): - super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs) - - @abstractmethod - def forward(self, inputs, prev_output): - """Placeholder of forward function.""" - pass - - def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg, - train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - seg_logits = self.forward(inputs, prev_output) - losses = self.losses(seg_logits, gt_semantic_seg) - - return losses - - def forward_test(self, inputs, prev_output, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. 
- """ - return self.forward(inputs, prev_output) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/cc_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/cc_head.py deleted file mode 100644 index ed19eb46..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/cc_head.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import HEADS -from .fcn_head import FCNHead - -try: - from mmcv.ops import CrissCrossAttention -except ModuleNotFoundError: - CrissCrossAttention = None - - -@HEADS.register_module() -class CCHead(FCNHead): - """CCNet: Criss-Cross Attention for Semantic Segmentation. - - This head is the implementation of `CCNet - `_. - - Args: - recurrence (int): Number of recurrence of Criss Cross Attention - module. Default: 2. - """ - - def __init__(self, recurrence=2, **kwargs): - if CrissCrossAttention is None: - raise RuntimeError('Please install mmcv-full for ' - 'CrissCrossAttention ops') - super(CCHead, self).__init__(num_convs=2, **kwargs) - self.recurrence = recurrence - self.cca = CrissCrossAttention(self.channels) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - for _ in range(self.recurrence): - output = self.cca(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/da_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/da_head.py deleted file mode 100644 index 77fd6639..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/da_head.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F -from mmcv.cnn import ConvModule, Scale -from torch import nn - -from mmseg.core import add_prefix -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class PAM(_SelfAttentionBlock): - """Position Attention Module (PAM) - - Args: - in_channels (int): Input channels of key/query feature. - channels (int): Output channels of key/query transform. 
- """ - - def __init__(self, in_channels, channels): - super(PAM, self).__init__( - key_in_channels=in_channels, - query_in_channels=in_channels, - channels=channels, - out_channels=in_channels, - share_key_query=False, - query_downsample=None, - key_downsample=None, - key_query_num_convs=1, - key_query_norm=False, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=False, - with_out=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None) - - self.gamma = Scale(0) - - def forward(self, x): - """Forward function.""" - out = super(PAM, self).forward(x, x) - - out = self.gamma(out) + x - return out - - -class CAM(nn.Module): - """Channel Attention Module (CAM)""" - - def __init__(self): - super(CAM, self).__init__() - self.gamma = Scale(0) - - def forward(self, x): - """Forward function.""" - batch_size, channels, height, width = x.size() - proj_query = x.view(batch_size, channels, -1) - proj_key = x.view(batch_size, channels, -1).permute(0, 2, 1) - energy = torch.bmm(proj_query, proj_key) - energy_new = torch.max( - energy, -1, keepdim=True)[0].expand_as(energy) - energy - attention = F.softmax(energy_new, dim=-1) - proj_value = x.view(batch_size, channels, -1) - - out = torch.bmm(attention, proj_value) - out = out.view(batch_size, channels, height, width) - - out = self.gamma(out) + x - return out - - -@HEADS.register_module() -class DAHead(BaseDecodeHead): - """Dual Attention Network for Scene Segmentation. - - This head is the implementation of `DANet - `_. - - Args: - pam_channels (int): The channels of Position Attention Module(PAM). - """ - - def __init__(self, pam_channels, **kwargs): - super(DAHead, self).__init__(**kwargs) - self.pam_channels = pam_channels - self.pam_in_conv = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.pam = PAM(self.channels, pam_channels) - self.pam_out_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.pam_conv_seg = nn.Conv2d( - self.channels, self.num_classes, kernel_size=1) - - self.cam_in_conv = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.cam = CAM() - self.cam_out_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.cam_conv_seg = nn.Conv2d( - self.channels, self.num_classes, kernel_size=1) - - def pam_cls_seg(self, feat): - """PAM feature classification.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.pam_conv_seg(feat) - return output - - def cam_cls_seg(self, feat): - """CAM feature classification.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.cam_conv_seg(feat) - return output - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - pam_feat = self.pam_in_conv(x) - pam_feat = self.pam(pam_feat) - pam_feat = self.pam_out_conv(pam_feat) - pam_out = self.pam_cls_seg(pam_feat) - - cam_feat = self.cam_in_conv(x) - cam_feat = self.cam(cam_feat) - cam_feat = self.cam_out_conv(cam_feat) - cam_out = self.cam_cls_seg(cam_feat) - - feat_sum = pam_feat + cam_feat - pam_cam_out = self.cls_seg(feat_sum) - - return pam_cam_out, pam_out, cam_out - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing, only 
``pam_cam`` is used.""" - return self.forward(inputs)[0] - - def losses(self, seg_logit, seg_label): - """Compute ``pam_cam``, ``pam``, ``cam`` loss.""" - pam_cam_seg_logit, pam_seg_logit, cam_seg_logit = seg_logit - loss = dict() - loss.update( - add_prefix( - super(DAHead, self).losses(pam_cam_seg_logit, seg_label), - 'pam_cam')) - loss.update( - add_prefix( - super(DAHead, self).losses(pam_seg_logit, seg_label), 'pam')) - loss.update( - add_prefix( - super(DAHead, self).losses(cam_seg_logit, seg_label), 'cam')) - return loss diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/decode_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/decode_head.py deleted file mode 100644 index 1443a81d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/decode_head.py +++ /dev/null @@ -1,265 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -import torch -import torch.nn as nn -from mmcv.runner import BaseModule, auto_fp16, force_fp32 - -from mmseg.core import build_pixel_sampler -from mmseg.ops import resize -from ..builder import build_loss -from ..losses import accuracy - - -class BaseDecodeHead(BaseModule, metaclass=ABCMeta): - """Base class for BaseDecodeHead. - - Args: - in_channels (int|Sequence[int]): Input channels. - channels (int): Channels after modules, before conv_seg. - num_classes (int): Number of classes. - dropout_ratio (float): Ratio of dropout layer. Default: 0.1. - conv_cfg (dict|None): Config of conv layers. Default: None. - norm_cfg (dict|None): Config of norm layers. Default: None. - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU') - in_index (int|Sequence[int]): Input feature index. Default: -1 - input_transform (str|None): Transformation type of input features. - Options: 'resize_concat', 'multiple_select', None. - 'resize_concat': Multiple feature maps will be resize to the - same size as first one and than concat together. - Usually used in FCN head of HRNet. - 'multiple_select': Multiple feature maps will be bundle into - a list and passed into decode head. - None: Only one select feature map is allowed. - Default: None. - loss_decode (dict | Sequence[dict]): Config of decode loss. - The `loss_name` is property of corresponding loss function which - could be shown in training log. If you want this loss - item to be included into the backward graph, `loss_` must be the - prefix of the name. Defaults to 'loss_ce'. - e.g. dict(type='CrossEntropyLoss'), - [dict(type='CrossEntropyLoss', loss_name='loss_ce'), - dict(type='DiceLoss', loss_name='loss_dice')] - Default: dict(type='CrossEntropyLoss'). - ignore_index (int | None): The label index to be ignored. When using - masked BCE loss, ignore_index should be set to None. Default: 255. - sampler (dict|None): The config of segmentation map sampler. - Default: None. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - channels, - *, - num_classes, - dropout_ratio=0.1, - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - in_index=-1, - input_transform=None, - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - ignore_index=255, - sampler=None, - align_corners=False, - init_cfg=dict( - type='Normal', std=0.01, override=dict(name='conv_seg'))): - super(BaseDecodeHead, self).__init__(init_cfg) - self._init_inputs(in_channels, in_index, input_transform) - self.channels = channels - self.num_classes = num_classes - self.dropout_ratio = dropout_ratio - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.in_index = in_index - - self.ignore_index = ignore_index - self.align_corners = align_corners - - if isinstance(loss_decode, dict): - self.loss_decode = build_loss(loss_decode) - elif isinstance(loss_decode, (list, tuple)): - self.loss_decode = nn.ModuleList() - for loss in loss_decode: - self.loss_decode.append(build_loss(loss)) - else: - raise TypeError(f'loss_decode must be a dict or sequence of dict,\ - but got {type(loss_decode)}') - - if sampler is not None: - self.sampler = build_pixel_sampler(sampler, context=self) - else: - self.sampler = None - - self.conv_seg = nn.Conv2d(channels, num_classes, kernel_size=1) - if dropout_ratio > 0: - self.dropout = nn.Dropout2d(dropout_ratio) - else: - self.dropout = None - self.fp16_enabled = False - - def extra_repr(self): - """Extra repr.""" - s = f'input_transform={self.input_transform}, ' \ - f'ignore_index={self.ignore_index}, ' \ - f'align_corners={self.align_corners}' - return s - - def _init_inputs(self, in_channels, in_index, input_transform): - """Check and initialize input transforms. - - The in_channels, in_index and input_transform must match. - Specifically, when input_transform is None, only single feature map - will be selected. So in_channels and in_index must be of type int. - When input_transform - - Args: - in_channels (int|Sequence[int]): Input channels. - in_index (int|Sequence[int]): Input feature index. - input_transform (str|None): Transformation type of input features. - Options: 'resize_concat', 'multiple_select', None. - 'resize_concat': Multiple feature maps will be resize to the - same size as first one and than concat together. - Usually used in FCN head of HRNet. - 'multiple_select': Multiple feature maps will be bundle into - a list and passed into decode head. - None: Only one select feature map is allowed. - """ - - if input_transform is not None: - assert input_transform in ['resize_concat', 'multiple_select'] - self.input_transform = input_transform - self.in_index = in_index - if input_transform is not None: - assert isinstance(in_channels, (list, tuple)) - assert isinstance(in_index, (list, tuple)) - assert len(in_channels) == len(in_index) - if input_transform == 'resize_concat': - self.in_channels = sum(in_channels) - else: - self.in_channels = in_channels - else: - assert isinstance(in_channels, int) - assert isinstance(in_index, int) - self.in_channels = in_channels - - def _transform_inputs(self, inputs): - """Transform inputs for decoder. - - Args: - inputs (list[Tensor]): List of multi-level img features. 
- - Returns: - Tensor: The transformed inputs - """ - - if self.input_transform == 'resize_concat': - inputs = [inputs[i] for i in self.in_index] - upsampled_inputs = [ - resize( - input=x, - size=inputs[0].shape[2:], - mode='bilinear', - align_corners=self.align_corners) for x in inputs - ] - inputs = torch.cat(upsampled_inputs, dim=1) - elif self.input_transform == 'multiple_select': - inputs = [inputs[i] for i in self.in_index] - else: - inputs = inputs[self.in_index] - - return inputs - - @auto_fp16() - @abstractmethod - def forward(self, inputs): - """Placeholder of forward function.""" - pass - - def forward_train(self, inputs, img_metas, gt_semantic_seg, train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - seg_logits = self.forward(inputs) - losses = self.losses(seg_logits, gt_semantic_seg) - return losses - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. - """ - return self.forward(inputs) - - def cls_seg(self, feat): - """Classify each pixel.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.conv_seg(feat) - return output - - @force_fp32(apply_to=('seg_logit', )) - def losses(self, seg_logit, seg_label): - """Compute segmentation loss.""" - loss = dict() - seg_logit = resize( - input=seg_logit, - size=seg_label.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - if self.sampler is not None: - seg_weight = self.sampler.sample(seg_logit, seg_label) - else: - seg_weight = None - seg_label = seg_label.squeeze(1) - - if not isinstance(self.loss_decode, nn.ModuleList): - losses_decode = [self.loss_decode] - else: - losses_decode = self.loss_decode - for loss_decode in losses_decode: - if loss_decode.loss_name not in loss: - loss[loss_decode.loss_name] = loss_decode( - seg_logit, - seg_label, - weight=seg_weight, - ignore_index=self.ignore_index) - else: - loss[loss_decode.loss_name] += loss_decode( - seg_logit, - seg_label, - weight=seg_weight, - ignore_index=self.ignore_index) - - loss['acc_seg'] = accuracy(seg_logit, seg_label) - return loss diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dm_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dm_head.py deleted file mode 100644 index ffaa870a..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dm_head.py +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, build_activation_layer, build_norm_layer - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class DCM(nn.Module): - """Dynamic Convolutional Module used in DMNet. - - Args: - filter_size (int): The filter size of generated convolution kernel - used in Dynamic Convolutional Module. - fusion (bool): Add one conv to fuse DCM output feature. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict | None): Config of conv layers. - norm_cfg (dict | None): Config of norm layers. - act_cfg (dict): Config of activation layers. - """ - - def __init__(self, filter_size, fusion, in_channels, channels, conv_cfg, - norm_cfg, act_cfg): - super(DCM, self).__init__() - self.filter_size = filter_size - self.fusion = fusion - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.filter_gen_conv = nn.Conv2d(self.in_channels, self.channels, 1, 1, - 0) - - self.input_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - if self.norm_cfg is not None: - self.norm = build_norm_layer(self.norm_cfg, self.channels)[1] - else: - self.norm = None - self.activate = build_activation_layer(self.act_cfg) - - if self.fusion: - self.fusion_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, x): - """Forward function.""" - generated_filter = self.filter_gen_conv( - F.adaptive_avg_pool2d(x, self.filter_size)) - x = self.input_redu_conv(x) - b, c, h, w = x.shape - # [1, b * c, h, w], c = self.channels - x = x.view(1, b * c, h, w) - # [b * c, 1, filter_size, filter_size] - generated_filter = generated_filter.view(b * c, 1, self.filter_size, - self.filter_size) - pad = (self.filter_size - 1) // 2 - if (self.filter_size - 1) % 2 == 0: - p2d = (pad, pad, pad, pad) - else: - p2d = (pad + 1, pad, pad + 1, pad) - x = F.pad(input=x, pad=p2d, mode='constant', value=0) - # [1, b * c, h, w] - output = F.conv2d(input=x, weight=generated_filter, groups=b * c) - # [b, c, h, w] - output = output.view(b, c, h, w) - if self.norm is not None: - output = self.norm(output) - output = self.activate(output) - - if self.fusion: - output = self.fusion_conv(output) - - return output - - -@HEADS.register_module() -class DMHead(BaseDecodeHead): - """Dynamic Multi-scale Filters for Semantic Segmentation. - - This head is the implementation of - `DMNet `_. - - Args: - filter_sizes (tuple[int]): The size of generated convolutional filters - used in Dynamic Convolutional Module. Default: (1, 3, 5, 7). - fusion (bool): Add one conv to fuse DCM output feature. 
- """ - - def __init__(self, filter_sizes=(1, 3, 5, 7), fusion=False, **kwargs): - super(DMHead, self).__init__(**kwargs) - assert isinstance(filter_sizes, (list, tuple)) - self.filter_sizes = filter_sizes - self.fusion = fusion - dcm_modules = [] - for filter_size in self.filter_sizes: - dcm_modules.append( - DCM(filter_size, - self.fusion, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.dcm_modules = nn.ModuleList(dcm_modules) - self.bottleneck = ConvModule( - self.in_channels + len(filter_sizes) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - dcm_outs = [x] - for dcm_module in self.dcm_modules: - dcm_outs.append(dcm_module(x)) - dcm_outs = torch.cat(dcm_outs, dim=1) - output = self.bottleneck(dcm_outs) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dnl_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dnl_head.py deleted file mode 100644 index ab53d9a2..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dnl_head.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import NonLocal2d -from torch import nn - -from ..builder import HEADS -from .fcn_head import FCNHead - - -class DisentangledNonLocal2d(NonLocal2d): - """Disentangled Non-Local Blocks. - - Args: - temperature (float): Temperature to adjust attention. Default: 0.05 - """ - - def __init__(self, *arg, temperature, **kwargs): - super().__init__(*arg, **kwargs) - self.temperature = temperature - self.conv_mask = nn.Conv2d(self.in_channels, 1, kernel_size=1) - - def embedded_gaussian(self, theta_x, phi_x): - """Embedded gaussian with temperature.""" - - # NonLocal2d pairwise_weight: [N, HxW, HxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - if self.use_scale: - # theta_x.shape[-1] is `self.inter_channels` - pairwise_weight /= theta_x.shape[-1]**0.5 - pairwise_weight /= self.temperature - pairwise_weight = pairwise_weight.softmax(dim=-1) - return pairwise_weight - - def forward(self, x): - # x: [N, C, H, W] - n = x.size(0) - - # g_x: [N, HxW, C] - g_x = self.g(x).view(n, self.inter_channels, -1) - g_x = g_x.permute(0, 2, 1) - - # theta_x: [N, HxW, C], phi_x: [N, C, HxW] - if self.mode == 'gaussian': - theta_x = x.view(n, self.in_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - if self.sub_sample: - phi_x = self.phi(x).view(n, self.in_channels, -1) - else: - phi_x = x.view(n, self.in_channels, -1) - elif self.mode == 'concatenation': - theta_x = self.theta(x).view(n, self.inter_channels, -1, 1) - phi_x = self.phi(x).view(n, self.inter_channels, 1, -1) - else: - theta_x = self.theta(x).view(n, self.inter_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - phi_x = self.phi(x).view(n, self.inter_channels, -1) - - # subtract mean - theta_x -= theta_x.mean(dim=-2, keepdim=True) - phi_x -= phi_x.mean(dim=-1, keepdim=True) - - pairwise_func = getattr(self, self.mode) - # pairwise_weight: [N, HxW, HxW] - pairwise_weight = pairwise_func(theta_x, phi_x) - - # y: [N, HxW, C] - y = torch.matmul(pairwise_weight, g_x) - # y: [N, C, H, W] - y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels, - *x.size()[2:]) - - # unary_mask: [N, 1, HxW] - unary_mask = self.conv_mask(x) - unary_mask = unary_mask.view(n, 1, -1) - 
unary_mask = unary_mask.softmax(dim=-1) - # unary_x: [N, 1, C] - unary_x = torch.matmul(unary_mask, g_x) - # unary_x: [N, C, 1, 1] - unary_x = unary_x.permute(0, 2, 1).contiguous().reshape( - n, self.inter_channels, 1, 1) - - output = x + self.conv_out(y + unary_x) - - return output - - -@HEADS.register_module() -class DNLHead(FCNHead): - """Disentangled Non-Local Neural Networks. - - This head is the implementation of `DNLNet - `_. - - Args: - reduction (int): Reduction factor of projection transform. Default: 2. - use_scale (bool): Whether to scale pairwise_weight by - sqrt(1/inter_channels). Default: False. - mode (str): The nonlocal mode. Options are 'embedded_gaussian', - 'dot_product'. Default: 'embedded_gaussian.'. - temperature (float): Temperature to adjust attention. Default: 0.05 - """ - - def __init__(self, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - temperature=0.05, - **kwargs): - super(DNLHead, self).__init__(num_convs=2, **kwargs) - self.reduction = reduction - self.use_scale = use_scale - self.mode = mode - self.temperature = temperature - self.dnl_block = DisentangledNonLocal2d( - in_channels=self.channels, - reduction=self.reduction, - use_scale=self.use_scale, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - mode=self.mode, - temperature=self.temperature) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - output = self.dnl_block(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dpt_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dpt_head.py deleted file mode 100644 index a63f9d29..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/dpt_head.py +++ /dev/null @@ -1,293 +0,0 @@ -import math - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, Linear, build_activation_layer -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class ReassembleBlocks(BaseModule): - """ViTPostProcessBlock, process cls_token in ViT backbone output and - rearrange the feature vector to feature map. - - Args: - in_channels (int): ViT feature channels. Default: 768. - out_channels (List): output channels of each stage. - Default: [96, 192, 384, 768]. - readout_type (str): Type of readout operation. Default: 'ignore'. - patch_size (int): The patch size. Default: 16. - init_cfg (dict, optional): Initialization config dict. Default: None. 
- """ - - def __init__(self, - in_channels=768, - out_channels=[96, 192, 384, 768], - readout_type='ignore', - patch_size=16, - init_cfg=None): - super(ReassembleBlocks, self).__init__(init_cfg) - - assert readout_type in ['ignore', 'add', 'project'] - self.readout_type = readout_type - self.patch_size = patch_size - - self.projects = nn.ModuleList([ - ConvModule( - in_channels=in_channels, - out_channels=out_channel, - kernel_size=1, - act_cfg=None, - ) for out_channel in out_channels - ]) - - self.resize_layers = nn.ModuleList([ - nn.ConvTranspose2d( - in_channels=out_channels[0], - out_channels=out_channels[0], - kernel_size=4, - stride=4, - padding=0), - nn.ConvTranspose2d( - in_channels=out_channels[1], - out_channels=out_channels[1], - kernel_size=2, - stride=2, - padding=0), - nn.Identity(), - nn.Conv2d( - in_channels=out_channels[3], - out_channels=out_channels[3], - kernel_size=3, - stride=2, - padding=1) - ]) - if self.readout_type == 'project': - self.readout_projects = nn.ModuleList() - for _ in range(len(self.projects)): - self.readout_projects.append( - nn.Sequential( - Linear(2 * in_channels, in_channels), - build_activation_layer(dict(type='GELU')))) - - def forward(self, inputs): - assert isinstance(inputs, list) - out = [] - for i, x in enumerate(inputs): - assert len(x) == 2 - x, cls_token = x[0], x[1] - feature_shape = x.shape - if self.readout_type == 'project': - x = x.flatten(2).permute((0, 2, 1)) - readout = cls_token.unsqueeze(1).expand_as(x) - x = self.readout_projects[i](torch.cat((x, readout), -1)) - x = x.permute(0, 2, 1).reshape(feature_shape) - elif self.readout_type == 'add': - x = x.flatten(2) + cls_token.unsqueeze(-1) - x = x.reshape(feature_shape) - else: - pass - x = self.projects[i](x) - x = self.resize_layers[i](x) - out.append(x) - return out - - -class PreActResidualConvUnit(BaseModule): - """ResidualConvUnit, pre-activate residual unit. - - Args: - in_channels (int): number of channels in the input feature map. - act_cfg (dict): dictionary to construct and config activation layer. - norm_cfg (dict): dictionary to construct and config norm layer. - stride (int): stride of the first block. Default: 1 - dilation (int): dilation rate for convs layers. Default: 1. - init_cfg (dict, optional): Initialization config dict. Default: None. - """ - - def __init__(self, - in_channels, - act_cfg, - norm_cfg, - stride=1, - dilation=1, - init_cfg=None): - super(PreActResidualConvUnit, self).__init__(init_cfg) - - self.conv1 = ConvModule( - in_channels, - in_channels, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=False, - order=('act', 'conv', 'norm')) - - self.conv2 = ConvModule( - in_channels, - in_channels, - 3, - padding=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=False, - order=('act', 'conv', 'norm')) - - def forward(self, inputs): - inputs_ = inputs.clone() - x = self.conv1(inputs) - x = self.conv2(x) - return x + inputs_ - - -class FeatureFusionBlock(BaseModule): - """FeatureFusionBlock, merge feature map from different stages. - - Args: - in_channels (int): Input channels. - act_cfg (dict): The activation config for ResidualConvUnit. - norm_cfg (dict): Config dict for normalization layer. - expand (bool): Whether expand the channels in post process block. - Default: False. - align_corners (bool): align_corner setting for bilinear upsample. - Default: True. - init_cfg (dict, optional): Initialization config dict. Default: None. 
- """ - - def __init__(self, - in_channels, - act_cfg, - norm_cfg, - expand=False, - align_corners=True, - init_cfg=None): - super(FeatureFusionBlock, self).__init__(init_cfg) - - self.in_channels = in_channels - self.expand = expand - self.align_corners = align_corners - - self.out_channels = in_channels - if self.expand: - self.out_channels = in_channels // 2 - - self.project = ConvModule( - self.in_channels, - self.out_channels, - kernel_size=1, - act_cfg=None, - bias=True) - - self.res_conv_unit1 = PreActResidualConvUnit( - in_channels=self.in_channels, act_cfg=act_cfg, norm_cfg=norm_cfg) - self.res_conv_unit2 = PreActResidualConvUnit( - in_channels=self.in_channels, act_cfg=act_cfg, norm_cfg=norm_cfg) - - def forward(self, *inputs): - x = inputs[0] - if len(inputs) == 2: - if x.shape != inputs[1].shape: - res = resize( - inputs[1], - size=(x.shape[2], x.shape[3]), - mode='bilinear', - align_corners=False) - else: - res = inputs[1] - x = x + self.res_conv_unit1(res) - x = self.res_conv_unit2(x) - x = resize( - x, - scale_factor=2, - mode='bilinear', - align_corners=self.align_corners) - x = self.project(x) - return x - - -@HEADS.register_module() -class DPTHead(BaseDecodeHead): - """Vision Transformers for Dense Prediction. - - This head is implemented of `DPT `_. - - Args: - embed_dims (int): The embed dimension of the ViT backbone. - Default: 768. - post_process_channels (List): Out channels of post process conv - layers. Default: [96, 192, 384, 768]. - readout_type (str): Type of readout operation. Default: 'ignore'. - patch_size (int): The patch size. Default: 16. - expand_channels (bool): Whether expand the channels in post process - block. Default: False. - act_cfg (dict): The activation config for residual conv unit. - Default dict(type='ReLU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). 
- """ - - def __init__(self, - embed_dims=768, - post_process_channels=[96, 192, 384, 768], - readout_type='ignore', - patch_size=16, - expand_channels=False, - act_cfg=dict(type='ReLU'), - norm_cfg=dict(type='BN'), - **kwargs): - super(DPTHead, self).__init__(**kwargs) - - self.in_channels = self.in_channels - self.expand_channels = expand_channels - self.reassemble_blocks = ReassembleBlocks(embed_dims, - post_process_channels, - readout_type, patch_size) - - self.post_process_channels = [ - channel * math.pow(2, i) if expand_channels else channel - for i, channel in enumerate(post_process_channels) - ] - self.convs = nn.ModuleList() - for channel in self.post_process_channels: - self.convs.append( - ConvModule( - channel, - self.channels, - kernel_size=3, - padding=1, - act_cfg=None, - bias=False)) - self.fusion_blocks = nn.ModuleList() - for _ in range(len(self.convs)): - self.fusion_blocks.append( - FeatureFusionBlock(self.channels, act_cfg, norm_cfg)) - self.fusion_blocks[0].res_conv_unit1 = None - self.project = ConvModule( - self.channels, - self.channels, - kernel_size=3, - padding=1, - norm_cfg=norm_cfg) - self.num_fusion_blocks = len(self.fusion_blocks) - self.num_reassemble_blocks = len(self.reassemble_blocks.resize_layers) - self.num_post_process_channels = len(self.post_process_channels) - assert self.num_fusion_blocks == self.num_reassemble_blocks - assert self.num_reassemble_blocks == self.num_post_process_channels - - def forward(self, inputs): - assert len(inputs) == self.num_reassemble_blocks - x = self._transform_inputs(inputs) - x = self.reassemble_blocks(x) - x = [self.convs[i](feature) for i, feature in enumerate(x)] - out = self.fusion_blocks[0](x[-1]) - for i in range(1, len(self.fusion_blocks)): - out = self.fusion_blocks[i](out, x[-(i + 1)]) - out = self.project(out) - out = self.cls_seg(out) - return out diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ema_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ema_head.py deleted file mode 100644 index f6de1671..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ema_head.py +++ /dev/null @@ -1,169 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -def reduce_mean(tensor): - """Reduce mean when distributed training.""" - if not (dist.is_available() and dist.is_initialized()): - return tensor - tensor = tensor.clone() - dist.all_reduce(tensor.div_(dist.get_world_size()), op=dist.ReduceOp.SUM) - return tensor - - -class EMAModule(nn.Module): - """Expectation Maximization Attention Module used in EMANet. - - Args: - channels (int): Channels of the whole module. - num_bases (int): Number of bases. - num_stages (int): Number of the EM iterations. - """ - - def __init__(self, channels, num_bases, num_stages, momentum): - super(EMAModule, self).__init__() - assert num_stages >= 1, 'num_stages must be at least 1!' - self.num_bases = num_bases - self.num_stages = num_stages - self.momentum = momentum - - bases = torch.zeros(1, channels, self.num_bases) - bases.normal_(0, math.sqrt(2. 
/ self.num_bases)) - # [1, channels, num_bases] - bases = F.normalize(bases, dim=1, p=2) - self.register_buffer('bases', bases) - - def forward(self, feats): - """Forward function.""" - batch_size, channels, height, width = feats.size() - # [batch_size, channels, height*width] - feats = feats.view(batch_size, channels, height * width) - # [batch_size, channels, num_bases] - bases = self.bases.repeat(batch_size, 1, 1) - - with torch.no_grad(): - for i in range(self.num_stages): - # [batch_size, height*width, num_bases] - attention = torch.einsum('bcn,bck->bnk', feats, bases) - attention = F.softmax(attention, dim=2) - # l1 norm - attention_normed = F.normalize(attention, dim=1, p=1) - # [batch_size, channels, num_bases] - bases = torch.einsum('bcn,bnk->bck', feats, attention_normed) - # l2 norm - bases = F.normalize(bases, dim=1, p=2) - - feats_recon = torch.einsum('bck,bnk->bcn', bases, attention) - feats_recon = feats_recon.view(batch_size, channels, height, width) - - if self.training: - bases = bases.mean(dim=0, keepdim=True) - bases = reduce_mean(bases) - # l2 norm - bases = F.normalize(bases, dim=1, p=2) - self.bases = (1 - - self.momentum) * self.bases + self.momentum * bases - - return feats_recon - - -@HEADS.register_module() -class EMAHead(BaseDecodeHead): - """Expectation Maximization Attention Networks for Semantic Segmentation. - - This head is the implementation of `EMANet - `_. - - Args: - ema_channels (int): EMA module channels - num_bases (int): Number of bases. - num_stages (int): Number of the EM iterations. - concat_input (bool): Whether concat the input and output of convs - before classification layer. Default: True - momentum (float): Momentum to update the base. Default: 0.1. - """ - - def __init__(self, - ema_channels, - num_bases, - num_stages, - concat_input=True, - momentum=0.1, - **kwargs): - super(EMAHead, self).__init__(**kwargs) - self.ema_channels = ema_channels - self.num_bases = num_bases - self.num_stages = num_stages - self.concat_input = concat_input - self.momentum = momentum - self.ema_module = EMAModule(self.ema_channels, self.num_bases, - self.num_stages, self.momentum) - - self.ema_in_conv = ConvModule( - self.in_channels, - self.ema_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - # project (0, inf) -> (-inf, inf) - self.ema_mid_conv = ConvModule( - self.ema_channels, - self.ema_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=None, - act_cfg=None) - for param in self.ema_mid_conv.parameters(): - param.requires_grad = False - - self.ema_out_conv = ConvModule( - self.ema_channels, - self.ema_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=None) - self.bottleneck = ConvModule( - self.ema_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if self.concat_input: - self.conv_cat = ConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - feats = self.ema_in_conv(x) - identity = feats - feats = self.ema_mid_conv(feats) - recon = self.ema_module(feats) - recon = F.relu(recon, inplace=True) - recon = self.ema_out_conv(recon) - output = F.relu(identity + recon, inplace=True) - output = self.bottleneck(output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - 
output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/enc_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/enc_head.py deleted file mode 100644 index 648c8906..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/enc_head.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, build_norm_layer - -from mmseg.ops import Encoding, resize -from ..builder import HEADS, build_loss -from .decode_head import BaseDecodeHead - - -class EncModule(nn.Module): - """Encoding Module used in EncNet. - - Args: - in_channels (int): Input channels. - num_codes (int): Number of code words. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - """ - - def __init__(self, in_channels, num_codes, conv_cfg, norm_cfg, act_cfg): - super(EncModule, self).__init__() - self.encoding_project = ConvModule( - in_channels, - in_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - # TODO: resolve this hack - # change to 1d - if norm_cfg is not None: - encoding_norm_cfg = norm_cfg.copy() - if encoding_norm_cfg['type'] in ['BN', 'IN']: - encoding_norm_cfg['type'] += '1d' - else: - encoding_norm_cfg['type'] = encoding_norm_cfg['type'].replace( - '2d', '1d') - else: - # fallback to BN1d - encoding_norm_cfg = dict(type='BN1d') - self.encoding = nn.Sequential( - Encoding(channels=in_channels, num_codes=num_codes), - build_norm_layer(encoding_norm_cfg, num_codes)[1], - nn.ReLU(inplace=True)) - self.fc = nn.Sequential( - nn.Linear(in_channels, in_channels), nn.Sigmoid()) - - def forward(self, x): - """Forward function.""" - encoding_projection = self.encoding_project(x) - encoding_feat = self.encoding(encoding_projection).mean(dim=1) - batch_size, channels, _, _ = x.size() - gamma = self.fc(encoding_feat) - y = gamma.view(batch_size, channels, 1, 1) - output = F.relu_(x + x * y) - return encoding_feat, output - - -@HEADS.register_module() -class EncHead(BaseDecodeHead): - """Context Encoding for Semantic Segmentation. - - This head is the implementation of `EncNet - `_. - - Args: - num_codes (int): Number of code words. Default: 32. - use_se_loss (bool): Whether use Semantic Encoding Loss (SE-loss) to - regularize the training. Default: True. - add_lateral (bool): Whether use lateral connection to fuse features. - Default: False. - loss_se_decode (dict): Config of decode loss. - Default: dict(type='CrossEntropyLoss', use_sigmoid=True). 
- """ - - def __init__(self, - num_codes=32, - use_se_loss=True, - add_lateral=False, - loss_se_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=0.2), - **kwargs): - super(EncHead, self).__init__( - input_transform='multiple_select', **kwargs) - self.use_se_loss = use_se_loss - self.add_lateral = add_lateral - self.num_codes = num_codes - self.bottleneck = ConvModule( - self.in_channels[-1], - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if add_lateral: - self.lateral_convs = nn.ModuleList() - for in_channels in self.in_channels[:-1]: # skip the last one - self.lateral_convs.append( - ConvModule( - in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.fusion = ConvModule( - len(self.in_channels) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.enc_module = EncModule( - self.channels, - num_codes=num_codes, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if self.use_se_loss: - self.loss_se_decode = build_loss(loss_se_decode) - self.se_layer = nn.Linear(self.channels, self.num_classes) - - def forward(self, inputs): - """Forward function.""" - inputs = self._transform_inputs(inputs) - feat = self.bottleneck(inputs[-1]) - if self.add_lateral: - laterals = [ - resize( - lateral_conv(inputs[i]), - size=feat.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - feat = self.fusion(torch.cat([feat, *laterals], 1)) - encode_feat, output = self.enc_module(feat) - output = self.cls_seg(output) - if self.use_se_loss: - se_output = self.se_layer(encode_feat) - return output, se_output - else: - return output - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing, ignore se_loss.""" - if self.use_se_loss: - return self.forward(inputs)[0] - else: - return self.forward(inputs) - - @staticmethod - def _convert_to_onehot_labels(seg_label, num_classes): - """Convert segmentation label to onehot. - - Args: - seg_label (Tensor): Segmentation label of shape (N, H, W). - num_classes (int): Number of classes. - - Returns: - Tensor: Onehot labels of shape (N, num_classes). - """ - - batch_size = seg_label.size(0) - onehot_labels = seg_label.new_zeros((batch_size, num_classes)) - for i in range(batch_size): - hist = seg_label[i].float().histc( - bins=num_classes, min=0, max=num_classes - 1) - onehot_labels[i] = hist > 0 - return onehot_labels - - def losses(self, seg_logit, seg_label): - """Compute segmentation and semantic encoding loss.""" - seg_logit, se_seg_logit = seg_logit - loss = dict() - loss.update(super(EncHead, self).losses(seg_logit, seg_label)) - se_loss = self.loss_se_decode( - se_seg_logit, - self._convert_to_onehot_labels(seg_label, self.num_classes)) - loss['loss_se'] = se_loss - return loss diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/fcn_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/fcn_head.py deleted file mode 100644 index 3c8de51f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/fcn_head.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class FCNHead(BaseDecodeHead): - """Fully Convolution Networks for Semantic Segmentation. - - This head is implemented of `FCNNet `_. - - Args: - num_convs (int): Number of convs in the head. Default: 2. - kernel_size (int): The kernel size for convs in the head. Default: 3. - concat_input (bool): Whether concat the input and output of convs - before classification layer. - dilation (int): The dilation rate for convs in the head. Default: 1. - """ - - def __init__(self, - num_convs=2, - kernel_size=3, - concat_input=True, - dilation=1, - **kwargs): - assert num_convs >= 0 and dilation > 0 and isinstance(dilation, int) - self.num_convs = num_convs - self.concat_input = concat_input - self.kernel_size = kernel_size - super(FCNHead, self).__init__(**kwargs) - if num_convs == 0: - assert self.in_channels == self.channels - - conv_padding = (kernel_size // 2) * dilation - convs = [] - convs.append( - ConvModule( - self.in_channels, - self.channels, - kernel_size=kernel_size, - padding=conv_padding, - dilation=dilation, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - for i in range(num_convs - 1): - convs.append( - ConvModule( - self.channels, - self.channels, - kernel_size=kernel_size, - padding=conv_padding, - dilation=dilation, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - if num_convs == 0: - self.convs = nn.Identity() - else: - self.convs = nn.Sequential(*convs) - if self.concat_input: - self.conv_cat = ConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=kernel_size, - padding=kernel_size // 2, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs(x) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/fpn_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/fpn_head.py deleted file mode 100644 index e41f324c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/fpn_head.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.ops import Upsample, resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class FPNHead(BaseDecodeHead): - """Panoptic Feature Pyramid Networks. - - This head is the implementation of `Semantic FPN - `_. - - Args: - feature_strides (tuple[int]): The strides for input feature maps. - stack_lateral. All strides suppose to be power of 2. The first - one is of largest resolution. 
- """ - - def __init__(self, feature_strides, **kwargs): - super(FPNHead, self).__init__( - input_transform='multiple_select', **kwargs) - assert len(feature_strides) == len(self.in_channels) - assert min(feature_strides) == feature_strides[0] - self.feature_strides = feature_strides - - self.scale_heads = nn.ModuleList() - for i in range(len(feature_strides)): - head_length = max( - 1, - int(np.log2(feature_strides[i]) - np.log2(feature_strides[0]))) - scale_head = [] - for k in range(head_length): - scale_head.append( - ConvModule( - self.in_channels[i] if k == 0 else self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - if feature_strides[i] != feature_strides[0]: - scale_head.append( - Upsample( - scale_factor=2, - mode='bilinear', - align_corners=self.align_corners)) - self.scale_heads.append(nn.Sequential(*scale_head)) - - def forward(self, inputs): - - x = self._transform_inputs(inputs) - - output = self.scale_heads[0](x[0]) - for i in range(1, len(self.feature_strides)): - # non inplace - output = output + resize( - self.scale_heads[i](x[i]), - size=output.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/gc_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/gc_head.py deleted file mode 100644 index eed50742..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/gc_head.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ContextBlock - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class GCHead(FCNHead): - """GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond. - - This head is the implementation of `GCNet - `_. - - Args: - ratio (float): Multiplier of channels ratio. Default: 1/4. - pooling_type (str): The pooling type of context aggregation. - Options are 'att', 'avg'. Default: 'avg'. - fusion_types (tuple[str]): The fusion type for feature fusion. - Options are 'channel_add', 'channel_mul'. 
Default: ('channel_add',) - """ - - def __init__(self, - ratio=1 / 4., - pooling_type='att', - fusion_types=('channel_add', ), - **kwargs): - super(GCHead, self).__init__(num_convs=2, **kwargs) - self.ratio = ratio - self.pooling_type = pooling_type - self.fusion_types = fusion_types - self.gc_block = ContextBlock( - in_channels=self.channels, - ratio=self.ratio, - pooling_type=self.pooling_type, - fusion_types=self.fusion_types) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - output = self.gc_block(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/isa_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/isa_head.py deleted file mode 100644 index c9224b61..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/isa_head.py +++ /dev/null @@ -1,142 +0,0 @@ -import math - -import torch -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class SelfAttentionBlock(_SelfAttentionBlock): - """Self-Attention Module. - - Args: - in_channels (int): Input channels of key/query feature. - channels (int): Output channels of key/query transform. - conv_cfg (dict | None): Config of conv layers. - norm_cfg (dict | None): Config of norm layers. - act_cfg (dict | None): Config of activation layers. - """ - - def __init__(self, in_channels, channels, conv_cfg, norm_cfg, act_cfg): - super(SelfAttentionBlock, self).__init__( - key_in_channels=in_channels, - query_in_channels=in_channels, - channels=channels, - out_channels=in_channels, - share_key_query=False, - query_downsample=None, - key_downsample=None, - key_query_num_convs=2, - key_query_norm=True, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=True, - with_out=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - self.output_project = self.build_project( - in_channels, - in_channels, - num_convs=1, - use_conv_module=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - """Forward function.""" - context = super(SelfAttentionBlock, self).forward(x, x) - return self.output_project(context) - - -@HEADS.register_module() -class ISAHead(BaseDecodeHead): - """Interlaced Sparse Self-Attention for Semantic Segmentation. - - This head is the implementation of `ISA - `_. - - Args: - isa_channels (int): The channels of ISA Module. - down_factor (tuple[int]): The local group size of ISA. 
- """ - - def __init__(self, isa_channels, down_factor=(8, 8), **kwargs): - super(ISAHead, self).__init__(**kwargs) - self.down_factor = down_factor - - self.in_conv = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.global_relation = SelfAttentionBlock( - self.channels, - isa_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.local_relation = SelfAttentionBlock( - self.channels, - isa_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.out_conv = ConvModule( - self.channels * 2, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x_ = self._transform_inputs(inputs) - x = self.in_conv(x_) - residual = x - - n, c, h, w = x.size() - loc_h, loc_w = self.down_factor # size of local group in H- and W-axes - glb_h, glb_w = math.ceil(h / loc_h), math.ceil(w / loc_w) - pad_h, pad_w = glb_h * loc_h - h, glb_w * loc_w - w - if pad_h > 0 or pad_w > 0: # pad if the size is not divisible - padding = (pad_w // 2, pad_w - pad_w // 2, pad_h // 2, - pad_h - pad_h // 2) - x = F.pad(x, padding) - - # global relation - x = x.view(n, c, glb_h, loc_h, glb_w, loc_w) - # do permutation to gather global group - x = x.permute(0, 3, 5, 1, 2, 4) # (n, loc_h, loc_w, c, glb_h, glb_w) - x = x.reshape(-1, c, glb_h, glb_w) - # apply attention within each global group - x = self.global_relation(x) # (n * loc_h * loc_w, c, glb_h, glb_w) - - # local relation - x = x.view(n, loc_h, loc_w, c, glb_h, glb_w) - # do permutation to gather local group - x = x.permute(0, 4, 5, 3, 1, 2) # (n, glb_h, glb_w, c, loc_h, loc_w) - x = x.reshape(-1, c, loc_h, loc_w) - # apply attention within each local group - x = self.local_relation(x) # (n * glb_h * glb_w, c, loc_h, loc_w) - - # permute each pixel back to its original position - x = x.view(n, glb_h, glb_w, c, loc_h, loc_w) - x = x.permute(0, 3, 1, 4, 2, 5) # (n, c, glb_h, loc_h, glb_w, loc_w) - x = x.reshape(n, c, glb_h * loc_h, glb_w * loc_w) - if pad_h > 0 or pad_w > 0: # remove padding - x = x[:, :, pad_h // 2:pad_h // 2 + h, pad_w // 2:pad_w // 2 + w] - - x = self.out_conv(torch.cat([x, residual], dim=1)) - out = self.cls_seg(x) - - return out diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/lraspp_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/lraspp_head.py deleted file mode 100644 index c10ff0d8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/lraspp_head.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv import is_tuple_of -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class LRASPPHead(BaseDecodeHead): - """Lite R-ASPP (LRASPP) head is proposed in Searching for MobileNetV3. - - This head is the improved implementation of `Searching for MobileNetV3 - `_. - - Args: - branch_channels (tuple[int]): The number of output channels in every - each branch. Default: (32, 64). - """ - - def __init__(self, branch_channels=(32, 64), **kwargs): - super(LRASPPHead, self).__init__(**kwargs) - if self.input_transform != 'multiple_select': - raise ValueError('in Lite R-ASPP (LRASPP) head, input_transform ' - f'must be \'multiple_select\'. 
But received ' - f'\'{self.input_transform}\'') - assert is_tuple_of(branch_channels, int) - assert len(branch_channels) == len(self.in_channels) - 1 - self.branch_channels = branch_channels - - self.convs = nn.Sequential() - self.conv_ups = nn.Sequential() - for i in range(len(branch_channels)): - self.convs.add_module( - f'conv{i}', - nn.Conv2d( - self.in_channels[i], branch_channels[i], 1, bias=False)) - self.conv_ups.add_module( - f'conv_up{i}', - ConvModule( - self.channels + branch_channels[i], - self.channels, - 1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=False)) - - self.conv_up_input = nn.Conv2d(self.channels, self.channels, 1) - - self.aspp_conv = ConvModule( - self.in_channels[-1], - self.channels, - 1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=False) - self.image_pool = nn.Sequential( - nn.AvgPool2d(kernel_size=49, stride=(16, 20)), - ConvModule( - self.in_channels[2], - self.channels, - 1, - act_cfg=dict(type='Sigmoid'), - bias=False)) - - def forward(self, inputs): - """Forward function.""" - inputs = self._transform_inputs(inputs) - - x = inputs[-1] - - x = self.aspp_conv(x) * resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - x = self.conv_up_input(x) - - for i in range(len(self.branch_channels) - 1, -1, -1): - x = resize( - x, - size=inputs[i].size()[2:], - mode='bilinear', - align_corners=self.align_corners) - x = torch.cat([x, self.convs[i](inputs[i])], 1) - x = self.conv_ups[i](x) - - return self.cls_seg(x) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/nl_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/nl_head.py deleted file mode 100644 index 637517e7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/nl_head.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import NonLocal2d - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class NLHead(FCNHead): - """Non-local Neural Networks. - - This head is the implementation of `NLNet - `_. - - Args: - reduction (int): Reduction factor of projection transform. Default: 2. - use_scale (bool): Whether to scale pairwise_weight by - sqrt(1/inter_channels). Default: True. - mode (str): The nonlocal mode. Options are 'embedded_gaussian', - 'dot_product'. Default: 'embedded_gaussian.'. - """ - - def __init__(self, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - **kwargs): - super(NLHead, self).__init__(num_convs=2, **kwargs) - self.reduction = reduction - self.use_scale = use_scale - self.mode = mode - self.nl_block = NonLocal2d( - in_channels=self.channels, - reduction=self.reduction, - use_scale=self.use_scale, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - mode=self.mode) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - output = self.nl_block(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ocr_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ocr_head.py deleted file mode 100644 index 09eadfb1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/ocr_head.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .cascade_decode_head import BaseCascadeDecodeHead - - -class SpatialGatherModule(nn.Module): - """Aggregate the context features according to the initial predicted - probability distribution. - - Employ the soft-weighted method to aggregate the context. - """ - - def __init__(self, scale): - super(SpatialGatherModule, self).__init__() - self.scale = scale - - def forward(self, feats, probs): - """Forward function.""" - batch_size, num_classes, height, width = probs.size() - channels = feats.size(1) - probs = probs.view(batch_size, num_classes, -1) - feats = feats.view(batch_size, channels, -1) - # [batch_size, height*width, num_classes] - feats = feats.permute(0, 2, 1) - # [batch_size, channels, height*width] - probs = F.softmax(self.scale * probs, dim=2) - # [batch_size, channels, num_classes] - ocr_context = torch.matmul(probs, feats) - ocr_context = ocr_context.permute(0, 2, 1).contiguous().unsqueeze(3) - return ocr_context - - -class ObjectAttentionBlock(_SelfAttentionBlock): - """Make a OCR used SelfAttentionBlock.""" - - def __init__(self, in_channels, channels, scale, conv_cfg, norm_cfg, - act_cfg): - if scale > 1: - query_downsample = nn.MaxPool2d(kernel_size=scale) - else: - query_downsample = None - super(ObjectAttentionBlock, self).__init__( - key_in_channels=in_channels, - query_in_channels=in_channels, - channels=channels, - out_channels=in_channels, - share_key_query=False, - query_downsample=query_downsample, - key_downsample=None, - key_query_num_convs=2, - key_query_norm=True, - value_out_num_convs=1, - value_out_norm=True, - matmul_norm=True, - with_out=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.bottleneck = ConvModule( - in_channels * 2, - in_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, query_feats, key_feats): - """Forward function.""" - context = super(ObjectAttentionBlock, - self).forward(query_feats, key_feats) - output = self.bottleneck(torch.cat([context, query_feats], dim=1)) - if self.query_downsample is not None: - output = resize(query_feats) - - return output - - -@HEADS.register_module() -class OCRHead(BaseCascadeDecodeHead): - """Object-Contextual Representations for Semantic Segmentation. - - This head is the implementation of `OCRNet - `_. - - Args: - ocr_channels (int): The intermediate channels of OCR block. - scale (int): The scale of probability map in SpatialGatherModule in - Default: 1. 
- """ - - def __init__(self, ocr_channels, scale=1, **kwargs): - super(OCRHead, self).__init__(**kwargs) - self.ocr_channels = ocr_channels - self.scale = scale - self.object_context_block = ObjectAttentionBlock( - self.channels, - self.ocr_channels, - self.scale, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.spatial_gather_module = SpatialGatherModule(self.scale) - - self.bottleneck = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs, prev_output): - """Forward function.""" - x = self._transform_inputs(inputs) - feats = self.bottleneck(x) - context = self.spatial_gather_module(feats, prev_output) - object_context = self.object_context_block(feats, context) - output = self.cls_seg(object_context) - - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/point_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/point_head.py deleted file mode 100644 index 72762180..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/point_head.py +++ /dev/null @@ -1,356 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend/point_head/point_head.py # noqa - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import point_sample - -from mmseg.models.builder import HEADS -from mmseg.ops import resize -from ..losses import accuracy -from .cascade_decode_head import BaseCascadeDecodeHead - - -def calculate_uncertainty(seg_logits): - """Estimate uncertainty based on seg logits. - - For each location of the prediction ``seg_logits`` we estimate - uncertainty as the difference between top first and top second - predicted logits. - - Args: - seg_logits (Tensor): Semantic segmentation logits, - shape (batch_size, num_classes, height, width). - - Returns: - scores (Tensor): T uncertainty scores with the most uncertain - locations having the highest uncertainty score, shape ( - batch_size, 1, height, width) - """ - top2_scores = torch.topk(seg_logits, k=2, dim=1)[0] - return (top2_scores[:, 1] - top2_scores[:, 0]).unsqueeze(1) - - -@HEADS.register_module() -class PointHead(BaseCascadeDecodeHead): - """A mask point head use in PointRend. - - This head is implemented of `PointRend: Image Segmentation as - Rendering `_. - ``PointHead`` use shared multi-layer perceptron (equivalent to - nn.Conv1d) to predict the logit of input points. The fine-grained feature - and coarse feature will be concatenate together for predication. - - Args: - num_fcs (int): Number of fc layers in the head. Default: 3. - in_channels (int): Number of input channels. Default: 256. - fc_channels (int): Number of fc channels. Default: 256. - num_classes (int): Number of classes for logits. Default: 80. - class_agnostic (bool): Whether use class agnostic classification. - If so, the output channels of logits will be 1. Default: False. - coarse_pred_each_layer (bool): Whether concatenate coarse feature with - the output of each fc layer. Default: True. - conv_cfg (dict|None): Dictionary to construct and config conv layer. - Default: dict(type='Conv1d')) - norm_cfg (dict|None): Dictionary to construct and config norm layer. - Default: None. - loss_point (dict): Dictionary to construct and config loss layer of - point head. Default: dict(type='CrossEntropyLoss', use_mask=True, - loss_weight=1.0). 
- """ - - def __init__(self, - num_fcs=3, - coarse_pred_each_layer=True, - conv_cfg=dict(type='Conv1d'), - norm_cfg=None, - act_cfg=dict(type='ReLU', inplace=False), - **kwargs): - super(PointHead, self).__init__( - input_transform='multiple_select', - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - init_cfg=dict( - type='Normal', std=0.01, override=dict(name='fc_seg')), - **kwargs) - - self.num_fcs = num_fcs - self.coarse_pred_each_layer = coarse_pred_each_layer - - fc_in_channels = sum(self.in_channels) + self.num_classes - fc_channels = self.channels - self.fcs = nn.ModuleList() - for k in range(num_fcs): - fc = ConvModule( - fc_in_channels, - fc_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.fcs.append(fc) - fc_in_channels = fc_channels - fc_in_channels += self.num_classes if self.coarse_pred_each_layer \ - else 0 - self.fc_seg = nn.Conv1d( - fc_in_channels, - self.num_classes, - kernel_size=1, - stride=1, - padding=0) - if self.dropout_ratio > 0: - self.dropout = nn.Dropout(self.dropout_ratio) - delattr(self, 'conv_seg') - - def cls_seg(self, feat): - """Classify each pixel with fc.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.fc_seg(feat) - return output - - def forward(self, fine_grained_point_feats, coarse_point_feats): - x = torch.cat([fine_grained_point_feats, coarse_point_feats], dim=1) - for fc in self.fcs: - x = fc(x) - if self.coarse_pred_each_layer: - x = torch.cat((x, coarse_point_feats), dim=1) - return self.cls_seg(x) - - def _get_fine_grained_point_feats(self, x, points): - """Sample from fine grained features. - - Args: - x (list[Tensor]): Feature pyramid from by neck or backbone. - points (Tensor): Point coordinates, shape (batch_size, - num_points, 2). - - Returns: - fine_grained_feats (Tensor): Sampled fine grained feature, - shape (batch_size, sum(channels of x), num_points). - """ - - fine_grained_feats_list = [ - point_sample(_, points, align_corners=self.align_corners) - for _ in x - ] - if len(fine_grained_feats_list) > 1: - fine_grained_feats = torch.cat(fine_grained_feats_list, dim=1) - else: - fine_grained_feats = fine_grained_feats_list[0] - - return fine_grained_feats - - def _get_coarse_point_feats(self, prev_output, points): - """Sample from fine grained features. - - Args: - prev_output (list[Tensor]): Prediction of previous decode head. - points (Tensor): Point coordinates, shape (batch_size, - num_points, 2). - - Returns: - coarse_feats (Tensor): Sampled coarse feature, shape (batch_size, - num_classes, num_points). - """ - - coarse_feats = point_sample( - prev_output, points, align_corners=self.align_corners) - - return coarse_feats - - def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg, - train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - x = self._transform_inputs(inputs) - with torch.no_grad(): - points = self.get_points_train( - prev_output, calculate_uncertainty, cfg=train_cfg) - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, points) - coarse_point_feats = self._get_coarse_point_feats(prev_output, points) - point_logits = self.forward(fine_grained_point_feats, - coarse_point_feats) - point_label = point_sample( - gt_semantic_seg.float(), - points, - mode='nearest', - align_corners=self.align_corners) - point_label = point_label.squeeze(1).long() - - losses = self.losses(point_logits, point_label) - - return losses - - def forward_test(self, inputs, prev_output, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. - """ - - x = self._transform_inputs(inputs) - refined_seg_logits = prev_output.clone() - for _ in range(test_cfg.subdivision_steps): - refined_seg_logits = resize( - refined_seg_logits, - scale_factor=test_cfg.scale_factor, - mode='bilinear', - align_corners=self.align_corners) - batch_size, channels, height, width = refined_seg_logits.shape - point_indices, points = self.get_points_test( - refined_seg_logits, calculate_uncertainty, cfg=test_cfg) - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, points) - coarse_point_feats = self._get_coarse_point_feats( - prev_output, points) - point_logits = self.forward(fine_grained_point_feats, - coarse_point_feats) - - point_indices = point_indices.unsqueeze(1).expand(-1, channels, -1) - refined_seg_logits = refined_seg_logits.reshape( - batch_size, channels, height * width) - refined_seg_logits = refined_seg_logits.scatter_( - 2, point_indices, point_logits) - refined_seg_logits = refined_seg_logits.view( - batch_size, channels, height, width) - - return refined_seg_logits - - def losses(self, point_logits, point_label): - """Compute segmentation loss.""" - loss = dict() - if not isinstance(self.loss_decode, nn.ModuleList): - losses_decode = [self.loss_decode] - else: - losses_decode = self.loss_decode - for loss_module in losses_decode: - loss['point' + loss_module.loss_name] = loss_module( - point_logits, point_label, ignore_index=self.ignore_index) - - loss['acc_point'] = accuracy(point_logits, point_label) - return loss - - def get_points_train(self, seg_logits, uncertainty_func, cfg): - """Sample points for training. - - Sample points in [0, 1] x [0, 1] coordinate space based on their - uncertainty. The uncertainties are calculated for each point using - 'uncertainty_func' function that takes point's logit prediction as - input. - - Args: - seg_logits (Tensor): Semantic segmentation logits, shape ( - batch_size, num_classes, height, width). - uncertainty_func (func): uncertainty calculation function. - cfg (dict): Training config of point head. - - Returns: - point_coords (Tensor): A tensor of shape (batch_size, num_points, - 2) that contains the coordinates of ``num_points`` sampled - points. 
- """ - num_points = cfg.num_points - oversample_ratio = cfg.oversample_ratio - importance_sample_ratio = cfg.importance_sample_ratio - assert oversample_ratio >= 1 - assert 0 <= importance_sample_ratio <= 1 - batch_size = seg_logits.shape[0] - num_sampled = int(num_points * oversample_ratio) - point_coords = torch.rand( - batch_size, num_sampled, 2, device=seg_logits.device) - point_logits = point_sample(seg_logits, point_coords) - # It is crucial to calculate uncertainty based on the sampled - # prediction value for the points. Calculating uncertainties of the - # coarse predictions first and sampling them for points leads to - # incorrect results. To illustrate this: assume uncertainty func( - # logits)=-abs(logits), a sampled point between two coarse - # predictions with -1 and 1 logits has 0 logits, and therefore 0 - # uncertainty value. However, if we calculate uncertainties for the - # coarse predictions first, both will have -1 uncertainty, - # and sampled point will get -1 uncertainty. - point_uncertainties = uncertainty_func(point_logits) - num_uncertain_points = int(importance_sample_ratio * num_points) - num_random_points = num_points - num_uncertain_points - idx = torch.topk( - point_uncertainties[:, 0, :], k=num_uncertain_points, dim=1)[1] - shift = num_sampled * torch.arange( - batch_size, dtype=torch.long, device=seg_logits.device) - idx += shift[:, None] - point_coords = point_coords.view(-1, 2)[idx.view(-1), :].view( - batch_size, num_uncertain_points, 2) - if num_random_points > 0: - rand_point_coords = torch.rand( - batch_size, num_random_points, 2, device=seg_logits.device) - point_coords = torch.cat((point_coords, rand_point_coords), dim=1) - return point_coords - - def get_points_test(self, seg_logits, uncertainty_func, cfg): - """Sample points for testing. - - Find ``num_points`` most uncertain points from ``uncertainty_map``. - - Args: - seg_logits (Tensor): A tensor of shape (batch_size, num_classes, - height, width) for class-specific or class-agnostic prediction. - uncertainty_func (func): uncertainty calculation function. - cfg (dict): Testing config of point head. - - Returns: - point_indices (Tensor): A tensor of shape (batch_size, num_points) - that contains indices from [0, height x width) of the most - uncertain points. - point_coords (Tensor): A tensor of shape (batch_size, num_points, - 2) that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the ``height x width`` grid . - """ - - num_points = cfg.subdivision_num_points - uncertainty_map = uncertainty_func(seg_logits) - batch_size, _, height, width = uncertainty_map.shape - h_step = 1.0 / height - w_step = 1.0 / width - - uncertainty_map = uncertainty_map.view(batch_size, height * width) - num_points = min(height * width, num_points) - point_indices = uncertainty_map.topk(num_points, dim=1)[1] - point_coords = torch.zeros( - batch_size, - num_points, - 2, - dtype=torch.float, - device=seg_logits.device) - point_coords[:, :, 0] = w_step / 2.0 + (point_indices % - width).float() * w_step - point_coords[:, :, 1] = h_step / 2.0 + (point_indices // - width).float() * h_step - return point_indices, point_coords diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/psa_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/psa_head.py deleted file mode 100644 index df7593cb..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/psa_head.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - -try: - from mmcv.ops import PSAMask -except ModuleNotFoundError: - PSAMask = None - - -@HEADS.register_module() -class PSAHead(BaseDecodeHead): - """Point-wise Spatial Attention Network for Scene Parsing. - - This head is the implementation of `PSANet - `_. - - Args: - mask_size (tuple[int]): The PSA mask size. It usually equals input - size. - psa_type (str): The type of psa module. Options are 'collect', - 'distribute', 'bi-direction'. Default: 'bi-direction' - compact (bool): Whether use compact map for 'collect' mode. - Default: True. - shrink_factor (int): The downsample factors of psa mask. Default: 2. - normalization_factor (float): The normalize factor of attention. - psa_softmax (bool): Whether use softmax for attention. - """ - - def __init__(self, - mask_size, - psa_type='bi-direction', - compact=False, - shrink_factor=2, - normalization_factor=1.0, - psa_softmax=True, - **kwargs): - if PSAMask is None: - raise RuntimeError('Please install mmcv-full for PSAMask ops') - super(PSAHead, self).__init__(**kwargs) - assert psa_type in ['collect', 'distribute', 'bi-direction'] - self.psa_type = psa_type - self.compact = compact - self.shrink_factor = shrink_factor - self.mask_size = mask_size - mask_h, mask_w = mask_size - self.psa_softmax = psa_softmax - if normalization_factor is None: - normalization_factor = mask_h * mask_w - self.normalization_factor = normalization_factor - - self.reduce = ConvModule( - self.in_channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.attention = nn.Sequential( - ConvModule( - self.channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - nn.Conv2d( - self.channels, mask_h * mask_w, kernel_size=1, bias=False)) - if psa_type == 'bi-direction': - self.reduce_p = ConvModule( - self.in_channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.attention_p = nn.Sequential( - ConvModule( - self.channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - nn.Conv2d( - self.channels, mask_h * mask_w, kernel_size=1, bias=False)) - self.psamask_collect = PSAMask('collect', mask_size) - self.psamask_distribute = PSAMask('distribute', mask_size) - else: - self.psamask = PSAMask(psa_type, mask_size) - self.proj = ConvModule( - self.channels * (2 if psa_type == 'bi-direction' else 1), - self.in_channels, - kernel_size=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - self.in_channels * 2, - self.channels, - kernel_size=3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - identity = x - align_corners = self.align_corners - if self.psa_type in ['collect', 'distribute']: - out = self.reduce(x) - n, c, h, w = out.size() - if self.shrink_factor != 1: - if h % self.shrink_factor and w % self.shrink_factor: - h = (h - 1) // self.shrink_factor + 1 - w = (w - 1) // self.shrink_factor + 1 - align_corners = True - else: - h = h // self.shrink_factor - w = w // self.shrink_factor - align_corners = 
False - out = resize( - out, - size=(h, w), - mode='bilinear', - align_corners=align_corners) - y = self.attention(out) - if self.compact: - if self.psa_type == 'collect': - y = y.view(n, h * w, - h * w).transpose(1, 2).view(n, h * w, h, w) - else: - y = self.psamask(y) - if self.psa_softmax: - y = F.softmax(y, dim=1) - out = torch.bmm( - out.view(n, c, h * w), y.view(n, h * w, h * w)).view( - n, c, h, w) * (1.0 / self.normalization_factor) - else: - x_col = self.reduce(x) - x_dis = self.reduce_p(x) - n, c, h, w = x_col.size() - if self.shrink_factor != 1: - if h % self.shrink_factor and w % self.shrink_factor: - h = (h - 1) // self.shrink_factor + 1 - w = (w - 1) // self.shrink_factor + 1 - align_corners = True - else: - h = h // self.shrink_factor - w = w // self.shrink_factor - align_corners = False - x_col = resize( - x_col, - size=(h, w), - mode='bilinear', - align_corners=align_corners) - x_dis = resize( - x_dis, - size=(h, w), - mode='bilinear', - align_corners=align_corners) - y_col = self.attention(x_col) - y_dis = self.attention_p(x_dis) - if self.compact: - y_dis = y_dis.view(n, h * w, - h * w).transpose(1, 2).view(n, h * w, h, w) - else: - y_col = self.psamask_collect(y_col) - y_dis = self.psamask_distribute(y_dis) - if self.psa_softmax: - y_col = F.softmax(y_col, dim=1) - y_dis = F.softmax(y_dis, dim=1) - x_col = torch.bmm( - x_col.view(n, c, h * w), y_col.view(n, h * w, h * w)).view( - n, c, h, w) * (1.0 / self.normalization_factor) - x_dis = torch.bmm( - x_dis.view(n, c, h * w), y_dis.view(n, h * w, h * w)).view( - n, c, h, w) * (1.0 / self.normalization_factor) - out = torch.cat([x_col, x_dis], 1) - out = self.proj(out) - out = resize( - out, - size=identity.shape[2:], - mode='bilinear', - align_corners=align_corners) - out = self.bottleneck(torch.cat((identity, out), dim=1)) - out = self.cls_seg(out) - return out diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/psp_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/psp_head.py deleted file mode 100644 index a27ae4bd..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/psp_head.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class PPM(nn.ModuleList): - """Pooling Pyramid Module used in PSPNet. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - align_corners (bool): align_corners argument of F.interpolate. 
- """ - - def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg, - act_cfg, align_corners, **kwargs): - super(PPM, self).__init__() - self.pool_scales = pool_scales - self.align_corners = align_corners - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - for pool_scale in pool_scales: - self.append( - nn.Sequential( - nn.AdaptiveAvgPool2d(pool_scale), - ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - **kwargs))) - - def forward(self, x): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(x) - upsampled_ppm_out = resize( - ppm_out, - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ppm_outs.append(upsampled_ppm_out) - return ppm_outs - - -@HEADS.register_module() -class PSPHead(BaseDecodeHead): - """Pyramid Scene Parsing Network. - - This head is the implementation of - `PSPNet `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. Default: (1, 2, 3, 6). - """ - - def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): - super(PSPHead, self).__init__(**kwargs) - assert isinstance(pool_scales, (list, tuple)) - self.pool_scales = pool_scales - self.psp_modules = PPM( - self.pool_scales, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.bottleneck = ConvModule( - self.in_channels + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - psp_outs = [x] - psp_outs.extend(self.psp_modules(x)) - psp_outs = torch.cat(psp_outs, dim=1) - output = self.bottleneck(psp_outs) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/segformer_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/segformer_head.py deleted file mode 100644 index 2e75d506..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/segformer_head.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.models.builder import HEADS -from mmseg.models.decode_heads.decode_head import BaseDecodeHead -from mmseg.ops import resize - - -@HEADS.register_module() -class SegformerHead(BaseDecodeHead): - """The all mlp Head of segformer. - - This head is the implementation of - `Segformer ` _. - - Args: - interpolate_mode: The interpolate mode of MLP head upsample operation. - Default: 'bilinear'. 
- """ - - def __init__(self, interpolate_mode='bilinear', **kwargs): - super().__init__(input_transform='multiple_select', **kwargs) - - self.interpolate_mode = interpolate_mode - num_inputs = len(self.in_channels) - - assert num_inputs == len(self.in_index) - - self.convs = nn.ModuleList() - for i in range(num_inputs): - self.convs.append( - ConvModule( - in_channels=self.in_channels[i], - out_channels=self.channels, - kernel_size=1, - stride=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - self.fusion_conv = ConvModule( - in_channels=self.channels * num_inputs, - out_channels=self.channels, - kernel_size=1, - norm_cfg=self.norm_cfg) - - def forward(self, inputs): - # Receive 4 stage backbone feature map: 1/4, 1/8, 1/16, 1/32 - inputs = self._transform_inputs(inputs) - outs = [] - for idx in range(len(inputs)): - x = inputs[idx] - conv = self.convs[idx] - outs.append( - resize( - input=conv(x), - size=inputs[0].shape[2:], - mode=self.interpolate_mode, - align_corners=self.align_corners)) - - out = self.fusion_conv(torch.cat(outs, dim=1)) - - out = self.cls_seg(out) - - return out diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/sep_aspp_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/sep_aspp_head.py deleted file mode 100644 index 4e894e28..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/sep_aspp_head.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .aspp_head import ASPPHead, ASPPModule - - -class DepthwiseSeparableASPPModule(ASPPModule): - """Atrous Spatial Pyramid Pooling (ASPP) Module with depthwise separable - conv.""" - - def __init__(self, **kwargs): - super(DepthwiseSeparableASPPModule, self).__init__(**kwargs) - for i, dilation in enumerate(self.dilations): - if dilation > 1: - self[i] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - 3, - dilation=dilation, - padding=dilation, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - -@HEADS.register_module() -class DepthwiseSeparableASPPHead(ASPPHead): - """Encoder-Decoder with Atrous Separable Convolution for Semantic Image - Segmentation. - - This head is the implementation of `DeepLabV3+ - `_. - - Args: - c1_in_channels (int): The input channels of c1 decoder. If is 0, - the no decoder will be used. - c1_channels (int): The intermediate channels of c1 decoder. 
- """ - - def __init__(self, c1_in_channels, c1_channels, **kwargs): - super(DepthwiseSeparableASPPHead, self).__init__(**kwargs) - assert c1_in_channels >= 0 - self.aspp_modules = DepthwiseSeparableASPPModule( - dilations=self.dilations, - in_channels=self.in_channels, - channels=self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if c1_in_channels > 0: - self.c1_bottleneck = ConvModule( - c1_in_channels, - c1_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - else: - self.c1_bottleneck = None - self.sep_bottleneck = nn.Sequential( - DepthwiseSeparableConvModule( - self.channels + c1_channels, - self.channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - DepthwiseSeparableConvModule( - self.channels, - self.channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - aspp_outs = [ - resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ] - aspp_outs.extend(self.aspp_modules(x)) - aspp_outs = torch.cat(aspp_outs, dim=1) - output = self.bottleneck(aspp_outs) - if self.c1_bottleneck is not None: - c1_output = self.c1_bottleneck(inputs[0]) - output = resize( - input=output, - size=c1_output.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - output = torch.cat([output, c1_output], dim=1) - output = self.sep_bottleneck(output) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/sep_fcn_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/sep_fcn_head.py deleted file mode 100644 index 7f9658e0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/sep_fcn_head.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import DepthwiseSeparableConvModule - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class DepthwiseSeparableFCNHead(FCNHead): - """Depthwise-Separable Fully Convolutional Network for Semantic - Segmentation. - - This head is implemented according to `Fast-SCNN: Fast Semantic - Segmentation Network `_. - - Args: - in_channels(int): Number of output channels of FFM. - channels(int): Number of middle-stage channels in the decode head. - concat_input(bool): Whether to concatenate original decode input into - the result of several consecutive convolution layers. - Default: True. - num_classes(int): Used to determine the dimension of - final prediction tensor. - in_index(int): Correspond with 'out_indices' in FastSCNN backbone. - norm_cfg (dict | None): Config of norm layers. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - loss_decode(dict): Config of loss type and some - relevant additional options. - dw_act_cfg (dict):Activation config of depthwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. Default: None. 
- """ - - def __init__(self, dw_act_cfg=None, **kwargs): - super(DepthwiseSeparableFCNHead, self).__init__(**kwargs) - self.convs[0] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg, - dw_act_cfg=dw_act_cfg) - - for i in range(1, self.num_convs): - self.convs[i] = DepthwiseSeparableConvModule( - self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg, - dw_act_cfg=dw_act_cfg) - - if self.concat_input: - self.conv_cat = DepthwiseSeparableConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg, - dw_act_cfg=dw_act_cfg) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/setr_mla_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/setr_mla_head.py deleted file mode 100644 index 6bb94ae3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/setr_mla_head.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.ops import Upsample -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class SETRMLAHead(BaseDecodeHead): - """Multi level feature aggretation head of SETR. - - MLA head of `SETR `_. - - Args: - mlahead_channels (int): Channels of conv-conv-4x of multi-level feature - aggregation. Default: 128. - up_scale (int): The scale factor of interpolate. Default:4. - """ - - def __init__(self, mla_channels=128, up_scale=4, **kwargs): - super(SETRMLAHead, self).__init__( - input_transform='multiple_select', **kwargs) - self.mla_channels = mla_channels - - num_inputs = len(self.in_channels) - - # Refer to self.cls_seg settings of BaseDecodeHead - assert self.channels == num_inputs * mla_channels - - self.up_convs = nn.ModuleList() - for i in range(num_inputs): - self.up_convs.append( - nn.Sequential( - ConvModule( - in_channels=self.in_channels[i], - out_channels=mla_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - ConvModule( - in_channels=mla_channels, - out_channels=mla_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - Upsample( - scale_factor=up_scale, - mode='bilinear', - align_corners=self.align_corners))) - - def forward(self, inputs): - inputs = self._transform_inputs(inputs) - outs = [] - for x, up_conv in zip(inputs, self.up_convs): - outs.append(up_conv(x)) - out = torch.cat(outs, dim=1) - out = self.cls_seg(out) - return out diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/setr_up_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/setr_up_head.py deleted file mode 100644 index 87e7ea7f..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/setr_up_head.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, build_norm_layer - -from mmseg.ops import Upsample -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class SETRUPHead(BaseDecodeHead): - """Naive upsampling head and Progressive upsampling head of SETR. - - Naive or PUP head of `SETR `_. - - Args: - norm_layer (dict): Config dict for input normalization. 
- Default: norm_layer=dict(type='LN', eps=1e-6, requires_grad=True). - num_convs (int): Number of decoder convolutions. Default: 1. - up_scale (int): The scale factor of interpolate. Default:4. - kernel_size (int): The kernel size of convolution when decoding - feature information from backbone. Default: 3. - init_cfg (dict | list[dict] | None): Initialization config dict. - Default: dict( - type='Constant', val=1.0, bias=0, layer='LayerNorm'). - """ - - def __init__(self, - norm_layer=dict(type='LN', eps=1e-6, requires_grad=True), - num_convs=1, - up_scale=4, - kernel_size=3, - init_cfg=[ - dict(type='Constant', val=1.0, bias=0, layer='LayerNorm'), - dict( - type='Normal', - std=0.01, - override=dict(name='conv_seg')) - ], - **kwargs): - - assert kernel_size in [1, 3], 'kernel_size must be 1 or 3.' - - super(SETRUPHead, self).__init__(init_cfg=init_cfg, **kwargs) - - assert isinstance(self.in_channels, int) - - _, self.norm = build_norm_layer(norm_layer, self.in_channels) - - self.up_convs = nn.ModuleList() - in_channels = self.in_channels - out_channels = self.channels - for _ in range(num_convs): - self.up_convs.append( - nn.Sequential( - ConvModule( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=1, - padding=int(kernel_size - 1) // 2, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - Upsample( - scale_factor=up_scale, - mode='bilinear', - align_corners=self.align_corners))) - in_channels = out_channels - - def forward(self, x): - x = self._transform_inputs(x) - - n, c, h, w = x.shape - x = x.reshape(n, c, h * w).transpose(2, 1).contiguous() - x = self.norm(x) - x = x.transpose(1, 2).reshape(n, c, h, w).contiguous() - - for up_conv in self.up_convs: - x = up_conv(x) - out = self.cls_seg(x) - return out diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/stdc_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/stdc_head.py deleted file mode 100644 index 71600163..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/stdc_head.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class STDCHead(FCNHead): - """This head is the implementation of `Rethinking BiSeNet For Real-time - Semantic Segmentation `_. - - Args: - boundary_threshold (float): The threshold of calculating boundary. - Default: 0.1. - """ - - def __init__(self, boundary_threshold=0.1, **kwargs): - super(STDCHead, self).__init__(**kwargs) - self.boundary_threshold = boundary_threshold - # Using register buffer to make laplacian kernel on the same - # device of `seg_label`. - self.register_buffer( - 'laplacian_kernel', - torch.tensor([-1, -1, -1, -1, 8, -1, -1, -1, -1], - dtype=torch.float32, - requires_grad=False).reshape((1, 1, 3, 3))) - self.fusion_kernel = torch.nn.Parameter( - torch.tensor([[6. / 10], [3. / 10], [1. / 10]], - dtype=torch.float32).reshape(1, 3, 1, 1), - requires_grad=False) - - def losses(self, seg_logit, seg_label): - """Compute Detail Aggregation Loss.""" - # Note: The paper claims `fusion_kernel` is a trainable 1x1 conv - # parameters. However, it is a constant in original repo and other - # codebase because it would not be added into computation graph - # after threshold operation. 
- seg_label = seg_label.float() - boundary_targets = F.conv2d( - seg_label, self.laplacian_kernel, padding=1) - boundary_targets = boundary_targets.clamp(min=0) - boundary_targets[boundary_targets > self.boundary_threshold] = 1 - boundary_targets[boundary_targets <= self.boundary_threshold] = 0 - - boundary_targets_x2 = F.conv2d( - seg_label, self.laplacian_kernel, stride=2, padding=1) - boundary_targets_x2 = boundary_targets_x2.clamp(min=0) - - boundary_targets_x4 = F.conv2d( - seg_label, self.laplacian_kernel, stride=4, padding=1) - boundary_targets_x4 = boundary_targets_x4.clamp(min=0) - - boundary_targets_x4_up = F.interpolate( - boundary_targets_x4, boundary_targets.shape[2:], mode='nearest') - boundary_targets_x2_up = F.interpolate( - boundary_targets_x2, boundary_targets.shape[2:], mode='nearest') - - boundary_targets_x2_up[ - boundary_targets_x2_up > self.boundary_threshold] = 1 - boundary_targets_x2_up[ - boundary_targets_x2_up <= self.boundary_threshold] = 0 - - boundary_targets_x4_up[ - boundary_targets_x4_up > self.boundary_threshold] = 1 - boundary_targets_x4_up[ - boundary_targets_x4_up <= self.boundary_threshold] = 0 - - boudary_targets_pyramids = torch.stack( - (boundary_targets, boundary_targets_x2_up, boundary_targets_x4_up), - dim=1) - - boudary_targets_pyramids = boudary_targets_pyramids.squeeze(2) - boudary_targets_pyramid = F.conv2d(boudary_targets_pyramids, - self.fusion_kernel) - - boudary_targets_pyramid[ - boudary_targets_pyramid > self.boundary_threshold] = 1 - boudary_targets_pyramid[ - boudary_targets_pyramid <= self.boundary_threshold] = 0 - - seg_logit = F.interpolate( - seg_logit, - boundary_targets.shape[2:], - mode='bilinear', - align_corners=True) - loss = super(STDCHead, self).losses(seg_logit, - boudary_targets_pyramid.long()) - return loss diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/uper_head.py b/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/uper_head.py deleted file mode 100644 index 57d80be1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/decode_heads/uper_head.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead -from .psp_head import PPM - - -@HEADS.register_module() -class UPerHead(BaseDecodeHead): - """Unified Perceptual Parsing for Scene Understanding. - - This head is the implementation of `UPerNet - `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module applied on the last feature. Default: (1, 2, 3, 6). 
- """ - - def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): - super(UPerHead, self).__init__( - input_transform='multiple_select', **kwargs) - # PSP Module - self.psp_modules = PPM( - pool_scales, - self.in_channels[-1], - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.bottleneck = ConvModule( - self.in_channels[-1] + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - # FPN Module - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - for in_channels in self.in_channels[:-1]: # skip the top layer - l_conv = ConvModule( - in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - inplace=False) - fpn_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - inplace=False) - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - self.fpn_bottleneck = ConvModule( - len(self.in_channels) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def psp_forward(self, inputs): - """Forward function of PSP module.""" - x = inputs[-1] - psp_outs = [x] - psp_outs.extend(self.psp_modules(x)) - psp_outs = torch.cat(psp_outs, dim=1) - output = self.bottleneck(psp_outs) - - return output - - def forward(self, inputs): - """Forward function.""" - - inputs = self._transform_inputs(inputs) - - # build laterals - laterals = [ - lateral_conv(inputs[i]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - laterals.append(self.psp_forward(inputs)) - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] = laterals[i - 1] + resize( - laterals[i], - size=prev_shape, - mode='bilinear', - align_corners=self.align_corners) - - # build outputs - fpn_outs = [ - self.fpn_convs[i](laterals[i]) - for i in range(used_backbone_levels - 1) - ] - # append psp feature - fpn_outs.append(laterals[-1]) - - for i in range(used_backbone_levels - 1, 0, -1): - fpn_outs[i] = resize( - fpn_outs[i], - size=fpn_outs[0].shape[2:], - mode='bilinear', - align_corners=self.align_corners) - fpn_outs = torch.cat(fpn_outs, dim=1) - output = self.fpn_bottleneck(fpn_outs) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/models/losses/__init__.py deleted file mode 100644 index fbc5b2d1..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .accuracy import Accuracy, accuracy -from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy, - cross_entropy, mask_cross_entropy) -from .dice_loss import DiceLoss -from .focal_loss import FocalLoss -from .lovasz_loss import LovaszLoss -from .utils import reduce_loss, weight_reduce_loss, weighted_loss - -__all__ = [ - 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy', - 'mask_cross_entropy', 'CrossEntropyLoss', 'reduce_loss', - 'weight_reduce_loss', 'weighted_loss', 'LovaszLoss', 'DiceLoss', - 'FocalLoss' -] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/accuracy.py b/cv/3d_detection/paconv/pytorch/mmseg/models/losses/accuracy.py deleted file mode 100644 index f2cd16b7..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/accuracy.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - - -def accuracy(pred, target, topk=1, thresh=None): - """Calculate accuracy according to the prediction and target. - - Args: - pred (torch.Tensor): The model prediction, shape (N, num_class, ...) - target (torch.Tensor): The target of each prediction, shape (N, , ...) - topk (int | tuple[int], optional): If the predictions in ``topk`` - matches the target, the predictions will be regarded as - correct ones. Defaults to 1. - thresh (float, optional): If not None, predictions with scores under - this threshold are considered incorrect. Default to None. - - Returns: - float | tuple[float]: If the input ``topk`` is a single integer, - the function will return a single float as accuracy. If - ``topk`` is a tuple containing multiple integers, the - function will return a tuple containing accuracies of - each ``topk`` number. - """ - assert isinstance(topk, (int, tuple)) - if isinstance(topk, int): - topk = (topk, ) - return_single = True - else: - return_single = False - - maxk = max(topk) - if pred.size(0) == 0: - accu = [pred.new_tensor(0.) for i in range(len(topk))] - return accu[0] if return_single else accu - assert pred.ndim == target.ndim + 1 - assert pred.size(0) == target.size(0) - assert maxk <= pred.size(1), \ - f'maxk {maxk} exceeds pred dimension {pred.size(1)}' - pred_value, pred_label = pred.topk(maxk, dim=1) - # transpose to shape (maxk, N, ...) - pred_label = pred_label.transpose(0, 1) - correct = pred_label.eq(target.unsqueeze(0).expand_as(pred_label)) - if thresh is not None: - # Only prediction values larger than thresh are counted as correct - correct = correct & (pred_value > thresh).t() - res = [] - for k in topk: - correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True) - res.append(correct_k.mul_(100.0 / target.numel())) - return res[0] if return_single else res - - -class Accuracy(nn.Module): - """Accuracy calculation module.""" - - def __init__(self, topk=(1, ), thresh=None): - """Module to calculate the accuracy. - - Args: - topk (tuple, optional): The criterion used to calculate the - accuracy. Defaults to (1,). - thresh (float, optional): If not None, predictions with scores - under this threshold are considered incorrect. Default to None. - """ - super().__init__() - self.topk = topk - self.thresh = thresh - - def forward(self, pred, target): - """Forward function to calculate accuracy. - - Args: - pred (torch.Tensor): Prediction of models. - target (torch.Tensor): Target for each prediction. - - Returns: - tuple[float]: The accuracies under different topk criterions. 
- """ - return accuracy(pred, target, self.topk, self.thresh) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/cross_entropy_loss.py b/cv/3d_detection/paconv/pytorch/mmseg/models/losses/cross_entropy_loss.py deleted file mode 100644 index ee489a88..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/cross_entropy_loss.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import get_class_weight, weight_reduce_loss - - -def cross_entropy(pred, - label, - weight=None, - class_weight=None, - reduction='mean', - avg_factor=None, - ignore_index=-100): - """The wrapper function for :func:`F.cross_entropy`""" - # class_weight is a manual rescaling weight given to each class. - # If given, has to be a Tensor of size C element-wise losses - loss = F.cross_entropy( - pred, - label, - weight=class_weight, - reduction='none', - ignore_index=ignore_index) - - # apply weights and do the reduction - if weight is not None: - weight = weight.float() - loss = weight_reduce_loss( - loss, weight=weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def _expand_onehot_labels(labels, label_weights, target_shape, ignore_index): - """Expand onehot labels to match the size of prediction.""" - bin_labels = labels.new_zeros(target_shape) - valid_mask = (labels >= 0) & (labels != ignore_index) - inds = torch.nonzero(valid_mask, as_tuple=True) - - if inds[0].numel() > 0: - if labels.dim() == 3: - bin_labels[inds[0], labels[valid_mask], inds[1], inds[2]] = 1 - else: - bin_labels[inds[0], labels[valid_mask]] = 1 - - valid_mask = valid_mask.unsqueeze(1).expand(target_shape).float() - if label_weights is None: - bin_label_weights = valid_mask - else: - bin_label_weights = label_weights.unsqueeze(1).expand(target_shape) - bin_label_weights *= valid_mask - - return bin_labels, bin_label_weights - - -def binary_cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=255): - """Calculate the binary CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 1). - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (int | None): The label index to be ignored. 
Default: 255 - - Returns: - torch.Tensor: The calculated loss - """ - if pred.dim() != label.dim(): - assert (pred.dim() == 2 and label.dim() == 1) or ( - pred.dim() == 4 and label.dim() == 3), \ - 'Only pred shape [N, C], label shape [N] or pred shape [N, C, ' \ - 'H, W], label shape [N, H, W] are supported' - label, weight = _expand_onehot_labels(label, weight, pred.shape, - ignore_index) - - # weighted element-wise losses - if weight is not None: - weight = weight.float() - loss = F.binary_cross_entropy_with_logits( - pred, label.float(), pos_weight=class_weight, reduction='none') - # do the reduction for the weighted loss - loss = weight_reduce_loss( - loss, weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def mask_cross_entropy(pred, - target, - label, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=None): - """Calculate the CrossEntropy loss for masks. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - target (torch.Tensor): The learning label of the prediction. - label (torch.Tensor): ``label`` indicates the class label of the mask' - corresponding object. This will be used to select the mask in the - of the class which the object belongs to when the mask prediction - if not class-agnostic. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (None): Placeholder, to be consistent with other loss. - Default: None. - - Returns: - torch.Tensor: The calculated loss - """ - assert ignore_index is None, 'BCE loss does not support ignore_index' - # TODO: handle these two reserved arguments - assert reduction == 'mean' and avg_factor is None - num_rois = pred.size()[0] - inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device) - pred_slice = pred[inds, label].squeeze(1) - return F.binary_cross_entropy_with_logits( - pred_slice, target, weight=class_weight, reduction='mean')[None] - - -@LOSSES.register_module() -class CrossEntropyLoss(nn.Module): - """CrossEntropyLoss. - - Args: - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Defaults to False. - use_mask (bool, optional): Whether to use mask cross entropy loss. - Defaults to False. - reduction (str, optional): . Defaults to 'mean'. - Options are "none", "mean" and "sum". - class_weight (list[float] | str, optional): Weight of each class. If in - str format, read them from a file. Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. - loss_name (str, optional): Name of the loss item. If you want this loss - item to be included into the backward graph, `loss_` must be the - prefix of the name. Defaults to 'loss_ce'. 
- """ - - def __init__(self, - use_sigmoid=False, - use_mask=False, - reduction='mean', - class_weight=None, - loss_weight=1.0, - loss_name='loss_ce'): - super(CrossEntropyLoss, self).__init__() - assert (use_sigmoid is False) or (use_mask is False) - self.use_sigmoid = use_sigmoid - self.use_mask = use_mask - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = get_class_weight(class_weight) - - if self.use_sigmoid: - self.cls_criterion = binary_cross_entropy - elif self.use_mask: - self.cls_criterion = mask_cross_entropy - else: - self.cls_criterion = cross_entropy - self._loss_name = loss_name - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function.""" - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = cls_score.new_tensor(self.class_weight) - else: - class_weight = None - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - weight, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_cls - - @property - def loss_name(self): - """Loss Name. - - This function must be implemented and will return the name of this - loss function. This name will be used to combine different loss items - by simple sum operation. In addition, if you want this loss item to be - included into the backward graph, `loss_` must be the prefix of the - name. - Returns: - str: The name of this loss item. - """ - return self._loss_name diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/dice_loss.py b/cv/3d_detection/paconv/pytorch/mmseg/models/losses/dice_loss.py deleted file mode 100644 index 79a3abfc..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/dice_loss.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Modified from https://github.com/LikeLy-Journey/SegmenTron/blob/master/ -segmentron/solver/loss.py (Apache-2.0 License)""" -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import get_class_weight, weighted_loss - - -@weighted_loss -def dice_loss(pred, - target, - valid_mask, - smooth=1, - exponent=2, - class_weight=None, - ignore_index=255): - assert pred.shape[0] == target.shape[0] - total_loss = 0 - num_classes = pred.shape[1] - for i in range(num_classes): - if i != ignore_index: - dice_loss = binary_dice_loss( - pred[:, i], - target[..., i], - valid_mask=valid_mask, - smooth=smooth, - exponent=exponent) - if class_weight is not None: - dice_loss *= class_weight[i] - total_loss += dice_loss - return total_loss / num_classes - - -@weighted_loss -def binary_dice_loss(pred, target, valid_mask, smooth=1, exponent=2, **kwards): - assert pred.shape[0] == target.shape[0] - pred = pred.reshape(pred.shape[0], -1) - target = target.reshape(target.shape[0], -1) - valid_mask = valid_mask.reshape(valid_mask.shape[0], -1) - - num = torch.sum(torch.mul(pred, target) * valid_mask, dim=1) * 2 + smooth - den = torch.sum(pred.pow(exponent) + target.pow(exponent), dim=1) + smooth - - return 1 - num / den - - -@LOSSES.register_module() -class DiceLoss(nn.Module): - """DiceLoss. - - This loss is proposed in `V-Net: Fully Convolutional Neural Networks for - Volumetric Medical Image Segmentation `_. - - Args: - smooth (float): A float number to smooth loss, and avoid NaN error. 
- Default: 1 - exponent (float): An float number to calculate denominator - value: \\sum{x^exponent} + \\sum{y^exponent}. Default: 2. - reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - class_weight (list[float] | str, optional): Weight of each class. If in - str format, read them from a file. Defaults to None. - loss_weight (float, optional): Weight of the loss. Default to 1.0. - ignore_index (int | None): The label index to be ignored. Default: 255. - loss_name (str, optional): Name of the loss item. If you want this loss - item to be included into the backward graph, `loss_` must be the - prefix of the name. Defaults to 'loss_dice'. - """ - - def __init__(self, - smooth=1, - exponent=2, - reduction='mean', - class_weight=None, - loss_weight=1.0, - ignore_index=255, - loss_name='loss_dice', - **kwards): - super(DiceLoss, self).__init__() - self.smooth = smooth - self.exponent = exponent - self.reduction = reduction - self.class_weight = get_class_weight(class_weight) - self.loss_weight = loss_weight - self.ignore_index = ignore_index - self._loss_name = loss_name - - def forward(self, - pred, - target, - avg_factor=None, - reduction_override=None, - **kwards): - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = pred.new_tensor(self.class_weight) - else: - class_weight = None - - pred = F.softmax(pred, dim=1) - num_classes = pred.shape[1] - one_hot_target = F.one_hot( - torch.clamp(target.long(), 0, num_classes - 1), - num_classes=num_classes) - valid_mask = (target != self.ignore_index).long() - - loss = self.loss_weight * dice_loss( - pred, - one_hot_target, - valid_mask=valid_mask, - reduction=reduction, - avg_factor=avg_factor, - smooth=self.smooth, - exponent=self.exponent, - class_weight=class_weight, - ignore_index=self.ignore_index) - return loss - - @property - def loss_name(self): - """Loss Name. - - This function must be implemented and will return the name of this - loss function. This name will be used to combine different loss items - by simple sum operation. In addition, if you want this loss item to be - included into the backward graph, `loss_` must be the prefix of the - name. - Returns: - str: The name of this loss item. - """ - return self._loss_name diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/focal_loss.py b/cv/3d_detection/paconv/pytorch/mmseg/models/losses/focal_loss.py deleted file mode 100644 index af1c711d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/focal_loss.py +++ /dev/null @@ -1,327 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Modified from https://github.com/open-mmlab/mmdetection -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.ops import sigmoid_focal_loss as _sigmoid_focal_loss - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -# This method is used when cuda is not available -def py_sigmoid_focal_loss(pred, - target, - one_hot_target=None, - weight=None, - gamma=2.0, - alpha=0.5, - class_weight=None, - valid_mask=None, - reduction='mean', - avg_factor=None): - """PyTorch version of `Focal Loss `_. 
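# Hedged usage sketch (not part of the deleted file above): the DiceLoss removed above.
# It softmaxes the logits, one-hot encodes the labels and averages the per-class dice
# terms, masking out pixels equal to ignore_index.
import torch
from mmseg.models.losses import DiceLoss

loss_fn = DiceLoss(smooth=1, exponent=2, ignore_index=255)
logits = torch.randn(2, 3, 16, 16)
labels = torch.randint(0, 3, (2, 16, 16))

loss = loss_fn(logits, labels)   # scalar tensor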
- - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the - number of classes - target (torch.Tensor): The learning label of the prediction with - shape (N, C) - one_hot_target (None): Placeholder. It should be None. - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float | list[float], optional): A balanced form for Focal Loss. - Defaults to 0.5. - class_weight (list[float], optional): Weight of each class. - Defaults to None. - valid_mask (torch.Tensor, optional): A mask uses 1 to mark the valid - samples and uses 0 to mark the ignored samples. Default: None. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - if isinstance(alpha, list): - alpha = pred.new_tensor(alpha) - pred_sigmoid = pred.sigmoid() - target = target.type_as(pred) - one_minus_pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target) - focal_weight = (alpha * target + (1 - alpha) * - (1 - target)) * one_minus_pt.pow(gamma) - - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') * focal_weight - final_weight = torch.ones(1, pred.size(1)).type_as(loss) - if weight is not None: - if weight.shape != loss.shape and weight.size(0) == loss.size(0): - # For most cases, weight is of shape (N, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - assert weight.dim() == loss.dim() - final_weight = final_weight * weight - if class_weight is not None: - final_weight = final_weight * pred.new_tensor(class_weight) - if valid_mask is not None: - final_weight = final_weight * valid_mask - loss = weight_reduce_loss(loss, final_weight, reduction, avg_factor) - return loss - - -def sigmoid_focal_loss(pred, - target, - one_hot_target, - weight=None, - gamma=2.0, - alpha=0.5, - class_weight=None, - valid_mask=None, - reduction='mean', - avg_factor=None): - r"""A warpper of cuda version `Focal Loss - `_. - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - target (torch.Tensor): The learning label of the prediction. It's shape - should be (N, ) - one_hot_target (torch.Tensor): The learning label with shape (N, C) - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float | list[float], optional): A balanced form for Focal Loss. - Defaults to 0.5. - class_weight (list[float], optional): Weight of each class. - Defaults to None. - valid_mask (torch.Tensor, optional): A mask uses 1 to mark the valid - samples and uses 0 to mark the ignored samples. Default: None. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - # Function.apply does not accept keyword arguments, so the decorator - # "weighted_loss" is not applicable - final_weight = torch.ones(1, pred.size(1)).type_as(pred) - if isinstance(alpha, list): - # _sigmoid_focal_loss doesn't accept alpha of list type. Therefore, if - # a list is given, we set the input alpha as 0.5. This means setting - # equal weight for foreground class and background class. 
By - # multiplying the loss by 2, the effect of setting alpha as 0.5 is - # undone. The alpha of type list is used to regulate the loss in the - # post-processing process. - loss = _sigmoid_focal_loss(pred.contiguous(), target.contiguous(), - gamma, 0.5, None, 'none') * 2 - alpha = pred.new_tensor(alpha) - final_weight = final_weight * ( - alpha * one_hot_target + (1 - alpha) * (1 - one_hot_target)) - else: - loss = _sigmoid_focal_loss(pred.contiguous(), target.contiguous(), - gamma, alpha, None, 'none') - if weight is not None: - if weight.shape != loss.shape and weight.size(0) == loss.size(0): - # For most cases, weight is of shape (N, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - assert weight.dim() == loss.dim() - final_weight = final_weight * weight - if class_weight is not None: - final_weight = final_weight * pred.new_tensor(class_weight) - if valid_mask is not None: - final_weight = final_weight * valid_mask - loss = weight_reduce_loss(loss, final_weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class FocalLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - gamma=2.0, - alpha=0.5, - reduction='mean', - class_weight=None, - loss_weight=1.0, - loss_name='loss_focal'): - """`Focal Loss `_ - Args: - use_sigmoid (bool, optional): Whether to the prediction is - used for sigmoid or softmax. Defaults to True. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float | list[float], optional): A balanced form for Focal - Loss. Defaults to 0.5. When a list is provided, the length - of the list should be equal to the number of classes. - Please be careful that this parameter is not the - class-wise weight but the weight of a binary classification - problem. This binary classification problem regards the - pixels which belong to one class as the foreground - and the other pixels as the background, each element in - the list is the weight of the corresponding foreground class. - The value of alpha or each element of alpha should be a float - in the interval [0, 1]. If you want to specify the class-wise - weight, please use `class_weight` parameter. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - class_weight (list[float], optional): Weight of each class. - Defaults to None. - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - loss_name (str, optional): Name of the loss item. If you want this - loss item to be included into the backward graph, `loss_` must - be the prefix of the name. Defaults to 'loss_focal'. - """ - super(FocalLoss, self).__init__() - assert use_sigmoid is True, \ - 'AssertionError: Only sigmoid focal loss supported now.' 
- assert reduction in ('none', 'mean', 'sum'), \ - "AssertionError: reduction should be 'none', 'mean' or " \ - "'sum'" - assert isinstance(alpha, (float, list)), \ - 'AssertionError: alpha should be of type float' - assert isinstance(gamma, float), \ - 'AssertionError: gamma should be of type float' - assert isinstance(loss_weight, float), \ - 'AssertionError: loss_weight should be of type float' - assert isinstance(loss_name, str), \ - 'AssertionError: loss_name should be of type str' - assert isinstance(class_weight, list) or class_weight is None, \ - 'AssertionError: class_weight must be None or of type list' - self.use_sigmoid = use_sigmoid - self.gamma = gamma - self.alpha = alpha - self.reduction = reduction - self.class_weight = class_weight - self.loss_weight = loss_weight - self._loss_name = loss_name - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - ignore_index=255, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction with shape - (N, C) where C = number of classes, or - (N, C, d_1, d_2, ..., d_K) with K≥1 in the - case of K-dimensional loss. - target (torch.Tensor): The ground truth. If containing class - indices, shape (N) where each value is 0≤targets[i]≤C−1, - or (N, d_1, d_2, ..., d_K) with K≥1 in the case of - K-dimensional loss. If containing class probabilities, - same shape as the input. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to - average the loss. Defaults to None. - reduction_override (str, optional): The reduction method used - to override the original reduction method of the loss. - Options are "none", "mean" and "sum". - ignore_index (int, optional): The label index to be ignored. - Default: 255 - Returns: - torch.Tensor: The calculated loss - """ - assert isinstance(ignore_index, int), \ - 'ignore_index must be of type int' - assert reduction_override in (None, 'none', 'mean', 'sum'), \ - "AssertionError: reduction should be 'none', 'mean' or " \ - "'sum'" - assert pred.shape == target.shape or \ - (pred.size(0) == target.size(0) and - pred.shape[2:] == target.shape[1:]), \ - "The shape of pred doesn't match the shape of target" - - original_shape = pred.shape - - # [B, C, d_1, d_2, ..., d_k] -> [C, B, d_1, d_2, ..., d_k] - pred = pred.transpose(0, 1) - # [C, B, d_1, d_2, ..., d_k] -> [C, N] - pred = pred.reshape(pred.size(0), -1) - # [C, N] -> [N, C] - pred = pred.transpose(0, 1).contiguous() - - if original_shape == target.shape: - # target with shape [B, C, d_1, d_2, ...] - # transform it's shape into [N, C] - # [B, C, d_1, d_2, ...] -> [C, B, d_1, d_2, ..., d_k] - target = target.transpose(0, 1) - # [C, B, d_1, d_2, ..., d_k] -> [C, N] - target = target.reshape(target.size(0), -1) - # [C, N] -> [N, C] - target = target.transpose(0, 1).contiguous() - else: - # target with shape [B, d_1, d_2, ...] 
- # transform it's shape into [N, ] - target = target.view(-1).contiguous() - valid_mask = (target != ignore_index).view(-1, 1) - # avoid raising error when using F.one_hot() - target = torch.where(target == ignore_index, target.new_tensor(0), - target) - - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - num_classes = pred.size(1) - if torch.cuda.is_available() and pred.is_cuda: - if target.dim() == 1: - one_hot_target = F.one_hot(target, num_classes=num_classes) - else: - one_hot_target = target - target = target.argmax(dim=1) - valid_mask = (target != ignore_index).view(-1, 1) - calculate_loss_func = sigmoid_focal_loss - else: - one_hot_target = None - if target.dim() == 1: - target = F.one_hot(target, num_classes=num_classes) - else: - valid_mask = (target.argmax(dim=1) != ignore_index).view( - -1, 1) - calculate_loss_func = py_sigmoid_focal_loss - - loss_cls = self.loss_weight * calculate_loss_func( - pred, - target, - one_hot_target, - weight, - gamma=self.gamma, - alpha=self.alpha, - class_weight=self.class_weight, - valid_mask=valid_mask, - reduction=reduction, - avg_factor=avg_factor) - - if reduction == 'none': - # [N, C] -> [C, N] - loss_cls = loss_cls.transpose(0, 1) - # [C, N] -> [C, B, d1, d2, ...] - # original_shape: [B, C, d1, d2, ...] - loss_cls = loss_cls.reshape(original_shape[1], - original_shape[0], - *original_shape[2:]) - # [C, B, d1, d2, ...] -> [B, C, d1, d2, ...] - loss_cls = loss_cls.transpose(0, 1).contiguous() - else: - raise NotImplementedError - return loss_cls - - @property - def loss_name(self): - """Loss Name. - - This function must be implemented and will return the name of this - loss function. This name will be used to combine different loss items - by simple sum operation. In addition, if you want this loss item to be - included into the backward graph, `loss_` must be the prefix of the - name. - Returns: - str: The name of this loss item. - """ - return self._loss_name diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/lovasz_loss.py b/cv/3d_detection/paconv/pytorch/mmseg/models/losses/lovasz_loss.py deleted file mode 100644 index 2bb0fad3..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/lovasz_loss.py +++ /dev/null @@ -1,323 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Modified from https://github.com/bermanmaxim/LovaszSoftmax/blob/master/pytor -ch/lovasz_losses.py Lovasz-Softmax and Jaccard hinge loss in PyTorch Maxim -Berman 2018 ESAT-PSI KU Leuven (MIT License)""" - -import mmcv -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import get_class_weight, weight_reduce_loss - - -def lovasz_grad(gt_sorted): - """Computes gradient of the Lovasz extension w.r.t sorted errors. - - See Alg. 1 in paper. - """ - p = len(gt_sorted) - gts = gt_sorted.sum() - intersection = gts - gt_sorted.float().cumsum(0) - union = gts + (1 - gt_sorted).float().cumsum(0) - jaccard = 1. 
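# Hedged usage sketch (not part of the deleted file above): the sigmoid FocalLoss
# removed above. On CPU tensors it runs py_sigmoid_focal_loss; on CUDA tensors it
# dispatches to the mmcv sigmoid_focal_loss op, as the forward() above shows.
import torch
from mmseg.models.losses import FocalLoss

loss_fn = FocalLoss(gamma=2.0, alpha=0.5, loss_weight=1.0)
logits = torch.randn(2, 5, 8, 8)           # (N, num_classes, H, W)
labels = torch.randint(0, 5, (2, 8, 8))    # pixels labelled 255 would be ignored

loss = loss_fn(logits, labels)             # scalar (default reduction='mean')
# A per-class alpha such as alpha=[0.25] * 5 is also accepted; its length must equal
# the number of classes.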
- intersection / union - if p > 1: # cover 1-pixel case - jaccard[1:p] = jaccard[1:p] - jaccard[0:-1] - return jaccard - - -def flatten_binary_logits(logits, labels, ignore_index=None): - """Flattens predictions in the batch (binary case) Remove labels equal to - 'ignore_index'.""" - logits = logits.view(-1) - labels = labels.view(-1) - if ignore_index is None: - return logits, labels - valid = (labels != ignore_index) - vlogits = logits[valid] - vlabels = labels[valid] - return vlogits, vlabels - - -def flatten_probs(probs, labels, ignore_index=None): - """Flattens predictions in the batch.""" - if probs.dim() == 3: - # assumes output of a sigmoid layer - B, H, W = probs.size() - probs = probs.view(B, 1, H, W) - B, C, H, W = probs.size() - probs = probs.permute(0, 2, 3, 1).contiguous().view(-1, C) # B*H*W, C=P,C - labels = labels.view(-1) - if ignore_index is None: - return probs, labels - valid = (labels != ignore_index) - vprobs = probs[valid.nonzero().squeeze()] - vlabels = labels[valid] - return vprobs, vlabels - - -def lovasz_hinge_flat(logits, labels): - """Binary Lovasz hinge loss. - - Args: - logits (torch.Tensor): [P], logits at each prediction - (between -infty and +infty). - labels (torch.Tensor): [P], binary ground truth labels (0 or 1). - - Returns: - torch.Tensor: The calculated loss. - """ - if len(labels) == 0: - # only void pixels, the gradients should be 0 - return logits.sum() * 0. - signs = 2. * labels.float() - 1. - errors = (1. - logits * signs) - errors_sorted, perm = torch.sort(errors, dim=0, descending=True) - perm = perm.data - gt_sorted = labels[perm] - grad = lovasz_grad(gt_sorted) - loss = torch.dot(F.relu(errors_sorted), grad) - return loss - - -def lovasz_hinge(logits, - labels, - classes='present', - per_image=False, - class_weight=None, - reduction='mean', - avg_factor=None, - ignore_index=255): - """Binary Lovasz hinge loss. - - Args: - logits (torch.Tensor): [B, H, W], logits at each pixel - (between -infty and +infty). - labels (torch.Tensor): [B, H, W], binary ground truth masks (0 or 1). - classes (str | list[int], optional): Placeholder, to be consistent with - other loss. Default: None. - per_image (bool, optional): If per_image is True, compute the loss per - image instead of per batch. Default: False. - class_weight (list[float], optional): Placeholder, to be consistent - with other loss. Default: None. - reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. This parameter only works when per_image is True. - Default: None. - ignore_index (int | None): The label index to be ignored. Default: 255. - - Returns: - torch.Tensor: The calculated loss. - """ - if per_image: - loss = [ - lovasz_hinge_flat(*flatten_binary_logits( - logit.unsqueeze(0), label.unsqueeze(0), ignore_index)) - for logit, label in zip(logits, labels) - ] - loss = weight_reduce_loss( - torch.stack(loss), None, reduction, avg_factor) - else: - loss = lovasz_hinge_flat( - *flatten_binary_logits(logits, labels, ignore_index)) - return loss - - -def lovasz_softmax_flat(probs, labels, classes='present', class_weight=None): - """Multi-class Lovasz-Softmax loss. - - Args: - probs (torch.Tensor): [P, C], class probabilities at each prediction - (between 0 and 1). - labels (torch.Tensor): [P], ground truth labels (between 0 and C - 1). 
- classes (str | list[int], optional): Classes chosen to calculate loss. - 'all' for all classes, 'present' for classes present in labels, or - a list of classes to average. Default: 'present'. - class_weight (list[float], optional): The weight for each class. - Default: None. - - Returns: - torch.Tensor: The calculated loss. - """ - if probs.numel() == 0: - # only void pixels, the gradients should be 0 - return probs * 0. - C = probs.size(1) - losses = [] - class_to_sum = list(range(C)) if classes in ['all', 'present'] else classes - for c in class_to_sum: - fg = (labels == c).float() # foreground for class c - if (classes == 'present' and fg.sum() == 0): - continue - if C == 1: - if len(classes) > 1: - raise ValueError('Sigmoid output possible only with 1 class') - class_pred = probs[:, 0] - else: - class_pred = probs[:, c] - errors = (fg - class_pred).abs() - errors_sorted, perm = torch.sort(errors, 0, descending=True) - perm = perm.data - fg_sorted = fg[perm] - loss = torch.dot(errors_sorted, lovasz_grad(fg_sorted)) - if class_weight is not None: - loss *= class_weight[c] - losses.append(loss) - return torch.stack(losses).mean() - - -def lovasz_softmax(probs, - labels, - classes='present', - per_image=False, - class_weight=None, - reduction='mean', - avg_factor=None, - ignore_index=255): - """Multi-class Lovasz-Softmax loss. - - Args: - probs (torch.Tensor): [B, C, H, W], class probabilities at each - prediction (between 0 and 1). - labels (torch.Tensor): [B, H, W], ground truth labels (between 0 and - C - 1). - classes (str | list[int], optional): Classes chosen to calculate loss. - 'all' for all classes, 'present' for classes present in labels, or - a list of classes to average. Default: 'present'. - per_image (bool, optional): If per_image is True, compute the loss per - image instead of per batch. Default: False. - class_weight (list[float], optional): The weight for each class. - Default: None. - reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. This parameter only works when per_image is True. - Default: None. - ignore_index (int | None): The label index to be ignored. Default: 255. - - Returns: - torch.Tensor: The calculated loss. - """ - - if per_image: - loss = [ - lovasz_softmax_flat( - *flatten_probs( - prob.unsqueeze(0), label.unsqueeze(0), ignore_index), - classes=classes, - class_weight=class_weight) - for prob, label in zip(probs, labels) - ] - loss = weight_reduce_loss( - torch.stack(loss), None, reduction, avg_factor) - else: - loss = lovasz_softmax_flat( - *flatten_probs(probs, labels, ignore_index), - classes=classes, - class_weight=class_weight) - return loss - - -@LOSSES.register_module() -class LovaszLoss(nn.Module): - """LovaszLoss. - - This loss is proposed in `The Lovasz-Softmax loss: A tractable surrogate - for the optimization of the intersection-over-union measure in neural - networks `_. - - Args: - loss_type (str, optional): Binary or multi-class loss. - Default: 'multi_class'. Options are "binary" and "multi_class". - classes (str | list[int], optional): Classes chosen to calculate loss. - 'all' for all classes, 'present' for classes present in labels, or - a list of classes to average. Default: 'present'. - per_image (bool, optional): If per_image is True, compute the loss per - image instead of per batch. Default: False. 
- reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - class_weight (list[float] | str, optional): Weight of each class. If in - str format, read them from a file. Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. - loss_name (str, optional): Name of the loss item. If you want this loss - item to be included into the backward graph, `loss_` must be the - prefix of the name. Defaults to 'loss_lovasz'. - """ - - def __init__(self, - loss_type='multi_class', - classes='present', - per_image=False, - reduction='mean', - class_weight=None, - loss_weight=1.0, - loss_name='loss_lovasz'): - super(LovaszLoss, self).__init__() - assert loss_type in ('binary', 'multi_class'), "loss_type should be \ - 'binary' or 'multi_class'." - - if loss_type == 'binary': - self.cls_criterion = lovasz_hinge - else: - self.cls_criterion = lovasz_softmax - assert classes in ('all', 'present') or mmcv.is_list_of(classes, int) - if not per_image: - assert reduction == 'none', "reduction should be 'none' when \ - per_image is False." - - self.classes = classes - self.per_image = per_image - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = get_class_weight(class_weight) - self._loss_name = loss_name - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function.""" - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = cls_score.new_tensor(self.class_weight) - else: - class_weight = None - - # if multi-class loss, transform logits to probs - if self.cls_criterion == lovasz_softmax: - cls_score = F.softmax(cls_score, dim=1) - - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - self.classes, - self.per_image, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_cls - - @property - def loss_name(self): - """Loss Name. - - This function must be implemented and will return the name of this - loss function. This name will be used to combine different loss items - by simple sum operation. In addition, if you want this loss item to be - included into the backward graph, `loss_` must be the prefix of the - name. - Returns: - str: The name of this loss item. - """ - return self._loss_name diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/utils.py b/cv/3d_detection/paconv/pytorch/mmseg/models/losses/utils.py deleted file mode 100644 index c37875fa..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/losses/utils.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import mmcv -import numpy as np -import torch.nn.functional as F - - -def get_class_weight(class_weight): - """Get class weight for loss function. - - Args: - class_weight (list[float] | str | None): If class_weight is a str, - take it as a file name and read from it. - """ - if isinstance(class_weight, str): - # take it as a file path - if class_weight.endswith('.npy'): - class_weight = np.load(class_weight) - else: - # pkl, json or yaml - class_weight = mmcv.load(class_weight) - - return class_weight - - -def reduce_loss(loss, reduction): - """Reduce loss as specified. - - Args: - loss (Tensor): Elementwise loss tensor. 
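# Hedged usage sketch (not part of the deleted file above): the LovaszLoss removed
# above. Note the constructor assertion: with per_image=False the module itself must
# be built with reduction='none', since lovasz_softmax_flat already returns a scalar.
import torch
from mmseg.models.losses import LovaszLoss

loss_fn = LovaszLoss(loss_type='multi_class', classes='present',
                     per_image=False, reduction='none', loss_weight=1.0)
logits = torch.randn(2, 4, 16, 16)
labels = torch.randint(0, 4, (2, 16, 16))

loss = loss_fn(logits, labels)   # scalar: softmax probs fed to lovasz_softmax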
- reduction (str): Options are "none", "mean" and "sum". - - Return: - Tensor: Reduced loss tensor. - """ - reduction_enum = F._Reduction.get_enum(reduction) - # none: 0, elementwise_mean:1, sum: 2 - if reduction_enum == 0: - return loss - elif reduction_enum == 1: - return loss.mean() - elif reduction_enum == 2: - return loss.sum() - - -def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None): - """Apply element-wise weight and reduce loss. - - Args: - loss (Tensor): Element-wise loss. - weight (Tensor): Element-wise weights. - reduction (str): Same as built-in losses of PyTorch. - avg_factor (float): Average factor when computing the mean of losses. - - Returns: - Tensor: Processed loss values. - """ - # if weight is specified, apply element-wise weight - if weight is not None: - assert weight.dim() == loss.dim() - if weight.dim() > 1: - assert weight.size(1) == 1 or weight.size(1) == loss.size(1) - loss = loss * weight - - # if avg_factor is not specified, just reduce the loss - if avg_factor is None: - loss = reduce_loss(loss, reduction) - else: - # if reduction is mean, then average the loss by avg_factor - if reduction == 'mean': - loss = loss.sum() / avg_factor - # if reduction is 'none', then do nothing, otherwise raise an error - elif reduction != 'none': - raise ValueError('avg_factor can not be used with reduction="sum"') - return loss - - -def weighted_loss(loss_func): - """Create a weighted version of a given loss function. - - To use this decorator, the loss function must have the signature like - `loss_func(pred, target, **kwargs)`. The function only needs to compute - element-wise loss without any reduction. This decorator will add weight - and reduction arguments to the function. The decorated function will have - the signature like `loss_func(pred, target, weight=None, reduction='mean', - avg_factor=None, **kwargs)`. - - :Example: - - >>> import torch - >>> @weighted_loss - >>> def l1_loss(pred, target): - >>> return (pred - target).abs() - - >>> pred = torch.Tensor([0, 2, 3]) - >>> target = torch.Tensor([1, 1, 1]) - >>> weight = torch.Tensor([1, 0, 1]) - - >>> l1_loss(pred, target) - tensor(1.3333) - >>> l1_loss(pred, target, weight) - tensor(1.) - >>> l1_loss(pred, target, reduction='none') - tensor([1., 1., 2.]) - >>> l1_loss(pred, target, weight, avg_factor=2) - tensor(1.5000) - """ - - @functools.wraps(loss_func) - def wrapper(pred, - target, - weight=None, - reduction='mean', - avg_factor=None, - **kwargs): - # get element-wise loss - loss = loss_func(pred, target, **kwargs) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - return wrapper diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/models/necks/__init__.py deleted file mode 100644 index aba73f16..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .fpn import FPN -from .ic_neck import ICNeck -from .jpu import JPU -from .mla_neck import MLANeck -from .multilevel_neck import MultiLevelNeck - -__all__ = ['FPN', 'MultiLevelNeck', 'MLANeck', 'ICNeck', 'JPU'] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/fpn.py b/cv/3d_detection/paconv/pytorch/mmseg/models/necks/fpn.py deleted file mode 100644 index 975a48e8..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/fpn.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16 - -from mmseg.ops import resize -from ..builder import NECKS - - -@NECKS.register_module() -class FPN(BaseModule): - """Feature Pyramid Network. - - This neck is the implementation of `Feature Pyramid Networks for Object - Detection `_. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool | str): If bool, it decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - If str, it specifies the source feature map of the extra convs. - Only the following options are allowed - - - 'on_input': Last feat map of neck inputs (i.e. backbone feature). - - 'on_lateral': Last feature map after lateral convs. - - 'on_output': The last output feature map after fpn convs. - extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs - on the original feature from the backbone. If True, - it is equivalent to `add_extra_convs='on_input'`. If False, it is - equivalent to set `add_extra_convs='on_output'`. Default to True. - relu_before_extra_convs (bool): Whether to apply relu before the extra - conv. Default: False. - no_norm_on_lateral (bool): Whether to apply norm on lateral. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (str): Config dict for activation layer in ConvModule. - Default: None. - upsample_cfg (dict): Config dict for interpolate layer. - Default: `dict(mode='nearest')` - init_cfg (dict or list[dict], optional): Initialization config dict. - - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... for c, s in zip(in_channels, scales)] - >>> self = FPN(in_channels, 11, len(in_channels)).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... 
print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - add_extra_convs=False, - extra_convs_on_inputs=False, - relu_before_extra_convs=False, - no_norm_on_lateral=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - upsample_cfg=dict(mode='nearest'), - init_cfg=dict( - type='Xavier', layer='Conv2d', distribution='uniform')): - super(FPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.relu_before_extra_convs = relu_before_extra_convs - self.no_norm_on_lateral = no_norm_on_lateral - self.fp16_enabled = False - self.upsample_cfg = upsample_cfg.copy() - - if end_level == -1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level < inputs, no extra level is allowed - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - assert num_outs == end_level - start_level - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - assert isinstance(add_extra_convs, (str, bool)) - if isinstance(add_extra_convs, str): - # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output' - assert add_extra_convs in ('on_input', 'on_lateral', 'on_output') - elif add_extra_convs: # True - if extra_convs_on_inputs: - # For compatibility with previous release - # TODO: deprecate `extra_convs_on_inputs` - self.add_extra_convs = 'on_input' - else: - self.add_extra_convs = 'on_output' - - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg if not self.no_norm_on_lateral else None, - act_cfg=act_cfg, - inplace=False) - fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - # add extra conv layers (e.g., RetinaNet) - extra_levels = num_outs - self.backbone_end_level + self.start_level - if self.add_extra_convs and extra_levels >= 1: - for i in range(extra_levels): - if i == 0 and self.add_extra_convs == 'on_input': - in_channels = self.in_channels[self.backbone_end_level - 1] - else: - in_channels = out_channels - extra_fpn_conv = ConvModule( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - self.fpn_convs.append(extra_fpn_conv) - - @auto_fp16() - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - # In some cases, fixing `scale factor` (e.g. 2) is preferred, but - # it cannot co-exist with `size` in `F.interpolate`. 
- if 'scale_factor' in self.upsample_cfg: - laterals[i - 1] = laterals[i - 1] + resize( - laterals[i], **self.upsample_cfg) - else: - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] = laterals[i - 1] + resize( - laterals[i], size=prev_shape, **self.upsample_cfg) - - # build outputs - # part 1: from original levels - outs = [ - self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels) - ] - # part 2: add extra levels - if self.num_outs > len(outs): - # use max pool to get more levels on top of outputs - # (e.g., Faster R-CNN, Mask R-CNN) - if not self.add_extra_convs: - for i in range(self.num_outs - used_backbone_levels): - outs.append(F.max_pool2d(outs[-1], 1, stride=2)) - # add conv layers on top of original feature maps (RetinaNet) - else: - if self.add_extra_convs == 'on_input': - extra_source = inputs[self.backbone_end_level - 1] - elif self.add_extra_convs == 'on_lateral': - extra_source = laterals[-1] - elif self.add_extra_convs == 'on_output': - extra_source = outs[-1] - else: - raise NotImplementedError - outs.append(self.fpn_convs[used_backbone_levels](extra_source)) - for i in range(used_backbone_levels + 1, self.num_outs): - if self.relu_before_extra_convs: - outs.append(self.fpn_convs[i](F.relu(outs[-1]))) - else: - outs.append(self.fpn_convs[i](outs[-1])) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/ic_neck.py b/cv/3d_detection/paconv/pytorch/mmseg/models/necks/ic_neck.py deleted file mode 100644 index d836a6b9..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/ic_neck.py +++ /dev/null @@ -1,147 +0,0 @@ -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import NECKS - - -class CascadeFeatureFusion(BaseModule): - """Cascade Feature Fusion Unit in ICNet. - - Args: - low_channels (int): The number of input channels for - low resolution feature map. - high_channels (int): The number of input channels for - high resolution feature map. - out_channels (int): The number of output channels. - conv_cfg (dict): Dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN'). - act_cfg (dict): Dictionary to construct and config act layer. - Default: dict(type='ReLU'). - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - - Returns: - x (Tensor): The output tensor of shape (N, out_channels, H, W). - x_low (Tensor): The output tensor of shape (N, out_channels, H, W) - for Cascade Label Guidance in auxiliary heads. 
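# Hedged usage sketch (not part of the deleted file above): the FPN neck removed above
# with one extra pyramid level. With add_extra_convs='on_input' the extra level is a
# stride-2 conv on the last backbone feature; with add_extra_convs=False it would be a
# max-pool, as the forward() above shows.
import torch
from mmseg.models.necks import FPN

in_channels = [256, 512, 1024, 2048]
feats = [torch.randn(1, c, 64 // 2**i, 64 // 2**i) for i, c in enumerate(in_channels)]

neck = FPN(in_channels, out_channels=256, num_outs=5, add_extra_convs='on_input')
outs = neck(feats)
# len(outs) == 5; spatial sizes 64, 32, 16, 8 and 4; every level has 256 channels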
- """ - - def __init__(self, - low_channels, - high_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False, - init_cfg=None): - super(CascadeFeatureFusion, self).__init__(init_cfg=init_cfg) - self.align_corners = align_corners - self.conv_low = ConvModule( - low_channels, - out_channels, - 3, - padding=2, - dilation=2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.conv_high = ConvModule( - high_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x_low, x_high): - x_low = resize( - x_low, - size=x_high.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - # Note: Different from original paper, `x_low` is underwent - # `self.conv_low` rather than another 1x1 conv classifier - # before being used for auxiliary head. - x_low = self.conv_low(x_low) - x_high = self.conv_high(x_high) - x = x_low + x_high - x = F.relu(x, inplace=True) - return x, x_low - - -@NECKS.register_module() -class ICNeck(BaseModule): - """ICNet for Real-Time Semantic Segmentation on High-Resolution Images. - - This head is the implementation of `ICHead - `_. - - Args: - in_channels (int): The number of input image channels. Default: 3. - out_channels (int): The numbers of output feature channels. - Default: 128. - conv_cfg (dict): Dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN'). - act_cfg (dict): Dictionary to construct and config act layer. - Default: dict(type='ReLU'). - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels=(64, 256, 256), - out_channels=128, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False, - init_cfg=None): - super(ICNeck, self).__init__(init_cfg=init_cfg) - assert len(in_channels) == 3, 'Length of input channels \ - must be 3!' - - self.in_channels = in_channels - self.out_channels = out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.align_corners = align_corners - self.cff_24 = CascadeFeatureFusion( - self.in_channels[2], - self.in_channels[1], - self.out_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - - self.cff_12 = CascadeFeatureFusion( - self.out_channels, - self.in_channels[0], - self.out_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - - def forward(self, inputs): - assert len(inputs) == 3, 'Length of input feature \ - maps must be 3!' - - x_sub1, x_sub2, x_sub4 = inputs - x_cff_24, x_24 = self.cff_24(x_sub4, x_sub2) - x_cff_12, x_12 = self.cff_12(x_cff_24, x_sub1) - # Note: `x_cff_12` is used for decode_head, - # `x_24` and `x_12` are used for auxiliary head. - return x_24, x_12, x_cff_12 diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/jpu.py b/cv/3d_detection/paconv/pytorch/mmseg/models/necks/jpu.py deleted file mode 100644 index 3cc6b9f4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/jpu.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import NECKS - - -@NECKS.register_module() -class JPU(BaseModule): - """FastFCN: Rethinking Dilated Convolution in the Backbone - for Semantic Segmentation. - - This Joint Pyramid Upsampling (JPU) neck is the implementation of - `FastFCN `_. - - Args: - in_channels (Tuple[int], optional): The number of input channels - for each convolution operations before upsampling. - Default: (512, 1024, 2048). - mid_channels (int): The number of output channels of JPU. - Default: 512. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - dilations (tuple[int]): Dilation rate of each Depthwise - Separable ConvModule. Default: (1, 2, 4, 8). - align_corners (bool, optional): The align_corners argument of - resize operation. Default: False. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels=(512, 1024, 2048), - mid_channels=512, - start_level=0, - end_level=-1, - dilations=(1, 2, 4, 8), - align_corners=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(JPU, self).__init__(init_cfg=init_cfg) - assert isinstance(in_channels, tuple) - assert isinstance(dilations, tuple) - self.in_channels = in_channels - self.mid_channels = mid_channels - self.start_level = start_level - self.num_ins = len(in_channels) - if end_level == -1: - self.backbone_end_level = self.num_ins - else: - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - - self.dilations = dilations - self.align_corners = align_corners - - self.conv_layers = nn.ModuleList() - self.dilation_layers = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - conv_layer = nn.Sequential( - ConvModule( - self.in_channels[i], - self.mid_channels, - kernel_size=3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.conv_layers.append(conv_layer) - for i in range(len(dilations)): - dilation_layer = nn.Sequential( - DepthwiseSeparableConvModule( - in_channels=(self.backbone_end_level - self.start_level) * - self.mid_channels, - out_channels=self.mid_channels, - kernel_size=3, - stride=1, - padding=dilations[i], - dilation=dilations[i], - dw_norm_cfg=norm_cfg, - dw_act_cfg=None, - pw_norm_cfg=norm_cfg, - pw_act_cfg=act_cfg)) - self.dilation_layers.append(dilation_layer) - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.in_channels), 'Length of inputs must \ - be the same with self.in_channels!' 
- - feats = [ - self.conv_layers[i - self.start_level](inputs[i]) - for i in range(self.start_level, self.backbone_end_level) - ] - - h, w = feats[0].shape[2:] - for i in range(1, len(feats)): - feats[i] = resize( - feats[i], - size=(h, w), - mode='bilinear', - align_corners=self.align_corners) - - feat = torch.cat(feats, dim=1) - concat_feat = torch.cat([ - self.dilation_layers[i](feat) for i in range(len(self.dilations)) - ], - dim=1) - - outs = [] - - # Default: outs[2] is the output of JPU for decoder head, outs[1] is - # the feature map from backbone for auxiliary head. Additionally, - # outs[0] can also be used for auxiliary head. - for i in range(self.start_level, self.backbone_end_level - 1): - outs.append(inputs[i]) - outs.append(concat_feat) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/mla_neck.py b/cv/3d_detection/paconv/pytorch/mmseg/models/necks/mla_neck.py deleted file mode 100644 index 1513e296..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/mla_neck.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, build_norm_layer - -from ..builder import NECKS - - -class MLAModule(nn.Module): - - def __init__(self, - in_channels=[1024, 1024, 1024, 1024], - out_channels=256, - norm_cfg=None, - act_cfg=None): - super(MLAModule, self).__init__() - self.channel_proj = nn.ModuleList() - for i in range(len(in_channels)): - self.channel_proj.append( - ConvModule( - in_channels=in_channels[i], - out_channels=out_channels, - kernel_size=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.feat_extract = nn.ModuleList() - for i in range(len(in_channels)): - self.feat_extract.append( - ConvModule( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=3, - padding=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, inputs): - - # feat_list -> [p2, p3, p4, p5] - feat_list = [] - for x, conv in zip(inputs, self.channel_proj): - feat_list.append(conv(x)) - - # feat_list -> [p5, p4, p3, p2] - # mid_list -> [m5, m4, m3, m2] - feat_list = feat_list[::-1] - mid_list = [] - for feat in feat_list: - if len(mid_list) == 0: - mid_list.append(feat) - else: - mid_list.append(mid_list[-1] + feat) - - # mid_list -> [m5, m4, m3, m2] - # out_list -> [o2, o3, o4, o5] - out_list = [] - for mid, conv in zip(mid_list, self.feat_extract): - out_list.append(conv(mid)) - - return tuple(out_list) - - -@NECKS.register_module() -class MLANeck(nn.Module): - """Multi-level Feature Aggregation. - - This neck is `The Multi-level Feature Aggregation construction of - SETR `_. - - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - norm_layer (dict): Config dict for input normalization. - Default: norm_layer=dict(type='LN', eps=1e-6, requires_grad=True). - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer in ConvModule. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - norm_layer=dict(type='LN', eps=1e-6, requires_grad=True), - norm_cfg=None, - act_cfg=None): - super(MLANeck, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - - # In order to build general vision transformer backbone, we have to - # move MLA to neck. 
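# Hedged usage sketch (not part of the deleted file above): the JPU neck removed above.
# It resizes the deeper features to the first level's resolution, concatenates them and
# applies the four dilated depthwise-separable convolutions.
import torch
from mmseg.models.necks import JPU

neck = JPU(in_channels=(512, 1024, 2048), mid_channels=512, dilations=(1, 2, 4, 8))
c3 = torch.randn(2, 512, 64, 64)
c4 = torch.randn(2, 1024, 32, 32)
c5 = torch.randn(2, 2048, 16, 16)

outs = neck((c3, c4, c5))
# outs[0] is c3, outs[1] is c4, outs[2] is the fused map of shape (2, 4 * 512, 64, 64)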
- self.norm = nn.ModuleList([ - build_norm_layer(norm_layer, in_channels[i])[1] - for i in range(len(in_channels)) - ]) - - self.mla = MLAModule( - in_channels=in_channels, - out_channels=out_channels, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # Convert from nchw to nlc - outs = [] - for i in range(len(inputs)): - x = inputs[i] - n, c, h, w = x.shape - x = x.reshape(n, c, h * w).transpose(2, 1).contiguous() - x = self.norm[i](x) - x = x.transpose(1, 2).reshape(n, c, h, w).contiguous() - outs.append(x) - - outs = self.mla(outs) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/multilevel_neck.py b/cv/3d_detection/paconv/pytorch/mmseg/models/necks/multilevel_neck.py deleted file mode 100644 index 5151f876..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/necks/multilevel_neck.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, xavier_init - -from mmseg.ops import resize -from ..builder import NECKS - - -@NECKS.register_module() -class MultiLevelNeck(nn.Module): - """MultiLevelNeck. - - A neck structure connect vit backbone and decoder_heads. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - scales (List[float]): Scale factors for each input feature map. - Default: [0.5, 1, 2, 4] - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer in ConvModule. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - scales=[0.5, 1, 2, 4], - norm_cfg=None, - act_cfg=None): - super(MultiLevelNeck, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.scales = scales - self.num_outs = len(scales) - self.lateral_convs = nn.ModuleList() - self.convs = nn.ModuleList() - for in_channel in in_channels: - self.lateral_convs.append( - ConvModule( - in_channel, - out_channels, - kernel_size=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - for _ in range(self.num_outs): - self.convs.append( - ConvModule( - out_channels, - out_channels, - kernel_size=3, - padding=1, - stride=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - # default init_weights for conv(msra) and norm in ConvModule - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - inputs = [ - lateral_conv(inputs[i]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - # for len(inputs) not equal to self.num_outs - if len(inputs) == 1: - inputs = [inputs[0] for _ in range(self.num_outs)] - outs = [] - for i in range(self.num_outs): - x_resize = resize( - inputs[i], scale_factor=self.scales[i], mode='bilinear') - outs.append(self.convs[i](x_resize)) - return tuple(outs) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/__init__.py deleted file mode 100644 index 387c858b..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base import BaseSegmentor -from .cascade_encoder_decoder import CascadeEncoderDecoder -from .encoder_decoder import EncoderDecoder - -__all__ = ['BaseSegmentor', 'EncoderDecoder', 'CascadeEncoderDecoder'] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/base.py b/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/base.py deleted file mode 100644 index f0f320ff..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/base.py +++ /dev/null @@ -1,277 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import mmcv -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import BaseModule, auto_fp16 - - -class BaseSegmentor(BaseModule, metaclass=ABCMeta): - """Base class for segmentors.""" - - def __init__(self, init_cfg=None): - super(BaseSegmentor, self).__init__(init_cfg) - self.fp16_enabled = False - - @property - def with_neck(self): - """bool: whether the segmentor has neck""" - return hasattr(self, 'neck') and self.neck is not None - - @property - def with_auxiliary_head(self): - """bool: whether the segmentor has auxiliary head""" - return hasattr(self, - 'auxiliary_head') and self.auxiliary_head is not None - - @property - def with_decode_head(self): - """bool: whether the segmentor has decode head""" - return hasattr(self, 'decode_head') and self.decode_head is not None - - @abstractmethod - def extract_feat(self, imgs): - """Placeholder for extract features from images.""" - pass - - @abstractmethod - def encode_decode(self, img, img_metas): - """Placeholder for encode images with backbone and decode into a - semantic segmentation map of the same size as input.""" - pass - - @abstractmethod - def forward_train(self, imgs, img_metas, **kwargs): - """Placeholder for Forward function for training.""" - pass - - @abstractmethod - def simple_test(self, img, img_meta, **kwargs): - """Placeholder for single image test.""" - pass - - @abstractmethod - def aug_test(self, imgs, img_metas, **kwargs): - """Placeholder for augmentation test.""" - pass - - def forward_test(self, imgs, img_metas, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. 
- """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got ' - f'{type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) != ' - f'num of image meta ({len(img_metas)})') - # all images in the same aug batch all of the same ori_shape and pad - # shape - for img_meta in img_metas: - ori_shapes = [_['ori_shape'] for _ in img_meta] - assert all(shape == ori_shapes[0] for shape in ori_shapes) - img_shapes = [_['img_shape'] for _ in img_meta] - assert all(shape == img_shapes[0] for shape in img_shapes) - pad_shapes = [_['pad_shape'] for _ in img_meta] - assert all(shape == pad_shapes[0] for shape in pad_shapes) - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], **kwargs) - else: - return self.aug_test(imgs, img_metas, **kwargs) - - @auto_fp16(apply_to=('img', )) - def forward(self, img, img_metas, return_loss=True, **kwargs): - """Calls either :func:`forward_train` or :func:`forward_test` depending - on whether ``return_loss`` is ``True``. - - Note this setting will change the expected inputs. When - ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor - and List[dict]), and when ``resturn_loss=False``, img and img_meta - should be double nested (i.e. List[Tensor], List[List[dict]]), with - the outer list indicating test time augmentations. - """ - if return_loss: - return self.forward_train(img, img_metas, **kwargs) - else: - return self.forward_test(img, img_metas, **kwargs) - - def train_step(self, data_batch, optimizer, **kwargs): - """The iteration step during training. - - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating is also defined in - this method, such as GAN. - - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. - - Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, - ``num_samples``. - ``loss`` is a tensor for back propagation, which can be a - weighted sum of multiple losses. - ``log_vars`` contains all the variables to be sent to the - logger. - ``num_samples`` indicates the batch size (when the model is - DDP, it means the batch size on each GPU), which is used for - averaging the logs. - """ - losses = self(**data_batch) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, - log_vars=log_vars, - num_samples=len(data_batch['img_metas'])) - - return outputs - - def val_step(self, data_batch, optimizer=None, **kwargs): - """The iteration step during validation. - - This method shares the same signature as :func:`train_step`, but used - during val epochs. Note that the evaluation after training epochs is - not implemented with this method, but an evaluation hook. - """ - losses = self(**data_batch) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, - log_vars=log_vars, - num_samples=len(data_batch['img_metas'])) - - return outputs - - @staticmethod - def _parse_losses(losses): - """Parse the raw outputs (losses) of the network. - - Args: - losses (dict): Raw output of the network, which usually contain - losses and other necessary information. 
- - Returns: - tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor - which may be a weighted sum of all losses, log_vars contains - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - # If the loss_vars has different length, raise assertion error - # to prevent GPUs from infinite waiting. - if dist.is_available() and dist.is_initialized(): - log_var_length = torch.tensor(len(log_vars), device=loss.device) - dist.all_reduce(log_var_length) - message = (f'rank {dist.get_rank()}' + - f' len(log_vars): {len(log_vars)}' + ' keys: ' + - ','.join(log_vars.keys()) + '\n') - assert log_var_length == len(log_vars) * dist.get_world_size(), \ - 'loss log variables are different across GPUs!\n' + message - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def show_result(self, - img, - result, - palette=None, - win_name='', - show=False, - wait_time=0, - out_file=None, - opacity=0.5): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (Tensor): The semantic segmentation results to draw over - `img`. - palette (list[list[int]]] | np.ndarray | None): The palette of - segmentation map. If None is given, random palette will be - generated. Default: None - win_name (str): The window name. - wait_time (int): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. 
- Returns: - img (Tensor): Only if not `show` or `out_file` - """ - img = mmcv.imread(img) - img = img.copy() - seg = result[0] - if palette is None: - if self.PALETTE is None: - palette = np.random.randint( - 0, 255, size=(len(self.CLASSES), 3)) - else: - palette = self.PALETTE - palette = np.array(palette) - assert palette.shape[0] == len(self.CLASSES) - assert palette.shape[1] == 3 - assert len(palette.shape) == 2 - assert 0 < opacity <= 1.0 - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - # convert to BGR - color_seg = color_seg[..., ::-1] - - img = img * (1 - opacity) + color_seg * opacity - img = img.astype(np.uint8) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - - if show: - mmcv.imshow(img, win_name, wait_time) - if out_file is not None: - mmcv.imwrite(img, out_file) - - if not (show or out_file): - warnings.warn('show==False and out_file is not specified, only ' - 'result image will be returned') - return img diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/cascade_encoder_decoder.py b/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/cascade_encoder_decoder.py deleted file mode 100644 index 7f9f9006..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/cascade_encoder_decoder.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from torch import nn - -from mmseg.core import add_prefix -from mmseg.ops import resize -from .. import builder -from ..builder import SEGMENTORS -from .encoder_decoder import EncoderDecoder - - -@SEGMENTORS.register_module() -class CascadeEncoderDecoder(EncoderDecoder): - """Cascade Encoder Decoder segmentors. - - CascadeEncoderDecoder almost the same as EncoderDecoder, while decoders of - CascadeEncoderDecoder are cascaded. The output of previous decoder_head - will be the input of next decoder_head. 
- """ - - def __init__(self, - num_stages, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - self.num_stages = num_stages - super(CascadeEncoderDecoder, self).__init__( - backbone=backbone, - decode_head=decode_head, - neck=neck, - auxiliary_head=auxiliary_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - assert isinstance(decode_head, list) - assert len(decode_head) == self.num_stages - self.decode_head = nn.ModuleList() - for i in range(self.num_stages): - self.decode_head.append(builder.build_head(decode_head[i])) - self.align_corners = self.decode_head[-1].align_corners - self.num_classes = self.decode_head[-1].num_classes - - def encode_decode(self, img, img_metas): - """Encode images with backbone and decode into a semantic segmentation - map of the same size as input.""" - x = self.extract_feat(img) - out = self.decode_head[0].forward_test(x, img_metas, self.test_cfg) - for i in range(1, self.num_stages): - out = self.decode_head[i].forward_test(x, out, img_metas, - self.test_cfg) - out = resize( - input=out, - size=img.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - return out - - def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - - loss_decode = self.decode_head[0].forward_train( - x, img_metas, gt_semantic_seg, self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode_0')) - - for i in range(1, self.num_stages): - # forward test again, maybe unnecessary for most methods. - prev_outputs = self.decode_head[i - 1].forward_test( - x, img_metas, self.test_cfg) - loss_decode = self.decode_head[i].forward_train( - x, prev_outputs, img_metas, gt_semantic_seg, self.train_cfg) - losses.update(add_prefix(loss_decode, f'decode_{i}')) - - return losses diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/encoder_decoder.py b/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/encoder_decoder.py deleted file mode 100644 index 72467b46..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/segmentors/encoder_decoder.py +++ /dev/null @@ -1,284 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from mmseg.core import add_prefix -from mmseg.ops import resize -from .. import builder -from ..builder import SEGMENTORS -from .base import BaseSegmentor - - -@SEGMENTORS.register_module() -class EncoderDecoder(BaseSegmentor): - """Encoder Decoder segmentors. - - EncoderDecoder typically consists of backbone, decode_head, auxiliary_head. - Note that auxiliary_head is only used for deep supervision during training, - which could be dumped during inference. 
- """ - - def __init__(self, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(EncoderDecoder, self).__init__(init_cfg) - if pretrained is not None: - assert backbone.get('pretrained') is None, \ - 'both backbone and segmentor set pretrained weight' - backbone.pretrained = pretrained - self.backbone = builder.build_backbone(backbone) - if neck is not None: - self.neck = builder.build_neck(neck) - self._init_decode_head(decode_head) - self._init_auxiliary_head(auxiliary_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - assert self.with_decode_head - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - self.decode_head = builder.build_head(decode_head) - self.align_corners = self.decode_head.align_corners - self.num_classes = self.decode_head.num_classes - - def _init_auxiliary_head(self, auxiliary_head): - """Initialize ``auxiliary_head``""" - if auxiliary_head is not None: - if isinstance(auxiliary_head, list): - self.auxiliary_head = nn.ModuleList() - for head_cfg in auxiliary_head: - self.auxiliary_head.append(builder.build_head(head_cfg)) - else: - self.auxiliary_head = builder.build_head(auxiliary_head) - - def extract_feat(self, img): - """Extract features from images.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def encode_decode(self, img, img_metas): - """Encode images with backbone and decode into a semantic segmentation - map of the same size as input.""" - x = self.extract_feat(img) - out = self._decode_head_forward_test(x, img_metas) - out = resize( - input=out, - size=img.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - return out - - def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - loss_decode = self.decode_head.forward_train(x, img_metas, - gt_semantic_seg, - self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode')) - return losses - - def _decode_head_forward_test(self, x, img_metas): - """Run forward function and calculate loss for decode head in - inference.""" - seg_logits = self.decode_head.forward_test(x, img_metas, self.test_cfg) - return seg_logits - - def _auxiliary_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for auxiliary head in - training.""" - losses = dict() - if isinstance(self.auxiliary_head, nn.ModuleList): - for idx, aux_head in enumerate(self.auxiliary_head): - loss_aux = aux_head.forward_train(x, img_metas, - gt_semantic_seg, - self.train_cfg) - losses.update(add_prefix(loss_aux, f'aux_{idx}')) - else: - loss_aux = self.auxiliary_head.forward_train( - x, img_metas, gt_semantic_seg, self.train_cfg) - losses.update(add_prefix(loss_aux, 'aux')) - - return losses - - def forward_dummy(self, img): - """Dummy forward function.""" - seg_logit = self.encode_decode(img, None) - - return seg_logit - - def forward_train(self, img, img_metas, gt_semantic_seg): - """Forward function for training. - - Args: - img (Tensor): Input images. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. 
- gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - - x = self.extract_feat(img) - - losses = dict() - - loss_decode = self._decode_head_forward_train(x, img_metas, - gt_semantic_seg) - losses.update(loss_decode) - - if self.with_auxiliary_head: - loss_aux = self._auxiliary_head_forward_train( - x, img_metas, gt_semantic_seg) - losses.update(loss_aux) - - return losses - - # TODO refactor - def slide_inference(self, img, img_meta, rescale): - """Inference by sliding-window with overlap. - - If h_crop > h_img or w_crop > w_img, the small patch will be used to - decode without padding. - """ - - h_stride, w_stride = self.test_cfg.stride - h_crop, w_crop = self.test_cfg.crop_size - batch_size, _, h_img, w_img = img.size() - num_classes = self.num_classes - h_grids = max(h_img - h_crop + h_stride - 1, 0) // h_stride + 1 - w_grids = max(w_img - w_crop + w_stride - 1, 0) // w_stride + 1 - preds = img.new_zeros((batch_size, num_classes, h_img, w_img)) - count_mat = img.new_zeros((batch_size, 1, h_img, w_img)) - for h_idx in range(h_grids): - for w_idx in range(w_grids): - y1 = h_idx * h_stride - x1 = w_idx * w_stride - y2 = min(y1 + h_crop, h_img) - x2 = min(x1 + w_crop, w_img) - y1 = max(y2 - h_crop, 0) - x1 = max(x2 - w_crop, 0) - crop_img = img[:, :, y1:y2, x1:x2] - crop_seg_logit = self.encode_decode(crop_img, img_meta) - preds += F.pad(crop_seg_logit, - (int(x1), int(preds.shape[3] - x2), int(y1), - int(preds.shape[2] - y2))) - - count_mat[:, :, y1:y2, x1:x2] += 1 - assert (count_mat == 0).sum() == 0 - if torch.onnx.is_in_onnx_export(): - # cast count_mat to constant while exporting to ONNX - count_mat = torch.from_numpy( - count_mat.cpu().detach().numpy()).to(device=img.device) - preds = preds / count_mat - if rescale: - preds = resize( - preds, - size=img_meta[0]['ori_shape'][:2], - mode='bilinear', - align_corners=self.align_corners, - warning=False) - return preds - - def whole_inference(self, img, img_meta, rescale): - """Inference with full image.""" - - seg_logit = self.encode_decode(img, img_meta) - if rescale: - # support dynamic shape for onnx - if torch.onnx.is_in_onnx_export(): - size = img.shape[2:] - else: - size = img_meta[0]['ori_shape'][:2] - seg_logit = resize( - seg_logit, - size=size, - mode='bilinear', - align_corners=self.align_corners, - warning=False) - - return seg_logit - - def inference(self, img, img_meta, rescale): - """Inference with slide/whole style. - - Args: - img (Tensor): The input image of shape (N, 3, H, W). - img_meta (dict): Image info dict where each dict has: 'img_shape', - 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - rescale (bool): Whether rescale back to original shape. - - Returns: - Tensor: The output segmentation map. 
- """ - - assert self.test_cfg.mode in ['slide', 'whole'] - ori_shape = img_meta[0]['ori_shape'] - assert all(_['ori_shape'] == ori_shape for _ in img_meta) - if self.test_cfg.mode == 'slide': - seg_logit = self.slide_inference(img, img_meta, rescale) - else: - seg_logit = self.whole_inference(img, img_meta, rescale) - output = F.softmax(seg_logit, dim=1) - flip = img_meta[0]['flip'] - if flip: - flip_direction = img_meta[0]['flip_direction'] - assert flip_direction in ['horizontal', 'vertical'] - if flip_direction == 'horizontal': - output = output.flip(dims=(3, )) - elif flip_direction == 'vertical': - output = output.flip(dims=(2, )) - - return output - - def simple_test(self, img, img_meta, rescale=True): - """Simple test with single image.""" - seg_logit = self.inference(img, img_meta, rescale) - seg_pred = seg_logit.argmax(dim=1) - if torch.onnx.is_in_onnx_export(): - # our inference backend only support 4D output - seg_pred = seg_pred.unsqueeze(0) - return seg_pred - seg_pred = seg_pred.cpu().numpy() - # unravel batch dim - seg_pred = list(seg_pred) - return seg_pred - - def aug_test(self, imgs, img_metas, rescale=True): - """Test with augmentations. - - Only rescale=True is supported. - """ - # aug_test rescale all imgs back to ori_shape for now - assert rescale - # to save memory, we get augmented seg logit inplace - seg_logit = self.inference(imgs[0], img_metas[0], rescale) - for i in range(1, len(imgs)): - cur_seg_logit = self.inference(imgs[i], img_metas[i], rescale) - seg_logit += cur_seg_logit - seg_logit /= len(imgs) - seg_pred = seg_logit.argmax(dim=1) - seg_pred = seg_pred.cpu().numpy() - # unravel batch dim - seg_pred = list(seg_pred) - return seg_pred diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/models/utils/__init__.py deleted file mode 100644 index 2417c518..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -from .embed import PatchEmbed -from .inverted_residual import InvertedResidual, InvertedResidualV3 -from .make_divisible import make_divisible -from .res_layer import ResLayer -from .se_layer import SELayer -from .self_attention_block import SelfAttentionBlock -from .shape_convert import nchw_to_nlc, nlc_to_nchw -from .up_conv_block import UpConvBlock - -__all__ = [ - 'ResLayer', 'SelfAttentionBlock', 'make_divisible', 'InvertedResidual', - 'UpConvBlock', 'InvertedResidualV3', 'SELayer', 'PatchEmbed', - 'nchw_to_nlc', 'nlc_to_nchw' -] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/embed.py b/cv/3d_detection/paconv/pytorch/mmseg/models/utils/embed.py deleted file mode 100644 index 1515675e..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/embed.py +++ /dev/null @@ -1,330 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -from typing import Sequence - -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner.base_module import BaseModule -from mmcv.utils import to_2tuple - - -class AdaptivePadding(nn.Module): - """Applies padding to input (if needed) so that input can get fully covered - by filter you specified. It support two modes "same" and "corner". The - "same" mode is same with "SAME" padding mode in TensorFlow, pad zero around - input. The "corner" mode would pad zero to bottom right. - - Args: - kernel_size (int | tuple): Size of the kernel: - stride (int | tuple): Stride of the filter. 
Default: 1: - dilation (int | tuple): Spacing between kernel elements. - Default: 1. - padding (str): Support "same" and "corner", "corner" mode - would pad zero to bottom right, and "same" mode would - pad zero around input. Default: "corner". - Example: - >>> kernel_size = 16 - >>> stride = 16 - >>> dilation = 1 - >>> input = torch.rand(1, 1, 15, 17) - >>> adap_pad = AdaptivePadding( - >>> kernel_size=kernel_size, - >>> stride=stride, - >>> dilation=dilation, - >>> padding="corner") - >>> out = adap_pad(input) - >>> assert (out.shape[2], out.shape[3]) == (16, 32) - >>> input = torch.rand(1, 1, 16, 17) - >>> out = adap_pad(input) - >>> assert (out.shape[2], out.shape[3]) == (16, 32) - """ - - def __init__(self, kernel_size=1, stride=1, dilation=1, padding='corner'): - - super(AdaptivePadding, self).__init__() - - assert padding in ('same', 'corner') - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - self.padding = padding - self.kernel_size = kernel_size - self.stride = stride - self.dilation = dilation - - def get_pad_shape(self, input_shape): - input_h, input_w = input_shape - kernel_h, kernel_w = self.kernel_size - stride_h, stride_w = self.stride - output_h = math.ceil(input_h / stride_h) - output_w = math.ceil(input_w / stride_w) - pad_h = max((output_h - 1) * stride_h + - (kernel_h - 1) * self.dilation[0] + 1 - input_h, 0) - pad_w = max((output_w - 1) * stride_w + - (kernel_w - 1) * self.dilation[1] + 1 - input_w, 0) - return pad_h, pad_w - - def forward(self, x): - pad_h, pad_w = self.get_pad_shape(x.size()[-2:]) - if pad_h > 0 or pad_w > 0: - if self.padding == 'corner': - x = F.pad(x, [0, pad_w, 0, pad_h]) - elif self.padding == 'same': - x = F.pad(x, [ - pad_w // 2, pad_w - pad_w // 2, pad_h // 2, - pad_h - pad_h // 2 - ]) - return x - - -class PatchEmbed(BaseModule): - """Image to Patch Embedding. - - We use a conv layer to implement PatchEmbed. - - Args: - in_channels (int): The num of input channels. Default: 3 - embed_dims (int): The dimensions of embedding. Default: 768 - conv_type (str): The config dict for embedding - conv layer type selection. Default: "Conv2d". - kernel_size (int): The kernel_size of embedding conv. Default: 16. - stride (int, optional): The slide stride of embedding conv. - Default: None (Would be set as `kernel_size`). - padding (int | tuple | string ): The padding length of - embedding conv. When it is a string, it means the mode - of adaptive padding, support "same" and "corner" now. - Default: "corner". - dilation (int): The dilation rate of embedding conv. Default: 1. - bias (bool): Bias of embed conv. Default: True. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - input_size (int | tuple | None): The size of input, which will be - used to calculate the out size. Only work when `dynamic_size` - is False. Default: None. - init_cfg (`mmcv.ConfigDict`, optional): The Config for initialization. - Default: None. 
- """ - - def __init__(self, - in_channels=3, - embed_dims=768, - conv_type='Conv2d', - kernel_size=16, - stride=None, - padding='corner', - dilation=1, - bias=True, - norm_cfg=None, - input_size=None, - init_cfg=None): - super(PatchEmbed, self).__init__(init_cfg=init_cfg) - - self.embed_dims = embed_dims - if stride is None: - stride = kernel_size - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - if isinstance(padding, str): - self.adap_padding = AdaptivePadding( - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=padding) - # disable the padding of conv - padding = 0 - else: - self.adap_padding = None - padding = to_2tuple(padding) - - self.projection = build_conv_layer( - dict(type=conv_type), - in_channels=in_channels, - out_channels=embed_dims, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - bias=bias) - - if norm_cfg is not None: - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - else: - self.norm = None - - if input_size: - input_size = to_2tuple(input_size) - # `init_out_size` would be used outside to - # calculate the num_patches - # when `use_abs_pos_embed` outside - self.init_input_size = input_size - if self.adap_padding: - pad_h, pad_w = self.adap_padding.get_pad_shape(input_size) - input_h, input_w = input_size - input_h = input_h + pad_h - input_w = input_w + pad_w - input_size = (input_h, input_w) - - # https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html - h_out = (input_size[0] + 2 * padding[0] - dilation[0] * - (kernel_size[0] - 1) - 1) // stride[0] + 1 - w_out = (input_size[1] + 2 * padding[1] - dilation[1] * - (kernel_size[1] - 1) - 1) // stride[1] + 1 - self.init_out_size = (h_out, w_out) - else: - self.init_input_size = None - self.init_out_size = None - - def forward(self, x): - """ - Args: - x (Tensor): Has shape (B, C, H, W). In most case, C is 3. - - Returns: - tuple: Contains merged results and its spatial shape. - - - x (Tensor): Has shape (B, out_h * out_w, embed_dims) - - out_size (tuple[int]): Spatial shape of x, arrange as - (out_h, out_w). - """ - - if self.adap_padding: - x = self.adap_padding(x) - - x = self.projection(x) - out_size = (x.shape[2], x.shape[3]) - x = x.flatten(2).transpose(1, 2) - if self.norm is not None: - x = self.norm(x) - return x, out_size - - -class PatchMerging(BaseModule): - """Merge patch feature map. - - This layer groups feature map by kernel_size, and applies norm and linear - layers to the grouped feature map. Our implementation uses `nn.Unfold` to - merge patch, which is about 25% faster than original implementation. - Instead, we need to modify pretrained models for compatibility. - - Args: - in_channels (int): The num of input channels. - out_channels (int): The num of output channels. - kernel_size (int | tuple, optional): the kernel size in the unfold - layer. Defaults to 2. - stride (int | tuple, optional): the stride of the sliding blocks in the - unfold layer. Default: None. (Would be set as `kernel_size`) - padding (int | tuple | string ): The padding length of - embedding conv. When it is a string, it means the mode - of adaptive padding, support "same" and "corner" now. - Default: "corner". - dilation (int | tuple, optional): dilation parameter in the unfold - layer. Default: 1. - bias (bool, optional): Whether to add bias in linear layer or not. - Defaults: False. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: dict(type='LN'). 
- init_cfg (dict, optional): The extra config for initialization. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=2, - stride=None, - padding='corner', - dilation=1, - bias=False, - norm_cfg=dict(type='LN'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - if stride: - stride = stride - else: - stride = kernel_size - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - if isinstance(padding, str): - self.adap_padding = AdaptivePadding( - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=padding) - # disable the padding of unfold - padding = 0 - else: - self.adap_padding = None - - padding = to_2tuple(padding) - self.sampler = nn.Unfold( - kernel_size=kernel_size, - dilation=dilation, - padding=padding, - stride=stride) - - sample_dim = kernel_size[0] * kernel_size[1] * in_channels - - if norm_cfg is not None: - self.norm = build_norm_layer(norm_cfg, sample_dim)[1] - else: - self.norm = None - - self.reduction = nn.Linear(sample_dim, out_channels, bias=bias) - - def forward(self, x, input_size): - """ - Args: - x (Tensor): Has shape (B, H*W, C_in). - input_size (tuple[int]): The spatial shape of x, arrange as (H, W). - Default: None. - - Returns: - tuple: Contains merged results and its spatial shape. - - - x (Tensor): Has shape (B, Merged_H * Merged_W, C_out) - - out_size (tuple[int]): Spatial shape of x, arrange as - (Merged_H, Merged_W). - """ - B, L, C = x.shape - assert isinstance(input_size, Sequence), f'Expect ' \ - f'input_size is ' \ - f'`Sequence` ' \ - f'but get {input_size}' - - H, W = input_size - assert L == H * W, 'input feature has wrong size' - - x = x.view(B, H, W, C).permute([0, 3, 1, 2]) # B, C, H, W - # Use nn.Unfold to merge patch. About 25% faster than original method, - # but need to modify pretrained model for compatibility - - if self.adap_padding: - x = self.adap_padding(x) - H, W = x.shape[-2:] - - x = self.sampler(x) - # if kernel_size=2 and stride=2, x should has shape (B, 4*C, H/2*W/2) - - out_h = (H + 2 * self.sampler.padding[0] - self.sampler.dilation[0] * - (self.sampler.kernel_size[0] - 1) - - 1) // self.sampler.stride[0] + 1 - out_w = (W + 2 * self.sampler.padding[1] - self.sampler.dilation[1] * - (self.sampler.kernel_size[1] - 1) - - 1) // self.sampler.stride[1] + 1 - - output_size = (out_h, out_w) - x = x.transpose(1, 2) # B, H/2*W/2, 4*C - x = self.norm(x) if self.norm else x - x = self.reduction(x) - return x, output_size diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/inverted_residual.py b/cv/3d_detection/paconv/pytorch/mmseg/models/utils/inverted_residual.py deleted file mode 100644 index c9cda768..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/inverted_residual.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule -from torch import nn -from torch.utils import checkpoint as cp - -from .se_layer import SELayer - - -class InvertedResidual(nn.Module): - """InvertedResidual block for MobileNetV2. - - Args: - in_channels (int): The input channels of the InvertedResidual block. - out_channels (int): The output channels of the InvertedResidual block. - stride (int): Stride of the middle (first) 3x3 convolution. - expand_ratio (int): Adjusts number of channels of the hidden layer - in InvertedResidual by this amount. - dilation (int): Dilation rate of depthwise conv. 
Default: 1 - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU6'). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, - in_channels, - out_channels, - stride, - expand_ratio, - dilation=1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU6'), - with_cp=False, - **kwargs): - super(InvertedResidual, self).__init__() - self.stride = stride - assert stride in [1, 2], f'stride must in [1, 2]. ' \ - f'But received {stride}.' - self.with_cp = with_cp - self.use_res_connect = self.stride == 1 and in_channels == out_channels - hidden_dim = int(round(in_channels * expand_ratio)) - - layers = [] - if expand_ratio != 1: - layers.append( - ConvModule( - in_channels=in_channels, - out_channels=hidden_dim, - kernel_size=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - **kwargs)) - layers.extend([ - ConvModule( - in_channels=hidden_dim, - out_channels=hidden_dim, - kernel_size=3, - stride=stride, - padding=dilation, - dilation=dilation, - groups=hidden_dim, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - **kwargs), - ConvModule( - in_channels=hidden_dim, - out_channels=out_channels, - kernel_size=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None, - **kwargs) - ]) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - - def _inner_forward(x): - if self.use_res_connect: - return x + self.conv(x) - else: - return self.conv(x) - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class InvertedResidualV3(nn.Module): - """Inverted Residual Block for MobileNetV3. - - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - mid_channels (int): The input channels of the depthwise convolution. - kernel_size (int): The kernel size of the depthwise convolution. - Default: 3. - stride (int): The stride of the depthwise convolution. Default: 1. - se_cfg (dict): Config dict for se layer. Default: None, which means no - se layer. - with_expand_conv (bool): Use expand conv or not. If set False, - mid_channels must be the same with in_channels. Default: True. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - - Returns: - Tensor: The output tensor. 
- """ - - def __init__(self, - in_channels, - out_channels, - mid_channels, - kernel_size=3, - stride=1, - se_cfg=None, - with_expand_conv=True, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - with_cp=False): - super(InvertedResidualV3, self).__init__() - self.with_res_shortcut = (stride == 1 and in_channels == out_channels) - assert stride in [1, 2] - self.with_cp = with_cp - self.with_se = se_cfg is not None - self.with_expand_conv = with_expand_conv - - if self.with_se: - assert isinstance(se_cfg, dict) - if not self.with_expand_conv: - assert mid_channels == in_channels - - if self.with_expand_conv: - self.expand_conv = ConvModule( - in_channels=in_channels, - out_channels=mid_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.depthwise_conv = ConvModule( - in_channels=mid_channels, - out_channels=mid_channels, - kernel_size=kernel_size, - stride=stride, - padding=kernel_size // 2, - groups=mid_channels, - conv_cfg=dict( - type='Conv2dAdaptivePadding') if stride == 2 else conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - if self.with_se: - self.se = SELayer(**se_cfg) - - self.linear_conv = ConvModule( - in_channels=mid_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, x): - - def _inner_forward(x): - out = x - - if self.with_expand_conv: - out = self.expand_conv(out) - - out = self.depthwise_conv(out) - - if self.with_se: - out = self.se(out) - - out = self.linear_conv(out) - - if self.with_res_shortcut: - return x + out - else: - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/make_divisible.py b/cv/3d_detection/paconv/pytorch/mmseg/models/utils/make_divisible.py deleted file mode 100644 index ed42c2ee..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/make_divisible.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -def make_divisible(value, divisor, min_value=None, min_ratio=0.9): - """Make divisible function. - - This function rounds the channel number to the nearest value that can be - divisible by the divisor. It is taken from the original tf repo. It ensures - that all layers have a channel number that is divisible by divisor. It can - be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py # noqa - - Args: - value (int): The original channel number. - divisor (int): The divisor to fully divide the channel number. - min_value (int): The minimum value of the output channel. - Default: None, means that the minimum value equal to the divisor. - min_ratio (float): The minimum ratio of the rounded channel number to - the original channel number. Default: 0.9. - - Returns: - int: The modified output channel number. - """ - - if min_value is None: - min_value = divisor - new_value = max(min_value, int(value + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than (1-min_ratio). 
- if new_value < min_ratio * value: - new_value += divisor - return new_value diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/res_layer.py b/cv/3d_detection/paconv/pytorch/mmseg/models/utils/res_layer.py deleted file mode 100644 index 190a0c5d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/res_layer.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import Sequential -from torch import nn as nn - - -class ResLayer(Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - multi_grid (int | None): Multi grid dilation rates of last - stage. Default: None - contract_dilation (bool): Whether contract first dilation of each layer - Default: False - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - dilation=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - multi_grid=None, - contract_dilation=False, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if multi_grid is None: - if dilation > 1 and contract_dilation: - first_dilation = dilation // 2 - else: - first_dilation = dilation - else: - first_dilation = multi_grid[0] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - dilation=first_dilation, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for i in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - dilation=dilation if multi_grid is None else multi_grid[i], - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/se_layer.py b/cv/3d_detection/paconv/pytorch/mmseg/models/utils/se_layer.py deleted file mode 100644 index 16f52aa5..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/se_layer.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn -from mmcv.cnn import ConvModule - -from .make_divisible import make_divisible - - -class SELayer(nn.Module): - """Squeeze-and-Excitation Module. - - Args: - channels (int): The input (and output) channels of the SE layer. - ratio (int): Squeeze ratio in SELayer, the intermediate channel will be - ``int(channels/ratio)``. Default: 16. - conv_cfg (None or dict): Config dict for convolution layer. 
- Default: None, which means using conv2d. - act_cfg (dict or Sequence[dict]): Config dict for activation layer. - If act_cfg is a dict, two activation layers will be configured - by this dict. If act_cfg is a sequence of dicts, the first - activation layer will be configured by the first dict and the - second activation layer will be configured by the second dict. - Default: (dict(type='ReLU'), dict(type='HSigmoid', bias=3.0, - divisor=6.0)). - """ - - def __init__(self, - channels, - ratio=16, - conv_cfg=None, - act_cfg=(dict(type='ReLU'), - dict(type='HSigmoid', bias=3.0, divisor=6.0))): - super(SELayer, self).__init__() - if isinstance(act_cfg, dict): - act_cfg = (act_cfg, act_cfg) - assert len(act_cfg) == 2 - assert mmcv.is_tuple_of(act_cfg, dict) - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.conv1 = ConvModule( - in_channels=channels, - out_channels=make_divisible(channels // ratio, 8), - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[0]) - self.conv2 = ConvModule( - in_channels=make_divisible(channels // ratio, 8), - out_channels=channels, - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[1]) - - def forward(self, x): - out = self.global_avgpool(x) - out = self.conv1(out) - out = self.conv2(out) - return x * out diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/self_attention_block.py b/cv/3d_detection/paconv/pytorch/mmseg/models/utils/self_attention_block.py deleted file mode 100644 index c945fa71..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/self_attention_block.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule, constant_init -from torch import nn as nn -from torch.nn import functional as F - - -class SelfAttentionBlock(nn.Module): - """General self-attention block/non-local block. - - Please refer to https://arxiv.org/abs/1706.03762 for details about key, - query and value. - - Args: - key_in_channels (int): Input channels of key feature. - query_in_channels (int): Input channels of query feature. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - share_key_query (bool): Whether share projection weight between key - and query projection. - query_downsample (nn.Module): Query downsample module. - key_downsample (nn.Module): Key downsample module. - key_query_num_convs (int): Number of convs for key/query projection. - value_num_convs (int): Number of convs for value projection. - matmul_norm (bool): Whether normalize attention map with sqrt of - channels - with_out (bool): Whether use out projection. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. 
- """ - - def __init__(self, key_in_channels, query_in_channels, channels, - out_channels, share_key_query, query_downsample, - key_downsample, key_query_num_convs, value_out_num_convs, - key_query_norm, value_out_norm, matmul_norm, with_out, - conv_cfg, norm_cfg, act_cfg): - super(SelfAttentionBlock, self).__init__() - if share_key_query: - assert key_in_channels == query_in_channels - self.key_in_channels = key_in_channels - self.query_in_channels = query_in_channels - self.out_channels = out_channels - self.channels = channels - self.share_key_query = share_key_query - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.key_project = self.build_project( - key_in_channels, - channels, - num_convs=key_query_num_convs, - use_conv_module=key_query_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - if share_key_query: - self.query_project = self.key_project - else: - self.query_project = self.build_project( - query_in_channels, - channels, - num_convs=key_query_num_convs, - use_conv_module=key_query_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.value_project = self.build_project( - key_in_channels, - channels if with_out else out_channels, - num_convs=value_out_num_convs, - use_conv_module=value_out_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - if with_out: - self.out_project = self.build_project( - channels, - out_channels, - num_convs=value_out_num_convs, - use_conv_module=value_out_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - else: - self.out_project = None - - self.query_downsample = query_downsample - self.key_downsample = key_downsample - self.matmul_norm = matmul_norm - - self.init_weights() - - def init_weights(self): - """Initialize weight of later layer.""" - if self.out_project is not None: - if not isinstance(self.out_project, ConvModule): - constant_init(self.out_project, 0) - - def build_project(self, in_channels, channels, num_convs, use_conv_module, - conv_cfg, norm_cfg, act_cfg): - """Build projection layer for key/query/value/out.""" - if use_conv_module: - convs = [ - ConvModule( - in_channels, - channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - ] - for _ in range(num_convs - 1): - convs.append( - ConvModule( - channels, - channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - else: - convs = [nn.Conv2d(in_channels, channels, 1)] - for _ in range(num_convs - 1): - convs.append(nn.Conv2d(channels, channels, 1)) - if len(convs) > 1: - convs = nn.Sequential(*convs) - else: - convs = convs[0] - return convs - - def forward(self, query_feats, key_feats): - """Forward function.""" - batch_size = query_feats.size(0) - query = self.query_project(query_feats) - if self.query_downsample is not None: - query = self.query_downsample(query) - query = query.reshape(*query.shape[:2], -1) - query = query.permute(0, 2, 1).contiguous() - - key = self.key_project(key_feats) - value = self.value_project(key_feats) - if self.key_downsample is not None: - key = self.key_downsample(key) - value = self.key_downsample(value) - key = key.reshape(*key.shape[:2], -1) - value = value.reshape(*value.shape[:2], -1) - value = value.permute(0, 2, 1).contiguous() - - sim_map = torch.matmul(query, key) - if self.matmul_norm: - sim_map = (self.channels**-.5) * sim_map - sim_map = F.softmax(sim_map, dim=-1) - - context = torch.matmul(sim_map, value) - context = context.permute(0, 2, 1).contiguous() - context = 
context.reshape(batch_size, -1, *query_feats.shape[2:]) - if self.out_project is not None: - context = self.out_project(context) - return context diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/shape_convert.py b/cv/3d_detection/paconv/pytorch/mmseg/models/utils/shape_convert.py deleted file mode 100644 index 0677348c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/shape_convert.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -def nlc_to_nchw(x, hw_shape): - """Convert [N, L, C] shape tensor to [N, C, H, W] shape tensor. - - Args: - x (Tensor): The input tensor of shape [N, L, C] before conversion. - hw_shape (Sequence[int]): The height and width of output feature map. - - Returns: - Tensor: The output tensor of shape [N, C, H, W] after conversion. - """ - H, W = hw_shape - assert len(x.shape) == 3 - B, L, C = x.shape - assert L == H * W, 'The seq_len doesn\'t match H, W' - return x.transpose(1, 2).reshape(B, C, H, W) - - -def nchw_to_nlc(x): - """Flatten [N, C, H, W] shape tensor to [N, L, C] shape tensor. - - Args: - x (Tensor): The input tensor of shape [N, C, H, W] before conversion. - - Returns: - Tensor: The output tensor of shape [N, L, C] after conversion. - """ - assert len(x.shape) == 4 - return x.flatten(2).transpose(1, 2).contiguous() diff --git a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/up_conv_block.py b/cv/3d_detection/paconv/pytorch/mmseg/models/utils/up_conv_block.py deleted file mode 100644 index d8396d9c..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/models/utils/up_conv_block.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, build_upsample_layer - - -class UpConvBlock(nn.Module): - """Upsample convolution block in decoder for UNet. - - This upsample convolution block consists of one upsample module - followed by one convolution block. The upsample module expands the - high-level low-resolution feature map and the convolution block fuses - the upsampled high-level low-resolution feature map and the low-level - high-resolution feature map from encoder. - - Args: - conv_block (nn.Sequential): Sequential of convolutional layers. - in_channels (int): Number of input channels of the high-level - skip_channels (int): Number of input channels of the low-level - high-resolution feature map from encoder. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers in the conv_block. - Default: 2. - stride (int): Stride of convolutional layer in conv_block. Default: 1. - dilation (int): Dilation rate of convolutional layer in conv_block. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). If the size of - high-level feature map is the same as that of skip feature map - (low-level feature map from encoder), it does not need upsample the - high-level feature map and the upsample_cfg is None. - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. 
- plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - conv_block, - in_channels, - skip_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - dcn=None, - plugins=None): - super(UpConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.conv_block = conv_block( - in_channels=2 * skip_channels, - out_channels=out_channels, - num_convs=num_convs, - stride=stride, - dilation=dilation, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None) - if upsample_cfg is not None: - self.upsample = build_upsample_layer( - cfg=upsample_cfg, - in_channels=in_channels, - out_channels=skip_channels, - with_cp=with_cp, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - else: - self.upsample = ConvModule( - in_channels, - skip_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, skip, x): - """Forward function.""" - - x = self.upsample(x) - out = torch.cat([skip, x], dim=1) - out = self.conv_block(out) - - return out diff --git a/cv/3d_detection/paconv/pytorch/mmseg/ops/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/ops/__init__.py deleted file mode 100644 index bc075cd4..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/ops/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .encoding import Encoding -from .wrappers import Upsample, resize - -__all__ = ['Upsample', 'resize', 'Encoding'] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/ops/encoding.py b/cv/3d_detection/paconv/pytorch/mmseg/ops/encoding.py deleted file mode 100644 index f397cc54..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/ops/encoding.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn -from torch.nn import functional as F - - -class Encoding(nn.Module): - """Encoding Layer: a learnable residual encoder. - - Input is of shape (batch_size, channels, height, width). - Output is of shape (batch_size, num_codes, channels). - - Args: - channels: dimension of the features or feature channels - num_codes: number of code words - """ - - def __init__(self, channels, num_codes): - super(Encoding, self).__init__() - # init codewords and smoothing factor - self.channels, self.num_codes = channels, num_codes - std = 1. 
/ ((num_codes * channels)**0.5) - # [num_codes, channels] - self.codewords = nn.Parameter( - torch.empty(num_codes, channels, - dtype=torch.float).uniform_(-std, std), - requires_grad=True) - # [num_codes] - self.scale = nn.Parameter( - torch.empty(num_codes, dtype=torch.float).uniform_(-1, 0), - requires_grad=True) - - @staticmethod - def scaled_l2(x, codewords, scale): - num_codes, channels = codewords.size() - batch_size = x.size(0) - reshaped_scale = scale.view((1, 1, num_codes)) - expanded_x = x.unsqueeze(2).expand( - (batch_size, x.size(1), num_codes, channels)) - reshaped_codewords = codewords.view((1, 1, num_codes, channels)) - - scaled_l2_norm = reshaped_scale * ( - expanded_x - reshaped_codewords).pow(2).sum(dim=3) - return scaled_l2_norm - - @staticmethod - def aggregate(assignment_weights, x, codewords): - num_codes, channels = codewords.size() - reshaped_codewords = codewords.view((1, 1, num_codes, channels)) - batch_size = x.size(0) - - expanded_x = x.unsqueeze(2).expand( - (batch_size, x.size(1), num_codes, channels)) - encoded_feat = (assignment_weights.unsqueeze(3) * - (expanded_x - reshaped_codewords)).sum(dim=1) - return encoded_feat - - def forward(self, x): - assert x.dim() == 4 and x.size(1) == self.channels - # [batch_size, channels, height, width] - batch_size = x.size(0) - # [batch_size, height x width, channels] - x = x.view(batch_size, self.channels, -1).transpose(1, 2).contiguous() - # assignment_weights: [batch_size, channels, num_codes] - assignment_weights = F.softmax( - self.scaled_l2(x, self.codewords, self.scale), dim=2) - # aggregate - encoded_feat = self.aggregate(assignment_weights, x, self.codewords) - return encoded_feat - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(Nx{self.channels}xHxW =>Nx{self.num_codes}' \ - f'x{self.channels})' - return repr_str diff --git a/cv/3d_detection/paconv/pytorch/mmseg/ops/wrappers.py b/cv/3d_detection/paconv/pytorch/mmseg/ops/wrappers.py deleted file mode 100644 index ce67e4be..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/ops/wrappers.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch.nn as nn -import torch.nn.functional as F - - -def resize(input, - size=None, - scale_factor=None, - mode='nearest', - align_corners=None, - warning=True): - if warning: - if size is not None and align_corners: - input_h, input_w = tuple(int(x) for x in input.shape[2:]) - output_h, output_w = tuple(int(x) for x in size) - if output_h > input_h or output_w > output_h: - if ((output_h > 1 and output_w > 1 and input_h > 1 - and input_w > 1) and (output_h - 1) % (input_h - 1) - and (output_w - 1) % (input_w - 1)): - warnings.warn( - f'When align_corners={align_corners}, ' - 'the output would more aligned if ' - f'input size {(input_h, input_w)} is `x+1` and ' - f'out size {(output_h, output_w)} is `nx+1`') - return F.interpolate(input, size, scale_factor, mode, align_corners) - - -class Upsample(nn.Module): - - def __init__(self, - size=None, - scale_factor=None, - mode='nearest', - align_corners=None): - super(Upsample, self).__init__() - self.size = size - if isinstance(scale_factor, tuple): - self.scale_factor = tuple(float(factor) for factor in scale_factor) - else: - self.scale_factor = float(scale_factor) if scale_factor else None - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - if not self.size: - size = [int(t * self.scale_factor) for t in x.shape[-2:]] - else: - size = self.size - return resize(x, size, None, self.mode, self.align_corners) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/utils/__init__.py b/cv/3d_detection/paconv/pytorch/mmseg/utils/__init__.py deleted file mode 100644 index 3f155805..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .collect_env import collect_env -from .logger import get_root_logger - -__all__ = ['get_root_logger', 'collect_env'] diff --git a/cv/3d_detection/paconv/pytorch/mmseg/utils/collect_env.py b/cv/3d_detection/paconv/pytorch/mmseg/utils/collect_env.py deleted file mode 100644 index 3379ecb0..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/utils/collect_env.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import collect_env as collect_base_env -from mmcv.utils import get_git_hash - -import mmseg - - -def collect_env(): - """Collect the information of the running environments.""" - env_info = collect_base_env() - env_info['MMSegmentation'] = f'{mmseg.__version__}+{get_git_hash()[:7]}' - - return env_info - - -if __name__ == '__main__': - for name, val in collect_env().items(): - print('{}: {}'.format(name, val)) diff --git a/cv/3d_detection/paconv/pytorch/mmseg/utils/logger.py b/cv/3d_detection/paconv/pytorch/mmseg/utils/logger.py deleted file mode 100644 index 0cb3c78d..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/utils/logger.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -from mmcv.utils import get_logger - - -def get_root_logger(log_file=None, log_level=logging.INFO): - """Get the root logger. - - The logger will be initialized if it has not been initialized. By default a - StreamHandler will be added. If `log_file` is specified, a FileHandler will - also be added. The name of the root logger is the top-level package name, - e.g., "mmseg". - - Args: - log_file (str | None): The log filename. If specified, a FileHandler - will be added to the root logger. - log_level (int): The root logger level. 
Note that only the process of - rank 0 is affected, while other processes will set the level to - "Error" and be silent most of the time. - - Returns: - logging.Logger: The root logger. - """ - - logger = get_logger(name='mmseg', log_file=log_file, log_level=log_level) - - return logger diff --git a/cv/3d_detection/paconv/pytorch/mmseg/version.py b/cv/3d_detection/paconv/pytorch/mmseg/version.py deleted file mode 100644 index ffa55d38..00000000 --- a/cv/3d_detection/paconv/pytorch/mmseg/version.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. - -__version__ = '0.20.0' - - -def parse_version_info(version_str): - version_info = [] - for x in version_str.split('.'): - if x.isdigit(): - version_info.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - version_info.append(int(patch_version[0])) - version_info.append(f'rc{patch_version[1]}') - return tuple(version_info) - - -version_info = parse_version_info(__version__) diff --git a/cv/3d_detection/paconv/pytorch/requirements.txt b/cv/3d_detection/paconv/pytorch/requirements.txt deleted file mode 100644 index 6981bd72..00000000 --- a/cv/3d_detection/paconv/pytorch/requirements.txt +++ /dev/null @@ -1,4 +0,0 @@ --r requirements/build.txt --r requirements/optional.txt --r requirements/runtime.txt --r requirements/tests.txt diff --git a/cv/3d_detection/paconv/pytorch/requirements/build.txt b/cv/3d_detection/paconv/pytorch/requirements/build.txt deleted file mode 100644 index e69de29b..00000000 diff --git a/cv/3d_detection/paconv/pytorch/requirements/docs.txt b/cv/3d_detection/paconv/pytorch/requirements/docs.txt deleted file mode 100644 index a31b7716..00000000 --- a/cv/3d_detection/paconv/pytorch/requirements/docs.txt +++ /dev/null @@ -1,8 +0,0 @@ -docutils==0.16.0 -m2r -mistune==0.8.4 -myst-parser --e git+https://github.com/open-mmlab/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme -sphinx==4.0.2 -sphinx-copybutton -sphinx_markdown_tables diff --git a/cv/3d_detection/paconv/pytorch/requirements/mminstall.txt b/cv/3d_detection/paconv/pytorch/requirements/mminstall.txt deleted file mode 100644 index 16a8d8b7..00000000 --- a/cv/3d_detection/paconv/pytorch/requirements/mminstall.txt +++ /dev/null @@ -1,3 +0,0 @@ -mmcv-full>=1.4.8,<=1.6.0 -mmdet>=2.24.0,<=3.0.0 -mmsegmentation>=0.20.0,<=1.0.0 diff --git a/cv/3d_detection/paconv/pytorch/requirements/optional.txt b/cv/3d_detection/paconv/pytorch/requirements/optional.txt deleted file mode 100644 index 84cbfa89..00000000 --- a/cv/3d_detection/paconv/pytorch/requirements/optional.txt +++ /dev/null @@ -1,3 +0,0 @@ -open3d -spconv -waymo-open-dataset-tf-2-1-0==1.2.0 diff --git a/cv/3d_detection/paconv/pytorch/requirements/readthedocs.txt b/cv/3d_detection/paconv/pytorch/requirements/readthedocs.txt deleted file mode 100644 index 3ffe9e47..00000000 --- a/cv/3d_detection/paconv/pytorch/requirements/readthedocs.txt +++ /dev/null @@ -1,5 +0,0 @@ -mmcv>=1.4.8 -mmdet>=2.24.0 -mmsegmentation>=0.20.1 -torch -torchvision diff --git a/cv/3d_detection/paconv/pytorch/requirements/runtime.txt b/cv/3d_detection/paconv/pytorch/requirements/runtime.txt deleted file mode 100644 index 789b822b..00000000 --- a/cv/3d_detection/paconv/pytorch/requirements/runtime.txt +++ /dev/null @@ -1,15 +0,0 @@ -lyft_dataset_sdk -networkx>=2.2,<2.3 -numba==0.53.0 -numpy==1.21 -nuscenes-devkit -plyfile -scikit-image -# by default we also use tensorboard to log results -tensorboard -trimesh>=2.35.39,<2.35.40 -addict -yapf==0.40.1 -terminaltables -prettytable -opencv-python diff --git 
a/cv/3d_detection/paconv/pytorch/requirements/tests.txt b/cv/3d_detection/paconv/pytorch/requirements/tests.txt deleted file mode 100644 index 303cc37d..00000000 --- a/cv/3d_detection/paconv/pytorch/requirements/tests.txt +++ /dev/null @@ -1,13 +0,0 @@ -asynctest -codecov -flake8 -interrogate -isort -# Note: used for kwarray.group_items, this may be ported to mmcv in the future. -kwarray -pytest -pytest-cov -pytest-runner -ubelt -xdoctest >= 0.10.0 -yapf diff --git a/cv/3d_detection/paconv/pytorch/setup.cfg b/cv/3d_detection/paconv/pytorch/setup.cfg deleted file mode 100644 index f6173432..00000000 --- a/cv/3d_detection/paconv/pytorch/setup.cfg +++ /dev/null @@ -1,16 +0,0 @@ -[yapf] -BASED_ON_STYLE = pep8 -BLANK_LINE_BEFORE_NESTED_CLASS_OR_DEF = true -SPLIT_BEFORE_EXPRESSION_AFTER_OPENING_PAREN = true - -[isort] -line_length = 79 -multi_line_output = 0 -extra_standard_library = setuptools -known_first_party = mmdet,mmseg,mmdet3d -known_third_party = cv2,imageio,indoor3d_util,load_scannet_data,lyft_dataset_sdk,m2r,matplotlib,mmcv,nuimages,numba,numpy,nuscenes,pandas,plyfile,pycocotools,pyquaternion,pytest,pytorch_sphinx_theme,recommonmark,requests,scannet_utils,scipy,seaborn,shapely,skimage,sphinx,tensorflow,terminaltables,torch,trimesh,ts,waymo_open_dataset -no_lines_before = STDLIB,LOCALFOLDER -default_section = THIRDPARTY - -[codespell] -ignore-words-list = ans,refridgerator,crate,hist,formating,dout,wan,nd,fo,avod,AVOD diff --git a/cv/3d_detection/paconv/pytorch/setup.py b/cv/3d_detection/paconv/pytorch/setup.py deleted file mode 100755 index 28af491b..00000000 --- a/cv/3d_detection/paconv/pytorch/setup.py +++ /dev/null @@ -1,429 +0,0 @@ -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -import glob -import os -import platform -import re -import warnings -from pkg_resources import DistributionNotFound, get_distribution -from setuptools import find_packages, setup - -EXT_TYPE = '' -try: - import torch - if torch.__version__ == 'parrots': - from parrots.utils.build_extension import BuildExtension - EXT_TYPE = 'parrots' - elif (hasattr(torch, 'is_mlu_available') and torch.is_mlu_available()) or \ - os.getenv('FORCE_MLU', '0') == '1': - from torch_mlu.utils.cpp_extension import BuildExtension - EXT_TYPE = 'pytorch' - else: - from torch.utils.cpp_extension import BuildExtension - EXT_TYPE = 'pytorch' - cmd_class = {'build_ext': BuildExtension} -except ModuleNotFoundError: - cmd_class = {} - print('Skip building ext ops due to the absence of torch.') - - -def choose_requirement(primary, secondary): - """If some version of primary requirement installed, return primary, else - return secondary.""" - try: - name = re.split(r'[!<>=]', primary)[0] - get_distribution(name) - except DistributionNotFound: - return secondary - - return str(primary) - - -def get_version(): - version_file = 'mmcv/version.py' - with open(version_file, 'r', encoding='utf-8') as f: - exec(compile(f.read(), version_file, 'exec')) - version = locals()['__version__'] - local_version_identifier = os.environ.get('MMCV_LOCAL_VERSION_IDENTIFIER', '') - if local_version_identifier != '': - version += '+' + local_version_identifier - return version - - -def parse_requirements(fname='requirements/runtime.txt', with_version=True): - """Parse the package dependencies listed in a requirements file but strips - specific versioning information. 
- - Args: - fname (str): path to requirements file - with_version (bool, default=False): if True include version specs - - Returns: - List[str]: list of requirements items - - CommandLine: - python -c "import setup; print(setup.parse_requirements())" - """ - import sys - from os.path import exists - require_fpath = fname - - def parse_line(line): - """Parse information from a line in a requirements text file.""" - if line.startswith('-r '): - # Allow specifying requirements in other files - target = line.split(' ')[1] - for info in parse_require_file(target): - yield info - else: - info = {'line': line} - if line.startswith('-e '): - info['package'] = line.split('#egg=')[1] - else: - # Remove versioning from the package - pat = '(' + '|'.join(['>=', '==', '>']) + ')' - parts = re.split(pat, line, maxsplit=1) - parts = [p.strip() for p in parts] - - info['package'] = parts[0] - if len(parts) > 1: - op, rest = parts[1:] - if ';' in rest: - # Handle platform specific dependencies - # http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-platform-specific-dependencies - version, platform_deps = map(str.strip, - rest.split(';')) - info['platform_deps'] = platform_deps - else: - version = rest # NOQA - info['version'] = (op, version) - yield info - - def parse_require_file(fpath): - with open(fpath) as f: - for line in f.readlines(): - line = line.strip() - if line and not line.startswith('#'): - yield from parse_line(line) - - def gen_packages_items(): - if exists(require_fpath): - for info in parse_require_file(require_fpath): - parts = [info['package']] - if with_version and 'version' in info: - parts.extend(info['version']) - if not sys.version.startswith('3.4'): - # apparently package_deps are broken in 3.4 - platform_deps = info.get('platform_deps') - if platform_deps is not None: - parts.append(';' + platform_deps) - item = ''.join(parts) - yield item - - packages = list(gen_packages_items()) - return packages - - -install_requires = parse_requirements() - -try: - # OpenCV installed via conda. - import cv2 # NOQA: F401 - major, minor, *rest = cv2.__version__.split('.') - if int(major) < 3: - raise RuntimeError( - f'OpenCV >=3 is required but {cv2.__version__} is installed') -except ImportError: - # If first not installed install second package - CHOOSE_INSTALL_REQUIRES = [('opencv-python-headless>=3', - 'opencv-python>=3')] - for main, secondary in CHOOSE_INSTALL_REQUIRES: - install_requires.append(choose_requirement(main, secondary)) - - -def get_extensions(): - extensions = [] - - if os.getenv('MMCV_WITH_TRT', '0') != '0': - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: ' + \ - 'Custom TensorRT Ops will be deprecated in future. 
' - msg += blue_text + \ - 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - ext_name = 'mmcv._ext_trt' - from torch.utils.cpp_extension import include_paths, library_paths - library_dirs = [] - libraries = [] - include_dirs = [] - tensorrt_path = os.getenv('TENSORRT_DIR', '0') - tensorrt_lib_path = glob.glob( - os.path.join(tensorrt_path, 'targets', '*', 'lib'))[0] - library_dirs += [tensorrt_lib_path] - libraries += ['nvinfer', 'nvparsers', 'nvinfer_plugin'] - libraries += ['cudart'] - define_macros = [] - extra_compile_args = {'cxx': []} - - include_path = os.path.abspath('./mmcv/ops/csrc/common/cuda') - include_trt_path = os.path.abspath('./mmcv/ops/csrc/tensorrt') - include_dirs.append(include_path) - include_dirs.append(include_trt_path) - include_dirs.append(os.path.join(tensorrt_path, 'include')) - include_dirs += include_paths(cuda=True) - - op_files = glob.glob('./mmcv/ops/csrc/tensorrt/plugins/*') - define_macros += [('MMCV_WITH_CUDA', None)] - define_macros += [('MMCV_WITH_TRT', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - # prevent cub/thrust conflict with other python library - # More context See issues #1454 - extra_compile_args['nvcc'] += ['-Xcompiler=-fno-gnu-unique'] - library_dirs += library_paths(cuda=True) - - from setuptools import Extension - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - language='c++', - library_dirs=library_dirs, - libraries=libraries) - extensions.append(ext_ops) - - if os.getenv('MMCV_WITH_OPS', '0') == '0': - return extensions - - if EXT_TYPE == 'parrots': - ext_name = 'mmcv._ext' - from parrots.utils.build_extension import Extension - - # new parrots op impl do not use MMCV_USE_PARROTS - # define_macros = [('MMCV_USE_PARROTS', None)] - define_macros = [] - include_dirs = [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/cuda/*.cu') +\ - glob.glob('./mmcv/ops/csrc/pytorch/cpu/*.cpp') +\ - glob.glob('./mmcv/ops/csrc/parrots/*.cpp') - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/cuda')) - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args = { - 'nvcc': [cuda_args, '-std=c++14'] if cuda_args else ['-std=c++14'], - 'cxx': ['-std=c++14'], - } - if torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1': - define_macros += [('MMCV_WITH_CUDA', None)] - extra_compile_args['nvcc'] += [ - '-D__CUDA_NO_HALF_OPERATORS__', - '-D__CUDA_NO_HALF_CONVERSIONS__', - '-D__CUDA_NO_HALF2_OPERATORS__', - ] - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - cuda=True, - pytorch=True) - extensions.append(ext_ops) - elif EXT_TYPE == 'pytorch': - ext_name = 'mmcv._ext' - from torch.utils.cpp_extension import CppExtension, CUDAExtension - - # prevent ninja from using too many resources - try: - import psutil - num_cpu = len(psutil.Process().cpu_affinity()) - cpu_use = max(4, num_cpu - 1) - except (ModuleNotFoundError, AttributeError): - cpu_use = 4 - - os.environ.setdefault('MAX_JOBS', str(cpu_use)) - define_macros = [] - - # Before PyTorch1.8.0, when compiling CUDA code, `cxx` is a - # required key passed to PyTorch. 
Even if there is no flag passed - # to cxx, users also need to pass an empty list to PyTorch. - # Since PyTorch1.8.0, it has a default value so users do not need - # to pass an empty list anymore. - # More details at https://github.com/pytorch/pytorch/pull/45956 - extra_compile_args = {'cxx': []} - - # Since the PR (https://github.com/open-mmlab/mmcv/pull/1463) uses - # c++14 features, the argument ['std=c++14'] must be added here. - # However, in the windows environment, some standard libraries - # will depend on c++17 or higher. In fact, for the windows - # environment, the compiler will choose the appropriate compiler - # to compile those cpp files, so there is no need to add the - # argument - if platform.system() != 'Windows': - extra_compile_args['cxx'] = ['-std=c++14'] - - include_dirs = [] - - is_rocm_pytorch = False - try: - from torch.utils.cpp_extension import ROCM_HOME - is_rocm_pytorch = True if ((torch.version.hip is not None) and - (ROCM_HOME is not None)) else False - except ImportError: - pass - - if is_rocm_pytorch or torch.cuda.is_available() or os.getenv( - 'FORCE_CUDA', '0') == '1': - if is_rocm_pytorch: - define_macros += [('HIP_DIFF', None)] - define_macros += [('MMCV_WITH_CUDA', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cpu/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cuda/*.cu') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cuda/*.cpp') - extension = CUDAExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/cuda')) - elif (hasattr(torch, 'is_mlu_available') and - torch.is_mlu_available()) or \ - os.getenv('FORCE_MLU', '0') == '1': - from torch_mlu.utils.cpp_extension import MLUExtension - define_macros += [('MMCV_WITH_MLU', None)] - mlu_args = os.getenv('MMCV_MLU_ARGS') - extra_compile_args['cncc'] = [mlu_args] if mlu_args else [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cpu/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/mlu/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/common/mlu/*.mlu') - extension = MLUExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/mlu')) - else: - print(f'Compiling {ext_name} only with CPU') - op_files = glob.glob('./mmcv/ops/csrc/pytorch/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cpu/*.cpp') - extension = CppExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - - # Since the PR (https://github.com/open-mmlab/mmcv/pull/1463) uses - # c++14 features, the argument ['std=c++14'] must be added here. - # However, in the windows environment, some standard libraries - # will depend on c++17 or higher. 
In fact, for the windows - # environment, the compiler will choose the appropriate compiler - # to compile those cpp files, so there is no need to add the - # argument - if 'nvcc' in extra_compile_args and platform.system() != 'Windows': - extra_compile_args['nvcc'] += ['-std=c++14'] - - ext_ops = extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args) - extensions.append(ext_ops) - - if EXT_TYPE == 'pytorch' and os.getenv('MMCV_WITH_ORT', '0') != '0': - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: ' + \ - 'Custom ONNXRuntime Ops will be deprecated in future. ' - msg += blue_text + \ - 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - ext_name = 'mmcv._ext_ort' - import onnxruntime - from torch.utils.cpp_extension import include_paths, library_paths - library_dirs = [] - libraries = [] - include_dirs = [] - ort_path = os.getenv('ONNXRUNTIME_DIR', '0') - library_dirs += [os.path.join(ort_path, 'lib')] - libraries.append('onnxruntime') - define_macros = [] - extra_compile_args = {'cxx': []} - - include_path = os.path.abspath('./mmcv/ops/csrc/onnxruntime') - include_dirs.append(include_path) - include_dirs.append(os.path.join(ort_path, 'include')) - - op_files = glob.glob('./mmcv/ops/csrc/onnxruntime/cpu/*') - if onnxruntime.get_device() == 'GPU' or os.getenv('FORCE_CUDA', - '0') == '1': - define_macros += [('MMCV_WITH_CUDA', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - op_files += glob.glob('./mmcv/ops/csrc/onnxruntime/gpu/*') - include_dirs += include_paths(cuda=True) - library_dirs += library_paths(cuda=True) - else: - include_dirs += include_paths(cuda=False) - library_dirs += library_paths(cuda=False) - - from setuptools import Extension - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - language='c++', - library_dirs=library_dirs, - libraries=libraries) - extensions.append(ext_ops) - - return extensions - - -setup( - name='mmcv' if os.getenv('MMCV_WITH_OPS', '0') == '0' else 'mmcv-full', - version=get_version(), - description='OpenMMLab Computer Vision Foundation', - keywords='computer vision', - packages=find_packages(), - include_package_data=True, - classifiers=[ - 'Development Status :: 4 - Beta', - 'License :: OSI Approved :: Apache Software License', - 'Operating System :: OS Independent', - 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.6', - 'Programming Language :: Python :: 3.7', - 'Programming Language :: Python :: 3.8', - 'Programming Language :: Python :: 3.9', - 'Programming Language :: Python :: 3.10', - 'Topic :: Utilities', - ], - url='https://github.com/open-mmlab/mmcv', - author='MMCV Contributors', - author_email='openmmlab@gmail.com', - install_requires=install_requires, - extras_require={ - 'all': parse_requirements('requirements.txt'), - 'tests': parse_requirements('requirements/test.txt'), - 'build': parse_requirements('requirements/build.txt'), - 'optional': parse_requirements('requirements/optional.txt'), - }, - 
ext_modules=get_extensions(), - cmdclass=cmd_class, - zip_safe=False) diff --git a/cv/3d_detection/paconv/pytorch/tools/analysis_tools/analyze_logs.py b/cv/3d_detection/paconv/pytorch/tools/analysis_tools/analyze_logs.py deleted file mode 100644 index 18858466..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/analysis_tools/analyze_logs.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import json -from collections import defaultdict - -import numpy as np -import seaborn as sns -from matplotlib import pyplot as plt - - -def cal_train_time(log_dicts, args): - for i, log_dict in enumerate(log_dicts): - print(f'{"-" * 5}Analyze train time of {args.json_logs[i]}{"-" * 5}') - all_times = [] - for epoch in log_dict.keys(): - if args.include_outliers: - all_times.append(log_dict[epoch]['time']) - else: - all_times.append(log_dict[epoch]['time'][1:]) - all_times = np.array(all_times) - epoch_ave_time = all_times.mean(-1) - slowest_epoch = epoch_ave_time.argmax() - fastest_epoch = epoch_ave_time.argmin() - std_over_epoch = epoch_ave_time.std() - print(f'slowest epoch {slowest_epoch + 1}, ' - f'average time is {epoch_ave_time[slowest_epoch]:.4f}') - print(f'fastest epoch {fastest_epoch + 1}, ' - f'average time is {epoch_ave_time[fastest_epoch]:.4f}') - print(f'time std over epochs is {std_over_epoch:.4f}') - print(f'average iter time: {np.mean(all_times):.4f} s/iter') - print() - - -def plot_curve(log_dicts, args): - if args.backend is not None: - plt.switch_backend(args.backend) - sns.set_style(args.style) - # if legend is None, use {filename}_{key} as legend - legend = args.legend - if legend is None: - legend = [] - for json_log in args.json_logs: - for metric in args.keys: - legend.append(f'{json_log}_{metric}') - assert len(legend) == (len(args.json_logs) * len(args.keys)) - metrics = args.keys - - num_metrics = len(metrics) - for i, log_dict in enumerate(log_dicts): - epochs = list(log_dict.keys()) - for j, metric in enumerate(metrics): - print(f'plot curve of {args.json_logs[i]}, metric is {metric}') - if metric not in log_dict[epochs[args.interval - 1]]: - raise KeyError( - f'{args.json_logs[i]} does not contain metric {metric}') - - if args.mode == 'eval': - if min(epochs) == args.interval: - x0 = args.interval - else: - # if current training is resumed from previous checkpoint - # we lost information in early epochs - # `xs` should start according to `min(epochs)` - if min(epochs) % args.interval == 0: - x0 = min(epochs) - else: - # find the first epoch that do eval - x0 = min(epochs) + args.interval - \ - min(epochs) % args.interval - xs = np.arange(x0, max(epochs) + 1, args.interval) - ys = [] - for epoch in epochs[args.interval - 1::args.interval]: - ys += log_dict[epoch][metric] - - # if training is aborted before eval of the last epoch - # `xs` and `ys` will have different length and cause an error - # check if `ys[-1]` is empty here - if not log_dict[epoch][metric]: - xs = xs[:-1] - - ax = plt.gca() - ax.set_xticks(xs) - plt.xlabel('epoch') - plt.plot(xs, ys, label=legend[i * num_metrics + j], marker='o') - else: - xs = [] - ys = [] - num_iters_per_epoch = \ - log_dict[epochs[args.interval-1]]['iter'][-1] - for epoch in epochs[args.interval - 1::args.interval]: - iters = log_dict[epoch]['iter'] - if log_dict[epoch]['mode'][-1] == 'val': - iters = iters[:-1] - xs.append( - np.array(iters) + (epoch - 1) * num_iters_per_epoch) - ys.append(np.array(log_dict[epoch][metric][:len(iters)])) - xs = np.concatenate(xs) - ys = 
np.concatenate(ys) - plt.xlabel('iter') - plt.plot( - xs, ys, label=legend[i * num_metrics + j], linewidth=0.5) - plt.legend() - if args.title is not None: - plt.title(args.title) - if args.out is None: - plt.show() - else: - print(f'save curve to: {args.out}') - plt.savefig(args.out) - plt.cla() - - -def add_plot_parser(subparsers): - parser_plt = subparsers.add_parser( - 'plot_curve', help='parser for plotting curves') - parser_plt.add_argument( - 'json_logs', - type=str, - nargs='+', - help='path of train log in json format') - parser_plt.add_argument( - '--keys', - type=str, - nargs='+', - default=['mAP_0.25'], - help='the metric that you want to plot') - parser_plt.add_argument('--title', type=str, help='title of figure') - parser_plt.add_argument( - '--legend', - type=str, - nargs='+', - default=None, - help='legend of each plot') - parser_plt.add_argument( - '--backend', type=str, default=None, help='backend of plt') - parser_plt.add_argument( - '--style', type=str, default='dark', help='style of plt') - parser_plt.add_argument('--out', type=str, default=None) - parser_plt.add_argument('--mode', type=str, default='train') - parser_plt.add_argument('--interval', type=int, default=1) - - -def add_time_parser(subparsers): - parser_time = subparsers.add_parser( - 'cal_train_time', - help='parser for computing the average time per training iteration') - parser_time.add_argument( - 'json_logs', - type=str, - nargs='+', - help='path of train log in json format') - parser_time.add_argument( - '--include-outliers', - action='store_true', - help='include the first value of every epoch when computing ' - 'the average time') - - -def parse_args(): - parser = argparse.ArgumentParser(description='Analyze Json Log') - # currently only support plot curve and calculate average train time - subparsers = parser.add_subparsers(dest='task', help='task parser') - add_plot_parser(subparsers) - add_time_parser(subparsers) - args = parser.parse_args() - return args - - -def load_json_logs(json_logs): - # load and convert json_logs to log_dict, key is epoch, value is a sub dict - # keys of sub dict is different metrics, e.g. memory, bbox_mAP - # value of sub dict is a list of corresponding values of all iterations - log_dicts = [dict() for _ in json_logs] - for json_log, log_dict in zip(json_logs, log_dicts): - with open(json_log, 'r') as log_file: - for line in log_file: - log = json.loads(line.strip()) - # skip lines without `epoch` field - if 'epoch' not in log: - continue - epoch = log.pop('epoch') - if epoch not in log_dict: - log_dict[epoch] = defaultdict(list) - for k, v in log.items(): - log_dict[epoch][k].append(v) - return log_dicts - - -def main(): - args = parse_args() - - json_logs = args.json_logs - for json_log in json_logs: - assert json_log.endswith('.json') - - log_dicts = load_json_logs(json_logs) - - eval(args.task)(log_dicts, args) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/analysis_tools/benchmark.py b/cv/3d_detection/paconv/pytorch/tools/analysis_tools/benchmark.py deleted file mode 100644 index b31c9f09..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/analysis_tools/benchmark.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import time - -import torch -from mmcv import Config -from mmcv.parallel import MMDataParallel -from mmcv.runner import load_checkpoint, wrap_fp16_model - -from mmdet3d.datasets import build_dataloader, build_dataset -from mmdet3d.models import build_detector -from tools.misc.fuse_conv_bn import fuse_module - - -def parse_args(): - parser = argparse.ArgumentParser(description='MMDet benchmark a model') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--samples', default=2000, help='samples to benchmark') - parser.add_argument( - '--log-interval', default=50, help='interval of logging') - parser.add_argument( - '--fuse-conv-bn', - action='store_true', - help='Whether to fuse conv and bn, this will slightly increase' - 'the inference speed') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - cfg.model.pretrained = None - cfg.data.test.test_mode = True - - # build the dataloader - # TODO: support multiple images per gpu (only minor changes are needed) - dataset = build_dataset(cfg.data.test) - data_loader = build_dataloader( - dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=False, - shuffle=False) - - # build the model and load checkpoint - cfg.model.train_cfg = None - model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg')) - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - wrap_fp16_model(model) - load_checkpoint(model, args.checkpoint, map_location='cpu') - if args.fuse_conv_bn: - model = fuse_module(model) - - model = MMDataParallel(model, device_ids=[0]) - - model.eval() - - # the first several iterations may be very slow so skip them - num_warmup = 5 - pure_inf_time = 0 - - # benchmark with several samples and take the average - for i, data in enumerate(data_loader): - - torch.cuda.synchronize() - start_time = time.perf_counter() - - with torch.no_grad(): - model(return_loss=False, rescale=True, **data) - - torch.cuda.synchronize() - elapsed = time.perf_counter() - start_time - - if i >= num_warmup: - pure_inf_time += elapsed - if (i + 1) % args.log_interval == 0: - fps = (i + 1 - num_warmup) / pure_inf_time - print(f'Done image [{i + 1:<3}/ {args.samples}], ' - f'fps: {fps:.1f} img / s') - - if (i + 1) == args.samples: - pure_inf_time += elapsed - fps = (i + 1 - num_warmup) / pure_inf_time - print(f'Overall fps: {fps:.1f} img / s') - break - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/analysis_tools/get_flops.py b/cv/3d_detection/paconv/pytorch/tools/analysis_tools/get_flops.py deleted file mode 100644 index f45ed80f..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/analysis_tools/get_flops.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse - -import torch -from mmcv import Config, DictAction - -from mmdet3d.models import build_model - -try: - from mmcv.cnn import get_model_complexity_info -except ImportError: - raise ImportError('Please upgrade mmcv to >0.6.2') - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a detector') - parser.add_argument('config', help='train config file path') - parser.add_argument( - '--shape', - type=int, - nargs='+', - default=[40000, 4], - help='input point cloud size') - parser.add_argument( - '--modality', - type=str, - default='point', - choices=['point', 'image', 'multi'], - help='input data modality') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - args = parser.parse_args() - return args - - -def main(): - - args = parse_args() - - if args.modality == 'point': - assert len(args.shape) == 2, 'invalid input shape' - input_shape = tuple(args.shape) - elif args.modality == 'image': - if len(args.shape) == 1: - input_shape = (3, args.shape[0], args.shape[0]) - elif len(args.shape) == 2: - input_shape = (3, ) + tuple(args.shape) - else: - raise ValueError('invalid input shape') - elif args.modality == 'multi': - raise NotImplementedError( - 'FLOPs counter is currently not supported for models with ' - 'multi-modality input') - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - model = build_model( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - if torch.cuda.is_available(): - model.cuda() - model.eval() - - if hasattr(model, 'forward_dummy'): - model.forward = model.forward_dummy - else: - raise NotImplementedError( - 'FLOPs counter is currently not supported for {}'.format( - model.__class__.__name__)) - - flops, params = get_model_complexity_info(model, input_shape) - split_line = '=' * 30 - print(f'{split_line}\nInput shape: {input_shape}\n' - f'Flops: {flops}\nParams: {params}\n{split_line}') - print('!!!Please be cautious if you use the results in papers. ' - 'You may need to check if all ops are supported and verify that the ' - 'flops computation is correct.') - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/create_data.py b/cv/3d_detection/paconv/pytorch/tools/create_data.py deleted file mode 100644 index 6d619125..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/create_data.py +++ /dev/null @@ -1,303 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -from os import path as osp - -from tools.data_converter import indoor_converter as indoor -from tools.data_converter import kitti_converter as kitti -from tools.data_converter import lyft_converter as lyft_converter -from tools.data_converter import nuscenes_converter as nuscenes_converter -from tools.data_converter.create_gt_database import ( - GTDatabaseCreater, create_groundtruth_database) - - -def kitti_data_prep(root_path, - info_prefix, - version, - out_dir, - with_plane=False): - """Prepare data related to Kitti dataset. - - Related data consists of '.pkl' files recording basic infos, - 2D annotations and groundtruth database. 
- - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - version (str): Dataset version. - out_dir (str): Output directory of the groundtruth database info. - with_plane (bool, optional): Whether to use plane information. - Default: False. - """ - kitti.create_kitti_info_file(root_path, info_prefix, with_plane) - kitti.create_reduced_point_cloud(root_path, info_prefix) - - info_train_path = osp.join(root_path, f'{info_prefix}_infos_train.pkl') - info_val_path = osp.join(root_path, f'{info_prefix}_infos_val.pkl') - info_trainval_path = osp.join(root_path, - f'{info_prefix}_infos_trainval.pkl') - info_test_path = osp.join(root_path, f'{info_prefix}_infos_test.pkl') - kitti.export_2d_annotation(root_path, info_train_path) - kitti.export_2d_annotation(root_path, info_val_path) - kitti.export_2d_annotation(root_path, info_trainval_path) - kitti.export_2d_annotation(root_path, info_test_path) - - create_groundtruth_database( - 'KittiDataset', - root_path, - info_prefix, - f'{out_dir}/{info_prefix}_infos_train.pkl', - relative_path=False, - mask_anno_path='instances_train.json', - with_mask=(version == 'mask')) - - -def nuscenes_data_prep(root_path, - info_prefix, - version, - dataset_name, - out_dir, - max_sweeps=10): - """Prepare data related to nuScenes dataset. - - Related data consists of '.pkl' files recording basic infos, - 2D annotations and groundtruth database. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - version (str): Dataset version. - dataset_name (str): The dataset class name. - out_dir (str): Output directory of the groundtruth database info. - max_sweeps (int, optional): Number of input consecutive frames. - Default: 10 - """ - nuscenes_converter.create_nuscenes_infos( - root_path, info_prefix, version=version, max_sweeps=max_sweeps) - - if version == 'v1.0-test': - info_test_path = osp.join(root_path, f'{info_prefix}_infos_test.pkl') - nuscenes_converter.export_2d_annotation( - root_path, info_test_path, version=version) - return - - info_train_path = osp.join(root_path, f'{info_prefix}_infos_train.pkl') - info_val_path = osp.join(root_path, f'{info_prefix}_infos_val.pkl') - nuscenes_converter.export_2d_annotation( - root_path, info_train_path, version=version) - nuscenes_converter.export_2d_annotation( - root_path, info_val_path, version=version) - create_groundtruth_database(dataset_name, root_path, info_prefix, - f'{out_dir}/{info_prefix}_infos_train.pkl') - - -def lyft_data_prep(root_path, info_prefix, version, max_sweeps=10): - """Prepare data related to Lyft dataset. - - Related data consists of '.pkl' files recording basic infos. - Although the ground truth database and 2D annotations are not used in - Lyft, it can also be generated like nuScenes. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - version (str): Dataset version. - max_sweeps (int, optional): Number of input consecutive frames. - Defaults to 10. - """ - lyft_converter.create_lyft_infos( - root_path, info_prefix, version=version, max_sweeps=max_sweeps) - - -def scannet_data_prep(root_path, info_prefix, out_dir, workers): - """Prepare the info file for scannet dataset. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - out_dir (str): Output directory of the generated info file. - workers (int): Number of threads to be used. 
- """ - indoor.create_indoor_info_file( - root_path, info_prefix, out_dir, workers=workers) - - -def s3dis_data_prep(root_path, info_prefix, out_dir, workers): - """Prepare the info file for s3dis dataset. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - out_dir (str): Output directory of the generated info file. - workers (int): Number of threads to be used. - """ - indoor.create_indoor_info_file( - root_path, info_prefix, out_dir, workers=workers) - - -def sunrgbd_data_prep(root_path, info_prefix, out_dir, workers): - """Prepare the info file for sunrgbd dataset. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - out_dir (str): Output directory of the generated info file. - workers (int): Number of threads to be used. - """ - indoor.create_indoor_info_file( - root_path, info_prefix, out_dir, workers=workers) - - -def waymo_data_prep(root_path, - info_prefix, - version, - out_dir, - workers, - max_sweeps=5): - """Prepare the info file for waymo dataset. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - out_dir (str): Output directory of the generated info file. - workers (int): Number of threads to be used. - max_sweeps (int, optional): Number of input consecutive frames. - Default: 5. Here we store pose information of these frames - for later use. - """ - from tools.data_converter import waymo_converter as waymo - - splits = ['training', 'validation', 'testing'] - for i, split in enumerate(splits): - load_dir = osp.join(root_path, 'waymo_format', split) - if split == 'validation': - save_dir = osp.join(out_dir, 'kitti_format', 'training') - else: - save_dir = osp.join(out_dir, 'kitti_format', split) - converter = waymo.Waymo2KITTI( - load_dir, - save_dir, - prefix=str(i), - workers=workers, - test_mode=(split == 'testing')) - converter.convert() - # Generate waymo infos - out_dir = osp.join(out_dir, 'kitti_format') - kitti.create_waymo_info_file( - out_dir, info_prefix, max_sweeps=max_sweeps, workers=workers) - GTDatabaseCreater( - 'WaymoDataset', - out_dir, - info_prefix, - f'{out_dir}/{info_prefix}_infos_train.pkl', - relative_path=False, - with_mask=False, - num_worker=workers).create() - - -parser = argparse.ArgumentParser(description='Data converter arg parser') -parser.add_argument('dataset', metavar='kitti', help='name of the dataset') -parser.add_argument( - '--root-path', - type=str, - default='./data/kitti', - help='specify the root path of dataset') -parser.add_argument( - '--version', - type=str, - default='v1.0', - required=False, - help='specify the dataset version, no need for kitti') -parser.add_argument( - '--max-sweeps', - type=int, - default=10, - required=False, - help='specify sweeps of lidar per example') -parser.add_argument( - '--with-plane', - action='store_true', - help='Whether to use plane information for kitti.') -parser.add_argument( - '--out-dir', - type=str, - default='./data/kitti', - required=False, - help='name of info pkl') -parser.add_argument('--extra-tag', type=str, default='kitti') -parser.add_argument( - '--workers', type=int, default=4, help='number of threads to be used') -args = parser.parse_args() - -if __name__ == '__main__': - if args.dataset == 'kitti': - kitti_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=args.version, - out_dir=args.out_dir, - with_plane=args.with_plane) - elif args.dataset == 'nuscenes' and args.version != 'v1.0-mini': - 
train_version = f'{args.version}-trainval' - nuscenes_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=train_version, - dataset_name='NuScenesDataset', - out_dir=args.out_dir, - max_sweeps=args.max_sweeps) - test_version = f'{args.version}-test' - nuscenes_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=test_version, - dataset_name='NuScenesDataset', - out_dir=args.out_dir, - max_sweeps=args.max_sweeps) - elif args.dataset == 'nuscenes' and args.version == 'v1.0-mini': - train_version = f'{args.version}' - nuscenes_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=train_version, - dataset_name='NuScenesDataset', - out_dir=args.out_dir, - max_sweeps=args.max_sweeps) - elif args.dataset == 'lyft': - train_version = f'{args.version}-train' - lyft_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=train_version, - max_sweeps=args.max_sweeps) - test_version = f'{args.version}-test' - lyft_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=test_version, - max_sweeps=args.max_sweeps) - elif args.dataset == 'waymo': - waymo_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=args.version, - out_dir=args.out_dir, - workers=args.workers, - max_sweeps=args.max_sweeps) - elif args.dataset == 'scannet': - scannet_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - out_dir=args.out_dir, - workers=args.workers) - elif args.dataset == 's3dis': - s3dis_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - out_dir=args.out_dir, - workers=args.workers) - elif args.dataset == 'sunrgbd': - sunrgbd_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - out_dir=args.out_dir, - workers=args.workers) diff --git a/cv/3d_detection/paconv/pytorch/tools/create_data.sh b/cv/3d_detection/paconv/pytorch/tools/create_data.sh deleted file mode 100755 index 9a57852f..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/create_data.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash - -set -x -export PYTHONPATH=`pwd`:$PYTHONPATH - -PARTITION=$1 -JOB_NAME=$2 -DATASET=$3 -GPUS=${GPUS:-1} -GPUS_PER_NODE=${GPUS_PER_NODE:-1} -SRUN_ARGS=${SRUN_ARGS:-""} -JOB_NAME=create_data - -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --kill-on-bad-exit=1 \ - ${SRUN_ARGS} \ - python -u tools/create_data.py ${DATASET} \ - --root-path ./data/${DATASET} \ - --out-dir ./data/${DATASET} \ - --extra-tag ${DATASET} diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/__init__.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/__init__.py deleted file mode 100644 index ef101fec..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/create_gt_database.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/create_gt_database.py deleted file mode 100644 index 210f0e88..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/create_gt_database.py +++ /dev/null @@ -1,624 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import pickle -from os import path as osp - -import mmcv -import numpy as np -from mmcv import track_iter_progress -from mmcv.ops import roi_align -from pycocotools import mask as maskUtils -from pycocotools.coco import COCO - -from mmdet3d.core.bbox import box_np_ops as box_np_ops -from mmdet3d.datasets import build_dataset -from mmdet.core.evaluation.bbox_overlaps import bbox_overlaps - - -def _poly2mask(mask_ann, img_h, img_w): - if isinstance(mask_ann, list): - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = maskUtils.frPyObjects(mask_ann, img_h, img_w) - rle = maskUtils.merge(rles) - elif isinstance(mask_ann['counts'], list): - # uncompressed RLE - rle = maskUtils.frPyObjects(mask_ann, img_h, img_w) - else: - # rle - rle = mask_ann - mask = maskUtils.decode(rle) - return mask - - -def _parse_coco_ann_info(ann_info): - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - if ann['area'] <= 0: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_masks_ann.append(ann['segmentation']) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - ann = dict( - bboxes=gt_bboxes, bboxes_ignore=gt_bboxes_ignore, masks=gt_masks_ann) - - return ann - - -def crop_image_patch_v2(pos_proposals, pos_assigned_gt_inds, gt_masks): - import torch - from torch.nn.modules.utils import _pair - device = pos_proposals.device - num_pos = pos_proposals.size(0) - fake_inds = ( - torch.arange(num_pos, - device=device).to(dtype=pos_proposals.dtype)[:, None]) - rois = torch.cat([fake_inds, pos_proposals], dim=1) # Nx5 - mask_size = _pair(28) - rois = rois.to(device=device) - gt_masks_th = ( - torch.from_numpy(gt_masks).to(device).index_select( - 0, pos_assigned_gt_inds).to(dtype=rois.dtype)) - # Use RoIAlign could apparently accelerate the training (~0.1s/iter) - targets = ( - roi_align(gt_masks_th, rois, mask_size[::-1], 1.0, 0, True).squeeze(1)) - return targets - - -def crop_image_patch(pos_proposals, gt_masks, pos_assigned_gt_inds, org_img): - num_pos = pos_proposals.shape[0] - masks = [] - img_patches = [] - for i in range(num_pos): - gt_mask = gt_masks[pos_assigned_gt_inds[i]] - bbox = pos_proposals[i, :].astype(np.int32) - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1 + 1, 1) - h = np.maximum(y2 - y1 + 1, 1) - - mask_patch = gt_mask[y1:y1 + h, x1:x1 + w] - masked_img = gt_mask[..., None] * org_img - img_patch = masked_img[y1:y1 + h, x1:x1 + w] - - img_patches.append(img_patch) - masks.append(mask_patch) - return img_patches, masks - - -def create_groundtruth_database(dataset_class_name, - data_path, - info_prefix, - info_path=None, - mask_anno_path=None, - used_classes=None, - database_save_path=None, - db_info_save_path=None, - relative_path=True, - add_rgb=False, - lidar_only=False, - bev_only=False, - coors_range=None, - with_mask=False): - """Given the raw data, generate the ground truth database. - - Args: - dataset_class_name (str): Name of the input dataset. - data_path (str): Path of the data. 
- info_prefix (str): Prefix of the info file. - info_path (str, optional): Path of the info file. - Default: None. - mask_anno_path (str, optional): Path of the mask_anno. - Default: None. - used_classes (list[str], optional): Classes have been used. - Default: None. - database_save_path (str, optional): Path to save database. - Default: None. - db_info_save_path (str, optional): Path to save db_info. - Default: None. - relative_path (bool, optional): Whether to use relative path. - Default: True. - with_mask (bool, optional): Whether to use mask. - Default: False. - """ - print(f'Create GT Database of {dataset_class_name}') - dataset_cfg = dict( - type=dataset_class_name, data_root=data_path, ann_file=info_path) - if dataset_class_name == 'KittiDataset': - file_client_args = dict(backend='disk') - dataset_cfg.update( - test_mode=False, - split='training', - modality=dict( - use_lidar=True, - use_depth=False, - use_lidar_intensity=True, - use_camera=with_mask, - ), - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args) - ]) - - elif dataset_class_name == 'NuScenesDataset': - dataset_cfg.update( - use_valid_flag=True, - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - use_dim=[0, 1, 2, 3, 4], - pad_empty_sweeps=True, - remove_close=True), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True) - ]) - - elif dataset_class_name == 'WaymoDataset': - file_client_args = dict(backend='disk') - dataset_cfg.update( - test_mode=False, - split='training', - modality=dict( - use_lidar=True, - use_depth=False, - use_lidar_intensity=True, - use_camera=False, - ), - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=6, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args) - ]) - - dataset = build_dataset(dataset_cfg) - - if database_save_path is None: - database_save_path = osp.join(data_path, f'{info_prefix}_gt_database') - if db_info_save_path is None: - db_info_save_path = osp.join(data_path, - f'{info_prefix}_dbinfos_train.pkl') - mmcv.mkdir_or_exist(database_save_path) - all_db_infos = dict() - if with_mask: - coco = COCO(osp.join(data_path, mask_anno_path)) - imgIds = coco.getImgIds() - file2id = dict() - for i in imgIds: - info = coco.loadImgs([i])[0] - file2id.update({info['file_name']: i}) - - group_counter = 0 - for j in track_iter_progress(list(range(len(dataset)))): - input_dict = dataset.get_data_info(j) - dataset.pre_pipeline(input_dict) - example = dataset.pipeline(input_dict) - annos = example['ann_info'] - image_idx = example['sample_idx'] - points = example['points'].tensor.numpy() - gt_boxes_3d = annos['gt_bboxes_3d'].tensor.numpy() - names = annos['gt_names'] - group_dict = dict() - if 'group_ids' in annos: - group_ids = annos['group_ids'] - else: - group_ids = np.arange(gt_boxes_3d.shape[0], dtype=np.int64) - difficulty = np.zeros(gt_boxes_3d.shape[0], dtype=np.int32) - if 'difficulty' in annos: - difficulty = annos['difficulty'] - - num_obj = gt_boxes_3d.shape[0] - point_indices = box_np_ops.points_in_rbbox(points, gt_boxes_3d) - - if with_mask: - # prepare masks - gt_boxes = annos['gt_bboxes'] - img_path = 
osp.split(example['img_info']['filename'])[-1] - if img_path not in file2id.keys(): - print(f'skip image {img_path} for empty mask') - continue - img_id = file2id[img_path] - kins_annIds = coco.getAnnIds(imgIds=img_id) - kins_raw_info = coco.loadAnns(kins_annIds) - kins_ann_info = _parse_coco_ann_info(kins_raw_info) - h, w = annos['img_shape'][:2] - gt_masks = [ - _poly2mask(mask, h, w) for mask in kins_ann_info['masks'] - ] - # get mask inds based on iou mapping - bbox_iou = bbox_overlaps(kins_ann_info['bboxes'], gt_boxes) - mask_inds = bbox_iou.argmax(axis=0) - valid_inds = (bbox_iou.max(axis=0) > 0.5) - - # mask the image - # use more precise crop when it is ready - # object_img_patches = np.ascontiguousarray( - # np.stack(object_img_patches, axis=0).transpose(0, 3, 1, 2)) - # crop image patches using roi_align - # object_img_patches = crop_image_patch_v2( - # torch.Tensor(gt_boxes), - # torch.Tensor(mask_inds).long(), object_img_patches) - object_img_patches, object_masks = crop_image_patch( - gt_boxes, gt_masks, mask_inds, annos['img']) - - for i in range(num_obj): - filename = f'{image_idx}_{names[i]}_{i}.bin' - abs_filepath = osp.join(database_save_path, filename) - rel_filepath = osp.join(f'{info_prefix}_gt_database', filename) - - # save point clouds and image patches for each object - gt_points = points[point_indices[:, i]] - gt_points[:, :3] -= gt_boxes_3d[i, :3] - - if with_mask: - if object_masks[i].sum() == 0 or not valid_inds[i]: - # Skip object for empty or invalid mask - continue - img_patch_path = abs_filepath + '.png' - mask_patch_path = abs_filepath + '.mask.png' - mmcv.imwrite(object_img_patches[i], img_patch_path) - mmcv.imwrite(object_masks[i], mask_patch_path) - - with open(abs_filepath, 'w') as f: - gt_points.tofile(f) - - if (used_classes is None) or names[i] in used_classes: - db_info = { - 'name': names[i], - 'path': rel_filepath, - 'image_idx': image_idx, - 'gt_idx': i, - 'box3d_lidar': gt_boxes_3d[i], - 'num_points_in_gt': gt_points.shape[0], - 'difficulty': difficulty[i], - } - local_group_id = group_ids[i] - # if local_group_id >= 0: - if local_group_id not in group_dict: - group_dict[local_group_id] = group_counter - group_counter += 1 - db_info['group_id'] = group_dict[local_group_id] - if 'score' in annos: - db_info['score'] = annos['score'][i] - if with_mask: - db_info.update({'box2d_camera': gt_boxes[i]}) - if names[i] in all_db_infos: - all_db_infos[names[i]].append(db_info) - else: - all_db_infos[names[i]] = [db_info] - - for k, v in all_db_infos.items(): - print(f'load {len(v)} {k} database infos') - - with open(db_info_save_path, 'wb') as f: - pickle.dump(all_db_infos, f) - - -class GTDatabaseCreater: - """Given the raw data, generate the ground truth database. This is the - parallel version. For serialized version, please refer to - `create_groundtruth_database` - - Args: - dataset_class_name (str): Name of the input dataset. - data_path (str): Path of the data. - info_prefix (str): Prefix of the info file. - info_path (str, optional): Path of the info file. - Default: None. - mask_anno_path (str, optional): Path of the mask_anno. - Default: None. - used_classes (list[str], optional): Classes have been used. - Default: None. - database_save_path (str, optional): Path to save database. - Default: None. - db_info_save_path (str, optional): Path to save db_info. - Default: None. - relative_path (bool, optional): Whether to use relative path. - Default: True. - with_mask (bool, optional): Whether to use mask. - Default: False. 
- num_worker (int, optional): the number of parallel workers to use. - Default: 8. - """ - - def __init__(self, - dataset_class_name, - data_path, - info_prefix, - info_path=None, - mask_anno_path=None, - used_classes=None, - database_save_path=None, - db_info_save_path=None, - relative_path=True, - add_rgb=False, - lidar_only=False, - bev_only=False, - coors_range=None, - with_mask=False, - num_worker=8) -> None: - self.dataset_class_name = dataset_class_name - self.data_path = data_path - self.info_prefix = info_prefix - self.info_path = info_path - self.mask_anno_path = mask_anno_path - self.used_classes = used_classes - self.database_save_path = database_save_path - self.db_info_save_path = db_info_save_path - self.relative_path = relative_path - self.add_rgb = add_rgb - self.lidar_only = lidar_only - self.bev_only = bev_only - self.coors_range = coors_range - self.with_mask = with_mask - self.num_worker = num_worker - self.pipeline = None - - def create_single(self, input_dict): - group_counter = 0 - single_db_infos = dict() - example = self.pipeline(input_dict) - annos = example['ann_info'] - image_idx = example['sample_idx'] - points = example['points'].tensor.numpy() - gt_boxes_3d = annos['gt_bboxes_3d'].tensor.numpy() - names = annos['gt_names'] - group_dict = dict() - if 'group_ids' in annos: - group_ids = annos['group_ids'] - else: - group_ids = np.arange(gt_boxes_3d.shape[0], dtype=np.int64) - difficulty = np.zeros(gt_boxes_3d.shape[0], dtype=np.int32) - if 'difficulty' in annos: - difficulty = annos['difficulty'] - - num_obj = gt_boxes_3d.shape[0] - point_indices = box_np_ops.points_in_rbbox(points, gt_boxes_3d) - - if self.with_mask: - # prepare masks - gt_boxes = annos['gt_bboxes'] - img_path = osp.split(example['img_info']['filename'])[-1] - if img_path not in self.file2id.keys(): - print(f'skip image {img_path} for empty mask') - return single_db_infos - img_id = self.file2id[img_path] - kins_annIds = self.coco.getAnnIds(imgIds=img_id) - kins_raw_info = self.coco.loadAnns(kins_annIds) - kins_ann_info = _parse_coco_ann_info(kins_raw_info) - h, w = annos['img_shape'][:2] - gt_masks = [ - _poly2mask(mask, h, w) for mask in kins_ann_info['masks'] - ] - # get mask inds based on iou mapping - bbox_iou = bbox_overlaps(kins_ann_info['bboxes'], gt_boxes) - mask_inds = bbox_iou.argmax(axis=0) - valid_inds = (bbox_iou.max(axis=0) > 0.5) - - # mask the image - # use more precise crop when it is ready - # object_img_patches = np.ascontiguousarray( - # np.stack(object_img_patches, axis=0).transpose(0, 3, 1, 2)) - # crop image patches using roi_align - # object_img_patches = crop_image_patch_v2( - # torch.Tensor(gt_boxes), - # torch.Tensor(mask_inds).long(), object_img_patches) - object_img_patches, object_masks = crop_image_patch( - gt_boxes, gt_masks, mask_inds, annos['img']) - - for i in range(num_obj): - filename = f'{image_idx}_{names[i]}_{i}.bin' - abs_filepath = osp.join(self.database_save_path, filename) - rel_filepath = osp.join(f'{self.info_prefix}_gt_database', - filename) - - # save point clouds and image patches for each object - gt_points = points[point_indices[:, i]] - gt_points[:, :3] -= gt_boxes_3d[i, :3] - - if self.with_mask: - if object_masks[i].sum() == 0 or not valid_inds[i]: - # Skip object for empty or invalid mask - continue - img_patch_path = abs_filepath + '.png' - mask_patch_path = abs_filepath + '.mask.png' - mmcv.imwrite(object_img_patches[i], img_patch_path) - mmcv.imwrite(object_masks[i], mask_patch_path) - - with open(abs_filepath, 'w') as f: - 
gt_points.tofile(f) - - if (self.used_classes is None) or names[i] in self.used_classes: - db_info = { - 'name': names[i], - 'path': rel_filepath, - 'image_idx': image_idx, - 'gt_idx': i, - 'box3d_lidar': gt_boxes_3d[i], - 'num_points_in_gt': gt_points.shape[0], - 'difficulty': difficulty[i], - } - local_group_id = group_ids[i] - # if local_group_id >= 0: - if local_group_id not in group_dict: - group_dict[local_group_id] = group_counter - group_counter += 1 - db_info['group_id'] = group_dict[local_group_id] - if 'score' in annos: - db_info['score'] = annos['score'][i] - if self.with_mask: - db_info.update({'box2d_camera': gt_boxes[i]}) - if names[i] in single_db_infos: - single_db_infos[names[i]].append(db_info) - else: - single_db_infos[names[i]] = [db_info] - - return single_db_infos - - def create(self): - print(f'Create GT Database of {self.dataset_class_name}') - dataset_cfg = dict( - type=self.dataset_class_name, - data_root=self.data_path, - ann_file=self.info_path) - if self.dataset_class_name == 'KittiDataset': - file_client_args = dict(backend='disk') - dataset_cfg.update( - test_mode=False, - split='training', - modality=dict( - use_lidar=True, - use_depth=False, - use_lidar_intensity=True, - use_camera=self.with_mask, - ), - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args) - ]) - - elif self.dataset_class_name == 'NuScenesDataset': - dataset_cfg.update( - use_valid_flag=True, - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - use_dim=[0, 1, 2, 3, 4], - pad_empty_sweeps=True, - remove_close=True), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True) - ]) - - elif self.dataset_class_name == 'WaymoDataset': - file_client_args = dict(backend='disk') - dataset_cfg.update( - test_mode=False, - split='training', - modality=dict( - use_lidar=True, - use_depth=False, - use_lidar_intensity=True, - use_camera=False, - ), - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=6, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args) - ]) - - dataset = build_dataset(dataset_cfg) - self.pipeline = dataset.pipeline - if self.database_save_path is None: - self.database_save_path = osp.join( - self.data_path, f'{self.info_prefix}_gt_database') - if self.db_info_save_path is None: - self.db_info_save_path = osp.join( - self.data_path, f'{self.info_prefix}_dbinfos_train.pkl') - mmcv.mkdir_or_exist(self.database_save_path) - if self.with_mask: - self.coco = COCO(osp.join(self.data_path, self.mask_anno_path)) - imgIds = self.coco.getImgIds() - self.file2id = dict() - for i in imgIds: - info = self.coco.loadImgs([i])[0] - self.file2id.update({info['file_name']: i}) - - def loop_dataset(i): - input_dict = dataset.get_data_info(i) - dataset.pre_pipeline(input_dict) - return input_dict - - multi_db_infos = mmcv.track_parallel_progress( - self.create_single, ((loop_dataset(i) - for i in range(len(dataset))), len(dataset)), - self.num_worker) - print('Make global unique group id') - group_counter_offset = 0 - all_db_infos = dict() - for single_db_infos in track_iter_progress(multi_db_infos): - group_id = -1 - for name, name_db_infos 
in single_db_infos.items(): - for db_info in name_db_infos: - group_id = max(group_id, db_info['group_id']) - db_info['group_id'] += group_counter_offset - if name not in all_db_infos: - all_db_infos[name] = [] - all_db_infos[name].extend(name_db_infos) - group_counter_offset += (group_id + 1) - - for k, v in all_db_infos.items(): - print(f'load {len(v)} {k} database infos') - - with open(self.db_info_save_path, 'wb') as f: - pickle.dump(all_db_infos, f) diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/indoor_converter.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/indoor_converter.py deleted file mode 100644 index d3be3676..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/indoor_converter.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os - -import mmcv -import numpy as np - -from tools.data_converter.s3dis_data_utils import S3DISData, S3DISSegData -from tools.data_converter.scannet_data_utils import ScanNetData, ScanNetSegData -from tools.data_converter.sunrgbd_data_utils import SUNRGBDData - - -def create_indoor_info_file(data_path, - pkl_prefix='sunrgbd', - save_path=None, - use_v1=False, - workers=4): - """Create indoor information file. - - Get information of the raw data and save it to the pkl file. - - Args: - data_path (str): Path of the data. - pkl_prefix (str, optional): Prefix of the pkl to be saved. - Default: 'sunrgbd'. - save_path (str, optional): Path of the pkl to be saved. Default: None. - use_v1 (bool, optional): Whether to use v1. Default: False. - workers (int, optional): Number of threads to be used. Default: 4. - """ - assert os.path.exists(data_path) - assert pkl_prefix in ['sunrgbd', 'scannet', 's3dis'], \ - f'unsupported indoor dataset {pkl_prefix}' - save_path = data_path if save_path is None else save_path - assert os.path.exists(save_path) - - # generate infos for both detection and segmentation task - if pkl_prefix in ['sunrgbd', 'scannet']: - train_filename = os.path.join(save_path, - f'{pkl_prefix}_infos_train.pkl') - val_filename = os.path.join(save_path, f'{pkl_prefix}_infos_val.pkl') - if pkl_prefix == 'sunrgbd': - # SUN RGB-D has a train-val split - train_dataset = SUNRGBDData( - root_path=data_path, split='train', use_v1=use_v1) - val_dataset = SUNRGBDData( - root_path=data_path, split='val', use_v1=use_v1) - else: - # ScanNet has a train-val-test split - train_dataset = ScanNetData(root_path=data_path, split='train') - val_dataset = ScanNetData(root_path=data_path, split='val') - test_dataset = ScanNetData(root_path=data_path, split='test') - test_filename = os.path.join(save_path, - f'{pkl_prefix}_infos_test.pkl') - - infos_train = train_dataset.get_infos( - num_workers=workers, has_label=True) - mmcv.dump(infos_train, train_filename, 'pkl') - print(f'{pkl_prefix} info train file is saved to {train_filename}') - - infos_val = val_dataset.get_infos(num_workers=workers, has_label=True) - mmcv.dump(infos_val, val_filename, 'pkl') - print(f'{pkl_prefix} info val file is saved to {val_filename}') - - if pkl_prefix == 'scannet': - infos_test = test_dataset.get_infos( - num_workers=workers, has_label=False) - mmcv.dump(infos_test, test_filename, 'pkl') - print(f'{pkl_prefix} info test file is saved to {test_filename}') - - # generate infos for the semantic segmentation task - # e.g. 
re-sampled scene indexes and label weights - # scene indexes are used to re-sample rooms with different number of points - # label weights are used to balance classes with different number of points - if pkl_prefix == 'scannet': - # label weight computation function is adopted from - # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24 - train_dataset = ScanNetSegData( - data_root=data_path, - ann_file=train_filename, - split='train', - num_points=8192, - label_weight_func=lambda x: 1.0 / np.log(1.2 + x)) - # TODO: do we need to generate on val set? - val_dataset = ScanNetSegData( - data_root=data_path, - ann_file=val_filename, - split='val', - num_points=8192, - label_weight_func=lambda x: 1.0 / np.log(1.2 + x)) - # no need to generate for test set - train_dataset.get_seg_infos() - val_dataset.get_seg_infos() - elif pkl_prefix == 's3dis': - # S3DIS doesn't have a fixed train-val split - # it has 6 areas instead, so we generate info file for each of them - # in training, we will use dataset to wrap different areas - splits = [f'Area_{i}' for i in [1, 2, 3, 4, 5, 6]] - for split in splits: - dataset = S3DISData(root_path=data_path, split=split) - info = dataset.get_infos(num_workers=workers, has_label=True) - filename = os.path.join(save_path, - f'{pkl_prefix}_infos_{split}.pkl') - mmcv.dump(info, filename, 'pkl') - print(f'{pkl_prefix} info {split} file is saved to {filename}') - seg_dataset = S3DISSegData( - data_root=data_path, - ann_file=filename, - split=split, - num_points=4096, - label_weight_func=lambda x: 1.0 / np.log(1.2 + x)) - seg_dataset.get_seg_infos() diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/kitti_converter.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/kitti_converter.py deleted file mode 100644 index 2db461d4..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/kitti_converter.py +++ /dev/null @@ -1,624 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections import OrderedDict -from pathlib import Path - -import mmcv -import numpy as np -from nuscenes.utils.geometry_utils import view_points - -from mmdet3d.core.bbox import box_np_ops, points_cam2img -from .kitti_data_utils import WaymoInfoGatherer, get_kitti_image_info -from .nuscenes_converter import post_process_coords - -kitti_categories = ('Pedestrian', 'Cyclist', 'Car') - - -def convert_to_kitti_info_version2(info): - """convert kitti info v1 to v2 if possible. - - Args: - info (dict): Info of the input kitti data. - - image (dict): image info - - calib (dict): calibration info - - point_cloud (dict): point cloud info - """ - if 'image' not in info or 'calib' not in info or 'point_cloud' not in info: - info['image'] = { - 'image_shape': info['img_shape'], - 'image_idx': info['image_idx'], - 'image_path': info['img_path'], - } - info['calib'] = { - 'R0_rect': info['calib/R0_rect'], - 'Tr_velo_to_cam': info['calib/Tr_velo_to_cam'], - 'P2': info['calib/P2'], - } - info['point_cloud'] = { - 'velodyne_path': info['velodyne_path'], - } - - -def _read_imageset_file(path): - with open(path, 'r') as f: - lines = f.readlines() - return [int(line) for line in lines] - - -class _NumPointsInGTCalculater: - """Calculate the number of points inside the ground truth box. This is the - parallel version. For the serialized version, please refer to - `_calculate_num_points_in_gt`. - - Args: - data_path (str): Path of the data. - relative_path (bool): Whether to use relative path. 
- remove_outside (bool, optional): Whether to remove points which are - outside of image. Default: True. - num_features (int, optional): Number of features per point. - Default: False. - num_worker (int, optional): the number of parallel workers to use. - Default: 8. - """ - - def __init__(self, - data_path, - relative_path, - remove_outside=True, - num_features=4, - num_worker=8) -> None: - self.data_path = data_path - self.relative_path = relative_path - self.remove_outside = remove_outside - self.num_features = num_features - self.num_worker = num_worker - - def calculate_single(self, info): - pc_info = info['point_cloud'] - image_info = info['image'] - calib = info['calib'] - if self.relative_path: - v_path = str(Path(self.data_path) / pc_info['velodyne_path']) - else: - v_path = pc_info['velodyne_path'] - points_v = np.fromfile( - v_path, dtype=np.float32, - count=-1).reshape([-1, self.num_features]) - rect = calib['R0_rect'] - Trv2c = calib['Tr_velo_to_cam'] - P2 = calib['P2'] - if self.remove_outside: - points_v = box_np_ops.remove_outside_points( - points_v, rect, Trv2c, P2, image_info['image_shape']) - annos = info['annos'] - num_obj = len([n for n in annos['name'] if n != 'DontCare']) - dims = annos['dimensions'][:num_obj] - loc = annos['location'][:num_obj] - rots = annos['rotation_y'][:num_obj] - gt_boxes_camera = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - gt_boxes_lidar = box_np_ops.box_camera_to_lidar( - gt_boxes_camera, rect, Trv2c) - indices = box_np_ops.points_in_rbbox(points_v[:, :3], gt_boxes_lidar) - num_points_in_gt = indices.sum(0) - num_ignored = len(annos['dimensions']) - num_obj - num_points_in_gt = np.concatenate( - [num_points_in_gt, -np.ones([num_ignored])]) - annos['num_points_in_gt'] = num_points_in_gt.astype(np.int32) - return info - - def calculate(self, infos): - ret_infos = mmcv.track_parallel_progress(self.calculate_single, infos, - self.num_worker) - for i, ret_info in enumerate(ret_infos): - infos[i] = ret_info - - -def _calculate_num_points_in_gt(data_path, - infos, - relative_path, - remove_outside=True, - num_features=4): - for info in mmcv.track_iter_progress(infos): - pc_info = info['point_cloud'] - image_info = info['image'] - calib = info['calib'] - if relative_path: - v_path = str(Path(data_path) / pc_info['velodyne_path']) - else: - v_path = pc_info['velodyne_path'] - points_v = np.fromfile( - v_path, dtype=np.float32, count=-1).reshape([-1, num_features]) - rect = calib['R0_rect'] - Trv2c = calib['Tr_velo_to_cam'] - P2 = calib['P2'] - if remove_outside: - points_v = box_np_ops.remove_outside_points( - points_v, rect, Trv2c, P2, image_info['image_shape']) - - # points_v = points_v[points_v[:, 0] > 0] - annos = info['annos'] - num_obj = len([n for n in annos['name'] if n != 'DontCare']) - # annos = kitti.filter_kitti_anno(annos, ['DontCare']) - dims = annos['dimensions'][:num_obj] - loc = annos['location'][:num_obj] - rots = annos['rotation_y'][:num_obj] - gt_boxes_camera = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - gt_boxes_lidar = box_np_ops.box_camera_to_lidar( - gt_boxes_camera, rect, Trv2c) - indices = box_np_ops.points_in_rbbox(points_v[:, :3], gt_boxes_lidar) - num_points_in_gt = indices.sum(0) - num_ignored = len(annos['dimensions']) - num_obj - num_points_in_gt = np.concatenate( - [num_points_in_gt, -np.ones([num_ignored])]) - annos['num_points_in_gt'] = num_points_in_gt.astype(np.int32) - - -def create_kitti_info_file(data_path, - pkl_prefix='kitti', - with_plane=False, - save_path=None, - 
relative_path=True): - """Create info file of KITTI dataset. - - Given the raw data, generate its related info file in pkl format. - - Args: - data_path (str): Path of the data root. - pkl_prefix (str, optional): Prefix of the info file to be generated. - Default: 'kitti'. - with_plane (bool, optional): Whether to use plane information. - Default: False. - save_path (str, optional): Path to save the info file. - Default: None. - relative_path (bool, optional): Whether to use relative path. - Default: True. - """ - imageset_folder = Path(data_path) / 'ImageSets' - train_img_ids = _read_imageset_file(str(imageset_folder / 'train.txt')) - val_img_ids = _read_imageset_file(str(imageset_folder / 'val.txt')) - test_img_ids = _read_imageset_file(str(imageset_folder / 'test.txt')) - - print('Generate info. this may take several minutes.') - if save_path is None: - save_path = Path(data_path) - else: - save_path = Path(save_path) - kitti_infos_train = get_kitti_image_info( - data_path, - training=True, - velodyne=True, - calib=True, - with_plane=with_plane, - image_ids=train_img_ids, - relative_path=relative_path) - _calculate_num_points_in_gt(data_path, kitti_infos_train, relative_path) - filename = save_path / f'{pkl_prefix}_infos_train.pkl' - print(f'Kitti info train file is saved to {filename}') - mmcv.dump(kitti_infos_train, filename) - kitti_infos_val = get_kitti_image_info( - data_path, - training=True, - velodyne=True, - calib=True, - with_plane=with_plane, - image_ids=val_img_ids, - relative_path=relative_path) - _calculate_num_points_in_gt(data_path, kitti_infos_val, relative_path) - filename = save_path / f'{pkl_prefix}_infos_val.pkl' - print(f'Kitti info val file is saved to {filename}') - mmcv.dump(kitti_infos_val, filename) - filename = save_path / f'{pkl_prefix}_infos_trainval.pkl' - print(f'Kitti info trainval file is saved to {filename}') - mmcv.dump(kitti_infos_train + kitti_infos_val, filename) - - kitti_infos_test = get_kitti_image_info( - data_path, - training=False, - label_info=False, - velodyne=True, - calib=True, - with_plane=False, - image_ids=test_img_ids, - relative_path=relative_path) - filename = save_path / f'{pkl_prefix}_infos_test.pkl' - print(f'Kitti info test file is saved to {filename}') - mmcv.dump(kitti_infos_test, filename) - - -def create_waymo_info_file(data_path, - pkl_prefix='waymo', - save_path=None, - relative_path=True, - max_sweeps=5, - workers=8): - """Create info file of waymo dataset. - - Given the raw data, generate its related info file in pkl format. - - Args: - data_path (str): Path of the data root. - pkl_prefix (str, optional): Prefix of the info file to be generated. - Default: 'waymo'. - save_path (str, optional): Path to save the info file. - Default: None. - relative_path (bool, optional): Whether to use relative path. - Default: True. - max_sweeps (int, optional): Max sweeps before the detection frame - to be used. Default: 5. - """ - imageset_folder = Path(data_path) / 'ImageSets' - train_img_ids = _read_imageset_file(str(imageset_folder / 'train.txt')) - val_img_ids = _read_imageset_file(str(imageset_folder / 'val.txt')) - test_img_ids = _read_imageset_file(str(imageset_folder / 'test.txt')) - - print('Generate info. 
this may take several minutes.') - if save_path is None: - save_path = Path(data_path) - else: - save_path = Path(save_path) - waymo_infos_gatherer_trainval = WaymoInfoGatherer( - data_path, - training=True, - velodyne=True, - calib=True, - pose=True, - relative_path=relative_path, - max_sweeps=max_sweeps, - num_worker=workers) - waymo_infos_gatherer_test = WaymoInfoGatherer( - data_path, - training=False, - label_info=False, - velodyne=True, - calib=True, - pose=True, - relative_path=relative_path, - max_sweeps=max_sweeps, - num_worker=workers) - num_points_in_gt_calculater = _NumPointsInGTCalculater( - data_path, - relative_path, - num_features=6, - remove_outside=False, - num_worker=workers) - - waymo_infos_train = waymo_infos_gatherer_trainval.gather(train_img_ids) - num_points_in_gt_calculater.calculate(waymo_infos_train) - filename = save_path / f'{pkl_prefix}_infos_train.pkl' - print(f'Waymo info train file is saved to {filename}') - mmcv.dump(waymo_infos_train, filename) - waymo_infos_val = waymo_infos_gatherer_trainval.gather(val_img_ids) - num_points_in_gt_calculater.calculate(waymo_infos_val) - filename = save_path / f'{pkl_prefix}_infos_val.pkl' - print(f'Waymo info val file is saved to {filename}') - mmcv.dump(waymo_infos_val, filename) - filename = save_path / f'{pkl_prefix}_infos_trainval.pkl' - print(f'Waymo info trainval file is saved to {filename}') - mmcv.dump(waymo_infos_train + waymo_infos_val, filename) - waymo_infos_test = waymo_infos_gatherer_test.gather(test_img_ids) - filename = save_path / f'{pkl_prefix}_infos_test.pkl' - print(f'Waymo info test file is saved to {filename}') - mmcv.dump(waymo_infos_test, filename) - - -def _create_reduced_point_cloud(data_path, - info_path, - save_path=None, - back=False, - num_features=4, - front_camera_id=2): - """Create reduced point clouds for given info. - - Args: - data_path (str): Path of original data. - info_path (str): Path of data info. - save_path (str, optional): Path to save reduced point cloud - data. Default: None. - back (bool, optional): Whether to flip the points to back. - Default: False. - num_features (int, optional): Number of point features. Default: 4. - front_camera_id (int, optional): The referenced/front camera ID. - Default: 2. - """ - kitti_infos = mmcv.load(info_path) - - for info in mmcv.track_iter_progress(kitti_infos): - pc_info = info['point_cloud'] - image_info = info['image'] - calib = info['calib'] - - v_path = pc_info['velodyne_path'] - v_path = Path(data_path) / v_path - points_v = np.fromfile( - str(v_path), dtype=np.float32, - count=-1).reshape([-1, num_features]) - rect = calib['R0_rect'] - if front_camera_id == 2: - P2 = calib['P2'] - else: - P2 = calib[f'P{str(front_camera_id)}'] - Trv2c = calib['Tr_velo_to_cam'] - # first remove z < 0 points - # keep = points_v[:, -1] > 0 - # points_v = points_v[keep] - # then remove outside. 
- if back: - points_v[:, 0] = -points_v[:, 0] - points_v = box_np_ops.remove_outside_points(points_v, rect, Trv2c, P2, - image_info['image_shape']) - if save_path is None: - save_dir = v_path.parent.parent / (v_path.parent.stem + '_reduced') - if not save_dir.exists(): - save_dir.mkdir() - save_filename = save_dir / v_path.name - # save_filename = str(v_path) + '_reduced' - if back: - save_filename += '_back' - else: - save_filename = str(Path(save_path) / v_path.name) - if back: - save_filename += '_back' - with open(save_filename, 'w') as f: - points_v.tofile(f) - - -def create_reduced_point_cloud(data_path, - pkl_prefix, - train_info_path=None, - val_info_path=None, - test_info_path=None, - save_path=None, - with_back=False): - """Create reduced point clouds for training/validation/testing. - - Args: - data_path (str): Path of original data. - pkl_prefix (str): Prefix of info files. - train_info_path (str, optional): Path of training set info. - Default: None. - val_info_path (str, optional): Path of validation set info. - Default: None. - test_info_path (str, optional): Path of test set info. - Default: None. - save_path (str, optional): Path to save reduced point cloud data. - Default: None. - with_back (bool, optional): Whether to flip the points to back. - Default: False. - """ - if train_info_path is None: - train_info_path = Path(data_path) / f'{pkl_prefix}_infos_train.pkl' - if val_info_path is None: - val_info_path = Path(data_path) / f'{pkl_prefix}_infos_val.pkl' - if test_info_path is None: - test_info_path = Path(data_path) / f'{pkl_prefix}_infos_test.pkl' - - print('create reduced point cloud for training set') - _create_reduced_point_cloud(data_path, train_info_path, save_path) - print('create reduced point cloud for validation set') - _create_reduced_point_cloud(data_path, val_info_path, save_path) - print('create reduced point cloud for testing set') - _create_reduced_point_cloud(data_path, test_info_path, save_path) - if with_back: - _create_reduced_point_cloud( - data_path, train_info_path, save_path, back=True) - _create_reduced_point_cloud( - data_path, val_info_path, save_path, back=True) - _create_reduced_point_cloud( - data_path, test_info_path, save_path, back=True) - - -def export_2d_annotation(root_path, info_path, mono3d=True): - """Export 2d annotation from the info file and raw data. - - Args: - root_path (str): Root path of the raw data. - info_path (str): Path of the info file. - mono3d (bool, optional): Whether to export mono3d annotation. - Default: True. 
- """ - # get bbox annotations for camera - kitti_infos = mmcv.load(info_path) - cat2Ids = [ - dict(id=kitti_categories.index(cat_name), name=cat_name) - for cat_name in kitti_categories - ] - coco_ann_id = 0 - coco_2d_dict = dict(annotations=[], images=[], categories=cat2Ids) - from os import path as osp - for info in mmcv.track_iter_progress(kitti_infos): - coco_infos = get_2d_boxes(info, occluded=[0, 1, 2, 3], mono3d=mono3d) - (height, width, - _) = mmcv.imread(osp.join(root_path, - info['image']['image_path'])).shape - coco_2d_dict['images'].append( - dict( - file_name=info['image']['image_path'], - id=info['image']['image_idx'], - Tri2v=info['calib']['Tr_imu_to_velo'], - Trv2c=info['calib']['Tr_velo_to_cam'], - rect=info['calib']['R0_rect'], - cam_intrinsic=info['calib']['P2'], - width=width, - height=height)) - for coco_info in coco_infos: - if coco_info is None: - continue - # add an empty key for coco format - coco_info['segmentation'] = [] - coco_info['id'] = coco_ann_id - coco_2d_dict['annotations'].append(coco_info) - coco_ann_id += 1 - if mono3d: - json_prefix = f'{info_path[:-4]}_mono3d' - else: - json_prefix = f'{info_path[:-4]}' - mmcv.dump(coco_2d_dict, f'{json_prefix}.coco.json') - - -def get_2d_boxes(info, occluded, mono3d=True): - """Get the 2D annotation records for a given info. - - Args: - info: Information of the given sample data. - occluded: Integer (0, 1, 2, 3) indicating occlusion state: - 0 = fully visible, 1 = partly occluded, 2 = largely occluded, - 3 = unknown, -1 = DontCare - mono3d (bool): Whether to get boxes with mono3d annotation. - - Return: - list[dict]: List of 2D annotation record that belongs to the input - `sample_data_token`. - """ - # Get calibration information - P2 = info['calib']['P2'] - - repro_recs = [] - # if no annotations in info (test dataset), then return - if 'annos' not in info: - return repro_recs - - # Get all the annotation with the specified visibilties. - ann_dicts = info['annos'] - mask = [(ocld in occluded) for ocld in ann_dicts['occluded']] - for k in ann_dicts.keys(): - ann_dicts[k] = ann_dicts[k][mask] - - # convert dict of list to list of dict - ann_recs = [] - for i in range(len(ann_dicts['occluded'])): - ann_rec = {} - for k in ann_dicts.keys(): - ann_rec[k] = ann_dicts[k][i] - ann_recs.append(ann_rec) - - for ann_idx, ann_rec in enumerate(ann_recs): - # Augment sample_annotation with token information. - ann_rec['sample_annotation_token'] = \ - f"{info['image']['image_idx']}.{ann_idx}" - ann_rec['sample_data_token'] = info['image']['image_idx'] - sample_data_token = info['image']['image_idx'] - - loc = ann_rec['location'][np.newaxis, :] - dim = ann_rec['dimensions'][np.newaxis, :] - rot = ann_rec['rotation_y'][np.newaxis, np.newaxis] - # transform the center from [0.5, 1.0, 0.5] to [0.5, 0.5, 0.5] - dst = np.array([0.5, 0.5, 0.5]) - src = np.array([0.5, 1.0, 0.5]) - loc = loc + dim * (dst - src) - offset = (info['calib']['P2'][0, 3] - info['calib']['P0'][0, 3]) \ - / info['calib']['P2'][0, 0] - loc_3d = np.copy(loc) - loc_3d[0, 0] += offset - gt_bbox_3d = np.concatenate([loc, dim, rot], axis=1).astype(np.float32) - - # Filter out the corners that are not in front of the calibrated - # sensor. - corners_3d = box_np_ops.center_to_corner_box3d( - gt_bbox_3d[:, :3], - gt_bbox_3d[:, 3:6], - gt_bbox_3d[:, 6], [0.5, 0.5, 0.5], - axis=1) - corners_3d = corners_3d[0].T # (1, 8, 3) -> (3, 8) - in_front = np.argwhere(corners_3d[2, :] > 0).flatten() - corners_3d = corners_3d[:, in_front] - - # Project 3d box to 2d. 
-        camera_intrinsic = P2
-        corner_coords = view_points(corners_3d, camera_intrinsic,
-                                    True).T[:, :2].tolist()
-
-        # Keep only corners that fall within the image.
-        final_coords = post_process_coords(corner_coords)
-
-        # Skip if the convex hull of the re-projected corners
-        # does not intersect the image canvas.
-        if final_coords is None:
-            continue
-        else:
-            min_x, min_y, max_x, max_y = final_coords
-
-        # Generate dictionary record to be included in the .json file.
-        repro_rec = generate_record(ann_rec, min_x, min_y, max_x, max_y,
-                                    sample_data_token,
-                                    info['image']['image_path'])
-
-        # If mono3d=True, add 3D annotations in camera coordinates
-        if mono3d and (repro_rec is not None):
-            repro_rec['bbox_cam3d'] = np.concatenate(
-                [loc_3d, dim, rot],
-                axis=1).astype(np.float32).squeeze().tolist()
-            repro_rec['velo_cam3d'] = -1  # no velocity in KITTI
-
-            center3d = np.array(loc).reshape([1, 3])
-            center2d = points_cam2img(
-                center3d, camera_intrinsic, with_depth=True)
-            repro_rec['center2d'] = center2d.squeeze().tolist()
-            # normalized center2D + depth
-            # samples with depth < 0 will be removed
-            if repro_rec['center2d'][2] <= 0:
-                continue
-
-            repro_rec['attribute_name'] = -1  # no attribute in KITTI
-            repro_rec['attribute_id'] = -1
-
-        repro_recs.append(repro_rec)
-
-    return repro_recs
-
-
-def generate_record(ann_rec, x1, y1, x2, y2, sample_data_token, filename):
-    """Generate one 2D annotation record given various information on top of
-    the 2D bounding box coordinates.
-
-    Args:
-        ann_rec (dict): Original 3d annotation record.
-        x1 (float): Minimum value of the x coordinate.
-        y1 (float): Minimum value of the y coordinate.
-        x2 (float): Maximum value of the x coordinate.
-        y2 (float): Maximum value of the y coordinate.
-        sample_data_token (str): Sample data token.
-        filename (str):The corresponding image file where the annotation
-            is present.
-
-    Returns:
-        dict: A sample 2D annotation record.
-            - file_name (str): file name
-            - image_id (str): sample data token
-            - area (float): 2d box area
-            - category_name (str): category name
-            - category_id (int): category id
-            - bbox (list[float]): left x, top y, x_size, y_size of 2d box
-            - iscrowd (int): whether the area is crowd
-    """
-    repro_rec = OrderedDict()
-    repro_rec['sample_data_token'] = sample_data_token
-    coco_rec = dict()
-
-    key_mapping = {
-        'name': 'category_name',
-        'num_points_in_gt': 'num_lidar_pts',
-        'sample_annotation_token': 'sample_annotation_token',
-        'sample_data_token': 'sample_data_token',
-    }
-
-    for key, value in ann_rec.items():
-        if key in key_mapping.keys():
-            repro_rec[key_mapping[key]] = value
-
-    repro_rec['bbox_corners'] = [x1, y1, x2, y2]
-    repro_rec['filename'] = filename
-
-    coco_rec['file_name'] = filename
-    coco_rec['image_id'] = sample_data_token
-    coco_rec['area'] = (y2 - y1) * (x2 - x1)
-
-    if repro_rec['category_name'] not in kitti_categories:
-        return None
-    cat_name = repro_rec['category_name']
-    coco_rec['category_name'] = cat_name
-    coco_rec['category_id'] = kitti_categories.index(cat_name)
-    coco_rec['bbox'] = [x1, y1, x2 - x1, y2 - y1]
-    coco_rec['iscrowd'] = 0
-
-    return coco_rec
diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/kitti_data_utils.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/kitti_data_utils.py
deleted file mode 100644
index cae84cc6..00000000
--- a/cv/3d_detection/paconv/pytorch/tools/data_converter/kitti_data_utils.py
+++ /dev/null
@@ -1,619 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from collections import OrderedDict -from concurrent import futures as futures -from os import path as osp -from pathlib import Path - -import mmcv -import numpy as np -from PIL import Image -from skimage import io - - -def get_image_index_str(img_idx, use_prefix_id=False): - if use_prefix_id: - return '{:07d}'.format(img_idx) - else: - return '{:06d}'.format(img_idx) - - -def get_kitti_info_path(idx, - prefix, - info_type='image_2', - file_tail='.png', - training=True, - relative_path=True, - exist_check=True, - use_prefix_id=False): - img_idx_str = get_image_index_str(idx, use_prefix_id) - img_idx_str += file_tail - prefix = Path(prefix) - if training: - file_path = Path('training') / info_type / img_idx_str - else: - file_path = Path('testing') / info_type / img_idx_str - if exist_check and not (prefix / file_path).exists(): - raise ValueError('file not exist: {}'.format(file_path)) - if relative_path: - return str(file_path) - else: - return str(prefix / file_path) - - -def get_image_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - info_type='image_2', - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, info_type, '.png', training, - relative_path, exist_check, use_prefix_id) - - -def get_label_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - info_type='label_2', - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, info_type, '.txt', training, - relative_path, exist_check, use_prefix_id) - - -def get_plane_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - info_type='planes', - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, info_type, '.txt', training, - relative_path, exist_check, use_prefix_id) - - -def get_velodyne_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, 'velodyne', '.bin', training, - relative_path, exist_check, use_prefix_id) - - -def get_calib_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, 'calib', '.txt', training, - relative_path, exist_check, use_prefix_id) - - -def get_pose_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, 'pose', '.txt', training, - relative_path, exist_check, use_prefix_id) - - -def get_timestamp_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, 'timestamp', '.txt', training, - relative_path, exist_check, use_prefix_id) - - -def get_label_anno(label_path): - annotations = {} - annotations.update({ - 'name': [], - 'truncated': [], - 'occluded': [], - 'alpha': [], - 'bbox': [], - 'dimensions': [], - 'location': [], - 'rotation_y': [] - }) - with open(label_path, 'r') as f: - lines = f.readlines() - # if len(lines) == 0 or len(lines[0]) < 15: - # content = [] - # else: - content = [line.strip().split(' ') for line in lines] - num_objects = len([x[0] for x in content if x[0] != 'DontCare']) - annotations['name'] = np.array([x[0] for x in content]) - num_gt = len(annotations['name']) - annotations['truncated'] = np.array([float(x[1]) for x in content]) - annotations['occluded'] = np.array([int(x[2]) for x in content]) - annotations['alpha'] = np.array([float(x[3]) for x in content]) - annotations['bbox'] = np.array([[float(info) for info in 
x[4:8]] - for x in content]).reshape(-1, 4) - # dimensions will convert hwl format to standard lhw(camera) format. - annotations['dimensions'] = np.array([[float(info) for info in x[8:11]] - for x in content - ]).reshape(-1, 3)[:, [2, 0, 1]] - annotations['location'] = np.array([[float(info) for info in x[11:14]] - for x in content]).reshape(-1, 3) - annotations['rotation_y'] = np.array([float(x[14]) - for x in content]).reshape(-1) - if len(content) != 0 and len(content[0]) == 16: # have score - annotations['score'] = np.array([float(x[15]) for x in content]) - else: - annotations['score'] = np.zeros((annotations['bbox'].shape[0], )) - index = list(range(num_objects)) + [-1] * (num_gt - num_objects) - annotations['index'] = np.array(index, dtype=np.int32) - annotations['group_ids'] = np.arange(num_gt, dtype=np.int32) - return annotations - - -def _extend_matrix(mat): - mat = np.concatenate([mat, np.array([[0., 0., 0., 1.]])], axis=0) - return mat - - -def get_kitti_image_info(path, - training=True, - label_info=True, - velodyne=False, - calib=False, - with_plane=False, - image_ids=7481, - extend_matrix=True, - num_worker=8, - relative_path=True, - with_imageshape=True): - """ - KITTI annotation format version 2: - { - [optional]points: [N, 3+] point cloud - [optional, for kitti]image: { - image_idx: ... - image_path: ... - image_shape: ... - } - point_cloud: { - num_features: 4 - velodyne_path: ... - } - [optional, for kitti]calib: { - R0_rect: ... - Tr_velo_to_cam: ... - P2: ... - } - annos: { - location: [num_gt, 3] array - dimensions: [num_gt, 3] array - rotation_y: [num_gt] angle array - name: [num_gt] ground truth name array - [optional]difficulty: kitti difficulty - [optional]group_ids: used for multi-part object - } - } - """ - root_path = Path(path) - if not isinstance(image_ids, list): - image_ids = list(range(image_ids)) - - def map_func(idx): - info = {} - pc_info = {'num_features': 4} - calib_info = {} - - image_info = {'image_idx': idx} - annotations = None - if velodyne: - pc_info['velodyne_path'] = get_velodyne_path( - idx, path, training, relative_path) - image_info['image_path'] = get_image_path(idx, path, training, - relative_path) - if with_imageshape: - img_path = image_info['image_path'] - if relative_path: - img_path = str(root_path / img_path) - image_info['image_shape'] = np.array( - io.imread(img_path).shape[:2], dtype=np.int32) - if label_info: - label_path = get_label_path(idx, path, training, relative_path) - if relative_path: - label_path = str(root_path / label_path) - annotations = get_label_anno(label_path) - info['image'] = image_info - info['point_cloud'] = pc_info - if calib: - calib_path = get_calib_path( - idx, path, training, relative_path=False) - with open(calib_path, 'r') as f: - lines = f.readlines() - P0 = np.array([float(info) for info in lines[0].split(' ')[1:13] - ]).reshape([3, 4]) - P1 = np.array([float(info) for info in lines[1].split(' ')[1:13] - ]).reshape([3, 4]) - P2 = np.array([float(info) for info in lines[2].split(' ')[1:13] - ]).reshape([3, 4]) - P3 = np.array([float(info) for info in lines[3].split(' ')[1:13] - ]).reshape([3, 4]) - if extend_matrix: - P0 = _extend_matrix(P0) - P1 = _extend_matrix(P1) - P2 = _extend_matrix(P2) - P3 = _extend_matrix(P3) - R0_rect = np.array([ - float(info) for info in lines[4].split(' ')[1:10] - ]).reshape([3, 3]) - if extend_matrix: - rect_4x4 = np.zeros([4, 4], dtype=R0_rect.dtype) - rect_4x4[3, 3] = 1. 
- rect_4x4[:3, :3] = R0_rect - else: - rect_4x4 = R0_rect - - Tr_velo_to_cam = np.array([ - float(info) for info in lines[5].split(' ')[1:13] - ]).reshape([3, 4]) - Tr_imu_to_velo = np.array([ - float(info) for info in lines[6].split(' ')[1:13] - ]).reshape([3, 4]) - if extend_matrix: - Tr_velo_to_cam = _extend_matrix(Tr_velo_to_cam) - Tr_imu_to_velo = _extend_matrix(Tr_imu_to_velo) - calib_info['P0'] = P0 - calib_info['P1'] = P1 - calib_info['P2'] = P2 - calib_info['P3'] = P3 - calib_info['R0_rect'] = rect_4x4 - calib_info['Tr_velo_to_cam'] = Tr_velo_to_cam - calib_info['Tr_imu_to_velo'] = Tr_imu_to_velo - info['calib'] = calib_info - - if with_plane: - plane_path = get_plane_path(idx, path, training, relative_path) - if relative_path: - plane_path = str(root_path / plane_path) - lines = mmcv.list_from_file(plane_path) - info['plane'] = np.array([float(i) for i in lines[3].split()]) - - if annotations is not None: - info['annos'] = annotations - add_difficulty_to_annos(info) - return info - - with futures.ThreadPoolExecutor(num_worker) as executor: - image_infos = executor.map(map_func, image_ids) - - return list(image_infos) - - -class WaymoInfoGatherer: - """ - Parallel version of waymo dataset information gathering. - Waymo annotation format version like KITTI: - { - [optional]points: [N, 3+] point cloud - [optional, for kitti]image: { - image_idx: ... - image_path: ... - image_shape: ... - } - point_cloud: { - num_features: 6 - velodyne_path: ... - } - [optional, for kitti]calib: { - R0_rect: ... - Tr_velo_to_cam0: ... - P0: ... - } - annos: { - location: [num_gt, 3] array - dimensions: [num_gt, 3] array - rotation_y: [num_gt] angle array - name: [num_gt] ground truth name array - [optional]difficulty: kitti difficulty - [optional]group_ids: used for multi-part object - } - } - """ - - def __init__(self, - path, - training=True, - label_info=True, - velodyne=False, - calib=False, - pose=False, - extend_matrix=True, - num_worker=8, - relative_path=True, - with_imageshape=True, - max_sweeps=5) -> None: - self.path = path - self.training = training - self.label_info = label_info - self.velodyne = velodyne - self.calib = calib - self.pose = pose - self.extend_matrix = extend_matrix - self.num_worker = num_worker - self.relative_path = relative_path - self.with_imageshape = with_imageshape - self.max_sweeps = max_sweeps - - def gather_single(self, idx): - root_path = Path(self.path) - info = {} - pc_info = {'num_features': 6} - calib_info = {} - - image_info = {'image_idx': idx} - annotations = None - if self.velodyne: - pc_info['velodyne_path'] = get_velodyne_path( - idx, - self.path, - self.training, - self.relative_path, - use_prefix_id=True) - with open( - get_timestamp_path( - idx, - self.path, - self.training, - relative_path=False, - use_prefix_id=True)) as f: - info['timestamp'] = np.int64(f.read()) - image_info['image_path'] = get_image_path( - idx, - self.path, - self.training, - self.relative_path, - info_type='image_0', - use_prefix_id=True) - if self.with_imageshape: - img_path = image_info['image_path'] - if self.relative_path: - img_path = str(root_path / img_path) - # io using PIL is significantly faster than skimage - w, h = Image.open(img_path).size - image_info['image_shape'] = np.array((h, w), dtype=np.int32) - if self.label_info: - label_path = get_label_path( - idx, - self.path, - self.training, - self.relative_path, - info_type='label_all', - use_prefix_id=True) - if self.relative_path: - label_path = str(root_path / label_path) - annotations = 
get_label_anno(label_path) - info['image'] = image_info - info['point_cloud'] = pc_info - if self.calib: - calib_path = get_calib_path( - idx, - self.path, - self.training, - relative_path=False, - use_prefix_id=True) - with open(calib_path, 'r') as f: - lines = f.readlines() - P0 = np.array([float(info) for info in lines[0].split(' ')[1:13] - ]).reshape([3, 4]) - P1 = np.array([float(info) for info in lines[1].split(' ')[1:13] - ]).reshape([3, 4]) - P2 = np.array([float(info) for info in lines[2].split(' ')[1:13] - ]).reshape([3, 4]) - P3 = np.array([float(info) for info in lines[3].split(' ')[1:13] - ]).reshape([3, 4]) - P4 = np.array([float(info) for info in lines[4].split(' ')[1:13] - ]).reshape([3, 4]) - if self.extend_matrix: - P0 = _extend_matrix(P0) - P1 = _extend_matrix(P1) - P2 = _extend_matrix(P2) - P3 = _extend_matrix(P3) - P4 = _extend_matrix(P4) - R0_rect = np.array([ - float(info) for info in lines[5].split(' ')[1:10] - ]).reshape([3, 3]) - if self.extend_matrix: - rect_4x4 = np.zeros([4, 4], dtype=R0_rect.dtype) - rect_4x4[3, 3] = 1. - rect_4x4[:3, :3] = R0_rect - else: - rect_4x4 = R0_rect - - Tr_velo_to_cam = np.array([ - float(info) for info in lines[6].split(' ')[1:13] - ]).reshape([3, 4]) - if self.extend_matrix: - Tr_velo_to_cam = _extend_matrix(Tr_velo_to_cam) - calib_info['P0'] = P0 - calib_info['P1'] = P1 - calib_info['P2'] = P2 - calib_info['P3'] = P3 - calib_info['P4'] = P4 - calib_info['R0_rect'] = rect_4x4 - calib_info['Tr_velo_to_cam'] = Tr_velo_to_cam - info['calib'] = calib_info - if self.pose: - pose_path = get_pose_path( - idx, - self.path, - self.training, - relative_path=False, - use_prefix_id=True) - info['pose'] = np.loadtxt(pose_path) - - if annotations is not None: - info['annos'] = annotations - info['annos']['camera_id'] = info['annos'].pop('score') - add_difficulty_to_annos(info) - - sweeps = [] - prev_idx = idx - while len(sweeps) < self.max_sweeps: - prev_info = {} - prev_idx -= 1 - prev_info['velodyne_path'] = get_velodyne_path( - prev_idx, - self.path, - self.training, - self.relative_path, - exist_check=False, - use_prefix_id=True) - if_prev_exists = osp.exists( - Path(self.path) / prev_info['velodyne_path']) - if if_prev_exists: - with open( - get_timestamp_path( - prev_idx, - self.path, - self.training, - relative_path=False, - use_prefix_id=True)) as f: - prev_info['timestamp'] = np.int64(f.read()) - prev_pose_path = get_pose_path( - prev_idx, - self.path, - self.training, - relative_path=False, - use_prefix_id=True) - prev_info['pose'] = np.loadtxt(prev_pose_path) - sweeps.append(prev_info) - else: - break - info['sweeps'] = sweeps - - return info - - def gather(self, image_ids): - if not isinstance(image_ids, list): - image_ids = list(range(image_ids)) - image_infos = mmcv.track_parallel_progress(self.gather_single, - image_ids, self.num_worker) - return list(image_infos) - - -def kitti_anno_to_label_file(annos, folder): - folder = Path(folder) - for anno in annos: - image_idx = anno['metadata']['image_idx'] - label_lines = [] - for j in range(anno['bbox'].shape[0]): - label_dict = { - 'name': anno['name'][j], - 'alpha': anno['alpha'][j], - 'bbox': anno['bbox'][j], - 'location': anno['location'][j], - 'dimensions': anno['dimensions'][j], - 'rotation_y': anno['rotation_y'][j], - 'score': anno['score'][j], - } - label_line = kitti_result_line(label_dict) - label_lines.append(label_line) - label_file = folder / f'{get_image_index_str(image_idx)}.txt' - label_str = '\n'.join(label_lines) - with open(label_file, 'w') as f: - 
f.write(label_str) - - -def add_difficulty_to_annos(info): - min_height = [40, 25, - 25] # minimum height for evaluated groundtruth/detections - max_occlusion = [ - 0, 1, 2 - ] # maximum occlusion level of the groundtruth used for evaluation - max_trunc = [ - 0.15, 0.3, 0.5 - ] # maximum truncation level of the groundtruth used for evaluation - annos = info['annos'] - dims = annos['dimensions'] # lhw format - bbox = annos['bbox'] - height = bbox[:, 3] - bbox[:, 1] - occlusion = annos['occluded'] - truncation = annos['truncated'] - diff = [] - easy_mask = np.ones((len(dims), ), dtype=np.bool) - moderate_mask = np.ones((len(dims), ), dtype=np.bool) - hard_mask = np.ones((len(dims), ), dtype=np.bool) - i = 0 - for h, o, t in zip(height, occlusion, truncation): - if o > max_occlusion[0] or h <= min_height[0] or t > max_trunc[0]: - easy_mask[i] = False - if o > max_occlusion[1] or h <= min_height[1] or t > max_trunc[1]: - moderate_mask[i] = False - if o > max_occlusion[2] or h <= min_height[2] or t > max_trunc[2]: - hard_mask[i] = False - i += 1 - is_easy = easy_mask - is_moderate = np.logical_xor(easy_mask, moderate_mask) - is_hard = np.logical_xor(hard_mask, moderate_mask) - - for i in range(len(dims)): - if is_easy[i]: - diff.append(0) - elif is_moderate[i]: - diff.append(1) - elif is_hard[i]: - diff.append(2) - else: - diff.append(-1) - annos['difficulty'] = np.array(diff, np.int32) - return diff - - -def kitti_result_line(result_dict, precision=4): - prec_float = '{' + ':.{}f'.format(precision) + '}' - res_line = [] - all_field_default = OrderedDict([ - ('name', None), - ('truncated', -1), - ('occluded', -1), - ('alpha', -10), - ('bbox', None), - ('dimensions', [-1, -1, -1]), - ('location', [-1000, -1000, -1000]), - ('rotation_y', -10), - ('score', 0.0), - ]) - res_dict = [(key, None) for key, val in all_field_default.items()] - res_dict = OrderedDict(res_dict) - for key, val in result_dict.items(): - if all_field_default[key] is None and val is None: - raise ValueError('you must specify a value for {}'.format(key)) - res_dict[key] = val - - for key, val in res_dict.items(): - if key == 'name': - res_line.append(val) - elif key in ['truncated', 'alpha', 'rotation_y', 'score']: - if val is None: - res_line.append(str(all_field_default[key])) - else: - res_line.append(prec_float.format(val)) - elif key == 'occluded': - if val is None: - res_line.append(str(all_field_default[key])) - else: - res_line.append('{}'.format(val)) - elif key in ['bbox', 'dimensions', 'location']: - if val is None: - res_line += [str(v) for v in all_field_default[key]] - else: - res_line += [prec_float.format(v) for v in val] - else: - raise ValueError('unknown key. supported key:{}'.format( - res_dict.keys())) - return ' '.join(res_line) diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/lyft_converter.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/lyft_converter.py deleted file mode 100644 index c6a89d0d..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/lyft_converter.py +++ /dev/null @@ -1,271 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os -from logging import warning -from os import path as osp - -import mmcv -import numpy as np -from lyft_dataset_sdk.lyftdataset import LyftDataset as Lyft -from pyquaternion import Quaternion - -from mmdet3d.datasets import LyftDataset -from .nuscenes_converter import (get_2d_boxes, get_available_scenes, - obtain_sensor2top) - -lyft_categories = ('car', 'truck', 'bus', 'emergency_vehicle', 'other_vehicle', - 'motorcycle', 'bicycle', 'pedestrian', 'animal') - - -def create_lyft_infos(root_path, - info_prefix, - version='v1.01-train', - max_sweeps=10): - """Create info file of lyft dataset. - - Given the raw data, generate its related info file in pkl format. - - Args: - root_path (str): Path of the data root. - info_prefix (str): Prefix of the info file to be generated. - version (str, optional): Version of the data. - Default: 'v1.01-train'. - max_sweeps (int, optional): Max number of sweeps. - Default: 10. - """ - lyft = Lyft( - data_path=osp.join(root_path, version), - json_path=osp.join(root_path, version, version), - verbose=True) - available_vers = ['v1.01-train', 'v1.01-test'] - assert version in available_vers - if version == 'v1.01-train': - train_scenes = mmcv.list_from_file('data/lyft/train.txt') - val_scenes = mmcv.list_from_file('data/lyft/val.txt') - elif version == 'v1.01-test': - train_scenes = mmcv.list_from_file('data/lyft/test.txt') - val_scenes = [] - else: - raise ValueError('unknown') - - # filter existing scenes. - available_scenes = get_available_scenes(lyft) - available_scene_names = [s['name'] for s in available_scenes] - train_scenes = list( - filter(lambda x: x in available_scene_names, train_scenes)) - val_scenes = list(filter(lambda x: x in available_scene_names, val_scenes)) - train_scenes = set([ - available_scenes[available_scene_names.index(s)]['token'] - for s in train_scenes - ]) - val_scenes = set([ - available_scenes[available_scene_names.index(s)]['token'] - for s in val_scenes - ]) - - test = 'test' in version - if test: - print(f'test scene: {len(train_scenes)}') - else: - print(f'train scene: {len(train_scenes)}, \ - val scene: {len(val_scenes)}') - train_lyft_infos, val_lyft_infos = _fill_trainval_infos( - lyft, train_scenes, val_scenes, test, max_sweeps=max_sweeps) - - metadata = dict(version=version) - if test: - print(f'test sample: {len(train_lyft_infos)}') - data = dict(infos=train_lyft_infos, metadata=metadata) - info_name = f'{info_prefix}_infos_test' - info_path = osp.join(root_path, f'{info_name}.pkl') - mmcv.dump(data, info_path) - else: - print(f'train sample: {len(train_lyft_infos)}, \ - val sample: {len(val_lyft_infos)}') - data = dict(infos=train_lyft_infos, metadata=metadata) - train_info_name = f'{info_prefix}_infos_train' - info_path = osp.join(root_path, f'{train_info_name}.pkl') - mmcv.dump(data, info_path) - data['infos'] = val_lyft_infos - val_info_name = f'{info_prefix}_infos_val' - info_val_path = osp.join(root_path, f'{val_info_name}.pkl') - mmcv.dump(data, info_val_path) - - -def _fill_trainval_infos(lyft, - train_scenes, - val_scenes, - test=False, - max_sweeps=10): - """Generate the train/val infos from the raw data. - - Args: - lyft (:obj:`LyftDataset`): Dataset class in the Lyft dataset. - train_scenes (list[str]): Basic information of training scenes. - val_scenes (list[str]): Basic information of validation scenes. - test (bool, optional): Whether use the test mode. In the test mode, no - annotations can be accessed. Default: False. - max_sweeps (int, optional): Max number of sweeps. Default: 10. 
- - Returns: - tuple[list[dict]]: Information of training set and - validation set that will be saved to the info file. - """ - train_lyft_infos = [] - val_lyft_infos = [] - - for sample in mmcv.track_iter_progress(lyft.sample): - lidar_token = sample['data']['LIDAR_TOP'] - sd_rec = lyft.get('sample_data', sample['data']['LIDAR_TOP']) - cs_record = lyft.get('calibrated_sensor', - sd_rec['calibrated_sensor_token']) - pose_record = lyft.get('ego_pose', sd_rec['ego_pose_token']) - abs_lidar_path, boxes, _ = lyft.get_sample_data(lidar_token) - # nuScenes devkit returns more convenient relative paths while - # lyft devkit returns absolute paths - abs_lidar_path = str(abs_lidar_path) # absolute path - lidar_path = abs_lidar_path.split(f'{os.getcwd()}/')[-1] - # relative path - - mmcv.check_file_exist(lidar_path) - - info = { - 'lidar_path': lidar_path, - 'token': sample['token'], - 'sweeps': [], - 'cams': dict(), - 'lidar2ego_translation': cs_record['translation'], - 'lidar2ego_rotation': cs_record['rotation'], - 'ego2global_translation': pose_record['translation'], - 'ego2global_rotation': pose_record['rotation'], - 'timestamp': sample['timestamp'], - } - - l2e_r = info['lidar2ego_rotation'] - l2e_t = info['lidar2ego_translation'] - e2g_r = info['ego2global_rotation'] - e2g_t = info['ego2global_translation'] - l2e_r_mat = Quaternion(l2e_r).rotation_matrix - e2g_r_mat = Quaternion(e2g_r).rotation_matrix - - # obtain 6 image's information per frame - camera_types = [ - 'CAM_FRONT', - 'CAM_FRONT_RIGHT', - 'CAM_FRONT_LEFT', - 'CAM_BACK', - 'CAM_BACK_LEFT', - 'CAM_BACK_RIGHT', - ] - for cam in camera_types: - cam_token = sample['data'][cam] - cam_path, _, cam_intrinsic = lyft.get_sample_data(cam_token) - cam_info = obtain_sensor2top(lyft, cam_token, l2e_t, l2e_r_mat, - e2g_t, e2g_r_mat, cam) - cam_info.update(cam_intrinsic=cam_intrinsic) - info['cams'].update({cam: cam_info}) - - # obtain sweeps for a single key-frame - sd_rec = lyft.get('sample_data', sample['data']['LIDAR_TOP']) - sweeps = [] - while len(sweeps) < max_sweeps: - if not sd_rec['prev'] == '': - sweep = obtain_sensor2top(lyft, sd_rec['prev'], l2e_t, - l2e_r_mat, e2g_t, e2g_r_mat, 'lidar') - sweeps.append(sweep) - sd_rec = lyft.get('sample_data', sd_rec['prev']) - else: - break - info['sweeps'] = sweeps - # obtain annotation - if not test: - annotations = [ - lyft.get('sample_annotation', token) - for token in sample['anns'] - ] - locs = np.array([b.center for b in boxes]).reshape(-1, 3) - dims = np.array([b.wlh for b in boxes]).reshape(-1, 3) - rots = np.array([b.orientation.yaw_pitch_roll[0] - for b in boxes]).reshape(-1, 1) - - names = [b.name for b in boxes] - for i in range(len(names)): - if names[i] in LyftDataset.NameMapping: - names[i] = LyftDataset.NameMapping[names[i]] - names = np.array(names) - - # we need to convert box size to - # the format of our lidar coordinate system - # which is x_size, y_size, z_size (corresponding to l, w, h) - gt_boxes = np.concatenate([locs, dims[:, [1, 0, 2]], rots], axis=1) - assert len(gt_boxes) == len( - annotations), f'{len(gt_boxes)}, {len(annotations)}' - info['gt_boxes'] = gt_boxes - info['gt_names'] = names - info['num_lidar_pts'] = np.array( - [a['num_lidar_pts'] for a in annotations]) - info['num_radar_pts'] = np.array( - [a['num_radar_pts'] for a in annotations]) - - if sample['scene_token'] in train_scenes: - train_lyft_infos.append(info) - else: - val_lyft_infos.append(info) - - return train_lyft_infos, val_lyft_infos - - -def export_2d_annotation(root_path, info_path, version): - 
"""Export 2d annotation from the info file and raw data. - - Args: - root_path (str): Root path of the raw data. - info_path (str): Path of the info file. - version (str): Dataset version. - """ - warning.warn('DeprecationWarning: 2D annotations are not used on the ' - 'Lyft dataset. The function export_2d_annotation will be ' - 'deprecated.') - # get bbox annotations for camera - camera_types = [ - 'CAM_FRONT', - 'CAM_FRONT_RIGHT', - 'CAM_FRONT_LEFT', - 'CAM_BACK', - 'CAM_BACK_LEFT', - 'CAM_BACK_RIGHT', - ] - lyft_infos = mmcv.load(info_path)['infos'] - lyft = Lyft( - data_path=osp.join(root_path, version), - json_path=osp.join(root_path, version, version), - verbose=True) - # info_2d_list = [] - cat2Ids = [ - dict(id=lyft_categories.index(cat_name), name=cat_name) - for cat_name in lyft_categories - ] - coco_ann_id = 0 - coco_2d_dict = dict(annotations=[], images=[], categories=cat2Ids) - for info in mmcv.track_iter_progress(lyft_infos): - for cam in camera_types: - cam_info = info['cams'][cam] - coco_infos = get_2d_boxes( - lyft, - cam_info['sample_data_token'], - visibilities=['', '1', '2', '3', '4']) - (height, width, _) = mmcv.imread(cam_info['data_path']).shape - coco_2d_dict['images'].append( - dict( - file_name=cam_info['data_path'], - id=cam_info['sample_data_token'], - width=width, - height=height)) - for coco_info in coco_infos: - if coco_info is None: - continue - # add an empty key for coco format - coco_info['segmentation'] = [] - coco_info['id'] = coco_ann_id - coco_2d_dict['annotations'].append(coco_info) - coco_ann_id += 1 - mmcv.dump(coco_2d_dict, f'{info_path[:-4]}.coco.json') diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/lyft_data_fixer.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/lyft_data_fixer.py deleted file mode 100644 index 55103515..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/lyft_data_fixer.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os - -import numpy as np - - -def fix_lyft(root_folder='./data/lyft', version='v1.01'): - # refer to https://www.kaggle.com/c/3d-object-detection-for-autonomous-vehicles/discussion/110000 # noqa - lidar_path = 'lidar/host-a011_lidar1_1233090652702363606.bin' - root_folder = os.path.join(root_folder, f'{version}-train') - lidar_path = os.path.join(root_folder, lidar_path) - assert os.path.isfile(lidar_path), f'Please download the complete Lyft ' \ - f'dataset and make sure {lidar_path} is present.' 
- points = np.fromfile(lidar_path, dtype=np.float32, count=-1) - try: - points.reshape([-1, 5]) - print(f'This fix is not required for version {version}.') - except ValueError: - new_points = np.array(list(points) + [100.0, 1.0], dtype='float32') - new_points.tofile(lidar_path) - print(f'Appended 100.0 and 1.0 to the end of {lidar_path}.') - - -parser = argparse.ArgumentParser(description='Lyft dataset fixer arg parser') -parser.add_argument( - '--root-folder', - type=str, - default='./data/lyft', - help='specify the root path of Lyft dataset') -parser.add_argument( - '--version', - type=str, - default='v1.01', - help='specify Lyft dataset version') -args = parser.parse_args() - -if __name__ == '__main__': - fix_lyft(root_folder=args.root_folder, version=args.version) diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/nuimage_converter.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/nuimage_converter.py deleted file mode 100644 index a46015a1..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/nuimage_converter.py +++ /dev/null @@ -1,226 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import base64 -from os import path as osp - -import mmcv -import numpy as np -from nuimages import NuImages -from nuimages.utils.utils import mask_decode, name_to_index_mapping - -nus_categories = ('car', 'truck', 'trailer', 'bus', 'construction_vehicle', - 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', - 'barrier') - -NAME_MAPPING = { - 'movable_object.barrier': 'barrier', - 'vehicle.bicycle': 'bicycle', - 'vehicle.bus.bendy': 'bus', - 'vehicle.bus.rigid': 'bus', - 'vehicle.car': 'car', - 'vehicle.construction': 'construction_vehicle', - 'vehicle.motorcycle': 'motorcycle', - 'human.pedestrian.adult': 'pedestrian', - 'human.pedestrian.child': 'pedestrian', - 'human.pedestrian.construction_worker': 'pedestrian', - 'human.pedestrian.police_officer': 'pedestrian', - 'movable_object.trafficcone': 'traffic_cone', - 'vehicle.trailer': 'trailer', - 'vehicle.truck': 'truck', -} - - -def parse_args(): - parser = argparse.ArgumentParser(description='Data converter arg parser') - parser.add_argument( - '--data-root', - type=str, - default='./data/nuimages', - help='specify the root path of dataset') - parser.add_argument( - '--version', - type=str, - nargs='+', - default=['v1.0-mini'], - required=False, - help='specify the dataset version') - parser.add_argument( - '--out-dir', - type=str, - default='./data/nuimages/annotations/', - required=False, - help='path to save the exported json') - parser.add_argument( - '--nproc', - type=int, - default=4, - required=False, - help='workers to process semantic masks') - parser.add_argument('--extra-tag', type=str, default='nuimages') - args = parser.parse_args() - return args - - -def get_img_annos(nuim, img_info, cat2id, out_dir, data_root, seg_root): - """Get semantic segmentation map for an image. - - Args: - nuim (obj:`NuImages`): NuImages dataset object - img_info (dict): Meta information of img - - Returns: - np.ndarray: Semantic segmentation map of the image - """ - sd_token = img_info['token'] - image_id = img_info['id'] - name_to_index = name_to_index_mapping(nuim.category) - - # Get image data. - width, height = img_info['width'], img_info['height'] - semseg_mask = np.zeros((height, width)).astype('uint8') - - # Load stuff / surface regions. - surface_anns = [ - o for o in nuim.surface_ann if o['sample_data_token'] == sd_token - ] - - # Draw stuff / surface regions. 
- for ann in surface_anns: - # Get color and mask. - category_token = ann['category_token'] - category_name = nuim.get('category', category_token)['name'] - if ann['mask'] is None: - continue - mask = mask_decode(ann['mask']) - - # Draw mask for semantic segmentation. - semseg_mask[mask == 1] = name_to_index[category_name] - - # Load object instances. - object_anns = [ - o for o in nuim.object_ann if o['sample_data_token'] == sd_token - ] - - # Sort by token to ensure that objects always appear in the - # instance mask in the same order. - object_anns = sorted(object_anns, key=lambda k: k['token']) - - # Draw object instances. - # The 0 index is reserved for background; thus, the instances - # should start from index 1. - annotations = [] - for i, ann in enumerate(object_anns, start=1): - # Get color, box, mask and name. - category_token = ann['category_token'] - category_name = nuim.get('category', category_token)['name'] - if ann['mask'] is None: - continue - mask = mask_decode(ann['mask']) - - # Draw masks for semantic segmentation and instance segmentation. - semseg_mask[mask == 1] = name_to_index[category_name] - - if category_name in NAME_MAPPING: - cat_name = NAME_MAPPING[category_name] - cat_id = cat2id[cat_name] - - x_min, y_min, x_max, y_max = ann['bbox'] - # encode calibrated instance mask - mask_anno = dict() - mask_anno['counts'] = base64.b64decode( - ann['mask']['counts']).decode() - mask_anno['size'] = ann['mask']['size'] - - data_anno = dict( - image_id=image_id, - category_id=cat_id, - bbox=[x_min, y_min, x_max - x_min, y_max - y_min], - area=(x_max - x_min) * (y_max - y_min), - segmentation=mask_anno, - iscrowd=0) - annotations.append(data_anno) - - # after process, save semantic masks - img_filename = img_info['file_name'] - seg_filename = img_filename.replace('jpg', 'png') - seg_filename = osp.join(seg_root, seg_filename) - mmcv.imwrite(semseg_mask, seg_filename) - return annotations, np.max(semseg_mask) - - -def export_nuim_to_coco(nuim, data_root, out_dir, extra_tag, version, nproc): - print('Process category information') - categories = [] - categories = [ - dict(id=nus_categories.index(cat_name), name=cat_name) - for cat_name in nus_categories - ] - cat2id = {k_v['name']: k_v['id'] for k_v in categories} - - images = [] - print('Process image meta information...') - for sample_info in mmcv.track_iter_progress(nuim.sample_data): - if sample_info['is_key_frame']: - img_idx = len(images) - images.append( - dict( - id=img_idx, - token=sample_info['token'], - file_name=sample_info['filename'], - width=sample_info['width'], - height=sample_info['height'])) - - seg_root = f'{out_dir}semantic_masks' - mmcv.mkdir_or_exist(seg_root) - mmcv.mkdir_or_exist(osp.join(data_root, 'calibrated')) - - global process_img_anno - - def process_img_anno(img_info): - single_img_annos, max_cls_id = get_img_annos(nuim, img_info, cat2id, - out_dir, data_root, - seg_root) - return single_img_annos, max_cls_id - - print('Process img annotations...') - if nproc > 1: - outputs = mmcv.track_parallel_progress( - process_img_anno, images, nproc=nproc) - else: - outputs = [] - for img_info in mmcv.track_iter_progress(images): - outputs.append(process_img_anno(img_info)) - - # Determine the index of object annotation - print('Process annotation information...') - annotations = [] - max_cls_ids = [] - for single_img_annos, max_cls_id in outputs: - max_cls_ids.append(max_cls_id) - for img_anno in single_img_annos: - img_anno.update(id=len(annotations)) - annotations.append(img_anno) - - max_cls_id = 
max(max_cls_ids) - print(f'Max ID of class in the semantic map: {max_cls_id}') - - coco_format_json = dict( - images=images, annotations=annotations, categories=categories) - - mmcv.mkdir_or_exist(out_dir) - out_file = osp.join(out_dir, f'{extra_tag}_{version}.json') - print(f'Annotation dumped to {out_file}') - mmcv.dump(coco_format_json, out_file) - - -def main(): - args = parse_args() - for version in args.version: - nuim = NuImages( - dataroot=args.data_root, version=version, verbose=True, lazy=True) - export_nuim_to_coco(nuim, args.data_root, args.out_dir, args.extra_tag, - version, args.nproc) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/nuscenes_converter.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/nuscenes_converter.py deleted file mode 100644 index c6140fcc..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/nuscenes_converter.py +++ /dev/null @@ -1,628 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -from collections import OrderedDict -from os import path as osp -from typing import List, Tuple, Union - -import mmcv -import numpy as np -from nuscenes.nuscenes import NuScenes -from nuscenes.utils.geometry_utils import view_points -from pyquaternion import Quaternion -from shapely.geometry import MultiPoint, box - -from mmdet3d.core.bbox import points_cam2img -from mmdet3d.datasets import NuScenesDataset - -nus_categories = ('car', 'truck', 'trailer', 'bus', 'construction_vehicle', - 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', - 'barrier') - -nus_attributes = ('cycle.with_rider', 'cycle.without_rider', - 'pedestrian.moving', 'pedestrian.standing', - 'pedestrian.sitting_lying_down', 'vehicle.moving', - 'vehicle.parked', 'vehicle.stopped', 'None') - - -def create_nuscenes_infos(root_path, - info_prefix, - version='v1.0-trainval', - max_sweeps=10): - """Create info file of nuscene dataset. - - Given the raw data, generate its related info file in pkl format. - - Args: - root_path (str): Path of the data root. - info_prefix (str): Prefix of the info file to be generated. - version (str, optional): Version of the data. - Default: 'v1.0-trainval'. - max_sweeps (int, optional): Max number of sweeps. - Default: 10. - """ - from nuscenes.nuscenes import NuScenes - nusc = NuScenes(version=version, dataroot=root_path, verbose=True) - from nuscenes.utils import splits - available_vers = ['v1.0-trainval', 'v1.0-test', 'v1.0-mini'] - assert version in available_vers - if version == 'v1.0-trainval': - train_scenes = splits.train - val_scenes = splits.val - elif version == 'v1.0-test': - train_scenes = splits.test - val_scenes = [] - elif version == 'v1.0-mini': - train_scenes = splits.mini_train - val_scenes = splits.mini_val - else: - raise ValueError('unknown') - - # filter existing scenes. 
- available_scenes = get_available_scenes(nusc) - available_scene_names = [s['name'] for s in available_scenes] - train_scenes = list( - filter(lambda x: x in available_scene_names, train_scenes)) - val_scenes = list(filter(lambda x: x in available_scene_names, val_scenes)) - train_scenes = set([ - available_scenes[available_scene_names.index(s)]['token'] - for s in train_scenes - ]) - val_scenes = set([ - available_scenes[available_scene_names.index(s)]['token'] - for s in val_scenes - ]) - - test = 'test' in version - if test: - print('test scene: {}'.format(len(train_scenes))) - else: - print('train scene: {}, val scene: {}'.format( - len(train_scenes), len(val_scenes))) - train_nusc_infos, val_nusc_infos = _fill_trainval_infos( - nusc, train_scenes, val_scenes, test, max_sweeps=max_sweeps) - - metadata = dict(version=version) - if test: - print('test sample: {}'.format(len(train_nusc_infos))) - data = dict(infos=train_nusc_infos, metadata=metadata) - info_path = osp.join(root_path, - '{}_infos_test.pkl'.format(info_prefix)) - mmcv.dump(data, info_path) - else: - print('train sample: {}, val sample: {}'.format( - len(train_nusc_infos), len(val_nusc_infos))) - data = dict(infos=train_nusc_infos, metadata=metadata) - info_path = osp.join(root_path, - '{}_infos_train.pkl'.format(info_prefix)) - mmcv.dump(data, info_path) - data['infos'] = val_nusc_infos - info_val_path = osp.join(root_path, - '{}_infos_val.pkl'.format(info_prefix)) - mmcv.dump(data, info_val_path) - - -def get_available_scenes(nusc): - """Get available scenes from the input nuscenes class. - - Given the raw data, get the information of available scenes for - further info generation. - - Args: - nusc (class): Dataset class in the nuScenes dataset. - - Returns: - available_scenes (list[dict]): List of basic information for the - available scenes. - """ - available_scenes = [] - print('total scene num: {}'.format(len(nusc.scene))) - for scene in nusc.scene: - scene_token = scene['token'] - scene_rec = nusc.get('scene', scene_token) - sample_rec = nusc.get('sample', scene_rec['first_sample_token']) - sd_rec = nusc.get('sample_data', sample_rec['data']['LIDAR_TOP']) - has_more_frames = True - scene_not_exist = False - while has_more_frames: - lidar_path, boxes, _ = nusc.get_sample_data(sd_rec['token']) - lidar_path = str(lidar_path) - if os.getcwd() in lidar_path: - # path from lyftdataset is absolute path - lidar_path = lidar_path.split(f'{os.getcwd()}/')[-1] - # relative path - if not mmcv.is_filepath(lidar_path): - scene_not_exist = True - break - else: - break - if scene_not_exist: - continue - available_scenes.append(scene) - print('exist scene num: {}'.format(len(available_scenes))) - return available_scenes - - -def _fill_trainval_infos(nusc, - train_scenes, - val_scenes, - test=False, - max_sweeps=10): - """Generate the train/val infos from the raw data. - - Args: - nusc (:obj:`NuScenes`): Dataset class in the nuScenes dataset. - train_scenes (list[str]): Basic information of training scenes. - val_scenes (list[str]): Basic information of validation scenes. - test (bool, optional): Whether use the test mode. In test mode, no - annotations can be accessed. Default: False. - max_sweeps (int, optional): Max number of sweeps. Default: 10. - - Returns: - tuple[list[dict]]: Information of training set and validation set - that will be saved to the info file. 
- """ - train_nusc_infos = [] - val_nusc_infos = [] - - for sample in mmcv.track_iter_progress(nusc.sample): - lidar_token = sample['data']['LIDAR_TOP'] - sd_rec = nusc.get('sample_data', sample['data']['LIDAR_TOP']) - cs_record = nusc.get('calibrated_sensor', - sd_rec['calibrated_sensor_token']) - pose_record = nusc.get('ego_pose', sd_rec['ego_pose_token']) - lidar_path, boxes, _ = nusc.get_sample_data(lidar_token) - - mmcv.check_file_exist(lidar_path) - - info = { - 'lidar_path': lidar_path, - 'token': sample['token'], - 'sweeps': [], - 'cams': dict(), - 'lidar2ego_translation': cs_record['translation'], - 'lidar2ego_rotation': cs_record['rotation'], - 'ego2global_translation': pose_record['translation'], - 'ego2global_rotation': pose_record['rotation'], - 'timestamp': sample['timestamp'], - } - - l2e_r = info['lidar2ego_rotation'] - l2e_t = info['lidar2ego_translation'] - e2g_r = info['ego2global_rotation'] - e2g_t = info['ego2global_translation'] - l2e_r_mat = Quaternion(l2e_r).rotation_matrix - e2g_r_mat = Quaternion(e2g_r).rotation_matrix - - # obtain 6 image's information per frame - camera_types = [ - 'CAM_FRONT', - 'CAM_FRONT_RIGHT', - 'CAM_FRONT_LEFT', - 'CAM_BACK', - 'CAM_BACK_LEFT', - 'CAM_BACK_RIGHT', - ] - for cam in camera_types: - cam_token = sample['data'][cam] - cam_path, _, cam_intrinsic = nusc.get_sample_data(cam_token) - cam_info = obtain_sensor2top(nusc, cam_token, l2e_t, l2e_r_mat, - e2g_t, e2g_r_mat, cam) - cam_info.update(cam_intrinsic=cam_intrinsic) - info['cams'].update({cam: cam_info}) - - # obtain sweeps for a single key-frame - sd_rec = nusc.get('sample_data', sample['data']['LIDAR_TOP']) - sweeps = [] - while len(sweeps) < max_sweeps: - if not sd_rec['prev'] == '': - sweep = obtain_sensor2top(nusc, sd_rec['prev'], l2e_t, - l2e_r_mat, e2g_t, e2g_r_mat, 'lidar') - sweeps.append(sweep) - sd_rec = nusc.get('sample_data', sd_rec['prev']) - else: - break - info['sweeps'] = sweeps - # obtain annotation - if not test: - annotations = [ - nusc.get('sample_annotation', token) - for token in sample['anns'] - ] - locs = np.array([b.center for b in boxes]).reshape(-1, 3) - dims = np.array([b.wlh for b in boxes]).reshape(-1, 3) - rots = np.array([b.orientation.yaw_pitch_roll[0] - for b in boxes]).reshape(-1, 1) - velocity = np.array( - [nusc.box_velocity(token)[:2] for token in sample['anns']]) - valid_flag = np.array( - [(anno['num_lidar_pts'] + anno['num_radar_pts']) > 0 - for anno in annotations], - dtype=bool).reshape(-1) - # convert velo from global to lidar - for i in range(len(boxes)): - velo = np.array([*velocity[i], 0.0]) - velo = velo @ np.linalg.inv(e2g_r_mat).T @ np.linalg.inv( - l2e_r_mat).T - velocity[i] = velo[:2] - - names = [b.name for b in boxes] - for i in range(len(names)): - if names[i] in NuScenesDataset.NameMapping: - names[i] = NuScenesDataset.NameMapping[names[i]] - names = np.array(names) - # we need to convert box size to - # the format of our lidar coordinate system - # which is x_size, y_size, z_size (corresponding to l, w, h) - gt_boxes = np.concatenate([locs, dims[:, [1, 0, 2]], rots], axis=1) - assert len(gt_boxes) == len( - annotations), f'{len(gt_boxes)}, {len(annotations)}' - info['gt_boxes'] = gt_boxes - info['gt_names'] = names - info['gt_velocity'] = velocity.reshape(-1, 2) - info['num_lidar_pts'] = np.array( - [a['num_lidar_pts'] for a in annotations]) - info['num_radar_pts'] = np.array( - [a['num_radar_pts'] for a in annotations]) - info['valid_flag'] = valid_flag - - if sample['scene_token'] in train_scenes: - 
train_nusc_infos.append(info) - else: - val_nusc_infos.append(info) - - return train_nusc_infos, val_nusc_infos - - -def obtain_sensor2top(nusc, - sensor_token, - l2e_t, - l2e_r_mat, - e2g_t, - e2g_r_mat, - sensor_type='lidar'): - """Obtain the info with RT matric from general sensor to Top LiDAR. - - Args: - nusc (class): Dataset class in the nuScenes dataset. - sensor_token (str): Sample data token corresponding to the - specific sensor type. - l2e_t (np.ndarray): Translation from lidar to ego in shape (1, 3). - l2e_r_mat (np.ndarray): Rotation matrix from lidar to ego - in shape (3, 3). - e2g_t (np.ndarray): Translation from ego to global in shape (1, 3). - e2g_r_mat (np.ndarray): Rotation matrix from ego to global - in shape (3, 3). - sensor_type (str, optional): Sensor to calibrate. Default: 'lidar'. - - Returns: - sweep (dict): Sweep information after transformation. - """ - sd_rec = nusc.get('sample_data', sensor_token) - cs_record = nusc.get('calibrated_sensor', - sd_rec['calibrated_sensor_token']) - pose_record = nusc.get('ego_pose', sd_rec['ego_pose_token']) - data_path = str(nusc.get_sample_data_path(sd_rec['token'])) - if os.getcwd() in data_path: # path from lyftdataset is absolute path - data_path = data_path.split(f'{os.getcwd()}/')[-1] # relative path - sweep = { - 'data_path': data_path, - 'type': sensor_type, - 'sample_data_token': sd_rec['token'], - 'sensor2ego_translation': cs_record['translation'], - 'sensor2ego_rotation': cs_record['rotation'], - 'ego2global_translation': pose_record['translation'], - 'ego2global_rotation': pose_record['rotation'], - 'timestamp': sd_rec['timestamp'] - } - l2e_r_s = sweep['sensor2ego_rotation'] - l2e_t_s = sweep['sensor2ego_translation'] - e2g_r_s = sweep['ego2global_rotation'] - e2g_t_s = sweep['ego2global_translation'] - - # obtain the RT from sensor to Top LiDAR - # sweep->ego->global->ego'->lidar - l2e_r_s_mat = Quaternion(l2e_r_s).rotation_matrix - e2g_r_s_mat = Quaternion(e2g_r_s).rotation_matrix - R = (l2e_r_s_mat.T @ e2g_r_s_mat.T) @ ( - np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T) - T = (l2e_t_s @ e2g_r_s_mat.T + e2g_t_s) @ ( - np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T) - T -= e2g_t @ (np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T - ) + l2e_t @ np.linalg.inv(l2e_r_mat).T - sweep['sensor2lidar_rotation'] = R.T # points @ R.T + T - sweep['sensor2lidar_translation'] = T - return sweep - - -def export_2d_annotation(root_path, info_path, version, mono3d=True): - """Export 2d annotation from the info file and raw data. - - Args: - root_path (str): Root path of the raw data. - info_path (str): Path of the info file. - version (str): Dataset version. - mono3d (bool, optional): Whether to export mono3d annotation. - Default: True. 
- """ - # get bbox annotations for camera - camera_types = [ - 'CAM_FRONT', - 'CAM_FRONT_RIGHT', - 'CAM_FRONT_LEFT', - 'CAM_BACK', - 'CAM_BACK_LEFT', - 'CAM_BACK_RIGHT', - ] - nusc_infos = mmcv.load(info_path)['infos'] - nusc = NuScenes(version=version, dataroot=root_path, verbose=True) - # info_2d_list = [] - cat2Ids = [ - dict(id=nus_categories.index(cat_name), name=cat_name) - for cat_name in nus_categories - ] - coco_ann_id = 0 - coco_2d_dict = dict(annotations=[], images=[], categories=cat2Ids) - for info in mmcv.track_iter_progress(nusc_infos): - for cam in camera_types: - cam_info = info['cams'][cam] - coco_infos = get_2d_boxes( - nusc, - cam_info['sample_data_token'], - visibilities=['', '1', '2', '3', '4'], - mono3d=mono3d) - (height, width, _) = mmcv.imread(cam_info['data_path']).shape - coco_2d_dict['images'].append( - dict( - file_name=cam_info['data_path'].split('data/nuscenes/') - [-1], - id=cam_info['sample_data_token'], - token=info['token'], - cam2ego_rotation=cam_info['sensor2ego_rotation'], - cam2ego_translation=cam_info['sensor2ego_translation'], - ego2global_rotation=info['ego2global_rotation'], - ego2global_translation=info['ego2global_translation'], - cam_intrinsic=cam_info['cam_intrinsic'], - width=width, - height=height)) - for coco_info in coco_infos: - if coco_info is None: - continue - # add an empty key for coco format - coco_info['segmentation'] = [] - coco_info['id'] = coco_ann_id - coco_2d_dict['annotations'].append(coco_info) - coco_ann_id += 1 - if mono3d: - json_prefix = f'{info_path[:-4]}_mono3d' - else: - json_prefix = f'{info_path[:-4]}' - mmcv.dump(coco_2d_dict, f'{json_prefix}.coco.json') - - -def get_2d_boxes(nusc, - sample_data_token: str, - visibilities: List[str], - mono3d=True): - """Get the 2D annotation records for a given `sample_data_token`. - - Args: - sample_data_token (str): Sample data token belonging to a camera - keyframe. - visibilities (list[str]): Visibility filter. - mono3d (bool): Whether to get boxes with mono3d annotation. - - Return: - list[dict]: List of 2D annotation record that belongs to the input - `sample_data_token`. - """ - - # Get the sample data and the sample corresponding to that sample data. - sd_rec = nusc.get('sample_data', sample_data_token) - - assert sd_rec[ - 'sensor_modality'] == 'camera', 'Error: get_2d_boxes only works' \ - ' for camera sample_data!' - if not sd_rec['is_key_frame']: - raise ValueError( - 'The 2D re-projections are available only for keyframes.') - - s_rec = nusc.get('sample', sd_rec['sample_token']) - - # Get the calibrated sensor and ego pose - # record to get the transformation matrices. - cs_rec = nusc.get('calibrated_sensor', sd_rec['calibrated_sensor_token']) - pose_rec = nusc.get('ego_pose', sd_rec['ego_pose_token']) - camera_intrinsic = np.array(cs_rec['camera_intrinsic']) - - # Get all the annotation with the specified visibilties. - ann_recs = [ - nusc.get('sample_annotation', token) for token in s_rec['anns'] - ] - ann_recs = [ - ann_rec for ann_rec in ann_recs - if (ann_rec['visibility_token'] in visibilities) - ] - - repro_recs = [] - - for ann_rec in ann_recs: - # Augment sample_annotation with token information. - ann_rec['sample_annotation_token'] = ann_rec['token'] - ann_rec['sample_data_token'] = sample_data_token - - # Get the box in global coordinates. - box = nusc.get_box(ann_rec['token']) - - # Move them to the ego-pose frame. 
- box.translate(-np.array(pose_rec['translation'])) - box.rotate(Quaternion(pose_rec['rotation']).inverse) - - # Move them to the calibrated sensor frame. - box.translate(-np.array(cs_rec['translation'])) - box.rotate(Quaternion(cs_rec['rotation']).inverse) - - # Filter out the corners that are not in front of the calibrated - # sensor. - corners_3d = box.corners() - in_front = np.argwhere(corners_3d[2, :] > 0).flatten() - corners_3d = corners_3d[:, in_front] - - # Project 3d box to 2d. - corner_coords = view_points(corners_3d, camera_intrinsic, - True).T[:, :2].tolist() - - # Keep only corners that fall within the image. - final_coords = post_process_coords(corner_coords) - - # Skip if the convex hull of the re-projected corners - # does not intersect the image canvas. - if final_coords is None: - continue - else: - min_x, min_y, max_x, max_y = final_coords - - # Generate dictionary record to be included in the .json file. - repro_rec = generate_record(ann_rec, min_x, min_y, max_x, max_y, - sample_data_token, sd_rec['filename']) - - # If mono3d=True, add 3D annotations in camera coordinates - if mono3d and (repro_rec is not None): - loc = box.center.tolist() - - dim = box.wlh - dim[[0, 1, 2]] = dim[[1, 2, 0]] # convert wlh to our lhw - dim = dim.tolist() - - rot = box.orientation.yaw_pitch_roll[0] - rot = [-rot] # convert the rot to our cam coordinate - - global_velo2d = nusc.box_velocity(box.token)[:2] - global_velo3d = np.array([*global_velo2d, 0.0]) - e2g_r_mat = Quaternion(pose_rec['rotation']).rotation_matrix - c2e_r_mat = Quaternion(cs_rec['rotation']).rotation_matrix - cam_velo3d = global_velo3d @ np.linalg.inv( - e2g_r_mat).T @ np.linalg.inv(c2e_r_mat).T - velo = cam_velo3d[0::2].tolist() - - repro_rec['bbox_cam3d'] = loc + dim + rot - repro_rec['velo_cam3d'] = velo - - center3d = np.array(loc).reshape([1, 3]) - center2d = points_cam2img( - center3d, camera_intrinsic, with_depth=True) - repro_rec['center2d'] = center2d.squeeze().tolist() - # normalized center2D + depth - # if samples with depth < 0 will be removed - if repro_rec['center2d'][2] <= 0: - continue - - ann_token = nusc.get('sample_annotation', - box.token)['attribute_tokens'] - if len(ann_token) == 0: - attr_name = 'None' - else: - attr_name = nusc.get('attribute', ann_token[0])['name'] - attr_id = nus_attributes.index(attr_name) - repro_rec['attribute_name'] = attr_name - repro_rec['attribute_id'] = attr_id - - repro_recs.append(repro_rec) - - return repro_recs - - -def post_process_coords( - corner_coords: List, imsize: Tuple[int, int] = (1600, 900) -) -> Union[Tuple[float, float, float, float], None]: - """Get the intersection of the convex hull of the reprojected bbox corners - and the image canvas, return None if no intersection. - - Args: - corner_coords (list[int]): Corner coordinates of reprojected - bounding box. - imsize (tuple[int]): Size of the image canvas. - - Return: - tuple [float]: Intersection of the convex hull of the 2D box - corners and the image canvas. 
- """ - polygon_from_2d_box = MultiPoint(corner_coords).convex_hull - img_canvas = box(0, 0, imsize[0], imsize[1]) - - if polygon_from_2d_box.intersects(img_canvas): - img_intersection = polygon_from_2d_box.intersection(img_canvas) - intersection_coords = np.array( - [coord for coord in img_intersection.exterior.coords]) - - min_x = min(intersection_coords[:, 0]) - min_y = min(intersection_coords[:, 1]) - max_x = max(intersection_coords[:, 0]) - max_y = max(intersection_coords[:, 1]) - - return min_x, min_y, max_x, max_y - else: - return None - - -def generate_record(ann_rec: dict, x1: float, y1: float, x2: float, y2: float, - sample_data_token: str, filename: str) -> OrderedDict: - """Generate one 2D annotation record given various information on top of - the 2D bounding box coordinates. - - Args: - ann_rec (dict): Original 3d annotation record. - x1 (float): Minimum value of the x coordinate. - y1 (float): Minimum value of the y coordinate. - x2 (float): Maximum value of the x coordinate. - y2 (float): Maximum value of the y coordinate. - sample_data_token (str): Sample data token. - filename (str):The corresponding image file where the annotation - is present. - - Returns: - dict: A sample 2D annotation record. - - file_name (str): file name - - image_id (str): sample data token - - area (float): 2d box area - - category_name (str): category name - - category_id (int): category id - - bbox (list[float]): left x, top y, dx, dy of 2d box - - iscrowd (int): whether the area is crowd - """ - repro_rec = OrderedDict() - repro_rec['sample_data_token'] = sample_data_token - coco_rec = dict() - - relevant_keys = [ - 'attribute_tokens', - 'category_name', - 'instance_token', - 'next', - 'num_lidar_pts', - 'num_radar_pts', - 'prev', - 'sample_annotation_token', - 'sample_data_token', - 'visibility_token', - ] - - for key, value in ann_rec.items(): - if key in relevant_keys: - repro_rec[key] = value - - repro_rec['bbox_corners'] = [x1, y1, x2, y2] - repro_rec['filename'] = filename - - coco_rec['file_name'] = filename - coco_rec['image_id'] = sample_data_token - coco_rec['area'] = (y2 - y1) * (x2 - x1) - - if repro_rec['category_name'] not in NuScenesDataset.NameMapping: - return None - cat_name = NuScenesDataset.NameMapping[repro_rec['category_name']] - coco_rec['category_name'] = cat_name - coco_rec['category_id'] = nus_categories.index(cat_name) - coco_rec['bbox'] = [x1, y1, x2 - x1, y2 - y1] - coco_rec['iscrowd'] = 0 - - return coco_rec diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/s3dis_data_utils.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/s3dis_data_utils.py deleted file mode 100644 index 751688f7..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/s3dis_data_utils.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -from concurrent import futures as futures -from os import path as osp - -import mmcv -import numpy as np - - -class S3DISData(object): - """S3DIS data. - - Generate s3dis infos for s3dis_converter. - - Args: - root_path (str): Root path of the raw data. - split (str, optional): Set split type of the data. Default: 'Area_1'. - """ - - def __init__(self, root_path, split='Area_1'): - self.root_dir = root_path - self.split = split - self.data_dir = osp.join(root_path, - 'Stanford3dDataset_v1.2_Aligned_Version') - - # Following `GSDN `_, use 5 furniture - # classes for detection: table, chair, sofa, bookcase, board. 
- self.cat_ids = np.array([7, 8, 9, 10, 11]) - self.cat_ids2class = { - cat_id: i - for i, cat_id in enumerate(list(self.cat_ids)) - } - - assert split in [ - 'Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_5', 'Area_6' - ] - self.sample_id_list = os.listdir(osp.join(self.data_dir, - split)) # conferenceRoom_1 - for sample_id in self.sample_id_list: - if os.path.isfile(osp.join(self.data_dir, split, sample_id)): - self.sample_id_list.remove(sample_id) - - def __len__(self): - return len(self.sample_id_list) - - def get_infos(self, num_workers=4, has_label=True, sample_id_list=None): - """Get data infos. - - This method gets information from the raw data. - - Args: - num_workers (int, optional): Number of threads to be used. - Default: 4. - has_label (bool, optional): Whether the data has label. - Default: True. - sample_id_list (list[int], optional): Index list of the sample. - Default: None. - - Returns: - infos (list[dict]): Information of the raw data. - """ - - def process_single_scene(sample_idx): - print(f'{self.split} sample_idx: {sample_idx}') - info = dict() - pc_info = { - 'num_features': 6, - 'lidar_idx': f'{self.split}_{sample_idx}' - } - info['point_cloud'] = pc_info - pts_filename = osp.join(self.root_dir, 's3dis_data', - f'{self.split}_{sample_idx}_point.npy') - pts_instance_mask_path = osp.join( - self.root_dir, 's3dis_data', - f'{self.split}_{sample_idx}_ins_label.npy') - pts_semantic_mask_path = osp.join( - self.root_dir, 's3dis_data', - f'{self.split}_{sample_idx}_sem_label.npy') - - points = np.load(pts_filename).astype(np.float32) - pts_instance_mask = np.load(pts_instance_mask_path).astype(np.int) - pts_semantic_mask = np.load(pts_semantic_mask_path).astype(np.int) - - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'points')) - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'instance_mask')) - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'semantic_mask')) - - points.tofile( - osp.join(self.root_dir, 'points', - f'{self.split}_{sample_idx}.bin')) - pts_instance_mask.tofile( - osp.join(self.root_dir, 'instance_mask', - f'{self.split}_{sample_idx}.bin')) - pts_semantic_mask.tofile( - osp.join(self.root_dir, 'semantic_mask', - f'{self.split}_{sample_idx}.bin')) - - info['pts_path'] = osp.join('points', - f'{self.split}_{sample_idx}.bin') - info['pts_instance_mask_path'] = osp.join( - 'instance_mask', f'{self.split}_{sample_idx}.bin') - info['pts_semantic_mask_path'] = osp.join( - 'semantic_mask', f'{self.split}_{sample_idx}.bin') - info['annos'] = self.get_bboxes(points, pts_instance_mask, - pts_semantic_mask) - - return info - - sample_id_list = sample_id_list if sample_id_list is not None \ - else self.sample_id_list - with futures.ThreadPoolExecutor(num_workers) as executor: - infos = executor.map(process_single_scene, sample_id_list) - return list(infos) - - def get_bboxes(self, points, pts_instance_mask, pts_semantic_mask): - """Convert instance masks to axis-aligned bounding boxes. - - Args: - points (np.array): Scene points of shape (n, 6). - pts_instance_mask (np.ndarray): Instance labels of shape (n,). - pts_semantic_mask (np.ndarray): Semantic labels of shape (n,). - - Returns: - dict: A dict containing detection infos with following keys: - - - gt_boxes_upright_depth (np.ndarray): Bounding boxes - of shape (n, 6) - - class (np.ndarray): Box labels of shape (n,) - - gt_num (int): Number of boxes. 
- """ - bboxes, labels = [], [] - for i in range(1, pts_instance_mask.max()): - ids = pts_instance_mask == i - # check if all instance points have same semantic label - assert pts_semantic_mask[ids].min() == pts_semantic_mask[ids].max() - label = pts_semantic_mask[ids][0] - # keep only furniture objects - if label in self.cat_ids2class: - labels.append(self.cat_ids2class[pts_semantic_mask[ids][0]]) - pts = points[:, :3][ids] - min_pts = pts.min(axis=0) - max_pts = pts.max(axis=0) - locations = (min_pts + max_pts) / 2 - dimensions = max_pts - min_pts - bboxes.append(np.concatenate((locations, dimensions))) - annotation = dict() - # follow ScanNet and SUN RGB-D keys - annotation['gt_boxes_upright_depth'] = np.array(bboxes) - annotation['class'] = np.array(labels) - annotation['gt_num'] = len(labels) - return annotation - - -class S3DISSegData(object): - """S3DIS dataset used to generate infos for semantic segmentation task. - - Args: - data_root (str): Root path of the raw data. - ann_file (str): The generated scannet infos. - split (str, optional): Set split type of the data. Default: 'train'. - num_points (int, optional): Number of points in each data input. - Default: 8192. - label_weight_func (function, optional): Function to compute the - label weight. Default: None. - """ - - def __init__(self, - data_root, - ann_file, - split='Area_1', - num_points=4096, - label_weight_func=None): - self.data_root = data_root - self.data_infos = mmcv.load(ann_file) - self.split = split - self.num_points = num_points - - self.all_ids = np.arange(13) # all possible ids - self.cat_ids = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, - 12]) # used for seg task - self.ignore_index = len(self.cat_ids) - - self.cat_id2class = np.ones((self.all_ids.shape[0],), dtype=np.int) * \ - self.ignore_index - for i, cat_id in enumerate(self.cat_ids): - self.cat_id2class[cat_id] = i - - # label weighting function is taken from - # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24 - self.label_weight_func = (lambda x: 1.0 / np.log(1.2 + x)) if \ - label_weight_func is None else label_weight_func - - def get_seg_infos(self): - scene_idxs, label_weight = self.get_scene_idxs_and_label_weight() - save_folder = osp.join(self.data_root, 'seg_info') - mmcv.mkdir_or_exist(save_folder) - np.save( - osp.join(save_folder, f'{self.split}_resampled_scene_idxs.npy'), - scene_idxs) - np.save( - osp.join(save_folder, f'{self.split}_label_weight.npy'), - label_weight) - print(f'{self.split} resampled scene index and label weight saved') - - def _convert_to_label(self, mask): - """Convert class_id in loaded segmentation mask to label.""" - if isinstance(mask, str): - if mask.endswith('npy'): - mask = np.load(mask) - else: - mask = np.fromfile(mask, dtype=np.int64) - label = self.cat_id2class[mask] - return label - - def get_scene_idxs_and_label_weight(self): - """Compute scene_idxs for data sampling and label weight for loss - calculation. - - We sample more times for scenes with more points. Label_weight is - inversely proportional to number of class points. 
- """ - num_classes = len(self.cat_ids) - num_point_all = [] - label_weight = np.zeros((num_classes + 1, )) # ignore_index - for data_info in self.data_infos: - label = self._convert_to_label( - osp.join(self.data_root, data_info['pts_semantic_mask_path'])) - num_point_all.append(label.shape[0]) - class_count, _ = np.histogram(label, range(num_classes + 2)) - label_weight += class_count - - # repeat scene_idx for num_scene_point // num_sample_point times - sample_prob = np.array(num_point_all) / float(np.sum(num_point_all)) - num_iter = int(np.sum(num_point_all) / float(self.num_points)) - scene_idxs = [] - for idx in range(len(self.data_infos)): - scene_idxs.extend([idx] * int(round(sample_prob[idx] * num_iter))) - scene_idxs = np.array(scene_idxs).astype(np.int32) - - # calculate label weight, adopted from PointNet++ - label_weight = label_weight[:-1].astype(np.float32) - label_weight = label_weight / label_weight.sum() - label_weight = self.label_weight_func(label_weight).astype(np.float32) - - return scene_idxs, label_weight diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/scannet_data_utils.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/scannet_data_utils.py deleted file mode 100644 index 085d401c..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/scannet_data_utils.py +++ /dev/null @@ -1,297 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -from concurrent import futures as futures -from os import path as osp - -import mmcv -import numpy as np - - -class ScanNetData(object): - """ScanNet data. - - Generate scannet infos for scannet_converter. - - Args: - root_path (str): Root path of the raw data. - split (str, optional): Set split type of the data. Default: 'train'. - """ - - def __init__(self, root_path, split='train'): - self.root_dir = root_path - self.split = split - self.split_dir = osp.join(root_path) - self.classes = [ - 'cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin' - ] - self.cat2label = {cat: self.classes.index(cat) for cat in self.classes} - self.label2cat = {self.cat2label[t]: t for t in self.cat2label} - self.cat_ids = np.array( - [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, 36, 39]) - self.cat_ids2class = { - nyu40id: i - for i, nyu40id in enumerate(list(self.cat_ids)) - } - assert split in ['train', 'val', 'test'] - split_file = osp.join(self.root_dir, 'meta_data', - f'scannetv2_{split}.txt') - mmcv.check_file_exist(split_file) - self.sample_id_list = mmcv.list_from_file(split_file) - self.test_mode = (split == 'test') - - def __len__(self): - return len(self.sample_id_list) - - def get_aligned_box_label(self, idx): - box_file = osp.join(self.root_dir, 'scannet_instance_data', - f'{idx}_aligned_bbox.npy') - mmcv.check_file_exist(box_file) - return np.load(box_file) - - def get_unaligned_box_label(self, idx): - box_file = osp.join(self.root_dir, 'scannet_instance_data', - f'{idx}_unaligned_bbox.npy') - mmcv.check_file_exist(box_file) - return np.load(box_file) - - def get_axis_align_matrix(self, idx): - matrix_file = osp.join(self.root_dir, 'scannet_instance_data', - f'{idx}_axis_align_matrix.npy') - mmcv.check_file_exist(matrix_file) - return np.load(matrix_file) - - def get_images(self, idx): - paths = [] - path = osp.join(self.root_dir, 'posed_images', idx) - for file in sorted(os.listdir(path)): - if file.endswith('.jpg'): - 
paths.append(osp.join('posed_images', idx, file)) - return paths - - def get_extrinsics(self, idx): - extrinsics = [] - path = osp.join(self.root_dir, 'posed_images', idx) - for file in sorted(os.listdir(path)): - if file.endswith('.txt') and not file == 'intrinsic.txt': - extrinsics.append(np.loadtxt(osp.join(path, file))) - return extrinsics - - def get_intrinsics(self, idx): - matrix_file = osp.join(self.root_dir, 'posed_images', idx, - 'intrinsic.txt') - mmcv.check_file_exist(matrix_file) - return np.loadtxt(matrix_file) - - def get_infos(self, num_workers=4, has_label=True, sample_id_list=None): - """Get data infos. - - This method gets information from the raw data. - - Args: - num_workers (int, optional): Number of threads to be used. - Default: 4. - has_label (bool, optional): Whether the data has label. - Default: True. - sample_id_list (list[int], optional): Index list of the sample. - Default: None. - - Returns: - infos (list[dict]): Information of the raw data. - """ - - def process_single_scene(sample_idx): - print(f'{self.split} sample_idx: {sample_idx}') - info = dict() - pc_info = {'num_features': 6, 'lidar_idx': sample_idx} - info['point_cloud'] = pc_info - pts_filename = osp.join(self.root_dir, 'scannet_instance_data', - f'{sample_idx}_vert.npy') - points = np.load(pts_filename) - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'points')) - points.tofile( - osp.join(self.root_dir, 'points', f'{sample_idx}.bin')) - info['pts_path'] = osp.join('points', f'{sample_idx}.bin') - - # update with RGB image paths if exist - if os.path.exists(osp.join(self.root_dir, 'posed_images')): - info['intrinsics'] = self.get_intrinsics(sample_idx) - all_extrinsics = self.get_extrinsics(sample_idx) - all_img_paths = self.get_images(sample_idx) - # some poses in ScanNet are invalid - extrinsics, img_paths = [], [] - for extrinsic, img_path in zip(all_extrinsics, all_img_paths): - if np.all(np.isfinite(extrinsic)): - img_paths.append(img_path) - extrinsics.append(extrinsic) - info['extrinsics'] = extrinsics - info['img_paths'] = img_paths - - if not self.test_mode: - pts_instance_mask_path = osp.join( - self.root_dir, 'scannet_instance_data', - f'{sample_idx}_ins_label.npy') - pts_semantic_mask_path = osp.join( - self.root_dir, 'scannet_instance_data', - f'{sample_idx}_sem_label.npy') - - pts_instance_mask = np.load(pts_instance_mask_path).astype( - np.int64) - pts_semantic_mask = np.load(pts_semantic_mask_path).astype( - np.int64) - - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'instance_mask')) - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'semantic_mask')) - - pts_instance_mask.tofile( - osp.join(self.root_dir, 'instance_mask', - f'{sample_idx}.bin')) - pts_semantic_mask.tofile( - osp.join(self.root_dir, 'semantic_mask', - f'{sample_idx}.bin')) - - info['pts_instance_mask_path'] = osp.join( - 'instance_mask', f'{sample_idx}.bin') - info['pts_semantic_mask_path'] = osp.join( - 'semantic_mask', f'{sample_idx}.bin') - - if has_label: - annotations = {} - # box is of shape [k, 6 + class] - aligned_box_label = self.get_aligned_box_label(sample_idx) - unaligned_box_label = self.get_unaligned_box_label(sample_idx) - annotations['gt_num'] = aligned_box_label.shape[0] - if annotations['gt_num'] != 0: - aligned_box = aligned_box_label[:, :-1] # k, 6 - unaligned_box = unaligned_box_label[:, :-1] - classes = aligned_box_label[:, -1] # k - annotations['name'] = np.array([ - self.label2cat[self.cat_ids2class[classes[i]]] - for i in range(annotations['gt_num']) - ]) - # default names are given to aligned bbox 
for compatibility - # we also save unaligned bbox info with marked names - annotations['location'] = aligned_box[:, :3] - annotations['dimensions'] = aligned_box[:, 3:6] - annotations['gt_boxes_upright_depth'] = aligned_box - annotations['unaligned_location'] = unaligned_box[:, :3] - annotations['unaligned_dimensions'] = unaligned_box[:, 3:6] - annotations[ - 'unaligned_gt_boxes_upright_depth'] = unaligned_box - annotations['index'] = np.arange( - annotations['gt_num'], dtype=np.int32) - annotations['class'] = np.array([ - self.cat_ids2class[classes[i]] - for i in range(annotations['gt_num']) - ]) - axis_align_matrix = self.get_axis_align_matrix(sample_idx) - annotations['axis_align_matrix'] = axis_align_matrix # 4x4 - info['annos'] = annotations - return info - - sample_id_list = sample_id_list if sample_id_list is not None \ - else self.sample_id_list - with futures.ThreadPoolExecutor(num_workers) as executor: - infos = executor.map(process_single_scene, sample_id_list) - return list(infos) - - -class ScanNetSegData(object): - """ScanNet dataset used to generate infos for semantic segmentation task. - - Args: - data_root (str): Root path of the raw data. - ann_file (str): The generated scannet infos. - split (str, optional): Set split type of the data. Default: 'train'. - num_points (int, optional): Number of points in each data input. - Default: 8192. - label_weight_func (function, optional): Function to compute the - label weight. Default: None. - """ - - def __init__(self, - data_root, - ann_file, - split='train', - num_points=8192, - label_weight_func=None): - self.data_root = data_root - self.data_infos = mmcv.load(ann_file) - self.split = split - assert split in ['train', 'val', 'test'] - self.num_points = num_points - - self.all_ids = np.arange(41) # all possible ids - self.cat_ids = np.array([ - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, 36, - 39 - ]) # used for seg task - self.ignore_index = len(self.cat_ids) - - self.cat_id2class = np.ones((self.all_ids.shape[0],), dtype=np.int) * \ - self.ignore_index - for i, cat_id in enumerate(self.cat_ids): - self.cat_id2class[cat_id] = i - - # label weighting function is taken from - # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24 - self.label_weight_func = (lambda x: 1.0 / np.log(1.2 + x)) if \ - label_weight_func is None else label_weight_func - - def get_seg_infos(self): - if self.split == 'test': - return - scene_idxs, label_weight = self.get_scene_idxs_and_label_weight() - save_folder = osp.join(self.data_root, 'seg_info') - mmcv.mkdir_or_exist(save_folder) - np.save( - osp.join(save_folder, f'{self.split}_resampled_scene_idxs.npy'), - scene_idxs) - np.save( - osp.join(save_folder, f'{self.split}_label_weight.npy'), - label_weight) - print(f'{self.split} resampled scene index and label weight saved') - - def _convert_to_label(self, mask): - """Convert class_id in loaded segmentation mask to label.""" - if isinstance(mask, str): - if mask.endswith('npy'): - mask = np.load(mask) - else: - mask = np.fromfile(mask, dtype=np.int64) - label = self.cat_id2class[mask] - return label - - def get_scene_idxs_and_label_weight(self): - """Compute scene_idxs for data sampling and label weight for loss - calculation. - - We sample more times for scenes with more points. Label_weight is - inversely proportional to number of class points. 
- """ - num_classes = len(self.cat_ids) - num_point_all = [] - label_weight = np.zeros((num_classes + 1, )) # ignore_index - for data_info in self.data_infos: - label = self._convert_to_label( - osp.join(self.data_root, data_info['pts_semantic_mask_path'])) - num_point_all.append(label.shape[0]) - class_count, _ = np.histogram(label, range(num_classes + 2)) - label_weight += class_count - - # repeat scene_idx for num_scene_point // num_sample_point times - sample_prob = np.array(num_point_all) / float(np.sum(num_point_all)) - num_iter = int(np.sum(num_point_all) / float(self.num_points)) - scene_idxs = [] - for idx in range(len(self.data_infos)): - scene_idxs.extend([idx] * int(round(sample_prob[idx] * num_iter))) - scene_idxs = np.array(scene_idxs).astype(np.int32) - - # calculate label weight, adopted from PointNet++ - label_weight = label_weight[:-1].astype(np.float32) - label_weight = label_weight / label_weight.sum() - label_weight = self.label_weight_func(label_weight).astype(np.float32) - - return scene_idxs, label_weight diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/sunrgbd_data_utils.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/sunrgbd_data_utils.py deleted file mode 100644 index 152ea42f..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/sunrgbd_data_utils.py +++ /dev/null @@ -1,226 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from concurrent import futures as futures -from os import path as osp - -import mmcv -import numpy as np -from scipy import io as sio - - -def random_sampling(points, num_points, replace=None, return_choices=False): - """Random sampling. - - Sampling point cloud to a certain number of points. - - Args: - points (ndarray): Point cloud. - num_points (int): The number of samples. - replace (bool): Whether the sample is with or without replacement. - return_choices (bool): Whether to return choices. - - Returns: - points (ndarray): Point cloud after sampling. - """ - - if replace is None: - replace = (points.shape[0] < num_points) - choices = np.random.choice(points.shape[0], num_points, replace=replace) - if return_choices: - return points[choices], choices - else: - return points[choices] - - -class SUNRGBDInstance(object): - - def __init__(self, line): - data = line.split(' ') - data[1:] = [float(x) for x in data[1:]] - self.classname = data[0] - self.xmin = data[1] - self.ymin = data[2] - self.xmax = data[1] + data[3] - self.ymax = data[2] + data[4] - self.box2d = np.array([self.xmin, self.ymin, self.xmax, self.ymax]) - self.centroid = np.array([data[5], data[6], data[7]]) - self.width = data[8] - self.length = data[9] - self.height = data[10] - # data[9] is x_size (length), data[8] is y_size (width), data[10] is - # z_size (height) in our depth coordinate system, - # l corresponds to the size along the x axis - self.size = np.array([data[9], data[8], data[10]]) * 2 - self.orientation = np.zeros((3, )) - self.orientation[0] = data[11] - self.orientation[1] = data[12] - self.heading_angle = np.arctan2(self.orientation[1], - self.orientation[0]) - self.box3d = np.concatenate( - [self.centroid, self.size, self.heading_angle[None]]) - - -class SUNRGBDData(object): - """SUNRGBD data. - - Generate scannet infos for sunrgbd_converter. - - Args: - root_path (str): Root path of the raw data. - split (str, optional): Set split type of the data. Default: 'train'. - use_v1 (bool, optional): Whether to use v1. Default: False. 
- """ - - def __init__(self, root_path, split='train', use_v1=False): - self.root_dir = root_path - self.split = split - self.split_dir = osp.join(root_path, 'sunrgbd_trainval') - self.classes = [ - 'bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser', - 'night_stand', 'bookshelf', 'bathtub' - ] - self.cat2label = {cat: self.classes.index(cat) for cat in self.classes} - self.label2cat = { - label: self.classes[label] - for label in range(len(self.classes)) - } - assert split in ['train', 'val', 'test'] - split_file = osp.join(self.split_dir, f'{split}_data_idx.txt') - mmcv.check_file_exist(split_file) - self.sample_id_list = map(int, mmcv.list_from_file(split_file)) - self.image_dir = osp.join(self.split_dir, 'image') - self.calib_dir = osp.join(self.split_dir, 'calib') - self.depth_dir = osp.join(self.split_dir, 'depth') - if use_v1: - self.label_dir = osp.join(self.split_dir, 'label_v1') - else: - self.label_dir = osp.join(self.split_dir, 'label') - - def __len__(self): - return len(self.sample_id_list) - - def get_image(self, idx): - img_filename = osp.join(self.image_dir, f'{idx:06d}.jpg') - return mmcv.imread(img_filename) - - def get_image_shape(self, idx): - image = self.get_image(idx) - return np.array(image.shape[:2], dtype=np.int32) - - def get_depth(self, idx): - depth_filename = osp.join(self.depth_dir, f'{idx:06d}.mat') - depth = sio.loadmat(depth_filename)['instance'] - return depth - - def get_calibration(self, idx): - calib_filepath = osp.join(self.calib_dir, f'{idx:06d}.txt') - lines = [line.rstrip() for line in open(calib_filepath)] - Rt = np.array([float(x) for x in lines[0].split(' ')]) - Rt = np.reshape(Rt, (3, 3), order='F').astype(np.float32) - K = np.array([float(x) for x in lines[1].split(' ')]) - K = np.reshape(K, (3, 3), order='F').astype(np.float32) - return K, Rt - - def get_label_objects(self, idx): - label_filename = osp.join(self.label_dir, f'{idx:06d}.txt') - lines = [line.rstrip() for line in open(label_filename)] - objects = [SUNRGBDInstance(line) for line in lines] - return objects - - def get_infos(self, num_workers=4, has_label=True, sample_id_list=None): - """Get data infos. - - This method gets information from the raw data. - - Args: - num_workers (int, optional): Number of threads to be used. - Default: 4. - has_label (bool, optional): Whether the data has label. - Default: True. - sample_id_list (list[int], optional): Index list of the sample. - Default: None. - - Returns: - infos (list[dict]): Information of the raw data. - """ - - def process_single_scene(sample_idx): - print(f'{self.split} sample_idx: {sample_idx}') - # convert depth to points - SAMPLE_NUM = 50000 - # TODO: Check whether can move the point - # sampling process during training. 
- pc_upright_depth = self.get_depth(sample_idx) - pc_upright_depth_subsampled = random_sampling( - pc_upright_depth, SAMPLE_NUM) - - info = dict() - pc_info = {'num_features': 6, 'lidar_idx': sample_idx} - info['point_cloud'] = pc_info - - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'points')) - pc_upright_depth_subsampled.tofile( - osp.join(self.root_dir, 'points', f'{sample_idx:06d}.bin')) - - info['pts_path'] = osp.join('points', f'{sample_idx:06d}.bin') - img_path = osp.join('image', f'{sample_idx:06d}.jpg') - image_info = { - 'image_idx': sample_idx, - 'image_shape': self.get_image_shape(sample_idx), - 'image_path': img_path - } - info['image'] = image_info - - K, Rt = self.get_calibration(sample_idx) - calib_info = {'K': K, 'Rt': Rt} - info['calib'] = calib_info - - if has_label: - obj_list = self.get_label_objects(sample_idx) - annotations = {} - annotations['gt_num'] = len([ - obj.classname for obj in obj_list - if obj.classname in self.cat2label.keys() - ]) - if annotations['gt_num'] != 0: - annotations['name'] = np.array([ - obj.classname for obj in obj_list - if obj.classname in self.cat2label.keys() - ]) - annotations['bbox'] = np.concatenate([ - obj.box2d.reshape(1, 4) for obj in obj_list - if obj.classname in self.cat2label.keys() - ], - axis=0) - annotations['location'] = np.concatenate([ - obj.centroid.reshape(1, 3) for obj in obj_list - if obj.classname in self.cat2label.keys() - ], - axis=0) - annotations['dimensions'] = 2 * np.array([ - [obj.length, obj.width, obj.height] for obj in obj_list - if obj.classname in self.cat2label.keys() - ]) # lwh (depth) format - annotations['rotation_y'] = np.array([ - obj.heading_angle for obj in obj_list - if obj.classname in self.cat2label.keys() - ]) - annotations['index'] = np.arange( - len(obj_list), dtype=np.int32) - annotations['class'] = np.array([ - self.cat2label[obj.classname] for obj in obj_list - if obj.classname in self.cat2label.keys() - ]) - annotations['gt_boxes_upright_depth'] = np.stack( - [ - obj.box3d for obj in obj_list - if obj.classname in self.cat2label.keys() - ], - axis=0) # (K,8) - info['annos'] = annotations - return info - - sample_id_list = sample_id_list if \ - sample_id_list is not None else self.sample_id_list - with futures.ThreadPoolExecutor(num_workers) as executor: - infos = executor.map(process_single_scene, sample_id_list) - return list(infos) diff --git a/cv/3d_detection/paconv/pytorch/tools/data_converter/waymo_converter.py b/cv/3d_detection/paconv/pytorch/tools/data_converter/waymo_converter.py deleted file mode 100644 index f991514b..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/data_converter/waymo_converter.py +++ /dev/null @@ -1,556 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -r"""Adapted from `Waymo to KITTI converter - `_. -""" - -try: - from waymo_open_dataset import dataset_pb2 -except ImportError: - raise ImportError( - 'Please run "pip install waymo-open-dataset-tf-2-1-0==1.2.0" ' - 'to install the official devkit first.') - -from glob import glob -from os.path import join - -import mmcv -import numpy as np -import tensorflow as tf -from waymo_open_dataset.utils import range_image_utils, transform_utils -from waymo_open_dataset.utils.frame_utils import \ - parse_range_image_and_camera_projection - - -class Waymo2KITTI(object): - """Waymo to KITTI converter. - - This class serves as the converter to change the waymo raw data to KITTI - format. - - Args: - load_dir (str): Directory to load waymo raw data. - save_dir (str): Directory to save data in KITTI format. 
- prefix (str): Prefix of filename. In general, 0 for training, 1 for - validation and 2 for testing. - workers (int, optional): Number of workers for the parallel process. - test_mode (bool, optional): Whether in the test_mode. Default: False. - """ - - def __init__(self, - load_dir, - save_dir, - prefix, - workers=64, - test_mode=False): - self.filter_empty_3dboxes = True - self.filter_no_label_zone_points = True - - self.selected_waymo_classes = ['VEHICLE', 'PEDESTRIAN', 'CYCLIST'] - - # Only data collected in specific locations will be converted - # If set None, this filter is disabled - # Available options: location_sf (main dataset) - self.selected_waymo_locations = None - self.save_track_id = False - - # turn on eager execution for older tensorflow versions - if int(tf.__version__.split('.')[0]) < 2: - tf.enable_eager_execution() - - self.lidar_list = [ - '_FRONT', '_FRONT_RIGHT', '_FRONT_LEFT', '_SIDE_RIGHT', - '_SIDE_LEFT' - ] - self.type_list = [ - 'UNKNOWN', 'VEHICLE', 'PEDESTRIAN', 'SIGN', 'CYCLIST' - ] - self.waymo_to_kitti_class_map = { - 'UNKNOWN': 'DontCare', - 'PEDESTRIAN': 'Pedestrian', - 'VEHICLE': 'Car', - 'CYCLIST': 'Cyclist', - 'SIGN': 'Sign' # not in kitti - } - - self.load_dir = load_dir - self.save_dir = save_dir - self.prefix = prefix - self.workers = int(workers) - self.test_mode = test_mode - - self.tfrecord_pathnames = sorted( - glob(join(self.load_dir, '*.tfrecord'))) - - self.label_save_dir = f'{self.save_dir}/label_' - self.label_all_save_dir = f'{self.save_dir}/label_all' - self.image_save_dir = f'{self.save_dir}/image_' - self.calib_save_dir = f'{self.save_dir}/calib' - self.point_cloud_save_dir = f'{self.save_dir}/velodyne' - self.pose_save_dir = f'{self.save_dir}/pose' - self.timestamp_save_dir = f'{self.save_dir}/timestamp' - - self.create_folder() - - def convert(self): - """Convert action.""" - print('Start converting ...') - mmcv.track_parallel_progress(self.convert_one, range(len(self)), - self.workers) - print('\nFinished ...') - - def convert_one(self, file_idx): - """Convert action for single file. - - Args: - file_idx (int): Index of the file to be converted. - """ - pathname = self.tfrecord_pathnames[file_idx] - dataset = tf.data.TFRecordDataset(pathname, compression_type='') - - for frame_idx, data in enumerate(dataset): - - frame = dataset_pb2.Frame() - frame.ParseFromString(bytearray(data.numpy())) - if (self.selected_waymo_locations is not None - and frame.context.stats.location - not in self.selected_waymo_locations): - continue - - self.save_image(frame, file_idx, frame_idx) - self.save_calib(frame, file_idx, frame_idx) - self.save_lidar(frame, file_idx, frame_idx) - self.save_pose(frame, file_idx, frame_idx) - self.save_timestamp(frame, file_idx, frame_idx) - - if not self.test_mode: - self.save_label(frame, file_idx, frame_idx) - - def __len__(self): - """Length of the filename list.""" - return len(self.tfrecord_pathnames) - - def save_image(self, frame, file_idx, frame_idx): - """Parse and save the images in png format. - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. - """ - for img in frame.images: - img_path = f'{self.image_save_dir}{str(img.name - 1)}/' + \ - f'{self.prefix}{str(file_idx).zfill(3)}' + \ - f'{str(frame_idx).zfill(3)}.png' - img = mmcv.imfrombytes(img.image) - mmcv.imwrite(img, img_path) - - def save_calib(self, frame, file_idx, frame_idx): - """Parse and save the calibration data. 
- - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. - """ - # waymo front camera to kitti reference camera - T_front_cam_to_ref = np.array([[0.0, -1.0, 0.0], [0.0, 0.0, -1.0], - [1.0, 0.0, 0.0]]) - camera_calibs = [] - R0_rect = [f'{i:e}' for i in np.eye(3).flatten()] - Tr_velo_to_cams = [] - calib_context = '' - - for camera in frame.context.camera_calibrations: - # extrinsic parameters - T_cam_to_vehicle = np.array(camera.extrinsic.transform).reshape( - 4, 4) - T_vehicle_to_cam = np.linalg.inv(T_cam_to_vehicle) - Tr_velo_to_cam = \ - self.cart_to_homo(T_front_cam_to_ref) @ T_vehicle_to_cam - if camera.name == 1: # FRONT = 1, see dataset.proto for details - self.T_velo_to_front_cam = Tr_velo_to_cam.copy() - Tr_velo_to_cam = Tr_velo_to_cam[:3, :].reshape((12, )) - Tr_velo_to_cams.append([f'{i:e}' for i in Tr_velo_to_cam]) - - # intrinsic parameters - camera_calib = np.zeros((3, 4)) - camera_calib[0, 0] = camera.intrinsic[0] - camera_calib[1, 1] = camera.intrinsic[1] - camera_calib[0, 2] = camera.intrinsic[2] - camera_calib[1, 2] = camera.intrinsic[3] - camera_calib[2, 2] = 1 - camera_calib = list(camera_calib.reshape(12)) - camera_calib = [f'{i:e}' for i in camera_calib] - camera_calibs.append(camera_calib) - - # all camera ids are saved as id-1 in the result because - # camera 0 is unknown in the proto - for i in range(5): - calib_context += 'P' + str(i) + ': ' + \ - ' '.join(camera_calibs[i]) + '\n' - calib_context += 'R0_rect' + ': ' + ' '.join(R0_rect) + '\n' - for i in range(5): - calib_context += 'Tr_velo_to_cam_' + str(i) + ': ' + \ - ' '.join(Tr_velo_to_cams[i]) + '\n' - - with open( - f'{self.calib_save_dir}/{self.prefix}' + - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.txt', - 'w+') as fp_calib: - fp_calib.write(calib_context) - fp_calib.close() - - def save_lidar(self, frame, file_idx, frame_idx): - """Parse and save the lidar data in psd format. - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. 
- """ - range_images, camera_projections, range_image_top_pose = \ - parse_range_image_and_camera_projection(frame) - - # First return - points_0, cp_points_0, intensity_0, elongation_0, mask_indices_0 = \ - self.convert_range_image_to_point_cloud( - frame, - range_images, - camera_projections, - range_image_top_pose, - ri_index=0 - ) - points_0 = np.concatenate(points_0, axis=0) - intensity_0 = np.concatenate(intensity_0, axis=0) - elongation_0 = np.concatenate(elongation_0, axis=0) - mask_indices_0 = np.concatenate(mask_indices_0, axis=0) - - # Second return - points_1, cp_points_1, intensity_1, elongation_1, mask_indices_1 = \ - self.convert_range_image_to_point_cloud( - frame, - range_images, - camera_projections, - range_image_top_pose, - ri_index=1 - ) - points_1 = np.concatenate(points_1, axis=0) - intensity_1 = np.concatenate(intensity_1, axis=0) - elongation_1 = np.concatenate(elongation_1, axis=0) - mask_indices_1 = np.concatenate(mask_indices_1, axis=0) - - points = np.concatenate([points_0, points_1], axis=0) - intensity = np.concatenate([intensity_0, intensity_1], axis=0) - elongation = np.concatenate([elongation_0, elongation_1], axis=0) - mask_indices = np.concatenate([mask_indices_0, mask_indices_1], axis=0) - - # timestamp = frame.timestamp_micros * np.ones_like(intensity) - - # concatenate x,y,z, intensity, elongation, timestamp (6-dim) - point_cloud = np.column_stack( - (points, intensity, elongation, mask_indices)) - - pc_path = f'{self.point_cloud_save_dir}/{self.prefix}' + \ - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.bin' - point_cloud.astype(np.float32).tofile(pc_path) - - def save_label(self, frame, file_idx, frame_idx): - """Parse and save the label data in txt format. - The relation between waymo and kitti coordinates is noteworthy: - 1. x, y, z correspond to l, w, h (waymo) -> l, h, w (kitti) - 2. x-y-z: front-left-up (waymo) -> right-down-front(kitti) - 3. bbox origin at volumetric center (waymo) -> bottom center (kitti) - 4. rotation: +x around y-axis (kitti) -> +x around z-axis (waymo) - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. 
- """ - fp_label_all = open( - f'{self.label_all_save_dir}/{self.prefix}' + - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.txt', 'w+') - id_to_bbox = dict() - id_to_name = dict() - for labels in frame.projected_lidar_labels: - name = labels.name - for label in labels.labels: - # TODO: need a workaround as bbox may not belong to front cam - bbox = [ - label.box.center_x - label.box.length / 2, - label.box.center_y - label.box.width / 2, - label.box.center_x + label.box.length / 2, - label.box.center_y + label.box.width / 2 - ] - id_to_bbox[label.id] = bbox - id_to_name[label.id] = name - 1 - - for obj in frame.laser_labels: - bounding_box = None - name = None - id = obj.id - for lidar in self.lidar_list: - if id + lidar in id_to_bbox: - bounding_box = id_to_bbox.get(id + lidar) - name = str(id_to_name.get(id + lidar)) - break - - if bounding_box is None or name is None: - name = '0' - bounding_box = (0, 0, 0, 0) - - my_type = self.type_list[obj.type] - - if my_type not in self.selected_waymo_classes: - continue - - if self.filter_empty_3dboxes and obj.num_lidar_points_in_box < 1: - continue - - my_type = self.waymo_to_kitti_class_map[my_type] - - height = obj.box.height - width = obj.box.width - length = obj.box.length - - x = obj.box.center_x - y = obj.box.center_y - z = obj.box.center_z - height / 2 - - # project bounding box to the virtual reference frame - pt_ref = self.T_velo_to_front_cam @ \ - np.array([x, y, z, 1]).reshape((4, 1)) - x, y, z, _ = pt_ref.flatten().tolist() - - rotation_y = -obj.box.heading - np.pi / 2 - track_id = obj.id - - # not available - truncated = 0 - occluded = 0 - alpha = -10 - - line = my_type + \ - ' {} {} {} {} {} {} {} {} {} {} {} {} {} {}\n'.format( - round(truncated, 2), occluded, round(alpha, 2), - round(bounding_box[0], 2), round(bounding_box[1], 2), - round(bounding_box[2], 2), round(bounding_box[3], 2), - round(height, 2), round(width, 2), round(length, 2), - round(x, 2), round(y, 2), round(z, 2), - round(rotation_y, 2)) - - if self.save_track_id: - line_all = line[:-1] + ' ' + name + ' ' + track_id + '\n' - else: - line_all = line[:-1] + ' ' + name + '\n' - - fp_label = open( - f'{self.label_save_dir}{name}/{self.prefix}' + - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.txt', 'a') - fp_label.write(line) - fp_label.close() - - fp_label_all.write(line_all) - - fp_label_all.close() - - def save_pose(self, frame, file_idx, frame_idx): - """Parse and save the pose data. - - Note that SDC's own pose is not included in the regular training - of KITTI dataset. KITTI raw dataset contains ego motion files - but are not often used. Pose is important for algorithms that - take advantage of the temporal information. - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. - """ - pose = np.array(frame.pose.transform).reshape(4, 4) - np.savetxt( - join(f'{self.pose_save_dir}/{self.prefix}' + - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.txt'), - pose) - - def save_timestamp(self, frame, file_idx, frame_idx): - """Save the timestamp data in a separate file instead of the - pointcloud. - - Note that SDC's own pose is not included in the regular training - of KITTI dataset. KITTI raw dataset contains ego motion files - but are not often used. Pose is important for algorithms that - take advantage of the temporal information. - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. 
- """ - with open( - join(f'{self.timestamp_save_dir}/{self.prefix}' + - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.txt'), - 'w') as f: - f.write(str(frame.timestamp_micros)) - - def create_folder(self): - """Create folder for data preprocessing.""" - if not self.test_mode: - dir_list1 = [ - self.label_all_save_dir, self.calib_save_dir, - self.point_cloud_save_dir, self.pose_save_dir, - self.timestamp_save_dir - ] - dir_list2 = [self.label_save_dir, self.image_save_dir] - else: - dir_list1 = [ - self.calib_save_dir, self.point_cloud_save_dir, - self.pose_save_dir, self.timestamp_save_dir - ] - dir_list2 = [self.image_save_dir] - for d in dir_list1: - mmcv.mkdir_or_exist(d) - for d in dir_list2: - for i in range(5): - mmcv.mkdir_or_exist(f'{d}{str(i)}') - - def convert_range_image_to_point_cloud(self, - frame, - range_images, - camera_projections, - range_image_top_pose, - ri_index=0): - """Convert range images to point cloud. - - Args: - frame (:obj:`Frame`): Open dataset frame. - range_images (dict): Mapping from laser_name to list of two - range images corresponding with two returns. - camera_projections (dict): Mapping from laser_name to list of two - camera projections corresponding with two returns. - range_image_top_pose (:obj:`Transform`): Range image pixel pose for - top lidar. - ri_index (int, optional): 0 for the first return, - 1 for the second return. Default: 0. - - Returns: - tuple[list[np.ndarray]]: (List of points with shape [N, 3], - camera projections of points with shape [N, 6], intensity - with shape [N, 1], elongation with shape [N, 1], points' - position in the depth map (element offset if points come from - the main lidar otherwise -1) with shape[N, 1]). All the - lists have the length of lidar numbers (5). - """ - calibrations = sorted( - frame.context.laser_calibrations, key=lambda c: c.name) - points = [] - cp_points = [] - intensity = [] - elongation = [] - mask_indices = [] - - frame_pose = tf.convert_to_tensor( - value=np.reshape(np.array(frame.pose.transform), [4, 4])) - # [H, W, 6] - range_image_top_pose_tensor = tf.reshape( - tf.convert_to_tensor(value=range_image_top_pose.data), - range_image_top_pose.shape.dims) - # [H, W, 3, 3] - range_image_top_pose_tensor_rotation = \ - transform_utils.get_rotation_matrix( - range_image_top_pose_tensor[..., 0], - range_image_top_pose_tensor[..., 1], - range_image_top_pose_tensor[..., 2]) - range_image_top_pose_tensor_translation = \ - range_image_top_pose_tensor[..., 3:] - range_image_top_pose_tensor = transform_utils.get_transform( - range_image_top_pose_tensor_rotation, - range_image_top_pose_tensor_translation) - for c in calibrations: - range_image = range_images[c.name][ri_index] - if len(c.beam_inclinations) == 0: - beam_inclinations = range_image_utils.compute_inclination( - tf.constant( - [c.beam_inclination_min, c.beam_inclination_max]), - height=range_image.shape.dims[0]) - else: - beam_inclinations = tf.constant(c.beam_inclinations) - - beam_inclinations = tf.reverse(beam_inclinations, axis=[-1]) - extrinsic = np.reshape(np.array(c.extrinsic.transform), [4, 4]) - - range_image_tensor = tf.reshape( - tf.convert_to_tensor(value=range_image.data), - range_image.shape.dims) - pixel_pose_local = None - frame_pose_local = None - if c.name == dataset_pb2.LaserName.TOP: - pixel_pose_local = range_image_top_pose_tensor - pixel_pose_local = tf.expand_dims(pixel_pose_local, axis=0) - frame_pose_local = tf.expand_dims(frame_pose, axis=0) - range_image_mask = range_image_tensor[..., 0] > 0 - - if 
self.filter_no_label_zone_points: - nlz_mask = range_image_tensor[..., 3] != 1.0 # 1.0: in NLZ - range_image_mask = range_image_mask & nlz_mask - - range_image_cartesian = \ - range_image_utils.extract_point_cloud_from_range_image( - tf.expand_dims(range_image_tensor[..., 0], axis=0), - tf.expand_dims(extrinsic, axis=0), - tf.expand_dims(tf.convert_to_tensor( - value=beam_inclinations), axis=0), - pixel_pose=pixel_pose_local, - frame_pose=frame_pose_local) - - mask_index = tf.where(range_image_mask) - - range_image_cartesian = tf.squeeze(range_image_cartesian, axis=0) - points_tensor = tf.gather_nd(range_image_cartesian, mask_index) - - cp = camera_projections[c.name][ri_index] - cp_tensor = tf.reshape( - tf.convert_to_tensor(value=cp.data), cp.shape.dims) - cp_points_tensor = tf.gather_nd(cp_tensor, mask_index) - points.append(points_tensor.numpy()) - cp_points.append(cp_points_tensor.numpy()) - - intensity_tensor = tf.gather_nd(range_image_tensor[..., 1], - mask_index) - intensity.append(intensity_tensor.numpy()) - - elongation_tensor = tf.gather_nd(range_image_tensor[..., 2], - mask_index) - elongation.append(elongation_tensor.numpy()) - if c.name == 1: - mask_index = (ri_index * range_image_mask.shape[0] + - mask_index[:, 0] - ) * range_image_mask.shape[1] + mask_index[:, 1] - mask_index = mask_index.numpy().astype(elongation[-1].dtype) - else: - mask_index = np.full_like(elongation[-1], -1) - - mask_indices.append(mask_index) - - return points, cp_points, intensity, elongation, mask_indices - - def cart_to_homo(self, mat): - """Convert transformation matrix in Cartesian coordinates to - homogeneous format. - - Args: - mat (np.ndarray): Transformation matrix in Cartesian. - The input matrix shape is 3x3 or 3x4. - - Returns: - np.ndarray: Transformation matrix in homogeneous format. - The matrix shape is 4x4. - """ - ret = np.eye(4) - if mat.shape == (3, 3): - ret[:3, :3] = mat - elif mat.shape == (3, 4): - ret[:3, :] = mat - else: - raise ValueError(mat.shape) - return ret diff --git a/cv/3d_detection/paconv/pytorch/tools/deployment/mmdet3d2torchserve.py b/cv/3d_detection/paconv/pytorch/tools/deployment/mmdet3d2torchserve.py deleted file mode 100644 index df7e6084..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/deployment/mmdet3d2torchserve.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from argparse import ArgumentParser, Namespace -from pathlib import Path -from tempfile import TemporaryDirectory - -import mmcv - -try: - from model_archiver.model_packaging import package_model - from model_archiver.model_packaging_utils import ModelExportUtils -except ImportError: - package_model = None - - -def mmdet3d2torchserve( - config_file: str, - checkpoint_file: str, - output_folder: str, - model_name: str, - model_version: str = '1.0', - force: bool = False, -): - """Converts MMDetection3D model (config + checkpoint) to TorchServe `.mar`. - - Args: - config_file (str): - In MMDetection3D config format. - The contents vary for each task repository. - checkpoint_file (str): - In MMDetection3D checkpoint format. - The contents vary for each task repository. - output_folder (str): - Folder where `{model_name}.mar` will be created. - The file created will be in TorchServe archive format. - model_name (str): - If not None, used for naming the `{model_name}.mar` file - that will be created under `output_folder`. - If None, `{Path(checkpoint_file).stem}` will be used. - model_version (str, optional): - Model's version. Default: '1.0'. 
- force (bool, optional): - If True, if there is an existing `{model_name}.mar` - file under `output_folder` it will be overwritten. - Default: False. - """ - mmcv.mkdir_or_exist(output_folder) - - config = mmcv.Config.fromfile(config_file) - - with TemporaryDirectory() as tmpdir: - config.dump(f'{tmpdir}/config.py') - - args = Namespace( - **{ - 'model_file': f'{tmpdir}/config.py', - 'serialized_file': checkpoint_file, - 'handler': f'{Path(__file__).parent}/mmdet3d_handler.py', - 'model_name': model_name or Path(checkpoint_file).stem, - 'version': model_version, - 'export_path': output_folder, - 'force': force, - 'requirements_file': None, - 'extra_files': None, - 'runtime': 'python', - 'archive_format': 'default' - }) - manifest = ModelExportUtils.generate_manifest_json(args) - package_model(args, manifest) - - -def parse_args(): - parser = ArgumentParser( - description='Convert MMDetection models to TorchServe `.mar` format.') - parser.add_argument('config', type=str, help='config file path') - parser.add_argument('checkpoint', type=str, help='checkpoint file path') - parser.add_argument( - '--output-folder', - type=str, - required=True, - help='Folder where `{model_name}.mar` will be created.') - parser.add_argument( - '--model-name', - type=str, - default=None, - help='If not None, used for naming the `{model_name}.mar`' - 'file that will be created under `output_folder`.' - 'If None, `{Path(checkpoint_file).stem}` will be used.') - parser.add_argument( - '--model-version', - type=str, - default='1.0', - help='Number used for versioning.') - parser.add_argument( - '-f', - '--force', - action='store_true', - help='overwrite the existing `{model_name}.mar`') - args = parser.parse_args() - - return args - - -if __name__ == '__main__': - args = parse_args() - - if package_model is None: - raise ImportError('`torch-model-archiver` is required.' - 'Try: pip install torch-model-archiver') - - mmdet3d2torchserve(args.config, args.checkpoint, args.output_folder, - args.model_name, args.model_version, args.force) diff --git a/cv/3d_detection/paconv/pytorch/tools/deployment/mmdet3d_handler.py b/cv/3d_detection/paconv/pytorch/tools/deployment/mmdet3d_handler.py deleted file mode 100644 index 8b526cdf..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/deployment/mmdet3d_handler.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import base64 -import os - -import numpy as np -import torch -from ts.torch_handler.base_handler import BaseHandler - -from mmdet3d.apis import inference_detector, init_model -from mmdet3d.core.points import get_points_type - - -class MMdet3dHandler(BaseHandler): - """MMDetection3D Handler used in TorchServe. - - Handler to load models in MMDetection3D, and it will process data to get - predicted results. For now, it only supports SECOND. - """ - threshold = 0.5 - load_dim = 4 - use_dim = [0, 1, 2, 3] - coord_type = 'LIDAR' - attribute_dims = None - - def initialize(self, context): - """Initialize function loads the model in MMDetection3D. - - Args: - context (context): It is a JSON Object containing information - pertaining to the model artifacts parameters. - """ - properties = context.system_properties - self.map_location = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = torch.device(self.map_location + ':' + - str(properties.get('gpu_id')) if torch.cuda. 
- is_available() else self.map_location) - self.manifest = context.manifest - - model_dir = properties.get('model_dir') - serialized_file = self.manifest['model']['serializedFile'] - checkpoint = os.path.join(model_dir, serialized_file) - self.config_file = os.path.join(model_dir, 'config.py') - self.model = init_model(self.config_file, checkpoint, self.device) - self.initialized = True - - def preprocess(self, data): - """Preprocess function converts data into LiDARPoints class. - - Args: - data (List): Input data from the request. - - Returns: - `LiDARPoints` : The preprocess function returns the input - point cloud data as LiDARPoints class. - """ - for row in data: - # Compat layer: normally the envelope should just return the data - # directly, but older versions of Torchserve didn't have envelope. - pts = row.get('data') or row.get('body') - if isinstance(pts, str): - pts = base64.b64decode(pts) - - points = np.frombuffer(pts, dtype=np.float32) - points = points.reshape(-1, self.load_dim) - points = points[:, self.use_dim] - points_class = get_points_type(self.coord_type) - points = points_class( - points, - points_dim=points.shape[-1], - attribute_dims=self.attribute_dims) - - return points - - def inference(self, data): - """Inference Function. - - This function is used to make a prediction call on the - given input request. - - Args: - data (`LiDARPoints`): LiDARPoints class passed to make - the inference request. - - Returns: - List(dict) : The predicted result is returned in this function. - """ - results, _ = inference_detector(self.model, data) - return results - - def postprocess(self, data): - """Postprocess function. - - This function makes use of the output from the inference and - converts it into a torchserve supported response output. - - Args: - data (List[dict]): The data received from the prediction - output of the model. - - Returns: - List: The post process function returns a list of the predicted - output. 
- """ - output = [] - for pts_index, result in enumerate(data): - output.append([]) - if 'pts_bbox' in result.keys(): - pred_bboxes = result['pts_bbox']['boxes_3d'].tensor.numpy() - pred_scores = result['pts_bbox']['scores_3d'].numpy() - else: - pred_bboxes = result['boxes_3d'].tensor.numpy() - pred_scores = result['scores_3d'].numpy() - - index = pred_scores > self.threshold - bbox_coords = pred_bboxes[index].tolist() - score = pred_scores[index].tolist() - - output[pts_index].append({'3dbbox': bbox_coords, 'score': score}) - - return output diff --git a/cv/3d_detection/paconv/pytorch/tools/deployment/test_torchserver.py b/cv/3d_detection/paconv/pytorch/tools/deployment/test_torchserver.py deleted file mode 100644 index 613f9e4f..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/deployment/test_torchserver.py +++ /dev/null @@ -1,56 +0,0 @@ -from argparse import ArgumentParser - -import numpy as np -import requests - -from mmdet3d.apis import inference_detector, init_model - - -def parse_args(): - parser = ArgumentParser() - parser.add_argument('pcd', help='Point cloud file') - parser.add_argument('config', help='Config file') - parser.add_argument('checkpoint', help='Checkpoint file') - parser.add_argument('model_name', help='The model name in the server') - parser.add_argument( - '--inference-addr', - default='127.0.0.1:8080', - help='Address and port of the inference server') - parser.add_argument( - '--device', default='cuda:0', help='Device used for inference') - parser.add_argument( - '--score-thr', type=float, default=0.5, help='3d bbox score threshold') - args = parser.parse_args() - return args - - -def parse_result(input): - bbox = input[0]['3dbbox'] - result = np.array(bbox) - return result - - -def main(args): - # build the model from a config file and a checkpoint file - model = init_model(args.config, args.checkpoint, device=args.device) - # test a single point cloud file - model_result, _ = inference_detector(model, args.pcd) - # filter the 3d bboxes whose scores > 0.5 - if 'pts_bbox' in model_result[0].keys(): - pred_bboxes = model_result[0]['pts_bbox']['boxes_3d'].tensor.numpy() - pred_scores = model_result[0]['pts_bbox']['scores_3d'].numpy() - else: - pred_bboxes = model_result[0]['boxes_3d'].tensor.numpy() - pred_scores = model_result[0]['scores_3d'].numpy() - model_result = pred_bboxes[pred_scores > 0.5] - - url = 'http://' + args.inference_addr + '/predictions/' + args.model_name - with open(args.pcd, 'rb') as points: - response = requests.post(url, points) - server_result = parse_result(response.json()) - assert np.allclose(model_result, server_result) - - -if __name__ == '__main__': - args = parse_args() - main(args) diff --git a/cv/3d_detection/paconv/pytorch/tools/dist_test.sh b/cv/3d_detection/paconv/pytorch/tools/dist_test.sh deleted file mode 100755 index dea131b4..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/dist_test.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env bash - -CONFIG=$1 -CHECKPOINT=$2 -GPUS=$3 -NNODES=${NNODES:-1} -NODE_RANK=${NODE_RANK:-0} -PORT=${PORT:-29500} -MASTER_ADDR=${MASTER_ADDR:-"127.0.0.1"} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -python -m torch.distributed.launch \ - --nnodes=$NNODES \ - --node_rank=$NODE_RANK \ - --master_addr=$MASTER_ADDR \ - --nproc_per_node=$GPUS \ - --master_port=$PORT \ - $(dirname "$0")/test.py \ - $CONFIG \ - $CHECKPOINT \ - --launcher pytorch \ - ${@:4} diff --git a/cv/3d_detection/paconv/pytorch/tools/misc/browse_dataset.py b/cv/3d_detection/paconv/pytorch/tools/misc/browse_dataset.py 
deleted file mode 100644 index e4451b12..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/misc/browse_dataset.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import warnings -from os import path as osp -from pathlib import Path - -import mmcv -import numpy as np -from mmcv import Config, DictAction, mkdir_or_exist - -from mmdet3d.core.bbox import (Box3DMode, CameraInstance3DBoxes, Coord3DMode, - DepthInstance3DBoxes, LiDARInstance3DBoxes) -from mmdet3d.core.visualizer import (show_multi_modality_result, show_result, - show_seg_result) -from mmdet3d.datasets import build_dataset - - -def parse_args(): - parser = argparse.ArgumentParser(description='Browse a dataset') - parser.add_argument('config', help='train config file path') - parser.add_argument( - '--skip-type', - type=str, - nargs='+', - default=['Normalize'], - help='skip some useless pipeline') - parser.add_argument( - '--output-dir', - default=None, - type=str, - help='If there is no display interface, you can save it') - parser.add_argument( - '--task', - type=str, - choices=['det', 'seg', 'multi_modality-det', 'mono-det'], - help='Determine the visualization method depending on the task.') - parser.add_argument( - '--aug', - action='store_true', - help='Whether to visualize augmented datasets or original dataset.') - parser.add_argument( - '--online', - action='store_true', - help='Whether to perform online visualization. Note that you often ' - 'need a monitor to do so.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. 
key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - args = parser.parse_args() - return args - - -def build_data_cfg(config_path, skip_type, aug, cfg_options): - """Build data config for loading visualization data.""" - - cfg = Config.fromfile(config_path) - if cfg_options is not None: - cfg.merge_from_dict(cfg_options) - # extract inner dataset of `RepeatDataset` as `cfg.data.train` - # so we don't need to worry about it later - if cfg.data.train['type'] == 'RepeatDataset': - cfg.data.train = cfg.data.train.dataset - # use only first dataset for `ConcatDataset` - if cfg.data.train['type'] == 'ConcatDataset': - cfg.data.train = cfg.data.train.datasets[0] - train_data_cfg = cfg.data.train - - if aug: - show_pipeline = cfg.train_pipeline - else: - show_pipeline = cfg.eval_pipeline - for i in range(len(cfg.train_pipeline)): - if cfg.train_pipeline[i]['type'] == 'LoadAnnotations3D': - show_pipeline.insert(i, cfg.train_pipeline[i]) - # Collect points as well as labels - if cfg.train_pipeline[i]['type'] == 'Collect3D': - if show_pipeline[-1]['type'] == 'Collect3D': - show_pipeline[-1] = cfg.train_pipeline[i] - else: - show_pipeline.append(cfg.train_pipeline[i]) - - train_data_cfg['pipeline'] = [ - x for x in show_pipeline if x['type'] not in skip_type - ] - - return cfg - - -def to_depth_mode(points, bboxes): - """Convert points and bboxes to Depth Coord and Depth Box mode.""" - if points is not None: - points = Coord3DMode.convert_point(points.copy(), Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - if bboxes is not None: - bboxes = Box3DMode.convert(bboxes.clone(), Box3DMode.LIDAR, - Box3DMode.DEPTH) - return points, bboxes - - -def show_det_data(input, out_dir, show=False): - """Visualize 3D point cloud and 3D bboxes.""" - img_metas = input['img_metas']._data - points = input['points']._data.numpy() - gt_bboxes = input['gt_bboxes_3d']._data.tensor - if img_metas['box_mode_3d'] != Box3DMode.DEPTH: - points, gt_bboxes = to_depth_mode(points, gt_bboxes) - filename = osp.splitext(osp.basename(img_metas['pts_filename']))[0] - show_result( - points, - gt_bboxes.clone(), - None, - out_dir, - filename, - show=show, - snapshot=True) - - -def show_seg_data(input, out_dir, show=False): - """Visualize 3D point cloud and segmentation mask.""" - img_metas = input['img_metas']._data - points = input['points']._data.numpy() - gt_seg = input['pts_semantic_mask']._data.numpy() - filename = osp.splitext(osp.basename(img_metas['pts_filename']))[0] - show_seg_result( - points, - gt_seg.copy(), - None, - out_dir, - filename, - np.array(img_metas['PALETTE']), - img_metas['ignore_index'], - show=show, - snapshot=True) - - -def show_proj_bbox_img(input, out_dir, show=False, is_nus_mono=False): - """Visualize 3D bboxes on 2D image by projection.""" - gt_bboxes = input['gt_bboxes_3d']._data - img_metas = input['img_metas']._data - img = input['img']._data.numpy() - # need to transpose channel to first dim - img = img.transpose(1, 2, 0) - # no 3D gt bboxes, just show img - if gt_bboxes.tensor.shape[0] == 0: - gt_bboxes = None - filename = Path(img_metas['filename']).name - if isinstance(gt_bboxes, DepthInstance3DBoxes): - show_multi_modality_result( - img, - gt_bboxes, - None, - None, - out_dir, - filename, - box_mode='depth', - img_metas=img_metas, - show=show) - elif isinstance(gt_bboxes, LiDARInstance3DBoxes): - show_multi_modality_result( - img, - gt_bboxes, - None, - img_metas['lidar2img'], - out_dir, - filename, - box_mode='lidar', - img_metas=img_metas, - 
show=show) - elif isinstance(gt_bboxes, CameraInstance3DBoxes): - show_multi_modality_result( - img, - gt_bboxes, - None, - img_metas['cam2img'], - out_dir, - filename, - box_mode='camera', - img_metas=img_metas, - show=show) - else: - # can't project, just show img - warnings.warn( - f'unrecognized gt box type {type(gt_bboxes)}, only show image') - show_multi_modality_result( - img, None, None, None, out_dir, filename, show=show) - - -def main(): - args = parse_args() - - if args.output_dir is not None: - mkdir_or_exist(args.output_dir) - - cfg = build_data_cfg(args.config, args.skip_type, args.aug, - args.cfg_options) - try: - dataset = build_dataset( - cfg.data.train, default_args=dict(filter_empty_gt=False)) - except TypeError: # seg dataset doesn't have `filter_empty_gt` key - dataset = build_dataset(cfg.data.train) - - dataset_type = cfg.dataset_type - # configure visualization mode - vis_task = args.task # 'det', 'seg', 'multi_modality-det', 'mono-det' - progress_bar = mmcv.ProgressBar(len(dataset)) - - for input in dataset: - if vis_task in ['det', 'multi_modality-det']: - # show 3D bboxes on 3D point clouds - show_det_data(input, args.output_dir, show=args.online) - if vis_task in ['multi_modality-det', 'mono-det']: - # project 3D bboxes to 2D image - show_proj_bbox_img( - input, - args.output_dir, - show=args.online, - is_nus_mono=(dataset_type == 'NuScenesMonoDataset')) - elif vis_task in ['seg']: - # show 3D segmentation mask on 3D point clouds - show_seg_data(input, args.output_dir, show=args.online) - progress_bar.update() - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/misc/fuse_conv_bn.py b/cv/3d_detection/paconv/pytorch/tools/misc/fuse_conv_bn.py deleted file mode 100644 index 9aff4029..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/misc/fuse_conv_bn.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse - -import torch -from mmcv.runner import save_checkpoint -from torch import nn as nn - -from mmdet3d.apis import init_model - - -def fuse_conv_bn(conv, bn): - """During inference, the functionary of batch norm layers is turned off but - only the mean and var alone channels are used, which exposes the chance to - fuse it with the preceding conv layers to save computations and simplify - network structures.""" - conv_w = conv.weight - conv_b = conv.bias if conv.bias is not None else torch.zeros_like( - bn.running_mean) - - factor = bn.weight / torch.sqrt(bn.running_var + bn.eps) - conv.weight = nn.Parameter(conv_w * - factor.reshape([conv.out_channels, 1, 1, 1])) - conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias) - return conv - - -def fuse_module(m): - last_conv = None - last_conv_name = None - - for name, child in m.named_children(): - if isinstance(child, (nn.BatchNorm2d, nn.SyncBatchNorm)): - if last_conv is None: # only fuse BN that is after Conv - continue - fused_conv = fuse_conv_bn(last_conv, child) - m._modules[last_conv_name] = fused_conv - # To reduce changes, set BN as Identity instead of deleting it. 
- m._modules[name] = nn.Identity() - last_conv = None - elif isinstance(child, nn.Conv2d): - last_conv = child - last_conv_name = name - else: - fuse_module(child) - return m - - -def parse_args(): - parser = argparse.ArgumentParser( - description='fuse Conv and BN layers in a model') - parser.add_argument('config', help='config file path') - parser.add_argument('checkpoint', help='checkpoint file path') - parser.add_argument('out', help='output path of the converted model') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - # build the model from a config file and a checkpoint file - model = init_model(args.config, args.checkpoint) - # fuse conv and bn layers of the model - fused_model = fuse_module(model) - save_checkpoint(fused_model, args.out) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/misc/print_config.py b/cv/3d_detection/paconv/pytorch/tools/misc/print_config.py deleted file mode 100644 index c3538ef5..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/misc/print_config.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse - -from mmcv import Config, DictAction - - -def parse_args(): - parser = argparse.ArgumentParser(description='Print the whole config') - parser.add_argument('config', help='config file path') - parser.add_argument( - '--options', nargs='+', action=DictAction, help='arguments in dict') - args = parser.parse_args() - - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - if args.options is not None: - cfg.merge_from_dict(args.options) - print(f'Config:\n{cfg.pretty_text}') - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/misc/visualize_results.py b/cv/3d_detection/paconv/pytorch/tools/misc/visualize_results.py deleted file mode 100644 index c59445f6..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/misc/visualize_results.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse - -import mmcv -from mmcv import Config - -from mmdet3d.datasets import build_dataset - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet3D visualize the results') - parser.add_argument('config', help='test config file path') - parser.add_argument('--result', help='results file in pickle format') - parser.add_argument( - '--show-dir', help='directory where visualize results will be saved') - args = parser.parse_args() - - return args - - -def main(): - args = parse_args() - - if args.result is not None and \ - not args.result.endswith(('.pkl', '.pickle')): - raise ValueError('The results file must be a pkl file.') - - cfg = Config.fromfile(args.config) - cfg.data.test.test_mode = True - - # build the dataset - dataset = build_dataset(cfg.data.test) - results = mmcv.load(args.result) - - if getattr(dataset, 'show', None) is not None: - # data loading pipeline for showing - eval_pipeline = cfg.get('eval_pipeline', {}) - if eval_pipeline: - dataset.show(results, args.show_dir, pipeline=eval_pipeline) - else: - dataset.show(results, args.show_dir) # use default pipeline - else: - raise NotImplementedError( - 'Show is not implemented for dataset {}!'.format( - type(dataset).__name__)) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/model_converters/convert_h3dnet_checkpoints.py b/cv/3d_detection/paconv/pytorch/tools/model_converters/convert_h3dnet_checkpoints.py deleted file mode 100644 index 2ede340a..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/model_converters/convert_h3dnet_checkpoints.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import tempfile - -import torch -from mmcv import Config -from mmcv.runner import load_state_dict - -from mmdet3d.models import build_detector - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet3D upgrade model version(before v0.6.0) of H3DNet') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--out', help='path of the output checkpoint file') - args = parser.parse_args() - return args - - -def parse_config(config_strings): - """Parse config from strings. - - Args: - config_strings (string): strings of model config. 
- - Returns: - Config: model config - """ - temp_file = tempfile.NamedTemporaryFile() - config_path = f'{temp_file.name}.py' - with open(config_path, 'w') as f: - f.write(config_strings) - - config = Config.fromfile(config_path) - - # Update backbone config - if 'pool_mod' in config.model.backbone.backbones: - config.model.backbone.backbones.pop('pool_mod') - - if 'sa_cfg' not in config.model.backbone: - config.model.backbone['sa_cfg'] = dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True) - - if 'type' not in config.model.rpn_head.vote_aggregation_cfg: - config.model.rpn_head.vote_aggregation_cfg['type'] = 'PointSAModule' - - # Update rpn_head config - if 'pred_layer_cfg' not in config.model.rpn_head: - config.model.rpn_head['pred_layer_cfg'] = dict( - in_channels=128, shared_conv_channels=(128, 128), bias=True) - - if 'feat_channels' in config.model.rpn_head: - config.model.rpn_head.pop('feat_channels') - - if 'vote_moudule_cfg' in config.model.rpn_head: - config.model.rpn_head['vote_module_cfg'] = config.model.rpn_head.pop( - 'vote_moudule_cfg') - - if config.model.rpn_head.vote_aggregation_cfg.use_xyz: - config.model.rpn_head.vote_aggregation_cfg.mlp_channels[0] -= 3 - - for cfg in config.model.roi_head.primitive_list: - cfg['vote_module_cfg'] = cfg.pop('vote_moudule_cfg') - cfg.vote_aggregation_cfg.mlp_channels[0] -= 3 - if 'type' not in cfg.vote_aggregation_cfg: - cfg.vote_aggregation_cfg['type'] = 'PointSAModule' - - if 'type' not in config.model.roi_head.bbox_head.suface_matching_cfg: - config.model.roi_head.bbox_head.suface_matching_cfg[ - 'type'] = 'PointSAModule' - - if config.model.roi_head.bbox_head.suface_matching_cfg.use_xyz: - config.model.roi_head.bbox_head.suface_matching_cfg.mlp_channels[ - 0] -= 3 - - if 'type' not in config.model.roi_head.bbox_head.line_matching_cfg: - config.model.roi_head.bbox_head.line_matching_cfg[ - 'type'] = 'PointSAModule' - - if config.model.roi_head.bbox_head.line_matching_cfg.use_xyz: - config.model.roi_head.bbox_head.line_matching_cfg.mlp_channels[0] -= 3 - - if 'proposal_module_cfg' in config.model.roi_head.bbox_head: - config.model.roi_head.bbox_head.pop('proposal_module_cfg') - - temp_file.close() - - return config - - -def main(): - """Convert keys in checkpoints for VoteNet. - - There can be some breaking changes during the development of mmdetection3d, - and this tool is used for upgrading checkpoints trained with old versions - (before v0.6.0) to the latest one. 
- """ - args = parse_args() - checkpoint = torch.load(args.checkpoint) - cfg = parse_config(checkpoint['meta']['config']) - # Build the model and load checkpoint - model = build_detector( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - orig_ckpt = checkpoint['state_dict'] - converted_ckpt = orig_ckpt.copy() - - if cfg['dataset_type'] == 'ScanNetDataset': - NUM_CLASSES = 18 - elif cfg['dataset_type'] == 'SUNRGBDDataset': - NUM_CLASSES = 10 - else: - raise NotImplementedError - - RENAME_PREFIX = { - 'rpn_head.conv_pred.0': 'rpn_head.conv_pred.shared_convs.layer0', - 'rpn_head.conv_pred.1': 'rpn_head.conv_pred.shared_convs.layer1' - } - - DEL_KEYS = [ - 'rpn_head.conv_pred.0.bn.num_batches_tracked', - 'rpn_head.conv_pred.1.bn.num_batches_tracked' - ] - - EXTRACT_KEYS = { - 'rpn_head.conv_pred.conv_cls.weight': - ('rpn_head.conv_pred.conv_out.weight', [(0, 2), (-NUM_CLASSES, -1)]), - 'rpn_head.conv_pred.conv_cls.bias': - ('rpn_head.conv_pred.conv_out.bias', [(0, 2), (-NUM_CLASSES, -1)]), - 'rpn_head.conv_pred.conv_reg.weight': - ('rpn_head.conv_pred.conv_out.weight', [(2, -NUM_CLASSES)]), - 'rpn_head.conv_pred.conv_reg.bias': - ('rpn_head.conv_pred.conv_out.bias', [(2, -NUM_CLASSES)]) - } - - # Delete some useless keys - for key in DEL_KEYS: - converted_ckpt.pop(key) - - # Rename keys with specific prefix - RENAME_KEYS = dict() - for old_key in converted_ckpt.keys(): - for rename_prefix in RENAME_PREFIX.keys(): - if rename_prefix in old_key: - new_key = old_key.replace(rename_prefix, - RENAME_PREFIX[rename_prefix]) - RENAME_KEYS[new_key] = old_key - for new_key, old_key in RENAME_KEYS.items(): - converted_ckpt[new_key] = converted_ckpt.pop(old_key) - - # Extract weights and rename the keys - for new_key, (old_key, indices) in EXTRACT_KEYS.items(): - cur_layers = orig_ckpt[old_key] - converted_layers = [] - for (start, end) in indices: - if end != -1: - converted_layers.append(cur_layers[start:end]) - else: - converted_layers.append(cur_layers[start:]) - converted_layers = torch.cat(converted_layers, 0) - converted_ckpt[new_key] = converted_layers - if old_key in converted_ckpt.keys(): - converted_ckpt.pop(old_key) - - # Check the converted checkpoint by loading to the model - load_state_dict(model, converted_ckpt, strict=True) - checkpoint['state_dict'] = converted_ckpt - torch.save(checkpoint, args.out) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/model_converters/convert_votenet_checkpoints.py b/cv/3d_detection/paconv/pytorch/tools/model_converters/convert_votenet_checkpoints.py deleted file mode 100644 index 7264e319..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/model_converters/convert_votenet_checkpoints.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import tempfile - -import torch -from mmcv import Config -from mmcv.runner import load_state_dict - -from mmdet3d.models import build_detector - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet3D upgrade model version(before v0.6.0) of VoteNet') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--out', help='path of the output checkpoint file') - args = parser.parse_args() - return args - - -def parse_config(config_strings): - """Parse config from strings. - - Args: - config_strings (string): strings of model config. 
- - Returns: - Config: model config - """ - temp_file = tempfile.NamedTemporaryFile() - config_path = f'{temp_file.name}.py' - with open(config_path, 'w') as f: - f.write(config_strings) - - config = Config.fromfile(config_path) - - # Update backbone config - if 'pool_mod' in config.model.backbone: - config.model.backbone.pop('pool_mod') - - if 'sa_cfg' not in config.model.backbone: - config.model.backbone['sa_cfg'] = dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True) - - if 'type' not in config.model.bbox_head.vote_aggregation_cfg: - config.model.bbox_head.vote_aggregation_cfg['type'] = 'PointSAModule' - - # Update bbox_head config - if 'pred_layer_cfg' not in config.model.bbox_head: - config.model.bbox_head['pred_layer_cfg'] = dict( - in_channels=128, shared_conv_channels=(128, 128), bias=True) - - if 'feat_channels' in config.model.bbox_head: - config.model.bbox_head.pop('feat_channels') - - if 'vote_moudule_cfg' in config.model.bbox_head: - config.model.bbox_head['vote_module_cfg'] = config.model.bbox_head.pop( - 'vote_moudule_cfg') - - if config.model.bbox_head.vote_aggregation_cfg.use_xyz: - config.model.bbox_head.vote_aggregation_cfg.mlp_channels[0] -= 3 - - temp_file.close() - - return config - - -def main(): - """Convert keys in checkpoints for VoteNet. - - There can be some breaking changes during the development of mmdetection3d, - and this tool is used for upgrading checkpoints trained with old versions - (before v0.6.0) to the latest one. - """ - args = parse_args() - checkpoint = torch.load(args.checkpoint) - cfg = parse_config(checkpoint['meta']['config']) - # Build the model and load checkpoint - model = build_detector( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - orig_ckpt = checkpoint['state_dict'] - converted_ckpt = orig_ckpt.copy() - - if cfg['dataset_type'] == 'ScanNetDataset': - NUM_CLASSES = 18 - elif cfg['dataset_type'] == 'SUNRGBDDataset': - NUM_CLASSES = 10 - else: - raise NotImplementedError - - RENAME_PREFIX = { - 'bbox_head.conv_pred.0': 'bbox_head.conv_pred.shared_convs.layer0', - 'bbox_head.conv_pred.1': 'bbox_head.conv_pred.shared_convs.layer1' - } - - DEL_KEYS = [ - 'bbox_head.conv_pred.0.bn.num_batches_tracked', - 'bbox_head.conv_pred.1.bn.num_batches_tracked' - ] - - EXTRACT_KEYS = { - 'bbox_head.conv_pred.conv_cls.weight': - ('bbox_head.conv_pred.conv_out.weight', [(0, 2), (-NUM_CLASSES, -1)]), - 'bbox_head.conv_pred.conv_cls.bias': - ('bbox_head.conv_pred.conv_out.bias', [(0, 2), (-NUM_CLASSES, -1)]), - 'bbox_head.conv_pred.conv_reg.weight': - ('bbox_head.conv_pred.conv_out.weight', [(2, -NUM_CLASSES)]), - 'bbox_head.conv_pred.conv_reg.bias': - ('bbox_head.conv_pred.conv_out.bias', [(2, -NUM_CLASSES)]) - } - - # Delete some useless keys - for key in DEL_KEYS: - converted_ckpt.pop(key) - - # Rename keys with specific prefix - RENAME_KEYS = dict() - for old_key in converted_ckpt.keys(): - for rename_prefix in RENAME_PREFIX.keys(): - if rename_prefix in old_key: - new_key = old_key.replace(rename_prefix, - RENAME_PREFIX[rename_prefix]) - RENAME_KEYS[new_key] = old_key - for new_key, old_key in RENAME_KEYS.items(): - converted_ckpt[new_key] = converted_ckpt.pop(old_key) - - # Extract weights and rename the keys - for new_key, (old_key, indices) in EXTRACT_KEYS.items(): - cur_layers = orig_ckpt[old_key] - converted_layers = [] - for (start, end) in indices: - if end != -1: - converted_layers.append(cur_layers[start:end]) - else: - converted_layers.append(cur_layers[start:]) - 
converted_layers = torch.cat(converted_layers, 0) - converted_ckpt[new_key] = converted_layers - if old_key in converted_ckpt.keys(): - converted_ckpt.pop(old_key) - - # Check the converted checkpoint by loading to the model - load_state_dict(model, converted_ckpt, strict=True) - checkpoint['state_dict'] = converted_ckpt - torch.save(checkpoint, args.out) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/model_converters/publish_model.py b/cv/3d_detection/paconv/pytorch/tools/model_converters/publish_model.py deleted file mode 100644 index e2660578..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/model_converters/publish_model.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import subprocess - -import torch - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Process a checkpoint to be published') - parser.add_argument('in_file', help='input checkpoint filename') - parser.add_argument('out_file', help='output checkpoint filename') - args = parser.parse_args() - return args - - -def process_checkpoint(in_file, out_file): - checkpoint = torch.load(in_file, map_location='cpu') - # remove optimizer for smaller file size - if 'optimizer' in checkpoint: - del checkpoint['optimizer'] - # if it is necessary to remove some sensitive data in checkpoint['meta'], - # add the code here. - torch.save(checkpoint, out_file) - sha = subprocess.check_output(['sha256sum', out_file]).decode() - final_file = out_file.rstrip('.pth') + '-{}.pth'.format(sha[:8]) - subprocess.Popen(['mv', out_file, final_file]) - - -def main(): - args = parse_args() - process_checkpoint(args.in_file, args.out_file) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/model_converters/regnet2mmdet.py b/cv/3d_detection/paconv/pytorch/tools/model_converters/regnet2mmdet.py deleted file mode 100644 index fbf8c8f3..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/model_converters/regnet2mmdet.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -from collections import OrderedDict - -import torch - - -def convert_stem(model_key, model_weight, state_dict, converted_names): - new_key = model_key.replace('stem.conv', 'conv1') - new_key = new_key.replace('stem.bn', 'bn1') - state_dict[new_key] = model_weight - converted_names.add(model_key) - print(f'Convert {model_key} to {new_key}') - - -def convert_head(model_key, model_weight, state_dict, converted_names): - new_key = model_key.replace('head.fc', 'fc') - state_dict[new_key] = model_weight - converted_names.add(model_key) - print(f'Convert {model_key} to {new_key}') - - -def convert_reslayer(model_key, model_weight, state_dict, converted_names): - split_keys = model_key.split('.') - layer, block, module = split_keys[:3] - block_id = int(block[1:]) - layer_name = f'layer{int(layer[1:])}' - block_name = f'{block_id - 1}' - - if block_id == 1 and module == 'bn': - new_key = f'{layer_name}.{block_name}.downsample.1.{split_keys[-1]}' - elif block_id == 1 and module == 'proj': - new_key = f'{layer_name}.{block_name}.downsample.0.{split_keys[-1]}' - elif module == 'f': - if split_keys[3] == 'a_bn': - module_name = 'bn1' - elif split_keys[3] == 'b_bn': - module_name = 'bn2' - elif split_keys[3] == 'c_bn': - module_name = 'bn3' - elif split_keys[3] == 'a': - module_name = 'conv1' - elif split_keys[3] == 'b': - module_name = 'conv2' - elif split_keys[3] == 'c': - module_name = 'conv3' - new_key = f'{layer_name}.{block_name}.{module_name}.{split_keys[-1]}' - else: - raise ValueError(f'Unsupported conversion of key {model_key}') - print(f'Convert {model_key} to {new_key}') - state_dict[new_key] = model_weight - converted_names.add(model_key) - - -def convert(src, dst): - """Convert keys in pycls pretrained RegNet models to mmdet style.""" - # load caffe model - regnet_model = torch.load(src) - blobs = regnet_model['model_state'] - # convert to pytorch style - state_dict = OrderedDict() - converted_names = set() - for key, weight in blobs.items(): - if 'stem' in key: - convert_stem(key, weight, state_dict, converted_names) - elif 'head' in key: - convert_head(key, weight, state_dict, converted_names) - elif key.startswith('s'): - convert_reslayer(key, weight, state_dict, converted_names) - - # check if all layers are converted - for key in blobs: - if key not in converted_names: - print(f'not converted: {key}') - # save checkpoint - checkpoint = dict() - checkpoint['state_dict'] = state_dict - torch.save(checkpoint, dst) - - -def main(): - parser = argparse.ArgumentParser(description='Convert model keys') - parser.add_argument('src', help='src detectron model path') - parser.add_argument('dst', help='save path') - args = parser.parse_args() - convert(args.src, args.dst) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/slurm_test.sh b/cv/3d_detection/paconv/pytorch/tools/slurm_test.sh deleted file mode 100755 index 6dd67e57..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/slurm_test.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash - -set -x - -PARTITION=$1 -JOB_NAME=$2 -CONFIG=$3 -CHECKPOINT=$4 -GPUS=${GPUS:-8} -GPUS_PER_NODE=${GPUS_PER_NODE:-8} -CPUS_PER_TASK=${CPUS_PER_TASK:-5} -PY_ARGS=${@:5} -SRUN_ARGS=${SRUN_ARGS:-""} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --cpus-per-task=${CPUS_PER_TASK} \ - --kill-on-bad-exit=1 \ - ${SRUN_ARGS} \ - python -u tools/test.py ${CONFIG} 
${CHECKPOINT} --launcher="slurm" ${PY_ARGS} diff --git a/cv/3d_detection/paconv/pytorch/tools/slurm_train.sh b/cv/3d_detection/paconv/pytorch/tools/slurm_train.sh deleted file mode 100755 index b3feb3d9..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/slurm_train.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash - -set -x - -PARTITION=$1 -JOB_NAME=$2 -CONFIG=$3 -WORK_DIR=$4 -GPUS=${GPUS:-8} -GPUS_PER_NODE=${GPUS_PER_NODE:-8} -CPUS_PER_TASK=${CPUS_PER_TASK:-5} -SRUN_ARGS=${SRUN_ARGS:-""} -PY_ARGS=${@:5} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --cpus-per-task=${CPUS_PER_TASK} \ - --kill-on-bad-exit=1 \ - ${SRUN_ARGS} \ - python -u tools/train.py ${CONFIG} --work-dir=${WORK_DIR} --launcher="slurm" ${PY_ARGS} diff --git a/cv/3d_detection/paconv/pytorch/tools/test.py b/cv/3d_detection/paconv/pytorch/tools/test.py deleted file mode 100644 index bd9d95f4..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/test.py +++ /dev/null @@ -1,260 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os -import warnings - -import mmcv -import torch -from mmcv import Config, DictAction -from mmcv.cnn import fuse_conv_bn -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (get_dist_info, init_dist, load_checkpoint, - wrap_fp16_model) - -import mmdet -from mmdet3d.apis import single_gpu_test -from mmdet3d.datasets import build_dataloader, build_dataset -from mmdet3d.models import build_model -from mmdet.apis import multi_gpu_test, set_random_seed -from mmdet.datasets import replace_ImageToTensor - -if mmdet.__version__ > '2.23.0': - # If mmdet version > 2.23.0, setup_multi_processes would be imported and - # used from mmdet instead of mmdet3d. - from mmdet.utils import setup_multi_processes -else: - from mmdet3d.utils import setup_multi_processes - -try: - # If mmdet version > 2.23.0, compat_cfg would be imported and - # used from mmdet instead of mmdet3d. - from mmdet.utils import compat_cfg -except ImportError: - from mmdet3d.utils import compat_cfg - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet test (and eval) a model') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--out', help='output result file in pickle format') - parser.add_argument( - '--fuse-conv-bn', - action='store_true', - help='Whether to fuse conv and bn, this will slightly increase' - 'the inference speed') - parser.add_argument( - '--gpu-ids', - type=int, - nargs='+', - help='(Deprecated, please use --gpu-id) ids of gpus to use ' - '(only applicable to non-distributed training)') - parser.add_argument( - '--gpu-id', - type=int, - default=0, - help='id of gpu to use ' - '(only applicable to non-distributed testing)') - parser.add_argument( - '--format-only', - action='store_true', - help='Format the output results without perform evaluation. 
It is' - 'useful when you want to format the result to a specific format and ' - 'submit it to the test server') - parser.add_argument( - '--eval', - type=str, - nargs='+', - help='evaluation metrics, which depends on the dataset, e.g., "bbox",' - ' "segm", "proposal" for COCO, and "mAP", "recall" for PASCAL VOC') - parser.add_argument('--show', action='store_true', help='show results') - parser.add_argument( - '--show-dir', help='directory where results will be saved') - parser.add_argument( - '--gpu-collect', - action='store_true', - help='whether to use gpu to collect results.') - parser.add_argument( - '--tmpdir', - help='tmp directory used for collecting results from multiple ' - 'workers, available when gpu-collect is not specified') - parser.add_argument('--seed', type=int, default=0, help='random seed') - parser.add_argument( - '--deterministic', - action='store_true', - help='whether to set deterministic options for CUDNN backend.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--options', - nargs='+', - action=DictAction, - help='custom options for evaluation, the key-value pair in xxx=yyy ' - 'format will be kwargs for dataset.evaluate() function (deprecate), ' - 'change to --eval-options instead.') - parser.add_argument( - '--eval-options', - nargs='+', - action=DictAction, - help='custom options for evaluation, the key-value pair in xxx=yyy ' - 'format will be kwargs for dataset.evaluate() function') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local_rank', type=int, default=0) - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - if args.options and args.eval_options: - raise ValueError( - '--options and --eval-options cannot be both specified, ' - '--options is deprecated in favor of --eval-options') - if args.options: - warnings.warn('--options is deprecated in favor of --eval-options') - args.eval_options = args.options - return args - - -def main(): - args = parse_args() - - assert args.out or args.eval or args.format_only or args.show \ - or args.show_dir, \ - ('Please specify at least one operation (save/eval/format/show the ' - 'results / save the results) with the argument "--out", "--eval"' - ', "--format-only", "--show" or "--show-dir"') - - if args.eval and args.format_only: - raise ValueError('--eval and --format_only cannot be both specified') - - if args.out is not None and not args.out.endswith(('.pkl', '.pickle')): - raise ValueError('The output file must be a pkl file.') - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - cfg = compat_cfg(cfg) - - # set multi-process settings - setup_multi_processes(cfg) - - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - - cfg.model.pretrained = None - - if args.gpu_ids is not None: - cfg.gpu_ids = args.gpu_ids[0:1] - warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. 
' - 'Because we only support single GPU mode in ' - 'non-distributed testing. Use the first GPU ' - 'in `gpu_ids` now.') - else: - cfg.gpu_ids = [args.gpu_id] - - # init distributed env first, since logger depends on the dist info. - if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - - test_dataloader_default_args = dict( - samples_per_gpu=1, workers_per_gpu=2, dist=distributed, shuffle=False) - - # in case the test dataset is concatenated - if isinstance(cfg.data.test, dict): - cfg.data.test.test_mode = True - if cfg.data.test_dataloader.get('samples_per_gpu', 1) > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.test.pipeline = replace_ImageToTensor( - cfg.data.test.pipeline) - elif isinstance(cfg.data.test, list): - for ds_cfg in cfg.data.test: - ds_cfg.test_mode = True - if cfg.data.test_dataloader.get('samples_per_gpu', 1) > 1: - for ds_cfg in cfg.data.test: - ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline) - - test_loader_cfg = { - **test_dataloader_default_args, - **cfg.data.get('test_dataloader', {}) - } - - # set random seeds - if args.seed is not None: - set_random_seed(args.seed, deterministic=args.deterministic) - - # build the dataloader - dataset = build_dataset(cfg.data.test) - data_loader = build_dataloader(dataset, **test_loader_cfg) - - # build the model and load checkpoint - cfg.model.train_cfg = None - model = build_model(cfg.model, test_cfg=cfg.get('test_cfg')) - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - wrap_fp16_model(model) - checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu') - if args.fuse_conv_bn: - model = fuse_conv_bn(model) - # old versions did not save class info in checkpoints, this walkaround is - # for backward compatibility - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - model.CLASSES = dataset.CLASSES - # palette for visualization in segmentation tasks - if 'PALETTE' in checkpoint.get('meta', {}): - model.PALETTE = checkpoint['meta']['PALETTE'] - elif hasattr(dataset, 'PALETTE'): - # segmentation dataset has `PALETTE` attribute - model.PALETTE = dataset.PALETTE - - if not distributed: - model = MMDataParallel(model, device_ids=cfg.gpu_ids) - outputs = single_gpu_test(model, data_loader, args.show, args.show_dir) - else: - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False) - outputs = multi_gpu_test(model, data_loader, args.tmpdir, - args.gpu_collect) - - rank, _ = get_dist_info() - if rank == 0: - if args.out: - print(f'\nwriting results to {args.out}') - mmcv.dump(outputs, args.out) - kwargs = {} if args.eval_options is None else args.eval_options - if args.format_only: - dataset.format_results(outputs, **kwargs) - if args.eval: - eval_kwargs = cfg.get('evaluation', {}).copy() - # hard-code way to remove EvalHook args - for key in [ - 'interval', 'tmpdir', 'start', 'gpu_collect', 'save_best', - 'rule' - ]: - eval_kwargs.pop(key, None) - eval_kwargs.update(dict(metric=args.eval, **kwargs)) - print(dataset.evaluate(outputs, **eval_kwargs)) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/paconv/pytorch/tools/update_data_coords.py b/cv/3d_detection/paconv/pytorch/tools/update_data_coords.py deleted file mode 100644 index 94728bcc..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/update_data_coords.py +++ /dev/null @@ -1,168 +0,0 @@ -import argparse -import time -from 
os import path as osp - -import mmcv -import numpy as np - -from mmdet3d.core.bbox import limit_period - - -def update_sunrgbd_infos(root_dir, out_dir, pkl_files): - print(f'{pkl_files} will be modified because ' - f'of the refactor of the Depth coordinate system.') - if root_dir == out_dir: - print(f'Warning, you are overwriting ' - f'the original data under {root_dir}.') - time.sleep(3) - for pkl_file in pkl_files: - in_path = osp.join(root_dir, pkl_file) - print(f'Reading from input file: {in_path}.') - a = mmcv.load(in_path) - print('Start updating:') - for item in mmcv.track_iter_progress(a): - if 'rotation_y' in item['annos']: - item['annos']['rotation_y'] = -item['annos']['rotation_y'] - item['annos']['gt_boxes_upright_depth'][:, -1:] = \ - -item['annos']['gt_boxes_upright_depth'][:, -1:] - - out_path = osp.join(out_dir, pkl_file) - print(f'Writing to output file: {out_path}.') - mmcv.dump(a, out_path, 'pkl') - - -def update_outdoor_dbinfos(root_dir, out_dir, pkl_files): - print(f'{pkl_files} will be modified because ' - f'of the refactor of the LIDAR coordinate system.') - if root_dir == out_dir: - print(f'Warning, you are overwriting ' - f'the original data under {root_dir}.') - time.sleep(3) - for pkl_file in pkl_files: - in_path = osp.join(root_dir, pkl_file) - print(f'Reading from input file: {in_path}.') - a = mmcv.load(in_path) - print('Start updating:') - for k in a.keys(): - print(f'Updating samples of class {k}:') - for item in mmcv.track_iter_progress(a[k]): - boxes = item['box3d_lidar'].copy() - # swap l, w (or dx, dy) - item['box3d_lidar'][3] = boxes[4] - item['box3d_lidar'][4] = boxes[3] - # change yaw - item['box3d_lidar'][6] = -boxes[6] - np.pi / 2 - item['box3d_lidar'][6] = limit_period( - item['box3d_lidar'][6], period=np.pi * 2) - - out_path = osp.join(out_dir, pkl_file) - print(f'Writing to output file: {out_path}.') - mmcv.dump(a, out_path, 'pkl') - - -def update_nuscenes_or_lyft_infos(root_dir, out_dir, pkl_files): - - print(f'{pkl_files} will be modified because ' - f'of the refactor of the LIDAR coordinate system.') - if root_dir == out_dir: - print(f'Warning, you are overwriting ' - f'the original data under {root_dir}.') - time.sleep(3) - for pkl_file in pkl_files: - in_path = osp.join(root_dir, pkl_file) - print(f'Reading from input file: {in_path}.') - a = mmcv.load(in_path) - print('Start updating:') - for item in mmcv.track_iter_progress(a['infos']): - boxes = item['gt_boxes'].copy() - # swap l, w (or dx, dy) - item['gt_boxes'][:, 3] = boxes[:, 4] - item['gt_boxes'][:, 4] = boxes[:, 3] - # change yaw - item['gt_boxes'][:, 6] = -boxes[:, 6] - np.pi / 2 - item['gt_boxes'][:, 6] = limit_period( - item['gt_boxes'][:, 6], period=np.pi * 2) - - out_path = osp.join(out_dir, pkl_file) - print(f'Writing to output file: {out_path}.') - mmcv.dump(a, out_path, 'pkl') - - -parser = argparse.ArgumentParser(description='Arg parser for data coords ' - 'update due to coords sys refactor.') -parser.add_argument('dataset', metavar='kitti', help='name of the dataset') -parser.add_argument( - '--root-dir', - type=str, - default='./data/kitti', - help='specify the root dir of dataset') -parser.add_argument( - '--version', - type=str, - default='v1.0', - required=False, - help='specify the dataset version, no need for kitti') -parser.add_argument( - '--out-dir', - type=str, - default=None, - required=False, - help='name of info pkl') -args = parser.parse_args() - -if __name__ == '__main__': - if args.out_dir is None: - args.out_dir = args.root_dir - if args.dataset == 
'kitti': - # KITTI infos is in CAM coord sys (unchanged) - # KITTI dbinfos is in LIDAR coord sys (changed) - # so we only update dbinfos - pkl_files = ['kitti_dbinfos_train.pkl'] - update_outdoor_dbinfos( - root_dir=args.root_dir, out_dir=args.out_dir, pkl_files=pkl_files) - elif args.dataset == 'nuscenes': - # nuScenes infos is in LIDAR coord sys (changed) - # nuScenes dbinfos is in LIDAR coord sys (changed) - # so we update both infos and dbinfos - pkl_files = ['nuscenes_infos_val.pkl'] - if args.version != 'v1.0-mini': - pkl_files.append('nuscenes_infos_train.pkl') - else: - pkl_files.append('nuscenes_infos_train_tiny.pkl') - update_nuscenes_or_lyft_infos( - root_dir=args.root_dir, out_dir=args.out_dir, pkl_files=pkl_files) - if args.version != 'v1.0-mini': - pkl_files = ['nuscenes_dbinfos_train.pkl'] - update_outdoor_dbinfos( - root_dir=args.root_dir, - out_dir=args.out_dir, - pkl_files=pkl_files) - elif args.dataset == 'lyft': - # Lyft infos is in LIDAR coord sys (changed) - # Lyft has no dbinfos - # so we update infos - pkl_files = ['lyft_infos_train.pkl', 'lyft_infos_val.pkl'] - update_nuscenes_or_lyft_infos( - root_dir=args.root_dir, out_dir=args.out_dir, pkl_files=pkl_files) - elif args.dataset == 'waymo': - # Waymo infos is in CAM coord sys (unchanged) - # Waymo dbinfos is in LIDAR coord sys (changed) - # so we only update dbinfos - pkl_files = ['waymo_dbinfos_train.pkl'] - update_outdoor_dbinfos( - root_dir=args.root_dir, out_dir=args.out_dir, pkl_files=pkl_files) - elif args.dataset == 'scannet': - # ScanNet infos is in DEPTH coord sys (changed) - # but bbox is without yaw - # so ScanNet is unaffected - pass - elif args.dataset == 's3dis': - # Segmentation datasets are not affected - pass - elif args.dataset == 'sunrgbd': - # SUNRGBD infos is in DEPTH coord sys (changed) - # and bbox is with yaw - # so we update infos - pkl_files = ['sunrgbd_infos_train.pkl', 'sunrgbd_infos_val.pkl'] - update_sunrgbd_infos( - root_dir=args.root_dir, out_dir=args.out_dir, pkl_files=pkl_files) diff --git a/cv/3d_detection/paconv/pytorch/tools/update_data_coords.sh b/cv/3d_detection/paconv/pytorch/tools/update_data_coords.sh deleted file mode 100644 index bd8db628..00000000 --- a/cv/3d_detection/paconv/pytorch/tools/update_data_coords.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env bash - -set -x -export PYTHONPATH=`pwd`:$PYTHONPATH - -PARTITION=$1 -DATASET=$2 -GPUS=${GPUS:-1} -GPUS_PER_NODE=${GPUS_PER_NODE:-1} -SRUN_ARGS=${SRUN_ARGS:-""} -JOB_NAME=update_data_coords - -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --kill-on-bad-exit=1 \ - ${SRUN_ARGS} \ - python -u tools/update_data_coords.py ${DATASET} \ - --root-dir ./data/${DATASET} \ - --out-dir ./data/${DATASET} diff --git a/cv/3d_detection/paconv/pytorch/train.py b/cv/3d_detection/paconv/pytorch/train.py deleted file mode 100644 index ed9c2a6b..00000000 --- a/cv/3d_detection/paconv/pytorch/train.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from __future__ import division -import argparse -import copy -import os -import time -import warnings -from os import path as osp - -import mmcv -import torch -import torch.distributed as dist -from mmcv import Config, DictAction -from mmcv.runner import get_dist_info, init_dist - -from mmdet import __version__ as mmdet_version -from mmdet3d import __version__ as mmdet3d_version -from mmdet3d.apis import init_random_seed, train_model -from mmdet3d.datasets import build_dataset -from mmdet3d.models import build_model -from mmdet3d.utils import collect_env, get_root_logger -from mmdet.apis import set_random_seed -from mmseg import __version__ as mmseg_version - -try: - # If mmdet version > 2.20.0, setup_multi_processes would be imported and - # used from mmdet instead of mmdet3d. - from mmdet.utils import setup_multi_processes -except ImportError: - from mmdet3d.utils import setup_multi_processes - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a detector') - parser.add_argument('config', help='train config file path') - parser.add_argument('--work-dir', help='the dir to save logs and models') - parser.add_argument( - '--resume-from', help='the checkpoint file to resume from') - parser.add_argument( - '--auto-resume', - action='store_true', - help='resume from the latest checkpoint automatically') - parser.add_argument( - '--no-validate', - action='store_true', - help='whether not to evaluate the checkpoint during training') - group_gpus = parser.add_mutually_exclusive_group() - group_gpus.add_argument( - '--gpus', - type=int, - help='(Deprecated, please use --gpu-id) number of gpus to use ' - '(only applicable to non-distributed training)') - group_gpus.add_argument( - '--gpu-ids', - type=int, - nargs='+', - help='(Deprecated, please use --gpu-id) ids of gpus to use ' - '(only applicable to non-distributed training)') - group_gpus.add_argument( - '--gpu-id', - type=int, - default=0, - help='number of gpus to use ' - '(only applicable to non-distributed training)') - parser.add_argument('--seed', type=int, default=0, help='random seed') - parser.add_argument( - '--diff-seed', - action='store_true', - help='Whether or not set different seeds for different ranks') - parser.add_argument( - '--deterministic', - action='store_true', - help='whether to set deterministic options for CUDNN backend.') - parser.add_argument( - '--options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file (deprecate), ' - 'change to --cfg-options instead.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. 
key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local_rank', type=int, default=0) - parser.add_argument( - '--autoscale-lr', - action='store_true', - help='automatically scale lr with the number of gpus') - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - if args.options and args.cfg_options: - raise ValueError( - '--options and --cfg-options cannot be both specified, ' - '--options is deprecated in favor of --cfg-options') - if args.options: - warnings.warn('--options is deprecated in favor of --cfg-options') - args.cfg_options = args.options - - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - # set multi-process settings - setup_multi_processes(cfg) - - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - - # work_dir is determined in this priority: CLI > segment in file > filename - if args.work_dir is not None: - # update configs according to CLI args if args.work_dir is not None - cfg.work_dir = args.work_dir - elif cfg.get('work_dir', None) is None: - # use config filename as default work_dir if cfg.work_dir is None - cfg.work_dir = osp.join('./work_dirs', - osp.splitext(osp.basename(args.config))[0]) - if args.resume_from is not None: - cfg.resume_from = args.resume_from - - if args.auto_resume: - cfg.auto_resume = args.auto_resume - warnings.warn('`--auto-resume` is only supported when mmdet' - 'version >= 2.20.0 for 3D detection model or' - 'mmsegmentation verision >= 0.21.0 for 3D' - 'segmentation model') - - if args.gpus is not None: - cfg.gpu_ids = range(1) - warnings.warn('`--gpus` is deprecated because we only support ' - 'single GPU mode in non-distributed training. ' - 'Use `gpus=1` now.') - if args.gpu_ids is not None: - cfg.gpu_ids = args.gpu_ids[0:1] - warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. ' - 'Because we only support single GPU mode in ' - 'non-distributed training. Use the first GPU ' - 'in `gpu_ids` now.') - if args.gpus is None and args.gpu_ids is None: - cfg.gpu_ids = [args.gpu_id] - - if args.autoscale_lr: - # apply the linear scaling rule (https://arxiv.org/abs/1706.02677) - cfg.optimizer['lr'] = cfg.optimizer['lr'] * len(cfg.gpu_ids) / 8 - - # init distributed env first, since logger depends on the dist info. 
- if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - # re-set gpu_ids with distributed training mode - _, world_size = get_dist_info() - cfg.gpu_ids = range(world_size) - - # create work_dir - mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir)) - # dump config - cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config))) - # init the logger before other steps - timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime()) - log_file = osp.join(cfg.work_dir, f'{timestamp}.log') - # specify logger name, if we still use 'mmdet', the output info will be - # filtered and won't be saved in the log_file - # TODO: ugly workaround to judge whether we are training det or seg model - if cfg.model.type in ['EncoderDecoder3D']: - logger_name = 'mmseg' - else: - logger_name = 'mmdet' - logger = get_root_logger( - log_file=log_file, log_level=cfg.log_level, name=logger_name) - - # init the meta dict to record some important information such as - # environment info and seed, which will be logged - meta = dict() - # log env info - env_info_dict = collect_env() - env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()]) - dash_line = '-' * 60 + '\n' - logger.info('Environment info:\n' + dash_line + env_info + '\n' + - dash_line) - meta['env_info'] = env_info - meta['config'] = cfg.pretty_text - - # log some basic info - logger.info(f'Distributed training: {distributed}') - logger.info(f'Config:\n{cfg.pretty_text}') - - # set random seeds - seed = init_random_seed(args.seed) - seed = seed + dist.get_rank() if args.diff_seed else seed - logger.info(f'Set random seed to {seed}, ' - f'deterministic: {args.deterministic}') - set_random_seed(seed, deterministic=args.deterministic) - cfg.seed = seed - meta['seed'] = seed - meta['exp_name'] = osp.basename(args.config) - - model = build_model( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - model.init_weights() - - logger.info(f'Model:\n{model}') - datasets = [build_dataset(cfg.data.train)] - if len(cfg.workflow) == 2: - val_dataset = copy.deepcopy(cfg.data.val) - # in case we use a dataset wrapper - if 'dataset' in cfg.data.train: - val_dataset.pipeline = cfg.data.train.dataset.pipeline - else: - val_dataset.pipeline = cfg.data.train.pipeline - # set test_mode=False here in deep copied config - # which do not affect AP/AR calculation later - # refer to https://mmdetection3d.readthedocs.io/en/latest/tutorials/customize_runtime.html#customize-workflow # noqa - val_dataset.test_mode = False - datasets.append(build_dataset(val_dataset)) - if cfg.checkpoint_config is not None: - # save mmdet version, config file content and class names in - # checkpoints as meta data - cfg.checkpoint_config.meta = dict( - mmdet_version=mmdet_version, - mmseg_version=mmseg_version, - mmdet3d_version=mmdet3d_version, - config=cfg.pretty_text, - CLASSES=datasets[0].CLASSES, - PALETTE=datasets[0].PALETTE # for segmentors - if hasattr(datasets[0], 'PALETTE') else None) - # add an attribute for visualization convenience - model.CLASSES = datasets[0].CLASSES - train_model( - model, - datasets, - cfg, - distributed=distributed, - validate=(not args.no_validate), - timestamp=timestamp, - meta=meta) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/internimage/pytorch/README.md b/cv/classification/internimage/pytorch/README.md index f3240431..33708fe8 100644 --- a/cv/classification/internimage/pytorch/README.md +++ 
b/cv/classification/internimage/pytorch/README.md @@ -12,8 +12,11 @@ - `PyTorch>=1.10.0` and `torchvision>=0.9.0` with `CUDA>=10.2` ```bash -## Install libGL +# Install libGL +## CentOS yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx ## Install mmcv cd mmcv/ -- Gitee From 4531fb970b23b2e82163a21426d05471e7c6689b Mon Sep 17 00:00:00 2001 From: "hongliang.yuan" Date: Wed, 5 Mar 2025 13:36:29 +0800 Subject: [PATCH 3/7] update PointNet2 use github mmdetection3d and delete useless code --- cv/3d_detection/paconv/pytorch/README.md | 7 +- .../pytorch/{mmdetection3d => }/README.md | 42 +- .../pytorch/mmdetection3d/.gitignore | 136 - .../pointnet2/pytorch/mmdetection3d/LICENSE | 203 -- .../configs/3dssd/3dssd_4x4_kitti-3d-car.py | 121 - .../mmdetection3d/configs/3dssd/README.md | 45 - .../mmdetection3d/configs/3dssd/metafile.yml | 29 - .../configs/_base_/datasets/coco_instance.py | 48 - .../_base_/datasets/kitti-3d-3class.py | 142 - .../configs/_base_/datasets/kitti-3d-car.py | 140 - .../configs/_base_/datasets/kitti-mono3d.py | 94 - .../configs/_base_/datasets/lyft-3d.py | 138 - .../configs/_base_/datasets/nuim_instance.py | 59 - .../configs/_base_/datasets/nus-3d.py | 144 - .../configs/_base_/datasets/nus-mono3d.py | 102 - .../_base_/datasets/range100_lyft-3d.py | 136 - .../_base_/datasets/s3dis-3d-5class.py | 114 - .../_base_/datasets/s3dis_seg-3d-13class.py | 161 - .../_base_/datasets/scannet-3d-18class.py | 128 - .../_base_/datasets/scannet_seg-3d-20class.py | 132 - .../_base_/datasets/sunrgbd-3d-10class.py | 107 - .../_base_/datasets/waymoD5-3d-3class.py | 147 - .../configs/_base_/datasets/waymoD5-3d-car.py | 145 - .../configs/_base_/default_runtime.py | 25 - .../configs/_base_/models/3dssd.py | 79 - .../models/cascade_mask_rcnn_r50_fpn.py | 198 -- .../centerpoint_01voxel_second_secfpn_nus.py | 83 - .../centerpoint_02pillar_second_secfpn_nus.py | 83 - .../configs/_base_/models/dgcnn.py | 30 - .../configs/_base_/models/fcos3d.py | 80 - .../configs/_base_/models/groupfree3d.py | 73 - .../configs/_base_/models/h3dnet.py | 343 -- .../_base_/models/hv_pointpillars_fpn_lyft.py | 22 - .../_base_/models/hv_pointpillars_fpn_nus.py | 95 - .../hv_pointpillars_fpn_range100_lyft.py | 22 - .../models/hv_pointpillars_secfpn_kitti.py | 94 - .../models/hv_pointpillars_secfpn_waymo.py | 107 - .../_base_/models/hv_second_secfpn_kitti.py | 89 - .../_base_/models/hv_second_secfpn_waymo.py | 99 - .../configs/_base_/models/imvotenet_image.py | 108 - .../_base_/models/mask_rcnn_r50_fpn.py | 124 - .../configs/_base_/models/paconv_cuda_ssg.py | 7 - .../configs/_base_/models/paconv_ssg.py | 51 - .../configs/_base_/models/parta2.py | 203 -- .../configs/_base_/models/pgd.py | 57 - .../configs/_base_/models/point_rcnn.py | 133 - .../configs/_base_/models/pointnet2_msg.py | 28 - .../configs/_base_/models/pointnet2_ssg.py | 37 - .../configs/_base_/models/smoke.py | 55 - .../configs/_base_/models/votenet.py | 75 - .../configs/_base_/schedules/cosine.py | 22 - .../configs/_base_/schedules/cyclic_20e.py | 24 - .../configs/_base_/schedules/cyclic_40e.py | 31 - .../_base_/schedules/mmdet_schedule_1x.py | 11 - .../configs/_base_/schedules/schedule_2x.py | 14 - .../configs/_base_/schedules/schedule_3x.py | 9 - .../_base_/schedules/seg_cosine_100e.py | 8 - .../_base_/schedules/seg_cosine_150e.py | 9 - .../_base_/schedules/seg_cosine_200e.py | 9 - .../_base_/schedules/seg_cosine_50e.py | 9 - ...pn_4x8_cyclic_80e_pcdet_kitti-3d-3class.py | 334 -- ...lars_secfpn_3x8_100e_det3d_kitti-3d-car.py | 203 -- 
...rs_secfpn_4x8_80e_pcdet_kitti-3d-3class.py | 246 -- ...nd_secfpn_4x8_80e_pcdet_kitti-3d-3class.py | 253 -- .../configs/centerpoint/README.md | 138 - ...5voxel_second_secfpn_4x8_cyclic_20e_nus.py | 140 - ...ond_secfpn_circlenms_4x8_cyclic_20e_nus.py | 3 - ...el_second_secfpn_dcn_4x8_cyclic_20e_nus.py | 15 - ..._secfpn_dcn_4x8_cyclic_flip-tta_20e_nus.py | 50 - ...econd_secfpn_dcn_4x8_cyclic_tta_20e_nus.py | 52 - ...secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py | 16 - ...n_circlenms_4x8_cyclic_flip-tta_20e_nus.py | 51 - ...1voxel_second_secfpn_4x8_cyclic_20e_nus.py | 171 - ...ond_secfpn_circlenms_4x8_cyclic_20e_nus.py | 3 - ...el_second_secfpn_dcn_4x8_cyclic_20e_nus.py | 15 - ...secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py | 16 - ...pillar_second_secfpn_4x8_cyclic_20e_nus.py | 170 - ...ond_secfpn_circlenms_4x8_cyclic_20e_nus.py | 3 - ...ar_second_secfpn_dcn_4x8_cyclic_20e_nus.py | 15 - ...secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py | 16 - .../configs/centerpoint/metafile.yml | 95 - .../mmdetection3d/configs/dgcnn/README.md | 55 - ...n_32x4_cosine_100e_s3dis_seg-3d-13class.py | 24 - .../mmdetection3d/configs/dgcnn/metafile.yml | 24 - .../configs/dynamic_voxelization/README.md | 40 - ...intpillars_secfpn_6x8_160e_kitti-3d-car.py | 19 - ...d_secfpn_2x8_cosine_80e_kitti-3d-3class.py | 22 - .../dv_second_secfpn_6x8_80e_kitti-3d-car.py | 18 - .../configs/dynamic_voxelization/metafile.yml | 53 - .../mmdetection3d/configs/fcos3d/README.md | 75 - ...caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py | 75 - ..._gn-head_dcn_2x8_1x_nus-mono3d_finetune.py | 8 - .../mmdetection3d/configs/fcos3d/metafile.yml | 43 - .../configs/free_anchor/README.md | 105 - ...s_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py | 47 - ...f_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py | 18 - ...ll_free-anchor_strong-aug_4x8_3x_nus-3d.py | 70 - ...f_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py | 18 - ...ll_free-anchor_strong-aug_4x8_3x_nus-3d.py | 70 - ...f_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py | 18 - .../configs/free_anchor/metafile.yml | 96 - .../configs/groupfree3d/README.md | 44 - ...pfree3d_8x4_scannet-3d-18class-L12-O256.py | 199 -- ...upfree3d_8x4_scannet-3d-18class-L6-O256.py | 198 -- ...e3d_8x4_scannet-3d-18class-w2x-L12-O256.py | 214 -- ...e3d_8x4_scannet-3d-18class-w2x-L12-O512.py | 215 -- .../configs/groupfree3d/metafile.yml | 72 - .../mmdetection3d/configs/h3dnet/README.md | 44 - .../h3dnet/h3dnet_3x8_scannet-3d-18class.py | 64 - .../mmdetection3d/configs/h3dnet/metafile.yml | 29 - .../mmdetection3d/configs/imvotenet/README.md | 43 - ...ter_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py | 58 - ...mvotenet_stage2_16x8_sunrgbd-3d-10class.py | 260 -- .../configs/imvotenet/metafile.yml | 43 - .../configs/imvoxelnet/README.md | 38 - .../imvoxelnet/imvoxelnet_4x8_kitti-3d-car.py | 162 - .../configs/imvoxelnet/metafile.yml | 29 - .../mmdetection3d/configs/monoflex/README.md | 48 - .../configs/monoflex/metafile.yml | 30 - .../mmdetection3d/configs/mvxnet/README.md | 38 - ...nd_secfpn_adamw_2x8_80e_kitti-3d-3class.py | 251 -- .../mmdetection3d/configs/mvxnet/metafile.yml | 30 - .../mmdetection3d/configs/nuimages/README.md | 59 - .../cascade_mask_rcnn_r101_fpn_1x_nuim.py | 2 - .../cascade_mask_rcnn_r50_fpn_1x_nuim.py | 60 - ...cade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py | 3 - ...ade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py | 7 - ...ascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py | 13 - .../configs/nuimages/htc_r50_fpn_1x_nuim.py | 46 - .../nuimages/htc_r50_fpn_coco-20e_1x_nuim.py | 3 - .../nuimages/htc_r50_fpn_coco-20e_20e_nuim.py | 4 - 
.../htc_without_semantic_r50_fpn_1x_nuim.py | 221 -- ..._fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py | 23 - .../nuimages/mask_rcnn_r101_fpn_1x_nuim.py | 2 - .../mask_rcnn_r50_caffe_fpn_1x_nuim.py | 46 - ...mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py | 48 - ...ask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py | 52 - .../nuimages/mask_rcnn_r50_fpn_1x_nuim.py | 8 - .../mask_rcnn_r50_fpn_coco-2x_1x_nuim.py | 9 - .../mask_rcnn_r50_fpn_coco-2x_1x_nus-2d.py | 39 - .../mask_rcnn_x101_32x4d_fpn_1x_nuim.py | 13 - .../configs/nuimages/metafile.yml | 255 -- .../mmdetection3d/configs/paconv/README.md | 51 - .../mmdetection3d/configs/paconv/metafile.yml | 29 - ...sg_8x8_cosine_200e_s3dis_seg-3d-13class.py | 69 - ...sg_8x8_cosine_150e_s3dis_seg-3d-13class.py | 66 - .../mmdetection3d/configs/parta2/README.md | 38 - ...2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py | 122 - ...rtA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py | 137 - .../mmdetection3d/configs/parta2/metafile.yml | 41 - .../mmdetection3d/configs/pgd/README.md | 69 - .../mmdetection3d/configs/pgd/metafile.yml | 81 - ...01_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py | 107 - ...fpn_gn-head_2x16_1x_nus-mono3d_finetune.py | 9 - ...01_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py | 5 - ...fpn_gn-head_2x16_2x_nus-mono3d_finetune.py | 9 - ...1_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py | 127 - .../configs/point_rcnn/README.md | 47 - .../configs/point_rcnn/metafile.yml | 29 - .../point_rcnn_2x8_kitti-3d-3classes.py | 94 - .../mmdetection3d/configs/pointnet2/README.md | 72 - .../configs/pointnet2/metafile.yml | 94 - ...16x2_cosine_250e_scannet_seg-3d-20class.py | 36 - ...sg_16x2_cosine_80e_s3dis_seg-3d-13class.py | 27 - ...16x2_cosine_250e_scannet_seg-3d-20class.py | 166 - ...16x2_cosine_200e_scannet_seg-3d-20class.py | 34 - ...sg_16x2_cosine_50e_s3dis_seg-3d-13class.py | 25 - ...16x2_cosine_200e_scannet_seg-3d-20class.py | 164 - .../configs/pointpillars/README.md | 78 - ...pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py | 5 - ..._pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py | 5 - ...tpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py | 4 - ...ars_fpn_sbn-all_range100_2x8_2x_lyft-3d.py | 5 - ...pillars_secfpn_6x8_160e_kitti-3d-3class.py | 81 - ...intpillars_secfpn_6x8_160e_kitti-3d-car.py | 87 - ...ntpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py | 43 - ...intpillars_secfpn_sbn-all_4x8_2x_nus-3d.py | 42 - ...llars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py | 4 - ..._secfpn_sbn-all_range100_2x8_2x_lyft-3d.py | 42 - ...lars_secfpn_sbn_2x16_2x_waymo-3d-3class.py | 9 - ...pillars_secfpn_sbn_2x16_2x_waymo-3d-car.py | 37 - ...rs_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py | 6 - ...llars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py | 34 - .../configs/pointpillars/metafile.yml | 213 -- .../mmdetection3d/configs/regnet/README.md | 82 - ..._regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py | 24 - ...regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py | 24 - ..._regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py | 24 - ...et-400mf_fpn_sbn-all_fp16_2x8_2x_nus-3d.py | 4 - ...0mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py | 24 - ...net-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py | 39 - ...gnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py | 38 - ..._secfpn_sbn-all_range100_2x8_2x_lyft-3d.py | 40 - .../mmdetection3d/configs/regnet/metafile.yml | 85 - .../mmdetection3d/configs/sassd/README.md | 28 - .../sassd/sassd_6x8_80e_kitti-3d-3class.py | 94 - .../mmdetection3d/configs/second/README.md | 54 - ...v_second_secfpn_6x8_80e_kitti-3d-3class.py | 5 - .../hv_second_secfpn_6x8_80e_kitti-3d-car.py | 30 - ...ond_secfpn_fp16_6x8_80e_kitti-3d-3class.py | 3 - 
...second_secfpn_fp16_6x8_80e_kitti-3d-car.py | 3 - ...nd_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py | 112 - .../mmdetection3d/configs/second/metafile.yml | 97 - .../mmdetection3d/configs/smoke/README.md | 47 - .../mmdetection3d/configs/smoke/metafile.yml | 30 - ...orch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py | 64 - .../mmdetection3d/configs/ssn/README.md | 53 - ...et-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py | 21 - ...net-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py | 19 - .../hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py | 224 -- .../hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py | 238 -- .../mmdetection3d/configs/ssn/metafile.yml | 72 - .../mmdetection3d/configs/votenet/README.md | 68 - .../configs/votenet/metafile.yml | 59 - .../votenet_16x8_sunrgbd-3d-10class.py | 21 - .../votenet/votenet_8x8_scannet-3d-18class.py | 36 - .../votenet_iouloss_8x8_scannet-3d-18class.py | 8 - .../mmdetection3d/data/s3dis/README.md | 59 - .../data/s3dis/collect_indoor3d_data.py | 50 - .../mmdetection3d/data/s3dis/indoor3d_util.py | 55 - .../data/s3dis/meta_data/anno_paths.txt | 272 -- .../data/s3dis/meta_data/class_names.txt | 13 - .../pytorch/mmdetection3d/dist_train.sh | 34 - .../pytorch/mmdetection3d/mmcv/__init__.py | 16 - .../mmdetection3d/mmcv/arraymisc/__init__.py | 4 - .../mmcv/arraymisc/quantization.py | 65 - .../mmdetection3d/mmcv/cnn/__init__.py | 41 - .../pytorch/mmdetection3d/mmcv/cnn/alexnet.py | 63 - .../mmdetection3d/mmcv/cnn/bricks/__init__.py | 35 - .../mmcv/cnn/bricks/activation.py | 93 - .../mmcv/cnn/bricks/context_block.py | 125 - .../mmdetection3d/mmcv/cnn/bricks/conv.py | 44 - .../cnn/bricks/conv2d_adaptive_padding.py | 62 - .../mmcv/cnn/bricks/conv_module.py | 206 -- .../mmdetection3d/mmcv/cnn/bricks/conv_ws.py | 148 - .../bricks/depthwise_separable_conv_module.py | 96 - .../mmdetection3d/mmcv/cnn/bricks/drop.py | 65 - .../mmcv/cnn/bricks/generalized_attention.py | 412 --- .../mmdetection3d/mmcv/cnn/bricks/hsigmoid.py | 46 - .../mmdetection3d/mmcv/cnn/bricks/hswish.py | 38 - .../mmcv/cnn/bricks/non_local.py | 303 -- .../mmdetection3d/mmcv/cnn/bricks/norm.py | 144 - .../mmdetection3d/mmcv/cnn/bricks/padding.py | 36 - .../mmdetection3d/mmcv/cnn/bricks/plugin.py | 89 - .../mmdetection3d/mmcv/cnn/bricks/registry.py | 16 - .../mmdetection3d/mmcv/cnn/bricks/scale.py | 21 - .../mmdetection3d/mmcv/cnn/bricks/swish.py | 25 - .../mmcv/cnn/bricks/transformer.py | 944 ------ .../mmdetection3d/mmcv/cnn/bricks/upsample.py | 84 - .../mmdetection3d/mmcv/cnn/bricks/wrappers.py | 180 - .../pytorch/mmdetection3d/mmcv/cnn/builder.py | 30 - .../pytorch/mmdetection3d/mmcv/cnn/resnet.py | 322 -- .../mmdetection3d/mmcv/cnn/utils/__init__.py | 19 - .../mmcv/cnn/utils/flops_counter.py | 603 ---- .../mmcv/cnn/utils/fuse_conv_bn.py | 59 - .../mmdetection3d/mmcv/cnn/utils/sync_bn.py | 61 - .../mmcv/cnn/utils/weight_init.py | 708 ---- .../pytorch/mmdetection3d/mmcv/cnn/vgg.py | 177 - .../mmdetection3d/mmcv/device/__init__.py | 4 - .../mmdetection3d/mmcv/device/ipu/__init__.py | 14 - .../mmcv/device/ipu/dataloader.py | 157 - .../device/ipu/hierarchical_data_manager.py | 243 -- .../mmcv/device/ipu/hook_wrapper.py | 105 - .../mmcv/device/ipu/model_wrapper.py | 721 ---- .../mmdetection3d/mmcv/device/ipu/runner.py | 142 - .../mmdetection3d/mmcv/device/ipu/utils.py | 244 -- .../mmdetection3d/mmcv/device/mlu/__init__.py | 9 - .../mmcv/device/mlu/_functions.py | 24 - .../mmcv/device/mlu/data_parallel.py | 41 - .../mmcv/device/mlu/distributed.py | 20 - .../mmcv/device/mlu/scatter_gather.py | 59 - .../mmdetection3d/mmcv/engine/__init__.py | 8 - 
.../pytorch/mmdetection3d/mmcv/engine/test.py | 213 -- .../mmdetection3d/mmcv/fileio/__init__.py | 11 - .../mmdetection3d/mmcv/fileio/file_client.py | 1173 ------- .../mmcv/fileio/handlers/__init__.py | 7 - .../mmcv/fileio/handlers/base.py | 30 - .../mmcv/fileio/handlers/json_handler.py | 36 - .../mmcv/fileio/handlers/pickle_handler.py | 26 - .../mmcv/fileio/handlers/yaml_handler.py | 25 - .../pytorch/mmdetection3d/mmcv/fileio/io.py | 163 - .../mmdetection3d/mmcv/fileio/parse.py | 99 - .../mmdetection3d/mmcv/image/__init__.py | 29 - .../mmdetection3d/mmcv/image/colorspace.py | 309 -- .../mmdetection3d/mmcv/image/geometric.py | 741 ----- .../pytorch/mmdetection3d/mmcv/image/io.py | 314 -- .../pytorch/mmdetection3d/mmcv/image/misc.py | 53 - .../mmdetection3d/mmcv/image/photometric.py | 471 --- .../mmcv/model_zoo/deprecated.json | 6 - .../mmdetection3d/mmcv/model_zoo/mmcls.json | 59 - .../mmcv/model_zoo/open_mmlab.json | 50 - .../mmcv/model_zoo/torchvision_0.12.json | 57 - .../mmdetection3d/mmcv/onnx/__init__.py | 5 - .../pytorch/mmdetection3d/mmcv/onnx/info.py | 35 - .../mmcv/onnx/onnx_utils/__init__.py | 1 - .../mmcv/onnx/onnx_utils/symbolic_helper.py | 331 -- .../mmdetection3d/mmcv/onnx/symbolic.py | 509 --- .../mmdetection3d/mmcv/ops/__init__.py | 104 - .../mmcv/ops/active_rotated_filter.py | 64 - .../mmcv/ops/assign_score_withk.py | 131 - .../mmdetection3d/mmcv/ops/ball_query.py | 58 - .../pytorch/mmdetection3d/mmcv/ops/bbox.py | 130 - .../mmdetection3d/mmcv/ops/border_align.py | 114 - .../mmdetection3d/mmcv/ops/box_iou_rotated.py | 148 - .../pytorch/mmdetection3d/mmcv/ops/carafe.py | 301 -- .../mmdetection3d/mmcv/ops/cc_attention.py | 84 - .../mmcv/ops/chamfer_distance.py | 95 - .../mmdetection3d/mmcv/ops/contour_expand.py | 52 - .../mmdetection3d/mmcv/ops/convex_iou.py | 52 - .../mmdetection3d/mmcv/ops/corner_pool.py | 156 - .../mmdetection3d/mmcv/ops/correlation.py | 200 -- .../mmdetection3d/mmcv/ops/csrc/README.md | 170 - .../ops/csrc/common/box_iou_rotated_utils.hpp | 347 -- .../active_rotated_filter_cuda_kernel.cuh | 59 - .../cuda/assign_score_withk_cuda_kernel.cuh | 116 - .../common/cuda/ball_query_cuda_kernel.cuh | 58 - .../common/cuda/bbox_overlaps_cuda_kernel.cuh | 147 - .../common/cuda/border_align_cuda_kernel.cuh | 200 -- .../csrc/common/cuda/box_iou_rotated_cuda.cuh | 81 - .../csrc/common/cuda/carafe_cuda_kernel.cuh | 332 -- .../common/cuda/carafe_naive_cuda_kernel.cuh | 111 - .../cuda/chamfer_distance_cuda_kernel.cuh | 101 - .../csrc/common/cuda/common_cuda_helper.hpp | 122 - .../common/cuda/convex_iou_cuda_kernel.cuh | 831 ----- .../ops/csrc/common/cuda/correlation_cuda.cuh | 225 -- .../common/cuda/deform_conv_cuda_kernel.cuh | 367 --- .../cuda/deform_roi_pool_cuda_kernel.cuh | 186 -- .../cuda/diff_iou_rotated_cuda_kernel.cuh | 136 - .../furthest_point_sample_cuda_kernel.cuh | 152 - .../common/cuda/gather_points_cuda_kernel.cuh | 58 - .../common/cuda/group_points_cuda_kernel.cuh | 65 - .../csrc/common/cuda/iou3d_cuda_kernel.cuh | 367 --- .../ops/csrc/common/cuda/knn_cuda_kernel.cuh | 92 - .../common/cuda/masked_conv2d_cuda_kernel.cuh | 62 - .../common/cuda/min_area_polygons_cuda.cuh | 300 -- .../modulated_deform_conv_cuda_kernel.cuh | 399 --- .../cuda/ms_deform_attn_cuda_kernel.cuh | 801 ----- .../ops/csrc/common/cuda/nms_cuda_kernel.cuh | 117 - .../ops/csrc/common/cuda/nms_rotated_cuda.cuh | 133 - .../common/cuda/parrots_cudawarpfunction.cuh | 109 - .../cuda/points_in_boxes_cuda_kernel.cuh | 95 - .../cuda/points_in_polygons_cuda_kernel.cuh | 79 - 
.../csrc/common/cuda/psamask_cuda_kernel.cuh | 141 - .../cuda/riroi_align_rotated_cuda_kernel.cuh | 242 -- .../common/cuda/roi_align_cuda_kernel.cuh | 212 -- .../cuda/roi_align_rotated_cuda_kernel.cuh | 202 -- .../csrc/common/cuda/roi_pool_cuda_kernel.cuh | 93 - .../cuda/roiaware_pool3d_cuda_kernel.cuh | 260 -- .../cuda/roipoint_pool3d_cuda_kernel.cuh | 134 - .../rotated_feature_align_cuda_kernel.cuh | 129 - .../cuda/scatter_points_cuda_kernel.cuh | 187 -- .../cuda/sigmoid_focal_loss_cuda_kernel.cuh | 71 - .../cuda/softmax_focal_loss_cuda_kernel.cuh | 72 - .../csrc/common/cuda/sync_bn_cuda_kernel.cuh | 331 -- .../cuda/three_interpolate_cuda_kernel.cuh | 61 - .../csrc/common/cuda/three_nn_cuda_kernel.cuh | 67 - .../common/cuda/tin_shift_cuda_kernel.cuh | 61 - .../common/cuda/voxelization_cuda_kernel.cuh | 216 -- .../common/mlu/bbox_overlaps_mlu_kernel.mlu | 322 -- .../ops/csrc/common/mlu/common_mlu_helper.hpp | 38 - .../mlu/focal_loss_sigmoid_mlu_kernel.mlu | 888 ----- .../ops/csrc/common/mlu/nms_mlu_kernel.mlu | 1161 ------- .../csrc/common/mlu/psamask_mlu_kernel.mlu | 615 ---- .../ops/csrc/common/mlu/psamask_utils.hpp | 55 - .../csrc/common/mlu/roi_align_mlu_kernel.mlu | 493 --- .../mlu/roi_align_rotated_mlu_kernel.mlu | 472 --- .../common/mlu/roi_align_rotated_utils.hpp | 24 - .../csrc/common/mlu/tin_shift_mlu_kernel.mlu | 307 -- .../ops/csrc/common/parrots_cpp_helper.hpp | 40 - .../ops/csrc/common/parrots_cuda_helper.hpp | 111 - .../ops/csrc/common/pytorch_cpp_helper.hpp | 27 - .../ops/csrc/common/pytorch_cuda_helper.hpp | 19 - .../csrc/common/pytorch_device_registry.hpp | 141 - .../ops/csrc/common/pytorch_mlu_helper.hpp | 28 - .../mmcv/ops/csrc/onnxruntime/corner_pool.h | 46 - .../ops/csrc/onnxruntime/cpu/corner_pool.cpp | 123 - .../ops/csrc/onnxruntime/cpu/deform_conv.cpp | 263 -- .../ops/csrc/onnxruntime/cpu/gridSample.cpp | 314 -- .../onnxruntime/cpu/modulated_deform_conv.cpp | 292 -- .../mmcv/ops/csrc/onnxruntime/cpu/nms.cpp | 108 - .../onnxruntime/cpu/onnxruntime_register.cpp | 88 - .../ops/csrc/onnxruntime/cpu/reduce_ops.cpp | 188 -- .../ops/csrc/onnxruntime/cpu/roi_align.cpp | 265 -- .../onnxruntime/cpu/roi_align_rotated.cpp | 247 -- .../onnxruntime/cpu/rotated_feature_align.cpp | 132 - .../ops/csrc/onnxruntime/cpu/soft_nms.cpp | 156 - .../mmcv/ops/csrc/onnxruntime/deform_conv.h | 57 - .../mmcv/ops/csrc/onnxruntime/grid_sample.h | 44 - .../csrc/onnxruntime/modulated_deform_conv.h | 61 - .../mmcv/ops/csrc/onnxruntime/nms.h | 45 - .../csrc/onnxruntime/onnxruntime_register.h | 16 - .../onnxruntime_session_options_config_keys.h | 44 - .../ops/csrc/onnxruntime/ort_mmcv_utils.h | 15 - .../mmcv/ops/csrc/onnxruntime/reduce_ops.h | 95 - .../mmcv/ops/csrc/onnxruntime/roi_align.h | 62 - .../ops/csrc/onnxruntime/roi_align_rotated.h | 62 - .../csrc/onnxruntime/rotated_feature_align.h | 50 - .../mmcv/ops/csrc/onnxruntime/soft_nms.h | 49 - .../csrc/parrots/active_rotated_filter.cpp | 28 - .../parrots/active_rotated_filter_parrots.cpp | 63 - .../parrots/active_rotated_filter_pytorch.h | 13 - .../ops/csrc/parrots/assign_score_withk.cpp | 42 - .../parrots/assign_score_withk_parrots.cpp | 89 - .../csrc/parrots/assign_score_withk_pytorch.h | 19 - .../ops/csrc/parrots/ball_query._parrots.cpp | 43 - .../mmcv/ops/csrc/parrots/ball_query.cpp | 20 - .../ops/csrc/parrots/ball_query_pytorch.h | 11 - .../mmcv/ops/csrc/parrots/bbox_overlaps.cpp | 14 - .../csrc/parrots/bbox_overlaps_parrots.cpp | 40 - .../ops/csrc/parrots/bbox_overlaps_pytorch.h | 10 - .../mmcv/ops/csrc/parrots/border_align.cpp | 30 - 
.../ops/csrc/parrots/border_align_parrots.cpp | 51 - .../ops/csrc/parrots/border_align_pytorch.h | 17 - .../mmcv/ops/csrc/parrots/box_iou_rotated.cpp | 19 - .../csrc/parrots/box_iou_rotated_parrots.cpp | 61 - .../csrc/parrots/box_iou_rotated_pytorch.h | 15 - .../mmcv/ops/csrc/parrots/carafe.cpp | 38 - .../mmcv/ops/csrc/parrots/carafe_naive.cpp | 32 - .../ops/csrc/parrots/carafe_naive_parrots.cpp | 74 - .../ops/csrc/parrots/carafe_naive_pytorch.h | 15 - .../mmcv/ops/csrc/parrots/carafe_parrots.cpp | 88 - .../mmcv/ops/csrc/parrots/carafe_pytorch.h | 16 - .../mmcv/ops/csrc/parrots/contour_expand.cpp | 111 - .../csrc/parrots/contour_expand_parrots.cpp | 43 - .../ops/csrc/parrots/contour_expand_pytorch.h | 12 - .../mmcv/ops/csrc/parrots/convex_iou.cpp | 23 - .../ops/csrc/parrots/convex_iou_parrots.cpp | 40 - .../ops/csrc/parrots/convex_iou_pytorch.h | 11 - .../mmcv/ops/csrc/parrots/correlation.cpp | 47 - .../ops/csrc/parrots/correlation_parrots.cpp | 176 - .../ops/csrc/parrots/correlation_pytorch.h | 18 - .../mmcv/ops/csrc/parrots/cudabind.cpp | 1591 --------- .../mmcv/ops/csrc/parrots/deform_conv.cpp | 517 --- .../ops/csrc/parrots/deform_conv_parrots.cpp | 273 -- .../ops/csrc/parrots/deform_conv_pytorch.h | 28 - .../mmcv/ops/csrc/parrots/deform_roi_pool.cpp | 42 - .../csrc/parrots/deform_roi_pool_parrots.cpp | 102 - .../csrc/parrots/deform_roi_pool_pytorch.h | 18 - .../ops/csrc/parrots/diff_iou_rotated.cpp | 14 - .../csrc/parrots/diff_iou_rotated_parrots.cpp | 28 - .../csrc/parrots/diff_iou_rotated_pytorch.h | 10 - .../mmcv/ops/csrc/parrots/focal_loss.cpp | 53 - .../ops/csrc/parrots/focal_loss_parrots.cpp | 113 - .../ops/csrc/parrots/focal_loss_pytorch.h | 21 - .../csrc/parrots/furthest_point_sample.cpp | 34 - .../parrots/furthest_point_sample_parrots.cpp | 57 - .../parrots/furthest_point_sample_pytorch.h | 14 - .../ops/csrc/parrots/fused_bias_leakyrelu.cpp | 119 - .../ops/csrc/parrots/fused_bias_parrots.cpp | 41 - .../mmcv/ops/csrc/parrots/gather_points.cpp | 30 - .../csrc/parrots/gather_points_parrots.cpp | 71 - .../ops/csrc/parrots/gather_points_pytorch.h | 13 - .../mmcv/ops/csrc/parrots/group_points.cpp | 34 - .../ops/csrc/parrots/group_points_parrots.cpp | 72 - .../ops/csrc/parrots/group_points_pytorch.h | 15 - .../mmcv/ops/csrc/parrots/info.cpp | 56 - .../mmcv/ops/csrc/parrots/iou3d.cpp | 135 - .../mmcv/ops/csrc/parrots/iou3d_parrots.cpp | 70 - .../mmcv/ops/csrc/parrots/iou3d_pytorch.h | 16 - .../mmcv/ops/csrc/parrots/knn.cpp | 17 - .../mmcv/ops/csrc/parrots/knn_parrots.cpp | 41 - .../mmcv/ops/csrc/parrots/knn_pytorch.h | 9 - .../mmcv/ops/csrc/parrots/masked_conv2d.cpp | 33 - .../csrc/parrots/masked_conv2d_parrots.cpp | 72 - .../ops/csrc/parrots/masked_conv2d_pytorch.h | 15 - .../ops/csrc/parrots/min_area_polygons.cpp | 11 - .../parrots/min_area_polygons_parrots.cpp | 26 - .../csrc/parrots/min_area_polygons_pytorch.h | 9 - .../csrc/parrots/modulated_deform_conv.cpp | 237 -- .../parrots/modulated_deform_conv_parrots.cpp | 199 -- .../parrots/modulated_deform_conv_pytorch.h | 21 - .../mmcv/ops/csrc/parrots/ms_deform_attn.cpp | 60 - .../csrc/parrots/ms_deform_attn_parrots.cpp | 69 - .../mmcv/ops/csrc/parrots/nms.cpp | 33 - .../mmcv/ops/csrc/parrots/nms_parrots.cpp | 140 - .../mmcv/ops/csrc/parrots/nms_pytorch.h | 18 - .../mmcv/ops/csrc/parrots/nms_rotated.cpp | 32 - .../mmcv/ops/csrc/parrots/pixel_group.cpp | 26 - .../ops/csrc/parrots/pixel_group_parrots.cpp | 54 - .../ops/csrc/parrots/pixel_group_pytorch.h | 11 - .../mmcv/ops/csrc/parrots/points_in_boxes.cpp | 44 - 
.../csrc/parrots/points_in_boxes_parrots.cpp | 64 - .../csrc/parrots/points_in_boxes_pytorch.h | 16 - .../ops/csrc/parrots/points_in_polygons.cpp | 15 - .../parrots/points_in_polygons_parrots.cpp | 28 - .../csrc/parrots/points_in_polygons_pytorch.h | 9 - .../mmcv/ops/csrc/parrots/psamask.cpp | 41 - .../mmcv/ops/csrc/parrots/psamask_parrots.cpp | 129 - .../mmcv/ops/csrc/parrots/psamask_pytorch.h | 31 - .../ops/csrc/parrots/riroi_align_rotated.cpp | 42 - .../parrots/riroi_align_rotated_parrots.cpp | 86 - .../parrots/riroi_align_rotated_pytorch.h | 18 - .../mmcv/ops/csrc/parrots/roi_align.cpp | 41 - .../ops/csrc/parrots/roi_align_parrots.cpp | 151 - .../mmcv/ops/csrc/parrots/roi_align_pytorch.h | 32 - .../ops/csrc/parrots/roi_align_rotated.cpp | 41 - .../parrots/roi_align_rotated_parrots.cpp | 147 - .../csrc/parrots/roi_align_rotated_pytorch.h | 31 - .../mmcv/ops/csrc/parrots/roi_pool.cpp | 31 - .../ops/csrc/parrots/roi_pool_parrots.cpp | 67 - .../mmcv/ops/csrc/parrots/roi_pool_pytorch.h | 16 - .../mmcv/ops/csrc/parrots/roiaware_pool3d.cpp | 72 - .../csrc/parrots/roiaware_pool3d_parrots.cpp | 58 - .../csrc/parrots/roiaware_pool3d_pytorch.h | 14 - .../mmcv/ops/csrc/parrots/roipoint_pool3d.cpp | 39 - .../csrc/parrots/roipoint_pool3d_parrots.cpp | 31 - .../csrc/parrots/roipoint_pool3d_pytorch.h | 10 - .../csrc/parrots/rotated_feature_align.cpp | 39 - .../parrots/rotated_feature_align_parrots.cpp | 99 - .../parrots/rotated_feature_align_pytorch.h | 17 - .../mmcv/ops/csrc/parrots/sync_bn.cpp | 69 - .../mmcv/ops/csrc/parrots/sync_bn_parrots.cpp | 111 - .../mmcv/ops/csrc/parrots/sync_bn_pytorch.h | 26 - .../ops/csrc/parrots/three_interpolate.cpp | 33 - .../parrots/three_interpolate_parrots.cpp | 74 - .../csrc/parrots/three_interpolate_pytorch.h | 14 - .../mmcv/ops/csrc/parrots/three_nn.cpp | 18 - .../ops/csrc/parrots/three_nn_parrots.cpp | 35 - .../mmcv/ops/csrc/parrots/three_nn_pytorch.h | 10 - .../mmcv/ops/csrc/parrots/tin_shift.cpp | 20 - .../ops/csrc/parrots/tin_shift_parrots.cpp | 39 - .../mmcv/ops/csrc/parrots/tin_shift_pytorch.h | 11 - .../mmcv/ops/csrc/parrots/upfirdn2d.cpp | 118 - .../ops/csrc/parrots/upfirdn2d_parrots.cpp | 47 - .../mmcv/ops/csrc/parrots/voxelization.cpp | 74 - .../ops/csrc/parrots/voxelization_parrots.cpp | 113 - .../ops/csrc/parrots/voxelization_pytorch.h | 20 - .../csrc/pytorch/active_rotated_filter.cpp | 28 - .../ops/csrc/pytorch/assign_score_withk.cpp | 42 - .../mmcv/ops/csrc/pytorch/ball_query.cpp | 20 - .../mmcv/ops/csrc/pytorch/bbox_overlaps.cpp | 14 - .../mmcv/ops/csrc/pytorch/border_align.cpp | 30 - .../mmcv/ops/csrc/pytorch/box_iou_rotated.cpp | 19 - .../mmcv/ops/csrc/pytorch/carafe.cpp | 38 - .../mmcv/ops/csrc/pytorch/carafe_naive.cpp | 32 - .../ops/csrc/pytorch/chamfer_distance.cpp | 35 - .../mmcv/ops/csrc/pytorch/contour_expand.cpp | 111 - .../mmcv/ops/csrc/pytorch/convex_iou.cpp | 23 - .../mmcv/ops/csrc/pytorch/correlation.cpp | 47 - .../pytorch/cpu/active_rotated_filter.cpp | 120 - .../ops/csrc/pytorch/cpu/box_iou_rotated.cpp | 38 - .../mmcv/ops/csrc/pytorch/cpu/deform_conv.cpp | 408 --- .../pytorch/cpu/modulated_deform_conv.cpp | 436 --- .../mmcv/ops/csrc/pytorch/cpu/nms.cpp | 230 -- .../mmcv/ops/csrc/pytorch/cpu/nms_rotated.cpp | 66 - .../mmcv/ops/csrc/pytorch/cpu/pixel_group.cpp | 126 - .../ops/csrc/pytorch/cpu/points_in_boxes.cpp | 53 - .../mmcv/ops/csrc/pytorch/cpu/psamask.cpp | 199 -- .../mmcv/ops/csrc/pytorch/cpu/roi_align.cpp | 466 --- .../csrc/pytorch/cpu/roi_align_rotated.cpp | 455 --- .../pytorch/cpu/rotated_feature_align.cpp | 262 -- 
.../ops/csrc/pytorch/cpu/voxelization.cpp | 186 -- .../cuda/active_rotated_filter_cuda.cu | 58 - .../pytorch/cuda/assign_score_withk_cuda.cu | 66 - .../ops/csrc/pytorch/cuda/ball_query_cuda.cu | 38 - .../csrc/pytorch/cuda/bbox_overlaps_cuda.cu | 39 - .../csrc/pytorch/cuda/border_align_cuda.cu | 68 - .../csrc/pytorch/cuda/box_iou_rotated_cuda.cu | 25 - .../mmcv/ops/csrc/pytorch/cuda/carafe_cuda.cu | 180 - .../csrc/pytorch/cuda/carafe_naive_cuda.cu | 52 - .../pytorch/cuda/chamfer_distance_cuda.cu | 63 - .../mmcv/ops/csrc/pytorch/cuda/convex_iou.cu | 41 - .../ops/csrc/pytorch/cuda/correlation_cuda.cu | 94 - .../mmcv/ops/csrc/pytorch/cuda/cudabind.cpp | 1739 ---------- .../ops/csrc/pytorch/cuda/deform_conv_cuda.cu | 105 - .../csrc/pytorch/cuda/deform_roi_pool_cuda.cu | 55 - .../pytorch/cuda/diff_iou_rotated_cuda.cu | 35 - .../ops/csrc/pytorch/cuda/focal_loss_cuda.cu | 111 - .../cuda/furthest_point_sample_cuda.cu | 143 - .../pytorch/cuda/fused_bias_leakyrelu_cuda.cu | 109 - .../csrc/pytorch/cuda/gather_points_cuda.cu | 58 - .../csrc/pytorch/cuda/group_points_cuda.cu | 61 - .../mmcv/ops/csrc/pytorch/cuda/iou3d_cuda.cu | 67 - .../mmcv/ops/csrc/pytorch/cuda/knn_cuda.cu | 34 - .../csrc/pytorch/cuda/masked_conv2d_cuda.cu | 54 - .../csrc/pytorch/cuda/min_area_polygons.cu | 21 - .../cuda/modulated_deform_conv_cuda.cu | 96 - .../csrc/pytorch/cuda/ms_deform_attn_cuda.cu | 351 -- .../mmcv/ops/csrc/pytorch/cuda/nms_cuda.cu | 36 - .../ops/csrc/pytorch/cuda/nms_rotated_cuda.cu | 62 - .../csrc/pytorch/cuda/points_in_boxes_cuda.cu | 62 - .../pytorch/cuda/points_in_polygons_cuda.cu | 28 - .../ops/csrc/pytorch/cuda/psamask_cuda.cu | 60 - .../pytorch/cuda/riroi_align_rotated_cuda.cu | 53 - .../ops/csrc/pytorch/cuda/roi_align_cuda.cu | 58 - .../pytorch/cuda/roi_align_rotated_cuda.cu | 45 - .../ops/csrc/pytorch/cuda/roi_pool_cuda.cu | 50 - .../csrc/pytorch/cuda/roiaware_pool3d_cuda.cu | 118 - .../csrc/pytorch/cuda/roipoint_pool3d_cuda.cu | 60 - .../cuda/rotated_feature_align_cuda.cu | 53 - .../csrc/pytorch/cuda/scatter_points_cuda.cu | 132 - .../ops/csrc/pytorch/cuda/sync_bn_cuda.cu | 110 - .../pytorch/cuda/three_interpolate_cuda.cu | 66 - .../ops/csrc/pytorch/cuda/three_nn_cuda.cu | 35 - .../ops/csrc/pytorch/cuda/tin_shift_cuda.cu | 55 - .../ops/csrc/pytorch/cuda/upfirdn2d_kernel.cu | 370 --- .../csrc/pytorch/cuda/voxelization_cuda.cu | 286 -- .../mmcv/ops/csrc/pytorch/deform_conv.cpp | 517 --- .../mmcv/ops/csrc/pytorch/deform_roi_pool.cpp | 42 - .../ops/csrc/pytorch/diff_iou_rotated.cpp | 14 - .../mmcv/ops/csrc/pytorch/focal_loss.cpp | 53 - .../csrc/pytorch/furthest_point_sample.cpp | 34 - .../ops/csrc/pytorch/fused_bias_leakyrelu.cpp | 119 - .../ops/csrc/pytorch/fused_spconv_ops.cpp | 34 - .../mmcv/ops/csrc/pytorch/gather_points.cpp | 30 - .../mmcv/ops/csrc/pytorch/group_points.cpp | 34 - .../mmcv/ops/csrc/pytorch/info.cpp | 56 - .../mmcv/ops/csrc/pytorch/iou3d.cpp | 135 - .../mmcv/ops/csrc/pytorch/knn.cpp | 17 - .../mmcv/ops/csrc/pytorch/masked_conv2d.cpp | 33 - .../ops/csrc/pytorch/min_area_polygons.cpp | 11 - .../csrc/pytorch/mlu/bbox_overlaps_mlu.cpp | 100 - .../pytorch/mlu/focal_loss_sigmoid_mlu.cpp | 332 -- .../mmcv/ops/csrc/pytorch/mlu/nms_mlu.cpp | 130 - .../mmcv/ops/csrc/pytorch/mlu/psamask_mlu.cpp | 308 -- .../ops/csrc/pytorch/mlu/roi_align_mlu.cpp | 206 -- .../pytorch/mlu/roi_align_rotated_mlu.cpp | 232 -- .../ops/csrc/pytorch/mlu/tin_shift_mlu.cpp | 203 -- .../csrc/pytorch/modulated_deform_conv.cpp | 237 -- .../mmcv/ops/csrc/pytorch/ms_deform_attn.cpp | 60 - .../mmcv/ops/csrc/pytorch/nms.cpp | 33 - 
.../mmcv/ops/csrc/pytorch/nms_rotated.cpp | 32 - .../mmcv/ops/csrc/pytorch/pixel_group.cpp | 26 - .../mmcv/ops/csrc/pytorch/points_in_boxes.cpp | 44 - .../ops/csrc/pytorch/points_in_polygons.cpp | 15 - .../mmcv/ops/csrc/pytorch/psamask.cpp | 41 - .../mmcv/ops/csrc/pytorch/pybind.cpp | 831 ----- .../ops/csrc/pytorch/riroi_align_rotated.cpp | 42 - .../mmcv/ops/csrc/pytorch/roi_align.cpp | 41 - .../ops/csrc/pytorch/roi_align_rotated.cpp | 41 - .../mmcv/ops/csrc/pytorch/roi_pool.cpp | 31 - .../mmcv/ops/csrc/pytorch/roiaware_pool3d.cpp | 72 - .../mmcv/ops/csrc/pytorch/roipoint_pool3d.cpp | 39 - .../csrc/pytorch/rotated_feature_align.cpp | 39 - .../mmcv/ops/csrc/pytorch/scatter_points.cpp | 53 - .../mmcv/ops/csrc/pytorch/sync_bn.cpp | 69 - .../ops/csrc/pytorch/three_interpolate.cpp | 33 - .../mmcv/ops/csrc/pytorch/three_nn.cpp | 18 - .../mmcv/ops/csrc/pytorch/tin_shift.cpp | 20 - .../mmcv/ops/csrc/pytorch/upfirdn2d.cpp | 118 - .../mmcv/ops/csrc/pytorch/voxelization.cpp | 74 - .../csrc/tensorrt/plugins/trt_corner_pool.cpp | 217 -- .../plugins/trt_corner_pool_kernel.cu | 110 - .../csrc/tensorrt/plugins/trt_cuda_helper.cu | 91 - .../csrc/tensorrt/plugins/trt_cummaxmin.cpp | 242 -- .../tensorrt/plugins/trt_cummaxmin_kernel.cu | 90 - .../csrc/tensorrt/plugins/trt_deform_conv.cpp | 318 -- .../plugins/trt_deform_conv_kernel.cu | 129 - .../tensorrt/plugins/trt_grid_sampler.cpp | 256 -- .../plugins/trt_grid_sampler_kernel.cu | 441 --- .../tensorrt/plugins/trt_instance_norm.cpp | 246 -- .../plugins/trt_modulated_deform_conv.cpp | 308 -- .../trt_modulated_deform_conv_kernel.cu | 134 - .../ops/csrc/tensorrt/plugins/trt_nms.cpp | 279 -- .../csrc/tensorrt/plugins/trt_nms_kernel.cu | 274 -- .../ops/csrc/tensorrt/plugins/trt_plugin.cpp | 27 - .../csrc/tensorrt/plugins/trt_roi_align.cpp | 294 -- .../tensorrt/plugins/trt_roi_align_kernel.cu | 28 - .../csrc/tensorrt/plugins/trt_scatternd.cpp | 207 -- .../tensorrt/plugins/trt_scatternd_kernel.cu | 93 - .../ops/csrc/tensorrt/trt_corner_pool.hpp | 111 - .../ops/csrc/tensorrt/trt_cuda_helper.cuh | 39 - .../mmcv/ops/csrc/tensorrt/trt_cummaxmin.hpp | 122 - .../ops/csrc/tensorrt/trt_deform_conv.hpp | 118 - .../ops/csrc/tensorrt/trt_grid_sampler.hpp | 108 - .../ops/csrc/tensorrt/trt_instance_norm.hpp | 120 - .../tensorrt/trt_modulated_deform_conv.hpp | 120 - .../mmcv/ops/csrc/tensorrt/trt_nms.hpp | 107 - .../mmcv/ops/csrc/tensorrt/trt_plugin.hpp | 7 - .../ops/csrc/tensorrt/trt_plugin_helper.hpp | 41 - .../mmcv/ops/csrc/tensorrt/trt_roi_align.hpp | 108 - .../mmcv/ops/csrc/tensorrt/trt_scatternd.hpp | 98 - .../mmcv/ops/csrc/tensorrt/trt_serialize.hpp | 105 - .../mmdetection3d/mmcv/ops/deform_conv.py | 408 --- .../mmdetection3d/mmcv/ops/deform_roi_pool.py | 209 -- .../mmcv/ops/deprecated_wrappers.py | 46 - .../mmcv/ops/diff_iou_rotated.py | 301 -- .../mmdetection3d/mmcv/ops/focal_loss.py | 235 -- .../mmcv/ops/furthest_point_sample.py | 84 - .../mmcv/ops/fused_bias_leakyrelu.py | 270 -- .../mmdetection3d/mmcv/ops/gather_points.py | 57 - .../mmdetection3d/mmcv/ops/group_points.py | 241 -- .../pytorch/mmdetection3d/mmcv/ops/info.py | 36 - .../pytorch/mmdetection3d/mmcv/ops/iou3d.py | 224 -- .../pytorch/mmdetection3d/mmcv/ops/knn.py | 78 - .../mmdetection3d/mmcv/ops/masked_conv.py | 109 - .../mmdetection3d/mmcv/ops/merge_cells.py | 159 - .../mmcv/ops/min_area_polygons.py | 18 - .../mmcv/ops/modulated_deform_conv.py | 283 -- .../mmcv/ops/multi_scale_deform_attn.py | 357 -- .../pytorch/mmdetection3d/mmcv/ops/nms.py | 477 --- .../mmdetection3d/mmcv/ops/pixel_group.py | 86 - 
.../mmdetection3d/mmcv/ops/point_sample.py | 360 -- .../mmdetection3d/mmcv/ops/points_in_boxes.py | 137 - .../mmcv/ops/points_in_polygons.py | 40 - .../mmdetection3d/mmcv/ops/points_sampler.py | 181 - .../mmdetection3d/mmcv/ops/psa_mask.py | 92 - .../mmcv/ops/riroi_align_rotated.py | 132 - .../mmdetection3d/mmcv/ops/roi_align.py | 224 -- .../mmcv/ops/roi_align_rotated.py | 180 - .../mmdetection3d/mmcv/ops/roi_pool.py | 86 - .../mmdetection3d/mmcv/ops/roiaware_pool3d.py | 123 - .../mmdetection3d/mmcv/ops/roipoint_pool3d.py | 77 - .../mmcv/ops/rotated_feature_align.py | 92 - .../pytorch/mmdetection3d/mmcv/ops/saconv.py | 146 - .../mmdetection3d/mmcv/ops/scatter_points.py | 137 - .../pytorch/mmdetection3d/mmcv/ops/sync_bn.py | 279 -- .../mmcv/ops/three_interpolate.py | 69 - .../mmdetection3d/mmcv/ops/three_nn.py | 51 - .../mmdetection3d/mmcv/ops/tin_shift.py | 75 - .../mmdetection3d/mmcv/ops/upfirdn2d.py | 330 -- .../mmdetection3d/mmcv/ops/voxelize.py | 179 - .../mmdetection3d/mmcv/parallel/__init__.py | 13 - .../mmdetection3d/mmcv/parallel/_functions.py | 76 - .../mmdetection3d/mmcv/parallel/collate.py | 84 - .../mmcv/parallel/data_container.py | 89 - .../mmcv/parallel/data_parallel.py | 97 - .../mmcv/parallel/distributed.py | 138 - .../mmcv/parallel/distributed_deprecated.py | 70 - .../mmdetection3d/mmcv/parallel/registry.py | 8 - .../mmcv/parallel/scatter_gather.py | 59 - .../mmdetection3d/mmcv/parallel/utils.py | 30 - .../mmdetection3d/mmcv/runner/__init__.py | 73 - .../mmdetection3d/mmcv/runner/base_module.py | 213 -- .../mmdetection3d/mmcv/runner/base_runner.py | 544 --- .../mmdetection3d/mmcv/runner/builder.py | 25 - .../mmdetection3d/mmcv/runner/checkpoint.py | 800 ----- .../mmcv/runner/default_constructor.py | 47 - .../mmdetection3d/mmcv/runner/dist_utils.py | 209 -- .../mmcv/runner/epoch_based_runner.py | 191 -- .../mmdetection3d/mmcv/runner/fp16_utils.py | 435 --- .../mmcv/runner/hooks/__init__.py | 48 - .../mmcv/runner/hooks/checkpoint.py | 167 - .../mmcv/runner/hooks/closure.py | 11 - .../mmdetection3d/mmcv/runner/hooks/ema.py | 89 - .../mmcv/runner/hooks/evaluation.py | 511 --- .../mmdetection3d/mmcv/runner/hooks/hook.py | 92 - .../mmcv/runner/hooks/iter_timer.py | 18 - .../mmcv/runner/hooks/logger/__init__.py | 18 - .../mmcv/runner/hooks/logger/base.py | 172 - .../mmcv/runner/hooks/logger/clearml.py | 63 - .../mmcv/runner/hooks/logger/dvclive.py | 69 - .../mmcv/runner/hooks/logger/mlflow.py | 81 - .../mmcv/runner/hooks/logger/neptune.py | 89 - .../mmcv/runner/hooks/logger/pavi.py | 132 - .../mmcv/runner/hooks/logger/segmind.py | 48 - .../mmcv/runner/hooks/logger/tensorboard.py | 69 - .../mmcv/runner/hooks/logger/text.py | 256 -- .../mmcv/runner/hooks/logger/wandb.py | 107 - .../mmcv/runner/hooks/lr_updater.py | 751 ----- .../mmdetection3d/mmcv/runner/hooks/memory.py | 25 - .../mmcv/runner/hooks/momentum_updater.py | 566 ---- .../mmcv/runner/hooks/optimizer.py | 554 ---- .../mmcv/runner/hooks/profiler.py | 180 - .../mmcv/runner/hooks/sampler_seed.py | 20 - .../mmcv/runner/hooks/sync_buffer.py | 22 - .../mmcv/runner/iter_based_runner.py | 277 -- .../mmdetection3d/mmcv/runner/log_buffer.py | 41 - .../mmcv/runner/optimizer/__init__.py | 9 - .../mmcv/runner/optimizer/builder.py | 45 - .../runner/optimizer/default_constructor.py | 258 -- .../mmdetection3d/mmcv/runner/priority.py | 61 - .../mmdetection3d/mmcv/runner/utils.py | 99 - .../mmdetection3d/mmcv/tensorrt/__init__.py | 30 - .../mmcv/tensorrt/init_plugins.py | 76 - .../mmdetection3d/mmcv/tensorrt/preprocess.py | 136 - 
.../mmcv/tensorrt/tensorrt_utils.py | 291 -- .../mmdetection3d/mmcv/utils/__init__.py | 78 - .../mmdetection3d/mmcv/utils/config.py | 741 ----- .../mmdetection3d/mmcv/utils/device_type.py | 24 - .../pytorch/mmdetection3d/mmcv/utils/env.py | 120 - .../mmdetection3d/mmcv/utils/ext_loader.py | 72 - .../pytorch/mmdetection3d/mmcv/utils/hub.py | 131 - .../mmdetection3d/mmcv/utils/logging.py | 111 - .../pytorch/mmdetection3d/mmcv/utils/misc.py | 377 --- .../mmdetection3d/mmcv/utils/parrots_jit.py | 41 - .../mmcv/utils/parrots_wrapper.py | 114 - .../pytorch/mmdetection3d/mmcv/utils/path.py | 101 - .../mmdetection3d/mmcv/utils/progressbar.py | 208 -- .../mmdetection3d/mmcv/utils/registry.py | 340 -- .../pytorch/mmdetection3d/mmcv/utils/seed.py | 23 - .../mmdetection3d/mmcv/utils/testing.py | 141 - .../pytorch/mmdetection3d/mmcv/utils/timer.py | 118 - .../pytorch/mmdetection3d/mmcv/utils/trace.py | 24 - .../mmdetection3d/mmcv/utils/version_utils.py | 90 - .../pytorch/mmdetection3d/mmcv/version.py | 35 - .../mmdetection3d/mmcv/video/__init__.py | 11 - .../pytorch/mmdetection3d/mmcv/video/io.py | 317 -- .../mmdetection3d/mmcv/video/optflow.py | 272 -- .../mmdetection3d/mmcv/video/processing.py | 161 - .../mmcv/visualization/__init__.py | 9 - .../mmdetection3d/mmcv/visualization/color.py | 52 - .../mmdetection3d/mmcv/visualization/image.py | 161 - .../mmcv/visualization/optflow.py | 116 - .../pytorch/mmdetection3d/mmdet/__init__.py | 29 - .../mmdetection3d/mmdet/apis/__init__.py | 12 - .../mmdetection3d/mmdet/apis/inference.py | 251 -- .../pytorch/mmdetection3d/mmdet/apis/test.py | 209 -- .../pytorch/mmdetection3d/mmdet/apis/train.py | 244 -- .../mmdetection3d/mmdet/core/__init__.py | 9 - .../mmdet/core/anchor/__init__.py | 14 - .../mmdet/core/anchor/anchor_generator.py | 866 ----- .../mmdet/core/anchor/builder.py | 19 - .../mmdet/core/anchor/point_generator.py | 263 -- .../mmdetection3d/mmdet/core/anchor/utils.py | 72 - .../mmdetection3d/mmdet/core/bbox/__init__.py | 28 - .../mmdet/core/bbox/assigners/__init__.py | 22 - .../bbox/assigners/approx_max_iou_assigner.py | 146 - .../core/bbox/assigners/assign_result.py | 206 -- .../core/bbox/assigners/atss_assigner.py | 179 - .../core/bbox/assigners/base_assigner.py | 10 - .../bbox/assigners/center_region_assigner.py | 336 -- .../core/bbox/assigners/grid_assigner.py | 156 - .../core/bbox/assigners/hungarian_assigner.py | 146 - .../bbox/assigners/mask_hungarian_assigner.py | 132 - .../core/bbox/assigners/max_iou_assigner.py | 218 -- .../core/bbox/assigners/point_assigner.py | 134 - .../core/bbox/assigners/region_assigner.py | 222 -- .../core/bbox/assigners/sim_ota_assigner.py | 257 -- .../bbox/assigners/task_aligned_assigner.py | 151 - .../core/bbox/assigners/uniform_assigner.py | 135 - .../mmdetection3d/mmdet/core/bbox/builder.py | 21 - .../mmdet/core/bbox/coder/__init__.py | 15 - .../mmdet/core/bbox/coder/base_bbox_coder.py | 18 - .../core/bbox/coder/bucketing_bbox_coder.py | 351 -- .../core/bbox/coder/delta_xywh_bbox_coder.py | 392 --- .../bbox/coder/distance_point_bbox_coder.py | 63 - .../coder/legacy_delta_xywh_bbox_coder.py | 216 -- .../core/bbox/coder/pseudo_bbox_coder.py | 19 - .../mmdet/core/bbox/coder/tblr_bbox_coder.py | 206 -- .../mmdet/core/bbox/coder/yolo_bbox_coder.py | 83 - .../mmdetection3d/mmdet/core/bbox/demodata.py | 42 - .../core/bbox/iou_calculators/__init__.py | 5 - .../core/bbox/iou_calculators/builder.py | 9 - .../bbox/iou_calculators/iou2d_calculator.py | 261 -- .../mmdet/core/bbox/match_costs/__init__.py | 9 - 
.../mmdet/core/bbox/match_costs/builder.py | 9 - .../mmdet/core/bbox/match_costs/match_cost.py | 359 -- .../mmdet/core/bbox/samplers/__init__.py | 19 - .../mmdet/core/bbox/samplers/base_sampler.py | 102 - .../core/bbox/samplers/combined_sampler.py | 21 - .../samplers/instance_balanced_pos_sampler.py | 56 - .../bbox/samplers/iou_balanced_neg_sampler.py | 158 - .../core/bbox/samplers/mask_pseudo_sampler.py | 44 - .../bbox/samplers/mask_sampling_result.py | 60 - .../mmdet/core/bbox/samplers/ohem_sampler.py | 111 - .../core/bbox/samplers/pseudo_sampler.py | 42 - .../core/bbox/samplers/random_sampler.py | 82 - .../core/bbox/samplers/sampling_result.py | 153 - .../core/bbox/samplers/score_hlr_sampler.py | 265 -- .../mmdet/core/bbox/transforms.py | 270 -- .../mmdet/core/data_structures/__init__.py | 5 - .../core/data_structures/general_data.py | 326 -- .../core/data_structures/instance_data.py | 188 -- .../mmdet/core/evaluation/__init__.py | 19 - .../mmdet/core/evaluation/bbox_overlaps.py | 65 - .../mmdet/core/evaluation/class_names.py | 332 -- .../mmdet/core/evaluation/eval_hooks.py | 130 - .../mmdet/core/evaluation/mean_ap.py | 753 ----- .../mmdet/core/evaluation/panoptic_utils.py | 6 - .../mmdet/core/evaluation/recall.py | 197 -- .../mmdet/core/export/__init__.py | 12 - .../mmdet/core/export/model_wrappers.py | 183 -- .../mmdet/core/export/onnx_helper.py | 223 -- .../mmdet/core/export/pytorch2onnx.py | 159 - .../mmdetection3d/mmdet/core/hook/__init__.py | 15 - .../mmdet/core/hook/checkloss_hook.py | 24 - .../mmdetection3d/mmdet/core/hook/ema.py | 130 - .../mmdet/core/hook/memory_profiler_hook.py | 55 - .../mmdet/core/hook/set_epoch_info_hook.py | 15 - .../mmdet/core/hook/sync_norm_hook.py | 52 - .../mmdet/core/hook/sync_random_size_hook.py | 72 - .../mmdet/core/hook/yolox_lrupdater_hook.py | 67 - .../mmdet/core/hook/yolox_mode_switch_hook.py | 52 - .../mmdetection3d/mmdet/core/mask/__init__.py | 9 - .../mmdet/core/mask/mask_target.py | 127 - .../mmdet/core/mask/structures.py | 1102 ------- .../mmdetection3d/mmdet/core/mask/utils.py | 89 - .../mmdet/core/post_processing/__init__.py | 10 - .../mmdet/core/post_processing/bbox_nms.py | 171 - .../mmdet/core/post_processing/matrix_nms.py | 121 - .../mmdet/core/post_processing/merge_augs.py | 154 - .../mmdet/core/utils/__init__.py | 13 - .../mmdet/core/utils/dist_utils.py | 193 -- .../mmdetection3d/mmdet/core/utils/misc.py | 208 -- .../mmdet/core/visualization/__init__.py | 9 - .../mmdet/core/visualization/image.py | 524 --- .../mmdet/core/visualization/palette.py | 63 - .../mmdetection3d/mmdet/datasets/__init__.py | 28 - .../mmdet/datasets/api_wrappers/__init__.py | 7 - .../mmdet/datasets/api_wrappers/coco_api.py | 47 - .../api_wrappers/panoptic_evaluation.py | 224 -- .../mmdetection3d/mmdet/datasets/builder.py | 215 -- .../mmdet/datasets/cityscapes.py | 338 -- .../mmdetection3d/mmdet/datasets/coco.py | 649 ---- .../mmdet/datasets/coco_panoptic.py | 692 ---- .../mmdetection3d/mmdet/datasets/custom.py | 410 --- .../mmdet/datasets/dataset_wrappers.py | 456 --- .../mmdet/datasets/deepfashion.py | 16 - .../mmdetection3d/mmdet/datasets/lvis.py | 742 ----- .../mmdet/datasets/openimages.py | 891 ----- .../mmdet/datasets/pipelines/__init__.py | 30 - .../mmdet/datasets/pipelines/auto_augment.py | 894 ----- .../mmdet/datasets/pipelines/compose.py | 55 - .../mmdet/datasets/pipelines/formating.py | 9 - .../mmdet/datasets/pipelines/formatting.py | 392 --- .../mmdet/datasets/pipelines/instaboost.py | 118 - .../mmdet/datasets/pipelines/loading.py | 609 ---- 
.../mmdet/datasets/pipelines/test_time_aug.py | 121 - .../mmdet/datasets/pipelines/transforms.py | 2919 ----------------- .../mmdet/datasets/samplers/__init__.py | 10 - .../datasets/samplers/class_aware_sampler.py | 176 - .../datasets/samplers/distributed_sampler.py | 54 - .../mmdet/datasets/samplers/group_sampler.py | 148 - .../datasets/samplers/infinite_sampler.py | 186 -- .../mmdetection3d/mmdet/datasets/utils.py | 166 - .../mmdetection3d/mmdet/datasets/voc.py | 112 - .../mmdet/datasets/wider_face.py | 54 - .../mmdetection3d/mmdet/datasets/xml_style.py | 178 - .../mmdetection3d/mmdet/models/__init__.py | 19 - .../mmdet/models/backbones/__init__.py | 26 - .../mmdet/models/backbones/csp_darknet.py | 284 -- .../mmdet/models/backbones/darknet.py | 213 -- .../models/backbones/detectors_resnet.py | 353 -- .../models/backbones/detectors_resnext.py | 123 - .../mmdet/models/backbones/efficientnet.py | 417 --- .../mmdet/models/backbones/hourglass.py | 222 -- .../mmdet/models/backbones/hrnet.py | 589 ---- .../mmdet/models/backbones/mobilenet_v2.py | 197 -- .../mmdet/models/backbones/pvt.py | 591 ---- .../mmdet/models/backbones/regnet.py | 356 -- .../mmdet/models/backbones/res2net.py | 327 -- .../mmdet/models/backbones/resnest.py | 322 -- .../mmdet/models/backbones/resnet.py | 672 ---- .../mmdet/models/backbones/resnext.py | 154 - .../mmdet/models/backbones/ssd_vgg.py | 128 - .../mmdet/models/backbones/swin.py | 763 ----- .../mmdet/models/backbones/trident_resnet.py | 298 -- .../mmdetection3d/mmdet/models/builder.py | 59 - .../mmdet/models/dense_heads/__init__.py | 56 - .../models/dense_heads/anchor_free_head.py | 350 -- .../mmdet/models/dense_heads/anchor_head.py | 542 --- .../mmdet/models/dense_heads/atss_head.py | 501 --- .../models/dense_heads/autoassign_head.py | 527 --- .../models/dense_heads/base_dense_head.py | 526 --- .../models/dense_heads/base_mask_head.py | 116 - .../models/dense_heads/cascade_rpn_head.py | 801 ----- .../models/dense_heads/centernet_head.py | 412 --- .../models/dense_heads/centripetal_head.py | 430 --- .../mmdet/models/dense_heads/corner_head.py | 1086 ------ .../dense_heads/deformable_detr_head.py | 318 -- .../models/dense_heads/dense_test_mixins.py | 206 -- .../mmdet/models/dense_heads/detr_head.py | 844 ----- .../models/dense_heads/embedding_rpn_head.py | 116 - .../mmdet/models/dense_heads/fcos_head.py | 455 --- .../mmdet/models/dense_heads/fovea_head.py | 385 --- .../dense_heads/free_anchor_retina_head.py | 272 -- .../mmdet/models/dense_heads/fsaf_head.py | 433 --- .../models/dense_heads/ga_retina_head.py | 113 - .../mmdet/models/dense_heads/ga_rpn_head.py | 177 - .../mmdet/models/dense_heads/gfl_head.py | 648 ---- .../models/dense_heads/guided_anchor_head.py | 868 ----- .../mmdet/models/dense_heads/lad_head.py | 232 -- .../mmdet/models/dense_heads/ld_head.py | 261 -- .../models/dense_heads/mask2former_head.py | 430 --- .../models/dense_heads/maskformer_head.py | 553 ---- .../mmdet/models/dense_heads/nasfcos_head.py | 80 - .../mmdet/models/dense_heads/paa_head.py | 756 ----- .../models/dense_heads/pisa_retinanet_head.py | 155 - .../mmdet/models/dense_heads/pisa_ssd_head.py | 140 - .../models/dense_heads/reppoints_head.py | 764 ----- .../mmdet/models/dense_heads/retina_head.py | 115 - .../models/dense_heads/retina_sepbn_head.py | 118 - .../mmdet/models/dense_heads/rpn_head.py | 265 -- .../models/dense_heads/sabl_retina_head.py | 630 ---- .../mmdet/models/dense_heads/solo_head.py | 1177 ------- .../mmdet/models/dense_heads/ssd_head.py | 357 -- 
.../mmdet/models/dense_heads/tood_head.py | 778 ----- .../mmdet/models/dense_heads/vfnet_head.py | 740 ----- .../mmdet/models/dense_heads/yolact_head.py | 1018 ------ .../mmdet/models/dense_heads/yolo_head.py | 619 ---- .../mmdet/models/dense_heads/yolof_head.py | 416 --- .../mmdet/models/dense_heads/yolox_head.py | 491 --- .../mmdet/models/detectors/__init__.py | 56 - .../mmdet/models/detectors/atss.py | 19 - .../mmdet/models/detectors/autoassign.py | 19 - .../mmdet/models/detectors/base.py | 360 -- .../mmdet/models/detectors/cascade_rcnn.py | 49 - .../mmdet/models/detectors/centernet.py | 111 - .../mmdet/models/detectors/cornernet.py | 97 - .../mmdet/models/detectors/deformable_detr.py | 10 - .../mmdet/models/detectors/detr.py | 70 - .../mmdet/models/detectors/fast_rcnn.py | 55 - .../mmdet/models/detectors/faster_rcnn.py | 27 - .../mmdet/models/detectors/fcos.py | 19 - .../mmdet/models/detectors/fovea.py | 19 - .../mmdet/models/detectors/fsaf.py | 19 - .../mmdet/models/detectors/gfl.py | 18 - .../mmdet/models/detectors/grid_rcnn.py | 32 - .../mmdet/models/detectors/htc.py | 16 - .../mmdet/models/detectors/kd_one_stage.py | 103 - .../mmdet/models/detectors/lad.py | 92 - .../mmdet/models/detectors/mask2former.py | 27 - .../mmdet/models/detectors/mask_rcnn.py | 27 - .../models/detectors/mask_scoring_rcnn.py | 30 - .../mmdet/models/detectors/maskformer.py | 233 -- .../mmdet/models/detectors/nasfcos.py | 22 - .../mmdet/models/detectors/paa.py | 19 - .../mmdet/models/detectors/panoptic_fpn.py | 34 - .../detectors/panoptic_two_stage_segmentor.py | 279 -- .../mmdet/models/detectors/point_rend.py | 32 - .../mmdet/models/detectors/queryinst.py | 28 - .../models/detectors/reppoints_detector.py | 24 - .../mmdet/models/detectors/retinanet.py | 19 - .../mmdet/models/detectors/rpn.py | 159 - .../mmdet/models/detectors/scnet.py | 11 - .../mmdet/models/detectors/single_stage.py | 171 - .../detectors/single_stage_instance_seg.py | 363 -- .../mmdet/models/detectors/solo.py | 30 - .../mmdet/models/detectors/sparse_rcnn.py | 111 - .../mmdet/models/detectors/tood.py | 23 - .../models/detectors/trident_faster_rcnn.py | 70 - .../mmdet/models/detectors/two_stage.py | 211 -- .../mmdet/models/detectors/vfnet.py | 20 - .../mmdet/models/detectors/yolact.py | 120 - .../mmdet/models/detectors/yolo.py | 42 - .../mmdet/models/detectors/yolof.py | 19 - .../mmdet/models/detectors/yolox.py | 136 - .../mmdet/models/losses/__init__.py | 32 - .../mmdet/models/losses/accuracy.py | 79 - .../mmdet/models/losses/ae_loss.py | 103 - .../mmdet/models/losses/balanced_l1_loss.py | 124 - .../mmdet/models/losses/cross_entropy_loss.py | 301 -- .../mmdet/models/losses/dice_loss.py | 146 - .../mmdet/models/losses/focal_loss.py | 244 -- .../models/losses/gaussian_focal_loss.py | 92 - .../mmdet/models/losses/gfocal_loss.py | 245 -- .../mmdet/models/losses/ghm_loss.py | 213 -- .../mmdet/models/losses/iou_loss.py | 474 --- .../mmdet/models/losses/kd_loss.py | 88 - .../mmdet/models/losses/mse_loss.py | 57 - .../mmdet/models/losses/pisa_loss.py | 184 -- .../mmdet/models/losses/seesaw_loss.py | 262 -- .../mmdet/models/losses/smooth_l1_loss.py | 146 - .../mmdet/models/losses/utils.py | 105 - .../mmdet/models/losses/varifocal_loss.py | 134 - .../mmdet/models/necks/__init__.py | 23 - .../mmdetection3d/mmdet/models/necks/bfp.py | 102 - .../mmdet/models/necks/channel_mapper.py | 100 - .../mmdet/models/necks/ct_resnet_neck.py | 94 - .../mmdet/models/necks/dilated_encoder.py | 108 - .../mmdet/models/necks/dyhead.py | 174 - 
.../mmdetection3d/mmdet/models/necks/fpg.py | 406 --- .../mmdetection3d/mmdet/models/necks/fpn.py | 204 -- .../mmdet/models/necks/fpn_carafe.py | 275 -- .../mmdetection3d/mmdet/models/necks/hrfpn.py | 100 - .../mmdet/models/necks/nas_fpn.py | 158 - .../mmdet/models/necks/nasfcos_fpn.py | 170 - .../mmdetection3d/mmdet/models/necks/pafpn.py | 158 - .../mmdetection3d/mmdet/models/necks/rfp.py | 135 - .../mmdet/models/necks/ssd_neck.py | 129 - .../mmdet/models/necks/yolo_neck.py | 140 - .../mmdet/models/necks/yolox_pafpn.py | 156 - .../mmdet/models/plugins/__init__.py | 9 - .../mmdet/models/plugins/dropblock.py | 85 - .../plugins/msdeformattn_pixel_decoder.py | 269 -- .../mmdet/models/plugins/pixel_decoder.py | 243 -- .../mmdet/models/roi_heads/__init__.py | 37 - .../mmdet/models/roi_heads/base_roi_head.py | 103 - .../models/roi_heads/bbox_heads/__init__.py | 14 - .../models/roi_heads/bbox_heads/bbox_head.py | 594 ---- .../roi_heads/bbox_heads/convfc_bbox_head.py | 229 -- .../models/roi_heads/bbox_heads/dii_head.py | 426 --- .../roi_heads/bbox_heads/double_bbox_head.py | 178 - .../models/roi_heads/bbox_heads/sabl_head.py | 596 ---- .../roi_heads/bbox_heads/scnet_bbox_head.py | 77 - .../models/roi_heads/cascade_roi_head.py | 631 ---- .../mmdet/models/roi_heads/double_roi_head.py | 34 - .../models/roi_heads/dynamic_roi_head.py | 155 - .../mmdet/models/roi_heads/grid_roi_head.py | 170 - .../mmdet/models/roi_heads/htc_roi_head.py | 628 ---- .../models/roi_heads/mask_heads/__init__.py | 20 - .../roi_heads/mask_heads/coarse_mask_head.py | 100 - .../roi_heads/mask_heads/dynamic_mask_head.py | 147 - .../roi_heads/mask_heads/fcn_mask_head.py | 412 --- .../mask_heads/feature_relay_head.py | 53 - .../mask_heads/fused_semantic_head.py | 117 - .../mask_heads/global_context_head.py | 101 - .../models/roi_heads/mask_heads/grid_head.py | 363 -- .../roi_heads/mask_heads/htc_mask_head.py | 39 - .../roi_heads/mask_heads/mask_point_head.py | 253 -- .../roi_heads/mask_heads/maskiou_head.py | 183 -- .../roi_heads/mask_heads/scnet_mask_head.py | 28 - .../mask_heads/scnet_semantic_head.py | 28 - .../models/roi_heads/mask_scoring_roi_head.py | 113 - .../mmdet/models/roi_heads/pisa_roi_head.py | 160 - .../models/roi_heads/point_rend_roi_head.py | 393 --- .../roi_heads/roi_extractors/__init__.py | 6 - .../roi_extractors/base_roi_extractor.py | 88 - .../roi_extractors/generic_roi_extractor.py | 84 - .../single_level_roi_extractor.py | 115 - .../mmdet/models/roi_heads/scnet_roi_head.py | 605 ---- .../models/roi_heads/shared_heads/__init__.py | 4 - .../roi_heads/shared_heads/res_layer.py | 80 - .../mmdet/models/roi_heads/sparse_roi_head.py | 424 --- .../models/roi_heads/standard_roi_head.py | 397 --- .../mmdet/models/roi_heads/test_mixins.py | 311 -- .../models/roi_heads/trident_roi_head.py | 120 - .../mmdet/models/seg_heads/__init__.py | 3 - .../models/seg_heads/base_semantic_head.py | 86 - .../models/seg_heads/panoptic_fpn_head.py | 155 - .../panoptic_fusion_heads/__init__.py | 5 - .../base_panoptic_fusion_head.py | 48 - .../heuristic_fusion_head.py | 126 - .../maskformer_fusion_head.py | 241 -- .../mmdet/models/utils/__init__.py | 34 - .../mmdet/models/utils/brick_wrappers.py | 51 - .../mmdet/models/utils/builder.py | 47 - .../mmdet/models/utils/ckpt_convert.py | 137 - .../mmdet/models/utils/conv_upsample.py | 67 - .../mmdet/models/utils/csp_layer.py | 150 - .../mmdet/models/utils/gaussian_target.py | 268 -- .../mmdet/models/utils/inverted_residual.py | 130 - .../mmdet/models/utils/make_divisible.py | 28 - 
.../mmdetection3d/mmdet/models/utils/misc.py | 72 - .../mmdet/models/utils/normed_predictor.py | 88 - .../models/utils/panoptic_gt_processing.py | 62 - .../mmdet/models/utils/point_sample.py | 87 - .../mmdet/models/utils/positional_encoding.py | 163 - .../mmdet/models/utils/res_layer.py | 190 -- .../mmdet/models/utils/se_layer.py | 127 - .../mmdet/models/utils/transformer.py | 1167 ------- .../mmdetection3d/mmdet/utils/__init__.py | 15 - .../mmdetection3d/mmdet/utils/collect_env.py | 17 - .../mmdet/utils/compat_config.py | 139 - .../mmdet/utils/contextmanagers.py | 122 - .../mmdetection3d/mmdet/utils/logger.py | 65 - .../pytorch/mmdetection3d/mmdet/utils/misc.py | 76 - .../mmdetection3d/mmdet/utils/profiling.py | 40 - .../mmdetection3d/mmdet/utils/setup_env.py | 53 - .../mmdetection3d/mmdet/utils/split_batch.py | 45 - .../mmdet/utils/util_distribution.py | 74 - .../mmdetection3d/mmdet/utils/util_mixins.py | 105 - .../mmdetection3d/mmdet/utils/util_random.py | 34 - .../pytorch/mmdetection3d/mmdet/version.py | 19 - .../pytorch/mmdetection3d/mmdet3d/__init__.py | 51 - .../mmdetection3d/mmdet3d/apis/__init__.py | 16 - .../mmdetection3d/mmdet3d/apis/inference.py | 528 --- .../mmdetection3d/mmdet3d/apis/test.py | 90 - .../mmdetection3d/mmdet3d/apis/train.py | 351 -- .../mmdetection3d/mmdet3d/core/__init__.py | 9 - .../mmdet3d/core/anchor/__init__.py | 10 - .../core/anchor/anchor_3d_generator.py | 419 --- .../mmdet3d/core/bbox/__init__.py | 30 - .../mmdet3d/core/bbox/assigners/__init__.py | 4 - .../mmdet3d/core/bbox/box_np_ops.py | 827 ----- .../mmdet3d/core/bbox/coders/__init__.py | 19 - .../bbox/coders/anchor_free_bbox_coder.py | 130 - .../bbox/coders/centerpoint_bbox_coders.py | 229 -- .../bbox/coders/delta_xyzwhlr_bbox_coder.py | 91 - .../core/bbox/coders/fcos3d_bbox_coder.py | 127 - .../bbox/coders/groupfree3d_bbox_coder.py | 191 -- .../core/bbox/coders/monoflex_bbox_coder.py | 515 --- .../coders/partial_bin_based_bbox_coder.py | 241 -- .../core/bbox/coders/pgd_bbox_coder.py | 128 - .../bbox/coders/point_xyzwhlr_bbox_coder.py | 117 - .../core/bbox/coders/smoke_bbox_coder.py | 216 -- .../core/bbox/iou_calculators/__init__.py | 11 - .../bbox/iou_calculators/iou3d_calculator.py | 329 -- .../mmdet3d/core/bbox/samplers/__init__.py | 13 - .../samplers/iou_neg_piecewise_sampler.py | 183 -- .../mmdet3d/core/bbox/structures/__init__.py | 18 - .../core/bbox/structures/base_box3d.py | 578 ---- .../core/bbox/structures/box_3d_mode.py | 197 -- .../mmdet3d/core/bbox/structures/cam_box3d.py | 354 -- .../core/bbox/structures/coord_3d_mode.py | 234 -- .../core/bbox/structures/depth_box3d.py | 270 -- .../core/bbox/structures/lidar_box3d.py | 210 -- .../mmdet3d/core/bbox/structures/utils.py | 342 -- .../mmdet3d/core/bbox/transforms.py | 76 - .../mmdet3d/core/evaluation/__init__.py | 11 - .../mmdet3d/core/evaluation/indoor_eval.py | 309 -- .../core/evaluation/instance_seg_eval.py | 128 - .../core/evaluation/kitti_utils/__init__.py | 4 - .../core/evaluation/kitti_utils/eval.py | 950 ------ .../core/evaluation/kitti_utils/rotate_iou.py | 379 --- .../mmdet3d/core/evaluation/lyft_eval.py | 285 -- .../core/evaluation/scannet_utils/__init__.py | 4 - .../evaluate_semantic_instance.py | 347 -- .../core/evaluation/scannet_utils/util_3d.py | 84 - .../mmdet3d/core/evaluation/seg_eval.py | 131 - .../core/evaluation/waymo_utils/__init__.py | 4 - .../waymo_utils/prediction_kitti_to_waymo.py | 263 -- .../mmdet3d/core/points/__init__.py | 30 - .../mmdet3d/core/points/base_points.py | 440 --- 
.../mmdet3d/core/points/cam_points.py | 63 - .../mmdet3d/core/points/depth_points.py | 58 - .../mmdet3d/core/points/lidar_points.py | 58 - .../mmdet3d/core/post_processing/__init__.py | 14 - .../mmdet3d/core/post_processing/box3d_nms.py | 288 -- .../core/post_processing/merge_augs.py | 92 - .../mmdet3d/core/utils/__init__.py | 10 - .../mmdet3d/core/utils/array_converter.py | 324 -- .../mmdet3d/core/utils/gaussian.py | 158 - .../mmdet3d/core/visualizer/__init__.py | 5 - .../mmdet3d/core/visualizer/image_vis.py | 206 -- .../mmdet3d/core/visualizer/open3d_vis.py | 460 --- .../mmdet3d/core/visualizer/show_result.py | 291 -- .../mmdet3d/core/voxel/__init__.py | 5 - .../mmdet3d/core/voxel/builder.py | 16 - .../mmdet3d/core/voxel/voxel_generator.py | 280 -- .../mmdet3d/datasets/__init__.py | 49 - .../mmdetection3d/mmdet3d/datasets/builder.py | 47 - .../mmdet3d/datasets/custom_3d.py | 448 --- .../mmdet3d/datasets/custom_3d_seg.py | 465 --- .../mmdet3d/datasets/dataset_wrappers.py | 78 - .../mmdet3d/datasets/kitti2d_dataset.py | 243 -- .../mmdet3d/datasets/kitti_dataset.py | 775 ----- .../mmdet3d/datasets/kitti_mono_dataset.py | 569 ---- .../mmdet3d/datasets/lyft_dataset.py | 569 ---- .../mmdet3d/datasets/nuscenes_dataset.py | 656 ---- .../mmdet3d/datasets/nuscenes_mono_dataset.py | 840 ----- .../mmdet3d/datasets/pipelines/__init__.py | 36 - .../mmdet3d/datasets/pipelines/compose.py | 60 - .../datasets/pipelines/data_augment_utils.py | 411 --- .../mmdet3d/datasets/pipelines/dbsampler.py | 340 -- .../mmdet3d/datasets/pipelines/formating.py | 266 -- .../mmdet3d/datasets/pipelines/loading.py | 685 ---- .../datasets/pipelines/test_time_aug.py | 229 -- .../datasets/pipelines/transforms_3d.py | 1855 ----------- .../mmdet3d/datasets/s3dis_dataset.py | 447 --- .../mmdet3d/datasets/scannet_dataset.py | 616 ---- .../mmdet3d/datasets/semantickitti_dataset.py | 112 - .../mmdet3d/datasets/sunrgbd_dataset.py | 282 -- .../mmdetection3d/mmdet3d/datasets/utils.py | 142 - .../mmdet3d/datasets/waymo_dataset.py | 551 ---- .../mmdetection3d/mmdet3d/models/__init__.py | 31 - .../mmdet3d/models/backbones/__init__.py | 18 - .../mmdet3d/models/backbones/base_pointnet.py | 41 - .../mmdet3d/models/backbones/dgcnn.py | 100 - .../mmdet3d/models/backbones/dla.py | 448 --- .../mmdet3d/models/backbones/mink_resnet.py | 118 - .../models/backbones/multi_backbone.py | 129 - .../mmdet3d/models/backbones/nostem_regnet.py | 86 - .../models/backbones/pointnet2_sa_msg.py | 177 - .../models/backbones/pointnet2_sa_ssg.py | 145 - .../mmdet3d/models/backbones/second.py | 93 - .../mmdetection3d/mmdet3d/models/builder.py | 137 - .../mmdet3d/models/decode_heads/__init__.py | 8 - .../models/decode_heads/decode_head.py | 125 - .../mmdet3d/models/decode_heads/dgcnn_head.py | 69 - .../models/decode_heads/paconv_head.py | 65 - .../models/decode_heads/pointnet2_head.py | 87 - .../mmdet3d/models/dense_heads/__init__.py | 27 - .../models/dense_heads/anchor3d_head.py | 518 --- .../dense_heads/anchor_free_mono3d_head.py | 536 --- .../models/dense_heads/base_conv_bbox_head.py | 133 - .../dense_heads/base_mono3d_dense_head.py | 80 - .../models/dense_heads/centerpoint_head.py | 832 ----- .../models/dense_heads/fcos_mono3d_head.py | 958 ------ .../models/dense_heads/free_anchor3d_head.py | 287 -- .../models/dense_heads/groupfree3d_head.py | 996 ------ .../models/dense_heads/monoflex_head.py | 773 ----- .../models/dense_heads/parta2_rpn_head.py | 312 -- .../mmdet3d/models/dense_heads/pgd_head.py | 1231 ------- .../models/dense_heads/point_rpn_head.py | 383 
--- .../models/dense_heads/shape_aware_head.py | 517 --- .../models/dense_heads/smoke_mono3d_head.py | 518 --- .../mmdet3d/models/dense_heads/ssd_3d_head.py | 559 ---- .../models/dense_heads/train_mixins.py | 351 -- .../mmdet3d/models/dense_heads/vote_head.py | 665 ---- .../mmdet3d/models/detectors/__init__.py | 29 - .../mmdet3d/models/detectors/base.py | 129 - .../mmdet3d/models/detectors/centerpoint.py | 198 -- .../models/detectors/dynamic_voxelnet.py | 73 - .../mmdet3d/models/detectors/fcos_mono3d.py | 24 - .../models/detectors/groupfree3dnet.py | 107 - .../mmdet3d/models/detectors/h3dnet.py | 178 - .../mmdet3d/models/detectors/imvotenet.py | 821 ----- .../mmdet3d/models/detectors/imvoxelnet.py | 140 - .../models/detectors/mvx_faster_rcnn.py | 63 - .../mmdet3d/models/detectors/mvx_two_stage.py | 505 --- .../mmdet3d/models/detectors/parta2.py | 153 - .../mmdet3d/models/detectors/point_rcnn.py | 150 - .../mmdet3d/models/detectors/sassd.py | 138 - .../mmdet3d/models/detectors/single_stage.py | 73 - .../models/detectors/single_stage_mono3d.py | 252 -- .../mmdet3d/models/detectors/smoke_mono3d.py | 23 - .../mmdet3d/models/detectors/ssd3dnet.py | 28 - .../mmdet3d/models/detectors/two_stage.py | 53 - .../mmdet3d/models/detectors/votenet.py | 109 - .../mmdet3d/models/detectors/voxelnet.py | 132 - .../mmdet3d/models/fusion_layers/__init__.py | 10 - .../models/fusion_layers/coord_transform.py | 222 -- .../models/fusion_layers/point_fusion.py | 306 -- .../models/fusion_layers/vote_fusion.py | 200 -- .../mmdet3d/models/losses/__init__.py | 16 - .../models/losses/axis_aligned_iou_loss.py | 81 - .../mmdet3d/models/losses/chamfer_distance.py | 149 - .../mmdet3d/models/losses/multibin_loss.py | 95 - .../losses/paconv_regularization_loss.py | 110 - .../models/losses/uncertain_smooth_l1_loss.py | 178 - .../models/middle_encoders/__init__.py | 14 - .../models/middle_encoders/pillar_scatter.py | 104 - .../models/middle_encoders/sparse_encoder.py | 493 --- .../models/middle_encoders/sparse_unet.py | 302 -- .../mmdet3d/models/model_utils/__init__.py | 6 - .../models/model_utils/edge_fusion_module.py | 78 - .../mmdet3d/models/model_utils/transformer.py | 139 - .../mmdet3d/models/model_utils/vote_module.py | 184 -- .../mmdet3d/models/necks/__init__.py | 12 - .../mmdet3d/models/necks/dla_neck.py | 235 -- .../mmdet3d/models/necks/imvoxel_neck.py | 112 - .../mmdet3d/models/necks/pointnet2_fp_neck.py | 91 - .../mmdet3d/models/necks/second_fpn.py | 93 - .../mmdet3d/models/roi_heads/__init__.py | 22 - .../models/roi_heads/base_3droi_head.py | 100 - .../models/roi_heads/bbox_heads/__init__.py | 16 - .../roi_heads/bbox_heads/h3d_bbox_head.py | 927 ------ .../roi_heads/bbox_heads/parta2_bbox_head.py | 631 ---- .../bbox_heads/point_rcnn_bbox_head.py | 577 ---- .../mmdet3d/models/roi_heads/h3d_roi_head.py | 161 - .../models/roi_heads/mask_heads/__init__.py | 7 - .../mask_heads/pointwise_semantic_head.py | 204 -- .../roi_heads/mask_heads/primitive_head.py | 968 ------ .../roi_heads/part_aggregation_roi_head.py | 327 -- .../models/roi_heads/point_rcnn_roi_head.py | 288 -- .../roi_heads/roi_extractors/__init__.py | 11 - .../single_roiaware_extractor.py | 56 - .../single_roipoint_extractor.py | 66 - .../mmdet3d/models/segmentors/__init__.py | 7 - .../mmdet3d/models/segmentors/base.py | 138 - .../models/segmentors/encoder_decoder.py | 456 --- .../mmdet3d/models/utils/__init__.py | 13 - .../mmdet3d/models/utils/clip_sigmoid.py | 19 - .../mmdet3d/models/utils/edge_indices.py | 90 - .../mmdet3d/models/utils/gen_keypoints.py | 82 
- .../mmdet3d/models/utils/handle_objs.py | 137 - .../mmdetection3d/mmdet3d/models/utils/mlp.py | 51 - .../mmdet3d/models/voxel_encoders/__init__.py | 10 - .../models/voxel_encoders/pillar_encoder.py | 325 -- .../mmdet3d/models/voxel_encoders/utils.py | 184 -- .../models/voxel_encoders/voxel_encoder.py | 491 --- .../mmdetection3d/mmdet3d/ops/__init__.py | 50 - .../mmdet3d/ops/dgcnn_modules/__init__.py | 6 - .../ops/dgcnn_modules/dgcnn_fa_module.py | 68 - .../ops/dgcnn_modules/dgcnn_fp_module.py | 59 - .../ops/dgcnn_modules/dgcnn_gf_module.py | 221 -- .../pytorch/mmdetection3d/mmdet3d/ops/norm.py | 163 - .../mmdet3d/ops/paconv/__init__.py | 4 - .../mmdet3d/ops/paconv/paconv.py | 392 --- .../mmdetection3d/mmdet3d/ops/paconv/utils.py | 87 - .../mmdet3d/ops/pointnet_modules/__init__.py | 12 - .../mmdet3d/ops/pointnet_modules/builder.py | 39 - .../ops/pointnet_modules/paconv_sa_module.py | 342 -- .../ops/pointnet_modules/point_fp_module.py | 79 - .../ops/pointnet_modules/point_sa_module.py | 352 -- .../mmdetection3d/mmdet3d/ops/sparse_block.py | 199 -- .../mmdet3d/ops/spconv/__init__.py | 14 - .../ops/spconv/overwrite_spconv/__init__.py | 4 - .../spconv/overwrite_spconv/write_spconv2.py | 118 - .../mmdetection3d/mmdet3d/utils/__init__.py | 16 - .../mmdet3d/utils/collect_env.py | 25 - .../mmdetection3d/mmdet3d/utils/compat_cfg.py | 141 - .../mmdetection3d/mmdet3d/utils/logger.py | 31 - .../mmdetection3d/mmdet3d/utils/misc.py | 41 - .../mmdetection3d/mmdet3d/utils/setup_env.py | 55 - .../pytorch/mmdetection3d/mmdet3d/version.py | 21 - .../pytorch/mmdetection3d/mmseg/__init__.py | 62 - .../mmdetection3d/mmseg/apis/__init__.py | 11 - .../mmdetection3d/mmseg/apis/inference.py | 136 - .../pytorch/mmdetection3d/mmseg/apis/test.py | 233 -- .../pytorch/mmdetection3d/mmseg/apis/train.py | 167 - .../mmdetection3d/mmseg/core/__init__.py | 4 - .../mmseg/core/evaluation/__init__.py | 11 - .../mmseg/core/evaluation/class_names.py | 153 - .../mmseg/core/evaluation/eval_hooks.py | 128 - .../mmseg/core/evaluation/metrics.py | 395 --- .../mmdetection3d/mmseg/core/seg/__init__.py | 5 - .../mmdetection3d/mmseg/core/seg/builder.py | 9 - .../mmseg/core/seg/sampler/__init__.py | 5 - .../core/seg/sampler/base_pixel_sampler.py | 13 - .../core/seg/sampler/ohem_pixel_sampler.py | 85 - .../mmseg/core/utils/__init__.py | 4 - .../mmdetection3d/mmseg/core/utils/misc.py | 18 - .../mmdetection3d/mmseg/datasets/__init__.py | 25 - .../mmdetection3d/mmseg/datasets/ade.py | 167 - .../mmdetection3d/mmseg/datasets/builder.py | 182 - .../mmdetection3d/mmseg/datasets/chase_db1.py | 28 - .../mmseg/datasets/cityscapes.py | 214 -- .../mmseg/datasets/coco_stuff.py | 93 - .../mmdetection3d/mmseg/datasets/custom.py | 457 --- .../mmseg/datasets/dark_zurich.py | 13 - .../mmseg/datasets/dataset_wrappers.py | 190 -- .../mmdetection3d/mmseg/datasets/drive.py | 28 - .../mmdetection3d/mmseg/datasets/hrf.py | 28 - .../mmdetection3d/mmseg/datasets/loveda.py | 92 - .../mmseg/datasets/night_driving.py | 13 - .../mmseg/datasets/pascal_context.py | 104 - .../mmseg/datasets/pipelines/__init__.py | 18 - .../mmseg/datasets/pipelines/compose.py | 52 - .../mmseg/datasets/pipelines/formating.py | 9 - .../mmseg/datasets/pipelines/formatting.py | 289 -- .../mmseg/datasets/pipelines/loading.py | 154 - .../mmseg/datasets/pipelines/test_time_aug.py | 134 - .../mmseg/datasets/pipelines/transforms.py | 1042 ------ .../mmdetection3d/mmseg/datasets/stare.py | 28 - .../mmdetection3d/mmseg/datasets/voc.py | 30 - .../mmdetection3d/mmseg/models/__init__.py | 13 - 
.../mmseg/models/backbones/__init__.py | 28 - .../mmseg/models/backbones/bisenetv1.py | 332 -- .../mmseg/models/backbones/bisenetv2.py | 622 ---- .../mmseg/models/backbones/cgnet.py | 372 --- .../mmseg/models/backbones/erfnet.py | 329 -- .../mmseg/models/backbones/fast_scnn.py | 409 --- .../mmseg/models/backbones/hrnet.py | 642 ---- .../mmseg/models/backbones/icnet.py | 165 - .../mmseg/models/backbones/mit.py | 431 --- .../mmseg/models/backbones/mobilenet_v2.py | 197 -- .../mmseg/models/backbones/mobilenet_v3.py | 267 -- .../mmseg/models/backbones/resnest.py | 318 -- .../mmseg/models/backbones/resnet.py | 714 ---- .../mmseg/models/backbones/resnext.py | 150 - .../mmseg/models/backbones/stdc.py | 422 --- .../mmseg/models/backbones/swin.py | 755 ----- .../mmseg/models/backbones/timm_backbone.py | 63 - .../mmseg/models/backbones/twins.py | 587 ---- .../mmseg/models/backbones/unet.py | 438 --- .../mmseg/models/backbones/vit.py | 412 --- .../mmdetection3d/mmseg/models/builder.py | 49 - .../mmseg/models/decode_heads/__init__.py | 37 - .../mmseg/models/decode_heads/ann_head.py | 246 -- .../mmseg/models/decode_heads/apc_head.py | 159 - .../mmseg/models/decode_heads/aspp_head.py | 108 - .../decode_heads/cascade_decode_head.py | 58 - .../mmseg/models/decode_heads/cc_head.py | 43 - .../mmseg/models/decode_heads/da_head.py | 179 - .../mmseg/models/decode_heads/decode_head.py | 265 -- .../mmseg/models/decode_heads/dm_head.py | 141 - .../mmseg/models/decode_heads/dnl_head.py | 132 - .../mmseg/models/decode_heads/dpt_head.py | 293 -- .../mmseg/models/decode_heads/ema_head.py | 169 - .../mmseg/models/decode_heads/enc_head.py | 188 -- .../mmseg/models/decode_heads/fcn_head.py | 82 - .../mmseg/models/decode_heads/fpn_head.py | 69 - .../mmseg/models/decode_heads/gc_head.py | 48 - .../mmseg/models/decode_heads/isa_head.py | 142 - .../mmseg/models/decode_heads/lraspp_head.py | 91 - .../mmseg/models/decode_heads/nl_head.py | 50 - .../mmseg/models/decode_heads/ocr_head.py | 128 - .../mmseg/models/decode_heads/point_head.py | 356 -- .../mmseg/models/decode_heads/psa_head.py | 197 -- .../mmseg/models/decode_heads/psp_head.py | 103 - .../models/decode_heads/segformer_head.py | 66 - .../models/decode_heads/sep_aspp_head.py | 102 - .../mmseg/models/decode_heads/sep_fcn_head.py | 60 - .../models/decode_heads/setr_mla_head.py | 63 - .../mmseg/models/decode_heads/setr_up_head.py | 81 - .../mmseg/models/decode_heads/stdc_head.py | 90 - .../mmseg/models/decode_heads/uper_head.py | 127 - .../mmseg/models/losses/__init__.py | 15 - .../mmseg/models/losses/accuracy.py | 79 - .../mmseg/models/losses/cross_entropy_loss.py | 218 -- .../mmseg/models/losses/dice_loss.py | 137 - .../mmseg/models/losses/focal_loss.py | 327 -- .../mmseg/models/losses/lovasz_loss.py | 323 -- .../mmseg/models/losses/utils.py | 122 - .../mmseg/models/necks/__init__.py | 8 - .../mmdetection3d/mmseg/models/necks/fpn.py | 213 -- .../mmseg/models/necks/ic_neck.py | 147 - .../mmdetection3d/mmseg/models/necks/jpu.py | 131 - .../mmseg/models/necks/mla_neck.py | 118 - .../mmseg/models/necks/multilevel_neck.py | 78 - .../mmseg/models/segmentors/__init__.py | 6 - .../mmseg/models/segmentors/base.py | 277 -- .../segmentors/cascade_encoder_decoder.py | 84 - .../models/segmentors/encoder_decoder.py | 284 -- .../mmseg/models/utils/__init__.py | 14 - .../mmdetection3d/mmseg/models/utils/embed.py | 330 -- .../mmseg/models/utils/inverted_residual.py | 213 -- .../mmseg/models/utils/make_divisible.py | 28 - .../mmseg/models/utils/res_layer.py | 96 - 
.../mmseg/models/utils/se_layer.py | 58 - .../models/utils/self_attention_block.py | 160 - .../mmseg/models/utils/shape_convert.py | 29 - .../mmseg/models/utils/up_conv_block.py | 102 - .../mmdetection3d/mmseg/ops/__init__.py | 5 - .../mmdetection3d/mmseg/ops/encoding.py | 75 - .../mmdetection3d/mmseg/ops/wrappers.py | 51 - .../mmdetection3d/mmseg/utils/__init__.py | 5 - .../mmdetection3d/mmseg/utils/collect_env.py | 18 - .../mmdetection3d/mmseg/utils/logger.py | 28 - .../pytorch/mmdetection3d/mmseg/version.py | 18 - .../pytorch/mmdetection3d/requirements.txt | 4 - .../mmdetection3d/requirements/build.txt | 0 .../mmdetection3d/requirements/docs.txt | 8 - .../mmdetection3d/requirements/mminstall.txt | 3 - .../mmdetection3d/requirements/optional.txt | 3 - .../requirements/readthedocs.txt | 5 - .../mmdetection3d/requirements/runtime.txt | 15 - .../mmdetection3d/requirements/tests.txt | 13 - .../pointnet2/pytorch/mmdetection3d/setup.cfg | 16 - .../pointnet2/pytorch/mmdetection3d/setup.py | 429 --- .../tools/analysis_tools/analyze_logs.py | 204 -- .../tools/analysis_tools/benchmark.py | 98 - .../tools/analysis_tools/get_flops.py | 94 - .../mmdetection3d/tools/create_data.py | 305 -- .../mmdetection3d/tools/create_data.sh | 24 - .../tools/data_converter/__init__.py | 1 - .../data_converter/create_gt_database.py | 624 ---- .../tools/data_converter/indoor_converter.py | 110 - .../tools/data_converter/kitti_converter.py | 624 ---- .../tools/data_converter/kitti_data_utils.py | 619 ---- .../tools/data_converter/lyft_converter.py | 271 -- .../tools/data_converter/lyft_data_fixer.py | 39 - .../tools/data_converter/nuimage_converter.py | 226 -- .../data_converter/nuscenes_converter.py | 628 ---- .../tools/data_converter/s3dis_data_utils.py | 245 -- .../data_converter/scannet_data_utils.py | 297 -- .../data_converter/sunrgbd_data_utils.py | 226 -- .../tools/data_converter/waymo_converter.py | 556 ---- .../tools/deployment/mmdet3d2torchserve.py | 113 - .../tools/deployment/mmdet3d_handler.py | 122 - .../tools/deployment/test_torchserver.py | 58 - .../pytorch/mmdetection3d/tools/dist_test.sh | 22 - .../tools/misc/browse_dataset.py | 234 -- .../mmdetection3d/tools/misc/fuse_conv_bn.py | 70 - .../mmdetection3d/tools/misc/print_config.py | 29 - .../tools/misc/visualize_results.py | 52 - .../convert_h3dnet_checkpoints.py | 179 - .../convert_votenet_checkpoints.py | 155 - .../tools/model_converters/publish_model.py | 36 - .../tools/model_converters/regnet2mmdet.py | 90 - .../pytorch/mmdetection3d/tools/slurm_test.sh | 24 - .../mmdetection3d/tools/slurm_train.sh | 24 - .../pytorch/mmdetection3d/tools/test.py | 262 -- .../mmdetection3d/tools/update_data_coords.py | 170 - .../mmdetection3d/tools/update_data_coords.sh | 22 - .../pointnet2/pytorch/mmdetection3d/train.py | 263 -- .../pointpillars/pytorch/README.md | 3 + cv/classification/fasternet/pytorch/README.md | 4 +- cv/classification/repvit/pytorch/README.md | 2 +- cv/detection/yolof/pytorch/README.md | 2 +- 1543 files changed, 29 insertions(+), 244361 deletions(-) rename cv/3d_detection/pointnet2/pytorch/{mmdetection3d => }/README.md (74%) delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/.gitignore delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/LICENSE delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/3dssd_4x4_kitti-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/README.md delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/coco_instance.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-mono3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nuim_instance.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nus-mono3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/range100_lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/s3dis-3d-5class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/s3dis_seg-3d-13class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/scannet-3d-18class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/scannet_seg-3d-20class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/sunrgbd-3d-10class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/waymoD5-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/waymoD5-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/default_runtime.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/3dssd.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/centerpoint_01voxel_second_secfpn_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/centerpoint_02pillar_second_secfpn_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/dgcnn.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/fcos3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/groupfree3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/h3dnet.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_lyft.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_range100_lyft.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_secfpn_kitti.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_secfpn_waymo.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_second_secfpn_kitti.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_second_secfpn_waymo.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/imvotenet_image.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/mask_rcnn_r50_fpn.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/paconv_cuda_ssg.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/paconv_ssg.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/parta2.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pgd.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/point_rcnn.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pointnet2_msg.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pointnet2_ssg.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/smoke.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/votenet.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cosine.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cyclic_20e.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cyclic_40e.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/mmdet_schedule_1x.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/schedule_2x.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/schedule_3x.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_100e.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_150e.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_200e.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_50e.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_PartA2_secfpn_4x8_cyclic_80e_pcdet_kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_pointpillars_secfpn_3x8_100e_det3d_kitti-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_pointpillars_secfpn_4x8_80e_pcdet_kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_second_secfpn_4x8_80e_pcdet_kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_flip-tta_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_tta_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_flip-tta_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/README.md delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/h3dnet_3x8_scannet-3d-18class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/imvoxelnet_4x8_kitti-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/monoflex/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/monoflex/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/README.md delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r101_fpn_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_coco-20e_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_coco-20e_20e_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_without_semantic_r50_fpn_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r101_fpn_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nus-2d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_x101_32x4d_fpn_1x_nuim.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_range100_2x8_2x_lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_fp16_2x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/sassd/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/sassd/sassd_6x8_80e_kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/metafile.yml delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_16x8_sunrgbd-3d-10class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_8x8_scannet-3d-18class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_iouloss_8x8_scannet-3d-18class.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/collect_indoor3d_data.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/indoor3d_util.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/meta_data/anno_paths.txt delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/meta_data/class_names.txt delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/dist_train.sh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/arraymisc/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/arraymisc/quantization.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/alexnet.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/activation.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/context_block.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv2d_adaptive_padding.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv_module.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv_ws.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/depthwise_separable_conv_module.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/drop.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/generalized_attention.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/hsigmoid.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/hswish.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/non_local.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/norm.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/padding.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/plugin.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/registry.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/scale.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/swish.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/transformer.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/upsample.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/wrappers.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/builder.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/resnet.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/flops_counter.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/fuse_conv_bn.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/sync_bn.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/weight_init.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/vgg.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/__init__.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/__init__.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/dataloader.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/hierarchical_data_manager.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/hook_wrapper.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/model_wrapper.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/runner.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/utils.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/_functions.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/data_parallel.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/distributed.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/scatter_gather.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/engine/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/engine/test.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/file_client.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/base.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/json_handler.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/pickle_handler.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/yaml_handler.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/io.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/parse.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/colorspace.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/geometric.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/io.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/misc.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/photometric.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/deprecated.json delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/mmcls.json delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/open_mmlab.json delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/torchvision_0.12.json delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/info.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/onnx_utils/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/onnx_utils/symbolic_helper.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/symbolic.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/active_rotated_filter.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/assign_score_withk.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/ball_query.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/bbox.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/border_align.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/box_iou_rotated.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/carafe.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/cc_attention.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/chamfer_distance.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/contour_expand.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/convex_iou.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/corner_pool.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/correlation.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/README.md delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/box_iou_rotated_utils.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/active_rotated_filter_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/assign_score_withk_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/ball_query_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/bbox_overlaps_cuda_kernel.cuh delete 
mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/border_align_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/box_iou_rotated_cuda.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/carafe_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/carafe_naive_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/chamfer_distance_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/convex_iou_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/correlation_cuda.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/deform_conv_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/deform_roi_pool_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/diff_iou_rotated_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/furthest_point_sample_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/gather_points_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/group_points_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/iou3d_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/knn_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/masked_conv2d_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/min_area_polygons_cuda.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/modulated_deform_conv_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/ms_deform_attn_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/nms_rotated_cuda.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/parrots_cudawarpfunction.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/points_in_boxes_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/points_in_polygons_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/psamask_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/riroi_align_rotated_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_align_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_align_rotated_cuda_kernel.cuh delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_pool_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roiaware_pool3d_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roipoint_pool3d_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/rotated_feature_align_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/scatter_points_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/sigmoid_focal_loss_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/softmax_focal_loss_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/sync_bn_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/three_interpolate_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/three_nn_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/tin_shift_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/voxelization_cuda_kernel.cuh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/bbox_overlaps_mlu_kernel.mlu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/common_mlu_helper.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/focal_loss_sigmoid_mlu_kernel.mlu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/nms_mlu_kernel.mlu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/psamask_mlu_kernel.mlu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/psamask_utils.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_mlu_kernel.mlu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_rotated_mlu_kernel.mlu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_rotated_utils.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/tin_shift_mlu_kernel.mlu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/parrots_cpp_helper.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/parrots_cuda_helper.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_device_registry.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_mlu_helper.hpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/corner_pool.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/corner_pool.cpp delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/deform_conv.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/gridSample.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/modulated_deform_conv.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/nms.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/onnxruntime_register.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/reduce_ops.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/roi_align.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/roi_align_rotated.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/rotated_feature_align.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/soft_nms.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/deform_conv.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/grid_sample.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/modulated_deform_conv.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/nms.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/onnxruntime_register.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/onnxruntime_session_options_config_keys.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/ort_mmcv_utils.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/reduce_ops.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/roi_align.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/roi_align_rotated.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/rotated_feature_align.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/soft_nms.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query._parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query_pytorch.h delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/cudabind.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated.cpp delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/fused_bias_leakyrelu.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/fused_bias_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/info.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv_pytorch.h delete 
mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ms_deform_attn.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ms_deform_attn_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_rotated.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d_parrots.cpp delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/upfirdn2d.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/upfirdn2d_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization_parrots.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization_pytorch.h delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/active_rotated_filter.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/assign_score_withk.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/ball_query.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/bbox_overlaps.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/border_align.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/box_iou_rotated.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/carafe.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/carafe_naive.cpp delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/chamfer_distance.cpp delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/contour_expand.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/convex_iou.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/correlation.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/active_rotated_filter.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/box_iou_rotated.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/deform_conv.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/modulated_deform_conv.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/nms.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/nms_rotated.cpp delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/pixel_group.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/points_in_boxes.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/psamask.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/roi_align.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/roi_align_rotated.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/rotated_feature_align.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/voxelization.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/active_rotated_filter_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/assign_score_withk_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/ball_query_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/bbox_overlaps_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/border_align_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/box_iou_rotated_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/carafe_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/carafe_naive_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/chamfer_distance_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/convex_iou.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/correlation_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/cudabind.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/deform_conv_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/deform_roi_pool_cuda.cu delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/diff_iou_rotated_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/focal_loss_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/furthest_point_sample_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/fused_bias_leakyrelu_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/gather_points_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/group_points_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/iou3d_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/knn_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/masked_conv2d_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/min_area_polygons.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/modulated_deform_conv_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/ms_deform_attn_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/nms_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/nms_rotated_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/points_in_boxes_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/points_in_polygons_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/psamask_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/riroi_align_rotated_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_align_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_align_rotated_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_pool_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roiaware_pool3d_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roipoint_pool3d_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/rotated_feature_align_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/scatter_points_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/sync_bn_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/three_interpolate_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/three_nn_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/tin_shift_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/upfirdn2d_kernel.cu delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/voxelization_cuda.cu delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/deform_conv.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/deform_roi_pool.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/diff_iou_rotated.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/focal_loss.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/furthest_point_sample.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/fused_bias_leakyrelu.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/fused_spconv_ops.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/gather_points.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/group_points.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/info.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/iou3d.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/knn.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/masked_conv2d.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/min_area_polygons.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/bbox_overlaps_mlu.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/focal_loss_sigmoid_mlu.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/nms_mlu.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/psamask_mlu.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/roi_align_mlu.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/roi_align_rotated_mlu.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/tin_shift_mlu.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/modulated_deform_conv.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/ms_deform_attn.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/nms.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/nms_rotated.cpp delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/pixel_group.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/points_in_boxes.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/points_in_polygons.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/psamask.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/pybind.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/riroi_align_rotated.cpp delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_align.cpp delete 
mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_align_rotated.cpp
[... per-file "delete mode 100644" entries omitted: this patch removes the vendored mmdetection3d toolbox sources (mmcv/, mmdet/, and mmdet3d/ modules, ops, configs, and utilities) under cv/3d_detection/pointnet2/pytorch/mmdetection3d/ ...]
delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/clip_sigmoid.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/edge_indices.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/gen_keypoints.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/handle_objs.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/mlp.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/pillar_encoder.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/utils.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/voxel_encoder.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_fa_module.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_fp_module.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_gf_module.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/norm.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/paconv.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/utils.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/builder.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/paconv_sa_module.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/point_fp_module.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/point_sa_module.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/sparse_block.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/overwrite_spconv/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/overwrite_spconv/write_spconv2.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/collect_env.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/compat_cfg.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/logger.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/misc.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/setup_env.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/version.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/inference.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/test.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/train.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/class_names.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/eval_hooks.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/metrics.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/builder.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/base_pixel_sampler.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/ohem_pixel_sampler.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/utils/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/utils/misc.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/ade.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/builder.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/chase_db1.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/cityscapes.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/coco_stuff.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/custom.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/dark_zurich.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/dataset_wrappers.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/drive.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/hrf.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/loveda.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/night_driving.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pascal_context.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/compose.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/formating.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/formatting.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/loading.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/test_time_aug.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/transforms.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/stare.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/voc.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/bisenetv1.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/bisenetv2.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/cgnet.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/erfnet.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/fast_scnn.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/hrnet.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/icnet.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mit.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mobilenet_v2.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mobilenet_v3.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnest.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnet.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnext.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/stdc.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/swin.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/timm_backbone.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/twins.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/unet.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/vit.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/builder.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ann_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/apc_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/aspp_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/cascade_decode_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/cc_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/da_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/decode_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dm_head.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dnl_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dpt_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ema_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/enc_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/fcn_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/fpn_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/gc_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/isa_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/lraspp_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/nl_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ocr_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/point_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/psa_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/psp_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/segformer_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/sep_aspp_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/sep_fcn_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/setr_mla_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/setr_up_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/stdc_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/uper_head.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/accuracy.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/cross_entropy_loss.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/dice_loss.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/focal_loss.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/lovasz_loss.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/utils.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/fpn.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/ic_neck.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/jpu.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/mla_neck.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/multilevel_neck.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/base.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/cascade_encoder_decoder.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/encoder_decoder.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/embed.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/inverted_residual.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/make_divisible.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/res_layer.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/se_layer.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/self_attention_block.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/shape_convert.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/up_conv_block.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/encoding.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/wrappers.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/collect_env.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/logger.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/version.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements.txt delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/build.txt delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/docs.txt delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/mminstall.txt delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/optional.txt delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/readthedocs.txt delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/runtime.txt delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/tests.txt delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/setup.cfg delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/setup.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/analyze_logs.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/benchmark.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/get_flops.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/create_data.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/create_data.sh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/__init__.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/create_gt_database.py delete mode 100644 
cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/indoor_converter.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/kitti_converter.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/kitti_data_utils.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/lyft_converter.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/lyft_data_fixer.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/nuimage_converter.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/nuscenes_converter.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/s3dis_data_utils.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/scannet_data_utils.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/sunrgbd_data_utils.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/waymo_converter.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/mmdet3d2torchserve.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/mmdet3d_handler.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/test_torchserver.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/dist_test.sh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/browse_dataset.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/fuse_conv_bn.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/print_config.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/visualize_results.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/convert_h3dnet_checkpoints.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/convert_votenet_checkpoints.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/publish_model.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/regnet2mmdet.py delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/slurm_test.sh delete mode 100755 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/slurm_train.sh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/test.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/update_data_coords.py delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/update_data_coords.sh delete mode 100644 cv/3d_detection/pointnet2/pytorch/mmdetection3d/train.py diff --git a/cv/3d_detection/paconv/pytorch/README.md b/cv/3d_detection/paconv/pytorch/README.md index c1b1da32..3ff390de 100644 --- a/cv/3d_detection/paconv/pytorch/README.md +++ b/cv/3d_detection/paconv/pytorch/README.md @@ -29,10 +29,11 @@ Enter the data/s3dis/ folder, then prepare the dataset according to readme instr ```bash # Single GPU training -python3 train.py configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py +python3 tools/train.py configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py # Multiple GPU training -bash dist_train.sh 
configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py 8 +sed -i 's/python /python3 /g' tools/dist_train.sh +bash tools/dist_train.sh configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py 8 ``` ## Results @@ -44,4 +45,4 @@ results | 0.9488 | 0.9838 | 0.8184 | 0.0000 | 0.1682 | 0.5836 | 0.7387 | 0.7782 fps = batchsize*8/1batchtime = 65.3 samples/sec ## Reference -- [mmdetection3d](https://github.com/open-mmlab/mmdetection3d/tree/v1.4.0) +[mmdetection3d](https://github.com/open-mmlab/mmdetection3d/tree/v1.4.0) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/README.md b/cv/3d_detection/pointnet2/pytorch/README.md similarity index 74% rename from cv/3d_detection/pointnet2/pytorch/mmdetection3d/README.md rename to cv/3d_detection/pointnet2/pytorch/README.md index afea3321..3b8dfcdc 100644 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/README.md +++ b/cv/3d_detection/pointnet2/pytorch/README.md @@ -5,21 +5,17 @@ Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. ## Installing packages -``` -## install libGL -yum install mesa-libGL - -## install zlib -wget http://www.zlib.net/fossils/zlib-1.2.9.tar.gz -tar xvf zlib-1.2.9.tar.gz -cd zlib-1.2.9/ -./configure && make install -cd .. -rm -rf zlib-1.2.9.tar.gz zlib-1.2.9/ -``` -``` -pip3 install -r requirements/runtime.txt -MMCV_WITH_OPS=1 python3 setup.py develop +```bash +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx + +#install mmdetection3d +git clone https://github.com/open-mmlab/mmdetection3d.git -b v1.4.0 --depth=1 +cd mmdetection3d +pip install -v -e . ``` ## Prepare S3DIS Data @@ -29,13 +25,13 @@ cd data/s3dis/ Enter the data/s3dis/ folder, then prepare the dataset according to readme instructions in data/s3dis/ folder. 
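The concrete preparation commands below are only a minimal sketch: they assume the standard mmdetection3d v1.4.0 S3DIS tooling (`collect_indoor3d_data.py` and `tools/create_data.py`) and that `Stanford3dDataset_v1.2_Aligned_Version` has already been downloaded into `data/s3dis/`. The readme in `data/s3dis/` remains the authoritative reference.

```bash
# Sketch only: assumes Stanford3dDataset_v1.2_Aligned_Version is already placed under data/s3dis/
cd data/s3dis/
python3 collect_indoor3d_data.py
cd ../..

# Generate the info files and points/labels used by the training configs
python3 tools/create_data.py s3dis --root-path ./data/s3dis --out-dir ./data/s3dis --extra-tag s3dis
```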
## Training -Single GPU training -``` -python3 train.py configs/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py -``` -Multiple GPU training -``` -bash dist_train.sh configs/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py 8 +```bash +# Single GPU training +python3 tools/train.py configs/pointnet2/pointnet2_msg_2xb16-cosine-80e_s3dis-seg.py + +# Multiple GPU training +sed -i 's/python /python3 /g' tools/dist_train.sh +bash tools/dist_train.sh configs/pointnet2/pointnet2_msg_2xb16-cosine-80e_s3dis-seg.py 8 ``` ## Training Results @@ -44,4 +40,4 @@ bash dist_train.sh configs/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d- | Results | 0.9147 | 0.9742 | 0.7800 | 0.0000 | 0.1881 | 0.5361 | 0.2265 | 0.6922 | 0.8249 | 0.3303 | 0.6585 | 0.5422 | 0.4607 | 0.5483 | 0.8490 | 0.6168 | ## Reference -https://github.com/open-mmlab/mmdetection3d \ No newline at end of file +[mmdetection3d](https://github.com/open-mmlab/mmdetection3d/tree/v1.4.0) \ No newline at end of file diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/.gitignore b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/.gitignore deleted file mode 100644 index 7de6b802..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/.gitignore +++ /dev/null @@ -1,136 +0,0 @@ -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] -*$py.class -*.ipynb -mmcv/__pycache__/ - -# C extensions -*.so -*pyc - -# Distribution / packaging -.Python -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -.eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -wheels/ -*.egg-info/ -.installed.cfg -*.egg -MANIFEST - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. -*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -.tox/ -.coverage -.coverage.* -.cache -nosetests.xml -coverage.xml -*.cover -.hypothesis/ -.pytest_cache/ - -# Translations -*.mo -*.pot - -# Django stuff: -*.log -local_settings.py -db.sqlite3 - -# Flask stuff: -instance/ -.webassets-cache - -# Scrapy stuff: -.scrapy - -# Sphinx documentation -docs/en/_build/ -docs/zh_cn/_build/ - -# PyBuilder -target/ - -# Jupyter Notebook -.ipynb_checkpoints - -# pyenv -.python-version - -# celery beat schedule file -celerybeat-schedule - -# SageMath parsed files -*.sage.py - -# Environments -.env -.venv -env/ -venv/ -ENV/ -env.bak/ -venv.bak/ - -# Spyder project settings -.spyderproject -.spyproject - -# Rope project settings -.ropeproject - -# mkdocs documentation -/site - -# mypy -.mypy_cache/ - -# cython generated cpp -.vscode -.idea - -# custom -*.pkl -*.pkl.json -*.log.json -work_dirs/ -exps/ -*~ -mmdet3d/.mim - -# Pytorch -*.pth - -# demo -*.jpg -*.png -data/s3dis/Stanford3dDataset_v1.2_Aligned_Version/ -data/scannet/scans/ -data/sunrgbd/OFFICIAL_SUNRGBD/ -*.obj -*.ply - -# Waymo evaluation -mmdet3d/core/evaluation/waymo_utils/compute_detection_metrics_main diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/LICENSE b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/LICENSE deleted file mode 100644 index 04adf5cb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/LICENSE +++ /dev/null @@ -1,203 +0,0 @@ -Copyright 2018-2019 Open-MMLab. All rights reserved. - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. 
- - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright 2018-2019 Open-MMLab. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/3dssd_4x4_kitti-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/3dssd_4x4_kitti-3d-car.py deleted file mode 100644 index bcc8c822..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/3dssd_4x4_kitti-3d-car.py +++ /dev/null @@ -1,121 +0,0 @@ -_base_ = [ - '../_base_/models/3dssd.py', '../_base_/datasets/kitti-3d-car.py', - '../_base_/default_runtime.py' -] - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -point_cloud_range = [0, -40, -5, 70, 40, 3] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - classes=class_names, - sample_groups=dict(Car=15)) - -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. -# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://kitti_data/')) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectSample', db_sampler=db_sampler), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0], - global_rot_range=[0.0, 0.0], - rot_range=[-1.0471975511965976, 1.0471975511965976]), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.9, 1.1]), - # 3DSSD can get a higher performance without this transform - # dict(type='BackgroundPointsFilter', bbox_enlarge_range=(0.5, 2.0, 0.5)), - dict(type='PointSample', num_points=16384), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] - -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointSample', num_points=16384), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict(dataset=dict(pipeline=train_pipeline)), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) - -evaluation = dict(interval=2) - -# model settings -model = dict( - bbox_head=dict( - num_classes=1, - bbox_coder=dict( - type='AnchorFreeBBoxCoder', num_dir_bins=12, with_rot=True))) - -# optimizer -lr = 0.002 # max learning rate -optimizer = dict(type='AdamW', lr=lr, weight_decay=0) -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -lr_config = 
dict(policy='step', warmup=None, step=[45, 60]) -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) - -# yapf:disable -log_config = dict( - interval=30, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/README.md deleted file mode 100644 index 4feb6d76..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/README.md +++ /dev/null @@ -1,45 +0,0 @@ -# 3DSSD: Point-based 3D Single Stage Object Detector - -> [3DSSD: Point-based 3D Single Stage Object Detector](https://arxiv.org/abs/2002.10187) - - - -## Abstract - -Currently, there have been many kinds of voxel-based 3D single stage detectors, while point-based single stage methods are still underexplored. In this paper, we first present a lightweight and effective point-based 3D single stage object detector, named 3DSSD, achieving a good balance between accuracy and efficiency. In this paradigm, all upsampling layers and refinement stage, which are indispensable in all existing point-based methods, are abandoned to reduce the large computation cost. We novelly propose a fusion sampling strategy in downsampling process to make detection on less representative points feasible. A delicate box prediction network including a candidate generation layer, an anchor-free regression head with a 3D center-ness assignment strategy is designed to meet with our demand of accuracy and speed. Our paradigm is an elegant single stage anchor-free framework, showing great superiority to other existing methods. We evaluate 3DSSD on widely used KITTI dataset and more challenging nuScenes dataset. Our method outperforms all state-of-the-art voxel-based single stage methods by a large margin, and has comparable performance to two stage point-based methods as well, with inference speed more than 25 FPS, 2x faster than former state-of-the-art point-based methods. - -
- -
- -## Introduction - -We implement 3DSSD and provide the results and checkpoints on KITTI datasets. - -Some settings in our implementation are different from the [official implementation](https://github.com/Jia-Research-Lab/3DSSD), which bring marginal differences to the performance on KITTI datasets in our experiments. To simplify and unify the models of our implementation, we skip them in our models. These differences are listed as below: - -1. We keep the scenes without any object while the official code skips these scenes in training. In the official implementation, only 3229 and 3394 samples are used as training and validation sets, respectively. In our implementation, we keep using 3712 and 3769 samples as training and validation sets, respectively, as those used for all the other models in our implementation on KITTI datasets. -2. We do not modify the decay of `batch normalization` during training. -3. While using [`DataBaseSampler`](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/pipelines/dbsampler.py#L80) for data augmentation, the official code uses road planes as reference to place the sampled objects while we do not. -4. We perform detection using LIDAR coordinates while the official code uses camera coordinates. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :-------------------------------------------: | :---: | :-----: | :------: | :------------: | :----------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet2SAMSG](./3dssd_4x4_kitti-3d-car.py) | Car | 72e | 4.7 | | 78.58(81.27)1 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/3dssd/3dssd_4x4_kitti-3d-car/3dssd_4x4_kitti-3d-car_20210818_203828-b89c8fc4.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/3dssd/3dssd_4x4_kitti-3d-car/3dssd_4x4_kitti-3d-car_20210818_203828.log.json) | - -\[1\]: We report two different 3D object detection performance here. 78.58mAP is evaluated by our evaluation code and 81.27mAP is evaluated by the official development kit (so as that used in the paper and official code of 3DSSD ). We found that the commonly used Python implementation of [`rotate_iou`](https://github.com/traveller59/second.pytorch/blob/e42e4a0e17262ab7d180ee96a0a36427f2c20a44/second/core/non_max_suppression/nms_gpu.py#L605) which is used in our KITTI dataset evaluation, is different from the official implementation in [KITTI benchmark](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d). 
- -## Citation - -```latex -@inproceedings{yang20203dssd, - author = {Zetong Yang and Yanan Sun and Shu Liu and Jiaya Jia}, - title = {3DSSD: Point-based 3D Single Stage Object Detector}, - booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, - year = {2020} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/metafile.yml deleted file mode 100644 index f6dbb3c4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/3dssd/metafile.yml +++ /dev/null @@ -1,29 +0,0 @@ -Collections: - - Name: 3DSSD - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 4x TITAN X - Architecture: - - PointNet++ - Paper: - URL: https://arxiv.org/abs/2002.10187 - Title: '3DSSD: Point-based 3D Single Stage Object Detector' - README: configs/3dssd/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/ssd3dnet.py#L7 - Version: v0.6.0 - -Models: - - Name: 3dssd_4x4_kitti-3d-car - In Collection: 3DSSD - Config: configs/3dssd/3dssd_4x4_kitti-3d-car.py - Metadata: - Training Memory (GB): 4.7 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 78.58 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/3dssd/3dssd_4x4_kitti-3d-car/3dssd_4x4_kitti-3d-car_20210818_203828-b89c8fc4.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/coco_instance.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/coco_instance.py deleted file mode 100644 index f6ea4f45..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/coco_instance.py +++ /dev/null @@ -1,48 +0,0 @@ -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline)) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-3d-3class.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-3d-3class.py deleted file mode 100644 index f5b2c4f4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-3d-3class.py +++ /dev/null @@ -1,142 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=10, Cyclist=10)), - classes=class_names, - sample_groups=dict(Car=12, Pedestrian=6, Cyclist=6)) - -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. -# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://kitti_data/')) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0.5], - global_rot_range=[0.0, 0.0], - rot_range=[-0.78539816, 0.78539816]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=6, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='LiDAR')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) - -evaluation = dict(interval=1, pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-3d-car.py deleted file mode 100644 index 2915669e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-3d-car.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - classes=class_names, - sample_groups=dict(Car=15)) - -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://kitti_data/')) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0.5], - global_rot_range=[0.0, 0.0], - rot_range=[-0.78539816, 0.78539816]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=6, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. 
- box_type_3d='LiDAR')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) - -evaluation = dict(interval=1, pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-mono3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-mono3d.py deleted file mode 100644 index ede6c48f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/kitti-mono3d.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -dataset_type = 'KittiMonoDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -input_modality = dict(use_lidar=False, use_camera=True) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=False, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='Resize', img_scale=(1242, 375), keep_ratio=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'gt_bboxes_3d', 'gt_labels_3d', - 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - img_scale=(1242, 375), - flip=False, - transforms=[ - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train_mono3d.coco.json', - info_file=data_root + 'kitti_infos_train.pkl', - img_prefix=data_root, - classes=class_names, - pipeline=train_pipeline, - modality=input_modality, - test_mode=False, - box_type_3d='Camera'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val_mono3d.coco.json', - info_file=data_root + 'kitti_infos_val.pkl', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline, - modality=input_modality, - test_mode=True, - box_type_3d='Camera'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val_mono3d.coco.json', - info_file=data_root + 'kitti_infos_val.pkl', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline, - modality=input_modality, - test_mode=True, - box_type_3d='Camera')) -evaluation = dict(interval=2) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/lyft-3d.py deleted file mode 100644 index 1d33d7de..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/lyft-3d.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-80, -80, -5, 80, 80, 3] -# For Lyft we usually do 9-class detection -class_names = [ - 'car', 'truck', 'bus', 'emergency_vehicle', 'other_vehicle', 'motorcycle', - 'bicycle', 'pedestrian', 'animal' -] -dataset_type = 'LyftDataset' -data_root = 'data/lyft/' -# Input modality for Lyft dataset, this is consistent with the submission -# format which requires the information in input_modality. -input_modality = dict( - use_lidar=True, - use_camera=False, - use_radar=False, - use_map=False, - use_external=False) -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/lyft/': 's3://lyft/lyft/', -# 'data/lyft/': 's3://lyft/lyft/' -# })) -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - modality=input_modality, - test_mode=False), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_test.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True)) -# For Lyft dataset, we usually evaluate the model at the end of training. -# Since the models are trained by 24 epochs by default, we set evaluation -# interval to be 24. Please change the interval accordingly if you do not -# use a default schedule. 
-evaluation = dict(interval=24, pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nuim_instance.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nuim_instance.py deleted file mode 100644 index 82fce56b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nuim_instance.py +++ /dev/null @@ -1,59 +0,0 @@ -dataset_type = 'CocoDataset' -data_root = 'data/nuimages/' -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1280, 720), (1920, 1080)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1600, 900), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/nuimages_v1.0-train.json', - img_prefix=data_root, - classes=class_names, - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/nuimages_v1.0-val.json', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/nuimages_v1.0-val.json', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline)) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nus-3d.py deleted file mode 100644 index 19b4335f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nus-3d.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-50, -50, -5, 50, 50, 3] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -dataset_type = 'NuScenesDataset' -data_root = 'data/nuscenes/' -# Input modality for nuScenes dataset, this is consistent with the submission -# format which requires the information in input_modality. -input_modality = dict( - use_lidar=True, - use_camera=False, - use_radar=False, - use_map=False, - use_external=False) -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/nuscenes/': 's3://nuscenes/nuscenes/', -# 'data/nuscenes/': 's3://nuscenes/nuscenes/' -# })) -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - modality=input_modality, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='LiDAR'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True, - box_type_3d='LiDAR')) -# For nuScenes dataset, we usually evaluate the model at the end of training. -# Since the models are trained by 24 epochs by default, we set evaluation -# interval to be 24. Please change the interval accordingly if you do not -# use a default schedule. 
-evaluation = dict(interval=24, pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nus-mono3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nus-mono3d.py deleted file mode 100644 index f18bc048..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/nus-mono3d.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -dataset_type = 'NuScenesMonoDataset' -data_root = 'data/nuscenes/' -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -# Input modality for nuScenes dataset, this is consistent with the submission -# format which requires the information in input_modality. -input_modality = dict( - use_lidar=False, - use_camera=True, - use_radar=False, - use_map=False, - use_external=False) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=True, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='Resize', img_scale=(1600, 900), keep_ratio=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'attr_labels', 'gt_bboxes_3d', - 'gt_labels_3d', 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - scale_factor=1.0, - flip=False, - transforms=[ - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_train_mono3d.coco.json', - img_prefix=data_root, - classes=class_names, - pipeline=train_pipeline, - modality=input_modality, - test_mode=False, - box_type_3d='Camera'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_val_mono3d.coco.json', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline, - modality=input_modality, - test_mode=True, - box_type_3d='Camera'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_val_mono3d.coco.json', - img_prefix=data_root, - classes=class_names, - pipeline=test_pipeline, - modality=input_modality, - test_mode=True, - box_type_3d='Camera')) -evaluation = dict(interval=2) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/range100_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/range100_lyft-3d.py deleted file mode 100644 index efa63ea3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/range100_lyft-3d.py +++ /dev/null @@ -1,136 +0,0 @@ -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-100, -100, -5, 100, 100, 3] -# For Lyft we usually do 9-class detection -class_names = [ - 'car', 'truck', 'bus', 'emergency_vehicle', 'other_vehicle', 'motorcycle', - 'bicycle', 'pedestrian', 'animal' -] -dataset_type = 'LyftDataset' -data_root = 'data/lyft/' -# Input modality for Lyft dataset, this is consistent with the submission -# format which requires the information in input_modality. -input_modality = dict( - use_lidar=True, - use_camera=False, - use_radar=False, - use_map=False, - use_external=False) -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/lyft/': 's3://lyft/lyft/', -# 'data/lyft/': 's3://lyft/lyft/' -# })) -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - modality=input_modality, - test_mode=False), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'lyft_infos_test.pkl', - pipeline=test_pipeline, - classes=class_names, - modality=input_modality, - test_mode=True)) -# For Lyft dataset, we usually evaluate the model at the end of training. -# Since the models are trained by 24 epochs by default, we set evaluation -# interval to be 24. Please change the interval accordingly if you do not -# use a default schedule. 
-evaluation = dict(interval=24, pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/s3dis-3d-5class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/s3dis-3d-5class.py deleted file mode 100644 index 2422766f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/s3dis-3d-5class.py +++ /dev/null @@ -1,114 +0,0 @@ -# dataset settings -dataset_type = 'S3DISDataset' -data_root = './data/s3dis/' -class_names = ('table', 'chair', 'sofa', 'bookcase', 'board') -train_area = [1, 2, 3, 4, 6] -test_area = 5 - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='PointSample', num_points=40000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - # following ScanNet dataset the rotation range is 5 degrees - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0], - shift_height=True), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=40000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type='ConcatDataset', - datasets=[ - dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + f's3dis_infos_Area_{i}.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - box_type_3d='Depth') for i in train_area - ], - separate_eval=False)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + f's3dis_infos_Area_{test_area}.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + f's3dis_infos_Area_{test_area}.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -evaluation = dict(pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/s3dis_seg-3d-13class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/s3dis_seg-3d-13class.py deleted file mode 100644 index cad81c61..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/s3dis_seg-3d-13class.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# dataset settings -dataset_type = 'S3DISSegDataset' -data_root = 'data/s3dis/' -class_names = ('ceiling', 'floor', 'wall', 'beam', 'column', 'window', 'door', - 'table', 'chair', 'sofa', 'bookcase', 'board', 'clutter') -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/s3dis/': -# 's3://openmmlab/datasets/detection3d/s3dis_processed/', -# 'data/s3dis/': -# 's3://openmmlab/datasets/detection3d/s3dis_processed/' -# })) -num_points = 4096 -train_area = [1, 2, 3, 4, 6] -test_area = 5 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - file_client_args=file_client_args, - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - file_client_args=file_client_args, - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=tuple(range(len(class_names))), - max_cat_id=13), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.0, - ignore_index=len(class_names), - use_normalized_coord=True, - enlarge_size=0.2, - min_unique_num=None), - dict(type='NormalizePointsColor', color_mean=None), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - file_client_args=file_client_args, - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict(type='NormalizePointsColor', color_mean=None), - dict( - # a wrapper in order to successfully call test function - # actually we don't perform test-time-aug - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.0, - flip_ratio_bev_vertical=0.0), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -# we need to load gt seg_mask! 
-eval_pipeline = [ - dict( - type='LoadPointsFromFile', - file_client_args=file_client_args, - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - file_client_args=file_client_args, - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=tuple(range(len(class_names))), - max_cat_id=13), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - # train on area 1, 2, 3, 4, 6 - # test on area 5 - train=dict( - type=dataset_type, - data_root=data_root, - ann_files=[ - data_root + f's3dis_infos_Area_{i}.pkl' for i in train_area - ], - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - ignore_index=len(class_names), - scene_idxs=[ - data_root + f'seg_info/Area_{i}_resampled_scene_idxs.npy' - for i in train_area - ], - file_client_args=file_client_args), - val=dict( - type=dataset_type, - data_root=data_root, - ann_files=data_root + f's3dis_infos_Area_{test_area}.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names), - scene_idxs=data_root + - f'seg_info/Area_{test_area}_resampled_scene_idxs.npy', - file_client_args=file_client_args), - test=dict( - type=dataset_type, - data_root=data_root, - ann_files=data_root + f's3dis_infos_Area_{test_area}.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names), - file_client_args=file_client_args)) - -evaluation = dict(pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/scannet-3d-18class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/scannet-3d-18class.py deleted file mode 100644 index 93da1e58..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/scannet-3d-18class.py +++ /dev/null @@ -1,128 +0,0 @@ -# dataset settings -dataset_type = 'ScanNetDataset' -data_root = './data/scannet/' -class_names = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - with_mask_3d=True, - with_seg_3d=True), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='PointSegClassMapping', - valid_cat_ids=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39), - max_cat_id=40), - dict(type='PointSample', num_points=40000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0], - shift_height=True), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'points', 'gt_bboxes_3d', 'gt_labels_3d', 'pts_semantic_mask', - 'pts_instance_mask' - ]) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - 
type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=40000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -evaluation = dict(pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/scannet_seg-3d-20class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/scannet_seg-3d-20class.py deleted file mode 100644 index cf73b09c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/scannet_seg-3d-20class.py +++ /dev/null @@ -1,132 +0,0 @@ -# dataset settings -dataset_type = 'ScanNetSegDataset' -data_root = './data/scannet/' -class_names = ('wall', 'floor', 'cabinet', 'bed', 'chair', 'sofa', 'table', - 'door', 'window', 'bookshelf', 'picture', 'counter', 'desk', - 'curtain', 'refrigerator', 'showercurtrain', 'toilet', 'sink', - 'bathtub', 'otherfurniture') -num_points = 8192 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.5, - ignore_index=len(class_names), - use_normalized_coord=False, - enlarge_size=0.2, - min_unique_num=None), - dict(type='NormalizePointsColor', color_mean=None), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 
2, 3, 4, 5]), - dict(type='NormalizePointsColor', color_mean=None), - dict( - # a wrapper in order to successfully call test function - # actually we don't perform test-time-aug - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.0, - flip_ratio_bev_vertical=0.0), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -# we need to load gt seg_mask! -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - ignore_index=len(class_names), - scene_idxs=data_root + 'seg_info/train_resampled_scene_idxs.npy'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names)), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names))) - -evaluation = dict(pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/sunrgbd-3d-10class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/sunrgbd-3d-10class.py deleted file mode 100644 index 7121b75b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/sunrgbd-3d-10class.py +++ /dev/null @@ -1,107 +0,0 @@ -dataset_type = 'SUNRGBDDataset' -data_root = 'data/sunrgbd/' -class_names = ('bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser', - 'night_stand', 'bookshelf', 'bathtub') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='LoadAnnotations3D'), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - ), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.523599, 0.523599], - scale_ratio_range=[0.85, 1.15], - shift_height=True), - dict(type='PointSample', num_points=20000), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - 
pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - ), - dict(type='PointSample', num_points=20000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=16, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'sunrgbd_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - filter_empty_gt=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'sunrgbd_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'sunrgbd_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -evaluation = dict(pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/waymoD5-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/waymoD5-3d-3class.py deleted file mode 100644 index 00883b98..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/waymoD5-3d-3class.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# dataset settings -# D5 in the config name means the whole dataset is divided into 5 folds -# We only use one fold for efficient experiments -dataset_type = 'WaymoDataset' -data_root = 'data/waymo/kitti_format/' -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://waymo_data/')) - -class_names = ['Car', 'Pedestrian', 'Cyclist'] -point_cloud_range = [-74.88, -74.88, -2, 74.88, 74.88, 4] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'waymo_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=10, Cyclist=10)), - classes=class_names, - sample_groups=dict(Car=15, Pedestrian=10, Cyclist=10), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args)) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_train.pkl', - split='training', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. 
- box_type_3d='LiDAR', - # load one frame every five frames - load_interval=5)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) - -evaluation = dict(interval=24, pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/waymoD5-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/waymoD5-3d-car.py deleted file mode 100644 index 1ebbbfda..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/datasets/waymoD5-3d-car.py +++ /dev/null @@ -1,145 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# dataset settings -# D5 in the config name means the whole dataset is divided into 5 folds -# We only use one fold for efficient experiments -dataset_type = 'WaymoDataset' -data_root = 'data/waymo/kitti_format/' -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. -# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://waymo_data/')) - -class_names = ['Car'] -point_cloud_range = [-74.88, -74.88, -2, 74.88, 74.88, 4] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'waymo_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - classes=class_names, - sample_groups=dict(Car=15), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args)) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - 
dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=5, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_train.pkl', - split='training', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='LiDAR', - # load one frame every five frames - load_interval=5)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) - -evaluation = dict(interval=24, pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/default_runtime.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/default_runtime.py deleted file mode 100644 index 3c122f34..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/default_runtime.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -checkpoint_config = dict(interval=1) -# yapf:disable push -# By default we use textlogger hook and tensorboard -# For more loggers see -# https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.LoggerHook -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -dist_params = dict(backend='nccl') -log_level = 'INFO' -work_dir = None -load_from = None -resume_from = None -workflow = [('train', 1)] - -# disable opencv multithreading to avoid system being overloaded -opencv_num_threads = 0 -# set multi-process start method as `fork` to speed up the training -mp_start_method = 'fork' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/3dssd.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/3dssd.py deleted file mode 100644 index ad3de7a2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/3dssd.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-model = dict( - type='SSD3DNet', - backbone=dict( - type='PointNet2SAMSG', - in_channels=4, - num_points=(4096, 512, (256, 256)), - radii=((0.2, 0.4, 0.8), (0.4, 0.8, 1.6), (1.6, 3.2, 4.8)), - num_samples=((32, 32, 64), (32, 32, 64), (32, 32, 32)), - sa_channels=(((16, 16, 32), (16, 16, 32), (32, 32, 64)), - ((64, 64, 128), (64, 64, 128), (64, 96, 128)), - ((128, 128, 256), (128, 192, 256), (128, 256, 256))), - aggregation_channels=(64, 128, 256), - fps_mods=(('D-FPS'), ('FS'), ('F-FPS', 'D-FPS')), - fps_sample_range_lists=((-1), (-1), (512, -1)), - norm_cfg=dict(type='BN2d', eps=1e-3, momentum=0.1), - sa_cfg=dict( - type='PointSAModuleMSG', - pool_mod='max', - use_xyz=True, - normalize_xyz=False)), - bbox_head=dict( - type='SSD3DHead', - in_channels=256, - vote_module_cfg=dict( - in_channels=256, - num_points=256, - gt_per_seed=1, - conv_channels=(128, ), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.1), - with_res_feat=False, - vote_xyz_range=(3.0, 3.0, 2.0)), - vote_aggregation_cfg=dict( - type='PointSAModuleMSG', - num_point=256, - radii=(4.8, 6.4), - sample_nums=(16, 32), - mlp_channels=((256, 256, 256, 512), (256, 256, 512, 1024)), - norm_cfg=dict(type='BN2d', eps=1e-3, momentum=0.1), - use_xyz=True, - normalize_xyz=False, - bias=True), - pred_layer_cfg=dict( - in_channels=1536, - shared_conv_channels=(512, 128), - cls_conv_channels=(128, ), - reg_conv_channels=(128, ), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.1), - bias=True), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.1), - objectness_loss=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=1.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=1.0), - corner_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=1.0), - vote_loss=dict(type='SmoothL1Loss', reduction='sum', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - sample_mod='spec', pos_distance_thr=10.0, expand_dims_length=0.05), - test_cfg=dict( - nms_cfg=dict(type='nms', iou_thr=0.1), - sample_mod='spec', - score_thr=0.0, - per_class_proposal=True, - max_output_num=100)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py deleted file mode 100644 index cafb530c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py +++ /dev/null @@ -1,198 +0,0 @@ -# model settings -model = dict( - type='CascadeRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - 
target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)), - roi_head=dict( - type='CascadeRoIHead', - num_stages=3, - stage_loss_weights=[1, 0.5, 0.25], - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - nms_post=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=[ - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.6, - neg_iou_thr=0.6, - min_pos_iou=0.6, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.7, - min_pos_iou=0.7, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - 
add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False) - ]), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/centerpoint_01voxel_second_secfpn_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/centerpoint_01voxel_second_secfpn_nus.py deleted file mode 100644 index efdce59c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/centerpoint_01voxel_second_secfpn_nus.py +++ /dev/null @@ -1,83 +0,0 @@ -voxel_size = [0.1, 0.1, 0.2] -model = dict( - type='CenterPoint', - pts_voxel_layer=dict( - max_num_points=10, voxel_size=voxel_size, max_voxels=(90000, 120000)), - pts_voxel_encoder=dict(type='HardSimpleVFE', num_features=5), - pts_middle_encoder=dict( - type='SparseEncoder', - in_channels=5, - sparse_shape=[41, 1024, 1024], - output_channels=128, - order=('conv', 'norm', 'act'), - encoder_channels=((16, 16, 32), (32, 32, 64), (64, 64, 128), (128, - 128)), - encoder_paddings=((0, 0, 1), (0, 0, 1), (0, 0, [0, 1, 1]), (0, 0)), - block_type='basicblock'), - pts_backbone=dict( - type='SECOND', - in_channels=256, - out_channels=[128, 256], - layer_nums=[5, 5], - layer_strides=[1, 2], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - conv_cfg=dict(type='Conv2d', bias=False)), - pts_neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - out_channels=[256, 256], - upsample_strides=[1, 2], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - upsample_cfg=dict(type='deconv', bias=False), - use_conv_for_no_stride=True), - pts_bbox_head=dict( - type='CenterHead', - in_channels=sum([256, 256]), - tasks=[ - dict(num_class=1, class_names=['car']), - dict(num_class=2, class_names=['truck', 'construction_vehicle']), - dict(num_class=2, class_names=['bus', 'trailer']), - dict(num_class=1, class_names=['barrier']), - dict(num_class=2, class_names=['motorcycle', 'bicycle']), - dict(num_class=2, class_names=['pedestrian', 'traffic_cone']), - ], - common_heads=dict( - reg=(2, 2), height=(1, 2), dim=(3, 2), rot=(2, 2), vel=(2, 2)), - share_conv_channel=64, - bbox_coder=dict( - type='CenterPointBBoxCoder', - post_center_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0], - max_num=500, - score_threshold=0.1, - out_size_factor=8, - voxel_size=voxel_size[:2], - code_size=9), - separate_head=dict( - type='SeparateHead', init_bias=-2.19, final_kernel=3), - loss_cls=dict(type='GaussianFocalLoss', reduction='mean'), - loss_bbox=dict(type='L1Loss', reduction='mean', loss_weight=0.25), - norm_bbox=True), - # model training and testing settings - train_cfg=dict( - pts=dict( - grid_size=[1024, 1024, 40], - voxel_size=voxel_size, - out_size_factor=8, - dense_reg=1, - gaussian_overlap=0.1, - max_objs=500, - min_radius=2, - code_weights=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2])), - test_cfg=dict( - pts=dict( - post_center_limit_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0], - max_per_img=500, - max_pool_nms=False, - min_radius=[4, 12, 10, 1, 0.85, 0.175], - score_threshold=0.1, - out_size_factor=8, - voxel_size=voxel_size[:2], - nms_type='rotate', - pre_max_size=1000, - post_max_size=83, - nms_thr=0.2))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/centerpoint_02pillar_second_secfpn_nus.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/centerpoint_02pillar_second_secfpn_nus.py deleted file mode 100644 index 311d7637..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/centerpoint_02pillar_second_secfpn_nus.py +++ /dev/null @@ -1,83 +0,0 @@ -voxel_size = [0.2, 0.2, 8] -model = dict( - type='CenterPoint', - pts_voxel_layer=dict( - max_num_points=20, voxel_size=voxel_size, max_voxels=(30000, 40000)), - pts_voxel_encoder=dict( - type='PillarFeatureNet', - in_channels=5, - feat_channels=[64], - with_distance=False, - voxel_size=(0.2, 0.2, 8), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - legacy=False), - pts_middle_encoder=dict( - type='PointPillarsScatter', in_channels=64, output_shape=(512, 512)), - pts_backbone=dict( - type='SECOND', - in_channels=64, - out_channels=[64, 128, 256], - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - conv_cfg=dict(type='Conv2d', bias=False)), - pts_neck=dict( - type='SECONDFPN', - in_channels=[64, 128, 256], - out_channels=[128, 128, 128], - upsample_strides=[0.5, 1, 2], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - upsample_cfg=dict(type='deconv', bias=False), - use_conv_for_no_stride=True), - pts_bbox_head=dict( - type='CenterHead', - in_channels=sum([128, 128, 128]), - tasks=[ - dict(num_class=1, class_names=['car']), - dict(num_class=2, class_names=['truck', 'construction_vehicle']), - dict(num_class=2, class_names=['bus', 'trailer']), - dict(num_class=1, class_names=['barrier']), - dict(num_class=2, class_names=['motorcycle', 'bicycle']), - dict(num_class=2, class_names=['pedestrian', 'traffic_cone']), - ], - common_heads=dict( - reg=(2, 2), height=(1, 2), dim=(3, 2), rot=(2, 2), vel=(2, 2)), - share_conv_channel=64, - bbox_coder=dict( - type='CenterPointBBoxCoder', - post_center_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0], - max_num=500, - score_threshold=0.1, - out_size_factor=4, - voxel_size=voxel_size[:2], - code_size=9), - separate_head=dict( - type='SeparateHead', init_bias=-2.19, final_kernel=3), - loss_cls=dict(type='GaussianFocalLoss', reduction='mean'), - loss_bbox=dict(type='L1Loss', reduction='mean', loss_weight=0.25), - norm_bbox=True), - # model training and testing settings - train_cfg=dict( - pts=dict( - grid_size=[512, 512, 1], - voxel_size=voxel_size, - out_size_factor=4, - dense_reg=1, - gaussian_overlap=0.1, - max_objs=500, - min_radius=2, - code_weights=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2])), - test_cfg=dict( - pts=dict( - post_center_limit_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0], - max_per_img=500, - max_pool_nms=False, - min_radius=[4, 12, 10, 1, 0.85, 0.175], - score_threshold=0.1, - pc_range=[-51.2, -51.2], - out_size_factor=4, - voxel_size=voxel_size[:2], - nms_type='rotate', - pre_max_size=1000, - post_max_size=83, - nms_thr=0.2))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/dgcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/dgcnn.py deleted file mode 100644 index 8303a789..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/dgcnn.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-# model settings -model = dict( - type='EncoderDecoder3D', - backbone=dict( - type='DGCNNBackbone', - in_channels=9, # [xyz, rgb, normal_xyz], modified with dataset - num_samples=(20, 20, 20), - knn_modes=('D-KNN', 'F-KNN', 'F-KNN'), - radius=(None, None, None), - gf_channels=((64, 64), (64, 64), (64, )), - fa_channels=(1024, ), - act_cfg=dict(type='LeakyReLU', negative_slope=0.2)), - decode_head=dict( - type='DGCNNHead', - fp_channels=(1216, 512), - channels=256, - dropout_ratio=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='LeakyReLU', negative_slope=0.2), - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - class_weight=None, # modified with dataset - loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide')) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/fcos3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/fcos3d.py deleted file mode 100644 index 639bee8a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/fcos3d.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -model = dict( - type='FCOSMono3D', - backbone=dict( - type='ResNet', - depth=101, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe', - init_cfg=dict( - type='Pretrained', - checkpoint='open-mmlab://detectron2/resnet101_caffe')), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs='on_output', - num_outs=5, - relu_before_extra_convs=True), - bbox_head=dict( - type='FCOSMono3DHead', - num_classes=10, - in_channels=256, - stacked_convs=2, - feat_channels=256, - use_direction_classifier=True, - diff_rad_by_sin=True, - pred_attrs=True, - pred_velo=True, - dir_offset=0.7854, # pi/4 - dir_limit_offset=0, - strides=[8, 16, 32, 64, 128], - group_reg_dims=(2, 1, 3, 1, 2), # offset, depth, size, rot, velo - cls_branch=(256, ), - reg_branch=( - (256, ), # offset - (256, ), # depth - (256, ), # size - (256, ), # rot - () # velo - ), - dir_branch=(256, ), - attr_branch=(256, ), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_attr=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_centerness=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - bbox_coder=dict(type='FCOS3DBBoxCoder', code_size=9), - norm_on_bbox=True, - centerness_on_reg=True, - center_sampling=True, - conv_bias=True, - dcn_on_last_conv=True), - train_cfg=dict( - allowed_border=0, - code_weight=[1.0, 1.0, 0.2, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05], - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_pre=1000, - nms_thr=0.8, - score_thr=0.05, - min_bbox_size=0, - max_per_img=200)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/groupfree3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/groupfree3d.py deleted file mode 100644 index 91ccf893..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/groupfree3d.py +++ /dev/null @@ -1,73 +0,0 @@ 
-# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -model = dict( - type='GroupFree3DNet', - backbone=dict( - type='PointNet2SASSG', - in_channels=3, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((64, 64, 128), (128, 128, 256), (128, 128, 256), - (128, 128, 256)), - fp_channels=((256, 256), (256, 288)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True)), - bbox_head=dict( - type='GroupFree3DHead', - in_channels=288, - num_decoder_layers=6, - num_proposal=256, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=dict( - type='GroupFree3DMHA', - embed_dims=288, - num_heads=8, - attn_drop=0.1, - dropout_layer=dict(type='Dropout', drop_prob=0.1)), - ffn_cfgs=dict( - embed_dims=288, - feedforward_channels=2048, - ffn_drop=0.1, - act_cfg=dict(type='ReLU', inplace=True)), - operation_order=('self_attn', 'norm', 'cross_attn', 'norm', 'ffn', - 'norm')), - pred_layer_cfg=dict( - in_channels=288, shared_conv_channels=(288, 288), bias=True), - sampling_objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=8.0), - objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', beta=1.0, reduction='sum', loss_weight=10.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(sample_mod='kps'), - test_cfg=dict( - sample_mod='kps', - nms_thr=0.25, - score_thr=0.0, - per_class_proposal=True, - prediction_stages='last')) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/h3dnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/h3dnet.py deleted file mode 100644 index 552c3d5f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/h3dnet.py +++ /dev/null @@ -1,343 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-primitive_z_cfg = dict( - type='PrimitiveHead', - num_dims=2, - num_classes=18, - primitive_mode='z', - upper_thresh=100.0, - surface_thresh=0.5, - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=1, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=1024, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True), - feat_channels=(128, 128), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.4, 0.6], - reduction='mean', - loss_weight=30.0), - center_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=0.5, - loss_dst_weight=0.5), - semantic_reg_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=0.5, - loss_dst_weight=0.5), - semantic_cls_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - train_cfg=dict( - dist_thresh=0.2, - var_thresh=1e-2, - lower_thresh=1e-6, - num_point=100, - num_point_line=10, - line_thresh=0.2)) - -primitive_xy_cfg = dict( - type='PrimitiveHead', - num_dims=1, - num_classes=18, - primitive_mode='xy', - upper_thresh=100.0, - surface_thresh=0.5, - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=1, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=1024, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True), - feat_channels=(128, 128), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.4, 0.6], - reduction='mean', - loss_weight=30.0), - center_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=0.5, - loss_dst_weight=0.5), - semantic_reg_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=0.5, - loss_dst_weight=0.5), - semantic_cls_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - train_cfg=dict( - dist_thresh=0.2, - var_thresh=1e-2, - lower_thresh=1e-6, - num_point=100, - num_point_line=10, - line_thresh=0.2)) - -primitive_line_cfg = dict( - type='PrimitiveHead', - num_dims=0, - num_classes=18, - primitive_mode='line', - upper_thresh=100.0, - surface_thresh=0.5, - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=1, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=1024, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True), - feat_channels=(128, 128), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.4, 0.6], - reduction='mean', - loss_weight=30.0), - center_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=1.0, - 
loss_dst_weight=1.0), - semantic_reg_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='sum', - loss_src_weight=1.0, - loss_dst_weight=1.0), - semantic_cls_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=2.0), - train_cfg=dict( - dist_thresh=0.2, - var_thresh=1e-2, - lower_thresh=1e-6, - num_point=100, - num_point_line=10, - line_thresh=0.2)) - -model = dict( - type='H3DNet', - backbone=dict( - type='MultiBackbone', - num_streams=4, - suffixes=['net0', 'net1', 'net2', 'net3'], - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-5, momentum=0.01), - act_cfg=dict(type='ReLU'), - backbones=dict( - type='PointNet2SASSG', - in_channels=4, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((64, 64, 128), (128, 128, 256), (128, 128, 256), - (128, 128, 256)), - fp_channels=((256, 256), (256, 256)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True))), - rpn_head=dict( - type='VoteHead', - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=3, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=256, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True), - pred_layer_cfg=dict( - in_channels=128, shared_conv_channels=(128, 128), bias=True), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.2, 0.8], - reduction='sum', - loss_weight=5.0), - center_loss=dict( - type='ChamferDistance', - mode='l2', - reduction='sum', - loss_src_weight=10.0, - loss_dst_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - roi_head=dict( - type='H3DRoIHead', - primitive_list=[primitive_z_cfg, primitive_xy_cfg, primitive_line_cfg], - bbox_head=dict( - type='H3DBboxHead', - gt_per_seed=3, - num_proposal=256, - suface_matching_cfg=dict( - type='PointSAModule', - num_point=256 * 6, - radius=0.5, - num_sample=32, - mlp_channels=[128 + 6, 128, 64, 32], - use_xyz=True, - normalize_xyz=True), - line_matching_cfg=dict( - type='PointSAModule', - num_point=256 * 12, - radius=0.5, - num_sample=32, - mlp_channels=[128 + 12, 128, 64, 32], - use_xyz=True, - normalize_xyz=True), - feat_channels=(128, 128), - primitive_refine_channels=[128, 128, 128], - upper_thresh=100.0, - surface_thresh=0.5, - line_thresh=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.2, 0.8], - reduction='sum', - loss_weight=5.0), - center_loss=dict( - type='ChamferDistance', - mode='l2', - reduction='sum', - loss_src_weight=10.0, - loss_dst_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=0.1), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - 
type='CrossEntropyLoss', reduction='sum', loss_weight=0.1), - size_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=0.1), - cues_objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.3, 0.7], - reduction='mean', - loss_weight=5.0), - cues_semantic_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.3, 0.7], - reduction='mean', - loss_weight=5.0), - proposal_objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.2, 0.8], - reduction='none', - loss_weight=5.0), - primitive_center_loss=dict( - type='MSELoss', reduction='none', loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - pos_distance_thr=0.3, neg_distance_thr=0.6, sample_mod='vote'), - rpn_proposal=dict(use_nms=False), - rcnn=dict( - pos_distance_thr=0.3, - neg_distance_thr=0.6, - sample_mod='vote', - far_threshold=0.6, - near_threshold=0.3, - mask_surface_threshold=0.3, - label_surface_threshold=0.3, - mask_line_threshold=0.3, - label_line_threshold=0.3)), - test_cfg=dict( - rpn=dict( - sample_mod='seed', - nms_thr=0.25, - score_thr=0.05, - per_class_proposal=True, - use_nms=False), - rcnn=dict( - sample_mod='seed', - nms_thr=0.25, - score_thr=0.05, - per_class_proposal=True))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_lyft.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_lyft.py deleted file mode 100644 index 87c7fe0c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_lyft.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = './hv_pointpillars_fpn_nus.py' - -# model settings (based on nuScenes model settings) -# Voxel size for voxel encoder -# Usually voxel size is changed consistently with the point cloud range -# If point cloud range is modified, do remember to change all related -# keys in the config. -model = dict( - pts_voxel_layer=dict( - max_num_points=20, - point_cloud_range=[-80, -80, -5, 80, 80, 3], - max_voxels=(60000, 60000)), - pts_voxel_encoder=dict( - feat_channels=[64], point_cloud_range=[-80, -80, -5, 80, 80, 3]), - pts_middle_encoder=dict(output_shape=[640, 640]), - pts_bbox_head=dict( - num_classes=9, - anchor_generator=dict( - ranges=[[-80, -80, -1.8, 80, 80, -1.8]], custom_values=[]), - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=7)), - # model training settings (based on nuScenes model settings) - train_cfg=dict(pts=dict(code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_nus.py deleted file mode 100644 index be29269d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_nus.py +++ /dev/null @@ -1,95 +0,0 @@ -# model settings -# Voxel size for voxel encoder -# Usually voxel size is changed consistently with the point cloud range -# If point cloud range is modified, do remember to change all related -# keys in the config. 
-voxel_size = [0.25, 0.25, 8] -model = dict( - type='MVXFasterRCNN', - pts_voxel_layer=dict( - max_num_points=64, - point_cloud_range=[-50, -50, -5, 50, 50, 3], - voxel_size=voxel_size, - max_voxels=(30000, 40000)), - pts_voxel_encoder=dict( - type='HardVFE', - in_channels=4, - feat_channels=[64, 64], - with_distance=False, - voxel_size=voxel_size, - with_cluster_center=True, - with_voxel_center=True, - point_cloud_range=[-50, -50, -5, 50, 50, 3], - norm_cfg=dict(type='naiveSyncBN1d', eps=1e-3, momentum=0.01)), - pts_middle_encoder=dict( - type='PointPillarsScatter', in_channels=64, output_shape=[400, 400]), - pts_backbone=dict( - type='SECOND', - in_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - out_channels=[64, 128, 256]), - pts_neck=dict( - type='FPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - act_cfg=dict(type='ReLU'), - in_channels=[64, 128, 256], - out_channels=256, - start_level=0, - num_outs=3), - pts_bbox_head=dict( - type='Anchor3DHead', - num_classes=10, - in_channels=256, - feat_channels=256, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-50, -50, -1.8, 50, 50, -1.8]], - scales=[1, 2, 4], - sizes=[ - [2.5981, 0.8660, 1.], # 1.5 / sqrt(3) - [1.7321, 0.5774, 1.], # 1 / sqrt(3) - [1., 1., 1.], - [0.4, 0.4, 1], - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=True), - assigner_per_size=False, - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi / 4 - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=9), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - pts=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2], - pos_weight=-1, - debug=False)), - test_cfg=dict( - pts=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_pre=1000, - nms_thr=0.2, - score_thr=0.05, - min_bbox_size=0, - max_num=500))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_range100_lyft.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_range100_lyft.py deleted file mode 100644 index 9cd200f3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_fpn_range100_lyft.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = './hv_pointpillars_fpn_nus.py' - -# model settings (based on nuScenes model settings) -# Voxel size for voxel encoder -# Usually voxel size is changed consistently with the point cloud range -# If point cloud range is modified, do remember to change all related -# keys in the config. 
-model = dict( - pts_voxel_layer=dict( - max_num_points=20, - point_cloud_range=[-100, -100, -5, 100, 100, 3], - max_voxels=(60000, 60000)), - pts_voxel_encoder=dict( - feat_channels=[64], point_cloud_range=[-100, -100, -5, 100, 100, 3]), - pts_middle_encoder=dict(output_shape=[800, 800]), - pts_bbox_head=dict( - num_classes=9, - anchor_generator=dict( - ranges=[[-100, -100, -1.8, 100, 100, -1.8]], custom_values=[]), - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=7)), - # model training settings (based on nuScenes model settings) - train_cfg=dict(pts=dict(code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_secfpn_kitti.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_secfpn_kitti.py deleted file mode 100644 index ac46475d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_secfpn_kitti.py +++ /dev/null @@ -1,94 +0,0 @@ -voxel_size = [0.16, 0.16, 4] - -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=32, # max_points_per_voxel - point_cloud_range=[0, -39.68, -3, 69.12, 39.68, 1], - voxel_size=voxel_size, - max_voxels=(16000, 40000) # (training, testing) max_voxels - ), - voxel_encoder=dict( - type='PillarFeatureNet', - in_channels=4, - feat_channels=[64], - with_distance=False, - voxel_size=voxel_size, - point_cloud_range=[0, -39.68, -3, 69.12, 39.68, 1]), - middle_encoder=dict( - type='PointPillarsScatter', in_channels=64, output_shape=[496, 432]), - backbone=dict( - type='SECOND', - in_channels=64, - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - out_channels=[64, 128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - assign_per_class=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[ - [0, -39.68, -0.6, 69.12, 39.68, -0.6], - [0, -39.68, -0.6, 69.12, 39.68, -0.6], - [0, -39.68, -1.78, 69.12, 39.68, -1.78], - ], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_secfpn_waymo.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_secfpn_waymo.py deleted file mode 100644 index 30e23e95..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_pointpillars_secfpn_waymo.py +++ /dev/null @@ -1,107 +0,0 @@ -# model settings -# Voxel size for voxel encoder -# Usually voxel size is changed consistently with the point cloud range -# If point cloud range is modified, do remember to change all related -# keys in the config. -voxel_size = [0.32, 0.32, 6] -model = dict( - type='MVXFasterRCNN', - pts_voxel_layer=dict( - max_num_points=20, - point_cloud_range=[-74.88, -74.88, -2, 74.88, 74.88, 4], - voxel_size=voxel_size, - max_voxels=(32000, 32000)), - pts_voxel_encoder=dict( - type='HardVFE', - in_channels=5, - feat_channels=[64], - with_distance=False, - voxel_size=voxel_size, - with_cluster_center=True, - with_voxel_center=True, - point_cloud_range=[-74.88, -74.88, -2, 74.88, 74.88, 4], - norm_cfg=dict(type='naiveSyncBN1d', eps=1e-3, momentum=0.01)), - pts_middle_encoder=dict( - type='PointPillarsScatter', in_channels=64, output_shape=[468, 468]), - pts_backbone=dict( - type='SECOND', - in_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - layer_nums=[3, 5, 5], - layer_strides=[1, 2, 2], - out_channels=[64, 128, 256]), - pts_neck=dict( - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-74.88, -74.88, -0.0345, 74.88, 74.88, -0.0345], - [-74.88, -74.88, -0.1188, 74.88, 74.88, -0.1188], - [-74.88, -74.88, 0, 74.88, 74.88, 0]], - sizes=[ - [4.73, 2.08, 1.77], # car - [1.81, 0.84, 1.77], # cyclist - [0.91, 0.84, 1.74] # pedestrian - ], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi / 4 - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=7), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - pts=dict( - assigner=[ - dict( # car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - dict( # pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - ], - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], - pos_weight=-1, - debug=False)), - test_cfg=dict( - pts=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_pre=4096, - nms_thr=0.25, - score_thr=0.1, - min_bbox_size=0, - max_num=500))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_second_secfpn_kitti.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_second_secfpn_kitti.py deleted file mode 100644 index e7d569a5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_second_secfpn_kitti.py +++ /dev/null @@ -1,89 +0,0 @@ -voxel_size = [0.05, 0.05, 0.1] - -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=5, - point_cloud_range=[0, -40, -3, 70.4, 40, 1], - voxel_size=voxel_size, - max_voxels=(16000, 40000)), - voxel_encoder=dict(type='HardSimpleVFE'), - middle_encoder=dict( - type='SparseEncoder', - in_channels=4, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[ - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78], - ], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_second_secfpn_waymo.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_second_secfpn_waymo.py deleted file mode 100644 index 0fa39e15..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/hv_second_secfpn_waymo.py +++ /dev/null @@ -1,99 +0,0 @@ -# model settings -# Voxel size for voxel encoder -# Usually voxel size is changed consistently with the point cloud range -# If point cloud range is modified, do remember to change all related -# keys in the config. 
-voxel_size = [0.08, 0.08, 0.1] -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=10, - point_cloud_range=[-76.8, -51.2, -2, 76.8, 51.2, 4], - voxel_size=voxel_size, - max_voxels=(80000, 90000)), - voxel_encoder=dict(type='HardSimpleVFE', num_features=5), - middle_encoder=dict( - type='SparseEncoder', - in_channels=5, - sparse_shape=[61, 1280, 1920], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=384, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-76.8, -51.2, -0.0345, 76.8, 51.2, -0.0345], - [-76.8, -51.2, 0, 76.8, 51.2, 0], - [-76.8, -51.2, -0.1188, 76.8, 51.2, -0.1188]], - sizes=[ - [4.73, 2.08, 1.77], # car - [0.91, 0.84, 1.74], # pedestrian - [1.81, 0.84, 1.77] # cyclist - ], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi / 4 - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=7), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - dict( # cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1) - ], - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_pre=4096, - nms_thr=0.25, - score_thr=0.1, - min_bbox_size=0, - max_num=500)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/imvotenet_image.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/imvotenet_image.py deleted file mode 100644 index 981f8bc9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/imvotenet_image.py +++ /dev/null @@ -1,108 +0,0 @@ -model = dict( - type='ImVoteNet', - img_backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe'), - img_neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - img_rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - 
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - img_roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0))), - - # model training and testing settings - train_cfg=dict( - img_rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - img_rpn_proposal=dict( - nms_across_levels=False, - nms_pre=2000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - img_rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - img_rpn=dict( - nms_across_levels=False, - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - img_rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/mask_rcnn_r50_fpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/mask_rcnn_r50_fpn.py deleted file mode 100644 index 4e670e9d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/mask_rcnn_r50_fpn.py +++ /dev/null @@ -1,124 +0,0 @@ -# model settings -model = dict( - type='MaskRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 
0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_across_levels=False, - nms_pre=2000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_across_levels=False, - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/paconv_cuda_ssg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/paconv_cuda_ssg.py deleted file mode 100644 index f513bd4a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/paconv_cuda_ssg.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = './paconv_ssg.py' - -model = dict( - backbone=dict( - sa_cfg=dict( - type='PAConvCUDASAModule', - scorenet_cfg=dict(mlp_channels=[8, 16, 16])))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/paconv_ssg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/paconv_ssg.py deleted file mode 100644 index 902b1cfb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/paconv_ssg.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-# model settings -model = dict( - type='EncoderDecoder3D', - backbone=dict( - type='PointNet2SASSG', - in_channels=9, # [xyz, rgb, normalized_xyz] - num_points=(1024, 256, 64, 16), - radius=(None, None, None, None), # use kNN instead of ball query - num_samples=(32, 32, 32, 32), - sa_channels=((32, 32, 64), (64, 64, 128), (128, 128, 256), (256, 256, - 512)), - fp_channels=(), - norm_cfg=dict(type='BN2d', momentum=0.1), - sa_cfg=dict( - type='PAConvSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=False, - paconv_num_kernels=[16, 16, 16], - paconv_kernel_input='w_neighbor', - scorenet_input='w_neighbor_dist', - scorenet_cfg=dict( - mlp_channels=[16, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False))), - decode_head=dict( - type='PAConvHead', - # PAConv model's decoder takes skip connections from beckbone - # different from PointNet++, it also concats input features in the last - # level of decoder, leading to `128 + 6` as the channel number - fp_channels=((768, 256, 256), (384, 256, 256), (320, 256, 128), - (128 + 6, 128, 128, 128)), - channels=128, - dropout_ratio=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - class_weight=None, # should be modified with dataset - loss_weight=1.0)), - # correlation loss to regularize PAConv's kernel weights - loss_regularization=dict( - type='PAConvRegularizationLoss', reduction='sum', loss_weight=10.0), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide')) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/parta2.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/parta2.py deleted file mode 100644 index 6c573c71..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/parta2.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-# model settings -voxel_size = [0.05, 0.05, 0.1] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] - -model = dict( - type='PartA2', - voxel_layer=dict( - max_num_points=5, # max_points_per_voxel - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(16000, 40000) # (training, testing) max_voxels - ), - voxel_encoder=dict(type='HardSimpleVFE'), - middle_encoder=dict( - type='SparseUNet', - in_channels=4, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - rpn_head=dict( - type='PartA2RPNHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[[0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78]], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - assigner_per_size=True, - assign_per_class=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - roi_head=dict( - type='PartAggregationROIHead', - num_classes=3, - semantic_head=dict( - type='PointwiseSemanticHead', - in_channels=16, - extra_width=0.2, - seg_score_thr=0.3, - num_classes=3, - loss_seg=dict( - type='FocalLoss', - use_sigmoid=True, - reduction='sum', - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_part=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)), - seg_roi_extractor=dict( - type='Single3DRoIAwareExtractor', - roi_layer=dict( - type='RoIAwarePool3d', - out_size=14, - max_pts_per_voxel=128, - mode='max')), - part_roi_extractor=dict( - type='Single3DRoIAwareExtractor', - roi_layer=dict( - type='RoIAwarePool3d', - out_size=14, - max_pts_per_voxel=128, - mode='avg')), - bbox_head=dict( - type='PartA2BboxHead', - num_classes=3, - seg_in_channels=16, - part_in_channels=4, - seg_conv_channels=[64, 64], - part_conv_channels=[64, 64], - merge_conv_channels=[128, 128], - down_conv_channels=[128, 256], - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - shared_fc_channels=[256, 512, 512, 512], - cls_channels=[256, 256], - reg_channels=[256, 256], - dropout_ratio=0.1, - roi_feat_size=14, - with_corner_loss=True, - loss_bbox=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1) - ], - 
allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=9000, - nms_post=512, - max_num=512, - nms_thr=0.8, - score_thr=0, - use_rotate_nms=False), - rcnn=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1) - ], - sampler=dict( - type='IoUNegPiecewiseSampler', - num=128, - pos_fraction=0.55, - neg_piece_fractions=[0.8, 0.2], - neg_iou_piece_thrs=[0.55, 0.1], - neg_pos_ub=-1, - add_gt_as_proposals=False, - return_iou=True), - cls_pos_thr=0.75, - cls_neg_thr=0.25)), - test_cfg=dict( - rpn=dict( - nms_pre=1024, - nms_post=100, - max_num=100, - nms_thr=0.7, - score_thr=0, - use_rotate_nms=True), - rcnn=dict( - use_rotate_nms=True, - use_raw_score=True, - nms_thr=0.01, - score_thr=0.1))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pgd.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pgd.py deleted file mode 100644 index 473ab09c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pgd.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -_base_ = './fcos3d.py' -# model settings -model = dict( - bbox_head=dict( - _delete_=True, - type='PGDHead', - num_classes=10, - in_channels=256, - stacked_convs=2, - feat_channels=256, - use_direction_classifier=True, - diff_rad_by_sin=True, - pred_attrs=True, - pred_velo=True, - pred_bbox2d=True, - pred_keypoints=False, - dir_offset=0.7854, # pi/4 - strides=[8, 16, 32, 64, 128], - group_reg_dims=(2, 1, 3, 1, 2), # offset, depth, size, rot, velo - cls_branch=(256, ), - reg_branch=( - (256, ), # offset - (256, ), # depth - (256, ), # size - (256, ), # rot - () # velo - ), - dir_branch=(256, ), - attr_branch=(256, ), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_attr=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_centerness=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - norm_on_bbox=True, - centerness_on_reg=True, - center_sampling=True, - conv_bias=True, - dcn_on_last_conv=True, - use_depth_classifier=True, - depth_branch=(256, ), - depth_range=(0, 50), - depth_unit=10, - division='uniform', - depth_bins=6, - bbox_coder=dict(type='PGDBBoxCoder', code_size=9)), - test_cfg=dict(nms_pre=1000, nms_thr=0.8, score_thr=0.01, max_per_img=200)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/point_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/point_rcnn.py deleted file mode 100644 index e09ed63b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/point_rcnn.py +++ /dev/null @@ -1,133 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., 
Ltd. -# All Rights Reserved. -model = dict( - type='PointRCNN', - backbone=dict( - type='PointNet2SAMSG', - in_channels=4, - num_points=(4096, 1024, 256, 64), - radii=((0.1, 0.5), (0.5, 1.0), (1.0, 2.0), (2.0, 4.0)), - num_samples=((16, 32), (16, 32), (16, 32), (16, 32)), - sa_channels=(((16, 16, 32), (32, 32, 64)), ((64, 64, 128), (64, 96, - 128)), - ((128, 196, 256), (128, 196, 256)), ((256, 256, 512), - (256, 384, 512))), - fps_mods=(('D-FPS'), ('D-FPS'), ('D-FPS'), ('D-FPS')), - fps_sample_range_lists=((-1), (-1), (-1), (-1)), - aggregation_channels=(None, None, None, None), - dilated_group=(False, False, False, False), - out_indices=(0, 1, 2, 3), - norm_cfg=dict(type='BN2d', eps=1e-3, momentum=0.1), - sa_cfg=dict( - type='PointSAModuleMSG', - pool_mod='max', - use_xyz=True, - normalize_xyz=False)), - neck=dict( - type='PointNetFPNeck', - fp_channels=((1536, 512, 512), (768, 512, 512), (608, 256, 256), - (257, 128, 128))), - rpn_head=dict( - type='PointRPNHead', - num_classes=3, - enlarge_width=0.1, - pred_layer_cfg=dict( - in_channels=128, - cls_linear_channels=(256, 256), - reg_linear_channels=(256, 256)), - cls_loss=dict( - type='FocalLoss', - use_sigmoid=True, - reduction='sum', - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - bbox_loss=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=1.0), - bbox_coder=dict( - type='PointXYZWHLRBBoxCoder', - code_size=8, - # code_size: (center residual (3), size regression (3), - # torch.cos(yaw) (1), torch.sin(yaw) (1) - use_mean_size=True, - mean_size=[[3.9, 1.6, 1.56], [0.8, 0.6, 1.73], [1.76, 0.6, - 1.73]])), - roi_head=dict( - type='PointRCNNRoIHead', - point_roi_extractor=dict( - type='Single3DRoIPointExtractor', - roi_layer=dict(type='RoIPointPool3d', num_sampled_points=512)), - bbox_head=dict( - type='PointRCNNBboxHead', - num_classes=1, - pred_layer_cfg=dict( - in_channels=512, - cls_conv_channels=(256, 256), - reg_conv_channels=(256, 256), - bias=True), - in_channels=5, - # 5 = 3 (xyz) + scores + depth - mlp_channels=[128, 128], - num_points=(128, 32, -1), - radius=(0.2, 0.4, 100), - num_samples=(16, 16, 16), - sa_channels=((128, 128, 128), (128, 128, 256), (256, 256, 512)), - with_corner_loss=True), - depth_normalizer=70.0), - # model training and testing settings - train_cfg=dict( - pos_distance_thr=10.0, - rpn=dict( - nms_cfg=dict( - use_rotate_nms=True, iou_thr=0.8, nms_pre=9000, nms_post=512), - score_thr=None), - rcnn=dict( - assigner=[ - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1, - match_low_quality=False), - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1, - match_low_quality=False), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1, - match_low_quality=False) - ], - sampler=dict( - type='IoUNegPiecewiseSampler', - num=128, - pos_fraction=0.5, - neg_piece_fractions=[0.8, 0.2], - neg_iou_piece_thrs=[0.55, 0.1], - neg_pos_ub=-1, - add_gt_as_proposals=False, - return_iou=True), - cls_pos_thr=0.7, - cls_neg_thr=0.25)), - test_cfg=dict( - rpn=dict( - nms_cfg=dict( - use_rotate_nms=True, iou_thr=0.85, nms_pre=9000, nms_post=512), - score_thr=None), - 
rcnn=dict(use_rotate_nms=True, nms_thr=0.1, score_thr=0.1)))
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pointnet2_msg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pointnet2_msg.py
deleted file mode 100644
index 222ab885..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pointnet2_msg.py
+++ /dev/null
@@ -1,28 +0,0 @@
-_base_ = './pointnet2_ssg.py'
-
-# model settings
-model = dict(
-    backbone=dict(
-        _delete_=True,
-        type='PointNet2SAMSG',
-        in_channels=6,  # [xyz, rgb], should be modified with dataset
-        num_points=(1024, 256, 64, 16),
-        radii=((0.05, 0.1), (0.1, 0.2), (0.2, 0.4), (0.4, 0.8)),
-        num_samples=((16, 32), (16, 32), (16, 32), (16, 32)),
-        sa_channels=(((16, 16, 32), (32, 32, 64)), ((64, 64, 128), (64, 96,
-                                                                    128)),
-                     ((128, 196, 256), (128, 196, 256)), ((256, 256, 512),
-                                                          (256, 384, 512))),
-        aggregation_channels=(None, None, None, None),
-        fps_mods=(('D-FPS'), ('D-FPS'), ('D-FPS'), ('D-FPS')),
-        fps_sample_range_lists=((-1), (-1), (-1), (-1)),
-        dilated_group=(False, False, False, False),
-        out_indices=(0, 1, 2, 3),
-        sa_cfg=dict(
-            type='PointSAModuleMSG',
-            pool_mod='max',
-            use_xyz=True,
-            normalize_xyz=False)),
-    decode_head=dict(
-        fp_channels=((1536, 256, 256), (512, 256, 256), (352, 256, 128),
-                     (128, 128, 128, 128))))
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pointnet2_ssg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pointnet2_ssg.py
deleted file mode 100644
index ed6a3af3..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/pointnet2_ssg.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-# model settings
-model = dict(
-    type='EncoderDecoder3D',
-    backbone=dict(
-        type='PointNet2SASSG',
-        in_channels=6,  # [xyz, rgb], should be modified with dataset
-        num_points=(1024, 256, 64, 16),
-        radius=(0.1, 0.2, 0.4, 0.8),
-        num_samples=(32, 32, 32, 32),
-        sa_channels=((32, 32, 64), (64, 64, 128), (128, 128, 256), (256, 256,
-                                                                    512)),
-        fp_channels=(),
-        norm_cfg=dict(type='BN2d'),
-        sa_cfg=dict(
-            type='PointSAModule',
-            pool_mod='max',
-            use_xyz=True,
-            normalize_xyz=False)),
-    decode_head=dict(
-        type='PointNet2Head',
-        fp_channels=((768, 256, 256), (384, 256, 256), (320, 256, 128),
-                     (128, 128, 128, 128)),
-        channels=128,
-        dropout_ratio=0.5,
-        conv_cfg=dict(type='Conv1d'),
-        norm_cfg=dict(type='BN1d'),
-        act_cfg=dict(type='ReLU'),
-        loss_decode=dict(
-            type='CrossEntropyLoss',
-            use_sigmoid=False,
-            class_weight=None,  # should be modified with dataset
-            loss_weight=1.0)),
-    # model training and testing settings
-    train_cfg=dict(),
-    test_cfg=dict(mode='slide'))
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/smoke.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/smoke.py
deleted file mode 100644
index 45dfb35e..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/smoke.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-model = dict(
-    type='SMOKEMono3D',
-    backbone=dict(
-        type='DLANet',
-        depth=34,
-        in_channels=3,
-        norm_cfg=dict(type='GN', num_groups=32),
-        init_cfg=dict(
-            type='Pretrained',
-            checkpoint='http://dl.yf.io/dla/models/imagenet/dla34-ba72cf86.pth'
-        )),
-    neck=dict(
-        type='DLANeck',
-        in_channels=[16, 32, 64, 128, 256, 512],
-        start_level=2,
-        end_level=5,
-        norm_cfg=dict(type='GN', num_groups=32)),
-    bbox_head=dict(
-        type='SMOKEMono3DHead',
-        num_classes=3,
-        in_channels=64,
-        dim_channel=[3, 4, 5],
-        ori_channel=[6, 7],
-        stacked_convs=0,
-        feat_channels=64,
-        use_direction_classifier=False,
-        diff_rad_by_sin=False,
-        pred_attrs=False,
-        pred_velo=False,
-        dir_offset=0,
-        strides=None,
-        group_reg_dims=(8, ),
-        cls_branch=(256, ),
-        reg_branch=((256, ), ),
-        num_attrs=0,
-        bbox_code_size=7,
-        dir_branch=(),
-        attr_branch=(),
-        bbox_coder=dict(
-            type='SMOKECoder',
-            base_depth=(28.01, 16.32),
-            base_dims=((0.88, 1.73, 0.67), (1.78, 1.70, 0.58), (3.88, 1.63,
-                                                                1.53)),
-            code_size=7),
-        loss_cls=dict(type='GaussianFocalLoss', loss_weight=1.0),
-        loss_bbox=dict(type='L1Loss', reduction='sum', loss_weight=1 / 300),
-        loss_dir=dict(
-            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
-        loss_attr=None,
-        conv_bias=True,
-        dcn_on_last_conv=False),
-    train_cfg=None,
-    test_cfg=dict(topK=100, local_maximum_kernel=3, max_per_img=100))
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/votenet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/votenet.py
deleted file mode 100644
index 076aebfe..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/models/votenet.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-model = dict( - type='VoteNet', - backbone=dict( - type='PointNet2SASSG', - in_channels=4, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((64, 64, 128), (128, 128, 256), (128, 128, 256), - (128, 128, 256)), - fp_channels=((256, 256), (256, 256)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True)), - bbox_head=dict( - type='VoteHead', - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=3, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=256, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True), - pred_layer_cfg=dict( - in_channels=128, shared_conv_channels=(128, 128), bias=True), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.2, 0.8], - reduction='sum', - loss_weight=5.0), - center_loss=dict( - type='ChamferDistance', - mode='l2', - reduction='sum', - loss_src_weight=10.0, - loss_dst_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0 / 3.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - pos_distance_thr=0.3, neg_distance_thr=0.6, sample_mod='vote'), - test_cfg=dict( - sample_mod='seed', - nms_thr=0.25, - score_thr=0.05, - per_class_proposal=True)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cosine.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cosine.py deleted file mode 100644 index 15a44548..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cosine.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# This schedule is mainly used by models with dynamic voxelization -# optimizer -lr = 0.003 # max learning rate -optimizer = dict( - type='AdamW', - lr=lr, - betas=(0.95, 0.99), # the momentum is change during training - weight_decay=0.001) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) - -lr_config = dict( - policy='CosineAnnealing', - warmup='linear', - warmup_iters=1000, - warmup_ratio=1.0 / 10, - min_lr_ratio=1e-5) - -momentum_config = None - -runner = dict(type='EpochBasedRunner', max_epochs=40) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cyclic_20e.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cyclic_20e.py deleted file mode 100644 index 704740ee..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cyclic_20e.py +++ /dev/null @@ -1,24 +0,0 @@ -# For nuScenes dataset, we usually evaluate the model at the end of training. -# Since the models are trained by 24 epochs by default, we set evaluation -# interval to be 20. 
Please change the interval accordingly if you do not -# use a default schedule. -# optimizer -# This schedule is mainly used by models on nuScenes dataset -optimizer = dict(type='AdamW', lr=1e-4, weight_decay=0.01) -# max_norm=10 is better for SECOND -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4, -) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4, -) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=20) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cyclic_40e.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cyclic_40e.py deleted file mode 100644 index 66498633..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/cyclic_40e.py +++ /dev/null @@ -1,31 +0,0 @@ -# The schedule is usually used by models trained on KITTI dataset - -# The learning rate set in the cyclic schedule is the initial learning rate -# rather than the max learning rate. Since the target_ratio is (10, 1e-4), -# the learning rate will change from 0.0018 to 0.018, than go to 0.0018*1e-4 -lr = 0.0018 -# The optimizer follows the setting in SECOND.Pytorch, but here we use -# the official AdamW optimizer implemented by PyTorch. -optimizer = dict(type='AdamW', lr=lr, betas=(0.95, 0.99), weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) -# We use cyclic learning rate and momentum schedule following SECOND.Pytorch -# https://github.com/traveller59/second.pytorch/blob/3aba19c9688274f75ebb5e576f65cfe54773c021/torchplus/train/learning_schedules_fastai.py#L69 # noqa -# We implement them in mmcv, for more details, please refer to -# https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/lr_updater.py#L327 # noqa -# https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/momentum_updater.py#L130 # noqa -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4, -) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4, -) -# Although the max_epochs is 40, this schedule is usually used we -# RepeatDataset with repeat ratio N, thus the actual max epoch -# number could be Nx40 -runner = dict(type='EpochBasedRunner', max_epochs=40) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/mmdet_schedule_1x.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/mmdet_schedule_1x.py deleted file mode 100644 index 13b3783c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/mmdet_schedule_1x.py +++ /dev/null @@ -1,11 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[8, 11]) -runner = dict(type='EpochBasedRunner', max_epochs=12) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/schedule_2x.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/schedule_2x.py deleted file mode 100644 index afde799d..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/schedule_2x.py +++ /dev/null @@ -1,14 +0,0 @@ -# optimizer -# This schedule is mainly used by models on nuScenes dataset -optimizer = dict(type='AdamW', lr=0.001, weight_decay=0.01) -# max_norm=10 is better for SECOND -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=1000, - warmup_ratio=1.0 / 1000, - step=[20, 23]) -momentum_config = None -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/schedule_3x.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/schedule_3x.py deleted file mode 100644 index 115cd26b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/schedule_3x.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -# This schedule is mainly used by models on indoor dataset, -# e.g., VoteNet on SUNRGBD and ScanNet -lr = 0.008 # max learning rate -optimizer = dict(type='AdamW', lr=lr, weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[24, 32]) -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=36) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_100e.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_100e.py deleted file mode 100644 index 3b75932b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_100e.py +++ /dev/null @@ -1,8 +0,0 @@ -# optimizer -# This schedule is mainly used on S3DIS dataset in segmentation task -optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='CosineAnnealing', warmup=None, min_lr=1e-5) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=100) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_150e.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_150e.py deleted file mode 100644 index 04b44e51..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_150e.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -# This schedule is mainly used on S3DIS dataset in segmentation task -optimizer = dict(type='SGD', lr=0.2, weight_decay=0.0001, momentum=0.9) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='CosineAnnealing', warmup=None, min_lr=0.002) -momentum_config = None - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=150) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_200e.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_200e.py deleted file mode 100644 index 6a49484c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_200e.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -# This schedule is mainly used on ScanNet dataset in segmentation task -optimizer = dict(type='Adam', lr=0.001, weight_decay=0.01) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='CosineAnnealing', warmup=None, min_lr=1e-5) -momentum_config = None - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=200) 
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_50e.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_50e.py deleted file mode 100644 index 975a8f9f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/_base_/schedules/seg_cosine_50e.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -# This schedule is mainly used on S3DIS dataset in segmentation task -optimizer = dict(type='Adam', lr=0.001, weight_decay=0.001) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='CosineAnnealing', warmup=None, min_lr=1e-5) -momentum_config = None - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=50) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_PartA2_secfpn_4x8_cyclic_80e_pcdet_kitti-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_PartA2_secfpn_4x8_cyclic_80e_pcdet_kitti-3d-3class.py deleted file mode 100644 index c3fcca9b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_PartA2_secfpn_4x8_cyclic_80e_pcdet_kitti-3d-3class.py +++ /dev/null @@ -1,334 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# model settings -voxel_size = [0.05, 0.05, 0.1] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] # velodyne coordinates, x, y, z - -model = dict( - type='PartA2', - voxel_layer=dict( - max_num_points=5, # max_points_per_voxel - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(16000, 40000) # (training, testing) max_coxels - ), - voxel_encoder=dict(type='HardSimpleVFE'), - middle_encoder=dict( - type='SparseUNet', - in_channels=4, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - rpn_head=dict( - type='PartA2RPNHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[[0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78]], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - assigner_per_size=True, - assign_per_class=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - roi_head=dict( - type='PartAggregationROIHead', - num_classes=3, - semantic_head=dict( - type='PointwiseSemanticHead', - in_channels=16, - extra_width=0.2, - seg_score_thr=0.3, - num_classes=3, - loss_seg=dict( - type='FocalLoss', - use_sigmoid=True, - reduction='sum', - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_part=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)), - seg_roi_extractor=dict( - type='Single3DRoIAwareExtractor', - roi_layer=dict( - type='RoIAwarePool3d', - out_size=14, - max_pts_per_voxel=128, - mode='max')), - part_roi_extractor=dict( - type='Single3DRoIAwareExtractor', - roi_layer=dict( - type='RoIAwarePool3d', - out_size=14, 
- max_pts_per_voxel=128, - mode='avg')), - bbox_head=dict( - type='PartA2BboxHead', - num_classes=3, - seg_in_channels=16, - part_in_channels=4, - seg_conv_channels=[64, 64], - part_conv_channels=[64, 64], - merge_conv_channels=[128, 128], - down_conv_channels=[128, 256], - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - shared_fc_channels=[256, 512, 512, 512], - cls_channels=[256, 256], - reg_channels=[256, 256], - dropout_ratio=0.1, - roi_feat_size=14, - with_corner_loss=True, - loss_bbox=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1) - ], - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=9000, - nms_post=512, - max_num=512, - nms_thr=0.8, - score_thr=0, - use_rotate_nms=False), - rcnn=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict( - type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1) - ], - sampler=dict( - type='IoUNegPiecewiseSampler', - num=128, - pos_fraction=0.55, - neg_piece_fractions=[0.8, 0.2], - neg_iou_piece_thrs=[0.55, 0.1], - neg_pos_ub=-1, - add_gt_as_proposals=False, - return_iou=True), - cls_pos_thr=0.75, - cls_neg_thr=0.25)), - test_cfg=dict( - rpn=dict( - nms_pre=1024, - nms_post=100, - max_num=100, - nms_thr=0.7, - score_thr=0, - use_rotate_nms=True), - rcnn=dict( - use_rotate_nms=True, - use_raw_score=True, - nms_thr=0.01, - score_thr=0.3))) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=5, Cyclist=5)), - classes=class_names, - sample_groups=dict(Car=20, Pedestrian=15, Cyclist=15)) -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - 
dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True)) -# optimizer -lr = 0.001 # max learning rate -optimizer = dict(type='AdamW', lr=lr, betas=(0.95, 0.99), weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4) -checkpoint_config = dict(interval=1) -evaluation = dict(interval=1, pipeline=eval_pipeline) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -dist_params = dict(backend='nccl', port=29506) -log_level = 'INFO' -find_unused_parameters = True -work_dir = './work_dirs/parta2_secfpn_80e' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_pointpillars_secfpn_3x8_100e_det3d_kitti-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_pointpillars_secfpn_3x8_100e_det3d_kitti-3d-car.py deleted file mode 100644 index 98da21f2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_pointpillars_secfpn_3x8_100e_det3d_kitti-3d-car.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-# model settings -voxel_size = [0.16, 0.16, 4] -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=64, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(12000, 20000)), - voxel_encoder=dict( - type='PillarFeatureNet', - in_channels=4, - feat_channels=[64], - with_distance=False, - voxel_size=voxel_size, - point_cloud_range=point_cloud_range), - middle_encoder=dict( - type='PointPillarsScatter', in_channels=64, output_shape=[496, 432]), - backbone=dict( - type='SECOND', - in_channels=64, - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - out_channels=[64, 128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[[0, -39.68, -1.78, 69.12, 39.68, -1.78]], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=True), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - sample_groups=dict(Car=15), - classes=class_names) - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[0.25, 0.25, 0.25], - global_rot_range=[0.0, 0.0], - rot_range=[-0.15707963267, 0.15707963267]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function 
consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=3, - workers_per_gpu=3, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True)) -# optimizer -lr = 0.001 # max learning rate -optimizer = dict( - type='AdamW', - lr=lr, - betas=(0.95, 0.99), # the momentum is change during training - weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4) -checkpoint_config = dict(interval=1) -evaluation = dict(interval=1, pipeline=eval_pipeline) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=50) -dist_params = dict(backend='nccl') -log_level = 'INFO' -work_dir = './work_dirs/pp_secfpn_100e' -load_from = None -resume_from = None -workflow = [('train', 50)] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_pointpillars_secfpn_4x8_80e_pcdet_kitti-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_pointpillars_secfpn_4x8_80e_pcdet_kitti-3d-3class.py deleted file mode 100644 index ebc134d5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_pointpillars_secfpn_4x8_80e_pcdet_kitti-3d-3class.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-# model settings -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] -voxel_size = [0.16, 0.16, 4] -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=32, # max_points_per_voxel - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(16000, 40000) # (training, testing) max_coxels - ), - voxel_encoder=dict( - type='PillarFeatureNet', - in_channels=4, - feat_channels=[64], - with_distance=False, - voxel_size=voxel_size, - point_cloud_range=point_cloud_range, - ), - middle_encoder=dict( - type='PointPillarsScatter', - in_channels=64, - output_shape=[496, 432], - ), - backbone=dict( - type='SECOND', - in_channels=64, - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - out_channels=[64, 128, 256], - ), - neck=dict( - type='SECONDFPN', - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128], - ), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[ - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78], - ], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2), - ), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict( - Car=5, - Pedestrian=5, - Cyclist=5, - )), - classes=class_names, - sample_groups=dict( - Car=15, - Pedestrian=15, - Cyclist=15, - )) - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - 
dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']), -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True)) -# optimizer -lr = 0.0003 # max learning rate -optimizer = dict( - type='AdamW', - lr=lr, - betas=(0.95, 0.99), # the momentum is change during training - weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) -# learning policy -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4) -checkpoint_config = dict(interval=1) -evaluation = dict(interval=2, pipeline=eval_pipeline) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -dist_params = dict(backend='nccl') -log_level = 'INFO' -work_dir = './work_dirs/pp_secfpn_80e' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_second_secfpn_4x8_80e_pcdet_kitti-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_second_secfpn_4x8_80e_pcdet_kitti-3d-3class.py deleted file mode 100644 index 3b346297..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/benchmark/hv_second_secfpn_4x8_80e_pcdet_kitti-3d-3class.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-# model settings -voxel_size = [0.05, 0.05, 0.1] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] - -model = dict( - type='VoxelNet', - voxel_layer=dict( - max_num_points=5, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(16000, 40000)), - voxel_encoder=dict(type='HardSimpleVFE'), - middle_encoder=dict( - type='SparseEncoder', - in_channels=4, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[ - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78], - ], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -input_modality = dict(use_lidar=False, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict( - Car=5, - Pedestrian=5, - Cyclist=5, - )), - classes=class_names, - sample_groups=dict( - Car=20, - Pedestrian=15, - Cyclist=15, - )) -file_client_args = dict(backend='disk') -# file_client_args = dict( -# backend='petrel', path_mapping=dict(data='s3://kitti_data/')) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args), - dict(type='ObjectSample', db_sampler=db_sampler), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - 
dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True)) -# optimizer -lr = 0.0003 # max learning rate -optimizer = dict(type='AdamW', lr=lr, betas=(0.95, 0.99), weight_decay=0.01) -optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2)) -lr_config = dict( - policy='cyclic', - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4) -momentum_config = dict( - policy='cyclic', - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4) -checkpoint_config = dict(interval=1) -evaluation = dict(interval=2, pipeline=eval_pipeline) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -dist_params = dict(backend='nccl') -log_level = 'INFO' -work_dir = './work_dirs/sec_secfpn_80e' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/README.md deleted file mode 100644 index d9173c93..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/README.md +++ /dev/null @@ -1,138 +0,0 @@ -# Center-based 3D Object Detection and Tracking - -> [Center-based 3D Object Detection and Tracking](https://arxiv.org/abs/2006.11275) - - - -## Abstract - -Three-dimensional objects are commonly represented as 3D boxes in a point-cloud. 
This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. On the Waymo Open Dataset, CenterPoint outperforms all previous single model method by a large margin and ranks first among all Lidar-only submissions. - -
- -
-
-## Introduction
-
-We implement CenterPoint and provide the result and checkpoints on nuScenes dataset.
-
-We follow the below style to name config files. Contributors are advised to follow the same style.
-`{xxx}` is required field and `[yyy]` is optional.
-
-`{model}`: model type like `centerpoint`.
-
-`{model setting}`: voxel size and voxel type like `01voxel`, `02pillar`.
-
-`{backbone}`: backbone type like `second`.
-
-`{neck}`: neck type like `secfpn`.
-
-`[dcn]`: Whether to use deformable convolution.
-
-`[circle]`: Whether to use circular nms.
-
-`[batch_per_gpu x gpu]`: GPUs and samples per GPU, 4x8 is used by default.
-
-`{schedule}`: training schedule, options are 1x, 2x, 20e, etc. 1x and 2x means 12 epochs and 24 epochs respectively. 20e is adopted in cascade models, which denotes 20 epochs. For 1x/2x, initial learning rate decays by a factor of 10 at the 8/16th and 11/22th epochs. For 20e, initial learning rate decays by a factor of 10 at the 16th and 19th epochs.
-
-`{dataset}`: dataset like nus-3d, kitti-3d, lyft-3d, scannet-3d, sunrgbd-3d. We also indicate the number of classes we are using if there exist multiple settings, e.g., kitti-3d-3class and kitti-3d-car means training on KITTI dataset with 3 classes and single class, respectively.
-
-## Usage
-
-### Test time augmentation
-
-We have supported double-flip and scale augmentation during test time. To use test time augmentation, users need to modify the
-`test_pipeline` and `test_cfg` in the config.
-For example, we change `centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py` to the following.
-
-```python
-_base_ = './centerpoint_0075voxel_second_secfpn_circlenms' \
-    '_4x8_cyclic_20e_nus.py'
-
-model = dict(
-    test_cfg=dict(
-        pts=dict(
-            use_rotate_nms=True,
-            max_num=83)))
-
-point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0]
-file_client_args = dict(backend='disk')
-class_names = [
-    'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier',
-    'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
-]
-
-test_pipeline = [
-    dict(
-        type='LoadPointsFromFile',
-        load_dim=5,
-        use_dim=5,
-        file_client_args=file_client_args),
-    dict(
-        type='LoadPointsFromMultiSweeps',
-        sweeps_num=9,
-        use_dim=[0, 1, 2, 3, 4],
-        file_client_args=file_client_args,
-        pad_empty_sweeps=True,
-        remove_close=True),
-    dict(
-        type='MultiScaleFlipAug3D',
-        img_scale=(1333, 800),
-        pts_scale_ratio=[0.95, 1.0, 1.05],
-        flip=True,
-        pcd_horizontal_flip=True,
-        pcd_vertical_flip=True,
-        transforms=[
-            dict(
-                type='GlobalRotScaleTrans',
-                rot_range=[0, 0],
-                scale_ratio_range=[1., 1.],
-                translation_std=[0, 0, 0]),
-            dict(type='RandomFlip3D', sync_2d=False),
-            dict(
-                type='PointsRangeFilter', point_cloud_range=point_cloud_range),
-            dict(
-                type='DefaultFormatBundle3D',
-                class_names=class_names,
-                with_label=False),
-            dict(type='Collect3D', keys=['points'])
-        ])
-]
-
-data = dict(
-    val=dict(pipeline=test_pipeline), test=dict(pipeline=test_pipeline))
-
-```
-
-## Results and models
-
-### CenterPoint
-
-| Backbone | Voxel type (voxel size) | Dcn | Circular nms | Mem (GB) | Inf time (fps) | mAP | NDS | Download |
-| :---------------------------------------------------------------------------------: | :---------------------: | :-: | :----------: | :------: | :------------: | :---: | :---: |
:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py) | voxel (0.1) | ✗ | ✓ | 4.9 | | 56.19 | 64.43 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210815_085857-9ba7f3a5.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210815_085857.log.json) | -| above w/o circle nms | voxel (0.1) | ✗ | ✗ | | | 56.56 | 64.46 | | -| [SECFPN](./centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py) | voxel (0.1) | ✓ | ✓ | 5.2 | | 56.34 | 64.81 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20210814_060754-c9d535d2.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20210814_060754.log.json) | -| above w/o circle nms | voxel (0.1) | ✓ | ✗ | | | 56.60 | 64.90 | | -| [SECFPN](./centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py) | voxel (0.075) | ✗ | ✓ | 7.8 | | 57.34 | 65.23 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210814_113418-76ae0cf0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210814_113418.log.json) | -| above w/o circle nms | voxel (0.075) | ✗ | ✗ | | | 57.63 | 65.39 | | -| [SECFPN](./centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py) | voxel (0.075) | ✓ | ✓ | 8.5 | | 57.27 | 65.58 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20210827_161135-1782af3e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20210827_161135.log.json) | -| above w/o circle nms | voxel (0.075) | ✓ | ✗ | | | 57.43 | 65.63 | | -| above w/ double flip | voxel (0.075) | ✓ | ✗ | | | 59.73 | 67.39 | | -| above w/ scale tta | voxel (0.075) | ✓ | ✗ | | | 60.43 | 67.65 | | -| above w/ circle nms w/o scale tta | voxel (0.075) | ✓ | ✗ | | | 59.52 | 67.24 | | -| [SECFPN](./centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py) | pillar 
(0.2) | ✗ | ✓ | 4.4 | | 49.07 | 59.66 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210816_064624-0f3299c0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210816_064624.log.json) | -| above w/o circle nms | pillar (0.2) | ✗ | ✗ | | | 49.12 | 59.66 | | -| [SECFPN](./centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py) | pillar (0.2) | ✓ | ✗ | 4.6 | | 48.8 | 59.67 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus_20210815_202702-f03ab9e4.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus_20210815_202702.log.json) | -| above w/ circle nms | pillar (0.2) | ✓ | ✓ | | | 48.79 | 59.65 | | - -## Citation - -```latex -@article{yin2021center, - title={Center-based 3D Object Detection and Tracking}, - author={Yin, Tianwei and Zhou, Xingyi and Kr{\"a}henb{\"u}hl, Philipp}, - journal={CVPR}, - year={2021}, -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py deleted file mode 100644 index f17d98ef..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,140 +0,0 @@ -_base_ = ['./centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -voxel_size = [0.075, 0.075, 0.2] -point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -model = dict( - pts_voxel_layer=dict( - voxel_size=voxel_size, point_cloud_range=point_cloud_range), - pts_middle_encoder=dict(sparse_shape=[41, 1440, 1440]), - pts_bbox_head=dict( - bbox_coder=dict( - voxel_size=voxel_size[:2], pc_range=point_cloud_range[:2])), - train_cfg=dict( - pts=dict( - grid_size=[1440, 1440, 40], - voxel_size=voxel_size, - point_cloud_range=point_cloud_range)), - test_cfg=dict( - pts=dict(voxel_size=voxel_size[:2], pc_range=point_cloud_range[:2]))) - -dataset_type = 'NuScenesDataset' -data_root = 'data/nuscenes/' -file_client_args = dict(backend='disk') - -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'nuscenes_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict( - car=5, - truck=5, - bus=5, - trailer=5, - construction_vehicle=5, - traffic_cone=5, - barrier=5, - motorcycle=5, - bicycle=5, - pedestrian=5)), - classes=class_names, - sample_groups=dict( - car=2, - truck=3, - construction_vehicle=7, - bus=4, - trailer=6, - barrier=2, - motorcycle=6, - bicycle=6, - pedestrian=2, - traffic_cone=2), - points_loader=dict( - type='LoadPointsFromFile', - 
coord_type='LIDAR', - load_dim=5, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args)) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - train=dict(dataset=dict(pipeline=train_pipeline)), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index 1541a102..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = ['./centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict(test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py deleted file mode 100644 index e479650a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = ['./centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3))) diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_flip-tta_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_flip-tta_20e_nus.py deleted file mode 100644 index 0090b3cb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_flip-tta_20e_nus.py +++ /dev/null @@ -1,50 +0,0 @@ -_base_ = './centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py' - -point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0] -file_client_args = dict(backend='disk') -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - # Add double-flip augmentation - flip=True, - pcd_horizontal_flip=True, - pcd_vertical_flip=True, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', sync_2d=False), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - val=dict(pipeline=test_pipeline), test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_tta_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_tta_20e_nus.py deleted file mode 100644 index cdbdf060..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_tta_20e_nus.py +++ /dev/null @@ -1,52 +0,0 @@ -_base_ = './centerpoint_0075voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py' - -model = dict(test_cfg=dict(pts=dict(use_rotate_nms=True, max_num=500))) - -point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0] -file_client_args = dict(backend='disk') -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=[0.95, 1.0, 1.05], - # Add double-flip augmentation - flip=True, - pcd_horizontal_flip=True, - pcd_vertical_flip=True, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', sync_2d=False), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - 
dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - val=dict(pipeline=test_pipeline), test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index 1e7d14e2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = ['./centerpoint_0075voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3)), - test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_flip-tta_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_flip-tta_20e_nus.py deleted file mode 100644 index d3956fc1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_flip-tta_20e_nus.py +++ /dev/null @@ -1,51 +0,0 @@ -_base_ = './centerpoint_0075voxel_second_secfpn_dcn_' \ - 'circlenms_4x8_cyclic_20e_nus.py' - -point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0] -file_client_args = dict(backend='disk') -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - # Add double-flip augmentation - flip=True, - pcd_horizontal_flip=True, - pcd_vertical_flip=True, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D', sync_2d=False), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - val=dict(pipeline=test_pipeline), test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py deleted file mode 100644 index eae92849..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,171 +0,0 @@ -_base_ = [ - '../_base_/datasets/nus-3d.py', - '../_base_/models/centerpoint_01voxel_second_secfpn_nus.py', - '../_base_/schedules/cyclic_20e.py', '../_base_/default_runtime.py' -] - -# If point cloud 
range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-51.2, -51.2, -5.0, 51.2, 51.2, 3.0] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -model = dict( - pts_voxel_layer=dict(point_cloud_range=point_cloud_range), - pts_bbox_head=dict(bbox_coder=dict(pc_range=point_cloud_range[:2])), - # model training and testing settings - train_cfg=dict(pts=dict(point_cloud_range=point_cloud_range)), - test_cfg=dict(pts=dict(pc_range=point_cloud_range[:2]))) - -dataset_type = 'NuScenesDataset' -data_root = 'data/nuscenes/' -file_client_args = dict(backend='disk') - -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'nuscenes_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict( - car=5, - truck=5, - bus=5, - trailer=5, - construction_vehicle=5, - traffic_cone=5, - barrier=5, - motorcycle=5, - bicycle=5, - pedestrian=5)), - classes=class_names, - sample_groups=dict( - car=2, - truck=3, - construction_vehicle=7, - bus=4, - trailer=6, - barrier=2, - motorcycle=6, - bicycle=6, - pedestrian=2, - traffic_cone=2), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args)) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. 
client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - train=dict( - type='CBGSDataset', - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - use_valid_flag=True, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='LiDAR')), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -evaluation = dict(interval=20, pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index ae560321..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = ['./centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict(test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py deleted file mode 100644 index 5f31c441..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = ['./centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index cc5488e0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = ['./centerpoint_01voxel_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3)), - test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py deleted file mode 100644 index cd903492..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,170 +0,0 @@ -_base_ = [ - '../_base_/datasets/nus-3d.py', - '../_base_/models/centerpoint_02pillar_second_secfpn_nus.py', - '../_base_/schedules/cyclic_20e.py', '../_base_/default_runtime.py' -] - -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-51.2, -51.2, -5.0, 51.2, 51.2, 3.0] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', - 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' -] - -model = dict( - pts_voxel_layer=dict(point_cloud_range=point_cloud_range), - pts_voxel_encoder=dict(point_cloud_range=point_cloud_range), - pts_bbox_head=dict(bbox_coder=dict(pc_range=point_cloud_range[:2])), - # model training and testing settings - train_cfg=dict(pts=dict(point_cloud_range=point_cloud_range)), - test_cfg=dict(pts=dict(pc_range=point_cloud_range[:2]))) - -dataset_type = 'NuScenesDataset' -data_root = 'data/nuscenes/' -file_client_args = dict(backend='disk') - -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'nuscenes_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict( - car=5, - truck=5, - bus=5, - trailer=5, - construction_vehicle=5, - traffic_cone=5, - barrier=5, - motorcycle=5, - bicycle=5, - pedestrian=5)), - classes=class_names, - sample_groups=dict( - car=2, - truck=3, - construction_vehicle=7, - bus=4, - trailer=6, - barrier=2, - motorcycle=6, - bicycle=6, - pedestrian=2, - traffic_cone=2), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args)) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - 
type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=9, - use_dim=[0, 1, 2, 3, 4], - file_client_args=file_client_args, - pad_empty_sweeps=True, - remove_close=True), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - train=dict( - type='CBGSDataset', - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'nuscenes_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - use_valid_flag=True, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='LiDAR')), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -evaluation = dict(interval=20, pipeline=eval_pipeline) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index 67a1cf6e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = ['./centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict(test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py deleted file mode 100644 index e6948921..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = ['./centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py deleted file mode 100644 index c62488df..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = ['./centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py'] - -model = dict( - pts_bbox_head=dict( - separate_head=dict( - 
type='DCNSeparateHead', - dcn_config=dict( - type='DCN', - in_channels=64, - out_channels=64, - kernel_size=3, - padding=1, - groups=4), - init_bias=-2.19, - final_kernel=3)), - test_cfg=dict(pts=dict(nms_type='circle'))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/metafile.yml deleted file mode 100644 index 1651689e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/centerpoint/metafile.yml +++ /dev/null @@ -1,95 +0,0 @@ -Collections: - - Name: CenterPoint - Metadata: - Training Data: nuScenes - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - Hard Voxelization - Paper: - URL: https://arxiv.org/abs/2006.11275 - Title: 'Center-based 3D Object Detection and Tracking' - README: configs/centerpoint/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/centerpoint.py#L10 - Version: v0.6.0 - -Models: - - Name: centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 4.9 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 56.19 - NDS: 64.43 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20201001_135205-5db91e00.pth - - - Name: centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 5.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 56.34 - NDS: 64.81 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_01voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20201004_075317-26d8176c.pth - - - Name: centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 7.8 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 57.34 - NDS: 65.23 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20200925_230905-358fbe3b.pth - - - Name: centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 8.5 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 57.27 - NDS: 65.58 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus/centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20200930_201619-67c8496f.pth - - - Name: centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: 
configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 4.4 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 49.07 - NDS: 59.66 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus_20201004_170716-a134a233.pth - - - Name: centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus - In Collection: CenterPoint - Config: configs/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus.py - Metadata: - Training Memory (GB): 4.6 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 48.8 - NDS: 59.67 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/centerpoint/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus/centerpoint_02pillar_second_secfpn_dcn_4x8_cyclic_20e_nus_20200930_103722-3bb135f2.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/README.md deleted file mode 100644 index 52554350..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/README.md +++ /dev/null @@ -1,55 +0,0 @@ -# Dynamic Graph CNN for Learning on Point Clouds - -> [Dynamic Graph CNN for Learning on Point Clouds](https://arxiv.org/abs/1801.07829) - - - -## Abstract - -Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. Point clouds inherently lack topological information so designing a model to recover topology can enrich the representation power of point clouds. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv acts on graphs dynamically computed in each layer of the network. It is differentiable and can be plugged into existing architectures. Compared to existing modules operating in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. We show the performance of our model on standard benchmarks including ModelNet40, ShapeNetPart, and S3DIS. - -
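To make the EdgeConv operator described in the abstract concrete, here is a minimal PyTorch sketch; it is an illustrative reimplementation rather than the code behind the configs below, and the channel sizes, `k=20`, and layer choices are assumptions. It builds a k-NN graph in feature space, forms edge features `[x_i, x_j - x_i]`, applies a shared MLP, and max-pools over the neighbors of each point.

```python
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    """Illustrative EdgeConv: k-NN graph + shared MLP on [x_i, x_j - x_i]."""

    def __init__(self, in_channels, out_channels, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.LeakyReLU(0.2))

    def forward(self, x):                     # x: (B, C, N) point features
        B, C, N = x.shape
        xt = x.transpose(1, 2)                # (B, N, C)
        # The graph is recomputed in feature space at every layer.
        idx = torch.cdist(xt, xt).topk(self.k, largest=False).indices  # (B, N, k)
        neighbors = torch.gather(
            xt.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))                 # (B, N, k, C)
        center = xt.unsqueeze(2).expand(B, N, self.k, C)
        edge = torch.cat([center, neighbors - center], dim=-1)         # (B, N, k, 2C)
        edge = edge.permute(0, 3, 1, 2)                                # (B, 2C, N, k)
        # Max over the k neighbors gives the new per-point feature.
        return self.mlp(edge).max(dim=-1).values                       # (B, out, N)
```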
- -
- -## Introduction - -We implement DGCNN and provide the results and checkpoints on S3DIS dataset. - -**Notice**: We follow the implementations in the original DGCNN paper and a PyTorch implementation of DGCNN [code](https://github.com/AnTao97/dgcnn.pytorch). - -## Results and models - -### S3DIS - -| Method | Split | Lr schd | Mem (GB) | Inf time (fps) | mIoU (Val set) | Download | -| :-------------------------------------------------------: | :----: | :---------: | :------: | :------------: | :------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_1 | cosine 100e | 13.1 | | 68.33 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area1/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_000734-39658f14.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area1/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_000734.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_2 | cosine 100e | 13.1 | | 40.68 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area2/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_144648-aea9ecb6.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area2/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210731_144648.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_3 | cosine 100e | 13.1 | | 69.38 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area3/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210801_154629-2ff50ee0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area3/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210801_154629.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_4 | cosine 100e | 13.1 | | 50.07 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area4/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_073551-dffab9cd.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area4/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_073551.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_5 | cosine 100e | 13.1 | | 50.59 | [model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area5/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210730_235824-f277e0c5.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area5/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210730_235824.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | Area_6 | cosine 100e | 13.1 | | 77.94 | 
[model](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area6/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_154317-e3511b32.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area6/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210802_154317.log.json) | -| [DGCNN](./dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py) | 6-fold | | | | 59.43 | | - -**Notes:** - -- We use XYZ+Color+Normalized_XYZ as input in all the experiments on S3DIS datasets. -- `Area_5` Split means training the model on Area_1, 2, 3, 4, 6 and testing on Area_5. -- `6-fold` Split means the overall result of 6 different splits (Area_1, Area_2, Area_3, Area_4, Area_5 and Area_6 Splits). -- Users need to modify `train_area` and `test_area` in the S3DIS dataset's [config](./configs/_base_/datasets/s3dis_seg-3d-13class.py) to set the training and testing areas, respectively. - -## Indeterminism - -Since DGCNN testing adopts sliding patch inference which involves random point sampling, and the test script uses fixed random seeds while the random seeds of validation in training are not fixed, the test results may be slightly different from the results reported above. - -## Citation - -```latex -@article{dgcnn, - title={Dynamic Graph CNN for Learning on Point Clouds}, - author={Wang, Yue and Sun, Yongbin and Liu, Ziwei and Sarma, Sanjay E. and Bronstein, Michael M. and Solomon, Justin M.}, - journal={ACM Transactions on Graphics (TOG)}, - year={2019} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py deleted file mode 100644 index 6f1b5822..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '../_base_/datasets/s3dis_seg-3d-13class.py', '../_base_/models/dgcnn.py', - '../_base_/schedules/seg_cosine_100e.py', '../_base_/default_runtime.py' -] - -# data settings -data = dict(samples_per_gpu=32) -evaluation = dict(interval=2) - -# model settings -model = dict( - backbone=dict(in_channels=9), # [xyz, rgb, normalized_xyz] - decode_head=dict( - num_classes=13, ignore_index=13, - loss_decode=dict(class_weight=None)), # S3DIS doesn't use class_weight - test_cfg=dict( - num_points=4096, - block_size=1.0, - sample_rate=0.5, - use_normalized_coord=True, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=2) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/metafile.yml deleted file mode 100644 index 87ff9156..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dgcnn/metafile.yml +++ /dev/null @@ -1,24 +0,0 @@ -Collections: - - Name: DGCNN - Metadata: - Training Techniques: - - SGD - Training Resources: 4x Titan XP GPUs - Architecture: - - DGCNN - Paper: https://arxiv.org/abs/1801.07829 - README: configs/dgcnn/README.md - -Models: - - Name: dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py - In Collection: DGCNN - Config: configs/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class.py - Metadata: - Training Data: S3DIS - Training Memory (GB): 13.3 - Results: - - Task: 3D Semantic Segmentation - Dataset: S3DIS - Metrics: - mIoU: 50.59 - Weights: 
https://download.openmmlab.com/mmdetection3d/v0.17.0_models/dgcnn/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class/area5/dgcnn_32x4_cosine_100e_s3dis_seg-3d-13class_20210730_235824-f277e0c5.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/README.md deleted file mode 100644 index ab2bbc69..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/README.md +++ /dev/null @@ -1,40 +0,0 @@ -# Dynamic Voxelization - -> [End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds](https://arxiv.org/abs/1910.06528) - - - -## Abstract - -Recent work on 3D object detection advocates point cloud voxelization in birds-eye view, where objects preserve their physical dimensions and are naturally separable. When represented in this view, however, point clouds are sparse and have highly variable point density, which may cause detectors difficulties in detecting distant or small objects (pedestrians, traffic signs, etc.). On the other hand, perspective view provides dense observations, which could allow more favorable feature encoding for such cases. In this paper, we aim to synergize the birds-eye view and the perspective view and propose a novel end-to-end multi-view fusion (MVF) algorithm, which can effectively learn to utilize the complementary information from both. Specifically, we introduce dynamic voxelization, which has four merits compared to existing voxelization methods, i) removing the need of pre-allocating a tensor with fixed size; ii) overcoming the information loss due to stochastic point/voxel dropout; iii) yielding deterministic voxel embeddings and more stable detection outcomes; iv) establishing the bi-directional relationship between points and voxels, which potentially lays a natural foundation for cross-view feature fusion. By employing dynamic voxelization, the proposed feature fusion architecture enables each point to learn to fuse context information from different views. MVF operates on points and can be naturally extended to other approaches using LiDAR point clouds. We evaluate our MVF model extensively on the newly released Waymo Open Dataset and on the KITTI dataset and demonstrate that it significantly improves detection accuracy over the comparable single-view PointPillars baseline. - -
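As a rough illustration of what dynamic voxelization means in practice, the sketch below maps every in-range point to an integer voxel coordinate and mean-pools the points of each occupied voxel, without pre-allocating fixed-size buffers, in the spirit of the `DynamicSimpleVFE` encoder used by the configs below. The function name and its arguments are assumptions for illustration, not the library API.

```python
import torch

def dynamic_voxelize(points, voxel_size, point_cloud_range):
    """Voxelize without capping points per voxel or the number of voxels.

    points:            (N, D) tensor whose first three columns are x, y, z.
    voxel_size:        sequence of three voxel edge lengths.
    point_cloud_range: [x_min, y_min, z_min, x_max, y_max, z_max].
    Returns the integer coordinates of occupied voxels and the mean feature
    of the points falling into each of them.
    """
    voxel_size = points.new_tensor(voxel_size)
    pc_min = points.new_tensor(point_cloud_range[:3])
    pc_max = points.new_tensor(point_cloud_range[3:])

    # Keep only in-range points, then map each one to an integer voxel coord.
    mask = ((points[:, :3] >= pc_min) & (points[:, :3] < pc_max)).all(dim=1)
    pts = points[mask]
    coords = ((pts[:, :3] - pc_min) / voxel_size).floor().long()       # (M, 3)

    # Group points sharing a voxel and average their features (mean pooling).
    uniq_coords, inverse = torch.unique(coords, return_inverse=True, dim=0)
    feats = pts.new_zeros(uniq_coords.size(0), pts.size(1))
    feats.index_add_(0, inverse, pts)
    counts = torch.bincount(inverse, minlength=uniq_coords.size(0)).clamp(min=1)
    return uniq_coords, feats / counts.unsqueeze(1).to(feats.dtype)
```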
- -
- -## Introduction - -We implement Dynamic Voxelization proposed in and provide its results and models on KITTI dataset. - -## Results and models - -### KITTI - -| Model | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :---------------------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECOND](./dv_second_secfpn_6x8_80e_kitti-3d-car.py) | Car | cyclic 80e | 5.5 | | 78.83 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car/dv_second_secfpn_6x8_80e_kitti-3d-car_20200620_235228-ac2c1c0c.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car/dv_second_secfpn_6x8_80e_kitti-3d-car_20200620_235228.log.json) | -| [SECOND](./dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py) | 3 Class | cosine 80e | 5.5 | | 65.27 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class_20210831_054106-e742d163.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class_20210831_054106.log.json) | -| [PointPillars](./dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py) | Car | cyclic 80e | 4.7 | | 77.76 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20200620_230844-ee7b75c9.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20200620_230844.log.json) | - -## Citation - -```latex -@article{zhou2019endtoend, - title={End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds}, - author={Yin Zhou and Pei Sun and Yu Zhang and Dragomir Anguelov and Jiyang Gao and Tom Ouyang and James Guo and Jiquan Ngiam and Vijay Vasudevan}, - year={2019}, - eprint={1910.06528}, - archivePrefix={arXiv}, - primaryClass={cs.CV} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py deleted file mode 100644 index 68baae91..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py +++ /dev/null @@ -1,19 +0,0 @@ -_base_ = '../pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py' - -voxel_size = [0.16, 0.16, 4] -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] - -model = dict( - type='DynamicVoxelNet', - voxel_layer=dict( - max_num_points=-1, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(-1, -1)), - voxel_encoder=dict( - 
type='DynamicPillarFeatureNet', - in_channels=4, - feat_channels=[64], - with_distance=False, - voxel_size=voxel_size, - point_cloud_range=point_cloud_range)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py deleted file mode 100644 index 87fefadd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = [ - '../_base_/models/hv_second_secfpn_kitti.py', - '../_base_/datasets/kitti-3d-3class.py', '../_base_/schedules/cosine.py', - '../_base_/default_runtime.py' -] - -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -voxel_size = [0.05, 0.05, 0.1] - -model = dict( - type='DynamicVoxelNet', - voxel_layer=dict( - _delete_=True, - max_num_points=-1, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(-1, -1)), - voxel_encoder=dict( - _delete_=True, - type='DynamicSimpleVFE', - voxel_size=voxel_size, - point_cloud_range=point_cloud_range)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car.py deleted file mode 100644 index 9da4ffe5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = '../second/hv_second_secfpn_6x8_80e_kitti-3d-car.py' - -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -voxel_size = [0.05, 0.05, 0.1] - -model = dict( - type='DynamicVoxelNet', - voxel_layer=dict( - _delete_=True, - max_num_points=-1, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(-1, -1)), - voxel_encoder=dict( - _delete_=True, - type='DynamicSimpleVFE', - voxel_size=voxel_size, - point_cloud_range=point_cloud_range)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/metafile.yml deleted file mode 100644 index 190c51de..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/dynamic_voxelization/metafile.yml +++ /dev/null @@ -1,53 +0,0 @@ -Collections: - - Name: Dynamic Voxelization - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - Dynamic Voxelization - Paper: - URL: https://arxiv.org/abs/1910.06528 - Title: 'End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds' - README: configs/dynamic_voxelization/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/dynamic_voxelnet.py#L11 - Version: v0.5.0 - -Models: - - Name: dv_second_secfpn_6x8_80e_kitti-3d-car - In Collection: Dynamic Voxelization - Config: configs/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car.py - Metadata: - Training Memory (GB): 5.5 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 78.83 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_second_secfpn_6x8_80e_kitti-3d-car/dv_second_secfpn_6x8_80e_kitti-3d-car_20200620_235228-ac2c1c0c.pth - - - Name: 
dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class - In Collection: Dynamic Voxelization - Config: configs/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class.py - Metadata: - Training Memory (GB): 5.5 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 65.27 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/dynamic_voxelization/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class/dv_second_secfpn_2x8_cosine_80e_kitti-3d-3class_20210831_054106-e742d163.pth - - - Name: dv_pointpillars_secfpn_6x8_160e_kitti-3d-car - In Collection: Dynamic Voxelization - Config: configs/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py - Metadata: - Training Memory (GB): 4.7 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 77.76 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/dynamic_voxelization/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car/dv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20200620_230844-ee7b75c9.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/README.md deleted file mode 100644 index e47a489b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/README.md +++ /dev/null @@ -1,75 +0,0 @@ -# FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection - -> [FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection](https://arxiv.org/abs/2104.10956) - - - -## Abstract - -Monocular 3D object detection is an important task for autonomous driving considering its advantage of low cost. It is much more challenging than conventional 2D cases due to its inherent ill-posed property, which is mainly reflected in the lack of depth information. Recent progress on 2D detection offers opportunities to better solving this problem. However, it is non-trivial to make a general adapted 2D detector work in this 3D task. In this paper, we study this problem with a practice built on a fully convolutional single-stage detector and propose a general framework FCOS3D. Specifically, we first transform the commonly defined 7-DoF 3D targets to the image domain and decouple them as 2D and 3D attributes. Then the objects are distributed to different feature levels with consideration of their 2D scales and assigned only according to the projected 3D-center for the training procedure. Furthermore, the center-ness is redefined with a 2D Gaussian distribution based on the 3D-center to fit the 3D target formulation. All of these make this framework simple yet effective, getting rid of any 2D detection or 2D-3D correspondence priors. Our solution achieves 1st place out of all the vision-only methods in the nuScenes 3D detection challenge of NeurIPS 2020. - -
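The two ingredients highlighted in the abstract, projecting the 3D center into the image and defining center-ness as a 2D Gaussian around it, can be sketched in a few lines. Everything here (function names, the pinhole model without distortion, the `sigma` value, and the example intrinsics) is an assumption for illustration and not FCOS3D's exact formulation.

```python
import numpy as np

def project_center_3d(center_cam, intrinsic):
    """Pinhole projection of a 3D center (camera frame) onto the image plane."""
    x, y, z = center_cam
    u = intrinsic[0, 0] * x / z + intrinsic[0, 2]
    v = intrinsic[1, 1] * y / z + intrinsic[1, 2]
    return np.array([u, v])

def gaussian_centerness(pixel, projected_center, sigma=2.5):
    """Center-ness of a pixel, modeled as a 2D Gaussian around the 3D center."""
    d2 = float(np.sum((np.asarray(pixel, dtype=float) - projected_center) ** 2))
    return float(np.exp(-d2 / (2.0 * sigma ** 2)))

# Example with illustrative intrinsics: a pixel one pixel away from the
# projected 3D center keeps a center-ness close to 1; far pixels decay to 0.
K = np.array([[1266.4, 0.0, 816.3],
              [0.0, 1266.4, 491.5],
              [0.0, 0.0, 1.0]])
c2d = project_center_3d((2.0, 1.0, 20.0), K)
print(gaussian_centerness(c2d + np.array([1.0, 0.0]), c2d))  # ~0.92
```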
- -## Introduction - -FCOS3D is a general anchor-free, one-stage monocular 3D object detector adapted from the original 2D version FCOS. -It serves as a baseline built on top of mmdetection and mmdetection3d for 3D detection based on monocular vision. - -Currently we first support the benchmark on the large-scale nuScenes dataset, which achieved 1st place out of all the vision-only methods in the [nuScenes 3D detecton challenge](https://www.nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Camera) of NeurIPS 2020. - -![demo image](../../resources/browse_dataset_mono.png) - -## Usage - -### Data Preparation - -After supporting FCOS3D and monocular 3D object detection in v0.13.0, the coco-style 2D json info files will include related annotations by default -(see [here](https://github.com/open-mmlab/mmdetection3d/blob/master/tools/data_converter/nuscenes_converter.py#L333) if you would like to change the parameter). -So you can just follow the data preparation steps given in the documentation, then all the needed infos are ready together. - -### Training and Inference - -The way to training and inference a monocular 3D object detector is the same as others in mmdetection and mmdetection3d. You can basically follow the [documentation](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html#train-predefined-models-on-standard-datasets) and change the `config`, `work_dirs`, etc. accordingly. - -### Test time augmentation - -We implement test time augmentation for the dense outputs of detection heads, which is more effective than merging predicted boxes at last. -You can turn on it by setting `flip=True` in the `test_pipeline`. - -### Training with finetune - -Due to the scale and measurements of depth is different from those of other regression targets, we first train the model with depth weight equal to 0.2 for a more stable training procedure. For a stronger detector with better performance, please finetune the model with depth weight changed to 1.0 as shown in the [config](./fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py). Note that the path of `load_from` needs to be changed to yours accordingly. - -### Visualizing prediction results - -We also provide visualization functions to show the monocular 3D detection results. Simply follow the [documentation](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html#test-existing-models-on-standard-datasets) and use the `single-gpu testing` command. You only need to add the `--show` flag and specify `--show-dir` to store the visualization results. 
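As a concrete illustration of the two switches described above, here is a minimal override sketch assembled from the files removed later in this patch (the deleted finetune config and the `MultiScaleFlipAug` entry of the base test pipeline). It is an editorial sketch rather than part of the original repository; the `work_dirs` checkpoint path simply mirrors the deleted finetune config and is otherwise illustrative.

```python
# Sketch only: reproduces the settings of the deleted
# fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py.
_base_ = './fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py'

# Regression code weights for the finetune stage,
# copied verbatim from the deleted finetune config.
model = dict(
    train_cfg=dict(
        code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05]))

# Lower the learning rate and start from the first-stage checkpoint
# (path as used in the deleted config; adjust to your own work_dirs).
optimizer = dict(lr=0.001)
load_from = 'work_dirs/fcos3d_nus/latest.pth'

# For flip-based test-time augmentation, change flip=False to flip=True
# in the MultiScaleFlipAug entry of test_pipeline in the base config.
```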
- -## Results and models - -### NuScenes - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :------------------------------------------------------------------------------------: | :-----: | :------: | :------------: | :--: | :--: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [ResNet101 w/ DCN](./fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py) | 1x | 8.69 | | 29.8 | 37.7 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_20210715_235813-4bed5239.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_20210715_235813.log.json) | -| [above w/ finetune](./fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py) | 1x | 8.69 | | 32.1 | 39.5 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune_20210717_095645-8d806dc2.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune_20210717_095645.log.json) | -| above w/ tta | 1x | 8.69 | | 33.1 | 40.3 | | - -## Citation - -```latex -@inproceedings{wang2021fcos3d, - title={{FCOS3D: Fully} Convolutional One-Stage Monocular 3D Object Detection}, - author={Wang, Tai and Zhu, Xinge and Pang, Jiangmiao and Lin, Dahua}, - booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops}, - year={2021} -} -# For the original 2D version -@inproceedings{tian2019fcos, - title = {{FCOS: Fully} Convolutional One-Stage Object Detection}, - author = {Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong}, - booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, - year = {2019} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py deleted file mode 100644 index 3b7eb99f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py +++ /dev/null @@ -1,75 +0,0 @@ -_base_ = [ - '../_base_/datasets/nus-mono3d.py', '../_base_/models/fcos3d.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - backbone=dict( - dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, False, True, True))) - -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - 
dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=True, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='Resize', img_scale=(1600, 900), keep_ratio=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'attr_labels', 'gt_bboxes_3d', - 'gt_labels_3d', 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - scale_factor=1.0, - flip=False, - transforms=[ - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict( - lr=0.002, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=1.0 / 3, - step=[8, 11]) -total_epochs = 12 -evaluation = dict(interval=2) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py deleted file mode 100644 index ade5b4ec..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = './fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py' -# model settings -model = dict( - train_cfg=dict( - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05])) -# optimizer -optimizer = dict(lr=0.001) -load_from = 'work_dirs/fcos3d_nus/latest.pth' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/metafile.yml deleted file mode 100644 index 11de4911..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/fcos3d/metafile.yml +++ /dev/null @@ -1,43 +0,0 @@ -Collections: - - Name: FCOS3D - Metadata: - Training Data: NuScenes - Training Techniques: - - SGD - Training Resources: 8x GeForce RTX 2080 Ti - Architecture: - - FCOSMono3DHead - Paper: - URL: https://arxiv.org/abs/2104.10956 - Title: 'FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection' - README: configs/fcos3d/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/fcos_mono3d.py#L7 - Version: v0.13.0 - -Models: - - Name: fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d - In Collection: FCOS3D - Config: configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py - Metadata: - Training Memory (GB): 8.7 - Results: - - Task: 3D Object Detection - Dataset: NuScenes - Metrics: - mAP: 29.9 - NDS: 37.3 - Weights: 
https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_20210425_181341-8d5a21fe.pth - - - Name: fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune - In Collection: FCOS3D - Config: configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py - Metadata: - Training Memory (GB): 8.7 - Results: - - Task: 3D Object Detection - Dataset: NuScenes - Metrics: - mAP: 32.1 - NDS: 39.3 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune_20210427_091419-35aaaad0.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/README.md deleted file mode 100644 index 727a7006..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/README.md +++ /dev/null @@ -1,105 +0,0 @@ -# FreeAnchor for 3D Object Detection - -> [FreeAnchor: Learning to Match Anchors for Visual Object Detection](https://arxiv.org/abs/1909.02466) - - - -## Abstract - -Modern CNN-based object detectors assign anchors for ground-truth objects under the restriction of object-anchor Intersection-over-Unit (IoU). In this study, we propose a learning-to-match approach to break IoU restriction, allowing objects to match anchors in a flexible manner. Our approach, referred to as FreeAnchor, updates hand-crafted anchor assignment to “free" anchor matching by formulating detector training as a maximum likelihood estimation (MLE) procedure. FreeAnchor targets at learning features which best explain a class of objects in terms of both classification and localization. FreeAnchor is implemented by optimizing detection customized likelihood and can be fused with CNN-based detectors in a plug-and-play manner. Experiments on COCO demonstrate that FreeAnchor consistently outperforms the counterparts with significant margins. - -
- -## Introduction - -We implement FreeAnchor in 3D detection systems and provide their first results with PointPillars on nuScenes dataset. -With the implemented `FreeAnchor3DHead`, a PointPillar detector with a big backbone (e.g., RegNet-3.2GF) achieves top performance -on the nuScenes benchmark. - -## Usage - -### Modify config - -As in the [baseline config](hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py), we only need to replace the head of an existing one-stage detector to use FreeAnchor head. -Since the config is inherit from a common detector head, `_delete_=True` is necessary to avoid conflicts. -The hyperparameters are specifically tuned according to the original paper. - -```python -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_lyft.py', - '../_base_/datasets/nus-3d.py', '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py' -] - -model = dict( - pts_bbox_head=dict( - _delete_=True, - type='FreeAnchor3DHead', - num_classes=10, - in_channels=256, - feat_channels=256, - use_direction_classifier=True, - pre_anchor_topk=25, - bbox_thr=0.5, - gamma=2.0, - alpha=0.5, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-50, -50, -1.8, 50, 50, -1.8]], - scales=[1, 2, 4], - sizes=[ - [2.5981, 0.8660, 1.], # 1.5 / sqrt(3) - [1.7321, 0.5774, 1.], # 1 / sqrt(3) - [1., 1., 1.], - [0.4, 0.4, 1], - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=True), - assigner_per_size=False, - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi / 4 - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=9), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=0.8), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg = dict( - pts=dict(code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.25, 0.25]))) -``` - -## Results and models - -### PointPillars - -| Backbone | FreeAnchor | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :-------------------------------------------------------------------------------------------------------: | :--------: | :-----: | :------: | :------------: | :---: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [FPN](../pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py) | ✗ | 2x | 17.1 | | 40.0 | 53.3 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20200620_230405-2fa62f3d.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20200620_230405.log.json) | -| [FPN](./hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py) | ✓ | 2x | 16.3 | | 43.82 | 54.86 | 
[model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210816_163441-ae0897e7.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210816_163441.log.json) | -| [RegNetX-400MF-FPN](../regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py) | ✗ | 2x | 17.3 | | 44.8 | 56.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d_20200620_230239-c694dce7.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d_20200620_230239.log.json) | -| [RegNetX-400MF-FPN](./hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py) | ✓ | 2x | 17.6 | | 48.3 | 58.65 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_213939-a2dd3fff.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_213939.log.json) | -| [RegNetX-1.6GF-FPN](./hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py) | ✓ | 2x | 24.3 | | 52.04 | 61.49 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210828_025608-bfbd506e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210828_025608.log.json) | -| [RegNetX-1.6GF-FPN](./hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py)\* | ✓ | 3x | 24.4 | | 52.69 | 62.45 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210827_184909-14d2dbd1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210827_184909.log.json) | -| [RegNetX-3.2GF-FPN](./hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py) | ✓ | 2x | 29.4 | | 52.4 | 61.94 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_181237-e385c35a.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_181237.log.json) | -| 
[RegNetX-3.2GF-FPN](./hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py)\* | ✓ | 3x | 29.2 | | 54.23 | 63.41 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210828_030816-06708918.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210828_030816.log.json) | - -**Note**: Models noted by `*` means it is trained using stronger augmentation with vertical flip under bird-eye-view, global translation, and larger range of global rotation. - -## Citation - -```latex -@inproceedings{zhang2019freeanchor, - title = {{FreeAnchor}: Learning to Match Anchors for Visual Object Detection}, - author = {Zhang, Xiaosong and Wan, Fang and Liu, Chang and Ji, Rongrong and Ye, Qixiang}, - booktitle = {Neural Information Processing Systems}, - year = {2019} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py deleted file mode 100644 index 7412b930..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py +++ /dev/null @@ -1,47 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py' -] - -model = dict( - pts_bbox_head=dict( - _delete_=True, - type='FreeAnchor3DHead', - num_classes=10, - in_channels=256, - feat_channels=256, - use_direction_classifier=True, - pre_anchor_topk=25, - bbox_thr=0.5, - gamma=2.0, - alpha=0.5, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-50, -50, -1.8, 50, 50, -1.8]], - scales=[1, 2, 4], - sizes=[ - [2.5981, 0.8660, 1.], # 1.5 / sqrt(3) - [1.7321, 0.5774, 1.], # 1 / sqrt(3) - [1., 1., 1.], - [0.4, 0.4, 1], - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=True), - assigner_per_size=False, - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi / 4 - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=9), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=0.8), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - pts=dict(code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.25, 0.25]))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py deleted file mode 100644 index ef740a8a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py' - -model = dict( - pts_backbone=dict( - _delete_=True, - 
type='NoStemRegNet', - arch='regnetx_1.6gf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_1.6gf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[168, 408, 912])) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py deleted file mode 100644 index d4e48d36..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py +++ /dev/null @@ -1,70 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py' - -model = dict( - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_1.6gf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_1.6gf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[168, 408, 912])) - -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-50, -50, -5, 50, 50, 3] -# For nuScenes we usually do 10-class detection -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. 
-# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/nuscenes/': 's3://nuscenes/nuscenes/', -# 'data/nuscenes/': 's3://nuscenes/nuscenes/' -# })) -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.7854, 0.7854], - scale_ratio_range=[0.95, 1.05], - translation_std=[0.2, 0.2, 0.2]), - dict( - type='RandomFlip3D', - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -data = dict(train=dict(pipeline=train_pipeline)) - -lr_config = dict(step=[28, 34]) -runner = dict(max_epochs=36) -evaluation = dict(interval=36) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py deleted file mode 100644 index 13bc0d68..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py' - -model = dict( - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_3.2gf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_3.2gf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[192, 432, 1008])) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py deleted file mode 100644 index 6fbce89b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py +++ /dev/null @@ -1,70 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py' - -model = dict( - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_3.2gf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_3.2gf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[192, 432, 1008])) - -# If point cloud range is changed, the models should also change their point -# cloud range accordingly -point_cloud_range = [-50, -50, -5, 50, 50, 3] -# For nuScenes we usually do 10-class detection -class_names = [ - 
'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -file_client_args = dict(backend='disk') -# Uncomment the following if use ceph or other file clients. -# See https://mmcv.readthedocs.io/en/latest/api.html#mmcv.fileio.FileClient -# for more details. -# file_client_args = dict( -# backend='petrel', -# path_mapping=dict({ -# './data/nuscenes/': 's3://nuscenes/nuscenes/', -# 'data/nuscenes/': 's3://nuscenes/nuscenes/' -# })) -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=file_client_args), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=file_client_args), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.7854, 0.7854], - scale_ratio_range=[0.9, 1.1], - translation_std=[0.2, 0.2, 0.2]), - dict( - type='RandomFlip3D', - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] - -data = dict(train=dict(pipeline=train_pipeline)) -lr_config = dict(step=[28, 34]) -runner = dict(max_epochs=36) -evaluation = dict(interval=36) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py deleted file mode 100644 index 2b5f254b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py' - -model = dict( - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_400mf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/metafile.yml deleted file mode 100644 index 73b55f5f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/free_anchor/metafile.yml +++ /dev/null @@ -1,96 +0,0 @@ -Collections: - - Name: FreeAnchor - Metadata: - Training Data: nuScenes - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - Hard Voxelization - - Free Anchor - Paper: - URL: https://arxiv.org/abs/1909.02466 - Title: 'FreeAnchor: Learning to Match Anchors for Visual Object Detection' - README: configs/free_anchor/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/dense_heads/free_anchor3d_head.py#L13 - Version: v0.5.0 - -Models: - - Name: hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d - In Collection: FreeAnchor - Config: 
free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py - Metadata: - Training Memory (GB): 16.3 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 43.82 - NDS: 54.86 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210816_163441-ae0897e7.pth - - - Name: hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d - In Collection: FreeAnchor - Config: configs/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py - Metadata: - Training Memory (GB): 17.6 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 48.3 - NDS: 58.65 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_213939-a2dd3fff.pth - - - Name: hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d - In Collection: FreeAnchor - Config: configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py - Metadata: - Training Memory (GB): 24.3 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 52.04 - NDS: 61.49 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210828_025608-bfbd506e.pth - - - Name: hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d - In Collection: FreeAnchor - Config: configs/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py - Metadata: - Training Memory (GB): 24.4 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 52.69 - NDS: 62.45 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210827_184909-14d2dbd1.pth - - - Name: hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d - In Collection: FreeAnchor - Config: configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py - Metadata: - Training Memory (GB): 29.4 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 52.4 - NDS: 61.94 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_4x8_2x_nus-3d_20210827_181237-e385c35a.pth - - - Name: hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d - In Collection: FreeAnchor - Config: configs/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d.py - Metadata: - Training Memory (GB): 29.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 54.23 - NDS: 63.41 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/free_anchor/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d/hv_pointpillars_regnet-3.2gf_fpn_sbn-all_free-anchor_strong-aug_4x8_3x_nus-3d_20210828_030816-06708918.pth diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/README.md deleted file mode 100644 index 5b055e7e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/README.md +++ /dev/null @@ -1,44 +0,0 @@ -# Group-Free 3D Object Detection via Transformers - -> [Group-Free 3D Object Detection via Transformers](https://arxiv.org/abs/2104.00678) - - - -## Abstract - -Recently, directly detecting 3D objects from 3D point clouds has received increasing attention. To extract object representation from an irregular point cloud, existing methods usually take a point grouping step to assign the points to an object candidate so that a PointNet-like network could be used to derive object features from the grouped points. However, the inaccurate point assignments caused by the hand-crafted grouping scheme decrease the performance of 3D object detection. In this paper, we present a simple yet effective method for directly detecting 3D objects from the 3D point cloud. Instead of grouping local points to each object candidate, our method computes the feature of an object from all the points in the point cloud with the help of an attention mechanism in the Transformers, where the contribution of each point is automatically learned in the network training. With an improved attention stacking scheme, our method fuses object features in different stages and generates more accurate object detection results. With few bells and whistles, the proposed method achieves state-of-the-art 3D object detection performance on two widely used benchmarks, ScanNet V2 and SUN RGB-D. - -
- -## Introduction - -We implement Group-Free-3D and provide the result and checkpoints on ScanNet datasets. - -## Results and models - -### ScanNet - -| Method | Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :---------------------------------------------------------------: | :-----------: | :-----: | :------: | :------------: | :-------------: | :-------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [L6, O256](./groupfree3d_8x4_scannet-3d-18class-L6-O256.py) | PointNet++ | 3x | 6.7 | | 66.32 (65.67\*) | 47.82 (47.74\*) | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256/groupfree3d_8x4_scannet-3d-18class-L6-O256_20210702_145347-3499eb55.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256/groupfree3d_8x4_scannet-3d-18class-L6-O256_20210702_145347.log.json) | -| [L12, O256](./groupfree3d_8x4_scannet-3d-18class-L12-O256.py) | PointNet++ | 3x | 9.4 | | 66.57 (66.22\*) | 48.21 (48.95\*) | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256/groupfree3d_8x4_scannet-3d-18class-L12-O256_20210702_150907-1c5551ad.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256/groupfree3d_8x4_scannet-3d-18class-L12-O256_20210702_150907.log.json) | -| [L12, O256](./groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py) | PointNet++w2x | 3x | 13.3 | | 68.20 (67.30\*) | 51.02 (50.44\*) | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256_20210702_200301-944f0ac0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256_20210702_200301.log.json) | -| [L12, O512](./groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py) | PointNet++w2x | 3x | 18.8 | | 68.22 (68.20\*) | 52.61 (51.31\*) | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512_20210702_220204-187b71c7.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512_20210702_220204.log.json) | - -**Notes:** - -- We report the best results (AP@0.50) on validation set during each training. * means the evaluation method in the paper: we train each setting 5 times and test each training trial 5 times, then the average performance of these 25 trials is reported to account for algorithm randomness. -- We use 4 GPUs for training by default as the original code. 
- -## Citation - -```latex -@article{liu2021, - title={Group-Free 3D Object Detection via Transformers}, - author={Liu, Ze and Zhang, Zheng and Cao, Yue and Hu, Han and Tong, Xin}, - journal={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, - year={2021} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256.py deleted file mode 100644 index 987bcec6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256.py +++ /dev/null @@ -1,199 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', - '../_base_/models/groupfree3d.py', '../_base_/schedules/schedule_3x.py', - '../_base_/default_runtime.py' -] - -# model settings -model = dict( - bbox_head=dict( - num_classes=18, - num_decoder_layers=12, - size_cls_agnostic=False, - bbox_coder=dict( - type='GroupFree3DBBoxCoder', - num_sizes=18, - num_dir_bins=1, - with_rot=False, - size_cls_agnostic=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]]), - sampling_objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=8.0), - objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', beta=0.04, reduction='sum', loss_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=10.0 / 9.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - test_cfg=dict( - sample_mod='kps', - nms_thr=0.25, - score_thr=0.0, - per_class_proposal=True, - prediction_stages='last_three')) - -# dataset settings -dataset_type = 'ScanNetDataset' -data_root = './data/scannet/' -class_names = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - with_mask_3d=True, - with_seg_3d=True), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='PointSegClassMapping', - valid_cat_ids=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39)), - dict(type='PointSample', num_points=50000), - dict( - type='RandomFlip3D', - 
sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0]), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'points', 'gt_bboxes_3d', 'gt_labels_3d', 'pts_semantic_mask', - 'pts_instance_mask' - ]) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=50000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -# optimizer -lr = 0.006 -optimizer = dict( - lr=lr, - weight_decay=0.0005, - paramwise_cfg=dict( - custom_keys={ - 'bbox_head.decoder_layers': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_self_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_cross_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_query_proj': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_key_proj': dict(lr_mult=0.1, decay_mult=1.0) - })) - -optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[56, 68]) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -checkpoint_config = dict(interval=1, max_keep_ckpts=10) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256.py deleted file mode 100644 index 62821293..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256.py +++ /dev/null @@ -1,198 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', - '../_base_/models/groupfree3d.py', '../_base_/schedules/schedule_3x.py', - '../_base_/default_runtime.py' -] - -# model settings -model = dict( - bbox_head=dict( - num_classes=18, - size_cls_agnostic=False, - bbox_coder=dict( - type='GroupFree3DBBoxCoder', - num_sizes=18, - num_dir_bins=1, - with_rot=False, - size_cls_agnostic=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 
1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]]), - sampling_objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=8.0), - objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', beta=0.04, reduction='sum', loss_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=10.0 / 9.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - test_cfg=dict( - sample_mod='kps', - nms_thr=0.25, - score_thr=0.0, - per_class_proposal=True, - prediction_stages='last_three')) - -# dataset settings -dataset_type = 'ScanNetDataset' -data_root = './data/scannet/' -class_names = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - with_mask_3d=True, - with_seg_3d=True), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='PointSegClassMapping', - valid_cat_ids=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39)), - dict(type='PointSample', num_points=50000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0]), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'points', 'gt_bboxes_3d', 'gt_labels_3d', 'pts_semantic_mask', - 'pts_instance_mask' - ]) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=50000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - 
data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. - box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -# optimizer -lr = 0.006 -optimizer = dict( - lr=lr, - weight_decay=0.0005, - paramwise_cfg=dict( - custom_keys={ - 'bbox_head.decoder_layers': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_self_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_cross_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_query_proj': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_key_proj': dict(lr_mult=0.1, decay_mult=1.0) - })) - -optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[56, 68]) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -checkpoint_config = dict(interval=1, max_keep_ckpts=10) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py deleted file mode 100644 index 8551b740..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py +++ /dev/null @@ -1,214 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', - '../_base_/models/groupfree3d.py', '../_base_/schedules/schedule_3x.py', - '../_base_/default_runtime.py' -] - -# model settings -model = dict( - backbone=dict( - type='PointNet2SASSG', - in_channels=3, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((128, 128, 256), (256, 256, 512), (256, 256, 512), - (256, 256, 512)), - fp_channels=((512, 512), (512, 288)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True)), - bbox_head=dict( - num_classes=18, - num_decoder_layers=12, - size_cls_agnostic=False, - bbox_coder=dict( - type='GroupFree3DBBoxCoder', - num_sizes=18, - num_dir_bins=1, - with_rot=False, - size_cls_agnostic=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]]), - sampling_objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=8.0), - 
objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', beta=0.04, reduction='sum', loss_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=10.0 / 9.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - test_cfg=dict( - sample_mod='kps', - nms_thr=0.25, - score_thr=0.0, - per_class_proposal=True, - prediction_stages='last_three')) - -# dataset settings -dataset_type = 'ScanNetDataset' -data_root = './data/scannet/' -class_names = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - with_mask_3d=True, - with_seg_3d=True), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='PointSegClassMapping', - valid_cat_ids=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39)), - dict(type='PointSample', num_points=50000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0]), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'points', 'gt_bboxes_3d', 'gt_labels_3d', 'pts_semantic_mask', - 'pts_instance_mask' - ]) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=50000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. 
- box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -# optimizer -lr = 0.006 -optimizer = dict( - lr=lr, - weight_decay=0.0005, - paramwise_cfg=dict( - custom_keys={ - 'bbox_head.decoder_layers': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_self_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_cross_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_query_proj': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_key_proj': dict(lr_mult=0.1, decay_mult=1.0) - })) - -optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[56, 68]) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -checkpoint_config = dict(interval=1, max_keep_ckpts=10) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py deleted file mode 100644 index 199e08bf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py +++ /dev/null @@ -1,215 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', - '../_base_/models/groupfree3d.py', '../_base_/schedules/schedule_3x.py', - '../_base_/default_runtime.py' -] - -# model settings -model = dict( - backbone=dict( - type='PointNet2SASSG', - in_channels=3, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((128, 128, 256), (256, 256, 512), (256, 256, 512), - (256, 256, 512)), - fp_channels=((512, 512), (512, 288)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True)), - bbox_head=dict( - num_classes=18, - num_decoder_layers=12, - num_proposal=512, - size_cls_agnostic=False, - bbox_coder=dict( - type='GroupFree3DBBoxCoder', - num_sizes=18, - num_dir_bins=1, - with_rot=False, - size_cls_agnostic=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]]), - sampling_objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=8.0), - objectness_loss=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - center_loss=dict( - type='SmoothL1Loss', beta=0.04, reduction='sum', loss_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', 
loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=10.0 / 9.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - test_cfg=dict( - sample_mod='kps', - nms_thr=0.25, - score_thr=0.0, - per_class_proposal=True, - prediction_stages='last_three')) - -# dataset settings -dataset_type = 'ScanNetDataset' -data_root = './data/scannet/' -class_names = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - with_mask_3d=True, - with_seg_3d=True), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='PointSegClassMapping', - valid_cat_ids=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39)), - dict(type='PointSample', num_points=50000), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.087266, 0.087266], - scale_ratio_range=[1.0, 1.0]), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'points', 'gt_bboxes_3d', 'gt_labels_3d', 'pts_semantic_mask', - 'pts_instance_mask' - ]) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointSample', num_points=50000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - filter_empty_gt=False, - classes=class_names, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in sunrgbd and scannet dataset. 
- box_type_3d='Depth')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - box_type_3d='Depth')) - -# optimizer -lr = 0.006 -optimizer = dict( - lr=lr, - weight_decay=0.0005, - paramwise_cfg=dict( - custom_keys={ - 'bbox_head.decoder_layers': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_self_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_cross_posembeds': dict( - lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_query_proj': dict(lr_mult=0.1, decay_mult=1.0), - 'bbox_head.decoder_key_proj': dict(lr_mult=0.1, decay_mult=1.0) - })) - -optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2)) -lr_config = dict(policy='step', warmup=None, step=[56, 68]) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -checkpoint_config = dict(interval=1, max_keep_ckpts=10) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/metafile.yml deleted file mode 100644 index ff0b63cc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/groupfree3d/metafile.yml +++ /dev/null @@ -1,72 +0,0 @@ -Collections: - - Name: Group-Free-3D - Metadata: - Training Techniques: - - AdamW - Training Resources: 4x V100 GPUs - Architecture: - - PointNet++ - Paper: - URL: https://arxiv.org/abs/2104.00678 - Title: 'Group-Free 3D Object Detection via Transformers' - README: configs/groupfree3d/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/groupfree3dnet.py#L10 - Version: v0.15.0 - -Models: - - Name: groupfree3d_8x4_scannet-3d-18class-L6-O256.py - In Collection: Group-Free-3D - Config: configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 6.7 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 66.32 - AP@0.5: 47.82 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L6-O256/groupfree3d_8x4_scannet-3d-18class-L6-O256_20210702_145347-3499eb55.pth - - - Name: groupfree3d_8x4_scannet-3d-18class-L12-O256.py - In Collection: Group-Free-3D - Config: configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 9.4 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 66.57 - AP@0.5: 48.21 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-L12-O256/groupfree3d_8x4_scannet-3d-18class-L12-O256_20210702_150907-1c5551ad.pth - - - Name: groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py - In Collection: Group-Free-3D - Config: configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 13.3 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 68.20 - AP@0.5: 51.02 - Weights: 
https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O256_20210702_200301-944f0ac0.pth - - - Name: groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py - In Collection: Group-Free-3D - Config: configs/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 18.8 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 68.22 - AP@0.5: 52.61 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/groupfree3d/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512/groupfree3d_8x4_scannet-3d-18class-w2x-L12-O512_20210702_220204-187b71c7.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/README.md deleted file mode 100644 index 60cc30f3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/README.md +++ /dev/null @@ -1,44 +0,0 @@ -# H3DNet: 3D Object Detection Using Hybrid Geometric Primitives - -> [H3DNet: 3D Object Detection Using Hybrid Geometric Primitives](https://arxiv.org/abs/2006.05682) - - - -## Abstract - -We introduce H3DNet, which takes a colorless 3D point cloud as input and outputs a collection of oriented object bounding boxes (or BB) and their semantic labels. The critical idea of H3DNet is to predict a hybrid set of geometric primitives, i.e., BB centers, BB face centers, and BB edge centers. We show how to convert the predicted geometric primitives into object proposals by defining a distance function between an object and the geometric primitives. This distance function enables continuous optimization of object proposals, and its local minimums provide high-fidelity object proposals. H3DNet then utilizes a matching and refinement module to classify object proposals into detected objects and fine-tune the geometric parameters of the detected objects. The hybrid set of geometric primitives not only provides more accurate signals for object detection than using a single type of geometric primitives, but it also provides an overcomplete set of constraints on the resulting 3D layout. Therefore, H3DNet can tolerate outliers in predicted geometric primitives. Our model achieves state-of-the-art 3D detection results on two large datasets with real 3D scans, ScanNet and SUN RGB-D. - -
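The H3DNet abstract above is built around a hybrid set of geometric primitives: box centers, face centers and edge centers. As a rough illustration of that geometry only (this is not the H3DNet code, and the box values are made up), a small numpy sketch deriving the 6 face centers and 12 edge centers of an axis-aligned box from its center and size:

```python
# Illustrative only: this is not the H3DNet code. It derives the hybrid
# geometric primitives named in the abstract (box center, 6 face centers,
# 12 edge centers) for an axis-aligned 3D box given its center and size.
import itertools
import numpy as np

def box_primitives(center, size):
    """center: (3,) xyz; size: (3,) full extents along x, y, z."""
    c = np.asarray(center, dtype=float)
    half = np.asarray(size, dtype=float) / 2.0

    # Face centers: shift the box center by +/- half extent along one axis.
    faces = np.stack([c + sign * half[axis] * np.eye(3)[axis]
                      for axis in range(3) for sign in (1.0, -1.0)])

    # Edge centers: shift along two axes, keep the third coordinate at the center.
    edges = []
    for ax1, ax2 in itertools.combinations(range(3), 2):
        for s1, s2 in itertools.product((1.0, -1.0), repeat=2):
            offset = np.zeros(3)
            offset[ax1] = s1 * half[ax1]
            offset[ax2] = s2 * half[ax2]
            edges.append(c + offset)

    return faces, np.stack(edges)

# Toy box roughly the size of the 'car' mean size used in the configs.
faces, edges = box_primitives(center=[1.0, 2.0, 0.5], size=[3.9, 1.6, 1.56])
print(faces.shape, edges.shape)  # (6, 3) (12, 3)
```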
- -## Introduction - -We implement H3DNet and provide the result and checkpoints on ScanNet datasets. - -## Results and models - -### ScanNet - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :-------------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [MultiBackbone](./h3dnet_3x8_scannet-3d-18class.py) | 3x | 7.9 | | 66.07 | 47.68 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/h3dnet/h3dnet_scannet-3d-18class/h3dnet_3x8_scannet-3d-18class_20210824_003149-414bd304.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/h3dnet/h3dnet_scannet-3d-18class/h3dnet_3x8_scannet-3d-18class_20210824_003149.log.json) | - -**Notice**: If your current mmdetection3d version >= 0.6.0, and you are using the checkpoints downloaded from the above links or using checkpoints trained with mmdetection3d version \< 0.6.0, the checkpoints have to be first converted via [tools/model_converters/convert_h3dnet_checkpoints.py](../../tools/model_converters/convert_h3dnet_checkpoints.py): - -``` -python ./tools/model_converters/convert_h3dnet_checkpoints.py ${ORIGINAL_CHECKPOINT_PATH} --out=${NEW_CHECKPOINT_PATH} -``` - -Then you can use the converted checkpoints following [getting_started.md](../../docs/en/getting_started.md). - -## Citation - -```latex -@inproceedings{zhang2020h3dnet, - author = {Zhang, Zaiwei and Sun, Bo and Yang, Haitao and Huang, Qixing}, - title = {H3DNet: 3D Object Detection Using Hybrid Geometric Primitives}, - booktitle = {Proceedings of the European Conference on Computer Vision}, - year = {2020} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/h3dnet_3x8_scannet-3d-18class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/h3dnet_3x8_scannet-3d-18class.py deleted file mode 100644 index e6534a4b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/h3dnet_3x8_scannet-3d-18class.py +++ /dev/null @@ -1,64 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', '../_base_/models/h3dnet.py', - '../_base_/schedules/schedule_3x.py', '../_base_/default_runtime.py' -] - -# model settings -model = dict( - rpn_head=dict( - num_classes=18, - bbox_coder=dict( - type='PartialBinBasedBBoxCoder', - num_sizes=18, - num_dir_bins=24, - with_rot=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]])), - roi_head=dict( - bbox_head=dict( - num_classes=18, - bbox_coder=dict( - type='PartialBinBasedBBoxCoder', - num_sizes=18, - num_dir_bins=24, - 
with_rot=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]])))) - -data = dict(samples_per_gpu=3, workers_per_gpu=2) - -# yapf:disable -log_config = dict(interval=30) -# yapf:enable diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/metafile.yml deleted file mode 100644 index 6d731d6d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/h3dnet/metafile.yml +++ /dev/null @@ -1,29 +0,0 @@ -Collections: - - Name: H3DNet - Metadata: - Training Data: ScanNet - Training Techniques: - - AdamW - Training Resources: 8x GeForce GTX 1080 Ti - Architecture: - Paper: - URL: https://arxiv.org/abs/2006.05682 - Title: 'H3DNet: 3D Object Detection Using Hybrid Geometric Primitives' - README: configs/h3dnet/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/h3dnet.py#L10 - Version: v0.6.0 - -Models: - - Name: h3dnet_3x8_scannet-3d-18class - In Collection: H3DNet - Config: configs/h3dnet/h3dnet_3x8_scannet-3d-18class.py - Metadata: - Training Memory (GB): 7.9 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 66.07 - AP@0.5: 47.68 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/h3dnet/h3dnet_scannet-3d-18class/h3dnet_3x8_scannet-3d-18class_20210824_003149-414bd304.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/README.md deleted file mode 100644 index a491b9d8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/README.md +++ /dev/null @@ -1,43 +0,0 @@ -# ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes - -> [ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes](https://arxiv.org/abs/2001.10692) - - - -## Abstract - -3D object detection has seen quick progress thanks to advances in deep learning on point clouds. A few recent works have even shown state-of-the-art performance with just point clouds input (e.g. VOTENET). However, point cloud data have inherent limitations. They are sparse, lack color information and often suffer from sensor noise. Images, on the other hand, have high resolution and rich texture. Thus they can complement the 3D geometry provided by point clouds. Yet how to effectively use image information to assist point cloud based detection is still an open question. In this work, we build on top of VOTENET and propose a 3D detection architecture called IMVOTENET specialized for RGB-D scenes. IMVOTENET is based on fusing 2D votes in images and 3D votes in point clouds. Compared to prior work on multi-modal detection, we explicitly extract both geometric and semantic features from the 2D images. 
We leverage camera parameters to lift these features to 3D. To improve the synergy of 2D-3D feature fusion, we also propose a multi-tower training scheme. We validate our model on the challenging SUN RGB-D dataset, advancing state-of-the-art results by 5.7 mAP. We also provide rich ablation studies to analyze the contribution of each design choice. - -
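The step the ImVoteNet abstract refers to as leveraging camera parameters to lift features to 3D is, at its core, pinhole back-projection. A minimal sketch of that operation, assuming a standard 3x3 intrinsic matrix; the numeric values below are placeholders, not an actual SUN RGB-D calibration:

```python
# Illustrative only, not ImVoteNet's code: lift a 2D image location to a
# 3D point in camera coordinates with a pinhole camera model.
import numpy as np

def backproject(uv, depth, K):
    """uv: (u, v) pixel; depth: metric depth along the optical axis;
    K: 3x3 intrinsic matrix. Returns the 3D point in camera coordinates."""
    u, v = uv
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) / fx * depth,
                     (v - cy) / fy * depth,
                     depth])

K = np.array([[530.0,   0.0, 365.0],   # placeholder intrinsics
              [  0.0, 530.0, 265.0],
              [  0.0,   0.0,   1.0]])
print(backproject(uv=(400.0, 300.0), depth=2.5, K=K))
```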
- -## Introduction - -We implement ImVoteNet and provide the result and checkpoints on SUNRGBD. - -## Results and models - -### SUNRGBD-2D (Stage 1, image branch pre-train) - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :---------------------------------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++](./imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py) | | 2.1 | | | 62.70 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class_20210819_225618-62eba6ce.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class_20210819_225618.json) | - -### SUNRGBD-3D (Stage 2) - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :---------------------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++](./imvotenet_stage2_16x8_sunrgbd-3d-10class.py) | 3x | 9.4 | | 64.55 | | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class/imvotenet_stage2_16x8_sunrgbd-3d-10class_20210819_192851-1bcd1b97.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class/imvotenet_stage2_16x8_sunrgbd-3d-10class_20210819_192851.log.json) | - -## Citation - -```latex -@inproceedings{qi2020imvotenet, - title={Imvotenet: Boosting 3D object detection in point clouds with image votes}, - author={Qi, Charles R and Chen, Xinlei and Litany, Or and Guibas, Leonidas J}, - booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition}, - pages={4404--4413}, - year={2020} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py deleted file mode 100644 index e999c650..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py +++ /dev/null @@ -1,58 +0,0 @@ -_base_ = [ - '../_base_/datasets/sunrgbd-3d-10class.py', '../_base_/default_runtime.py', - '../_base_/models/imvotenet_image.py' -] - -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - 
dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 480), (1333, 504), (1333, 528), (1333, 552), - (1333, 576), (1333, 600)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 600), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict(times=1, dataset=dict(pipeline=train_pipeline)), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) - -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[6]) -runner = dict(type='EpochBasedRunner', max_epochs=8) - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco_bbox_mAP-0.408__segm_mAP-0.37_20200504_163245-42aa3d00.pth' # noqa diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class.py deleted file mode 100644 index ef1e5539..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class.py +++ /dev/null @@ -1,260 +0,0 @@ -_base_ = [ - '../_base_/datasets/sunrgbd-3d-10class.py', - '../_base_/schedules/schedule_3x.py', '../_base_/default_runtime.py', - '../_base_/models/imvotenet_image.py' -] - -class_names = ('bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser', - 'night_stand', 'bookshelf', 'bathtub') - -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) - -model = dict( - pts_backbone=dict( - type='PointNet2SASSG', - in_channels=4, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((64, 64, 128), (128, 128, 256), (128, 128, 256), - (128, 128, 256)), - fp_channels=((256, 256), (256, 256)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True)), - pts_bbox_heads=dict( - common=dict( - type='VoteHead', - num_classes=10, - bbox_coder=dict( - type='PartialBinBasedBBoxCoder', - num_sizes=10, - num_dir_bins=12, - with_rot=True, - mean_sizes=[[2.114256, 1.620300, 0.927272], - [0.791118, 1.279516, 0.718182], - [0.923508, 1.867419, 0.845495], - [0.591958, 0.552978, 0.827272], - [0.699104, 0.454178, 0.75625], - [0.69519, 1.346299, 0.736364], - [0.528526, 1.002642, 1.172878], - [0.500618, 0.632163, 0.683424], - [0.404671, 1.071108, 1.688889], - [0.76584, 1.398258, 0.472728]]), - pred_layer_cfg=dict( - in_channels=128, shared_conv_channels=(128, 128), bias=True), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=dict( - type='CrossEntropyLoss', - class_weight=[0.2, 0.8], - 
reduction='sum', - loss_weight=5.0), - center_loss=dict( - type='ChamferDistance', - mode='l2', - reduction='sum', - loss_src_weight=10.0, - loss_dst_weight=10.0), - dir_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - dir_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0), - size_class_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0), - size_res_loss=dict( - type='SmoothL1Loss', reduction='sum', loss_weight=10.0 / 3.0), - semantic_loss=dict( - type='CrossEntropyLoss', reduction='sum', loss_weight=1.0)), - joint=dict( - vote_module_cfg=dict( - in_channels=512, - vote_per_seed=1, - gt_per_seed=3, - conv_channels=(512, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=256, - radius=0.3, - num_sample=16, - mlp_channels=[512, 128, 128, 128], - use_xyz=True, - normalize_xyz=True)), - pts=dict( - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=3, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=256, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True)), - img=dict( - vote_module_cfg=dict( - in_channels=256, - vote_per_seed=1, - gt_per_seed=3, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - norm_feats=True, - vote_loss=dict( - type='ChamferDistance', - mode='l1', - reduction='none', - loss_dst_weight=10.0)), - vote_aggregation_cfg=dict( - type='PointSAModule', - num_point=256, - radius=0.3, - num_sample=16, - mlp_channels=[256, 128, 128, 128], - use_xyz=True, - normalize_xyz=True)), - loss_weights=[0.4, 0.3, 0.3]), - img_mlp=dict( - in_channel=18, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU')), - fusion_layer=dict( - type='VoteFusion', - num_classes=len(class_names), - max_imvote_per_pixel=3), - num_sampled_seed=1024, - freeze_img_branch=True, - - # model training and testing settings - train_cfg=dict( - pts=dict( - pos_distance_thr=0.3, neg_distance_thr=0.6, sample_mod='vote')), - test_cfg=dict( - img_rcnn=dict(score_thr=0.1), - pts=dict( - sample_mod='seed', - nms_thr=0.25, - score_thr=0.05, - per_class_proposal=True))) - -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations3D'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 600), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.0), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - ), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.523599, 0.523599], - scale_ratio_range=[0.85, 1.15], - shift_height=True), - dict(type='PointSample', num_points=20000), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'points', 'gt_bboxes_3d', - 'gt_labels_3d' - ]) -] - 
-test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=True, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 600), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.0), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - ), - dict(type='PointSample', num_points=20000), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img', 'points']) - ]), -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img', 'points']) -] - -data = dict( - train=dict(dataset=dict(pipeline=train_pipeline)), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -evaluation = dict(pipeline=eval_pipeline) - -# may also use your own pre-trained image branch -load_from = 'https://download.openmmlab.com/mmdetection3d/v0.1.0_models/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class_20210323_173222-cad62aeb.pth' # noqa diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/metafile.yml deleted file mode 100644 index 28051c43..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvotenet/metafile.yml +++ /dev/null @@ -1,43 +0,0 @@ -Collections: - - Name: ImVoteNet - Metadata: - Training Data: SUNRGBD - Training Techniques: - - AdamW - Training Resources: 8x TITAN Xp - Architecture: - - Faster R-CNN - - VoteNet - - Feature Pyramid Network - Paper: - URL: https://arxiv.org/abs/2001.10692 - Title: 'ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes' - README: configs/imvotenet/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/imvotenet.py#L56 - Version: v0.12.0 - -Models: - - Name: imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class - In Collection: ImVoteNet - Config: configs/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class.py - Metadata: - Training Memory (GB): 2.1 - Results: - - Task: Object Detection - Dataset: SUNRGBD-2D - Metrics: - AP@0.5: 62.70 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class/imvotenet_faster_rcnn_r50_fpn_2x4_sunrgbd-3d-10class_20210819_225618-62eba6ce.pth - - - Name: imvotenet_stage2_16x8_sunrgbd-3d-10class - In Collection: ImVoteNet - Config: configs/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class.py - Metadata: - Training Memory (GB): 9.4 - Results: - - Task: 3D Object Detection - Dataset: SUNRGBD-3D - Metrics: - AP@0.25: 64.55 - Weights: 
https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvotenet/imvotenet_stage2_16x8_sunrgbd-3d-10class/imvotenet_stage2_16x8_sunrgbd-3d-10class_20210819_192851-1bcd1b97.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/README.md deleted file mode 100644 index faaddf29..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/README.md +++ /dev/null @@ -1,38 +0,0 @@ -# ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection - -> [ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection](https://arxiv.org/abs/2106.01178) - - - -## Abstract - -In this paper, we introduce the task of multi-view RGB-based 3D object detection as an end-to-end optimization problem. To address this problem, we propose ImVoxelNet, a novel fully convolutional method of 3D object detection based on posed monocular or multi-view RGB images. The number of monocular images in each multiview input can variate during training and inference; actually, this number might be unique for each multi-view input. ImVoxelNet successfully handles both indoor and outdoor scenes, which makes it general-purpose. Specifically, it achieves state-of-the-art results in car detection on KITTI (monocular) and nuScenes (multi-view) benchmarks among all methods that accept RGB images. Moreover, it surpasses existing RGB-based 3D object detection methods on the SUN RGB-D dataset. On ScanNet, ImVoxelNet sets a new benchmark for multi-view 3D object detection. - -
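The "image to voxels projection" in the ImVoxelNet abstract amounts to projecting 3D voxel centers into the image with the camera matrix and gathering 2D features at the resulting pixels. A minimal numpy sketch of that lifting step, not the mmdetection3d implementation; the feature map, voxel grid and projection matrix below are toy values:

```python
# Illustrative only, not the ImVoxelNet implementation: project 3D voxel
# centers into the image with a 3x4 projection matrix and gather 2D features
# at the resulting pixels (nearest neighbour). All tensors below are toy values.
import numpy as np

def lift_features_to_voxels(feat_2d, voxel_centers, proj_mat):
    """feat_2d: (C, H, W) image features; voxel_centers: (N, 3) xyz;
    proj_mat: (3, 4). Returns (N, C); voxels projecting outside stay zero."""
    C, H, W = feat_2d.shape
    homo = np.concatenate([voxel_centers, np.ones((len(voxel_centers), 1))], axis=1)
    uvz = homo @ proj_mat.T
    z = uvz[:, 2]
    u = uvz[:, 0] / np.clip(z, 1e-6, None)
    v = uvz[:, 1] / np.clip(z, 1e-6, None)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    out = np.zeros((len(voxel_centers), C))
    ui = np.clip(np.round(u).astype(int), 0, W - 1)
    vi = np.clip(np.round(v).astype(int), 0, H - 1)
    out[valid] = feat_2d[:, vi[valid], ui[valid]].T   # nearest-neighbour sampling
    return out

feat = np.random.rand(64, 48, 156)                       # toy FPN level
centers = np.random.uniform([0, -40, -3], [70, 40, 1], size=(1000, 3))
P = np.array([[720.0, 0.0, 610.0, 45.0],                 # made-up KITTI-like P2
              [0.0, 720.0, 173.0, 0.2],
              [0.0, 0.0, 1.0, 0.003]])
print(lift_features_to_voxels(feat, centers, P).shape)   # (1000, 64)
```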
- -## Introduction - -We implement a monocular 3D detector ImVoxelNet and provide its results and checkpoints on KITTI dataset. -Results for SUN RGB-D, ScanNet and nuScenes are currently available in ImVoxelNet authors -[repo](https://github.com/saic-vul/imvoxelnet) (based on mmdetection3d). - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :---------------------------------------: | :---: | :-----: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [ResNet-50](./imvoxelnet_kitti-3d-car.py) | Car | 3x | | | 17.26 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvoxelnet/imvoxelnet_4x8_kitti-3d-car/imvoxelnet_4x8_kitti-3d-car_20210830_003014-3d0ffdf4.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvoxelnet/imvoxelnet_4x8_kitti-3d-car/imvoxelnet_4x8_kitti-3d-car_20210830_003014.log.json) | - -## Citation - -```latex -@article{rukhovich2021imvoxelnet, - title={ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection}, - author={Danila Rukhovich, Anna Vorontsova, Anton Konushin}, - journal={arXiv preprint arXiv:2106.01178}, - year={2021} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/imvoxelnet_4x8_kitti-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/imvoxelnet_4x8_kitti-3d-car.py deleted file mode 100644 index 89bf2426..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/imvoxelnet_4x8_kitti-3d-car.py +++ /dev/null @@ -1,162 +0,0 @@ -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-model = dict( - type='ImVoxelNet', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'), - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=64, - num_outs=4), - neck_3d=dict(type='OutdoorImVoxelNeck', in_channels=64, out_channels=256), - bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - in_channels=256, - feat_channels=256, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-0.16, -39.68, -1.78, 68.96, 39.68, -1.78]], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=True), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - n_voxels=[216, 248, 12], - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-0.16, -39.68, -3.08, 68.96, 39.68, 0.76]], - rotations=[.0]), - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) - -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -input_modality = dict(use_lidar=False, use_camera=True) -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -train_pipeline = [ - dict(type='LoadAnnotations3D'), - dict(type='LoadImageFromFile'), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='Resize', - img_scale=[(1173, 352), (1387, 416)], - keep_ratio=True, - multiscale_mode='range'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['img', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='Resize', img_scale=(1280, 384), keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=3, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - 
ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True)) - -optimizer = dict( - type='AdamW', - lr=0.0001, - weight_decay=0.0001, - paramwise_cfg=dict( - custom_keys={'backbone': dict(lr_mult=0.1, decay_mult=1.0)})) -optimizer_config = dict(grad_clip=dict(max_norm=35., norm_type=2)) -lr_config = dict(policy='step', step=[8, 11]) -total_epochs = 12 - -checkpoint_config = dict(interval=1, max_keep_ckpts=1) -log_config = dict( - interval=1, - hooks=[dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook')]) -evaluation = dict(interval=1) -dist_params = dict(backend='nccl') -find_unused_parameters = True # only 1 of 4 FPN outputs is used -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/metafile.yml deleted file mode 100644 index 0dea4866..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/imvoxelnet/metafile.yml +++ /dev/null @@ -1,29 +0,0 @@ -Collections: - - Name: ImVoxelNet - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 8x Tesla P40 - Architecture: - - Anchor3DHead - Paper: - URL: https://arxiv.org/abs/2106.01178 - Title: 'ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection' - README: configs/imvoxelnet/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/imvoxelnet.py#L11 - Version: v0.15.0 - -Models: - - Name: imvoxelnet_kitti-3d-car - In Collection: ImVoxelNet - Config: configs/imvoxelnet/imvoxelnet_kitti-3d-car.py - Metadata: - Training Memory (GB): 15.0 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 17.26 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/imvoxelnet/imvoxelnet_4x8_kitti-3d-car/imvoxelnet_4x8_kitti-3d-car_20210830_003014-3d0ffdf4.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/monoflex/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/monoflex/README.md deleted file mode 100644 index 0f402be2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/monoflex/README.md +++ /dev/null @@ -1,48 +0,0 @@ -# Objects are Different: Flexible Monocular 3D Object Detection - -> [Objects are Different: Flexible Monocular 3D Object Detection](https://arxiv.org/abs/2104.02323) - - - -## Abstract - -The precise localization of 3D objects from a single image without depth information is a highly challenging problem. Most existing methods adopt the same approach for all objects regardless of their diverse distributions, leading to limited performance for truncated objects. In this paper, we propose a flexible framework for monocular 3D object detection which explicitly decouples the truncated objects and adaptively combines multiple approaches for object depth estimation. Specifically, we decouple the edge of the feature map for predicting long-tail truncated objects so that the optimization of normal objects is not influenced. Furthermore, we formulate the object depth estimation as an uncertainty-guided ensemble of directly regressed object depth and solved depths from different groups of keypoints. 
Experiments demonstrate that our method outperforms the state-of-the-art method by relatively 27% for the moderate level and 30% for the hard level in the test set of KITTI benchmark while maintaining real-time efficiency. - -
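The "uncertainty-guided ensemble" mentioned in the MonoFlex abstract combines several depth estimates, weighting each by its predicted confidence. A hedged sketch of one such combination rule (inverse-uncertainty weighting); the numbers are illustrative and this is not MonoFlex's exact formulation:

```python
# Illustrative only, not MonoFlex's exact formulation: fuse a directly
# regressed depth with depths solved from keypoint groups, weighting each
# estimate by the inverse of its predicted uncertainty.
import numpy as np

def ensemble_depth(depths, log_sigmas):
    """depths: per-branch depth estimates; log_sigmas: predicted log standard
    deviations (larger = less certain). Returns the weighted average."""
    depths = np.asarray(depths, dtype=float)
    weights = np.exp(-np.asarray(log_sigmas, dtype=float))  # 1 / sigma
    weights /= weights.sum()
    return float(np.dot(weights, depths))

# One direct regression plus three keypoint-group solutions (toy numbers).
print(ensemble_depth(depths=[14.2, 13.8, 14.6, 15.1],
                     log_sigmas=[0.1, 0.5, 0.3, 1.2]))
```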
- -## Introduction - -We implement MonoFlex and provide the results and checkpoints on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :---------------------------------------------------------------------: | :-----: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [DLA34](./monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d.py) | 6x | 9.64 | | 21.86 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/monoflex/monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d_20211228_027553-d46d9bb0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/monoflex/monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d_20211228_027553.log.json) | - -Note: mAP represents Car moderate 3D strict AP11 results. -Detailed performance on KITTI 3D detection (3D/BEV) is as follows, evaluated by AP11 and AP40 metric: - -| | Easy | Moderate | Hard | -| ---------- | :-----------: | :-----------: | :-----------: | -| Car (AP11) | 28.02 / 36.11 | 21.86 / 29.46 | 19.01 / 24.83 | -| Car (AP40) | 23.22 / 32.74 | 17.18 / 24.02 | 15.13 / 20.67 | - -Note: mAP represents Car moderate 3D strict AP11 / AP40 results. Because of the limited data for pedestrians and cyclists, the detection performance for these two classes is usually unstable. Therefore, we only list car detection results here. In addition, the AP11 result may fluctuate in a larger range (~1 AP), so AP40 is a more recommended metric for reference due to its much better stability. 
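The AP11 / AP40 distinction in the note above comes from how many recall points the KITTI-style interpolated AP averages over: 11 points including recall 0, versus 40 points starting at 1/40. A small sketch of that computation on a toy precision-recall curve (not mmdetection3d's evaluator):

```python
# Illustrative only, not the mmdetection3d evaluator: KITTI-style interpolated
# AP averaged over a fixed set of recall points. AP11 uses {0.0, 0.1, ..., 1.0};
# AP40 uses {1/40, 2/40, ..., 1.0}, which skips the recall-0 point and is
# the more stable of the two.
import numpy as np

def interpolated_ap(recall, precision, num_points=40):
    """recall/precision: PR samples sorted by increasing recall."""
    recall = np.asarray(recall, dtype=float)
    precision = np.asarray(precision, dtype=float)
    if num_points == 11:
        sample_points = np.linspace(0.0, 1.0, 11)
    else:
        sample_points = np.arange(1, num_points + 1) / num_points
    ap = 0.0
    for r in sample_points:
        mask = recall >= r
        # Interpolated precision: best precision reachable at recall >= r.
        ap += precision[mask].max() if mask.any() else 0.0
    return ap / len(sample_points)

recall = np.linspace(0.05, 0.70, 14)          # toy PR curve
precision = np.linspace(0.90, 0.40, 14)
print(interpolated_ap(recall, precision, 11), interpolated_ap(recall, precision, 40))
```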
- -## Citation - -```latex -@InProceedings{MonoFlex, - author = {Zhang, Yunpeng and Lu, Jiwen and Zhou, Jie}, - title = {Objects Are Different: Flexible Monocular 3D Object Detection}, - booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, - month = {June}, - year = {2021}, - pages = {3289-3298} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/monoflex/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/monoflex/metafile.yml deleted file mode 100644 index c64dd6ff..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/monoflex/metafile.yml +++ /dev/null @@ -1,30 +0,0 @@ -Collections: - - Name: MonoFlex - Metadata: - Training Data: KITTI - Training Techniques: - - Adam - Training Resources: 2x V100 GPUS - Architecture: - - MonoFlexHead - - DLA - Paper: - URL: https://arxiv.org/abs/2104.02323 - Title: 'Objects are Different: Flexible Monocular 3D Object Detection' - README: configs/monoflex/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/v1.0.0.dev0/mmdet3d/models/detectors/monoflex.py#L7 - Version: v1.0.0 - -Models: - - Name: monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d - In Collection: MonoFlex - Config: configs/monoflex/monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d.py - Metadata: - Training Memory (GB): 9.64 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 21.98 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/monoflex/monoflex_dla34_pytorch_dlaneck_gn-all_2x4_6x_kitti-mono3d_20211228_027553-d46d9bb0.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/README.md deleted file mode 100644 index d786efa7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/README.md +++ /dev/null @@ -1,38 +0,0 @@ -# MVX-Net: Multimodal VoxelNet for 3D Object Detection - -> [MVX-Net: Multimodal VoxelNet for 3D Object Detection](https://arxiv.org/abs/1904.01649) - - - -## Abstract - -Many recent works on 3D object detection have focused on designing neural network architectures that can consume point cloud data. While these approaches demonstrate encouraging performance, they are typically based on a single modality and are unable to leverage information from other modalities, such as a camera. Although a few approaches fuse data from different modalities, these methods either use a complicated pipeline to process the modalities sequentially, or perform late-fusion and are unable to learn interaction between different modalities at early stages. In this work, we present PointFusion and VoxelFusion: two simple yet effective early-fusion approaches to combine the RGB and point cloud modalities, by leveraging the recently introduced VoxelNet architecture. Evaluation on the KITTI dataset demonstrates significant improvements in performance over approaches which only use point cloud data. Furthermore, the proposed method provides results competitive with the state-of-the-art multimodal algorithms, achieving top-2 ranking in five of the six bird's eye view and 3D detection categories on the KITTI benchmark, by using a simple single stage network. - -
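Besides the PointFusion/VoxelFusion idea in the abstract, the MVX-Net config deleted below relies on dynamic voxelization (max_num_points=-1, max_voxels=(-1, -1)): every in-range point keeps its voxel index instead of being packed into fixed-capacity voxel buffers. A rough numpy sketch of that assignment step, reusing the voxel size and point cloud range from the config; it is an illustration, not the DynamicVFE implementation:

```python
# Illustrative only, not mmdetection3d's DynamicVFE: "dynamic" voxelization
# keeps every in-range point and records the voxel index it falls into,
# instead of packing points into fixed-capacity voxel buffers.
import numpy as np

def dynamic_voxelize(points, voxel_size, point_cloud_range):
    """points: (N, >=3) with xyz first. Returns integer voxel coordinates for
    the in-range points and the boolean mask of which points were kept."""
    voxel_size = np.asarray(voxel_size, dtype=float)
    pc_min = np.asarray(point_cloud_range[:3], dtype=float)
    pc_max = np.asarray(point_cloud_range[3:], dtype=float)

    keep = np.all((points[:, :3] >= pc_min) & (points[:, :3] < pc_max), axis=1)
    coords = np.floor((points[keep, :3] - pc_min) / voxel_size).astype(np.int64)
    return coords, keep

# Toy LiDAR sweep; voxel size and range taken from the config below.
pts = np.random.uniform([-10, -50, -4, 0], [80, 50, 2, 1], size=(20000, 4))
coords, keep = dynamic_voxelize(pts, voxel_size=[0.05, 0.05, 0.1],
                                point_cloud_range=[0, -40, -3, 70.4, 40, 1])
print(coords.shape, int(keep.sum()))
```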
- -## Introduction - -We implement MVX-Net and provide its results and models on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :-------------------------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py) | 3 Class | cosine 80e | 6.7 | | 63.22 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class_20210831_060805-83442923.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class_20210831_060805.log.json) | - -## Citation - -```latex -@inproceedings{sindagi2019mvx, - title={MVX-Net: Multimodal voxelnet for 3D object detection}, - author={Sindagi, Vishwanath A and Zhou, Yin and Tuzel, Oncel}, - booktitle={2019 International Conference on Robotics and Automation (ICRA)}, - pages={7276--7282}, - year={2019}, - organization={IEEE} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py deleted file mode 100644 index e9f592f5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py +++ /dev/null @@ -1,251 +0,0 @@ -_base_ = ['../_base_/schedules/cosine.py', '../_base_/default_runtime.py'] - -# model settings -voxel_size = [0.05, 0.05, 0.1] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] - -model = dict( - type='DynamicMVXFasterRCNN', - img_backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe'), - img_neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - pts_voxel_layer=dict( - max_num_points=-1, - point_cloud_range=point_cloud_range, - voxel_size=voxel_size, - max_voxels=(-1, -1), - ), - pts_voxel_encoder=dict( - type='DynamicVFE', - in_channels=4, - feat_channels=[64, 64], - with_distance=False, - voxel_size=voxel_size, - with_cluster_center=True, - with_voxel_center=True, - point_cloud_range=point_cloud_range, - fusion_layer=dict( - type='PointFusion', - img_channels=256, - pts_channels=64, - mid_channels=128, - out_channels=128, - img_levels=[0, 1, 2, 3, 4], - align_corners=False, - activate_out=True, - fuse_out=False)), - pts_middle_encoder=dict( - type='SparseEncoder', - in_channels=128, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - pts_backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - pts_neck=dict( - type='SECONDFPN', - in_channels=[128, 256], 
- upsample_strides=[1, 2], - out_channels=[256, 256]), - pts_bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[ - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78], - ], - sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - assigner_per_size=True, - diff_rad_by_sin=True, - assign_per_class=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - pts=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False)), - test_cfg=dict( - pts=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50))) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -input_modality = dict(use_lidar=True, use_camera=True) -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - type='Resize', - img_scale=[(640, 192), (2560, 768)], - multiscale_mode='range', - keep_ratio=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05], - translation_std=[0.2, 0.2, 0.2]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=['points', 'img', 'gt_bboxes_3d', 'gt_labels_3d']), -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1280, 384), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict(type='Resize', multiscale_mode='value', keep_ratio=True), - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - 
type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points', 'img']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadImageFromFile'), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points', 'img']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - box_type_3d='LiDAR')), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) - -# Training settings -optimizer = dict(weight_decay=0.01) -# max_norm=10 is better for SECOND -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) - -evaluation = dict(interval=1, pipeline=eval_pipeline) - -# You may need to download the model first is the network is unstable -load_from = 'https://download.openmmlab.com/mmdetection3d/pretrain_models/mvx_faster_rcnn_detectron2-caffe_20e_coco-pretrain_gt-sample_kitti-3-class_moderate-79.3_20200207-a4a6a3c7.pth' # noqa diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/metafile.yml deleted file mode 100644 index 4ce10b71..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/mvxnet/metafile.yml +++ /dev/null @@ -1,30 +0,0 @@ -Collections: - - Name: MVX-Net - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - Feature Pyramid Network - - Dynamic Voxelization - Paper: - URL: https://arxiv.org/abs/1904.01649 - Title: 'MVX-Net: Multimodal VoxelNet for 3D Object Detection' - README: configs/mvxnet/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/mvx_two_stage.py#L20 - Version: v0.5.0 - -Models: - - Name: dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class - In Collection: MVX-Net - Config: configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py - Metadata: - Training Memory (GB): 6.7 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 63.22 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class_20210831_060805-83442923.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/README.md deleted file mode 100644 index 91062296..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/README.md +++ /dev/null @@ -1,59 +0,0 @@ -# NuImages Results - - - -## Introduction - -We support and provide some baseline results on [nuImages dataset](https://www.nuscenes.org/nuimages). -We follow the class mapping in nuScenes dataset, which maps the original categories into 10 foreground categories. -The convert script can be found [here](https://github.com/open-mmlab/mmdetection3d/blob/master/tools/data_converter/nuimage_converter.py). -The baseline results include instance segmentation models, e.g., Mask R-CNN, Cascade Mask R-CNN, and HTC. -We will support panoptic segmentation models in the future. - -![demo image](../../resources/nuimages_demo.gif) - -The dataset converted by the script of v0.6.0 only supports instance segmentation. Since v0.7.0, we also support to produce semantic segmentation mask of each image; thus, we can train HTC or semantic segmentation models using the dataset. To convert the nuImages dataset into COCO format, please use the command below: - -```shell -python -u tools/data_converter/nuimage_converter.py --data-root ${DATA_ROOT} --version ${VERSIONS} \ - --out-dir ${OUT_DIR} --nproc ${NUM_WORKERS} --extra-tag ${TAG} -``` - -- `--data-root`: the root of the dataset, defaults to `./data/nuimages`. -- `--version`: the version of the dataset, defaults to `v1.0-mini`. To get the full dataset, please use `--version v1.0-train v1.0-val v1.0-mini` -- `--out-dir`: the output directory of annotations and semantic masks, defaults to `./data/nuimages/annotations/`. -- `--nproc`: number of workers for data preparation, defaults to `4`. Larger number could reduce the preparation time as images are processed in parallel. -- `--extra-tag`: extra tag of the annotations, defaults to `nuimages`. This can be used to separate different annotations processed in different time for study. - -## Results and models - -### Instance Segmentation - -We report Mask R-CNN and Cascade Mask R-CNN results on nuimages. 
- -| Method | Backbone | Pretraining | Lr schd | Mem (GB) | Box AP | Mask AP | Download | -| :----------------: | :-----------------------------------------------------------------------------------: | :---------: | :-----: | :------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-----: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| Mask R-CNN | [R-50](./mask_rcnn_r50_fpn_1x_nuim.py) | IN | 1x | 7.4 | 47.8 | 38.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_1x_nuim/mask_rcnn_r50_fpn_1x_nuim_20201008_195238-e99f5182.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_1x_nuim/mask_rcnn_r50_fpn_1x_nuim_20201008_195238.log.json) | -| Mask R-CNN | [R-50](./mask_rcnn_r50_fpn_coco-2x_1x_nuim.py) | IN+COCO-2x | 1x | 7.4 | 49.7 | 40.5 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_coco-2x_1x_nuim/mask_rcnn_r50_fpn_coco-2x_1x_nuim_20201008_195238-b1742a60.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_coco-2x_1x_nuim/mask_rcnn_r50_fpn_coco-2x_1x_nuim_20201008_195238.log.json) | -| Mask R-CNN | [R-50-CAFFE](./mask_rcnn_r50_caffe_fpn_1x_nuim.py) | IN | 1x | 7.0 | 47.7 | 38.2 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_1x_nuim/) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_1x_nuim/) | -| Mask R-CNN | [R-50-CAFFE](./mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py) | IN+COCO-3x | 1x | 7.0 | 49.9 | 40.8 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim_20201008_195305-661a992e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim_20201008_195305.log.json) | -| Mask R-CNN | [R-50-CAFFE](./mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py) | IN+COCO-3x | 20e | 7.0 | 50.6 | 41.3 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim_20201009_125002-5529442c.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim_20201009_125002.log.json) | -| Mask R-CNN | [R-101](./mask_rcnn_r101_fpn_1x_nuim.py) | IN | 1x | 10.9 | 48.9 | 39.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r101_fpn_1x_nuim/mask_rcnn_r101_fpn_1x_nuim_20201024_134803-65c7623a.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r101_fpn_1x_nuim/mask_rcnn_r101_fpn_1x_nuim_20201024_134803.log.json) | -| Mask R-CNN | 
[X-101_32x4d](./mask_rcnn_x101_32x4d_fpn_1x_nuim.py) | IN | 1x | 13.3 | 50.4 | 40.5 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_x101_32x4d_fpn_1x_nuim/mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135741-b699ab37.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_x101_32x4d_fpn_1x_nuim/mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135741.log.json) | -| Cascade Mask R-CNN | [R-50](./cascade_mask_rcnn_r50_fpn_1x_nuim.py) | IN | 1x | 8.9 | 50.8 | 40.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_1x_nuim/cascade_mask_rcnn_r50_fpn_1x_nuim_20201008_195342-1147c036.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_1x_nuim/cascade_mask_rcnn_r50_fpn_1x_nuim_20201008_195342.log.json) | -| Cascade Mask R-CNN | [R-50](./cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py) | IN+COCO-20e | 1x | 8.9 | 52.8 | 42.2 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim_20201009_124158-ad0540e3.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim_20201009_124158.log.json) | -| Cascade Mask R-CNN | [R-50](./cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py) | IN+COCO-20e | 20e | 8.9 | 52.8 | 42.2 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim_20201009_124951-40963960.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim_20201009_124951.log.json) | -| Cascade Mask R-CNN | [R-101](./cascade_mask_rcnn_r101_fpn_1x_nuim.py) | IN | 1x | 12.5 | 51.5 | 40.7 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r101_fpn_1x_nuim/cascade_mask_rcnn_r101_fpn_1x_nuim_20201024_134804-45215b1e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r101_fpn_1x_nuim/cascade_mask_rcnn_r101_fpn_1x_nuim_20201024_134804.log.json) | -| Cascade Mask R-CNN | [X-101_32x4d](./cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py) | IN | 1x | 14.9 | 52.8 | 41.6 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135753-e0e49778.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135753.log.json) | -| HTC w/o semantic | [R-50](./htc_without_semantic_r50_fpn_1x_nuim.py) | IN | 1x | | [model](<>) \| [log](<>) | | | -| HTC | [R-50](./htc_r50_fpn_1x_nuim.py) | IN | 1x | | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/) | | | -| HTC | [R-50](./htc_r50_fpn_coco-20e_1x_nuim.py) | IN+COCO-20e | 1x | 11.6 | 53.8 | 43.8 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_1x_nuim/htc_r50_fpn_coco-20e_1x_nuim_20201010_070203-0b53a65e.pth) \| 
[log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_1x_nuim/htc_r50_fpn_coco-20e_1x_nuim_20201010_070203.log.json) | -| HTC | [R-50](./htc_r50_fpn_coco-20e_20e_nuim.py) | IN+COCO-20e | 20e | 11.6 | 54.8 | 44.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_20e_nuim/htc_r50_fpn_coco-20e_20e_nuim_20201008_211415-d6c60a2c.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_20e_nuim/htc_r50_fpn_coco-20e_20e_nuim_20201008_211415.log.json) | -| HTC | [X-101_64x4d + DCN_c3-c5](./htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py) | IN+COCO-20e | 20e | 13.3 | 57.3 | 46.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim_20201008_211222-0b16ac4b.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim_20201008_211222.log.json) | - -**Note**: - -1. `IN` means only using ImageNet pre-trained backbone. `IN+COCO-Nx` and `IN+COCO-Ne` means the backbone is first pre-trained on ImageNet, and then the detector is pre-trained on COCO train2017 dataset by `Nx` and `N` epochs schedules, respectively. -2. All the training hyper-parameters follow the standard schedules on COCO dataset except that the images are resized from - 1280 x 720 to 1920 x 1080 (relative ratio 0.8 to 1.2) since the images are in size 1600 x 900. -3. The class order in the detectors released in v0.6.0 is different from the order in the configs because the bug in the conversion script. This bug has been fixed since v0.7.0 and the models trained by the correct class order are also released. If you used nuImages since v0.6.0, please re-convert the data through the conversion script using the above-mentioned command. 
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r101_fpn_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r101_fpn_1x_nuim.py deleted file mode 100644 index 28a54f71..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r101_fpn_1x_nuim.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './cascade_mask_rcnn_r50_fpn_1x_nuim.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_1x_nuim.py deleted file mode 100644 index c6ce25e3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_1x_nuim.py +++ /dev/null @@ -1,60 +0,0 @@ -_base_ = [ - '../_base_/models/cascade_mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - roi_head=dict( - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_head=dict(num_classes=10))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py deleted file mode 100644 index bf3ffed0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './cascade_mask_rcnn_r50_fpn_1x_nuim.py' - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_20e_coco/cascade_mask_rcnn_r50_fpn_20e_coco_bbox_mAP-0.419__segm_mAP-0.365_20200504_174711-4af8e66e.pth' # noqa diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py deleted file mode 100644 index 5d69466f..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = './cascade_mask_rcnn_r50_fpn_1x_nuim.py' - -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(max_epochs=20) - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_20e_coco/cascade_mask_rcnn_r50_fpn_20e_coco_bbox_mAP-0.419__segm_mAP-0.365_20200504_174711-4af8e66e.pth' # noqa diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py deleted file mode 100644 index 19f35aef..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './cascade_mask_rcnn_r50_fpn_1x_nuim.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_1x_nuim.py deleted file mode 100644 index 38f22053..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_1x_nuim.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -_base_ = './htc_without_semantic_r50_fpn_1x_nuim.py' -model = dict( - roi_head=dict( - semantic_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[8]), - semantic_head=dict( - type='FusedSemanticHead', - num_ins=5, - fusion_level=1, - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=32, - ignore_label=0, - loss_weight=0.2))) - -data_root = 'data/nuimages/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadAnnotations', with_bbox=True, with_mask=True, with_seg=True), - dict( - type='Resize', - img_scale=[(1280, 720), (1920, 1080)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='SegRescale', scale_factor=1 / 8), - dict(type='DefaultFormatBundle'), - dict( - type='Collect', - keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg']) -] -data = dict( - train=dict( - seg_prefix=data_root + 'annotations/semantic_masks/', - pipeline=train_pipeline)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_coco-20e_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_coco-20e_1x_nuim.py deleted file mode 100644 index e5f60523..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_coco-20e_1x_nuim.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './htc_r50_fpn_1x_nuim.py' - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/htc/htc_r50_fpn_20e_coco/htc_r50_fpn_20e_coco_20200319-fe28c577.pth' # noqa diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_coco-20e_20e_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_coco-20e_20e_nuim.py deleted file mode 100644 index 2274900f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_r50_fpn_coco-20e_20e_nuim.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './htc_r50_fpn_coco-20e_1x_nuim.py' -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(max_epochs=20) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_without_semantic_r50_fpn_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_without_semantic_r50_fpn_1x_nuim.py deleted file mode 100644 index 09fde671..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_without_semantic_r50_fpn_1x_nuim.py +++ /dev/null @@ -1,221 +0,0 @@ -_base_ = [ - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - type='HybridTaskCascade', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)), - roi_head=dict( - type='HybridTaskCascadeRoIHead', - interleaved=True, - mask_info_flow=True, - num_stages=3, - stage_loss_weights=[1, 0.5, 0.25], - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=10, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_roi_extractor=dict( - 
type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=[ - dict( - type='HTCMaskHead', - with_conv_res=False, - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=10, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)), - dict( - type='HTCMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=10, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)), - dict( - type='HTCMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=10, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)) - ]), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_across_levels=False, - nms_pre=2000, - nms_post=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=[ - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.6, - neg_iou_thr=0.6, - min_pos_iou=0.6, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.7, - min_pos_iou=0.7, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False) - ]), - test_cfg=dict( - rpn=dict( - nms_across_levels=False, - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.001, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py deleted file mode 100644 index 4ab095a8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py +++ /dev/null @@ -1,23 +0,0 @@ -_base_ = './htc_r50_fpn_1x_nuim.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch', - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) - -data = dict(samples_per_gpu=1, workers_per_gpu=1) -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(max_epochs=20) - -load_from = 
'http://download.openmmlab.com/mmdetection/v2.0/htc/htc_x101_64x4d_fpn_dconv_c3-c5_mstrain_400_1400_16x1_20e_coco/htc_x101_64x4d_fpn_dconv_c3-c5_mstrain_400_1400_16x1_20e_coco_20200312-946fd751.pth' # noqa diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r101_fpn_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r101_fpn_1x_nuim.py deleted file mode 100644 index 6245194c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r101_fpn_1x_nuim.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_1x_nuim.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_1x_nuim.py deleted file mode 100644 index 4af79e59..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_1x_nuim.py +++ /dev/null @@ -1,46 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe'), - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1280, 720), (1920, 1080)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1600, 900), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py deleted file mode 100644 index 32c3f44c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py +++ /dev/null @@ -1,48 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe'), - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) 
-train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1280, 720), (1920, 1080)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1600, 900), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) - -load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco_bbox_mAP-0.408__segm_mAP-0.37_20200504_163245-42aa3d00.pth' # noqa diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py deleted file mode 100644 index 60973539..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py +++ /dev/null @@ -1,52 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe'), - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1280, 720), (1920, 1080)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1600, 900), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) - -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(max_epochs=20) - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco_bbox_mAP-0.408__segm_mAP-0.37_20200504_163245-42aa3d00.pth' # noqa diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_1x_nuim.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_1x_nuim.py deleted file mode 100644 index ec999ecd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_1x_nuim.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nuim.py deleted file mode 100644 index fd603538..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nuim.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) -load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_2x_coco/mask_rcnn_r50_fpn_2x_coco_bbox_mAP-0.392__segm_mAP-0.354_20200505_003907-3e542a40.pth' # noqa diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nus-2d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nus-2d.py deleted file mode 100644 index 06d27450..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nus-2d.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/nuim_instance.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - roi_head=dict( - bbox_head=dict(num_classes=10), mask_head=dict(num_classes=10))) - -file_client_args = dict( - backend='petrel', - path_mapping=dict({ - './data/nuscenes/': 's3://nuscenes/nuscenes/', - 'data/nuscenes/': 's3://nuscenes/nuscenes/' - })) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -test_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=file_client_args), - dict( - type='MultiScaleFlipAug', - img_scale=(1600, 900), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data_root = 'data/nuimages/' -# data = dict( -# val=dict( -# ann_file=data_root + 'annotations/nuimages_v1.0-mini.json'), -# test=dict( -# ann_file=data_root + 'annotations/nuimages_v1.0-mini.json')) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_x101_32x4d_fpn_1x_nuim.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_x101_32x4d_fpn_1x_nuim.py deleted file mode 100644 index eb3e81b6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/mask_rcnn_x101_32x4d_fpn_1x_nuim.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_1x_nuim.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - 
depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/metafile.yml deleted file mode 100644 index 7b94ce7d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/nuimages/metafile.yml +++ /dev/null @@ -1,255 +0,0 @@ -Models: - - Name: mask_rcnn_r50_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r50_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 7.4 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 47.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 38.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_1x_nuim/mask_rcnn_r50_fpn_1x_nuim_20201008_195238-e99f5182.pth - - - Name: mask_rcnn_r50_fpn_coco-2x_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r50_fpn_coco-2x_1x_nuim.py - Metadata: - Training Memory (GB): 7.4 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 49.7 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 40.5 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_fpn_coco-2x_1x_nuim/mask_rcnn_r50_fpn_coco-2x_1x_nuim_20201008_195238-b1742a60.pth - - - Name: mask_rcnn_r50_caffe_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r50_caffe_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 7.0 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 47.7 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 38.2 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_1x_nuim/ - - - Name: mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim.py - Metadata: - Training Memory (GB): 7.0 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 49.9 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 40.8 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_1x_nuim_20201008_195305-661a992e.pth - - - Name: mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim.py - Metadata: - Training Memory (GB): 7.0 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 50.6 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 41.3 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim/mask_rcnn_r50_caffe_fpn_coco-3x_20e_nuim_20201009_125002-5529442c.pth - - - Name: mask_rcnn_r101_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_r101_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 10.9 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 48.9 - - Task: 
Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 39.1 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_r101_fpn_1x_nuim/mask_rcnn_r101_fpn_1x_nuim_20201024_134803-65c7623a.pth - - - Name: mask_rcnn_x101_32x4d_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/mask_rcnn_x101_32x4d_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 13.3 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 50.4 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 40.5 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/mask_rcnn_x101_32x4d_fpn_1x_nuim/mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135741-b699ab37.pth - - - Name: cascade_mask_rcnn_r50_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/cascade_mask_rcnn_r50_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 8.9 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 50.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 40.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_1x_nuim/cascade_mask_rcnn_r50_fpn_1x_nuim_20201008_195342-1147c036.pth - - - Name: cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim.py - Metadata: - Training Memory (GB): 8.9 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 52.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 42.2 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_1x_nuim_20201009_124158-ad0540e3.pth - - - Name: cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim.py - Metadata: - Training Memory (GB): 8.9 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 52.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 42.2 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim/cascade_mask_rcnn_r50_fpn_coco-20e_20e_nuim_20201009_124951-40963960.pth - - - Name: cascade_mask_rcnn_r101_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/cascade_mask_rcnn_r101_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 12.5 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 51.5 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 40.7 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_r101_fpn_1x_nuim/cascade_mask_rcnn_r101_fpn_1x_nuim_20201024_134804-45215b1e.pth - - - Name: cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim.py - Metadata: - Training Memory (GB): 14.9 - Training Resources: 8x TITAN Xp - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 52.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 41.6 - Weights: 
https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim/cascade_mask_rcnn_x101_32x4d_fpn_1x_nuim_20201024_135753-e0e49778.pth - - - Name: htc_r50_fpn_coco-20e_1x_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/htc_r50_fpn_coco-20e_1x_nuim.py - Metadata: - Training Memory (GB): 11.6 - Training Resources: 8x V100 GPUs - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 53.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 43.8 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_1x_nuim/htc_r50_fpn_coco-20e_1x_nuim_20201010_070203-0b53a65e.pth - - - Name: htc_r50_fpn_coco-20e_20e_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/htc_r50_fpn_coco-20e_20e_nuim.py - Metadata: - Training Memory (GB): 11.6 - Training Resources: 8x V100 GPUs - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 54.8 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 44.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_r50_fpn_coco-20e_20e_nuim/htc_r50_fpn_coco-20e_20e_nuim_20201008_211415-d6c60a2c.pth - - - Name: htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim - In Collection: Mask R-CNN - Config: configs/nuimages/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim.py - Metadata: - Training Memory (GB): 13.3 - Training Resources: 8x V100 GPUs - Results: - - Task: Object Detection - Dataset: nuImages - Metrics: - Box AP: 57.3 - - Task: Instance Segmentation - Dataset: nuImages - Metrics: - Mask AP: 46.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/nuimages_semseg/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim/htc_x101_64x4d_fpn_dconv_c3-c5_coco-20e_16x1_20e_nuim_20201008_211222-0b16ac4b.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/README.md deleted file mode 100644 index 83ab5b08..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/README.md +++ /dev/null @@ -1,51 +0,0 @@ -# PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds - -> [PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds](https://arxiv.org/abs/2103.14635) - - - -## Abstract - -We introduce Position Adaptive Convolution (PAConv), a generic convolution operation for 3D point cloud processing. The key of PAConv is to construct the convolution kernel by dynamically assembling basic weight matrices stored in Weight Bank, where the coefficients of these weight matrices are self-adaptively learned from point positions through ScoreNet. In this way, the kernel is built in a data-driven manner, endowing PAConv with more flexibility than 2D convolutions to better handle the irregular and unordered point cloud data. Besides, the complexity of the learning process is reduced by combining weight matrices instead of brutally predicting kernels from point positions. -Furthermore, different from the existing point convolution operators whose network architectures are often heavily engineered, we integrate our PAConv into classical MLP-based point cloud pipelines without changing network configurations. 
Even built on simple networks, our method still approaches or even surpasses the state-of-the-art models, and significantly improves baseline performance on both classification and segmentation tasks, yet with decent efficiency. Thorough ablation studies and visualizations are provided to understand PAConv. - -
- -
- -## Introduction - -We implement PAConv and provide the result and checkpoints on S3DIS dataset. - -**Notice**: The original PAConv paper used step learning rate schedule. We discovered that cosine schedule achieves slightly better results and adopt it in our implementations. - -## Results and models - -### S3DIS - -| Method | Split | Lr schd | Mem (GB) | Inf time (fps) | mIoU (Val set) | Download | -| :-------------------------------------------------------------------------: | :----: | :---------: | :------: | :------------: | :------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PAConv (SSG)](./paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py) | Area_5 | cosine 150e | 5.8 | | 66.65 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class_20210729_200615-2147b2d1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class_20210729_200615.log.json) | -| [PAConv\* (SSG)](./paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py) | Area_5 | cosine 200e | 3.8 | | 65.33 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class_20210802_171802-e5ea9bb9.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class_20210802_171802.log.json) | - -**Notes:** - -- We use XYZ+Color+Normalized_XYZ as input in all the experiments on S3DIS datasets. -- `Area_5` Split means training the model on Area_1, 2, 3, 4, 6 and testing on Area_5. -- PAConv\* stands for the CUDA implementation of PAConv operations. See the [paper](https://arxiv.org/pdf/2103.14635.pdf) appendix section D for more details. In our experiments, the training of PAConv\* is found to be very unstable. We achieved slightly lower mIoU than the result in the paper, but is consistent with the result obtained by running their [official code](https://github.com/CVMI-Lab/PAConv/tree/main/scene_seg). Besides, although the GPU memory consumption of PAConv\* is significantly lower than PAConv, its training and inference speed are actually slower (by ~10%). - -## Indeterminism - -Since PAConv testing adopts sliding patch inference which involves random point sampling, and the test script uses fixed random seeds while the random seeds of validation in training are not fixed, the test results may be slightly different from the results reported above. 
- -## Citation - -```latex -@inproceedings{xu2021paconv, - title={PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds}, - author={Xu, Mutian and Ding, Runyu and Zhao, Hengshuang and Qi, Xiaojuan}, - booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, - pages={3173--3182}, - year={2021} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/metafile.yml deleted file mode 100644 index 589f8079..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/metafile.yml +++ /dev/null @@ -1,29 +0,0 @@ -Collections: - - Name: PAConv - Metadata: - Training Techniques: - - SGD - Training Resources: 8x Titan XP GPUs - Architecture: - - PAConv - Paper: - URL: https://arxiv.org/abs/2103.14635 - Title: 'PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds' - README: configs/paconv/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/ops/paconv/paconv.py#L106 - Version: v0.16.0 - -Models: - - Name: paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py - In Collection: PAConv - Config: configs/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py - Metadata: - Training Data: S3DIS - Training Memory (GB): 5.8 - Results: - - Task: 3D Semantic Segmentation - Dataset: S3DIS - Metrics: - mIoU: 66.65 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class_20210729_200615-2147b2d1.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py deleted file mode 100644 index b2a1440e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/paconv_cuda_ssg_8x8_cosine_200e_s3dis_seg-3d-13class.py +++ /dev/null @@ -1,69 +0,0 @@ -_base_ = [ - '../_base_/datasets/s3dis_seg-3d-13class.py', - '../_base_/models/paconv_cuda_ssg.py', - '../_base_/schedules/seg_cosine_150e.py', '../_base_/default_runtime.py' -] - -# data settings -class_names = ('ceiling', 'floor', 'wall', 'beam', 'column', 'window', 'door', - 'table', 'chair', 'sofa', 'bookcase', 'board', 'clutter') -num_points = 4096 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=tuple(range(len(class_names))), - max_cat_id=13), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.0, - use_normalized_coord=True, - num_try=10000, - enlarge_size=None, - min_unique_num=num_points // 4, - eps=0.0), - dict(type='NormalizePointsColor', color_mean=None), - dict( - type='GlobalRotScaleTrans', - rot_range=[0.0, 6.283185307179586], # [0, 2 * pi] - scale_ratio_range=[0.8, 1.2], - translation_std=[0, 0, 0]), - dict( - type='RandomJitterPoints', - jitter_std=[0.01, 0.01, 0.01], - clip_range=[-0.05, 0.05]), - dict(type='RandomDropPointsColor', drop_ratio=0.2), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 
'pts_semantic_mask']) -] - -data = dict(samples_per_gpu=8, train=dict(pipeline=train_pipeline)) -evaluation = dict(interval=1) - -# model settings -model = dict( - decode_head=dict( - num_classes=13, ignore_index=13, - loss_decode=dict(class_weight=None)), # S3DIS doesn't use class_weight - test_cfg=dict( - num_points=4096, - block_size=1.0, - sample_rate=0.5, - use_normalized_coord=True, - batch_size=12)) - -# runtime settings -runner = dict(max_epochs=200) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py deleted file mode 100644 index 6b22a67f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/paconv/paconv_ssg_8x8_cosine_150e_s3dis_seg-3d-13class.py +++ /dev/null @@ -1,66 +0,0 @@ -_base_ = [ - '../_base_/datasets/s3dis_seg-3d-13class.py', - '../_base_/models/paconv_ssg.py', '../_base_/schedules/seg_cosine_150e.py', - '../_base_/default_runtime.py' -] - -# data settings -class_names = ('ceiling', 'floor', 'wall', 'beam', 'column', 'window', 'door', - 'table', 'chair', 'sofa', 'bookcase', 'board', 'clutter') -num_points = 4096 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=tuple(range(len(class_names))), - max_cat_id=13), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.0, - use_normalized_coord=True, - num_try=10000, - enlarge_size=None, - min_unique_num=num_points // 4, - eps=0.0), - dict(type='NormalizePointsColor', color_mean=None), - dict( - type='GlobalRotScaleTrans', - rot_range=[0.0, 6.283185307179586], # [0, 2 * pi] - scale_ratio_range=[0.8, 1.2], - translation_std=[0, 0, 0]), - dict( - type='RandomJitterPoints', - jitter_std=[0.01, 0.01, 0.01], - clip_range=[-0.05, 0.05]), - dict(type='RandomDropPointsColor', drop_ratio=0.2), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict(samples_per_gpu=8, train=dict(pipeline=train_pipeline)) -evaluation = dict(interval=1) - -# model settings -model = dict( - decode_head=dict( - num_classes=13, ignore_index=13, - loss_decode=dict(class_weight=None)), # S3DIS doesn't use class_weight - test_cfg=dict( - num_points=4096, - block_size=1.0, - sample_rate=0.5, - use_normalized_coord=True, - batch_size=12)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/README.md deleted file mode 100644 index b94b8492..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/README.md +++ /dev/null @@ -1,38 +0,0 @@ -# From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network - -> [From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network](https://arxiv.org/abs/1907.03670) - - - -## Abstract - -3D object detection from LiDAR point cloud is a challenging problem in 3D scene understanding and has many practical applications. 
In this paper, we extend our preliminary work PointRCNN to a novel and strong point-cloud-based 3D object detection framework, the part-aware and aggregation neural network (Part-A2 net). The whole framework consists of the part-aware stage and the part-aggregation stage. Firstly, the part-aware stage for the first time fully utilizes free-of-charge part supervisions derived from 3D ground-truth boxes to simultaneously predict high quality 3D proposals and accurate intra-object part locations. The predicted intra-object part locations within the same proposal are grouped by our new-designed RoI-aware point cloud pooling module, which results in an effective representation to encode the geometry-specific features of each 3D proposal. Then the part-aggregation stage learns to re-score the box and refine the box location by exploring the spatial relationship of the pooled intra-object part locations. Extensive experiments are conducted to demonstrate the performance improvements from each component of our proposed framework. Our Part-A2 net outperforms all existing 3D detection methods and achieves new state-of-the-art on KITTI 3D object detection dataset by utilizing only the LiDAR point cloud data. - -
- -
- -## Introduction - -We implement Part-A^2 and provide its results and checkpoints on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :------------------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py) | 3 Class | cyclic 80e | 4.1 | | 68.33 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class_20210831_022017-454a5344.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class_20210831_022017.log.json) | -| [SECFPN](./hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py) | Car | cyclic 80e | 4.0 | | 79.08 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car_20210831_022017-cb7ff621.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car_20210831_022017.log.json) | - -## Citation - -```latex -@article{shi2020points, - title={From points to parts: 3d object detection from point cloud with part-aware and part-aggregation network}, - author={Shi, Shaoshuai and Wang, Zhe and Shi, Jianping and Wang, Xiaogang and Li, Hongsheng}, - journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, - year={2020}, - publisher={IEEE} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py deleted file mode 100644 index 11662318..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py +++ /dev/null @@ -1,122 +0,0 @@ -_base_ = [ - '../_base_/schedules/cyclic_40e.py', '../_base_/default_runtime.py', - '../_base_/models/parta2.py' -] - -point_cloud_range = [0, -40, -3, 70.4, 40, 1] - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=10, Cyclist=10)), - classes=class_names, - sample_groups=dict(Car=12, Pedestrian=6, Cyclist=6)) -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0.5], - global_rot_range=[0.0, 0.0], - 
rot_range=[-0.78539816, 0.78539816]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -eval_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_train.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - box_type_3d='LiDAR', - test_mode=False)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - box_type_3d='LiDAR', - test_mode=True), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'kitti_infos_val.pkl', - split='training', - pts_prefix='velodyne_reduced', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - box_type_3d='LiDAR', - test_mode=True)) - -# Part-A2 uses a different learning rate from what SECOND uses. 
-lr = 0.001 -optimizer = dict(lr=lr) -evaluation = dict(pipeline=eval_pipeline) -find_unused_parameters = True diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py deleted file mode 100644 index 89be085d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py +++ /dev/null @@ -1,137 +0,0 @@ -_base_ = './hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py' - -point_cloud_range = [0, -40, -3, 70.4, 40, 1] # velodyne coordinates, x, y, z - -model = dict( - rpn_head=dict( - type='PartA2RPNHead', - num_classes=1, - anchor_generator=dict( - _delete_=True, - type='Anchor3DRangeGenerator', - ranges=[[0, -40.0, -1.78, 70.4, 40.0, -1.78]], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=False)), - roi_head=dict( - num_classes=1, - semantic_head=dict(num_classes=1), - bbox_head=dict(num_classes=1)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=9000, - nms_post=512, - max_num=512, - nms_thr=0.8, - score_thr=0, - use_rotate_nms=False), - rcnn=dict( - assigner=dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlaps3D', coordinate='lidar'), - pos_iou_thr=0.55, - neg_iou_thr=0.55, - min_pos_iou=0.55, - ignore_iof_thr=-1), - sampler=dict( - type='IoUNegPiecewiseSampler', - num=128, - pos_fraction=0.55, - neg_piece_fractions=[0.8, 0.2], - neg_iou_piece_thrs=[0.55, 0.1], - neg_pos_ub=-1, - add_gt_as_proposals=False, - return_iou=True), - cls_pos_thr=0.75, - cls_neg_thr=0.25)), - test_cfg=dict( - rpn=dict( - nms_pre=1024, - nms_post=100, - max_num=100, - nms_thr=0.7, - score_thr=0, - use_rotate_nms=True), - rcnn=dict( - use_rotate_nms=True, - use_raw_score=True, - nms_thr=0.01, - score_thr=0.1))) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -input_modality = dict(use_lidar=True, use_camera=False) -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - classes=class_names, - sample_groups=dict(Car=15)) -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0.5], - global_rot_range=[0.0, 0.0], - rot_range=[-0.78539816, 0.78539816]), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectNameFilter', classes=class_names), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - 
dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - train=dict(dataset=dict(pipeline=train_pipeline, classes=class_names)), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -find_unused_parameters = True diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/metafile.yml deleted file mode 100644 index d626fcb0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/parta2/metafile.yml +++ /dev/null @@ -1,41 +0,0 @@ -Collections: - - Name: Part-A^2 - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - Sparse U-Net - Paper: - URL: https://arxiv.org/abs/1907.03670 - Title: 'From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network' - README: configs/parta2/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/parta2.py#L12 - Version: v0.5.0 - -Models: - - Name: hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class - In Collection: Part-A^2 - Config: configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class.py - Metadata: - Training Memory (GB): 4.1 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 68.33 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-3class_20210831_022017-454a5344.pth - - - Name: hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car - In Collection: Part-A^2 - Config: configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py - Metadata: - Training Memory (GB): 4.0 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 79.08 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car_20210831_022017-cb7ff621.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/README.md deleted file mode 100644 index f805f53d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/README.md +++ /dev/null @@ -1,69 +0,0 @@ -# Probabilistic and Geometric Depth: Detecting Objects in Perspective - -> [Probabilistic and Geometric Depth: Detecting Objects in Perspective](https://arxiv.org/abs/2107.14160) - - - -## Abstract - -3D object detection is an important capability needed in various practical applications such as driver assistance systems. Monocular 3D detection, as a representative general setting among image-based approaches, provides a more economical solution than conventional settings relying on LiDARs but still yields unsatisfactory results. This paper first presents a systematic study on this problem. 
We observe that the current monocular 3D detection can be simplified as an instance depth estimation problem: The inaccurate instance depth blocks all the other 3D attribute predictions from improving the overall detection performance. Moreover, recent methods directly estimate the depth based on isolated instances or pixels while ignoring the geometric relations across different objects. To this end, we construct geometric relation graphs across predicted objects and use the graph to facilitate depth estimation. As the preliminary depth estimation of each instance is usually inaccurate in this ill-posed setting, we incorporate a probabilistic representation to capture the uncertainty. It provides an important indicator to identify confident predictions and further guide the depth propagation. Despite the simplicity of the basic idea, our method, PGD, obtains significant improvements on KITTI and nuScenes benchmarks, achieving 1st place out of all monocular vision-only methods while still maintaining real-time efficiency. Code and models will be released at [this https URL](https://github.com/open-mmlab/mmdetection3d). - -
- -
- -## Introduction - -PGD, also can be regarded as FCOS3D++, is a simple yet effective monocular 3D detector. It enhances the FCOS3D baseline by involving local geometric constraints and improving instance depth estimation. - -We release the code and model for both KITTI and nuScenes benchmark, which is a good supplement for the original FCOS3D baseline (only supported on nuScenes). - -For clean implementation, our preliminary release supports base models with proposed local geometric constraints and the probabilistic depth representation. We will involve the geometric graph part in the future. - -A more extensive study based on FCOS3D and PGD is on-going. Please stay tuned. - -## Results and models - -### KITTI - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP_11 / mAP_40 | Download | -| :--------------------------------------------------------------: | :-----: | :------: | :------------: | :-------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [ResNet101](./pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py) | 4x | 9.07 | | 18.33 / 13.23 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d_20211022_102608-8a97533b.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d_20211022_102608.log.json) | - -Detailed performance on KITTI 3D detection (3D/BEV) is as follows, evaluated by AP11 and AP40 metric: - -| | Easy | Moderate | Hard | -| ---------- | :-----------: | :-----------: | :-----------: | -| Car (AP11) | 24.09 / 30.11 | 18.33 / 23.46 | 16.90 / 19.33 | -| Car (AP40) | 19.27 / 26.60 | 13.23 / 18.23 | 10.65 / 15.00 | - -Note: mAP represents Car moderate 3D strict AP11 / AP40 results. Because of the limited data for pedestrians and cyclists, the detection performance for these two classes is usually unstable. Therefore, we only list car detection results here. In addition, AP40 is a more recommended metric for reference due to its much better stability. 
- -### NuScenes - -| Backbone | Lr schd | Mem (GB) | mAP | NDS | Download | -| :------------------------------------------------------------------------------: | :-----: | :------: | :--: | :--: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [ResNet101 w/ DCN](./pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py) | 1x | 9.20 | 31.7 | 39.3 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_20211116_195350-f4b5eec2.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_20211116_195350.log.json) | -| [above w/ finetune](./pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py) | 1x | 9.20 | 34.6 | 41.1 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune_20211118_093245-fd419681.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune_20211118_093245.log.json) | -| above w/ tta | 1x | 9.20 | 35.5 | 41.8 | | -| [ResNet101 w/ DCN](./pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py) | 2x | 9.20 | 33.6 | 40.9 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_20211112_125314-cb677266.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_20211112_125314.log.json) | -| [above w/ finetune](./pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py) | 2x | 9.20 | 35.8 | 42.5 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune_20211114_162135-5ec7c1cd.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune_20211114_162135.log.json) | -| above w/ tta | 2x | 9.20 | 36.8 | 43.1 | | - -## Citation - -```latex -@inproceedings{wang2021pgd, - title={{Probabilistic and Geometric Depth: Detecting} Objects in Perspective}, - author={Wang, Tai and Zhu, Xinge and Pang, Jiangmiao and Lin, Dahua}, - booktitle={Conference on Robot Learning (CoRL) 2021}, - year={2021} -} -# For the baseline version -@inproceedings{wang2021fcos3d, - title={{FCOS3D: Fully} Convolutional One-Stage Monocular 3D Object Detection}, - author={Wang, Tai and Zhu, Xinge and Pang, Jiangmiao and Lin, Dahua}, - booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops}, - year={2021} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/metafile.yml deleted file mode 
100644 index d7d66265..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/metafile.yml +++ /dev/null @@ -1,81 +0,0 @@ -Collections: - - Name: PGD - Metadata: - Training Data: KITTI - Training Techniques: - - SGD - Training Resources: 4x TITAN XP - Architecture: - - PGDHead - Paper: - URL: https://arxiv.org/abs/2107.14160 - Title: 'Probabilistic and Geometric Depth: Detecting Objects in Perspective' - README: configs/pgd/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/v1.0.0.dev0/mmdet3d/models/dense_heads/pgd_head.py#17 - Version: v1.0.0 - -Models: - - Name: pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d - In Collection: PGD - Config: configs/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py - Metadata: - Training Memory (GB): 9.1 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 18.33 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d_20211022_102608-8a97533b.pth - - - Name: pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d - In Collection: PGD - Config: configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py - Metadata: - Training Memory (GB): 9.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 31.7 - NDS: 39.3 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_20211116_195350-f4b5eec2.pth - - - Name: pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune - In Collection: PGD - Config: configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py - Metadata: - Training Memory (GB): 9.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 34.6 - NDS: 41.1 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune_20211118_093245-fd419681.pth - - - Name: pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d - In Collection: PGD - Config: configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py - Metadata: - Training Memory (GB): 9.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 33.6 - NDS: 40.9 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_20211112_125314-cb677266.pth - - - Name: pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune - In Collection: PGD - Config: configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py - Metadata: - Training Memory (GB): 9.2 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 35.8 - NDS: 42.5 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune_20211114_162135-5ec7c1cd.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py deleted file mode 100644 index 37b50493..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py +++ /dev/null @@ -1,107 +0,0 @@ -_base_ = [ - '../_base_/datasets/nus-mono3d.py', 
'../_base_/models/pgd.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - backbone=dict( - dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, False, True, True)), - bbox_head=dict( - pred_bbox2d=True, - group_reg_dims=(2, 1, 3, 1, 2, - 4), # offset, depth, size, rot, velo, bbox2d - reg_branch=( - (256, ), # offset - (256, ), # depth - (256, ), # size - (256, ), # rot - (), # velo - (256, ) # bbox2d - ), - loss_depth=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - bbox_coder=dict( - type='PGDBBoxCoder', - base_depths=((31.99, 21.12), (37.15, 24.63), (39.69, 23.97), - (40.91, 26.34), (34.16, 20.11), (22.35, 13.70), - (24.28, 16.05), (27.26, 15.50), (20.61, 13.68), - (22.74, 15.01)), - base_dims=((4.62, 1.73, 1.96), (6.93, 2.83, 2.51), - (12.56, 3.89, 2.94), (11.22, 3.50, 2.95), - (6.68, 3.21, 2.85), (6.68, 3.21, 2.85), - (2.11, 1.46, 0.78), (0.73, 1.77, 0.67), - (0.41, 1.08, 0.41), (0.50, 0.99, 2.52)), - code_size=9)), - # set weight 1.0 for base 7 dims (offset, depth, size, rot) - # 0.05 for 2-dim velocity and 0.2 for 4-dim 2D distance targets - train_cfg=dict(code_weight=[ - 1.0, 1.0, 0.2, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05, 0.2, 0.2, 0.2, 0.2 - ]), - test_cfg=dict(nms_pre=1000, nms_thr=0.8, score_thr=0.01, max_per_img=200)) - -class_names = [ - 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', - 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' -] -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=True, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='Resize', img_scale=(1600, 900), keep_ratio=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'attr_labels', 'gt_bboxes_3d', - 'gt_labels_3d', 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - scale_factor=1.0, - flip=False, - transforms=[ - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict( - lr=0.004, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=1.0 / 3, - step=[8, 11]) -total_epochs = 12 -evaluation = dict(interval=4) -runner = dict(max_epochs=total_epochs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py deleted file mode 100644 index f5d64232..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d_finetune.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py' -# model settings -model = dict( - train_cfg=dict(code_weight=[ - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05, 0.2, 0.2, 0.2, 0.2 - ])) -# optimizer -optimizer = dict(lr=0.002) -load_from = 'work_dirs/pgd_nus_benchmark_1x/latest.pth' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py deleted file mode 100644 index 2dd59575..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = './pgd_r101_caffe_fpn_gn-head_2x16_1x_nus-mono3d.py' -# learning policy -lr_config = dict(step=[16, 22]) -total_epochs = 24 -runner = dict(max_epochs=total_epochs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py deleted file mode 100644 index 19a3d630..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d_finetune.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './pgd_r101_caffe_fpn_gn-head_2x16_2x_nus-mono3d.py' -# model settings -model = dict( - train_cfg=dict(code_weight=[ - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.05, 0.05, 0.2, 0.2, 0.2, 0.2 - ])) -# optimizer -optimizer = dict(lr=0.002) -load_from = 'work_dirs/pgd_nus_benchmark_2x/latest.pth' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py deleted file mode 100644 index 832b34e6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pgd/pgd_r101_caffe_fpn_gn-head_3x4_4x_kitti-mono3d.py +++ /dev/null @@ -1,127 +0,0 @@ -_base_ = [ - '../_base_/datasets/kitti-mono3d.py', '../_base_/models/pgd.py', - '../_base_/schedules/mmdet_schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - backbone=dict(frozen_stages=0), - neck=dict(start_level=0, num_outs=4), - bbox_head=dict( - num_classes=3, - bbox_code_size=7, - pred_attrs=False, - pred_velo=False, - pred_bbox2d=True, - use_onlyreg_proj=True, - strides=(4, 8, 16, 32), - regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 1e8)), - group_reg_dims=(2, 1, 3, 1, 16, - 4), # offset, depth, size, rot, kpts, bbox2d - reg_branch=( - (256, ), # offset - (256, ), # depth - (256, ), # size - (256, ), # rot - (256, ), # kpts - (256, ) # bbox2d - ), - centerness_branch=(256, ), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_centerness=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - use_depth_classifier=True, - depth_branch=(256, ), - depth_range=(0, 70), - depth_unit=10, - division='uniform', - depth_bins=8, - pred_keypoints=True, - weight_dim=1, - loss_depth=dict( - type='UncertainSmoothL1Loss', alpha=1.0, 
beta=3.0, - loss_weight=1.0), - bbox_coder=dict( - type='PGDBBoxCoder', - base_depths=((28.01, 16.32), ), - base_dims=((0.8, 1.73, 0.6), (1.76, 1.73, 0.6), (3.9, 1.56, 1.6)), - code_size=7)), - # set weight 1.0 for base 7 dims (offset, depth, size, rot) - # 0.2 for 16-dim keypoint offsets and 1.0 for 4-dim 2D distance targets - train_cfg=dict(code_weight=[ - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, - 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 1.0, 1.0, 1.0, 1.0 - ]), - test_cfg=dict(nms_pre=100, nms_thr=0.05, score_thr=0.001, max_per_img=20)) - -class_names = ['Pedestrian', 'Cyclist', 'Car'] -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=False, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='Resize', img_scale=(1242, 375), keep_ratio=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'gt_bboxes_3d', 'gt_labels_3d', - 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - scale_factor=1.0, - flip=False, - transforms=[ - dict(type='RandomFlip3D'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=3, - workers_per_gpu=3, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict( - lr=0.001, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=1.0 / 3, - step=[32, 44]) -total_epochs = 48 -runner = dict(type='EpochBasedRunner', max_epochs=48) -evaluation = dict(interval=2) -checkpoint_config = dict(interval=8) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/README.md deleted file mode 100644 index eddbdc72..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/README.md +++ /dev/null @@ -1,47 +0,0 @@ -# PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud - -> [PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud](https://arxiv.org/abs/1812.04244) - - - -## Abstract - -In this paper, we propose PointRCNN for 3D object detection from raw point cloud. The whole framework is composed of two stages: stage-1 for the bottom-up 3D proposal generation and stage-2 for refining proposals in the canonical coordinates to obtain the final detection results. Instead of generating proposals from RGB image or projecting point cloud to bird's view or voxels as previous methods do, our stage-1 sub-network directly generates a small number of high-quality 3D proposals from point cloud in a bottom-up manner via segmenting the point cloud of the whole scene into foreground points and background. 
The stage-2 sub-network transforms the pooled points of each proposal to canonical coordinates to learn better local spatial features, which is combined with global semantic features of each point learned in stage-1 for accurate box refinement and confidence prediction. Extensive experiments on the 3D detection benchmark of KITTI dataset show that our proposed architecture outperforms state-of-the-art methods with remarkable margins by using only point cloud as input. - -
- -
- -## Introduction - -We implement PointRCNN and provide the result with checkpoints on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :-------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++](./point_rcnn_2x8_kitti-3d-3classes.py) | 3 Class | cyclic 40e | 4.6 | | 70.83 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/point_rcnn/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/point_rcnn/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.log.json) | - -Note: mAP represents AP11 results on 3 Class under the moderate setting. - -Detailed performance on KITTI 3D detection (3D) is as follows, evaluated by AP11 metric: - -| | Easy | Moderate | Hard | -| ---------- | :---: | :------: | :---: | -| Car | 89.13 | 78.72 | 78.24 | -| Pedestrian | 65.81 | 59.57 | 52.75 | -| Cyclist | 93.51 | 74.19 | 70.73 | - -## Citation - -```latex -@inproceedings{Shi_2019_CVPR, - title = {PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud}, - author = {Shi, Shaoshuai and Wang, Xiaogang and Li, Hongsheng}, - booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, - month = {June}, - year = {2019} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/metafile.yml deleted file mode 100644 index a7627cee..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/metafile.yml +++ /dev/null @@ -1,29 +0,0 @@ -Collections: - - Name: PointRCNN - Metadata: - Training Data: KITTI - Training Techniques: - - AdamW - Training Resources: 8x Titan XP GPUs - Architecture: - - PointNet++ - Paper: - URL: https://arxiv.org/abs/1812.04244 - Title: 'PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud' - README: configs/point_rcnn/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/v1.0.0.dev0/mmdet3d/models/detectors/point_rcnn.py#L8 - Version: v1.0.0 - -Models: - - Name: point_rcnn_2x8_kitti-3d-3classes.py - In Collection: PointRCNN - Config: configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py - Metadata: - Training Memory (GB): 4.6 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 70.83 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/point_rcnn/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py deleted file mode 100644 index 1344aca5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py +++ /dev/null @@ -1,94 +0,0 @@ -_base_ = [ - '../_base_/datasets/kitti-3d-car.py', '../_base_/models/point_rcnn.py', - '../_base_/default_runtime.py', '../_base_/schedules/cyclic_40e.py' -] - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' 
-class_names = ['Car', 'Pedestrian', 'Cyclist'] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -input_modality = dict(use_lidar=True, use_camera=False) - -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=5, Cyclist=5)), - sample_groups=dict(Car=20, Pedestrian=15, Cyclist=15), - classes=class_names) - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectSample', db_sampler=db_sampler), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='ObjectNoise', - num_try=100, - translation_std=[1.0, 1.0, 0.5], - global_rot_range=[0.0, 0.0], - rot_range=[-0.78539816, 0.78539816]), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointSample', num_points=16384, sample_range=40.0), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointSample', num_points=16384, sample_range=40.0), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict(pipeline=train_pipeline, classes=class_names)), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -# optimizer -lr = 0.001 # max learning rate -optimizer = dict(lr=lr, betas=(0.95, 0.85)) -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=80) -evaluation = dict(interval=2) -# yapf:disable -log_config = dict( - interval=30, - hooks=[ - dict(type='TextLoggerHook'), - dict(type='TensorboardLoggerHook') - ]) -# yapf:enable diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/README.md deleted file mode 100644 index c9204eb1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space - -> [PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space](https://arxiv.org/abs/1706.02413) - - - -## Abstract - -Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. 
However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds. - -
- -
- -## Introduction - -We implement PointNet++ and provide the result and checkpoints on ScanNet and S3DIS datasets. - -**Notice**: The original PointNet++ paper used step learning rate schedule. We discovered that cosine schedule achieves much better results and adopt it in our implementations. We also use a larger `weight_decay` factor because we find it consistently improves the performance. - -## Results and models - -### ScanNet - -| Method | Input | Lr schd | Mem (GB) | Inf time (fps) | mIoU (Val set) | mIoU (Test set) | Download | -| :-------------------------------------------------------------------------------------: | :-------: | :---------: | :------: | :------------: | :------------: | :-------------: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| [PointNet++ (SSG)](./pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py) | XYZ | cosine 200e | 1.9 | | 53.91 | | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143628-4e341a48.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143628.log.json) | -| [PointNet++ (SSG)](./pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py) | XYZ+Color | cosine 200e | 1.9 | | 54.44 | | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143644-ee73704a.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143644.log.json) | -| [PointNet++ (MSG)](./pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py) | XYZ | cosine 250e | 2.4 | | 54.26 | | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class_20210514_143838-b4a3cf89.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class_20210514_143838.log.json) | -| [PointNet++ (MSG)](./pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py) | XYZ+Color | cosine 250e | 2.4 | | 55.05 | | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class_20210514_144009-24477ab1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class_20210514_144009.log.json) | - -**Notes:** - -- The original PointNet++ paper conducted 
experiments on the ScanNet V1 dataset, while later point cloud segmentor papers often used ScanNet V2. Following common practice, we report results on the ScanNet V2 dataset. - -- Since ScanNet dataset doesn't provide ground-truth labels for the test set, users can only evaluate test set performance by submitting to its online benchmark [website](http://kaldir.vc.in.tum.de/scannet_benchmark/). However, users are only allowed to submit once every two weeks. Therefore, we currently report val set mIoU. Test set performance may be added in the future. - -- To generate submission file for ScanNet online benchmark, you need to modify the ScanNet dataset's [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/datasets/scannet_seg-3d-20class.py#L126). Change `ann_file=data_root + 'scannet_infos_val.pkl'` to `ann_file=data_root + 'scannet_infos_test.pkl'`, and then simply run: - - ```shell - python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --format-only --options 'txt_prefix=exps/pointnet2_scannet_results' - ``` - - This will save the prediction results as `txt` files in `exps/pointnet2_scannet_results/`. Then, go to this folder and zip all files into `pn2_scannet.zip`. Now you can submit it to the online benchmark and wait for the test set result. More instructions can be found at their official [website](http://kaldir.vc.in.tum.de/scannet_benchmark/documentation#submission-policy). - -### S3DIS - -| Method | Split | Lr schd | Mem (GB) | Inf time (fps) | mIoU (Val set) | Download | -| :-------------------------------------------------------------------------: | :----: | :--------: | :------: | :------------: | :------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++ (SSG)](./pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py) | Area_5 | cosine 50e | 3.6 | | 56.93 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class_20210514_144205-995d0119.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class_20210514_144205.log.json) | -| [PointNet++ (MSG)](./pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py) | Area_5 | cosine 80e | 3.6 | | 58.04 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class_20210514_144307-b2059817.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class_20210514_144307.log.json) | - -**Notes:** - -- We use XYZ+Color+Normalized_XYZ as input in all the experiments on S3DIS datasets. -- `Area_5` Split means training the model on Area_1, 2, 3, 4, 6 and testing on Area_5. 
- -## Indeterminism - -Since PointNet++ testing adopts sliding patch inference which involves random point sampling, and the test script uses fixed random seeds while the random seeds of validation in training are not fixed, the test results may be slightly different from the results reported above. - -## Citation - -```latex -@inproceedings{qi2017pointnet++, - title={PointNet++ deep hierarchical feature learning on point sets in a metric space}, - author={Qi, Charles R and Yi, Li and Su, Hao and Guibas, Leonidas J}, - booktitle={Proceedings of the 31st International Conference on Neural Information Processing Systems}, - pages={5105--5114}, - year={2017} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/metafile.yml deleted file mode 100644 index e7e51759..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/metafile.yml +++ /dev/null @@ -1,94 +0,0 @@ -Collections: - - Name: PointNet++ - Metadata: - Training Techniques: - - Adam - Training Resources: 2x Titan XP GPUs - Architecture: - - PointNet++ - Paper: - URL: https://arxiv.org/abs/1706.02413 - Title: 'PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space' - README: configs/pointnet2/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/backbones/pointnet2_sa_ssg.py#L12 - Version: v0.14.0 - -Models: - - Name: pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 1.9 - Results: - - Task: 3D Semantic Segmentation - Dataset: ScanNet - Metrics: - mIoU: 53.91 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143628-4e341a48.pth - - - Name: pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 1.9 - Results: - - Task: 3D Semantic Segmentation - Dataset: ScanNet - Metrics: - mIoU: 54.44 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class_20210514_143644-ee73704a.pth - - - Name: pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 2.4 - Results: - - Task: 3D Semantic Segmentation - Dataset: ScanNet - Metrics: - mIoU: 54.26 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class_20210514_143838-b4a3cf89.pth - - - Name: pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 2.4 - Results: - - Task: 3D Semantic Segmentation - Dataset: ScanNet - Metrics: - mIoU: 55.05 - 
Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class_20210514_144009-24477ab1.pth - - - Name: pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py - Metadata: - Training Data: S3DIS - Training Memory (GB): 3.6 - Results: - - Task: 3D Semantic Segmentation - Dataset: S3DIS - Metrics: - mIoU: 56.93 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class_20210514_144205-995d0119.pth - - - Name: pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py - In Collection: PointNet++ - Config: configs/pointnet/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py - Metadata: - Training Data: S3DIS - Training Memory (GB): 3.6 - Results: - - Task: 3D Semantic Segmentation - Dataset: S3DIS - Metrics: - mIoU: 58.04 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class_20210514_144307-b2059817.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py deleted file mode 100644 index fbad158d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_16x2_cosine_250e_scannet_seg-3d-20class.py +++ /dev/null @@ -1,36 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet_seg-3d-20class.py', - '../_base_/models/pointnet2_msg.py', - '../_base_/schedules/seg_cosine_200e.py', '../_base_/default_runtime.py' -] - -# data settings -data = dict(samples_per_gpu=16) -evaluation = dict(interval=5) - -# model settings -model = dict( - decode_head=dict( - num_classes=20, - ignore_index=20, - # `class_weight` is generated in data pre-processing, saved in - # `data/scannet/seg_info/train_label_weight.npy` - # you can copy paste the values here, or input the file path as - # `class_weight=data/scannet/seg_info/train_label_weight.npy` - loss_decode=dict(class_weight=[ - 2.389689, 2.7215734, 4.5944676, 4.8543367, 4.096086, 4.907941, - 4.690836, 4.512031, 4.623311, 4.9242644, 5.358117, 5.360071, - 5.019636, 4.967126, 5.3502126, 5.4023647, 5.4027233, 5.4169416, - 5.3954206, 4.6971426 - ])), - test_cfg=dict( - num_points=8192, - block_size=1.5, - sample_rate=0.5, - use_normalized_coord=False, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=5) -# PointNet2-MSG needs longer training time than PointNet2-SSG -runner = dict(type='EpochBasedRunner', max_epochs=250) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py deleted file mode 100644 index ed1e3c43..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_16x2_cosine_80e_s3dis_seg-3d-13class.py +++ /dev/null @@ -1,27 +0,0 @@ -_base_ = [ - '../_base_/datasets/s3dis_seg-3d-13class.py', - '../_base_/models/pointnet2_msg.py', - '../_base_/schedules/seg_cosine_50e.py', '../_base_/default_runtime.py' 
-] - -# data settings -data = dict(samples_per_gpu=16) -evaluation = dict(interval=2) - -# model settings -model = dict( - backbone=dict(in_channels=9), # [xyz, rgb, normalized_xyz] - decode_head=dict( - num_classes=13, ignore_index=13, - loss_decode=dict(class_weight=None)), # S3DIS doesn't use class_weight - test_cfg=dict( - num_points=4096, - block_size=1.0, - sample_rate=0.5, - use_normalized_coord=True, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=2) -# PointNet2-MSG needs longer training time than PointNet2-SSG -runner = dict(type='EpochBasedRunner', max_epochs=80) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py deleted file mode 100644 index 2cb7ee18..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_msg_xyz-only_16x2_cosine_250e_scannet_seg-3d-20class.py +++ /dev/null @@ -1,166 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet_seg-3d-20class.py', - '../_base_/models/pointnet2_msg.py', - '../_base_/schedules/seg_cosine_200e.py', '../_base_/default_runtime.py' -] - -# dataset settings -# in this setting, we only use xyz as network input -# so we need to re-write all the data pipeline -dataset_type = 'ScanNetSegDataset' -data_root = './data/scannet/' -class_names = ('wall', 'floor', 'cabinet', 'bed', 'chair', 'sofa', 'table', - 'door', 'window', 'bookshelf', 'picture', 'counter', 'desk', - 'curtain', 'refrigerator', 'showercurtrain', 'toilet', 'sink', - 'bathtub', 'otherfurniture') -num_points = 8192 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), # only load xyz coordinates - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.5, - ignore_index=len(class_names), - use_normalized_coord=False, - enlarge_size=0.2, - min_unique_num=None), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - # a wrapper in order to successfully call test function - # actually we don't perform test-time-aug - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.0, - flip_ratio_bev_vertical=0.0), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -# we need to load gt seg_mask! 
-eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict( - samples_per_gpu=16, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - ignore_index=len(class_names), - scene_idxs=data_root + 'seg_info/train_resampled_scene_idxs.npy'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names)), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names))) - -evaluation = dict(pipeline=eval_pipeline, interval=5) - -# model settings -model = dict( - backbone=dict(in_channels=3), # only [xyz] - decode_head=dict( - num_classes=20, - ignore_index=20, - # `class_weight` is generated in data pre-processing, saved in - # `data/scannet/seg_info/train_label_weight.npy` - # you can copy paste the values here, or input the file path as - # `class_weight=data/scannet/seg_info/train_label_weight.npy` - loss_decode=dict(class_weight=[ - 2.389689, 2.7215734, 4.5944676, 4.8543367, 4.096086, 4.907941, - 4.690836, 4.512031, 4.623311, 4.9242644, 5.358117, 5.360071, - 5.019636, 4.967126, 5.3502126, 5.4023647, 5.4027233, 5.4169416, - 5.3954206, 4.6971426 - ])), - test_cfg=dict( - num_points=8192, - block_size=1.5, - sample_rate=0.5, - use_normalized_coord=False, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=5) -# PointNet2-MSG needs longer training time than PointNet2-SSG -runner = dict(type='EpochBasedRunner', max_epochs=250) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py deleted file mode 100644 index b5261077..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_16x2_cosine_200e_scannet_seg-3d-20class.py +++ /dev/null @@ -1,34 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet_seg-3d-20class.py', - '../_base_/models/pointnet2_ssg.py', - '../_base_/schedules/seg_cosine_200e.py', '../_base_/default_runtime.py' -] - -# data settings -data = dict(samples_per_gpu=16) -evaluation = dict(interval=5) - -# model settings -model = dict( - decode_head=dict( - num_classes=20, - ignore_index=20, - # `class_weight` is generated in data pre-processing, saved in - # `data/scannet/seg_info/train_label_weight.npy` - # you can copy paste the values here, or input the file path as - # `class_weight=data/scannet/seg_info/train_label_weight.npy` - loss_decode=dict(class_weight=[ - 2.389689, 2.7215734, 4.5944676, 4.8543367, 4.096086, 4.907941, - 4.690836, 4.512031, 4.623311, 4.9242644, 5.358117, 
5.360071, - 5.019636, 4.967126, 5.3502126, 5.4023647, 5.4027233, 5.4169416, - 5.3954206, 4.6971426 - ])), - test_cfg=dict( - num_points=8192, - block_size=1.5, - sample_rate=0.5, - use_normalized_coord=False, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=5) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py deleted file mode 100644 index b14100d1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_16x2_cosine_50e_s3dis_seg-3d-13class.py +++ /dev/null @@ -1,25 +0,0 @@ -_base_ = [ - '../_base_/datasets/s3dis_seg-3d-13class.py', - '../_base_/models/pointnet2_ssg.py', - '../_base_/schedules/seg_cosine_50e.py', '../_base_/default_runtime.py' -] - -# data settings -data = dict(samples_per_gpu=16) -evaluation = dict(interval=2) - -# model settings -model = dict( - backbone=dict(in_channels=9), # [xyz, rgb, normalized_xyz] - decode_head=dict( - num_classes=13, ignore_index=13, - loss_decode=dict(class_weight=None)), # S3DIS doesn't use class_weight - test_cfg=dict( - num_points=4096, - block_size=1.0, - sample_rate=0.5, - use_normalized_coord=True, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=2) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py deleted file mode 100644 index 9dff449c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointnet2/pointnet2_ssg_xyz-only_16x2_cosine_200e_scannet_seg-3d-20class.py +++ /dev/null @@ -1,164 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet_seg-3d-20class.py', - '../_base_/models/pointnet2_ssg.py', - '../_base_/schedules/seg_cosine_200e.py', '../_base_/default_runtime.py' -] - -# dataset settings -# in this setting, we only use xyz as network input -# so we need to re-write all the data pipeline -dataset_type = 'ScanNetSegDataset' -data_root = './data/scannet/' -class_names = ('wall', 'floor', 'cabinet', 'bed', 'chair', 'sofa', 'table', - 'door', 'window', 'bookshelf', 'picture', 'counter', 'desk', - 'curtain', 'refrigerator', 'showercurtrain', 'toilet', 'sink', - 'bathtub', 'otherfurniture') -num_points = 8192 -train_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), # only load xyz coordinates - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='IndoorPatchPointSample', - num_points=num_points, - block_size=1.5, - ignore_index=len(class_names), - use_normalized_coord=False, - enlarge_size=0.2, - min_unique_num=None), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] -test_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - # a wrapper in order to successfully call test function - # actually we don't 
perform test-time-aug - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.0, - flip_ratio_bev_vertical=0.0), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -# construct a pipeline for data and gt loading in show function -# please keep its loading function consistent with test_pipeline (e.g. client) -# we need to load gt seg_mask! -eval_pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39), - max_cat_id=40), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=class_names), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) -] - -data = dict( - samples_per_gpu=16, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_train.pkl', - pipeline=train_pipeline, - classes=class_names, - test_mode=False, - ignore_index=len(class_names), - scene_idxs=data_root + 'seg_info/train_resampled_scene_idxs.npy'), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names)), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'scannet_infos_val.pkl', - pipeline=test_pipeline, - classes=class_names, - test_mode=True, - ignore_index=len(class_names))) - -evaluation = dict(pipeline=eval_pipeline, interval=5) - -# model settings -model = dict( - backbone=dict(in_channels=3), # only [xyz] - decode_head=dict( - num_classes=20, - ignore_index=20, - # `class_weight` is generated in data pre-processing, saved in - # `data/scannet/seg_info/train_label_weight.npy` - # you can copy paste the values here, or input the file path as - # `class_weight=data/scannet/seg_info/train_label_weight.npy` - loss_decode=dict(class_weight=[ - 2.389689, 2.7215734, 4.5944676, 4.8543367, 4.096086, 4.907941, - 4.690836, 4.512031, 4.623311, 4.9242644, 5.358117, 5.360071, - 5.019636, 4.967126, 5.3502126, 5.4023647, 5.4027233, 5.4169416, - 5.3954206, 4.6971426 - ])), - test_cfg=dict( - num_points=8192, - block_size=1.5, - sample_rate=0.5, - use_normalized_coord=False, - batch_size=24)) - -# runtime settings -checkpoint_config = dict(interval=5) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/README.md deleted file mode 100644 index 62090972..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/README.md +++ /dev/null @@ -1,78 +0,0 @@ -# PointPillars: Fast Encoders for Object Detection from Point Clouds - -> [PointPillars: Fast Encoders for Object Detection from Point Clouds](https://arxiv.org/abs/1812.05784) - - - -## Abstract - -Object detection in point clouds is an important aspect of many robotics applications such as autonomous driving. 
In this paper we consider the problem of encoding a point cloud into a format appropriate for a downstream detection pipeline. Recent literature suggests two types of encoders; fixed encoders tend to be fast but sacrifice accuracy, while encoders that are learned from data are more accurate, but slower. In this work we propose PointPillars, a novel encoder which utilizes PointNets to learn a representation of point clouds organized in vertical columns (pillars). While the encoded features can be used with any standard 2D convolutional detection architecture, we further propose a lean downstream network. Extensive experimentation shows that PointPillars outperforms previous encoders with respect to both speed and accuracy by a large margin. Despite only using lidar, our full detection pipeline significantly outperforms the state of the art, even among fusion methods, with respect to both the 3D and bird's eye view KITTI benchmarks. This detection performance is achieved while running at 62 Hz: a 2 - 4 fold runtime improvement. A faster version of our method matches the state of the art at 105 Hz. These benchmarks suggest that PointPillars is an appropriate encoding for object detection in point clouds. - -
- -
- -## Introduction - -We implement PointPillars and provide the results and checkpoints on KITTI, nuScenes, Lyft and Waymo datasets. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | AP | Download | -| :------------------------------------------------------------: | :-----: | :---------: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py) | Car | cyclic 160e | 5.4 | | 77.6 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606-d42d15ed.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606.log.json) | -| [SECFPN](./hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py) | 3 Class | cyclic 160e | 5.5 | | 64.07 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306-37dc2420.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306.log.json) | - -### nuScenes - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :---------------------------------------------------------------------: | :-----: | :------: | :------------: | :---: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.4 | | 34.33 | 49.1 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20210826_225857-f19d00a3.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20210826_225857.log.json) | -| [SECFPN (FP16)](./hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py) | 2x | 8.37 | | 35.19 | 50.27 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d_20201020_222626-c3f0483e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d_20201020_222626.log.json) | -| [FPN](./hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.3 | | 39.7 | 53.2 | 
[model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20210826_104936-fca299c1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20210826_104936.log.json) | -| [FPN (FP16)](./hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py) | 2x | 8.40 | | 39.26 | 53.26 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d_20201021_120719-269f9dd6.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d_20201021_120719.log.json) | - -### Lyft - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | Private Score | Public Score | Download | -| :----------------------------------------------------------: | :-----: | :------: | :------------: | :-----------: | :----------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py) | 2x | 12.2 | | 13.8 | 14.1 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210829_100455-82b81c39.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210829_100455.log.json) | -| [FPN](./hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py) | 2x | 9.2 | | 14.8 | 15.0 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210822_095429-0b3d6196.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210822_095429.log.json) | - -### Waymo - -| Backbone | Load Interval | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP@L1 | mAPH@L1 | mAP@L2 | **mAPH@L2** | Download | -| :-----------------------------------------------------------------: | :-----------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :----: | :---------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py) | 5 | Car | 2x | 7.76 | | 70.2 | 69.6 | 62.6 | 62.1 | 
[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car_20200901_204315-302fc3e7.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car_20200901_204315.log.json) | -| [SECFPN](./hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py) | 5 | 3 Class | 2x | 8.12 | | 64.7 | 57.6 | 58.4 | 52.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class_20200831_204144-d1a706b1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class_20200831_204144.log.json) | -| above @ Car | | | 2x | 8.12 | | 68.5 | 67.9 | 60.1 | 59.6 | | -| above @ Pedestrian | | | 2x | 8.12 | | 67.8 | 50.6 | 59.6 | 44.3 | | -| above @ Cyclist | | | 2x | 8.12 | | 57.7 | 54.4 | 55.5 | 52.4 | | -| [SECFPN](./hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py) | 1 | Car | 2x | 7.76 | | 72.1 | 71.5 | 63.6 | 63.1 | [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.log.json) | -| [SECFPN](./hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py) | 1 | 3 Class | 2x | 8.12 | | 68.8 | 63.3 | 62.6 | 57.6 | [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.log.json) | -| above @ Car | | | 2x | 8.12 | | 71.6 | 71.0 | 63.1 | 62.5 | | -| above @ Pedestrian | | | 2x | 8.12 | | 70.6 | 56.7 | 62.9 | 50.2 | | -| above @ Cyclist | | | 2x | 8.12 | | 64.4 | 62.3 | 61.9 | 59.9 | | - -#### Note: - -- **Metric**: For model trained with 3 classes, the average APH@L2 (mAPH@L2) of all the categories is reported and used to rank the model. For model trained with only 1 class, the APH@L2 is reported and used to rank the model. -- **Data Split**: Here we provide several baselines for waymo dataset, among which D5 means that we divide the dataset into 5 folds and only use one fold for efficient experiments. Using the complete dataset can boost the performance a lot, especially for the detection of cyclist and pedestrian, where more than 5 mAP or mAPH improvement can be expected. -- **Implementation Details**: We basically follow the implementation in the [paper](https://arxiv.org/pdf/1912.04838.pdf) in terms of the network architecture (having a - stride of 1 for the first convolutional block). Different settings of voxelization, data augmentation and hyper parameters make these baselines outperform those in the paper by about 7 mAP for car and 4 mAP for pedestrian with only a subset of the whole dataset. All of these results are achieved without bells-and-whistles, e.g. ensemble, multi-scale training and test augmentation. -- **License Aggrement**: To comply the [license agreement of Waymo dataset](https://waymo.com/open/terms/), the pre-trained models on Waymo dataset are not released. We still release the training log as a reference to ease the future research. -- `FP16` means Mixed Precision (FP16) is adopted in training. 
With mixed precision training, we can train PointPillars with nuScenes dataset on 8 Titan XP GPUS with batch size of 2. This will cause OOM error without mixed precision training. The loss scale for PointPillars on nuScenes dataset is specifically tuned to avoid the loss to be Nan. We find 32 is more stable than 512, though loss scale 32 still cause Nan sometimes. - -## Citation - -```latex -@inproceedings{lang2019pointpillars, - title={Pointpillars: Fast encoders for object detection from point clouds}, - author={Lang, Alex H and Vora, Sourabh and Caesar, Holger and Zhou, Lubing and Yang, Jiong and Beijbom, Oscar}, - booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, - pages={12697--12705}, - year={2019} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py deleted file mode 100644 index 6cc3e2d1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_lyft.py', - '../_base_/datasets/lyft-3d.py', '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py deleted file mode 100644 index 2c6ba49b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py deleted file mode 100644 index 9764aa33..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py' -data = dict(samples_per_gpu=2, workers_per_gpu=2) -# fp16 settings, the loss scale is specifically tuned to avoid Nan -fp16 = dict(loss_scale=32.) 
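For reference, the FP16 variants removed above are thin overrides of their FP32 counterparts: they only shrink the per-GPU batch size and switch on mixed precision with a hand-tuned loss scale. Below is a minimal sketch of such an override in the same config style; the static value of 32 is the one the note above reports as more stable than 512, while the `'dynamic'` alternative is an assumption based on mmcv's `Fp16OptimizerHook` and is not something this patch configures.

```python
# Sketch of an FP16 override config in the style of the deleted
# hv_pointpillars_*_fp16_* files. Values are illustrative, not tuned here.
_base_ = './hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py'

# Per-GPU batch size is set to 2 so the model fits on 12 GB cards (Titan XP),
# as the note above explains; the FP32 config uses 4 samples per GPU.
data = dict(samples_per_gpu=2, workers_per_gpu=2)

# Static loss scale tuned to keep the loss from turning into NaN; 32 was found
# more stable than 512 for PointPillars on nuScenes, though NaN can still occur.
fp16 = dict(loss_scale=32.)

# Assumed alternative: mmcv's Fp16OptimizerHook also accepts a dynamic scaler,
# e.g. fp16 = dict(loss_scale='dynamic')
```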
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_range100_2x8_2x_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_range100_2x8_2x_lyft-3d.py deleted file mode 100644 index 57c90db7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_fpn_sbn-all_range100_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_range100_lyft.py', - '../_base_/datasets/range100_lyft-3d.py', - '../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py deleted file mode 100644 index d8aad2fb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py +++ /dev/null @@ -1,81 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_secfpn_kitti.py', - '../_base_/datasets/kitti-3d-3class.py', - '../_base_/schedules/cyclic_40e.py', '../_base_/default_runtime.py' -] - -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] -# dataset settings -data_root = 'data/kitti/' -class_names = ['Pedestrian', 'Cyclist', 'Car'] -# PointPillars adopted a different sampling strategies among classes -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=5, Cyclist=5)), - classes=class_names, - sample_groups=dict(Car=15, Pedestrian=15, Cyclist=15)) - -# PointPillars uses different augmentation hyper parameters -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler, use_ground_plane=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - train=dict(dataset=dict(pipeline=train_pipeline, classes=class_names)), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -# In practice PointPillars also uses a different schedule -# optimizer -lr = 0.001 -optimizer = dict(lr=lr) -# max_norm=35 is slightly better than 10 for 
PointPillars in the earlier -# development of the codebase thus we keep the setting. But we does not -# specifically tune this parameter. -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -# PointPillars usually need longer schedule than second, we simply double -# the training schedule. Do remind that since we use RepeatDataset and -# repeat factor is 2, so we actually train 160 epochs. -runner = dict(max_epochs=80) - -# Use evaluation interval=2 reduce the number of evaluation timese -evaluation = dict(interval=2) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py deleted file mode 100644 index 3537ce3e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py +++ /dev/null @@ -1,87 +0,0 @@ -# model settings -_base_ = './hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py' - -point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1] -model = dict( - bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[[0, -39.68, -1.78, 69.12, 39.68, -1.78]], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=True)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - allowed_border=0, - pos_weight=-1, - debug=False)) - -# dataset settings -dataset_type = 'KittiDataset' -data_root = 'data/kitti/' -class_names = ['Car'] -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'kitti_dbinfos_train.pkl', - rate=1.0, - prepare=dict(filter_by_difficulty=[-1], filter_by_min_points=dict(Car=5)), - sample_groups=dict(Car=15), - classes=class_names) - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler, use_ground_plane=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - train=dict( - type='RepeatDataset', - times=2, - dataset=dict(pipeline=train_pipeline, classes=class_names)), - val=dict(pipeline=test_pipeline, classes=class_names), - 
test=dict(pipeline=test_pipeline, classes=class_names)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py deleted file mode 100644 index 1a0400eb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,43 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_lyft.py', - '../_base_/datasets/lyft-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - pts_neck=dict( - _delete_=True, - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[[-80, -80, -1.0715024, 80, 80, -1.0715024], - [-80, -80, -0.3033737, 80, 80, -0.3033737], - [-80, -80, -0.3519405, 80, 80, -0.3519405], - [-80, -80, -0.8871424, 80, 80, -0.8871424], - [-80, -80, -0.6276341, 80, 80, -0.6276341], - [-80, -80, -1.3220503, 80, 80, -1.3220503], - [-80, -80, -1.0709302, 80, 80, -1.0709302], - [-80, -80, -0.9122268, 80, 80, -0.9122268], - [-80, -80, -1.8012227, 80, 80, -1.8012227]], - sizes=[ - [4.75, 1.92, 1.71], # car - [10.24, 2.84, 3.44], # truck - [12.70, 2.92, 3.42], # bus - [6.52, 2.42, 2.34], # emergency vehicle - [8.17, 2.75, 3.20], # other vehicle - [2.35, 0.96, 1.59], # motorcycle - [1.76, 0.63, 1.44], # bicycle - [0.80, 0.76, 1.76], # pedestrian - [0.73, 0.35, 0.50] # animal - ], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py deleted file mode 100644 index afff99c6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py +++ /dev/null @@ -1,42 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - pts_neck=dict( - _delete_=True, - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[ - [-49.6, -49.6, -1.80032795, 49.6, 49.6, -1.80032795], - [-49.6, -49.6, -1.74440365, 49.6, 49.6, -1.74440365], - [-49.6, -49.6, -1.68526504, 49.6, 49.6, -1.68526504], - [-49.6, -49.6, -1.67339111, 49.6, 49.6, -1.67339111], - [-49.6, -49.6, -1.61785072, 49.6, 49.6, -1.61785072], - [-49.6, -49.6, -1.80984986, 49.6, 49.6, -1.80984986], - [-49.6, -49.6, -1.763965, 49.6, 49.6, -1.763965], - ], - sizes=[ - [4.60718145, 1.95017717, 1.72270761], # car - [6.73778078, 2.4560939, 2.73004906], # truck - [12.01320693, 2.87427237, 3.81509561], # trailer - [1.68452161, 0.60058911, 1.27192197], # bicycle - [0.7256437, 0.66344886, 1.75748069], # pedestrian - [0.40359262, 
0.39694519, 1.06232151], # traffic_cone - [0.48578221, 2.49008838, 0.98297065], # barrier - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py deleted file mode 100644 index ff0f67a0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py' -data = dict(samples_per_gpu=2, workers_per_gpu=2) -# fp16 settings, the loss scale is specifically tuned to avoid Nan -fp16 = dict(loss_scale=32.) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py deleted file mode 100644 index 7964b799..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,42 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_range100_lyft.py', - '../_base_/datasets/range100_lyft-3d.py', - '../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - pts_neck=dict( - _delete_=True, - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[[-100, -100, -1.0715024, 100, 100, -1.0715024], - [-100, -100, -0.3033737, 100, 100, -0.3033737], - [-100, -100, -0.3519405, 100, 100, -0.3519405], - [-100, -100, -0.8871424, 100, 100, -0.8871424], - [-100, -100, -0.6276341, 100, 100, -0.6276341], - [-100, -100, -1.3220503, 100, 100, -1.3220503], - [-100, -100, -1.0709302, 100, 100, -1.0709302], - [-100, -100, -0.9122268, 100, 100, -0.9122268], - [-100, -100, -1.8012227, 100, 100, -1.8012227]], - sizes=[ - [4.75, 1.92, 1.71], # car - [10.24, 2.84, 3.44], # truck - [12.70, 2.92, 3.42], # bus - [6.52, 2.42, 2.34], # emergency vehicle - [8.17, 2.75, 3.20], # other vehicle - [2.35, 0.96, 1.59], # motorcycle - [1.76, 0.63, 1.44], # bicycle - [0.80, 0.76, 1.76], # pedestrian - [0.73, 0.35, 0.50] # animal - ], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py deleted file mode 100644 index 8655691b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_secfpn_waymo.py', - '../_base_/datasets/waymoD5-3d-3class.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] - -# data settings -data = dict(train=dict(dataset=dict(load_interval=1))) diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py deleted file mode 100644 index 90f2a42c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py +++ /dev/null @@ -1,37 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_secfpn_waymo.py', - '../_base_/datasets/waymoD5-3d-car.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] - -# data settings -data = dict(train=dict(dataset=dict(load_interval=1))) - -# model settings -model = dict( - type='MVXFasterRCNN', - pts_bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-74.88, -74.88, -0.0345, 74.88, 74.88, -0.0345]], - sizes=[[4.73, 2.08, 1.77]], - rotations=[0, 1.57], - reshape_out=True)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - pts=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], - pos_weight=-1, - debug=False))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py deleted file mode 100644 index e4f1ce5c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_secfpn_waymo.py', - '../_base_/datasets/waymoD5-3d-3class.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py deleted file mode 100644 index 3a3e3266..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py +++ /dev/null @@ -1,34 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_secfpn_waymo.py', - '../_base_/datasets/waymoD5-3d-car.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] - -# model settings -model = dict( - type='MVXFasterRCNN', - pts_bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - anchor_generator=dict( - type='AlignedAnchor3DRangeGenerator', - ranges=[[-74.88, -74.88, -0.0345, 74.88, 74.88, -0.0345]], - sizes=[[4.73, 2.08, 1.77]], - rotations=[0, 1.57], - reshape_out=True)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - pts=dict( - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], - pos_weight=-1, - debug=False))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/metafile.yml 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/metafile.yml deleted file mode 100644 index 9a898c4b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/pointpillars/metafile.yml +++ /dev/null @@ -1,213 +0,0 @@ -Collections: - - Name: PointPillars - Metadata: - Training Techniques: - - AdamW - Architecture: - - Feature Pyramid Network - Paper: - URL: https://arxiv.org/abs/1812.05784 - Title: 'PointPillars: Fast Encoders for Object Detection from Point Clouds' - README: configs/pointpillars/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/voxel_encoders/pillar_encoder.py#L13 - Version: v0.6.0 - -Models: - - Name: hv_pointpillars_secfpn_6x8_160e_kitti-3d-car - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py - Metadata: - Training Data: KITTI - Training Memory (GB): 5.4 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - AP: 77.6 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car_20220331_134606-d42d15ed.pth - - - Name: hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py - Metadata: - Training Data: KITTI - Training Memory (GB): 5.5 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - AP: 64.07 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306-37dc2420.pth - - - Name: hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 16.4 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 34.33 - NDS: 49.1 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20210826_225857-f19d00a3.pth - - - Name: hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 16.3 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 39.71 - NDS: 53.15 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20210826_104936-fca299c1.pth - - - Name: hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 12.2 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 13.8 - Public Score: 14.1 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210829_100455-82b81c39.pth - - - Name: 
hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 9.2 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 14.0 - Public Score: 15.0 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210822_095429-0b3d6196.pth - - - Name: hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car.py - Metadata: - Training Data: Waymo - Training Memory (GB): 7.76 - Training Resources: 8x GeForce GTX 1080 Ti - Results: - - Task: 3D Object Detection - Dataset: Waymo - Metrics: - mAP@L1: 70.2 - mAPH@L1: 69.6 - mAP@L2: 62.6 - mAPH@L2: 62.1 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-car_20200901_204315-302fc3e7.pth - - - Name: hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py - Metadata: - Training Data: Waymo - Training Memory (GB): 8.12 - Training Resources: 8x GeForce GTX 1080 Ti - Results: - - Task: 3D Object Detection - Dataset: Waymo - Metrics: - mAP@L1: 64.7 - mAPH@L1: 57.6 - mAP@L2: 58.4 - mAPH@L2: 52.1 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class_20200831_204144-d1a706b1.pth - - - Name: hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-car.py - Metadata: - Training Data: Waymo - Training Memory (GB): 7.76 - Training Resources: 8x GeForce GTX 1080 Ti - Results: - - Task: 3D Object Detection - Dataset: Waymo - Metrics: - mAP@L1: 72.1 - mAPH@L1: 71.5 - mAP@L2: 63.6 - mAPH@L2: 63.1 - - - Name: hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymo-3d-3class.py - Metadata: - Training Data: Waymo - Training Memory (GB): 8.12 - Training Resources: 8x GeForce GTX 1080 Ti - Results: - - Task: 3D Object Detection - Dataset: Waymo - Metrics: - mAP@L1: 68.8 - mAPH@L1: 63.3 - mAP@L2: 62.6 - mAPH@L2: 57.6 - - - Name: hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py - Metadata: - Training Techniques: - - AdamW - - Mixed Precision Training - Training Resources: 8x TITAN Xp - Architecture: - - Hard Voxelization - Training Data: nuScenes - Training Memory (GB): 8.37 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 35.19 - NDS: 50.27 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d_20201020_222626-c3f0483e.pth - Code: - Version: v0.7.0 - - - Name: hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d - In Collection: PointPillars - Config: configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py - Metadata: - Training 
Techniques: - - AdamW - - Mixed Precision Training - Training Resources: 8x TITAN Xp - Architecture: - - Hard Voxelization - Training Data: nuScenes - Training Memory (GB): 8.40 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 39.26 - NDS: 53.26 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d_20201021_120719-269f9dd6.pth - Code: - Version: v0.7.0 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/README.md deleted file mode 100644 index f15b94fe..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/README.md +++ /dev/null @@ -1,82 +0,0 @@ -# Designing Network Design Spaces - -> [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) - - - -## Abstract - -In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs. - -
- -
- -## Introduction - -We implement RegNetX models in 3D detection systems and provide their first results with PointPillars on the nuScenes and Lyft datasets. - -The pre-trained models are converted from the [model zoo of pycls](https://github.com/facebookresearch/pycls/blob/master/MODEL_ZOO.md) and maintained in [mmcv](https://github.com/open-mmlab/mmcv). - -## Usage - -To use a RegNet model, there are two steps, sketched at the end of this section: - -1. Convert the model to the ResNet-style format supported by MMDetection -2. Modify the backbone and neck in the config accordingly - -### Convert model - -We already prepare models with FLOPs from 800M to 12G in our model zoo. - -For more general usage, we also provide the script `regnet2mmdet.py` in the tools directory to convert the keys of models pretrained by [pycls](https://github.com/facebookresearch/pycls/) to -ResNet-style checkpoints used in MMDetection. - -```bash -python -u tools/model_converters/regnet2mmdet.py ${PRETRAIN_PATH} ${STORE_PATH} -``` - -This script converts the model from `PRETRAIN_PATH` and stores the converted model in `STORE_PATH`. - -### Modify config - -Users can modify the backbone's `depth` and the corresponding keys in `arch` in the config according to the configs in the [pycls model zoo](https://github.com/facebookresearch/pycls/blob/master/MODEL_ZOO.md). -The parameter `in_channels` of the FPN can be found in Figures 15 & 16 of the paper (`wi` in the legend). -This directory already provides some configs with their performance, using RegNetX models from the 800MF to the 12GF level. -For other pre-trained or self-implemented RegNet models, users are responsible for checking these parameters themselves. - -**Note**: Although Figures 15 & 16 also provide `w0`, `wa`, `wm`, `group_w`, and `bot_mul` for `arch`, they are quantized and thus inaccurate; using them sometimes produces a backbone that does not match the keys in the pre-trained model.
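To make the two-step workflow above concrete, here is a minimal sketch of such an override config, written in the same style as the other configs in this patch. It assumes the mmdet3d `NoStemRegNet` backbone and the `pts_backbone`/`pts_neck` keys used by the PointPillars base models; the `regnetx_400mf` arch string, the `open-mmlab://regnetx_400mf` checkpoint alias, and the per-stage widths are illustrative values recalled from the upstream RegNet configs (which are not shown in this patch) and should be verified rather than treated as authoritative.

```python
# Sketch only: swap a converted RegNetX-400MF backbone into PointPillars on
# nuScenes. Field names and widths are assumptions, not taken from this patch.
_base_ = [
    '../_base_/models/hv_pointpillars_fpn_nus.py',
    '../_base_/datasets/nus-3d.py',
    '../_base_/schedules/schedule_2x.py',
    '../_base_/default_runtime.py',
]
model = dict(
    pts_backbone=dict(
        _delete_=True,                      # drop the default SECOND backbone
        type='NoStemRegNet',                # RegNet variant without the stem conv
        arch='regnetx_400mf',
        init_cfg=dict(type='Pretrained',
                      checkpoint='open-mmlab://regnetx_400mf'),
        out_indices=(1, 2, 3),
        frozen_stages=-1,
        norm_eval=False,
        style='pytorch'),
    # FPN input channels follow the per-stage widths (w_i) of RegNetX-400MF as
    # listed in the pycls model zoo (Figures 15 & 16 of the paper).
    pts_neck=dict(in_channels=[64, 160, 384]))
```

Under these assumptions, training would then proceed through the usual `tools/train.py` entry point with this config, exactly as for the plain PointPillars configs.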
- -## Results and models - -### nuScenes - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :------------------------------------------------------------------------------------: | :-----: | :------: | :------------: | :---: | :--: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](../pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.4 | | 35.17 | 49.7 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230725-0817d270.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230725.log.json) | -| [RegNetX-400MF-SECFPN](./hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.4 | | 41.2 | 55.2 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230334-53044f32.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230334.log.json) | -| [FPN](../pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 17.1 | | 40.0 | 53.3 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20200620_230405-2fa62f3d.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_fpn_sbn-all_4x8_2x_nus-3d_20200620_230405.log.json) | -| [RegNetX-400MF-FPN](./hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 17.3 | | 44.8 | 56.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d_20200620_230239-c694dce7.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d_20200620_230239.log.json) | -| [RegNetX-1.6gF-FPN](./hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 24.0 | | 48.2 | 59.3 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d_20200629_050311-dcd4e090.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d_20200629_050311.log.json) | - -### Lyft - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | Private Score | Public Score | Download | -| :-------------------------------------------------------------------------------------: | :-----: | 
:------: | :------------: | :-----------: | :----------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](../pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py) | 2x | 12.2 | | 13.9 | 14.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210517_204807-2518e3de.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210517_204807.log.json) | -| [RegNetX-400MF-SECFPN](./hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_lyft-3d.py) | 2x | 15.9 | | 14.9 | 15.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d_20210524_092151-42513826.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d_20210524_092151.log.json) | -| [FPN](../pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py) | 2x | 9.2 | | 14.9 | 15.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210517_202818-fc6904c3.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d_20210517_202818.log.json) | -| [RegNetX-400MF-FPN](./hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_lyft-3d.py) | 2x | 13.0 | | 16.0 | 16.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d_20210521_115618-823dcf18.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d_20210521_115618.log.json) | - -## Citation - -```latex -@article{radosavovic2020designing, - title={Designing Network Design Spaces}, - author={Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Dollár}, - year={2020}, - eprint={2003.13678}, - archivePrefix={arXiv}, - primaryClass={cs.CV} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py deleted file mode 100644 index 0574be57..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', 
-] -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch='regnetx_1.6gf', - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_1.6gf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[168, 408, 912])) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py deleted file mode 100644 index 1f391a32..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_lyft.py', - '../_base_/datasets/lyft-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch=dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py deleted file mode 100644 index 884729cc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch=dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_fp16_2x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_fp16_2x8_2x_nus-3d.py deleted file mode 100644 index e5863652..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_fp16_2x8_2x_nus-3d.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py' -data = dict(samples_per_gpu=2, workers_per_gpu=2) -# fp16 settings, the loss scale is specifically tuned to avoid Nan -fp16 = dict(loss_scale=32.) 
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py deleted file mode 100644 index fef308df..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_range100_lyft.py', - '../_base_/datasets/range100_lyft-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch=dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py deleted file mode 100644 index fb330d78..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = './hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py' -# model settings -model = dict( - pts_neck=dict( - type='SECONDFPN', - _delete_=True, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 160, 384], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - type='Anchor3DHead', - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[[-80, -80, -1.0715024, 80, 80, -1.0715024], - [-80, -80, -0.3033737, 80, 80, -0.3033737], - [-80, -80, -0.3519405, 80, 80, -0.3519405], - [-80, -80, -0.8871424, 80, 80, -0.8871424], - [-80, -80, -0.6276341, 80, 80, -0.6276341], - [-80, -80, -1.3220503, 80, 80, -1.3220503], - [-80, -80, -1.0709302, 80, 80, -1.0709302], - [-80, -80, -0.9122268, 80, 80, -0.9122268], - [-80, -80, -1.8012227, 80, 80, -1.8012227]], - sizes=[ - [4.75, 1.92, 1.71], # car - [10.24, 2.84, 3.44], # truck - [12.70, 2.92, 3.42], # bus - [6.52, 2.42, 2.34], # emergency vehicle - [8.17, 2.75, 3.20], # other vehicle - [2.35, 0.96, 1.59], # motorcycle - [1.76, 0.63, 1.44], # bicycle - [0.80, 0.76, 1.76], # pedestrian - [0.73, 0.35, 0.50] # animal - ], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py deleted file mode 100644 index ef8996a1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py +++ /dev/null @@ -1,38 +0,0 @@ -_base_ = './hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py' -# model settings -model = dict( - 
pts_neck=dict( - type='SECONDFPN', - _delete_=True, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 160, 384], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - type='Anchor3DHead', - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[ - [-49.6, -49.6, -1.80032795, 49.6, 49.6, -1.80032795], - [-49.6, -49.6, -1.74440365, 49.6, 49.6, -1.74440365], - [-49.6, -49.6, -1.68526504, 49.6, 49.6, -1.68526504], - [-49.6, -49.6, -1.67339111, 49.6, 49.6, -1.67339111], - [-49.6, -49.6, -1.61785072, 49.6, 49.6, -1.61785072], - [-49.6, -49.6, -1.80984986, 49.6, 49.6, -1.80984986], - [-49.6, -49.6, -1.763965, 49.6, 49.6, -1.763965], - ], - sizes=[ - [4.60718145, 1.95017717, 1.72270761], # car - [6.73778078, 2.4560939, 2.73004906], # truck - [12.01320693, 2.87427237, 3.81509561], # trailer - [1.68452161, 0.60058911, 1.27192197], # bicycle - [0.7256437, 0.66344886, 1.75748069], # pedestrian - [0.40359262, 0.39694519, 1.06232151], # traffic_cone - [0.48578221, 2.49008838, 0.98297065], # barrier - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py deleted file mode 100644 index 2af3719c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_range100_2x8_2x_lyft-3d.py +++ /dev/null @@ -1,40 +0,0 @@ -_base_ = \ - './hv_pointpillars_regnet-400mf_fpn_sbn-all_range100_2x8_2x_lyft-3d.py' -# model settings -model = dict( - pts_neck=dict( - type='SECONDFPN', - _delete_=True, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 160, 384], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - type='Anchor3DHead', - in_channels=384, - feat_channels=384, - anchor_generator=dict( - _delete_=True, - type='AlignedAnchor3DRangeGenerator', - ranges=[[-100, -100, -1.0715024, 100, 100, -1.0715024], - [-100, -100, -0.3033737, 100, 100, -0.3033737], - [-100, -100, -0.3519405, 100, 100, -0.3519405], - [-100, -100, -0.8871424, 100, 100, -0.8871424], - [-100, -100, -0.6276341, 100, 100, -0.6276341], - [-100, -100, -1.3220503, 100, 100, -1.3220503], - [-100, -100, -1.0709302, 100, 100, -1.0709302], - [-100, -100, -0.9122268, 100, 100, -0.9122268], - [-100, -100, -1.8012227, 100, 100, -1.8012227]], - sizes=[ - [4.75, 1.92, 1.71], # car - [10.24, 2.84, 3.44], # truck - [12.70, 2.92, 3.42], # bus - [6.52, 2.42, 2.34], # emergency vehicle - [8.17, 2.75, 3.20], # other vehicle - [2.35, 0.96, 1.59], # motorcycle - [1.76, 0.63, 1.44], # bicycle - [0.80, 0.76, 1.76], # pedestrian - [0.73, 0.35, 0.50] # animal - ], - rotations=[0, 1.57], - reshape_out=True))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/metafile.yml deleted file mode 100644 index 18f13b1d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/regnet/metafile.yml +++ /dev/null @@ -1,85 +0,0 @@ -Models: - - Name: hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d - In Collection: PointPillars - Config: 
configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 16.4 - Architecture: - - RegNetX - - Hard Voxelization - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 41.2 - NDS: 55.2 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230334-53044f32.pth - - - Name: hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d - In Collection: PointPillars - Config: configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 17.3 - Architecture: - - RegNetX - - Hard Voxelization - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 44.8 - NDS: 56.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d_20200620_230239-c694dce7.pth - - - Name: hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d - In Collection: PointPillars - Config: configs/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 24.0 - Architecture: - - RegNetX - - Hard Voxelization - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 48.2 - NDS: 59.3 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-1.6gf_fpn_sbn-all_4x8_2x_nus-3d_20200629_050311-dcd4e090.pth - - - Name: hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d - In Collection: PointPillars - Config: configs/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 15.9 - Architecture: - - RegNetX - - Hard Voxelization - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 14.9 - Public Score: 15.1 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_2x8_2x_lyft-3d_20210524_092151-42513826.pth - - - Name: hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d - In Collection: PointPillars - Config: configs/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 13.0 - Architecture: - - RegNetX - - Hard Voxelization - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 16.0 - Public Score: 16.1 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_regnet-400mf_fpn_sbn-all_2x8_2x_lyft-3d_20210521_115618-823dcf18.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/sassd/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/sassd/README.md deleted file mode 100644 index 3a4444a0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/sassd/README.md +++ /dev/null @@ -1,28 +0,0 @@ -# Structure Aware Single-stage 3D Object Detection from Point Cloud - -> [Structure Aware Single-stage 3D Object Detection from Point 
Cloud](https://openaccess.thecvf.com/content_CVPR_2020/papers/He_Structure_Aware_Single-Stage_3D_Object_Detection_From_Point_Cloud_CVPR_2020_paper.pdf) - - - -## Abstract - -3D object detection from point cloud data plays an essential role in autonomous driving. Current single-stage detectors are efficient by progressively downscaling the 3D point clouds in a fully convolutional manner. However, the downscaled features inevitably lose spatial information and cannot make full use of the structure information of 3D point cloud, degrading their localization precision. In this work, we propose to improve the localization precision of single-stage detectors by explicitly leveraging the structure information of 3D point cloud. Specifically, we design an auxiliary network which converts the convolutional features in the backbone network back to point-level representations. The auxiliary network is jointly optimized, by two point-level supervisions, to guide the convolutional features in the backbone network to be aware of the object structure. The auxiliary network can be detached after training and therefore introduces no extra computation in the inference stage. Besides, considering that single-stage detectors suffer from the discordance between the predicted bounding boxes and corresponding classification confidences, we develop an efficient part-sensitive warping operation to align the confidences to the predicted bounding boxes. Our proposed detector ranks at the top of KITTI 3D/BEV detection leaderboards and runs at 25 FPS for inference. - -
- -## Introduction - -We implement SA-SSD and provide the results and checkpoints on KITTI dataset. - -## Citation - -```latex -@InProceedings{he2020sassd, - title={Structure Aware Single-stage 3D Object Detection from Point Cloud}, - author={He, Chenhang and Zeng, Hui and Huang, Jianqiang and Hua, Xian-Sheng and Zhang, Lei}, - booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, - year={2020} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/sassd/sassd_6x8_80e_kitti-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/sassd/sassd_6x8_80e_kitti-3d-3class.py deleted file mode 100644 index efc67c7d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/sassd/sassd_6x8_80e_kitti-3d-3class.py +++ /dev/null @@ -1,94 +0,0 @@ -_base_ = [ - '../_base_/datasets/kitti-3d-3class.py', - '../_base_/schedules/cyclic_40e.py', '../_base_/default_runtime.py' -] - -voxel_size = [0.05, 0.05, 0.1] - -model = dict( - type='SASSD', - voxel_layer=dict( - max_num_points=5, - point_cloud_range=[0, -40, -3, 70.4, 40, 1], - voxel_size=voxel_size, - max_voxels=(16000, 40000)), - voxel_encoder=dict(type='HardSimpleVFE'), - middle_encoder=dict( - type='SparseEncoderSASSD', - in_channels=4, - sparse_shape=[41, 1600, 1408], - order=('conv', 'norm', 'act')), - backbone=dict( - type='SECOND', - in_channels=256, - layer_nums=[5, 5], - layer_strides=[1, 2], - out_channels=[128, 256]), - neck=dict( - type='SECONDFPN', - in_channels=[128, 256], - upsample_strides=[1, 2], - out_channels=[256, 256]), - bbox_head=dict( - type='Anchor3DHead', - num_classes=3, - in_channels=512, - feat_channels=512, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - ranges=[ - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -0.6, 70.4, 40.0, -0.6], - [0, -40.0, -1.78, 70.4, 40.0, -1.78], - ], - sizes=[[0.6, 0.8, 1.73], [0.6, 1.76, 1.73], [1.6, 3.9, 1.56]], - rotations=[0, 1.57], - reshape_out=False), - diff_rad_by_sin=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - assigner=[ - dict( # for Pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Cyclist - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.35, - neg_iou_thr=0.2, - min_pos_iou=0.2, - ignore_iof_thr=-1), - dict( # for Car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - ], - allowed_border=0, - pos_weight=-1, - debug=False), - test_cfg=dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_thr=0.01, - score_thr=0.1, - min_bbox_size=0, - nms_pre=100, - max_num=50)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/README.md deleted file mode 100644 index 1aa96501..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/README.md +++ /dev/null @@ -1,54 +0,0 @@ -# Second: Sparsely embedded convolutional detection - -> [SECOND: 
Sparsely Embedded Convolutional Detection](https://www.mdpi.com/1424-8220/18/10/3337) - - - -## Abstract - -LiDAR-based or RGB-D-based object detection is used in numerous applications, ranging from autonomous driving to robot vision. Voxel-based 3D convolutional networks have been used for some time to enhance the retention of information when processing point cloud LiDAR data. However, problems remain, including a slow inference speed and low orientation estimation performance. We therefore investigate an improved sparse convolution method for such networks, which significantly increases the speed of both training and inference. We also introduce a new form of angle loss regression to improve the orientation estimation performance and a new data augmentation approach that can enhance the convergence speed and performance. The proposed network produces state-of-the-art results on the KITTI 3D object detection benchmarks while maintaining a fast inference speed. - -
- -## Introduction - -We implement SECOND and provide the results and checkpoints on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :-----------------------------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_second_secfpn_6x8_80e_kitti-3d-car.py) | Car | cyclic 80e | 5.4 | | 79.07 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238.log.json) | -| [SECFPN (FP16)](./hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py) | Car | cyclic 80e | 2.9 | | 78.72 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car_20200924_211301-1f5ad833.pth)\| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car_20200924_211301.log.json) | -| [SECFPN](./hv_second_secfpn_6x8_80e_kitti-3d-3class.py) | 3 Class | cyclic 80e | 5.4 | | 65.74 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-3class/hv_second_secfpn_6x8_80e_kitti-3d-3class_20210831_022017-ae782e87.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-3class/hv_second_secfpn_6x8_80e_kitti-3d-3class_20210831_022017log.json) | -| [SECFPN (FP16)](./hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py) | 3 Class | cyclic 80e | 2.9 | | 67.4 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059-05f67bdf.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059.log.json) | - -### Waymo - -| Backbone | Load Interval | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP@L1 | mAPH@L1 | mAP@L2 | **mAPH@L2** | Download | -| :-----------------------------------------------------------: | :-----------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :----: | :---------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](./hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py) | 5 | 3 Class | 2x | 8.12 | | 65.3 | 61.7 | 58.9 | 55.7 | [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_sbn_4x8_2x_waymoD5-3d-3class/hv_second_secfpn_sbn_4x8_2x_waymoD5-3d-3class_20201115_112448.log.json) | -| above @ Car | | | 2x | 8.12 | | 67.1 | 66.6 | 58.7 | 58.2 | | -| above @ Pedestrian | | | 2x 
| 8.12 | | 68.1 | 59.1 | 59.5 | 51.5 | | -| above @ Cyclist | | | 2x | 8.12 | | 60.7 | 59.5 | 58.4 | 57.3 | | - -Note: - -- See more details about metrics and data split on Waymo [HERE](https://github.com/open-mmlab/mmdetection3d/tree/master/configs/pointpillars). For implementation details, we basically follow the original settings. All of these results are achieved without bells-and-whistles, e.g. ensemble, multi-scale training and test augmentation. -- `FP16` means Mixed Precision (FP16) is adopted in training. - -## Citation - -```latex -@article{yan2018second, - title={Second: Sparsely embedded convolutional detection}, - author={Yan, Yan and Mao, Yuxing and Li, Bo}, - journal={Sensors}, - year={2018}, - publisher={Multidisciplinary Digital Publishing Institute} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py deleted file mode 100644 index 0f28921f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/hv_second_secfpn_kitti.py', - '../_base_/datasets/kitti-3d-3class.py', - '../_base_/schedules/cyclic_40e.py', '../_base_/default_runtime.py' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py deleted file mode 100644 index 9ab7350a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py +++ /dev/null @@ -1,30 +0,0 @@ -_base_ = [ - '../_base_/models/hv_second_secfpn_kitti.py', - '../_base_/datasets/kitti-3d-car.py', '../_base_/schedules/cyclic_40e.py', - '../_base_/default_runtime.py' -] -point_cloud_range = [0, -40, -3, 70.4, 40, 1] -model = dict( - bbox_head=dict( - type='Anchor3DHead', - num_classes=1, - anchor_generator=dict( - _delete_=True, - type='Anchor3DRangeGenerator', - ranges=[[0, -40.0, -1.78, 70.4, 40.0, -1.78]], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - reshape_out=True)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - assigner=dict( - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - allowed_border=0, - pos_weight=-1, - debug=False)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py deleted file mode 100644 index bf0336a4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './hv_second_secfpn_6x8_80e_kitti-3d-3class.py' -# fp16 settings -fp16 = dict(loss_scale=512.) 
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py deleted file mode 100644 index efba5533..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './hv_second_secfpn_6x8_80e_kitti-3d-car.py' -# fp16 settings -fp16 = dict(loss_scale=512.) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py deleted file mode 100644 index 758827f8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py +++ /dev/null @@ -1,112 +0,0 @@ -_base_ = [ - '../_base_/models/hv_second_secfpn_waymo.py', - '../_base_/datasets/waymoD5-3d-3class.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] - -dataset_type = 'WaymoDataset' -data_root = 'data/waymo/kitti_format/' -class_names = ['Car', 'Pedestrian', 'Cyclist'] -point_cloud_range = [-76.8, -51.2, -2, 76.8, 51.2, 4] -input_modality = dict(use_lidar=True, use_camera=False) - -db_sampler = dict( - data_root=data_root, - info_path=data_root + 'waymo_dbinfos_train.pkl', - rate=1.0, - prepare=dict( - filter_by_difficulty=[-1], - filter_by_min_points=dict(Car=5, Pedestrian=5, Cyclist=5)), - classes=class_names, - sample_groups=dict(Car=15, Pedestrian=10, Cyclist=10), - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=[0, 1, 2, 3, 4])) - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=6, use_dim=5), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict(type='ObjectSample', db_sampler=db_sampler), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict( - type='GlobalRotScaleTrans', - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05]), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] - -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=6, use_dim=5), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_train.pkl', - split='training', - pipeline=train_pipeline, - modality=input_modality, - classes=class_names, - test_mode=False, - # we use box_type_3d='LiDAR' in kitti and nuscenes dataset - # and box_type_3d='Depth' in 
sunrgbd and scannet dataset. - box_type_3d='LiDAR', - # load one frame every five frames - load_interval=5)), - val=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR'), - test=dict( - type=dataset_type, - data_root=data_root, - ann_file=data_root + 'waymo_infos_val.pkl', - split='training', - pipeline=test_pipeline, - modality=input_modality, - classes=class_names, - test_mode=True, - box_type_3d='LiDAR')) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/metafile.yml deleted file mode 100644 index 5b68fe9c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/second/metafile.yml +++ /dev/null @@ -1,97 +0,0 @@ -Collections: - - Name: SECOND - Metadata: - Training Techniques: - - AdamW - Architecture: - - Hard Voxelization - Paper: - URL: https://www.mdpi.com/1424-8220/18/10/3337 - Title: 'SECOND: Sparsely Embedded Convolutional Detection' - README: configs/second/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/backbones/second.py#L11 - Version: v0.5.0 - -Models: - - Name: hv_second_secfpn_6x8_80e_kitti-3d-car - In Collection: SECOND - Config: configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py - Metadata: - Training Data: KITTI - Training Memory (GB): 5.4 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 79.07 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth - - - Name: hv_second_secfpn_6x8_80e_kitti-3d-3class - In Collection: SECOND - Config: configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py - Metadata: - Training Data: KITTI - Training Memory (GB): 5.4 - Training Resources: 8x V100 GPUs - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 65.74 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-3class/hv_second_secfpn_6x8_80e_kitti-3d-3class_20210831_022017-ae782e87.pth - - - Name: hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class - In Collection: SECOND - Config: configs/second/hv_second_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py - Metadata: - Training Data: Waymo - Training Memory (GB): 8.12 - Training Resources: 8x GeForce GTX 1080 Ti - Results: - - Task: 3D Object Detection - Dataset: Waymo - Metrics: - mAP@L1: 65.3 - mAPH@L1: 61.7 - mAP@L2: 58.9 - mAPH@L2: 55.7 - - - Name: hv_second_secfpn_fp16_6x8_80e_kitti-3d-car - In Collection: SECOND - Config: configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py - Metadata: - Training Techniques: - - AdamW - - Mixed Precision Training - Training Resources: 8x TITAN Xp - Training Data: KITTI - Training Memory (GB): 2.9 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 78.72 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car_20200924_211301-1f5ad833.pth - Code: - Version: v0.7.0 - - - Name: hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class - In Collection: SECOND - Config: configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py - Metadata: - Training Techniques: - - AdamW - - 
Mixed Precision Training - Training Resources: 8x TITAN Xp - Training Data: KITTI - Training Memory (GB): 2.9 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 67.4 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059-05f67bdf.pth - Code: - Version: v0.7.0 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/README.md deleted file mode 100644 index 8d91314d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/README.md +++ /dev/null @@ -1,47 +0,0 @@ -# SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation - -> [SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation](https://arxiv.org/abs/2002.10111) - - - -## Abstract - -Estimating 3D orientation and translation of objects is essential for infrastructure-less autonomous navigation and driving. In case of monocular vision, successful methods have been mainly based on two ingredients: (i) a network generating 2D region proposals, (ii) a R-CNN structure predicting 3D object pose by utilizing the acquired regions of interest. We argue that the 2D detection network is redundant and introduces non-negligible noise for 3D detection. Hence, we propose a novel 3D object detection method, named SMOKE, in this paper that predicts a 3D bounding box for each detected object by combining a single keypoint estimate with regressed 3D variables. As a second contribution, we propose a multi-step disentangling approach for constructing the 3D bounding box, which significantly improves both training convergence and detection accuracy. In contrast to previous 3D detection techniques, our method does not require complicated pre/post-processing, extra data, and a refinement stage. Despite of its structural simplicity, our proposed SMOKE network outperforms all existing monocular 3D detection methods on the KITTI dataset, giving the best state-of-the-art result on both 3D object detection and Bird's eye view evaluation. - -
- -## Introduction - -We implement SMOKE and provide the results and checkpoints on KITTI dataset. - -## Results and models - -### KITTI - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download | -| :------------------------------------------------------------------: | :-----: | :------: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [DLA34](./smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py) | 6x | 9.64 | | 13.85 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d_20210929_015553-d46d9bb0.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d_20210929_015553.log.json) | - -Note: mAP represents Car moderate 3D strict AP11 results. - -Detailed performance on KITTI 3D detection (3D/BEV) is as follows, evaluated by AP11 metric: - -| | Easy | Moderate | Hard | -| ---------- | :-----------: | :-----------: | :-----------: | -| Car | 16.92 / 22.97 | 13.85 / 18.32 | 11.90 / 15.88 | -| Pedestrian | 11.13 / 12.61 | 11.10 / 11.32 | 10.67 / 11.14 | -| Cyclist | 0.99 / 1.47 | 0.54 / 0.65 | 0.55 / 0.67 | - -## Citation - -```latex -@inproceedings{liu2020smoke, - title={Smoke: Single-stage monocular 3d object detection via keypoint estimation}, - author={Liu, Zechen and Wu, Zizhang and T{\'o}th, Roland}, - booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops}, - pages={996--997}, - year={2020} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/metafile.yml deleted file mode 100644 index df956e49..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/metafile.yml +++ /dev/null @@ -1,30 +0,0 @@ -Collections: - - Name: SMOKE - Metadata: - Training Data: KITTI - Training Techniques: - - Adam - Training Resources: 4x V100 GPUS - Architecture: - - SMOKEMono3DHead - - DLA - Paper: - URL: https://arxiv.org/abs/2002.10111 - Title: 'SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation' - README: configs/smoke/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/v1.0.0.dev0/mmdet3d/models/detectors/smoke_mono3d.py#L7 - Version: v1.0.0 - -Models: - - Name: smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d - In Collection: SMOKE - Config: configs/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py - Metadata: - Training Memory (GB): 9.6 - Results: - - Task: 3D Object Detection - Dataset: KITTI - Metrics: - mAP: 13.8 - Weights: https://download.openmmlab.com/mmdetection3d/v0.1.0_models/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d_20210929_015553-d46d9bb0.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py deleted file mode 100644 index c802ce30..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/smoke/smoke_dla34_pytorch_dlaneck_gn-all_8x4_6x_kitti-mono3d.py +++ /dev/null @@ 
-1,64 +0,0 @@ -_base_ = [ - '../_base_/datasets/kitti-mono3d.py', '../_base_/models/smoke.py', - '../_base_/default_runtime.py' -] - -# optimizer -optimizer = dict(type='Adam', lr=2.5e-4) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='step', warmup=None, step=[50]) - -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=72) -log_config = dict(interval=10) - -find_unused_parameters = True -class_names = ['Pedestrian', 'Cyclist', 'Car'] -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='LoadAnnotations3D', - with_bbox=True, - with_label=True, - with_attr_label=False, - with_bbox_3d=True, - with_label_3d=True, - with_bbox_depth=True), - dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - dict(type='RandomShiftScale', shift_scale=(0.2, 0.4), aug_prob=0.3), - dict(type='AffineResize', img_scale=(1280, 384), down_ratio=4), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict( - type='Collect3D', - keys=[ - 'img', 'gt_bboxes', 'gt_labels', 'gt_bboxes_3d', 'gt_labels_3d', - 'centers2d', 'depths' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='MultiScaleFlipAug', - img_scale=(1280, 384), - flip=False, - transforms=[ - dict(type='AffineResize', img_scale=(1280, 384), down_ratio=4), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/README.md deleted file mode 100644 index dad03f86..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/README.md +++ /dev/null @@ -1,53 +0,0 @@ -# SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds - -> [SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds](https://arxiv.org/abs/2004.02774) - - - -## Abstract - -Multi-class 3D object detection aims to localize and classify objects of multiple categories from point clouds. Due to the nature of point clouds, i.e. unstructured, sparse and noisy, some features benefit-ting multi-class discrimination are underexploited, such as shape information. In this paper, we propose a novel 3D shape signature to explore the shape information from point clouds. By incorporating operations of symmetry, convex hull and chebyshev fitting, the proposed shape sig-nature is not only compact and effective but also robust to the noise, which serves as a soft constraint to improve the feature capability of multi-class discrimination. Based on the proposed shape signature, we develop the shape signature networks (SSN) for 3D object detection, which consist of pyramid feature encoding part, shape-aware grouping heads and explicit shape encoding objective. Experiments show that the proposed method performs remarkably better than existing methods on two large-scale datasets. Furthermore, our shape signature can act as a plug-and-play component and ablation study shows its effectiveness and good scalability. - -
- -## Introduction - -We implement PointPillars with Shape-aware grouping heads used in the SSN and provide the results and checkpoints on the nuScenes and Lyft dataset. - -## Results and models - -### NuScenes - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download | -| :--------------------------------------------------------------------------------------------: | :-----: | :------: | :------------: | :---: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](../pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.4 | | 35.17 | 49.76 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230725-0817d270.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230725.log.json) | -| [SSN](./hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py) | 2x | 3.6 | | 40.91 | 54.44 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d_20210830_101351-51915986.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d_20210830_101351.log.json) | -| [RegNetX-400MF-SECFPN](../regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d.py) | 2x | 16.4 | | 41.15 | 55.20 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230334-53044f32.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/regnet/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d/hv_pointpillars_regnet-400mf_secfpn_sbn-all_4x8_2x_nus-3d_20200620_230334.log.json) | -| [RegNetX-400MF-SSN](./hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py) | 2x | 5.1 | | 46.65 | 58.24 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d_20210829_210615-361e5e04.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d_20210829_210615.log.json) | - -### Lyft - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | Private Score | Public Score | Download | -| :--------------------------------------------------------------------------: | :-----: | :------: | :------------: | :-----------: | :----------: | 
:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SECFPN](../pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d.py) | 2x | 12.2 | | 13.9 | 14.1 | [model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210517_204807-2518e3de.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/pointpillars/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d/hv_pointpillars_secfpn_sbn-all_2x8_2x_lyft-3d_20210517_204807.log.json) | -| [SSN](./hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py) | 2x | 8.5 | | 17.5 | 17.5 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d_20210822_134731-46841b41.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d_20210822_134731.log.json) | -| [RegNetX-400MF-SSN](./hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py) | 2x | 7.4 | | 17.9 | 18 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d_20210829_122825-d93475a1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d_20210829_122825.log.json) | - -Note: - -The main difference of the shape-aware grouping heads with the original SECOND FPN heads is that the former groups objects with similar sizes and shapes together, and design shape-specific heads for each group. Heavier heads (with more convolutions and large strides) are designed for large objects while smaller heads for small objects. Note that there may appear different feature map sizes in the outputs, so an anchor generator tailored to these feature maps is also needed in the implementation. - -Users could try other settings in terms of the head design. Here we basically refer to the implementation [HERE](https://github.com/xinge008/SSN). 
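In config form, the grouping is expressed through per-group `tasks` in the head, as in this abridged excerpt from the hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py config below (only three of the groups are shown):

```python
# Abridged from hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py: classes of similar
# size and shape share a sub-head; larger classes get a deeper sub-head with an
# extra stride, so the output feature maps can differ in size per group.
pts_bbox_head = dict(
    type='ShapeAwareHead',
    num_classes=9,
    in_channels=384,
    feat_channels=384,
    tasks=[
        dict(num_class=2, class_names=['bicycle', 'motorcycle'],
             shared_conv_channels=(64, 64), shared_conv_strides=(1, 1),
             norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)),
        dict(num_class=2, class_names=['pedestrian', 'animal'],
             shared_conv_channels=(64, 64), shared_conv_strides=(1, 1),
             norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)),
        dict(num_class=2, class_names=['car', 'emergency_vehicle'],
             shared_conv_channels=(64, 64, 64), shared_conv_strides=(2, 1, 1),
             norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)),
        # ... remaining vehicle groups follow the same pattern
    ])
```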
- -## Citation - -```latex -@inproceedings{zhu2020ssn, - title={SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds}, - author={Zhu, Xinge and Ma, Yuexin and Wang, Tai and Xu, Yan and Shi, Jianping and Lin, Dahua}, - booktitle={Proceedings of the European Conference on Computer Vision}, - year={2020} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py deleted file mode 100644 index 1103bcf1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py +++ /dev/null @@ -1,21 +0,0 @@ -_base_ = './hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py' -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch=dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) -# dataset settings -data = dict(samples_per_gpu=1, workers_per_gpu=2) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py deleted file mode 100644 index fb9ef316..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py +++ /dev/null @@ -1,19 +0,0 @@ -_base_ = './hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py' -# model settings -model = dict( - type='MVXFasterRCNN', - pts_backbone=dict( - _delete_=True, - type='NoStemRegNet', - arch=dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - init_cfg=dict( - type='Pretrained', checkpoint='open-mmlab://regnetx_400mf'), - out_indices=(1, 2, 3), - frozen_stages=-1, - strides=(1, 2, 2, 2), - base_channels=64, - stem_channels=64, - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - norm_eval=False, - style='pytorch'), - pts_neck=dict(in_channels=[64, 160, 384])) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py deleted file mode 100644 index 50b33c80..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py +++ /dev/null @@ -1,224 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_lyft.py', - '../_base_/datasets/lyft-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -point_cloud_range = [-100, -100, -5, 100, 100, 3] -# Note that the order of class names should be consistent with -# the following anchors' order -class_names = [ - 'bicycle', 'motorcycle', 'pedestrian', 'animal', 'car', - 'emergency_vehicle', 'bus', 'other_vehicle', 'truck' -] - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5), - dict(type='LoadPointsFromMultiSweeps', sweeps_num=10), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - 
dict( - type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5), - dict(type='LoadPointsFromMultiSweeps', sweeps_num=10), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=4, - train=dict(pipeline=train_pipeline, classes=class_names), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -# model settings -model = dict( - pts_voxel_layer=dict(point_cloud_range=[-100, -100, -5, 100, 100, 3]), - pts_voxel_encoder=dict( - feat_channels=[32, 64], - point_cloud_range=[-100, -100, -5, 100, 100, 3]), - pts_middle_encoder=dict(output_shape=[800, 800]), - pts_neck=dict( - _delete_=True, - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - _delete_=True, - type='ShapeAwareHead', - num_classes=9, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGeneratorPerCls', - ranges=[[-100, -100, -1.0709302, 100, 100, -1.0709302], - [-100, -100, -1.3220503, 100, 100, -1.3220503], - [-100, -100, -0.9122268, 100, 100, -0.9122268], - [-100, -100, -1.8012227, 100, 100, -1.8012227], - [-100, -100, -1.0715024, 100, 100, -1.0715024], - [-100, -100, -0.8871424, 100, 100, -0.8871424], - [-100, -100, -0.3519405, 100, 100, -0.3519405], - [-100, -100, -0.6276341, 100, 100, -0.6276341], - [-100, -100, -0.3033737, 100, 100, -0.3033737]], - sizes=[ - [1.76, 0.63, 1.44], # bicycle - [2.35, 0.96, 1.59], # motorcycle - [0.80, 0.76, 1.76], # pedestrian - [0.73, 0.35, 0.50], # animal - [4.75, 1.92, 1.71], # car - [6.52, 2.42, 2.34], # emergency vehicle - [12.70, 2.92, 3.42], # bus - [8.17, 2.75, 3.20], # other vehicle - [10.24, 2.84, 3.44] # truck - ], - custom_values=[], - rotations=[0, 1.57], - reshape_out=False), - tasks=[ - dict( - num_class=2, - class_names=['bicycle', 'motorcycle'], - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=2, - class_names=['pedestrian', 'animal'], - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=2, - class_names=['car', 'emergency_vehicle'], - shared_conv_channels=(64, 64, 64), - shared_conv_strides=(2, 1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( 
- num_class=3, - class_names=['bus', 'other_vehicle', 'truck'], - shared_conv_channels=(64, 64, 64), - shared_conv_strides=(2, 1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)) - ], - assign_per_class=True, - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi/4 - dir_limit_offset=0, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=7), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - pts=dict( - assigner=[ - dict( # bicycle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # motorcycle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # animal - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - dict( # emergency vehicle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # bus - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - dict( # other vehicle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # truck - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1) - ], - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], - pos_weight=-1, - debug=False))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py deleted file mode 100644 index 85502014..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py +++ /dev/null @@ -1,238 +0,0 @@ -_base_ = [ - '../_base_/models/hv_pointpillars_fpn_nus.py', - '../_base_/datasets/nus-3d.py', - '../_base_/schedules/schedule_2x.py', - '../_base_/default_runtime.py', -] -# Note that the order of class names should be consistent with -# the following anchors' order -point_cloud_range = [-50, -50, -5, 50, 50, 3] -class_names = [ - 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier', 'car', - 'truck', 'trailer', 'bus', 'construction_vehicle' -] - -train_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5), - dict(type='LoadPointsFromMultiSweeps', sweeps_num=10), - dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), - dict( - 
type='GlobalRotScaleTrans', - rot_range=[-0.3925, 0.3925], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0]), - dict( - type='RandomFlip3D', - sync_2d=False, - flip_ratio_bev_horizontal=0.5, - flip_ratio_bev_vertical=0.5), - dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), - dict(type='PointShuffle'), - dict(type='DefaultFormatBundle3D', class_names=class_names), - dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']) -] -test_pipeline = [ - dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5), - dict(type='LoadPointsFromMultiSweeps', sweeps_num=10), - dict( - type='MultiScaleFlipAug3D', - img_scale=(1333, 800), - pts_scale_ratio=1, - flip=False, - transforms=[ - dict( - type='GlobalRotScaleTrans', - rot_range=[0, 0], - scale_ratio_range=[1., 1.], - translation_std=[0, 0, 0]), - dict(type='RandomFlip3D'), - dict( - type='PointsRangeFilter', point_cloud_range=point_cloud_range), - dict( - type='DefaultFormatBundle3D', - class_names=class_names, - with_label=False), - dict(type='Collect3D', keys=['points']) - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=4, - train=dict(pipeline=train_pipeline, classes=class_names), - val=dict(pipeline=test_pipeline, classes=class_names), - test=dict(pipeline=test_pipeline, classes=class_names)) - -# model settings -model = dict( - pts_voxel_layer=dict(max_num_points=20), - pts_voxel_encoder=dict(feat_channels=[64, 64]), - pts_neck=dict( - _delete_=True, - type='SECONDFPN', - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01), - in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]), - pts_bbox_head=dict( - _delete_=True, - type='ShapeAwareHead', - num_classes=10, - in_channels=384, - feat_channels=384, - use_direction_classifier=True, - anchor_generator=dict( - type='AlignedAnchor3DRangeGeneratorPerCls', - ranges=[[-50, -50, -1.67339111, 50, 50, -1.67339111], - [-50, -50, -1.71396371, 50, 50, -1.71396371], - [-50, -50, -1.61785072, 50, 50, -1.61785072], - [-50, -50, -1.80984986, 50, 50, -1.80984986], - [-50, -50, -1.76396500, 50, 50, -1.76396500], - [-50, -50, -1.80032795, 50, 50, -1.80032795], - [-50, -50, -1.74440365, 50, 50, -1.74440365], - [-50, -50, -1.68526504, 50, 50, -1.68526504], - [-50, -50, -1.80673031, 50, 50, -1.80673031], - [-50, -50, -1.64824291, 50, 50, -1.64824291]], - sizes=[ - [1.68452161, 0.60058911, 1.27192197], # bicycle - [2.09973778, 0.76279481, 1.44403034], # motorcycle - [0.72564370, 0.66344886, 1.75748069], # pedestrian - [0.40359262, 0.39694519, 1.06232151], # traffic cone - [0.48578221, 2.49008838, 0.98297065], # barrier - [4.60718145, 1.95017717, 1.72270761], # car - [6.73778078, 2.45609390, 2.73004906], # truck - [12.01320693, 2.87427237, 3.81509561], # trailer - [11.1885991, 2.94046906, 3.47030982], # bus - [6.38352896, 2.73050468, 3.13312415] # construction vehicle - ], - custom_values=[0, 0], - rotations=[0, 1.57], - reshape_out=False), - tasks=[ - dict( - num_class=2, - class_names=['bicycle', 'motorcycle'], - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=1, - class_names=['pedestrian'], - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=2, - class_names=['traffic_cone', 'barrier'], - shared_conv_channels=(64, 64), - 
shared_conv_strides=(1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=1, - class_names=['car'], - shared_conv_channels=(64, 64, 64), - shared_conv_strides=(2, 1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)), - dict( - num_class=4, - class_names=[ - 'truck', 'trailer', 'bus', 'construction_vehicle' - ], - shared_conv_channels=(64, 64, 64), - shared_conv_strides=(2, 1, 1), - norm_cfg=dict(type='naiveSyncBN2d', eps=1e-3, momentum=0.01)) - ], - assign_per_class=True, - diff_rad_by_sin=True, - dir_offset=-0.7854, # -pi/4 - dir_limit_offset=0, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=9), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), - # model training and testing settings - train_cfg=dict( - _delete_=True, - pts=dict( - assigner=[ - dict( # bicycle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # motorcycle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - dict( # pedestrian - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # traffic cone - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # barrier - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # car - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.6, - neg_iou_thr=0.45, - min_pos_iou=0.45, - ignore_iof_thr=-1), - dict( # truck - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # trailer - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1), - dict( # bus - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.55, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - dict( # construction vehicle - type='MaxIoUAssigner', - iou_calculator=dict(type='BboxOverlapsNearest3D'), - pos_iou_thr=0.5, - neg_iou_thr=0.35, - min_pos_iou=0.35, - ignore_iof_thr=-1) - ], - allowed_border=0, - code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2], - pos_weight=-1, - debug=False))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/metafile.yml deleted file mode 100644 index df6dd9ed..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/ssn/metafile.yml +++ /dev/null @@ -1,72 +0,0 @@ -Collections: - - Name: SSN - Metadata: - Training Techniques: - - AdamW - Training Resources: 8x GeForce GTX 1080 Ti - Architecture: - - Hard Voxelization - Paper: - URL: https://arxiv.org/abs/2004.02774 - Title: 'SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds' - README: 
configs/ssn/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/dense_heads/shape_aware_head.py#L166 - Version: v0.7.0 - -Models: - - Name: hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d - In Collection: SSN - Config: configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 3.6 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 40.91 - NDS: 54.44 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d_20210830_101351-51915986.pth - - - Name: hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d - In Collection: SSN - Config: configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d.py - Metadata: - Training Data: nuScenes - Training Memory (GB): 5.1 - Results: - - Task: 3D Object Detection - Dataset: nuScenes - Metrics: - mAP: 46.65 - NDS: 58.24 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_2x16_2x_nus-3d_20210829_210615-361e5e04.pth - - - Name: hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d - In Collection: SSN - Config: configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 8.5 - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 17.5 - Public Score: 17.5 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d/hv_ssn_secfpn_sbn-all_2x16_2x_lyft-3d_20210822_134731-46841b41.pth - - - Name: hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d - In Collection: SSN - Config: configs/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d.py - Metadata: - Training Data: Lyft - Training Memory (GB): 7.4 - Results: - - Task: 3D Object Detection - Dataset: Lyft - Metrics: - Private Score: 17.9 - Public Score: 18.0 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/ssn/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d/hv_ssn_regnet-400mf_secfpn_sbn-all_1x16_2x_lyft-3d_20210829_122825-d93475a1.pth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/README.md deleted file mode 100644 index d74486f0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/README.md +++ /dev/null @@ -1,68 +0,0 @@ -# Deep Hough Voting for 3D Object Detection in Point Clouds - -> [Deep Hough Voting for 3D Object Detection in Point Clouds](https://arxiv.org/abs/1904.09664) - - - -## Abstract - -Current 3D object detection methods are heavily influenced by 2D detectors. In order to leverage architectures in 2D detectors, they often convert 3D point clouds to regular grids (i.e., to voxel grids or to bird's eye view images), or rely on detection in 2D images to propose 3D boxes. Few works have attempted to directly detect objects in point clouds. In this work, we return to first principles to construct a 3D detection pipeline for point cloud data and as generic as possible. However, due to the sparse nature of the data -- samples from 2D manifolds in 3D space -- we face a major challenge when directly predicting bounding box parameters from scene points: a 3D object centroid can be far from any surface point thus hard to regress accurately in one step. 
To address the challenge, we propose VoteNet, an end-to-end 3D object detection network based on a synergy of deep point set networks and Hough voting. Our model achieves state-of-the-art 3D detection on two large datasets of real 3D scans, ScanNet and SUN RGB-D with a simple design, compact model size and high efficiency. Remarkably, VoteNet outperforms previous methods by using purely geometric information without relying on color images. - -
- -
- -## Introduction - -We implement VoteNet and provide the result and checkpoints on ScanNet and SUNRGBD datasets. - -## Results and models - -### ScanNet - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :-----------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++](./votenet_8x8_scannet-3d-18class.py) | 3x | 4.1 | | 62.34 | 40.82 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_8x8_scannet-3d-18class/votenet_8x8_scannet-3d-18class_20210823_234503-cf8134fa.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_8x8_scannet-3d-18class/votenet_8x8_scannet-3d-18class_20210823_234503.log.json) | - -### SUNRGBD - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :------------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [PointNet++](./votenet_16x8_sunrgbd-3d-10class.py) | 3x | 8.1 | | 59.78 | 35.77 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_16x8_sunrgbd-3d-10class/votenet_16x8_sunrgbd-3d-10class_20210820_162823-bf11f014.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_16x8_sunrgbd-3d-10class/votenet_16x8_sunrgbd-3d-10class_20210820_162823.log.json) | - -**Notice**: If your current mmdetection3d version >= 0.6.0, and you are using the checkpoints downloaded from the above links or using checkpoints trained with mmdetection3d version \< 0.6.0, the checkpoints have to be first converted via [tools/model_converters/convert_votenet_checkpoints.py](../../tools/model_converters/convert_votenet_checkpoints.py): - -``` -python ./tools/model_converters/convert_votenet_checkpoints.py ${ORIGINAL_CHECKPOINT_PATH} --out=${NEW_CHECKPOINT_PATH} -``` - -Then you can use the converted checkpoints following [getting_started.md](../../docs/en/getting_started.md). - -## Indeterminism - -Since test data preparation randomly downsamples the points, and the test script uses fixed random seeds while the random seeds of validation in training are not fixed, the test results may be slightly different from the results reported above. - -## IoU loss - -Adding IoU loss (simply = 1-IoU) boosts VoteNet's performance. 
To use IoU loss, add this loss term to the config file: - -```python -iou_loss=dict(type='AxisAlignedIoULoss', reduction='sum', loss_weight=10.0 / 3.0) -``` - -| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 | AP@0.5 | Download | -| :-------------------------------------------------------: | :-----: | :------: | :------------: | :-----: | :----: | :------: | -| [PointNet++](./votenet_iouloss_8x8_scannet-3d-18class.py) | 3x | 4.1 | | 63.81 | 44.21 | / | - -For now, we only support calculating IoU loss for axis-aligned bounding boxes since the CUDA op of general 3D IoU calculation does not implement the backward method. Therefore, IoU loss can only be used for ScanNet dataset for now. - -## Citation - -```latex -@inproceedings{qi2019deep, - author = {Qi, Charles R and Litany, Or and He, Kaiming and Guibas, Leonidas J}, - title = {Deep Hough Voting for 3D Object Detection in Point Clouds}, - booktitle = {Proceedings of the IEEE International Conference on Computer Vision}, - year = {2019} -} -``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/metafile.yml b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/metafile.yml deleted file mode 100644 index cd18680f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/metafile.yml +++ /dev/null @@ -1,59 +0,0 @@ -Collections: - - Name: VoteNet - Metadata: - Training Techniques: - - AdamW - Training Resources: 8x V100 GPUs - Architecture: - - PointNet++ - Paper: - URL: https://arxiv.org/abs/1904.09664 - Title: 'Deep Hough Voting for 3D Object Detection in Point Clouds' - README: configs/votenet/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/detectors/votenet.py#L10 - Version: v0.5.0 - -Models: - - Name: votenet_16x8_sunrgbd-3d-10class.py - In Collection: VoteNet - Config: configs/votenet/votenet_16x8_sunrgbd-3d-10class.py - Metadata: - Training Data: SUNRGBD - Training Memory (GB): 8.1 - Results: - - Task: 3D Object Detection - Dataset: SUNRGBD - Metrics: - AP@0.25: 59.78 - AP@0.5: 35.77 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_16x8_sunrgbd-3d-10class/votenet_16x8_sunrgbd-3d-10class_20210820_162823-bf11f014.pth - - - Name: votenet_8x8_scannet-3d-18class.py - In Collection: VoteNet - Config: configs/votenet/votenet_8x8_scannet-3d-18class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 4.1 - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 62.34 - AP@0.5: 40.82 - Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_8x8_scannet-3d-18class/votenet_8x8_scannet-3d-18class_20210823_234503-cf8134fa.pth - - - Name: votenet_iouloss_8x8_scannet-3d-18class - In Collection: VoteNet - Config: configs/votenet/votenet_iouloss_8x8_scannet-3d-18class.py - Metadata: - Training Data: ScanNet - Training Memory (GB): 4.1 - Architecture: - - IoU Loss - Results: - - Task: 3D Object Detection - Dataset: ScanNet - Metrics: - AP@0.25: 63.81 - AP@0.5: 44.21 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_16x8_sunrgbd-3d-10class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_16x8_sunrgbd-3d-10class.py deleted file mode 100644 index 5ddfa7ad..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_16x8_sunrgbd-3d-10class.py +++ /dev/null @@ -1,21 +0,0 @@ -_base_ = [ - '../_base_/datasets/sunrgbd-3d-10class.py', 
'../_base_/models/votenet.py', - '../_base_/schedules/schedule_3x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - bbox_head=dict( - num_classes=10, - bbox_coder=dict( - type='PartialBinBasedBBoxCoder', - num_sizes=10, - num_dir_bins=12, - with_rot=True, - mean_sizes=[ - [2.114256, 1.620300, 0.927272], [0.791118, 1.279516, 0.718182], - [0.923508, 1.867419, 0.845495], [0.591958, 0.552978, 0.827272], - [0.699104, 0.454178, 0.75625], [0.69519, 1.346299, 0.736364], - [0.528526, 1.002642, 1.172878], [0.500618, 0.632163, 0.683424], - [0.404671, 1.071108, 1.688889], [0.76584, 1.398258, 0.472728] - ]), - )) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_8x8_scannet-3d-18class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_8x8_scannet-3d-18class.py deleted file mode 100644 index 62e56303..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_8x8_scannet-3d-18class.py +++ /dev/null @@ -1,36 +0,0 @@ -_base_ = [ - '../_base_/datasets/scannet-3d-18class.py', '../_base_/models/votenet.py', - '../_base_/schedules/schedule_3x.py', '../_base_/default_runtime.py' -] - -# model settings -model = dict( - bbox_head=dict( - num_classes=18, - bbox_coder=dict( - type='PartialBinBasedBBoxCoder', - num_sizes=18, - num_dir_bins=1, - with_rot=False, - mean_sizes=[[0.76966727, 0.8116021, 0.92573744], - [1.876858, 1.8425595, 1.1931566], - [0.61328, 0.6148609, 0.7182701], - [1.3955007, 1.5121545, 0.83443564], - [0.97949594, 1.0675149, 0.6329687], - [0.531663, 0.5955577, 1.7500148], - [0.9624706, 0.72462326, 1.1481868], - [0.83221924, 1.0490936, 1.6875663], - [0.21132214, 0.4206159, 0.5372846], - [1.4440073, 1.8970833, 0.26985747], - [1.0294262, 1.4040797, 0.87554324], - [1.3766412, 0.65521795, 1.6813129], - [0.6650819, 0.71111923, 1.298853], - [0.41999173, 0.37906948, 1.7513971], - [0.59359556, 0.5912492, 0.73919016], - [0.50867593, 0.50656086, 0.30136237], - [1.1511526, 1.0546296, 0.49706793], - [0.47535285, 0.49249494, 0.5802117]]))) - -# yapf:disable -log_config = dict(interval=30) -# yapf:enable diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_iouloss_8x8_scannet-3d-18class.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_iouloss_8x8_scannet-3d-18class.py deleted file mode 100644 index ac2a6c00..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/configs/votenet/votenet_iouloss_8x8_scannet-3d-18class.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = ['./votenet_8x8_scannet-3d-18class.py'] - -# model settings, add iou loss -model = dict( - bbox_head=dict( - iou_loss=dict( - type='AxisAlignedIoULoss', reduction='sum', loss_weight=10.0 / - 3.0))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/README.md deleted file mode 100644 index 20170c65..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/README.md +++ /dev/null @@ -1,59 +0,0 @@ -### Prepare S3DIS Data - -We follow the procedure in [pointnet](https://github.com/charlesq34/pointnet). - -1. Download S3DIS data by filling this [Google form](https://docs.google.com/forms/d/e/1FAIpQLScDimvNMCGhy_rmBA2gHfDu3naktRm6A8BPwAWWDv-Uhm6Shw/viewform?c=0&w=1). Download the ```Stanford3dDataset_v1.2_Aligned_Version.zip``` file and unzip it. Link or move the folder to this level of directory. - -2. 
In this directory, extract point clouds and annotations by running `python3 collect_indoor3d_data.py`. - -3. Enter the project root directory, generate training data by running - -```bash -python3 tools/create_data.py s3dis --root-path ./data/s3dis --out-dir ./data/s3dis --extra-tag s3dis -``` - -The overall process could be achieved through the following script - -```bash -python3 collect_indoor3d_data.py -cd ../.. -python3 tools/create_data.py s3dis --root-path ./data/s3dis --out-dir ./data/s3dis --extra-tag s3dis -``` - -The directory structure after pre-processing should be as below - -``` -s3dis -├── meta_data -├── indoor3d_util.py -├── collect_indoor3d_data.py -├── README.md -├── Stanford3dDataset_v1.2_Aligned_Version -├── s3dis_data -├── points -│ ├── xxxxx.bin -├── instance_mask -│ ├── xxxxx.bin -├── semantic_mask -│ ├── xxxxx.bin -├── seg_info -│ ├── Area_1_label_weight.npy -│ ├── Area_1_resampled_scene_idxs.npy -│ ├── Area_2_label_weight.npy -│ ├── Area_2_resampled_scene_idxs.npy -│ ├── Area_3_label_weight.npy -│ ├── Area_3_resampled_scene_idxs.npy -│ ├── Area_4_label_weight.npy -│ ├── Area_4_resampled_scene_idxs.npy -│ ├── Area_5_label_weight.npy -│ ├── Area_5_resampled_scene_idxs.npy -│ ├── Area_6_label_weight.npy -│ ├── Area_6_resampled_scene_idxs.npy -├── s3dis_infos_Area_1.pkl -├── s3dis_infos_Area_2.pkl -├── s3dis_infos_Area_3.pkl -├── s3dis_infos_Area_4.pkl -├── s3dis_infos_Area_5.pkl -├── s3dis_infos_Area_6.pkl - -``` \ No newline at end of file diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/collect_indoor3d_data.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/collect_indoor3d_data.py deleted file mode 100644 index df307ed5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/collect_indoor3d_data.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -import argparse -import mmcv -from indoor3d_util import export -from os import path as osp - -parser = argparse.ArgumentParser() -parser.add_argument( - '--output-folder', - default='./s3dis_data', - help='output folder of the result.') -parser.add_argument( - '--data-dir', - default='Stanford3dDataset_v1.2_Aligned_Version', - help='s3dis data directory.') -parser.add_argument( - '--ann-file', - default='meta_data/anno_paths.txt', - help='The path of the file that stores the annotation names.') -args = parser.parse_args() - -anno_paths = [line.rstrip() for line in open(args.ann_file)] -anno_paths = [osp.join(args.data_dir, p) for p in anno_paths] - -output_folder = args.output_folder -mmcv.mkdir_or_exist(output_folder) - -# Note: there is an extra character in the v1.2 data in Area_5/hallway_6. -# It's fixed manually here. 
-# Refer to https://github.com/AnTao97/dgcnn.pytorch/blob/843abe82dd731eb51a4b3f70632c2ed3c60560e9/prepare_data/collect_indoor3d_data.py#L18 # noqa -revise_file = osp.join(args.data_dir, - 'Area_5/hallway_6/Annotations/ceiling_1.txt') -with open(revise_file, 'r') as f: - data = f.read() - # replace that extra character with blank space to separate data - data = data[:5545347] + ' ' + data[5545348:] -with open(revise_file, 'w') as f: - f.write(data) - -for anno_path in anno_paths: - print(f'Exporting data from annotation file: {anno_path}') - elements = anno_path.split('/') - out_filename = \ - elements[-3] + '_' + elements[-2] # Area_1_hallway_1 - out_filename = osp.join(output_folder, out_filename) - if osp.isfile(f'{out_filename}_point.npy'): - print('File already exists. skipping.') - continue - export(anno_path, out_filename) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/indoor3d_util.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/indoor3d_util.py deleted file mode 100644 index 7b0c7f57..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/indoor3d_util.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -import glob -import numpy as np -from os import path as osp - -# ----------------------------------------------------------------------------- -# CONSTANTS -# ----------------------------------------------------------------------------- - -BASE_DIR = osp.dirname(osp.abspath(__file__)) - -class_names = [ - x.rstrip() for x in open(osp.join(BASE_DIR, 'meta_data/class_names.txt')) -] -class2label = {one_class: i for i, one_class in enumerate(class_names)} - -# ----------------------------------------------------------------------------- -# CONVERT ORIGINAL DATA TO POINTS, SEM_LABEL AND INS_LABEL FILES -# ----------------------------------------------------------------------------- - - -def export(anno_path, out_filename): - """Convert original dataset files to points, instance mask and semantic - mask files. We aggregated all the points from each instance in the room. - - Args: - anno_path (str): path to annotations. e.g. Area_1/office_2/Annotations/ - out_filename (str): path to save collected points and labels - file_format (str): txt or numpy, determines what file format to save. - - Note: - the points are shifted before save, the most negative point is now - at origin. 
- """ - points_list = [] - ins_idx = 1 # instance ids should be indexed from 1, so 0 is unannotated - - for f in glob.glob(osp.join(anno_path, '*.txt')): - one_class = osp.basename(f).split('_')[0] - if one_class not in class_names: # some rooms have 'staris' class - one_class = 'clutter' - points = np.loadtxt(f) - labels = np.ones((points.shape[0], 1)) * class2label[one_class] - ins_labels = np.ones((points.shape[0], 1)) * ins_idx - ins_idx += 1 - points_list.append(np.concatenate([points, labels, ins_labels], 1)) - - data_label = np.concatenate(points_list, 0) # [N, 8], (pts, rgb, sem, ins) - xyz_min = np.amin(data_label, axis=0)[0:3] - data_label[:, 0:3] -= xyz_min - - np.save(f'{out_filename}_point.npy', data_label[:, :6].astype(np.float32)) - np.save(f'{out_filename}_sem_label.npy', data_label[:, 6].astype(np.int)) - np.save(f'{out_filename}_ins_label.npy', data_label[:, 7].astype(np.int)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/meta_data/anno_paths.txt b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/meta_data/anno_paths.txt deleted file mode 100644 index e5a4d7b9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/meta_data/anno_paths.txt +++ /dev/null @@ -1,272 +0,0 @@ -Area_1/conferenceRoom_1/Annotations -Area_1/conferenceRoom_2/Annotations -Area_1/copyRoom_1/Annotations -Area_1/hallway_1/Annotations -Area_1/hallway_2/Annotations -Area_1/hallway_3/Annotations -Area_1/hallway_4/Annotations -Area_1/hallway_5/Annotations -Area_1/hallway_6/Annotations -Area_1/hallway_7/Annotations -Area_1/hallway_8/Annotations -Area_1/office_10/Annotations -Area_1/office_11/Annotations -Area_1/office_12/Annotations -Area_1/office_13/Annotations -Area_1/office_14/Annotations -Area_1/office_15/Annotations -Area_1/office_16/Annotations -Area_1/office_17/Annotations -Area_1/office_18/Annotations -Area_1/office_19/Annotations -Area_1/office_1/Annotations -Area_1/office_20/Annotations -Area_1/office_21/Annotations -Area_1/office_22/Annotations -Area_1/office_23/Annotations -Area_1/office_24/Annotations -Area_1/office_25/Annotations -Area_1/office_26/Annotations -Area_1/office_27/Annotations -Area_1/office_28/Annotations -Area_1/office_29/Annotations -Area_1/office_2/Annotations -Area_1/office_30/Annotations -Area_1/office_31/Annotations -Area_1/office_3/Annotations -Area_1/office_4/Annotations -Area_1/office_5/Annotations -Area_1/office_6/Annotations -Area_1/office_7/Annotations -Area_1/office_8/Annotations -Area_1/office_9/Annotations -Area_1/pantry_1/Annotations -Area_1/WC_1/Annotations -Area_2/auditorium_1/Annotations -Area_2/auditorium_2/Annotations -Area_2/conferenceRoom_1/Annotations -Area_2/hallway_10/Annotations -Area_2/hallway_11/Annotations -Area_2/hallway_12/Annotations -Area_2/hallway_1/Annotations -Area_2/hallway_2/Annotations -Area_2/hallway_3/Annotations -Area_2/hallway_4/Annotations -Area_2/hallway_5/Annotations -Area_2/hallway_6/Annotations -Area_2/hallway_7/Annotations -Area_2/hallway_8/Annotations -Area_2/hallway_9/Annotations -Area_2/office_10/Annotations -Area_2/office_11/Annotations -Area_2/office_12/Annotations -Area_2/office_13/Annotations -Area_2/office_14/Annotations -Area_2/office_1/Annotations -Area_2/office_2/Annotations -Area_2/office_3/Annotations -Area_2/office_4/Annotations -Area_2/office_5/Annotations -Area_2/office_6/Annotations -Area_2/office_7/Annotations -Area_2/office_8/Annotations -Area_2/office_9/Annotations -Area_2/storage_1/Annotations -Area_2/storage_2/Annotations -Area_2/storage_3/Annotations 
-Area_2/storage_4/Annotations -Area_2/storage_5/Annotations -Area_2/storage_6/Annotations -Area_2/storage_7/Annotations -Area_2/storage_8/Annotations -Area_2/storage_9/Annotations -Area_2/WC_1/Annotations -Area_2/WC_2/Annotations -Area_3/conferenceRoom_1/Annotations -Area_3/hallway_1/Annotations -Area_3/hallway_2/Annotations -Area_3/hallway_3/Annotations -Area_3/hallway_4/Annotations -Area_3/hallway_5/Annotations -Area_3/hallway_6/Annotations -Area_3/lounge_1/Annotations -Area_3/lounge_2/Annotations -Area_3/office_10/Annotations -Area_3/office_1/Annotations -Area_3/office_2/Annotations -Area_3/office_3/Annotations -Area_3/office_4/Annotations -Area_3/office_5/Annotations -Area_3/office_6/Annotations -Area_3/office_7/Annotations -Area_3/office_8/Annotations -Area_3/office_9/Annotations -Area_3/storage_1/Annotations -Area_3/storage_2/Annotations -Area_3/WC_1/Annotations -Area_3/WC_2/Annotations -Area_4/conferenceRoom_1/Annotations -Area_4/conferenceRoom_2/Annotations -Area_4/conferenceRoom_3/Annotations -Area_4/hallway_10/Annotations -Area_4/hallway_11/Annotations -Area_4/hallway_12/Annotations -Area_4/hallway_13/Annotations -Area_4/hallway_14/Annotations -Area_4/hallway_1/Annotations -Area_4/hallway_2/Annotations -Area_4/hallway_3/Annotations -Area_4/hallway_4/Annotations -Area_4/hallway_5/Annotations -Area_4/hallway_6/Annotations -Area_4/hallway_7/Annotations -Area_4/hallway_8/Annotations -Area_4/hallway_9/Annotations -Area_4/lobby_1/Annotations -Area_4/lobby_2/Annotations -Area_4/office_10/Annotations -Area_4/office_11/Annotations -Area_4/office_12/Annotations -Area_4/office_13/Annotations -Area_4/office_14/Annotations -Area_4/office_15/Annotations -Area_4/office_16/Annotations -Area_4/office_17/Annotations -Area_4/office_18/Annotations -Area_4/office_19/Annotations -Area_4/office_1/Annotations -Area_4/office_20/Annotations -Area_4/office_21/Annotations -Area_4/office_22/Annotations -Area_4/office_2/Annotations -Area_4/office_3/Annotations -Area_4/office_4/Annotations -Area_4/office_5/Annotations -Area_4/office_6/Annotations -Area_4/office_7/Annotations -Area_4/office_8/Annotations -Area_4/office_9/Annotations -Area_4/storage_1/Annotations -Area_4/storage_2/Annotations -Area_4/storage_3/Annotations -Area_4/storage_4/Annotations -Area_4/WC_1/Annotations -Area_4/WC_2/Annotations -Area_4/WC_3/Annotations -Area_4/WC_4/Annotations -Area_5/conferenceRoom_1/Annotations -Area_5/conferenceRoom_2/Annotations -Area_5/conferenceRoom_3/Annotations -Area_5/hallway_10/Annotations -Area_5/hallway_11/Annotations -Area_5/hallway_12/Annotations -Area_5/hallway_13/Annotations -Area_5/hallway_14/Annotations -Area_5/hallway_15/Annotations -Area_5/hallway_1/Annotations -Area_5/hallway_2/Annotations -Area_5/hallway_3/Annotations -Area_5/hallway_4/Annotations -Area_5/hallway_5/Annotations -Area_5/hallway_6/Annotations -Area_5/hallway_7/Annotations -Area_5/hallway_8/Annotations -Area_5/hallway_9/Annotations -Area_5/lobby_1/Annotations -Area_5/office_10/Annotations -Area_5/office_11/Annotations -Area_5/office_12/Annotations -Area_5/office_13/Annotations -Area_5/office_14/Annotations -Area_5/office_15/Annotations -Area_5/office_16/Annotations -Area_5/office_17/Annotations -Area_5/office_18/Annotations -Area_5/office_19/Annotations -Area_5/office_1/Annotations -Area_5/office_20/Annotations -Area_5/office_21/Annotations -Area_5/office_22/Annotations -Area_5/office_23/Annotations -Area_5/office_24/Annotations -Area_5/office_25/Annotations -Area_5/office_26/Annotations -Area_5/office_27/Annotations 
-Area_5/office_28/Annotations -Area_5/office_29/Annotations -Area_5/office_2/Annotations -Area_5/office_30/Annotations -Area_5/office_31/Annotations -Area_5/office_32/Annotations -Area_5/office_33/Annotations -Area_5/office_34/Annotations -Area_5/office_35/Annotations -Area_5/office_36/Annotations -Area_5/office_37/Annotations -Area_5/office_38/Annotations -Area_5/office_39/Annotations -Area_5/office_3/Annotations -Area_5/office_40/Annotations -Area_5/office_41/Annotations -Area_5/office_42/Annotations -Area_5/office_4/Annotations -Area_5/office_5/Annotations -Area_5/office_6/Annotations -Area_5/office_7/Annotations -Area_5/office_8/Annotations -Area_5/office_9/Annotations -Area_5/pantry_1/Annotations -Area_5/storage_1/Annotations -Area_5/storage_2/Annotations -Area_5/storage_3/Annotations -Area_5/storage_4/Annotations -Area_5/WC_1/Annotations -Area_5/WC_2/Annotations -Area_6/conferenceRoom_1/Annotations -Area_6/copyRoom_1/Annotations -Area_6/hallway_1/Annotations -Area_6/hallway_2/Annotations -Area_6/hallway_3/Annotations -Area_6/hallway_4/Annotations -Area_6/hallway_5/Annotations -Area_6/hallway_6/Annotations -Area_6/lounge_1/Annotations -Area_6/office_10/Annotations -Area_6/office_11/Annotations -Area_6/office_12/Annotations -Area_6/office_13/Annotations -Area_6/office_14/Annotations -Area_6/office_15/Annotations -Area_6/office_16/Annotations -Area_6/office_17/Annotations -Area_6/office_18/Annotations -Area_6/office_19/Annotations -Area_6/office_1/Annotations -Area_6/office_20/Annotations -Area_6/office_21/Annotations -Area_6/office_22/Annotations -Area_6/office_23/Annotations -Area_6/office_24/Annotations -Area_6/office_25/Annotations -Area_6/office_26/Annotations -Area_6/office_27/Annotations -Area_6/office_28/Annotations -Area_6/office_29/Annotations -Area_6/office_2/Annotations -Area_6/office_30/Annotations -Area_6/office_31/Annotations -Area_6/office_32/Annotations -Area_6/office_33/Annotations -Area_6/office_34/Annotations -Area_6/office_35/Annotations -Area_6/office_36/Annotations -Area_6/office_37/Annotations -Area_6/office_3/Annotations -Area_6/office_4/Annotations -Area_6/office_5/Annotations -Area_6/office_6/Annotations -Area_6/office_7/Annotations -Area_6/office_8/Annotations -Area_6/office_9/Annotations -Area_6/openspace_1/Annotations -Area_6/pantry_1/Annotations \ No newline at end of file diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/meta_data/class_names.txt b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/meta_data/class_names.txt deleted file mode 100644 index b4b91540..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/data/s3dis/meta_data/class_names.txt +++ /dev/null @@ -1,13 +0,0 @@ -ceiling -floor -wall -beam -column -window -door -table -chair -sofa -bookcase -board -clutter \ No newline at end of file diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/dist_train.sh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/dist_train.sh deleted file mode 100644 index ecb19be2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/dist_train.sh +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. 
You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. -#!/usr/bin/env bash - -CONFIG=$1 -GPUS=$2 -NNODES=${NNODES:-1} -NODE_RANK=${NODE_RANK:-0} -PORT=${PORT:-29500} -MASTER_ADDR=${MASTER_ADDR:-"127.0.0.1"} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -python3 -m torch.distributed.launch \ - --nnodes=$NNODES \ - --node_rank=$NODE_RANK \ - --master_addr=$MASTER_ADDR \ - --nproc_per_node=$GPUS \ - --master_port=$PORT \ - $(dirname "$0")/train.py \ - $CONFIG \ - --seed 0 \ - --launcher pytorch ${@:3} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/__init__.py deleted file mode 100644 index 14c556ac..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# flake8: noqa -from .arraymisc import * -from .fileio import * -from .image import * -from .utils import * -from .version import * -from .video import * -from .visualization import * - -# The following modules are not imported to this level, so mmcv may be used -# without PyTorch. -# - runner -# - parallel -# - op -# - device diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/arraymisc/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/arraymisc/__init__.py deleted file mode 100644 index 4b4700d6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/arraymisc/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .quantization import dequantize, quantize - -__all__ = ['quantize', 'dequantize'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/arraymisc/quantization.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/arraymisc/quantization.py deleted file mode 100644 index 6182710d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/arraymisc/quantization.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Union - -import numpy as np - - -def quantize(arr: np.ndarray, - min_val: Union[int, float], - max_val: Union[int, float], - levels: int, - dtype=np.int64) -> tuple: - """Quantize an array of (-inf, inf) to [0, levels-1]. - - Args: - arr (ndarray): Input array. - min_val (int or float): Minimum value to be clipped. - max_val (int or float): Maximum value to be clipped. - levels (int): Quantization levels. - dtype (np.type): The type of the quantized array. - - Returns: - tuple: Quantized array. - """ - if not (isinstance(levels, int) and levels > 1): - raise ValueError( - f'levels must be a positive integer, but got {levels}') - if min_val >= max_val: - raise ValueError( - f'min_val ({min_val}) must be smaller than max_val ({max_val})') - - arr = np.clip(arr, min_val, max_val) - min_val - quantized_arr = np.minimum( - np.floor(levels * arr / (max_val - min_val)).astype(dtype), levels - 1) - - return quantized_arr - - -def dequantize(arr: np.ndarray, - min_val: Union[int, float], - max_val: Union[int, float], - levels: int, - dtype=np.float64) -> tuple: - """Dequantize an array. - - Args: - arr (ndarray): Input array. 
- min_val (int or float): Minimum value to be clipped. - max_val (int or float): Maximum value to be clipped. - levels (int): Quantization levels. - dtype (np.type): The type of the dequantized array. - - Returns: - tuple: Dequantized array. - """ - if not (isinstance(levels, int) and levels > 1): - raise ValueError( - f'levels must be a positive integer, but got {levels}') - if min_val >= max_val: - raise ValueError( - f'min_val ({min_val}) must be smaller than max_val ({max_val})') - - dequantized_arr = (arr + 0.5).astype(dtype) * (max_val - - min_val) / levels + min_val - - return dequantized_arr diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/__init__.py deleted file mode 100644 index 7246c897..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/__init__.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .alexnet import AlexNet -# yapf: disable -from .bricks import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS, - PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS, - ContextBlock, Conv2d, Conv3d, ConvAWS2d, ConvModule, - ConvTranspose2d, ConvTranspose3d, ConvWS2d, - DepthwiseSeparableConvModule, GeneralizedAttention, - HSigmoid, HSwish, Linear, MaxPool2d, MaxPool3d, - NonLocal1d, NonLocal2d, NonLocal3d, Scale, Swish, - build_activation_layer, build_conv_layer, - build_norm_layer, build_padding_layer, build_plugin_layer, - build_upsample_layer, conv_ws_2d, is_norm) -from .builder import MODELS, build_model_from_cfg -# yapf: enable -from .resnet import ResNet, make_res_layer -from .utils import (INITIALIZERS, Caffe2XavierInit, ConstantInit, KaimingInit, - NormalInit, PretrainedInit, TruncNormalInit, UniformInit, - XavierInit, bias_init_with_prob, caffe2_xavier_init, - constant_init, fuse_conv_bn, get_model_complexity_info, - initialize, kaiming_init, normal_init, trunc_normal_init, - uniform_init, xavier_init) -from .vgg import VGG, make_vgg_layer - -__all__ = [ - 'AlexNet', 'VGG', 'make_vgg_layer', 'ResNet', 'make_res_layer', - 'constant_init', 'xavier_init', 'normal_init', 'trunc_normal_init', - 'uniform_init', 'kaiming_init', 'caffe2_xavier_init', - 'bias_init_with_prob', 'ConvModule', 'build_activation_layer', - 'build_conv_layer', 'build_norm_layer', 'build_padding_layer', - 'build_upsample_layer', 'build_plugin_layer', 'is_norm', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'HSigmoid', 'Swish', 'HSwish', - 'GeneralizedAttention', 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', - 'PADDING_LAYERS', 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', - 'get_model_complexity_info', 'conv_ws_2d', 'ConvAWS2d', 'ConvWS2d', - 'fuse_conv_bn', 'DepthwiseSeparableConvModule', 'Linear', 'Conv2d', - 'ConvTranspose2d', 'MaxPool2d', 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', - 'initialize', 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit', - 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit', - 'Caffe2XavierInit', 'MODELS', 'build_model_from_cfg' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/alexnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/alexnet.py deleted file mode 100644 index 4d45d96d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/alexnet.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import logging -from typing import Optional - -import torch -import torch.nn as nn - - -class AlexNet(nn.Module): - """AlexNet backbone. - - Args: - num_classes (int): number of classes for classification. - """ - - def __init__(self, num_classes: int = -1): - super().__init__() - self.num_classes = num_classes - self.features = nn.Sequential( - nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - nn.Conv2d(64, 192, kernel_size=5, padding=2), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - nn.Conv2d(192, 384, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(384, 256, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(256, 256, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - ) - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Dropout(), - nn.Linear(256 * 6 * 6, 4096), - nn.ReLU(inplace=True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(inplace=True), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained: Optional[str] = None) -> None: - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - # use default initializer - pass - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x: torch.Tensor) -> torch.Tensor: - - x = self.features(x) - if self.num_classes > 0: - x = x.view(x.size(0), 256 * 6 * 6) - x = self.classifier(x) - - return x diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/__init__.py deleted file mode 100644 index 0f33124e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .activation import build_activation_layer -from .context_block import ContextBlock -from .conv import build_conv_layer -from .conv2d_adaptive_padding import Conv2dAdaptivePadding -from .conv_module import ConvModule -from .conv_ws import ConvAWS2d, ConvWS2d, conv_ws_2d -from .depthwise_separable_conv_module import DepthwiseSeparableConvModule -from .drop import Dropout, DropPath -from .generalized_attention import GeneralizedAttention -from .hsigmoid import HSigmoid -from .hswish import HSwish -from .non_local import NonLocal1d, NonLocal2d, NonLocal3d -from .norm import build_norm_layer, is_norm -from .padding import build_padding_layer -from .plugin import build_plugin_layer -from .registry import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS, - PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS) -from .scale import Scale -from .swish import Swish -from .upsample import build_upsample_layer -from .wrappers import (Conv2d, Conv3d, ConvTranspose2d, ConvTranspose3d, - Linear, MaxPool2d, MaxPool3d) - -__all__ = [ - 'ConvModule', 'build_activation_layer', 'build_conv_layer', - 'build_norm_layer', 'build_padding_layer', 'build_upsample_layer', - 'build_plugin_layer', 'is_norm', 'HSigmoid', 'HSwish', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'GeneralizedAttention', - 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', 'PADDING_LAYERS', - 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', 'ConvAWS2d', 'ConvWS2d', - 'conv_ws_2d', 'DepthwiseSeparableConvModule', 'Swish', 'Linear', - 'Conv2dAdaptivePadding', 'Conv2d', 'ConvTranspose2d', 'MaxPool2d', - 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', 'Dropout', 'DropPath' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/activation.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/activation.py deleted file mode 100644 index b82374cf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/activation.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from mmcv.utils import TORCH_VERSION, build_from_cfg, digit_version -from .registry import ACTIVATION_LAYERS - -for module in [ - nn.ReLU, nn.LeakyReLU, nn.PReLU, nn.RReLU, nn.ReLU6, nn.ELU, - nn.Sigmoid, nn.Tanh -]: - ACTIVATION_LAYERS.register_module(module=module) - - -@ACTIVATION_LAYERS.register_module(name='Clip') -@ACTIVATION_LAYERS.register_module() -class Clamp(nn.Module): - """Clamp activation layer. - - This activation function is to clamp the feature map value within - :math:`[min, max]`. More details can be found in ``torch.clamp()``. - - Args: - min (Number | optional): Lower-bound of the range to be clamped to. - Default to -1. - max (Number | optional): Upper-bound of the range to be clamped to. - Default to 1. - """ - - def __init__(self, min=-1., max=1.): - super().__init__() - self.min = min - self.max = max - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): The input tensor. - - Returns: - torch.Tensor: Clamped tensor. - """ - return torch.clamp(x, min=self.min, max=self.max) - - -class GELU(nn.Module): - r"""Applies the Gaussian Error Linear Units function: - - .. math:: - \text{GELU}(x) = x * \Phi(x) - where :math:`\Phi(x)` is the Cumulative Distribution Function for - Gaussian Distribution. - - Shape: - - Input: :math:`(N, *)` where `*` means, any number of additional - dimensions - - Output: :math:`(N, *)`, same shape as the input - - .. 
image:: scripts/activation_images/GELU.png - - Examples:: - - >>> m = nn.GELU() - >>> input = torch.randn(2) - >>> output = m(input) - """ - - def forward(self, input): - return F.gelu(input) - - -if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.4')): - ACTIVATION_LAYERS.register_module(module=GELU) -else: - ACTIVATION_LAYERS.register_module(module=nn.GELU) - - -def build_activation_layer(cfg): - """Build activation layer. - - Args: - cfg (dict): The activation layer config, which should contain: - - - type (str): Layer type. - - layer args: Args needed to instantiate an activation layer. - - Returns: - nn.Module: Created activation layer. - """ - return build_from_cfg(cfg, ACTIVATION_LAYERS) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/context_block.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/context_block.py deleted file mode 100644 index 92e5255b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/context_block.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn - -from ..utils import constant_init, kaiming_init -from .registry import PLUGIN_LAYERS - - -def last_zero_init(m): - if isinstance(m, nn.Sequential): - constant_init(m[-1], val=0) - else: - constant_init(m, val=0) - - -@PLUGIN_LAYERS.register_module() -class ContextBlock(nn.Module): - """ContextBlock module in GCNet. - - See 'GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond' - (https://arxiv.org/abs/1904.11492) for details. - - Args: - in_channels (int): Channels of the input feature map. - ratio (float): Ratio of channels of transform bottleneck - pooling_type (str): Pooling method for context modeling. - Options are 'att' and 'avg', stand for attention pooling and - average pooling respectively. Default: 'att'. - fusion_types (Sequence[str]): Fusion method for feature fusion, - Options are 'channels_add', 'channel_mul', stand for channelwise - addition and multiplication respectively. 
Default: ('channel_add',) - """ - - _abbr_ = 'context_block' - - def __init__(self, - in_channels, - ratio, - pooling_type='att', - fusion_types=('channel_add', )): - super().__init__() - assert pooling_type in ['avg', 'att'] - assert isinstance(fusion_types, (list, tuple)) - valid_fusion_types = ['channel_add', 'channel_mul'] - assert all([f in valid_fusion_types for f in fusion_types]) - assert len(fusion_types) > 0, 'at least one fusion should be used' - self.in_channels = in_channels - self.ratio = ratio - self.planes = int(in_channels * ratio) - self.pooling_type = pooling_type - self.fusion_types = fusion_types - if pooling_type == 'att': - self.conv_mask = nn.Conv2d(in_channels, 1, kernel_size=1) - self.softmax = nn.Softmax(dim=2) - else: - self.avg_pool = nn.AdaptiveAvgPool2d(1) - if 'channel_add' in fusion_types: - self.channel_add_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_add_conv = None - if 'channel_mul' in fusion_types: - self.channel_mul_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_mul_conv = None - self.reset_parameters() - - def reset_parameters(self): - if self.pooling_type == 'att': - kaiming_init(self.conv_mask, mode='fan_in') - self.conv_mask.inited = True - - if self.channel_add_conv is not None: - last_zero_init(self.channel_add_conv) - if self.channel_mul_conv is not None: - last_zero_init(self.channel_mul_conv) - - def spatial_pool(self, x): - batch, channel, height, width = x.size() - if self.pooling_type == 'att': - input_x = x - # [N, C, H * W] - input_x = input_x.view(batch, channel, height * width) - # [N, 1, C, H * W] - input_x = input_x.unsqueeze(1) - # [N, 1, H, W] - context_mask = self.conv_mask(x) - # [N, 1, H * W] - context_mask = context_mask.view(batch, 1, height * width) - # [N, 1, H * W] - context_mask = self.softmax(context_mask) - # [N, 1, H * W, 1] - context_mask = context_mask.unsqueeze(-1) - # [N, 1, C, 1] - context = torch.matmul(input_x, context_mask) - # [N, C, 1, 1] - context = context.view(batch, channel, 1, 1) - else: - # [N, C, 1, 1] - context = self.avg_pool(x) - - return context - - def forward(self, x): - # [N, C, 1, 1] - context = self.spatial_pool(x) - - out = x - if self.channel_mul_conv is not None: - # [N, C, 1, 1] - channel_mul_term = torch.sigmoid(self.channel_mul_conv(context)) - out = out * channel_mul_term - if self.channel_add_conv is not None: - # [N, C, 1, 1] - channel_add_term = self.channel_add_conv(context) - out = out + channel_add_term - - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv.py deleted file mode 100644 index f6c35fd7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from torch import nn - -from .registry import CONV_LAYERS - -CONV_LAYERS.register_module('Conv1d', module=nn.Conv1d) -CONV_LAYERS.register_module('Conv2d', module=nn.Conv2d) -CONV_LAYERS.register_module('Conv3d', module=nn.Conv3d) -CONV_LAYERS.register_module('Conv', module=nn.Conv2d) - - -def build_conv_layer(cfg, *args, **kwargs): - """Build convolution layer. - - Args: - cfg (None or dict): The conv layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an conv layer. - args (argument list): Arguments passed to the `__init__` - method of the corresponding conv layer. - kwargs (keyword arguments): Keyword arguments passed to the `__init__` - method of the corresponding conv layer. - - Returns: - nn.Module: Created conv layer. - """ - if cfg is None: - cfg_ = dict(type='Conv2d') - else: - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in CONV_LAYERS: - raise KeyError(f'Unrecognized layer type {layer_type}') - else: - conv_layer = CONV_LAYERS.get(layer_type) - - layer = conv_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv2d_adaptive_padding.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv2d_adaptive_padding.py deleted file mode 100644 index b45e758a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv2d_adaptive_padding.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from torch import nn -from torch.nn import functional as F - -from .registry import CONV_LAYERS - - -@CONV_LAYERS.register_module() -class Conv2dAdaptivePadding(nn.Conv2d): - """Implementation of 2D convolution in tensorflow with `padding` as "same", - which applies padding to input (if needed) so that input image gets fully - covered by filter and stride you specified. For stride 1, this will ensure - that output image size is same as input. For stride of 2, output dimensions - will be half, for example. - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the convolving kernel - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. Default: 0 - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 1 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If ``True``, adds a learnable bias to the - output. 
Default: ``True`` - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True): - super().__init__(in_channels, out_channels, kernel_size, stride, 0, - dilation, groups, bias) - - def forward(self, x): - img_h, img_w = x.size()[-2:] - kernel_h, kernel_w = self.weight.size()[-2:] - stride_h, stride_w = self.stride - output_h = math.ceil(img_h / stride_h) - output_w = math.ceil(img_w / stride_w) - pad_h = ( - max((output_h - 1) * self.stride[0] + - (kernel_h - 1) * self.dilation[0] + 1 - img_h, 0)) - pad_w = ( - max((output_w - 1) * self.stride[1] + - (kernel_w - 1) * self.dilation[1] + 1 - img_w, 0)) - if pad_h > 0 or pad_w > 0: - x = F.pad(x, [ - pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2 - ]) - return F.conv2d(x, self.weight, self.bias, self.stride, self.padding, - self.dilation, self.groups) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv_module.py deleted file mode 100644 index 1d975685..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv_module.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn - -from mmcv.utils import _BatchNorm, _InstanceNorm -from ..utils import constant_init, kaiming_init -from .activation import build_activation_layer -from .conv import build_conv_layer -from .norm import build_norm_layer -from .padding import build_padding_layer -from .registry import PLUGIN_LAYERS - - -@PLUGIN_LAYERS.register_module() -class ConvModule(nn.Module): - """A conv block that bundles conv/norm/activation layers. - - This block simplifies the usage of convolution layers, which are commonly - used with a norm layer (e.g., BatchNorm) and activation layer (e.g., ReLU). - It is based upon three build methods: `build_conv_layer()`, - `build_norm_layer()` and `build_activation_layer()`. - - Besides, we add some additional features in this module. - 1. Automatically set `bias` of the conv layer. - 2. Spectral norm is supported. - 3. More padding modes are supported. Before PyTorch 1.5, nn.Conv2d only - supports zero and circular padding, and we add "reflect" padding mode. - - Args: - in_channels (int): Number of channels in the input feature map. - Same as that in ``nn._ConvNd``. - out_channels (int): Number of channels produced by the convolution. - Same as that in ``nn._ConvNd``. - kernel_size (int | tuple[int]): Size of the convolving kernel. - Same as that in ``nn._ConvNd``. - stride (int | tuple[int]): Stride of the convolution. - Same as that in ``nn._ConvNd``. - padding (int | tuple[int]): Zero-padding added to both sides of - the input. Same as that in ``nn._ConvNd``. - dilation (int | tuple[int]): Spacing between kernel elements. - Same as that in ``nn._ConvNd``. - groups (int): Number of blocked connections from input channels to - output channels. Same as that in ``nn._ConvNd``. - bias (bool | str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if `norm_cfg` is None, otherwise - False. Default: "auto". - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - inplace (bool): Whether to use inplace mode for activation. - Default: True. 
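`Conv2dAdaptivePadding`, deleted above, reproduces TensorFlow-style "same" padding: the input is padded just enough that the output spatial size is `ceil(input / stride)`. A quick sketch of that behaviour (import path taken from the bricks `__all__` shown earlier; the shape in the comment is what the padding formula implies, not output captured from this repo):

```python
import torch
from mmcv.cnn.bricks import Conv2dAdaptivePadding

conv = Conv2dAdaptivePadding(3, 8, kernel_size=3, stride=2)
x = torch.randn(1, 3, 15, 15)
# Output spatial size is ceil(15 / 2) = 8 per dimension, independent of kernel size.
print(conv(x).shape)  # torch.Size([1, 8, 8, 8])
```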
- with_spectral_norm (bool): Whether use spectral norm in conv module. - Default: False. - padding_mode (str): If the `padding_mode` has not been supported by - current `Conv2d` in PyTorch, we will use our own padding layer - instead. Currently, we support ['zeros', 'circular'] with official - implementation and ['reflect'] with our own implementation. - Default: 'zeros'. - order (tuple[str]): The order of conv/norm/activation layers. It is a - sequence of "conv", "norm" and "act". Common examples are - ("conv", "norm", "act") and ("act", "conv", "norm"). - Default: ('conv', 'norm', 'act'). - """ - - _abbr_ = 'conv_block' - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias='auto', - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - inplace=True, - with_spectral_norm=False, - padding_mode='zeros', - order=('conv', 'norm', 'act')): - super().__init__() - assert conv_cfg is None or isinstance(conv_cfg, dict) - assert norm_cfg is None or isinstance(norm_cfg, dict) - assert act_cfg is None or isinstance(act_cfg, dict) - official_padding_mode = ['zeros', 'circular'] - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.inplace = inplace - self.with_spectral_norm = with_spectral_norm - self.with_explicit_padding = padding_mode not in official_padding_mode - self.order = order - assert isinstance(self.order, tuple) and len(self.order) == 3 - assert set(order) == {'conv', 'norm', 'act'} - - self.with_norm = norm_cfg is not None - self.with_activation = act_cfg is not None - # if the conv layer is before a norm layer, bias is unnecessary. - if bias == 'auto': - bias = not self.with_norm - self.with_bias = bias - - if self.with_explicit_padding: - pad_cfg = dict(type=padding_mode) - self.padding_layer = build_padding_layer(pad_cfg, padding) - - # reset padding to 0 for conv module - conv_padding = 0 if self.with_explicit_padding else padding - # build convolution layer - self.conv = build_conv_layer( - conv_cfg, - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=conv_padding, - dilation=dilation, - groups=groups, - bias=bias) - # export the attributes of self.conv to a higher level for convenience - self.in_channels = self.conv.in_channels - self.out_channels = self.conv.out_channels - self.kernel_size = self.conv.kernel_size - self.stride = self.conv.stride - self.padding = padding - self.dilation = self.conv.dilation - self.transposed = self.conv.transposed - self.output_padding = self.conv.output_padding - self.groups = self.conv.groups - - if self.with_spectral_norm: - self.conv = nn.utils.spectral_norm(self.conv) - - # build normalization layers - if self.with_norm: - # norm layer is after conv layer - if order.index('norm') > order.index('conv'): - norm_channels = out_channels - else: - norm_channels = in_channels - self.norm_name, norm = build_norm_layer(norm_cfg, norm_channels) - self.add_module(self.norm_name, norm) - if self.with_bias: - if isinstance(norm, (_BatchNorm, _InstanceNorm)): - warnings.warn( - 'Unnecessary conv bias before batch/instance norm') - else: - self.norm_name = None - - # build activation layer - if self.with_activation: - act_cfg_ = act_cfg.copy() - # nn.Tanh has no 'inplace' argument - if act_cfg_['type'] not in [ - 'Tanh', 'PReLU', 'Sigmoid', 'HSigmoid', 'Swish', 'GELU' - ]: - act_cfg_.setdefault('inplace', inplace) - self.activate = build_activation_layer(act_cfg_) - - # Use msra init by default - self.init_weights() - - 
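`ConvModule`, whose constructor ends here, bundles conv, norm and activation layers and resolves `bias='auto'` to `False` whenever a norm layer is configured. A minimal usage sketch, assuming the standalone `mmcv` package (`from mmcv.cnn import ConvModule` is the usual import path):

```python
import torch
from mmcv.cnn import ConvModule

# conv -> BN -> ReLU in the default ('conv', 'norm', 'act') order;
# the conv bias is dropped automatically because a norm layer follows it.
block = ConvModule(3, 16, 3, padding=1,
                   norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'))
out = block(torch.randn(1, 3, 32, 32))
print(out.shape, block.conv.bias)  # torch.Size([1, 16, 32, 32]) None
```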
@property - def norm(self): - if self.norm_name: - return getattr(self, self.norm_name) - else: - return None - - def init_weights(self): - # 1. It is mainly for customized conv layers with their own - # initialization manners by calling their own ``init_weights()``, - # and we do not want ConvModule to override the initialization. - # 2. For customized conv layers without their own initialization - # manners (that is, they don't have their own ``init_weights()``) - # and PyTorch's conv layers, they will be initialized by - # this method with default ``kaiming_init``. - # Note: For PyTorch's conv layers, they will be overwritten by our - # initialization implementation using default ``kaiming_init``. - if not hasattr(self.conv, 'init_weights'): - if self.with_activation and self.act_cfg['type'] == 'LeakyReLU': - nonlinearity = 'leaky_relu' - a = self.act_cfg.get('negative_slope', 0.01) - else: - nonlinearity = 'relu' - a = 0 - kaiming_init(self.conv, a=a, nonlinearity=nonlinearity) - if self.with_norm: - constant_init(self.norm, 1, bias=0) - - def forward(self, x, activate=True, norm=True): - for layer in self.order: - if layer == 'conv': - if self.with_explicit_padding: - x = self.padding_layer(x) - x = self.conv(x) - elif layer == 'norm' and norm and self.with_norm: - x = self.norm(x) - elif layer == 'act' and activate and self.with_activation: - x = self.activate(x) - return x diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv_ws.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv_ws.py deleted file mode 100644 index fcd0f228..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/conv_ws.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .registry import CONV_LAYERS - - -def conv_ws_2d(input, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - eps=1e-5): - c_in = weight.size(0) - weight_flat = weight.view(c_in, -1) - mean = weight_flat.mean(dim=1, keepdim=True).view(c_in, 1, 1, 1) - std = weight_flat.std(dim=1, keepdim=True).view(c_in, 1, 1, 1) - weight = (weight - mean) / (std + eps) - return F.conv2d(input, weight, bias, stride, padding, dilation, groups) - - -@CONV_LAYERS.register_module('ConvWS') -class ConvWS2d(nn.Conv2d): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True, - eps=1e-5): - super().__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.eps = eps - - def forward(self, x): - return conv_ws_2d(x, self.weight, self.bias, self.stride, self.padding, - self.dilation, self.groups, self.eps) - - -@CONV_LAYERS.register_module(name='ConvAWS') -class ConvAWS2d(nn.Conv2d): - """AWS (Adaptive Weight Standardization) - - This is a variant of Weight Standardization - (https://arxiv.org/pdf/1903.10520.pdf) - It is used in DetectoRS to avoid NaN - (https://arxiv.org/pdf/2006.02334.pdf) - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the conv kernel - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. 
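`conv_ws_2d` and `ConvWS2d` above implement Weight Standardization: each output filter is normalized to zero mean and unit standard deviation on the fly before the convolution, while the stored weights stay untouched. A small sketch (import path from the bricks `__all__`):

```python
import torch
from mmcv.cnn.bricks import ConvWS2d

conv = ConvWS2d(3, 8, kernel_size=3, padding=1)
out = conv(torch.randn(1, 3, 16, 16))
# Behaves like a normal 3x3 conv shape-wise; standardization only changes
# the effective kernel used inside forward().
print(out.shape)  # torch.Size([1, 8, 16, 16])
```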
Default: 0 - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 1 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If set True, adds a learnable bias to the - output. Default: True - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True): - super().__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.register_buffer('weight_gamma', - torch.ones(self.out_channels, 1, 1, 1)) - self.register_buffer('weight_beta', - torch.zeros(self.out_channels, 1, 1, 1)) - - def _get_weight(self, weight): - weight_flat = weight.view(weight.size(0), -1) - mean = weight_flat.mean(dim=1).view(-1, 1, 1, 1) - std = torch.sqrt(weight_flat.var(dim=1) + 1e-5).view(-1, 1, 1, 1) - weight = (weight - mean) / std - weight = self.weight_gamma * weight + self.weight_beta - return weight - - def forward(self, x): - weight = self._get_weight(self.weight) - return F.conv2d(x, weight, self.bias, self.stride, self.padding, - self.dilation, self.groups) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Override default load function. - - AWS overrides the function _load_from_state_dict to recover - weight_gamma and weight_beta if they are missing. If weight_gamma and - weight_beta are found in the checkpoint, this function will return - after super()._load_from_state_dict. Otherwise, it will compute the - mean and std of the pretrained weights and store them in weight_beta - and weight_gamma. - """ - - self.weight_gamma.data.fill_(-1) - local_missing_keys = [] - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, local_missing_keys, - unexpected_keys, error_msgs) - if self.weight_gamma.data.mean() > 0: - for k in local_missing_keys: - missing_keys.append(k) - return - weight = self.weight.data - weight_flat = weight.view(weight.size(0), -1) - mean = weight_flat.mean(dim=1).view(-1, 1, 1, 1) - std = torch.sqrt(weight_flat.var(dim=1) + 1e-5).view(-1, 1, 1, 1) - self.weight_beta.data.copy_(mean) - self.weight_gamma.data.copy_(std) - missing_gamma_beta = [ - k for k in local_missing_keys - if k.endswith('weight_gamma') or k.endswith('weight_beta') - ] - for k in missing_gamma_beta: - local_missing_keys.remove(k) - for k in local_missing_keys: - missing_keys.append(k) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/depthwise_separable_conv_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/depthwise_separable_conv_module.py deleted file mode 100644 index fa613614..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/depthwise_separable_conv_module.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .conv_module import ConvModule - - -class DepthwiseSeparableConvModule(nn.Module): - """Depthwise separable convolution module. - - See https://arxiv.org/pdf/1704.04861.pdf for details. - - This module can replace a ConvModule with the conv block replaced by two - conv block: depthwise conv block and pointwise conv block. The depthwise - conv block contains depthwise-conv/norm/activation layers. The pointwise - conv block contains pointwise-conv/norm/activation layers. 
It should be - noted that there will be norm/activation layer in the depthwise conv block - if `norm_cfg` and `act_cfg` are specified. - - Args: - in_channels (int): Number of channels in the input feature map. - Same as that in ``nn._ConvNd``. - out_channels (int): Number of channels produced by the convolution. - Same as that in ``nn._ConvNd``. - kernel_size (int | tuple[int]): Size of the convolving kernel. - Same as that in ``nn._ConvNd``. - stride (int | tuple[int]): Stride of the convolution. - Same as that in ``nn._ConvNd``. Default: 1. - padding (int | tuple[int]): Zero-padding added to both sides of - the input. Same as that in ``nn._ConvNd``. Default: 0. - dilation (int | tuple[int]): Spacing between kernel elements. - Same as that in ``nn._ConvNd``. Default: 1. - norm_cfg (dict): Default norm config for both depthwise ConvModule and - pointwise ConvModule. Default: None. - act_cfg (dict): Default activation config for both depthwise ConvModule - and pointwise ConvModule. Default: dict(type='ReLU'). - dw_norm_cfg (dict): Norm config of depthwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - dw_act_cfg (dict): Activation config of depthwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. Default: 'default'. - pw_norm_cfg (dict): Norm config of pointwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - pw_act_cfg (dict): Activation config of pointwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. Default: 'default'. - kwargs (optional): Other shared arguments for depthwise and pointwise - ConvModule. See ConvModule for ref. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - dw_norm_cfg='default', - dw_act_cfg='default', - pw_norm_cfg='default', - pw_act_cfg='default', - **kwargs): - super().__init__() - assert 'groups' not in kwargs, 'groups should not be specified' - - # if norm/activation config of depthwise/pointwise ConvModule is not - # specified, use default config. - dw_norm_cfg = dw_norm_cfg if dw_norm_cfg != 'default' else norm_cfg - dw_act_cfg = dw_act_cfg if dw_act_cfg != 'default' else act_cfg - pw_norm_cfg = pw_norm_cfg if pw_norm_cfg != 'default' else norm_cfg - pw_act_cfg = pw_act_cfg if pw_act_cfg != 'default' else act_cfg - - # depthwise convolution - self.depthwise_conv = ConvModule( - in_channels, - in_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=in_channels, - norm_cfg=dw_norm_cfg, - act_cfg=dw_act_cfg, - **kwargs) - - self.pointwise_conv = ConvModule( - in_channels, - out_channels, - 1, - norm_cfg=pw_norm_cfg, - act_cfg=pw_act_cfg, - **kwargs) - - def forward(self, x): - x = self.depthwise_conv(x) - x = self.pointwise_conv(x) - return x diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/drop.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/drop.py deleted file mode 100644 index 763b4321..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/drop.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from mmcv import build_from_cfg -from .registry import DROPOUT_LAYERS - - -def drop_path(x, drop_prob=0., training=False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). 
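`DepthwiseSeparableConvModule` above factorizes a KxK convolution into a depthwise KxK `ConvModule` (with `groups = in_channels`) followed by a pointwise 1x1 `ConvModule`. A minimal sketch, assuming the standalone `mmcv` package:

```python
import torch
from mmcv.cnn import DepthwiseSeparableConvModule

# depthwise 3x3 (groups=16) followed by pointwise 1x1, both with BN + ReLU
dwconv = DepthwiseSeparableConvModule(16, 32, 3, padding=1, norm_cfg=dict(type='BN'))
print(dwconv(torch.randn(2, 16, 8, 8)).shape)  # torch.Size([2, 32, 8, 8])
```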
- - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - """ - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - # handle tensors with different dimensions, not just 4D tensors. - shape = (x.shape[0], ) + (1, ) * (x.ndim - 1) - random_tensor = keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - output = x.div(keep_prob) * random_tensor.floor() - return output - - -@DROPOUT_LAYERS.register_module() -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - - Args: - drop_prob (float): Probability of the path to be zeroed. Default: 0.1 - """ - - def __init__(self, drop_prob=0.1): - super().__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -@DROPOUT_LAYERS.register_module() -class Dropout(nn.Dropout): - """A wrapper for ``torch.nn.Dropout``, We rename the ``p`` of - ``torch.nn.Dropout`` to ``drop_prob`` so as to be consistent with - ``DropPath`` - - Args: - drop_prob (float): Probability of the elements to be - zeroed. Default: 0.5. - inplace (bool): Do the operation inplace or not. Default: False. - """ - - def __init__(self, drop_prob=0.5, inplace=False): - super().__init__(p=drop_prob, inplace=inplace) - - -def build_dropout(cfg, default_args=None): - """Builder for drop out layers.""" - return build_from_cfg(cfg, DROPOUT_LAYERS, default_args) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/generalized_attention.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/generalized_attention.py deleted file mode 100644 index 87c5c378..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/generalized_attention.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..utils import kaiming_init -from .registry import PLUGIN_LAYERS - - -@PLUGIN_LAYERS.register_module() -class GeneralizedAttention(nn.Module): - """GeneralizedAttention module. - - See 'An Empirical Study of Spatial Attention Mechanisms in Deep Networks' - (https://arxiv.org/abs/1711.07971) for details. - - Args: - in_channels (int): Channels of the input feature map. - spatial_range (int): The spatial range. -1 indicates no spatial range - constraint. Default: -1. - num_heads (int): The head number of empirical_attention module. - Default: 9. - position_embedding_dim (int): The position embedding dimension. - Default: -1. - position_magnitude (int): A multiplier acting on coord difference. - Default: 1. - kv_stride (int): The feature stride acting on key/value feature map. - Default: 2. - q_stride (int): The feature stride acting on query feature map. - Default: 1. - attention_type (str): A binary indicator string for indicating which - items in generalized empirical_attention module are used. - Default: '1111'. 
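The `drop.py` module deleted above provides stochastic depth: `DropPath` zeroes the residual-branch output for a random subset of samples during training and rescales the survivors by `1 / (1 - drop_prob)`, while acting as an identity at inference time. A minimal sketch (import path from the bricks `__all__` shown earlier):

```python
import torch
from mmcv.cnn.bricks import DropPath

dp = DropPath(drop_prob=0.2)
x = torch.randn(4, 8)

dp.train()
y = dp(x)  # some rows are zeroed, the remaining rows are scaled by 1 / 0.8

dp.eval()
print(torch.equal(dp(x), x))  # True: identity in eval mode
```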
- - - '1000' indicates 'query and key content' (appr - appr) item, - - '0100' indicates 'query content and relative position' - (appr - position) item, - - '0010' indicates 'key content only' (bias - appr) item, - - '0001' indicates 'relative position only' (bias - position) item. - """ - - _abbr_ = 'gen_attention_block' - - def __init__(self, - in_channels, - spatial_range=-1, - num_heads=9, - position_embedding_dim=-1, - position_magnitude=1, - kv_stride=2, - q_stride=1, - attention_type='1111'): - - super().__init__() - - # hard range means local range for non-local operation - self.position_embedding_dim = ( - position_embedding_dim - if position_embedding_dim > 0 else in_channels) - - self.position_magnitude = position_magnitude - self.num_heads = num_heads - self.in_channels = in_channels - self.spatial_range = spatial_range - self.kv_stride = kv_stride - self.q_stride = q_stride - self.attention_type = [bool(int(_)) for _ in attention_type] - self.qk_embed_dim = in_channels // num_heads - out_c = self.qk_embed_dim * num_heads - - if self.attention_type[0] or self.attention_type[1]: - self.query_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=out_c, - kernel_size=1, - bias=False) - self.query_conv.kaiming_init = True - - if self.attention_type[0] or self.attention_type[2]: - self.key_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=out_c, - kernel_size=1, - bias=False) - self.key_conv.kaiming_init = True - - self.v_dim = in_channels // num_heads - self.value_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=self.v_dim * num_heads, - kernel_size=1, - bias=False) - self.value_conv.kaiming_init = True - - if self.attention_type[1] or self.attention_type[3]: - self.appr_geom_fc_x = nn.Linear( - self.position_embedding_dim // 2, out_c, bias=False) - self.appr_geom_fc_x.kaiming_init = True - - self.appr_geom_fc_y = nn.Linear( - self.position_embedding_dim // 2, out_c, bias=False) - self.appr_geom_fc_y.kaiming_init = True - - if self.attention_type[2]: - stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2) - appr_bias_value = -2 * stdv * torch.rand(out_c) + stdv - self.appr_bias = nn.Parameter(appr_bias_value) - - if self.attention_type[3]: - stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2) - geom_bias_value = -2 * stdv * torch.rand(out_c) + stdv - self.geom_bias = nn.Parameter(geom_bias_value) - - self.proj_conv = nn.Conv2d( - in_channels=self.v_dim * num_heads, - out_channels=in_channels, - kernel_size=1, - bias=True) - self.proj_conv.kaiming_init = True - self.gamma = nn.Parameter(torch.zeros(1)) - - if self.spatial_range >= 0: - # only works when non local is after 3*3 conv - if in_channels == 256: - max_len = 84 - elif in_channels == 512: - max_len = 42 - - max_len_kv = int((max_len - 1.0) / self.kv_stride + 1) - local_constraint_map = np.ones( - (max_len, max_len, max_len_kv, max_len_kv), dtype=int) - for iy in range(max_len): - for ix in range(max_len): - local_constraint_map[ - iy, ix, - max((iy - self.spatial_range) // - self.kv_stride, 0):min((iy + self.spatial_range + - 1) // self.kv_stride + - 1, max_len), - max((ix - self.spatial_range) // - self.kv_stride, 0):min((ix + self.spatial_range + - 1) // self.kv_stride + - 1, max_len)] = 0 - - self.local_constraint_map = nn.Parameter( - torch.from_numpy(local_constraint_map).byte(), - requires_grad=False) - - if self.q_stride > 1: - self.q_downsample = nn.AvgPool2d( - kernel_size=1, stride=self.q_stride) - else: - self.q_downsample = None - - if self.kv_stride > 1: - self.kv_downsample = nn.AvgPool2d( - 
kernel_size=1, stride=self.kv_stride) - else: - self.kv_downsample = None - - self.init_weights() - - def get_position_embedding(self, - h, - w, - h_kv, - w_kv, - q_stride, - kv_stride, - device, - dtype, - feat_dim, - wave_length=1000): - # the default type of Tensor is float32, leading to type mismatch - # in fp16 mode. Cast it to support fp16 mode. - h_idxs = torch.linspace(0, h - 1, h).to(device=device, dtype=dtype) - h_idxs = h_idxs.view((h, 1)) * q_stride - - w_idxs = torch.linspace(0, w - 1, w).to(device=device, dtype=dtype) - w_idxs = w_idxs.view((w, 1)) * q_stride - - h_kv_idxs = torch.linspace(0, h_kv - 1, h_kv).to( - device=device, dtype=dtype) - h_kv_idxs = h_kv_idxs.view((h_kv, 1)) * kv_stride - - w_kv_idxs = torch.linspace(0, w_kv - 1, w_kv).to( - device=device, dtype=dtype) - w_kv_idxs = w_kv_idxs.view((w_kv, 1)) * kv_stride - - # (h, h_kv, 1) - h_diff = h_idxs.unsqueeze(1) - h_kv_idxs.unsqueeze(0) - h_diff *= self.position_magnitude - - # (w, w_kv, 1) - w_diff = w_idxs.unsqueeze(1) - w_kv_idxs.unsqueeze(0) - w_diff *= self.position_magnitude - - feat_range = torch.arange(0, feat_dim / 4).to( - device=device, dtype=dtype) - - dim_mat = torch.Tensor([wave_length]).to(device=device, dtype=dtype) - dim_mat = dim_mat**((4. / feat_dim) * feat_range) - dim_mat = dim_mat.view((1, 1, -1)) - - embedding_x = torch.cat( - ((w_diff / dim_mat).sin(), (w_diff / dim_mat).cos()), dim=2) - - embedding_y = torch.cat( - ((h_diff / dim_mat).sin(), (h_diff / dim_mat).cos()), dim=2) - - return embedding_x, embedding_y - - def forward(self, x_input): - num_heads = self.num_heads - - # use empirical_attention - if self.q_downsample is not None: - x_q = self.q_downsample(x_input) - else: - x_q = x_input - n, _, h, w = x_q.shape - - if self.kv_downsample is not None: - x_kv = self.kv_downsample(x_input) - else: - x_kv = x_input - _, _, h_kv, w_kv = x_kv.shape - - if self.attention_type[0] or self.attention_type[1]: - proj_query = self.query_conv(x_q).view( - (n, num_heads, self.qk_embed_dim, h * w)) - proj_query = proj_query.permute(0, 1, 3, 2) - - if self.attention_type[0] or self.attention_type[2]: - proj_key = self.key_conv(x_kv).view( - (n, num_heads, self.qk_embed_dim, h_kv * w_kv)) - - if self.attention_type[1] or self.attention_type[3]: - position_embed_x, position_embed_y = self.get_position_embedding( - h, w, h_kv, w_kv, self.q_stride, self.kv_stride, - x_input.device, x_input.dtype, self.position_embedding_dim) - # (n, num_heads, w, w_kv, dim) - position_feat_x = self.appr_geom_fc_x(position_embed_x).\ - view(1, w, w_kv, num_heads, self.qk_embed_dim).\ - permute(0, 3, 1, 2, 4).\ - repeat(n, 1, 1, 1, 1) - - # (n, num_heads, h, h_kv, dim) - position_feat_y = self.appr_geom_fc_y(position_embed_y).\ - view(1, h, h_kv, num_heads, self.qk_embed_dim).\ - permute(0, 3, 1, 2, 4).\ - repeat(n, 1, 1, 1, 1) - - position_feat_x /= math.sqrt(2) - position_feat_y /= math.sqrt(2) - - # accelerate for saliency only - if (np.sum(self.attention_type) == 1) and self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim).\ - repeat(n, 1, 1, 1) - - energy = torch.matmul(appr_bias, proj_key).\ - view(n, num_heads, 1, h_kv * w_kv) - - h = 1 - w = 1 - else: - # (n, num_heads, h*w, h_kv*w_kv), query before key, 540mb for - if not self.attention_type[0]: - energy = torch.zeros( - n, - num_heads, - h, - w, - h_kv, - w_kv, - dtype=x_input.dtype, - device=x_input.device) - - # attention_type[0]: appr - appr - # attention_type[1]: appr - position - # attention_type[2]: bias - appr - 
# attention_type[3]: bias - position - if self.attention_type[0] or self.attention_type[2]: - if self.attention_type[0] and self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim) - energy = torch.matmul(proj_query + appr_bias, proj_key).\ - view(n, num_heads, h, w, h_kv, w_kv) - - elif self.attention_type[0]: - energy = torch.matmul(proj_query, proj_key).\ - view(n, num_heads, h, w, h_kv, w_kv) - - elif self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim).\ - repeat(n, 1, 1, 1) - - energy += torch.matmul(appr_bias, proj_key).\ - view(n, num_heads, 1, 1, h_kv, w_kv) - - if self.attention_type[1] or self.attention_type[3]: - if self.attention_type[1] and self.attention_type[3]: - geom_bias = self.geom_bias.\ - view(1, num_heads, 1, self.qk_embed_dim) - - proj_query_reshape = (proj_query + geom_bias).\ - view(n, num_heads, h, w, self.qk_embed_dim) - - energy_x = torch.matmul( - proj_query_reshape.permute(0, 1, 3, 2, 4), - position_feat_x.permute(0, 1, 2, 4, 3)) - energy_x = energy_x.\ - permute(0, 1, 3, 2, 4).unsqueeze(4) - - energy_y = torch.matmul( - proj_query_reshape, - position_feat_y.permute(0, 1, 2, 4, 3)) - energy_y = energy_y.unsqueeze(5) - - energy += energy_x + energy_y - - elif self.attention_type[1]: - proj_query_reshape = proj_query.\ - view(n, num_heads, h, w, self.qk_embed_dim) - proj_query_reshape = proj_query_reshape.\ - permute(0, 1, 3, 2, 4) - position_feat_x_reshape = position_feat_x.\ - permute(0, 1, 2, 4, 3) - position_feat_y_reshape = position_feat_y.\ - permute(0, 1, 2, 4, 3) - - energy_x = torch.matmul(proj_query_reshape, - position_feat_x_reshape) - energy_x = energy_x.permute(0, 1, 3, 2, 4).unsqueeze(4) - - energy_y = torch.matmul(proj_query_reshape, - position_feat_y_reshape) - energy_y = energy_y.unsqueeze(5) - - energy += energy_x + energy_y - - elif self.attention_type[3]: - geom_bias = self.geom_bias.\ - view(1, num_heads, self.qk_embed_dim, 1).\ - repeat(n, 1, 1, 1) - - position_feat_x_reshape = position_feat_x.\ - view(n, num_heads, w * w_kv, self.qk_embed_dim) - - position_feat_y_reshape = position_feat_y.\ - view(n, num_heads, h * h_kv, self.qk_embed_dim) - - energy_x = torch.matmul(position_feat_x_reshape, geom_bias) - energy_x = energy_x.view(n, num_heads, 1, w, 1, w_kv) - - energy_y = torch.matmul(position_feat_y_reshape, geom_bias) - energy_y = energy_y.view(n, num_heads, h, 1, h_kv, 1) - - energy += energy_x + energy_y - - energy = energy.view(n, num_heads, h * w, h_kv * w_kv) - - if self.spatial_range >= 0: - cur_local_constraint_map = \ - self.local_constraint_map[:h, :w, :h_kv, :w_kv].\ - contiguous().\ - view(1, 1, h*w, h_kv*w_kv) - - energy = energy.masked_fill_(cur_local_constraint_map, - float('-inf')) - - attention = F.softmax(energy, 3) - - proj_value = self.value_conv(x_kv) - proj_value_reshape = proj_value.\ - view((n, num_heads, self.v_dim, h_kv * w_kv)).\ - permute(0, 1, 3, 2) - - out = torch.matmul(attention, proj_value_reshape).\ - permute(0, 1, 3, 2).\ - contiguous().\ - view(n, self.v_dim * self.num_heads, h, w) - - out = self.proj_conv(out) - - # output is downsampled, upsample back to input size - if self.q_downsample is not None: - out = F.interpolate( - out, - size=x_input.shape[2:], - mode='bilinear', - align_corners=False) - - out = self.gamma * out + x_input - return out - - def init_weights(self): - for m in self.modules(): - if hasattr(m, 'kaiming_init') and m.kaiming_init: - kaiming_init( - m, - mode='fan_in', - nonlinearity='leaky_relu', 
- bias=0, - distribution='uniform', - a=1) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/hsigmoid.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/hsigmoid.py deleted file mode 100644 index 1b538371..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/hsigmoid.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class HSigmoid(nn.Module): - """Hard Sigmoid Module. Apply the hard sigmoid function: - Hsigmoid(x) = min(max((x + bias) / divisor, min_value), max_value) - Default: Hsigmoid(x) = min(max((x + 3) / 6, 0), 1) - - Note: - In MMCV v1.4.4, we modified the default value of args to align with - PyTorch official. - - Args: - bias (float): Bias of the input feature map. Default: 3.0. - divisor (float): Divisor of the input feature map. Default: 6.0. - min_value (float): Lower bound value. Default: 0.0. - max_value (float): Upper bound value. Default: 1.0. - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, bias=3.0, divisor=6.0, min_value=0.0, max_value=1.0): - super().__init__() - warnings.warn( - 'In MMCV v1.4.4, we modified the default value of args to align ' - 'with PyTorch official. Previous Implementation: ' - 'Hsigmoid(x) = min(max((x + 1) / 2, 0), 1). ' - 'Current Implementation: ' - 'Hsigmoid(x) = min(max((x + 3) / 6, 0), 1).') - self.bias = bias - self.divisor = divisor - assert self.divisor != 0 - self.min_value = min_value - self.max_value = max_value - - def forward(self, x): - x = (x + self.bias) / self.divisor - - return x.clamp_(self.min_value, self.max_value) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/hswish.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/hswish.py deleted file mode 100644 index 6089d0cc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/hswish.py +++ /dev/null @@ -1,38 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from mmcv.utils import TORCH_VERSION, digit_version -from .registry import ACTIVATION_LAYERS - - -class HSwish(nn.Module): - """Hard Swish Module. - - This module applies the hard swish function: - - .. math:: - Hswish(x) = x * ReLU6(x + 3) / 6 - - Args: - inplace (bool): can optionally do the operation in-place. - Default: False. - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, inplace=False): - super().__init__() - self.act = nn.ReLU6(inplace) - - def forward(self, x): - return x * self.act(x + 3) / 6 - - -if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.7')): - # Hardswish is not supported when PyTorch version < 1.6. - # And Hardswish in PyTorch 1.6 does not support inplace. - ACTIVATION_LAYERS.register_module(module=HSwish) -else: - ACTIVATION_LAYERS.register_module(module=nn.Hardswish, name='HSwish') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/non_local.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/non_local.py deleted file mode 100644 index f7979799..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/non_local.py +++ /dev/null @@ -1,303 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
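`hsigmoid.py` and `hswish.py` above define the hard approximations used by MobileNetV3-style models: `HSigmoid(x) = clamp((x + 3) / 6, 0, 1)` and `HSwish(x) = x * ReLU6(x + 3) / 6`. A quick numeric sketch (import paths from the bricks `__all__`; values in comments follow those formulas):

```python
import torch
from mmcv.cnn.bricks import HSigmoid, HSwish

x = torch.tensor([-4.0, 0.0, 4.0])
print(HSigmoid()(x))  # ~ tensor([0.0, 0.5, 1.0])
print(HSwish()(x))    # ~ tensor([0.0, 0.0, 4.0])
```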
-from abc import ABCMeta - -import torch -import torch.nn as nn - -from ..utils import constant_init, normal_init -from .conv_module import ConvModule -from .registry import PLUGIN_LAYERS - - -class _NonLocalNd(nn.Module, metaclass=ABCMeta): - """Basic Non-local module. - - This module is proposed in - "Non-local Neural Networks" - Paper reference: https://arxiv.org/abs/1711.07971 - Code reference: https://github.com/AlexHex7/Non-local_pytorch - - Args: - in_channels (int): Channels of the input feature map. - reduction (int): Channel reduction ratio. Default: 2. - use_scale (bool): Whether to scale pairwise_weight by - `1/sqrt(inter_channels)` when the mode is `embedded_gaussian`. - Default: True. - conv_cfg (None | dict): The config dict for convolution layers. - If not specified, it will use `nn.Conv2d` for convolution layers. - Default: None. - norm_cfg (None | dict): The config dict for normalization layers. - Default: None. (This parameter is only applicable to conv_out.) - mode (str): Options are `gaussian`, `concatenation`, - `embedded_gaussian` and `dot_product`. Default: embedded_gaussian. - """ - - def __init__(self, - in_channels, - reduction=2, - use_scale=True, - conv_cfg=None, - norm_cfg=None, - mode='embedded_gaussian', - **kwargs): - super().__init__() - self.in_channels = in_channels - self.reduction = reduction - self.use_scale = use_scale - self.inter_channels = max(in_channels // reduction, 1) - self.mode = mode - - if mode not in [ - 'gaussian', 'embedded_gaussian', 'dot_product', 'concatenation' - ]: - raise ValueError("Mode should be in 'gaussian', 'concatenation', " - f"'embedded_gaussian' or 'dot_product', but got " - f'{mode} instead.') - - # g, theta, phi are defaulted as `nn.ConvNd`. - # Here we use ConvModule for potential usage. 
- self.g = ConvModule( - self.in_channels, - self.inter_channels, - kernel_size=1, - conv_cfg=conv_cfg, - act_cfg=None) - self.conv_out = ConvModule( - self.inter_channels, - self.in_channels, - kernel_size=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - if self.mode != 'gaussian': - self.theta = ConvModule( - self.in_channels, - self.inter_channels, - kernel_size=1, - conv_cfg=conv_cfg, - act_cfg=None) - self.phi = ConvModule( - self.in_channels, - self.inter_channels, - kernel_size=1, - conv_cfg=conv_cfg, - act_cfg=None) - - if self.mode == 'concatenation': - self.concat_project = ConvModule( - self.inter_channels * 2, - 1, - kernel_size=1, - stride=1, - padding=0, - bias=False, - act_cfg=dict(type='ReLU')) - - self.init_weights(**kwargs) - - def init_weights(self, std=0.01, zeros_init=True): - if self.mode != 'gaussian': - for m in [self.g, self.theta, self.phi]: - normal_init(m.conv, std=std) - else: - normal_init(self.g.conv, std=std) - if zeros_init: - if self.conv_out.norm_cfg is None: - constant_init(self.conv_out.conv, 0) - else: - constant_init(self.conv_out.norm, 0) - else: - if self.conv_out.norm_cfg is None: - normal_init(self.conv_out.conv, std=std) - else: - normal_init(self.conv_out.norm, std=std) - - def gaussian(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - pairwise_weight = pairwise_weight.softmax(dim=-1) - return pairwise_weight - - def embedded_gaussian(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - if self.use_scale: - # theta_x.shape[-1] is `self.inter_channels` - pairwise_weight /= theta_x.shape[-1]**0.5 - pairwise_weight = pairwise_weight.softmax(dim=-1) - return pairwise_weight - - def dot_product(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - pairwise_weight /= pairwise_weight.shape[-1] - return pairwise_weight - - def concatenation(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - h = theta_x.size(2) - w = phi_x.size(3) - theta_x = theta_x.repeat(1, 1, 1, w) - phi_x = phi_x.repeat(1, 1, h, 1) - - concat_feature = torch.cat([theta_x, phi_x], dim=1) - pairwise_weight = self.concat_project(concat_feature) - n, _, h, w = pairwise_weight.size() - pairwise_weight = pairwise_weight.view(n, h, w) - pairwise_weight /= pairwise_weight.shape[-1] - - return pairwise_weight - - def forward(self, x): - # Assume `reduction = 1`, then `inter_channels = C` - # or `inter_channels = C` when `mode="gaussian"` - - # NonLocal1d x: [N, C, H] - # NonLocal2d x: [N, C, H, W] - # NonLocal3d x: [N, C, T, H, W] - n = x.size(0) - - # NonLocal1d g_x: [N, H, C] - # NonLocal2d g_x: [N, HxW, C] - # NonLocal3d g_x: [N, TxHxW, C] - g_x = self.g(x).view(n, self.inter_channels, -1) - g_x = g_x.permute(0, 2, 1) - - # NonLocal1d theta_x: [N, H, C], phi_x: [N, C, H] - # NonLocal2d theta_x: [N, HxW, C], phi_x: [N, C, HxW] - # NonLocal3d theta_x: [N, TxHxW, C], phi_x: [N, C, TxHxW] - if self.mode == 'gaussian': - theta_x = x.view(n, self.in_channels, -1) - theta_x = theta_x.permute(0, 2, 1) 
- if self.sub_sample: - phi_x = self.phi(x).view(n, self.in_channels, -1) - else: - phi_x = x.view(n, self.in_channels, -1) - elif self.mode == 'concatenation': - theta_x = self.theta(x).view(n, self.inter_channels, -1, 1) - phi_x = self.phi(x).view(n, self.inter_channels, 1, -1) - else: - theta_x = self.theta(x).view(n, self.inter_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - phi_x = self.phi(x).view(n, self.inter_channels, -1) - - pairwise_func = getattr(self, self.mode) - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = pairwise_func(theta_x, phi_x) - - # NonLocal1d y: [N, H, C] - # NonLocal2d y: [N, HxW, C] - # NonLocal3d y: [N, TxHxW, C] - y = torch.matmul(pairwise_weight, g_x) - # NonLocal1d y: [N, C, H] - # NonLocal2d y: [N, C, H, W] - # NonLocal3d y: [N, C, T, H, W] - y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels, - *x.size()[2:]) - - output = x + self.conv_out(y) - - return output - - -class NonLocal1d(_NonLocalNd): - """1D Non-local module. - - Args: - in_channels (int): Same as `NonLocalND`. - sub_sample (bool): Whether to apply max pooling after pairwise - function (Note that the `sub_sample` is applied on spatial only). - Default: False. - conv_cfg (None | dict): Same as `NonLocalND`. - Default: dict(type='Conv1d'). - """ - - def __init__(self, - in_channels, - sub_sample=False, - conv_cfg=dict(type='Conv1d'), - **kwargs): - super().__init__(in_channels, conv_cfg=conv_cfg, **kwargs) - - self.sub_sample = sub_sample - - if sub_sample: - max_pool_layer = nn.MaxPool1d(kernel_size=2) - self.g = nn.Sequential(self.g, max_pool_layer) - if self.mode != 'gaussian': - self.phi = nn.Sequential(self.phi, max_pool_layer) - else: - self.phi = max_pool_layer - - -@PLUGIN_LAYERS.register_module() -class NonLocal2d(_NonLocalNd): - """2D Non-local module. - - Args: - in_channels (int): Same as `NonLocalND`. - sub_sample (bool): Whether to apply max pooling after pairwise - function (Note that the `sub_sample` is applied on spatial only). - Default: False. - conv_cfg (None | dict): Same as `NonLocalND`. - Default: dict(type='Conv2d'). - """ - - _abbr_ = 'nonlocal_block' - - def __init__(self, - in_channels, - sub_sample=False, - conv_cfg=dict(type='Conv2d'), - **kwargs): - super().__init__(in_channels, conv_cfg=conv_cfg, **kwargs) - - self.sub_sample = sub_sample - - if sub_sample: - max_pool_layer = nn.MaxPool2d(kernel_size=(2, 2)) - self.g = nn.Sequential(self.g, max_pool_layer) - if self.mode != 'gaussian': - self.phi = nn.Sequential(self.phi, max_pool_layer) - else: - self.phi = max_pool_layer - - -class NonLocal3d(_NonLocalNd): - """3D Non-local module. - - Args: - in_channels (int): Same as `NonLocalND`. - sub_sample (bool): Whether to apply max pooling after pairwise - function (Note that the `sub_sample` is applied on spatial only). - Default: False. - conv_cfg (None | dict): Same as `NonLocalND`. - Default: dict(type='Conv3d'). 
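`NonLocal2d`, completed above, wraps the non-local attention computation as a residual module, so its output keeps the input shape. A minimal sketch, assuming the standalone `mmcv` package:

```python
import torch
from mmcv.cnn import NonLocal2d

block = NonLocal2d(in_channels=64, reduction=2, mode='embedded_gaussian')
feat = torch.randn(2, 64, 16, 16)
# Residual output: x + conv_out(attention(x)), same shape as the input.
print(block(feat).shape)  # torch.Size([2, 64, 16, 16])
```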
- """ - - def __init__(self, - in_channels, - sub_sample=False, - conv_cfg=dict(type='Conv3d'), - **kwargs): - super().__init__(in_channels, conv_cfg=conv_cfg, **kwargs) - self.sub_sample = sub_sample - - if sub_sample: - max_pool_layer = nn.MaxPool3d(kernel_size=(1, 2, 2)) - self.g = nn.Sequential(self.g, max_pool_layer) - if self.mode != 'gaussian': - self.phi = nn.Sequential(self.phi, max_pool_layer) - else: - self.phi = max_pool_layer diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/norm.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/norm.py deleted file mode 100644 index 51efdc18..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/norm.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect - -import torch.nn as nn - -from mmcv.utils import is_tuple_of -from mmcv.utils.parrots_wrapper import SyncBatchNorm, _BatchNorm, _InstanceNorm -from .registry import NORM_LAYERS - -NORM_LAYERS.register_module('BN', module=nn.BatchNorm2d) -NORM_LAYERS.register_module('BN1d', module=nn.BatchNorm1d) -NORM_LAYERS.register_module('BN2d', module=nn.BatchNorm2d) -NORM_LAYERS.register_module('BN3d', module=nn.BatchNorm3d) -NORM_LAYERS.register_module('SyncBN', module=SyncBatchNorm) -NORM_LAYERS.register_module('GN', module=nn.GroupNorm) -NORM_LAYERS.register_module('LN', module=nn.LayerNorm) -NORM_LAYERS.register_module('IN', module=nn.InstanceNorm2d) -NORM_LAYERS.register_module('IN1d', module=nn.InstanceNorm1d) -NORM_LAYERS.register_module('IN2d', module=nn.InstanceNorm2d) -NORM_LAYERS.register_module('IN3d', module=nn.InstanceNorm3d) - - -def infer_abbr(class_type): - """Infer abbreviation from the class name. - - When we build a norm layer with `build_norm_layer()`, we want to preserve - the norm type in variable names, e.g, self.bn1, self.gn. This method will - infer the abbreviation to map class types to abbreviations. - - Rule 1: If the class has the property "_abbr_", return the property. - Rule 2: If the parent class is _BatchNorm, GroupNorm, LayerNorm or - InstanceNorm, the abbreviation of this layer will be "bn", "gn", "ln" and - "in" respectively. - Rule 3: If the class name contains "batch", "group", "layer" or "instance", - the abbreviation of this layer will be "bn", "gn", "ln" and "in" - respectively. - Rule 4: Otherwise, the abbreviation falls back to "norm". - - Args: - class_type (type): The norm layer type. - - Returns: - str: The inferred abbreviation. - """ - if not inspect.isclass(class_type): - raise TypeError( - f'class_type must be a type, but got {type(class_type)}') - if hasattr(class_type, '_abbr_'): - return class_type._abbr_ - if issubclass(class_type, _InstanceNorm): # IN is a subclass of BN - return 'in' - elif issubclass(class_type, _BatchNorm): - return 'bn' - elif issubclass(class_type, nn.GroupNorm): - return 'gn' - elif issubclass(class_type, nn.LayerNorm): - return 'ln' - else: - class_name = class_type.__name__.lower() - if 'batch' in class_name: - return 'bn' - elif 'group' in class_name: - return 'gn' - elif 'layer' in class_name: - return 'ln' - elif 'instance' in class_name: - return 'in' - else: - return 'norm_layer' - - -def build_norm_layer(cfg, num_features, postfix=''): - """Build normalization layer. - - Args: - cfg (dict): The norm layer config, which should contain: - - - type (str): Layer type. - - layer args: Args needed to instantiate a norm layer. - - requires_grad (bool, optional): Whether stop gradient updates. 
- num_features (int): Number of input channels. - postfix (int | str): The postfix to be appended into norm abbreviation - to create named layer. - - Returns: - tuple[str, nn.Module]: The first element is the layer name consisting - of abbreviation and postfix, e.g., bn1, gn. The second element is the - created norm layer. - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in NORM_LAYERS: - raise KeyError(f'Unrecognized norm type {layer_type}') - - norm_layer = NORM_LAYERS.get(layer_type) - abbr = infer_abbr(norm_layer) - - assert isinstance(postfix, (int, str)) - name = abbr + str(postfix) - - requires_grad = cfg_.pop('requires_grad', True) - cfg_.setdefault('eps', 1e-5) - if layer_type != 'GN': - layer = norm_layer(num_features, **cfg_) - if layer_type == 'SyncBN' and hasattr(layer, '_specify_ddp_gpu_num'): - layer._specify_ddp_gpu_num(1) - else: - assert 'num_groups' in cfg_ - layer = norm_layer(num_channels=num_features, **cfg_) - - for param in layer.parameters(): - param.requires_grad = requires_grad - - return name, layer - - -def is_norm(layer, exclude=None): - """Check if a layer is a normalization layer. - - Args: - layer (nn.Module): The layer to be checked. - exclude (type | tuple[type]): Types to be excluded. - - Returns: - bool: Whether the layer is a norm layer. - """ - if exclude is not None: - if not isinstance(exclude, tuple): - exclude = (exclude, ) - if not is_tuple_of(exclude, type): - raise TypeError( - f'"exclude" must be either None or type or a tuple of types, ' - f'but got {type(exclude)}: {exclude}') - - if exclude and isinstance(layer, exclude): - return False - - all_norm_bases = (_BatchNorm, _InstanceNorm, nn.GroupNorm, nn.LayerNorm) - return isinstance(layer, all_norm_bases) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/padding.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/padding.py deleted file mode 100644 index e4ac6b28..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/padding.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .registry import PADDING_LAYERS - -PADDING_LAYERS.register_module('zero', module=nn.ZeroPad2d) -PADDING_LAYERS.register_module('reflect', module=nn.ReflectionPad2d) -PADDING_LAYERS.register_module('replicate', module=nn.ReplicationPad2d) - - -def build_padding_layer(cfg, *args, **kwargs): - """Build padding layer. - - Args: - cfg (None or dict): The padding layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate a padding layer. - - Returns: - nn.Module: Created padding layer. 
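`build_norm_layer` above returns a `(name, layer)` pair, where the name combines an inferred abbreviation ('bn', 'gn', 'ln', 'in') with the given postfix; this is how backbones end up with attributes such as `bn1`. A minimal sketch, assuming the standalone `mmcv` package:

```python
import torch.nn as nn
from mmcv.cnn import build_norm_layer

name, norm = build_norm_layer(dict(type='BN', requires_grad=True), 64, postfix=1)
print(name)                              # 'bn1'
print(isinstance(norm, nn.BatchNorm2d))  # True
```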
- """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - - cfg_ = cfg.copy() - padding_type = cfg_.pop('type') - if padding_type not in PADDING_LAYERS: - raise KeyError(f'Unrecognized padding type {padding_type}.') - else: - padding_layer = PADDING_LAYERS.get(padding_type) - - layer = padding_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/plugin.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/plugin.py deleted file mode 100644 index 6aa13f43..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/plugin.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import platform - -from .registry import PLUGIN_LAYERS - -if platform.system() == 'Windows': - import regex as re # type: ignore -else: - import re # type: ignore - - -def infer_abbr(class_type): - """Infer abbreviation from the class name. - - This method will infer the abbreviation to map class types to - abbreviations. - - Rule 1: If the class has the property "abbr", return the property. - Rule 2: Otherwise, the abbreviation falls back to snake case of class - name, e.g. the abbreviation of ``FancyBlock`` will be ``fancy_block``. - - Args: - class_type (type): The norm layer type. - - Returns: - str: The inferred abbreviation. - """ - - def camel2snack(word): - """Convert camel case word into snack case. - - Modified from `inflection lib - `_. - - Example:: - - >>> camel2snack("FancyBlock") - 'fancy_block' - """ - - word = re.sub(r'([A-Z]+)([A-Z][a-z])', r'\1_\2', word) - word = re.sub(r'([a-z\d])([A-Z])', r'\1_\2', word) - word = word.replace('-', '_') - return word.lower() - - if not inspect.isclass(class_type): - raise TypeError( - f'class_type must be a type, but got {type(class_type)}') - if hasattr(class_type, '_abbr_'): - return class_type._abbr_ - else: - return camel2snack(class_type.__name__) - - -def build_plugin_layer(cfg, postfix='', **kwargs): - """Build plugin layer. - - Args: - cfg (None or dict): cfg should contain: - - - type (str): identify plugin layer type. - - layer args: args needed to instantiate a plugin layer. - postfix (int, str): appended into norm abbreviation to - create named layer. Default: ''. - - Returns: - tuple[str, nn.Module]: The first one is the concatenation of - abbreviation and postfix. The second is the created plugin layer. - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in PLUGIN_LAYERS: - raise KeyError(f'Unrecognized plugin type {layer_type}') - - plugin_layer = PLUGIN_LAYERS.get(layer_type) - abbr = infer_abbr(plugin_layer) - - assert isinstance(postfix, (int, str)) - name = abbr + str(postfix) - - layer = plugin_layer(**kwargs, **cfg_) - - return name, layer diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/registry.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/registry.py deleted file mode 100644 index c2927977..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/registry.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmcv.utils import Registry - -CONV_LAYERS = Registry('conv layer') -NORM_LAYERS = Registry('norm layer') -ACTIVATION_LAYERS = Registry('activation layer') -PADDING_LAYERS = Registry('padding layer') -UPSAMPLE_LAYERS = Registry('upsample layer') -PLUGIN_LAYERS = Registry('plugin layer') - -DROPOUT_LAYERS = Registry('drop out layers') -POSITIONAL_ENCODING = Registry('position encoding') -ATTENTION = Registry('attention') -FEEDFORWARD_NETWORK = Registry('feed-forward Network') -TRANSFORMER_LAYER = Registry('transformerLayer') -TRANSFORMER_LAYER_SEQUENCE = Registry('transformer-layers sequence') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/scale.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/scale.py deleted file mode 100644 index afd5d2d4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/scale.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -class Scale(nn.Module): - """A learnable scale parameter. - - This layer scales the input by a learnable factor. It multiplies a - learnable scale parameter of shape (1,) with input of any shape. - - Args: - scale (float): Initial value of scale factor. Default: 1.0 - """ - - def __init__(self, scale=1.0): - super().__init__() - self.scale = nn.Parameter(torch.tensor(scale, dtype=torch.float)) - - def forward(self, x): - return x * self.scale diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/swish.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/swish.py deleted file mode 100644 index 7df0fbba..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/swish.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class Swish(nn.Module): - """Swish Module. - - This module applies the swish function: - - .. math:: - Swish(x) = x * Sigmoid(x) - - Returns: - Tensor: The output tensor. - """ - - def __init__(self): - super().__init__() - - def forward(self, x): - return x * torch.sigmoid(x) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/transformer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/transformer.py deleted file mode 100644 index f7ba4d9f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/transformer.py +++ /dev/null @@ -1,944 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import math -import warnings -from typing import Sequence - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from mmcv.cnn import (Linear, build_activation_layer, build_conv_layer, - build_norm_layer) -from mmcv.runner.base_module import BaseModule, ModuleList, Sequential -from mmcv.utils import (ConfigDict, build_from_cfg, deprecated_api_warning, - to_2tuple) -from .drop import build_dropout -from .registry import (ATTENTION, FEEDFORWARD_NETWORK, POSITIONAL_ENCODING, - TRANSFORMER_LAYER, TRANSFORMER_LAYER_SEQUENCE) - -# Avoid BC-breaking of importing MultiScaleDeformableAttention from this file -try: - from mmcv.ops.multi_scale_deform_attn import \ - MultiScaleDeformableAttention # noqa F401 - warnings.warn( - ImportWarning( - '``MultiScaleDeformableAttention`` has been moved to ' - '``mmcv.ops.multi_scale_deform_attn``, please change original path ' # noqa E501 - '``from mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` ' # noqa E501 - 'to ``from mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention`` ' # noqa E501 - )) - -except ImportError: - warnings.warn('Fail to import ``MultiScaleDeformableAttention`` from ' - '``mmcv.ops.multi_scale_deform_attn``, ' - 'You should install ``mmcv-full`` if you need this module. ') - - -def build_positional_encoding(cfg, default_args=None): - """Builder for Position Encoding.""" - return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args) - - -def build_attention(cfg, default_args=None): - """Builder for attention.""" - return build_from_cfg(cfg, ATTENTION, default_args) - - -def build_feedforward_network(cfg, default_args=None): - """Builder for feed-forward network (FFN).""" - return build_from_cfg(cfg, FEEDFORWARD_NETWORK, default_args) - - -def build_transformer_layer(cfg, default_args=None): - """Builder for transformer layer.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER, default_args) - - -def build_transformer_layer_sequence(cfg, default_args=None): - """Builder for transformer encoder and transformer decoder.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER_SEQUENCE, default_args) - - -class AdaptivePadding(nn.Module): - """Applies padding adaptively to the input. - - This module can make input get fully covered by filter - you specified. It support two modes "same" and "corner". The - "same" mode is same with "SAME" padding mode in TensorFlow, pad - zero around input. The "corner" mode would pad zero - to bottom right. - - Args: - kernel_size (int | tuple): Size of the kernel. Default: 1. - stride (int | tuple): Stride of the filter. Default: 1. - dilation (int | tuple): Spacing between kernel elements. - Default: 1. - padding (str): Support "same" and "corner", "corner" mode - would pad zero to bottom right, and "same" mode would - pad zero around input. Default: "corner". 
- - Example: - >>> kernel_size = 16 - >>> stride = 16 - >>> dilation = 1 - >>> input = torch.rand(1, 1, 15, 17) - >>> adap_pad = AdaptivePadding( - >>> kernel_size=kernel_size, - >>> stride=stride, - >>> dilation=dilation, - >>> padding="corner") - >>> out = adap_pad(input) - >>> assert (out.shape[2], out.shape[3]) == (16, 32) - >>> input = torch.rand(1, 1, 16, 17) - >>> out = adap_pad(input) - >>> assert (out.shape[2], out.shape[3]) == (16, 32) - """ - - def __init__(self, kernel_size=1, stride=1, dilation=1, padding='corner'): - super().__init__() - assert padding in ('same', 'corner') - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - self.padding = padding - self.kernel_size = kernel_size - self.stride = stride - self.dilation = dilation - - def get_pad_shape(self, input_shape): - """Calculate the padding size of input. - - Args: - input_shape (:obj:`torch.Size`): arrange as (H, W). - - Returns: - Tuple[int]: The padding size along the - original H and W directions - """ - input_h, input_w = input_shape - kernel_h, kernel_w = self.kernel_size - stride_h, stride_w = self.stride - output_h = math.ceil(input_h / stride_h) - output_w = math.ceil(input_w / stride_w) - pad_h = max((output_h - 1) * stride_h + - (kernel_h - 1) * self.dilation[0] + 1 - input_h, 0) - pad_w = max((output_w - 1) * stride_w + - (kernel_w - 1) * self.dilation[1] + 1 - input_w, 0) - return pad_h, pad_w - - def forward(self, x): - """Add padding to `x` - - Args: - x (Tensor): Input tensor has shape (B, C, H, W). - - Returns: - Tensor: The tensor with adaptive padding - """ - pad_h, pad_w = self.get_pad_shape(x.size()[-2:]) - if pad_h > 0 or pad_w > 0: - if self.padding == 'corner': - x = F.pad(x, [0, pad_w, 0, pad_h]) - elif self.padding == 'same': - x = F.pad(x, [ - pad_w // 2, pad_w - pad_w // 2, pad_h // 2, - pad_h - pad_h // 2 - ]) - return x - - -class PatchEmbed(BaseModule): - """Image to Patch Embedding. - - We use a conv layer to implement PatchEmbed. - - Args: - in_channels (int): The num of input channels. Default: 3 - embed_dims (int): The dimensions of embedding. Default: 768 - conv_type (str): The type of convolution - to generate patch embedding. Default: "Conv2d". - kernel_size (int): The kernel_size of embedding conv. Default: 16. - stride (int): The slide stride of embedding conv. - Default: 16. - padding (int | tuple | string): The padding length of - embedding conv. When it is a string, it means the mode - of adaptive padding, support "same" and "corner" now. - Default: "corner". - dilation (int): The dilation rate of embedding conv. Default: 1. - bias (bool): Bias of embed conv. Default: True. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - input_size (int | tuple | None): The size of input, which will be - used to calculate the out size. Only works when `dynamic_size` - is False. Default: None. - init_cfg (`mmcv.ConfigDict`, optional): The Config for initialization. - Default: None. 
- """ - - def __init__(self, - in_channels=3, - embed_dims=768, - conv_type='Conv2d', - kernel_size=16, - stride=16, - padding='corner', - dilation=1, - bias=True, - norm_cfg=None, - input_size=None, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - - self.embed_dims = embed_dims - if stride is None: - stride = kernel_size - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - if isinstance(padding, str): - self.adaptive_padding = AdaptivePadding( - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=padding) - # disable the padding of conv - padding = 0 - else: - self.adaptive_padding = None - padding = to_2tuple(padding) - - self.projection = build_conv_layer( - dict(type=conv_type), - in_channels=in_channels, - out_channels=embed_dims, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - bias=bias) - - if norm_cfg is not None: - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - else: - self.norm = None - - if input_size: - input_size = to_2tuple(input_size) - # `init_out_size` would be used outside to - # calculate the num_patches - # e.g. when `use_abs_pos_embed` outside - self.init_input_size = input_size - if self.adaptive_padding: - pad_h, pad_w = self.adaptive_padding.get_pad_shape(input_size) - input_h, input_w = input_size - input_h = input_h + pad_h - input_w = input_w + pad_w - input_size = (input_h, input_w) - - # https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html - h_out = (input_size[0] + 2 * padding[0] - dilation[0] * - (kernel_size[0] - 1) - 1) // stride[0] + 1 - w_out = (input_size[1] + 2 * padding[1] - dilation[1] * - (kernel_size[1] - 1) - 1) // stride[1] + 1 - self.init_out_size = (h_out, w_out) - else: - self.init_input_size = None - self.init_out_size = None - - def forward(self, x): - """ - Args: - x (Tensor): Has shape (B, C, H, W). In most case, C is 3. - - Returns: - tuple: Contains merged results and its spatial shape. - - - x (Tensor): Has shape (B, out_h * out_w, embed_dims) - - out_size (tuple[int]): Spatial shape of x, arrange as - (out_h, out_w). - """ - - if self.adaptive_padding: - x = self.adaptive_padding(x) - - x = self.projection(x) - out_size = (x.shape[2], x.shape[3]) - x = x.flatten(2).transpose(1, 2) - if self.norm is not None: - x = self.norm(x) - return x, out_size - - -class PatchMerging(BaseModule): - """Merge patch feature map. - - This layer groups feature map by kernel_size, and applies norm and linear - layers to the grouped feature map ((used in Swin Transformer)). - Our implementation uses `nn.Unfold` to - merge patches, which is about 25% faster than the original - implementation. However, we need to modify pretrained - models for compatibility. - - Args: - in_channels (int): The num of input channels. - to gets fully covered by filter and stride you specified. - out_channels (int): The num of output channels. - kernel_size (int | tuple, optional): the kernel size in the unfold - layer. Defaults to 2. - stride (int | tuple, optional): the stride of the sliding blocks in the - unfold layer. Default: None. (Would be set as `kernel_size`) - padding (int | tuple | string ): The padding length of - embedding conv. When it is a string, it means the mode - of adaptive padding, support "same" and "corner" now. - Default: "corner". - dilation (int | tuple, optional): dilation parameter in the unfold - layer. Default: 1. - bias (bool, optional): Whether to add bias in linear layer or not. - Defaults: False. 
- norm_cfg (dict, optional): Config dict for normalization layer. - Default: dict(type='LN'). - init_cfg (dict, optional): The extra config for initialization. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=2, - stride=None, - padding='corner', - dilation=1, - bias=False, - norm_cfg=dict(type='LN'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - if stride: - stride = stride - else: - stride = kernel_size - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - if isinstance(padding, str): - self.adaptive_padding = AdaptivePadding( - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=padding) - # disable the padding of unfold - padding = 0 - else: - self.adaptive_padding = None - - padding = to_2tuple(padding) - self.sampler = nn.Unfold( - kernel_size=kernel_size, - dilation=dilation, - padding=padding, - stride=stride) - - sample_dim = kernel_size[0] * kernel_size[1] * in_channels - - if norm_cfg is not None: - self.norm = build_norm_layer(norm_cfg, sample_dim)[1] - else: - self.norm = None - - self.reduction = nn.Linear(sample_dim, out_channels, bias=bias) - - def forward(self, x, input_size): - """ - Args: - x (Tensor): Has shape (B, H*W, C_in). - input_size (tuple[int]): The spatial shape of x, arrange as (H, W). - Default: None. - - Returns: - tuple: Contains merged results and its spatial shape. - - - x (Tensor): Has shape (B, Merged_H * Merged_W, C_out) - - out_size (tuple[int]): Spatial shape of x, arrange as - (Merged_H, Merged_W). - """ - B, L, C = x.shape - assert isinstance(input_size, Sequence), f'Expect ' \ - f'input_size is ' \ - f'`Sequence` ' \ - f'but get {input_size}' - - H, W = input_size - assert L == H * W, 'input feature has wrong size' - - x = x.view(B, H, W, C).permute([0, 3, 1, 2]) # B, C, H, W - - if self.adaptive_padding: - x = self.adaptive_padding(x) - H, W = x.shape[-2:] - - # Use nn.Unfold to merge patch. About 25% faster than original method, - # but need to modify pretrained model for compatibility - # if kernel_size=2 and stride=2, x should has shape (B, 4*C, H/2*W/2) - x = self.sampler(x) - - out_h = (H + 2 * self.sampler.padding[0] - self.sampler.dilation[0] * - (self.sampler.kernel_size[0] - 1) - - 1) // self.sampler.stride[0] + 1 - out_w = (W + 2 * self.sampler.padding[1] - self.sampler.dilation[1] * - (self.sampler.kernel_size[1] - 1) - - 1) // self.sampler.stride[1] + 1 - - output_size = (out_h, out_w) - x = x.transpose(1, 2) # B, H/2*W/2, 4*C - x = self.norm(x) if self.norm else x - x = self.reduction(x) - return x, output_size - - -@ATTENTION.register_module() -class MultiheadAttention(BaseModule): - """A wrapper for ``torch.nn.MultiheadAttention``. - - This module implements MultiheadAttention with identity connection, - and positional encoding is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): When it is True, Key, Query and Value are shape of - (batch, n, embed_dim), otherwise (n, batch, embed_dim). - Default to False. 
- """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=dict(type='Dropout', drop_prob=0.), - init_cfg=None, - batch_first=False, - **kwargs): - super().__init__(init_cfg) - if 'dropout' in kwargs: - warnings.warn( - 'The arguments `dropout` in MultiheadAttention ' - 'has been deprecated, now you can separately ' - 'set `attn_drop`(float), proj_drop(float), ' - 'and `dropout_layer`(dict) ', DeprecationWarning) - attn_drop = kwargs['dropout'] - dropout_layer['drop_prob'] = kwargs.pop('dropout') - - self.embed_dims = embed_dims - self.num_heads = num_heads - self.batch_first = batch_first - - self.attn = nn.MultiheadAttention(embed_dims, num_heads, attn_drop, - **kwargs) - - self.proj_drop = nn.Dropout(proj_drop) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else nn.Identity() - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiheadAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `MultiheadAttention`. - - **kwargs allow passing a more general data flow when combining - with other operations in `transformerlayer`. - - Args: - query (Tensor): The input query with shape [num_queries, bs, - embed_dims] if self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - If None, the ``query`` will be used. Defaults to None. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Defaults to None. - If None, the `key` will be used. - identity (Tensor): This tensor, with the same shape as x, - will be used for the identity link. - If None, `x` will be used. Defaults to None. - query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. If not None, it will - be added to `x` before forward function. Defaults to None. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Defaults to None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. Defaults to None. - attn_mask (Tensor): ByteTensor mask with shape [num_queries, - num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys]. - Defaults to None. - - Returns: - Tensor: forwarded results with shape - [num_queries, bs, embed_dims] - if self.batch_first is False, else - [bs, num_queries embed_dims]. 
- """ - - if key is None: - key = query - if value is None: - value = key - if identity is None: - identity = query - if key_pos is None: - if query_pos is not None: - # use query_pos if key_pos is not available - if query_pos.shape == key.shape: - key_pos = query_pos - else: - warnings.warn(f'position encoding of key is' - f'missing in {self.__class__.__name__}.') - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = value.transpose(0, 1) - - out = self.attn( - query=query, - key=key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - -@FEEDFORWARD_NETWORK.register_module() -class FFN(BaseModule): - """Implements feed-forward networks (FFNs) with identity connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. Defaults: 256. - feedforward_channels (int): The hidden dimension of FFNs. - Defaults: 1024. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Default: 2. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='ReLU') - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - add_identity (bool, optional): Whether to add the - identity connection. Default: `True`. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - @deprecated_api_warning( - { - 'dropout': 'ffn_drop', - 'add_residual': 'add_identity' - }, - cls_name='FFN') - def __init__(self, - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - ffn_drop=0., - dropout_layer=None, - add_identity=True, - init_cfg=None, - **kwargs): - super().__init__(init_cfg) - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.activate = build_activation_layer(act_cfg) - - layers = [] - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(ffn_drop))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - layers.append(nn.Dropout(ffn_drop)) - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - self.add_identity = add_identity - - @deprecated_api_warning({'residual': 'identity'}, cls_name='FFN') - def forward(self, x, identity=None): - """Forward function for `FFN`. - - The function would add x to the output tensor if residue is None. 
- """ - out = self.layers(x) - if not self.add_identity: - return self.dropout_layer(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -@TRANSFORMER_LAYER.register_module() -class BaseTransformerLayer(BaseModule): - """Base `TransformerLayer` for vision transformer. - - It can be built from `mmcv.ConfigDict` and support more flexible - customization, for example, using any number of `FFN or LN ` and - use different kinds of `attention` by specifying a list of `ConfigDict` - named `attn_cfgs`. It is worth mentioning that it supports `prenorm` - when you specifying `norm` as the first element of `operation_order`. - More details about the `prenorm`: `On Layer Normalization in the - Transformer Architecture `_ . - - Args: - attn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for `self_attention` or `cross_attention` modules, - The order of the configs in the list should be consistent with - corresponding attentions in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. Default: None. - ffn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for FFN, The order of the configs in the list should be - consistent with corresponding ffn in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm'). - Support `prenorm` when you specifying first element as `norm`. - Default:None. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): Key, Query and Value are shape - of (batch, n, embed_dim) - or (n, batch, embed_dim). Default to False. - """ - - def __init__(self, - attn_cfgs=None, - ffn_cfgs=dict( - type='FFN', - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - ffn_drop=0., - act_cfg=dict(type='ReLU', inplace=True), - ), - operation_order=None, - norm_cfg=dict(type='LN'), - init_cfg=None, - batch_first=False, - **kwargs): - - deprecated_args = dict( - feedforward_channels='feedforward_channels', - ffn_dropout='ffn_drop', - ffn_num_fcs='num_fcs') - for ori_name, new_name in deprecated_args.items(): - if ori_name in kwargs: - warnings.warn( - f'The arguments `{ori_name}` in BaseTransformerLayer ' - f'has been deprecated, now you should set `{new_name}` ' - f'and other FFN related arguments ' - f'to a dict named `ffn_cfgs`. ', DeprecationWarning) - ffn_cfgs[new_name] = kwargs[ori_name] - - super().__init__(init_cfg) - - self.batch_first = batch_first - - assert set(operation_order) & { - 'self_attn', 'norm', 'ffn', 'cross_attn'} == \ - set(operation_order), f'The operation_order of' \ - f' {self.__class__.__name__} should ' \ - f'contains all four operation type ' \ - f"{['self_attn', 'norm', 'ffn', 'cross_attn']}" - - num_attn = operation_order.count('self_attn') + operation_order.count( - 'cross_attn') - if isinstance(attn_cfgs, dict): - attn_cfgs = [copy.deepcopy(attn_cfgs) for _ in range(num_attn)] - else: - assert num_attn == len(attn_cfgs), f'The length ' \ - f'of attn_cfg {num_attn} is ' \ - f'not consistent with the number of attention' \ - f'in operation_order {operation_order}.' 
- - self.num_attn = num_attn - self.operation_order = operation_order - self.norm_cfg = norm_cfg - self.pre_norm = operation_order[0] == 'norm' - self.attentions = ModuleList() - - index = 0 - for operation_name in operation_order: - if operation_name in ['self_attn', 'cross_attn']: - if 'batch_first' in attn_cfgs[index]: - assert self.batch_first == attn_cfgs[index]['batch_first'] - else: - attn_cfgs[index]['batch_first'] = self.batch_first - attention = build_attention(attn_cfgs[index]) - # Some custom attentions used as `self_attn` - # or `cross_attn` can have different behavior. - attention.operation_name = operation_name - self.attentions.append(attention) - index += 1 - - self.embed_dims = self.attentions[0].embed_dims - - self.ffns = ModuleList() - num_ffns = operation_order.count('ffn') - if isinstance(ffn_cfgs, dict): - ffn_cfgs = ConfigDict(ffn_cfgs) - if isinstance(ffn_cfgs, dict): - ffn_cfgs = [copy.deepcopy(ffn_cfgs) for _ in range(num_ffns)] - assert len(ffn_cfgs) == num_ffns - for ffn_index in range(num_ffns): - if 'embed_dims' not in ffn_cfgs[ffn_index]: - ffn_cfgs[ffn_index]['embed_dims'] = self.embed_dims - else: - assert ffn_cfgs[ffn_index]['embed_dims'] == self.embed_dims - self.ffns.append( - build_feedforward_network(ffn_cfgs[ffn_index], - dict(type='FFN'))) - - self.norms = ModuleList() - num_norms = operation_order.count('norm') - for _ in range(num_norms): - self.norms.append(build_norm_layer(norm_cfg, self.embed_dims)[1]) - - def forward(self, - query, - key=None, - value=None, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerDecoderLayer`. - - **kwargs contains some specific arguments of attentions. - - Args: - query (Tensor): The input query with shape - [num_queries, bs, embed_dims] if - self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - value (Tensor): The value tensor with same shape as `key`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. - attn_masks (List[Tensor] | None): 2D Tensor used in - calculation of corresponding attention. The length of - it should equal to the number of `attention` in - `operation_order`. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in `self_attn` layer. - Defaults to None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: forwarded results with shape [num_queries, bs, embed_dims]. 
- """ - - norm_index = 0 - attn_index = 0 - ffn_index = 0 - identity = query - if attn_masks is None: - attn_masks = [None for _ in range(self.num_attn)] - elif isinstance(attn_masks, torch.Tensor): - attn_masks = [ - copy.deepcopy(attn_masks) for _ in range(self.num_attn) - ] - warnings.warn(f'Use same attn_mask in all attentions in ' - f'{self.__class__.__name__} ') - else: - assert len(attn_masks) == self.num_attn, f'The length of ' \ - f'attn_masks {len(attn_masks)} must be equal ' \ - f'to the number of attention in ' \ - f'operation_order {self.num_attn}' - - for layer in self.operation_order: - if layer == 'self_attn': - temp_key = temp_value = query - query = self.attentions[attn_index]( - query, - temp_key, - temp_value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=query_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=query_key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'norm': - query = self.norms[norm_index](query) - norm_index += 1 - - elif layer == 'cross_attn': - query = self.attentions[attn_index]( - query, - key, - value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=key_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'ffn': - query = self.ffns[ffn_index]( - query, identity if self.pre_norm else None) - ffn_index += 1 - - return query - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class TransformerLayerSequence(BaseModule): - """Base class for TransformerEncoder and TransformerDecoder in vision - transformer. - - As base-class of Encoder and Decoder in vision transformer. - Support customization such as specifying different kind - of `transformer_layer` in `transformer_coder`. - - Args: - transformerlayer (list[obj:`mmcv.ConfigDict`] | - obj:`mmcv.ConfigDict`): Config of transformerlayer - in TransformerCoder. If it is obj:`mmcv.ConfigDict`, - it would be repeated `num_layer` times to a - list[`mmcv.ConfigDict`]. Default: None. - num_layers (int): The number of `TransformerLayer`. Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, transformerlayers=None, num_layers=None, init_cfg=None): - super().__init__(init_cfg) - if isinstance(transformerlayers, dict): - transformerlayers = [ - copy.deepcopy(transformerlayers) for _ in range(num_layers) - ] - else: - assert isinstance(transformerlayers, list) and \ - len(transformerlayers) == num_layers - self.num_layers = num_layers - self.layers = ModuleList() - for i in range(num_layers): - self.layers.append(build_transformer_layer(transformerlayers[i])) - self.embed_dims = self.layers[0].embed_dims - self.pre_norm = self.layers[0].pre_norm - - def forward(self, - query, - key, - value, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerCoder`. - - Args: - query (Tensor): Input query with shape - `(num_queries, bs, embed_dims)`. - key (Tensor): The key tensor with shape - `(num_keys, bs, embed_dims)`. - value (Tensor): The value tensor with shape - `(num_keys, bs, embed_dims)`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. 
- attn_masks (List[Tensor], optional): Each element is 2D Tensor - which is used in calculation of corresponding attention in - operation_order. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in self-attention - Default: None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: results with shape [num_queries, bs, embed_dims]. - """ - for layer in self.layers: - query = layer( - query, - key, - value, - query_pos=query_pos, - key_pos=key_pos, - attn_masks=attn_masks, - query_key_padding_mask=query_key_padding_mask, - key_padding_mask=key_padding_mask, - **kwargs) - return query diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/upsample.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/upsample.py deleted file mode 100644 index 15e4febd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/upsample.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F - -from ..utils import xavier_init -from .registry import UPSAMPLE_LAYERS - -UPSAMPLE_LAYERS.register_module('nearest', module=nn.Upsample) -UPSAMPLE_LAYERS.register_module('bilinear', module=nn.Upsample) - - -@UPSAMPLE_LAYERS.register_module(name='pixel_shuffle') -class PixelShufflePack(nn.Module): - """Pixel Shuffle upsample layer. - - This module packs `F.pixel_shuffle()` and a nn.Conv2d module together to - achieve a simple upsampling with pixel shuffle. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Upsample ratio. - upsample_kernel (int): Kernel size of the conv layer to expand the - channels. - """ - - def __init__(self, in_channels, out_channels, scale_factor, - upsample_kernel): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.scale_factor = scale_factor - self.upsample_kernel = upsample_kernel - self.upsample_conv = nn.Conv2d( - self.in_channels, - self.out_channels * scale_factor * scale_factor, - self.upsample_kernel, - padding=(self.upsample_kernel - 1) // 2) - self.init_weights() - - def init_weights(self): - xavier_init(self.upsample_conv, distribution='uniform') - - def forward(self, x): - x = self.upsample_conv(x) - x = F.pixel_shuffle(x, self.scale_factor) - return x - - -def build_upsample_layer(cfg, *args, **kwargs): - """Build upsample layer. - - Args: - cfg (dict): The upsample layer config, which should contain: - - - type (str): Layer type. - - scale_factor (int): Upsample ratio, which is not applicable to - deconv. - - layer args: Args needed to instantiate a upsample layer. - args (argument list): Arguments passed to the ``__init__`` - method of the corresponding conv layer. - kwargs (keyword arguments): Keyword arguments passed to the - ``__init__`` method of the corresponding conv layer. - - Returns: - nn.Module: Created upsample layer. 
- """ - if not isinstance(cfg, dict): - raise TypeError(f'cfg must be a dict, but got {type(cfg)}') - if 'type' not in cfg: - raise KeyError( - f'the cfg dict must contain the key "type", but got {cfg}') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in UPSAMPLE_LAYERS: - raise KeyError(f'Unrecognized upsample type {layer_type}') - else: - upsample = UPSAMPLE_LAYERS.get(layer_type) - - if upsample is nn.Upsample: - cfg_['mode'] = layer_type - layer = upsample(*args, **kwargs, **cfg_) - return layer diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/wrappers.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/wrappers.py deleted file mode 100644 index 8aebf67b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/bricks/wrappers.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -r"""Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/wrappers.py # noqa: E501 - -Wrap some nn modules to support empty tensor input. Currently, these wrappers -are mainly used in mask heads like fcn_mask_head and maskiou_heads since mask -heads are trained on only positive RoIs. -""" -import math - -import torch -import torch.nn as nn -from torch.nn.modules.utils import _pair, _triple - -from .registry import CONV_LAYERS, UPSAMPLE_LAYERS - -if torch.__version__ == 'parrots': - TORCH_VERSION = torch.__version__ -else: - # torch.__version__ could be 1.3.1+cu92, we only need the first two - # for comparison - TORCH_VERSION = tuple(int(x) for x in torch.__version__.split('.')[:2]) - - -def obsolete_torch_version(torch_version, version_threshold): - return torch_version == 'parrots' or torch_version <= version_threshold - - -class NewEmptyTensorOp(torch.autograd.Function): - - @staticmethod - def forward(ctx, x, new_shape): - ctx.shape = x.shape - return x.new_empty(new_shape) - - @staticmethod - def backward(ctx, grad): - shape = ctx.shape - return NewEmptyTensorOp.apply(grad, shape), None - - -@CONV_LAYERS.register_module('Conv', force=True) -class Conv2d(nn.Conv2d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d in zip(x.shape[-2:], self.kernel_size, - self.padding, self.stride, self.dilation): - o = (i + 2 * p - (d * (k - 1) + 1)) // s + 1 - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. - dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -@CONV_LAYERS.register_module('Conv3d', force=True) -class Conv3d(nn.Conv3d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d in zip(x.shape[-3:], self.kernel_size, - self.padding, self.stride, self.dilation): - o = (i + 2 * p - (d * (k - 1) + 1)) // s + 1 - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. 
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -@CONV_LAYERS.register_module() -@CONV_LAYERS.register_module('deconv') -@UPSAMPLE_LAYERS.register_module('deconv', force=True) -class ConvTranspose2d(nn.ConvTranspose2d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d, op in zip(x.shape[-2:], self.kernel_size, - self.padding, self.stride, - self.dilation, self.output_padding): - out_shape.append((i - 1) * s - 2 * p + (d * (k - 1) + 1) + op) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. - dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -@CONV_LAYERS.register_module() -@CONV_LAYERS.register_module('deconv3d') -@UPSAMPLE_LAYERS.register_module('deconv3d', force=True) -class ConvTranspose3d(nn.ConvTranspose3d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d, op in zip(x.shape[-3:], self.kernel_size, - self.padding, self.stride, - self.dilation, self.output_padding): - out_shape.append((i - 1) * s - 2 * p + (d * (k - 1) + 1) + op) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. - dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -class MaxPool2d(nn.MaxPool2d): - - def forward(self, x): - # PyTorch 1.9 does not support empty tensor inference yet - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)): - out_shape = list(x.shape[:2]) - for i, k, p, s, d in zip(x.shape[-2:], _pair(self.kernel_size), - _pair(self.padding), _pair(self.stride), - _pair(self.dilation)): - o = (i + 2 * p - (d * (k - 1) + 1)) / s + 1 - o = math.ceil(o) if self.ceil_mode else math.floor(o) - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - return empty - - return super().forward(x) - - -class MaxPool3d(nn.MaxPool3d): - - def forward(self, x): - # PyTorch 1.9 does not support empty tensor inference yet - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)): - out_shape = list(x.shape[:2]) - for i, k, p, s, d in zip(x.shape[-3:], _triple(self.kernel_size), - _triple(self.padding), - _triple(self.stride), - _triple(self.dilation)): - o = (i + 2 * p - (d * (k - 1) + 1)) / s + 1 - o = math.ceil(o) if self.ceil_mode else math.floor(o) - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - return empty - - return super().forward(x) - - -class Linear(torch.nn.Linear): - - def forward(self, x): - # empty tensor forward of Linear layer is supported in Pytorch 1.6 - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 5)): - out_shape = [x.shape[0], self.out_features] - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. 
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/builder.py deleted file mode 100644 index 7567316c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/builder.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..runner import Sequential -from ..utils import Registry, build_from_cfg - - -def build_model_from_cfg(cfg, registry, default_args=None): - """Build a PyTorch model from config dict(s). Different from - ``build_from_cfg``, if cfg is a list, a ``nn.Sequential`` will be built. - - Args: - cfg (dict, list[dict]): The config of modules, is is either a config - dict or a list of config dicts. If cfg is a list, a - the built modules will be wrapped with ``nn.Sequential``. - registry (:obj:`Registry`): A registry the module belongs to. - default_args (dict, optional): Default arguments to build the module. - Defaults to None. - - Returns: - nn.Module: A built nn module. - """ - if isinstance(cfg, list): - modules = [ - build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg - ] - return Sequential(*modules) - else: - return build_from_cfg(cfg, registry, default_args) - - -MODELS = Registry('model', build_func=build_model_from_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/resnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/resnet.py deleted file mode 100644 index fb29e625..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/resnet.py +++ /dev/null @@ -1,322 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging -from typing import Optional, Sequence, Tuple, Union - -import torch.nn as nn -import torch.utils.checkpoint as cp -from torch import Tensor - -from .utils import constant_init, kaiming_init - - -def conv3x3(in_planes: int, - out_planes: int, - stride: int = 1, - dilation: int = 1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, - inplanes: int, - planes: int, - stride: int = 1, - dilation: int = 1, - downsample: Optional[nn.Module] = None, - style: str = 'pytorch', - with_cp: bool = False): - super().__init__() - assert style in ['pytorch', 'caffe'] - self.conv1 = conv3x3(inplanes, planes, stride, dilation) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - assert not with_cp - - def forward(self, x: Tensor) -> Tensor: - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, - inplanes: int, - planes: int, - stride: int = 1, - dilation: int = 1, - downsample: Optional[nn.Module] = None, - style: str = 'pytorch', - with_cp: bool = False): - """Bottleneck block. 
- - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super().__init__() - assert style in ['pytorch', 'caffe'] - if style == 'pytorch': - conv1_stride = 1 - conv2_stride = stride - else: - conv1_stride = stride - conv2_stride = 1 - self.conv1 = nn.Conv2d( - inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - stride=conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.bn1 = nn.BatchNorm2d(planes) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - def forward(self, x: Tensor) -> Tensor: - - def _inner_forward(x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -def make_res_layer(block: nn.Module, - inplanes: int, - planes: int, - blocks: int, - stride: int = 1, - dilation: int = 1, - style: str = 'pytorch', - with_cp: bool = False) -> nn.Module: - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append( - block( - inplanes, - planes, - stride, - dilation, - downsample, - style=style, - with_cp=with_cp)) - inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp)) - - return nn.Sequential(*layers) - - -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - num_stages (int): Resnet stages, normally 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth: int, - num_stages: int = 4, - strides: Sequence[int] = (1, 2, 2, 2), - dilations: Sequence[int] = (1, 1, 1, 1), - out_indices: Sequence[int] = (0, 1, 2, 3), - style: str = 'pytorch', - frozen_stages: int = -1, - bn_eval: bool = True, - bn_frozen: bool = False, - with_cp: bool = False): - super().__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - assert num_stages >= 1 and num_stages <= 4 - block, stage_blocks = self.arch_settings[depth] - stage_blocks = stage_blocks[:num_stages] # type: ignore - assert len(strides) == len(dilations) == num_stages - assert max(out_indices) < num_stages - - self.out_indices = out_indices - self.style = style - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - self.with_cp = with_cp - - self.inplanes: int = 64 - self.conv1 = nn.Conv2d( - 3, 64, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.res_layers = [] - for i, num_blocks in enumerate(stage_blocks): - stride = strides[i] - dilation = dilations[i] - planes = 64 * 2**i - res_layer = make_res_layer( - block, - self.inplanes, - planes, - num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - with_cp=with_cp) - self.inplanes = planes * block.expansion # type: ignore - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self.feat_dim = block.expansion * 64 * 2**( # type: ignore - len(stage_blocks) - 1) - - def init_weights(self, pretrained: Optional[str] = None) -> None: - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x: Tensor) -> Union[Tensor, Tuple[Tensor]]: - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode: bool = True) -> None: - super().train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - if mode and self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for param in self.bn1.parameters(): - param.requires_grad = False - self.bn1.eval() - self.bn1.weight.requires_grad = False - self.bn1.bias.requires_grad = False - for i in range(1, self.frozen_stages + 1): - mod = getattr(self, f'layer{i}') - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/__init__.py deleted file 
mode 100644 index a263e31c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .flops_counter import get_model_complexity_info -from .fuse_conv_bn import fuse_conv_bn -from .sync_bn import revert_sync_batchnorm -from .weight_init import (INITIALIZERS, Caffe2XavierInit, ConstantInit, - KaimingInit, NormalInit, PretrainedInit, - TruncNormalInit, UniformInit, XavierInit, - bias_init_with_prob, caffe2_xavier_init, - constant_init, initialize, kaiming_init, normal_init, - trunc_normal_init, uniform_init, xavier_init) - -__all__ = [ - 'get_model_complexity_info', 'bias_init_with_prob', 'caffe2_xavier_init', - 'constant_init', 'kaiming_init', 'normal_init', 'trunc_normal_init', - 'uniform_init', 'xavier_init', 'fuse_conv_bn', 'initialize', - 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit', - 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit', - 'Caffe2XavierInit', 'revert_sync_batchnorm' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/flops_counter.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/flops_counter.py deleted file mode 100644 index 150a5599..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/flops_counter.py +++ /dev/null @@ -1,603 +0,0 @@ -# Modified from flops-counter.pytorch by Vladislav Sovrasov -# original repo: https://github.com/sovrasov/flops-counter.pytorch - -# MIT License - -# Copyright (c) 2018 Vladislav Sovrasov - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in -# all copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -import sys -import warnings -from functools import partial -from typing import Any, Callable, Dict, Optional, TextIO, Tuple - -import numpy as np -import torch -import torch.nn as nn - -import mmcv - - -def get_model_complexity_info(model: nn.Module, - input_shape: tuple, - print_per_layer_stat: bool = True, - as_strings: bool = True, - input_constructor: Optional[Callable] = None, - flush: bool = False, - ost: TextIO = sys.stdout) -> tuple: - """Get complexity information of a model. - - This method can calculate FLOPs and parameter counts of a model with - corresponding input shape. It can also print complexity information for - each layer in a model. - - Supported layers are listed as below: - - Convolutions: ``nn.Conv1d``, ``nn.Conv2d``, ``nn.Conv3d``. - - Activations: ``nn.ReLU``, ``nn.PReLU``, ``nn.ELU``, - ``nn.LeakyReLU``, ``nn.ReLU6``. 
- - Poolings: ``nn.MaxPool1d``, ``nn.MaxPool2d``, ``nn.MaxPool3d``, - ``nn.AvgPool1d``, ``nn.AvgPool2d``, ``nn.AvgPool3d``, - ``nn.AdaptiveMaxPool1d``, ``nn.AdaptiveMaxPool2d``, - ``nn.AdaptiveMaxPool3d``, ``nn.AdaptiveAvgPool1d``, - ``nn.AdaptiveAvgPool2d``, ``nn.AdaptiveAvgPool3d``. - - BatchNorms: ``nn.BatchNorm1d``, ``nn.BatchNorm2d``, - ``nn.BatchNorm3d``, ``nn.GroupNorm``, ``nn.InstanceNorm1d``, - ``InstanceNorm2d``, ``InstanceNorm3d``, ``nn.LayerNorm``. - - Linear: ``nn.Linear``. - - Deconvolution: ``nn.ConvTranspose2d``. - - Upsample: ``nn.Upsample``. - - Args: - model (nn.Module): The model for complexity calculation. - input_shape (tuple): Input shape used for calculation. - print_per_layer_stat (bool): Whether to print complexity information - for each layer in a model. Default: True. - as_strings (bool): Output FLOPs and params counts in a string form. - Default: True. - input_constructor (None | callable): If specified, it takes a callable - method that generates input. otherwise, it will generate a random - tensor with input shape to calculate FLOPs. Default: None. - flush (bool): same as that in :func:`print`. Default: False. - ost (stream): same as ``file`` param in :func:`print`. - Default: sys.stdout. - - Returns: - tuple[float | str]: If ``as_strings`` is set to True, it will return - FLOPs and parameter counts in a string format. otherwise, it will - return those in a float number format. - """ - assert type(input_shape) is tuple - assert len(input_shape) >= 1 - assert isinstance(model, nn.Module) - flops_model = add_flops_counting_methods(model) - flops_model.eval() - flops_model.start_flops_count() - if input_constructor: - input = input_constructor(input_shape) - _ = flops_model(**input) - else: - try: - batch = torch.ones(()).new_empty( - (1, *input_shape), - dtype=next(flops_model.parameters()).dtype, - device=next(flops_model.parameters()).device) - except StopIteration: - # Avoid StopIteration for models which have no parameters, - # like `nn.Relu()`, `nn.AvgPool2d`, etc. - batch = torch.ones(()).new_empty((1, *input_shape)) - - _ = flops_model(batch) - - flops_count, params_count = flops_model.compute_average_flops_cost() - if print_per_layer_stat: - print_model_with_flops( - flops_model, flops_count, params_count, ost=ost, flush=flush) - flops_model.stop_flops_count() - - if as_strings: - return flops_to_string(flops_count), params_to_string(params_count) - - return flops_count, params_count - - -def flops_to_string(flops: float, - units: Optional[str] = 'GFLOPs', - precision: int = 2) -> str: - """Convert FLOPs number into a string. - - Note that Here we take a multiply-add counts as one FLOP. - - Args: - flops (float): FLOPs number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'GFLOPs', - 'MFLOPs', 'KFLOPs', 'FLOPs'. If set to None, it will automatically - choose the most suitable unit for FLOPs. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted FLOPs number with units. 
- - Examples: - >>> flops_to_string(1e9) - '1.0 GFLOPs' - >>> flops_to_string(2e5, 'MFLOPs') - '0.2 MFLOPs' - >>> flops_to_string(3e-9, None) - '3e-09 FLOPs' - """ - if units is None: - if flops // 10**9 > 0: - return str(round(flops / 10.**9, precision)) + ' GFLOPs' - elif flops // 10**6 > 0: - return str(round(flops / 10.**6, precision)) + ' MFLOPs' - elif flops // 10**3 > 0: - return str(round(flops / 10.**3, precision)) + ' KFLOPs' - else: - return str(flops) + ' FLOPs' - else: - if units == 'GFLOPs': - return str(round(flops / 10.**9, precision)) + ' ' + units - elif units == 'MFLOPs': - return str(round(flops / 10.**6, precision)) + ' ' + units - elif units == 'KFLOPs': - return str(round(flops / 10.**3, precision)) + ' ' + units - else: - return str(flops) + ' FLOPs' - - -def params_to_string(num_params: float, - units: Optional[str] = None, - precision: int = 2) -> str: - """Convert parameter number into a string. - - Args: - num_params (float): Parameter number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'M', - 'K' and ''. If set to None, it will automatically choose the most - suitable unit for Parameter number. Default: None. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted parameter number with units. - - Examples: - >>> params_to_string(1e9) - '1000.0 M' - >>> params_to_string(2e5) - '200.0 k' - >>> params_to_string(3e-9) - '3e-09' - """ - if units is None: - if num_params // 10**6 > 0: - return str(round(num_params / 10**6, precision)) + ' M' - elif num_params // 10**3: - return str(round(num_params / 10**3, precision)) + ' k' - else: - return str(num_params) - else: - if units == 'M': - return str(round(num_params / 10.**6, precision)) + ' ' + units - elif units == 'K': - return str(round(num_params / 10.**3, precision)) + ' ' + units - else: - return str(num_params) - - -def print_model_with_flops(model: nn.Module, - total_flops: float, - total_params: float, - units: Optional[str] = 'GFLOPs', - precision: int = 3, - ost: TextIO = sys.stdout, - flush: bool = False) -> None: - """Print a model with FLOPs for each layer. - - Args: - model (nn.Module): The model to be printed. - total_flops (float): Total FLOPs of the model. - total_params (float): Total parameter counts of the model. - units (str | None): Converted FLOPs units. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 3. - ost (stream): same as `file` param in :func:`print`. - Default: sys.stdout. - flush (bool): same as that in :func:`print`. Default: False. 
- - Example: - >>> class ExampleModel(nn.Module): - - >>> def __init__(self): - >>> super().__init__() - >>> self.conv1 = nn.Conv2d(3, 8, 3) - >>> self.conv2 = nn.Conv2d(8, 256, 3) - >>> self.conv3 = nn.Conv2d(256, 8, 3) - >>> self.avg_pool = nn.AdaptiveAvgPool2d((1, 1)) - >>> self.flatten = nn.Flatten() - >>> self.fc = nn.Linear(8, 1) - - >>> def forward(self, x): - >>> x = self.conv1(x) - >>> x = self.conv2(x) - >>> x = self.conv3(x) - >>> x = self.avg_pool(x) - >>> x = self.flatten(x) - >>> x = self.fc(x) - >>> return x - - >>> model = ExampleModel() - >>> x = (3, 16, 16) - to print the complexity information state for each layer, you can use - >>> get_model_complexity_info(model, x) - or directly use - >>> print_model_with_flops(model, 4579784.0, 37361) - ExampleModel( - 0.037 M, 100.000% Params, 0.005 GFLOPs, 100.000% FLOPs, - (conv1): Conv2d(0.0 M, 0.600% Params, 0.0 GFLOPs, 0.959% FLOPs, 3, 8, kernel_size=(3, 3), stride=(1, 1)) # noqa: E501 - (conv2): Conv2d(0.019 M, 50.020% Params, 0.003 GFLOPs, 58.760% FLOPs, 8, 256, kernel_size=(3, 3), stride=(1, 1)) - (conv3): Conv2d(0.018 M, 49.356% Params, 0.002 GFLOPs, 40.264% FLOPs, 256, 8, kernel_size=(3, 3), stride=(1, 1)) - (avg_pool): AdaptiveAvgPool2d(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.017% FLOPs, output_size=(1, 1)) - (flatten): Flatten(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.000% FLOPs, ) - (fc): Linear(0.0 M, 0.024% Params, 0.0 GFLOPs, 0.000% FLOPs, in_features=8, out_features=1, bias=True) - ) - """ - - def accumulate_params(self): - if is_supported_instance(self): - return self.__params__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_params() - return sum - - def accumulate_flops(self): - if is_supported_instance(self): - return self.__flops__ / model.__batch_counter__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_flops() - return sum - - def flops_repr(self): - accumulated_num_params = self.accumulate_params() - accumulated_flops_cost = self.accumulate_flops() - return ', '.join([ - params_to_string( - accumulated_num_params, units='M', precision=precision), - f'{accumulated_num_params / total_params:.3%} Params', - flops_to_string( - accumulated_flops_cost, units=units, precision=precision), - f'{accumulated_flops_cost / total_flops:.3%} FLOPs', - self.original_extra_repr() - ]) - - def add_extra_repr(m): - m.accumulate_flops = accumulate_flops.__get__(m) - m.accumulate_params = accumulate_params.__get__(m) - flops_extra_repr = flops_repr.__get__(m) - if m.extra_repr != flops_extra_repr: - m.original_extra_repr = m.extra_repr - m.extra_repr = flops_extra_repr - assert m.extra_repr != m.original_extra_repr - - def del_extra_repr(m): - if hasattr(m, 'original_extra_repr'): - m.extra_repr = m.original_extra_repr - del m.original_extra_repr - if hasattr(m, 'accumulate_flops'): - del m.accumulate_flops - - model.apply(add_extra_repr) - print(model, file=ost, flush=flush) - model.apply(del_extra_repr) - - -def get_model_parameters_number(model: nn.Module) -> float: - """Calculate parameter number of a model. - - Args: - model (nn.module): The model for parameter number calculation. - - Returns: - float: Parameter number of the model. 
- """ - num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) - return num_params - - -def add_flops_counting_methods(net_main_module: nn.Module) -> nn.Module: - # adding additional methods to the existing module object, - # this is done this way so that each function has access to self object - net_main_module.start_flops_count = start_flops_count.__get__( # type: ignore # noqa E501 - net_main_module) - net_main_module.stop_flops_count = stop_flops_count.__get__( # type: ignore # noqa E501 - net_main_module) - net_main_module.reset_flops_count = reset_flops_count.__get__( # type: ignore # noqa E501 - net_main_module) - net_main_module.compute_average_flops_cost = compute_average_flops_cost.__get__( # type: ignore # noqa E501 - net_main_module) - - net_main_module.reset_flops_count() - - return net_main_module - - -def compute_average_flops_cost(self) -> Tuple[float, float]: - """Compute average FLOPs cost. - - A method to compute average FLOPs cost, which will be available after - `add_flops_counting_methods()` is called on a desired net object. - - Returns: - float: Current mean flops consumption per image. - """ - batches_count = self.__batch_counter__ - flops_sum = 0 - for module in self.modules(): - if is_supported_instance(module): - flops_sum += module.__flops__ - params_sum = get_model_parameters_number(self) - return flops_sum / batches_count, params_sum - - -def start_flops_count(self) -> None: - """Activate the computation of mean flops consumption per image. - - A method to activate the computation of mean flops consumption per image. - which will be available after ``add_flops_counting_methods()`` is called on - a desired net object. It should be called before running the network. - """ - add_batch_counter_hook_function(self) - - def add_flops_counter_hook_function(module: nn.Module) -> None: - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - return - - else: - handle = module.register_forward_hook( - get_modules_mapping()[type(module)]) - - module.__flops_handle__ = handle - - self.apply(partial(add_flops_counter_hook_function)) - - -def stop_flops_count(self) -> None: - """Stop computing the mean flops consumption per image. - - A method to stop computing the mean flops consumption per image, which will - be available after ``add_flops_counting_methods()`` is called on a desired - net object. It can be called to pause the computation whenever. - """ - remove_batch_counter_hook_function(self) - self.apply(remove_flops_counter_hook_function) - - -def reset_flops_count(self) -> None: - """Reset statistics computed so far. - - A method to Reset computed statistics, which will be available after - `add_flops_counting_methods()` is called on a desired net object. 
- """ - add_batch_counter_variables_or_reset(self) - self.apply(add_flops_counter_variable_or_reset) - - -# ---- Internal functions -def empty_flops_counter_hook(module: nn.Module, input: tuple, - output: Any) -> None: - module.__flops__ += 0 - - -def upsample_flops_counter_hook(module: nn.Module, input: tuple, - output: torch.Tensor) -> None: - output_size = output[0] - batch_size = output_size.shape[0] - output_elements_count = batch_size - for val in output_size.shape[1:]: - output_elements_count *= val - module.__flops__ += int(output_elements_count) - - -def relu_flops_counter_hook(module: nn.Module, input: tuple, - output: torch.Tensor) -> None: - active_elements_count = output.numel() - module.__flops__ += int(active_elements_count) - - -def linear_flops_counter_hook(module: nn.Module, input: tuple, - output: torch.Tensor) -> None: - output_last_dim = output.shape[ - -1] # pytorch checks dimensions, so here we don't care much - module.__flops__ += int(np.prod(input[0].shape) * output_last_dim) - - -def pool_flops_counter_hook(module: nn.Module, input: tuple, - output: torch.Tensor) -> None: - module.__flops__ += int(np.prod(input[0].shape)) - - -def norm_flops_counter_hook(module: nn.Module, input: tuple, - output: torch.Tensor) -> None: - batch_flops = np.prod(input[0].shape) - if (getattr(module, 'affine', False) - or getattr(module, 'elementwise_affine', False)): - batch_flops *= 2 - module.__flops__ += int(batch_flops) - - -def deconv_flops_counter_hook(conv_module: nn.Module, input: tuple, - output: torch.Tensor) -> None: - # Can have multiple inputs, getting the first one - batch_size = input[0].shape[0] - input_height, input_width = input[0].shape[2:] - - kernel_height, kernel_width = conv_module.kernel_size - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = ( - kernel_height * kernel_width * in_channels * filters_per_channel) - - active_elements_count = batch_size * input_height * input_width - overall_conv_flops = conv_per_position_flops * active_elements_count - bias_flops = 0 - if conv_module.bias is not None: - output_height, output_width = output.shape[2:] - bias_flops = out_channels * batch_size * output_height * output_width - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def conv_flops_counter_hook(conv_module: nn.Module, input: tuple, - output: torch.Tensor) -> None: - # Can have multiple inputs, getting the first one - batch_size = input[0].shape[0] - output_dims = list(output.shape[2:]) - - kernel_dims = list(conv_module.kernel_size) - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = int( - np.prod(kernel_dims)) * in_channels * filters_per_channel - - active_elements_count = batch_size * int(np.prod(output_dims)) - - overall_conv_flops = conv_per_position_flops * active_elements_count - - bias_flops = 0 - - if conv_module.bias is not None: - - bias_flops = out_channels * active_elements_count - - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def batch_counter_hook(module: nn.Module, input: tuple, output: Any) -> None: - batch_size = 1 - if len(input) > 0: - # Can have multiple inputs, getting the first one - batch_size = len(input[0]) - else: - warnings.warn('No positional inputs found 
for a module, ' - 'assuming batch size is 1.') - module.__batch_counter__ += batch_size - - -def add_batch_counter_variables_or_reset(module: nn.Module) -> None: - - module.__batch_counter__ = 0 - - -def add_batch_counter_hook_function(module: nn.Module) -> None: - if hasattr(module, '__batch_counter_handle__'): - return - - handle = module.register_forward_hook(batch_counter_hook) - module.__batch_counter_handle__ = handle - - -def remove_batch_counter_hook_function(module: nn.Module) -> None: - if hasattr(module, '__batch_counter_handle__'): - module.__batch_counter_handle__.remove() - del module.__batch_counter_handle__ - - -def add_flops_counter_variable_or_reset(module: nn.Module) -> None: - if is_supported_instance(module): - if hasattr(module, '__flops__') or hasattr(module, '__params__'): - warnings.warn('variables __flops__ or __params__ are already ' - 'defined for the module' + type(module).__name__ + - ' ptflops can affect your code!') - module.__flops__ = 0 - module.__params__ = get_model_parameters_number(module) - - -def is_supported_instance(module: nn.Module) -> bool: - if type(module) in get_modules_mapping(): - return True - return False - - -def remove_flops_counter_hook_function(module: nn.Module) -> None: - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - module.__flops_handle__.remove() - del module.__flops_handle__ - - -def get_modules_mapping() -> Dict: - return { - # convolutions - nn.Conv1d: conv_flops_counter_hook, - nn.Conv2d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv2d: conv_flops_counter_hook, - nn.Conv3d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv3d: conv_flops_counter_hook, - # activations - nn.ReLU: relu_flops_counter_hook, - nn.PReLU: relu_flops_counter_hook, - nn.ELU: relu_flops_counter_hook, - nn.LeakyReLU: relu_flops_counter_hook, - nn.ReLU6: relu_flops_counter_hook, - # poolings - nn.MaxPool1d: pool_flops_counter_hook, - nn.AvgPool1d: pool_flops_counter_hook, - nn.AvgPool2d: pool_flops_counter_hook, - nn.MaxPool2d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool2d: pool_flops_counter_hook, - nn.MaxPool3d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool3d: pool_flops_counter_hook, - nn.AvgPool3d: pool_flops_counter_hook, - nn.AdaptiveMaxPool1d: pool_flops_counter_hook, - nn.AdaptiveAvgPool1d: pool_flops_counter_hook, - nn.AdaptiveMaxPool2d: pool_flops_counter_hook, - nn.AdaptiveAvgPool2d: pool_flops_counter_hook, - nn.AdaptiveMaxPool3d: pool_flops_counter_hook, - nn.AdaptiveAvgPool3d: pool_flops_counter_hook, - # normalizations - nn.BatchNorm1d: norm_flops_counter_hook, - nn.BatchNorm2d: norm_flops_counter_hook, - nn.BatchNorm3d: norm_flops_counter_hook, - nn.GroupNorm: norm_flops_counter_hook, - nn.InstanceNorm1d: norm_flops_counter_hook, - nn.InstanceNorm2d: norm_flops_counter_hook, - nn.InstanceNorm3d: norm_flops_counter_hook, - nn.LayerNorm: norm_flops_counter_hook, - # FC - nn.Linear: linear_flops_counter_hook, - mmcv.cnn.bricks.Linear: linear_flops_counter_hook, - # Upscale - nn.Upsample: upsample_flops_counter_hook, - # Deconvolution - nn.ConvTranspose2d: deconv_flops_counter_hook, - mmcv.cnn.bricks.ConvTranspose2d: deconv_flops_counter_hook, - } diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/fuse_conv_bn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/fuse_conv_bn.py deleted file mode 100644 index 6ccaab3b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/fuse_conv_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) 
OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -def _fuse_conv_bn(conv: nn.Module, bn: nn.Module) -> nn.Module: - """Fuse conv and bn into one module. - - Args: - conv (nn.Module): Conv to be fused. - bn (nn.Module): BN to be fused. - - Returns: - nn.Module: Fused module. - """ - conv_w = conv.weight - conv_b = conv.bias if conv.bias is not None else torch.zeros_like( - bn.running_mean) - - factor = bn.weight / torch.sqrt(bn.running_var + bn.eps) - conv.weight = nn.Parameter(conv_w * - factor.reshape([conv.out_channels, 1, 1, 1])) - conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias) - return conv - - -def fuse_conv_bn(module: nn.Module) -> nn.Module: - """Recursively fuse conv and bn in a module. - - During inference, the functionary of batch norm layers is turned off - but only the mean and var alone channels are used, which exposes the - chance to fuse it with the preceding conv layers to save computations and - simplify network structures. - - Args: - module (nn.Module): Module to be fused. - - Returns: - nn.Module: Fused module. - """ - last_conv = None - last_conv_name = None - - for name, child in module.named_children(): - if isinstance(child, - (nn.modules.batchnorm._BatchNorm, nn.SyncBatchNorm)): - if last_conv is None: # only fuse BN that is after Conv - continue - fused_conv = _fuse_conv_bn(last_conv, child) - module._modules[last_conv_name] = fused_conv - # To reduce changes, set BN as Identity instead of deleting it. - module._modules[name] = nn.Identity() - last_conv = None - elif isinstance(child, nn.Conv2d): - last_conv = child - last_conv_name = name - else: - fuse_conv_bn(child) - return module diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/sync_bn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/sync_bn.py deleted file mode 100644 index c534fc0e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/sync_bn.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -import mmcv - - -class _BatchNormXd(nn.modules.batchnorm._BatchNorm): - """A general BatchNorm layer without input dimension check. - - Reproduced from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - is `_check_input_dim` that is designed for tensor sanity checks. - The check has been bypassed in this class for the convenience of converting - SyncBatchNorm. - """ - - def _check_input_dim(self, input: torch.Tensor): - return - - -def revert_sync_batchnorm(module: nn.Module) -> nn.Module: - """Helper function to convert all `SyncBatchNorm` (SyncBN) and - `mmcv.ops.sync_bn.SyncBatchNorm`(MMSyncBN) layers in the model to - `BatchNormXd` layers. - - Adapted from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - - Args: - module (nn.Module): The module containing `SyncBatchNorm` layers. - - Returns: - module_output: The converted module with `BatchNormXd` layers. 
- """ - module_output = module - module_checklist = [torch.nn.modules.batchnorm.SyncBatchNorm] - if hasattr(mmcv, 'ops'): - module_checklist.append(mmcv.ops.SyncBatchNorm) - if isinstance(module, tuple(module_checklist)): - module_output = _BatchNormXd(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - # no_grad() may not be needed here but - # just to be consistent with `convert_sync_batchnorm()` - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - module_output.training = module.training - # qconfig exists in quantized models - if hasattr(module, 'qconfig'): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/weight_init.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/weight_init.py deleted file mode 100644 index 6e0d293a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/utils/weight_init.py +++ /dev/null @@ -1,708 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import math -import warnings -from typing import Dict, List, Optional, Union - -import numpy as np -import torch -import torch.nn as nn -from torch import Tensor - -from mmcv.utils import Registry, build_from_cfg, get_logger, print_log - -INITIALIZERS = Registry('initializer') - - -def update_init_info(module: nn.Module, init_info: str) -> None: - """Update the `_params_init_info` in the module if the value of parameters - are changed. - - Args: - module (obj:`nn.Module`): The module of PyTorch with a user-defined - attribute `_params_init_info` which records the initialization - information. - init_info (str): The string that describes the initialization. - """ - assert hasattr( - module, - '_params_init_info'), f'Can not find `_params_init_info` in {module}' - for name, param in module.named_parameters(): - - assert param in module._params_init_info, ( - f'Find a new :obj:`Parameter` ' - f'named `{name}` during executing the ' - f'`init_weights` of ' - f'`{module.__class__.__name__}`. ' - f'Please do not add or ' - f'replace parameters during executing ' - f'the `init_weights`. 
') - - # The parameter has been changed during executing the - # `init_weights` of module - mean_value = param.data.mean() - if module._params_init_info[param]['tmp_mean_value'] != mean_value: - module._params_init_info[param]['init_info'] = init_info - module._params_init_info[param]['tmp_mean_value'] = mean_value - - -def constant_init(module: nn.Module, val: float, bias: float = 0) -> None: - if hasattr(module, 'weight') and module.weight is not None: - nn.init.constant_(module.weight, val) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def xavier_init(module: nn.Module, - gain: float = 1, - bias: float = 0, - distribution: str = 'normal') -> None: - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.xavier_uniform_(module.weight, gain=gain) - else: - nn.init.xavier_normal_(module.weight, gain=gain) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def normal_init(module: nn.Module, - mean: float = 0, - std: float = 1, - bias: float = 0) -> None: - if hasattr(module, 'weight') and module.weight is not None: - nn.init.normal_(module.weight, mean, std) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def trunc_normal_init(module: nn.Module, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - bias: float = 0) -> None: - if hasattr(module, 'weight') and module.weight is not None: - trunc_normal_(module.weight, mean, std, a, b) # type: ignore - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) # type: ignore - - -def uniform_init(module: nn.Module, - a: float = 0, - b: float = 1, - bias: float = 0) -> None: - if hasattr(module, 'weight') and module.weight is not None: - nn.init.uniform_(module.weight, a, b) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def kaiming_init(module: nn.Module, - a: float = 0, - mode: str = 'fan_out', - nonlinearity: str = 'relu', - bias: float = 0, - distribution: str = 'normal') -> None: - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.kaiming_uniform_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - else: - nn.init.kaiming_normal_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def caffe2_xavier_init(module: nn.Module, bias: float = 0) -> None: - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - kaiming_init( - module, - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - bias=bias, - distribution='uniform') - - -def bias_init_with_prob(prior_prob: float) -> float: - """initialize conv/fc bias value according to a given probability value.""" - bias_init = float(-np.log((1 - prior_prob) / prior_prob)) - return bias_init - - -def _get_bases_name(m: nn.Module) -> List[str]: - return [b.__name__ for b in m.__class__.__bases__] - - -class BaseInit: - - def __init__(self, - *, - bias: float = 0, - bias_prob: Optional[float] = None, - layer: Union[str, List, None] = None): - self.wholemodule = False - if not isinstance(bias, (int, float)): - raise TypeError(f'bias must be a number, but got a {type(bias)}') 
- - if bias_prob is not None: - if not isinstance(bias_prob, float): - raise TypeError(f'bias_prob type must be float, \ - but got {type(bias_prob)}') - - if layer is not None: - if not isinstance(layer, (str, list)): - raise TypeError(f'layer must be a str or a list of str, \ - but got a {type(layer)}') - else: - layer = [] - - if bias_prob is not None: - self.bias = bias_init_with_prob(bias_prob) - else: - self.bias = bias - self.layer = [layer] if isinstance(layer, str) else layer - - def _get_init_info(self) -> str: - info = f'{self.__class__.__name__}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Constant') -class ConstantInit(BaseInit): - """Initialize module parameters with constant values. - - Args: - val (int | float): the value to fill the weights in the module with - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, val: Union[int, float], **kwargs): - super().__init__(**kwargs) - self.val = val - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - constant_init(m, self.val, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - constant_init(m, self.val, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self) -> str: - info = f'{self.__class__.__name__}: val={self.val}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Xavier') -class XavierInit(BaseInit): - r"""Initialize module parameters with values according to the method - described in `Understanding the difficulty of training deep feedforward - neural networks - Glorot, X. & Bengio, Y. (2010). - `_ - - Args: - gain (int | float): an optional scaling factor. Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` - or ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, - gain: float = 1, - distribution: str = 'normal', - **kwargs): - super().__init__(**kwargs) - self.gain = gain - self.distribution = distribution - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - xavier_init(m, self.gain, self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - xavier_init(m, self.gain, self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self) -> str: - info = f'{self.__class__.__name__}: gain={self.gain}, ' \ - f'distribution={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Normal') -class NormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`. - - Args: - mean (int | float):the mean of the normal distribution. Defaults to 0. 
- std (int | float): the standard deviation of the normal distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - - """ - - def __init__(self, mean: float = 0, std: float = 1, **kwargs): - super().__init__(**kwargs) - self.mean = mean - self.std = std - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - normal_init(m, self.mean, self.std, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - normal_init(m, self.mean, self.std, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self) -> str: - info = f'{self.__class__.__name__}: mean={self.mean},' \ - f' std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='TruncNormal') -class TruncNormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` with values - outside :math:`[a, b]`. - - Args: - mean (float): the mean of the normal distribution. Defaults to 0. - std (float): the standard deviation of the normal distribution. - Defaults to 1. - a (float): The minimum cutoff value. - b ( float): The maximum cutoff value. - bias (float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - - """ - - def __init__(self, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - **kwargs) -> None: - super().__init__(**kwargs) - self.mean = mean - self.std = std - self.a = a - self.b = b - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, b={self.b},' \ - f' mean={self.mean}, std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Uniform') -class UniformInit(BaseInit): - r"""Initialize module parameters with values drawn from the uniform - distribution :math:`\mathcal{U}(a, b)`. - - Args: - a (int | float): the lower bound of the uniform distribution. - Defaults to 0. - b (int | float): the upper bound of the uniform distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. 
- """ - - def __init__(self, a: float = 0., b: float = 1., **kwargs): - super().__init__(**kwargs) - self.a = a - self.b = b - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - uniform_init(m, self.a, self.b, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - uniform_init(m, self.a, self.b, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self) -> str: - info = f'{self.__class__.__name__}: a={self.a},' \ - f' b={self.b}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Kaiming') -class KaimingInit(BaseInit): - r"""Initialize module parameters with the values according to the method - described in `Delving deep into rectifiers: Surpassing human-level - performance on ImageNet classification - He, K. et al. (2015). - `_ - - Args: - a (int | float): the negative slope of the rectifier used after this - layer (only used with ``'leaky_relu'``). Defaults to 0. - mode (str): either ``'fan_in'`` or ``'fan_out'``. Choosing - ``'fan_in'`` preserves the magnitude of the variance of the weights - in the forward pass. Choosing ``'fan_out'`` preserves the - magnitudes in the backwards pass. Defaults to ``'fan_out'``. - nonlinearity (str): the non-linear function (`nn.functional` name), - recommended to use only with ``'relu'`` or ``'leaky_relu'`` . - Defaults to 'relu'. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` or - ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, - a: float = 0, - mode: str = 'fan_out', - nonlinearity: str = 'relu', - distribution: str = 'normal', - **kwargs): - super().__init__(**kwargs) - self.a = a - self.mode = mode - self.nonlinearity = nonlinearity - self.distribution = distribution - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self) -> str: - info = f'{self.__class__.__name__}: a={self.a}, mode={self.mode}, ' \ - f'nonlinearity={self.nonlinearity}, ' \ - f'distribution ={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Caffe2Xavier') -class Caffe2XavierInit(KaimingInit): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - def __init__(self, **kwargs): - super().__init__( - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - distribution='uniform', - **kwargs) - - def __call__(self, module: nn.Module) -> None: - super().__call__(module) - - -@INITIALIZERS.register_module(name='Pretrained') -class PretrainedInit: - """Initialize module by loading a pretrained model. 
- - Args: - checkpoint (str): the checkpoint file of the pretrained model should - be load. - prefix (str, optional): the prefix of a sub-module in the pretrained - model. it is for loading a part of the pretrained model to - initialize. For example, if we would like to only load the - backbone of a detector model, we can set ``prefix='backbone.'``. - Defaults to None. - map_location (str): map tensors into proper locations. - """ - - def __init__(self, - checkpoint: str, - prefix: Optional[str] = None, - map_location: Optional[str] = None): - self.checkpoint = checkpoint - self.prefix = prefix - self.map_location = map_location - - def __call__(self, module: nn.Module) -> None: - from mmcv.runner import (_load_checkpoint_with_prefix, load_checkpoint, - load_state_dict) - logger = get_logger('mmcv') - if self.prefix is None: - print_log(f'load model from: {self.checkpoint}', logger=logger) - load_checkpoint( - module, - self.checkpoint, - map_location=self.map_location, - strict=False, - logger=logger) - else: - print_log( - f'load {self.prefix} in model from: {self.checkpoint}', - logger=logger) - state_dict = _load_checkpoint_with_prefix( - self.prefix, self.checkpoint, map_location=self.map_location) - load_state_dict(module, state_dict, strict=False, logger=logger) - - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self) -> str: - info = f'{self.__class__.__name__}: load from {self.checkpoint}' - return info - - -def _initialize(module: nn.Module, - cfg: Dict, - wholemodule: bool = False) -> None: - func = build_from_cfg(cfg, INITIALIZERS) - # wholemodule flag is for override mode, there is no layer key in override - # and initializer will give init values for the whole module with the name - # in override. - func.wholemodule = wholemodule - func(module) - - -def _initialize_override(module: nn.Module, override: Union[Dict, List], - cfg: Dict) -> None: - if not isinstance(override, (dict, list)): - raise TypeError(f'override must be a dict or a list of dict, \ - but got {type(override)}') - - override = [override] if isinstance(override, dict) else override - - for override_ in override: - - cp_override = copy.deepcopy(override_) - name = cp_override.pop('name', None) - if name is None: - raise ValueError('`override` must contain the key "name",' - f'but got {cp_override}') - # if override only has name key, it means use args in init_cfg - if not cp_override: - cp_override.update(cfg) - # if override has name key and other args except type key, it will - # raise error - elif 'type' not in cp_override.keys(): - raise ValueError( - f'`override` need "type" key, but got {cp_override}') - - if hasattr(module, name): - _initialize(getattr(module, name), cp_override, wholemodule=True) - else: - raise RuntimeError(f'module did not have attribute {name}, ' - f'but init_cfg is {cp_override}.') - - -def initialize(module: nn.Module, init_cfg: Union[Dict, List[dict]]) -> None: - r"""Initialize a module. - - Args: - module (``torch.nn.Module``): the module will be initialized. - init_cfg (dict | list[dict]): initialization configuration dict to - define initializer. OpenMMLab has implemented 6 initializers - including ``Constant``, ``Xavier``, ``Normal``, ``Uniform``, - ``Kaiming``, and ``Pretrained``. 
- - Example: - >>> module = nn.Linear(2, 3, bias=True) - >>> init_cfg = dict(type='Constant', layer='Linear', val =1 , bias =2) - >>> initialize(module, init_cfg) - - >>> module = nn.Sequential(nn.Conv1d(3, 1, 3), nn.Linear(1,2)) - >>> # define key ``'layer'`` for initializing layer with different - >>> # configuration - >>> init_cfg = [dict(type='Constant', layer='Conv1d', val=1), - dict(type='Constant', layer='Linear', val=2)] - >>> initialize(module, init_cfg) - - >>> # define key``'override'`` to initialize some specific part in - >>> # module - >>> class FooNet(nn.Module): - >>> def __init__(self): - >>> super().__init__() - >>> self.feat = nn.Conv2d(3, 16, 3) - >>> self.reg = nn.Conv2d(16, 10, 3) - >>> self.cls = nn.Conv2d(16, 5, 3) - >>> model = FooNet() - >>> init_cfg = dict(type='Constant', val=1, bias=2, layer='Conv2d', - >>> override=dict(type='Constant', name='reg', val=3, bias=4)) - >>> initialize(model, init_cfg) - - >>> model = ResNet(depth=50) - >>> # Initialize weights with the pretrained model. - >>> init_cfg = dict(type='Pretrained', - checkpoint='torchvision://resnet50') - >>> initialize(model, init_cfg) - - >>> # Initialize weights of a sub-module with the specific part of - >>> # a pretrained model by using "prefix". - >>> url = 'http://download.openmmlab.com/mmdetection/v2.0/retinanet/'\ - >>> 'retinanet_r50_fpn_1x_coco/'\ - >>> 'retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth' - >>> init_cfg = dict(type='Pretrained', - checkpoint=url, prefix='backbone.') - """ - if not isinstance(init_cfg, (dict, list)): - raise TypeError(f'init_cfg must be a dict or a list of dict, \ - but got {type(init_cfg)}') - - if isinstance(init_cfg, dict): - init_cfg = [init_cfg] - - for cfg in init_cfg: - # should deeply copy the original config because cfg may be used by - # other modules, e.g., one init_cfg shared by multiple bottleneck - # blocks, the expected cfg will be changed after pop and will change - # the initialization behavior of other modules - cp_cfg = copy.deepcopy(cfg) - override = cp_cfg.pop('override', None) - _initialize(module, cp_cfg) - - if override is not None: - cp_cfg.pop('layer', None) - _initialize_override(module, override, cp_cfg) - else: - # All attributes in module have same initialization. - pass - - -def _no_grad_trunc_normal_(tensor: Tensor, mean: float, std: float, a: float, - b: float) -> Tensor: - # Method based on - # https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - # Modified from - # https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - lower = norm_cdf((a - mean) / std) - upper = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [lower, upper], then translate - # to [2lower-1, 2upper-1]. 
- tensor.uniform_(2 * lower - 1, 2 * upper - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor: Tensor, - mean: float = 0., - std: float = 1., - a: float = -2., - b: float = 2.) -> Tensor: - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. - - Modified from - https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - - Args: - tensor (``torch.Tensor``): an n-dimensional `torch.Tensor`. - mean (float): the mean of the normal distribution. - std (float): the standard deviation of the normal distribution. - a (float): the minimum cutoff value. - b (float): the maximum cutoff value. - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/vgg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/vgg.py deleted file mode 100644 index a1d9ba21..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/cnn/vgg.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging -from typing import List, Optional, Sequence, Tuple, Union - -import torch.nn as nn -from torch import Tensor - -from .utils import constant_init, kaiming_init, normal_init - - -def conv3x3(in_planes: int, out_planes: int, dilation: int = 1) -> nn.Module: - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - padding=dilation, - dilation=dilation) - - -def make_vgg_layer(inplanes: int, - planes: int, - num_blocks: int, - dilation: int = 1, - with_bn: bool = False, - ceil_mode: bool = False) -> List[nn.Module]: - layers = [] - for _ in range(num_blocks): - layers.append(conv3x3(inplanes, planes, dilation)) - if with_bn: - layers.append(nn.BatchNorm2d(planes)) - layers.append(nn.ReLU(inplace=True)) - inplanes = planes - layers.append(nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=ceil_mode)) - - return layers - - -class VGG(nn.Module): - """VGG backbone. - - Args: - depth (int): Depth of vgg, from {11, 13, 16, 19}. - with_bn (bool): Use BatchNorm or not. - num_classes (int): number of classes for classification. - num_stages (int): VGG stages, normally 5. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. 
- """ - - arch_settings = { - 11: (1, 1, 2, 2, 2), - 13: (2, 2, 2, 2, 2), - 16: (2, 2, 3, 3, 3), - 19: (2, 2, 4, 4, 4) - } - - def __init__(self, - depth: int, - with_bn: bool = False, - num_classes: int = -1, - num_stages: int = 5, - dilations: Sequence[int] = (1, 1, 1, 1, 1), - out_indices: Sequence[int] = (0, 1, 2, 3, 4), - frozen_stages: int = -1, - bn_eval: bool = True, - bn_frozen: bool = False, - ceil_mode: bool = False, - with_last_pool: bool = True): - super().__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for vgg') - assert num_stages >= 1 and num_stages <= 5 - stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - assert len(dilations) == num_stages - assert max(out_indices) <= num_stages - - self.num_classes = num_classes - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - - self.inplanes = 3 - start_idx = 0 - vgg_layers = [] - self.range_sub_modules = [] - for i, num_blocks in enumerate(self.stage_blocks): - num_modules = num_blocks * (2 + with_bn) + 1 - end_idx = start_idx + num_modules - dilation = dilations[i] - planes = 64 * 2**i if i < 4 else 512 - vgg_layer = make_vgg_layer( - self.inplanes, - planes, - num_blocks, - dilation=dilation, - with_bn=with_bn, - ceil_mode=ceil_mode) - vgg_layers.extend(vgg_layer) - self.inplanes = planes - self.range_sub_modules.append([start_idx, end_idx]) - start_idx = end_idx - if not with_last_pool: - vgg_layers.pop(-1) - self.range_sub_modules[-1][1] -= 1 - self.module_name = 'features' - self.add_module(self.module_name, nn.Sequential(*vgg_layers)) - - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Linear(512 * 7 * 7, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained: Optional[str] = None) -> None: - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - elif isinstance(m, nn.Linear): - normal_init(m, std=0.01) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x: Tensor) -> Union[Tensor, Tuple[Tensor, ...]]: - outs = [] - vgg_layers = getattr(self, self.module_name) - for i in range(len(self.stage_blocks)): - for j in range(*self.range_sub_modules[i]): - vgg_layer = vgg_layers[j] - x = vgg_layer(x) - if i in self.out_indices: - outs.append(x) - if self.num_classes > 0: - x = x.view(x.size(0), -1) - x = self.classifier(x) - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode: bool = True) -> None: - super().train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - vgg_layers = getattr(self, self.module_name) - if mode and self.frozen_stages >= 0: - for i in range(self.frozen_stages): - for j in range(*self.range_sub_modules[i]): - mod = vgg_layers[j] - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/__init__.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/__init__.py deleted file mode 100644 index 6ac55e63..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from . import ipu, mlu - -__all__ = ['mlu', 'ipu'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/__init__.py deleted file mode 100755 index d550865a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import IS_IPU_AVAILABLE - -if IS_IPU_AVAILABLE: - from .dataloader import IPUDataLoader - from .hook_wrapper import IPUFp16OptimizerHook - from .model_wrapper import ipu_model_wrapper - from .runner import IPUBaseRunner, IPUEpochBasedRunner, IPUIterBasedRunner - from .utils import cfg2options - __all__ = [ - 'cfg2options', 'ipu_model_wrapper', 'IPUFp16OptimizerHook', - 'IPUDataLoader', 'IPUBaseRunner', 'IPUEpochBasedRunner', - 'IPUIterBasedRunner' - ] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/dataloader.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/dataloader.py deleted file mode 100755 index 1485df2f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/dataloader.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Mapping, Sequence -from functools import partial - -import poptorch -from torch.utils.data.dataloader import default_collate - -from mmcv.parallel import DataContainer - - -def collate(batch, samples_per_gpu=1): - """Put each data field into a tensor/DataContainer with outer dimension - batch size. - - TODO support for - :type:`~mmcv.parallel.DataContainer`. Currently, it will be ignored. - There are 3 cases. - - 1. cpu_only = True, e.g., meta data. - 2. cpu_only = False, stack = True, e.g., images tensors. - 3. cpu_only = False, stack = False, e.g., gt bboxes. - """ - - if not isinstance(batch, Sequence): - raise TypeError( - f'`batch` should be a sequence, but got {type(batch)}.') - - if isinstance(batch[0], DataContainer): - # TODO `DataContainer` will be supported in the future. - raise TypeError('DataContainer is not supported in ipu data loader.') - elif isinstance(batch[0], Sequence): - transposed = zip(*batch) - collated_batch = [] - for samples in transposed: - if not isinstance(samples[0], DataContainer): - # At present, we will skip the processing of datacontainer, - # which will reduce the performance of IPU DataLoder - collated_batch.append(collate(samples, samples_per_gpu)) - return collated_batch - elif isinstance(batch[0], Mapping): - collated_batch = {} - for key in batch[0]: - if not isinstance(batch[0][key], DataContainer): - # At present, we will skip the processing of datacontainer, - # which will reduce the performance of IPU DataLoder - collated_batch[key] = collate([d[key] for d in batch]) - return collated_batch - else: - return default_collate(batch) - - -class IPUDataLoader(poptorch.DataLoader): - """Thin wrapper of `torch.utils.data.DataLoader`. - - Compared with the pytorch DataLoder, this DataLoder changes the way of - calculation of batch size and adds the AsynchronousDataAccessor to - load and release data faster in cpu mode. 
- - If this data loader is used in a distributed execution environment, it will - ensure that each process uses a different subset of the dataset, providing - you first call ``options.randomSeed(N)`` with an integer N which is the - same across all hosts. - - Args: - dataset (torch.utils.data.Dataset): The dataset to get the data from. - options (poptorch.Options): Options that will be used to compile - and run the model. - batch_size (int, optional): This is the batch size in the conventional - sense of being the size that runs through an operation in the model - at any given time. - shuffle (bool, optional): set to ``True`` to have the data reshuffled - at every epoch (default: ``False``). - num_workers (int, optional): how many subprocesses to use for data - loading. ``0`` means that the data will be loaded in the main - process. (default: ``0``) - drop_last (bool, optional): If True and the number of elements in the - dataset is not a multiple of the combined batch size then the - incomplete batch at the end will be dropped. - persistent_workers (bool, optional): Re-use workers between - iterations if True. - auto_distributed_partitioning (bool, optional): If True, partitions the - dataset for distributed execution automatically. Otherwise, it is - assumed that partitioning has been handled manually. - mode (poptorch.DataLoaderMode, optional): If `DataLoaderMode.Async`, - uses an :py:class:`~poptorch.AsynchronousDataAccessor` to access - the dataset. If `DataLoaderMode.Sync`, accesses the dataset - synchronously. - async_options (Dict[str, Any], optional): Options to pass to - :py:class:`~poptorch.AsynchronousDataAccessor`. - rebatched_worker_size (int, optional): When using AsyncRebatched: batch - size of the tensors loaded by the workers. - Default to the combined batch size. - If specified the ``rebatched_worker_size`` must be less than - or equal to the combined batch size. - kwargs (Dict[str, Any], optional): Other options to pass to PyTorch's - ``DataLoader`` constructor. - """ - - def __init__(self, - dataset, - options, - batch_size=1, - shuffle=False, - num_workers=0, - drop_last=True, - persistent_workers=True, - auto_distributed_partitioning=True, - mode='sync', - async_options=None, - rebatched_worker_size=None, - **kwargs): - """Lazy init: - - In many frameworks, the dataloader will be constructed before the - initialization of the ipu options, so the lazy init method is used - here, and the real initialization will not be done until the dataloader - needs to be used and the options are input. 
- """ - # lazy init: sometimes, we cannot get IPU options when build data - # loader - self.kwargs = { - 'dataset': dataset, - 'batch_size': batch_size, - 'shuffle': shuffle, - 'num_workers': num_workers, - 'drop_last': drop_last, - 'persistent_workers': persistent_workers, - 'auto_distributed_partitioning': auto_distributed_partitioning, - 'mode': mode, - 'collate_fn': partial(collate, samples_per_gpu=batch_size), - 'async_options': async_options, - 'rebatched_worker_size': rebatched_worker_size, - **kwargs - } - self.dataset = dataset - self.initialized = False - if options: - self.init(options=options) - - def init(self, options, **kwargs): - if not self.initialized: - kwargs = {**self.kwargs, **kwargs, 'options': options} - if kwargs['mode'] == 'sync': - kwargs['mode'] = poptorch.DataLoaderMode.Sync - elif kwargs['mode'] == 'async': - kwargs['mode'] = poptorch.DataLoaderMode.AsyncRebatched - if kwargs['async_options'] is None: - kwargs['async_options'] = { - 'load_indefinitely': True, - 'buffer_size': 8 - } - if kwargs['rebatched_worker_size'] is None: - kwargs['rebatched_worker_size'] = 128 - super().__init__(**kwargs) - self.initialized = True - - return self diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/hierarchical_data_manager.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/hierarchical_data_manager.py deleted file mode 100755 index a6f3b3cd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/hierarchical_data_manager.py +++ /dev/null @@ -1,243 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import numpy as np -import torch - -from mmcv.parallel import DataContainer - -# A customized None type for HierarchicalDataManager -HierarchicalDataNone = object() - - -class HierarchicalDataManager: - """A class manage all the tensors in the hierarchical data. - - At present, the input data structure accepted by IPU is limited, - when the input data structure of mmcv varies. - Here, an intermediate class is needed to get and update tensors - from the original data. - - HierarchicalDataManager will record a hierarchical input/output data in - self._hierarchical_data. For example, we have an input data: - {'img': tensorA, 'label': tensorB, 'img_metas': [tensorC, tensorD]} - To enable IPU to use the input, HierarchicalDataManager will collect - the torch tensors from self._hierarchical_data into a tuple like: - (tensorA, tensorB, tensorC, tensorD). - Meanwhile, the return of IPU is a tuple of tensors, HierarchicalDataManager - also have a function named update_all_tensors to update tensors in - self._hierarchical_data which is the output for upper calls. - - Args: - logger (:obj:`logging.Logger`): Logger used during running. - Defaults to None. 
- """ - - def __init__(self, logger=None): - self.atomic_types = (int, str, float, np.ndarray, type(None)) - self.warning = warnings.warn if logger is None else logger.warning - # enable or disable input data's shape and value check - self.quick_mode = False - self._hierarchical_data = None - - def quick(self): - self.quick_mode = True - - def compare_atomic_type(self, a, b): - """Compare data, supported datatypes are numpy array and python basic - types.""" - if isinstance(a, np.ndarray): - return np.all(a == b) - else: - return a == b - - def record_hierarchical_data(self, data): - """Record a hierarchical data.""" - if self._hierarchical_data is not None: - if isinstance(data, torch.Tensor): - assert isinstance(self._hierarchical_data, torch.Tensor), \ - 'original hierarchical data is not torch.tensor' - self._hierarchical_data = data - else: - self.update_hierarchical_data(data) - else: - self._hierarchical_data = data - - @property - def hierarchical_data(self): - return self._hierarchical_data - - def update_hierarchical_data(self, - dataA, - dataB=HierarchicalDataNone, - strict=True, - address='data'): - """Update dataB with dataA in-place. - - Args: - dataA (list or dict or tuple): New hierarchical data. - dataB (list or dict or tuple): hierarchical data to update. - if not specified, self.hierarchical_data will be updated then. - strict (bool, optional): If true, an error will be reported - when the following conditions occur: - 1. Non-torch.Tensor data changed. - 2. Torch.Tensor data shape changed. - address (str): Record the address of current data to be updated. - Default: 'data'. - """ - if dataB is HierarchicalDataNone: - dataB = self.hierarchical_data - - # Update with a da ta with the same structure - # but different values(tensors and basic python data types) - if isinstance(dataA, (tuple, list)): - for idx, node in enumerate(dataA): - new_address = '' - if not self.quick_mode: - new_address = address + f'[{str(idx)}]' - assert isinstance(node, type(dataB[idx])),\ - f'data structure changed: {new_address}' - if isinstance(node, torch.Tensor): - dataB[idx] = node - else: - self.update_hierarchical_data( - node, dataB[idx], strict, address=new_address) - elif isinstance(dataA, dict): - for k, v in dataA.items(): - new_address = '' - if not self.quick_mode: - new_address = address + f'[{str(k)}]' - assert isinstance(v, type(dataB[k])),\ - f'data structure changed: {new_address}' - if isinstance(v, torch.Tensor): - dataB[k] = v - else: - self.update_hierarchical_data( - v, dataB[k], strict, address=new_address) - elif isinstance(dataA, self.atomic_types): - if not self.quick_mode: - is_equal = self.compare_atomic_type(dataA, dataB) - if not is_equal: - if strict: - raise ValueError( - 'all data except torch.Tensor should be same, ' - f'but data({address}) is changed.') - else: - self.warning( - f'find a non-torch.Tensor data({type(dataA)}) ' - f'changed, and the address is {address}') - elif isinstance(dataA, DataContainer): - if not self.quick_mode: - assert isinstance(dataB, DataContainer) - new_address = address + '.data' - self.update_hierarchical_data( - dataA.data, dataB.data, False, address=new_address) - else: - raise NotImplementedError( - f'not supported datatype:{type(dataA)}, address is {address}') - - def collect_all_tensors(self, hierarchical_data=None): - """Collect torch.Tensor data from self.hierarchical_data to a list and - return.""" - # get a list of tensor from self._hierarchical_data - if hierarchical_data is None: - hierarchical_data = 
self._hierarchical_data - tensors = [] - if isinstance(hierarchical_data, torch.Tensor): - tensors = [hierarchical_data] - else: - self._collect_tensors(hierarchical_data, tensors) - return tensors - - def _collect_tensors(self, data, tensors): - if isinstance(data, (tuple, list)): - for node in data: - if isinstance(node, torch.Tensor): - tensors.append(node) - else: - self._collect_tensors(node, tensors) - elif isinstance(data, dict): - for v in data.values(): - if isinstance(v, torch.Tensor): - tensors.append(v) - else: - self._collect_tensors(v, tensors) - elif isinstance(data, self.atomic_types): - pass - elif isinstance(data, DataContainer): - self._collect_tensors(data.data, tensors) - else: - raise NotImplementedError(f'not supported datatype:{type(data)}') - - def update_all_tensors(self, tensors): - """Put tensors from tuple back to self.hierarchical_data.""" - if isinstance(self._hierarchical_data, torch.Tensor): - print(tensors, len(tensors)) - assert len(tensors) == 1 - assert isinstance(tensors[0], torch.Tensor) - self._hierarchical_data = tensors[0] - else: - # convert to list if tensors is tuple - tensors = list(tensors) - self._set_tensors(self._hierarchical_data, tensors) - return self.hierarchical_data - - def _set_tensors(self, data, tensors): - if isinstance(data, tuple): - data = list(data) - for idx in range(len(data)): - if isinstance(data[idx], torch.Tensor): - data[idx] = tensors.pop(0) - else: - self._set_tensors(data[idx], tensors) - data = tuple(data) - elif isinstance(data, list): - for idx in range(len(data)): - if isinstance(data[idx], torch.Tensor): - data[idx] = tensors.pop(0) - else: - self._set_tensors(data[idx], tensors) - elif isinstance(data, dict): - for k, v in data.items(): - if isinstance(v, torch.Tensor): - data[k] = tensors.pop(0) - else: - self._set_tensors(v, tensors) - elif isinstance(data, self.atomic_types): - pass - elif isinstance(data, DataContainer): - self._set_tensors(data.data, tensors) - else: - raise NotImplementedError(f'not supported datatype:{type(data)}') - - def clean_all_tensors(self): - """Delete tensors from self.hierarchical_data.""" - self._clean_tensors(self._hierarchical_data) - - def _clean_tensors(self, data): - if isinstance(data, tuple): - data = list(data) - for idx in range(len(data)): - if isinstance(data[idx], torch.Tensor): - data[idx] = None - else: - self._clean_tensors(data[idx]) - data = tuple(data) - elif isinstance(data, list): - for idx in range(len(data)): - if isinstance(data[idx], torch.Tensor): - data[idx] = None - else: - self._clean_tensors(data[idx]) - elif isinstance(data, dict): - for k, v in data.items(): - if isinstance(v, torch.Tensor): - data[k] = None - else: - self._clean_tensors(v) - elif isinstance(data, self.atomic_types): - pass - elif isinstance(data, DataContainer): - self._clean_tensors(data.data) - else: - raise NotImplementedError(f'not supported datatype:{type(data)}') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/hook_wrapper.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/hook_wrapper.py deleted file mode 100755 index 141afb86..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/hook_wrapper.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner import HOOKS, LrUpdaterHook, OptimizerHook -from mmcv.utils import TORCH_VERSION, digit_version - - -def wrap_lr_updater_hook(lr_hook_class): - """A wrapper function to wrap any subclass of LrUpdaterHook. 
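The manager above walks nested dicts/lists, pulls every `torch.Tensor` into a flat sequence for the IPU, and later writes the returned tensors back into the same positions. A simplified, standalone sketch of that collect/restore round trip (function names are illustrative; tuples and `DataContainer` handling are omitted for brevity, the real manager converts tuples to lists first):

```python
import torch


def collect_tensors(data, out):
    """Depth-first collection of tensors from nested dicts/lists/tuples."""
    if isinstance(data, torch.Tensor):
        out.append(data)
    elif isinstance(data, (list, tuple)):
        for item in data:
            collect_tensors(item, out)
    elif isinstance(data, dict):
        for value in data.values():
            collect_tensors(value, out)
    # other atomic types (int, float, str, ndarray, None) are left untouched


def set_tensors(data, tensors):
    """Write tensors back into the same slots, consuming `tensors` in order."""
    if isinstance(data, list):
        for i, item in enumerate(data):
            if isinstance(item, torch.Tensor):
                data[i] = tensors.pop(0)
            else:
                set_tensors(item, tensors)
    elif isinstance(data, dict):
        for key, value in data.items():
            if isinstance(value, torch.Tensor):
                data[key] = tensors.pop(0)
            else:
                set_tensors(value, tensors)


sample = {'img': torch.zeros(2, 3), 'meta': {'scale': 1.0, 'label': torch.ones(2)}}
flat = []
collect_tensors(sample, flat)                 # -> [img tensor, label tensor]
set_tensors(sample, [t + 1 for t in flat])    # restore (updated) tensors in place
```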
- - IPU needs extra operations to upload optimizer settings. This wrapper will - override function(_set_lr) of a subclass of LrUpdaterHook. - """ - assert issubclass(lr_hook_class, LrUpdaterHook) - - class ipu_lr_hook_class(lr_hook_class): - - def _set_lr(self, runner, *args, **kwargs): - super()._set_lr(runner, *args, **kwargs) - # convert torch optimizer to poptorch optimizer - runner.model.setOptimizer(runner.optimizer) - - return ipu_lr_hook_class - - -def wrap_optimizer_hook(optimizer_hook_class): - """A wrapper function to wrap OptimizerHook. - - This is an non-intrusive implementation of wrapping optimizer hook (or you - need to change every config file to use IPU optimizer hook) IPU's clip-norm - implementation is different from pytorch, so there should be an error - raised when using clip-norm. - """ - - class ipu_optimizer_hook_class(OptimizerHook): - - def __init__(self, **kwargs): - super().__init__(**kwargs) - if self.grad_clip is not None: - raise NotImplementedError('IPU does not support gradient clip') - - return ipu_optimizer_hook_class - - -if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - - @HOOKS.register_module() - class IPUFp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (using PyTorch's implementation). - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of GradScalar. - Defaults to 512. For Pytorch >= 1.6, mmcv uses official - implementation of GradScaler. If you use a dict version of - loss_scale to create GradScaler, please refer to: - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler - for the parameters. - - Examples: - >>> loss_scale = dict( - ... init_scale=65536.0, - ... growth_factor=2.0, - ... backoff_factor=0.5, - ... growth_interval=2000 - ... ) - >>> optimizer_hook = Fp16OptimizerHook(loss_scale=loss_scale) - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - assert grad_clip is None,\ - 'IPU mode does not support `grad_clip` currently' - assert coalesce,\ - 'implemented all reduce in distributed training currently' - assert bucket_size_mb == -1,\ - '`bucket_size_mb` should not be set in IPU mode' - self.distributed = distributed - self._scale_update_param = None - if loss_scale == 'dynamic': - raise NotImplementedError( - 'IPU mode does not support dynamic loss scale currently') - elif isinstance(loss_scale, float): - self.loss_scale = loss_scale - elif isinstance(loss_scale, dict): - raise NotImplementedError( - 'IPU mode supports single scale currently') - else: - raise ValueError( - f'loss_scale should be float, but got {loss_scale} ') - - def after_train_iter(self, runner): - pass - -else: - raise RuntimeError('The IPU mode only supports torch 1.6 and above') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/model_wrapper.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/model_wrapper.py deleted file mode 100755 index c345537e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/model_wrapper.py +++ /dev/null @@ -1,721 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import copy -import inspect -from collections import OrderedDict -from typing import Optional, Union - -import poptorch -import torch -import torch.nn as nn -from poptorch import PoplarExecutor, __version__, identity_loss -from poptorch._args_parser import ArgsParser - -from mmcv.runner import auto_fp16 -from .hierarchical_data_manager import HierarchicalDataManager -from .utils import compare_ndarray, model_sharding, recomputation_checkpoint - - -class DictArgsParser(ArgsParser): - """A helper class for handling model input. - - Args: - inputs (list): Inputs of model. - """ - - def __init__(self, inputs): - # Combine args and kwargs: - self._has_variadic_arguments = True - self._varnames = list(inputs.keys()) - self._defaults = [inspect.Parameter.empty for _ in self._varnames] - self._warned_not_contiguous_input = False - - -class WrappedNet(nn.Module): - """A net wrapper for model conversion. - - This wrapper will make some changes and add some extra functions to - training/inference model. - - Args: - model (:obj:`nn.Module`): The model to run. - inputs_manager (:obj:`HierarchicalDataManager`): A parser - converting inputs from tuple to dictionary. - outputs_manager (:obj:`HierarchicalDataManager`): A parser - converting outputs from dictionary to tuple. - inter_outputs_in_cpu (dict): Specify the features to be - recorded. - modules_to_record (mmcv.Config, list): Index or name of modules which - will be recorded for output. It is necessary to specify output for - static graph of model training or inference. - """ - - def __init__(self, - model, - inputs_manager, - outputs_manager, - inter_outputs_in_cpu, - modules_to_record=None): - super().__init__() - self.model = model - self.inputs_manager = inputs_manager - self.outputs_manager = outputs_manager - self.training = model.training - # Register a hook function to capture the intermediate features - # generated by the network to align the outputs between ipu and cpu - # Used to confirm whether the implementation of CPU is consistent - # with the implementation of IPU - self.inter_outputs_in_cpu = inter_outputs_in_cpu - if modules_to_record is None: - modules_to_record = [] - - for idx, (name, module) in enumerate(model.named_modules()): - if name in modules_to_record or idx in modules_to_record: - features_hook = self.get_input_output_hook( - name, idx, self.inter_outputs_in_cpu) - module.register_forward_hook(hook=features_hook) - - def get_input_output_hook(self, name, idx, save_dict): - - def input_output_hook(module, fea_in, fea_out): - if isinstance(fea_in, tuple): - fea_in = list(fea_in) - if isinstance(fea_out, tuple): - fea_out = list(fea_out) - save_dict[name] = { - 'fea_in': fea_in, - 'fea_out': fea_out, - 'idx': idx - } - return None - - return input_output_hook - - def forward(self, inputs_tuple): - """This function is used to be compiled to ipu, the inputs and outputs - need to be tuples, so here we need to restore the input back to a - dictionary and convert the output to a tuple.""" - self.inputs_manager.update_all_tensors(inputs_tuple) - kwargs = {**(self.inputs_manager.hierarchical_data)} - if self.training: - outputs = self.forward_train(kwargs) - # tell poptorch which loss will be used finally - identity_loss(outputs['loss'], reduction='none') - else: - outputs = self.forward_eval(kwargs) - - if isinstance(outputs, torch.Tensor): - # currently not support single tensor output, - # need to wrap it with a dictionary, - # use a keyword to identify this case - outputs = {'output of WrappedNet: single 
tensor': outputs} - - # if there are some features need to be record, add extra outputs - for name in self.inter_outputs_in_cpu: - outputs[name] = self.inter_outputs_in_cpu[name] - - # record all the places of return tensors in the converting stage - # while in the real run stage, all the tensor are changed in-place - # that means the output can be obtained directly outside this function - self.outputs_manager.record_hierarchical_data(outputs) - plain_outputs = self.outputs_manager.collect_all_tensors() - return plain_outputs - - def forward_train(self, kwargs): - optimizer = kwargs.pop('optimizer') - outputs = self.train_step(kwargs, optimizer) - return outputs - - def train_step(self, data, optimizer=None, **kwargs): - """The iteration step during training. - - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating are also defined in - this method, such as GAN. - - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer`, optional): The - optimizer of runner is passed to ``train_step()``. This - argument is unused and reserved. - - Returns: - dict: Dict of outputs. The following fields are contained. - - loss (torch.Tensor): A tensor for back propagation, which \ - can be a weighted sum of multiple losses. - - log_vars (dict): Dict contains all the variables to be sent \ - to the logger. - - num_samples (int): Indicates the batch size (when the model \ - is DDP, it means the batch size on each GPU), which is \ - used for averaging the logs. - """ - losses = self.model(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img'].data)) - - return outputs - - def _parse_losses(self, losses): - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(loss.mean() for loss in loss_value) - elif isinstance(loss_value, dict): - for name, value in loss_value.items(): - log_vars[name] = value - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(value for key, value in log_vars.items() if 'loss' in key) - log_vars['loss'] = loss - - return loss, log_vars - - def forward_eval(self, kwargs): - img = kwargs.pop('img') - img_metas = kwargs.pop('img_metas', None) - return_loss = kwargs.pop('return_loss') - assert not return_loss - # TODO Temporarily hard-code to close post_process, - # otherwise, in the third trace(_check_trace), - # post_process will convert output tensor to numpy array automatically, - # resulting in _check_trace failure - outputs = self.model( - img, - img_metas=img_metas, - return_loss=return_loss, - post_process=False) - return outputs - - -class MMPoplarExecutor(PoplarExecutor): - """An executor for inputs/outputs parsing, model compilation, data - alignment and IPU upload/download. - - Args: - model (:obj:`nn.Module`): The model to be compiled. - logger (:obj:`logging.Logger`): Logger used during running. - Defaults to None. - training (bool): Model in training mode or eval mode. - modules_to_record (mmcv.Config, list): Index or name of modules which - will be recorded for output. 
It is necessary to specify output for - static graph of model training or inference. - args (argument list): Arguments passed to the `__init__` - method of PoplarExecutor. - kwargs (keyword arguments): Keyword arguments passed to the `__init__` - method of PoplarExecutor. - """ - - def __init__(self, - model, - logger=None, - training=True, - modules_to_record=None, - *args, - **kwargs): - # self.model == self._user_model: input pytorch model - # self._model: wrapped model which is used to compile - # and update weights, these two models use same weights - # wrapped model only accept and output tuple, so - # HierarchicalDataManager will convert dictionary - # to tuple and convert them back - self.inputs_manager = HierarchicalDataManager(logger=logger) - self.outputs_manager = HierarchicalDataManager(logger=logger) - self.logger = logger - # the features calculated by CPU - self.inter_outputs_in_cpu = {} - # the features calculated by IPU - self.inter_outputs_in_ipu = {} - if modules_to_record is None: - # It is possible that the IPU implementation of some operators - # is inconsistent with the expected (CPU), here you can use - # this method to confirm whether there is a problem - self.compare_with_cpu = False - else: - self.compare_with_cpu = True - # move model.fp16_enabled to self.fp16_enabled, - # modify the position where the input is automatically casted to half - if getattr(model, 'fp16_enabled', False): - model.fp16_enabled = False - self.fp16_enabled = True - # make torch.jit.trace convert self._model - model = WrappedNet( - model, - self.inputs_manager, - self.outputs_manager, - self.inter_outputs_in_cpu, - modules_to_record=modules_to_record) - super().__init__(model, training=training, *args, **kwargs) - # overwrite self._args_parser in train_step or val_step - self._args_parser = None - if training: - assert self.training - else: - assert not self.training - - @property - def training(self): - # If trying to get the attribute(training) of self, - # since the class has no training attribute, - # it will automatically look for the training attribute of self.model. - # However, the real attribute we want to check is self._training, - # self.model.training and self._training are often inconsistent. 
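The `_parse_losses` method shown above reduces a dictionary of per-head losses into one scalar for backpropagation plus flat logging variables. A condensed, standalone version of that reduction (only the free-function name is new; the dict-valued branch of the original is omitted):

```python
from collections import OrderedDict

import torch


def parse_losses(losses):
    """Reduce a dict of tensors / lists of tensors to (total_loss, log_vars)."""
    log_vars = OrderedDict()
    for name, value in losses.items():
        if isinstance(value, torch.Tensor):
            log_vars[name] = value.mean()
        elif isinstance(value, list):
            log_vars[name] = sum(v.mean() for v in value)
        else:
            raise TypeError(f'{name} is not a tensor or list of tensors')

    # only keys containing "loss" contribute to the backpropagated value
    total = sum(v for k, v in log_vars.items() if 'loss' in k)
    log_vars['loss'] = total
    return total, log_vars


loss, log_vars = parse_losses({
    'loss_cls': torch.tensor([0.7, 0.9]),
    'loss_bbox': [torch.tensor(0.2), torch.tensor(0.4)],
    'acc': torch.tensor(0.55),
})
# loss == 0.8 (cls mean) + 0.6 (sum of bbox means) == 1.4; 'acc' is logged only
```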
- # It is not clear whether it is a Poptorch bug or a special design, - # temporarily use this function to fix the problem - return self._training # comes from self.model._training - - @auto_fp16(supported_types=(PoplarExecutor, )) - def run_model(self, data_dict): - # this function is used to parse input_dict - # and convert to output_dict - if self.isCompiled(): - self.inputs_manager.record_hierarchical_data(data_dict) - inputs_tuple = tuple(self.inputs_manager.collect_all_tensors()) - else: - # get tensors out of data and put them in a tuple - self.inputs_manager.record_hierarchical_data(data_dict) - inputs_tuple = tuple(self.inputs_manager.collect_all_tensors()) - # turn logger in data manager off after compilation - self.inputs_manager.quick() - self.outputs_manager.quick() - - # parser args in the first iter - if self._args_parser is None: - self._args_parser = DictArgsParser({'args': inputs_tuple}) - - # run or convert model - # the plain_outputs will be used in converting stage - plain_outputs = self(inputs_tuple) - - self.inputs_manager.clean_all_tensors() - - # put list of tensors back to the output dict - # according to the same order - self.outputs_manager.update_all_tensors(plain_outputs) - # get the real output dictionary from self.outputs_manager - output_dict = self.outputs_manager.hierarchical_data - - # split output_dict into inter_outputs_in_ipu - # and output of the torch model - torch_model_output = {} - for name in output_dict: - if name in self.inter_outputs_in_cpu: - self.inter_outputs_in_ipu[name] = output_dict[name] - else: - torch_model_output[name] = output_dict[name] - - if 'output of WrappedNet: single tensor' in output_dict: - assert len(torch_model_output) == 1 - assert isinstance( - torch_model_output['output of WrappedNet: single tensor'], - torch.Tensor) - torch_model_output = \ - torch_model_output['output of WrappedNet: single tensor'] - - return torch_model_output - - def train_step(self, data, optimizer=None, **kwargs): - # arguments from mmcls/models/classifiers/base.py: - # BaseClassifier.train_step - assert self.training - assert len(kwargs) == 0 # TODO, support later if necessary - - # TODO support datacontainer as input - # currently, auto_fp16 and HierarchicalDataManager take too much - # time on traversing datacontainer - data['img_metas'] = None - num_samples = len(data['img'].data) - - # TODO we will ignore optimizer because it will not be used in model, - # support later if necessary - data['optimizer'] = None - output_dict = self.run_model(data) - - # outputs contained loss, log_vars, num_samples, - # only loss(torch.tensor) has been updated - # remove all unchanged vars, left torch.tensor - neat_output_dict = {'loss': output_dict['loss']} - - # re-parse outputs, get back log_vars and num_samples - loss, log_vars = self.model._parse_losses(neat_output_dict) - final_output_dict = dict( - loss=loss, log_vars=log_vars, num_samples=num_samples) - return final_output_dict - - def eval_call(self, img, img_metas=None, return_loss=True, **kwargs): - # arguments from mmdet/models/detectors/base.py:BaseDetector.forward - # tmp usssage for eval mode - assert not self.training - assert len(kwargs) == 0 # TODO, support later if necessary - assert not return_loss - data = {'img': img, 'img_metas': img_metas, 'return_loss': return_loss} - - output_dict = self.run_model(data) - - return output_dict - - def detachFromDevice(self): - if self.isCompiled() and self._is_attached: - super().detachFromDevice() - - def attachToDevice(self): - if self.isCompiled() 
and not self._is_attached: - super().attachToDevice() - - -class TrainEvalModel: - """A class maintaining training MMPoplarExecutor and inference - MMPoplarExecutor. - - Args: - train_model (:obj:`nn.Module`): The training model to be compiled. - ``train_model`` can be None if only executing validation. - eval_model (:obj:`nn.Module`): The inference model to be compiled. - options (mmcv.Config, dict): Options that will be used to compile - and run the model. - optimizer (:obj:`torch.optim.Optimizer`, optional): torch - optimizer, necessary if in training mode - logger (:obj:`logging.Logger`): Logger used during running. - Defaults to None. - modules_to_record (mmcv.Config, list): Index or name of modules which - will be recorded for output. It is necessary to specify output for - static graph of model training or inference. - """ - - def __init__(self, - train_model, - eval_model, - options, - optimizer, - modules_to_record=None, - logger=None): - if train_model is None: - self._train_executor = None - self.training = False - else: - self._train_executor = get_training_model( - train_model, - options=options['training'], - optimizer=optimizer, - logger=logger, - modules_to_record=modules_to_record) - self.training = True - self._eval_executor = get_inference_model( - eval_model, options=options['inference'], logger=logger) - - @property - def executor(self): - if self.training: - return self._train_executor - else: - return self._eval_executor - - def train(self, mode: bool = True): - """Sets the module in training mode. - - This has any effect only on certain modules. See documentations of - particular modules for details of their behaviors in - training/evaluation mode, if they are affected, - e.g. :class:`Dropout`, :class:`BatchNorm`, etc. - - Args: - mode (bool): whether to set training mode (``True``) or evaluation - mode (``False``). Default: ``True``. - - Returns: - Module: self - """ - if not isinstance(mode, bool): - raise ValueError('training mode is expected to be boolean, ' - f'but got {type(mode)}') - if self._train_executor is None and mode: - raise RuntimeError( - 'The train_executor is not initialized.' - 'If you want to initialize train_executor,' - 'you need to input optimizer when converting pytorch model') - - if mode == self.training: - self.model.train(mode) - return self - else: - if self.isCompiled(): - # copy weights from IPU to cpu before off-load current session - self.copyWeightsToHost() - # detach the current session before change the mode, - # if is training mode and weights are updated, - # poptorch will copy weights from IPU to host - self.detachFromDevice() - - self.training = mode # session will changed with mode changing - self.model.train(mode) - - # after changing mode, attach the current new session, - # and this function will copy weights of model to device - self.attachToDevice() - return self - - def eval(self): - """Sets the module in evaluation mode. - - This has any effect only on certain modules. - See documentations of particular modules - for details of their behaviors in training/evaluation mode, - if they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`, etc. - - This is equivalent with :meth:`self.train(False) - `. - - See :ref:`locally-disable-grad-doc` for a comparison between - `.eval()` and several similar mechanisms that may be confused with it. 
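`TrainEvalModel.train()` above keeps two compiled executors and, when the mode actually changes, first copies weights back to the host and detaches the current IPU session before attaching the other one. A hedged, hardware-free sketch of that switching logic (the classes and method names below are placeholders, not the poptorch API):

```python
class DummySession:
    """Stand-in for a compiled executor; only logs what would happen."""

    def __init__(self, name):
        self.name = name

    def copy_weights_to_host(self):
        print(f'{self.name}: copy weights device -> host')

    def detach(self):
        print(f'{self.name}: detach from device')

    def attach(self):
        print(f'{self.name}: attach to device (weights uploaded)')


class TwoSessionModel:
    """Hold a training and an inference session and swap them safely."""

    def __init__(self, train_session, eval_session):
        self._sessions = {True: train_session, False: eval_session}
        self.training = True

    def train(self, mode=True):
        if mode == self.training:
            return self                         # mode unchanged: nothing to do
        current = self._sessions[self.training]
        current.copy_weights_to_host()          # keep host weights up to date
        current.detach()                        # free the device for the other session
        self.training = mode
        self._sessions[mode].attach()           # new session takes over the device
        return self

    def eval(self):
        return self.train(False)


model = TwoSessionModel(DummySession('train'), DummySession('eval'))
model.eval()    # host copy + detach of the training session, attach of the eval session
model.train()   # and back again
```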
- - Returns: - Module: self - """ - return self.train(False) - - def compare_data_between_ipu_and_cpu(self, inter_outputs_in_cpu, - inter_outputs_in_ipu): - for key, val in inter_outputs_in_cpu.items(): - is_tensor = isinstance(val['fea_in'], torch.Tensor) - fea_in_cpu = val['fea_in'] - fea_in_cpu_list = [fea_in_cpu] if is_tensor else fea_in_cpu - fea_in_ipu = inter_outputs_in_ipu[key]['fea_in'] - fea_in_ipu_list = [fea_in_ipu] if is_tensor else fea_in_ipu - - is_tensor = isinstance(val['fea_out'], torch.Tensor) - fea_out_cpu = val['fea_out'] - fea_out_cpu_list = [fea_out_cpu] if is_tensor else fea_out_cpu - fea_out_ipu = inter_outputs_in_ipu[key]['fea_out'] - fea_out_ipu_list = [fea_out_ipu] if is_tensor else fea_out_ipu - - print('comparing layer:', key) - for idx, (featA, featB) in \ - enumerate(zip(fea_in_cpu_list, fea_in_ipu_list)): - print('fea_in, tensor ', idx) - compare_ndarray(featA.detach().numpy(), featB.detach().numpy()) - for idx, (featA, featB) in \ - enumerate(zip(fea_out_cpu_list, fea_out_ipu_list)): - print('fea_out, tensor', idx) - compare_ndarray(featA.detach().numpy(), featB.detach().numpy()) - - # TODO Unified training and eval interface, - # merge train_step(train) and __call__(eval) together - def train_step(self, data, optimizer=None, **kwargs): - assert self.training, 'not supported train_step on eval mode' - inter_outputs_in_cpu = {} - if (self._train_executor.isCompiled() - and self._train_executor.compare_with_cpu): - self.copyWeightsToHost() - # run in CPU mode - self._train_executor.model.train_step(data, optimizer, **kwargs) - inter_outputs_in_cpu = { - **(self._train_executor.inter_outputs_in_cpu) - } - # run in IPU mode - result = self._train_executor.train_step(data, optimizer, **kwargs) - if (self._train_executor.isCompiled() - and self._train_executor.compare_with_cpu - and len(inter_outputs_in_cpu) > 0): - self.compare_data_between_ipu_and_cpu( - inter_outputs_in_cpu, - self._train_executor.inter_outputs_in_ipu) - return result - - # TODO Unified training and eval interface, - # merge train_step(train) and __call__(eval) together - def __call__(self, *args, **kwargs): - if self.training: - raise NotImplementedError('use train_step rather than __call__') - else: - return self._eval_executor.eval_call(*args, **kwargs) - - def __getattr__(self, attr): - return getattr(self.executor, attr) - - -def get_training_model(model: nn.Module, - options: Optional[poptorch.Options] = None, - optimizer: Optional[torch.optim.Optimizer] = None, - logger=None, - modules_to_record=None) -> poptorch.PoplarExecutor: - """Create a PopTorch training model from a PyTorch model, running on IPU - hardware in training mode. - - Note: - PopTorch makes a shallow copy of the model. Changes to the - parameters in the returned training model affect the original model - and vice versa. However, primitive variable types are not synced: for - example calling ``model.train()`` on the original model, which - changes the ``training`` bool of the model instance, will not alter the - model returned by this function. You may need to call ``model.train()`` - on your model before you call this function for correct behavior. - - Args: - model (:obj:`nn.Module`): The model to run. - options (poptorch.Options): Options that will be used to compile - and run the model. - optimizer (:obj:`torch.optim.Optimizer`, optional): The optimizers - to apply during training. - logger (:obj:`logging.Logger`): Logger used during running. - Defaults to None. 
- modules_to_record (mmcv.Config, list): Index or name of modules which - will be recorded for output. It is necessary to specify output for - static graph of model training or inference. - - Returns: - The :class:`poptorch.PoplarExecutor` wrapper to use in place - of ``model``. - """ - # Create a copy of the original model in case it needs to be wrapped - maybe_wrapped_model = copy.copy(model) - - return MMPoplarExecutor( - model=maybe_wrapped_model, - logger=logger, - options=options, - training=True, - optimizer=optimizer, - user_model=model, - modules_to_record=modules_to_record, - poptorch_version=__version__) - - -def get_inference_model(model: Union[nn.Module, poptorch.PoplarExecutor], - options: Optional[poptorch.Options] = None, - logger=None) -> poptorch.PoplarExecutor: - """Create a PopTorch inference model from a PyTorch model, running on IPU - hardware in inference mode. - - Note: - PopTorch makes a shallow copy of the model. Changes to the - parameters in the returned inference model affect the original model - and vice versa. However, primitive variable types are not synced: for - example calling ``model.eval()`` on the original model will not alter - the model returned by this function. You may need to call - ``model.eval()`` on your model before you call this function for - correct behavior. - - Args: - model (:obj:`nn.Module`): The model to run. - options (poptorch.Options): Options that will be used to compile - and run the model. - logger (:obj:`logging.Logger`): Logger used during running. - Defaults to None. - - Returns: - The :class:`poptorch.PoplarExecutor` wrapper to use in place of - ``model``. - """ - - return MMPoplarExecutor( - model=copy.copy(model), - logger=logger, - options=options, - training=False, - poptorch_version=__version__) - - -def ipu_model_wrapper(model, - options, - optimizer=None, - logger=None, - modules_to_record=None, - ipu_model_cfg=None, - fp16_cfg=None): - """Convert torch model to IPU model. - - Args: - model (nn.Module): The target model to be converted. - options (dict[str, poptorch.Options]): IPU options, generated - by :func:`cfg2options`. - optimizer (:obj:`torch.optim.Optimizer`, optional): torch - optimizer, necessary if in training mode - logger (:obj:`logging.Logger`): Logger used during training. - modules_to_record (mmcv.Config, list): Index or name of modules which - will be recorded for output. It is necessary to specify output for - static graph of model training or inference. - ipu_model_cfg (dict): A dictionary contains train_split_edges and - train_ckpt_nodes, See details in :func:`model_sharding` and - :func:`recomputation_checkpoint` functions. - fp16_cfg (dict): Config for IPU fp16 training. Currently supports - configs: `loss_scale`, `velocity_accum_type` and `accum_type`. - See details in - https://docs.graphcore.ai/projects/poptorch-user-guide/en/latest/index.html - - Returns: - TrainEvalModel: IPU wrapped model. 
- """ - if ipu_model_cfg is None: - ipu_model_cfg = {} - training = model.training if optimizer is not None else False - # set mixed-precision - if fp16_cfg is not None: - from mmcv.runner import wrap_fp16_model - loss_scale = fp16_cfg['loss_scale'] - wrap_fp16_model(model) - model.half() - # TODO tmp ussage to set loss scaling for torch original optimizer - if optimizer is not None: - optimizer.loss_scaling = loss_scale - if fp16_cfg.get('velocity_accum_type', False): - if fp16_cfg['velocity_accum_type'] == 'half': - optimizer.velocity_accum_type = torch.half - else: - optimizer.velocity_accum_type = torch.float32 - if fp16_cfg.get('accum_type', False): - if fp16_cfg['accum_type'] == 'half': - optimizer.accum_type = torch.half - else: - optimizer.accum_type = torch.float32 - # TODO support feature alignment for fp16 - if modules_to_record is not None: - raise NotImplementedError( - 'Feature alignment for fp16 is not implemented') - - # set model partition - if optimizer is None: - train_model = None - else: - # split model into multi-IPUs if specified - train_model = model_sharding( - copy.copy(model).train(), - ipu_model_cfg.get('train_split_edges', [])) - - recomputation_checkpoint(train_model, - ipu_model_cfg.get('train_ckpt_nodes', [])) - - # TODO support feature alignment for gradient accumulation mode - gradient_accumulation = \ - getattr(options['training'].Training, 'gradient_accumulation', 1) - if gradient_accumulation > 1: - assert modules_to_record is None, \ - 'Feature alignment for grad-accumulation mode not implemented' - - # TODO support feature alignment for multi-replica mode - replication_factor = \ - getattr(options['training'], 'replication_factor', 1) - if replication_factor > 1: - assert modules_to_record is None, \ - 'Feature alignment for multi-replica mode not implemented' - - # TODO supports different model partitions between train and eval mode - assert len(ipu_model_cfg.get('eval_split_edges', [])) == 0,\ - 'Currently, BeginBlock can only be used once on the same model' - eval_model = copy.copy(model).eval() - - # wrap model for compilation - model = TrainEvalModel( - train_model, - eval_model, - options=options, - optimizer=optimizer, - logger=logger, - modules_to_record=modules_to_record) - model.train(training) - return model diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/runner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/runner.py deleted file mode 100755 index e2d49226..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/runner.py +++ /dev/null @@ -1,142 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -from mmcv.runner import (HOOKS, RUNNERS, BaseRunner, EpochBasedRunner, - IterBasedRunner) -from mmcv.utils import IS_IPU_AVAILABLE - -if IS_IPU_AVAILABLE: - from .dataloader import IPUDataLoader - from .hook_wrapper import (IPUFp16OptimizerHook, wrap_lr_updater_hook, - wrap_optimizer_hook) - from .model_wrapper import ipu_model_wrapper - from .utils import build_from_cfg_with_wrapper, cfg2options - - -class IPUBaseRunner(BaseRunner): - """A base runner for IPU. - - This runner has some extra processes for IPU which are shown below: - - 1. Parse options for IPU - 2. wrap pytorch model for IPU - 3. Raise errors while encountering illegal usage - 4. Input IPU options and initialize dataloader if finding an instance - of IPUDataLoader - - Args: - model (:obj:`nn.Module`): The model to run. 
- options_cfg (mmcv.Config, dict): Options that will be used to compile - and run the model. - modules_to_record (mmcv.Config, list): Index or name of modules which - will be recorded for output. It is necessary to specify output for - static graph of model training or inference. - ipu_model_cfg (mmcv.Config, dict): Config of model partition and - recomputing checkpoint - fp16_cfg (mmcv.Config): Config for fp16 training. - batch_processor (callable): A callable method that process a data - batch. Should be None for IPU runner - kwargs (Dict[str, Any], optional): Keyword arguments will be passed to - ``base_runner.BaseRunner``. - """ - - def __init__(self, - model, - options_cfg=None, - modules_to_record=None, - ipu_model_cfg=None, - fp16_cfg=None, - batch_processor=None, - **kwargs): - assert hasattr(model, 'train_step') and batch_processor is None,\ - 'only support model with train_step' - - if options_cfg is None: - options_cfg = {} - # call BaseRunner.__init__() here - super().__init__(model, **kwargs) - - # process options of ipu - if IS_IPU_AVAILABLE: - self.options = cfg2options(options_cfg) - self.model = ipu_model_wrapper( - self.model, - self.options, - self.optimizer, - self.logger, - modules_to_record=modules_to_record, - ipu_model_cfg=ipu_model_cfg, - fp16_cfg=fp16_cfg) - else: - raise NotImplementedError('cpu mode on IPURunner is not supported') - - def register_lr_hook(self, lr_config): - if lr_config is None: - return - assert isinstance(lr_config, dict) - assert 'policy' in lr_config - policy_type = lr_config.pop('policy') - # If the type of policy is all in lower case, - # e.g., 'cyclic', then its first letter will be capitalized, - # e.g., to be 'Cyclic'. - # This is for the convenient usage of Lr updater. - # Since this is not applicable for ` - # CosineAnnealingLrUpdater`, the string will not be changed - # if it contains capital letters. - if policy_type == policy_type.lower(): - policy_type = policy_type.title() - hook_type = policy_type + 'LrUpdaterHook' - lr_config['type'] = hook_type - hook = build_from_cfg_with_wrapper(lr_config, HOOKS, - wrap_lr_updater_hook) - self.register_hook(hook, priority='VERY_HIGH') - - def register_optimizer_hook(self, optimizer_config): - if optimizer_config is None: - return - assert isinstance(optimizer_config, (dict, IPUFp16OptimizerHook)) - if isinstance(optimizer_config, dict): - optimizer_config.setdefault('type', 'OptimizerHook') - hook = build_from_cfg_with_wrapper(optimizer_config, HOOKS, - wrap_optimizer_hook) - else: - hook = optimizer_config - self.register_hook(hook, priority='ABOVE_NORMAL') - - def run(self, data_loaders, workflow, *args, **kwargs): - for i, flow in enumerate(workflow): - mode, _ = flow - # initialize IPU dataloader if not initialized - assert isinstance(data_loaders[i], IPUDataLoader),\ - 'IPU runner can only work with `IPUDataLoader`' - data_loaders[i].init(options=self.get_options(mode)) - - super().run(data_loaders, workflow, *args, **kwargs) - - def get_options(self, mode): - if mode == 'train': - return self.options['training'] - elif mode == 'val': - return self.options['inference'] - else: - raise ValueError(f'mode should be train or val but got {mode}') - - -@RUNNERS.register_module() -class IPUEpochBasedRunner(IPUBaseRunner, EpochBasedRunner): - """Epoch-based Runner for IPU. - - The Inheritance order(MRO) is: IPUEpochBasedRunner -> IPUBaseRunner -> - EpochBasedRunner -> BaseRunner This runner train models epoch by epoch. 
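`register_lr_hook` above rebuilds the hook type name from the config's `policy` field: a fully lower-case policy such as `'cyclic'` is title-cased before `'LrUpdaterHook'` is appended, while mixed-case names like `'CosineAnnealing'` are kept verbatim. A tiny sketch of that naming rule:

```python
def lr_policy_to_hook_type(policy_type: str) -> str:
    """Map a config 'policy' string to the LrUpdaterHook class name."""
    if policy_type == policy_type.lower():    # e.g. 'cyclic' -> 'Cyclic'
        policy_type = policy_type.title()
    return policy_type + 'LrUpdaterHook'


assert lr_policy_to_hook_type('cyclic') == 'CyclicLrUpdaterHook'
assert lr_policy_to_hook_type('CosineAnnealing') == 'CosineAnnealingLrUpdaterHook'
```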
- """ - pass - - -@RUNNERS.register_module() -class IPUIterBasedRunner(IPUBaseRunner, IterBasedRunner): - """Iteration-based Runner for IPU. - - The Inheritance order(MRO) is: IPUIterBasedRunner -> IPUBaseRunner -> - IterBasedRunner -> BaseRunner This runner train models iteration by - iteration. - """ - pass diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/utils.py deleted file mode 100755 index 79709db1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/ipu/utils.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect - -import numpy as np -import popart -import poptorch -import torch -import torch.nn as nn - -from mmcv.utils import Registry - - -def _options_assigner(cfg, options_node): - # set popart.options by config - # cfg: dict, python data type - # options_node: python module or function - if isinstance(cfg, dict): - for key in cfg: - _options_assigner(cfg[key], getattr(options_node, key)) - elif isinstance(cfg, (int, float, str, list)): - if callable(options_node): - options_node(cfg) - else: - error_msg = f'options_node type {type(options_node)} not supported' - raise NotImplementedError(error_msg) - else: - error_msg = f'cfg type {type(cfg)} not supported' - raise NotImplementedError(error_msg) - - -def cfg2options(cfg): - """Parse dictionary to ipu options. - - Args: - cfg (dict): A dictionary of ipu settings. - - Returns: - dict[str, poptorch.Options]: Training options and inference options - of IPU. - """ - # set ipu options for inference and training by config - train_cfg = cfg.pop('train_cfg', {}) - eval_cfg = cfg.pop('eval_cfg', {}) - eval_cfg['replicationFactor'] = 1 # eval mode only use one replica - eval_cfg['executionStrategy'] = 'ShardedExecution' - # overwrite default ipu cfg with specified train cfgs - training_ipu_cfg = {**cfg, **train_cfg} - # overwrite default ipu cfg with specified eval cfgs - inference_ipu_cfg = {**cfg, **eval_cfg} - - ipu_options = { - 'training': _cast_to_options(training_ipu_cfg), - 'inference': _cast_to_options(inference_ipu_cfg) - } - - # TODO configure these codes - ipu_options['training']._Popart.set('disableGradAccumulationTensorStreams', - True) - ipu_options['training']._Popart.set( - 'accumulateOuterFragmentSettings.schedule', - int(popart.AccumulateOuterFragmentSchedule.OverlapMemoryOptimized)) - ipu_options['training'].Precision.enableStochasticRounding(True) - - return ipu_options - - -def _cast_to_options(cfg): - # If it cannot be directly assigned, use if statement to parse it, - # and if it can be directly assigned, use _options_assigner to assign - options = poptorch.Options() - - if 'availableMemoryProportion' in cfg: - available_memory_proportion = cfg.pop('availableMemoryProportion') - mem_props = {} - for i, mem_prop in enumerate(available_memory_proportion): - mem_props[f'IPU{i}'] = mem_prop - options.setAvailableMemoryProportion(mem_props) - - if 'executionStrategy' in cfg: - execution_strategy = cfg.pop('executionStrategy') - if execution_strategy == 'SameAsIpu': - options.setExecutionStrategy( - poptorch.PipelinedExecution( - getattr(poptorch.AutoStage, execution_strategy))) - elif execution_strategy == 'ShardedExecution': - options.setExecutionStrategy(poptorch.ShardedExecution()) - else: - raise NotImplementedError( - 'executionStrategy should be "SameAsIpu" or "ShardedExecution"' - f', but got {execution_strategy}') - - if 'partialsType' in cfg: 
- partials_type = cfg.pop('partialsType') - options.Precision.setPartialsType(getattr( - torch, partials_type)) # half or float - - _options_assigner(cfg, options) - return options - - -def model_sharding(model, split_edges): - """split models in-place into multi-IPUs. - - Args: - model (nn.Module): The target model to be split. - split_edges (list of dict): Model layer names or layer numbers - of split edge. Each item of ``split_edges`` is a dictionary, - which may contain the following key-pairs: - - - layer_to_call: PyTorch module to assign to the block - - user_id (optional): A user defined identifier for the block. - - ipu_id: The id of the IPU to run on. - - Examples: - >>> split_edges = [ - ... dict(layer_to_call='model.conv1', ipu_id=0), - ... dict(layer_to_call='model.conv3', ipu_id=1)] - >>> sharding_model = model_sharding(torch_model, split_edges) - - Returns: - nn.Module: Split model. - """ - if len(split_edges) == 0: - return model - assert isinstance(split_edges, list) - spilt_edges_dict = {edge['layer_to_call']: edge for edge in split_edges} - - for idx, (name, module) in enumerate(model.named_modules()): - if idx in spilt_edges_dict and name in spilt_edges_dict: - raise ValueError( - 'The same layer is referenced twice while doing model' - f' partition: idx is {idx} and name is {name}') - - edge = spilt_edges_dict.pop(name, None) - edge = spilt_edges_dict.pop(idx, edge) - if edge is not None: - poptorch.BeginBlock(module, edge.get('user_id', name), - edge['ipu_id']) - - # ensure all split_edges are used - if len(spilt_edges_dict) > 0: - split_edge_names = list(spilt_edges_dict.keys()) - raise RuntimeError( - f'split_edges: {split_edge_names} are not contained in the model') - return model - - -def recomputation_checkpoint(model: nn.Module, module_names: list): - """Annotates the output of a module to be checkpointed instead of - recomputed. - - If recomputation mode is enabled, ipu will release the activations of - the middle layers to save memory. During the backward of gradient, - the activation of the middle layer will be recalculated again. - This function is used to declare the activations of some intermediate - layers that need to be saved in order to skip the recomputation of - some layers. - - Args: - model (nn.Module): The target model to apply recomputation - checkpoint. - module_names (list): Layer names of module. - """ - - def recompute_outputs(module, inputs, outputs): - if isinstance(outputs, tuple): - return tuple(poptorch.recomputationCheckpoint(y) for y in outputs) - else: - return poptorch.recomputationCheckpoint(outputs) - - for name, module in model.named_modules(): - if name in module_names: - module.register_forward_hook(recompute_outputs) - module_names.remove(name) - - # check all module_names are used - assert len(module_names) == 0,\ - f'recomputed nodes: {module_names} are not contained in the model' - - -def compare_ndarray(featA, featB, rtol=1e-3, atol=1e-5): - """Align data between two activations or weights.""" - try: - np.testing.assert_allclose(featA, featB, rtol=rtol, atol=atol) - except AssertionError as e: - print(e) - - -def build_from_cfg_with_wrapper(cfg, - registry, - wrapper_func=None, - default_args=None): - """Build a module from config dict and wrap module with "wrapper_func". - - Args: - cfg (dict): Config dict. It should at least contain the key "type". - registry (:obj:`Registry`): The registry to search the type from. - default_args (dict, optional): Default initialization arguments. 
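`cfg2options` and `_cast_to_options` above split one dictionary into training and inference option sets, special-casing `availableMemoryProportion`, `executionStrategy` and `partialsType` before handing the remaining keys to `_options_assigner`. A hedged example of a config dict shaped for that parser (the key names come from the parsing code above, the values are illustrative, and the conversion call is left commented out because it needs the poptorch SDK):

```python
ipu_options_cfg = dict(
    # keys shared by training and inference
    randomSeed=42,
    availableMemoryProportion=[0.3, 0.3],     # one proportion per IPU
    partialsType='half',                      # mapped to torch.half
    # training-only overrides
    train_cfg=dict(
        executionStrategy='SameAsIpu',
        Training=dict(gradientAccumulation=8),
    ),
    # inference-only overrides (one replica / ShardedExecution are forced anyway)
    eval_cfg=dict(),
)

# options = cfg2options(ipu_options_cfg)
# options['training'] and options['inference'] would be poptorch.Options instances
```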
- wrapper_func (function): Used to wrap class - - Returns: - object: The constructed object. - """ - if not isinstance(cfg, dict): - raise TypeError(f'cfg must be a dict, but got {type(cfg)}') - if 'type' not in cfg: - if default_args is None or 'type' not in default_args: - raise KeyError( - '`cfg` or `default_args` must contain the key "type", ' - f'but got {cfg}\n{default_args}') - if not isinstance(registry, Registry): - raise TypeError('registry must be an mmcv.Registry object, ' - f'but got {type(registry)}') - if not (isinstance(default_args, dict) or default_args is None): - raise TypeError('default_args must be a dict or None, ' - f'but got {type(default_args)}') - - args = cfg.copy() - - if default_args is not None: - for name, value in default_args.items(): - args.setdefault(name, value) - - obj_type = args.pop('type') - if isinstance(obj_type, str): - obj_cls = registry.get(obj_type) - if obj_cls is None: - raise KeyError( - f'{obj_type} is not in the {registry.name} registry') - elif inspect.isclass(obj_type): - obj_cls = obj_type - else: - raise TypeError( - f'type must be a str or valid type, but got {type(obj_type)}') - - if wrapper_func is None: - wrapped_obj_cls = obj_cls - else: - wrapped_obj_cls = wrapper_func(obj_cls) - try: - return wrapped_obj_cls(**args) - except Exception as e: - # Normal TypeError does not print class name. - raise type(e)(f'{wrapped_obj_cls.__name__}: {e}') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/__init__.py deleted file mode 100644 index 572c4da7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .data_parallel import MLUDataParallel -from .distributed import MLUDistributedDataParallel -from .scatter_gather import scatter, scatter_kwargs - -__all__ = [ - 'MLUDataParallel', 'MLUDistributedDataParallel', 'scatter', - 'scatter_kwargs' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/_functions.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/_functions.py deleted file mode 100644 index 75660fa9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/_functions.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Union - -import torch - - -def scatter(input: Union[List, torch.Tensor], devices: List) -> List: - """scatter copies tensor to MLU directly.""" - if isinstance(input, list): - outputs = [scatter(_input, devices) for _input in input] - return outputs - elif isinstance(input, torch.Tensor): - output = input.contiguous() - return output.to('mlu') if devices != [-1] else output - else: - raise Exception(f'Unknown type {type(input)}.') - - -class Scatter: - - @staticmethod - def forward(target_mlus, input): - outputs = scatter(input, target_mlus) - return tuple(outputs) if isinstance(outputs, list) else (outputs, ) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/data_parallel.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/data_parallel.py deleted file mode 100644 index ebe14c0a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/data_parallel.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
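`build_from_cfg_with_wrapper` above behaves like an ordinary registry build, except that the resolved class is passed through `wrapper_func` before instantiation, which is how the IPU-specific LR and optimizer hook wrappers get applied. A small self-contained sketch of the same idea using a plain dict registry (all names below are illustrative, not mmcv's API):

```python
REGISTRY = {}


def register(cls):
    REGISTRY[cls.__name__] = cls
    return cls


@register
class StepLrHook:
    def __init__(self, step=10):
        self.step = step


def wrap_for_ipu(hook_cls):
    """Subclass the hook so that every LR update is also pushed to the device."""
    class IPUHook(hook_cls):
        def set_lr(self, lr):
            base_set = getattr(super(), 'set_lr', lambda _lr: None)
            base_set(lr)
            print('uploading new optimizer state to the device')  # stand-in for setOptimizer
    IPUHook.__name__ = 'IPU' + hook_cls.__name__
    return IPUHook


def build_with_wrapper(cfg, registry, wrapper_func=None):
    """Resolve cfg['type'], optionally wrap the class, then instantiate it."""
    args = dict(cfg)
    cls = registry[args.pop('type')]
    if wrapper_func is not None:
        cls = wrapper_func(cls)
    return cls(**args)


hook = build_with_wrapper(dict(type='StepLrHook', step=5), REGISTRY, wrap_for_ipu)
print(type(hook).__name__, hook.step)   # IPUStepLrHook 5
hook.set_lr(0.1)                        # triggers the device upload stand-in
```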
- -import torch - -from mmcv.parallel import MMDataParallel -from .scatter_gather import scatter_kwargs - - -class MLUDataParallel(MMDataParallel): - """The MLUDataParallel module that supports DataContainer. - - MLUDataParallel is a class inherited from MMDataParall, which supports - MLU training and inference only. - - The main differences with MMDataParallel: - - - It only supports single-card of MLU, and only use first card to - run training and inference. - - - It uses direct host-to-device copy instead of stream-background - scatter. - - .. warning:: - MLUDataParallel only supports single MLU training, if you need to - train with multiple MLUs, please use MLUDistributedDataParallel - instead. If you have multiple MLUs, you can set the environment - variable ``MLU_VISIBLE_DEVICES=0`` (or any other card number(s)) - to specify the running device. - - Args: - module (:class:`nn.Module`): Module to be encapsulated. - dim (int): Dimension used to scatter the data. Defaults to 0. - """ - - def __init__(self, *args, dim=0, **kwargs): - super().__init__(*args, dim=dim, **kwargs) - self.device_ids = [0] - self.src_device_obj = torch.device('mlu:0') - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/distributed.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/distributed.py deleted file mode 100644 index 3768c754..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/distributed.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -from mmcv.parallel import MMDistributedDataParallel -from .scatter_gather import scatter_kwargs - - -class MLUDistributedDataParallel(MMDistributedDataParallel): - """The DDP module supports DataContainer. - - MLUDDP has one difference from MMDDP which moves data to MLU with coping - instead of scattering. - """ - - def to_kwargs(self, inputs, kwargs, device_id): - # Use `self.to_kwargs` instead of `self.scatter` in pytorch1.8 - # to move all tensors to device_id - return scatter_kwargs(inputs, kwargs, [device_id], dim=self.dim) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/scatter_gather.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/scatter_gather.py deleted file mode 100644 index 0b0c9b96..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/device/mlu/scatter_gather.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmcv.parallel.data_container import DataContainer -from ._functions import Scatter - - -def scatter(inputs, target_mlus, dim=0): - """Scatter inputs to target mlu. - - The only difference from original :func:`scatter` is to add support for - :type:`~mmcv.parallel.DataContainer`. 
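`MLUDataParallel` above pins execution to the first MLU and replaces CUDA's stream-based scatter with a direct host-to-device copy, while `MLUDistributedDataParallel` does the same for multi-card runs. A heavily hedged usage sketch; it assumes a Cambricon MLU-enabled PyTorch build, and `build_detector(cfg)`, `data_batch`, `optimizer` and `local_rank` are placeholders:

```python
# usage sketch, assuming an MLU-enabled torch build (otherwise 'mlu' is not a valid device)
from mmcv.device.mlu import MLUDataParallel, MLUDistributedDataParallel

# single card: always runs on mlu:0, data is copied host -> device instead of scattered
# model = MLUDataParallel(build_detector(cfg).to('mlu'))
# outputs = model.train_step(data_batch, optimizer)

# multiple cards: standard DDP wrapper with scatter replaced by the same direct copy
# model = MLUDistributedDataParallel(build_detector(cfg).to('mlu'),
#                                    device_ids=[local_rank])
```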
- """ - - def scatter_map(obj): - if isinstance(obj, torch.Tensor): - if target_mlus != [-1]: - obj = obj.to('mlu') - return [obj] - else: - # for CPU inference we use self-implemented scatter - return Scatter.forward(target_mlus, obj) - if isinstance(obj, DataContainer): - if obj.cpu_only: - return obj.data - else: - return Scatter.forward(target_mlus, obj.data) - if isinstance(obj, tuple) and len(obj) > 0: - return list(zip(*map(scatter_map, obj))) - if isinstance(obj, list) and len(obj) > 0: - out = list(map(list, zip(*map(scatter_map, obj)))) - return out - if isinstance(obj, dict) and len(obj) > 0: - out = list(map(type(obj), zip(*map(scatter_map, obj.items())))) - return out - return [obj for targets in target_mlus] - - # After scatter_map is called, a scatter_map cell will exist. This cell - # has a reference to the actual function scatter_map, which has references - # to a closure that has a reference to the scatter_map cell (because the - # fn is recursive). To avoid this reference cycle, we set the function to - # None, clearing the cell - try: - return scatter_map(inputs) - finally: - scatter_map = None - - -def scatter_kwargs(inputs, kwargs, target_mlus, dim=0): - """Scatter with support for kwargs dictionary.""" - inputs = scatter(inputs, target_mlus, dim) if inputs else [] - kwargs = scatter(kwargs, target_mlus, dim) if kwargs else [] - if len(inputs) < len(kwargs): - inputs.extend([() for _ in range(len(kwargs) - len(inputs))]) - elif len(kwargs) < len(inputs): - kwargs.extend([{} for _ in range(len(inputs) - len(kwargs))]) - inputs = tuple(inputs) - kwargs = tuple(kwargs) - return inputs, kwargs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/engine/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/engine/__init__.py deleted file mode 100644 index 3193b7f6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/engine/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .test import (collect_results_cpu, collect_results_gpu, multi_gpu_test, - single_gpu_test) - -__all__ = [ - 'collect_results_cpu', 'collect_results_gpu', 'multi_gpu_test', - 'single_gpu_test' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/engine/test.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/engine/test.py deleted file mode 100644 index 83546cae..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/engine/test.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import pickle -import shutil -import tempfile -import time -from typing import Optional - -import torch -import torch.distributed as dist -import torch.nn as nn -from torch.utils.data import DataLoader - -import mmcv -from mmcv.runner import get_dist_info - - -def single_gpu_test(model: nn.Module, data_loader: DataLoader) -> list: - """Test model with a single gpu. - - This method tests model with a single gpu and displays test progress bar. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - - Returns: - list: The prediction results. 
- """ - model.eval() - results = [] - dataset = data_loader.dataset - prog_bar = mmcv.ProgressBar(len(dataset)) - for data in data_loader: - with torch.no_grad(): - result = model(return_loss=False, **data) - results.extend(result) - - # Assume result has the same length of batch_size - # refer to https://github.com/open-mmlab/mmcv/issues/985 - batch_size = len(result) - for _ in range(batch_size): - prog_bar.update() - return results - - -def multi_gpu_test(model: nn.Module, - data_loader: DataLoader, - tmpdir: Optional[str] = None, - gpu_collect: bool = False) -> Optional[list]: - """Test model with multiple gpus. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting - ``gpu_collect=True``, it encodes results to gpu tensors and use gpu - communication for results collection. On cpu mode it saves the results on - different gpus to ``tmpdir`` and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - - Returns: - list: The prediction results. - """ - model.eval() - results = [] - dataset = data_loader.dataset - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - time.sleep(2) # This line can prevent deadlock problem in some cases. - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, **data) - results.extend(result) - - if rank == 0: - batch_size = len(result) - batch_size_all = batch_size * world_size - if batch_size_all + prog_bar.completed > len(dataset): - batch_size_all = len(dataset) - prog_bar.completed - for _ in range(batch_size_all): - prog_bar.update() - - # collect results from all ranks - if gpu_collect: - result_from_ranks = collect_results_gpu(results, len(dataset)) - else: - result_from_ranks = collect_results_cpu(results, len(dataset), tmpdir) - return result_from_ranks - - -def collect_results_cpu(result_part: list, - size: int, - tmpdir: Optional[str] = None) -> Optional[list]: - """Collect results under cpu mode. - - On cpu mode, this function will save the results on different gpus to - ``tmpdir`` and collect them by the rank 0 worker. - - Args: - result_part (list): Result list containing result parts - to be collected. - size (int): Size of the results, commonly equal to length of - the results. - tmpdir (str | None): temporal directory for collected results to - store. If set to None, it will create a random temporal directory - for it. - - Returns: - list: The collected results. 
- """ - rank, world_size = get_dist_info() - # create a tmp dir if it is not specified - if tmpdir is None: - MAX_LEN = 512 - # 32 is whitespace - dir_tensor = torch.full((MAX_LEN, ), - 32, - dtype=torch.uint8, - device='cuda') - if rank == 0: - mmcv.mkdir_or_exist('.dist_test') - tmpdir = tempfile.mkdtemp(dir='.dist_test') - tmpdir = torch.tensor( - bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda') - dir_tensor[:len(tmpdir)] = tmpdir - dist.broadcast(dir_tensor, 0) - tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip() - else: - mmcv.mkdir_or_exist(tmpdir) - # dump the part result to the dir - part_file = osp.join(tmpdir, f'part_{rank}.pkl') # type: ignore - mmcv.dump(result_part, part_file) - dist.barrier() - # collect all parts - if rank != 0: - return None - else: - # load results of all parts from tmp dir - part_list = [] - for i in range(world_size): - part_file = osp.join(tmpdir, f'part_{i}.pkl') # type: ignore - part_result = mmcv.load(part_file) - # When data is severely insufficient, an empty part_result - # on a certain gpu could makes the overall outputs empty. - if part_result: - part_list.append(part_result) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - # remove tmp dir - shutil.rmtree(tmpdir) # type: ignore - return ordered_results - - -def collect_results_gpu(result_part: list, size: int) -> Optional[list]: - """Collect results under gpu mode. - - On gpu mode, this function will encode results to gpu tensors and use gpu - communication for results collection. - - Args: - result_part (list): Result list containing result parts - to be collected. - size (int): Size of the results, commonly equal to length of - the results. - - Returns: - list: The collected results. - """ - rank, world_size = get_dist_info() - # dump result part to tensor with pickle - part_tensor = torch.tensor( - bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda') - # gather all result part tensor shape - shape_tensor = torch.tensor(part_tensor.shape, device='cuda') - shape_list = [shape_tensor.clone() for _ in range(world_size)] - dist.all_gather(shape_list, shape_tensor) - # padding result part tensor to max length - shape_max = torch.tensor(shape_list).max() - part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda') - part_send[:shape_tensor[0]] = part_tensor - part_recv_list = [ - part_tensor.new_zeros(shape_max) for _ in range(world_size) - ] - # gather all result part - dist.all_gather(part_recv_list, part_send) - - if rank == 0: - part_list = [] - for recv, shape in zip(part_recv_list, shape_list): - part_result = pickle.loads(recv[:shape[0]].cpu().numpy().tobytes()) - # When data is severely insufficient, an empty part_result - # on a certain gpu could makes the overall outputs empty. 
- if part_result: - part_list.append(part_result) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - return ordered_results - else: - return None diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/__init__.py deleted file mode 100644 index 2051b85f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .file_client import BaseStorageBackend, FileClient -from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler -from .io import dump, load, register_handler -from .parse import dict_from_file, list_from_file - -__all__ = [ - 'BaseStorageBackend', 'FileClient', 'load', 'dump', 'register_handler', - 'BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler', - 'list_from_file', 'dict_from_file' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/file_client.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/file_client.py deleted file mode 100644 index ee7c3164..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/file_client.py +++ /dev/null @@ -1,1173 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import os -import os.path as osp -import re -import tempfile -import warnings -from abc import ABCMeta, abstractmethod -from contextlib import contextmanager -from pathlib import Path -from typing import Any, Generator, Iterator, Optional, Tuple, Union -from urllib.request import urlopen - -import mmcv -from mmcv.utils.misc import has_method -from mmcv.utils.path import is_filepath - - -class BaseStorageBackend(metaclass=ABCMeta): - """Abstract class of storage backends. - - All backends need to implement two apis: ``get()`` and ``get_text()``. - ``get()`` reads the file as a byte stream and ``get_text()`` reads the file - as texts. - """ - - # a flag to indicate whether the backend can create a symlink for a file - _allow_symlink = False - - @property - def name(self): - return self.__class__.__name__ - - @property - def allow_symlink(self): - return self._allow_symlink - - @abstractmethod - def get(self, filepath): - pass - - @abstractmethod - def get_text(self, filepath): - pass - - -class CephBackend(BaseStorageBackend): - """Ceph storage backend (for internal use). - - Args: - path_mapping (dict|None): path mapping dict from local path to Petrel - path. When ``path_mapping={'src': 'dst'}``, ``src`` in ``filepath`` - will be replaced by ``dst``. Default: None. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. 
- """ - - def __init__(self, path_mapping=None): - try: - import ceph - except ImportError: - raise ImportError('Please install ceph to enable CephBackend.') - - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead', - DeprecationWarning) - self._client = ceph.S3Client() - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def get(self, filepath): - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class PetrelBackend(BaseStorageBackend): - """Petrel storage backend (for internal use). - - PetrelBackend supports reading and writing data to multiple clusters. - If the file path contains the cluster name, PetrelBackend will read data - from specified cluster or write data to it. Otherwise, PetrelBackend will - access the default cluster. - - Args: - path_mapping (dict, optional): Path mapping dict from local path to - Petrel path. When ``path_mapping={'src': 'dst'}``, ``src`` in - ``filepath`` will be replaced by ``dst``. Default: None. - enable_mc (bool, optional): Whether to enable memcached support. - Default: True. - - Examples: - >>> filepath1 = 's3://path/of/file' - >>> filepath2 = 'cluster-name:s3://path/of/file' - >>> client = PetrelBackend() - >>> client.get(filepath1) # get data from default cluster - >>> client.get(filepath2) # get data from 'cluster-name' cluster - """ - - def __init__(self, - path_mapping: Optional[dict] = None, - enable_mc: bool = True): - try: - from petrel_client import client - except ImportError: - raise ImportError('Please install petrel_client to enable ' - 'PetrelBackend.') - - self._client = client.Client(enable_mc=enable_mc) - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def _map_path(self, filepath: Union[str, Path]) -> str: - """Map ``filepath`` to a string path whose prefix will be replaced by - :attr:`self.path_mapping`. - - Args: - filepath (str): Path to be mapped. - """ - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - return filepath - - def _format_path(self, filepath: str) -> str: - """Convert a ``filepath`` to standard format of petrel oss. - - If the ``filepath`` is concatenated by ``os.path.join``, in a Windows - environment, the ``filepath`` will be the format of - 's3://bucket_name\\image.jpg'. By invoking :meth:`_format_path`, the - above ``filepath`` will be converted to 's3://bucket_name/image.jpg'. - - Args: - filepath (str): Path to be formatted. - """ - return re.sub(r'\\+', '/', filepath) - - def get(self, filepath: Union[str, Path]) -> memoryview: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - memoryview: A memory view of expected bytes object to avoid - copying. The memoryview object can be converted to bytes by - ``value_buf.tobytes()``. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. 
- - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return str(self.get(filepath), encoding=encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Save data to a given ``filepath``. - - Args: - obj (bytes): Data to be saved. - filepath (str or Path): Path to write data. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.put(filepath, obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Save data to a given ``filepath``. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to encode the ``obj``. - Default: 'utf-8'. - """ - self.put(bytes(obj, encoding=encoding), filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. - """ - if not has_method(self._client, 'delete'): - raise NotImplementedError( - 'Current version of Petrel Python SDK has not supported ' - 'the `delete` method, please use a higher version or dev' - ' branch instead.') - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.delete(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - if not (has_method(self._client, 'contains') - and has_method(self._client, 'isdir')): - raise NotImplementedError( - 'Current version of Petrel Python SDK has not supported ' - 'the `contains` and `isdir` methods, please use a higher' - 'version or dev branch instead.') - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) or self._client.isdir(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - if not has_method(self._client, 'isdir'): - raise NotImplementedError( - 'Current version of Petrel Python SDK has not supported ' - 'the `isdir` method, please use a higher version or dev' - ' branch instead.') - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - if not has_method(self._client, 'contains'): - raise NotImplementedError( - 'Current version of Petrel Python SDK has not supported ' - 'the `contains` method, please use a higher version or ' - 'dev branch instead.') - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Args: - filepath (str or Path): Path to be concatenated. 
- - Returns: - str: The result after concatenation. - """ - filepath = self._format_path(self._map_path(filepath)) - if filepath.endswith('/'): - filepath = filepath[:-1] - formatted_paths = [filepath] - for path in filepaths: - formatted_paths.append(self._format_path(self._map_path(path))) - return '/'.join(formatted_paths) - - @contextmanager - def get_local_path( - self, - filepath: Union[str, - Path]) -> Generator[Union[str, Path], None, None]: - """Download a file from ``filepath`` and return a temporary path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str | Path): Download a file from ``filepath``. - - Examples: - >>> client = PetrelBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('s3://path/of/your/file') as path: - ... # do something here - - Yields: - Iterable[str]: Only yield one temporary path. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - assert self.isfile(filepath) - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - Petrel has no concept of directories but it simulates the directory - hierarchy in the filesystem through public prefixes. In addition, - if the returned path ends with '/', it means the path is a public - prefix which is a logical directory. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - In addition, the returned path of directory will not contains the - suffix '/' which is consistent with other backends. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. 
- """ - if not has_method(self._client, 'list'): - raise NotImplementedError( - 'Current version of Petrel Python SDK has not supported ' - 'the `list` method, please use a higher version or dev' - ' branch instead.') - - dir_path = self._map_path(dir_path) - dir_path = self._format_path(dir_path) - if list_dir and suffix is not None: - raise TypeError( - '`list_dir` should be False when `suffix` is not None') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - # Petrel's simulated directory hierarchy assumes that directory paths - # should end with `/` - if not dir_path.endswith('/'): - dir_path += '/' - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for path in self._client.list(dir_path): - # the `self.isdir` is not used here to determine whether path - # is a directory, because `self.isdir` relies on - # `self._client.list` - if path.endswith('/'): # a directory path - next_dir_path = self.join_path(dir_path, path) - if list_dir: - # get the relative path and exclude the last - # character '/' - rel_dir = next_dir_path[len(root):-1] - yield rel_dir - if recursive: - yield from _list_dir_or_file(next_dir_path, list_dir, - list_file, suffix, - recursive) - else: # a file path - absolute_path = self.join_path(dir_path, path) - rel_path = absolute_path[len(root):] - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class MemcachedBackend(BaseStorageBackend): - """Memcached storage backend. - - Attributes: - server_list_cfg (str): Config file for memcached server list. - client_cfg (str): Config file for memcached client. - sys_path (str | None): Additional path to be appended to `sys.path`. - Default: None. - """ - - def __init__(self, server_list_cfg, client_cfg, sys_path=None): - if sys_path is not None: - import sys - sys.path.append(sys_path) - try: - import mc - except ImportError: - raise ImportError( - 'Please install memcached to enable MemcachedBackend.') - - self.server_list_cfg = server_list_cfg - self.client_cfg = client_cfg - self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg, - self.client_cfg) - # mc.pyvector servers as a point which points to a memory cache - self._mc_buffer = mc.pyvector() - - def get(self, filepath): - filepath = str(filepath) - import mc - self._client.Get(filepath, self._mc_buffer) - value_buf = mc.ConvertBuffer(self._mc_buffer) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class LmdbBackend(BaseStorageBackend): - """Lmdb storage backend. - - Args: - db_path (str): Lmdb database path. - readonly (bool, optional): Lmdb environment parameter. If True, - disallow any write operations. Default: True. - lock (bool, optional): Lmdb environment parameter. If False, when - concurrent access occurs, do not lock the database. Default: False. - readahead (bool, optional): Lmdb environment parameter. If False, - disable the OS filesystem readahead mechanism, which may improve - random read performance when a database is larger than RAM. - Default: False. - - Attributes: - db_path (str): Lmdb database path. 
- """ - - def __init__(self, - db_path, - readonly=True, - lock=False, - readahead=False, - **kwargs): - try: - import lmdb # NOQA - except ImportError: - raise ImportError('Please install lmdb to enable LmdbBackend.') - - self.db_path = str(db_path) - self.readonly = readonly - self.lock = lock - self.readahead = readahead - self.kwargs = kwargs - self._client = None - - def get(self, filepath): - """Get values according to the filepath. - - Args: - filepath (str | obj:`Path`): Here, filepath is the lmdb key. - """ - if self._client is None: - self._client = self._get_client() - - with self._client.begin(write=False) as txn: - value_buf = txn.get(str(filepath).encode('utf-8')) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - def _get_client(self): - import lmdb - - return lmdb.open( - self.db_path, - readonly=self.readonly, - lock=self.lock, - readahead=self.readahead, - **self.kwargs) - - def __del__(self): - self._client.close() - - -class HardDiskBackend(BaseStorageBackend): - """Raw hard disks storage backend.""" - - _allow_symlink = True - - def get(self, filepath: Union[str, Path]) -> bytes: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes: Expected bytes object. - """ - with open(filepath, 'rb') as f: - value_buf = f.read() - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - with open(filepath, encoding=encoding) as f: - value_buf = f.read() - return value_buf - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` will create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. - """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'wb') as f: - f.write(obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` will create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'w', encoding=encoding) as f: - f.write(obj) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. - """ - os.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return osp.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. 
- """ - return osp.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return osp.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return osp.join(filepath, *filepaths) - - @contextmanager - def get_local_path( - self, - filepath: Union[str, - Path]) -> Generator[Union[str, Path], None, None]: - """Only for unified API and do nothing.""" - yield filepath - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - if list_dir and suffix is not None: - raise TypeError('`suffix` should be None when `list_dir` is True') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - elif osp.isdir(entry.path): - if list_dir: - rel_dir = osp.relpath(entry.path, root) - yield rel_dir - if recursive: - yield from _list_dir_or_file(entry.path, list_dir, - list_file, suffix, - recursive) - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class HTTPBackend(BaseStorageBackend): - """HTTP and HTTPS storage bachend.""" - - def get(self, filepath): - value_buf = urlopen(filepath).read() - return value_buf - - def get_text(self, filepath, encoding='utf-8'): - value_buf = urlopen(filepath).read() - return value_buf.decode(encoding) - - @contextmanager - def get_local_path( - self, filepath: str) -> Generator[Union[str, Path], None, None]: - """Download a file from ``filepath``. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str): Download a file from ``filepath``. - - Examples: - >>> client = HTTPBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('http://path/of/your/file') as path: - ... 
# do something here - """ - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - -class FileClient: - """A general file client to access files in different backends. - - The client loads a file or text in a specified backend from its path - and returns it as a binary or text file. There are two ways to choose a - backend, the name of backend and the prefix of path. Although both of them - can be used to choose a storage backend, ``backend`` has a higher priority - that is if they are all set, the storage backend will be chosen by the - backend argument. If they are all `None`, the disk backend will be chosen. - Note that It can also register other backend accessor with a given name, - prefixes, and backend class. In addition, We use the singleton pattern to - avoid repeated object creation. If the arguments are the same, the same - object will be returned. - - Args: - backend (str, optional): The storage backend type. Options are "disk", - "ceph", "memcached", "lmdb", "http" and "petrel". Default: None. - prefix (str, optional): The prefix of the registered storage backend. - Options are "s3", "http", "https". Default: None. - - Examples: - >>> # only set backend - >>> file_client = FileClient(backend='petrel') - >>> # only set prefix - >>> file_client = FileClient(prefix='s3') - >>> # set both backend and prefix but use backend to choose client - >>> file_client = FileClient(backend='petrel', prefix='s3') - >>> # if the arguments are the same, the same object is returned - >>> file_client1 = FileClient(backend='petrel') - >>> file_client1 is file_client - True - - Attributes: - client (:obj:`BaseStorageBackend`): The backend object. - """ - - _backends = { - 'disk': HardDiskBackend, - 'ceph': CephBackend, - 'memcached': MemcachedBackend, - 'lmdb': LmdbBackend, - 'petrel': PetrelBackend, - 'http': HTTPBackend, - } - - _prefix_to_backends = { - 's3': PetrelBackend, - 'http': HTTPBackend, - 'https': HTTPBackend, - } - - _instances: dict = {} - - client: Any - - def __new__(cls, backend=None, prefix=None, **kwargs): - if backend is None and prefix is None: - backend = 'disk' - if backend is not None and backend not in cls._backends: - raise ValueError( - f'Backend {backend} is not supported. Currently supported ones' - f' are {list(cls._backends.keys())}') - if prefix is not None and prefix not in cls._prefix_to_backends: - raise ValueError( - f'prefix {prefix} is not supported. Currently supported ones ' - f'are {list(cls._prefix_to_backends.keys())}') - - # concatenate the arguments to a unique key for determining whether - # objects with the same arguments were created - arg_key = f'{backend}:{prefix}' - for key, value in kwargs.items(): - arg_key += f':{key}:{value}' - - if arg_key in cls._instances: - _instance = cls._instances[arg_key] - else: - # create a new object and put it to _instance - _instance = super().__new__(cls) - if backend is not None: - _instance.client = cls._backends[backend](**kwargs) - else: - _instance.client = cls._prefix_to_backends[prefix](**kwargs) - - cls._instances[arg_key] = _instance - - return _instance - - @property - def name(self): - return self.client.name - - @property - def allow_symlink(self): - return self.client.allow_symlink - - @staticmethod - def parse_uri_prefix(uri: Union[str, Path]) -> Optional[str]: - """Parse the prefix of a uri. - - Args: - uri (str | Path): Uri to be parsed that contains the file prefix. 
- - Examples: - >>> FileClient.parse_uri_prefix('s3://path/of/your/file') - 's3' - - Returns: - str | None: Return the prefix of uri if the uri contains '://' else - ``None``. - """ - assert is_filepath(uri) - uri = str(uri) - if '://' not in uri: - return None - else: - prefix, _ = uri.split('://') - # In the case of PetrelBackend, the prefix may contains the cluster - # name like clusterName:s3 - if ':' in prefix: - _, prefix = prefix.split(':') - return prefix - - @classmethod - def infer_client(cls, - file_client_args: Optional[dict] = None, - uri: Optional[Union[str, Path]] = None) -> 'FileClient': - """Infer a suitable file client based on the URI and arguments. - - Args: - file_client_args (dict, optional): Arguments to instantiate a - FileClient. Default: None. - uri (str | Path, optional): Uri to be parsed that contains the file - prefix. Default: None. - - Examples: - >>> uri = 's3://path/of/your/file' - >>> file_client = FileClient.infer_client(uri=uri) - >>> file_client_args = {'backend': 'petrel'} - >>> file_client = FileClient.infer_client(file_client_args) - - Returns: - FileClient: Instantiated FileClient object. - """ - assert file_client_args is not None or uri is not None - if file_client_args is None: - file_prefix = cls.parse_uri_prefix(uri) # type: ignore - return cls(prefix=file_prefix) - else: - return cls(**file_client_args) - - @classmethod - def _register_backend(cls, name, backend, force=False, prefixes=None): - if not isinstance(name, str): - raise TypeError('the backend name should be a string, ' - f'but got {type(name)}') - if not inspect.isclass(backend): - raise TypeError( - f'backend should be a class but got {type(backend)}') - if not issubclass(backend, BaseStorageBackend): - raise TypeError( - f'backend {backend} is not a subclass of BaseStorageBackend') - if not force and name in cls._backends: - raise KeyError( - f'{name} is already registered as a storage backend, ' - 'add "force=True" if you want to override it') - - if name in cls._backends and force: - for arg_key, instance in list(cls._instances.items()): - if isinstance(instance.client, cls._backends[name]): - cls._instances.pop(arg_key) - cls._backends[name] = backend - - if prefixes is not None: - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if prefix not in cls._prefix_to_backends: - cls._prefix_to_backends[prefix] = backend - elif (prefix in cls._prefix_to_backends) and force: - overridden_backend = cls._prefix_to_backends[prefix] - if isinstance(overridden_backend, list): - overridden_backend = tuple(overridden_backend) - for arg_key, instance in list(cls._instances.items()): - if isinstance(instance.client, overridden_backend): - cls._instances.pop(arg_key) - cls._prefix_to_backends[prefix] = backend - else: - raise KeyError( - f'{prefix} is already registered as a storage backend,' - ' add "force=True" if you want to override it') - - @classmethod - def register_backend(cls, name, backend=None, force=False, prefixes=None): - """Register a backend to FileClient. - - This method can be used as a normal class method or a decorator. - - .. code-block:: python - - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - FileClient.register_backend('new', NewBackend) - - or - - .. 
code-block:: python - - @FileClient.register_backend('new') - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - Args: - name (str): The name of the registered backend. - backend (class, optional): The backend class to be registered, - which must be a subclass of :class:`BaseStorageBackend`. - When this method is used as a decorator, backend is None. - Defaults to None. - force (bool, optional): Whether to override the backend if the name - has already been registered. Defaults to False. - prefixes (str or list[str] or tuple[str], optional): The prefixes - of the registered storage backend. Default: None. - `New in version 1.3.15.` - """ - if backend is not None: - cls._register_backend( - name, backend, force=force, prefixes=prefixes) - return - - def _register(backend_cls): - cls._register_backend( - name, backend_cls, force=force, prefixes=prefixes) - return backend_cls - - return _register - - def get(self, filepath: Union[str, Path]) -> Union[bytes, memoryview]: - """Read data from a given ``filepath`` with 'rb' mode. - - Note: - There are two types of return values for ``get``, one is ``bytes`` - and the other is ``memoryview``. The advantage of using memoryview - is that you can avoid copying, and if you want to convert it to - ``bytes``, you can use ``.tobytes()``. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes | memoryview: Expected bytes object or a memory view of the - bytes object. - """ - return self.client.get(filepath) - - def get_text(self, filepath: Union[str, Path], encoding='utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return self.client.get_text(filepath, encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` should create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. - """ - self.client.put(obj, filepath) - - def put_text(self, obj: str, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` should create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str, optional): The encoding format used to open the - `filepath`. Default: 'utf-8'. - """ - self.client.put_text(obj, filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str, Path): Path to be removed. - """ - self.client.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return self.client.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. 
- """ - return self.client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return self.client.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return self.client.join_path(filepath, *filepaths) - - @contextmanager - def get_local_path( - self, - filepath: Union[str, - Path]) -> Generator[Union[str, Path], None, None]: - """Download data from ``filepath`` and write the data to local path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Note: - If the ``filepath`` is a local path, just return itself. - - .. warning:: - ``get_local_path`` is an experimental interface that may change in - the future. - - Args: - filepath (str or Path): Path to be read data. - - Examples: - >>> file_client = FileClient(prefix='s3') - >>> with file_client.get_local_path('s3://bucket/abc.jpg') as path: - ... # do something here - - Yields: - Iterable[str]: Only yield one path. - """ - with self.client.get_local_path(str(filepath)) as local_path: - yield local_path - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - yield from self.client.list_dir_or_file(dir_path, list_dir, list_file, - suffix, recursive) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/__init__.py deleted file mode 100644 index aa24d919..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base import BaseFileHandler -from .json_handler import JsonHandler -from .pickle_handler import PickleHandler -from .yaml_handler import YamlHandler - -__all__ = ['BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/base.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/base.py deleted file mode 100644 index 0c9cc15b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/base.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BaseFileHandler(metaclass=ABCMeta): - # `str_like` is a flag to indicate whether the type of file object is - # str-like object or bytes-like object. Pickle only processes bytes-like - # objects but json only processes str-like object. If it is str-like - # object, `StringIO` will be used to process the buffer. - str_like = True - - @abstractmethod - def load_from_fileobj(self, file, **kwargs): - pass - - @abstractmethod - def dump_to_fileobj(self, obj, file, **kwargs): - pass - - @abstractmethod - def dump_to_str(self, obj, **kwargs): - pass - - def load_from_path(self, filepath: str, mode: str = 'r', **kwargs): - with open(filepath, mode) as f: - return self.load_from_fileobj(f, **kwargs) - - def dump_to_path(self, obj, filepath: str, mode: str = 'w', **kwargs): - with open(filepath, mode) as f: - self.dump_to_fileobj(obj, f, **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/json_handler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/json_handler.py deleted file mode 100644 index 18d4f15f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/json_handler.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json - -import numpy as np - -from .base import BaseFileHandler - - -def set_default(obj): - """Set default json values for non-serializable values. - - It helps convert ``set``, ``range`` and ``np.ndarray`` data types to list. - It also converts ``np.generic`` (including ``np.int32``, ``np.float32``, - etc.) into plain numbers of plain python built-in types. - """ - if isinstance(obj, (set, range)): - return list(obj) - elif isinstance(obj, np.ndarray): - return obj.tolist() - elif isinstance(obj, np.generic): - return obj.item() - raise TypeError(f'{type(obj)} is unsupported for json dump') - - -class JsonHandler(BaseFileHandler): - - def load_from_fileobj(self, file): - return json.load(file) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('default', set_default) - json.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('default', set_default) - return json.dumps(obj, **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/pickle_handler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/pickle_handler.py deleted file mode 100644 index 073856fd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/pickle_handler.py +++ /dev/null @@ -1,26 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import pickle - -from .base import BaseFileHandler - - -class PickleHandler(BaseFileHandler): - - str_like = False - - def load_from_fileobj(self, file, **kwargs): - return pickle.load(file, **kwargs) - - def load_from_path(self, filepath, **kwargs): - return super().load_from_path(filepath, mode='rb', **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('protocol', 2) - return pickle.dumps(obj, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('protocol', 2) - pickle.dump(obj, file, **kwargs) - - def dump_to_path(self, obj, filepath, **kwargs): - super().dump_to_path(obj, filepath, mode='wb', **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/yaml_handler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/yaml_handler.py deleted file mode 100644 index 1c1b0779..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/handlers/yaml_handler.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import yaml - -try: - from yaml import CDumper as Dumper - from yaml import CLoader as Loader -except ImportError: - from yaml import Loader, Dumper # type: ignore - -from .base import BaseFileHandler # isort:skip - - -class YamlHandler(BaseFileHandler): - - def load_from_fileobj(self, file, **kwargs): - kwargs.setdefault('Loader', Loader) - return yaml.load(file, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('Dumper', Dumper) - yaml.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('Dumper', Dumper) - return yaml.dump(obj, **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/io.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/io.py deleted file mode 100644 index 91192103..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/io.py +++ /dev/null @@ -1,163 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from io import BytesIO, StringIO -from pathlib import Path -from typing import Any, Callable, Dict, List, Optional, TextIO, Union - -from ..utils import is_list_of -from .file_client import FileClient -from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler - -FileLikeObject = Union[TextIO, StringIO, BytesIO] - -file_handlers = { - 'json': JsonHandler(), - 'yaml': YamlHandler(), - 'yml': YamlHandler(), - 'pickle': PickleHandler(), - 'pkl': PickleHandler() -} - - -def load(file: Union[str, Path, FileLikeObject], - file_format: Optional[str] = None, - file_client_args: Optional[Dict] = None, - **kwargs): - """Load data from json/yaml/pickle files. - - This method provides a unified api for loading data from serialized files. - - Note: - In v1.3.16 and later, ``load`` supports loading data from serialized - files those can be storaged in different backends. - - Args: - file (str or :obj:`Path` or file-like object): Filename or a file-like - object. - file_format (str, optional): If not specified, the file format will be - inferred from the file extension, otherwise use the specified one. - Currently supported formats include "json", "yaml/yml" and - "pickle/pkl". - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. 
- - Examples: - >>> load('/path/of/your/file') # file is storaged in disk - >>> load('https://path/of/your/file') # file is storaged in Internet - >>> load('s3://path/of/your/file') # file is storaged in petrel - - Returns: - The content from the file. - """ - if isinstance(file, Path): - file = str(file) - if file_format is None and isinstance(file, str): - file_format = file.split('.')[-1] - if file_format not in file_handlers: - raise TypeError(f'Unsupported format: {file_format}') - - handler = file_handlers[file_format] - f: FileLikeObject - if isinstance(file, str): - file_client = FileClient.infer_client(file_client_args, file) - if handler.str_like: - with StringIO(file_client.get_text(file)) as f: - obj = handler.load_from_fileobj(f, **kwargs) - else: - with BytesIO(file_client.get(file)) as f: - obj = handler.load_from_fileobj(f, **kwargs) - elif hasattr(file, 'read'): - obj = handler.load_from_fileobj(file, **kwargs) - else: - raise TypeError('"file" must be a filepath str or a file-object') - return obj - - -def dump(obj: Any, - file: Optional[Union[str, Path, FileLikeObject]] = None, - file_format: Optional[str] = None, - file_client_args: Optional[Dict] = None, - **kwargs): - """Dump data to json/yaml/pickle strings or files. - - This method provides a unified api for dumping data as strings or to files, - and also supports custom arguments for each file format. - - Note: - In v1.3.16 and later, ``dump`` supports dumping data as strings or to - files which is saved to different backends. - - Args: - obj (any): The python object to be dumped. - file (str or :obj:`Path` or file-like object, optional): If not - specified, then the object is dumped to a str, otherwise to a file - specified by the filename or file-like object. - file_format (str, optional): Same as :func:`load`. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> dump('hello world', '/path/of/your/file') # disk - >>> dump('hello world', 's3://path/of/your/file') # ceph or petrel - - Returns: - bool: True for success, False otherwise. - """ - if isinstance(file, Path): - file = str(file) - if file_format is None: - if isinstance(file, str): - file_format = file.split('.')[-1] - elif file is None: - raise ValueError( - 'file_format must be specified since file is None') - if file_format not in file_handlers: - raise TypeError(f'Unsupported format: {file_format}') - f: FileLikeObject - handler = file_handlers[file_format] - if file is None: - return handler.dump_to_str(obj, **kwargs) - elif isinstance(file, str): - file_client = FileClient.infer_client(file_client_args, file) - if handler.str_like: - with StringIO() as f: - handler.dump_to_fileobj(obj, f, **kwargs) - file_client.put_text(f.getvalue(), file) - else: - with BytesIO() as f: - handler.dump_to_fileobj(obj, f, **kwargs) - file_client.put(f.getvalue(), file) - elif hasattr(file, 'write'): - handler.dump_to_fileobj(obj, file, **kwargs) - else: - raise TypeError('"file" must be a filename str or a file-object') - - -def _register_handler(handler: BaseFileHandler, - file_formats: Union[str, List[str]]) -> None: - """Register a handler for some file extensions. - - Args: - handler (:obj:`BaseFileHandler`): Handler to be registered. - file_formats (str or list[str]): File formats to be handled by this - handler. 
- """ - if not isinstance(handler, BaseFileHandler): - raise TypeError( - f'handler must be a child of BaseFileHandler, not {type(handler)}') - if isinstance(file_formats, str): - file_formats = [file_formats] - if not is_list_of(file_formats, str): - raise TypeError('file_formats must be a str or a list of str') - for ext in file_formats: - file_handlers[ext] = handler - - -def register_handler(file_formats: Union[str, list], **kwargs) -> Callable: - - def wrap(cls): - _register_handler(cls(**kwargs), file_formats) - return cls - - return wrap diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/parse.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/parse.py deleted file mode 100644 index f28e5911..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/fileio/parse.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -from io import StringIO -from pathlib import Path -from typing import Dict, List, Optional, Union - -from .file_client import FileClient - - -def list_from_file(filename: Union[str, Path], - prefix: str = '', - offset: int = 0, - max_num: int = 0, - encoding: str = 'utf-8', - file_client_args: Optional[Dict] = None) -> List: - """Load a text file and parse the content as a list of strings. - - Note: - In v1.3.16 and later, ``list_from_file`` supports loading a text file - which can be storaged in different backends and parsing the content as - a list for strings. - - Args: - filename (str): Filename. - prefix (str): The prefix to be inserted to the beginning of each item. - offset (int): The offset of lines. - max_num (int): The maximum number of lines to be read, - zeros and negatives mean no limitation. - encoding (str): Encoding used to open the file. Default utf-8. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> list_from_file('/path/of/your/file') # disk - ['hello', 'world'] - >>> list_from_file('s3://path/of/your/file') # ceph or petrel - ['hello', 'world'] - - Returns: - list[str]: A list of strings. - """ - cnt = 0 - item_list = [] - file_client = FileClient.infer_client(file_client_args, filename) - with StringIO(file_client.get_text(filename, encoding)) as f: - for _ in range(offset): - f.readline() - for line in f: - if 0 < max_num <= cnt: - break - item_list.append(prefix + line.rstrip('\n\r')) - cnt += 1 - return item_list - - -def dict_from_file(filename: Union[str, Path], - key_type: type = str, - encoding: str = 'utf-8', - file_client_args: Optional[Dict] = None) -> Dict: - """Load a text file and parse the content as a dict. - - Each line of the text file will be two or more columns split by - whitespaces or tabs. The first column will be parsed as dict keys, and - the following columns will be parsed as dict values. - - Note: - In v1.3.16 and later, ``dict_from_file`` supports loading a text file - which can be storaged in different backends and parsing the content as - a dict. - - Args: - filename(str): Filename. - key_type(type): Type of the dict keys. str is user by default and - type conversion will be performed if specified. - encoding (str): Encoding used to open the file. Default utf-8. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. 
- - Examples: - >>> dict_from_file('/path/of/your/file') # disk - {'key1': 'value1', 'key2': 'value2'} - >>> dict_from_file('s3://path/of/your/file') # ceph or petrel - {'key1': 'value1', 'key2': 'value2'} - - Returns: - dict: The parsed contents. - """ - mapping = {} - file_client = FileClient.infer_client(file_client_args, filename) - with StringIO(file_client.get_text(filename, encoding)) as f: - for line in f: - items = line.rstrip('\n').split() - assert len(items) >= 2 - key = key_type(items[0]) - val = items[1:] if len(items) > 2 else items[1] - mapping[key] = val - return mapping diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/__init__.py deleted file mode 100644 index 92ecec40..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr, - gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert, - rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb) -from .geometric import (cutout, imcrop, imflip, imflip_, impad, - impad_to_multiple, imrescale, imresize, imresize_like, - imresize_to_multiple, imrotate, imshear, imtranslate, - rescale_size) -from .io import imfrombytes, imread, imwrite, supported_backends, use_backend -from .misc import tensor2imgs -from .photometric import (adjust_brightness, adjust_color, adjust_contrast, - adjust_hue, adjust_lighting, adjust_sharpness, - auto_contrast, clahe, imdenormalize, imequalize, - iminvert, imnormalize, imnormalize_, lut_transform, - posterize, solarize) - -__all__ = [ - 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb', - 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale', - 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size', - 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate', - 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend', - 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize', - 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr', - 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize', - 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe', - 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting', - 'adjust_hue' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/colorspace.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/colorspace.py deleted file mode 100644 index 08f99524..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/colorspace.py +++ /dev/null @@ -1,309 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Callable, Union - -import cv2 -import numpy as np - - -def imconvert(img: np.ndarray, src: str, dst: str) -> np.ndarray: - """Convert an image from the src colorspace to dst colorspace. - - Args: - img (ndarray): The input image. - src (str): The source colorspace, e.g., 'rgb', 'hsv'. - dst (str): The destination colorspace, e.g., 'rgb', 'hsv'. - - Returns: - ndarray: The converted image. - """ - code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}') - out_img = cv2.cvtColor(img, code) - return out_img - - -def bgr2gray(img: np.ndarray, keepdim: bool = False) -> np.ndarray: - """Convert a BGR image to grayscale image. - - Args: - img (ndarray): The input image. 
- keepdim (bool): If False (by default), then return the grayscale image - with 2 dims, otherwise 3 dims. - - Returns: - ndarray: The converted grayscale image. - """ - out_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - if keepdim: - out_img = out_img[..., None] - return out_img - - -def rgb2gray(img: np.ndarray, keepdim: bool = False) -> np.ndarray: - """Convert a RGB image to grayscale image. - - Args: - img (ndarray): The input image. - keepdim (bool): If False (by default), then return the grayscale image - with 2 dims, otherwise 3 dims. - - Returns: - ndarray: The converted grayscale image. - """ - out_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) - if keepdim: - out_img = out_img[..., None] - return out_img - - -def gray2bgr(img: np.ndarray) -> np.ndarray: - """Convert a grayscale image to BGR image. - - Args: - img (ndarray): The input image. - - Returns: - ndarray: The converted BGR image. - """ - img = img[..., None] if img.ndim == 2 else img - out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - return out_img - - -def gray2rgb(img: np.ndarray) -> np.ndarray: - """Convert a grayscale image to RGB image. - - Args: - img (ndarray): The input image. - - Returns: - ndarray: The converted RGB image. - """ - img = img[..., None] if img.ndim == 2 else img - out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) - return out_img - - -def _convert_input_type_range(img: np.ndarray) -> np.ndarray: - """Convert the type and range of the input image. - - It converts the input image to np.float32 type and range of [0, 1]. - It is mainly used for pre-processing the input image in colorspace - conversion functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - (ndarray): The converted image with type of np.float32 and range of - [0, 1]. - """ - img_type = img.dtype - img = img.astype(np.float32) - if img_type == np.float32: - pass - elif img_type == np.uint8: - img /= 255. - else: - raise TypeError('The img type should be np.float32 or np.uint8, ' - f'but got {img_type}') - return img - - -def _convert_output_type_range( - img: np.ndarray, dst_type: Union[np.uint8, np.float32]) -> np.ndarray: - """Convert the type and range of the image according to dst_type. - - It converts the image to desired type and range. If `dst_type` is np.uint8, - images will be converted to np.uint8 type with range [0, 255]. If - `dst_type` is np.float32, it converts the image to np.float32 type with - range [0, 1]. - It is mainly used for post-processing images in colorspace conversion - functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The image to be converted with np.float32 type and - range [0, 255]. - dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it - converts the image to np.uint8 type with range [0, 255]. If - dst_type is np.float32, it converts the image to np.float32 type - with range [0, 1]. - - Returns: - (ndarray): The converted image with desired type and range. - """ - if dst_type not in (np.uint8, np.float32): - raise TypeError('The dst_type should be np.float32 or np.uint8, ' - f'but got {dst_type}') - if dst_type == np.uint8: - img = img.round() - else: - img /= 255. - return img.astype(dst_type) - - -def rgb2ycbcr(img: np.ndarray, y_only: bool = False) -> np.ndarray: - """Convert a RGB image to YCbCr image. - - This function produces the same results as Matlab's `rgb2ycbcr` function. 
- It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [65.481, 128.553, 24.966]) + 16.0 - else: - out_img = np.matmul( - img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def bgr2ycbcr(img: np.ndarray, y_only: bool = False) -> np.ndarray: - """Convert a BGR image to YCbCr image. - - The bgr version of rgb2ycbcr. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0 - else: - out_img = np.matmul( - img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2rgb(img: np.ndarray) -> np.ndarray: - """Convert a YCbCr image to RGB image. - - This function produces the same results as Matlab's ycbcr2rgb function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted RGB image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], - [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [ - -222.921, 135.576, -276.836 - ] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2bgr(img: np.ndarray) -> np.ndarray: - """Convert a YCbCr image to BGR image. - - The bgr version of ycbcr2rgb. - It implements the ITU-R BT.601 conversion for standard-definition - television. 
See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted BGR image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], - [0.00791071, -0.00153632, 0], - [0, -0.00318811, 0.00625893]]) * 255.0 + [ - -276.836, 135.576, -222.921 - ] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def convert_color_factory(src: str, dst: str) -> Callable: - - code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}') - - def convert_color(img: np.ndarray) -> np.ndarray: - out_img = cv2.cvtColor(img, code) - return out_img - - convert_color.__doc__ = f"""Convert a {src.upper()} image to {dst.upper()} - image. - - Args: - img (ndarray or str): The input image. - - Returns: - ndarray: The converted {dst.upper()} image. - """ - - return convert_color - - -bgr2rgb = convert_color_factory('bgr', 'rgb') - -rgb2bgr = convert_color_factory('rgb', 'bgr') - -bgr2hsv = convert_color_factory('bgr', 'hsv') - -hsv2bgr = convert_color_factory('hsv', 'bgr') - -bgr2hls = convert_color_factory('bgr', 'hls') - -hls2bgr = convert_color_factory('hls', 'bgr') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/geometric.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/geometric.py deleted file mode 100644 index eecd795e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/geometric.py +++ /dev/null @@ -1,741 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers - -import cv2 -import numpy as np - -from ..utils import to_2tuple -from .io import imread_backend - -try: - from PIL import Image -except ImportError: - Image = None - - -def _scale_size(size, scale): - """Rescale a size by a ratio. - - Args: - size (tuple[int]): (w, h). - scale (float | tuple(float)): Scaling factor. - - Returns: - tuple[int]: scaled size. - """ - if isinstance(scale, (float, int)): - scale = (scale, scale) - w, h = size - return int(w * float(scale[0]) + 0.5), int(h * float(scale[1]) + 0.5) - - -cv2_interp_codes = { - 'nearest': cv2.INTER_NEAREST, - 'bilinear': cv2.INTER_LINEAR, - 'bicubic': cv2.INTER_CUBIC, - 'area': cv2.INTER_AREA, - 'lanczos': cv2.INTER_LANCZOS4 -} - -# Pillow >=v9.1.0 use a slightly different naming scheme for filters. -# Set pillow_interp_codes according to the naming scheme used. -if Image is not None: - if hasattr(Image, 'Resampling'): - pillow_interp_codes = { - 'nearest': Image.Resampling.NEAREST, - 'bilinear': Image.Resampling.BILINEAR, - 'bicubic': Image.Resampling.BICUBIC, - 'box': Image.Resampling.BOX, - 'lanczos': Image.Resampling.LANCZOS, - 'hamming': Image.Resampling.HAMMING - } - else: - pillow_interp_codes = { - 'nearest': Image.NEAREST, - 'bilinear': Image.BILINEAR, - 'bicubic': Image.BICUBIC, - 'box': Image.BOX, - 'lanczos': Image.LANCZOS, - 'hamming': Image.HAMMING - } - - -def imresize(img, - size, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image to a given size. - - Args: - img (ndarray): The input image. 
- size (tuple[int]): Target size (w, h). - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if backend is None: - backend = imread_backend - if backend not in ['cv2', 'pillow']: - raise ValueError(f'backend: {backend} is not supported for resize.' - f"Supported backends are 'cv2', 'pillow'") - - if backend == 'pillow': - assert img.dtype == np.uint8, 'Pillow backend only support uint8 type' - pil_image = Image.fromarray(img) - pil_image = pil_image.resize(size, pillow_interp_codes[interpolation]) - resized_img = np.array(pil_image) - else: - resized_img = cv2.resize( - img, size, dst=out, interpolation=cv2_interp_codes[interpolation]) - if not return_scale: - return resized_img - else: - w_scale = size[0] / w - h_scale = size[1] / h - return resized_img, w_scale, h_scale - - -def imresize_to_multiple(img, - divisor, - size=None, - scale_factor=None, - keep_ratio=False, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image according to a given size or scale factor and then rounds - up the the resized or rescaled image size to the nearest value that can be - divided by the divisor. - - Args: - img (ndarray): The input image. - divisor (int | tuple): Resized image size will be a multiple of - divisor. If divisor is a tuple, divisor should be - (w_divisor, h_divisor). - size (None | int | tuple[int]): Target size (w, h). Default: None. - scale_factor (None | float | tuple[float]): Multiplier for spatial - size. Should match input size if it is a tuple and the 2D style is - (w_scale_factor, h_scale_factor). Default: None. - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. Default: False. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. 
- """ - h, w = img.shape[:2] - if size is not None and scale_factor is not None: - raise ValueError('only one of size or scale_factor should be defined') - elif size is None and scale_factor is None: - raise ValueError('one of size or scale_factor should be defined') - elif size is not None: - size = to_2tuple(size) - if keep_ratio: - size = rescale_size((w, h), size, return_scale=False) - else: - size = _scale_size((w, h), scale_factor) - - divisor = to_2tuple(divisor) - size = tuple(int(np.ceil(s / d)) * d for s, d in zip(size, divisor)) - resized_img, w_scale, h_scale = imresize( - img, - size, - return_scale=True, - interpolation=interpolation, - out=out, - backend=backend) - if return_scale: - return resized_img, w_scale, h_scale - else: - return resized_img - - -def imresize_like(img, - dst_img, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image to the same size of a given image. - - Args: - img (ndarray): The input image. - dst_img (ndarray): The target image. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = dst_img.shape[:2] - return imresize(img, (w, h), return_scale, interpolation, backend=backend) - - -def rescale_size(old_size, scale, return_scale=False): - """Calculate the new size to be rescaled to. - - Args: - old_size (tuple[int]): The old size (w, h) of image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image size. - - Returns: - tuple[int]: The new rescaled image size. - """ - w, h = old_size - if isinstance(scale, (float, int)): - if scale <= 0: - raise ValueError(f'Invalid scale {scale}, must be positive.') - scale_factor = scale - elif isinstance(scale, tuple): - max_long_edge = max(scale) - max_short_edge = min(scale) - scale_factor = min(max_long_edge / max(h, w), - max_short_edge / min(h, w)) - else: - raise TypeError( - f'Scale must be a number or tuple of int, but got {type(scale)}') - - new_size = _scale_size((w, h), scale_factor) - - if return_scale: - return new_size, scale_factor - else: - return new_size - - -def imrescale(img, - scale, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image while keeping the aspect ratio. - - Args: - img (ndarray): The input image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - ndarray: The rescaled image. 
- """ - h, w = img.shape[:2] - new_size, scale_factor = rescale_size((w, h), scale, return_scale=True) - rescaled_img = imresize( - img, new_size, interpolation=interpolation, backend=backend) - if return_scale: - return rescaled_img, scale_factor - else: - return rescaled_img - - -def imflip(img, direction='horizontal'): - """Flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image. - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return np.flip(img, axis=1) - elif direction == 'vertical': - return np.flip(img, axis=0) - else: - return np.flip(img, axis=(0, 1)) - - -def imflip_(img, direction='horizontal'): - """Inplace flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image (inplace). - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return cv2.flip(img, 1, img) - elif direction == 'vertical': - return cv2.flip(img, 0, img) - else: - return cv2.flip(img, -1, img) - - -def imrotate(img, - angle, - center=None, - scale=1.0, - border_value=0, - interpolation='bilinear', - auto_bound=False): - """Rotate an image. - - Args: - img (ndarray): Image to be rotated. - angle (float): Rotation angle in degrees, positive values mean - clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the rotation in - the source image. If not specified, the center of the image will be - used. - scale (float): Isotropic scale factor. - border_value (int): Border value. - interpolation (str): Same as :func:`resize`. - auto_bound (bool): Whether to adjust the image size to cover the whole - rotated image. - - Returns: - ndarray: The rotated image. - """ - if center is not None and auto_bound: - raise ValueError('`auto_bound` conflicts with `center`') - h, w = img.shape[:2] - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - assert isinstance(center, tuple) - - matrix = cv2.getRotationMatrix2D(center, -angle, scale) - if auto_bound: - cos = np.abs(matrix[0, 0]) - sin = np.abs(matrix[0, 1]) - new_w = h * sin + w * cos - new_h = h * cos + w * sin - matrix[0, 2] += (new_w - w) * 0.5 - matrix[1, 2] += (new_h - h) * 0.5 - w = int(np.round(new_w)) - h = int(np.round(new_h)) - rotated = cv2.warpAffine( - img, - matrix, (w, h), - flags=cv2_interp_codes[interpolation], - borderValue=border_value) - return rotated - - -def bbox_clip(bboxes, img_shape): - """Clip bboxes to fit the image shape. - - Args: - bboxes (ndarray): Shape (..., 4*k) - img_shape (tuple[int]): (height, width) of the image. - - Returns: - ndarray: Clipped bboxes. - """ - assert bboxes.shape[-1] % 4 == 0 - cmin = np.empty(bboxes.shape[-1], dtype=bboxes.dtype) - cmin[0::2] = img_shape[1] - 1 - cmin[1::2] = img_shape[0] - 1 - clipped_bboxes = np.maximum(np.minimum(bboxes, cmin), 0) - return clipped_bboxes - - -def bbox_scaling(bboxes, scale, clip_shape=None): - """Scaling bboxes w.r.t the box center. - - Args: - bboxes (ndarray): Shape(..., 4). - scale (float): Scaling factor. - clip_shape (tuple[int], optional): If specified, bboxes that exceed the - boundary will be clipped according to the given shape (h, w). - - Returns: - ndarray: Scaled bboxes. 
- """ - if float(scale) == 1.0: - scaled_bboxes = bboxes.copy() - else: - w = bboxes[..., 2] - bboxes[..., 0] + 1 - h = bboxes[..., 3] - bboxes[..., 1] + 1 - dw = (w * (scale - 1)) * 0.5 - dh = (h * (scale - 1)) * 0.5 - scaled_bboxes = bboxes + np.stack((-dw, -dh, dw, dh), axis=-1) - if clip_shape is not None: - return bbox_clip(scaled_bboxes, clip_shape) - else: - return scaled_bboxes - - -def imcrop(img, bboxes, scale=1.0, pad_fill=None): - """Crop image patches. - - 3 steps: scale the bboxes -> clip bboxes -> crop and pad. - - Args: - img (ndarray): Image to be cropped. - bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes. - scale (float, optional): Scale ratio of bboxes, the default value - 1.0 means no padding. - pad_fill (Number | list[Number]): Value to be filled for padding. - Default: None, which means no padding. - - Returns: - list[ndarray] | ndarray: The cropped image patches. - """ - chn = 1 if img.ndim == 2 else img.shape[2] - if pad_fill is not None: - if isinstance(pad_fill, (int, float)): - pad_fill = [pad_fill for _ in range(chn)] - assert len(pad_fill) == chn - - _bboxes = bboxes[None, ...] if bboxes.ndim == 1 else bboxes - scaled_bboxes = bbox_scaling(_bboxes, scale).astype(np.int32) - clipped_bbox = bbox_clip(scaled_bboxes, img.shape) - - patches = [] - for i in range(clipped_bbox.shape[0]): - x1, y1, x2, y2 = tuple(clipped_bbox[i, :]) - if pad_fill is None: - patch = img[y1:y2 + 1, x1:x2 + 1, ...] - else: - _x1, _y1, _x2, _y2 = tuple(scaled_bboxes[i, :]) - if chn == 1: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1) - else: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1, chn) - patch = np.array( - pad_fill, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - x_start = 0 if _x1 >= 0 else -_x1 - y_start = 0 if _y1 >= 0 else -_y1 - w = x2 - x1 + 1 - h = y2 - y1 + 1 - patch[y_start:y_start + h, x_start:x_start + w, - ...] = img[y1:y1 + h, x1:x1 + w, ...] - patches.append(patch) - - if bboxes.ndim == 1: - return patches[0] - else: - return patches - - -def impad(img, - *, - shape=None, - padding=None, - pad_val=0, - padding_mode='constant'): - """Pad the given image to a certain shape or pad on all sides with - specified padding mode and padding value. - - Args: - img (ndarray): Image to be padded. - shape (tuple[int]): Expected padding shape (h, w). Default: None. - padding (int or tuple[int]): Padding on each border. If a single int is - provided this is used to pad all borders. If tuple of length 2 is - provided this is the padding on left/right and top/bottom - respectively. If a tuple of length 4 is provided this is the - padding for the left, top, right and bottom borders respectively. - Default: None. Note that `shape` and `padding` can not be both - set. - pad_val (Number | Sequence[Number]): Values to be filled in padding - areas when padding_mode is 'constant'. Default: 0. - padding_mode (str): Type of padding. Should be: constant, edge, - reflect or symmetric. Default: constant. - - constant: pads with a constant value, this value is specified - with pad_val. - - edge: pads with the last value at the edge of the image. - - reflect: pads with reflection of image without repeating the last - value on the edge. For example, padding [1, 2, 3, 4] with 2 - elements on both sides in reflect mode will result in - [3, 2, 1, 2, 3, 4, 3, 2]. - - symmetric: pads with reflection of image repeating the last value - on the edge. 
For example, padding [1, 2, 3, 4] with 2 elements on - both sides in symmetric mode will result in - [2, 1, 1, 2, 3, 4, 4, 3] - - Returns: - ndarray: The padded image. - """ - - assert (shape is not None) ^ (padding is not None) - if shape is not None: - width = max(shape[1] - img.shape[1], 0) - height = max(shape[0] - img.shape[0], 0) - padding = (0, 0, width, height) - - # check pad_val - if isinstance(pad_val, tuple): - assert len(pad_val) == img.shape[-1] - elif not isinstance(pad_val, numbers.Number): - raise TypeError('pad_val must be a int or a tuple. ' - f'But received {type(pad_val)}') - - # check padding - if isinstance(padding, tuple) and len(padding) in [2, 4]: - if len(padding) == 2: - padding = (padding[0], padding[1], padding[0], padding[1]) - elif isinstance(padding, numbers.Number): - padding = (padding, padding, padding, padding) - else: - raise ValueError('Padding must be a int or a 2, or 4 element tuple.' - f'But received {padding}') - - # check padding mode - assert padding_mode in ['constant', 'edge', 'reflect', 'symmetric'] - - border_type = { - 'constant': cv2.BORDER_CONSTANT, - 'edge': cv2.BORDER_REPLICATE, - 'reflect': cv2.BORDER_REFLECT_101, - 'symmetric': cv2.BORDER_REFLECT - } - img = cv2.copyMakeBorder( - img, - padding[1], - padding[3], - padding[0], - padding[2], - border_type[padding_mode], - value=pad_val) - - return img - - -def impad_to_multiple(img, divisor, pad_val=0): - """Pad an image to ensure each edge to be multiple to some number. - - Args: - img (ndarray): Image to be padded. - divisor (int): Padded image edges will be multiple to divisor. - pad_val (Number | Sequence[Number]): Same as :func:`impad`. - - Returns: - ndarray: The padded image. - """ - pad_h = int(np.ceil(img.shape[0] / divisor)) * divisor - pad_w = int(np.ceil(img.shape[1] / divisor)) * divisor - return impad(img, shape=(pad_h, pad_w), pad_val=pad_val) - - -def cutout(img, shape, pad_val=0): - """Randomly cut out a rectangle from the original img. - - Args: - img (ndarray): Image to be cutout. - shape (int | tuple[int]): Expected cutout shape (h, w). If given as a - int, the value will be used for both h and w. - pad_val (int | float | tuple[int | float]): Values to be filled in the - cut area. Defaults to 0. - - Returns: - ndarray: The cutout image. - """ - - channels = 1 if img.ndim == 2 else img.shape[2] - if isinstance(shape, int): - cut_h, cut_w = shape, shape - else: - assert isinstance(shape, tuple) and len(shape) == 2, \ - f'shape must be a int or a tuple with length 2, but got type ' \ - f'{type(shape)} instead.' - cut_h, cut_w = shape - if isinstance(pad_val, (int, float)): - pad_val = tuple([pad_val] * channels) - elif isinstance(pad_val, tuple): - assert len(pad_val) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(pad_val), channels) - else: - raise TypeError(f'Invalid type {type(pad_val)} for `pad_val`') - - img_h, img_w = img.shape[:2] - y0 = np.random.uniform(img_h) - x0 = np.random.uniform(img_w) - - y1 = int(max(0, y0 - cut_h / 2.)) - x1 = int(max(0, x0 - cut_w / 2.)) - y2 = min(img_h, y1 + cut_h) - x2 = min(img_w, x1 + cut_w) - - if img.ndim == 2: - patch_shape = (y2 - y1, x2 - x1) - else: - patch_shape = (y2 - y1, x2 - x1, channels) - - img_cutout = img.copy() - patch = np.array( - pad_val, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - img_cutout[y1:y2, x1:x2, ...] 
= patch - - return img_cutout - - -def _get_shear_matrix(magnitude, direction='horizontal'): - """Generate the shear matrix for transformation. - - Args: - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - - Returns: - ndarray: The shear matrix with dtype float32. - """ - if direction == 'horizontal': - shear_matrix = np.float32([[1, magnitude, 0], [0, 1, 0]]) - elif direction == 'vertical': - shear_matrix = np.float32([[1, 0, 0], [magnitude, 1, 0]]) - return shear_matrix - - -def imshear(img, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear an image. - - Args: - img (ndarray): Image to be sheared with format (h, w) - or (h, w, c). - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The sheared image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`') - shear_matrix = _get_shear_matrix(magnitude, direction) - sheared = cv2.warpAffine( - img, - shear_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. shearing masks whose channels large - # than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return sheared - - -def _get_translate_matrix(offset, direction='horizontal'): - """Generate the translate matrix. - - Args: - offset (int | float): The offset used for translate. - direction (str): The translate direction, either - "horizontal" or "vertical". - - Returns: - ndarray: The translate matrix with dtype float32. - """ - if direction == 'horizontal': - translate_matrix = np.float32([[1, 0, offset], [0, 1, 0]]) - elif direction == 'vertical': - translate_matrix = np.float32([[1, 0, 0], [0, 1, offset]]) - return translate_matrix - - -def imtranslate(img, - offset, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Translate an image. - - Args: - img (ndarray): Image to be translated with format - (h, w) or (h, w, c). - offset (int | float): The offset used for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The translated image. 
- """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`.') - translate_matrix = _get_translate_matrix(offset, direction) - translated = cv2.warpAffine( - img, - translate_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. translating masks whose channels - # large than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return translated diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/io.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/io.py deleted file mode 100644 index ae81b561..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/io.py +++ /dev/null @@ -1,314 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os.path as osp -import warnings -from pathlib import Path - -import cv2 -import numpy as np -from cv2 import (IMREAD_COLOR, IMREAD_GRAYSCALE, IMREAD_IGNORE_ORIENTATION, - IMREAD_UNCHANGED) - -from mmcv.fileio import FileClient -from mmcv.utils import is_filepath, is_str - -try: - from turbojpeg import TJCS_RGB, TJPF_BGR, TJPF_GRAY, TurboJPEG -except ImportError: - TJCS_RGB = TJPF_GRAY = TJPF_BGR = TurboJPEG = None - -try: - from PIL import Image, ImageOps -except ImportError: - Image = None - -try: - import tifffile -except ImportError: - tifffile = None - -jpeg = None -supported_backends = ['cv2', 'turbojpeg', 'pillow', 'tifffile'] - -imread_flags = { - 'color': IMREAD_COLOR, - 'grayscale': IMREAD_GRAYSCALE, - 'unchanged': IMREAD_UNCHANGED, - 'color_ignore_orientation': IMREAD_IGNORE_ORIENTATION | IMREAD_COLOR, - 'grayscale_ignore_orientation': - IMREAD_IGNORE_ORIENTATION | IMREAD_GRAYSCALE -} - -imread_backend = 'cv2' - - -def use_backend(backend): - """Select a backend for image decoding. - - Args: - backend (str): The image decoding backend type. Options are `cv2`, - `pillow`, `turbojpeg` (see https://github.com/lilohuang/PyTurboJPEG) - and `tifffile`. `turbojpeg` is faster but it only supports `.jpeg` - file format. 
- """ - assert backend in supported_backends - global imread_backend - imread_backend = backend - if imread_backend == 'turbojpeg': - if TurboJPEG is None: - raise ImportError('`PyTurboJPEG` is not installed') - global jpeg - if jpeg is None: - jpeg = TurboJPEG() - elif imread_backend == 'pillow': - if Image is None: - raise ImportError('`Pillow` is not installed') - elif imread_backend == 'tifffile': - if tifffile is None: - raise ImportError('`tifffile` is not installed') - - -def _jpegflag(flag='color', channel_order='bgr'): - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'color': - if channel_order == 'bgr': - return TJPF_BGR - elif channel_order == 'rgb': - return TJCS_RGB - elif flag == 'grayscale': - return TJPF_GRAY - else: - raise ValueError('flag must be "color" or "grayscale"') - - -def _pillow2array(img, flag='color', channel_order='bgr'): - """Convert a pillow image to numpy array. - - Args: - img (:obj:`PIL.Image.Image`): The image loaded using PIL - flag (str): Flags specifying the color type of a loaded image, - candidates are 'color', 'grayscale' and 'unchanged'. - Default to 'color'. - channel_order (str): The channel order of the output image array, - candidates are 'bgr' and 'rgb'. Default to 'bgr'. - - Returns: - np.ndarray: The converted numpy array - """ - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'unchanged': - array = np.array(img) - if array.ndim >= 3 and array.shape[2] >= 3: # color image - array[:, :, :3] = array[:, :, (2, 1, 0)] # RGB to BGR - else: - # Handle exif orientation tag - if flag in ['color', 'grayscale']: - img = ImageOps.exif_transpose(img) - # If the image mode is not 'RGB', convert it to 'RGB' first. - if img.mode != 'RGB': - if img.mode != 'LA': - # Most formats except 'LA' can be directly converted to RGB - img = img.convert('RGB') - else: - # When the mode is 'LA', the default conversion will fill in - # the canvas with black, which sometimes shadows black objects - # in the foreground. - # - # Therefore, a random color (124, 117, 104) is used for canvas - img_rgba = img.convert('RGBA') - img = Image.new('RGB', img_rgba.size, (124, 117, 104)) - img.paste(img_rgba, mask=img_rgba.split()[3]) # 3 is alpha - if flag in ['color', 'color_ignore_orientation']: - array = np.array(img) - if channel_order != 'rgb': - array = array[:, :, ::-1] # RGB to BGR - elif flag in ['grayscale', 'grayscale_ignore_orientation']: - img = img.convert('L') - array = np.array(img) - else: - raise ValueError( - 'flag must be "color", "grayscale", "unchanged", ' - f'"color_ignore_orientation" or "grayscale_ignore_orientation"' - f' but got {flag}') - return array - - -def imread(img_or_path, - flag='color', - channel_order='bgr', - backend=None, - file_client_args=None): - """Read an image. - - Note: - In v1.4.1 and later, add `file_client_args` parameters. - - Args: - img_or_path (ndarray or str or Path): Either a numpy array or str or - pathlib.Path. If it is a numpy array (loaded image), then - it will be returned as is. - flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale`, `unchanged`, - `color_ignore_orientation` and `grayscale_ignore_orientation`. 
- By default, `cv2` and `pillow` backend would rotate the image - according to its EXIF info unless called with `unchanged` or - `*_ignore_orientation` flags. `turbojpeg` and `tifffile` backend - always ignore image's EXIF info regardless of the flag. - The `turbojpeg` backend only supports `color` and `grayscale`. - channel_order (str): Order of channel, candidates are `bgr` and `rgb`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `tifffile`, `None`. - If backend is None, the global imread_backend specified by - ``mmcv.use_backend()`` will be used. Default: None. - file_client_args (dict | None): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Returns: - ndarray: Loaded image array. - - Examples: - >>> import mmcv - >>> img_path = '/path/to/img.jpg' - >>> img = mmcv.imread(img_path) - >>> img = mmcv.imread(img_path, flag='color', channel_order='rgb', - ... backend='cv2') - >>> img = mmcv.imread(img_path, flag='color', channel_order='bgr', - ... backend='pillow') - >>> s3_img_path = 's3://bucket/img.jpg' - >>> # infer the file backend by the prefix s3 - >>> img = mmcv.imread(s3_img_path) - >>> # manually set the file backend petrel - >>> img = mmcv.imread(s3_img_path, file_client_args={ - ... 'backend': 'petrel'}) - >>> http_img_path = 'http://path/to/img.jpg' - >>> img = mmcv.imread(http_img_path) - >>> img = mmcv.imread(http_img_path, file_client_args={ - ... 'backend': 'http'}) - """ - - if isinstance(img_or_path, Path): - img_or_path = str(img_or_path) - - if isinstance(img_or_path, np.ndarray): - return img_or_path - elif is_str(img_or_path): - file_client = FileClient.infer_client(file_client_args, img_or_path) - img_bytes = file_client.get(img_or_path) - return imfrombytes(img_bytes, flag, channel_order, backend) - else: - raise TypeError('"img" must be a numpy array or a str or ' - 'a pathlib.Path object') - - -def imfrombytes(content, flag='color', channel_order='bgr', backend=None): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. - flag (str): Same as :func:`imread`. - channel_order (str): The channel order of the output, candidates - are 'bgr' and 'rgb'. Default to 'bgr'. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `tifffile`, `None`. If backend is - None, the global imread_backend specified by ``mmcv.use_backend()`` - will be used. Default: None. - - Returns: - ndarray: Loaded image array. - - Examples: - >>> img_path = '/path/to/img.jpg' - >>> with open(img_path, 'rb') as f: - >>> img_buff = f.read() - >>> img = mmcv.imfrombytes(img_buff) - >>> img = mmcv.imfrombytes(img_buff, flag='color', channel_order='rgb') - >>> img = mmcv.imfrombytes(img_buff, backend='pillow') - >>> img = mmcv.imfrombytes(img_buff, backend='cv2') - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError( - f'backend: {backend} is not supported. 
Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow', 'tifffile'") - if backend == 'turbojpeg': - img = jpeg.decode(content, _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - with io.BytesIO(content) as buff: - img = Image.open(buff) - img = _pillow2array(img, flag, channel_order) - return img - elif backend == 'tifffile': - with io.BytesIO(content) as buff: - img = tifffile.imread(buff) - return img - else: - img_np = np.frombuffer(content, np.uint8) - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imdecode(img_np, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - - -def imwrite(img, - file_path, - params=None, - auto_mkdir=None, - file_client_args=None): - """Write image to file. - - Note: - In v1.4.1 and later, add `file_client_args` parameters. - - Warning: - The parameter `auto_mkdir` will be deprecated in the future and every - file clients will make directory automatically. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. It will be deprecated. - file_client_args (dict | None): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Returns: - bool: Successful or not. - - Examples: - >>> # write to hard disk client - >>> ret = mmcv.imwrite(img, '/path/to/img.jpg') - >>> # infer the file backend by the prefix s3 - >>> ret = mmcv.imwrite(img, 's3://bucket/img.jpg') - >>> # manually set the file backend petrel - >>> ret = mmcv.imwrite(img, 's3://bucket/img.jpg', file_client_args={ - ... 'backend': 'petrel'}) - """ - assert is_filepath(file_path) - file_path = str(file_path) - if auto_mkdir is not None: - warnings.warn( - 'The parameter `auto_mkdir` will be deprecated in the future and ' - 'every file clients will make directory automatically.') - file_client = FileClient.infer_client(file_client_args, file_path) - img_ext = osp.splitext(file_path)[-1] - # Encode image according to image suffix. - # For example, if image path is '/path/your/img.jpg', the encode - # format is '.jpg'. - flag, img_buff = cv2.imencode(img_ext, img, params) - file_client.put(img_buff.tobytes(), file_path) - return flag diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/misc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/misc.py deleted file mode 100644 index 43934a68..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/misc.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - -import mmcv - -try: - import torch -except ImportError: - torch = None - - -def tensor2imgs(tensor, mean=None, std=None, to_rgb=True): - """Convert tensor to 3-channel images or 1-channel gray images. - - Args: - tensor (torch.Tensor): Tensor that contains multiple images, shape ( - N, C, H, W). :math:`C` can be either 3 or 1. - mean (tuple[float], optional): Mean of images. If None, - (0, 0, 0) will be used for tensor with 3-channel, - while (0, ) for tensor with 1-channel. Defaults to None. - std (tuple[float], optional): Standard deviation of images. If None, - (1, 1, 1) will be used for tensor with 3-channel, - while (1, ) for tensor with 1-channel. Defaults to None. 
- to_rgb (bool, optional): Whether the tensor was converted to RGB - format in the first place. If so, convert it back to BGR. - For the tensor with 1 channel, it must be False. Defaults to True. - - Returns: - list[np.ndarray]: A list that contains multiple images. - """ - - if torch is None: - raise RuntimeError('pytorch is not installed') - assert torch.is_tensor(tensor) and tensor.ndim == 4 - channels = tensor.size(1) - assert channels in [1, 3] - if mean is None: - mean = (0, ) * channels - if std is None: - std = (1, ) * channels - assert (channels == len(mean) == len(std) == 3) or \ - (channels == len(mean) == len(std) == 1 and not to_rgb) - - num_imgs = tensor.size(0) - mean = np.array(mean, dtype=np.float32) - std = np.array(std, dtype=np.float32) - imgs = [] - for img_id in range(num_imgs): - img = tensor[img_id, ...].cpu().numpy().transpose(1, 2, 0) - img = mmcv.imdenormalize( - img, mean, std, to_bgr=to_rgb).astype(np.uint8) - imgs.append(np.ascontiguousarray(img)) - return imgs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/photometric.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/photometric.py deleted file mode 100644 index b41cea71..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/image/photometric.py +++ /dev/null @@ -1,471 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np - -from ..utils import is_tuple_of -from .colorspace import bgr2gray, gray2bgr - - -def imnormalize(img, mean, std, to_rgb=True): - """Normalize an image with mean and std. - - Args: - img (ndarray): Image to be normalized. - mean (ndarray): The mean to be used for normalize. - std (ndarray): The std to be used for normalize. - to_rgb (bool): Whether to convert to rgb. - - Returns: - ndarray: The normalized image. - """ - img = img.copy().astype(np.float32) - return imnormalize_(img, mean, std, to_rgb) - - -def imnormalize_(img, mean, std, to_rgb=True): - """Inplace normalize an image with mean and std. - - Args: - img (ndarray): Image to be normalized. - mean (ndarray): The mean to be used for normalize. - std (ndarray): The std to be used for normalize. - to_rgb (bool): Whether to convert to rgb. - - Returns: - ndarray: The normalized image. - """ - # cv2 inplace normalization does not accept uint8 - assert img.dtype != np.uint8 - mean = np.float64(mean.reshape(1, -1)) - stdinv = 1 / np.float64(std.reshape(1, -1)) - if to_rgb: - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) # inplace - cv2.subtract(img, mean, img) # inplace - cv2.multiply(img, stdinv, img) # inplace - return img - - -def imdenormalize(img, mean, std, to_bgr=True): - assert img.dtype != np.uint8 - mean = mean.reshape(1, -1).astype(np.float64) - std = std.reshape(1, -1).astype(np.float64) - img = cv2.multiply(img, std) # make a copy - cv2.add(img, mean, img) # inplace - if to_bgr: - cv2.cvtColor(img, cv2.COLOR_RGB2BGR, img) # inplace - return img - - -def iminvert(img): - """Invert (negate) an image. - - Args: - img (ndarray): Image to be inverted. - - Returns: - ndarray: The inverted image. - """ - return np.full_like(img, 255) - img - - -def solarize(img, thr=128): - """Solarize an image (invert all pixel values above a threshold) - - Args: - img (ndarray): Image to be solarized. - thr (int): Threshold for solarizing (0 - 255). - - Returns: - ndarray: The solarized image. 
- """ - img = np.where(img < thr, img, 255 - img) - return img - - -def posterize(img, bits): - """Posterize an image (reduce the number of bits for each color channel) - - Args: - img (ndarray): Image to be posterized. - bits (int): Number of bits (1 to 8) to use for posterizing. - - Returns: - ndarray: The posterized image. - """ - shift = 8 - bits - img = np.left_shift(np.right_shift(img, shift), shift) - return img - - -def adjust_color(img, alpha=1, beta=None, gamma=0): - r"""It blends the source image and its gray image: - - .. math:: - output = img * alpha + gray\_img * beta + gamma - - Args: - img (ndarray): The input source image. - alpha (int | float): Weight for the source image. Default 1. - beta (int | float): Weight for the converted gray image. - If None, it's assigned the value (1 - `alpha`). - gamma (int | float): Scalar added to each sum. - Same as :func:`cv2.addWeighted`. Default 0. - - Returns: - ndarray: Colored image which has the same size and dtype as input. - """ - gray_img = bgr2gray(img) - gray_img = np.tile(gray_img[..., None], [1, 1, 3]) - if beta is None: - beta = 1 - alpha - colored_img = cv2.addWeighted(img, alpha, gray_img, beta, gamma) - if not colored_img.dtype == np.uint8: - # Note when the dtype of `img` is not the default `np.uint8` - # (e.g. np.float32), the value in `colored_img` got from cv2 - # is not guaranteed to be in range [0, 255], so here clip - # is needed. - colored_img = np.clip(colored_img, 0, 255) - return colored_img - - -def imequalize(img): - """Equalize the image histogram. - - This function applies a non-linear mapping to the input image, - in order to create a uniform distribution of grayscale values - in the output image. - - Args: - img (ndarray): Image to be equalized. - - Returns: - ndarray: The equalized image. - """ - - def _scale_channel(im, c): - """Scale the data in the corresponding channel.""" - im = im[:, :, c] - # Compute the histogram of the image channel. - histo = np.histogram(im, 256, (0, 255))[0] - # For computing the step, filter out the nonzeros. - nonzero_histo = histo[histo > 0] - step = (np.sum(nonzero_histo) - nonzero_histo[-1]) // 255 - if not step: - lut = np.array(range(256)) - else: - # Compute the cumulative sum, shifted by step // 2 - # and then normalized by step. - lut = (np.cumsum(histo) + (step // 2)) // step - # Shift lut, prepending with 0. - lut = np.concatenate([[0], lut[:-1]], 0) - # handle potential integer overflow - lut[lut > 255] = 255 - # If step is zero, return the original image. - # Otherwise, index from lut. - return np.where(np.equal(step, 0), im, lut[im]) - - # Scales each channel independently and then stacks - # the result. - s1 = _scale_channel(img, 0) - s2 = _scale_channel(img, 1) - s3 = _scale_channel(img, 2) - equalized_img = np.stack([s1, s2, s3], axis=-1) - return equalized_img.astype(img.dtype) - - -def adjust_brightness(img, factor=1.): - """Adjust image brightness. - - This function controls the brightness of an image. An - enhancement factor of 0.0 gives a black image. - A factor of 1.0 gives the original image. This function - blends the source image and the degenerated black image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be brightened. - factor (float): A value controls the enhancement. - Factor 1.0 returns the original image, lower - factors mean less color (brightness, contrast, - etc), and higher values more. Default 1. - - Returns: - ndarray: The brightened image. 
- """ - degenerated = np.zeros_like(img) - # Note manually convert the dtype to np.float32, to - # achieve as close results as PIL.ImageEnhance.Brightness. - # Set beta=1-factor, and gamma=0 - brightened_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - brightened_img = np.clip(brightened_img, 0, 255) - return brightened_img.astype(img.dtype) - - -def adjust_contrast(img, factor=1.): - """Adjust image contrast. - - This function controls the contrast of an image. An - enhancement factor of 0.0 gives a solid grey - image. A factor of 1.0 gives the original image. It - blends the source image and the degenerated mean image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be contrasted. BGR order. - factor (float): Same as :func:`mmcv.adjust_brightness`. - - Returns: - ndarray: The contrasted image. - """ - gray_img = bgr2gray(img) - hist = np.histogram(gray_img, 256, (0, 255))[0] - mean = round(np.sum(gray_img) / np.sum(hist)) - degenerated = (np.ones_like(img[..., 0]) * mean).astype(img.dtype) - degenerated = gray2bgr(degenerated) - contrasted_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - contrasted_img = np.clip(contrasted_img, 0, 255) - return contrasted_img.astype(img.dtype) - - -def auto_contrast(img, cutoff=0): - """Auto adjust image contrast. - - This function maximize (normalize) image contrast by first removing cutoff - percent of the lightest and darkest pixels from the histogram and remapping - the image so that the darkest pixel becomes black (0), and the lightest - becomes white (255). - - Args: - img (ndarray): Image to be contrasted. BGR order. - cutoff (int | float | tuple): The cutoff percent of the lightest and - darkest pixels to be removed. If given as tuple, it shall be - (low, high). Otherwise, the single value will be used for both. - Defaults to 0. - - Returns: - ndarray: The contrasted image. - """ - - def _auto_contrast_channel(im, c, cutoff): - im = im[:, :, c] - # Compute the histogram of the image channel. - histo = np.histogram(im, 256, (0, 255))[0] - # Remove cut-off percent pixels from histo - histo_sum = np.cumsum(histo) - cut_low = histo_sum[-1] * cutoff[0] // 100 - cut_high = histo_sum[-1] - histo_sum[-1] * cutoff[1] // 100 - histo_sum = np.clip(histo_sum, cut_low, cut_high) - cut_low - histo = np.concatenate([[histo_sum[0]], np.diff(histo_sum)], 0) - - # Compute mapping - low, high = np.nonzero(histo)[0][0], np.nonzero(histo)[0][-1] - # If all the values have been cut off, return the origin img - if low >= high: - return im - scale = 255.0 / (high - low) - offset = -low * scale - lut = np.array(range(256)) - lut = lut * scale + offset - lut = np.clip(lut, 0, 255) - return lut[im] - - if isinstance(cutoff, (int, float)): - cutoff = (cutoff, cutoff) - else: - assert isinstance(cutoff, tuple), 'cutoff must be of type int, ' \ - f'float or tuple, but got {type(cutoff)} instead.' - # Auto adjusts contrast for each channel independently and then stacks - # the result. - s1 = _auto_contrast_channel(img, 0, cutoff) - s2 = _auto_contrast_channel(img, 1, cutoff) - s3 = _auto_contrast_channel(img, 2, cutoff) - contrasted_img = np.stack([s1, s2, s3], axis=-1) - return contrasted_img.astype(img.dtype) - - -def adjust_sharpness(img, factor=1., kernel=None): - """Adjust image sharpness. - - This function controls the sharpness of an image. An - enhancement factor of 0.0 gives a blurred image. 
A - factor of 1.0 gives the original image. And a factor - of 2.0 gives a sharpened image. It blends the source - image and the degenerated mean image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be sharpened. BGR order. - factor (float): Same as :func:`mmcv.adjust_brightness`. - kernel (np.ndarray, optional): Filter kernel to be applied on the img - to obtain the degenerated img. Defaults to None. - - Note: - No value sanity check is enforced on the kernel set by users. So with - an inappropriate kernel, the ``adjust_sharpness`` may fail to perform - the function its name indicates but end up performing whatever - transform determined by the kernel. - - Returns: - ndarray: The sharpened image. - """ - - if kernel is None: - # adopted from PIL.ImageFilter.SMOOTH - kernel = np.array([[1., 1., 1.], [1., 5., 1.], [1., 1., 1.]]) / 13 - assert isinstance(kernel, np.ndarray), \ - f'kernel must be of type np.ndarray, but got {type(kernel)} instead.' - assert kernel.ndim == 2, \ - f'kernel must have a dimension of 2, but got {kernel.ndim} instead.' - - degenerated = cv2.filter2D(img, -1, kernel) - sharpened_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - sharpened_img = np.clip(sharpened_img, 0, 255) - return sharpened_img.astype(img.dtype) - - -def adjust_lighting(img, eigval, eigvec, alphastd=0.1, to_rgb=True): - """AlexNet-style PCA jitter. - - This data augmentation is proposed in `ImageNet Classification with Deep - Convolutional Neural Networks - `_. - - Args: - img (ndarray): Image to be adjusted lighting. BGR order. - eigval (ndarray): the eigenvalue of the convariance matrix of pixel - values, respectively. - eigvec (ndarray): the eigenvector of the convariance matrix of pixel - values, respectively. - alphastd (float): The standard deviation for distribution of alpha. - Defaults to 0.1 - to_rgb (bool): Whether to convert img to rgb. - - Returns: - ndarray: The adjusted image. - """ - assert isinstance(eigval, np.ndarray) and isinstance(eigvec, np.ndarray), \ - f'eigval and eigvec should both be of type np.ndarray, got ' \ - f'{type(eigval)} and {type(eigvec)} instead.' - - assert eigval.ndim == 1 and eigvec.ndim == 2 - assert eigvec.shape == (3, eigval.shape[0]) - n_eigval = eigval.shape[0] - assert isinstance(alphastd, float), 'alphastd should be of type float, ' \ - f'got {type(alphastd)} instead.' - - img = img.copy().astype(np.float32) - if to_rgb: - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) # inplace - - alpha = np.random.normal(0, alphastd, n_eigval) - alter = eigvec \ - * np.broadcast_to(alpha.reshape(1, n_eigval), (3, n_eigval)) \ - * np.broadcast_to(eigval.reshape(1, n_eigval), (3, n_eigval)) - alter = np.broadcast_to(alter.sum(axis=1).reshape(1, 1, 3), img.shape) - img_adjusted = img + alter - return img_adjusted - - -def lut_transform(img, lut_table): - """Transform array by look-up table. - - The function lut_transform fills the output array with values from the - look-up table. Indices of the entries are taken from the input array. - - Args: - img (ndarray): Image to be transformed. - lut_table (ndarray): look-up table of 256 elements; in case of - multi-channel input array, the table should either have a single - channel (in this case the same table is used for all channels) or - the same number of channels as in the input array. - - Returns: - ndarray: The transformed image. 
- """ - assert isinstance(img, np.ndarray) - assert 0 <= np.min(img) and np.max(img) <= 255 - assert isinstance(lut_table, np.ndarray) - assert lut_table.shape == (256, ) - - return cv2.LUT(np.array(img, dtype=np.uint8), lut_table) - - -def clahe(img, clip_limit=40.0, tile_grid_size=(8, 8)): - """Use CLAHE method to process the image. - - See `ZUIDERVELD,K. Contrast Limited Adaptive Histogram Equalization[J]. - Graphics Gems, 1994:474-485.` for more information. - - Args: - img (ndarray): Image to be processed. - clip_limit (float): Threshold for contrast limiting. Default: 40.0. - tile_grid_size (tuple[int]): Size of grid for histogram equalization. - Input image will be divided into equally sized rectangular tiles. - It defines the number of tiles in row and column. Default: (8, 8). - - Returns: - ndarray: The processed image. - """ - assert isinstance(img, np.ndarray) - assert img.ndim == 2 - assert isinstance(clip_limit, (float, int)) - assert is_tuple_of(tile_grid_size, int) - assert len(tile_grid_size) == 2 - - clahe = cv2.createCLAHE(clip_limit, tile_grid_size) - return clahe.apply(np.array(img, dtype=np.uint8)) - - -def adjust_hue(img: np.ndarray, hue_factor: float) -> np.ndarray: - """Adjust hue of an image. - - The image hue is adjusted by converting the image to HSV and cyclically - shifting the intensities in the hue channel (H). The image is then - converted back to original image mode. - - `hue_factor` is the amount of shift in H channel and must be in the - interval `[-0.5, 0.5]`. - - Modified from - https://github.com/pytorch/vision/blob/main/torchvision/ - transforms/functional.py - - Args: - img (ndarray): Image to be adjusted. - hue_factor (float): How much to shift the hue channel. Should be in - [-0.5, 0.5]. 0.5 and -0.5 give complete reversal of hue channel in - HSV space in positive and negative direction respectively. - 0 means no shift. Therefore, both -0.5 and 0.5 will give an image - with complementary colors while 0 gives the original image. - - Returns: - ndarray: Hue adjusted image. 
- """ - - if not (-0.5 <= hue_factor <= 0.5): - raise ValueError(f'hue_factor:{hue_factor} is not in [-0.5, 0.5].') - if not (isinstance(img, np.ndarray) and (img.ndim in {2, 3})): - raise TypeError('img should be ndarray with dim=[2 or 3].') - - dtype = img.dtype - img = img.astype(np.uint8) - hsv_img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV_FULL) - h, s, v = cv2.split(hsv_img) - h = h.astype(np.uint8) - # uint8 addition take cares of rotation across boundaries - with np.errstate(over='ignore'): - h += np.uint8(hue_factor * 255) - hsv_img = cv2.merge([h, s, v]) - return cv2.cvtColor(hsv_img, cv2.COLOR_HSV2RGB_FULL).astype(dtype) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/deprecated.json b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/deprecated.json deleted file mode 100644 index 25cf6f28..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/deprecated.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "resnet50_caffe": "detectron/resnet50_caffe", - "resnet50_caffe_bgr": "detectron2/resnet50_caffe_bgr", - "resnet101_caffe": "detectron/resnet101_caffe", - "resnet101_caffe_bgr": "detectron2/resnet101_caffe_bgr" -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/mmcls.json b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/mmcls.json deleted file mode 100644 index c073a41d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/mmcls.json +++ /dev/null @@ -1,59 +0,0 @@ -{ - "vgg11": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_batch256_imagenet_20210208-4271cd6c.pth", - "vgg13": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_batch256_imagenet_20210208-4d1d6080.pth", - "vgg16": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_batch256_imagenet_20210208-db26f1a5.pth", - "vgg19": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_batch256_imagenet_20210208-e6920e4a.pth", - "vgg11_bn": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_bn_batch256_imagenet_20210207-f244902c.pth", - "vgg13_bn": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_bn_batch256_imagenet_20210207-1a8b7864.pth", - "vgg16_bn": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_bn_batch256_imagenet_20210208-7e55cd29.pth", - "vgg19_bn": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_bn_batch256_imagenet_20210208-da620c4f.pth", - "resnet18": "https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.pth", - "resnet34": "https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_8xb32_in1k_20210831-f257d4e6.pth", - "resnet50": "https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.pth", - "resnet101": "https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_8xb32_in1k_20210831-539c63f8.pth", - "resnet152": "https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_8xb32_in1k_20210901-4d7582fa.pth", - "resnet50_v1d": "https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d50_b32x8_imagenet_20210531-db14775a.pth", - "resnet101_v1d": "https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d101_b32x8_imagenet_20210531-6e13bcd3.pth", - "resnet152_v1d": "https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d152_b32x8_imagenet_20210531-278cf22a.pth", - "resnext50_32x4d": 
"https://download.openmmlab.com/mmclassification/v0/resnext/resnext50_32x4d_b32x8_imagenet_20210429-56066e27.pth", - "resnext101_32x4d": "https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_b32x8_imagenet_20210506-e0fa3dd5.pth", - "resnext101_32x8d": "https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_b32x8_imagenet_20210506-23a247d5.pth", - "resnext152_32x4d": "https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_b32x8_imagenet_20210524-927787be.pth", - "se-resnet50": "https://download.openmmlab.com/mmclassification/v0/se-resnet/se-resnet50_batch256_imagenet_20200804-ae206104.pth", - "se-resnet101": "https://download.openmmlab.com/mmclassification/v0/se-resnet/se-resnet101_batch256_imagenet_20200804-ba5b51d4.pth", - "resnest50": "https://download.openmmlab.com/mmclassification/v0/resnest/resnest50_imagenet_converted-1ebf0afe.pth", - "resnest101": "https://download.openmmlab.com/mmclassification/v0/resnest/resnest101_imagenet_converted-032caa52.pth", - "resnest200": "https://download.openmmlab.com/mmclassification/v0/resnest/resnest200_imagenet_converted-581a60f2.pth", - "resnest269": "https://download.openmmlab.com/mmclassification/v0/resnest/resnest269_imagenet_converted-59930960.pth", - "shufflenet_v1": "https://download.openmmlab.com/mmclassification/v0/shufflenet_v1/shufflenet_v1_batch1024_imagenet_20200804-5d6cec73.pth", - "shufflenet_v2": "https://download.openmmlab.com/mmclassification/v0/shufflenet_v2/shufflenet_v2_batch1024_imagenet_20200812-5bf4721e.pth", - "mobilenet_v2": "https://download.openmmlab.com/mmclassification/v0/mobilenet_v2/mobilenet_v2_batch256_imagenet_20200708-3b2dc3af.pth", - "mobilenet_v3_small": "https://download.openmmlab.com/mmclassification/v0/mobilenet_v3/convert/mobilenet_v3_small-8427ecf0.pth", - "mobilenet_v3_large": "https://download.openmmlab.com/mmclassification/v0/mobilenet_v3/convert/mobilenet_v3_large-3ea3c186.pth", - "repvgg_A0": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A0_3rdparty_4xb64-coslr-120e_in1k_20210909-883ab98c.pth", - "repvgg_A1": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A1_3rdparty_4xb64-coslr-120e_in1k_20210909-24003a24.pth", - "repvgg_A2": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-A2_3rdparty_4xb64-coslr-120e_in1k_20210909-97d7695a.pth", - "repvgg_B0": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B0_3rdparty_4xb64-coslr-120e_in1k_20210909-446375f4.pth", - "repvgg_B1": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1_3rdparty_4xb64-coslr-120e_in1k_20210909-750cdf67.pth", - "repvgg_B1g2": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1g2_3rdparty_4xb64-coslr-120e_in1k_20210909-344f6422.pth", - "repvgg_B1g4": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B1g4_3rdparty_4xb64-coslr-120e_in1k_20210909-d4c1a642.pth", - "repvgg_B2": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B2_3rdparty_4xb64-coslr-120e_in1k_20210909-bd6b937c.pth", - "repvgg_B2g4": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B2g4_3rdparty_4xb64-autoaug-lbs-mixup-coslr-200e_in1k_20210909-7b7955f0.pth", - "repvgg_B3": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B3_3rdparty_4xb64-autoaug-lbs-mixup-coslr-200e_in1k_20210909-dda968bf.pth", - "repvgg_B3g4": 
"https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-B3g4_3rdparty_4xb64-autoaug-lbs-mixup-coslr-200e_in1k_20210909-4e54846a.pth", - "repvgg_D2se": "https://download.openmmlab.com/mmclassification/v0/repvgg/repvgg-D2se_3rdparty_4xb64-autoaug-lbs-mixup-coslr-200e_in1k_20210909-cf3139b7.pth", - "res2net101_w26": "https://download.openmmlab.com/mmclassification/v0/res2net/res2net101-w26-s4_3rdparty_8xb32_in1k_20210927-870b6c36.pth", - "res2net50_w14": "https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w14-s8_3rdparty_8xb32_in1k_20210927-bc967bf1.pth", - "res2net50_w26": "https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w26-s8_3rdparty_8xb32_in1k_20210927-f547a94b.pth", - "swin_tiny": "https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925-66df6be6.pth", - "swin_small": "https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_small_224_b16x64_300e_imagenet_20210615_110219-7f9d988b.pth", - "swin_base": "https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window7_224_22kto1k-f967f799.pth", - "swin_large": "https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_large_patch4_window7_224_22kto1k-5f0996db.pth", - "t2t_vit_t_14": "https://download.openmmlab.com/mmclassification/v0/t2t-vit/t2t-vit-t-14_3rdparty_8xb64_in1k_20210928-b7c09b62.pth", - "t2t_vit_t_19": "https://download.openmmlab.com/mmclassification/v0/t2t-vit/t2t-vit-t-19_3rdparty_8xb64_in1k_20210928-7f1478d5.pth", - "t2t_vit_t_24": "https://download.openmmlab.com/mmclassification/v0/t2t-vit/t2t-vit-t-24_3rdparty_8xb64_in1k_20210928-fe95a61b.pth", - "tnt_small": "https://download.openmmlab.com/mmclassification/v0/tnt/tnt-small-p16_3rdparty_in1k_20210903-c56ee7df.pth", - "vit_base_p16": "https://download.openmmlab.com/mmclassification/v0/vit/finetune/vit-base-p16_in21k-pre-3rdparty_ft-64xb64_in1k-384_20210928-98e8652b.pth", - "vit_base_p32": "https://download.openmmlab.com/mmclassification/v0/vit/finetune/vit-base-p32_in21k-pre-3rdparty_ft-64xb64_in1k-384_20210928-9cea8599.pth", - "vit_large_p16": "https://download.openmmlab.com/mmclassification/v0/vit/finetune/vit-large-p16_in21k-pre-3rdparty_ft-64xb64_in1k-384_20210928-b20ba619.pth" -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/open_mmlab.json b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/open_mmlab.json deleted file mode 100644 index 8311db4f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/open_mmlab.json +++ /dev/null @@ -1,50 +0,0 @@ -{ - "vgg16_caffe": "https://download.openmmlab.com/pretrain/third_party/vgg16_caffe-292e1171.pth", - "detectron/resnet50_caffe": "https://download.openmmlab.com/pretrain/third_party/resnet50_caffe-788b5fa3.pth", - "detectron2/resnet50_caffe": "https://download.openmmlab.com/pretrain/third_party/resnet50_msra-5891d200.pth", - "detectron/resnet101_caffe": "https://download.openmmlab.com/pretrain/third_party/resnet101_caffe-3ad79236.pth", - "detectron2/resnet101_caffe": "https://download.openmmlab.com/pretrain/third_party/resnet101_msra-6cc46731.pth", - "detectron2/resnext101_32x8d": "https://download.openmmlab.com/pretrain/third_party/resnext101_32x8d-1516f1aa.pth", - "resnext50_32x4d": "https://download.openmmlab.com/pretrain/third_party/resnext50-32x4d-0ab1a123.pth", - "resnext101_32x4d": "https://download.openmmlab.com/pretrain/third_party/resnext101_32x4d-a5af3160.pth", - 
"resnext101_64x4d": "https://download.openmmlab.com/pretrain/third_party/resnext101_64x4d-ee2c6f71.pth", - "contrib/resnet50_gn": "https://download.openmmlab.com/pretrain/third_party/resnet50_gn_thangvubk-ad1730dd.pth", - "detectron/resnet50_gn": "https://download.openmmlab.com/pretrain/third_party/resnet50_gn-9186a21c.pth", - "detectron/resnet101_gn": "https://download.openmmlab.com/pretrain/third_party/resnet101_gn-cac0ab98.pth", - "jhu/resnet50_gn_ws": "https://download.openmmlab.com/pretrain/third_party/resnet50_gn_ws-15beedd8.pth", - "jhu/resnet101_gn_ws": "https://download.openmmlab.com/pretrain/third_party/resnet101_gn_ws-3e3c308c.pth", - "jhu/resnext50_32x4d_gn_ws": "https://download.openmmlab.com/pretrain/third_party/resnext50_32x4d_gn_ws-0d87ac85.pth", - "jhu/resnext101_32x4d_gn_ws": "https://download.openmmlab.com/pretrain/third_party/resnext101_32x4d_gn_ws-34ac1a9e.pth", - "jhu/resnext50_32x4d_gn": "https://download.openmmlab.com/pretrain/third_party/resnext50_32x4d_gn-c7e8b754.pth", - "jhu/resnext101_32x4d_gn": "https://download.openmmlab.com/pretrain/third_party/resnext101_32x4d_gn-ac3bb84e.pth", - "msra/hrnetv2_w18_small": "https://download.openmmlab.com/pretrain/third_party/hrnetv2_w18_small-b5a04e21.pth", - "msra/hrnetv2_w18": "https://download.openmmlab.com/pretrain/third_party/hrnetv2_w18-00eb2006.pth", - "msra/hrnetv2_w32": "https://download.openmmlab.com/pretrain/third_party/hrnetv2_w32-dc9eeb4f.pth", - "msra/hrnetv2_w40": "https://download.openmmlab.com/pretrain/third_party/hrnetv2_w40-ed0b031c.pth", - "msra/hrnetv2_w48": "https://download.openmmlab.com/pretrain/third_party/hrnetv2_w48-d2186c55.pth", - "bninception_caffe": "https://download.openmmlab.com/pretrain/third_party/bn_inception_caffe-ed2e8665.pth", - "kin400/i3d_r50_f32s2_k400": "https://download.openmmlab.com/pretrain/third_party/i3d_r50_f32s2_k400-2c57e077.pth", - "kin400/nl3d_r50_f32s2_k400": "https://download.openmmlab.com/pretrain/third_party/nl3d_r50_f32s2_k400-fa7e7caa.pth", - "res2net101_v1d_26w_4s": "https://download.openmmlab.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth", - "regnetx_400mf": "https://download.openmmlab.com/pretrain/third_party/regnetx_400mf-a5b10d96.pth", - "regnetx_800mf": "https://download.openmmlab.com/pretrain/third_party/regnetx_800mf-1f4be4c7.pth", - "regnetx_1.6gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_1.6gf-5791c176.pth", - "regnetx_3.2gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_3.2gf-c2599b0f.pth", - "regnetx_4.0gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_4.0gf-a88f671e.pth", - "regnetx_6.4gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_6.4gf-006af45d.pth", - "regnetx_8.0gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_8.0gf-3c68abe7.pth", - "regnetx_12gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_12gf-4c2a3350.pth", - "resnet18_v1c": "https://download.openmmlab.com/pretrain/third_party/resnet18_v1c-b5776b93.pth", - "resnet50_v1c": "https://download.openmmlab.com/pretrain/third_party/resnet50_v1c-2cccc1ad.pth", - "resnet101_v1c": "https://download.openmmlab.com/pretrain/third_party/resnet101_v1c-e67eebb6.pth", - "mmedit/vgg16": "https://download.openmmlab.com/mmediting/third_party/vgg_state_dict.pth", - "mmedit/res34_en_nomixup": "https://download.openmmlab.com/mmediting/third_party/model_best_resnet34_En_nomixup.pth", - "mmedit/mobilenet_v2": "https://download.openmmlab.com/mmediting/third_party/mobilenet_v2.pth", - 
"contrib/mobilenet_v3_large": "https://download.openmmlab.com/pretrain/third_party/mobilenet_v3_large-bc2c3fd3.pth", - "contrib/mobilenet_v3_small": "https://download.openmmlab.com/pretrain/third_party/mobilenet_v3_small-47085aa1.pth", - "resnest50": "https://download.openmmlab.com/pretrain/third_party/resnest50_d2-7497a55b.pth", - "resnest101": "https://download.openmmlab.com/pretrain/third_party/resnest101_d2-f3b931b2.pth", - "resnest200": "https://download.openmmlab.com/pretrain/third_party/resnest200_d2-ca88e41f.pth", - "darknet53": "https://download.openmmlab.com/pretrain/third_party/darknet53-a628ea1b.pth", - "mmdet/mobilenet_v2": "https://download.openmmlab.com/mmdetection/v2.0/third_party/mobilenet_v2_batch256_imagenet-ff34753d.pth" -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/torchvision_0.12.json b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/torchvision_0.12.json deleted file mode 100644 index 06defe67..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/model_zoo/torchvision_0.12.json +++ /dev/null @@ -1,57 +0,0 @@ -{ - "alexnet": "https://download.pytorch.org/models/alexnet-owt-7be5be79.pth", - "densenet121": "https://download.pytorch.org/models/densenet121-a639ec97.pth", - "densenet169": "https://download.pytorch.org/models/densenet169-b2777c0a.pth", - "densenet201": "https://download.pytorch.org/models/densenet201-c1103571.pth", - "densenet161": "https://download.pytorch.org/models/densenet161-8d451a50.pth", - "efficientnet_b0": "https://download.pytorch.org/models/efficientnet_b0_rwightman-3dd342df.pth", - "efficientnet_b1": "https://download.pytorch.org/models/efficientnet_b1_rwightman-533bc792.pth", - "efficientnet_b2": "https://download.pytorch.org/models/efficientnet_b2_rwightman-bcdf34b7.pth", - "efficientnet_b3": "https://download.pytorch.org/models/efficientnet_b3_rwightman-cf984f9c.pth", - "efficientnet_b4": "https://download.pytorch.org/models/efficientnet_b4_rwightman-7eb33cd5.pth", - "efficientnet_b5": "https://download.pytorch.org/models/efficientnet_b5_lukemelas-b6417697.pth", - "efficientnet_b6": "https://download.pytorch.org/models/efficientnet_b6_lukemelas-c76e70fd.pth", - "efficientnet_b7": "https://download.pytorch.org/models/efficientnet_b7_lukemelas-dcc49843.pth", - "googlenet": "https://download.pytorch.org/models/googlenet-1378be20.pth", - "inception_v3_google": "https://download.pytorch.org/models/inception_v3_google-0cc3c7bd.pth", - "mobilenet_v2": "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth", - "mobilenet_v3_large": "https://download.pytorch.org/models/mobilenet_v3_large-8738ca79.pth", - "mobilenet_v3_small": "https://download.pytorch.org/models/mobilenet_v3_small-047dcff4.pth", - "regnet_y_400mf": "https://download.pytorch.org/models/regnet_y_400mf-c65dace8.pth", - "regnet_y_800mf": "https://download.pytorch.org/models/regnet_y_800mf-1b27b58c.pth", - "regnet_y_1_6gf": "https://download.pytorch.org/models/regnet_y_1_6gf-b11a554e.pth", - "regnet_y_3_2gf": "https://download.pytorch.org/models/regnet_y_3_2gf-b5a9779c.pth", - "regnet_y_8gf": "https://download.pytorch.org/models/regnet_y_8gf-d0d0e4a8.pth", - "regnet_y_16gf": "https://download.pytorch.org/models/regnet_y_16gf-9e6ed7dd.pth", - "regnet_y_32gf": "https://download.pytorch.org/models/regnet_y_32gf-4dee3f7a.pth", - "regnet_x_400mf": "https://download.pytorch.org/models/regnet_x_400mf-adf1edd5.pth", - "regnet_x_800mf": "https://download.pytorch.org/models/regnet_x_800mf-ad17e45c.pth", - "regnet_x_1_6gf": 
"https://download.pytorch.org/models/regnet_x_1_6gf-e3633e7f.pth", - "regnet_x_3_2gf": "https://download.pytorch.org/models/regnet_x_3_2gf-f342aeae.pth", - "regnet_x_8gf": "https://download.pytorch.org/models/regnet_x_8gf-03ceed89.pth", - "regnet_x_16gf": "https://download.pytorch.org/models/regnet_x_16gf-2007eb11.pth", - "regnet_x_32gf": "https://download.pytorch.org/models/regnet_x_32gf-9d47f8d0.pth", - "resnet18": "https://download.pytorch.org/models/resnet18-f37072fd.pth", - "resnet34": "https://download.pytorch.org/models/resnet34-b627a593.pth", - "resnet50": "https://download.pytorch.org/models/resnet50-0676ba61.pth", - "resnet101": "https://download.pytorch.org/models/resnet101-63fe2227.pth", - "resnet152": "https://download.pytorch.org/models/resnet152-394f9c45.pth", - "resnext50_32x4d": "https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth", - "resnext101_32x8d": "https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth", - "wide_resnet50_2": "https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth", - "wide_resnet101_2": "https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth", - "shufflenetv2_x0.5": "https://download.pytorch.org/models/shufflenetv2_x0.5-f707e7126e.pth", - "shufflenetv2_x1.0": "https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth", - "shufflenetv2_x1.5": null, - "shufflenetv2_x2.0": null, - "squeezenet1_0": "https://download.pytorch.org/models/squeezenet1_0-b66bff10.pth", - "squeezenet1_1": "https://download.pytorch.org/models/squeezenet1_1-b8a52dc0.pth", - "vgg11": "https://download.pytorch.org/models/vgg11-8a719046.pth", - "vgg13": "https://download.pytorch.org/models/vgg13-19584684.pth", - "vgg16": "https://download.pytorch.org/models/vgg16-397923af.pth", - "vgg19": "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth", - "vgg11_bn": "https://download.pytorch.org/models/vgg11_bn-6002323d.pth", - "vgg13_bn": "https://download.pytorch.org/models/vgg13_bn-abd245e5.pth", - "vgg16_bn": "https://download.pytorch.org/models/vgg16_bn-6c64b313.pth", - "vgg19_bn": "https://download.pytorch.org/models/vgg19_bn-c79401a0.pth" -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/__init__.py deleted file mode 100644 index 0d7eb5b0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .info import is_custom_op_loaded -from .symbolic import register_extra_symbolics - -__all__ = ['register_extra_symbolics', 'is_custom_op_loaded'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/info.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/info.py deleted file mode 100644 index b8325a9c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/info.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import warnings - -import torch - - -def is_custom_op_loaded() -> bool: - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This function will be deprecated in future. 
' - msg += blue_text + 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - flag = False - try: - from ..tensorrt import is_tensorrt_plugin_loaded - flag = is_tensorrt_plugin_loaded() - except (ImportError, ModuleNotFoundError): - pass - if not flag: - try: - from ..ops import get_onnxruntime_op_path - ort_lib_path = get_onnxruntime_op_path() - flag = os.path.exists(ort_lib_path) - except (ImportError, ModuleNotFoundError): - pass - return flag or torch.__version__ == 'parrots' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/onnx_utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/onnx_utils/__init__.py deleted file mode 100644 index ef101fec..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/onnx_utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/onnx_utils/symbolic_helper.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/onnx_utils/symbolic_helper.py deleted file mode 100644 index cc9e96f8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/onnx_utils/symbolic_helper.py +++ /dev/null @@ -1,331 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Modified from https://github.com/pytorch/pytorch.""" -import warnings -from functools import wraps -from sys import maxsize - -import torch -import torch.onnx -# This import monkey-patches graph manipulation methods on Graph, used for the -# ONNX symbolics -import torch.onnx.utils -from torch._C import ListType - -# --------------------------------------------------------------------------------- -# Helper functions -# --------------------------------------------------------------------------------- - -# Save some builtins as locals, because we'll shadown them below -_sum = sum - - -def _parse_arg(value, desc): - if desc == 'none': - return value - if desc == 'v' or not _is_value(value): - return value - if value.node().mustBeNone(): - return None - if value.node().kind() == 'onnx::Constant': - tval = value.node()['value'] - if desc == 'i': - return int(tval) - elif desc == 'f': - return float(tval) - elif desc == 'b': - return bool(tval) - elif desc == 's': - return str(tval) - elif desc == 't': - return tval - elif desc == 'is': - return [int(v) for v in tval] - elif desc == 'fs': - return [float(v) for v in tval] - else: - raise RuntimeError( - "ONNX symbolic doesn't know to interpret Constant node") - elif value.node().kind() == 'prim::ListConstruct': - if desc == 'is': - for v in value.node().inputs(): - if v.node().kind() != 'onnx::Constant': - raise RuntimeError( - "Failed to export an ONNX attribute '" + - v.node().kind() + - "', since it's not constant, please try to make " - 'things (e.g., kernel size) static if possible') - return [int(v.node()['value']) for v in value.node().inputs()] - else: - raise RuntimeError( - "ONNX symbolic doesn't know to interpret ListConstruct node") - - raise RuntimeError(f'Unexpected node type: {value.node().kind()}') - - -def _maybe_get_const(value, desc): - if _is_value(value) and value.node().kind() == 'onnx::Constant': - return _parse_arg(value, desc) - return value - - -def _maybe_get_scalar(value): - value_t = _maybe_get_const(value, 't') - if isinstance(value_t, torch.Tensor) and value_t.shape == (): - return value_t - return value - - -def _get_const(value, desc, 
arg_name): - if _is_value(value) and value.node().kind() not in ('onnx::Constant', - 'prim::Constant'): - raise RuntimeError('ONNX symbolic expected a constant' - ' value of the {} argument, got `{}`'.format( - arg_name, value)) - return _parse_arg(value, desc) - - -def _unpack_list(list_value): - list_node = list_value.node() - assert list_node.kind() == 'prim::ListConstruct' - return list(list_node.inputs()) - - -# Check if list_value is output from prim::ListConstruct -# This is usually called before _unpack_list to ensure the list can be -# unpacked. -def _is_packed_list(list_value): - return _is_value( - list_value) and list_value.node().kind() == 'prim::ListConstruct' - - -def parse_args(*arg_descriptors): - - def decorator(fn): - fn._arg_descriptors = arg_descriptors - - def wrapper(g, *args): - # some args may be optional, so the length may be smaller - assert len(arg_descriptors) >= len(args) - args = [ - _parse_arg(arg, arg_desc) - for arg, arg_desc in zip(args, arg_descriptors) - ] - return fn(g, *args) - - # In Python 2 functools.wraps chokes on partially applied functions, so - # we need this as a workaround - try: - wrapper = wraps(fn)(wrapper) - except Exception: - pass - return wrapper - - return decorator - - -def _scalar(x): - """Convert a scalar tensor into a Python value.""" - assert x.numel() == 1 - return x.item() - - -def _if_scalar_type_as(g, self, tensor): - """Convert self into the same type of tensor, as necessary.""" - if isinstance(self, torch._C.Value): - return self - - scalar_type = tensor.type().scalarType() - if scalar_type: - ty = scalar_type.lower() - return getattr(self, ty)() - - return self - - -def _is_none(x): - return x.node().mustBeNone() - - -def _is_value(x): - return isinstance(x, torch._C.Value) - - -def _is_tensor_list(x): - return x.type().isSubtypeOf(ListType.ofTensors()) - - -def _unimplemented(op, msg): - warnings.warn('ONNX export failed on ' + op + ' because ' + msg + - ' not supported') - - -def _try_get_scalar_type(*args): - for arg in args: - try: - return arg.type().scalarType() - except RuntimeError: - pass - return None - - -def _topk_helper(g, input, k, dim, largest=True, sorted=False, out=None): - if out is not None: - _unimplemented('TopK', 'Out parameter is not supported') - if not _is_value(k): - k = g.op('Constant', value_t=torch.tensor([k], dtype=torch.int64)) - else: - k = g.op('Reshape', k, g.op('Constant', value_t=torch.tensor([1]))) - return g.op( - 'TopK', - input, - k, - axis_i=dim, - largest_i=largest, - sorted_i=sorted, - outputs=2) - - -def _slice_helper(g, - input, - axes, - starts, - ends, - steps=None, - dynamic_slice=False): - # TODO(ruobing): add support for opset<10 - from torch.onnx.symbolic_opset10 import _slice - return _slice(g, input, axes, starts, ends, steps, dynamic_slice) - - -def _unsqueeze_helper(g, input, dim): - from torch.onnx.symbolic_opset9 import unsqueeze - return unsqueeze(g, input, dim) - - -def _interpolate_size_to_scales(g, input, output_size, dim): - output_size = _maybe_get_const(output_size, 'is') - if _is_value(output_size): - offset = 2 - offsets = g.op( - 'Constant', value_t=torch.ones(offset, dtype=torch.float32)) - dividend = g.op( - 'Cast', output_size, to_i=cast_pytorch_to_onnx['Float']) - divisor = _slice_helper( - g, g.op('Shape', input), axes=[0], ends=[maxsize], starts=[offset]) - divisor = g.op('Cast', divisor, to_i=cast_pytorch_to_onnx['Float']) - scale_dims = g.op('Div', dividend, divisor) - scales = g.op('Concat', offsets, scale_dims, axis_i=0) - else: - scales_constant 
= [ - 1. if i < 2 else float(output_size[-(dim - i)]) / - float(input.type().sizes()[-(dim - i)]) for i in range(0, dim) - ] - scales = g.op( - 'Constant', - value_t=torch.tensor(scales_constant, dtype=torch.float32)) - return scales - - -def _interpolate_get_scales_if_available(g, scales): - if len(scales) == 0: - return None - # scales[0] is NoneType in Pytorch == 1.5.1 - # scales[0] is TensorType with sizes = [] in Pytorch == 1.6.0 - # scales[0] is ListType in Pytorch == 1.7.0 - # scales[0] is TensorType with sizes = [2] in Pytorch == 1.8.0 - scale_desc = 'fs' if scales[0].type().kind() == 'ListType' or ( - scales[0].type().kind() == 'TensorType' and - (sum(scales[0].type().sizes()) > 1)) else 'f' - available_scales = _maybe_get_const( - scales[0], scale_desc) != -1 and not _is_none(scales[0]) - - if not available_scales: - return None - - offsets = g.op('Constant', value_t=torch.ones(2, dtype=torch.float32)) - if scale_desc == 'fs': - scales_list = g.op( - 'Constant', - value_t=torch.tensor(_maybe_get_const(scales[0], scale_desc))) - # modify to support PyTorch==1.7.0 - # https://github.com/pytorch/pytorch/blob/75ee5756715e7161314ce037474843b68f69fc04/torch/onnx/symbolic_helper.py#L375 # noqa: E501 - scales = g.op('Concat', offsets, scales_list, axis_i=0) - else: - # for PyTorch < 1.7.0 - scales_list = [] - for scale in scales: - unsqueezed_scale = _unsqueeze_helper(g, scale, 0) - # ONNX only supports float for the scales. double -> float. - unsqueezed_scale = g.op( - 'Cast', unsqueezed_scale, to_i=cast_pytorch_to_onnx['Float']) - scales_list.append(unsqueezed_scale) - scales = g.op('Concat', offsets, *scales_list, axis_i=0) - return scales - - -def _get_interpolate_attributes(g, mode, args): - if mode == 'nearest': - align_corners = None - scales = args[0:] - else: - align_corners = args[0] - scales = args[1:] - scales = _interpolate_get_scales_if_available(g, scales) - return scales, align_corners - - -def _interpolate_get_scales(g, scale_factor, dim): - offsets = g.op('Constant', value_t=torch.ones(2, dtype=torch.float32)) - if isinstance(scale_factor.type(), torch._C.ListType): - return g.op('Concat', offsets, scale_factor, axis_i=0) - else: - scale_factor = _unsqueeze_helper(g, scale_factor, 0) - scale_factor = g.op( - 'Cast', scale_factor, to_i=cast_pytorch_to_onnx['Float']) - scales = [scale_factor for i in range(dim - 2)] - scale_factor = g.op('Concat', offsets, *scales, axis_i=0) - return scale_factor - - -def _size_helper(g, self, dim): - full_shape = g.op('Shape', self) - from torch.onnx.symbolic_opset9 import select - return select(g, full_shape, g.op('Constant', value_t=torch.tensor([0])), - dim) - - -def _avgpool_helper(tuple_fn, padding, kernel_size, stride, divisor_override, - name): - if divisor_override and divisor_override.node().kind() != 'prim::Constant': - return _unimplemented(name, 'divisor_override') - if not stride: - stride = kernel_size - padding = tuple(tuple_fn(padding)) - return padding - - -# Metaprogram symbolics for each ATen native specialized cast operator. -# For e.g. 
we specify a function named `_cast_uint8_t` that instantiates an -# ONNX cast node with `to` attribute 'UINT8' -# -# TODO: remove these once we support Type's in the JIT IR and we can once again -# use the unified toType operator -cast_pytorch_to_onnx = { - 'Byte': torch.onnx.TensorProtoDataType.UINT8, - 'Char': torch.onnx.TensorProtoDataType.INT8, - 'Double': torch.onnx.TensorProtoDataType.DOUBLE, - 'Float': torch.onnx.TensorProtoDataType.FLOAT, - 'Half': torch.onnx.TensorProtoDataType.FLOAT16, - 'Int': torch.onnx.TensorProtoDataType.INT32, - 'Long': torch.onnx.TensorProtoDataType.INT64, - 'Short': torch.onnx.TensorProtoDataType.INT16, - 'Bool': torch.onnx.TensorProtoDataType.BOOL, - 'ComplexFloat': torch.onnx.TensorProtoDataType.COMPLEX64, - 'ComplexDouble': torch.onnx.TensorProtoDataType.COMPLEX128, - 'Undefined': torch.onnx.TensorProtoDataType.UNDEFINED, -} - -# Global set to store the list of quantized operators in the network. -# This is currently only used in the conversion of quantized ops from PT -# -> C2 via ONNX. -_quantized_ops: set = set() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/symbolic.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/symbolic.py deleted file mode 100644 index 3599b3f2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/onnx/symbolic.py +++ /dev/null @@ -1,509 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Modified from https://github.com/pytorch/pytorch.""" -import os -import warnings - -import numpy as np -import torch -from torch.nn.modules.utils import _pair, _single, _triple -from torch.onnx.symbolic_helper import parse_args -from torch.onnx.symbolic_registry import register_op - -from .onnx_utils import symbolic_helper as sym_help - - -def _interpolate(name, dim, interpolate_mode): - - def symbolic_fn(g, input, output_size, *args): - scales, align_corners = sym_help._get_interpolate_attributes( - g, interpolate_mode, args) - align_corners = sym_help._maybe_get_scalar(align_corners) - transformation_mode = 'asymmetric' \ - if interpolate_mode == 'nearest' \ - else 'align_corners' if align_corners else 'pytorch_half_pixel' - empty_tensor = g.op( - 'Constant', value_t=torch.tensor([], dtype=torch.float32)) - - if scales is None: - if 'ONNX_BACKEND' in os.environ and os.environ[ - 'ONNX_BACKEND'] == 'TensorRT': - input_size = input.type().sizes() - # slice the first two dim - input_size = input_size[:2] - # convert output_size to int type - output_size = sym_help._maybe_get_const(output_size, 'is') - input_size.extend(output_size) - output_size = g.op( - 'Constant', - value_t=torch.tensor(input_size, dtype=torch.int64)) - else: - input_size = g.op('Shape', input) - input_size_beg = sym_help._slice_helper( - g, input_size, axes=[0], ends=[2], starts=[0]) - output_size = g.op( - 'Cast', - output_size, - to_i=sym_help.cast_pytorch_to_onnx['Long']) - output_size = g.op( - 'Concat', input_size_beg, output_size, axis_i=0) - scales = g.op( - 'Constant', value_t=torch.tensor([], dtype=torch.float32)) - return g.op( - 'Resize', - input, - empty_tensor, - # roi only takes effect with - # coordinate_transformation_mode="tf_crop_and_resize" - scales, # scales is not needed since we are sending out_size - output_size, - coordinate_transformation_mode_s=transformation_mode, - cubic_coeff_a_f=-0.75, # only valid when mode="cubic" - mode_s=interpolate_mode, # nearest, linear, or cubic - nearest_mode_s='floor') # only valid when mode="nearest" - else: - return g.op( - 'Resize', - input, - 
empty_tensor, - # roi only takes effect with - # coordinate_transformation_mode="tf_crop_and_resize" - scales, # scales is not needed since we are sending out_size - coordinate_transformation_mode_s=transformation_mode, - cubic_coeff_a_f=-0.75, # only valid when mode="cubic" - mode_s=interpolate_mode, # nearest, linear, or cubic - nearest_mode_s='floor') # only valid when mode="nearest" - - return symbolic_fn - - -upsample_nearest1d = _interpolate('upsample_nearest1d', 3, 'nearest') -upsample_nearest2d = _interpolate('upsample_nearest2d', 4, 'nearest') -upsample_nearest3d = _interpolate('upsample_nearest3d', 5, 'nearest') -upsample_linear1d = _interpolate('upsample_linear1d', 3, 'linear') -upsample_bilinear2d = _interpolate('upsample_bilinear2d', 4, 'linear') -upsample_trilinear3d = _interpolate('upsample_trilinear3d', 5, 'linear') -upsample_bicubic2d = _interpolate('upsample_bicubic2d', 4, 'cubic') - - -@parse_args('v', 'v', 'i', 'i', 'i', 'none') -def topk(g, self, k, dim, largest, sorted, out=None): - return sym_help._topk_helper( - g, self, k, dim, largest=largest, sorted=sorted, out=out) - - -def masked_select(g, self, mask): - from torch.onnx.symbolic_opset9 import expand_as, nonzero - index = nonzero(g, expand_as(g, mask, self)) - return g.op('GatherND', self, index) - - -def _prepare_onnx_paddings(g, dim, pad): - pad_len = torch.onnx.symbolic_opset9.size( - g, pad, g.op('Constant', value_t=torch.tensor([0]))) - # Set extension = [0] * (dim * 2 - len(pad)) - extension = g.op( - 'Sub', - g.op('Mul', - g.op('Constant', value_t=torch.tensor(dim, dtype=torch.int64)), - g.op('Constant', value_t=torch.tensor(2, dtype=torch.int64))), - pad_len) - pad = g.op('Cast', pad, to_i=sym_help.cast_pytorch_to_onnx['Long']) - paddings = g.op( - 'Concat', - pad, - g.op( - 'ConstantOfShape', - extension, - value_t=torch.tensor([0], dtype=torch.int64)), - axis_i=0) - paddings = g.op('Reshape', paddings, - g.op('Constant', value_t=torch.tensor([-1, 2]))) - paddings = g.op( - 'Transpose', - torch.onnx.symbolic_opset10.flip(g, paddings, [0]), - perm_i=[1, 0]) - paddings = g.op('Reshape', paddings, - g.op('Constant', value_t=torch.tensor([-1]))) - padding_c = g.op( - 'Cast', paddings, to_i=sym_help.cast_pytorch_to_onnx['Long']) - return padding_c - - -def constant_pad_nd(g, input, padding, value=None): - mode = 'constant' - value = sym_help._maybe_get_scalar(value) - value = sym_help._if_scalar_type_as(g, value, input) - pad = _prepare_onnx_paddings(g, input.type().dim(), padding) - return g.op('Pad', input, pad, value, mode_s=mode) - - -def reflection_pad(g, input, padding): - mode = 'reflect' - paddings = _prepare_onnx_paddings(g, input.type().dim(), padding) - return g.op('Pad', input, paddings, mode_s=mode) - - -reflection_pad1d = reflection_pad -reflection_pad2d = reflection_pad -reflection_pad3d = reflection_pad - - -def _avg_pool(name, tuple_fn): - - @parse_args('v', 'is', 'is', 'is', 'i', 'i', 'none') - def symbolic_fn(g, - input, - kernel_size, - stride, - padding, - ceil_mode, - count_include_pad, - divisor_override=None): - padding = sym_help._avgpool_helper(tuple_fn, padding, kernel_size, - stride, divisor_override, name) - if not stride: - stride = kernel_size - if count_include_pad: - input = g.op( - 'Pad', - input, - g.op( - 'Constant', - value_t=torch.tensor(((0, ) * 2 + padding) * 2)), - mode_s='constant') - padding = (0, ) * len(padding) - output = g.op( - 'AveragePool', - input, - kernel_shape_i=tuple_fn(kernel_size), - strides_i=tuple_fn(stride), - pads_i=padding * 2, - 
ceil_mode_i=ceil_mode) - return output - - return symbolic_fn - - -avg_pool1d = _avg_pool('avg_pool1d', _single) -avg_pool2d = _avg_pool('avg_pool2d', _pair) -avg_pool3d = _avg_pool('avg_pool3d', _triple) - - -def _get_im2col_indices_along_dim(g, input_d, kernel_size_d, dilation_d, - padding_d, stride_d): - # Input is always 4-D (N, C, H, W) - # Calculate indices of sliding blocks along spatial dimension - # Slide kernel over input each dim d: - # each dimension d ranges from 0 to - # input[d]+2xpadding[d]-dilation[d]x(kernel_size[d]-1) - # with steps = stride - - blocks_d = g.op('Add', input_d, - g.op('Constant', value_t=torch.tensor(padding_d * 2))) - blocks_d = g.op( - 'Sub', blocks_d, - g.op( - 'Constant', - value_t=torch.tensor(dilation_d * (kernel_size_d - 1)))) - - # Stride kernel over input and find starting indices along dim d - blocks_d_indices = g.op('Range', g.op('Constant', value_t=torch.tensor(0)), - blocks_d, - g.op('Constant', value_t=torch.tensor(stride_d))) - - # Apply dilation on kernel and find its indices along dim d - kernel_grid = np.arange(0, kernel_size_d * dilation_d, dilation_d) - kernel_grid = g.op('Constant', value_t=torch.tensor([kernel_grid])) - - # Broadcast and add kernel staring positions (indices) with - # kernel_grid along dim d, to get block indices along dim d - blocks_d_indices = g.op( - 'Unsqueeze', blocks_d_indices, axes_i=[0]) # Reshape to [1, -1] - kernel_mask = g.op('Reshape', kernel_grid, - g.op('Constant', value_t=torch.tensor([-1, 1]))) - block_mask = g.op('Add', blocks_d_indices, kernel_mask) - - return block_mask - - -def _get_im2col_padded_input(g, input, padding_h, padding_w): - # Input is always 4-D tensor (N, C, H, W) - # Padding tensor has the following format: (padding_h, padding_w) - # Reshape the padding to follow ONNX format: - # (dim1_begin, dim2_begin,...,dim1_end, dim2_end,...) 
- pad = g.op( - 'Constant', value_t=torch.LongTensor([0, 0, padding_h, padding_w] * 2)) - return g.op('Pad', input, pad) - - -def _get_im2col_output_shape(g, input, kernel_h, kernel_w): - batch_dim = size(g, input, g.op('Constant', value_t=torch.tensor(0))) - channel_dim = size(g, input, g.op('Constant', value_t=torch.tensor(1))) - channel_unfolded = g.op( - 'Mul', channel_dim, - g.op('Constant', value_t=torch.tensor(kernel_h * kernel_w))) - - return g.op( - 'Concat', - g.op('Unsqueeze', batch_dim, axes_i=[0]), - g.op('Unsqueeze', channel_unfolded, axes_i=[0]), - g.op('Constant', value_t=torch.tensor([-1])), - axis_i=0) - - -def size(g, self, dim=None): - if dim is None: - return g.op('Shape', self) - return sym_help._size_helper(g, self, dim) - - -@parse_args('v', 'is', 'is', 'is', 'is') -def im2col(g, input, kernel_size, dilation, padding, stride): - # Input is always 4-D tensor (N, C, H, W) - # All other args are int[2] - - input_h = size(g, input, g.op('Constant', value_t=torch.tensor(2))) - input_w = size(g, input, g.op('Constant', value_t=torch.tensor(3))) - - stride_h, stride_w = stride[0], stride[1] - padding_h, padding_w = padding[0], padding[1] - dilation_h, dilation_w = dilation[0], dilation[1] - kernel_h, kernel_w = kernel_size[0], kernel_size[1] - - blocks_row_indices = _get_im2col_indices_along_dim(g, input_h, kernel_h, - dilation_h, padding_h, - stride_h) - blocks_col_indices = _get_im2col_indices_along_dim(g, input_w, kernel_w, - dilation_w, padding_w, - stride_w) - - output_shape = _get_im2col_output_shape(g, input, kernel_h, kernel_w) - padded_input = _get_im2col_padded_input(g, input, padding_h, padding_w) - - output = g.op('Gather', padded_input, blocks_row_indices, axis_i=2) - output = g.op('Gather', output, blocks_col_indices, axis_i=4) - output = g.op('Transpose', output, perm_i=[0, 1, 2, 4, 3, 5]) - return g.op('Reshape', output, output_shape) - - -@parse_args('v', 'i') -def one_hot(g, self, num_classes): - values = g.op('Constant', value_t=torch.LongTensor([0, 1])) - depth = g.op('Constant', value_t=torch.LongTensor([num_classes])) - return g.op('OneHot', self, depth, values, axis_i=-1) - - -@parse_args('v', 'i', 'none') -def softmax(g, input, dim, dtype=None): - input_dim = input.type().dim() - if input_dim: - # TODO: remove this as onnx opset 11 spec allows negative axes - if dim < 0: - dim = input_dim + dim - if input_dim == dim + 1: - softmax = g.op('Softmax', input, axis_i=dim) - if dtype and dtype.node().kind() != 'prim::Constant': - parsed_dtype = sym_help._get_const(dtype, 'i', 'dtype') - softmax = g.op( - 'Cast', - softmax, - to_i=sym_help.scalar_type_to_onnx[parsed_dtype]) - return softmax - - max_value = g.op('ReduceMax', input, axes_i=[dim], keepdims_i=1) - input = g.op('Sub', input, max_value) - exp = g.op('Exp', input) - sum = g.op('ReduceSum', exp, axes_i=[dim]) - softmax = g.op('Div', exp, sum) - if dtype and dtype.node().kind() != 'prim::Constant': - parsed_dtype = sym_help._get_const(dtype, 'i', 'dtype') - softmax = g.op( - 'Cast', softmax, to_i=sym_help.scalar_type_to_onnx[parsed_dtype]) - return softmax - - -def _adaptive_pool(name, type, tuple_fn, fn=None): - - @parse_args('v', 'is') - def symbolic_fn(g, input, output_size): - if output_size == [1] * len(output_size) and type == 'AveragePool': - return g.op('GlobalAveragePool', input) - if not input.isCompleteTensor(): - if output_size == [1] * len(output_size): - return g.op('GlobalMaxPool', input), None - raise NotImplementedError( - '[Adaptive pool]:input size not accessible') - dim = 
input.type().sizes()[2:] - if output_size == [1] * len(output_size) and type == 'MaxPool': - return g.op('GlobalMaxPool', input), None - - # compute stride = floor(input_size / output_size) - s = [int(dim[i] / output_size[i]) for i in range(0, len(dim))] - - # compute kernel_size = input_size - (output_size - 1) * stride - k = [dim[i] - (output_size[i] - 1) * s[i] for i in range(0, len(dim))] - - # call max_poolxd_with_indices to get indices in the output - if type == 'MaxPool': - return fn(g, input, k, k, (0, ) * len(dim), (1, ) * len(dim), - False) - output = g.op( - type, - input, - kernel_shape_i=tuple_fn(k), - strides_i=tuple_fn(s), - ceil_mode_i=False) - return output - - return symbolic_fn - - -adaptive_avg_pool1d = _adaptive_pool('adaptive_avg_pool1d', 'AveragePool', - _single) -adaptive_avg_pool2d = _adaptive_pool('adaptive_avg_pool2d', 'AveragePool', - _pair) -adaptive_avg_pool3d = _adaptive_pool('adaptive_avg_pool3d', 'AveragePool', - _triple) - - -def new_full(g, - self, - size, - fill_value, - dtype, - layout, - device, - pin_memory=False): - from torch.onnx.symbolic_opset9 import full - if dtype is None and self.isCompleteTensor(): - dtype = self.type().scalarType() - dtype = sym_help.scalar_type_to_onnx.index( - sym_help.cast_pytorch_to_onnx[dtype]) - return full(g, size, fill_value, dtype, layout, device, pin_memory) - - -@parse_args('v', 'v', 'i', 'i', 'i') -def grid_sampler(g, - input, - grid, - interpolation_mode, - padding_mode, - align_corners=False): - return g.op( - 'mmcv::grid_sampler', - input, - grid, - interpolation_mode_i=interpolation_mode, - padding_mode_i=padding_mode, - align_corners_i=align_corners) - - -@parse_args('v', 'i') -def cummax(g, input, dim): - return g.op('mmcv::cummax', input, dim_i=dim, outputs=2) - - -@parse_args('v', 'i') -def cummin(g, input, dim): - return g.op('mmcv::cummin', input, dim_i=dim, outputs=2) - - -@parse_args('v', 'v', 'is') -def roll(g, input, shifts, dims): - from packaging import version - from torch.onnx.symbolic_opset9 import squeeze - input_shape = g.op('Shape', input) - - need_flatten = len(dims) == 0 - # If dims is not specified, the tensor will be flattened before - # rolling and then restored to the original shape. 
- if need_flatten: - resize_shape = input_shape - input = g.op('Reshape', input, - g.op('Constant', value_t=torch.LongTensor([1, -1]))) - input_shape = g.op('Shape', input) - dims = [1] - - for index, dim in enumerate(dims): - end_size = sym_help._slice_helper( - g, input_shape, axes=[0], ends=[dim + 1], starts=[dim]) - shift_size = sym_help._slice_helper( - g, shifts, axes=[0], ends=[index + 1], starts=[index]) - slice_size = g.op('Sub', end_size, shift_size) - - # Can not use Mod because tensorrt does not support - div_size = g.op('Div', slice_size, end_size) - slice_size = g.op('Sub', slice_size, g.op('Mul', end_size, div_size)) - - if version.parse(torch.__version__) >= version.parse('1.7.0'): - # add dim=0 for pytorch 1.9.0 - end_size = squeeze(g, end_size, 0) - slice_size = squeeze(g, slice_size, 0) - else: - end_size = g.op('Squeeze', end_size) - slice_size = g.op('Squeeze', slice_size) - dim = torch.LongTensor([dim]) - - input_slice0 = sym_help._slice_helper( - g, - input, - axes=dim, - starts=torch.LongTensor([0]), - ends=slice_size, - dynamic_slice=True) - input_slice1 = sym_help._slice_helper( - g, - input, - axes=dim, - ends=end_size, - starts=slice_size, - dynamic_slice=True) - - input = g.op('Concat', input_slice1, input_slice0, axis_i=dim) - - if need_flatten: - input = g.op('Reshape', input, resize_shape) - - return input - - -def register_extra_symbolics(opset=11): - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This function will be deprecated in future. ' - msg += blue_text + 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - register_op('one_hot', one_hot, '', opset) - register_op('im2col', im2col, '', opset) - register_op('topk', topk, '', opset) - register_op('softmax', softmax, '', opset) - register_op('constant_pad_nd', constant_pad_nd, '', opset) - register_op('reflection_pad1d', reflection_pad1d, '', opset) - register_op('reflection_pad2d', reflection_pad2d, '', opset) - register_op('reflection_pad3d', reflection_pad3d, '', opset) - register_op('avg_pool1d', avg_pool1d, '', opset) - register_op('avg_pool2d', avg_pool2d, '', opset) - register_op('avg_pool3d', avg_pool3d, '', opset) - register_op('adaptive_avg_pool1d', adaptive_avg_pool1d, '', opset) - register_op('adaptive_avg_pool2d', adaptive_avg_pool2d, '', opset) - register_op('adaptive_avg_pool3d', adaptive_avg_pool3d, '', opset) - register_op('masked_select', masked_select, '', opset) - register_op('upsample_nearest1d', upsample_nearest1d, '', opset) - register_op('upsample_nearest2d', upsample_nearest2d, '', opset) - register_op('upsample_nearest3d', upsample_nearest3d, '', opset) - register_op('upsample_linear1d', upsample_linear1d, '', opset) - register_op('upsample_bilinear2d', upsample_bilinear2d, '', opset) - register_op('upsample_trilinear3d', upsample_trilinear3d, '', opset) - register_op('upsample_bicubic2d', upsample_bicubic2d, '', opset) - register_op('new_full', new_full, '', opset) - register_op('grid_sampler', grid_sampler, '', opset) - register_op('cummax', cummax, '', opset) - register_op('cummin', cummin, '', opset) - register_op('roll', roll, '', opset) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/__init__.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/__init__.py deleted file mode 100755 index ad2633a7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/__init__.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .active_rotated_filter import active_rotated_filter -from .assign_score_withk import assign_score_withk -from .ball_query import ball_query -from .bbox import bbox_overlaps -from .border_align import BorderAlign, border_align -from .box_iou_rotated import box_iou_rotated -from .carafe import CARAFE, CARAFENaive, CARAFEPack, carafe, carafe_naive -from .cc_attention import CrissCrossAttention -from .chamfer_distance import chamfer_distance -from .contour_expand import contour_expand -from .convex_iou import convex_giou, convex_iou -from .corner_pool import CornerPool -from .correlation import Correlation -from .deform_conv import DeformConv2d, DeformConv2dPack, deform_conv2d -from .deform_roi_pool import (DeformRoIPool, DeformRoIPoolPack, - ModulatedDeformRoIPoolPack, deform_roi_pool) -from .deprecated_wrappers import Conv2d_deprecated as Conv2d -from .deprecated_wrappers import ConvTranspose2d_deprecated as ConvTranspose2d -from .deprecated_wrappers import Linear_deprecated as Linear -from .deprecated_wrappers import MaxPool2d_deprecated as MaxPool2d -from .diff_iou_rotated import diff_iou_rotated_2d, diff_iou_rotated_3d -from .focal_loss import (SigmoidFocalLoss, SoftmaxFocalLoss, - sigmoid_focal_loss, softmax_focal_loss) -from .furthest_point_sample import (furthest_point_sample, - furthest_point_sample_with_dist) -from .fused_bias_leakyrelu import FusedBiasLeakyReLU, fused_bias_leakyrelu -from .gather_points import gather_points -from .group_points import GroupAll, QueryAndGroup, grouping_operation -from .info import (get_compiler_version, get_compiling_cuda_version, - get_onnxruntime_op_path) -from .iou3d import (boxes_iou3d, boxes_iou_bev, boxes_overlap_bev, nms3d, - nms3d_normal, nms_bev, nms_normal_bev) -from .knn import knn -from .masked_conv import MaskedConv2d, masked_conv2d -from .min_area_polygons import min_area_polygons -from .modulated_deform_conv import (ModulatedDeformConv2d, - ModulatedDeformConv2dPack, - modulated_deform_conv2d) -from .multi_scale_deform_attn import MultiScaleDeformableAttention -from .nms import batched_nms, nms, nms_match, nms_rotated, soft_nms -from .pixel_group import pixel_group -from .point_sample import (SimpleRoIAlign, point_sample, - rel_roi_point_to_rel_img_point) -from .points_in_boxes import (points_in_boxes_all, points_in_boxes_cpu, - points_in_boxes_part) -from .points_in_polygons import points_in_polygons -from .points_sampler import PointsSampler -from .psa_mask import PSAMask -from .riroi_align_rotated import RiRoIAlignRotated, riroi_align_rotated -from .roi_align import RoIAlign, roi_align -from .roi_align_rotated import RoIAlignRotated, roi_align_rotated -from .roi_pool import RoIPool, roi_pool -from .roiaware_pool3d import RoIAwarePool3d -from .roipoint_pool3d import RoIPointPool3d -from .rotated_feature_align import rotated_feature_align -from .saconv import SAConv2d -from .scatter_points import DynamicScatter, dynamic_scatter -# from .sparse_conv import (SparseConv2d, SparseConv3d, SparseConvTranspose2d, -# SparseConvTranspose3d, SparseInverseConv2d, -# SparseInverseConv3d, SubMConv2d, SubMConv3d) -# from .sparse_modules import SparseModule, SparseSequential -# from .sparse_pool import SparseMaxPool2d, SparseMaxPool3d -# from .sparse_structure import 
SparseConvTensor, scatter_nd -from .sync_bn import SyncBatchNorm -from .three_interpolate import three_interpolate -from .three_nn import three_nn -from .tin_shift import TINShift, tin_shift -from .upfirdn2d import upfirdn2d -from .voxelize import Voxelization, voxelization - -# __all__ = [ -# 'bbox_overlaps', 'CARAFE', 'CARAFENaive', 'CARAFEPack', 'carafe', -# 'carafe_naive', 'CornerPool', 'DeformConv2d', 'DeformConv2dPack', -# 'deform_conv2d', 'DeformRoIPool', 'DeformRoIPoolPack', -# 'ModulatedDeformRoIPoolPack', 'deform_roi_pool', 'SigmoidFocalLoss', -# 'SoftmaxFocalLoss', 'sigmoid_focal_loss', 'softmax_focal_loss', -# 'get_compiler_version', 'get_compiling_cuda_version', -# 'get_onnxruntime_op_path', 'MaskedConv2d', 'masked_conv2d', -# 'ModulatedDeformConv2d', 'ModulatedDeformConv2dPack', -# 'modulated_deform_conv2d', 'batched_nms', 'nms', 'soft_nms', 'nms_match', -# 'RoIAlign', 'roi_align', 'RoIPool', 'roi_pool', 'SyncBatchNorm', 'Conv2d', -# 'ConvTranspose2d', 'Linear', 'MaxPool2d', 'CrissCrossAttention', 'PSAMask', -# 'point_sample', 'rel_roi_point_to_rel_img_point', 'SimpleRoIAlign', -# 'SAConv2d', 'TINShift', 'tin_shift', 'assign_score_withk', -# 'box_iou_rotated', 'RoIPointPool3d', 'nms_rotated', 'knn', 'ball_query', -# 'upfirdn2d', 'FusedBiasLeakyReLU', 'fused_bias_leakyrelu', -# 'rotated_feature_align', 'RiRoIAlignRotated', 'riroi_align_rotated', -# 'RoIAlignRotated', 'roi_align_rotated', 'pixel_group', 'QueryAndGroup', -# 'GroupAll', 'grouping_operation', 'contour_expand', 'three_nn', -# 'three_interpolate', 'MultiScaleDeformableAttention', 'BorderAlign', -# 'border_align', 'gather_points', 'furthest_point_sample', -# 'furthest_point_sample_with_dist', 'PointsSampler', 'Correlation', -# 'boxes_iou3d', 'boxes_iou_bev', 'boxes_overlap_bev', 'nms_bev', -# 'nms_normal_bev', 'nms3d', 'nms3d_normal', 'Voxelization', 'voxelization', -# 'dynamic_scatter', 'DynamicScatter', 'RoIAwarePool3d', 'SparseConv2d', -# 'SparseConv3d', 'SparseConvTranspose2d', 'SparseConvTranspose3d', -# 'SparseInverseConv2d', 'SparseInverseConv3d', 'SubMConv2d', 'SubMConv3d', -# 'SparseModule', 'SparseSequential', 'SparseMaxPool2d', 'SparseMaxPool3d', -# 'SparseConvTensor', 'scatter_nd', 'points_in_boxes_part', -# 'points_in_boxes_cpu', 'points_in_boxes_all', 'points_in_polygons', -# 'min_area_polygons', 'active_rotated_filter', 'convex_iou', 'convex_giou', -# 'diff_iou_rotated_2d', 'diff_iou_rotated_3d', 'chamfer_distance' -# ] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/active_rotated_filter.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/active_rotated_filter.py deleted file mode 100644 index 46c2aa78..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/active_rotated_filter.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple - -import torch -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', - ['active_rotated_filter_forward', 'active_rotated_filter_backward']) - - -class ActiveRotatedFilterFunction(Function): - """Encoding the orientation information and generating orientation- - sensitive features. - - The details are described in the paper `Align Deep Features for Oriented - Object Detection _`. 
- """ - - @staticmethod - def forward(ctx, input: torch.Tensor, - indices: torch.Tensor) -> torch.Tensor: - """ - Args: - input (torch.Tensor): Input features with shape - [num_output_planes, num_input_planes, num_orientations, H, W]. - indices (torch.Tensor): Indices with shape - [num_orientations, H, W, num_rotations]. - - Returns: - torch.Tensor: Refined features with shape [num_output_planes * - num_rotations, num_input_planes * num_orientations, H, W]. - """ - ctx.save_for_backward(input, indices) - op, ip, o, h, w = input.size() - o, h, w, r = indices.size() - output = input.new_zeros((op * r, ip * o, h, w)) - ext_module.active_rotated_filter_forward(input, indices, output) - - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_out: torch.Tensor) -> Tuple[torch.Tensor, None]: - """ - Args: - grad_output (torch.Tensor): The gradiant of output features - with shape [num_output_planes * num_rotations, - num_input_planes * num_orientations, H, W]. - - Returns: - torch.Tensor: The gradiant of input features with shape - [num_output_planes, num_input_planes, num_orientations, H, W]. - """ - input, indices = ctx.saved_tensors - grad_in = torch.zeros_like(input) - ext_module.active_rotated_filter_backward(grad_out, indices, grad_in) - return grad_in, None - - -active_rotated_filter = ActiveRotatedFilterFunction.apply diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/assign_score_withk.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/assign_score_withk.py deleted file mode 100644 index deca0892..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/assign_score_withk.py +++ /dev/null @@ -1,131 +0,0 @@ -from typing import Tuple - -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['assign_score_withk_forward', 'assign_score_withk_backward']) - - -class AssignScoreWithK(Function): - r"""Perform weighted sum to generate output features according to scores. - Modified from `PAConv `_. - - This is a memory-efficient CUDA implementation of assign_scores operation, - which first transform all point features with weight bank, then assemble - neighbor features with ``knn_idx`` and perform weighted sum of ``scores``. - - See the `paper `_ appendix Sec. D for - more detailed descriptions. - - Note: - This implementation assumes using ``neighbor`` kernel input, which is - (point_features - center_features, point_features). - See https://github.com/CVMI-Lab/PAConv/blob/main/scene_seg/model/ - pointnet2/paconv.py#L128 for more details. - """ - - @staticmethod - def forward(ctx, - scores: torch.Tensor, - point_features: torch.Tensor, - center_features: torch.Tensor, - knn_idx: torch.Tensor, - aggregate: str = 'sum') -> torch.Tensor: - """ - Args: - scores (torch.Tensor): (B, npoint, K, M), predicted scores to - aggregate weight matrices in the weight bank. - ``npoint`` is the number of sampled centers. - ``K`` is the number of queried neighbors. - ``M`` is the number of weight matrices in the weight bank. - point_features (torch.Tensor): (B, N, M, out_dim) - Pre-computed point features to be aggregated. - center_features (torch.Tensor): (B, N, M, out_dim) - Pre-computed center features to be aggregated. - knn_idx (torch.Tensor): (B, npoint, K), index of sampled kNN. - We assume the first idx in each row is the idx of the center. - aggregate (str, optional): Aggregation method. - Can be 'sum', 'avg' or 'max'. Defaults: 'sum'. 
- - Returns: - torch.Tensor: (B, out_dim, npoint, K), the aggregated features. - """ - agg = {'sum': 0, 'avg': 1, 'max': 2} - - B, N, M, out_dim = point_features.size() - _, npoint, K, _ = scores.size() - - output = point_features.new_zeros((B, out_dim, npoint, K)) - ext_module.assign_score_withk_forward( - point_features.contiguous(), - center_features.contiguous(), - scores.contiguous(), - knn_idx.contiguous(), - output, - B=B, - N0=N, - N1=npoint, - M=M, - K=K, - O=out_dim, - aggregate=agg[aggregate]) - - ctx.save_for_backward(output, point_features, center_features, scores, - knn_idx) - ctx.agg = agg[aggregate] - - return output - - @staticmethod - def backward( - ctx, grad_out: torch.Tensor - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, None, None]: - """ - Args: - grad_out (torch.Tensor): (B, out_dim, npoint, K) - - Returns: - tuple[torch.Tensor]: A tuple contains five elements. The first one - is the gradient of ``scores`` whose shape is (B, npoint, K, M). The - second is the gradient of ``point_features`` whose shape is - (B, N, M, out_dim). The third is the gradient of - ``center_features`` with the shape of (B, N, M, out_dim). The last - two are ``None``. - """ - _, point_features, center_features, scores, knn_idx = ctx.saved_tensors - - agg = ctx.agg - - B, N, M, out_dim = point_features.size() - _, npoint, K, _ = scores.size() - - grad_point_features = point_features.new_zeros(point_features.shape) - grad_center_features = center_features.new_zeros(center_features.shape) - grad_scores = scores.new_zeros(scores.shape) - - ext_module.assign_score_withk_backward( - grad_out.contiguous(), - point_features.contiguous(), - center_features.contiguous(), - scores.contiguous(), - knn_idx.contiguous(), - grad_point_features, - grad_center_features, - grad_scores, - B=B, - N0=N, - N1=npoint, - M=M, - K=K, - O=out_dim, - aggregate=agg) - - return grad_scores, grad_point_features, \ - grad_center_features, None, None - - -assign_score_withk = AssignScoreWithK.apply diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/ball_query.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/ball_query.py deleted file mode 100644 index d24e0446..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/ball_query.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple - -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['ball_query_forward']) - - -class BallQuery(Function): - """Find nearby points in spherical space.""" - - @staticmethod - def forward(ctx, min_radius: float, max_radius: float, sample_num: int, - xyz: torch.Tensor, center_xyz: torch.Tensor) -> torch.Tensor: - """ - Args: - min_radius (float): minimum radius of the balls. - max_radius (float): maximum radius of the balls. - sample_num (int): maximum number of features in the balls. - xyz (torch.Tensor): (B, N, 3) xyz coordinates of the features. - center_xyz (torch.Tensor): (B, npoint, 3) centers of the ball - query. - - Returns: - torch.Tensor: (B, npoint, nsample) tensor with the indices of the - features that form the query balls. 
- """ - assert center_xyz.is_contiguous() - assert xyz.is_contiguous() - assert min_radius < max_radius - - B, N, _ = xyz.size() - npoint = center_xyz.size(1) - idx = xyz.new_zeros(B, npoint, sample_num, dtype=torch.int) - - ext_module.ball_query_forward( - center_xyz, - xyz, - idx, - b=B, - n=N, - m=npoint, - min_radius=min_radius, - max_radius=max_radius, - nsample=sample_num) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(idx) - return idx - - @staticmethod - def backward(ctx, a=None) -> Tuple[None, None, None, None]: - return None, None, None, None - - -ball_query = BallQuery.apply diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/bbox.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/bbox.py deleted file mode 100644 index bf6bd43b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/bbox.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['bbox_overlaps']) - - -def _bbox_overlaps_cpu(bboxes1: torch.Tensor, - bboxes2: torch.Tensor, - mode: str = 'iou', - aligned: bool = False, - offset: int = 0) -> torch.Tensor: - assert mode in ['iou', 'iof'] - - if aligned: - lt = torch.max(bboxes1[:, :2], bboxes2[:, :2]) # [rows, 2] - rb = torch.min(bboxes1[:, 2:], bboxes2[:, 2:]) # [rows, 2] - - wh = (rb - lt + offset).clamp(min=0) # [rows, 2] - overlap = wh[:, 0] * wh[:, 1] - area1 = (bboxes1[:, 2] - bboxes1[:, 0] + offset) * ( - bboxes1[:, 3] - bboxes1[:, 1] + offset) - - if mode == 'iou': - area2 = (bboxes2[:, 2] - bboxes2[:, 0] + offset) * ( - bboxes2[:, 3] - bboxes2[:, 1] + offset) - ious = overlap / (area1 + area2 - overlap) - else: - ious = overlap / area1 - else: - lt = torch.max(bboxes1[:, None, :2], bboxes2[:, :2]) # [rows, cols, 2] - rb = torch.min(bboxes1[:, None, 2:], bboxes2[:, 2:]) # [rows, cols, 2] - - wh = (rb - lt + offset).clamp(min=0) # [rows, cols, 2] - overlap = wh[:, :, 0] * wh[:, :, 1] - area1 = (bboxes1[:, 2] - bboxes1[:, 0] + offset) * ( - bboxes1[:, 3] - bboxes1[:, 1] + offset) - - if mode == 'iou': - area2 = (bboxes2[:, 2] - bboxes2[:, 0] + offset) * ( - bboxes2[:, 3] - bboxes2[:, 1] + offset) - ious = overlap / (area1[:, None] + area2 - overlap) - else: - ious = overlap / (area1[:, None]) - - return ious - - -def bbox_overlaps(bboxes1: torch.Tensor, - bboxes2: torch.Tensor, - mode: str = 'iou', - aligned: bool = False, - offset: int = 0) -> torch.Tensor: - """Calculate overlap between two set of bboxes. - - If ``aligned`` is ``False``, then calculate the ious between each bbox - of bboxes1 and bboxes2, otherwise the ious between each aligned pair of - bboxes1 and bboxes2. - - Args: - bboxes1 (torch.Tensor): shape (m, 4) in format or - empty. - bboxes2 (torch.Tensor): shape (n, 4) in format or - empty. If aligned is ``True``, then m and n must be equal. - mode (str): "iou" (intersection over union) or iof (intersection over - foreground). - - Returns: - torch.Tensor: Return the ious betweens boxes. If ``aligned`` is - ``False``, the shape of ious is (m, n) else (m, 1). 
- - Example: - >>> bboxes1 = torch.FloatTensor([ - >>> [0, 0, 10, 10], - >>> [10, 10, 20, 20], - >>> [32, 32, 38, 42], - >>> ]) - >>> bboxes2 = torch.FloatTensor([ - >>> [0, 0, 10, 20], - >>> [0, 10, 10, 19], - >>> [10, 10, 20, 20], - >>> ]) - >>> bbox_overlaps(bboxes1, bboxes2) - tensor([[0.5000, 0.0000, 0.0000], - [0.0000, 0.0000, 1.0000], - [0.0000, 0.0000, 0.0000]]) - - Example: - >>> empty = torch.FloatTensor([]) - >>> nonempty = torch.FloatTensor([ - >>> [0, 0, 10, 9], - >>> ]) - >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1) - >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0) - >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0) - """ - - mode_dict = {'iou': 0, 'iof': 1} - assert mode in mode_dict.keys() - mode_flag = mode_dict[mode] - # Either the boxes are empty or the length of boxes' last dimension is 4 - assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0) - assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0) - assert offset == 1 or offset == 0 - - rows = bboxes1.size(0) - cols = bboxes2.size(0) - if aligned: - assert rows == cols - - if rows * cols == 0: - return bboxes1.new(rows, 1) if aligned else bboxes1.new(rows, cols) - - if bboxes1.device.type == 'cpu': - return _bbox_overlaps_cpu( - bboxes1, bboxes2, mode=mode, aligned=aligned, offset=offset) - else: - if aligned: - ious = bboxes1.new_zeros(rows) - else: - ious = bboxes1.new_zeros((rows, cols)) - ext_module.bbox_overlaps( - bboxes1, - bboxes2, - ious, - mode=mode_flag, - aligned=aligned, - offset=offset) - return ious diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/border_align.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/border_align.py deleted file mode 100644 index c09501b9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/border_align.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-# modified from -# https://github.com/Megvii-BaseDetection/cvpods/blob/master/cvpods/layers/border_align.py - -from typing import Tuple - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['border_align_forward', 'border_align_backward']) - - -class BorderAlignFunction(Function): - - @staticmethod - def symbolic(g, input, boxes, pool_size): - return g.op( - 'mmcv::MMCVBorderAlign', input, boxes, pool_size_i=pool_size) - - @staticmethod - def forward(ctx, input: torch.Tensor, boxes: torch.Tensor, - pool_size: int) -> torch.Tensor: - ctx.pool_size = pool_size - ctx.input_shape = input.size() - - assert boxes.ndim == 3, 'boxes must be with shape [B, H*W, 4]' - assert boxes.size(2) == 4, \ - 'the last dimension of boxes must be (x1, y1, x2, y2)' - assert input.size(1) % 4 == 0, \ - 'the channel for input feature must be divisible by factor 4' - - # [B, C//4, H*W, 4] - output_shape = (input.size(0), input.size(1) // 4, boxes.size(1), 4) - output = input.new_zeros(output_shape) - # `argmax_idx` only used for backward - argmax_idx = input.new_zeros(output_shape).to(torch.int) - - ext_module.border_align_forward( - input, boxes, output, argmax_idx, pool_size=ctx.pool_size) - - ctx.save_for_backward(boxes, argmax_idx) - return output - - @staticmethod - @once_differentiable - def backward(ctx, - grad_output: torch.Tensor) -> Tuple[torch.Tensor, None, None]: - boxes, argmax_idx = ctx.saved_tensors - grad_input = grad_output.new_zeros(ctx.input_shape) - # complex head architecture may cause grad_output uncontiguous - grad_output = grad_output.contiguous() - ext_module.border_align_backward( - grad_output, - boxes, - argmax_idx, - grad_input, - pool_size=ctx.pool_size) - return grad_input, None, None - - -border_align = BorderAlignFunction.apply - - -class BorderAlign(nn.Module): - r"""Border align pooling layer. - - Applies border_align over the input feature based on predicted bboxes. - The details were described in the paper - `BorderDet: Border Feature for Dense Object Detection - `_. - - For each border line (e.g. top, left, bottom or right) of each box, - border_align does the following: - - 1. uniformly samples ``pool_size`` +1 positions on this line, involving - the start and end points. - 2. the corresponding features on these points are computed by bilinear - interpolation. - 3. max pooling over all the ``pool_size`` +1 positions are used for - computing pooled feature. - - Args: - pool_size (int): number of positions sampled over the boxes' borders - (e.g. top, bottom, left, right). - """ - - def __init__(self, pool_size: int): - super().__init__() - self.pool_size = pool_size - - def forward(self, input: torch.Tensor, - boxes: torch.Tensor) -> torch.Tensor: - """ - Args: - input: Features with shape [N,4C,H,W]. Channels ranged in [0,C), - [C,2C), [2C,3C), [3C,4C) represent the top, left, bottom, - right features respectively. - boxes: Boxes with shape [N,H*W,4]. Coordinate format (x1,y1,x2,y2). - - Returns: - torch.Tensor: Pooled features with shape [N,C,H*W,4]. The order is - (top,left,bottom,right) for the last dimension. 
- """ - return border_align(input, boxes, self.pool_size) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(pool_size={self.pool_size})' - return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/box_iou_rotated.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/box_iou_rotated.py deleted file mode 100644 index 2443af27..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/box_iou_rotated.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['box_iou_rotated']) - - -def box_iou_rotated(bboxes1: torch.Tensor, - bboxes2: torch.Tensor, - mode: str = 'iou', - aligned: bool = False, - clockwise: bool = True) -> torch.Tensor: - """Return intersection-over-union (Jaccard index) of boxes. - - Both sets of boxes are expected to be in - (x_center, y_center, width, height, angle) format. - - If ``aligned`` is ``False``, then calculate the ious between each bbox - of bboxes1 and bboxes2, otherwise the ious between each aligned pair of - bboxes1 and bboxes2. - - .. note:: - The operator assumes: - - 1) The positive direction along x axis is left -> right. - - 2) The positive direction along y axis is top -> down. - - 3) The w border is in parallel with x axis when angle = 0. - - However, there are 2 opposite definitions of the positive angular - direction, clockwise (CW) and counter-clockwise (CCW). MMCV supports - both definitions and uses CW by default. - - Please set ``clockwise=False`` if you are using the CCW definition. - - The coordinate system when ``clockwise`` is ``True`` (default) - - .. code-block:: none - - 0-------------------> x (0 rad) - | A-------------B - | | | - | | box h - | | angle=0 | - | D------w------C - v - y (pi/2 rad) - - In such coordination system the rotation matrix is - - .. math:: - \\begin{pmatrix} - \\cos\\alpha & -\\sin\\alpha \\\\ - \\sin\\alpha & \\cos\\alpha - \\end{pmatrix} - - The coordinates of the corner point A can be calculated as: - - .. math:: - P_A= - \\begin{pmatrix} x_A \\\\ y_A\\end{pmatrix} - = - \\begin{pmatrix} x_{center} \\\\ y_{center}\\end{pmatrix} + - \\begin{pmatrix}\\cos\\alpha & -\\sin\\alpha \\\\ - \\sin\\alpha & \\cos\\alpha\\end{pmatrix} - \\begin{pmatrix} -0.5w \\\\ -0.5h\\end{pmatrix} \\\\ - = - \\begin{pmatrix} x_{center}-0.5w\\cos\\alpha+0.5h\\sin\\alpha - \\\\ - y_{center}-0.5w\\sin\\alpha-0.5h\\cos\\alpha\\end{pmatrix} - - - The coordinate system when ``clockwise`` is ``False`` - - .. code-block:: none - - 0-------------------> x (0 rad) - | A-------------B - | | | - | | box h - | | angle=0 | - | D------w------C - v - y (-pi/2 rad) - - In such coordination system the rotation matrix is - - .. math:: - \\begin{pmatrix} - \\cos\\alpha & \\sin\\alpha \\\\ - -\\sin\\alpha & \\cos\\alpha - \\end{pmatrix} - - The coordinates of the corner point A can be calculated as: - - .. math:: - P_A= - \\begin{pmatrix} x_A \\\\ y_A\\end{pmatrix} - = - \\begin{pmatrix} x_{center} \\\\ y_{center}\\end{pmatrix} + - \\begin{pmatrix}\\cos\\alpha & \\sin\\alpha \\\\ - -\\sin\\alpha & \\cos\\alpha\\end{pmatrix} - \\begin{pmatrix} -0.5w \\\\ -0.5h\\end{pmatrix} \\\\ - = - \\begin{pmatrix} x_{center}-0.5w\\cos\\alpha-0.5h\\sin\\alpha - \\\\ - y_{center}+0.5w\\sin\\alpha-0.5h\\cos\\alpha\\end{pmatrix} - - Args: - boxes1 (torch.Tensor): rotated bboxes 1. It has shape (N, 5), - indicating (x, y, w, h, theta) for each row. Note that theta is in - radian. 
- boxes2 (torch.Tensor): rotated bboxes 2. It has shape (M, 5), - indicating (x, y, w, h, theta) for each row. Note that theta is in - radian. - mode (str): "iou" (intersection over union) or iof (intersection over - foreground). - clockwise (bool): flag indicating whether the positive angular - orientation is clockwise. default True. - `New in version 1.4.3.` - - Returns: - torch.Tensor: Return the ious betweens boxes. If ``aligned`` is - ``False``, the shape of ious is (N, M) else (N,). - """ - assert mode in ['iou', 'iof'] - mode_dict = {'iou': 0, 'iof': 1} - mode_flag = mode_dict[mode] - rows = bboxes1.size(0) - cols = bboxes2.size(0) - if aligned: - ious = bboxes1.new_zeros(rows) - else: - ious = bboxes1.new_zeros(rows * cols) - if not clockwise: - flip_mat = bboxes1.new_ones(bboxes1.shape[-1]) - flip_mat[-1] = -1 - bboxes1 = bboxes1 * flip_mat - bboxes2 = bboxes2 * flip_mat - bboxes1 = bboxes1.contiguous() - bboxes2 = bboxes2.contiguous() - ext_module.box_iou_rotated( - bboxes1, bboxes2, ious, mode_flag=mode_flag, aligned=aligned) - if not aligned: - ious = ious.view(rows, cols) - return ious diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/carafe.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/carafe.py deleted file mode 100644 index 18230c08..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/carafe.py +++ /dev/null @@ -1,301 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import Tensor -from torch.autograd import Function -from torch.nn.modules.module import Module - -from ..cnn import UPSAMPLE_LAYERS, normal_init, xavier_init -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'carafe_naive_forward', 'carafe_naive_backward', 'carafe_forward', - 'carafe_backward' -]) - - -class CARAFENaiveFunction(Function): - - @staticmethod - def symbolic(g, features: Tensor, masks: Tensor, kernel_size: int, - group_size: int, scale_factor: int) -> Tensor: - return g.op( - 'mmcv::MMCVCARAFENaive', - features, - masks, - kernel_size_i=kernel_size, - group_size_i=group_size, - scale_factor_f=scale_factor) - - @staticmethod - def forward(ctx, features: Tensor, masks: Tensor, kernel_size: int, - group_size: int, scale_factor: int) -> Tensor: - assert scale_factor >= 1 - assert masks.size(1) == kernel_size * kernel_size * group_size - assert masks.size(-1) == features.size(-1) * scale_factor - assert masks.size(-2) == features.size(-2) * scale_factor - assert features.size(1) % group_size == 0 - assert (kernel_size - 1) % 2 == 0 and kernel_size >= 1 - ctx.kernel_size = kernel_size - ctx.group_size = group_size - ctx.scale_factor = scale_factor - ctx.feature_size = features.size() - ctx.mask_size = masks.size() - - n, c, h, w = features.size() - output = features.new_zeros((n, c, h * scale_factor, w * scale_factor)) - ext_module.carafe_naive_forward( - features, - masks, - output, - kernel_size=kernel_size, - group_size=group_size, - scale_factor=scale_factor) - - if features.requires_grad or masks.requires_grad or \ - torch.__version__ == 'parrots': - ctx.save_for_backward(features, masks) - return output - - @staticmethod - def backward( - ctx, - grad_output: Tensor) -> Tuple[Tensor, Tensor, None, None, None]: - assert grad_output.is_cuda - - features, masks = ctx.saved_tensors - kernel_size = ctx.kernel_size - group_size = ctx.group_size - scale_factor = ctx.scale_factor - - grad_input = 
torch.zeros_like(features) - grad_masks = torch.zeros_like(masks) - ext_module.carafe_naive_backward( - grad_output.contiguous(), - features, - masks, - grad_input, - grad_masks, - kernel_size=kernel_size, - group_size=group_size, - scale_factor=scale_factor) - - return grad_input, grad_masks, None, None, None - - -carafe_naive = CARAFENaiveFunction.apply - - -class CARAFENaive(Module): - - def __init__(self, kernel_size: int, group_size: int, scale_factor: int): - super().__init__() - - assert isinstance(kernel_size, int) and isinstance( - group_size, int) and isinstance(scale_factor, int) - self.kernel_size = kernel_size - self.group_size = group_size - self.scale_factor = scale_factor - - def forward(self, features: Tensor, masks: Tensor) -> Tensor: - return carafe_naive(features, masks, self.kernel_size, self.group_size, - self.scale_factor) - - -class CARAFEFunction(Function): - - @staticmethod - def symbolic(g, features: Tensor, masks: Tensor, kernel_size: int, - group_size: int, scale_factor: int) -> Tensor: - return g.op( - 'mmcv::MMCVCARAFE', - features, - masks, - kernel_size_i=kernel_size, - group_size_i=group_size, - scale_factor_f=scale_factor) - - @staticmethod - def forward(ctx, features: Tensor, masks: Tensor, kernel_size: int, - group_size: int, scale_factor: int) -> Tensor: - assert scale_factor >= 1 - assert masks.size(1) == kernel_size * kernel_size * group_size - assert masks.size(-1) == features.size(-1) * scale_factor - assert masks.size(-2) == features.size(-2) * scale_factor - assert features.size(1) % group_size == 0 - assert (kernel_size - 1) % 2 == 0 and kernel_size >= 1 - ctx.kernel_size = kernel_size - ctx.group_size = group_size - ctx.scale_factor = scale_factor - ctx.feature_size = features.size() - ctx.mask_size = masks.size() - - n, c, h, w = features.size() - output = features.new_zeros((n, c, h * scale_factor, w * scale_factor)) - routput = features.new_zeros(output.size(), requires_grad=False) - rfeatures = features.new_zeros(features.size(), requires_grad=False) - rmasks = masks.new_zeros(masks.size(), requires_grad=False) - ext_module.carafe_forward( - features, - masks, - rfeatures, - routput, - rmasks, - output, - kernel_size=kernel_size, - group_size=group_size, - scale_factor=scale_factor) - - if features.requires_grad or masks.requires_grad or \ - torch.__version__ == 'parrots': - ctx.save_for_backward(features, masks, rfeatures) - return output - - @staticmethod - def backward( - ctx, - grad_output: Tensor) -> Tuple[Tensor, Tensor, None, None, None]: - assert grad_output.is_cuda - - features, masks, rfeatures = ctx.saved_tensors - kernel_size = ctx.kernel_size - group_size = ctx.group_size - scale_factor = ctx.scale_factor - - rgrad_output = torch.zeros_like(grad_output, requires_grad=False) - rgrad_input_hs = torch.zeros_like(grad_output, requires_grad=False) - rgrad_input = torch.zeros_like(features, requires_grad=False) - rgrad_masks = torch.zeros_like(masks, requires_grad=False) - grad_input = torch.zeros_like(features, requires_grad=False) - grad_masks = torch.zeros_like(masks, requires_grad=False) - ext_module.carafe_backward( - grad_output.contiguous(), - rfeatures, - masks, - rgrad_output, - rgrad_input_hs, - rgrad_input, - rgrad_masks, - grad_input, - grad_masks, - kernel_size=kernel_size, - group_size=group_size, - scale_factor=scale_factor) - return grad_input, grad_masks, None, None, None - - -carafe = CARAFEFunction.apply - - -class CARAFE(Module): - """ CARAFE: Content-Aware ReAssembly of FEatures - - Please refer to `CARAFE: 
Content-Aware ReAssembly of FEatures - `_ for more details. - - Args: - kernel_size (int): reassemble kernel size - group_size (int): reassemble group size - scale_factor (int): upsample ratio - - Returns: - upsampled feature map - """ - - def __init__(self, kernel_size: int, group_size: int, scale_factor: int): - super().__init__() - - assert isinstance(kernel_size, int) and isinstance( - group_size, int) and isinstance(scale_factor, int) - self.kernel_size = kernel_size - self.group_size = group_size - self.scale_factor = scale_factor - - def forward(self, features: Tensor, masks: Tensor) -> Tensor: - return carafe(features, masks, self.kernel_size, self.group_size, - self.scale_factor) - - -@UPSAMPLE_LAYERS.register_module(name='carafe') -class CARAFEPack(nn.Module): - """A unified package of CARAFE upsampler that contains: 1) channel - compressor 2) content encoder 3) CARAFE op. - - Official implementation of ICCV 2019 paper - `CARAFE: Content-Aware ReAssembly of FEatures - `_. - - Args: - channels (int): input feature channels - scale_factor (int): upsample ratio - up_kernel (int): kernel size of CARAFE op - up_group (int): group size of CARAFE op - encoder_kernel (int): kernel size of content encoder - encoder_dilation (int): dilation of content encoder - compressed_channels (int): output channels of channels compressor - - Returns: - upsampled feature map - """ - - def __init__(self, - channels: int, - scale_factor: int, - up_kernel: int = 5, - up_group: int = 1, - encoder_kernel: int = 3, - encoder_dilation: int = 1, - compressed_channels: int = 64): - super().__init__() - self.channels = channels - self.scale_factor = scale_factor - self.up_kernel = up_kernel - self.up_group = up_group - self.encoder_kernel = encoder_kernel - self.encoder_dilation = encoder_dilation - self.compressed_channels = compressed_channels - self.channel_compressor = nn.Conv2d(channels, self.compressed_channels, - 1) - self.content_encoder = nn.Conv2d( - self.compressed_channels, - self.up_kernel * self.up_kernel * self.up_group * - self.scale_factor * self.scale_factor, - self.encoder_kernel, - padding=int((self.encoder_kernel - 1) * self.encoder_dilation / 2), - dilation=self.encoder_dilation, - groups=1) - self.init_weights() - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - normal_init(self.content_encoder, std=0.001) - - def kernel_normalizer(self, mask: Tensor) -> Tensor: - mask = F.pixel_shuffle(mask, self.scale_factor) - n, mask_c, h, w = mask.size() - # use float division explicitly, - # to void inconsistency while exporting to onnx - mask_channel = int(mask_c / float(self.up_kernel**2)) - mask = mask.view(n, mask_channel, -1, h, w) - - mask = F.softmax(mask, dim=2, dtype=mask.dtype) - mask = mask.view(n, mask_c, h, w).contiguous() - - return mask - - def feature_reassemble(self, x: Tensor, mask: Tensor) -> Tensor: - x = carafe(x, mask, self.up_kernel, self.up_group, self.scale_factor) - return x - - def forward(self, x: Tensor) -> Tensor: - compressed_x = self.channel_compressor(x) - mask = self.content_encoder(compressed_x) - mask = self.kernel_normalizer(mask) - - x = self.feature_reassemble(x, mask) - return x diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/cc_attention.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/cc_attention.py deleted file mode 100644 index 9e5d3325..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/cc_attention.py +++ /dev/null @@ 
-1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from mmcv.cnn import PLUGIN_LAYERS, Scale - - -def NEG_INF_DIAG(n: int, device: torch.device) -> torch.Tensor: - """Returns a diagonal matrix of size [n, n]. - - The diagonal are all "-inf". This is for avoiding calculating the - overlapped element in the Criss-Cross twice. - """ - return torch.diag(torch.tensor(float('-inf')).to(device).repeat(n), 0) - - -@PLUGIN_LAYERS.register_module() -class CrissCrossAttention(nn.Module): - """Criss-Cross Attention Module. - - .. note:: - Before v1.3.13, we use a CUDA op. Since v1.3.13, we switch - to a pure PyTorch and equivalent implementation. For more - details, please refer to https://github.com/open-mmlab/mmcv/pull/1201. - - Speed comparison for one forward pass - - - Input size: [2,512,97,97] - - Device: 1 NVIDIA GeForce RTX 2080 Ti - - +-----------------------+---------------+------------+---------------+ - | |PyTorch version|CUDA version|Relative speed | - +=======================+===============+============+===============+ - |with torch.no_grad() |0.00554402 s |0.0299619 s |5.4x | - +-----------------------+---------------+------------+---------------+ - |no with torch.no_grad()|0.00562803 s |0.0301349 s |5.4x | - +-----------------------+---------------+------------+---------------+ - - Args: - in_channels (int): Channels of the input feature map. - """ - - def __init__(self, in_channels: int) -> None: - super().__init__() - self.query_conv = nn.Conv2d(in_channels, in_channels // 8, 1) - self.key_conv = nn.Conv2d(in_channels, in_channels // 8, 1) - self.value_conv = nn.Conv2d(in_channels, in_channels, 1) - self.gamma = Scale(0.) - self.in_channels = in_channels - - def forward(self, x: torch.Tensor) -> torch.Tensor: - """forward function of Criss-Cross Attention. - - Args: - x (torch.Tensor): Input feature with the shape of - (batch_size, in_channels, height, width). - - Returns: - torch.Tensor: Output of the layer, with the shape of - (batch_size, in_channels, height, width) - """ - B, C, H, W = x.size() - query = self.query_conv(x) - key = self.key_conv(x) - value = self.value_conv(x) - energy_H = torch.einsum('bchw,bciw->bwhi', query, key) + NEG_INF_DIAG( - H, query.device) - energy_H = energy_H.transpose(1, 2) - energy_W = torch.einsum('bchw,bchj->bhwj', query, key) - attn = F.softmax( - torch.cat([energy_H, energy_W], dim=-1), dim=-1) # [B,H,W,(H+W)] - out = torch.einsum('bciw,bhwi->bchw', value, attn[..., :H]) - out += torch.einsum('bchj,bhwj->bchw', value, attn[..., H:]) - - out = self.gamma(out) + x - out = out.contiguous() - - return out - - def __repr__(self) -> str: - s = self.__class__.__name__ - s += f'(in_channels={self.in_channels})' - return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/chamfer_distance.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/chamfer_distance.py deleted file mode 100644 index d68eafb4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/chamfer_distance.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from typing import Sequence, Tuple - -import torch -from torch import Tensor -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['chamfer_distance_forward', 'chamfer_distance_backward']) - - -class ChamferDistanceFunction(Function): - """This is an implementation of the 2D Chamfer Distance. - - It has been used in the paper `Oriented RepPoints for Aerial Object - Detection (CVPR 2022) _`. - """ - - @staticmethod - def forward(ctx, xyz1: Tensor, xyz2: Tensor) -> Sequence[Tensor]: - """ - Args: - xyz1 (Tensor): Point set with shape (B, N, 2). - xyz2 (Tensor): Point set with shape (B, N, 2). - - Returns: - Sequence[Tensor]: - - - dist1 (Tensor): Chamfer distance (xyz1 to xyz2) with - shape (B, N). - - dist2 (Tensor): Chamfer distance (xyz2 to xyz1) with - shape (B, N). - - idx1 (Tensor): Index of chamfer distance (xyz1 to xyz2) - with shape (B, N), which be used in compute gradient. - - idx2 (Tensor): Index of chamfer distance (xyz2 to xyz2) - with shape (B, N), which be used in compute gradient. - """ - batch_size, n, _ = xyz1.size() - _, m, _ = xyz2.size() - device = xyz1.device - xyz1 = xyz1.contiguous() - xyz2 = xyz2.contiguous() - - dist1 = torch.zeros(batch_size, n).to(device) - dist2 = torch.zeros(batch_size, m).to(device) - idx1 = torch.zeros(batch_size, n).type(torch.IntTensor).to(device) - idx2 = torch.zeros(batch_size, m).type(torch.IntTensor).to(device) - - ext_module.chamfer_distance_forward(xyz1, xyz2, dist1, dist2, idx1, - idx2) - ctx.save_for_backward(xyz1, xyz2, idx1, idx2) - return dist1, dist2, idx1, idx2 - - @staticmethod - @once_differentiable - def backward(ctx, grad_dist1: Tensor, grad_dist2: Tensor, - grad_idx1: Tensor, - grad_idx2: Tensor) -> Tuple[Tensor, Tensor]: - """ - - Args: - grad_dist1 (Tensor): Gradient of chamfer distance - (xyz1 to xyz2) with shape (B, N). - grad_dist2 (Tensor): Gradient of chamfer distance - (xyz2 to xyz1) with shape (B, N). - grad_idx1 (Tensor): Index of chamfer distance (xyz1 to xyz2) - with shape (B, N), which be used in compute gradient. - grad_idx2 (Tensor): Index of chamfer distance (xyz2 to xyz2) - with shape (B, N), which be used in compute gradient. - - Returns: - Tuple[Tensor, Tensor]: - - - grad_xyz1 (Tensor): Gradient of the point set with shape \ - (B, N, 2). - - grad_xyz2 (Tensor):Gradient of the point set with shape \ - (B, N, 2). - """ - xyz1, xyz2, idx1, idx2 = ctx.saved_tensors - device = grad_dist1.device - grad_dist1 = grad_dist1.contiguous() - grad_dist2 = grad_dist2.contiguous() - grad_xyz1 = torch.zeros(xyz1.size()).to(device) - grad_xyz2 = torch.zeros(xyz2.size()).to(device) - - ext_module.chamfer_distance_backward(xyz1, xyz2, grad_xyz1, grad_xyz2, - grad_dist1, grad_dist2, idx1, - idx2) - return grad_xyz1, grad_xyz2 - - -chamfer_distance = ChamferDistanceFunction.apply diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/contour_expand.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/contour_expand.py deleted file mode 100644 index 7184609a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/contour_expand.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from typing import Union - -import numpy as np -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['contour_expand']) - - -def contour_expand(kernel_mask: Union[np.array, torch.Tensor], - internal_kernel_label: Union[np.array, torch.Tensor], - min_kernel_area: int, kernel_num: int) -> list: - """Expand kernel contours so that foreground pixels are assigned into - instances. - - Args: - kernel_mask (np.array or torch.Tensor): The instance kernel mask with - size hxw. - internal_kernel_label (np.array or torch.Tensor): The instance internal - kernel label with size hxw. - min_kernel_area (int): The minimum kernel area. - kernel_num (int): The instance kernel number. - - Returns: - list: The instance index map with size hxw. - """ - assert isinstance(kernel_mask, (torch.Tensor, np.ndarray)) - assert isinstance(internal_kernel_label, (torch.Tensor, np.ndarray)) - assert isinstance(min_kernel_area, int) - assert isinstance(kernel_num, int) - - if isinstance(kernel_mask, np.ndarray): - kernel_mask = torch.from_numpy(kernel_mask) - if isinstance(internal_kernel_label, np.ndarray): - internal_kernel_label = torch.from_numpy(internal_kernel_label) - - if torch.__version__ == 'parrots': - if kernel_mask.shape[0] == 0 or internal_kernel_label.shape[0] == 0: - label = [] - else: - label = ext_module.contour_expand( - kernel_mask, - internal_kernel_label, - min_kernel_area=min_kernel_area, - kernel_num=kernel_num) - label = label.tolist() # type: ignore - else: - label = ext_module.contour_expand(kernel_mask, internal_kernel_label, - min_kernel_area, kernel_num) - return label diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/convex_iou.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/convex_iou.py deleted file mode 100644 index 50050363..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/convex_iou.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple - -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['convex_iou', 'convex_giou']) - - -def convex_giou(pointsets: torch.Tensor, - polygons: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - """Return generalized intersection-over-union (Jaccard index) between point - sets and polygons. - - Args: - pointsets (torch.Tensor): It has shape (N, 18), - indicating (x1, y1, x2, y2, ..., x9, y9) for each row. - polygons (torch.Tensor): It has shape (N, 8), - indicating (x1, y1, x2, y2, x3, y3, x4, y4) for each row. - - Returns: - tuple[torch.Tensor, torch.Tensor]: The first element is the gious - between point sets and polygons with the shape (N,). The second - element is the gradient of point sets with the shape (N, 18). - """ - output = pointsets.new_zeros((pointsets.size(0), 19)) - ext_module.convex_giou(pointsets, polygons, output) - convex_giou = output[:, -1] - points_grad = output[:, 0:-1] - return convex_giou, points_grad - - -def convex_iou(pointsets: torch.Tensor, - polygons: torch.Tensor) -> torch.Tensor: - """Return intersection-over-union (Jaccard index) between point sets and - polygons. - - Args: - pointsets (torch.Tensor): It has shape (N, 18), - indicating (x1, y1, x2, y2, ..., x9, y9) for each row. - polygons (torch.Tensor): It has shape (K, 8), - indicating (x1, y1, x2, y2, x3, y3, x4, y4) for each row. - - Returns: - torch.Tensor: Return the ious between point sets and polygons with the - shape (N, K). 
- """ - N, K = pointsets.size(0), polygons.size(0) - ious = pointsets.new_zeros((N, K)) - ext_module.convex_iou(pointsets, polygons, ious) - return ious diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/corner_pool.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/corner_pool.py deleted file mode 100644 index 17ce2495..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/corner_pool.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import Tensor, nn -from torch.autograd import Function - -_mode_dict = {'top': 0, 'bottom': 1, 'left': 2, 'right': 3} - - -def _corner_pool(x: Tensor, dim: int, flip: bool) -> Tensor: - size = x.size(dim) - output = x.clone() - - ind = 1 - while ind < size: - if flip: - cur_start = 0 - cur_len = size - ind - next_start = ind - next_len = size - ind - else: - cur_start = ind - cur_len = size - ind - next_start = 0 - next_len = size - ind - - # max_temp should be cloned for backward computation - max_temp = output.narrow(dim, cur_start, cur_len).clone() - cur_temp = output.narrow(dim, cur_start, cur_len) - next_temp = output.narrow(dim, next_start, next_len) - - cur_temp[...] = torch.where(max_temp > next_temp, max_temp, next_temp) - - ind = ind << 1 - - return output - - -class TopPoolFunction(Function): - - @staticmethod - def symbolic(g, input: Tensor) -> Tensor: - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['top'])) - return output - - @staticmethod - def forward(ctx, input: Tensor) -> Tensor: - return _corner_pool(input, 2, True) - - -class BottomPoolFunction(Function): - - @staticmethod - def symbolic(g, input: Tensor) -> Tensor: - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['bottom'])) - return output - - @staticmethod - def forward(ctx, input: Tensor) -> Tensor: - return _corner_pool(input, 2, False) - - -class LeftPoolFunction(Function): - - @staticmethod - def symbolic(g, input: Tensor) -> Tensor: - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['left'])) - return output - - @staticmethod - def forward(ctx, input: Tensor) -> Tensor: - return _corner_pool(input, 3, True) - - -class RightPoolFunction(Function): - - @staticmethod - def symbolic(g, input: Tensor) -> Tensor: - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['right'])) - return output - - @staticmethod - def forward(ctx, input: Tensor) -> Tensor: - return _corner_pool(input, 3, False) - - -class CornerPool(nn.Module): - """Corner Pooling. - - Corner Pooling is a new type of pooling layer that helps a - convolutional network better localize corners of bounding boxes. - - Please refer to `CornerNet: Detecting Objects as Paired Keypoints - `_ for more details. - - Code is modified from https://github.com/princeton-vl/CornerNet-Lite. - - Args: - mode (str): Pooling orientation for the pooling layer - - - 'bottom': Bottom Pooling - - 'left': Left Pooling - - 'right': Right Pooling - - 'top': Top Pooling - - Returns: - Feature map after pooling. 
- """ - - pool_functions = { - 'bottom': BottomPoolFunction, - 'left': LeftPoolFunction, - 'right': RightPoolFunction, - 'top': TopPoolFunction, - } - - cummax_dim_flip = { - 'bottom': (2, False), - 'left': (3, True), - 'right': (3, False), - 'top': (2, True), - } - - def __init__(self, mode: str): - super().__init__() - assert mode in self.pool_functions - self.mode = mode - self.corner_pool: Function = self.pool_functions[mode] - - def forward(self, x: Tensor) -> Tensor: - if torch.__version__ != 'parrots' and torch.__version__ >= '1.5.0': - if torch.onnx.is_in_onnx_export(): - assert torch.__version__ >= '1.7.0', \ - 'When `cummax` serves as an intermediate component whose '\ - 'outputs is used as inputs for another modules, it\'s '\ - 'expected that pytorch version must be >= 1.7.0, '\ - 'otherwise Error appears like: `RuntimeError: tuple '\ - 'appears in op that does not forward tuples, unsupported '\ - 'kind: prim::PythonOp`.' - - dim, flip = self.cummax_dim_flip[self.mode] - if flip: - x = x.flip(dim) - pool_tensor, _ = torch.cummax(x, dim=dim) - if flip: - pool_tensor = pool_tensor.flip(dim) - return pool_tensor - else: - if torch.onnx.is_in_onnx_export(): - return self.corner_pool.apply(x) - else: - dim, flip = self.cummax_dim_flip[self.mode] - return _corner_pool(x, dim, flip) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/correlation.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/correlation.py deleted file mode 100644 index 319b7646..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/correlation.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple - -import torch -from torch import Tensor, nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['correlation_forward', 'correlation_backward']) - - -class CorrelationFunction(Function): - - @staticmethod - def forward(ctx, - input1: Tensor, - input2: Tensor, - kernel_size: int = 1, - max_displacement: int = 1, - stride: int = 1, - padding: int = 1, - dilation: int = 1, - dilation_patch: int = 1) -> Tensor: - - ctx.save_for_backward(input1, input2) - - kH, kW = ctx.kernel_size = _pair(kernel_size) - patch_size = max_displacement * 2 + 1 - ctx.patch_size = patch_size - dH, dW = ctx.stride = _pair(stride) - padH, padW = ctx.padding = _pair(padding) - dilationH, dilationW = ctx.dilation = _pair(dilation) - dilation_patchH, dilation_patchW = ctx.dilation_patch = _pair( - dilation_patch) - - output_size = CorrelationFunction._output_size(ctx, input1) - - output = input1.new_zeros(output_size) - - ext_module.correlation_forward( - input1, - input2, - output, - kH=kH, - kW=kW, - patchH=patch_size, - patchW=patch_size, - padH=padH, - padW=padW, - dilationH=dilationH, - dilationW=dilationW, - dilation_patchH=dilation_patchH, - dilation_patchW=dilation_patchW, - dH=dH, - dW=dW) - - return output - - @staticmethod - @once_differentiable - def backward( - ctx, grad_output: Tensor - ) -> Tuple[Tensor, Tensor, None, None, None, None, None, None]: - input1, input2 = ctx.saved_tensors - - kH, kW = ctx.kernel_size - patch_size = ctx.patch_size - padH, padW = ctx.padding - dilationH, dilationW = ctx.dilation - dilation_patchH, dilation_patchW = ctx.dilation_patch - dH, dW = ctx.stride - grad_input1 = torch.zeros_like(input1) - grad_input2 = torch.zeros_like(input2) - - 
ext_module.correlation_backward( - grad_output, - input1, - input2, - grad_input1, - grad_input2, - kH=kH, - kW=kW, - patchH=patch_size, - patchW=patch_size, - padH=padH, - padW=padW, - dilationH=dilationH, - dilationW=dilationW, - dilation_patchH=dilation_patchH, - dilation_patchW=dilation_patchW, - dH=dH, - dW=dW) - return grad_input1, grad_input2, None, None, None, None, None, None - - @staticmethod - def _output_size(ctx, input1): - iH, iW = input1.size(2), input1.size(3) - batch_size = input1.size(0) - kH, kW = ctx.kernel_size - patch_size = ctx.patch_size - dH, dW = ctx.stride - padH, padW = ctx.padding - dilationH, dilationW = ctx.dilation - dilatedKH = (kH - 1) * dilationH + 1 - dilatedKW = (kW - 1) * dilationW + 1 - - oH = int((iH + 2 * padH - dilatedKH) / dH + 1) - oW = int((iW + 2 * padW - dilatedKW) / dW + 1) - - output_size = (batch_size, patch_size, patch_size, oH, oW) - return output_size - - -class Correlation(nn.Module): - r"""Correlation operator - - This correlation operator works for optical flow correlation computation. - - There are two batched tensors with shape :math:`(N, C, H, W)`, - and the correlation output's shape is :math:`(N, max\_displacement \times - 2 + 1, max\_displacement * 2 + 1, H_{out}, W_{out})` - - where - - .. math:: - H_{out} = \left\lfloor\frac{H_{in} + 2 \times padding - - dilation \times (kernel\_size - 1) - 1} - {stride} + 1\right\rfloor - - .. math:: - W_{out} = \left\lfloor\frac{W_{in} + 2 \times padding - dilation - \times (kernel\_size - 1) - 1} - {stride} + 1\right\rfloor - - the correlation item :math:`(N_i, dy, dx)` is formed by taking the sliding - window convolution between input1 and shifted input2, - - .. math:: - Corr(N_i, dx, dy) = - \sum_{c=0}^{C-1} - input1(N_i, c) \star - \mathcal{S}(input2(N_i, c), dy, dx) - - where :math:`\star` is the valid 2d sliding window convolution operator, - and :math:`\mathcal{S}` means shifting the input features (auto-complete - zero marginal), and :math:`dx, dy` are shifting distance, :math:`dx, dy \in - [-max\_displacement \times dilation\_patch, max\_displacement \times - dilation\_patch]`. - - Args: - kernel_size (int): The size of sliding window i.e. local neighborhood - representing the center points and involved in correlation - computation. Defaults to 1. - max_displacement (int): The radius for computing correlation volume, - but the actual working space can be dilated by dilation_patch. - Defaults to 1. - stride (int): The stride of the sliding blocks in the input spatial - dimensions. Defaults to 1. - padding (int): Zero padding added to all four sides of the input1. - Defaults to 0. - dilation (int): The spacing of local neighborhood that will involved - in correlation. Defaults to 1. - dilation_patch (int): The spacing between position need to compute - correlation. Defaults to 1. 
- """ - - def __init__(self, - kernel_size: int = 1, - max_displacement: int = 1, - stride: int = 1, - padding: int = 0, - dilation: int = 1, - dilation_patch: int = 1) -> None: - super().__init__() - self.kernel_size = kernel_size - self.max_displacement = max_displacement - self.stride = stride - self.padding = padding - self.dilation = dilation - self.dilation_patch = dilation_patch - - def forward(self, input1: Tensor, input2: Tensor) -> Tensor: - return CorrelationFunction.apply(input1, input2, self.kernel_size, - self.max_displacement, self.stride, - self.padding, self.dilation, - self.dilation_patch) - - def __repr__(self) -> str: - s = self.__class__.__name__ - s += f'(kernel_size={self.kernel_size}, ' - s += f'max_displacement={self.max_displacement}, ' - s += f'stride={self.stride}, ' - s += f'padding={self.padding}, ' - s += f'dilation={self.dilation}, ' - s += f'dilation_patch={self.dilation_patch})' - return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/README.md b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/README.md deleted file mode 100644 index 317b8fb3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/README.md +++ /dev/null @@ -1,170 +0,0 @@ -# Code Structure of CUDA operators - -This folder contains all non-python code for MMCV custom ops. Please follow the same architecture if you want to add new ops. - -## Directories Tree - -```folder -. -├── common -│ ├── box_iou_rotated_utils.hpp -│ ├── parrots_cpp_helper.hpp -│ ├── parrots_cuda_helper.hpp -│ ├── pytorch_cpp_helper.hpp -│ ├── pytorch_cuda_helper.hpp -│ ├── pytorch_device_registry.hpp -│   └── cuda -│   ├── common_cuda_helper.hpp -│   ├── parrots_cudawarpfunction.cuh -│   ├── ... -│   └── ops_cuda_kernel.cuh -├── onnxruntime -│   ├── onnxruntime_register.h -│   ├── onnxruntime_session_options_config_keys.h -│   ├── ort_mmcv_utils.h -│   ├── ... -│   ├── onnx_ops.h -│   └── cpu -│ ├── onnxruntime_register.cpp -│      ├── ... -│      └── onnx_ops_impl.cpp -├── parrots -│   ├── ... -│   ├── ops.cpp -│   ├── ops_parrots.cpp -│   └── ops_pytorch.h -├── pytorch -│   ├── info.cpp -│   ├── pybind.cpp -│   ├── ... -│   ├── ops.cpp -│   ├── cuda -│   │   ├── ... -│   │   └── ops_cuda.cu -│   └── cpu -│      ├── ... -│      └── ops.cpp -└── tensorrt - ├── trt_cuda_helper.cuh - ├── trt_plugin_helper.hpp - ├── trt_plugin.hpp - ├── trt_serialize.hpp - ├── ... - ├── trt_ops.hpp - └── plugins -    ├── trt_cuda_helper.cu -    ├── trt_plugin.cpp -    ├── ... -    ├── trt_ops.cpp -    └── trt_ops_kernel.cu -``` - -## Components - -- `common`: This directory contains all tools and shared codes. - - `cuda`: The cuda kernels which can be shared by all backends. **HIP** kernel is also here since they have similar syntax. -- `onnxruntime`: **ONNX Runtime** support for custom ops. - - `cpu`: CPU implementation of supported ops. -- `parrots`: **Parrots** is a deep learning frame for model training and inference. Parrots custom ops are placed in this directory. -- `pytorch`: **PyTorch** custom ops are supported by binding C++ to Python with **pybind11**. The ops implementation and binding codes are placed in this directory. - - `cuda`: This directory contains cuda kernel launchers, which feed memory pointers of tensor to the cuda kernel in `common/cuda`. The launchers provide c++ interface of cuda implementation of corresponding custom ops. - - `cpu`: This directory contain cpu implementations of corresponding custom ops. 
-- `tensorrt`: **TensorRT** support for custom ops. - - `plugins`: This directory contains the implementation of the supported custom ops. Some ops might also use shared cuda kernel in `common/cuda`. - -## How to add new PyTorch ops? - -1. (Optional) Add shared kernel in `common` to support special hardware platform. - - ```c++ - // src/common/cuda/new_ops_cuda_kernel.cuh - - template - __global__ void new_ops_forward_cuda_kernel(const T* input, T* output, ...) { - // forward here - } - - ``` - - Add cuda kernel launcher in `pytorch/cuda`. - - ```c++ - // src/pytorch/cuda - #include - - void NewOpsForwardCUDAKernelLauncher(Tensor input, Tensor output, ...){ - // initialize - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - ... - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "new_ops_forward_cuda_kernel", ([&] { - new_ops_forward_cuda_kernel - <<>>( - input.data_ptr(), output.data_ptr(),...); - })); - AT_CUDA_CHECK(cudaGetLastError()); - } - ``` - -2. Register implementation for different devices. - - ```c++ - // src/pytorch/cuda/cudabind.cpp - ... - - Tensor new_ops_forward_cuda(Tensor input, Tensor output, ...){ - // implement cuda forward here - // use `NewOpsForwardCUDAKernelLauncher` here - } - // declare interface here. - Tensor new_ops_forward_impl(Tensor input, Tensor output, ...); - // register the implementation for given device (CUDA here). - REGISTER_DEVICE_IMPL(new_ops_forward_impl, CUDA, new_ops_forward_cuda); - ``` - -3. Add ops implementation in `pytorch` directory. Select different implementations according to device type. - - ```c++ - // src/pytorch/new_ops.cpp - Tensor new_ops_forward_impl(Tensor input, Tensor output, ...){ - // dispatch the implementation according to the device type of input. - DISPATCH_DEVICE_IMPL(new_ops_forward_impl, input, output, ...); - } - ... - - Tensor new_ops_forward(Tensor input, Tensor output, ...){ - return new_ops_forward_impl(input, output, ...); - } - ``` - -4. Binding the implementation in `pytorch/pybind.cpp` - - ```c++ - // src/pytorch/pybind.cpp - - ... - - Tensor new_ops_forward(Tensor input, Tensor output, ...); - - ... - - // bind with pybind11 - m.def("new_ops_forward", &new_ops_forward, "new_ops_forward", - py::arg("input"), py::arg("output"), ...); - - ... - - ``` - -5. Build MMCV again. Enjoy new ops in python - - ```python - from ..utils import ext_loader - ext_module = ext_loader.load_ext('_ext', ['new_ops_forward']) - - ... - - ext_module.new_ops_forward(input, output, ...) - - ``` diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/box_iou_rotated_utils.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/box_iou_rotated_utils.hpp deleted file mode 100644 index 424da4f7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/box_iou_rotated_utils.hpp +++ /dev/null @@ -1,347 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_utils.h -#pragma once -#include -#include - -#ifdef __CUDACC__ -// Designates functions callable from the host (CPU) and the device (GPU) -#define HOST_DEVICE __host__ __device__ -#define HOST_DEVICE_INLINE HOST_DEVICE __forceinline__ -#else -#include -#define HOST_DEVICE -#define HOST_DEVICE_INLINE HOST_DEVICE inline -#endif - -namespace { - -template -struct RotatedBox { - T x_ctr, y_ctr, w, h, a; -}; - -template -struct Point { - T x, y; - HOST_DEVICE_INLINE Point(const T& px = 0, const T& py = 0) : x(px), y(py) {} - HOST_DEVICE_INLINE Point operator+(const Point& p) const { - return Point(x + p.x, y + p.y); - } - HOST_DEVICE_INLINE Point& operator+=(const Point& p) { - x += p.x; - y += p.y; - return *this; - } - HOST_DEVICE_INLINE Point operator-(const Point& p) const { - return Point(x - p.x, y - p.y); - } - HOST_DEVICE_INLINE Point operator*(const T coeff) const { - return Point(x * coeff, y * coeff); - } -}; - -template -HOST_DEVICE_INLINE T dot_2d(const Point& A, const Point& B) { - return A.x * B.x + A.y * B.y; -} - -template -HOST_DEVICE_INLINE T cross_2d(const Point& A, const Point& B) { - return A.x * B.y - B.x * A.y; -} - -template -HOST_DEVICE_INLINE void get_rotated_vertices(const RotatedBox& box, - Point (&pts)[4]) { - // M_PI / 180. == 0.01745329251 - // double theta = box.a * 0.01745329251; - // MODIFIED - float theta = box.a; - T cosTheta2 = (T)cos(theta) * 0.5f; - T sinTheta2 = (T)sin(theta) * 0.5f; - - // y: top --> down; x: left --> right - pts[0].x = box.x_ctr - sinTheta2 * box.h - cosTheta2 * box.w; - pts[0].y = box.y_ctr + cosTheta2 * box.h - sinTheta2 * box.w; - pts[1].x = box.x_ctr + sinTheta2 * box.h - cosTheta2 * box.w; - pts[1].y = box.y_ctr - cosTheta2 * box.h - sinTheta2 * box.w; - pts[2].x = 2 * box.x_ctr - pts[0].x; - pts[2].y = 2 * box.y_ctr - pts[0].y; - pts[3].x = 2 * box.x_ctr - pts[1].x; - pts[3].y = 2 * box.y_ctr - pts[1].y; -} - -template -HOST_DEVICE_INLINE int get_intersection_points(const Point (&pts1)[4], - const Point (&pts2)[4], - Point (&intersections)[24]) { - // Line vector - // A line from p1 to p2 is: p1 + (p2-p1)*t, t=[0,1] - Point vec1[4], vec2[4]; - for (int i = 0; i < 4; i++) { - vec1[i] = pts1[(i + 1) % 4] - pts1[i]; - vec2[i] = pts2[(i + 1) % 4] - pts2[i]; - } - - // Line test - test all line combos for intersection - int num = 0; // number of intersections - for (int i = 0; i < 4; i++) { - for (int j = 0; j < 4; j++) { - // Solve for 2x2 Ax=b - T det = cross_2d(vec2[j], vec1[i]); - - // This takes care of parallel lines - if (fabs(det) <= 1e-14) { - continue; - } - - auto vec12 = pts2[j] - pts1[i]; - - T t1 = cross_2d(vec2[j], vec12) / det; - T t2 = cross_2d(vec1[i], vec12) / det; - - if (t1 >= 0.0f && t1 <= 1.0f && t2 >= 0.0f && t2 <= 1.0f) { - intersections[num++] = pts1[i] + vec1[i] * t1; - } - } - } - - // Check for vertices of rect1 inside rect2 - { - const auto& AB = vec2[0]; - const auto& DA = vec2[3]; - auto ABdotAB = dot_2d(AB, AB); - auto ADdotAD = dot_2d(DA, DA); - for (int i = 0; i < 4; i++) { - // assume ABCD is the rectangle, and P is the point to be judged - // P is inside ABCD iff. 
P's projection on AB lies within AB - // and P's projection on AD lies within AD - - auto AP = pts1[i] - pts2[0]; - - auto APdotAB = dot_2d(AP, AB); - auto APdotAD = -dot_2d(AP, DA); - - if ((APdotAB >= 0) && (APdotAD >= 0) && (APdotAB <= ABdotAB) && - (APdotAD <= ADdotAD)) { - intersections[num++] = pts1[i]; - } - } - } - - // Reverse the check - check for vertices of rect2 inside rect1 - { - const auto& AB = vec1[0]; - const auto& DA = vec1[3]; - auto ABdotAB = dot_2d(AB, AB); - auto ADdotAD = dot_2d(DA, DA); - for (int i = 0; i < 4; i++) { - auto AP = pts2[i] - pts1[0]; - - auto APdotAB = dot_2d(AP, AB); - auto APdotAD = -dot_2d(AP, DA); - - if ((APdotAB >= 0) && (APdotAD >= 0) && (APdotAB <= ABdotAB) && - (APdotAD <= ADdotAD)) { - intersections[num++] = pts2[i]; - } - } - } - - return num; -} - -template -HOST_DEVICE_INLINE int convex_hull_graham(const Point (&p)[24], - const int& num_in, Point (&q)[24], - bool shift_to_zero = false) { - assert(num_in >= 2); - - // Step 1: - // Find point with minimum y - // if more than 1 points have the same minimum y, - // pick the one with the minimum x. - int t = 0; - for (int i = 1; i < num_in; i++) { - if (p[i].y < p[t].y || (p[i].y == p[t].y && p[i].x < p[t].x)) { - t = i; - } - } - auto& start = p[t]; // starting point - - // Step 2: - // Subtract starting point from every points (for sorting in the next step) - for (int i = 0; i < num_in; i++) { - q[i] = p[i] - start; - } - - // Swap the starting point to position 0 - auto tmp = q[0]; - q[0] = q[t]; - q[t] = tmp; - - // Step 3: - // Sort point 1 ~ num_in according to their relative cross-product values - // (essentially sorting according to angles) - // If the angles are the same, sort according to their distance to origin - T dist[24]; - for (int i = 0; i < num_in; i++) { - dist[i] = dot_2d(q[i], q[i]); - } - -#ifdef __CUDACC__ - // CUDA version - // In the future, we can potentially use thrust - // for sorting here to improve speed (though not guaranteed) - for (int i = 1; i < num_in - 1; i++) { - for (int j = i + 1; j < num_in; j++) { - T crossProduct = cross_2d(q[i], q[j]); - if ((crossProduct < -1e-6) || - (fabs(crossProduct) < 1e-6 && dist[i] > dist[j])) { - auto q_tmp = q[i]; - q[i] = q[j]; - q[j] = q_tmp; - auto dist_tmp = dist[i]; - dist[i] = dist[j]; - dist[j] = dist_tmp; - } - } - } -#else - // CPU version - std::sort(q + 1, q + num_in, - [](const Point& A, const Point& B) -> bool { - T temp = cross_2d(A, B); - if (fabs(temp) < 1e-6) { - return dot_2d(A, A) < dot_2d(B, B); - } else { - return temp > 0; - } - }); - // compute distance to origin after sort, since the points are now different. - for (int i = 0; i < num_in; i++) { - dist[i] = dot_2d(q[i], q[i]); - } -#endif - - // Step 4: - // Make sure there are at least 2 points (that don't overlap with each other) - // in the stack - int k; // index of the non-overlapped second point - for (k = 1; k < num_in; k++) { - if (dist[k] > 1e-8) { - break; - } - } - if (k == num_in) { - // We reach the end, which means the convex hull is just one point - q[0] = p[t]; - return 1; - } - q[1] = q[k]; - int m = 2; // 2 points in the stack - // Step 5: - // Finally we can start the scanning process. 
- // When a non-convex relationship between the 3 points is found - // (either concave shape or duplicated points), - // we pop the previous point from the stack - // until the 3-point relationship is convex again, or - // until the stack only contains two points - for (int i = k + 1; i < num_in; i++) { - while (m > 1 && cross_2d(q[i] - q[m - 2], q[m - 1] - q[m - 2]) >= 0) { - m--; - } - q[m++] = q[i]; - } - - // Step 6 (Optional): - // In general sense we need the original coordinates, so we - // need to shift the points back (reverting Step 2) - // But if we're only interested in getting the area/perimeter of the shape - // We can simply return. - if (!shift_to_zero) { - for (int i = 0; i < m; i++) { - q[i] += start; - } - } - - return m; -} - -template -HOST_DEVICE_INLINE T polygon_area(const Point (&q)[24], const int& m) { - if (m <= 2) { - return 0; - } - - T area = 0; - for (int i = 1; i < m - 1; i++) { - area += fabs(cross_2d(q[i] - q[0], q[i + 1] - q[0])); - } - - return area / 2.0; -} - -template -HOST_DEVICE_INLINE T rotated_boxes_intersection(const RotatedBox& box1, - const RotatedBox& box2) { - // There are up to 4 x 4 + 4 + 4 = 24 intersections (including dups) returned - // from rotated_rect_intersection_pts - Point intersectPts[24], orderedPts[24]; - - Point pts1[4]; - Point pts2[4]; - get_rotated_vertices(box1, pts1); - get_rotated_vertices(box2, pts2); - - int num = get_intersection_points(pts1, pts2, intersectPts); - - if (num <= 2) { - return 0.0; - } - - // Convex Hull to order the intersection points in clockwise order and find - // the contour area. - int num_convex = convex_hull_graham(intersectPts, num, orderedPts, true); - return polygon_area(orderedPts, num_convex); -} - -} // namespace - -template -HOST_DEVICE_INLINE T single_box_iou_rotated(T const* const box1_raw, - T const* const box2_raw, - const int mode_flag) { - // shift center to the middle point to achieve higher precision in result - RotatedBox box1, box2; - auto center_shift_x = (box1_raw[0] + box2_raw[0]) / 2.0; - auto center_shift_y = (box1_raw[1] + box2_raw[1]) / 2.0; - box1.x_ctr = box1_raw[0] - center_shift_x; - box1.y_ctr = box1_raw[1] - center_shift_y; - box1.w = box1_raw[2]; - box1.h = box1_raw[3]; - box1.a = box1_raw[4]; - box2.x_ctr = box2_raw[0] - center_shift_x; - box2.y_ctr = box2_raw[1] - center_shift_y; - box2.w = box2_raw[2]; - box2.h = box2_raw[3]; - box2.a = box2_raw[4]; - - const T area1 = box1.w * box1.h; - const T area2 = box2.w * box2.h; - if (area1 < 1e-14 || area2 < 1e-14) { - return 0.f; - } - - const T intersection = rotated_boxes_intersection(box1, box2); - T baseS = 1.0; - if (mode_flag == 0) { - baseS = (area1 + area2 - intersection); - } else if (mode_flag == 1) { - baseS = area1; - } - const T iou = intersection / baseS; - return iou; -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/active_rotated_filter_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/active_rotated_filter_cuda_kernel.cuh deleted file mode 100644 index 36e41107..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/active_rotated_filter_cuda_kernel.cuh +++ /dev/null @@ -1,59 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-// Modified from -// https://github.com/csuhan/s2anet/blob/master/mmdet/ops/orn/src/cuda/ActiveRotatingFilter_cuda.cu -#ifndef ACTIVE_ROTATED_FILTER_CUDA_KERNEL_CUH -#define ACTIVE_ROTATED_FILTER_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void active_rotated_filter_forward_cuda_kernel( - const int nthreads, const scalar_t* weight_data, const int* indices_data, - const int num_input_planes, const int num_output_planes, - const int num_orientations, const int num_rotations, const int nEntry, - scalar_t* output_data) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int l = index % nEntry; - int j = (index / nEntry) % num_input_planes; - int i = index / nEntry / num_input_planes; - int k; - scalar_t val = *(weight_data + index); - for (k = 0; k < num_rotations; k++) { - int idx = (int)(*(indices_data + l * num_rotations + k)) - 1; - scalar_t* target = output_data + - i * (num_rotations * num_input_planes * nEntry) + - k * (num_input_planes * nEntry) + j * (nEntry) + idx; - *target = val; - } - } -} - -template -__global__ void active_rotated_filter_backward_cuda_kernel( - const int nthreads, const scalar_t* gradWeight_data, - const int* indices_data, const int num_input_planes, - const int num_output_planes, const int num_orientations, - const int num_rotations, const int nEntry, scalar_t* weight_data) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int l = index % nEntry; - int j = (index / nEntry) % num_input_planes; - int i = index / nEntry / num_input_planes; - int k; - scalar_t* val = weight_data + index; - *val = 0; - scalar_t tmp = 0; - for (k = 0; k < num_rotations; k++) { - int idx = (int)(*(indices_data + l * num_rotations + k)) - 1; - scalar_t target = - *(gradWeight_data + i * (num_rotations * num_input_planes * nEntry) + - k * (num_input_planes * nEntry) + j * (nEntry) + idx); - tmp = tmp + target; - } - *val = tmp; - } -} -#endif // ACTIVE_ROTATED_FILTER_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/assign_score_withk_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/assign_score_withk_cuda_kernel.cuh deleted file mode 100644 index 9f925084..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/assign_score_withk_cuda_kernel.cuh +++ /dev/null @@ -1,116 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ASSIGN_SCORE_WITHK_CUDA_KERNEL_CUH -#define ASSIGN_SCORE_WITHK_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -// input: points(B,N0,M,O), centers(B,N0,M,O), scores(B,N1,K,M), knn_idx(B,N1,K) -// output: fout(B,O,N) -// algo: fout(b,i,k,j) = s(b,i,k,m)*p(b,c(i),k,m,j) = s(b,i,k,m)*p(b,i(k),m,j) -// i(k) = idx(b,i,k) -// sum: fout(b,i,j) = fout(b,i,j) + s(b,i,k,m)*p(b,i,k,m,j) -// avg: fout(b,i,j) = sum(fout(b,i,k,j)) / k -// max: fout(b,i,j) = max(fout(b,i,k,j), sum(s(b,i,k,m)*p(b,i,k,m,j))) - -template -__global__ void assign_score_withk_forward_cuda_kernel( - const int B, const int N0, const int N1, const int M, const int K, - const int O, const int aggregate, const T* points, const T* centers, - const T* scores, const int64_t* knn_idx, T* output) { - // ----- parallel loop for B, N1, K and O --------- - CUDA_1D_KERNEL_LOOP(i, B * O * N1 * K) { - // ------- loop for M ---------- - const int b = (int)(i / (O * N1 * K)); - const int o = (int)(i % (O * N1 * K) / (N1 * K)); - const int n = (int)(i % (N1 * K) / K); - const int k = (int)(i % K); - const int cn = (int)knn_idx[b * K * N1 + n * K + - 0]; // The first neighbor is the center point - const int kn = (int)knn_idx[b * K * N1 + n * K + k]; - if (kn >= N0 || - kn < 0) { // if index overflows, it is out of the neighborhood range - return; - } - assert(b < B); - assert(kn < N0); - assert(cn < N0); - assert(o < O); - assert(n < N1); - const int out_idx = b * N1 * O * K + o * N1 * K + n * K + k; - T val = output[out_idx]; - for (int m = 0; m < M; m++) { - val += points[b * N0 * M * O + kn * M * O + m * O + o] * - scores[b * N1 * K * M + n * K * M + k * M + m] - - centers[b * N0 * M * O + cn * M * O + m * O + o] * - scores[b * N1 * K * M + n * K * M + k * M + m]; - } - output[out_idx] = val; - } -} - -template -__global__ void assign_score_withk_points_backward_cuda_kernel( - const int B, const int N0, const int N, const int M, const int K, - const int O, const int aggregate, const T* grad_out, const T* scores, - const int64_t* knn_idx, T* grad_points, T* grad_centers) { - // ----- parallel loop for B, M, O --------- - CUDA_1D_KERNEL_LOOP(i, B * M * O) { - int b = (int)(i / (M * O)); - int m = (int)(i % (M * O) / O); - int o = (int)(i % O); - - // ----- loop for N,K --------- - for (int n = 0; n < N; n++) { - for (int k = 0; k < K; k++) { - int kn = knn_idx[b * N * K + n * K + k]; - int cn = knn_idx[b * N * K + n * K + 0]; - if (kn >= N0 || kn < 0) { // if index overflows, it is out of the - // neighborhood range - continue; - } - atomicAdd(grad_points + b * N0 * M * O + kn * M * O + m * O + o, - scores[b * N * K * M + n * K * M + k * M + m] * - grad_out[b * O * N * K + o * N * K + n * K + k]); - atomicAdd(grad_centers + b * N0 * M * O + cn * M * O + m * O + o, - -scores[b * N * K * M + n * K * M + k * M + m] * - grad_out[b * O * N * K + o * N * K + n * K + k]); - } - } - } -} - -template -__global__ void assign_score_withk_scores_backward_cuda_kernel( - const int B, const int N0, const int N, const int M, const int K, - const int O, const int aggregate, const T* grad_out, const T* points, - const T* centers, const int64_t* knn_idx, T* grad_scores) { - // ----- parallel loop for B, N, K, M --------- - CUDA_1D_KERNEL_LOOP(i, B * N * K * M) { - const int b = (int)(i / (N * M * K)); - const int n = (int)(i % (N * M * K) / M / K); - const int k = (int)(i % (M * K) / M); - const int m = (int)(i % M); - const int cn = knn_idx[b * N * K 
+ n * K + 0]; - const int kn = knn_idx[b * N * K + n * K + k]; - if (kn >= N0 || - kn < 0) { // if index overflows, it is out of the neighborhood range - return; - } - - // -------------- loop for O ------------------------ - const int out_idx = b * N * K * M + n * K * M + k * M + m; - T val = grad_scores[out_idx]; - for (int o = 0; o < O; o++) { - val += (points[b * N0 * M * O + kn * M * O + m * O + o] - - centers[b * N0 * M * O + cn * M * O + m * O + o]) * - grad_out[b * O * N * K + o * N * K + n * K + k]; - } - grad_scores[out_idx] = val; - } -} - -#endif // ASSIGN_SCORE_WITHK_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/ball_query_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/ball_query_cuda_kernel.cuh deleted file mode 100644 index 632b5c49..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/ball_query_cuda_kernel.cuh +++ /dev/null @@ -1,58 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/ball_query_gpu.cu -#ifndef BALL_QUERY_CUDA_KERNEL_CUH -#define BALL_QUERY_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void ball_query_forward_cuda_kernel(int b, int n, int m, - float min_radius, - float max_radius, int nsample, - const T* new_xyz, const T* xyz, - int* idx) { - // new_xyz: (B, M, 3) - // xyz: (B, N, 3) - // output: - // idx: (B, M, nsample) - int bs_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(pt_idx, m) { - if (bs_idx >= b) return; - - new_xyz += bs_idx * m * 3 + pt_idx * 3; - xyz += bs_idx * n * 3; - idx += bs_idx * m * nsample + pt_idx * nsample; - - float max_radius2 = max_radius * max_radius; - float min_radius2 = min_radius * min_radius; - T new_x = new_xyz[0]; - T new_y = new_xyz[1]; - T new_z = new_xyz[2]; - - int cnt = 0; - for (int k = 0; k < n; ++k) { - T x = xyz[k * 3 + 0]; - T y = xyz[k * 3 + 1]; - T z = xyz[k * 3 + 2]; - T d2 = (new_x - x) * (new_x - x) + (new_y - y) * (new_y - y) + - (new_z - z) * (new_z - z); - if (d2 == 0 || (d2 >= min_radius2 && d2 < max_radius2)) { - if (cnt == 0) { - for (int l = 0; l < nsample; ++l) { - idx[l] = k; - } - } - idx[cnt] = k; - ++cnt; - if (cnt >= nsample) break; - } - } - } -} - -#endif // BALL_QUERY_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/bbox_overlaps_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/bbox_overlaps_cuda_kernel.cuh deleted file mode 100644 index 15bd91ec..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/bbox_overlaps_cuda_kernel.cuh +++ /dev/null @@ -1,147 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef BBOX_OVERLAPS_CUDA_KERNEL_CUH -#define BBOX_OVERLAPS_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__device__ __forceinline__ void load_bbox(const T* bbox, const int base, T& x1, - T& y1, T& x2, T& y2) { - x1 = bbox[base]; - y1 = bbox[base + 1]; - x2 = bbox[base + 2]; - y2 = bbox[base + 3]; -} - -template <> -__device__ __forceinline__ void load_bbox(const float* bbox, - const int base, float& x1, - float& y1, float& x2, - float& y2) { - const float4 bbox_offset = reinterpret_cast(bbox + base)[0]; - x1 = bbox_offset.x; - y1 = bbox_offset.y; - x2 = bbox_offset.z; - y2 = bbox_offset.w; -} - -template -__global__ void bbox_overlaps_cuda_kernel(const T* bbox1, const T* bbox2, - T* ious, const int num_bbox1, - const int num_bbox2, const int mode, - const bool aligned, - const int offset) { - if (aligned) { - CUDA_1D_KERNEL_LOOP(index, num_bbox1) { - const int b1 = index; - const int b2 = index; - - const int base1 = b1 << 2; // b1 * 4 - T b1_x1, b1_y1, b1_x2, b1_y2; - load_bbox(bbox1, base1, b1_x1, b1_y1, b1_x2, b1_y2); - const T b1_area = (b1_x2 - b1_x1 + offset) * (b1_y2 - b1_y1 + offset); - - const int base2 = b2 << 2; // b2 * 4 - T b2_x1, b2_y1, b2_x2, b2_y2; - load_bbox(bbox2, base2, b2_x1, b2_y1, b2_x2, b2_y2); - const T b2_area = (b2_x2 - b2_x1 + offset) * (b2_y2 - b2_y1 + offset); - - const T left = fmaxf(b1_x1, b2_x1), right = fminf(b1_x2, b2_x2); - const T top = fmaxf(b1_y1, b2_y1), bottom = fminf(b1_y2, b2_y2); - const T width = fmaxf(right - left + offset, 0.f); - const T height = fmaxf(bottom - top + offset, 0.f); - const T interS = width * height; - - const T baseS = - fmaxf(mode == 0 ? b1_area + b2_area - interS : b1_area, T(offset)); - ious[index] = interS / baseS; - } - } else { - CUDA_1D_KERNEL_LOOP(index, num_bbox1 * num_bbox2) { - const int b1 = index / num_bbox2; - const int b2 = index % num_bbox2; - - const int base1 = b1 << 2; // b1 * 4 - T b1_x1, b1_y1, b1_x2, b1_y2; - load_bbox(bbox1, base1, b1_x1, b1_y1, b1_x2, b1_y2); - const T b1_area = (b1_x2 - b1_x1 + offset) * (b1_y2 - b1_y1 + offset); - - const int base2 = b2 << 2; // b2 * 4 - T b2_x1, b2_y1, b2_x2, b2_y2; - load_bbox(bbox2, base2, b2_x1, b2_y1, b2_x2, b2_y2); - const T b2_area = (b2_x2 - b2_x1 + offset) * (b2_y2 - b2_y1 + offset); - - const T left = fmaxf(b1_x1, b2_x1), right = fminf(b1_x2, b2_x2); - const T top = fmaxf(b1_y1, b2_y1), bottom = fminf(b1_y2, b2_y2); - const T width = fmaxf(right - left + offset, 0.f); - const T height = fmaxf(bottom - top + offset, 0.f); - const T interS = width * height; - - const T baseS = - fmaxf(mode == 0 ? b1_area + b2_area - interS : b1_area, T(offset)); - ious[index] = interS / baseS; - } - } -} - -#if __CUDA_ARCH__ >= 530 -__device__ __forceinline__ __half __half_area(const __half x1, const __half y1, - const __half x2, const __half y2, - const __half offset) { - const __half half_w = __hadd(__hsub(x2, x1), offset); - const __half half_h = __hadd(__hsub(y2, y1), offset); - return __hmul(half_w, half_h); -} - -__device__ __forceinline__ __half __half_max(const __half a, const __half b) { - return __hge(a, b) ? a : b; -} - -__device__ __forceinline__ __half __half_min(const __half a, const __half b) { - return __hle(a, b) ? a : b; -} - -// fp16 won't provide much increase when aligned==true. It is useful when -// aligned==false, which would give you ~40% bonus. 
-__device__ void bbox_overlaps_cuda_kernel_half( - const __half* bbox1, const __half* bbox2, __half* ious, const int num_bbox1, - const int num_bbox2, const int mode, const bool aligned, const int offset) { - const int num_output = aligned ? num_bbox1 : num_bbox1 * num_bbox2; - const __half h_offset = __int2half_rn(offset); - CUDA_1D_KERNEL_LOOP(index, num_output) { - const int b1 = aligned ? index : index / num_bbox2; - const int b2 = aligned ? index : index % num_bbox2; - - const int base1 = b1 << 2; - __half b1_x1, b1_y1, b1_x2, b1_y2; - load_bbox<__half>(bbox1, base1, b1_x1, b1_y1, b1_x2, b1_y2); - const __half b1_area = __half_area(b1_x1, b1_y1, b1_x2, b1_y2, h_offset); - - const int base2 = b2 << 2; - __half b2_x1, b2_y1, b2_x2, b2_y2; - load_bbox<__half>(bbox2, base2, b2_x1, b2_y1, b2_x2, b2_y2); - const __half b2_area = __half_area(b2_x1, b2_y1, b2_x2, b2_y2, h_offset); - - const __half left = __half_max(b1_x1, b2_x1), - right = __half_min(b1_x2, b2_x2); - const __half top = __half_max(b1_y1, b2_y1), - bottom = __half_min(b1_y2, b2_y2); - const __half width = - __half_max(__hadd(__hsub(right, left), h_offset), __float2half(0.f)); - const __half height = - __half_max(__hadd(__hsub(bottom, top), h_offset), __float2half(0.f)); - const __half interS = __hmul(width, height); - - const __half baseS = __half_max( - mode == 0 ? __hsub(__hadd(b1_area, b2_area), interS) : b1_area, - h_offset); - ious[index] = __hdiv(interS, baseS); - } -} -#endif // __CUDA_ARCH__ >= 530 - -#endif // BBOX_OVERLAPS_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/border_align_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/border_align_cuda_kernel.cuh deleted file mode 100644 index 1d2a2197..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/border_align_cuda_kernel.cuh +++ /dev/null @@ -1,200 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// modified from -// https://github.com/Megvii-BaseDetection/cvpods/blob/master/cvpods/layers/csrc/border_align/border_align_kernel.cu. -// the main difference: (1) use `argmax_idx` for fast computing of gradient -// during the backward. (2) `wh` is directly computed by `boxes`, rather than -// passing it as argument to forward or backward functions. 
- -#ifndef BORDER_ALIGN_CUDA_KERNEL_CUH -#define BORDER_ALIGN_CUDA_KERNEL_CUH - -#include -#ifdef MMCV_WITH_TRT -#include "common_cuda_helper.hpp" -#else // MMCV_WITH_TRT -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else // MMCV_USE_PARROTS -#include "pytorch_cuda_helper.hpp" -#endif // MMCV_USE_PARROTS -#endif // MMCV_WITH_TRT - -enum BorderMode { Top = 0, Left = 1, Bottom = 2, Right = 3 }; - -/*** Forward ***/ -template -__global__ void border_align_forward_cuda_kernel( - const int nthreads, const T* input, const T* boxes, T* output, - int* argmax_idx, const int channels, const int box_size, const int height, - const int width, const int pool_size) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (batch_idx, c_idx, box_idx) is an element paralleled for computing - // output, and `extreme_idx` is in range [0,3] - int batch_idx, c_idx, box_idx, extreme_idx, maxidx, *offset_argmax_idx; - const T *offset_box, *offset_input, *offset_box_x; - T *offset_output, box_width, box_height, stride, x_stride, y_stride, x, y, - val, maxval; - - extreme_idx = threadIdx.y; - // shape (N, C, box_size, 4) for output - batch_idx = index / channels / box_size; - // shape (N, box_size, 4) for boxes - box_idx = index % box_size + batch_idx * box_size; - c_idx = (index / box_size) % channels; - - offset_box = boxes + box_idx * 4; - box_width = *(offset_box + 2) - *offset_box; - box_height = *(offset_box + 3) - *(offset_box + 1); - offset_output = output + index * 4 + extreme_idx; - offset_argmax_idx = argmax_idx + index * 4 + extreme_idx; - // shape (N, 4C, h, w) for input. - // [0,C) for top feature, [C,2C) for left feature, - // [2C,3C) for bottom feature, [3C,4C) for right feature - offset_input = - input + (batch_idx * channels * 4 + extreme_idx * channels + c_idx) * - height * width; - - // extreme_idx in [0,1] -> offset_box_x indexed at x1 - // extreme_idx in [2,3] -> offset_box_x indexed at x2 - offset_box_x = offset_box + extreme_idx / 2 * 2; - - // (x1,y1) or (x2,y2) for (x,y) - x = *offset_box_x; - y = *(offset_box_x + 1); - - switch (extreme_idx) { - // top - case BorderMode::Top: - stride = box_width / pool_size; - x_stride = stride; - y_stride = 0; - break; - // left - case BorderMode::Left: - stride = box_height / pool_size; - x_stride = 0; - y_stride = stride; - break; - // bottom - case BorderMode::Bottom: - stride = box_width / pool_size; - x_stride = -stride; - y_stride = 0; - break; - // right - case BorderMode::Right: - stride = box_height / pool_size; - x_stride = 0; - y_stride = -stride; - break; - } - - // initialize maxval and maxidx with the start position (e.g. 
(x1,y1) or - // (x2,y2)) - maxval = bilinear_interpolate(offset_input, height, width, y, x, index); - maxidx = 0; - - // do max_pool along the border - for (int i = 1; i <= pool_size; i++) { - x += x_stride; - y += y_stride; - val = bilinear_interpolate(offset_input, height, width, y, x, index); - if (val > maxval) { - maxval = val; - maxidx = i; - } - } - - // update output and argmax_idx - *offset_output = maxval; - *offset_argmax_idx = maxidx; - } -} - -/*** Backward ***/ -template -__global__ void border_align_backward_cuda_kernel( - const int nthreads, const T* grad_output, const T* boxes, - const int* argmax_idx, T* grad_input, const int channels, - const int box_size, const int height, const int width, - const int pool_size) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (batch_idx, c_idx, box_idx) is an element paralleled for computing - // output, and `extreme_idx` is in range [0,3] - int batch_idx, c_idx, box_idx, extreme_idx; - const int* offset_argmax_idx; - const T *offset_grad_output, *offset_box, *offset_box_x; - T *offset_grad_input, box_width, box_height, stride, x_stride, y_stride, x, - y; - - extreme_idx = threadIdx.y; - batch_idx = index / channels / box_size; - box_idx = index % box_size + batch_idx * box_size; - c_idx = (index / box_size) % channels; - - offset_box = boxes + box_idx * 4; - box_width = *(offset_box + 2) - *offset_box; - box_height = *(offset_box + 3) - *(offset_box + 1); - offset_grad_output = grad_output + index * 4 + extreme_idx; - offset_argmax_idx = argmax_idx + index * 4 + extreme_idx; - // [0,C) for top feature grad, [C,2C) for left feature grad, - // [2C,3C) for bottom feature grad, [3C,4C) for right feature grad - offset_grad_input = grad_input + (batch_idx * channels * 4 + - extreme_idx * channels + c_idx) * - height * width; - - // extreme_idx in [0,1] -> offset_box_x indexed at x1 - // extreme_idx in [2,3] -> offset_box_x indexed at x2 - offset_box_x = offset_box + extreme_idx / 2 * 2; - - switch (extreme_idx) { - // top - case BorderMode::Top: - stride = box_width / pool_size; - x_stride = stride; - y_stride = 0; - break; - // left - case BorderMode::Left: - stride = box_height / pool_size; - x_stride = 0; - y_stride = stride; - break; - // bottom - case BorderMode::Bottom: - stride = box_width / pool_size; - x_stride = -stride; - y_stride = 0; - break; - // right - case BorderMode::Right: - stride = box_height / pool_size; - x_stride = 0; - y_stride = -stride; - break; - } - - // get position (x,y) which has maximum value during forward - x = *offset_box_x; - y = *(offset_box_x + 1); - x += x_stride * (T)(*offset_argmax_idx); - y += y_stride * (T)(*offset_argmax_idx); - - T w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - bilinear_interpolate_gradient(height, width, y, x, w1, w2, w3, w4, x_low, - x_high, y_low, y_high, index); - - // update grad_output - atomicAdd(offset_grad_input + y_low * width + x_low, - *offset_grad_output * w1); - atomicAdd(offset_grad_input + y_low * width + x_high, - *offset_grad_output * w2); - atomicAdd(offset_grad_input + y_high * width + x_low, - *offset_grad_output * w3); - atomicAdd(offset_grad_input + y_high * width + x_high, - *offset_grad_output * w4); - } -} - -#endif // BORDER_ALIGN_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/box_iou_rotated_cuda.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/box_iou_rotated_cuda.cuh deleted file mode 100644 index abd47cd8..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/box_iou_rotated_cuda.cuh +++ /dev/null @@ -1,81 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cuda.cu -#ifndef BOX_IOU_ROTATED_CUDA_CUH -#define BOX_IOU_ROTATED_CUDA_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif -#include "box_iou_rotated_utils.hpp" - -// 2D block with 32 * 16 = 512 threads per block -const int BLOCK_DIM_X = 32; -const int BLOCK_DIM_Y = 16; - -inline int divideUP(const int x, const int y) { return (((x) + (y)-1) / (y)); } - -template -__global__ void box_iou_rotated_cuda_kernel( - const int n_boxes1, const int n_boxes2, const T* dev_boxes1, - const T* dev_boxes2, T* dev_ious, const int mode_flag, const bool aligned) { - if (aligned) { - CUDA_1D_KERNEL_LOOP(index, n_boxes1) { - int b1 = index; - int b2 = index; - - int base1 = b1 * 5; - - float block_boxes1[5]; - float block_boxes2[5]; - - block_boxes1[0] = dev_boxes1[base1 + 0]; - block_boxes1[1] = dev_boxes1[base1 + 1]; - block_boxes1[2] = dev_boxes1[base1 + 2]; - block_boxes1[3] = dev_boxes1[base1 + 3]; - block_boxes1[4] = dev_boxes1[base1 + 4]; - - int base2 = b2 * 5; - - block_boxes2[0] = dev_boxes2[base2 + 0]; - block_boxes2[1] = dev_boxes2[base2 + 1]; - block_boxes2[2] = dev_boxes2[base2 + 2]; - block_boxes2[3] = dev_boxes2[base2 + 3]; - block_boxes2[4] = dev_boxes2[base2 + 4]; - - dev_ious[index] = - single_box_iou_rotated(block_boxes1, block_boxes2, mode_flag); - } - } else { - CUDA_1D_KERNEL_LOOP(index, n_boxes1 * n_boxes2) { - int b1 = index / n_boxes2; - int b2 = index % n_boxes2; - - int base1 = b1 * 5; - - float block_boxes1[5]; - float block_boxes2[5]; - - block_boxes1[0] = dev_boxes1[base1 + 0]; - block_boxes1[1] = dev_boxes1[base1 + 1]; - block_boxes1[2] = dev_boxes1[base1 + 2]; - block_boxes1[3] = dev_boxes1[base1 + 3]; - block_boxes1[4] = dev_boxes1[base1 + 4]; - - int base2 = b2 * 5; - - block_boxes2[0] = dev_boxes2[base2 + 0]; - block_boxes2[1] = dev_boxes2[base2 + 1]; - block_boxes2[2] = dev_boxes2[base2 + 2]; - block_boxes2[3] = dev_boxes2[base2 + 3]; - block_boxes2[4] = dev_boxes2[base2 + 4]; - - dev_ious[index] = - single_box_iou_rotated(block_boxes1, block_boxes2, mode_flag); - } - } -} - -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/carafe_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/carafe_cuda_kernel.cuh deleted file mode 100644 index c343e4fe..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/carafe_cuda_kernel.cuh +++ /dev/null @@ -1,332 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef CARAFE_CUDA_KERNEL_CUH -#define CARAFE_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -#if defined(HIP_DIFF) || defined(__ILUVATAR__) -#define WARP_SIZE 64 -#else -#define WARP_SIZE 32 -#endif -#define THREADS_PER_PIXEL 32 -#define MAX_SHARED_MEMORY 49152 -#define MAX_SHARED_SCALAR_T 6144 // 49152 / 8 = 6144 -#define MAXIMIZE_KERNEL_SIZE true -#define kTileDim 32 -#define kBlockRows 8 -#define FULL_MASK 0xffffffff - -inline int divideUP(const int x, const int y) { return (((x) + (y)-1) / (y)); } - -__device__ inline int Loc2Index(const int n, const int c, const int h, - const int w, const int channel_num, - const int height, const int width) { - int index = w + (h + (c + n * channel_num) * height) * width; - return index; -} -#ifndef HIP_DIFF -/* TODO: move this to a common place */ -template -__device__ inline scalar_t min(scalar_t a, scalar_t b) { - return a < b ? a : b; -} - -template -__device__ inline scalar_t max(scalar_t a, scalar_t b) { - return a > b ? a : b; -} -#endif -template -__device__ __forceinline__ scalar_t warpReduceSum(scalar_t val) { - for (int offset = WARP_SIZE / 2; offset > 0; offset /= 2) -#ifdef HIP_DIFF - val += __shfl_down(val, offset); -#else - val += __shfl_down_sync(FULL_MASK, val, offset); -#endif - return val; -} - -template <> -__device__ __forceinline__ phalf warpReduceSum(phalf val) { - for (int offset = WARP_SIZE / 2; offset > 0; offset /= 2) -#ifdef HIP_DIFF - __PHALF(val) += __shfl_down(FULL_MASK, val, offset); -#else - __PHALF(val) += - __shfl_down_sync(FULL_MASK, static_cast<__half>(__PHALF(val)), offset); -#endif - return val; -} - -// Splits the original matrix into submatrices with size 32 * 32. -// Each block transposes one submatrix by loading it into shared memory. 
-// Reference https://devblogs.nvidia.com/efficient-matrix-transpose-cuda-cc/ -template -__global__ void BatchTranspose2DCUDAKernel(const int N, const int H, - const int W, const int dh, - const int dw, - const scalar_t *__restrict__ X, - scalar_t *__restrict__ Y) { - __shared__ scalar_t tile[kTileDim][kTileDim + 1]; - const int n = blockIdx.x / (dh * dw); - const int k = blockIdx.x % (dh * dw); - const int r = k / dw; - const int c = k % dw; - const int offset = n * H * W; - int x = c * kTileDim + threadIdx.x; - int y = r * kTileDim + threadIdx.y; - if (x < W) { - for (int i = 0; threadIdx.y + i < kTileDim && y + i < H; i += kBlockRows) { - tile[threadIdx.y + i][threadIdx.x] = X[offset + (y + i) * W + x]; - } - } - __syncthreads(); - x = r * kTileDim + threadIdx.x; - y = c * kTileDim + threadIdx.y; - if (x < H) { - for (int i = 0; threadIdx.y + i < kTileDim && y + i < W; i += kBlockRows) { - Y[offset + (y + i) * H + x] = tile[threadIdx.x][threadIdx.y + i]; - } - } -} -template -__global__ void CARAFEForward( - const int num_kernels, const scalar_t *__restrict__ bottom_data, - const scalar_t *__restrict__ bottom_masks, const int kernel_size, - const int group_size, const int scale_factor, const int channels, - const int down_height, const int down_width, const int height, - const int width, const int mask_channels, scalar_t *__restrict__ top_data) { -#if MAXIMIZE_KERNEL_SIZE - __shared__ float shared_mask[MAX_SHARED_SCALAR_T * 2]; -#else - __shared__ scalar_t shared_mask[MAX_SHARED_SCALAR_T]; -#endif - - int index = threadIdx.x + blockIdx.x * blockDim.x; - if (index > num_kernels - 1) { - return; - } - const int pixel_id = threadIdx.x / THREADS_PER_PIXEL; - const int split_id = threadIdx.x % THREADS_PER_PIXEL; - index = index / THREADS_PER_PIXEL; - const int pw = index % width; - const int ph = (index / width) % height; - const int n = index / width / height; - - const int down_pw = pw / scale_factor; - const int down_ph = ph / scale_factor; - - const int start_w = down_pw - (kernel_size - 1) / 2; - const int end_w = down_pw + (kernel_size - 1) / 2 + 1; - const int start_h = down_ph - (kernel_size - 1) / 2; - const int end_h = down_ph + (kernel_size - 1) / 2 + 1; - for (int c = split_id; c < mask_channels; c += THREADS_PER_PIXEL) { - int mask_index = Loc2Index(n, ph, pw, c, height, width, mask_channels); - shared_mask[c * WARP_SIZE + pixel_id] = bottom_masks[mask_index]; - } - __syncthreads(); - - const int channels_per_group = ceilf(channels / (float)group_size); -#pragma unroll - for (int c = split_id; c < channels; c += THREADS_PER_PIXEL) { - int mask_group = c / channels_per_group; - scalar_t output_val = 0; -#pragma unroll - for (int iy = start_h; iy < end_h; iy++) { -#pragma unroll - for (int ix = start_w; ix < end_w; ix++) { - if (iy < 0 || iy > down_height - 1 || ix < 0 || ix > down_width - 1) { - continue; - } - int mask_iy = iy - down_ph + (kernel_size - 1) / 2; - int mask_ix = ix - down_pw + (kernel_size - 1) / 2; - int mask_c = - (mask_group * kernel_size + mask_iy) * kernel_size + mask_ix; - int feat_index = - Loc2Index(n, iy, ix, c, down_height, down_width, channels); - - output_val += bottom_data[feat_index] * - shared_mask[mask_c * WARP_SIZE + pixel_id]; - } - } - - int top_index = Loc2Index(n, ph, pw, c, height, width, channels); - top_data[top_index] = output_val; - } -} - -template -__global__ void CARAFEBackward_Feature( - const int num_kernels, const scalar_t *__restrict__ top_diff, - const scalar_t *__restrict__ bottom_masks, const int kernel_size, - const int 
group_size, const int scale_factor, const int channels, - const int down_height, const int down_width, const int height, - const int width, const int mask_channels, - scalar_t *__restrict__ bottom_diff) { -#if MAXIMIZE_KERNEL_SIZE - __shared__ float shared_mask[MAX_SHARED_SCALAR_T * 2]; -#else - __shared__ scalar_t shared_mask[MAX_SHARED_SCALAR_T]; -#endif - - int index = threadIdx.x + blockIdx.x * blockDim.x; - if (index > num_kernels - 1) { - return; - } - - const int pixel_id = threadIdx.x / THREADS_PER_PIXEL; - const int split_id = threadIdx.x % THREADS_PER_PIXEL; - // (n, c, ph, pw) is an element in the bottom_data - index = index / THREADS_PER_PIXEL; - const int pw = index % width; - const int ph = (index / width) % height; - const int n = index / width / height; - - const int start_w = pw - (kernel_size - 1) * scale_factor / 2; - const int end_w = pw + (kernel_size - 1) * scale_factor / 2 + 1; - const int start_h = ph - (kernel_size - 1) * scale_factor / 2; - const int end_h = ph + (kernel_size - 1) * scale_factor / 2 + 1; - for (int c = split_id; c < mask_channels; c += THREADS_PER_PIXEL) { - const int mask_w = (c % kernel_size) * scale_factor; - const int mask_h = (c / kernel_size % kernel_size) * scale_factor; - const int mask_x = start_w + mask_w; - const int mask_y = start_h + mask_h; - if (mask_y < 0 || mask_y > height - 1 || mask_x < 0 || mask_x > width - 1) { - shared_mask[c * WARP_SIZE + pixel_id] = 0; - continue; - } - const int mask_group = c / (kernel_size * kernel_size); - const int mask_c = (2 * mask_group + 1) * kernel_size * kernel_size - c - 1; - int mask_index = - Loc2Index(n, mask_c, mask_y, mask_x, mask_channels, height, width); - shared_mask[c * WARP_SIZE + pixel_id] = bottom_masks[mask_index]; - } - __syncthreads(); - const int channels_per_group = ceilf(channels / (float)group_size); -#pragma unroll - for (int c = split_id; c < channels; c += THREADS_PER_PIXEL) { - int mask_group = c / channels_per_group; - int top_index = Loc2Index(n, ph, pw, c, height, width, channels); - scalar_t output_val = 0; -#pragma unroll - for (int iy = start_h; iy < end_h; iy += scale_factor) { -#pragma unroll - for (int ix = start_w; ix < end_w; ix += scale_factor) { - if (iy < 0 || iy > height - 1 || ix < 0 || ix > width - 1) { - continue; - } - int mask_iy = - (iy - ph + (kernel_size - 1) * scale_factor / 2) / scale_factor; - int mask_ix = - (ix - pw + (kernel_size - 1) * scale_factor / 2) / scale_factor; - int mask_c = - (mask_group * kernel_size + mask_iy) * kernel_size + mask_ix; - int feat_index = Loc2Index(n, iy, ix, c, height, width, channels); - output_val += - shared_mask[mask_c * WARP_SIZE + pixel_id] * top_diff[feat_index]; - } - } - bottom_diff[top_index] = output_val; - } -} - -template -__global__ void FeatureSum(const int num_kernels, - const scalar_t *__restrict__ input_data, - const int scale_factor, const int channels, - const int height, const int width, - scalar_t *__restrict__ output_data) { - int index = threadIdx.x + blockIdx.x * blockDim.x; - if (index > num_kernels - 1) { - return; - } - const int split_id = threadIdx.x % THREADS_PER_PIXEL; - index = index / THREADS_PER_PIXEL; - const int pw = index % width; - const int ph = (index / width) % height; - const int n = index / width / height; - for (int c = split_id; c < channels; c += THREADS_PER_PIXEL) { - scalar_t output_val = 0; - for (int iy = ph * scale_factor; iy < (ph + 1) * scale_factor; iy++) { - for (int ix = pw * scale_factor; ix < (pw + 1) * scale_factor; ix++) { - int input_id = Loc2Index(n, iy, 
ix, c, height * scale_factor, - width * scale_factor, channels); - output_val += input_data[input_id]; - } - } - const int output_id = Loc2Index(n, ph, pw, c, height, width, channels); - output_data[output_id] = output_val; - } -} - -template -__global__ void CARAFEBackward_Mask(const int num_kernels, - const scalar_t *__restrict__ top_diff, - const scalar_t *__restrict__ bottom_data, - const int kernel_size, const int group_size, - const int scale_factor, const int channels, - const int down_height, const int down_width, - const int height, const int width, - const int mask_channels, - scalar_t *__restrict__ mask_diff) { - int index = threadIdx.x + blockIdx.x * blockDim.x; - if (index > num_kernels - 1) { - return; - } - - const int lane_id = index % WARP_SIZE; - index = index / WARP_SIZE; - const int mask_c = index % mask_channels; - // (n, c, ph, pw) is an element in the bottom_data - index = index / mask_channels; - const int pw = index % width; - const int ph = (index / width) % height; - const int n = index / width / height; - - const int down_pw = pw / scale_factor; - const int down_ph = ph / scale_factor; - - const int mask_group = mask_c / (kernel_size * kernel_size); - const int mask_loc = mask_c % (kernel_size * kernel_size); - - const int offset_x = mask_loc % kernel_size - (kernel_size - 1) / 2; - const int offset_y = - mask_loc / kernel_size % kernel_size - (kernel_size - 1) / 2; - - const int down_x = down_pw + offset_x; - const int down_y = down_ph + offset_y; - - scalar_t output_val = 0; - - if (down_y >= 0 && down_y <= down_height - 1 && down_x >= 0 && - down_x <= down_width - 1) { - const int channels_per_mask = ceilf(channels / (float)group_size); - const int start = channels_per_mask * mask_group; - const int end = min(channels_per_mask * (mask_group + 1), channels); - for (int c = start + lane_id; c < end; c += WARP_SIZE) { - int bottom_id = - Loc2Index(n, down_y, down_x, c, down_height, down_width, channels); - int top_id = Loc2Index(n, ph, pw, c, height, width, channels); - output_val += top_diff[top_id] * bottom_data[bottom_id]; - } - } -#if defined(HIP_DIFF) || defined(__ILUVATAR__) - __syncthreads(); -#else - __syncwarp(); -#endif - output_val = warpReduceSum(output_val); - if (lane_id == 0) { - const int mask_id = - Loc2Index(n, ph, pw, mask_c, height, width, mask_channels); - mask_diff[mask_id] = output_val; - } -} - -#endif // CARAFE_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/carafe_naive_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/carafe_naive_cuda_kernel.cuh deleted file mode 100644 index 48230c63..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/carafe_naive_cuda_kernel.cuh +++ /dev/null @@ -1,111 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef CARAFE_NAIVE_CUDA_KERNEL_CUH -#define CARAFE_NAIVE_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -__device__ inline int Loc2Index(const int n, const int c, const int h, - const int w, const int channel_num, - const int height, const int width) { - int index = w + (h + (c + n * channel_num) * height) * width; - return index; -} - -template -__global__ void carafe_naive_forward_cuda_kernel( - const int nthreads, const scalar_t *bottom_data, - const scalar_t *bottom_masks, scalar_t *top_data, const int kernel_size, - const int group_size, const int scale_factor, const int channels, - const int height, const int width) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the bottom_data - int pw = index % width; - int ph = (index / width) % height; - int c = (index / width / height) % channels; - int n = index / width / height / channels; - - int mask_channels = kernel_size * kernel_size * group_size; - int mask_group = c / (channels / group_size); - - int down_pw = pw / scale_factor; - int down_ph = ph / scale_factor; - int down_width = width / scale_factor; - int down_height = height / scale_factor; - int start_w = down_pw - (kernel_size - 1) / 2; - int end_w = down_pw + (kernel_size - 1) / 2 + 1; - int start_h = down_ph - (kernel_size - 1) / 2; - int end_h = down_ph + (kernel_size - 1) / 2 + 1; - - scalar_t output_val = 0; - for (int iy = start_h; iy < end_h; iy++) { - for (int ix = start_w; ix < end_w; ix++) { - if (iy < 0 || iy > down_height - 1 || ix < 0 || ix > down_width - 1) { - continue; - } - int mask_iy = iy - down_ph + (kernel_size - 1) / 2; - int mask_ix = ix - down_pw + (kernel_size - 1) / 2; - int mask_c = - (mask_group * kernel_size + mask_iy) * kernel_size + mask_ix; - int feat_index = - Loc2Index(n, c, iy, ix, channels, down_height, down_width); - int mask_index = - Loc2Index(n, mask_c, ph, pw, mask_channels, height, width); - output_val += bottom_data[feat_index] * bottom_masks[mask_index]; - } - } - top_data[index] = output_val; - } -} - -template -__global__ void carafe_naive_backward_cuda_kernel( - const int nthreads, const scalar_t *top_diff, const scalar_t *bottom_data, - const scalar_t *bottom_masks, scalar_t *bottom_diff, scalar_t *mask_diff, - const int kernel_size, const int group_size, const int scale_factor, - const int channels, const int height, const int width) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the bottom_data - int pw = index % width; - int ph = (index / width) % height; - int c = (index / width / height) % channels; - int n = index / width / height / channels; - - int mask_channels = kernel_size * kernel_size * group_size; - int mask_group = c / (channels / group_size); - - int down_pw = pw / scale_factor; - int down_ph = ph / scale_factor; - int down_width = width / scale_factor; - int down_height = height / scale_factor; - int start_w = down_pw - (kernel_size - 1) / 2; - int end_w = down_pw + (kernel_size - 1) / 2 + 1; - int start_h = down_ph - (kernel_size - 1) / 2; - int end_h = down_ph + (kernel_size - 1) / 2 + 1; - - for (int iy = start_h; iy < end_h; iy++) { - for (int ix = start_w; ix < end_w; ix++) { - if (iy < 0 || iy > down_height - 1 || ix < 0 || ix > down_width - 1) { - continue; - } - int mask_iy = iy - down_ph + (kernel_size - 1) / 2; - int mask_ix = ix - down_pw + (kernel_size - 1) / 2; - int mask_c = - (mask_group * kernel_size + mask_iy) * kernel_size + 
mask_ix; - int feat_index = - Loc2Index(n, c, iy, ix, channels, down_height, down_width); - int mask_index = - Loc2Index(n, mask_c, ph, pw, mask_channels, height, width); - atomicAdd(bottom_diff + feat_index, - bottom_masks[mask_index] * top_diff[index]); - atomicAdd(mask_diff + mask_index, - bottom_data[feat_index] * top_diff[index]); - } - } - } -} - -#endif // CARAFE_NAIVE_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/chamfer_distance_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/chamfer_distance_cuda_kernel.cuh deleted file mode 100644 index 89feea4a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/chamfer_distance_cuda_kernel.cuh +++ /dev/null @@ -1,101 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -// Modified from -// https://github.com/chrdiller/pyTorchChamferDistance/blob/master/chamfer_distance/chamfer_distance.cu -#ifndef CHAMFER_DISTANCE_CUDA_KERNEL_CUH -#define CHAMFER_DISTANCE_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -#define MAX_SHARED_SCALAR_T 6144 // 49152 / 8 = 6144 - -template -__global__ void chamfer_distance_forward_cuda_kernel(int b, int n, - const scalar_t* xyz, int m, - const scalar_t* xyz2, - scalar_t* result, - int* result_i) { - __shared__ scalar_t buf[MAX_SHARED_SCALAR_T]; - for (int i = blockIdx.x; i < b; i += gridDim.x) { - for (int k2 = 0; k2 < m; k2 += THREADS_PER_BLOCK) { - int end_k = min(m, k2 + THREADS_PER_BLOCK) - k2; - for (int j = threadIdx.x; j < end_k * 2; j += blockDim.x) { - buf[j] = xyz2[(i * m + k2) * 2 + j]; - } - __syncthreads(); - for (int j = threadIdx.x; j < n; j += blockDim.x * gridDim.y) { - scalar_t x1 = xyz[(i * n + j) * 2 + 0]; - scalar_t y1 = xyz[(i * n + j) * 2 + 1]; - int best_i = 0; - scalar_t best = 1e10; - int end_ka = end_k & (~2); - if (end_ka == THREADS_PER_BLOCK) { - for (int k = 0; k < THREADS_PER_BLOCK; k += 4) { -#pragma unroll - for (int j = 0; j < 4; ++j) { - scalar_t x2 = buf[(k + j) * 2] - x1; - scalar_t y2 = buf[(k + j) * 2 + 1] - y1; - scalar_t d = x2 * x2 + y2 * y2; - if (d < best) { - best = d; - best_i = k + k2 + j; - } - } - } - } else { - for (int k = 0; k < end_ka; k += 4) { -#pragma unroll - for (int j = 0; j < 4; ++j) { - scalar_t x2 = buf[(k + j) * 2] - x1; - scalar_t y2 = buf[(k + j) * 2 + 1] - y1; - scalar_t d = x2 * x2 + y2 * y2; - if (d < best) { - best = d; - best_i = k + k2 + j; - } - } - } - } - for (int k = end_ka; k < end_k; k++) { - scalar_t x2 = buf[k * 2 + 0] - x1; - scalar_t y2 = buf[k * 2 + 1] - y1; - scalar_t d = x2 * x2 + y2 * y2; - if (k == 0 || d < best) { - best = d; - best_i = k + k2; - } - } - if (k2 == 0 || result[(i * n + j)] > best) { - result[(i * n + j)] = best; - result_i[(i * n + j)] = best_i; - } - } - __syncthreads(); - } - } -} - -template -__global__ void chamfer_distance_backward_cuda_kernel( - int b, int n, const scalar_t* xyz1, int m, const scalar_t* xyz2, - const scalar_t* grad_dist1, const int* idx1, scalar_t* grad_xyz1, - scalar_t* grad_xyz2) { - for (int i = blockIdx.x; i < b; i += gridDim.x) { - for (int j = threadIdx.x; j < n; j += blockDim.x * gridDim.y) { - scalar_t x1 = xyz1[(i * n + j) * 2 + 0]; - scalar_t y1 = xyz1[(i * n + j) * 2 + 1]; - int j2 = idx1[i * n + j]; - scalar_t x2 = xyz2[(i * m + j2) * 2 + 0]; - scalar_t y2 = xyz2[(i * m + j2) * 2 + 1]; - scalar_t g = grad_dist1[i * n + j] * 2; - atomicAdd(&(grad_xyz1[(i * 
n + j) * 2 + 0]), g * (x1 - x2)); - atomicAdd(&(grad_xyz1[(i * n + j) * 2 + 1]), g * (y1 - y2)); - atomicAdd(&(grad_xyz2[(i * m + j2) * 2 + 0]), -(g * (x1 - x2))); - atomicAdd(&(grad_xyz2[(i * m + j2) * 2 + 1]), -(g * (y1 - y2))); - } - } -} -#endif // CHAMFER_DISTANCE_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp deleted file mode 100644 index 9e544c79..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp +++ /dev/null @@ -1,122 +0,0 @@ -#ifndef COMMON_CUDA_HELPER -#define COMMON_CUDA_HELPER - -#include -#include -using namespace std; - -#define CUDA_1D_KERNEL_LOOP(i, n) \ - for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < (n); \ - i += blockDim.x * gridDim.x) - -#define CUDA_2D_KERNEL_LOOP(i, n, j, m) \ - for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < (n); \ - i += blockDim.x * gridDim.x) \ - for (size_t j = blockIdx.y * blockDim.y + threadIdx.y; j < (m); \ - j += blockDim.y * gridDim.y) - -#define CUDA_2D_KERNEL_BLOCK_LOOP(i, n, j, m) \ - for (size_t i = blockIdx.x; i < (n); i += gridDim.x) \ - for (size_t j = blockIdx.y; j < (m); j += gridDim.y) - -#define THREADS_PER_BLOCK 512 - -inline int GET_BLOCKS(const int N, const int num_threads = THREADS_PER_BLOCK) { - int optimal_block_num = (N + num_threads - 1) / num_threads; - int max_block_num = 4096; - return std::min(optimal_block_num, max_block_num); -} - -template -__device__ T bilinear_interpolate(const T* input, const int height, - const int width, T y, T x, - const int index /* index for debug only*/) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) return 0; - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - int y_low = (int)y; - int x_low = (int)x; - int y_high; - int x_high; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. - lx; - // do bilinear interpolation - T v1 = input[y_low * width + x_low]; - T v2 = input[y_low * width + x_high]; - T v3 = input[y_high * width + x_low]; - T v4 = input[y_high * width + x_high]; - T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - - return val; -} - -template -__device__ void bilinear_interpolate_gradient( - const int height, const int width, T y, T x, T& w1, T& w2, T& w3, T& w4, - int& x_low, int& x_high, int& y_low, int& y_high, - const int index /* index for debug only*/) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - w1 = w2 = w3 = w4 = 0.; - x_low = x_high = y_low = y_high = -1; - return; - } - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - y_low = (int)y; - x_low = (int)x; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. 
- lx; - - // reference in forward - // T v1 = input[y_low * width + x_low]; - // T v2 = input[y_low * width + x_high]; - // T v3 = input[y_high * width + x_low]; - // T v4 = input[y_high * width + x_high]; - // T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - - w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - return; -} -#endif // COMMON_CUDA_HELPER diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/convex_iou_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/convex_iou_cuda_kernel.cuh deleted file mode 100644 index acff7518..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/convex_iou_cuda_kernel.cuh +++ /dev/null @@ -1,831 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef CONVEX_IOU_CUDA_KERNEL_CUH -#define CONVEX_IOU_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -#define MAXN 100 -#define NMAX 512 -__device__ const float EPS = 1E-6; - -__device__ inline int sig(float d) { return (d > EPS) - (d < -EPS); } - -struct Point { - float x, y; - __device__ Point() {} - __device__ Point(float x, float y) : x(x), y(y) {} -}; - -__device__ inline bool point_same(Point& a, Point& b) { - return sig(a.x - b.x) == 0 && sig(a.y - b.y) == 0; -} - -__device__ inline void swap1(Point* a, Point* b) { - Point temp; - temp.x = a->x; - temp.y = a->y; - - a->x = b->x; - a->y = b->y; - - b->x = temp.x; - b->y = temp.y; -} - -__device__ inline void reverse1(Point* a, const int n) { - for (int i = 0; i < (n - 1) / 2.0; i++) { - Point* j = &(a[i]); - Point* k = &(a[n - 1 - i]); - swap1(j, k); - } -} - -__device__ inline float cross(Point o, Point a, Point b) { - return (a.x - o.x) * (b.y - o.y) - (b.x - o.x) * (a.y - o.y); -} - -__device__ inline float dis(Point a, Point b) { - return (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y); -} -__device__ inline float area(Point* ps, int n) { - ps[n] = ps[0]; - float res = 0; - for (int i = 0; i < n; i++) { - res += ps[i].x * ps[i + 1].y - ps[i].y * ps[i + 1].x; - } - return res / 2.0; -} -__device__ inline float polygon_area_grad(Point* ps, int n, - int* polygon_to_pred_index, - int n_pred, float* grad_C) { - ps[n] = ps[0]; - float partion_grad[4 * 30 + 2]; - float res = 0; - for (int i = 0; i < n; i++) { - res += ps[i].x * ps[i + 1].y - ps[i].y * ps[i + 1].x; - partion_grad[i * 4 + 2] = ps[i + 1].y; - partion_grad[i * 4 + 3] = -ps[i + 1].x; - if (i != n - 1) { - partion_grad[i * 4 + 4] = -ps[i].y; - partion_grad[i * 4 + 5] = ps[i].x; - } else { - partion_grad[0] = -ps[i].y; - partion_grad[1] = ps[i].x; - } - } - for (int i = 0; i < n; i++) { - for (int j = 0; j < n_pred; j++) { - if (i == polygon_to_pred_index[j]) { - grad_C[2 * polygon_to_pred_index[j + n_pred]] = - (partion_grad[i * 4] + partion_grad[i * 4 + 2]) / 2; - break; - } - } - for (int j = 0; j < n_pred; j++) { - if (i == polygon_to_pred_index[j]) { - grad_C[2 * polygon_to_pred_index[j + n_pred] + 1] = - (partion_grad[i * 4 + 1] + partion_grad[i * 4 + 1 + 2]) / 2; - break; - } - } - } - - return res / 2.0; -} - -__device__ inline int lineCross(Point a, Point b, Point c, Point d, Point& p, - float* cut_grad, int m, int n, int i) { - float s1, s2; - float s2_s1_2; - float ds1_dxc, ds1_dyc, ds2_dxd, ds2_dyd; - float dxp_dxc, dxp_dyc, dxp_dxd, dxp_dyd, dyp_dxc, dyp_dyc, dyp_dxd, dyp_dyd; - s1 = cross(a, b, c); - s2 = cross(a, b, d); - - ds1_dxc = -(b.y - a.y); - ds1_dyc = b.x - 
a.x; - ds2_dxd = ds1_dxc; - ds2_dyd = ds1_dyc; - s2_s1_2 = (s2 - s1) * (s2 - s1); - - if (sig(s1) == 0 && sig(s2) == 0) return 2; - if (sig(s2 - s1) == 0) return 0; - - dxp_dxc = - ((s2 - d.x * ds1_dxc) * (s2 - s1) - (c.x * s2 - d.x * s1) * (-ds1_dxc)) / - (s2_s1_2); - dxp_dyc = - ((0 - d.x * ds1_dyc) * (s2 - s1) - (c.x * s2 - d.x * s1) * (-ds1_dyc)) / - (s2_s1_2); - dxp_dxd = - ((c.x * ds2_dxd - s1) * (s2 - s1) - (c.x * s2 - d.x * s1) * (ds2_dxd)) / - (s2_s1_2); - dxp_dyd = - ((c.x * ds2_dyd - 0) * (s2 - s1) - (c.x * s2 - d.x * s1) * (ds2_dyd)) / - (s2_s1_2); - - dyp_dxc = - ((0 - d.y * ds1_dxc) * (s2 - s1) - (c.y * s2 - d.y * s1) * (-ds1_dxc)) / - (s2_s1_2); - dyp_dyc = - ((s2 - d.y * ds1_dyc) * (s2 - s1) - (c.y * s2 - d.y * s1) * (-ds1_dyc)) / - (s2_s1_2); - dyp_dxd = - ((c.y * ds2_dxd - 0) * (s2 - s1) - (c.y * s2 - d.y * s1) * (ds2_dxd)) / - (s2_s1_2); - dyp_dyd = - ((c.y * ds2_dyd - s1) * (s2 - s1) - (c.y * s2 - d.y * s1) * (ds2_dyd)) / - (s2_s1_2); - - p.x = (c.x * s2 - d.x * s1) / (s2 - s1); - p.y = (c.y * s2 - d.y * s1) / (s2 - s1); - if (i == n - 1) { - cut_grad[4 * n * m + 4 * i] = dxp_dxc; // + dyp_dxc; - cut_grad[4 * n * m + 4 * i + 1] = dyp_dxc; - cut_grad[4 * n * m + 4 * i + 2] = dxp_dyc; // + dyp_dyc; - cut_grad[4 * n * m + 4 * i + 3] = dyp_dyc; - cut_grad[4 * n * m + 0] = dxp_dxd; // + dyp_dxd; - cut_grad[4 * n * m + 1] = dyp_dxd; - cut_grad[4 * n * m + 2] = dxp_dyd; // + dyp_dyd; - cut_grad[4 * n * m + 3] = dyp_dyd; - } else { - cut_grad[4 * n * m + 4 * i] = dxp_dxc; // + dyp_dxc; - cut_grad[4 * n * m + 4 * i + 1] = dyp_dxc; - cut_grad[4 * n * m + 4 * i + 2] = dxp_dyc; // + dyp_dyc; - cut_grad[4 * n * m + 4 * i + 3] = dyp_dyc; - cut_grad[4 * n * m + 4 * (i + 1)] = dxp_dxd; // + dyp_dxd; - cut_grad[4 * n * m + 4 * (i + 1) + 1] = dyp_dxd; - cut_grad[4 * n * m + 4 * (i + 1) + 2] = dxp_dyd; // + dyp_dyd; - cut_grad[4 * n * m + 4 * (i + 1) + 3] = dyp_dyd; - } - - return 1; -} -__device__ inline void polygon_cut(Point* p, int& n, Point a, Point b, - float* cut_grad) { - Point pp[MAXN]; - float ccur_grad[MAXN] = {}; - int m = 0; - p[n] = p[0]; - int k = n; - for (int i = 0; i < n; i++) { - if (sig(cross(a, b, p[i])) > 0) { - pp[m] = p[i]; - ccur_grad[4 * n * m + 4 * i] = 1.0; - ccur_grad[4 * n * m + 4 * i + 3] = 1.0; - m++; - } - if (sig(cross(a, b, p[i])) != sig(cross(a, b, p[i + 1]))) { - lineCross(a, b, p[i], p[i + 1], pp[m], ccur_grad, m, n, i); - m++; - } - } - - n = 0; - for (int i = 0; i < m; i++) { - if (!i || !(point_same(pp[i], pp[i - 1]))) { - p[n] = pp[i]; - for (int j = 0; j < 4 * k; j++) { - cut_grad[4 * k * n + j] = ccur_grad[4 * k * i + j]; - } - n++; - } - } - - while (n > 1 && point_same(p[n - 1], p[0])) n--; -} - -__device__ inline float intersectArea(Point a, Point b, Point c, Point d, - float* grad_AB, int order, - int convex_n) { - Point o(0, 0); - int res_flag = 0; - int s1 = sig(cross(o, a, b)); - int s2 = sig(cross(o, c, d)); - if (s1 == 0 || s2 == 0) return 0.0; - if (s1 == -1) { - Point* i = &a; - Point* j = &b; - swap1(i, j); - res_flag = 1; - } - if (s2 == -1) { - Point* i = &c; - Point* j = &d; - swap1(i, j); - } - Point p[10] = {o, a, b}; - int n = 3, n0 = 3, n1, n2, n3; - float cut_grad1[MAXN] = {}; - float cut_grad2[MAXN] = {}; - float cut_grad3[MAXN] = {}; - float p1_p_grad[10][10] = {}; - float p2_p1_grad[10][10] = {}; - float p3_p2_grad[10][10] = {}; - - float p3_p1_grad[10][10] = {}; - float p3_p_grad[10][10] = {}; - - // 1 - polygon_cut(p, n, o, c, cut_grad1); - n1 = n; - for (int i = 0; i < n; i++) { - for (int j = 0; j < 4 * n0; j++) { - if 
(!(j % 2)) { - p1_p_grad[2 * i][j / 2] = cut_grad1[4 * n0 * i + j]; - } else { - p1_p_grad[2 * i + 1][j / 2] = cut_grad1[4 * n0 * i + j]; - } - } - } - - // 2 - polygon_cut(p, n, c, d, cut_grad2); - n2 = n; - for (int i = 0; i < n; i++) { - for (int j = 0; j < 4 * n1; j++) { - if (!(j % 2)) { - p2_p1_grad[2 * i][j / 2] = cut_grad2[4 * n1 * i + j]; - } else { - p2_p1_grad[2 * i + 1][j / 2] = cut_grad2[4 * n1 * i + j]; - } - } - } - // 3 - polygon_cut(p, n, d, o, cut_grad3); - n3 = n; - for (int i = 0; i < n; i++) { - for (int j = 0; j < 4 * n2; j++) { - if (!(j % 2)) { - p3_p2_grad[2 * i][j / 2] = cut_grad3[4 * n2 * i + j]; - } else { - p3_p2_grad[2 * i + 1][j / 2] = cut_grad3[4 * n2 * i + j]; - } - } - } - - // mul - // p3_p2(n3 * n2) * p2_p1(n2 * n1) = p3_p1 (n3 * n1) - for (int i = 0; i < 2 * n3; i++) { - for (int j = 0; j < 2 * n1; j++) { - float sum = 0.0; - for (int m = 0; m < 2 * n2; m++) { - sum = sum + p3_p2_grad[i][m] * p2_p1_grad[m][j]; - } - p3_p1_grad[i][j] = sum; - } - } - - // p3_p1 (n3 * n1) * p1_p (n1 * n0) = p3_p (n3 * n0) - for (int i = 0; i < 2 * n3; i++) { - for (int j = 0; j < 2 * n0; j++) { - float sum = 0.0; - for (int m = 0; m < 2 * n1; m++) { - sum = sum + p3_p1_grad[i][m] * p1_p_grad[m][j]; - } - p3_p_grad[i][j] = sum; - } - } - - // calculate S_grad - int polygon_index_box_index[20]; - float grad_polygon[20]; - float S_grad[6]; - - for (int i = 0; i < n3; i++) { - polygon_index_box_index[i] = i; - polygon_index_box_index[i + n3] = i; - } - - float res = - polygon_area_grad(p, n3, polygon_index_box_index, n3, grad_polygon); - - if (s1 * s2 == -1) { - for (int j = 0; j < 2 * 3; j++) { - float sum = 0.0; - for (int m = 0; m < 2 * n3; m++) { - sum = sum - grad_polygon[m] * p3_p_grad[m][j]; - } - S_grad[j] = sum; - } - - if (order != convex_n - 1) { - if (res_flag) { - grad_AB[2 * order] += S_grad[4]; - grad_AB[2 * order + 1] += S_grad[5]; - grad_AB[2 * order + 2] += S_grad[2]; - grad_AB[2 * order + 3] += S_grad[3]; - - } else { - grad_AB[2 * order] += S_grad[2]; - grad_AB[2 * order + 1] += S_grad[3]; - grad_AB[2 * order + 2] += S_grad[4]; - grad_AB[2 * order + 3] += S_grad[5]; - } - } else { - if (res_flag) { - grad_AB[2 * order] += S_grad[4]; - grad_AB[2 * order + 1] += S_grad[5]; - grad_AB[0] += S_grad[2]; - grad_AB[1] += S_grad[3]; - - } else { - grad_AB[2 * order] += S_grad[2]; - grad_AB[2 * order + 1] += S_grad[3]; - grad_AB[0] += S_grad[4]; - grad_AB[1] += S_grad[5]; - } - } - res = -res; - } else { - for (int j = 0; j < 2 * 3; j++) { - float sum = 0.0; - for (int m = 0; m < 2 * n3; m++) { - sum = sum + grad_polygon[m] * p3_p_grad[m][j]; - } - S_grad[j] = sum; - } - - if (order != convex_n - 1) { - if (res_flag) { - grad_AB[2 * order] += S_grad[4]; - grad_AB[2 * order + 1] += S_grad[5]; - grad_AB[2 * order + 2] += S_grad[2]; - grad_AB[2 * order + 3] += S_grad[3]; - } else { - grad_AB[2 * order] += S_grad[2]; - grad_AB[2 * order + 1] += S_grad[3]; - grad_AB[2 * order + 2] += S_grad[4]; - grad_AB[2 * order + 3] += S_grad[5]; - } - } else { - if (res_flag) { - grad_AB[2 * order] += S_grad[4]; - grad_AB[2 * order + 1] += S_grad[5]; - grad_AB[0] += S_grad[2]; - grad_AB[1] += S_grad[3]; - } else { - grad_AB[2 * order] += S_grad[2]; - grad_AB[2 * order + 1] += S_grad[3]; - grad_AB[0] += S_grad[4]; - grad_AB[1] += S_grad[5]; - } - } - } - return res; -} - -__device__ inline float intersectAreaO(Point* ps1, int n1, Point* ps2, int n2, - float* grad_AB) { - if (area(ps1, n1) < 0) reverse1(ps1, n1); - if (area(ps2, n2) < 0) reverse1(ps2, n2); - ps1[n1] = ps1[0]; - ps2[n2] 
= ps2[0]; - float res = 0; - for (int i = 0; i < n1; i++) { - for (int j = 0; j < n2; j++) { - res += - intersectArea(ps1[i], ps1[i + 1], ps2[j], ps2[j + 1], grad_AB, i, n1); - } - } - return res; -} - -__device__ inline void Jarvis(Point* in_poly, int& n_poly) { - Point p_max, p_k; - int max_index, k_index; - int Stack[NMAX] = {}, top1, top2; - float sign; - Point right_point[10], left_point[10]; - - for (int i = 0; i < n_poly; i++) { - if (in_poly[i].y < in_poly[0].y || - in_poly[i].y == in_poly[0].y && in_poly[i].x < in_poly[0].x) { - Point* j = &(in_poly[0]); - Point* k = &(in_poly[i]); - swap1(j, k); - } - if (i == 0) { - p_max = in_poly[0]; - max_index = 0; - } - if (in_poly[i].y > p_max.y || - in_poly[i].y == p_max.y && in_poly[i].x > p_max.x) { - p_max = in_poly[i]; - max_index = i; - } - } - - if (max_index == 0) { - max_index = 1; - p_max = in_poly[max_index]; - } - - k_index = 0, Stack[0] = 0, top1 = 0; - while (k_index != max_index) { - p_k = p_max; - k_index = max_index; - for (int i = 1; i < n_poly; i++) { - sign = cross(in_poly[Stack[top1]], in_poly[i], p_k); - if ((sign > 0) || ((sign == 0) && (dis(in_poly[Stack[top1]], in_poly[i]) > - dis(in_poly[Stack[top1]], p_k)))) { - p_k = in_poly[i]; - k_index = i; - } - } - top1++; - Stack[top1] = k_index; - } - for (int i = 0; i <= top1; i++) right_point[i] = in_poly[Stack[i]]; - - k_index = 0, Stack[0] = 0, top2 = 0; - - while (k_index != max_index) { - p_k = p_max; - k_index = max_index; - for (int i = 1; i < n_poly; i++) { - sign = cross(in_poly[Stack[top2]], in_poly[i], p_k); - if ((sign < 0) || (sign == 0) && (dis(in_poly[Stack[top2]], in_poly[i]) > - dis(in_poly[Stack[top2]], p_k))) { - p_k = in_poly[i]; - k_index = i; - } - } - top2++; - Stack[top2] = k_index; - } - for (int i = top2 - 1; i >= 0; i--) left_point[i] = in_poly[Stack[i]]; - - for (int i = 0; i < top1 + top2; i++) { - if (i <= top1) { - in_poly[i] = right_point[i]; - } else { - in_poly[i] = left_point[top2 - (i - top1)]; - } - } - n_poly = top1 + top2; -} - -__device__ inline float intersectAreaPoly(Point* ps1, int n1, Point* ps2, - int n2, float* grad_C) { - Point polygon[MAXN]; - int n = n1 + n2, n_poly = 0; - for (int i = 0; i < n1; i++) { - for (int j = 0; j < n - n1; j++) { - if (point_same(ps1[i], ps2[j])) { - for (int k = j; k < n - n1 - 1; k++) { - ps2[k] = ps2[k + 1]; - } - n2--; - break; - } - } - } - n_poly = n1 + n2; - for (int i = 0; i < n_poly; i++) { - if (i < n1) { - polygon[i] = ps1[i]; - } else { - polygon[i] = ps2[i - n1]; - } - } - - Jarvis(polygon, n_poly); - - int polygon_to_pred_index[18] = {-1, -1, -1, -1, -1, -1, -1, -1, -1, - -1, -1, -1, -1, -1, -1, -1, -1, -1}; - int n_pred = 0; - for (int i = 0; i < n_poly; i++) { - for (int j = 0; j < n1; j++) { - if (polygon[i].x == ps1[j].x && polygon[i].y == ps1[j].y) { - polygon_to_pred_index[n_pred] = i; - polygon_to_pred_index[n_pred + n1] = j; - n_pred += 1; - break; - } - } - } - if (n_pred == 0) { - float polygon_area = fabs(area(polygon, n_poly)); - for (int i = 0; i < 18; i++) { - grad_C[i] = 0.0; - } - return polygon_area; - } else { - float polygon_area = - polygon_area_grad(polygon, n_poly, polygon_to_pred_index, n1, grad_C); - if (polygon_area < 0) { - for (int i = 0; i < 18; i++) { - grad_C[i] = -grad_C[i]; - } - } - return fabs(polygon_area); - } -} - -// convex_find and get the polygon_index_box_index -__device__ inline void Jarvis_and_index(Point* in_poly, int& n_poly, - int* points_to_convex_ind) { - int n_input = n_poly; - Point input_poly[20]; - for (int i = 0; i < n_input; i++) 
{ - input_poly[i].x = in_poly[i].x; - input_poly[i].y = in_poly[i].y; - } - Point p_max, p_k; - int max_index, k_index; - int Stack[20], top1, top2; - float sign; - Point right_point[10], left_point[10]; - - for (int i = 0; i < n_poly; i++) { - if (in_poly[i].y < in_poly[0].y || - in_poly[i].y == in_poly[0].y && in_poly[i].x < in_poly[0].x) { - Point* j = &(in_poly[0]); - Point* k = &(in_poly[i]); - swap1(j, k); - } - if (i == 0) { - p_max = in_poly[0]; - max_index = 0; - } - if (in_poly[i].y > p_max.y || - in_poly[i].y == p_max.y && in_poly[i].x > p_max.x) { - p_max = in_poly[i]; - max_index = i; - } - } - if (max_index == 0) { - max_index = 1; - p_max = in_poly[max_index]; - } - - k_index = 0, Stack[0] = 0, top1 = 0; - while (k_index != max_index) { - p_k = p_max; - k_index = max_index; - for (int i = 1; i < n_poly; i++) { - sign = cross(in_poly[Stack[top1]], in_poly[i], p_k); - if ((sign > 0) || ((sign == 0) && (dis(in_poly[Stack[top1]], in_poly[i]) > - dis(in_poly[Stack[top1]], p_k)))) { - p_k = in_poly[i]; - k_index = i; - } - } - top1++; - Stack[top1] = k_index; - } - for (int i = 0; i <= top1; i++) { - right_point[i] = in_poly[Stack[i]]; - } - - k_index = 0, Stack[0] = 0, top2 = 0; - - while (k_index != max_index) { - p_k = p_max; - k_index = max_index; - for (int i = 1; i < n_poly; i++) { - sign = cross(in_poly[Stack[top2]], in_poly[i], p_k); - if ((sign < 0) || (sign == 0) && (dis(in_poly[Stack[top2]], in_poly[i]) > - dis(in_poly[Stack[top2]], p_k))) { - p_k = in_poly[i]; - k_index = i; - } - } - top2++; - Stack[top2] = k_index; - } - - for (int i = top2 - 1; i >= 0; i--) { - left_point[i] = in_poly[Stack[i]]; - } - - for (int i = 0; i < top1 + top2; i++) { - if (i <= top1) { - in_poly[i] = right_point[i]; - } else { - in_poly[i] = left_point[top2 - (i - top1)]; - } - } - n_poly = top1 + top2; - for (int i = 0; i < n_poly; i++) { - for (int j = 0; j < n_input; j++) { - if (point_same(in_poly[i], input_poly[j])) { - points_to_convex_ind[i] = j; - break; - } - } - } -} - -template -__device__ inline float devrIoU(T const* const p, T const* const q, - T* point_grad, const int idx) { - Point ps1[MAXN], ps2[MAXN]; - - Point convex[MAXN]; - for (int i = 0; i < 9; i++) { - convex[i].x = (float)p[i * 2]; - convex[i].y = (float)p[i * 2 + 1]; - } - int n_convex = 9; - int points_to_convex_ind[9] = {-1, -1, -1, -1, -1, -1, -1, -1, -1}; - Jarvis_and_index(convex, n_convex, points_to_convex_ind); - - int n1 = n_convex; - int n2 = 4; - - for (int i = 0; i < n1; i++) { - ps1[i].x = (float)convex[i].x; - ps1[i].y = (float)convex[i].y; - } - - for (int i = 0; i < n2; i++) { - ps2[i].x = (float)q[i * 2]; - ps2[i].y = (float)q[i * 2 + 1]; - } - - int polygon_index_box_index[18]; - for (int i = 0; i < n1; i++) { - polygon_index_box_index[i] = i; - polygon_index_box_index[i + n1] = i; - } - - float grad_A[18] = {}; - float grad_AB[18] = {}; - float grad_C[18] = {}; - - float inter_area = intersectAreaO(ps1, n1, ps2, n2, grad_AB); - float S_pred = - polygon_area_grad(ps1, n1, polygon_index_box_index, n1, grad_A); - if (S_pred < 0) { - for (int i = 0; i < n_convex * 2; i++) { - grad_A[i] = -grad_A[i]; - } - } - float union_area = fabs(S_pred) + fabs(area(ps2, n2)) - inter_area; - - float iou = inter_area / union_area; - float polygon_area = intersectAreaPoly(ps1, n1, ps2, n2, grad_C); - - // printf("%d:live\n", idx); - float rot_giou = iou - (polygon_area - union_area) / polygon_area; - - float grad_point_temp[18] = {}; - - for (int i = 0; i < n_convex; i++) { - int grad_point = points_to_convex_ind[i]; 
- grad_point_temp[2 * grad_point] = - (float)((union_area + inter_area) / (union_area * union_area) * - grad_AB[2 * i] - - iou / union_area * grad_A[2 * i] - - 1 / polygon_area * (grad_AB[2 * i] - grad_A[2 * i]) - - (union_area) / polygon_area / polygon_area * grad_C[2 * i]); - grad_point_temp[2 * grad_point + 1] = - (float)((union_area + inter_area) / (union_area * union_area) * - grad_AB[2 * i + 1] - - iou / union_area * grad_A[2 * i + 1] - - 1 / polygon_area * (grad_AB[2 * i + 1] - grad_A[2 * i + 1]) - - (union_area) / polygon_area / polygon_area * grad_C[2 * i + 1]); - } - - for (int i = 0; i < 9; i++) { - point_grad[2 * i] = grad_point_temp[2 * i]; - point_grad[2 * i + 1] = grad_point_temp[2 * i + 1]; - } - return (float)rot_giou; -} - -template -__global__ void convex_giou_cuda_kernel(const int ex_n_boxes, - const int gt_n_boxes, const T* ex_boxes, - const T* gt_boxes, T* point_grad) { - CUDA_1D_KERNEL_LOOP(index, ex_n_boxes) { - const T* cur_box = ex_boxes + index * 18; - const T* cur_gt_box = gt_boxes + index * 8; - T* cur_grad = point_grad + index * 19; - T giou = devrIoU(cur_box, cur_gt_box, cur_grad, threadIdx.x); - cur_grad[18] = giou; - } -} - -__device__ inline int lineCross(Point a, Point b, Point c, Point d, Point& p) { - float s1, s2; - s1 = cross(a, b, c); - s2 = cross(a, b, d); - if (sig(s1) == 0 && sig(s2) == 0) return 2; - if (sig(s2 - s1) == 0) return 0; - p.x = (c.x * s2 - d.x * s1) / (s2 - s1); - p.y = (c.y * s2 - d.y * s1) / (s2 - s1); - return 1; -} - -__device__ inline void polygon_cut(Point* p, int& n, Point a, Point b) { - Point pp[MAXN]; - int m = 0; - p[n] = p[0]; - for (int i = 0; i < n; i++) { - if (sig(cross(a, b, p[i])) > 0) { - pp[m] = p[i]; - m++; - } - if (sig(cross(a, b, p[i])) != sig(cross(a, b, p[i + 1]))) { - lineCross(a, b, p[i], p[i + 1], pp[m]); - m++; - } - } - n = 0; - for (int i = 0; i < m; i++) { - if (!i || !(point_same(pp[i], pp[i - 1]))) { - p[n] = pp[i]; - n++; - } - } - - while (n > 1 && point_same(p[n - 1], p[0])) n--; -} - -__device__ inline float intersectArea(Point a, Point b, Point c, Point d) { - Point o(0, 0); - int s1 = sig(cross(o, a, b)); - int s2 = sig(cross(o, c, d)); - if (s1 == 0 || s2 == 0) return 0.0; - if (s1 == -1) { - Point* i = &a; - Point* j = &b; - swap1(i, j); - } - if (s2 == -1) { - Point* i = &c; - Point* j = &d; - swap1(i, j); - } - Point p[10] = {o, a, b}; - int n = 3; - - polygon_cut(p, n, o, c); - polygon_cut(p, n, c, d); - polygon_cut(p, n, d, o); - float res = area(p, n); - if (s1 * s2 == -1) res = -res; - return res; -} -__device__ inline float intersectAreaO(Point* ps1, int n1, Point* ps2, - int n2) { - if (area(ps1, n1) < 0) reverse1(ps1, n1); - if (area(ps2, n2) < 0) reverse1(ps2, n2); - ps1[n1] = ps1[0]; - ps2[n2] = ps2[0]; - float res = 0; - for (int i = 0; i < n1; i++) { - for (int j = 0; j < n2; j++) { - res += intersectArea(ps1[i], ps1[i + 1], ps2[j], ps2[j + 1]); - } - } - return res; -} - -template -__device__ inline float devrIoU(T const* const p, T const* const q) { - Point ps1[MAXN], ps2[MAXN]; - Point convex[MAXN]; - for (int i = 0; i < 9; i++) { - convex[i].x = (float)p[i * 2]; - convex[i].y = (float)p[i * 2 + 1]; - } - int n_convex = 9; - int points_to_convex_ind[9] = {-1, -1, -1, -1, -1, -1, -1, -1, -1}; - Jarvis_and_index(convex, n_convex, points_to_convex_ind); - int n1 = n_convex; - for (int i = 0; i < n1; i++) { - ps1[i].x = (float)convex[i].x; - ps1[i].y = (float)convex[i].y; - } - int n2 = 4; - for (int i = 0; i < n2; i++) { - ps2[i].x = (float)q[i * 2]; - ps2[i].y = (float)q[i * 2 
+ 1]; - } - float inter_area = intersectAreaO(ps1, n1, ps2, n2); - float S_pred = area(ps1, n1); - float union_area = fabs(S_pred) + fabs(area(ps2, n2)) - inter_area; - float iou = inter_area / union_area; - return (float)iou; -} - -template -__global__ void convex_iou_cuda_kernel(const int ex_n_boxes, - const int gt_n_boxes, const T* ex_boxes, - const T* gt_boxes, T* iou) { - CUDA_1D_KERNEL_LOOP(index, ex_n_boxes) { - const T* cur_box = ex_boxes + index * 18; - for (int i = 0; i < gt_n_boxes; i++) { - iou[index * gt_n_boxes + i] = devrIoU(cur_box, gt_boxes + i * 8); - } - } -} -#endif // CONVEX_IOU_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/correlation_cuda.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/correlation_cuda.cuh deleted file mode 100644 index 2f7f1129..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/correlation_cuda.cuh +++ /dev/null @@ -1,225 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -// Modified from -// https://github.com/ClementPinard/Pytorch-Correlation-extension/blob/master/Correlation_Module/correlation_cuda_kernel.cu -// Original licence: Under MIT License - -#ifndef CORRELATION_CUDA -#define CORRELATION_CUDA - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -#include -#include -// Using is recommended in the official documentation in -// https://pytorch.org/tutorials/advanced/cpp_extension.html#writing-the-c-op. -// However, we use for compatibility with CUDA 9.0 -// Read https://github.com/pytorch/extension-cpp/issues/35 for more details. -#include - -#include -#include - -using namespace torch; - -#define TensorAcc4R PackedTensorAccessor32 -#define TensorAcc5R PackedTensorAccessor32 -#define WITHIN_BOUNDS(x, y, H, W) (x >= 0 && x < H && y >= 0 && y < W) - -#define WARP_SIZE 32 -#define FULL_MASK 0xffffffff - -template -__global__ void correlation_forward_cuda_kernel( - const TensorAcc4R rInput1, const TensorAcc4R rInput2, TensorAcc5R output, - int kH, int kW, int patchH, int patchW, int padH, int padW, int dilationH, - int dilationW, int dilation_patchH, int dilation_patchW, int dH, int dW) { - const int iH = rInput1.size(1); - const int iW = rInput1.size(2); - const int C = rInput1.size(3); - - const int n = blockIdx.x; - const int h = blockIdx.y * blockDim.y + threadIdx.y; - const int w = blockIdx.z * blockDim.z + threadIdx.z; - const int thread = threadIdx.x; - - const int start_i = -padH + h * dH; - const int start_j = -padW + w * dW; - - const int patchRadH = dilation_patchH * (patchH - 1) / 2; - const int patchRadW = dilation_patchW * (patchW - 1) / 2; - - for (int ph = 0; ph < patchH; ++ph) { - int ph_dilated = ph * dilation_patchH - patchRadH; - for (int pw = 0; pw < patchW; ++pw) { - int pw_dilated = pw * dilation_patchW - patchRadW; - scalar_t prod_sum = 0.0f; - for (int i = 0; i < kH; ++i) { - int i1 = start_i + i * dilationH; - int i2 = i1 + ph_dilated; - if - WITHIN_BOUNDS(i1, i2, iH, iH) { - for (int j = 0; j < kW; ++j) { - int j1 = start_j + j * dilationW; - int j2 = j1 + pw_dilated; - if - WITHIN_BOUNDS(j1, j2, iW, iW) { - for (int c = thread; c < C; c += WARP_SIZE) { - scalar_t v1 = rInput1[n][i1][j1][c]; - scalar_t v2 = rInput2[n][i2][j2][c]; - prod_sum += v1 * v2; - } - } - } - } - } - // accumulate - for (int offset = 16; offset > 0; offset /= 2) - prod_sum += __shfl_down_sync(FULL_MASK, float(prod_sum), offset); - if (thread == 0) { 
- output[n][ph][pw][h][w] = prod_sum; - } - } - } -} - -template -__global__ void correlation_backward_cuda_kernel_input1( - const TensorAcc5R grad_output, const TensorAcc4R input2, - TensorAcc4R grad_input1, const int kH, const int kW, const int patchH, - const int patchW, const int padH, const int padW, const int dilationH, - const int dilationW, const int dilation_patchH, const int dilation_patchW, - const int dH, const int dW) { - const int iH = input2.size(1); - const int iW = input2.size(2); - const int C = input2.size(3); - - const int H = grad_output.size(3); - const int W = grad_output.size(4); - - const int patchRadH = (patchH - 1) / 2; - const int patchRadW = (patchW - 1) / 2; - - const int n = blockIdx.x; - const int h = blockIdx.y; - const int w = blockIdx.z; - - const int h_2 = h + padH; - const int w_2 = w + padW; - const int min_h = h_2 - kH * dilationH; - const int min_w = w_2 - kW * dilationW; - - extern __shared__ __align__(sizeof(4)) unsigned char grad_cache_char[]; - scalar_t *grad_cache = reinterpret_cast(grad_cache_char); - for (int i = threadIdx.x; i < patchH * patchW; i += blockDim.x) { - const int ph = i / patchW; - const int pw = i % patchW; - int i1 = h + dilation_patchH * (ph - patchRadH); - int j1 = w + dilation_patchW * (pw - patchRadW); - - if (WITHIN_BOUNDS(i1, j1, iH, iW)) { - scalar_t grad_val = 0.0f; - for (int h_3 = h_2; h_3 > min_h; h_3 -= dilationH) { - int i2 = (h_3) / dH; - if (i2 * dH != h_3) continue; - for (int w_3 = w_2; w_3 > min_w; w_3 -= dilationW) { - int j2 = (w_3) / dW; - if (j2 * dW != w_3) continue; - if (WITHIN_BOUNDS(i2, j2, H, W)) { - grad_val += grad_output[n][ph][pw][i2][j2]; - } - } - } - grad_cache[i] = grad_val; - } - } - __syncthreads(); - - for (int c = threadIdx.x; c < C; c += blockDim.x) { - scalar_t grad_input_val = 0.0f; - for (int ph = 0; ph < patchH; ++ph) { - int i1 = h + dilation_patchH * (ph - patchRadH); - for (int pw = 0; pw < patchW; ++pw) { - int j1 = w + dilation_patchW * (pw - patchRadW); - if (WITHIN_BOUNDS(i1, j1, iH, iW)) { - grad_input_val += input2[n][i1][j1][c] * grad_cache[ph * patchW + pw]; - } - } - } - grad_input1[n][c][h][w] = grad_input_val; - } -} - -template -__global__ void correlation_backward_cuda_kernel_input2( - const TensorAcc5R grad_output, const TensorAcc4R input1, - TensorAcc4R grad_input2, int kH, int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW) { - const int iH = input1.size(1); - const int iW = input1.size(2); - const int C = input1.size(3); - - const int patchRadH = (patchH - 1) / 2; - const int patchRadW = (patchW - 1) / 2; - - const int H = grad_output.size(3); - const int W = grad_output.size(4); - - const int dilatedKH = kH * dilationH; - const int dilatedKW = kW * dilationW; - - const int n = blockIdx.x; - const int h = blockIdx.y; - const int w = blockIdx.z; - - extern __shared__ __align__(sizeof(4)) unsigned char grad_cache_char[]; - scalar_t *grad_cache = reinterpret_cast(grad_cache_char); - for (int i = threadIdx.x; i < patchH * patchW; i += blockDim.x) { - const int ph = i / patchW; - const int pw = i % patchW; - int i1 = h - dilation_patchH * (ph - patchRadH); - int j1 = w - dilation_patchW * (pw - patchRadW); - - if (WITHIN_BOUNDS(i1, j1, iH, iW)) { - scalar_t grad_val = 0.0f; - - const int h_2 = i1 + padH; - const int w_2 = j1 + padW; - const int min_h = h_2 - dilatedKH; - const int min_w = w_2 - dilatedKW; - - for (int h_3 = h_2; h_3 > min_h; h_3 -= dilationH) { - int i2 = (h_3) 
/ dH; - if (i2 * dH != h_3) continue; - for (int w_3 = w_2; w_3 > min_w; w_3 -= dilationW) { - int j2 = (w_3) / dW; - if (j2 * dW != w_3) continue; - if (WITHIN_BOUNDS(i2, j2, H, W)) { - grad_val += grad_output[n][ph][pw][i2][j2]; - } - } - } - grad_cache[i] = grad_val; - } - } - __syncthreads(); - - for (int c = threadIdx.x; c < C; c += blockDim.x) { - scalar_t grad_input_val = 0.0f; - for (int ph = 0; ph < patchH; ++ph) { - int i1 = h - dilation_patchH * (ph - patchRadH); - for (int pw = 0; pw < patchW; ++pw) { - int j1 = w - dilation_patchW * (pw - patchRadW); - if (WITHIN_BOUNDS(i1, j1, iH, iW)) { - grad_input_val += input1[n][i1][j1][c] * grad_cache[ph * patchW + pw]; - } - } - } - grad_input2[n][c][h][w] = grad_input_val; - } -} -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/deform_conv_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/deform_conv_cuda_kernel.cuh deleted file mode 100644 index 6b4d1bbd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/deform_conv_cuda_kernel.cuh +++ /dev/null @@ -1,367 +0,0 @@ -/*! - ******************* BEGIN Caffe Copyright Notice and Disclaimer - ***************** - * - * COPYRIGHT - * - * All contributions by the University of California: - * Copyright (c) 2014-2017 The Regents of the University of California (Regents) - * All rights reserved. - * - * All other contributions: - * Copyright (c) 2014-2017, the respective contributors - * All rights reserved. - * - * Caffe uses a shared copyright model: each contributor holds copyright over - * their contributions to Caffe. The project versioning records all such - * contribution and copyright details. If a contributor wants to further mark - * their specific copyright on a particular contribution, they should indicate - * their copyright solely in the commit message of the change when it is - * committed. - * - * LICENSE - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * - * 1. Redistributions of source code must retain the above copyright notice, - *this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright notice, - * this list of conditions and the following disclaimer in the documentation - * and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - *AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - *IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE - * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE - *FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - *DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR - *SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER - *CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, - *OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - *OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * CONTRIBUTION AGREEMENT - * - * By contributing to the BVLC/caffe repository through pull-request, comment, - * or otherwise, the contributor releases their content to the - * license and copyright terms herein. 
- * - ***************** END Caffe Copyright Notice and Disclaimer - ********************* - * - * Copyright (c) 2018 Microsoft - * Licensed under The MIT License [see LICENSE for details] - * \file modulated_deformable_im2col.cuh - * \brief Function definitions of converting an image to - * column matrix based on kernel, padding, dilation, and offset. - * These functions are mainly used in deformable convolution operators. - * \ref: https://arxiv.org/abs/1703.06211 - * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai, Xizhou Zhu, Han Hu, Dazhi Cheng - */ - -// modified from -// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda_kernel.cu - -#ifndef DEFORM_CONV_CUDA_KERNEL_CUH -#define DEFORM_CONV_CUDA_KERNEL_CUH - -#include -#ifdef MMCV_WITH_TRT -#include "common_cuda_helper.hpp" -#else // MMCV_WITH_TRT -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else // MMCV_USE_PARROTS -#include "pytorch_cuda_helper.hpp" -#endif // MMCV_USE_PARROTS -#endif // MMCV_WITH_TRT - -template -__device__ T deformable_im2col_bilinear(const T *input, const int data_width, - const int height, const int width, T h, - T w) { - if (h <= -1 || height <= h || w <= -1 || width <= w) { - return 0; - } - - int h_low = floorf(h); - int w_low = floorf(w); - int h_high = h_low + 1; - int w_high = w_low + 1; - - T lh = h - h_low; - T lw = w - w_low; - T hh = 1 - lh, hw = 1 - lw; - - T v1 = 0; - if (h_low >= 0 && w_low >= 0) v1 = input[h_low * data_width + w_low]; - T v2 = 0; - if (h_low >= 0 && w_high <= width - 1) - v2 = input[h_low * data_width + w_high]; - T v3 = 0; - if (h_high <= height - 1 && w_low >= 0) - v3 = input[h_high * data_width + w_low]; - T v4 = 0; - if (h_high <= height - 1 && w_high <= width - 1) - v4 = input[h_high * data_width + w_high]; - - T w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - - T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - return val; -} - -template -__device__ T get_gradient_weight(T argmax_h, T argmax_w, const int h, - const int w, const int height, - const int width) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floorf(argmax_h); - int argmax_w_low = floorf(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - if (h == argmax_h_low && w == argmax_w_low) - weight = (h + 1 - argmax_h) * (w + 1 - argmax_w); - if (h == argmax_h_low && w == argmax_w_high) - weight = (h + 1 - argmax_h) * (argmax_w + 1 - w); - if (h == argmax_h_high && w == argmax_w_low) - weight = (argmax_h + 1 - h) * (w + 1 - argmax_w); - if (h == argmax_h_high && w == argmax_w_high) - weight = (argmax_h + 1 - h) * (argmax_w + 1 - w); - return weight; -} - -template -__device__ T get_coordinate_weight(T argmax_h, T argmax_w, const int height, - const int width, const T *im_data, - const int data_width, const int bp_dir) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floorf(argmax_h); - int argmax_w_low = floorf(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - - if (bp_dir == 0) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += -1 * (argmax_w - argmax_w_low) * - 
im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_w - argmax_w_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } else if (bp_dir == 1) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += -1 * (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } - - return weight; -} - -template -__global__ void deformable_im2col_gpu_kernel( - const int n, const T *data_im, const T *data_offset, const int height, - const int width, const int kernel_h, const int kernel_w, const int pad_h, - const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int num_channels, const int deformable_group, const int height_col, - const int width_col, T *data_col) { - CUDA_1D_KERNEL_LOOP(index, n) { - // index index of output matrix - const int w_col = index % width_col; - const int h_col = (index / width_col) % height_col; - const int b_col = (index / width_col / height_col) % batch_size; - const int c_im = (index / width_col / height_col) / batch_size; - const int c_col = c_im * kernel_h * kernel_w; - - // compute deformable group index - const int deformable_group_index = c_im / channel_per_deformable_group; - - const int h_in = h_col * stride_h - pad_h; - const int w_in = w_col * stride_w - pad_w; - T *data_col_ptr = - data_col + - ((c_col * batch_size + b_col) * height_col + h_col) * width_col + w_col; - const T *data_im_ptr = - data_im + (b_col * num_channels + c_im) * height * width; - const T *data_offset_ptr = - data_offset + (b_col * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - - for (int i = 0; i < kernel_h; ++i) { - for (int j = 0; j < kernel_w; ++j) { - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + - w_col; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - T val = static_cast(0); - const T h_im = h_in + i * dilation_h + offset_h; - const T w_im = w_in + j * dilation_w + offset_w; - if (h_im > -1 && w_im > -1 && h_im < height && w_im < width) - val = deformable_im2col_bilinear(data_im_ptr, width, height, width, - h_im, w_im); - *data_col_ptr = val; - data_col_ptr += batch_size * height_col * width_col; - } - } - } -} - -template -__global__ void deformable_col2im_gpu_kernel( - const int n, const T *data_col, const T *data_offset, const int channels, - const int height, const int width, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int 
channel_per_deformable_group, const int batch_size, - const int deformable_group, const int height_col, const int width_col, - T *grad_im) { - CUDA_1D_KERNEL_LOOP(index, n) { - const int j = (index / width_col / height_col / batch_size) % kernel_w; - const int i = - (index / width_col / height_col / batch_size / kernel_w) % kernel_h; - const int c = - index / width_col / height_col / batch_size / kernel_w / kernel_h; - // compute the start and end of the output - - const int deformable_group_index = c / channel_per_deformable_group; - - int w_out = index % width_col; - int h_out = (index / width_col) % height_col; - int b = (index / width_col / height_col) % batch_size; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - - const T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T cur_inv_h_data = h_in + i * dilation_h + offset_h; - const T cur_inv_w_data = w_in + j * dilation_w + offset_w; - - const T cur_top_grad = data_col[index]; - const int cur_h = (int)cur_inv_h_data; - const int cur_w = (int)cur_inv_w_data; - for (int dy = -2; dy <= 2; dy++) { - for (int dx = -2; dx <= 2; dx++) { - if (cur_h + dy >= 0 && cur_h + dy < height && cur_w + dx >= 0 && - cur_w + dx < width && abs(cur_inv_h_data - (cur_h + dy)) < 1 && - abs(cur_inv_w_data - (cur_w + dx)) < 1) { - int cur_bottom_grad_pos = - ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx; - T weight = get_gradient_weight(cur_inv_h_data, cur_inv_w_data, - cur_h + dy, cur_w + dx, height, width); - atomicAdd(grad_im + cur_bottom_grad_pos, weight * cur_top_grad); - } - } - } - } -} - -template -__global__ void deformable_col2im_coord_gpu_kernel( - const int n, const T *data_col, const T *data_im, const T *data_offset, - const int channels, const int height, const int width, const int kernel_h, - const int kernel_w, const int pad_h, const int pad_w, const int stride_h, - const int stride_w, const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int offset_channels, const int deformable_group, const int height_col, - const int width_col, T *grad_offset) { - CUDA_1D_KERNEL_LOOP(index, n) { - T val = 0; - int w = index % width_col; - int h = (index / width_col) % height_col; - int c = (index / width_col / height_col) % offset_channels; - int b = (index / width_col / height_col) / offset_channels; - // compute the start and end of the output - - const int deformable_group_index = c / (2 * kernel_h * kernel_w); - const int col_step = kernel_h * kernel_w; - int cnt = 0; - const T *data_col_ptr = data_col + deformable_group_index * - channel_per_deformable_group * - batch_size * width_col * height_col; - const T *data_im_ptr = - data_im + (b * deformable_group + deformable_group_index) * - channel_per_deformable_group / kernel_h / kernel_w * - height * width; - const T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - - const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w; - - for (int col_c = (offset_c / 2); col_c < 
channel_per_deformable_group; - col_c += col_step) { - const int col_pos = - (((col_c * batch_size + b) * height_col) + h) * width_col + w; - const int bp_dir = offset_c % 2; - - int j = (col_pos / width_col / height_col / batch_size) % kernel_w; - int i = - (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h; - int w_out = col_pos % width_col; - int h_out = (col_pos / width_col) % height_col; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - const int data_offset_h_ptr = - (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out); - const int data_offset_w_ptr = - (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + - w_out); - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - T inv_h = h_in + i * dilation_h + offset_h; - T inv_w = w_in + j * dilation_w + offset_w; - if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width) - inv_h = inv_w = -2; - const T weight = get_coordinate_weight(inv_h, inv_w, height, width, - data_im_ptr + cnt * height * width, - width, bp_dir); - val += weight * data_col_ptr[col_pos]; - cnt += 1; - } - - grad_offset[index] = val; - } -} - -#endif // DEFORM_CONV_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/deform_roi_pool_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/deform_roi_pool_cuda_kernel.cuh deleted file mode 100644 index 86c4bc66..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/deform_roi_pool_cuda_kernel.cuh +++ /dev/null @@ -1,186 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef DEFORM_ROI_POOL_CUDA_KERNEL_CUH -#define DEFORM_ROI_POOL_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void deform_roi_pool_forward_cuda_kernel( - const int nthreads, const T* input, const T* rois, const T* offset, - T* output, const int pooled_height, const int pooled_width, - const T spatial_scale, const int sampling_ratio, const T gamma, - const int channels, const int height, const int width) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - const T* offset_rois = rois + n * 5; - int roi_batch_ind = offset_rois[0]; - - // Do not using rounding; this implementation detail is critical - T roi_start_w = offset_rois[1] * spatial_scale - 0.5; - T roi_start_h = offset_rois[2] * spatial_scale - 0.5; - T roi_end_w = offset_rois[3] * spatial_scale - 0.5; - T roi_end_h = offset_rois[4] * spatial_scale - 0.5; - - T roi_width = roi_end_w - roi_start_w; - T roi_height = roi_end_h - roi_start_h; - - T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); - T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); - - const T* offset_input = - input + (roi_batch_ind * channels + c) * height * width; - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = - (sampling_ratio > 0) - ? sampling_ratio - : static_cast(ceilf(roi_height / pooled_height)); - int roi_bin_grid_w = - (sampling_ratio > 0) - ? 
sampling_ratio - : static_cast(ceilf(roi_width / pooled_width)); - - // Compute roi offset - if (offset != NULL) { - const T* offset_cur_w = offset + n * pooled_width * pooled_height * 2 + - ph * pooled_width + pw; - T offset_roi_w = gamma * roi_width * offset_cur_w[0]; - T offset_roi_h = - gamma * roi_height * offset_cur_w[pooled_width * pooled_height]; - roi_start_w += offset_roi_w; - roi_start_h += offset_roi_h; - } - - // We do average pooling inside a bin - const T count = max(roi_bin_grid_h * roi_bin_grid_w, 1); - T output_val = 0.; - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - const T y = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const T x = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - T val = bilinear_interpolate(offset_input, height, width, y, x, index); - output_val += val; - } - } - output[index] = output_val / count; - } -} - -template -__global__ void deform_roi_pool_backward_cuda_kernel( - const int nthreads, const T* grad_output, const T* input, const T* rois, - const T* offset, T* grad_input, T* grad_offset, const int pooled_height, - const int pooled_width, const T spatial_scale, const int sampling_ratio, - const T gamma, const int channels, const int height, const int width) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - const T* offset_rois = rois + n * 5; - int roi_batch_ind = offset_rois[0]; - const T* offset_input = - input + ((roi_batch_ind * channels + c) * height * width); - T* offset_grad_input = - grad_input + ((roi_batch_ind * channels + c) * height * width); - - // Do not using rounding; this implementation detail is critical - T roi_start_w = offset_rois[1] * spatial_scale - 0.5; - T roi_start_h = offset_rois[2] * spatial_scale - 0.5; - T roi_end_w = offset_rois[3] * spatial_scale - 0.5; - T roi_end_h = offset_rois[4] * spatial_scale - 0.5; - - T roi_width = roi_end_w - roi_start_w; - T roi_height = roi_end_h - roi_start_h; - - T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); - T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = - (sampling_ratio > 0) - ? sampling_ratio - : static_cast(ceilf(roi_height / pooled_height)); - int roi_bin_grid_w = - (sampling_ratio > 0) - ? sampling_ratio - : static_cast(ceilf(roi_width / pooled_width)); - - // Compute roi offset - if (offset != NULL) { - const T* offset_cur_w = offset + n * pooled_width * pooled_height * 2 + - ph * pooled_width + pw; - T offset_roi_w = gamma * roi_width * offset_cur_w[0]; - T offset_roi_h = - gamma * roi_height * offset_cur_w[pooled_width * pooled_height]; - roi_start_w += offset_roi_w; - roi_start_h += offset_roi_h; - } - - // We do average (integral) pooling inside a bin - const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. 
= 4 - const T grad_output_this_bin = grad_output[index] / count; - - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - const T y = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const T x = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - T w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - bilinear_interpolate_gradient(height, width, y, x, w1, w2, w3, w4, - x_low, x_high, y_low, y_high, index); - - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - atomicAdd(offset_grad_input + y_low * width + x_low, - grad_output_this_bin * w1); - atomicAdd(offset_grad_input + y_low * width + x_high, - grad_output_this_bin * w2); - atomicAdd(offset_grad_input + y_high * width + x_low, - grad_output_this_bin * w3); - atomicAdd(offset_grad_input + y_high * width + x_high, - grad_output_this_bin * w4); - if (offset != NULL) { - T input_00 = offset_input[y_low * width + x_low]; - T input_10 = offset_input[y_low * width + x_high]; - T input_01 = offset_input[y_high * width + x_low]; - T input_11 = offset_input[y_high * width + x_high]; - T ogx = gamma * roi_width * grad_output_this_bin * - (input_11 * (y - y_low) + input_10 * (y_high - y) + - input_01 * (y_low - y) + input_00 * (y - y_high)); - T ogy = gamma * roi_height * grad_output_this_bin * - (input_11 * (x - x_low) + input_01 * (x_high - x) + - input_10 * (x_low - x) + input_00 * (x - x_high)); - atomicAdd(grad_offset + n * pooled_width * pooled_height * 2 + - ph * pooled_width + pw, - ogx); - atomicAdd(grad_offset + n * pooled_width * pooled_height * 2 + - pooled_width * pooled_height + ph * pooled_width + pw, - ogy); - } - } - } - } - } -} - -#endif // DEFORM_ROI_POOL_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/diff_iou_rotated_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/diff_iou_rotated_cuda_kernel.cuh deleted file mode 100644 index 0355109e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/diff_iou_rotated_cuda_kernel.cuh +++ /dev/null @@ -1,136 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// Adapted from -// https://github.com/lilanxiao/Rotated_IoU/cuda_op/sort_vert_kernel.cu # noqa -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -#define MAX_NUM_VERT_IDX 9 -#define INTERSECTION_OFFSET 8 -#define EPSILON 1e-8 - -inline int opt_n_thread(int work_size) { - const int pow_2 = std::log(static_cast(work_size)) / std::log(2.0); - return std::max(std::min(1 << pow_2, THREADS_PER_BLOCK), 1); -} - -/* -compare normalized vertices (vertices around (0,0)) -if vertex1 < vertex2 return true. 
-order: minimum at x-aixs, become larger in anti-clockwise direction -*/ -__device__ bool compare_vertices(float x1, float y1, float x2, float y2) { - if (fabs(x1 - x2) < EPSILON && fabs(y2 - y1) < EPSILON) - return false; // if equal, return false - - if (y1 > 0 && y2 < 0) return true; - if (y1 < 0 && y2 > 0) return false; - - float n1 = x1 * x1 + y1 * y1 + EPSILON; - float n2 = x2 * x2 + y2 * y2 + EPSILON; - float diff = fabs(x1) * x1 / n1 - fabs(x2) * x2 / n2; - - if (y1 > 0 && y2 > 0) { - if (diff > EPSILON) - return true; - else - return false; - } - if (y1 < 0 && y2 < 0) { - if (diff < EPSILON) - return true; - else - return false; - } -} - -__global__ void diff_iou_rotated_sort_vertices_forward_cuda_kernel( - int b, int n, int m, const float *__restrict__ vertices, - const bool *__restrict__ mask, const int *__restrict__ num_valid, - int *__restrict__ idx) { - int batch_idx = blockIdx.x; - vertices += batch_idx * n * m * 2; - mask += batch_idx * n * m; - num_valid += batch_idx * n; - idx += batch_idx * n * MAX_NUM_VERT_IDX; - - int index = threadIdx.x; // index of polygon - int stride = blockDim.x; - for (int i = index; i < n; i += stride) { - int pad; // index of arbitrary invalid intersection point (not box corner!) - for (int j = INTERSECTION_OFFSET; j < m; ++j) { - if (!mask[i * m + j]) { - pad = j; - break; - } - } - if (num_valid[i] < 3) { - // not enough vertices, take an invalid intersection point - // (zero padding) - for (int j = 0; j < MAX_NUM_VERT_IDX; ++j) { - idx[i * MAX_NUM_VERT_IDX + j] = pad; - } - } else { - // sort the valid vertices - // note the number of valid vertices is known - // note: check that num_valid[i] < MAX_NUM_VERT_IDX - for (int j = 0; j < num_valid[i]; ++j) { - // initialize with a "big" value - float x_min = 1; - float y_min = -EPSILON; - int i_take = 0; - int i2; - float x2, y2; - if (j != 0) { - i2 = idx[i * MAX_NUM_VERT_IDX + j - 1]; - x2 = vertices[i * m * 2 + i2 * 2 + 0]; - y2 = vertices[i * m * 2 + i2 * 2 + 1]; - } - for (int k = 0; k < m; ++k) { - float x = vertices[i * m * 2 + k * 2 + 0]; - float y = vertices[i * m * 2 + k * 2 + 1]; - if (mask[i * m + k] && compare_vertices(x, y, x_min, y_min)) { - if ((j == 0) || (j != 0 && compare_vertices(x2, y2, x, y))) { - x_min = x; - y_min = y; - i_take = k; - } - } - } - idx[i * MAX_NUM_VERT_IDX + j] = i_take; - } - // duplicate the first idx - idx[i * MAX_NUM_VERT_IDX + num_valid[i]] = idx[i * MAX_NUM_VERT_IDX + 0]; - - // pad zeros - for (int j = num_valid[i] + 1; j < MAX_NUM_VERT_IDX; ++j) { - idx[i * MAX_NUM_VERT_IDX + j] = pad; - } - - // for corner case: the two boxes are exactly the same. 
- // in this case, idx would have duplicate elements, which makes the - // shoelace formula broken because of the definition, the duplicate - // elements only appear in the first 8 positions (they are "corners in - // box", not "intersection of edges") - if (num_valid[i] == 8) { - int counter = 0; - for (int j = 0; j < 4; ++j) { - int check = idx[i * MAX_NUM_VERT_IDX + j]; - for (int k = 4; k < INTERSECTION_OFFSET; ++k) { - if (idx[i * MAX_NUM_VERT_IDX + k] == check) counter++; - } - } - if (counter == 4) { - idx[i * MAX_NUM_VERT_IDX + 4] = idx[i * MAX_NUM_VERT_IDX + 0]; - for (int j = 5; j < MAX_NUM_VERT_IDX; ++j) { - idx[i * MAX_NUM_VERT_IDX + j] = pad; - } - } - } - - // TODO: still might need to cover some other corner cases :( - } - } -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/furthest_point_sample_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/furthest_point_sample_cuda_kernel.cuh deleted file mode 100644 index d3801a02..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/furthest_point_sample_cuda_kernel.cuh +++ /dev/null @@ -1,152 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef FURTHEST_POINT_SAMPLE_CUDA_KERNEL_CUH -#define FURTHEST_POINT_SAMPLE_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -__device__ void __update(float *__restrict__ dists, int *__restrict__ dists_i, - int idx1, int idx2) { - const float v1 = dists[idx1], v2 = dists[idx2]; - const int i1 = dists_i[idx1], i2 = dists_i[idx2]; - dists[idx1] = max(v1, v2); - dists_i[idx1] = v2 > v1 ? i2 : i1; -} - -template -__global__ void furthest_point_sampling_forward_cuda_kernel( - int b, int n, int m, const float *__restrict__ dataset, - float *__restrict__ temp, int *__restrict__ idxs) { - // dataset: (B, N, 3) - // tmp: (B, N) - // output: - // idx: (B, M) - - if (m <= 0) return; - __shared__ float dists[block_size]; - __shared__ int dists_i[block_size]; - - int batch_index = blockIdx.x; - dataset += batch_index * n * 3; - temp += batch_index * n; - idxs += batch_index * m; - - int tid = threadIdx.x; - const int stride = block_size; - - int old = 0; - if (threadIdx.x == 0) idxs[0] = old; - - __syncthreads(); - for (int j = 1; j < m; j++) { - int besti = 0; - float best = -1; - float x1 = dataset[old * 3 + 0]; - float y1 = dataset[old * 3 + 1]; - float z1 = dataset[old * 3 + 2]; - for (int k = tid; k < n; k += stride) { - float x2, y2, z2; - x2 = dataset[k * 3 + 0]; - y2 = dataset[k * 3 + 1]; - z2 = dataset[k * 3 + 2]; - // float mag = (x2 * x2) + (y2 * y2) + (z2 * z2); - // if (mag <= 1e-3) - // continue; - - float d = - (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1) + (z2 - z1) * (z2 - z1); - float d2 = min(d, temp[k]); - temp[k] = d2; - besti = d2 > best ? k : besti; - best = d2 > best ? 
d2 : best; - } - dists[tid] = best; - dists_i[tid] = besti; - __syncthreads(); - -#pragma unroll - for (int block_size_thres = 1024; block_size_thres >= 2; - block_size_thres >>= 1) { - const int tid_thres = block_size_thres / 2; - if (block_size >= block_size_thres && tid < tid_thres) { - __update(dists, dists_i, tid, tid + tid_thres); - } - __syncthreads(); - } - - old = dists_i[0]; - if (tid == 0) idxs[j] = old; - } -} - -// Modified from -// https://github.com/qiqihaer/3DSSD-pytorch/blob/master/lib/pointnet2/src/sampling_gpu.cu -template -__global__ void furthest_point_sampling_with_dist_forward_cuda_kernel( - int b, int n, int m, const float *__restrict__ dataset, - float *__restrict__ temp, int *__restrict__ idxs) { - // dataset: (B, N, N) - // tmp: (B, N) - // output: - // idx: (B, M) - - if (m <= 0) return; - __shared__ float dists[block_size]; - __shared__ int dists_i[block_size]; - - int batch_index = blockIdx.x; - dataset += batch_index * n * n; - temp += batch_index * n; - idxs += batch_index * m; - - int tid = threadIdx.x; - const int stride = block_size; - - int old = 0; - if (threadIdx.x == 0) idxs[0] = old; - - __syncthreads(); - for (int j = 1; j < m; j++) { - int besti = 0; - float best = -1; - // float x1 = dataset[old * 3 + 0]; - // float y1 = dataset[old * 3 + 1]; - // float z1 = dataset[old * 3 + 2]; - for (int k = tid; k < n; k += stride) { - // float x2, y2, z2; - // x2 = dataset[k * 3 + 0]; - // y2 = dataset[k * 3 + 1]; - // z2 = dataset[k * 3 + 2]; - - // float d = (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1) + (z2 - z1) * - // (z2 - z1); - float d = dataset[old * n + k]; - - float d2 = min(d, temp[k]); - temp[k] = d2; - besti = d2 > best ? k : besti; - best = d2 > best ? d2 : best; - } - dists[tid] = best; - dists_i[tid] = besti; - __syncthreads(); - -#pragma unroll - for (int block_size_thres = 1024; block_size_thres >= 2; - block_size_thres >>= 1) { - const int tid_thres = block_size_thres / 2; - if (block_size >= block_size_thres && tid < tid_thres) { - __update(dists, dists_i, tid, tid + tid_thres); - } - __syncthreads(); - } - - old = dists_i[0]; - if (tid == 0) idxs[j] = old; - } -} - -#endif // FURTHEST_POINT_SAMPLE_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/gather_points_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/gather_points_cuda_kernel.cuh deleted file mode 100644 index 6d932434..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/gather_points_cuda_kernel.cuh +++ /dev/null @@ -1,58 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef GATHER_POINTS_CUDA_KERNEL_CUH -#define GATHER_POINTS_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -#define TOTAL_THREADS 1024 - -template -__global__ void gather_points_forward_cuda_kernel(int b, int c, int n, int m, - const T *points, - const int *__restrict__ idx, - T *out) { - // points: (B, C, N) - // idx: (B, M) - // output: - // out: (B, C, M) - - int bs_idx = blockIdx.z; - int c_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(pt_idx, m) { - if (bs_idx >= b || c_idx >= c) return; - - out += bs_idx * c * m + c_idx * m + pt_idx; - idx += bs_idx * m + pt_idx; - points += bs_idx * c * n + c_idx * n; - out[0] = points[idx[0]]; - } -} - -template -__global__ void gather_points_backward_cuda_kernel(int b, int c, int n, int m, - const T *grad_out, - const int *__restrict__ idx, - T *grad_points) { - // grad_out: (B, C, M) - // idx: (B, M) - // output: - // grad_points: (B, C, N) - - int bs_idx = blockIdx.z; - int c_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(pt_idx, m) { - if (bs_idx >= b || c_idx >= c) return; - - grad_out += bs_idx * c * m + c_idx * m + pt_idx; - idx += bs_idx * m + pt_idx; - grad_points += bs_idx * c * n + c_idx * n; - - atomicAdd(grad_points + idx[0], grad_out[0]); - } -} - -#endif // GATHER_POINTS_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/group_points_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/group_points_cuda_kernel.cuh deleted file mode 100644 index dfad66fc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/group_points_cuda_kernel.cuh +++ /dev/null @@ -1,65 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/group_points_gpu.cu -#ifndef GROUP_POINTS_CUDA_KERNEL_CUH -#define GROUP_POINTS_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void group_points_forward_cuda_kernel(int b, int c, int n, - int npoints, int nsample, - const T *points, - const int *__restrict__ idx, - T *out) { - // points: (B, C, N) - // idx: (B, npoints, nsample) - // output: - // out: (B, C, npoints, nsample) - int bs_idx = blockIdx.z; - int c_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(index, npoints * nsample) { - if (bs_idx >= b || c_idx >= c) return; - - int pt_idx = index / nsample; - int sample_idx = index % nsample; - - idx += bs_idx * npoints * nsample + pt_idx * nsample + sample_idx; - int in_idx = bs_idx * c * n + c_idx * n + idx[0]; - int out_idx = bs_idx * c * npoints * nsample + c_idx * npoints * nsample + - pt_idx * nsample + sample_idx; - - out[out_idx] = points[in_idx]; - } -} - -template -__global__ void group_points_backward_cuda_kernel(int b, int c, int n, - int npoints, int nsample, - const T *grad_out, - const int *__restrict__ idx, - T *grad_points) { - // grad_out: (B, C, npoints, nsample) - // idx: (B, npoints, nsample) - // output: - // grad_points: (B, C, N) - int bs_idx = blockIdx.z; - int c_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(index, npoints * nsample) { - int pt_idx = index / nsample; - if (bs_idx >= b || c_idx >= c) return; - - int sample_idx = index % nsample; - grad_out += bs_idx * c * npoints * nsample + c_idx * npoints * nsample + - pt_idx * nsample + sample_idx; - idx += bs_idx * npoints * nsample + pt_idx * nsample + sample_idx; - - atomicAdd(grad_points + bs_idx * c * n + c_idx * n + idx[0], grad_out[0]); - } -} - -#endif // GROUP_POINTS_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/iou3d_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/iou3d_cuda_kernel.cuh deleted file mode 100644 index 46e7c7d0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/iou3d_cuda_kernel.cuh +++ /dev/null @@ -1,367 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef IOU3D_CUDA_KERNEL_CUH -#define IOU3D_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -const int THREADS_PER_BLOCK_IOU3D = 16; -const int THREADS_PER_BLOCK_NMS = sizeof(unsigned long long) * 8; -__device__ const float EPS = 1e-8; - -struct Point { - float x, y; - __device__ Point() {} - __device__ Point(float _x, float _y) { x = _x, y = _y; } - - __device__ void set(float _x, float _y) { - x = _x; - y = _y; - } - - __device__ Point operator+(const Point &b) const { - return Point(x + b.x, y + b.y); - } - - __device__ Point operator-(const Point &b) const { - return Point(x - b.x, y - b.y); - } -}; - -__device__ inline float cross(const Point &a, const Point &b) { - return a.x * b.y - a.y * b.x; -} - -__device__ inline float cross(const Point &p1, const Point &p2, - const Point &p0) { - return (p1.x - p0.x) * (p2.y - p0.y) - (p2.x - p0.x) * (p1.y - p0.y); -} - -__device__ int check_rect_cross(const Point &p1, const Point &p2, - const Point &q1, const Point &q2) { - int ret = min(p1.x, p2.x) <= max(q1.x, q2.x) && - min(q1.x, q2.x) <= max(p1.x, p2.x) && - min(p1.y, p2.y) <= max(q1.y, q2.y) && - min(q1.y, q2.y) <= max(p1.y, p2.y); - return ret; -} - -__device__ inline int check_in_box2d(const float *box, const Point &p) { - // params: box (7) [x, y, z, dx, dy, dz, heading] - const float MARGIN = 1e-2; - - float center_x = box[0], center_y = box[1]; - // rotate the point in the opposite direction of box - float angle_cos = cos(-box[6]), angle_sin = sin(-box[6]); - float rot_x = (p.x - center_x) * angle_cos + (p.y - center_y) * (-angle_sin); - float rot_y = (p.x - center_x) * angle_sin + (p.y - center_y) * angle_cos; - - return (fabs(rot_x) < box[3] / 2 + MARGIN && - fabs(rot_y) < box[4] / 2 + MARGIN); -} - -__device__ inline int intersection(const Point &p1, const Point &p0, - const Point &q1, const Point &q0, - Point &ans_point) { - // fast exclusion - if (check_rect_cross(p0, p1, q0, q1) == 0) return 0; - - // check cross standing - float s1 = cross(q0, p1, p0); - float s2 = cross(p1, q1, p0); - float s3 = cross(p0, q1, q0); - float s4 = cross(q1, p1, q0); - - if (!(s1 * s2 > 0 && s3 * s4 > 0)) return 0; - - // calculate intersection of two lines - float s5 = cross(q1, p1, p0); - if (fabs(s5 - s1) > EPS) { - ans_point.x = (s5 * q0.x - s1 * q1.x) / (s5 - s1); - ans_point.y = (s5 * q0.y - s1 * q1.y) / (s5 - s1); - - } else { - float a0 = p0.y - p1.y, b0 = p1.x - p0.x, c0 = p0.x * p1.y - p1.x * p0.y; - float a1 = q0.y - q1.y, b1 = q1.x - q0.x, c1 = q0.x * q1.y - q1.x * q0.y; - float D = a0 * b1 - a1 * b0; - - ans_point.x = (b0 * c1 - b1 * c0) / D; - ans_point.y = (a1 * c0 - a0 * c1) / D; - } - - return 1; -} - -__device__ inline void rotate_around_center(const Point ¢er, - const float angle_cos, - const float angle_sin, Point &p) { - float new_x = - (p.x - center.x) * angle_cos - (p.y - center.y) * angle_sin + center.x; - float new_y = - (p.x - center.x) * angle_sin + (p.y - center.y) * angle_cos + center.y; - p.set(new_x, new_y); -} - -__device__ inline int point_cmp(const Point &a, const Point &b, - const Point ¢er) { - return atan2(a.y - center.y, a.x - center.x) > - atan2(b.y - center.y, b.x - center.x); -} - -__device__ inline float box_overlap(const float *box_a, const float *box_b) { - // params box_a: [x, y, z, dx, dy, dz, heading] - // params box_b: [x, y, z, dx, dy, dz, heading] - - float a_angle = box_a[6], b_angle = box_b[6]; - float a_dx_half = box_a[3] / 2, b_dx_half = 
box_b[3] / 2, - a_dy_half = box_a[4] / 2, b_dy_half = box_b[4] / 2; - float a_x1 = box_a[0] - a_dx_half, a_y1 = box_a[1] - a_dy_half; - float a_x2 = box_a[0] + a_dx_half, a_y2 = box_a[1] + a_dy_half; - float b_x1 = box_b[0] - b_dx_half, b_y1 = box_b[1] - b_dy_half; - float b_x2 = box_b[0] + b_dx_half, b_y2 = box_b[1] + b_dy_half; - - Point center_a(box_a[0], box_a[1]); - Point center_b(box_b[0], box_b[1]); - - Point box_a_corners[5]; - box_a_corners[0].set(a_x1, a_y1); - box_a_corners[1].set(a_x2, a_y1); - box_a_corners[2].set(a_x2, a_y2); - box_a_corners[3].set(a_x1, a_y2); - - Point box_b_corners[5]; - box_b_corners[0].set(b_x1, b_y1); - box_b_corners[1].set(b_x2, b_y1); - box_b_corners[2].set(b_x2, b_y2); - box_b_corners[3].set(b_x1, b_y2); - - // get oriented corners - float a_angle_cos = cos(a_angle), a_angle_sin = sin(a_angle); - float b_angle_cos = cos(b_angle), b_angle_sin = sin(b_angle); - - for (int k = 0; k < 4; k++) { - rotate_around_center(center_a, a_angle_cos, a_angle_sin, box_a_corners[k]); - rotate_around_center(center_b, b_angle_cos, b_angle_sin, box_b_corners[k]); - } - - box_a_corners[4] = box_a_corners[0]; - box_b_corners[4] = box_b_corners[0]; - - // get intersection of lines - Point cross_points[16]; - Point poly_center; - int cnt = 0, flag = 0; - - poly_center.set(0, 0); - for (int i = 0; i < 4; i++) { - for (int j = 0; j < 4; j++) { - flag = intersection(box_a_corners[i + 1], box_a_corners[i], - box_b_corners[j + 1], box_b_corners[j], - cross_points[cnt]); - if (flag) { - poly_center = poly_center + cross_points[cnt]; - cnt++; - } - } - } - - // check corners - for (int k = 0; k < 4; k++) { - if (check_in_box2d(box_a, box_b_corners[k])) { - poly_center = poly_center + box_b_corners[k]; - cross_points[cnt] = box_b_corners[k]; - cnt++; - } - if (check_in_box2d(box_b, box_a_corners[k])) { - poly_center = poly_center + box_a_corners[k]; - cross_points[cnt] = box_a_corners[k]; - cnt++; - } - } - - poly_center.x /= cnt; - poly_center.y /= cnt; - - // sort the points of polygon - Point temp; - for (int j = 0; j < cnt - 1; j++) { - for (int i = 0; i < cnt - j - 1; i++) { - if (point_cmp(cross_points[i], cross_points[i + 1], poly_center)) { - temp = cross_points[i]; - cross_points[i] = cross_points[i + 1]; - cross_points[i + 1] = temp; - } - } - } - - // get the overlap areas - float area = 0; - for (int k = 0; k < cnt - 1; k++) { - area += cross(cross_points[k] - cross_points[0], - cross_points[k + 1] - cross_points[0]); - } - - return fabs(area) / 2.0; -} - -__device__ inline float iou_bev(const float *box_a, const float *box_b) { - // params box_a: [x, y, z, dx, dy, dz, heading] - // params box_b: [x, y, z, dx, dy, dz, heading] - float sa = box_a[3] * box_a[4]; - float sb = box_b[3] * box_b[4]; - float s_overlap = box_overlap(box_a, box_b); - return s_overlap / fmaxf(sa + sb - s_overlap, EPS); -} - -__global__ void iou3d_boxes_overlap_bev_forward_cuda_kernel( - const int num_a, const float *boxes_a, const int num_b, - const float *boxes_b, float *ans_overlap) { - // params boxes_a: (N, 7) [x, y, z, dx, dy, dz, heading] - // params boxes_b: (M, 7) [x, y, z, dx, dy, dz, heading] - CUDA_2D_KERNEL_LOOP(b_idx, num_b, a_idx, num_a) { - if (a_idx >= num_a || b_idx >= num_b) { - return; - } - - const float *cur_box_a = boxes_a + a_idx * 7; - const float *cur_box_b = boxes_b + b_idx * 7; - float cur_overlap = box_overlap(cur_box_a, cur_box_b); - ans_overlap[a_idx * num_b + b_idx] = cur_overlap; - } -} - -__global__ void iou3d_nms3d_forward_cuda_kernel(const int boxes_num, - const 
float nms_overlap_thresh, - const float *boxes, - unsigned long long *mask) { - // params: boxes (N, 7) [x, y, z, dx, dy, dz, heading] - // params: mask (N, N/THREADS_PER_BLOCK_NMS) - const int blocks = - (boxes_num + THREADS_PER_BLOCK_NMS - 1) / THREADS_PER_BLOCK_NMS; - CUDA_2D_KERNEL_BLOCK_LOOP(col_start, blocks, row_start, blocks) { - // if (row_start > col_start) return; - - const int row_size = fminf(boxes_num - row_start * THREADS_PER_BLOCK_NMS, - THREADS_PER_BLOCK_NMS); - const int col_size = fminf(boxes_num - col_start * THREADS_PER_BLOCK_NMS, - THREADS_PER_BLOCK_NMS); - - __shared__ float block_boxes[THREADS_PER_BLOCK_NMS * 7]; - - if (threadIdx.x < col_size) { - block_boxes[threadIdx.x * 7 + 0] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 0]; - block_boxes[threadIdx.x * 7 + 1] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 1]; - block_boxes[threadIdx.x * 7 + 2] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 2]; - block_boxes[threadIdx.x * 7 + 3] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 3]; - block_boxes[threadIdx.x * 7 + 4] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 4]; - block_boxes[threadIdx.x * 7 + 5] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 5]; - block_boxes[threadIdx.x * 7 + 6] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 6]; - } - __syncthreads(); - - if (threadIdx.x < row_size) { - const int cur_box_idx = THREADS_PER_BLOCK_NMS * row_start + threadIdx.x; - const float *cur_box = boxes + cur_box_idx * 7; - - int i = 0; - unsigned long long t = 0; - int start = 0; - if (row_start == col_start) { - start = threadIdx.x + 1; - } - for (i = start; i < col_size; i++) { - if (iou_bev(cur_box, block_boxes + i * 7) > nms_overlap_thresh) { - t |= 1ULL << i; - } - } - const int col_blocks = - (boxes_num + THREADS_PER_BLOCK_NMS - 1) / THREADS_PER_BLOCK_NMS; - mask[cur_box_idx * col_blocks + col_start] = t; - } - } -} - -__device__ inline float iou_normal(float const *const a, float const *const b) { - // params: a: [x, y, z, dx, dy, dz, heading] - // params: b: [x, y, z, dx, dy, dz, heading] - - float left = fmaxf(a[0] - a[3] / 2, b[0] - b[3] / 2), - right = fminf(a[0] + a[3] / 2, b[0] + b[3] / 2); - float top = fmaxf(a[1] - a[4] / 2, b[1] - b[4] / 2), - bottom = fminf(a[1] + a[4] / 2, b[1] + b[4] / 2); - float width = fmaxf(right - left, 0.f), height = fmaxf(bottom - top, 0.f); - float interS = width * height; - float Sa = a[3] * a[4]; - float Sb = b[3] * b[4]; - return interS / fmaxf(Sa + Sb - interS, EPS); -} - -__global__ void iou3d_nms3d_normal_forward_cuda_kernel( - const int boxes_num, const float nms_overlap_thresh, const float *boxes, - unsigned long long *mask) { - // params: boxes (N, 7) [x, y, z, dx, dy, dz, heading] - // params: mask (N, N/THREADS_PER_BLOCK_NMS) - - const int blocks = - (boxes_num + THREADS_PER_BLOCK_NMS - 1) / THREADS_PER_BLOCK_NMS; - CUDA_2D_KERNEL_BLOCK_LOOP(col_start, blocks, row_start, blocks) { - // if (row_start > col_start) return; - - const int row_size = fminf(boxes_num - row_start * THREADS_PER_BLOCK_NMS, - THREADS_PER_BLOCK_NMS); - const int col_size = fminf(boxes_num - col_start * THREADS_PER_BLOCK_NMS, - THREADS_PER_BLOCK_NMS); - - __shared__ float block_boxes[THREADS_PER_BLOCK_NMS * 7]; - - if (threadIdx.x < col_size) { - block_boxes[threadIdx.x * 7 + 0] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 0]; - block_boxes[threadIdx.x * 7 + 1] = - boxes[(THREADS_PER_BLOCK_NMS * 
col_start + threadIdx.x) * 7 + 1]; - block_boxes[threadIdx.x * 7 + 2] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 2]; - block_boxes[threadIdx.x * 7 + 3] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 3]; - block_boxes[threadIdx.x * 7 + 4] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 4]; - block_boxes[threadIdx.x * 7 + 5] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 5]; - block_boxes[threadIdx.x * 7 + 6] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 7 + 6]; - } - __syncthreads(); - - if (threadIdx.x < row_size) { - const int cur_box_idx = THREADS_PER_BLOCK_NMS * row_start + threadIdx.x; - const float *cur_box = boxes + cur_box_idx * 7; - - int i = 0; - unsigned long long t = 0; - int start = 0; - if (row_start == col_start) { - start = threadIdx.x + 1; - } - for (i = start; i < col_size; i++) { - if (iou_normal(cur_box, block_boxes + i * 7) > nms_overlap_thresh) { - t |= 1ULL << i; - } - } - const int col_blocks = - (boxes_num + THREADS_PER_BLOCK_NMS - 1) / THREADS_PER_BLOCK_NMS; - mask[cur_box_idx * col_blocks + col_start] = t; - } - } -} - -#endif // IOU3D_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/knn_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/knn_cuda_kernel.cuh deleted file mode 100644 index 3cf52bb9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/knn_cuda_kernel.cuh +++ /dev/null @@ -1,92 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// Modified from -// https://github.com/CVMI-Lab/PAConv/tree/main/scene_seg/lib/pointops/src/knnquery_heap -#ifndef KNN_CUDA_KERNEL_CUH -#define KNN_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -inline __device__ void swap_float(float *x, float *y) { - float tmp = *x; - *x = *y; - *y = tmp; -} - -inline __device__ void swap_int(int *x, int *y) { - int tmp = *x; - *x = *y; - *y = tmp; -} - -__device__ void reheap(float *dist, int *idx, int k) { - int root = 0; - int child = root * 2 + 1; - while (child < k) { - if (child + 1 < k && dist[child + 1] > dist[child]) child++; - if (dist[root] > dist[child]) return; - swap_float(&dist[root], &dist[child]); - swap_int(&idx[root], &idx[child]); - root = child; - child = root * 2 + 1; - } -} - -__device__ void heap_sort(float *dist, int *idx, int k) { - int i; - for (i = k - 1; i > 0; i--) { - swap_float(&dist[0], &dist[i]); - swap_int(&idx[0], &idx[i]); - reheap(dist, idx, i); - } -} - -// input: xyz (b, n, 3) new_xyz (b, m, 3) -// output: idx (b, m, nsample) dist2 (b, m, nsample) -template -__global__ void knn_forward_cuda_kernel(int b, int n, int m, int nsample, - const T *xyz, const T *new_xyz, - int *__restrict__ idx, T *dist2) { - int bs_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(pt_idx, m) { - if (bs_idx >= b) return; - - new_xyz += bs_idx * m * 3 + pt_idx * 3; - xyz += bs_idx * n * 3; - idx += bs_idx * m * nsample + pt_idx * nsample; - dist2 += bs_idx * m * nsample + pt_idx * nsample; - - T new_x = new_xyz[0]; - T new_y = new_xyz[1]; - T new_z = new_xyz[2]; - - float best_dist[100]; - int best_idx[100]; - for (int i = 0; i < nsample; i++) { - best_dist[i] = 1e10; - best_idx[i] = 0; - } - for (int i = 0; i < n; i++) { - T x = xyz[i * 3 + 0]; - T y = xyz[i * 3 + 1]; - T z = xyz[i * 3 + 2]; - T d2 = (new_x - x) * (new_x - x) + (new_y - y) * (new_y - y) + - (new_z - z) * 
(new_z - z); - if (d2 < best_dist[0]) { - best_dist[0] = d2; - best_idx[0] = i; - reheap(best_dist, best_idx, nsample); - } - } - heap_sort(best_dist, best_idx, nsample); - for (int i = 0; i < nsample; i++) { - idx[i] = best_idx[i]; - dist2[i] = best_dist[i]; - } - } -} - -#endif // KNN_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/masked_conv2d_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/masked_conv2d_cuda_kernel.cuh deleted file mode 100644 index 1a0bd040..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/masked_conv2d_cuda_kernel.cuh +++ /dev/null @@ -1,62 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef MASKED_CONV2D_CUDA_KERNEL_CUH -#define MASKED_CONV2D_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void MaskedIm2colForward(const int n, const scalar_t *data_im, - const int height, const int width, - const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, - const int64_t *mask_h_idx, - const int64_t *mask_w_idx, - const int mask_cnt, scalar_t *data_col) { - // mask_cnt * channels - CUDA_1D_KERNEL_LOOP(index, n) { - const int m_index = index % mask_cnt; - const int h_col = mask_h_idx[m_index]; - const int w_col = mask_w_idx[m_index]; - const int c_im = index / mask_cnt; - const int c_col = c_im * kernel_h * kernel_w; - const int h_offset = h_col - pad_h; - const int w_offset = w_col - pad_w; - scalar_t *data_col_ptr = data_col + c_col * mask_cnt + m_index; - for (int i = 0; i < kernel_h; ++i) { - int h_im = h_offset + i; - for (int j = 0; j < kernel_w; ++j) { - int w_im = w_offset + j; - if (h_im >= 0 && w_im >= 0 && h_im < height && w_im < width) { - *data_col_ptr = - (scalar_t)data_im[(c_im * height + h_im) * width + w_im]; - } else { - *data_col_ptr = 0.0; - } - data_col_ptr += mask_cnt; - } - } - } -} - -template -__global__ void MaskedCol2imForward(const int n, const scalar_t *data_col, - const int height, const int width, - const int channels, - const int64_t *mask_h_idx, - const int64_t *mask_w_idx, - const int mask_cnt, scalar_t *data_im) { - CUDA_1D_KERNEL_LOOP(index, n) { - const int m_index = index % mask_cnt; - const int h_im = mask_h_idx[m_index]; - const int w_im = mask_w_idx[m_index]; - const int c_im = index / mask_cnt; - // compute the start and end of the output - data_im[(c_im * height + h_im) * width + w_im] = data_col[index]; - } -} - -#endif // MASKED_CONV2D_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/min_area_polygons_cuda.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/min_area_polygons_cuda.cuh deleted file mode 100644 index b8e3b426..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/min_area_polygons_cuda.cuh +++ /dev/null @@ -1,300 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef MIN_AREA_POLYGONS_CUDA_KERNEL_CUH -#define MIN_AREA_POLYGONS_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -#define MAXN 20 -__device__ const float PI = 3.1415926; - -struct Point { - float x, y; - __device__ Point() {} - __device__ Point(float x, float y) : x(x), y(y) {} -}; - -__device__ inline void swap1(Point *a, Point *b) { - Point temp; - temp.x = a->x; - temp.y = a->y; - - a->x = b->x; - a->y = b->y; - - b->x = temp.x; - b->y = temp.y; -} -__device__ inline float cross(Point o, Point a, Point b) { - return (a.x - o.x) * (b.y - o.y) - (b.x - o.x) * (a.y - o.y); -} - -__device__ inline float dis(Point a, Point b) { - return (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y); -} -__device__ inline void minBoundingRect(Point *ps, int n_points, float *minbox) { - float convex_points[2][MAXN]; - for (int j = 0; j < n_points; j++) { - convex_points[0][j] = ps[j].x; - } - for (int j = 0; j < n_points; j++) { - convex_points[1][j] = ps[j].y; - } - - Point edges[MAXN]; - float edges_angles[MAXN]; - float unique_angles[MAXN]; - int n_edges = n_points - 1; - int n_unique = 0; - int unique_flag = 0; - - for (int i = 0; i < n_edges; i++) { - edges[i].x = ps[i + 1].x - ps[i].x; - edges[i].y = ps[i + 1].y - ps[i].y; - } - for (int i = 0; i < n_edges; i++) { - edges_angles[i] = atan2((float)edges[i].y, (float)edges[i].x); - if (edges_angles[i] >= 0) { - edges_angles[i] = fmod((float)edges_angles[i], (float)PI / 2); - } else { - edges_angles[i] = - edges_angles[i] - (int)(edges_angles[i] / (PI / 2) - 1) * (PI / 2); - } - } - unique_angles[0] = edges_angles[0]; - n_unique += 1; - for (int i = 1; i < n_edges; i++) { - for (int j = 0; j < n_unique; j++) { - if (edges_angles[i] == unique_angles[j]) { - unique_flag += 1; - } - } - if (unique_flag == 0) { - unique_angles[n_unique] = edges_angles[i]; - n_unique += 1; - unique_flag = 0; - } else { - unique_flag = 0; - } - } - - float minarea = 1e12; - for (int i = 0; i < n_unique; i++) { - float R[2][2]; - float rot_points[2][MAXN]; - R[0][0] = cos(unique_angles[i]); - R[0][1] = sin(unique_angles[i]); - R[1][0] = -sin(unique_angles[i]); - R[1][1] = cos(unique_angles[i]); - // R x Points - for (int m = 0; m < 2; m++) { - for (int n = 0; n < n_points; n++) { - float sum = 0.0; - for (int k = 0; k < 2; k++) { - sum = sum + R[m][k] * convex_points[k][n]; - } - rot_points[m][n] = sum; - } - } - - // xmin; - float xmin, ymin, xmax, ymax; - xmin = 1e12; - for (int j = 0; j < n_points; j++) { - if (isinf(rot_points[0][j]) || isnan(rot_points[0][j])) { - continue; - } else { - if (rot_points[0][j] < xmin) { - xmin = rot_points[0][j]; - } - } - } - // ymin - ymin = 1e12; - for (int j = 0; j < n_points; j++) { - if (isinf(rot_points[1][j]) || isnan(rot_points[1][j])) { - continue; - } else { - if (rot_points[1][j] < ymin) { - ymin = rot_points[1][j]; - } - } - } - // xmax - xmax = -1e12; - for (int j = 0; j < n_points; j++) { - if (isinf(rot_points[0][j]) || isnan(rot_points[0][j])) { - continue; - } else { - if (rot_points[0][j] > xmax) { - xmax = rot_points[0][j]; - } - } - } - // ymax - ymax = -1e12; - for (int j = 0; j < n_points; j++) { - if (isinf(rot_points[1][j]) || isnan(rot_points[1][j])) { - continue; - } else { - if (rot_points[1][j] > ymax) { - ymax = rot_points[1][j]; - } - } - } - float area = (xmax - xmin) * (ymax - ymin); - if (area < minarea) { - minarea = area; - minbox[0] = unique_angles[i]; - minbox[1] = xmin; - minbox[2] = ymin; - 
minbox[3] = xmax; - minbox[4] = ymax; - } - } -} - -// convex_find -__device__ inline void Jarvis(Point *in_poly, int &n_poly) { - int n_input = n_poly; - Point input_poly[20]; - for (int i = 0; i < n_input; i++) { - input_poly[i].x = in_poly[i].x; - input_poly[i].y = in_poly[i].y; - } - Point p_max, p_k; - int max_index, k_index; - int Stack[20], top1, top2; - // float sign; - float sign; - Point right_point[10], left_point[10]; - - for (int i = 0; i < n_poly; i++) { - if (in_poly[i].y < in_poly[0].y || - in_poly[i].y == in_poly[0].y && in_poly[i].x < in_poly[0].x) { - Point *j = &(in_poly[0]); - Point *k = &(in_poly[i]); - swap1(j, k); - } - if (i == 0) { - p_max = in_poly[0]; - max_index = 0; - } - if (in_poly[i].y > p_max.y || - in_poly[i].y == p_max.y && in_poly[i].x > p_max.x) { - p_max = in_poly[i]; - max_index = i; - } - } - if (max_index == 0) { - max_index = 1; - p_max = in_poly[max_index]; - } - - k_index = 0, Stack[0] = 0, top1 = 0; - while (k_index != max_index) { - p_k = p_max; - k_index = max_index; - for (int i = 1; i < n_poly; i++) { - sign = cross(in_poly[Stack[top1]], in_poly[i], p_k); - if ((sign > 0) || ((sign == 0) && (dis(in_poly[Stack[top1]], in_poly[i]) > - dis(in_poly[Stack[top1]], p_k)))) { - p_k = in_poly[i]; - k_index = i; - } - } - top1++; - Stack[top1] = k_index; - } - - for (int i = 0; i <= top1; i++) { - right_point[i] = in_poly[Stack[i]]; - } - - k_index = 0, Stack[0] = 0, top2 = 0; - - while (k_index != max_index) { - p_k = p_max; - k_index = max_index; - for (int i = 1; i < n_poly; i++) { - sign = cross(in_poly[Stack[top2]], in_poly[i], p_k); - if ((sign < 0) || (sign == 0) && (dis(in_poly[Stack[top2]], in_poly[i]) > - dis(in_poly[Stack[top2]], p_k))) { - p_k = in_poly[i]; - k_index = i; - } - } - top2++; - Stack[top2] = k_index; - } - - for (int i = top2 - 1; i >= 0; i--) { - left_point[i] = in_poly[Stack[i]]; - } - - for (int i = 0; i < top1 + top2; i++) { - if (i <= top1) { - in_poly[i] = right_point[i]; - } else { - in_poly[i] = left_point[top2 - (i - top1)]; - } - } - n_poly = top1 + top2; -} - -template -__device__ inline void Findminbox(T const *const p, T *minpoints) { - Point ps1[MAXN]; - Point convex[MAXN]; - for (int i = 0; i < 9; i++) { - convex[i].x = p[i * 2]; - convex[i].y = p[i * 2 + 1]; - } - int n_convex = 9; - Jarvis(convex, n_convex); - int n1 = n_convex; - for (int i = 0; i < n1; i++) { - ps1[i].x = convex[i].x; - ps1[i].y = convex[i].y; - } - ps1[n1].x = convex[0].x; - ps1[n1].y = convex[0].y; - - float minbbox[5] = {0}; - minBoundingRect(ps1, n1 + 1, minbbox); - float angle = minbbox[0]; - float xmin = minbbox[1]; - float ymin = minbbox[2]; - float xmax = minbbox[3]; - float ymax = minbbox[4]; - float R[2][2]; - - R[0][0] = cos(angle); - R[0][1] = sin(angle); - R[1][0] = -sin(angle); - R[1][1] = cos(angle); - - minpoints[0] = xmax * R[0][0] + ymin * R[1][0]; - minpoints[1] = xmax * R[0][1] + ymin * R[1][1]; - minpoints[2] = xmin * R[0][0] + ymin * R[1][0]; - minpoints[3] = xmin * R[0][1] + ymin * R[1][1]; - minpoints[4] = xmin * R[0][0] + ymax * R[1][0]; - minpoints[5] = xmin * R[0][1] + ymax * R[1][1]; - minpoints[6] = xmax * R[0][0] + ymax * R[1][0]; - minpoints[7] = xmax * R[0][1] + ymax * R[1][1]; -} - -template -__global__ void min_area_polygons_cuda_kernel(const int ex_n_boxes, - const T *ex_boxes, T *minbox) { - CUDA_1D_KERNEL_LOOP(index, ex_n_boxes) { - const T *cur_box = ex_boxes + index * 18; - T *cur_min_box = minbox + index * 8; - Findminbox(cur_box, cur_min_box); - } -} - -#endif // MIN_AREA_POLYGONS_CUDA_KERNEL_CUH 
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/modulated_deform_conv_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/modulated_deform_conv_cuda_kernel.cuh deleted file mode 100644 index ca0e91a2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/modulated_deform_conv_cuda_kernel.cuh +++ /dev/null @@ -1,399 +0,0 @@ -/*! - ******************* BEGIN Caffe Copyright Notice and Disclaimer - ***************** - * - * COPYRIGHT - * - * All contributions by the University of California: - * Copyright (c) 2014-2017 The Regents of the University of California (Regents) - * All rights reserved. - * - * All other contributions: - * Copyright (c) 2014-2017, the respective contributors - * All rights reserved. - * - * Caffe uses a shared copyright model: each contributor holds copyright over - * their contributions to Caffe. The project versioning records all such - * contribution and copyright details. If a contributor wants to further mark - * their specific copyright on a particular contribution, they should indicate - * their copyright solely in the commit message of the change when it is - * committed. - * - * LICENSE - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * - * 1. Redistributions of source code must retain the above copyright notice, - *this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright notice, - * this list of conditions and the following disclaimer in the documentation - * and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - *AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - *IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE - * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE - *FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - *DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR - *SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER - *CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, - *OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - *OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * CONTRIBUTION AGREEMENT - * - * By contributing to the BVLC/caffe repository through pull-request, comment, - * or otherwise, the contributor releases their content to the - * license and copyright terms herein. - * - ***************** END Caffe Copyright Notice and Disclaimer - ********************* - * - * Copyright (c) 2018 Microsoft - * Licensed under The MIT License [see LICENSE for details] - * \file modulated_deformable_im2col.cuh - * \brief Function definitions of converting an image to - * column matrix based on kernel, padding, dilation, and offset. - * These functions are mainly used in deformable convolution operators. 
- * \ref: https://arxiv.org/abs/1703.06211 - * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai, Xizhou Zhu, Han Hu, Dazhi Cheng - */ - -// modified from -// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda_kernel.cu - -#ifndef MODULATED_DEFORM_CONV_CUDA_KERNEL_CUH -#define MODULATED_DEFORM_CONV_CUDA_KERNEL_CUH - -#include -#ifdef MMCV_WITH_TRT -#include "common_cuda_helper.hpp" -#else // MMCV_WITH_TRT -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else // MMCV_USE_PARROTS -#include "pytorch_cuda_helper.hpp" -#endif // MMCV_USE_PARROTS -#endif // MMCV_WITH_TRT - -template -__device__ T dmcn_im2col_bilinear(const T *input, const int data_width, - const int height, const int width, T h, T w) { - int h_low = floorf(h); - int w_low = floorf(w); - int h_high = h_low + 1; - int w_high = w_low + 1; - - T lh = h - h_low; - T lw = w - w_low; - T hh = 1 - lh, hw = 1 - lw; - - T v1 = 0; - if (h_low >= 0 && w_low >= 0) v1 = input[h_low * data_width + w_low]; - T v2 = 0; - if (h_low >= 0 && w_high <= width - 1) - v2 = input[h_low * data_width + w_high]; - T v3 = 0; - if (h_high <= height - 1 && w_low >= 0) - v3 = input[h_high * data_width + w_low]; - T v4 = 0; - if (h_high <= height - 1 && w_high <= width - 1) - v4 = input[h_high * data_width + w_high]; - - T w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - - T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - return val; -} - -template -__device__ T dmcn_get_gradient_weight(T argmax_h, T argmax_w, const int h, - const int w, const int height, - const int width) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floorf(argmax_h); - int argmax_w_low = floorf(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - if (h == argmax_h_low && w == argmax_w_low) - weight = (h + 1 - argmax_h) * (w + 1 - argmax_w); - if (h == argmax_h_low && w == argmax_w_high) - weight = (h + 1 - argmax_h) * (argmax_w + 1 - w); - if (h == argmax_h_high && w == argmax_w_low) - weight = (argmax_h + 1 - h) * (w + 1 - argmax_w); - if (h == argmax_h_high && w == argmax_w_high) - weight = (argmax_h + 1 - h) * (argmax_w + 1 - w); - return weight; -} - -template -__device__ T dmcn_get_coordinate_weight(T argmax_h, T argmax_w, - const int height, const int width, - const T *im_data, const int data_width, - const int bp_dir) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floorf(argmax_h); - int argmax_w_low = floorf(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - - if (bp_dir == 0) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += -1 * (argmax_w - argmax_w_low) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_w - argmax_w_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } else if (bp_dir == 1) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_h_low + 1 - argmax_h) * - 
im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += -1 * (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } - - return weight; -} - -template -__global__ void modulated_deformable_im2col_gpu_kernel( - const int n, const T *data_im, const T *data_offset, const T *data_mask, - const int height, const int width, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int num_channels, const int deformable_group, const int height_col, - const int width_col, T *data_col) { - CUDA_1D_KERNEL_LOOP(index, n) { - // index index of output matrix - const int w_col = index % width_col; - const int h_col = (index / width_col) % height_col; - const int b_col = (index / width_col / height_col) % batch_size; - const int c_im = (index / width_col / height_col) / batch_size; - const int c_col = c_im * kernel_h * kernel_w; - - // compute deformable group index - const int deformable_group_index = c_im / channel_per_deformable_group; - - const int h_in = h_col * stride_h - pad_h; - const int w_in = w_col * stride_w - pad_w; - - T *data_col_ptr = - data_col + - ((c_col * batch_size + b_col) * height_col + h_col) * width_col + w_col; - const T *data_im_ptr = - data_im + (b_col * num_channels + c_im) * height * width; - const T *data_offset_ptr = - data_offset + (b_col * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - - const T *data_mask_ptr = - data_mask + (b_col * deformable_group + deformable_group_index) * - kernel_h * kernel_w * height_col * width_col; - - for (int i = 0; i < kernel_h; ++i) { - for (int j = 0; j < kernel_w; ++j) { - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + - w_col; - const int data_mask_hw_ptr = - ((i * kernel_w + j) * height_col + h_col) * width_col + w_col; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - T val = static_cast(0); - const T h_im = h_in + i * dilation_h + offset_h; - const T w_im = w_in + j * dilation_w + offset_w; - if (h_im > -1 && w_im > -1 && h_im < height && w_im < width) - val = dmcn_im2col_bilinear(data_im_ptr, width, height, width, h_im, - w_im); - *data_col_ptr = val * mask; - data_col_ptr += batch_size * height_col * width_col; - } - } - } -} - -template -__global__ void modulated_deformable_col2im_gpu_kernel( - const int n, const T *data_col, const T *data_offset, const T *data_mask, - const int channels, const int height, const int width, const int kernel_h, - const int kernel_w, const int pad_h, const int pad_w, const int stride_h, - const int stride_w, const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int deformable_group, const int height_col, const int width_col, - T *grad_im) { - 
CUDA_1D_KERNEL_LOOP(index, n) { - const int j = (index / width_col / height_col / batch_size) % kernel_w; - const int i = - (index / width_col / height_col / batch_size / kernel_w) % kernel_h; - const int c = - index / width_col / height_col / batch_size / kernel_w / kernel_h; - // compute the start and end of the output - - const int deformable_group_index = c / channel_per_deformable_group; - - int w_out = index % width_col; - int h_out = (index / width_col) % height_col; - int b = (index / width_col / height_col) % batch_size; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - - const T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - const T *data_mask_ptr = - data_mask + (b * deformable_group + deformable_group_index) * kernel_h * - kernel_w * height_col * width_col; - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out; - const int data_mask_hw_ptr = - ((i * kernel_w + j) * height_col + h_out) * width_col + w_out; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - const T cur_inv_h_data = h_in + i * dilation_h + offset_h; - const T cur_inv_w_data = w_in + j * dilation_w + offset_w; - - const T cur_top_grad = data_col[index] * mask; - const int cur_h = (int)cur_inv_h_data; - const int cur_w = (int)cur_inv_w_data; - for (int dy = -2; dy <= 2; dy++) { - for (int dx = -2; dx <= 2; dx++) { - if (cur_h + dy >= 0 && cur_h + dy < height && cur_w + dx >= 0 && - cur_w + dx < width && abs(cur_inv_h_data - (cur_h + dy)) < 1 && - abs(cur_inv_w_data - (cur_w + dx)) < 1) { - int cur_bottom_grad_pos = - ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx; - T weight = - dmcn_get_gradient_weight(cur_inv_h_data, cur_inv_w_data, - cur_h + dy, cur_w + dx, height, width); - atomicAdd(grad_im + cur_bottom_grad_pos, weight * cur_top_grad); - } - } - } - } -} - -template -__global__ void modulated_deformable_col2im_coord_gpu_kernel( - const int n, const T *data_col, const T *data_im, const T *data_offset, - const T *data_mask, const int channels, const int height, const int width, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int channel_per_deformable_group, - const int batch_size, const int offset_channels, const int deformable_group, - const int height_col, const int width_col, T *grad_offset, T *grad_mask) { - CUDA_1D_KERNEL_LOOP(index, n) { - T val = 0, mval = 0; - int w = index % width_col; - int h = (index / width_col) % height_col; - int c = (index / width_col / height_col) % offset_channels; - int b = (index / width_col / height_col) / offset_channels; - // compute the start and end of the output - - const int deformable_group_index = c / (2 * kernel_h * kernel_w); - const int col_step = kernel_h * kernel_w; - int cnt = 0; - const T *data_col_ptr = data_col + deformable_group_index * - channel_per_deformable_group * - batch_size * width_col * height_col; - const T *data_im_ptr = - data_im + (b * deformable_group + deformable_group_index) * - channel_per_deformable_group / kernel_h / kernel_w * - height * width; - const T *data_offset_ptr = - data_offset + (b * deformable_group + 
deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - const T *data_mask_ptr = - data_mask + (b * deformable_group + deformable_group_index) * kernel_h * - kernel_w * height_col * width_col; - - const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w; - - for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; - col_c += col_step) { - const int col_pos = - (((col_c * batch_size + b) * height_col) + h) * width_col + w; - const int bp_dir = offset_c % 2; - - int j = (col_pos / width_col / height_col / batch_size) % kernel_w; - int i = - (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h; - int w_out = col_pos % width_col; - int h_out = (col_pos / width_col) % height_col; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - const int data_offset_h_ptr = - (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out); - const int data_offset_w_ptr = - (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + - w_out); - const int data_mask_hw_ptr = - (((i * kernel_w + j) * height_col + h_out) * width_col + w_out); - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - T inv_h = h_in + i * dilation_h + offset_h; - T inv_w = w_in + j * dilation_w + offset_w; - if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width) - inv_h = inv_w = -2; - else - mval += data_col_ptr[col_pos] * - dmcn_im2col_bilinear(data_im_ptr + cnt * height * width, width, - height, width, inv_h, inv_w); - const T weight = dmcn_get_coordinate_weight( - inv_h, inv_w, height, width, data_im_ptr + cnt * height * width, - width, bp_dir); - val += weight * data_col_ptr[col_pos] * mask; - cnt += 1; - } - // KERNEL_ASSIGN(grad_offset[index], offset_req, val); - grad_offset[index] = val; - if (offset_c % 2 == 0) - // KERNEL_ASSIGN(grad_mask[(((b * deformable_group + - // deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * - // height_col + h) * width_col + w], mask_req, mval); - grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * - kernel_w + - offset_c / 2) * - height_col + - h) * - width_col + - w] = mval; - } -} - -#endif // MODULATED_DEFORM_CONV_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/ms_deform_attn_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/ms_deform_attn_cuda_kernel.cuh deleted file mode 100644 index 12225ffd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/ms_deform_attn_cuda_kernel.cuh +++ /dev/null @@ -1,801 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from -*https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ -#ifndef DEFORM_ATTN_CUDA_KERNEL -#define DEFORM_ATTN_CUDA_KERNEL - -#include "common_cuda_helper.hpp" -#include "pytorch_cuda_helper.hpp" - -template -__device__ scalar_t ms_deform_attn_im2col_bilinear( - const scalar_t *&bottom_data, const int &height, const int &width, - const int &nheads, const int &channels, const scalar_t &h, - const scalar_t &w, const int &m, const int &c) { - const int h_low = floorf(h); - const int w_low = floorf(w); - const int h_high = h_low + 1; - const int w_high = w_low + 1; - - const scalar_t lh = h - h_low; - const scalar_t lw = w - w_low; - const scalar_t hh = 1 - lh, hw = 1 - lw; - - const int w_stride = nheads * channels; - const int h_stride = width * w_stride; - const int h_low_ptr_offset = h_low * h_stride; - const int h_high_ptr_offset = h_low_ptr_offset + h_stride; - const int w_low_ptr_offset = w_low * w_stride; - const int w_high_ptr_offset = w_low_ptr_offset + w_stride; - const int base_ptr = m * channels + c; - - scalar_t v1 = 0; - if (h_low >= 0 && w_low >= 0) { - const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr; - v1 = bottom_data[ptr1]; - } - scalar_t v2 = 0; - if (h_low >= 0 && w_high <= width - 1) { - const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr; - v2 = bottom_data[ptr2]; - } - scalar_t v3 = 0; - if (h_high <= height - 1 && w_low >= 0) { - const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr; - v3 = bottom_data[ptr3]; - } - scalar_t v4 = 0; - if (h_high <= height - 1 && w_high <= width - 1) { - const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr; - v4 = bottom_data[ptr4]; - } - - const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - - const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - return val; -} - -template -__device__ void ms_deform_attn_col2im_bilinear( - const scalar_t *&bottom_data, const int &height, const int &width, - const int &nheads, const int &channels, const scalar_t &h, - const scalar_t &w, const int &m, const int &c, const scalar_t &top_grad, - const scalar_t &attn_weight, scalar_t *&grad_value, - scalar_t *grad_sampling_loc, scalar_t *grad_attn_weight) { - const int h_low = floorf(h); - const int w_low = floorf(w); - const int h_high = h_low + 1; - const int w_high = w_low + 1; - - const scalar_t lh = h - h_low; - const scalar_t lw = w - w_low; - const scalar_t hh = 1 - lh, hw = 1 - lw; - - const int w_stride = nheads * channels; - const int h_stride = width * w_stride; - const int h_low_ptr_offset = h_low * h_stride; - const int h_high_ptr_offset = h_low_ptr_offset + h_stride; - const int w_low_ptr_offset = w_low * w_stride; - const int w_high_ptr_offset = w_low_ptr_offset + w_stride; - const int base_ptr = m * channels + c; - - const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - const scalar_t top_grad_value = top_grad * attn_weight; - scalar_t grad_h_weight = 0, grad_w_weight = 0; - - scalar_t v1 = 0; - if (h_low >= 0 && w_low >= 0) { - const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr; - v1 = bottom_data[ptr1]; - grad_h_weight -= hw * v1; - grad_w_weight -= hh * v1; - atomicAdd(grad_value + ptr1, w1 * top_grad_value); 
- } - scalar_t v2 = 0; - if (h_low >= 0 && w_high <= width - 1) { - const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr; - v2 = bottom_data[ptr2]; - grad_h_weight -= lw * v2; - grad_w_weight += hh * v2; - atomicAdd(grad_value + ptr2, w2 * top_grad_value); - } - scalar_t v3 = 0; - if (h_high <= height - 1 && w_low >= 0) { - const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr; - v3 = bottom_data[ptr3]; - grad_h_weight += hw * v3; - grad_w_weight -= lh * v3; - atomicAdd(grad_value + ptr3, w3 * top_grad_value); - } - scalar_t v4 = 0; - if (h_high <= height - 1 && w_high <= width - 1) { - const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr; - v4 = bottom_data[ptr4]; - grad_h_weight += lw * v4; - grad_w_weight += lh * v4; - atomicAdd(grad_value + ptr4, w4 * top_grad_value); - } - - const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - *grad_attn_weight = top_grad * val; - *grad_sampling_loc = width * grad_w_weight * top_grad_value; - *(grad_sampling_loc + 1) = height * grad_h_weight * top_grad_value; -} - -template -__device__ void ms_deform_attn_col2im_bilinear_gm( - const scalar_t *&bottom_data, const int &height, const int &width, - const int &nheads, const int &channels, const scalar_t &h, - const scalar_t &w, const int &m, const int &c, const scalar_t &top_grad, - const scalar_t &attn_weight, scalar_t *&grad_value, - scalar_t *grad_sampling_loc, scalar_t *grad_attn_weight) { - const int h_low = floorf(h); - const int w_low = floorf(w); - const int h_high = h_low + 1; - const int w_high = w_low + 1; - - const scalar_t lh = h - h_low; - const scalar_t lw = w - w_low; - const scalar_t hh = 1 - lh, hw = 1 - lw; - - const int w_stride = nheads * channels; - const int h_stride = width * w_stride; - const int h_low_ptr_offset = h_low * h_stride; - const int h_high_ptr_offset = h_low_ptr_offset + h_stride; - const int w_low_ptr_offset = w_low * w_stride; - const int w_high_ptr_offset = w_low_ptr_offset + w_stride; - const int base_ptr = m * channels + c; - - const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - const scalar_t top_grad_value = top_grad * attn_weight; - scalar_t grad_h_weight = 0, grad_w_weight = 0; - - scalar_t v1 = 0; - if (h_low >= 0 && w_low >= 0) { - const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr; - v1 = bottom_data[ptr1]; - grad_h_weight -= hw * v1; - grad_w_weight -= hh * v1; - atomicAdd(grad_value + ptr1, w1 * top_grad_value); - } - scalar_t v2 = 0; - if (h_low >= 0 && w_high <= width - 1) { - const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr; - v2 = bottom_data[ptr2]; - grad_h_weight -= lw * v2; - grad_w_weight += hh * v2; - atomicAdd(grad_value + ptr2, w2 * top_grad_value); - } - scalar_t v3 = 0; - if (h_high <= height - 1 && w_low >= 0) { - const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr; - v3 = bottom_data[ptr3]; - grad_h_weight += hw * v3; - grad_w_weight -= lh * v3; - atomicAdd(grad_value + ptr3, w3 * top_grad_value); - } - scalar_t v4 = 0; - if (h_high <= height - 1 && w_high <= width - 1) { - const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr; - v4 = bottom_data[ptr4]; - grad_h_weight += lw * v4; - grad_w_weight += lh * v4; - atomicAdd(grad_value + ptr4, w4 * top_grad_value); - } - - const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - atomicAdd(grad_attn_weight, top_grad * val); - atomicAdd(grad_sampling_loc, width * grad_w_weight * top_grad_value); - atomicAdd(grad_sampling_loc + 1, height * grad_h_weight * top_grad_value); 
-} - -template -__global__ void ms_deformable_im2col_gpu_kernel( - const int n, const scalar_t *data_value, const int64_t *data_spatial_shapes, - const int64_t *data_level_start_index, const scalar_t *data_sampling_loc, - const scalar_t *data_attn_weight, const int batch_size, - const int spatial_size, const int num_heads, const int channels, - const int num_levels, const int num_query, const int num_point, - scalar_t *data_col) { - CUDA_1D_KERNEL_LOOP(index, n) { - int _temp = index; - const int c_col = _temp % channels; - _temp /= channels; - const int sampling_index = _temp; - const int m_col = _temp % num_heads; - _temp /= num_heads; - _temp /= num_query; - const int b_col = _temp; - - scalar_t *data_col_ptr = data_col + index; - int data_weight_ptr = sampling_index * num_levels * num_point; - int data_loc_w_ptr = data_weight_ptr << 1; - const int qid_stride = num_heads * channels; - const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride; - scalar_t col = 0; - - for (int l_col = 0; l_col < num_levels; ++l_col) { - const int level_start_id = data_level_start_index[l_col]; - const int spatial_h_ptr = l_col << 1; - const int spatial_h = data_spatial_shapes[spatial_h_ptr]; - const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1]; - const scalar_t *data_value_ptr = - data_value + - (data_value_ptr_init_offset + level_start_id * qid_stride); - for (int p_col = 0; p_col < num_point; ++p_col) { - const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr]; - const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1]; - const scalar_t weight = data_attn_weight[data_weight_ptr]; - - const scalar_t h_im = loc_h * spatial_h - 0.5; - const scalar_t w_im = loc_w * spatial_w - 0.5; - - if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w) { - col += ms_deform_attn_im2col_bilinear(data_value_ptr, spatial_h, - spatial_w, num_heads, channels, - h_im, w_im, m_col, c_col) * - weight; - } - - data_weight_ptr += 1; - data_loc_w_ptr += 2; - } - } - *data_col_ptr = col; - } -} - -template -__global__ void ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1( - const int n, const scalar_t *grad_col, const scalar_t *data_value, - const int64_t *data_spatial_shapes, const int64_t *data_level_start_index, - const scalar_t *data_sampling_loc, const scalar_t *data_attn_weight, - const int batch_size, const int spatial_size, const int num_heads, - const int channels, const int num_levels, const int num_query, - const int num_point, scalar_t *grad_value, scalar_t *grad_sampling_loc, - scalar_t *grad_attn_weight) { - __shared__ scalar_t cache_grad_sampling_loc[blockSize * 2]; - __shared__ scalar_t cache_grad_attn_weight[blockSize]; - unsigned int tid = threadIdx.x; - const int qid_stride = num_heads * channels; - CUDA_1D_KERNEL_LOOP(index, n) { - int _temp = index; - const int c_col = _temp % channels; - _temp /= channels; - const int sampling_index = _temp; - const int m_col = _temp % num_heads; - _temp /= num_heads; - _temp /= num_query; - const int b_col = _temp; - - const scalar_t top_grad = grad_col[index]; - - int data_weight_ptr = sampling_index * num_levels * num_point; - int data_loc_w_ptr = data_weight_ptr << 1; - const int grad_sampling_ptr = data_weight_ptr; - scalar_t *grad_sampling_loc_out = - grad_sampling_loc + (grad_sampling_ptr << 1); - scalar_t *grad_attn_weight_out = grad_attn_weight + grad_sampling_ptr; - const int grad_weight_stride = 1; - const int grad_loc_stride = 2; - const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride; - - 
for (int l_col = 0; l_col < num_levels; ++l_col) { - const int level_start_id = data_level_start_index[l_col]; - const int spatial_h_ptr = l_col << 1; - const int spatial_h = data_spatial_shapes[spatial_h_ptr]; - const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1]; - const int value_ptr_offset = - data_value_ptr_init_offset + level_start_id * qid_stride; - const scalar_t *data_value_ptr = data_value + value_ptr_offset; - scalar_t *grad_value_ptr = grad_value + value_ptr_offset; - - for (int p_col = 0; p_col < num_point; ++p_col) { - const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr]; - const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1]; - const scalar_t weight = data_attn_weight[data_weight_ptr]; - - const scalar_t h_im = loc_h * spatial_h - 0.5; - const scalar_t w_im = loc_w * spatial_w - 0.5; - *(cache_grad_sampling_loc + (threadIdx.x << 1)) = 0; - *(cache_grad_sampling_loc + ((threadIdx.x << 1) + 1)) = 0; - *(cache_grad_attn_weight + threadIdx.x) = 0; - if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w) { - ms_deform_attn_col2im_bilinear( - data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, - w_im, m_col, c_col, top_grad, weight, grad_value_ptr, - cache_grad_sampling_loc + (threadIdx.x << 1), - cache_grad_attn_weight + threadIdx.x); - } - - __syncthreads(); - if (tid == 0) { - scalar_t _grad_w = cache_grad_sampling_loc[0], - _grad_h = cache_grad_sampling_loc[1], - _grad_a = cache_grad_attn_weight[0]; - int sid = 2; - for (unsigned int _tid = 1; _tid < blockSize; ++_tid) { - _grad_w += cache_grad_sampling_loc[sid]; - _grad_h += cache_grad_sampling_loc[sid + 1]; - _grad_a += cache_grad_attn_weight[_tid]; - sid += 2; - } - - *grad_sampling_loc_out = _grad_w; - *(grad_sampling_loc_out + 1) = _grad_h; - *grad_attn_weight_out = _grad_a; - } - __syncthreads(); - - data_weight_ptr += 1; - data_loc_w_ptr += 2; - grad_attn_weight_out += grad_weight_stride; - grad_sampling_loc_out += grad_loc_stride; - } - } - } -} - -template -__global__ void ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2( - const int n, const scalar_t *grad_col, const scalar_t *data_value, - const int64_t *data_spatial_shapes, const int64_t *data_level_start_index, - const scalar_t *data_sampling_loc, const scalar_t *data_attn_weight, - const int batch_size, const int spatial_size, const int num_heads, - const int channels, const int num_levels, const int num_query, - const int num_point, scalar_t *grad_value, scalar_t *grad_sampling_loc, - scalar_t *grad_attn_weight) { - __shared__ scalar_t cache_grad_sampling_loc[blockSize * 2]; - __shared__ scalar_t cache_grad_attn_weight[blockSize]; - unsigned int tid = threadIdx.x; - CUDA_1D_KERNEL_LOOP(index, n) { - int _temp = index; - const int c_col = _temp % channels; - _temp /= channels; - const int sampling_index = _temp; - const int m_col = _temp % num_heads; - _temp /= num_heads; - _temp /= num_query; - const int b_col = _temp; - - const scalar_t top_grad = grad_col[index]; - - int data_weight_ptr = sampling_index * num_levels * num_point; - int data_loc_w_ptr = data_weight_ptr << 1; - const int grad_sampling_ptr = data_weight_ptr; - scalar_t *grad_sampling_loc_out = - grad_sampling_loc + (grad_sampling_ptr << 1); - scalar_t *grad_attn_weight_out = grad_attn_weight + grad_sampling_ptr; - const int grad_weight_stride = 1; - const int grad_loc_stride = 2; - const int qid_stride = num_heads * channels; - const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride; - - for (int l_col = 0; l_col < 
num_levels; ++l_col) { - const int level_start_id = data_level_start_index[l_col]; - const int spatial_h_ptr = l_col << 1; - const int spatial_h = data_spatial_shapes[spatial_h_ptr]; - const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1]; - const int value_ptr_offset = - data_value_ptr_init_offset + level_start_id * qid_stride; - const scalar_t *data_value_ptr = data_value + value_ptr_offset; - scalar_t *grad_value_ptr = grad_value + value_ptr_offset; - - for (int p_col = 0; p_col < num_point; ++p_col) { - const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr]; - const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1]; - const scalar_t weight = data_attn_weight[data_weight_ptr]; - - const scalar_t h_im = loc_h * spatial_h - 0.5; - const scalar_t w_im = loc_w * spatial_w - 0.5; - *(cache_grad_sampling_loc + (threadIdx.x << 1)) = 0; - *(cache_grad_sampling_loc + ((threadIdx.x << 1) + 1)) = 0; - *(cache_grad_attn_weight + threadIdx.x) = 0; - if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w) { - ms_deform_attn_col2im_bilinear( - data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, - w_im, m_col, c_col, top_grad, weight, grad_value_ptr, - cache_grad_sampling_loc + (threadIdx.x << 1), - cache_grad_attn_weight + threadIdx.x); - } - - __syncthreads(); - - for (unsigned int s = blockSize / 2; s > 0; s >>= 1) { - if (tid < s) { - const unsigned int xid1 = tid << 1; - const unsigned int xid2 = (tid + s) << 1; - cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s]; - cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2]; - cache_grad_sampling_loc[xid1 + 1] += - cache_grad_sampling_loc[xid2 + 1]; - } - __syncthreads(); - } - - if (tid == 0) { - *grad_sampling_loc_out = cache_grad_sampling_loc[0]; - *(grad_sampling_loc_out + 1) = cache_grad_sampling_loc[1]; - *grad_attn_weight_out = cache_grad_attn_weight[0]; - } - __syncthreads(); - - data_weight_ptr += 1; - data_loc_w_ptr += 2; - grad_attn_weight_out += grad_weight_stride; - grad_sampling_loc_out += grad_loc_stride; - } - } - } -} - -template -__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v1( - const int n, const scalar_t *grad_col, const scalar_t *data_value, - const int64_t *data_spatial_shapes, const int64_t *data_level_start_index, - const scalar_t *data_sampling_loc, const scalar_t *data_attn_weight, - const int batch_size, const int spatial_size, const int num_heads, - const int channels, const int num_levels, const int num_query, - const int num_point, scalar_t *grad_value, scalar_t *grad_sampling_loc, - scalar_t *grad_attn_weight) { - extern __shared__ int _s[]; - scalar_t *cache_grad_sampling_loc = reinterpret_cast(_s); - scalar_t *cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x; - unsigned int tid = threadIdx.x; - CUDA_1D_KERNEL_LOOP(index, n) { - int _temp = index; - const int c_col = _temp % channels; - _temp /= channels; - const int sampling_index = _temp; - const int m_col = _temp % num_heads; - _temp /= num_heads; - _temp /= num_query; - const int b_col = _temp; - - const scalar_t top_grad = grad_col[index]; - - int data_weight_ptr = sampling_index * num_levels * num_point; - int data_loc_w_ptr = data_weight_ptr << 1; - const int grad_sampling_ptr = data_weight_ptr; - scalar_t *grad_sampling_loc_out = - grad_sampling_loc + (grad_sampling_ptr << 1); - scalar_t *grad_attn_weight_out = grad_attn_weight + grad_sampling_ptr; - const int grad_weight_stride = 1; - const int grad_loc_stride = 2; - const int qid_stride = num_heads * channels; - const 
int data_value_ptr_init_offset = b_col * spatial_size * qid_stride; - - for (int l_col = 0; l_col < num_levels; ++l_col) { - const int level_start_id = data_level_start_index[l_col]; - const int spatial_h_ptr = l_col << 1; - const int spatial_h = data_spatial_shapes[spatial_h_ptr]; - const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1]; - const int value_ptr_offset = - data_value_ptr_init_offset + level_start_id * qid_stride; - const scalar_t *data_value_ptr = data_value + value_ptr_offset; - scalar_t *grad_value_ptr = grad_value + value_ptr_offset; - - for (int p_col = 0; p_col < num_point; ++p_col) { - const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr]; - const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1]; - const scalar_t weight = data_attn_weight[data_weight_ptr]; - - const scalar_t h_im = loc_h * spatial_h - 0.5; - const scalar_t w_im = loc_w * spatial_w - 0.5; - *(cache_grad_sampling_loc + (threadIdx.x << 1)) = 0; - *(cache_grad_sampling_loc + ((threadIdx.x << 1) + 1)) = 0; - *(cache_grad_attn_weight + threadIdx.x) = 0; - if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w) { - ms_deform_attn_col2im_bilinear( - data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, - w_im, m_col, c_col, top_grad, weight, grad_value_ptr, - cache_grad_sampling_loc + (threadIdx.x << 1), - cache_grad_attn_weight + threadIdx.x); - } - - __syncthreads(); - if (tid == 0) { - scalar_t _grad_w = cache_grad_sampling_loc[0], - _grad_h = cache_grad_sampling_loc[1], - _grad_a = cache_grad_attn_weight[0]; - int sid = 2; - for (unsigned int _tid = 1; _tid < blockDim.x; ++_tid) { - _grad_w += cache_grad_sampling_loc[sid]; - _grad_h += cache_grad_sampling_loc[sid + 1]; - _grad_a += cache_grad_attn_weight[_tid]; - sid += 2; - } - - *grad_sampling_loc_out = _grad_w; - *(grad_sampling_loc_out + 1) = _grad_h; - *grad_attn_weight_out = _grad_a; - } - __syncthreads(); - - data_weight_ptr += 1; - data_loc_w_ptr += 2; - grad_attn_weight_out += grad_weight_stride; - grad_sampling_loc_out += grad_loc_stride; - } - } - } -} - -template -__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v2( - const int n, const scalar_t *grad_col, const scalar_t *data_value, - const int64_t *data_spatial_shapes, const int64_t *data_level_start_index, - const scalar_t *data_sampling_loc, const scalar_t *data_attn_weight, - const int batch_size, const int spatial_size, const int num_heads, - const int channels, const int num_levels, const int num_query, - const int num_point, scalar_t *grad_value, scalar_t *grad_sampling_loc, - scalar_t *grad_attn_weight) { - extern __shared__ int _s[]; - scalar_t *cache_grad_sampling_loc = reinterpret_cast(_s); - scalar_t *cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x; - unsigned int tid = threadIdx.x; - CUDA_1D_KERNEL_LOOP(index, n) { - int _temp = index; - const int c_col = _temp % channels; - _temp /= channels; - const int sampling_index = _temp; - const int m_col = _temp % num_heads; - _temp /= num_heads; - _temp /= num_query; - const int b_col = _temp; - - const scalar_t top_grad = grad_col[index]; - - int data_weight_ptr = sampling_index * num_levels * num_point; - int data_loc_w_ptr = data_weight_ptr << 1; - const int grad_sampling_ptr = data_weight_ptr; - scalar_t *grad_sampling_loc_out = - grad_sampling_loc + (grad_sampling_ptr << 1); - scalar_t *grad_attn_weight_out = grad_attn_weight + grad_sampling_ptr; - const int grad_weight_stride = 1; - const int grad_loc_stride = 2; - const int qid_stride = num_heads * channels; - const 
int data_value_ptr_init_offset = b_col * spatial_size * qid_stride; - - for (int l_col = 0; l_col < num_levels; ++l_col) { - const int level_start_id = data_level_start_index[l_col]; - const int spatial_h_ptr = l_col << 1; - const int spatial_h = data_spatial_shapes[spatial_h_ptr]; - const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1]; - const int value_ptr_offset = - data_value_ptr_init_offset + level_start_id * qid_stride; - const scalar_t *data_value_ptr = data_value + value_ptr_offset; - scalar_t *grad_value_ptr = grad_value + value_ptr_offset; - - for (int p_col = 0; p_col < num_point; ++p_col) { - const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr]; - const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1]; - const scalar_t weight = data_attn_weight[data_weight_ptr]; - - const scalar_t h_im = loc_h * spatial_h - 0.5; - const scalar_t w_im = loc_w * spatial_w - 0.5; - *(cache_grad_sampling_loc + (threadIdx.x << 1)) = 0; - *(cache_grad_sampling_loc + ((threadIdx.x << 1) + 1)) = 0; - *(cache_grad_attn_weight + threadIdx.x) = 0; - if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w) { - ms_deform_attn_col2im_bilinear( - data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, - w_im, m_col, c_col, top_grad, weight, grad_value_ptr, - cache_grad_sampling_loc + (threadIdx.x << 1), - cache_grad_attn_weight + threadIdx.x); - } - - __syncthreads(); - - for (unsigned int s = blockDim.x / 2, spre = blockDim.x; s > 0; - s >>= 1, spre >>= 1) { - if (tid < s) { - const unsigned int xid1 = tid << 1; - const unsigned int xid2 = (tid + s) << 1; - cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s]; - cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2]; - cache_grad_sampling_loc[xid1 + 1] += - cache_grad_sampling_loc[xid2 + 1]; - if (tid + (s << 1) < spre) { - cache_grad_attn_weight[tid] += - cache_grad_attn_weight[tid + (s << 1)]; - cache_grad_sampling_loc[xid1] += - cache_grad_sampling_loc[xid2 + (s << 1)]; - cache_grad_sampling_loc[xid1 + 1] += - cache_grad_sampling_loc[xid2 + 1 + (s << 1)]; - } - } - __syncthreads(); - } - - if (tid == 0) { - *grad_sampling_loc_out = cache_grad_sampling_loc[0]; - *(grad_sampling_loc_out + 1) = cache_grad_sampling_loc[1]; - *grad_attn_weight_out = cache_grad_attn_weight[0]; - } - __syncthreads(); - - data_weight_ptr += 1; - data_loc_w_ptr += 2; - grad_attn_weight_out += grad_weight_stride; - grad_sampling_loc_out += grad_loc_stride; - } - } - } -} - -template -__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v2_multi_blocks( - const int n, const scalar_t *grad_col, const scalar_t *data_value, - const int64_t *data_spatial_shapes, const int64_t *data_level_start_index, - const scalar_t *data_sampling_loc, const scalar_t *data_attn_weight, - const int batch_size, const int spatial_size, const int num_heads, - const int channels, const int num_levels, const int num_query, - const int num_point, scalar_t *grad_value, scalar_t *grad_sampling_loc, - scalar_t *grad_attn_weight) { - extern __shared__ int _s[]; - scalar_t *cache_grad_sampling_loc = reinterpret_cast(_s); - scalar_t *cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x; - unsigned int tid = threadIdx.x; - CUDA_1D_KERNEL_LOOP(index, n) { - int _temp = index; - const int c_col = _temp % channels; - _temp /= channels; - const int sampling_index = _temp; - const int m_col = _temp % num_heads; - _temp /= num_heads; - _temp /= num_query; - const int b_col = _temp; - - const scalar_t top_grad = grad_col[index]; - - int 
data_weight_ptr = sampling_index * num_levels * num_point; - int data_loc_w_ptr = data_weight_ptr << 1; - const int grad_sampling_ptr = data_weight_ptr; - scalar_t *grad_sampling_loc_out = - grad_sampling_loc + (grad_sampling_ptr << 1); - scalar_t *grad_attn_weight_out = grad_attn_weight + grad_sampling_ptr; - const int grad_weight_stride = 1; - const int grad_loc_stride = 2; - const int qid_stride = num_heads * channels; - const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride; - - for (int l_col = 0; l_col < num_levels; ++l_col) { - const int level_start_id = data_level_start_index[l_col]; - const int spatial_h_ptr = l_col << 1; - const int spatial_h = data_spatial_shapes[spatial_h_ptr]; - const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1]; - const int value_ptr_offset = - data_value_ptr_init_offset + level_start_id * qid_stride; - const scalar_t *data_value_ptr = data_value + value_ptr_offset; - scalar_t *grad_value_ptr = grad_value + value_ptr_offset; - - for (int p_col = 0; p_col < num_point; ++p_col) { - const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr]; - const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1]; - const scalar_t weight = data_attn_weight[data_weight_ptr]; - - const scalar_t h_im = loc_h * spatial_h - 0.5; - const scalar_t w_im = loc_w * spatial_w - 0.5; - *(cache_grad_sampling_loc + (threadIdx.x << 1)) = 0; - *(cache_grad_sampling_loc + ((threadIdx.x << 1) + 1)) = 0; - *(cache_grad_attn_weight + threadIdx.x) = 0; - if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w) { - ms_deform_attn_col2im_bilinear( - data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, - w_im, m_col, c_col, top_grad, weight, grad_value_ptr, - cache_grad_sampling_loc + (threadIdx.x << 1), - cache_grad_attn_weight + threadIdx.x); - } - - __syncthreads(); - - for (unsigned int s = blockDim.x / 2, spre = blockDim.x; s > 0; - s >>= 1, spre >>= 1) { - if (tid < s) { - const unsigned int xid1 = tid << 1; - const unsigned int xid2 = (tid + s) << 1; - cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s]; - cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2]; - cache_grad_sampling_loc[xid1 + 1] += - cache_grad_sampling_loc[xid2 + 1]; - if (tid + (s << 1) < spre) { - cache_grad_attn_weight[tid] += - cache_grad_attn_weight[tid + (s << 1)]; - cache_grad_sampling_loc[xid1] += - cache_grad_sampling_loc[xid2 + (s << 1)]; - cache_grad_sampling_loc[xid1 + 1] += - cache_grad_sampling_loc[xid2 + 1 + (s << 1)]; - } - } - __syncthreads(); - } - - if (tid == 0) { - atomicAdd(grad_sampling_loc_out, cache_grad_sampling_loc[0]); - atomicAdd(grad_sampling_loc_out + 1, cache_grad_sampling_loc[1]); - atomicAdd(grad_attn_weight_out, cache_grad_attn_weight[0]); - } - __syncthreads(); - - data_weight_ptr += 1; - data_loc_w_ptr += 2; - grad_attn_weight_out += grad_weight_stride; - grad_sampling_loc_out += grad_loc_stride; - } - } - } -} - -template -__global__ void ms_deformable_col2im_gpu_kernel_gm( - const int n, const scalar_t *grad_col, const scalar_t *data_value, - const int64_t *data_spatial_shapes, const int64_t *data_level_start_index, - const scalar_t *data_sampling_loc, const scalar_t *data_attn_weight, - const int batch_size, const int spatial_size, const int num_heads, - const int channels, const int num_levels, const int num_query, - const int num_point, scalar_t *grad_value, scalar_t *grad_sampling_loc, - scalar_t *grad_attn_weight) { - CUDA_1D_KERNEL_LOOP(index, n) { - int _temp = index; - const int c_col = _temp % channels; - 
-    _temp /= channels;
-    const int sampling_index = _temp;
-    const int m_col = _temp % num_heads;
-    _temp /= num_heads;
-    _temp /= num_query;
-    const int b_col = _temp;
-
-    const scalar_t top_grad = grad_col[index];
-
-    int data_weight_ptr = sampling_index * num_levels * num_point;
-    int data_loc_w_ptr = data_weight_ptr << 1;
-    const int grad_sampling_ptr = data_weight_ptr;
-    scalar_t *grad_sampling_loc_out =
-        grad_sampling_loc + (grad_sampling_ptr << 1);
-    scalar_t *grad_attn_weight_out = grad_attn_weight + grad_sampling_ptr;
-    const int grad_weight_stride = 1;
-    const int grad_loc_stride = 2;
-    const int qid_stride = num_heads * channels;
-    const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
-
-    for (int l_col = 0; l_col < num_levels; ++l_col) {
-      const int level_start_id = data_level_start_index[l_col];
-      const int spatial_h_ptr = l_col << 1;
-      const int spatial_h = data_spatial_shapes[spatial_h_ptr];
-      const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
-      const int value_ptr_offset =
-          data_value_ptr_init_offset + level_start_id * qid_stride;
-      const scalar_t *data_value_ptr = data_value + value_ptr_offset;
-      scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
-
-      for (int p_col = 0; p_col < num_point; ++p_col) {
-        const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
-        const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
-        const scalar_t weight = data_attn_weight[data_weight_ptr];
-
-        const scalar_t h_im = loc_h * spatial_h - 0.5;
-        const scalar_t w_im = loc_w * spatial_w - 0.5;
-        if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w) {
-          ms_deform_attn_col2im_bilinear_gm(
-              data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im,
-              w_im, m_col, c_col, top_grad, weight, grad_value_ptr,
-              grad_sampling_loc_out, grad_attn_weight_out);
-        }
-        data_weight_ptr += 1;
-        data_loc_w_ptr += 2;
-        grad_attn_weight_out += grad_weight_stride;
-        grad_sampling_loc_out += grad_loc_stride;
-      }
-    }
-  }
-}
-#endif // DEFORM_ATTN_CUDA_KERNEL
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh
deleted file mode 100644
index 0a5c2505..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh
+++ /dev/null
@@ -1,117 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#ifndef NMS_CUDA_KERNEL_CUH
-#define NMS_CUDA_KERNEL_CUH
-
-#include
-#ifdef MMCV_WITH_TRT
-#include "common_cuda_helper.hpp"
-#else // MMCV_WITH_TRT
-#ifdef MMCV_USE_PARROTS
-#include "parrots_cuda_helper.hpp"
-#else // MMCV_USE_PARROTS
-#include "pytorch_cuda_helper.hpp"
-#endif // MMCV_USE_PARROTS
-#endif // MMCV_WITH_TRT
-
-int const threadsPerBlock = sizeof(unsigned long long int) * 8;
-
-__device__ inline bool devIoU(float const *const a, float const *const b,
-                              const int offset, const float threshold) {
-  float left = fmaxf(a[0], b[0]), right = fminf(a[2], b[2]);
-  float top = fmaxf(a[1], b[1]), bottom = fminf(a[3], b[3]);
-  float width = fmaxf(right - left + offset, 0.f),
-        height = fmaxf(bottom - top + offset, 0.f);
-  float interS = width * height;
-  float Sa = (a[2] - a[0] + offset) * (a[3] - a[1] + offset);
-  float Sb = (b[2] - b[0] + offset) * (b[3] - b[1] + offset);
-  return interS > threshold * (Sa + Sb - interS);
-}
-
-__global__ void nms_cuda(const int n_boxes, const float iou_threshold,
-                         const int offset, const float *dev_boxes,
-                         unsigned long long *dev_mask) {
-  int blocks = (n_boxes + threadsPerBlock - 1) / threadsPerBlock;
-  CUDA_2D_KERNEL_BLOCK_LOOP(col_start, blocks, row_start, blocks) {
-    const int tid = threadIdx.x;
-
-    if (row_start > col_start) return;
-
-    const int row_size =
-        fminf(n_boxes - row_start * threadsPerBlock, threadsPerBlock);
-    const int col_size =
-        fminf(n_boxes - col_start * threadsPerBlock, threadsPerBlock);
-
-    __shared__ float block_boxes[threadsPerBlock * 4];
-    if (tid < col_size) {
-      block_boxes[tid * 4 + 0] =
-          dev_boxes[(threadsPerBlock * col_start + tid) * 4 + 0];
-      block_boxes[tid * 4 + 1] =
-          dev_boxes[(threadsPerBlock * col_start + tid) * 4 + 1];
-      block_boxes[tid * 4 + 2] =
-          dev_boxes[(threadsPerBlock * col_start + tid) * 4 + 2];
-      block_boxes[tid * 4 + 3] =
-          dev_boxes[(threadsPerBlock * col_start + tid) * 4 + 3];
-    }
-    __syncthreads();
-
-    if (tid < row_size) {
-      const int cur_box_idx = threadsPerBlock * row_start + tid;
-      const float *cur_box = dev_boxes + cur_box_idx * 4;
-      int i = 0;
-      unsigned long long int t = 0;
-      int start = 0;
-      if (row_start == col_start) {
-        start = tid + 1;
-      }
-      for (i = start; i < col_size; i++) {
-        if (devIoU(cur_box, block_boxes + i * 4, offset, iou_threshold)) {
-          t |= 1ULL << i;
-        }
-      }
-      dev_mask[cur_box_idx * gridDim.y + col_start] = t;
-    }
-  }
-}
-
-__global__ void gather_keep_from_mask(bool *keep,
-                                      const unsigned long long *dev_mask,
-                                      const int n_boxes) {
-  const int col_blocks = (n_boxes + threadsPerBlock - 1) / threadsPerBlock;
-  const int tid = threadIdx.x;
-
-  // mark the bboxes which have been removed.
-  extern __shared__ unsigned long long removed[];
-
-  // initialize removed.
-  for (int i = tid; i < col_blocks; i += blockDim.x) {
-    removed[i] = 0;
-  }
-  __syncthreads();
-
-  for (int nblock = 0; nblock < col_blocks; ++nblock) {
-    auto removed_val = removed[nblock];
-    __syncthreads();
-    const int i_offset = nblock * threadsPerBlock;
-#pragma unroll
-    for (int inblock = 0; inblock < threadsPerBlock; ++inblock) {
-      const int i = i_offset + inblock;
-      if (i >= n_boxes) break;
-      // select a candidate, check if it should kept.
-      if (!(removed_val & (1ULL << inblock))) {
-        if (tid == 0) {
-          // mark the output.
-          keep[i] = true;
-        }
-        auto p = dev_mask + i * col_blocks;
-        // remove all bboxes which overlap the candidate.
- for (int j = tid; j < col_blocks; j += blockDim.x) { - if (j >= nblock) removed[j] |= p[j]; - } - __syncthreads(); - removed_val = removed[nblock]; - } - } - } -} - -#endif // NMS_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/nms_rotated_cuda.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/nms_rotated_cuda.cuh deleted file mode 100644 index 747327af..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/nms_rotated_cuda.cuh +++ /dev/null @@ -1,133 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/nms_rotated/nms_rotated_cuda.cu -#ifndef NMS_ROTATED_CUDA_CUH -#define NMS_ROTATED_CUDA_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif -#include "box_iou_rotated_utils.hpp" - -__host__ __device__ inline int divideUP(const int x, const int y) { - return (((x) + (y)-1) / (y)); -} - -namespace { -int const threadsPerBlock = sizeof(unsigned long long) * 8; -} - -template -__global__ void nms_rotated_cuda_kernel(const int n_boxes, - const float iou_threshold, - const T* dev_boxes, - unsigned long long* dev_mask, - const int multi_label) { - // nms_rotated_cuda_kernel is modified from torchvision's nms_cuda_kernel - - if (multi_label == 1) { - const int row_start = blockIdx.y; - const int col_start = blockIdx.x; - - // if (row_start > col_start) return; - - const int row_size = - min(n_boxes - row_start * threadsPerBlock, threadsPerBlock); - const int col_size = - min(n_boxes - col_start * threadsPerBlock, threadsPerBlock); - - // Compared to nms_cuda_kernel, where each box is represented with 4 values - // (x1, y1, x2, y2), each rotated box is represented with 5 values - // (x_center, y_center, width, height, angle_degrees) here. 
- __shared__ T block_boxes[threadsPerBlock * 5]; - if (threadIdx.x < col_size) { - block_boxes[threadIdx.x * 5 + 0] = - dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 6 + 0]; - block_boxes[threadIdx.x * 5 + 1] = - dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 6 + 1]; - block_boxes[threadIdx.x * 5 + 2] = - dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 6 + 2]; - block_boxes[threadIdx.x * 5 + 3] = - dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 6 + 3]; - block_boxes[threadIdx.x * 5 + 4] = - dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 6 + 4]; - } - __syncthreads(); - - if (threadIdx.x < row_size) { - const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x; - const T* cur_box = dev_boxes + cur_box_idx * 6; - int i = 0; - unsigned long long t = 0; - int start = 0; - if (row_start == col_start) { - start = threadIdx.x + 1; - } - for (i = start; i < col_size; i++) { - // Instead of devIoU used by original horizontal nms, here - // we use the single_box_iou_rotated function from - // box_iou_rotated_utils.h - if (single_box_iou_rotated(cur_box, block_boxes + i * 5, 0) > - iou_threshold) { - t |= 1ULL << i; - } - } - const int col_blocks = divideUP(n_boxes, threadsPerBlock); - dev_mask[cur_box_idx * col_blocks + col_start] = t; - } - } else { - const int row_start = blockIdx.y; - const int col_start = blockIdx.x; - - // if (row_start > col_start) return; - - const int row_size = - min(n_boxes - row_start * threadsPerBlock, threadsPerBlock); - const int col_size = - min(n_boxes - col_start * threadsPerBlock, threadsPerBlock); - - // Compared to nms_cuda_kernel, where each box is represented with 4 values - // (x1, y1, x2, y2), each rotated box is represented with 5 values - // (x_center, y_center, width, height, angle_degrees) here. 
- __shared__ T block_boxes[threadsPerBlock * 5]; - if (threadIdx.x < col_size) { - block_boxes[threadIdx.x * 5 + 0] = - dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0]; - block_boxes[threadIdx.x * 5 + 1] = - dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 1]; - block_boxes[threadIdx.x * 5 + 2] = - dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2]; - block_boxes[threadIdx.x * 5 + 3] = - dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3]; - block_boxes[threadIdx.x * 5 + 4] = - dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4]; - } - __syncthreads(); - - if (threadIdx.x < row_size) { - const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x; - const T* cur_box = dev_boxes + cur_box_idx * 5; - int i = 0; - unsigned long long t = 0; - int start = 0; - if (row_start == col_start) { - start = threadIdx.x + 1; - } - for (i = start; i < col_size; i++) { - // Instead of devIoU used by original horizontal nms, here - // we use the single_box_iou_rotated function from - // box_iou_rotated_utils.h - if (single_box_iou_rotated(cur_box, block_boxes + i * 5, 0) > - iou_threshold) { - t |= 1ULL << i; - } - } - const int col_blocks = divideUP(n_boxes, threadsPerBlock); - dev_mask[cur_box_idx * col_blocks + col_start] = t; - } - } -} - -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/parrots_cudawarpfunction.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/parrots_cudawarpfunction.cuh deleted file mode 100644 index 7918a574..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/parrots_cudawarpfunction.cuh +++ /dev/null @@ -1,109 +0,0 @@ -/* - * Copyright (c) 2019, SenseTime. - */ - -#ifndef INCLUDE_PARROTS_DARRAY_CUDAWARPFUNCTION_CUH_ -#define INCLUDE_PARROTS_DARRAY_CUDAWARPFUNCTION_CUH_ - -#ifndef __CUDACC__ -#error cudawarpfunction.cuh should only be included by .cu files -#endif -#include - -#include - -#ifdef PARROTS_USE_HALF -#include -#endif -#ifdef __CUDA_ARCH__ -#define CUDA_INTRINSIC_FUNC(Expr) Expr -#else -#define CUDA_INTRINSIC_FUNC(Expr) -#endif - -#if !defined(__CUDA_ARCH__) || __CUDA_ARCH__ >= 300 - -#ifdef PARROTS_USE_HALF - -#if CUDA_VERSION < 9000 - -__device__ inline float16 __shfl(float16 var, int srcLane, int width) { - CUDA_INTRINSIC_FUNC(return __shfl(var.y, srcLane, width);); -} - -__device__ inline float16 __shfl_up(float16 var, unsigned delta, int width) { - CUDA_INTRINSIC_FUNC(return __shfl_up(var.y, delta, width);); -} - -__device__ inline float16 __shfl_down(float16 var, unsigned delta, int width) { - CUDA_INTRINSIC_FUNC(return __shfl_down(var.y, delta, width);); -} - -__device__ inline float16 __shfl_xor(float16 var, int laneMask, int width) { - CUDA_INTRINSIC_FUNC(return __shfl_xor(var.y, laneMask, width);); -} - -#else // CUDA_VERSION >= 9000 - -__device__ inline float16 __shfl_sync(unsigned mask, float16 var, int srcLane, - int width = warpSize) { - CUDA_INTRINSIC_FUNC(float16 r; r.y = __shfl_sync(mask, var.y, srcLane, width); - return r;); -} - -__device__ inline float16 __shfl_up_sync(unsigned mask, float16 var, - unsigned delta, int width = warpSize) { - CUDA_INTRINSIC_FUNC( - float16 r; r.y = __shfl_up_sync(mask, var.y, delta, width); return r;); -} - -__device__ inline float16 __shfl_down_sync(unsigned mask, float16 var, - unsigned delta, - int width = warpSize) { - CUDA_INTRINSIC_FUNC( - float16 r; r.y = __shfl_down_sync(mask, var.y, delta, width); return r;); -} - -__device__ inline 
float16 __shfl_xor_sync(unsigned mask, float16 var, - int laneMask, int width) { - CUDA_INTRINSIC_FUNC(float16 r; - r.y = __shfl_xor_sync(mask, var.y, laneMask, width); - return r;); -} - -#endif // CUDA_VERSION < 9000 - -#endif // PARROTS_USE_HALF - -// warp shuffle interface with a dummy mask -#if CUDA_VERSION < 9000 - -template -__device__ inline T __shfl_sync(unsigned mask, T var, int srcLane, - int width = warpSize) { - CUDA_INTRINSIC_FUNC(return __shfl(var, srcLane, width);); -} - -template -__device__ inline T __shfl_up_sync(unsigned mask, T var, unsigned delta, - int width = warpSize) { - CUDA_INTRINSIC_FUNC(return __shfl_up(var, delta, width);); -} - -template -__device__ inline T __shfl_down_sync(unsigned mask, T var, unsigned delta, - int width = warpSize) { - CUDA_INTRINSIC_FUNC(return __shfl_down(var, delta, width);); -} - -template -__device__ inline T __shfl_xor_sync(unsigned mask, T var, int laneMask, - int width = warpSize) { - CUDA_INTRINSIC_FUNC(return __shfl_xor(var, laneMask, width);); -} - -#endif // CUDA_VERSION < 9000 - -#endif // !defined(__CUDA_ARCH__) || __CUDA_ARCH__ >= 300 - -#endif // INCLUDE_PARROTS_DARRAY_CUDAWARPFUNCTION_CUH_ diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/points_in_boxes_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/points_in_boxes_cuda_kernel.cuh deleted file mode 100644 index 34236207..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/points_in_boxes_cuda_kernel.cuh +++ /dev/null @@ -1,95 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef POINT_IN_BOXES_CUDA_KERNEL_CUH -#define POINT_IN_BOXES_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__device__ inline void lidar_to_local_coords(T shift_x, T shift_y, T rz, - T &local_x, T &local_y) { - T cosa = cos(-rz), sina = sin(-rz); - local_x = shift_x * cosa + shift_y * (-sina); - local_y = shift_x * sina + shift_y * cosa; -} - -template -__device__ inline int check_pt_in_box3d(const T *pt, const T *box3d, T &local_x, - T &local_y) { - // param pt: (x, y, z) - // param box3d: (cx, cy, cz, x_size, y_size, z_size, rz) in LiDAR coordinate, - // cz in the bottom center - T x = pt[0], y = pt[1], z = pt[2]; - T cx = box3d[0], cy = box3d[1], cz = box3d[2]; - T x_size = box3d[3], y_size = box3d[4], z_size = box3d[5], rz = box3d[6]; - cz += z_size / - 2.0; // shift to the center since cz in box3d is the bottom center - - if (fabsf(z - cz) > z_size / 2.0) return 0; - lidar_to_local_coords(x - cx, y - cy, rz, local_x, local_y); - float in_flag = (local_x > -x_size / 2.0) & (local_x < x_size / 2.0) & - (local_y > -y_size / 2.0) & (local_y < y_size / 2.0); - return in_flag; -} - -template -__global__ void points_in_boxes_part_forward_cuda_kernel( - int batch_size, int boxes_num, int pts_num, const T *boxes, const T *pts, - int *box_idx_of_points) { - // params boxes: (B, N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR - // coordinate, z is the bottom center, each box DO NOT overlaps params pts: - // (B, npoints, 3) [x, y, z] in LiDAR coordinate params boxes_idx_of_points: - // (B, npoints), default -1 - - int bs_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(pt_idx, pts_num) { - if (bs_idx >= batch_size) return; - - boxes += bs_idx * boxes_num * 7; - pts += bs_idx * pts_num * 3 + pt_idx * 3; - box_idx_of_points += bs_idx * pts_num + pt_idx; - - T local_x = 0, local_y = 0; - int 
cur_in_flag = 0; - for (int k = 0; k < boxes_num; k++) { - cur_in_flag = check_pt_in_box3d(pts, boxes + k * 7, local_x, local_y); - if (cur_in_flag) { - box_idx_of_points[0] = k; - break; - } - } - } -} - -template -__global__ void points_in_boxes_all_forward_cuda_kernel( - int batch_size, int boxes_num, int pts_num, const T *boxes, const T *pts, - int *box_idx_of_points) { - // params boxes: (B, N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR - // coordinate, z is the bottom center, each box DO NOT overlaps params pts: - // (B, npoints, 3) [x, y, z] in LiDAR coordinate params boxes_idx_of_points: - // (B, npoints), default -1 - - int bs_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(pt_idx, pts_num) { - if (bs_idx >= batch_size) return; - - boxes += bs_idx * boxes_num * 7; - pts += bs_idx * pts_num * 3 + pt_idx * 3; - box_idx_of_points += bs_idx * pts_num * boxes_num + pt_idx * boxes_num; - - T local_x = 0, local_y = 0; - for (int k = 0; k < boxes_num; k++) { - const int cur_in_flag = - check_pt_in_box3d(pts, boxes + k * 7, local_x, local_y); - if (cur_in_flag) { - box_idx_of_points[k] = 1; - } - } - } -} - -#endif // POINT_IN_BOXES_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/points_in_polygons_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/points_in_polygons_cuda_kernel.cuh deleted file mode 100644 index a0769d75..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/points_in_polygons_cuda_kernel.cuh +++ /dev/null @@ -1,79 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef POINTS_IN_POLYGONS_CUDA_KERNEL_CUH -#define POINTS_IN_POLYGONS_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -struct point { - float x, y; -}; - -template -__global__ void points_in_polygons_forward_cuda_kernel( - const int nthreads, const scalar_t *vertex1, const scalar_t *vertex2, - const int rows, const int cols, scalar_t *inside_flag) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int row = index / cols; - int col = index % cols; - - const scalar_t *offset_vertex1 = vertex1 + row * 2; - const scalar_t *offset_vertex2 = vertex2 + col * 8; - - point point_[1]; - point polygon[4]; - - point_[0].x = offset_vertex1[0]; - point_[0].y = offset_vertex1[1]; - - polygon[0].x = offset_vertex2[0]; - polygon[0].y = offset_vertex2[1]; - polygon[1].x = offset_vertex2[2]; - polygon[1].y = offset_vertex2[3]; - polygon[2].x = offset_vertex2[4]; - polygon[2].y = offset_vertex2[5]; - polygon[3].x = offset_vertex2[6]; - polygon[3].y = offset_vertex2[7]; - - int nCross = 0; - int i, j; - float sx, sy, tx, ty, px, py, x; - for (i = 0, j = 3; i < 4; j = i, i++) { - sx = polygon[i].x; - sy = polygon[i].y; - tx = polygon[j].x; - ty = polygon[j].y; - - px = point_[0].x; - py = point_[0].y; - - if (py < min(sy, ty)) continue; - if (py > max(sy, ty)) continue; - - if ((sx == px && sy == py) || (tx == px && ty == py)) { - break; - } else { - if ((sy < py && ty >= py) || (sy >= py && ty < py)) { - x = sx + (py - sy) * (tx - sx) / (ty - sy); - if (x == px) { - break; - } - if (x > px) { - nCross++; - } - } - } - } - if (nCross % 2 == 1) { - inside_flag[index] = 1.0; - } else { - inside_flag[index] = 0.0; - } - return; - } -} - -#endif // POINTS_IN_POLYGONS_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/psamask_cuda_kernel.cuh 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/psamask_cuda_kernel.cuh deleted file mode 100644 index 5d946686..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/psamask_cuda_kernel.cuh +++ /dev/null @@ -1,141 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef PSAMASK_CUDA_KERNEL_CUH -#define PSAMASK_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -// CUDA: grid stride looping -#ifndef CUDA_KERNEL_LOOP -#define CUDA_KERNEL_LOOP(i, n) \ - for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < (n); \ - i += blockDim.x * gridDim.x) -#endif - -template -__global__ void psamask_collect_forward_cuda( - const int nthreads, const int h_feature, const int w_feature, - const int h_mask, const int w_mask, const int half_h_mask, - const int half_w_mask, const T* mask_data, T* buffer_data) { - CUDA_KERNEL_LOOP(index, nthreads) { - const int w = index % w_feature; - const int h = (index / w_feature) % h_feature; - const int n = index / w_feature / h_feature; - // effective mask region : [hstart, hend) x [wstart, wend) with mask-indexed - const int hstart = max(0, half_h_mask - h); - const int hend = min(h_mask, h_feature + half_h_mask - h); - const int wstart = max(0, half_w_mask - w); - const int wend = min(w_mask, w_feature + half_w_mask - w); - // (hidx, widx ) with mask-indexed - // (hidx + h - half_h_mask, widx + w - half_w_mask) with feature-indexed - for (int hidx = hstart; hidx < hend; hidx++) { - for (int widx = wstart; widx < wend; widx++) { - buffer_data[(n * h_feature * w_feature + - (hidx + h - half_h_mask) * w_feature + - (widx + w - half_w_mask)) * - h_feature * w_feature + - h * w_feature + w] = mask_data - [((n * h_mask * w_mask + hidx * w_mask + widx) * h_feature + h) * - w_feature + - w]; - } - } - } -} - -template -__global__ void psamask_distribute_forward_cuda( - const int nthreads, const int h_feature, const int w_feature, - const int h_mask, const int w_mask, const int half_h_mask, - const int half_w_mask, const T* mask_data, T* buffer_data) { - CUDA_KERNEL_LOOP(index, nthreads) { - const int w = index % w_feature; - const int h = (index / w_feature) % h_feature; - const int n = index / w_feature / h_feature; - // effective mask region : [hstart, hend) x [wstart, wend) with mask-indexed - const int hstart = max(0, half_h_mask - h); - const int hend = min(h_mask, h_feature + half_h_mask - h); - const int wstart = max(0, half_w_mask - w); - const int wend = min(w_mask, w_feature + half_w_mask - w); - // (hidx, widx ) with mask-indexed - // (hidx + h - half_h_mask, widx + w - half_w_mask) with feature-indexed - for (int hidx = hstart; hidx < hend; hidx++) { - for (int widx = wstart; widx < wend; widx++) { - buffer_data[(n * h_feature * w_feature + h * w_feature + w) * - h_feature * w_feature + - (hidx + h - half_h_mask) * w_feature + - (widx + w - half_w_mask)] = mask_data - [((n * h_mask * w_mask + hidx * w_mask + widx) * h_feature + h) * - w_feature + - w]; - } - } - } -} - -template -__global__ void psamask_collect_backward_cuda( - const int nthreads, const int h_feature, const int w_feature, - const int h_mask, const int w_mask, const int half_h_mask, - const int half_w_mask, const T* buffer_diff, T* mask_diff) { - CUDA_KERNEL_LOOP(index, nthreads) { - const int w = index % w_feature; - const int h = (index / w_feature) % h_feature; - const int n = index / w_feature / h_feature; - // effective mask region : 
[hstart, hend) x [wstart, wend) with mask-indexed - const int hstart = max(0, half_h_mask - h); - const int hend = min(h_mask, h_feature + half_h_mask - h); - const int wstart = max(0, half_w_mask - w); - const int wend = min(w_mask, w_feature + half_w_mask - w); - // (hidx, widx ) with mask-indexed - // (hidx + h - half_h_mask, widx + w - half_w_mask) with feature-indexed - for (int hidx = hstart; hidx < hend; hidx++) { - for (int widx = wstart; widx < wend; widx++) { - mask_diff[((n * h_mask * w_mask + hidx * w_mask + widx) * h_feature + - h) * - w_feature + - w] = buffer_diff[(n * h_feature * w_feature + - (hidx + h - half_h_mask) * w_feature + - (widx + w - half_w_mask)) * - h_feature * w_feature + - h * w_feature + w]; - } - } - } -} - -template -__global__ void psamask_distribute_backward_cuda( - const int nthreads, const int h_feature, const int w_feature, - const int h_mask, const int w_mask, const int half_h_mask, - const int half_w_mask, const T* buffer_diff, T* mask_diff) { - CUDA_KERNEL_LOOP(index, nthreads) { - const int w = index % w_feature; - const int h = (index / w_feature) % h_feature; - const int n = index / w_feature / h_feature; - // effective mask region : [hstart, hend) x [wstart, wend) with mask-indexed - const int hstart = max(0, half_h_mask - h); - const int hend = min(h_mask, h_feature + half_h_mask - h); - const int wstart = max(0, half_w_mask - w); - const int wend = min(w_mask, w_feature + half_w_mask - w); - // (hidx, widx ) with mask-indexed - // (hidx + h - half_h_mask, widx + w - half_w_mask) with feature-indexed - for (int hidx = hstart; hidx < hend; hidx++) { - for (int widx = wstart; widx < wend; widx++) { - mask_diff[((n * h_mask * w_mask + hidx * w_mask + widx) * h_feature + - h) * - w_feature + - w] = - buffer_diff[(n * h_feature * w_feature + h * w_feature + w) * - h_feature * w_feature + - (hidx + h - half_h_mask) * w_feature + - (widx + w - half_w_mask)]; - } - } - } -} - -#endif // PSAMASK_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/riroi_align_rotated_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/riroi_align_rotated_cuda_kernel.cuh deleted file mode 100644 index 4383d9e8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/riroi_align_rotated_cuda_kernel.cuh +++ /dev/null @@ -1,242 +0,0 @@ -// Modified from -// https://github.com/csuhan/ReDet/blob/master/mmdet/ops/riroi_align/src/riroi_align_kernel.cu -#ifndef RIROI_ALIGN_ROTATED_CUDA_KERNEL_CUH -#define RIROI_ALIGN_ROTATED_CUDA_KERNEL_CUH - -#include -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else // MMCV_USE_PARROTS -#include "pytorch_cuda_helper.hpp" -#endif // MMCV_USE_PARROTS - -/*** Forward ***/ -template -__global__ void riroi_align_rotated_forward_cuda_kernel( - const int nthreads, const scalar_t *bottom_data, - const scalar_t *bottom_rois, const scalar_t spatial_scale, - const int num_samples, const bool clockwise, const int channels, - const int height, const int width, const int pooled_height, - const int pooled_width, const int num_orientations, scalar_t *top_data) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int o = (index / pooled_width / pooled_height) % num_orientations; - int c = - (index / pooled_width / pooled_height / num_orientations) % channels; - int n = index / pooled_width / 
pooled_height / num_orientations / channels; - - const scalar_t *offset_bottom_rois = bottom_rois + n * 6; - int roi_batch_ind = offset_bottom_rois[0]; - - // Do not using rounding; this implementation detail is critical - scalar_t roi_center_w = offset_bottom_rois[1] * spatial_scale; - scalar_t roi_center_h = offset_bottom_rois[2] * spatial_scale; - scalar_t roi_width = offset_bottom_rois[3] * spatial_scale; - scalar_t roi_height = offset_bottom_rois[4] * spatial_scale; - // scalar_t theta = offset_bottom_rois[5] * M_PI / 180.0; - scalar_t theta = offset_bottom_rois[5]; - // Force malformed ROIs to be 1x1 - roi_width = max(roi_width, (scalar_t)1.); - roi_height = max(roi_height, (scalar_t)1.); - scalar_t bin_size_h = static_cast(roi_height) / - static_cast(pooled_height); - scalar_t bin_size_w = - static_cast(roi_width) / static_cast(pooled_width); - - // find aligned index - scalar_t ind_float = theta * num_orientations / (2 * M_PI); - int ind = floorf(ind_float); - scalar_t l_var = ind_float - (scalar_t)ind; - scalar_t r_var = 1.0 - l_var; - // correct start channel - ind = (ind + num_orientations) % num_orientations; - // rotated channel - int ind_rot = (o - ind + num_orientations) % num_orientations; - int ind_rot_plus = (ind_rot + 1 + num_orientations) % num_orientations; - const scalar_t *offset_bottom_data = - bottom_data + (roi_batch_ind * channels * num_orientations + - c * num_orientations + ind_rot) * - height * width; - - const scalar_t *offset_bottom_data_plus = - bottom_data + (roi_batch_ind * channels * num_orientations + - c * num_orientations + ind_rot_plus) * - height * width; - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (num_samples > 0) - ? num_samples - : ceilf(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (num_samples > 0) ? num_samples : ceilf(roi_width / pooled_width); - - // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). - // Appropriate translation needs to be applied after. - if (clockwise) { - theta = -theta; // If clockwise, the angle needs to be reversed. - } - scalar_t roi_start_h = -roi_height / 2.0; - scalar_t roi_start_w = -roi_width / 2.0; - scalar_t cosscalar_theta = cos(theta); - scalar_t sinscalar_theta = sin(theta); - - // We do average (integral) pooling inside a bin - const scalar_t count = max(roi_bin_grid_h * roi_bin_grid_w, 1); // e.g. 
= 4 - - scalar_t output_val = 0.; - for (int iy = 0; iy < roi_bin_grid_h; iy++) { // e.g., iy = 0, 1 - const scalar_t yy = - roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const scalar_t xx = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - // Rotate by theta (counterclockwise) around the center and translate - scalar_t y = yy * cosscalar_theta - xx * sinscalar_theta + roi_center_h; - scalar_t x = yy * sinscalar_theta + xx * cosscalar_theta + roi_center_w; - - scalar_t val = bilinear_interpolate( - offset_bottom_data, height, width, y, x, index); - scalar_t val_plus = bilinear_interpolate( - offset_bottom_data_plus, height, width, y, x, index); - output_val += r_var * val + l_var * val_plus; - } - } - output_val /= count; - - top_data[index] = output_val; - } -} - -/*** Backward ***/ -template -__global__ void riroi_align_rotated_backward_cuda_kernel( - const int nthreads, const scalar_t *top_diff, const scalar_t *bottom_rois, - const scalar_t spatial_scale, const int num_samples, const bool clockwise, - const int channels, const int height, const int width, - const int pooled_height, const int pooled_width, const int num_orientations, - scalar_t *bottom_diff) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int o = (index / pooled_width / pooled_height) % num_orientations; - int c = - (index / pooled_width / pooled_height / num_orientations) % channels; - int n = index / pooled_width / pooled_height / num_orientations / channels; - - const scalar_t *offset_bottom_rois = bottom_rois + n * 6; - int roi_batch_ind = offset_bottom_rois[0]; - - // Do not round - scalar_t roi_center_w = offset_bottom_rois[1] * spatial_scale; - scalar_t roi_center_h = offset_bottom_rois[2] * spatial_scale; - scalar_t roi_width = offset_bottom_rois[3] * spatial_scale; - scalar_t roi_height = offset_bottom_rois[4] * spatial_scale; - // scalar_t theta = offset_bottom_rois[5] * M_PI / 180.0; - scalar_t theta = offset_bottom_rois[5]; - // Force malformed ROIs to be 1x1 - roi_width = max(roi_width, (scalar_t)1.); - roi_height = max(roi_height, (scalar_t)1.); - - scalar_t bin_size_h = static_cast(roi_height) / - static_cast(pooled_height); - scalar_t bin_size_w = - static_cast(roi_width) / static_cast(pooled_width); - - // find aligned index - scalar_t ind_float = theta * num_orientations / (2 * M_PI); - int ind = floorf(ind_float); - scalar_t l_var = ind_float - (scalar_t)ind; - scalar_t r_var = 1.0 - l_var; - // correct start channel - ind = (ind + num_orientations) % num_orientations; - // rotated channel - int ind_rot = (o - ind + num_orientations) % num_orientations; - int ind_rot_plus = (ind_rot + 1 + num_orientations) % num_orientations; - scalar_t *offset_bottom_diff = - bottom_diff + (roi_batch_ind * channels * num_orientations + - c * num_orientations + ind_rot) * - height * width; - scalar_t *offset_bottom_diff_plus = - bottom_diff + (roi_batch_ind * channels * num_orientations + - c * num_orientations + ind_rot_plus) * - height * width; - int top_offset = - (n * channels * num_orientations + c * num_orientations + o) * - pooled_height * pooled_width; - const scalar_t *offset_top_diff = top_diff + top_offset; - const scalar_t top_diff_this_bin = offset_top_diff[ph * pooled_width + pw]; - - // We use 
roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (num_samples > 0) - ? num_samples - : ceilf(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (num_samples > 0) ? num_samples : ceilf(roi_width / pooled_width); - - // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). - // Appropriate translation needs to be applied after. - if (clockwise) { - theta = -theta; // If clockwise, the angle needs to be reversed. - } - scalar_t roi_start_h = -roi_height / 2.0; - scalar_t roi_start_w = -roi_width / 2.0; - scalar_t cosTheta = cos(theta); - scalar_t sinTheta = sin(theta); - - // We do average (integral) pooling inside a bin - const scalar_t count = roi_bin_grid_h * roi_bin_grid_w; // e.g. = 4 - - for (int iy = 0; iy < roi_bin_grid_h; iy++) { // e.g., iy = 0, 1 - const scalar_t yy = - roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const scalar_t xx = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - // Rotate by theta around the center and translate - scalar_t y = yy * cosTheta - xx * sinTheta + roi_center_h; - scalar_t x = yy * sinTheta + xx * cosTheta + roi_center_w; - - scalar_t w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - - bilinear_interpolate_gradient(height, width, y, x, w1, w2, w3, - w4, x_low, x_high, y_low, - y_high, index); - - scalar_t g1 = top_diff_this_bin * w1 / count; - scalar_t g2 = top_diff_this_bin * w2 / count; - scalar_t g3 = top_diff_this_bin * w3 / count; - scalar_t g4 = top_diff_this_bin * w4 / count; - - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - atomicAdd(offset_bottom_diff + y_low * width + x_low, g1 * r_var); - atomicAdd(offset_bottom_diff + y_low * width + x_high, g2 * r_var); - atomicAdd(offset_bottom_diff + y_high * width + x_low, g3 * r_var); - atomicAdd(offset_bottom_diff + y_high * width + x_high, g4 * r_var); - - atomicAdd(offset_bottom_diff_plus + y_low * width + x_low, - g1 * l_var); - atomicAdd(offset_bottom_diff_plus + y_low * width + x_high, - g2 * l_var); - atomicAdd(offset_bottom_diff_plus + y_high * width + x_low, - g3 * l_var); - atomicAdd(offset_bottom_diff_plus + y_high * width + x_high, - g4 * l_var); - - } // if - } // ix - } // iy - } // CUDA_1D_KERNEL_LOOP -} // RiRoIAlignBackward - -#endif // RIROI_ALIGN_ROTATED_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_align_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_align_cuda_kernel.cuh deleted file mode 100644 index 4541462a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_align_cuda_kernel.cuh +++ /dev/null @@ -1,212 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ROI_ALIGN_CUDA_KERNEL_CUH -#define ROI_ALIGN_CUDA_KERNEL_CUH - -#include -#ifdef MMCV_WITH_TRT -#include "common_cuda_helper.hpp" -#else // MMCV_WITH_TRT -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else // MMCV_USE_PARROTS -#include "pytorch_cuda_helper.hpp" -#endif // MMCV_USE_PARROTS -#endif // MMCV_WITH_TRT - -/*** Forward ***/ -template -__global__ void roi_align_forward_cuda_kernel( - const int nthreads, const T* input, const T* rois, T* output, T* argmax_y, - T* argmax_x, const int pooled_height, const int pooled_width, - const T spatial_scale, const int sampling_ratio, - const int pool_mode, // 0 - max pool, 1 - avg pool - const bool aligned, const int channels, const int height, const int width) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - const T* offset_rois = rois + n * 5; - int roi_batch_ind = offset_rois[0]; - - // Do not using rounding; this implementation detail is critical - T offset = aligned ? (T)0.5 : (T)0.0; - T roi_start_w = offset_rois[1] * spatial_scale - offset; - T roi_start_h = offset_rois[2] * spatial_scale - offset; - T roi_end_w = offset_rois[3] * spatial_scale - offset; - T roi_end_h = offset_rois[4] * spatial_scale - offset; - - T roi_width = roi_end_w - roi_start_w; - T roi_height = roi_end_h - roi_start_h; - if (!aligned) { // for backward-compatibility only - roi_width = max(roi_width, (T)1.); - roi_height = max(roi_height, (T)1.); - } - - T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); - T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); - - const T* offset_input = - input + (roi_batch_ind * channels + c) * height * width; - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = - (sampling_ratio > 0) - ? sampling_ratio - : static_cast(ceilf(roi_height / pooled_height)); - int roi_bin_grid_w = - (sampling_ratio > 0) - ? 
sampling_ratio - : static_cast(ceilf(roi_width / pooled_width)); - - if (pool_mode == 0) { - // We do max pooling inside a bin - T maxval = -FLT_MAX; - T maxidx_y = -1.f, maxidx_x = -1.f; - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - const T y = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const T x = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - T val = - bilinear_interpolate(offset_input, height, width, y, x, index); - if (val > maxval) { - maxval = val; - maxidx_y = y; - maxidx_x = x; - } - } - } - output[index] = maxval; - argmax_y[index] = maxidx_y; - argmax_x[index] = maxidx_x; - } else if (pool_mode == 1) { - // We do average pooling inside a bin - const T count = max(roi_bin_grid_h * roi_bin_grid_w, 1); - T output_val = 0.; - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - const T y = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const T x = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - T val = - bilinear_interpolate(offset_input, height, width, y, x, index); - output_val += val; - } - } - output[index] = output_val / count; - } - } -} - -/*** Backward ***/ -template -__global__ void roi_align_backward_cuda_kernel( - const int nthreads, const T* grad_output, const T* rois, const T* argmax_y, - const T* argmax_x, T* grad_input, const int pooled_height, - const int pooled_width, const T spatial_scale, const int sampling_ratio, - const int pool_mode, // 0 - max pool, 1 - avg pool - const bool aligned, const int channels, const int height, const int width) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - const T grad_output_this_bin = grad_output[index]; - - const T* offset_rois = rois + n * 5; - int roi_batch_ind = offset_rois[0]; - T* offset_grad_input = - grad_input + ((roi_batch_ind * channels + c) * height * width); - - if (pool_mode == 0) { - T y = argmax_y[index], x = argmax_x[index]; - if (y != -1.f) { - T w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - bilinear_interpolate_gradient(height, width, y, x, w1, w2, w3, w4, - x_low, x_high, y_low, y_high, index); - - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - atomicAdd(offset_grad_input + y_low * width + x_low, - grad_output_this_bin * w1); - atomicAdd(offset_grad_input + y_low * width + x_high, - grad_output_this_bin * w2); - atomicAdd(offset_grad_input + y_high * width + x_low, - grad_output_this_bin * w3); - atomicAdd(offset_grad_input + y_high * width + x_high, - grad_output_this_bin * w4); - } - } - } else if (pool_mode == 1) { - // Do not using rounding; this implementation detail is critical - T offset = aligned ? 
(T)0.5 : (T)0.0; - T roi_start_w = offset_rois[1] * spatial_scale - offset; - T roi_start_h = offset_rois[2] * spatial_scale - offset; - T roi_end_w = offset_rois[3] * spatial_scale - offset; - T roi_end_h = offset_rois[4] * spatial_scale - offset; - - T roi_width = roi_end_w - roi_start_w; - T roi_height = roi_end_h - roi_start_h; - if (!aligned) { // for backward-compatibility only - roi_width = max(roi_width, (T)1.); - roi_height = max(roi_height, (T)1.); - } - - T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); - T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = - (sampling_ratio > 0) - ? sampling_ratio - : static_cast(ceilf(roi_height / pooled_height)); - int roi_bin_grid_w = - (sampling_ratio > 0) - ? sampling_ratio - : static_cast(ceilf(roi_width / pooled_width)); - - // We do average (integral) pooling inside a bin - const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. = 4 - - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - const T y = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const T x = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - T w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - bilinear_interpolate_gradient(height, width, y, x, w1, w2, w3, w4, - x_low, x_high, y_low, y_high, index); - - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - atomicAdd(offset_grad_input + y_low * width + x_low, - grad_output_this_bin * w1 / count); - atomicAdd(offset_grad_input + y_low * width + x_high, - grad_output_this_bin * w2 / count); - atomicAdd(offset_grad_input + y_high * width + x_low, - grad_output_this_bin * w3 / count); - atomicAdd(offset_grad_input + y_high * width + x_high, - grad_output_this_bin * w4 / count); - } - } - } - } - } -} - -#endif // ROI_ALIGN_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_align_rotated_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_align_rotated_cuda_kernel.cuh deleted file mode 100644 index 8274dc50..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_align_rotated_cuda_kernel.cuh +++ /dev/null @@ -1,202 +0,0 @@ -// Modified from -// https://github.com/facebookresearch/detectron2/tree/master/detectron2/layers/csrc/ROIAlignRotated -// Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -#ifndef ROI_ALIGN_ROTATED_CUDA_KERNEL_CUH -#define ROI_ALIGN_ROTATED_CUDA_KERNEL_CUH - -#include -#ifdef MMCV_WITH_TRT -#include "common_cuda_helper.hpp" -#else // MMCV_WITH_TRT -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else // MMCV_USE_PARROTS -#include "pytorch_cuda_helper.hpp" -#endif // MMCV_USE_PARROTS -#endif // MMCV_WITH_TRT - -/*** Forward ***/ -template -__global__ void roi_align_rotated_forward_cuda_kernel( - const int nthreads, const scalar_t *bottom_data, - const scalar_t *bottom_rois, const scalar_t spatial_scale, - const int sampling_ratio, const bool aligned, const bool clockwise, - const int channels, const int height, const int width, - const int pooled_height, const int pooled_width, scalar_t *top_data) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - const scalar_t *offset_bottom_rois = bottom_rois + n * 6; - int roi_batch_ind = offset_bottom_rois[0]; - - // Do not using rounding; this implementation detail is critical - scalar_t offset = aligned ? (scalar_t)0.5 : (scalar_t)0.0; - scalar_t roi_center_w = offset_bottom_rois[1] * spatial_scale - offset; - scalar_t roi_center_h = offset_bottom_rois[2] * spatial_scale - offset; - scalar_t roi_width = offset_bottom_rois[3] * spatial_scale; - scalar_t roi_height = offset_bottom_rois[4] * spatial_scale; - // scalar_t theta = offset_bottom_rois[5] * M_PI / 180.0; - scalar_t theta = offset_bottom_rois[5]; - if (clockwise) { - theta = -theta; // If clockwise, the angle needs to be reversed. - } - if (!aligned) { // for backward-compatibility only - // Force malformed ROIs to be 1x1 - roi_width = max(roi_width, (scalar_t)1.); - roi_height = max(roi_height, (scalar_t)1.); - } - scalar_t bin_size_h = static_cast(roi_height) / - static_cast(pooled_height); - scalar_t bin_size_w = - static_cast(roi_width) / static_cast(pooled_width); - - const scalar_t *offset_bottom_data = - bottom_data + (roi_batch_ind * channels + c) * height * width; - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (sampling_ratio > 0) - ? sampling_ratio - : ceilf(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (sampling_ratio > 0) ? sampling_ratio : ceilf(roi_width / pooled_width); - - // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). - // Appropriate translation needs to be applied after. - scalar_t roi_start_h = -roi_height / 2.0; - scalar_t roi_start_w = -roi_width / 2.0; - scalar_t cosscalar_theta = cos(theta); - scalar_t sinscalar_theta = sin(theta); - - // We do average (integral) pooling inside a bin - const scalar_t count = max(roi_bin_grid_h * roi_bin_grid_w, 1); // e.g. 
= 4 - - scalar_t output_val = 0.; - for (int iy = 0; iy < roi_bin_grid_h; iy++) { // e.g., iy = 0, 1 - const scalar_t yy = - roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const scalar_t xx = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - // Rotate by theta (counterclockwise) around the center and translate - scalar_t y = yy * cosscalar_theta - xx * sinscalar_theta + roi_center_h; - scalar_t x = yy * sinscalar_theta + xx * cosscalar_theta + roi_center_w; - - scalar_t val = bilinear_interpolate( - offset_bottom_data, height, width, y, x, index); - output_val += val; - } - } - output_val /= count; - - top_data[index] = output_val; - } -} - -/*** Backward ***/ -template -__global__ void roi_align_rotated_backward_cuda_kernel( - const int nthreads, const scalar_t *top_diff, const scalar_t *bottom_rois, - const scalar_t spatial_scale, const int sampling_ratio, const bool aligned, - const bool clockwise, const int channels, const int height, const int width, - const int pooled_height, const int pooled_width, scalar_t *bottom_diff) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - const scalar_t *offset_bottom_rois = bottom_rois + n * 6; - int roi_batch_ind = offset_bottom_rois[0]; - - // Do not round - scalar_t offset = aligned ? (scalar_t)0.5 : (scalar_t)0.0; - scalar_t roi_center_w = offset_bottom_rois[1] * spatial_scale - offset; - scalar_t roi_center_h = offset_bottom_rois[2] * spatial_scale - offset; - scalar_t roi_width = offset_bottom_rois[3] * spatial_scale; - scalar_t roi_height = offset_bottom_rois[4] * spatial_scale; - // scalar_t theta = offset_bottom_rois[5] * M_PI / 180.0; - scalar_t theta = offset_bottom_rois[5]; - if (clockwise) { - theta = -theta; // If clockwise, the angle needs to be reversed. - } - if (!aligned) { // for backward-compatibility only - // Force malformed ROIs to be 1x1 - roi_width = max(roi_width, (scalar_t)1.); - roi_height = max(roi_height, (scalar_t)1.); - } - scalar_t bin_size_h = static_cast(roi_height) / - static_cast(pooled_height); - scalar_t bin_size_w = - static_cast(roi_width) / static_cast(pooled_width); - - scalar_t *offset_bottom_diff = - bottom_diff + (roi_batch_ind * channels + c) * height * width; - - int top_offset = (n * channels + c) * pooled_height * pooled_width; - const scalar_t *offset_top_diff = top_diff + top_offset; - const scalar_t top_diff_this_bin = offset_top_diff[ph * pooled_width + pw]; - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (sampling_ratio > 0) - ? sampling_ratio - : ceilf(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (sampling_ratio > 0) ? sampling_ratio : ceilf(roi_width / pooled_width); - - // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). - // Appropriate translation needs to be applied after. - scalar_t roi_start_h = -roi_height / 2.0; - scalar_t roi_start_w = -roi_width / 2.0; - scalar_t cosTheta = cos(theta); - scalar_t sinTheta = sin(theta); - - // We do average (integral) pooling inside a bin - const scalar_t count = roi_bin_grid_h * roi_bin_grid_w; // e.g. 
= 4 - - for (int iy = 0; iy < roi_bin_grid_h; iy++) { // e.g., iy = 0, 1 - const scalar_t yy = - roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const scalar_t xx = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - // Rotate by theta around the center and translate - scalar_t y = yy * cosTheta - xx * sinTheta + roi_center_h; - scalar_t x = yy * sinTheta + xx * cosTheta + roi_center_w; - - scalar_t w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - - bilinear_interpolate_gradient(height, width, y, x, w1, w2, w3, - w4, x_low, x_high, y_low, - y_high, index); - - scalar_t g1 = top_diff_this_bin * w1 / count; - scalar_t g2 = top_diff_this_bin * w2 / count; - scalar_t g3 = top_diff_this_bin * w3 / count; - scalar_t g4 = top_diff_this_bin * w4 / count; - - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - atomicAdd(offset_bottom_diff + y_low * width + x_low, g1); - atomicAdd(offset_bottom_diff + y_low * width + x_high, g2); - atomicAdd(offset_bottom_diff + y_high * width + x_low, g3); - atomicAdd(offset_bottom_diff + y_high * width + x_high, g4); - } // if - } // ix - } // iy - } // CUDA_1D_KERNEL_LOOP -} // RoIAlignBackward - -#endif // ROI_ALIGN_ROTATED_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_pool_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_pool_cuda_kernel.cuh deleted file mode 100644 index 3d7eae66..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roi_pool_cuda_kernel.cuh +++ /dev/null @@ -1,93 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ROI_POOL_CUDA_KERNEL_CUH -#define ROI_POOL_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void roi_pool_forward_cuda_kernel( - const int nthreads, const T* input, const T* rois, T* output, int* argmax, - const int pooled_height, const int pooled_width, const T spatial_scale, - const int channels, const int height, const int width) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - const T* offset_rois = rois + n * 5; - int roi_batch_ind = offset_rois[0]; - // calculate the roi region on feature maps - T roi_x1 = offset_rois[1] * spatial_scale; - T roi_y1 = offset_rois[2] * spatial_scale; - T roi_x2 = (offset_rois[3] + 1) * spatial_scale; - T roi_y2 = (offset_rois[4] + 1) * spatial_scale; - - // force malformed rois to be 1x1 - T roi_w = roi_x2 - roi_x1; - T roi_h = roi_y2 - roi_y1; - if (roi_w <= 0 || roi_h <= 0) continue; - - T bin_size_w = roi_w / static_cast(pooled_width); - T bin_size_h = roi_h / static_cast(pooled_height); - - // the corresponding bin region - int bin_x1 = floorf(static_cast(pw) * bin_size_w + roi_x1); - int bin_y1 = floorf(static_cast(ph) * bin_size_h + roi_y1); - int bin_x2 = ceilf(static_cast(pw + 1) * bin_size_w + roi_x1); - int bin_y2 = ceilf(static_cast(ph + 1) * bin_size_h + roi_y1); - - // add roi offsets and clip to input boundaries - bin_x1 = min(max(bin_x1, 0), width); - bin_y1 = min(max(bin_y1, 0), height); - bin_x2 = min(max(bin_x2, 0), width); - bin_y2 = min(max(bin_y2, 0), height); - bool is_empty = (bin_y2 <= bin_y1) || (bin_x2 <= bin_x1); - - const T* offset_input = - input + (roi_batch_ind * channels + c) * height * width; - // Define an empty pooling region to be zero - // If nothing is pooled, argmax = -1 causes nothing to be backprop'd - T max_val = is_empty ? 
0 : -FLT_MAX; - int max_idx = -1; - for (int h = bin_y1; h < bin_y2; ++h) { - for (int w = bin_x1; w < bin_x2; ++w) { - int offset = h * width + w; - if (offset_input[offset] > max_val) { - max_val = offset_input[offset]; - max_idx = offset; - } - } - } - output[index] = max_val; - if (argmax != NULL) argmax[index] = max_idx; - } -} - -template -__global__ void roi_pool_backward_cuda_kernel( - const int nthreads, const T* grad_output, const T* rois, const int* argmax, - T* grad_input, const int pooled_height, const int pooled_width, - const int channels, const int height, const int width) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - // (n, c) is an element in the pooled output - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - int roi_batch_ind = rois[n * 5]; - T* grad_input_offset = - grad_input + ((roi_batch_ind * channels + c) * height * width); - int argmax_index = argmax[index]; - - if (argmax_index != -1) { - atomicAdd(grad_input_offset + argmax_index, grad_output[index]); - } - } -} - -#endif // ROI_POOL_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roiaware_pool3d_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roiaware_pool3d_cuda_kernel.cuh deleted file mode 100644 index fc0aacf1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roiaware_pool3d_cuda_kernel.cuh +++ /dev/null @@ -1,260 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ROIAWARE_POOL3D_CUDA_KERNEL_CUH -#define ROIAWARE_POOL3D_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__device__ inline void lidar_to_local_coords(T shift_x, T shift_y, T rz, - T &local_x, T &local_y) { - T cosa = cos(-rz), sina = sin(-rz); - local_x = shift_x * cosa + shift_y * (-sina); - local_y = shift_x * sina + shift_y * cosa; -} - -template -__device__ inline int check_pt_in_box3d(const T *pt, const T *box3d, T &local_x, - T &local_y) { - // param pt: (x, y, z) - // param box3d: (cx, cy, cz, x_size, y_size, z_size, rz) in LiDAR coordinate, - // cz in the bottom center - T x = pt[0], y = pt[1], z = pt[2]; - T cx = box3d[0], cy = box3d[1], cz = box3d[2]; - T x_size = box3d[3], y_size = box3d[4], z_size = box3d[5], rz = box3d[6]; - cz += z_size / - 2.0; // shift to the center since cz in box3d is the bottom center - - if (fabsf(z - cz) > z_size / 2.0) return 0; - lidar_to_local_coords(x - cx, y - cy, rz, local_x, local_y); - float in_flag = (local_x > -x_size / 2.0) & (local_x < x_size / 2.0) & - (local_y > -y_size / 2.0) & (local_y < y_size / 2.0); - return in_flag; -} - -template -__global__ void generate_pts_mask_for_box3d(int boxes_num, int pts_num, - int out_x, int out_y, int out_z, - const T *rois, const T *pts, - int *pts_mask) { - // params rois: (N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR - // coordinate params pts: (npoints, 3) [x, y, z] params pts_mask: (N, - // npoints): -1 means point does not in this box, otherwise: encode (x_idxs, - // y_idxs, z_idxs) by binary bit - int box_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(pt_idx, pts_num) { - if (box_idx >= boxes_num) return; - - pts += pt_idx * 3; - rois += box_idx * 7; - pts_mask += box_idx * pts_num + pt_idx; - - T local_x = 0, local_y = 0; - int cur_in_flag = check_pt_in_box3d(pts, rois, local_x, local_y); - - pts_mask[0] = -1; - if (cur_in_flag > 0) { - T 
local_z = pts[2] - rois[2]; - T x_size = rois[3], y_size = rois[4], z_size = rois[5]; - - T x_res = x_size / out_x; - T y_res = y_size / out_y; - T z_res = z_size / out_z; - - unsigned int x_idx = int((local_x + x_size / 2) / x_res); - unsigned int y_idx = int((local_y + y_size / 2) / y_res); - unsigned int z_idx = int(local_z / z_res); - - x_idx = min(max(x_idx, 0), out_x - 1); - y_idx = min(max(y_idx, 0), out_y - 1); - z_idx = min(max(z_idx, 0), out_z - 1); - - unsigned int idx_encoding = (x_idx << 16) + (y_idx << 8) + z_idx; - - pts_mask[0] = idx_encoding; - } - } -} - -template -__global__ void collect_inside_pts_for_box3d(int boxes_num, int pts_num, - int max_pts_each_voxel, int out_x, - int out_y, int out_z, - const int *pts_mask, - T *pts_idx_of_voxels) { - // params pts_mask: (N, npoints) 0 or 1 - // params pts_idx_of_voxels: (N, out_x, out_y, out_z, max_pts_each_voxel) - CUDA_1D_KERNEL_LOOP(box_idx, boxes_num) { - int max_num_pts = max_pts_each_voxel - 1; // index 0 is the counter - pts_idx_of_voxels += box_idx * out_x * out_y * out_z * max_pts_each_voxel; - - for (int k = 0; k < pts_num; k++) { - if (pts_mask[box_idx * pts_num + k] != -1) { - unsigned int idx_encoding = pts_mask[box_idx * pts_num + k]; - unsigned int x_idx = (idx_encoding >> 16) & 0xFF; - unsigned int y_idx = (idx_encoding >> 8) & 0xFF; - unsigned int z_idx = idx_encoding & 0xFF; - unsigned int base_offset = x_idx * out_y * out_z * max_pts_each_voxel + - y_idx * out_z * max_pts_each_voxel + - z_idx * max_pts_each_voxel; - unsigned int cnt = pts_idx_of_voxels[base_offset]; - if (cnt < max_num_pts) { - pts_idx_of_voxels[base_offset + cnt + 1] = k; - pts_idx_of_voxels[base_offset]++; - } - } - } - } -} - -template -__global__ void roiaware_maxpool3d(int boxes_num, int pts_num, int channels, - int max_pts_each_voxel, int out_x, int out_y, - int out_z, const T *pts_feature, - const int *pts_idx_of_voxels, - T *pooled_features, int *argmax) { - // params pts_feature: (npoints, C) - // params pts_idx_of_voxels: (N, out_x, out_y, out_z, max_pts_each_voxel), - // index 0 is the counter params pooled_features: (N, out_x, out_y, out_z, C) - // params argmax: (N, out_x, out_y, out_z, C) - - int box_idx = blockIdx.z; - int channel_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(voxel_idx_flat, out_x * out_y * out_z) { - int x_idx = voxel_idx_flat / (out_y * out_z); - int y_idx = (voxel_idx_flat - x_idx * (out_y * out_z)) / out_z; - int z_idx = voxel_idx_flat % out_z; - if (box_idx >= boxes_num || channel_idx >= channels) return; - - int offset_base = x_idx * out_y * out_z + y_idx * out_z + z_idx; - pts_idx_of_voxels += box_idx * out_x * out_y * out_z * max_pts_each_voxel + - offset_base * max_pts_each_voxel; - pooled_features += box_idx * out_x * out_y * out_z * channels + - offset_base * channels + channel_idx; - argmax += box_idx * out_x * out_y * out_z * channels + - offset_base * channels + channel_idx; - - int argmax_idx = -1; - float max_val = -1e50; - - int total_pts = pts_idx_of_voxels[0]; - - for (int k = 1; k <= total_pts; k++) { - if (pts_feature[pts_idx_of_voxels[k] * channels + channel_idx] > - max_val) { - max_val = pts_feature[pts_idx_of_voxels[k] * channels + channel_idx]; - argmax_idx = pts_idx_of_voxels[k]; - } - } - - if (argmax_idx != -1) { - pooled_features[0] = max_val; - } - argmax[0] = argmax_idx; - } -} - -template -__global__ void roiaware_avgpool3d(int boxes_num, int pts_num, int channels, - int max_pts_each_voxel, int out_x, int out_y, - int out_z, const T *pts_feature, - const int *pts_idx_of_voxels, - T 
*pooled_features) { - // params pts_feature: (npoints, C) - // params pts_idx_of_voxels: (N, out_x, out_y, out_z, max_pts_each_voxel), - // index 0 is the counter params pooled_features: (N, out_x, out_y, out_z, C) - // params argmax: (N, out_x, out_y, out_z, C) - - int box_idx = blockIdx.z; - int channel_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(voxel_idx_flat, out_x * out_y * out_z) { - int x_idx = voxel_idx_flat / (out_y * out_z); - int y_idx = (voxel_idx_flat - x_idx * (out_y * out_z)) / out_z; - int z_idx = voxel_idx_flat % out_z; - if (box_idx >= boxes_num || channel_idx >= channels) return; - - int offset_base = x_idx * out_y * out_z + y_idx * out_z + z_idx; - pts_idx_of_voxels += box_idx * out_x * out_y * out_z * max_pts_each_voxel + - offset_base * max_pts_each_voxel; - pooled_features += box_idx * out_x * out_y * out_z * channels + - offset_base * channels + channel_idx; - - float sum_val = 0; - int total_pts = pts_idx_of_voxels[0]; - - for (int k = 1; k <= total_pts; k++) { - sum_val += pts_feature[pts_idx_of_voxels[k] * channels + channel_idx]; - } - - if (total_pts > 0) { - pooled_features[0] = sum_val / total_pts; - } - } -} - -template -__global__ void roiaware_maxpool3d_backward(int boxes_num, int channels, - int out_x, int out_y, int out_z, - const int *argmax, - const T *grad_out, T *grad_in) { - // params argmax: (N, out_x, out_y, out_z, C) - // params grad_out: (N, out_x, out_y, out_z, C) - // params grad_in: (npoints, C), return value - - int box_idx = blockIdx.z; - int channel_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(voxel_idx_flat, out_x * out_y * out_z) { - int x_idx = voxel_idx_flat / (out_y * out_z); - int y_idx = (voxel_idx_flat - x_idx * (out_y * out_z)) / out_z; - int z_idx = voxel_idx_flat % out_z; - if (box_idx >= boxes_num || channel_idx >= channels) return; - - int offset_base = x_idx * out_y * out_z + y_idx * out_z + z_idx; - argmax += box_idx * out_x * out_y * out_z * channels + - offset_base * channels + channel_idx; - grad_out += box_idx * out_x * out_y * out_z * channels + - offset_base * channels + channel_idx; - - if (argmax[0] == -1) return; - - atomicAdd(grad_in + argmax[0] * channels + channel_idx, grad_out[0] * 1); - } -} - -template -__global__ void roiaware_avgpool3d_backward(int boxes_num, int channels, - int out_x, int out_y, int out_z, - int max_pts_each_voxel, - const int *pts_idx_of_voxels, - const T *grad_out, T *grad_in) { - // params pts_idx_of_voxels: (N, out_x, out_y, out_z, max_pts_each_voxel) - // params grad_out: (N, out_x, out_y, out_z, C) - // params grad_in: (npoints, C), return value - - int box_idx = blockIdx.z; - int channel_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(voxel_idx_flat, out_x * out_y * out_z) { - int x_idx = voxel_idx_flat / (out_y * out_z); - int y_idx = (voxel_idx_flat - x_idx * (out_y * out_z)) / out_z; - int z_idx = voxel_idx_flat % out_z; - if (box_idx >= boxes_num || channel_idx >= channels) return; - - int offset_base = x_idx * out_y * out_z + y_idx * out_z + z_idx; - pts_idx_of_voxels += box_idx * out_x * out_y * out_z * max_pts_each_voxel + - offset_base * max_pts_each_voxel; - grad_out += box_idx * out_x * out_y * out_z * channels + - offset_base * channels + channel_idx; - - int total_pts = pts_idx_of_voxels[0]; - float cur_grad = 1 / fmaxf(float(total_pts), 1.0); - for (int k = 1; k <= total_pts; k++) { - atomicAdd(grad_in + pts_idx_of_voxels[k] * channels + channel_idx, - grad_out[0] * cur_grad); - } - } -} - -#endif // ROIAWARE_POOL3D_CUDA_KERNEL_CUH diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roipoint_pool3d_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roipoint_pool3d_cuda_kernel.cuh deleted file mode 100644 index 545f6ffa..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/roipoint_pool3d_cuda_kernel.cuh +++ /dev/null @@ -1,134 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ROIPOINT_POOL3D_CUDA_KERNEL_CUH -#define ROIPOINT_POOL3D_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__device__ inline void lidar_to_local_coords(T shift_x, T shift_y, T rz, - T &local_x, T &local_y) { - T cosa = cos(-rz), sina = sin(-rz); - local_x = shift_x * cosa + shift_y * (-sina); - local_y = shift_x * sina + shift_y * cosa; -} - -template -__device__ inline int check_pt_in_box3d(const T *pt, const T *box3d, T &local_x, - T &local_y) { - // param pt: (x, y, z) - // param box3d: (cx, cy, cz, dx, dy, dz, rz) in LiDAR coordinate, cz in the - // bottom center - T x = pt[0], y = pt[1], z = pt[2]; - T cx = box3d[0], cy = box3d[1], cz = box3d[2]; - T dx = box3d[3], dy = box3d[4], dz = box3d[5], rz = box3d[6]; - cz += dz / 2.0; // shift to the center since cz in box3d is the bottom center - - if (fabsf(z - cz) > dz / 2.0) return 0; - lidar_to_local_coords(x - cx, y - cy, rz, local_x, local_y); - T in_flag = (local_x > -dx / 2.0) & (local_x < dx / 2.0) & - (local_y > -dy / 2.0) & (local_y < dy / 2.0); - return in_flag; -} - -template -__global__ void assign_pts_to_box3d(int batch_size, int pts_num, int boxes_num, - const T *xyz, const T *boxes3d, - int *pts_assign) { - // params xyz: (B, N, 3) - // params boxes3d: (B, M, 7) - // params pts_assign: (B, N, M): idx of the corresponding box3d, -1 means - // background points - int box_idx = blockIdx.y; - int bs_idx = blockIdx.z; - CUDA_1D_KERNEL_LOOP(pt_idx, pts_num) { - if (box_idx >= boxes_num || bs_idx >= batch_size) return; - - int assign_idx = - bs_idx * pts_num * boxes_num + pt_idx * boxes_num + box_idx; - pts_assign[assign_idx] = 0; - - int box_offset = bs_idx * boxes_num * 7 + box_idx * 7; - int pt_offset = bs_idx * pts_num * 3 + pt_idx * 3; - - T local_x = 0, local_y = 0; - int cur_in_flag = check_pt_in_box3d(xyz + pt_offset, boxes3d + box_offset, - local_x, local_y); - pts_assign[assign_idx] = cur_in_flag; - } -} - -__global__ void get_pooled_idx(int batch_size, int pts_num, int boxes_num, - int sampled_pts_num, const int *pts_assign, - int *pts_idx, int *pooled_empty_flag) { - // params xyz: (B, N, 3) - // params pts_feature: (B, N, C) - // params pts_assign: (B, N) - // params pts_idx: (B, M, 512) - // params pooled_empty_flag: (B, M) - CUDA_1D_KERNEL_LOOP(boxes_idx, boxes_num) { - int bs_idx = blockIdx.y; - - int cnt = 0; - for (int k = 0; k < pts_num; k++) { - if (pts_assign[bs_idx * pts_num * boxes_num + k * boxes_num + - boxes_idx]) { - if (cnt < sampled_pts_num) { - pts_idx[bs_idx * boxes_num * sampled_pts_num + - boxes_idx * sampled_pts_num + cnt] = k; - cnt++; - } else - break; - } - } - - if (cnt == 0) { - pooled_empty_flag[bs_idx * boxes_num + boxes_idx] = 1; - } else if (cnt < sampled_pts_num) { - // duplicate same points for sampling - for (int k = cnt; k < sampled_pts_num; k++) { - int duplicate_idx = k % cnt; - int base_offset = - bs_idx * boxes_num * sampled_pts_num + boxes_idx * sampled_pts_num; - pts_idx[base_offset + k] = pts_idx[base_offset + duplicate_idx]; - 
} - } - } -} - -template -__global__ void roipoint_pool3d_forward( - int batch_size, int pts_num, int boxes_num, int feature_in_len, - int sampled_pts_num, const T *xyz, const int *pts_idx, const T *pts_feature, - T *pooled_features, int *pooled_empty_flag) { - // params xyz: (B, N, 3) - // params pts_idx: (B, M, 512) - // params pts_feature: (B, N, C) - // params pooled_features: (B, M, 512, 3+C) - // params pooled_empty_flag: (B, M) - int box_idx = blockIdx.y; - int bs_idx = blockIdx.z; - CUDA_1D_KERNEL_LOOP(sample_pt_idx, sampled_pts_num) { - if (box_idx >= boxes_num || bs_idx >= batch_size) return; - if (pooled_empty_flag[bs_idx * boxes_num + box_idx]) return; - - int temp_idx = bs_idx * boxes_num * sampled_pts_num + - box_idx * sampled_pts_num + sample_pt_idx; - int src_pt_idx = pts_idx[temp_idx]; - int dst_feature_offset = temp_idx * (3 + feature_in_len); - - for (int j = 0; j < 3; j++) - pooled_features[dst_feature_offset + j] = - xyz[bs_idx * pts_num * 3 + src_pt_idx * 3 + j]; - - int src_feature_offset = - bs_idx * pts_num * feature_in_len + src_pt_idx * feature_in_len; - memcpy(pooled_features + dst_feature_offset + 3, - pts_feature + src_feature_offset, feature_in_len * sizeof(T)); - } -} - -#endif // ROIPOINT_POOL3D_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/rotated_feature_align_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/rotated_feature_align_cuda_kernel.cuh deleted file mode 100644 index ffcc658c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/rotated_feature_align_cuda_kernel.cuh +++ /dev/null @@ -1,129 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -// Modified from -// https://github.com/SJTU-Thinklab-Det/r3det-on-mmdetection/blob/master/mmdet/ops/fr/src/feature_refine_kernel.cu -#ifndef ROTATED_FEATURE_ALIGN_CUDA_KERNEL_CUH -#define ROTATED_FEATURE_ALIGN_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void rotated_feature_align_forward_kernel( - const int nthreads, const int points, const scalar_t* bottom_data, - const scalar_t* best_bboxes, const scalar_t spatial_scale, - const int channels, const int height, const int width, scalar_t* top_data) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int w = index % width; - int h = (index / width) % height; - int c = (index / width / height) % channels; - int n = index / width / height / channels; - - const scalar_t* bbox_offset = - best_bboxes + ((n * height + h) * width + w) * 5; - scalar_t roi_y = bbox_offset[0] * spatial_scale; - scalar_t roi_x = bbox_offset[1] * spatial_scale; - - scalar_t px[5] = {roi_x, 0, 0, 0, 0}; - scalar_t py[5] = {roi_y, 0, 0, 0, 0}; - - if (points > 1) { - scalar_t roi_w = bbox_offset[2] * spatial_scale; - scalar_t roi_h = bbox_offset[3] * spatial_scale; - scalar_t roi_a = bbox_offset[4]; - - scalar_t w_2 = roi_w / 2, h_2 = roi_h / 2; - scalar_t cosa = cosf(roi_a), sina = sinf(roi_a); - scalar_t wx = cosa * w_2, wy = sina * w_2; - scalar_t hx = -sina * h_2, hy = cosa * h_2; - - px[1] = roi_x + wx + hx; - py[1] = roi_y + wy + hy; - px[2] = roi_x - wx + hx; - py[2] = roi_y - wy + hy; - px[3] = roi_x - wx - hx; - py[3] = roi_y - wy - hy; - px[4] = roi_x + wx - hx; - py[4] = roi_y + wy - hy; - } - - const scalar_t* offset_bottom_data = - bottom_data + (n * channels + c) * height * width; - - scalar_t output_val = bottom_data[index]; - for 
(int i = 0; i < points; i++) { - output_val += bilinear_interpolate(offset_bottom_data, height, - width, py[i], px[i], i); - } - top_data[index] = output_val; - } -} - -template -__global__ void rotated_feature_align_backward_kernel( - const int nthreads, const int points, const scalar_t* top_diff, - const scalar_t* best_bboxes, const scalar_t spatial_scale, - const int channels, const int height, const int width, - scalar_t* bottom_diff) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int w = index % width; - int h = (index / width) % height; - int c = (index / width / height) % channels; - int n = index / width / height / channels; - - const scalar_t* bbox_offset = - best_bboxes + ((n * height + h) * width + w) * 5; - scalar_t roi_y = bbox_offset[0] * spatial_scale; - scalar_t roi_x = bbox_offset[1] * spatial_scale; - - scalar_t px[5] = {roi_x, 0, 0, 0, 0}; - scalar_t py[5] = {roi_y, 0, 0, 0, 0}; - - if (points > 1) { - scalar_t roi_w = bbox_offset[2] * spatial_scale; - scalar_t roi_h = bbox_offset[3] * spatial_scale; - scalar_t roi_a = bbox_offset[4]; - - scalar_t w_2 = roi_w / 2, h_2 = roi_h / 2; - scalar_t cosa = cosf(roi_a), sina = sinf(roi_a); - scalar_t wx = cosa * w_2, wy = sina * w_2; - scalar_t hx = -sina * h_2, hy = cosa * h_2; - - px[1] = roi_x + wx + hx; - py[1] = roi_y + wy + hy; - px[2] = roi_x - wx + hx; - py[2] = roi_y - wy + hy; - px[3] = roi_x - wx - hx; - py[3] = roi_y - wy - hy; - px[4] = roi_x + wx - hx; - py[4] = roi_y + wy - hy; - } - - scalar_t* offset_bottom_diff = - bottom_diff + (n * channels + c) * height * width; - scalar_t value_top_diff = top_diff[index]; - - atomicAdd(bottom_diff + index, value_top_diff); - for (int i = 0; i < points; i++) { - scalar_t w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - - bilinear_interpolate_gradient(height, width, py[i], px[i], w1, - w2, w3, w4, x_low, x_high, y_low, - y_high, i); - scalar_t g1 = value_top_diff * w1; - scalar_t g2 = value_top_diff * w2; - scalar_t g3 = value_top_diff * w3; - scalar_t g4 = value_top_diff * w4; - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - atomicAdd(offset_bottom_diff + y_low * width + x_low, g1); - atomicAdd(offset_bottom_diff + y_low * width + x_high, g2); - atomicAdd(offset_bottom_diff + y_high * width + x_low, g3); - atomicAdd(offset_bottom_diff + y_high * width + x_high, g4); - } - } - } -} -#endif // ROTATED_FEATURE_ALIGN_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/scatter_points_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/scatter_points_cuda_kernel.cuh deleted file mode 100644 index 9138d1fd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/scatter_points_cuda_kernel.cuh +++ /dev/null @@ -1,187 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef SCATTER_POINTS_CUDA_KERNEL_CUH -#define SCATTER_POINTS_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -typedef enum { SUM = 0, MEAN = 1, MAX = 2 } reduce_t; -int const maxGridDim = 50000; - -__device__ __forceinline__ static void reduceMax(float *address, float val) { - int *address_as_i = reinterpret_cast(address); - int old = *address_as_i, assumed; - do { - assumed = old; - old = atomicCAS(address_as_i, assumed, - __float_as_int(fmaxf(val, __int_as_float(assumed)))); - } while (assumed != old || __int_as_float(old) < val); -} - -__device__ __forceinline__ static void reduceMax(double *address, double val) { - unsigned long long *address_as_ull = - reinterpret_cast(address); - unsigned long long old = *address_as_ull, assumed; - do { - assumed = old; - old = atomicCAS( - address_as_ull, assumed, - __double_as_longlong(fmax(val, __longlong_as_double(assumed)))); - } while (assumed != old || __longlong_as_double(old) < val); -} - -// get rid of meaningless warnings when compiling host code -#if defined(HIP_DIFF) || defined(__ILUVATAR__) -__device__ __forceinline__ static void reduceAdd(float *address, float val) { - atomicAdd(address, val); -} -__device__ __forceinline__ static void reduceAdd(double *address, double val) { - atomicAdd(address, val); -} -#else -#ifdef __CUDA_ARCH__ -__device__ __forceinline__ static void reduceAdd(float *address, float val) { -#if (__CUDA_ARCH__ < 200) -#ifdef _MSC_VER -#pragma message( \ - "compute capability lower than 2.x. fall back to use CAS version of atomicAdd for float32") -#else -#warning \ - "compute capability lower than 2.x. fall back to use CAS version of atomicAdd for float32" -#endif - int *address_as_i = reinterpret_cast(address); - int old = *address_as_i, assumed; - do { - assumed = old; - old = atomicCAS(address_as_i, assumed, - __float_as_int(val + __int_as_float(assumed))); - } while (assumed != old); -#else - atomicAdd(address, val); -#endif -} - -__device__ __forceinline__ static void reduceAdd(double *address, double val) { -#if (__CUDA_ARCH__ < 600) -#ifdef _MSC_VER -#pragma message( \ - "compute capability lower than 6.x. fall back to use CAS version of atomicAdd for float64") -#else -#warning \ - "compute capability lower than 6.x. 
fall back to use CAS version of atomicAdd for float64" -#endif - unsigned long long *address_as_ull = - reinterpret_cast(address); - unsigned long long old = *address_as_ull, assumed; - do { - assumed = old; - old = atomicCAS(address_as_ull, assumed, - __double_as_longlong(val + __longlong_as_double(assumed))); - } while (assumed != old); -#else - atomicAdd(address, val); -#endif -} -#endif // __CUDA_ARCH__ -#endif // HIP_DIFF || __ILUVATAR__ - -template -__global__ void feats_reduce_kernel( - const T *feats, const int32_t *coors_map, - T *reduced_feats, // shall be 0 at initialization - const int num_input, const int num_feats, const reduce_t reduce_type) { - CUDA_1D_KERNEL_LOOP(x, num_input) { - int32_t reduce_to = coors_map[x]; - if (reduce_to == -1) continue; - - const T *feats_offset = feats + x * num_feats; - T *reduced_feats_offset = reduced_feats + reduce_to * num_feats; - if (reduce_type == reduce_t::MAX) { - for (int i = 0; i < num_feats; i++) { - reduceMax(&reduced_feats_offset[i], feats_offset[i]); - } - } else { - for (int i = 0; i < num_feats; i++) { - reduceAdd(&reduced_feats_offset[i], feats_offset[i]); - } - } - } -} - -template -__global__ void add_reduce_traceback_grad_kernel( - T *grad_feats, const T *grad_reduced_feats, const int32_t *coors_map, - const int32_t *reduce_count, const int num_input, const int num_feats, - const reduce_t reduce_type) { - CUDA_1D_KERNEL_LOOP(x, num_input) { - int32_t reduce_to = coors_map[x]; - if (reduce_to == -1) { - continue; - } - - const int input_offset = x * num_feats; - T *grad_feats_offset = grad_feats + input_offset; - const int reduced_offset = reduce_to * num_feats; - const T *grad_reduced_feats_offset = grad_reduced_feats + reduced_offset; - - if (reduce_type == reduce_t::SUM) { - for (int i = 0; i < num_feats; i++) { - grad_feats_offset[i] = grad_reduced_feats_offset[i]; - } - } else if (reduce_type == reduce_t::MEAN) { - for (int i = 0; i < num_feats; i++) { - grad_feats_offset[i] = grad_reduced_feats_offset[i] / - static_cast(reduce_count[reduce_to]); - } - } - } -} - -template -__global__ void max_reduce_traceback_scatter_idx_kernel( - const T *feats, const T *reduced_feats, int32_t *reduce_from, - const int32_t *coors_map, const int num_input, const int num_feats) { - CUDA_1D_KERNEL_LOOP(x, num_input) { - int32_t reduce_to = coors_map[x]; - - const int input_offset = x * num_feats; - const T *feats_offset = feats + input_offset; - - if (reduce_to == -1) { - continue; - } - - const int reduced_offset = reduce_to * num_feats; - const T *reduced_feats_offset = reduced_feats + reduced_offset; - int32_t *reduce_from_offset = reduce_from + reduced_offset; - - for (int i = 0; i < num_feats; i++) { - if (feats_offset[i] == reduced_feats_offset[i]) { - atomicMin(&reduce_from_offset[i], static_cast(x)); - } - } - } -} - -template -__global__ void max_reduce_scatter_grad_kernel(T *grad_feats, - const T *grad_reduced_feats, - const int32_t *reduce_from, - const int num_reduced, - const int num_feats) { - CUDA_1D_KERNEL_LOOP(x, num_reduced) { - const int reduced_offset = x * num_feats; - const int32_t *scatter_to_offset = reduce_from + reduced_offset; - const T *grad_reduced_feats_offset = grad_reduced_feats + reduced_offset; - - for (int i = 0; i < num_feats; i++) { - grad_feats[scatter_to_offset[i] * num_feats + i] = - grad_reduced_feats_offset[i]; - } - } -} - -#endif // SCATTER_POINTS_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/sigmoid_focal_loss_cuda_kernel.cuh 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/sigmoid_focal_loss_cuda_kernel.cuh deleted file mode 100644 index 1eb5f8fc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/sigmoid_focal_loss_cuda_kernel.cuh +++ /dev/null @@ -1,71 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef SIGMOID_FOCAL_LOSS_CUDA_KERNEL_CUH -#define SIGMOID_FOCAL_LOSS_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void sigmoid_focal_loss_forward_cuda_kernel( - const int nthreads, const T* input, const int64_t* target, const T* weight, - T* output, const T gamma, const T alpha, const int num_classes) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int n = index / num_classes; - int c = index % num_classes; - - int64_t t = target[n]; - T flag_p = (t == c); - T flag_n = (t != c); - - // p = sigmoid(x) = 1. / 1. + expf(-x) - T p = (T)1. / ((T)1. + expf(-input[index])); - - // (1 - p)**gamma * log(p) - T term_p = pow(((T)1. - p), gamma) * log(max(p, (T)FLT_MIN)); - // p**gamma * log(1 - p) - T term_n = pow(p, gamma) * log(max((T)1. - p, (T)FLT_MIN)); - - output[index] = (T)0.; - output[index] += -flag_p * alpha * term_p; - output[index] += -flag_n * ((T)1. - alpha) * term_n; - if (weight != NULL) { - output[index] *= weight[t]; - } - } -} - -template -__global__ void sigmoid_focal_loss_backward_cuda_kernel( - const int nthreads, const T* input, const int64_t* target, const T* weight, - T* grad_input, const T gamma, const T alpha, const int num_classes) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int n = index / num_classes; - int c = index % num_classes; - - int64_t t = target[n]; - T flag_p = (t == c); - T flag_n = (t != c); - - // p = sigmoid(x) = 1. / 1. + expf(-x) - T p = (T)1. / ((T)1. + exp(-input[index])); - - // (1 - p)**gamma * (1 - p - gamma*p*log(p)) - T term_p = pow(((T)1. - p), gamma) * - ((T)1. - p - (gamma * p * log(max(p, (T)FLT_MIN)))); - // p**gamma * (gamma * (1 - p) * log(1 - p) - p) - T term_n = pow(p, gamma) * - (gamma * ((T)1. - p) * log(max((T)1. - p, (T)FLT_MIN)) - p); - - grad_input[index] = (T)0.; - grad_input[index] += -flag_p * alpha * term_p; - grad_input[index] += -flag_n * ((T)1. - alpha) * term_n; - if (weight != NULL) { - grad_input[index] *= weight[t]; - } - } -} - -#endif // SIGMOID_FOCAL_LOSS_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/softmax_focal_loss_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/softmax_focal_loss_cuda_kernel.cuh deleted file mode 100644 index 631b2c61..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/softmax_focal_loss_cuda_kernel.cuh +++ /dev/null @@ -1,72 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef SOFTMAX_FOCAL_LOSS_CUDA_KERNEL_CUH -#define SOFTMAX_FOCAL_LOSS_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void softmax_focal_loss_forward_cuda_kernel( - const int nthreads, const T* softmax, const int64_t* target, - const T* weight, T* output, const T gamma, const T alpha, - const int num_classes) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int64_t label = target[index]; - T pred = softmax[index * num_classes + label]; - - if (label >= 0) { - output[index] = - -alpha * pow((T)1. 
- pred, gamma) * log(max(pred, (T)FLT_MIN)); - } else { - output[index] = 0; - } - if (weight != NULL) { - output[index] *= weight[label]; - } - } -} - -template -__global__ void softmax_focal_loss_backward_cuda1_kernel( - const int nthreads, const T* softmax, const int64_t* target, - const T* weight, T* buff, const T gamma, const T alpha, - const int num_classes) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int64_t label = target[index]; - T pred = softmax[index * num_classes + label]; - - if (label >= 0) { - buff[index] = alpha * (-pow((T)1. - pred, gamma) + - gamma * pow((T)1. - pred, gamma - 1) * pred * - log(max(pred, (T)FLT_MIN))); - } else { - buff[index] = 0; - } - if (weight != NULL) { - buff[index] *= weight[label]; - } - } -} - -template -__global__ void softmax_focal_loss_backward_cuda2_kernel( - const int nthreads, const T* softmax, const int64_t* target, const T* buff, - T* grad_input, const int num_classes) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int n = index / num_classes; - int c = index % num_classes; - int64_t label = target[n]; - - if (label >= 0) { - T flag = (label == c ? (T)1. : (T)0.); - grad_input[index] = buff[n] * (flag - softmax[index]); - } else { - grad_input[index] = 0; - } - } -} - -#endif // SOFTMAX_FOCAL_LOSS_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/sync_bn_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/sync_bn_cuda_kernel.cuh deleted file mode 100644 index 4ec6a466..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/sync_bn_cuda_kernel.cuh +++ /dev/null @@ -1,331 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef SYNCBN_CUDA_KERNEL_CUH -#define SYNCBN_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void sync_bn_forward_mean_cuda_kernel(const T *input, float *mean, - int num, int channels, - int spatial) { - __shared__ float buffer[THREADS_PER_BLOCK]; - int tid = threadIdx.x; - int c = blockIdx.x; - buffer[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - buffer[tid] += input[index]; - } - __syncthreads(); - - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer[tid] += buffer[tid + s]; - } - __syncthreads(); - } - int total = num * spatial; - if (tid == 0) { - mean[c] = buffer[0] / total; - } -} - -template <> -__global__ void sync_bn_forward_mean_cuda_kernel(const phalf *input, - float *mean, int num, - int channels, int spatial) { - __shared__ float buffer[THREADS_PER_BLOCK]; - int tid = threadIdx.x; - int c = blockIdx.x; - buffer[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - buffer[tid] += static_cast(input[index]); - } - __syncthreads(); - - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer[tid] += buffer[tid + s]; - } - __syncthreads(); - } - int total = num * spatial; - if (tid == 0) { - mean[c] = buffer[0] / total; - } -} - -template -__global__ void sync_bn_forward_var_cuda_kernel(const T *input, - const float *mean, float *var, - int num, int channels, - int spatial) { - __shared__ float buffer[THREADS_PER_BLOCK]; - int tid = threadIdx.x; - int c = blockIdx.x; - buffer[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int 
index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - float td = input[index] - mean[c]; - buffer[tid] += td * td; - } - __syncthreads(); - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer[tid] += buffer[tid + s]; - } - __syncthreads(); - } - int total = num * spatial; - if (tid == 0) { - var[c] = buffer[0] / total; - } -} - -template <> -__global__ void sync_bn_forward_var_cuda_kernel(const phalf *input, - const float *mean, float *var, - int num, int channels, - int spatial) { - __shared__ float buffer[THREADS_PER_BLOCK]; - int tid = threadIdx.x; - int c = blockIdx.x; - buffer[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - float td = static_cast(input[index]) - mean[c]; - buffer[tid] += td * td; - } - __syncthreads(); - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer[tid] += buffer[tid + s]; - } - __syncthreads(); - } - int total = num * spatial; - if (tid == 0) { - var[c] = buffer[0] / total; - } -} - -template -__global__ void sync_bn_forward_output_cuda_kernel( - const T *input, const float *mean, const float *var, float *running_mean, - float *running_var, const float *weight, const float *bias, float *norm, - float *std, T *output, int num, int channels, int spatial, float eps, - float momentum, int group_size) { - int tid = threadIdx.x; - int c = blockIdx.x; - float mean_value = mean[c]; - float std_value = sqrt(var[c] + eps); - - if (weight != nullptr) { - float weight_value = weight[c]; - float bias_value = bias[c]; - if (norm != nullptr) { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - norm[index] = (input[index] - mean_value) / std_value; - output[index] = norm[index] * weight_value + bias_value; - } - } else { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - output[index] = - (input[index] - mean_value) / std_value * weight_value + bias_value; - } - } - } else { - if (norm != nullptr) { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - output[index] = norm[index] = (input[index] - mean_value) / std_value; - } - } else { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - output[index] = (input[index] - mean_value) / std_value; - } - } - } - if (tid == 0) { - if (std != nullptr) std[c] = std_value; - if (running_mean != nullptr) { - running_mean[c] = - momentum * mean_value + (1 - momentum) * running_mean[c]; - int count = num * spatial * group_size; - float var_unbias = count > 1 ? 
var[c] * count / (count - 1) : var[c]; - running_var[c] = momentum * var_unbias + (1 - momentum) * running_var[c]; - } - } -} - -template <> -__global__ void sync_bn_forward_output_cuda_kernel( - const phalf *input, const float *mean, const float *var, - float *running_mean, float *running_var, const float *weight, - const float *bias, float *norm, float *std, phalf *output, int num, - int channels, int spatial, float eps, float momentum, int group_size) { - int tid = threadIdx.x; - int c = blockIdx.x; - float mean_value = mean[c]; - float std_value = sqrt(var[c] + eps); - if (weight != nullptr) { - float weight_value = weight[c]; - float bias_value = bias[c]; - if (norm != nullptr) { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - norm[index] = - (static_cast(input[index]) - mean_value) / std_value; - output[index] = - static_cast(norm[index] * weight_value + bias_value); - } - } else { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - output[index] = - static_cast((static_cast(input[index]) - mean_value) / - std_value * weight_value + - bias_value); - } - } - } else { - if (norm != nullptr) { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - norm[index] = - (static_cast(input[index]) - mean_value) / std_value; - output[index] = static_cast(norm[index]); - } - } else { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - output[index] = static_cast( - (static_cast(input[index]) - mean_value) / std_value); - } - } - } - if (tid == 0) { - if (std != nullptr) std[c] = std_value; - if (running_mean != nullptr) { - running_mean[c] = - momentum * mean_value + (1 - momentum) * running_mean[c]; - int count = num * spatial * group_size; - float var_unbias = count > 1 ? 
var[c] * count / (count - 1) : var[c]; - running_var[c] = momentum * var_unbias + (1 - momentum) * running_var[c]; - } - } -} - -template -__global__ void sync_bn_backward_param_cuda_kernel(const T *grad_output, - const float *norm, - float *grad_weight, - float *grad_bias, int num, - int channels, int spatial) { - __shared__ float buffer1[THREADS_PER_BLOCK]; - __shared__ float buffer2[THREADS_PER_BLOCK]; - - int tid = threadIdx.x; - int c = blockIdx.x; - buffer1[tid] = buffer2[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - buffer1[tid] += grad_output[index] * norm[index]; - buffer2[tid] += grad_output[index]; - } - __syncthreads(); - - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer1[tid] += buffer1[tid + s]; - buffer2[tid] += buffer2[tid + s]; - } - __syncthreads(); - } - if (tid == 0) { - grad_weight[c] = buffer1[0]; - grad_bias[c] = buffer2[0]; - } -} - -template <> -__global__ void sync_bn_backward_param_cuda_kernel(const phalf *grad_output, - const float *norm, - float *grad_weight, - float *grad_bias, int num, - int channels, int spatial) { - __shared__ float buffer1[THREADS_PER_BLOCK]; - __shared__ float buffer2[THREADS_PER_BLOCK]; - - int tid = threadIdx.x; - int c = blockIdx.x; - buffer1[tid] = buffer2[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - buffer1[tid] += static_cast(grad_output[index]) * norm[index]; - buffer2[tid] += static_cast(grad_output[index]); - } - __syncthreads(); - - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer1[tid] += buffer1[tid + s]; - buffer2[tid] += buffer2[tid + s]; - } - __syncthreads(); - } - if (tid == 0) { - grad_weight[c] = buffer1[0]; - grad_bias[c] = buffer2[0]; - } -} - -template -__global__ void sync_bn_backward_data_cuda_kernel( - int output_size, const T *grad_output, const float *weight, - const float *grad_weight, const float *grad_bias, const float *norm, - const float *std, T *grad_input, int num, int channels, int spatial) { - int factor = num * spatial; - CUDA_1D_KERNEL_LOOP(index, output_size) { - int c = (index / spatial) % channels; - grad_input[index] = - weight[c] * - (grad_output[index] - - (grad_weight[c] * norm[index] + grad_bias[c]) / factor) / - std[c]; - } -} - -template <> -__global__ void sync_bn_backward_data_cuda_kernel( - int output_size, const phalf *grad_output, const float *weight, - const float *grad_weight, const float *grad_bias, const float *norm, - const float *std, phalf *grad_input, int num, int channels, int spatial) { - int factor = num * spatial; - CUDA_1D_KERNEL_LOOP(index, output_size) { - int c = (index / spatial) % channels; - grad_input[index] = static_cast( - weight[c] * - (static_cast(grad_output[index]) - - (grad_weight[c] * norm[index] + grad_bias[c]) / factor) / - std[c]); - } -} - -#endif // SYNCBN_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/three_interpolate_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/three_interpolate_cuda_kernel.cuh deleted file mode 100644 index 971b496e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/three_interpolate_cuda_kernel.cuh +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef THREE_INTERPOLATE_CUDA_KERNEL_CUH -#define THREE_INTERPOLATE_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void three_interpolate_forward_cuda_kernel( - int b, int c, int m, int n, const T *points, const int *__restrict__ idx, - const T *weight, T *out) { - // points: (B, C, M) - // idx: (B, N, 3) - // weight: (B, N, 3) - // output: - // out: (B, C, N) - - int bs_idx = blockIdx.z; - int c_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(pt_idx, n) { - if (bs_idx >= b || c_idx >= c) return; - - weight += bs_idx * n * 3 + pt_idx * 3; - points += bs_idx * c * m + c_idx * m; - idx += bs_idx * n * 3 + pt_idx * 3; - out += bs_idx * c * n + c_idx * n; - - out[pt_idx] = weight[0] * points[idx[0]] + weight[1] * points[idx[1]] + - weight[2] * points[idx[2]]; - } -} - -template -__global__ void three_interpolate_backward_cuda_kernel( - int b, int c, int n, int m, const T *grad_out, const int *__restrict__ idx, - const T *weight, T *grad_points) { - // grad_out: (B, C, N) - // weight: (B, N, 3) - // output: - // grad_points: (B, C, M) - - int bs_idx = blockIdx.z; - int c_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(pt_idx, n) { - if (bs_idx >= b || c_idx >= c) return; - - grad_out += bs_idx * c * n + c_idx * n + pt_idx; - weight += bs_idx * n * 3 + pt_idx * 3; - grad_points += bs_idx * c * m + c_idx * m; - idx += bs_idx * n * 3 + pt_idx * 3; - - atomicAdd(grad_points + idx[0], grad_out[0] * weight[0]); - atomicAdd(grad_points + idx[1], grad_out[0] * weight[1]); - atomicAdd(grad_points + idx[2], grad_out[0] * weight[2]); - } -} - -#endif // THREE_INTERPOLATE_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/three_nn_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/three_nn_cuda_kernel.cuh deleted file mode 100644 index 67d68341..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/three_nn_cuda_kernel.cuh +++ /dev/null @@ -1,67 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef THREE_NN_CUDA_KERNEL_CUH -#define THREE_NN_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void three_nn_forward_cuda_kernel(int b, int n, int m, - const T *unknown, const T *known, - T *dist2, int *__restrict__ idx) { - // unknown: (B, N, 3) - // known: (B, M, 3) - // output: - // dist2: (B, N, 3) - // idx: (B, N, 3) - - int bs_idx = blockIdx.y; - CUDA_1D_KERNEL_LOOP(pt_idx, n) { - if (bs_idx >= b) return; - - unknown += bs_idx * n * 3 + pt_idx * 3; - known += bs_idx * m * 3; - dist2 += bs_idx * n * 3 + pt_idx * 3; - idx += bs_idx * n * 3 + pt_idx * 3; - - T ux = unknown[0]; - T uy = unknown[1]; - T uz = unknown[2]; - - float best1 = 1e40, best2 = 1e40, best3 = 1e40; - int besti1 = 0, besti2 = 0, besti3 = 0; - for (int k = 0; k < m; ++k) { - T x = known[k * 3 + 0]; - T y = known[k * 3 + 1]; - T z = known[k * 3 + 2]; - T d = (ux - x) * (ux - x) + (uy - y) * (uy - y) + (uz - z) * (uz - z); - if (d < best1) { - best3 = best2; - besti3 = besti2; - best2 = best1; - besti2 = besti1; - best1 = d; - besti1 = k; - } else if (d < best2) { - best3 = best2; - besti3 = besti2; - best2 = d; - besti2 = k; - } else if (d < best3) { - best3 = d; - besti3 = k; - } - } - dist2[0] = best1; - dist2[1] = best2; - dist2[2] = best3; - idx[0] = besti1; - idx[1] = besti2; - idx[2] = besti3; - } -} - -#endif // THREE_NN_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/tin_shift_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/tin_shift_cuda_kernel.cuh deleted file mode 100644 index 4d1159a5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/tin_shift_cuda_kernel.cuh +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef TIN_SHIFT_CUDA_KERNEL_CUH -#define TIN_SHIFT_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void tin_shift_forward_cuda_kernel( - const int nthreads, const T* input, const int* shift, T* output, - const int batch_size, const int channels, const int t_size, - const int hw_size, const int group_size, const int group_channel) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - const int hw_index = index % hw_size; - const int j = (index / hw_size) % channels; - - const int n_index = (index / hw_size / channels) % batch_size; - int group_id = j / group_channel; - int t_shift = shift[n_index * group_size + group_id]; - int offset = n_index * t_size * hw_size * channels + hw_size * j + hw_index; - for (int i = 0; i < t_size; i++) { - int now_t = i + t_shift; - int data_id = i * hw_size * channels + offset; - if (now_t < 0 || now_t >= t_size) { - continue; - } - int out_id = now_t * hw_size * channels + offset; - output[out_id] = input[data_id]; - } - } -} - -template -__global__ void tin_shift_backward_cuda_kernel( - const int nthreads, const T* input, const int* shift, T* output, - const int batch_size, const int channels, const int t_size, - const int hw_size, const int group_size, const int group_channel) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - const int hw_index = index % hw_size; - const int j = (index / hw_size) % channels; - - const int n_index = (index / hw_size / channels) % batch_size; - int group_id = j / group_channel; - int t_shift = shift[n_index * group_size + group_id]; - int offset = n_index * t_size * hw_size * channels + hw_size * j + hw_index; - for (int i = 0; i < t_size; i++) { - int now_t = i + t_shift; - int data_id = i * hw_size * channels + offset; - if (now_t < 0 || now_t >= t_size) { - continue; - } - int out_id = now_t * hw_size * channels + offset; - output[out_id] = input[data_id]; - } - } -} - -#endif // TIN_SHIFT_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/voxelization_cuda_kernel.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/voxelization_cuda_kernel.cuh deleted file mode 100644 index 021b488d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/cuda/voxelization_cuda_kernel.cuh +++ /dev/null @@ -1,216 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-#ifndef VOXELIZATION_CUDA_KERNEL_CUH -#define VOXELIZATION_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -typedef enum { SUM = 0, MEAN = 1, MAX = 2 } reduce_t; - -template -__global__ void dynamic_voxelize_kernel( - const T* points, T_int* coors, const float voxel_x, const float voxel_y, - const float voxel_z, const float coors_x_min, const float coors_y_min, - const float coors_z_min, const float coors_x_max, const float coors_y_max, - const float coors_z_max, const int grid_x, const int grid_y, - const int grid_z, const int num_points, const int num_features, - const int NDim) { - // const int index = blockIdx.x * threadsPerBlock + threadIdx.x; - CUDA_1D_KERNEL_LOOP(index, num_points) { - // To save some computation - auto points_offset = points + index * num_features; - auto coors_offset = coors + index * NDim; - int c_x = floorf((points_offset[0] - coors_x_min) / voxel_x); - if (c_x < 0 || c_x >= grid_x) { - coors_offset[0] = -1; - continue; - } - - int c_y = floorf((points_offset[1] - coors_y_min) / voxel_y); - if (c_y < 0 || c_y >= grid_y) { - coors_offset[0] = -1; - coors_offset[1] = -1; - continue; - } - - int c_z = floorf((points_offset[2] - coors_z_min) / voxel_z); - if (c_z < 0 || c_z >= grid_z) { - coors_offset[0] = -1; - coors_offset[1] = -1; - coors_offset[2] = -1; - } else { - coors_offset[0] = c_z; - coors_offset[1] = c_y; - coors_offset[2] = c_x; - } - } -} - -template -__global__ void assign_point_to_voxel(const int nthreads, const T* points, - T_int* point_to_voxelidx, - T_int* coor_to_voxelidx, T* voxels, - const int max_points, - const int num_features, - const int num_points, const int NDim) { - CUDA_1D_KERNEL_LOOP(thread_idx, nthreads) { - // const int index = blockIdx.x * threadsPerBlock + threadIdx.x; - int index = thread_idx / num_features; - - int num = point_to_voxelidx[index]; - int voxelidx = coor_to_voxelidx[index]; - if (num > -1 && voxelidx > -1) { - auto voxels_offset = - voxels + voxelidx * max_points * num_features + num * num_features; - - int k = thread_idx % num_features; - voxels_offset[k] = points[thread_idx]; - } - } -} - -template -__global__ void assign_voxel_coors(const int nthreads, T_int* coor, - T_int* point_to_voxelidx, - T_int* coor_to_voxelidx, T_int* voxel_coors, - const int num_points, const int NDim) { - CUDA_1D_KERNEL_LOOP(thread_idx, nthreads) { - // const int index = blockIdx.x * threadsPerBlock + threadIdx.x; - // if (index >= num_points) return; - int index = thread_idx / NDim; - int num = point_to_voxelidx[index]; - int voxelidx = coor_to_voxelidx[index]; - if (num == 0 && voxelidx > -1) { - auto coors_offset = voxel_coors + voxelidx * NDim; - int k = thread_idx % NDim; - coors_offset[k] = coor[thread_idx]; - } - } -} - -template -__global__ void point_to_voxelidx_kernel(const T_int* coor, - T_int* point_to_voxelidx, - T_int* point_to_pointidx, - const int max_points, - const int max_voxels, - const int num_points, const int NDim) { - CUDA_1D_KERNEL_LOOP(index, num_points) { - auto coor_offset = coor + index * NDim; - // skip invalid points - if (coor_offset[0] == -1) continue; - - int num = 0; - int coor_x = coor_offset[0]; - int coor_y = coor_offset[1]; - int coor_z = coor_offset[2]; - // only calculate the coors before this coor[index] - for (int i = 0; i < index; ++i) { - auto prev_coor = coor + i * NDim; - if (prev_coor[0] == -1) continue; - - // Find all previous points that have the same coors - // if find the same coor, record it - if 
((prev_coor[0] == coor_x) && (prev_coor[1] == coor_y) && - (prev_coor[2] == coor_z)) { - num++; - if (num == 1) { - // point to the same coor that first show up - point_to_pointidx[index] = i; - } else if (num >= max_points) { - // out of boundary - break; - } - } - } - if (num == 0) { - point_to_pointidx[index] = index; - } - if (num < max_points) { - point_to_voxelidx[index] = num; - } - } -} - -template -__global__ void determin_voxel_num( - // const T_int* coor, - T_int* num_points_per_voxel, T_int* point_to_voxelidx, - T_int* point_to_pointidx, T_int* coor_to_voxelidx, T_int* voxel_num, - const int max_points, const int max_voxels, const int num_points) { - // only calculate the coors before this coor[index] - for (int i = 0; i < num_points; ++i) { - int point_pos_in_voxel = point_to_voxelidx[i]; - // record voxel - if (point_pos_in_voxel == -1) { - // out of max_points or invalid point - continue; - } else if (point_pos_in_voxel == 0) { - // record new voxel - int voxelidx = voxel_num[0]; - if (voxel_num[0] >= max_voxels) continue; - voxel_num[0] += 1; - coor_to_voxelidx[i] = voxelidx; - num_points_per_voxel[voxelidx] = 1; - } else { - int point_idx = point_to_pointidx[i]; - int voxelidx = coor_to_voxelidx[point_idx]; - if (voxelidx != -1) { - coor_to_voxelidx[i] = voxelidx; - num_points_per_voxel[voxelidx] += 1; - } - } - } -} - -__global__ void nondeterministic_get_assign_pos( - const int nthreads, const int32_t* coors_map, int32_t* pts_id, - int32_t* coors_count, int32_t* reduce_count, int32_t* coors_order) { - CUDA_1D_KERNEL_LOOP(thread_idx, nthreads) { - int coors_idx = coors_map[thread_idx]; - if (coors_idx > -1) { - int32_t coors_pts_pos = atomicAdd(&reduce_count[coors_idx], 1); - pts_id[thread_idx] = coors_pts_pos; - if (coors_pts_pos == 0) { - coors_order[coors_idx] = atomicAdd(coors_count, 1); - } - } - } -} - -template -__global__ void nondeterministic_assign_point_voxel( - const int nthreads, const T* points, const int32_t* coors_map, - const int32_t* pts_id, const int32_t* coors_in, const int32_t* reduce_count, - const int32_t* coors_order, T* voxels, int32_t* coors, int32_t* pts_count, - const int max_voxels, const int max_points, const int num_features, - const int NDim) { - CUDA_1D_KERNEL_LOOP(thread_idx, nthreads) { - int coors_idx = coors_map[thread_idx]; - int coors_pts_pos = pts_id[thread_idx]; - if (coors_idx > -1 && coors_pts_pos < max_points) { - int coors_pos = coors_order[coors_idx]; - if (coors_pos < max_voxels) { - auto voxels_offset = - voxels + (coors_pos * max_points + coors_pts_pos) * num_features; - auto points_offset = points + thread_idx * num_features; - for (int k = 0; k < num_features; k++) { - voxels_offset[k] = points_offset[k]; - } - if (coors_pts_pos == 0) { - pts_count[coors_pos] = min(reduce_count[coors_idx], max_points); - auto coors_offset = coors + coors_pos * NDim; - auto coors_in_offset = coors_in + coors_idx * NDim; - for (int k = 0; k < NDim; k++) { - coors_offset[k] = coors_in_offset[k]; - } - } - } - } - } -} - -#endif // VOXELIZATION_CUDA_KERNEL_CUH diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/bbox_overlaps_mlu_kernel.mlu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/bbox_overlaps_mlu_kernel.mlu deleted file mode 100644 index 58e695a0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/bbox_overlaps_mlu_kernel.mlu +++ /dev/null @@ -1,322 +0,0 @@ -/************************************************************************* - * 
Copyright (C) 2021 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - *************************************************************************/ -#include - -#include "common_mlu_helper.hpp" - -#define COORD_NUM 4 - -__nram__ char nmem_buf[MAX_NRAM_SIZE]; - -template -__mlu_func__ void computeDiv(void *nram_dst, void *nram_src0, void *nram_src1, - void *nram_addition, const int32_t deal_num) { - __bang_active_reciphp((T *)nram_dst, (T *)nram_src1, deal_num); - __bang_mul((T *)nram_dst, (T *)nram_src0, (T *)nram_dst, deal_num); -} - -template <> -__mlu_func__ void computeDiv(void *nram_dst, void *nram_src0, - void *nram_src1, void *nram_addition, - const int32_t deal_num) { - __bang_half2float((float *)nram_addition, (half *)nram_src1, deal_num); - __bang_active_reciphp((float *)nram_addition, (float *)nram_addition, - deal_num); - __bang_float2half_rd((half *)nram_src1, (float *)nram_addition, deal_num); - __bang_mul((half *)nram_dst, (half *)nram_src0, (half *)nram_src1, deal_num); -} - -template -__mlu_func__ void bboxOverlapsWorkflow( - T *vec_b1_x1, T *vec_b1_y1, T *vec_b1_x2, T *vec_b1_y2, T *vec_b2_x1, - T *vec_b2_y1, T *vec_b2_x2, T *vec_b2_y2, T *vec_left, T *vec_right, - T *vec_top, T *vec_bottom, const T *bbox1, const T *bbox2, void *ious, - const int32_t offset, const int32_t mode, const int32_t batches_stride, - const int32_t num_bbox1, const int32_t num_bbox2, const bool aligned) { - int32_t task_batch_stride = (num_bbox1 + taskDim - 1) / taskDim; - int32_t batch_start = taskId * task_batch_stride; - int32_t batch_per_task = batch_start + task_batch_stride < num_bbox1 - ? task_batch_stride - : num_bbox1 - batch_start; - batch_per_task = batch_per_task > 0 ? batch_per_task : (0); - - if (aligned) { - int32_t num_loop_cpy = batch_per_task / batches_stride; - int32_t num_rem_cpy_batches = batch_per_task % batches_stride; - num_loop_cpy = num_rem_cpy_batches > 0 ? num_loop_cpy + 1 : num_loop_cpy; - for (int32_t i = 0; i < num_loop_cpy; i++) { - int32_t index = batch_start + i * batches_stride; - int32_t handle_batches = index + batches_stride > num_bbox1 - ? 
num_rem_cpy_batches - : batches_stride; - int32_t b1 = index; - int32_t b2 = index; - - int32_t base1 = b1 * COORD_NUM; - __memcpy(vec_b1_x1, &bbox1[base1], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - __memcpy(vec_b1_y1, &bbox1[base1 + 1], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - __memcpy(vec_b1_x2, &bbox1[base1 + 2], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - __memcpy(vec_b1_y2, &bbox1[base1 + 3], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - - int32_t base2 = b2 * COORD_NUM; - __memcpy(vec_b2_x1, &bbox2[base2], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - __memcpy(vec_b2_y1, &bbox2[base2 + 1], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - __memcpy(vec_b2_x2, &bbox2[base2 + 2], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - __memcpy(vec_b2_y2, &bbox2[base2 + 3], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - // get the width and height - __bang_maxequal(vec_left, vec_b1_x1, vec_b2_x1, batches_stride); - __bang_minequal(vec_right, vec_b1_x2, vec_b2_x2, batches_stride); - __bang_maxequal(vec_top, vec_b1_y1, vec_b2_y1, batches_stride); - __bang_minequal(vec_bottom, vec_b1_y2, vec_b2_y2, batches_stride); - - // right - left + offset ---> left - __bang_sub(vec_left, vec_right, vec_left, batches_stride); - __bang_add_const(vec_left, vec_left, (T)offset, batches_stride); - - // bottom - top + offset ---> right - __bang_sub(vec_right, vec_bottom, vec_top, batches_stride); - __bang_add_const(vec_right, vec_right, (T)offset, batches_stride); - - // zero vector ---> bottom - __nramset(vec_bottom, batches_stride, 0.f); - - // width --> vec_left - __bang_maxequal(vec_left, vec_bottom, vec_left, batches_stride); - T *width = vec_left; - // height --> vec_right - __bang_maxequal(vec_right, vec_bottom, vec_right, batches_stride); - T *height = vec_right; - - // get the b1_area - // (b1_x2 - b1_x1 + offset) ---> vec_top - __bang_sub(vec_top, vec_b1_x2, vec_b1_x1, batches_stride); - __bang_add_const(vec_top, vec_top, (T)offset, batches_stride); - - // (b1_y2 - b1_y1 + offset) ---> vec_bottom - __bang_sub(vec_bottom, vec_b1_y2, vec_b1_y1, batches_stride); - __bang_add_const(vec_bottom, vec_bottom, (T)offset, batches_stride); - - // b1_area = (b1_x2 - b1_x1 + offset) * (b1_y2 - b1_y1 + offset) - // ---> vec_top; - __bang_mul(vec_top, vec_top, vec_bottom, batches_stride); - T *b1_area = vec_top; - - // get the b2_area - // (b2_x2 - b2_x1 + offset) ---> b2_x1 - __bang_sub(vec_b2_x1, vec_b2_x2, vec_b2_x1, batches_stride); - __bang_add_const(vec_b2_x1, vec_b2_x1, (T)offset, batches_stride); - - // (b2_y2 - b2_y1 + offset) ---> b2_y1 - __bang_sub(vec_b2_y1, vec_b2_y2, vec_b2_y1, batches_stride); - __bang_add_const(vec_b2_y1, vec_b2_y1, (T)offset, batches_stride); - - // b2_area = (b2_x2 - b2_x1 + offset) * (b2_y2 - b2_y1 + offset) - // ---> b2_x1; - __bang_mul(vec_b2_x1, vec_b2_x1, vec_b2_y1, batches_stride); - T *b2_area = vec_b2_x1; - - // inter_s = width * height - __bang_mul(height, width, height, batches_stride); - T *inter_s = height; - - // offset vector ---> vec_b2_y1 - __nramset(vec_b2_y1, batches_stride, T(offset)); - T *vec_offset = vec_b2_y1; - - if (mode == 0) { - __bang_add(b1_area, b1_area, b2_area, batches_stride); - __bang_sub(b1_area, b1_area, inter_s, batches_stride); - __bang_maxequal(b1_area, 
vec_offset, b1_area, batches_stride); - } else { - __bang_maxequal(b1_area, vec_offset, b1_area, batches_stride); - } - T *base_s = b1_area; - - // ious = inter_s / base_s - computeDiv(width, inter_s, base_s, vec_b2_x2, batches_stride); - __memcpy((T *)ious + index, width, handle_batches * sizeof(T), - NRAM2GDRAM); - } - } else { - int32_t num_loop_cpy = num_bbox2 / batches_stride; - int32_t num_rem_cpy_batches = num_bbox2 % batches_stride; - num_loop_cpy = num_rem_cpy_batches > 0 ? num_loop_cpy + 1 : num_loop_cpy; - for (int32_t i = 0; i < batch_per_task; i++) { - int32_t index1 = batch_start + i; - int32_t b1 = index1; - int32_t base1 = b1 * COORD_NUM; - - // set bbox1 and bbox2 to nram - __nramset(vec_b1_x1, batches_stride, bbox1[base1]); - __nramset(vec_b1_y1, batches_stride, bbox1[base1 + 1]); - __nramset(vec_b1_x2, batches_stride, bbox1[base1 + 2]); - __nramset(vec_b1_y2, batches_stride, bbox1[base1 + 3]); - - for (int32_t j = 0; j < num_loop_cpy; j++) { - int32_t index2 = j * batches_stride; - int32_t handle_batches = index2 + batches_stride > num_bbox2 - ? num_rem_cpy_batches - : batches_stride; - int32_t b2 = index2; - int32_t base2 = b2 * COORD_NUM; - - // copy bbox2 to nram - __memcpy(vec_b2_x1, &bbox2[base2], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - __memcpy(vec_b2_y1, &bbox2[base2 + 1], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - __memcpy(vec_b2_x2, &bbox2[base2 + 2], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - __memcpy(vec_b2_y2, &bbox2[base2 + 3], sizeof(T), GDRAM2NRAM, sizeof(T), - COORD_NUM * sizeof(T), handle_batches - 1); - - // get the width and height - __bang_maxequal(vec_left, vec_b1_x1, vec_b2_x1, batches_stride); - __bang_minequal(vec_right, vec_b1_x2, vec_b2_x2, batches_stride); - __bang_maxequal(vec_top, vec_b1_y1, vec_b2_y1, batches_stride); - __bang_minequal(vec_bottom, vec_b1_y2, vec_b2_y2, batches_stride); - - // right - left + offset ---> left - __bang_sub(vec_left, vec_right, vec_left, batches_stride); - __bang_add_const(vec_left, vec_left, (T)offset, batches_stride); - // bottom - top + offset ---> right - __bang_sub(vec_right, vec_bottom, vec_top, batches_stride); - __bang_add_const(vec_right, vec_right, (T)offset, batches_stride); - - // zero vector ---> bottom - __nramset(vec_bottom, batches_stride, (T)0); - - // width --> vec_left - __bang_maxequal(vec_left, vec_bottom, vec_left, batches_stride); - T *width = vec_left; - // height --> vec_right - __bang_maxequal(vec_right, vec_bottom, vec_right, batches_stride); - T *height = vec_right; - - // get the b1_area - // (b1_x2 - b1_x1 + offset) ---> vec_top - __bang_sub(vec_top, vec_b1_x2, vec_b1_x1, batches_stride); - __bang_add_const(vec_top, vec_top, (T)offset, batches_stride); - // (b1_y2 - b1_y1 + offset) ---> vec_bottom - __bang_sub(vec_bottom, vec_b1_y2, vec_b1_y1, batches_stride); - __bang_add_const(vec_bottom, vec_bottom, (T)offset, batches_stride); - // b1_area = (b1_x2 - b1_x1 + offset) * (b1_y2 - b1_y1 + offset) - // ---> vec_top; - __bang_mul(vec_top, vec_top, vec_bottom, batches_stride); - T *b1_area = vec_top; - - // get the b2_area - // (b2_x2 - b2_x1 + offset) ---> b2_x1 - __bang_sub(vec_b2_x1, vec_b2_x2, vec_b2_x1, batches_stride); - __bang_add_const(vec_b2_x1, vec_b2_x1, (T)offset, batches_stride); - // (b2_y2 - b2_y1 + offset) ---> b2_y1 - __bang_sub(vec_b2_y1, vec_b2_y2, vec_b2_y1, batches_stride); - __bang_add_const(vec_b2_y1, vec_b2_y1, (T)offset, batches_stride); - // 
b2_area = (b2_x2 - b2_x1 + offset) * (b2_y2 - b2_y1 + offset) - // ---> b2_x1; - __bang_mul(vec_b2_x1, vec_b2_x1, vec_b2_y1, batches_stride); - T *b2_area = vec_b2_x1; - - // inter_s = width * height - __bang_mul(height, width, height, batches_stride); - T *inter_s = height; - - // offset vector ---> vec_b2_y1 - __nramset(vec_b2_y1, batches_stride, T(offset)); - T *vec_offset = vec_b2_y1; - - if (mode == 0) { - __bang_add(b1_area, b1_area, b2_area, batches_stride); - __bang_sub(b1_area, b1_area, inter_s, batches_stride); - __bang_maxequal(b1_area, vec_offset, b1_area, batches_stride); - } else { - __bang_maxequal(b1_area, vec_offset, b1_area, batches_stride); - } - T *base_s = b1_area; - - // ious = inter_s / base_s - computeDiv(width, inter_s, base_s, vec_b2_x2, batches_stride); - int32_t gdram_offset = index1 * num_bbox2 + index2; - __memcpy((T *)ious + gdram_offset, width, handle_batches * sizeof(T), - NRAM2GDRAM); - } - } - } -} - -template -__mlu_global__ void MLUUnion1KernelBBoxOverlaps( - const void *bbox1, const void *bbox2, void *ious, const int32_t num_bbox1, - const int32_t num_bbox2, const int32_t mode, const bool aligned, - const int32_t offset) { - /* - * NRAM partition - * |-------------------------------------------------------------| - * | vec_b1_x1 | vec_b1_y1 | vec_b1_x2 | vec_b1_y2 | - * |-------------------------------------------------------------| - * | vec_b2_x1 | vec_b2_y1 | vec_b2_x2 | vec_b2_y2 | - * |-------------------------------------------------------------| - * | vec_left | vec_right | vec_top | vec_bottom | - * |-------------------------------------------------------------| - * - */ - const int32_t align_bytes = PAD_DOWN(MAX_NRAM_SIZE, NFU_ALIGN_SIZE); - const int32_t split_nram_num = 12; - const int32_t nram_stride = - align_bytes / NFU_ALIGN_SIZE / split_nram_num * NFU_ALIGN_SIZE; - - void *vec_b1_x1 = nmem_buf; - void *vec_b1_y1 = nmem_buf + nram_stride; - void *vec_b1_x2 = nmem_buf + 2 * nram_stride; - void *vec_b1_y2 = nmem_buf + 3 * nram_stride; - - void *vec_b2_x1 = nmem_buf + 4 * nram_stride; - void *vec_b2_y1 = nmem_buf + 5 * nram_stride; - void *vec_b2_x2 = nmem_buf + 6 * nram_stride; - void *vec_b2_y2 = nmem_buf + 7 * nram_stride; - - void *vec_left = nmem_buf + 8 * nram_stride; - void *vec_right = nmem_buf + 9 * nram_stride; - void *vec_top = nmem_buf + 10 * nram_stride; - void *vec_bottom = nmem_buf + 11 * nram_stride; - - const int32_t vec_length = nram_stride / sizeof(T); - bboxOverlapsWorkflow((T *)vec_b1_x1, (T *)vec_b1_y1, (T *)vec_b1_x2, - (T *)vec_b1_y2, (T *)vec_b2_x1, (T *)vec_b2_y1, - (T *)vec_b2_x2, (T *)vec_b2_y2, (T *)vec_left, - (T *)vec_right, (T *)vec_top, (T *)vec_bottom, - (T *)bbox1, (T *)bbox2, (T *)ious, offset, mode, - vec_length, num_bbox1, num_bbox2, aligned); -} - -void KernelBBoxOverlaps(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, - cnrtQueue_t queue, const cnrtDataType_t d_type, - const void *bbox1, const void *bbox2, void *ious, - const int32_t num_bbox1, const int32_t num_bbox2, - const int32_t mode, const bool aligned, - const int32_t offset) { - if (d_type == CNRT_FLOAT16) { - MLUUnion1KernelBBoxOverlaps<<>>( - bbox1, bbox2, ious, num_bbox1, num_bbox2, mode, aligned, offset); - } else { - MLUUnion1KernelBBoxOverlaps<<>>( - bbox1, bbox2, ious, num_bbox1, num_bbox2, mode, aligned, offset); - } -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/common_mlu_helper.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/common_mlu_helper.hpp deleted file 
mode 100644 index 1371e26e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/common_mlu_helper.hpp +++ /dev/null @@ -1,38 +0,0 @@ -/************************************************************************* - * Copyright (C) 2021 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - *************************************************************************/ -#ifndef UTILS_H_ -#define UTILS_H_ - -#define NFU_ALIGN_SIZE 128 // Byte -#define REM_FOR_STACK (128 * 1024) // 128KB reserved for cncc - -#ifdef __BANG_ARCH__ -#define MAX_NRAM_SIZE \ - (__MLU_NRAM_SIZE__ * 1024 - REM_FOR_STACK) // 128KB reserved for cncc -#define MAX_SRAM_SIZE \ - (__MLU_SRAM_SIZE__ * 1024 - REM_FOR_STACK) // 128KB reserved for cncc -#else -#define MAX_NRAM_SIZE (384 * 1024) // 384KB, initialization value -#define MAX_SRAM_SIZE (1920 * 1024) // 1920KB, initialization value -#endif - -#ifndef PAD_UP -#define PAD_UP(x, y) (((x) / (y) + (int)((x) % (y) > 0)) * (y)) -#endif - -#ifndef PAD_DOWN -#define PAD_DOWN(x, y) (((x) / (y)) * (y)) -#endif - -#define CEIL_ALIGN(x, y) (((x) + (y)-1) / (y) * (y)) - -#endif // UTILS_H_ diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/focal_loss_sigmoid_mlu_kernel.mlu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/focal_loss_sigmoid_mlu_kernel.mlu deleted file mode 100644 index 7624379b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/focal_loss_sigmoid_mlu_kernel.mlu +++ /dev/null @@ -1,888 +0,0 @@ -/************************************************************************* - * Copyright (C) 2021 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- *************************************************************************/ -#include - -#include "common_mlu_helper.hpp" - -#define PING 0 -#define PONG 1 - -__nram__ char nram_buffer[MAX_NRAM_SIZE]; - -namespace forward { -template -__mlu_func__ void loadInput(char *nram_input, T *dram_input, const int32_t size, - const int32_t dst_stride = 0, - const int32_t src_stride = 0, - const int32_t count = 1) { - if (dst_stride == src_stride) { - __memcpy_async(nram_input, dram_input, size * count, GDRAM2NRAM); - } else { - __memcpy_async(nram_input, dram_input, size, GDRAM2NRAM, dst_stride, - src_stride, count - 1); - } -} - -template -__mlu_func__ void loadWeight(char *nram_input, T *dram_input, const int32_t t, - const int32_t c, const int32_t has_weight, - const int32_t partition_nc) { - if (has_weight && partition_nc && t >= 0 && t < c) { - __memcpy_async(nram_input, (T *)dram_input + t, sizeof(T), GDRAM2NRAM); - } -} - -template -__mlu_func__ void storeOutput(T *dram_output, char *nram_output, - const int32_t size, const int32_t dst_stride = 0, - const int32_t src_stride = 0, - const int32_t count = 1) { - if (dst_stride == src_stride) { - __memcpy_async(dram_output, nram_output, size * count, NRAM2GDRAM); - } else { - __memcpy_async(dram_output, nram_output, size, NRAM2GDRAM, dst_stride, - src_stride, count - 1); - } -} - -template -__mlu_func__ void compute(T *input, const int32_t *target, const T *weight, - const int32_t has_weight, const int32_t partition_nc, - const int32_t deal_num, const int32_t n_seg, - const int32_t c, const int32_t c_seg, - const int32_t c_start_index, const float alpha, - const float gamma, T *compute_a, T *compute_b, - T *output) { - // set params - const int32_t c_num = - has_weight ? PAD_UP(c_seg, NFU_ALIGN_SIZE / sizeof(T)) : c_seg; - const int32_t c_end_index = c_start_index + c_seg; - const int32_t half_epsilon = 0x0400; - const T epsilon_f = - sizeof(T) == sizeof(float) ? FLT_MIN : *((half *)&half_epsilon); - - // 0. alpha_t * p_t^r = alpha * (1 - p) ^ gamma if t == c_i - // = (1 - alpha) * p ^ gamma if t != c_i - __nramset((T *)output, deal_num, (T)(1 - alpha)); - __bang_active_sigmoid((T *)compute_b, (T *)input, deal_num); - for (int32_t i = 0; i < n_seg; ++i) { - const int32_t t = *((uint32_t *)target + i); - if (t >= c_start_index && t < c_end_index) { - const uint32_t index = i * c_num + t - c_start_index; - *((T *)input + index) = -1.0 * (*((T *)input + index)); - *((T *)compute_b + index) = 1.0 - (*((T *)compute_b + index)) + epsilon_f; - *((T *)output + index) = alpha; - } - } - if (sizeof(T) == sizeof(half)) { - __bang_half2float((float *)compute_a, (half *)compute_b, deal_num); - __bang_active_loghp((float *)compute_a, (float *)compute_a, deal_num); - __bang_mul_const((float *)compute_a, (float *)compute_a, (float)gamma, - deal_num); - __bang_active_exphp((float *)compute_a, (float *)compute_a, deal_num); - __bang_float2half_rd((half *)compute_a, (float *)compute_a, deal_num); - } else { - __bang_active_loghp((T *)compute_a, (T *)compute_b, deal_num); - __bang_mul_const((T *)compute_a, (T *)compute_a, (T)gamma, deal_num); - __bang_active_exphp((T *)compute_a, (T *)compute_a, deal_num); - } - __bang_mul((T *)output, (T *)compute_a, (T *)output, deal_num); - - // 1. max = max(0, -x) if t == c_i - // = max(0, x) if t != c_i - __nramset((T *)compute_b, deal_num, (T)0); - __bang_maxequal((T *)compute_b, (T *)compute_b, (T *)input, deal_num); - - // 2. 
-log(p_t) = ln(e^(-max)+ e^(-max-x) + max if t == c_i - // = ln(e^(-max)+ e^(-max+x) + max if t != c_i - __bang_mul_const((T *)compute_a, (T *)compute_b, (T)-1.0, deal_num); - __bang_add((T *)input, (T *)compute_a, (T *)input, deal_num); - - __bang_active_exphp((T *)compute_a, (T *)compute_a, deal_num); - __bang_active_exphp((T *)input, (T *)input, deal_num); - __bang_add((T *)compute_a, (T *)compute_a, (T *)input, deal_num); - __bang_active_loghp((T *)compute_a, (T *)compute_a, deal_num); - __bang_add((T *)input, (T *)compute_a, (T *)compute_b, deal_num); - - // 3. output = alpha_t * p_t^r * [-log(p_t)] - __bang_mul((T *)output, (T *)output, (T *)input, deal_num); - - // 4. with weight - if (has_weight) { - for (int32_t i = 0; i < n_seg; ++i) { - int32_t t = *((int32_t *)target + i); - if (t >= 0 && t < c) { - t = partition_nc ? 0 : t; - __bang_mul_const((T *)output + i * c_num, (T *)output + i * c_num, - *((T *)weight + t), c_num); - } - } - } -} - -template -__mlu_func__ void startPipeline( - const T *input, const int32_t *target, const T *weight, - char *nram_compute_a, char *nram_compute_b, char *nram_input, - char *nram_target, char *nram_weight, char *nram_output, - const int32_t has_weight, const int32_t partition_nc, - const int32_t pingpong_offset, const int32_t pingpong_weight_offset, - const int32_t c_offset_num, const int32_t n, const int32_t n_seg, - const int32_t c, const int32_t c_seg, const float alpha, const float gamma, - T *output) { - // with offset - input = (T *)((char *)input + c_offset_num * sizeof(T)); - output = (T *)((char *)output + c_offset_num * sizeof(T)); - - const int32_t c_seg_align_num = PAD_UP(c_seg, NFU_ALIGN_SIZE / sizeof(T)); - const int32_t c_num = has_weight ? c_seg_align_num : c_seg; - const int32_t deal_num = PAD_UP(n_seg * c_num, NFU_ALIGN_SIZE / sizeof(T)); - const int32_t load_size = c_seg * sizeof(T); - const int32_t dram_stride = c * sizeof(T); - const int32_t nram_stride = c_num * sizeof(T); - - if (has_weight && !partition_nc) { - loadInput(nram_weight, (T *)weight, load_size, nram_stride, dram_stride, - 1); - __asm__ volatile("sync;\n\t"); - } - const int32_t repeat = n / n_seg; - const int32_t remain = n % n_seg; - - /* - * Pipeline: The pipeline is processed in three stages: Load, Compute, Store. - * The allocated memory space of NRAM is divided into two parts: - * PING and Pong. In a single time slice, PING is used to process - * IO stream and PONG is used for computation. Both of them are - * processed synchronously until finished. 
- * - * diagram of PINGPONG: - * |------|-----------------------------------------------------------------| - * | | space | - * |------|-----------------------------------------------------------------| - * | time | Ping | Pong | Ping | Pong | Ping | Pong | - * |------|-----------------------------------------------------------------| - * | 0 | L0 | | | | | | - * | 1 | C0 | L1 | | | | | - * | 2 | S0 | C1 | L2 | | | | - * | 3 | | S1 | C2 | L3 | | | - * | 4 | | | S2 | C3 | L4 | | - * | 5 | | | | S3 | C4 | L5 | - * | 6 | | | | | S4 | C5 | - * | 7 | | | | | | S5 | - * |------|-----------------------------------------------------------------| - */ - - // diagram of PINGPONG: L0 - if (repeat > 0) { - loadInput(nram_input, (T *)input, load_size, nram_stride, dram_stride, - n_seg); - loadInput(nram_target, (int32_t *)target, n_seg * sizeof(int32_t)); - loadWeight(nram_weight, (T *)weight, *((int32_t *)target), c, has_weight, - partition_nc); - __asm__ volatile("sync;\n\t"); - } - - // diagram of PINGPONG: C0 and L1 - if (repeat > 1) { - compute((T *)nram_input, (int32_t *)nram_target, (T *)nram_weight, - has_weight, partition_nc, deal_num, n_seg, c, c_seg, c_offset_num, - alpha, gamma, (T *)nram_compute_a, (T *)nram_compute_b, - (T *)nram_output); - loadInput((char *)nram_input + pingpong_offset, (T *)input + c * n_seg, - load_size, nram_stride, dram_stride, n_seg); - loadInput((char *)nram_target + pingpong_offset, - (int32_t *)target + n_seg, n_seg * sizeof(int32_t)); - loadWeight((char *)nram_weight + pingpong_weight_offset, (T *)weight, - *((int32_t *)target + n_seg), c, has_weight, partition_nc); - __asm__ volatile("sync;\n\t"); - } - - for (int32_t i = 0; i < repeat - 2; ++i) { - storeOutput((T *)output + i * c * n_seg, - nram_output + (i % 2) * pingpong_offset, load_size, - dram_stride, nram_stride, n_seg); - loadInput((char *)nram_input + (i % 2) * pingpong_offset, - (T *)(input) + (i + 2) * c * n_seg, load_size, nram_stride, - dram_stride, n_seg); - loadInput((char *)nram_target + (i % 2) * pingpong_offset, - (int32_t *)target + (i + 2) * n_seg, - n_seg * sizeof(int32_t)); - loadWeight((char *)nram_weight + (i % 2) * pingpong_weight_offset, - (T *)weight, *((int32_t *)target + (i + 2) * n_seg), c, - has_weight, partition_nc); - compute((T *)(nram_input + ((i + 1) % 2) * pingpong_offset), - (int32_t *)(nram_target + ((i + 1) % 2) * pingpong_offset), - (T *)(nram_weight + - partition_nc * ((i + 1) % 2) * pingpong_weight_offset), - has_weight, partition_nc, deal_num, n_seg, c, c_seg, c_offset_num, - alpha, gamma, (T *)nram_compute_a, (T *)nram_compute_b, - (T *)(nram_output + ((i + 1) % 2) * pingpong_offset)); - __asm__ volatile("sync;\n\t"); - } - - if (repeat > 1) { - storeOutput((T *)output + (repeat - 2) * c * n_seg, - (char *)nram_output + (repeat % 2) * pingpong_offset, - load_size, dram_stride, nram_stride, n_seg); - } - - if (remain > 0) { - loadInput((char *)nram_input + (repeat % 2) * pingpong_offset, - (T *)input + repeat * c * n_seg, load_size, nram_stride, - dram_stride, remain); - loadInput((char *)nram_target + (repeat % 2) * pingpong_offset, - (int32_t *)target + repeat * n_seg, - remain * sizeof(int32_t)); - loadWeight((char *)nram_weight + (repeat % 2) * pingpong_weight_offset, - (T *)weight, *((int32_t *)target + repeat * n_seg), c, - has_weight, partition_nc); - } - - if (repeat > 0) { - compute((T *)(nram_input + ((repeat - 1) % 2) * pingpong_offset), - (int32_t *)(nram_target + ((repeat - 1) % 2) * pingpong_offset), - (T *)(nram_weight + - partition_nc * ((repeat - 1) % 
2) * pingpong_weight_offset), - has_weight, partition_nc, deal_num, n_seg, c, c_seg, c_offset_num, - alpha, gamma, (T *)nram_compute_a, (T *)nram_compute_b, - (T *)(nram_output + ((repeat - 1) % 2) * pingpong_offset)); - } - __asm__ volatile("sync;\n\t"); - - if (repeat > 0) { - storeOutput((T *)output + (repeat - 1) * c * n_seg, - (char *)nram_output + ((repeat - 1) % 2) * pingpong_offset, - load_size, dram_stride, nram_stride, n_seg); - } - - if (remain > 0) { - int32_t rem_num = PAD_UP(remain * c_num, NFU_ALIGN_SIZE / sizeof(T)); - compute((T *)(nram_input + (repeat % 2) * pingpong_offset), - (int32_t *)(nram_target + (repeat % 2) * pingpong_offset), - (T *)(nram_weight + - partition_nc * (repeat % 2) * pingpong_weight_offset), - has_weight, partition_nc, rem_num, remain, c, c_seg, c_offset_num, - alpha, gamma, (T *)nram_compute_a, (T *)nram_compute_b, - (T *)(nram_output + (repeat % 2) * pingpong_offset)); - __asm__ volatile("sync;\n\t"); - - storeOutput((T *)output + repeat * c * n_seg, - (char *)nram_output + (repeat % 2) * pingpong_offset, - load_size, dram_stride, nram_stride, remain); - } - __asm__ volatile("sync;\n\t"); -} - -template -__mlu_func__ void focalLossSigmoidForwardBlock( - const T *input, const int32_t *target, const T *weight, const int32_t n, - const int32_t c, const float alpha, const float gamma, T *output) { - /* - * NRAM partition - * |-----------------------------------------------------------------------| - * | weight | - * |------------------------------- COMPUTE -------------------------------| - * | | | - * | computeA | computeB | - * | | | - * |------------- PING ------------------------------- PONG ---------------| - * | | | - * | input | input | - * | | | - * |-----------------------------------|-----------------------------------| - * | | | - * | output | output | - * | | | - * |-----------------------------------|-----------------------------------| - * | target | target | - * |-----------------------------------|-----------------------------------| - * - * split_pipeline_num is 6: COMPUTE(computeA,computeB), PING(input,output), - * PONG(input,output). - * split_target_num is 2: PING(target), PONG(target). - * weight is not NULL: - * The nram-size of weight is equal to c_align_size when partition input-N. - * The nram-size of weight is equal to NFU_ALIGN_SIZE when partition - * input-NC. 
- */ - - // calculate threshold of c - const int32_t split_pipeline_num = 6; - const int32_t split_target_num = 2; - const int32_t has_weight = weight != NULL; - const int32_t threshold_c = - PAD_DOWN((MAX_NRAM_SIZE - split_target_num * sizeof(int32_t)) / - (split_pipeline_num + has_weight), - NFU_ALIGN_SIZE) / - sizeof(T); - const int32_t c_align = PAD_UP(c, NFU_ALIGN_SIZE / sizeof(T)); - const int32_t c_align_size = c_align * sizeof(T); - - if (c <= threshold_c) { - // partition inputN - int32_t c_num = c; - int32_t reservered_align_size = - (split_target_num + split_pipeline_num) * NFU_ALIGN_SIZE; - int32_t weight_size = 0; - if (has_weight) { - c_num = c_align; - reservered_align_size = split_target_num * NFU_ALIGN_SIZE; - weight_size = c_align_size; - } - - const int32_t remain_size = - MAX_NRAM_SIZE - weight_size - reservered_align_size; - const int32_t n_seg = - remain_size / (split_pipeline_num * c_num * sizeof(T) + - split_target_num * sizeof(int32_t)); - const int32_t split_pipeline_size = - PAD_UP(c_num * n_seg * sizeof(T), NFU_ALIGN_SIZE); - const int32_t compute_size = 2 * split_pipeline_size; - const int32_t pingpong_offset = (MAX_NRAM_SIZE - weight_size - compute_size) / 2; - - char *nram_weight = (char *)nram_buffer; - char *nram_compute_a = nram_weight + has_weight * c_align_size; - char *nram_compute_b = nram_compute_a + split_pipeline_size; - char *nram_input = nram_compute_b + split_pipeline_size; - char *nram_output = nram_input + split_pipeline_size; - char *nram_target = nram_output + split_pipeline_size; - - startPipeline(input, target, weight, nram_compute_a, nram_compute_b, - nram_input, nram_target, nram_weight, nram_output, - has_weight, 0, pingpong_offset, 0, 0, n, n_seg, c, c, - alpha, gamma, output); - } else { - // partition inputNC - const int32_t weight_size = has_weight * NFU_ALIGN_SIZE; - const int32_t remain_size = MAX_NRAM_SIZE - weight_size; - const int32_t split_pipeline_size = PAD_DOWN( - (remain_size - split_target_num * NFU_ALIGN_SIZE) / split_pipeline_num, - NFU_ALIGN_SIZE); - const int32_t c_seg = split_pipeline_size / sizeof(T); - const int32_t n_seg = 1; - const int32_t compute_size = 2 * split_pipeline_size; - const int32_t pingpong_offset = (MAX_NRAM_SIZE - weight_size - compute_size) / 2; - const int32_t pingpong_weight_offset = weight_size / 2; - - char *nram_weight = (char *)nram_buffer; - char *nram_compute_a = nram_weight + weight_size; - char *nram_compute_b = nram_compute_a + split_pipeline_size; - char *nram_input = nram_compute_b + split_pipeline_size; - char *nram_output = nram_input + split_pipeline_size; - char *nram_target = nram_output + split_pipeline_size; - - const int32_t loop_num = (c + c_seg - 1) / c_seg; - const int32_t partition_nc = 1; - for (int32_t i = 0; i < loop_num; ++i) { - const int32_t c_index = i * c_seg; - const int32_t c_seg_curr = i == (loop_num - 1) ? 
c - c_index : c_seg; - startPipeline(input, target, weight, nram_compute_a, nram_compute_b, - nram_input, nram_target, nram_weight, nram_output, - has_weight, partition_nc, pingpong_offset, - pingpong_weight_offset, c_index, n, n_seg, c, c_seg_curr, - alpha, gamma, output); - } - } -} - -template -__mlu_global__ void MLUUnion1KernelFocalLossSigmoidForward( - const void *input, const void *target, const void *weight, const int32_t N, - const int32_t C, const float alpha, const float gamma, void *output) { - const int32_t n_seg = N / taskDim + (taskId == taskDim - 1) * (N % taskDim); - const T *input_offset = (T *)input + N / taskDim * taskId * C; - const int32_t *target_offset = (int32_t *)target + N / taskDim * taskId; - T *output_offset = (T *)output + N / taskDim * taskId * C; - - focalLossSigmoidForwardBlock((T *)input_offset, (int32_t *)target_offset, - (T *)weight, n_seg, C, alpha, gamma, - (T *)output_offset); -} -} // namespace forward - -namespace backward { -template -__mlu_func__ void loadInput(char *nram_input, char *nram_target, - const T *gdram_input, const int32_t *gdram_target, - const int32_t deal_n, const int32_t total_c, - const bool pingping_flag, const bool has_weight, - const int32_t nram_offset, - const int32_t gdram_offset) { - if (pingping_flag == PONG) { - nram_input += nram_offset; - nram_target += nram_offset; - } - - __memcpy_async(nram_target, gdram_target + gdram_offset / total_c, - deal_n * sizeof(int32_t), GDRAM2NRAM); - - char *nram_input_load = nram_input; - int32_t compute_align_size = 2 * NFU_ALIGN_SIZE; - if (has_weight) { - if (sizeof(T) == sizeof(half)) { - int32_t compute_align_num = compute_align_size / sizeof(float); - int32_t align_c = PAD_UP(total_c, compute_align_num); - int32_t compute_size = deal_n * align_c * sizeof(float); - nram_input_load += compute_size / 2; - } - int32_t align_c = PAD_UP(total_c, NFU_ALIGN_SIZE / sizeof(T)); - int32_t total_c_size = total_c * sizeof(T); - int32_t align_c_size = align_c * sizeof(T); - __memcpy_async(nram_input_load, gdram_input + gdram_offset, total_c_size, - GDRAM2NRAM, align_c_size, total_c_size, deal_n - 1); - } else { - if (sizeof(T) == sizeof(half)) { - int32_t compute_size = - PAD_UP(deal_n * total_c * sizeof(float), compute_align_size); - nram_input_load += compute_size / 2; - } - int32_t load_size = deal_n * total_c * sizeof(T); - __memcpy_async(nram_input_load, gdram_input + gdram_offset, load_size, - GDRAM2NRAM); - } -} - -template -__mlu_func__ void sigmoid(T *dst_data, const T *src_data, - const int32_t elem_count) { - __bang_mul_const(dst_data, (T *)src_data, T(-1), elem_count); - __bang_active_exphp(dst_data, dst_data, elem_count); - __bang_add_const(dst_data, dst_data, T(1), elem_count); - __bang_active_reciphp(dst_data, dst_data, elem_count); -} - -template -__mlu_func__ void coreCompute(char *nram_input, const T *nram_weight, - const float *nram_flt_min, char *nram_pt, - char *nram_alpha_t, char *nram_temp, - char *nram_target, const float *nram_gamma, - char *nram_output, const float alpha, - const int32_t compute_num, const int32_t deal_n, - const int32_t total_c, const bool pingpong_flag, - const int32_t nram_offset, - const bool has_weight) { - if (pingpong_flag == PONG) { - nram_input += nram_offset; - nram_pt += nram_offset; - nram_alpha_t += nram_offset; - nram_temp += nram_offset; - nram_output += nram_offset; - nram_target += nram_offset; - } - - if (sizeof(T) == sizeof(half)) { - const int32_t compute_size = compute_num * sizeof(float); - char *nram_input_load = nram_input + 
compute_size / 2; - __bang_half2float((float *)nram_input, (half *)nram_input_load, - compute_num); - } - - // 0. alpha_t = alpha - 1 - __nramset((float *)nram_alpha_t, compute_num, (float)(alpha - 1.0)); - - // 1. pt = 1 - sigmoid(x) - sigmoid((float *)nram_pt, (float *)nram_input, compute_num); - __bang_mul_const((float *)nram_pt, (float *)nram_pt, (float)(-1), - compute_num); - __bang_add_const((float *)nram_pt, (float *)nram_pt, (float)1, compute_num); - - // 2. pt = target[n] == c ? sigmoid(x) : 1 - sigmoid(x) - // alpha_t = target[n] == c ? alpha : alpha - 1 - const int32_t nfu_align_num = NFU_ALIGN_SIZE / sizeof(float); - for (int n = 0; n < deal_n; n++) { - const int32_t target_value = ((int32_t *)nram_target)[n]; - if (target_value >= total_c || target_value < 0) continue; - int32_t c_offset = 0; - if (has_weight) { - int32_t c_align_num = nfu_align_num; - if (sizeof(T) == sizeof(half)) { - c_align_num += nfu_align_num; - } - c_offset = PAD_UP(total_c, c_align_num); - } else { - c_offset = total_c; - } - int32_t idx = n * c_offset + target_value; - *((float *)nram_pt + idx) = 1.0 - *((float *)nram_pt + idx); - *((float *)nram_alpha_t + idx) = alpha; - } - - // 3. temp = -alpha_t * e^(gamma * log(max(1 - pt, FLT_MIN)) - __bang_mul_const((float *)nram_temp, (float *)nram_pt, (float)(-1), - compute_num); - __bang_add_const((float *)nram_temp, (float *)nram_temp, (float)(1), - compute_num); - __bang_cycle_maxequal((float *)nram_temp, (float *)nram_temp, - (float *)nram_flt_min, compute_num, nfu_align_num); - __bang_active_loghp((float *)nram_temp, (float *)nram_temp, compute_num); - __bang_cycle_mul((float *)nram_temp, (float *)nram_temp, (float *)nram_gamma, - compute_num, nfu_align_num); - __bang_active_exphp((float *)nram_temp, (float *)nram_temp, compute_num); - __bang_mul((float *)nram_temp, (float *)nram_temp, (float *)nram_alpha_t, - compute_num); - __bang_mul_const((float *)nram_temp, (float *)nram_temp, (float)(-1), - compute_num); - - // 4. output = 1 - pt - gamma * pt * log(max(pt, FLT_MIN)) - __bang_cycle_maxequal((float *)nram_output, (float *)nram_pt, - (float *)nram_flt_min, compute_num, nfu_align_num); - __bang_active_loghp((float *)nram_output, (float *)nram_output, compute_num); - __bang_mul((float *)nram_output, (float *)nram_output, (float *)nram_pt, - compute_num); - __bang_cycle_mul((float *)nram_output, (float *)nram_output, - (float *)nram_gamma, compute_num, nfu_align_num); - __bang_add((float *)nram_output, (float *)nram_output, (float *)nram_pt, - compute_num); - __bang_mul_const((float *)nram_output, (float *)nram_output, (float)(-1), - compute_num); - __bang_add_const((float *)nram_output, (float *)nram_output, (float)(1), - compute_num); - - // 5. 
output = output * temp - __bang_mul((float *)nram_output, (float *)nram_output, (float *)nram_temp, - compute_num); - - if (sizeof(T) == sizeof(half)) { - __bang_float2half_rd((half *)nram_output, (float *)nram_output, - compute_num); - } - - if (has_weight) { - // with weight - for (int n = 0; n < deal_n; n++) { - int32_t c_align_num = nfu_align_num; - if (sizeof(T) == sizeof(half)) { - c_align_num += nfu_align_num; - } - int32_t align_c = PAD_UP(total_c, c_align_num); - int32_t target_value = ((int32_t *)nram_target)[n]; - T weight_value = nram_weight[target_value]; - __bang_mul_const((T *)nram_output + n * align_c, - (T *)nram_output + n * align_c, weight_value, align_c); - } - } -} - -template -__mlu_func__ void storeOutput(T *gdram_output, const char *nram_output, - const int32_t deal_n, const int32_t total_c, - const bool pingpong_flag, const bool has_weight, - const int32_t nram_offset, - const int32_t gdram_offset) { - if (pingpong_flag == PONG) { - nram_output += nram_offset; - } - const int32_t store_size = deal_n * total_c * sizeof(T); - if (has_weight) { - int32_t align_c = PAD_UP(total_c, NFU_ALIGN_SIZE / sizeof(T)); - int32_t total_c_size = total_c * sizeof(T); - int32_t align_c_size = align_c * sizeof(T); - __memcpy_async(gdram_output + gdram_offset, nram_output, total_c_size, - NRAM2GDRAM, total_c_size, align_c_size, deal_n - 1); - } else { - __memcpy_async(gdram_output + gdram_offset, nram_output, store_size, - NRAM2GDRAM); - } -} - -template -__mlu_func__ void focalLossSigmoidBackwardBlock( - const T *input, const int32_t *target, const T *weight, const float gamma, - const float alpha, const int32_t total_n, const int32_t deal_n, - const int32_t total_c, T *output) { - // params per time slice - int32_t deal_num = deal_n * total_c; - int32_t deal_size = deal_num * sizeof(float); - int32_t compute_num = 0; - int32_t compute_size = 0; - int32_t compute_align_size = NFU_ALIGN_SIZE; - const int32_t nfu_align_num = NFU_ALIGN_SIZE / sizeof(T); - if (sizeof(T) == sizeof(half)) { - compute_align_size += NFU_ALIGN_SIZE; - } - const int32_t compute_align_num = compute_align_size / sizeof(float); - bool has_weight = false; - if (weight != NULL) { - has_weight = true; - int32_t align_c = PAD_UP(total_c, compute_align_num); - compute_num = deal_n * align_c; - compute_size = compute_num * sizeof(float); - } else { - compute_size = PAD_UP(deal_size, compute_align_size); - compute_num = compute_size / sizeof(float); - } - - // params per core - int32_t total_num = total_n * total_c; - int32_t num_per_core = PAD_DOWN(total_num / taskDim, deal_num); - int32_t loop_per_core = num_per_core / deal_num; - - /* NRAM partition: - * - * |-----------------ping pong--------------------| - * |input | pt | alpha_t | temp | output | target | flt_min | gamma | weight| - * - * split_pipeline_num is 5: input, pt, alpha_t, temp, output. - * nram_reserved_line_num is 2: flt_min, gamma. 
- */ - const int32_t split_pipeline_num = 5; - const int32_t nram_reserved_line_num = 2; - int32_t target_deal_size = deal_n * sizeof(int32_t); - int32_t target_deal_size_align = PAD_UP(target_deal_size, NFU_ALIGN_SIZE); - // nram PING/PONG offset - int32_t ping_pong_offset = - compute_size * split_pipeline_num + target_deal_size_align; - - // gdram addr - int32_t *base_addr_target = - (int32_t *)target + taskId * loop_per_core * deal_n; - T *base_addr_input = (T *)input + taskId * num_per_core; - T *base_addr_output = output + taskId * num_per_core; - - // nram addr - char *nram_input = (char *)nram_buffer; - char *nram_pt = nram_input + compute_size; - char *nram_alpha_t = nram_pt + compute_size; - char *nram_temp = nram_alpha_t + compute_size; - char *nram_output = nram_temp + compute_size; - char *nram_target = nram_output + compute_size; - float *nram_flt_min = NULL; - float *nram_gamma = NULL; - T *nram_weight = NULL; - - if (!has_weight) { - nram_flt_min = (float *)(nram_buffer + MAX_NRAM_SIZE - - nram_reserved_line_num * NFU_ALIGN_SIZE); - nram_gamma = nram_flt_min + nfu_align_num; - } else { - int32_t weight_space = PAD_UP(total_c * sizeof(T), NFU_ALIGN_SIZE); - nram_flt_min = - (float *)(nram_buffer + MAX_NRAM_SIZE - - nram_reserved_line_num * NFU_ALIGN_SIZE - weight_space); - nram_gamma = nram_flt_min + nfu_align_num; - nram_weight = (T *)(nram_gamma + nfu_align_num); - __memcpy_async(nram_weight, weight, total_c * sizeof(T), GDRAM2NRAM); - } - - // nram set gamma and FLT_MIN - __nramset(nram_gamma, nfu_align_num, gamma); - __nramset(nram_flt_min, nfu_align_num, FLT_MIN); - - /* - * Pipeline: The pipeline is processed in three stages: Load, Compute, Store. - * The allocated memory space of NRAM is divided into two parts: - * PING and Pong. In a single time slice, PING is used to process - * IO stream and PONG is used for computation. Both of them are - * processed synchronously until finished. 
- * - * diagram of PINGPONG: - * |------|-----------------------------------------------------------------| - * | | space | - * |------|-----------------------------------------------------------------| - * | time | Ping | Pong | Ping | Pong | Ping | Pong | - * |------|-----------------------------------------------------------------| - * | 0 | L0 | | | | | | - * | 1 | C0 | L1 | | | | | - * | 2 | S0 | C1 | L2 | | | | - * | 3 | | S1 | C2 | L3 | | | - * | 4 | | | S2 | C3 | L4 | | - * | 5 | | | | S3 | C4 | L5 | - * | 6 | | | | | S4 | C5 | - * | 7 | | | | | | S5 | - * |------|-----------------------------------------------------------------| - */ - - // diagram of PINGPONG: L0 - if (loop_per_core > 0) { - loadInput(nram_input, nram_target, base_addr_input, base_addr_target, - deal_n, total_c, PING, has_weight, ping_pong_offset, 0); - __asm__ volatile("sync;"); - } - - // diagram of PINGPONG: C0 and L1 - if (loop_per_core > 1) { - coreCompute(nram_input, nram_weight, nram_flt_min, nram_pt, nram_alpha_t, - nram_temp, nram_target, nram_gamma, nram_output, alpha, - compute_num, deal_n, total_c, PING, ping_pong_offset, - has_weight); - loadInput(nram_input, nram_target, base_addr_input, base_addr_target, - deal_n, total_c, PONG, has_weight, ping_pong_offset, deal_num); - __asm__ volatile("sync;"); - } - - for (int i = 0; i < loop_per_core - 2; ++i) { - if (i % 2 == PING) { - storeOutput(base_addr_output, nram_output, deal_n, total_c, PING, - has_weight, ping_pong_offset, i * deal_num); - coreCompute(nram_input, nram_weight, nram_flt_min, nram_pt, nram_alpha_t, - nram_temp, nram_target, nram_gamma, nram_output, alpha, - compute_num, deal_n, total_c, PONG, ping_pong_offset, - has_weight); - loadInput(nram_input, nram_target, base_addr_input, base_addr_target, - deal_n, total_c, PING, has_weight, ping_pong_offset, - (i + 2) * deal_num); - } else { - storeOutput(base_addr_output, nram_output, deal_n, total_c, PONG, - has_weight, ping_pong_offset, i * deal_num); - coreCompute(nram_input, nram_weight, nram_flt_min, nram_pt, nram_alpha_t, - nram_temp, nram_target, nram_gamma, nram_output, alpha, - compute_num, deal_n, total_c, PING, ping_pong_offset, - has_weight); - loadInput(nram_input, nram_target, base_addr_input, base_addr_target, - deal_n, total_c, PONG, has_weight, ping_pong_offset, - (i + 2) * deal_num); - } - __asm__ volatile("sync;"); - } - - if (loop_per_core > 1) { - if ((loop_per_core - 2) % 2 == PING) { - storeOutput(base_addr_output, nram_output, deal_n, total_c, PING, - has_weight, ping_pong_offset, (loop_per_core - 2) * deal_num); - coreCompute(nram_input, nram_weight, nram_flt_min, nram_pt, nram_alpha_t, - nram_temp, nram_target, nram_gamma, nram_output, alpha, - compute_num, deal_n, total_c, PONG, ping_pong_offset, - has_weight); - } else { - storeOutput(base_addr_output, nram_output, deal_n, total_c, PONG, - has_weight, ping_pong_offset, (loop_per_core - 2) * deal_num); - coreCompute(nram_input, nram_weight, nram_flt_min, nram_pt, nram_alpha_t, - nram_temp, nram_target, nram_gamma, nram_output, alpha, - compute_num, deal_n, total_c, PING, ping_pong_offset, - has_weight); - } - __asm__ volatile("sync;"); - } - - if (loop_per_core > 0) { - if (loop_per_core == 1) { - coreCompute(nram_input, nram_weight, nram_flt_min, nram_pt, nram_alpha_t, - nram_temp, nram_target, nram_gamma, nram_output, alpha, - compute_num, deal_n, total_c, PING, ping_pong_offset, - has_weight); - __asm__ volatile("sync;"); - } - if ((loop_per_core - 1) % 2 == PING) { - storeOutput(base_addr_output, nram_output, 
deal_n, total_c, PING, - has_weight, ping_pong_offset, (loop_per_core - 1) * deal_num); - } else { - storeOutput(base_addr_output, nram_output, deal_n, total_c, PONG, - has_weight, ping_pong_offset, (loop_per_core - 1) * deal_num); - } - } - - // process the remaining data which N remainder per core is less than deal_n - int32_t rem_for_all = total_num - num_per_core * taskDim; - if (rem_for_all == 0) return; - int32_t rem_n_for_all = rem_for_all / total_c; - int32_t rem_n_per_core = (rem_n_for_all + taskDim - 1) / taskDim; - int32_t rem_num_per_core = rem_n_per_core * total_c; - int32_t rem_num_per_core_align = 0; - int32_t rem_core_num = rem_for_all / rem_num_per_core; - - int32_t rem_n_for_last = rem_n_for_all % rem_n_per_core; - int32_t rem_num_for_last = rem_n_for_last * total_c; - int32_t rem_num_for_last_align = 0; - - if (has_weight) { - int32_t align_c = PAD_UP(total_c, compute_align_num); - rem_num_per_core_align = rem_n_per_core * align_c; - rem_num_for_last_align = rem_n_for_last * align_c; - } else { - rem_num_per_core_align = PAD_UP(rem_num_per_core, compute_align_num); - rem_num_for_last_align = PAD_UP(rem_num_for_last, compute_align_num); - } - - int32_t rem_addr_base = num_per_core * taskDim; - int32_t rem_target_addr_base = loop_per_core * deal_n * taskDim; - base_addr_target = (int32_t *)target + rem_target_addr_base; - base_addr_input = (T *)input + rem_addr_base; - base_addr_output = output + rem_addr_base; - - if (taskId < rem_core_num) { - loadInput(nram_input, nram_target, base_addr_input, base_addr_target, - rem_n_per_core, total_c, PING, has_weight, ping_pong_offset, - taskId * rem_num_per_core); - __asm__ volatile("sync;"); - coreCompute(nram_input, nram_weight, nram_flt_min, nram_pt, nram_alpha_t, - nram_temp, nram_target, nram_gamma, nram_output, alpha, - rem_num_per_core_align, rem_n_per_core, total_c, PING, - ping_pong_offset, has_weight); - __asm__ volatile("sync;"); - storeOutput(base_addr_output, nram_output, rem_n_per_core, total_c, PING, - has_weight, ping_pong_offset, taskId * rem_num_per_core); - } else if (taskId == rem_core_num) { - if (rem_num_for_last == 0) return; - loadInput(nram_input, nram_target, base_addr_input, base_addr_target, - rem_n_for_last, total_c, PING, has_weight, ping_pong_offset, - taskId * rem_num_per_core); - __asm__ volatile("sync;"); - coreCompute(nram_input, nram_weight, nram_flt_min, nram_pt, nram_alpha_t, - nram_temp, nram_target, nram_gamma, nram_output, alpha, - rem_num_for_last_align, rem_n_for_last, total_c, PING, - ping_pong_offset, has_weight); - __asm__ volatile("sync;"); - storeOutput(base_addr_output, nram_output, rem_n_for_last, total_c, PING, - has_weight, ping_pong_offset, taskId * rem_num_per_core); - } else { - return; - } -} - -template -__mlu_global__ void MLUUnion1KernelFocalLossSigmoidBackward( - const void *input, const void *target, const void *weight, - const float gamma, const float alpha, const int32_t total_n, - const int32_t deal_n, const int32_t total_c, void *output) { - focalLossSigmoidBackwardBlock((T *)input, (int32_t *)target, (T *)weight, - gamma, alpha, total_n, deal_n, total_c, - (T *)output); -} -} // namespace backward - -void KernelFocalLossSigmoidForward(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, - cnrtQueue_t queue, - const cnrtDataType_t d_type, - const void *input, const void *target, - const void *weight, const int32_t N, - const int32_t C, const float alpha, - const float gamma, void *output) { - if (d_type == CNRT_FLOAT16) { - forward::MLUUnion1KernelFocalLossSigmoidForward< - 
half><<>>(input, target, weight, N, C, alpha, - gamma, output); - } else { - forward::MLUUnion1KernelFocalLossSigmoidForward< - float><<>>(input, target, weight, N, C, alpha, - gamma, output); - } -} - -void KernelFocalLossSigmoidBackward(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, - cnrtQueue_t queue, - const cnrtDataType_t d_type, - const void *input, const void *target, - const void *weight, const float gamma, - const float alpha, const int32_t dim_n, - const int32_t deal_n, const int32_t dim_c, - void *output) { - if (d_type == CNRT_FLOAT16) { - backward::MLUUnion1KernelFocalLossSigmoidBackward< - half><<>>(input, target, weight, gamma, alpha, - dim_n, deal_n, dim_c, output); - } else { - backward::MLUUnion1KernelFocalLossSigmoidBackward< - float><<>>(input, target, weight, gamma, alpha, - dim_n, deal_n, dim_c, output); - } -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/nms_mlu_kernel.mlu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/nms_mlu_kernel.mlu deleted file mode 100644 index 7cb16bb1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/nms_mlu_kernel.mlu +++ /dev/null @@ -1,1161 +0,0 @@ -/************************************************************************* - * Copyright (C) 2021 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - *************************************************************************/ -#include "common_mlu_helper.hpp" - -#define NMS_SIZE (64) -#define COORD_DIM (4) -#define MEMORY_CORE (0x80) -#define INFO_NUM (5) // 5 means x1, x2, y1, y2 and score -#define REDUCE_NUM \ - (7) // score, x1, y1, x2, y2, max_index (reserve 2 num for half-type input) - -#define SIZE_NRAM_BUF (MAX_NRAM_SIZE + REM_FOR_STACK - 62 * 1024) -#define SIZE_SRAM_BUF (MAX_SRAM_SIZE) - -__nram__ int8_t nram_buffer[SIZE_NRAM_BUF]; -__mlu_shared__ int8_t sram_buffer[SIZE_SRAM_BUF]; - -__mlu_func__ void pvLock() { -#if __BANG_ARCH__ == 270 - if (coreId != MEMORY_CORE) { - __bang_lock(0, 0); - } -#endif -} - -__mlu_func__ void pvUnlock() { -#if __BANG_ARCH__ == 270 - if (coreId != MEMORY_CORE) { - __bang_unlock(0, 0); - } -#endif -} - -enum Addr { SRAM, GDRAM }; - -template -__mlu_func__ void nms_detection( - uint32_t *output_box_num, const int output_mode, const int input_layout, - OUT_DT *output_data, const Addr dst, IN_DT *input_data_score, - const IN_DT *input_data_box, const Addr src, IN_DT *buffer, - const int buffer_size, IN_DT *sram, const int core_limit, - const int input_box_num, const int input_stride, const int output_stride, - const int keepNum, const float thresh_iou, const float thresh_score, - const float offset, const int algo) { - // global value, it is stored in sram with a offset from the begin. - const int flag_offset_size = 28; - int32_t *loop_end_flag = (int32_t *)(sram + flag_offset_size); - loop_end_flag[0] = 0; - // score, x1, y1, x2, y2, inter_x1, inter_y1, inter_x2, inter_y2 - const int nms_buffer_count1 = 9; - // temp nram buffer to store selected target. 
- const int nram_save_limit_count = 256; - float div_thresh_iou = 1.0 / thresh_iou; - - // input data ptr - IN_DT *input_score_ptr; - const IN_DT *input_x1_ptr; - const IN_DT *input_y1_ptr; - const IN_DT *input_x2_ptr; - const IN_DT *input_y2_ptr; - input_score_ptr = input_data_score; - input_x1_ptr = input_data_box; - if (input_layout == 0) { - // [boxes_num, 4] - input_y1_ptr = input_x1_ptr + 1; - input_x2_ptr = input_x1_ptr + 2; - input_y2_ptr = input_x1_ptr + 3; - } else if (input_layout == 1) { - // [4, boxes_num] - input_y1_ptr = input_x1_ptr + input_stride; - input_x2_ptr = input_y1_ptr + input_stride; - input_y2_ptr = input_x2_ptr + input_stride; - } - - // nram data ptr - IN_DT *x1; - IN_DT *y1; - IN_DT *x2; - IN_DT *y2; - IN_DT *score; - IN_DT *inter_x1; - IN_DT *inter_y1; - IN_DT *inter_x2; - IN_DT *inter_y2; - IN_DT *max_box; // the max score, x1, y1, x2, y2 - IN_DT *x1_mask; - IN_DT *y1_mask; - IN_DT *x2_mask; - IN_DT *y2_mask; - OUT_DT *nram_save; - - int limit = 0; // find limit when GDRAM or SRAM - int len_core = 0; // the length deal by every core - int max_seg_pad = 0; // the max length every repeat - int repeat = 0; - int remain = 0; - int remain_pad = 0; - int input_offset = 0; // offset of input_data for current core - int nram_save_count = 0; - // mask for collect x1, y1, x2, y2. each mask has 128 elements - const int mask_size = 128; - const int total_mask_size = 512; - - if (output_mode == 0) { - limit = (buffer_size - 128 /*for max_box*/ * sizeof(IN_DT) - - nram_save_limit_count * sizeof(OUT_DT) - - total_mask_size * sizeof(IN_DT)) / - (nms_buffer_count1 * sizeof(IN_DT)); - } else { - limit = (buffer_size - 128 /*for max_box*/ * sizeof(IN_DT) - - nram_save_limit_count * INFO_NUM * sizeof(OUT_DT) - - total_mask_size * sizeof(IN_DT)) / - (nms_buffer_count1 * sizeof(IN_DT)); - } - - if (core_limit == 1) { - len_core = input_box_num; - input_offset = 0; - } else { - int avg_core = input_box_num / core_limit; - int rem = input_box_num % core_limit; - len_core = avg_core + (taskId < rem ? 1 : 0); - input_offset = avg_core * taskId + (taskId <= rem ? 
taskId : rem); - } - max_seg_pad = PAD_DOWN(limit, NMS_SIZE); - repeat = len_core / max_seg_pad; - remain = len_core % max_seg_pad; - remain_pad = PAD_UP(remain, NMS_SIZE); - - // if datatype is half, we should convert it to float when compute the IoU - int max_seg_iou_compute = - PAD_DOWN(max_seg_pad / (sizeof(float) / sizeof(IN_DT)), NMS_SIZE); - int repeat_iou_compute = len_core / max_seg_iou_compute; - int remain_iou_compute = len_core % max_seg_iou_compute; - int remain_pad_iou_compute = PAD_UP(remain_iou_compute, NMS_SIZE); - // initial the address point - score = buffer; - x1 = score + max_seg_pad; - y1 = x1 + max_seg_pad; - x2 = y1 + max_seg_pad; - y2 = x2 + max_seg_pad; - inter_x1 = y2 + max_seg_pad; - inter_y1 = inter_x1 + max_seg_pad; - inter_x2 = inter_y1 + max_seg_pad; - inter_y2 = inter_x2 + max_seg_pad; - x1_mask = inter_y2 + max_seg_pad; - y1_mask = x1_mask + mask_size; - x2_mask = y1_mask + mask_size; - y2_mask = x2_mask + mask_size; - max_box = y2_mask + mask_size; // the max score, x1, y1, x2, y2 - // offset two line from max_box - nram_save = (OUT_DT *)((char *)max_box + NFU_ALIGN_SIZE); - - // set mask for __bang_collect instruction - if (input_layout == 0) { - __nramset((IN_DT *)x1_mask, total_mask_size, (IN_DT)0); - for (int idx = 0; idx < mask_size; idx++) { - int index = (idx % COORD_DIM) * mask_size + idx; - x1_mask[index] = (IN_DT)1.0; - } - } - - for (int keep = 0; keep < keepNum; keep++) { // loop until the max_score <= 0 - if (core_limit != 1) { - __sync_cluster(); // sync before current loop - } - - /******find max start******/ - int max_index = 0; // the max score index - int global_max_index = 0; // for U1 - float max_area = 0; // the max score area - max_box[0] = 0; // init 0 - - for (int i = 0; i <= repeat; i++) { - if (i == repeat && remain == 0) { - break; - } - int seg_len = 0; // the length every nms compute - int cpy_len = 0; // the length every nms memcpy - i == repeat ? seg_len = remain_pad : seg_len = max_seg_pad; - // check seg_len exceeds the limit of fp16 or not. 65536 is the largest - // num that half data type could express. - if (sizeof(IN_DT) == sizeof(half) && seg_len > 65536) { - // seg length exceeds the max num for fp16 datatype! - return; - } - i == repeat ? 
cpy_len = remain : cpy_len = max_seg_pad; - /******nms load start******/ - mluMemcpyDirection_t load_dir = SRAM2NRAM; - if (src == SRAM) { - load_dir = SRAM2NRAM; - } else { - load_dir = GDRAM2NRAM; - } - __nramset(score, seg_len, (IN_DT)0); - __memcpy(score, input_score_ptr + input_offset + i * max_seg_pad, - cpy_len * sizeof(IN_DT), load_dir, cpy_len * sizeof(IN_DT), - cpy_len * sizeof(IN_DT), 0); - - /******nms load end******/ - - __bang_max(inter_x1, score, seg_len); - if (inter_x1[0] > max_box[0]) { - max_box[0] = inter_x1[0]; - - if (sizeof(IN_DT) == sizeof(half)) { - max_index = ((uint16_t *)inter_x1)[1] + input_offset + - i * max_seg_pad; // offset start from head of input_data - } else if (sizeof(IN_DT) == sizeof(float)) { - max_index = ((uint32_t *)inter_x1)[1] + input_offset + - i * max_seg_pad; // offset start from head of input_data - } - } - } // for repeat - - int stride = 1; - if (input_layout == 0) { - stride = input_stride; - } else if (input_layout == 1) { - stride = 1; - } - - if (core_limit == 1) { - max_box[1] = input_x1_ptr[max_index * stride]; - max_box[2] = input_y1_ptr[max_index * stride]; - max_box[3] = input_x2_ptr[max_index * stride]; - max_box[4] = input_y2_ptr[max_index * stride]; - if (algo == 0 || offset == 0.0) { - max_area = ((float)max_box[3] - (float)max_box[1]) * - ((float)max_box[4] - (float)max_box[2]); - } else { - max_area = ((float)max_box[3] - (float)max_box[1] + offset) * - ((float)max_box[4] - (float)max_box[2] + offset); - } - input_score_ptr[max_index] = 0; - global_max_index = max_index; - ((uint32_t *)(max_box + INFO_NUM))[0] = max_index; - } else if (core_limit == 4) { - // find the max with sram - // the max box's x1, y1, x2, y2 on every core - if (coreId != MEMORY_CORE) { - max_box[1] = input_x1_ptr[max_index * stride]; - max_box[2] = input_y1_ptr[max_index * stride]; - max_box[3] = input_x2_ptr[max_index * stride]; - max_box[4] = input_y2_ptr[max_index * stride]; - } - ((uint32_t *)(max_box + INFO_NUM))[0] = max_index; - // copy every core's box info to sram, form: score---x1---y1---x2---y2--- - for (int i = 0; i < INFO_NUM; i++) { - __memcpy(sram + i * core_limit + taskId, max_box + i, 1 * sizeof(IN_DT), - NRAM2SRAM); - } - // copy every core's max_index to sram, use 2 half to store max_index - __memcpy(sram + INFO_NUM * core_limit + taskId * 2, max_box + INFO_NUM, - sizeof(uint32_t), - NRAM2SRAM); // int32_t datatype - __sync_cluster(); - - // copy score from sram to nram and find the max - __nramset(inter_x1, NMS_SIZE, (IN_DT)0); - __memcpy(inter_x1, sram, core_limit * sizeof(IN_DT), SRAM2NRAM); - __bang_max(max_box, inter_x1, NMS_SIZE); - int max_core = 0; - if (sizeof(IN_DT) == sizeof(half)) { - max_core = ((uint16_t *)max_box)[1]; - } else if (sizeof(IN_DT) == sizeof(float)) { - max_core = ((uint32_t *)max_box)[1]; - } - - // copy the max box from SRAM to NRAM - __memcpy(max_box + 1, sram + 1 * core_limit + max_core, 1 * sizeof(IN_DT), - SRAM2NRAM); // x1 - __memcpy(max_box + 2, sram + 2 * core_limit + max_core, 1 * sizeof(IN_DT), - SRAM2NRAM); // y1 - __memcpy(max_box + 3, sram + 3 * core_limit + max_core, 1 * sizeof(IN_DT), - SRAM2NRAM); // x2 - __memcpy(max_box + 4, sram + 4 * core_limit + max_core, 1 * sizeof(IN_DT), - SRAM2NRAM); // y2 - __memcpy(max_box + 5, sram + 5 * core_limit + 2 * max_core, - sizeof(uint32_t), SRAM2NRAM); - if (algo == 0 || offset == 0.0) { - max_area = ((float)max_box[3] - (float)max_box[1]) * - ((float)max_box[4] - (float)max_box[2]); - } else { - max_area = ((float)max_box[3] - (float)max_box[1] + 
offset) * - ((float)max_box[4] - (float)max_box[2] + offset); - } - global_max_index = ((uint32_t *)(max_box + INFO_NUM))[0]; - input_score_ptr[global_max_index] = 0; - } - // by now, we get: max_score|max_index|max_box|max_area - /******find max end******/ - - /******nms store start******/ - // store to nram - if (float(max_box[0]) > thresh_score) { - OUT_DT *save_ptr; - int save_offset = 0; - int save_str_num = 0; - save_ptr = nram_save; - save_offset = nram_save_count; - save_str_num = nram_save_limit_count; - if (coreId == 0) { - if (output_mode == 0) { // index1, index2, ... - __memcpy(save_ptr + save_offset, (uint32_t *)(max_box + INFO_NUM), - 1 * sizeof(uint32_t), NRAM2NRAM, 1 * sizeof(uint32_t), - 1 * sizeof(uint32_t), 0); - } else if (output_mode == 1) { // score, x1, y1, x2, y2 - __memcpy(save_ptr + save_offset * INFO_NUM, max_box, - INFO_NUM * sizeof(IN_DT), NRAM2NRAM, - INFO_NUM * sizeof(IN_DT), INFO_NUM * sizeof(IN_DT), 0); - } else if (output_mode == 2) { // score---, x1---, y1---, x2---, y2--- - __memcpy(save_ptr + save_offset, max_box, 1 * sizeof(IN_DT), - NRAM2NRAM, save_str_num * sizeof(IN_DT), 1 * sizeof(IN_DT), - 4); - } - } - nram_save_count++; - (*output_box_num)++; - } - - // store to sram/gdram - if (*output_box_num != 0) { - mluMemcpyDirection_t store_dir = NRAM2GDRAM; - if (dst == SRAM) { - store_dir = NRAM2SRAM; - } else { // dst == GDRAM - store_dir = NRAM2GDRAM; - } - if ((nram_save_count == nram_save_limit_count) || - (float(max_box[0]) <= thresh_score) || keep == keepNum - 1) { - if (nram_save_count != 0) { - if (coreId == 0) { - if (output_mode == 0) { // index1, index2, ... - pvLock(); - __memcpy(output_data, nram_save, - nram_save_count * sizeof(uint32_t), store_dir); - pvUnlock(); - output_data += nram_save_count; - } else if (output_mode == 1) { // score, x1, y1, x2, y2 - pvLock(); - __memcpy(output_data, nram_save, - nram_save_count * INFO_NUM * sizeof(IN_DT), store_dir); - pvUnlock(); - output_data += nram_save_count * INFO_NUM; - } else if (output_mode == - 2) { // score---, x1---, y1---, x2---, y2--- - pvLock(); - __memcpy(output_data, nram_save, nram_save_count * sizeof(IN_DT), - store_dir, output_stride * sizeof(IN_DT), - nram_save_limit_count * sizeof(IN_DT), 4); - pvUnlock(); - output_data += nram_save_count; - } - nram_save_count = 0; - } - } - } // if move data nram->sram/gdram - } // if dst - - // if the max score <= 0, end - if (core_limit == 1) { - if (float(max_box[0]) <= thresh_score) { - break; - } - } else { - if (float(max_box[0]) <= thresh_score) { - if (coreId == 0) { - loop_end_flag[0] = 1; - } - } - __sync_cluster(); - if (loop_end_flag[0] == 1) { - break; - } - } - /******nms store end******/ - - // To solve half data accuracy, we convert half to float to calculate IoU. - for (int i = 0; i <= repeat_iou_compute; i++) { - if (i == repeat_iou_compute && remain_iou_compute == 0) { - break; - } - int seg_len = 0; // the length every nms compute - int cpy_len = 0; // the length every nms memcpy - i == repeat_iou_compute ? seg_len = remain_pad_iou_compute - : seg_len = max_seg_iou_compute; - i == repeat_iou_compute ? 
cpy_len = remain_iou_compute - : cpy_len = max_seg_iou_compute; - - /******nms load start******/ - mluMemcpyDirection_t load_dir = SRAM2NRAM; - if (src == SRAM) { - load_dir = SRAM2NRAM; - } else { - load_dir = GDRAM2NRAM; - } - - __nramset((float *)score, seg_len, 0.0f); - int dt_offset = 0; - if (sizeof(IN_DT) == sizeof(float)) { - __memcpy(score, input_score_ptr + input_offset + i * max_seg_pad, - cpy_len * sizeof(IN_DT), load_dir, cpy_len * sizeof(IN_DT), - cpy_len * sizeof(IN_DT), 0); - dt_offset = 0; - } else if (sizeof(IN_DT) == sizeof(half)) { - __nramset(x1, seg_len, half(0)); - __memcpy(x1, input_score_ptr + input_offset + i * max_seg_iou_compute, - cpy_len * sizeof(IN_DT), load_dir, cpy_len * sizeof(IN_DT), - cpy_len * sizeof(IN_DT), 0); - __bang_half2float((float *)score, (half *)x1, seg_len); - dt_offset = max_seg_iou_compute; - } - - if (input_layout == 0) { - // the following number 4 means x1, y1, x2, y2 - __memcpy( - inter_x1, - input_x1_ptr + (input_offset + i * max_seg_iou_compute) * COORD_DIM, - cpy_len * COORD_DIM * sizeof(IN_DT), load_dir, - cpy_len * COORD_DIM * sizeof(IN_DT), - cpy_len * COORD_DIM * sizeof(IN_DT), 0); - // here use collect instruction to transpose the [n, 4] shape into [4, - // n] shape to avoid - // discrete memory accessing. - for (int c_i = 0; c_i < COORD_DIM * seg_len / mask_size; c_i++) { - // the following number 32 means 32 elements will be selected out by - // once operation - __bang_collect(x1 + dt_offset + c_i * 32, inter_x1 + c_i * mask_size, - x1_mask, mask_size); - __bang_collect(y1 + dt_offset + c_i * 32, inter_x1 + c_i * mask_size, - y1_mask, mask_size); - __bang_collect(x2 + dt_offset + c_i * 32, inter_x1 + c_i * mask_size, - x2_mask, mask_size); - __bang_collect(y2 + dt_offset + c_i * 32, inter_x1 + c_i * mask_size, - y2_mask, mask_size); - } - } else if (input_layout == 1) { - __memcpy(x1 + dt_offset, - input_x1_ptr + input_offset + i * max_seg_iou_compute, - cpy_len * sizeof(IN_DT), load_dir, cpy_len * sizeof(IN_DT), - cpy_len * sizeof(IN_DT), 0); - __memcpy(y1 + dt_offset, - input_y1_ptr + input_offset + i * max_seg_iou_compute, - cpy_len * sizeof(IN_DT), load_dir, cpy_len * sizeof(IN_DT), - cpy_len * sizeof(IN_DT), 0); - __memcpy(x2 + dt_offset, - input_x2_ptr + input_offset + i * max_seg_iou_compute, - cpy_len * sizeof(IN_DT), load_dir, cpy_len * sizeof(IN_DT), - cpy_len * sizeof(IN_DT), 0); - __memcpy(y2 + dt_offset, - input_y2_ptr + input_offset + i * max_seg_iou_compute, - cpy_len * sizeof(IN_DT), load_dir, cpy_len * sizeof(IN_DT), - cpy_len * sizeof(IN_DT), 0); - } - /******nms load end******/ - - /******nms compute start******/ - if (sizeof(IN_DT) == sizeof(half)) { - __bang_half2float((float *)x1, (half *)x1 + max_seg_iou_compute, - seg_len); - __bang_half2float((float *)y1, (half *)y1 + max_seg_iou_compute, - seg_len); - __bang_half2float((float *)x2, (half *)x2 + max_seg_iou_compute, - seg_len); - __bang_half2float((float *)y2, (half *)y2 + max_seg_iou_compute, - seg_len); - } - // 1、 compute IOU - // get the area_I - __nramset((float *)inter_y1, seg_len, float(max_box[1])); // max_x1 - __bang_maxequal((float *)inter_x1, (float *)x1, (float *)inter_y1, - seg_len); // inter_x1 - __nramset((float *)inter_y2, seg_len, float(max_box[3])); // max_x2 - __bang_minequal((float *)inter_x2, (float *)x2, (float *)inter_y2, - seg_len); // inter_x2 - __bang_sub((float *)inter_x1, (float *)inter_x2, (float *)inter_x1, - seg_len); - if (algo == 1 && offset != 0.0) { - __bang_add_const((float *)inter_x1, (float *)inter_x1, offset, 
seg_len); - } - __bang_active_relu((float *)inter_x1, (float *)inter_x1, - seg_len); // inter_w - __nramset((float *)inter_x2, seg_len, float(max_box[2])); // max_y1 - __bang_maxequal((float *)inter_y1, (float *)y1, (float *)inter_x2, - seg_len); // inter_y1 - __nramset((float *)inter_x2, seg_len, float(max_box[4])); // max_y2 - __bang_minequal((float *)inter_y2, (float *)y2, (float *)inter_x2, - seg_len); // inter_y2 - __bang_sub((float *)inter_y1, (float *)inter_y2, (float *)inter_y1, - seg_len); - if (algo == 1 && offset != 0.0) { - __bang_add_const((float *)inter_y1, (float *)inter_y1, offset, seg_len); - } - __bang_active_relu((float *)inter_y1, (float *)inter_y1, - seg_len); // inter_h - __bang_mul((float *)inter_x1, (float *)inter_x1, (float *)inter_y1, - seg_len); // area_I - // get the area of input_box: area = (x2 - x1) * (y2 - y1); - __bang_sub((float *)inter_y1, (float *)x2, (float *)x1, seg_len); - __bang_sub((float *)inter_y2, (float *)y2, (float *)y1, seg_len); - if (algo == 1 && offset != 0.0) { - __bang_add_const((float *)inter_y1, (float *)inter_y1, offset, seg_len); - __bang_add_const((float *)inter_y2, (float *)inter_y2, offset, seg_len); - } - __bang_mul((float *)inter_x2, (float *)inter_y1, (float *)inter_y2, - seg_len); // area - // get the area_U: area + max_area - area_I - __bang_add_const((float *)inter_x2, (float *)inter_x2, float(max_area), - seg_len); - __bang_sub((float *)inter_x2, (float *)inter_x2, (float *)inter_x1, - seg_len); // area_U - // 2、 select the box - // if IOU greater than thres, set the score to zero, abort it: area_U > - // area_I * (1 / thresh)? - if (thresh_iou > 0.0) { - __bang_mul_const((float *)inter_x1, (float *)inter_x1, div_thresh_iou, - seg_len); - } else { - __bang_mul_const((float *)inter_x2, (float *)inter_x2, thresh_iou, - seg_len); - } - __bang_ge((float *)inter_x1, (float *)inter_x2, (float *)inter_x1, - seg_len); - __bang_mul((float *)score, (float *)score, (float *)inter_x1, seg_len); - /******nms compute end******/ - - // update the score - mluMemcpyDirection_t update_dir = NRAM2SRAM; - if (dst == SRAM) { - update_dir = NRAM2SRAM; - } else { - update_dir = NRAM2GDRAM; - } - if (sizeof(IN_DT) == sizeof(half)) { - __bang_float2half_rd((half *)score, (float *)score, seg_len); - } - pvLock(); - __memcpy(input_score_ptr + input_offset + i * max_seg_iou_compute, score, - cpy_len * sizeof(IN_DT), update_dir, cpy_len * sizeof(IN_DT), - cpy_len * sizeof(IN_DT), 0); - pvUnlock(); - } // for repeat - } // for keepNum -} - -__mlu_global__ void MLUUnion1KernelNMS( - const void *input_boxes, const void *input_confidence, - const int input_num_boxes, const int input_stride, - const int max_output_size, const float iou_threshold, - const float confidence_threshold, const int mode, const int input_layout, - void *workspace, void *result_num, void *output, - const cnrtDataType_t data_type_input, const float offset, const int algo) { - if (data_type_input == CNRT_FLOAT16) { - __memcpy(workspace, input_confidence, input_num_boxes * sizeof(half), - GDRAM2GDRAM); - } else if (data_type_input == CNRT_FLOAT32) { - __memcpy(workspace, input_confidence, input_num_boxes * sizeof(float), - GDRAM2GDRAM); - } else { - } - - int output_stride = max_output_size; - uint32_t result_box_num = 0; - if (mode == 0) { - uint32_t *out_data = (uint32_t *)output; - switch (data_type_input) { - default: { return; } - case CNRT_FLOAT16: { - half *boxes_data = (half *)input_boxes; - half *confi_data = (half *)workspace; - half *buffer = (half *)nram_buffer; - half *sram 
= (half *)sram_buffer; - - nms_detection(&result_box_num, mode, input_layout, out_data, GDRAM, - confi_data, boxes_data, GDRAM, buffer, SIZE_NRAM_BUF, - sram, taskDim, input_num_boxes, input_stride, - output_stride, max_output_size, iou_threshold, - confidence_threshold, offset, algo); - ((uint32_t *)result_num)[0] = result_box_num; - }; break; - case CNRT_FLOAT32: { - float *boxes_data = (float *)input_boxes; - float *confi_data = (float *)workspace; - float *buffer = (float *)nram_buffer; - float *sram = (float *)sram_buffer; - - nms_detection(&result_box_num, mode, input_layout, out_data, GDRAM, - confi_data, boxes_data, GDRAM, buffer, SIZE_NRAM_BUF, - sram, taskDim, input_num_boxes, input_stride, - output_stride, max_output_size, iou_threshold, - confidence_threshold, offset, algo); - ((uint32_t *)result_num)[0] = result_box_num; - }; break; - } - } else { - switch (data_type_input) { - default: { return; } - case CNRT_FLOAT16: { - half *boxes_data = (half *)input_boxes; - half *confi_data = (half *)workspace; - half *out_data = (half *)output; - half *buffer = (half *)nram_buffer; - half *sram = (half *)sram_buffer; - - nms_detection(&result_box_num, mode, input_layout, out_data, GDRAM, - confi_data, boxes_data, GDRAM, buffer, SIZE_NRAM_BUF, - sram, taskDim, input_num_boxes, input_stride, - output_stride, max_output_size, iou_threshold, - confidence_threshold, offset, algo); - ((uint32_t *)result_num)[0] = result_box_num; - }; break; - case CNRT_FLOAT32: { - float *boxes_data = (float *)input_boxes; - float *confi_data = (float *)workspace; - float *out_data = (float *)output; - float *buffer = (float *)nram_buffer; - float *sram = (float *)sram_buffer; - - nms_detection(&result_box_num, mode, input_layout, out_data, GDRAM, - confi_data, boxes_data, GDRAM, buffer, SIZE_NRAM_BUF, - sram, taskDim, input_num_boxes, input_stride, - output_stride, max_output_size, iou_threshold, - confidence_threshold, offset, algo); - ((uint32_t *)result_num)[0] = result_box_num; - }; break; - } - } -} - -template -__mlu_func__ void nms_detection_ux( - int32_t *loop_end_flag, uint32_t &output_box_num, OUT_DT *output_dram, - IN_DT *score_data, const IN_DT *boxes_data, const Addr input_ram, - const int input_layout, const int input_num_boxes, const int input_stride, - const int max_output_size, const float thresh_iou, const float thresh_score, - const float offset, const int output_mode, const int algo) { - loop_end_flag[0] = 0; - IN_DT *sram = (IN_DT *)sram_buffer; - - // score, x1, y1, x2, y2, inter_x1, inter_y1, inter_x2, inter_y2 - int nms_buffer_count1 = 9; - // temp nram buffer to store selected target. 
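For orientation, the deleted Union1/UnionX kernels around this point both implement ordinary greedy NMS, just split across NRAM segments, cores, and clusters: repeatedly find the highest-scoring box, emit it, zero its score, then zero the score of every box whose IoU with it exceeds the threshold (compared as `inter > thresh * union` to avoid the division). Below is a minimal host-side sketch of that logic, assuming the kernel's `offset`/`algo == 1` area convention; `Box` and `nms_reference` are illustrative names, not mmcv API, and the sketch ignores the fp16 conversion and multi-core reduction details.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative types/names only; not part of the kernel above.
struct Box { float x1, y1, x2, y2, score; };

std::vector<int32_t> nms_reference(std::vector<Box> boxes, int max_output_size,
                                   float iou_threshold, float offset) {
  std::vector<int32_t> keep;
  while (static_cast<int>(keep.size()) < max_output_size) {
    // Find the box with the highest remaining score (the kernel does this with
    // __bang_max over NRAM segments plus a cross-cluster reduction in SRAM).
    int best = -1;
    for (int i = 0; i < static_cast<int>(boxes.size()); ++i) {
      if (boxes[i].score > 0.0f &&
          (best < 0 || boxes[i].score > boxes[best].score)) {
        best = i;
      }
    }
    if (best < 0) break;  // every remaining score has been suppressed
    keep.push_back(best);
    const Box m = boxes[best];
    const float m_area = (m.x2 - m.x1 + offset) * (m.y2 - m.y1 + offset);
    boxes[best].score = 0.0f;  // drop the kept box from later iterations

    // Suppress boxes whose IoU with the kept box exceeds the threshold.
    // Like the kernel, compare inter > thresh * union instead of dividing.
    for (Box &b : boxes) {
      if (b.score <= 0.0f) continue;
      const float iw = std::max(
          0.0f, std::min(b.x2, m.x2) - std::max(b.x1, m.x1) + offset);
      const float ih = std::max(
          0.0f, std::min(b.y2, m.y2) - std::max(b.y1, m.y1) + offset);
      const float inter = iw * ih;
      const float uni =
          (b.x2 - b.x1 + offset) * (b.y2 - b.y1 + offset) + m_area - inter;
      if (inter > iou_threshold * uni) b.score = 0.0f;
    }
  }
  return keep;
}
```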
- int nram_save_limit_count = 256; - float div_thresh_iou = 1.0 / thresh_iou; - - // input data ptr - IN_DT *input_score_ptr; - const IN_DT *input_x1_ptr; - const IN_DT *input_y1_ptr; - const IN_DT *input_x2_ptr; - const IN_DT *input_y2_ptr; - input_score_ptr = score_data; - input_x1_ptr = boxes_data; - input_y1_ptr = input_x1_ptr + input_stride; - input_x2_ptr = input_y1_ptr + input_stride; - input_y2_ptr = input_x2_ptr + input_stride; - - int limit = 0; // find limit when GDRAM or SRAM - int max_seg_pad = 0; // the max length every repeat - int repeat = 0; - int remain = 0; - int remain_pad = 0; - int nram_save_count = 0; - - if (output_mode == 0) { - limit = (SIZE_NRAM_BUF - NFU_ALIGN_SIZE /*for max_box*/ * sizeof(IN_DT) - - nram_save_limit_count * sizeof(OUT_DT)) / - (nms_buffer_count1 * sizeof(IN_DT)); - } else { - limit = (SIZE_NRAM_BUF - NFU_ALIGN_SIZE /*for max_box*/ * sizeof(IN_DT) - - nram_save_limit_count * INFO_NUM * sizeof(OUT_DT)) / - (nms_buffer_count1 * sizeof(IN_DT)); - } - - // data split - int avg_cluster = input_num_boxes / clusterDim; - int rem_cluster = input_num_boxes % clusterDim; - int len_cluster = avg_cluster + (clusterId < rem_cluster ? 1 : 0); - int cluster_offset = avg_cluster * clusterId + - (clusterId <= rem_cluster ? clusterId : rem_cluster); - - int avg_core = len_cluster / coreDim; - int rem_core = len_cluster % coreDim; - int len_core = avg_core + (coreId < rem_core ? 1 : 0); - int core_offset = - avg_core * coreId + (coreId <= rem_core ? coreId : rem_core); - int input_offset = cluster_offset + core_offset; - - max_seg_pad = PAD_DOWN(limit, NMS_SIZE); - - // core 0 of each cluster calculate the max score index - int max_index_avg_core = input_num_boxes / clusterDim; - int max_index_rem_core = input_num_boxes % clusterDim; - int max_index_len_core = - max_index_avg_core + (clusterId < max_index_rem_core ? 1 : 0); - int max_index_input_offset = - max_index_avg_core * clusterId + - (clusterId <= max_index_rem_core ? clusterId : max_index_rem_core); - repeat = max_index_len_core / max_seg_pad; - remain = max_index_len_core % max_seg_pad; - remain_pad = PAD_UP(remain, NMS_SIZE); - - // if datatype is fp16, we should cvt to fp32 when compute iou - int max_seg_iou_compute = - PAD_DOWN(max_seg_pad / (sizeof(float) / sizeof(IN_DT)), NMS_SIZE); - int repeat_iou_compute = len_core / max_seg_iou_compute; - int remain_iou_compute = len_core % max_seg_iou_compute; - int remain_pad_iou_compute = PAD_UP(remain_iou_compute, NMS_SIZE); - - // init the nram ptr - IN_DT *score = (IN_DT *)nram_buffer; - IN_DT *x1 = score + max_seg_pad; - IN_DT *y1 = x1 + max_seg_pad; - IN_DT *x2 = y1 + max_seg_pad; - IN_DT *y2 = x2 + max_seg_pad; - IN_DT *inter_x1 = y2 + max_seg_pad; - IN_DT *inter_y1 = inter_x1 + max_seg_pad; - IN_DT *inter_x2 = inter_y1 + max_seg_pad; - IN_DT *inter_y2 = inter_x2 + max_seg_pad; - IN_DT *max_box = inter_y2 + max_seg_pad; // the max score, x1, y1, x2, y2 - OUT_DT *nram_save = - (OUT_DT *)((char *)max_box + - NFU_ALIGN_SIZE); // offset two line from max_box - - mluMemcpyDirection_t input_load_dir = SRAM2NRAM; - mluMemcpyDirection_t input_store_dir = NRAM2SRAM; - input_load_dir = (input_ram == SRAM) ? SRAM2NRAM : GDRAM2NRAM; - input_store_dir = (input_ram == SRAM) ? 
NRAM2SRAM : NRAM2GDRAM; - - for (int keep = 0; keep < max_output_size; - keep++) { // loop until the max_score <= 0 - __sync_all(); - - /******FIND MAX START******/ - int max_index = 0; - int global_max_index = 0; // for Ux - float max_area = 0; // the max socre area - max_box[0] = 0; // init 0 - - if (coreId == 0) { - for (int i = 0; i <= repeat; i++) { - if (i == repeat && remain == 0) { - break; - } - - int seg_len = (i == repeat) - ? remain_pad - : max_seg_pad; // the length every nms compute - // check seg_len exceeds the limit of fp16 or not. 65536 is the largest - // num - // that fp16 could express. - if (sizeof(IN_DT) == sizeof(half) && seg_len > 65536) { - return; - } - int cpy_len = (i == repeat) - ? remain - : max_seg_pad; // the length every nms memcpy - - /******NMS LOAD START******/ - __bang_write_zero(score, seg_len); - __memcpy(score, - input_score_ptr + max_index_input_offset + i * max_seg_pad, - cpy_len * sizeof(IN_DT), input_load_dir, - cpy_len * sizeof(IN_DT), cpy_len * sizeof(IN_DT), 0); - - /******NMS LOAD END******/ - - __bang_max(inter_x1, score, seg_len); - if (inter_x1[0] > max_box[0]) { - max_box[0] = inter_x1[0]; - if (sizeof(IN_DT) == sizeof(half)) { - max_index = - ((uint16_t *)inter_x1)[1] + max_index_input_offset + - i * max_seg_pad; // offset start from head of input_data - } else if (sizeof(IN_DT) == sizeof(float)) { - max_index = - ((uint32_t *)inter_x1)[1] + max_index_input_offset + - i * max_seg_pad; // offset start from head of input_data - } - } - } // for repeat - - // the max box's x1, y1, x2, y2 on every cluster - max_box[1] = input_x1_ptr[max_index]; - max_box[2] = input_y1_ptr[max_index]; - max_box[3] = input_x2_ptr[max_index]; - max_box[4] = input_y2_ptr[max_index]; - ((uint32_t *)(max_box + 5))[0] = max_index; - // copy max box info to sram - __memcpy(sram, max_box, REDUCE_NUM * sizeof(IN_DT), NRAM2SRAM); - } - __sync_all(); - // copy all partial max to the sram of cluster 0 - if (clusterId != 0) { - __memcpy(sram + REDUCE_NUM * clusterId, sram, REDUCE_NUM * sizeof(IN_DT), - SRAM2SRAM, 0); - } - __sync_all(); - - // reduce between clusters to get the global max box - if (clusterId == 0) { - if (coreId == 0) { - __bang_write_zero(inter_x1, NMS_SIZE); - __memcpy(inter_x1, sram, sizeof(IN_DT), SRAM2NRAM, sizeof(IN_DT), - REDUCE_NUM * sizeof(IN_DT), clusterDim - 1); - __bang_max(max_box, inter_x1, NMS_SIZE); - int max_cluster = (sizeof(IN_DT) == sizeof(half)) - ? 
((uint16_t *)max_box)[1] - : ((uint32_t *)max_box)[1]; - __memcpy(max_box, sram + max_cluster * REDUCE_NUM, - REDUCE_NUM * sizeof(IN_DT), SRAM2NRAM); - __memcpy(sram, max_box, REDUCE_NUM * sizeof(IN_DT), NRAM2SRAM); - } - __sync_cluster(); - if (coreId == 0x80 && clusterDim > 1) { - // broadcast global max box to each cluster's sram - for (int cluster_idx = 1; cluster_idx < clusterDim; ++cluster_idx) { - __memcpy(sram, sram, REDUCE_NUM * sizeof(IN_DT), SRAM2SRAM, - cluster_idx); - } - } - __sync_cluster(); - } - __sync_all(); - - // copy the global max box to max_box - __memcpy(max_box, sram, REDUCE_NUM * sizeof(IN_DT), SRAM2NRAM); - if (algo == 0 || offset == 0.0) { - max_area = ((float)max_box[3] - (float)max_box[1]) * - ((float)max_box[4] - (float)max_box[2]); - } else { - max_area = ((float)max_box[3] - (float)max_box[1] + offset) * - ((float)max_box[4] - (float)max_box[2] + offset); - } - global_max_index = ((uint32_t *)(max_box + 5))[0]; - if (coreId != 0x80) { - input_score_ptr[global_max_index] = 0; - } - // by now, we get: max_score|max_index|max_box|max_area - /******FIND MAX END******/ - - /******NMS STORE START******/ - // store to nram - if (float(max_box[0]) > thresh_score) { - OUT_DT *save_ptr; - int save_offset = 0; - int save_str_num = 0; - save_ptr = nram_save; - save_offset = nram_save_count; - save_str_num = nram_save_limit_count; - if (clusterId == 0 && coreId == 0) { - if (output_mode == 0) { // index1, index2, ... - save_ptr[save_offset] = ((uint32_t *)(max_box + INFO_NUM))[0]; - } else if (output_mode == 1) { // score, x1, y1, x2, y2 - __memcpy(save_ptr + save_offset * INFO_NUM, max_box, - INFO_NUM * sizeof(IN_DT), NRAM2NRAM, - INFO_NUM * sizeof(IN_DT), INFO_NUM * sizeof(IN_DT), 0); - } else if (output_mode == 2) { // score---, x1---, y1---, x2---, y2--- - __memcpy(save_ptr + save_offset, max_box, 1 * sizeof(IN_DT), - NRAM2NRAM, save_str_num * sizeof(IN_DT), 1 * sizeof(IN_DT), - 4); - } - } - nram_save_count++; - output_box_num++; - } - - // store to sram/gdram - if (output_box_num != 0) { - if ((nram_save_count == nram_save_limit_count) || - (float(max_box[0]) <= thresh_score) || keep == max_output_size - 1) { - if (nram_save_count != 0) { - if (clusterId == 0 && coreId == 0) { - if (output_mode == 0) { // index1, index2, ... - pvLock(); - __memcpy(output_dram, nram_save, - nram_save_count * sizeof(uint32_t), NRAM2GDRAM); - pvUnlock(); - output_dram += nram_save_count; - } else if (output_mode == 1) { // score, x1, y1, x2, y2 - pvLock(); - __memcpy(output_dram, nram_save, - nram_save_count * INFO_NUM * sizeof(IN_DT), NRAM2GDRAM); - pvUnlock(); - output_dram += nram_save_count * INFO_NUM; - } else if (output_mode == - 2) { // score---, x1---, y1---, x2---, y2--- - pvLock(); - __memcpy(output_dram, nram_save, nram_save_count * sizeof(IN_DT), - NRAM2GDRAM, max_output_size * sizeof(IN_DT), - nram_save_limit_count * sizeof(IN_DT), 4); - pvUnlock(); - output_dram += nram_save_count; - } - nram_save_count = 0; - } - } - } // if move data nram->sram/gdram - } // if dst - - if (float(max_box[0]) <= thresh_score) { - if (clusterId == 0 && coreId == 0) { - loop_end_flag[0] = 1; // dram - } - } - __sync_all(); - if (loop_end_flag[0] == 1) { - break; - } - /******NMS STORE END******/ - - // To solve fp16 accuracy, we convert fp16 to fp32 to calculate IoU. - for (int i = 0; i <= repeat_iou_compute; i++) { - if (i == repeat_iou_compute && remain_iou_compute == 0) { - break; - } - int seg_len = (i == repeat_iou_compute) ? 
remain_pad_iou_compute - : max_seg_iou_compute; - int cpy_len = - (i == repeat_iou_compute) ? remain_iou_compute : max_seg_iou_compute; - - /******NMS LOAD START******/ - __nramset((float *)score, seg_len, 0.0f); - int dt_offset = 0; - if (sizeof(IN_DT) == sizeof(float)) { - __memcpy(score, input_score_ptr + input_offset + i * max_seg_pad, - cpy_len * sizeof(IN_DT), input_load_dir, - cpy_len * sizeof(IN_DT), cpy_len * sizeof(IN_DT), 0); - dt_offset = 0; - } else if (sizeof(IN_DT) == sizeof(half)) { - __nramset(x1, seg_len, half(0)); - __memcpy(x1, input_score_ptr + input_offset + i * max_seg_iou_compute, - cpy_len * sizeof(IN_DT), input_load_dir, - cpy_len * sizeof(IN_DT), cpy_len * sizeof(IN_DT), 0); - __bang_half2float((float *)score, (half *)x1, seg_len); - dt_offset = max_seg_iou_compute; - } - - __memcpy(x1 + dt_offset, - input_x1_ptr + input_offset + i * max_seg_iou_compute, - cpy_len * sizeof(IN_DT), input_load_dir, - max_seg_pad * sizeof(IN_DT), input_num_boxes * sizeof(IN_DT), 3); - /******NMS LOAD END******/ - - /******NMS COMPUTE START******/ - if (sizeof(IN_DT) == sizeof(half)) { - __bang_half2float((float *)x1, (half *)x1 + max_seg_iou_compute, - seg_len); - __bang_half2float((float *)y1, (half *)y1 + max_seg_iou_compute, - seg_len); - __bang_half2float((float *)x2, (half *)x2 + max_seg_iou_compute, - seg_len); - __bang_half2float((float *)y2, (half *)y2 + max_seg_iou_compute, - seg_len); - } - // 1、 compute IOU - // get the area_I - __nramset((float *)inter_y1, seg_len, float(max_box[1])); // max_x1 - __bang_maxequal((float *)inter_x1, (float *)x1, (float *)inter_y1, - seg_len); // inter_x1 - __nramset((float *)inter_y2, seg_len, float(max_box[3])); // max_x2 - __bang_minequal((float *)inter_x2, (float *)x2, (float *)inter_y2, - seg_len); // inter_x2 - __bang_sub((float *)inter_x1, (float *)inter_x2, (float *)inter_x1, - seg_len); - if (algo == 1 && offset != 0.0) { - __bang_add_const((float *)inter_x1, (float *)inter_x1, offset, seg_len); - } - __bang_active_relu((float *)inter_x1, (float *)inter_x1, - seg_len); // inter_w - __nramset((float *)inter_x2, seg_len, float(max_box[2])); // max_y1 - __bang_maxequal((float *)inter_y1, (float *)y1, (float *)inter_x2, - seg_len); // inter_y1 - __nramset((float *)inter_x2, seg_len, float(max_box[4])); // max_y2 - __bang_minequal((float *)inter_y2, (float *)y2, (float *)inter_x2, - seg_len); // inter_y2 - __bang_sub((float *)inter_y1, (float *)inter_y2, (float *)inter_y1, - seg_len); - if (algo == 1 && offset != 0.0) { - __bang_add_const((float *)inter_y1, (float *)inter_y1, offset, seg_len); - } - __bang_active_relu((float *)inter_y1, (float *)inter_y1, - seg_len); // inter_h - __bang_mul((float *)inter_x1, (float *)inter_x1, (float *)inter_y1, - seg_len); // area_I - // get the area of input_box: area = (x2 - x1) * (y2 - y1); - __bang_sub((float *)inter_y1, (float *)x2, (float *)x1, seg_len); - __bang_sub((float *)inter_y2, (float *)y2, (float *)y1, seg_len); - if (algo == 1 && offset != 0.0) { - __bang_add_const((float *)inter_y1, (float *)inter_y1, offset, seg_len); - __bang_add_const((float *)inter_y2, (float *)inter_y2, offset, seg_len); - } - __bang_mul((float *)inter_x2, (float *)inter_y1, (float *)inter_y2, - seg_len); // area - // get the area_U: area + max_area - area_I - __bang_add_const((float *)inter_x2, (float *)inter_x2, float(max_area), - seg_len); - __bang_sub((float *)inter_x2, (float *)inter_x2, (float *)inter_x1, - seg_len); // area_U - // 2、 select the box - // if IOU greater than thres, set the score to zero, 
abort it: area_U > - // area_I * (1 / thresh)? - if (thresh_iou > 0.0) { - __bang_mul_const((float *)inter_x1, (float *)inter_x1, div_thresh_iou, - seg_len); - } else { - __bang_mul_const((float *)inter_x2, (float *)inter_x2, thresh_iou, - seg_len); - } - __bang_ge((float *)inter_x1, (float *)inter_x2, (float *)inter_x1, - seg_len); - __bang_mul((float *)score, (float *)score, (float *)inter_x1, seg_len); - /******NMS COMPUTE END******/ - - if (sizeof(IN_DT) == 2) { - __bang_float2half_rd((half *)score, (float *)score, seg_len); - } - pvLock(); - __memcpy(input_score_ptr + input_offset + i * max_seg_iou_compute, score, - cpy_len * sizeof(IN_DT), input_store_dir, - cpy_len * sizeof(IN_DT), cpy_len * sizeof(IN_DT), 0); - pvUnlock(); - } // for repeat - } // for max_output_size -} - -__mlu_global__ void MLUUionXKernelNMS( - const void *input_boxes, const void *input_confidence, - const int input_num_boxes, const int input_layout, const int input_stride, - const int max_output_size, const float iou_threshold, - const float confidence_threshold, const float offset, - const cnrtDataType_t data_type_input, const int output_mode, const int algo, - void *workspace, void *result_num, void *output) { - int input_dwidth = (data_type_input == CNRT_FLOAT32) ? 4 : 2; - int32_t *loop_end_flag = - (int32_t *)((char *)workspace + - INFO_NUM * input_num_boxes * input_dwidth); - int reduce_sram_size = NFU_ALIGN_SIZE * REDUCE_NUM * input_dwidth; - int availbale_sram_size = SIZE_SRAM_BUF - reduce_sram_size; - - int cluster_score_size = input_num_boxes * input_dwidth; - int cluster_boxes_size = input_num_boxes * 4 * input_dwidth; - char *sram_score = (char *)sram_buffer + reduce_sram_size; - char *sram_boxes = - (char *)sram_buffer + reduce_sram_size + cluster_score_size; - Addr input_ram = GDRAM; - if ((cluster_score_size + cluster_boxes_size) < availbale_sram_size) { - input_ram = SRAM; - __memcpy(sram_score, input_confidence, cluster_score_size, GDRAM2SRAM); - __memcpy(sram_boxes, input_boxes, cluster_boxes_size, GDRAM2SRAM); - } else { - __memcpy(workspace, input_confidence, cluster_score_size, GDRAM2GDRAM); - } - __sync_cluster(); - uint32_t output_box_num = 0; - if (output_mode == 0) { - uint32_t *output_dram = (uint32_t *)output; - switch (data_type_input) { - default: { return; } - case CNRT_FLOAT16: { - half *score_data; - half *boxes_data; - score_data = - (input_ram == SRAM) ? (half *)sram_score : (half *)workspace; - boxes_data = - (input_ram == SRAM) ? (half *)sram_boxes : (half *)input_boxes; - nms_detection_ux(loop_end_flag, output_box_num, output_dram, score_data, - boxes_data, input_ram, input_layout, input_num_boxes, - input_stride, max_output_size, iou_threshold, - confidence_threshold, offset, output_mode, algo); - ((uint32_t *)result_num)[0] = output_box_num; - }; break; - case CNRT_FLOAT32: { - float *score_data; - float *boxes_data; - score_data = - (input_ram == SRAM) ? (float *)sram_score : (float *)workspace; - boxes_data = - (input_ram == SRAM) ? (float *)sram_boxes : (float *)input_boxes; - nms_detection_ux(loop_end_flag, output_box_num, output_dram, score_data, - boxes_data, input_ram, input_layout, input_num_boxes, - input_stride, max_output_size, iou_threshold, - confidence_threshold, offset, output_mode, algo); - ((uint32_t *)result_num)[0] = output_box_num; - }; break; - } - } else { - switch (data_type_input) { - default: { return; } - case CNRT_FLOAT16: { - half *output_dram = (half *)output; - half *score_data; - half *boxes_data; - score_data = - (input_ram == SRAM) ? 
(half *)sram_score : (half *)workspace; - boxes_data = - (input_ram == SRAM) ? (half *)sram_boxes : (half *)input_boxes; - nms_detection_ux(loop_end_flag, output_box_num, output_dram, score_data, - boxes_data, input_ram, input_layout, input_num_boxes, - input_stride, max_output_size, iou_threshold, - confidence_threshold, offset, output_mode, algo); - ((uint32_t *)result_num)[0] = output_box_num; - }; break; - case CNRT_FLOAT32: { - float *output_dram = (float *)output; - float *score_data; - float *boxes_data; - score_data = - (input_ram == SRAM) ? (float *)sram_score : (float *)workspace; - boxes_data = - (input_ram == SRAM) ? (float *)sram_boxes : (float *)input_boxes; - nms_detection_ux(loop_end_flag, output_box_num, output_dram, score_data, - boxes_data, input_ram, input_layout, input_num_boxes, - input_stride, max_output_size, iou_threshold, - confidence_threshold, offset, output_mode, algo); - ((uint32_t *)result_num)[0] = output_box_num; - }; break; - } - } -} - -void KernelNms(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const cnrtDataType_t data_type_input, const void *boxes_ptr, - const void *scores_ptr, const int input_num_boxes, - const int input_stride, const int max_output_boxes, - const float iou_threshold, const float offset, - void *workspace_ptr, void *output_size_ptr, void *output_ptr) { - switch (k_type) { - default: { return; } - case CNRT_FUNC_TYPE_BLOCK: - case CNRT_FUNC_TYPE_UNION1: { - MLUUnion1KernelNMS<<>>( - boxes_ptr, scores_ptr, input_num_boxes, input_stride, - max_output_boxes, iou_threshold, /*confidence_threshold=*/0.0, - /*output_mode=*/0, - /*input_layout=*/1, workspace_ptr, output_size_ptr, output_ptr, - data_type_input, offset, /*algo=*/1); - }; break; - case CNRT_FUNC_TYPE_UNION2: - case CNRT_FUNC_TYPE_UNION4: - case CNRT_FUNC_TYPE_UNION8: - case CNRT_FUNC_TYPE_UNION16: { - MLUUionXKernelNMS<<>>( - boxes_ptr, scores_ptr, input_num_boxes, /*input_layout=*/1, - input_stride, max_output_boxes, iou_threshold, - /*confidence_threshold=*/0.0, offset, data_type_input, - /*output_mode=*/0, /*algo=*/1, workspace_ptr, output_size_ptr, - output_ptr); - }; break; - } -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/psamask_mlu_kernel.mlu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/psamask_mlu_kernel.mlu deleted file mode 100644 index 13b4af19..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/psamask_mlu_kernel.mlu +++ /dev/null @@ -1,615 +0,0 @@ -/************************************************************************* - * Copyright (C) 2022 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- *************************************************************************/ -#include "common_mlu_helper.hpp" -#include "psamask_utils.hpp" - -#define COMPUTE_COUNT_ALIGN 64 - -__nram__ char buf[MAX_NRAM_SIZE]; - -template -__mlu_func__ void swap(T &a, T &b) { - T tmp = a; - a = b; - b = tmp; -} - -template -__mlu_func__ void storeDataFromNramToDram(T *dst, const T *src, - const PositionInCore &position, - const Shape &shape_full) { - int n_offset = shape_full.h * shape_full.w * shape_full.c; - int h_offset = shape_full.w * shape_full.c; - int w_offset = shape_full.c; - int n_seg = position.n_end - position.n_start; - int h_seg = position.h_end - position.h_start; - int w_seg = position.w_end - position.w_start; - int size = h_seg * w_seg * shape_full.c; - - __memcpy(dst + position.n_start * n_offset + position.h_start * h_offset + - position.w_start * w_offset, - src, size * sizeof(T), NRAM2GDRAM, n_offset * sizeof(T), - size * sizeof(T), n_seg - 1); -} - -template -__mlu_func__ void loadDataFromDramToNram(T *dst, const T *src, - const PositionInCore &position, - const Shape &shape_full) { - int n_offset = shape_full.h * shape_full.w * shape_full.c; - int h_offset = shape_full.w * shape_full.c; - int w_offset = shape_full.c; - int n_seg = position.n_end - position.n_start; - int h_seg = position.h_end - position.h_start; - int w_seg = position.w_end - position.w_start; - int size = h_seg * w_seg * shape_full.c; - - __memcpy(dst, - src + position.n_start * n_offset + position.h_start * h_offset + - position.w_start * w_offset, - size * sizeof(T), GDRAM2NRAM, size * sizeof(T), n_offset * sizeof(T), - n_seg - 1); -} - -// transpose the data from A*B*C*(D*E) to A*D*E*(B*C) -template -__mlu_func__ void transposeData(T *dst, T *src, const Shape &shape_seg) { - int align_c = CEIL_ALIGN(shape_seg.c, COMPUTE_COUNT_ALIGN / sizeof(T)); - int align_hw = - CEIL_ALIGN(shape_seg.h * shape_seg.w, COMPUTE_COUNT_ALIGN / sizeof(T)); - for (int i = 0; i < shape_seg.n; ++i) { - __bang_transpose(dst, src, align_hw, align_c); - dst += align_hw * align_c; - src += align_hw * align_c; - } -} - -template -__mlu_func__ void psamaskCollectForward( - const T *x_dram, T *y_dram, const PositionInCore &position, - const Shape &x_full, const Shape &y_full, const Shape &shape_seg, - const int h_mask, const int w_mask, const int half_h_mask, - const int half_w_mask) { - T *x_nram = (T *)buf; - T *y_nram = - x_nram + CEIL_ALIGN(shape_seg.n * shape_seg.h * shape_seg.w * x_full.c, - COMPUTE_COUNT_ALIGN / sizeof(T)); - loadDataFromDramToNram(x_nram, x_dram, position, x_full); - - // fill zeros to output - int elem_count = - CEIL_ALIGN(shape_seg.n * shape_seg.h * shape_seg.w * y_full.c, - NFU_ALIGN_SIZE / sizeof(T)); - __nramset(y_nram, elem_count, (T)0); - - int y_n_offset = shape_seg.h * shape_seg.w * shape_seg.c; - int y_h_offset = shape_seg.w * shape_seg.c; - int y_w_offset = shape_seg.c; - int x_n_offset = shape_seg.h * shape_seg.w * x_full.c; - int y_c_offset = 1; - int x_h_offset = shape_seg.w * x_full.c; - int x_w_offset = x_full.c; - int x_c_offset = 1; - int x_start = 0; - int y_start = 0; - for (int nidx = 0; nidx < shape_seg.n; ++nidx) { - for (int hidx = 0; hidx < shape_seg.h; ++hidx) { - for (int widx = 0; widx < shape_seg.w; ++widx) { - int h_abs = hidx + position.h_start; - int w_abs = widx + position.w_start; - int y_offset = y_start; - int x_offset = x_start; - y_offset += hidx * y_h_offset + widx * y_w_offset; - x_offset += hidx * x_h_offset + widx * x_w_offset; - - const int hstart = half_h_mask - h_abs > 0 
? half_h_mask - h_abs : 0; - const int hend = x_full.h + half_h_mask - h_abs < h_mask - ? x_full.h + half_h_mask - h_abs - : h_mask; - const int wstart = half_w_mask - w_abs > 0 ? half_w_mask - w_abs : 0; - const int wend = x_full.w + half_w_mask - w_abs < w_mask - ? x_full.w + half_w_mask - w_abs - : w_mask; - // (h, w ) with mask-indexed - // (h + hidx - half_h_mask, w + widx - half_w_mask) with feature-indexed - y_offset += ((hstart + h_abs - half_h_mask) * x_full.w + wstart + - w_abs - half_w_mask) * - y_c_offset; - x_offset += (hstart * w_mask + wstart) * x_c_offset; - int count = wend - wstart; - __memcpy(y_nram + y_offset, x_nram + x_offset, count * sizeof(T), - NRAM2NRAM, y_c_offset * x_full.w * sizeof(T), - x_c_offset * w_mask * sizeof(T), hend - hstart - 1); - } - } - y_start += y_n_offset; - x_start += x_n_offset; - } - storeDataFromNramToDram(y_dram, y_nram, position, y_full); -} - -template -__mlu_func__ void psamaskDistributeForward( - const T *x_dram, T *y_dram, const PositionInCore &position, - const Shape &x_full, const Shape &y_full, const Shape &shape_seg, - const int h_mask, const int w_mask, const int half_h_mask, - const int half_w_mask) { - T *x_nram = (T *)buf; - T *y_nram_temp = - x_nram + CEIL_ALIGN(shape_seg.n * shape_seg.h * shape_seg.w * x_full.c, - COMPUTE_COUNT_ALIGN / sizeof(T)); - loadDataFromDramToNram(x_nram, x_dram, position, x_full); - - // fill zeros to output - int align_c = CEIL_ALIGN(y_full.c, COMPUTE_COUNT_ALIGN / sizeof(T)); - int align_hw = - CEIL_ALIGN(shape_seg.h * shape_seg.w, COMPUTE_COUNT_ALIGN / sizeof(T)); - int elem_count = - CEIL_ALIGN(shape_seg.n * align_c * align_hw, NFU_ALIGN_SIZE / sizeof(T)); - __nramset(y_nram_temp, elem_count, (T)0); - - int y_n_offset = align_hw * align_c; - int y_h_offset = shape_seg.w * align_c; - int y_w_offset = align_c; - int y_c_offset = 1; - int x_n_offset = shape_seg.h * shape_seg.w * x_full.c; - int x_h_offset = shape_seg.w * x_full.c; - int x_w_offset = x_full.c; - int x_c_offset = 1; - int h_feature = y_full.h; - int w_feature = y_full.w; - - int y_start = 0; - int x_start = 0; - for (int nidx = 0; nidx < shape_seg.n; ++nidx) { - for (int hidx = 0; hidx < shape_seg.h; ++hidx) { - for (int widx = 0; widx < shape_seg.w; ++widx) { - int h_abs = hidx + position.h_start; - int w_abs = widx + position.w_start; - int y_offset = y_start; - int x_offset = x_start; - y_offset += hidx * y_h_offset + widx * y_w_offset; - x_offset += hidx * x_h_offset + widx * x_w_offset; - const int hstart = half_h_mask - h_abs > 0 ? half_h_mask - h_abs : 0; - const int hend = h_feature + half_h_mask - h_abs < h_mask - ? h_feature + half_h_mask - h_abs - : h_mask; - const int wstart = half_w_mask - w_abs > 0 ? half_w_mask - w_abs : 0; - const int wend = w_feature + half_w_mask - w_abs < w_mask - ? 
w_feature + half_w_mask - w_abs - : w_mask; - // (h, w ) with mask-indexed - // (h + hidx - half_h_mask, w + widx - half_w_mask) with feature-indexed - y_offset += ((hstart + h_abs - half_h_mask) * x_full.w + wstart + - w_abs - half_w_mask) * - y_c_offset; - x_offset += (hstart * w_mask + wstart) * x_c_offset; - int count = wend - wstart; - __memcpy(y_nram_temp + y_offset, x_nram + x_offset, count * sizeof(T), - NRAM2NRAM, y_c_offset * w_feature * sizeof(T), - x_c_offset * w_mask * sizeof(T), hend - hstart - 1); - } - } - y_start += y_n_offset; - x_start += x_n_offset; - } - // transpose y - T *y_nram = y_nram_temp + shape_seg.n * align_hw * align_c; - Shape y_seg{shape_seg.n, shape_seg.h, shape_seg.w, y_full.c}; - transposeData(y_nram, y_nram_temp, y_seg); - swap(align_c, align_hw); - // store y from nram to dram - int y_n_offset_full = y_full.h * y_full.w * y_full.c; - int y_w_offset_full = y_full.c; - int y_c_offset_full = 1; - - int y_dram_start = - position.n_start * y_n_offset_full + - (position.h_start * y_full.w + position.w_start) * y_c_offset_full; - int y_nram_start = 0; - for (int nidx = 0; nidx < shape_seg.n; ++nidx) { - int y_dram_offset = y_dram_start + nidx * y_n_offset_full; - int y_nram_offset = y_nram_start + nidx * align_hw * align_c; - __memcpy(y_dram + y_dram_offset, y_nram + y_nram_offset, - shape_seg.h * shape_seg.w * sizeof(T), NRAM2GDRAM, - y_w_offset_full * sizeof(T), align_c * sizeof(T), - h_feature * w_feature - 1); - } -} - -template -__mlu_func__ void psamaskCollectBackward( - const T *dy_dram, T *dx_dram, const PositionInCore &position, - const Shape &dy_full, const Shape &dx_full, const Shape &shape_seg, - const int h_mask, const int w_mask, const int half_h_mask, - const int half_w_mask) { - T *dy_nram = (T *)buf; - T *dx_nram = - dy_nram + CEIL_ALIGN(shape_seg.n * shape_seg.h * shape_seg.w * dy_full.c, - COMPUTE_COUNT_ALIGN / sizeof(T)); - loadDataFromDramToNram(dy_nram, dy_dram, position, dy_full); - - // fill zeros to output - int elem_count = - CEIL_ALIGN(shape_seg.n * shape_seg.h * shape_seg.w * shape_seg.c, - NFU_ALIGN_SIZE / sizeof(T)); - __nramset(dx_nram, elem_count, (T)0); - - int dy_n_offset = shape_seg.h * shape_seg.w * dy_full.c; - int dy_h_offset = shape_seg.w * dy_full.c; - int dy_w_offset = dy_full.c; - int dy_c_offset = 1; - int dx_n_offset = shape_seg.h * shape_seg.w * dx_full.c; - int dx_h_offset = shape_seg.w * dx_full.c; - int dx_w_offset = dx_full.c; - int dx_c_offset = 1; - int h_feature = dy_full.h; - int w_feature = dy_full.w; - - int dy_start = 0; - int dx_start = 0; - for (int nidx = 0; nidx < shape_seg.n; ++nidx) { - for (int hidx = 0; hidx < shape_seg.h; ++hidx) { - for (int widx = 0; widx < shape_seg.w; ++widx) { - int h_abs = hidx + position.h_start; - int w_abs = widx + position.w_start; - int dy_offset = dy_start; - int dx_offset = dx_start; - dy_offset += hidx * dy_h_offset + widx * dy_w_offset; - dx_offset += hidx * dx_h_offset + widx * dx_w_offset; - - const int hstart = half_h_mask - h_abs > 0 ? half_h_mask - h_abs : 0; - const int hend = h_feature + half_h_mask - h_abs < h_mask - ? h_feature + half_h_mask - h_abs - : h_mask; - const int wstart = half_w_mask - w_abs > 0 ? half_w_mask - w_abs : 0; - const int wend = w_feature + half_w_mask - w_abs < w_mask - ? 
w_feature + half_w_mask - w_abs - : w_mask; - // (h, w ) with mask-indexed - // (h + h_abs - half_h_mask, w + w_abs - half_w_mask) with - // feature-indexed - dy_offset += ((hstart + h_abs - half_h_mask) * w_feature + wstart + - w_abs - half_w_mask) * - dy_c_offset; - dx_offset += (hstart * w_mask + wstart) * dx_c_offset; - int count = wend - wstart; - __memcpy(dx_nram + dx_offset, dy_nram + dy_offset, count * sizeof(T), - NRAM2NRAM, dx_c_offset * w_mask * sizeof(T), - dy_c_offset * w_feature * sizeof(T), hend - hstart - 1); - } - } - dy_start += dy_n_offset; - dx_start += dx_n_offset; - } - storeDataFromNramToDram(dx_dram, dx_nram, position, dx_full); -} - -template -__mlu_func__ void psamaskDistributeBackward( - const T *dy_dram, T *dx_dram, const PositionInCore &position, - const Shape &dy_full, const Shape &dx_full, const Shape &shape_seg, - const int h_mask, const int w_mask, const int half_h_mask, - const int half_w_mask) { - // load dy from dram to nram - T *dy_nram_temp = (T *)buf; - int dy_n_offset_full = dy_full.h * dy_full.w * dy_full.c; - int dy_c_offset_full = 1; - int h_feature = dy_full.h; - int w_feature = dy_full.w; - int align_c = - CEIL_ALIGN(shape_seg.h * shape_seg.w, COMPUTE_COUNT_ALIGN / sizeof(T)); - int align_hw = - CEIL_ALIGN(h_feature * w_feature, COMPUTE_COUNT_ALIGN / sizeof(T)); - - int dy_dram_start = - position.n_start * dy_n_offset_full + - (position.h_start * w_feature + position.w_start) * dy_c_offset_full; - int dy_nram_start = 0; - for (int i = 0; i < shape_seg.n; ++i) { - int dy_nram_offset = dy_nram_start + i * (align_hw * align_c); - int dy_dram_offset = dy_dram_start + i * dy_n_offset_full; - __memcpy(dy_nram_temp + dy_nram_offset, dy_dram + dy_dram_offset, - shape_seg.h * shape_seg.w * sizeof(T), GDRAM2NRAM, - align_c * sizeof(T), dy_full.c * sizeof(T), - h_feature * w_feature - 1); - } - T *dy_nram = dy_nram_temp + shape_seg.n * align_hw * align_c; - Shape dy_seg{shape_seg.n, h_feature, w_feature, shape_seg.h * shape_seg.w}; - transposeData(dy_nram, dy_nram_temp, dy_seg); - swap(align_c, align_hw); - - // fill zeros to dx - T *dx_nram = dy_nram + shape_seg.n * align_hw * align_c; - int dx_size = shape_seg.n * shape_seg.h * shape_seg.w * dx_full.c; - __nramset(dx_nram, CEIL_ALIGN(dx_size, NFU_ALIGN_SIZE / sizeof(T)), (T)0); - - int dy_n_offset_seg = align_hw * align_c; - int dy_h_offset_seg = shape_seg.w * align_c; - int dy_w_offset_seg = align_c; - int dy_c_offset_seg = 1; - int dx_n_offset_seg = shape_seg.h * shape_seg.w * shape_seg.c; - int dx_h_offset_seg = shape_seg.w * shape_seg.c; - int dx_w_offset_seg = shape_seg.c; - int dx_c_offset_seg = 1; - - int dy_start = 0; - int dx_start = 0; - for (int nidx = 0; nidx < shape_seg.n; ++nidx) { - for (int hidx = 0; hidx < shape_seg.h; ++hidx) { - for (int widx = 0; widx < shape_seg.w; ++widx) { - int h_abs = hidx + position.h_start; - int w_abs = widx + position.w_start; - int dy_offset = dy_start; - int dx_offset = dx_start; - dy_offset += hidx * dy_h_offset_seg + widx * dy_w_offset_seg; - dx_offset += hidx * dx_h_offset_seg + widx * dx_w_offset_seg; - const int hstart = half_h_mask - h_abs > 0 ? half_h_mask - h_abs : 0; - const int hend = h_feature + half_h_mask - h_abs < h_mask - ? h_feature + half_h_mask - h_abs - : h_mask; - const int wstart = half_w_mask - w_abs > 0 ? half_w_mask - w_abs : 0; - const int wend = w_feature + half_w_mask - w_abs < w_mask - ? 
w_feature + half_w_mask - w_abs - : w_mask; - // (h, w ) with mask-indexed - // (h + h_abs - half_h_mask, w + w_abs - half_w_mask) with - // feature-indexed - dy_offset += ((hstart + h_abs - half_h_mask) * w_feature + wstart + - w_abs - half_w_mask) * - dy_c_offset_seg; - dx_offset += (hstart * w_mask + wstart) * dx_c_offset_seg; - int count = wend - wstart; - __memcpy(dx_nram + dx_offset, dy_nram + dy_offset, count * sizeof(T), - NRAM2NRAM, w_mask * dx_c_offset_seg * sizeof(T), - w_feature * dy_c_offset_seg * sizeof(T), hend - hstart - 1); - } - } - dy_start += dy_n_offset_seg; - dx_start += dx_n_offset_seg; - } - storeDataFromNramToDram(dx_dram, dx_nram, position, dx_full); -} - -template -__mlu_func__ void psamaskBase(const T *input_dram, T *output_dram, - const Shape &input_full, const Shape &output_full, - LimitParam &limit, const PsamaskType psa_type, - const DimPartitionType core_partition, - const DimPartitionType cluster_partition, - const bool is_forward, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask, const int n_per_core, - const int h_per_core, const int n_per_cluster, - const int h_per_cluster) { - PositionInCore position_full; - PositionInCore position_seg; - position_full.w_start = 0; - position_full.w_end = output_full.w; - int n_num_in_cluster = n_per_cluster; - int h_num_in_cluster = h_per_cluster; - - switch (cluster_partition) { - case PARTITION_N: { - position_full.h_start = 0; - position_full.h_end = input_full.h; - position_full.n_start = taskIdY * n_per_cluster; - int cluster_need = (input_full.n + n_per_cluster - 1) / n_per_cluster; - if (taskIdY >= cluster_need) return; - int n_remainder = input_full.n - (cluster_need - 1) * n_per_cluster; - n_num_in_cluster = - (taskIdY == cluster_need - 1) ? n_remainder : n_per_cluster; - position_full.n_end = position_full.n_start + n_num_in_cluster; - }; break; - case PARTITION_H: { - position_full.n_start = 0; - position_full.n_end = input_full.n; - position_full.h_start = taskIdY * h_per_cluster; - int cluster_need = (input_full.h + h_per_cluster - 1) / h_per_cluster; - if (taskIdY >= cluster_need) return; - int h_remainder = input_full.h - (cluster_need - 1) * h_per_cluster; - h_num_in_cluster = - (taskIdY == cluster_need - 1) ? h_remainder : h_per_cluster; - position_full.h_end = position_full.h_start + h_num_in_cluster; - }; break; - } - switch (core_partition) { - case PARTITION_N: { - position_full.n_start += taskIdX * n_per_core; - int core_need = (n_num_in_cluster + n_per_core - 1) / n_per_core; - if (taskIdX >= core_need) return; - int n_remainder = n_num_in_cluster - (core_need - 1) * n_per_core; - position_full.n_end = - position_full.n_start + - ((taskIdX == core_need - 1) ? n_remainder : n_per_core); - }; break; - case PARTITION_H: { - position_full.h_start += taskIdX * h_per_core; - int core_need = (h_num_in_cluster + h_per_core - 1) / h_per_core; - if (taskIdX >= core_need) return; - int h_remainder = h_num_in_cluster - (core_need - 1) * h_per_core; - position_full.h_end = - position_full.h_start + - ((taskIdX == core_need - 1) ? h_remainder : h_per_core); - }; break; - } - // the count of n ,h and w need to be processed in the current core - int shape_core_n = position_full.n_end - position_full.n_start; - int shape_core_h = position_full.h_end - position_full.h_start; - int shape_core_w = input_full.w; - - limit.n = limit.n < shape_core_n ? limit.n : shape_core_n; - limit.h = limit.h < shape_core_h ? limit.h : shape_core_h; - limit.w = limit.w < shape_core_w ? 
limit.w : shape_core_w; - - // load the data to nram according to the limit - for (int nidx = position_full.n_start; nidx < position_full.n_end; - nidx += limit.n) { - position_seg.n_start = nidx; - position_seg.n_end = - position_seg.n_start + (position_full.n_end - nidx < limit.n - ? position_full.n_end - nidx - : limit.n); - for (int hidx = position_full.h_start; hidx < position_full.h_end; - hidx += limit.h) { - position_seg.h_start = hidx; - position_seg.h_end = - position_seg.h_start + (position_full.h_end - hidx < limit.h - ? position_full.h_end - hidx - : limit.h); - for (int widx = position_full.w_start; widx < position_full.w_end; - widx += limit.w) { - position_seg.w_start = widx; - position_seg.w_end = - position_seg.w_start + (position_full.w_end - widx < limit.w - ? position_full.w_end - widx - : limit.w); - - // record the segment of output except the size of channel - // channel segments of output and input are the same - Shape shape_seg; - shape_seg.n = position_seg.n_end - position_seg.n_start; - shape_seg.h = position_seg.h_end - position_seg.h_start; - shape_seg.w = position_seg.w_end - position_seg.w_start; - shape_seg.c = output_full.c; - - switch (psa_type) { - case COLLECT: { - if (is_forward) { - psamaskCollectForward(input_dram, output_dram, position_seg, - input_full, output_full, shape_seg, h_mask, - w_mask, half_h_mask, half_w_mask); - } else { - psamaskCollectBackward(input_dram, output_dram, position_seg, - input_full, output_full, shape_seg, h_mask, - w_mask, half_h_mask, half_w_mask); - } - } break; - case DISTRIBUTE: { - if (is_forward) { - psamaskDistributeForward(input_dram, output_dram, position_seg, - input_full, output_full, shape_seg, - h_mask, w_mask, half_h_mask, - half_w_mask); - } else { - psamaskDistributeBackward(input_dram, output_dram, position_seg, - input_full, output_full, shape_seg, - h_mask, w_mask, half_h_mask, - half_w_mask); - } - } break; - } - } - } - } -} - -template -__mlu_global__ void MLUUnion1KernelPsamaskForward( - const T *x, T *y, const PsamaskType psa_type, - const DimPartitionType core_partition, - const DimPartitionType cluster_partition, const int batch, - const int h_feature, const int w_feature, const int h_mask, - const int w_mask, const int x_c, const int y_c, const int half_h_mask, - const int half_w_mask, const int n_per_core, const int h_per_core, - const int n_per_cluster, const int h_per_cluster, const int limit_n_seg, - const int limit_h_seg, const int limit_w_seg) { - if (coreId == 0x80) { - return; - } - Shape x_full, y_full; - x_full.n = batch; - x_full.h = h_feature; - x_full.w = w_feature; - x_full.c = x_c; - y_full.n = batch; - y_full.h = h_feature; - y_full.w = w_feature; - y_full.c = y_c; - - LimitParam limit; - limit.n = limit_n_seg; - limit.h = limit_h_seg; - limit.w = limit_w_seg; - - psamaskBase(x, y, x_full, y_full, limit, psa_type, core_partition, - cluster_partition, true, h_mask, w_mask, half_h_mask, half_w_mask, - n_per_core, h_per_core, n_per_cluster, h_per_cluster); -} - -template -__mlu_global__ void MLUUnion1KernelPsamaskBackward( - const T *dy, T *dx, const PsamaskType psa_type, - const DimPartitionType core_partition, - const DimPartitionType cluster_partition, const int batch, - const int h_feature, const int w_feature, const int h_mask, - const int w_mask, const int dx_c, const int dy_c, const int half_h_mask, - const int half_w_mask, const int n_per_core, const int h_per_core, - const int n_per_cluster, const int h_per_cluster, const int limit_n_seg, - const int limit_h_seg, const int 
limit_w_seg) { - if (coreId == 0x80) { - return; - } - Shape dy_full, dx_full; - dx_full.n = batch; - dx_full.h = h_feature; - dx_full.w = w_feature; - dx_full.c = dx_c; - dy_full.n = batch; - dy_full.h = h_feature; - dy_full.w = w_feature; - dy_full.c = dy_c; - - LimitParam limit; - limit.n = limit_n_seg; - limit.h = limit_h_seg; - limit.w = limit_w_seg; - - psamaskBase(dy, dx, dy_full, dx_full, limit, psa_type, core_partition, - cluster_partition, false, h_mask, w_mask, half_h_mask, - half_w_mask, n_per_core, h_per_core, n_per_cluster, - h_per_cluster); -} - -void KernelPsamaskForward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const void *x, void *y, const PsamaskType psa_type, - const DimPartitionType core_partition, - const DimPartitionType cluster_partition, const int batch, - const int h_feature, const int w_feature, const int h_mask, - const int w_mask, const int x_c, const int y_c, const int half_h_mask, - const int half_w_mask, const int n_per_core, const int h_per_core, - const int n_per_cluster, const int h_per_cluster, const int limit_n_seg, - const int limit_h_seg, const int limit_w_seg) { - MLUUnion1KernelPsamaskForward<<>>( - static_cast(x), static_cast(y), psa_type, - core_partition, cluster_partition, batch, h_feature, w_feature, h_mask, - w_mask, x_c, y_c, half_h_mask, half_w_mask, n_per_core, h_per_core, - n_per_cluster, h_per_cluster, limit_n_seg, limit_h_seg, limit_w_seg); -} - -void KernelPsamaskBackward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const void *dy, void *dx, const PsamaskType psa_type, - const DimPartitionType core_partition, - const DimPartitionType cluster_partition, const int batch, - const int h_feature, const int w_feature, const int h_mask, - const int w_mask, const int dx_c, const int dy_c, const int half_h_mask, - const int half_w_mask, const int n_per_core, const int h_per_core, - const int n_per_cluster, const int h_per_cluster, const int limit_n_seg, - const int limit_h_seg, const int limit_w_seg) { - MLUUnion1KernelPsamaskBackward<<>>( - static_cast(dy), static_cast(dx), psa_type, - core_partition, cluster_partition, batch, h_feature, w_feature, h_mask, - w_mask, dx_c, dy_c, half_h_mask, half_w_mask, n_per_core, h_per_core, - n_per_cluster, h_per_cluster, limit_n_seg, limit_h_seg, limit_w_seg); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/psamask_utils.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/psamask_utils.hpp deleted file mode 100644 index 30ec3884..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/psamask_utils.hpp +++ /dev/null @@ -1,55 +0,0 @@ -/************************************************************************* - * Copyright (C) 2022 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- *************************************************************************/ -#ifndef PSAMASK_UTILS_HPP_ -#define PSAMASK_UTILS_HPP_ - -typedef enum { - COLLECT = 0, - DISTRIBUTE = 1, -} PsamaskType; - -typedef enum { - PARTITION_N = 0, - PARTITION_H = 1, -} DimPartitionType; - -struct PartitionSeg { - int h_per_cluster; - int n_per_cluster; - int h_per_core; - int n_per_core; - DimPartitionType cluster_partition; - DimPartitionType core_partition; -}; - -struct Shape { - int n; - int h; - int w; - int c; -}; - -struct LimitParam { - int n; - int h; - int w; -}; - -struct PositionInCore { - int n_start; - int n_end; - int h_start; - int h_end; - int w_start; - int w_end; -}; -#endif // PSAMASK_UTILS_HPP_ diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_mlu_kernel.mlu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_mlu_kernel.mlu deleted file mode 100644 index f62554d0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_mlu_kernel.mlu +++ /dev/null @@ -1,493 +0,0 @@ -/************************************************************************* - * Copyright (C) 2021 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - *************************************************************************/ -#include "common_mlu_helper.hpp" - -#define ROI_OFFSET 5 - -__nram__ char buffer[MAX_NRAM_SIZE]; - -namespace forward { -template -__mlu_func__ void bilinearInterpolate(const int input_height, - const int input_width, T y, T x, T *w1, - T *w2, T *w3, T *w4, int *x_low, - int *x_high, int *y_low, int *y_high, - bool *empty) { - // deal with cases that inverse elements are of feature map boundary - if (y < -1.0 || y > input_height || x < -1.0 || x > input_width) { - *empty = true; - return; - } - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - int y_low_ = int(y); - int x_low_ = int(x); - - if (y_low_ >= input_height - 1) { - *y_high = y_low_ = input_height - 1; - y = (T)y_low_; - } else { - *y_high = y_low_ + 1; - } - - if (x_low_ >= input_width - 1) { - *x_high = x_low_ = input_width - 1; - x = T(x_low_); - } else { - *x_high = x_low_ + 1; - } - - *y_low = y_low_; - *x_low = x_low_; - - T ly = y - y_low_; - T lx = x - x_low_; - T hy = 1.0 - ly; - T hx = 1.0 - lx; - *w1 = hy * hx, *w2 = hy * lx, *w3 = ly * hx, *w4 = ly * lx; - return; -} - -template -__mlu_func__ void computeChannel(T *input_core, T *nram_in, T *output_core, - T *nram_out, const int roi_bin_grid_h, - const int roi_bin_grid_w, const T roi_start_h, - const T roi_start_w, const int ph, - const int pw, const T bin_size_h, - const T bin_size_w, const float count, - const int input_height, const int input_width, - const int channels, const int cyc_num, - const int max_elements) { - int cyc_channel = max_elements; - - for (int i = 0; i < cyc_num; i++) { - int real_channel = - (i == cyc_num - 1) ? 
channels - i * cyc_channel : cyc_channel; - int align_channel = PAD_UP(real_channel, NFU_ALIGN_SIZE / sizeof(T)); - __bang_write_zero(nram_out, align_channel); - uint32_t real_size = real_channel * sizeof(T); - - int iy, ix; - for (iy = 0; iy < roi_bin_grid_h; iy++) { - // 1. compute the coordinates of the y axis in the current roi_bin_grid_h - T y = roi_start_h + ph * bin_size_h + - (T)(iy + 0.5) * bin_size_h / (T)(roi_bin_grid_h); - for (ix = 0; ix < roi_bin_grid_w; ix++) { - // 2. compute the coordinates of the x axis in the current - // roi_bin_grid_w - T x = roi_start_w + pw * bin_size_w + - (T)(ix + 0.5) * bin_size_w / (T)(roi_bin_grid_w); - - // 3. compute the four weights (w1, w2, w3 and w4), the height (y_low - // and y_high) and weight (x_low and x_high) of input feature map in - // the current roi bin grid, and the flag (empty) which shows if x, y - // are out of input feature map ranges - T w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - bool empty = false; - - bilinearInterpolate(input_height, input_width, y, x, &w1, &w2, &w3, &w4, - &x_low, &x_high, &y_low, &y_high, &empty); - - // 4. compute interpolation of the current roi bin grid - // tmp_cyc1, temp_cyc2, tmp_cyc3 and tmp_cyc4 store the input values - // to compute the interpolation, and then reused to compute - // the argmax_x and argmax_y. - T *tmp_cyc1 = nram_in + cyc_channel; - T *tmp_cyc2 = nram_in + cyc_channel * 2; - T *tmp_cyc3 = nram_in + cyc_channel * 3; - T *tmp_cyc4 = nram_in + cyc_channel * 4; - - if (empty) { // exits abnormal values - __bang_write_zero(nram_in, align_channel); - } else { - __bang_write_zero(nram_in, align_channel); - uint32_t offset1 = (y_low * input_width + x_low) * channels; - uint32_t offset2 = (y_low * input_width + x_high) * channels; - uint32_t offset3 = (y_high * input_width + x_low) * channels; - uint32_t offset4 = (y_high * input_width + x_high) * channels; - T *input1 = (T *)input_core + offset1 + i * cyc_channel; - T *input2 = (T *)input_core + offset2 + i * cyc_channel; - T *input3 = (T *)input_core + offset3 + i * cyc_channel; - T *input4 = (T *)input_core + offset4 + i * cyc_channel; - - // load the four pixels (p1, p2, p3 and p4) of input feature map to - // compute interpolation - __memcpy(tmp_cyc1, input1, real_size, GDRAM2NRAM); - __memcpy(tmp_cyc2, input2, real_size, GDRAM2NRAM); - __memcpy(tmp_cyc3, input3, real_size, GDRAM2NRAM); - __memcpy(tmp_cyc4, input4, real_size, GDRAM2NRAM); - - // interpolation value = w1 * p1 + w2 * p2 + w3 * p3 + w4 * p4 - __bang_mul_const(tmp_cyc1, tmp_cyc1, w1, align_channel); - __bang_mul_const(tmp_cyc2, tmp_cyc2, w2, align_channel); - __bang_mul_const(tmp_cyc3, tmp_cyc3, w3, align_channel); - __bang_mul_const(tmp_cyc4, tmp_cyc4, w4, align_channel); - - __bang_add(nram_in, tmp_cyc1, nram_in, align_channel); - __bang_add(nram_in, tmp_cyc2, nram_in, align_channel); - __bang_add(nram_in, tmp_cyc3, nram_in, align_channel); - __bang_add(nram_in, tmp_cyc4, nram_in, align_channel); - } - // 5. compute sum value and corresponding coordinates of x axis and y - // axis. Update the sum value. 
- __bang_add(nram_out, nram_in, nram_out, align_channel); - } // loop_roi_grid_w - } // loop_roi_grid_h - T count_value = (T)(1.0 / count); - __bang_mul_const(nram_out, nram_out, count_value, align_channel); - __memcpy(output_core + i * cyc_channel, nram_out, real_size, NRAM2GDRAM); - } // loop_cyc_num -} - -template -__mlu_func__ void roialignForwardAvg( - T *input, T *rois, T *output, const bool aligned, const int channels, - const int pooled_height, const int pooled_width, const int input_height, - const int input_width, const int sampling_ratio, const T spatial_scale, - const int num_rois) { - // find limit for channel, the nram space is divided to 6 parts that are - // input, 4 weights to compute the interpolation (w1, w2, w3, w4), output - - // max_elements : 300 : float datatype : 27296, half datatype : 54592 - // max_elements : 200 : float datatype : 16384, half datatype : 32768 - int max_elements = (PAD_DOWN(MAX_NRAM_SIZE / 6, NFU_ALIGN_SIZE)) / sizeof(T); - int cyc_num = channels / max_elements + (int)(channels % max_elements != 0); - T offset = aligned ? (T)0.5 : (T)0.0; - int task_num = num_rois * pooled_height * pooled_width; - T *nram_out = (T *)buffer; - T *nram_in = nram_out + max_elements; - if (task_num < taskDim) { - if (taskId >= task_num) { - return; - } - } - - for (int bin_idx = taskId; bin_idx < task_num; bin_idx = bin_idx + taskDim) { - if (bin_idx >= task_num) { - return; - } - - // (n,ph.pw) is a c in the pooled output - int pw = bin_idx % pooled_width; - int ph = (bin_idx / pooled_width) % pooled_height; - int n = bin_idx / pooled_width / pooled_height; - - T *roi_id_tmp = rois + n * ROI_OFFSET; - // 1. compute width and height of roi region. - int batch_idx = (int)roi_id_tmp[0]; - T roi_x1 = roi_id_tmp[1]; - T roi_y1 = roi_id_tmp[2]; - T roi_x2 = roi_id_tmp[3]; - T roi_y2 = roi_id_tmp[4]; - T roi_start_w = roi_x1 * spatial_scale - offset; - T roi_start_h = roi_y1 * spatial_scale - offset; - T roi_end_w = roi_x2 * spatial_scale - offset; - T roi_end_h = roi_y2 * spatial_scale - offset; - T roi_width = roi_end_w - roi_start_w; - T roi_height = roi_end_h - roi_start_h; - - if (!aligned) { - roi_width = roi_width > (T)(1.0) ? roi_width : (T)(1.0); - roi_height = roi_height > (T)(1.0) ? roi_height : (T)(1.0); - } - - // 2. compute float-type width and height of roi bin region. - T bin_size_w = (T)roi_width / (T)pooled_width; - T bin_size_h = (T)roi_height / (T)pooled_height; - - // 3. compute int-type width and height of roi bin region. - int roi_bin_grid_h, roi_bin_grid_w; - roi_bin_grid_h = (sampling_ratio > 0) - ? sampling_ratio - : int(ceilf(roi_height / pooled_height)); - roi_bin_grid_w = (sampling_ratio > 0) - ? sampling_ratio - : int(ceilf(roi_width / pooled_width)); - float count = (float)((roi_bin_grid_h * roi_bin_grid_w) > 1 - ? roi_bin_grid_h * roi_bin_grid_w - : 1.0); - T *input_core = input + batch_idx * channels * input_width * input_height; - T *output_core = output + bin_idx * channels; - // 4. compute avg value and corresponding coordinates of x axis and y axis. 
- computeChannel(input_core, nram_in, output_core, nram_out, roi_bin_grid_h, - roi_bin_grid_w, roi_start_h, roi_start_w, ph, pw, bin_size_h, - bin_size_w, count, input_height, input_width, channels, - cyc_num, max_elements); - } -} - -__mlu_global__ void MLUUnion1KernelRoiAlignAvg( - const void *input, const void *rois, const int channels, const bool aligned, - const int pooled_height, const int pooled_width, const int input_height, - const int input_width, const int sampling_ratio, const float spatial_scale, - const int num_rois, const cnrtDataType_t data_type, void *output) { - // make sure that memcore is not used - if (coreId == 0x80) { - return; - } - - switch (data_type) { - case CNRT_FLOAT16: { - roialignForwardAvg((half *)input, (half *)rois, (half *)output, aligned, - channels, pooled_height, pooled_width, input_height, - input_width, sampling_ratio, - (half)spatial_scale, num_rois); - }; break; - case CNRT_FLOAT32: { - roialignForwardAvg((float *)input, (float *)rois, (float *)output, - aligned, channels, pooled_height, pooled_width, - input_height, input_width, sampling_ratio, - (float)spatial_scale, num_rois); - }; break; - default: - break; - } - - return; -} -} // namespace forward - -namespace backward { -__mlu_func__ void bilinearInterpolateGradient(int height, int width, float y, - float x, float *w1, float *w2, - float *w3, float *w4, int *x_low, - int *x_high, int *y_low, - int *y_high) { - if (y < -1.0 || y > height || x < -1.0 || x > width) { - *w1 = 0.0, *w2 = 0.0, *w3 = 0.0, *w4 = 0.0; - *x_low = -1, *x_high = -1, *y_low = -1, *y_high = -1; - return; - } - if (y <= 0) { - y = 0; - } - if (x <= 0) { - x = 0; - } - *y_low = (int)y; - *x_low = (int)x; - if (*y_low >= height - 1) { - *y_high = height - 1, *y_low = height - 1; - y = (float)(*y_low); - } else { - *y_high = *y_low + 1; - } - if (*x_low >= width - 1) { - *x_high = width - 1, *x_low = width - 1; - x = (float)(*x_low); - } else { - *x_high = *x_low + 1; - } - float ly = y - *y_low, lx = x - *x_low; - float hy = 1.0 - ly, hx = 1.0 - lx; - *w1 = hy * hx, *w2 = hy * lx, *w3 = ly * hx, *w4 = ly * lx; - return; -} - -template -__mlu_func__ void unionRoiAlignBp( - T *grads, T *boxes, T *grads_image, const int boxes_num, const int hi, - const int wi, const int c, const int no, const int ho, const int wo, - const float spatial_scale, const int sampling_ratio, const bool aligned) { - int c_align = PAD_UP(c, NFU_ALIGN_SIZE / sizeof(T)); - int deal_all = boxes_num * hi * wi; - int deal_this_core = deal_all / taskDim + (int)(taskId < deal_all % taskDim); - for (int i = 0; i < deal_this_core; ++i) { - int bhw_id = i * taskDim + taskId; - int box_id = bhw_id / (hi * wi); - int ih = (bhw_id / wi) % hi; - int iw = bhw_id % wi; - T *box = boxes + box_id * 5; - int image_id = (int)box[0]; - T *image_offset = grads_image + image_id * ho * wo * c; - T *grads_ = grads + box_id * hi * wi * c + ih * wi * c + iw * c; - - float offset = aligned ? 0.5 : 0.0; - float x1 = box[1] * spatial_scale - offset; - float y1 = box[2] * spatial_scale - offset; - float x2 = box[3] * spatial_scale - offset; - float y2 = box[4] * spatial_scale - offset; - float roi_width = x2 - x1; - float roi_height = y2 - y1; - if (!aligned) { - roi_width = (roi_width > 1.0) ? roi_width : 1.0; - roi_height = (roi_height > 1.0) ? roi_height : 1.0; - } - float bin_size_h = roi_height / hi; - float bin_size_w = roi_width / wi; - - int roi_grid_h = - (sampling_ratio > 0) ? sampling_ratio : std::ceil(roi_height / hi); - int roi_grid_w = - (sampling_ratio > 0) ? 
sampling_ratio : std::ceil(roi_width / wi); - const T count = roi_grid_h * roi_grid_w; - if (c_align * sizeof(T) * 2 <= MAX_NRAM_SIZE) { - for (int iy = 0; iy < roi_grid_h; ++iy) { - const float y = - y1 + ih * bin_size_h + (iy + 0.5) * bin_size_h / roi_grid_h; - for (int ix = 0; ix < roi_grid_w; ++ix) { - const float x = - x1 + iw * bin_size_w + (ix + 0.5) * bin_size_w / roi_grid_w; - float w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - bilinearInterpolateGradient(ho, wo, y, x, &w1, &w2, &w3, &w4, &x_low, - &x_high, &y_low, &y_high); - if (x_low >= 0 && y_low >= 0) { - __memcpy(buffer, grads_, c * sizeof(T), GDRAM2NRAM); - __bang_mul_const((T *)buffer + c_align, (T *)buffer, (T)w1, - c_align); - __bang_mul_const((T *)buffer + c_align, (T *)buffer + c_align, - 1 / count, c_align); - __bang_atomic_add((T *)buffer + c_align, - image_offset + y_low * wo * c + x_low * c, - (T *)buffer + c_align, c); - __bang_mul_const((T *)buffer + c_align, (T *)buffer, (T)w2, - c_align); - __bang_mul_const((T *)buffer + c_align, (T *)buffer + c_align, - 1 / count, c_align); - __bang_atomic_add((T *)buffer + c_align, - image_offset + y_low * wo * c + x_high * c, - (T *)buffer + c_align, c); - __bang_mul_const((T *)buffer + c_align, (T *)buffer, (T)w3, - c_align); - __bang_mul_const((T *)buffer + c_align, (T *)buffer + c_align, - 1 / count, c_align); - __bang_atomic_add((T *)buffer + c_align, - image_offset + y_high * wo * c + x_low * c, - (T *)buffer + c_align, c); - __bang_mul_const((T *)buffer + c_align, (T *)buffer, (T)w4, - c_align); - __bang_mul_const((T *)buffer + c_align, (T *)buffer + c_align, - 1 / count, c_align); - __bang_atomic_add((T *)buffer + c_align, - image_offset + y_high * wo * c + x_high * c, - (T *)buffer + c_align, c); - } // x_low && y_low - } // ix - } // iy - } else { - for (int iy = 0; iy < roi_grid_h; ++iy) { - const float y = - y1 + ih * bin_size_h + (iy + 0.5) * bin_size_h / roi_grid_h; - for (int ix = 0; ix < roi_grid_w; ++ix) { - const float x = - x1 + iw * bin_size_w + (ix + 0.5) * bin_size_w / roi_grid_w; - float w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - bilinearInterpolateGradient(ho, wo, y, x, &w1, &w2, &w3, &w4, &x_low, - &x_high, &y_low, &y_high); - if (x_low >= 0 && y_low >= 0) { - int deal_once = - PAD_DOWN(MAX_NRAM_SIZE / 2, NFU_ALIGN_SIZE) / sizeof(T); - int c_repeat = c / deal_once + (int)(c % deal_once != 0); - for (int i = 0; i < c_repeat; ++i) { - int deal_c = deal_once; - int align_c = deal_once; - if (i == c_repeat - 1) { - deal_c = c - i * deal_once; - align_c = c_align - i * deal_once; - } - __memcpy(buffer, grads_ + i * deal_once, deal_c * sizeof(T), - GDRAM2NRAM); - __bang_mul_const((T *)buffer + align_c, (T *)buffer, (T)w1, - align_c); - __bang_mul_const((T *)buffer + align_c, (T *)buffer + align_c, - 1 / count, align_c); - __bang_atomic_add( - (T *)buffer + align_c, - image_offset + y_low * wo * c + x_low * c + i * deal_once, - (T *)buffer + align_c, deal_c); - __bang_mul_const((T *)buffer + align_c, (T *)buffer, (T)w2, - align_c); - __bang_mul_const((T *)buffer + align_c, (T *)buffer + align_c, - 1 / count, align_c); - __bang_atomic_add( - (T *)buffer + align_c, - image_offset + y_low * wo * c + x_high * c + i * deal_once, - (T *)buffer + align_c, deal_c); - __bang_mul_const((T *)buffer + align_c, (T *)buffer, (T)w3, - align_c); - __bang_mul_const((T *)buffer + align_c, (T *)buffer + align_c, - 1 / count, align_c); - __bang_atomic_add( - (T *)buffer + align_c, - image_offset + y_high * wo * c + x_low * c + i * deal_once, - (T *)buffer + 
align_c, deal_c); - __bang_mul_const((T *)buffer + align_c, (T *)buffer, (T)w4, - align_c); - __bang_mul_const((T *)buffer + align_c, (T *)buffer + align_c, - 1 / count, align_c); - __bang_atomic_add( - (T *)buffer + align_c, - image_offset + y_high * wo * c + x_high * c + i * deal_once, - (T *)buffer + align_c, deal_c); - } // for c_repeat - } // x_low >= 0 && y_low >= 0 - } // ix - } // iy - } // if c - } // i -} - -__mlu_global__ void MLUUnion1KernelRoiAlignBackward( - const void *grads, const void *boxes, void *grads_image, - const cnrtDataType_t dtype, const int boxes_num, const int hi, const int wi, - const int c, const int no, const int ho, const int wo, - const float spatial_scale, const int sampling_ratio, const bool aligned) { - // make sure that memcore is not used - if (coreId == 0x80) { - return; - } - switch (dtype) { - case CNRT_FLOAT16: { - unionRoiAlignBp((half *)grads, (half *)boxes, (half *)grads_image, - boxes_num, hi, wi, c, no, ho, wo, spatial_scale, - sampling_ratio, aligned); - }; break; - case CNRT_FLOAT32: { - unionRoiAlignBp((float *)grads, (float *)boxes, (float *)grads_image, - boxes_num, hi, wi, c, no, ho, wo, spatial_scale, - sampling_ratio, aligned); - }; break; - default: { return; } - } -} -} // namespace backward - -void KernelRoiAlign(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, - cnrtQueue_t queue, const cnrtDataType_t d_type, - const void *input, const void *rois, const int channels, - const bool aligned, const int pooled_height, - const int pooled_width, const int input_height, - const int input_width, const int sampling_ratio, - const float spatial_scale, const int num_rois, - void *output) { - forward::MLUUnion1KernelRoiAlignAvg<<>>( - input, rois, channels, aligned, pooled_height, pooled_width, input_height, - input_width, sampling_ratio, spatial_scale, num_rois, d_type, output); -} - -void KernelRoiAlignBackward(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, - cnrtQueue_t queue, const cnrtDataType_t dtype, - const void *grads, const void *boxes, - void *grads_image, const int boxes_num, - const int hi, const int wi, const int c, - const int no, const int ho, const int wo, - const float spatial_scale, const int sampling_ratio, - const bool aligned) { - backward::MLUUnion1KernelRoiAlignBackward<<>>( - grads, boxes, grads_image, dtype, boxes_num, hi, wi, c, no, ho, wo, - spatial_scale, sampling_ratio, aligned); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_rotated_mlu_kernel.mlu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_rotated_mlu_kernel.mlu deleted file mode 100644 index 7f05b525..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_rotated_mlu_kernel.mlu +++ /dev/null @@ -1,472 +0,0 @@ -/************************************************************************* - * Copyright (C) 2022 Cambricon. - * - * OR IMPLIED, INCLUDING BUvoid NOKType LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENvoid SHALL THE AUTHORS OR COPYRIGHKType HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORvoid OR OTHERWISE, ARISING FROM, OUKType OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- *************************************************************************/ -#include "common_mlu_helper.hpp" -#include "roi_align_rotated_utils.hpp" - -#define ROI_OFFSET 6 -#define SAMPLING_NUM 4 - -__nram__ char nram_buffer[MAX_NRAM_SIZE]; - -template -__mlu_func__ void swap(T &a, T &b) { - T tmp = a; - a = b; - b = tmp; -} - -template -__mlu_func__ void bilinearInterpolate(const int input_height, - const int input_width, T x, T y, - const T zero_sign, T *w1, T *w2, T *w3, - T *w4, int *x_low, int *x_high, - int *y_low, int *y_high, bool *empty) { - // deal with case that the point is out of feature map boundary - if (y < -1.0 || y > input_height || x < -1.0 || x > input_width) { - *empty = true; - return; - } - - if (y <= 0) y = (T)0; - if (x <= 0) x = (T)0; - - *y_low = int(y); - *x_low = int(x); - - if (*y_low >= input_height - 1) { - *y_high = *y_low = input_height - 1; - y = (T)(*y_low); - } else { - *y_high = *y_low + 1; - } - - if (*x_low >= input_width - 1) { - *x_high = *x_low = input_width - 1; - x = T(*x_low); - } else { - *x_high = *x_low + 1; - } - T ly = y - *y_low; - T lx = x - *x_low; - T hy = 1.0 - ly; - T hx = 1.0 - lx; - *w1 = hy * hx * zero_sign; - *w2 = hy * lx * zero_sign; - *w3 = ly * hx * zero_sign; - *w4 = ly * lx * zero_sign; -} - -template -__mlu_func__ void getRoiBinInfo(const T *rois_dram, const int bin_i, - const RoiAlignRotatedParams ¶ms, - int *batch_idx, int *roi_n, int *pw, int *ph, - T *roi_center_x, T *roi_center_y, T *roi_width, - T *roi_height, T *theta) { - T offset = params.aligned ? (T)0.5 : (T)0.0; - *pw = bin_i % params.pooled_width; - *ph = (bin_i / params.pooled_width) % params.pooled_height; - *roi_n = bin_i / params.pooled_width / params.pooled_height; - const T *roi_info = rois_dram + (*roi_n) * ROI_OFFSET; - *batch_idx = (int)roi_info[0]; - *roi_center_x = roi_info[1] * (T)params.spatial_scale - offset; - *roi_center_y = roi_info[2] * (T)params.spatial_scale - offset; - *roi_width = roi_info[3] * (T)params.spatial_scale; - *roi_height = roi_info[4] * (T)params.spatial_scale; - *theta = roi_info[5]; - if (params.clockwise) { - *theta = -(*theta); - } - if (!params.aligned) { - *roi_width = *roi_width > (T)1.0 ? *roi_width : (T)1.0; - *roi_height = *roi_height > (T)1.0 ? *roi_height : (T)1.0; - } -} - -template -__mlu_func__ void roiAlignRotatedForward(const T *input_dram, - const T *rois_dram, const int batch, - const int height, const int width, - const int channel, const int rois_num, - const RoiAlignRotatedParams ¶ms, - T *output_dram) { - int align_base_128 = NFU_ALIGN_SIZE / sizeof(T); - int channel_max_cap = MAX_NRAM_SIZE / sizeof(T) / (2 * SAMPLING_NUM + 1); - channel_max_cap = channel_max_cap / align_base_128 * align_base_128; - int channel_align = channel < channel_max_cap ? 
channel : channel_max_cap; - channel_align = CEIL_ALIGN(channel_align, align_base_128); - - T *nram_out = (T *)nram_buffer; - T *nram_ping = nram_out + channel_align; - T *nram_pong = nram_ping + channel_align * SAMPLING_NUM; - - int bin_first = taskId; - int bin_end = rois_num * params.pooled_height * params.pooled_width; - - for (int bin_i = bin_first; bin_i < bin_end; bin_i += taskDim) { - T roi_center_x, roi_center_y, roi_width, roi_height, theta; - int batch_idx, roi_n, pw, ph; - getRoiBinInfo(rois_dram, bin_i, params, &batch_idx, &roi_n, &pw, &ph, - &roi_center_x, &roi_center_y, &roi_width, &roi_height, - &theta); - T bin_size_h = roi_height / params.pooled_height; - T bin_size_w = roi_width / params.pooled_width; - - int roi_bin_grid_h = - (params.sample_ratio > 0) - ? params.sample_ratio - : __float2int_up((float)roi_height / params.pooled_height); - int roi_bin_grid_w = - (params.sample_ratio > 0) - ? params.sample_ratio - : __float2int_up((float)roi_width / params.pooled_width); - T roi_start_y = -roi_height / 2; - T roi_start_x = -roi_width / 2; - const int bin_dim = roi_bin_grid_h * roi_bin_grid_w > 1 - ? roi_bin_grid_h * roi_bin_grid_w - : 1; - T cos_theta = std::cos(theta); - T sin_theta = std::sin(theta); - T zero_sign = 1.0f / bin_dim; - - bool is_first_sample = true; - int src_offset = 0; - int dst_offset = 0; - int c_rem, c_slice, c_slice_align, pongc_slice, pongc_slice_align; - for (int c_offset = 0; c_offset < channel; c_offset += channel_align) { - __nramset(nram_out, channel_align, (T)0); - c_rem = channel - c_offset; - c_slice = channel_align > c_rem ? c_rem : channel_align; - c_slice_align = CEIL_ALIGN(c_slice, align_base_128); - is_first_sample = true; - for (int iy = 0; iy < roi_bin_grid_h; ++iy) { - const T yy = roi_start_y + ph * bin_size_h + - T(iy + 0.5) * bin_size_h / roi_bin_grid_h; - for (int ix = 0; ix < roi_bin_grid_w; ++ix) { - const T xx = roi_start_x + pw * bin_size_w + - T(ix + 0.5) * bin_size_w / roi_bin_grid_w; - int sample_i = iy * roi_bin_grid_w + ix; - - T y = yy * cos_theta - xx * sin_theta + roi_center_y; - T x = yy * sin_theta + xx * cos_theta + roi_center_x; - T w1, w2, w3, w4; - bool empty = false; - int x_low, x_high, y_low, y_high; - bilinearInterpolate(height, width, x, y, zero_sign, &w1, &w2, &w3, - &w4, &x_low, &x_high, &y_low, &y_high, &empty); - int sample_wdim = x_high - x_low + 1; - /******************************************************* - | ping | pong | - |------|-----|-----|-----|-----|-----|-----|-----|-----| - |output| p1 | p2 | p3 | p4 | p1 | p2 | p3 | p4 | - |------|-----|-----|-----|-----|-----|-----|-----|-----| - ********************************************************/ - if (is_first_sample && !empty) { - // load input data from dram to nram - __nramset(nram_ping, SAMPLING_NUM * c_slice_align, (T)0); - for (int h = y_low; h <= y_high; ++h) { - src_offset = - (batch_idx * height * width + h * width + x_low) * channel + - c_offset; - dst_offset = (h - y_low) * SAMPLING_NUM * c_slice_align / 2; - if (c_slice_align == channel) { - __memcpy(nram_ping + dst_offset, input_dram + src_offset, - sample_wdim * channel * sizeof(T), GDRAM2NRAM); - } else { - __memcpy(nram_ping + dst_offset, input_dram + src_offset, - c_slice * sizeof(T), GDRAM2NRAM, - c_slice_align * sizeof(T), channel * sizeof(T), - sample_wdim - 1); - } - } - } - // load next input data to nram - if (sample_i + 1 < bin_dim) { - int p_iy = (sample_i + 1) / roi_bin_grid_w; - int p_ix = (sample_i + 1) % roi_bin_grid_w; - const T p_yy = roi_start_y + ph * bin_size_h + 
- T(p_iy + 0.5) * bin_size_h / roi_bin_grid_h; - const T p_xx = roi_start_x + pw * bin_size_w + - T(p_ix + 0.5) * bin_size_w / roi_bin_grid_w; - T p_y = p_yy * cos_theta - p_xx * sin_theta + roi_center_y; - T p_x = p_yy * sin_theta + p_xx * cos_theta + roi_center_x; - T p_w1, p_w2, p_w3, p_w4; - bool p_empty = false; - int p_x_low, p_x_high, p_y_low, p_y_high; - bilinearInterpolate(height, width, p_x, p_y, zero_sign, &p_w1, - &p_w2, &p_w3, &p_w4, &p_x_low, &p_x_high, - &p_y_low, &p_y_high, &p_empty); - int p_sample_wdim = p_x_high - p_x_low + 1; - pongc_slice = c_slice; - pongc_slice_align = c_slice_align; - if (!p_empty) { - __nramset(nram_pong, SAMPLING_NUM * pongc_slice_align, (T)0); - for (int h = p_y_low; h <= p_y_high; ++h) { - src_offset = - (batch_idx * height * width + h * width + p_x_low) * - channel + - c_offset; - dst_offset = - (h - p_y_low) * SAMPLING_NUM * pongc_slice_align / 2; - if (pongc_slice_align == channel) { - __memcpy_async( - nram_pong + dst_offset, input_dram + src_offset, - p_sample_wdim * channel * sizeof(T), GDRAM2NRAM); - } else { - __memcpy_async(nram_pong + dst_offset, - input_dram + src_offset, - pongc_slice * sizeof(T), GDRAM2NRAM, - pongc_slice_align * sizeof(T), - channel * sizeof(T), p_sample_wdim - 1); - } - } - } - } - T *tmp_sum = nram_ping + 3 * c_slice_align; - if (empty) { - __nramset(tmp_sum, c_slice_align, T(0)); - } else { - __bang_mul_const(nram_ping, nram_ping, w1, c_slice_align); - __bang_mul_const(nram_ping + c_slice_align, - nram_ping + c_slice_align, w2, c_slice_align); - __bang_mul_const(nram_ping + 2 * c_slice_align, - nram_ping + 2 * c_slice_align, w3, c_slice_align); - __bang_mul_const(nram_ping + 3 * c_slice_align, - nram_ping + 3 * c_slice_align, w4, c_slice_align); - __bang_sumpool(tmp_sum, nram_ping, c_slice_align, 1, SAMPLING_NUM, - 1, SAMPLING_NUM, 1, 1); - } - __bang_add(nram_out, nram_out, tmp_sum, c_slice_align); - swap(nram_ping, nram_pong); - - __asm__ volatile("sync;"); - is_first_sample = false; - } - } - // store the result to dram - int output_offset = - ((roi_n * params.pooled_height + ph) * params.pooled_width + pw) * - channel + - c_offset; - __memcpy(output_dram + output_offset, nram_out, c_slice * sizeof(T), - NRAM2GDRAM); - } - } -} - -template -__mlu_func__ void roiAlignRotatedBackward(const T *top_grad_dram, - const T *rois_dram, const int batch, - const int height, const int width, - const int channel, const int rois_num, - const RoiAlignRotatedParams ¶ms, - T *bottom_grad_dram) { - int align_base_128 = NFU_ALIGN_SIZE / sizeof(T); - int channel_align = CEIL_ALIGN(channel, align_base_128); - - unsigned int max_element = MAX_NRAM_SIZE / sizeof(T); - int c_limit = max_element >> 2; - c_limit = c_limit > channel_align ? 
channel_align : c_limit; - - T *nram_ping = (T *)nram_buffer; - T *nram_pong = nram_ping + 2 * c_limit; - T *nram_output = nullptr; - - int bin_first = taskId; - int bin_end = rois_num * params.pooled_height * params.pooled_width; - bool is_first_bin = true; - T roi_center_x, roi_center_y, roi_width, roi_height, theta; - int batch_idx, roi_n, pw, ph; - T pong_roi_center_x, pong_roi_center_y, pong_roi_width, pong_roi_height, - pong_theta; - int pong_batch_idx, pong_roi_n, pong_pw, pong_ph; - for (int bin_i = bin_first; bin_i < bin_end; bin_i += taskDim) { - getRoiBinInfo(rois_dram, bin_i, params, &batch_idx, &roi_n, &pw, &ph, - &roi_center_x, &roi_center_y, &roi_width, &roi_height, - &theta); - T bin_size_h = roi_height / params.pooled_height; - T bin_size_w = roi_width / params.pooled_width; - - int roi_bin_grid_h = - (params.sample_ratio > 0) - ? params.sample_ratio - : __float2int_up((float)roi_height / params.pooled_height); - int roi_bin_grid_w = - (params.sample_ratio > 0) - ? params.sample_ratio - : __float2int_up((float)roi_width / params.pooled_width); - T roi_start_y = -roi_height / 2; - T roi_start_x = -roi_width / 2; - const int bin_dim = roi_bin_grid_h * roi_bin_grid_w > 1 - ? roi_bin_grid_h * roi_bin_grid_w - : 1; - T cos_theta = std::cos(theta); - T sin_theta = std::sin(theta); - T zero_sign = 1.0f / bin_dim; - - int c_rem, c_slice, pongc_slice, c_offset; - c_rem = channel; - c_offset = 0; - /**************************************** - | ping | pong | - |---------|---------|---------|---------| - | input | output | input | output | - |---------|---------|---------|---------| - *****************************************/ - if (is_first_bin) { - // load the first top_grad to nram - c_slice = c_limit < c_rem ? c_limit : c_rem; - int top_grad_offset = - ((roi_n * params.pooled_height + ph) * params.pooled_width + pw) * - channel; - __memcpy(nram_ping, top_grad_dram + top_grad_offset, c_slice * sizeof(T), - GDRAM2NRAM); - } - nram_output = nram_ping + c_limit; - while (c_rem > 0) { - c_slice = c_slice < c_rem ? c_slice : c_rem; - // load the next top_grad to nram - if (c_rem - c_slice > 0) { - // load the rest channels to nram - pongc_slice = (c_rem - c_slice > c_slice) ? c_slice : c_rem - c_slice; - int top_grad_offset = - ((roi_n * params.pooled_height + ph) * params.pooled_width + pw) * - channel + - c_offset + c_slice; - __memcpy_async(nram_pong, top_grad_dram + top_grad_offset, - pongc_slice * sizeof(T), GDRAM2NRAM); - } else if (bin_i + taskDim < bin_end) { - // load next bin's data to nram - getRoiBinInfo(rois_dram, bin_i + taskDim, params, &pong_batch_idx, - &pong_roi_n, &pong_pw, &pong_ph, &pong_roi_center_x, - &pong_roi_center_y, &pong_roi_width, &pong_roi_height, - &pong_theta); - pongc_slice = c_limit < channel ? 
c_limit : channel; - int top_grad_offset = ((pong_roi_n * params.pooled_height + pong_ph) * - params.pooled_width + - pong_pw) * - channel; - __memcpy_async(nram_pong, top_grad_dram + top_grad_offset, - c_slice * sizeof(T), GDRAM2NRAM); - } - // comput the output in a single bin - - for (int iy = 0; iy < roi_bin_grid_h; ++iy) { - const T yy = roi_start_y + ph * bin_size_h + - T(iy + 0.5) * bin_size_h / roi_bin_grid_h; - for (int ix = 0; ix < roi_bin_grid_w; ++ix) { - const T xx = roi_start_x + pw * bin_size_w + - T(ix + 0.5) * bin_size_w / roi_bin_grid_w; - T y = yy * cos_theta - xx * sin_theta + roi_center_y; - T x = yy * sin_theta + xx * cos_theta + roi_center_x; - T w1, w2, w3, w4; - bool empty = false; - int x_low, x_high, y_low, y_high; - bilinearInterpolate(height, width, x, y, zero_sign, &w1, &w2, &w3, - &w4, &x_low, &x_high, &y_low, &y_high, &empty); - if (empty) { - continue; - } else { - __bang_mul_const(nram_output, nram_ping, w1, c_limit); - __bang_atomic_add( - (T *)nram_output, - bottom_grad_dram + batch_idx * height * width * channel + - y_low * width * channel + x_low * channel + c_offset, - (T *)nram_output, c_slice); - __bang_mul_const(nram_output, nram_ping, w2, c_limit); - __bang_atomic_add( - (T *)nram_output, - bottom_grad_dram + batch_idx * height * width * channel + - y_low * width * channel + x_high * channel + c_offset, - (T *)nram_output, c_slice); - __bang_mul_const(nram_output, nram_ping, w3, c_limit); - __bang_atomic_add( - (T *)nram_output, - bottom_grad_dram + batch_idx * height * width * channel + - y_high * width * channel + x_low * channel + c_offset, - (T *)nram_output, c_slice); - __bang_mul_const(nram_output, nram_ping, w4, c_limit); - __bang_atomic_add( - (T *)nram_output, - bottom_grad_dram + batch_idx * height * width * channel + - y_high * width * channel + x_high * channel + c_offset, - (T *)nram_output, c_slice); - } - } - } - swap(nram_ping, nram_pong); - c_rem -= c_slice; - c_offset += c_slice; - __asm__ volatile("sync;"); - } - is_first_bin = false; - } -} - -__mlu_global__ void MLUUnion1KernelRoiAlignRotatedForward( - const void *features, const void *rois, void *output, const int batch, - const int height, const int width, const int channel, const int rois_num, - const RoiAlignRotatedParams rroiAlignParams, - const cnrtDataType_t data_type) { - if (0x80 == coreId) { - return; - } - - if (data_type == CNRT_FLOAT32) { - roiAlignRotatedForward((float *)features, (float *)rois, batch, height, - width, channel, rois_num, rroiAlignParams, - (float *)output); - } else { - roiAlignRotatedForward((half *)features, (half *)rois, batch, height, width, - channel, rois_num, rroiAlignParams, (half *)output); - } -} - -__mlu_global__ void MLUUnion1KernelRoiAlignRotatedBackward( - const void *top_grad, const void *rois, void *bottom_grad, const int batch, - const int height, const int width, const int channel, const int rois_num, - const RoiAlignRotatedParams rroiAlignParams, - const cnrtDataType_t data_type) { - if (0x80 == coreId) { - return; - } - - if (data_type == CNRT_FLOAT32) { - roiAlignRotatedBackward((float *)top_grad, (float *)rois, batch, height, - width, channel, rois_num, rroiAlignParams, - (float *)bottom_grad); - } else { - roiAlignRotatedBackward((half *)top_grad, (half *)rois, batch, height, - width, channel, rois_num, rroiAlignParams, - (half *)bottom_grad); - } -} - -void KernelRoiAlignRotatedForward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const cnrtDataType_t d_type, const void *features, const void *rois, - 
void *output, const int batch, const int height, const int width, - const int channel, const int rois_num, - const RoiAlignRotatedParams roiAlignRotatedParams) { - MLUUnion1KernelRoiAlignRotatedForward<<>>( - features, rois, output, batch, height, width, channel, rois_num, - roiAlignRotatedParams, d_type); -} - -void KernelRoiAlignRotatedBackward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const cnrtDataType_t d_type, const void *top_grad, const void *rois, - void *bottom_grad, const int batch, const int height, const int width, - const int channel, const int rois_num, - const RoiAlignRotatedParams roiAlignRotatedParams) { - MLUUnion1KernelRoiAlignRotatedBackward<<>>( - top_grad, rois, bottom_grad, batch, height, width, channel, rois_num, - roiAlignRotatedParams, d_type); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_rotated_utils.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_rotated_utils.hpp deleted file mode 100644 index cd0ec024..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/roi_align_rotated_utils.hpp +++ /dev/null @@ -1,24 +0,0 @@ -/************************************************************************* - * Copyright (C) 2022 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - *************************************************************************/ -#ifndef ROI_ALIGN_ROTATED_UTILS_HPP_ -#define ROI_ALIGN_ROTATED_UTILS_HPP_ - -struct RoiAlignRotatedParams { - int pooled_height; - int pooled_width; - int sample_ratio; - float spatial_scale; - bool aligned; - bool clockwise; -}; - -#endif // ROI_ALIGN_ROTATED_UTILS_HPP_ diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/tin_shift_mlu_kernel.mlu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/tin_shift_mlu_kernel.mlu deleted file mode 100644 index 7cb6df0e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/mlu/tin_shift_mlu_kernel.mlu +++ /dev/null @@ -1,307 +0,0 @@ -/************************************************************************* - * Copyright (C) 2022 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- *************************************************************************/ -#include "common_mlu_helper.hpp" - -__nram__ char data_nram[MAX_NRAM_SIZE]; - -template -__mlu_func__ void mluMultiKernelTinShift( - const T *input, const int *shifts, T *output, const int batch_size, - const int time_size, const int channel_size, const int hw_size, - const int group_size, const int group_channel) { - for (int cur_channel_index = taskId; - cur_channel_index < batch_size * channel_size; - cur_channel_index += taskDim) { - int n_index = cur_channel_index / channel_size; - int group_id = cur_channel_index % channel_size / group_channel; - int t_shift = shifts[n_index * group_size + group_id]; - int index = cur_channel_index % channel_size * hw_size + - n_index * time_size * channel_size * hw_size; - __nramset(data_nram, MAX_NRAM_SIZE, (char)0); - __asm__ volatile("sync;"); - if (abs(t_shift) >= time_size) { - __memcpy(output + index, data_nram, hw_size * sizeof(T), NRAM2GDRAM, - channel_size * hw_size * sizeof(T), hw_size * sizeof(T), - time_size - 1); - } else { - if (t_shift > 0) { - __memcpy(data_nram + t_shift * hw_size * sizeof(T), input + index, - hw_size * sizeof(T), GDRAM2NRAM, hw_size * sizeof(T), - channel_size * hw_size * sizeof(T), time_size - 1 - t_shift); - __memcpy(output + index, data_nram, hw_size * sizeof(T), NRAM2GDRAM, - channel_size * hw_size * sizeof(T), hw_size * sizeof(T), - time_size - 1); - } else { - __memcpy(data_nram, input + (index - t_shift * channel_size * hw_size), - hw_size * sizeof(T), GDRAM2NRAM, hw_size * sizeof(T), - channel_size * hw_size * sizeof(T), time_size - 1 + t_shift); - __memcpy(output + index, data_nram, hw_size * sizeof(T), NRAM2GDRAM, - channel_size * hw_size * sizeof(T), hw_size * sizeof(T), - time_size - 1); - } - } - __asm__ volatile("sync;"); - } -} - -template -__mlu_func__ void mluHwSplit(const T *input, const int t_shift, - const int time_size, const int hw_size, - const int channel_size, const int index, - const int cur_sequence_index, - const int max_length_per_core, T *output) { - for (int cur_index = index; cur_index < index + hw_size; - cur_index += max_length_per_core) { - int memcpy_size = max_length_per_core; - if (cur_index + max_length_per_core > index + hw_size) { - memcpy_size = index + hw_size - cur_index; - } - if (cur_sequence_index - t_shift < 0 || - cur_sequence_index - t_shift >= time_size) { - __memcpy(output + cur_index, data_nram, memcpy_size * sizeof(T), - NRAM2GDRAM); - } else { - __memcpy(data_nram, input + cur_index - t_shift * channel_size * hw_size, - memcpy_size * sizeof(T), GDRAM2NRAM); - __memcpy(output + cur_index, data_nram, memcpy_size * sizeof(T), - NRAM2GDRAM); - } - __asm__ volatile("sync;"); - } -} - -template -__mlu_func__ void mluMultiKernelTinShiftSplitSequence( - const T *input, const int *shifts, T *output, const int batch_size, - const int time_size, const int channel_size, const int hw_size, - const int group_size, const int group_channel, - const int max_number_hw_per_core, const int max_length_per_core) { - const int tmp_max_number_hw_per_core = - max_number_hw_per_core > 0 ? max_number_hw_per_core : 1; - const int loop_time = time_size / tmp_max_number_hw_per_core + - ((time_size % tmp_max_number_hw_per_core) > 0 ? 
1 : 0); - int segmentime_size = tmp_max_number_hw_per_core; - int res_segment = time_size % tmp_max_number_hw_per_core; - - for (int cur_segment_index = taskId; - cur_segment_index < loop_time * batch_size * channel_size; - cur_segment_index += taskDim) { - int n_index = cur_segment_index / loop_time / channel_size; - int group_id = cur_segment_index / loop_time % channel_size / group_channel; - int t_shift = shifts[n_index * group_size + group_id]; - int index = n_index * time_size * channel_size * hw_size + - (cur_segment_index / loop_time % channel_size) * hw_size + - cur_segment_index % loop_time * segmentime_size * hw_size * - channel_size; - char *dst_gdram2nram = data_nram; - const T *src_gdram2nram = input + index; - int count_gdram2nram = -1; - int count_nram2gdram = -1; - int next_sequence_index = - index / hw_size / channel_size % time_size + segmentime_size; - int cur_sequence_index = index / hw_size / channel_size % time_size; - __nramset(data_nram, MAX_NRAM_SIZE, (char)0); - __asm__ volatile("sync;"); - if (max_number_hw_per_core == 0) { - mluHwSplit(input, t_shift, time_size, hw_size, channel_size, index, - cur_sequence_index, max_length_per_core, output); - continue; - } - if (abs(t_shift) >= time_size) { - if ((cur_segment_index + 1) % loop_time == 0 && res_segment != 0) { - __memcpy(output + index, data_nram, hw_size * sizeof(T), NRAM2GDRAM, - channel_size * hw_size * sizeof(T), hw_size * sizeof(T), - res_segment - 1); - } else { - __memcpy(output + index, data_nram, hw_size * sizeof(T), NRAM2GDRAM, - channel_size * hw_size * sizeof(T), hw_size * sizeof(T), - segmentime_size - 1); - } - continue; - } - if (t_shift == 0) { - if ((cur_segment_index + 1) % loop_time == 0 && res_segment != 0) { - dst_gdram2nram = data_nram; - src_gdram2nram = input + index; - count_gdram2nram = res_segment - 1; - count_nram2gdram = res_segment - 1; - } else { - dst_gdram2nram = data_nram; - src_gdram2nram = input + index; - count_gdram2nram = segmentime_size - 1; - count_nram2gdram = segmentime_size - 1; - } - } else if (t_shift > 0) { - int first_index_cur_channel = - n_index * time_size * channel_size * hw_size + - (cur_segment_index / loop_time % channel_size) * hw_size; - if ((cur_segment_index + 1) % loop_time == 0 && res_segment != 0) { - dst_gdram2nram = data_nram; - src_gdram2nram = - input + - (index - t_shift * channel_size * hw_size < first_index_cur_channel - ? 
first_index_cur_channel - : index - t_shift * channel_size * hw_size); - count_gdram2nram = res_segment - 1; - count_nram2gdram = res_segment - 1; - if (cur_sequence_index < t_shift && t_shift < next_sequence_index) { - dst_gdram2nram = - data_nram + t_shift % segmentime_size * hw_size * sizeof(T); - count_gdram2nram = res_segment - (t_shift - cur_sequence_index) - 1; - } - } else { - if (t_shift >= next_sequence_index) { - __memcpy(output + index, data_nram, hw_size * sizeof(T), NRAM2GDRAM, - channel_size * hw_size * sizeof(T), hw_size * sizeof(T), - segmentime_size - 1); - continue; - } else if (cur_sequence_index < t_shift && - t_shift < next_sequence_index) { - dst_gdram2nram = - data_nram + t_shift % segmentime_size * hw_size * sizeof(T); - src_gdram2nram = input + first_index_cur_channel; - count_gdram2nram = segmentime_size - (t_shift % segmentime_size) - 1; - count_nram2gdram = segmentime_size - 1; - } else { - dst_gdram2nram = data_nram; - src_gdram2nram = input + index - t_shift * channel_size * hw_size; - count_gdram2nram = segmentime_size - 1; - count_nram2gdram = segmentime_size - 1; - } - } - } else { - int offset_index = time_size + t_shift; - if (cur_sequence_index >= offset_index) { - if ((cur_segment_index + 1) % loop_time == 0 && res_segment != 0) { - __memcpy(output + index, data_nram, hw_size * sizeof(T), NRAM2GDRAM, - channel_size * hw_size * sizeof(T), hw_size * sizeof(T), - res_segment - 1); - continue; - } else { - __memcpy(output + index, data_nram, hw_size * sizeof(T), NRAM2GDRAM, - channel_size * hw_size * sizeof(T), hw_size * sizeof(T), - segmentime_size - 1); - continue; - } - } else { - dst_gdram2nram = data_nram; - src_gdram2nram = input + index - t_shift * channel_size * hw_size; - if (cur_sequence_index - t_shift + segmentime_size < time_size) { - count_gdram2nram = segmentime_size - 1; - count_nram2gdram = segmentime_size - 1; - } else { - count_gdram2nram = time_size - (cur_sequence_index - t_shift) - 1; - count_nram2gdram = - (segmentime_size - 1) < (time_size - cur_sequence_index - 1) - ? 
(segmentime_size - 1) - : (time_size - cur_sequence_index - 1); - } - } - } - __memcpy(dst_gdram2nram, src_gdram2nram, hw_size * sizeof(T), GDRAM2NRAM, - hw_size * sizeof(T), channel_size * hw_size * sizeof(T), - count_gdram2nram); - __memcpy(output + index, data_nram, hw_size * sizeof(T), NRAM2GDRAM, - channel_size * hw_size * sizeof(T), hw_size * sizeof(T), - count_nram2gdram); - __asm__ volatile("sync;"); - } -} - -__mlu_entry__ void MLUUnion1KernelTinShift( - const void *input, const void *shifts, void *output, const int batch_size, - const int time_size, const int channel_size, const int hw_size, - const int group_size, const int group_channel, - const cnrtDataType_t data_dtype) { - // make sure that memcore is not used - if (coreId == 0x80) { - return; - } - switch (data_dtype) { - case CNRT_FLOAT16: { - mluMultiKernelTinShift((half *)input, (const int *)shifts, (half *)output, - batch_size, time_size, channel_size, hw_size, - group_size, group_channel); - }; break; - case CNRT_FLOAT32: { - mluMultiKernelTinShift((float *)input, (const int *)shifts, - (float *)output, batch_size, time_size, - channel_size, hw_size, group_size, group_channel); - }; break; - default: { return; } - } -} - -__mlu_entry__ void MLUUnion1KernelTinShiftSplitSequence( - const void *input, const void *shifts, void *output, const int batch_size, - const int time_size, const int channel_size, const int hw_size, - const int group_size, const int group_channel, - const int max_number_hw_per_core, const int max_length_per_core, - const cnrtDataType_t data_dtype) { - // make sure that memcore is not used - if (coreId == 0x80) { - return; - } - switch (data_dtype) { - case CNRT_FLOAT16: { - mluMultiKernelTinShiftSplitSequence( - (half *)input, (const int *)shifts, (half *)output, batch_size, - time_size, channel_size, hw_size, group_size, group_channel, - max_number_hw_per_core, max_length_per_core); - }; break; - case CNRT_FLOAT32: { - mluMultiKernelTinShiftSplitSequence( - (float *)input, (const int *)shifts, (float *)output, batch_size, - time_size, channel_size, hw_size, group_size, group_channel, - max_number_hw_per_core, max_length_per_core); - }; break; - default: { return; } - } -} - -void KernelTinShiftForward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const void *input, const void *shifts, void *output, const int batch_size, - const int time_size, const int channel_size, const int hw_size, - const int group_size, const int group_channel, - const cnrtDataType_t data_dtype, const int channel_per_core, - const int max_number_hw_per_core, const int max_length_per_core) { - if (channel_per_core >= 1) { - MLUUnion1KernelTinShift<<>>( - input, shifts, output, batch_size, time_size, channel_size, hw_size, - group_size, group_channel, data_dtype); - } else { - MLUUnion1KernelTinShiftSplitSequence<<>>( - input, shifts, output, batch_size, time_size, channel_size, hw_size, - group_size, group_channel, max_number_hw_per_core, max_length_per_core, - data_dtype); - } -} - -void KernelTinShiftBackward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const void *grad_output, const void *shifts, void *grad_input, - const int batch_size, const int time_size, const int channel_size, - const int hw_size, const int group_size, const int group_channel, - const cnrtDataType_t data_dtype, const int channel_per_core, - const int max_number_hw_per_core, const int max_length_per_core) { - if (channel_per_core >= 1) { - MLUUnion1KernelTinShift<<>>( - grad_output, shifts, grad_input, 
batch_size, time_size, channel_size, - hw_size, group_size, group_channel, data_dtype); - } else { - MLUUnion1KernelTinShiftSplitSequence<<>>( - grad_output, shifts, grad_input, batch_size, time_size, channel_size, - hw_size, group_size, group_channel, max_number_hw_per_core, - max_length_per_core, data_dtype); - } -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/parrots_cpp_helper.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/parrots_cpp_helper.hpp deleted file mode 100644 index 72701890..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/parrots_cpp_helper.hpp +++ /dev/null @@ -1,40 +0,0 @@ -#ifndef PARROTS_CPP_HELPER -#define PARROTS_CPP_HELPER -#include -#include -#include -#include -#include - -using namespace parrots; - -#define PARROTS_PRIVATE_CASE_TYPE(prim_type, type, ...) \ - case prim_type: { \ - using scalar_t = type; \ - return __VA_ARGS__(); \ - } - -#define PARROTS_DISPATCH_FLOATING_TYPES(TYPE, ...) \ - [&] { \ - const auto& the_type = TYPE; \ - switch (the_type) { \ - PARROTS_PRIVATE_CASE_TYPE(Prim::Float64, double, __VA_ARGS__) \ - PARROTS_PRIVATE_CASE_TYPE(Prim::Float32, float, __VA_ARGS__) \ - default: \ - PARROTS_NOTSUPPORTED; \ - } \ - }() - -#define PARROTS_DISPATCH_FLOATING_TYPES_AND_HALF(TYPE, ...) \ - [&] { \ - const auto& the_type = TYPE; \ - switch (the_type) { \ - PARROTS_PRIVATE_CASE_TYPE(Prim::Float64, double, __VA_ARGS__) \ - PARROTS_PRIVATE_CASE_TYPE(Prim::Float32, float, __VA_ARGS__) \ - PARROTS_PRIVATE_CASE_TYPE(Prim::Float16, float16, __VA_ARGS__) \ - default: \ - PARROTS_NOTSUPPORTED; \ - } \ - }() - -#endif // PARROTS_CPP_HELPER diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/parrots_cuda_helper.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/parrots_cuda_helper.hpp deleted file mode 100644 index 539009c3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/parrots_cuda_helper.hpp +++ /dev/null @@ -1,111 +0,0 @@ -#ifndef PARROTS_CUDA_HELPER -#define PARROTS_CUDA_HELPER - -#include -#include - -#include -#include -#include -#include -#include -#include -#include - -#include "common_cuda_helper.hpp" -#include "parrots_cudawarpfunction.cuh" - -using namespace parrots; -using phalf = float16; - -#define __PHALF(x) (x.y) - -#define PARROTS_CUDA_CHECK(exp) \ - do { \ - cudaError_t err = exp; \ - if (err != cudaSuccess) { \ - fprintf(stderr, "cudaCheckError() failed : %s\n", \ - cudaGetErrorString(err)); \ - exit(-1); \ - } \ - } while (0) - -#define PARROTS_PRIVATE_CASE_TYPE(prim_type, type, ...) \ - case prim_type: { \ - using scalar_t = type; \ - return __VA_ARGS__(); \ - } - -#define PARROTS_DISPATCH_FLOATING_TYPES(TYPE, ...) \ - [&] { \ - const auto& the_type = TYPE; \ - switch (the_type) { \ - PARROTS_PRIVATE_CASE_TYPE(Prim::Float64, double, __VA_ARGS__) \ - PARROTS_PRIVATE_CASE_TYPE(Prim::Float32, float, __VA_ARGS__) \ - default: \ - PARROTS_NOTSUPPORTED; \ - } \ - }() - -#define PARROTS_DISPATCH_FLOATING_TYPES_AND_HALF(TYPE, ...) 
\ - [&] { \ - const auto& the_type = TYPE; \ - switch (the_type) { \ - PARROTS_PRIVATE_CASE_TYPE(Prim::Float64, double, __VA_ARGS__) \ - PARROTS_PRIVATE_CASE_TYPE(Prim::Float32, float, __VA_ARGS__) \ - PARROTS_PRIVATE_CASE_TYPE(Prim::Float16, float16, __VA_ARGS__) \ - default: \ - PARROTS_NOTSUPPORTED; \ - } \ - }() - -/** atomicAdd **/ -#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ < 600 - -static __inline__ __device__ double atomicAdd(double* address, double val) { - unsigned long long int* address_as_ull = (unsigned long long int*)address; - unsigned long long int old = *address_as_ull, assumed; - if (val == 0.0) return __longlong_as_double(old); - do { - assumed = old; - old = atomicCAS(address_as_ull, assumed, - __double_as_longlong(val + __longlong_as_double(assumed))); - } while (assumed != old); - return __longlong_as_double(old); -} - -#endif - -static __inline__ __device__ float16 atomicAdd(float16* address, float16 val) { - unsigned int* aligned = - (unsigned int*)((size_t)address - ((size_t)address & 2)); - unsigned int old = *aligned; - unsigned int assumed; - unsigned short old_as_us; - do { - assumed = old; - old_as_us = - (unsigned short)((size_t)address & 2 ? old >> 16 : old & 0xffff); - -#if __CUDACC_VER_MAJOR__ >= 9 - float16 tmp; - tmp.x = old_as_us; - float16 sum = tmp + val; - unsigned short sum_as_us = sum.x; -// half sum = __float2half_rn(__half2float(__ushort_as_half(old_as_us)) -// + (float)(val)); unsigned short sum_as_us = __half_as_ushort(sum); -#else - unsigned short sum_as_us = - __float2half_rn(__half2float(old_as_us) + (float)(val)); -#endif - - unsigned int sum_as_ui = (size_t)address & 2 - ? (sum_as_us << 16) | (old & 0xffff) - : (old & 0xffff0000) | sum_as_us; - old = atomicCAS(aligned, assumed, sum_as_ui); - } while (assumed != old); - //__half_raw raw = {old_as_us}; - // return float16(raw); - return *reinterpret_cast(&old_as_us); -} -#endif // PARROTS_CUDA_HELPER diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp deleted file mode 100644 index f68e8740..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp +++ /dev/null @@ -1,27 +0,0 @@ -#ifndef PYTORCH_CPP_HELPER -#define PYTORCH_CPP_HELPER -#include - -#include - -using namespace at; - -#define CHECK_CUDA(x) \ - TORCH_CHECK(x.device().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_MLU(x) \ - TORCH_CHECK(x.device().type() == at::kMLU, #x " must be a MLU tensor") -#define CHECK_CPU(x) \ - TORCH_CHECK(x.device().type() == at::kCPU, #x " must be a CPU tensor") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_CUDA_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) -#define CHECK_MLU_INPUT(x) \ - CHECK_MLU(x); \ - CHECK_CONTIGUOUS(x) -#define CHECK_CPU_INPUT(x) \ - CHECK_CPU(x); \ - CHECK_CONTIGUOUS(x) - -#endif // PYTORCH_CPP_HELPER diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp deleted file mode 100644 index 9869b535..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp +++ /dev/null @@ -1,19 +0,0 @@ -#ifndef PYTORCH_CUDA_HELPER -#define PYTORCH_CUDA_HELPER - -#include -#include -#include - -#include -#include - -#include 
"common_cuda_helper.hpp" - -using at::Half; -using at::Tensor; -using phalf = at::Half; - -#define __PHALF(x) (x) - -#endif // PYTORCH_CUDA_HELPER diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_device_registry.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_device_registry.hpp deleted file mode 100644 index 2a32b727..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_device_registry.hpp +++ /dev/null @@ -1,141 +0,0 @@ -#ifndef PYTORCH_DEVICE_REGISTRY_H -#define PYTORCH_DEVICE_REGISTRY_H - -// Using is recommended in the official documentation in -// https://pytorch.org/tutorials/advanced/cpp_extension.html#writing-the-c-op. -// However, we use for compatibility with CUDA 9.0 -// Read https://github.com/pytorch/extension-cpp/issues/35 for more details. -#include - -#include -#include -#include -#include - -inline std::string GetDeviceStr(const at::Device& device) { - std::string str = DeviceTypeName(device.type(), true); - if (device.has_index()) { - str.push_back(':'); - str.append(std::to_string(device.index())); - } - return str; -} - -// Registry -template -class DeviceRegistry; - -template -class DeviceRegistry { - public: - using FunctionType = Ret (*)(Args...); - static const int MAX_DEVICE_TYPES = - int8_t(at::DeviceType::COMPILE_TIME_MAX_DEVICE_TYPES); - - void Register(at::DeviceType device, FunctionType function) { - funcs_[int8_t(device)] = function; - } - - FunctionType Find(at::DeviceType device) const { - return funcs_[int8_t(device)]; - } - - static DeviceRegistry& instance() { - static DeviceRegistry inst; - return inst; - } - - private: - DeviceRegistry() { - for (size_t i = 0; i < MAX_DEVICE_TYPES; ++i) { - funcs_[i] = nullptr; - } - }; - FunctionType funcs_[MAX_DEVICE_TYPES]; -}; - -// get device of first tensor param - -template , at::Tensor>::value, - bool> = true> -at::Device GetFirstTensorDevice(T&& t, Args&&... args) { - return std::forward(t).device(); -} -template , at::Tensor>::value, - bool> = true> -at::Device GetFirstTensorDevice(T&& t, Args&&... args) { - return GetFirstTensorDevice(std::forward(args)...); -} - -// check device consistency - -inline std::pair CheckDeviceConsistency( - const at::Device& device, int index) { - return {index, device}; -} - -template , at::Tensor>::value, - bool> = true> -std::pair CheckDeviceConsistency(const at::Device& device, - int index, T&& t, - Args&&... args); - -template , at::Tensor>::value, - bool> = true> -std::pair CheckDeviceConsistency(const at::Device& device, - int index, T&& t, - Args&&... args) { - auto new_device = std::forward(t).device(); - if (new_device.type() != device.type() || - new_device.index() != device.index()) { - return {index, new_device}; - } - return CheckDeviceConsistency(device, index + 1, std::forward(args)...); -} - -template < - typename T, typename... Args, - std::enable_if_t, at::Tensor>::value, bool>> -std::pair CheckDeviceConsistency(const at::Device& device, - int index, T&& t, - Args&&... args) { - return CheckDeviceConsistency(device, index + 1, std::forward(args)...); -} - -// dispatch - -template -auto Dispatch(const R& registry, const char* name, Args&&... 
args) { - auto device = GetFirstTensorDevice(std::forward(args)...); - auto inconsist = - CheckDeviceConsistency(device, 0, std::forward(args)...); - TORCH_CHECK(inconsist.first >= int(sizeof...(Args)), name, ": at param ", - inconsist.first, - ", inconsistent device: ", GetDeviceStr(inconsist.second).c_str(), - " vs ", GetDeviceStr(device).c_str(), "\n") - auto f_ptr = registry.Find(device.type()); - TORCH_CHECK(f_ptr != nullptr, name, ": implementation for device ", - GetDeviceStr(device).c_str(), " not found.\n") - return f_ptr(std::forward(args)...); -} - -// helper macro - -#define DEVICE_REGISTRY(key) DeviceRegistry::instance() - -#define REGISTER_DEVICE_IMPL(key, device, value) \ - struct key##_##device##_registerer { \ - key##_##device##_registerer() { \ - DEVICE_REGISTRY(key).Register(at::k##device, value); \ - } \ - }; \ - static key##_##device##_registerer _##key##_##device##_registerer; - -#define DISPATCH_DEVICE_IMPL(key, ...) \ - Dispatch(DEVICE_REGISTRY(key), #key, __VA_ARGS__) - -#endif // PYTORCH_DEVICE_REGISTRY diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_mlu_helper.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_mlu_helper.hpp deleted file mode 100644 index 72dbe588..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/common/pytorch_mlu_helper.hpp +++ /dev/null @@ -1,28 +0,0 @@ -/************************************************************************* - * Copyright (C) 2021 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - *************************************************************************/ -#ifndef PYTORCH_MLU_HELPER_HPP_ -#define PYTORCH_MLU_HELPER_HPP_ - -#ifdef MMCV_WITH_MLU -#include "aten.h" - -#define NFU_ALIGN_SIZE 128 - -#define PAD_UP(x, y) (((x) / (y) + (int)((x) % (y) > 0)) * (y)) - -#define PAD_DOWN(x, y) (((x) / (y)) * (y)) - -#define CEIL_ALIGN(x, y) (((x) + (y)-1) / (y) * (y)) - -#endif - -#endif // PYTORCH_MLU_HELPER_HPP_ diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/corner_pool.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/corner_pool.h deleted file mode 100644 index b4086792..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/corner_pool.h +++ /dev/null @@ -1,46 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ONNXRUNTIME_CORNER_POOL_H -#define ONNXRUNTIME_CORNER_POOL_H - -#include -#include - -struct MMCVCornerPoolKernel { - public: - MMCVCornerPoolKernel(Ort::CustomOpApi ort, const OrtKernelInfo* info) - : ort_(ort) { - mode_ = ort_.KernelInfoGetAttribute(info, "mode"); - } - - void Compute(OrtKernelContext* context); - - private: - Ort::CustomOpApi ort_; - - int64_t mode_; -}; - -struct MMCVCornerPoolCustomOp - : Ort::CustomOpBase { - void* CreateKernel(Ort::CustomOpApi api, const OrtKernelInfo* info) const { - return new MMCVCornerPoolKernel(api, info); - } - - const char* GetName() const { return "MMCVCornerPool"; } - - size_t GetInputTypeCount() const { return 1; } - ONNXTensorElementDataType GetInputType(size_t) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - } - - size_t GetOutputTypeCount() const { return 1; } - ONNXTensorElementDataType GetOutputType(size_t) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - } - - // force cpu - const char* GetExecutionProviderType() const { - return "CPUExecutionProvider"; - } -}; -#endif // ONNXRUNTIME_CORNER_POOL_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/corner_pool.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/corner_pool.cpp deleted file mode 100644 index 397fe10e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/corner_pool.cpp +++ /dev/null @@ -1,123 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "corner_pool.h" - -#include "../ort_mmcv_utils.h" - -void TopPoolForwardCPU(const float *input, float *output, const int batch_size, - const int channels, const int height, const int width) { - for (int n = 0; n < batch_size; n++) { - int index_n = n * channels * width * height; - for (int c = 0; c < channels; c++) { - int index_n_c = index_n + c * width * height; - for (int w = 0; w < width; w++) { - // directly copy the most bottom value from input to output - output[index_n_c + (height - 1) * width + w] = - input[index_n_c + (height - 1) * width + w]; - // do top_pool - for (int h = height - 2; h >= 0; h--) { - output[index_n_c + h * width + w] = - std::max(output[index_n_c + (h + 1) * width + w], - input[index_n_c + h * width + w]); - } // for h - } // for w - } // for c - } // for n -} - -void BottomPoolForwardCPU(const float *input, float *output, - const int batch_size, const int channels, - const int height, const int width) { - for (int n = 0; n < batch_size; n++) { - int index_n = n * channels * width * height; - for (int c = 0; c < channels; c++) { - int index_n_c = index_n + c * width * height; - for (int w = 0; w < width; w++) { - // directly copy the most top value from input to output - output[index_n_c + w] = input[index_n_c + w]; - // do top_pool - for (int h = 1; h < height; h++) { - output[index_n_c + h * width + w] = - std::max(output[index_n_c + (h - 1) * width + w], - input[index_n_c + h * width + w]); - } // for h - } // for w - } // for c - } // for n -} - -void LeftPoolForwardCPU(const float *input, float *output, const int batch_size, - const int channels, const int height, const int width) { - for (int n = 0; n < batch_size; n++) { - int index_n = n * channels * width * height; - for (int c = 0; c < channels; c++) { - int index_n_c = index_n + c * width * height; - for (int h = 0; h < height; h++) { - // directly copy the most right value from input to output - output[index_n_c + h * width + width - 1] = - input[index_n_c + h * width + 
width - 1]; - // do left_pool - for (int w = width - 2; w >= 0; w--) { - output[index_n_c + h * width + w] = - std::max(output[index_n_c + h * width + w + 1], - input[index_n_c + h * width + w]); - } // for w - } // for h - } // for c - } // for n -} - -void RightPoolForwardCPU(const float *input, float *output, - const int batch_size, const int channels, - const int height, const int width) { - for (int n = 0; n < batch_size; n++) { - int index_n = n * channels * width * height; - for (int c = 0; c < channels; c++) { - int index_n_c = index_n + c * width * height; - for (int h = 0; h < height; h++) { - // directly copy the most left value from input to output - output[index_n_c + h * width] = input[index_n_c + h * width]; - // do right_pool - for (int w = 1; w < width; w++) { - output[index_n_c + h * width + w] = - std::max(output[index_n_c + h * width + w - 1], - input[index_n_c + h * width + w]); - } // for w - } // for h - } // for c - } // for n -} - -void MMCVCornerPoolKernel::Compute(OrtKernelContext *context) { - const int mode = int(mode_); - typedef float T; - const OrtValue *input = ort_.KernelContext_GetInput(context, 0); - const T *input_data = - reinterpret_cast(ort_.GetTensorData(input)); - - // get output memory - OrtTensorDimensions out_dimensions(ort_, input); - OrtValue *output = ort_.KernelContext_GetOutput( - context, 0, out_dimensions.data(), out_dimensions.size()); - T *output_data = ort_.GetTensorMutableData(output); - - // 'top': 0, 'bottom': 1, 'left': 2, 'right':3 - assert(mode == 0 || mode == 1 || mode == 2 || mode == 3); - - // do corner_pool - int batch_size = out_dimensions.data()[0]; - int input_channels = out_dimensions.data()[1]; - int input_height = out_dimensions.data()[2]; - int input_width = out_dimensions.data()[3]; - if (mode == 0) - TopPoolForwardCPU(input_data, output_data, batch_size, input_channels, - input_height, input_width); - else if (mode == 1) - BottomPoolForwardCPU(input_data, output_data, batch_size, input_channels, - input_height, input_width); - else if (mode == 2) - LeftPoolForwardCPU(input_data, output_data, batch_size, input_channels, - input_height, input_width); - else - RightPoolForwardCPU(input_data, output_data, batch_size, input_channels, - input_height, input_width); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/deform_conv.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/deform_conv.cpp deleted file mode 100644 index db1f08b5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/deform_conv.cpp +++ /dev/null @@ -1,263 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "deform_conv.h" - -#include -#include - -#include "../ort_mmcv_utils.h" - -void gemm_ref_fp32_deform(const float *A, const float *B, const float *V, - const float *H, const int32_t trans_A, - const int32_t trans_B, const int32_t M, - const int32_t N, const int32_t K, const float alpha, - const float beta, float *Y) { - if (!trans_A && !trans_B) { // MK, KN; NN - for (int64_t m = 0; m < M; ++m) { - for (int64_t n = 0; n < N; ++n) { - float y = 0.0f; - for (int64_t k = 0; k < K; ++k) { - y += A[m * K + k] * B[k * N + n]; - } - y *= alpha; - if (V) y += beta * V[n]; - if (H) y += beta * H[m * N + n]; - Y[m * N + n] = y; - } - } - } - if (trans_A && !trans_B) { // KM, KN; TN - for (int64_t m = 0; m < M; ++m) { - for (int64_t n = 0; n < N; ++n) { - float y = 0.0f; - for (int64_t k = 0; k < K; ++k) { - y += A[k * M + m] * B[k * N + n]; - } - y *= alpha; - if (V) y += beta * V[n]; - if (H) y += beta * H[m * N + n]; - Y[m * N + n] = y; - } - } - } - if (trans_A && trans_B) { // KM, NK; TT - for (int64_t m = 0; m < M; ++m) { - for (int64_t n = 0; n < N; ++n) { - float y = 0.0f; - for (int64_t k = 0; k < K; ++k) { - y += A[k * M + m] * B[n * K + k]; - } - y *= alpha; - if (V) y += beta * V[n]; - if (H) y += beta * H[m * N + n]; - Y[m * N + n] = y; - } - } - } - if (!trans_A && trans_B) { // MK, NK; NT - for (int64_t m = 0; m < M; ++m) { - for (int64_t n = 0; n < N; ++n) { - float y = 0.0f; - for (int64_t k = 0; k < K; ++k) { - y += A[m * K + k] * B[n * K + k]; - } - y *= alpha; - if (V) y += beta * V[n]; - if (H) y += beta * H[m * N + n]; - Y[m * N + n] = y; - } - } - } -} - -float bilinear_interpolate(const float *src, const int64_t src_h, - const int64_t src_w, const float h, const float w) { - if (h <= -1 || src_h <= h || w <= -1 || src_w <= w) { - return 0; - } - - int64_t h_low = floor(h); - int64_t w_low = floor(w); - int64_t h_high = h_low + 1; - int64_t w_high = w_low + 1; - - float lh = h - h_low; - float lw = w - w_low; - float hh = 1 - lh; - float hw = 1 - lw; - - float v1 = 0; - if (h_low >= 0 && w_low >= 0) v1 = src[h_low * src_w + w_low]; - float v2 = 0; - if (h_low >= 0 && w_high <= src_w - 1) v2 = src[h_low * src_w + w_high]; - float v3 = 0; - if (h_high <= src_h - 1 && w_low >= 0) v3 = src[h_high * src_w + w_low]; - float v4 = 0; - if (h_high <= src_h - 1 && w_high <= src_w - 1) - v4 = src[h_high * src_w + w_high]; - - float w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - - float val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - return val; -} - -void deformable_im2col(const float *input, const float *offset, - const int64_t src_h, const int64_t src_w, - const int64_t kernel_h, const int64_t kernel_w, - const int64_t pad_h, const int64_t pad_w, - const int64_t stride_h, const int64_t stride_w, - const int64_t dilation_h, const int64_t dilation_w, - const int64_t channels, const int64_t offset_groups, - const int64_t dst_h, const int64_t dst_w, - float *columns) { - const int64_t indices = channels * dst_h * dst_w; - for (int64_t index = 0; index != indices; ++index) { - const int64_t w_col = index % dst_w; - const int64_t h_col = (index / dst_w) % dst_h; - const int64_t c_im = index / (dst_w * dst_h); - const int64_t c_col = c_im * kernel_h * kernel_w; - - int64_t c_per_offset_grp = channels / offset_groups; - const int64_t grp_idx = c_im / c_per_offset_grp; - auto columns_ptr = - columns + (c_col * (dst_h * dst_w) + h_col * dst_w + w_col); - auto input_ptr = input + c_im * (src_h * src_w); - auto offset_ptr = - offset + grp_idx * 2 * kernel_h 
* kernel_w * dst_h * dst_w; - - for (int64_t kh = 0; kh < kernel_h; ++kh) { - for (int64_t kw = 0; kw < kernel_w; ++kw) { - const int data_offset_h_ptr = - ((2 * (kh * kernel_w + kw)) * dst_h + h_col) * dst_w + w_col; - const int data_offset_w_ptr = - ((2 * (kh * kernel_w + kw) + 1) * dst_h + h_col) * dst_w + w_col; - - const float offset_h = offset_ptr[data_offset_h_ptr]; - const float offset_w = offset_ptr[data_offset_w_ptr]; - const float ih = - (h_col * stride_h - pad_h) + kh * dilation_h + offset_h; - const float iw = - (w_col * stride_w - pad_w) + kw * dilation_w + offset_w; - *columns_ptr = bilinear_interpolate(input_ptr, src_h, src_w, ih, iw); - columns_ptr += dst_h * dst_w; - } - } - } -} - -void deformable_conv_forward( - const float *src, const float *offset, const float *filter, - const int64_t batch, const int64_t src_c, const int64_t src_h, - const int64_t src_w, const int64_t dst_c, const int64_t dst_h, - const int64_t dst_w, const int64_t group, const int64_t offset_group, - const int64_t channels, const int64_t num_output, const int64_t kernel_h, - const int64_t kernel_w, const int64_t stride_h, const int64_t stride_w, - const int64_t pad_h, const int64_t pad_w, const int64_t dilation_h, - const int64_t dilation_w, float *columns, float *dst) { - const int64_t ic_per_gp = channels / group; - const int64_t oc_per_gp = num_output / group; - for (int64_t b = 0; b < batch; ++b) { - for (int64_t g = 0; g < group; ++g) { - deformable_im2col( - src + b * src_c * src_h * src_w + g * ic_per_gp * src_h * src_w, - offset + b * offset_group * 2 * kernel_h * kernel_w * dst_h * dst_w, - src_h, src_w, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, ic_per_gp, offset_group, dst_h, dst_w, - columns); - float *dst_ptr = - dst + b * dst_c * dst_h * dst_w + g * oc_per_gp * dst_h * dst_w; - - memset(dst_ptr, 0.0f, sizeof(float) * oc_per_gp * dst_h * dst_w); - - gemm_ref_fp32_deform( - filter + g * oc_per_gp * ic_per_gp * kernel_h * kernel_w, columns, - nullptr, dst_ptr, 0, 0, oc_per_gp, dst_h * dst_w, - ic_per_gp * kernel_h * kernel_w, 1.0f, 1.0f, dst_ptr); - } - } -} - -MMCVDeformConvKernel::MMCVDeformConvKernel(OrtApi api, - const OrtKernelInfo *info) - : api_(api), ort_(api_), info_(info) { - std::vector stride = - ort_.KernelInfoGetAttribute>(info, "stride"); - stride_height_ = stride[0]; - stride_width_ = stride[1]; - std::vector padding = - ort_.KernelInfoGetAttribute>(info, "padding"); - padding_height_ = padding[0]; - padding_width_ = padding[1]; - std::vector dilation = - ort_.KernelInfoGetAttribute>(info, "dilation"); - dilation_height_ = dilation[0]; - dilation_width_ = dilation[1]; - deformable_group_ = - ort_.KernelInfoGetAttribute(info, "deform_groups"); - group_ = ort_.KernelInfoGetAttribute(info, "groups"); - - // create allocator - allocator_ = Ort::AllocatorWithDefaultOptions(); -} - -void MMCVDeformConvKernel::Compute(OrtKernelContext *context) { - const int64_t stride_height = stride_height_; - const int64_t stride_width = stride_width_; - const int64_t padding_height = padding_height_; - const int64_t padding_width = padding_width_; - const int64_t dilation_height = dilation_height_; - const int64_t dilation_width = dilation_width_; - const int64_t deformable_group = deformable_group_; - const int64_t group = group_; - - const OrtValue *input = ort_.KernelContext_GetInput(context, 0); - const float *input_data = - reinterpret_cast(ort_.GetTensorData(input)); - - const OrtValue *offset = ort_.KernelContext_GetInput(context, 1); - const float 
*offset_data = - reinterpret_cast(ort_.GetTensorData(offset)); - - const OrtValue *filter = ort_.KernelContext_GetInput(context, 2); - const float *filter_data = - reinterpret_cast(ort_.GetTensorData(filter)); - - OrtTensorDimensions input_dims(ort_, input); - OrtTensorDimensions filter_dims(ort_, filter); - - int64_t batch_size = input_dims[0]; - int64_t in_channels = input_dims[1]; - int64_t in_height = input_dims[2]; - int64_t in_width = input_dims[3]; - int64_t out_channels = filter_dims[0]; - int64_t kernel_height = filter_dims[2]; - int64_t kernel_width = filter_dims[3]; - - // get output memory - int64_t out_height = floor((in_height + 2 * padding_height - - dilation_height * (kernel_height - 1) - 1) / - stride_height + - 1); - int64_t out_width = floor( - (in_width + 2 * padding_width - dilation_width * (kernel_width - 1) - 1) / - stride_width + - 1); - - std::vector output_dims = {batch_size, out_channels, out_height, - out_width}; - - OrtValue *output = ort_.KernelContext_GetOutput( - context, 0, output_dims.data(), output_dims.size()); - float *out_ptr = ort_.GetTensorMutableData(output); - - // allocate tmp memory - int64_t column_len = (in_channels / group) * kernel_height * kernel_width * - out_height * out_width; - float *columns = (float *)allocator_.Alloc(sizeof(float) * column_len); - deformable_conv_forward( - input_data, offset_data, filter_data, batch_size, in_channels, in_height, - in_width, out_channels, out_height, out_width, group, deformable_group, - in_channels, out_channels, kernel_height, kernel_width, stride_height, - stride_width, padding_height, padding_width, dilation_height, - dilation_width, columns, out_ptr); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/gridSample.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/gridSample.cpp deleted file mode 100644 index ca150cd7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/gridSample.cpp +++ /dev/null @@ -1,314 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include - -#include "../ort_mmcv_utils.h" -#include "grid_sample.h" - -#define MIN(a, b) (((a) < (b)) ? (a) : (b)) -#define MAX(a, b) (((a) < (b)) ? (b) : (a)) -#define CLIP_COORDINATES(in, out, clip_limit) \ - out = MIN((clip_limit - 1), MAX(in, 0)) - -// modified from -// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/GridSampler.cpp - -GridSampleKernel::GridSampleKernel(OrtApi api, const OrtKernelInfo *info) - : api_(api), ort_(api_), info_(info) { - align_corners_ = ort_.KernelInfoGetAttribute(info, "align_corners"); - interpolation_mode_ = - ort_.KernelInfoGetAttribute(info, "interpolation_mode"); - padding_mode_ = ort_.KernelInfoGetAttribute(info, "padding_mode"); - - allocator_ = Ort::AllocatorWithDefaultOptions(); -} - -enum GridSamplerInterpolation { Bilinear = 0, Nearest = 1, Bicubic = 2 }; -enum GridSamplerPadding { Zeros = 0, Border = 1, Reflection = 2 }; - -template -static inline scalar_t grid_sampler_unnormalize(scalar_t coord, int64_t size, - bool align_corners) { - if (align_corners) { - return ((coord + 1) / 2) * (size - 1); - } else { - return ((coord + 1) * size - 1) / 2; - } -} - -// Clips coordinates to between 0 and clip_limit - 1 -template -static inline scalar_t clip_coordinates(scalar_t in, int64_t clip_limit) { - return std::min(static_cast(clip_limit - 1), - std::max(in, static_cast(0))); -} - -// Reflects coordinates until they fall between low and high (inclusive). 
-// The bounds are passed as twice their value so that half-integer values -// can be represented as ints. -template -static inline scalar_t reflect_coordinates(scalar_t in, int64_t twice_low, - int64_t twice_high) { - if (twice_low == twice_high) { - return static_cast(0); - } - scalar_t min = static_cast(twice_low) / 2; - scalar_t span = static_cast(twice_high - twice_low) / 2; - in = std::fabs(in - min); - // `fmod` returns same sign as `in`, which is positive after the `fabs` above. - scalar_t extra = std::fmod(in, span); - int flips = static_cast(std::floor(in / span)); - if (flips % 2 == 0) { - return extra + min; - } else { - return span - extra + min; - } -} - -template -static inline scalar_t compute_coordinates(scalar_t coord, int64_t size, - int64_t padding_mode, - bool align_corners) { - if (padding_mode == GridSamplerPadding::Border) { - coord = clip_coordinates(coord, size); - } else if (padding_mode == GridSamplerPadding::Reflection) { - if (align_corners) { - coord = reflect_coordinates(coord, 0, 2 * (size - 1)); - } else { - coord = reflect_coordinates(coord, -1, 2 * size - 1); - } - coord = clip_coordinates(coord, size); - } - return coord; -} - -// Computes the pixel source index value for a grid coordinate -template -static inline scalar_t grid_sampler_compute_source_index(scalar_t coord, - int64_t size, - int64_t padding_mode, - bool align_corners) { - coord = grid_sampler_unnormalize(coord, size, align_corners); - coord = compute_coordinates(coord, size, padding_mode, align_corners); - return coord; -} - -static inline bool within_bounds_2d(int64_t h, int64_t w, int64_t H, - int64_t W) { - return h >= 0 && h < H && w >= 0 && w < W; -} - -template -static inline scalar_t get_value_bounded(const scalar_t *data, scalar_t x, - scalar_t y, int64_t W, int64_t H, - int64_t sW, int64_t sH, - int64_t padding_mode, - bool align_corners) { - x = compute_coordinates(x, W, padding_mode, align_corners); - y = compute_coordinates(y, H, padding_mode, align_corners); - - int64_t ix = static_cast(x); - int64_t iy = static_cast(y); - - if (within_bounds_2d(iy, ix, H, W)) { - return data[iy * sH + ix * sW]; - } - return static_cast(0); -} - -template -static inline scalar_t cubic_convolution1(scalar_t x, scalar_t A) { - return ((A + 2) * x - (A + 3)) * x * x + 1; -} - -template -static inline scalar_t cubic_convolution2(scalar_t x, scalar_t A) { - return ((A * x - 5 * A) * x + 8 * A) * x - 4 * A; -} - -template -static inline void get_cubic_upsample_coefficients(scalar_t coeffs[4], - scalar_t t) { - scalar_t A = -0.75; - - scalar_t x1 = t; - coeffs[0] = cubic_convolution2(x1 + 1.0, A); - coeffs[1] = cubic_convolution1(x1, A); - - // opposite coefficients - scalar_t x2 = 1.0 - t; - coeffs[2] = cubic_convolution1(x2, A); - coeffs[3] = cubic_convolution2(x2 + 1.0, A); -} - -template -static inline scalar_t cubic_interp1d(scalar_t x0, scalar_t x1, scalar_t x2, - scalar_t x3, scalar_t t) { - scalar_t coeffs[4]; - get_cubic_upsample_coefficients(coeffs, t); - - return x0 * coeffs[0] + x1 * coeffs[1] + x2 * coeffs[2] + x3 * coeffs[3]; -} - -void GridSampleKernel::Compute(OrtKernelContext *context) { - const bool align_corners = align_corners_; - const int64_t padding_mode = padding_mode_; - const int64_t interpolation_mode = interpolation_mode_; - - const OrtValue *input = ort_.KernelContext_GetInput(context, 0); - const float *input_data = - reinterpret_cast(ort_.GetTensorData(input)); - - const OrtValue *grid = ort_.KernelContext_GetInput(context, 1); - const float *grid_data = - 
reinterpret_cast(ort_.GetTensorData(grid)); - - OrtTensorDimensions input_dims(ort_, input); - OrtTensorDimensions grid_dims(ort_, grid); - int64_t N = input_dims[0]; - int64_t C = input_dims[1]; - int64_t inp_H = input_dims[2]; - int64_t inp_W = input_dims[3]; - int64_t out_H = grid_dims[1]; - int64_t out_W = grid_dims[2]; - - std::vector output_dims = {N, C, out_H, out_W}; - OrtValue *output = ort_.KernelContext_GetOutput( - context, 0, output_dims.data(), output_dims.size()); - float *out_ptr = ort_.GetTensorMutableData(output); - - int64_t inp_sN = input_dims[1] * input_dims[2] * input_dims[3]; - int64_t inp_sC = input_dims[2] * input_dims[3]; - int64_t inp_sH = input_dims[3]; - int64_t inp_sW = 1; - int64_t grid_sN = grid_dims[1] * grid_dims[2] * grid_dims[3]; - int64_t grid_sH = grid_dims[2] * grid_dims[3]; - int64_t grid_sW = grid_dims[3]; - int64_t grid_sCoor = 1; - int64_t out_sN = output_dims[1] * output_dims[2] * output_dims[3]; - int64_t out_sC = output_dims[2] * output_dims[3]; - int64_t out_sH = output_dims[3]; - int64_t out_sW = 1; - - // loop over each output pixel - for (int64_t n = 0; n < N; ++n) { - const float *grid_ptr_N = grid_data + n * grid_sN; - const float *inp_ptr_N = input_data + n * inp_sN; - for (int64_t h = 0; h < out_H; ++h) { - for (int64_t w = 0; w < out_W; ++w) { - const float *grid_ptr_NHW = grid_ptr_N + h * grid_sH + w * grid_sW; - float x = *grid_ptr_NHW; - float y = grid_ptr_NHW[grid_sCoor]; - - float ix = grid_sampler_compute_source_index(x, inp_W, padding_mode, - align_corners); - float iy = grid_sampler_compute_source_index(y, inp_H, padding_mode, - align_corners); - - if (interpolation_mode == GridSamplerInterpolation::Bilinear) { - // get corner pixel values from (x, y) - // for 4d, we use north-east-south-west - int64_t ix_nw = static_cast(std::floor(ix)); - int64_t iy_nw = static_cast(std::floor(iy)); - - int64_t ix_ne = ix_nw + 1; - int64_t iy_ne = iy_nw; - - int64_t ix_sw = ix_nw; - int64_t iy_sw = iy_nw + 1; - - int64_t ix_se = ix_nw + 1; - int64_t iy_se = iy_nw + 1; - - // get surfaces to each neighbor: - float nw = (ix_se - ix) * (iy_se - iy); - float ne = (ix - ix_sw) * (iy_sw - iy); - float sw = (ix_ne - ix) * (iy - iy_ne); - float se = (ix - ix_nw) * (iy - iy_nw); - - // calculate bilinear weighted pixel value and set output pixel - const float *inp_ptr_NC = inp_ptr_N; - float *out_ptr_NCHW = out_ptr + n * out_sN + h * out_sH + w * out_sW; - for (int64_t c = 0; c < C; - ++c, out_ptr_NCHW += out_sC, inp_ptr_NC += inp_sC) { - auto res = static_cast(0); - if (within_bounds_2d(iy_nw, ix_nw, inp_H, inp_W)) { - res += inp_ptr_NC[iy_nw * inp_sH + ix_nw * inp_sW] * nw; - } - if (within_bounds_2d(iy_ne, ix_ne, inp_H, inp_W)) { - res += inp_ptr_NC[iy_ne * inp_sH + ix_ne * inp_sW] * ne; - } - if (within_bounds_2d(iy_sw, ix_sw, inp_H, inp_W)) { - res += inp_ptr_NC[iy_sw * inp_sH + ix_sw * inp_sW] * sw; - } - if (within_bounds_2d(iy_se, ix_se, inp_H, inp_W)) { - res += inp_ptr_NC[iy_se * inp_sH + ix_se * inp_sW] * se; - } - *out_ptr_NCHW = res; - } - } else if (interpolation_mode == GridSamplerInterpolation::Nearest) { - int64_t ix_nearest = static_cast(std::nearbyint(ix)); - int64_t iy_nearest = static_cast(std::nearbyint(iy)); - - // assign nearest neighbor pixel value to output pixel - float *out_ptr_NCHW = out_ptr + n * out_sN + h * out_sH + w * out_sW; - const float *inp_ptr_NC = inp_ptr_N; - for (int64_t c = 0; c < C; - ++c, out_ptr_NCHW += out_sC, inp_ptr_NC += inp_sC) { - if (within_bounds_2d(iy_nearest, ix_nearest, inp_H, inp_W)) { - 
*out_ptr_NCHW = - inp_ptr_NC[iy_nearest * inp_sH + ix_nearest * inp_sW]; - } else { - *out_ptr_NCHW = static_cast(0); - } - } - } else if (interpolation_mode == GridSamplerInterpolation::Bicubic) { - // grid_sampler_compute_source_index will "clip the value" of idx - // depends on the padding, - // which would cause calculation to be wrong, - // for example x = -0.1 -> ix = 0 for zero padding, but in bicubic ix - // = floor(x) = -1 - // There would be more problem in reflection padding, since the -1 and - // +1 direction is not fixed in boundary condition - ix = grid_sampler_unnormalize(x, inp_W, align_corners); - iy = grid_sampler_unnormalize(y, inp_H, align_corners); - - float ix_nw = std::floor(ix); - float iy_nw = std::floor(iy); - - const float tx = ix - ix_nw; - const float ty = iy - iy_nw; - - const float *inp_ptr_NC = inp_ptr_N; - float *out_ptr_NCHW = out_ptr + n * out_sN + h * out_sH + w * out_sW; - for (int64_t c = 0; c < C; - ++c, out_ptr_NCHW += out_sC, inp_ptr_NC += inp_sC) { - float coefficients[4]; - - // Interpolate 4 values in the x direction - for (int64_t i = 0; i < 4; ++i) { - coefficients[i] = cubic_interp1d( - get_value_bounded(inp_ptr_NC, ix_nw - 1, iy_nw - 1 + i, - inp_W, inp_H, inp_sW, inp_sH, - padding_mode, align_corners), - get_value_bounded(inp_ptr_NC, ix_nw + 0, iy_nw - 1 + i, - inp_W, inp_H, inp_sW, inp_sH, - padding_mode, align_corners), - get_value_bounded(inp_ptr_NC, ix_nw + 1, iy_nw - 1 + i, - inp_W, inp_H, inp_sW, inp_sH, - padding_mode, align_corners), - get_value_bounded(inp_ptr_NC, ix_nw + 2, iy_nw - 1 + i, - inp_W, inp_H, inp_sW, inp_sH, - padding_mode, align_corners), - tx); - } - - // Interpolate in the y direction - *out_ptr_NCHW = - cubic_interp1d(coefficients[0], coefficients[1], - coefficients[2], coefficients[3], ty); - } - } - } - } - } -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/modulated_deform_conv.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/modulated_deform_conv.cpp deleted file mode 100644 index cd8f0d06..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/modulated_deform_conv.cpp +++ /dev/null @@ -1,292 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "modulated_deform_conv.h" - -#include -#include - -#include "../ort_mmcv_utils.h" - -float bilinear_interpolate_2d(const float *src, const int64_t src_h, - const int64_t src_w, const float h, - const float w) { - if (h <= -1 || src_h <= h || w <= -1 || src_w <= w) { - return 0; - } - - int64_t h_low = floor(h); - int64_t w_low = floor(w); - int64_t h_high = h_low + 1; - int64_t w_high = w_low + 1; - - float lh = h - h_low; - float lw = w - w_low; - float hh = 1 - lh; - float hw = 1 - lw; - - float v1 = 0; - if (h_low >= 0 && w_low >= 0) v1 = src[h_low * src_w + w_low]; - float v2 = 0; - if (h_low >= 0 && w_high <= src_w - 1) v2 = src[h_low * src_w + w_high]; - float v3 = 0; - if (h_high <= src_h - 1 && w_low >= 0) v3 = src[h_high * src_w + w_low]; - float v4 = 0; - if (h_high <= src_h - 1 && w_high <= src_w - 1) - v4 = src[h_high * src_w + w_high]; - - float w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - - float val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - return val; -} - -// output: (channels * kernel_h * kernel_w, dst_h * dst_w) -void deformable_im2col_2d(const float *input, const float *offset, - const float *mask, const int64_t src_h, - const int64_t src_w, const int64_t kernel_h, - const int64_t kernel_w, const int64_t pad_h, - const int64_t pad_w, const int64_t stride_h, - const int64_t stride_w, const int64_t dilation_h, - const int64_t dilation_w, const int64_t channels, - const int64_t offset_groups, const int64_t dst_h, - const int64_t dst_w, const bool use_mask, - float *columns) { - const int64_t workload = channels * dst_h * dst_w; - for (int64_t index = 0; index != workload; ++index) { - const int64_t ow = index % dst_w; - const int64_t oh = (index / dst_w) % dst_h; - const int64_t ic = index / (dst_w * dst_h); - const int64_t oc = ic * kernel_h * kernel_w; - - int64_t c_per_offset_grp = channels / offset_groups; - const int64_t grp_idx = ic / c_per_offset_grp; - - auto columns_ptr = columns + (oc * (dst_h * dst_w) + oh * dst_w + ow); - auto input_ptr = input + ic * (src_h * src_w); - auto offset_ptr = - offset + grp_idx * 2 * kernel_h * kernel_w * dst_h * dst_w; - auto mask_ptr = mask; - if (use_mask) { - mask_ptr += grp_idx * kernel_h * kernel_w * dst_h * dst_w; - } - - for (int64_t kh = 0; kh < kernel_h; ++kh) { - for (int64_t kw = 0; kw < kernel_w; ++kw) { - const int64_t mask_idx = kh * kernel_w + kw; - const int64_t offset_idx = 2 * mask_idx; - - float mask_value = 1; - if (use_mask) { - mask_value = mask_ptr[mask_idx * (dst_h * dst_w) + oh * dst_w + ow]; - } - - const float offset_h = - offset_ptr[offset_idx * (dst_h * dst_w) + oh * dst_w + ow]; - const float offset_w = - offset_ptr[(offset_idx + 1) * (dst_h * dst_w) + oh * dst_w + ow]; - const float ih = (oh * stride_h - pad_h) + kh * dilation_h + offset_h; - const float iw = (ow * stride_w - pad_w) + kw * dilation_w + offset_w; - *columns_ptr = mask_value * - bilinear_interpolate_2d(input_ptr, src_h, src_w, ih, iw); - columns_ptr += dst_h * dst_w; - } - } - } -} - -void gemm_ref_fp32(const float *A, const float *B, const float *V, - const float *H, const int32_t trans_A, const int32_t trans_B, - const int32_t M, const int32_t N, const int32_t K, - const float alpha, const float beta, float *Y) { - if (!trans_A && !trans_B) { // MK, KN; NN - for (int64_t m = 0; m < M; ++m) { - for (int64_t n = 0; n < N; ++n) { - float y = 0.0f; - for (int64_t k = 0; k < K; ++k) { - y += A[m * K + k] * B[k * N + n]; - } - y *= alpha; - if (V) y += beta * V[n]; - if (H) y += beta * H[m * N + 
n]; - Y[m * N + n] = y; - } - } - } - if (trans_A && !trans_B) { // KM, KN; TN - for (int64_t m = 0; m < M; ++m) { - for (int64_t n = 0; n < N; ++n) { - float y = 0.0f; - for (int64_t k = 0; k < K; ++k) { - y += A[k * M + m] * B[k * N + n]; - } - y *= alpha; - if (V) y += beta * V[n]; - if (H) y += beta * H[m * N + n]; - Y[m * N + n] = y; - } - } - } - if (trans_A && trans_B) { // KM, NK; TT - for (int64_t m = 0; m < M; ++m) { - for (int64_t n = 0; n < N; ++n) { - float y = 0.0f; - for (int64_t k = 0; k < K; ++k) { - y += A[k * M + m] * B[n * K + k]; - } - y *= alpha; - if (V) y += beta * V[n]; - if (H) y += beta * H[m * N + n]; - Y[m * N + n] = y; - } - } - } - if (!trans_A && trans_B) { // MK, NK; NT - for (int64_t m = 0; m < M; ++m) { - for (int64_t n = 0; n < N; ++n) { - float y = 0.0f; - for (int64_t k = 0; k < K; ++k) { - y += A[m * K + k] * B[n * K + k]; - } - y *= alpha; - if (V) y += beta * V[n]; - if (H) y += beta * H[m * N + n]; - Y[m * N + n] = y; - } - } - } -} - -void deformable_conv2d_ref_fp32( - const float *src, const float *offset, const float *mask, - const float *filter, const float *bias, const int64_t batch, - const int64_t src_c, const int64_t src_h, const int64_t src_w, - const int64_t dst_c, const int64_t dst_h, const int64_t dst_w, - const int64_t group, const int64_t offset_group, const int64_t channels, - const int64_t num_output, const int64_t kernel_h, const int64_t kernel_w, - const int64_t stride_h, const int64_t stride_w, const int64_t pad_h, - const int64_t pad_w, const int64_t dilation_h, const int64_t dilation_w, - float *columns, float *dst) { - const int64_t ic_per_gp = channels / group; - const int64_t oc_per_gp = num_output / group; - - for (int64_t b = 0; b < batch; ++b) { - for (int64_t g = 0; g < group; ++g) { - deformable_im2col_2d( - src + b * src_c * src_h * src_w + g * ic_per_gp * src_h * src_w, - offset + b * offset_group * 2 * kernel_h * kernel_w * dst_h * dst_w, - mask + b * offset_group * kernel_h * kernel_w * dst_h * dst_w, src_h, - src_w, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, ic_per_gp, offset_group, dst_h, dst_w, - mask != nullptr, columns); - float *dst_ptr = - dst + b * dst_c * dst_h * dst_w + g * oc_per_gp * dst_h * dst_w; - if (bias != nullptr) { - const float *bias_ptr = bias + g * oc_per_gp; - for (int64_t oc = 0; oc < oc_per_gp; ++oc) { - for (int64_t hw = 0; hw < dst_h * dst_w; ++hw) { - dst_ptr[oc * dst_h * dst_w + hw] = bias_ptr[oc]; - } - } - } else { - memset(dst_ptr, 0.0f, sizeof(float) * oc_per_gp * dst_h * dst_w); - } - gemm_ref_fp32(filter + g * oc_per_gp * ic_per_gp * kernel_h * kernel_w, - columns, nullptr, dst_ptr, 0, 0, oc_per_gp, dst_h * dst_w, - ic_per_gp * kernel_h * kernel_w, 1.0f, 1.0f, dst_ptr); - } - } -} - -MMCVModulatedDeformConvKernel::MMCVModulatedDeformConvKernel( - OrtApi api, const OrtKernelInfo *info) - : api_(api), ort_(api_), info_(info) { - std::vector stride = - ort_.KernelInfoGetAttribute>(info, "stride"); - stride_height_ = stride[0]; - stride_width_ = stride[1]; - std::vector padding = - ort_.KernelInfoGetAttribute>(info, "padding"); - padding_height_ = padding[0]; - padding_width_ = padding[1]; - std::vector dilation = - ort_.KernelInfoGetAttribute>(info, "dilation"); - dilation_height_ = dilation[0]; - dilation_width_ = dilation[1]; - deformable_group_ = - ort_.KernelInfoGetAttribute(info, "deform_groups"); - group_ = ort_.KernelInfoGetAttribute(info, "groups"); - - // create allocator - allocator_ = Ort::AllocatorWithDefaultOptions(); -} - -void 
MMCVModulatedDeformConvKernel::Compute(OrtKernelContext *context) { - const int64_t stride_height = stride_height_; - const int64_t stride_width = stride_width_; - const int64_t padding_height = padding_height_; - const int64_t padding_width = padding_width_; - const int64_t dilation_height = dilation_height_; - const int64_t dilation_width = dilation_width_; - const int64_t deformable_group = deformable_group_; - const int64_t group = group_; - - const OrtValue *input = ort_.KernelContext_GetInput(context, 0); - const float *input_data = - reinterpret_cast(ort_.GetTensorData(input)); - - const OrtValue *offset = ort_.KernelContext_GetInput(context, 1); - const float *offset_data = - reinterpret_cast(ort_.GetTensorData(offset)); - - const OrtValue *mask = ort_.KernelContext_GetInput(context, 2); - const float *mask_data = - reinterpret_cast(ort_.GetTensorData(mask)); - - const OrtValue *filter = ort_.KernelContext_GetInput(context, 3); - const float *filter_data = - reinterpret_cast(ort_.GetTensorData(filter)); - - const OrtValue *bias = ort_.KernelContext_GetInput(context, 4); - const float *bias_data = - (bias != nullptr) - ? reinterpret_cast(ort_.GetTensorData(bias)) - : nullptr; - // const float *bias_data = nullptr; - - OrtTensorDimensions input_dims(ort_, input); - OrtTensorDimensions filter_dims(ort_, filter); - - int64_t batch = input_dims[0]; - int64_t channels = input_dims[1]; - int64_t in_height = input_dims[2]; - int64_t in_width = input_dims[3]; - int64_t num_output = filter_dims[0]; - int64_t kernel_height = filter_dims[2]; - int64_t kernel_width = filter_dims[3]; - - // get output memory - int64_t out_height = floor((in_height + 2 * padding_height - - dilation_height * (kernel_height - 1) - 1) / - stride_height + - 1); - int64_t out_width = floor( - (in_width + 2 * padding_width - dilation_width * (kernel_width - 1) - 1) / - stride_width + - 1); - - std::vector output_dims = {batch, num_output, out_height, out_width}; - OrtValue *output = ort_.KernelContext_GetOutput( - context, 0, output_dims.data(), output_dims.size()); - float *out_ptr = ort_.GetTensorMutableData(output); - - // allocate tmp memory - int64_t column_len = (channels / group) * kernel_height * kernel_width * - out_height * out_width; - float *columns = (float *)allocator_.Alloc(sizeof(float) * column_len); - - deformable_conv2d_ref_fp32( - input_data, offset_data, mask_data, filter_data, bias_data, batch, - channels, in_height, in_width, num_output, out_height, out_width, group, - deformable_group, channels, num_output, kernel_height, kernel_width, - stride_height, stride_width, padding_height, padding_width, - dilation_height, dilation_width, columns, out_ptr); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/nms.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/nms.cpp deleted file mode 100644 index b38a76e1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/nms.cpp +++ /dev/null @@ -1,108 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "nms.h" - -#include - -#include -#include -#include -#include -#include // std::iota -#include - -#include "../ort_mmcv_utils.h" - -NmsKernel::NmsKernel(OrtApi api, const OrtKernelInfo *info) - : api_(api), ort_(api_), info_(info) { - iou_threshold_ = ort_.KernelInfoGetAttribute(info, "iou_threshold"); - offset_ = ort_.KernelInfoGetAttribute(info, "offset"); - - // create allocator - allocator_ = Ort::AllocatorWithDefaultOptions(); -} - -void NmsKernel::Compute(OrtKernelContext *context) { - const float iou_threshold = iou_threshold_; - const int64_t offset = offset_; - - const OrtValue *boxes = ort_.KernelContext_GetInput(context, 0); - const float *boxes_data = - reinterpret_cast(ort_.GetTensorData(boxes)); - const OrtValue *scores = ort_.KernelContext_GetInput(context, 1); - const float *scores_data = - reinterpret_cast(ort_.GetTensorData(scores)); - - OrtTensorDimensions boxes_dim(ort_, boxes); - OrtTensorDimensions scores_dim(ort_, scores); - - int64_t nboxes = boxes_dim[0]; - assert(boxes_dim[1] == 4); - - // allocate tmp memory - float *tmp_boxes = (float *)allocator_.Alloc(sizeof(float) * nboxes * 4); - float *sc = (float *)allocator_.Alloc(sizeof(float) * nboxes); - float *areas = (float *)allocator_.Alloc(sizeof(float) * nboxes); - bool *select = (bool *)allocator_.Alloc(sizeof(bool) * nboxes); - for (int64_t i = 0; i < nboxes; i++) { - select[i] = true; - } - - memcpy(tmp_boxes, boxes_data, sizeof(float) * nboxes * 4); - memcpy(sc, scores_data, sizeof(float) * nboxes); - - // sort scores - std::vector tmp_sc; - for (int i = 0; i < nboxes; i++) { - tmp_sc.push_back(sc[i]); - } - std::vector order(tmp_sc.size()); - std::iota(order.begin(), order.end(), 0); - std::sort(order.begin(), order.end(), [&tmp_sc](int64_t id1, int64_t id2) { - return tmp_sc[id1] > tmp_sc[id2]; - }); - - // area = (x2 - x1 + offset) * (y2 - y1 + offset) - for (int64_t i = 0; i < nboxes; i++) { - areas[i] = (tmp_boxes[i * 4 + 2] - tmp_boxes[i * 4 + 0] + offset) * - (tmp_boxes[i * 4 + 3] - tmp_boxes[i * 4 + 1] + offset); - } - - for (int64_t _i = 0; _i < nboxes; _i++) { - if (select[_i] == false) continue; - auto i = order[_i]; - auto ix1 = tmp_boxes[i * 4 + 0]; - auto iy1 = tmp_boxes[i * 4 + 1]; - auto ix2 = tmp_boxes[i * 4 + 2]; - auto iy2 = tmp_boxes[i * 4 + 3]; - auto iarea = areas[i]; - - for (int64_t _j = _i + 1; _j < nboxes; _j++) { - if (select[_j] == false) continue; - auto j = order[_j]; - auto xx1 = std::max(ix1, tmp_boxes[j * 4 + 0]); - auto yy1 = std::max(iy1, tmp_boxes[j * 4 + 1]); - auto xx2 = std::min(ix2, tmp_boxes[j * 4 + 2]); - auto yy2 = std::min(iy2, tmp_boxes[j * 4 + 3]); - - auto w = std::max(0.f, xx2 - xx1 + offset); - auto h = std::max(0.f, yy2 - yy1 + offset); - auto inter = w * h; - auto ovr = inter / (iarea + areas[j] - inter); - if (ovr > iou_threshold) select[_j] = false; - } - } - std::vector res_order; - for (int i = 0; i < nboxes; i++) { - if (select[i]) { - res_order.push_back(order[i]); - } - } - - std::vector inds_dims({res_order.size()}); - - OrtValue *res = ort_.KernelContext_GetOutput(context, 0, inds_dims.data(), - inds_dims.size()); - int64_t *res_data = ort_.GetTensorMutableData(res); - - memcpy(res_data, res_order.data(), sizeof(int64_t) * res_order.size()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/onnxruntime_register.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/onnxruntime_register.cpp deleted file mode 100644 index 840eed82..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/onnxruntime_register.cpp +++ /dev/null @@ -1,88 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "onnxruntime_register.h" - -#include "corner_pool.h" -#include "deform_conv.h" -#include "grid_sample.h" -#include "modulated_deform_conv.h" -#include "nms.h" -#include "ort_mmcv_utils.h" -#include "reduce_ops.h" -#include "roi_align.h" -#include "roi_align_rotated.h" -#include "rotated_feature_align.h" -#include "soft_nms.h" - -const char *c_MMCVOpDomain = "mmcv"; -SoftNmsOp c_SoftNmsOp; -NmsOp c_NmsOp; -MMCVRoiAlignCustomOp c_MMCVRoiAlignCustomOp; -MMCVRoIAlignRotatedCustomOp c_MMCVRoIAlignRotatedCustomOp; -MMCVRotatedFeatureAlignCustomOp c_MMCVRotatedFeatureAlignCustomOp; -GridSampleOp c_GridSampleOp; -MMCVCumMaxCustomOp c_MMCVCumMaxCustomOp; -MMCVCumMinCustomOp c_MMCVCumMinCustomOp; -MMCVCornerPoolCustomOp c_MMCVCornerPoolCustomOp; -MMCVModulatedDeformConvOp c_MMCVModulatedDeformConvOp; -MMCVDeformConvOp c_MMCVDeformConvOp; - -OrtStatus *ORT_API_CALL RegisterCustomOps(OrtSessionOptions *options, - const OrtApiBase *api) { - OrtCustomOpDomain *domain = nullptr; - const OrtApi *ortApi = api->GetApi(ORT_API_VERSION); - - if (auto status = ortApi->CreateCustomOpDomain(c_MMCVOpDomain, &domain)) { - return status; - } - - if (auto status = ortApi->CustomOpDomain_Add(domain, &c_SoftNmsOp)) { - return status; - } - - if (auto status = ortApi->CustomOpDomain_Add(domain, &c_NmsOp)) { - return status; - } - - if (auto status = - ortApi->CustomOpDomain_Add(domain, &c_MMCVRoiAlignCustomOp)) { - return status; - } - - if (auto status = - ortApi->CustomOpDomain_Add(domain, &c_MMCVRoIAlignRotatedCustomOp)) { - return status; - } - - if (auto status = ortApi->CustomOpDomain_Add(domain, &c_GridSampleOp)) { - return status; - } - - if (auto status = - ortApi->CustomOpDomain_Add(domain, &c_MMCVCornerPoolCustomOp)) { - return status; - } - - if (auto status = ortApi->CustomOpDomain_Add(domain, &c_MMCVCumMaxCustomOp)) { - return status; - } - - if (auto status = ortApi->CustomOpDomain_Add(domain, &c_MMCVCumMinCustomOp)) { - return status; - } - - if (auto status = - ortApi->CustomOpDomain_Add(domain, &c_MMCVModulatedDeformConvOp)) { - return status; - } - - if (auto status = ortApi->CustomOpDomain_Add(domain, &c_MMCVDeformConvOp)) { - return status; - } - - if (auto status = ortApi->CustomOpDomain_Add( - domain, &c_MMCVRotatedFeatureAlignCustomOp)) { - return status; - } - - return ortApi->AddCustomOpDomain(options, domain); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/reduce_ops.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/reduce_ops.cpp deleted file mode 100644 index 81aef390..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/reduce_ops.cpp +++ /dev/null @@ -1,188 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "reduce_ops.h" - -#include - -#include - -#include "../ort_mmcv_utils.h" - -// modified from -// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/ReduceOps.cpp - -static inline int64_t maybe_wrap_dim(int64_t dim, int64_t ndims) { - int64_t min = -ndims; - int64_t max = ndims - 1; - assert(dim >= min && dim <= max); - if (dim < 0) dim += ndims; - return dim; -} - -static inline int64_t get_dim_stride(const int64_t dim, const int64_t ndims, - const int64_t *reversed_dim_cumprod) { - return dim == ndims - 1 ? 
1 : reversed_dim_cumprod[dim + 1]; -} - -static inline int64_t get_dim_size(const int64_t dim, const int64_t ndims, - const int64_t *reversed_dim_cumprod) { - return dim == ndims - 1 - ? reversed_dim_cumprod[dim] - : reversed_dim_cumprod[dim] / reversed_dim_cumprod[dim + 1]; -} - -template -void cummax_cummin_helper(const T1 *input, T1 *output, T2 *indices, - const int64_t input_dim_size, const int64_t stride) { - Operation op; - T1 out = input[0]; - int64_t idx = 0; - for (int64_t i = 0; i < input_dim_size; i++) { - T1 curr_elem = input[i * stride]; - if (op(curr_elem, out)) { - out = curr_elem; - idx = i; - } - output[i * stride] = out; - indices[i * stride] = idx; - } -} - -// modified `tensor_dim_apply3` from -// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/TensorDimApply.h. -// the difference is that: (1) use `reversed_dim_cumprod` for fast computing of -// tensor `size` and `stride`. (2) the same `stride` is used for input, output, -// and indices, since it's unnecessary to use separate values. currently -// `tensor_dim_apply3` is only used for `cummax` and `cummin`, according to the -// official pytorch projects: https://github.com/pytorch/pytorch. -template -void tensor_dim_apply3(const T1 *input, T1 *output, T2 *indices, - const int64_t dim, const int64_t ndims, - const int64_t *reversed_dim_cumprod, Function func) { - int dim_apply_finished = 0; - int64_t input_dim_size = get_dim_size(dim, ndims, reversed_dim_cumprod); - // the same stride is used for input, output and indices - int64_t stride = get_dim_stride(dim, ndims, reversed_dim_cumprod); - std::vector counter(ndims, 0); - - while (!dim_apply_finished) { - // call `func` once to update output and indices - func(input, output, indices, input_dim_size, stride); - if (ndims == 1) break; - for (int64_t dim_i = 0; dim_i < ndims; dim_i++) { - if (dim_i == dim) { - if (dim_i == (ndims - 1)) { - dim_apply_finished = 1; - break; - } - continue; - } - counter[dim_i]++; - - // the same stride is used for input, output, and indices - int64_t stride_dim_i = get_dim_stride(dim_i, ndims, reversed_dim_cumprod); - input += stride_dim_i; - output += stride_dim_i; - indices += stride_dim_i; - - if (counter[dim_i] == get_dim_size(dim_i, ndims, reversed_dim_cumprod)) { - if (dim_i == ndims - 1) { - dim_apply_finished = 1; - break; - } else { - input -= counter[dim_i] * stride_dim_i; - output -= counter[dim_i] * stride_dim_i; - indices -= counter[dim_i] * stride_dim_i; - counter[dim_i] = 0; - } - } else { - break; - } // if - } // for - } // while -} - -template -void CumMax_CumMin_CPU(const T1 *input, T1 *output, T2 *indices, - int64_t *reversed_dim_cumprod, const int64_t dim, - const OrtTensorDimensions &out_dimensions) { - // calculate numel - const int64_t ndims = out_dimensions.size(); - int64_t numel = 1; - for (int64_t dim_i = 0; dim_i < ndims; dim_i++) { - numel *= out_dimensions.data()[dim_i]; - } - - // cummax is only applied to input which is non-zero dim and non-empty - if (numel) { - // compute the cumulative production on dimension size, - // which is then used for computing the stride or size of a specific `dim`. 
- reversed_dim_cumprod[ndims - 1] = out_dimensions.data()[ndims - 1]; - for (int64_t dim_i = ndims - 2; dim_i >= 0; dim_i--) { - reversed_dim_cumprod[dim_i] = - reversed_dim_cumprod[dim_i + 1] * out_dimensions.data()[dim_i]; - } - - // do cummax or cummin based on `Operation` type - tensor_dim_apply3( - input, output, indices, dim, ndims, reversed_dim_cumprod, - cummax_cummin_helper); - } -} - -void MMCVCumMaxKernel::Compute(OrtKernelContext *context) { - // get input - const OrtValue *input = ort_.KernelContext_GetInput(context, 0); - const float *input_data = - reinterpret_cast(ort_.GetTensorData(input)); - - // get output - OrtTensorDimensions out_dimensions(ort_, input); - OrtValue *output = ort_.KernelContext_GetOutput( - context, 0, out_dimensions.data(), out_dimensions.size()); - float *output_data = ort_.GetTensorMutableData(output); - OrtValue *indices = ort_.KernelContext_GetOutput( - context, 1, out_dimensions.data(), out_dimensions.size()); - int64_t *indices_data = ort_.GetTensorMutableData(indices); - - // allocate tmp memory for computing the cumulative production on dimension - // size - const int64_t ndims = out_dimensions.size(); - assert(ndims > 0); - int64_t *reversed_dim_cumprod = - (int64_t *)allocator_.Alloc(sizeof(int64_t) * ndims); - - // dim should be wrapped if it's negative (e.g. -1) - const int64_t dim = maybe_wrap_dim(dim_, ndims); - CumMax_CumMin_CPU>( - input_data, output_data, indices_data, reversed_dim_cumprod, dim, - out_dimensions); -} - -void MMCVCumMinKernel::Compute(OrtKernelContext *context) { - // get input - const OrtValue *input = ort_.KernelContext_GetInput(context, 0); - const float *input_data = - reinterpret_cast(ort_.GetTensorData(input)); - - // get output - OrtTensorDimensions out_dimensions(ort_, input); - OrtValue *output = ort_.KernelContext_GetOutput( - context, 0, out_dimensions.data(), out_dimensions.size()); - float *output_data = ort_.GetTensorMutableData(output); - OrtValue *indices = ort_.KernelContext_GetOutput( - context, 1, out_dimensions.data(), out_dimensions.size()); - int64_t *indices_data = ort_.GetTensorMutableData(indices); - - // allocate tmp memory for computing the cumulative production on dimension - // size - const int64_t ndims = out_dimensions.size(); - assert(ndims > 0); - int64_t *reversed_dim_cumprod = - (int64_t *)allocator_.Alloc(sizeof(int64_t) * ndims); - - // dim should be wrapped if it's negative (e.g. -1) - const int64_t dim = maybe_wrap_dim(dim_, ndims); - CumMax_CumMin_CPU>( - input_data, output_data, indices_data, reversed_dim_cumprod, dim, - out_dimensions); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/roi_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/roi_align.cpp deleted file mode 100644 index 2151d2ac..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/roi_align.cpp +++ /dev/null @@ -1,265 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "roi_align.h" - -#include "../ort_mmcv_utils.h" - -// implementation taken from Caffe2 -struct PreCalc { - int pos1; - int pos2; - int pos3; - int pos4; - float w1; - float w2; - float w3; - float w4; -}; - -void pre_calc_for_bilinear_interpolate( - const int height, const int width, const int pooled_height, - const int pooled_width, const int iy_upper, const int ix_upper, - float roi_start_h, float roi_start_w, float bin_size_h, float bin_size_w, - int roi_bin_grid_h, int roi_bin_grid_w, std::vector &pre_calc) { - int pre_calc_index = 0; - for (int ph = 0; ph < pooled_height; ph++) { - for (int pw = 0; pw < pooled_width; pw++) { - for (int iy = 0; iy < iy_upper; iy++) { - const float yy = - roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < ix_upper; ix++) { - const float xx = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - float x = xx; - float y = yy; - // deal with: inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - PreCalc pc; - pc.pos1 = 0; - pc.pos2 = 0; - pc.pos3 = 0; - pc.pos4 = 0; - pc.w1 = 0; - pc.w2 = 0; - pc.w3 = 0; - pc.w4 = 0; - pre_calc[pre_calc_index] = pc; - pre_calc_index += 1; - continue; - } - - if (y <= 0) { - y = 0; - } - if (x <= 0) { - x = 0; - } - - int y_low = (int)y; - int x_low = (int)x; - int y_high; - int x_high; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (float)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (float)x_low; - } else { - x_high = x_low + 1; - } - - float ly = y - y_low; - float lx = x - x_low; - float hy = 1. - ly, hx = 1. - lx; - float w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - // save weights and indices - PreCalc pc; - pc.pos1 = y_low * width + x_low; - pc.pos2 = y_low * width + x_high; - pc.pos3 = y_high * width + x_low; - pc.pos4 = y_high * width + x_high; - pc.w1 = w1; - pc.w2 = w2; - pc.w3 = w3; - pc.w4 = w4; - pre_calc[pre_calc_index] = pc; - - pre_calc_index += 1; - } - } - } - } -} - -void ROIAlignForwardCPU(const int nthreads, const float *input, - const float *rois, float *output, float *argmax_y, - float *argmax_x, const int pooled_height, - const int pooled_width, const float spatial_scale, - const int sampling_ratio, - const int pool_mode, // 0 - max pool, 1 - avg pool - const bool aligned, const int channels, - const int height, const int width) { - int n_rois = nthreads / channels / pooled_width / pooled_height; - // (n, c, ph, pw) is an element in the pooled output - // can be parallelized using omp - // #pragma omp parallel for num_threads(32) - for (int n = 0; n < n_rois; n++) { - int index_n = n * channels * pooled_width * pooled_height; - - const float *offset_rois = rois + n * 5; - int roi_batch_ind = offset_rois[0]; - - // Do not use rounding; this implementation detail is critical - float offset = aligned ? 
(float)0.5 : (float)0.0; - float roi_start_w = offset_rois[1] * spatial_scale - offset; - float roi_start_h = offset_rois[2] * spatial_scale - offset; - float roi_end_w = offset_rois[3] * spatial_scale - offset; - float roi_end_h = offset_rois[4] * spatial_scale - offset; - - float roi_width = roi_end_w - roi_start_w; - float roi_height = roi_end_h - roi_start_h; - if (aligned) { - /*AT_ASSERTM(roi_width >= 0 && roi_height >= 0, - "ROIs in ROIAlign cannot have non-negative size!");*/ - assert(roi_width >= 0 && roi_height >= 0); - } else { // for backward-compatibility only - roi_width = std::max(roi_width, (float)1.); - roi_height = std::max(roi_height, (float)1.); - } - float bin_size_h = - static_cast(roi_height) / static_cast(pooled_height); - float bin_size_w = - static_cast(roi_width) / static_cast(pooled_width); - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (sampling_ratio > 0) - ? sampling_ratio - : ceil(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width); - - // When the grid is empty, output zeros == 0/1, instead of NaN. - const float count = - std::max(roi_bin_grid_h * roi_bin_grid_w, 1); // e.g. = 4 - - // we want to precalculate indices and weights shared by all channels, - // this is the key point of optimization - std::vector pre_calc(roi_bin_grid_h * roi_bin_grid_w * - pooled_width * pooled_height); - pre_calc_for_bilinear_interpolate( - height, width, pooled_height, pooled_width, roi_bin_grid_h, - roi_bin_grid_w, roi_start_h, roi_start_w, bin_size_h, bin_size_w, - roi_bin_grid_h, roi_bin_grid_w, pre_calc); - - for (int c = 0; c < channels; c++) { - int index_n_c = index_n + c * pooled_width * pooled_height; - const float *offset_input = - input + (roi_batch_ind * channels + c) * height * width; - int pre_calc_index = 0; - - for (int ph = 0; ph < pooled_height; ph++) { - for (int pw = 0; pw < pooled_width; pw++) { - int index = index_n_c + ph * pooled_width + pw; - - float output_val = 0.; - float maxval = -10000; - float maxidx_y = -1.f, maxidx_x = -1.f; - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - const float y = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const float x = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - PreCalc pc = pre_calc[pre_calc_index]; - float val = pc.w1 * offset_input[pc.pos1] + - pc.w2 * offset_input[pc.pos2] + - pc.w3 * offset_input[pc.pos3] + - pc.w4 * offset_input[pc.pos4]; - if (val > maxval) { - maxval = val; - maxidx_y = y; - maxidx_x = x; - } - output_val += val; - pre_calc_index += 1; - } - } - if (pool_mode == 0) { - // We do max pooling inside a bin - output[index] = maxval; - argmax_y[index] = maxidx_y; - argmax_x[index] = maxidx_x; - } else if (pool_mode == 1) { - // We do average (integral) pooling inside a bin - output[index] = output_val / count; - } // if - } // for pw - } // for ph - } // for c - } // for n -} - -void MMCVRoiAlignKernel::Compute(OrtKernelContext *context) { - // Setup inputs - const OrtValue *input_X = ort_.KernelContext_GetInput(context, 0); - const float *X_data = - reinterpret_cast(ort_.GetTensorData(input_X)); - const OrtValue *input_rois = ort_.KernelContext_GetInput(context, 1); - const float *rois = reinterpret_cast( - ort_.GetTensorData(input_rois)); - - // Setup output - OrtTensorDimensions out_dimensions(ort_, 
input_X); - OrtTensorDimensions roi_dimensions(ort_, input_rois); - - int batch_size = out_dimensions.data()[0]; - int input_channels = out_dimensions.data()[1]; - int input_height = out_dimensions.data()[2]; - int input_width = out_dimensions.data()[3]; - - out_dimensions.data()[0] = roi_dimensions.data()[0]; - out_dimensions.data()[2] = aligned_height_; - out_dimensions.data()[3] = aligned_width_; - - OrtValue *output = ort_.KernelContext_GetOutput( - context, 0, out_dimensions.data(), out_dimensions.size()); - float *out = ort_.GetTensorMutableData(output); - OrtTensorTypeAndShapeInfo *output_info = ort_.GetTensorTypeAndShape(output); - ort_.ReleaseTensorTypeAndShapeInfo(output_info); - - // TODO: forward here - int output_size = out_dimensions.data()[0]; - for (auto i = 1; i < out_dimensions.size(); ++i) { - output_size *= out_dimensions.data()[i]; - } - - int poolMod = 1; - if (pool_mode_ == "max") poolMod = 0; - - float *argmax_x = nullptr, *argmax_y = nullptr; - if (poolMod == 0) { - argmax_y = new float[output_size]; - argmax_x = new float[output_size]; - } - - ROIAlignForwardCPU(output_size, X_data, rois, out, argmax_y, argmax_x, - aligned_height_, aligned_width_, spatial_scale_, - sampling_ratio_, poolMod, aligned_, input_channels, - input_height, input_width); - - if (argmax_x) delete argmax_x; - if (argmax_y) delete argmax_y; -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/roi_align_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/roi_align_rotated.cpp deleted file mode 100644 index ce0b2202..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/roi_align_rotated.cpp +++ /dev/null @@ -1,247 +0,0 @@ -// Modified from -// https://github.com/facebookresearch/detectron2/tree/master/detectron2/layers/csrc/ROIAlignRotated -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -#include "roi_align_rotated.h" - -#include "../ort_mmcv_utils.h" - -struct PreCalc { - int pos1; - int pos2; - int pos3; - int pos4; - float w1; - float w2; - float w3; - float w4; -}; - -void pre_calc_for_bilinear_interpolate( - const int height, const int width, const int pooled_height, - const int pooled_width, const int iy_upper, const int ix_upper, - float roi_start_h, float roi_start_w, float bin_size_h, float bin_size_w, - int roi_bin_grid_h, int roi_bin_grid_w, float roi_center_h, - float roi_center_w, float cos_theta, float sin_theta, - std::vector &pre_calc) { - int pre_calc_index = 0; - for (int ph = 0; ph < pooled_height; ph++) { - for (int pw = 0; pw < pooled_width; pw++) { - for (int iy = 0; iy < iy_upper; iy++) { - const float yy = - roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < ix_upper; ix++) { - const float xx = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - // Rotate by theta around the center and translate - // In image space, (y, x) is the order for Right Handed System, - // and this is essentially multiplying the point by a rotation matrix - // to rotate it counterclockwise through angle theta. 
- float y = yy * cos_theta - xx * sin_theta + roi_center_h; - float x = yy * sin_theta + xx * cos_theta + roi_center_w; - // deal with: inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - PreCalc pc; - pc.pos1 = 0; - pc.pos2 = 0; - pc.pos3 = 0; - pc.pos4 = 0; - pc.w1 = 0; - pc.w2 = 0; - pc.w3 = 0; - pc.w4 = 0; - pre_calc[pre_calc_index] = pc; - pre_calc_index += 1; - continue; - } - - if (y < 0) { - y = 0; - } - if (x < 0) { - x = 0; - } - - int y_low = (int)y; - int x_low = (int)x; - int y_high; - int x_high; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (float)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (float)x_low; - } else { - x_high = x_low + 1; - } - - float ly = y - y_low; - float lx = x - x_low; - float hy = 1. - ly, hx = 1. - lx; - float w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - // save weights and indices - PreCalc pc; - pc.pos1 = y_low * width + x_low; - pc.pos2 = y_low * width + x_high; - pc.pos3 = y_high * width + x_low; - pc.pos4 = y_high * width + x_high; - pc.w1 = w1; - pc.w2 = w2; - pc.w3 = w3; - pc.w4 = w4; - pre_calc[pre_calc_index] = pc; - - pre_calc_index += 1; - } - } - } - } -} - -void ROIAlignRotatedForwardCPU(const int nthreads, const float *input, - const float *rois, float *output, - const float &spatial_scale, const int aligned, - const int clockwise, const int channels, - const int height, const int width, - const int pooled_height, const int pooled_width, - const int sampling_ratio) { - int n_rois = nthreads / channels / pooled_width / pooled_height; - // (n, c, ph, pw) is an element in the pooled output - // can be parallelized using omp - // #pragma omp parallel for num_threads(32) - for (int n = 0; n < n_rois; n++) { - int index_n = n * channels * pooled_width * pooled_height; - - const float *current_roi = rois + n * 6; - int roi_batch_ind = current_roi[0]; - - // Do not use rounding; this implementation detail is critical - float offset = aligned ? (float)0.5 : (float)0.0; - float roi_center_w = current_roi[1] * spatial_scale - offset; - float roi_center_h = current_roi[2] * spatial_scale - offset; - float roi_width = current_roi[3] * spatial_scale; - float roi_height = current_roi[4] * spatial_scale; - // float theta = current_roi[5] * M_PI / 180.0; - float theta = current_roi[5]; // Radian angle by default - if (clockwise) { - theta = -theta; - } - float cos_theta = cos(theta); - float sin_theta = sin(theta); - if (!aligned) { // for backward-compatibility only - roi_width = std::max(roi_width, (float)1.); - roi_height = std::max(roi_height, (float)1.); - } - - float bin_size_h = - static_cast(roi_height) / static_cast(pooled_height); - float bin_size_w = - static_cast(roi_width) / static_cast(pooled_width); - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (sampling_ratio > 0) - ? sampling_ratio - : ceil(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width); - - // We do average (integral) pooling inside a bin - const float count = - std::max(roi_bin_grid_h * roi_bin_grid_w, 1); // e.g. 
= 4 - - // we want to precalculate indices and weights shared by all channels, - // this is the key point of optimization - std::vector pre_calc(roi_bin_grid_h * roi_bin_grid_w * - pooled_width * pooled_height); - - // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). - // Appropriate translation needs to be applied after. - float roi_start_h = -roi_height / 2.0; - float roi_start_w = -roi_width / 2.0; - - pre_calc_for_bilinear_interpolate( - height, width, pooled_height, pooled_width, roi_bin_grid_h, - roi_bin_grid_w, roi_start_h, roi_start_w, bin_size_h, bin_size_w, - roi_bin_grid_h, roi_bin_grid_w, roi_center_h, roi_center_w, cos_theta, - sin_theta, pre_calc); - - for (int c = 0; c < channels; c++) { - int index_n_c = index_n + c * pooled_width * pooled_height; - const float *offset_input = - input + (roi_batch_ind * channels + c) * height * width; - int pre_calc_index = 0; - - for (int ph = 0; ph < pooled_height; ph++) { - for (int pw = 0; pw < pooled_width; pw++) { - int index = index_n_c + ph * pooled_width + pw; - - float output_val = 0.; - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - PreCalc pc = pre_calc[pre_calc_index]; - output_val += pc.w1 * offset_input[pc.pos1] + - pc.w2 * offset_input[pc.pos2] + - pc.w3 * offset_input[pc.pos3] + - pc.w4 * offset_input[pc.pos4]; - - pre_calc_index += 1; - } - } - output_val /= count; - - output[index] = output_val; - } // for pw - } // for ph - } // for c - } // for n -} - -void MMCVRoIAlignRotatedKernel::Compute(OrtKernelContext *context) { - // Setup inputs - const OrtValue *input_X = ort_.KernelContext_GetInput(context, 0); - const float *X_data = - reinterpret_cast(ort_.GetTensorData(input_X)); - const OrtValue *input_rois = ort_.KernelContext_GetInput(context, 1); - const float *rois = reinterpret_cast( - ort_.GetTensorData(input_rois)); - - // Setup output - OrtTensorDimensions out_dimensions(ort_, input_X); - OrtTensorDimensions roi_dimensions(ort_, input_rois); - - int batch_size = out_dimensions.data()[0]; - int input_channels = out_dimensions.data()[1]; - int input_height = out_dimensions.data()[2]; - int input_width = out_dimensions.data()[3]; - - out_dimensions.data()[0] = roi_dimensions.data()[0]; - out_dimensions.data()[2] = aligned_height_; - out_dimensions.data()[3] = aligned_width_; - - OrtValue *output = ort_.KernelContext_GetOutput( - context, 0, out_dimensions.data(), out_dimensions.size()); - float *out = ort_.GetTensorMutableData(output); - OrtTensorTypeAndShapeInfo *output_info = ort_.GetTensorTypeAndShape(output); - ort_.ReleaseTensorTypeAndShapeInfo(output_info); - - // TODO: forward here - int output_size = out_dimensions.data()[0]; - for (auto i = 1; i < out_dimensions.size(); ++i) { - output_size *= out_dimensions.data()[i]; - } - ROIAlignRotatedForwardCPU(output_size, X_data, rois, out, spatial_scale_, - aligned_, clockwise_, input_channels, input_height, - input_width, aligned_height_, aligned_width_, - sampling_ratio_); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/rotated_feature_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/rotated_feature_align.cpp deleted file mode 100644 index b8d07376..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/rotated_feature_align.cpp +++ /dev/null @@ -1,132 +0,0 @@ -// Modified from -// 
https://github.com/SJTU-Thinklab-Det/r3det-on-mmdetection/blob/master/mmdet/ops/fr/src/feature_refine_kernel.cu -#include "rotated_feature_align.h" - -#include "../ort_mmcv_utils.h" - -template -T bilinear_interpolate(const T *input, const int height, const int width, T y, - T x, const int index /* index for debug only*/) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) return 0; - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - int y_low = (int)y; - int x_low = (int)x; - int y_high; - int x_high; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. - lx; - // do bilinear interpolation - T v1 = input[int(fma(y_low, width, x_low))]; - T v2 = input[int(fma(y_low, width, x_high))]; - T v3 = input[int(fma(y_high, width, x_low))]; - T v4 = input[int(fma(y_high, width, x_high))]; - T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - - return val; -} - -template -void rotated_feature_align_forward_cpu_kernel( - const int nthreads, const int points, const scalar_t *bottom_data, - const scalar_t *best_bboxes, const scalar_t spatial_scale, - const int channels, const int height, const int width, scalar_t *top_data) { - for (int index = 0; index < nthreads; index++) { - int w = index % width; - int h = (index / width) % height; - int c = (index / width / height) % channels; - int n = index / width / height / channels; - - const scalar_t *bbox_offset = - best_bboxes + ((n * height + h) * width + w) * 5; - scalar_t roi_y = bbox_offset[0] * spatial_scale; - scalar_t roi_x = bbox_offset[1] * spatial_scale; - - scalar_t px[5] = {roi_x, 0, 0, 0, 0}; - scalar_t py[5] = {roi_y, 0, 0, 0, 0}; - - if (points > 1) { - scalar_t roi_w = bbox_offset[2] * spatial_scale; - scalar_t roi_h = bbox_offset[3] * spatial_scale; - scalar_t roi_a = bbox_offset[4]; - - scalar_t w_2 = roi_w / 2, h_2 = roi_h / 2; - scalar_t cosa = cosf(roi_a), sina = sinf(roi_a); - scalar_t wx = cosa * w_2, wy = sina * w_2; - scalar_t hx = -sina * h_2, hy = cosa * h_2; - - px[1] = roi_x + wx + hx; - py[1] = roi_y + wy + hy; - px[2] = roi_x - wx + hx; - py[2] = roi_y - wy + hy; - px[3] = roi_x - wx - hx; - py[3] = roi_y - wy - hy; - px[4] = roi_x + wx - hx; - py[4] = roi_y + wy - hy; - } - - const scalar_t *offset_bottom_data = - bottom_data + (n * channels + c) * height * width; - - scalar_t output_val = bottom_data[index]; - for (int i = 0; i < points; i++) { - output_val += bilinear_interpolate(offset_bottom_data, height, - width, py[i], px[i], i); - } - top_data[index] = output_val; - } -} - -void MMCVRotatedFeatureAlignKernel::Compute(OrtKernelContext *context) { - // Setup inputs - const OrtValue *input_features = ort_.KernelContext_GetInput(context, 0); - const float *features_data = reinterpret_cast( - ort_.GetTensorData(input_features)); - const OrtValue *input_best_rbboxes = ort_.KernelContext_GetInput(context, 1); - const float *best_rbboxes = reinterpret_cast( - ort_.GetTensorData(input_best_rbboxes)); - - // Setup output - OrtTensorDimensions out_dimensions(ort_, input_features); - - int batch_size = out_dimensions.data()[0]; - int input_channels = out_dimensions.data()[1]; - int input_height = out_dimensions.data()[2]; - int input_width = 
out_dimensions.data()[3]; - - OrtValue *output = ort_.KernelContext_GetOutput( - context, 0, out_dimensions.data(), out_dimensions.size()); - float *out = ort_.GetTensorMutableData(output); - OrtTensorTypeAndShapeInfo *output_info = ort_.GetTensorTypeAndShape(output); - ort_.ReleaseTensorTypeAndShapeInfo(output_info); - - // TODO: forward here - int output_size = out_dimensions.data()[0]; - for (auto i = 1; i < out_dimensions.size(); ++i) { - output_size *= out_dimensions.data()[i]; - } - rotated_feature_align_forward_cpu_kernel( - output_size, points_, features_data, best_rbboxes, spatial_scale_, - input_channels, input_height, input_width, out); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/soft_nms.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/soft_nms.cpp deleted file mode 100644 index 8bb4ce33..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/cpu/soft_nms.cpp +++ /dev/null @@ -1,156 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "soft_nms.h" - -#include - -#include -#include - -#include "../ort_mmcv_utils.h" - -SoftNmsKernel::SoftNmsKernel(OrtApi api, const OrtKernelInfo *info) - : api_(api), ort_(api_), info_(info) { - iou_threshold_ = ort_.KernelInfoGetAttribute(info, "iou_threshold"); - sigma_ = ort_.KernelInfoGetAttribute(info, "sigma"); - min_score_ = ort_.KernelInfoGetAttribute(info, "min_score"); - method_ = ort_.KernelInfoGetAttribute(info, "method"); - offset_ = ort_.KernelInfoGetAttribute(info, "offset"); - - // create allocator - allocator_ = Ort::AllocatorWithDefaultOptions(); -} - -void SoftNmsKernel::Compute(OrtKernelContext *context) { - typedef float T; - - const T iou_threshold = T(iou_threshold_); - const T sigma = T(sigma_); - const T min_score = T(min_score_); - const int method = int(method_); - const T offset = T(offset_); - - const OrtValue *boxes = ort_.KernelContext_GetInput(context, 0); - const T *boxes_data = - reinterpret_cast(ort_.GetTensorData(boxes)); - const OrtValue *scores = ort_.KernelContext_GetInput(context, 1); - const T *scores_data = - reinterpret_cast(ort_.GetTensorData(scores)); - - OrtTensorDimensions boxes_dim(ort_, boxes); - OrtTensorDimensions scores_dim(ort_, scores); - - int64_t nboxes = boxes_dim[0]; - assert(boxes_dim[1] == 4); - - // allocate tmp memory - T *tmp_boxes = (T *)allocator_.Alloc(sizeof(T) * nboxes * 4); - T *x1 = tmp_boxes; - T *y1 = tmp_boxes + 1; - T *x2 = tmp_boxes + 2; - T *y2 = tmp_boxes + 3; - T *sc = (T *)allocator_.Alloc(sizeof(T) * nboxes); - T *areas = (T *)allocator_.Alloc(sizeof(T) * nboxes); - T *de = (T *)allocator_.Alloc(sizeof(T) * nboxes * 5); - int64_t *inds = (int64_t *)allocator_.Alloc(sizeof(int64_t) * nboxes); - - memcpy(tmp_boxes, boxes_data, sizeof(T) * nboxes * 4); - memcpy(sc, scores_data, sizeof(T) * nboxes); - - // init inds as arange(nboxes) - std::generate(inds, inds + nboxes, [n = 0]() mutable { return n++; }); - - // area = (x2-x1+offset)*(y2-y1+offset) - for (int64_t i = 0; i < nboxes; i++) { - areas[i] = - (x2[i * 4] - x1[i * 4] + offset) * (y2[i * 4] - y1[i * 4] + offset); - } - - int64_t pos = 0; - - for (int64_t i = 0; i < nboxes; i++) { - auto max_score = sc[i]; - auto max_pos = i; - - pos = i + 1; - // get max box - while (pos < nboxes) { - if (max_score < sc[pos]) { - max_score = sc[pos]; - max_pos = pos; - } - pos = pos + 1; - } - // swap - auto ix1 = de[i * 5 + 0] = x1[max_pos * 4]; - auto iy1 = de[i * 5 + 1] = y1[max_pos * 4]; - auto 
ix2 = de[i * 5 + 2] = x2[max_pos * 4]; - auto iy2 = de[i * 5 + 3] = y2[max_pos * 4]; - auto iscore = de[i * 5 + 4] = sc[max_pos]; - auto iarea = areas[max_pos]; - auto iind = inds[max_pos]; - x1[max_pos * 4] = x1[i * 4]; - y1[max_pos * 4] = y1[i * 4]; - x2[max_pos * 4] = x2[i * 4]; - y2[max_pos * 4] = y2[i * 4]; - sc[max_pos] = sc[i]; - areas[max_pos] = areas[i]; - inds[max_pos] = inds[i]; - x1[i * 4] = ix1; - y1[i * 4] = iy1; - x2[i * 4] = ix2; - y2[i * 4] = iy2; - sc[i] = iscore; - areas[i] = iarea; - inds[i] = iind; - - pos = i + 1; - while (pos < nboxes) { - auto xx1 = std::max(ix1, x1[pos * 4]); - auto yy1 = std::max(iy1, y1[pos * 4]); - auto xx2 = std::min(ix2, x2[pos * 4]); - auto yy2 = std::min(iy2, y2[pos * 4]); - - auto w = std::max(0.f, xx2 - xx1 + offset); - auto h = std::max(0.f, yy2 - yy1 + offset); - auto inter = w * h; - auto ovr = inter / (iarea + areas[pos] - inter); - - float weight = 1.; - if (method == 0) { - if (ovr >= iou_threshold) weight = 0; - } else if (method == 1) { - if (ovr >= iou_threshold) weight = 1 - ovr; - } else if (method == 2) { - weight = std::exp(-(ovr * ovr) / sigma); - } - sc[pos] *= weight; - // if box score falls below threshold, discard the box by - // swapping with last box update N - if (sc[pos] < min_score) { - x1[pos * 4] = x1[(nboxes - 1) * 4]; - y1[pos * 4] = y1[(nboxes - 1) * 4]; - x2[pos * 4] = x2[(nboxes - 1) * 4]; - y2[pos * 4] = y2[(nboxes - 1) * 4]; - sc[pos] = sc[nboxes - 1]; - areas[pos] = areas[nboxes - 1]; - inds[pos] = inds[nboxes - 1]; - nboxes = nboxes - 1; - pos = pos - 1; - } - pos = pos + 1; - } - } - - std::vector dets_dim({nboxes, 5}); - OrtValue *dets = ort_.KernelContext_GetOutput(context, 0, dets_dim.data(), - dets_dim.size()); - T *dets_data = ort_.GetTensorMutableData(dets); - - std::vector inds_dim({nboxes}); - OrtValue *inds_ov = ort_.KernelContext_GetOutput(context, 1, inds_dim.data(), - inds_dim.size()); - int64_t *inds_data = ort_.GetTensorMutableData(inds_ov); - - memcpy(dets_data, de, sizeof(T) * nboxes * 5); - memcpy(inds_data, inds, sizeof(int64_t) * nboxes); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/deform_conv.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/deform_conv.h deleted file mode 100644 index 05f324a7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/deform_conv.h +++ /dev/null @@ -1,57 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ONNXRUNTIME_DEFORM_CONV_H -#define ONNXRUNTIME_DEFORM_CONV_H - -#include - -struct MMCVDeformConvKernel { - MMCVDeformConvKernel(OrtApi api, const OrtKernelInfo *info); - - void Compute(OrtKernelContext *context); - - protected: - OrtApi api_; - Ort::CustomOpApi ort_; - const OrtKernelInfo *info_; - Ort::AllocatorWithDefaultOptions allocator_; - - int64_t stride_height_; - int64_t stride_width_; - int64_t padding_height_; - int64_t padding_width_; - int64_t dilation_height_; - int64_t dilation_width_; - int64_t deformable_group_; - int64_t group_; - int64_t im2col_step_; -}; - -struct MMCVDeformConvOp - : Ort::CustomOpBase { - void *CreateKernel(OrtApi api, const OrtKernelInfo *info) const { - return new MMCVDeformConvKernel(api, info); - } - - const char *GetName() const { return "MMCVDeformConv2d"; }; - - size_t GetInputTypeCount() const { return 3; }; - ONNXTensorElementDataType GetInputType(size_t /*index*/) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - OrtCustomOpInputOutputCharacteristic GetInputCharacteristic( - size_t index) const { - return OrtCustomOpInputOutputCharacteristic::INPUT_OUTPUT_REQUIRED; - } - - size_t GetOutputTypeCount() const { return 1; }; - ONNXTensorElementDataType GetOutputType(size_t /*index*/) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - // force cpu - const char *GetExecutionProviderType() const { - return "CPUExecutionProvider"; - }; -}; -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/grid_sample.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/grid_sample.h deleted file mode 100644 index 6be15146..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/grid_sample.h +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ONNXRUNTIME_GRIDSAMPLE_H -#define ONNXRUNTIME_GRIDSAMPLE_H - -#include - -struct GridSampleKernel { - GridSampleKernel(OrtApi api, const OrtKernelInfo *info); - - void Compute(OrtKernelContext *context); - - protected: - OrtApi api_; - Ort::CustomOpApi ort_; - const OrtKernelInfo *info_; - Ort::AllocatorWithDefaultOptions allocator_; - - int64_t align_corners_; - int64_t interpolation_mode_; - int64_t padding_mode_; -}; - -struct GridSampleOp : Ort::CustomOpBase { - void *CreateKernel(OrtApi api, const OrtKernelInfo *info) const { - return new GridSampleKernel(api, info); - }; - - const char *GetName() const { return "grid_sampler"; }; - - size_t GetInputTypeCount() const { return 2; }; - ONNXTensorElementDataType GetInputType(size_t /*index*/) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - size_t GetOutputTypeCount() const { return 1; }; - ONNXTensorElementDataType GetOutputType(size_t /*index*/) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - const char *GetExecutionProviderType() const { - return "CPUExecutionProvider"; - }; -}; -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/modulated_deform_conv.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/modulated_deform_conv.h deleted file mode 100644 index 09d9d1f8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/modulated_deform_conv.h +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ONNXRUNTIME_MODULATED_DEFORM_CONV_H -#define ONNXRUNTIME_MODULATED_DEFORM_CONV_H - -#include - -struct MMCVModulatedDeformConvKernel { - MMCVModulatedDeformConvKernel(OrtApi api, const OrtKernelInfo *info); - - void Compute(OrtKernelContext *context); - - protected: - OrtApi api_; - Ort::CustomOpApi ort_; - const OrtKernelInfo *info_; - Ort::AllocatorWithDefaultOptions allocator_; - - int64_t stride_height_; - int64_t stride_width_; - int64_t padding_height_; - int64_t padding_width_; - int64_t dilation_height_; - int64_t dilation_width_; - int64_t deformable_group_; - int64_t group_; -}; - -struct MMCVModulatedDeformConvOp - : Ort::CustomOpBase { - void *CreateKernel(OrtApi api, const OrtKernelInfo *info) const { - return new MMCVModulatedDeformConvKernel(api, info); - } - - const char *GetName() const { return "MMCVModulatedDeformConv2d"; }; - - size_t GetInputTypeCount() const { return 5; }; - ONNXTensorElementDataType GetInputType(size_t /*index*/) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - OrtCustomOpInputOutputCharacteristic GetInputCharacteristic( - size_t index) const { - // The last input (index == 4) is optional, which is bias - if (index == 4) - return OrtCustomOpInputOutputCharacteristic::INPUT_OUTPUT_OPTIONAL; - - return OrtCustomOpInputOutputCharacteristic::INPUT_OUTPUT_REQUIRED; - } - - size_t GetOutputTypeCount() const { return 1; }; - ONNXTensorElementDataType GetOutputType(size_t /*index*/) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - // force cpu - const char *GetExecutionProviderType() const { - return "CPUExecutionProvider"; - }; -}; -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/nms.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/nms.h deleted file mode 100644 index ddb208de..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/nms.h +++ /dev/null @@ -1,45 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ONNXRUNTIME_NMS_H -#define ONNXRUNTIME_NMS_H - -#include - -struct NmsKernel { - NmsKernel(OrtApi api, const OrtKernelInfo *info); - - void Compute(OrtKernelContext *context); - - protected: - OrtApi api_; - Ort::CustomOpApi ort_; - const OrtKernelInfo *info_; - Ort::AllocatorWithDefaultOptions allocator_; - - float iou_threshold_; - int64_t offset_; -}; - -struct NmsOp : Ort::CustomOpBase { - void *CreateKernel(OrtApi api, const OrtKernelInfo *info) const { - return new NmsKernel(api, info); - }; - - const char *GetName() const { return "NonMaxSuppression"; }; - - size_t GetInputTypeCount() const { return 2; }; - ONNXTensorElementDataType GetInputType(size_t /*index*/) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - size_t GetOutputTypeCount() const { return 1; }; - ONNXTensorElementDataType GetOutputType(size_t index) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64; - } - - // force cpu - const char *GetExecutionProviderType() const { - return "CPUExecutionProvider"; - } -}; - -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/onnxruntime_register.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/onnxruntime_register.h deleted file mode 100644 index 84d20145..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/onnxruntime_register.h +++ /dev/null @@ -1,16 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ONNXRUNTIME_REGISTER_H -#define ONNXRUNTIME_REGISTER_H -#include - -#ifdef __cplusplus -extern "C" { -#endif - -OrtStatus *ORT_API_CALL RegisterCustomOps(OrtSessionOptions *options, - const OrtApiBase *api); - -#ifdef __cplusplus -} -#endif -#endif // ONNXRUNTIME_REGISTER_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/onnxruntime_session_options_config_keys.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/onnxruntime_session_options_config_keys.h deleted file mode 100644 index 8e8dbf4b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/onnxruntime_session_options_config_keys.h +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright (c) Microsoft Corporation. All rights reserved. -// Licensed under the MIT License. - -#ifndef ONNXRUNTIME_SESSION_OPTIONS_CONFIG_KEYS_H -#define ONNXRUNTIME_SESSION_OPTIONS_CONFIG_KEYS_H - -/* - * This file defines SessionOptions Config Keys and format of the Config Values. - * - * The Naming Convention for a SessionOptions Config Key, - * "[Area][.[SubArea1].[SubArea2]...].[Keyname]" - * Such as "ep.cuda.use_arena" - * The Config Key cannot be empty - * The maximum length of the Config Key is 128 - * - * The string format of a SessionOptions Config Value is defined individually - * for each Config. The maximum length of the Config Value is 1024 - */ - -// Key for disable PrePacking, -// If the config value is set to "1" then the prepacking is disabled, otherwise -// prepacking is enabled (default value) -static const char* const kOrtSessionOptionsConfigDisablePrepacking = - "session.disable_prepacking"; - -// A value of "1" means allocators registered in the env will be used. "0" means -// the allocators created in the session will be used. Use this to override the -// usage of env allocators on a per session level. -static const char* const kOrtSessionOptionsConfigUseEnvAllocators = - "session.use_env_allocators"; - -// Set to 'ORT' (case sensitive) to load an ORT format model. -// If unset, model type will default to ONNX unless inferred from filename -// ('.ort' == ORT format) or bytes to be ORT -static const char* const kOrtSessionOptionsConfigLoadModelFormat = - "session.load_model_format"; - -// Set to 'ORT' (case sensitive) to save optimized model in ORT format when -// SessionOptions.optimized_model_path is set. If unset, format will default to -// ONNX unless optimized_model_filepath ends in '.ort'. -static const char* const kOrtSessionOptionsConfigSaveModelFormat = - "session.save_model_format"; - -#endif // ONNXRUNTIME_SESSION_OPTIONS_CONFIG_KEYS_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/ort_mmcv_utils.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/ort_mmcv_utils.h deleted file mode 100644 index b3d6d3da..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/ort_mmcv_utils.h +++ /dev/null @@ -1,15 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ORT_MMCV_UTILS_H -#define ORT_MMCV_UTILS_H -#include - -#include - -struct OrtTensorDimensions : std::vector { - OrtTensorDimensions(Ort::CustomOpApi ort, const OrtValue* value) { - OrtTensorTypeAndShapeInfo* info = ort.GetTensorTypeAndShape(value); - std::vector::operator=(ort.GetTensorShape(info)); - ort.ReleaseTensorTypeAndShapeInfo(info); - } -}; -#endif // ORT_MMCV_UTILS_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/reduce_ops.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/reduce_ops.h deleted file mode 100644 index 996a84e1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/reduce_ops.h +++ /dev/null @@ -1,95 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ONNXRUNTIME_REDUCE_OPS_H -#define ONNXRUNTIME_REDUCE_OPS_H - -#include - -struct MMCVCumMaxKernel { - public: - MMCVCumMaxKernel(Ort::CustomOpApi ort, const OrtKernelInfo* info) - : ort_(ort) { - dim_ = ort_.KernelInfoGetAttribute(info, "dim"); - - // create allocator - allocator_ = Ort::AllocatorWithDefaultOptions(); - } - - void Compute(OrtKernelContext* context); - - private: - Ort::CustomOpApi ort_; - Ort::AllocatorWithDefaultOptions allocator_; - - int64_t dim_; -}; - -struct MMCVCumMinKernel { - public: - MMCVCumMinKernel(Ort::CustomOpApi ort, const OrtKernelInfo* info) - : ort_(ort) { - dim_ = ort_.KernelInfoGetAttribute(info, "dim"); - - // create allocator - allocator_ = Ort::AllocatorWithDefaultOptions(); - } - - void Compute(OrtKernelContext* context); - - private: - Ort::CustomOpApi ort_; - Ort::AllocatorWithDefaultOptions allocator_; - - int64_t dim_; -}; - -struct MMCVCumMaxCustomOp - : Ort::CustomOpBase { - void* CreateKernel(Ort::CustomOpApi api, const OrtKernelInfo* info) const { - return new MMCVCumMaxKernel(api, info); - } - - const char* GetName() const { return "cummax"; } - - size_t GetInputTypeCount() const { return 1; } - ONNXTensorElementDataType GetInputType(size_t) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - size_t GetOutputTypeCount() const { return 2; } - ONNXTensorElementDataType GetOutputType(size_t index) const { - if (index == 1) return ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64; - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - // force cpu - const char* GetExecutionProviderType() const { - return "CPUExecutionProvider"; - }; -}; - -struct MMCVCumMinCustomOp - : Ort::CustomOpBase { - void* CreateKernel(Ort::CustomOpApi api, const OrtKernelInfo* info) const { - return new MMCVCumMinKernel(api, info); - } - - const char* GetName() const { return "cummin"; } - - size_t GetInputTypeCount() const { return 1; } - ONNXTensorElementDataType GetInputType(size_t) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - size_t GetOutputTypeCount() const { return 2; } - ONNXTensorElementDataType GetOutputType(size_t index) const { - if (index == 1) return ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64; - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - // force cpu - const char* GetExecutionProviderType() const { - return "CPUExecutionProvider"; - }; -}; - -#endif // ONNXRUNTIME_REDUCE_OPS_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/roi_align.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/roi_align.h deleted file mode 100644 index bacc11cf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/roi_align.h +++ /dev/null @@ -1,62 
+0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ONNXRUNTIME_ROI_ALIGN_H -#define ONNXRUNTIME_ROI_ALIGN_H - -#include -#include - -#include -#include -#include -#include - -struct MMCVRoiAlignKernel { - public: - MMCVRoiAlignKernel(Ort::CustomOpApi ort, const OrtKernelInfo* info) - : ort_(ort) { - aligned_ = ort_.KernelInfoGetAttribute(info, "aligned"); - aligned_height_ = - ort_.KernelInfoGetAttribute(info, "output_height"); - aligned_width_ = ort_.KernelInfoGetAttribute(info, "output_width"); - pool_mode_ = ort_.KernelInfoGetAttribute(info, "mode"); - sampling_ratio_ = - ort_.KernelInfoGetAttribute(info, "sampling_ratio"); - spatial_scale_ = ort_.KernelInfoGetAttribute(info, "spatial_scale"); - } - - void Compute(OrtKernelContext* context); - - private: - Ort::CustomOpApi ort_; - - int aligned_height_; - int aligned_width_; - float spatial_scale_; - int sampling_ratio_; - std::string pool_mode_; - int aligned_; -}; - -struct MMCVRoiAlignCustomOp - : Ort::CustomOpBase { - void* CreateKernel(Ort::CustomOpApi api, const OrtKernelInfo* info) const { - return new MMCVRoiAlignKernel(api, info); - } - const char* GetName() const { return "MMCVRoiAlign"; } - - size_t GetInputTypeCount() const { return 2; } - ONNXTensorElementDataType GetInputType(size_t) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - } - - size_t GetOutputTypeCount() const { return 1; } - ONNXTensorElementDataType GetOutputType(size_t) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - } - - // force cpu - const char* GetExecutionProviderType() const { - return "CPUExecutionProvider"; - } -}; -#endif // ONNXRUNTIME_ROI_ALIGN_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/roi_align_rotated.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/roi_align_rotated.h deleted file mode 100644 index b9ba2895..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/roi_align_rotated.h +++ /dev/null @@ -1,62 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ONNXRUNTIME_ROI_ALIGN_ROTATED_H -#define ONNXRUNTIME_ROI_ALIGN_ROTATED_H - -#include -#include - -#include -#include -#include -#include - -struct MMCVRoIAlignRotatedKernel { - public: - MMCVRoIAlignRotatedKernel(Ort::CustomOpApi ort, const OrtKernelInfo* info) - : ort_(ort) { - aligned_height_ = - ort_.KernelInfoGetAttribute(info, "output_height"); - aligned_width_ = ort_.KernelInfoGetAttribute(info, "output_width"); - sampling_ratio_ = - ort_.KernelInfoGetAttribute(info, "sampling_ratio"); - spatial_scale_ = ort_.KernelInfoGetAttribute(info, "spatial_scale"); - aligned_ = ort_.KernelInfoGetAttribute(info, "aligned"); - clockwise_ = ort_.KernelInfoGetAttribute(info, "clockwise"); - } - - void Compute(OrtKernelContext* context); - - private: - Ort::CustomOpApi ort_; - int aligned_height_; - int aligned_width_; - float spatial_scale_; - int sampling_ratio_; - int aligned_; - int clockwise_; -}; - -struct MMCVRoIAlignRotatedCustomOp - : Ort::CustomOpBase { - void* CreateKernel(Ort::CustomOpApi api, const OrtKernelInfo* info) const { - return new MMCVRoIAlignRotatedKernel(api, info); - } - const char* GetName() const { return "MMCVRoIAlignRotated"; } - - size_t GetInputTypeCount() const { return 2; } - ONNXTensorElementDataType GetInputType(size_t) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - } - - size_t GetOutputTypeCount() const { return 1; } - ONNXTensorElementDataType GetOutputType(size_t) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - } - - // force cpu - const char* GetExecutionProviderType() const { - return "CPUExecutionProvider"; - } -}; -#endif // ONNXRUNTIME_ROI_ALIGN_ROTATED_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/rotated_feature_align.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/rotated_feature_align.h deleted file mode 100644 index 0fc03d84..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/rotated_feature_align.h +++ /dev/null @@ -1,50 +0,0 @@ -#ifndef ONNXRUNTIME_ROTATED_FEATURE_ALIGN_H -#define ONNXRUNTIME_ROTATED_FEATURE_ALIGN_H - -#include - -#include - -struct MMCVRotatedFeatureAlignKernel { - public: - MMCVRotatedFeatureAlignKernel(Ort::CustomOpApi ort, const OrtKernelInfo* info) - : ort_(ort) { - spatial_scale_ = ort_.KernelInfoGetAttribute(info, "spatial_scale"); - points_ = ort_.KernelInfoGetAttribute(info, "points"); - } - - void Compute(OrtKernelContext* context); - - private: - Ort::CustomOpApi ort_; - float spatial_scale_; - int points_; -}; - -struct MMCVRotatedFeatureAlignCustomOp - : Ort::CustomOpBase { - void* CreateKernel(Ort::CustomOpApi api, const OrtKernelInfo* info) const { - return new MMCVRotatedFeatureAlignKernel(api, info); - } - - const char* GetName() const { return "MMCVRotatedFeatureAlign"; } - - size_t GetInputTypeCount() const { return 2; } - - ONNXTensorElementDataType GetInputType(size_t) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - } - - size_t GetOutputTypeCount() const { return 1; } - - ONNXTensorElementDataType GetOutputType(size_t) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - } - - // force cpu - const char* GetExecutionProviderType() const { - return "CPUExecutionProvider"; - } -}; -#endif // ONNXRUNTIME_ROTATED_FEATURE_ALIGN_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/soft_nms.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/soft_nms.h deleted file mode 100644 index 
7f9f8e62..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/onnxruntime/soft_nms.h +++ /dev/null @@ -1,49 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ONNXRUNTIME_SOFT_NMS_H -#define ONNXRUNTIME_SOFT_NMS_H -#include - -struct SoftNmsKernel { - SoftNmsKernel(OrtApi api, const OrtKernelInfo *info); - - void Compute(OrtKernelContext *context); - - protected: - OrtApi api_; - Ort::CustomOpApi ort_; - const OrtKernelInfo *info_; - Ort::AllocatorWithDefaultOptions allocator_; - - float iou_threshold_; - float sigma_; - float min_score_; - int64_t method_; - int64_t offset_; -}; - -struct SoftNmsOp : Ort::CustomOpBase { - void *CreateKernel(OrtApi api, const OrtKernelInfo *info) const { - return new SoftNmsKernel(api, info); - }; - - const char *GetName() const { return "SoftNonMaxSuppression"; }; - - size_t GetInputTypeCount() const { return 2; }; - ONNXTensorElementDataType GetInputType(size_t /*index*/) const { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - size_t GetOutputTypeCount() const { return 2; }; - ONNXTensorElementDataType GetOutputType(size_t index) const { - if (index == 1) { - return ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64; - } - return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; - }; - - // force cpu - const char *GetExecutionProviderType() const { - return "CPUExecutionProvider"; - }; -}; -#endif // ONNXRUNTIME_SOFT_NMS_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter.cpp deleted file mode 100644 index e1ead1f8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter.cpp +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -// Modified from -// https://github.com/csuhan/s2anet/blob/master/mmdet/ops/orn/src/ActiveRotatingFilter.h - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void active_rotated_filter_forward_impl(const Tensor input, - const Tensor indices, Tensor output) { - DISPATCH_DEVICE_IMPL(active_rotated_filter_forward_impl, input, indices, - output); -} - -void active_rotated_filter_backward_impl(const Tensor grad_out, - const Tensor indices, Tensor grad_in) { - DISPATCH_DEVICE_IMPL(active_rotated_filter_backward_impl, grad_out, indices, - grad_in); -} - -void active_rotated_filter_forward(const Tensor input, const Tensor indices, - Tensor output) { - active_rotated_filter_forward_impl(input, indices, output); -} - -void active_rotated_filter_backward(const Tensor grad_out, const Tensor indices, - Tensor grad_in) { - active_rotated_filter_backward_impl(grad_out, indices, grad_in); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter_parrots.cpp deleted file mode 100644 index 9097f7e0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter_parrots.cpp +++ /dev/null @@ -1,63 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "active_rotated_filter_pytorch.h" -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void active_rotated_filter_forward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto input = buildATensor(ctx, ins[0]); - auto indices = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - active_rotated_filter_forward(input, indices, output); -} - -void active_rotated_filter_backward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto grad_out = buildATensor(ctx, ins[0]); - auto indices = buildATensor(ctx, ins[1]); - auto grad_in = buildATensor(ctx, outs[0]); - active_rotated_filter_backward(grad_out, indices, grad_in); -} -#endif - -void active_rotated_filter_forward_cpu_parrots( - HostContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto input = buildATensor(ctx, ins[0]); - auto indices = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - active_rotated_filter_forward(input, indices, output); -} - -void active_rotated_filter_backward_cpu_parrots( - HostContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto grad_out = buildATensor(ctx, ins[0]); - auto indices = buildATensor(ctx, ins[1]); - auto grad_in = buildATensor(ctx, outs[0]); - active_rotated_filter_backward(grad_out, indices, grad_in); -} - -PARROTS_EXTENSION_REGISTER(active_rotated_filter_forward) - .input(2) - .output(1) - .apply(active_rotated_filter_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(active_rotated_filter_forward_cuda_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(active_rotated_filter_backward) - .input(2) - .output(1) - .apply(active_rotated_filter_backward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(active_rotated_filter_backward_cuda_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter_pytorch.h deleted file mode 100644 index 9a4d2ce9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/active_rotated_filter_pytorch.h +++ /dev/null @@ -1,13 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ACTIVE_ROTATED_FILTER_PYTORCH_H -#define ACTIVE_ROTATED_FILTER_PYTORCH_H -#include -using namespace at; - -void active_rotated_filter_forward(const Tensor input, const Tensor indices, - Tensor output); - -void active_rotated_filter_backward(const Tensor grad_out, const Tensor indices, - Tensor grad_in); - -#endif // ACTIVE_ROTATED_FILTER_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk.cpp deleted file mode 100644 index 90762771..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk.cpp +++ /dev/null @@ -1,42 +0,0 @@ -// Modified from -// https://github.com/CVMI-Lab/PAConv/tree/main/scene_seg/lib/paconv_lib/src/gpu -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void assign_score_withk_forward_impl(int B, int N0, int N1, int M, int K, int O, - int aggregate, const Tensor& points, - const Tensor& centers, - const Tensor& scores, - const Tensor& knn_idx, Tensor& output) { - DISPATCH_DEVICE_IMPL(assign_score_withk_forward_impl, B, N0, N1, M, K, O, - aggregate, points, centers, scores, knn_idx, output); -} - -void assign_score_withk_backward_impl( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& grad_out, const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores) { - DISPATCH_DEVICE_IMPL(assign_score_withk_backward_impl, B, N0, N1, M, K, O, - aggregate, grad_out, points, centers, scores, knn_idx, - grad_points, grad_centers, grad_scores); -} - -void assign_score_withk_forward(const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, - Tensor& output, int B, int N0, int N1, int M, - int K, int O, int aggregate) { - assign_score_withk_forward_impl(B, N0, N1, M, K, O, aggregate, points, - centers, scores, knn_idx, output); -} - -void assign_score_withk_backward(const Tensor& grad_out, const Tensor& points, - const Tensor& centers, const Tensor& scores, - const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores, - int B, int N0, int N1, int M, int K, int O, - int aggregate) { - assign_score_withk_backward_impl(B, N0, N1, M, K, O, aggregate, grad_out, - points, centers, scores, knn_idx, - grad_points, grad_centers, grad_scores); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk_parrots.cpp deleted file mode 100644 index 5729c716..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk_parrots.cpp +++ /dev/null @@ -1,89 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "assign_score_withk_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void assign_score_withk_forward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int B, N0, N1, M, K, O, aggregate; - SSAttrs(attr) - .get("B", B) - .get("N0", N0) - .get("N1", N1) - .get("M", M) - .get("K", K) - .get("O", O) - .get("aggregate", aggregate) - .done(); - - const auto& points = buildATensor(ctx, ins[0]); - const auto& centers = buildATensor(ctx, ins[1]); - const auto& scores = buildATensor(ctx, ins[2]); - const auto& knn_idx = buildATensor(ctx, ins[3]); - - auto output = buildATensor(ctx, outs[0]); - assign_score_withk_forward(points, centers, scores, knn_idx, output, B, N0, - N1, M, K, O, aggregate); -} - -void assign_score_withk_backward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int B, N0, N1, M, K, O, aggregate; - SSAttrs(attr) - .get("B", B) - .get("N0", N0) - .get("N1", N1) - .get("M", M) - .get("K", K) - .get("O", O) - .get("aggregate", aggregate) - .done(); - - const auto& grad_out = buildATensor(ctx, ins[0]); - const auto& points = buildATensor(ctx, ins[1]); - const auto& centers = buildATensor(ctx, ins[2]); - const auto& scores = buildATensor(ctx, ins[3]); - const auto& knn_idx = buildATensor(ctx, ins[4]); - - auto grad_points = buildATensor(ctx, outs[0]); - auto grad_centers = buildATensor(ctx, outs[1]); - auto grad_scores = buildATensor(ctx, outs[2]); - assign_score_withk_backward(grad_out, points, centers, scores, knn_idx, - grad_points, grad_centers, grad_scores, B, N0, N1, - M, K, O, aggregate); -} - -PARROTS_EXTENSION_REGISTER(assign_score_withk_forward) - .attr("B") - .attr("N0") - .attr("N1") - .attr("M") - .attr("K") - .attr("O") - .attr("aggregate") - .input(4) - .output(1) - .apply(assign_score_withk_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(assign_score_withk_backward) - .attr("B") - .attr("N0") - .attr("N1") - .attr("M") - .attr("K") - .attr("O") - .attr("aggregate") - .input(5) - .output(3) - .apply(assign_score_withk_backward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk_pytorch.h deleted file mode 100644 index 660594fe..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/assign_score_withk_pytorch.h +++ /dev/null @@ -1,19 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ASSIGN_SCORE_WITHK_PYTORCH_H -#define ASSIGN_SCORE_WITHK_PYTORCH_H -#include -using namespace at; - -void assign_score_withk_forward(const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, - Tensor& output, int B, int N0, int N1, int M, - int K, int O, int aggregate); - -void assign_score_withk_backward(const Tensor& grad_out, const Tensor& points, - const Tensor& centers, const Tensor& scores, - const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores, - int B, int N0, int N1, int M, int K, int O, - int aggregate); - -#endif // ASSIGN_SCORE_WITHK_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query._parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query._parrots.cpp deleted file mode 100644 index 01ab9739..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query._parrots.cpp +++ /dev/null @@ -1,43 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "ball_query_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void ball_query_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, n, m, nsample; - float min_radius, max_radius; - SSAttrs(attr) - .get("b", b) - .get("n", n) - .get("m", m) - .get("nsample", nsample) - .get("min_radius", min_radius) - .get("max_radius", max_radius) - .done(); - - const auto& center_xyz = buildATensor(ctx, ins[0]); - const auto& xyz = buildATensor(ctx, ins[1]); - auto idx = buildATensor(ctx, outs[0]); - ball_query_forward(center_xyz, xyz, idx, b, n, m, min_radius, max_radius, - nsample); -} - -PARROTS_EXTENSION_REGISTER(ball_query_forward) - .attr("b") - .attr("n") - .attr("m") - .attr("nsample") - .attr("min_radius") - .attr("max_radius") - .input(2) - .output(1) - .apply(ball_query_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query.cpp deleted file mode 100644 index 1c9e7a20..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query.cpp +++ /dev/null @@ -1,20 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/ball_query.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void ball_query_forward_impl(int b, int n, int m, float min_radius, - float max_radius, int nsample, - const Tensor new_xyz, const Tensor xyz, - Tensor idx) { - DISPATCH_DEVICE_IMPL(ball_query_forward_impl, b, n, m, min_radius, max_radius, - nsample, new_xyz, xyz, idx); -} - -void ball_query_forward(Tensor new_xyz_tensor, Tensor xyz_tensor, - Tensor idx_tensor, int b, int n, int m, - float min_radius, float max_radius, int nsample) { - ball_query_forward_impl(b, n, m, min_radius, max_radius, nsample, - new_xyz_tensor, xyz_tensor, idx_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query_pytorch.h deleted file mode 100644 index 70026f31..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ball_query_pytorch.h +++ /dev/null @@ -1,11 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved
-#ifndef BALL_QUERY_PYTORCH_H
-#define BALL_QUERY_PYTORCH_H
-#include
-using namespace at;
-
-void ball_query_forward(const Tensor new_xyz, const Tensor xyz, Tensor idx,
-                        int b, int n, int m, float min_radius, float max_radius,
-                        int nsample);
-
-#endif // BALL_QUERY_PYTORCH_H
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps.cpp
deleted file mode 100644
index 187216fb..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps.cpp
+++ /dev/null
@@ -1,14 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#include "pytorch_cpp_helper.hpp"
-#include "pytorch_device_registry.hpp"
-
-void bbox_overlaps_impl(const Tensor bboxes1, const Tensor bboxes2, Tensor ious,
-                        const int mode, const bool aligned, const int offset) {
-  DISPATCH_DEVICE_IMPL(bbox_overlaps_impl, bboxes1, bboxes2, ious, mode,
-                       aligned, offset);
-}
-
-void bbox_overlaps(const Tensor bboxes1, const Tensor bboxes2, Tensor ious,
-                   const int mode, const bool aligned, const int offset) {
-  bbox_overlaps_impl(bboxes1, bboxes2, ious, mode, aligned, offset);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps_parrots.cpp
deleted file mode 100644
index 5f6264d3..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps_parrots.cpp
+++ /dev/null
@@ -1,40 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#include
-#include
-#include
-
-#include "bbox_overlaps_pytorch.h"
-
-using namespace parrots;
-
-#ifdef MMCV_WITH_CUDA
-/*
- * void bbox_overlaps_cuda(const Tensor bboxes1, const Tensor bboxes2, Tensor
- * ious, const int mode, const bool aligned, const int offset);
- */
-void bbox_overlaps_parrots(CudaContext& ctx, const SSElement& attr,
-                           const OperatorBase::in_list_t& ins,
-                           OperatorBase::out_list_t& outs) {
-  int mode, offset;
-  bool aligned;
-  SSAttrs(attr)
-      .get("mode", mode)
-      .get("aligned", aligned)
-      .get("offset", offset)
-      .done();
-
-  const auto& bboxes1 = buildATensor(ctx, ins[0]);
-  const auto& bboxes2 = buildATensor(ctx, ins[1]);
-  auto ious = buildATensor(ctx, outs[0]);
-  bbox_overlaps_cuda(bboxes1, bboxes2, ious, mode, aligned, offset);
-}
-
-PARROTS_EXTENSION_REGISTER(bbox_overlaps)
-    .attr("mode")
-    .attr("aligned")
-    .attr("offset")
-    .input(2)
-    .output(1)
-    .apply(bbox_overlaps_parrots)
-    .done();
-#endif
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps_pytorch.h
deleted file mode 100644
index 4f68aa33..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/bbox_overlaps_pytorch.h
+++ /dev/null
@@ -1,10 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#ifndef BBOX_OVERLAPS_PYTORCH_H
-#define BBOX_OVERLAPS_PYTORCH_H
-#include
-using namespace at;
-
-void bbox_overlaps_cuda(const Tensor bboxes1, const Tensor bboxes2, Tensor ious,
-                        const int mode, const bool aligned, const int offset);
-
-#endif // BBOX_OVERLAPS_PYTORCH_H
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align.cpp
deleted file mode 100644
index 565de689..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align.cpp
+++ /dev/null
@@ -1,30 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#include "pytorch_cpp_helper.hpp"
-#include "pytorch_device_registry.hpp"
-
-void border_align_forward_impl(const Tensor &input, const Tensor &boxes,
-                               Tensor output, Tensor argmax_idx,
-                               const int pool_size) {
-  DISPATCH_DEVICE_IMPL(border_align_forward_impl, input, boxes, output,
-                       argmax_idx, pool_size);
-}
-
-void border_align_backward_impl(const Tensor &grad_output, const Tensor &boxes,
-                                const Tensor &argmax_idx, Tensor grad_input,
-                                const int pool_size) {
-  DISPATCH_DEVICE_IMPL(border_align_backward_impl, grad_output, boxes,
-                       argmax_idx, grad_input, pool_size);
-}
-
-void border_align_forward(const Tensor &input, const Tensor &boxes,
-                          Tensor output, Tensor argmax_idx,
-                          const int pool_size) {
-  border_align_forward_impl(input, boxes, output, argmax_idx, pool_size);
-}
-
-void border_align_backward(const Tensor &grad_output, const Tensor &boxes,
-                           const Tensor &argmax_idx, Tensor grad_input,
-                           const int pool_size) {
-  border_align_backward_impl(grad_output, boxes, argmax_idx, grad_input,
-                             pool_size);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align_parrots.cpp
deleted file mode 100644
index 9a075a10..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align_parrots.cpp
+++ /dev/null
@@ -1,51 +0,0 @@
-// Copyright (c) OpenMMLab.
All rights reserved -#include -#include -#include - -#include "border_align_pytorch.h" - -using namespace parrots; - -void border_align_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pool_size; - SSAttrs(attr).get("pool_size", pool_size).done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& boxes = buildATensor(ctx, ins[1]); - - auto output = buildATensor(ctx, outs[0]); - auto argmax_idx = buildATensor(ctx, outs[1]); - border_align_forward_cuda(input, boxes, output, argmax_idx, pool_size); -} - -void border_align_backward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pool_size; - SSAttrs(attr).get("pool_size", pool_size).done(); - - const auto& top_grad = buildATensor(ctx, ins[0]); - const auto& boxes = buildATensor(ctx, ins[1]); - const auto& argmax_idx = buildATensor(ctx, ins[2]); - - auto bottom_grad = buildATensor(ctx, outs[0]); - border_align_backward_cuda(top_grad, boxes, argmax_idx, bottom_grad, - pool_size); -} - -PARROTS_EXTENSION_REGISTER(border_align_forward) - .attr("pool_size") - .input(2) - .output(2) - .apply(border_align_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(border_align_backward) - .attr("pool_size") - .input(3) - .output(1) - .apply(border_align_backward_cuda_parrots) - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align_pytorch.h deleted file mode 100644 index cb031e57..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/border_align_pytorch.h +++ /dev/null @@ -1,17 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef BORDER_ALIGN_PYTORCH_H -#define BORDER_ALIGN_PYTORCH_H -#include -using namespace at; - -#ifdef MMCV_WITH_CUDA -void border_align_forward_cuda(const Tensor &input, const Tensor &boxes, - Tensor output, Tensor argmax_idx, - const int pool_size); - -void border_align_backward_cuda(const Tensor &grad_output, const Tensor &boxes, - const Tensor &argmax_idx, Tensor grad_input, - const int pool_size); -#endif - -#endif // BORDER_ALIGN_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated.cpp deleted file mode 100644 index a2a4e095..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated.cpp +++ /dev/null @@ -1,19 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void box_iou_rotated_impl(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned) { - DISPATCH_DEVICE_IMPL(box_iou_rotated_impl, boxes1, boxes2, ious, mode_flag, - aligned); -} - -// Interface for Python -// inline is needed to prevent multiple function definitions when this header is -// included by different cpps -void box_iou_rotated(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned) { - box_iou_rotated_impl(boxes1, boxes2, ious, mode_flag, aligned); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated_parrots.cpp deleted file mode 100644 index a90d6404..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated_parrots.cpp +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "box_iou_rotated_pytorch.h" - -using namespace parrots; - -/* - * void box_iou_rotated_cpu(const Tensor boxes1, const Tensor boxes2, Tensor - * ious, const int mode_flag, const bool aligned); - */ -void box_iou_rotated_cpu_parrots(HostContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - bool aligned; - int mode_flag; - SSAttrs(attr) - .get("aligned", aligned) - .get("mode_flag", mode_flag) - .done(); - - const auto& boxes1 = buildATensor(ctx, ins[0]); - const auto& boxes2 = buildATensor(ctx, ins[1]); - auto ious = buildATensor(ctx, outs[0]); - box_iou_rotated_cpu(boxes1, boxes2, ious, mode_flag, aligned); -} - -#ifdef MMCV_WITH_CUDA -/* - * void box_iou_rotated_cuda(const Tensor boxes1, const Tensor boxes2, Tensor - * ious, const int mode_flag, const bool aligned); - */ -void box_iou_rotated_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - bool aligned; - int mode_flag; - SSAttrs(attr) - .get("aligned", aligned) - .get("mode_flag", mode_flag) - .done(); - - const auto& boxes1 = buildATensor(ctx, ins[0]); - const auto& boxes2 = buildATensor(ctx, ins[1]); - auto ious = buildATensor(ctx, outs[0]); - box_iou_rotated_cuda(boxes1, boxes2, ious, mode_flag, aligned); -} -#endif - -PARROTS_EXTENSION_REGISTER(box_iou_rotated) - .attr("aligned") - .attr("mode_flag") - .input(2) - .output(1) - .apply(box_iou_rotated_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(box_iou_rotated_cuda_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated_pytorch.h deleted file mode 100644 index afab7031..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/box_iou_rotated_pytorch.h +++ /dev/null @@ -1,15 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef BOX_IOU_ROTATED_PYTORCH_H -#define BOX_IOU_ROTATED_PYTORCH_H -#include -using namespace at; - -void box_iou_rotated_cpu(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned); - -#ifdef MMCV_WITH_CUDA -void box_iou_rotated_cuda(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned); -#endif - -#endif // BOX_IOU_ROTATED_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe.cpp deleted file mode 100644 index a563aed9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe.cpp +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void carafe_forward_impl(Tensor features, Tensor masks, Tensor rfeatures, - Tensor routput, Tensor rmasks, Tensor output, - int kernel_size, int group_size, int scale_factor) { - DISPATCH_DEVICE_IMPL(carafe_forward_impl, features, masks, rfeatures, routput, - rmasks, output, kernel_size, group_size, scale_factor); -} - -void carafe_backward_impl(Tensor top_grad, Tensor rfeatures, Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, - Tensor rbottom_grad, Tensor rmask_grad, - Tensor bottom_grad, Tensor mask_grad, int kernel_size, - int group_size, int scale_factor) { - DISPATCH_DEVICE_IMPL(carafe_backward_impl, top_grad, rfeatures, masks, - rtop_grad, rbottom_grad_hs, rbottom_grad, rmask_grad, - bottom_grad, mask_grad, kernel_size, group_size, - scale_factor); -} - -void carafe_forward(Tensor features, Tensor masks, Tensor rfeatures, - Tensor routput, Tensor rmasks, Tensor output, - int kernel_size, int group_size, int scale_factor) { - carafe_forward_impl(features, masks, rfeatures, routput, rmasks, output, - kernel_size, group_size, scale_factor); -} - -void carafe_backward(Tensor top_grad, Tensor rfeatures, Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, - Tensor rbottom_grad, Tensor rmask_grad, Tensor bottom_grad, - Tensor mask_grad, int kernel_size, int group_size, - int scale_factor) { - carafe_backward_impl(top_grad, rfeatures, masks, rtop_grad, rbottom_grad_hs, - rbottom_grad, rmask_grad, bottom_grad, mask_grad, - kernel_size, group_size, scale_factor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive.cpp deleted file mode 100644 index 6e8917a6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive.cpp +++ /dev/null @@ -1,32 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void carafe_naive_forward_impl(Tensor features, Tensor masks, Tensor output, - int kernel_size, int group_size, - int scale_factor) { - DISPATCH_DEVICE_IMPL(carafe_naive_forward_impl, features, masks, output, - kernel_size, group_size, scale_factor); -} - -void carafe_naive_backward_impl(Tensor top_grad, Tensor features, Tensor masks, - Tensor bottom_grad, Tensor mask_grad, - int kernel_size, int group_size, - int scale_factor) { - DISPATCH_DEVICE_IMPL(carafe_naive_backward_impl, top_grad, features, masks, - bottom_grad, mask_grad, kernel_size, group_size, - scale_factor); -} - -void carafe_naive_forward(Tensor features, Tensor masks, Tensor output, - int kernel_size, int group_size, int scale_factor) { - carafe_naive_forward_impl(features, masks, output, kernel_size, group_size, - scale_factor); -} - -void carafe_naive_backward(Tensor top_grad, Tensor features, Tensor masks, - Tensor bottom_grad, Tensor mask_grad, - int kernel_size, int group_size, int scale_factor) { - carafe_naive_backward_impl(top_grad, features, masks, bottom_grad, mask_grad, - kernel_size, group_size, scale_factor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive_parrots.cpp deleted file mode 100644 index 9c16a370..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive_parrots.cpp +++ /dev/null @@ -1,74 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "carafe_naive_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -/*void carafe_naive_forward_cuda(Tensor features, Tensor masks, Tensor output, - * int kernel_size, int group_size, - * int scale_factor) - */ -void carafe_naive_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kernel_size, group_size, scale_factor; - SSAttrs(attr) - .get("kernel_size", kernel_size) - .get("group_size", group_size) - .get("scale_factor", scale_factor) - .done(); - - const auto& features = buildATensor(ctx, ins[0]); - const auto& masks = buildATensor(ctx, ins[1]); - - auto output = buildATensor(ctx, outs[0]); - carafe_naive_forward_cuda(features, masks, output, kernel_size, group_size, - scale_factor); -} - -/*void carafe_naive_backward_cuda(Tensor top_grad, Tensor features, Tensor - * masks, Tensor bottom_grad, Tensor mask_grad, int kernel_size, int group_size, - * int scale_factor); - */ -void carafe_naive_backward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kernel_size, group_size, scale_factor; - SSAttrs(attr) - .get("kernel_size", kernel_size) - .get("group_size", group_size) - .get("scale_factor", scale_factor) - .done(); - - const auto& top_grad = buildATensor(ctx, ins[0]); - const auto& features = buildATensor(ctx, ins[1]); - const auto& masks = buildATensor(ctx, ins[2]); - - auto bottom_grad = buildATensor(ctx, outs[0]); - auto mask_grad = buildATensor(ctx, outs[1]); - carafe_naive_backward_cuda(top_grad, features, masks, bottom_grad, mask_grad, - kernel_size, group_size, scale_factor); -} - -PARROTS_EXTENSION_REGISTER(carafe_naive_forward) - .attr("kernel_size") - .attr("group_size") - .attr("scale_factor") - .input(2) - .output(1) - 
.apply(carafe_naive_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(carafe_naive_backward) - .attr("kernel_size") - .attr("group_size") - .attr("scale_factor") - .input(3) - .output(2) - .apply(carafe_naive_backward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive_pytorch.h deleted file mode 100644 index 6df9b88c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_naive_pytorch.h +++ /dev/null @@ -1,15 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef CARAFE_NAIVE_PYTORCH_H -#define CARAFE_NAIVE_PYTORCH_H -#include -using namespace at; - -void carafe_naive_forward_cuda(Tensor features, Tensor masks, Tensor output, - int kernel_size, int group_size, - int scale_factor); - -void carafe_naive_backward_cuda(Tensor top_grad, Tensor features, Tensor masks, - Tensor bottom_grad, Tensor mask_grad, - int kernel_size, int group_size, - int scale_factor); -#endif // CARAFE_NAIVE_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_parrots.cpp deleted file mode 100644 index e99f59ef..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_parrots.cpp +++ /dev/null @@ -1,88 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "carafe_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -/* - * void carafe_forward_cuda(Tensor features, Tensor masks, Tensor rfeatures, - * Tensor routput, Tensor rmasks, Tensor output, - * int kernel_size, int group_size, int scale_factor); - */ -void carafe_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kernel_size, group_size, scale_factor; - SSAttrs(attr) - .get("kernel_size", kernel_size) - .get("group_size", group_size) - .get("scale_factor", scale_factor) - .done(); - - const auto& features = buildATensor(ctx, ins[0]); - const auto& masks = buildATensor(ctx, ins[1]); - - auto rfeatures = buildATensor(ctx, outs[0]); - auto routput = buildATensor(ctx, outs[1]); - auto rmasks = buildATensor(ctx, outs[2]); - auto output = buildATensor(ctx, outs[3]); - - carafe_forward_cuda(features, masks, rfeatures, routput, rmasks, output, - kernel_size, group_size, scale_factor); -} - -/* - * void carafe_backward_cuda(Tensor top_grad, Tensor rfeatures, Tensor masks, - * Tensor rtop_grad, Tensor rbottom_grad_hs, - * Tensor rbottom_grad, Tensor rmask_grad, - * Tensor bottom_grad, Tensor mask_grad, int - * kernel_size, int group_size, int scale_factor); - */ -void carafe_backward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kernel_size, group_size, scale_factor; - SSAttrs(attr) - .get("kernel_size", kernel_size) - .get("group_size", group_size) - .get("scale_factor", scale_factor) - .done(); - - const auto& top_grad = buildATensor(ctx, ins[0]); - const auto& rfeatures = buildATensor(ctx, ins[1]); - const auto& masks = buildATensor(ctx, ins[2]); - - auto rtop_grad = buildATensor(ctx, outs[0]); - auto rbottom_grad_hs = buildATensor(ctx, outs[1]); - auto rbottom_grad = buildATensor(ctx, outs[2]); - auto rmask_grad = buildATensor(ctx, 
outs[3]); - auto bottom_grad = buildATensor(ctx, outs[4]); - auto mask_grad = buildATensor(ctx, outs[5]); - - carafe_backward_cuda(top_grad, rfeatures, masks, rtop_grad, rbottom_grad_hs, - rbottom_grad, rmask_grad, bottom_grad, mask_grad, - kernel_size, group_size, scale_factor); -} - -PARROTS_EXTENSION_REGISTER(carafe_forward) - .attr("kernel_size") - .attr("group_size") - .attr("scale_factor") - .input(2) - .output(4) - .apply(carafe_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(carafe_backward) - .attr("kernel_size") - .attr("group_size") - .attr("scale_factor") - .input(3) - .output(6) - .apply(carafe_backward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_pytorch.h deleted file mode 100644 index 2b94d44d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/carafe_pytorch.h +++ /dev/null @@ -1,16 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef CARAFE_PYTORCH_H -#define CARAFE_PYTORCH_H -#include -using namespace at; - -void carafe_forward_cuda(Tensor features, Tensor masks, Tensor rfeatures, - Tensor routput, Tensor rmasks, Tensor output, - int kernel_size, int group_size, int scale_factor); - -void carafe_backward_cuda(Tensor top_grad, Tensor rfeatures, Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, - Tensor rbottom_grad, Tensor rmask_grad, - Tensor bottom_grad, Tensor mask_grad, int kernel_size, - int group_size, int scale_factor); -#endif // CARAFE_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand.cpp deleted file mode 100644 index 586c48ee..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand.cpp +++ /dev/null @@ -1,111 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -// It is modified from https://github.com/whai362/PSENet -#include -#include - -#include "pytorch_cpp_helper.hpp" - -using namespace std; - -class Point2d { - public: - int x; - int y; - - Point2d() : x(0), y(0) {} - Point2d(int _x, int _y) : x(_x), y(_y) {} -}; - -void kernel_dilate(const uint8_t *data, IntArrayRef data_shape, - const int *label_map, int &label_num, int &min_area, - vector> &text_line) { - std::vector area(label_num + 1); - int kernel_num = data_shape[0]; - int height = data_shape[1]; - int width = data_shape[2]; - - for (int x = 0; x < height; ++x) { - for (int y = 0; y < width; ++y) { - int label = label_map[x * width + y]; - if (label == 0) continue; - area[label] += 1; - } - } - - queue queue, next_queue; - for (int x = 0; x < height; ++x) { - vector row(width); - for (int y = 0; y < width; ++y) { - int label = label_map[x * width + y]; - if (label == 0) continue; - if (area[label] < min_area) continue; - - Point2d point(x, y); - queue.push(point); - row[y] = label; - } - text_line.emplace_back(row); - } - - int dx[] = {-1, 1, 0, 0}; - int dy[] = {0, 0, -1, 1}; - vector kernel_step(kernel_num); - std::for_each(kernel_step.begin(), kernel_step.end(), - [=](int &k) { return k * height * width; }); - - for (int kernel_id = kernel_num - 2; kernel_id >= 0; --kernel_id) { - while (!queue.empty()) { - Point2d point = queue.front(); - queue.pop(); - int x = point.x; - int y = point.y; - int label = text_line[x][y]; - - bool is_edge = true; - for (int d = 0; d < 4; ++d) { - int tmp_x = x + dx[d]; - int tmp_y = y + dy[d]; - - if (tmp_x < 0 || tmp_x >= height) continue; - if (tmp_y < 0 || tmp_y >= width) continue; - int kernel_value = data[kernel_step[kernel_id] + tmp_x * width + tmp_y]; - if (kernel_value == 0) continue; - if (text_line[tmp_x][tmp_y] > 0) continue; - - Point2d point(tmp_x, tmp_y); - queue.push(point); - text_line[tmp_x][tmp_y] = label; - is_edge = false; - } - - if (is_edge) { - next_queue.push(point); - } - } - swap(queue, next_queue); - } -} - -std::vector> contour_expand(Tensor kernel_mask, - Tensor internal_kernel_label, - int min_kernel_area, - int kernel_num) { - kernel_mask = kernel_mask.contiguous(); - internal_kernel_label = internal_kernel_label.contiguous(); - assert(kernel_mask.dim() == 3); - assert(internal_kernel_label.dim() == 2); - assert(kernel_mask.size(1) == internal_kernel_label.size(0)); - assert(kernel_mask.size(2) == internal_kernel_label.size(1)); - CHECK_CPU_INPUT(kernel_mask); - CHECK_CPU_INPUT(internal_kernel_label); - auto ptr_data = kernel_mask.data_ptr(); - IntArrayRef data_shape = kernel_mask.sizes(); - - auto data_label_map = internal_kernel_label.data_ptr(); - vector> text_line; - - kernel_dilate(ptr_data, data_shape, data_label_map, kernel_num, - min_kernel_area, text_line); - - return text_line; -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand_parrots.cpp deleted file mode 100644 index 1581fdc8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand_parrots.cpp +++ /dev/null @@ -1,43 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "contour_expand_pytorch.h" - -using namespace parrots; -using namespace std; - -template -void contour_expand_parrots(T& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int min_kernel_area, kernel_num; - SSAttrs(attr) - .get("min_kernel_area", min_kernel_area) - .get("kernel_num", kernel_num) - .done(); - at::Tensor kernel_mask; - at::Tensor internal_kernel_label; - kernel_mask = buildATensor(ctx, ins[0]); - internal_kernel_label = buildATensor(ctx, ins[1]); - auto out = contour_expand(kernel_mask, internal_kernel_label, min_kernel_area, - kernel_num); - int n = out.size(), m = 0; - for (int i = 0; i < n; ++i) - if (m < out[i].size()) m = out[i].size(); - auto options = torch::TensorOptions().dtype(at::kInt); - auto tensor = torch::zeros({n, m}, options); - for (int i = 0; i < n; i++) - tensor.slice(0, i, i + 1) = - torch::from_blob(out[i].data(), {out[i].size()}, options); - updateDArray(ctx, tensor, outs[0]); -} - -PARROTS_EXTENSION_REGISTER(contour_expand) - .attr("min_kernel_area") - .attr("kernel_num") - .input(2) - .output(1) - .apply(contour_expand_parrots) - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand_pytorch.h deleted file mode 100644 index 881bbac3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/contour_expand_pytorch.h +++ /dev/null @@ -1,12 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef CONTOUR_EXPAND_PYTORCH_H -#define CONTOUR_EXPAND_PYTORCH_H -#include -using namespace at; - -std::vector> contour_expand(Tensor kernel_mask, - Tensor internal_kernel_label, - int min_kernel_area, - int kernel_num); - -#endif // CONTOUR_EXPAND_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou.cpp deleted file mode 100644 index 79f2028b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou.cpp +++ /dev/null @@ -1,23 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// modified from -// https://github.com/SDL-GuoZonghao/BeyondBoundingBox/tree/main/mmdet/ops/iou/src -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void convex_iou_impl(const Tensor pointsets, const Tensor polygons, - Tensor ious) { - DISPATCH_DEVICE_IMPL(convex_iou_impl, pointsets, polygons, ious); -} - -void convex_iou(const Tensor pointsets, const Tensor polygons, Tensor ious) { - convex_iou_impl(pointsets, polygons, ious); -} - -void convex_giou_impl(const Tensor pointsets, const Tensor polygons, - Tensor output) { - DISPATCH_DEVICE_IMPL(convex_giou_impl, pointsets, polygons, output); -} - -void convex_giou(const Tensor pointsets, const Tensor polygons, Tensor output) { - convex_giou_impl(pointsets, polygons, output); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou_parrots.cpp deleted file mode 100644 index bf766542..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou_parrots.cpp +++ /dev/null @@ -1,40 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "convex_iou_pytorch.h" -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void convex_iou_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto pointsets = buildATensor(ctx, ins[0]); - auto polygons = buildATensor(ctx, ins[1]); - auto ious = buildATensor(ctx, outs[0]); - convex_iou(pointsets, polygons, ious); -} - -void convex_giou_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto pointsets = buildATensor(ctx, ins[0]); - auto polygons = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - convex_giou(pointsets, polygons, output); -} - -PARROTS_EXTENSION_REGISTER(convex_iou) - .input(2) - .output(1) - .apply(convex_iou_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(convex_giou) - .input(2) - .output(1) - .apply(convex_giou_forward_cuda_parrots) - .done(); - -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou_pytorch.h deleted file mode 100644 index 4f16a1ce..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/convex_iou_pytorch.h +++ /dev/null @@ -1,11 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef CONVEX_IOU_PYTORCH_H -#define CONVEX_IOU_PYTORCH_H -#include -using namespace at; - -void convex_iou(const Tensor pointsets, const Tensor polygons, Tensor ious); - -void convex_giou(const Tensor pointsets, const Tensor polygons, Tensor output); - -#endif // RIROI_ALIGN_ROTATED_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation.cpp deleted file mode 100644 index f4adba2a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation.cpp +++ /dev/null @@ -1,47 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-#include - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void correlation_forward_impl(Tensor input1, Tensor input2, Tensor output, - int kH, int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW) { - DISPATCH_DEVICE_IMPL(correlation_forward_impl, input1, input2, output, kH, kW, - patchH, patchW, padH, padW, dilationH, dilationW, - dilation_patchH, dilation_patchW, dH, dW); -} - -void correlation_backward_impl(Tensor grad_output, Tensor input1, Tensor input2, - Tensor grad_input1, Tensor grad_input2, int kH, - int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW) { - DISPATCH_DEVICE_IMPL(correlation_backward_impl, grad_output, input1, input2, - grad_input1, grad_input2, kH, kW, patchH, patchW, padH, - padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); -} - -void correlation_forward(Tensor input1, Tensor input2, Tensor output, int kH, - int kW, int patchH, int patchW, int padH, int padW, - int dilationH, int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW) { - correlation_forward_impl(input1, input2, output, kH, kW, patchH, patchW, padH, - padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); -} - -void correlation_backward(Tensor grad_output, Tensor input1, Tensor input2, - Tensor grad_input1, Tensor grad_input2, int kH, - int kW, int patchH, int patchW, int padH, int padW, - int dilationH, int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW) { - correlation_backward_impl(grad_output, input1, input2, grad_input1, - grad_input2, kH, kW, patchH, patchW, padH, padW, - dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation_parrots.cpp deleted file mode 100644 index b1e287d0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation_parrots.cpp +++ /dev/null @@ -1,176 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "correlation_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void correlation_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kH, kW, patchH, patchW, padH, padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW; - SSAttrs(attr) - .get("kH", kH) - .get("kW", kW) - .get("patchH", patchH) - .get("patchW", patchW) - .get("padH", padH) - .get("padW", padW) - .get("dilationH", dilationH) - .get("dilationW", dilationW) - .get("dilation_patchH", dilation_patchH) - .get("dilation_patchW", dilation_patchW) - .get("dH", dH) - .get("dW", dW) - .done(); - - auto input1 = buildATensor(ctx, ins[0]); - auto input2 = buildATensor(ctx, ins[1]); - - auto output = buildATensor(ctx, outs[0]); - - correlation_forward(input1, input2, output, kH, kW, patchH, patchW, padH, - padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); -} - -void correlation_backward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kH, kW, patchH, patchW, padH, padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW; - SSAttrs(attr) - .get("kH", kH) - .get("kW", kW) - .get("patchH", patchH) - .get("patchW", patchW) - .get("padH", padH) - .get("padW", padW) - .get("dilationH", dilationH) - .get("dilationW", dilationW) - .get("dilation_patchH", dilation_patchH) - .get("dilation_patchW", dilation_patchW) - .get("dH", dH) - .get("dW", dW) - .done(); - - auto grad_output = buildATensor(ctx, ins[0]); - auto input1 = buildATensor(ctx, ins[1]); - auto input2 = buildATensor(ctx, ins[2]); - - auto grad_input1 = buildATensor(ctx, outs[0]); - auto grad_input2 = buildATensor(ctx, outs[1]); - - correlation_backward(grad_output, input1, input2, grad_input1, grad_input2, - kH, kW, patchH, patchW, padH, padW, dilationH, dilationW, - dilation_patchH, dilation_patchW, dH, dW); -} -#endif - -void correlation_forward_cpu_parrots(HostContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kH, kW, patchH, patchW, padH, padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW; - SSAttrs(attr) - .get("kH", kH) - .get("kW", kW) - .get("patchH", patchH) - .get("patchW", patchW) - .get("padH", padH) - .get("padW", padW) - .get("dilationH", dilationH) - .get("dilationW", dilationW) - .get("dilation_patchH", dilation_patchH) - .get("dilation_patchW", dilation_patchW) - .get("dH", dH) - .get("dW", dW) - .done(); - - auto input1 = buildATensor(ctx, ins[0]); - auto input2 = buildATensor(ctx, ins[1]); - - auto output = buildATensor(ctx, outs[0]); - - correlation_forward(input1, input2, output, kH, kW, patchH, patchW, padH, - padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); -} - -void correlation_backward_cpu_parrots(HostContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kH, kW, patchH, patchW, padH, padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW; - SSAttrs(attr) - .get("kH", kH) - .get("kW", kW) - .get("patchH", patchH) - .get("patchW", patchW) - .get("padH", padH) - .get("padW", padW) - .get("dilationH", dilationH) - .get("dilationW", dilationW) - .get("dilation_patchH", dilation_patchH) - .get("dilation_patchW", dilation_patchW) - .get("dH", dH) - .get("dW", dW) - .done(); - - auto 
grad_output = buildATensor(ctx, ins[0]); - auto input1 = buildATensor(ctx, ins[1]); - auto input2 = buildATensor(ctx, ins[2]); - - auto grad_input1 = buildATensor(ctx, outs[0]); - auto grad_input2 = buildATensor(ctx, outs[1]); - - correlation_backward(grad_output, input1, input2, grad_input1, grad_input2, - kH, kW, patchH, patchW, padH, padW, dilationH, dilationW, - dilation_patchH, dilation_patchW, dH, dW); -} - -PARROTS_EXTENSION_REGISTER(correlation_forward) - .attr("kH") - .attr("kW") - .attr("patchH") - .attr("patchW") - .attr("padH") - .attr("padW") - .attr("dilationH") - .attr("dilationW") - .attr("dilation_patchH") - .attr("dilation_patchW") - .attr("dH") - .attr("dW") - .input(2) - .output(1) - .apply(correlation_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(correlation_forward_cuda_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(correlation_backward) - .attr("kH") - .attr("kW") - .attr("patchH") - .attr("patchW") - .attr("padH") - .attr("padW") - .attr("dilationH") - .attr("dilationW") - .attr("dilation_patchH") - .attr("dilation_patchW") - .attr("dH") - .attr("dW") - .input(3) - .output(2) - .apply(correlation_backward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(correlation_backward_cuda_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation_pytorch.h deleted file mode 100644 index 806fcaa7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/correlation_pytorch.h +++ /dev/null @@ -1,18 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef CORRELATION_PYTORCH_H -#define CORRELATION_PYTORCH_H -#include -using namespace at; - -void correlation_forward(Tensor input1, Tensor input2, Tensor output, int kH, - int kW, int patchH, int patchW, int padH, int padW, - int dilationH, int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW); - -void correlation_backward(Tensor grad_output, Tensor input1, Tensor input2, - Tensor grad_input1, Tensor grad_input2, int kH, - int kW, int patchH, int patchW, int padH, int padW, - int dilationH, int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW); - -#endif // CORRELATION_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/cudabind.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/cudabind.cpp deleted file mode 100644 index 04c6e36c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/cudabind.cpp +++ /dev/null @@ -1,1591 +0,0 @@ -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void AssignScoreWithKForwardCUDAKernelLauncher( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& points, const Tensor& centers, const Tensor& scores, - const Tensor& knn_idx, Tensor& output); - -void AssignScoreWithKBackwardCUDAKernelLauncher( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& grad_out, const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores); - -void assign_score_withk_forward_cuda(int B, int N0, int N1, int M, int K, int O, - int aggregate, const Tensor& points, - const Tensor& centers, - const Tensor& scores, - const Tensor& knn_idx, Tensor& output) { - AssignScoreWithKForwardCUDAKernelLauncher( - B, N0, N1, M, K, O, 
aggregate, points, centers, scores, knn_idx, output); -}; - -void assign_score_withk_backward_cuda( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& grad_out, const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores) { - AssignScoreWithKBackwardCUDAKernelLauncher( - B, N0, N1, M, K, O, aggregate, grad_out, points, centers, scores, knn_idx, - grad_points, grad_centers, grad_scores); -}; - -void assign_score_withk_forward_impl(int B, int N0, int N1, int M, int K, int O, - int aggregate, const Tensor& points, - const Tensor& centers, - const Tensor& scores, - const Tensor& knn_idx, Tensor& output); - -void assign_score_withk_backward_impl( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& grad_out, const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores); - -REGISTER_DEVICE_IMPL(assign_score_withk_forward_impl, CUDA, - assign_score_withk_forward_cuda); -REGISTER_DEVICE_IMPL(assign_score_withk_backward_impl, CUDA, - assign_score_withk_backward_cuda); - -void BallQueryForwardCUDAKernelLauncher(int b, int n, int m, float min_radius, - float max_radius, int nsample, - const Tensor new_xyz, const Tensor xyz, - Tensor idx); - -void ball_query_forward_cuda(int b, int n, int m, float min_radius, - float max_radius, int nsample, - const Tensor new_xyz, const Tensor xyz, - Tensor idx) { - BallQueryForwardCUDAKernelLauncher(b, n, m, min_radius, max_radius, nsample, - new_xyz, xyz, idx); -}; - -void ball_query_forward_impl(int b, int n, int m, float min_radius, - float max_radius, int nsample, - const Tensor new_xyz, const Tensor xyz, - Tensor idx); -REGISTER_DEVICE_IMPL(ball_query_forward_impl, CUDA, ball_query_forward_cuda); - -void BBoxOverlapsCUDAKernelLauncher(const Tensor bboxes1, const Tensor bboxes2, - Tensor ious, const int mode, - const bool aligned, const int offset); - -void bbox_overlaps_cuda(const Tensor bboxes1, const Tensor bboxes2, Tensor ious, - const int mode, const bool aligned, const int offset) { - BBoxOverlapsCUDAKernelLauncher(bboxes1, bboxes2, ious, mode, aligned, offset); -} - -void bbox_overlaps_impl(const Tensor bboxes1, const Tensor bboxes2, Tensor ious, - const int mode, const bool aligned, const int offset); -REGISTER_DEVICE_IMPL(bbox_overlaps_impl, CUDA, bbox_overlaps_cuda); - -void BorderAlignForwardCUDAKernelLauncher(const Tensor& input, - const Tensor& boxes, Tensor output, - Tensor argmax_idx, - const int pool_size); - -void BorderAlignBackwardCUDAKernelLauncher(const Tensor& grad_output, - const Tensor& boxes, - const Tensor& argmax_idx, - Tensor grad_input, - const int pool_size); - -void border_align_forward_cuda(const Tensor& input, const Tensor& boxes, - Tensor output, Tensor argmax_idx, - const int pool_size) { - BorderAlignForwardCUDAKernelLauncher(input, boxes, output, argmax_idx, - pool_size); -} - -void border_align_backward_cuda(const Tensor& grad_output, const Tensor& boxes, - const Tensor& argmax_idx, Tensor grad_input, - const int pool_size) { - BorderAlignBackwardCUDAKernelLauncher(grad_output, boxes, argmax_idx, - grad_input, pool_size); -} - -void border_align_forward_impl(const Tensor& input, const Tensor& boxes, - Tensor output, Tensor argmax_idx, - const int pool_size); - -void border_align_backward_impl(const Tensor& grad_output, const Tensor& boxes, - const Tensor& argmax_idx, Tensor grad_input, 
- const int pool_size); - -REGISTER_DEVICE_IMPL(border_align_forward_impl, CUDA, - border_align_forward_cuda); -REGISTER_DEVICE_IMPL(border_align_backward_impl, CUDA, - border_align_backward_cuda); - -void box_iou_rotated_cuda(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned); - -void box_iou_rotated_impl(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned); -REGISTER_DEVICE_IMPL(box_iou_rotated_impl, CUDA, box_iou_rotated_cuda); - -void CARAFEForwardCUDAKernelLauncher(const Tensor features, const Tensor masks, - Tensor rfeatures, Tensor routput, - Tensor rmasks, Tensor output, - const int kernel_size, - const int group_size, - const int scale_factor); - -void CARAFEBackwardCUDAKernelLauncher( - const Tensor top_grad, const Tensor rfeatures, const Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, Tensor rbottom_grad, - Tensor rmask_grad, Tensor bottom_grad, Tensor mask_grad, - const int kernel_size, const int group_size, const int scale_factor); - -void carafe_forward_cuda(Tensor features, Tensor masks, Tensor rfeatures, - Tensor routput, Tensor rmasks, Tensor output, - int kernel_size, int group_size, int scale_factor) { - CARAFEForwardCUDAKernelLauncher(features, masks, rfeatures, routput, rmasks, - output, kernel_size, group_size, - scale_factor); -} - -void carafe_backward_cuda(Tensor top_grad, Tensor rfeatures, Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, - Tensor rbottom_grad, Tensor rmask_grad, - Tensor bottom_grad, Tensor mask_grad, int kernel_size, - int group_size, int scale_factor) { - CARAFEBackwardCUDAKernelLauncher(top_grad, rfeatures, masks, rtop_grad, - rbottom_grad_hs, rbottom_grad, rmask_grad, - bottom_grad, mask_grad, kernel_size, - group_size, scale_factor); -} - -void carafe_forward_impl(Tensor features, Tensor masks, Tensor rfeatures, - Tensor routput, Tensor rmasks, Tensor output, - int kernel_size, int group_size, int scale_factor); - -void carafe_backward_impl(Tensor top_grad, Tensor rfeatures, Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, - Tensor rbottom_grad, Tensor rmask_grad, - Tensor bottom_grad, Tensor mask_grad, int kernel_size, - int group_size, int scale_factor); - -REGISTER_DEVICE_IMPL(carafe_forward_impl, CUDA, carafe_forward_cuda); -REGISTER_DEVICE_IMPL(carafe_backward_impl, CUDA, carafe_backward_cuda); - -void CARAFENAIVEForwardCUDAKernelLauncher(const Tensor features, - const Tensor masks, Tensor output, - const int kernel_size, - const int group_size, - const int scale_factor); - -void CARAFENAIVEBackwardCUDAKernelLauncher( - const Tensor top_grad, const Tensor features, const Tensor masks, - Tensor bottom_grad, Tensor mask_grad, const int kernel_size, - const int group_size, const int scale_factor); - -void carafe_naive_forward_cuda(Tensor features, Tensor masks, Tensor output, - int kernel_size, int group_size, - int scale_factor) { - CARAFENAIVEForwardCUDAKernelLauncher(features, masks, output, kernel_size, - group_size, scale_factor); -} - -void carafe_naive_backward_cuda(Tensor top_grad, Tensor features, Tensor masks, - Tensor bottom_grad, Tensor mask_grad, - int kernel_size, int group_size, - int scale_factor) { - CARAFENAIVEBackwardCUDAKernelLauncher(top_grad, features, masks, bottom_grad, - mask_grad, kernel_size, group_size, - scale_factor); -} -void carafe_naive_forward_impl(Tensor features, Tensor masks, Tensor output, - int kernel_size, int group_size, - int scale_factor); - -void carafe_naive_backward_impl(Tensor 
top_grad, Tensor features, Tensor masks, - Tensor bottom_grad, Tensor mask_grad, - int kernel_size, int group_size, - int scale_factor); - -REGISTER_DEVICE_IMPL(carafe_naive_forward_impl, CUDA, - carafe_naive_forward_cuda); -REGISTER_DEVICE_IMPL(carafe_naive_backward_impl, CUDA, - carafe_naive_backward_cuda); - -void CorrelationForwardCUDAKernelLauncher(Tensor input1, Tensor input2, - Tensor output, int kH, int kW, - int patchH, int patchW, int padH, - int padW, int dilationH, - int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW); - -void CorrelationBackwardCUDAKernelLauncher(Tensor grad_output, Tensor input1, - Tensor input2, Tensor grad_input1, - Tensor grad_input2, int kH, int kW, - int patchH, int patchW, int padH, - int padW, int dilationH, - int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW); - -void correlation_forward_cuda(Tensor input1, Tensor input2, Tensor output, - int kH, int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW) { - CorrelationForwardCUDAKernelLauncher( - input1, input2, output, kH, kW, patchH, patchW, padH, padW, dilationH, - dilationW, dilation_patchH, dilation_patchW, dH, dW); -} - -void correlation_backward_cuda(Tensor grad_output, Tensor input1, Tensor input2, - Tensor grad_input1, Tensor grad_input2, int kH, - int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW) { - CorrelationBackwardCUDAKernelLauncher( - grad_output, input1, input2, grad_input1, grad_input2, kH, kW, patchH, - patchW, padH, padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); -} - -void correlation_forward_impl(Tensor input1, Tensor input2, Tensor output, - int kH, int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW); - -void correlation_backward_impl(Tensor grad_output, Tensor input1, Tensor input2, - Tensor grad_input1, Tensor grad_input2, int kH, - int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW); - -REGISTER_DEVICE_IMPL(correlation_forward_impl, CUDA, correlation_forward_cuda); -REGISTER_DEVICE_IMPL(correlation_backward_impl, CUDA, - correlation_backward_cuda); - -void deformable_im2col_cuda(Tensor data_im, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor data_col); - -void deformable_col2im_cuda(Tensor data_col, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor grad_im); - -void deformable_col2im_coord_cuda( - Tensor data_col, Tensor data_im, Tensor data_offset, const int channels, - const int height, const int width, const int ksize_h, const int ksize_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, Tensor grad_offset); - 
-void deformable_im2col_impl(Tensor data_im, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor data_col); - -void deformable_col2im_impl(Tensor data_col, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor grad_im); - -void deformable_col2im_coord_impl( - Tensor data_col, Tensor data_im, Tensor data_offset, const int channels, - const int height, const int width, const int ksize_h, const int ksize_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, Tensor grad_offset); - -REGISTER_DEVICE_IMPL(deformable_im2col_impl, CUDA, deformable_im2col_cuda); -REGISTER_DEVICE_IMPL(deformable_col2im_impl, CUDA, deformable_col2im_cuda); -REGISTER_DEVICE_IMPL(deformable_col2im_coord_impl, CUDA, - deformable_col2im_coord_cuda); - -void DeformRoIPoolForwardCUDAKernelLauncher(Tensor input, Tensor rois, - Tensor offset, Tensor output, - int pooled_height, int pooled_width, - float spatial_scale, - int sampling_ratio, float gamma); - -void DeformRoIPoolBackwardCUDAKernelLauncher( - Tensor grad_output, Tensor input, Tensor rois, Tensor offset, - Tensor grad_input, Tensor grad_offset, int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, float gamma); - -void deform_roi_pool_forward_cuda(Tensor input, Tensor rois, Tensor offset, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, float gamma) { - DeformRoIPoolForwardCUDAKernelLauncher(input, rois, offset, output, - pooled_height, pooled_width, - spatial_scale, sampling_ratio, gamma); -} - -void deform_roi_pool_backward_cuda(Tensor grad_output, Tensor input, - Tensor rois, Tensor offset, - Tensor grad_input, Tensor grad_offset, - int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - float gamma) { - DeformRoIPoolBackwardCUDAKernelLauncher( - grad_output, input, rois, offset, grad_input, grad_offset, pooled_height, - pooled_width, spatial_scale, sampling_ratio, gamma); -} - -void deform_roi_pool_forward_impl(Tensor input, Tensor rois, Tensor offset, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, float gamma); - -void deform_roi_pool_backward_impl(Tensor grad_output, Tensor input, - Tensor rois, Tensor offset, - Tensor grad_input, Tensor grad_offset, - int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - float gamma); - -REGISTER_DEVICE_IMPL(deform_roi_pool_forward_impl, CUDA, - deform_roi_pool_forward_cuda); -REGISTER_DEVICE_IMPL(deform_roi_pool_backward_impl, CUDA, - deform_roi_pool_backward_cuda); - -void SigmoidFocalLossForwardCUDAKernelLauncher(Tensor input, Tensor target, - Tensor weight, Tensor output, - const float gamma, - const float alpha); - -void SigmoidFocalLossBackwardCUDAKernelLauncher(Tensor input, Tensor target, - Tensor weight, - Tensor grad_input, - const float gamma, - const float alpha); - -void 
SoftmaxFocalLossForwardCUDAKernelLauncher(Tensor softmax, Tensor target, - Tensor weight, Tensor output, - const float gamma, - const float alpha); - -void SoftmaxFocalLossBackwardCUDAKernelLauncher(Tensor softmax, Tensor target, - Tensor weight, Tensor buff, - Tensor grad_input, - const float gamma, - const float alpha); - -void sigmoid_focal_loss_forward_cuda(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - SigmoidFocalLossForwardCUDAKernelLauncher(input, target, weight, output, - gamma, alpha); -} - -void sigmoid_focal_loss_backward_cuda(Tensor input, Tensor target, - Tensor weight, Tensor grad_input, - float gamma, float alpha) { - SigmoidFocalLossBackwardCUDAKernelLauncher(input, target, weight, grad_input, - gamma, alpha); -} - -void softmax_focal_loss_forward_cuda(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - SoftmaxFocalLossForwardCUDAKernelLauncher(input, target, weight, output, - gamma, alpha); -} - -void softmax_focal_loss_backward_cuda(Tensor input, Tensor target, - Tensor weight, Tensor buff, - Tensor grad_input, float gamma, - float alpha) { - SoftmaxFocalLossBackwardCUDAKernelLauncher(input, target, weight, buff, - grad_input, gamma, alpha); -} - -void sigmoid_focal_loss_forward_impl(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha); - -void sigmoid_focal_loss_backward_impl(Tensor input, Tensor target, - Tensor weight, Tensor grad_input, - float gamma, float alpha); - -void softmax_focal_loss_forward_impl(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha); - -void softmax_focal_loss_backward_impl(Tensor input, Tensor target, - Tensor weight, Tensor buff, - Tensor grad_input, float gamma, - float alpha); - -REGISTER_DEVICE_IMPL(sigmoid_focal_loss_forward_impl, CUDA, - sigmoid_focal_loss_forward_cuda); -REGISTER_DEVICE_IMPL(sigmoid_focal_loss_backward_impl, CUDA, - sigmoid_focal_loss_backward_cuda); -REGISTER_DEVICE_IMPL(softmax_focal_loss_forward_impl, CUDA, - softmax_focal_loss_forward_cuda); -REGISTER_DEVICE_IMPL(softmax_focal_loss_backward_impl, CUDA, - softmax_focal_loss_backward_cuda); - -void FurthestPointSamplingForwardCUDAKernelLauncher(int b, int n, int m, - const float* dataset, - float* temp, int* idxs); - -void FurthestPointSamplingWithDistForwardCUDAKernelLauncher( - int b, int n, int m, const float* dataset, float* temp, int* idxs); - -void furthest_point_sampling_forward_cuda(Tensor points_tensor, - Tensor temp_tensor, Tensor idx_tensor, - int b, int n, int m) { - const float* dataset = points_tensor.data_ptr(); - float* temp = temp_tensor.data_ptr(); - int* idxs = idx_tensor.data_ptr(); - FurthestPointSamplingForwardCUDAKernelLauncher(b, n, m, dataset, temp, idxs); -} - -void furthest_point_sampling_with_dist_forward_cuda(Tensor points_tensor, - Tensor temp_tensor, - Tensor idx_tensor, int b, - int n, int m) { - const float* dataset = points_tensor.data_ptr(); - float* temp = temp_tensor.data_ptr(); - int* idxs = idx_tensor.data_ptr(); - FurthestPointSamplingWithDistForwardCUDAKernelLauncher(b, n, m, dataset, temp, - idxs); -} - -void furthest_point_sampling_forward_impl(Tensor points_tensor, - Tensor temp_tensor, Tensor idx_tensor, - int b, int n, int m); - -void furthest_point_sampling_with_dist_forward_impl(Tensor points_tensor, - Tensor temp_tensor, - Tensor idx_tensor, int b, - int n, int m); - -REGISTER_DEVICE_IMPL(furthest_point_sampling_forward_impl, CUDA, - 
furthest_point_sampling_forward_cuda); -REGISTER_DEVICE_IMPL(furthest_point_sampling_with_dist_forward_impl, CUDA, - furthest_point_sampling_with_dist_forward_cuda); - -torch::Tensor fused_bias_leakyrelu_op(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, int act, - int grad, float alpha, float scale); - -torch::Tensor fused_bias_leakyrelu_op_impl(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, int act, - int grad, float alpha, float scale); -REGISTER_DEVICE_IMPL(fused_bias_leakyrelu_op_impl, CUDA, - fused_bias_leakyrelu_op); - -void GatherPointsForwardCUDAKernelLauncher(int b, int c, int n, int npoints, - const Tensor points, - const Tensor idx, Tensor out); - -void GatherPointsBackwardCUDAKernelLauncher(int b, int c, int n, int npoints, - const Tensor grad_out, - const Tensor idx, - Tensor grad_points); - -void gather_points_forward_cuda(int b, int c, int n, int npoints, - const Tensor points, const Tensor idx, - Tensor out) { - GatherPointsForwardCUDAKernelLauncher(b, c, n, npoints, points, idx, out); -}; - -void gather_points_backward_cuda(int b, int c, int n, int npoints, - const Tensor grad_out, const Tensor idx, - Tensor grad_points) { - GatherPointsBackwardCUDAKernelLauncher(b, c, n, npoints, grad_out, idx, - grad_points); -}; - -void gather_points_forward_impl(int b, int c, int n, int npoints, - const Tensor points, const Tensor idx, - Tensor out); - -void gather_points_backward_impl(int b, int c, int n, int npoints, - const Tensor grad_out, const Tensor idx, - Tensor grad_points); - -REGISTER_DEVICE_IMPL(gather_points_forward_impl, CUDA, - gather_points_forward_cuda); -REGISTER_DEVICE_IMPL(gather_points_backward_impl, CUDA, - gather_points_backward_cuda); - -void GroupPointsForwardCUDAKernelLauncher(int b, int c, int n, int npoints, - int nsample, const Tensor points, - const Tensor idx, Tensor out); - -void GroupPointsBackwardCUDAKernelLauncher(int b, int c, int n, int npoints, - int nsample, const Tensor grad_out, - const Tensor idx, - Tensor grad_points); - -void group_points_forward_cuda(int b, int c, int n, int npoints, int nsample, - const Tensor points, const Tensor idx, - Tensor out) { - GroupPointsForwardCUDAKernelLauncher(b, c, n, npoints, nsample, points, idx, - out); -}; - -void group_points_backward_cuda(int b, int c, int n, int npoints, int nsample, - const Tensor grad_out, const Tensor idx, - Tensor grad_points) { - GroupPointsBackwardCUDAKernelLauncher(b, c, n, npoints, nsample, grad_out, - idx, grad_points); -}; - -void group_points_forward_impl(int b, int c, int n, int npoints, int nsample, - const Tensor points, const Tensor idx, - Tensor out); - -void group_points_backward_impl(int b, int c, int n, int npoints, int nsample, - const Tensor grad_out, const Tensor idx, - Tensor grad_points); - -REGISTER_DEVICE_IMPL(group_points_forward_impl, CUDA, - group_points_forward_cuda); -REGISTER_DEVICE_IMPL(group_points_backward_impl, CUDA, - group_points_backward_cuda); - -void IoU3DBoxesOverlapBevForwardCUDAKernelLauncher(const int num_a, - const Tensor boxes_a, - const int num_b, - const Tensor boxes_b, - Tensor ans_overlap); - -void IoU3DNMS3DForwardCUDAKernelLauncher(const Tensor boxes, - unsigned long long* mask, - int boxes_num, - float nms_overlap_thresh); - -void IoU3DNMS3DNormalForwardCUDAKernelLauncher(const Tensor boxes, - unsigned long long* mask, - int boxes_num, - float nms_overlap_thresh); - -void iou3d_boxes_overlap_bev_forward_cuda(const int num_a, const Tensor boxes_a, - const 
int num_b, const Tensor boxes_b, - Tensor ans_overlap) { - IoU3DBoxesOverlapBevForwardCUDAKernelLauncher(num_a, boxes_a, num_b, boxes_b, - ans_overlap); -}; - -void iou3d_nms3d_forward_cuda(const Tensor boxes, unsigned long long* mask, - int boxes_num, float nms_overlap_thresh) { - IoU3DNMS3DForwardCUDAKernelLauncher(boxes, mask, boxes_num, - nms_overlap_thresh); -}; - -void iou3d_nms3d_normal_forward_cuda(const Tensor boxes, - unsigned long long* mask, int boxes_num, - float nms_overlap_thresh) { - IoU3DNMS3DNormalForwardCUDAKernelLauncher(boxes, mask, boxes_num, - nms_overlap_thresh); -}; - -void iou3d_boxes_overlap_bev_forward_impl(const int num_a, const Tensor boxes_a, - const int num_b, const Tensor boxes_b, - Tensor ans_overlap); - -void iou3d_nms3d_forward_impl(const Tensor boxes, unsigned long long* mask, - int boxes_num, float nms_overlap_thresh); - -void iou3d_nms3d_normal_forward_impl(const Tensor boxes, - unsigned long long* mask, int boxes_num, - float nms_overlap_thresh); - -REGISTER_DEVICE_IMPL(iou3d_boxes_overlap_bev_forward_impl, CUDA, - iou3d_boxes_overlap_bev_forward_cuda); -REGISTER_DEVICE_IMPL(iou3d_nms3d_forward_impl, CUDA, iou3d_nms3d_forward_cuda); -REGISTER_DEVICE_IMPL(iou3d_nms3d_normal_forward_impl, CUDA, - iou3d_nms3d_normal_forward_cuda); - -void KNNForwardCUDAKernelLauncher(int b, int n, int m, int nsample, - const Tensor xyz, const Tensor new_xyz, - Tensor idx, Tensor dist2); - -void knn_forward_cuda(int b, int n, int m, int nsample, const Tensor xyz, - const Tensor new_xyz, Tensor idx, Tensor dist2) { - KNNForwardCUDAKernelLauncher(b, n, m, nsample, xyz, new_xyz, idx, dist2); -} - -void knn_forward_impl(int b, int n, int m, int nsample, const Tensor xyz, - const Tensor new_xyz, Tensor idx, Tensor dist2); -REGISTER_DEVICE_IMPL(knn_forward_impl, CUDA, knn_forward_cuda); - -void MaskedIm2colForwardCUDAKernelLauncher(const Tensor bottom_data, - const Tensor mask_h_idx, - const Tensor mask_w_idx, - Tensor top_data, const int kernel_h, - const int kernel_w, const int pad_h, - const int pad_w); - -void MaskedCol2imForwardCUDAKernelLauncher(const Tensor bottom_data, - const Tensor mask_h_idx, - const Tensor mask_w_idx, - Tensor top_data, const int height, - const int width, const int channels); - -void masked_im2col_forward_cuda(const Tensor im, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor col, - const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w) { - // im: (n, ic, h, w), kernel size (kh, kw) - // kernel: (oc, ic * kh * kw), col: (kh * kw * ic, ow * oh) - MaskedIm2colForwardCUDAKernelLauncher(im, mask_h_idx, mask_w_idx, col, - kernel_h, kernel_w, pad_h, pad_w); -} - -void masked_col2im_forward_cuda(const Tensor col, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor im, int height, - int width, int channels) { - // im: (n, ic, h, w), kernel size (kh, kw) - // kernel: (oc, ic * kh * kh), col: (kh * kw * ic, ow * oh) - MaskedCol2imForwardCUDAKernelLauncher(col, mask_h_idx, mask_w_idx, im, height, - width, channels); -} - -void masked_im2col_forward_impl(const Tensor im, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor col, - const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w); - -void masked_col2im_forward_impl(const Tensor col, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor im, int height, - int width, int channels); - -REGISTER_DEVICE_IMPL(masked_im2col_forward_impl, CUDA, - masked_im2col_forward_cuda); -REGISTER_DEVICE_IMPL(masked_col2im_forward_impl, CUDA, - 
masked_col2im_forward_cuda); - -void modulated_deformable_im2col_cuda( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col); - -void modulated_deformable_col2im_cuda( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im); - -void modulated_deformable_col2im_coord_cuda( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask); - -void modulated_deformable_im2col_impl( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col); - -void modulated_deformable_col2im_impl( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im); - -void modulated_deformable_col2im_coord_impl( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask); - -REGISTER_DEVICE_IMPL(modulated_deformable_im2col_impl, CUDA, - modulated_deformable_im2col_cuda); -REGISTER_DEVICE_IMPL(modulated_deformable_col2im_impl, CUDA, - modulated_deformable_col2im_cuda); -REGISTER_DEVICE_IMPL(modulated_deformable_col2im_coord_impl, CUDA, - modulated_deformable_col2im_coord_cuda); - -Tensor ms_deform_attn_cuda_forward(const Tensor& value, - const Tensor& spatial_shapes, - const Tensor& level_start_index, - const Tensor& sampling_loc, - const Tensor& attn_weight, - const int im2col_step); - -void ms_deform_attn_cuda_backward( - const Tensor& value, const Tensor& spatial_shapes, - const Tensor& level_start_index, const Tensor& sampling_loc, - const 
Tensor& attn_weight, const Tensor& grad_output, Tensor& grad_value, - Tensor& grad_sampling_loc, Tensor& grad_attn_weight, const int im2col_step); - -Tensor ms_deform_attn_impl_forward(const Tensor& value, - const Tensor& spatial_shapes, - const Tensor& level_start_index, - const Tensor& sampling_loc, - const Tensor& attn_weight, - const int im2col_step); - -void ms_deform_attn_impl_backward( - const Tensor& value, const Tensor& spatial_shapes, - const Tensor& level_start_index, const Tensor& sampling_loc, - const Tensor& attn_weight, const Tensor& grad_output, Tensor& grad_value, - Tensor& grad_sampling_loc, Tensor& grad_attn_weight, const int im2col_step); - -REGISTER_DEVICE_IMPL(ms_deform_attn_impl_forward, CUDA, - ms_deform_attn_cuda_forward); -REGISTER_DEVICE_IMPL(ms_deform_attn_impl_backward, CUDA, - ms_deform_attn_cuda_backward); - -Tensor NMSCUDAKernelLauncher(Tensor boxes, Tensor scores, float iou_threshold, - int offset); - -Tensor nms_cuda(Tensor boxes, Tensor scores, float iou_threshold, int offset) { - return NMSCUDAKernelLauncher(boxes, scores, iou_threshold, offset); -} - -Tensor nms_impl(Tensor boxes, Tensor scores, float iou_threshold, int offset); -REGISTER_DEVICE_IMPL(nms_impl, CUDA, nms_cuda); - -void PointsInBoxesPartForwardCUDAKernelLauncher(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points); - -void PointsInBoxesAllForwardCUDAKernelLauncher(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points); - -void points_in_boxes_part_forward_cuda(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points) { - PointsInBoxesPartForwardCUDAKernelLauncher(batch_size, boxes_num, pts_num, - boxes, pts, box_idx_of_points); -}; - -void points_in_boxes_all_forward_cuda(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points) { - PointsInBoxesAllForwardCUDAKernelLauncher(batch_size, boxes_num, pts_num, - boxes, pts, box_idx_of_points); -}; - -void points_in_boxes_part_forward_impl(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points); - -void points_in_boxes_all_forward_impl(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points); -REGISTER_DEVICE_IMPL(points_in_boxes_part_forward_impl, CUDA, - points_in_boxes_part_forward_cuda); -REGISTER_DEVICE_IMPL(points_in_boxes_all_forward_impl, CUDA, - points_in_boxes_all_forward_cuda); - -void PSAMaskForwardCUDAKernelLauncher(const int psa_type, const Tensor input, - Tensor output, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, - const int half_w_mask); - -void PSAMaskBackwardCUDAKernelLauncher( - const int psa_type, const Tensor grad_output, Tensor grad_input, - const int num_, const int h_feature, const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, const int half_w_mask); - -void psamask_forward_cuda(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask) { - PSAMaskForwardCUDAKernelLauncher(psa_type, input, output, num_, h_feature, - w_feature, h_mask, w_mask, half_h_mask, - half_w_mask); -} - -void psamask_backward_cuda(const int psa_type, const 
Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask) { - PSAMaskBackwardCUDAKernelLauncher(psa_type, grad_output, grad_input, num_, - h_feature, w_feature, h_mask, w_mask, - half_h_mask, half_w_mask); -} - -void psamask_forward_impl(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask); - -void psamask_backward_impl(const int psa_type, const Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask); -REGISTER_DEVICE_IMPL(psamask_forward_impl, CUDA, psamask_forward_cuda); -REGISTER_DEVICE_IMPL(psamask_backward_impl, CUDA, psamask_backward_cuda); - -void ROIAlignForwardCUDAKernelLauncher(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -void ROIAlignBackwardCUDAKernelLauncher(Tensor grad_output, Tensor rois, - Tensor argmax_y, Tensor argmax_x, - Tensor grad_input, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, - bool aligned); - -void roi_align_forward_cuda(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - ROIAlignForwardCUDAKernelLauncher( - input, rois, output, argmax_y, argmax_x, aligned_height, aligned_width, - spatial_scale, sampling_ratio, pool_mode, aligned); -} - -void roi_align_backward_cuda(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - ROIAlignBackwardCUDAKernelLauncher( - grad_output, rois, argmax_y, argmax_x, grad_input, aligned_height, - aligned_width, spatial_scale, sampling_ratio, pool_mode, aligned); -} - -void roi_align_forward_impl(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -void roi_align_backward_impl(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -REGISTER_DEVICE_IMPL(roi_align_forward_impl, CUDA, roi_align_forward_cuda); -REGISTER_DEVICE_IMPL(roi_align_backward_impl, CUDA, roi_align_backward_cuda); - -void ROIAlignRotatedForwardCUDAKernelLauncher( - const at::Tensor input, const at::Tensor rois, const float spatial_scale, - const int sampling_ratio, const bool aligned, const bool clockwise, - const int channels, const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, at::Tensor output); - -void ROIAlignRotatedBackwardCUDAKernelLauncher( - const at::Tensor top_grad, const at::Tensor rois, const float spatial_scale, - const int sampling_ratio, const bool aligned, const bool clockwise, - const int channels, const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, 
at::Tensor bottom_grad); - -void roi_align_rotated_forward_cuda(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise) { - // Number of ROIs - int num_rois = rois.size(0); - int size_rois = rois.size(1); - - if (size_rois != 6) { - AT_ERROR("wrong roi size"); - } - - int num_channels = input.size(1); - int data_height = input.size(2); - int data_width = input.size(3); - ROIAlignRotatedForwardCUDAKernelLauncher( - input, rois, spatial_scale, sampling_ratio, aligned, clockwise, - num_channels, data_height, data_width, num_rois, aligned_height, - aligned_width, output); -} - -void roi_align_rotated_backward_cuda(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise) { - // Number of ROIs - int num_rois = rois.size(0); - int size_rois = rois.size(1); - if (size_rois != 6) { - AT_ERROR("wrong roi size"); - } - - int num_channels = bottom_grad.size(1); - int data_height = bottom_grad.size(2); - int data_width = bottom_grad.size(3); - ROIAlignRotatedBackwardCUDAKernelLauncher( - top_grad, rois, spatial_scale, sampling_ratio, aligned, clockwise, - num_channels, data_height, data_width, num_rois, aligned_height, - aligned_width, bottom_grad); -} - -void roi_align_rotated_forward_impl(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise); - -void roi_align_rotated_backward_impl(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise); -REGISTER_DEVICE_IMPL(roi_align_rotated_forward_impl, CUDA, - roi_align_rotated_forward_cuda); -REGISTER_DEVICE_IMPL(roi_align_rotated_backward_impl, CUDA, - roi_align_rotated_backward_cuda); - -void RiROIAlignRotatedForwardCUDAKernelLauncher( - const at::Tensor features, const at::Tensor rois, const float spatial_scale, - const int num_samples, const bool clockwise, const int channels, - const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, const int num_orientations, - at::Tensor output); - -void RiROIAlignRotatedBackwardCUDAKernelLauncher( - const at::Tensor top_grad, const at::Tensor rois, const float spatial_scale, - const int num_samples, const bool clockwise, const int channels, - const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, const int num_orientations, - at::Tensor bottom_grad); - -void riroi_align_rotated_forward_cuda(Tensor features, Tensor rois, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise) { - // Number of ROIs - int num_rois = rois.size(0); - int size_rois = rois.size(1); - if (size_rois != 6) { - AT_ERROR("wrong roi size"); - } - CHECK_CONTIGUOUS(features); - CHECK_CONTIGUOUS(rois); - int num_channels = features.size(1) / num_orientations; - int data_height = features.size(2); - int data_width = features.size(3); - RiROIAlignRotatedForwardCUDAKernelLauncher( - features, rois, spatial_scale, num_samples, clockwise, num_channels, - data_height, data_width, num_rois, pooled_height, pooled_width, - num_orientations, output); -} - -void riroi_align_rotated_backward_cuda(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int pooled_height, 
- int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise) { - // Number of ROIs - int num_rois = rois.size(0); - int size_rois = rois.size(1); - if (size_rois != 6) { - AT_ERROR("wrong roi size"); - } - CHECK_CONTIGUOUS(top_grad); - CHECK_CONTIGUOUS(rois); - int num_channels = bottom_grad.size(1) / num_orientations; - int data_height = bottom_grad.size(2); - int data_width = bottom_grad.size(3); - RiROIAlignRotatedBackwardCUDAKernelLauncher( - top_grad, rois, spatial_scale, num_samples, clockwise, num_channels, - data_height, data_width, num_rois, pooled_height, pooled_width, - num_orientations, bottom_grad); -} - -void riroi_align_rotated_forward_impl(Tensor features, Tensor rois, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise); - -void riroi_align_rotated_backward_impl(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise); - -REGISTER_DEVICE_IMPL(riroi_align_rotated_forward_impl, CUDA, - riroi_align_rotated_forward_cuda); -REGISTER_DEVICE_IMPL(riroi_align_rotated_backward_impl, CUDA, - riroi_align_rotated_backward_cuda); - -void RoiawarePool3dForwardCUDAKernelLauncher( - int boxes_num, int pts_num, int channels, int max_pts_each_voxel, int out_x, - int out_y, int out_z, const Tensor rois, const Tensor pts, - const Tensor pts_feature, Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method); - -void RoiawarePool3dBackwardCUDAKernelLauncher( - int boxes_num, int out_x, int out_y, int out_z, int channels, - int max_pts_each_voxel, const Tensor pts_idx_of_voxels, const Tensor argmax, - const Tensor grad_out, Tensor grad_in, int pool_method); - -void roiaware_pool3d_forward_cuda(int boxes_num, int pts_num, int channels, - int max_pts_each_voxel, int out_x, int out_y, - int out_z, const Tensor rois, - const Tensor pts, const Tensor pts_feature, - Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method) { - RoiawarePool3dForwardCUDAKernelLauncher( - boxes_num, pts_num, channels, max_pts_each_voxel, out_x, out_y, out_z, - rois, pts, pts_feature, argmax, pts_idx_of_voxels, pooled_features, - pool_method); -}; - -void roiaware_pool3d_backward_cuda(int boxes_num, int out_x, int out_y, - int out_z, int channels, - int max_pts_each_voxel, - const Tensor pts_idx_of_voxels, - const Tensor argmax, const Tensor grad_out, - Tensor grad_in, int pool_method) { - RoiawarePool3dBackwardCUDAKernelLauncher( - boxes_num, out_x, out_y, out_z, channels, max_pts_each_voxel, - pts_idx_of_voxels, argmax, grad_out, grad_in, pool_method); -}; - -void roiaware_pool3d_forward_impl(int boxes_num, int pts_num, int channels, - int max_pts_each_voxel, int out_x, int out_y, - int out_z, const Tensor rois, - const Tensor pts, const Tensor pts_feature, - Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method); - -void roiaware_pool3d_backward_impl(int boxes_num, int out_x, int out_y, - int out_z, int channels, - int max_pts_each_voxel, - const Tensor pts_idx_of_voxels, - const Tensor argmax, const Tensor grad_out, - Tensor grad_in, int pool_method); - -REGISTER_DEVICE_IMPL(roiaware_pool3d_forward_impl, CUDA, - roiaware_pool3d_forward_cuda); -REGISTER_DEVICE_IMPL(roiaware_pool3d_backward_impl, CUDA, - roiaware_pool3d_backward_cuda); - -void RoIPointPool3dForwardCUDAKernelLauncher( - int batch_size, int 
pts_num, int boxes_num, int feature_in_len, - int sampled_pts_num, const Tensor xyz, const Tensor boxes3d, - const Tensor pts_feature, Tensor pooled_features, Tensor pooled_empty_flag); - -void roipoint_pool3d_forward_cuda(int batch_size, int pts_num, int boxes_num, - int feature_in_len, int sampled_pts_num, - const Tensor xyz, const Tensor boxes3d, - const Tensor pts_feature, - Tensor pooled_features, - Tensor pooled_empty_flag) { - RoIPointPool3dForwardCUDAKernelLauncher( - batch_size, pts_num, boxes_num, feature_in_len, sampled_pts_num, xyz, - boxes3d, pts_feature, pooled_features, pooled_empty_flag); -}; - -void roipoint_pool3d_forward_impl(int batch_size, int pts_num, int boxes_num, - int feature_in_len, int sampled_pts_num, - const Tensor xyz, const Tensor boxes3d, - const Tensor pts_feature, - Tensor pooled_features, - Tensor pooled_empty_flag); -REGISTER_DEVICE_IMPL(roipoint_pool3d_forward_impl, CUDA, - roipoint_pool3d_forward_cuda); - -void ROIPoolForwardCUDAKernelLauncher(Tensor input, Tensor rois, Tensor output, - Tensor argmax, int pooled_height, - int pooled_width, float spatial_scale); - -void ROIPoolBackwardCUDAKernelLauncher(Tensor grad_output, Tensor rois, - Tensor argmax, Tensor grad_input, - int pooled_height, int pooled_width, - float spatial_scale); - -void roi_pool_forward_cuda(Tensor input, Tensor rois, Tensor output, - Tensor argmax, int pooled_height, int pooled_width, - float spatial_scale) { - ROIPoolForwardCUDAKernelLauncher(input, rois, output, argmax, pooled_height, - pooled_width, spatial_scale); -} - -void roi_pool_backward_cuda(Tensor grad_output, Tensor rois, Tensor argmax, - Tensor grad_input, int pooled_height, - int pooled_width, float spatial_scale) { - ROIPoolBackwardCUDAKernelLauncher(grad_output, rois, argmax, grad_input, - pooled_height, pooled_width, spatial_scale); -} - -void roi_pool_forward_impl(Tensor input, Tensor rois, Tensor output, - Tensor argmax, int pooled_height, int pooled_width, - float spatial_scale); -void roi_pool_backward_impl(Tensor grad_output, Tensor rois, Tensor argmax, - Tensor grad_input, int pooled_height, - int pooled_width, float spatial_scale); -REGISTER_DEVICE_IMPL(roi_pool_forward_impl, CUDA, roi_pool_forward_cuda); -REGISTER_DEVICE_IMPL(roi_pool_backward_impl, CUDA, roi_pool_backward_cuda); - -typedef enum { SUM = 0, MEAN = 1, MAX = 2 } reduce_t; - -std::vector DynamicPointToVoxelForwardCUDAKernelLauncher( - const at::Tensor& feats, const at::Tensor& coors, - const reduce_t reduce_type); - -void DynamicPointToVoxelBackwardCUDAKernelLauncher( - at::Tensor& grad_feats, const at::Tensor& grad_reduced_feats, - const at::Tensor& feats, const at::Tensor& reduced_feats, - const at::Tensor& coors_map, const at::Tensor& reduce_count, - const reduce_t reduce_type); - -std::vector dynamic_point_to_voxel_forward_cuda( - const torch::Tensor& feats, const torch::Tensor& coors, - const reduce_t reduce_type) { - return DynamicPointToVoxelForwardCUDAKernelLauncher(feats, coors, - reduce_type); -}; - -void dynamic_point_to_voxel_backward_cuda( - torch::Tensor& grad_feats, const torch::Tensor& grad_reduced_feats, - const torch::Tensor& feats, const torch::Tensor& reduced_feats, - const torch::Tensor& coors_idx, const torch::Tensor& reduce_count, - const reduce_t reduce_type) { - DynamicPointToVoxelBackwardCUDAKernelLauncher(grad_feats, grad_reduced_feats, - feats, reduced_feats, coors_idx, - reduce_count, reduce_type); -}; - -std::vector dynamic_point_to_voxel_forward_impl( - const torch::Tensor& feats, const torch::Tensor& coors, - 
const reduce_t reduce_type); - -void dynamic_point_to_voxel_backward_impl( - torch::Tensor& grad_feats, const torch::Tensor& grad_reduced_feats, - const torch::Tensor& feats, const torch::Tensor& reduced_feats, - const torch::Tensor& coors_idx, const torch::Tensor& reduce_count, - const reduce_t reduce_type); - -REGISTER_DEVICE_IMPL(dynamic_point_to_voxel_forward_impl, CUDA, - dynamic_point_to_voxel_forward_cuda); -REGISTER_DEVICE_IMPL(dynamic_point_to_voxel_backward_impl, CUDA, - dynamic_point_to_voxel_backward_cuda); - -void SyncBNForwardMeanCUDAKernelLauncher(const Tensor input, Tensor mean); - -void SyncBNForwardVarCUDAKernelLauncher(const Tensor input, const Tensor mean, - Tensor var); - -void SyncBNForwardOutputCUDAKernelLauncher( - const Tensor input, const Tensor mean, const Tensor var, - Tensor running_mean, Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, Tensor output, float eps, - float momentum, int group_size); - -void SyncBNBackwardParamCUDAKernelLauncher(const Tensor grad_output, - const Tensor norm, - Tensor grad_weight, - Tensor grad_bias); - -void SyncBNBackwardDataCUDAKernelLauncher(const Tensor grad_output, - const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, - const Tensor norm, const Tensor std, - Tensor grad_input); - -void sync_bn_forward_mean_cuda(const Tensor input, Tensor mean) { - SyncBNForwardMeanCUDAKernelLauncher(input, mean); -} - -void sync_bn_forward_var_cuda(const Tensor input, const Tensor mean, - Tensor var) { - SyncBNForwardVarCUDAKernelLauncher(input, mean, var); -} - -void sync_bn_forward_output_cuda(const Tensor input, const Tensor mean, - const Tensor var, Tensor running_mean, - Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size) { - SyncBNForwardOutputCUDAKernelLauncher(input, mean, var, running_mean, - running_var, weight, bias, norm, std, - output, eps, momentum, group_size); -} - -void sync_bn_backward_param_cuda(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias) { - SyncBNBackwardParamCUDAKernelLauncher(grad_output, norm, grad_weight, - grad_bias); -} - -void sync_bn_backward_data_cuda(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, const Tensor norm, - const Tensor std, Tensor grad_input) { - SyncBNBackwardDataCUDAKernelLauncher(grad_output, weight, grad_weight, - grad_bias, norm, std, grad_input); -} - -void sync_bn_forward_mean_impl(const Tensor input, Tensor mean); - -void sync_bn_forward_var_impl(const Tensor input, const Tensor mean, - Tensor var); - -void sync_bn_forward_output_impl(const Tensor input, const Tensor mean, - const Tensor var, Tensor running_mean, - Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size); - -void sync_bn_backward_param_impl(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias); - -void sync_bn_backward_data_impl(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, const Tensor norm, - const Tensor std, Tensor grad_input); - -REGISTER_DEVICE_IMPL(sync_bn_forward_mean_impl, CUDA, - sync_bn_forward_mean_cuda); -REGISTER_DEVICE_IMPL(sync_bn_forward_var_impl, CUDA, sync_bn_forward_var_cuda); -REGISTER_DEVICE_IMPL(sync_bn_forward_output_impl, CUDA, - sync_bn_forward_output_cuda); 
-REGISTER_DEVICE_IMPL(sync_bn_backward_param_impl, CUDA, - sync_bn_backward_param_cuda); -REGISTER_DEVICE_IMPL(sync_bn_backward_data_impl, CUDA, - sync_bn_backward_data_cuda); - -void ThreeInterpolateForwardCUDAKernelLauncher(int b, int c, int m, int n, - const Tensor points, - const Tensor idx, - const Tensor weight, Tensor out); - -void ThreeInterpolateBackwardCUDAKernelLauncher(int b, int c, int n, int m, - const Tensor grad_out, - const Tensor idx, - const Tensor weight, - Tensor grad_points); - -void three_interpolate_forward_cuda(int b, int c, int m, int n, - const Tensor points, const Tensor idx, - const Tensor weight, Tensor out) { - ThreeInterpolateForwardCUDAKernelLauncher(b, c, m, n, points, idx, weight, - out); -}; - -void three_interpolate_backward_cuda(int b, int c, int n, int m, - const Tensor grad_out, const Tensor idx, - const Tensor weight, Tensor grad_points) { - ThreeInterpolateBackwardCUDAKernelLauncher(b, c, n, m, grad_out, idx, weight, - grad_points); -}; - -void three_interpolate_forward_impl(int b, int c, int m, int n, - const Tensor points, const Tensor idx, - const Tensor weight, Tensor out); - -void three_interpolate_backward_impl(int b, int c, int n, int m, - const Tensor grad_out, const Tensor idx, - const Tensor weight, Tensor grad_points); -REGISTER_DEVICE_IMPL(three_interpolate_forward_impl, CUDA, - three_interpolate_forward_cuda); -REGISTER_DEVICE_IMPL(three_interpolate_backward_impl, CUDA, - three_interpolate_backward_cuda); - -void ThreeNNForwardCUDAKernelLauncher(int b, int n, int m, const Tensor unknown, - const Tensor known, Tensor dist2, - Tensor idx); - -void three_nn_forward_cuda(int b, int n, int m, const Tensor unknown, - const Tensor known, Tensor dist2, Tensor idx) { - ThreeNNForwardCUDAKernelLauncher(b, n, m, unknown, known, dist2, idx); -}; - -void three_nn_forward_impl(int b, int n, int m, const Tensor unknown, - const Tensor known, Tensor dist2, Tensor idx); -REGISTER_DEVICE_IMPL(three_nn_forward_impl, CUDA, three_nn_forward_cuda); - -void TINShiftForwardCUDAKernelLauncher(Tensor input, Tensor shift, - Tensor output); - -void TINShiftBackwardCUDAKernelLauncher(Tensor grad_output, Tensor shift, - Tensor grad_input); - -void tin_shift_forward_cuda(Tensor input, Tensor shift, Tensor output) { - TINShiftForwardCUDAKernelLauncher(input, shift, output); -} - -void tin_shift_backward_cuda(Tensor grad_output, Tensor shift, - Tensor grad_input) { - TINShiftBackwardCUDAKernelLauncher(grad_output, shift, grad_input); -} - -void tin_shift_forward_impl(Tensor input, Tensor shift, Tensor output); -void tin_shift_backward_impl(Tensor grad_output, Tensor shift, - Tensor grad_input); -REGISTER_DEVICE_IMPL(tin_shift_forward_impl, CUDA, tin_shift_forward_cuda); -REGISTER_DEVICE_IMPL(tin_shift_backward_impl, CUDA, tin_shift_backward_cuda); - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, - const torch::Tensor& kernel, int up_x, int up_y, - int down_x, int down_y, int pad_x0, int pad_x1, - int pad_y0, int pad_y1); - -torch::Tensor upfirdn2d_op_impl(const torch::Tensor& input, - const torch::Tensor& kernel, int up_x, int up_y, - int down_x, int down_y, int pad_x0, int pad_x1, - int pad_y0, int pad_y1); -REGISTER_DEVICE_IMPL(upfirdn2d_op_impl, CUDA, upfirdn2d_op); - -int HardVoxelizeForwardCUDAKernelLauncher( - const at::Tensor& points, at::Tensor& voxels, at::Tensor& coors, - at::Tensor& num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim = 3); - -int 
NondeterministicHardVoxelizeForwardCUDAKernelLauncher( - const at::Tensor& points, at::Tensor& voxels, at::Tensor& coors, - at::Tensor& num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim = 3); - -void DynamicVoxelizeForwardCUDAKernelLauncher( - const at::Tensor& points, at::Tensor& coors, - const std::vector voxel_size, const std::vector coors_range, - const int NDim = 3); - -int hard_voxelize_forward_cuda(const at::Tensor& points, at::Tensor& voxels, - at::Tensor& coors, - at::Tensor& num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim) { - return HardVoxelizeForwardCUDAKernelLauncher( - points, voxels, coors, num_points_per_voxel, voxel_size, coors_range, - max_points, max_voxels, NDim); -}; - -int nondeterministic_hard_voxelize_forward_cuda( - const at::Tensor& points, at::Tensor& voxels, at::Tensor& coors, - at::Tensor& num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim) { - return NondeterministicHardVoxelizeForwardCUDAKernelLauncher( - points, voxels, coors, num_points_per_voxel, voxel_size, coors_range, - max_points, max_voxels, NDim); -}; - -void dynamic_voxelize_forward_cuda(const at::Tensor& points, at::Tensor& coors, - const std::vector voxel_size, - const std::vector coors_range, - const int NDim) { - DynamicVoxelizeForwardCUDAKernelLauncher(points, coors, voxel_size, - coors_range, NDim); -}; - -int hard_voxelize_forward_impl(const at::Tensor& points, at::Tensor& voxels, - at::Tensor& coors, - at::Tensor& num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim); - -int nondeterministic_hard_voxelize_forward_impl( - const at::Tensor& points, at::Tensor& voxels, at::Tensor& coors, - at::Tensor& num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim); - -void dynamic_voxelize_forward_impl(const at::Tensor& points, at::Tensor& coors, - const std::vector voxel_size, - const std::vector coors_range, - const int NDim); - -REGISTER_DEVICE_IMPL(hard_voxelize_forward_impl, CUDA, - hard_voxelize_forward_cuda); -REGISTER_DEVICE_IMPL(nondeterministic_hard_voxelize_forward_impl, CUDA, - nondeterministic_hard_voxelize_forward_cuda); -REGISTER_DEVICE_IMPL(dynamic_voxelize_forward_impl, CUDA, - dynamic_voxelize_forward_cuda); - -void RotatedFeatureAlignForwardCUDAKernelLauncher(const Tensor features, - const Tensor best_bboxes, - const float spatial_scale, - const int points, - Tensor output); - -void RotatedFeatureAlignBackwardCUDAKernelLauncher(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, - Tensor bottom_grad); - -void rotated_feature_align_forward_cuda(const Tensor features, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor output) { - RotatedFeatureAlignForwardCUDAKernelLauncher(features, best_bboxes, - spatial_scale, points, output); -}; - -void rotated_feature_align_backward_cuda(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor bottom_grad) { - RotatedFeatureAlignBackwardCUDAKernelLauncher( - top_grad, best_bboxes, spatial_scale, points, bottom_grad); -}; - -void 
rotated_feature_align_forward_impl(const Tensor features, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor output); - -void rotated_feature_align_backward_impl(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor bottom_grad); - -REGISTER_DEVICE_IMPL(rotated_feature_align_forward_impl, CUDA, - rotated_feature_align_forward_cuda); -REGISTER_DEVICE_IMPL(rotated_feature_align_backward_impl, CUDA, - rotated_feature_align_backward_cuda); - -void PointsInPolygonsForwardCUDAKernelLauncher(const at::Tensor points, - const at::Tensor polygons, - const int rows, const int cols, - at::Tensor output); - -void points_in_polygons_forward_cuda(const Tensor points, const Tensor polygons, - Tensor output, const int rows, - const int cols) { - PointsInPolygonsForwardCUDAKernelLauncher(points, polygons, rows, cols, - output); -}; - -void points_in_polygons_forward_impl(const Tensor points, const Tensor polygons, - Tensor output, const int rows, - const int cols); - -REGISTER_DEVICE_IMPL(points_in_polygons_forward_impl, CUDA, - points_in_polygons_forward_cuda); - -void MinAreaPolygonsCUDAKernelLauncher(const Tensor pointsets, Tensor polygons); - -void min_area_polygons_cuda(const Tensor pointsets, Tensor polygons) { - MinAreaPolygonsCUDAKernelLauncher(pointsets, polygons); -} - -void min_area_polygons_impl(const Tensor pointsets, Tensor polygons); - -REGISTER_DEVICE_IMPL(min_area_polygons_impl, CUDA, min_area_polygons_cuda); - -void ActiveRotatedFilterForwardCUDAKernelLauncher(const Tensor input, - const Tensor indices, - Tensor output); - -void ActiveRotatedFilterBackwardCUDAKernelLauncher(const Tensor grad_out, - const Tensor indices, - Tensor grad_in); - -void active_rotated_filter_forward_cuda(const Tensor input, - const Tensor indices, Tensor output) { - ActiveRotatedFilterForwardCUDAKernelLauncher(input, indices, output); -}; - -void active_rotated_filter_backward_cuda(const Tensor grad_out, - const Tensor indices, Tensor grad_in) { - ActiveRotatedFilterBackwardCUDAKernelLauncher(grad_out, indices, grad_in); -}; - -void active_rotated_filter_forward_impl(const Tensor input, - const Tensor indices, Tensor output); - -void active_rotated_filter_backward_impl(const Tensor grad_out, - const Tensor indices, Tensor grad_in); - -REGISTER_DEVICE_IMPL(active_rotated_filter_forward_impl, CUDA, - active_rotated_filter_forward_cuda); -REGISTER_DEVICE_IMPL(active_rotated_filter_backward_impl, CUDA, - active_rotated_filter_backward_cuda); - -void ConvexIoUCUDAKernelLauncher(const Tensor pointsets, const Tensor polygons, - Tensor ious); - -void ConvexGIoUCUDAKernelLauncher(const Tensor pointsets, const Tensor polygons, - Tensor output); - -void convex_iou_cuda(const Tensor pointsets, const Tensor polygons, - Tensor ious) { - ConvexIoUCUDAKernelLauncher(pointsets, polygons, ious); -} - -void convex_giou_cuda(const Tensor pointsets, const Tensor polygons, - Tensor output) { - ConvexGIoUCUDAKernelLauncher(pointsets, polygons, output); -} - -void convex_iou_impl(const Tensor pointsets, const Tensor polygons, - Tensor ious); - -void convex_giou_impl(const Tensor pointsets, const Tensor polygons, - Tensor output); - -REGISTER_DEVICE_IMPL(convex_iou_impl, CUDA, convex_iou_cuda); -REGISTER_DEVICE_IMPL(convex_giou_impl, CUDA, convex_giou_cuda); - -Tensor DiffIoURotatedSortVerticesCUDAKernelLauncher(Tensor vertices, - Tensor mask, - Tensor num_valid); - -Tensor diff_iou_rotated_sort_vertices_forward_cuda(Tensor vertices, Tensor mask, - 
Tensor num_valid) { - return DiffIoURotatedSortVerticesCUDAKernelLauncher(vertices, mask, - num_valid); -} - -Tensor diff_iou_rotated_sort_vertices_forward_impl(Tensor vertices, Tensor mask, - Tensor num_valid); - -REGISTER_DEVICE_IMPL(diff_iou_rotated_sort_vertices_forward_impl, CUDA, - diff_iou_rotated_sort_vertices_forward_cuda); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv.cpp deleted file mode 100644 index 86690b93..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv.cpp +++ /dev/null @@ -1,517 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void deformable_im2col_impl(Tensor data_im, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor data_col) { - DISPATCH_DEVICE_IMPL(deformable_im2col_impl, data_im, data_offset, channels, - height, width, ksize_h, ksize_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, parallel_imgs, - deformable_group, data_col); -} - -void deformable_col2im_impl(Tensor data_col, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor grad_im) { - DISPATCH_DEVICE_IMPL(deformable_col2im_impl, data_col, data_offset, channels, - height, width, ksize_h, ksize_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, parallel_imgs, - deformable_group, grad_im); -} - -void deformable_col2im_coord_impl( - Tensor data_col, Tensor data_im, Tensor data_offset, const int channels, - const int height, const int width, const int ksize_h, const int ksize_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, Tensor grad_offset) { - DISPATCH_DEVICE_IMPL(deformable_col2im_coord_impl, data_col, data_im, - data_offset, channels, height, width, ksize_h, ksize_w, - pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, - parallel_imgs, deformable_group, grad_offset); -} - -void deform_conv_shape_check(at::Tensor input, at::Tensor offset, - at::Tensor *gradOutput, at::Tensor weight, int kH, - int kW, int dH, int dW, int padH, int padW, - int dilationH, int dilationW, int group, - int deformable_group) { - TORCH_CHECK( - weight.ndimension() == 4, - "4D weight tensor (nOutputPlane,nInputPlane,kH,kW) expected, but got: %s", - weight.ndimension()); - - TORCH_CHECK(weight.is_contiguous(), "weight tensor has to be contiguous"); - - TORCH_CHECK(kW > 0 && kH > 0, - "kernel size should be greater than zero, but got kH: %d kW: %d", - kH, kW); - - TORCH_CHECK((weight.size(2) == kH && weight.size(3) == kW), - "kernel size should be consistent with weight, ", - "but got kH: %d kW: %d weight.size(2): %d, weight.size(3): %d", - kH, kW, weight.size(2), weight.size(3)); - - TORCH_CHECK(dW > 0 && dH > 0, - "stride should be greater than zero, but got dH: %d dW: %d", dH, - dW); - - TORCH_CHECK( - 
dilationW > 0 && dilationH > 0, - "dilation should be greater than 0, but got dilationH: %d dilationW: %d", - dilationH, dilationW); - - int ndim = input.ndimension(); - int dimf = 0; - int dimh = 1; - int dimw = 2; - - if (ndim == 4) { - dimf++; - dimh++; - dimw++; - } - - TORCH_CHECK(ndim == 3 || ndim == 4, - "3D or 4D input tensor expected but got: %s", ndim); - - long nInputPlane = weight.size(1) * group; - long inputHeight = input.size(dimh); - long inputWidth = input.size(dimw); - long nOutputPlane = weight.size(0); - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - - TORCH_CHECK(nInputPlane % deformable_group == 0, - "input channels must divide deformable group size"); - - if (outputWidth < 1 || outputHeight < 1) - AT_ERROR( - "Given input size: (%ld x %ld x %ld). " - "Calculated output size: (%ld x %ld x %ld). Output size is too small", - nInputPlane, inputHeight, inputWidth, nOutputPlane, outputHeight, - outputWidth); - - TORCH_CHECK(input.size(1) == nInputPlane, - "invalid number of input planes, expected: %d, but got: %d", - nInputPlane, input.size(1)); - - TORCH_CHECK((inputHeight >= kH && inputWidth >= kW), - "input image is smaller than kernel"); - - TORCH_CHECK( - (offset.size(2) == outputHeight && offset.size(3) == outputWidth), - "invalid spatial size of offset, expected height: %d width: %d, but " - "got height: %d width: %d", - outputHeight, outputWidth, offset.size(2), offset.size(3)); - - TORCH_CHECK((offset.size(1) == deformable_group * 2 * kH * kW), - "invalid number of channels of offset"); - - if (gradOutput != NULL) { - TORCH_CHECK( - gradOutput->size(dimf) == nOutputPlane, - "invalid number of gradOutput planes, expected: %d, but got: %d", - nOutputPlane, gradOutput->size(dimf)); - - TORCH_CHECK( - (gradOutput->size(dimh) == outputHeight && - gradOutput->size(dimw) == outputWidth), - "invalid size of gradOutput, expected height: %d width: %d , but " - "got height: %d width: %d", - outputHeight, outputWidth, gradOutput->size(dimh), - gradOutput->size(dimw)); - } -} - -void deform_conv_forward(Tensor input, Tensor weight, Tensor offset, - Tensor output, Tensor columns, Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(offset); - CHECK_CUDA_INPUT(weight); - CHECK_CUDA_INPUT(output); - CHECK_CUDA_INPUT(columns); - CHECK_CUDA_INPUT(ones); -#else - AT_ERROR("DeformConv is not compiled with GPU support"); -#endif - } else { - CHECK_CPU_INPUT(input); - CHECK_CPU_INPUT(offset); - CHECK_CPU_INPUT(weight); - CHECK_CPU_INPUT(output); - CHECK_CPU_INPUT(columns); - CHECK_CPU_INPUT(ones); - } - - deform_conv_shape_check(input, offset, NULL, weight, kH, kW, dH, dW, padH, - padW, dilationH, dilationW, group, deformable_group); - at::DeviceGuard guard(input.device()); - - int batch = 1; - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input.unsqueeze_(0); - offset.unsqueeze_(0); - } - - // todo: assert batchsize dividable by im2col_step - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = weight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 
2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), "invalid batch size of offset"); - - output = output.view({batchSize / im2col_step, im2col_step, nOutputPlane, - outputHeight, outputWidth}); - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < outputHeight * outputWidth) { - ones = at::ones({outputHeight, outputWidth}, input.options()); - } - - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - Tensor output_buffer = at::zeros({batchSize / im2col_step, nOutputPlane, - im2col_step * outputHeight, outputWidth}, - output.options()); - - output_buffer = output_buffer.view( - {output_buffer.size(0), group, output_buffer.size(1) / group, - output_buffer.size(2), output_buffer.size(3)}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - deformable_im2col_impl(input[elt], offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, columns); - - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - - for (int g = 0; g < group; g++) { - output_buffer[elt][g] = output_buffer[elt][g] - .flatten(1) - .addmm_(weight[g].flatten(1), columns[g]) - .view_as(output_buffer[elt][g]); - } - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - } - - output_buffer = output_buffer.view( - {output_buffer.size(0), output_buffer.size(1) * output_buffer.size(2), - output_buffer.size(3), output_buffer.size(4)}); - - output_buffer = output_buffer.view({batchSize / im2col_step, nOutputPlane, - im2col_step, outputHeight, outputWidth}); - output_buffer.transpose_(1, 2); - output.copy_(output_buffer); - output = output.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - output = output.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - offset = offset.view({offset.size(1), offset.size(2), offset.size(3)}); - } -} - -void deform_conv_backward_input(Tensor input, Tensor offset, Tensor gradOutput, - Tensor gradInput, Tensor gradOffset, - Tensor weight, Tensor columns, int kW, int kH, - int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(offset); - CHECK_CUDA_INPUT(gradOutput); - CHECK_CUDA_INPUT(gradInput); - CHECK_CUDA_INPUT(gradOffset); - CHECK_CUDA_INPUT(weight); - CHECK_CUDA_INPUT(columns); -#else - AT_ERROR("DeformConv is not compiled with GPU support"); -#endif - } else { - CHECK_CPU_INPUT(input); - CHECK_CPU_INPUT(offset); - CHECK_CPU_INPUT(gradOutput); - CHECK_CPU_INPUT(gradInput); - CHECK_CPU_INPUT(gradOffset); - CHECK_CPU_INPUT(weight); - CHECK_CPU_INPUT(columns); - } - 
deform_conv_shape_check(input, offset, &gradOutput, weight, kH, kW, dH, dW, - padH, padW, dilationH, dilationW, group, - deformable_group); - - at::DeviceGuard guard(input.device()); - - int batch = 1; - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input = input.view({1, input.size(0), input.size(1), input.size(2)}); - offset = offset.view({1, offset.size(0), offset.size(1), offset.size(2)}); - gradOutput = gradOutput.view( - {1, gradOutput.size(0), gradOutput.size(1), gradOutput.size(2)}); - } - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = weight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), 3, "invalid batch size of offset"); - gradInput = gradInput.view({batchSize, nInputPlane, inputHeight, inputWidth}); - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - // change order of grad output - gradOutput = gradOutput.view({batchSize / im2col_step, im2col_step, - nOutputPlane, outputHeight, outputWidth}); - gradOutput.transpose_(1, 2); - - gradInput = gradInput.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - gradOffset = gradOffset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, - outputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - // divide into groups - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - gradOutput = gradOutput.view( - {gradOutput.size(0), group, gradOutput.size(1) / group, - gradOutput.size(2), gradOutput.size(3), gradOutput.size(4)}); - - for (int g = 0; g < group; g++) { - columns[g] = columns[g].addmm_(weight[g].flatten(1).transpose(0, 1), - gradOutput[elt][g].flatten(1), 0.0f, 1.0f); - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - gradOutput = gradOutput.view( - {gradOutput.size(0), gradOutput.size(1) * gradOutput.size(2), - gradOutput.size(3), gradOutput.size(4), gradOutput.size(5)}); - - deformable_col2im_coord_impl(columns, input[elt], offset[elt], nInputPlane, - inputHeight, inputWidth, kH, kW, padH, padW, - dH, dW, dilationH, dilationW, im2col_step, - deformable_group, gradOffset[elt]); - - deformable_col2im_impl(columns, offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, - gradInput[elt]); - - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - } - - gradOutput.transpose_(1, 2); - gradOutput = - gradOutput.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - gradInput = gradInput.view({batchSize, nInputPlane, inputHeight, inputWidth}); - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - gradOffset = gradOffset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - offset = 
offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - gradOutput = gradOutput.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - gradInput = gradInput.view({nInputPlane, inputHeight, inputWidth}); - offset = offset.view({offset.size(1), offset.size(2), offset.size(3)}); - gradOffset = - gradOffset.view({offset.size(1), offset.size(2), offset.size(3)}); - } -} - -void deform_conv_backward_parameters(Tensor input, Tensor offset, - Tensor gradOutput, Tensor gradWeight, - Tensor columns, Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, float scale, - int im2col_step) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(offset); - CHECK_CUDA_INPUT(gradOutput); - CHECK_CUDA_INPUT(gradWeight); - CHECK_CUDA_INPUT(columns); - CHECK_CUDA_INPUT(ones); -#else - AT_ERROR("DeformConv is not compiled with GPU support"); -#endif - } else { - CHECK_CPU_INPUT(input); - CHECK_CPU_INPUT(offset); - CHECK_CPU_INPUT(gradOutput); - CHECK_CPU_INPUT(gradWeight); - CHECK_CPU_INPUT(columns); - CHECK_CPU_INPUT(ones); - } - - deform_conv_shape_check(input, offset, &gradOutput, gradWeight, kH, kW, dH, - dW, padH, padW, dilationH, dilationW, group, - deformable_group); - at::DeviceGuard guard(input.device()); - - int batch = 1; - - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input = input.view( - at::IntList({1, input.size(0), input.size(1), input.size(2)})); - gradOutput = gradOutput.view( - {1, gradOutput.size(0), gradOutput.size(1), gradOutput.size(2)}); - } - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = gradWeight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), "invalid batch size of offset"); - - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - gradOutput = gradOutput.view({batchSize / im2col_step, im2col_step, - nOutputPlane, outputHeight, outputWidth}); - gradOutput.transpose_(1, 2); - - Tensor gradOutputBuffer = at::zeros_like(gradOutput); - gradOutputBuffer = - gradOutputBuffer.view({batchSize / im2col_step, nOutputPlane, im2col_step, - outputHeight, outputWidth}); - gradOutputBuffer = gradOutputBuffer.contiguous(); - gradOutputBuffer.copy_(gradOutput); - gradOutputBuffer = - gradOutputBuffer.view({batchSize / im2col_step, nOutputPlane, - im2col_step * outputHeight, outputWidth}); - - gradOutput.transpose_(1, 2); - gradOutput = - gradOutput.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - deformable_im2col_impl(input[elt], offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, columns); - - // divide into group - gradOutputBuffer = gradOutputBuffer.view( - {gradOutputBuffer.size(0), group, 
gradOutputBuffer.size(1) / group, - gradOutputBuffer.size(2), gradOutputBuffer.size(3)}); - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - gradWeight = - gradWeight.view({group, gradWeight.size(0) / group, gradWeight.size(1), - gradWeight.size(2), gradWeight.size(3)}); - - for (int g = 0; g < group; g++) { - gradWeight[g] = gradWeight[g] - .flatten(1) - .addmm_(gradOutputBuffer[elt][g].flatten(1), - columns[g].transpose(1, 0), 1.0, scale) - .view_as(gradWeight[g]); - } - gradOutputBuffer = gradOutputBuffer.view( - {gradOutputBuffer.size(0), - gradOutputBuffer.size(1) * gradOutputBuffer.size(2), - gradOutputBuffer.size(3), gradOutputBuffer.size(4)}); - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - gradWeight = gradWeight.view({gradWeight.size(0) * gradWeight.size(1), - gradWeight.size(2), gradWeight.size(3), - gradWeight.size(4)}); - } - - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - gradOutput = gradOutput.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - } -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv_parrots.cpp deleted file mode 100644 index c07a170d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv_parrots.cpp +++ /dev/null @@ -1,273 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "deform_conv_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void deform_conv_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kW, kH, dW, dH, padW, padH, dilationW, dilationH, group, deformable_group, - im2col_step; - SSAttrs(attr) - .get("kW", kW) - .get("kH", kH) - .get("dW", dW) - .get("dH", dH) - .get("padW", padW) - .get("padH", padH) - .get("dilationW", dilationW) - .get("dilationH", dilationH) - .get("group", group) - .get("deformable_group", deformable_group) - .get("im2col_step", im2col_step) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& weight = buildATensor(ctx, ins[1]); - const auto& offset = buildATensor(ctx, ins[2]); - - auto output = buildATensor(ctx, outs[0]); - auto columns = buildATensor(ctx, outs[1]); - auto ones = buildATensor(ctx, outs[2]); - - deform_conv_forward(input, weight, offset, output, columns, ones, kW, kH, dW, - dH, padW, padH, dilationW, dilationH, group, - deformable_group, im2col_step); -} - -void deform_conv_backward_input_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kW, kH, dW, dH, padW, padH, dilationW, dilationH, group, deformable_group, - im2col_step; - SSAttrs(attr) - .get("kW", kW) - .get("kH", kH) - .get("dW", dW) - .get("dH", dH) - .get("padW", padW) - .get("padH", padH) - .get("dilationW", dilationW) - .get("dilationH", dilationH) - .get("group", group) - .get("deformable_group", deformable_group) - .get("im2col_step", im2col_step) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& offset = buildATensor(ctx, ins[1]); - const auto& gradOutput = buildATensor(ctx, ins[2]); - - auto gradInput = 
buildATensor(ctx, outs[0]); - auto gradOffset = buildATensor(ctx, outs[1]); - auto weight = buildATensor(ctx, outs[2]); - auto columns = buildATensor(ctx, outs[3]); - - deform_conv_backward_input(input, offset, gradOutput, gradInput, gradOffset, - weight, columns, kW, kH, dW, dH, padW, padH, - dilationW, dilationH, group, deformable_group, - im2col_step); -} - -void deform_conv_backward_parameters_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kW, kH, dW, dH, padW, padH, dilationW, dilationH, group, deformable_group, - im2col_step; - float scale; - SSAttrs(attr) - .get("kW", kW) - .get("kH", kH) - .get("dW", dW) - .get("dH", dH) - .get("padW", padW) - .get("padH", padH) - .get("dilationW", dilationW) - .get("dilationH", dilationH) - .get("group", group) - .get("deformable_group", deformable_group) - .get("scale", scale) - .get("im2col_step", im2col_step) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& offset = buildATensor(ctx, ins[1]); - const auto& gradOutput = buildATensor(ctx, ins[2]); - - auto gradWeight = buildATensor(ctx, outs[0]); - auto columns = buildATensor(ctx, outs[1]); - auto ones = buildATensor(ctx, outs[2]); - deform_conv_backward_parameters(input, offset, gradOutput, gradWeight, - columns, ones, kW, kH, dW, dH, padW, padH, - dilationW, dilationH, group, deformable_group, - scale, im2col_step); -} -#endif - -void deform_conv_forward_cpu_parrots(HostContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kW, kH, dW, dH, padW, padH, dilationW, dilationH, group, deformable_group, - im2col_step; - SSAttrs(attr) - .get("kW", kW) - .get("kH", kH) - .get("dW", dW) - .get("dH", dH) - .get("padW", padW) - .get("padH", padH) - .get("dilationW", dilationW) - .get("dilationH", dilationH) - .get("group", group) - .get("deformable_group", deformable_group) - .get("im2col_step", im2col_step) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& weight = buildATensor(ctx, ins[1]); - const auto& offset = buildATensor(ctx, ins[2]); - - auto output = buildATensor(ctx, outs[0]); - auto columns = buildATensor(ctx, outs[1]); - auto ones = buildATensor(ctx, outs[2]); - - deform_conv_forward(input, weight, offset, output, columns, ones, kW, kH, dW, - dH, padW, padH, dilationW, dilationH, group, - deformable_group, im2col_step); -} - -void deform_conv_backward_input_cpu_parrots(HostContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kW, kH, dW, dH, padW, padH, dilationW, dilationH, group, deformable_group, - im2col_step; - SSAttrs(attr) - .get("kW", kW) - .get("kH", kH) - .get("dW", dW) - .get("dH", dH) - .get("padW", padW) - .get("padH", padH) - .get("dilationW", dilationW) - .get("dilationH", dilationH) - .get("group", group) - .get("deformable_group", deformable_group) - .get("im2col_step", im2col_step) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& offset = buildATensor(ctx, ins[1]); - const auto& gradOutput = buildATensor(ctx, ins[2]); - - auto gradInput = buildATensor(ctx, outs[0]); - auto gradOffset = buildATensor(ctx, outs[1]); - auto weight = buildATensor(ctx, outs[2]); - auto columns = buildATensor(ctx, outs[3]); - - deform_conv_backward_input(input, offset, gradOutput, gradInput, gradOffset, - weight, columns, kW, kH, dW, dH, padW, padH, - dilationW, dilationH, group, deformable_group, - 
im2col_step); -} - -void deform_conv_backward_parameters_cpu_parrots( - HostContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kW, kH, dW, dH, padW, padH, dilationW, dilationH, group, deformable_group, - im2col_step; - float scale; - SSAttrs(attr) - .get("kW", kW) - .get("kH", kH) - .get("dW", dW) - .get("dH", dH) - .get("padW", padW) - .get("padH", padH) - .get("dilationW", dilationW) - .get("dilationH", dilationH) - .get("group", group) - .get("deformable_group", deformable_group) - .get("scale", scale) - .get("im2col_step", im2col_step) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& offset = buildATensor(ctx, ins[1]); - const auto& gradOutput = buildATensor(ctx, ins[2]); - - auto gradWeight = buildATensor(ctx, outs[0]); - auto columns = buildATensor(ctx, outs[1]); - auto ones = buildATensor(ctx, outs[2]); - deform_conv_backward_parameters(input, offset, gradOutput, gradWeight, - columns, ones, kW, kH, dW, dH, padW, padH, - dilationW, dilationH, group, deformable_group, - scale, im2col_step); -} - -PARROTS_EXTENSION_REGISTER(deform_conv_forward) - .attr("kW") - .attr("kH") - .attr("dW") - .attr("dH") - .attr("padW") - .attr("padH") - .attr("dilationW") - .attr("dilationH") - .attr("group") - .attr("deformable_group") - .attr("im2col_step") - .input(3) - .output(3) - .apply(deform_conv_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(deform_conv_forward_cuda_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(deform_conv_backward_input) - .attr("kW") - .attr("kH") - .attr("dW") - .attr("dH") - .attr("padW") - .attr("padH") - .attr("dilationW") - .attr("dilationH") - .attr("group") - .attr("deformable_group") - .attr("im2col_step") - .input(3) - .output(4) - .apply(deform_conv_backward_input_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(deform_conv_backward_input_cuda_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(deform_conv_backward_parameters) - .attr("kW") - .attr("kH") - .attr("dW") - .attr("dH") - .attr("padW") - .attr("padH") - .attr("dilationW") - .attr("dilationH") - .attr("group") - .attr("deformable_group") - .attr("scale") - .attr("im2col_step") - .input(3) - .output(3) - .apply(deform_conv_backward_parameters_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(deform_conv_backward_parameters_cuda_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv_pytorch.h deleted file mode 100644 index e0d3d40d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_conv_pytorch.h +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef DEFORM_CONV_PYTORCH_H -#define DEFORM_CONV_PYTORCH_H -#include -using namespace at; - -void deform_conv_forward(Tensor input, Tensor weight, Tensor offset, - Tensor output, Tensor columns, Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step); - -void deform_conv_backward_input(Tensor input, Tensor offset, Tensor gradOutput, - Tensor gradInput, Tensor gradOffset, - Tensor weight, Tensor columns, int kW, int kH, - int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step); - -void deform_conv_backward_parameters(Tensor input, Tensor offset, - Tensor gradOutput, Tensor gradWeight, - Tensor columns, Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, float scale, - int im2col_step); - -#endif // DEFORM_CONV_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool.cpp deleted file mode 100644 index 4fb78a96..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool.cpp +++ /dev/null @@ -1,42 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void deform_roi_pool_forward_impl(Tensor input, Tensor rois, Tensor offset, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, float gamma) { - DISPATCH_DEVICE_IMPL(deform_roi_pool_forward_impl, input, rois, offset, - output, pooled_height, pooled_width, spatial_scale, - sampling_ratio, gamma); -} - -void deform_roi_pool_backward_impl(Tensor grad_output, Tensor input, - Tensor rois, Tensor offset, - Tensor grad_input, Tensor grad_offset, - int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - float gamma) { - DISPATCH_DEVICE_IMPL(deform_roi_pool_backward_impl, grad_output, input, rois, - offset, grad_input, grad_offset, pooled_height, - pooled_width, spatial_scale, sampling_ratio, gamma); -} - -void deform_roi_pool_forward(Tensor input, Tensor rois, Tensor offset, - Tensor output, int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - float gamma) { - deform_roi_pool_forward_impl(input, rois, offset, output, pooled_height, - pooled_width, spatial_scale, sampling_ratio, - gamma); -} - -void deform_roi_pool_backward(Tensor grad_output, Tensor input, Tensor rois, - Tensor offset, Tensor grad_input, - Tensor grad_offset, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, float gamma) { - deform_roi_pool_backward_impl(grad_output, input, rois, offset, grad_input, - grad_offset, pooled_height, pooled_width, - spatial_scale, sampling_ratio, gamma); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool_parrots.cpp deleted file mode 100644 index fc2701d5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool_parrots.cpp +++ /dev/null @@ -1,102 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "deform_roi_pool_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -/*void deform_roi_pool_forward_cuda(Tensor input, Tensor rois, Tensor offset, - * Tensor output, int pooled_height, - * int pooled_width, float spatial_scale, - * int sampling_ratio, float gamma); - */ -void deform_roi_pool_forward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pooled_height; - int pooled_width; - float spatial_scale; - int sampling_ratio; - float gamma; - SSAttrs(attr) - .get("pooled_height", pooled_height) - .get("pooled_width", pooled_width) - .get("spatial_scale", spatial_scale) - .get("sampling_ratio", sampling_ratio) - .get("gamma", gamma) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - const auto& offset = buildATensor(ctx, ins[2]); - - auto output = buildATensor(ctx, outs[0]); - deform_roi_pool_forward_cuda(input, rois, offset, output, pooled_height, - pooled_width, spatial_scale, sampling_ratio, - gamma); -} - -/*void deform_roi_pool_backward_cuda(Tensor grad_output, Tensor input, - * Tensor rois, Tensor offset, - * Tensor grad_input, Tensor grad_offset, - * int pooled_height, int pooled_width, - * float spatial_scale, int sampling_ratio, - * float gamma); - */ -void deform_roi_pool_backward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pooled_height; - int pooled_width; - float spatial_scale; - int sampling_ratio; - float gamma; - - SSAttrs(attr) - .get("pooled_height", pooled_height) - .get("pooled_width", pooled_width) - .get("spatial_scale", spatial_scale) - .get("sampling_ratio", sampling_ratio) - .get("gamma", gamma) - .done(); - - const auto& grad_output = buildATensor(ctx, ins[0]); - const auto& input = buildATensor(ctx, ins[1]); - const auto& rois = buildATensor(ctx, ins[2]); - const auto& offset = buildATensor(ctx, ins[3]); - - auto grad_input = buildATensor(ctx, outs[0]); - auto grad_offset = buildATensor(ctx, outs[1]); - - deform_roi_pool_backward_cuda(grad_output, input, rois, offset, grad_input, - grad_offset, pooled_height, pooled_width, - spatial_scale, sampling_ratio, gamma); -} - -PARROTS_EXTENSION_REGISTER(deform_roi_pool_forward) - .attr("pooled_height") - .attr("pooled_width") - .attr("spatial_scale") - .attr("sampling_ratio") - .attr("gamma") - .input(3) - .output(1) - .apply(deform_roi_pool_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(deform_roi_pool_backward) - .attr("pooled_height") - .attr("pooled_width") - .attr("spatial_scale") - .attr("sampling_ratio") - .attr("gamma") - .input(4) - .output(2) - .apply(deform_roi_pool_backward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool_pytorch.h deleted file mode 100644 index ac0f2c32..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/deform_roi_pool_pytorch.h +++ /dev/null @@ -1,18 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef DEFORM_ROI_POOL_PYTORCH_H -#define DEFORM_ROI_POOL_PYTORCH_H -#include -using namespace at; - -void deform_roi_pool_forward_cuda(Tensor input, Tensor rois, Tensor offset, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, float gamma); - -void deform_roi_pool_backward_cuda(Tensor grad_output, Tensor input, - Tensor rois, Tensor offset, - Tensor grad_input, Tensor grad_offset, - int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - float gamma); -#endif // DEFORM_ROI_POOL_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated.cpp deleted file mode 100644 index 2361b7fb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated.cpp +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -Tensor diff_iou_rotated_sort_vertices_forward_impl(Tensor vertices, Tensor mask, - Tensor num_valid) { - return DISPATCH_DEVICE_IMPL(diff_iou_rotated_sort_vertices_forward_impl, - vertices, mask, num_valid); -} - -Tensor diff_iou_rotated_sort_vertices_forward(Tensor vertices, Tensor mask, - Tensor num_valid) { - return diff_iou_rotated_sort_vertices_forward_impl(vertices, mask, num_valid); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated_parrots.cpp deleted file mode 100644 index b4d3e0e0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated_parrots.cpp +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "diff_iou_rotated_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void diff_iou_rotated_sort_vertices_forward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - at::Tensor boxes, scores, dets; - auto vertices = buildATensor(ctx, ins[0]); - auto mask = buildATensor(ctx, ins[1]); - auto num_valid = buildATensor(ctx, ins[2]); - auto out = - diff_iou_rotated_sort_vertices_forward_cuda(vertices, mask, num_valid); - updateDArray(ctx, out, outs[0]); -} - -PARROTS_EXTENSION_REGISTER(diff_iou_rotated_sort_vertices_forward) - .input(3) - .output(1) - .apply(diff_iou_rotated_sort_vertices_forward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated_pytorch.h deleted file mode 100644 index ef911ecc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/diff_iou_rotated_pytorch.h +++ /dev/null @@ -1,10 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef DIFF_IOU_ROTATED_PYTORCH_H -#define DIFF_IOU_ROTATED_PYTORCH_H -#include -using namespace at; - -Tensor diff_iou_rotated_sort_vertices_forward_cuda(Tensor vertices, Tensor mask, - Tensor num_valid); - -#endif // DIFF_IOU_ROTATED_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss.cpp deleted file mode 100644 index ed0e2186..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss.cpp +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void sigmoid_focal_loss_forward_impl(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - DISPATCH_DEVICE_IMPL(sigmoid_focal_loss_forward_impl, input, target, weight, - output, gamma, alpha); -} - -void sigmoid_focal_loss_backward_impl(Tensor input, Tensor target, - Tensor weight, Tensor grad_input, - float gamma, float alpha) { - DISPATCH_DEVICE_IMPL(sigmoid_focal_loss_backward_impl, input, target, weight, - grad_input, gamma, alpha); -} - -void softmax_focal_loss_forward_impl(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - DISPATCH_DEVICE_IMPL(softmax_focal_loss_forward_impl, input, target, weight, - output, gamma, alpha); -} - -void softmax_focal_loss_backward_impl(Tensor input, Tensor target, - Tensor weight, Tensor buff, - Tensor grad_input, float gamma, - float alpha) { - DISPATCH_DEVICE_IMPL(softmax_focal_loss_backward_impl, input, target, weight, - buff, grad_input, gamma, alpha); -} - -void sigmoid_focal_loss_forward(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - sigmoid_focal_loss_forward_impl(input, target, weight, output, gamma, alpha); -} - -void sigmoid_focal_loss_backward(Tensor input, Tensor target, Tensor weight, - Tensor grad_input, float gamma, float alpha) { - sigmoid_focal_loss_backward_impl(input, target, weight, grad_input, gamma, - alpha); -} - -void softmax_focal_loss_forward(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - softmax_focal_loss_forward_impl(input, target, weight, output, gamma, alpha); -} - -void softmax_focal_loss_backward(Tensor input, Tensor target, Tensor weight, - Tensor buff, Tensor grad_input, float gamma, - float alpha) { - softmax_focal_loss_backward_impl(input, target, weight, buff, grad_input, - gamma, alpha); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss_parrots.cpp deleted file mode 100644 index 044e200c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss_parrots.cpp +++ /dev/null @@ -1,113 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "focal_loss_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void sigmoid_focal_loss_forward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float gamma; - float alpha; - SSAttrs(attr).get("gamma", gamma).get("alpha", alpha).done(); - - // get inputs and outputs - const auto& input = buildATensor(ctx, ins[0]); - const auto& target = buildATensor(ctx, ins[1]); - const auto& weight = buildATensor(ctx, ins[2]); - - auto output = buildATensor(ctx, outs[0]); - - sigmoid_focal_loss_forward_cuda(input, target, weight, output, gamma, alpha); -} - -void sigmoid_focal_loss_backward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float gamma; - float alpha; - SSAttrs(attr).get("gamma", gamma).get("alpha", alpha).done(); - - // get inputs and outputs - const auto& input = buildATensor(ctx, ins[0]); - const auto& target = buildATensor(ctx, ins[1]); - const auto& weight = buildATensor(ctx, ins[2]); - - auto grad_input = buildATensor(ctx, outs[0]); - - sigmoid_focal_loss_backward_cuda(input, target, weight, grad_input, gamma, - alpha); -} - -void softmax_focal_loss_forward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float gamma; - float alpha; - SSAttrs(attr).get("gamma", gamma).get("alpha", alpha).done(); - - // get inputs and outputs - const auto& input = buildATensor(ctx, ins[0]); - const auto& target = buildATensor(ctx, ins[1]); - const auto& weight = buildATensor(ctx, ins[2]); - - auto output = buildATensor(ctx, outs[0]); - softmax_focal_loss_forward_cuda(input, target, weight, output, gamma, alpha); -} - -void softmax_focal_loss_backward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float gamma; - float alpha; - SSAttrs(attr).get("gamma", gamma).get("alpha", alpha).done(); - - // get inputs and outputs - const auto& input = buildATensor(ctx, ins[0]); - const auto& target = buildATensor(ctx, ins[1]); - const auto& weight = buildATensor(ctx, ins[2]); - - auto buff = buildATensor(ctx, outs[0]); - auto grad_input = buildATensor(ctx, outs[1]); - softmax_focal_loss_backward_cuda(input, target, weight, buff, grad_input, - gamma, alpha); -} - -PARROTS_EXTENSION_REGISTER(sigmoid_focal_loss_forward) - .attr("gamma") - .attr("alpha") - .input(3) - .output(1) - .apply(sigmoid_focal_loss_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(sigmoid_focal_loss_backward) - .attr("gamma") - .attr("alpha") - .input(3) - .output(1) - .apply(sigmoid_focal_loss_backward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(softmax_focal_loss_forward) - .attr("gamma") - .attr("alpha") - .input(3) - .output(1) - .apply(softmax_focal_loss_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(softmax_focal_loss_backward) - .attr("gamma") - .attr("alpha") - .input(3) - .output(2) - .apply(softmax_focal_loss_backward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss_pytorch.h deleted file mode 100644 index b7a00c8a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/focal_loss_pytorch.h +++ 
/dev/null @@ -1,21 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef FOCAL_LOSS_PYTORCH_H -#define FOCAL_LOSS_PYTORCH_H -#include -using namespace at; - -void sigmoid_focal_loss_forward_cuda(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha); - -void sigmoid_focal_loss_backward_cuda(Tensor input, Tensor target, - Tensor weight, Tensor grad_input, - float gamma, float alpha); - -void softmax_focal_loss_forward_cuda(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha); - -void softmax_focal_loss_backward_cuda(Tensor input, Tensor target, - Tensor weight, Tensor buff, - Tensor grad_input, float gamma, - float alpha); -#endif // FOCAL_LOSS_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample.cpp deleted file mode 100644 index 9c7098ac..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample.cpp +++ /dev/null @@ -1,34 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/sampling.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void furthest_point_sampling_forward_impl(Tensor points_tensor, - Tensor temp_tensor, Tensor idx_tensor, - int b, int n, int m) { - DISPATCH_DEVICE_IMPL(furthest_point_sampling_forward_impl, points_tensor, - temp_tensor, idx_tensor, b, n, m); -} - -void furthest_point_sampling_with_dist_forward_impl(Tensor points_tensor, - Tensor temp_tensor, - Tensor idx_tensor, int b, - int n, int m) { - DISPATCH_DEVICE_IMPL(furthest_point_sampling_with_dist_forward_impl, - points_tensor, temp_tensor, idx_tensor, b, n, m); -} - -void furthest_point_sampling_forward(Tensor points_tensor, Tensor temp_tensor, - Tensor idx_tensor, int b, int n, int m) { - furthest_point_sampling_forward_impl(points_tensor, temp_tensor, idx_tensor, - b, n, m); -} - -void furthest_point_sampling_with_dist_forward(Tensor points_tensor, - Tensor temp_tensor, - Tensor idx_tensor, int b, int n, - int m) { - furthest_point_sampling_with_dist_forward_impl(points_tensor, temp_tensor, - idx_tensor, b, n, m); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample_parrots.cpp deleted file mode 100644 index 483bfb24..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample_parrots.cpp +++ /dev/null @@ -1,57 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "furthest_point_sample_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void furthest_point_sample_forward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, n, m; - SSAttrs(attr).get("b", b).get("n", n).get("m", m).done(); - - auto points_tensor = buildATensor(ctx, ins[0]); - auto temp_tensor = buildATensor(ctx, ins[1]); - - auto idx_tensor = buildATensor(ctx, outs[0]); - - furthest_point_sampling_forward(points_tensor, temp_tensor, idx_tensor, b, n, - m); -} - -void furthest_point_sampling_with_dist_forward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, n, m; - SSAttrs(attr).get("b", b).get("n", n).get("m", m).done(); - - auto points_tensor = buildATensor(ctx, ins[0]); - auto temp_tensor = buildATensor(ctx, ins[1]); - - auto idx_tensor = buildATensor(ctx, outs[0]); - - furthest_point_sampling_with_dist_forward(points_tensor, temp_tensor, - idx_tensor, b, n, m); -} -PARROTS_EXTENSION_REGISTER(furthest_point_sampling_forward) - .attr("b") - .attr("n") - .attr("m") - .input(2) - .output(1) - .apply(furthest_point_sample_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(furthest_point_sampling_with_dist_forward) - .attr("b") - .attr("n") - .attr("m") - .input(2) - .output(1) - .apply(furthest_point_sampling_with_dist_forward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample_pytorch.h deleted file mode 100644 index 0325cd66..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/furthest_point_sample_pytorch.h +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef FURTHEST_POINT_SAMPLE_PYTORCH_H -#define FURTHEST_POINT_SAMPLE_PYTORCH_H -#include -using namespace at; - -void furthest_point_sampling_forward(Tensor points_tensor, Tensor temp_tensor, - Tensor idx_tensor, int b, int n, int m); - -void furthest_point_sampling_with_dist_forward(Tensor points_tensor, - Tensor temp_tensor, - Tensor idx_tensor, int b, int n, - int m); -#endif // FURTHEST_POINT_SAMPLE_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/fused_bias_leakyrelu.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/fused_bias_leakyrelu.cpp deleted file mode 100644 index 8d411c9d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/fused_bias_leakyrelu.cpp +++ /dev/null @@ -1,119 +0,0 @@ -// Modified from -// https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_bias_act.cpp - -/* -Copyright (c) 2021, NVIDIA Corporation. All rights reserved. - -NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -Augmentation (ADA) -======================================================================= - -1. Definitions - -"Licensor" means any person or entity that distributes its Work. - -"Software" means the original work of authorship made available under -this License. - -"Work" means the Software and any additions to or derivative works of -the Software that are made available under this License. 
- -The terms "reproduce," "reproduction," "derivative works," and -"distribution" have the meaning as provided under U.S. copyright law; -provided, however, that for the purposes of this License, derivative -works shall not include works that remain separable from, or merely -link (or bind by name) to the interfaces of, the Work. - -Works, including the Software, are "made available" under this License -by including in or with the Work either (a) a copyright notice -referencing the applicability of this License to the Work, or (b) a -copy of this License. - -2. License Grants - - 2.1 Copyright Grant. Subject to the terms and conditions of this - License, each Licensor grants to you a perpetual, worldwide, - non-exclusive, royalty-free, copyright license to reproduce, - prepare derivative works of, publicly display, publicly perform, - sublicense and distribute its Work and any resulting derivative - works in any form. - -3. Limitations - - 3.1 Redistribution. You may reproduce or distribute the Work only - if (a) you do so under this License, (b) you include a complete - copy of this License with your distribution, and (c) you retain - without modification any copyright, patent, trademark, or - attribution notices that are present in the Work. - - 3.2 Derivative Works. You may specify that additional or different - terms apply to the use, reproduction, and distribution of your - derivative works of the Work ("Your Terms") only if (a) Your Terms - provide that the use limitation in Section 3.3 applies to your - derivative works, and (b) you identify the specific derivative - works that are subject to Your Terms. Notwithstanding Your Terms, - this License (including the redistribution requirements in Section - 3.1) will continue to apply to the Work itself. - - 3.3 Use Limitation. The Work and any derivative works thereof only - may be used or intended for use non-commercially. Notwithstanding - the foregoing, NVIDIA and its affiliates may use the Work and any - derivative works commercially. As used herein, "non-commercially" - means for research or evaluation purposes only. - - 3.4 Patent Claims. If you bring or threaten to bring a patent claim - against any Licensor (including any claim, cross-claim or - counterclaim in a lawsuit) to enforce any patents that you allege - are infringed by any Work, then your rights under this License from - such Licensor (including the grant in Section 2.1) will terminate - immediately. - - 3.5 Trademarks. This License does not grant any rights to use any - Licensor’s or its affiliates’ names, logos, or trademarks, except - as necessary to reproduce the notices described in this License. - - 3.6 Termination. If you violate any term of this License, then your - rights under this License (including the grant in Section 2.1) will - terminate immediately. - -4. Disclaimer of Warranty. - -THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -THIS LICENSE. - -5. Limitation of Liability. 
- -EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -(INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -THE POSSIBILITY OF SUCH DAMAGES. - -======================================================================= -*/ - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -torch::Tensor fused_bias_leakyrelu_op_impl(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, int act, - int grad, float alpha, float scale) { - return DISPATCH_DEVICE_IMPL(fused_bias_leakyrelu_op_impl, input, bias, refer, - act, grad, alpha, scale); -} - -torch::Tensor fused_bias_leakyrelu(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, int act, - int grad, float alpha, float scale) { - return fused_bias_leakyrelu_op_impl(input, bias, refer, act, grad, alpha, - scale); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/fused_bias_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/fused_bias_parrots.cpp deleted file mode 100644 index 47409ad2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/fused_bias_parrots.cpp +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include - -#include -#include -#include -using namespace at; -using namespace parrots; - -torch::Tensor fused_bias_leakyrelu(const torch::Tensor &input, - const torch::Tensor &bias, - const torch::Tensor &refer, int act, - int grad, float alpha, float scale); - -void fused_bias_leakyrelu_parrots(CudaContext &ctx, const SSElement &attr, - const OperatorBase::in_list_t &ins, - OperatorBase::out_list_t &outs) { - int act, grad; - float alpha, scale; - SSAttrs(attr) - .get("act", act) - .get("grad", grad) - .get("alpha", alpha) - .get("scale", scale) - .done(); - const auto &input = buildATensor(ctx, ins[0]); - const auto &bias = buildATensor(ctx, ins[1]); - const auto &refer = buildATensor(ctx, ins[2]); - auto out = fused_bias_leakyrelu(input, bias, refer, act, grad, alpha, scale); - updateDArray(ctx, out, outs[0]); -} - -PARROTS_EXTENSION_REGISTER(fused_bias_leakyrelu) - .attr("act") - .attr("grad") - .attr("alpha") - .attr("scale") - .input(3) - .output(1) - .apply(fused_bias_leakyrelu_parrots) - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points.cpp deleted file mode 100644 index b8fb0200..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points.cpp +++ /dev/null @@ -1,30 +0,0 @@ -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void gather_points_forward_impl(int b, int c, int n, int npoints, - const Tensor points, const Tensor idx, - Tensor out) { - DISPATCH_DEVICE_IMPL(gather_points_forward_impl, b, c, n, npoints, points, - idx, out); -} - -void gather_points_backward_impl(int b, int c, int n, int npoints, - const Tensor grad_out, const Tensor idx, - Tensor 
grad_points) { - DISPATCH_DEVICE_IMPL(gather_points_backward_impl, b, c, n, npoints, grad_out, - idx, grad_points); -} - -void gather_points_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor out_tensor, int b, int c, int n, - int npoints) { - gather_points_forward_impl(b, c, n, npoints, points_tensor, idx_tensor, - out_tensor); -} - -void gather_points_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor grad_points_tensor, int b, int c, int n, - int npoints) { - gather_points_backward_impl(b, c, n, npoints, grad_out_tensor, idx_tensor, - grad_points_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points_parrots.cpp deleted file mode 100644 index 1d2d9e12..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points_parrots.cpp +++ /dev/null @@ -1,71 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "gather_points_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void gather_points_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, c, n, npoints; - SSAttrs(attr) - .get("b", b) - .get("c", c) - .get("n", n) - .get("npoints", npoints) - .done(); - - auto points_tensor = buildATensor(ctx, ins[0]); - auto idx_tensor = buildATensor(ctx, ins[1]); - - auto out_tensor = buildATensor(ctx, outs[0]); - - gather_points_forward(points_tensor, idx_tensor, out_tensor, b, c, n, - npoints); -} - -void gather_points_backward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, c, n, npoints; - SSAttrs(attr) - .get("b", b) - .get("c", c) - .get("n", n) - .get("npoints", npoints) - .done(); - - auto grad_out_tensor = buildATensor(ctx, ins[0]); - auto idx_tensor = buildATensor(ctx, ins[1]); - - auto grad_points_tensor = buildATensor(ctx, outs[0]); - - gather_points_backward(grad_out_tensor, idx_tensor, grad_points_tensor, b, c, - n, npoints); -} - -PARROTS_EXTENSION_REGISTER(gather_points_forward) - .attr("b") - .attr("c") - .attr("n") - .attr("npoints") - .input(2) - .output(1) - .apply(gather_points_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(gather_points_backward) - .attr("b") - .attr("c") - .attr("n") - .attr("npoints") - .input(2) - .output(1) - .apply(gather_points_backward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points_pytorch.h deleted file mode 100644 index 1689ae6a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/gather_points_pytorch.h +++ /dev/null @@ -1,13 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef GATHER_POINTS_PYTORCH_H -#define GATHER_POINTS_PYTORCH_H -#include -using namespace at; - -void gather_points_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor out_tensor, int b, int c, int n, int npoints); - -void gather_points_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor grad_points_tensor, int b, int c, int n, - int npoints); -#endif // GATHER_POINTS_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points.cpp deleted file mode 100644 index cdd190d4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points.cpp +++ /dev/null @@ -1,34 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/group_points.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void group_points_forward_impl(int b, int c, int n, int npoints, int nsample, - const Tensor points, const Tensor idx, - Tensor out) { - DISPATCH_DEVICE_IMPL(group_points_forward_impl, b, c, n, npoints, nsample, - points, idx, out); -} - -void group_points_backward_impl(int b, int c, int n, int npoints, int nsample, - const Tensor grad_out, const Tensor idx, - Tensor grad_points) { - DISPATCH_DEVICE_IMPL(group_points_backward_impl, b, c, n, npoints, nsample, - grad_out, idx, grad_points); -} - -void group_points_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor out_tensor, int b, int c, int n, int npoints, - int nsample) { - DISPATCH_DEVICE_IMPL(group_points_forward_impl, b, c, n, npoints, nsample, - points_tensor, idx_tensor, out_tensor); -} - -void group_points_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor grad_points_tensor, int b, int c, int n, - int npoints, int nsample) { - group_points_backward_impl(b, c, n, npoints, nsample, grad_out_tensor, - idx_tensor, grad_points_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points_parrots.cpp deleted file mode 100644 index 282c01a8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points_parrots.cpp +++ /dev/null @@ -1,72 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "group_points_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void group_points_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, c, n, npoints, nsample; - SSAttrs(attr) - .get("b", b) - .get("c", c) - .get("n", n) - .get("npoints", npoints) - .get("nsample", nsample) - .done(); - auto points_tensor = buildATensor(ctx, ins[0]); - auto idx_tensor = buildATensor(ctx, ins[1]); - - auto out_tensor = buildATensor(ctx, outs[0]); - - group_points_forward(points_tensor, idx_tensor, out_tensor, b, c, n, npoints, - nsample); -} - -void group_points_backward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, c, n, npoints, nsample; - SSAttrs(attr) - .get("b", b) - .get("c", c) - .get("n", n) - .get("npoints", npoints) - .get("nsample", nsample) - .done(); - auto grad_out_tensor = buildATensor(ctx, ins[0]); - auto idx_tensor = buildATensor(ctx, ins[1]); - - auto grad_points_tensor = buildATensor(ctx, outs[0]); - - group_points_backward(grad_out_tensor, idx_tensor, grad_points_tensor, b, c, - n, npoints, nsample); -} - -PARROTS_EXTENSION_REGISTER(group_points_forward) - .attr("b") - .attr("c") - .attr("n") - .attr("npoints") - .attr("nsample") - .input(2) - .output(1) - .apply(group_points_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(group_points_backward) - .attr("b") - .attr("c") - .attr("n") - .attr("npoints") - .attr("nsample") - .input(2) - .output(1) - .apply(group_points_backward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points_pytorch.h deleted file mode 100644 index e704ab07..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/group_points_pytorch.h +++ /dev/null @@ -1,15 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef GROUP_POINTS_PYTORCH_H -#define GROUP_POINTS_PYTORCH_H -#include -using namespace at; - -void group_points_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor out_tensor, int b, int c, int n, int npoints, - int nsample); - -void group_points_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor grad_points_tensor, int b, int c, int n, - int npoints, int nsample); - -#endif // GROUP_POINTS_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/info.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/info.cpp deleted file mode 100644 index a08d227d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/info.cpp +++ /dev/null @@ -1,56 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/vision.cpp -#include "pytorch_cpp_helper.hpp" - -#ifdef MMCV_WITH_CUDA -#ifndef HIP_DIFF -#include -int get_cudart_version() { return CUDART_VERSION; } -#endif -#endif - -std::string get_compiling_cuda_version() { -#ifdef MMCV_WITH_CUDA -#ifndef HIP_DIFF - std::ostringstream oss; - // copied from - // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/cuda/detail/CUDAHooks.cpp#L231 - auto printCudaStyleVersion = [&](int v) { - oss << (v / 1000) << "." 
<< (v / 10 % 100); - if (v % 10 != 0) { - oss << "." << (v % 10); - } - }; - printCudaStyleVersion(get_cudart_version()); - return oss.str(); -#else - return std::string("rocm not available"); -#endif -#else - return std::string("not available"); -#endif -} - -// similar to -// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Version.cpp -std::string get_compiler_version() { - std::ostringstream ss; -#if defined(__GNUC__) -#ifndef __clang__ - { ss << "GCC " << __GNUC__ << "." << __GNUC_MINOR__; } -#endif -#endif - -#if defined(__clang_major__) - { - ss << "clang " << __clang_major__ << "." << __clang_minor__ << "." - << __clang_patchlevel__; - } -#endif - -#if defined(_MSC_VER) - { ss << "MSVC " << _MSC_FULL_VER; } -#endif - return ss.str(); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d.cpp deleted file mode 100644 index 5ef9c7e8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d.cpp +++ /dev/null @@ -1,135 +0,0 @@ -// Modified from -// https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/iou3d_nms/src/iou3d_nms.cpp - -/* -3D IoU Calculation and Rotated NMS(modified from 2D NMS written by others) -Written by Shaoshuai Shi -All Rights Reserved 2019-2020. -*/ - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -const int THREADS_PER_BLOCK_NMS = sizeof(unsigned long long) * 8; - -void iou3d_boxes_overlap_bev_forward_impl(const int num_a, const Tensor boxes_a, - const int num_b, const Tensor boxes_b, - Tensor ans_overlap) { - DISPATCH_DEVICE_IMPL(iou3d_boxes_overlap_bev_forward_impl, num_a, boxes_a, - num_b, boxes_b, ans_overlap); -} - -void iou3d_nms3d_forward_impl(const Tensor boxes, unsigned long long *mask, - int boxes_num, float nms_overlap_thresh) { - DISPATCH_DEVICE_IMPL(iou3d_nms3d_forward_impl, boxes, mask, boxes_num, - nms_overlap_thresh); -} - -void iou3d_nms3d_normal_forward_impl(const Tensor boxes, - unsigned long long *mask, int boxes_num, - float nms_overlap_thresh) { - DISPATCH_DEVICE_IMPL(iou3d_nms3d_normal_forward_impl, boxes, mask, boxes_num, - nms_overlap_thresh); -} - -void iou3d_boxes_overlap_bev_forward(Tensor boxes_a, Tensor boxes_b, - Tensor ans_overlap) { - // params boxes: (N, 7) [x, y, z, dx, dy, dz, heading] - // params boxes_b: (M, 5) - // params ans_overlap: (N, M) - int num_a = boxes_a.size(0); - int num_b = boxes_b.size(0); - - iou3d_boxes_overlap_bev_forward_impl(num_a, boxes_a, num_b, boxes_b, - ans_overlap); -} - -void iou3d_nms3d_forward(Tensor boxes, Tensor keep, Tensor keep_num, - float nms_overlap_thresh) { - // params boxes: (N, 7) [x, y, z, dx, dy, dz, heading] - // params keep: (N) - CHECK_CONTIGUOUS(boxes); - CHECK_CONTIGUOUS(keep); - - int boxes_num = boxes.size(0); - int64_t *keep_data = keep.data_ptr(); - int64_t *keep_num_data = keep_num.data_ptr(); - - const int col_blocks = - (boxes_num + THREADS_PER_BLOCK_NMS - 1) / THREADS_PER_BLOCK_NMS; - - Tensor mask = - at::empty({boxes_num, col_blocks}, boxes.options().dtype(at::kLong)); - unsigned long long *mask_data = - (unsigned long long *)mask.data_ptr(); - iou3d_nms3d_forward_impl(boxes, mask_data, boxes_num, nms_overlap_thresh); - - at::Tensor mask_cpu = mask.to(at::kCPU); - unsigned long long *mask_host = - (unsigned long long *)mask_cpu.data_ptr(); - - std::vector remv_cpu(col_blocks); - memset(&remv_cpu[0], 0, sizeof(unsigned long long) * col_blocks); - - int num_to_keep = 0; - - 
for (int i = 0; i < boxes_num; i++) { - int nblock = i / THREADS_PER_BLOCK_NMS; - int inblock = i % THREADS_PER_BLOCK_NMS; - - if (!(remv_cpu[nblock] & (1ULL << inblock))) { - keep_data[num_to_keep++] = i; - unsigned long long *p = &mask_host[0] + i * col_blocks; - for (int j = nblock; j < col_blocks; j++) { - remv_cpu[j] |= p[j]; - } - } - *keep_num_data = num_to_keep; - } -} - -void iou3d_nms3d_normal_forward(Tensor boxes, Tensor keep, Tensor keep_num, - float nms_overlap_thresh) { - // params boxes: (N, 7) [x, y, z, dx, dy, dz, heading] - // params keep: (N) - - CHECK_CONTIGUOUS(boxes); - CHECK_CONTIGUOUS(keep); - - int boxes_num = boxes.size(0); - int64_t *keep_data = keep.data_ptr(); - int64_t *keep_num_data = keep_num.data_ptr(); - - const int col_blocks = - (boxes_num + THREADS_PER_BLOCK_NMS - 1) / THREADS_PER_BLOCK_NMS; - - Tensor mask = - at::empty({boxes_num, col_blocks}, boxes.options().dtype(at::kLong)); - unsigned long long *mask_data = - (unsigned long long *)mask.data_ptr(); - iou3d_nms3d_normal_forward_impl(boxes, mask_data, boxes_num, - nms_overlap_thresh); - - at::Tensor mask_cpu = mask.to(at::kCPU); - unsigned long long *mask_host = - (unsigned long long *)mask_cpu.data_ptr(); - - std::vector remv_cpu(col_blocks); - memset(&remv_cpu[0], 0, sizeof(unsigned long long) * col_blocks); - int num_to_keep = 0; - - for (int i = 0; i < boxes_num; i++) { - int nblock = i / THREADS_PER_BLOCK_NMS; - int inblock = i % THREADS_PER_BLOCK_NMS; - - if (!(remv_cpu[nblock] & (1ULL << inblock))) { - keep_data[num_to_keep++] = i; - unsigned long long *p = &mask_host[0] + i * col_blocks; - for (int j = nblock; j < col_blocks; j++) { - remv_cpu[j] |= p[j]; - } - } - } - - *keep_num_data = num_to_keep; -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d_parrots.cpp deleted file mode 100644 index 20e288ae..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d_parrots.cpp +++ /dev/null @@ -1,70 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "iou3d_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void iou3d_boxes_overlap_bev_forward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto boxes_a = buildATensor(ctx, ins[0]); - auto boxes_b = buildATensor(ctx, ins[1]); - - auto ans_iou = buildATensor(ctx, outs[0]); - - iou3d_boxes_overlap_bev_forward(boxes_a, boxes_b, ans_iou); -} - -void iou3d_nms3d_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float nms_overlap_thresh; - SSAttrs(attr).get("nms_overlap_thresh", nms_overlap_thresh).done(); - - auto boxes = buildATensor(ctx, ins[0]); - - auto keep = buildATensor(ctx, outs[0]); - auto keep_num = buildATensor(ctx, outs[1]); - - iou3d_nms3d_forward(boxes, keep, keep_num, nms_overlap_thresh); -} - -void iou3d_nms3d_normal_forward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float nms_overlap_thresh; - SSAttrs(attr).get("nms_overlap_thresh", nms_overlap_thresh).done(); - - auto boxes = buildATensor(ctx, ins[0]); - - auto keep = buildATensor(ctx, outs[0]); - auto keep_num = buildATensor(ctx, outs[1]); - - iou3d_nms3d_normal_forward(boxes, keep, keep_num, nms_overlap_thresh); -} - -PARROTS_EXTENSION_REGISTER(iou3d_boxes_overlap_bev_forward) - .input(2) - .output(1) - .apply(iou3d_boxes_overlap_bev_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(iou3d_nms3d_forward) - .attr("nms_overlap_thresh") - .input(1) - .output(2) - .apply(iou3d_nms3d_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(iou3d_nms3d_normal_forward) - .attr("nms_overlap_thresh") - .input(1) - .output(2) - .apply(iou3d_nms3d_normal_forward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d_pytorch.h deleted file mode 100644 index 76170edc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/iou3d_pytorch.h +++ /dev/null @@ -1,16 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef IOU_3D_PYTORCH_H -#define IOU_3D_PYTORCH_H -#include -using namespace at; - -void iou3d_boxes_overlap_bev_forward(Tensor boxes_a, Tensor boxes_b, - Tensor ans_overlap); - -void iou3d_nms3d_forward(Tensor boxes, Tensor keep, Tensor keep_num, - float nms_overlap_thresh); - -void iou3d_nms3d_normal_forward(Tensor boxes, Tensor keep, Tensor keep_num, - float nms_overlap_thresh); - -#endif // IOU_3D_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn.cpp deleted file mode 100644 index b4be9428..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn.cpp +++ /dev/null @@ -1,17 +0,0 @@ -// Modified from -// https://github.com/CVMI-Lab/PAConv/tree/main/scene_seg/lib/pointops/src/knnquery_heap - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void knn_forward_impl(int b, int n, int m, int nsample, const Tensor xyz, - const Tensor new_xyz, Tensor idx, Tensor dist2) { - DISPATCH_DEVICE_IMPL(knn_forward_impl, b, n, m, nsample, xyz, new_xyz, idx, - dist2); -} - -void knn_forward(Tensor xyz_tensor, Tensor new_xyz_tensor, Tensor idx_tensor, - Tensor dist2_tensor, int b, int n, int m, int nsample) { - knn_forward_impl(b, n, m, nsample, xyz_tensor, new_xyz_tensor, idx_tensor, - dist2_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn_parrots.cpp deleted file mode 100644 index 585b8464..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn_parrots.cpp +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "knn_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void knn_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, n, m, nsample; - SSAttrs(attr) - .get("b", b) - .get("n", n) - .get("m", m) - .get("nsample", nsample) - .done(); - - auto xyz_tensor = buildATensor(ctx, ins[0]); - auto new_xyz_tensor = buildATensor(ctx, ins[1]); - - auto idx_tensor = buildATensor(ctx, outs[0]); - auto dist2_tensor = buildATensor(ctx, outs[1]); - - knn_forward(xyz_tensor, new_xyz_tensor, idx_tensor, dist2_tensor, b, n, m, - nsample); -} - -PARROTS_EXTENSION_REGISTER(knn_forward) - .attr("b") - .attr("n") - .attr("m") - .attr("nsample") - .input(2) - .output(2) - .apply(knn_forward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn_pytorch.h deleted file mode 100644 index b0875f83..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/knn_pytorch.h +++ /dev/null @@ -1,9 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved
-#ifndef KNN_PYTORCH_H
-#define KNN_PYTORCH_H
-#include
-using namespace at;
-
-void knn_forward(Tensor xyz_tensor, Tensor new_xyz_tensor, Tensor idx_tensor,
-                 Tensor dist2_tensor, int b, int n, int m, int nsample);
-#endif // KNN_PYTORCH_H
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d.cpp
deleted file mode 100644
index 59039253..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d.cpp
+++ /dev/null
@@ -1,33 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#include "pytorch_cpp_helper.hpp"
-#include "pytorch_device_registry.hpp"
-
-void masked_im2col_forward_impl(const Tensor im, const Tensor mask_h_idx,
-                                const Tensor mask_w_idx, Tensor col,
-                                const int kernel_h, const int kernel_w,
-                                const int pad_h, const int pad_w) {
-  DISPATCH_DEVICE_IMPL(masked_im2col_forward_impl, im, mask_h_idx, mask_w_idx,
-                       col, kernel_h, kernel_w, pad_h, pad_w);
-}
-
-void masked_col2im_forward_impl(const Tensor col, const Tensor mask_h_idx,
-                                const Tensor mask_w_idx, Tensor im, int height,
-                                int width, int channels) {
-  DISPATCH_DEVICE_IMPL(masked_col2im_forward_impl, col, mask_h_idx, mask_w_idx,
-                       im, height, width, channels);
-}
-
-void masked_im2col_forward(const Tensor im, const Tensor mask_h_idx,
-                           const Tensor mask_w_idx, Tensor col,
-                           const int kernel_h, const int kernel_w,
-                           const int pad_h, const int pad_w) {
-  masked_im2col_forward_impl(im, mask_h_idx, mask_w_idx, col, kernel_h,
-                             kernel_w, pad_h, pad_w);
-}
-
-void masked_col2im_forward(const Tensor col, const Tensor mask_h_idx,
-                           const Tensor mask_w_idx, Tensor im, int height,
-                           int width, int channels) {
-  masked_col2im_forward_impl(col, mask_h_idx, mask_w_idx, im, height, width,
-                             channels);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d_parrots.cpp
deleted file mode 100644
index 39f19740..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d_parrots.cpp
+++ /dev/null
@@ -1,72 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#include
-#include
-#include
-
-#include "masked_conv2d_pytorch.h"
-
-using namespace parrots;
-
-#ifdef MMCV_WITH_CUDA
-void masked_im2col_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr,
-                                        const OperatorBase::in_list_t& ins,
-                                        OperatorBase::out_list_t& outs) {
-  // im: (n, ic, h, w), kernel size (kh, kw)
-  // kernel: (oc, ic * kh * kw), col: (kh * kw * ic, ow * oh)
-  int kernel_h, kernel_w, pad_h, pad_w;
-  SSAttrs(attr)
-      .get("kernel_h", kernel_h)
-      .get("kernel_w", kernel_w)
-      .get("pad_h", pad_h)
-      .get("pad_w", pad_w)
-      .done();
-
-  const auto& im = buildATensor(ctx, ins[0]);
-  const auto& mask_h_idx = buildATensor(ctx, ins[1]);
-  const auto& mask_w_idx = buildATensor(ctx, ins[2]);
-
-  auto col = buildATensor(ctx, outs[0]);
-  masked_im2col_forward_cuda(im, mask_h_idx, mask_w_idx, col, kernel_h,
-                             kernel_w, pad_h, pad_w);
-}
-
-void masked_col2im_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr,
-                                        const OperatorBase::in_list_t& ins,
-                                        OperatorBase::out_list_t& outs) {
-  // im: (n, ic, h, w), kernel size (kh, kw)
-  // kernel: (oc, ic * kh * kh), col: (kh * kw * ic, ow * oh)
-  int height, width, channels;
-  SSAttrs(attr)
-      .get("height", height)
-      .get("width", width)
-      .get("channels", channels)
-      .done();
-
-  const auto& col = buildATensor(ctx, ins[0]);
-  const auto& mask_h_idx = buildATensor(ctx, ins[1]);
-  const auto& mask_w_idx = buildATensor(ctx, ins[2]);
-
-  auto im = buildATensor(ctx, outs[0]);
-  masked_col2im_forward_cuda(col, mask_h_idx, mask_w_idx, im, height, width,
-                             channels);
-}
-
-PARROTS_EXTENSION_REGISTER(masked_im2col_forward)
-    .attr("kernel_h")
-    .attr("kernel_w")
-    .attr("pad_h")
-    .attr("pad_w")
-    .input(3)
-    .output(1)
-    .apply(masked_im2col_forward_cuda_parrots)
-    .done();
-
-PARROTS_EXTENSION_REGISTER(masked_col2im_forward)
-    .attr("height")
-    .attr("width")
-    .attr("channels")
-    .input(3)
-    .output(1)
-    .apply(masked_col2im_forward_cuda_parrots)
-    .done();
-#endif
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d_pytorch.h
deleted file mode 100644
index 36d5643f..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/masked_conv2d_pytorch.h
+++ /dev/null
@@ -1,15 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#ifndef MASKED_CONV2D_PYTORCH_H
-#define MASKED_CONV2D_PYTORCH_H
-#include
-using namespace at;
-
-void masked_im2col_forward_cuda(const Tensor im, const Tensor mask_h_idx,
-                                const Tensor mask_w_idx, Tensor col,
-                                const int kernel_h, const int kernel_w,
-                                const int pad_h, const int pad_w);
-
-void masked_col2im_forward_cuda(const Tensor col, const Tensor mask_h_idx,
-                                const Tensor mask_w_idx, Tensor im, int height,
-                                int width, int channels);
-#endif // MASKED_CONV2D_PYTORCH_H
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons.cpp
deleted file mode 100644
index 8ff996dc..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons.cpp
+++ /dev/null
@@ -1,11 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#include "pytorch_cpp_helper.hpp"
-#include "pytorch_device_registry.hpp"
-
-void min_area_polygons_impl(const Tensor pointsets, Tensor polygons) {
-  DISPATCH_DEVICE_IMPL(min_area_polygons_impl, pointsets, polygons);
-}
-
-void min_area_polygons(const Tensor pointsets, Tensor polygons) {
-  min_area_polygons_impl(pointsets, polygons);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons_parrots.cpp
deleted file mode 100644
index d9e4ff4b..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons_parrots.cpp
+++ /dev/null
@@ -1,26 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#include
-#include
-#include
-
-#include "min_area_polygons_pytorch.h"
-
-using namespace parrots;
-
-#ifdef MMCV_WITH_CUDA
-void min_area_polygons_cuda_parrots(CudaContext& ctx, const SSElement& attr,
-                                    const OperatorBase::in_list_t& ins,
-                                    OperatorBase::out_list_t& outs) {
-  auto pointsets = buildATensor(ctx, ins[0]);
-
-  auto polygons = buildATensor(ctx, outs[0]);
-  min_area_polygons(pointsets, polygons);
-}
-
-PARROTS_EXTENSION_REGISTER(min_area_polygons)
-    .input(1)
-    .output(1)
-    .apply(min_area_polygons_cuda_parrots)
-    .done();
-
-#endif
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons_pytorch.h
deleted file mode 100644
index 1df27641..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/min_area_polygons_pytorch.h
+++ /dev/null
@@ -1,9 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#ifndef MIN_AREA_POLYGONS_PYTORCH_H
-#define MIN_AREA_POLYGONS_PYTORCH_H
-#include
-using namespace at;
-
-void min_area_polygons(const Tensor pointsets, Tensor polygons);
-
-#endif // MIN_AREA_POLYGONS_PYTORCH_H
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv.cpp
deleted file mode 100644
index 12b538a0..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv.cpp
+++ /dev/null
@@ -1,237 +0,0 @@
-// Copyright (c) OpenMMLab.
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void modulated_deformable_im2col_impl( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col) { - DISPATCH_DEVICE_IMPL(modulated_deformable_im2col_impl, data_im, data_offset, - data_mask, batch_size, channels, height_im, width_im, - height_col, width_col, kernel_h, kernel_w, pad_h, pad_w, - stride_h, stride_w, dilation_h, dilation_w, - deformable_group, data_col); -} - -void modulated_deformable_col2im_impl( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im) { - DISPATCH_DEVICE_IMPL(modulated_deformable_col2im_impl, data_col, data_offset, - data_mask, batch_size, channels, height_im, width_im, - height_col, width_col, kernel_h, kernel_w, pad_h, pad_w, - stride_h, stride_w, dilation_h, dilation_w, - deformable_group, grad_im); -} - -void modulated_deformable_col2im_coord_impl( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask) { - DISPATCH_DEVICE_IMPL(modulated_deformable_col2im_coord_impl, data_col, - data_im, data_offset, data_mask, batch_size, channels, - height_im, width_im, height_col, width_col, kernel_h, - kernel_w, pad_h, pad_w, stride_h, stride_w, dilation_h, - dilation_w, deformable_group, grad_offset, grad_mask); -} - -void modulated_deform_conv_forward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor output, Tensor columns, int kernel_h, int kernel_w, - const int stride_h, const int stride_w, const int pad_h, const int pad_w, - const int dilation_h, const int dilation_w, const int group, - const int deformable_group, const bool with_bias) { - at::DeviceGuard guard(input.device()); - - const int batch = input.size(0); - const int channels = input.size(1); - const int height = input.size(2); - const int width = input.size(3); - - const int channels_out = weight.size(0); - const int channels_kernel = weight.size(1); - const int kernel_h_ = weight.size(2); - const int kernel_w_ = weight.size(3); - - if (kernel_h_ != kernel_h || kernel_w_ != kernel_w) - AT_ERROR("Input shape and kernel shape won't match: (%d x %d vs %d x %d).", - kernel_h_, kernel_w, kernel_h_, kernel_w_); - if (channels != channels_kernel * group) - AT_ERROR("Input shape and kernel channels won't match: (%d vs %d).", - channels, channels_kernel * group); - - const int height_out = - (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; - const int 
width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < height_out * width_out) { - // Resize plane and fill with ones... - ones = at::ones({height_out, width_out}, input.options()); - } - - // resize output - output = output.view({batch, channels_out, height_out, width_out}).zero_(); - // resize temporary columns - columns = - at::zeros({channels * kernel_h * kernel_w, 1 * height_out * width_out}, - input.options()); - - output = output.view({output.size(0), group, output.size(1) / group, - output.size(2), output.size(3)}); - - for (int b = 0; b < batch; b++) { - modulated_deformable_im2col_impl( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); - - // divide into group - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - - for (int g = 0; g < group; g++) { - output[b][g] = output[b][g] - .flatten(1) - .addmm_(weight[g].flatten(1), columns[g]) - .view_as(output[b][g]); - } - - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - } - - output = output.view({output.size(0), output.size(1) * output.size(2), - output.size(3), output.size(4)}); - - if (with_bias) { - output += bias.view({1, bias.size(0), 1, 1}); - } -} - -void modulated_deform_conv_backward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor columns, Tensor grad_input, Tensor grad_weight, - Tensor grad_bias, Tensor grad_offset, Tensor grad_mask, Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias) { - at::DeviceGuard guard(input.device()); - - const int batch = input.size(0); - const int channels = input.size(1); - const int height = input.size(2); - const int width = input.size(3); - - const int channels_kernel = weight.size(1); - const int kernel_h_ = weight.size(2); - const int kernel_w_ = weight.size(3); - if (kernel_h_ != kernel_h || kernel_w_ != kernel_w) - AT_ERROR("Input shape and kernel shape won't match: (%d x %d vs %d x %d).", - kernel_h_, kernel_w, kernel_h_, kernel_w_); - if (channels != channels_kernel * group) - AT_ERROR("Input shape and kernel channels won't match: (%d vs %d).", - channels, channels_kernel * group); - - const int height_out = - (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; - const int width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < height_out * width_out) { - // Resize plane and fill with ones... 
- ones = at::ones({height_out, width_out}, input.options()); - } - - grad_input = grad_input.view({batch, channels, height, width}); - columns = at::zeros({channels * kernel_h * kernel_w, height_out * width_out}, - input.options()); - - grad_output = - grad_output.view({grad_output.size(0), group, grad_output.size(1) / group, - grad_output.size(2), grad_output.size(3)}); - - for (int b = 0; b < batch; b++) { - // divide int group - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - - for (int g = 0; g < group; g++) { - columns[g].addmm_(weight[g].flatten(1).transpose(0, 1), - grad_output[b][g].flatten(1), 0.0f, 1.0f); - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - - // gradient w.r.t. input coordinate data - modulated_deformable_col2im_coord_impl( - columns, input[b], offset[b], mask[b], 1, channels, height, width, - height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, deformable_group, grad_offset[b], - grad_mask[b]); - // gradient w.r.t. input data - modulated_deformable_col2im_impl( - columns, offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, grad_input[b]); - - // gradient w.r.t. weight, dWeight should accumulate across the batch and - // group - modulated_deformable_im2col_impl( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); - - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - grad_weight = grad_weight.view({group, grad_weight.size(0) / group, - grad_weight.size(1), grad_weight.size(2), - grad_weight.size(3)}); - if (with_bias) - grad_bias = grad_bias.view({group, grad_bias.size(0) / group}); - - for (int g = 0; g < group; g++) { - grad_weight[g] = - grad_weight[g] - .flatten(1) - .addmm_(grad_output[b][g].flatten(1), columns[g].transpose(0, 1)) - .view_as(grad_weight[g]); - if (with_bias) { - grad_bias[g] = - grad_bias[g] - .view({-1, 1}) - .addmm_(grad_output[b][g].flatten(1), ones.view({-1, 1})) - .view(-1); - } - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - grad_weight = grad_weight.view({grad_weight.size(0) * grad_weight.size(1), - grad_weight.size(2), grad_weight.size(3), - grad_weight.size(4)}); - if (with_bias) - grad_bias = grad_bias.view({grad_bias.size(0) * grad_bias.size(1)}); - } - grad_output = grad_output.view({grad_output.size(0) * grad_output.size(1), - grad_output.size(2), grad_output.size(3), - grad_output.size(4)}); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv_parrots.cpp deleted file mode 100644 index 2ef7efff..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv_parrots.cpp +++ /dev/null @@ -1,199 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "modulated_deform_conv_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void modulated_deform_conv_forward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kernel_h, kernel_w, stride_h, stride_w, pad_h, pad_w, dilation_h, - dilation_w, group, deformable_group, with_bias; - SSAttrs(attr) - .get("kernel_h", kernel_h) - .get("kernel_w", kernel_w) - .get("stride_h", stride_h) - .get("stride_w", stride_w) - .get("pad_h", pad_h) - .get("pad_w", pad_w) - .get("dilation_h", dilation_h) - .get("dilation_w", dilation_w) - .get("group", group) - .get("deformable_group", deformable_group) - .get("with_bias", with_bias) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& weight = buildATensor(ctx, ins[1]); - const auto& bias = buildATensor(ctx, ins[2]); - const auto& ones = buildATensor(ctx, ins[3]); - const auto& offset = buildATensor(ctx, ins[4]); - const auto& mask = buildATensor(ctx, ins[5]); - - auto output = buildATensor(ctx, outs[0]); - auto columns = buildATensor(ctx, outs[1]); - - modulated_deform_conv_forward(input, weight, bias, ones, offset, mask, output, - columns, kernel_h, kernel_w, stride_h, stride_w, - pad_h, pad_w, dilation_h, dilation_w, group, - deformable_group, with_bias); -} - -void modulated_deform_conv_backward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kernel_h, kernel_w, stride_h, stride_w, pad_h, pad_w, dilation_h, - dilation_w, group, deformable_group, with_bias; - SSAttrs(attr) - .get("kernel_h", kernel_h) - .get("kernel_w", kernel_w) - .get("stride_h", stride_h) - .get("stride_w", stride_w) - .get("pad_h", pad_h) - .get("pad_w", pad_w) - .get("dilation_h", dilation_h) - .get("dilation_w", dilation_w) - .get("group", group) - .get("deformable_group", deformable_group) - .get("with_bias", with_bias) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& weight = buildATensor(ctx, ins[1]); - const auto& bias = buildATensor(ctx, ins[2]); - const auto& ones = buildATensor(ctx, ins[3]); - const auto& offset = buildATensor(ctx, ins[4]); - const auto& mask = buildATensor(ctx, ins[5]); - - auto columns = buildATensor(ctx, outs[0]); - auto grad_input = buildATensor(ctx, outs[1]); - auto grad_weight = buildATensor(ctx, outs[2]); - auto grad_bias = buildATensor(ctx, outs[3]); - auto grad_offset = buildATensor(ctx, outs[4]); - auto grad_mask = buildATensor(ctx, outs[5]); - auto grad_output = buildATensor(ctx, outs[6]); - modulated_deform_conv_backward( - input, weight, bias, ones, offset, mask, columns, grad_input, grad_weight, - grad_bias, grad_offset, grad_mask, grad_output, kernel_h, kernel_w, - stride_h, stride_w, pad_h, pad_w, dilation_h, dilation_w, group, - deformable_group, with_bias); -} -#endif - -void modulated_deform_conv_forward_cpu_parrots( - HostContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kernel_h, kernel_w, stride_h, stride_w, pad_h, pad_w, dilation_h, - dilation_w, group, deformable_group, with_bias; - SSAttrs(attr) - .get("kernel_h", kernel_h) - .get("kernel_w", kernel_w) - .get("stride_h", stride_h) - .get("stride_w", stride_w) - .get("pad_h", pad_h) - .get("pad_w", pad_w) - .get("dilation_h", dilation_h) - .get("dilation_w", dilation_w) - .get("group", group) - .get("deformable_group", 
deformable_group) - .get("with_bias", with_bias) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& weight = buildATensor(ctx, ins[1]); - const auto& bias = buildATensor(ctx, ins[2]); - const auto& ones = buildATensor(ctx, ins[3]); - const auto& offset = buildATensor(ctx, ins[4]); - const auto& mask = buildATensor(ctx, ins[5]); - - auto output = buildATensor(ctx, outs[0]); - auto columns = buildATensor(ctx, outs[1]); - - modulated_deform_conv_forward(input, weight, bias, ones, offset, mask, output, - columns, kernel_h, kernel_w, stride_h, stride_w, - pad_h, pad_w, dilation_h, dilation_w, group, - deformable_group, with_bias); -} - -void modulated_deform_conv_backward_cpu_parrots( - HostContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kernel_h, kernel_w, stride_h, stride_w, pad_h, pad_w, dilation_h, - dilation_w, group, deformable_group, with_bias; - SSAttrs(attr) - .get("kernel_h", kernel_h) - .get("kernel_w", kernel_w) - .get("stride_h", stride_h) - .get("stride_w", stride_w) - .get("pad_h", pad_h) - .get("pad_w", pad_w) - .get("dilation_h", dilation_h) - .get("dilation_w", dilation_w) - .get("group", group) - .get("deformable_group", deformable_group) - .get("with_bias", with_bias) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& weight = buildATensor(ctx, ins[1]); - const auto& bias = buildATensor(ctx, ins[2]); - const auto& ones = buildATensor(ctx, ins[3]); - const auto& offset = buildATensor(ctx, ins[4]); - const auto& mask = buildATensor(ctx, ins[5]); - - auto columns = buildATensor(ctx, outs[0]); - auto grad_input = buildATensor(ctx, outs[1]); - auto grad_weight = buildATensor(ctx, outs[2]); - auto grad_bias = buildATensor(ctx, outs[3]); - auto grad_offset = buildATensor(ctx, outs[4]); - auto grad_mask = buildATensor(ctx, outs[5]); - auto grad_output = buildATensor(ctx, outs[6]); - modulated_deform_conv_backward( - input, weight, bias, ones, offset, mask, columns, grad_input, grad_weight, - grad_bias, grad_offset, grad_mask, grad_output, kernel_h, kernel_w, - stride_h, stride_w, pad_h, pad_w, dilation_h, dilation_w, group, - deformable_group, with_bias); -} -PARROTS_EXTENSION_REGISTER(modulated_deform_conv_forward) - .attr("kernel_h") - .attr("kernel_w") - .attr("stride_h") - .attr("stride_w") - .attr("pad_h") - .attr("pad_w") - .attr("dilation_h") - .attr("dilation_w") - .attr("group") - .attr("deformable_group") - .attr("with_bias") - .input(6) - .output(2) - .apply(modulated_deform_conv_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(modulated_deform_conv_forward_cuda_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(modulated_deform_conv_backward) - .attr("kernel_h") - .attr("kernel_w") - .attr("stride_h") - .attr("stride_w") - .attr("pad_h") - .attr("pad_w") - .attr("dilation_h") - .attr("dilation_w") - .attr("group") - .attr("deformable_group") - .attr("with_bias") - .input(6) - .output(7) - .apply(modulated_deform_conv_backward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(modulated_deform_conv_backward_cuda_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv_pytorch.h deleted file mode 100644 index 12f68686..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/modulated_deform_conv_pytorch.h +++ /dev/null @@ -1,21 +0,0 @@ -// 
Copyright (c) OpenMMLab. All rights reserved -#ifndef MODULATED_DEFORM_CONV_PYTORCH_H -#define MODULATED_DEFORM_CONV_PYTORCH_H -#include -using namespace at; - -void modulated_deform_conv_forward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor output, Tensor columns, int kernel_h, int kernel_w, - const int stride_h, const int stride_w, const int pad_h, const int pad_w, - const int dilation_h, const int dilation_w, const int group, - const int deformable_group, const bool with_bias); - -void modulated_deform_conv_backward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor columns, Tensor grad_input, Tensor grad_weight, - Tensor grad_bias, Tensor grad_offset, Tensor grad_mask, Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias); -#endif // MODULATED_DEFORM_CONV_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ms_deform_attn.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ms_deform_attn.cpp deleted file mode 100644 index 25c8f620..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ms_deform_attn.cpp +++ /dev/null @@ -1,60 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from -*https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -Tensor ms_deform_attn_impl_forward(const Tensor &value, - const Tensor &spatial_shapes, - const Tensor &level_start_index, - const Tensor &sampling_loc, - const Tensor &attn_weight, - const int im2col_step) { - return DISPATCH_DEVICE_IMPL(ms_deform_attn_impl_forward, value, - spatial_shapes, level_start_index, sampling_loc, - attn_weight, im2col_step); -} - -void ms_deform_attn_impl_backward( - const Tensor &value, const Tensor &spatial_shapes, - const Tensor &level_start_index, const Tensor &sampling_loc, - const Tensor &attn_weight, const Tensor &grad_output, Tensor &grad_value, - Tensor &grad_sampling_loc, Tensor &grad_attn_weight, - const int im2col_step) { - DISPATCH_DEVICE_IMPL(ms_deform_attn_impl_backward, value, spatial_shapes, - level_start_index, sampling_loc, attn_weight, - grad_output, grad_value, grad_sampling_loc, - grad_attn_weight, im2col_step); -} - -Tensor ms_deform_attn_forward(const Tensor &value, const Tensor &spatial_shapes, - const Tensor &level_start_index, - const Tensor &sampling_loc, - const Tensor &attn_weight, - const int im2col_step) { - at::DeviceGuard guard(value.device()); - return ms_deform_attn_impl_forward(value, spatial_shapes, level_start_index, - sampling_loc, attn_weight, im2col_step); -} - -void ms_deform_attn_backward(const Tensor &value, const Tensor &spatial_shapes, - const Tensor &level_start_index, - const Tensor &sampling_loc, - const Tensor &attn_weight, - const Tensor &grad_output, Tensor &grad_value, - Tensor &grad_sampling_loc, - Tensor &grad_attn_weight, 
const int im2col_step) { - at::DeviceGuard guard(value.device()); - ms_deform_attn_impl_backward(value, spatial_shapes, level_start_index, - sampling_loc, attn_weight, grad_output, - grad_value, grad_sampling_loc, grad_attn_weight, - im2col_step); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ms_deform_attn_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ms_deform_attn_parrots.cpp deleted file mode 100644 index a3ad786a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/ms_deform_attn_parrots.cpp +++ /dev/null @@ -1,69 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include - -#include -#include -#include -using namespace at; -using namespace parrots; - -Tensor ms_deform_attn_forward(const Tensor &value, const Tensor &spatial_shapes, - const Tensor &level_start_index, - const Tensor &sampling_loc, - const Tensor &attn_weight, const int im2col_step); - -void ms_deform_attn_backward(const Tensor &value, const Tensor &spatial_shapes, - const Tensor &level_start_index, - const Tensor &sampling_loc, - const Tensor &attn_weight, - const Tensor &grad_output, Tensor &grad_value, - Tensor &grad_sampling_loc, - Tensor &grad_attn_weight, const int im2col_step); - -void ms_deform_attn_forward_parrots(CudaContext &ctx, const SSElement &attr, - const OperatorBase::in_list_t &ins, - OperatorBase::out_list_t &outs) { - int im2col_step; - SSAttrs(attr).get("im2col_step", im2col_step).done(); - const auto &value = buildATensor(ctx, ins[0]); - const auto &spatial_shapes = buildATensor(ctx, ins[1]); - const auto &level_start_index = buildATensor(ctx, ins[2]); - const auto &sampling_loc = buildATensor(ctx, ins[3]); - const auto &attn_weight = buildATensor(ctx, ins[4]); - auto out = ms_deform_attn_forward(value, spatial_shapes, level_start_index, - sampling_loc, attn_weight, im2col_step); - updateDArray(ctx, out, outs[0]); -} - -void ms_deform_attn_backward_parrots(CudaContext &ctx, const SSElement &attr, - const OperatorBase::in_list_t &ins, - OperatorBase::out_list_t &outs) { - int im2col_step; - SSAttrs(attr).get("im2col_step", im2col_step).done(); - const auto &value = buildATensor(ctx, ins[0]); - const auto &spatial_shapes = buildATensor(ctx, ins[1]); - const auto &level_start_index = buildATensor(ctx, ins[2]); - const auto &sampling_loc = buildATensor(ctx, ins[3]); - const auto &attn_weight = buildATensor(ctx, ins[4]); - const auto &grad_output = buildATensor(ctx, ins[5]); - auto grad_value = buildATensor(ctx, outs[0]); - auto grad_sampling_loc = buildATensor(ctx, outs[1]); - auto grad_attn_weight = buildATensor(ctx, outs[2]); - ms_deform_attn_backward(value, spatial_shapes, level_start_index, - sampling_loc, attn_weight, grad_output, grad_value, - grad_sampling_loc, grad_attn_weight, im2col_step); -} - -PARROTS_EXTENSION_REGISTER(ms_deform_attn_forward) - .attr("im2col_step") - .input(5) - .output(1) - .apply(ms_deform_attn_forward_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(ms_deform_attn_backward) - .attr("im2col_step") - .input(6) - .output(3) - .apply(ms_deform_attn_backward_parrots) - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms.cpp deleted file mode 100644 index 199d8af2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms.cpp +++ /dev/null @@ -1,33 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -Tensor nms_impl(Tensor boxes, Tensor scores, float iou_threshold, int offset) { - return DISPATCH_DEVICE_IMPL(nms_impl, boxes, scores, iou_threshold, offset); -} - -Tensor softnms_impl(Tensor boxes, Tensor scores, Tensor dets, - float iou_threshold, float sigma, float min_score, - int method, int offset) { - return DISPATCH_DEVICE_IMPL(softnms_impl, boxes, scores, dets, iou_threshold, - sigma, min_score, method, offset); -} - -std::vector > nms_match_impl(Tensor dets, - float iou_threshold) { - return DISPATCH_DEVICE_IMPL(nms_match_impl, dets, iou_threshold); -} - -Tensor nms(Tensor boxes, Tensor scores, float iou_threshold, int offset) { - return nms_impl(boxes, scores, iou_threshold, offset); -} - -Tensor softnms(Tensor boxes, Tensor scores, Tensor dets, float iou_threshold, - float sigma, float min_score, int method, int offset) { - return softnms_impl(boxes, scores, dets, iou_threshold, sigma, min_score, - method, offset); -} - -std::vector > nms_match(Tensor dets, float iou_threshold) { - return nms_match_impl(dets, iou_threshold); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_parrots.cpp deleted file mode 100644 index db8b5f16..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_parrots.cpp +++ /dev/null @@ -1,140 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "nms_pytorch.h" - -using namespace parrots; - -// Tensor nms(Tensor boxes, Tensor scores, float iou_threshold, int offset); -template -void nms_parrots(T& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float iou_threshold; - int offset; - SSAttrs(attr) - .get("iou_threshold", iou_threshold) - .get("offset", offset) - .done(); - at::Tensor boxes, scores; - boxes = buildATensor(ctx, ins[0]); - scores = buildATensor(ctx, ins[1]); - auto out = nms(boxes, scores, iou_threshold, offset); - updateDArray(ctx, out, outs[0]); -} - -/*Tensor softnms(Tensor boxes, Tensor scores, Tensor dets, float iou_threshold, - * float sigma, float min_score, int method, int offset);*/ -template -void softnms_parrots(T& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float iou_threshold, sigma, min_score; - int method, offset; - SSAttrs(attr) - .get("iou_threshold", iou_threshold) - .get("sigma", sigma) - .get("min_score", min_score) - .get("method", method) - .get("offset", offset) - .done(); - at::Tensor boxes, scores, dets; - boxes = buildATensor(ctx, ins[0]); - scores = buildATensor(ctx, ins[1]); - dets = buildATensor(ctx, ins[2]); - auto out = softnms(boxes, scores, dets, iou_threshold, sigma, min_score, - method, offset); - updateDArray(ctx, out, outs[0]); -} - -// std::vector > nms_match(Tensor dets, float iou_threshold); -template -void nms_match_parrots(T& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float iou_threshold; - SSAttrs(attr).get("iou_threshold", iou_threshold).done(); - at::Tensor dets; - dets = buildATensor(ctx, ins[0]); - auto out = nms_match(dets, iou_threshold); - int n = out.size(), m = 0; - for (int i = 0; i < n; ++i) - if (m < out[i].size()) m = out[i].size(); - auto options = torch::TensorOptions().dtype(at::kInt); - auto tensor = 
torch::zeros({n, m}, options); - for (int i = 0; i < n; i++) - tensor.slice(0, i, i + 1) = - torch::from_blob(out[i].data(), {out[i].size()}, options); - updateDArray(ctx, tensor, outs[0]); -} - -/*Tensor nms_rotated(const Tensor dets, const Tensor scores, const Tensor order, - * const Tensor dets_sorted, const float iou_threshold, - * const int multi_label);*/ -template -void nms_rotated_parrots(T& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float iou_threshold; - int multi_label; - SSAttrs(attr) - .get("iou_threshold", iou_threshold) - .get("multi_label", multi_label) - .done(); - at::Tensor dets, scores, order, dets_sorted; - dets = buildATensor(ctx, ins[0]); - scores = buildATensor(ctx, ins[1]); - order = buildATensor(ctx, ins[2]); - dets_sorted = buildATensor(ctx, ins[3]); - auto out = - nms_rotated(dets, scores, order, dets_sorted, iou_threshold, multi_label); - updateDArray(ctx, out, outs[0]); -} - -PARROTS_EXTENSION_REGISTER(nms) - .attr("iou_threshold") - .attr("offset") - .input(2) - .output(1) - .apply(nms_parrots) -#ifdef MMCV_WITH_CUDA - .apply(nms_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(softnms) - .attr("iou_threshold") - .attr("sigma") - .attr("min_score") - .attr("method") - .attr("offset") - .input(3) - .output(1) - .apply(softnms_parrots) -#ifdef MMCV_WITH_CUDA - .apply(softnms_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(nms_match) - .attr("iou_threshold") - .input(1) - .output(1) - .apply(nms_match_parrots) -#ifdef MMCV_WITH_CUDA - .apply(nms_match_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(nms_rotated) - .attr("multi_label") - .attr("iou_threshold") - .input(4) - .output(1) - .apply(nms_rotated_parrots) -#ifdef MMCV_WITH_CUDA - .apply(nms_rotated_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_pytorch.h deleted file mode 100644 index 78c680e5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_pytorch.h +++ /dev/null @@ -1,18 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef NMS_PYTORCH_H -#define NMS_PYTORCH_H -#include - -at::Tensor nms(at::Tensor boxes, at::Tensor scores, float iou_threshold, - int offset); - -at::Tensor softnms(at::Tensor boxes, at::Tensor scores, at::Tensor dets, - float iou_threshold, float sigma, float min_score, - int method, int offset); - -std::vector > nms_match(at::Tensor dets, float iou_threshold); - -at::Tensor nms_rotated(const at::Tensor dets, const at::Tensor scores, - const at::Tensor order, const at::Tensor dets_sorted, - const float iou_threshold, const int multi_label); -#endif // NMS_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_rotated.cpp deleted file mode 100644 index e4ef676a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/nms_rotated.cpp +++ /dev/null @@ -1,32 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/nms_rotated/nms_rotated.h -#include "pytorch_cpp_helper.hpp" - -Tensor nms_rotated_cpu(const Tensor dets, const Tensor scores, - const float iou_threshold); - -#ifdef MMCV_WITH_CUDA -Tensor nms_rotated_cuda(const Tensor dets, const Tensor scores, - const Tensor order, const Tensor dets_sorted, - const float iou_threshold, const int multi_label); -#endif - -// Interface for Python -// inline is needed to prevent multiple function definitions when this header is -// included by different cpps -Tensor nms_rotated(const Tensor dets, const Tensor scores, const Tensor order, - const Tensor dets_sorted, const float iou_threshold, - const int multi_label) { - assert(dets.device().is_cuda() == scores.device().is_cuda()); - if (dets.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - return nms_rotated_cuda(dets, scores, order, dets_sorted, iou_threshold, - multi_label); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - - return nms_rotated_cpu(dets, scores, iou_threshold); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group.cpp deleted file mode 100644 index 2bf8c8bb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group.cpp +++ /dev/null @@ -1,26 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// It is modified from https://github.com/WenmuZhou/PAN.pytorch - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -std::vector> pixel_group_impl( - Tensor score, Tensor mask, Tensor embedding, Tensor kernel_label, - Tensor kernel_contour, int kernel_region_num, float dis_threshold) { - return DISPATCH_DEVICE_IMPL(pixel_group_impl, score, mask, embedding, - kernel_label, kernel_contour, kernel_region_num, - dis_threshold); -} - -std::vector> pixel_group( - Tensor score, Tensor mask, Tensor embedding, Tensor kernel_label, - Tensor kernel_contour, int kernel_region_num, float distance_threshold) { - score = score.contiguous(); - mask = mask.contiguous(); - embedding = embedding.contiguous(); - kernel_label = kernel_label.contiguous(); - kernel_contour = kernel_contour.contiguous(); - - return pixel_group_impl(score, mask, embedding, kernel_label, kernel_contour, - kernel_region_num, distance_threshold); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group_parrots.cpp deleted file mode 100644 index bd863a4e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group_parrots.cpp +++ /dev/null @@ -1,54 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "pixel_group_pytorch.h" - -using namespace parrots; -using namespace std; - -template -void pixel_group_parrots(T& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int kernel_region_num; - float distance_threshold; - SSAttrs(attr) - .get("kernel_region_num", kernel_region_num) - .get("distance_threshold", distance_threshold) - .done(); - at::Tensor score; - at::Tensor mask; - at::Tensor embedding; - at::Tensor kernel_label; - at::Tensor kernel_contour; - score = buildATensor(ctx, ins[0]); - mask = buildATensor(ctx, ins[1]); - embedding = buildATensor(ctx, ins[2]); - kernel_label = buildATensor(ctx, ins[3]); - kernel_contour = buildATensor(ctx, ins[4]); - auto out = pixel_group(score, mask, embedding, kernel_label, kernel_contour, - kernel_region_num, distance_threshold); - int n = out.size(); - std::vector out_tensor; - for (int i = 0; i < n; ++i) out_tensor.push_back(float(out[i].size())); - for (int i = 0; i < n; ++i) - out_tensor.insert(out_tensor.end(), out[i].begin(), out[i].end()); - auto options = torch::TensorOptions().dtype(at::kFloat); - auto tensor = torch::zeros({1, out_tensor.size()}, options); - tensor.slice(0, 0, 1) = - torch::from_blob(out_tensor.data(), {out_tensor.size()}, options); - updateDArray(ctx, tensor, outs[0]); -} - -PARROTS_EXTENSION_REGISTER(pixel_group) - .attr("kernel_region_num") - .attr("distance_threshold") - .input(5) - .output(1) - .apply(pixel_group_parrots) -#ifdef MMCV_WITH_CUDA - .apply(pixel_group_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group_pytorch.h deleted file mode 100644 index 1686ef3e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/pixel_group_pytorch.h +++ /dev/null @@ -1,11 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef PIXEL_GROUP_PYTORCH_H -#define PIXEL_GROUP_PYTORCH_H -#include -using namespace at; - -std::vector> pixel_group( - Tensor score, Tensor mask, Tensor embedding, Tensor kernel_label, - Tensor kernel_contour, int kernel_region_num, float distance_threshold); - -#endif // PIXEL_GROUP_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes.cpp deleted file mode 100644 index 540da940..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes.cpp +++ /dev/null @@ -1,44 +0,0 @@ -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void points_in_boxes_part_forward_impl(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points) { - DISPATCH_DEVICE_IMPL(points_in_boxes_part_forward_impl, batch_size, boxes_num, - pts_num, boxes, pts, box_idx_of_points); -} - -void points_in_boxes_all_forward_impl(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points) { - DISPATCH_DEVICE_IMPL(points_in_boxes_all_forward_impl, batch_size, boxes_num, - pts_num, boxes, pts, box_idx_of_points); -} - -void points_in_boxes_part_forward(Tensor boxes_tensor, Tensor pts_tensor, - Tensor box_idx_of_points_tensor) { - // params boxes: (B, N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR - // coordinate, z is the bottom center, each box params pts: (B, npoints, 3) - // [x, y, z] in LiDAR coordinate params boxes_idx_of_points: (B, npoints), - // default -1 - int batch_size = boxes_tensor.size(0); - int boxes_num = boxes_tensor.size(1); - int pts_num = pts_tensor.size(1); - points_in_boxes_part_forward_impl(batch_size, boxes_num, pts_num, - boxes_tensor, pts_tensor, - box_idx_of_points_tensor); -} - -void points_in_boxes_all_forward(Tensor boxes_tensor, Tensor pts_tensor, - Tensor box_idx_of_points_tensor) { - // params boxes: (B, N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR - // coordinate, z is the bottom center. params pts: (B, npoints, 3) [x, y, z] - // in LiDAR coordinate params boxes_idx_of_points: (B, npoints), default -1 - int batch_size = boxes_tensor.size(0); - int boxes_num = boxes_tensor.size(1); - int pts_num = pts_tensor.size(1); - points_in_boxes_all_forward_impl(batch_size, boxes_num, pts_num, boxes_tensor, - pts_tensor, box_idx_of_points_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes_parrots.cpp deleted file mode 100644 index afd2b0eb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes_parrots.cpp +++ /dev/null @@ -1,64 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "points_in_boxes_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void points_in_boxes_part_forward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto boxes_tensor = buildATensor(ctx, ins[0]); - auto pts_tensor = buildATensor(ctx, ins[1]); - - auto box_idx_of_points_tensor = buildATensor(ctx, outs[0]); - - points_in_boxes_part_forward(boxes_tensor, pts_tensor, - box_idx_of_points_tensor); -} - -void points_in_boxes_all_forward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto boxes_tensor = buildATensor(ctx, ins[0]); - auto pts_tensor = buildATensor(ctx, ins[1]); - - auto box_idx_of_points_tensor = buildATensor(ctx, outs[0]); - - points_in_boxes_all_forward(boxes_tensor, pts_tensor, - box_idx_of_points_tensor); -} - -PARROTS_EXTENSION_REGISTER(points_in_boxes_part_forward) - .input(2) - .output(1) - .apply(points_in_boxes_part_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(points_in_boxes_all_forward) - .input(2) - .output(1) - .apply(points_in_boxes_all_forward_cuda_parrots) - .done(); -#endif - -void points_in_boxes_forward_cpu_parrots(HostContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto boxes_tensor = buildATensor(ctx, ins[0]); - auto pts_tensor = buildATensor(ctx, ins[1]); - - auto pts_indices_tensor = buildATensor(ctx, outs[0]); - - points_in_boxes_cpu_forward(boxes_tensor, pts_tensor, pts_indices_tensor); -} - -PARROTS_EXTENSION_REGISTER(points_in_boxes_cpu_forward) - .input(2) - .output(1) - .apply(points_in_boxes_forward_cpu_parrots) - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes_pytorch.h deleted file mode 100644 index f3e465e3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_boxes_pytorch.h +++ /dev/null @@ -1,16 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef POINTS_IN_BOXES_PYTORCH_H -#define POINTS_IN_BOXES_PYTORCH_H -#include -using namespace at; - -void points_in_boxes_part_forward(Tensor boxes_tensor, Tensor pts_tensor, - Tensor box_idx_of_points_tensor); - -void points_in_boxes_all_forward(Tensor boxes_tensor, Tensor pts_tensor, - Tensor box_idx_of_points_tensor); - -void points_in_boxes_cpu_forward(Tensor boxes_tensor, Tensor pts_tensor, - Tensor pts_indices_tensor); - -#endif // POINTS_IN_BOXES_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons.cpp deleted file mode 100644 index 75a93dce..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons.cpp +++ /dev/null @@ -1,15 +0,0 @@ -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void points_in_polygons_forward_impl(const Tensor points, const Tensor polygons, - Tensor output, const int rows, - const int cols) { - DISPATCH_DEVICE_IMPL(points_in_polygons_forward_impl, points, polygons, - output, rows, cols); -} - -void points_in_polygons_forward(Tensor points, Tensor polygons, Tensor output) { - int rows = points.size(0); - int cols = polygons.size(0); - points_in_polygons_forward_impl(points, polygons, output, rows, cols); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons_parrots.cpp deleted file mode 100644 index d52018e6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons_parrots.cpp +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "points_in_polygons_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void points_in_polygons_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto points = buildATensor(ctx, ins[0]); - auto polygons = buildATensor(ctx, ins[1]); - - auto output = buildATensor(ctx, outs[0]); - - points_in_polygons_forward(points, polygons, output); -} - -PARROTS_EXTENSION_REGISTER(points_in_polygons_forward) - .input(2) - .output(1) - .apply(points_in_polygons_cuda_parrots) - .done(); - -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons_pytorch.h deleted file mode 100644 index 04267814..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/points_in_polygons_pytorch.h +++ /dev/null @@ -1,9 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef POINTS_IN_POLYGONS_PYTORCH_H -#define POINTS_IN_POLYGONS_PYTORCH_H -#include -using namespace at; - -void points_in_polygons_forward(Tensor points, Tensor polygons, Tensor output); - -#endif // POINTS_IN_POLYGONS_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask.cpp deleted file mode 100644 index 6064c9ba..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask.cpp +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -// Modified from -// https://github.com/hszhao/semseg/blob/master/lib/psa/src -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void psamask_forward_impl(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask) { - DISPATCH_DEVICE_IMPL(psamask_forward_impl, psa_type, input, output, num_, - h_feature, w_feature, h_mask, w_mask, half_h_mask, - half_w_mask); -} - -void psamask_backward_impl(const int psa_type, const Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask) { - DISPATCH_DEVICE_IMPL(psamask_backward_impl, psa_type, grad_output, grad_input, - num_, h_feature, w_feature, h_mask, w_mask, half_h_mask, - half_w_mask); -} - -void psamask_forward(const Tensor input, Tensor output, const int psa_type, - const int num_, const int h_feature, const int w_feature, - const int h_mask, const int w_mask, const int half_h_mask, - const int half_w_mask) { - psamask_forward_impl(psa_type, input, output, num_, h_feature, w_feature, - h_mask, w_mask, half_h_mask, half_w_mask); -} - -void psamask_backward(Tensor grad_output, const Tensor grad_input, - const int psa_type, const int num_, const int h_feature, - const int w_feature, const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask) { - psamask_backward_impl(psa_type, grad_output, grad_input, num_, h_feature, - w_feature, h_mask, w_mask, half_h_mask, half_w_mask); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask_parrots.cpp deleted file mode 100644 index f67102d0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask_parrots.cpp +++ /dev/null @@ -1,129 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "psamask_pytorch.h" -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void psamask_forward_cuda_parrots(CudaContext &ctx, const SSElement &attr, - const OperatorBase::in_list_t &ins, - OperatorBase::out_list_t &outs) { - int psa_type, num_, h_feature, w_feature, h_mask, w_mask, half_h_mask, - half_w_mask; - SSAttrs(attr) - .get("psa_type", psa_type) - .get("num_", num_) - .get("h_feature", h_feature) - .get("w_feature", w_feature) - .get("h_mask", h_mask) - .get("w_mask", w_mask) - .get("half_h_mask", half_h_mask) - .get("half_w_mask", half_w_mask) - .done(); - const auto &input = buildATensor(ctx, ins[0]); - auto output = buildATensor(ctx, outs[0]); - psamask_forward_cuda(psa_type, input, output, num_, h_feature, w_feature, - h_mask, w_mask, half_h_mask, half_w_mask); -} - -void psamask_backward_cuda_parrots(CudaContext &ctx, const SSElement &attr, - const OperatorBase::in_list_t &ins, - OperatorBase::out_list_t &outs) { - int psa_type, num_, h_feature, w_feature, h_mask, w_mask, half_h_mask, - half_w_mask; - SSAttrs(attr) - .get("psa_type", psa_type) - .get("num_", num_) - .get("h_feature", h_feature) - .get("w_feature", w_feature) - .get("h_mask", h_mask) - .get("w_mask", w_mask) - .get("half_h_mask", half_h_mask) - .get("half_w_mask", half_w_mask) - .done(); - - const auto &grad_output = buildATensor(ctx, ins[0]); - auto grad_input = buildATensor(ctx, outs[0]); - psamask_backward_cuda(psa_type, grad_output, grad_input, num_, h_feature, - w_feature, h_mask, w_mask, half_h_mask, half_w_mask); -} -#endif - -void psamask_forward_cpu_parrots(HostContext &ctx, const SSElement &attr, - const OperatorBase::in_list_t &ins, - OperatorBase::out_list_t &outs) { - int psa_type, num_, h_feature, w_feature, h_mask, w_mask, half_h_mask, - half_w_mask; - SSAttrs(attr) - .get("psa_type", psa_type) - .get("num_", num_) - .get("h_feature", h_feature) - .get("w_feature", w_feature) - .get("h_mask", h_mask) - .get("w_mask", w_mask) - .get("half_h_mask", half_h_mask) - .get("half_w_mask", half_w_mask) - .done(); - const auto &input = buildATensor(ctx, ins[0]); - auto output = buildATensor(ctx, outs[0]); - psamask_forward_cpu(psa_type, input, output, num_, h_feature, w_feature, - h_mask, w_mask, half_h_mask, half_w_mask); -} - -void psamask_backward_cpu_parrots(HostContext &ctx, const SSElement &attr, - const OperatorBase::in_list_t &ins, - OperatorBase::out_list_t &outs) { - int psa_type, num_, h_feature, w_feature, h_mask, w_mask, half_h_mask, - half_w_mask; - SSAttrs(attr) - .get("psa_type", psa_type) - .get("num_", num_) - .get("h_feature", h_feature) - .get("w_feature", w_feature) - .get("h_mask", h_mask) - .get("w_mask", w_mask) - .get("half_h_mask", half_h_mask) - .get("half_w_mask", half_w_mask) - .done(); - - const auto &grad_output = buildATensor(ctx, ins[0]); - auto grad_input = buildATensor(ctx, outs[0]); - psamask_backward_cpu(psa_type, grad_output, grad_input, num_, h_feature, - w_feature, h_mask, w_mask, half_h_mask, half_w_mask); -} - -PARROTS_EXTENSION_REGISTER(psamask_forward) - .attr("psa_type") - .attr("num_") - .attr("h_feature") - .attr("w_feature") - .attr("h_mask") - .attr("w_mask") - .attr("half_h_mask") - .attr("half_w_mask") - .input(1) - .output(1) - .apply(psamask_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(psamask_forward_cuda_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(psamask_backward) - .attr("psa_type") - .attr("num_") - .attr("h_feature") - .attr("w_feature") - .attr("h_mask") - 
.attr("w_mask") - .attr("half_h_mask") - .attr("half_w_mask") - .input(1) - .output(1) - .apply(psamask_backward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(psamask_backward_cuda_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask_pytorch.h deleted file mode 100644 index c3f0579e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/psamask_pytorch.h +++ /dev/null @@ -1,31 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef PSAMASK_PYTORCH_H -#define PSAMASK_PYTORCH_H -#include -using namespace at; - -#ifdef MMCV_WITH_CUDA -void psamask_forward_cuda(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask); - -void psamask_backward_cuda(const int psa_type, const Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask); -#endif -void psamask_forward_cpu(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask); - -void psamask_backward_cpu(const int psa_type, const Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask); -#endif // PSAMASK_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated.cpp deleted file mode 100644 index 81ffa9fd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated.cpp +++ /dev/null @@ -1,42 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void riroi_align_rotated_forward_impl(Tensor features, Tensor rois, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise) { - DISPATCH_DEVICE_IMPL(riroi_align_rotated_forward_impl, features, rois, output, - pooled_height, pooled_width, spatial_scale, num_samples, - num_orientations, clockwise); -} - -void riroi_align_rotated_backward_impl(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise) { - DISPATCH_DEVICE_IMPL(riroi_align_rotated_backward_impl, top_grad, rois, - bottom_grad, pooled_height, pooled_width, spatial_scale, - num_samples, num_orientations, clockwise); -} - -void riroi_align_rotated_forward(Tensor features, Tensor rois, Tensor output, - int pooled_height, int pooled_width, - float spatial_scale, int num_samples, - int num_orientations, bool clockwise) { - riroi_align_rotated_forward_impl(features, rois, output, pooled_height, - pooled_width, spatial_scale, num_samples, - num_orientations, clockwise); -} - -void riroi_align_rotated_backward(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise) { - riroi_align_rotated_backward_impl(top_grad, rois, bottom_grad, pooled_height, - pooled_width, spatial_scale, num_samples, - num_orientations, clockwise); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated_parrots.cpp deleted file mode 100644 index 5eb340ce..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated_parrots.cpp +++ /dev/null @@ -1,86 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "riroi_align_rotated_pytorch.h" -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void riroi_align_rotated_forward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pooled_height; - int pooled_width; - float spatial_scale; - int sample_num; - int num_orientations; - bool clockwise; - SSAttrs(attr) - .get("pooled_height", pooled_height) - .get("pooled_width", pooled_width) - .get("spatial_scale", spatial_scale) - .get("num_samples", sample_num) - .get("num_orientations", num_orientations) - .get("clockwise", clockwise) - .done(); - - auto input = buildATensor(ctx, ins[0]); - auto rois = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - riroi_align_rotated_forward(input, rois, output, pooled_height, pooled_width, - spatial_scale, sample_num, num_orientations, - clockwise); -} - -void riroi_align_rotated_backward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pooled_height; - int pooled_width; - float spatial_scale; - int sample_num; - int num_orientations; - bool clockwise; - SSAttrs(attr) - .get("pooled_height", pooled_height) - .get("pooled_width", pooled_width) - .get("spatial_scale", spatial_scale) - .get("num_samples", sample_num) - .get("num_orientations", num_orientations) - .get("clockwise", clockwise) - .done(); - - auto grad_output = buildATensor(ctx, ins[0]); - auto rois = buildATensor(ctx, ins[1]); - auto grad_input = buildATensor(ctx, outs[0]); - riroi_align_rotated_backward(grad_output, rois, grad_input, pooled_height, - pooled_width, spatial_scale, sample_num, - num_orientations, clockwise); -} - -PARROTS_EXTENSION_REGISTER(riroi_align_rotated_forward) - .attr("pooled_height") - .attr("pooled_width") - .attr("spatial_scale") - .attr("num_samples") - .attr("num_orientations") - .attr("clockwise") - .input(2) - .output(1) - .apply(riroi_align_rotated_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(riroi_align_rotated_backward) - .attr("pooled_height") - .attr("pooled_width") - .attr("spatial_scale") - .attr("num_samples") - .attr("num_orientations") - .attr("clockwise") - .input(2) - .output(1) - .apply(riroi_align_rotated_backward_cuda_parrots) - .done(); - -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated_pytorch.h deleted file mode 100644 index 49a30bff..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/riroi_align_rotated_pytorch.h +++ /dev/null @@ -1,18 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef RIROI_ALIGN_ROTATED_PYTORCH_H -#define RIROI_ALIGN_ROTATED_PYTORCH_H -#include -using namespace at; - -void riroi_align_rotated_forward(Tensor features, Tensor rois, Tensor output, - int pooled_height, int pooled_width, - float spatial_scale, int num_samples, - int num_orientations, bool clockwise); - -void riroi_align_rotated_backward(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise); - -#endif // RIROI_ALIGN_ROTATED_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align.cpp deleted file mode 100644 index 6e707739..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align.cpp +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void roi_align_forward_impl(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - DISPATCH_DEVICE_IMPL(roi_align_forward_impl, input, rois, output, argmax_y, - argmax_x, aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} - -void roi_align_backward_impl(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - DISPATCH_DEVICE_IMPL(roi_align_backward_impl, grad_output, rois, argmax_y, - argmax_x, grad_input, aligned_height, aligned_width, - spatial_scale, sampling_ratio, pool_mode, aligned); -} - -void roi_align_forward(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned) { - roi_align_forward_impl(input, rois, output, argmax_y, argmax_x, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} - -void roi_align_backward(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned) { - roi_align_backward_impl(grad_output, rois, argmax_y, argmax_x, grad_input, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_parrots.cpp deleted file mode 100644 index 60abea09..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_parrots.cpp +++ /dev/null @@ -1,151 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "roi_align_pytorch.h" -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void roi_align_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int aligned_height; - int aligned_width; - float spatial_scale; - int sampling_ratio; - int pool_mode; - bool aligned; - SSAttrs(attr) - .get("aligned_height", aligned_height) - .get("aligned_width", aligned_width) - .get("spatial_scale", spatial_scale) - .get("sampling_ratio", sampling_ratio) - .get("pool_mode", pool_mode) - .get("aligned", aligned) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - auto argmax_y = buildATensor(ctx, outs[1]); - auto argmax_x = buildATensor(ctx, outs[2]); - roi_align_forward_cuda(input, rois, output, argmax_y, argmax_x, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} - -void roi_align_backward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int aligned_height; - int aligned_width; - float spatial_scale; - int sampling_ratio; - int pool_mode; - bool aligned; - SSAttrs(attr) - .get("aligned_height", aligned_height) - .get("aligned_width", aligned_width) - .get("spatial_scale", spatial_scale) - .get("sampling_ratio", sampling_ratio) - .get("pool_mode", pool_mode) - .get("aligned", aligned) - .done(); - - const auto& grad_output = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - const auto& argmax_y = buildATensor(ctx, ins[2]); - const auto& argmax_x = buildATensor(ctx, ins[3]); - auto grad_input = buildATensor(ctx, outs[0]); - roi_align_backward_cuda(grad_output, rois, argmax_y, argmax_x, grad_input, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} -#endif - -void roi_align_forward_cpu_parrots(HostContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int aligned_height; - int aligned_width; - float spatial_scale; - int sampling_ratio; - int pool_mode; - bool aligned; - SSAttrs(attr) - .get("aligned_height", aligned_height) - .get("aligned_width", aligned_width) - .get("spatial_scale", spatial_scale) - .get("sampling_ratio", sampling_ratio) - .get("pool_mode", pool_mode) - .get("aligned", aligned) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - auto argmax_y = buildATensor(ctx, outs[1]); - auto argmax_x = buildATensor(ctx, outs[2]); - roi_align_forward_cpu(input, rois, output, argmax_y, argmax_x, aligned_height, - aligned_width, spatial_scale, sampling_ratio, pool_mode, - aligned); -} - -void roi_align_backward_cpu_parrots(HostContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int aligned_height; - int aligned_width; - float spatial_scale; - int sampling_ratio; - int pool_mode; - bool aligned; - SSAttrs(attr) - .get("aligned_height", aligned_height) - .get("aligned_width", aligned_width) - .get("spatial_scale", spatial_scale) - .get("sampling_ratio", sampling_ratio) - .get("pool_mode", pool_mode) - .get("aligned", aligned) - .done(); - - const auto& grad_output = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - const auto& argmax_y 
= buildATensor(ctx, ins[2]); - const auto& argmax_x = buildATensor(ctx, ins[3]); - auto grad_input = buildATensor(ctx, outs[0]); - roi_align_backward_cpu(grad_output, rois, argmax_y, argmax_x, grad_input, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} - -PARROTS_EXTENSION_REGISTER(roi_align_forward) - .attr("aligned_height") - .attr("aligned_width") - .attr("spatial_scale") - .attr("sampling_ratio") - .attr("pool_mode") - .attr("aligned") - .input(2) - .output(3) - .apply(roi_align_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(roi_align_forward_cuda_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(roi_align_backward) - .attr("aligned_height") - .attr("aligned_width") - .attr("spatial_scale") - .attr("sampling_ratio") - .attr("pool_mode") - .attr("aligned") - .input(4) - .output(1) - .apply(roi_align_backward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(roi_align_backward_cuda_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_pytorch.h deleted file mode 100644 index 4c601609..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_pytorch.h +++ /dev/null @@ -1,32 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ROI_ALIGN_PYTORCH_H -#define ROI_ALIGN_PYTORCH_H -#include -using namespace at; - -#ifdef MMCV_WITH_CUDA -void roi_align_forward_cuda(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -void roi_align_backward_cuda(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); -#endif - -void roi_align_forward_cpu(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned); - -void roi_align_backward_cpu(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -#endif // ROI_ALIGN_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated.cpp deleted file mode 100644 index 5ef691ad..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated.cpp +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void roi_align_rotated_forward_impl(Tensor features, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sample_ratio, - bool aligned, bool clockwise) { - DISPATCH_DEVICE_IMPL(roi_align_rotated_forward_impl, features, rois, output, - aligned_height, aligned_width, spatial_scale, - sample_ratio, aligned, clockwise); -} - -void roi_align_rotated_backward_impl(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sample_ratio, bool aligned, - bool clockwise) { - DISPATCH_DEVICE_IMPL(roi_align_rotated_backward_impl, top_grad, rois, - bottom_grad, aligned_height, aligned_width, - spatial_scale, sample_ratio, aligned, clockwise); -} - -void roi_align_rotated_forward(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise) { - roi_align_rotated_forward_impl(input, rois, output, aligned_height, - aligned_width, spatial_scale, sampling_ratio, - aligned, clockwise); -} - -void roi_align_rotated_backward(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise) { - roi_align_rotated_backward_impl(top_grad, rois, bottom_grad, aligned_height, - aligned_width, spatial_scale, sampling_ratio, - aligned, clockwise); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated_parrots.cpp deleted file mode 100644 index 9386250a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated_parrots.cpp +++ /dev/null @@ -1,147 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "roi_align_rotated_pytorch.h" -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void roi_align_rotated_forward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pooled_height; - int pooled_width; - float spatial_scale; - int sampling_ratio; - bool aligned; - bool clockwise; - SSAttrs(attr) - .get("pooled_height", pooled_height) - .get("pooled_width", pooled_width) - .get("spatial_scale", spatial_scale) - .get("sampling_ratio", sampling_ratio) - .get("aligned", aligned) - .get("clockwise", clockwise) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - roi_align_rotated_forward_cuda(input, rois, output, pooled_height, - pooled_width, spatial_scale, sampling_ratio, - aligned, clockwise); -} - -void roi_align_rotated_backward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pooled_height; - int pooled_width; - float spatial_scale; - int sampling_ratio; - bool aligned; - bool clockwise; - SSAttrs(attr) - .get("pooled_height", pooled_height) - .get("pooled_width", pooled_width) - .get("spatial_scale", spatial_scale) - .get("sampling_ratio", sampling_ratio) - .get("aligned", aligned) - .get("clockwise", clockwise) - .done(); - - const auto& grad_output = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - auto grad_input = buildATensor(ctx, outs[0]); - roi_align_rotated_backward_cuda(grad_output, rois, grad_input, pooled_height, - pooled_width, spatial_scale, sampling_ratio, - aligned, clockwise); -} -#endif - -void roi_align_rotated_forward_cpu_parrots(HostContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pooled_height; - int pooled_width; - float spatial_scale; - int sampling_ratio; - bool aligned; - bool clockwise; - SSAttrs(attr) - .get("pooled_height", pooled_height) - .get("pooled_width", pooled_width) - .get("spatial_scale", spatial_scale) - .get("sampling_ratio", sampling_ratio) - .get("aligned", aligned) - .get("clockwise", clockwise) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - roi_align_rotated_forward_cpu(input, rois, output, pooled_height, - pooled_width, spatial_scale, sampling_ratio, - aligned, clockwise); -} - -void roi_align_rotated_backward_cpu_parrots(HostContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pooled_height; - int pooled_width; - float spatial_scale; - int sampling_ratio; - bool aligned; - bool clockwise; - SSAttrs(attr) - .get("pooled_height", pooled_height) - .get("pooled_width", pooled_width) - .get("spatial_scale", spatial_scale) - .get("sampling_ratio", sampling_ratio) - .get("aligned", aligned) - .get("clockwise", clockwise) - .done(); - - const auto& grad_output = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - auto grad_input = buildATensor(ctx, outs[0]); - roi_align_rotated_backward_cpu(grad_output, rois, grad_input, pooled_height, - pooled_width, spatial_scale, sampling_ratio, - aligned, clockwise); -} - -PARROTS_EXTENSION_REGISTER(roi_align_rotated_forward) - .attr("pooled_height") - .attr("pooled_width") - 
.attr("spatial_scale") - .attr("sampling_ratio") - .attr("aligned") - .attr("clockwise") - .input(2) - .output(1) - .apply(roi_align_rotated_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(roi_align_rotated_forward_cuda_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(roi_align_rotated_backward) - .attr("pooled_height") - .attr("pooled_width") - .attr("spatial_scale") - .attr("sampling_ratio") - .attr("aligned") - .attr("clockwise") - .input(2) - .output(1) - .apply(roi_align_rotated_backward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(roi_align_rotated_backward_cuda_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated_pytorch.h deleted file mode 100644 index 8136b56d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_align_rotated_pytorch.h +++ /dev/null @@ -1,31 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ROI_ALIGN_ROTATED_PYTORCH_H -#define ROI_ALIGN_ROTATED_PYTORCH_H -#include -using namespace at; - -#ifdef MMCV_WITH_CUDA -void roi_align_rotated_forward_cuda(Tensor input, Tensor rois, Tensor output, - int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise); - -void roi_align_rotated_backward_cuda(Tensor grad_output, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise); -#endif - -void roi_align_rotated_forward_cpu(Tensor input, Tensor rois, Tensor output, - int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise); - -void roi_align_rotated_backward_cpu(Tensor grad_output, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise); - -#endif // ROI_ALIGN_ROTATED_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool.cpp deleted file mode 100644 index bba90b80..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool.cpp +++ /dev/null @@ -1,31 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void roi_pool_forward_impl(Tensor input, Tensor rois, Tensor output, - Tensor argmax, int pooled_height, int pooled_width, - float spatial_scale) { - DISPATCH_DEVICE_IMPL(roi_pool_forward_impl, input, rois, output, argmax, - pooled_height, pooled_width, spatial_scale); -} - -void roi_pool_backward_impl(Tensor grad_output, Tensor rois, Tensor argmax, - Tensor grad_input, int pooled_height, - int pooled_width, float spatial_scale) { - DISPATCH_DEVICE_IMPL(roi_pool_backward_impl, grad_output, rois, argmax, - grad_input, pooled_height, pooled_width, spatial_scale); -} - -void roi_pool_forward(Tensor input, Tensor rois, Tensor output, Tensor argmax, - int pooled_height, int pooled_width, - float spatial_scale) { - roi_pool_forward_impl(input, rois, output, argmax, pooled_height, - pooled_width, spatial_scale); -} - -void roi_pool_backward(Tensor grad_output, Tensor rois, Tensor argmax, - Tensor grad_input, int pooled_height, int pooled_width, - float spatial_scale) { - roi_pool_backward_impl(grad_output, rois, argmax, grad_input, pooled_height, - pooled_width, spatial_scale); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool_parrots.cpp deleted file mode 100644 index 0acde4a4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool_parrots.cpp +++ /dev/null @@ -1,67 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "roi_pool_pytorch.h" -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void roi_pool_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pooled_height; - int pooled_width; - float spatial_scale; - SSAttrs(attr) - .get("pooled_height", pooled_height) - .get("pooled_width", pooled_width) - .get("spatial_scale", spatial_scale) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - auto argmax = buildATensor(ctx, outs[1]); - roi_pool_forward_cuda(input, rois, output, argmax, pooled_height, - pooled_width, spatial_scale); -} - -void roi_pool_backward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pooled_height; - int pooled_width; - float spatial_scale; - SSAttrs(attr) - .get("pooled_height", pooled_height) - .get("pooled_width", pooled_width) - .get("spatial_scale", spatial_scale) - .done(); - - const auto& grad_output = buildATensor(ctx, ins[0]); - const auto& rois = buildATensor(ctx, ins[1]); - const auto& argmax = buildATensor(ctx, ins[2]); - auto grad_input = buildATensor(ctx, outs[0]); - roi_pool_backward_cuda(grad_output, rois, argmax, grad_input, pooled_height, - pooled_width, spatial_scale); -} - -PARROTS_EXTENSION_REGISTER(roi_pool_forward) - .attr("pooled_height") - .attr("pooled_width") - .attr("spatial_scale") - .input(2) - .output(2) - .apply(roi_pool_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(roi_pool_backward) - .attr("pooled_height") - .attr("pooled_width") - .attr("spatial_scale") - .input(3) - .output(1) - .apply(roi_pool_backward_cuda_parrots) - .done(); -#endif diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool_pytorch.h deleted file mode 100644 index d67a1502..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roi_pool_pytorch.h +++ /dev/null @@ -1,16 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ROI_POOL_PYTORCH_H -#define ROI_POOL_PYTORCH_H -#include -using namespace at; - -#ifdef MMCV_WITH_CUDA -void roi_pool_forward_cuda(Tensor input, Tensor rois, Tensor output, - Tensor argmax, int pooled_height, int pooled_width, - float spatial_scale); - -void roi_pool_backward_cuda(Tensor grad_output, Tensor rois, Tensor argmax, - Tensor grad_input, int pooled_height, - int pooled_width, float spatial_scale); -#endif -#endif // ROI_POOL_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d.cpp deleted file mode 100644 index 6cf9cf09..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d.cpp +++ /dev/null @@ -1,72 +0,0 @@ -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void roiaware_pool3d_forward_impl(int boxes_num, int pts_num, int channels, - int max_pts_each_voxel, int out_x, int out_y, - int out_z, const Tensor rois, - const Tensor pts, const Tensor pts_feature, - Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method) { - DISPATCH_DEVICE_IMPL(roiaware_pool3d_forward_impl, boxes_num, pts_num, - channels, max_pts_each_voxel, out_x, out_y, out_z, rois, - pts, pts_feature, argmax, pts_idx_of_voxels, - pooled_features, pool_method); -} - -void roiaware_pool3d_backward_impl(int boxes_num, int out_x, int out_y, - int out_z, int channels, - int max_pts_each_voxel, - const Tensor pts_idx_of_voxels, - const Tensor argmax, const Tensor grad_out, - Tensor grad_in, int pool_method) { - DISPATCH_DEVICE_IMPL(roiaware_pool3d_backward_impl, boxes_num, out_x, out_y, - out_z, channels, max_pts_each_voxel, pts_idx_of_voxels, - argmax, grad_out, grad_in, pool_method); -} - -void roiaware_pool3d_forward(Tensor rois, Tensor pts, Tensor pts_feature, - Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method) { - // params rois: (N, 7) [x, y, z, x_size, y_size, z_size, ry] in LiDAR - // coordinate - // params pts: (npoints, 3) [x, y, z] in LiDAR coordinate - // params pts_feature: (npoints, C) - // params argmax: (N, out_x, out_y, out_z, C) - // params pts_idx_of_voxels: (N, out_x, out_y, out_z, max_pts_each_voxel) - // params pooled_features: (N, out_x, out_y, out_z, C) - // params pool_method: 0: max_pool 1: avg_pool - int boxes_num = rois.size(0); - int pts_num = pts.size(0); - int channels = pts_feature.size(1); - int max_pts_each_voxel = pts_idx_of_voxels.size(4); // index 0 is the counter - int out_x = pts_idx_of_voxels.size(1); - int out_y = pts_idx_of_voxels.size(2); - int out_z = pts_idx_of_voxels.size(3); - assert((out_x < 256) && (out_y < 256) && - (out_z < 256)); // we encode index with 8bit - - roiaware_pool3d_forward_impl(boxes_num, pts_num, channels, max_pts_each_voxel, - out_x, out_y, out_z, rois, pts, pts_feature, - argmax, pts_idx_of_voxels, pooled_features, - pool_method); -} - -void roiaware_pool3d_backward(Tensor pts_idx_of_voxels, Tensor argmax, - Tensor grad_out, Tensor grad_in, - int pool_method) { - // 
params pts_idx_of_voxels: (N, out_x, out_y, out_z, max_pts_each_voxel) - // params argmax: (N, out_x, out_y, out_z, C) - // params grad_out: (N, out_x, out_y, out_z, C) - // params grad_in: (npoints, C), return value - // params pool_method: 0: max_pool 1: avg_pool - int boxes_num = pts_idx_of_voxels.size(0); - int out_x = pts_idx_of_voxels.size(1); - int out_y = pts_idx_of_voxels.size(2); - int out_z = pts_idx_of_voxels.size(3); - int max_pts_each_voxel = pts_idx_of_voxels.size(4); // index 0 is the counter - int channels = grad_out.size(4); - - roiaware_pool3d_backward_impl(boxes_num, out_x, out_y, out_z, channels, - max_pts_each_voxel, pts_idx_of_voxels, argmax, - grad_out, grad_in, pool_method); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d_parrots.cpp deleted file mode 100644 index 771d9200..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d_parrots.cpp +++ /dev/null @@ -1,58 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "roiaware_pool3d_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void roiaware_pool3d_forward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pool_method; - SSAttrs(attr).get("pool_method", pool_method).done(); - auto rois = buildATensor(ctx, ins[0]); - auto pts = buildATensor(ctx, ins[1]); - auto pts_feature = buildATensor(ctx, ins[2]); - - auto argmax = buildATensor(ctx, outs[0]); - auto pts_idx_of_voxels = buildATensor(ctx, outs[1]); - auto pooled_features = buildATensor(ctx, outs[2]); - - roiaware_pool3d_forward(rois, pts, pts_feature, argmax, pts_idx_of_voxels, - pooled_features, pool_method); -} - -void roiaware_pool3d_backward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int pool_method; - SSAttrs(attr).get("pool_method", pool_method).done(); - auto pts_idx_of_voxels = buildATensor(ctx, ins[0]); - auto argmax = buildATensor(ctx, ins[1]); - auto grad_out = buildATensor(ctx, ins[2]); - - auto grad_in = buildATensor(ctx, outs[0]); - - roiaware_pool3d_backward(pts_idx_of_voxels, argmax, grad_out, grad_in, - pool_method); -} - -PARROTS_EXTENSION_REGISTER(roiaware_pool3d_forward) - .attr("pool_method") - .input(3) - .output(3) - .apply(roiaware_pool3d_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(roiaware_pool3d_backward) - .attr("pool_method") - .input(3) - .output(1) - .apply(roiaware_pool3d_backward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d_pytorch.h deleted file mode 100644 index 0b4b0402..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roiaware_pool3d_pytorch.h +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef ROIAWARE_POOL3D_PYTORCH_H -#define ROIAWARE_POOL3D_PYTORCH_H -#include -using namespace at; - -void roiaware_pool3d_forward(Tensor rois, Tensor pts, Tensor pts_feature, - Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method); - -void roiaware_pool3d_backward(Tensor pts_idx_of_voxels, Tensor argmax, - Tensor grad_out, Tensor grad_in, int pool_method); - -#endif // ROIAWARE_POOL3D_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d.cpp deleted file mode 100644 index a10080b7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d.cpp +++ /dev/null @@ -1,39 +0,0 @@ -/* -Modified from -https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/roipoint_pool3d/src/roipoint_pool3d.cpp -Point cloud feature pooling -Written by Shaoshuai Shi -All Rights Reserved 2018. -*/ - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void roipoint_pool3d_forward_impl(int batch_size, int pts_num, int boxes_num, - int feature_in_len, int sampled_pts_num, - const Tensor xyz, const Tensor boxes3d, - const Tensor pts_feature, - Tensor pooled_features, - Tensor pooled_empty_flag) { - DISPATCH_DEVICE_IMPL(roipoint_pool3d_forward_impl, batch_size, pts_num, - boxes_num, feature_in_len, sampled_pts_num, xyz, boxes3d, - pts_feature, pooled_features, pooled_empty_flag); -} - -void roipoint_pool3d_forward(Tensor xyz, Tensor boxes3d, Tensor pts_feature, - Tensor pooled_features, Tensor pooled_empty_flag) { - // params xyz: (B, N, 3) - // params boxes3d: (B, M, 7) - // params pts_feature: (B, N, C) - // params pooled_features: (B, M, 512, 3+C) - // params pooled_empty_flag: (B, M) - int batch_size = xyz.size(0); - int pts_num = xyz.size(1); - int boxes_num = boxes3d.size(1); - int feature_in_len = pts_feature.size(2); - int sampled_pts_num = pooled_features.size(2); - - roipoint_pool3d_forward_impl(batch_size, pts_num, boxes_num, feature_in_len, - sampled_pts_num, xyz, boxes3d, pts_feature, - pooled_features, pooled_empty_flag); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d_parrots.cpp deleted file mode 100644 index 17f54984..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d_parrots.cpp +++ /dev/null @@ -1,31 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "roipoint_pool3d_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void roipoint_pool3d_forward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - auto xyz = buildATensor(ctx, ins[0]); - auto boxes3d = buildATensor(ctx, ins[1]); - auto pts_feature = buildATensor(ctx, ins[2]); - - auto pooled_features = buildATensor(ctx, outs[0]); - auto pooled_empty_flag = buildATensor(ctx, outs[1]); - - roipoint_pool3d_forward(xyz, boxes3d, pts_feature, pooled_features, - pooled_empty_flag); -} - -PARROTS_EXTENSION_REGISTER(roipoint_pool3d_forward) - .input(3) - .output(2) - .apply(roipoint_pool3d_forward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d_pytorch.h deleted file mode 100644 index e5b61b0d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/roipoint_pool3d_pytorch.h +++ /dev/null @@ -1,10 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ROIPOINT_POOL3D_PYTORCH_H -#define ROIPOINT_POOL3D_PYTORCH_H -#include -using namespace at; - -void roipoint_pool3d_forward(Tensor xyz, Tensor boxes3d, Tensor pts_feature, - Tensor pooled_features, Tensor pooled_empty_flag); - -#endif // ROIPOINT_POOL3D_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align.cpp deleted file mode 100644 index 71fe0c9a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align.cpp +++ /dev/null @@ -1,39 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-// Modified from -// https://github.com/SJTU-Thinklab-Det/r3det-on-mmdetection/blob/master/mmdet/ops/fr/src/feature_refine_cuda.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void rotated_feature_align_forward_impl(const Tensor features, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor output) { - DISPATCH_DEVICE_IMPL(rotated_feature_align_forward_impl, features, - best_bboxes, spatial_scale, points, output); -} - -void rotated_feature_align_backward_impl(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor bottom_grad) { - DISPATCH_DEVICE_IMPL(rotated_feature_align_backward_impl, top_grad, - best_bboxes, spatial_scale, points, bottom_grad); -} - -void rotated_feature_align_forward(const Tensor features, - const Tensor best_bboxes, Tensor output, - const float spatial_scale, - const int points) { - rotated_feature_align_forward_impl(features, best_bboxes, spatial_scale, - points, output); -} - -void rotated_feature_align_backward(const Tensor top_grad, - const Tensor best_bboxes, - Tensor bottom_grad, - const float spatial_scale, - const int points) { - rotated_feature_align_backward_impl(top_grad, best_bboxes, spatial_scale, - points, bottom_grad); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align_parrots.cpp deleted file mode 100644 index ad11a9d2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align_parrots.cpp +++ /dev/null @@ -1,99 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "rotated_feature_align_pytorch.h" -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void rotated_feature_align_forward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float spatial_scale; - int points; - SSAttrs(attr) - .get("spatial_scale", spatial_scale) - .get("points", points) - .done(); - - auto features = buildATensor(ctx, ins[0]); - auto best_bboxes = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - rotated_feature_align_forward(features, best_bboxes, output, spatial_scale, - points); -} - -void rotated_feature_align_backward_cuda_parrots( - CudaContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float spatial_scale; - int points; - SSAttrs(attr) - .get("spatial_scale", spatial_scale) - .get("points", points) - .done(); - - auto grad_output = buildATensor(ctx, ins[0]); - auto best_bboxes = buildATensor(ctx, ins[1]); - auto grad_input = buildATensor(ctx, outs[0]); - rotated_feature_align_backward(grad_output, best_bboxes, grad_input, - spatial_scale, points); -} - -void rotated_feature_align_forward_cpu_parrots( - HostContext& ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float spatial_scale; - int points; - SSAttrs(attr) - .get("spatial_scale", spatial_scale) - .get("points", points) - .done(); - - auto features = buildATensor(ctx, ins[0]); - auto best_bboxes = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - rotated_feature_align_forward(features, best_bboxes, output, spatial_scale, - points); -} -#endif - -void rotated_feature_align_backward_cpu_parrots( - HostContext& 
ctx, const SSElement& attr, const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - float spatial_scale; - int points; - SSAttrs(attr) - .get("spatial_scale", spatial_scale) - .get("points", points) - .done(); - - auto grad_output = buildATensor(ctx, ins[0]); - auto best_bboxes = buildATensor(ctx, ins[1]); - auto grad_input = buildATensor(ctx, outs[0]); - rotated_feature_align_backward(grad_output, best_bboxes, grad_input, - spatial_scale, points); -} - -PARROTS_EXTENSION_REGISTER(rotated_feature_align_forward) - .attr("spatial_scale") - .attr("points") - .input(2) - .output(1) - .apply(rotated_feature_align_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(rotated_feature_align_forward_cuda_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(rotated_feature_align_backward) - .attr("spatial_scale") - .attr("points") - .input(2) - .output(1) - .apply(rotated_feature_align_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(rotated_feature_align_backward_cuda_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align_pytorch.h deleted file mode 100644 index 9a695ee5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/rotated_feature_align_pytorch.h +++ /dev/null @@ -1,17 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef ROTATED_FEATURE_ALIGN_PYTORCH_H -#define ROTATED_FEATURE_ALIGN_PYTORCH_H -#include -using namespace at; - -void rotated_feature_align_forward(const Tensor features, - const Tensor best_bboxes, Tensor output, - const float spatial_scale, const int points); - -void rotated_feature_align_backward(const Tensor top_grad, - const Tensor best_bboxes, - Tensor bottom_grad, - const float spatial_scale, - const int points); - -#endif // ROTATED_FEATURE_ALIGN_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn.cpp deleted file mode 100644 index fd5a5132..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn.cpp +++ /dev/null @@ -1,69 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void sync_bn_forward_mean_impl(const Tensor input, Tensor mean) { - DISPATCH_DEVICE_IMPL(sync_bn_forward_mean_impl, input, mean); -} - -void sync_bn_forward_var_impl(const Tensor input, const Tensor mean, - Tensor var) { - DISPATCH_DEVICE_IMPL(sync_bn_forward_var_impl, input, mean, var); -} - -void sync_bn_forward_output_impl(const Tensor input, const Tensor mean, - const Tensor var, Tensor running_mean, - Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size) { - DISPATCH_DEVICE_IMPL(sync_bn_forward_output_impl, input, mean, var, - running_mean, running_var, weight, bias, norm, std, - output, eps, momentum, group_size); -} - -void sync_bn_backward_param_impl(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias) { - DISPATCH_DEVICE_IMPL(sync_bn_backward_param_impl, grad_output, norm, - grad_weight, grad_bias); -} - -void sync_bn_backward_data_impl(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, const Tensor norm, - const Tensor std, Tensor grad_input) { - DISPATCH_DEVICE_IMPL(sync_bn_backward_data_impl, grad_output, weight, - grad_weight, grad_bias, norm, std, grad_input); -} - -void sync_bn_forward_mean(const Tensor input, Tensor mean) { - sync_bn_forward_mean_impl(input, mean); -} - -void sync_bn_forward_var(const Tensor input, const Tensor mean, Tensor var) { - sync_bn_forward_var_impl(input, mean, var); -} - -void sync_bn_forward_output(const Tensor input, const Tensor mean, - const Tensor var, const Tensor weight, - const Tensor bias, Tensor running_mean, - Tensor running_var, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size) { - sync_bn_forward_output_impl(input, mean, var, running_mean, running_var, - weight, bias, norm, std, output, eps, momentum, - group_size); -} - -void sync_bn_backward_param(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias) { - sync_bn_backward_param_impl(grad_output, norm, grad_weight, grad_bias); -} - -void sync_bn_backward_data(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, const Tensor grad_bias, - const Tensor norm, const Tensor std, - Tensor grad_input) { - sync_bn_backward_data_impl(grad_output, weight, grad_weight, grad_bias, norm, - std, grad_input); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn_parrots.cpp deleted file mode 100644 index 0b1855ab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn_parrots.cpp +++ /dev/null @@ -1,111 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "sync_bn_pytorch.h" -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void sync_bn_forward_mean_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - const auto& input = buildATensor(ctx, ins[0]); - auto mean = buildATensor(ctx, outs[0]); - sync_bn_forward_mean_cuda(input, mean); -} - -void sync_bn_forward_var_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - const auto& input = buildATensor(ctx, ins[0]); - const auto& mean = buildATensor(ctx, ins[1]); - auto var = buildATensor(ctx, outs[0]); - sync_bn_forward_var_cuda(input, mean, var); -} - -void sync_bn_forward_output_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - size_t group_size; - float eps, momentum; - SSAttrs(attr) - .get("eps", eps) - .get("momentum", momentum) - .get("group_size", group_size) - .done(); - - const auto& input = buildATensor(ctx, ins[0]); - const auto& mean = buildATensor(ctx, ins[1]); - const auto& var = buildATensor(ctx, ins[2]); - const auto& weight = buildATensor(ctx, ins[3]); - const auto& bias = buildATensor(ctx, ins[4]); - auto running_mean = buildATensor(ctx, outs[0]); - auto running_var = buildATensor(ctx, outs[1]); - auto norm = buildATensor(ctx, outs[2]); - auto std = buildATensor(ctx, outs[3]); - auto output = buildATensor(ctx, outs[4]); - sync_bn_forward_output_cuda(input, mean, var, running_mean, running_var, - weight, bias, norm, std, output, eps, momentum, - group_size); -} - -void sync_bn_backward_param_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - const auto& grad_output = buildATensor(ctx, ins[0]); - const auto& norm = buildATensor(ctx, ins[1]); - auto grad_weight = buildATensor(ctx, outs[0]); - auto grad_bias = buildATensor(ctx, outs[1]); - sync_bn_backward_param_cuda(grad_output, norm, grad_weight, grad_bias); -} - -void sync_bn_backward_data_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - const auto& grad_output = buildATensor(ctx, ins[0]); - const auto& weight = buildATensor(ctx, ins[1]); - const auto& grad_weight = buildATensor(ctx, ins[2]); - const auto& grad_bias = buildATensor(ctx, ins[3]); - const auto& norm = buildATensor(ctx, ins[4]); - const auto& std = buildATensor(ctx, ins[5]); - auto grad_input = buildATensor(ctx, outs[0]); - sync_bn_backward_data_cuda(grad_output, weight, grad_weight, grad_bias, norm, - std, grad_input); -} - -PARROTS_EXTENSION_REGISTER(sync_bn_forward_mean) - .input(1) - .output(1) - .apply(sync_bn_forward_mean_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(sync_bn_forward_var) - .input(2) - .output(1) - .apply(sync_bn_forward_var_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(sync_bn_forward_output) - .attr("eps") - .attr("momentum") - .attr("group_size") - .input(5) - .output(5) - .apply(sync_bn_forward_output_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(sync_bn_backward_param) - .input(2) - .output(2) - .apply(sync_bn_backward_param_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(sync_bn_backward_data) - .input(6) - .output(1) - .apply(sync_bn_backward_data_cuda_parrots) - .done(); -#endif diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn_pytorch.h deleted file mode 100644 index 6bd6a7fa..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/sync_bn_pytorch.h +++ /dev/null @@ -1,26 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef SYNC_BN_PYTORCH_H -#define SYNC_BN_PYTORCH_H -#include -using namespace at; - -void sync_bn_forward_mean_cuda(const Tensor input, Tensor mean); - -void sync_bn_forward_var_cuda(const Tensor input, const Tensor mean, - Tensor var); - -void sync_bn_forward_output_cuda(const Tensor input, const Tensor mean, - const Tensor var, Tensor running_mean, - Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size); - -void sync_bn_backward_param_cuda(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias); - -void sync_bn_backward_data_cuda(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, const Tensor norm, - const Tensor std, Tensor grad_input); -#endif // SYNC_BN_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate.cpp deleted file mode 100644 index 1e0ec71b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate.cpp +++ /dev/null @@ -1,33 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/interpolate.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void three_interpolate_forward_impl(int b, int c, int m, int n, - const Tensor points, const Tensor idx, - const Tensor weight, Tensor out) { - DISPATCH_DEVICE_IMPL(three_interpolate_forward_impl, b, c, m, n, points, idx, - weight, out); -} - -void three_interpolate_backward_impl(int b, int c, int n, int m, - const Tensor grad_out, const Tensor idx, - const Tensor weight, Tensor grad_points) { - DISPATCH_DEVICE_IMPL(three_interpolate_backward_impl, b, c, n, m, grad_out, - idx, weight, grad_points); -} - -void three_interpolate_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor weight_tensor, Tensor out_tensor, int b, - int c, int m, int n) { - three_interpolate_forward_impl(b, c, m, n, points_tensor, idx_tensor, - weight_tensor, out_tensor); -} - -void three_interpolate_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor weight_tensor, Tensor grad_points_tensor, - int b, int c, int n, int m) { - three_interpolate_backward_impl(b, c, n, m, grad_out_tensor, idx_tensor, - weight_tensor, grad_points_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate_parrots.cpp deleted file mode 100644 index a71a90fd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate_parrots.cpp +++ /dev/null @@ -1,74 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include - -#include "three_interpolate_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void three_interpolate_forward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, c, m, n; - SSAttrs(attr) - .get("b", b) - .get("c", c) - .get("m", m) - .get("n", n) - .done(); - - auto points_tensor = buildATensor(ctx, ins[0]); - auto idx_tensor = buildATensor(ctx, ins[1]); - auto weight_tensor = buildATensor(ctx, ins[2]); - - auto out_tensor = buildATensor(ctx, outs[0]); - - three_interpolate_forward(points_tensor, idx_tensor, weight_tensor, - out_tensor, b, c, m, n); -} - -void three_interpolate_backward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, c, n, m; - SSAttrs(attr) - .get("b", b) - .get("c", c) - .get("n", n) - .get("m", m) - .done(); - - auto grad_out_tensor = buildATensor(ctx, ins[0]); - auto idx_tensor = buildATensor(ctx, ins[1]); - auto weight_tensor = buildATensor(ctx, ins[2]); - - auto grad_points_tensor = buildATensor(ctx, outs[0]); - - three_interpolate_backward(grad_out_tensor, idx_tensor, weight_tensor, - grad_points_tensor, b, c, n, m); -} - -PARROTS_EXTENSION_REGISTER(three_interpolate_forward) - .attr("b") - .attr("c") - .attr("m") - .attr("n") - .input(3) - .output(1) - .apply(three_interpolate_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(three_interpolate_backward) - .attr("b") - .attr("c") - .attr("n") - .attr("m") - .input(3) - .output(1) - .apply(three_interpolate_backward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate_pytorch.h deleted file mode 100644 index 464c6d90..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_interpolate_pytorch.h +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef THREE_INTERPOLATE_PYTORCH_H -#define THREE_INTERPOLATE_PYTORCH_H -#include -using namespace at; - -void three_interpolate_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor weight_tensor, Tensor out_tensor, int b, - int c, int m, int n); - -void three_interpolate_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor weight_tensor, Tensor grad_points_tensor, - int b, int c, int n, int m); -#endif // THREE_INTERPOLATE_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn.cpp deleted file mode 100644 index b629200c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn.cpp +++ /dev/null @@ -1,18 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/interpolate.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void three_nn_forward_impl(int b, int n, int m, const Tensor unknown, - const Tensor known, Tensor dist2, Tensor idx) { - DISPATCH_DEVICE_IMPL(three_nn_forward_impl, b, n, m, unknown, known, dist2, - idx); -} - -void three_nn_forward(Tensor unknown_tensor, Tensor known_tensor, - Tensor dist2_tensor, Tensor idx_tensor, int b, int n, - int m) { - three_nn_forward_impl(b, n, m, unknown_tensor, known_tensor, dist2_tensor, - idx_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn_parrots.cpp deleted file mode 100644 index c28c7d21..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn_parrots.cpp +++ /dev/null @@ -1,35 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "three_nn_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void three_nn_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int b, n, m; - SSAttrs(attr).get("b", b).get("n", n).get("m", m).done(); - - auto unknown_tensor = buildATensor(ctx, ins[0]); - auto known_tensor = buildATensor(ctx, ins[1]); - - auto dist2_tensor = buildATensor(ctx, outs[0]); - auto idx_tensor = buildATensor(ctx, outs[1]); - - three_nn_forward(unknown_tensor, known_tensor, dist2_tensor, idx_tensor, b, n, - m); -} - -PARROTS_EXTENSION_REGISTER(three_nn_forward) - .attr("b") - .attr("n") - .attr("m") - .input(2) - .output(2) - .apply(three_nn_forward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn_pytorch.h deleted file mode 100644 index 6574fba0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/three_nn_pytorch.h +++ /dev/null @@ -1,10 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef THREE_NN_PYTORCH_H -#define THREE_NN_PYTORCH_H -#include -using namespace at; - -void three_nn_forward(Tensor unknown_tensor, Tensor known_tensor, - Tensor dist2_tensor, Tensor idx_tensor, int b, int n, - int m); -#endif // THREE_NN_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift.cpp deleted file mode 100644 index b03f5875..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift.cpp +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void tin_shift_forward_impl(Tensor input, Tensor shift, Tensor output) { - DISPATCH_DEVICE_IMPL(tin_shift_forward_impl, input, shift, output); -} - -void tin_shift_backward_impl(Tensor grad_output, Tensor shift, - Tensor grad_input) { - DISPATCH_DEVICE_IMPL(tin_shift_backward_impl, grad_output, shift, grad_input); -} - -void tin_shift_forward(Tensor input, Tensor shift, Tensor output) { - tin_shift_forward_impl(input, shift, output); -} - -void tin_shift_backward(Tensor grad_output, Tensor shift, Tensor grad_input) { - tin_shift_backward_impl(grad_output, shift, grad_input); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift_parrots.cpp deleted file mode 100644 index b0920928..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift_parrots.cpp +++ /dev/null @@ -1,39 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "tin_shift_pytorch.h" -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void tin_shift_forward_cuda_parrots(CudaContext &ctx, const SSElement &attr, - const OperatorBase::in_list_t &ins, - OperatorBase::out_list_t &outs) { - const auto &input = buildATensor(ctx, ins[0]); - const auto &shift = buildATensor(ctx, ins[1]); - auto output = buildATensor(ctx, outs[0]); - tin_shift_forward_cuda(input, shift, output); -} - -void tin_shift_backward_cuda_parrots(CudaContext &ctx, const SSElement &attr, - const OperatorBase::in_list_t &ins, - OperatorBase::out_list_t &outs) { - const auto &grad_output = buildATensor(ctx, ins[0]); - const auto &shift = buildATensor(ctx, ins[1]); - auto grad_input = buildATensor(ctx, outs[0]); - tin_shift_backward_cuda(grad_output, shift, grad_input); -} - -PARROTS_EXTENSION_REGISTER(tin_shift_forward) - .input(2) - .output(1) - .apply(tin_shift_forward_cuda_parrots) - .done(); - -PARROTS_EXTENSION_REGISTER(tin_shift_backward) - .input(2) - .output(1) - .apply(tin_shift_backward_cuda_parrots) - .done(); -#endif diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift_pytorch.h deleted file mode 100644 index fe723837..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/tin_shift_pytorch.h +++ /dev/null @@ -1,11 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef TIN_SHIFT_PYTORCH_H -#define TIN_SHIFT_PYTORCH_H -#include -using namespace at; - -void tin_shift_forward_cuda(Tensor input, Tensor shift, Tensor output); - -void tin_shift_backward_cuda(Tensor grad_output, Tensor shift, - Tensor grad_input); -#endif // TIN_SHIFT_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/upfirdn2d.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/upfirdn2d.cpp deleted file mode 100644 index dd325bd7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/upfirdn2d.cpp +++ /dev/null @@ -1,118 +0,0 @@ -// Modified from -// https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.cpp - -/* -Copyright (c) 2021, NVIDIA Corporation. All rights reserved. - -NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -Augmentation (ADA) -======================================================================= - -1. Definitions - -"Licensor" means any person or entity that distributes its Work. - -"Software" means the original work of authorship made available under -this License. - -"Work" means the Software and any additions to or derivative works of -the Software that are made available under this License. - -The terms "reproduce," "reproduction," "derivative works," and -"distribution" have the meaning as provided under U.S. copyright law; -provided, however, that for the purposes of this License, derivative -works shall not include works that remain separable from, or merely -link (or bind by name) to the interfaces of, the Work. - -Works, including the Software, are "made available" under this License -by including in or with the Work either (a) a copyright notice -referencing the applicability of this License to the Work, or (b) a -copy of this License. - -2. License Grants - - 2.1 Copyright Grant. Subject to the terms and conditions of this - License, each Licensor grants to you a perpetual, worldwide, - non-exclusive, royalty-free, copyright license to reproduce, - prepare derivative works of, publicly display, publicly perform, - sublicense and distribute its Work and any resulting derivative - works in any form. - -3. Limitations - - 3.1 Redistribution. You may reproduce or distribute the Work only - if (a) you do so under this License, (b) you include a complete - copy of this License with your distribution, and (c) you retain - without modification any copyright, patent, trademark, or - attribution notices that are present in the Work. - - 3.2 Derivative Works. You may specify that additional or different - terms apply to the use, reproduction, and distribution of your - derivative works of the Work ("Your Terms") only if (a) Your Terms - provide that the use limitation in Section 3.3 applies to your - derivative works, and (b) you identify the specific derivative - works that are subject to Your Terms. Notwithstanding Your Terms, - this License (including the redistribution requirements in Section - 3.1) will continue to apply to the Work itself. - - 3.3 Use Limitation. The Work and any derivative works thereof only - may be used or intended for use non-commercially. Notwithstanding - the foregoing, NVIDIA and its affiliates may use the Work and any - derivative works commercially. As used herein, "non-commercially" - means for research or evaluation purposes only. - - 3.4 Patent Claims. 
If you bring or threaten to bring a patent claim - against any Licensor (including any claim, cross-claim or - counterclaim in a lawsuit) to enforce any patents that you allege - are infringed by any Work, then your rights under this License from - such Licensor (including the grant in Section 2.1) will terminate - immediately. - - 3.5 Trademarks. This License does not grant any rights to use any - Licensor’s or its affiliates’ names, logos, or trademarks, except - as necessary to reproduce the notices described in this License. - - 3.6 Termination. If you violate any term of this License, then your - rights under this License (including the grant in Section 2.1) will - terminate immediately. - -4. Disclaimer of Warranty. - -THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -THIS LICENSE. - -5. Limitation of Liability. - -EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -(INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -THE POSSIBILITY OF SUCH DAMAGES. - -======================================================================= -*/ - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -torch::Tensor upfirdn2d_op_impl(const torch::Tensor& input, - const torch::Tensor& kernel, int up_x, int up_y, - int down_x, int down_y, int pad_x0, int pad_x1, - int pad_y0, int pad_y1) { - return DISPATCH_DEVICE_IMPL(upfirdn2d_op_impl, input, kernel, up_x, up_y, - down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, int pad_x0, - int pad_x1, int pad_y0, int pad_y1) { - return upfirdn2d_op_impl(input, kernel, up_x, up_y, down_x, down_y, pad_x0, - pad_x1, pad_y0, pad_y1); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/upfirdn2d_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/upfirdn2d_parrots.cpp deleted file mode 100644 index f0c50db5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/upfirdn2d_parrots.cpp +++ /dev/null @@ -1,47 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include - -#include -#include -#include -using namespace at; -using namespace parrots; - -torch::Tensor upfirdn2d(const Tensor &input, const Tensor &kernel, int up_x, - int up_y, int down_x, int down_y, int pad_x0, - int pad_x1, int pad_y0, int pad_y1); - -void upfirdn2d_parrots(CudaContext &ctx, const SSElement &attr, - const OperatorBase::in_list_t &ins, - OperatorBase::out_list_t &outs) { - int up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1; - const auto &input = buildATensor(ctx, ins[0]); - const auto &kernel = buildATensor(ctx, ins[1]); - SSAttrs(attr) - .get("up_x", up_x) - .get("up_y", up_y) - .get("down_x", down_x) - .get("down_y", down_y) - .get("pad_x0", pad_x0) - .get("pad_x1", pad_x1) - .get("pad_y0", pad_y0) - .get("pad_y1", pad_y1) - .done(); - auto out = upfirdn2d(input, kernel, up_x, up_y, down_x, down_y, pad_x0, - pad_x1, pad_y0, pad_y1); - updateDArray(ctx, out, outs[0]); -} - -PARROTS_EXTENSION_REGISTER(upfirdn2d) - .attr("up_x") - .attr("up_y") - .attr("down_x") - .attr("down_y") - .attr("pad_x0") - .attr("pad_x1") - .attr("pad_y0") - .attr("pad_y1") - .input(2) - .output(1) - .apply(upfirdn2d_parrots) - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization.cpp deleted file mode 100644 index 7946be61..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization.cpp +++ /dev/null @@ -1,74 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -int hard_voxelize_forward_impl(const at::Tensor &points, at::Tensor &voxels, - at::Tensor &coors, - at::Tensor &num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim = 3) { - return DISPATCH_DEVICE_IMPL(hard_voxelize_forward_impl, points, voxels, coors, - num_points_per_voxel, voxel_size, coors_range, - max_points, max_voxels, NDim); -} - -int nondeterministic_hard_voxelize_forward_impl( - const at::Tensor &points, at::Tensor &voxels, at::Tensor &coors, - at::Tensor &num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim = 3) { - return DISPATCH_DEVICE_IMPL(nondeterministic_hard_voxelize_forward_impl, - points, voxels, coors, num_points_per_voxel, - voxel_size, coors_range, max_points, max_voxels, - NDim); -} - -void dynamic_voxelize_forward_impl(const at::Tensor &points, at::Tensor &coors, - const std::vector voxel_size, - const std::vector coors_range, - const int NDim = 3) { - DISPATCH_DEVICE_IMPL(dynamic_voxelize_forward_impl, points, coors, voxel_size, - coors_range, NDim); -} - -void hard_voxelize_forward(const at::Tensor &points, - const at::Tensor &voxel_size, - const at::Tensor &coors_range, at::Tensor &voxels, - at::Tensor &coors, at::Tensor &num_points_per_voxel, - at::Tensor &voxel_num, const int max_points, - const int max_voxels, const int NDim = 3, - const bool deterministic = true) { - int64_t *voxel_num_data = voxel_num.data_ptr(); - std::vector voxel_size_v( - voxel_size.data_ptr(), - voxel_size.data_ptr() + voxel_size.numel()); - std::vector coors_range_v( - coors_range.data_ptr(), - coors_range.data_ptr() + coors_range.numel()); - - if (deterministic) { - *voxel_num_data = hard_voxelize_forward_impl( - points, voxels, coors, 
num_points_per_voxel, voxel_size_v, - coors_range_v, max_points, max_voxels, NDim); - } else { - *voxel_num_data = nondeterministic_hard_voxelize_forward_impl( - points, voxels, coors, num_points_per_voxel, voxel_size_v, - coors_range_v, max_points, max_voxels, NDim); - } -} - -void dynamic_voxelize_forward(const at::Tensor &points, - const at::Tensor &voxel_size, - const at::Tensor &coors_range, at::Tensor &coors, - const int NDim = 3) { - std::vector voxel_size_v( - voxel_size.data_ptr(), - voxel_size.data_ptr() + voxel_size.numel()); - std::vector coors_range_v( - coors_range.data_ptr(), - coors_range.data_ptr() + coors_range.numel()); - dynamic_voxelize_forward_impl(points, coors, voxel_size_v, coors_range_v, - NDim); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization_parrots.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization_parrots.cpp deleted file mode 100644 index 90e2a444..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization_parrots.cpp +++ /dev/null @@ -1,113 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include -#include -#include - -#include "voxelization_pytorch.h" - -using namespace parrots; - -#ifdef MMCV_WITH_CUDA -void hard_voxelize_forward_cuda_parrots(CudaContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int max_points, max_voxels, NDim; - bool deterministic; - SSAttrs(attr) - .get("max_points", max_points) - .get("max_voxels", max_voxels) - .get("NDim", NDim) - .get("deterministic", deterministic) - .done(); - const auto& points = buildATensor(ctx, ins[0]); - const auto& voxel_size = buildATensor(ctx, ins[1]); - const auto& coors_range = buildATensor(ctx, ins[2]); - - auto voxels = buildATensor(ctx, outs[0]); - auto coors = buildATensor(ctx, outs[1]); - auto num_points_per_voxel = buildATensor(ctx, outs[2]); - auto voxel_num = buildATensor(ctx, outs[3]); - - hard_voxelize_forward(points, voxel_size, coors_range, voxels, coors, - num_points_per_voxel, voxel_num, max_points, max_voxels, - NDim, deterministic); -} - -void dynamic_voxelize_forward_cuda_parrots(CudaContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int NDim; - SSAttrs(attr).get("NDim", NDim).done(); - const auto& points = buildATensor(ctx, ins[0]); - const auto& voxel_size = buildATensor(ctx, ins[1]); - const auto& coors_range = buildATensor(ctx, ins[2]); - - auto coors = buildATensor(ctx, outs[0]); - - dynamic_voxelize_forward(points, voxel_size, coors_range, coors, NDim); -} -#endif - -void hard_voxelize_forward_cpu_parrots(HostContext& ctx, const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int max_points, max_voxels, NDim; - bool deterministic; - SSAttrs(attr) - .get("max_points", max_points) - .get("max_voxels", max_voxels) - .get("NDim", NDim) - .get("deterministic", deterministic) - .done(); - const auto& points = buildATensor(ctx, ins[0]); - const auto& voxel_size = buildATensor(ctx, ins[1]); - const auto& coors_range = buildATensor(ctx, ins[2]); - - auto voxels = buildATensor(ctx, outs[0]); - auto coors = buildATensor(ctx, outs[1]); - auto num_points_per_voxel = buildATensor(ctx, outs[2]); - auto voxel_num = buildATensor(ctx, outs[3]); - - hard_voxelize_forward(points, voxel_size, coors_range, voxels, coors, - num_points_per_voxel, voxel_num, max_points, max_voxels, - NDim, 
deterministic); -} - -void dynamic_voxelize_forward_cpu_parrots(HostContext& ctx, - const SSElement& attr, - const OperatorBase::in_list_t& ins, - OperatorBase::out_list_t& outs) { - int NDim; - SSAttrs(attr).get("NDim", NDim).done(); - const auto& points = buildATensor(ctx, ins[0]); - const auto& voxel_size = buildATensor(ctx, ins[1]); - const auto& coors_range = buildATensor(ctx, ins[2]); - - auto coors = buildATensor(ctx, outs[0]); - - dynamic_voxelize_forward(points, voxel_size, coors_range, coors, NDim); -} - -PARROTS_EXTENSION_REGISTER(hard_voxelize_forward) - .attr("max_points") - .attr("max_voxels") - .attr("NDim") - .attr("deterministic") - .input(3) - .output(4) - .apply(hard_voxelize_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(hard_voxelize_forward_cuda_parrots) -#endif - .done(); - -PARROTS_EXTENSION_REGISTER(dynamic_voxelize_forward) - .attr("NDim") - .input(3) - .output(1) - .apply(dynamic_voxelize_forward_cpu_parrots) -#ifdef MMCV_WITH_CUDA - .apply(dynamic_voxelize_forward_cuda_parrots) -#endif - .done(); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization_pytorch.h b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization_pytorch.h deleted file mode 100644 index 0019d519..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/parrots/voxelization_pytorch.h +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef VOXELIZATION_PYTORCH_H -#define VOXELIZATION_PYTORCH_H -#include -using namespace at; - -void hard_voxelize_forward(const at::Tensor &points, - const at::Tensor &voxel_size, - const at::Tensor &coors_range, at::Tensor &voxels, - at::Tensor &coors, at::Tensor &num_points_per_voxel, - at::Tensor &voxel_num, const int max_points, - const int max_voxels, const int NDim = 3, - const bool deterministic = true); - -void dynamic_voxelize_forward(const at::Tensor &points, - const at::Tensor &voxel_size, - const at::Tensor &coors_range, at::Tensor &coors, - const int NDim = 3); - -#endif // VOXELIZATION_PYTORCH_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/active_rotated_filter.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/active_rotated_filter.cpp deleted file mode 100644 index e1ead1f8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/active_rotated_filter.cpp +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-// Modified from -// https://github.com/csuhan/s2anet/blob/master/mmdet/ops/orn/src/ActiveRotatingFilter.h - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void active_rotated_filter_forward_impl(const Tensor input, - const Tensor indices, Tensor output) { - DISPATCH_DEVICE_IMPL(active_rotated_filter_forward_impl, input, indices, - output); -} - -void active_rotated_filter_backward_impl(const Tensor grad_out, - const Tensor indices, Tensor grad_in) { - DISPATCH_DEVICE_IMPL(active_rotated_filter_backward_impl, grad_out, indices, - grad_in); -} - -void active_rotated_filter_forward(const Tensor input, const Tensor indices, - Tensor output) { - active_rotated_filter_forward_impl(input, indices, output); -} - -void active_rotated_filter_backward(const Tensor grad_out, const Tensor indices, - Tensor grad_in) { - active_rotated_filter_backward_impl(grad_out, indices, grad_in); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/assign_score_withk.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/assign_score_withk.cpp deleted file mode 100644 index 90762771..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/assign_score_withk.cpp +++ /dev/null @@ -1,42 +0,0 @@ -// Modified from -// https://github.com/CVMI-Lab/PAConv/tree/main/scene_seg/lib/paconv_lib/src/gpu -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void assign_score_withk_forward_impl(int B, int N0, int N1, int M, int K, int O, - int aggregate, const Tensor& points, - const Tensor& centers, - const Tensor& scores, - const Tensor& knn_idx, Tensor& output) { - DISPATCH_DEVICE_IMPL(assign_score_withk_forward_impl, B, N0, N1, M, K, O, - aggregate, points, centers, scores, knn_idx, output); -} - -void assign_score_withk_backward_impl( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& grad_out, const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores) { - DISPATCH_DEVICE_IMPL(assign_score_withk_backward_impl, B, N0, N1, M, K, O, - aggregate, grad_out, points, centers, scores, knn_idx, - grad_points, grad_centers, grad_scores); -} - -void assign_score_withk_forward(const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, - Tensor& output, int B, int N0, int N1, int M, - int K, int O, int aggregate) { - assign_score_withk_forward_impl(B, N0, N1, M, K, O, aggregate, points, - centers, scores, knn_idx, output); -} - -void assign_score_withk_backward(const Tensor& grad_out, const Tensor& points, - const Tensor& centers, const Tensor& scores, - const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores, - int B, int N0, int N1, int M, int K, int O, - int aggregate) { - assign_score_withk_backward_impl(B, N0, N1, M, K, O, aggregate, grad_out, - points, centers, scores, knn_idx, - grad_points, grad_centers, grad_scores); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/ball_query.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/ball_query.cpp deleted file mode 100644 index 1c9e7a20..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/ball_query.cpp +++ /dev/null @@ -1,20 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/ball_query.cpp - -#include 
"pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void ball_query_forward_impl(int b, int n, int m, float min_radius, - float max_radius, int nsample, - const Tensor new_xyz, const Tensor xyz, - Tensor idx) { - DISPATCH_DEVICE_IMPL(ball_query_forward_impl, b, n, m, min_radius, max_radius, - nsample, new_xyz, xyz, idx); -} - -void ball_query_forward(Tensor new_xyz_tensor, Tensor xyz_tensor, - Tensor idx_tensor, int b, int n, int m, - float min_radius, float max_radius, int nsample) { - ball_query_forward_impl(b, n, m, min_radius, max_radius, nsample, - new_xyz_tensor, xyz_tensor, idx_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/bbox_overlaps.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/bbox_overlaps.cpp deleted file mode 100644 index 187216fb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/bbox_overlaps.cpp +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void bbox_overlaps_impl(const Tensor bboxes1, const Tensor bboxes2, Tensor ious, - const int mode, const bool aligned, const int offset) { - DISPATCH_DEVICE_IMPL(bbox_overlaps_impl, bboxes1, bboxes2, ious, mode, - aligned, offset); -} - -void bbox_overlaps(const Tensor bboxes1, const Tensor bboxes2, Tensor ious, - const int mode, const bool aligned, const int offset) { - bbox_overlaps_impl(bboxes1, bboxes2, ious, mode, aligned, offset); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/border_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/border_align.cpp deleted file mode 100644 index 565de689..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/border_align.cpp +++ /dev/null @@ -1,30 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void border_align_forward_impl(const Tensor &input, const Tensor &boxes, - Tensor output, Tensor argmax_idx, - const int pool_size) { - DISPATCH_DEVICE_IMPL(border_align_forward_impl, input, boxes, output, - argmax_idx, pool_size); -} - -void border_align_backward_impl(const Tensor &grad_output, const Tensor &boxes, - const Tensor &argmax_idx, Tensor grad_input, - const int pool_size) { - DISPATCH_DEVICE_IMPL(border_align_backward_impl, grad_output, boxes, - argmax_idx, grad_input, pool_size); -} - -void border_align_forward(const Tensor &input, const Tensor &boxes, - Tensor output, Tensor argmax_idx, - const int pool_size) { - border_align_forward_impl(input, boxes, output, argmax_idx, pool_size); -} - -void border_align_backward(const Tensor &grad_output, const Tensor &boxes, - const Tensor &argmax_idx, Tensor grad_input, - const int pool_size) { - border_align_backward_impl(grad_output, boxes, argmax_idx, grad_input, - pool_size); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/box_iou_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/box_iou_rotated.cpp deleted file mode 100644 index a2a4e095..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/box_iou_rotated.cpp +++ /dev/null @@ -1,19 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void box_iou_rotated_impl(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned) { - DISPATCH_DEVICE_IMPL(box_iou_rotated_impl, boxes1, boxes2, ious, mode_flag, - aligned); -} - -// Interface for Python -// inline is needed to prevent multiple function definitions when this header is -// included by different cpps -void box_iou_rotated(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned) { - box_iou_rotated_impl(boxes1, boxes2, ious, mode_flag, aligned); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/carafe.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/carafe.cpp deleted file mode 100644 index a563aed9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/carafe.cpp +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void carafe_forward_impl(Tensor features, Tensor masks, Tensor rfeatures, - Tensor routput, Tensor rmasks, Tensor output, - int kernel_size, int group_size, int scale_factor) { - DISPATCH_DEVICE_IMPL(carafe_forward_impl, features, masks, rfeatures, routput, - rmasks, output, kernel_size, group_size, scale_factor); -} - -void carafe_backward_impl(Tensor top_grad, Tensor rfeatures, Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, - Tensor rbottom_grad, Tensor rmask_grad, - Tensor bottom_grad, Tensor mask_grad, int kernel_size, - int group_size, int scale_factor) { - DISPATCH_DEVICE_IMPL(carafe_backward_impl, top_grad, rfeatures, masks, - rtop_grad, rbottom_grad_hs, rbottom_grad, rmask_grad, - bottom_grad, mask_grad, kernel_size, group_size, - scale_factor); -} - -void carafe_forward(Tensor features, Tensor masks, Tensor rfeatures, - Tensor routput, Tensor rmasks, Tensor output, - int kernel_size, int group_size, int scale_factor) { - carafe_forward_impl(features, masks, rfeatures, routput, rmasks, output, - kernel_size, group_size, scale_factor); -} - -void carafe_backward(Tensor top_grad, Tensor rfeatures, Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, - Tensor rbottom_grad, Tensor rmask_grad, Tensor bottom_grad, - Tensor mask_grad, int kernel_size, int group_size, - int scale_factor) { - carafe_backward_impl(top_grad, rfeatures, masks, rtop_grad, rbottom_grad_hs, - rbottom_grad, rmask_grad, bottom_grad, mask_grad, - kernel_size, group_size, scale_factor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/carafe_naive.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/carafe_naive.cpp deleted file mode 100644 index 6e8917a6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/carafe_naive.cpp +++ /dev/null @@ -1,32 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void carafe_naive_forward_impl(Tensor features, Tensor masks, Tensor output, - int kernel_size, int group_size, - int scale_factor) { - DISPATCH_DEVICE_IMPL(carafe_naive_forward_impl, features, masks, output, - kernel_size, group_size, scale_factor); -} - -void carafe_naive_backward_impl(Tensor top_grad, Tensor features, Tensor masks, - Tensor bottom_grad, Tensor mask_grad, - int kernel_size, int group_size, - int scale_factor) { - DISPATCH_DEVICE_IMPL(carafe_naive_backward_impl, top_grad, features, masks, - bottom_grad, mask_grad, kernel_size, group_size, - scale_factor); -} - -void carafe_naive_forward(Tensor features, Tensor masks, Tensor output, - int kernel_size, int group_size, int scale_factor) { - carafe_naive_forward_impl(features, masks, output, kernel_size, group_size, - scale_factor); -} - -void carafe_naive_backward(Tensor top_grad, Tensor features, Tensor masks, - Tensor bottom_grad, Tensor mask_grad, - int kernel_size, int group_size, int scale_factor) { - carafe_naive_backward_impl(top_grad, features, masks, bottom_grad, mask_grad, - kernel_size, group_size, scale_factor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/chamfer_distance.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/chamfer_distance.cpp deleted file mode 100644 index 6ea1ba67..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/chamfer_distance.cpp +++ /dev/null @@ -1,35 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -// Modified from -// https://github.com/chrdiller/pyTorchChamferDistance/blob/master/chamfer_distance/chamfer_distance.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void chamfer_distance_forward_impl(const Tensor xyz1, const Tensor xyz2, - const Tensor dist1, const Tensor dist2, - const Tensor idx1, const Tensor idx2) { - DISPATCH_DEVICE_IMPL(chamfer_distance_forward_impl, xyz1, xyz2, dist1, dist2, - idx1, idx2); -} - -void chamfer_distance_backward_impl(const Tensor xyz1, const Tensor xyz2, - Tensor gradxyz1, Tensor gradxyz2, - Tensor graddist1, Tensor graddist2, - Tensor idx1, Tensor idx2) { - DISPATCH_DEVICE_IMPL(chamfer_distance_backward_impl, xyz1, xyz2, gradxyz1, - gradxyz2, graddist1, graddist2, idx1, idx2); -} - -void chamfer_distance_forward(const Tensor xyz1, const Tensor xyz2, - const Tensor dist1, const Tensor dist2, - const Tensor idx1, const Tensor idx2) { - chamfer_distance_forward_impl(xyz1, xyz2, dist1, dist2, idx1, idx2); -} - -void chamfer_distance_backward(const Tensor xyz1, const Tensor xyz2, - Tensor gradxyz1, Tensor gradxyz2, - Tensor graddist1, Tensor graddist2, Tensor idx1, - Tensor idx2) { - chamfer_distance_backward_impl(xyz1, xyz2, gradxyz1, gradxyz2, graddist1, - graddist2, idx1, idx2); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/contour_expand.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/contour_expand.cpp deleted file mode 100755 index 586c48ee..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/contour_expand.cpp +++ /dev/null @@ -1,111 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -// It is modified from https://github.com/whai362/PSENet -#include -#include - -#include "pytorch_cpp_helper.hpp" - -using namespace std; - -class Point2d { - public: - int x; - int y; - - Point2d() : x(0), y(0) {} - Point2d(int _x, int _y) : x(_x), y(_y) {} -}; - -void kernel_dilate(const uint8_t *data, IntArrayRef data_shape, - const int *label_map, int &label_num, int &min_area, - vector> &text_line) { - std::vector area(label_num + 1); - int kernel_num = data_shape[0]; - int height = data_shape[1]; - int width = data_shape[2]; - - for (int x = 0; x < height; ++x) { - for (int y = 0; y < width; ++y) { - int label = label_map[x * width + y]; - if (label == 0) continue; - area[label] += 1; - } - } - - queue queue, next_queue; - for (int x = 0; x < height; ++x) { - vector row(width); - for (int y = 0; y < width; ++y) { - int label = label_map[x * width + y]; - if (label == 0) continue; - if (area[label] < min_area) continue; - - Point2d point(x, y); - queue.push(point); - row[y] = label; - } - text_line.emplace_back(row); - } - - int dx[] = {-1, 1, 0, 0}; - int dy[] = {0, 0, -1, 1}; - vector kernel_step(kernel_num); - std::for_each(kernel_step.begin(), kernel_step.end(), - [=](int &k) { return k * height * width; }); - - for (int kernel_id = kernel_num - 2; kernel_id >= 0; --kernel_id) { - while (!queue.empty()) { - Point2d point = queue.front(); - queue.pop(); - int x = point.x; - int y = point.y; - int label = text_line[x][y]; - - bool is_edge = true; - for (int d = 0; d < 4; ++d) { - int tmp_x = x + dx[d]; - int tmp_y = y + dy[d]; - - if (tmp_x < 0 || tmp_x >= height) continue; - if (tmp_y < 0 || tmp_y >= width) continue; - int kernel_value = data[kernel_step[kernel_id] + tmp_x * width + tmp_y]; - if (kernel_value == 0) continue; - if (text_line[tmp_x][tmp_y] > 0) continue; - - Point2d point(tmp_x, tmp_y); - queue.push(point); - text_line[tmp_x][tmp_y] = label; - is_edge = false; - } - - if (is_edge) { - next_queue.push(point); - } - } - swap(queue, next_queue); - } -} - -std::vector> contour_expand(Tensor kernel_mask, - Tensor internal_kernel_label, - int min_kernel_area, - int kernel_num) { - kernel_mask = kernel_mask.contiguous(); - internal_kernel_label = internal_kernel_label.contiguous(); - assert(kernel_mask.dim() == 3); - assert(internal_kernel_label.dim() == 2); - assert(kernel_mask.size(1) == internal_kernel_label.size(0)); - assert(kernel_mask.size(2) == internal_kernel_label.size(1)); - CHECK_CPU_INPUT(kernel_mask); - CHECK_CPU_INPUT(internal_kernel_label); - auto ptr_data = kernel_mask.data_ptr(); - IntArrayRef data_shape = kernel_mask.sizes(); - - auto data_label_map = internal_kernel_label.data_ptr(); - vector> text_line; - - kernel_dilate(ptr_data, data_shape, data_label_map, kernel_num, - min_kernel_area, text_line); - - return text_line; -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/convex_iou.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/convex_iou.cpp deleted file mode 100644 index 79f2028b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/convex_iou.cpp +++ /dev/null @@ -1,23 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -// modified from -// https://github.com/SDL-GuoZonghao/BeyondBoundingBox/tree/main/mmdet/ops/iou/src -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void convex_iou_impl(const Tensor pointsets, const Tensor polygons, - Tensor ious) { - DISPATCH_DEVICE_IMPL(convex_iou_impl, pointsets, polygons, ious); -} - -void convex_iou(const Tensor pointsets, const Tensor polygons, Tensor ious) { - convex_iou_impl(pointsets, polygons, ious); -} - -void convex_giou_impl(const Tensor pointsets, const Tensor polygons, - Tensor output) { - DISPATCH_DEVICE_IMPL(convex_giou_impl, pointsets, polygons, output); -} - -void convex_giou(const Tensor pointsets, const Tensor polygons, Tensor output) { - convex_giou_impl(pointsets, polygons, output); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/correlation.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/correlation.cpp deleted file mode 100644 index f4adba2a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/correlation.cpp +++ /dev/null @@ -1,47 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -#include - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void correlation_forward_impl(Tensor input1, Tensor input2, Tensor output, - int kH, int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW) { - DISPATCH_DEVICE_IMPL(correlation_forward_impl, input1, input2, output, kH, kW, - patchH, patchW, padH, padW, dilationH, dilationW, - dilation_patchH, dilation_patchW, dH, dW); -} - -void correlation_backward_impl(Tensor grad_output, Tensor input1, Tensor input2, - Tensor grad_input1, Tensor grad_input2, int kH, - int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW) { - DISPATCH_DEVICE_IMPL(correlation_backward_impl, grad_output, input1, input2, - grad_input1, grad_input2, kH, kW, patchH, patchW, padH, - padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); -} - -void correlation_forward(Tensor input1, Tensor input2, Tensor output, int kH, - int kW, int patchH, int patchW, int padH, int padW, - int dilationH, int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW) { - correlation_forward_impl(input1, input2, output, kH, kW, patchH, patchW, padH, - padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); -} - -void correlation_backward(Tensor grad_output, Tensor input1, Tensor input2, - Tensor grad_input1, Tensor grad_input2, int kH, - int kW, int patchH, int patchW, int padH, int padW, - int dilationH, int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW) { - correlation_backward_impl(grad_output, input1, input2, grad_input1, - grad_input2, kH, kW, patchH, patchW, padH, padW, - dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/active_rotated_filter.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/active_rotated_filter.cpp deleted file mode 100644 index aa5a8b3d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/active_rotated_filter.cpp +++ /dev/null @@ -1,120 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -// modified from -// https://github.com/csuhan/s2anet/blob/master/mmdet/ops/orn/src/cpu/ActiveRotatingFilter_cpu.cpp -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -template -void active_rotated_filter_forward_cpu_kernel( - const T* weightData, const int* indicesData, const int num_output_planes, - const int num_input_planes, const int num_orientations, const int kH, - const int kW, const int num_rotations, T* outputData) { - const int nEntry = num_orientations * kH * kW; - int i, j, l; - int k; - -#pragma omp parallel for private(i, j, l, k) - for (i = 0; i < num_output_planes; i++) { - for (j = 0; j < num_input_planes; j++) { - for (l = 0; l < nEntry; l++) { - int weightIndex = i * num_input_planes * nEntry + j * nEntry + l; - T val = *(weightData + weightIndex); - for (k = 0; k < num_rotations; k++) { - int index = (int)(*(indicesData + l * num_rotations + k)) - 1; - T* target = outputData + - i * (num_rotations * num_input_planes * nEntry) + - k * (num_input_planes * nEntry) + j * (nEntry) + index; - *target = val; - } - } - } - } -} - -template -void active_rotated_filter_backward_cpu_kernel( - const T* gradOutputData, const int* indicesData, - const int num_output_planes, const int num_input_planes, - const int num_orientations, const int kH, const int kW, - const int num_rotations, T* gradInputData) { - const int nEntry = num_orientations * kH * kW; - int i, j, l; - int k; - -#pragma omp parallel for private(i, j, l, k) - for (i = 0; i < num_output_planes; i++) { - for (j = 0; j < num_input_planes; j++) { - for (l = 0; l < nEntry; l++) { - int gradInputIndex = i * num_input_planes * nEntry + j * nEntry + l; - T* val = gradInputData + gradInputIndex; - *val = 0; - for (k = 0; k < num_rotations; k++) { - int index = (int)(*(indicesData + l * num_rotations + k)) - 1; - const T* target = - gradOutputData + i * (num_rotations * num_input_planes * nEntry) + - k * (num_input_planes * nEntry) + j * (nEntry) + index; - *val = *val + *target; - } - } - } - } -} - -void ActiveRotatedFilterForwardCPULauncher(const Tensor input, - const Tensor indices, - Tensor output) { - const int num_output_planes = input.size(0); - const int num_input_planes = input.size(1); - const int num_orientations = input.size(2); - const int kH = input.size(3); - const int kW = input.size(4); - const int num_rotations = indices.size(3); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "active_rotated_filter_forward_cpu_kernel", [&] { - active_rotated_filter_forward_cpu_kernel( - input.data_ptr(), indices.data_ptr(), - num_output_planes, num_input_planes, num_orientations, kH, kW, - num_rotations, output.data_ptr()); - }); -} - -void ActiveRotatedFilterBackwardCPULauncher(const Tensor grad_out, - const Tensor indices, - Tensor grad_in) { - const int num_orientations = indices.size(0); - const int kH = indices.size(1); - const int kW = indices.size(2); - const int num_rotations = indices.size(3); - const int num_output_planes = grad_out.size(0) / num_rotations; - const int num_input_planes = grad_out.size(1) / num_orientations; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_out.scalar_type(), "active_rotated_filter_backward_cpu_kernel", [&] { - active_rotated_filter_backward_cpu_kernel( - grad_out.data_ptr(), indices.data_ptr(), - num_output_planes, num_input_planes, num_orientations, kH, kW, - num_rotations, grad_in.data_ptr()); - }); -} - -void active_rotated_filter_forward_cpu(const Tensor input, const Tensor indices, - Tensor output) { - 
ActiveRotatedFilterForwardCPULauncher(input, indices, output); -} - -void active_rotated_filter_backward_cpu(const Tensor grad_out, - const Tensor indices, Tensor grad_in) { - ActiveRotatedFilterBackwardCPULauncher(grad_out, indices, grad_in); -} - -void active_rotated_filter_forward_impl(const Tensor input, - const Tensor indices, Tensor output); - -void active_rotated_filter_backward_impl(const Tensor grad_out, - const Tensor indices, Tensor grad_in); - -REGISTER_DEVICE_IMPL(active_rotated_filter_forward_impl, CPU, - active_rotated_filter_forward_cpu); -REGISTER_DEVICE_IMPL(active_rotated_filter_backward_impl, CPU, - active_rotated_filter_backward_cpu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/box_iou_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/box_iou_rotated.cpp deleted file mode 100644 index 585d2c9f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/box_iou_rotated.cpp +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cpu.cpp -#include "box_iou_rotated_utils.hpp" -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -template -void box_iou_rotated_cpu_kernel(const Tensor boxes1, const Tensor boxes2, - Tensor ious, const int mode_flag, - const bool aligned) { - int output_size = ious.numel(); - auto num_boxes1 = boxes1.size(0); - auto num_boxes2 = boxes2.size(0); - - if (aligned) { - for (int i = 0; i < output_size; i++) { - ious[i] = single_box_iou_rotated(boxes1[i].data_ptr(), - boxes2[i].data_ptr(), mode_flag); - } - } else { - for (int i = 0; i < num_boxes1; i++) { - for (int j = 0; j < num_boxes2; j++) { - ious[i * num_boxes2 + j] = single_box_iou_rotated( - boxes1[i].data_ptr(), boxes2[j].data_ptr(), mode_flag); - } - } - } -} - -void box_iou_rotated_cpu(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned) { - box_iou_rotated_cpu_kernel(boxes1, boxes2, ious, mode_flag, aligned); -} - -void box_iou_rotated_impl(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned); -REGISTER_DEVICE_IMPL(box_iou_rotated_impl, CPU, box_iou_rotated_cpu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/deform_conv.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/deform_conv.cpp deleted file mode 100644 index 7ab67e78..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/deform_conv.cpp +++ /dev/null @@ -1,408 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -template -T deformable_im2col_bilinear_cpu(const T *input, const int data_width, - const int height, const int width, T h, T w) { - if (h <= -1 || height <= h || w <= -1 || width <= w) { - return 0; - } - - int h_low = floor(h); - int w_low = floor(w); - int h_high = h_low + 1; - int w_high = w_low + 1; - - T lh = h - h_low; - T lw = w - w_low; - T hh = 1 - lh, hw = 1 - lw; - - T v1 = 0; - if (h_low >= 0 && w_low >= 0) v1 = input[h_low * data_width + w_low]; - T v2 = 0; - if (h_low >= 0 && w_high <= width - 1) - v2 = input[h_low * data_width + w_high]; - T v3 = 0; - if (h_high <= height - 1 && w_low >= 0) - v3 = input[h_high * data_width + w_low]; - T v4 = 0; - if (h_high <= height - 1 && w_high <= width - 1) - v4 = input[h_high * data_width + w_high]; - - T w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - - T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - return val; -} - -template -T get_gradient_weight_cpu(T argmax_h, T argmax_w, const int h, const int w, - const int height, const int width) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floor(argmax_h); - int argmax_w_low = floor(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - if (h == argmax_h_low && w == argmax_w_low) - weight = (h + 1 - argmax_h) * (w + 1 - argmax_w); - if (h == argmax_h_low && w == argmax_w_high) - weight = (h + 1 - argmax_h) * (argmax_w + 1 - w); - if (h == argmax_h_high && w == argmax_w_low) - weight = (argmax_h + 1 - h) * (w + 1 - argmax_w); - if (h == argmax_h_high && w == argmax_w_high) - weight = (argmax_h + 1 - h) * (argmax_w + 1 - w); - return weight; -} - -template -T get_coordinate_weight_cpu(T argmax_h, T argmax_w, const int height, - const int width, const T *im_data, - const int data_width, const int bp_dir) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floor(argmax_h); - int argmax_w_low = floor(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - - if (bp_dir == 0) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += -1 * (argmax_w - argmax_w_low) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_w - argmax_w_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } else if (bp_dir == 1) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += -1 * (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + 
argmax_w_high]; - } - - return weight; -} - -template -void deformable_im2col_cpu_kernel( - const int n, const T *data_im, const T *data_offset, const int height, - const int width, const int kernel_h, const int kernel_w, const int pad_h, - const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int num_channels, const int deformable_group, const int height_col, - const int width_col, T *data_col) { - for (int index = 0; index < n; index++) { - // index index of output matrix - const int w_col = index % width_col; - const int h_col = (index / width_col) % height_col; - const int b_col = (index / width_col / height_col) % batch_size; - const int c_im = (index / width_col / height_col) / batch_size; - const int c_col = c_im * kernel_h * kernel_w; - - // compute deformable group index - const int deformable_group_index = c_im / channel_per_deformable_group; - - const int h_in = h_col * stride_h - pad_h; - const int w_in = w_col * stride_w - pad_w; - T *data_col_ptr = - data_col + - ((c_col * batch_size + b_col) * height_col + h_col) * width_col + w_col; - const T *data_im_ptr = - data_im + (b_col * num_channels + c_im) * height * width; - const T *data_offset_ptr = - data_offset + (b_col * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - - for (int i = 0; i < kernel_h; ++i) { - for (int j = 0; j < kernel_w; ++j) { - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + - w_col; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - T val = static_cast(0); - const T h_im = h_in + i * dilation_h + offset_h; - const T w_im = w_in + j * dilation_w + offset_w; - if (h_im > -1 && w_im > -1 && h_im < height && w_im < width) - val = deformable_im2col_bilinear_cpu(data_im_ptr, width, height, - width, h_im, w_im); - *data_col_ptr = val; - data_col_ptr += batch_size * height_col * width_col; - } - } - } -} - -template -void deformable_col2im_cpu_kernel( - const int n, const T *data_col, const T *data_offset, const int channels, - const int height, const int width, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int deformable_group, const int height_col, const int width_col, - T *grad_im) { - for (int index = 0; index < n; index++) { - const int j = (index / width_col / height_col / batch_size) % kernel_w; - const int i = - (index / width_col / height_col / batch_size / kernel_w) % kernel_h; - const int c = - index / width_col / height_col / batch_size / kernel_w / kernel_h; - // compute the start and end of the output - - const int deformable_group_index = c / channel_per_deformable_group; - - int w_out = index % width_col; - int h_out = (index / width_col) % height_col; - int b = (index / width_col / height_col) % batch_size; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - - const T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out; 
- const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T cur_inv_h_data = h_in + i * dilation_h + offset_h; - const T cur_inv_w_data = w_in + j * dilation_w + offset_w; - - const T cur_top_grad = data_col[index]; - const int cur_h = (int)cur_inv_h_data; - const int cur_w = (int)cur_inv_w_data; - for (int dy = -2; dy <= 2; dy++) { - for (int dx = -2; dx <= 2; dx++) { - if (cur_h + dy >= 0 && cur_h + dy < height && cur_w + dx >= 0 && - cur_w + dx < width && abs(cur_inv_h_data - (cur_h + dy)) < 1 && - abs(cur_inv_w_data - (cur_w + dx)) < 1) { - int cur_bottom_grad_pos = - ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx; - T weight = - get_gradient_weight_cpu(cur_inv_h_data, cur_inv_w_data, - cur_h + dy, cur_w + dx, height, width); - *(grad_im + cur_bottom_grad_pos) += weight * cur_top_grad; - } - } - } - } -} - -template -void deformable_col2im_coord_cpu_kernel( - const int n, const T *data_col, const T *data_im, const T *data_offset, - const int channels, const int height, const int width, const int kernel_h, - const int kernel_w, const int pad_h, const int pad_w, const int stride_h, - const int stride_w, const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int offset_channels, const int deformable_group, const int height_col, - const int width_col, T *grad_offset) { - for (int index = 0; index < n; index++) { - T val = 0; - int w = index % width_col; - int h = (index / width_col) % height_col; - int c = (index / width_col / height_col) % offset_channels; - int b = (index / width_col / height_col) / offset_channels; - // compute the start and end of the output - - const int deformable_group_index = c / (2 * kernel_h * kernel_w); - const int col_step = kernel_h * kernel_w; - int cnt = 0; - const T *data_col_ptr = data_col + deformable_group_index * - channel_per_deformable_group * - batch_size * width_col * height_col; - const T *data_im_ptr = - data_im + (b * deformable_group + deformable_group_index) * - channel_per_deformable_group / kernel_h / kernel_w * - height * width; - const T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - - const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w; - - for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; - col_c += col_step) { - const int col_pos = - (((col_c * batch_size + b) * height_col) + h) * width_col + w; - const int bp_dir = offset_c % 2; - - int j = (col_pos / width_col / height_col / batch_size) % kernel_w; - int i = - (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h; - int w_out = col_pos % width_col; - int h_out = (col_pos / width_col) % height_col; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - const int data_offset_h_ptr = - (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out); - const int data_offset_w_ptr = - (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + - w_out); - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - T inv_h = h_in + i * dilation_h + offset_h; - T inv_w = w_in + j * dilation_w + offset_w; - if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width) - inv_h = inv_w = -2; - const T 
weight = get_coordinate_weight_cpu( - inv_h, inv_w, height, width, data_im_ptr + cnt * height * width, - width, bp_dir); - val += weight * data_col_ptr[col_pos]; - cnt += 1; - } - - grad_offset[index] = val; - } -} - -void deformable_im2col_cpu(Tensor data_im, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor data_col) { - int height_col = - (height + 2 * pad_h - (dilation_h * (ksize_h - 1) + 1)) / stride_h + 1; - int width_col = - (width + 2 * pad_w - (dilation_w * (ksize_w - 1) + 1)) / stride_w + 1; - int num_kernels = channels * height_col * width_col * parallel_imgs; - int channel_per_deformable_group = channels / deformable_group; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_im.scalar_type(), "deformable_im2col_cpu", [&] { - deformable_im2col_cpu_kernel( - num_kernels, data_im.data_ptr(), - data_offset.data_ptr(), height, width, ksize_h, ksize_w, - pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, - channel_per_deformable_group, parallel_imgs, channels, - deformable_group, height_col, width_col, - data_col.data_ptr()); - }); -} - -void deformable_col2im_cpu(Tensor data_col, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor grad_im) { - // todo: make sure parallel_imgs is passed in correctly - int height_col = - (height + 2 * pad_h - (dilation_h * (ksize_h - 1) + 1)) / stride_h + 1; - int width_col = - (width + 2 * pad_w - (dilation_w * (ksize_w - 1) + 1)) / stride_w + 1; - int num_kernels = - channels * ksize_h * ksize_w * height_col * width_col * parallel_imgs; - int channel_per_deformable_group = channels / deformable_group; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "deformable_col2im_gpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - scalar_t *grad_im_ = grad_im.data_ptr(); - - deformable_col2im_cpu_kernel( - num_kernels, data_col_, data_offset_, channels, height, width, - ksize_h, ksize_w, pad_h, pad_w, stride_h, stride_w, dilation_h, - dilation_w, channel_per_deformable_group, parallel_imgs, - deformable_group, height_col, width_col, grad_im_); - })); -} - -void deformable_col2im_coord_cpu( - Tensor data_col, Tensor data_im, Tensor data_offset, const int channels, - const int height, const int width, const int ksize_h, const int ksize_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, Tensor grad_offset) { - int height_col = - (height + 2 * pad_h - (dilation_h * (ksize_h - 1) + 1)) / stride_h + 1; - int width_col = - (width + 2 * pad_w - (dilation_w * (ksize_w - 1) + 1)) / stride_w + 1; - int num_kernels = height_col * width_col * 2 * ksize_h * ksize_w * - deformable_group * parallel_imgs; - int channel_per_deformable_group = - channels * ksize_h * ksize_w / deformable_group; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "deformable_col2im_coord_cpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t 
*data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - scalar_t *grad_offset_ = grad_offset.data_ptr(); - - deformable_col2im_coord_cpu_kernel( - num_kernels, data_col_, data_im_, data_offset_, channels, height, - width, ksize_h, ksize_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, channel_per_deformable_group, parallel_imgs, - 2 * ksize_h * ksize_w * deformable_group, deformable_group, - height_col, width_col, grad_offset_); - })); -} - -void deformable_im2col_impl(Tensor data_im, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor data_col); - -void deformable_col2im_impl(Tensor data_col, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor grad_im); - -void deformable_col2im_coord_impl( - Tensor data_col, Tensor data_im, Tensor data_offset, const int channels, - const int height, const int width, const int ksize_h, const int ksize_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, Tensor grad_offset); - -REGISTER_DEVICE_IMPL(deformable_im2col_impl, CPU, deformable_im2col_cpu); -REGISTER_DEVICE_IMPL(deformable_col2im_impl, CPU, deformable_col2im_cpu); -REGISTER_DEVICE_IMPL(deformable_col2im_coord_impl, CPU, - deformable_col2im_coord_cpu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/modulated_deform_conv.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/modulated_deform_conv.cpp deleted file mode 100644 index 95390956..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/modulated_deform_conv.cpp +++ /dev/null @@ -1,436 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -template -T dmcn_im2col_bilinear_cpu(const T *input, const int data_width, - const int height, const int width, T h, T w) { - int h_low = floorf(h); - int w_low = floorf(w); - int h_high = h_low + 1; - int w_high = w_low + 1; - - T lh = h - h_low; - T lw = w - w_low; - T hh = 1 - lh, hw = 1 - lw; - - T v1 = 0; - if (h_low >= 0 && w_low >= 0) v1 = input[h_low * data_width + w_low]; - T v2 = 0; - if (h_low >= 0 && w_high <= width - 1) - v2 = input[h_low * data_width + w_high]; - T v3 = 0; - if (h_high <= height - 1 && w_low >= 0) - v3 = input[h_high * data_width + w_low]; - T v4 = 0; - if (h_high <= height - 1 && w_high <= width - 1) - v4 = input[h_high * data_width + w_high]; - - T w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - - T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - return val; -} - -template -T dmcn_get_gradient_weight_cpu(T argmax_h, T argmax_w, const int h, const int w, - const int height, const int width) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floorf(argmax_h); - int argmax_w_low = floorf(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - if (h == argmax_h_low && w == argmax_w_low) - weight = (h + 1 - argmax_h) * (w + 1 - argmax_w); - if (h == argmax_h_low && w == argmax_w_high) - weight = (h + 1 - argmax_h) * (argmax_w + 1 - w); - if (h == argmax_h_high && w == argmax_w_low) - weight = (argmax_h + 1 - h) * (w + 1 - argmax_w); - if (h == argmax_h_high && w == argmax_w_high) - weight = (argmax_h + 1 - h) * (argmax_w + 1 - w); - return weight; -} - -template -T dmcn_get_coordinate_weight_cpu(T argmax_h, T argmax_w, const int height, - const int width, const T *im_data, - const int data_width, const int bp_dir) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floorf(argmax_h); - int argmax_w_low = floorf(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - - if (bp_dir == 0) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += -1 * (argmax_w - argmax_w_low) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_w - argmax_w_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } else if (bp_dir == 1) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += -1 * (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } - - return weight; -} - -template -void 
modulated_deformable_im2col_cpu_kernel( - const int n, const T *data_im, const T *data_offset, const T *data_mask, - const int height, const int width, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int num_channels, const int deformable_group, const int height_col, - const int width_col, T *data_col) { - for (int index = 0; index < n; index++) { - // index index of output matrix - const int w_col = index % width_col; - const int h_col = (index / width_col) % height_col; - const int b_col = (index / width_col / height_col) % batch_size; - const int c_im = (index / width_col / height_col) / batch_size; - const int c_col = c_im * kernel_h * kernel_w; - - // compute deformable group index - const int deformable_group_index = c_im / channel_per_deformable_group; - - const int h_in = h_col * stride_h - pad_h; - const int w_in = w_col * stride_w - pad_w; - - T *data_col_ptr = - data_col + - ((c_col * batch_size + b_col) * height_col + h_col) * width_col + w_col; - const T *data_im_ptr = - data_im + (b_col * num_channels + c_im) * height * width; - const T *data_offset_ptr = - data_offset + (b_col * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - - const T *data_mask_ptr = - data_mask + (b_col * deformable_group + deformable_group_index) * - kernel_h * kernel_w * height_col * width_col; - - for (int i = 0; i < kernel_h; ++i) { - for (int j = 0; j < kernel_w; ++j) { - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + - w_col; - const int data_mask_hw_ptr = - ((i * kernel_w + j) * height_col + h_col) * width_col + w_col; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - T val = static_cast(0); - const T h_im = h_in + i * dilation_h + offset_h; - const T w_im = w_in + j * dilation_w + offset_w; - if (h_im > -1 && w_im > -1 && h_im < height && w_im < width) - val = dmcn_im2col_bilinear_cpu(data_im_ptr, width, height, width, - h_im, w_im); - *data_col_ptr = val * mask; - data_col_ptr += batch_size * height_col * width_col; - } - } - } -} - -template -void modulated_deformable_col2im_cpu_kernel( - const int n, const T *data_col, const T *data_offset, const T *data_mask, - const int channels, const int height, const int width, const int kernel_h, - const int kernel_w, const int pad_h, const int pad_w, const int stride_h, - const int stride_w, const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int deformable_group, const int height_col, const int width_col, - T *grad_im) { - for (int index = 0; index < n; index++) { - const int j = (index / width_col / height_col / batch_size) % kernel_w; - const int i = - (index / width_col / height_col / batch_size / kernel_w) % kernel_h; - const int c = - index / width_col / height_col / batch_size / kernel_w / kernel_h; - // compute the start and end of the output - - const int deformable_group_index = c / channel_per_deformable_group; - - int w_out = index % width_col; - int h_out = (index / width_col) % height_col; - int b = (index / width_col / height_col) % batch_size; - int w_in = w_out * stride_w - 
pad_w; - int h_in = h_out * stride_h - pad_h; - - const T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - const T *data_mask_ptr = - data_mask + (b * deformable_group + deformable_group_index) * kernel_h * - kernel_w * height_col * width_col; - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out; - const int data_mask_hw_ptr = - ((i * kernel_w + j) * height_col + h_out) * width_col + w_out; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - const T cur_inv_h_data = h_in + i * dilation_h + offset_h; - const T cur_inv_w_data = w_in + j * dilation_w + offset_w; - - const T cur_top_grad = data_col[index] * mask; - const int cur_h = (int)cur_inv_h_data; - const int cur_w = (int)cur_inv_w_data; - for (int dy = -2; dy <= 2; dy++) { - for (int dx = -2; dx <= 2; dx++) { - if (cur_h + dy >= 0 && cur_h + dy < height && cur_w + dx >= 0 && - cur_w + dx < width && abs(cur_inv_h_data - (cur_h + dy)) < 1 && - abs(cur_inv_w_data - (cur_w + dx)) < 1) { - int cur_bottom_grad_pos = - ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx; - T weight = dmcn_get_gradient_weight_cpu(cur_inv_h_data, - cur_inv_w_data, cur_h + dy, - cur_w + dx, height, width); - *(grad_im + cur_bottom_grad_pos) += weight * cur_top_grad; - } - } - } - } -} - -template -void modulated_deformable_col2im_coord_cpu_kernel( - const int n, const T *data_col, const T *data_im, const T *data_offset, - const T *data_mask, const int channels, const int height, const int width, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int channel_per_deformable_group, - const int batch_size, const int offset_channels, const int deformable_group, - const int height_col, const int width_col, T *grad_offset, T *grad_mask) { - for (int index = 0; index < n; index++) { - T val = 0, mval = 0; - int w = index % width_col; - int h = (index / width_col) % height_col; - int c = (index / width_col / height_col) % offset_channels; - int b = (index / width_col / height_col) / offset_channels; - // compute the start and end of the output - - const int deformable_group_index = c / (2 * kernel_h * kernel_w); - const int col_step = kernel_h * kernel_w; - int cnt = 0; - const T *data_col_ptr = data_col + deformable_group_index * - channel_per_deformable_group * - batch_size * width_col * height_col; - const T *data_im_ptr = - data_im + (b * deformable_group + deformable_group_index) * - channel_per_deformable_group / kernel_h / kernel_w * - height * width; - const T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - const T *data_mask_ptr = - data_mask + (b * deformable_group + deformable_group_index) * kernel_h * - kernel_w * height_col * width_col; - - const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w; - - for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; - col_c += col_step) { - const int col_pos = - (((col_c * batch_size + b) * height_col) + h) * width_col + w; - const int bp_dir = offset_c % 2; - - int j = (col_pos / width_col / height_col / batch_size) 
% kernel_w; - int i = - (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h; - int w_out = col_pos % width_col; - int h_out = (col_pos / width_col) % height_col; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - const int data_offset_h_ptr = - (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out); - const int data_offset_w_ptr = - (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + - w_out); - const int data_mask_hw_ptr = - (((i * kernel_w + j) * height_col + h_out) * width_col + w_out); - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - T inv_h = h_in + i * dilation_h + offset_h; - T inv_w = w_in + j * dilation_w + offset_w; - if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width) - inv_h = inv_w = -2; - else - mval += data_col_ptr[col_pos] * - dmcn_im2col_bilinear_cpu(data_im_ptr + cnt * height * width, - width, height, width, inv_h, inv_w); - const T weight = dmcn_get_coordinate_weight_cpu( - inv_h, inv_w, height, width, data_im_ptr + cnt * height * width, - width, bp_dir); - val += weight * data_col_ptr[col_pos] * mask; - cnt += 1; - } - // KERNEL_ASSIGN(grad_offset[index], offset_req, val); - grad_offset[index] = val; - if (offset_c % 2 == 0) - // KERNEL_ASSIGN(grad_mask[(((b * deformable_group + - // deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * - // height_col + h) * width_col + w], mask_req, mval); - grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * - kernel_w + - offset_c / 2) * - height_col + - h) * - width_col + - w] = mval; - } -} - -void modulated_deformable_im2col_cpu( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col) { - // num_axes should be smaller than block size - const int channel_per_deformable_group = channels / deformable_group; - const int num_kernels = channels * batch_size * height_col * width_col; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_im.scalar_type(), "modulated_deformable_im2col_cpu", ([&] { - const scalar_t *data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *data_col_ = data_col.data_ptr(); - - modulated_deformable_im2col_cpu_kernel( - num_kernels, data_im_, data_offset_, data_mask_, height_im, - width_im, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, channel_per_deformable_group, batch_size, - channels, deformable_group, height_col, width_col, data_col_); - })); -} - -void modulated_deformable_col2im_cpu( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im) { - const int channel_per_deformable_group = channels / deformable_group; - const int num_kernels = - 
channels * kernel_h * kernel_w * batch_size * height_col * width_col; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "modulated_deformable_col2im_cpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *grad_im_ = grad_im.data_ptr(); - - modulated_deformable_col2im_cpu_kernel( - num_kernels, data_col_, data_offset_, data_mask_, channels, - height_im, width_im, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, channel_per_deformable_group, - batch_size, deformable_group, height_col, width_col, grad_im_); - })); -} - -void modulated_deformable_col2im_coord_cpu( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask) { - const int num_kernels = batch_size * height_col * width_col * 2 * kernel_h * - kernel_w * deformable_group; - const int channel_per_deformable_group = - channels * kernel_h * kernel_w / deformable_group; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "modulated_deformable_col2im_coord_cpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *grad_offset_ = grad_offset.data_ptr(); - scalar_t *grad_mask_ = grad_mask.data_ptr(); - - modulated_deformable_col2im_coord_cpu_kernel( - num_kernels, data_col_, data_im_, data_offset_, data_mask_, - channels, height_im, width_im, kernel_h, kernel_w, pad_h, pad_w, - stride_h, stride_w, dilation_h, dilation_w, - channel_per_deformable_group, batch_size, - 2 * kernel_h * kernel_w * deformable_group, deformable_group, - height_col, width_col, grad_offset_, grad_mask_); - })); -} - -void modulated_deformable_im2col_impl( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col); - -void modulated_deformable_col2im_impl( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im); - -void modulated_deformable_col2im_coord_impl( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int 
dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask); - -REGISTER_DEVICE_IMPL(modulated_deformable_im2col_impl, CPU, - modulated_deformable_im2col_cpu); -REGISTER_DEVICE_IMPL(modulated_deformable_col2im_impl, CPU, - modulated_deformable_col2im_cpu); -REGISTER_DEVICE_IMPL(modulated_deformable_col2im_coord_impl, CPU, - modulated_deformable_col2im_coord_cpu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/nms.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/nms.cpp deleted file mode 100644 index 53e9b9a8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/nms.cpp +++ /dev/null @@ -1,230 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -Tensor nms_cpu(Tensor boxes, Tensor scores, float iou_threshold, int offset) { - if (boxes.numel() == 0) { - return at::empty({0}, boxes.options().dtype(at::kLong)); - } - auto x1_t = boxes.select(1, 0).contiguous(); - auto y1_t = boxes.select(1, 1).contiguous(); - auto x2_t = boxes.select(1, 2).contiguous(); - auto y2_t = boxes.select(1, 3).contiguous(); - - Tensor areas_t = (x2_t - x1_t + offset) * (y2_t - y1_t + offset); - - auto order_t = std::get<1>(scores.sort(0, /* descending=*/true)); - - auto nboxes = boxes.size(0); - Tensor select_t = at::ones({nboxes}, boxes.options().dtype(at::kBool)); - - auto select = select_t.data_ptr(); - auto order = order_t.data_ptr(); - auto x1 = x1_t.data_ptr(); - auto y1 = y1_t.data_ptr(); - auto x2 = x2_t.data_ptr(); - auto y2 = y2_t.data_ptr(); - auto areas = areas_t.data_ptr(); - - for (int64_t _i = 0; _i < nboxes; _i++) { - if (select[_i] == false) continue; - auto i = order[_i]; - auto ix1 = x1[i]; - auto iy1 = y1[i]; - auto ix2 = x2[i]; - auto iy2 = y2[i]; - auto iarea = areas[i]; - - for (int64_t _j = _i + 1; _j < nboxes; _j++) { - if (select[_j] == false) continue; - auto j = order[_j]; - auto xx1 = std::max(ix1, x1[j]); - auto yy1 = std::max(iy1, y1[j]); - auto xx2 = std::min(ix2, x2[j]); - auto yy2 = std::min(iy2, y2[j]); - - auto w = std::max(0.f, xx2 - xx1 + offset); - auto h = std::max(0.f, yy2 - yy1 + offset); - auto inter = w * h; - auto ovr = inter / (iarea + areas[j] - inter); - if (ovr > iou_threshold) select[_j] = false; - } - } - return order_t.masked_select(select_t); -} - -Tensor nms_impl(Tensor boxes, Tensor scores, float iou_threshold, int offset); -REGISTER_DEVICE_IMPL(nms_impl, CPU, nms_cpu); - -Tensor softnms_cpu(Tensor boxes, Tensor scores, Tensor dets, - float iou_threshold, float sigma, float min_score, - int method, int offset) { - if (boxes.numel() == 0) { - return at::empty({0}, boxes.options().dtype(at::kLong)); - } - - auto x1_t = boxes.select(1, 0).contiguous(); - auto y1_t = boxes.select(1, 1).contiguous(); - auto x2_t = boxes.select(1, 2).contiguous(); - auto y2_t = boxes.select(1, 3).contiguous(); - auto scores_t = scores.clone(); - - Tensor areas_t = (x2_t - x1_t + offset) * (y2_t - y1_t + offset); - - auto nboxes = boxes.size(0); - auto x1 = x1_t.data_ptr(); - auto y1 = y1_t.data_ptr(); - auto x2 = x2_t.data_ptr(); - auto y2 = y2_t.data_ptr(); - auto sc = scores_t.data_ptr(); - auto areas = areas_t.data_ptr(); - auto de = dets.data_ptr(); - - int64_t pos = 0; - Tensor inds_t = at::arange(nboxes, boxes.options().dtype(at::kLong)); - auto inds = inds_t.data_ptr(); - - for (int64_t i = 0; i < nboxes; i++) { - auto max_score = sc[i]; - auto max_pos 
= i; - - pos = i + 1; - // get max box - while (pos < nboxes) { - if (max_score < sc[pos]) { - max_score = sc[pos]; - max_pos = pos; - } - pos = pos + 1; - } - // swap - auto ix1 = de[i * 5 + 0] = x1[max_pos]; - auto iy1 = de[i * 5 + 1] = y1[max_pos]; - auto ix2 = de[i * 5 + 2] = x2[max_pos]; - auto iy2 = de[i * 5 + 3] = y2[max_pos]; - auto iscore = de[i * 5 + 4] = sc[max_pos]; - auto iarea = areas[max_pos]; - auto iind = inds[max_pos]; - x1[max_pos] = x1[i]; - y1[max_pos] = y1[i]; - x2[max_pos] = x2[i]; - y2[max_pos] = y2[i]; - sc[max_pos] = sc[i]; - areas[max_pos] = areas[i]; - inds[max_pos] = inds[i]; - x1[i] = ix1; - y1[i] = iy1; - x2[i] = ix2; - y2[i] = iy2; - sc[i] = iscore; - areas[i] = iarea; - inds[i] = iind; - - pos = i + 1; - while (pos < nboxes) { - auto xx1 = std::max(ix1, x1[pos]); - auto yy1 = std::max(iy1, y1[pos]); - auto xx2 = std::min(ix2, x2[pos]); - auto yy2 = std::min(iy2, y2[pos]); - - auto w = std::max(0.f, xx2 - xx1 + offset); - auto h = std::max(0.f, yy2 - yy1 + offset); - auto inter = w * h; - auto ovr = inter / (iarea + areas[pos] - inter); - - float weight = 1.; - if (method == 0) { - if (ovr >= iou_threshold) weight = 0; - } else if (method == 1) { - if (ovr >= iou_threshold) weight = 1 - ovr; - } else if (method == 2) { - weight = std::exp(-(ovr * ovr) / sigma); - } - sc[pos] *= weight; - // if box score falls below threshold, discard the box by - // swapping with last box update N - if (sc[pos] < min_score) { - x1[pos] = x1[nboxes - 1]; - y1[pos] = y1[nboxes - 1]; - x2[pos] = x2[nboxes - 1]; - y2[pos] = y2[nboxes - 1]; - sc[pos] = sc[nboxes - 1]; - areas[pos] = areas[nboxes - 1]; - inds[pos] = inds[nboxes - 1]; - nboxes = nboxes - 1; - pos = pos - 1; - } - pos = pos + 1; - } - } - return inds_t.slice(0, 0, nboxes); -} - -Tensor softnms_impl(Tensor boxes, Tensor scores, Tensor dets, - float iou_threshold, float sigma, float min_score, - int method, int offset); -REGISTER_DEVICE_IMPL(softnms_impl, CPU, softnms_cpu); - -std::vector > nms_match_cpu(Tensor dets, float iou_threshold) { - auto x1_t = dets.select(1, 0).contiguous(); - auto y1_t = dets.select(1, 1).contiguous(); - auto x2_t = dets.select(1, 2).contiguous(); - auto y2_t = dets.select(1, 3).contiguous(); - auto scores = dets.select(1, 4).contiguous(); - - at::Tensor areas_t = (x2_t - x1_t) * (y2_t - y1_t); - - auto order_t = std::get<1>(scores.sort(0, /* descending=*/true)); - - auto ndets = dets.size(0); - at::Tensor suppressed_t = - at::zeros({ndets}, dets.options().dtype(at::kByte).device(at::kCPU)); - - auto suppressed = suppressed_t.data_ptr(); - auto order = order_t.data_ptr(); - auto x1 = x1_t.data_ptr(); - auto y1 = y1_t.data_ptr(); - auto x2 = x2_t.data_ptr(); - auto y2 = y2_t.data_ptr(); - auto areas = areas_t.data_ptr(); - - std::vector keep; - std::vector > matched; - - for (int64_t _i = 0; _i < ndets; _i++) { - auto i = order[_i]; - if (suppressed[i] == 1) continue; - keep.push_back(i); - std::vector v_i; - auto ix1 = x1[i]; - auto iy1 = y1[i]; - auto ix2 = x2[i]; - auto iy2 = y2[i]; - auto iarea = areas[i]; - - for (int64_t _j = _i + 1; _j < ndets; _j++) { - auto j = order[_j]; - if (suppressed[j] == 1) continue; - auto xx1 = std::max(ix1, x1[j]); - auto yy1 = std::max(iy1, y1[j]); - auto xx2 = std::min(ix2, x2[j]); - auto yy2 = std::min(iy2, y2[j]); - - auto w = std::max(static_cast(0), xx2 - xx1); - auto h = std::max(static_cast(0), yy2 - yy1); - auto inter = w * h; - auto ovr = inter / (iarea + areas[j] - inter); - if (ovr >= iou_threshold) { - suppressed[j] = 1; - v_i.push_back(j); - 
      }
-    }
-    matched.push_back(v_i);
-  }
-  for (size_t i = 0; i < keep.size(); i++)
-    matched[i].insert(matched[i].begin(), keep[i]);
-  return matched;
-}
-
-std::vector<std::vector<int> > nms_match_impl(Tensor dets, float iou_threshold);
-REGISTER_DEVICE_IMPL(nms_match_impl, CPU, nms_match_cpu);
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/nms_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/nms_rotated.cpp
deleted file mode 100644
index d2774c82..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/nms_rotated.cpp
+++ /dev/null
@@ -1,66 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-// modified from
-// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/nms_rotated/nms_rotated_cpu.cpp
-#include "box_iou_rotated_utils.hpp"
-#include "pytorch_cpp_helper.hpp"
-
-template <typename scalar_t>
-Tensor nms_rotated_cpu_kernel(const Tensor dets, const Tensor scores,
-                              const float iou_threshold) {
-  // nms_rotated_cpu_kernel is modified from torchvision's nms_cpu_kernel,
-  // however, the code in this function is much shorter because
-  // we delegate the IoU computation for rotated boxes to
-  // the single_box_iou_rotated function in box_iou_rotated_utils.h
-  AT_ASSERTM(!dets.is_cuda(), "dets must be a CPU tensor");
-  AT_ASSERTM(!scores.is_cuda(), "scores must be a CPU tensor");
-  AT_ASSERTM(dets.scalar_type() == scores.scalar_type(),
-             "dets should have the same type as scores");
-
-  if (dets.numel() == 0) {
-    return at::empty({0}, dets.options().dtype(at::kLong));
-  }
-
-  auto order_t = std::get<1>(scores.sort(0, /* descending=*/true));
-
-  auto ndets = dets.size(0);
-  Tensor suppressed_t = at::zeros({ndets}, dets.options().dtype(at::kByte));
-  Tensor keep_t = at::zeros({ndets}, dets.options().dtype(at::kLong));
-
-  auto suppressed = suppressed_t.data_ptr<uint8_t>();
-  auto keep = keep_t.data_ptr<int64_t>();
-  auto order = order_t.data_ptr<int64_t>();
-
-  int64_t num_to_keep = 0;
-
-  for (int64_t _i = 0; _i < ndets; _i++) {
-    auto i = order[_i];
-    if (suppressed[i] == 1) {
-      continue;
-    }
-
-    keep[num_to_keep++] = i;
-
-    for (int64_t _j = _i + 1; _j < ndets; _j++) {
-      auto j = order[_j];
-      if (suppressed[j] == 1) {
-        continue;
-      }
-
-      auto ovr = single_box_iou_rotated<scalar_t>(
-          dets[i].data_ptr<scalar_t>(), dets[j].data_ptr<scalar_t>(), 0);
-      if (ovr >= iou_threshold) {
-        suppressed[j] = 1;
-      }
-    }
-  }
-  return keep_t.narrow(/*dim=*/0, /*start=*/0, /*length=*/num_to_keep);
-}
-
-Tensor nms_rotated_cpu(const Tensor dets, const Tensor scores,
-                       const float iou_threshold) {
-  auto result = at::empty({0}, dets.options());
-  AT_DISPATCH_FLOATING_TYPES(dets.scalar_type(), "nms_rotated", [&] {
-    result = nms_rotated_cpu_kernel<scalar_t>(dets, scores, iou_threshold);
-  });
-  return result;
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/pixel_group.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/pixel_group.cpp
deleted file mode 100755
index 1d185f94..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/pixel_group.cpp
+++ /dev/null
@@ -1,126 +0,0 @@
-// Copyright (c) OpenMMLab.
All rights reserved -// It is modified from https://github.com/WenmuZhou/PAN.pytorch - -#include - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -std::vector> estimate_confidence(int32_t* label, - float* score, int label_num, - int height, int width) { - std::vector> point_vector; - for (int i = 0; i < label_num; i++) { - std::vector point; - point.push_back(0); - point.push_back(0); - point_vector.push_back(point); - } - for (int y = 0; y < height; y++) { - auto label_tmp = label + y * width; - auto score_tmp = score + y * width; - for (int x = 0; x < width; x++) { - auto l = label_tmp[x]; - if (l > 0) { - float confidence = score_tmp[x]; - point_vector[l].push_back(x); - point_vector[l].push_back(y); - point_vector[l][0] += confidence; - point_vector[l][1] += 1; - } - } - } - for (size_t l = 0; l < point_vector.size(); l++) - if (point_vector[l][1] > 0) { - point_vector[l][0] /= point_vector[l][1]; - } - return point_vector; -} -std::vector> pixel_group_cpu( - Tensor score, Tensor mask, Tensor embedding, Tensor kernel_label, - Tensor kernel_contour, int kernel_region_num, float dis_threshold) { - assert(score.dim() == 2); - assert(mask.dim() == 2); - assert(embedding_dim.dim() == 3); - int height = score.size(0); - int width = score.size(1); - assert(height == mask.size(0) == embedding.size(1) == kernel_label.size(1)); - assert(width == mask.size(1) == embedding.size(2) == kernel_label.size(2)); - - auto threshold_square = dis_threshold * dis_threshold; - auto ptr_score = score.data_ptr(); - auto ptr_mask = mask.data_ptr(); - auto ptr_kernel_contour = kernel_contour.data_ptr(); - auto ptr_embedding = embedding.data_ptr(); - auto ptr_kernel_label = kernel_label.data_ptr(); - std::queue> contour_pixels; - auto embedding_dim = embedding.size(2); - std::vector> kernel_vector( - kernel_region_num, std::vector(embedding_dim + 1, 0)); - - Tensor text_label; - text_label = kernel_label.clone(); - auto ptr_text_label = text_label.data_ptr(); - - for (int i = 0; i < height; i++) { - auto ptr_embedding_tmp = ptr_embedding + i * width * embedding_dim; - auto ptr_kernel_label_tmp = ptr_kernel_label + i * width; - auto ptr_kernel_contour_tmp = ptr_kernel_contour + i * width; - - for (int j = 0, k = 0; j < width && k < width * embedding_dim; - j++, k += embedding_dim) { - int32_t label = ptr_kernel_label_tmp[j]; - if (label > 0) { - for (int d = 0; d < embedding_dim; d++) - kernel_vector[label][d] += ptr_embedding_tmp[k + d]; - kernel_vector[label][embedding_dim] += 1; - // kernel pixel number - if (ptr_kernel_contour_tmp[j]) { - contour_pixels.push(std::make_tuple(i, j, label)); - } - } - } - } - for (int i = 0; i < kernel_region_num; i++) { - for (int j = 0; j < embedding_dim; j++) { - kernel_vector[i][j] /= kernel_vector[i][embedding_dim]; - } - } - int dx[4] = {-1, 1, 0, 0}; - int dy[4] = {0, 0, -1, 1}; - while (!contour_pixels.empty()) { - auto query_pixel = contour_pixels.front(); - contour_pixels.pop(); - int y = std::get<0>(query_pixel); - int x = std::get<1>(query_pixel); - int32_t l = std::get<2>(query_pixel); - auto kernel_cv = kernel_vector[l]; - for (int idx = 0; idx < 4; idx++) { - int tmpy = y + dy[idx]; - int tmpx = x + dx[idx]; - auto ptr_text_label_tmp = ptr_text_label + tmpy * width; - if (tmpy < 0 || tmpy >= height || tmpx < 0 || tmpx >= width) continue; - if (!ptr_mask[tmpy * width + tmpx] || ptr_text_label_tmp[tmpx] > 0) - continue; - - float dis = 0; - auto ptr_embedding_tmp = ptr_embedding + tmpy * width * embedding_dim; - for (size_t i = 0; i < 
size_t(embedding_dim); i++) {
-        dis +=
-            pow(kernel_cv[i] - ptr_embedding_tmp[tmpx * embedding_dim + i], 2);
-        // ignore further computing if dis is big enough
-        if (dis >= threshold_square) break;
-      }
-      if (dis >= threshold_square) continue;
-      contour_pixels.push(std::make_tuple(tmpy, tmpx, l));
-      ptr_text_label_tmp[tmpx] = l;
-    }
-  }
-
-  return estimate_confidence(ptr_text_label, ptr_score, kernel_region_num,
-                             height, width);
-}
-std::vector<std::vector<float>> pixel_group_impl(
-    Tensor score, Tensor mask, Tensor embedding, Tensor kernel_label,
-    Tensor kernel_contour, int kernel_region_num, float dis_threshold);
-REGISTER_DEVICE_IMPL(pixel_group_impl, CPU, pixel_group_cpu);
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/points_in_boxes.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/points_in_boxes.cpp
deleted file mode 100644
index c16baa4c..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/points_in_boxes.cpp
+++ /dev/null
@@ -1,53 +0,0 @@
-#include "pytorch_cpp_helper.hpp"
-
-inline void lidar_to_local_coords_cpu(float shift_x, float shift_y, float rz,
-                                      float &local_x, float &local_y) {
-  float cosa = cos(-rz), sina = sin(-rz);
-  local_x = shift_x * cosa + shift_y * (-sina);
-  local_y = shift_x * sina + shift_y * cosa;
-}
-
-inline int check_pt_in_box3d_cpu(const float *pt, const float *box3d,
-                                 float &local_x, float &local_y) {
-  // param pt: (x, y, z)
-  // param box3d: (cx, cy, cz, x_size, y_size, z_size, rz) in LiDAR coordinate,
-  // cz in the bottom center
-  float x = pt[0], y = pt[1], z = pt[2];
-  float cx = box3d[0], cy = box3d[1], cz = box3d[2];
-  float x_size = box3d[3], y_size = box3d[4], z_size = box3d[5], rz = box3d[6];
-  cz += z_size /
-        2.0;  // shift to the center since cz in box3d is the bottom center
-
-  if (fabsf(z - cz) > z_size / 2.0) return 0;
-  lidar_to_local_coords_cpu(x - cx, y - cy, rz, local_x, local_y);
-  float in_flag = (local_x > -x_size / 2.0) & (local_x < x_size / 2.0) &
-                  (local_y > -y_size / 2.0) & (local_y < y_size / 2.0);
-  return in_flag;
-}
-
-void points_in_boxes_cpu_forward(Tensor boxes_tensor, Tensor pts_tensor,
-                                 Tensor pts_indices_tensor) {
-  // params boxes: (N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR
-  // coordinate, z is the bottom center, each box DO NOT overlaps params pts:
-  // (npoints, 3) [x, y, z] in LiDAR coordinate params pts_indices: (N, npoints)
-
-  CHECK_CONTIGUOUS(boxes_tensor);
-  CHECK_CONTIGUOUS(pts_tensor);
-  CHECK_CONTIGUOUS(pts_indices_tensor);
-
-  int boxes_num = boxes_tensor.size(0);
-  int pts_num = pts_tensor.size(0);
-
-  const float *boxes = boxes_tensor.data_ptr<float>();
-  const float *pts = pts_tensor.data_ptr<float>();
-  int *pts_indices = pts_indices_tensor.data_ptr<int>();
-
-  float local_x = 0, local_y = 0;
-  for (int i = 0; i < boxes_num; i++) {
-    for (int j = 0; j < pts_num; j++) {
-      int cur_in_flag =
-          check_pt_in_box3d_cpu(pts + j * 3, boxes + i * 7, local_x, local_y);
-      pts_indices[i * pts_num + j] = cur_in_flag;
-    }
-  }
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/psamask.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/psamask.cpp
deleted file mode 100644
index aa7fdcbd..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/psamask.cpp
+++ /dev/null
@@ -1,199 +0,0 @@
-// Copyright (c) OpenMMLab.
All rights reserved -// Modified from -// https://github.com/hszhao/semseg/blob/master/lib/psa/src -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -#ifndef min -#define min(a, b) (((a) < (b)) ? (a) : (b)) -#endif -#ifndef max -#define max(a, b) (((a) > (b)) ? (a) : (b)) -#endif - -void psamask_collect_forward(const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask, const Tensor mask_data, - Tensor buffer_data) { - for (int n = 0; n < num_; n++) { - for (int h = 0; h < h_feature; h++) { - for (int w = 0; w < w_feature; w++) { - // effective mask region : [hstart, hend) x [wstart, wend) with - // mask-indexed - const int hstart = max(0, half_h_mask - h); - const int hend = min(h_mask, h_feature + half_h_mask - h); - const int wstart = max(0, half_w_mask - w); - const int wend = min(w_mask, w_feature + half_w_mask - w); - // (hidx, widx ) with mask-indexed - // (hidx + h - half_h_mask, widx + w - half_w_mask) with - // feature-indexed - for (int hidx = hstart; hidx < hend; hidx++) { - for (int widx = wstart; widx < wend; widx++) { - buffer_data.view({-1})[(n * h_feature * w_feature + - (hidx + h - half_h_mask) * w_feature + - (widx + w - half_w_mask)) * - h_feature * w_feature + - h * w_feature + w] = - mask_data.view( - {-1})[((n * h_mask * w_mask + hidx * w_mask + widx) * - h_feature + - h) * - w_feature + - w]; - } - } - } - } - } -} - -void psamask_distribute_forward(const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask, const Tensor mask_data, - Tensor buffer_data) { - for (int n = 0; n < num_; n++) { - for (int h = 0; h < h_feature; h++) { - for (int w = 0; w < w_feature; w++) { - // effective mask region : [hstart, hend) x [wstart, wend) with - // mask-indexed - const int hstart = max(0, half_h_mask - h); - const int hend = min(h_mask, h_feature + half_h_mask - h); - const int wstart = max(0, half_w_mask - w); - const int wend = min(w_mask, w_feature + half_w_mask - w); - // (hidx, widx ) with mask-indexed - // (hidx + h - half_h_mask, widx + w - half_w_mask) with - // feature-indexed - for (int hidx = hstart; hidx < hend; hidx++) { - for (int widx = wstart; widx < wend; widx++) { - buffer_data.view( - {-1})[(n * h_feature * w_feature + h * w_feature + w) * - h_feature * w_feature + - (hidx + h - half_h_mask) * w_feature + - (widx + w - half_w_mask)] = - mask_data.view( - {-1})[((n * h_mask * w_mask + hidx * w_mask + widx) * - h_feature + - h) * - w_feature + - w]; - } - } - } - } - } -} - -void psamask_collect_backward(const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask, const Tensor buffer_diff, - Tensor mask_diff) { - for (int n = 0; n < num_; n++) { - for (int h = 0; h < h_feature; h++) { - for (int w = 0; w < w_feature; w++) { - // effective mask region : [hstart, hend) x [wstart, wend) with - // mask-indexed - const int hstart = max(0, half_h_mask - h); - const int hend = min(h_mask, h_feature + half_h_mask - h); - const int wstart = max(0, half_w_mask - w); - const int wend = min(w_mask, w_feature + half_w_mask - w); - // (hidx, widx ) with mask-indexed - // (hidx + h - half_h_mask, widx + w - half_w_mask) with - // feature-indexed - for (int hidx = hstart; hidx < hend; hidx++) { - for (int widx = wstart; widx < wend; widx++) { - mask_diff.view({-1})[((n * h_mask * w_mask + hidx * 
w_mask + widx) * - h_feature + - h) * - w_feature + - w] = - buffer_diff.view({-1})[(n * h_feature * w_feature + - (hidx + h - half_h_mask) * w_feature + - (widx + w - half_w_mask)) * - h_feature * w_feature + - h * w_feature + w]; - } - } - } - } - } -} - -void psamask_distribute_backward(const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask, - const Tensor buffer_diff, Tensor mask_diff) { - for (int n = 0; n < num_; n++) { - for (int h = 0; h < h_feature; h++) { - for (int w = 0; w < w_feature; w++) { - // effective mask region : [hstart, hend) x [wstart, wend) with - // mask-indexed - const int hstart = max(0, half_h_mask - h); - const int hend = min(h_mask, h_feature + half_h_mask - h); - const int wstart = max(0, half_w_mask - w); - const int wend = min(w_mask, w_feature + half_w_mask - w); - // (hidx, widx ) with mask-indexed - // (hidx + h - half_h_mask, widx + w - half_w_mask) with - // feature-indexed - for (int hidx = hstart; hidx < hend; hidx++) { - for (int widx = wstart; widx < wend; widx++) { - mask_diff.view({-1})[((n * h_mask * w_mask + hidx * w_mask + widx) * - h_feature + - h) * - w_feature + - w] = - buffer_diff.view( - {-1})[(n * h_feature * w_feature + h * w_feature + w) * - h_feature * w_feature + - (hidx + h - half_h_mask) * w_feature + - (widx + w - half_w_mask)]; - } - } - } - } - } -} - -void psamask_forward_cpu(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask) { - if (psa_type == 0) - psamask_collect_forward(num_, h_feature, w_feature, h_mask, w_mask, - half_h_mask, half_w_mask, input, output); - else - psamask_distribute_forward(num_, h_feature, w_feature, h_mask, w_mask, - half_h_mask, half_w_mask, input, output); -} - -void psamask_backward_cpu(const int psa_type, const Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask) { - if (psa_type == 0) - psamask_collect_backward(num_, h_feature, w_feature, h_mask, w_mask, - half_h_mask, half_w_mask, grad_output, grad_input); - else - psamask_distribute_backward(num_, h_feature, w_feature, h_mask, w_mask, - half_h_mask, half_w_mask, grad_output, - grad_input); -} - -void psamask_forward_impl(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask); - -void psamask_backward_impl(const int psa_type, const Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask); -REGISTER_DEVICE_IMPL(psamask_forward_impl, CPU, psamask_forward_cpu); -REGISTER_DEVICE_IMPL(psamask_backward_impl, CPU, psamask_backward_cpu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/roi_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/roi_align.cpp deleted file mode 100644 index d5453906..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/roi_align.cpp +++ /dev/null @@ -1,466 +0,0 @@ -// Modified from -// https://github.com/facebookresearch/detectron2/tree/master/detectron2/layers/csrc/ROIAlign -// 
Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -#include -#include - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -// implementation taken from Caffe2 -template -struct PreCalc { - int pos1; - int pos2; - int pos3; - int pos4; - T w1; - T w2; - T w3; - T w4; -}; - -template -void pre_calc_for_bilinear_interpolate( - const int height, const int width, const int pooled_height, - const int pooled_width, const int iy_upper, const int ix_upper, - T roi_start_h, T roi_start_w, T bin_size_h, T bin_size_w, - int roi_bin_grid_h, int roi_bin_grid_w, std::vector>& pre_calc) { - int pre_calc_index = 0; - for (int ph = 0; ph < pooled_height; ph++) { - for (int pw = 0; pw < pooled_width; pw++) { - for (int iy = 0; iy < iy_upper; iy++) { - const T yy = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < ix_upper; ix++) { - const T xx = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - T x = xx; - T y = yy; - // deal with: inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - PreCalc pc; - pc.pos1 = 0; - pc.pos2 = 0; - pc.pos3 = 0; - pc.pos4 = 0; - pc.w1 = 0; - pc.w2 = 0; - pc.w3 = 0; - pc.w4 = 0; - pre_calc[pre_calc_index] = pc; - pre_calc_index += 1; - continue; - } - - if (y <= 0) { - y = 0; - } - if (x <= 0) { - x = 0; - } - - int y_low = (int)y; - int x_low = (int)x; - int y_high; - int x_high; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. - lx; - T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - // save weights and indices - PreCalc pc; - pc.pos1 = y_low * width + x_low; - pc.pos2 = y_low * width + x_high; - pc.pos3 = y_high * width + x_low; - pc.pos4 = y_high * width + x_high; - pc.w1 = w1; - pc.w2 = w2; - pc.w3 = w3; - pc.w4 = w4; - pre_calc[pre_calc_index] = pc; - - pre_calc_index += 1; - } - } - } - } -} - -template -void ROIAlignForward(const int nthreads, const T* input, const T* rois, - T* output, T* argmax_y, T* argmax_x, - const int pooled_height, const int pooled_width, - const T spatial_scale, const int sampling_ratio, - const int pool_mode, // 0 - max pool, 1 - avg pool - const bool aligned, const int channels, const int height, - const int width) { - int n_rois = nthreads / channels / pooled_width / pooled_height; - // (n, c, ph, pw) is an element in the pooled output - // can be parallelized using omp - // #pragma omp parallel for num_threads(32) - for (int n = 0; n < n_rois; n++) { - int index_n = n * channels * pooled_width * pooled_height; - - const T* offset_rois = rois + n * 5; - int roi_batch_ind = offset_rois[0]; - - // Do not use rounding; this implementation detail is critical - T offset = aligned ? 
(T)0.5 : (T)0.0; - T roi_start_w = offset_rois[1] * spatial_scale - offset; - T roi_start_h = offset_rois[2] * spatial_scale - offset; - T roi_end_w = offset_rois[3] * spatial_scale - offset; - T roi_end_h = offset_rois[4] * spatial_scale - offset; - - T roi_width = roi_end_w - roi_start_w; - T roi_height = roi_end_h - roi_start_h; - if (aligned) { - AT_ASSERTM(roi_width >= 0 && roi_height >= 0, - "ROIs in ROIAlign cannot have non-negative size!"); - } else { // for backward-compatibility only - roi_width = std::max(roi_width, (T)1.); - roi_height = std::max(roi_height, (T)1.); - } - T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); - T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (sampling_ratio > 0) - ? sampling_ratio - : ceilf(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (sampling_ratio > 0) ? sampling_ratio : ceilf(roi_width / pooled_width); - - // When the grid is empty, output zeros == 0/1, instead of NaN. - const T count = std::max(roi_bin_grid_h * roi_bin_grid_w, 1); // e.g. = 4 - - // we want to precalculate indices and weights shared by all channels, - // this is the key point of optimization - std::vector> pre_calc(roi_bin_grid_h * roi_bin_grid_w * - pooled_width * pooled_height); - pre_calc_for_bilinear_interpolate( - height, width, pooled_height, pooled_width, roi_bin_grid_h, - roi_bin_grid_w, roi_start_h, roi_start_w, bin_size_h, bin_size_w, - roi_bin_grid_h, roi_bin_grid_w, pre_calc); - - for (int c = 0; c < channels; c++) { - int index_n_c = index_n + c * pooled_width * pooled_height; - const T* offset_input = - input + (roi_batch_ind * channels + c) * height * width; - int pre_calc_index = 0; - - for (int ph = 0; ph < pooled_height; ph++) { - for (int pw = 0; pw < pooled_width; pw++) { - int index = index_n_c + ph * pooled_width + pw; - - T output_val = 0.; - T maxval = -10000; - T maxidx_y = -1.f, maxidx_x = -1.f; - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - const T y = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const T x = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - PreCalc pc = pre_calc[pre_calc_index]; - T val = pc.w1 * offset_input[pc.pos1] + - pc.w2 * offset_input[pc.pos2] + - pc.w3 * offset_input[pc.pos3] + - pc.w4 * offset_input[pc.pos4]; - if (val > maxval) { - maxval = val; - maxidx_y = y; - maxidx_x = x; - } - output_val += val; - pre_calc_index += 1; - } - } - if (pool_mode == 0) { - // We do max pooling inside a bin - output[index] = maxval; - argmax_y[index] = maxidx_y; - argmax_x[index] = maxidx_x; - } else if (pool_mode == 1) { - // We do average (integral) pooling inside a bin - output[index] = output_val / count; - } // if - } // for pw - } // for ph - } // for c - } // for n -} - -template -void bilinear_interpolate_gradient(const int height, const int width, T y, T x, - T& w1, T& w2, T& w3, T& w4, int& x_low, - int& x_high, int& y_low, int& y_high, - const int index /* index for debug only*/) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - w1 = w2 = w3 = w4 = 0.; - x_low = x_high = y_low = y_high = -1; - return; - } - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - y_low = (int)y; - x_low = (int)x; - - if (y_low >= height - 1) { - y_high = 
y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. - lx; - - // reference in forward - // T v1 = input[y_low * width + x_low]; - // T v2 = input[y_low * width + x_high]; - // T v3 = input[y_high * width + x_low]; - // T v4 = input[y_high * width + x_high]; - // T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - - w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - return; -} - -template -inline void add(T* address, const T& val) { - *address += val; -} - -template -void ROIAlignBackward(const int nthreads, const T* grad_output, const T* rois, - const T* argmax_y, const T* argmax_x, T* grad_input, - const int pooled_height, const int pooled_width, - const T spatial_scale, const int sampling_ratio, - const int pool_mode, // 0 - max pool, 1 - avg pool - const bool aligned, const int channels, const int height, - const int width, const int n_stride, const int c_stride, - const int h_stride, const int w_stride) { - for (int index = 0; index < nthreads; index++) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - const T* offset_rois = rois + n * 5; - int roi_batch_ind = offset_rois[0]; - - // Do not use rounding; this implementation detail is critical - T offset = aligned ? (T)0.5 : (T)0.0; - T roi_start_w = offset_rois[1] * spatial_scale - offset; - T roi_start_h = offset_rois[2] * spatial_scale - offset; - T roi_end_w = offset_rois[3] * spatial_scale - offset; - T roi_end_h = offset_rois[4] * spatial_scale - offset; - - T roi_width = roi_end_w - roi_start_w; - T roi_height = roi_end_h - roi_start_h; - if (aligned) { - AT_ASSERTM(roi_width >= 0 && roi_height >= 0, - "ROIs in ROIAlign do not have non-negative size!"); - } else { // for backward-compatibility only - roi_width = std::max(roi_width, (T)1.); - roi_height = std::max(roi_height, (T)1.); - } - T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); - T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); - - T* offset_grad_input = - grad_input + ((roi_batch_ind * channels + c) * height * width); - - int output_offset = n * n_stride + c * c_stride; - const T* offset_grad_output = grad_output + output_offset; - const T grad_output_this_bin = - offset_grad_output[ph * h_stride + pw * w_stride]; - - if (pool_mode == 0) { - // We do max pooling inside a bin - T y = argmax_y[index], x = argmax_x[index]; - if (y != -1.f) { - T w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - bilinear_interpolate_gradient(height, width, y, x, w1, w2, w3, w4, - x_low, x_high, y_low, y_high, index); - - T g1 = grad_output_this_bin * w1; - T g2 = grad_output_this_bin * w2; - T g3 = grad_output_this_bin * w3; - T g4 = grad_output_this_bin * w4; - - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - // atomic add is not needed for now since it is single threaded - add(offset_grad_input + y_low * width + x_low, static_cast(g1)); - add(offset_grad_input + y_low * width + x_high, static_cast(g2)); - add(offset_grad_input + y_high * width + x_low, static_cast(g3)); - add(offset_grad_input + y_high * width + x_high, static_cast(g4)); - } // if - } // mode - } else if (pool_mode == 1) { - // We do average 
(integral) pooling inside a bin - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = - (sampling_ratio > 0) - ? sampling_ratio - : ceilf(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = (sampling_ratio > 0) - ? sampling_ratio - : ceilf(roi_width / pooled_width); - - const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. = 4 - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - const T y = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const T x = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - T w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - - bilinear_interpolate_gradient(height, width, y, x, w1, w2, w3, w4, - x_low, x_high, y_low, y_high, index); - - T g1 = grad_output_this_bin * w1 / count; - T g2 = grad_output_this_bin * w2 / count; - T g3 = grad_output_this_bin * w3 / count; - T g4 = grad_output_this_bin * w4 / count; - - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - // atomic add is not needed for now since it is single threaded - add(offset_grad_input + y_low * width + x_low, static_cast(g1)); - add(offset_grad_input + y_low * width + x_high, static_cast(g2)); - add(offset_grad_input + y_high * width + x_low, static_cast(g3)); - add(offset_grad_input + y_high * width + x_high, - static_cast(g4)); - } // if - } // ix - } // iy - } // mode - } // for -} // ROIAlignBackward - -void ROIAlignForwardCPULauncher(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - int output_size = output.numel(); - int channels = input.size(1); - int height = input.size(2); - int width = input.size(3); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "ROIAlign_forward", [&] { - ROIAlignForward( - output_size, input.data_ptr(), rois.data_ptr(), - output.data_ptr(), argmax_y.data_ptr(), - argmax_x.data_ptr(), aligned_height, aligned_width, - static_cast(spatial_scale), sampling_ratio, pool_mode, - aligned, channels, height, width); - }); -} - -void ROIAlignBackwardCPULauncher(Tensor grad_output, Tensor rois, - Tensor argmax_y, Tensor argmax_x, - Tensor grad_input, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, - bool aligned) { - int output_size = grad_output.numel(); - int channels = grad_input.size(1); - int height = grad_input.size(2); - int width = grad_input.size(3); - - // get stride values to ensure indexing into gradients is correct. 
- int n_stride = grad_output.stride(0); - int c_stride = grad_output.stride(1); - int h_stride = grad_output.stride(2); - int w_stride = grad_output.stride(3); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_output.scalar_type(), "ROIAlign_backward", [&] { - ROIAlignBackward( - output_size, grad_output.data_ptr(), - rois.data_ptr(), argmax_y.data_ptr(), - argmax_x.data_ptr(), grad_input.data_ptr(), - aligned_height, aligned_width, static_cast(spatial_scale), - sampling_ratio, pool_mode, aligned, channels, height, width, - n_stride, c_stride, h_stride, w_stride); - }); -} - -void roi_align_forward_cpu(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned) { - ROIAlignForwardCPULauncher(input, rois, output, argmax_y, argmax_x, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} - -void roi_align_backward_cpu(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - ROIAlignBackwardCPULauncher(grad_output, rois, argmax_y, argmax_x, grad_input, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} - -void roi_align_forward_impl(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -void roi_align_backward_impl(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -REGISTER_DEVICE_IMPL(roi_align_forward_impl, CPU, roi_align_forward_cpu); -REGISTER_DEVICE_IMPL(roi_align_backward_impl, CPU, roi_align_backward_cpu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/roi_align_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/roi_align_rotated.cpp deleted file mode 100644 index 8c849de0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/roi_align_rotated.cpp +++ /dev/null @@ -1,455 +0,0 @@ -// Modified from -// https://github.com/facebookresearch/detectron2/tree/master/detectron2/layers/csrc/ROIAlignRotated -// Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -#include -#include - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -// implementation taken from Caffe2 -template -struct PreCalc { - int pos1; - int pos2; - int pos3; - int pos4; - T w1; - T w2; - T w3; - T w4; -}; - -template -void pre_calc_for_bilinear_interpolate( - const int height, const int width, const int pooled_height, - const int pooled_width, const int iy_upper, const int ix_upper, - T roi_start_h, T roi_start_w, T bin_size_h, T bin_size_w, - int roi_bin_grid_h, int roi_bin_grid_w, T roi_center_h, T roi_center_w, - T cos_theta, T sin_theta, std::vector>& pre_calc) { - int pre_calc_index = 0; - for (int ph = 0; ph < pooled_height; ph++) { - for (int pw = 0; pw < pooled_width; pw++) { - for (int iy = 0; iy < iy_upper; iy++) { - const T yy = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < ix_upper; ix++) { - const T xx = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - // Rotate by theta around the center and translate - // In image space, (y, x) is the order for Right Handed System, - // and this is essentially multiplying the point by a rotation matrix - // to rotate it counterclockwise through angle theta. - T y = yy * cos_theta - xx * sin_theta + roi_center_h; - T x = yy * sin_theta + xx * cos_theta + roi_center_w; - // deal with: inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - PreCalc pc; - pc.pos1 = 0; - pc.pos2 = 0; - pc.pos3 = 0; - pc.pos4 = 0; - pc.w1 = 0; - pc.w2 = 0; - pc.w3 = 0; - pc.w4 = 0; - pre_calc[pre_calc_index] = pc; - pre_calc_index += 1; - continue; - } - - if (y < 0) { - y = 0; - } - if (x < 0) { - x = 0; - } - - int y_low = (int)y; - int x_low = (int)x; - int y_high; - int x_high; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. - lx; - T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - // save weights and indices - PreCalc pc; - pc.pos1 = y_low * width + x_low; - pc.pos2 = y_low * width + x_high; - pc.pos3 = y_high * width + x_low; - pc.pos4 = y_high * width + x_high; - pc.w1 = w1; - pc.w2 = w2; - pc.w3 = w3; - pc.w4 = w4; - pre_calc[pre_calc_index] = pc; - - pre_calc_index += 1; - } - } - } - } -} - -template -void ROIAlignRotatedForward(const int nthreads, const T* input, - const T& spatial_scale, const bool aligned, - const bool clockwise, const int channels, - const int height, const int width, - const int pooled_height, const int pooled_width, - const int sampling_ratio, const T* rois, - T* output) { - int n_rois = nthreads / channels / pooled_width / pooled_height; - // (n, c, ph, pw) is an element in the pooled output - // can be parallelized using omp - // #pragma omp parallel for num_threads(32) - for (int n = 0; n < n_rois; n++) { - int index_n = n * channels * pooled_width * pooled_height; - - const T* current_roi = rois + n * 6; - int roi_batch_ind = current_roi[0]; - - // Do not use rounding; this implementation detail is critical - T offset = aligned ? 
(T)0.5 : (T)0.0; - T roi_center_w = current_roi[1] * spatial_scale - offset; - T roi_center_h = current_roi[2] * spatial_scale - offset; - T roi_width = current_roi[3] * spatial_scale; - T roi_height = current_roi[4] * spatial_scale; - T theta = current_roi[5]; - if (clockwise) { - theta = -theta; // If clockwise, the angle needs to be reversed. - } - T cos_theta = cos(theta); - T sin_theta = sin(theta); - - if (aligned) { - AT_ASSERTM(roi_width >= 0 && roi_height >= 0, - "ROIs in ROIAlignRotated do not have non-negative size!"); - } else { // for backward-compatibility only - roi_width = std::max(roi_width, (T)1.); - roi_height = std::max(roi_height, (T)1.); - } - - T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); - T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (sampling_ratio > 0) - ? sampling_ratio - : ceilf(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (sampling_ratio > 0) ? sampling_ratio : ceilf(roi_width / pooled_width); - - // We do average (integral) pooling inside a bin - const T count = std::max(roi_bin_grid_h * roi_bin_grid_w, 1); // e.g. = 4 - - // we want to precalculate indices and weights shared by all channels, - // this is the key point of optimization - std::vector> pre_calc(roi_bin_grid_h * roi_bin_grid_w * - pooled_width * pooled_height); - - // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). - // Appropriate translation needs to be applied after. - T roi_start_h = -roi_height / 2.0; - T roi_start_w = -roi_width / 2.0; - - pre_calc_for_bilinear_interpolate( - height, width, pooled_height, pooled_width, roi_bin_grid_h, - roi_bin_grid_w, roi_start_h, roi_start_w, bin_size_h, bin_size_w, - roi_bin_grid_h, roi_bin_grid_w, roi_center_h, roi_center_w, cos_theta, - sin_theta, pre_calc); - - for (int c = 0; c < channels; c++) { - int index_n_c = index_n + c * pooled_width * pooled_height; - const T* offset_input = - input + (roi_batch_ind * channels + c) * height * width; - int pre_calc_index = 0; - - for (int ph = 0; ph < pooled_height; ph++) { - for (int pw = 0; pw < pooled_width; pw++) { - int index = index_n_c + ph * pooled_width + pw; - - T output_val = 0.; - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - PreCalc pc = pre_calc[pre_calc_index]; - output_val += pc.w1 * offset_input[pc.pos1] + - pc.w2 * offset_input[pc.pos2] + - pc.w3 * offset_input[pc.pos3] + - pc.w4 * offset_input[pc.pos4]; - - pre_calc_index += 1; - } - } - output_val /= count; - - output[index] = output_val; - } // for pw - } // for ph - } // for c - } // for n -} - -template -void bilinear_interpolate_gradient(const int height, const int width, T y, T x, - T& w1, T& w2, T& w3, T& w4, int& x_low, - int& x_high, int& y_low, int& y_high) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - w1 = w2 = w3 = w4 = 0.; - x_low = x_high = y_low = y_high = -1; - return; - } - - if (y < 0) { - y = 0; - } - - if (x < 0) { - x = 0; - } - - y_low = (int)y; - x_low = (int)x; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. 
- lx; - - // reference in forward - // T v1 = input[y_low * width + x_low]; - // T v2 = input[y_low * width + x_high]; - // T v3 = input[y_high * width + x_low]; - // T v4 = input[y_high * width + x_high]; - // T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - - w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - return; -} - -template -inline void add(T* address, const T& val) { - *address += val; -} - -template -void ROIAlignRotatedBackward( - const int nthreads, - // may not be contiguous. should index using n_stride, etc - const T* grad_output, const T& spatial_scale, const bool aligned, - const bool clockwise, const int channels, const int height, const int width, - const int pooled_height, const int pooled_width, const int sampling_ratio, - T* grad_input, const T* rois, const int n_stride, const int c_stride, - const int h_stride, const int w_stride) { - for (int index = 0; index < nthreads; index++) { - // (n, c, ph, pw) is an element in the pooled output - int pw = index % pooled_width; - int ph = (index / pooled_width) % pooled_height; - int c = (index / pooled_width / pooled_height) % channels; - int n = index / pooled_width / pooled_height / channels; - - const T* current_roi = rois + n * 6; - int roi_batch_ind = current_roi[0]; - - // Do not use rounding; this implementation detail is critical - T offset = aligned ? (T)0.5 : (T)0.0; - T roi_center_w = current_roi[1] * spatial_scale - offset; - T roi_center_h = current_roi[2] * spatial_scale - offset; - T roi_width = current_roi[3] * spatial_scale; - T roi_height = current_roi[4] * spatial_scale; - T theta = current_roi[5]; - if (clockwise) { - theta = -theta; // If clockwise, the angle needs to be reversed. - } - T cos_theta = cos(theta); - T sin_theta = sin(theta); - - if (aligned) { - AT_ASSERTM(roi_width >= 0 && roi_height >= 0, - "ROIs in ROIAlignRotated do not have non-negative size!"); - } else { // for backward-compatibility only - roi_width = std::max(roi_width, (T)1.); - roi_height = std::max(roi_height, (T)1.); - } - - T bin_size_h = static_cast(roi_height) / static_cast(pooled_height); - T bin_size_w = static_cast(roi_width) / static_cast(pooled_width); - - T* offset_grad_input = - grad_input + ((roi_batch_ind * channels + c) * height * width); - - int output_offset = n * n_stride + c * c_stride; - const T* offset_grad_output = grad_output + output_offset; - const T grad_output_this_bin = - offset_grad_output[ph * h_stride + pw * w_stride]; - - // We use roi_bin_grid to sample the grid and mimic integral - int roi_bin_grid_h = (sampling_ratio > 0) - ? sampling_ratio - : ceilf(roi_height / pooled_height); // e.g., = 2 - int roi_bin_grid_w = - (sampling_ratio > 0) ? sampling_ratio : ceilf(roi_width / pooled_width); - - // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y). - // Appropriate translation needs to be applied after. - T roi_start_h = -roi_height / 2.0; - T roi_start_w = -roi_width / 2.0; - - // We do average (integral) pooling inside a bin - const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. 
= 4 - - for (int iy = 0; iy < roi_bin_grid_h; iy++) { - const T yy = roi_start_h + ph * bin_size_h + - static_cast(iy + .5f) * bin_size_h / - static_cast(roi_bin_grid_h); // e.g., 0.5, 1.5 - for (int ix = 0; ix < roi_bin_grid_w; ix++) { - const T xx = roi_start_w + pw * bin_size_w + - static_cast(ix + .5f) * bin_size_w / - static_cast(roi_bin_grid_w); - - // Rotate by theta around the center and translate - T y = yy * cos_theta - xx * sin_theta + roi_center_h; - T x = yy * sin_theta + xx * cos_theta + roi_center_w; - - T w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - - bilinear_interpolate_gradient(height, width, y, x, w1, w2, w3, w4, - x_low, x_high, y_low, y_high); - - T g1 = grad_output_this_bin * w1 / count; - T g2 = grad_output_this_bin * w2 / count; - T g3 = grad_output_this_bin * w3 / count; - T g4 = grad_output_this_bin * w4 / count; - - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - // atomic add is not needed for now since it is single threaded - add(offset_grad_input + y_low * width + x_low, static_cast(g1)); - add(offset_grad_input + y_low * width + x_high, static_cast(g2)); - add(offset_grad_input + y_high * width + x_low, static_cast(g3)); - add(offset_grad_input + y_high * width + x_high, static_cast(g4)); - } // if - } // ix - } // iy - } // for -} // ROIAlignRotatedBackward - -void ROIAlignRotatedForwardCPULauncher(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise) { - int output_size = output.numel(); - int channels = input.size(1); - int height = input.size(2); - int width = input.size(3); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "ROIAlignRotated_forward", [&] { - ROIAlignRotatedForward( - output_size, input.data_ptr(), - static_cast(spatial_scale), aligned, clockwise, channels, - height, width, aligned_height, aligned_width, sampling_ratio, - rois.data_ptr(), output.data_ptr()); - }); -} - -void ROIAlignRotatedBackwardCPULauncher(Tensor grad_output, Tensor rois, - Tensor grad_input, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise) { - int channels = grad_input.size(1); - int height = grad_input.size(2); - int width = grad_input.size(3); - - // get stride values to ensure indexing into gradients is correct. 
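// A rough sketch of the indexing convention used just below: grad_output may be
// non-contiguous, so the element for (n, c, ph, pw) is resolved with explicit strides,
//   grad_output[n * n_stride + c * c_stride + ph * h_stride + pw * w_stride],
// which collapses to the usual contiguous NCHW index when the strides take their
// default (packed) values.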
- int n_stride = grad_output.stride(0); - int c_stride = grad_output.stride(1); - int h_stride = grad_output.stride(2); - int w_stride = grad_output.stride(3); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_output.scalar_type(), "ROIAlignRotated_backward", [&] { - ROIAlignRotatedBackward( - grad_output.numel(), grad_output.data_ptr(), - static_cast(spatial_scale), aligned, clockwise, channels, - height, width, aligned_height, aligned_width, sampling_ratio, - grad_input.data_ptr(), rois.data_ptr(), - n_stride, c_stride, h_stride, w_stride); - }); -} - -void roi_align_rotated_forward_cpu(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise) { - ROIAlignRotatedForwardCPULauncher(input, rois, output, aligned_height, - aligned_width, spatial_scale, - sampling_ratio, aligned, clockwise); -} - -void roi_align_rotated_backward_cpu(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise) { - int size_rois = rois.size(1); - if (size_rois != 6) { - AT_ERROR("wrong roi size"); - } - ROIAlignRotatedBackwardCPULauncher( - top_grad, rois, bottom_grad, aligned_height, aligned_width, spatial_scale, - sampling_ratio, aligned, clockwise); -} - -void roi_align_rotated_forward_impl(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise); - -void roi_align_rotated_backward_impl(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise); -REGISTER_DEVICE_IMPL(roi_align_rotated_forward_impl, CPU, - roi_align_rotated_forward_cpu); -REGISTER_DEVICE_IMPL(roi_align_rotated_backward_impl, CPU, - roi_align_rotated_backward_cpu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/rotated_feature_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/rotated_feature_align.cpp deleted file mode 100644 index 09dcdd33..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/rotated_feature_align.cpp +++ /dev/null @@ -1,262 +0,0 @@ -// modified from -// https://github.com/SJTU-Thinklab-Det/r3det-on-mmdetection/blob/master/mmdet/ops/fr/src/feature_refine_kernel.cu -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -template -T bilinear_interpolate(const T* input, const int height, const int width, T y, - T x, const int index /* index for debug only*/) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) return 0; - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - int y_low = (int)y; - int x_low = (int)x; - int y_high; - int x_high; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - // do bilinear interpolation - T v1 = input[y_low * width + x_low]; - T v2 = input[y_low * width + x_high]; - T v3 = input[y_high * width + x_low]; - T v4 = input[y_high * width + x_high]; - const T v_low = fma(v2 - v1, lx, v1); - const T v_high = fma(v4 - v3, lx, v3); - const T val = fma(v_high - v_low, ly, 
v_low); - - return val; -} - -template -void rotated_feature_align_forward_cpu_kernel( - const int nthreads, const int points, const scalar_t* bottom_data, - const scalar_t* best_bboxes, const scalar_t spatial_scale, - const int channels, const int height, const int width, scalar_t* top_data) { - for (int index = 0; index < nthreads; index++) { - int w = index % width; - int h = (index / width) % height; - int c = (index / width / height) % channels; - int n = index / width / height / channels; - - const scalar_t* bbox_offset = - best_bboxes + ((n * height + h) * width + w) * 5; - scalar_t roi_y = bbox_offset[0] * spatial_scale; - scalar_t roi_x = bbox_offset[1] * spatial_scale; - - scalar_t px[5] = {roi_x, 0, 0, 0, 0}; - scalar_t py[5] = {roi_y, 0, 0, 0, 0}; - - if (points > 1) { - scalar_t roi_w = bbox_offset[2] * spatial_scale; - scalar_t roi_h = bbox_offset[3] * spatial_scale; - scalar_t roi_a = bbox_offset[4]; - - scalar_t w_2 = roi_w / 2, h_2 = roi_h / 2; - scalar_t cosa = cosf(roi_a), sina = sinf(roi_a); - scalar_t wx = cosa * w_2, wy = sina * w_2; - scalar_t hx = -sina * h_2, hy = cosa * h_2; - - px[1] = roi_x + wx + hx; - py[1] = roi_y + wy + hy; - px[2] = roi_x - wx + hx; - py[2] = roi_y - wy + hy; - px[3] = roi_x - wx - hx; - py[3] = roi_y - wy - hy; - px[4] = roi_x + wx - hx; - py[4] = roi_y + wy - hy; - } - - const scalar_t* offset_bottom_data = - bottom_data + (n * channels + c) * height * width; - - scalar_t output_val = bottom_data[index]; - for (int i = 0; i < points; i++) { - output_val += bilinear_interpolate(offset_bottom_data, height, - width, py[i], px[i], i); - } - top_data[index] = output_val; - } -} - -template -void bilinear_interpolate_gradient(const int height, const int width, T y, T x, - T& w1, T& w2, T& w3, T& w4, int& x_low, - int& x_high, int& y_low, int& y_high, - const int index) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - w1 = w2 = w3 = w4 = 0.; - x_low = x_high = y_low = y_high = -1; - return; - } - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - y_low = (int)y; - x_low = (int)x; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. 
- lx; - - w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - return; -} - -template -inline void valueAdd(scalar_t* address, scalar_t val) { - scalar_t old = *address; - *address = (old + val); -} - -template -void rotated_feature_align_backward_cpu_kernel( - const int nthreads, const int points, const scalar_t* top_diff, - const scalar_t* best_bboxes, const scalar_t spatial_scale, - const int channels, const int height, const int width, - scalar_t* bottom_diff) { - for (int index = 0; index < nthreads; index++) { - int w = index % width; - int h = (index / width) % height; - int c = (index / width / height) % channels; - int n = index / width / height / channels; - - const scalar_t* bbox_offset = - best_bboxes + ((n * height + h) * width + w) * 5; - scalar_t roi_y = bbox_offset[0] * spatial_scale; - scalar_t roi_x = bbox_offset[1] * spatial_scale; - - scalar_t px[5] = {roi_x, 0, 0, 0, 0}; - scalar_t py[5] = {roi_y, 0, 0, 0, 0}; - - if (points > 1) { - scalar_t roi_w = bbox_offset[2] * spatial_scale; - scalar_t roi_h = bbox_offset[3] * spatial_scale; - scalar_t roi_a = bbox_offset[4]; - - scalar_t w_2 = roi_w / 2, h_2 = roi_h / 2; - scalar_t cosa = cosf(roi_a), sina = sinf(roi_a); - scalar_t wx = cosa * w_2, wy = sina * w_2; - scalar_t hx = -sina * h_2, hy = cosa * h_2; - - px[1] = roi_x + wx + hx; - py[1] = roi_y + wy + hy; - px[2] = roi_x - wx + hx; - py[2] = roi_y - wy + hy; - px[3] = roi_x - wx - hx; - py[3] = roi_y - wy - hy; - px[4] = roi_x + wx - hx; - py[4] = roi_y + wy - hy; - } - - scalar_t* offset_bottom_diff = - bottom_diff + (n * channels + c) * height * width; - scalar_t value_top_diff = top_diff[index]; - - valueAdd(bottom_diff + index, value_top_diff); - for (int i = 0; i < points; i++) { - scalar_t w1, w2, w3, w4; - int x_low, x_high, y_low, y_high; - - bilinear_interpolate_gradient(height, width, py[i], px[i], w1, - w2, w3, w4, x_low, x_high, y_low, - y_high, i); - scalar_t g1 = value_top_diff * w1; - scalar_t g2 = value_top_diff * w2; - scalar_t g3 = value_top_diff * w3; - scalar_t g4 = value_top_diff * w4; - if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) { - valueAdd(offset_bottom_diff + y_low * width + x_low, g1); - valueAdd(offset_bottom_diff + y_low * width + x_high, g2); - valueAdd(offset_bottom_diff + y_high * width + x_low, g3); - valueAdd(offset_bottom_diff + y_high * width + x_high, g4); - } - } - } -} - -void rotated_feature_align_forward_cpu(const Tensor features, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor output) { - const int output_size = features.numel(); - AT_DISPATCH_FLOATING_TYPES( - features.scalar_type(), "rotated_feature_align_forward_cpu_kernel", [&] { - const scalar_t* bottom_data = features.data_ptr(); - const scalar_t* bboxes_data = best_bboxes.data_ptr(); - scalar_t* top_data = output.data_ptr(); - - rotated_feature_align_forward_cpu_kernel( - output_size, points, bottom_data, bboxes_data, - scalar_t(spatial_scale), features.size(1), features.size(2), - features.size(3), top_data); - }); -} - -void rotated_feature_align_backward_cpu(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor bottom_grad) { - const int output_size = top_grad.numel(); - AT_DISPATCH_FLOATING_TYPES( - top_grad.scalar_type(), "rotated_feature_align_backward_cpu_kernel", [&] { - const scalar_t* top_diff = top_grad.data_ptr(); - const scalar_t* bboxes_data = best_bboxes.data_ptr(); - scalar_t* bottom_diff = bottom_grad.data_ptr(); - - 
rotated_feature_align_backward_cpu_kernel( - output_size, points, top_diff, bboxes_data, scalar_t(spatial_scale), - top_grad.size(1), top_grad.size(2), top_grad.size(3), bottom_diff); - }); -} - -void rotated_feature_align_forward_impl(const Tensor features, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor output); - -void rotated_feature_align_backward_impl(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor bottom_grad); - -REGISTER_DEVICE_IMPL(rotated_feature_align_forward_impl, CPU, - rotated_feature_align_forward_cpu); - -REGISTER_DEVICE_IMPL(rotated_feature_align_backward_impl, CPU, - rotated_feature_align_backward_cpu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/voxelization.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/voxelization.cpp deleted file mode 100644 index a21f849a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cpu/voxelization.cpp +++ /dev/null @@ -1,186 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -template -void dynamic_voxelize_forward_cpu_kernel( - const torch::TensorAccessor points, - torch::TensorAccessor coors, const std::vector voxel_size, - const std::vector coors_range, const std::vector grid_size, - const int num_points, const int num_features, const int NDim) { - const int ndim_minus_1 = NDim - 1; - bool failed = false; - // int coor[NDim]; - int* coor = new int[NDim](); - int c; - - for (int i = 0; i < num_points; ++i) { - failed = false; - for (int j = 0; j < NDim; ++j) { - c = floor((points[i][j] - coors_range[j]) / voxel_size[j]); - // necessary to rm points out of range - if ((c < 0 || c >= grid_size[j])) { - failed = true; - break; - } - coor[ndim_minus_1 - j] = c; - } - - // memcpy and memset will cause problem because of the memory distribution - // discontinuity of TensorAccessor, so here using loops to replace memcpy - // or memset - if (failed) { - for (int k = 0; k < NDim; ++k) { - coors[i][k] = -1; - } - } else { - for (int k = 0; k < NDim; ++k) { - coors[i][k] = coor[k]; - } - } - } - - delete[] coor; - return; -} - -template -void hard_voxelize_forward_cpu_kernel( - const torch::TensorAccessor points, - torch::TensorAccessor voxels, torch::TensorAccessor coors, - torch::TensorAccessor num_points_per_voxel, - torch::TensorAccessor coor_to_voxelidx, int& voxel_num, - const std::vector voxel_size, const std::vector coors_range, - const std::vector grid_size, const int max_points, - const int max_voxels, const int num_points, const int num_features, - const int NDim) { - // declare a temp coors - at::Tensor temp_coors = at::zeros( - {num_points, NDim}, at::TensorOptions().dtype(at::kInt).device(at::kCPU)); - - // First use dynamic voxelization to get coors, - // then check max points/voxels constraints - dynamic_voxelize_forward_cpu_kernel( - points, temp_coors.accessor(), voxel_size, coors_range, grid_size, - num_points, num_features, NDim); - - int voxelidx, num; - auto coor = temp_coors.accessor(); - - for (int i = 0; i < num_points; ++i) { - // T_int* coor = temp_coors.data_ptr() + i * NDim; - - if (coor[i][0] == -1) continue; - - voxelidx = coor_to_voxelidx[coor[i][0]][coor[i][1]][coor[i][2]]; - - // record voxel - if (voxelidx == -1) { - voxelidx = voxel_num; - if (max_voxels != -1 && voxel_num >= max_voxels) continue; - voxel_num += 1; - - 
coor_to_voxelidx[coor[i][0]][coor[i][1]][coor[i][2]] = voxelidx; - // memcpy will cause problem because of the memory distribution - // discontinuity of TensorAccessor, so here using loops to replace memcpy - for (int k = 0; k < NDim; ++k) { - coors[voxelidx][k] = coor[i][k]; - } - } - - // put points into voxel - num = num_points_per_voxel[voxelidx]; - if (max_points == -1 || num < max_points) { - // memcpy will cause problem because of the memory distribution - // discontinuity of TensorAccessor, so here using loops to replace memcpy - for (int k = 0; k < num_features; ++k) { - voxels[voxelidx][num][k] = points[i][k]; - } - num_points_per_voxel[voxelidx] += 1; - } - } - - return; -} - -void dynamic_voxelize_forward_cpu(const at::Tensor& points, at::Tensor& coors, - const std::vector voxel_size, - const std::vector coors_range, - const int NDim = 3) { - // check device - AT_ASSERTM(points.device().is_cpu(), "points must be a CPU tensor"); - - std::vector grid_size(NDim); - const int num_points = points.size(0); - const int num_features = points.size(1); - - for (int i = 0; i < NDim; ++i) { - grid_size[i] = - round((coors_range[NDim + i] - coors_range[i]) / voxel_size[i]); - } - - // coors, num_points_per_voxel, coor_to_voxelidx are int Tensor - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - points.scalar_type(), "dynamic_voxelize_forward_cpu_kernel", [&] { - dynamic_voxelize_forward_cpu_kernel( - points.accessor(), coors.accessor(), - voxel_size, coors_range, grid_size, num_points, num_features, NDim); - }); -} - -int hard_voxelize_forward_cpu(const at::Tensor& points, at::Tensor& voxels, - at::Tensor& coors, - at::Tensor& num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim = 3) { - // current version tooks about 0.02s_0.03s for one frame on cpu - // check device - AT_ASSERTM(points.device().is_cpu(), "points must be a CPU tensor"); - - std::vector grid_size(NDim); - const int num_points = points.size(0); - const int num_features = points.size(1); - - for (int i = 0; i < NDim; ++i) { - grid_size[i] = - round((coors_range[NDim + i] - coors_range[i]) / voxel_size[i]); - } - - // coors, num_points_per_voxel, coor_to_voxelidx are int Tensor - // printf("cpu coor_to_voxelidx size: [%d, %d, %d]\n", grid_size[2], - // grid_size[1], grid_size[0]); - at::Tensor coor_to_voxelidx = - -at::ones({grid_size[2], grid_size[1], grid_size[0]}, coors.options()); - - int voxel_num = 0; - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - points.scalar_type(), "hard_voxelize_forward_cpu_kernel", [&] { - hard_voxelize_forward_cpu_kernel( - points.accessor(), voxels.accessor(), - coors.accessor(), num_points_per_voxel.accessor(), - coor_to_voxelidx.accessor(), voxel_num, voxel_size, - coors_range, grid_size, max_points, max_voxels, num_points, - num_features, NDim); - }); - - return voxel_num; -} - -int hard_voxelize_forward_impl(const at::Tensor& points, at::Tensor& voxels, - at::Tensor& coors, - at::Tensor& num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim); - -void dynamic_voxelize_forward_impl(const at::Tensor& points, at::Tensor& coors, - const std::vector voxel_size, - const std::vector coors_range, - const int NDim); -REGISTER_DEVICE_IMPL(hard_voxelize_forward_impl, CPU, - hard_voxelize_forward_cpu); -REGISTER_DEVICE_IMPL(dynamic_voxelize_forward_impl, CPU, - dynamic_voxelize_forward_cpu); diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/active_rotated_filter_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/active_rotated_filter_cuda.cu deleted file mode 100644 index 27fffb9f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/active_rotated_filter_cuda.cu +++ /dev/null @@ -1,58 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -// Modified from -// https://github.com/csuhan/s2anet/blob/master/mmdet/ops/orn/src/cuda/ActiveRotatingFilter_cuda.cu -#include "active_rotated_filter_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void ActiveRotatedFilterForwardCUDAKernelLauncher(const Tensor input, - const Tensor indices, - Tensor output) { - int num_output_planes = input.size(0); - int num_input_planes = input.size(1); - int num_orientations = input.size(2); - int kH = input.size(3); - int kW = input.size(4); - int num_rotations = indices.size(3); - int nEntry = num_orientations * kH * kW; - int output_size = input.numel(); - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "active_rotated_filter_forward_cuda_kernel", [&] { - active_rotated_filter_forward_cuda_kernel - <<>>( - output_size, input.data_ptr(), - indices.data_ptr(), num_input_planes, num_output_planes, - num_orientations, num_rotations, nEntry, - output.data_ptr()); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void ActiveRotatedFilterBackwardCUDAKernelLauncher(const Tensor grad_out, - const Tensor indices, - Tensor grad_in) { - int num_orientations = indices.size(0); - int kH = indices.size(1); - int kW = indices.size(2); - int num_rotations = indices.size(3); - int num_output_planes = grad_out.size(0) / num_rotations; - int num_input_planes = grad_out.size(1) / num_orientations; - int nEntry = num_orientations * kH * kW; - int output_size = grad_in.numel(); - - at::cuda::CUDAGuard device_guard(indices.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_out.scalar_type(), "active_rotated_filter_backward_cuda_kernel", - [&] { - active_rotated_filter_backward_cuda_kernel - <<>>( - output_size, grad_out.data_ptr(), - indices.data_ptr(), num_input_planes, num_output_planes, - num_orientations, num_rotations, nEntry, - grad_in.data_ptr()); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/assign_score_withk_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/assign_score_withk_cuda.cu deleted file mode 100644 index bdb5fab9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/assign_score_withk_cuda.cu +++ /dev/null @@ -1,66 +0,0 @@ -// Modified from -// https://github.com/CVMI-Lab/PAConv/tree/main/scene_seg/lib/paconv_lib/src/gpu -#include -#include - -#include "assign_score_withk_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void AssignScoreWithKForwardCUDAKernelLauncher( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& points, const Tensor& centers, const Tensor& scores, - const Tensor& knn_idx, Tensor& output) { - at::cuda::CUDAGuard device_guard(points.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(GET_BLOCKS(B * O * N1 * K, THREADS_PER_BLOCK)); - dim3 threads(THREADS_PER_BLOCK); - - 
AT_DISPATCH_FLOATING_TYPES_AND_HALF( - points.scalar_type(), "assign_score_withk_forward_cuda_kernel", [&] { - assign_score_withk_forward_cuda_kernel - <<>>( - B, N0, N1, M, K, O, aggregate, points.data_ptr(), - centers.data_ptr(), scores.data_ptr(), - knn_idx.data_ptr(), output.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void AssignScoreWithKBackwardCUDAKernelLauncher( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& grad_out, const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores) { - at::cuda::CUDAGuard device_guard(grad_out.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks1(GET_BLOCKS(B * M * O, THREADS_PER_BLOCK)); - dim3 threads1(THREADS_PER_BLOCK); - dim3 blocks2(GET_BLOCKS(B * N1 * K * M, THREADS_PER_BLOCK)); - dim3 threads2(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_out.scalar_type(), "assign_score_withk_points_backward_cuda_kernel", - [&] { - assign_score_withk_points_backward_cuda_kernel - <<>>( - B, N0, N1, M, K, O, aggregate, grad_out.data_ptr(), - scores.data_ptr(), knn_idx.data_ptr(), - grad_points.data_ptr(), - grad_centers.data_ptr()); - }); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_out.scalar_type(), "assign_score_withk_scores_backward_cuda_kernel", - [&] { - assign_score_withk_scores_backward_cuda_kernel - <<>>( - B, N0, N1, M, K, O, aggregate, grad_out.data_ptr(), - points.data_ptr(), centers.data_ptr(), - knn_idx.data_ptr(), grad_scores.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/ball_query_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/ball_query_cuda.cu deleted file mode 100644 index c42c3e2a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/ball_query_cuda.cu +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/ball_query_gpu.cu - -#include -#include -#include - -#include "ball_query_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void BallQueryForwardCUDAKernelLauncher(int b, int n, int m, float min_radius, - float max_radius, int nsample, - const Tensor new_xyz, const Tensor xyz, - Tensor idx) { - // new_xyz: (B, M, 3) - // xyz: (B, N, 3) - // output: - // idx: (B, M, nsample) - - at::cuda::CUDAGuard device_guard(new_xyz.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(m, THREADS_PER_BLOCK), b); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - new_xyz.scalar_type(), "ball_query_forward_cuda_kernel", [&] { - ball_query_forward_cuda_kernel - <<>>( - b, n, m, min_radius, max_radius, nsample, - new_xyz.data_ptr(), xyz.data_ptr(), - idx.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/bbox_overlaps_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/bbox_overlaps_cuda.cu deleted file mode 100644 index b3272539..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/bbox_overlaps_cuda.cu +++ /dev/null @@ -1,39 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "bbox_overlaps_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -// Disable fp16 on ROCm device -#ifndef HIP_DIFF -#if __CUDA_ARCH__ >= 530 -template <> -__global__ void bbox_overlaps_cuda_kernel( - const at::Half* bbox1, const at::Half* bbox2, at::Half* ious, - const int num_bbox1, const int num_bbox2, const int mode, - const bool aligned, const int offset) { - bbox_overlaps_cuda_kernel_half(reinterpret_cast(bbox1), - reinterpret_cast(bbox2), - reinterpret_cast<__half*>(ious), num_bbox1, - num_bbox2, mode, aligned, offset); -} -#endif // __CUDA_ARCH__ >= 530 -#endif // HIP_DIFF - -void BBoxOverlapsCUDAKernelLauncher(const Tensor bboxes1, const Tensor bboxes2, - Tensor ious, const int mode, - const bool aligned, const int offset) { - int output_size = ious.numel(); - int num_bbox1 = bboxes1.size(0); - int num_bbox2 = bboxes2.size(0); - - at::cuda::CUDAGuard device_guard(bboxes1.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - bboxes1.scalar_type(), "bbox_overlaps_cuda_kernel", ([&] { - bbox_overlaps_cuda_kernel - <<>>( - bboxes1.data_ptr(), bboxes2.data_ptr(), - ious.data_ptr(), num_bbox1, num_bbox2, mode, aligned, - offset); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/border_align_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/border_align_cuda.cu deleted file mode 100644 index 3aeefea5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/border_align_cuda.cu +++ /dev/null @@ -1,68 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "border_align_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void BorderAlignForwardCUDAKernelLauncher(const Tensor &input, - const Tensor &boxes, Tensor output, - Tensor argmax_idx, - const int pool_size) { - // shape assertion - AT_ASSERTM(input.ndimension() == 4, - "non-empty 4D(batch mode) tensor expected for input feature"); - AT_ASSERTM(boxes.ndimension() == 3, - "boxes must be 3D tensor with size of [B, H*W, 4]"); - - int batch_size = input.size(0); - int feat_channels = input.size(1); - int channels = feat_channels / 4; - int height = input.size(2); - int width = input.size(3); - // shape [N, box_size, 4] for boxes. 
(x1, y1, x2, y2) format - int box_size = boxes.size(1); - // shape [N, channels, box_size, 4] for output - int nthreads = batch_size * channels * box_size; - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - dim3 block(128, 4); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "border_align_forward_cuda_kernel", [&] { - border_align_forward_cuda_kernel - <<>>( - nthreads, input.data_ptr(), - boxes.data_ptr(), output.data_ptr(), - argmax_idx.data_ptr(), channels, box_size, height, width, - pool_size); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void BorderAlignBackwardCUDAKernelLauncher(const Tensor &grad_output, - const Tensor &boxes, - const Tensor &argmax_idx, - Tensor grad_input, - const int pool_size) { - int batch_size = grad_input.size(0); - int feat_channels = grad_input.size(1); - int channels = feat_channels / 4; - int height = grad_input.size(2); - int width = grad_input.size(3); - int box_size = boxes.size(1); - int nthreads = batch_size * channels * box_size; - - at::cuda::CUDAGuard device_guard(grad_output.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - dim3 block(128, 4); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_output.scalar_type(), "border_align_backward_cuda_kernel", [&] { - border_align_backward_cuda_kernel - <<>>( - nthreads, grad_output.data_ptr(), - boxes.data_ptr(), argmax_idx.data_ptr(), - grad_input.data_ptr(), channels, box_size, height, - width, pool_size); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/box_iou_rotated_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/box_iou_rotated_cuda.cu deleted file mode 100644 index 3c13e062..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/box_iou_rotated_cuda.cu +++ /dev/null @@ -1,25 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cuda.cu -#include "box_iou_rotated_cuda.cuh" -#include "pytorch_cuda_helper.hpp" - -void box_iou_rotated_cuda(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned) { - using scalar_t = float; - AT_ASSERTM(boxes1.is_cuda(), "boxes1 must be a CUDA tensor"); - AT_ASSERTM(boxes2.is_cuda(), "boxes2 must be a CUDA tensor"); - - int output_size = ious.numel(); - int num_boxes1 = boxes1.size(0); - int num_boxes2 = boxes2.size(0); - - at::cuda::CUDAGuard device_guard(boxes1.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - box_iou_rotated_cuda_kernel - <<>>( - num_boxes1, num_boxes2, boxes1.data_ptr(), - boxes2.data_ptr(), (scalar_t*)ious.data_ptr(), - mode_flag, aligned); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/carafe_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/carafe_cuda.cu deleted file mode 100644 index 984e734f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/carafe_cuda.cu +++ /dev/null @@ -1,180 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "carafe_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void CARAFEForwardCUDAKernelLauncher(const Tensor features, const Tensor masks, - Tensor rfeatures, Tensor routput, - Tensor rmasks, Tensor output, - const int kernel_size, - const int group_size, - const int scale_factor) { - const int batch_size = output.size(0); - const int channels = output.size(1); - const int output_height = output.size(2); - const int output_width = output.size(3); - - const int input_height = features.size(2); - const int input_width = features.size(3); - - const int mask_channels = masks.size(1); - - rfeatures.resize_({batch_size, input_height, input_width, channels}); - routput.resize_({batch_size, output_height, output_width, channels}); - rmasks.resize_({batch_size, output_height, output_width, mask_channels}); - - // one warp per pixel - at::cuda::CUDAGuard device_guard(features.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - features.scalar_type(), "NCHW2NHWC_Feature", ([&] { - const scalar_t *bottom_data = features.data_ptr(); - scalar_t *top_data = rfeatures.data_ptr(); - const int dh = divideUP(channels, kTileDim); - const int dw = divideUP(input_height * input_width, kTileDim); - BatchTranspose2DCUDAKernel - <<>>( - batch_size, channels, input_height * input_width, dh, dw, - bottom_data, top_data); - })); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - features.scalar_type(), "NCHW2NHWC_Masks", ([&] { - const scalar_t *bottom_data = masks.data_ptr(); - scalar_t *top_data = rmasks.data_ptr(); - const int dh = divideUP(mask_channels, kTileDim); - const int dw = divideUP(output_height * output_width, kTileDim); - BatchTranspose2DCUDAKernel - <<>>( - batch_size, mask_channels, output_height * output_width, dh, dw, - bottom_data, top_data); - })); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - features.scalar_type(), "CARAFELaucherForward", ([&] { - const int num_kernels = - batch_size * output_height * output_width * THREADS_PER_PIXEL; - const scalar_t *bottom_data = rfeatures.data_ptr(); - const scalar_t *bottom_masks = rmasks.data_ptr(); - scalar_t *top_data = routput.data_ptr(); - - CARAFEForward<<>>( - num_kernels, bottom_data, bottom_masks, kernel_size, group_size, - scale_factor, channels, input_height, input_width, output_height, - output_width, mask_channels, top_data); - })); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - features.scalar_type(), "NHWC2NCHW", ([&] { - const scalar_t *bottom_data = routput.data_ptr(); - scalar_t *top_data = output.data_ptr(); - const int dh = divideUP(output_height * output_width, kTileDim); - const int dw = divideUP(channels, kTileDim); - BatchTranspose2DCUDAKernel - <<>>( - batch_size, output_height * output_width, channels, dh, dw, - bottom_data, top_data); - })); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void CARAFEBackwardCUDAKernelLauncher( - const Tensor top_grad, const Tensor rfeatures, const Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, Tensor rbottom_grad, - Tensor rmask_grad, Tensor bottom_grad, Tensor mask_grad, - const int kernel_size, const int group_size, const int scale_factor) { - const int batch_size = top_grad.size(0); - const int channels = top_grad.size(1); - const int output_height = top_grad.size(2); - const int output_width = top_grad.size(3); - - const int input_height = bottom_grad.size(2); - const int input_width = bottom_grad.size(3); - - const int mask_channels = masks.size(1); - - rtop_grad.resize_({batch_size, output_height, output_width, 
channels}); - rbottom_grad.resize_({batch_size, input_height, input_width, channels}); - rbottom_grad_hs.resize_({batch_size, output_height, output_width, channels}); - rmask_grad.resize_({batch_size, output_height, output_width, mask_channels}); - - at::cuda::CUDAGuard device_guard(top_grad.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - top_grad.scalar_type(), "NCHW2NHWC_Top_Grad", ([&] { - const scalar_t *bottom_data = top_grad.data_ptr(); - scalar_t *top_data = rtop_grad.data_ptr(); - const int dh = divideUP(channels, kTileDim); - const int dw = divideUP(output_height * output_width, kTileDim); - BatchTranspose2DCUDAKernel - <<>>( - batch_size, channels, output_height * output_width, dh, dw, - bottom_data, top_data); - })); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - top_grad.scalar_type(), "CARAFELaucherBackward_Feature", ([&] { - const int num_kernels = - batch_size * output_height * output_width * THREADS_PER_PIXEL; - const scalar_t *top_diff = rtop_grad.data_ptr(); - const scalar_t *bottom_masks = masks.data_ptr(); - scalar_t *bottom_diff = rbottom_grad_hs.data_ptr(); - - CARAFEBackward_Feature - <<>>(num_kernels, top_diff, bottom_masks, kernel_size, - group_size, scale_factor, channels, input_height, - input_width, output_height, output_width, - mask_channels, bottom_diff); - })); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - top_grad.scalar_type(), "FeatureSum", ([&] { - const int num_kernels = - batch_size * input_height * input_width * THREADS_PER_PIXEL; - const scalar_t *bottom_diff_hs = rbottom_grad_hs.data_ptr(); - scalar_t *bottom_diff = rbottom_grad.data_ptr(); - - FeatureSum - <<>>(num_kernels, bottom_diff_hs, scale_factor, channels, - input_height, input_width, bottom_diff); - })); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - top_grad.scalar_type(), "NHWC2NCHW_Bottom_Grad", ([&] { - const scalar_t *bottom_data = rbottom_grad.data_ptr(); - scalar_t *top_data = bottom_grad.data_ptr(); - const int dh = divideUP(input_height * input_width, kTileDim); - const int dw = divideUP(channels, kTileDim); - BatchTranspose2DCUDAKernel - <<>>( - batch_size, input_height * input_width, channels, dh, dw, - bottom_data, top_data); - })); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - top_grad.scalar_type(), "CARAFELaucherBackward_Mask", ([&] { - const int num_kernels = batch_size * output_height * output_width * - mask_channels * WARP_SIZE; - const scalar_t *top_diff = rtop_grad.data_ptr(); - const scalar_t *bottom_data = rfeatures.data_ptr(); - scalar_t *mask_diff = rmask_grad.data_ptr(); - - CARAFEBackward_Mask - <<>>(num_kernels, top_diff, bottom_data, kernel_size, - group_size, scale_factor, channels, input_height, - input_width, output_height, output_width, - mask_channels, mask_diff); - })); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - top_grad.scalar_type(), "NHWC2NCHW_Mask_Grad", ([&] { - const scalar_t *bottom_data = rmask_grad.data_ptr(); - scalar_t *top_data = mask_grad.data_ptr(); - const int dh = divideUP(output_height * output_width, kTileDim); - const int dw = divideUP(mask_channels, kTileDim); - BatchTranspose2DCUDAKernel - <<>>( - batch_size, output_height * output_width, mask_channels, dh, dw, - bottom_data, top_data); - })); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/carafe_naive_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/carafe_naive_cuda.cu deleted file mode 100644 index 2fc56676..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/carafe_naive_cuda.cu +++ /dev/null @@ -1,52 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "carafe_naive_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void CARAFENAIVEForwardCUDAKernelLauncher(const Tensor features, - const Tensor masks, Tensor output, - const int kernel_size, - const int group_size, - const int scale_factor) { - int output_size = output.numel(); - int channels = output.size(1); - int height = output.size(2); - int width = output.size(3); - - at::cuda::CUDAGuard device_guard(features.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - features.scalar_type(), "CARAFENAIVEForward", ([&] { - carafe_naive_forward_cuda_kernel - <<>>( - output_size, features.data_ptr(), - masks.data_ptr(), output.data_ptr(), - kernel_size, group_size, scale_factor, channels, height, width); - })); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void CARAFENAIVEBackwardCUDAKernelLauncher( - const Tensor top_grad, const Tensor features, const Tensor masks, - Tensor bottom_grad, Tensor mask_grad, const int kernel_size, - const int group_size, const int scale_factor) { - int output_size = top_grad.numel(); - int channels = top_grad.size(1); - int height = top_grad.size(2); - int width = top_grad.size(3); - - at::cuda::CUDAGuard device_guard(top_grad.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - top_grad.scalar_type(), "CARAFENAIVEBackward", ([&] { - carafe_naive_backward_cuda_kernel - <<>>( - output_size, top_grad.data_ptr(), - features.data_ptr(), masks.data_ptr(), - bottom_grad.data_ptr(), - mask_grad.data_ptr(), kernel_size, group_size, - scale_factor, channels, height, width); - })); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/chamfer_distance_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/chamfer_distance_cuda.cu deleted file mode 100644 index 980482eb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/chamfer_distance_cuda.cu +++ /dev/null @@ -1,63 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
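// In short (a paraphrase of the launchers below): the forward pass fills dist1/idx1
// with, for every point in xyz1, the distance to and the index of its nearest
// neighbour in xyz2, and dist2/idx2 with the symmetric term; the backward pass
// scatters the incoming distance gradients back to both point sets through those
// stored nearest-neighbour indices.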
-// Modified from -// https://github.com/chrdiller/pyTorchChamferDistance/blob/master/chamfer_distance/chamfer_distance.cpp -#include "chamfer_distance_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void ChamferDistanceForwardCUDAKernelLauncher( - const Tensor xyz1, const Tensor xyz2, const Tensor dist1, - const Tensor dist2, const Tensor idx1, const Tensor idx2) { - int batch_size = xyz1.size(0); - int n = xyz1.size(1); - int m = xyz2.size(1); - - at::cuda::CUDAGuard device_guard(xyz1.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - xyz1.scalar_type(), "chamfer_distance_forward_cuda_kernel", [&] { - chamfer_distance_forward_cuda_kernel - <<>>( - batch_size, n, xyz1.data_ptr(), m, - xyz2.data_ptr(), dist1.data_ptr(), - idx1.data_ptr()); - }); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - xyz1.scalar_type(), "chamfer_distance_forward_cuda_kernel", [&] { - chamfer_distance_forward_cuda_kernel - <<>>( - batch_size, m, xyz2.data_ptr(), n, - xyz1.data_ptr(), dist2.data_ptr(), - idx2.data_ptr()); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void ChamferDistanceBackwardCUDAKernelLauncher( - const Tensor xyz1, const Tensor xyz2, Tensor grad_xyz1, Tensor grad_xyz2, - Tensor grad_dist1, Tensor grad_dist2, Tensor idx1, Tensor idx2) { - int batch_size = xyz1.size(0); - int n = xyz1.size(1); - int m = xyz2.size(1); - - at::cuda::CUDAGuard device_guard(xyz1.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - xyz1.scalar_type(), "chamfer_distance_backward_cuda_kernel", [&] { - chamfer_distance_backward_cuda_kernel - <<>>( - batch_size, m, xyz1.data_ptr(), n, - xyz2.data_ptr(), grad_dist1.data_ptr(), - idx1.data_ptr(), grad_xyz1.data_ptr(), - grad_xyz2.data_ptr()); - }); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - xyz1.scalar_type(), "chamfer_distance_backward_cuda_kernel", [&] { - chamfer_distance_backward_cuda_kernel - <<>>( - batch_size, n, xyz2.data_ptr(), m, - xyz1.data_ptr(), grad_dist2.data_ptr(), - idx2.data_ptr(), grad_xyz2.data_ptr(), - grad_xyz1.data_ptr()); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/convex_iou.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/convex_iou.cu deleted file mode 100644 index 804f7ac3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/convex_iou.cu +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -// modified from -// https://github.com/SDL-GuoZonghao/BeyondBoundingBox/blob/main/mmdet/ops/iou/src/convex_iou_kernel.cu -#include "convex_iou_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void ConvexIoUCUDAKernelLauncher(const Tensor pointsets, const Tensor polygons, - Tensor ious) { - int output_size = ious.numel(); - int num_pointsets = pointsets.size(0); - int num_polygons = polygons.size(0); - - at::cuda::CUDAGuard device_guard(pointsets.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - pointsets.scalar_type(), "convex_iou_cuda_kernel", ([&] { - convex_iou_cuda_kernel - <<>>( - num_pointsets, num_polygons, pointsets.data_ptr(), - polygons.data_ptr(), ious.data_ptr()); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void ConvexGIoUCUDAKernelLauncher(const Tensor pointsets, const Tensor polygons, - Tensor output) { - int output_size = output.numel(); - int num_pointsets = pointsets.size(0); - int num_polygons = polygons.size(0); - - at::cuda::CUDAGuard device_guard(pointsets.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - pointsets.scalar_type(), "convex_giou_cuda_kernel", ([&] { - convex_giou_cuda_kernel - <<>>( - num_pointsets, num_polygons, pointsets.data_ptr(), - polygons.data_ptr(), output.data_ptr()); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/correlation_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/correlation_cuda.cu deleted file mode 100644 index c10e9d40..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/correlation_cuda.cu +++ /dev/null @@ -1,94 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
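// In brief (a paraphrase of the kernels below): these launchers evaluate a
// correlation / cost-volume layer in the FlowNet style, matching kH x kW patches of
// input1 against patches of input2 displaced over a patchH x patchW search window
// (spaced by dilation_patch), with the backward launcher propagating gradients to
// both inputs.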
-// Modified from -// https://github.com/ClementPinard/Pytorch-Correlation-extension/blob/master/Correlation_Module/correlation_cuda_kernel.cu -// Original licence: Under MIT License - -#include "correlation_cuda.cuh" -#include "pytorch_cuda_helper.hpp" - -void CorrelationForwardCUDAKernelLauncher(Tensor input1, Tensor input2, - Tensor output, int kH, int kW, - int patchH, int patchW, int padH, - int padW, int dilationH, - int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW) { - const int batch_size = input1.size(0); - const int iH = input1.size(2); - const int iW = input1.size(3); - const int dilatedKH = (kH - 1) * dilationH + 1; - const int dilatedKW = (kW - 1) * dilationW + 1; - - const auto oH = (iH + 2 * padH - dilatedKH) / dH + 1; - const auto oW = (iW + 2 * padW - dilatedKW) / dW + 1; - - auto trInput1 = input1.permute({0, 2, 3, 1}).contiguous(); - auto trInput2 = input2.permute({0, 2, 3, 1}).contiguous(); - - const dim3 threads(WARP_SIZE, 4, 4); - const dim3 blocks(batch_size, (oH + 3) >> 2, (oW + 3) >> 2); - - at::cuda::CUDAGuard device_guard(input1.device()); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input1.scalar_type(), "correlation_forward_cuda", ([&] { - TensorAcc4R trInput1_acc = - trInput1.packed_accessor32(); - TensorAcc4R trInput2_acc = - trInput2.packed_accessor32(); - TensorAcc5R output_acc = - output.packed_accessor32(); - - correlation_forward_cuda_kernel - <<>>( - trInput1_acc, trInput2_acc, output_acc, kH, kW, patchH, patchW, - padH, padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); - })); -} - -void CorrelationBackwardCUDAKernelLauncher( - Tensor grad_output, Tensor input1, Tensor input2, Tensor grad_input1, - Tensor grad_input2, int kH, int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW) { - const int batch_size = input1.size(0); - const int iH = input1.size(2); - const int iW = input1.size(3); - const int C = input1.size(1); - - auto trInput1 = input1.permute({0, 2, 3, 1}).contiguous(); - auto trInput2 = input2.permute({0, 2, 3, 1}).contiguous(); - const dim3 blocks(batch_size, iH, iW); - const dim3 threads(THREADS_PER_BLOCK); - - at::cuda::CUDAGuard device_guard(input1.device()); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input1.scalar_type(), "correlation_backward_cuda", ([&] { - const int grad_cache_size = patchH * patchW * sizeof(scalar_t); - TensorAcc4R input1_acc = - trInput1.packed_accessor32(); - TensorAcc4R input2_acc = - trInput2.packed_accessor32(); - TensorAcc4R grad_input1_acc = - grad_input1.packed_accessor32(); - TensorAcc4R grad_input2_acc = - grad_input2.packed_accessor32(); - TensorAcc5R grad_output_acc = - grad_output.packed_accessor32(); - - correlation_backward_cuda_kernel_input1 - <<>>( - grad_output_acc, input2_acc, grad_input1_acc, kH, kW, patchH, - patchW, padH, padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); - - correlation_backward_cuda_kernel_input2 - <<>>( - grad_output_acc, input1_acc, grad_input2_acc, kH, kW, patchH, - patchW, padH, padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); - })); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/cudabind.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/cudabind.cpp deleted file mode 100644 index 278ce8f6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/cudabind.cpp +++ /dev/null @@ -1,1739 +0,0 
@@ -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void AssignScoreWithKForwardCUDAKernelLauncher( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& points, const Tensor& centers, const Tensor& scores, - const Tensor& knn_idx, Tensor& output); - -void AssignScoreWithKBackwardCUDAKernelLauncher( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& grad_out, const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores); - -void assign_score_withk_forward_cuda(int B, int N0, int N1, int M, int K, int O, - int aggregate, const Tensor& points, - const Tensor& centers, - const Tensor& scores, - const Tensor& knn_idx, Tensor& output) { - AssignScoreWithKForwardCUDAKernelLauncher( - B, N0, N1, M, K, O, aggregate, points, centers, scores, knn_idx, output); -}; - -void assign_score_withk_backward_cuda( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& grad_out, const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores) { - AssignScoreWithKBackwardCUDAKernelLauncher( - B, N0, N1, M, K, O, aggregate, grad_out, points, centers, scores, knn_idx, - grad_points, grad_centers, grad_scores); -}; - -void assign_score_withk_forward_impl(int B, int N0, int N1, int M, int K, int O, - int aggregate, const Tensor& points, - const Tensor& centers, - const Tensor& scores, - const Tensor& knn_idx, Tensor& output); - -void assign_score_withk_backward_impl( - int B, int N0, int N1, int M, int K, int O, int aggregate, - const Tensor& grad_out, const Tensor& points, const Tensor& centers, - const Tensor& scores, const Tensor& knn_idx, Tensor& grad_points, - Tensor& grad_centers, Tensor& grad_scores); - -REGISTER_DEVICE_IMPL(assign_score_withk_forward_impl, CUDA, - assign_score_withk_forward_cuda); -REGISTER_DEVICE_IMPL(assign_score_withk_backward_impl, CUDA, - assign_score_withk_backward_cuda); - -void BallQueryForwardCUDAKernelLauncher(int b, int n, int m, float min_radius, - float max_radius, int nsample, - const Tensor new_xyz, const Tensor xyz, - Tensor idx); - -void ball_query_forward_cuda(int b, int n, int m, float min_radius, - float max_radius, int nsample, - const Tensor new_xyz, const Tensor xyz, - Tensor idx) { - BallQueryForwardCUDAKernelLauncher(b, n, m, min_radius, max_radius, nsample, - new_xyz, xyz, idx); -}; - -void ball_query_forward_impl(int b, int n, int m, float min_radius, - float max_radius, int nsample, - const Tensor new_xyz, const Tensor xyz, - Tensor idx); -REGISTER_DEVICE_IMPL(ball_query_forward_impl, CUDA, ball_query_forward_cuda); - -void BBoxOverlapsCUDAKernelLauncher(const Tensor bboxes1, const Tensor bboxes2, - Tensor ious, const int mode, - const bool aligned, const int offset); - -void bbox_overlaps_cuda(const Tensor bboxes1, const Tensor bboxes2, Tensor ious, - const int mode, const bool aligned, const int offset) { - BBoxOverlapsCUDAKernelLauncher(bboxes1, bboxes2, ious, mode, aligned, offset); -} - -void bbox_overlaps_impl(const Tensor bboxes1, const Tensor bboxes2, Tensor ious, - const int mode, const bool aligned, const int offset); -REGISTER_DEVICE_IMPL(bbox_overlaps_impl, CUDA, bbox_overlaps_cuda); - -void BorderAlignForwardCUDAKernelLauncher(const Tensor& input, - const Tensor& boxes, Tensor output, - Tensor argmax_idx, - const int pool_size); - -void 
BorderAlignBackwardCUDAKernelLauncher(const Tensor& grad_output, - const Tensor& boxes, - const Tensor& argmax_idx, - Tensor grad_input, - const int pool_size); - -void border_align_forward_cuda(const Tensor& input, const Tensor& boxes, - Tensor output, Tensor argmax_idx, - const int pool_size) { - BorderAlignForwardCUDAKernelLauncher(input, boxes, output, argmax_idx, - pool_size); -} - -void border_align_backward_cuda(const Tensor& grad_output, const Tensor& boxes, - const Tensor& argmax_idx, Tensor grad_input, - const int pool_size) { - BorderAlignBackwardCUDAKernelLauncher(grad_output, boxes, argmax_idx, - grad_input, pool_size); -} - -void border_align_forward_impl(const Tensor& input, const Tensor& boxes, - Tensor output, Tensor argmax_idx, - const int pool_size); - -void border_align_backward_impl(const Tensor& grad_output, const Tensor& boxes, - const Tensor& argmax_idx, Tensor grad_input, - const int pool_size); - -REGISTER_DEVICE_IMPL(border_align_forward_impl, CUDA, - border_align_forward_cuda); -REGISTER_DEVICE_IMPL(border_align_backward_impl, CUDA, - border_align_backward_cuda); - -void box_iou_rotated_cuda(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned); - -void box_iou_rotated_impl(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned); -REGISTER_DEVICE_IMPL(box_iou_rotated_impl, CUDA, box_iou_rotated_cuda); - -void CARAFEForwardCUDAKernelLauncher(const Tensor features, const Tensor masks, - Tensor rfeatures, Tensor routput, - Tensor rmasks, Tensor output, - const int kernel_size, - const int group_size, - const int scale_factor); - -void CARAFEBackwardCUDAKernelLauncher( - const Tensor top_grad, const Tensor rfeatures, const Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, Tensor rbottom_grad, - Tensor rmask_grad, Tensor bottom_grad, Tensor mask_grad, - const int kernel_size, const int group_size, const int scale_factor); - -void carafe_forward_cuda(Tensor features, Tensor masks, Tensor rfeatures, - Tensor routput, Tensor rmasks, Tensor output, - int kernel_size, int group_size, int scale_factor) { - CARAFEForwardCUDAKernelLauncher(features, masks, rfeatures, routput, rmasks, - output, kernel_size, group_size, - scale_factor); -} - -void carafe_backward_cuda(Tensor top_grad, Tensor rfeatures, Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, - Tensor rbottom_grad, Tensor rmask_grad, - Tensor bottom_grad, Tensor mask_grad, int kernel_size, - int group_size, int scale_factor) { - CARAFEBackwardCUDAKernelLauncher(top_grad, rfeatures, masks, rtop_grad, - rbottom_grad_hs, rbottom_grad, rmask_grad, - bottom_grad, mask_grad, kernel_size, - group_size, scale_factor); -} - -void carafe_forward_impl(Tensor features, Tensor masks, Tensor rfeatures, - Tensor routput, Tensor rmasks, Tensor output, - int kernel_size, int group_size, int scale_factor); - -void carafe_backward_impl(Tensor top_grad, Tensor rfeatures, Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, - Tensor rbottom_grad, Tensor rmask_grad, - Tensor bottom_grad, Tensor mask_grad, int kernel_size, - int group_size, int scale_factor); - -REGISTER_DEVICE_IMPL(carafe_forward_impl, CUDA, carafe_forward_cuda); -REGISTER_DEVICE_IMPL(carafe_backward_impl, CUDA, carafe_backward_cuda); - -void CARAFENAIVEForwardCUDAKernelLauncher(const Tensor features, - const Tensor masks, Tensor output, - const int kernel_size, - const int group_size, - const int scale_factor); - -void CARAFENAIVEBackwardCUDAKernelLauncher( - const 
Tensor top_grad, const Tensor features, const Tensor masks, - Tensor bottom_grad, Tensor mask_grad, const int kernel_size, - const int group_size, const int scale_factor); - -void carafe_naive_forward_cuda(Tensor features, Tensor masks, Tensor output, - int kernel_size, int group_size, - int scale_factor) { - CARAFENAIVEForwardCUDAKernelLauncher(features, masks, output, kernel_size, - group_size, scale_factor); -} - -void carafe_naive_backward_cuda(Tensor top_grad, Tensor features, Tensor masks, - Tensor bottom_grad, Tensor mask_grad, - int kernel_size, int group_size, - int scale_factor) { - CARAFENAIVEBackwardCUDAKernelLauncher(top_grad, features, masks, bottom_grad, - mask_grad, kernel_size, group_size, - scale_factor); -} -void carafe_naive_forward_impl(Tensor features, Tensor masks, Tensor output, - int kernel_size, int group_size, - int scale_factor); - -void carafe_naive_backward_impl(Tensor top_grad, Tensor features, Tensor masks, - Tensor bottom_grad, Tensor mask_grad, - int kernel_size, int group_size, - int scale_factor); - -REGISTER_DEVICE_IMPL(carafe_naive_forward_impl, CUDA, - carafe_naive_forward_cuda); -REGISTER_DEVICE_IMPL(carafe_naive_backward_impl, CUDA, - carafe_naive_backward_cuda); - -void CorrelationForwardCUDAKernelLauncher(Tensor input1, Tensor input2, - Tensor output, int kH, int kW, - int patchH, int patchW, int padH, - int padW, int dilationH, - int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW); - -void CorrelationBackwardCUDAKernelLauncher(Tensor grad_output, Tensor input1, - Tensor input2, Tensor grad_input1, - Tensor grad_input2, int kH, int kW, - int patchH, int patchW, int padH, - int padW, int dilationH, - int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW); - -void correlation_forward_cuda(Tensor input1, Tensor input2, Tensor output, - int kH, int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW) { - CorrelationForwardCUDAKernelLauncher( - input1, input2, output, kH, kW, patchH, patchW, padH, padW, dilationH, - dilationW, dilation_patchH, dilation_patchW, dH, dW); -} - -void correlation_backward_cuda(Tensor grad_output, Tensor input1, Tensor input2, - Tensor grad_input1, Tensor grad_input2, int kH, - int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW) { - CorrelationBackwardCUDAKernelLauncher( - grad_output, input1, input2, grad_input1, grad_input2, kH, kW, patchH, - patchW, padH, padW, dilationH, dilationW, dilation_patchH, - dilation_patchW, dH, dW); -} - -void correlation_forward_impl(Tensor input1, Tensor input2, Tensor output, - int kH, int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW); - -void correlation_backward_impl(Tensor grad_output, Tensor input1, Tensor input2, - Tensor grad_input1, Tensor grad_input2, int kH, - int kW, int patchH, int patchW, int padH, - int padW, int dilationH, int dilationW, - int dilation_patchH, int dilation_patchW, int dH, - int dW); - -REGISTER_DEVICE_IMPL(correlation_forward_impl, CUDA, correlation_forward_cuda); -REGISTER_DEVICE_IMPL(correlation_backward_impl, CUDA, - correlation_backward_cuda); - -void deformable_im2col_cuda(Tensor data_im, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - 
const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor data_col); - -void deformable_col2im_cuda(Tensor data_col, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor grad_im); - -void deformable_col2im_coord_cuda( - Tensor data_col, Tensor data_im, Tensor data_offset, const int channels, - const int height, const int width, const int ksize_h, const int ksize_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, Tensor grad_offset); - -void deformable_im2col_impl(Tensor data_im, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor data_col); - -void deformable_col2im_impl(Tensor data_col, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor grad_im); - -void deformable_col2im_coord_impl( - Tensor data_col, Tensor data_im, Tensor data_offset, const int channels, - const int height, const int width, const int ksize_h, const int ksize_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, Tensor grad_offset); - -REGISTER_DEVICE_IMPL(deformable_im2col_impl, CUDA, deformable_im2col_cuda); -REGISTER_DEVICE_IMPL(deformable_col2im_impl, CUDA, deformable_col2im_cuda); -REGISTER_DEVICE_IMPL(deformable_col2im_coord_impl, CUDA, - deformable_col2im_coord_cuda); - -void DeformRoIPoolForwardCUDAKernelLauncher(Tensor input, Tensor rois, - Tensor offset, Tensor output, - int pooled_height, int pooled_width, - float spatial_scale, - int sampling_ratio, float gamma); - -void DeformRoIPoolBackwardCUDAKernelLauncher( - Tensor grad_output, Tensor input, Tensor rois, Tensor offset, - Tensor grad_input, Tensor grad_offset, int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, float gamma); - -void deform_roi_pool_forward_cuda(Tensor input, Tensor rois, Tensor offset, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, float gamma) { - DeformRoIPoolForwardCUDAKernelLauncher(input, rois, offset, output, - pooled_height, pooled_width, - spatial_scale, sampling_ratio, gamma); -} - -void deform_roi_pool_backward_cuda(Tensor grad_output, Tensor input, - Tensor rois, Tensor offset, - Tensor grad_input, Tensor grad_offset, - int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - float gamma) { - DeformRoIPoolBackwardCUDAKernelLauncher( - grad_output, input, rois, offset, grad_input, grad_offset, pooled_height, - pooled_width, spatial_scale, sampling_ratio, gamma); -} - -void deform_roi_pool_forward_impl(Tensor input, Tensor rois, Tensor 
offset, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, float gamma); - -void deform_roi_pool_backward_impl(Tensor grad_output, Tensor input, - Tensor rois, Tensor offset, - Tensor grad_input, Tensor grad_offset, - int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - float gamma); - -REGISTER_DEVICE_IMPL(deform_roi_pool_forward_impl, CUDA, - deform_roi_pool_forward_cuda); -REGISTER_DEVICE_IMPL(deform_roi_pool_backward_impl, CUDA, - deform_roi_pool_backward_cuda); - -void SigmoidFocalLossForwardCUDAKernelLauncher(Tensor input, Tensor target, - Tensor weight, Tensor output, - const float gamma, - const float alpha); - -void SigmoidFocalLossBackwardCUDAKernelLauncher(Tensor input, Tensor target, - Tensor weight, - Tensor grad_input, - const float gamma, - const float alpha); - -void SoftmaxFocalLossForwardCUDAKernelLauncher(Tensor softmax, Tensor target, - Tensor weight, Tensor output, - const float gamma, - const float alpha); - -void SoftmaxFocalLossBackwardCUDAKernelLauncher(Tensor softmax, Tensor target, - Tensor weight, Tensor buff, - Tensor grad_input, - const float gamma, - const float alpha); - -void sigmoid_focal_loss_forward_cuda(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - SigmoidFocalLossForwardCUDAKernelLauncher(input, target, weight, output, - gamma, alpha); -} - -void sigmoid_focal_loss_backward_cuda(Tensor input, Tensor target, - Tensor weight, Tensor grad_input, - float gamma, float alpha) { - SigmoidFocalLossBackwardCUDAKernelLauncher(input, target, weight, grad_input, - gamma, alpha); -} - -void softmax_focal_loss_forward_cuda(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - SoftmaxFocalLossForwardCUDAKernelLauncher(input, target, weight, output, - gamma, alpha); -} - -void softmax_focal_loss_backward_cuda(Tensor input, Tensor target, - Tensor weight, Tensor buff, - Tensor grad_input, float gamma, - float alpha) { - SoftmaxFocalLossBackwardCUDAKernelLauncher(input, target, weight, buff, - grad_input, gamma, alpha); -} - -void sigmoid_focal_loss_forward_impl(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha); - -void sigmoid_focal_loss_backward_impl(Tensor input, Tensor target, - Tensor weight, Tensor grad_input, - float gamma, float alpha); - -void softmax_focal_loss_forward_impl(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha); - -void softmax_focal_loss_backward_impl(Tensor input, Tensor target, - Tensor weight, Tensor buff, - Tensor grad_input, float gamma, - float alpha); - -REGISTER_DEVICE_IMPL(sigmoid_focal_loss_forward_impl, CUDA, - sigmoid_focal_loss_forward_cuda); -REGISTER_DEVICE_IMPL(sigmoid_focal_loss_backward_impl, CUDA, - sigmoid_focal_loss_backward_cuda); -REGISTER_DEVICE_IMPL(softmax_focal_loss_forward_impl, CUDA, - softmax_focal_loss_forward_cuda); -REGISTER_DEVICE_IMPL(softmax_focal_loss_backward_impl, CUDA, - softmax_focal_loss_backward_cuda); - -void FurthestPointSamplingForwardCUDAKernelLauncher(int b, int n, int m, - const float* dataset, - float* temp, int* idxs); - -void FurthestPointSamplingWithDistForwardCUDAKernelLauncher( - int b, int n, int m, const float* dataset, float* temp, int* idxs); - -void furthest_point_sampling_forward_cuda(Tensor points_tensor, - Tensor temp_tensor, Tensor idx_tensor, - int b, int n, int m) { - const float* dataset = points_tensor.data_ptr(); - float* temp = 
temp_tensor.data_ptr(); - int* idxs = idx_tensor.data_ptr(); - FurthestPointSamplingForwardCUDAKernelLauncher(b, n, m, dataset, temp, idxs); -} - -void furthest_point_sampling_with_dist_forward_cuda(Tensor points_tensor, - Tensor temp_tensor, - Tensor idx_tensor, int b, - int n, int m) { - const float* dataset = points_tensor.data_ptr(); - float* temp = temp_tensor.data_ptr(); - int* idxs = idx_tensor.data_ptr(); - FurthestPointSamplingWithDistForwardCUDAKernelLauncher(b, n, m, dataset, temp, - idxs); -} - -void furthest_point_sampling_forward_impl(Tensor points_tensor, - Tensor temp_tensor, Tensor idx_tensor, - int b, int n, int m); - -void furthest_point_sampling_with_dist_forward_impl(Tensor points_tensor, - Tensor temp_tensor, - Tensor idx_tensor, int b, - int n, int m); - -REGISTER_DEVICE_IMPL(furthest_point_sampling_forward_impl, CUDA, - furthest_point_sampling_forward_cuda); -REGISTER_DEVICE_IMPL(furthest_point_sampling_with_dist_forward_impl, CUDA, - furthest_point_sampling_with_dist_forward_cuda); - -torch::Tensor fused_bias_leakyrelu_op(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, int act, - int grad, float alpha, float scale); - -torch::Tensor fused_bias_leakyrelu_op_impl(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, int act, - int grad, float alpha, float scale); -REGISTER_DEVICE_IMPL(fused_bias_leakyrelu_op_impl, CUDA, - fused_bias_leakyrelu_op); - -void GatherPointsForwardCUDAKernelLauncher(int b, int c, int n, int npoints, - const Tensor points, - const Tensor idx, Tensor out); - -void GatherPointsBackwardCUDAKernelLauncher(int b, int c, int n, int npoints, - const Tensor grad_out, - const Tensor idx, - Tensor grad_points); - -void gather_points_forward_cuda(int b, int c, int n, int npoints, - const Tensor points, const Tensor idx, - Tensor out) { - GatherPointsForwardCUDAKernelLauncher(b, c, n, npoints, points, idx, out); -}; - -void gather_points_backward_cuda(int b, int c, int n, int npoints, - const Tensor grad_out, const Tensor idx, - Tensor grad_points) { - GatherPointsBackwardCUDAKernelLauncher(b, c, n, npoints, grad_out, idx, - grad_points); -}; - -void gather_points_forward_impl(int b, int c, int n, int npoints, - const Tensor points, const Tensor idx, - Tensor out); - -void gather_points_backward_impl(int b, int c, int n, int npoints, - const Tensor grad_out, const Tensor idx, - Tensor grad_points); - -REGISTER_DEVICE_IMPL(gather_points_forward_impl, CUDA, - gather_points_forward_cuda); -REGISTER_DEVICE_IMPL(gather_points_backward_impl, CUDA, - gather_points_backward_cuda); - -void GroupPointsForwardCUDAKernelLauncher(int b, int c, int n, int npoints, - int nsample, const Tensor points, - const Tensor idx, Tensor out); - -void GroupPointsBackwardCUDAKernelLauncher(int b, int c, int n, int npoints, - int nsample, const Tensor grad_out, - const Tensor idx, - Tensor grad_points); - -void group_points_forward_cuda(int b, int c, int n, int npoints, int nsample, - const Tensor points, const Tensor idx, - Tensor out) { - GroupPointsForwardCUDAKernelLauncher(b, c, n, npoints, nsample, points, idx, - out); -}; - -void group_points_backward_cuda(int b, int c, int n, int npoints, int nsample, - const Tensor grad_out, const Tensor idx, - Tensor grad_points) { - GroupPointsBackwardCUDAKernelLauncher(b, c, n, npoints, nsample, grad_out, - idx, grad_points); -}; - -void group_points_forward_impl(int b, int c, int n, int npoints, int nsample, - const Tensor points, const Tensor idx, - Tensor out); - 
-void group_points_backward_impl(int b, int c, int n, int npoints, int nsample, - const Tensor grad_out, const Tensor idx, - Tensor grad_points); - -REGISTER_DEVICE_IMPL(group_points_forward_impl, CUDA, - group_points_forward_cuda); -REGISTER_DEVICE_IMPL(group_points_backward_impl, CUDA, - group_points_backward_cuda); - -void IoU3DBoxesOverlapBevForwardCUDAKernelLauncher(const int num_a, - const Tensor boxes_a, - const int num_b, - const Tensor boxes_b, - Tensor ans_overlap); - -void IoU3DNMS3DForwardCUDAKernelLauncher(const Tensor boxes, - unsigned long long* mask, - int boxes_num, - float nms_overlap_thresh); - -void IoU3DNMS3DNormalForwardCUDAKernelLauncher(const Tensor boxes, - unsigned long long* mask, - int boxes_num, - float nms_overlap_thresh); - -void iou3d_boxes_overlap_bev_forward_cuda(const int num_a, const Tensor boxes_a, - const int num_b, const Tensor boxes_b, - Tensor ans_overlap) { - IoU3DBoxesOverlapBevForwardCUDAKernelLauncher(num_a, boxes_a, num_b, boxes_b, - ans_overlap); -}; - -void iou3d_nms3d_forward_cuda(const Tensor boxes, unsigned long long* mask, - int boxes_num, float nms_overlap_thresh) { - IoU3DNMS3DForwardCUDAKernelLauncher(boxes, mask, boxes_num, - nms_overlap_thresh); -}; - -void iou3d_nms3d_normal_forward_cuda(const Tensor boxes, - unsigned long long* mask, int boxes_num, - float nms_overlap_thresh) { - IoU3DNMS3DNormalForwardCUDAKernelLauncher(boxes, mask, boxes_num, - nms_overlap_thresh); -}; - -void iou3d_boxes_overlap_bev_forward_impl(const int num_a, const Tensor boxes_a, - const int num_b, const Tensor boxes_b, - Tensor ans_overlap); - -void iou3d_nms3d_forward_impl(const Tensor boxes, unsigned long long* mask, - int boxes_num, float nms_overlap_thresh); - -void iou3d_nms3d_normal_forward_impl(const Tensor boxes, - unsigned long long* mask, int boxes_num, - float nms_overlap_thresh); - -REGISTER_DEVICE_IMPL(iou3d_boxes_overlap_bev_forward_impl, CUDA, - iou3d_boxes_overlap_bev_forward_cuda); -REGISTER_DEVICE_IMPL(iou3d_nms3d_forward_impl, CUDA, iou3d_nms3d_forward_cuda); -REGISTER_DEVICE_IMPL(iou3d_nms3d_normal_forward_impl, CUDA, - iou3d_nms3d_normal_forward_cuda); - -void KNNForwardCUDAKernelLauncher(int b, int n, int m, int nsample, - const Tensor xyz, const Tensor new_xyz, - Tensor idx, Tensor dist2); - -void knn_forward_cuda(int b, int n, int m, int nsample, const Tensor xyz, - const Tensor new_xyz, Tensor idx, Tensor dist2) { - KNNForwardCUDAKernelLauncher(b, n, m, nsample, xyz, new_xyz, idx, dist2); -} - -void knn_forward_impl(int b, int n, int m, int nsample, const Tensor xyz, - const Tensor new_xyz, Tensor idx, Tensor dist2); -REGISTER_DEVICE_IMPL(knn_forward_impl, CUDA, knn_forward_cuda); - -void MaskedIm2colForwardCUDAKernelLauncher(const Tensor bottom_data, - const Tensor mask_h_idx, - const Tensor mask_w_idx, - Tensor top_data, const int kernel_h, - const int kernel_w, const int pad_h, - const int pad_w); - -void MaskedCol2imForwardCUDAKernelLauncher(const Tensor bottom_data, - const Tensor mask_h_idx, - const Tensor mask_w_idx, - Tensor top_data, const int height, - const int width, const int channels); - -void masked_im2col_forward_cuda(const Tensor im, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor col, - const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w) { - // im: (n, ic, h, w), kernel size (kh, kw) - // kernel: (oc, ic * kh * kw), col: (kh * kw * ic, ow * oh) - MaskedIm2colForwardCUDAKernelLauncher(im, mask_h_idx, mask_w_idx, col, - kernel_h, kernel_w, pad_h, pad_w); -} - -void 
masked_col2im_forward_cuda(const Tensor col, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor im, int height, - int width, int channels) { - // im: (n, ic, h, w), kernel size (kh, kw) - // kernel: (oc, ic * kh * kh), col: (kh * kw * ic, ow * oh) - MaskedCol2imForwardCUDAKernelLauncher(col, mask_h_idx, mask_w_idx, im, height, - width, channels); -} - -void masked_im2col_forward_impl(const Tensor im, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor col, - const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w); - -void masked_col2im_forward_impl(const Tensor col, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor im, int height, - int width, int channels); - -REGISTER_DEVICE_IMPL(masked_im2col_forward_impl, CUDA, - masked_im2col_forward_cuda); -REGISTER_DEVICE_IMPL(masked_col2im_forward_impl, CUDA, - masked_col2im_forward_cuda); - -void modulated_deformable_im2col_cuda( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col); - -void modulated_deformable_col2im_cuda( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im); - -void modulated_deformable_col2im_coord_cuda( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask); - -void modulated_deformable_im2col_impl( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col); - -void modulated_deformable_col2im_impl( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im); - -void modulated_deformable_col2im_coord_impl( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, 
const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask); - -REGISTER_DEVICE_IMPL(modulated_deformable_im2col_impl, CUDA, - modulated_deformable_im2col_cuda); -REGISTER_DEVICE_IMPL(modulated_deformable_col2im_impl, CUDA, - modulated_deformable_col2im_cuda); -REGISTER_DEVICE_IMPL(modulated_deformable_col2im_coord_impl, CUDA, - modulated_deformable_col2im_coord_cuda); - -Tensor ms_deform_attn_cuda_forward(const Tensor& value, - const Tensor& spatial_shapes, - const Tensor& level_start_index, - const Tensor& sampling_loc, - const Tensor& attn_weight, - const int im2col_step); - -void ms_deform_attn_cuda_backward( - const Tensor& value, const Tensor& spatial_shapes, - const Tensor& level_start_index, const Tensor& sampling_loc, - const Tensor& attn_weight, const Tensor& grad_output, Tensor& grad_value, - Tensor& grad_sampling_loc, Tensor& grad_attn_weight, const int im2col_step); - -Tensor ms_deform_attn_impl_forward(const Tensor& value, - const Tensor& spatial_shapes, - const Tensor& level_start_index, - const Tensor& sampling_loc, - const Tensor& attn_weight, - const int im2col_step); - -void ms_deform_attn_impl_backward( - const Tensor& value, const Tensor& spatial_shapes, - const Tensor& level_start_index, const Tensor& sampling_loc, - const Tensor& attn_weight, const Tensor& grad_output, Tensor& grad_value, - Tensor& grad_sampling_loc, Tensor& grad_attn_weight, const int im2col_step); - -REGISTER_DEVICE_IMPL(ms_deform_attn_impl_forward, CUDA, - ms_deform_attn_cuda_forward); -REGISTER_DEVICE_IMPL(ms_deform_attn_impl_backward, CUDA, - ms_deform_attn_cuda_backward); - -Tensor NMSCUDAKernelLauncher(Tensor boxes, Tensor scores, float iou_threshold, - int offset); - -Tensor nms_cuda(Tensor boxes, Tensor scores, float iou_threshold, int offset) { - return NMSCUDAKernelLauncher(boxes, scores, iou_threshold, offset); -} - -Tensor nms_impl(Tensor boxes, Tensor scores, float iou_threshold, int offset); -REGISTER_DEVICE_IMPL(nms_impl, CUDA, nms_cuda); - -void PointsInBoxesPartForwardCUDAKernelLauncher(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points); - -void PointsInBoxesAllForwardCUDAKernelLauncher(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points); - -void points_in_boxes_part_forward_cuda(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points) { - PointsInBoxesPartForwardCUDAKernelLauncher(batch_size, boxes_num, pts_num, - boxes, pts, box_idx_of_points); -}; - -void points_in_boxes_all_forward_cuda(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points) { - PointsInBoxesAllForwardCUDAKernelLauncher(batch_size, boxes_num, pts_num, - boxes, pts, box_idx_of_points); -}; - -void points_in_boxes_part_forward_impl(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points); - -void points_in_boxes_all_forward_impl(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points); -REGISTER_DEVICE_IMPL(points_in_boxes_part_forward_impl, CUDA, - points_in_boxes_part_forward_cuda); -REGISTER_DEVICE_IMPL(points_in_boxes_all_forward_impl, CUDA, - points_in_boxes_all_forward_cuda); - -void PSAMaskForwardCUDAKernelLauncher(const int psa_type, const Tensor 
input, - Tensor output, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, - const int half_w_mask); - -void PSAMaskBackwardCUDAKernelLauncher( - const int psa_type, const Tensor grad_output, Tensor grad_input, - const int num_, const int h_feature, const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, const int half_w_mask); - -void psamask_forward_cuda(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask) { - PSAMaskForwardCUDAKernelLauncher(psa_type, input, output, num_, h_feature, - w_feature, h_mask, w_mask, half_h_mask, - half_w_mask); -} - -void psamask_backward_cuda(const int psa_type, const Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask) { - PSAMaskBackwardCUDAKernelLauncher(psa_type, grad_output, grad_input, num_, - h_feature, w_feature, h_mask, w_mask, - half_h_mask, half_w_mask); -} - -void psamask_forward_impl(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask); - -void psamask_backward_impl(const int psa_type, const Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask); -REGISTER_DEVICE_IMPL(psamask_forward_impl, CUDA, psamask_forward_cuda); -REGISTER_DEVICE_IMPL(psamask_backward_impl, CUDA, psamask_backward_cuda); - -void ROIAlignForwardCUDAKernelLauncher(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -void ROIAlignBackwardCUDAKernelLauncher(Tensor grad_output, Tensor rois, - Tensor argmax_y, Tensor argmax_x, - Tensor grad_input, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, - bool aligned); - -void roi_align_forward_cuda(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - ROIAlignForwardCUDAKernelLauncher( - input, rois, output, argmax_y, argmax_x, aligned_height, aligned_width, - spatial_scale, sampling_ratio, pool_mode, aligned); -} - -void roi_align_backward_cuda(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - ROIAlignBackwardCUDAKernelLauncher( - grad_output, rois, argmax_y, argmax_x, grad_input, aligned_height, - aligned_width, spatial_scale, sampling_ratio, pool_mode, aligned); -} - -void roi_align_forward_impl(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -void roi_align_backward_impl(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int 
pool_mode, bool aligned); - -REGISTER_DEVICE_IMPL(roi_align_forward_impl, CUDA, roi_align_forward_cuda); -REGISTER_DEVICE_IMPL(roi_align_backward_impl, CUDA, roi_align_backward_cuda); - -void ROIAlignRotatedForwardCUDAKernelLauncher( - const at::Tensor input, const at::Tensor rois, const float spatial_scale, - const int sampling_ratio, const bool aligned, const bool clockwise, - const int channels, const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, at::Tensor output); - -void ROIAlignRotatedBackwardCUDAKernelLauncher( - const at::Tensor top_grad, const at::Tensor rois, const float spatial_scale, - const int sampling_ratio, const bool aligned, const bool clockwise, - const int channels, const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, at::Tensor bottom_grad); - -void roi_align_rotated_forward_cuda(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise) { - // Number of ROIs - int num_rois = rois.size(0); - int size_rois = rois.size(1); - - if (size_rois != 6) { - AT_ERROR("wrong roi size"); - } - - int num_channels = input.size(1); - int data_height = input.size(2); - int data_width = input.size(3); - ROIAlignRotatedForwardCUDAKernelLauncher( - input, rois, spatial_scale, sampling_ratio, aligned, clockwise, - num_channels, data_height, data_width, num_rois, aligned_height, - aligned_width, output); -} - -void roi_align_rotated_backward_cuda(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise) { - // Number of ROIs - int num_rois = rois.size(0); - int size_rois = rois.size(1); - if (size_rois != 6) { - AT_ERROR("wrong roi size"); - } - - int num_channels = bottom_grad.size(1); - int data_height = bottom_grad.size(2); - int data_width = bottom_grad.size(3); - ROIAlignRotatedBackwardCUDAKernelLauncher( - top_grad, rois, spatial_scale, sampling_ratio, aligned, clockwise, - num_channels, data_height, data_width, num_rois, aligned_height, - aligned_width, bottom_grad); -} - -void roi_align_rotated_forward_impl(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise); - -void roi_align_rotated_backward_impl(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise); -REGISTER_DEVICE_IMPL(roi_align_rotated_forward_impl, CUDA, - roi_align_rotated_forward_cuda); -REGISTER_DEVICE_IMPL(roi_align_rotated_backward_impl, CUDA, - roi_align_rotated_backward_cuda); - -void RiROIAlignRotatedForwardCUDAKernelLauncher( - const at::Tensor features, const at::Tensor rois, const float spatial_scale, - const int num_samples, const bool clockwise, const int channels, - const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, const int num_orientations, - at::Tensor output); - -void RiROIAlignRotatedBackwardCUDAKernelLauncher( - const at::Tensor top_grad, const at::Tensor rois, const float spatial_scale, - const int num_samples, const bool clockwise, const int channels, - const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, const int num_orientations, - at::Tensor bottom_grad); - -void 
riroi_align_rotated_forward_cuda(Tensor features, Tensor rois, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise) { - // Number of ROIs - int num_rois = rois.size(0); - int size_rois = rois.size(1); - if (size_rois != 6) { - AT_ERROR("wrong roi size"); - } - CHECK_CONTIGUOUS(features); - CHECK_CONTIGUOUS(rois); - int num_channels = features.size(1) / num_orientations; - int data_height = features.size(2); - int data_width = features.size(3); - RiROIAlignRotatedForwardCUDAKernelLauncher( - features, rois, spatial_scale, num_samples, clockwise, num_channels, - data_height, data_width, num_rois, pooled_height, pooled_width, - num_orientations, output); -} - -void riroi_align_rotated_backward_cuda(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise) { - // Number of ROIs - int num_rois = rois.size(0); - int size_rois = rois.size(1); - if (size_rois != 6) { - AT_ERROR("wrong roi size"); - } - CHECK_CONTIGUOUS(top_grad); - CHECK_CONTIGUOUS(rois); - int num_channels = bottom_grad.size(1) / num_orientations; - int data_height = bottom_grad.size(2); - int data_width = bottom_grad.size(3); - RiROIAlignRotatedBackwardCUDAKernelLauncher( - top_grad, rois, spatial_scale, num_samples, clockwise, num_channels, - data_height, data_width, num_rois, pooled_height, pooled_width, - num_orientations, bottom_grad); -} - -void riroi_align_rotated_forward_impl(Tensor features, Tensor rois, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise); - -void riroi_align_rotated_backward_impl(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise); - -REGISTER_DEVICE_IMPL(riroi_align_rotated_forward_impl, CUDA, - riroi_align_rotated_forward_cuda); -REGISTER_DEVICE_IMPL(riroi_align_rotated_backward_impl, CUDA, - riroi_align_rotated_backward_cuda); - -void RoiawarePool3dForwardCUDAKernelLauncher( - int boxes_num, int pts_num, int channels, int max_pts_each_voxel, int out_x, - int out_y, int out_z, const Tensor rois, const Tensor pts, - const Tensor pts_feature, Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method); - -void RoiawarePool3dBackwardCUDAKernelLauncher( - int boxes_num, int out_x, int out_y, int out_z, int channels, - int max_pts_each_voxel, const Tensor pts_idx_of_voxels, const Tensor argmax, - const Tensor grad_out, Tensor grad_in, int pool_method); - -void roiaware_pool3d_forward_cuda(int boxes_num, int pts_num, int channels, - int max_pts_each_voxel, int out_x, int out_y, - int out_z, const Tensor rois, - const Tensor pts, const Tensor pts_feature, - Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method) { - RoiawarePool3dForwardCUDAKernelLauncher( - boxes_num, pts_num, channels, max_pts_each_voxel, out_x, out_y, out_z, - rois, pts, pts_feature, argmax, pts_idx_of_voxels, pooled_features, - pool_method); -}; - -void roiaware_pool3d_backward_cuda(int boxes_num, int out_x, int out_y, - int out_z, int channels, - int max_pts_each_voxel, - const Tensor pts_idx_of_voxels, - const Tensor argmax, const Tensor grad_out, - Tensor grad_in, int pool_method) { - RoiawarePool3dBackwardCUDAKernelLauncher( - boxes_num, out_x, out_y, out_z, channels, max_pts_each_voxel, - 
pts_idx_of_voxels, argmax, grad_out, grad_in, pool_method); -}; - -void roiaware_pool3d_forward_impl(int boxes_num, int pts_num, int channels, - int max_pts_each_voxel, int out_x, int out_y, - int out_z, const Tensor rois, - const Tensor pts, const Tensor pts_feature, - Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method); - -void roiaware_pool3d_backward_impl(int boxes_num, int out_x, int out_y, - int out_z, int channels, - int max_pts_each_voxel, - const Tensor pts_idx_of_voxels, - const Tensor argmax, const Tensor grad_out, - Tensor grad_in, int pool_method); - -REGISTER_DEVICE_IMPL(roiaware_pool3d_forward_impl, CUDA, - roiaware_pool3d_forward_cuda); -REGISTER_DEVICE_IMPL(roiaware_pool3d_backward_impl, CUDA, - roiaware_pool3d_backward_cuda); - -void RoIPointPool3dForwardCUDAKernelLauncher( - int batch_size, int pts_num, int boxes_num, int feature_in_len, - int sampled_pts_num, const Tensor xyz, const Tensor boxes3d, - const Tensor pts_feature, Tensor pooled_features, Tensor pooled_empty_flag); - -void roipoint_pool3d_forward_cuda(int batch_size, int pts_num, int boxes_num, - int feature_in_len, int sampled_pts_num, - const Tensor xyz, const Tensor boxes3d, - const Tensor pts_feature, - Tensor pooled_features, - Tensor pooled_empty_flag) { - RoIPointPool3dForwardCUDAKernelLauncher( - batch_size, pts_num, boxes_num, feature_in_len, sampled_pts_num, xyz, - boxes3d, pts_feature, pooled_features, pooled_empty_flag); -}; - -void roipoint_pool3d_forward_impl(int batch_size, int pts_num, int boxes_num, - int feature_in_len, int sampled_pts_num, - const Tensor xyz, const Tensor boxes3d, - const Tensor pts_feature, - Tensor pooled_features, - Tensor pooled_empty_flag); -REGISTER_DEVICE_IMPL(roipoint_pool3d_forward_impl, CUDA, - roipoint_pool3d_forward_cuda); - -void ROIPoolForwardCUDAKernelLauncher(Tensor input, Tensor rois, Tensor output, - Tensor argmax, int pooled_height, - int pooled_width, float spatial_scale); - -void ROIPoolBackwardCUDAKernelLauncher(Tensor grad_output, Tensor rois, - Tensor argmax, Tensor grad_input, - int pooled_height, int pooled_width, - float spatial_scale); - -void roi_pool_forward_cuda(Tensor input, Tensor rois, Tensor output, - Tensor argmax, int pooled_height, int pooled_width, - float spatial_scale) { - ROIPoolForwardCUDAKernelLauncher(input, rois, output, argmax, pooled_height, - pooled_width, spatial_scale); -} - -void roi_pool_backward_cuda(Tensor grad_output, Tensor rois, Tensor argmax, - Tensor grad_input, int pooled_height, - int pooled_width, float spatial_scale) { - ROIPoolBackwardCUDAKernelLauncher(grad_output, rois, argmax, grad_input, - pooled_height, pooled_width, spatial_scale); -} - -void roi_pool_forward_impl(Tensor input, Tensor rois, Tensor output, - Tensor argmax, int pooled_height, int pooled_width, - float spatial_scale); -void roi_pool_backward_impl(Tensor grad_output, Tensor rois, Tensor argmax, - Tensor grad_input, int pooled_height, - int pooled_width, float spatial_scale); -REGISTER_DEVICE_IMPL(roi_pool_forward_impl, CUDA, roi_pool_forward_cuda); -REGISTER_DEVICE_IMPL(roi_pool_backward_impl, CUDA, roi_pool_backward_cuda); - -typedef enum { SUM = 0, MEAN = 1, MAX = 2 } reduce_t; - -std::vector DynamicPointToVoxelForwardCUDAKernelLauncher( - const at::Tensor& feats, const at::Tensor& coors, - const reduce_t reduce_type); - -void DynamicPointToVoxelBackwardCUDAKernelLauncher( - at::Tensor& grad_feats, const at::Tensor& grad_reduced_feats, - const at::Tensor& feats, const at::Tensor& reduced_feats, - const 
at::Tensor& coors_map, const at::Tensor& reduce_count, - const reduce_t reduce_type); - -std::vector dynamic_point_to_voxel_forward_cuda( - const torch::Tensor& feats, const torch::Tensor& coors, - const reduce_t reduce_type) { - return DynamicPointToVoxelForwardCUDAKernelLauncher(feats, coors, - reduce_type); -}; - -void dynamic_point_to_voxel_backward_cuda( - torch::Tensor& grad_feats, const torch::Tensor& grad_reduced_feats, - const torch::Tensor& feats, const torch::Tensor& reduced_feats, - const torch::Tensor& coors_idx, const torch::Tensor& reduce_count, - const reduce_t reduce_type) { - DynamicPointToVoxelBackwardCUDAKernelLauncher(grad_feats, grad_reduced_feats, - feats, reduced_feats, coors_idx, - reduce_count, reduce_type); -}; - -std::vector dynamic_point_to_voxel_forward_impl( - const torch::Tensor& feats, const torch::Tensor& coors, - const reduce_t reduce_type); - -void dynamic_point_to_voxel_backward_impl( - torch::Tensor& grad_feats, const torch::Tensor& grad_reduced_feats, - const torch::Tensor& feats, const torch::Tensor& reduced_feats, - const torch::Tensor& coors_idx, const torch::Tensor& reduce_count, - const reduce_t reduce_type); - -REGISTER_DEVICE_IMPL(dynamic_point_to_voxel_forward_impl, CUDA, - dynamic_point_to_voxel_forward_cuda); -REGISTER_DEVICE_IMPL(dynamic_point_to_voxel_backward_impl, CUDA, - dynamic_point_to_voxel_backward_cuda); - -void SyncBNForwardMeanCUDAKernelLauncher(const Tensor input, Tensor mean); - -void SyncBNForwardVarCUDAKernelLauncher(const Tensor input, const Tensor mean, - Tensor var); - -void SyncBNForwardOutputCUDAKernelLauncher( - const Tensor input, const Tensor mean, const Tensor var, - Tensor running_mean, Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, Tensor output, float eps, - float momentum, int group_size); - -void SyncBNBackwardParamCUDAKernelLauncher(const Tensor grad_output, - const Tensor norm, - Tensor grad_weight, - Tensor grad_bias); - -void SyncBNBackwardDataCUDAKernelLauncher(const Tensor grad_output, - const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, - const Tensor norm, const Tensor std, - Tensor grad_input); - -void sync_bn_forward_mean_cuda(const Tensor input, Tensor mean) { - SyncBNForwardMeanCUDAKernelLauncher(input, mean); -} - -void sync_bn_forward_var_cuda(const Tensor input, const Tensor mean, - Tensor var) { - SyncBNForwardVarCUDAKernelLauncher(input, mean, var); -} - -void sync_bn_forward_output_cuda(const Tensor input, const Tensor mean, - const Tensor var, Tensor running_mean, - Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size) { - SyncBNForwardOutputCUDAKernelLauncher(input, mean, var, running_mean, - running_var, weight, bias, norm, std, - output, eps, momentum, group_size); -} - -void sync_bn_backward_param_cuda(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias) { - SyncBNBackwardParamCUDAKernelLauncher(grad_output, norm, grad_weight, - grad_bias); -} - -void sync_bn_backward_data_cuda(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, const Tensor norm, - const Tensor std, Tensor grad_input) { - SyncBNBackwardDataCUDAKernelLauncher(grad_output, weight, grad_weight, - grad_bias, norm, std, grad_input); -} - -void sync_bn_forward_mean_impl(const Tensor input, Tensor mean); - -void sync_bn_forward_var_impl(const Tensor input, const Tensor mean, - Tensor var); - 
-void sync_bn_forward_output_impl(const Tensor input, const Tensor mean, - const Tensor var, Tensor running_mean, - Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size); - -void sync_bn_backward_param_impl(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias); - -void sync_bn_backward_data_impl(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, const Tensor norm, - const Tensor std, Tensor grad_input); - -REGISTER_DEVICE_IMPL(sync_bn_forward_mean_impl, CUDA, - sync_bn_forward_mean_cuda); -REGISTER_DEVICE_IMPL(sync_bn_forward_var_impl, CUDA, sync_bn_forward_var_cuda); -REGISTER_DEVICE_IMPL(sync_bn_forward_output_impl, CUDA, - sync_bn_forward_output_cuda); -REGISTER_DEVICE_IMPL(sync_bn_backward_param_impl, CUDA, - sync_bn_backward_param_cuda); -REGISTER_DEVICE_IMPL(sync_bn_backward_data_impl, CUDA, - sync_bn_backward_data_cuda); - -void ThreeInterpolateForwardCUDAKernelLauncher(int b, int c, int m, int n, - const Tensor points, - const Tensor idx, - const Tensor weight, Tensor out); - -void ThreeInterpolateBackwardCUDAKernelLauncher(int b, int c, int n, int m, - const Tensor grad_out, - const Tensor idx, - const Tensor weight, - Tensor grad_points); - -void three_interpolate_forward_cuda(int b, int c, int m, int n, - const Tensor points, const Tensor idx, - const Tensor weight, Tensor out) { - ThreeInterpolateForwardCUDAKernelLauncher(b, c, m, n, points, idx, weight, - out); -}; - -void three_interpolate_backward_cuda(int b, int c, int n, int m, - const Tensor grad_out, const Tensor idx, - const Tensor weight, Tensor grad_points) { - ThreeInterpolateBackwardCUDAKernelLauncher(b, c, n, m, grad_out, idx, weight, - grad_points); -}; - -void three_interpolate_forward_impl(int b, int c, int m, int n, - const Tensor points, const Tensor idx, - const Tensor weight, Tensor out); - -void three_interpolate_backward_impl(int b, int c, int n, int m, - const Tensor grad_out, const Tensor idx, - const Tensor weight, Tensor grad_points); -REGISTER_DEVICE_IMPL(three_interpolate_forward_impl, CUDA, - three_interpolate_forward_cuda); -REGISTER_DEVICE_IMPL(three_interpolate_backward_impl, CUDA, - three_interpolate_backward_cuda); - -void ThreeNNForwardCUDAKernelLauncher(int b, int n, int m, const Tensor unknown, - const Tensor known, Tensor dist2, - Tensor idx); - -void three_nn_forward_cuda(int b, int n, int m, const Tensor unknown, - const Tensor known, Tensor dist2, Tensor idx) { - ThreeNNForwardCUDAKernelLauncher(b, n, m, unknown, known, dist2, idx); -}; - -void three_nn_forward_impl(int b, int n, int m, const Tensor unknown, - const Tensor known, Tensor dist2, Tensor idx); -REGISTER_DEVICE_IMPL(three_nn_forward_impl, CUDA, three_nn_forward_cuda); - -void TINShiftForwardCUDAKernelLauncher(Tensor input, Tensor shift, - Tensor output); - -void TINShiftBackwardCUDAKernelLauncher(Tensor grad_output, Tensor shift, - Tensor grad_input); - -void tin_shift_forward_cuda(Tensor input, Tensor shift, Tensor output) { - TINShiftForwardCUDAKernelLauncher(input, shift, output); -} - -void tin_shift_backward_cuda(Tensor grad_output, Tensor shift, - Tensor grad_input) { - TINShiftBackwardCUDAKernelLauncher(grad_output, shift, grad_input); -} - -void tin_shift_forward_impl(Tensor input, Tensor shift, Tensor output); -void tin_shift_backward_impl(Tensor grad_output, Tensor shift, - Tensor grad_input); -REGISTER_DEVICE_IMPL(tin_shift_forward_impl, 
CUDA, tin_shift_forward_cuda); -REGISTER_DEVICE_IMPL(tin_shift_backward_impl, CUDA, tin_shift_backward_cuda); - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, - const torch::Tensor& kernel, int up_x, int up_y, - int down_x, int down_y, int pad_x0, int pad_x1, - int pad_y0, int pad_y1); - -torch::Tensor upfirdn2d_op_impl(const torch::Tensor& input, - const torch::Tensor& kernel, int up_x, int up_y, - int down_x, int down_y, int pad_x0, int pad_x1, - int pad_y0, int pad_y1); -REGISTER_DEVICE_IMPL(upfirdn2d_op_impl, CUDA, upfirdn2d_op); - -int HardVoxelizeForwardCUDAKernelLauncher( - const at::Tensor& points, at::Tensor& voxels, at::Tensor& coors, - at::Tensor& num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim = 3); - -int NondeterministicHardVoxelizeForwardCUDAKernelLauncher( - const at::Tensor& points, at::Tensor& voxels, at::Tensor& coors, - at::Tensor& num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim = 3); - -void DynamicVoxelizeForwardCUDAKernelLauncher( - const at::Tensor& points, at::Tensor& coors, - const std::vector voxel_size, const std::vector coors_range, - const int NDim = 3); - -int hard_voxelize_forward_cuda(const at::Tensor& points, at::Tensor& voxels, - at::Tensor& coors, - at::Tensor& num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim) { - return HardVoxelizeForwardCUDAKernelLauncher( - points, voxels, coors, num_points_per_voxel, voxel_size, coors_range, - max_points, max_voxels, NDim); -}; - -int nondeterministic_hard_voxelize_forward_cuda( - const at::Tensor& points, at::Tensor& voxels, at::Tensor& coors, - at::Tensor& num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim) { - return NondeterministicHardVoxelizeForwardCUDAKernelLauncher( - points, voxels, coors, num_points_per_voxel, voxel_size, coors_range, - max_points, max_voxels, NDim); -}; - -void dynamic_voxelize_forward_cuda(const at::Tensor& points, at::Tensor& coors, - const std::vector voxel_size, - const std::vector coors_range, - const int NDim) { - DynamicVoxelizeForwardCUDAKernelLauncher(points, coors, voxel_size, - coors_range, NDim); -}; - -int hard_voxelize_forward_impl(const at::Tensor& points, at::Tensor& voxels, - at::Tensor& coors, - at::Tensor& num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim); - -int nondeterministic_hard_voxelize_forward_impl( - const at::Tensor& points, at::Tensor& voxels, at::Tensor& coors, - at::Tensor& num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim); - -void dynamic_voxelize_forward_impl(const at::Tensor& points, at::Tensor& coors, - const std::vector voxel_size, - const std::vector coors_range, - const int NDim); - -REGISTER_DEVICE_IMPL(hard_voxelize_forward_impl, CUDA, - hard_voxelize_forward_cuda); -REGISTER_DEVICE_IMPL(nondeterministic_hard_voxelize_forward_impl, CUDA, - nondeterministic_hard_voxelize_forward_cuda); -REGISTER_DEVICE_IMPL(dynamic_voxelize_forward_impl, CUDA, - dynamic_voxelize_forward_cuda); - -void RotatedFeatureAlignForwardCUDAKernelLauncher(const Tensor features, - const 
Tensor best_bboxes, - const float spatial_scale, - const int points, - Tensor output); - -void RotatedFeatureAlignBackwardCUDAKernelLauncher(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, - Tensor bottom_grad); - -void rotated_feature_align_forward_cuda(const Tensor features, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor output) { - RotatedFeatureAlignForwardCUDAKernelLauncher(features, best_bboxes, - spatial_scale, points, output); -}; - -void rotated_feature_align_backward_cuda(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor bottom_grad) { - RotatedFeatureAlignBackwardCUDAKernelLauncher( - top_grad, best_bboxes, spatial_scale, points, bottom_grad); -}; - -void rotated_feature_align_forward_impl(const Tensor features, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor output); - -void rotated_feature_align_backward_impl(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor bottom_grad); - -REGISTER_DEVICE_IMPL(rotated_feature_align_forward_impl, CUDA, - rotated_feature_align_forward_cuda); -REGISTER_DEVICE_IMPL(rotated_feature_align_backward_impl, CUDA, - rotated_feature_align_backward_cuda); - -void PointsInPolygonsForwardCUDAKernelLauncher(const at::Tensor points, - const at::Tensor polygons, - const int rows, const int cols, - at::Tensor output); - -void points_in_polygons_forward_cuda(const Tensor points, const Tensor polygons, - Tensor output, const int rows, - const int cols) { - PointsInPolygonsForwardCUDAKernelLauncher(points, polygons, rows, cols, - output); -}; - -void points_in_polygons_forward_impl(const Tensor points, const Tensor polygons, - Tensor output, const int rows, - const int cols); - -REGISTER_DEVICE_IMPL(points_in_polygons_forward_impl, CUDA, - points_in_polygons_forward_cuda); - -// torch::Tensor IndiceMaxpoolForwardCUDAKernelLauncher(torch::Tensor features, -// torch::Tensor indicePairs, -// torch::Tensor indiceNum, -// int64_t numAct); - -// torch::Tensor indice_maxpool_forward_cuda(torch::Tensor features, -// torch::Tensor indicePairs, -// torch::Tensor indiceNum, -// int64_t numAct) { -// return IndiceMaxpoolForwardCUDAKernelLauncher(features, indicePairs, -// indiceNum, numAct); -// }; - -// torch::Tensor indice_maxpool_forward_impl(torch::Tensor features, -// torch::Tensor indicePairs, -// torch::Tensor indiceNum, -// int64_t numAct); -// REGISTER_DEVICE_IMPL(indice_maxpool_forward_impl, CUDA, -// indice_maxpool_forward_cuda); - -// torch::Tensor IndiceMaxpoolBackwardCUDAKernelLauncher(torch::Tensor features, -// torch::Tensor outFeatures, -// torch::Tensor outGrad, -// torch::Tensor indicePairs, -// torch::Tensor indiceNum); - -// torch::Tensor indice_maxpool_backward_cuda(torch::Tensor features, -// torch::Tensor outFeatures, -// torch::Tensor outGrad, -// torch::Tensor indicePairs, -// torch::Tensor indiceNum) { -// return IndiceMaxpoolBackwardCUDAKernelLauncher(features, outFeatures, outGrad, -// indicePairs, indiceNum); -// }; - -// torch::Tensor indice_maxpool_backward_impl(torch::Tensor features, -// torch::Tensor outFeatures, -// torch::Tensor outGrad, -// torch::Tensor indicePairs, -// torch::Tensor indiceNum); - -// REGISTER_DEVICE_IMPL(indice_maxpool_backward_impl, CUDA, -// indice_maxpool_backward_cuda) - -// torch::Tensor IndiceConvForwardCUDAKernelLauncher( -// torch::Tensor features, torch::Tensor filters, 
torch::Tensor indicePairs, -// torch::Tensor indiceNum, int64_t numActOut, int64_t _inverse, -// int64_t _subM); - -// torch::Tensor indice_conv_forward_cuda(torch::Tensor features, -// torch::Tensor filters, -// torch::Tensor indicePairs, -// torch::Tensor indiceNum, -// int64_t numActOut, int64_t _inverse, -// int64_t _subM) { -// return IndiceConvForwardCUDAKernelLauncher( -// features, filters, indicePairs, indiceNum, numActOut, _inverse, _subM); -// }; - -// torch::Tensor indice_conv_forward_impl(torch::Tensor features, -// torch::Tensor filters, -// torch::Tensor indicePairs, -// torch::Tensor indiceNum, -// int64_t numActOut, int64_t _inverse, -// int64_t _subM); - -// REGISTER_DEVICE_IMPL(indice_conv_forward_impl, CUDA, indice_conv_forward_cuda); - -// std::vector IndiceConvBackwardCUDAKernelLauncher( -// torch::Tensor features, torch::Tensor filters, torch::Tensor outGrad, -// torch::Tensor indicePairs, torch::Tensor indiceNum, int64_t _inverse, -// int64_t _subM); - -// std::vector indice_conv_backward_cuda( -// torch::Tensor features, torch::Tensor filters, torch::Tensor outGrad, -// torch::Tensor indicePairs, torch::Tensor indiceNum, int64_t _inverse, -// int64_t _subM) { -// return IndiceConvBackwardCUDAKernelLauncher( -// features, filters, outGrad, indicePairs, indiceNum, _inverse, _subM); -// }; - -// std::vector indice_conv_backward_impl( -// torch::Tensor features, torch::Tensor filters, torch::Tensor outGrad, -// torch::Tensor indicePairs, torch::Tensor indiceNum, int64_t _inverse, -// int64_t _subM); - -// REGISTER_DEVICE_IMPL(indice_conv_backward_impl, CUDA, -// indice_conv_backward_cuda); - -// torch::Tensor FusedIndiceConvBatchnormCUDAKernelLauncher( -// torch::Tensor features, torch::Tensor filters, torch::Tensor bias, -// torch::Tensor indicePairs, torch::Tensor indiceNum, int64_t numActOut, -// int64_t _inverse, int64_t _subM); - -// torch::Tensor fused_indice_conv_batchnorm_forward_cuda( -// torch::Tensor features, torch::Tensor filters, torch::Tensor bias, -// torch::Tensor indicePairs, torch::Tensor indiceNum, int64_t numActOut, -// int64_t _inverse, int64_t _subM) { -// return FusedIndiceConvBatchnormCUDAKernelLauncher(features, filters, bias, -// indicePairs, indiceNum, -// numActOut, _inverse, _subM); -// }; - -// torch::Tensor fused_indice_conv_batchnorm_forward_impl( -// torch::Tensor features, torch::Tensor filters, torch::Tensor bias, -// torch::Tensor indicePairs, torch::Tensor indiceNum, int64_t numActOut, -// int64_t _inverse, int64_t _subM); - -// REGISTER_DEVICE_IMPL(fused_indice_conv_batchnorm_forward_impl, CUDA, -// fused_indice_conv_batchnorm_forward_cuda) - -void MinAreaPolygonsCUDAKernelLauncher(const Tensor pointsets, Tensor polygons); - -void min_area_polygons_cuda(const Tensor pointsets, Tensor polygons) { - MinAreaPolygonsCUDAKernelLauncher(pointsets, polygons); -} - -void min_area_polygons_impl(const Tensor pointsets, Tensor polygons); - -REGISTER_DEVICE_IMPL(min_area_polygons_impl, CUDA, min_area_polygons_cuda); - -void ActiveRotatedFilterForwardCUDAKernelLauncher(const Tensor input, - const Tensor indices, - Tensor output); - -void ActiveRotatedFilterBackwardCUDAKernelLauncher(const Tensor grad_out, - const Tensor indices, - Tensor grad_in); - -void active_rotated_filter_forward_cuda(const Tensor input, - const Tensor indices, Tensor output) { - ActiveRotatedFilterForwardCUDAKernelLauncher(input, indices, output); -}; - -void active_rotated_filter_backward_cuda(const Tensor grad_out, - const Tensor indices, Tensor grad_in) { - 
ActiveRotatedFilterBackwardCUDAKernelLauncher(grad_out, indices, grad_in); -}; - -void active_rotated_filter_forward_impl(const Tensor input, - const Tensor indices, Tensor output); - -void active_rotated_filter_backward_impl(const Tensor grad_out, - const Tensor indices, Tensor grad_in); - -REGISTER_DEVICE_IMPL(active_rotated_filter_forward_impl, CUDA, - active_rotated_filter_forward_cuda); -REGISTER_DEVICE_IMPL(active_rotated_filter_backward_impl, CUDA, - active_rotated_filter_backward_cuda); - -void ConvexIoUCUDAKernelLauncher(const Tensor pointsets, const Tensor polygons, - Tensor ious); - -void ConvexGIoUCUDAKernelLauncher(const Tensor pointsets, const Tensor polygons, - Tensor output); - -void convex_iou_cuda(const Tensor pointsets, const Tensor polygons, - Tensor ious) { - ConvexIoUCUDAKernelLauncher(pointsets, polygons, ious); -} - -void convex_giou_cuda(const Tensor pointsets, const Tensor polygons, - Tensor output) { - ConvexGIoUCUDAKernelLauncher(pointsets, polygons, output); -} - -void convex_iou_impl(const Tensor pointsets, const Tensor polygons, - Tensor ious); - -void convex_giou_impl(const Tensor pointsets, const Tensor polygons, - Tensor output); - -REGISTER_DEVICE_IMPL(convex_iou_impl, CUDA, convex_iou_cuda); -REGISTER_DEVICE_IMPL(convex_giou_impl, CUDA, convex_giou_cuda); - -Tensor DiffIoURotatedSortVerticesCUDAKernelLauncher(Tensor vertices, - Tensor mask, - Tensor num_valid); - -Tensor diff_iou_rotated_sort_vertices_forward_cuda(Tensor vertices, Tensor mask, - Tensor num_valid) { - return DiffIoURotatedSortVerticesCUDAKernelLauncher(vertices, mask, - num_valid); -} - -Tensor diff_iou_rotated_sort_vertices_forward_impl(Tensor vertices, Tensor mask, - Tensor num_valid); - -REGISTER_DEVICE_IMPL(diff_iou_rotated_sort_vertices_forward_impl, CUDA, - diff_iou_rotated_sort_vertices_forward_cuda); - -void ChamferDistanceForwardCUDAKernelLauncher( - const Tensor xyz1, const Tensor xyz2, const Tensor dist1, - const Tensor dist2, const Tensor idx1, const Tensor idx2); - -void ChamferDistanceBackwardCUDAKernelLauncher( - const Tensor xyz1, const Tensor xyz2, Tensor grad_xyz1, Tensor grad_xyz2, - Tensor grad_dist1, Tensor grad_dist2, Tensor idx1, Tensor idx2); - -void chamfer_distance_forward_cuda(const Tensor xyz1, const Tensor xyz2, - const Tensor dist1, const Tensor dist2, - const Tensor idx1, const Tensor idx2) { - ChamferDistanceForwardCUDAKernelLauncher(xyz1, xyz2, dist1, dist2, idx1, - idx2); -}; - -void chamfer_distance_backward_cuda(const Tensor xyz1, const Tensor xyz2, - Tensor gradxyz1, Tensor gradxyz2, - Tensor graddist1, Tensor graddist2, - Tensor idx1, Tensor idx2) { - ChamferDistanceBackwardCUDAKernelLauncher(xyz1, xyz2, gradxyz1, gradxyz2, - graddist1, graddist2, idx1, idx2); -}; - -void chamfer_distance_forward_impl(const Tensor xyz1, const Tensor xyz2, - const Tensor dist1, const Tensor dist2, - const Tensor idx1, const Tensor idx2); - -void chamfer_distance_backward_impl(const Tensor xyz1, const Tensor xyz2, - Tensor gradxyz1, Tensor gradxyz2, - Tensor graddist1, Tensor graddist2, - Tensor idx1, Tensor idx2); - -REGISTER_DEVICE_IMPL(chamfer_distance_forward_impl, CUDA, - chamfer_distance_forward_cuda); -REGISTER_DEVICE_IMPL(chamfer_distance_backward_impl, CUDA, - chamfer_distance_backward_cuda); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/deform_conv_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/deform_conv_cuda.cu deleted file mode 100644 index 05fc08b7..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/deform_conv_cuda.cu +++ /dev/null @@ -1,105 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "deform_conv_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void deformable_im2col_cuda(Tensor data_im, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor data_col) { - // num_axes should be smaller than block size - // todo: check parallel_imgs is correctly passed in - int height_col = - (height + 2 * pad_h - (dilation_h * (ksize_h - 1) + 1)) / stride_h + 1; - int width_col = - (width + 2 * pad_w - (dilation_w * (ksize_w - 1) + 1)) / stride_w + 1; - int num_kernels = channels * height_col * width_col * parallel_imgs; - int channel_per_deformable_group = channels / deformable_group; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_im.scalar_type(), "deformable_im2col_gpu", ([&] { - const scalar_t *data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - scalar_t *data_col_ = data_col.data_ptr(); - - deformable_im2col_gpu_kernel<<>>( - num_kernels, data_im_, data_offset_, height, width, ksize_h, - ksize_w, pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, - channel_per_deformable_group, parallel_imgs, channels, - deformable_group, height_col, width_col, data_col_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void deformable_col2im_cuda(Tensor data_col, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor grad_im) { - // todo: make sure parallel_imgs is passed in correctly - int height_col = - (height + 2 * pad_h - (dilation_h * (ksize_h - 1) + 1)) / stride_h + 1; - int width_col = - (width + 2 * pad_w - (dilation_w * (ksize_w - 1) + 1)) / stride_w + 1; - int num_kernels = - channels * ksize_h * ksize_w * height_col * width_col * parallel_imgs; - int channel_per_deformable_group = channels / deformable_group; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "deformable_col2im_gpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - scalar_t *grad_im_ = grad_im.data_ptr(); - - deformable_col2im_gpu_kernel<<>>( - num_kernels, data_col_, data_offset_, channels, height, width, - ksize_h, ksize_w, pad_h, pad_w, stride_h, stride_w, dilation_h, - dilation_w, channel_per_deformable_group, parallel_imgs, - deformable_group, height_col, width_col, grad_im_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void deformable_col2im_coord_cuda( - Tensor data_col, Tensor data_im, Tensor data_offset, const int channels, - const int height, const int width, const int ksize_h, const int ksize_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, Tensor grad_offset) { - int height_col = - (height + 2 * pad_h - (dilation_h * (ksize_h - 1) + 1)) / stride_h + 1; - int width_col = - (width + 2 * pad_w - (dilation_w * (ksize_w - 1) + 1)) / stride_w + 1; - int num_kernels = 
height_col * width_col * 2 * ksize_h * ksize_w * - deformable_group * parallel_imgs; - int channel_per_deformable_group = - channels * ksize_h * ksize_w / deformable_group; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "deformable_col2im_coord_gpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - scalar_t *grad_offset_ = grad_offset.data_ptr(); - - deformable_col2im_coord_gpu_kernel<<< - GET_BLOCKS(num_kernels), THREADS_PER_BLOCK, 0, - at::cuda::getCurrentCUDAStream()>>>( - num_kernels, data_col_, data_im_, data_offset_, channels, height, - width, ksize_h, ksize_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, channel_per_deformable_group, parallel_imgs, - 2 * ksize_h * ksize_w * deformable_group, deformable_group, - height_col, width_col, grad_offset_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/deform_roi_pool_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/deform_roi_pool_cuda.cu deleted file mode 100644 index d4439982..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/deform_roi_pool_cuda.cu +++ /dev/null @@ -1,55 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "deform_roi_pool_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void DeformRoIPoolForwardCUDAKernelLauncher(Tensor input, Tensor rois, - Tensor offset, Tensor output, - int pooled_height, int pooled_width, - float spatial_scale, - int sampling_ratio, float gamma) { - int output_size = output.numel(); - int channels = input.size(1); - int height = input.size(2); - int width = input.size(3); - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "deform_roi_pool_forward_cuda_kernel", [&] { - deform_roi_pool_forward_cuda_kernel - <<>>( - output_size, input.data_ptr(), - rois.data_ptr(), offset.data_ptr(), - output.data_ptr(), pooled_height, pooled_width, - static_cast(spatial_scale), sampling_ratio, - static_cast(gamma), channels, height, width); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void DeformRoIPoolBackwardCUDAKernelLauncher( - Tensor grad_output, Tensor input, Tensor rois, Tensor offset, - Tensor grad_input, Tensor grad_offset, int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, float gamma) { - int output_size = grad_output.numel(); - int channels = grad_input.size(1); - int height = grad_input.size(2); - int width = grad_input.size(3); - - at::cuda::CUDAGuard device_guard(grad_output.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_output.scalar_type(), "deform_roi_pool_backward_cuda_kernel", [&] { - deform_roi_pool_backward_cuda_kernel - <<>>( - output_size, grad_output.data_ptr(), - input.data_ptr(), rois.data_ptr(), - offset.data_ptr(), grad_input.data_ptr(), - grad_offset.data_ptr(), pooled_height, pooled_width, - static_cast(spatial_scale), sampling_ratio, - static_cast(gamma), channels, height, width); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/diff_iou_rotated_cuda.cu 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/diff_iou_rotated_cuda.cu deleted file mode 100644 index 62dbf5da..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/diff_iou_rotated_cuda.cu +++ /dev/null @@ -1,35 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// Adapted from -// https://github.com/lilanxiao/Rotated_IoU/cuda_op/sort_vert_kernel.cu # noqa -#include "diff_iou_rotated_cuda_kernel.cuh" -#include "pytorch_cpp_helper.hpp" -#include "pytorch_cuda_helper.hpp" - -at::Tensor DiffIoURotatedSortVerticesCUDAKernelLauncher(at::Tensor vertices, - at::Tensor mask, - at::Tensor num_valid) { - at::cuda::CUDAGuard device_guard(vertices.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - CHECK_CONTIGUOUS(vertices); - CHECK_CONTIGUOUS(mask); - CHECK_CONTIGUOUS(num_valid); - CHECK_CUDA(vertices); - CHECK_CUDA(mask); - CHECK_CUDA(num_valid); - - int b = vertices.size(0); - int n = vertices.size(1); - int m = vertices.size(2); - at::Tensor idx = - torch::zeros({b, n, MAX_NUM_VERT_IDX}, - at::device(vertices.device()).dtype(at::ScalarType::Int)); - - diff_iou_rotated_sort_vertices_forward_cuda_kernel<<>>( - b, n, m, vertices.data_ptr(), mask.data_ptr(), - num_valid.data_ptr(), idx.data_ptr()); - AT_CUDA_CHECK(cudaGetLastError()); - - return idx; -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/focal_loss_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/focal_loss_cuda.cu deleted file mode 100644 index cb899f95..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/focal_loss_cuda.cu +++ /dev/null @@ -1,111 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cuda_helper.hpp" -#include "sigmoid_focal_loss_cuda_kernel.cuh" -#include "softmax_focal_loss_cuda_kernel.cuh" - -void SigmoidFocalLossForwardCUDAKernelLauncher(Tensor input, Tensor target, - Tensor weight, Tensor output, - const float gamma, - const float alpha) { - int output_size = output.numel(); - int num_classes = input.size(1); - AT_ASSERTM(target.max().item() <= (int64_t)num_classes, - "target label should smaller or equal than num classes"); - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "sigmoid_focal_loss_forward_cuda_kernel", [&] { - sigmoid_focal_loss_forward_cuda_kernel - <<>>( - output_size, input.data_ptr(), - target.data_ptr(), weight.data_ptr(), - output.data_ptr(), gamma, alpha, num_classes); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SigmoidFocalLossBackwardCUDAKernelLauncher(Tensor input, Tensor target, - Tensor weight, - Tensor grad_input, - const float gamma, - const float alpha) { - int output_size = grad_input.numel(); - int num_classes = input.size(1); - - at::cuda::CUDAGuard device_guard(grad_input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "sigmoid_focal_loss_backward_cuda_kernel", [&] { - sigmoid_focal_loss_backward_cuda_kernel - <<>>( - output_size, input.data_ptr(), - target.data_ptr(), weight.data_ptr(), - grad_input.data_ptr(), gamma, alpha, num_classes); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SoftmaxFocalLossForwardCUDAKernelLauncher(Tensor softmax, Tensor target, - Tensor weight, Tensor output, - const float gamma, - const float alpha) { - 
int output_size = output.numel(); - int num_classes = softmax.size(1); - - AT_ASSERTM(target.max().item() <= (int64_t)num_classes, - "target label should smaller or equal than num classes"); - at::cuda::CUDAGuard device_guard(softmax.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - softmax.scalar_type(), "softmax_focal_loss_forward_cuda_kernel", [&] { - softmax_focal_loss_forward_cuda_kernel - <<>>( - output_size, softmax.data_ptr(), - target.data_ptr(), weight.data_ptr(), - output.data_ptr(), gamma, alpha, num_classes); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SoftmaxFocalLossBackwardCUDAKernelLauncher(Tensor softmax, Tensor target, - Tensor weight, Tensor buff, - Tensor grad_input, - const float gamma, - const float alpha) { - int num_classes = softmax.size(1); - - int output_size = buff.numel(); - at::cuda::CUDAGuard device_guard(grad_input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_input.scalar_type(), - "softmax_focal_loss_backward_cuda1_" - "kernel", - [&] { - softmax_focal_loss_backward_cuda1_kernel - <<>>( - output_size, softmax.data_ptr(), - target.data_ptr(), weight.data_ptr(), - buff.data_ptr(), gamma, alpha, num_classes); - }); - - AT_CUDA_CHECK(cudaGetLastError()); - - output_size = grad_input.numel(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_input.scalar_type(), - "softmax_focal_loss_backward_cuda2_" - "kernel", - [&] { - softmax_focal_loss_backward_cuda2_kernel - <<>>( - output_size, softmax.data_ptr(), - target.data_ptr(), buff.data_ptr(), - grad_input.data_ptr(), num_classes); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/furthest_point_sample_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/furthest_point_sample_cuda.cu deleted file mode 100644 index 812930b0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/furthest_point_sample_cuda.cu +++ /dev/null @@ -1,143 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/sampling_gpu.cu - -#include -#include - -#include "furthest_point_sample_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -inline int opt_n_threads(int work_size) { - const int pow_2 = std::log(static_cast(work_size)) / std::log(2.0); - - return std::max(std::min(1 << pow_2, 1024), 1); -} - -void FurthestPointSamplingForwardCUDAKernelLauncher(int b, int n, int m, - const float* dataset, - float* temp, int* idxs) { - // dataset: (B, N, 3) - // tmp: (B, N) - // output: - // idx: (B, M) - - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - unsigned int n_threads = opt_n_threads(n); - - switch (n_threads) { - case 1024: - furthest_point_sampling_forward_cuda_kernel<1024> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 512: - furthest_point_sampling_forward_cuda_kernel<512> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 256: - furthest_point_sampling_forward_cuda_kernel<256> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 128: - furthest_point_sampling_forward_cuda_kernel<128> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 64: - furthest_point_sampling_forward_cuda_kernel<64> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 32: - furthest_point_sampling_forward_cuda_kernel<32> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 16: - 
furthest_point_sampling_forward_cuda_kernel<16> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 8: - furthest_point_sampling_forward_cuda_kernel<8> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 4: - furthest_point_sampling_forward_cuda_kernel<4> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 2: - furthest_point_sampling_forward_cuda_kernel<2> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 1: - furthest_point_sampling_forward_cuda_kernel<1> - <<>>(b, n, m, dataset, temp, idxs); - break; - default: - furthest_point_sampling_forward_cuda_kernel<512> - <<>>(b, n, m, dataset, temp, idxs); - } - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void FurthestPointSamplingWithDistForwardCUDAKernelLauncher( - int b, int n, int m, const float* dataset, float* temp, int* idxs) { - // dataset: (B, N, N) - // temp: (B, N) - // output: - // idx: (B, M) - - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - unsigned int n_threads = opt_n_threads(n); - - switch (n_threads) { - case 1024: - furthest_point_sampling_with_dist_forward_cuda_kernel<1024> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 512: - furthest_point_sampling_with_dist_forward_cuda_kernel<512> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 256: - furthest_point_sampling_with_dist_forward_cuda_kernel<256> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 128: - furthest_point_sampling_with_dist_forward_cuda_kernel<128> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 64: - furthest_point_sampling_with_dist_forward_cuda_kernel<64> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 32: - furthest_point_sampling_with_dist_forward_cuda_kernel<32> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 16: - furthest_point_sampling_with_dist_forward_cuda_kernel<16> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 8: - furthest_point_sampling_with_dist_forward_cuda_kernel<8> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 4: - furthest_point_sampling_with_dist_forward_cuda_kernel<4> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 2: - furthest_point_sampling_with_dist_forward_cuda_kernel<2> - <<>>(b, n, m, dataset, temp, idxs); - break; - case 1: - furthest_point_sampling_with_dist_forward_cuda_kernel<1> - <<>>(b, n, m, dataset, temp, idxs); - break; - default: - furthest_point_sampling_with_dist_forward_cuda_kernel<512> - <<>>(b, n, m, dataset, temp, idxs); - } - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/fused_bias_leakyrelu_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/fused_bias_leakyrelu_cuda.cu deleted file mode 100644 index 911ea019..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/fused_bias_leakyrelu_cuda.cu +++ /dev/null @@ -1,109 +0,0 @@ -// Modified from -// https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_bias_act_kernel.cu -// Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -// -// This work is made available under the Nvidia Source Code License-NC. 
-// To view a copy of this license, visit -// https://nvlabs.github.io/stylegan2/license.html - -#include -#include -#include -#include -#include -#include - -#include - -template -static __global__ void fused_bias_act_kernel( - scalar_t* out, const scalar_t* p_x, const scalar_t* p_b, - const scalar_t* p_ref, int act, int grad, scalar_t alpha, scalar_t scale, - int loop_x, int size_x, int step_b, int size_b, int use_bias, int use_ref) { - int xi = blockIdx.x * loop_x * blockDim.x + threadIdx.x; - - scalar_t zero = 0.0; - - for (int loop_idx = 0; loop_idx < loop_x && xi < size_x; - loop_idx++, xi += blockDim.x) { - scalar_t x = p_x[xi]; - - if (use_bias) { - x += p_b[(xi / step_b) % size_b]; - } - - scalar_t ref = use_ref ? p_ref[xi] : zero; - - scalar_t y; - - // act = 1: linear layer - // act = 3: leaky relu layer - // grad = 0: direct forward path - // grad = 1: first order deviation - // grad = 2: second order deviation - switch (act * 10 + grad) { - default: - case 10: - y = x; - break; - case 11: - y = x; - break; - case 12: - y = 0.0; - break; - - case 30: - y = (x > 0.0) ? x : x * alpha; - break; - case 31: - y = (ref > 0.0) ? x : x * alpha; - break; - case 32: - y = 0.0; - break; - } - - out[xi] = y * scale; - } -} - -torch::Tensor fused_bias_leakyrelu_op(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, int act, - int grad, float alpha, float scale) { - int curDevice = -1; - cudaGetDevice(&curDevice); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(curDevice); - - auto x = input.contiguous(); - auto b = bias.contiguous(); - auto ref = refer.contiguous(); - - int use_bias = b.numel() ? 1 : 0; - int use_ref = ref.numel() ? 1 : 0; - - int size_x = x.numel(); - int size_b = b.numel(); - int step_b = 1; - - for (int i = 1 + 1; i < x.dim(); i++) { - step_b *= x.size(i); - } - - int loop_x = 4; - int block_size = 4 * 32; - int grid_size = (size_x - 1) / (loop_x * block_size) + 1; - - auto y = torch::empty_like(x); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - x.scalar_type(), "fused_bias_act_kernel", [&] { - fused_bias_act_kernel<<>>( - y.data_ptr(), x.data_ptr(), - b.data_ptr(), ref.data_ptr(), act, grad, alpha, - scale, loop_x, size_x, step_b, size_b, use_bias, use_ref); - }); - - return y; -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/gather_points_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/gather_points_cuda.cu deleted file mode 100644 index fd0a7b5d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/gather_points_cuda.cu +++ /dev/null @@ -1,58 +0,0 @@ -#include -#include - -#include "gather_points_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void GatherPointsForwardCUDAKernelLauncher(int b, int c, int n, int npoints, - const Tensor points, - const Tensor idx, Tensor out) { - // points: (B, C, N) - // idx: (B, npoints) - // output: - // out: (B, C, npoints) - - at::cuda::CUDAGuard device_guard(points.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(npoints, THREADS_PER_BLOCK), c, b); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - points.scalar_type(), "gather_points_forward_cuda_kernel", [&] { - gather_points_forward_cuda_kernel - <<>>( - b, c, n, npoints, points.data_ptr(), - idx.data_ptr(), out.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void 
GatherPointsBackwardCUDAKernelLauncher(int b, int c, int n, int npoints, - const Tensor grad_out, - const Tensor idx, - Tensor grad_points) { - // grad_out: (B, C, npoints) - // idx: (B, npoints) - // output: - // grad_points: (B, C, N) - - at::cuda::CUDAGuard device_guard(grad_out.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(npoints, THREADS_PER_BLOCK), c, b); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_out.scalar_type(), "gather_points_backward_cuda_kernel", [&] { - gather_points_backward_cuda_kernel - <<>>( - b, c, n, npoints, grad_out.data_ptr(), - idx.data_ptr(), grad_points.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/group_points_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/group_points_cuda.cu deleted file mode 100644 index 42fc2bb6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/group_points_cuda.cu +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/group_points_gpu.cu -#include -#include - -#include "group_points_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void GroupPointsForwardCUDAKernelLauncher(int b, int c, int n, int npoints, - int nsample, const Tensor points, - const Tensor idx, Tensor out) { - // points: (B, C, N) - // idx: (B, npoints, nsample) - // output: - // out: (B, C, npoints, nsample) - - at::cuda::CUDAGuard device_guard(points.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(npoints * nsample, THREADS_PER_BLOCK), c, b); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - points.scalar_type(), "group_points_forward_cuda_kernel", [&] { - group_points_forward_cuda_kernel - <<>>( - b, c, n, npoints, nsample, points.data_ptr(), - idx.data_ptr(), out.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void GroupPointsBackwardCUDAKernelLauncher(int b, int c, int n, int npoints, - int nsample, const Tensor grad_out, - const Tensor idx, - Tensor grad_points) { - // grad_out: (B, C, npoints, nsample) - // idx: (B, npoints, nsample) - // output: - // grad_points: (B, C, N) - - at::cuda::CUDAGuard device_guard(grad_out.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(npoints * nsample, THREADS_PER_BLOCK), c, b); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_out.scalar_type(), "group_points_backward_cuda_kernel", [&] { - group_points_backward_cuda_kernel - <<>>( - b, c, n, npoints, nsample, grad_out.data_ptr(), - idx.data_ptr(), grad_points.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/iou3d_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/iou3d_cuda.cu deleted file mode 100644 index ad5878fb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/iou3d_cuda.cu +++ /dev/null @@ -1,67 +0,0 @@ -// Modified from -// https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.cu - -/* -3D IoU 
Calculation and Rotated NMS(modified from 2D NMS written by others) -Written by Shaoshuai Shi -All Rights Reserved 2019-2020. -*/ - -#include - -#include "iou3d_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void IoU3DBoxesOverlapBevForwardCUDAKernelLauncher(const int num_a, - const Tensor boxes_a, - const int num_b, - const Tensor boxes_b, - Tensor ans_overlap) { - at::cuda::CUDAGuard device_guard(boxes_a.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(num_b, THREADS_PER_BLOCK_IOU3D), - GET_BLOCKS(num_a, THREADS_PER_BLOCK_IOU3D)); - dim3 threads(THREADS_PER_BLOCK_IOU3D, THREADS_PER_BLOCK_IOU3D); - - iou3d_boxes_overlap_bev_forward_cuda_kernel<<>>( - num_a, boxes_a.data_ptr(), num_b, boxes_b.data_ptr(), - ans_overlap.data_ptr()); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void IoU3DNMS3DForwardCUDAKernelLauncher(const Tensor boxes, - unsigned long long *mask, - int boxes_num, - float nms_overlap_thresh) { - at::cuda::CUDAGuard device_guard(boxes.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(GET_BLOCKS(boxes_num, THREADS_PER_BLOCK_NMS), - GET_BLOCKS(boxes_num, THREADS_PER_BLOCK_NMS)); - dim3 threads(THREADS_PER_BLOCK_NMS); - - iou3d_nms3d_forward_cuda_kernel<<>>( - boxes_num, nms_overlap_thresh, boxes.data_ptr(), mask); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void IoU3DNMS3DNormalForwardCUDAKernelLauncher(const Tensor boxes, - unsigned long long *mask, - int boxes_num, - float nms_overlap_thresh) { - at::cuda::CUDAGuard device_guard(boxes.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(GET_BLOCKS(boxes_num, THREADS_PER_BLOCK_NMS), - GET_BLOCKS(boxes_num, THREADS_PER_BLOCK_NMS)); - dim3 threads(THREADS_PER_BLOCK_NMS); - - iou3d_nms3d_normal_forward_cuda_kernel<<>>( - boxes_num, nms_overlap_thresh, boxes.data_ptr(), mask); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/knn_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/knn_cuda.cu deleted file mode 100644 index e3351819..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/knn_cuda.cu +++ /dev/null @@ -1,34 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -// Modified from -// https://github.com/CVMI-Lab/PAConv/tree/main/scene_seg/lib/pointops/src/knnquery_heap - -#include -#include - -#include "knn_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void KNNForwardCUDAKernelLauncher(int b, int n, int m, int nsample, - const Tensor xyz, const Tensor new_xyz, - Tensor idx, Tensor dist2) { - // param new_xyz: (B, m, 3) - // param xyz: (B, n, 3) - // param idx: (B, m, nsample) - - at::cuda::CUDAGuard device_guard(new_xyz.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(m, THREADS_PER_BLOCK), b); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - new_xyz.scalar_type(), "knn_forward_cuda_kernel", [&] { - knn_forward_cuda_kernel<<>>( - b, n, m, nsample, xyz.data_ptr(), - new_xyz.data_ptr(), idx.data_ptr(), - dist2.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/masked_conv2d_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/masked_conv2d_cuda.cu deleted file mode 100644 index 022e1890..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/masked_conv2d_cuda.cu +++ /dev/null @@ -1,54 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "masked_conv2d_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void MaskedIm2colForwardCUDAKernelLauncher(const Tensor bottom_data, - const Tensor mask_h_idx, - const Tensor mask_w_idx, - Tensor top_data, const int kernel_h, - const int kernel_w, const int pad_h, - const int pad_w) { - int channels = bottom_data.size(1); - int height = bottom_data.size(2); - int width = bottom_data.size(3); - int mask_cnt = mask_h_idx.size(0); - int output_size = mask_cnt * channels; - - at::cuda::CUDAGuard device_guard(bottom_data.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - bottom_data.scalar_type(), "MaskedIm2colLaucherForward", ([&] { - const scalar_t *bottom_data_ = bottom_data.data_ptr(); - const int64_t *mask_h_idx_ = mask_h_idx.data_ptr(); - const int64_t *mask_w_idx_ = mask_w_idx.data_ptr(); - scalar_t *top_data_ = top_data.data_ptr(); - MaskedIm2colForward - <<>>( - output_size, bottom_data_, height, width, kernel_h, kernel_w, - pad_h, pad_w, mask_h_idx_, mask_w_idx_, mask_cnt, top_data_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void MaskedCol2imForwardCUDAKernelLauncher( - const Tensor bottom_data, const Tensor mask_h_idx, const Tensor mask_w_idx, - Tensor top_data, const int height, const int width, const int channels) { - int mask_cnt = mask_h_idx.size(0); - int output_size = mask_cnt * channels; - - at::cuda::CUDAGuard device_guard(bottom_data.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - bottom_data.scalar_type(), "MaskedCol2imLaucherForward", ([&] { - const scalar_t *bottom_data_ = bottom_data.data_ptr(); - const int64_t *mask_h_idx_ = mask_h_idx.data_ptr(); - const int64_t *mask_w_idx_ = mask_w_idx.data_ptr(); - scalar_t *top_data_ = top_data.data_ptr(); - - MaskedCol2imForward - <<>>( - output_size, bottom_data_, height, width, channels, mask_h_idx_, - mask_w_idx_, mask_cnt, top_data_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/min_area_polygons.cu 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/min_area_polygons.cu deleted file mode 100644 index 9314f2dd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/min_area_polygons.cu +++ /dev/null @@ -1,21 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// modified from -// https://github.com/SDL-GuoZonghao/BeyondBoundingBox/blob/main/mmdet/ops/minareabbox/src/minareabbox_kernel.cu -#include "min_area_polygons_cuda.cuh" -#include "pytorch_cuda_helper.hpp" - -void MinAreaPolygonsCUDAKernelLauncher(const Tensor pointsets, - Tensor polygons) { - int num_pointsets = pointsets.size(0); - const int output_size = polygons.numel(); - at::cuda::CUDAGuard device_guard(pointsets.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - pointsets.scalar_type(), "min_area_polygons_cuda_kernel", ([&] { - min_area_polygons_cuda_kernel - <<>>( - num_pointsets, pointsets.data_ptr(), - polygons.data_ptr()); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/modulated_deform_conv_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/modulated_deform_conv_cuda.cu deleted file mode 100644 index 2b52796e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/modulated_deform_conv_cuda.cu +++ /dev/null @@ -1,96 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "modulated_deform_conv_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void modulated_deformable_im2col_cuda( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col) { - // num_axes should be smaller than block size - const int channel_per_deformable_group = channels / deformable_group; - const int num_kernels = channels * batch_size * height_col * width_col; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_im.scalar_type(), "modulated_deformable_im2col_gpu", ([&] { - const scalar_t *data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *data_col_ = data_col.data_ptr(); - - modulated_deformable_im2col_gpu_kernel<<< - GET_BLOCKS(num_kernels), THREADS_PER_BLOCK, 0, - at::cuda::getCurrentCUDAStream()>>>( - num_kernels, data_im_, data_offset_, data_mask_, height_im, - width_im, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, channel_per_deformable_group, batch_size, - channels, deformable_group, height_col, width_col, data_col_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void modulated_deformable_col2im_cuda( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im) { - const int channel_per_deformable_group = channels / 
deformable_group; - const int num_kernels = - channels * kernel_h * kernel_w * batch_size * height_col * width_col; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "modulated_deformable_col2im_gpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *grad_im_ = grad_im.data_ptr(); - - modulated_deformable_col2im_gpu_kernel<<< - GET_BLOCKS(num_kernels), THREADS_PER_BLOCK, 0, - at::cuda::getCurrentCUDAStream()>>>( - num_kernels, data_col_, data_offset_, data_mask_, channels, - height_im, width_im, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, channel_per_deformable_group, - batch_size, deformable_group, height_col, width_col, grad_im_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void modulated_deformable_col2im_coord_cuda( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask) { - const int num_kernels = batch_size * height_col * width_col * 2 * kernel_h * - kernel_w * deformable_group; - const int channel_per_deformable_group = - channels * kernel_h * kernel_w / deformable_group; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "modulated_deformable_col2im_coord_gpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *grad_offset_ = grad_offset.data_ptr(); - scalar_t *grad_mask_ = grad_mask.data_ptr(); - - modulated_deformable_col2im_coord_gpu_kernel<<< - GET_BLOCKS(num_kernels), THREADS_PER_BLOCK, 0, - at::cuda::getCurrentCUDAStream()>>>( - num_kernels, data_col_, data_im_, data_offset_, data_mask_, - channels, height_im, width_im, kernel_h, kernel_w, pad_h, pad_w, - stride_h, stride_w, dilation_h, dilation_w, - channel_per_deformable_group, batch_size, - 2 * kernel_h * kernel_w * deformable_group, deformable_group, - height_col, width_col, grad_offset_, grad_mask_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/ms_deform_attn_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/ms_deform_attn_cuda.cu deleted file mode 100644 index fd191ee9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/ms_deform_attn_cuda.cu +++ /dev/null @@ -1,351 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from -*https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#include -#include -#include -#include - -#include -#include - -#include "ms_deform_attn_cuda_kernel.cuh" - -template -void ms_deformable_im2col_cuda(cudaStream_t stream, const scalar_t *data_value, - const int64_t *data_spatial_shapes, - const int64_t *data_level_start_index, - const scalar_t *data_sampling_loc, - const scalar_t *data_attn_weight, - const int batch_size, const int spatial_size, - const int num_heads, const int channels, - const int num_levels, const int num_query, - const int num_point, scalar_t *data_col) { - const int num_kernels = batch_size * num_query * num_heads * channels; - const int num_actual_kernels = batch_size * num_query * num_heads * channels; - const int num_threads = THREADS_PER_BLOCK; - ms_deformable_im2col_gpu_kernel - <<>>( - num_kernels, data_value, data_spatial_shapes, data_level_start_index, - data_sampling_loc, data_attn_weight, batch_size, spatial_size, - num_heads, channels, num_levels, num_query, num_point, data_col); - - cudaError_t err = cudaGetLastError(); - if (err != cudaSuccess) { - printf("error in ms_deformable_im2col_cuda: %s\n", cudaGetErrorString(err)); - } -} - -template -void ms_deformable_col2im_cuda( - cudaStream_t stream, const scalar_t *grad_col, const scalar_t *data_value, - const int64_t *data_spatial_shapes, const int64_t *data_level_start_index, - const scalar_t *data_sampling_loc, const scalar_t *data_attn_weight, - const int batch_size, const int spatial_size, const int num_heads, - const int channels, const int num_levels, const int num_query, - const int num_point, scalar_t *grad_value, scalar_t *grad_sampling_loc, - scalar_t *grad_attn_weight) { - const int num_threads = - (channels > THREADS_PER_BLOCK) ? 
THREADS_PER_BLOCK : channels; - const int num_kernels = batch_size * num_query * num_heads * channels; - const int num_actual_kernels = batch_size * num_query * num_heads * channels; - if (channels > THREADS_PER_BLOCK) { - if ((channels & THREADS_PER_BLOCK - 1) == 0) { - ms_deformable_col2im_gpu_kernel_shm_reduce_v2_multi_blocks - <<>>( - num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, data_attn_weight, - batch_size, spatial_size, num_heads, channels, num_levels, - num_query, num_point, grad_value, grad_sampling_loc, - grad_attn_weight); - } else { - ms_deformable_col2im_gpu_kernel_gm - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, - data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - } - } else { - switch (channels) { - case 1: - ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1 - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, - data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - break; - case 2: - ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1 - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, - data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - break; - case 4: - ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1 - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, - data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - break; - case 8: - ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1 - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, - data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - break; - case 16: - ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1 - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, - data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - break; - case 32: - ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1 - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, - data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - break; - case 64: - ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2 - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, - data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - break; - case 128: - ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2 - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, 
- data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - break; - case 256: - ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2 - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, - data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - break; - case 512: - ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2 - <<>>(num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, - data_attn_weight, batch_size, spatial_size, num_heads, - channels, num_levels, num_query, num_point, grad_value, - grad_sampling_loc, grad_attn_weight); - break; - default: - if (channels < 64) { - ms_deformable_col2im_gpu_kernel_shm_reduce_v1 - <<>>( - num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, data_attn_weight, - batch_size, spatial_size, num_heads, channels, num_levels, - num_query, num_point, grad_value, grad_sampling_loc, - grad_attn_weight); - } else { - ms_deformable_col2im_gpu_kernel_shm_reduce_v2 - <<>>( - num_kernels, grad_col, data_value, data_spatial_shapes, - data_level_start_index, data_sampling_loc, data_attn_weight, - batch_size, spatial_size, num_heads, channels, num_levels, - num_query, num_point, grad_value, grad_sampling_loc, - grad_attn_weight); - } - } - } - cudaError_t err = cudaGetLastError(); - if (err != cudaSuccess) { - printf("error in ms_deformable_col2im_cuda: %s\n", cudaGetErrorString(err)); - } -} - -at::Tensor ms_deform_attn_cuda_forward(const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step) { - AT_ASSERTM(value.is_contiguous(), "value tensor has to be contiguous"); - AT_ASSERTM(spatial_shapes.is_contiguous(), - "spatial_shapes tensor has to be contiguous"); - AT_ASSERTM(level_start_index.is_contiguous(), - "level_start_index tensor has to be contiguous"); - AT_ASSERTM(sampling_loc.is_contiguous(), - "sampling_loc tensor has to be contiguous"); - AT_ASSERTM(attn_weight.is_contiguous(), - "attn_weight tensor has to be contiguous"); - - AT_ASSERTM(value.is_cuda(), "value must be a CUDA tensor"); - AT_ASSERTM(spatial_shapes.is_cuda(), "spatial_shapes must be a CUDA tensor"); - AT_ASSERTM(level_start_index.is_cuda(), - "level_start_index must be a CUDA tensor"); - AT_ASSERTM(sampling_loc.is_cuda(), "sampling_loc must be a CUDA tensor"); - AT_ASSERTM(attn_weight.is_cuda(), "attn_weight must be a CUDA tensor"); - - const int batch = value.size(0); - const int spatial_size = value.size(1); - const int num_heads = value.size(2); - const int channels = value.size(3); - - const int num_levels = spatial_shapes.size(0); - - const int num_query = sampling_loc.size(1); - const int num_point = sampling_loc.size(4); - - const int im2col_step_ = std::min(batch, im2col_step); - - AT_ASSERTM(batch % im2col_step_ == 0, "batch(%d) must divide im2col_step(%d)", - batch, im2col_step_); - - auto output = - at::zeros({batch, num_query, num_heads, channels}, value.options()); - - const int batch_n = im2col_step_; - auto output_n = output.view( - {batch / im2col_step_, batch_n, num_query, num_heads, channels}); - auto per_value_size = spatial_size * num_heads * channels; - auto per_sample_loc_size = 
num_query * num_heads * num_levels * num_point * 2; - auto per_attn_weight_size = num_query * num_heads * num_levels * num_point; - for (int n = 0; n < batch / im2col_step_; ++n) { - auto columns = output_n.select(0, n); - AT_DISPATCH_FLOATING_TYPES( - value.scalar_type(), "ms_deform_attn_forward_cuda", ([&] { - ms_deformable_im2col_cuda( - at::cuda::getCurrentCUDAStream(), - value.data_ptr() + n * im2col_step_ * per_value_size, - spatial_shapes.data_ptr(), - level_start_index.data_ptr(), - sampling_loc.data_ptr() + - n * im2col_step_ * per_sample_loc_size, - attn_weight.data_ptr() + - n * im2col_step_ * per_attn_weight_size, - batch_n, spatial_size, num_heads, channels, num_levels, num_query, - num_point, columns.data_ptr()); - })); - } - - output = output.view({batch, num_query, num_heads * channels}); - - return output; -} - -void ms_deform_attn_cuda_backward( - const at::Tensor &value, const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, const at::Tensor &grad_output, - at::Tensor &grad_value, at::Tensor &grad_sampling_loc, - at::Tensor &grad_attn_weight, const int im2col_step) { - AT_ASSERTM(value.is_contiguous(), "value tensor has to be contiguous"); - AT_ASSERTM(spatial_shapes.is_contiguous(), - "spatial_shapes tensor has to be contiguous"); - AT_ASSERTM(level_start_index.is_contiguous(), - "level_start_index tensor has to be contiguous"); - AT_ASSERTM(sampling_loc.is_contiguous(), - "sampling_loc tensor has to be contiguous"); - AT_ASSERTM(attn_weight.is_contiguous(), - "attn_weight tensor has to be contiguous"); - AT_ASSERTM(grad_output.is_contiguous(), - "grad_output tensor has to be contiguous"); - - AT_ASSERTM(value.is_cuda(), "value must be a CUDA tensor"); - AT_ASSERTM(spatial_shapes.is_cuda(), "spatial_shapes must be a CUDA tensor"); - AT_ASSERTM(level_start_index.is_cuda(), - "level_start_index must be a CUDA tensor"); - AT_ASSERTM(sampling_loc.is_cuda(), "sampling_loc must be a CUDA tensor"); - AT_ASSERTM(attn_weight.is_cuda(), "attn_weight must be a CUDA tensor"); - AT_ASSERTM(grad_output.is_cuda(), "grad_output must be a CUDA tensor"); - - const int batch = value.size(0); - const int spatial_size = value.size(1); - const int num_heads = value.size(2); - const int channels = value.size(3); - - const int num_levels = spatial_shapes.size(0); - - const int num_query = sampling_loc.size(1); - const int num_point = sampling_loc.size(4); - - const int im2col_step_ = std::min(batch, im2col_step); - - AT_ASSERTM(batch % im2col_step_ == 0, "batch(%d) must divide im2col_step(%d)", - batch, im2col_step_); - - const int batch_n = im2col_step_; - auto per_value_size = spatial_size * num_heads * channels; - auto per_sample_loc_size = num_query * num_heads * num_levels * num_point * 2; - auto per_attn_weight_size = num_query * num_heads * num_levels * num_point; - auto grad_output_n = grad_output.view( - {batch / im2col_step_, batch_n, num_query, num_heads, channels}); - - for (int n = 0; n < batch / im2col_step_; ++n) { - auto grad_output_g = grad_output_n.select(0, n); - AT_DISPATCH_FLOATING_TYPES( - value.scalar_type(), "ms_deform_attn_backward_cuda", ([&] { - ms_deformable_col2im_cuda( - at::cuda::getCurrentCUDAStream(), - grad_output_g.data_ptr(), - value.data_ptr() + n * im2col_step_ * per_value_size, - spatial_shapes.data_ptr(), - level_start_index.data_ptr(), - sampling_loc.data_ptr() + - n * im2col_step_ * per_sample_loc_size, - attn_weight.data_ptr() + - n * im2col_step_ * 
per_attn_weight_size, - batch_n, spatial_size, num_heads, channels, num_levels, num_query, - num_point, - grad_value.data_ptr<scalar_t>() + - n * im2col_step_ * per_value_size, - grad_sampling_loc.data_ptr<scalar_t>() + - n * im2col_step_ * per_sample_loc_size, - grad_attn_weight.data_ptr<scalar_t>() + - n * im2col_step_ * per_attn_weight_size); - })); - } -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/nms_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/nms_cuda.cu deleted file mode 100644 index e7179c6a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/nms_cuda.cu +++ /dev/null @@ -1,36 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "nms_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -Tensor NMSCUDAKernelLauncher(Tensor boxes, Tensor scores, float iou_threshold, - int offset) { - at::cuda::CUDAGuard device_guard(boxes.device()); - - if (boxes.numel() == 0) { - return at::empty({0}, boxes.options().dtype(at::kLong)); - } - auto order_t = std::get<1>(scores.sort(0, /*descending=*/true)); - auto boxes_sorted = boxes.index_select(0, order_t); - - int boxes_num = boxes.size(0); - const int col_blocks = (boxes_num + threadsPerBlock - 1) / threadsPerBlock; - const int col_blocks_alloc = GET_BLOCKS(boxes_num, threadsPerBlock); - Tensor mask = - at::empty({boxes_num, col_blocks}, boxes.options().dtype(at::kLong)); - dim3 blocks(col_blocks_alloc, col_blocks_alloc); - dim3 threads(threadsPerBlock); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - nms_cuda<<<blocks, threads, 0, stream>>>( - boxes_num, iou_threshold, offset, boxes_sorted.data_ptr<float>(), - (unsigned long long*)mask.data_ptr<int64_t>()); - - // Filter the boxes which should be kept. - at::Tensor keep_t = at::zeros( - {boxes_num}, boxes.options().dtype(at::kBool).device(at::kCUDA)); - gather_keep_from_mask<<<1, std::min(col_blocks, THREADS_PER_BLOCK), - col_blocks * sizeof(unsigned long long), stream>>>( - keep_t.data_ptr<bool>(), (unsigned long long*)mask.data_ptr<int64_t>(), - boxes_num); - AT_CUDA_CHECK(cudaGetLastError()); - return order_t.masked_select(keep_t); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/nms_rotated_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/nms_rotated_cuda.cu deleted file mode 100644 index e1185f81..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/nms_rotated_cuda.cu +++ /dev/null @@ -1,62 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates.
All Rights Reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/nms_rotated/nms_rotated_cuda.cu -#include "nms_rotated_cuda.cuh" -#include "pytorch_cuda_helper.hpp" - -Tensor nms_rotated_cuda(const Tensor dets, const Tensor scores, - const Tensor order_t, const Tensor dets_sorted, - float iou_threshold, const int multi_label) { - // using scalar_t = float; - AT_ASSERTM(dets.is_cuda(), "dets must be a CUDA tensor"); - AT_ASSERTM(scores.is_cuda(), "scores must be a CUDA tensor"); - at::cuda::CUDAGuard device_guard(dets.device()); - - int dets_num = dets.size(0); - - const int col_blocks = at::cuda::ATenCeilDiv(dets_num, threadsPerBlock); - - Tensor mask = - at::empty({dets_num * col_blocks}, dets.options().dtype(at::kLong)); - - dim3 blocks(col_blocks, col_blocks); - dim3 threads(threadsPerBlock); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - dets_sorted.scalar_type(), "nms_rotated_kernel_cuda", [&] { - nms_rotated_cuda_kernel<scalar_t><<<blocks, threads, 0, stream>>>( - dets_num, iou_threshold, dets_sorted.data_ptr<scalar_t>(), - (unsigned long long*)mask.data_ptr<int64_t>(), multi_label); - }); - - Tensor mask_cpu = mask.to(at::kCPU); - unsigned long long* mask_host = - (unsigned long long*)mask_cpu.data_ptr<int64_t>(); - - std::vector<unsigned long long> remv(col_blocks); - memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks); - - Tensor keep = - at::empty({dets_num}, dets.options().dtype(at::kLong).device(at::kCPU)); - int64_t* keep_out = keep.data_ptr<int64_t>(); - - int num_to_keep = 0; - for (int i = 0; i < dets_num; i++) { - int nblock = i / threadsPerBlock; - int inblock = i % threadsPerBlock; - - if (!(remv[nblock] & (1ULL << inblock))) { - keep_out[num_to_keep++] = i; - unsigned long long* p = mask_host + i * col_blocks; - for (int j = nblock; j < col_blocks; j++) { - remv[j] |= p[j]; - } - } - } - - AT_CUDA_CHECK(cudaGetLastError()); - return order_t.index( - {keep.narrow(/*dim=*/0, /*start=*/0, /*length=*/num_to_keep) - .to(order_t.device(), keep.scalar_type())}); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/points_in_boxes_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/points_in_boxes_cuda.cu deleted file mode 100644 index 3cc89d01..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/points_in_boxes_cuda.cu +++ /dev/null @@ -1,62 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/PCDet/blob/master/pcdet/ops/roiaware_pool3d/src/roiaware_pool3d_kernel.cu -// Written by Shaoshuai Shi -// All Rights Reserved 2019.
- -#include - -#include "points_in_boxes_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void PointsInBoxesPartForwardCUDAKernelLauncher(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points) { - // params boxes: (B, N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR - // coordinate, z is - // the bottom center, each box DO NOT overlaps params pts: (B, npoints, 3) [x, - // y, z] in LiDAR coordinate params boxes_idx_of_points: (B, npoints), default - // -1 - - at::cuda::CUDAGuard device_guard(boxes.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(GET_BLOCKS(pts_num, THREADS_PER_BLOCK), batch_size); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - boxes.scalar_type(), "points_in_boxes_part_forward_cuda_kernel", [&] { - points_in_boxes_part_forward_cuda_kernel - <<>>( - batch_size, boxes_num, pts_num, boxes.data_ptr(), - pts.data_ptr(), box_idx_of_points.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void PointsInBoxesAllForwardCUDAKernelLauncher(int batch_size, int boxes_num, - int pts_num, const Tensor boxes, - const Tensor pts, - Tensor box_idx_of_points) { - // params boxes: (B, N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR - // coordinate, z is the bottom center, each box params pts: (B, npoints, 3) - // [x, y, z] in LiDAR coordinate params boxes_idx_of_points: (B, npoints), - // default -1 - - at::cuda::CUDAGuard device_guard(boxes.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(GET_BLOCKS(pts_num, THREADS_PER_BLOCK), batch_size); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - boxes.scalar_type(), "points_in_boxes_all_forward_cuda_kernel", [&] { - points_in_boxes_all_forward_cuda_kernel - <<>>( - batch_size, boxes_num, pts_num, boxes.data_ptr(), - pts.data_ptr(), box_idx_of_points.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/points_in_polygons_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/points_in_polygons_cuda.cu deleted file mode 100644 index 6e7db9dd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/points_in_polygons_cuda.cu +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -// Modified from -// https://github.com/ming71/CUDA/blob/master/point_justify/points_justify_kernel.cu - -#include - -#include "points_in_polygons_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void PointsInPolygonsForwardCUDAKernelLauncher(const at::Tensor points, - const at::Tensor polygons, - const int rows, const int cols, - at::Tensor output) { - const int output_size = rows * cols; - at::cuda::CUDAGuard device_guard(points.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - points.scalar_type(), "points_in_polygons_forward_cuda_kernel", ([&] { - const scalar_t *vertex1 = points.data_ptr(); - const scalar_t *vertex2 = polygons.data_ptr(); - scalar_t *inside_flag = output.data_ptr(); - - points_in_polygons_forward_cuda_kernel - <<>>( - output_size, vertex1, vertex2, rows, cols, inside_flag); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/psamask_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/psamask_cuda.cu deleted file mode 100644 index a0bdfa60..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/psamask_cuda.cu +++ /dev/null @@ -1,60 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// Modified from -// https://github.com/hszhao/semseg/blob/master/lib/psa/src - -#include - -#include "psamask_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void PSAMaskForwardCUDAKernelLauncher(const int psa_type, const Tensor input, - Tensor output, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, - const int half_w_mask) { - int nthreads = num_ * h_feature * w_feature; - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - if (psa_type == 0) - AT_DISPATCH_FLOATING_TYPES( - input.scalar_type(), "psamask_collect_forward_cuda", [&] { - psamask_collect_forward_cuda<<>>( - nthreads, h_feature, w_feature, h_mask, w_mask, half_h_mask, - half_w_mask, input.data_ptr(), - output.data_ptr()); - }); - else - AT_DISPATCH_FLOATING_TYPES( - input.scalar_type(), "psamask_distribute_forward_cuda", [&] { - psamask_distribute_forward_cuda - <<>>( - nthreads, h_feature, w_feature, h_mask, w_mask, half_h_mask, - half_w_mask, input.data_ptr(), - output.data_ptr()); - }); -} - -void PSAMaskBackwardCUDAKernelLauncher( - const int psa_type, const Tensor grad_output, Tensor grad_input, - const int num_, const int h_feature, const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, const int half_w_mask) { - int nthreads = num_ * h_feature * w_feature; - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - if (psa_type == 0) - AT_DISPATCH_FLOATING_TYPES( - grad_input.scalar_type(), "psamask_collect_backward_cuda", [&] { - psamask_collect_backward_cuda<<>>( - nthreads, h_feature, w_feature, h_mask, w_mask, half_h_mask, - half_w_mask, grad_output.data_ptr(), - grad_input.data_ptr()); - }); - else - AT_DISPATCH_FLOATING_TYPES( - grad_input.scalar_type(), "psamask_distribute_backward_cuda", [&] { - psamask_distribute_backward_cuda - <<>>( - nthreads, h_feature, w_feature, h_mask, w_mask, half_h_mask, - half_w_mask, grad_output.data_ptr(), - grad_input.data_ptr()); - }); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/riroi_align_rotated_cuda.cu 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/riroi_align_rotated_cuda.cu deleted file mode 100644 index 9829da73..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/riroi_align_rotated_cuda.cu +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cuda_helper.hpp" -#include "riroi_align_rotated_cuda_kernel.cuh" - -void RiROIAlignRotatedForwardCUDAKernelLauncher( - const at::Tensor features, const at::Tensor rois, const float spatial_scale, - const int num_samples, const bool clockwise, const int channels, - const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, const int num_orientations, - at::Tensor output) { - const int output_size = - num_rois * pooled_height * pooled_width * channels * num_orientations; - at::cuda::CUDAGuard device_guard(features.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - features.scalar_type(), "riroi_align_rotated_forward_cuda_kernel", ([&] { - const scalar_t *bottom_data = features.data_ptr(); - const scalar_t *rois_data = rois.data_ptr(); - scalar_t *top_data = output.data_ptr(); - - riroi_align_rotated_forward_cuda_kernel - <<>>( - output_size, bottom_data, rois_data, scalar_t(spatial_scale), - num_samples, clockwise, channels, height, width, pooled_height, - pooled_width, num_orientations, top_data); - })); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void RiROIAlignRotatedBackwardCUDAKernelLauncher( - const at::Tensor top_grad, const at::Tensor rois, const float spatial_scale, - const int num_samples, const bool clockwise, const int channels, - const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, const int num_orientations, - at::Tensor bottom_grad) { - const int output_size = - num_rois * pooled_height * pooled_width * channels * num_orientations; - at::cuda::CUDAGuard device_guard(top_grad.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - top_grad.scalar_type(), "riroi_align_rotated_backward_cuda_kernel", ([&] { - const scalar_t *top_diff = top_grad.data_ptr(); - const scalar_t *rois_data = rois.data_ptr(); - scalar_t *bottom_diff = bottom_grad.data_ptr(); - riroi_align_rotated_backward_cuda_kernel - <<>>( - output_size, top_diff, rois_data, spatial_scale, num_samples, - clockwise, channels, height, width, pooled_height, pooled_width, - num_orientations, bottom_diff); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_align_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_align_cuda.cu deleted file mode 100644 index 3d4f7614..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_align_cuda.cu +++ /dev/null @@ -1,58 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cuda_helper.hpp" -#include "roi_align_cuda_kernel.cuh" - -void ROIAlignForwardCUDAKernelLauncher(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - int output_size = output.numel(); - int channels = input.size(1); - int height = input.size(2); - int width = input.size(3); - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "roi_align_forward_cuda_kernel", [&] { - roi_align_forward_cuda_kernel - <<>>( - output_size, input.data_ptr(), - rois.data_ptr(), output.data_ptr(), - argmax_y.data_ptr(), argmax_x.data_ptr(), - aligned_height, aligned_width, - static_cast(spatial_scale), sampling_ratio, pool_mode, - aligned, channels, height, width); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void ROIAlignBackwardCUDAKernelLauncher(Tensor grad_output, Tensor rois, - Tensor argmax_y, Tensor argmax_x, - Tensor grad_input, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, - bool aligned) { - int output_size = grad_output.numel(); - int channels = grad_input.size(1); - int height = grad_input.size(2); - int width = grad_input.size(3); - - at::cuda::CUDAGuard device_guard(grad_output.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_output.scalar_type(), "roi_align_backward_cuda_kernel", [&] { - roi_align_backward_cuda_kernel - <<>>( - output_size, grad_output.data_ptr(), - rois.data_ptr(), argmax_y.data_ptr(), - argmax_x.data_ptr(), grad_input.data_ptr(), - aligned_height, aligned_width, - static_cast(spatial_scale), sampling_ratio, pool_mode, - aligned, channels, height, width); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_align_rotated_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_align_rotated_cuda.cu deleted file mode 100644 index c0fd987b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_align_rotated_cuda.cu +++ /dev/null @@ -1,45 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cuda_helper.hpp" -#include "roi_align_rotated_cuda_kernel.cuh" - -void ROIAlignRotatedForwardCUDAKernelLauncher( - const at::Tensor input, const at::Tensor rois, const float spatial_scale, - const int sampling_ratio, const bool aligned, const bool clockwise, - const int channels, const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, at::Tensor output) { - const int output_size = num_rois * pooled_height * pooled_width * channels; - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "ROIAlignRotatedLaucherForward", ([&] { - const scalar_t *bottom_data = input.data_ptr(); - const scalar_t *rois_data = rois.data_ptr(); - scalar_t *top_data = output.data_ptr(); - - roi_align_rotated_forward_cuda_kernel - <<>>( - output_size, bottom_data, rois_data, scalar_t(spatial_scale), - sampling_ratio, aligned, clockwise, channels, height, width, - pooled_height, pooled_width, top_data); - })); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void ROIAlignRotatedBackwardCUDAKernelLauncher( - const at::Tensor top_grad, const at::Tensor rois, const float spatial_scale, - const int sampling_ratio, const bool aligned, const bool clockwise, - const int channels, const int height, const int width, const int num_rois, - const int pooled_height, const int pooled_width, at::Tensor bottom_grad) { - const int output_size = num_rois * pooled_height * pooled_width * channels; - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - top_grad.scalar_type(), "ROIAlignLaucherBackward", ([&] { - const scalar_t *top_diff = top_grad.data_ptr(); - const scalar_t *rois_data = rois.data_ptr(); - scalar_t *bottom_diff = bottom_grad.data_ptr(); - roi_align_rotated_backward_cuda_kernel - <<>>( - output_size, top_diff, rois_data, spatial_scale, sampling_ratio, - aligned, clockwise, channels, height, width, pooled_height, - pooled_width, bottom_diff); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_pool_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_pool_cuda.cu deleted file mode 100644 index d9cdf305..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roi_pool_cuda.cu +++ /dev/null @@ -1,50 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cuda_helper.hpp" -#include "roi_pool_cuda_kernel.cuh" - -void ROIPoolForwardCUDAKernelLauncher(Tensor input, Tensor rois, Tensor output, - Tensor argmax, int pooled_height, - int pooled_width, float spatial_scale) { - int output_size = output.numel(); - int channels = input.size(1); - int height = input.size(2); - int width = input.size(3); - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "roi_pool_forward_cuda_kernel", [&] { - roi_pool_forward_cuda_kernel - <<>>( - output_size, input.data_ptr(), - rois.data_ptr(), output.data_ptr(), - argmax.data_ptr(), pooled_height, pooled_width, - static_cast(spatial_scale), channels, height, width); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void ROIPoolBackwardCUDAKernelLauncher(Tensor grad_output, Tensor rois, - Tensor argmax, Tensor grad_input, - int pooled_height, int pooled_width, - float spatial_scale) { - int output_size = grad_output.numel(); - int channels = grad_input.size(1); - int height = grad_input.size(2); - int width = grad_input.size(3); - - at::cuda::CUDAGuard device_guard(grad_output.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_output.scalar_type(), "roi_pool_backward_cuda_kernel", [&] { - roi_pool_backward_cuda_kernel - <<>>( - output_size, grad_output.data_ptr(), - rois.data_ptr(), argmax.data_ptr(), - grad_input.data_ptr(), pooled_height, pooled_width, - channels, height, width); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roiaware_pool3d_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roiaware_pool3d_cuda.cu deleted file mode 100644 index 7d83755f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roiaware_pool3d_cuda.cu +++ /dev/null @@ -1,118 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/PCDet/blob/master/pcdet/ops/roiaware_pool3d/src/roiaware_pool3d_kernel.cu -// Written by Shaoshuai Shi -// All Rights Reserved 2019. 
- -#include - -#include "pytorch_cuda_helper.hpp" -#include "roiaware_pool3d_cuda_kernel.cuh" - -void RoiawarePool3dForwardCUDAKernelLauncher( - int boxes_num, int pts_num, int channels, int max_pts_each_voxel, int out_x, - int out_y, int out_z, const Tensor rois, const Tensor pts, - const Tensor pts_feature, Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method) { - // params rois: (N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR - // coordinate params pts: (npoints, 3) [x, y, z] in LiDAR coordinate params - // pts_feature: (npoints, C) params argmax: (N, out_x, out_y, out_z, C) params - // pts_idx_of_voxels: (N, out_x, out_y, out_z, max_pts_each_voxel) params - // pooled_features: (N, out_x, out_y, out_z, C) params pool_method: 0: - // max_pool 1: avg_pool - - at::cuda::CUDAGuard device_guard(pts_feature.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - Tensor pts_mask = - -at::ones({boxes_num, pts_num}, pts_feature.options().dtype(at::kInt)); - - dim3 blocks_mask(GET_BLOCKS(pts_num, THREADS_PER_BLOCK), boxes_num); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - rois.scalar_type(), "generate_pts_mask_for_box3d", [&] { - generate_pts_mask_for_box3d - <<>>( - boxes_num, pts_num, out_x, out_y, out_z, - rois.data_ptr(), pts.data_ptr(), - pts_mask.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); - - // TODO: Merge the collect and pool functions, SS - - dim3 blocks_collect(GET_BLOCKS(boxes_num, THREADS_PER_BLOCK)); - - AT_DISPATCH_INTEGRAL_TYPES( - pts_idx_of_voxels.scalar_type(), "collect_inside_pts_for_box3d", [&] { - collect_inside_pts_for_box3d - <<>>( - boxes_num, pts_num, max_pts_each_voxel, out_x, out_y, out_z, - pts_mask.data_ptr(), - pts_idx_of_voxels.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); - - dim3 blocks_pool(GET_BLOCKS(out_x * out_y * out_z, THREADS_PER_BLOCK), - channels, boxes_num); - if (pool_method == 0) { - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - pts_feature.scalar_type(), "roiaware_maxpool3d", [&] { - roiaware_maxpool3d<<>>( - boxes_num, pts_num, channels, max_pts_each_voxel, out_x, out_y, - out_z, pts_feature.data_ptr(), - pts_idx_of_voxels.data_ptr(), - pooled_features.data_ptr(), argmax.data_ptr()); - }); - } else if (pool_method == 1) { - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - pts_feature.scalar_type(), "roiaware_avgpool3d", [&] { - roiaware_avgpool3d<<>>( - boxes_num, pts_num, channels, max_pts_each_voxel, out_x, out_y, - out_z, pts_feature.data_ptr(), - pts_idx_of_voxels.data_ptr(), - pooled_features.data_ptr()); - }); - } - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void RoiawarePool3dBackwardCUDAKernelLauncher( - int boxes_num, int out_x, int out_y, int out_z, int channels, - int max_pts_each_voxel, const Tensor pts_idx_of_voxels, const Tensor argmax, - const Tensor grad_out, Tensor grad_in, int pool_method) { - // params pts_idx_of_voxels: (N, out_x, out_y, out_z, max_pts_each_voxel) - // params argmax: (N, out_x, out_y, out_z, C) - // params grad_out: (N, out_x, out_y, out_z, C) - // params grad_in: (npoints, C), return value - // params pool_method: 0: max_pool, 1: avg_pool - - at::cuda::CUDAGuard device_guard(grad_out.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(GET_BLOCKS(out_x * out_y * out_z, THREADS_PER_BLOCK), channels, - boxes_num); - dim3 threads(THREADS_PER_BLOCK); - - if (pool_method == 0) { - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_in.scalar_type(), "roiaware_maxpool3d_backward", [&] { - 
roiaware_maxpool3d_backward<<>>( - boxes_num, channels, out_x, out_y, out_z, argmax.data_ptr(), - grad_out.data_ptr(), grad_in.data_ptr()); - }); - } else if (pool_method == 1) { - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_in.scalar_type(), "roiaware_avgpool3d_backward", [&] { - roiaware_avgpool3d_backward<<>>( - boxes_num, channels, out_x, out_y, out_z, max_pts_each_voxel, - pts_idx_of_voxels.data_ptr(), grad_out.data_ptr(), - grad_in.data_ptr()); - }); - } - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roipoint_pool3d_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roipoint_pool3d_cuda.cu deleted file mode 100644 index af2098e8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/roipoint_pool3d_cuda.cu +++ /dev/null @@ -1,60 +0,0 @@ -/* -Modified from -https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/roipoint_pool3d/src/roipoint_pool3d_kernel.cu -Point cloud feature pooling -Written by Shaoshuai Shi -All Rights Reserved 2018. -*/ - -#include -#include - -#include "pytorch_cuda_helper.hpp" -#include "roipoint_pool3d_cuda_kernel.cuh" - -void RoIPointPool3dForwardCUDAKernelLauncher( - int batch_size, int pts_num, int boxes_num, int feature_in_len, - int sampled_pts_num, const Tensor xyz, const Tensor boxes3d, - const Tensor pts_feature, Tensor pooled_features, - Tensor pooled_empty_flag) { - Tensor pts_assign = at::empty({batch_size, pts_num, boxes_num}, - boxes3d.options().dtype(at::kInt)); - - at::cuda::CUDAGuard device_guard(xyz.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(pts_num, THREADS_PER_BLOCK), boxes_num, batch_size); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - xyz.scalar_type(), "assign_pts_to_box3d", [&] { - assign_pts_to_box3d<<>>( - batch_size, pts_num, boxes_num, xyz.data_ptr(), - boxes3d.data_ptr(), pts_assign.data_ptr()); - }); - - Tensor pts_idx = at::empty({batch_size, boxes_num, sampled_pts_num}, - boxes3d.options().dtype(at::kInt)); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks2(GET_BLOCKS(boxes_num, THREADS_PER_BLOCK), batch_size); - - get_pooled_idx<<>>( - batch_size, pts_num, boxes_num, sampled_pts_num, - pts_assign.data_ptr(), pts_idx.data_ptr(), - pooled_empty_flag.data_ptr()); - - dim3 blocks_pool(GET_BLOCKS(sampled_pts_num, THREADS_PER_BLOCK), boxes_num, - batch_size); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - xyz.scalar_type(), "roipoint_pool3d_forward", [&] { - roipoint_pool3d_forward<<>>( - batch_size, pts_num, boxes_num, feature_in_len, sampled_pts_num, - xyz.data_ptr(), pts_idx.data_ptr(), - pts_feature.data_ptr(), - pooled_features.data_ptr(), - pooled_empty_flag.data_ptr()); - }); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/rotated_feature_align_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/rotated_feature_align_cuda.cu deleted file mode 100644 index d172338a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/rotated_feature_align_cuda.cu +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-// Modified from -// https://github.com/SJTU-Thinklab-Det/r3det-on-mmdetection/blob/master/mmdet/ops/fr/src/feature_refine_kernel.cu -#include "pytorch_cuda_helper.hpp" -#include "rotated_feature_align_cuda_kernel.cuh" - -void RotatedFeatureAlignForwardCUDAKernelLauncher(const Tensor features, - const Tensor best_bboxes, - const float spatial_scale, - const int points, - Tensor output) { - at::cuda::CUDAGuard device_guard(features.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - const int output_size = features.numel(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - features.scalar_type(), "rotated_feature_align_forward_cuda_kernel", - ([&] { - const scalar_t* bottom_data = features.data_ptr(); - const scalar_t* bboxes_data = best_bboxes.data_ptr(); - scalar_t* top_data = output.data_ptr(); - - rotated_feature_align_forward_kernel - <<>>( - output_size, points, bottom_data, bboxes_data, - scalar_t(spatial_scale), features.size(1), features.size(2), - features.size(3), top_data); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void RotatedFeatureAlignBackwardCUDAKernelLauncher(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, - Tensor bottom_grad) { - at::cuda::CUDAGuard device_guard(top_grad.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - const int output_size = top_grad.numel(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - top_grad.scalar_type(), "rotated_feature_align_backward_cuda_kernel", - ([&] { - const scalar_t* top_diff = top_grad.data_ptr(); - const scalar_t* bboxes_data = best_bboxes.data_ptr(); - scalar_t* bottom_diff = bottom_grad.data_ptr(); - - rotated_feature_align_backward_kernel - <<>>( - output_size, points, top_diff, bboxes_data, - scalar_t(spatial_scale), top_grad.size(1), top_grad.size(2), - top_grad.size(3), bottom_diff); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/scatter_points_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/scatter_points_cuda.cu deleted file mode 100644 index cbc44651..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/scatter_points_cuda.cu +++ /dev/null @@ -1,132 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-#include -#include -#include - -#include "pytorch_cuda_helper.hpp" -#include "scatter_points_cuda_kernel.cuh" - -std::vector DynamicPointToVoxelForwardCUDAKernelLauncher( - const at::Tensor &feats, const at::Tensor &coors, - const reduce_t reduce_type) { - const int num_input = feats.size(0); - const int num_feats = feats.size(1); - - if (num_input == 0) - return {feats.clone().detach(), coors.clone().detach(), - coors.new_empty({0}, torch::kInt32), - coors.new_empty({0}, torch::kInt32)}; - - at::Tensor out_coors; - at::Tensor coors_map; - at::Tensor reduce_count; - - auto coors_clean = coors.masked_fill(coors.lt(0).any(-1, true), -1); - - std::tie(out_coors, coors_map, reduce_count) = - at::unique_dim(coors_clean, 0, true, true, true); - - if (out_coors[0][0].lt(0).item()) { - // the first element of out_coors (-1,-1,-1) and should be removed - out_coors = out_coors.slice(0, 1); - reduce_count = reduce_count.slice(0, 1); - coors_map = coors_map - 1; - } - - coors_map = coors_map.to(torch::kInt32); - reduce_count = reduce_count.to(torch::kInt32); - - auto reduced_feats = - at::empty({out_coors.size(0), num_feats}, feats.options()); - - at::cuda::CUDAGuard device_guard(feats.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - AT_DISPATCH_FLOATING_TYPES( - feats.scalar_type(), "feats_reduce_kernel", ([&] { - if (reduce_type == reduce_t::MAX) - reduced_feats.fill_(-std::numeric_limits::infinity()); - else - reduced_feats.fill_(static_cast(0)); - - dim3 blocks(std::min( - at::cuda::ATenCeilDiv(num_input, THREADS_PER_BLOCK), maxGridDim)); - dim3 threads(THREADS_PER_BLOCK); - feats_reduce_kernel<<>>( - feats.data_ptr(), coors_map.data_ptr(), - reduced_feats.data_ptr(), num_input, num_feats, - reduce_type); - if (reduce_type == reduce_t::MEAN) - reduced_feats /= reduce_count.unsqueeze(-1).to(reduced_feats.dtype()); - })); - - AT_CUDA_CHECK(cudaGetLastError()); - - return {reduced_feats, out_coors, coors_map, reduce_count}; -} - -void DynamicPointToVoxelBackwardCUDAKernelLauncher( - at::Tensor &grad_feats, const at::Tensor &grad_reduced_feats, - const at::Tensor &feats, const at::Tensor &reduced_feats, - const at::Tensor &coors_map, const at::Tensor &reduce_count, - const reduce_t reduce_type) { - const int num_input = feats.size(0); - const int num_reduced = reduced_feats.size(0); - const int num_feats = feats.size(1); - - grad_feats.fill_(0); - // copy voxel grad to points - - if (num_input == 0 || num_reduced == 0) return; - at::cuda::CUDAGuard device_guard(feats.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - if (reduce_type == reduce_t::MEAN || reduce_type == reduce_t::SUM) { - AT_DISPATCH_FLOATING_TYPES( - grad_reduced_feats.scalar_type(), "add_reduce_traceback_grad_kernel", - ([&] { - dim3 blocks(std::min( - at::cuda::ATenCeilDiv(num_input, THREADS_PER_BLOCK), maxGridDim)); - dim3 threads(THREADS_PER_BLOCK); - add_reduce_traceback_grad_kernel<<>>( - grad_feats.data_ptr(), - grad_reduced_feats.data_ptr(), - coors_map.data_ptr(), reduce_count.data_ptr(), - num_input, num_feats, reduce_type); - })); - - AT_CUDA_CHECK(cudaGetLastError()); - } else { - auto reduce_from = at::full({num_reduced, num_feats}, num_input, - coors_map.options().dtype(torch::kInt32)); - AT_DISPATCH_FLOATING_TYPES( - grad_reduced_feats.scalar_type(), - "max_reduce_traceback_scatter_idx_kernel", ([&] { - dim3 blocks(std::min( - at::cuda::ATenCeilDiv(num_input, THREADS_PER_BLOCK), maxGridDim)); - dim3 threads(THREADS_PER_BLOCK); - max_reduce_traceback_scatter_idx_kernel<<>>( - 
feats.data_ptr(), reduced_feats.data_ptr(), - reduce_from.data_ptr(), coors_map.data_ptr(), - num_input, num_feats); - })); - - AT_CUDA_CHECK(cudaGetLastError()); - - AT_DISPATCH_FLOATING_TYPES( - grad_reduced_feats.scalar_type(), - "max_reduce_traceback_scatter_idx_kernel", ([&] { - dim3 blocks( - std::min(at::cuda::ATenCeilDiv(num_reduced, THREADS_PER_BLOCK), - maxGridDim)); - dim3 threads(THREADS_PER_BLOCK); - max_reduce_scatter_grad_kernel<<>>( - grad_feats.data_ptr(), - grad_reduced_feats.data_ptr(), - reduce_from.data_ptr(), num_reduced, num_feats); - })); - - AT_CUDA_CHECK(cudaGetLastError()); - } -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/sync_bn_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/sync_bn_cuda.cu deleted file mode 100644 index 657c8170..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/sync_bn_cuda.cu +++ /dev/null @@ -1,110 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cuda_helper.hpp" -#include "sync_bn_cuda_kernel.cuh" - -void SyncBNForwardMeanCUDAKernelLauncher(const Tensor input, Tensor mean) { - int num = input.size(0); - int channels = input.size(1); - int spatial = input.size(2); - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "sync_bn_forward_mean_cuda_kernel", [&] { - sync_bn_forward_mean_cuda_kernel - <<>>( - input.data_ptr(), mean.data_ptr(), num, - channels, spatial); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SyncBNForwardVarCUDAKernelLauncher(const Tensor input, const Tensor mean, - Tensor var) { - int num = input.size(0); - int channels = input.size(1); - int spatial = input.size(2); - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "sync_bn_forward_mean_cuda_kernel", [&] { - sync_bn_forward_var_cuda_kernel - <<>>( - input.data_ptr(), mean.data_ptr(), - var.data_ptr(), num, channels, spatial); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SyncBNForwardOutputCUDAKernelLauncher( - const Tensor input, const Tensor mean, const Tensor var, - Tensor running_mean, Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, Tensor output, float eps, - float momentum, int group_size) { - int num = input.size(0); - int channels = input.size(1); - int spatial = input.size(2); - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "sync_bn_forward_mean_cuda_kernel", [&] { - sync_bn_forward_output_cuda_kernel - <<>>( - input.data_ptr(), mean.data_ptr(), - var.data_ptr(), running_mean.data_ptr(), - running_var.data_ptr(), weight.data_ptr(), - bias.data_ptr(), norm.data_ptr(), - std.data_ptr(), output.data_ptr(), num, - channels, spatial, eps, momentum, group_size); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SyncBNBackwardParamCUDAKernelLauncher(const Tensor grad_output, - const Tensor norm, - Tensor grad_weight, - Tensor grad_bias) { - int num = grad_output.size(0); - int channels = grad_output.size(1); - int spatial = grad_output.size(2); - - at::cuda::CUDAGuard device_guard(grad_output.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - 
grad_output.scalar_type(), "sync_bn_backward_param_cuda_kernel", [&] { - sync_bn_backward_param_cuda_kernel - <<>>( - grad_output.data_ptr(), norm.data_ptr(), - grad_weight.data_ptr(), grad_bias.data_ptr(), num, - channels, spatial); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SyncBNBackwardDataCUDAKernelLauncher(const Tensor grad_output, - const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, - const Tensor norm, const Tensor std, - Tensor grad_input) { - int output_size = grad_input.numel(); - int num = grad_input.size(0); - int channels = grad_input.size(1); - int spatial = grad_input.size(2); - - at::cuda::CUDAGuard device_guard(grad_input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_output.scalar_type(), "sync_bn_backward_data_cuda_kernel", [&] { - sync_bn_backward_data_cuda_kernel - <<>>( - output_size, grad_output.data_ptr(), - weight.data_ptr(), grad_weight.data_ptr(), - grad_bias.data_ptr(), norm.data_ptr(), - std.data_ptr(), grad_input.data_ptr(), num, - channels, spatial); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/three_interpolate_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/three_interpolate_cuda.cu deleted file mode 100644 index 56a55500..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/three_interpolate_cuda.cu +++ /dev/null @@ -1,66 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/interpolate_gpu.cu - -#include -#include -#include - -#include "pytorch_cuda_helper.hpp" -#include "three_interpolate_cuda_kernel.cuh" - -void ThreeInterpolateForwardCUDAKernelLauncher(int b, int c, int m, int n, - const Tensor points, - const Tensor idx, - const Tensor weight, - Tensor out) { - // points: (B, C, M) - // idx: (B, N, 3) - // weight: (B, N, 3) - // output: - // out: (B, C, N) - - at::cuda::CUDAGuard device_guard(points.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(n, THREADS_PER_BLOCK), c, b); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - points.scalar_type(), "three_interpolate_forward_cuda_kernel", [&] { - three_interpolate_forward_cuda_kernel - <<>>( - b, c, m, n, points.data_ptr(), idx.data_ptr(), - weight.data_ptr(), out.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void ThreeInterpolateBackwardCUDAKernelLauncher(int b, int c, int n, int m, - const Tensor grad_out, - const Tensor idx, - const Tensor weight, - Tensor grad_points) { - // grad_out: (B, C, N) - // weight: (B, N, 3) - // output: - // grad_points: (B, C, M) - - at::cuda::CUDAGuard device_guard(grad_out.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(n, THREADS_PER_BLOCK), c, b); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_out.scalar_type(), "three_interpolate_backward_cuda_kernel", [&] { - three_interpolate_backward_cuda_kernel - <<>>( - b, c, n, m, grad_out.data_ptr(), idx.data_ptr(), - weight.data_ptr(), grad_points.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/three_nn_cuda.cu 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/three_nn_cuda.cu deleted file mode 100644 index 91c68829..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/three_nn_cuda.cu +++ /dev/null @@ -1,35 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/interpolate_gpu.cu - -#include -#include -#include - -#include "pytorch_cuda_helper.hpp" -#include "three_nn_cuda_kernel.cuh" - -void ThreeNNForwardCUDAKernelLauncher(int b, int n, int m, const Tensor unknown, - const Tensor known, Tensor dist2, - Tensor idx) { - // unknown: (B, N, 3) - // known: (B, M, 3) - // output: - // dist2: (B, N, 3) - // idx: (B, N, 3) - - at::cuda::CUDAGuard device_guard(unknown.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - // blockIdx.x(col), blockIdx.y(row) - dim3 blocks(GET_BLOCKS(n, THREADS_PER_BLOCK), b); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - unknown.scalar_type(), "three_nn_forward_cuda_kernel", [&] { - three_nn_forward_cuda_kernel<<>>( - b, n, m, unknown.data_ptr(), known.data_ptr(), - dist2.data_ptr(), idx.data_ptr()); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/tin_shift_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/tin_shift_cuda.cu deleted file mode 100644 index 19c85c76..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/tin_shift_cuda.cu +++ /dev/null @@ -1,55 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cuda_helper.hpp" -#include "pytorch_device_registry.hpp" -#include "tin_shift_cuda_kernel.cuh" - -void TINShiftForwardCUDAKernelLauncher(Tensor input, Tensor shift, - Tensor output) { - int output_size = output.numel(); - int batch_size = input.size(0); - int t_size = input.size(1); - int channels = input.size(2); - int hw_size = input.size(3); - int group_size = shift.size(1); - int group_channel = channels / group_size; - int num_kernels = batch_size * hw_size * channels; - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "tin_shift_forward_cuda_kernel", [&] { - tin_shift_forward_cuda_kernel - <<>>( - output_size, input.data_ptr(), shift.data_ptr(), - output.data_ptr(), batch_size, channels, t_size, - hw_size, group_size, group_channel); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void TINShiftBackwardCUDAKernelLauncher(Tensor grad_output, Tensor shift, - Tensor grad_input) { - int output_size = grad_output.numel(); - int batch_size = grad_output.size(0); - int t_size = grad_output.size(1); - int channels = grad_output.size(2); - int hw_size = grad_output.size(3); - int group_size = shift.size(1); - int group_channel = channels / group_size; - int num_kernels = batch_size * hw_size * channels; - - at::cuda::CUDAGuard device_guard(grad_output.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_output.scalar_type(), "tin_shift_backward_cuda_kernel", [&] { - tin_shift_backward_cuda_kernel - <<>>( - output_size, grad_output.data_ptr(), - shift.data_ptr(), grad_input.data_ptr(), - batch_size, channels, t_size, hw_size, group_size, - group_channel); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/upfirdn2d_kernel.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/upfirdn2d_kernel.cu deleted file mode 100644 index 81708a07..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/upfirdn2d_kernel.cu +++ /dev/null @@ -1,370 +0,0 @@ -// Modified from -// https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d_kernel.cu -// Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -// -// This work is made available under the Nvidia Source Code License-NC. -// To view a copy of this license, visit -// https://nvlabs.github.io/stylegan2/license.html - -#include -#include -#include -#include -#include -#include - -#include - -static __host__ __device__ __forceinline__ int floor_div(int a, int b) { - int c = a / b; - - if (c * b > a) { - c--; - } - - return c; -} - -struct UpFirDn2DKernelParams { - int up_x; - int up_y; - int down_x; - int down_y; - int pad_x0; - int pad_x1; - int pad_y0; - int pad_y1; - - int major_dim; - int in_h; - int in_w; - int minor_dim; - int kernel_h; - int kernel_w; - int out_h; - int out_w; - int loop_major; - int loop_x; -}; - -template -__global__ void upfirdn2d_kernel_large(scalar_t *out, const scalar_t *input, - const scalar_t *kernel, - const UpFirDn2DKernelParams p) { - int minor_idx = blockIdx.x * blockDim.x + threadIdx.x; - int out_y = minor_idx / p.minor_dim; - minor_idx -= out_y * p.minor_dim; - int out_x_base = blockIdx.y * p.loop_x * blockDim.y + threadIdx.y; - int major_idx_base = blockIdx.z * p.loop_major; - - if (out_x_base >= p.out_w || out_y >= p.out_h || - major_idx_base >= p.major_dim) { - return; - } - - int mid_y = out_y * p.down_y + p.up_y - 1 - p.pad_y0; - int in_y = min(max(floor_div(mid_y, p.up_y), 0), p.in_h); - int h = min(max(floor_div(mid_y + p.kernel_h, p.up_y), 0), p.in_h) - in_y; - int kernel_y = mid_y + p.kernel_h - (in_y + 1) * p.up_y; - - for (int loop_major = 0, major_idx = major_idx_base; - loop_major < p.loop_major && major_idx < p.major_dim; - loop_major++, major_idx++) { - for (int loop_x = 0, out_x = out_x_base; - loop_x < p.loop_x && out_x < p.out_w; loop_x++, out_x += blockDim.y) { - int mid_x = out_x * p.down_x + p.up_x - 1 - p.pad_x0; - int in_x = min(max(floor_div(mid_x, p.up_x), 0), p.in_w); - int w = min(max(floor_div(mid_x + p.kernel_w, p.up_x), 0), p.in_w) - in_x; - int kernel_x = mid_x + p.kernel_w - (in_x + 1) * p.up_x; - - const scalar_t *x_p = - &input[((major_idx * p.in_h + in_y) * p.in_w + in_x) * p.minor_dim + - minor_idx]; - const scalar_t *k_p = &kernel[kernel_y * p.kernel_w + kernel_x]; - int x_px = p.minor_dim; - int k_px = -p.up_x; - int x_py = p.in_w * p.minor_dim; - int k_py = -p.up_y * p.kernel_w; - - scalar_t v = 0.0f; - - for (int y = 0; y < h; y++) { - for (int x = 0; x < w; x++) { - v += static_cast(*x_p) * static_cast(*k_p); - x_p += x_px; - k_p += k_px; - } - - x_p += x_py - w * x_px; - k_p += k_py - w * k_px; - } - - out[((major_idx * p.out_h + out_y) * p.out_w + out_x) * p.minor_dim + - minor_idx] = v; - } - } -} - -template -__global__ void upfirdn2d_kernel(scalar_t *out, const scalar_t *input, - const scalar_t *kernel, - const UpFirDn2DKernelParams p) { - const int tile_in_h = ((tile_out_h - 1) * down_y + kernel_h - 1) / up_y + 1; - const int tile_in_w = ((tile_out_w - 1) * down_x + kernel_w - 1) / up_x + 1; - - __shared__ volatile float sk[kernel_h][kernel_w]; - __shared__ volatile float sx[tile_in_h][tile_in_w]; - - int minor_idx 
= blockIdx.x; - int tile_out_y = minor_idx / p.minor_dim; - minor_idx -= tile_out_y * p.minor_dim; - tile_out_y *= tile_out_h; - int tile_out_x_base = blockIdx.y * p.loop_x * tile_out_w; - int major_idx_base = blockIdx.z * p.loop_major; - - if (tile_out_x_base >= p.out_w | tile_out_y >= p.out_h | - major_idx_base >= p.major_dim) { - return; - } - - for (int tap_idx = threadIdx.x; tap_idx < kernel_h * kernel_w; - tap_idx += blockDim.x) { - int ky = tap_idx / kernel_w; - int kx = tap_idx - ky * kernel_w; - scalar_t v = 0.0; - - if (kx < p.kernel_w & ky < p.kernel_h) { - v = kernel[(p.kernel_h - 1 - ky) * p.kernel_w + (p.kernel_w - 1 - kx)]; - } - - sk[ky][kx] = v; - } - - for (int loop_major = 0, major_idx = major_idx_base; - loop_major < p.loop_major & major_idx < p.major_dim; - loop_major++, major_idx++) { - for (int loop_x = 0, tile_out_x = tile_out_x_base; - loop_x < p.loop_x & tile_out_x < p.out_w; - loop_x++, tile_out_x += tile_out_w) { - int tile_mid_x = tile_out_x * down_x + up_x - 1 - p.pad_x0; - int tile_mid_y = tile_out_y * down_y + up_y - 1 - p.pad_y0; - int tile_in_x = floor_div(tile_mid_x, up_x); - int tile_in_y = floor_div(tile_mid_y, up_y); - - __syncthreads(); - - for (int in_idx = threadIdx.x; in_idx < tile_in_h * tile_in_w; - in_idx += blockDim.x) { - int rel_in_y = in_idx / tile_in_w; - int rel_in_x = in_idx - rel_in_y * tile_in_w; - int in_x = rel_in_x + tile_in_x; - int in_y = rel_in_y + tile_in_y; - - scalar_t v = 0.0; - - if (in_x >= 0 & in_y >= 0 & in_x < p.in_w & in_y < p.in_h) { - v = input[((major_idx * p.in_h + in_y) * p.in_w + in_x) * - p.minor_dim + - minor_idx]; - } - - sx[rel_in_y][rel_in_x] = v; - } - - __syncthreads(); - for (int out_idx = threadIdx.x; out_idx < tile_out_h * tile_out_w; - out_idx += blockDim.x) { - int rel_out_y = out_idx / tile_out_w; - int rel_out_x = out_idx - rel_out_y * tile_out_w; - int out_x = rel_out_x + tile_out_x; - int out_y = rel_out_y + tile_out_y; - - int mid_x = tile_mid_x + rel_out_x * down_x; - int mid_y = tile_mid_y + rel_out_y * down_y; - int in_x = floor_div(mid_x, up_x); - int in_y = floor_div(mid_y, up_y); - int rel_in_x = in_x - tile_in_x; - int rel_in_y = in_y - tile_in_y; - int kernel_x = (in_x + 1) * up_x - mid_x - 1; - int kernel_y = (in_y + 1) * up_y - mid_y - 1; - - scalar_t v = 0.0; - -#pragma unroll - for (int y = 0; y < kernel_h / up_y; y++) -#pragma unroll - for (int x = 0; x < kernel_w / up_x; x++) - v += sx[rel_in_y + y][rel_in_x + x] * - sk[kernel_y + y * up_y][kernel_x + x * up_x]; - - if (out_x < p.out_w & out_y < p.out_h) { - out[((major_idx * p.out_h + out_y) * p.out_w + out_x) * p.minor_dim + - minor_idx] = v; - } - } - } - } -} - -torch::Tensor upfirdn2d_op(const torch::Tensor &input, - const torch::Tensor &kernel, int up_x, int up_y, - int down_x, int down_y, int pad_x0, int pad_x1, - int pad_y0, int pad_y1) { - int curDevice = -1; - cudaGetDevice(&curDevice); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(curDevice); - - UpFirDn2DKernelParams p; - - auto x = input.contiguous(); - auto k = kernel.contiguous(); - - p.major_dim = x.size(0); - p.in_h = x.size(1); - p.in_w = x.size(2); - p.minor_dim = x.size(3); - p.kernel_h = k.size(0); - p.kernel_w = k.size(1); - p.up_x = up_x; - p.up_y = up_y; - p.down_x = down_x; - p.down_y = down_y; - p.pad_x0 = pad_x0; - p.pad_x1 = pad_x1; - p.pad_y0 = pad_y0; - p.pad_y1 = pad_y1; - - p.out_h = (p.in_h * p.up_y + p.pad_y0 + p.pad_y1 - p.kernel_h + p.down_y) / - p.down_y; - p.out_w = (p.in_w * p.up_x + p.pad_x0 + p.pad_x1 - p.kernel_w + p.down_x) / - 
p.down_x; - - auto out = - at::empty({p.major_dim, p.out_h, p.out_w, p.minor_dim}, x.options()); - - int mode = -1; - - int tile_out_h = -1; - int tile_out_w = -1; - - if (p.up_x == 1 && p.up_y == 1 && p.down_x == 1 && p.down_y == 1 && - p.kernel_h <= 4 && p.kernel_w <= 4) { - mode = 1; - tile_out_h = 16; - tile_out_w = 64; - } - - if (p.up_x == 1 && p.up_y == 1 && p.down_x == 1 && p.down_y == 1 && - p.kernel_h <= 3 && p.kernel_w <= 3) { - mode = 2; - tile_out_h = 16; - tile_out_w = 64; - } - - if (p.up_x == 2 && p.up_y == 2 && p.down_x == 1 && p.down_y == 1 && - p.kernel_h <= 4 && p.kernel_w <= 4) { - mode = 3; - tile_out_h = 16; - tile_out_w = 64; - } - - if (p.up_x == 2 && p.up_y == 2 && p.down_x == 1 && p.down_y == 1 && - p.kernel_h <= 2 && p.kernel_w <= 2) { - mode = 4; - tile_out_h = 16; - tile_out_w = 64; - } - - if (p.up_x == 1 && p.up_y == 1 && p.down_x == 2 && p.down_y == 2 && - p.kernel_h <= 4 && p.kernel_w <= 4) { - mode = 5; - tile_out_h = 8; - tile_out_w = 32; - } - - if (p.up_x == 1 && p.up_y == 1 && p.down_x == 2 && p.down_y == 2 && - p.kernel_h <= 2 && p.kernel_w <= 2) { - mode = 6; - tile_out_h = 8; - tile_out_w = 32; - } - - dim3 block_size; - dim3 grid_size; - - if (tile_out_h > 0 && tile_out_w > 0) { - p.loop_major = (p.major_dim - 1) / 16384 + 1; - p.loop_x = 1; - block_size = dim3(32 * 8, 1, 1); - grid_size = dim3(((p.out_h - 1) / tile_out_h + 1) * p.minor_dim, - (p.out_w - 1) / (p.loop_x * tile_out_w) + 1, - (p.major_dim - 1) / p.loop_major + 1); - } else { - p.loop_major = (p.major_dim - 1) / 16384 + 1; - p.loop_x = 4; - block_size = dim3(4, 32, 1); - grid_size = dim3((p.out_h * p.minor_dim - 1) / block_size.x + 1, - (p.out_w - 1) / (p.loop_x * block_size.y) + 1, - (p.major_dim - 1) / p.loop_major + 1); - } - - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] { - switch (mode) { - case 1: - upfirdn2d_kernel - <<>>(out.data_ptr(), - x.data_ptr(), - k.data_ptr(), p); - - break; - - case 2: - upfirdn2d_kernel - <<>>(out.data_ptr(), - x.data_ptr(), - k.data_ptr(), p); - - break; - - case 3: - upfirdn2d_kernel - <<>>(out.data_ptr(), - x.data_ptr(), - k.data_ptr(), p); - - break; - - case 4: - upfirdn2d_kernel - <<>>(out.data_ptr(), - x.data_ptr(), - k.data_ptr(), p); - - break; - - case 5: - upfirdn2d_kernel - <<>>(out.data_ptr(), - x.data_ptr(), - k.data_ptr(), p); - - break; - - case 6: - upfirdn2d_kernel - <<>>(out.data_ptr(), - x.data_ptr(), - k.data_ptr(), p); - - break; - - default: - upfirdn2d_kernel_large<<>>( - out.data_ptr(), x.data_ptr(), - k.data_ptr(), p); - } - }); - - return out; -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/voxelization_cuda.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/voxelization_cuda.cu deleted file mode 100644 index f4166b7b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/cuda/voxelization_cuda.cu +++ /dev/null @@ -1,286 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-#include -#include - -#include "pytorch_cuda_helper.hpp" -#include "voxelization_cuda_kernel.cuh" - -int HardVoxelizeForwardCUDAKernelLauncher( - const at::Tensor &points, at::Tensor &voxels, at::Tensor &coors, - at::Tensor &num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim = 3) { - // current version tooks about 0.04s for one frame on cpu - // check device - - at::cuda::CUDAGuard device_guard(points.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - const int num_points = points.size(0); - const int num_features = points.size(1); - - const float voxel_x = voxel_size[0]; - const float voxel_y = voxel_size[1]; - const float voxel_z = voxel_size[2]; - const float coors_x_min = coors_range[0]; - const float coors_y_min = coors_range[1]; - const float coors_z_min = coors_range[2]; - const float coors_x_max = coors_range[3]; - const float coors_y_max = coors_range[4]; - const float coors_z_max = coors_range[5]; - - const int grid_x = round((coors_x_max - coors_x_min) / voxel_x); - const int grid_y = round((coors_y_max - coors_y_min) / voxel_y); - const int grid_z = round((coors_z_max - coors_z_min) / voxel_z); - - // map points to voxel coors - at::Tensor temp_coors = - at::zeros({num_points, NDim}, points.options().dtype(at::kInt)); - - dim3 grid(std::min(at::cuda::ATenCeilDiv(num_points, 512), 4096)); - dim3 block(512); - - // 1. link point to corresponding voxel coors - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "hard_voxelize_kernel", ([&] { - dynamic_voxelize_kernel<<>>( - points.contiguous().data_ptr(), - temp_coors.contiguous().data_ptr(), voxel_x, voxel_y, voxel_z, - coors_x_min, coors_y_min, coors_z_min, coors_x_max, coors_y_max, - coors_z_max, grid_x, grid_y, grid_z, num_points, num_features, - NDim); - })); - - AT_CUDA_CHECK(cudaGetLastError()); - - // 2. map point to the idx of the corresponding voxel, find duplicate coor - // create some temporary variables - auto point_to_pointidx = -at::ones( - { - num_points, - }, - points.options().dtype(at::kInt)); - auto point_to_voxelidx = -at::ones( - { - num_points, - }, - points.options().dtype(at::kInt)); - - dim3 map_grid(std::min(at::cuda::ATenCeilDiv(num_points, 512), 4096)); - dim3 map_block(512); - - AT_DISPATCH_ALL_TYPES( - temp_coors.scalar_type(), "determin_duplicate", ([&] { - point_to_voxelidx_kernel<<>>( - temp_coors.contiguous().data_ptr(), - point_to_voxelidx.contiguous().data_ptr(), - point_to_pointidx.contiguous().data_ptr(), max_points, - max_voxels, num_points, NDim); - })); - - AT_CUDA_CHECK(cudaGetLastError()); - - // 3. determine voxel num and voxel's coor index - // make the logic in the CUDA device could accelerate about 10 times - auto coor_to_voxelidx = -at::ones( - { - num_points, - }, - points.options().dtype(at::kInt)); - auto voxel_num = at::zeros( - { - 1, - }, - points.options().dtype(at::kInt)); // must be zero from the beginning - - AT_DISPATCH_ALL_TYPES(temp_coors.scalar_type(), "determin_duplicate", ([&] { - determin_voxel_num<<<1, 1, 0, stream>>>( - num_points_per_voxel.contiguous().data_ptr(), - point_to_voxelidx.contiguous().data_ptr(), - point_to_pointidx.contiguous().data_ptr(), - coor_to_voxelidx.contiguous().data_ptr(), - voxel_num.contiguous().data_ptr(), - max_points, max_voxels, num_points); - })); - - AT_CUDA_CHECK(cudaGetLastError()); - - // 4. 
copy point features to voxels - // Step 4 & 5 could be parallel - auto pts_output_size = num_points * num_features; - dim3 cp_grid(std::min(at::cuda::ATenCeilDiv(pts_output_size, 512), 4096)); - dim3 cp_block(512); - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "assign_point_to_voxel", ([&] { - assign_point_to_voxel<<>>( - pts_output_size, points.contiguous().data_ptr(), - point_to_voxelidx.contiguous().data_ptr(), - coor_to_voxelidx.contiguous().data_ptr(), - voxels.contiguous().data_ptr(), max_points, num_features, - num_points, NDim); - })); - // cudaDeviceSynchronize(); - // AT_CUDA_CHECK(cudaGetLastError()); - - // 5. copy coors of each voxels - auto coors_output_size = num_points * NDim; - dim3 coors_cp_grid( - std::min(at::cuda::ATenCeilDiv(coors_output_size, 512), 4096)); - dim3 coors_cp_block(512); - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "assign_point_to_voxel", ([&] { - assign_voxel_coors - <<>>( - coors_output_size, temp_coors.contiguous().data_ptr(), - point_to_voxelidx.contiguous().data_ptr(), - coor_to_voxelidx.contiguous().data_ptr(), - coors.contiguous().data_ptr(), num_points, NDim); - })); - - AT_CUDA_CHECK(cudaGetLastError()); - - auto voxel_num_cpu = voxel_num.to(at::kCPU); - int voxel_num_int = voxel_num_cpu.data_ptr()[0]; - - return voxel_num_int; -} - -int NondeterministicHardVoxelizeForwardCUDAKernelLauncher( - const at::Tensor &points, at::Tensor &voxels, at::Tensor &coors, - at::Tensor &num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim = 3) { - at::cuda::CUDAGuard device_guard(points.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - const int num_points = points.size(0); - const int num_features = points.size(1); - - if (num_points == 0) return 0; - - dim3 blocks( - std::min(at::cuda::ATenCeilDiv(num_points, THREADS_PER_BLOCK), 4096)); - dim3 threads(THREADS_PER_BLOCK); - - const float voxel_x = voxel_size[0]; - const float voxel_y = voxel_size[1]; - const float voxel_z = voxel_size[2]; - const float coors_x_min = coors_range[0]; - const float coors_y_min = coors_range[1]; - const float coors_z_min = coors_range[2]; - const float coors_x_max = coors_range[3]; - const float coors_y_max = coors_range[4]; - const float coors_z_max = coors_range[5]; - - const int grid_x = round((coors_x_max - coors_x_min) / voxel_x); - const int grid_y = round((coors_y_max - coors_y_min) / voxel_y); - const int grid_z = round((coors_z_max - coors_z_min) / voxel_z); - - // map points to voxel coors - at::Tensor temp_coors = - at::zeros({num_points, NDim}, points.options().dtype(at::kInt)); - - // 1. 
link point to corresponding voxel coors - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "hard_voxelize_kernel", ([&] { - dynamic_voxelize_kernel<<>>( - points.contiguous().data_ptr(), - temp_coors.contiguous().data_ptr(), voxel_x, voxel_y, voxel_z, - coors_x_min, coors_y_min, coors_z_min, coors_x_max, coors_y_max, - coors_z_max, grid_x, grid_y, grid_z, num_points, num_features, - NDim); - })); - - at::Tensor coors_map; - at::Tensor reduce_count; - - auto coors_clean = temp_coors.masked_fill(temp_coors.lt(0).any(-1, true), -1); - - std::tie(temp_coors, coors_map, reduce_count) = - at::unique_dim(coors_clean, 0, true, true, false); - - if (temp_coors[0][0].lt(0).item()) { - // the first element of temp_coors is (-1,-1,-1) and should be removed - temp_coors = temp_coors.slice(0, 1); - coors_map = coors_map - 1; - } - - int num_coors = temp_coors.size(0); - temp_coors = temp_coors.to(at::kInt); - coors_map = coors_map.to(at::kInt); - - at::Tensor coors_count = at::zeros({1}, coors_map.options()); - at::Tensor coors_order = at::empty({num_coors}, coors_map.options()); - at::Tensor pts_id = at::zeros({num_points}, coors_map.options()); - reduce_count = at::zeros({num_coors}, coors_map.options()); - - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "get_assign_pos", ([&] { - nondeterministic_get_assign_pos<<>>( - num_points, coors_map.contiguous().data_ptr(), - pts_id.contiguous().data_ptr(), - coors_count.contiguous().data_ptr(), - reduce_count.contiguous().data_ptr(), - coors_order.contiguous().data_ptr()); - })); - - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "assign_point_to_voxel", ([&] { - nondeterministic_assign_point_voxel - <<>>( - num_points, points.contiguous().data_ptr(), - coors_map.contiguous().data_ptr(), - pts_id.contiguous().data_ptr(), - temp_coors.contiguous().data_ptr(), - reduce_count.contiguous().data_ptr(), - coors_order.contiguous().data_ptr(), - voxels.contiguous().data_ptr(), - coors.contiguous().data_ptr(), - num_points_per_voxel.contiguous().data_ptr(), - max_voxels, max_points, num_features, NDim); - })); - AT_CUDA_CHECK(cudaGetLastError()); - return max_voxels < num_coors ? 
max_voxels : num_coors; -} - -void DynamicVoxelizeForwardCUDAKernelLauncher( - const at::Tensor &points, at::Tensor &coors, - const std::vector voxel_size, const std::vector coors_range, - const int NDim = 3) { - // current version tooks about 0.04s for one frame on cpu - // check device - - at::cuda::CUDAGuard device_guard(points.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - const int num_points = points.size(0); - const int num_features = points.size(1); - - const float voxel_x = voxel_size[0]; - const float voxel_y = voxel_size[1]; - const float voxel_z = voxel_size[2]; - const float coors_x_min = coors_range[0]; - const float coors_y_min = coors_range[1]; - const float coors_z_min = coors_range[2]; - const float coors_x_max = coors_range[3]; - const float coors_y_max = coors_range[4]; - const float coors_z_max = coors_range[5]; - - const int grid_x = round((coors_x_max - coors_x_min) / voxel_x); - const int grid_y = round((coors_y_max - coors_y_min) / voxel_y); - const int grid_z = round((coors_z_max - coors_z_min) / voxel_z); - - const int col_blocks = at::cuda::ATenCeilDiv(num_points, THREADS_PER_BLOCK); - dim3 blocks(col_blocks); - dim3 threads(THREADS_PER_BLOCK); - - AT_DISPATCH_ALL_TYPES(points.scalar_type(), "dynamic_voxelize_kernel", [&] { - dynamic_voxelize_kernel<<>>( - points.contiguous().data_ptr(), - coors.contiguous().data_ptr(), voxel_x, voxel_y, voxel_z, - coors_x_min, coors_y_min, coors_z_min, coors_x_max, coors_y_max, - coors_z_max, grid_x, grid_y, grid_z, num_points, num_features, NDim); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/deform_conv.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/deform_conv.cpp deleted file mode 100644 index 86690b93..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/deform_conv.cpp +++ /dev/null @@ -1,517 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void deformable_im2col_impl(Tensor data_im, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor data_col) { - DISPATCH_DEVICE_IMPL(deformable_im2col_impl, data_im, data_offset, channels, - height, width, ksize_h, ksize_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, parallel_imgs, - deformable_group, data_col); -} - -void deformable_col2im_impl(Tensor data_col, Tensor data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - Tensor grad_im) { - DISPATCH_DEVICE_IMPL(deformable_col2im_impl, data_col, data_offset, channels, - height, width, ksize_h, ksize_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, parallel_imgs, - deformable_group, grad_im); -} - -void deformable_col2im_coord_impl( - Tensor data_col, Tensor data_im, Tensor data_offset, const int channels, - const int height, const int width, const int ksize_h, const int ksize_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, Tensor grad_offset) { - DISPATCH_DEVICE_IMPL(deformable_col2im_coord_impl, data_col, data_im, - data_offset, channels, height, width, ksize_h, ksize_w, - pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, - parallel_imgs, deformable_group, grad_offset); -} - -void deform_conv_shape_check(at::Tensor input, at::Tensor offset, - at::Tensor *gradOutput, at::Tensor weight, int kH, - int kW, int dH, int dW, int padH, int padW, - int dilationH, int dilationW, int group, - int deformable_group) { - TORCH_CHECK( - weight.ndimension() == 4, - "4D weight tensor (nOutputPlane,nInputPlane,kH,kW) expected, but got: %s", - weight.ndimension()); - - TORCH_CHECK(weight.is_contiguous(), "weight tensor has to be contiguous"); - - TORCH_CHECK(kW > 0 && kH > 0, - "kernel size should be greater than zero, but got kH: %d kW: %d", - kH, kW); - - TORCH_CHECK((weight.size(2) == kH && weight.size(3) == kW), - "kernel size should be consistent with weight, ", - "but got kH: %d kW: %d weight.size(2): %d, weight.size(3): %d", - kH, kW, weight.size(2), weight.size(3)); - - TORCH_CHECK(dW > 0 && dH > 0, - "stride should be greater than zero, but got dH: %d dW: %d", dH, - dW); - - TORCH_CHECK( - dilationW > 0 && dilationH > 0, - "dilation should be greater than 0, but got dilationH: %d dilationW: %d", - dilationH, dilationW); - - int ndim = input.ndimension(); - int dimf = 0; - int dimh = 1; - int dimw = 2; - - if (ndim == 4) { - dimf++; - dimh++; - dimw++; - } - - TORCH_CHECK(ndim == 3 || ndim == 4, - "3D or 4D input tensor expected but got: %s", ndim); - - long nInputPlane = weight.size(1) * group; - long inputHeight = input.size(dimh); - long inputWidth = input.size(dimw); - long nOutputPlane = weight.size(0); - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - - TORCH_CHECK(nInputPlane % 
deformable_group == 0, - "input channels must divide deformable group size"); - - if (outputWidth < 1 || outputHeight < 1) - AT_ERROR( - "Given input size: (%ld x %ld x %ld). " - "Calculated output size: (%ld x %ld x %ld). Output size is too small", - nInputPlane, inputHeight, inputWidth, nOutputPlane, outputHeight, - outputWidth); - - TORCH_CHECK(input.size(1) == nInputPlane, - "invalid number of input planes, expected: %d, but got: %d", - nInputPlane, input.size(1)); - - TORCH_CHECK((inputHeight >= kH && inputWidth >= kW), - "input image is smaller than kernel"); - - TORCH_CHECK( - (offset.size(2) == outputHeight && offset.size(3) == outputWidth), - "invalid spatial size of offset, expected height: %d width: %d, but " - "got height: %d width: %d", - outputHeight, outputWidth, offset.size(2), offset.size(3)); - - TORCH_CHECK((offset.size(1) == deformable_group * 2 * kH * kW), - "invalid number of channels of offset"); - - if (gradOutput != NULL) { - TORCH_CHECK( - gradOutput->size(dimf) == nOutputPlane, - "invalid number of gradOutput planes, expected: %d, but got: %d", - nOutputPlane, gradOutput->size(dimf)); - - TORCH_CHECK( - (gradOutput->size(dimh) == outputHeight && - gradOutput->size(dimw) == outputWidth), - "invalid size of gradOutput, expected height: %d width: %d , but " - "got height: %d width: %d", - outputHeight, outputWidth, gradOutput->size(dimh), - gradOutput->size(dimw)); - } -} - -void deform_conv_forward(Tensor input, Tensor weight, Tensor offset, - Tensor output, Tensor columns, Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(offset); - CHECK_CUDA_INPUT(weight); - CHECK_CUDA_INPUT(output); - CHECK_CUDA_INPUT(columns); - CHECK_CUDA_INPUT(ones); -#else - AT_ERROR("DeformConv is not compiled with GPU support"); -#endif - } else { - CHECK_CPU_INPUT(input); - CHECK_CPU_INPUT(offset); - CHECK_CPU_INPUT(weight); - CHECK_CPU_INPUT(output); - CHECK_CPU_INPUT(columns); - CHECK_CPU_INPUT(ones); - } - - deform_conv_shape_check(input, offset, NULL, weight, kH, kW, dH, dW, padH, - padW, dilationH, dilationW, group, deformable_group); - at::DeviceGuard guard(input.device()); - - int batch = 1; - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input.unsqueeze_(0); - offset.unsqueeze_(0); - } - - // todo: assert batchsize dividable by im2col_step - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = weight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), "invalid batch size of offset"); - - output = output.view({batchSize / im2col_step, im2col_step, nOutputPlane, - outputHeight, outputWidth}); - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < outputHeight * outputWidth) { - ones = at::ones({outputHeight, outputWidth}, input.options()); - } - - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, 
outputWidth}); - - Tensor output_buffer = at::zeros({batchSize / im2col_step, nOutputPlane, - im2col_step * outputHeight, outputWidth}, - output.options()); - - output_buffer = output_buffer.view( - {output_buffer.size(0), group, output_buffer.size(1) / group, - output_buffer.size(2), output_buffer.size(3)}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - deformable_im2col_impl(input[elt], offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, columns); - - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - - for (int g = 0; g < group; g++) { - output_buffer[elt][g] = output_buffer[elt][g] - .flatten(1) - .addmm_(weight[g].flatten(1), columns[g]) - .view_as(output_buffer[elt][g]); - } - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - } - - output_buffer = output_buffer.view( - {output_buffer.size(0), output_buffer.size(1) * output_buffer.size(2), - output_buffer.size(3), output_buffer.size(4)}); - - output_buffer = output_buffer.view({batchSize / im2col_step, nOutputPlane, - im2col_step, outputHeight, outputWidth}); - output_buffer.transpose_(1, 2); - output.copy_(output_buffer); - output = output.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - output = output.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - offset = offset.view({offset.size(1), offset.size(2), offset.size(3)}); - } -} - -void deform_conv_backward_input(Tensor input, Tensor offset, Tensor gradOutput, - Tensor gradInput, Tensor gradOffset, - Tensor weight, Tensor columns, int kW, int kH, - int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(offset); - CHECK_CUDA_INPUT(gradOutput); - CHECK_CUDA_INPUT(gradInput); - CHECK_CUDA_INPUT(gradOffset); - CHECK_CUDA_INPUT(weight); - CHECK_CUDA_INPUT(columns); -#else - AT_ERROR("DeformConv is not compiled with GPU support"); -#endif - } else { - CHECK_CPU_INPUT(input); - CHECK_CPU_INPUT(offset); - CHECK_CPU_INPUT(gradOutput); - CHECK_CPU_INPUT(gradInput); - CHECK_CPU_INPUT(gradOffset); - CHECK_CPU_INPUT(weight); - CHECK_CPU_INPUT(columns); - } - deform_conv_shape_check(input, offset, &gradOutput, weight, kH, kW, dH, dW, - padH, padW, dilationH, dilationW, group, - deformable_group); - - at::DeviceGuard guard(input.device()); - - int batch = 1; - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input = input.view({1, input.size(0), input.size(1), input.size(2)}); - offset = offset.view({1, offset.size(0), offset.size(1), offset.size(2)}); - gradOutput = gradOutput.view( - {1, gradOutput.size(0), gradOutput.size(1), gradOutput.size(2)}); - } - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = weight.size(0); - - long outputWidth = - (inputWidth + 2 * padW 
- (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), 3, "invalid batch size of offset"); - gradInput = gradInput.view({batchSize, nInputPlane, inputHeight, inputWidth}); - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - // change order of grad output - gradOutput = gradOutput.view({batchSize / im2col_step, im2col_step, - nOutputPlane, outputHeight, outputWidth}); - gradOutput.transpose_(1, 2); - - gradInput = gradInput.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - gradOffset = gradOffset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, - outputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - // divide into groups - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - gradOutput = gradOutput.view( - {gradOutput.size(0), group, gradOutput.size(1) / group, - gradOutput.size(2), gradOutput.size(3), gradOutput.size(4)}); - - for (int g = 0; g < group; g++) { - columns[g] = columns[g].addmm_(weight[g].flatten(1).transpose(0, 1), - gradOutput[elt][g].flatten(1), 0.0f, 1.0f); - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - gradOutput = gradOutput.view( - {gradOutput.size(0), gradOutput.size(1) * gradOutput.size(2), - gradOutput.size(3), gradOutput.size(4), gradOutput.size(5)}); - - deformable_col2im_coord_impl(columns, input[elt], offset[elt], nInputPlane, - inputHeight, inputWidth, kH, kW, padH, padW, - dH, dW, dilationH, dilationW, im2col_step, - deformable_group, gradOffset[elt]); - - deformable_col2im_impl(columns, offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, - gradInput[elt]); - - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - } - - gradOutput.transpose_(1, 2); - gradOutput = - gradOutput.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - gradInput = gradInput.view({batchSize, nInputPlane, inputHeight, inputWidth}); - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - gradOffset = gradOffset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - gradOutput = gradOutput.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - gradInput = gradInput.view({nInputPlane, inputHeight, inputWidth}); - offset = offset.view({offset.size(1), offset.size(2), offset.size(3)}); - gradOffset = - gradOffset.view({offset.size(1), offset.size(2), offset.size(3)}); - } -} - -void deform_conv_backward_parameters(Tensor input, Tensor offset, - Tensor gradOutput, Tensor gradWeight, - Tensor columns, Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, float scale, 
- int im2col_step) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(offset); - CHECK_CUDA_INPUT(gradOutput); - CHECK_CUDA_INPUT(gradWeight); - CHECK_CUDA_INPUT(columns); - CHECK_CUDA_INPUT(ones); -#else - AT_ERROR("DeformConv is not compiled with GPU support"); -#endif - } else { - CHECK_CPU_INPUT(input); - CHECK_CPU_INPUT(offset); - CHECK_CPU_INPUT(gradOutput); - CHECK_CPU_INPUT(gradWeight); - CHECK_CPU_INPUT(columns); - CHECK_CPU_INPUT(ones); - } - - deform_conv_shape_check(input, offset, &gradOutput, gradWeight, kH, kW, dH, - dW, padH, padW, dilationH, dilationW, group, - deformable_group); - at::DeviceGuard guard(input.device()); - - int batch = 1; - - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input = input.view( - at::IntList({1, input.size(0), input.size(1), input.size(2)})); - gradOutput = gradOutput.view( - {1, gradOutput.size(0), gradOutput.size(1), gradOutput.size(2)}); - } - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = gradWeight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), "invalid batch size of offset"); - - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - gradOutput = gradOutput.view({batchSize / im2col_step, im2col_step, - nOutputPlane, outputHeight, outputWidth}); - gradOutput.transpose_(1, 2); - - Tensor gradOutputBuffer = at::zeros_like(gradOutput); - gradOutputBuffer = - gradOutputBuffer.view({batchSize / im2col_step, nOutputPlane, im2col_step, - outputHeight, outputWidth}); - gradOutputBuffer = gradOutputBuffer.contiguous(); - gradOutputBuffer.copy_(gradOutput); - gradOutputBuffer = - gradOutputBuffer.view({batchSize / im2col_step, nOutputPlane, - im2col_step * outputHeight, outputWidth}); - - gradOutput.transpose_(1, 2); - gradOutput = - gradOutput.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - deformable_im2col_impl(input[elt], offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, columns); - - // divide into group - gradOutputBuffer = gradOutputBuffer.view( - {gradOutputBuffer.size(0), group, gradOutputBuffer.size(1) / group, - gradOutputBuffer.size(2), gradOutputBuffer.size(3)}); - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - gradWeight = - gradWeight.view({group, gradWeight.size(0) / group, gradWeight.size(1), - gradWeight.size(2), gradWeight.size(3)}); - - for (int g = 0; g < group; g++) { - gradWeight[g] = gradWeight[g] - .flatten(1) - .addmm_(gradOutputBuffer[elt][g].flatten(1), - columns[g].transpose(1, 0), 1.0, scale) - .view_as(gradWeight[g]); - } - gradOutputBuffer = gradOutputBuffer.view( - {gradOutputBuffer.size(0), - gradOutputBuffer.size(1) * gradOutputBuffer.size(2), - gradOutputBuffer.size(3), gradOutputBuffer.size(4)}); - columns = - columns.view({columns.size(0) * columns.size(1), 
columns.size(2)}); - gradWeight = gradWeight.view({gradWeight.size(0) * gradWeight.size(1), - gradWeight.size(2), gradWeight.size(3), - gradWeight.size(4)}); - } - - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - gradOutput = gradOutput.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - } -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/deform_roi_pool.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/deform_roi_pool.cpp deleted file mode 100644 index 4fb78a96..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/deform_roi_pool.cpp +++ /dev/null @@ -1,42 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void deform_roi_pool_forward_impl(Tensor input, Tensor rois, Tensor offset, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, float gamma) { - DISPATCH_DEVICE_IMPL(deform_roi_pool_forward_impl, input, rois, offset, - output, pooled_height, pooled_width, spatial_scale, - sampling_ratio, gamma); -} - -void deform_roi_pool_backward_impl(Tensor grad_output, Tensor input, - Tensor rois, Tensor offset, - Tensor grad_input, Tensor grad_offset, - int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - float gamma) { - DISPATCH_DEVICE_IMPL(deform_roi_pool_backward_impl, grad_output, input, rois, - offset, grad_input, grad_offset, pooled_height, - pooled_width, spatial_scale, sampling_ratio, gamma); -} - -void deform_roi_pool_forward(Tensor input, Tensor rois, Tensor offset, - Tensor output, int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - float gamma) { - deform_roi_pool_forward_impl(input, rois, offset, output, pooled_height, - pooled_width, spatial_scale, sampling_ratio, - gamma); -} - -void deform_roi_pool_backward(Tensor grad_output, Tensor input, Tensor rois, - Tensor offset, Tensor grad_input, - Tensor grad_offset, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, float gamma) { - deform_roi_pool_backward_impl(grad_output, input, rois, offset, grad_input, - grad_offset, pooled_height, pooled_width, - spatial_scale, sampling_ratio, gamma); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/diff_iou_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/diff_iou_rotated.cpp deleted file mode 100644 index 2361b7fb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/diff_iou_rotated.cpp +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -Tensor diff_iou_rotated_sort_vertices_forward_impl(Tensor vertices, Tensor mask, - Tensor num_valid) { - return DISPATCH_DEVICE_IMPL(diff_iou_rotated_sort_vertices_forward_impl, - vertices, mask, num_valid); -} - -Tensor diff_iou_rotated_sort_vertices_forward(Tensor vertices, Tensor mask, - Tensor num_valid) { - return diff_iou_rotated_sort_vertices_forward_impl(vertices, mask, num_valid); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/focal_loss.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/focal_loss.cpp deleted file mode 100644 index ed0e2186..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/focal_loss.cpp +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void sigmoid_focal_loss_forward_impl(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - DISPATCH_DEVICE_IMPL(sigmoid_focal_loss_forward_impl, input, target, weight, - output, gamma, alpha); -} - -void sigmoid_focal_loss_backward_impl(Tensor input, Tensor target, - Tensor weight, Tensor grad_input, - float gamma, float alpha) { - DISPATCH_DEVICE_IMPL(sigmoid_focal_loss_backward_impl, input, target, weight, - grad_input, gamma, alpha); -} - -void softmax_focal_loss_forward_impl(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - DISPATCH_DEVICE_IMPL(softmax_focal_loss_forward_impl, input, target, weight, - output, gamma, alpha); -} - -void softmax_focal_loss_backward_impl(Tensor input, Tensor target, - Tensor weight, Tensor buff, - Tensor grad_input, float gamma, - float alpha) { - DISPATCH_DEVICE_IMPL(softmax_focal_loss_backward_impl, input, target, weight, - buff, grad_input, gamma, alpha); -} - -void sigmoid_focal_loss_forward(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - sigmoid_focal_loss_forward_impl(input, target, weight, output, gamma, alpha); -} - -void sigmoid_focal_loss_backward(Tensor input, Tensor target, Tensor weight, - Tensor grad_input, float gamma, float alpha) { - sigmoid_focal_loss_backward_impl(input, target, weight, grad_input, gamma, - alpha); -} - -void softmax_focal_loss_forward(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - softmax_focal_loss_forward_impl(input, target, weight, output, gamma, alpha); -} - -void softmax_focal_loss_backward(Tensor input, Tensor target, Tensor weight, - Tensor buff, Tensor grad_input, float gamma, - float alpha) { - softmax_focal_loss_backward_impl(input, target, weight, buff, grad_input, - gamma, alpha); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/furthest_point_sample.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/furthest_point_sample.cpp deleted file mode 100644 index 9c7098ac..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/furthest_point_sample.cpp +++ /dev/null @@ -1,34 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/sampling.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void furthest_point_sampling_forward_impl(Tensor points_tensor, - Tensor temp_tensor, Tensor idx_tensor, - int b, int n, int m) { - 
DISPATCH_DEVICE_IMPL(furthest_point_sampling_forward_impl, points_tensor, - temp_tensor, idx_tensor, b, n, m); -} - -void furthest_point_sampling_with_dist_forward_impl(Tensor points_tensor, - Tensor temp_tensor, - Tensor idx_tensor, int b, - int n, int m) { - DISPATCH_DEVICE_IMPL(furthest_point_sampling_with_dist_forward_impl, - points_tensor, temp_tensor, idx_tensor, b, n, m); -} - -void furthest_point_sampling_forward(Tensor points_tensor, Tensor temp_tensor, - Tensor idx_tensor, int b, int n, int m) { - furthest_point_sampling_forward_impl(points_tensor, temp_tensor, idx_tensor, - b, n, m); -} - -void furthest_point_sampling_with_dist_forward(Tensor points_tensor, - Tensor temp_tensor, - Tensor idx_tensor, int b, int n, - int m) { - furthest_point_sampling_with_dist_forward_impl(points_tensor, temp_tensor, - idx_tensor, b, n, m); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/fused_bias_leakyrelu.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/fused_bias_leakyrelu.cpp deleted file mode 100644 index 8d411c9d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/fused_bias_leakyrelu.cpp +++ /dev/null @@ -1,119 +0,0 @@ -// Modified from -// https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_bias_act.cpp - -/* -Copyright (c) 2021, NVIDIA Corporation. All rights reserved. - -NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -Augmentation (ADA) -======================================================================= - -1. Definitions - -"Licensor" means any person or entity that distributes its Work. - -"Software" means the original work of authorship made available under -this License. - -"Work" means the Software and any additions to or derivative works of -the Software that are made available under this License. - -The terms "reproduce," "reproduction," "derivative works," and -"distribution" have the meaning as provided under U.S. copyright law; -provided, however, that for the purposes of this License, derivative -works shall not include works that remain separable from, or merely -link (or bind by name) to the interfaces of, the Work. - -Works, including the Software, are "made available" under this License -by including in or with the Work either (a) a copyright notice -referencing the applicability of this License to the Work, or (b) a -copy of this License. - -2. License Grants - - 2.1 Copyright Grant. Subject to the terms and conditions of this - License, each Licensor grants to you a perpetual, worldwide, - non-exclusive, royalty-free, copyright license to reproduce, - prepare derivative works of, publicly display, publicly perform, - sublicense and distribute its Work and any resulting derivative - works in any form. - -3. Limitations - - 3.1 Redistribution. You may reproduce or distribute the Work only - if (a) you do so under this License, (b) you include a complete - copy of this License with your distribution, and (c) you retain - without modification any copyright, patent, trademark, or - attribution notices that are present in the Work. - - 3.2 Derivative Works. You may specify that additional or different - terms apply to the use, reproduction, and distribution of your - derivative works of the Work ("Your Terms") only if (a) Your Terms - provide that the use limitation in Section 3.3 applies to your - derivative works, and (b) you identify the specific derivative - works that are subject to Your Terms. 
Notwithstanding Your Terms, - this License (including the redistribution requirements in Section - 3.1) will continue to apply to the Work itself. - - 3.3 Use Limitation. The Work and any derivative works thereof only - may be used or intended for use non-commercially. Notwithstanding - the foregoing, NVIDIA and its affiliates may use the Work and any - derivative works commercially. As used herein, "non-commercially" - means for research or evaluation purposes only. - - 3.4 Patent Claims. If you bring or threaten to bring a patent claim - against any Licensor (including any claim, cross-claim or - counterclaim in a lawsuit) to enforce any patents that you allege - are infringed by any Work, then your rights under this License from - such Licensor (including the grant in Section 2.1) will terminate - immediately. - - 3.5 Trademarks. This License does not grant any rights to use any - Licensor’s or its affiliates’ names, logos, or trademarks, except - as necessary to reproduce the notices described in this License. - - 3.6 Termination. If you violate any term of this License, then your - rights under this License (including the grant in Section 2.1) will - terminate immediately. - -4. Disclaimer of Warranty. - -THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -THIS LICENSE. - -5. Limitation of Liability. - -EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -(INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -THE POSSIBILITY OF SUCH DAMAGES. - -======================================================================= -*/ - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -torch::Tensor fused_bias_leakyrelu_op_impl(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, int act, - int grad, float alpha, float scale) { - return DISPATCH_DEVICE_IMPL(fused_bias_leakyrelu_op_impl, input, bias, refer, - act, grad, alpha, scale); -} - -torch::Tensor fused_bias_leakyrelu(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, int act, - int grad, float alpha, float scale) { - return fused_bias_leakyrelu_op_impl(input, bias, refer, act, grad, alpha, - scale); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/fused_spconv_ops.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/fused_spconv_ops.cpp deleted file mode 100644 index 54073a54..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/fused_spconv_ops.cpp +++ /dev/null @@ -1,34 +0,0 @@ -// Copyright 2019 Yan Yan -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -torch::Tensor fused_indice_conv_batchnorm_forward_impl( - torch::Tensor features, torch::Tensor filters, torch::Tensor bias, - torch::Tensor indicePairs, torch::Tensor indiceNum, int64_t numActOut, - int64_t _inverse, int64_t _subM) { - return DISPATCH_DEVICE_IMPL(fused_indice_conv_batchnorm_forward_impl, - features, filters, bias, indicePairs, indiceNum, - numActOut, _inverse, _subM); -} - -torch::Tensor fused_indice_conv_batchnorm_forward( - torch::Tensor features, torch::Tensor filters, torch::Tensor bias, - torch::Tensor indicePairs, torch::Tensor indiceNum, int64_t numActOut, - int64_t _inverse, int64_t _subM) { - return fused_indice_conv_batchnorm_forward_impl(features, filters, bias, - indicePairs, indiceNum, - numActOut, _inverse, _subM); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/gather_points.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/gather_points.cpp deleted file mode 100644 index b8fb0200..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/gather_points.cpp +++ /dev/null @@ -1,30 +0,0 @@ -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void gather_points_forward_impl(int b, int c, int n, int npoints, - const Tensor points, const Tensor idx, - Tensor out) { - DISPATCH_DEVICE_IMPL(gather_points_forward_impl, b, c, n, npoints, points, - idx, out); -} - -void gather_points_backward_impl(int b, int c, int n, int npoints, - const Tensor grad_out, const Tensor idx, - Tensor grad_points) { - DISPATCH_DEVICE_IMPL(gather_points_backward_impl, b, c, n, npoints, grad_out, - idx, grad_points); -} - -void gather_points_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor out_tensor, int b, int c, int n, - int npoints) { - gather_points_forward_impl(b, c, n, npoints, points_tensor, idx_tensor, - out_tensor); -} - -void gather_points_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor grad_points_tensor, int b, int c, int n, - int npoints) { - gather_points_backward_impl(b, c, n, npoints, grad_out_tensor, idx_tensor, - grad_points_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/group_points.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/group_points.cpp deleted file mode 100644 index cdd190d4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/group_points.cpp +++ /dev/null @@ -1,34 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/group_points.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void group_points_forward_impl(int b, int c, int n, int npoints, int nsample, - const Tensor points, const Tensor idx, - Tensor out) { - DISPATCH_DEVICE_IMPL(group_points_forward_impl, b, c, n, npoints, nsample, - points, idx, out); -} - -void group_points_backward_impl(int b, int c, int n, int npoints, int nsample, - const Tensor grad_out, const Tensor idx, - Tensor grad_points) { - DISPATCH_DEVICE_IMPL(group_points_backward_impl, b, c, n, npoints, nsample, - grad_out, idx, grad_points); -} - -void group_points_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor out_tensor, int b, int c, int n, int npoints, - int nsample) { - DISPATCH_DEVICE_IMPL(group_points_forward_impl, b, c, n, npoints, nsample, - points_tensor, idx_tensor, out_tensor); -} - -void group_points_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor grad_points_tensor, int b, int c, int n, - int npoints, int nsample) { - group_points_backward_impl(b, c, n, npoints, nsample, grad_out_tensor, - idx_tensor, grad_points_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/info.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/info.cpp deleted file mode 100644 index a08d227d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/info.cpp +++ /dev/null @@ -1,56 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/vision.cpp -#include "pytorch_cpp_helper.hpp" - -#ifdef MMCV_WITH_CUDA -#ifndef HIP_DIFF -#include -int get_cudart_version() { return CUDART_VERSION; } -#endif -#endif - -std::string get_compiling_cuda_version() { -#ifdef MMCV_WITH_CUDA -#ifndef HIP_DIFF - std::ostringstream oss; - // copied from - // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/cuda/detail/CUDAHooks.cpp#L231 - auto printCudaStyleVersion = [&](int v) { - oss << (v / 1000) << "." << (v / 10 % 100); - if (v % 10 != 0) { - oss << "." << (v % 10); - } - }; - printCudaStyleVersion(get_cudart_version()); - return oss.str(); -#else - return std::string("rocm not available"); -#endif -#else - return std::string("not available"); -#endif -} - -// similar to -// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Version.cpp -std::string get_compiler_version() { - std::ostringstream ss; -#if defined(__GNUC__) -#ifndef __clang__ - { ss << "GCC " << __GNUC__ << "." << __GNUC_MINOR__; } -#endif -#endif - -#if defined(__clang_major__) - { - ss << "clang " << __clang_major__ << "." << __clang_minor__ << "." - << __clang_patchlevel__; - } -#endif - -#if defined(_MSC_VER) - { ss << "MSVC " << _MSC_FULL_VER; } -#endif - return ss.str(); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/iou3d.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/iou3d.cpp deleted file mode 100644 index 5ef9c7e8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/iou3d.cpp +++ /dev/null @@ -1,135 +0,0 @@ -// Modified from -// https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/iou3d_nms/src/iou3d_nms.cpp - -/* -3D IoU Calculation and Rotated NMS(modified from 2D NMS written by others) -Written by Shaoshuai Shi -All Rights Reserved 2019-2020. 
-*/ - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -const int THREADS_PER_BLOCK_NMS = sizeof(unsigned long long) * 8; - -void iou3d_boxes_overlap_bev_forward_impl(const int num_a, const Tensor boxes_a, - const int num_b, const Tensor boxes_b, - Tensor ans_overlap) { - DISPATCH_DEVICE_IMPL(iou3d_boxes_overlap_bev_forward_impl, num_a, boxes_a, - num_b, boxes_b, ans_overlap); -} - -void iou3d_nms3d_forward_impl(const Tensor boxes, unsigned long long *mask, - int boxes_num, float nms_overlap_thresh) { - DISPATCH_DEVICE_IMPL(iou3d_nms3d_forward_impl, boxes, mask, boxes_num, - nms_overlap_thresh); -} - -void iou3d_nms3d_normal_forward_impl(const Tensor boxes, - unsigned long long *mask, int boxes_num, - float nms_overlap_thresh) { - DISPATCH_DEVICE_IMPL(iou3d_nms3d_normal_forward_impl, boxes, mask, boxes_num, - nms_overlap_thresh); -} - -void iou3d_boxes_overlap_bev_forward(Tensor boxes_a, Tensor boxes_b, - Tensor ans_overlap) { - // params boxes: (N, 7) [x, y, z, dx, dy, dz, heading] - // params boxes_b: (M, 5) - // params ans_overlap: (N, M) - int num_a = boxes_a.size(0); - int num_b = boxes_b.size(0); - - iou3d_boxes_overlap_bev_forward_impl(num_a, boxes_a, num_b, boxes_b, - ans_overlap); -} - -void iou3d_nms3d_forward(Tensor boxes, Tensor keep, Tensor keep_num, - float nms_overlap_thresh) { - // params boxes: (N, 7) [x, y, z, dx, dy, dz, heading] - // params keep: (N) - CHECK_CONTIGUOUS(boxes); - CHECK_CONTIGUOUS(keep); - - int boxes_num = boxes.size(0); - int64_t *keep_data = keep.data_ptr(); - int64_t *keep_num_data = keep_num.data_ptr(); - - const int col_blocks = - (boxes_num + THREADS_PER_BLOCK_NMS - 1) / THREADS_PER_BLOCK_NMS; - - Tensor mask = - at::empty({boxes_num, col_blocks}, boxes.options().dtype(at::kLong)); - unsigned long long *mask_data = - (unsigned long long *)mask.data_ptr(); - iou3d_nms3d_forward_impl(boxes, mask_data, boxes_num, nms_overlap_thresh); - - at::Tensor mask_cpu = mask.to(at::kCPU); - unsigned long long *mask_host = - (unsigned long long *)mask_cpu.data_ptr(); - - std::vector remv_cpu(col_blocks); - memset(&remv_cpu[0], 0, sizeof(unsigned long long) * col_blocks); - - int num_to_keep = 0; - - for (int i = 0; i < boxes_num; i++) { - int nblock = i / THREADS_PER_BLOCK_NMS; - int inblock = i % THREADS_PER_BLOCK_NMS; - - if (!(remv_cpu[nblock] & (1ULL << inblock))) { - keep_data[num_to_keep++] = i; - unsigned long long *p = &mask_host[0] + i * col_blocks; - for (int j = nblock; j < col_blocks; j++) { - remv_cpu[j] |= p[j]; - } - } - *keep_num_data = num_to_keep; - } -} - -void iou3d_nms3d_normal_forward(Tensor boxes, Tensor keep, Tensor keep_num, - float nms_overlap_thresh) { - // params boxes: (N, 7) [x, y, z, dx, dy, dz, heading] - // params keep: (N) - - CHECK_CONTIGUOUS(boxes); - CHECK_CONTIGUOUS(keep); - - int boxes_num = boxes.size(0); - int64_t *keep_data = keep.data_ptr(); - int64_t *keep_num_data = keep_num.data_ptr(); - - const int col_blocks = - (boxes_num + THREADS_PER_BLOCK_NMS - 1) / THREADS_PER_BLOCK_NMS; - - Tensor mask = - at::empty({boxes_num, col_blocks}, boxes.options().dtype(at::kLong)); - unsigned long long *mask_data = - (unsigned long long *)mask.data_ptr(); - iou3d_nms3d_normal_forward_impl(boxes, mask_data, boxes_num, - nms_overlap_thresh); - - at::Tensor mask_cpu = mask.to(at::kCPU); - unsigned long long *mask_host = - (unsigned long long *)mask_cpu.data_ptr(); - - std::vector remv_cpu(col_blocks); - memset(&remv_cpu[0], 0, sizeof(unsigned long long) * col_blocks); - int num_to_keep = 0; - - for (int i = 
0; i < boxes_num; i++) { - int nblock = i / THREADS_PER_BLOCK_NMS; - int inblock = i % THREADS_PER_BLOCK_NMS; - - if (!(remv_cpu[nblock] & (1ULL << inblock))) { - keep_data[num_to_keep++] = i; - unsigned long long *p = &mask_host[0] + i * col_blocks; - for (int j = nblock; j < col_blocks; j++) { - remv_cpu[j] |= p[j]; - } - } - } - - *keep_num_data = num_to_keep; -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/knn.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/knn.cpp deleted file mode 100644 index b4be9428..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/knn.cpp +++ /dev/null @@ -1,17 +0,0 @@ -// Modified from -// https://github.com/CVMI-Lab/PAConv/tree/main/scene_seg/lib/pointops/src/knnquery_heap - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void knn_forward_impl(int b, int n, int m, int nsample, const Tensor xyz, - const Tensor new_xyz, Tensor idx, Tensor dist2) { - DISPATCH_DEVICE_IMPL(knn_forward_impl, b, n, m, nsample, xyz, new_xyz, idx, - dist2); -} - -void knn_forward(Tensor xyz_tensor, Tensor new_xyz_tensor, Tensor idx_tensor, - Tensor dist2_tensor, int b, int n, int m, int nsample) { - knn_forward_impl(b, n, m, nsample, xyz_tensor, new_xyz_tensor, idx_tensor, - dist2_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/masked_conv2d.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/masked_conv2d.cpp deleted file mode 100644 index 59039253..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/masked_conv2d.cpp +++ /dev/null @@ -1,33 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void masked_im2col_forward_impl(const Tensor im, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor col, - const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w) { - DISPATCH_DEVICE_IMPL(masked_im2col_forward_impl, im, mask_h_idx, mask_w_idx, - col, kernel_h, kernel_w, pad_h, pad_w); -} - -void masked_col2im_forward_impl(const Tensor col, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor im, int height, - int width, int channels) { - DISPATCH_DEVICE_IMPL(masked_col2im_forward_impl, col, mask_h_idx, mask_w_idx, - im, height, width, channels); -} - -void masked_im2col_forward(const Tensor im, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor col, - const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w) { - masked_im2col_forward_impl(im, mask_h_idx, mask_w_idx, col, kernel_h, - kernel_w, pad_h, pad_w); -} - -void masked_col2im_forward(const Tensor col, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor im, int height, - int width, int channels) { - masked_col2im_forward_impl(col, mask_h_idx, mask_w_idx, im, height, width, - channels); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/min_area_polygons.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/min_area_polygons.cpp deleted file mode 100644 index 8ff996dc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/min_area_polygons.cpp +++ /dev/null @@ -1,11 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void min_area_polygons_impl(const Tensor pointsets, Tensor polygons) { - DISPATCH_DEVICE_IMPL(min_area_polygons_impl, pointsets, polygons); -} - -void min_area_polygons(const Tensor pointsets, Tensor polygons) { - min_area_polygons_impl(pointsets, polygons); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/bbox_overlaps_mlu.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/bbox_overlaps_mlu.cpp deleted file mode 100644 index 82d55559..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/bbox_overlaps_mlu.cpp +++ /dev/null @@ -1,100 +0,0 @@ -/************************************************************************* - * Copyright (C) 2021 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - *************************************************************************/ - -#include "pytorch_device_registry.hpp" -#include "pytorch_mlu_helper.hpp" - -void KernelBBoxOverlaps(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, - cnrtQueue_t queue, const cnrtDataType_t d_type, - const void *bbox1, const void *bbox2, void *ious, - const int32_t num_bbox1, const int32_t num_bbox2, - const int32_t mode, const bool aligned, - const int32_t offset); - -static void policyFunc(cnrtDim3_t *k_dim, cnrtFunctionType_t *k_type, - const int32_t batch_num_all) { - auto union_num = torch_mlu::getDeviceAttr(cnrtAttrClusterCount); - auto core_dim = torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - auto core_num = union_num * core_dim; - - // Union1 policyFunc - *k_type = CNRT_FUNC_TYPE_UNION1; - k_dim->x = core_dim; - auto need_core_num = PAD_UP(batch_num_all, core_dim); - k_dim->y = - (need_core_num < core_num) ? (need_core_num / core_dim) : union_num; - k_dim->z = 1; - - return; -} - -void BBoxOverlapsMLUKernelLauncher(const Tensor bboxes1, const Tensor bboxes2, - Tensor ious, const int32_t mode, - const bool aligned, const int32_t offset) { - // check dtype - TORCH_CHECK( - bboxes1.scalar_type() == at::kFloat || bboxes1.scalar_type() == at::kHalf, - "Data type of input should be Float or Half. 
But now input type is ", - bboxes1.scalar_type(), "."); - TORCH_CHECK(bboxes1.scalar_type() == bboxes2.scalar_type(), - "bboxes1's dtype should be the same with bboxes2's dtype."); - - // params check - TORCH_CHECK(bboxes1.dim() == 2, "bboxes1 should be a 2d tensor, got ", - bboxes1.dim(), "D"); - TORCH_CHECK(bboxes2.dim() == 2, "bboxes2 should be a 2d tensor, got ", - bboxes2.dim(), "D"); - - auto rows = bboxes1.size(0); - auto cols = bboxes2.size(0); - auto batch_num_all = rows; - - if (rows * cols == 0) { - // return if zero element - return; - } - - // calculate task dimension - cnrtDim3_t k_dim; - cnrtFunctionType_t k_type; - policyFunc(&k_dim, &k_type, batch_num_all); - - // get compute queue - cnrtQueue_t queue = torch_mlu::getCurQueue(); - - // get dtype of input - cnrtDataType_t d_type = torch_mlu::toCnrtDtype(bboxes1.dtype()); - - // get ptr of tensors - auto bboxes1_impl = torch_mlu::getMluTensorImpl(bboxes1); - auto bboxes1_ptr = bboxes1_impl->cnnlMalloc(); - auto bboxes2_impl = torch_mlu::getMluTensorImpl(bboxes2); - auto bboxes2_ptr = bboxes2_impl->cnnlMalloc(); - auto ious_impl = torch_mlu::getMluTensorImpl(ious); - auto ious_ptr = ious_impl->cnnlMalloc(); - - // launch kernel - CNLOG(INFO) << "Launch Kernel MLUUnion1BboxOverlapsKernel"; - CNLOG(INFO) << "kDim :[ " << k_dim.x << ", " << k_dim.y << ", " << k_dim.z - << " ]"; - KernelBBoxOverlaps(k_dim, k_type, queue, d_type, bboxes1_ptr, bboxes2_ptr, - ious_ptr, rows, cols, mode, aligned, offset); -} - -void bbox_overlaps_mlu(const Tensor bboxes1, const Tensor bboxes2, Tensor ious, - const int mode, const bool aligned, const int offset) { - BBoxOverlapsMLUKernelLauncher(bboxes1, bboxes2, ious, mode, aligned, offset); -} - -void bbox_overlaps_impl(const Tensor bboxes1, const Tensor bboxes2, Tensor ious, - const int mode, const bool aligned, const int offset); -REGISTER_DEVICE_IMPL(bbox_overlaps_impl, MLU, bbox_overlaps_mlu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/focal_loss_sigmoid_mlu.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/focal_loss_sigmoid_mlu.cpp deleted file mode 100644 index 9242644c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/focal_loss_sigmoid_mlu.cpp +++ /dev/null @@ -1,332 +0,0 @@ -/************************************************************************* - * Copyright (C) 2021 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- *************************************************************************/ -#include -#include - -#include "pytorch_device_registry.hpp" -#include "pytorch_mlu_helper.hpp" - -void KernelFocalLossSigmoidForward(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, - cnrtQueue_t queue, - const cnrtDataType_t d_type, - const void *input, const void *target, - const void *weight, const int32_t N, - const int32_t C, const float alpha, - const float gamma, void *output); - -void KernelFocalLossSigmoidBackward(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, - cnrtQueue_t queue, - const cnrtDataType_t d_type, - const void *input, const void *target, - const void *weight, const float gamma, - const float alpha, const int32_t dim_n, - const int32_t deal_n, const int32_t dim_c, - void *output); -// Policy Function for Forward -static void policyFuncForward(cnrtDim3_t *k_dim, cnrtFunctionType_t *k_type, - const Tensor &input, const Tensor &target, - const Tensor &weight) { - auto N = input.size(0); - auto C = input.size(1); - - const size_t nram_size = torch_mlu::getDeviceAttr(cnrtAttrNramSizePerMcore); - const size_t c_align_size = PAD_UP((C * input.itemsize()), NFU_ALIGN_SIZE); - const int split_target_num = 2; - const int split_pipeline_num = 6; - const int has_weight = weight.data_ptr() != nullptr; - const int target_data_width = target.scalar_type() == at::kLong - ? target.itemsize() / 2 - : target.itemsize(); - const int threshold_c = - PAD_DOWN((nram_size - split_target_num * sizeof(int)) / - (split_pipeline_num + has_weight), - NFU_ALIGN_SIZE) / - input.itemsize(); - - int n_seg = 1; - if (C <= threshold_c) { - int c_size = C * input.itemsize(); - int reservered_align_size = - (split_target_num + split_pipeline_num) * NFU_ALIGN_SIZE; - int wegiht_size = 0; - if (has_weight) { - c_size = c_align_size; - reservered_align_size = split_target_num * NFU_ALIGN_SIZE; - wegiht_size = c_align_size; - } - // n_seg * c_size * split_pipeline_num + n_seg * target.itemsize() * - // split_target_num - // + weight_size + reservered_align_size <= nram_size - n_seg = (nram_size - wegiht_size - reservered_align_size) / - (split_pipeline_num * c_size + split_target_num * sizeof(int32_t)); - } - auto seg_num = n_seg == 0 ? N : (N + n_seg - 1) / n_seg; - auto core_dim = torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - auto cluster_num = torch_mlu::getDeviceAttr(cnrtAttrClusterCount); - auto core_num = core_dim * cluster_num; - - k_dim->x = *k_type; - k_dim->y = - seg_num > core_num ? cluster_num : (seg_num + core_dim - 1) / core_dim; - k_dim->z = 1; -} - -// Policy Function for Backward -static void policyFuncBackward(cnrtDim3_t *k_dim, cnrtFunctionType_t *k_type) { - // set Union1 Job - *k_type = CNRT_FUNC_TYPE_UNION1; - k_dim->x = torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - k_dim->y = torch_mlu::getDeviceAttr(cnrtAttrClusterCount); - k_dim->z = 1; -} - -void SigmoidFocalLossForwardMLUKernelLauncher(Tensor input, Tensor target, - Tensor weight, Tensor output, - const float gamma, - const float alpha) { - // params check - TORCH_CHECK(gamma >= 0, "gamma should be greater than or equal to 0. ", - "But now gamma is ", gamma, "."); - - // check dtype - TORCH_CHECK( - input.scalar_type() == at::kFloat || input.scalar_type() == at::kHalf, - "Data type of input should be Float or Half. But now input type is ", - input.scalar_type(), "."); - - TORCH_CHECK( - (target.scalar_type() == at::kInt || target.scalar_type() == at::kLong), - "target type should be Int or Long. 
", "But now target type is ", - target.scalar_type(), "."); - - if (weight.data_ptr() != nullptr) { - TORCH_CHECK(weight.scalar_type() == input.scalar_type(), - "Data types of input and weight should be the same. But now " - "input type is ", - input.scalar_type(), ", weight type is ", weight.scalar_type(), - "."); - } else { - CNLOG(INFO) << "weight is a empty tensor."; - } - - // return if zero-element - if (input.numel() == 0 || target.numel() == 0 || output.numel() == 0) { - return; - } - - // calculate task dimension - cnrtDim3_t k_dim; - cnrtFunctionType_t k_type = CNRT_FUNC_TYPE_UNION1; - policyFuncForward(&k_dim, &k_type, input, target, weight); - auto core_dim = torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - // get ptr of tensors - auto input_impl = torch_mlu::getMluTensorImpl(input); - auto input_ptr = input_impl->cnnlMalloc(); - auto target_impl = torch_mlu::getMluTensorImpl(target); - auto target_ptr = target_impl->cnnlMalloc(); - auto weight_impl = torch_mlu::getMluTensorImpl(weight); - auto weight_ptr = weight_impl->cnnlMalloc(); - auto output_impl = torch_mlu::getMluTensorImpl(output); - auto output_ptr = output_impl->cnnlMalloc(); - - // get dtype of input - cnrtDataType_t d_type = torch_mlu::toCnrtDtype(input.dtype()); - - CNLOG(INFO) << "Launch Kernel KernelFocalLossSigmoidForward<<>>"; - // launch kernel - KernelFocalLossSigmoidForward(k_dim, k_type, queue, d_type, input_ptr, - target_ptr, weight_ptr, input.size(0), - input.size(1), alpha, gamma, output_ptr); -} - -void getDealNAndThresholdC(const int compute_data_bytes, - const int target_data_bytes, const int total_c, - int *deal_n_ptr, int *threshold_c_ptr, - const bool has_weight, const bool is_half) { - /* NRAM partition: - * - * |-----------------ping pong--------------------| - * |input | pt | alpha_t | temp | output | target | flt_min | gamma | weight| - * - * split_pipeline_num is 5: including input, pt, alpha_t, temp, output. - */ - const int nram_split_num = 5; - const int nram_split_pingpong = 2; - const int max_nram_size = torch_mlu::getDeviceAttr(cnrtAttrNramSizePerMcore); - int32_t compute_align_size = NFU_ALIGN_SIZE; - if (is_half) { - compute_align_size += NFU_ALIGN_SIZE; - } - const int32_t compute_align_num = compute_align_size / compute_data_bytes; - // reservered_align_size: including input(ping pong), pt(ping pong), - // alpha_t(ping pong), temp(ping pong), - // output(ping pong), target(ping pong), - // flt_min and gamma. 
- const int reservered_align_size = - ((nram_split_num + 1) * nram_split_pingpong + 2) * compute_align_size; - int nram_pingpong_size = max_nram_size - reservered_align_size; - - int compute_c = total_c; - int threshold_c = 0; - if (has_weight) { - // reserved space for weight to align - nram_pingpong_size -= NFU_ALIGN_SIZE; - - // threshold_c * nram_split_pingpong * compute_data_bytes * nram_split_num + - // nram_split_pingpong * target_data_bytes + - // threshold_c * compute_data_bytes <= nram_pingpong_size - threshold_c = - (nram_pingpong_size - nram_split_pingpong * target_data_bytes) / - (compute_data_bytes * (nram_split_num * nram_split_pingpong + 1)); - threshold_c = PAD_DOWN(threshold_c, compute_align_num); - int weight_space = PAD_UP(total_c * compute_data_bytes, NFU_ALIGN_SIZE); - - // reserved space for weight - nram_pingpong_size -= weight_space; - compute_c = PAD_UP(total_c, compute_align_num); - } else { - // threshold_c * nram_split_pingpong * compute_data_bytes * nram_split_num + - // nram_split_pingpong * target_data_bytes <= nram_pingpong_size - threshold_c = - (nram_pingpong_size / nram_split_pingpong - target_data_bytes) / - (nram_split_num * compute_data_bytes); - } - // deal_n * compute_c * nram_split_pingpong * compute_data_bytes * - // nram_split_num + deal_n * nram_split_pingpong * target_data_bytes <= - // nram_pingpong_size - *deal_n_ptr = - nram_pingpong_size / - ((nram_split_num * compute_c * compute_data_bytes + target_data_bytes) * - nram_split_pingpong); - *threshold_c_ptr = threshold_c; -} - -void SigmoidFocalLossBackwardMLUKernelLauncher(Tensor input, Tensor target, - Tensor weight, Tensor output, - const float gamma, - const float alpha) { - // params check - TORCH_CHECK(gamma >= 0, "gamma should be greater than or equal to 0. ", - "But now gamma is ", gamma, "."); - // check dtype - TORCH_CHECK( - input.scalar_type() == at::kFloat || input.scalar_type() == at::kHalf, - "Data type of input should be Float or Half. But now input type is ", - input.scalar_type(), "."); - - TORCH_CHECK( - (target.scalar_type() == at::kInt || target.scalar_type() == at::kLong), - "target type should be Int or Long. ", "But now target type is ", - target.scalar_type(), "."); - - bool has_weight = false; - if (weight.data_ptr() != nullptr) { - TORCH_CHECK(weight.scalar_type() == input.scalar_type(), - "Data types of input and weight should be the same. But now " - "input type is ", - input.scalar_type(), ", weight type is ", weight.scalar_type(), - "."); - has_weight = true; - } else { - CNLOG(INFO) << "weight is a empty tensor."; - } - - auto dim_c = input.size(1); - const int compute_data_bytes = sizeof(float); - // target supports only INT on MLU device while it keeps LONG on host side, - // so target.itemsize() / 2 - const int target_data_bytes = target.scalar_type() == at::kLong - ? (target.itemsize() / 2) - : target.itemsize(); - int deal_n = 0; - int threshold_c = 0; - bool is_half = false; - if (input.scalar_type() == at::kHalf) { - is_half = true; - } - // calculate deal_n and threshold_c - getDealNAndThresholdC(compute_data_bytes, target_data_bytes, dim_c, &deal_n, - &threshold_c, has_weight, is_half); - - // check C - TORCH_CHECK(threshold_c >= dim_c, - "input.size(1) should be in the range of [0, ", threshold_c, - "]. 
", "But now input.size(1) is ", dim_c, "."); - - if (input.numel() == 0 || target.numel() == 0 || output.numel() == 0) { - // return if zero-element - return; - } - - // set task dimension - cnrtDim3_t k_dim; - cnrtFunctionType_t k_type; - policyFuncBackward(&k_dim, &k_type); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - // get ptr of tensors - auto input_impl = torch_mlu::getMluTensorImpl(input); - auto input_ptr = input_impl->cnnlMalloc(); - auto target_impl = torch_mlu::getMluTensorImpl(target); - auto target_ptr = target_impl->cnnlMalloc(); - auto weight_impl = torch_mlu::getMluTensorImpl(weight); - auto weight_ptr = weight_impl->cnnlMalloc(); - auto output_impl = torch_mlu::getMluTensorImpl(output); - auto output_ptr = output_impl->cnnlMalloc(); - - // get dtype of input - cnrtDataType_t d_type = torch_mlu::toCnrtDtype(input.dtype()); - auto core_dim = torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - auto dim_n = input.size(0); - - CNLOG(INFO) << "Launch Kernel KernelFocalLossSigmoidBackward<<>>"; - - // launch kernel - KernelFocalLossSigmoidBackward(k_dim, k_type, queue, d_type, input_ptr, - target_ptr, weight_ptr, gamma, alpha, dim_n, - deal_n, dim_c, output_ptr); -} - -void sigmoid_focal_loss_forward_mlu(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - SigmoidFocalLossForwardMLUKernelLauncher(input, target, weight, output, gamma, - alpha); -} - -void sigmoid_focal_loss_backward_mlu(Tensor input, Tensor target, Tensor weight, - Tensor grad_input, float gamma, - float alpha) { - SigmoidFocalLossBackwardMLUKernelLauncher(input, target, weight, grad_input, - gamma, alpha); -} - -void sigmoid_focal_loss_forward_impl(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha); - -void sigmoid_focal_loss_backward_impl(Tensor input, Tensor target, - Tensor weight, Tensor grad_input, - float gamma, float alpha); - -REGISTER_DEVICE_IMPL(sigmoid_focal_loss_forward_impl, MLU, - sigmoid_focal_loss_forward_mlu); -REGISTER_DEVICE_IMPL(sigmoid_focal_loss_backward_impl, MLU, - sigmoid_focal_loss_backward_mlu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/nms_mlu.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/nms_mlu.cpp deleted file mode 100644 index 33c4f7de..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/nms_mlu.cpp +++ /dev/null @@ -1,130 +0,0 @@ -/************************************************************************* - * Copyright (C) 2021 by Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- *************************************************************************/ - -#include "pytorch_device_registry.hpp" -#include "pytorch_mlu_helper.hpp" - -void KernelNms(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const cnrtDataType_t data_type_input, const void *boxes_ptr, - const void *scores_ptr, const int input_num_boxes, - const int input_stride, const int max_output_boxes, - const float iou_threshold, const float offset, - void *workspace_ptr, void *output_size_ptr, void *output_ptr); - -int selectUnionType(uint32_t use_job, int box_num_per_core) { - // the box_num_per_core should be at least 256, otherwise the real IO - // bandwidth would be very low - while (box_num_per_core < 256 && use_job >= 4) { - box_num_per_core *= 2; - use_job /= 2; - } - return use_job; -} - -Tensor NMSMLUKernelLauncher(Tensor boxes, Tensor scores, float iou_threshold, - int offset) { - // dimension parameters check - TORCH_CHECK(boxes.dim() == 2, "boxes should be a 2d tensor, got ", - boxes.dim(), "D"); - TORCH_CHECK(boxes.size(1) == 4, - "boxes should have 4 elements in dimension 1, got ", - boxes.size(1)); - TORCH_CHECK(scores.dim() == 1, "scores should be a 1d tensor, got ", - scores.dim(), "D"); - - // data type check - TORCH_CHECK(boxes.scalar_type() == scores.scalar_type(), - "boxes should have the same type as scores"); - TORCH_CHECK( - boxes.scalar_type() == at::kFloat || boxes.scalar_type() == at::kHalf, - "data type of boxes should be Float or Half, got ", boxes.scalar_type()); - - if (boxes.numel() == 0) { - return at::empty({0}, boxes.options().dtype(at::kLong)); - } - - int input_num_boxes = boxes.size(0); - int input_stride = boxes.size(0); - int max_output_boxes = boxes.size(0); - - cnrtDataType_t data_type_input = torch_mlu::toCnrtDtype(boxes.dtype()); - cnrtDim3_t k_dim; - cnrtJobType_t k_type; - uint32_t union_number = torch_mlu::getDeviceAttr(cnrtAttrClusterCount); - uint32_t core_dim = torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - uint32_t job_limit = union_number * core_dim; - uint32_t core_number = union_number * core_dim; - int box_num_per_core = (input_num_boxes + core_number - 1) / core_number; - // initiate k_type as Union1 - k_dim.x = core_dim; - k_dim.y = 1; - k_dim.z = 1; - k_type = CNRT_FUNC_TYPE_UNION1; - int use_job = selectUnionType(job_limit, box_num_per_core); - if (use_job < 4) { - k_dim.x = 1; - k_type = CNRT_FUNC_TYPE_BLOCK; - } else if (use_job == 4) { - k_dim.x = core_dim; - k_type = CNRT_FUNC_TYPE_UNION1; - } else { - k_dim.x = use_job; - k_type = (cnrtFunctionType_t)use_job; - } - - // transpose boxes (n, 4) to (4, n) for better performance - auto boxes_t = boxes.transpose(0, 1); - auto boxes_ = torch_mlu::cnnl::ops::cnnl_contiguous(boxes_t); - auto scores_ = torch_mlu::cnnl::ops::cnnl_contiguous(scores); - auto output = at::empty({max_output_boxes}, boxes.options().dtype(at::kLong)); - auto output_size = at::empty({1}, scores.options().dtype(at::kInt)); - - // workspace - const int info_num = 5; // x1, x2, y1, y2 and score - size_t space_size = 0; - if (boxes.scalar_type() == at::kHalf) { - space_size = input_num_boxes * sizeof(int16_t) * info_num + sizeof(float); - } else { - space_size = input_num_boxes * sizeof(float) * info_num + sizeof(float); - } - auto workspace = at::empty(space_size, boxes.options().dtype(at::kByte)); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - auto boxes_impl = torch_mlu::getMluTensorImpl(boxes_); - auto boxes_ptr = boxes_impl->cnnlMalloc(); - auto scores_impl = 
torch_mlu::getMluTensorImpl(scores_); - auto scores_ptr = scores_impl->cnnlMalloc(); - auto workspace_impl = torch_mlu::getMluTensorImpl(workspace); - auto workspace_ptr = workspace_impl->cnnlMalloc(); - auto output_impl = torch_mlu::getMluTensorImpl(output); - auto output_ptr = output_impl->cnnlMalloc(); - auto output_size_impl = torch_mlu::getMluTensorImpl(output_size); - auto output_size_ptr = output_size_impl->cnnlMalloc(); - - CNLOG(INFO) << "Launch Kernel MLUUnionX NMS<<>>"; - KernelNms(k_dim, k_type, queue, data_type_input, boxes_ptr, scores_ptr, - input_num_boxes, input_stride, max_output_boxes, iou_threshold, - offset, workspace_ptr, output_size_ptr, output_ptr); - - int output_num = *static_cast(output_size.cpu().data_ptr()); - return output.slice(0, 0, output_num); -} - -Tensor nms_mlu(Tensor boxes, Tensor scores, float iou_threshold, int offset) { - return NMSMLUKernelLauncher(boxes, scores, iou_threshold, offset); -} - -Tensor nms_impl(Tensor boxes, Tensor scores, float iou_threshold, int offset); -REGISTER_DEVICE_IMPL(nms_impl, MLU, nms_mlu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/psamask_mlu.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/psamask_mlu.cpp deleted file mode 100644 index 0579da87..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/psamask_mlu.cpp +++ /dev/null @@ -1,308 +0,0 @@ -/************************************************************************* - * Copyright (C) 2022 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- *************************************************************************/ -#include - -#include "psamask_utils.hpp" -#include "pytorch_device_registry.hpp" -#include "pytorch_mlu_helper.hpp" - -#define COMPUTE_COUNT_ALIGN 64 - -void KernelPsamaskForward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const void *x, void *y, const PsamaskType psa_type, - const DimPartitionType core_partition, - const DimPartitionType cluster_partition, const int batch, - const int h_feature, const int w_feature, const int h_mask, - const int w_mask, const int x_c, const int y_c, const int half_h_mask, - const int half_w_mask, const int n_per_core, const int h_per_core, - const int n_per_cluster, const int h_per_cluster, const int limit_n_seg, - const int limit_h_seg, const int limit_w_seg); - -void KernelPsamaskBackward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const void *dy, void *dx, const PsamaskType psa_type, - const DimPartitionType core_partition, - const DimPartitionType cluster_partition, const int batch, - const int h_feature, const int w_feature, const int h_mask, - const int w_mask, const int dx_c, const int dy_c, const int half_h_mask, - const int half_w_mask, const int n_per_core, const int h_per_core, - const int n_per_cluster, const int h_per_cluster, const int limit_n_seg, - const int limit_h_seg, const int limit_w_seg); - -namespace { -void policyFunc(cnrtDim3_t *k_dim_ptr, cnrtFunctionType_t *f_type_ptr, - PartitionSeg *partition_ptr, const int n, const int h_feature) { - unsigned int core_dim = torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - unsigned int cluster_num = torch_mlu::getDeviceAttr(cnrtAttrClusterCount); - unsigned int use_cluster_num = cluster_num; - unsigned int use_core_num = core_dim; - - if (n >= cluster_num || n >= h_feature) { - partition_ptr->cluster_partition = PARTITION_N; - partition_ptr->n_per_cluster = (n + cluster_num - 1) / cluster_num; - partition_ptr->h_per_cluster = h_feature; - use_cluster_num = - (n + partition_ptr->n_per_cluster - 1) / partition_ptr->n_per_cluster; - } else { - partition_ptr->cluster_partition = PARTITION_H; - partition_ptr->h_per_cluster = (h_feature + cluster_num - 1) / cluster_num; - partition_ptr->n_per_cluster = n; - use_cluster_num = (h_feature + partition_ptr->h_per_cluster - 1) / - partition_ptr->h_per_cluster; - } - - if (partition_ptr->n_per_cluster >= core_dim || - partition_ptr->n_per_cluster >= partition_ptr->h_per_cluster) { - partition_ptr->core_partition = PARTITION_N; - partition_ptr->n_per_core = - (partition_ptr->n_per_cluster + core_dim - 1) / core_dim; - partition_ptr->h_per_core = partition_ptr->h_per_cluster; - use_core_num = - (partition_ptr->n_per_cluster + partition_ptr->n_per_core - 1) / - partition_ptr->n_per_core; - } else { - partition_ptr->core_partition = PARTITION_H; - partition_ptr->h_per_core = - (partition_ptr->h_per_cluster + core_dim - 1) / core_dim; - partition_ptr->n_per_core = partition_ptr->n_per_cluster; - use_core_num = - (partition_ptr->h_per_cluster + partition_ptr->h_per_core - 1) / - partition_ptr->h_per_core; - } - *k_dim_ptr = {core_dim, use_cluster_num, 1}; -} - -} // namespace - -bool findLimit(const int shape_core_n, const int shape_core_h, - const int shape_core_w, const int shape_core_ci, - const int shape_core_co, int *limit_n_seg_ptr, - int *limit_h_seg_ptr, int *limit_w_seg_ptr, const int psa_type) { - const bool need_temp = psa_type == 1; - const int input_bytes = sizeof(float); - int limit_n_seg = shape_core_n; - int 
limit_h_seg = shape_core_h; - int limit_w_seg = shape_core_w; - - const int max_nram_size = torch_mlu::getDeviceAttr(cnrtAttrNramSizePerMcore); - const int align_base_128 = NFU_ALIGN_SIZE / input_bytes; - const int align_base_64 = COMPUTE_COUNT_ALIGN / input_bytes; - const int align_co = CEIL_ALIGN(shape_core_co, align_base_64); - const int align_w = CEIL_ALIGN(shape_core_w, align_base_64); - const int align_hw = CEIL_ALIGN(shape_core_h * shape_core_w, align_base_64); - const int max_num = max_nram_size / input_bytes; - - int n_limit = - max_num / - (CEIL_ALIGN(shape_core_h * shape_core_w * shape_core_ci, align_base_128) + - align_hw * align_co * (1 + need_temp)); - if (n_limit > 0) { - n_limit = std::min(n_limit, shape_core_n); - limit_n_seg = n_limit; - } else { - int h_limit = - max_num / (CEIL_ALIGN(shape_core_w * shape_core_ci, align_base_128) + - align_w * align_co * (1 + need_temp)); - if (h_limit > 0) { - h_limit = std::min(h_limit, shape_core_h); - limit_h_seg = h_limit; - limit_n_seg = 1; - } else { - int w_limit = - max_num / (CEIL_ALIGN(shape_core_ci, align_base_128) + - CEIL_ALIGN(align_co, align_base_128) * (1 + need_temp)); - if (w_limit > 0 && w_limit >= (COMPUTE_COUNT_ALIGN / input_bytes)) { - w_limit = std::min(w_limit, shape_core_w); - w_limit = w_limit / (COMPUTE_COUNT_ALIGN / input_bytes) * - (COMPUTE_COUNT_ALIGN / input_bytes); - limit_w_seg = w_limit; - limit_h_seg = 1; - limit_n_seg = 1; - } else { - CNLOG(INFO) << "The size of input channel is too large."; - return false; - } - } - } - *limit_n_seg_ptr = limit_n_seg; - *limit_h_seg_ptr = limit_h_seg; - *limit_w_seg_ptr = limit_w_seg; - return true; -} - -void PSAMaskForwardMLUKernelLauncher(const int psa_type, const Tensor x, - Tensor y, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, - const int half_w_mask) { - // params check - TORCH_CHECK(x.scalar_type() == at::kFloat, "x type should be Float, got ", - x.scalar_type()); - TORCH_CHECK(y.scalar_type() == x.scalar_type(), - "y should have the same type as x"); - TORCH_CHECK(x.dim() == 4, "x should be a 4d tensor, got ", x.dim(), "D"); - TORCH_CHECK(y.dim() == 4, "y should be a 4d tensor, got ", y.dim(), "D"); - - int x_c = x.size(1); - int y_c = y.size(1); - TORCH_CHECK(h_mask * w_mask == x_c, - "channel of x should be the same as h_mask * w_mask"); - TORCH_CHECK(h_feature * w_feature == y_c, - "channel of y should be the same as h_feature * w_feature"); - TORCH_CHECK(psa_type == 0 || psa_type == 1, - "psa_type only suppurts 'COLLECT' and 'DISTRIBUTE' currently"); - - if (x.numel() == 0) { - CNLOG(INFO) << "skip zero-element tensor"; - return; - } - - cnrtFunctionType_t k_type = CNRT_FUNC_TYPE_UNION1; - cnrtDim3_t k_dim; - PartitionSeg partition_info; - policyFunc(&k_dim, &k_type, &partition_info, num_, h_feature); - int n_limit_seg, h_limit_seg, w_limit_seg; - bool ret = - findLimit(partition_info.n_per_core, partition_info.h_per_core, w_feature, - x_c, y_c, &n_limit_seg, &h_limit_seg, &w_limit_seg, psa_type); - if (ret != true) { - return; - } - - auto memory_format = - torch_mlu::cnnl::ops::get_channels_last_memory_format(x.dim()); - auto x_tensor = torch_mlu::cnnl::ops::cnnl_contiguous(x, memory_format); - at::Tensor y_tmp = - at::empty({num_, y_c, h_feature, w_feature}, x.options(), memory_format); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - // get ptr of tensors - auto x_impl = torch_mlu::getMluTensorImpl(x_tensor); - auto x_ptr = x_impl->cnnlMalloc(); - auto 
y_impl = torch_mlu::getMluTensorImpl(y_tmp); - auto y_ptr = y_impl->cnnlMalloc(); - - KernelPsamaskForward( - k_dim, k_type, queue, x_ptr, y_ptr, (PsamaskType)psa_type, - partition_info.core_partition, partition_info.cluster_partition, num_, - h_feature, w_feature, h_mask, w_mask, x_c, y_c, half_h_mask, half_w_mask, - partition_info.n_per_core, partition_info.h_per_core, - partition_info.n_per_cluster, partition_info.h_per_cluster, n_limit_seg, - h_limit_seg, w_limit_seg); - - y.copy_(y_tmp); -} - -void PSAMaskBackwardMLUKernelLauncher(const int psa_type, const Tensor dy, - Tensor dx, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, - const int half_w_mask) { - // params check - TORCH_CHECK(dy.scalar_type() == at::kFloat, "dy type should be Float, got ", - dy.scalar_type()); - TORCH_CHECK(dx.scalar_type() == dy.scalar_type(), - "dx should have the same type as dy"); - TORCH_CHECK(dy.dim() == 4, "dy should be a 4d tensor, got ", dy.dim(), "D"); - TORCH_CHECK(dx.dim() == 4, "dx should be a 4d tensor, got ", dx.dim(), "D"); - - int dy_c = dy.size(1); - int dx_c = dx.size(1); - TORCH_CHECK(h_feature * w_feature == dy_c, - "channel of dy should be the same as h_feature * w_feature"); - TORCH_CHECK(h_mask * w_mask == dx_c, - "channel of dx should be the same as h_mask * w_mask"); - TORCH_CHECK(psa_type == 0 || psa_type == 1, - "psa_type only suppurts 'COLLECT' and 'DISTRIBUTE' currently"); - - if (dx.numel() == 0) { - CNLOG(INFO) << "skip zero-element tensor"; - return; - } - - cnrtFunctionType_t k_type = CNRT_FUNC_TYPE_UNION1; - cnrtDim3_t k_dim; - PartitionSeg partition_info; - policyFunc(&k_dim, &k_type, &partition_info, num_, h_feature); - int n_limit_seg, h_limit_seg, w_limit_seg; - bool ret = - findLimit(partition_info.n_per_core, partition_info.h_per_core, w_feature, - dx_c, dy_c, &n_limit_seg, &h_limit_seg, &w_limit_seg, psa_type); - if (ret != true) { - return; - } - - auto memory_format = - torch_mlu::cnnl::ops::get_channels_last_memory_format(dy.dim()); - auto dy_tensor = torch_mlu::cnnl::ops::cnnl_contiguous(dy, memory_format); - at::Tensor dx_tmp = at::empty({num_, dx_c, h_feature, w_feature}, - dy.options(), memory_format); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - // get ptr of tensors - auto dx_impl = torch_mlu::getMluTensorImpl(dx_tmp); - auto dx_ptr = dx_impl->cnnlMalloc(); - auto dy_impl = torch_mlu::getMluTensorImpl(dy_tensor); - auto dy_ptr = dy_impl->cnnlMalloc(); - - KernelPsamaskBackward( - k_dim, k_type, queue, dy_ptr, dx_ptr, (PsamaskType)psa_type, - partition_info.core_partition, partition_info.cluster_partition, num_, - h_feature, w_feature, h_mask, w_mask, dx_c, dy_c, half_h_mask, - half_w_mask, partition_info.n_per_core, partition_info.h_per_core, - partition_info.n_per_cluster, partition_info.h_per_cluster, n_limit_seg, - h_limit_seg, w_limit_seg); - - dx.copy_(dx_tmp); -} - -void psamask_forward_mlu(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask) { - PSAMaskForwardMLUKernelLauncher(psa_type, input, output, num_, h_feature, - w_feature, h_mask, w_mask, half_h_mask, - half_w_mask); -} - -void psamask_backward_mlu(const int psa_type, const Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int 
half_w_mask) { - PSAMaskBackwardMLUKernelLauncher(psa_type, grad_output, grad_input, num_, - h_feature, w_feature, h_mask, w_mask, - half_h_mask, half_w_mask); -} - -void psamask_forward_impl(const int psa_type, const Tensor input, Tensor output, - const int num_, const int h_feature, - const int w_feature, const int h_mask, - const int w_mask, const int half_h_mask, - const int half_w_mask); - -void psamask_backward_impl(const int psa_type, const Tensor grad_output, - Tensor grad_input, const int num_, - const int h_feature, const int w_feature, - const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask); - -REGISTER_DEVICE_IMPL(psamask_forward_impl, MLU, psamask_forward_mlu); -REGISTER_DEVICE_IMPL(psamask_backward_impl, MLU, psamask_backward_mlu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/roi_align_mlu.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/roi_align_mlu.cpp deleted file mode 100644 index 077dbfc5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/roi_align_mlu.cpp +++ /dev/null @@ -1,206 +0,0 @@ -/************************************************************************* - * Copyright (C) 2021 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - *************************************************************************/ -#include "pytorch_device_registry.hpp" -#include "pytorch_mlu_helper.hpp" - -void KernelRoiAlign(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, - cnrtQueue_t queue, const cnrtDataType_t d_type, - const void *input, const void *rois, const int channels, - const bool aligned, const int pooled_height, - const int pooled_width, const int input_height, - const int input_width, const int sampling_ratio, - const float spatial_scale, const int num_rois, - void *output); - -void KernelRoiAlignBackward(cnrtDim3_t k_dim, cnrtFunctionType_t k_type, - cnrtQueue_t queue, const cnrtDataType_t dtype, - const void *grads, const void *boxes, - void *grads_image, const int boxes_num, - const int hi, const int wi, const int c, - const int no, const int ho, const int wo, - const float spatial_scale, const int sampling_ratio, - const bool aligned); - -void ROIAlignForwardMLUKernelLauncher(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - // params check - TORCH_CHECK( - input.scalar_type() == at::kFloat || input.scalar_type() == at::kHalf, - "input type should be Float or Half, got ", input.scalar_type()); - TORCH_CHECK(rois.scalar_type() == input.scalar_type(), - "rois should have the same type as input"); - TORCH_CHECK(input.dim() == 4, "input should be a 4d tensor, got ", - input.dim(), "D"); - TORCH_CHECK(rois.dim() == 2, "rois should be a 2d tensor, got ", rois.dim(), - "D"); - TORCH_CHECK(pool_mode == 1, "pool_mode only suppurts 'avg' currently"); - - auto memory_format = - torch_mlu::cnnl::ops::get_channels_last_memory_format(input.dim()); - auto 
input_tensor = - torch_mlu::cnnl::ops::cnnl_contiguous(input, memory_format); - - auto num_rois = rois.size(0); - auto channels = input.size(1); - int height = input.size(2); - int width = input.size(3); - - if (output.numel() == 0) { - output = at::zeros({num_rois, channels, aligned_height, aligned_width}, - input.options()); - return; - } - - at::Tensor output_tmp = - at::empty({num_rois, channels, aligned_height, aligned_width}, - input.options(), memory_format); - - // get tensor impl - auto self_impl = torch_mlu::getMluTensorImpl(input_tensor); - auto rois_impl = torch_mlu::getMluTensorImpl(rois); - auto output_impl = torch_mlu::getMluTensorImpl(output_tmp); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - // get the mlu ptr - auto self_ptr = self_impl->cnnlMalloc(); - auto rois_ptr = rois_impl->cnnlMalloc(); - auto output_ptr = output_impl->cnnlMalloc(); - - cnrtJobType_t k_type = CNRT_FUNC_TYPE_UNION1; - cnrtDim3_t k_dim; - k_dim.x = torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - k_dim.y = torch_mlu::getDeviceAttr(cnrtAttrClusterCount); - k_dim.z = 1; - cnrtDataType_t data_type = torch_mlu::toCnrtDtype(input.dtype()); - - KernelRoiAlign(k_dim, k_type, queue, data_type, self_ptr, rois_ptr, channels, - aligned, aligned_height, aligned_width, height, width, - sampling_ratio, spatial_scale, num_rois, output_ptr); - - output.copy_(output_tmp); -} - -static int nearestPower2(int x) { - x--; - x |= x >> 1; - x |= x >> 2; - x |= x >> 4; - x |= x >> 8; - x |= x >> 16; - x++; - return x; -} - -void ROIAlignBackwardMLUKernelLauncher(Tensor grad, Tensor rois, - Tensor argmax_y, Tensor argmax_x, - Tensor grad_input, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, - bool aligned) { - // params check - TORCH_CHECK( - grad.scalar_type() == at::kFloat || grad.scalar_type() == at::kHalf, - "grad type should be Float or Half, got ", grad.scalar_type()); - TORCH_CHECK(rois.scalar_type() == grad.scalar_type(), - "rois should have the same type as grad"); - TORCH_CHECK(grad.dim() == 4, "grad should be a 4d tensor, got ", grad.dim(), - "D"); - TORCH_CHECK(rois.dim() == 2, "rois should be a 2d tensor, got ", rois.dim(), - "D"); - TORCH_CHECK(pool_mode == 1, "pool_mode only suppurts 'avg' currently"); - - int batch_size = grad_input.size(0); - int channels = grad_input.size(1); - int height = grad_input.size(2); - int width = grad_input.size(3); - auto memory_format = - torch_mlu::cnnl::ops::get_channels_last_memory_format(grad.dim()); - auto grad_ = torch_mlu::cnnl::ops::cnnl_contiguous(grad, memory_format); - auto grad_input_ = at::empty({batch_size, channels, height, width}, - grad.options(), memory_format) - .zero_(); - - int boxes_num = rois.size(0); - int hi = grad.size(2); - int wi = grad.size(3); - int c = grad.size(1); - - int no = grad_input.size(0); - int ho = grad_input.size(2); - int wo = grad_input.size(3); - - // get tensor impl - auto grad_impl = torch_mlu::getMluTensorImpl(grad_); - auto grad_input_impl = torch_mlu::getMluTensorImpl(grad_input_); - auto rois_impl = torch_mlu::getMluTensorImpl(rois); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - // get the mlu ptr - auto grad_ptr = grad_impl->cnnlMalloc(); - auto rois_ptr = rois_impl->cnnlMalloc(); - auto grad_input_ptr = grad_input_impl->cnnlMalloc(); - - cnrtJobType_t k_type = CNRT_FUNC_TYPE_UNION1; - int need_core = nearestPower2(boxes_num); - int union_number = torch_mlu::getDeviceAttr(cnrtAttrClusterCount); - uint32_t dim_x = 
torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - uint32_t dim_y = (need_core - 1) / dim_x + 1; - dim_y = (dim_y > union_number) ? union_number : dim_y; - cnrtDim3_t k_dim = {dim_x, dim_y, 1}; - cnrtDataType_t k_dtype = torch_mlu::toCnrtDtype(grad.dtype()); - - KernelRoiAlignBackward(k_dim, k_type, queue, k_dtype, grad_ptr, rois_ptr, - grad_input_ptr, boxes_num, hi, wi, c, no, ho, wo, - spatial_scale, sampling_ratio, aligned); - grad_input.copy_(grad_input_); -} - -void roi_align_forward_mlu(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned) { - ROIAlignForwardMLUKernelLauncher(input, rois, output, argmax_y, argmax_x, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} - -void roi_align_backward_mlu(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - ROIAlignBackwardMLUKernelLauncher( - grad_output, rois, argmax_y, argmax_x, grad_input, aligned_height, - aligned_width, spatial_scale, sampling_ratio, pool_mode, aligned); -} - -void roi_align_forward_impl(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -void roi_align_backward_impl(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned); - -REGISTER_DEVICE_IMPL(roi_align_forward_impl, MLU, roi_align_forward_mlu); -REGISTER_DEVICE_IMPL(roi_align_backward_impl, MLU, roi_align_backward_mlu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/roi_align_rotated_mlu.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/roi_align_rotated_mlu.cpp deleted file mode 100644 index 255aefdd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/roi_align_rotated_mlu.cpp +++ /dev/null @@ -1,232 +0,0 @@ -/************************************************************************* - * Copyright (C) 2022 by Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - *************************************************************************/ -#include "pytorch_device_registry.hpp" -#include "pytorch_mlu_helper.hpp" -#include "roi_align_rotated_utils.hpp" - -namespace { - -void policyFunc(int bin_num, cnrtDim3_t *k_dim, cnrtFunctionType_t *k_type) { - unsigned int core_num = torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - unsigned int cluster_num = torch_mlu::getDeviceAttr(cnrtAttrClusterCount); - *k_type = CNRT_FUNC_TYPE_UNION1; - k_dim->x = core_num; - unsigned int use_cluster = (bin_num + core_num - 1) / core_num; - k_dim->y = use_cluster > cluster_num ? 
cluster_num : use_cluster; - k_dim->z = 1; -} - -} // namespace - -void KernelRoiAlignRotatedForward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const cnrtDataType_t d_type, const void *features, const void *rois, - void *output, const int batch, const int height, const int width, - const int channel, const int rois_num, - const RoiAlignRotatedParams roiAlignRotatedParams); - -void KernelRoiAlignRotatedBackward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const cnrtDataType_t d_type, const void *top_grad, const void *rois, - void *bottom_grad, const int batch, const int height, const int width, - const int channel, const int rois_num, - const RoiAlignRotatedParams roiAlignRotatedParams); - -void ROIAlignRotatedForwardMLUKernelLauncher(Tensor input, Tensor rois, - Tensor output, int pooled_height, - int pooled_width, - float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise) { - TORCH_CHECK(((input.scalar_type() == output.scalar_type()) && - (output.scalar_type() == rois.scalar_type())), - "data types of input, rois and output should be the same, ", - "but now input type is ", input.scalar_type(), ", rois type is ", - rois.scalar_type(), ", output type is ", output.scalar_type(), - "."); - TORCH_CHECK( - (input.scalar_type() == at::kFloat || input.scalar_type() == at::kHalf), - "input type should be Float or Half, got ", input.scalar_type(), "."); - - TORCH_CHECK(input.dim() == 4, "input should be a 4d tensor, got ", - input.dim(), "D."); - TORCH_CHECK(rois.dim() == 2, "rois should be a 2d tensor, got ", rois.dim(), - "D."); - TORCH_CHECK(output.dim() == 4, "output should be a 4d tensor, got ", - output.dim(), "D."); - - TORCH_CHECK((rois.size(0) == output.size(0)), - "the 1st dimensions of rois and output should be the same, ", - "but now the 1st dimension of rois is ", rois.size(0), - ", and output is ", output.size(0), "."); - - TORCH_CHECK((input.size(1) == output.size(1)), - "the 2nd dimensions of input and output should be the same, ", - "but now the 2nd dimension of input is ", input.size(1), - ", and output is ", output.size(1), "."); - - int channel = input.size(1); - int width = input.size(3); - int height = input.size(2); - int batch = input.size(0); - int rois_nums = rois.size(0); - cnrtDataType_t d_type = torch_mlu::toCnrtDtype(input.dtype()); - - // return if zero-elements - if (input.numel() == 0) { - CNLOG(INFO) << "Skip the zero-elements case."; - return; - } - - RoiAlignRotatedParams roiAlignRotatedParams{pooled_height, pooled_width, - sampling_ratio, spatial_scale, - aligned, clockwise}; - cnrtDim3_t k_dim; - cnrtFunctionType_t k_type; - policyFunc(rois_nums * pooled_height * pooled_width, &k_dim, &k_type); - - auto memory_format = - torch_mlu::cnnl::ops::get_channels_last_memory_format(input.dim()); - auto input_tensor = - torch_mlu::cnnl::ops::cnnl_contiguous(input, memory_format); - at::Tensor output_tmp = - at::empty({batch, channel, pooled_height, pooled_width}, input.options(), - memory_format); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - // get ptr of tensors - auto input_impl = torch_mlu::getMluTensorImpl(input_tensor); - auto input_ptr = input_impl->cnnlMalloc(); - auto rois_impl = torch_mlu::getMluTensorImpl(rois); - auto rois_ptr = rois_impl->cnnlMalloc(); - auto output_impl = torch_mlu::getMluTensorImpl(output_tmp); - auto output_ptr = output_impl->cnnlMalloc(); - - KernelRoiAlignRotatedForward(k_dim, k_type, queue, d_type, input_ptr, - rois_ptr, output_ptr, batch, 
height, width, - channel, rois_nums, roiAlignRotatedParams); - output.copy_(output_tmp); -} - -void ROIAlignRotatedBackwardMLUKernelLauncher( - Tensor top_grad, Tensor rois, Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, int sampling_ratio, bool aligned, - bool clockwise) { - TORCH_CHECK(((top_grad.scalar_type() == bottom_grad.scalar_type()) && - (bottom_grad.scalar_type() == rois.scalar_type())), - "data types of top_grad, rois and bottom_grad should be ", - "the same, but now top_grad type is ", top_grad.scalar_type(), - ", rois type is ", rois.scalar_type(), ", bottom_grad type is ", - bottom_grad.scalar_type(), "."); - TORCH_CHECK((bottom_grad.scalar_type() == at::kFloat || - bottom_grad.scalar_type() == at::kHalf), - "Data type of bottom_grad should be Float ro Half, got ", - bottom_grad.scalar_type(), "."); - - TORCH_CHECK(bottom_grad.dim() == 4, "bottom_grad should be a 4d tensor, got ", - top_grad.dim(), "D."); - TORCH_CHECK(rois.dim() == 2, "rois should be a 2d tensor, got ", rois.dim(), - "D."); - TORCH_CHECK(top_grad.dim() == 4, "top_grad should be a 4d tensor, got ", - bottom_grad.dim(), "D."); - - TORCH_CHECK((rois.size(0) == top_grad.size(0)), - "the 1st dimensions of rois and top_grad should be the same, ", - "but now the 1st dimension of rois is ", rois.size(0), - ", and top_grad is ", top_grad.size(0), "."); - - TORCH_CHECK((bottom_grad.size(1) == top_grad.size(1)), - "the 2nd dimensions of bottom_grad and top_grad should be ", - "the same, but now the 2nd dimension of bottom_grad is ", - bottom_grad.size(1), ", and top_grad is ", top_grad.size(1), "."); - - int channel = bottom_grad.size(1); - int width = bottom_grad.size(3); - int height = bottom_grad.size(2); - int batch = bottom_grad.size(0); - int rois_nums = rois.size(0); - cnrtDataType_t d_type = torch_mlu::toCnrtDtype(bottom_grad.dtype()); - - // return if zero-elements - if (bottom_grad.numel() == 0) { - CNLOG(INFO) << "Skip the zero-elements case."; - return; - } - - RoiAlignRotatedParams roiAlignRotatedParams{pooled_height, pooled_width, - sampling_ratio, spatial_scale, - aligned, clockwise}; - cnrtDim3_t k_dim; - cnrtFunctionType_t k_type; - policyFunc(rois_nums * pooled_height * pooled_width, &k_dim, &k_type); - - auto memory_format = - torch_mlu::cnnl::ops::get_channels_last_memory_format(top_grad.dim()); - auto top_grad_tensor = - torch_mlu::cnnl::ops::cnnl_contiguous(top_grad, memory_format); - at::Tensor bottom_grad_tmp = at::empty({batch, channel, height, width}, - top_grad.options(), memory_format) - .zero_(); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - // get ptr of tensors - auto bottom_grad_impl = torch_mlu::getMluTensorImpl(bottom_grad_tmp); - auto bottom_grad_ptr = bottom_grad_impl->cnnlMalloc(); - auto rois_impl = torch_mlu::getMluTensorImpl(rois); - auto rois_ptr = rois_impl->cnnlMalloc(); - auto top_grad_impl = torch_mlu::getMluTensorImpl(top_grad_tensor); - auto top_grad_ptr = top_grad_impl->cnnlMalloc(); - - KernelRoiAlignRotatedBackward(k_dim, k_type, queue, d_type, top_grad_ptr, - rois_ptr, bottom_grad_ptr, batch, height, width, - channel, rois_nums, roiAlignRotatedParams); - bottom_grad.copy_(bottom_grad_tmp); -} - -void roi_align_rotated_forward_mlu(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise) { - ROIAlignRotatedForwardMLUKernelLauncher(input, rois, output, aligned_height, - aligned_width, spatial_scale, - sampling_ratio, 
aligned, clockwise); -} - -void roi_align_rotated_backward_mlu(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise) { - ROIAlignRotatedBackwardMLUKernelLauncher( - top_grad, rois, bottom_grad, aligned_height, aligned_width, spatial_scale, - sampling_ratio, aligned, clockwise); -} - -void roi_align_rotated_forward_impl(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise); - -void roi_align_rotated_backward_impl(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise); - -REGISTER_DEVICE_IMPL(roi_align_rotated_forward_impl, MLU, - roi_align_rotated_forward_mlu); -REGISTER_DEVICE_IMPL(roi_align_rotated_backward_impl, MLU, - roi_align_rotated_backward_mlu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/tin_shift_mlu.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/tin_shift_mlu.cpp deleted file mode 100644 index 72833079..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/mlu/tin_shift_mlu.cpp +++ /dev/null @@ -1,203 +0,0 @@ -/************************************************************************* - * Copyright (C) 2022 Cambricon. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- *************************************************************************/ -#include "pytorch_device_registry.hpp" -#include "pytorch_mlu_helper.hpp" - -void KernelTinShiftForward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const void *input, const void *shifts, void *output, const int batch_size, - const int time_size, const int channel_size, const int hw_size, - const int group_size, const int group_channel, - const cnrtDataType_t data_dtype, const int channel_per_core, - const int max_number_hw_per_core, const int max_length_per_core); - -void KernelTinShiftBackward( - cnrtDim3_t k_dim, cnrtFunctionType_t k_type, cnrtQueue_t queue, - const void *grad_output, const void *shifts, void *grad_input, - const int batch_size, const int time_size, const int channel_size, - const int hw_size, const int group_size, const int group_channel, - const cnrtDataType_t data_dtype, const int channel_per_core, - const int max_number_hw_per_core, const int max_length_per_core); - -// policy function -static void policyFunc(const Tensor &input, cnrtDim3_t *k_dim, - cnrtFunctionType_t *k_type, int *channel_per_core, - int *max_number_hw_per_core, int *max_length_per_core) { - const int32_t cluster_limit = torch_mlu::getDeviceAttr(cnrtAttrClusterCount); - const int32_t core_limit = torch_mlu::getDeviceAttr(cnrtAttrMcorePerCluster); - auto nram_size = torch_mlu::getDeviceAttr(cnrtAttrNramSizePerMcore); - const int core_num = core_limit * cluster_limit; - const int batch_size = input.size(0); - const int time_size = input.size(1); - const int channel_size = input.size(2); - const int hw_size = input.size(3); - - const size_t size_per_channel = time_size * hw_size * input.itemsize(); - *channel_per_core = nram_size / size_per_channel; - int task_dim = 0; - if (*channel_per_core == 0) { - const size_t size_per_hw = hw_size * input.itemsize(); - *max_number_hw_per_core = nram_size / size_per_hw; - if (*max_number_hw_per_core <= 0) { - *max_length_per_core = nram_size / input.itemsize(); - } - int tmp_max_number_hw_per_core = - *max_number_hw_per_core > 0 ? *max_number_hw_per_core : 1; - const int loop_time = - (time_size / (tmp_max_number_hw_per_core)) + - ((time_size % (tmp_max_number_hw_per_core)) > 0 ? 1 : 0); - task_dim = batch_size * channel_size * loop_time < core_num - ? batch_size * channel_size * loop_time - : core_num; - } else { - task_dim = batch_size * channel_size < core_num ? batch_size * channel_size - : core_num; - } - - k_dim->x = core_limit; - k_dim->y = (task_dim / core_limit) > 0 ? 
(task_dim / core_limit) : 1; - k_dim->z = 1; - *k_type = CNRT_FUNC_TYPE_UNION1; -} - -void TINShiftForwardMLUKernelLauncher(Tensor input, Tensor shift, - Tensor output) { - // params check - TORCH_CHECK( - input.scalar_type() == at::kFloat || input.scalar_type() == at::kHalf, - "input type should be Float or Half, got ", input.scalar_type(), "."); - TORCH_CHECK(input.dim() == 4, "input should be a 4d tensor, got ", - input.dim(), "d."); - TORCH_CHECK(shift.dim() == 2, "shift should be a 2d tensor, got ", - shift.dim(), "d."); - TORCH_CHECK( - input.size(0) == shift.size(0), - "input batch size should be the same as shift's, input batch size is ", - input.size(0), " and shift batch size is ", shift.size(0), "."); - TORCH_CHECK(input.size(0) != 0, "Input batch size should not be zero."); - TORCH_CHECK(input.size(3) != 0, - "The last dim size of input should not be zero."); - if (input.size(1) == 0) { - return; - } - cnrtDim3_t k_dim; - cnrtFunctionType_t k_type; - int channel_per_core = 0; - int max_number_hw_per_core = 0; - int max_length_per_core = 0; - policyFunc(input, &k_dim, &k_type, &channel_per_core, &max_number_hw_per_core, - &max_length_per_core); - - const int batch_size = input.size(0); - const int time_size = input.size(1); - const int channel_size = input.size(2); - const int hw_size = input.size(3); - const int group_size = shift.size(1); - int group_channel = channel_size / group_size; - - // get tensor impl - auto input_impl = torch_mlu::getMluTensorImpl(input); - auto shift_impl = torch_mlu::getMluTensorImpl(shift); - auto output_impl = torch_mlu::getMluTensorImpl(output); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - // get the mlu ptr - auto input_ptr = input_impl->cnnlMalloc(); - auto shift_ptr = shift_impl->cnnlMalloc(); - auto output_ptr = output_impl->cnnlMalloc(); - - cnrtDataType_t data_dtype = torch_mlu::toCnrtDtype(input.dtype()); - - KernelTinShiftForward(k_dim, k_type, queue, input_ptr, shift_ptr, output_ptr, - batch_size, time_size, channel_size, hw_size, - group_size, group_channel, data_dtype, channel_per_core, - max_number_hw_per_core, max_length_per_core); -} - -void TINShiftBackwardMLUKernelLauncher(Tensor grad_output, Tensor shift, - Tensor grad_input) { - // params check - TORCH_CHECK(grad_output.scalar_type() == at::kFloat || - grad_output.scalar_type() == at::kHalf, - "grad_output type should be Float or Half, got ", - grad_output.scalar_type(), "."); - TORCH_CHECK(grad_output.dim() == 4, "grad_output should be a 4d tensor, got ", - grad_output.dim(), "d."); - TORCH_CHECK(shift.dim() == 2, "shift should be a 2d tensor, got ", - shift.dim(), "d."); - TORCH_CHECK(grad_output.size(0) == shift.size(0), - "grad_output batch size should be the same as shift's, " - "grad_output batch size is ", - grad_output.size(0), ", shift batch size is ", shift.size(0), - "."); - TORCH_CHECK(grad_output.size(0) != 0, - "grad_output batch size should not be zero."); - TORCH_CHECK(grad_output.size(3) != 0, - "The last dim size of grad_output should not be zero."); - if (grad_output.size(1) == 0) { - return; - } - cnrtDim3_t k_dim; - cnrtFunctionType_t k_type; - int channel_per_core = 0; - int max_number_hw_per_core = 0; - int max_length_per_core = 0; - policyFunc(grad_output, &k_dim, &k_type, &channel_per_core, - &max_number_hw_per_core, &max_length_per_core); - - const int batch_size = grad_output.size(0); - const int time_size = grad_output.size(1); - const int channel_size = grad_output.size(2); - const int hw_size = grad_output.size(3); - const 
int group_size = shift.size(1); - int group_channel = channel_size / group_size; - - // get tensor impl - auto grad_output_impl = torch_mlu::getMluTensorImpl(grad_output); - auto shift_impl = torch_mlu::getMluTensorImpl(shift); - auto grad_input_impl = torch_mlu::getMluTensorImpl(grad_input); - - // get compute queue - auto queue = torch_mlu::getCurQueue(); - - // get the mlu ptr - auto grad_output_ptr = grad_output_impl->cnnlMalloc(); - auto shift_ptr = shift_impl->cnnlMalloc(); - auto grad_input_ptr = grad_input_impl->cnnlMalloc(); - - cnrtDataType_t data_dtype = torch_mlu::toCnrtDtype(grad_output.dtype()); - - KernelTinShiftBackward(k_dim, k_type, queue, grad_output_ptr, shift_ptr, - grad_input_ptr, batch_size, time_size, channel_size, - hw_size, group_size, group_channel, data_dtype, - channel_per_core, max_number_hw_per_core, - max_length_per_core); -} - -void tin_shift_forward_mlu(Tensor input, Tensor shift, Tensor output) { - TINShiftForwardMLUKernelLauncher(input, shift, output); -} - -void tin_shift_backward_mlu(Tensor grad_output, Tensor shift, - Tensor grad_input) { - TINShiftBackwardMLUKernelLauncher(grad_output, shift, grad_input); -} - -void tin_shift_forward_impl(Tensor input, Tensor shift, Tensor output); - -void tin_shift_backward_impl(Tensor grad_output, Tensor shift, - Tensor grad_input); - -REGISTER_DEVICE_IMPL(tin_shift_forward_impl, MLU, tin_shift_forward_mlu); -REGISTER_DEVICE_IMPL(tin_shift_backward_impl, MLU, tin_shift_backward_mlu); diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/modulated_deform_conv.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/modulated_deform_conv.cpp deleted file mode 100644 index 12b538a0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/modulated_deform_conv.cpp +++ /dev/null @@ -1,237 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void modulated_deformable_im2col_impl( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col) { - DISPATCH_DEVICE_IMPL(modulated_deformable_im2col_impl, data_im, data_offset, - data_mask, batch_size, channels, height_im, width_im, - height_col, width_col, kernel_h, kernel_w, pad_h, pad_w, - stride_h, stride_w, dilation_h, dilation_w, - deformable_group, data_col); -} - -void modulated_deformable_col2im_impl( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im) { - DISPATCH_DEVICE_IMPL(modulated_deformable_col2im_impl, data_col, data_offset, - data_mask, batch_size, channels, height_im, width_im, - height_col, width_col, kernel_h, kernel_w, pad_h, pad_w, - stride_h, stride_w, dilation_h, dilation_w, - deformable_group, grad_im); -} - -void modulated_deformable_col2im_coord_impl( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask) { - DISPATCH_DEVICE_IMPL(modulated_deformable_col2im_coord_impl, data_col, - data_im, data_offset, data_mask, batch_size, channels, - height_im, width_im, height_col, width_col, kernel_h, - kernel_w, pad_h, pad_w, stride_h, stride_w, dilation_h, - dilation_w, deformable_group, grad_offset, grad_mask); -} - -void modulated_deform_conv_forward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor output, Tensor columns, int kernel_h, int kernel_w, - const int stride_h, const int stride_w, const int pad_h, const int pad_w, - const int dilation_h, const int dilation_w, const int group, - const int deformable_group, const bool with_bias) { - at::DeviceGuard guard(input.device()); - - const int batch = input.size(0); - const int channels = input.size(1); - const int height = input.size(2); - const int width = input.size(3); - - const int channels_out = weight.size(0); - const int channels_kernel = weight.size(1); - const int kernel_h_ = weight.size(2); - const int kernel_w_ = weight.size(3); - - if (kernel_h_ != kernel_h || kernel_w_ != kernel_w) - AT_ERROR("Input shape and kernel shape won't match: (%d x %d vs %d x %d).", - kernel_h_, kernel_w, kernel_h_, kernel_w_); - if (channels != channels_kernel * group) - AT_ERROR("Input shape and kernel channels won't match: (%d vs %d).", - channels, channels_kernel * group); - - const int height_out = - (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; - const int 
width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < height_out * width_out) { - // Resize plane and fill with ones... - ones = at::ones({height_out, width_out}, input.options()); - } - - // resize output - output = output.view({batch, channels_out, height_out, width_out}).zero_(); - // resize temporary columns - columns = - at::zeros({channels * kernel_h * kernel_w, 1 * height_out * width_out}, - input.options()); - - output = output.view({output.size(0), group, output.size(1) / group, - output.size(2), output.size(3)}); - - for (int b = 0; b < batch; b++) { - modulated_deformable_im2col_impl( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); - - // divide into group - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - - for (int g = 0; g < group; g++) { - output[b][g] = output[b][g] - .flatten(1) - .addmm_(weight[g].flatten(1), columns[g]) - .view_as(output[b][g]); - } - - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - } - - output = output.view({output.size(0), output.size(1) * output.size(2), - output.size(3), output.size(4)}); - - if (with_bias) { - output += bias.view({1, bias.size(0), 1, 1}); - } -} - -void modulated_deform_conv_backward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor columns, Tensor grad_input, Tensor grad_weight, - Tensor grad_bias, Tensor grad_offset, Tensor grad_mask, Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias) { - at::DeviceGuard guard(input.device()); - - const int batch = input.size(0); - const int channels = input.size(1); - const int height = input.size(2); - const int width = input.size(3); - - const int channels_kernel = weight.size(1); - const int kernel_h_ = weight.size(2); - const int kernel_w_ = weight.size(3); - if (kernel_h_ != kernel_h || kernel_w_ != kernel_w) - AT_ERROR("Input shape and kernel shape won't match: (%d x %d vs %d x %d).", - kernel_h_, kernel_w, kernel_h_, kernel_w_); - if (channels != channels_kernel * group) - AT_ERROR("Input shape and kernel channels won't match: (%d vs %d).", - channels, channels_kernel * group); - - const int height_out = - (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; - const int width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < height_out * width_out) { - // Resize plane and fill with ones... 
- ones = at::ones({height_out, width_out}, input.options()); - } - - grad_input = grad_input.view({batch, channels, height, width}); - columns = at::zeros({channels * kernel_h * kernel_w, height_out * width_out}, - input.options()); - - grad_output = - grad_output.view({grad_output.size(0), group, grad_output.size(1) / group, - grad_output.size(2), grad_output.size(3)}); - - for (int b = 0; b < batch; b++) { - // divide int group - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - - for (int g = 0; g < group; g++) { - columns[g].addmm_(weight[g].flatten(1).transpose(0, 1), - grad_output[b][g].flatten(1), 0.0f, 1.0f); - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - - // gradient w.r.t. input coordinate data - modulated_deformable_col2im_coord_impl( - columns, input[b], offset[b], mask[b], 1, channels, height, width, - height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, deformable_group, grad_offset[b], - grad_mask[b]); - // gradient w.r.t. input data - modulated_deformable_col2im_impl( - columns, offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, grad_input[b]); - - // gradient w.r.t. weight, dWeight should accumulate across the batch and - // group - modulated_deformable_im2col_impl( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); - - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - grad_weight = grad_weight.view({group, grad_weight.size(0) / group, - grad_weight.size(1), grad_weight.size(2), - grad_weight.size(3)}); - if (with_bias) - grad_bias = grad_bias.view({group, grad_bias.size(0) / group}); - - for (int g = 0; g < group; g++) { - grad_weight[g] = - grad_weight[g] - .flatten(1) - .addmm_(grad_output[b][g].flatten(1), columns[g].transpose(0, 1)) - .view_as(grad_weight[g]); - if (with_bias) { - grad_bias[g] = - grad_bias[g] - .view({-1, 1}) - .addmm_(grad_output[b][g].flatten(1), ones.view({-1, 1})) - .view(-1); - } - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - grad_weight = grad_weight.view({grad_weight.size(0) * grad_weight.size(1), - grad_weight.size(2), grad_weight.size(3), - grad_weight.size(4)}); - if (with_bias) - grad_bias = grad_bias.view({grad_bias.size(0) * grad_bias.size(1)}); - } - grad_output = grad_output.view({grad_output.size(0) * grad_output.size(1), - grad_output.size(2), grad_output.size(3), - grad_output.size(4)}); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/ms_deform_attn.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/ms_deform_attn.cpp deleted file mode 100644 index 25c8f620..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/ms_deform_attn.cpp +++ /dev/null @@ -1,60 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from
-*https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-#include "pytorch_cpp_helper.hpp"
-#include "pytorch_device_registry.hpp"
-
-Tensor ms_deform_attn_impl_forward(const Tensor &value,
-                                   const Tensor &spatial_shapes,
-                                   const Tensor &level_start_index,
-                                   const Tensor &sampling_loc,
-                                   const Tensor &attn_weight,
-                                   const int im2col_step) {
-  return DISPATCH_DEVICE_IMPL(ms_deform_attn_impl_forward, value,
-                              spatial_shapes, level_start_index, sampling_loc,
-                              attn_weight, im2col_step);
-}
-
-void ms_deform_attn_impl_backward(
-    const Tensor &value, const Tensor &spatial_shapes,
-    const Tensor &level_start_index, const Tensor &sampling_loc,
-    const Tensor &attn_weight, const Tensor &grad_output, Tensor &grad_value,
-    Tensor &grad_sampling_loc, Tensor &grad_attn_weight,
-    const int im2col_step) {
-  DISPATCH_DEVICE_IMPL(ms_deform_attn_impl_backward, value, spatial_shapes,
-                       level_start_index, sampling_loc, attn_weight,
-                       grad_output, grad_value, grad_sampling_loc,
-                       grad_attn_weight, im2col_step);
-}
-
-Tensor ms_deform_attn_forward(const Tensor &value, const Tensor &spatial_shapes,
-                              const Tensor &level_start_index,
-                              const Tensor &sampling_loc,
-                              const Tensor &attn_weight,
-                              const int im2col_step) {
-  at::DeviceGuard guard(value.device());
-  return ms_deform_attn_impl_forward(value, spatial_shapes, level_start_index,
-                                     sampling_loc, attn_weight, im2col_step);
-}
-
-void ms_deform_attn_backward(const Tensor &value, const Tensor &spatial_shapes,
-                             const Tensor &level_start_index,
-                             const Tensor &sampling_loc,
-                             const Tensor &attn_weight,
-                             const Tensor &grad_output, Tensor &grad_value,
-                             Tensor &grad_sampling_loc,
-                             Tensor &grad_attn_weight, const int im2col_step) {
-  at::DeviceGuard guard(value.device());
-  ms_deform_attn_impl_backward(value, spatial_shapes, level_start_index,
-                               sampling_loc, attn_weight, grad_output,
-                               grad_value, grad_sampling_loc, grad_attn_weight,
-                               im2col_step);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/nms.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/nms.cpp
deleted file mode 100644
index 199d8af2..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/nms.cpp
+++ /dev/null
@@ -1,33 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-#include "pytorch_cpp_helper.hpp"
-#include "pytorch_device_registry.hpp"
-
-Tensor nms_impl(Tensor boxes, Tensor scores, float iou_threshold, int offset) {
-  return DISPATCH_DEVICE_IMPL(nms_impl, boxes, scores, iou_threshold, offset);
-}
-
-Tensor softnms_impl(Tensor boxes, Tensor scores, Tensor dets,
-                    float iou_threshold, float sigma, float min_score,
-                    int method, int offset) {
-  return DISPATCH_DEVICE_IMPL(softnms_impl, boxes, scores, dets, iou_threshold,
-                              sigma, min_score, method, offset);
-}
-
-std::vector<std::vector<int> > nms_match_impl(Tensor dets,
-                                              float iou_threshold) {
-  return DISPATCH_DEVICE_IMPL(nms_match_impl, dets, iou_threshold);
-}
-
-Tensor nms(Tensor boxes, Tensor scores, float iou_threshold, int offset) {
-  return nms_impl(boxes, scores, iou_threshold, offset);
-}
-
-Tensor softnms(Tensor boxes, Tensor scores, Tensor dets, float iou_threshold,
-               float sigma, float min_score, int method, int offset) {
-  return softnms_impl(boxes, scores, dets, iou_threshold, sigma, min_score,
-                      method, offset);
-}
-
-std::vector<std::vector<int> > nms_match(Tensor dets, float iou_threshold) {
-  return nms_match_impl(dets, iou_threshold);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/nms_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/nms_rotated.cpp
deleted file mode 100644
index e4ef676a..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/nms_rotated.cpp
+++ /dev/null
@@ -1,32 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-// modified from
-// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/nms_rotated/nms_rotated.h
-#include "pytorch_cpp_helper.hpp"
-
-Tensor nms_rotated_cpu(const Tensor dets, const Tensor scores,
-                       const float iou_threshold);
-
-#ifdef MMCV_WITH_CUDA
-Tensor nms_rotated_cuda(const Tensor dets, const Tensor scores,
-                        const Tensor order, const Tensor dets_sorted,
-                        const float iou_threshold, const int multi_label);
-#endif
-
-// Interface for Python
-// inline is needed to prevent multiple function definitions when this header is
-// included by different cpps
-Tensor nms_rotated(const Tensor dets, const Tensor scores, const Tensor order,
-                   const Tensor dets_sorted, const float iou_threshold,
-                   const int multi_label) {
-  assert(dets.device().is_cuda() == scores.device().is_cuda());
-  if (dets.device().is_cuda()) {
-#ifdef MMCV_WITH_CUDA
-    return nms_rotated_cuda(dets, scores, order, dets_sorted, iou_threshold,
-                            multi_label);
-#else
-    AT_ERROR("Not compiled with GPU support");
-#endif
-  }
-
-  return nms_rotated_cpu(dets, scores, iou_threshold);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/pixel_group.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/pixel_group.cpp
deleted file mode 100755
index 2bf8c8bb..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/pixel_group.cpp
+++ /dev/null
@@ -1,26 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-// It is modified from https://github.com/WenmuZhou/PAN.pytorch
-
-#include "pytorch_cpp_helper.hpp"
-#include "pytorch_device_registry.hpp"
-
-std::vector<std::vector<float>> pixel_group_impl(
-    Tensor score, Tensor mask, Tensor embedding, Tensor kernel_label,
-    Tensor kernel_contour, int kernel_region_num, float dis_threshold) {
-  return DISPATCH_DEVICE_IMPL(pixel_group_impl, score, mask, embedding,
-                              kernel_label, kernel_contour, kernel_region_num,
-                              dis_threshold);
-}
-
-std::vector<std::vector<float>> pixel_group(
-    Tensor score, Tensor mask, Tensor embedding, Tensor kernel_label,
-    Tensor kernel_contour, int kernel_region_num, float distance_threshold) {
-  score = score.contiguous();
-  mask = mask.contiguous();
-  embedding = embedding.contiguous();
-  kernel_label = kernel_label.contiguous();
-  kernel_contour = kernel_contour.contiguous();
-
-  return pixel_group_impl(score, mask, embedding, kernel_label, kernel_contour,
-                          kernel_region_num, distance_threshold);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/points_in_boxes.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/points_in_boxes.cpp
deleted file mode 100644
index 540da940..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/points_in_boxes.cpp
+++ /dev/null
@@ -1,44 +0,0 @@
-#include "pytorch_cpp_helper.hpp"
-#include "pytorch_device_registry.hpp"
-
-void points_in_boxes_part_forward_impl(int batch_size, int boxes_num,
-                                       int pts_num, const Tensor boxes,
-                                       const Tensor pts,
-                                       Tensor box_idx_of_points) {
-  DISPATCH_DEVICE_IMPL(points_in_boxes_part_forward_impl, batch_size, boxes_num,
-                       pts_num, boxes, pts, box_idx_of_points);
-}
-
-void points_in_boxes_all_forward_impl(int batch_size, int boxes_num,
-                                      int pts_num, const Tensor boxes,
-                                      const Tensor pts,
-                                      Tensor box_idx_of_points) {
-  DISPATCH_DEVICE_IMPL(points_in_boxes_all_forward_impl, batch_size, boxes_num,
-                       pts_num, boxes, pts, box_idx_of_points);
-}
-
-void points_in_boxes_part_forward(Tensor boxes_tensor, Tensor pts_tensor,
-                                  Tensor box_idx_of_points_tensor) {
-  // params boxes: (B, N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR
-  // coordinate, z is the bottom center, each box params pts: (B, npoints, 3)
-  // [x, y, z] in LiDAR coordinate params boxes_idx_of_points: (B, npoints),
-  // default -1
-  int batch_size = boxes_tensor.size(0);
-  int boxes_num = boxes_tensor.size(1);
-  int pts_num = pts_tensor.size(1);
-  points_in_boxes_part_forward_impl(batch_size, boxes_num, pts_num,
-                                    boxes_tensor, pts_tensor,
-                                    box_idx_of_points_tensor);
-}
-
-void points_in_boxes_all_forward(Tensor boxes_tensor, Tensor pts_tensor,
-                                 Tensor box_idx_of_points_tensor) {
-  // params boxes: (B, N, 7) [x, y, z, x_size, y_size, z_size, rz] in LiDAR
-  // coordinate, z is the bottom center. params pts: (B, npoints, 3) [x, y, z]
-  // in LiDAR coordinate params boxes_idx_of_points: (B, npoints), default -1
-  int batch_size = boxes_tensor.size(0);
-  int boxes_num = boxes_tensor.size(1);
-  int pts_num = pts_tensor.size(1);
-  points_in_boxes_all_forward_impl(batch_size, boxes_num, pts_num, boxes_tensor,
-                                   pts_tensor, box_idx_of_points_tensor);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/points_in_polygons.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/points_in_polygons.cpp
deleted file mode 100644
index 75a93dce..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/points_in_polygons.cpp
+++ /dev/null
@@ -1,15 +0,0 @@
-#include "pytorch_cpp_helper.hpp"
-#include "pytorch_device_registry.hpp"
-
-void points_in_polygons_forward_impl(const Tensor points, const Tensor polygons,
-                                     Tensor output, const int rows,
-                                     const int cols) {
-  DISPATCH_DEVICE_IMPL(points_in_polygons_forward_impl, points, polygons,
-                       output, rows, cols);
-}
-
-void points_in_polygons_forward(Tensor points, Tensor polygons, Tensor output) {
-  int rows = points.size(0);
-  int cols = polygons.size(0);
-  points_in_polygons_forward_impl(points, polygons, output, rows, cols);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/psamask.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/psamask.cpp
deleted file mode 100644
index 6064c9ba..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/psamask.cpp
+++ /dev/null
@@ -1,41 +0,0 @@
-// Copyright (c) OpenMMLab. All rights reserved
-// Modified from
-// https://github.com/hszhao/semseg/blob/master/lib/psa/src
-#include "pytorch_cpp_helper.hpp"
-#include "pytorch_device_registry.hpp"
-
-void psamask_forward_impl(const int psa_type, const Tensor input, Tensor output,
-                          const int num_, const int h_feature,
-                          const int w_feature, const int h_mask,
-                          const int w_mask, const int half_h_mask,
-                          const int half_w_mask) {
-  DISPATCH_DEVICE_IMPL(psamask_forward_impl, psa_type, input, output, num_,
-                       h_feature, w_feature, h_mask, w_mask, half_h_mask,
-                       half_w_mask);
-}
-
-void psamask_backward_impl(const int psa_type, const Tensor grad_output,
-                           Tensor grad_input, const int num_,
-                           const int h_feature, const int w_feature,
-                           const int h_mask, const int w_mask,
-                           const int half_h_mask, const int half_w_mask) {
-  DISPATCH_DEVICE_IMPL(psamask_backward_impl, psa_type, grad_output, grad_input,
-                       num_, h_feature, w_feature, h_mask, w_mask, half_h_mask,
-                       half_w_mask);
-}
-
-void psamask_forward(const Tensor input, Tensor output, const int psa_type,
-                     const int num_, const int h_feature, const int w_feature,
-                     const int h_mask, const int w_mask, const int half_h_mask,
-                     const int half_w_mask) {
-  psamask_forward_impl(psa_type, input, output, num_, h_feature, w_feature,
-                       h_mask, w_mask, half_h_mask, half_w_mask);
-}
-
-void psamask_backward(Tensor grad_output, const Tensor grad_input,
-                      const int psa_type, const int num_, const int h_feature,
-                      const int w_feature, const int h_mask, const int w_mask,
-                      const int half_h_mask, const int half_w_mask) {
-  psamask_backward_impl(psa_type, grad_output, grad_input, num_, h_feature,
-                        w_feature, h_mask, w_mask, half_h_mask, half_w_mask);
-}
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/pybind.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/pybind.cpp
deleted file mode 100644 index
476034f3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/pybind.cpp +++ /dev/null @@ -1,831 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include - -#include "pytorch_cpp_helper.hpp" - -std::string get_compiler_version(); -std::string get_compiling_cuda_version(); - -void assign_score_withk_forward(const Tensor &points, const Tensor ¢ers, - const Tensor &scores, const Tensor &knn_idx, - Tensor &output, int B, int N0, int N1, int M, - int K, int O, int aggregate); - -void assign_score_withk_backward(const Tensor &grad_out, const Tensor &points, - const Tensor ¢ers, const Tensor &scores, - const Tensor &knn_idx, Tensor &grad_points, - Tensor &grad_centers, Tensor &grad_scores, - int B, int N0, int N1, int M, int K, int O, - int aggregate); - -void carafe_naive_forward(Tensor features, Tensor masks, Tensor output, - int kernel_size, int group_size, int scale_factor); - -void carafe_naive_backward(Tensor top_grad, Tensor features, Tensor masks, - Tensor bottom_grad, Tensor mask_grad, - int kernel_size, int group_size, int scale_factor); - -void carafe_forward(Tensor features, Tensor masks, Tensor rfeatures, - Tensor routput, Tensor rmasks, Tensor output, - int kernel_size, int group_size, int scale_factor); - -void carafe_backward(Tensor top_grad, Tensor rfeatures, Tensor masks, - Tensor rtop_grad, Tensor rbottom_grad_hs, - Tensor rbottom_grad, Tensor rmask_grad, Tensor bottom_grad, - Tensor mask_grad, int kernel_size, int group_size, - int scale_factor); - -void deform_conv_forward(Tensor input, Tensor weight, Tensor offset, - Tensor output, Tensor columns, Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step); - -void deform_conv_backward_input(Tensor input, Tensor offset, Tensor gradOutput, - Tensor gradInput, Tensor gradOffset, - Tensor weight, Tensor columns, int kW, int kH, - int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step); - -void deform_conv_backward_parameters(Tensor input, Tensor offset, - Tensor gradOutput, Tensor gradWeight, - Tensor columns, Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, float scale, - int im2col_step); - -void deform_roi_pool_forward(Tensor input, Tensor rois, Tensor offset, - Tensor output, int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - float gamma); - -void deform_roi_pool_backward(Tensor grad_output, Tensor input, Tensor rois, - Tensor offset, Tensor grad_input, - Tensor grad_offset, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, float gamma); - -void group_points_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor out_tensor, int b, int c, int n, int npoints, - int nsample); - -void group_points_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor grad_points_tensor, int b, int c, int n, - int npoints, int nsample); - -void roipoint_pool3d_forward(Tensor xyz, Tensor boxes3d, Tensor pts_feature, - Tensor pooled_features, Tensor pooled_empty_flag); - -void gather_points_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor out_tensor, int b, int c, int n, int npoints); - -void gather_points_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor grad_points_tensor, int b, int c, int n, - int npoints); - -void sigmoid_focal_loss_forward(Tensor input, Tensor target, 
Tensor weight, - Tensor output, float gamma, float alpha); - -void sigmoid_focal_loss_backward(Tensor input, Tensor target, Tensor weight, - Tensor grad_input, float gamma, float alpha); - -void softmax_focal_loss_forward(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha); - -void softmax_focal_loss_backward(Tensor input, Tensor target, Tensor weight, - Tensor buff, Tensor grad_input, float gamma, - float alpha); - -void three_interpolate_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor weight_tensor, Tensor out_tensor, int b, - int c, int m, int n); - -void three_interpolate_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor weight_tensor, Tensor grad_points_tensor, - int b, int c, int n, int m); - -void three_nn_forward(Tensor unknown_tensor, Tensor known_tensor, - Tensor dist2_tensor, Tensor idx_tensor, int b, int n, - int m); - -void bbox_overlaps(const Tensor bboxes1, const Tensor bboxes2, Tensor ious, - const int mode, const bool aligned, const int offset); - -void knn_forward(Tensor xyz_tensor, Tensor new_xyz_tensor, Tensor idx_tensor, - Tensor dist2_tensor, int b, int n, int m, int nsample); - -void iou3d_boxes_overlap_bev_forward(Tensor boxes_a, Tensor boxes_b, - Tensor ans_overlap); - -void iou3d_nms3d_forward(Tensor boxes, Tensor keep, Tensor keep_num, - float nms_overlap_thresh); - -void iou3d_nms3d_normal_forward(Tensor boxes, Tensor keep, Tensor keep_num, - float nms_overlap_thresh); - -void furthest_point_sampling_forward(Tensor points_tensor, Tensor temp_tensor, - Tensor idx_tensor, int b, int n, int m); - -void furthest_point_sampling_with_dist_forward(Tensor points_tensor, - Tensor temp_tensor, - Tensor idx_tensor, int b, int n, - int m); - -void masked_im2col_forward(const Tensor im, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor col, - const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w); - -void masked_col2im_forward(const Tensor col, const Tensor mask_h_idx, - const Tensor mask_w_idx, Tensor im, int height, - int width, int channels); - -void modulated_deform_conv_forward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor output, Tensor columns, int kernel_h, int kernel_w, - const int stride_h, const int stride_w, const int pad_h, const int pad_w, - const int dilation_h, const int dilation_w, const int group, - const int deformable_group, const bool with_bias); - -void modulated_deform_conv_backward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor columns, Tensor grad_input, Tensor grad_weight, - Tensor grad_bias, Tensor grad_offset, Tensor grad_mask, Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias); - -Tensor ms_deform_attn_forward(const Tensor &value, const Tensor &spatial_shapes, - const Tensor &level_start_index, - const Tensor &sampling_loc, - const Tensor &attn_weight, const int im2col_step); - -void ms_deform_attn_backward(const Tensor &value, const Tensor &spatial_shapes, - const Tensor &level_start_index, - const Tensor &sampling_loc, - const Tensor &attn_weight, - const Tensor &grad_output, Tensor &grad_value, - Tensor &grad_sampling_loc, - Tensor &grad_attn_weight, const int im2col_step); - -Tensor nms(Tensor boxes, Tensor scores, float iou_threshold, int offset); - -Tensor softnms(Tensor boxes, Tensor scores, Tensor dets, float 
iou_threshold, - float sigma, float min_score, int method, int offset); - -std::vector> nms_match(Tensor dets, float iou_threshold); - -std::vector> pixel_group( - Tensor score, Tensor mask, Tensor embedding, Tensor kernel_label, - Tensor kernel_contour, int kernel_region_num, float distance_threshold); - -std::vector> contour_expand(Tensor kernel_mask, - Tensor internal_kernel_label, - int min_kernel_area, - int kernel_num); - -void roi_align_forward(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned); - -void roi_align_backward(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned); - -void roi_pool_forward(Tensor input, Tensor rois, Tensor output, Tensor argmax, - int pooled_height, int pooled_width, float spatial_scale); - -void roi_pool_backward(Tensor grad_output, Tensor rois, Tensor argmax, - Tensor grad_input, int pooled_height, int pooled_width, - float spatial_scale); - -void sync_bn_forward_mean(const Tensor input, Tensor mean); - -void sync_bn_forward_var(const Tensor input, const Tensor mean, Tensor var); - -void sync_bn_forward_output(const Tensor input, const Tensor mean, - const Tensor var, const Tensor weight, - const Tensor bias, Tensor running_mean, - Tensor running_var, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size); - -void sync_bn_backward_param(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias); - -void sync_bn_backward_data(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, const Tensor grad_bias, - const Tensor norm, const Tensor std, - Tensor grad_input); - -void psamask_forward(const Tensor input, Tensor output, const int psa_type, - const int num_, const int h_feature, const int w_feature, - const int h_mask, const int w_mask, const int half_h_mask, - const int half_w_mask); - -void psamask_backward(Tensor grad_output, const Tensor grad_input, - const int psa_type, const int num_, const int h_feature, - const int w_feature, const int h_mask, const int w_mask, - const int half_h_mask, const int half_w_mask); - -void tin_shift_forward(Tensor input, Tensor shift, Tensor output); - -void tin_shift_backward(Tensor grad_output, Tensor shift, Tensor grad_input); - -void ball_query_forward(Tensor new_xyz_tensor, Tensor xyz_tensor, - Tensor idx_tensor, int b, int n, int m, - float min_radius, float max_radius, int nsample); - -// template -// std::vector get_indice_pairs_forward( -// torch::Tensor indices, int64_t batchSize, -// std::vector outSpatialShape, std::vector spatialShape, -// std::vector kernelSize, std::vector stride, -// std::vector padding, std::vector dilation, -// std::vector outPadding, int64_t _subM, int64_t _transpose); - -// template -// std::vector get_indice_pairs_backward( -// Tensor indices, Tensor gridOut, int64_t batchSize, -// std::vector outSpatialShape, std::vector spatialShape, -// std::vector kernelSize, std::vector stride, -// std::vector padding, std::vector dilation, -// std::vector outPadding, int64_t _subM, int64_t _transpose); - -// Tensor indice_conv_forward(Tensor features, Tensor filters, Tensor indicePairs, -// Tensor indiceNum, int64_t numActOut, -// int64_t _inverse, int64_t _subM); - -// std::vector indice_conv_backward(Tensor features, Tensor filters, 
-// Tensor outGrad, Tensor indicePairs, -// Tensor indiceNum, int64_t _inverse, -// int64_t _subM); - -// Tensor fused_indice_conv_batchnorm_forward(Tensor features, Tensor filters, -// Tensor bias, Tensor indicePairs, -// Tensor indiceNum, int64_t numActOut, -// int64_t _inverse, int64_t _subM); - -// Tensor indice_maxpool_forward(Tensor features, Tensor indicePairs, -// Tensor indiceNum, int64_t numAct); - -// Tensor indice_maxpool_backward(Tensor features, Tensor outFeatures, -// Tensor outGrad, Tensor indicePairs, -// Tensor indiceNum); - -void box_iou_rotated(const Tensor boxes1, const Tensor boxes2, Tensor ious, - const int mode_flag, const bool aligned); - -Tensor nms_rotated(const Tensor dets, const Tensor scores, const Tensor order, - const Tensor dets_sorted, const float iou_threshold, - const int multi_label); - -Tensor upfirdn2d(const Tensor &input, const Tensor &kernel, int up_x, int up_y, - int down_x, int down_y, int pad_x0, int pad_x1, int pad_y0, - int pad_y1); - -Tensor fused_bias_leakyrelu(const Tensor &input, const Tensor &bias, - const Tensor &refer, int act, int grad, float alpha, - float scale); - -void roi_align_rotated_forward(Tensor input, Tensor rois, Tensor output, - int pooled_height, int pooled_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise); - -void roi_align_rotated_backward(Tensor grad_output, Tensor rois, - Tensor grad_input, int pooled_height, - int pooled_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise); - -std::vector dynamic_point_to_voxel_forward( - const torch::Tensor &feats, const torch::Tensor &coors, - const std::string &reduce_type); - -void dynamic_point_to_voxel_backward(torch::Tensor &grad_feats, - const torch::Tensor &grad_reduced_feats, - const torch::Tensor &feats, - const torch::Tensor &reduced_feats, - const torch::Tensor &coors_idx, - const torch::Tensor &reduce_count, - const std::string &reduce_type); - -void hard_voxelize_forward(const at::Tensor &points, - const at::Tensor &voxel_size, - const at::Tensor &coors_range, at::Tensor &voxels, - at::Tensor &coors, at::Tensor &num_points_per_voxel, - at::Tensor &voxel_num, const int max_points, - const int max_voxels, const int NDim, - const bool deterministic); - -void dynamic_voxelize_forward(const at::Tensor &points, - const at::Tensor &voxel_size, - const at::Tensor &coors_range, at::Tensor &coors, - const int NDim); - -void border_align_forward(const Tensor &input, const Tensor &boxes, - Tensor output, Tensor argmax_idx, - const int pool_size); - -void border_align_backward(const Tensor &grad_output, const Tensor &boxes, - const Tensor &argmax_idx, Tensor grad_input, - const int pool_size); - -void points_in_boxes_cpu_forward(Tensor boxes_tensor, Tensor pts_tensor, - Tensor pts_indices_tensor); - -void points_in_boxes_part_forward(Tensor boxes_tensor, Tensor pts_tensor, - Tensor box_idx_of_points_tensor); - -void points_in_boxes_all_forward(Tensor boxes_tensor, Tensor pts_tensor, - Tensor box_idx_of_points_tensor); - -void roiaware_pool3d_forward(Tensor rois, Tensor pts, Tensor pts_feature, - Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method); - -void roiaware_pool3d_backward(Tensor pts_idx_of_voxels, Tensor argmax, - Tensor grad_out, Tensor grad_in, int pool_method); - -void correlation_forward(Tensor input1, Tensor input2, Tensor output, int kH, - int kW, int patchH, int patchW, int padH, int padW, - int dilationH, int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int 
dW); - -void correlation_backward(Tensor grad_output, Tensor input1, Tensor input2, - Tensor grad_input1, Tensor grad_input2, int kH, - int kW, int patchH, int patchW, int padH, int padW, - int dilationH, int dilationW, int dilation_patchH, - int dilation_patchW, int dH, int dW); - -void rotated_feature_align_forward(const Tensor features, - const Tensor best_bboxes, Tensor output, - const float spatial_scale, const int points); - -void rotated_feature_align_backward(const Tensor top_grad, - const Tensor best_bboxes, - Tensor bottom_grad, - const float spatial_scale, - const int points); - -void riroi_align_rotated_forward(Tensor features, Tensor rois, Tensor output, - int pooled_height, int pooled_width, - float spatial_scale, int num_samples, - int num_orientations, bool clockwise); - -void riroi_align_rotated_backward(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise); - -void points_in_polygons_forward(Tensor points, Tensor polygons, Tensor output); - -void min_area_polygons(const Tensor pointsets, Tensor polygons); - -void active_rotated_filter_forward(const Tensor input, const Tensor indices, - Tensor output); - -void active_rotated_filter_backward(const Tensor grad_out, const Tensor indices, - Tensor grad_in); - -void convex_iou(const Tensor pointsets, const Tensor polygons, Tensor ious); - -void convex_giou(const Tensor pointsets, const Tensor polygons, Tensor output); - -at::Tensor diff_iou_rotated_sort_vertices_forward(at::Tensor vertices, - at::Tensor mask, - at::Tensor num_valid); - -void chamfer_distance_forward(const Tensor xyz1, const Tensor xyz2, - const Tensor dist1, const Tensor dist2, - const Tensor idx1, const Tensor idx); - -void chamfer_distance_backward(const Tensor xyz1, const Tensor xyz2, - Tensor gradxyz1, Tensor gradxyz2, - Tensor graddist1, Tensor graddist2, Tensor idx1, - Tensor idx2); - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)", py::arg("input"), - py::arg("kernel"), py::arg("up_x"), py::arg("up_y"), py::arg("down_x"), - py::arg("down_y"), py::arg("pad_x0"), py::arg("pad_x1"), - py::arg("pad_y0"), py::arg("pad_y1")); - m.def("fused_bias_leakyrelu", &fused_bias_leakyrelu, - "fused_bias_leakyrelu (CUDA)", py::arg("input"), py::arg("bias"), - py::arg("empty"), py::arg("act"), py::arg("grad"), py::arg("alpha"), - py::arg("scale")); - m.def("gather_points_forward", &gather_points_forward, - "gather_points_forward", py::arg("points_tensor"), - py::arg("idx_tensor"), py::arg("out_tensor"), py::arg("b"), - py::arg("c"), py::arg("n"), py::arg("npoints")); - m.def("gather_points_backward", &gather_points_backward, - "gather_points_backward", py::arg("grad_out_tensor"), - py::arg("idx_tensor"), py::arg("grad_points_tensor"), py::arg("b"), - py::arg("c"), py::arg("n"), py::arg("npoints")); - m.def("get_compiler_version", &get_compiler_version, "get_compiler_version"); - m.def("get_compiling_cuda_version", &get_compiling_cuda_version, - "get_compiling_cuda_version"); - m.def("assign_score_withk_forward", &assign_score_withk_forward, - "assign_score_withk_forward", py::arg("points"), py::arg("centers"), - py::arg("scores"), py::arg("knn_idx"), py::arg("output"), py::arg("B"), - py::arg("N0"), py::arg("N1"), py::arg("M"), py::arg("K"), py::arg("O"), - py::arg("aggregate")); - m.def("assign_score_withk_backward", &assign_score_withk_backward, - "assign_score_withk_backward", py::arg("grad_out"), 
py::arg("points"), - py::arg("centers"), py::arg("scores"), py::arg("knn_idx"), - py::arg("grad_points"), py::arg("grad_centers"), py::arg("grad_scores"), - py::arg("B"), py::arg("N0"), py::arg("N1"), py::arg("M"), py::arg("K"), - py::arg("O"), py::arg("aggregate")); - m.def("knn_forward", &knn_forward, "knn_forward", py::arg("xyz_tensor"), - py::arg("new_xyz_tensor"), py::arg("idx_tensor"), - py::arg("dist2_tensor"), py::arg("b"), py::arg("n"), py::arg("m"), - py::arg("nsample")); - m.def("carafe_naive_forward", &carafe_naive_forward, "carafe_naive_forward", - py::arg("features"), py::arg("masks"), py::arg("output"), - py::arg("kernel_size"), py::arg("group_size"), py::arg("scale_factor")); - m.def("carafe_naive_backward", &carafe_naive_backward, - "carafe_naive_backward", py::arg("top_grad"), py::arg("features"), - py::arg("masks"), py::arg("bottom_grad"), py::arg("mask_grad"), - py::arg("kernel_size"), py::arg("group_size"), py::arg("scale_factor")); - m.def("carafe_forward", &carafe_forward, "carafe_forward", - py::arg("features"), py::arg("masks"), py::arg("rfeatures"), - py::arg("routput"), py::arg("rmasks"), py::arg("output"), - py::arg("kernel_size"), py::arg("group_size"), py::arg("scale_factor")); - m.def("carafe_backward", &carafe_backward, "carafe_backward", - py::arg("top_grad"), py::arg("rfeatures"), py::arg("masks"), - py::arg("rtop_grad"), py::arg("rbottom_grad_hs"), - py::arg("rbottom_grad"), py::arg("rmask_grad"), py::arg("bottom_grad"), - py::arg("mask_grad"), py::arg("kernel_size"), py::arg("group_size"), - py::arg("scale_factor")); - m.def("deform_conv_forward", &deform_conv_forward, "deform_conv_forward", - py::arg("input"), py::arg("weight"), py::arg("offset"), - py::arg("output"), py::arg("columns"), py::arg("ones"), py::arg("kW"), - py::arg("kH"), py::arg("dW"), py::arg("dH"), py::arg("padW"), - py::arg("padH"), py::arg("dilationW"), py::arg("dilationH"), - py::arg("group"), py::arg("deformable_group"), py::arg("im2col_step")); - m.def("deform_conv_backward_input", &deform_conv_backward_input, - "deform_conv_backward_input", py::arg("input"), py::arg("offset"), - py::arg("gradOutput"), py::arg("gradInput"), py::arg("gradOffset"), - py::arg("weight"), py::arg("columns"), py::arg("kW"), py::arg("kH"), - py::arg("dW"), py::arg("dH"), py::arg("padW"), py::arg("padH"), - py::arg("dilationW"), py::arg("dilationH"), py::arg("group"), - py::arg("deformable_group"), py::arg("im2col_step")); - m.def("deform_conv_backward_parameters", &deform_conv_backward_parameters, - "deform_conv_backward_parameters", py::arg("input"), py::arg("offset"), - py::arg("gradOutput"), py::arg("gradWeight"), py::arg("columns"), - py::arg("ones"), py::arg("kW"), py::arg("kH"), py::arg("dW"), - py::arg("dH"), py::arg("padW"), py::arg("padH"), py::arg("dilationW"), - py::arg("dilationH"), py::arg("group"), py::arg("deformable_group"), - py::arg("scale"), py::arg("im2col_step")); - m.def("deform_roi_pool_forward", &deform_roi_pool_forward, - "deform roi pool forward", py::arg("input"), py::arg("rois"), - py::arg("offset"), py::arg("output"), py::arg("pooled_height"), - py::arg("pooled_width"), py::arg("spatial_scale"), - py::arg("sampling_ratio"), py::arg("gamma")); - m.def("deform_roi_pool_backward", &deform_roi_pool_backward, - "deform roi pool backward", py::arg("grad_output"), py::arg("input"), - py::arg("rois"), py::arg("offset"), py::arg("grad_input"), - py::arg("grad_offset"), py::arg("pooled_height"), - py::arg("pooled_width"), py::arg("spatial_scale"), - py::arg("sampling_ratio"), 
py::arg("gamma")); - m.def("roipoint_pool3d_forward", &roipoint_pool3d_forward, - "roipoint_pool3d_forward", py::arg("xyz"), py::arg("boxes3d"), - py::arg("pts_feature"), py::arg("pooled_features"), - py::arg("pooled_empty_flag")); - m.def("sigmoid_focal_loss_forward", &sigmoid_focal_loss_forward, - "sigmoid_focal_loss_forward ", py::arg("input"), py::arg("target"), - py::arg("weight"), py::arg("output"), py::arg("gamma"), - py::arg("alpha")); - m.def("sigmoid_focal_loss_backward", &sigmoid_focal_loss_backward, - "sigmoid_focal_loss_backward", py::arg("input"), py::arg("target"), - py::arg("weight"), py::arg("grad_input"), py::arg("gamma"), - py::arg("alpha")); - m.def("softmax_focal_loss_forward", &softmax_focal_loss_forward, - "softmax_focal_loss_forward", py::arg("input"), py::arg("target"), - py::arg("weight"), py::arg("output"), py::arg("gamma"), - py::arg("alpha")); - m.def("softmax_focal_loss_backward", &softmax_focal_loss_backward, - "softmax_focal_loss_backward", py::arg("input"), py::arg("target"), - py::arg("weight"), py::arg("buff"), py::arg("grad_input"), - py::arg("gamma"), py::arg("alpha")); - m.def("three_interpolate_forward", &three_interpolate_forward, - "three_interpolate_forward", py::arg("points_tensor"), - py::arg("idx_tensor"), py::arg("weight_tensor"), py::arg("out_tensor"), - py::arg("b"), py::arg("c"), py::arg("m"), py::arg("n")); - m.def("three_interpolate_backward", &three_interpolate_backward, - "three_interpolate_backward", py::arg("grad_out_tensor"), - py::arg("idx_tensor"), py::arg("weight_tensor"), - py::arg("grad_points_tensor"), py::arg("b"), py::arg("c"), py::arg("n"), - py::arg("m")); - m.def("three_nn_forward", &three_nn_forward, "three_nn_forward", - py::arg("unknown_tensor"), py::arg("known_tensor"), - py::arg("dist2_tensor"), py::arg("idx_tensor"), py::arg("b"), - py::arg("n"), py::arg("m")); - m.def("bbox_overlaps", &bbox_overlaps, "bbox_overlaps", py::arg("bboxes1"), - py::arg("bboxes2"), py::arg("ious"), py::arg("mode"), - py::arg("aligned"), py::arg("offset")); - m.def("group_points_forward", &group_points_forward, "group_points_forward", - py::arg("points_tensor"), py::arg("idx_tensor"), py::arg("out_tensor"), - py::arg("b"), py::arg("c"), py::arg("n"), py::arg("npoints"), - py::arg("nsample")); - m.def("group_points_backward", &group_points_backward, - "group_points_backward", py::arg("grad_out_tensor"), - py::arg("idx_tensor"), py::arg("grad_points_tensor"), py::arg("b"), - py::arg("c"), py::arg("n"), py::arg("npoints"), py::arg("nsample")); - m.def("knn_forward", &knn_forward, "knn_forward", py::arg("b"), py::arg("n"), - py::arg("m"), py::arg("nsample"), py::arg("xyz_tensor"), - py::arg("new_xyz_tensor"), py::arg("idx_tensor"), - py::arg("dist2_tensor")); - m.def("iou3d_boxes_overlap_bev_forward", &iou3d_boxes_overlap_bev_forward, - "iou3d_boxes_overlap_bev_forward", py::arg("boxes_a"), - py::arg("boxes_b"), py::arg("ans_iou")); - m.def("iou3d_nms3d_forward", &iou3d_nms3d_forward, "iou3d_nms3d_forward", - py::arg("boxes"), py::arg("keep"), py::arg("num_out"), - py::arg("nms_overlap_thresh")); - m.def("iou3d_nms3d_normal_forward", &iou3d_nms3d_normal_forward, - "iou3d_nms3d_normal_forward", py::arg("boxes"), py::arg("keep"), - py::arg("num_out"), py::arg("nms_overlap_thresh")); - m.def("furthest_point_sampling_forward", &furthest_point_sampling_forward, - "furthest_point_sampling_forward", py::arg("points_tensor"), - py::arg("temp_tensor"), py::arg("idx_tensor"), py::arg("b"), - py::arg("n"), py::arg("m")); - 
m.def("furthest_point_sampling_with_dist_forward", - &furthest_point_sampling_with_dist_forward, - "furthest_point_sampling_with_dist_forward", py::arg("points_tensor"), - py::arg("temp_tensor"), py::arg("idx_tensor"), py::arg("b"), - py::arg("n"), py::arg("m")); - m.def("masked_im2col_forward", &masked_im2col_forward, - "masked_im2col_forward", py::arg("im"), py::arg("mask_h_idx"), - py::arg("mask_w_idx"), py::arg("col"), py::arg("kernel_h"), - py::arg("kernel_w"), py::arg("pad_h"), py::arg("pad_w")); - m.def("masked_col2im_forward", &masked_col2im_forward, - "masked_col2im_forward", py::arg("col"), py::arg("mask_h_idx"), - py::arg("mask_w_idx"), py::arg("im"), py::arg("height"), - py::arg("width"), py::arg("channels")); - m.def("modulated_deform_conv_forward", &modulated_deform_conv_forward, - "modulated deform conv forward", py::arg("input"), py::arg("weight"), - py::arg("bias"), py::arg("ones"), py::arg("offset"), py::arg("mask"), - py::arg("output"), py::arg("columns"), py::arg("kernel_h"), - py::arg("kernel_w"), py::arg("stride_h"), py::arg("stride_w"), - py::arg("pad_h"), py::arg("pad_w"), py::arg("dilation_h"), - py::arg("dilation_w"), py::arg("group"), py::arg("deformable_group"), - py::arg("with_bias")); - m.def("modulated_deform_conv_backward", &modulated_deform_conv_backward, - "modulated deform conv backward", py::arg("input"), py::arg("weight"), - py::arg("bias"), py::arg("ones"), py::arg("offset"), py::arg("mask"), - py::arg("columns"), py::arg("grad_input"), py::arg("grad_weight"), - py::arg("grad_bias"), py::arg("grad_offset"), py::arg("grad_mask"), - py::arg("grad_output"), py::arg("kernel_h"), py::arg("kernel_w"), - py::arg("stride_h"), py::arg("stride_w"), py::arg("pad_h"), - py::arg("pad_w"), py::arg("dilation_h"), py::arg("dilation_w"), - py::arg("group"), py::arg("deformable_group"), py::arg("with_bias")); - m.def("nms", &nms, "nms (CPU/CUDA) ", py::arg("boxes"), py::arg("scores"), - py::arg("iou_threshold"), py::arg("offset")); - m.def("softnms", &softnms, "softnms (CPU) ", py::arg("boxes"), - py::arg("scores"), py::arg("dets"), py::arg("iou_threshold"), - py::arg("sigma"), py::arg("min_score"), py::arg("method"), - py::arg("offset")); - m.def("nms_match", &nms_match, "nms_match (CPU) ", py::arg("dets"), - py::arg("iou_threshold")); - m.def("pixel_group", &pixel_group, "pixel group (CPU) ", py::arg("score"), - py::arg("mask"), py::arg("embedding"), py::arg("kernel_label"), - py::arg("kernel_contour"), py::arg("kernel_region_label"), - py::arg("distance_threshold")); - m.def("contour_expand", &contour_expand, "contour exapnd (CPU) ", - py::arg("kernel_mask"), py::arg("internal_kernel_label"), - py::arg("min_kernel_area"), py::arg("kernel_num")); - m.def("roi_align_forward", &roi_align_forward, "roi_align forward", - py::arg("input"), py::arg("rois"), py::arg("output"), - py::arg("argmax_y"), py::arg("argmax_x"), py::arg("aligned_height"), - py::arg("aligned_width"), py::arg("spatial_scale"), - py::arg("sampling_ratio"), py::arg("pool_mode"), py::arg("aligned")); - m.def("roi_align_backward", &roi_align_backward, "roi_align backward", - py::arg("grad_output"), py::arg("rois"), py::arg("argmax_y"), - py::arg("argmax_x"), py::arg("grad_input"), py::arg("aligned_height"), - py::arg("aligned_width"), py::arg("spatial_scale"), - py::arg("sampling_ratio"), py::arg("pool_mode"), py::arg("aligned")); - m.def("roi_pool_forward", &roi_pool_forward, "roi_pool forward", - py::arg("input"), py::arg("rois"), py::arg("output"), py::arg("argmax"), - py::arg("pooled_height"), 
py::arg("pooled_width"), - py::arg("spatial_scale")); - m.def("roi_pool_backward", &roi_pool_backward, "roi_pool backward", - py::arg("grad_output"), py::arg("rois"), py::arg("argmax"), - py::arg("grad_input"), py::arg("pooled_height"), - py::arg("pooled_width"), py::arg("spatial_scale")); - m.def("sync_bn_forward_mean", &sync_bn_forward_mean, "sync_bn forward_mean", - py::arg("input"), py::arg("mean")); - m.def("sync_bn_forward_var", &sync_bn_forward_var, "sync_bn forward_var", - py::arg("input"), py::arg("mean"), py::arg("var")); - m.def("sync_bn_forward_output", &sync_bn_forward_output, - "sync_bn forward_output", py::arg("input"), py::arg("mean"), - py::arg("var"), py::arg("weight"), py::arg("bias"), - py::arg("running_mean"), py::arg("running_var"), py::arg("norm"), - py::arg("std"), py::arg("output"), py::arg("eps"), py::arg("momentum"), - py::arg("group_size")); - m.def("sync_bn_backward_param", &sync_bn_backward_param, - "sync_bn backward_param", py::arg("grad_output"), py::arg("norm"), - py::arg("grad_weight"), py::arg("grad_bias")); - m.def("sync_bn_backward_data", &sync_bn_backward_data, - "sync_bn backward_data", py::arg("grad_output"), py::arg("weight"), - py::arg("grad_weight"), py::arg("grad_bias"), py::arg("norm"), - py::arg("std"), py::arg("grad_input")); -// m.def("get_indice_pairs_2d_forward", &get_indice_pairs_forward<2>, -// "get_indice_pairs_2d_forward", py::arg("indices"), py::arg("batchSize"), -// py::arg("outSpatialShape"), py::arg("spatialShape"), -// py::arg("kernelSize"), py::arg("stride"), py::arg("padding"), -// py::arg("dilation"), py::arg("outPadding"), py::arg("_subM"), -// py::arg("_transpose")); -// m.def("get_indice_pairs_3d_forward", &get_indice_pairs_forward<3>, -// "get_indice_pairs_3d_forward", py::arg("indices"), py::arg("batchSize"), -// py::arg("outSpatialShape"), py::arg("spatialShape"), -// py::arg("kernelSize"), py::arg("stride"), py::arg("padding"), -// py::arg("dilation"), py::arg("outPadding"), py::arg("_subM"), -// py::arg("_transpose")); -// m.def("get_indice_pairs_4d_forward", &get_indice_pairs_forward<4>, -// "get_indice_pairs_4d_forward", py::arg("indices"), py::arg("batchSize"), -// py::arg("outSpatialShape"), py::arg("spatialShape"), -// py::arg("kernelSize"), py::arg("stride"), py::arg("padding"), -// py::arg("dilation"), py::arg("outPadding"), py::arg("_subM"), -// py::arg("_transpose")); -// m.def("get_indice_pairs_2d_backward", &get_indice_pairs_backward<2>, -// "get_indice_pairs_2d_backward", py::arg("indices"), py::arg("gridOut"), -// py::arg("batchSize"), py::arg("outSpatialShape"), -// py::arg("spatialShape"), py::arg("kernelSize"), py::arg("stride"), -// py::arg("padding"), py::arg("dilation"), py::arg("outPadding"), -// py::arg("_subM"), py::arg("_transpose")); -// m.def("get_indice_pairs_3d_backward", &get_indice_pairs_backward<3>, -// "get_indice_pairs_3d_backward", py::arg("indices"), py::arg("gridOut"), -// py::arg("batchSize"), py::arg("outSpatialShape"), -// py::arg("spatialShape"), py::arg("kernelSize"), py::arg("stride"), -// py::arg("padding"), py::arg("dilation"), py::arg("outPadding"), -// py::arg("_subM"), py::arg("_transpose")); -// m.def("indice_conv_forward", &indice_conv_forward, "indice_conv_forward", -// py::arg("features"), py::arg("filters"), py::arg("indicePairs"), -// py::arg("indiceNum"), py::arg("numActOut"), py::arg("_inverse"), -// py::arg("_subM")); -// m.def("indice_conv_backward", &indice_conv_backward, "indice_conv_backward", -// py::arg("features"), py::arg("filters"), py::arg("outGrad"), -// 
py::arg("indicePairs"), py::arg("indiceNum"), py::arg("_inverse"), -// py::arg("_subM")); -// m.def("fused_indice_conv_forward", &fused_indice_conv_batchnorm_forward, -// "fused_indice_conv_forward", py::arg("features"), py::arg("filters"), -// py::arg("bias"), py::arg("indicePairs"), py::arg("indiceNum"), -// py::arg("numActOut"), py::arg("_inverse"), py::arg("_subM")); -// m.def("indice_maxpool_forward", &indice_maxpool_forward, -// "indice_maxpool_forward", py::arg("features"), py::arg("indicePairs"), -// py::arg("indiceNum"), py::arg("numAct")); -// m.def("indice_maxpool_backward", &indice_maxpool_backward, -// "indice_maxpool_backward", py::arg("features"), py::arg("outFeatures"), -// py::arg("outGrad"), py::arg("indicePairs"), py::arg("indiceNum")); - m.def("psamask_forward", &psamask_forward, "PSAMASK forward (CPU/CUDA)", - py::arg("input"), py::arg("output"), py::arg("psa_type"), - py::arg("num_"), py::arg("h_feature"), py::arg("w_feature"), - py::arg("h_mask"), py::arg("w_mask"), py::arg("half_h_mask"), - py::arg("half_w_mask")); - m.def("psamask_backward", &psamask_backward, "PSAMASK backward (CPU/CUDA)", - py::arg("grad_output"), py::arg("grad_input"), py::arg("psa_type"), - py::arg("num_"), py::arg("h_feature"), py::arg("w_feature"), - py::arg("h_mask"), py::arg("w_mask"), py::arg("half_h_mask"), - py::arg("half_w_mask")); - m.def("tin_shift_forward", &tin_shift_forward, "tin_shift forward", - py::arg("input"), py::arg("shift"), py::arg("output")); - m.def("tin_shift_backward", &tin_shift_backward, "tin_shift backward", - py::arg("grad_output"), py::arg("shift"), py::arg("grad_input")); - m.def("box_iou_rotated", &box_iou_rotated, "IoU for rotated boxes", - py::arg("boxes1"), py::arg("boxes2"), py::arg("ious"), - py::arg("mode_flag"), py::arg("aligned")); - m.def("nms_rotated", &nms_rotated, "NMS for rotated boxes", py::arg("dets"), - py::arg("scores"), py::arg("order"), py::arg("dets_sorted"), - py::arg("iou_threshold"), py::arg("multi_label")); - m.def("ball_query_forward", &ball_query_forward, "ball_query_forward", - py::arg("new_xyz_tensor"), py::arg("xyz_tensor"), py::arg("idx_tensor"), - py::arg("b"), py::arg("n"), py::arg("m"), py::arg("min_radius"), - py::arg("max_radius"), py::arg("nsample")); - m.def("roi_align_rotated_forward", &roi_align_rotated_forward, - "roi_align_rotated forward", py::arg("input"), py::arg("rois"), - py::arg("output"), py::arg("pooled_height"), py::arg("pooled_width"), - py::arg("spatial_scale"), py::arg("sampling_ratio"), py::arg("aligned"), - py::arg("clockwise")); - m.def("roi_align_rotated_backward", &roi_align_rotated_backward, - "roi_align_rotated backward", py::arg("rois"), py::arg("grad_input"), - py::arg("grad_output"), py::arg("pooled_height"), - py::arg("pooled_width"), py::arg("spatial_scale"), - py::arg("sampling_ratio"), py::arg("aligned"), py::arg("clockwise")); - m.def("dynamic_point_to_voxel_forward", &dynamic_point_to_voxel_forward, - "dynamic_point_to_voxel_forward", py::arg("feats"), py::arg("coors"), - py::arg("reduce_type")); - m.def("dynamic_point_to_voxel_backward", &dynamic_point_to_voxel_backward, - "dynamic_point_to_voxel_backward", py::arg("grad_feats"), - py::arg("grad_reduced_feats"), py::arg("feats"), - py::arg("reduced_feats"), py::arg("coors_idx"), py::arg("reduce_count"), - py::arg("reduce_type")); - m.def("hard_voxelize_forward", &hard_voxelize_forward, - "hard_voxelize_forward", py::arg("points"), py::arg("voxel_size"), - py::arg("coors_range"), py::arg("voxels"), py::arg("coors"), - 
py::arg("num_points_per_voxel"), py::arg("voxel_num"), - py::arg("max_points"), py::arg("max_voxels"), py::arg("NDim"), - py::arg("deterministic")); - m.def("dynamic_voxelize_forward", &dynamic_voxelize_forward, - "dynamic_voxelize_forward", py::arg("points"), py::arg("voxel_size"), - py::arg("coors_range"), py::arg("coors"), py::arg("NDim")); - m.def("ms_deform_attn_forward", &ms_deform_attn_forward, - "forward function of multi-scale deformable attention", - py::arg("value"), py::arg("value_spatial_shapes"), - py::arg("value_level_start_index"), py::arg("sampling_locations"), - py::arg("attention_weights"), py::arg("im2col_step")); - m.def("ms_deform_attn_backward", &ms_deform_attn_backward, - "backward function of multi-scale deformable attention", - py::arg("value"), py::arg("value_spatial_shapes"), - py::arg("value_level_start_index"), py::arg("sampling_locations"), - py::arg("attention_weights"), py::arg("grad_output"), - py::arg("grad_value"), py::arg("grad_sampling_loc"), - py::arg("grad_attn_weight"), py::arg("im2col_step")); - m.def("border_align_forward", &border_align_forward, - "forward function of border_align", py::arg("input"), py::arg("boxes"), - py::arg("output"), py::arg("argmax_idx"), py::arg("pool_size")); - m.def("border_align_backward", &border_align_backward, - "backward function of border_align", py::arg("grad_output"), - py::arg("boxes"), py::arg("argmax_idx"), py::arg("grad_input"), - py::arg("pool_size")); - m.def("correlation_forward", &correlation_forward, "Correlation forward", - py::arg("input1"), py::arg("input2"), py::arg("output"), py::arg("kH"), - py::arg("kW"), py::arg("patchH"), py::arg("patchW"), py::arg("padH"), - py::arg("padW"), py::arg("dilationH"), py::arg("dilationW"), - py::arg("dilation_patchH"), py::arg("dilation_patchW"), py::arg("dH"), - py::arg("dW")); - m.def("correlation_backward", &correlation_backward, "Correlation backward", - py::arg("grad_output"), py::arg("input1"), py::arg("input2"), - py::arg("grad_input1"), py::arg("grad_input2"), py::arg("kH"), - py::arg("kW"), py::arg("patchH"), py::arg("patchW"), py::arg("padH"), - py::arg("padW"), py::arg("dilationH"), py::arg("dilationW"), - py::arg("dilation_patchH"), py::arg("dilation_patchW"), py::arg("dH"), - py::arg("dW")); - m.def("points_in_boxes_cpu_forward", &points_in_boxes_cpu_forward, - "points_in_boxes_cpu_forward", py::arg("boxes_tensor"), - py::arg("pts_tensor"), py::arg("pts_indices_tensor")); - m.def("points_in_boxes_part_forward", &points_in_boxes_part_forward, - "points_in_boxes_part_forward", py::arg("boxes_tensor"), - py::arg("pts_tensor"), py::arg("box_idx_of_points_tensor")); - m.def("points_in_boxes_all_forward", &points_in_boxes_all_forward, - "points_in_boxes_all_forward", py::arg("boxes_tensor"), - py::arg("pts_tensor"), py::arg("box_idx_of_points_tensor")); - m.def("roiaware_pool3d_forward", &roiaware_pool3d_forward, - "roiaware_pool3d_forward", py::arg("rois"), py::arg("pts"), - py::arg("pts_feature"), py::arg("argmax"), py::arg("pts_idx_of_voxels"), - py::arg("pooled_features"), py::arg("pool_method")); - m.def("roiaware_pool3d_backward", &roiaware_pool3d_backward, - "roiaware_pool3d_backward", py::arg("pts_idx_of_voxels"), - py::arg("argmax"), py::arg("grad_out"), py::arg("grad_in"), - py::arg("pool_method")); - m.def("rotated_feature_align_forward", &rotated_feature_align_forward, - "Feature Refine forward (CUDA)", py::arg("features"), - py::arg("best_bboxes"), py::arg("output"), py::arg("spatial_scale"), - py::arg("points")); - 
m.def("rotated_feature_align_backward", &rotated_feature_align_backward, - "Feature Refine backward (CUDA)", py::arg("top_grad"), - py::arg("best_bboxes"), py::arg("bottom_grad"), - py::arg("spatial_scale"), py::arg("points")); - m.def("riroi_align_rotated_forward", &riroi_align_rotated_forward, - "riroi_align_rotated forward", py::arg("features"), py::arg("rois"), - py::arg("output"), py::arg("pooled_height"), py::arg("pooled_width"), - py::arg("spatial_scale"), py::arg("num_samples"), - py::arg("num_orientations"), py::arg("clockwise")); - m.def("riroi_align_rotated_backward", &riroi_align_rotated_backward, - "riroi_align_rotated backward", py::arg("top_grad"), py::arg("rois"), - py::arg("bottom_grad"), py::arg("pooled_height"), - py::arg("pooled_width"), py::arg("spatial_scale"), - py::arg("num_samples"), py::arg("num_orientations"), - py::arg("clockwise")); - m.def("points_in_polygons_forward", &points_in_polygons_forward, - "points_in_polygons_forward", py::arg("points"), py::arg("polygons"), - py::arg("output")); - m.def("min_area_polygons", &min_area_polygons, "min_area_polygons", - py::arg("pointsets"), py::arg("polygons")); - m.def("active_rotated_filter_forward", &active_rotated_filter_forward, - "active_rotated_filter_forward", py::arg("input"), py::arg("indices"), - py::arg("output")); - m.def("active_rotated_filter_backward", &active_rotated_filter_backward, - "active_rotated_filter_backward", py::arg("grad_out"), - py::arg("indices"), py::arg("grad_in")); - m.def("convex_iou", &convex_iou, "convex_iou", py::arg("pointsets"), - py::arg("polygons"), py::arg("ious")); - m.def("convex_giou", &convex_giou, "convex_giou", py::arg("pointsets"), - py::arg("polygons"), py::arg("output")); - m.def("diff_iou_rotated_sort_vertices_forward", - &diff_iou_rotated_sort_vertices_forward, - "diff_iou_rotated_sort_vertices_forward", py::arg("vertices"), - py::arg("mask"), py::arg("num_valid")); - m.def("chamfer_distance_forward", &chamfer_distance_forward, - "chamfer_distance_forward", py::arg("xyz1"), py::arg("xyz2"), - py::arg("dist1"), py::arg("dist2"), py::arg("idx1"), py::arg("idx2")); - m.def("chamfer_distance_backward", &chamfer_distance_backward, - "chamfer_distance_backward", py::arg("xyz1"), py::arg("xyz2"), - py::arg("gradxyz1"), py::arg("gradxyz2"), py::arg("graddist1"), - py::arg("graddist2"), py::arg("idx1"), py::arg("idx2")); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/riroi_align_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/riroi_align_rotated.cpp deleted file mode 100644 index 81ffa9fd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/riroi_align_rotated.cpp +++ /dev/null @@ -1,42 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void riroi_align_rotated_forward_impl(Tensor features, Tensor rois, - Tensor output, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise) { - DISPATCH_DEVICE_IMPL(riroi_align_rotated_forward_impl, features, rois, output, - pooled_height, pooled_width, spatial_scale, num_samples, - num_orientations, clockwise); -} - -void riroi_align_rotated_backward_impl(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise) { - DISPATCH_DEVICE_IMPL(riroi_align_rotated_backward_impl, top_grad, rois, - bottom_grad, pooled_height, pooled_width, spatial_scale, - num_samples, num_orientations, clockwise); -} - -void riroi_align_rotated_forward(Tensor features, Tensor rois, Tensor output, - int pooled_height, int pooled_width, - float spatial_scale, int num_samples, - int num_orientations, bool clockwise) { - riroi_align_rotated_forward_impl(features, rois, output, pooled_height, - pooled_width, spatial_scale, num_samples, - num_orientations, clockwise); -} - -void riroi_align_rotated_backward(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int pooled_height, - int pooled_width, float spatial_scale, - int num_samples, int num_orientations, - bool clockwise) { - riroi_align_rotated_backward_impl(top_grad, rois, bottom_grad, pooled_height, - pooled_width, spatial_scale, num_samples, - num_orientations, clockwise); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_align.cpp deleted file mode 100644 index 6e707739..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_align.cpp +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void roi_align_forward_impl(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - DISPATCH_DEVICE_IMPL(roi_align_forward_impl, input, rois, output, argmax_y, - argmax_x, aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} - -void roi_align_backward_impl(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - int pool_mode, bool aligned) { - DISPATCH_DEVICE_IMPL(roi_align_backward_impl, grad_output, rois, argmax_y, - argmax_x, grad_input, aligned_height, aligned_width, - spatial_scale, sampling_ratio, pool_mode, aligned); -} - -void roi_align_forward(Tensor input, Tensor rois, Tensor output, - Tensor argmax_y, Tensor argmax_x, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned) { - roi_align_forward_impl(input, rois, output, argmax_y, argmax_x, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} - -void roi_align_backward(Tensor grad_output, Tensor rois, Tensor argmax_y, - Tensor argmax_x, Tensor grad_input, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned) { - roi_align_backward_impl(grad_output, rois, argmax_y, argmax_x, grad_input, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, pool_mode, aligned); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_align_rotated.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_align_rotated.cpp deleted file mode 100644 index 77ea5ce7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_align_rotated.cpp +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void roi_align_rotated_forward_impl(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise) { - DISPATCH_DEVICE_IMPL(roi_align_rotated_forward_impl, input, rois, output, - aligned_height, aligned_width, spatial_scale, - sampling_ratio, aligned, clockwise); -} - -void roi_align_rotated_backward_impl(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise) { - DISPATCH_DEVICE_IMPL(roi_align_rotated_backward_impl, top_grad, rois, - bottom_grad, aligned_height, aligned_width, - spatial_scale, sampling_ratio, aligned, clockwise); -} - -void roi_align_rotated_forward(Tensor input, Tensor rois, Tensor output, - int aligned_height, int aligned_width, - float spatial_scale, int sampling_ratio, - bool aligned, bool clockwise) { - roi_align_rotated_forward_impl(input, rois, output, aligned_height, - aligned_width, spatial_scale, sampling_ratio, - aligned, clockwise); -} - -void roi_align_rotated_backward(Tensor top_grad, Tensor rois, - Tensor bottom_grad, int aligned_height, - int aligned_width, float spatial_scale, - int sampling_ratio, bool aligned, - bool clockwise) { - roi_align_rotated_backward_impl(top_grad, rois, bottom_grad, aligned_height, - aligned_width, spatial_scale, sampling_ratio, - aligned, clockwise); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_pool.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_pool.cpp deleted file mode 100644 index bba90b80..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roi_pool.cpp +++ /dev/null @@ -1,31 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void roi_pool_forward_impl(Tensor input, Tensor rois, Tensor output, - Tensor argmax, int pooled_height, int pooled_width, - float spatial_scale) { - DISPATCH_DEVICE_IMPL(roi_pool_forward_impl, input, rois, output, argmax, - pooled_height, pooled_width, spatial_scale); -} - -void roi_pool_backward_impl(Tensor grad_output, Tensor rois, Tensor argmax, - Tensor grad_input, int pooled_height, - int pooled_width, float spatial_scale) { - DISPATCH_DEVICE_IMPL(roi_pool_backward_impl, grad_output, rois, argmax, - grad_input, pooled_height, pooled_width, spatial_scale); -} - -void roi_pool_forward(Tensor input, Tensor rois, Tensor output, Tensor argmax, - int pooled_height, int pooled_width, - float spatial_scale) { - roi_pool_forward_impl(input, rois, output, argmax, pooled_height, - pooled_width, spatial_scale); -} - -void roi_pool_backward(Tensor grad_output, Tensor rois, Tensor argmax, - Tensor grad_input, int pooled_height, int pooled_width, - float spatial_scale) { - roi_pool_backward_impl(grad_output, rois, argmax, grad_input, pooled_height, - pooled_width, spatial_scale); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roiaware_pool3d.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roiaware_pool3d.cpp deleted file mode 100644 index 6cf9cf09..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roiaware_pool3d.cpp +++ /dev/null @@ -1,72 +0,0 @@ -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void roiaware_pool3d_forward_impl(int boxes_num, int pts_num, int channels, - int max_pts_each_voxel, int out_x, int out_y, - int out_z, const Tensor rois, - const Tensor pts, const Tensor pts_feature, - Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method) { - DISPATCH_DEVICE_IMPL(roiaware_pool3d_forward_impl, boxes_num, pts_num, - channels, max_pts_each_voxel, out_x, out_y, out_z, rois, - pts, pts_feature, argmax, pts_idx_of_voxels, - pooled_features, pool_method); -} - -void roiaware_pool3d_backward_impl(int boxes_num, int out_x, int out_y, - int out_z, int channels, - int max_pts_each_voxel, - const Tensor pts_idx_of_voxels, - const Tensor argmax, const Tensor grad_out, - Tensor grad_in, int pool_method) { - DISPATCH_DEVICE_IMPL(roiaware_pool3d_backward_impl, boxes_num, out_x, out_y, - out_z, channels, max_pts_each_voxel, pts_idx_of_voxels, - argmax, grad_out, grad_in, pool_method); -} - -void roiaware_pool3d_forward(Tensor rois, Tensor pts, Tensor pts_feature, - Tensor argmax, Tensor pts_idx_of_voxels, - Tensor pooled_features, int pool_method) { - // params rois: (N, 7) [x, y, z, x_size, y_size, z_size, ry] in LiDAR - // coordinate - // params pts: (npoints, 3) [x, y, z] in LiDAR coordinate - // params pts_feature: (npoints, C) - // params argmax: (N, out_x, out_y, out_z, C) - // params pts_idx_of_voxels: (N, out_x, out_y, out_z, max_pts_each_voxel) - // params pooled_features: (N, out_x, out_y, out_z, C) - // params pool_method: 0: max_pool 1: avg_pool - int boxes_num = rois.size(0); - int pts_num = pts.size(0); - int channels = pts_feature.size(1); - int max_pts_each_voxel = pts_idx_of_voxels.size(4); // index 0 is the counter - int out_x = pts_idx_of_voxels.size(1); - int out_y = pts_idx_of_voxels.size(2); - int out_z = pts_idx_of_voxels.size(3); - assert((out_x < 256) && (out_y < 256) && - (out_z < 256)); // we encode index with 8bit - - 
roiaware_pool3d_forward_impl(boxes_num, pts_num, channels, max_pts_each_voxel, - out_x, out_y, out_z, rois, pts, pts_feature, - argmax, pts_idx_of_voxels, pooled_features, - pool_method); -} - -void roiaware_pool3d_backward(Tensor pts_idx_of_voxels, Tensor argmax, - Tensor grad_out, Tensor grad_in, - int pool_method) { - // params pts_idx_of_voxels: (N, out_x, out_y, out_z, max_pts_each_voxel) - // params argmax: (N, out_x, out_y, out_z, C) - // params grad_out: (N, out_x, out_y, out_z, C) - // params grad_in: (npoints, C), return value - // params pool_method: 0: max_pool 1: avg_pool - int boxes_num = pts_idx_of_voxels.size(0); - int out_x = pts_idx_of_voxels.size(1); - int out_y = pts_idx_of_voxels.size(2); - int out_z = pts_idx_of_voxels.size(3); - int max_pts_each_voxel = pts_idx_of_voxels.size(4); // index 0 is the counter - int channels = grad_out.size(4); - - roiaware_pool3d_backward_impl(boxes_num, out_x, out_y, out_z, channels, - max_pts_each_voxel, pts_idx_of_voxels, argmax, - grad_out, grad_in, pool_method); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roipoint_pool3d.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roipoint_pool3d.cpp deleted file mode 100644 index a10080b7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/roipoint_pool3d.cpp +++ /dev/null @@ -1,39 +0,0 @@ -/* -Modified from -https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/roipoint_pool3d/src/roipoint_pool3d.cpp -Point cloud feature pooling -Written by Shaoshuai Shi -All Rights Reserved 2018. -*/ - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void roipoint_pool3d_forward_impl(int batch_size, int pts_num, int boxes_num, - int feature_in_len, int sampled_pts_num, - const Tensor xyz, const Tensor boxes3d, - const Tensor pts_feature, - Tensor pooled_features, - Tensor pooled_empty_flag) { - DISPATCH_DEVICE_IMPL(roipoint_pool3d_forward_impl, batch_size, pts_num, - boxes_num, feature_in_len, sampled_pts_num, xyz, boxes3d, - pts_feature, pooled_features, pooled_empty_flag); -} - -void roipoint_pool3d_forward(Tensor xyz, Tensor boxes3d, Tensor pts_feature, - Tensor pooled_features, Tensor pooled_empty_flag) { - // params xyz: (B, N, 3) - // params boxes3d: (B, M, 7) - // params pts_feature: (B, N, C) - // params pooled_features: (B, M, 512, 3+C) - // params pooled_empty_flag: (B, M) - int batch_size = xyz.size(0); - int pts_num = xyz.size(1); - int boxes_num = boxes3d.size(1); - int feature_in_len = pts_feature.size(2); - int sampled_pts_num = pooled_features.size(2); - - roipoint_pool3d_forward_impl(batch_size, pts_num, boxes_num, feature_in_len, - sampled_pts_num, xyz, boxes3d, pts_feature, - pooled_features, pooled_empty_flag); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/rotated_feature_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/rotated_feature_align.cpp deleted file mode 100644 index 71fe0c9a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/rotated_feature_align.cpp +++ /dev/null @@ -1,39 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-// Modified from -// https://github.com/SJTU-Thinklab-Det/r3det-on-mmdetection/blob/master/mmdet/ops/fr/src/feature_refine_cuda.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void rotated_feature_align_forward_impl(const Tensor features, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor output) { - DISPATCH_DEVICE_IMPL(rotated_feature_align_forward_impl, features, - best_bboxes, spatial_scale, points, output); -} - -void rotated_feature_align_backward_impl(const Tensor top_grad, - const Tensor best_bboxes, - const float spatial_scale, - const int points, Tensor bottom_grad) { - DISPATCH_DEVICE_IMPL(rotated_feature_align_backward_impl, top_grad, - best_bboxes, spatial_scale, points, bottom_grad); -} - -void rotated_feature_align_forward(const Tensor features, - const Tensor best_bboxes, Tensor output, - const float spatial_scale, - const int points) { - rotated_feature_align_forward_impl(features, best_bboxes, spatial_scale, - points, output); -} - -void rotated_feature_align_backward(const Tensor top_grad, - const Tensor best_bboxes, - Tensor bottom_grad, - const float spatial_scale, - const int points) { - rotated_feature_align_backward_impl(top_grad, best_bboxes, spatial_scale, - points, bottom_grad); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/scatter_points.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/scatter_points.cpp deleted file mode 100644 index 0de8ebf6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/scatter_points.cpp +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -typedef enum { SUM = 0, MEAN = 1, MAX = 2 } reduce_t; - -std::vector dynamic_point_to_voxel_forward_impl( - const torch::Tensor &feats, const torch::Tensor &coors, - const reduce_t reduce_type) { - return DISPATCH_DEVICE_IMPL(dynamic_point_to_voxel_forward_impl, feats, coors, - reduce_type); -} - -void dynamic_point_to_voxel_backward_impl( - torch::Tensor &grad_feats, const torch::Tensor &grad_reduced_feats, - const torch::Tensor &feats, const torch::Tensor &reduced_feats, - const torch::Tensor &coors_idx, const torch::Tensor &reduce_count, - const reduce_t reduce_type) { - DISPATCH_DEVICE_IMPL(dynamic_point_to_voxel_backward_impl, grad_feats, - grad_reduced_feats, feats, reduced_feats, coors_idx, - reduce_count, reduce_type); -} - -inline reduce_t convert_reduce_type(const std::string &reduce_type) { - if (reduce_type == "max") - return reduce_t::MAX; - else if (reduce_type == "sum") - return reduce_t::SUM; - else if (reduce_type == "mean") - return reduce_t::MEAN; - else - TORCH_CHECK(false, "do not support reduce type " + reduce_type) - return reduce_t::SUM; -} - -std::vector dynamic_point_to_voxel_forward( - const torch::Tensor &feats, const torch::Tensor &coors, - const std::string &reduce_type) { - return dynamic_point_to_voxel_forward_impl(feats, coors, - convert_reduce_type(reduce_type)); -} - -void dynamic_point_to_voxel_backward(torch::Tensor &grad_feats, - const torch::Tensor &grad_reduced_feats, - const torch::Tensor &feats, - const torch::Tensor &reduced_feats, - const torch::Tensor &coors_idx, - const torch::Tensor &reduce_count, - const std::string &reduce_type) { - dynamic_point_to_voxel_backward_impl(grad_feats, grad_reduced_feats, feats, - reduced_feats, coors_idx, reduce_count, - 
convert_reduce_type(reduce_type)); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/sync_bn.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/sync_bn.cpp deleted file mode 100644 index fd5a5132..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/sync_bn.cpp +++ /dev/null @@ -1,69 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void sync_bn_forward_mean_impl(const Tensor input, Tensor mean) { - DISPATCH_DEVICE_IMPL(sync_bn_forward_mean_impl, input, mean); -} - -void sync_bn_forward_var_impl(const Tensor input, const Tensor mean, - Tensor var) { - DISPATCH_DEVICE_IMPL(sync_bn_forward_var_impl, input, mean, var); -} - -void sync_bn_forward_output_impl(const Tensor input, const Tensor mean, - const Tensor var, Tensor running_mean, - Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size) { - DISPATCH_DEVICE_IMPL(sync_bn_forward_output_impl, input, mean, var, - running_mean, running_var, weight, bias, norm, std, - output, eps, momentum, group_size); -} - -void sync_bn_backward_param_impl(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias) { - DISPATCH_DEVICE_IMPL(sync_bn_backward_param_impl, grad_output, norm, - grad_weight, grad_bias); -} - -void sync_bn_backward_data_impl(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, const Tensor norm, - const Tensor std, Tensor grad_input) { - DISPATCH_DEVICE_IMPL(sync_bn_backward_data_impl, grad_output, weight, - grad_weight, grad_bias, norm, std, grad_input); -} - -void sync_bn_forward_mean(const Tensor input, Tensor mean) { - sync_bn_forward_mean_impl(input, mean); -} - -void sync_bn_forward_var(const Tensor input, const Tensor mean, Tensor var) { - sync_bn_forward_var_impl(input, mean, var); -} - -void sync_bn_forward_output(const Tensor input, const Tensor mean, - const Tensor var, const Tensor weight, - const Tensor bias, Tensor running_mean, - Tensor running_var, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size) { - sync_bn_forward_output_impl(input, mean, var, running_mean, running_var, - weight, bias, norm, std, output, eps, momentum, - group_size); -} - -void sync_bn_backward_param(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias) { - sync_bn_backward_param_impl(grad_output, norm, grad_weight, grad_bias); -} - -void sync_bn_backward_data(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, const Tensor grad_bias, - const Tensor norm, const Tensor std, - Tensor grad_input) { - sync_bn_backward_data_impl(grad_output, weight, grad_weight, grad_bias, norm, - std, grad_input); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/three_interpolate.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/three_interpolate.cpp deleted file mode 100644 index 1e0ec71b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/three_interpolate.cpp +++ /dev/null @@ -1,33 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/interpolate.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void three_interpolate_forward_impl(int b, int c, int m, int n, - 
const Tensor points, const Tensor idx, - const Tensor weight, Tensor out) { - DISPATCH_DEVICE_IMPL(three_interpolate_forward_impl, b, c, m, n, points, idx, - weight, out); -} - -void three_interpolate_backward_impl(int b, int c, int n, int m, - const Tensor grad_out, const Tensor idx, - const Tensor weight, Tensor grad_points) { - DISPATCH_DEVICE_IMPL(three_interpolate_backward_impl, b, c, n, m, grad_out, - idx, weight, grad_points); -} - -void three_interpolate_forward(Tensor points_tensor, Tensor idx_tensor, - Tensor weight_tensor, Tensor out_tensor, int b, - int c, int m, int n) { - three_interpolate_forward_impl(b, c, m, n, points_tensor, idx_tensor, - weight_tensor, out_tensor); -} - -void three_interpolate_backward(Tensor grad_out_tensor, Tensor idx_tensor, - Tensor weight_tensor, Tensor grad_points_tensor, - int b, int c, int n, int m) { - three_interpolate_backward_impl(b, c, n, m, grad_out_tensor, idx_tensor, - weight_tensor, grad_points_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/three_nn.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/three_nn.cpp deleted file mode 100644 index b629200c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/three_nn.cpp +++ /dev/null @@ -1,18 +0,0 @@ -// Modified from -// https://github.com/sshaoshuai/Pointnet2.PyTorch/tree/master/pointnet2/src/interpolate.cpp - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void three_nn_forward_impl(int b, int n, int m, const Tensor unknown, - const Tensor known, Tensor dist2, Tensor idx) { - DISPATCH_DEVICE_IMPL(three_nn_forward_impl, b, n, m, unknown, known, dist2, - idx); -} - -void three_nn_forward(Tensor unknown_tensor, Tensor known_tensor, - Tensor dist2_tensor, Tensor idx_tensor, int b, int n, - int m) { - three_nn_forward_impl(b, n, m, unknown_tensor, known_tensor, dist2_tensor, - idx_tensor); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/tin_shift.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/tin_shift.cpp deleted file mode 100644 index b03f5875..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/tin_shift.cpp +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -void tin_shift_forward_impl(Tensor input, Tensor shift, Tensor output) { - DISPATCH_DEVICE_IMPL(tin_shift_forward_impl, input, shift, output); -} - -void tin_shift_backward_impl(Tensor grad_output, Tensor shift, - Tensor grad_input) { - DISPATCH_DEVICE_IMPL(tin_shift_backward_impl, grad_output, shift, grad_input); -} - -void tin_shift_forward(Tensor input, Tensor shift, Tensor output) { - tin_shift_forward_impl(input, shift, output); -} - -void tin_shift_backward(Tensor grad_output, Tensor shift, Tensor grad_input) { - tin_shift_backward_impl(grad_output, shift, grad_input); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/upfirdn2d.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/upfirdn2d.cpp deleted file mode 100644 index dd325bd7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/upfirdn2d.cpp +++ /dev/null @@ -1,118 +0,0 @@ -// Modified from -// https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.cpp - -/* -Copyright (c) 2021, NVIDIA Corporation. All rights reserved. 
- -NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -Augmentation (ADA) -======================================================================= - -1. Definitions - -"Licensor" means any person or entity that distributes its Work. - -"Software" means the original work of authorship made available under -this License. - -"Work" means the Software and any additions to or derivative works of -the Software that are made available under this License. - -The terms "reproduce," "reproduction," "derivative works," and -"distribution" have the meaning as provided under U.S. copyright law; -provided, however, that for the purposes of this License, derivative -works shall not include works that remain separable from, or merely -link (or bind by name) to the interfaces of, the Work. - -Works, including the Software, are "made available" under this License -by including in or with the Work either (a) a copyright notice -referencing the applicability of this License to the Work, or (b) a -copy of this License. - -2. License Grants - - 2.1 Copyright Grant. Subject to the terms and conditions of this - License, each Licensor grants to you a perpetual, worldwide, - non-exclusive, royalty-free, copyright license to reproduce, - prepare derivative works of, publicly display, publicly perform, - sublicense and distribute its Work and any resulting derivative - works in any form. - -3. Limitations - - 3.1 Redistribution. You may reproduce or distribute the Work only - if (a) you do so under this License, (b) you include a complete - copy of this License with your distribution, and (c) you retain - without modification any copyright, patent, trademark, or - attribution notices that are present in the Work. - - 3.2 Derivative Works. You may specify that additional or different - terms apply to the use, reproduction, and distribution of your - derivative works of the Work ("Your Terms") only if (a) Your Terms - provide that the use limitation in Section 3.3 applies to your - derivative works, and (b) you identify the specific derivative - works that are subject to Your Terms. Notwithstanding Your Terms, - this License (including the redistribution requirements in Section - 3.1) will continue to apply to the Work itself. - - 3.3 Use Limitation. The Work and any derivative works thereof only - may be used or intended for use non-commercially. Notwithstanding - the foregoing, NVIDIA and its affiliates may use the Work and any - derivative works commercially. As used herein, "non-commercially" - means for research or evaluation purposes only. - - 3.4 Patent Claims. If you bring or threaten to bring a patent claim - against any Licensor (including any claim, cross-claim or - counterclaim in a lawsuit) to enforce any patents that you allege - are infringed by any Work, then your rights under this License from - such Licensor (including the grant in Section 2.1) will terminate - immediately. - - 3.5 Trademarks. This License does not grant any rights to use any - Licensor’s or its affiliates’ names, logos, or trademarks, except - as necessary to reproduce the notices described in this License. - - 3.6 Termination. If you violate any term of this License, then your - rights under this License (including the grant in Section 2.1) will - terminate immediately. - -4. Disclaimer of Warranty. 
- -THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -THIS LICENSE. - -5. Limitation of Liability. - -EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -(INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -THE POSSIBILITY OF SUCH DAMAGES. - -======================================================================= -*/ - -#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -torch::Tensor upfirdn2d_op_impl(const torch::Tensor& input, - const torch::Tensor& kernel, int up_x, int up_y, - int down_x, int down_y, int pad_x0, int pad_x1, - int pad_y0, int pad_y1) { - return DISPATCH_DEVICE_IMPL(upfirdn2d_op_impl, input, kernel, up_x, up_y, - down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, int pad_x0, - int pad_x1, int pad_y0, int pad_y1) { - return upfirdn2d_op_impl(input, kernel, up_x, up_y, down_x, down_y, pad_x0, - pad_x1, pad_y0, pad_y1); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/voxelization.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/voxelization.cpp deleted file mode 100644 index 7946be61..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/pytorch/voxelization.cpp +++ /dev/null @@ -1,74 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved. 
-#include "pytorch_cpp_helper.hpp" -#include "pytorch_device_registry.hpp" - -int hard_voxelize_forward_impl(const at::Tensor &points, at::Tensor &voxels, - at::Tensor &coors, - at::Tensor &num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim = 3) { - return DISPATCH_DEVICE_IMPL(hard_voxelize_forward_impl, points, voxels, coors, - num_points_per_voxel, voxel_size, coors_range, - max_points, max_voxels, NDim); -} - -int nondeterministic_hard_voxelize_forward_impl( - const at::Tensor &points, at::Tensor &voxels, at::Tensor &coors, - at::Tensor &num_points_per_voxel, const std::vector voxel_size, - const std::vector coors_range, const int max_points, - const int max_voxels, const int NDim = 3) { - return DISPATCH_DEVICE_IMPL(nondeterministic_hard_voxelize_forward_impl, - points, voxels, coors, num_points_per_voxel, - voxel_size, coors_range, max_points, max_voxels, - NDim); -} - -void dynamic_voxelize_forward_impl(const at::Tensor &points, at::Tensor &coors, - const std::vector voxel_size, - const std::vector coors_range, - const int NDim = 3) { - DISPATCH_DEVICE_IMPL(dynamic_voxelize_forward_impl, points, coors, voxel_size, - coors_range, NDim); -} - -void hard_voxelize_forward(const at::Tensor &points, - const at::Tensor &voxel_size, - const at::Tensor &coors_range, at::Tensor &voxels, - at::Tensor &coors, at::Tensor &num_points_per_voxel, - at::Tensor &voxel_num, const int max_points, - const int max_voxels, const int NDim = 3, - const bool deterministic = true) { - int64_t *voxel_num_data = voxel_num.data_ptr(); - std::vector voxel_size_v( - voxel_size.data_ptr(), - voxel_size.data_ptr() + voxel_size.numel()); - std::vector coors_range_v( - coors_range.data_ptr(), - coors_range.data_ptr() + coors_range.numel()); - - if (deterministic) { - *voxel_num_data = hard_voxelize_forward_impl( - points, voxels, coors, num_points_per_voxel, voxel_size_v, - coors_range_v, max_points, max_voxels, NDim); - } else { - *voxel_num_data = nondeterministic_hard_voxelize_forward_impl( - points, voxels, coors, num_points_per_voxel, voxel_size_v, - coors_range_v, max_points, max_voxels, NDim); - } -} - -void dynamic_voxelize_forward(const at::Tensor &points, - const at::Tensor &voxel_size, - const at::Tensor &coors_range, at::Tensor &coors, - const int NDim = 3) { - std::vector voxel_size_v( - voxel_size.data_ptr(), - voxel_size.data_ptr() + voxel_size.numel()); - std::vector coors_range_v( - coors_range.data_ptr(), - coors_range.data_ptr() + coors_range.numel()); - dynamic_voxelize_forward_impl(points, coors, voxel_size_v, coors_range_v, - NDim); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_corner_pool.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_corner_pool.cpp deleted file mode 100644 index d405a7d6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_corner_pool.cpp +++ /dev/null @@ -1,217 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "trt_corner_pool.hpp" - -#include - -#include "trt_serialize.hpp" - -void CornerPoolForwardLauncher_float(const float *input, float *output, - const int batch_size, const int channels, - const int height, const int width, - const int pool_type, cudaStream_t stream); - -namespace { -static const char *PLUGIN_VERSION{"1"}; -static const char *CORNER_POOL_PLUGIN_NAME{"MMCVCornerPool"}; -} // namespace - -CornerPoolPluginDynamic::CornerPoolPluginDynamic(const std::string &name, - TRT_CORNER_POOL_TYPE poolType) - : mLayerName(name), mPoolType(poolType) {} - -CornerPoolPluginDynamic::CornerPoolPluginDynamic(const std::string name, - const void *data, - size_t length) - : mLayerName(name) { - deserialize_value(&data, &length, &mPoolType); -} - -CornerPoolPluginDynamic::~CornerPoolPluginDynamic() {} - -nvinfer1::IPluginV2DynamicExt *CornerPoolPluginDynamic::clone() const { - CornerPoolPluginDynamic *plugin = - new CornerPoolPluginDynamic(mLayerName, mPoolType); - plugin->setPluginNamespace(getPluginNamespace()); - - return plugin; -} - -nvinfer1::DimsExprs CornerPoolPluginDynamic::getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) { - return inputs[0]; -} - -bool CornerPoolPluginDynamic::supportsFormatCombination( - int pos, const nvinfer1::PluginTensorDesc *inOut, int nbInputs, - int nbOutputs) { - switch (pos) { - // input[0] - case 0: - return inOut[pos].type == nvinfer1::DataType::kFLOAT && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR; - // output[0] - case 1: - return inOut[pos].type == inOut[0].type && - inOut[pos].format == inOut[0].format; - default: - return false; - } -} - -void CornerPoolPluginDynamic::configurePlugin( - const nvinfer1::DynamicPluginTensorDesc *inputs, int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *outputs, int nbOutputs) {} - -size_t CornerPoolPluginDynamic::getWorkspaceSize( - const nvinfer1::PluginTensorDesc *inputs, int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, int nbOutputs) const { - int sizeof_dtype = mmcv::getElementSize(outputs[0].type); -} - -int CornerPoolPluginDynamic::enqueue( - const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, const void *const *inputs, - void *const *outputs, void *workSpace, cudaStream_t stream) { - const void *input = inputs[0]; - void *output_value = outputs[0]; - - const int batch_size = inputDesc[0].dims.d[0]; - const int channels = inputDesc[0].dims.d[1]; - const int height = inputDesc[0].dims.d[2]; - const int width = inputDesc[0].dims.d[3]; - - CornerPoolForwardLauncher_float((float *)input, (float *)output_value, - batch_size, channels, height, width, - int(mPoolType), stream); - - return 0; -} - -nvinfer1::DataType CornerPoolPluginDynamic::getOutputDataType( - int index, const nvinfer1::DataType *inputTypes, int nbInputs) const { - return inputTypes[0]; -} - -// IPluginV2 Methods -const char *CornerPoolPluginDynamic::getPluginType() const { - switch (mPoolType) { - case TRT_CORNER_POOL_TYPE::TRT_TOP_POOL: - case TRT_CORNER_POOL_TYPE::TRT_BOTTOM_POOL: - case TRT_CORNER_POOL_TYPE::TRT_LEFT_POOL: - case TRT_CORNER_POOL_TYPE::TRT_RIGHT_POOL: - return CORNER_POOL_PLUGIN_NAME; - - default: - return "UnknownpoolType"; - } -} - -const char *CornerPoolPluginDynamic::getPluginVersion() const { - return PLUGIN_VERSION; -} - -int CornerPoolPluginDynamic::getNbOutputs() const { return 1; } - -int CornerPoolPluginDynamic::initialize() { return 0; } - -void 
CornerPoolPluginDynamic::terminate() {} - -size_t CornerPoolPluginDynamic::getSerializationSize() const { - return sizeof(mPoolType); -} - -void CornerPoolPluginDynamic::serialize(void *buffer) const { - serialize_value(&buffer, mPoolType); -} - -void CornerPoolPluginDynamic::destroy() { - // This gets called when the network containing plugin is destroyed - delete this; -} - -void CornerPoolPluginDynamic::setPluginNamespace(const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *CornerPoolPluginDynamic::getPluginNamespace() const { - return mNamespace.c_str(); -} - -CornerPoolPluginDynamicCreator::CornerPoolPluginDynamicCreator() { - mPluginAttributes.clear(); - mPluginAttributes.emplace_back(nvinfer1::PluginField("mode")); - mFC.nbFields = mPluginAttributes.size(); - mFC.fields = mPluginAttributes.data(); -} - -const char *CornerPoolPluginDynamicCreator::getPluginName() const { - return CORNER_POOL_PLUGIN_NAME; -} - -const char *CornerPoolPluginDynamicCreator::getPluginVersion() const { - return PLUGIN_VERSION; -} - -const nvinfer1::PluginFieldCollection * -CornerPoolPluginDynamicCreator::getFieldNames() { - return &mFC; -} - -nvinfer1::IPluginV2 *CornerPoolPluginDynamicCreator::createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) { - TRT_CORNER_POOL_TYPE poolType; - int poolMode = -1; - - for (int i = 0; i < fc->nbFields; i++) { - if (fc->fields[i].data == nullptr) { - continue; - } - std::string field_name(fc->fields[i].name); - - if (field_name.compare("mode") == 0) { - poolMode = static_cast(fc->fields[i].data)[0]; - } - } - - assert(poolMode >= 0 && poolMode <= 3); - switch (poolMode) { - case 0: - poolType = TRT_CORNER_POOL_TYPE::TRT_TOP_POOL; - break; - case 1: - poolType = TRT_CORNER_POOL_TYPE::TRT_BOTTOM_POOL; - break; - case 2: - poolType = TRT_CORNER_POOL_TYPE::TRT_LEFT_POOL; - break; - case 3: - poolType = TRT_CORNER_POOL_TYPE::TRT_RIGHT_POOL; - break; - - default: - break; - } - - CornerPoolPluginDynamic *plugin = new CornerPoolPluginDynamic(name, poolType); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -nvinfer1::IPluginV2 *CornerPoolPluginDynamicCreator::deserializePlugin( - const char *name, const void *serialData, size_t serialLength) { - // This object will be deleted when the network is destroyed, which will - // call FCPluginDynamic::destroy() - auto plugin = new CornerPoolPluginDynamic(name, serialData, serialLength); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -void CornerPoolPluginDynamicCreator::setPluginNamespace( - const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *CornerPoolPluginDynamicCreator::getPluginNamespace() const { - return mNamespace.c_str(); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_corner_pool_kernel.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_corner_pool_kernel.cu deleted file mode 100644 index ecf9ee6e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_corner_pool_kernel.cu +++ /dev/null @@ -1,110 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "common_cuda_helper.hpp" -#include "trt_cuda_helper.cuh" -#include "trt_plugin_helper.hpp" - -template -__global__ void top_bottom_pool_kernel(const scalar_t *input, scalar_t *output, - const int batch_size, const int channels, - const int height, const int width, - const int pool_type) { - const int nthreads = batch_size * channels * width; - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int n_idx = index / (channels * width); // batch - int w_idx = index % width; // width - int c_idx = (index / width) % channels; // channels - int offset_n = n_idx * channels * width * height; - int offset_n_c = offset_n + c_idx * width * height; - int direction = -1; // in [-1, 1], default for TopPool - int index_start = height - 2; // default for TopPool - // pool_type in [0, 1] - if (pool_type == 0) { - // TopPool - // directly copy the most bottom value from input to output - output[offset_n_c + (height - 1) * width + w_idx] = - input[offset_n_c + (height - 1) * width + w_idx]; - } else { - // BottomPool - // directly copy the most top value from input to output - output[offset_n_c + w_idx] = input[offset_n_c + w_idx]; - index_start = 1; - direction = 1; - } - // do pool - for (int h = index_start; h >= 0 && h < height; h += direction) { - output[offset_n_c + h * width + w_idx] = - max(output[offset_n_c + (h - direction) * width + w_idx], - input[offset_n_c + h * width + w_idx]); - } - } -} - -template -__global__ void left_right_pool_kernel(const scalar_t *input, scalar_t *output, - const int batch_size, const int channels, - const int height, const int width, - const int pool_type) { - const int nthreads = batch_size * channels * height; - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int n_idx = index / (channels * height); // batch - int h_idx = index % height; // height - int c_idx = (index / height) % channels; // channels - int offset_n = n_idx * channels * width * height; - int offset_n_c = offset_n + c_idx * width * height; - int offset_n_c_h = offset_n_c + h_idx * width; - int direction = -1; // in [-1, 1], default for LeftPool - int index_start = width - 2; // default for LeftPool - // pool_type in [2, 3] - if (pool_type == 2) { - // LeftPool - // directly copy the most right value from input to output - output[offset_n_c_h + width - 1] = input[offset_n_c_h + width - 1]; - } else { - // RightPool - // directly copy the most left value from input to output - output[offset_n_c_h] = input[offset_n_c_h]; - index_start = 1; - direction = 1; - } - // do pool - for (int w = index_start; w >= 0 && w < width; w += direction) { - output[offset_n_c_h + w] = - max(output[offset_n_c_h + w - direction], input[offset_n_c_h + w]); - } - } -} - -template -void CornerPoolForwardLauncher(const scalar_t *input, scalar_t *output, - const int batch_size, const int channels, - const int height, const int width, - const int pool_type, cudaStream_t stream) { - int nthreads = -1, col_block = -1; - - switch (pool_type) { - case 0: - case 1: - nthreads = batch_size * channels * width; - col_block = GET_BLOCKS(nthreads, THREADS_PER_BLOCK); - top_bottom_pool_kernel - <<>>( - input, output, batch_size, channels, height, width, pool_type); - break; - case 2: - case 3: - nthreads = batch_size * channels * height; - col_block = GET_BLOCKS(nthreads, THREADS_PER_BLOCK); - left_right_pool_kernel - <<>>( - input, output, batch_size, channels, height, width, pool_type); - break; - } -} - -void CornerPoolForwardLauncher_float(const float *input, float *output, - const int batch_size, const int channels, - 
const int height, const int width, - const int pool_type, cudaStream_t stream) { - CornerPoolForwardLauncher(input, output, batch_size, channels, height, - width, pool_type, stream); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_cuda_helper.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_cuda_helper.cu deleted file mode 100644 index f76c5f22..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_cuda_helper.cu +++ /dev/null @@ -1,91 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include - -#include "common_cuda_helper.hpp" -#include "trt_cuda_helper.cuh" -#include "trt_plugin_helper.hpp" - -using mmcv::TensorDesc; - -template -__global__ void copy_permute_kernel(scalar_t *dst, const scalar_t *src, int n, - TensorDesc ts_src_stride, - TensorDesc ts_dst_stride, - TensorDesc ts_permute) { - const int src_dim = ts_src_stride.dim; - int *src_stride = &(ts_src_stride.stride[0]); - int *dst_stride = &(ts_dst_stride.stride[0]); - int *permute = &(ts_permute.shape[0]); - CUDA_1D_KERNEL_LOOP(index, n) { - size_t dst_index = index; - size_t src_index = 0; - for (int i = 0; i < src_dim; ++i) { - int dim_index = dst_index / dst_stride[i]; - dst_index = dst_index % dst_stride[i]; - src_index += dim_index * src_stride[permute[i]]; - } - dst[index] = src[src_index]; - } -} - -template -void memcpyPermute(scalar_t *dst, const scalar_t *src, int *src_size, - int *permute, int src_dim, cudaStream_t stream) { - size_t copy_size = 1; - TensorDesc ts_permute; - memcpy(&(ts_permute.shape[0]), permute, src_dim * sizeof(int)); - - TensorDesc ts_src_stride; - TensorDesc ts_dst_stride; - ts_src_stride.dim = src_dim; - ts_dst_stride.dim = src_dim; - int *src_stride = &(ts_src_stride.stride[0]); - int *dst_stride = &(ts_dst_stride.stride[0]); - int *dst_size = &(ts_dst_stride.shape[0]); - src_stride[src_dim - 1] = 1; - dst_stride[src_dim - 1] = 1; - - for (int i = src_dim - 1; i >= 0; --i) { - dst_size[i] = src_size[permute[i]]; - if (i < src_dim - 1) { - src_stride[i] = src_stride[i + 1] * src_size[i + 1]; - } - } - - for (int i = src_dim - 1; i >= 0; --i) { - copy_size *= dst_size[i]; - if (i < src_dim - 1) { - dst_stride[i] = dst_stride[i + 1] * dst_size[i + 1]; - } - } - - copy_permute_kernel - <<>>( - dst, src, copy_size, ts_src_stride, ts_dst_stride, ts_permute); -} - -template void memcpyPermute(float *dst, const float *src, int *src_size, - int *permute, int src_dim, - cudaStream_t stream); - -template <> -cublasStatus_t cublasGemmWrap(cublasHandle_t handle, - cublasOperation_t transa, - cublasOperation_t transb, int m, int n, - int k, const float *alpha, const float *A, - int lda, const float *B, int ldb, - const float *beta, float *C, int ldc) { - return cublasSgemm(handle, transa, transb, m, n, k, alpha, A, lda, B, ldb, - beta, C, ldc); -} - -template <> -cublasStatus_t cublasGemmWrap(cublasHandle_t handle, - cublasOperation_t transa, - cublasOperation_t transb, int m, int n, - int k, const half *alpha, const half *A, - int lda, const half *B, int ldb, - const half *beta, half *C, int ldc) { - return cublasHgemm(handle, transa, transb, m, n, k, alpha, A, lda, B, ldb, - beta, C, ldc); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_cummaxmin.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_cummaxmin.cpp deleted file mode 100644 index 40bebbca..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_cummaxmin.cpp +++ /dev/null @@ -1,242 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "trt_cummaxmin.hpp" - -#include - -#include "trt_serialize.hpp" - -void CumMaxMinForwardLauncher_float(const float *input, float *output_value, - int *output_index, const int *dims, - int nbDims, int cum_dim, int cum_type, - cudaStream_t stream); - -void CumMaxMinForwardLauncher_int32(const int *input, int *output_value, - int *output_index, const int *dims, - int nbDims, int cum_dim, int cum_type, - cudaStream_t stream); - -namespace { -static const char *PLUGIN_VERSION{"1"}; -static const char *CUMMAXMIN_PLUGIN_NAME{"cummaxmin"}; -static const char *CUMMAX_PLUGIN_NAME{"cummax"}; -static const char *CUMMIN_PLUGIN_NAME{"cummin"}; -} // namespace - -CumMaxMinPluginDynamic::CumMaxMinPluginDynamic(const std::string &name, int dim, - TRT_CUMCMPTYPE cumType) - : mLayerName(name), mDim(dim), mCumType(cumType) {} - -CumMaxMinPluginDynamic::CumMaxMinPluginDynamic(const std::string name, - const void *data, size_t length) - : mLayerName(name) { - deserialize_value(&data, &length, &mDim); - deserialize_value(&data, &length, &mCumType); -} - -CumMaxMinPluginDynamic::~CumMaxMinPluginDynamic() {} - -nvinfer1::IPluginV2DynamicExt *CumMaxMinPluginDynamic::clone() const { - CumMaxMinPluginDynamic *plugin = - new CumMaxMinPluginDynamic(mLayerName, mDim, mCumType); - plugin->setPluginNamespace(getPluginNamespace()); - - return plugin; -} - -nvinfer1::DimsExprs CumMaxMinPluginDynamic::getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) { - return inputs[0]; -} - -bool CumMaxMinPluginDynamic::supportsFormatCombination( - int pos, const nvinfer1::PluginTensorDesc *inOut, int nbInputs, - int nbOutputs) { - switch (pos) { - // input[0] - case 0: - return (inOut[pos].type == nvinfer1::DataType::kFLOAT || - inOut[pos].type == nvinfer1::DataType::kINT32) && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR; - // output[0] - case 1: - return inOut[pos].type == inOut[0].type && - inOut[pos].format == inOut[0].format; - // output[1] - case 2: - return inOut[pos].type == nvinfer1::DataType::kINT32 && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR; - default: - return false; - } -} - -void CumMaxMinPluginDynamic::configurePlugin( - const nvinfer1::DynamicPluginTensorDesc *inputs, int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *outputs, int nbOutputs) {} - -size_t CumMaxMinPluginDynamic::getWorkspaceSize( - const nvinfer1::PluginTensorDesc *inputs, int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, int nbOutputs) const { - int sizeof_dtype = mmcv::getElementSize(outputs[0].type); -} - -int CumMaxMinPluginDynamic::enqueue( - const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, const void *const *inputs, - void *const *outputs, void *workSpace, cudaStream_t stream) { - const void *input = inputs[0]; - void *output_value = outputs[0]; - int *output_index = (int *)outputs[1]; - - const int *dims = &(inputDesc[0].dims.d[0]); - int nbDims = inputDesc[0].dims.nbDims; - - switch (inputDesc[0].type) { - case nvinfer1::DataType::kFLOAT: - CumMaxMinForwardLauncher_float((float *)input, (float *)output_value, - output_index, dims, nbDims, mDim, - int(mCumType), stream); - break; - case nvinfer1::DataType::kINT32: - CumMaxMinForwardLauncher_int32((int *)input, (int *)output_value, - output_index, dims, 
nbDims, mDim, - int(mCumType), stream); - break; - default: - break; - } - - return 0; -} - -nvinfer1::DataType CumMaxMinPluginDynamic::getOutputDataType( - int index, const nvinfer1::DataType *inputTypes, int nbInputs) const { - switch (index) { - case 0: - return inputTypes[0]; - case 1: - return nvinfer1::DataType::kINT32; - default: - break; - } -} - -// IPluginV2 Methods -const char *CumMaxMinPluginDynamic::getPluginType() const { - switch (mCumType) { - case TRT_CUMCMPTYPE::TRT_CUMMAX: - return CUMMAX_PLUGIN_NAME; - case TRT_CUMCMPTYPE::TRT_CUMMIN: - return CUMMIN_PLUGIN_NAME; - default: - return "UnknownCumType"; - } -} - -const char *CumMaxMinPluginDynamic::getPluginVersion() const { - return PLUGIN_VERSION; -} - -int CumMaxMinPluginDynamic::getNbOutputs() const { return 2; } - -int CumMaxMinPluginDynamic::initialize() { return 0; } - -void CumMaxMinPluginDynamic::terminate() {} - -size_t CumMaxMinPluginDynamic::getSerializationSize() const { - return sizeof(mDim) + sizeof(mCumType); -} - -void CumMaxMinPluginDynamic::serialize(void *buffer) const { - serialize_value(&buffer, mDim); - serialize_value(&buffer, mCumType); -} - -void CumMaxMinPluginDynamic::destroy() { - // This gets called when the network containing plugin is destroyed - delete this; -} - -void CumMaxMinPluginDynamic::setPluginNamespace(const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *CumMaxMinPluginDynamic::getPluginNamespace() const { - return mNamespace.c_str(); -} - -CumMaxMinPluginDynamicCreator::CumMaxMinPluginDynamicCreator( - TRT_CUMCMPTYPE cumType) - : mCumType(cumType) { - mPluginAttributes.clear(); - mPluginAttributes.emplace_back(nvinfer1::PluginField("dim")); - mFC.nbFields = mPluginAttributes.size(); - mFC.fields = mPluginAttributes.data(); -} - -const char *CumMaxMinPluginDynamicCreator::getPluginName() const { - return CUMMAXMIN_PLUGIN_NAME; -} - -const char *CumMaxMinPluginDynamicCreator::getPluginVersion() const { - return PLUGIN_VERSION; -} - -const nvinfer1::PluginFieldCollection * -CumMaxMinPluginDynamicCreator::getFieldNames() { - return &mFC; -} - -nvinfer1::IPluginV2 *CumMaxMinPluginDynamicCreator::createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) { - int dim = 0; - - for (int i = 0; i < fc->nbFields; i++) { - if (fc->fields[i].data == nullptr) { - continue; - } - std::string field_name(fc->fields[i].name); - - if (field_name.compare("dim") == 0) { - dim = static_cast(fc->fields[i].data)[0]; - } - } - - CumMaxMinPluginDynamic *plugin = - new CumMaxMinPluginDynamic(name, dim, mCumType); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -nvinfer1::IPluginV2 *CumMaxMinPluginDynamicCreator::deserializePlugin( - const char *name, const void *serialData, size_t serialLength) { - // This object will be deleted when the network is destroyed, which will - // call FCPluginDynamic::destroy() - auto plugin = new CumMaxMinPluginDynamic(name, serialData, serialLength); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -void CumMaxMinPluginDynamicCreator::setPluginNamespace( - const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *CumMaxMinPluginDynamicCreator::getPluginNamespace() const { - return mNamespace.c_str(); -} - -CumMaxPluginDynamicCreator::CumMaxPluginDynamicCreator() - : CumMaxMinPluginDynamicCreator(TRT_CUMCMPTYPE::TRT_CUMMAX) {} - -const char *CumMaxPluginDynamicCreator::getPluginName() const { - return CUMMAX_PLUGIN_NAME; -} - 
-CumMinPluginDynamicCreator::CumMinPluginDynamicCreator() - : CumMaxMinPluginDynamicCreator(TRT_CUMCMPTYPE::TRT_CUMMIN) {} - -const char *CumMinPluginDynamicCreator::getPluginName() const { - return CUMMIN_PLUGIN_NAME; -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_cummaxmin_kernel.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_cummaxmin_kernel.cu deleted file mode 100644 index 47d756a3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_cummaxmin_kernel.cu +++ /dev/null @@ -1,90 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved - -#include "common_cuda_helper.hpp" -#include "trt_cuda_helper.cuh" -#include "trt_plugin_helper.hpp" - -using mmcv::TensorDesc; - -template -__global__ void cummaxmin_kernel(const scalar_t *input, scalar_t *output_value, - int *output_index, TensorDesc tensor_desc, - int cum_dim, int cum_type) { - const size_t cum_size = tensor_desc.shape[cum_dim]; - const size_t cum_stride = tensor_desc.stride[cum_dim]; - const size_t data_size = - tensor_desc.stride[0] * tensor_desc.shape[0] / cum_size; - CUDA_1D_KERNEL_LOOP(index, data_size) { - size_t cum_offset = - index / cum_stride * (cum_size * cum_stride) + index % cum_stride; - int cum_index = 0; - auto cum_value = input[cum_offset]; - output_value[cum_offset] = cum_value; - output_index[cum_offset] = cum_index; - - for (size_t cum_index_current = 1; cum_index_current < cum_size; - ++cum_index_current) { - cum_offset += cum_stride; - const auto cum_value_current = input[cum_offset]; - switch (cum_type) { - case 0: // max - if (cum_value_current > cum_value) { - cum_value = cum_value_current; - cum_index = cum_index_current; - } - break; - case 1: // min - if (cum_value_current < cum_value) { - cum_value = cum_value_current; - cum_index = cum_index_current; - } - break; - } - output_value[cum_offset] = cum_value; - output_index[cum_offset] = cum_index; - } - } -} - -template -void CumMaxMinForwardLauncher(const scalar_t *input, scalar_t *output_value, - int *output_index, const int *dims, int nbDims, - int cum_dim, int cum_type, cudaStream_t stream) { - // fill tensordesc and initial - TensorDesc tensor_desc; - memset((void *)&tensor_desc, 0, sizeof(TensorDesc)); - tensor_desc.dim = nbDims; - tensor_desc.shape[nbDims - 1] = dims[nbDims - 1]; - tensor_desc.stride[nbDims - 1] = 1; - for (int i = nbDims - 2; i >= 0; --i) { - tensor_desc.shape[i] = dims[i]; - tensor_desc.stride[i] = dims[i + 1] * tensor_desc.stride[i + 1]; - } - - // cum dim should be larger than 0 - cum_dim = cum_dim >= 0 ? 
cum_dim : (nbDims + cum_dim); - - const int data_size = - tensor_desc.stride[0] * tensor_desc.shape[0] / tensor_desc.shape[cum_dim]; - - const int col_block = GET_BLOCKS(data_size, THREADS_PER_BLOCK); - - cummaxmin_kernel<<>>( - input, output_value, output_index, tensor_desc, cum_dim, cum_type); -} - -void CumMaxMinForwardLauncher_float(const float *input, float *output_value, - int *output_index, const int *dims, - int nbDims, int cum_dim, int cum_type, - cudaStream_t stream) { - CumMaxMinForwardLauncher(input, output_value, output_index, dims, - nbDims, cum_dim, cum_type, stream); -} - -void CumMaxMinForwardLauncher_int32(const int *input, int *output_value, - int *output_index, const int *dims, - int nbDims, int cum_dim, int cum_type, - cudaStream_t stream) { - CumMaxMinForwardLauncher(input, output_value, output_index, dims, nbDims, - cum_dim, cum_type, stream); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_deform_conv.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_deform_conv.cpp deleted file mode 100644 index 76056dee..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_deform_conv.cpp +++ /dev/null @@ -1,318 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "trt_deform_conv.hpp" - -#include - -#include - -#include "trt_serialize.hpp" - -void DeformConvForwardCUDAKernelLauncher_float( - const float *input, const float *weight, const float *offset, float *output, - void *workspace, int batchSize, int nInputPlane, int inputHeight, - int inputWidth, int nOutputPlane, int kW, int kH, int dW, int dH, int padW, - int padH, int dilationW, int dilationH, int group, int deformable_group, - int im2col_step, cublasHandle_t cublas_handle, cudaStream_t stream); - -namespace { -static const char *PLUGIN_VERSION{"1"}; -static const char *PLUGIN_NAME{"MMCVDeformConv2d"}; -} // namespace - -nvinfer1::PluginFieldCollection DeformableConvPluginDynamicCreator::mFC{}; -std::vector - DeformableConvPluginDynamicCreator::mPluginAttributes; - -DeformableConvPluginDynamic::DeformableConvPluginDynamic( - const std::string &name, const nvinfer1::Dims &stride, - const nvinfer1::Dims &padding, const nvinfer1::Dims &dilation, - const int deformableGroup, const int group, int im2colStep) - : mLayerName(name), - mStride(stride), - mPadding(padding), - mDilation(dilation), - mDeformableGroup(deformableGroup), - mGroup(group), - mIm2colStep(im2colStep) {} - -DeformableConvPluginDynamic::DeformableConvPluginDynamic(const std::string name, - const void *data, - size_t length) - : mLayerName(name) { - deserialize_value(&data, &length, &mStride); - deserialize_value(&data, &length, &mPadding); - deserialize_value(&data, &length, &mDilation); - deserialize_value(&data, &length, &mDeformableGroup); - deserialize_value(&data, &length, &mGroup); - deserialize_value(&data, &length, &mIm2colStep); -} -DeformableConvPluginDynamic::~DeformableConvPluginDynamic() {} - -nvinfer1::IPluginV2DynamicExt *DeformableConvPluginDynamic::clone() const { - DeformableConvPluginDynamic *plugin = - new DeformableConvPluginDynamic(mLayerName, mStride, mPadding, mDilation, - mDeformableGroup, mGroup, mIm2colStep); - plugin->setPluginNamespace(getPluginNamespace()); - - return plugin; -} - -nvinfer1::DimsExprs DeformableConvPluginDynamic::getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) { - nvinfer1::DimsExprs ret; 
- ret.nbDims = 4; - ret.d[0] = inputs[0].d[0]; - ret.d[1] = inputs[2].d[0]; - - ret.d[2] = inputs[1].d[2]; - ret.d[3] = inputs[1].d[3]; - - return ret; -} - -bool DeformableConvPluginDynamic::supportsFormatCombination( - int pos, const nvinfer1::PluginTensorDesc *inOut, int nbInputs, - int nbOutputs) { - if (pos == 0) { - return (inOut[pos].type == nvinfer1::DataType::kFLOAT && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR); - - } else { - return inOut[pos].type == inOut[0].type && - inOut[pos].format == inOut[0].format; - } -} - -void DeformableConvPluginDynamic::configurePlugin( - const nvinfer1::DynamicPluginTensorDesc *inputs, int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *outputs, int nbOutputs) {} - -size_t DeformableConvPluginDynamic::getWorkspaceSize( - const nvinfer1::PluginTensorDesc *inputs, int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, int nbOutputs) const { - int sizeof_dtype = mmcv::getElementSize(outputs[0].type); - - int batch_size = inputs[0].dims.d[0]; - int nInputPlane = inputs[0].dims.d[1]; - int inputHeight = inputs[0].dims.d[2]; - int inputWidth = inputs[0].dims.d[3]; - - int nOutputPlane = outputs[0].dims.d[1]; - int outputHeight = outputs[0].dims.d[2]; - int outputWidth = outputs[0].dims.d[3]; - - int kW = inputs[2].dims.d[2]; - int kH = inputs[2].dims.d[3]; - int im2col_step = std::min(batch_size, mIm2colStep); - - size_t col_size = - mmcv::getAlignedSize(nInputPlane * kW * kH * im2col_step * outputHeight * - outputWidth * sizeof_dtype); - - size_t out_size = 0; - if (im2col_step != 1) - out_size = mmcv::getAlignedSize(batch_size * nOutputPlane * outputHeight * - outputWidth * sizeof_dtype); - - return col_size + out_size; -} - -int DeformableConvPluginDynamic::enqueue( - const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, const void *const *inputs, - void *const *outputs, void *workSpace, cudaStream_t stream) { - int batch_size = inputDesc[0].dims.d[0]; - int inputChannel = inputDesc[0].dims.d[1]; - int inputHeight = inputDesc[0].dims.d[2]; - int inputWidth = inputDesc[0].dims.d[3]; - int outputChannel = outputDesc[0].dims.d[1]; - int kernelHeight = inputDesc[2].dims.d[2]; - int kernelWidth = inputDesc[2].dims.d[3]; - - const void *x = inputs[0]; - const void *offset = inputs[1]; - const void *weight = inputs[2]; - void *output = outputs[0]; - int im2col_step = std::min(batch_size, mIm2colStep); - - // TODO: add fp16 support - auto data_type = inputDesc[0].type; - switch (data_type) { - case nvinfer1::DataType::kFLOAT: - DeformConvForwardCUDAKernelLauncher_float( - (float *)x, (float *)weight, (float *)offset, (float *)output, - workSpace, batch_size, inputChannel, inputHeight, inputWidth, - outputChannel, kernelWidth, kernelHeight, mStride.d[0], mStride.d[1], - mPadding.d[0], mPadding.d[1], mDilation.d[0], mDilation.d[1], mGroup, - mDeformableGroup, im2col_step, m_cublas_handle, stream); - break; - default: - return 1; - break; - } - - return 0; -} - -nvinfer1::DataType DeformableConvPluginDynamic::getOutputDataType( - int index, const nvinfer1::DataType *inputTypes, int nbInputs) const { - return inputTypes[0]; -} - -// IPluginV2 Methods -const char *DeformableConvPluginDynamic::getPluginType() const { - return PLUGIN_NAME; -} - -const char *DeformableConvPluginDynamic::getPluginVersion() const { - return PLUGIN_VERSION; -} - -int DeformableConvPluginDynamic::getNbOutputs() const { return 1; } - -int DeformableConvPluginDynamic::initialize() { return 0; } - -void 
DeformableConvPluginDynamic::terminate() {} - -size_t DeformableConvPluginDynamic::getSerializationSize() const { - return sizeof(mStride) + sizeof(mPadding) + sizeof(mDilation) + - sizeof(mDeformableGroup) + sizeof(mGroup) + sizeof(mIm2colStep); -} - -void DeformableConvPluginDynamic::serialize(void *buffer) const { - serialize_value(&buffer, mStride); - serialize_value(&buffer, mPadding); - serialize_value(&buffer, mDilation); - serialize_value(&buffer, mDeformableGroup); - serialize_value(&buffer, mGroup); - serialize_value(&buffer, mIm2colStep); -} - -void DeformableConvPluginDynamic::destroy() { - // This gets called when the network containing plugin is destroyed - delete this; -} - -void DeformableConvPluginDynamic::attachToContext( - cudnnContext *cudnnContext, cublasContext *cublasContext, - nvinfer1::IGpuAllocator *gpuAllocator) { - m_cublas_handle = cublasContext; -} - -void DeformableConvPluginDynamic::detachFromContext() {} - -void DeformableConvPluginDynamic::setPluginNamespace(const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *DeformableConvPluginDynamic::getPluginNamespace() const { - return mNamespace.c_str(); -} - -////////////////////// creator ///////////////////////////// - -DeformableConvPluginDynamicCreator::DeformableConvPluginDynamicCreator() { - mPluginAttributes.emplace_back(nvinfer1::PluginField("stride")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("padding")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("dilation")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("groups")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("deform_groups")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("bias")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("im2col_step")); - mFC.nbFields = mPluginAttributes.size(); - mFC.fields = mPluginAttributes.data(); -} - -const char *DeformableConvPluginDynamicCreator::getPluginName() const { - return PLUGIN_NAME; -} - -const char *DeformableConvPluginDynamicCreator::getPluginVersion() const { - return PLUGIN_VERSION; -} - -const nvinfer1::PluginFieldCollection * -DeformableConvPluginDynamicCreator::getFieldNames() { - return &mFC; -} - -nvinfer1::IPluginV2 *DeformableConvPluginDynamicCreator::createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) { - nvinfer1::Dims stride{2, {1, 1}}; - nvinfer1::Dims padding{2, {0, 0}}; - nvinfer1::Dims dilation{2, {1, 1}}; - int deformableGroup = 1; - int group = 1; - int im2col_step = 32; - - for (int i = 0; i < fc->nbFields; i++) { - if (fc->fields[i].data == nullptr) { - continue; - } - std::string field_name(fc->fields[i].name); - - if (field_name.compare("stride") == 0) { - stride.nbDims = 2; - stride.d[0] = static_cast(fc->fields[i].data)[0]; - if (fc->fields[i].length == 1) { - stride.d[1] = stride.d[0]; - } else { - stride.d[1] = static_cast(fc->fields[i].data)[1]; - } - } - - if (field_name.compare("padding") == 0) { - padding.nbDims = 2; - padding.d[0] = static_cast(fc->fields[i].data)[0]; - if (fc->fields[i].length == 1) { - padding.d[1] = padding.d[0]; - } else { - padding.d[1] = static_cast(fc->fields[i].data)[1]; - } - } - - if (field_name.compare("dilation") == 0) { - dilation.nbDims = 2; - dilation.d[0] = static_cast(fc->fields[i].data)[0]; - if (fc->fields[i].length == 1) { - dilation.d[1] = dilation.d[0]; - } else { - dilation.d[1] = static_cast(fc->fields[i].data)[1]; - } - } - - if (field_name.compare("deformable_group") == 0) { - deformableGroup = 
static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("group") == 0) { - group = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("im2col_step") == 0) { - im2col_step = static_cast(fc->fields[i].data)[0]; - } - } - - DeformableConvPluginDynamic *plugin = new DeformableConvPluginDynamic( - name, stride, padding, dilation, deformableGroup, group, im2col_step); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -nvinfer1::IPluginV2 *DeformableConvPluginDynamicCreator::deserializePlugin( - const char *name, const void *serialData, size_t serialLength) { - auto plugin = new DeformableConvPluginDynamic(name, serialData, serialLength); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -void DeformableConvPluginDynamicCreator::setPluginNamespace( - const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *DeformableConvPluginDynamicCreator::getPluginNamespace() const { - return mNamespace.c_str(); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_deform_conv_kernel.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_deform_conv_kernel.cu deleted file mode 100644 index b1f69890..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_deform_conv_kernel.cu +++ /dev/null @@ -1,129 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include - -#include "common_cuda_helper.hpp" -#include "deform_conv_cuda_kernel.cuh" -#include "trt_cuda_helper.cuh" -#include "trt_plugin_helper.hpp" - -template -void trt_deformable_im2col(const T* data_input, const T* data_offset, - const int channels, const int height, - const int width, const int ksize_h, - const int ksize_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - T* data_col, cudaStream_t stream) { - int height_col = - (height + 2 * pad_h - (dilation_h * (ksize_h - 1) + 1)) / stride_h + 1; - int width_col = - (width + 2 * pad_w - (dilation_w * (ksize_w - 1) + 1)) / stride_w + 1; - int num_kernels = channels * height_col * width_col * parallel_imgs; - int channel_per_deformable_group = channels / deformable_group; - - deformable_im2col_gpu_kernel - <<>>( - num_kernels, data_input, data_offset, height, width, ksize_h, ksize_w, - pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, - channel_per_deformable_group, parallel_imgs, channels, - deformable_group, height_col, width_col, data_col); - - cudaCheckError(); -} - -template -void DeformConvForwardCUDAKernelLauncher( - const scalar_t* input, const scalar_t* weight, const scalar_t* offset, - scalar_t* output, void* workspace, int batchSize, int nInputPlane, - int inputHeight, int inputWidth, int nOutputPlane, int kW, int kH, int dW, - int dH, int padW, int padH, int dilationW, int dilationH, int group, - int deformable_group, int im2col_step, cublasHandle_t cublas_handle, - cudaStream_t stream) { - size_t word_size = sizeof(scalar_t); - - im2col_step = std::min(int(batchSize), im2col_step); - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - long long columns_size = - mmcv::getAlignedSize(nInputPlane * kW * kH * im2col_step * outputHeight * - outputWidth * word_size); - - // column buffer for img2col - scalar_t* columns = 
(scalar_t*)workspace; - workspace = workspace + columns_size; - - scalar_t* output_buffer; - long long output_buffer_size = 0; - if (im2col_step == 1) { - output_buffer = output; - } else { - // output need permute when im2col_step!=1 - output_buffer = (scalar_t*)workspace; - output_buffer_size = batchSize * nOutputPlane * outputWidth * outputHeight; - } - - long long input_elt_step = - im2col_step * nInputPlane * inputHeight * inputWidth; - long long offset_elt_step = - im2col_step * deformable_group * 2 * kH * kW * outputHeight * outputWidth; - long long out_buffer_step = - nOutputPlane * im2col_step * outputHeight * outputWidth; - long long col_g_step = - nInputPlane * kW * kH / group * im2col_step * outputHeight * outputWidth; - long long weight_g_step = - nOutputPlane / group * nInputPlane / group * kH * kW; - long long out_buffer_g_step = - nOutputPlane / group * im2col_step * outputHeight * outputWidth; - int m = nOutputPlane / group; - int n = im2col_step * outputHeight * outputWidth; - int k = nInputPlane / group * kH * kW; - scalar_t alpha = 1.; - scalar_t beta = 0.; - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - const scalar_t* input_start = input + elt * input_elt_step; - const scalar_t* offset_start = offset + elt * offset_elt_step; - - trt_deformable_im2col(input_start, offset_start, nInputPlane, - inputHeight, inputWidth, kH, kW, padH, padW, - dH, dW, dilationH, dilationW, im2col_step, - deformable_group, columns, stream); - - for (int g = 0; g < group; ++g) { - const scalar_t* weight_start = weight + g * weight_g_step; - scalar_t* col_start = columns + g * col_g_step; - scalar_t* out_buffer_start = - output_buffer + elt * out_buffer_step + g * out_buffer_g_step; - - cublasGemmWrap(cublas_handle, CUBLAS_OP_N, CUBLAS_OP_N, n, m, k, - &alpha, col_start, n, weight_start, k, &beta, - out_buffer_start, n); - cudaCheckError(); - } - } - - if (im2col_step != 1) { - int output_buffer_shape[5] = {batchSize / im2col_step, nOutputPlane, - im2col_step, outputHeight, outputWidth}; - int output_buffer_permute[5] = {0, 2, 1, 3, 4}; - memcpyPermute(output, output_buffer, &output_buffer_shape[0], - &output_buffer_permute[0], 5, stream); - } -} - -void DeformConvForwardCUDAKernelLauncher_float( - const float* input, const float* weight, const float* offset, float* output, - void* workspace, int batchSize, int nInputPlane, int inputHeight, - int inputWidth, int nOutputPlane, int kW, int kH, int dW, int dH, int padW, - int padH, int dilationW, int dilationH, int group, int deformable_group, - int im2col_step, cublasHandle_t cublas_handle, cudaStream_t stream) { - DeformConvForwardCUDAKernelLauncher( - input, weight, offset, output, workspace, batchSize, nInputPlane, - inputHeight, inputWidth, nOutputPlane, kW, kH, dW, dH, padW, padH, - dilationW, dilationH, group, deformable_group, im2col_step, cublas_handle, - stream); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_grid_sampler.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_grid_sampler.cpp deleted file mode 100644 index d955ca53..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_grid_sampler.cpp +++ /dev/null @@ -1,256 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "trt_grid_sampler.hpp" - -#include -#include - -#include - -#include "trt_serialize.hpp" - -using mmcv::GridSamplerInterpolation; -using mmcv::GridSamplerPadding; - -void grid_sample_float(float *output, const float *input, const float *grid, - int *output_dims, int *input_dims, int *grid_dims, - int nb_dims, GridSamplerInterpolation interp, - GridSamplerPadding padding, bool align_corners, - cudaStream_t stream); - -namespace { -static const char *PLUGIN_VERSION{"1"}; -static const char *PLUGIN_NAME{"grid_sampler"}; -} // namespace - -nvinfer1::PluginFieldCollection GridSamplerDynamicCreator::mFC{}; -std::vector GridSamplerDynamicCreator::mPluginAttributes; - -GridSamplerDynamic::GridSamplerDynamic(const std::string &name, int mode, - int paddingMode, bool alignCorners) - : mLayerName(name), - mMode(mode), - mPaddingMode(paddingMode), - mAlignCorners(alignCorners) {} - -GridSamplerDynamic::GridSamplerDynamic(const std::string name, const void *data, - size_t length) - : mLayerName(name) { - deserialize_value(&data, &length, &mMode); - deserialize_value(&data, &length, &mPaddingMode); - deserialize_value(&data, &length, &mAlignCorners); -} - -nvinfer1::IPluginV2DynamicExt *GridSamplerDynamic::clone() const { - GridSamplerDynamic *plugin = - new GridSamplerDynamic(mLayerName, mMode, mPaddingMode, mAlignCorners); - plugin->setPluginNamespace(getPluginNamespace()); - - return plugin; -} - -nvinfer1::DimsExprs GridSamplerDynamic::getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) { - nvinfer1::DimsExprs ret; - ret.nbDims = inputs[0].nbDims; - ret.d[0] = inputs[0].d[0]; - ret.d[1] = inputs[0].d[1]; - for (int i = 2; i < ret.nbDims; ++i) { - ret.d[i] = inputs[1].d[i - 1]; - } - return ret; -} - -bool GridSamplerDynamic::supportsFormatCombination( - int pos, const nvinfer1::PluginTensorDesc *inOut, int nbInputs, - int nbOutputs) { - if (pos == 0) { - return (inOut[pos].type == nvinfer1::DataType::kFLOAT && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR); - } else { - return inOut[pos].type == inOut[0].type && - inOut[pos].format == inOut[0].format; - } -} - -void GridSamplerDynamic::configurePlugin( - const nvinfer1::DynamicPluginTensorDesc *inputs, int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *outputs, int nbOutputs) { - // Validate input arguments -} - -size_t GridSamplerDynamic::getWorkspaceSize( - const nvinfer1::PluginTensorDesc *inputs, int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, int nbOutputs) const { - return 0; -} - -int GridSamplerDynamic::enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, void *const *outputs, - void *workSpace, cudaStream_t stream) { - nvinfer1::Dims input_dims = inputDesc[0].dims; - nvinfer1::Dims grid_dims = inputDesc[1].dims; - nvinfer1::Dims output_dims = outputDesc[0].dims; - - using mmcv::GridSamplerInterpolation; - using mmcv::GridSamplerPadding; - - GridSamplerInterpolation interp_mode = GridSamplerInterpolation::Bilinear; - switch (mMode) { - case 0: - interp_mode = GridSamplerInterpolation::Bilinear; - break; - case 1: - interp_mode = GridSamplerInterpolation::Nearest; - break; - default: - break; - } - - GridSamplerPadding padding_mode = GridSamplerPadding::Zeros; - switch (mPaddingMode) { - case 0: - padding_mode = GridSamplerPadding::Zeros; - break; - - case 1: - padding_mode = GridSamplerPadding::Border; - break; - - case 2: - 
padding_mode = GridSamplerPadding::Reflection; - break; - default: - break; - } - - auto data_type = inputDesc[0].type; - - switch (data_type) { - case nvinfer1::DataType::kFLOAT: - grid_sample_float( - (float *)outputs[0], (float *)inputs[0], (float *)inputs[1], - &(output_dims.d[0]), &(input_dims.d[0]), &(grid_dims.d[0]), - input_dims.nbDims, interp_mode, padding_mode, mAlignCorners, stream); - break; - default: - return 1; - break; - } - - return 0; -} - -nvinfer1::DataType GridSamplerDynamic::getOutputDataType( - int index, const nvinfer1::DataType *inputTypes, int nbInputs) const { - return inputTypes[0]; -} - -// IPluginV2 Methods -const char *GridSamplerDynamic::getPluginType() const { return PLUGIN_NAME; } - -const char *GridSamplerDynamic::getPluginVersion() const { - return PLUGIN_VERSION; -} - -int GridSamplerDynamic::getNbOutputs() const { return 1; } - -int GridSamplerDynamic::initialize() { return 0; } - -void GridSamplerDynamic::terminate() {} - -size_t GridSamplerDynamic::getSerializationSize() const { - return sizeof(mMode) + sizeof(mPaddingMode) + sizeof(mAlignCorners); -} - -void GridSamplerDynamic::serialize(void *buffer) const { - serialize_value(&buffer, mMode); - serialize_value(&buffer, mPaddingMode); - serialize_value(&buffer, mAlignCorners); -} - -void GridSamplerDynamic::destroy() { - // This gets called when the network containing plugin is destroyed - delete this; -} - -void GridSamplerDynamic::setPluginNamespace(const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *GridSamplerDynamic::getPluginNamespace() const { - return mNamespace.c_str(); -} - -////////////////////// creator ///////////////////////////// - -GridSamplerDynamicCreator::GridSamplerDynamicCreator() { - mPluginAttributes.clear(); - mPluginAttributes.emplace_back(nvinfer1::PluginField("interpolation_mode")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("padding_mode")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("align_corners")); - mFC.nbFields = mPluginAttributes.size(); - mFC.fields = mPluginAttributes.data(); -} - -const char *GridSamplerDynamicCreator::getPluginName() const { - return PLUGIN_NAME; -} - -const char *GridSamplerDynamicCreator::getPluginVersion() const { - return PLUGIN_VERSION; -} - -const nvinfer1::PluginFieldCollection * -GridSamplerDynamicCreator::getFieldNames() { - return &mFC; -} - -nvinfer1::IPluginV2 *GridSamplerDynamicCreator::createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) { - int mode = 0; - int paddingMode = 0; - bool alignCorners = false; - - for (int i = 0; i < fc->nbFields; i++) { - if (fc->fields[i].data == nullptr) { - continue; - } - std::string field_name(fc->fields[i].name); - - if (field_name.compare("interpolation_mode") == 0) { - mode = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("padding_mode") == 0) { - paddingMode = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("align_corners") == 0) { - alignCorners = (bool)(static_cast(fc->fields[i].data)[0]); - } - } - - GridSamplerDynamic *plugin = - new GridSamplerDynamic(name, mode, paddingMode, alignCorners); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -nvinfer1::IPluginV2 *GridSamplerDynamicCreator::deserializePlugin( - const char *name, const void *serialData, size_t serialLength) { - // This object will be deleted when the network is destroyed, which will - // call FCPluginDynamic::destroy() - auto plugin = new GridSamplerDynamic(name, serialData, serialLength); - 
plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -void GridSamplerDynamicCreator::setPluginNamespace(const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *GridSamplerDynamicCreator::getPluginNamespace() const { - return mNamespace.c_str(); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_grid_sampler_kernel.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_grid_sampler_kernel.cu deleted file mode 100644 index 253a35d5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_grid_sampler_kernel.cu +++ /dev/null @@ -1,441 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// modified from -// https://github.com/pytorch/pytorch/blob/ec683299ebabf297a3504c76248d37be830e4342/aten/src/ATen/native/cuda/GridSampler.cuh -// and -// https://github.com/pytorch/pytorch/blob/ec683299ebabf297a3504c76248d37be830e4342/aten/src/ATen/native/cuda/GridSampler.cu - -#include -#include - -#include -#include -#include - -#include "common_cuda_helper.hpp" -#include "trt_cuda_helper.cuh" -#include "trt_grid_sampler.hpp" -#include "trt_plugin_helper.hpp" - -using mmcv::GridSamplerInterpolation; -using mmcv::GridSamplerPadding; -using mmcv::TensorDesc; - -// Unnormalizes a coordinate from the -1 to +1 scale to its pixel index value, -// where we view each pixel as an area between (idx - 0.5) and (idx + 0.5). -// if align_corners: -1 and +1 get sent to the centers of the corner pixels -// -1 --> 0 -// +1 --> (size - 1) -// scale_factor = (size - 1) / 2 -// if not align_corners: -1 and +1 get sent to the image edges -// -1 --> -0.5 -// +1 --> (size - 1) + 0.5 == size - 0.5 -// scale_factor = size / 2 -template -static __forceinline__ __device__ scalar_t -grid_sampler_unnormalize(scalar_t coord, int size, bool align_corners) { - if (align_corners) { - // unnormalize coord from [-1, 1] to [0, size - 1] - return ((coord + 1.f) / 2) * (size - 1); - } else { - // unnormalize coord from [-1, 1] to [-0.5, size - 0.5] - return ((coord + 1.f) * size - 1) / 2; - } -} - -// Clips coordinates to between 0 and clip_limit - 1 -template -static __forceinline__ __device__ scalar_t clip_coordinates(scalar_t in, - int clip_limit) { - return ::min(static_cast(clip_limit - 1), - ::max(in, static_cast(0))); -} - -// Reflects coordinates until they fall between low and high (inclusive). -// The bounds are passed as twice their value so that half-integer values -// can be represented as ints. -template -static __forceinline__ __device__ scalar_t reflect_coordinates(scalar_t in, - int twice_low, - int twice_high) { - if (twice_low == twice_high) { - return static_cast(0); - } - scalar_t min = static_cast(twice_low) / 2; - scalar_t span = static_cast(twice_high - twice_low) / 2; - in = ::fabs(in - min); - // `fmod` returns same sign as `in`, which is positive after the `fabs` above. - scalar_t extra = ::fmod(in, span); - int flips = static_cast(::floor(in / span)); - if (flips % 2 == 0) { - return extra + min; - } else { - return span - extra + min; - } -} - -template -static __forceinline__ __device__ scalar_t -safe_downgrade_to_int_range(scalar_t x) { - // -100.0 does not have special meaning. This is just to make sure - // it's not within_bounds_2d or within_bounds_3d, and does not cause - // undefined behavior. See #35506. 
- if (x > INT_MAX - 1 || x < INT_MIN || !::isfinite(static_cast(x))) - return static_cast(-100.0); - return x; -} - -// Computes the pixel source index value for a grid coordinate -template -static __forceinline__ __device__ scalar_t grid_sampler_compute_source_index( - scalar_t coord, int size, GridSamplerPadding padding_mode, - bool align_corners) { - coord = grid_sampler_unnormalize(coord, size, align_corners); - if (padding_mode == GridSamplerPadding::Border) { - // clip coordinates to image borders - coord = clip_coordinates(coord, size); - } else if (padding_mode == GridSamplerPadding::Reflection) { - // reflect coordinates by image borders - if (align_corners) { - coord = reflect_coordinates(coord, 0, 2 * (size - 1)); - } else { - coord = reflect_coordinates(coord, -1, 2 * size - 1); - } - // clip coordinates to image borders - coord = clip_coordinates(coord, size); - } - - coord = safe_downgrade_to_int_range(coord); - return coord; -} - -static __forceinline__ __device__ bool within_bounds_2d(int h, int w, int H, - int W) { - return h >= 0 && h < H && w >= 0 && w < W; -} - -static __forceinline__ __device__ bool within_bounds_3d(int d, int h, int w, - int D, int H, int W) { - return d >= 0 && d < D && h >= 0 && h < H && w >= 0 && w < W; -} - -template -__global__ void grid_sampler_2d_kernel( - const int nthreads, const scalar_t *input, const scalar_t *grid, - scalar_t *output, TensorDesc input_desc, TensorDesc grid_desc, - TensorDesc output_desc, const GridSamplerInterpolation interpolation_mode, - const GridSamplerPadding padding_mode, bool align_corners) { - int C = input_desc.shape[1]; - int inp_H = input_desc.shape[2]; - int inp_W = input_desc.shape[3]; - int out_H = grid_desc.shape[1]; - int out_W = grid_desc.shape[2]; - int inp_sN = input_desc.stride[0]; - int inp_sC = input_desc.stride[1]; - int inp_sH = input_desc.stride[2]; - int inp_sW = input_desc.stride[3]; - int grid_sN = grid_desc.stride[0]; - int grid_sH = grid_desc.stride[1]; - int grid_sW = grid_desc.stride[2]; - int grid_sCoor = grid_desc.stride[3]; - int out_sN = output_desc.stride[0]; - int out_sC = output_desc.stride[1]; - int out_sH = output_desc.stride[2]; - int out_sW = output_desc.stride[3]; - - CUDA_1D_KERNEL_LOOP(index, nthreads) { - const int w = index % out_W; - const int h = (index / out_W) % out_H; - const int n = index / (out_H * out_W); - const int grid_offset = n * grid_sN + h * grid_sH + w * grid_sW; - - // get the corresponding input x, y coordinates from grid - scalar_t ix = grid[grid_offset]; - scalar_t iy = grid[grid_offset + grid_sCoor]; - - ix = grid_sampler_compute_source_index(ix, inp_W, padding_mode, - align_corners); - iy = grid_sampler_compute_source_index(iy, inp_H, padding_mode, - align_corners); - - if (interpolation_mode == GridSamplerInterpolation::Bilinear) { - // get NE, NW, SE, SW pixel values from (x, y) - int ix_nw = static_cast(::floor(ix)); - int iy_nw = static_cast(::floor(iy)); - int ix_ne = ix_nw + 1; - int iy_ne = iy_nw; - int ix_sw = ix_nw; - int iy_sw = iy_nw + 1; - int ix_se = ix_nw + 1; - int iy_se = iy_nw + 1; - - // get surfaces to each neighbor: - scalar_t nw = (ix_se - ix) * (iy_se - iy); - scalar_t ne = (ix - ix_sw) * (iy_sw - iy); - scalar_t sw = (ix_ne - ix) * (iy - iy_ne); - scalar_t se = (ix - ix_nw) * (iy - iy_nw); - - // calculate bilinear weighted pixel value and set output pixel - auto inp_ptr_NC = input + n * inp_sN; - auto out_ptr_NCHW = output + n * out_sN + h * out_sH + w * out_sW; - for (int c = 0; c < C; - ++c, inp_ptr_NC += inp_sC, out_ptr_NCHW += 
out_sC) { - *out_ptr_NCHW = static_cast(0); - if (within_bounds_2d(iy_nw, ix_nw, inp_H, inp_W)) { - *out_ptr_NCHW += inp_ptr_NC[iy_nw * inp_sH + ix_nw * inp_sW] * nw; - } - if (within_bounds_2d(iy_ne, ix_ne, inp_H, inp_W)) { - *out_ptr_NCHW += inp_ptr_NC[iy_ne * inp_sH + ix_ne * inp_sW] * ne; - } - if (within_bounds_2d(iy_sw, ix_sw, inp_H, inp_W)) { - *out_ptr_NCHW += inp_ptr_NC[iy_sw * inp_sH + ix_sw * inp_sW] * sw; - } - if (within_bounds_2d(iy_se, ix_se, inp_H, inp_W)) { - *out_ptr_NCHW += inp_ptr_NC[iy_se * inp_sH + ix_se * inp_sW] * se; - } - } - } else if (interpolation_mode == GridSamplerInterpolation::Nearest) { - int ix_nearest = static_cast(::round(ix)); - int iy_nearest = static_cast(::round(iy)); - - // assign nearest neighbor pixel value to output pixel - auto inp_ptr_NC = input + n * inp_sN; - auto out_ptr_NCHW = output + n * out_sN + h * out_sH + w * out_sW; - for (int c = 0; c < C; - ++c, inp_ptr_NC += inp_sC, out_ptr_NCHW += out_sC) { - if (within_bounds_2d(iy_nearest, ix_nearest, inp_H, inp_W)) { - *out_ptr_NCHW = inp_ptr_NC[iy_nearest * inp_sH + ix_nearest * inp_sW]; - } else { - *out_ptr_NCHW = static_cast(0); - } - } - } - } -} - -template -__global__ void grid_sampler_3d_kernel( - const int nthreads, const scalar_t *input, const scalar_t *grid, - scalar_t *output, TensorDesc input_desc, TensorDesc grid_desc, - TensorDesc output_desc, const GridSamplerInterpolation interpolation_mode, - const GridSamplerPadding padding_mode, bool align_corners) { - int C = input_desc.shape[1]; - int inp_D = input_desc.shape[2]; - int inp_H = input_desc.shape[3]; - int inp_W = input_desc.shape[4]; - int out_D = grid_desc.shape[1]; - int out_H = grid_desc.shape[2]; - int out_W = grid_desc.shape[3]; - int inp_sN = input_desc.stride[0]; - int inp_sC = input_desc.stride[1]; - int inp_sD = input_desc.stride[2]; - int inp_sH = input_desc.stride[3]; - int inp_sW = input_desc.stride[4]; - int grid_sN = grid_desc.stride[0]; - int grid_sD = grid_desc.stride[1]; - int grid_sH = grid_desc.stride[2]; - int grid_sW = grid_desc.stride[3]; - int grid_sCoor = grid_desc.stride[4]; - int out_sN = output_desc.stride[0]; - int out_sC = output_desc.stride[1]; - int out_sD = output_desc.stride[2]; - int out_sH = output_desc.stride[3]; - int out_sW = output_desc.stride[4]; - - CUDA_1D_KERNEL_LOOP(index, nthreads) { - const int w = index % out_W; - const int h = (index / out_W) % out_H; - const int d = (index / (out_H * out_W)) % out_D; - const int n = index / (out_D * out_H * out_W); - const int grid_offset = - n * grid_sN + d * grid_sD + h * grid_sH + w * grid_sW; - - // get the corresponding input x, y, z coordinates from grid - scalar_t ix = grid[grid_offset]; - scalar_t iy = grid[grid_offset + grid_sCoor]; - scalar_t iz = grid[grid_offset + 2 * grid_sCoor]; - - ix = grid_sampler_compute_source_index(ix, inp_W, padding_mode, - align_corners); - iy = grid_sampler_compute_source_index(iy, inp_H, padding_mode, - align_corners); - iz = grid_sampler_compute_source_index(iz, inp_D, padding_mode, - align_corners); - - if (interpolation_mode == GridSamplerInterpolation::Bilinear) { - // get corner pixel values from (x, y, z) - // for 4d, we used north-east-south-west - // for 5d, we add top-bottom - int ix_tnw = static_cast(::floor(ix)); - int iy_tnw = static_cast(::floor(iy)); - int iz_tnw = static_cast(::floor(iz)); - - int ix_tne = ix_tnw + 1; - int iy_tne = iy_tnw; - int iz_tne = iz_tnw; - - int ix_tsw = ix_tnw; - int iy_tsw = iy_tnw + 1; - int iz_tsw = iz_tnw; - - int ix_tse = ix_tnw + 1; - int iy_tse = iy_tnw 
+ 1; - int iz_tse = iz_tnw; - - int ix_bnw = ix_tnw; - int iy_bnw = iy_tnw; - int iz_bnw = iz_tnw + 1; - - int ix_bne = ix_tnw + 1; - int iy_bne = iy_tnw; - int iz_bne = iz_tnw + 1; - - int ix_bsw = ix_tnw; - int iy_bsw = iy_tnw + 1; - int iz_bsw = iz_tnw + 1; - - int ix_bse = ix_tnw + 1; - int iy_bse = iy_tnw + 1; - int iz_bse = iz_tnw + 1; - - // get surfaces to each neighbor: - scalar_t tnw = (ix_bse - ix) * (iy_bse - iy) * (iz_bse - iz); - scalar_t tne = (ix - ix_bsw) * (iy_bsw - iy) * (iz_bsw - iz); - scalar_t tsw = (ix_bne - ix) * (iy - iy_bne) * (iz_bne - iz); - scalar_t tse = (ix - ix_bnw) * (iy - iy_bnw) * (iz_bnw - iz); - scalar_t bnw = (ix_tse - ix) * (iy_tse - iy) * (iz - iz_tse); - scalar_t bne = (ix - ix_tsw) * (iy_tsw - iy) * (iz - iz_tsw); - scalar_t bsw = (ix_tne - ix) * (iy - iy_tne) * (iz - iz_tne); - scalar_t bse = (ix - ix_tnw) * (iy - iy_tnw) * (iz - iz_tnw); - - auto inp_ptr_NC = input + n * inp_sN; - auto out_ptr_NCDHW = - output + n * out_sN + d * out_sD + h * out_sH + w * out_sW; - for (int c = 0; c < C; - ++c, inp_ptr_NC += inp_sC, out_ptr_NCDHW += out_sC) { - // (c, iz_tnw, iy_tnw, ix_tnw) * tnw + (c, iz_tne, iy_tne, ix_tne) * - // tne - // + (c, iz_tsw, iy_tsw, ix_tsw) * tsw + (c, iz_tse, iy_tse, ix_tse) * - // tse - // + (c, iz_bnw, iy_bnw, ix_bnw) * bnw + (c, iz_bne, iy_bne, ix_bne) * - // bne - // + (c, iz_bsw, iy_bsw, ix_bsw) * bsw + (c, iz_bse, iy_bse, ix_bse) * - // bse - *out_ptr_NCDHW = static_cast(0); - if (within_bounds_3d(iz_tnw, iy_tnw, ix_tnw, inp_D, inp_H, inp_W)) { - *out_ptr_NCDHW += - inp_ptr_NC[iz_tnw * inp_sD + iy_tnw * inp_sH + ix_tnw * inp_sW] * - tnw; - } - if (within_bounds_3d(iz_tne, iy_tne, ix_tne, inp_D, inp_H, inp_W)) { - *out_ptr_NCDHW += - inp_ptr_NC[iz_tne * inp_sD + iy_tne * inp_sH + ix_tne * inp_sW] * - tne; - } - if (within_bounds_3d(iz_tsw, iy_tsw, ix_tsw, inp_D, inp_H, inp_W)) { - *out_ptr_NCDHW += - inp_ptr_NC[iz_tsw * inp_sD + iy_tsw * inp_sH + ix_tsw * inp_sW] * - tsw; - } - if (within_bounds_3d(iz_tse, iy_tse, ix_tse, inp_D, inp_H, inp_W)) { - *out_ptr_NCDHW += - inp_ptr_NC[iz_tse * inp_sD + iy_tse * inp_sH + ix_tse * inp_sW] * - tse; - } - if (within_bounds_3d(iz_bnw, iy_bnw, ix_bnw, inp_D, inp_H, inp_W)) { - *out_ptr_NCDHW += - inp_ptr_NC[iz_bnw * inp_sD + iy_bnw * inp_sH + ix_bnw * inp_sW] * - bnw; - } - if (within_bounds_3d(iz_bne, iy_bne, ix_bne, inp_D, inp_H, inp_W)) { - *out_ptr_NCDHW += - inp_ptr_NC[iz_bne * inp_sD + iy_bne * inp_sH + ix_bne * inp_sW] * - bne; - } - if (within_bounds_3d(iz_bsw, iy_bsw, ix_bsw, inp_D, inp_H, inp_W)) { - *out_ptr_NCDHW += - inp_ptr_NC[iz_bsw * inp_sD + iy_bsw * inp_sH + ix_bsw * inp_sW] * - bsw; - } - if (within_bounds_3d(iz_bse, iy_bse, ix_bse, inp_D, inp_H, inp_W)) { - *out_ptr_NCDHW += - inp_ptr_NC[iz_bse * inp_sD + iy_bse * inp_sH + ix_bse * inp_sW] * - bse; - } - } - } else if (interpolation_mode == GridSamplerInterpolation::Nearest) { - int ix_nearest = static_cast(::round(ix)); - int iy_nearest = static_cast(::round(iy)); - int iz_nearest = static_cast(::round(iz)); - - // assign nearest neighbor pixel value to output pixel - auto inp_ptr_NC = input + n * inp_sN; - auto out_ptr_NCDHW = - output + n * out_sN + d * out_sD + h * out_sH + w * out_sW; - for (int c = 0; c < C; - ++c, inp_ptr_NC += inp_sC, out_ptr_NCDHW += out_sC) { - if (within_bounds_3d(iz_nearest, iy_nearest, ix_nearest, inp_D, inp_H, - inp_W)) { - *out_ptr_NCDHW = - inp_ptr_NC[iz_nearest * inp_sD + iy_nearest * inp_sH + - ix_nearest * inp_sW]; - } else { - *out_ptr_NCDHW = static_cast(0); - } - } - } - } -} - 
-void create_desc(const int *dims, int nb_dims, TensorDesc &desc) { - memcpy(&desc.shape[0], dims, sizeof(int) * nb_dims); - desc.stride[nb_dims - 1] = 1; - for (int i = nb_dims - 2; i >= 0; --i) { - desc.stride[i] = desc.stride[i + 1] * desc.shape[i + 1]; - } -} - -template -void grid_sample(T *output, const T *input, const T *grid, int *output_dims, - int *input_dims, int *grid_dims, int nb_dims, - GridSamplerInterpolation interp, GridSamplerPadding padding, - bool align_corners, cudaStream_t stream) { - TensorDesc input_desc; - create_desc(input_dims, nb_dims, input_desc); - - TensorDesc output_desc; - create_desc(output_dims, nb_dims, output_desc); - - TensorDesc grid_desc; - create_desc(grid_dims, nb_dims, grid_desc); - - int count = 1; - for (int i = 0; i < nb_dims; ++i) { - if (i == 1) { - continue; - } - count *= output_desc.shape[i]; - } - - if (nb_dims == 4) { - grid_sampler_2d_kernel - <<>>( - count, input, grid, output, input_desc, grid_desc, output_desc, - interp, padding, align_corners); - } else if (nb_dims == 5) { - grid_sampler_3d_kernel - <<>>( - count, input, grid, output, input_desc, grid_desc, output_desc, - interp, padding, align_corners); - } else { - printf("input and grid dims should be 4 or 5\n"); - } -} - -void grid_sample_float(float *output, const float *input, const float *grid, - int *output_dims, int *input_dims, int *grid_dims, - int nb_dims, GridSamplerInterpolation interp, - GridSamplerPadding padding, bool align_corners, - cudaStream_t stream) { - grid_sample(output, input, grid, output_dims, input_dims, grid_dims, - nb_dims, interp, padding, align_corners, stream); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_instance_norm.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_instance_norm.cpp deleted file mode 100644 index b9b363a8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_instance_norm.cpp +++ /dev/null @@ -1,246 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// Modified from: -// https://github.com/NVIDIA/TensorRT/blob/master/plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cpp - -#include "trt_instance_norm.hpp" - -#include - -#include - -#include "trt_serialize.hpp" - -using namespace nvinfer1; - -cudnnStatus_t convert_trt2cudnn_dtype(nvinfer1::DataType trt_dtype, - cudnnDataType_t* cudnn_dtype) { - switch (trt_dtype) { - case nvinfer1::DataType::kFLOAT: - *cudnn_dtype = CUDNN_DATA_FLOAT; - break; - case nvinfer1::DataType::kHALF: - *cudnn_dtype = CUDNN_DATA_HALF; - break; - default: - return CUDNN_STATUS_BAD_PARAM; - } - return CUDNN_STATUS_SUCCESS; -} - -namespace { -constexpr const char* PLUGIN_VERSION{"1"}; -constexpr const char* PLUGIN_NAME{"MMCVInstanceNormalization"}; -} // namespace - -PluginFieldCollection InstanceNormalizationDynamicCreator::mFC{}; -std::vector InstanceNormalizationDynamicCreator::mPluginAttributes; - -InstanceNormalizationDynamic::InstanceNormalizationDynamic( - const std::string& name, float epsilon) - : mLayerName(name), mEpsilon(epsilon) {} - -InstanceNormalizationDynamic::InstanceNormalizationDynamic( - const std::string& name, void const* serialData, size_t serialLength) - : mLayerName(name) { - deserialize_value(&serialData, &serialLength, &mEpsilon); -} - -InstanceNormalizationDynamic::~InstanceNormalizationDynamic() {} - -// InstanceNormalizationDynamic returns one output. 
-int InstanceNormalizationDynamic::getNbOutputs() const { return 1; } - -DimsExprs InstanceNormalizationDynamic::getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs* inputs, int nbInputs, - nvinfer1::IExprBuilder& exprBuilder) { - nvinfer1::DimsExprs output(inputs[0]); - return output; -} - -int InstanceNormalizationDynamic::initialize() { return 0; } - -void InstanceNormalizationDynamic::terminate() {} - -size_t InstanceNormalizationDynamic::getWorkspaceSize( - const nvinfer1::PluginTensorDesc* inputs, int nbInputs, - const nvinfer1::PluginTensorDesc* outputs, int nbOutputs) const { - int n = inputs[0].dims.d[0]; - int c = inputs[0].dims.d[1]; - int elem_size = mmcv::getElementSize(inputs[1].type); - return mmcv::getAlignedSize(n * c * elem_size) * 2; -} - -int InstanceNormalizationDynamic::enqueue( - const nvinfer1::PluginTensorDesc* inputDesc, - const nvinfer1::PluginTensorDesc* outputDesc, const void* const* inputs, - void* const* outputs, void* workspace, cudaStream_t stream) { - nvinfer1::Dims input_dims = inputDesc[0].dims; - int n = input_dims.d[0]; - int c = input_dims.d[1]; - int h = input_dims.d[2]; - int w = input_dims.nbDims > 3 ? input_dims.d[3] : 1; - int elem_size = mmcv::getElementSize(inputDesc[1].type); - - void* n_scales = (void*)workspace; - void* n_bias = (void*)(workspace + mmcv::getAlignedSize(n * c * elem_size)); - - const void* scales = (const void*)inputs[1]; - const void* bias = (const void*)inputs[2]; - - for (int i = 0; i < n; ++i) { - cudaMemcpyAsync(n_scales + i * c * elem_size, scales, c * elem_size, - cudaMemcpyDeviceToDevice, stream); - cudaMemcpyAsync(n_bias + i * c * elem_size, bias, c * elem_size, - cudaMemcpyDeviceToDevice, stream); - } - - cudnnSetTensor4dDescriptor(_b_desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, - n * c, 1, 1); - cudnnDataType_t cudnn_dtype{}; - convert_trt2cudnn_dtype(inputDesc[0].type, &cudnn_dtype); - cudnnSetTensor4dDescriptor(_x_desc, CUDNN_TENSOR_NCHW, cudnn_dtype, 1, n * c, - h, w); - cudnnSetTensor4dDescriptor(_y_desc, CUDNN_TENSOR_NCHW, cudnn_dtype, 1, n * c, - h, w); - float alpha = 1; - float beta = 0; - void const* x_ptr = inputs[0]; - void* y_ptr = outputs[0]; - cudnnSetStream(_cudnn_handle, stream); - // Note: Use of CUDNN_BATCHNORM_SPATIAL_PERSISTENT can cause numerical - // overflows (NaNs) for fp32 data in some circumstances. The lower- - // performance CUDNN_BATCHNORM_SPATIAL should be used if this is not - // acceptable. 
- cudnnBatchNormalizationForwardTraining( - _cudnn_handle, CUDNN_BATCHNORM_SPATIAL_PERSISTENT, &alpha, &beta, _x_desc, - x_ptr, _y_desc, y_ptr, _b_desc, n_scales, n_bias, 1., nullptr, nullptr, - mEpsilon, nullptr, nullptr); - return 0; -} - -size_t InstanceNormalizationDynamic::getSerializationSize() const { - return serialized_size(mEpsilon); -} - -void InstanceNormalizationDynamic::serialize(void* buffer) const { - serialize_value(&buffer, mEpsilon); -} - -bool InstanceNormalizationDynamic::supportsFormatCombination( - int pos, const nvinfer1::PluginTensorDesc* inOut, int nbInputs, - int nbOutputs) { - return ((inOut[pos].type == nvinfer1::DataType::kFLOAT || - inOut[pos].type == nvinfer1::DataType::kHALF) && - inOut[pos].format == nvinfer1::PluginFormat::kLINEAR && - inOut[pos].type == inOut[0].type); -} - -const char* InstanceNormalizationDynamic::getPluginType() const { - return PLUGIN_NAME; -} - -const char* InstanceNormalizationDynamic::getPluginVersion() const { - return PLUGIN_VERSION; -} - -void InstanceNormalizationDynamic::destroy() { delete this; } - -IPluginV2DynamicExt* InstanceNormalizationDynamic::clone() const { - auto* plugin = new InstanceNormalizationDynamic{mLayerName, mEpsilon}; - plugin->setPluginNamespace(mPluginNamespace.c_str()); - return plugin; -} - -// Set plugin namespace -void InstanceNormalizationDynamic::setPluginNamespace( - const char* pluginNamespace) { - mPluginNamespace = pluginNamespace; -} - -const char* InstanceNormalizationDynamic::getPluginNamespace() const { - return mPluginNamespace.c_str(); -} - -nvinfer1::DataType InstanceNormalizationDynamic::getOutputDataType( - int index, const nvinfer1::DataType* inputTypes, int nbInputs) const { - return inputTypes[0]; -} - -// Attach the plugin object to an execution context and grant the plugin the -// access to some context resource. -void InstanceNormalizationDynamic::attachToContext( - cudnnContext* cudnnContext, cublasContext* cublasContext, - IGpuAllocator* gpuAllocator) { - _cudnn_handle = cudnnContext; - cudnnCreateTensorDescriptor(&_b_desc); - cudnnCreateTensorDescriptor(&_x_desc); - cudnnCreateTensorDescriptor(&_y_desc); -} - -// Detach the plugin object from its execution context. 
-void InstanceNormalizationDynamic::detachFromContext() { - cudnnDestroyTensorDescriptor(_y_desc); - cudnnDestroyTensorDescriptor(_x_desc); - cudnnDestroyTensorDescriptor(_b_desc); -} - -void InstanceNormalizationDynamic::configurePlugin( - const nvinfer1::DynamicPluginTensorDesc* in, int nbInputs, - const nvinfer1::DynamicPluginTensorDesc* out, int nbOutputs) {} - -// InstanceNormalizationDynamicCreator methods -InstanceNormalizationDynamicCreator::InstanceNormalizationDynamicCreator() { - mPluginAttributes.clear(); - mPluginAttributes.emplace_back( - PluginField("epsilon", nullptr, PluginFieldType::kFLOAT32, 1)); - - mFC.nbFields = mPluginAttributes.size(); - mFC.fields = mPluginAttributes.data(); -} - -const char* InstanceNormalizationDynamicCreator::getPluginName() const { - return PLUGIN_NAME; -} - -const char* InstanceNormalizationDynamicCreator::getPluginVersion() const { - return PLUGIN_VERSION; -} - -const PluginFieldCollection* -InstanceNormalizationDynamicCreator::getFieldNames() { - return &mFC; -} - -IPluginV2DynamicExt* InstanceNormalizationDynamicCreator::createPlugin( - const char* name, const nvinfer1::PluginFieldCollection* fc) { - float epsilon = 1e-5; - const PluginField* fields = fc->fields; - for (int i = 0; i < fc->nbFields; ++i) { - const char* attrName = fields[i].name; - if (!strcmp(attrName, "epsilon")) { - epsilon = *(static_cast(fields[i].data)); - } - } - - InstanceNormalizationDynamic* obj = - new InstanceNormalizationDynamic(name, epsilon); - obj->setPluginNamespace(mNamespace.c_str()); - return obj; -} - -IPluginV2DynamicExt* InstanceNormalizationDynamicCreator::deserializePlugin( - const char* name, const void* serialData, size_t serialLength) { - InstanceNormalizationDynamic* obj = - new InstanceNormalizationDynamic{name, serialData, serialLength}; - obj->setPluginNamespace(mNamespace.c_str()); - return obj; -} - -void InstanceNormalizationDynamicCreator::setPluginNamespace( - const char* libNamespace) { - mNamespace = libNamespace; -} - -const char* InstanceNormalizationDynamicCreator::getPluginNamespace() const { - return mNamespace.c_str(); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_modulated_deform_conv.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_modulated_deform_conv.cpp deleted file mode 100644 index 330ee806..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_modulated_deform_conv.cpp +++ /dev/null @@ -1,308 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "trt_modulated_deform_conv.hpp" - -#include - -#include - -#include "trt_serialize.hpp" - -void ModulatedDeformConvForwardCUDAKernelLauncher_float( - const float *input, const float *weight, const float *bias, - const float *offset, const float *mask, float *output, void *workspace, - int batch, int channels, int height, int width, int channels_out, - int kernel_w, int kernel_h, int stride_w, int stride_h, int pad_w, - int pad_h, int dilation_w, int dilation_h, int group, int deformable_group, - int im2col_step, cublasHandle_t cublas_handle, cudaStream_t stream); - -namespace { -static const char *PLUGIN_VERSION{"1"}; -static const char *PLUGIN_NAME{"MMCVModulatedDeformConv2d"}; -} // namespace - -nvinfer1::PluginFieldCollection - ModulatedDeformableConvPluginDynamicCreator::mFC{}; -std::vector - ModulatedDeformableConvPluginDynamicCreator::mPluginAttributes; - -ModulatedDeformableConvPluginDynamic::ModulatedDeformableConvPluginDynamic( - const std::string &name, const nvinfer1::Dims stride, - const nvinfer1::Dims padding, const nvinfer1::Dims dilation, - const int deformableGroup, const int group) - : mLayerName(name), - mStride(stride), - mPadding(padding), - mDilation(dilation), - mDeformableGroup(deformableGroup), - mGroup(group) { - mWithBias = false; -} - -ModulatedDeformableConvPluginDynamic::ModulatedDeformableConvPluginDynamic( - const std::string name, const void *data, size_t length) - : mLayerName(name) { - deserialize_value(&data, &length, &mStride); - deserialize_value(&data, &length, &mPadding); - deserialize_value(&data, &length, &mDilation); - deserialize_value(&data, &length, &mDeformableGroup); - deserialize_value(&data, &length, &mGroup); - mWithBias = false; -} -ModulatedDeformableConvPluginDynamic::~ModulatedDeformableConvPluginDynamic() {} - -nvinfer1::IPluginV2DynamicExt *ModulatedDeformableConvPluginDynamic::clone() - const { - ModulatedDeformableConvPluginDynamic *plugin = - new ModulatedDeformableConvPluginDynamic( - mLayerName, mStride, mPadding, mDilation, mDeformableGroup, mGroup); - plugin->setPluginNamespace(getPluginNamespace()); - - return plugin; -} - -nvinfer1::DimsExprs ModulatedDeformableConvPluginDynamic::getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) { - nvinfer1::DimsExprs ret; - ret.nbDims = 4; - ret.d[0] = inputs[0].d[0]; - ret.d[1] = inputs[3].d[0]; - - ret.d[2] = inputs[1].d[2]; - ret.d[3] = inputs[1].d[3]; - - return ret; -} - -bool ModulatedDeformableConvPluginDynamic::supportsFormatCombination( - int pos, const nvinfer1::PluginTensorDesc *inOut, int nbInputs, - int nbOutputs) { - if (pos == 0) { - return (inOut[pos].type == nvinfer1::DataType::kFLOAT && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR); - - } else { - return inOut[pos].type == inOut[0].type && - inOut[pos].format == inOut[0].format; - } -} - -void ModulatedDeformableConvPluginDynamic::configurePlugin( - const nvinfer1::DynamicPluginTensorDesc *inputs, int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *outputs, int nbOutputs) { - if (nbInputs == 5) { - mWithBias = true; - } -} - -size_t ModulatedDeformableConvPluginDynamic::getWorkspaceSize( - const nvinfer1::PluginTensorDesc *inputs, int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, int nbOutputs) const { - int sizeof_dtype = mmcv::getElementSize(outputs[0].type); - - int batch_size = inputs[0].dims.d[0]; - int nInputPlane = inputs[0].dims.d[1]; - int inputHeight = inputs[0].dims.d[2]; - int 
inputWidth = inputs[0].dims.d[3]; - - int nOutputPlane = outputs[0].dims.d[1]; - int outputHeight = outputs[0].dims.d[2]; - int outputWidth = outputs[0].dims.d[3]; - - int kW = inputs[3].dims.d[2]; - int kH = inputs[3].dims.d[3]; - int im2col_step = std::min(32, batch_size); - - size_t col_size = mmcv::getAlignedSize(nInputPlane * kW * kH * outputHeight * - outputWidth * sizeof_dtype); - - return col_size; -} - -int ModulatedDeformableConvPluginDynamic::enqueue( - const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, const void *const *inputs, - void *const *outputs, void *workSpace, cudaStream_t stream) { - int batch = inputDesc[0].dims.d[0]; - int channels = inputDesc[0].dims.d[1]; - int height = inputDesc[0].dims.d[2]; - int width = inputDesc[0].dims.d[3]; - int channels_out = outputDesc[0].dims.d[1]; - int kernel_h = inputDesc[3].dims.d[2]; - int kernel_w = inputDesc[3].dims.d[3]; - - const void *x = inputs[0]; - const void *offset = inputs[1]; - const void *mask = inputs[2]; - const void *weight = inputs[3]; - const void *bias = mWithBias ? inputs[4] : nullptr; - void *output = outputs[0]; - int im2col_step = std::min(batch, 32); - - // TODO: add fp16 support - auto data_type = inputDesc[0].type; - switch (data_type) { - case nvinfer1::DataType::kFLOAT: - ModulatedDeformConvForwardCUDAKernelLauncher_float( - (float *)x, (float *)weight, (float *)bias, (float *)offset, - (float *)mask, (float *)output, workSpace, batch, channels, height, - width, channels_out, kernel_w, kernel_h, mStride.d[0], mStride.d[1], - mPadding.d[0], mPadding.d[1], mDilation.d[0], mDilation.d[1], mGroup, - mDeformableGroup, im2col_step, m_cublas_handle, stream); - break; - default: - return 1; - break; - } - - return 0; -} - -nvinfer1::DataType ModulatedDeformableConvPluginDynamic::getOutputDataType( - int index, const nvinfer1::DataType *inputTypes, int nbInputs) const { - return inputTypes[0]; -} - -// IPluginV2 Methods -const char *ModulatedDeformableConvPluginDynamic::getPluginType() const { - return PLUGIN_NAME; -} - -const char *ModulatedDeformableConvPluginDynamic::getPluginVersion() const { - return PLUGIN_VERSION; -} - -int ModulatedDeformableConvPluginDynamic::getNbOutputs() const { return 1; } - -int ModulatedDeformableConvPluginDynamic::initialize() { return 0; } - -void ModulatedDeformableConvPluginDynamic::terminate() {} - -size_t ModulatedDeformableConvPluginDynamic::getSerializationSize() const { - return sizeof(mStride) + sizeof(mPadding) + sizeof(mDilation) + - sizeof(mDeformableGroup) + sizeof(mGroup); -} - -void ModulatedDeformableConvPluginDynamic::serialize(void *buffer) const { - serialize_value(&buffer, mStride); - serialize_value(&buffer, mPadding); - serialize_value(&buffer, mDilation); - serialize_value(&buffer, mDeformableGroup); - serialize_value(&buffer, mGroup); -} - -void ModulatedDeformableConvPluginDynamic::destroy() { - // This gets called when the network containing plugin is destroyed - delete this; -} - -void ModulatedDeformableConvPluginDynamic::attachToContext( - cudnnContext *cudnnContext, cublasContext *cublasContext, - nvinfer1::IGpuAllocator *gpuAllocator) { - m_cublas_handle = cublasContext; -} - -void ModulatedDeformableConvPluginDynamic::detachFromContext() {} - -void ModulatedDeformableConvPluginDynamic::setPluginNamespace( - const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *ModulatedDeformableConvPluginDynamic::getPluginNamespace() const { - return mNamespace.c_str(); -} - -////////////////////// 
creator ///////////////////////////// - -ModulatedDeformableConvPluginDynamicCreator:: - ModulatedDeformableConvPluginDynamicCreator() { - mPluginAttributes.emplace_back(nvinfer1::PluginField("stride")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("padding")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("dilation")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("groups")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("deform_groups")); - mFC.nbFields = mPluginAttributes.size(); - mFC.fields = mPluginAttributes.data(); -} - -const char *ModulatedDeformableConvPluginDynamicCreator::getPluginName() const { - return PLUGIN_NAME; -} - -const char *ModulatedDeformableConvPluginDynamicCreator::getPluginVersion() - const { - return PLUGIN_VERSION; -} - -const nvinfer1::PluginFieldCollection * -ModulatedDeformableConvPluginDynamicCreator::getFieldNames() { - return &mFC; -} - -nvinfer1::IPluginV2 *ModulatedDeformableConvPluginDynamicCreator::createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) { - nvinfer1::Dims stride{2, {1, 1}}; - nvinfer1::Dims padding{2, {0, 0}}; - nvinfer1::Dims dilation{2, {1, 1}}; - int deformableGroup = 1; - int group = 1; - - for (int i = 0; i < fc->nbFields; i++) { - if (fc->fields[i].data == nullptr) { - continue; - } - std::string field_name(fc->fields[i].name); - - if (field_name.compare("deformable_group") == 0) { - deformableGroup = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("group") == 0) { - group = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("stride") == 0) { - stride.nbDims = 2; - stride.d[0] = static_cast(fc->fields[i].data)[0]; - stride.d[1] = static_cast(fc->fields[i].data)[1]; - } - - if (field_name.compare("padding") == 0) { - padding.nbDims = 2; - padding.d[0] = static_cast(fc->fields[i].data)[0]; - padding.d[1] = static_cast(fc->fields[i].data)[1]; - } - - if (field_name.compare("dilation") == 0) { - dilation.nbDims = 2; - dilation.d[0] = static_cast(fc->fields[i].data)[0]; - dilation.d[1] = static_cast(fc->fields[i].data)[1]; - } - } - - ModulatedDeformableConvPluginDynamic *plugin = - new ModulatedDeformableConvPluginDynamic(name, stride, padding, dilation, - deformableGroup, group); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -nvinfer1::IPluginV2 * -ModulatedDeformableConvPluginDynamicCreator::deserializePlugin( - const char *name, const void *serialData, size_t serialLength) { - auto plugin = - new ModulatedDeformableConvPluginDynamic(name, serialData, serialLength); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -void ModulatedDeformableConvPluginDynamicCreator::setPluginNamespace( - const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *ModulatedDeformableConvPluginDynamicCreator::getPluginNamespace() - const { - return mNamespace.c_str(); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_modulated_deform_conv_kernel.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_modulated_deform_conv_kernel.cu deleted file mode 100644 index f29a7a79..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_modulated_deform_conv_kernel.cu +++ /dev/null @@ -1,134 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include - -#include "common_cuda_helper.hpp" -#include "modulated_deform_conv_cuda_kernel.cuh" -#include "trt_cuda_helper.cuh" -#include "trt_plugin_helper.hpp" - -template -void trt_modulated_deformable_im2col( - const T* data_im_, const T* data_offset_, const T* data_mask_, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kenerl_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, T* data_col_, - cudaStream_t stream) { - // num_axes should be smaller than block size - const int channel_per_deformable_group = channels / deformable_group; - const int num_kernels = channels * batch_size * height_col * width_col; - - modulated_deformable_im2col_gpu_kernel - <<>>( - num_kernels, data_im_, data_offset_, data_mask_, height_im, width_im, - kernel_h, kenerl_w, pad_h, pad_w, stride_h, stride_w, dilation_h, - dilation_w, channel_per_deformable_group, batch_size, channels, - deformable_group, height_col, width_col, data_col_); - - cudaCheckError(); -} - -template -__global__ void output_add_bias_kernel(scalar_t* output, const scalar_t* bias, - size_t step_batch, size_t step_channel, - size_t n) { - CUDA_1D_KERNEL_LOOP(index, n) { - output[index] += bias[(index % step_batch) / step_channel]; - } -} - -template -static void output_add_bias(scalar_t* output, const scalar_t* bias, - size_t batch, size_t channel, size_t height, - size_t width, cudaStream_t stream) { - size_t step_channel = height * width; - size_t step_batch = step_channel * channel; - size_t n = step_batch * batch; - output_add_bias_kernel<<>>( - output, bias, step_batch, step_channel, n); -} - -template -void ModulatedDeformConvForwardCUDAKernelLauncher( - const scalar_t* input, const scalar_t* weight, const scalar_t* bias, - const scalar_t* offset, const scalar_t* mask, scalar_t* output, - void* workspace, int batch, int channels, int height, int width, - int channels_out, int kernel_w, int kernel_h, int stride_w, int stride_h, - int pad_w, int pad_h, int dilation_w, int dilation_h, int group, - int deformable_group, int im2col_step, cublasHandle_t cublas_handle, - cudaStream_t stream) { - size_t sizeof_dtype = sizeof(scalar_t); - bool with_bias = (bias != nullptr); - - im2col_step = std::min(int(batch), im2col_step); - assert(batch % im2col_step == 0); - const int channels_kernel = channels / group; - - const int height_out = - (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; - const int width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - scalar_t* columns = (scalar_t*)workspace; - - const size_t input_step = channels * height * width; - const size_t offset_step = - deformable_group * kernel_h * kernel_w * 2 * height * width; - const size_t mask_step = - deformable_group * kernel_h * kernel_w * height * width; - const size_t out_step = channels_out * height_out * width_out; - const size_t out_group_step = out_step / group; - const size_t col_g_step = - channels * kernel_w * kernel_h / group * height_out * width_out; - const size_t weight_g_step = - channels_out / group * channels / group * kernel_h * kernel_w; - - const int m = channels_out / group; - const int n = height_out * width_out; - const int k = channels / group * kernel_h * kernel_w; - scalar_t alpha = 1.; - scalar_t beta = 0.; - - for (int b = 0; b < batch; b++) { - const 
scalar_t* input_start = input + b * input_step; - const scalar_t* offset_start = offset + b * offset_step; - const scalar_t* mask_start = mask + b * mask_step; - trt_modulated_deformable_im2col( - input_start, offset_start, mask_start, 1, channels, height, width, - height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, deformable_group, columns, stream); - - for (int g = 0; g < group; g++) { - const scalar_t* weight_start = weight + g * weight_g_step; - scalar_t* col_start = columns + g * col_g_step; - scalar_t* out_buffer_start = output + b * out_step + g * out_group_step; - - // cudaMemsetAsync(out_buffer_start, 0, 1, stream); - cublasGemmWrap(cublas_handle, CUBLAS_OP_N, CUBLAS_OP_N, n, m, k, - &alpha, col_start, n, weight_start, k, &beta, - out_buffer_start, n); - cudaCheckError(); - } - } - - if (with_bias) { - output_add_bias(output, bias, batch, channels_out, height_out, - width_out, stream); - } -} - -void ModulatedDeformConvForwardCUDAKernelLauncher_float( - const float* input, const float* weight, const float* bias, - const float* offset, const float* mask, float* output, void* workspace, - int batch, int channels, int height, int width, int channels_out, - int kernel_w, int kernel_h, int stride_w, int stride_h, int pad_w, - int pad_h, int dilation_w, int dilation_h, int group, int deformable_group, - int im2col_step, cublasHandle_t cublas_handle, cudaStream_t stream) { - ModulatedDeformConvForwardCUDAKernelLauncher( - input, weight, bias, offset, mask, output, workspace, batch, channels, - height, width, channels_out, kernel_w, kernel_h, stride_w, stride_h, - pad_w, pad_h, dilation_w, dilation_h, group, deformable_group, - im2col_step, cublas_handle, stream); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_nms.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_nms.cpp deleted file mode 100644 index 64be215e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_nms.cpp +++ /dev/null @@ -1,279 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "trt_nms.hpp" - -#include -#include - -#include - -#include "trt_serialize.hpp" - -extern size_t get_onnxnms_workspace_size( - size_t num_batches, size_t spatial_dimension, size_t num_classes, - size_t boxes_word_size, int center_point_box, size_t output_length); - -extern void TRTNMSCUDAKernelLauncher_float( - const float *boxes, const float *scores, - const int max_output_boxes_per_class, const float iou_threshold, - const float score_threshold, const int offset, int *output, - int center_point_box, int num_batches, int spatial_dimension, - int num_classes, size_t output_length, void *workspace, - cudaStream_t stream); - -namespace { -static const char *PLUGIN_VERSION{"1"}; -static const char *PLUGIN_NAME{"NonMaxSuppression"}; -} // namespace - -nvinfer1::PluginFieldCollection NonMaxSuppressionDynamicCreator::mFC{}; -std::vector - NonMaxSuppressionDynamicCreator::mPluginAttributes; - -NonMaxSuppressionDynamic::NonMaxSuppressionDynamic( - const std::string &name, int centerPointBox, int maxOutputBoxesPerClass, - float iouThreshold, float scoreThreshold, int offset) - : mLayerName(name), - mCenterPointBox(centerPointBox), - mMaxOutputBoxesPerClass(maxOutputBoxesPerClass), - mIouThreshold(iouThreshold), - mScoreThreshold(scoreThreshold), - mOffset(offset) {} - -NonMaxSuppressionDynamic::NonMaxSuppressionDynamic(const std::string name, - const void *data, - size_t length) - : mLayerName(name) { - deserialize_value(&data, &length, &mCenterPointBox); - deserialize_value(&data, &length, &mMaxOutputBoxesPerClass); - deserialize_value(&data, &length, &mIouThreshold); - deserialize_value(&data, &length, &mScoreThreshold); - deserialize_value(&data, &length, &mOffset); -} - -nvinfer1::IPluginV2DynamicExt *NonMaxSuppressionDynamic::clone() const { - NonMaxSuppressionDynamic *plugin = new NonMaxSuppressionDynamic( - mLayerName, mCenterPointBox, mMaxOutputBoxesPerClass, mIouThreshold, - mScoreThreshold, mOffset); - plugin->setPluginNamespace(getPluginNamespace()); - - return plugin; -} - -nvinfer1::DimsExprs NonMaxSuppressionDynamic::getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) { - nvinfer1::DimsExprs ret; - ret.nbDims = 2; - auto num_batches = inputs[0].d[0]; - auto spatial_dimension = inputs[0].d[1]; - if (mMaxOutputBoxesPerClass > 0) { - spatial_dimension = exprBuilder.operation( - nvinfer1::DimensionOperation::kMIN, *spatial_dimension, - *exprBuilder.constant(mMaxOutputBoxesPerClass)); - } - auto num_classes = inputs[1].d[1]; - ret.d[0] = exprBuilder.operation( - nvinfer1::DimensionOperation::kPROD, *num_batches, - *exprBuilder.operation(nvinfer1::DimensionOperation::kPROD, - *spatial_dimension, *num_classes)); - ret.d[1] = exprBuilder.constant(3); - - return ret; -} - -bool NonMaxSuppressionDynamic::supportsFormatCombination( - int pos, const nvinfer1::PluginTensorDesc *inOut, int nbInputs, - int nbOutputs) { - if (pos < nbInputs) { - switch (pos) { - case 0: - // boxes - return inOut[pos].type == nvinfer1::DataType::kFLOAT && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR; - case 1: - // scores - return inOut[pos].type == nvinfer1::DataType::kFLOAT && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR; - default: - return true; - } - } else { - switch (pos - nbInputs) { - case 0: - // selected_indices - return inOut[pos].type == nvinfer1::DataType::kINT32 && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR; - default: - return true; - } - } - return true; -} - -void 
NonMaxSuppressionDynamic::configurePlugin( - const nvinfer1::DynamicPluginTensorDesc *inputs, int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *outputs, int nbOutputs) {} - -size_t NonMaxSuppressionDynamic::getWorkspaceSize( - const nvinfer1::PluginTensorDesc *inputs, int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, int nbOutputs) const { - size_t boxes_word_size = mmcv::getElementSize(inputs[0].type); - size_t num_batches = inputs[0].dims.d[0]; - size_t spatial_dimension = inputs[0].dims.d[1]; - size_t num_classes = inputs[1].dims.d[1]; - size_t output_length = outputs[0].dims.d[0]; - - return get_onnxnms_workspace_size(num_batches, spatial_dimension, num_classes, - boxes_word_size, mCenterPointBox, - output_length); -} - -int NonMaxSuppressionDynamic::enqueue( - const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, const void *const *inputs, - void *const *outputs, void *workSpace, cudaStream_t stream) { - int num_batches = inputDesc[0].dims.d[0]; - int spatial_dimension = inputDesc[0].dims.d[1]; - int num_classes = inputDesc[1].dims.d[1]; - int output_length = outputDesc[0].dims.d[0]; - - const float *boxes = (const float *)inputs[0]; - const float *scores = (const float *)inputs[1]; - int *output = (int *)outputs[0]; - TRTNMSCUDAKernelLauncher_float( - boxes, scores, mMaxOutputBoxesPerClass, mIouThreshold, mScoreThreshold, - mOffset, output, mCenterPointBox, num_batches, spatial_dimension, - num_classes, output_length, workSpace, stream); - - return 0; -} - -nvinfer1::DataType NonMaxSuppressionDynamic::getOutputDataType( - int index, const nvinfer1::DataType *inputTypes, int nbInputs) const { - return nvinfer1::DataType::kINT32; -} - -// IPluginV2 Methods -const char *NonMaxSuppressionDynamic::getPluginType() const { - return PLUGIN_NAME; -} - -const char *NonMaxSuppressionDynamic::getPluginVersion() const { - return PLUGIN_VERSION; -} - -int NonMaxSuppressionDynamic::getNbOutputs() const { return 1; } - -int NonMaxSuppressionDynamic::initialize() { return 0; } - -void NonMaxSuppressionDynamic::terminate() {} - -size_t NonMaxSuppressionDynamic::getSerializationSize() const { - return sizeof(mCenterPointBox) + sizeof(mMaxOutputBoxesPerClass) + - sizeof(mIouThreshold) + sizeof(mScoreThreshold) + sizeof(mOffset); -} - -void NonMaxSuppressionDynamic::serialize(void *buffer) const { - serialize_value(&buffer, mCenterPointBox); - serialize_value(&buffer, mMaxOutputBoxesPerClass); - serialize_value(&buffer, mIouThreshold); - serialize_value(&buffer, mScoreThreshold); - serialize_value(&buffer, mOffset); -} - -void NonMaxSuppressionDynamic::destroy() { - // This gets called when the network containing plugin is destroyed - delete this; -} - -void NonMaxSuppressionDynamic::setPluginNamespace(const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *NonMaxSuppressionDynamic::getPluginNamespace() const { - return mNamespace.c_str(); -} - -////////////////////// creator ///////////////////////////// - -NonMaxSuppressionDynamicCreator::NonMaxSuppressionDynamicCreator() { - mPluginAttributes.clear(); - mPluginAttributes.emplace_back(nvinfer1::PluginField("center_point_box")); - mPluginAttributes.emplace_back( - nvinfer1::PluginField("max_output_boxes_per_class")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("iou_threshold")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("score_threshold")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("offset")); - mFC.nbFields = mPluginAttributes.size(); - 
mFC.fields = mPluginAttributes.data(); -} - -const char *NonMaxSuppressionDynamicCreator::getPluginName() const { - return PLUGIN_NAME; -} - -const char *NonMaxSuppressionDynamicCreator::getPluginVersion() const { - return PLUGIN_VERSION; -} - -const nvinfer1::PluginFieldCollection * -NonMaxSuppressionDynamicCreator::getFieldNames() { - return &mFC; -} - -nvinfer1::IPluginV2 *NonMaxSuppressionDynamicCreator::createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) { - int centerPointBox = 0; - int maxOutputBoxesPerClass = 0; - float iouThreshold = 0.0f; - float scoreThreshold = 0.0f; - int offset = 0; - - for (int i = 0; i < fc->nbFields; i++) { - if (fc->fields[i].data == nullptr) { - continue; - } - std::string field_name(fc->fields[i].name); - - if (field_name.compare("center_point_box") == 0) { - centerPointBox = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("max_output_boxes_per_class") == 0) { - maxOutputBoxesPerClass = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("iou_threshold") == 0) { - iouThreshold = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("score_threshold") == 0) { - scoreThreshold = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("offset") == 0) { - offset = static_cast(fc->fields[i].data)[0]; - } - } - NonMaxSuppressionDynamic *plugin = - new NonMaxSuppressionDynamic(name, centerPointBox, maxOutputBoxesPerClass, - iouThreshold, scoreThreshold, offset); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -nvinfer1::IPluginV2 *NonMaxSuppressionDynamicCreator::deserializePlugin( - const char *name, const void *serialData, size_t serialLength) { - auto plugin = new NonMaxSuppressionDynamic(name, serialData, serialLength); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -void NonMaxSuppressionDynamicCreator::setPluginNamespace( - const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *NonMaxSuppressionDynamicCreator::getPluginNamespace() const { - return mNamespace.c_str(); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_nms_kernel.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_nms_kernel.cu deleted file mode 100644 index 3de37ca6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_nms_kernel.cu +++ /dev/null @@ -1,274 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include -#include -#include -#include -#include - -#include -#include -#include - -#include "common_cuda_helper.hpp" -#include "nms_cuda_kernel.cuh" -#include "trt_cuda_helper.cuh" -#include "trt_plugin_helper.hpp" - -struct NMSBox { - float box[4]; -}; - -struct nms_centerwh2xyxy { - __host__ __device__ NMSBox operator()(const NMSBox box) { - NMSBox out; - out.box[0] = box.box[0] - box.box[2] / 2.0f; - out.box[1] = box.box[1] - box.box[3] / 2.0f; - out.box[2] = box.box[0] + box.box[2] / 2.0f; - out.box[3] = box.box[1] + box.box[3] / 2.0f; - return out; - } -}; - -struct nms_sbox_idle { - const float* idle_box_; - __host__ __device__ nms_sbox_idle(const float* idle_box) { - idle_box_ = idle_box; - } - - __host__ __device__ NMSBox operator()(const NMSBox box) { - return {idle_box_[0], idle_box_[1], idle_box_[2], idle_box_[3]}; - } -}; - -struct nms_score_threshold { - float score_threshold_; - __host__ __device__ nms_score_threshold(const float score_threshold) { - score_threshold_ = score_threshold; - } - - __host__ __device__ bool operator()(const float score) { - return score < score_threshold_; - } -}; - -__global__ void nms_reindex_kernel(int n, int* output, int* index_cache) { - CUDA_1D_KERNEL_LOOP(index, n) { - const int old_index = output[index * 3 + 2]; - output[index * 3 + 2] = index_cache[old_index]; - } -} - -__global__ void mask_to_output_kernel(const unsigned long long* dev_mask, - const int* index, int* output, - int* output_count, int batch_id, - int cls_id, int spatial_dimension, - int col_blocks, - int max_output_boxes_per_class) { - extern __shared__ unsigned long long remv[]; - - // fill remv with 0 - CUDA_1D_KERNEL_LOOP(i, col_blocks) { remv[i] = 0; } - __syncthreads(); - - int start = *output_count; - int out_per_class_count = 0; - for (int i = 0; i < spatial_dimension; i++) { - const int nblock = i / threadsPerBlock; - const int inblock = i % threadsPerBlock; - if (!(remv[nblock] & (1ULL << inblock))) { - if (threadIdx.x == 0) { - output[start * 3 + 0] = batch_id; - output[start * 3 + 1] = cls_id; - output[start * 3 + 2] = index[i]; - start += 1; - } - out_per_class_count += 1; - if (out_per_class_count >= max_output_boxes_per_class) { - break; - } - __syncthreads(); - // set every overlap box with bit 1 in remv - const unsigned long long* p = dev_mask + i * col_blocks; - CUDA_1D_KERNEL_LOOP(j, col_blocks) { - if (j >= nblock) { - remv[j] |= p[j]; - } - } // j - __syncthreads(); - } - } // i - if (threadIdx.x == 0) { - *output_count = start; - } -} - -size_t get_onnxnms_workspace_size(size_t num_batches, size_t spatial_dimension, - size_t num_classes, size_t boxes_word_size, - int center_point_box, size_t output_length) { - size_t boxes_xyxy_workspace = 0; - if (center_point_box == 1) { - boxes_xyxy_workspace = mmcv::getAlignedSize( - num_batches * spatial_dimension * 4 * boxes_word_size); - } - size_t scores_workspace = - mmcv::getAlignedSize(spatial_dimension * boxes_word_size); - size_t boxes_workspace = - mmcv::getAlignedSize(spatial_dimension * 4 * boxes_word_size); - const int col_blocks = - (spatial_dimension + threadsPerBlock - 1) / threadsPerBlock; - size_t mask_workspace = mmcv::getAlignedSize(spatial_dimension * col_blocks * - sizeof(unsigned long long)); - size_t index_template_workspace = - mmcv::getAlignedSize(spatial_dimension * sizeof(int)); - size_t index_workspace = - mmcv::getAlignedSize(spatial_dimension * sizeof(int)); - size_t count_workspace = mmcv::getAlignedSize(sizeof(int)); - return scores_workspace + 
boxes_xyxy_workspace + boxes_workspace + - mask_workspace + index_template_workspace + index_workspace + - count_workspace; -} - -/** - * Launch the NonMaxSuppression kernel - * - * The NMS will be performed on each batch/class, share the kernel implement - * `nms_cuda`. For each batch/class, the `boxes_sorted` and `index_cache` will - * be sorted by scores, boxes_sorted will be used in `nms_cuda` kernel. After - * that, the output would be generated by `mask_to_output_kernel` with - * `dev_mask` and `sorted_cache`. - * - * @param[in] bboxes with shape [num_batch, spatial_dimension, 4], input boxes - * @param[in] scores with shape [num_batch, num_classes, spatial_dimension], - * input scores - * @param[in] max_output_boxes_per_class max output boxes per class - * @param[in] iou_threshold threshold of iou - * @param[in] score_threshold threshold of scores - * @param[in] offset box offset, only 0 or 1 is valid - * @param[out] output with shape [output_length, 3], each row contain index - * (batch_id, class_id, boxes_id), filling -1 if result is not valid. - * @param[in] center_point_box 0 if boxes is [left, top, right, bottom] 1 if - * boxes is [center_x, center_y, width, height] - * @param[in] num_batches batch size of boxes and scores - * @param[in] spatial_dimension boxes numbers each batch - * @param[in] num_classes class numbers - * @param[in] output_length the max output rows - * @param[in] workspace memory for all temporary variables. - * @param[in] stream cuda stream - */ -void TRTNMSCUDAKernelLauncher_float(const float* boxes, const float* scores, - const int max_output_boxes_per_class, - const float iou_threshold, - const float score_threshold, - const int offset, int* output, - int center_point_box, int num_batches, - int spatial_dimension, int num_classes, - size_t output_length, void* workspace, - cudaStream_t stream) { - const int col_blocks = - (spatial_dimension + threadsPerBlock - 1) / threadsPerBlock; - float* boxes_sorted = (float*)workspace; - workspace = static_cast(workspace) + - mmcv::getAlignedSize(spatial_dimension * 4 * sizeof(float)); - - float* boxes_xyxy = nullptr; - if (center_point_box == 1) { - boxes_xyxy = (float*)workspace; - workspace = static_cast(workspace) + - mmcv::getAlignedSize(num_batches * spatial_dimension * 4 * - sizeof(float)); - thrust::transform(thrust::cuda::par.on(stream), (NMSBox*)boxes, - (NMSBox*)(boxes + num_batches * spatial_dimension * 4), - (NMSBox*)boxes_xyxy, nms_centerwh2xyxy()); - cudaCheckError(); - } - - float* scores_sorted = (float*)workspace; - workspace = static_cast(workspace) + - mmcv::getAlignedSize(spatial_dimension * sizeof(float)); - - unsigned long long* dev_mask = (unsigned long long*)workspace; - workspace = static_cast(workspace) + - mmcv::getAlignedSize(spatial_dimension * col_blocks * - sizeof(unsigned long long)); - - int* index_cache = (int*)workspace; - workspace = static_cast(workspace) + - mmcv::getAlignedSize(spatial_dimension * sizeof(int)); - - // generate sequence [0,1,2,3,4 ....] 
- int* index_template = (int*)workspace; - workspace = static_cast(workspace) + - mmcv::getAlignedSize(spatial_dimension * sizeof(int)); - thrust::sequence(thrust::cuda::par.on(stream), index_template, - index_template + spatial_dimension, 0); - - int max_output_boxes_per_class_cpu = max_output_boxes_per_class; - if (max_output_boxes_per_class_cpu <= 0) { - max_output_boxes_per_class_cpu = spatial_dimension; - } - - int* output_count = (int*)workspace; - workspace = static_cast(workspace) + mmcv::getAlignedSize(sizeof(int)); - cudaMemsetAsync(output_count, 0, sizeof(int), stream); - - // fill output with -1 - thrust::fill(thrust::cuda::par.on(stream), output, output + output_length * 3, - -1); - cudaCheckError(); - - dim3 blocks(col_blocks, col_blocks); - dim3 threads(threadsPerBlock); - - for (int batch_id = 0; batch_id < num_batches; ++batch_id) { - for (int cls_id = 0; cls_id < num_classes; ++cls_id) { - const int batch_cls_id = batch_id * num_classes + cls_id; - - // sort boxes by score - cudaMemcpyAsync(scores_sorted, scores + batch_cls_id * spatial_dimension, - spatial_dimension * sizeof(float), - cudaMemcpyDeviceToDevice, stream); - cudaCheckError(); - - cudaMemcpyAsync(index_cache, index_template, - spatial_dimension * sizeof(int), cudaMemcpyDeviceToDevice, - stream); - cudaCheckError(); - - thrust::sort_by_key(thrust::cuda::par.on(stream), scores_sorted, - scores_sorted + spatial_dimension, index_cache, - thrust::greater()); - - if (center_point_box == 1) { - thrust::gather(thrust::cuda::par.on(stream), index_cache, - index_cache + spatial_dimension, - (NMSBox*)(boxes_xyxy + batch_id * spatial_dimension * 4), - (NMSBox*)boxes_sorted); - } else { - thrust::gather(thrust::cuda::par.on(stream), index_cache, - index_cache + spatial_dimension, - (NMSBox*)(boxes + batch_id * spatial_dimension * 4), - (NMSBox*)boxes_sorted); - } - - cudaCheckError(); - - if (score_threshold > 0.0f) { - thrust::transform_if( - thrust::cuda::par.on(stream), (NMSBox*)boxes_sorted, - (NMSBox*)(boxes_sorted + spatial_dimension * 4), scores_sorted, - (NMSBox*)boxes_sorted, nms_sbox_idle(boxes_sorted), - nms_score_threshold(score_threshold)); - } - - nms_cuda<<>>(spatial_dimension, iou_threshold, - offset, boxes_sorted, dev_mask); - - // will be performed when dev_mask is full. - mask_to_output_kernel<<<1, threadsPerBlock, - col_blocks * sizeof(unsigned long long), - stream>>>( - dev_mask, index_cache, output, output_count, batch_id, cls_id, - spatial_dimension, col_blocks, max_output_boxes_per_class_cpu); - } // cls_id - } // batch_id -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_plugin.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_plugin.cpp deleted file mode 100644 index eec1bb2c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_plugin.cpp +++ /dev/null @@ -1,27 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "trt_plugin.hpp" - -#include "trt_corner_pool.hpp" -#include "trt_cummaxmin.hpp" -#include "trt_deform_conv.hpp" -#include "trt_grid_sampler.hpp" -#include "trt_instance_norm.hpp" -#include "trt_modulated_deform_conv.hpp" -#include "trt_nms.hpp" -#include "trt_roi_align.hpp" -#include "trt_scatternd.hpp" - -REGISTER_TENSORRT_PLUGIN(CumMaxPluginDynamicCreator); -REGISTER_TENSORRT_PLUGIN(CumMinPluginDynamicCreator); -REGISTER_TENSORRT_PLUGIN(GridSamplerDynamicCreator); -REGISTER_TENSORRT_PLUGIN(DeformableConvPluginDynamicCreator); -REGISTER_TENSORRT_PLUGIN(ModulatedDeformableConvPluginDynamicCreator); -REGISTER_TENSORRT_PLUGIN(NonMaxSuppressionDynamicCreator); -REGISTER_TENSORRT_PLUGIN(RoIAlignPluginDynamicCreator); -REGISTER_TENSORRT_PLUGIN(ONNXScatterNDDynamicCreator); -REGISTER_TENSORRT_PLUGIN(InstanceNormalizationDynamicCreator); -REGISTER_TENSORRT_PLUGIN(CornerPoolPluginDynamicCreator); - -extern "C" { -bool initLibMMCVInferPlugins() { return true; } -} // extern "C" diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_roi_align.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_roi_align.cpp deleted file mode 100644 index 97700f93..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_roi_align.cpp +++ /dev/null @@ -1,294 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "trt_roi_align.hpp" - -#include - -#include - -#include "trt_serialize.hpp" - -extern void TRTRoIAlignForwardCUDAKernelLauncher_float( - const float *input, const float *rois, float *output, float *argmax_y, - float *argmax_x, int output_size, int channels, int height, int width, - int aligned_height, int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned, cudaStream_t stream); - -namespace { -static const char *PLUGIN_VERSION{"1"}; -static const char *PLUGIN_NAME{"MMCVRoiAlign"}; -} // namespace - -nvinfer1::PluginFieldCollection RoIAlignPluginDynamicCreator::mFC{}; -std::vector - RoIAlignPluginDynamicCreator::mPluginAttributes; - -RoIAlignPluginDynamic::RoIAlignPluginDynamic(const std::string &name, - int outWidth, int outHeight, - float spatialScale, - int sampleRatio, int poolMode, - bool aligned) - : mLayerName(name), - mOutWidth(outWidth), - mOutHeight(outHeight), - mSpatialScale(spatialScale), - mSampleRatio(sampleRatio), - mPoolMode(poolMode), - mAligned(aligned) {} - -RoIAlignPluginDynamic::RoIAlignPluginDynamic(const std::string name, - const void *data, size_t length) - : mLayerName(name) { - deserialize_value(&data, &length, &mOutWidth); - deserialize_value(&data, &length, &mOutHeight); - deserialize_value(&data, &length, &mSpatialScale); - deserialize_value(&data, &length, &mSampleRatio); - deserialize_value(&data, &length, &mPoolMode); - deserialize_value(&data, &length, &mAligned); -} - -nvinfer1::IPluginV2DynamicExt *RoIAlignPluginDynamic::clone() const { - RoIAlignPluginDynamic *plugin = new RoIAlignPluginDynamic( - mLayerName, mOutWidth, mOutHeight, mSpatialScale, mSampleRatio, mPoolMode, - mAligned); - plugin->setPluginNamespace(getPluginNamespace()); - - return plugin; -} - -nvinfer1::DimsExprs RoIAlignPluginDynamic::getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) { - nvinfer1::DimsExprs ret; - ret.nbDims = 4; - ret.d[0] = inputs[1].d[0]; - ret.d[1] = inputs[0].d[1]; - ret.d[2] = exprBuilder.constant(mOutHeight); - 
ret.d[3] = exprBuilder.constant(mOutWidth); - - return ret; -} - -bool RoIAlignPluginDynamic::supportsFormatCombination( - int pos, const nvinfer1::PluginTensorDesc *inOut, int nbInputs, - int nbOutputs) { - return inOut[pos].type == nvinfer1::DataType::kFLOAT && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR; -} - -void RoIAlignPluginDynamic::configurePlugin( - const nvinfer1::DynamicPluginTensorDesc *inputs, int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *outputs, int nbOutputs) {} - -size_t RoIAlignPluginDynamic::getWorkspaceSize( - const nvinfer1::PluginTensorDesc *inputs, int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, int nbOutputs) const { - size_t output_size = 0; - size_t word_size = 0; - switch (mPoolMode) { - case 0: // max - output_size = outputs[0].dims.d[0] * outputs[0].dims.d[1] * - outputs[0].dims.d[2] * outputs[0].dims.d[3]; - word_size = mmcv::getElementSize(outputs[0].type); - return output_size * word_size * 2; - break; - case 1: - return 0; - break; - default: - return 0; - } - return 0; -} - -int RoIAlignPluginDynamic::enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, - void *const *outputs, void *workSpace, - cudaStream_t stream) { - int channels = inputDesc[0].dims.d[1]; - int height = inputDesc[0].dims.d[2]; - int width = inputDesc[0].dims.d[3]; - - int output_size = outputDesc[0].dims.d[0] * outputDesc[0].dims.d[1] * - outputDesc[0].dims.d[2] * outputDesc[0].dims.d[3]; - int word_size = mmcv::getElementSize(outputDesc[0].type); - - const void *feat = inputs[0]; - const void *rois = inputs[1]; - void *output = outputs[0]; - void *argmax_y = nullptr; - void *argmax_x = nullptr; - - switch (mPoolMode) { - case 0: // max - argmax_y = workSpace; - argmax_x = argmax_y + output_size * word_size; - break; - case 1: // avg - break; - } - - switch (outputDesc[0].type) { - case nvinfer1::DataType::kFLOAT: - TRTRoIAlignForwardCUDAKernelLauncher_float( - (const float *)feat, (const float *)rois, (float *)output, - (float *)argmax_y, (float *)argmax_x, output_size, channels, height, - width, mOutHeight, mOutWidth, mSpatialScale, mSampleRatio, mPoolMode, - mAligned, stream); - break; - - default: - break; - } - - return 0; -} - -nvinfer1::DataType RoIAlignPluginDynamic::getOutputDataType( - int index, const nvinfer1::DataType *inputTypes, int nbInputs) const { - return inputTypes[0]; -} - -// IPluginV2 Methods -const char *RoIAlignPluginDynamic::getPluginType() const { return PLUGIN_NAME; } - -const char *RoIAlignPluginDynamic::getPluginVersion() const { - return PLUGIN_VERSION; -} - -int RoIAlignPluginDynamic::getNbOutputs() const { return 1; } - -int RoIAlignPluginDynamic::initialize() { return 0; } - -void RoIAlignPluginDynamic::terminate() {} - -size_t RoIAlignPluginDynamic::getSerializationSize() const { - return sizeof(mOutWidth) + sizeof(mOutHeight) + sizeof(mSpatialScale) + - sizeof(mSampleRatio) + sizeof(mPoolMode) + sizeof(mAligned); -} - -void RoIAlignPluginDynamic::serialize(void *buffer) const { - serialize_value(&buffer, mOutWidth); - serialize_value(&buffer, mOutHeight); - serialize_value(&buffer, mSpatialScale); - serialize_value(&buffer, mSampleRatio); - serialize_value(&buffer, mPoolMode); - serialize_value(&buffer, mAligned); -} - -void RoIAlignPluginDynamic::destroy() { - // This gets called when the network containing plugin is destroyed - delete this; -} - -void RoIAlignPluginDynamic::setPluginNamespace(const char *libNamespace) { - mNamespace = 
libNamespace; -} - -const char *RoIAlignPluginDynamic::getPluginNamespace() const { - return mNamespace.c_str(); -} - -////////////////////// creator ///////////////////////////// - -RoIAlignPluginDynamicCreator::RoIAlignPluginDynamicCreator() { - mPluginAttributes.emplace_back(nvinfer1::PluginField("output_height")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("output_width")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("spatial_scale")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("sampling_ratio")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("mode")); - mPluginAttributes.emplace_back(nvinfer1::PluginField("aligned")); - mFC.nbFields = mPluginAttributes.size(); - mFC.fields = mPluginAttributes.data(); -} - -const char *RoIAlignPluginDynamicCreator::getPluginName() const { - return PLUGIN_NAME; -} - -const char *RoIAlignPluginDynamicCreator::getPluginVersion() const { - return PLUGIN_VERSION; -} - -const nvinfer1::PluginFieldCollection * -RoIAlignPluginDynamicCreator::getFieldNames() { - return &mFC; -} - -nvinfer1::IPluginV2 *RoIAlignPluginDynamicCreator::createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) { - int outWidth = 7; - int outHeight = 7; - float spatialScale = 1.0; - int sampleRatio = 0; - int poolMode = -1; - bool aligned = true; - for (int i = 0; i < fc->nbFields; i++) { - if (fc->fields[i].data == nullptr) { - continue; - } - std::string field_name(fc->fields[i].name); - - if (field_name.compare("output_height") == 0) { - outHeight = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("output_width") == 0) { - outWidth = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("spatial_scale") == 0) { - spatialScale = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("sampling_ratio") == 0) { - sampleRatio = static_cast(fc->fields[i].data)[0]; - } - - if (field_name.compare("mode") == 0) { - int data_size = fc->fields[i].length; - const char *data_start = static_cast(fc->fields[i].data); - std::string poolModeStr(data_start, data_size); - if (poolModeStr == "avg") { - poolMode = 1; - } else if (poolModeStr == "max") { - poolMode = 0; - } else { - std::cout << "Unknown pool mode \"" << poolModeStr << "\"." 
- << std::endl; - } - assert(poolMode >= 0); - } - - if (field_name.compare("aligned") == 0) { - int aligned_int = static_cast(fc->fields[i].data)[0]; - aligned = aligned_int != 0; - } - } - - assert(outHeight > 0); - assert(outWidth > 0); - assert(spatialScale > 0.); - assert(poolMode >= 0); - - RoIAlignPluginDynamic *plugin = new RoIAlignPluginDynamic( - name, outWidth, outHeight, spatialScale, sampleRatio, poolMode, aligned); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -nvinfer1::IPluginV2 *RoIAlignPluginDynamicCreator::deserializePlugin( - const char *name, const void *serialData, size_t serialLength) { - auto plugin = new RoIAlignPluginDynamic(name, serialData, serialLength); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -void RoIAlignPluginDynamicCreator::setPluginNamespace( - const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *RoIAlignPluginDynamicCreator::getPluginNamespace() const { - return mNamespace.c_str(); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_roi_align_kernel.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_roi_align_kernel.cu deleted file mode 100644 index 650bc685..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_roi_align_kernel.cu +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "common_cuda_helper.hpp" -#include "roi_align_cuda_kernel.cuh" - -template -void TRTRoIAlignForwardCUDAKernelLauncher( - const scalar_t* input, const scalar_t* rois, scalar_t* output, - scalar_t* argmax_y, scalar_t* argmax_x, int output_size, int channels, - int height, int width, int aligned_height, int aligned_width, - scalar_t spatial_scale, int sampling_ratio, int pool_mode, bool aligned, - cudaStream_t stream) { - roi_align_forward_cuda_kernel - <<>>( - output_size, input, rois, output, argmax_y, argmax_x, aligned_height, - aligned_width, static_cast(spatial_scale), sampling_ratio, - pool_mode, aligned, channels, height, width); -} - -void TRTRoIAlignForwardCUDAKernelLauncher_float( - const float* input, const float* rois, float* output, float* argmax_y, - float* argmax_x, int output_size, int channels, int height, int width, - int aligned_height, int aligned_width, float spatial_scale, - int sampling_ratio, int pool_mode, bool aligned, cudaStream_t stream) { - TRTRoIAlignForwardCUDAKernelLauncher( - input, rois, output, argmax_y, argmax_x, output_size, channels, height, - width, aligned_height, aligned_width, spatial_scale, sampling_ratio, - pool_mode, aligned, stream); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_scatternd.cpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_scatternd.cpp deleted file mode 100644 index 0d077902..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_scatternd.cpp +++ /dev/null @@ -1,207 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "trt_scatternd.hpp" - -#include -#include - -#include - -#include "trt_serialize.hpp" - -extern void TRTONNXScatterNDKernelLauncher_float( - const float *data, const int *indices, const float *update, const int *dims, - int nbDims, const int *indices_dims, int indice_nbDims, float *output, - cudaStream_t stream); - -extern void TRTONNXScatterNDKernelLauncher_int32( - const int *data, const int *indices, const int *update, const int *dims, - int nbDims, const int *indices_dims, int indice_nbDims, int *output, - cudaStream_t stream); - -namespace { -static const char *PLUGIN_VERSION{"1"}; -static const char *PLUGIN_NAME{"ScatterND"}; -} // namespace - -nvinfer1::PluginFieldCollection ONNXScatterNDDynamicCreator::mFC{}; -std::vector - ONNXScatterNDDynamicCreator::mPluginAttributes; - -ONNXScatterNDDynamic::ONNXScatterNDDynamic(const std::string &name) - : mLayerName(name) {} - -ONNXScatterNDDynamic::ONNXScatterNDDynamic(const std::string name, - const void *data, size_t length) - : mLayerName(name) {} - -nvinfer1::IPluginV2DynamicExt *ONNXScatterNDDynamic::clone() const { - ONNXScatterNDDynamic *plugin = new ONNXScatterNDDynamic(mLayerName); - plugin->setPluginNamespace(getPluginNamespace()); - - return plugin; -} - -nvinfer1::DimsExprs ONNXScatterNDDynamic::getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) { - return inputs[0]; -} - -bool ONNXScatterNDDynamic::supportsFormatCombination( - int pos, const nvinfer1::PluginTensorDesc *inOut, int nbInputs, - int nbOutputs) { - if (pos < nbInputs) { - switch (pos) { - case 0: - // data - return (inOut[pos].type == nvinfer1::DataType::kFLOAT && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR) || - (inOut[pos].type == nvinfer1::DataType::kINT32 && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR); - case 1: - // indices - return inOut[pos].type == nvinfer1::DataType::kINT32 && - inOut[pos].format == nvinfer1::TensorFormat::kLINEAR; - case 2: - // updates - return inOut[pos].type == inOut[0].type && - inOut[pos].format == inOut[0].format; - default: - return true; - } - } else { - switch (pos - nbInputs) { - case 0: - // output - return inOut[pos].type == inOut[0].type && - inOut[pos].format == inOut[0].format; - default: - return true; - } - } - return true; -} - -void ONNXScatterNDDynamic::configurePlugin( - const nvinfer1::DynamicPluginTensorDesc *inputs, int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *outputs, int nbOutputs) {} - -size_t ONNXScatterNDDynamic::getWorkspaceSize( - const nvinfer1::PluginTensorDesc *inputs, int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, int nbOutputs) const { - return 0; -} - -int ONNXScatterNDDynamic::enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, - void *const *outputs, void *workSpace, - cudaStream_t stream) { - const int *dims = &(inputDesc[0].dims.d[0]); - const int *indices_dims = &(inputDesc[1].dims.d[0]); - int nbDims = inputDesc[0].dims.nbDims; - int indice_nbDims = inputDesc[1].dims.nbDims; - - const void *data = inputs[0]; - const void *indices = inputs[1]; - const void *update = inputs[2]; - void *output = outputs[0]; - - auto data_type = inputDesc[0].type; - - switch (data_type) { - case nvinfer1::DataType::kFLOAT: - TRTONNXScatterNDKernelLauncher_float( - (float *)data, (int *)indices, (float *)update, dims, nbDims, - indices_dims, indice_nbDims, (float *)output, stream); - break; - 
- case nvinfer1::DataType::kINT32: - TRTONNXScatterNDKernelLauncher_int32( - (int *)data, (int *)indices, (int *)update, dims, nbDims, - indices_dims, indice_nbDims, (int *)output, stream); - break; - default: - break; - } - - return 0; -} - -nvinfer1::DataType ONNXScatterNDDynamic::getOutputDataType( - int index, const nvinfer1::DataType *inputTypes, int nbInputs) const { - return inputTypes[0]; -} - -// IPluginV2 Methods -const char *ONNXScatterNDDynamic::getPluginType() const { return PLUGIN_NAME; } - -const char *ONNXScatterNDDynamic::getPluginVersion() const { - return PLUGIN_VERSION; -} - -int ONNXScatterNDDynamic::getNbOutputs() const { return 1; } - -int ONNXScatterNDDynamic::initialize() { return 0; } - -void ONNXScatterNDDynamic::terminate() {} - -size_t ONNXScatterNDDynamic::getSerializationSize() const { return 0; } - -void ONNXScatterNDDynamic::serialize(void *buffer) const {} - -void ONNXScatterNDDynamic::destroy() { - // This gets called when the network containing plugin is destroyed - delete this; -} - -void ONNXScatterNDDynamic::setPluginNamespace(const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *ONNXScatterNDDynamic::getPluginNamespace() const { - return mNamespace.c_str(); -} - -////////////////////// creator ///////////////////////////// - -ONNXScatterNDDynamicCreator::ONNXScatterNDDynamicCreator() { - mPluginAttributes.clear(); - mFC.nbFields = mPluginAttributes.size(); - mFC.fields = mPluginAttributes.data(); -} - -const char *ONNXScatterNDDynamicCreator::getPluginName() const { - return PLUGIN_NAME; -} - -const char *ONNXScatterNDDynamicCreator::getPluginVersion() const { - return PLUGIN_VERSION; -} - -const nvinfer1::PluginFieldCollection * -ONNXScatterNDDynamicCreator::getFieldNames() { - return &mFC; -} - -nvinfer1::IPluginV2 *ONNXScatterNDDynamicCreator::createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) { - ONNXScatterNDDynamic *plugin = new ONNXScatterNDDynamic(name); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -nvinfer1::IPluginV2 *ONNXScatterNDDynamicCreator::deserializePlugin( - const char *name, const void *serialData, size_t serialLength) { - auto plugin = new ONNXScatterNDDynamic(name, serialData, serialLength); - plugin->setPluginNamespace(getPluginNamespace()); - return plugin; -} - -void ONNXScatterNDDynamicCreator::setPluginNamespace(const char *libNamespace) { - mNamespace = libNamespace; -} - -const char *ONNXScatterNDDynamicCreator::getPluginNamespace() const { - return mNamespace.c_str(); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_scatternd_kernel.cu b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_scatternd_kernel.cu deleted file mode 100644 index f1b095ef..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/plugins/trt_scatternd_kernel.cu +++ /dev/null @@ -1,93 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include - -#include - -#include "common_cuda_helper.hpp" -#include "trt_cuda_helper.cuh" -#include "trt_plugin_helper.hpp" - -static int const threadsPerBlock = sizeof(unsigned long long int) * 8; - -using mmcv::TensorDesc; - -template -__global__ void onnx_scatternd_kernel(const int n, const int* indices, - const T* update, T* output, - TensorDesc tensor_desc, - TensorDesc indice_desc) { - const int indice_cols = indice_desc.shape[indice_desc.dim - 1]; - const int copy_stride = tensor_desc.stride[indice_cols - 1]; - const int* stride = &(tensor_desc.stride[0]); - CUDA_1D_KERNEL_LOOP(index, n) { - int output_offset = 0; - const int* indices_current = indices + index * indice_cols; - for (int i = 0; i < indice_cols; ++i) { - output_offset += stride[i] * indices_current[i]; - } - memcpy(output + output_offset, update + index * copy_stride, - copy_stride * sizeof(T)); - } -} - -template -void TRTONNXScatterNDKernelLauncher(const T* data, const int* indices, - const T* update, const int* dims, - int nbDims, const int* indices_dims, - int indice_nbDims, T* output, - cudaStream_t stream) { - // fill tensordesc and initial - TensorDesc tensor_desc; - memset((void*)&tensor_desc, 0, sizeof(TensorDesc)); - tensor_desc.dim = nbDims; - tensor_desc.shape[nbDims - 1] = dims[nbDims - 1]; - tensor_desc.stride[nbDims - 1] = 1; - for (int i = nbDims - 2; i >= 0; --i) { - tensor_desc.shape[i] = dims[i]; - tensor_desc.stride[i] = dims[i + 1] * tensor_desc.stride[i + 1]; - } - const int data_size = tensor_desc.stride[0] * tensor_desc.shape[0]; - - TensorDesc indice_desc; - memset((void*)&indice_desc, 0, sizeof(TensorDesc)); - indice_desc.dim = indice_nbDims; - indice_desc.shape[indice_nbDims - 1] = indices_dims[indice_nbDims - 1]; - indice_desc.stride[indice_nbDims - 1] = 1; - for (int i = indice_nbDims - 2; i >= 0; --i) { - indice_desc.shape[i] = indices_dims[i]; - indice_desc.stride[i] = indices_dims[i + 1] * indice_desc.stride[i + 1]; - } - - // output = np.copy(data) - cudaMemcpyAsync(output, data, data_size * sizeof(T), - cudaMemcpyDeviceToDevice); - - int num_update_indice = 1; - for (int i = 0; i < indice_nbDims - 1; ++i) { - num_update_indice *= indice_desc.shape[i]; - } - // scatter - const int col_block = GET_BLOCKS(num_update_indice, threadsPerBlock); - onnx_scatternd_kernel<<>>( - num_update_indice, indices, update, output, tensor_desc, indice_desc); -} - -void TRTONNXScatterNDKernelLauncher_float(const float* data, const int* indices, - const float* update, const int* dims, - int nbDims, const int* indices_dims, - int indice_nbDims, float* output, - cudaStream_t stream) { - TRTONNXScatterNDKernelLauncher(data, indices, update, dims, nbDims, - indices_dims, indice_nbDims, output, - stream); -} - -void TRTONNXScatterNDKernelLauncher_int32(const int* data, const int* indices, - const int* update, const int* dims, - int nbDims, const int* indices_dims, - int indice_nbDims, int* output, - cudaStream_t stream) { - TRTONNXScatterNDKernelLauncher(data, indices, update, dims, nbDims, - indices_dims, indice_nbDims, output, - stream); -} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_corner_pool.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_corner_pool.hpp deleted file mode 100644 index f34e15b3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_corner_pool.hpp +++ /dev/null @@ -1,111 +0,0 @@ -#ifndef TRT_CORNER_POOL_HPP -#define TRT_CORNER_POOL_HPP -#include -#include - 
-#include "trt_plugin_helper.hpp" - -enum TRT_CORNER_POOL_TYPE { - TRT_TOP_POOL = 0, - TRT_BOTTOM_POOL = 1, - TRT_LEFT_POOL = 2, - TRT_RIGHT_POOL = 3 -}; - -// implement of CornerPool -class CornerPoolPluginDynamic : public nvinfer1::IPluginV2DynamicExt { - public: - CornerPoolPluginDynamic(const std::string &name, - TRT_CORNER_POOL_TYPE poolType); - - CornerPoolPluginDynamic(const std::string name, const void *data, - size_t length); - - CornerPoolPluginDynamic() = delete; - - ~CornerPoolPluginDynamic(); - - // IPluginV2DynamicExt Methods - nvinfer1::IPluginV2DynamicExt *clone() const override; - nvinfer1::DimsExprs getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) override; - bool supportsFormatCombination(int pos, - const nvinfer1::PluginTensorDesc *inOut, - int nbInputs, int nbOutputs) override; - void configurePlugin(const nvinfer1::DynamicPluginTensorDesc *in, - int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *out, - int nbOutputs) override; - size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc *inputs, - int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, - int nbOutputs) const override; - int enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, void *const *outputs, void *workspace, - cudaStream_t stream) override; - - // IPluginV2Ext Methods - nvinfer1::DataType getOutputDataType(int index, - const nvinfer1::DataType *inputTypes, - int nbInputs) const override; - - // IPluginV2 Methods - const char *getPluginType() const override; - const char *getPluginVersion() const override; - int getNbOutputs() const override; - int initialize() override; - void terminate() override; - size_t getSerializationSize() const override; - void serialize(void *buffer) const override; - void destroy() override; - void setPluginNamespace(const char *pluginNamespace) override; - const char *getPluginNamespace() const override; - - protected: - const std::string mLayerName; - std::string mNamespace; - - TRT_CORNER_POOL_TYPE mPoolType; - - protected: - // To prevent compiler warnings. 
- using nvinfer1::IPluginV2DynamicExt::canBroadcastInputAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::configurePlugin; - using nvinfer1::IPluginV2DynamicExt::enqueue; - using nvinfer1::IPluginV2DynamicExt::getOutputDimensions; - using nvinfer1::IPluginV2DynamicExt::getWorkspaceSize; - using nvinfer1::IPluginV2DynamicExt::isOutputBroadcastAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::supportsFormat; -}; - -// CornerPool creator -class CornerPoolPluginDynamicCreator : public nvinfer1::IPluginCreator { - public: - CornerPoolPluginDynamicCreator(); - - const char *getPluginName() const override; - - const char *getPluginVersion() const override; - - const nvinfer1::PluginFieldCollection *getFieldNames() override; - - nvinfer1::IPluginV2 *createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) override; - - nvinfer1::IPluginV2 *deserializePlugin(const char *name, - const void *serialData, - size_t serialLength) override; - - void setPluginNamespace(const char *pluginNamespace) override; - - const char *getPluginNamespace() const override; - - protected: - nvinfer1::PluginFieldCollection mFC; - std::vector mPluginAttributes; - std::string mNamespace; -}; - -#endif TRT_CORNER_POOL_HPP // TRT_CORNER_POOL_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_cuda_helper.cuh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_cuda_helper.cuh deleted file mode 100644 index 846d06a4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_cuda_helper.cuh +++ /dev/null @@ -1,39 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef TRT_CUDA_HELPER_HPP -#define TRT_CUDA_HELPER_HPP -#include - -#define cudaCheckError() \ - { \ - cudaError_t e = cudaGetLastError(); \ - if (e != cudaSuccess) { \ - printf("Cuda failure %s:%d: '%s'\n", __FILE__, __LINE__, \ - cudaGetErrorString(e)); \ - exit(0); \ - } \ - } - -/** - * Returns a view of the original tensor with its dimensions permuted. 
- * - * @param[out] dst pointer to the destination tensor - * @param[in] src pointer to the source tensor - * @param[in] src_size shape of the src tensor - * @param[in] permute The desired ordering of dimensions - * @param[in] src_dim dim of src tensor - * @param[in] stream cuda stream handle - */ -template -void memcpyPermute(scalar_t* dst, const scalar_t* src, int* src_size, - int* permute, int src_dim, cudaStream_t stream = 0); - -template -cublasStatus_t cublasGemmWrap(cublasHandle_t handle, cublasOperation_t transa, - cublasOperation_t transb, int m, int n, int k, - const scalar_t* alpha, const scalar_t* A, int lda, - const scalar_t* B, int ldb, const scalar_t* beta, - scalar_t* C, int ldc) { - return CUBLAS_STATUS_INTERNAL_ERROR; -} - -#endif // TRT_CUDA_HELPER_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_cummaxmin.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_cummaxmin.hpp deleted file mode 100644 index 5b856b02..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_cummaxmin.hpp +++ /dev/null @@ -1,122 +0,0 @@ -#ifndef TRT_CUMMAXMIN_HPP -#define TRT_CUMMAXMIN_HPP -#include -#include - -#include "trt_plugin_helper.hpp" - -enum TRT_CUMCMPTYPE { TRT_CUMMAX = 0, TRT_CUMMIN = 1 }; - -// implement of cummax and cummin -class CumMaxMinPluginDynamic : public nvinfer1::IPluginV2DynamicExt { - public: - CumMaxMinPluginDynamic(const std::string &name, int dim, - TRT_CUMCMPTYPE cumType); - - CumMaxMinPluginDynamic(const std::string name, const void *data, - size_t length); - - CumMaxMinPluginDynamic() = delete; - - ~CumMaxMinPluginDynamic(); - - // IPluginV2DynamicExt Methods - nvinfer1::IPluginV2DynamicExt *clone() const override; - nvinfer1::DimsExprs getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) override; - bool supportsFormatCombination(int pos, - const nvinfer1::PluginTensorDesc *inOut, - int nbInputs, int nbOutputs) override; - void configurePlugin(const nvinfer1::DynamicPluginTensorDesc *in, - int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *out, - int nbOutputs) override; - size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc *inputs, - int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, - int nbOutputs) const override; - int enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, void *const *outputs, void *workspace, - cudaStream_t stream) override; - - // IPluginV2Ext Methods - nvinfer1::DataType getOutputDataType(int index, - const nvinfer1::DataType *inputTypes, - int nbInputs) const override; - - // IPluginV2 Methods - const char *getPluginType() const override; - const char *getPluginVersion() const override; - int getNbOutputs() const override; - int initialize() override; - void terminate() override; - size_t getSerializationSize() const override; - void serialize(void *buffer) const override; - void destroy() override; - void setPluginNamespace(const char *pluginNamespace) override; - const char *getPluginNamespace() const override; - - protected: - const std::string mLayerName; - std::string mNamespace; - - int mDim; - TRT_CUMCMPTYPE mCumType; - - protected: - // To prevent compiler warnings. 
- using nvinfer1::IPluginV2DynamicExt::canBroadcastInputAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::configurePlugin; - using nvinfer1::IPluginV2DynamicExt::enqueue; - using nvinfer1::IPluginV2DynamicExt::getOutputDimensions; - using nvinfer1::IPluginV2DynamicExt::getWorkspaceSize; - using nvinfer1::IPluginV2DynamicExt::isOutputBroadcastAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::supportsFormat; -}; - -// cummax and cummin creator -class CumMaxMinPluginDynamicCreator : public nvinfer1::IPluginCreator { - public: - CumMaxMinPluginDynamicCreator(TRT_CUMCMPTYPE cumType); - - const char *getPluginName() const override; - - const char *getPluginVersion() const override; - - const nvinfer1::PluginFieldCollection *getFieldNames() override; - - nvinfer1::IPluginV2 *createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) override; - - nvinfer1::IPluginV2 *deserializePlugin(const char *name, - const void *serialData, - size_t serialLength) override; - - void setPluginNamespace(const char *pluginNamespace) override; - - const char *getPluginNamespace() const override; - - protected: - TRT_CUMCMPTYPE mCumType; - nvinfer1::PluginFieldCollection mFC; - std::vector mPluginAttributes; - std::string mNamespace; -}; - -// cummax creator -class CumMaxPluginDynamicCreator : public CumMaxMinPluginDynamicCreator { - public: - CumMaxPluginDynamicCreator(); - const char *getPluginName() const override; -}; - -// cummin creator -class CumMinPluginDynamicCreator : public CumMaxMinPluginDynamicCreator { - public: - CumMinPluginDynamicCreator(); - const char *getPluginName() const override; -}; - -#endif TRT_CUMMAXMIN_HPP // TRT_CUMMAXMIN_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_deform_conv.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_deform_conv.hpp deleted file mode 100644 index fc48ac5d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_deform_conv.hpp +++ /dev/null @@ -1,118 +0,0 @@ -#ifndef TRT_DEFORM_CONV_HPP -#define TRT_DEFORM_CONV_HPP -#include - -#include -#include -#include - -#include "trt_plugin_helper.hpp" - -class DeformableConvPluginDynamic : public nvinfer1::IPluginV2DynamicExt { - public: - DeformableConvPluginDynamic(const std::string &name, - const nvinfer1::Dims &stride, - const nvinfer1::Dims &padding, - const nvinfer1::Dims &dilation, - const int deformableGroup, const int group, - int im2colStep); - - DeformableConvPluginDynamic(const std::string name, const void *data, - size_t length); - - DeformableConvPluginDynamic() = delete; - - ~DeformableConvPluginDynamic(); - - // IPluginV2DynamicExt Methods - nvinfer1::IPluginV2DynamicExt *clone() const override; - nvinfer1::DimsExprs getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) override; - bool supportsFormatCombination(int pos, - const nvinfer1::PluginTensorDesc *inOut, - int nbInputs, int nbOutputs) override; - void configurePlugin(const nvinfer1::DynamicPluginTensorDesc *in, - int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *out, - int nbOutputs) override; - size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc *inputs, - int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, - int nbOutputs) const override; - int enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, void *const *outputs, void *workspace, - 
cudaStream_t stream) override; - void attachToContext(cudnnContext *cudnnContext, cublasContext *cublasContext, - nvinfer1::IGpuAllocator *gpuAllocator) override; - void detachFromContext() override; - - // IPluginV2Ext Methods - nvinfer1::DataType getOutputDataType(int index, - const nvinfer1::DataType *inputTypes, - int nbInputs) const override; - - // IPluginV2 Methods - const char *getPluginType() const override; - const char *getPluginVersion() const override; - int getNbOutputs() const override; - int initialize() override; - void terminate() override; - size_t getSerializationSize() const override; - void serialize(void *buffer) const override; - void destroy() override; - void setPluginNamespace(const char *pluginNamespace) override; - const char *getPluginNamespace() const override; - - private: - const std::string mLayerName; - std::string mNamespace; - - nvinfer1::Dims mStride; - nvinfer1::Dims mPadding; - nvinfer1::Dims mDilation; - int mDeformableGroup; - int mGroup; - int mIm2colStep; - - cublasHandle_t m_cublas_handle; - - protected: - // To prevent compiler warnings. - using nvinfer1::IPluginV2DynamicExt::canBroadcastInputAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::configurePlugin; - using nvinfer1::IPluginV2DynamicExt::enqueue; - using nvinfer1::IPluginV2DynamicExt::getOutputDimensions; - using nvinfer1::IPluginV2DynamicExt::getWorkspaceSize; - using nvinfer1::IPluginV2DynamicExt::isOutputBroadcastAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::supportsFormat; -}; - -class DeformableConvPluginDynamicCreator : public nvinfer1::IPluginCreator { - public: - DeformableConvPluginDynamicCreator(); - - const char *getPluginName() const override; - - const char *getPluginVersion() const override; - - const nvinfer1::PluginFieldCollection *getFieldNames() override; - - nvinfer1::IPluginV2 *createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) override; - - nvinfer1::IPluginV2 *deserializePlugin(const char *name, - const void *serialData, - size_t serialLength) override; - - void setPluginNamespace(const char *pluginNamespace) override; - - const char *getPluginNamespace() const override; - - private: - static nvinfer1::PluginFieldCollection mFC; - static std::vector mPluginAttributes; - std::string mNamespace; -}; -#endif // TRT_DEFORM_CONV_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_grid_sampler.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_grid_sampler.hpp deleted file mode 100644 index 40920ce5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_grid_sampler.hpp +++ /dev/null @@ -1,108 +0,0 @@ -#ifndef TRT_GRID_SAMPLER_HPP -#define TRT_GRID_SAMPLER_HPP -#include - -#include -#include -#include - -#include "trt_plugin_helper.hpp" - -namespace mmcv { -enum class GridSamplerInterpolation { Bilinear, Nearest }; -enum class GridSamplerPadding { Zeros, Border, Reflection }; -} // namespace mmcv - -class GridSamplerDynamic : public nvinfer1::IPluginV2DynamicExt { - public: - GridSamplerDynamic(const std::string &name, int mode, int paddingMode, - bool alignCorners); - - GridSamplerDynamic(const std::string name, const void *data, size_t length); - - GridSamplerDynamic() = delete; - - // IPluginV2DynamicExt Methods - nvinfer1::IPluginV2DynamicExt *clone() const override; - nvinfer1::DimsExprs getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) override; - 
bool supportsFormatCombination(int pos, - const nvinfer1::PluginTensorDesc *inOut, - int nbInputs, int nbOutputs) override; - void configurePlugin(const nvinfer1::DynamicPluginTensorDesc *in, - int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *out, - int nbOutputs) override; - size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc *inputs, - int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, - int nbOutputs) const override; - int enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, void *const *outputs, void *workspace, - cudaStream_t stream) override; - - // IPluginV2Ext Methods - nvinfer1::DataType getOutputDataType(int index, - const nvinfer1::DataType *inputTypes, - int nbInputs) const override; - - // IPluginV2 Methods - const char *getPluginType() const override; - const char *getPluginVersion() const override; - int getNbOutputs() const override; - int initialize() override; - void terminate() override; - size_t getSerializationSize() const override; - void serialize(void *buffer) const override; - void destroy() override; - void setPluginNamespace(const char *pluginNamespace) override; - const char *getPluginNamespace() const override; - - private: - const std::string mLayerName; - std::string mNamespace; - - int mMode; - int mPaddingMode; - bool mAlignCorners; - - protected: - // To prevent compiler warnings. - using nvinfer1::IPluginV2DynamicExt::canBroadcastInputAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::configurePlugin; - using nvinfer1::IPluginV2DynamicExt::enqueue; - using nvinfer1::IPluginV2DynamicExt::getOutputDimensions; - using nvinfer1::IPluginV2DynamicExt::getWorkspaceSize; - using nvinfer1::IPluginV2DynamicExt::isOutputBroadcastAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::supportsFormat; -}; - -class GridSamplerDynamicCreator : public nvinfer1::IPluginCreator { - public: - GridSamplerDynamicCreator(); - - const char *getPluginName() const override; - - const char *getPluginVersion() const override; - - const nvinfer1::PluginFieldCollection *getFieldNames() override; - - nvinfer1::IPluginV2 *createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) override; - - nvinfer1::IPluginV2 *deserializePlugin(const char *name, - const void *serialData, - size_t serialLength) override; - - void setPluginNamespace(const char *pluginNamespace) override; - - const char *getPluginNamespace() const override; - - private: - static nvinfer1::PluginFieldCollection mFC; - static std::vector mPluginAttributes; - std::string mNamespace; -}; -#endif // TRT_GRID_SAMPLER_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_instance_norm.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_instance_norm.hpp deleted file mode 100644 index 78060c39..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_instance_norm.hpp +++ /dev/null @@ -1,120 +0,0 @@ -// Modified from: -// https://github.com/NVIDIA/TensorRT/blob/master/plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.h - -#ifndef TRT_INSTANCE_NORMALIZATION_PLUGIN_H -#define TRT_INSTANCE_NORMALIZATION_PLUGIN_H -#include - -#include -#include -#include - -#include "trt_plugin_helper.hpp" - -typedef unsigned short half_type; - -class InstanceNormalizationDynamic final - : public nvinfer1::IPluginV2DynamicExt { - public: - InstanceNormalizationDynamic(const std::string& name, float epsilon); - - 
InstanceNormalizationDynamic(const std::string& name, void const* serialData, - size_t serialLength); - - InstanceNormalizationDynamic() = delete; - - ~InstanceNormalizationDynamic() override; - - int getNbOutputs() const override; - - // DynamicExt plugins returns DimsExprs class instead of Dims - nvinfer1::DimsExprs getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs* inputs, int nbInputs, - nvinfer1::IExprBuilder& exprBuilder) override; - - int initialize() override; - - void terminate() override; - - size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc* inputs, - int nbInputs, - const nvinfer1::PluginTensorDesc* outputs, - int nbOutputs) const override; - - int enqueue(const nvinfer1::PluginTensorDesc* inputDesc, - const nvinfer1::PluginTensorDesc* outputDesc, - const void* const* inputs, void* const* outputs, void* workspace, - cudaStream_t stream) override; - - size_t getSerializationSize() const override; - - void serialize(void* buffer) const override; - - // DynamicExt plugin supportsFormat update. - bool supportsFormatCombination(int pos, - const nvinfer1::PluginTensorDesc* inOut, - int nbInputs, int nbOutputs) override; - - const char* getPluginType() const override; - - const char* getPluginVersion() const override; - - void destroy() override; - - nvinfer1::IPluginV2DynamicExt* clone() const override; - - void setPluginNamespace(const char* pluginNamespace) override; - - const char* getPluginNamespace() const override; - - nvinfer1::DataType getOutputDataType(int index, - const nvinfer1::DataType* inputTypes, - int nbInputs) const override; - - void attachToContext(cudnnContext* cudnn, cublasContext* cublas, - nvinfer1::IGpuAllocator* allocator) override; - - void detachFromContext() override; - - void configurePlugin(const nvinfer1::DynamicPluginTensorDesc* in, - int nbInputs, - const nvinfer1::DynamicPluginTensorDesc* out, - int nbOutputs) override; - - private: - const std::string mLayerName; - float mEpsilon{}; - cudnnHandle_t _cudnn_handle{}; - cudnnTensorDescriptor_t _x_desc{}, _y_desc{}, _b_desc{}; - std::string mPluginNamespace{}; -}; - -class InstanceNormalizationDynamicCreator : public nvinfer1::IPluginCreator { - public: - InstanceNormalizationDynamicCreator(); - - ~InstanceNormalizationDynamicCreator() override = default; - - const char* getPluginName() const override; - - const char* getPluginVersion() const override; - - const nvinfer1::PluginFieldCollection* getFieldNames() override; - - nvinfer1::IPluginV2DynamicExt* createPlugin( - const char* name, const nvinfer1::PluginFieldCollection* fc) override; - - nvinfer1::IPluginV2DynamicExt* deserializePlugin( - const char* name, const void* serialData, size_t serialLength) override; - - void setPluginNamespace(const char* pluginNamespace) override; - - const char* getPluginNamespace() const override; - - private: - static nvinfer1::PluginFieldCollection mFC; - static std::vector mPluginAttributes; - std::string mNamespace; -}; - -#endif // TRT_INSTANCE_NORMALIZATION_PLUGIN_H diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_modulated_deform_conv.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_modulated_deform_conv.hpp deleted file mode 100644 index 0907e7ea..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_modulated_deform_conv.hpp +++ /dev/null @@ -1,120 +0,0 @@ -#ifndef TRT_MODULATED_DEFORM_CONV_HPP -#define TRT_MODULATED_DEFORM_CONV_HPP -#include - -#include -#include -#include - 
-#include "trt_plugin_helper.hpp" - -class ModulatedDeformableConvPluginDynamic - : public nvinfer1::IPluginV2DynamicExt { - public: - ModulatedDeformableConvPluginDynamic(const std::string &name, - const nvinfer1::Dims stride, - const nvinfer1::Dims padding, - const nvinfer1::Dims dilation, - const int deformableGroup, - const int group); - - ModulatedDeformableConvPluginDynamic(const std::string name, const void *data, - size_t length); - - ModulatedDeformableConvPluginDynamic() = delete; - - ~ModulatedDeformableConvPluginDynamic(); - - // IPluginV2DynamicExt Methods - nvinfer1::IPluginV2DynamicExt *clone() const override; - nvinfer1::DimsExprs getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) override; - bool supportsFormatCombination(int pos, - const nvinfer1::PluginTensorDesc *inOut, - int nbInputs, int nbOutputs) override; - void configurePlugin(const nvinfer1::DynamicPluginTensorDesc *in, - int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *out, - int nbOutputs) override; - size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc *inputs, - int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, - int nbOutputs) const override; - int enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, void *const *outputs, void *workspace, - cudaStream_t stream) override; - void attachToContext(cudnnContext *cudnnContext, cublasContext *cublasContext, - nvinfer1::IGpuAllocator *gpuAllocator) override; - void detachFromContext() override; - - // IPluginV2Ext Methods - nvinfer1::DataType getOutputDataType(int index, - const nvinfer1::DataType *inputTypes, - int nbInputs) const override; - - // IPluginV2 Methods - const char *getPluginType() const override; - const char *getPluginVersion() const override; - int getNbOutputs() const override; - int initialize() override; - void terminate() override; - size_t getSerializationSize() const override; - void serialize(void *buffer) const override; - void destroy() override; - void setPluginNamespace(const char *pluginNamespace) override; - const char *getPluginNamespace() const override; - - private: - const std::string mLayerName; - std::string mNamespace; - - nvinfer1::Dims mStride; - nvinfer1::Dims mPadding; - nvinfer1::Dims mDilation; - int mDeformableGroup; - int mGroup; - bool mWithBias; - - cublasHandle_t m_cublas_handle; - - protected: - // To prevent compiler warnings. 
- using nvinfer1::IPluginV2DynamicExt::canBroadcastInputAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::configurePlugin; - using nvinfer1::IPluginV2DynamicExt::enqueue; - using nvinfer1::IPluginV2DynamicExt::getOutputDimensions; - using nvinfer1::IPluginV2DynamicExt::getWorkspaceSize; - using nvinfer1::IPluginV2DynamicExt::isOutputBroadcastAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::supportsFormat; -}; - -class ModulatedDeformableConvPluginDynamicCreator - : public nvinfer1::IPluginCreator { - public: - ModulatedDeformableConvPluginDynamicCreator(); - - const char *getPluginName() const override; - - const char *getPluginVersion() const override; - - const nvinfer1::PluginFieldCollection *getFieldNames() override; - - nvinfer1::IPluginV2 *createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) override; - - nvinfer1::IPluginV2 *deserializePlugin(const char *name, - const void *serialData, - size_t serialLength) override; - - void setPluginNamespace(const char *pluginNamespace) override; - - const char *getPluginNamespace() const override; - - private: - static nvinfer1::PluginFieldCollection mFC; - static std::vector mPluginAttributes; - std::string mNamespace; -}; -#endif // TRT_MODULATED_DEFORM_CONV_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_nms.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_nms.hpp deleted file mode 100644 index a914d909..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_nms.hpp +++ /dev/null @@ -1,107 +0,0 @@ -#ifndef TRT_NMS_HPP -#define TRT_NMS_HPP -#include - -#include -#include -#include - -#include "trt_plugin_helper.hpp" - -class NonMaxSuppressionDynamic : public nvinfer1::IPluginV2DynamicExt { - public: - NonMaxSuppressionDynamic(const std::string &name, int centerPointBox, - int maxOutputBoxesPerClass, float iouThreshold, - float scoreThreshold, int offset); - - NonMaxSuppressionDynamic(const std::string name, const void *data, - size_t length); - - NonMaxSuppressionDynamic() = delete; - - // IPluginV2DynamicExt Methods - nvinfer1::IPluginV2DynamicExt *clone() const override; - nvinfer1::DimsExprs getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) override; - bool supportsFormatCombination(int pos, - const nvinfer1::PluginTensorDesc *inOut, - int nbInputs, int nbOutputs) override; - void configurePlugin(const nvinfer1::DynamicPluginTensorDesc *in, - int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *out, - int nbOutputs) override; - size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc *inputs, - int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, - int nbOutputs) const override; - int enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, void *const *outputs, void *workspace, - cudaStream_t stream) override; - - // IPluginV2Ext Methods - nvinfer1::DataType getOutputDataType(int index, - const nvinfer1::DataType *inputTypes, - int nbInputs) const override; - - // IPluginV2 Methods - const char *getPluginType() const override; - const char *getPluginVersion() const override; - int getNbOutputs() const override; - int initialize() override; - void terminate() override; - size_t getSerializationSize() const override; - void serialize(void *buffer) const override; - void destroy() override; - void setPluginNamespace(const char *pluginNamespace) 
override; - const char *getPluginNamespace() const override; - - private: - const std::string mLayerName; - std::string mNamespace; - - int mCenterPointBox; - int mMaxOutputBoxesPerClass; - float mIouThreshold; - float mScoreThreshold; - int mOffset; - - protected: - // To prevent compiler warnings. - using nvinfer1::IPluginV2DynamicExt::canBroadcastInputAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::configurePlugin; - using nvinfer1::IPluginV2DynamicExt::enqueue; - using nvinfer1::IPluginV2DynamicExt::getOutputDimensions; - using nvinfer1::IPluginV2DynamicExt::getWorkspaceSize; - using nvinfer1::IPluginV2DynamicExt::isOutputBroadcastAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::supportsFormat; -}; - -class NonMaxSuppressionDynamicCreator : public nvinfer1::IPluginCreator { - public: - NonMaxSuppressionDynamicCreator(); - - const char *getPluginName() const override; - - const char *getPluginVersion() const override; - - const nvinfer1::PluginFieldCollection *getFieldNames() override; - - nvinfer1::IPluginV2 *createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) override; - - nvinfer1::IPluginV2 *deserializePlugin(const char *name, - const void *serialData, - size_t serialLength) override; - - void setPluginNamespace(const char *pluginNamespace) override; - - const char *getPluginNamespace() const override; - - private: - static nvinfer1::PluginFieldCollection mFC; - static std::vector mPluginAttributes; - std::string mNamespace; -}; -#endif // TRT_NMS_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_plugin.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_plugin.hpp deleted file mode 100644 index a4adf29d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_plugin.hpp +++ /dev/null @@ -1,7 +0,0 @@ -#ifndef TRT_PLUGIN_HPP -#define TRT_PLUGIN_HPP - -extern "C" { -bool initLibMMCVInferPlugins(); -} // extern "C" -#endif // TRT_PLUGIN_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_plugin_helper.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_plugin_helper.hpp deleted file mode 100644 index 70fba781..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_plugin_helper.hpp +++ /dev/null @@ -1,41 +0,0 @@ -#ifndef TRT_PLUGIN_HELPER_HPP -#define TRT_PLUGIN_HELPER_HPP -#include - -#include "NvInferPlugin.h" - -namespace mmcv { - -const int MAXTENSORDIMS = 10; - -struct TensorDesc { - int shape[MAXTENSORDIMS]; - int stride[MAXTENSORDIMS]; - int dim; -}; - -inline unsigned int getElementSize(nvinfer1::DataType t) { - switch (t) { - case nvinfer1::DataType::kINT32: - return 4; - case nvinfer1::DataType::kFLOAT: - return 4; - case nvinfer1::DataType::kHALF: - return 2; - // case nvinfer1::DataType::kBOOL: - case nvinfer1::DataType::kINT8: - return 1; - default: - throw std::runtime_error("Invalid DataType."); - } - throw std::runtime_error("Invalid DataType."); - return 0; -} - -inline size_t getAlignedSize(size_t origin_size, size_t aligned_number = 16) { - return size_t((origin_size + aligned_number - 1) / aligned_number) * - aligned_number; -} - -} // namespace mmcv -#endif // TRT_PLUGIN_HELPER_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_roi_align.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_roi_align.hpp deleted file mode 100644 index 5677af90..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_roi_align.hpp +++ /dev/null @@ -1,108 +0,0 @@ -#ifndef TRT_ROI_ALIGN_HPP -#define TRT_ROI_ALIGN_HPP -#include - -#include -#include -#include - -#include "trt_plugin_helper.hpp" - -class RoIAlignPluginDynamic : public nvinfer1::IPluginV2DynamicExt { - public: - RoIAlignPluginDynamic(const std::string &name, int outWidth, int outHeight, - float spatialScale, int sampleRatio, int poolMode, - bool aligned); - - RoIAlignPluginDynamic(const std::string name, const void *data, - size_t length); - - RoIAlignPluginDynamic() = delete; - - // IPluginV2DynamicExt Methods - nvinfer1::IPluginV2DynamicExt *clone() const override; - nvinfer1::DimsExprs getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) override; - bool supportsFormatCombination(int pos, - const nvinfer1::PluginTensorDesc *inOut, - int nbInputs, int nbOutputs) override; - void configurePlugin(const nvinfer1::DynamicPluginTensorDesc *in, - int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *out, - int nbOutputs) override; - size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc *inputs, - int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, - int nbOutputs) const override; - int enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, void *const *outputs, void *workspace, - cudaStream_t stream) override; - - // IPluginV2Ext Methods - nvinfer1::DataType getOutputDataType(int index, - const nvinfer1::DataType *inputTypes, - int nbInputs) const override; - - // IPluginV2 Methods - const char *getPluginType() const override; - const char *getPluginVersion() const override; - int getNbOutputs() const override; - int initialize() override; - void terminate() override; - size_t getSerializationSize() const override; - void serialize(void *buffer) const override; - void destroy() override; - void setPluginNamespace(const char *pluginNamespace) override; - const char *getPluginNamespace() const override; - - private: - const std::string mLayerName; - std::string mNamespace; - - int mOutWidth; - int mOutHeight; - float mSpatialScale; - int mSampleRatio; - int mPoolMode; // 1:avg 0:max - bool mAligned; - - protected: - // To prevent compiler warnings. 
- using nvinfer1::IPluginV2DynamicExt::canBroadcastInputAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::configurePlugin; - using nvinfer1::IPluginV2DynamicExt::enqueue; - using nvinfer1::IPluginV2DynamicExt::getOutputDimensions; - using nvinfer1::IPluginV2DynamicExt::getWorkspaceSize; - using nvinfer1::IPluginV2DynamicExt::isOutputBroadcastAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::supportsFormat; -}; - -class RoIAlignPluginDynamicCreator : public nvinfer1::IPluginCreator { - public: - RoIAlignPluginDynamicCreator(); - - const char *getPluginName() const override; - - const char *getPluginVersion() const override; - - const nvinfer1::PluginFieldCollection *getFieldNames() override; - - nvinfer1::IPluginV2 *createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) override; - - nvinfer1::IPluginV2 *deserializePlugin(const char *name, - const void *serialData, - size_t serialLength) override; - - void setPluginNamespace(const char *pluginNamespace) override; - - const char *getPluginNamespace() const override; - - private: - static nvinfer1::PluginFieldCollection mFC; - static std::vector mPluginAttributes; - std::string mNamespace; -}; -#endif // TRT_ROI_ALIGN_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_scatternd.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_scatternd.hpp deleted file mode 100644 index 6087cbef..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_scatternd.hpp +++ /dev/null @@ -1,98 +0,0 @@ -#ifndef TRT_SCATTERND_HPP -#define TRT_SCATTERND_HPP -#include - -#include -#include -#include - -#include "trt_plugin_helper.hpp" - -class ONNXScatterNDDynamic : public nvinfer1::IPluginV2DynamicExt { - public: - ONNXScatterNDDynamic(const std::string &name); - - ONNXScatterNDDynamic(const std::string name, const void *data, size_t length); - - ONNXScatterNDDynamic() = delete; - - // IPluginV2DynamicExt Methods - nvinfer1::IPluginV2DynamicExt *clone() const override; - nvinfer1::DimsExprs getOutputDimensions( - int outputIndex, const nvinfer1::DimsExprs *inputs, int nbInputs, - nvinfer1::IExprBuilder &exprBuilder) override; - bool supportsFormatCombination(int pos, - const nvinfer1::PluginTensorDesc *inOut, - int nbInputs, int nbOutputs) override; - void configurePlugin(const nvinfer1::DynamicPluginTensorDesc *in, - int nbInputs, - const nvinfer1::DynamicPluginTensorDesc *out, - int nbOutputs) override; - size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc *inputs, - int nbInputs, - const nvinfer1::PluginTensorDesc *outputs, - int nbOutputs) const override; - int enqueue(const nvinfer1::PluginTensorDesc *inputDesc, - const nvinfer1::PluginTensorDesc *outputDesc, - const void *const *inputs, void *const *outputs, void *workspace, - cudaStream_t stream) override; - - // IPluginV2Ext Methods - nvinfer1::DataType getOutputDataType(int index, - const nvinfer1::DataType *inputTypes, - int nbInputs) const override; - - // IPluginV2 Methods - const char *getPluginType() const override; - const char *getPluginVersion() const override; - int getNbOutputs() const override; - int initialize() override; - void terminate() override; - size_t getSerializationSize() const override; - void serialize(void *buffer) const override; - void destroy() override; - void setPluginNamespace(const char *pluginNamespace) override; - const char *getPluginNamespace() const override; - - private: - const std::string mLayerName; - std::string mNamespace; - - 
protected: - // To prevent compiler warnings. - using nvinfer1::IPluginV2DynamicExt::canBroadcastInputAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::configurePlugin; - using nvinfer1::IPluginV2DynamicExt::enqueue; - using nvinfer1::IPluginV2DynamicExt::getOutputDimensions; - using nvinfer1::IPluginV2DynamicExt::getWorkspaceSize; - using nvinfer1::IPluginV2DynamicExt::isOutputBroadcastAcrossBatch; - using nvinfer1::IPluginV2DynamicExt::supportsFormat; -}; - -class ONNXScatterNDDynamicCreator : public nvinfer1::IPluginCreator { - public: - ONNXScatterNDDynamicCreator(); - - const char *getPluginName() const override; - - const char *getPluginVersion() const override; - - const nvinfer1::PluginFieldCollection *getFieldNames() override; - - nvinfer1::IPluginV2 *createPlugin( - const char *name, const nvinfer1::PluginFieldCollection *fc) override; - - nvinfer1::IPluginV2 *deserializePlugin(const char *name, - const void *serialData, - size_t serialLength) override; - - void setPluginNamespace(const char *pluginNamespace) override; - - const char *getPluginNamespace() const override; - - private: - static nvinfer1::PluginFieldCollection mFC; - static std::vector mPluginAttributes; - std::string mNamespace; -}; -#endif // TRT_SCATTERND_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_serialize.hpp b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_serialize.hpp deleted file mode 100644 index 1f0899fd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/csrc/tensorrt/trt_serialize.hpp +++ /dev/null @@ -1,105 +0,0 @@ -// Modified from: -// https://github.com/NVIDIA/TensorRT/blob/master/plugin/common/serialize.hpp - -#ifndef TRT_SERIALIZE_HPP -#define TRT_SERIALIZE_HPP -#include -#include -#include -#include -#include -using std::cerr; -using std::cout; -using std::endl; - -template -inline void serialize_value(void** buffer, T const& value); - -template -inline void deserialize_value(void const** buffer, size_t* buffer_size, - T* value); - -namespace { - -template -struct Serializer {}; - -template -struct Serializer::value || - std::is_enum::value || - std::is_pod::value>::type> { - static size_t serialized_size(T const& value) { return sizeof(T); } - static void serialize(void** buffer, T const& value) { - ::memcpy(*buffer, &value, sizeof(T)); - reinterpret_cast(*buffer) += sizeof(T); - } - static void deserialize(void const** buffer, size_t* buffer_size, T* value) { - assert(*buffer_size >= sizeof(T)); - ::memcpy(value, *buffer, sizeof(T)); - reinterpret_cast(*buffer) += sizeof(T); - *buffer_size -= sizeof(T); - } -}; - -template <> -struct Serializer { - static size_t serialized_size(const char* value) { return strlen(value) + 1; } - static void serialize(void** buffer, const char* value) { - ::strcpy(static_cast(*buffer), value); - reinterpret_cast(*buffer) += strlen(value) + 1; - } - static void deserialize(void const** buffer, size_t* buffer_size, - const char** value) { - *value = static_cast(*buffer); - size_t data_size = strnlen(*value, *buffer_size) + 1; - assert(*buffer_size >= data_size); - reinterpret_cast(*buffer) += data_size; - *buffer_size -= data_size; - } -}; - -template -struct Serializer, - typename std::enable_if::value || - std::is_enum::value || - std::is_pod::value>::type> { - static size_t serialized_size(std::vector const& value) { - return sizeof(value.size()) + value.size() * sizeof(T); - } - static void serialize(void** buffer, std::vector const& value) { - 
serialize_value(buffer, value.size()); - size_t nbyte = value.size() * sizeof(T); - ::memcpy(*buffer, value.data(), nbyte); - reinterpret_cast(*buffer) += nbyte; - } - static void deserialize(void const** buffer, size_t* buffer_size, - std::vector* value) { - size_t size; - deserialize_value(buffer, buffer_size, &size); - value->resize(size); - size_t nbyte = value->size() * sizeof(T); - assert(*buffer_size >= nbyte); - ::memcpy(value->data(), *buffer, nbyte); - reinterpret_cast(*buffer) += nbyte; - *buffer_size -= nbyte; - } -}; - -} // namespace - -template -inline size_t serialized_size(T const& value) { - return Serializer::serialized_size(value); -} - -template -inline void serialize_value(void** buffer, T const& value) { - return Serializer::serialize(buffer, value); -} - -template -inline void deserialize_value(void const** buffer, size_t* buffer_size, - T* value) { - return Serializer::deserialize(buffer, buffer_size, value); -} -#endif // TRT_SERIALIZE_HPP diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/deform_conv.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/deform_conv.py deleted file mode 100644 index 85f665cd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/deform_conv.py +++ /dev/null @@ -1,408 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Optional, Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import Tensor -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair, _single - -from mmcv.utils import deprecated_api_warning -from ..cnn import CONV_LAYERS -from ..utils import ext_loader, print_log - -ext_module = ext_loader.load_ext('_ext', [ - 'deform_conv_forward', 'deform_conv_backward_input', - 'deform_conv_backward_parameters' -]) - - -class DeformConv2dFunction(Function): - - @staticmethod - def symbolic(g, - input, - offset, - weight, - stride, - padding, - dilation, - groups, - deform_groups, - bias=False, - im2col_step=32): - return g.op( - 'mmcv::MMCVDeformConv2d', - input, - offset, - weight, - stride_i=stride, - padding_i=padding, - dilation_i=dilation, - groups_i=groups, - deform_groups_i=deform_groups, - bias_i=bias, - im2col_step_i=im2col_step) - - @staticmethod - def forward(ctx, - input: Tensor, - offset: Tensor, - weight: Tensor, - stride: Union[int, Tuple[int, ...]] = 1, - padding: Union[int, Tuple[int, ...]] = 0, - dilation: Union[int, Tuple[int, ...]] = 1, - groups: int = 1, - deform_groups: int = 1, - bias: bool = False, - im2col_step: int = 32) -> Tensor: - if input is not None and input.dim() != 4: - raise ValueError( - f'Expected 4D tensor as input, got {input.dim()}D tensor \ - instead.') - assert bias is False, 'Only support bias is False.' - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deform_groups = deform_groups - ctx.im2col_step = im2col_step - - # When pytorch version >= 1.6.0, amp is adopted for fp16 mode; - # amp won't cast the type of model (float32), but "offset" is cast - # to float16 by nn.Conv2d automatically, leading to the type - # mismatch with input (when it is float32) or weight. - # The flag for whether to use fp16 or amp is the type of "offset", - # we cast weight and input to temporarily support fp16 and amp - # whatever the pytorch version is. 
- input = input.type_as(offset) - weight = weight.type_as(input) - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty( - DeformConv2dFunction._output_size(ctx, input, weight)) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - cur_im2col_step = min(ctx.im2col_step, input.size(0)) - assert (input.size(0) % cur_im2col_step - ) == 0, 'batch size must be divisible by im2col_step' - ext_module.deform_conv_forward( - input, - weight, - offset, - output, - ctx.bufs_[0], - ctx.bufs_[1], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - im2col_step=cur_im2col_step) - return output - - @staticmethod - @once_differentiable - def backward( - ctx, grad_output: Tensor - ) -> Tuple[Optional[Tensor], Optional[Tensor], Optional[Tensor], None, - None, None, None, None, None, None]: - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - cur_im2col_step = min(ctx.im2col_step, input.size(0)) - assert (input.size(0) % cur_im2col_step - ) == 0, 'batch size must be divisible by im2col_step' - - grad_output = grad_output.contiguous() - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - ext_module.deform_conv_backward_input( - input, - offset, - grad_output, - grad_input, - grad_offset, - weight, - ctx.bufs_[0], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - im2col_step=cur_im2col_step) - - if ctx.needs_input_grad[2]: - grad_weight = torch.zeros_like(weight) - ext_module.deform_conv_backward_parameters( - input, - offset, - grad_output, - grad_weight, - ctx.bufs_[0], - ctx.bufs_[1], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - scale=1, - im2col_step=cur_im2col_step) - - return grad_input, grad_offset, grad_weight, \ - None, None, None, None, None, None, None - - @staticmethod - def _output_size(ctx, input, weight): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = ctx.padding[d] - kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = ctx.stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - 'convolution input is too small (output would be ' + - 'x'.join(map(str, output_size)) + ')') - return output_size - - -deform_conv2d = DeformConv2dFunction.apply - - -class DeformConv2d(nn.Module): - r"""Deformable 2D convolution. - - Applies a deformable 2D convolution over an input signal composed of - several input planes. DeformConv2d was described in the paper - `Deformable Convolutional Networks - `_ - - Note: - The argument ``im2col_step`` was added in version 1.3.17, which means - number of samples processed by the ``im2col_cuda_kernel`` per call. - It enables users to define ``batch_size`` and ``im2col_step`` more - flexibly and solved `issue mmcv#1440 - `_. 
- - Args: - in_channels (int): Number of channels in the input image. - out_channels (int): Number of channels produced by the convolution. - kernel_size(int, tuple): Size of the convolving kernel. - stride(int, tuple): Stride of the convolution. Default: 1. - padding (int or tuple): Zero-padding added to both sides of the input. - Default: 0. - dilation (int or tuple): Spacing between kernel elements. Default: 1. - groups (int): Number of blocked connections from input. - channels to output channels. Default: 1. - deform_groups (int): Number of deformable group partitions. - bias (bool): If True, adds a learnable bias to the output. - Default: False. - im2col_step (int): Number of samples processed by im2col_cuda_kernel - per call. It will work when ``batch_size`` > ``im2col_step``, but - ``batch_size`` must be divisible by ``im2col_step``. Default: 32. - `New in version 1.3.17.` - """ - - @deprecated_api_warning({'deformable_groups': 'deform_groups'}, - cls_name='DeformConv2d') - def __init__(self, - in_channels: int, - out_channels: int, - kernel_size: Union[int, Tuple[int, ...]], - stride: Union[int, Tuple[int, ...]] = 1, - padding: Union[int, Tuple[int, ...]] = 0, - dilation: Union[int, Tuple[int, ...]] = 1, - groups: int = 1, - deform_groups: int = 1, - bias: bool = False, - im2col_step: int = 32) -> None: - super().__init__() - - assert not bias, \ - f'bias={bias} is not supported in DeformConv2d.' - assert in_channels % groups == 0, \ - f'in_channels {in_channels} cannot be divisible by groups {groups}' - assert out_channels % groups == 0, \ - f'out_channels {out_channels} cannot be divisible by groups \ - {groups}' - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deform_groups = deform_groups - self.im2col_step = im2col_step - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - # only weight, no bias - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // self.groups, - *self.kernel_size)) - - self.reset_parameters() - - def reset_parameters(self): - # switch the initialization of `self.weight` to the standard kaiming - # method described in `Delving deep into rectifiers: Surpassing - # human-level performance on ImageNet classification` - He, K. et al. - # (2015), using a uniform distribution - nn.init.kaiming_uniform_(self.weight, nonlinearity='relu') - - def forward(self, x: Tensor, offset: Tensor) -> Tensor: - """Deformable Convolutional forward function. - - Args: - x (Tensor): Input feature, shape (B, C_in, H_in, W_in) - offset (Tensor): Offset for deformable convolution, shape - (B, deform_groups*kernel_size[0]*kernel_size[1]*2, - H_out, W_out), H_out, W_out are equal to the output's. - - An offset is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`. - The spatial arrangement is like: - - .. code:: text - - (x0, y0) (x1, y1) (x2, y2) - (x3, y3) (x4, y4) (x5, y5) - (x6, y6) (x7, y7) (x8, y8) - - Returns: - Tensor: Output of the layer. 
- """ - # To fix an assert error in deform_conv_cuda.cpp:128 - # input image is smaller than kernel - input_pad = (x.size(2) < self.kernel_size[0]) or (x.size(3) < - self.kernel_size[1]) - if input_pad: - pad_h = max(self.kernel_size[0] - x.size(2), 0) - pad_w = max(self.kernel_size[1] - x.size(3), 0) - x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0) - offset = offset.contiguous() - out = deform_conv2d(x, offset, self.weight, self.stride, self.padding, - self.dilation, self.groups, self.deform_groups, - False, self.im2col_step) - if input_pad: - out = out[:, :, :out.size(2) - pad_h, :out.size(3) - - pad_w].contiguous() - return out - - def __repr__(self): - s = self.__class__.__name__ - s += f'(in_channels={self.in_channels},\n' - s += f'out_channels={self.out_channels},\n' - s += f'kernel_size={self.kernel_size},\n' - s += f'stride={self.stride},\n' - s += f'padding={self.padding},\n' - s += f'dilation={self.dilation},\n' - s += f'groups={self.groups},\n' - s += f'deform_groups={self.deform_groups},\n' - # bias is not supported in DeformConv2d. - s += 'bias=False)' - return s - - -@CONV_LAYERS.register_module('DCN') -class DeformConv2dPack(DeformConv2d): - """A Deformable Conv Encapsulation that acts as normal Conv layers. - - The offset tensor is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`. - The spatial arrangement is like: - - .. code:: text - - (x0, y0) (x1, y1) (x2, y2) - (x3, y3) (x4, y4) (x5, y5) - (x6, y6) (x7, y7) (x8, y8) - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deform_groups * 2 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_offset() - - def init_offset(self): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x: Tensor) -> Tensor: # type: ignore - offset = self.conv_offset(x) - return deform_conv2d(x, offset, self.weight, self.stride, self.padding, - self.dilation, self.groups, self.deform_groups, - False, self.im2col_step) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - version = local_metadata.get('version', None) - - if version is None or version < 2: - # the key is different in early versions - # In version < 2, DeformConvPack loads previous benchmark models. 
- if (prefix + 'conv_offset.weight' not in state_dict - and prefix[:-1] + '_offset.weight' in state_dict): - state_dict[prefix + 'conv_offset.weight'] = state_dict.pop( - prefix[:-1] + '_offset.weight') - if (prefix + 'conv_offset.bias' not in state_dict - and prefix[:-1] + '_offset.bias' in state_dict): - state_dict[prefix + - 'conv_offset.bias'] = state_dict.pop(prefix[:-1] + - '_offset.bias') - - if version is not None and version > 1: - print_log( - f'DeformConv2dPack {prefix.rstrip(".")} is upgraded to ' - 'version 2.', - logger='root') - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/deform_roi_pool.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/deform_roi_pool.py deleted file mode 100644 index ec9a4c12..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/deform_roi_pool.py +++ /dev/null @@ -1,209 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Optional, Tuple - -from torch import Tensor, nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['deform_roi_pool_forward', 'deform_roi_pool_backward']) - - -class DeformRoIPoolFunction(Function): - - @staticmethod - def symbolic(g, input, rois, offset, output_size, spatial_scale, - sampling_ratio, gamma): - return g.op( - 'mmcv::MMCVDeformRoIPool', - input, - rois, - offset, - pooled_height_i=output_size[0], - pooled_width_i=output_size[1], - spatial_scale_f=spatial_scale, - sampling_ratio_f=sampling_ratio, - gamma_f=gamma) - - @staticmethod - def forward(ctx, - input: Tensor, - rois: Tensor, - offset: Optional[Tensor], - output_size: Tuple[int, ...], - spatial_scale: float = 1.0, - sampling_ratio: int = 0, - gamma: float = 0.1) -> Tensor: - if offset is None: - offset = input.new_zeros(0) - ctx.output_size = _pair(output_size) - ctx.spatial_scale = float(spatial_scale) - ctx.sampling_ratio = int(sampling_ratio) - ctx.gamma = float(gamma) - - assert rois.size(1) == 5, 'RoI must be (idx, x1, y1, x2, y2)!' 
- - output_shape = (rois.size(0), input.size(1), ctx.output_size[0], - ctx.output_size[1]) - output = input.new_zeros(output_shape) - - ext_module.deform_roi_pool_forward( - input, - rois, - offset, - output, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale, - sampling_ratio=ctx.sampling_ratio, - gamma=ctx.gamma) - - ctx.save_for_backward(input, rois, offset) - return output - - @staticmethod - @once_differentiable - def backward( - ctx, grad_output: Tensor - ) -> Tuple[Tensor, None, Tensor, None, None, None, None]: - input, rois, offset = ctx.saved_tensors - grad_input = grad_output.new_zeros(input.shape) - grad_offset = grad_output.new_zeros(offset.shape) - - ext_module.deform_roi_pool_backward( - grad_output, - input, - rois, - offset, - grad_input, - grad_offset, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale, - sampling_ratio=ctx.sampling_ratio, - gamma=ctx.gamma) - if grad_offset.numel() == 0: - grad_offset = None - return grad_input, None, grad_offset, None, None, None, None - - -deform_roi_pool = DeformRoIPoolFunction.apply - - -class DeformRoIPool(nn.Module): - - def __init__(self, - output_size: Tuple[int, ...], - spatial_scale: float = 1.0, - sampling_ratio: int = 0, - gamma: float = 0.1): - super().__init__() - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - self.sampling_ratio = int(sampling_ratio) - self.gamma = float(gamma) - - def forward(self, - input: Tensor, - rois: Tensor, - offset: Optional[Tensor] = None) -> Tensor: - return deform_roi_pool(input, rois, offset, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.gamma) - - -class DeformRoIPoolPack(DeformRoIPool): - - def __init__(self, - output_size: Tuple[int, ...], - output_channels: int, - deform_fc_channels: int = 1024, - spatial_scale: float = 1.0, - sampling_ratio: int = 0, - gamma: float = 0.1): - super().__init__(output_size, spatial_scale, sampling_ratio, gamma) - - self.output_channels = output_channels - self.deform_fc_channels = deform_fc_channels - - self.offset_fc = nn.Sequential( - nn.Linear( - self.output_size[0] * self.output_size[1] * - self.output_channels, self.deform_fc_channels), - nn.ReLU(inplace=True), - nn.Linear(self.deform_fc_channels, self.deform_fc_channels), - nn.ReLU(inplace=True), - nn.Linear(self.deform_fc_channels, - self.output_size[0] * self.output_size[1] * 2)) - self.offset_fc[-1].weight.data.zero_() - self.offset_fc[-1].bias.data.zero_() - - def forward(self, input: Tensor, rois: Tensor) -> Tensor: # type: ignore - assert input.size(1) == self.output_channels - x = deform_roi_pool(input, rois, None, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.gamma) - rois_num = rois.size(0) - offset = self.offset_fc(x.view(rois_num, -1)) - offset = offset.view(rois_num, 2, self.output_size[0], - self.output_size[1]) - return deform_roi_pool(input, rois, offset, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.gamma) - - -class ModulatedDeformRoIPoolPack(DeformRoIPool): - - def __init__(self, - output_size: Tuple[int, ...], - output_channels: int, - deform_fc_channels: int = 1024, - spatial_scale: float = 1.0, - sampling_ratio: int = 0, - gamma: float = 0.1): - super().__init__(output_size, spatial_scale, sampling_ratio, gamma) - - self.output_channels = output_channels - self.deform_fc_channels = deform_fc_channels - - self.offset_fc = nn.Sequential( - nn.Linear( - self.output_size[0] * 
self.output_size[1] * - self.output_channels, self.deform_fc_channels), - nn.ReLU(inplace=True), - nn.Linear(self.deform_fc_channels, self.deform_fc_channels), - nn.ReLU(inplace=True), - nn.Linear(self.deform_fc_channels, - self.output_size[0] * self.output_size[1] * 2)) - self.offset_fc[-1].weight.data.zero_() - self.offset_fc[-1].bias.data.zero_() - - self.mask_fc = nn.Sequential( - nn.Linear( - self.output_size[0] * self.output_size[1] * - self.output_channels, self.deform_fc_channels), - nn.ReLU(inplace=True), - nn.Linear(self.deform_fc_channels, - self.output_size[0] * self.output_size[1] * 1), - nn.Sigmoid()) - self.mask_fc[2].weight.data.zero_() - self.mask_fc[2].bias.data.zero_() - - def forward(self, input: Tensor, rois: Tensor) -> Tensor: # type: ignore - assert input.size(1) == self.output_channels - x = deform_roi_pool(input, rois, None, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.gamma) - rois_num = rois.size(0) - offset = self.offset_fc(x.view(rois_num, -1)) - offset = offset.view(rois_num, 2, self.output_size[0], - self.output_size[1]) - mask = self.mask_fc(x.view(rois_num, -1)) - mask = mask.view(rois_num, 1, self.output_size[0], self.output_size[1]) - d = deform_roi_pool(input, rois, offset, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.gamma) - return d * mask diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/deprecated_wrappers.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/deprecated_wrappers.py deleted file mode 100644 index 629a8033..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/deprecated_wrappers.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# This file is for backward compatibility. -# Module wrappers for empty tensor have been moved to mmcv.cnn.bricks. -import warnings - -from ..cnn.bricks.wrappers import Conv2d, ConvTranspose2d, Linear, MaxPool2d - - -class Conv2d_deprecated(Conv2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing Conv2d wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead', - DeprecationWarning) - - -class ConvTranspose2d_deprecated(ConvTranspose2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing ConvTranspose2d wrapper from "mmcv.ops" will be ' - 'deprecated in the future. Please import them from "mmcv.cnn" ' - 'instead', DeprecationWarning) - - -class MaxPool2d_deprecated(MaxPool2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing MaxPool2d wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead', - DeprecationWarning) - - -class Linear_deprecated(Linear): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing Linear wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead', - DeprecationWarning) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/diff_iou_rotated.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/diff_iou_rotated.py deleted file mode 100644 index cdc6c72f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/diff_iou_rotated.py +++ /dev/null @@ -1,301 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-# Adapted from https://github.com/lilanxiao/Rotated_IoU/blob/master/box_intersection_2d.py # noqa -# Adapted from https://github.com/lilanxiao/Rotated_IoU/blob/master/oriented_iou_loss.py # noqa -from typing import Tuple - -import torch -from torch import Tensor -from torch.autograd import Function - -from ..utils import ext_loader - -EPSILON = 1e-8 -ext_module = ext_loader.load_ext('_ext', - ['diff_iou_rotated_sort_vertices_forward']) - - -class SortVertices(Function): - - @staticmethod - def forward(ctx, vertices, mask, num_valid): - idx = ext_module.diff_iou_rotated_sort_vertices_forward( - vertices, mask, num_valid) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(idx) - return idx - - @staticmethod - def backward(ctx, gradout): - return () - - -def box_intersection(corners1: Tensor, - corners2: Tensor) -> Tuple[Tensor, Tensor]: - """Find intersection points of rectangles. - Convention: if two edges are collinear, there is no intersection point. - - Args: - corners1 (Tensor): (B, N, 4, 2) First batch of boxes. - corners2 (Tensor): (B, N, 4, 2) Second batch of boxes. - - Returns: - Tuple: - - Tensor: (B, N, 4, 4, 2) Intersections. - - Tensor: (B, N, 4, 4) Valid intersections mask. - """ - # build edges from corners - # B, N, 4, 4: Batch, Box, edge, point - line1 = torch.cat([corners1, corners1[:, :, [1, 2, 3, 0], :]], dim=3) - line2 = torch.cat([corners2, corners2[:, :, [1, 2, 3, 0], :]], dim=3) - # duplicate data to pair each edges from the boxes - # (B, N, 4, 4) -> (B, N, 4, 4, 4) : Batch, Box, edge1, edge2, point - line1_ext = line1.unsqueeze(3) - line2_ext = line2.unsqueeze(2) - x1, y1, x2, y2 = line1_ext.split([1, 1, 1, 1], dim=-1) - x3, y3, x4, y4 = line2_ext.split([1, 1, 1, 1], dim=-1) - # math: https://en.wikipedia.org/wiki/Line%E2%80%93line_intersection - numerator = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4) - denumerator_t = (x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4) - t = denumerator_t / numerator - t[numerator == .0] = -1. - mask_t = (t > 0) & (t < 1) # intersection on line segment 1 - denumerator_u = (x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3) - u = -denumerator_u / numerator - u[numerator == .0] = -1. - mask_u = (u > 0) & (u < 1) # intersection on line segment 2 - mask = mask_t * mask_u - # overwrite with EPSILON. otherwise numerically unstable - t = denumerator_t / (numerator + EPSILON) - intersections = torch.stack([x1 + t * (x2 - x1), y1 + t * (y2 - y1)], - dim=-1) - intersections = intersections * mask.float().unsqueeze(-1) - return intersections, mask - - -def box1_in_box2(corners1: Tensor, corners2: Tensor) -> Tensor: - """Check if corners of box1 lie in box2. - Convention: if a corner is exactly on the edge of the other box, - it's also a valid point. - - Args: - corners1 (Tensor): (B, N, 4, 2) First batch of boxes. - corners2 (Tensor): (B, N, 4, 2) Second batch of boxes. - - Returns: - Tensor: (B, N, 4) Intersection. 
- """ - # a, b, c, d - 4 vertices of box2 - a = corners2[:, :, 0:1, :] # (B, N, 1, 2) - b = corners2[:, :, 1:2, :] # (B, N, 1, 2) - d = corners2[:, :, 3:4, :] # (B, N, 1, 2) - # ab, am, ad - vectors between corresponding vertices - ab = b - a # (B, N, 1, 2) - am = corners1 - a # (B, N, 4, 2) - ad = d - a # (B, N, 1, 2) - prod_ab = torch.sum(ab * am, dim=-1) # (B, N, 4) - norm_ab = torch.sum(ab * ab, dim=-1) # (B, N, 1) - prod_ad = torch.sum(ad * am, dim=-1) # (B, N, 4) - norm_ad = torch.sum(ad * ad, dim=-1) # (B, N, 1) - # NOTE: the expression looks ugly but is stable if the two boxes - # are exactly the same also stable with different scale of bboxes - cond1 = (prod_ab / norm_ab > -1e-6) * (prod_ab / norm_ab < 1 + 1e-6 - ) # (B, N, 4) - cond2 = (prod_ad / norm_ad > -1e-6) * (prod_ad / norm_ad < 1 + 1e-6 - ) # (B, N, 4) - return cond1 * cond2 - - -def box_in_box(corners1: Tensor, corners2: Tensor) -> Tuple[Tensor, Tensor]: - """Check if corners of two boxes lie in each other. - - Args: - corners1 (Tensor): (B, N, 4, 2) First batch of boxes. - corners2 (Tensor): (B, N, 4, 2) Second batch of boxes. - - Returns: - Tuple: - - Tensor: (B, N, 4) True if i-th corner of box1 is in box2. - - Tensor: (B, N, 4) True if i-th corner of box2 is in box1. - """ - c1_in_2 = box1_in_box2(corners1, corners2) - c2_in_1 = box1_in_box2(corners2, corners1) - return c1_in_2, c2_in_1 - - -def build_vertices(corners1: Tensor, corners2: Tensor, c1_in_2: Tensor, - c2_in_1: Tensor, intersections: Tensor, - valid_mask: Tensor) -> Tuple[Tensor, Tensor]: - """Find vertices of intersection area. - - Args: - corners1 (Tensor): (B, N, 4, 2) First batch of boxes. - corners2 (Tensor): (B, N, 4, 2) Second batch of boxes. - c1_in_2 (Tensor): (B, N, 4) True if i-th corner of box1 is in box2. - c2_in_1 (Tensor): (B, N, 4) True if i-th corner of box2 is in box1. - intersections (Tensor): (B, N, 4, 4, 2) Intersections. - valid_mask (Tensor): (B, N, 4, 4) Valid intersections mask. - - Returns: - Tuple: - - Tensor: (B, N, 24, 2) Vertices of intersection area; - only some elements are valid. - - Tensor: (B, N, 24) Mask of valid elements in vertices. - """ - # NOTE: inter has elements equals zero and has zeros gradient - # (masked by multiplying with 0); can be used as trick - B = corners1.size()[0] - N = corners1.size()[1] - # (B, N, 4 + 4 + 16, 2) - vertices = torch.cat( - [corners1, corners2, - intersections.view([B, N, -1, 2])], dim=2) - # Bool (B, N, 4 + 4 + 16) - mask = torch.cat([c1_in_2, c2_in_1, valid_mask.view([B, N, -1])], dim=2) - return vertices, mask - - -def sort_indices(vertices: Tensor, mask: Tensor) -> Tensor: - """Sort indices. - Note: - why 9? the polygon has maximal 8 vertices. - +1 to duplicate the first element. - the index should have following structure: - (A, B, C, ... , A, X, X, X) - and X indicates the index of arbitrary elements in the last - 16 (intersections not corners) with value 0 and mask False. - (cause they have zero value and zero gradient) - - Args: - vertices (Tensor): (B, N, 24, 2) Box vertices. - mask (Tensor): (B, N, 24) Mask. - - Returns: - Tensor: (B, N, 9) Sorted indices. 
- - """ - num_valid = torch.sum(mask.int(), dim=2).int() # (B, N) - mean = torch.sum( - vertices * mask.float().unsqueeze(-1), dim=2, - keepdim=True) / num_valid.unsqueeze(-1).unsqueeze(-1) - vertices_normalized = vertices - mean # normalization makes sorting easier - return SortVertices.apply(vertices_normalized, mask, num_valid).long() - - -def calculate_area(idx_sorted: Tensor, - vertices: Tensor) -> Tuple[Tensor, Tensor]: - """Calculate area of intersection. - - Args: - idx_sorted (Tensor): (B, N, 9) Sorted vertex ids. - vertices (Tensor): (B, N, 24, 2) Vertices. - - Returns: - Tuple: - - Tensor (B, N): Area of intersection. - - Tensor: (B, N, 9, 2) Vertices of polygon with zero padding. - """ - idx_ext = idx_sorted.unsqueeze(-1).repeat([1, 1, 1, 2]) - selected = torch.gather(vertices, 2, idx_ext) - total = selected[:, :, 0:-1, 0] * selected[:, :, 1:, 1] \ - - selected[:, :, 0:-1, 1] * selected[:, :, 1:, 0] - total = torch.sum(total, dim=2) - area = torch.abs(total) / 2 - return area, selected - - -def oriented_box_intersection_2d(corners1: Tensor, - corners2: Tensor) -> Tuple[Tensor, Tensor]: - """Calculate intersection area of 2d rotated boxes. - - Args: - corners1 (Tensor): (B, N, 4, 2) First batch of boxes. - corners2 (Tensor): (B, N, 4, 2) Second batch of boxes. - - Returns: - Tuple: - - Tensor (B, N): Area of intersection. - - Tensor (B, N, 9, 2): Vertices of polygon with zero padding. - """ - intersections, valid_mask = box_intersection(corners1, corners2) - c12, c21 = box_in_box(corners1, corners2) - vertices, mask = build_vertices(corners1, corners2, c12, c21, - intersections, valid_mask) - sorted_indices = sort_indices(vertices, mask) - return calculate_area(sorted_indices, vertices) - - -def box2corners(box: Tensor) -> Tensor: - """Convert rotated 2d box coordinate to corners. - - Args: - box (Tensor): (B, N, 5) with x, y, w, h, alpha. - - Returns: - Tensor: (B, N, 4, 2) Corners. - """ - B = box.size()[0] - x, y, w, h, alpha = box.split([1, 1, 1, 1, 1], dim=-1) - x4 = torch.FloatTensor([0.5, -0.5, -0.5, 0.5]).to(box.device) - x4 = x4 * w # (B, N, 4) - y4 = torch.FloatTensor([0.5, 0.5, -0.5, -0.5]).to(box.device) - y4 = y4 * h # (B, N, 4) - corners = torch.stack([x4, y4], dim=-1) # (B, N, 4, 2) - sin = torch.sin(alpha) - cos = torch.cos(alpha) - row1 = torch.cat([cos, sin], dim=-1) - row2 = torch.cat([-sin, cos], dim=-1) # (B, N, 2) - rot_T = torch.stack([row1, row2], dim=-2) # (B, N, 2, 2) - rotated = torch.bmm(corners.view([-1, 4, 2]), rot_T.view([-1, 2, 2])) - rotated = rotated.view([B, -1, 4, 2]) # (B * N, 4, 2) -> (B, N, 4, 2) - rotated[..., 0] += x - rotated[..., 1] += y - return rotated - - -def diff_iou_rotated_2d(box1: Tensor, box2: Tensor) -> Tensor: - """Calculate differentiable iou of rotated 2d boxes. - - Args: - box1 (Tensor): (B, N, 5) First box. - box2 (Tensor): (B, N, 5) Second box. - - Returns: - Tensor: (B, N) IoU. - """ - corners1 = box2corners(box1) - corners2 = box2corners(box2) - intersection, _ = oriented_box_intersection_2d(corners1, - corners2) # (B, N) - area1 = box1[:, :, 2] * box1[:, :, 3] - area2 = box2[:, :, 2] * box2[:, :, 3] - union = area1 + area2 - intersection - iou = intersection / union - return iou - - -def diff_iou_rotated_3d(box3d1: Tensor, box3d2: Tensor) -> Tensor: - """Calculate differentiable iou of rotated 3d boxes. - - Args: - box3d1 (Tensor): (B, N, 3+3+1) First box (x,y,z,w,h,l,alpha). - box3d2 (Tensor): (B, N, 3+3+1) Second box (x,y,z,w,h,l,alpha). - - Returns: - Tensor: (B, N) IoU. 
- """ - box1 = box3d1[..., [0, 1, 3, 4, 6]] # 2d box - box2 = box3d2[..., [0, 1, 3, 4, 6]] - corners1 = box2corners(box1) - corners2 = box2corners(box2) - intersection, _ = oriented_box_intersection_2d(corners1, corners2) - zmax1 = box3d1[..., 2] + box3d1[..., 5] * 0.5 - zmin1 = box3d1[..., 2] - box3d1[..., 5] * 0.5 - zmax2 = box3d2[..., 2] + box3d2[..., 5] * 0.5 - zmin2 = box3d2[..., 2] - box3d2[..., 5] * 0.5 - z_overlap = (torch.min(zmax1, zmax2) - - torch.max(zmin1, zmin2)).clamp_(min=0.) - intersection_3d = intersection * z_overlap - volume1 = box3d1[..., 3] * box3d1[..., 4] * box3d1[..., 5] - volume2 = box3d2[..., 3] * box3d2[..., 4] * box3d2[..., 5] - union_3d = volume1 + volume2 - intersection_3d - return intersection_3d / union_3d diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/focal_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/focal_loss.py deleted file mode 100644 index 3b203fc1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/focal_loss.py +++ /dev/null @@ -1,235 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Optional, Union - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'sigmoid_focal_loss_forward', 'sigmoid_focal_loss_backward', - 'softmax_focal_loss_forward', 'softmax_focal_loss_backward' -]) - - -class SigmoidFocalLossFunction(Function): - - @staticmethod - def symbolic(g, input: torch.Tensor, target: torch.LongTensor, - gamma: float, alpha: float, weight: torch.Tensor, - reduction: str): - return g.op( - 'mmcv::MMCVSigmoidFocalLoss', - input, - target, - gamma_f=gamma, - alpha_f=alpha, - weight_f=weight, - reduction_s=reduction) - - @staticmethod - def forward(ctx, - input: torch.Tensor, - target: Union[torch.LongTensor, torch.cuda.LongTensor], - gamma: float = 2.0, - alpha: float = 0.25, - weight: Optional[torch.Tensor] = None, - reduction: str = 'mean') -> torch.Tensor: - - assert isinstance( - target, (torch.Tensor, torch.LongTensor, torch.cuda.LongTensor)) - assert input.dim() == 2 - assert target.dim() == 1 - assert input.size(0) == target.size(0) - if weight is None: - weight = input.new_empty(0) - else: - assert weight.dim() == 1 - assert input.size(1) == weight.size(0) - ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2} - assert reduction in ctx.reduction_dict.keys() - - ctx.gamma = float(gamma) - ctx.alpha = float(alpha) - ctx.reduction = ctx.reduction_dict[reduction] - - output = input.new_zeros(input.size()) - - ext_module.sigmoid_focal_loss_forward( - input, target, weight, output, gamma=ctx.gamma, alpha=ctx.alpha) - if ctx.reduction == ctx.reduction_dict['mean']: - output = output.sum() / input.size(0) - elif ctx.reduction == ctx.reduction_dict['sum']: - output = output.sum() - ctx.save_for_backward(input, target, weight) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output: torch.Tensor) -> tuple: - input, target, weight = ctx.saved_tensors - - grad_input = input.new_zeros(input.size()) - - ext_module.sigmoid_focal_loss_backward( - input, - target, - weight, - grad_input, - gamma=ctx.gamma, - alpha=ctx.alpha) - - grad_input *= grad_output - if ctx.reduction == ctx.reduction_dict['mean']: - grad_input /= input.size(0) - return grad_input, None, None, None, None, None - - -sigmoid_focal_loss = SigmoidFocalLossFunction.apply - - -class 
SigmoidFocalLoss(nn.Module): - - def __init__(self, - gamma: float, - alpha: float, - weight: Optional[torch.Tensor] = None, - reduction: str = 'mean'): - super().__init__() - self.gamma = gamma - self.alpha = alpha - self.register_buffer('weight', weight) - self.reduction = reduction - - def forward( - self, - input: torch.Tensor, - target: Union[torch.LongTensor, torch.cuda.LongTensor], - ) -> torch.Tensor: - return sigmoid_focal_loss(input, target, self.gamma, self.alpha, - self.weight, self.reduction) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(gamma={self.gamma}, ' - s += f'alpha={self.alpha}, ' - s += f'reduction={self.reduction})' - return s - - -class SoftmaxFocalLossFunction(Function): - - @staticmethod - def symbolic(g, input: torch.Tensor, target: torch.LongTensor, - gamma: float, alpha: float, weight: torch.Tensor, - reduction: str): - return g.op( - 'mmcv::MMCVSoftmaxFocalLoss', - input, - target, - gamma_f=gamma, - alpha_f=alpha, - weight_f=weight, - reduction_s=reduction) - - @staticmethod - def forward(ctx, - input: torch.Tensor, - target: Union[torch.LongTensor, torch.cuda.LongTensor], - gamma: float = 2.0, - alpha: float = 0.25, - weight: Optional[torch.Tensor] = None, - reduction='mean') -> torch.Tensor: - - assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor)) - assert input.dim() == 2 - assert target.dim() == 1 - assert input.size(0) == target.size(0) - if weight is None: - weight = input.new_empty(0) - else: - assert weight.dim() == 1 - assert input.size(1) == weight.size(0) - ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2} - assert reduction in ctx.reduction_dict.keys() - - ctx.gamma = float(gamma) - ctx.alpha = float(alpha) - ctx.reduction = ctx.reduction_dict[reduction] - - channel_stats, _ = torch.max(input, dim=1) - input_softmax = input - channel_stats.unsqueeze(1).expand_as(input) - input_softmax.exp_() - - channel_stats = input_softmax.sum(dim=1) - input_softmax /= channel_stats.unsqueeze(1).expand_as(input) - - output = input.new_zeros(input.size(0)) - ext_module.softmax_focal_loss_forward( - input_softmax, - target, - weight, - output, - gamma=ctx.gamma, - alpha=ctx.alpha) - - if ctx.reduction == ctx.reduction_dict['mean']: - output = output.sum() / input.size(0) - elif ctx.reduction == ctx.reduction_dict['sum']: - output = output.sum() - ctx.save_for_backward(input_softmax, target, weight) - return output - - @staticmethod - def backward(ctx, grad_output: torch.Tensor) -> tuple: - input_softmax, target, weight = ctx.saved_tensors - buff = input_softmax.new_zeros(input_softmax.size(0)) - grad_input = input_softmax.new_zeros(input_softmax.size()) - - ext_module.softmax_focal_loss_backward( - input_softmax, - target, - weight, - buff, - grad_input, - gamma=ctx.gamma, - alpha=ctx.alpha) - - grad_input *= grad_output - if ctx.reduction == ctx.reduction_dict['mean']: - grad_input /= input_softmax.size(0) - return grad_input, None, None, None, None, None - - -softmax_focal_loss = SoftmaxFocalLossFunction.apply - - -class SoftmaxFocalLoss(nn.Module): - - def __init__(self, - gamma: float, - alpha: float, - weight: Optional[torch.Tensor] = None, - reduction: str = 'mean'): - super().__init__() - self.gamma = gamma - self.alpha = alpha - self.register_buffer('weight', weight) - self.reduction = reduction - - def forward( - self, - input: torch.Tensor, - target: Union[torch.LongTensor, torch.cuda.LongTensor], - ) -> torch.Tensor: - return softmax_focal_loss(input, target, self.gamma, self.alpha, - self.weight, self.reduction) - 
- def __repr__(self): - s = self.__class__.__name__ - s += f'(gamma={self.gamma}, ' - s += f'alpha={self.alpha}, ' - s += f'reduction={self.reduction})' - return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/furthest_point_sample.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/furthest_point_sample.py deleted file mode 100644 index 22b1a304..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/furthest_point_sample.py +++ /dev/null @@ -1,84 +0,0 @@ -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'furthest_point_sampling_forward', - 'furthest_point_sampling_with_dist_forward' -]) - - -class FurthestPointSampling(Function): - """Uses iterative furthest point sampling to select a set of features whose - corresponding points have the furthest distance.""" - - @staticmethod - def forward(ctx, points_xyz: torch.Tensor, - num_points: int) -> torch.Tensor: - """ - Args: - points_xyz (torch.Tensor): (B, N, 3) where N > num_points. - num_points (int): Number of points in the sampled set. - - Returns: - torch.Tensor: (B, num_points) indices of the sampled points. - """ - assert points_xyz.is_contiguous() - - B, N = points_xyz.size()[:2] - output = torch.cuda.IntTensor(B, num_points) - temp = torch.cuda.FloatTensor(B, N).fill_(1e10) - - ext_module.furthest_point_sampling_forward( - points_xyz, - temp, - output, - b=B, - n=N, - m=num_points, - ) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(output) - return output - - @staticmethod - def backward(xyz, a=None): - return None, None - - -class FurthestPointSamplingWithDist(Function): - """Uses iterative furthest point sampling to select a set of features whose - corresponding points have the furthest distance.""" - - @staticmethod - def forward(ctx, points_dist: torch.Tensor, - num_points: int) -> torch.Tensor: - """ - Args: - points_dist (torch.Tensor): (B, N, N) Distance between each point - pair. - num_points (int): Number of points in the sampled set. - - Returns: - torch.Tensor: (B, num_points) indices of the sampled points. - """ - assert points_dist.is_contiguous() - - B, N, _ = points_dist.size() - output = points_dist.new_zeros([B, num_points], dtype=torch.int32) - temp = points_dist.new_zeros([B, N]).fill_(1e10) - - ext_module.furthest_point_sampling_with_dist_forward( - points_dist, temp, output, b=B, n=N, m=num_points) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(output) - return output - - @staticmethod - def backward(xyz, a=None): - return None, None - - -furthest_point_sample = FurthestPointSampling.apply -furthest_point_sample_with_dist = FurthestPointSamplingWithDist.apply diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/fused_bias_leakyrelu.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/fused_bias_leakyrelu.py deleted file mode 100644 index ee0b9ea6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/fused_bias_leakyrelu.py +++ /dev/null @@ -1,270 +0,0 @@ -# modified from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_act.py # noqa:E501 - -# Copyright (c) 2021, NVIDIA Corporation. All rights reserved. -# NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -# Augmentation (ADA) -# ======================================================================= - -# 1. Definitions - -# "Licensor" means any person or entity that distributes its Work. 
- -# "Software" means the original work of authorship made available under -# this License. - -# "Work" means the Software and any additions to or derivative works of -# the Software that are made available under this License. - -# The terms "reproduce," "reproduction," "derivative works," and -# "distribution" have the meaning as provided under U.S. copyright law; -# provided, however, that for the purposes of this License, derivative -# works shall not include works that remain separable from, or merely -# link (or bind by name) to the interfaces of, the Work. - -# Works, including the Software, are "made available" under this License -# by including in or with the Work either (a) a copyright notice -# referencing the applicability of this License to the Work, or (b) a -# copy of this License. - -# 2. License Grants - -# 2.1 Copyright Grant. Subject to the terms and conditions of this -# License, each Licensor grants to you a perpetual, worldwide, -# non-exclusive, royalty-free, copyright license to reproduce, -# prepare derivative works of, publicly display, publicly perform, -# sublicense and distribute its Work and any resulting derivative -# works in any form. - -# 3. Limitations - -# 3.1 Redistribution. You may reproduce or distribute the Work only -# if (a) you do so under this License, (b) you include a complete -# copy of this License with your distribution, and (c) you retain -# without modification any copyright, patent, trademark, or -# attribution notices that are present in the Work. - -# 3.2 Derivative Works. You may specify that additional or different -# terms apply to the use, reproduction, and distribution of your -# derivative works of the Work ("Your Terms") only if (a) Your Terms -# provide that the use limitation in Section 3.3 applies to your -# derivative works, and (b) you identify the specific derivative -# works that are subject to Your Terms. Notwithstanding Your Terms, -# this License (including the redistribution requirements in Section -# 3.1) will continue to apply to the Work itself. - -# 3.3 Use Limitation. The Work and any derivative works thereof only -# may be used or intended for use non-commercially. Notwithstanding -# the foregoing, NVIDIA and its affiliates may use the Work and any -# derivative works commercially. As used herein, "non-commercially" -# means for research or evaluation purposes only. - -# 3.4 Patent Claims. If you bring or threaten to bring a patent claim -# against any Licensor (including any claim, cross-claim or -# counterclaim in a lawsuit) to enforce any patents that you allege -# are infringed by any Work, then your rights under this License from -# such Licensor (including the grant in Section 2.1) will terminate -# immediately. - -# 3.5 Trademarks. This License does not grant any rights to use any -# Licensor’s or its affiliates’ names, logos, or trademarks, except -# as necessary to reproduce the notices described in this License. - -# 3.6 Termination. If you violate any term of this License, then your -# rights under this License (including the grant in Section 2.1) will -# terminate immediately. - -# 4. Disclaimer of Warranty. - -# THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -# NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -# THIS LICENSE. - -# 5. Limitation of Liability. 
- -# EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -# THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -# SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -# INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -# OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -# (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -# LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -# COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -# THE POSSIBILITY OF SUCH DAMAGES. - -# ======================================================================= - -import torch -import torch.nn.functional as F -from torch import nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['fused_bias_leakyrelu']) - - -class FusedBiasLeakyReLUFunctionBackward(Function): - """Calculate second order deviation. - - This function is to compute the second order deviation for the fused leaky - relu operation. - """ - - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = ext_module.fused_bias_leakyrelu( - grad_output, - empty, - out, - act=3, - grad=1, - alpha=negative_slope, - scale=scale) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - - # The second order deviation, in fact, contains two parts, while the - # the first part is zero. Thus, we direct consider the second part - # which is similar with the first order deviation in implementation. - gradgrad_out = ext_module.fused_bias_leakyrelu( - gradgrad_input, - gradgrad_bias.to(out.dtype), - out, - act=3, - grad=1, - alpha=ctx.negative_slope, - scale=ctx.scale) - - return gradgrad_out, None, None, None - - -class FusedBiasLeakyReLUFunction(Function): - - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - - out = ext_module.fused_bias_leakyrelu( - input, - bias, - empty, - act=3, - grad=0, - alpha=negative_slope, - scale=scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedBiasLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale) - - return grad_input, grad_bias, None, None - - -class FusedBiasLeakyReLU(nn.Module): - r"""Fused bias leaky ReLU. - - This function is introduced in the StyleGAN2: - `Analyzing and Improving the Image Quality of StyleGAN - `_ - - The bias term comes from the convolution operation. In addition, to keep - the variance of the feature map or gradients unchanged, they also adopt a - scale similarly with Kaiming initialization. However, since the - :math:`1+{alpha}^2` is too small, we can just ignore it. Therefore, the - final scale is just :math:`\sqrt{2}`. Of course, you may change it with - your own scale. - - TODO: Implement the CPU version. - - Args: - channel (int): The channel number of the feature map. - negative_slope (float, optional): Same as nn.LeakyRelu. - Defaults to 0.2. 
- scale (float, optional): A scalar to adjust the variance of the feature - map. Defaults to 2**0.5. - """ - - def __init__(self, num_channels, negative_slope=0.2, scale=2**0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(num_channels)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_bias_leakyrelu(input, self.bias, self.negative_slope, - self.scale) - - -def fused_bias_leakyrelu(input, bias, negative_slope=0.2, scale=2**0.5): - r"""Fused bias leaky ReLU function. - - This function is introduced in the StyleGAN2: - `Analyzing and Improving the Image Quality of StyleGAN - `_ - - The bias term comes from the convolution operation. In addition, to keep - the variance of the feature map or gradients unchanged, they also adopt a - scale similarly with Kaiming initialization. However, since the - :math:`1+{alpha}^2` is too small, we can just ignore it. Therefore, the - final scale is just :math:`\sqrt{2}`. Of course, you may change it with - your own scale. - - Args: - input (torch.Tensor): Input feature map. - bias (nn.Parameter): The bias from convolution operation. - negative_slope (float, optional): Same as nn.LeakyRelu. - Defaults to 0.2. - scale (float, optional): A scalar to adjust the variance of the feature - map. Defaults to 2**0.5. - - Returns: - torch.Tensor: Feature map after non-linear activation. - """ - - if not input.is_cuda: - return bias_leakyrelu_ref(input, bias, negative_slope, scale) - - return FusedBiasLeakyReLUFunction.apply(input, bias.to(input.dtype), - negative_slope, scale) - - -def bias_leakyrelu_ref(x, bias, negative_slope=0.2, scale=2**0.5): - - if bias is not None: - assert bias.ndim == 1 - assert bias.shape[0] == x.shape[1] - x = x + bias.reshape([-1 if i == 1 else 1 for i in range(x.ndim)]) - - x = F.leaky_relu(x, negative_slope) - if scale != 1: - x = x * scale - - return x diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/gather_points.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/gather_points.py deleted file mode 100644 index dc0752ce..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/gather_points.py +++ /dev/null @@ -1,57 +0,0 @@ -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['gather_points_forward', 'gather_points_backward']) - - -class GatherPoints(Function): - """Gather points with given index.""" - - @staticmethod - def forward(ctx, features: torch.Tensor, - indices: torch.Tensor) -> torch.Tensor: - """ - Args: - features (torch.Tensor): (B, C, N) features to gather. - indices (torch.Tensor): (B, M) where M is the number of points. - - Returns: - torch.Tensor: (B, C, M) where M is the number of points. 
- """ - assert features.is_contiguous() - assert indices.is_contiguous() - - B, npoint = indices.size() - _, C, N = features.size() - output = features.new_zeros((B, C, npoint)) - - ext_module.gather_points_forward( - features, indices, output, b=B, c=C, n=N, npoints=npoint) - - ctx.for_backwards = (indices, C, N) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(indices) - return output - - @staticmethod - def backward(ctx, grad_out): - idx, C, N = ctx.for_backwards - B, npoint = idx.size() - - grad_features = grad_out.new_zeros((B, C, N)) - grad_out_data = grad_out.data.contiguous() - ext_module.gather_points_backward( - grad_out_data, - idx, - grad_features.data, - b=B, - c=C, - n=N, - npoints=npoint) - return grad_features, None - - -gather_points = GatherPoints.apply diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/group_points.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/group_points.py deleted file mode 100644 index 80c7d294..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/group_points.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple - -import torch -from torch import nn as nn -from torch.autograd import Function - -from ..utils import ext_loader -from .ball_query import ball_query -from .knn import knn - -ext_module = ext_loader.load_ext( - '_ext', ['group_points_forward', 'group_points_backward']) - - -class QueryAndGroup(nn.Module): - """Groups points with a ball query of radius. - - Args: - max_radius (float): The maximum radius of the balls. - If None is given, we will use kNN sampling instead of ball query. - sample_num (int): Maximum number of features to gather in the ball. - min_radius (float, optional): The minimum radius of the balls. - Default: 0. - use_xyz (bool, optional): Whether to use xyz. - Default: True. - return_grouped_xyz (bool, optional): Whether to return grouped xyz. - Default: False. - normalize_xyz (bool, optional): Whether to normalize xyz. - Default: False. - uniform_sample (bool, optional): Whether to sample uniformly. - Default: False - return_unique_cnt (bool, optional): Whether to return the count of - unique samples. Default: False. - return_grouped_idx (bool, optional): Whether to return grouped idx. - Default: False. - """ - - def __init__(self, - max_radius, - sample_num, - min_radius=0, - use_xyz=True, - return_grouped_xyz=False, - normalize_xyz=False, - uniform_sample=False, - return_unique_cnt=False, - return_grouped_idx=False): - super().__init__() - self.max_radius = max_radius - self.min_radius = min_radius - self.sample_num = sample_num - self.use_xyz = use_xyz - self.return_grouped_xyz = return_grouped_xyz - self.normalize_xyz = normalize_xyz - self.uniform_sample = uniform_sample - self.return_unique_cnt = return_unique_cnt - self.return_grouped_idx = return_grouped_idx - if self.return_unique_cnt: - assert self.uniform_sample, \ - 'uniform_sample should be True when ' \ - 'returning the count of unique samples' - if self.max_radius is None: - assert not self.normalize_xyz, \ - 'can not normalize grouped xyz when max_radius is None' - - def forward(self, points_xyz, center_xyz, features=None): - """ - Args: - points_xyz (torch.Tensor): (B, N, 3) xyz coordinates of the - points. - center_xyz (torch.Tensor): (B, npoint, 3) coordinates of the - centriods. - features (torch.Tensor): (B, C, N) The features of grouped - points. 
- - Returns: - torch.Tensor: (B, 3 + C, npoint, sample_num) Grouped - concatenated coordinates and features of points. - """ - # if self.max_radius is None, we will perform kNN instead of ball query - # idx is of shape [B, npoint, sample_num] - if self.max_radius is None: - idx = knn(self.sample_num, points_xyz, center_xyz, False) - idx = idx.transpose(1, 2).contiguous() - else: - idx = ball_query(self.min_radius, self.max_radius, self.sample_num, - points_xyz, center_xyz) - - if self.uniform_sample: - unique_cnt = torch.zeros((idx.shape[0], idx.shape[1])) - for i_batch in range(idx.shape[0]): - for i_region in range(idx.shape[1]): - unique_ind = torch.unique(idx[i_batch, i_region, :]) - num_unique = unique_ind.shape[0] - unique_cnt[i_batch, i_region] = num_unique - sample_ind = torch.randint( - 0, - num_unique, (self.sample_num - num_unique, ), - dtype=torch.long) - all_ind = torch.cat((unique_ind, unique_ind[sample_ind])) - idx[i_batch, i_region, :] = all_ind - - xyz_trans = points_xyz.transpose(1, 2).contiguous() - # (B, 3, npoint, sample_num) - grouped_xyz = grouping_operation(xyz_trans, idx) - grouped_xyz_diff = grouped_xyz - \ - center_xyz.transpose(1, 2).unsqueeze(-1) # relative offsets - if self.normalize_xyz: - grouped_xyz_diff /= self.max_radius - - if features is not None: - grouped_features = grouping_operation(features, idx) - if self.use_xyz: - # (B, C + 3, npoint, sample_num) - new_features = torch.cat([grouped_xyz_diff, grouped_features], - dim=1) - else: - new_features = grouped_features - else: - assert (self.use_xyz - ), 'Cannot have not features and not use xyz as a feature!' - new_features = grouped_xyz_diff - - ret = [new_features] - if self.return_grouped_xyz: - ret.append(grouped_xyz) - if self.return_unique_cnt: - ret.append(unique_cnt) - if self.return_grouped_idx: - ret.append(idx) - if len(ret) == 1: - return ret[0] - else: - return tuple(ret) - - -class GroupAll(nn.Module): - """Group xyz with feature. - - Args: - use_xyz (bool): Whether to use xyz. - """ - - def __init__(self, use_xyz: bool = True): - super().__init__() - self.use_xyz = use_xyz - - def forward(self, - xyz: torch.Tensor, - new_xyz: torch.Tensor, - features: torch.Tensor = None): - """ - Args: - xyz (Tensor): (B, N, 3) xyz coordinates of the features. - new_xyz (Tensor): new xyz coordinates of the features. - features (Tensor): (B, C, N) features to group. - - Returns: - Tensor: (B, C + 3, 1, N) Grouped feature. - """ - grouped_xyz = xyz.transpose(1, 2).unsqueeze(2) - if features is not None: - grouped_features = features.unsqueeze(2) - if self.use_xyz: - # (B, 3 + C, 1, N) - new_features = torch.cat([grouped_xyz, grouped_features], - dim=1) - else: - new_features = grouped_features - else: - new_features = grouped_xyz - - return new_features - - -class GroupingOperation(Function): - """Group feature with given index.""" - - @staticmethod - def forward(ctx, features: torch.Tensor, - indices: torch.Tensor) -> torch.Tensor: - """ - Args: - features (Tensor): (B, C, N) tensor of features to group. - indices (Tensor): (B, npoint, nsample) the indices of - features to group with. - - Returns: - Tensor: (B, C, npoint, nsample) Grouped features. 
- """ - features = features.contiguous() - indices = indices.contiguous() - - B, nfeatures, nsample = indices.size() - _, C, N = features.size() - output = torch.cuda.FloatTensor(B, C, nfeatures, nsample) - - ext_module.group_points_forward( - features, - indices, - output, - b=B, - c=C, - n=N, - npoints=nfeatures, - nsample=nsample) - - ctx.for_backwards = (indices, N) - return output - - @staticmethod - def backward(ctx, - grad_out: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Args: - grad_out (Tensor): (B, C, npoint, nsample) tensor of the gradients - of the output from forward. - - Returns: - Tensor: (B, C, N) gradient of the features. - """ - idx, N = ctx.for_backwards - - B, C, npoint, nsample = grad_out.size() - grad_features = torch.cuda.FloatTensor(B, C, N).zero_() - - grad_out_data = grad_out.data.contiguous() - ext_module.group_points_backward( - grad_out_data, - idx, - grad_features.data, - b=B, - c=C, - n=N, - npoints=npoint, - nsample=nsample) - return grad_features, None - - -grouping_operation = GroupingOperation.apply diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/info.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/info.py deleted file mode 100644 index 29f2e559..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/info.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import glob -import os - -import torch - -if torch.__version__ == 'parrots': - import parrots - - def get_compiler_version(): - return 'GCC ' + parrots.version.compiler - - def get_compiling_cuda_version(): - return parrots.version.cuda -else: - from ..utils import ext_loader - ext_module = ext_loader.load_ext( - '_ext', ['get_compiler_version', 'get_compiling_cuda_version']) - - def get_compiler_version(): - return ext_module.get_compiler_version() - - def get_compiling_cuda_version(): - return ext_module.get_compiling_cuda_version() - - -def get_onnxruntime_op_path(): - wildcard = os.path.join( - os.path.abspath(os.path.dirname(os.path.dirname(__file__))), - '_ext_ort.*.so') - - paths = glob.glob(wildcard) - if len(paths) > 0: - return paths[0] - else: - return '' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/iou3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/iou3d.py deleted file mode 100755 index dc45ee94..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/iou3d.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Optional - -import torch -from torch import Tensor - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'iou3d_boxes_overlap_bev_forward', 'iou3d_nms3d_forward', - 'iou3d_nms3d_normal_forward' -]) - - -def boxes_overlap_bev(boxes_a: Tensor, boxes_b: Tensor) -> Tensor: - """Calculate boxes BEV overlap. - - Args: - boxes_a (torch.Tensor): Input boxes a with shape (M, 7). - boxes_b (torch.Tensor): Input boxes b with shape (N, 7). - - Returns: - torch.Tensor: BEV overlap result with shape (M, N). - """ - ans_overlap = boxes_a.new_zeros( - torch.Size((boxes_a.shape[0], boxes_b.shape[0]))) - ext_module.iou3d_boxes_overlap_bev_forward(boxes_a.contiguous(), - boxes_b.contiguous(), - ans_overlap) - - return ans_overlap - - -def boxes_iou3d(boxes_a: Tensor, boxes_b: Tensor) -> Tensor: - """Calculate boxes 3D IoU. - - Args: - boxes_a (torch.Tensor): Input boxes a with shape (M, 7). - boxes_b (torch.Tensor): Input boxes b with shape (N, 7). 
- - Returns: - torch.Tensor: 3D IoU result with shape (M, N). - """ - assert boxes_a.shape[1] == boxes_b.shape[1] == 7,\ - 'Input boxes shape should be (N, 7)' - - boxes_a_height_max = (boxes_a[:, 2] + boxes_a[:, 5] / 2).view(-1, 1) - boxes_a_height_min = (boxes_a[:, 2] - boxes_a[:, 5] / 2).view(-1, 1) - boxes_b_height_max = (boxes_b[:, 2] + boxes_b[:, 5] / 2).view(1, -1) - boxes_b_height_min = (boxes_b[:, 2] - boxes_b[:, 5] / 2).view(1, -1) - - overlaps_bev = boxes_a.new_zeros( - torch.Size((boxes_a.shape[0], boxes_b.shape[0]))) - ext_module.iou3d_boxes_overlap_bev_forward(boxes_a.contiguous(), - boxes_b.contiguous(), - overlaps_bev) - - max_of_min = torch.max(boxes_a_height_min, boxes_b_height_min) - min_of_max = torch.min(boxes_a_height_max, boxes_b_height_max) - overlaps_h = torch.clamp(min_of_max - max_of_min, min=0) - overlaps_3d = overlaps_bev * overlaps_h - vol_a = (boxes_a[:, 3] * boxes_a[:, 4] * boxes_a[:, 5]).view(-1, 1) - vol_b = (boxes_b[:, 3] * boxes_b[:, 4] * boxes_b[:, 5]).view(1, -1) - iou3d = overlaps_3d / torch.clamp(vol_a + vol_b - overlaps_3d, min=1e-6) - return iou3d - - -def nms3d(boxes: Tensor, scores: Tensor, iou_threshold: float) -> Tensor: - """3D NMS function GPU implementation (for BEV boxes). - - Args: - boxes (torch.Tensor): Input boxes with the shape of (N, 7) - ([x, y, z, dx, dy, dz, heading]). - scores (torch.Tensor): Scores of boxes with the shape of (N). - iou_threshold (float): Overlap threshold of NMS. - - Returns: - torch.Tensor: Indexes after NMS. - """ - assert boxes.size(1) == 7, 'Input boxes shape should be (N, 7)' - order = scores.sort(0, descending=True)[1] - boxes = boxes[order].contiguous() - - keep = torch.zeros(boxes.size(0), dtype=torch.long) - num_out = torch.zeros(size=(), dtype=torch.long) - ext_module.iou3d_nms3d_forward( - boxes, keep, num_out, nms_overlap_thresh=iou_threshold) - keep = order[keep[:num_out].cuda(boxes.device)].contiguous() - return keep - - -def nms3d_normal(boxes: Tensor, scores: Tensor, - iou_threshold: float) -> Tensor: - """Normal 3D NMS function GPU implementation. The overlap of two boxes for - IoU calculation is defined as the exact overlapping area of the two boxes - WITH their yaw angle set to 0. - - Args: - boxes (torch.Tensor): Input boxes with shape (N, 7). - ([x, y, z, dx, dy, dz, heading]). - scores (torch.Tensor): Scores of predicted boxes with shape (N). - iou_threshold (float): Overlap threshold of NMS. - - Returns: - torch.Tensor: Remaining indices with scores in descending order. - """ - assert boxes.shape[1] == 7, 'Input boxes shape should be (N, 7)' - order = scores.sort(0, descending=True)[1] - boxes = boxes[order].contiguous() - - keep = torch.zeros(boxes.size(0), dtype=torch.long) - num_out = torch.zeros(size=(), dtype=torch.long) - ext_module.iou3d_nms3d_normal_forward( - boxes, keep, num_out, nms_overlap_thresh=iou_threshold) - return order[keep[:num_out].cuda(boxes.device)].contiguous() - - -def _xyxyr2xywhr(boxes: Tensor) -> Tensor: - """Convert [x1, y1, x2, y2, heading] box to [x, y, dx, dy, heading] box. - - Args: - box (torch.Tensor): Input boxes with shape (N, 5). - - Returns: - torch.Tensor: Converted boxes with shape (N, 7). 
- """ - warnings.warn( - 'This function is deprecated and will be removed in the future.', - DeprecationWarning) - return torch.stack( - ((boxes[:, 0] + boxes[:, 2]) / 2, (boxes[:, 1] + boxes[:, 3]) / 2, - boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1], boxes[:, 4]), - dim=-1) - - -def boxes_iou_bev(boxes_a: Tensor, boxes_b: Tensor) -> Tensor: - """Calculate boxes IoU in the Bird's Eye View. - - Args: - boxes_a (torch.Tensor): Input boxes a with shape (M, 5) - ([x1, y1, x2, y2, ry]). - boxes_b (torch.Tensor): Input boxes b with shape (N, 5) - ([x1, y1, x2, y2, ry]). - - Returns: - torch.Tensor: IoU result with shape (M, N). - """ - from .box_iou_rotated import box_iou_rotated - - warnings.warn( - '`iou3d.boxes_iou_bev` is deprecated and will be removed in' - ' the future. Please, use `box_iou_rotated.box_iou_rotated`.', - DeprecationWarning) - - return box_iou_rotated(_xyxyr2xywhr(boxes_a), _xyxyr2xywhr(boxes_b)) - - -def nms_bev(boxes: Tensor, - scores: Tensor, - thresh: float, - pre_max_size: Optional[int] = None, - post_max_size: Optional[int] = None) -> Tensor: - """NMS function GPU implementation (for BEV boxes). - - The overlap of two - boxes for IoU calculation is defined as the exact overlapping area of the - two boxes. In this function, one can also set ``pre_max_size`` and - ``post_max_size``. - Args: - boxes (torch.Tensor): Input boxes with the shape of (N, 5) - ([x1, y1, x2, y2, ry]). - scores (torch.Tensor): Scores of boxes with the shape of (N,). - thresh (float): Overlap threshold of NMS. - pre_max_size (int, optional): Max size of boxes before NMS. - Default: None. - post_max_size (int, optional): Max size of boxes after NMS. - Default: None. - Returns: - torch.Tensor: Indexes after NMS. - """ - from .nms import nms_rotated - - warnings.warn( - '`iou3d.nms_bev` is deprecated and will be removed in' - ' the future. Please, use `nms.nms_rotated`.', DeprecationWarning) - assert boxes.size(1) == 5, 'Input boxes shape should be (N, 5)' - order = scores.sort(0, descending=True)[1] - - if pre_max_size is not None: - order = order[:pre_max_size] - boxes = _xyxyr2xywhr(boxes)[order] - scores = scores[order] - - keep = nms_rotated(boxes, scores, thresh)[1] - keep = order[keep] - - if post_max_size is not None: - keep = keep[:post_max_size] - return keep - - -def nms_normal_bev(boxes: Tensor, scores: Tensor, thresh: float) -> Tensor: - """Normal NMS function GPU implementation (for BEV boxes). - - The overlap of - two boxes for IoU calculation is defined as the exact overlapping area of - the two boxes WITH their yaw angle set to 0. - Args: - boxes (torch.Tensor): Input boxes with shape (N, 5) - ([x1, y1, x2, y2, ry]). - scores (torch.Tensor): Scores of predicted boxes with shape (N,). - thresh (float): Overlap threshold of NMS. - Returns: - torch.Tensor: Remaining indices with scores in descending order. - """ - from .nms import nms - - warnings.warn( - '`iou3d.nms_normal_bev` is deprecated and will be removed in' - ' the future. 
Please, use `nms.nms`.', DeprecationWarning) - assert boxes.shape[1] == 5, 'Input boxes shape should be (N, 5)' - - return nms(boxes[:, :-1], scores, thresh)[1] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/knn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/knn.py deleted file mode 100644 index ad81c52d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/knn.py +++ /dev/null @@ -1,78 +0,0 @@ -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['knn_forward']) - - -class KNN(Function): - r"""KNN (CUDA) based on heap data structure. - - Modified from `PAConv `_. - - Find k-nearest points. - """ - - @staticmethod - def forward(ctx, - k: int, - xyz: torch.Tensor, - center_xyz: torch.Tensor = None, - transposed: bool = False) -> torch.Tensor: - """ - Args: - k (int): number of nearest neighbors. - xyz (torch.Tensor): (B, N, 3) if transposed == False, else - (B, 3, N). xyz coordinates of the features. - center_xyz (torch.Tensor, optional): (B, npoint, 3) if transposed - is False, else (B, 3, npoint). centers of the knn query. - Default: None. - transposed (bool, optional): whether the input tensors are - transposed. Should not explicitly use this keyword when - calling knn (=KNN.apply), just add the fourth param. - Default: False. - - Returns: - torch.Tensor: (B, k, npoint) tensor with the indices of the - features that form k-nearest neighbours. - """ - assert (k > 0) & (k < 100), 'k should be in range(0, 100)' - - if center_xyz is None: - center_xyz = xyz - - if transposed: - xyz = xyz.transpose(2, 1).contiguous() - center_xyz = center_xyz.transpose(2, 1).contiguous() - - assert xyz.is_contiguous() # [B, N, 3] - assert center_xyz.is_contiguous() # [B, npoint, 3] - - center_xyz_device = center_xyz.get_device() - assert center_xyz_device == xyz.get_device(), \ - 'center_xyz and xyz should be put on the same device' - if torch.cuda.current_device() != center_xyz_device: - torch.cuda.set_device(center_xyz_device) - - B, npoint, _ = center_xyz.shape - N = xyz.shape[1] - - idx = center_xyz.new_zeros((B, npoint, k)).int() - dist2 = center_xyz.new_zeros((B, npoint, k)).float() - - ext_module.knn_forward( - xyz, center_xyz, idx, dist2, b=B, n=N, m=npoint, nsample=k) - # idx shape to [B, k, npoint] - idx = idx.transpose(2, 1).contiguous() - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(idx) - return idx - - @staticmethod - def backward(ctx, a=None): - return None, None, None - - -knn = KNN.apply diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/masked_conv.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/masked_conv.py deleted file mode 100644 index 9f0d3ca9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/masked_conv.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['masked_im2col_forward', 'masked_col2im_forward']) - - -class MaskedConv2dFunction(Function): - - @staticmethod - def symbolic(g, features, mask, weight, bias, padding, stride): - return g.op( - 'mmcv::MMCVMaskedConv2d', - features, - mask, - weight, - bias, - padding_i=padding, - stride_i=stride) - - @staticmethod - def forward(ctx, features, mask, weight, bias, padding=0, stride=1): - assert mask.dim() == 3 and mask.size(0) == 1 - assert features.dim() == 4 and features.size(0) == 1 - assert features.size()[2:] == mask.size()[1:] - pad_h, pad_w = _pair(padding) - stride_h, stride_w = _pair(stride) - if stride_h != 1 or stride_w != 1: - raise ValueError( - 'Stride could not only be 1 in masked_conv2d currently.') - out_channel, in_channel, kernel_h, kernel_w = weight.size() - - batch_size = features.size(0) - out_h = int( - math.floor((features.size(2) + 2 * pad_h - - (kernel_h - 1) - 1) / stride_h + 1)) - out_w = int( - math.floor((features.size(3) + 2 * pad_w - - (kernel_h - 1) - 1) / stride_w + 1)) - mask_inds = torch.nonzero(mask[0] > 0, as_tuple=False) - output = features.new_zeros(batch_size, out_channel, out_h, out_w) - if mask_inds.numel() > 0: - mask_h_idx = mask_inds[:, 0].contiguous() - mask_w_idx = mask_inds[:, 1].contiguous() - data_col = features.new_zeros(in_channel * kernel_h * kernel_w, - mask_inds.size(0)) - ext_module.masked_im2col_forward( - features, - mask_h_idx, - mask_w_idx, - data_col, - kernel_h=kernel_h, - kernel_w=kernel_w, - pad_h=pad_h, - pad_w=pad_w) - masked_output = torch.addmm(1, bias[:, None], 1, - weight.view(out_channel, -1), data_col) - ext_module.masked_col2im_forward( - masked_output, - mask_h_idx, - mask_w_idx, - output, - height=out_h, - width=out_w, - channels=out_channel) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - return (None, ) * 5 - - -masked_conv2d = MaskedConv2dFunction.apply - - -class MaskedConv2d(nn.Conv2d): - """A MaskedConv2d which inherits the official Conv2d. - - The masked forward doesn't implement the backward function and only - supports the stride parameter to be 1 currently. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True): - super().__init__(in_channels, out_channels, kernel_size, stride, - padding, dilation, groups, bias) - - def forward(self, input, mask=None): - if mask is None: # fallback to the normal Conv2d - return super().forward(input) - else: - return masked_conv2d(input, mask, self.weight, self.bias, - self.padding) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/merge_cells.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/merge_cells.py deleted file mode 100644 index 9e5de344..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/merge_cells.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -from abc import abstractmethod - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..cnn import ConvModule - - -class BaseMergeCell(nn.Module): - """The basic class for cells used in NAS-FPN and NAS-FCOS. - - BaseMergeCell takes 2 inputs. 
After applying convolution - on them, they are resized to the target size. Then, - they go through binary_op, which depends on the type of cell. - If with_out_conv is True, the result of output will go through - another convolution layer. - - Args: - in_channels (int): number of input channels in out_conv layer. - out_channels (int): number of output channels in out_conv layer. - with_out_conv (bool): Whether to use out_conv layer - out_conv_cfg (dict): Config dict for convolution layer, which should - contain "groups", "kernel_size", "padding", "bias" to build - out_conv layer. - out_norm_cfg (dict): Config dict for normalization layer in out_conv. - out_conv_order (tuple): The order of conv/norm/activation layers in - out_conv. - with_input1_conv (bool): Whether to use convolution on input1. - with_input2_conv (bool): Whether to use convolution on input2. - input_conv_cfg (dict): Config dict for building input1_conv layer and - input2_conv layer, which is expected to contain the type of - convolution. - Default: None, which means using conv2d. - input_norm_cfg (dict): Config dict for normalization layer in - input1_conv and input2_conv layer. Default: None. - upsample_mode (str): Interpolation method used to resize the output - of input1_conv and input2_conv to target size. Currently, we - support ['nearest', 'bilinear']. Default: 'nearest'. - """ - - def __init__(self, - fused_channels=256, - out_channels=256, - with_out_conv=True, - out_conv_cfg=dict( - groups=1, kernel_size=3, padding=1, bias=True), - out_norm_cfg=None, - out_conv_order=('act', 'conv', 'norm'), - with_input1_conv=False, - with_input2_conv=False, - input_conv_cfg=None, - input_norm_cfg=None, - upsample_mode='nearest'): - super().__init__() - assert upsample_mode in ['nearest', 'bilinear'] - self.with_out_conv = with_out_conv - self.with_input1_conv = with_input1_conv - self.with_input2_conv = with_input2_conv - self.upsample_mode = upsample_mode - - if self.with_out_conv: - self.out_conv = ConvModule( - fused_channels, - out_channels, - **out_conv_cfg, - norm_cfg=out_norm_cfg, - order=out_conv_order) - - self.input1_conv = self._build_input_conv( - out_channels, input_conv_cfg, - input_norm_cfg) if with_input1_conv else nn.Sequential() - self.input2_conv = self._build_input_conv( - out_channels, input_conv_cfg, - input_norm_cfg) if with_input2_conv else nn.Sequential() - - def _build_input_conv(self, channel, conv_cfg, norm_cfg): - return ConvModule( - channel, - channel, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True) - - @abstractmethod - def _binary_op(self, x1, x2): - pass - - def _resize(self, x, size): - if x.shape[-2:] == size: - return x - elif x.shape[-2:] < size: - return F.interpolate(x, size=size, mode=self.upsample_mode) - else: - if x.shape[-2] % size[-2] != 0 or x.shape[-1] % size[-1] != 0: - h, w = x.shape[-2:] - target_h, target_w = size - pad_h = math.ceil(h / target_h) * target_h - h - pad_w = math.ceil(w / target_w) * target_w - w - pad_l = pad_w // 2 - pad_r = pad_w - pad_l - pad_t = pad_h // 2 - pad_b = pad_h - pad_t - pad = (pad_l, pad_r, pad_t, pad_b) - x = F.pad(x, pad, mode='constant', value=0.0) - kernel_size = (x.shape[-2] // size[-2], x.shape[-1] // size[-1]) - x = F.max_pool2d(x, kernel_size=kernel_size, stride=kernel_size) - return x - - def forward(self, x1, x2, out_size=None): - assert x1.shape[:2] == x2.shape[:2] - assert out_size is None or len(out_size) == 2 - if out_size is None: # resize to larger one - out_size = max(x1.size()[2:], x2.size()[2:]) - - x1 = 
self.input1_conv(x1) - x2 = self.input2_conv(x2) - - x1 = self._resize(x1, out_size) - x2 = self._resize(x2, out_size) - - x = self._binary_op(x1, x2) - if self.with_out_conv: - x = self.out_conv(x) - return x - - -class SumCell(BaseMergeCell): - - def __init__(self, in_channels, out_channels, **kwargs): - super().__init__(in_channels, out_channels, **kwargs) - - def _binary_op(self, x1, x2): - return x1 + x2 - - -class ConcatCell(BaseMergeCell): - - def __init__(self, in_channels, out_channels, **kwargs): - super().__init__(in_channels * 2, out_channels, **kwargs) - - def _binary_op(self, x1, x2): - ret = torch.cat([x1, x2], dim=1) - return ret - - -class GlobalPoolingCell(BaseMergeCell): - - def __init__(self, in_channels=None, out_channels=None, **kwargs): - super().__init__(in_channels, out_channels, **kwargs) - self.global_pool = nn.AdaptiveAvgPool2d((1, 1)) - - def _binary_op(self, x1, x2): - x2_att = self.global_pool(x2).sigmoid() - return x2 + x2_att * x1 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/min_area_polygons.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/min_area_polygons.py deleted file mode 100644 index 9f42a8be..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/min_area_polygons.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['min_area_polygons']) - - -def min_area_polygons(pointsets): - """Find the smallest polygons that surrounds all points in the point sets. - - Args: - pointsets (Tensor): point sets with shape (N, 18). - - Returns: - torch.Tensor: Return the smallest polygons with shape (N, 8). - """ - polygons = pointsets.new_zeros((pointsets.size(0), 8)) - ext_module.min_area_polygons(pointsets, polygons) - return polygons diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/modulated_deform_conv.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/modulated_deform_conv.py deleted file mode 100644 index 7c645cae..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/modulated_deform_conv.py +++ /dev/null @@ -1,283 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair, _single - -from mmcv.utils import deprecated_api_warning -from ..cnn import CONV_LAYERS -from ..utils import ext_loader, print_log - -ext_module = ext_loader.load_ext( - '_ext', - ['modulated_deform_conv_forward', 'modulated_deform_conv_backward']) - - -class ModulatedDeformConv2dFunction(Function): - - @staticmethod - def symbolic(g, input, offset, mask, weight, bias, stride, padding, - dilation, groups, deform_groups): - input_tensors = [input, offset, mask, weight] - if bias is not None: - input_tensors.append(bias) - return g.op( - 'mmcv::MMCVModulatedDeformConv2d', - *input_tensors, - stride_i=stride, - padding_i=padding, - dilation_i=dilation, - groups_i=groups, - deform_groups_i=deform_groups) - - @staticmethod - def forward(ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1): - if input is not None and input.dim() != 4: - raise ValueError( - f'Expected 4D tensor as input, got {input.dim()}D tensor \ - instead.') - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deform_groups = deform_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(0) # fake tensor - # When pytorch version >= 1.6.0, amp is adopted for fp16 mode; - # amp won't cast the type of model (float32), but "offset" is cast - # to float16 by nn.Conv2d automatically, leading to the type - # mismatch with input (when it is float32) or weight. - # The flag for whether to use fp16 or amp is the type of "offset", - # we cast weight and input to temporarily support fp16 and amp - # whatever the pytorch version is. 
- input = input.type_as(offset) - weight = weight.type_as(input) - bias = bias.type_as(input) - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty( - ModulatedDeformConv2dFunction._output_size(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - ext_module.modulated_deform_conv_forward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - output, - ctx._bufs[1], - kernel_h=weight.size(2), - kernel_w=weight.size(3), - stride_h=ctx.stride[0], - stride_w=ctx.stride[1], - pad_h=ctx.padding[0], - pad_w=ctx.padding[1], - dilation_h=ctx.dilation[0], - dilation_w=ctx.dilation[1], - group=ctx.groups, - deformable_group=ctx.deform_groups, - with_bias=ctx.with_bias) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - grad_output = grad_output.contiguous() - ext_module.modulated_deform_conv_backward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - ctx._bufs[1], - grad_input, - grad_weight, - grad_bias, - grad_offset, - grad_mask, - grad_output, - kernel_h=weight.size(2), - kernel_w=weight.size(3), - stride_h=ctx.stride[0], - stride_w=ctx.stride[1], - pad_h=ctx.padding[0], - pad_w=ctx.padding[1], - dilation_h=ctx.dilation[0], - dilation_w=ctx.dilation[1], - group=ctx.groups, - deformable_group=ctx.deform_groups, - with_bias=ctx.with_bias) - if not ctx.with_bias: - grad_bias = None - - return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, - None, None, None, None, None) - - @staticmethod - def _output_size(ctx, input, weight): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = ctx.padding[d] - kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = ctx.stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - 'convolution input is too small (output would be ' + - 'x'.join(map(str, output_size)) + ')') - return output_size - - -modulated_deform_conv2d = ModulatedDeformConv2dFunction.apply - - -class ModulatedDeformConv2d(nn.Module): - - @deprecated_api_warning({'deformable_groups': 'deform_groups'}, - cls_name='ModulatedDeformConv2d') - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1, - bias=True): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deform_groups = deform_groups - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // groups, - *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. 
/ math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - def forward(self, x, offset, mask): - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - -@CONV_LAYERS.register_module('DCNv2') -class ModulatedDeformConv2dPack(ModulatedDeformConv2d): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv - layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int): Same as nn.Conv2d, while tuple is not supported. - padding (int): Same as nn.Conv2d, while tuple is not supported. - dilation (int): Same as nn.Conv2d, while tuple is not supported. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deform_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=self.stride, - padding=self.padding, - dilation=self.dilation, - bias=True) - self.init_weights() - - def init_weights(self): - super().init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - out = self.conv_offset(x) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - version = local_metadata.get('version', None) - - if version is None or version < 2: - # the key is different in early versions - # In version < 2, ModulatedDeformConvPack - # loads previous benchmark models. - if (prefix + 'conv_offset.weight' not in state_dict - and prefix[:-1] + '_offset.weight' in state_dict): - state_dict[prefix + 'conv_offset.weight'] = state_dict.pop( - prefix[:-1] + '_offset.weight') - if (prefix + 'conv_offset.bias' not in state_dict - and prefix[:-1] + '_offset.bias' in state_dict): - state_dict[prefix + - 'conv_offset.bias'] = state_dict.pop(prefix[:-1] + - '_offset.bias') - - if version is not None and version > 1: - print_log( - f'ModulatedDeformConvPack {prefix.rstrip(".")} is upgraded to ' - 'version 2.', - logger='root') - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/multi_scale_deform_attn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/multi_scale_deform_attn.py deleted file mode 100644 index eb34aadf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/multi_scale_deform_attn.py +++ /dev/null @@ -1,357 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd.function import Function, once_differentiable - -from mmcv import deprecated_api_warning -from mmcv.cnn import constant_init, xavier_init -from mmcv.cnn.bricks.registry import ATTENTION -from mmcv.runner import BaseModule -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['ms_deform_attn_backward', 'ms_deform_attn_forward']) - - -class MultiScaleDeformableAttnFunction(Function): - - @staticmethod - def forward(ctx, value, value_spatial_shapes, value_level_start_index, - sampling_locations, attention_weights, im2col_step): - """GPU version of multi-scale deformable attention. - - Args: - value (torch.Tensor): The value has shape - (bs, num_keys, mum_heads, embed_dims//num_heads) - value_spatial_shapes (torch.Tensor): Spatial shape of - each feature map, has shape (num_levels, 2), - last dimension 2 represent (h, w) - sampling_locations (torch.Tensor): The location of sampling points, - has shape - (bs ,num_queries, num_heads, num_levels, num_points, 2), - the last dimension 2 represent (x, y). - attention_weights (torch.Tensor): The weight of sampling points - used when calculate the attention, has shape - (bs ,num_queries, num_heads, num_levels, num_points), - im2col_step (Tensor): The step used in image to column. - - Returns: - torch.Tensor: has shape (bs, num_queries, embed_dims) - """ - - ctx.im2col_step = im2col_step - output = ext_module.ms_deform_attn_forward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - im2col_step=ctx.im2col_step) - ctx.save_for_backward(value, value_spatial_shapes, - value_level_start_index, sampling_locations, - attention_weights) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - """GPU version of backward function. - - Args: - grad_output (torch.Tensor): Gradient of output tensor of forward. - - Returns: - tuple[Tensor]: Gradient of input tensors in forward. - """ - value, value_spatial_shapes, value_level_start_index,\ - sampling_locations, attention_weights = ctx.saved_tensors - grad_value = torch.zeros_like(value) - grad_sampling_loc = torch.zeros_like(sampling_locations) - grad_attn_weight = torch.zeros_like(attention_weights) - - ext_module.ms_deform_attn_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - grad_output.contiguous(), - grad_value, - grad_sampling_loc, - grad_attn_weight, - im2col_step=ctx.im2col_step) - - return grad_value, None, None, \ - grad_sampling_loc, grad_attn_weight, None - - -def multi_scale_deformable_attn_pytorch(value, value_spatial_shapes, - sampling_locations, attention_weights): - """CPU version of multi-scale deformable attention. - - Args: - value (torch.Tensor): The value has shape - (bs, num_keys, num_heads, embed_dims//num_heads) - value_spatial_shapes (torch.Tensor): Spatial shape of - each feature map, has shape (num_levels, 2), - last dimension 2 represent (h, w) - sampling_locations (torch.Tensor): The location of sampling points, - has shape - (bs ,num_queries, num_heads, num_levels, num_points, 2), - the last dimension 2 represent (x, y). 
- attention_weights (torch.Tensor): The weight of sampling points used - when calculate the attention, has shape - (bs ,num_queries, num_heads, num_levels, num_points), - - Returns: - torch.Tensor: has shape (bs, num_queries, embed_dims) - """ - - bs, _, num_heads, embed_dims = value.shape - _, num_queries, num_heads, num_levels, num_points, _ =\ - sampling_locations.shape - value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], - dim=1) - sampling_grids = 2 * sampling_locations - 1 - sampling_value_list = [] - for level, (H_, W_) in enumerate(value_spatial_shapes): - # bs, H_*W_, num_heads, embed_dims -> - # bs, H_*W_, num_heads*embed_dims -> - # bs, num_heads*embed_dims, H_*W_ -> - # bs*num_heads, embed_dims, H_, W_ - value_l_ = value_list[level].flatten(2).transpose(1, 2).reshape( - bs * num_heads, embed_dims, H_, W_) - # bs, num_queries, num_heads, num_points, 2 -> - # bs, num_heads, num_queries, num_points, 2 -> - # bs*num_heads, num_queries, num_points, 2 - sampling_grid_l_ = sampling_grids[:, :, :, - level].transpose(1, 2).flatten(0, 1) - # bs*num_heads, embed_dims, num_queries, num_points - sampling_value_l_ = F.grid_sample( - value_l_, - sampling_grid_l_, - mode='bilinear', - padding_mode='zeros', - align_corners=False) - sampling_value_list.append(sampling_value_l_) - # (bs, num_queries, num_heads, num_levels, num_points) -> - # (bs, num_heads, num_queries, num_levels, num_points) -> - # (bs, num_heads, 1, num_queries, num_levels*num_points) - attention_weights = attention_weights.transpose(1, 2).reshape( - bs * num_heads, 1, num_queries, num_levels * num_points) - output = (torch.stack(sampling_value_list, dim=-2).flatten(-2) * - attention_weights).sum(-1).view(bs, num_heads * embed_dims, - num_queries) - return output.transpose(1, 2).contiguous() - - -@ATTENTION.register_module() -class MultiScaleDeformableAttention(BaseModule): - """An attention module used in Deformable-Detr. - - `Deformable DETR: Deformable Transformers for End-to-End Object Detection. - `_. - - Args: - embed_dims (int): The embedding dimension of Attention. - Default: 256. - num_heads (int): Parallel attention heads. Default: 64. - num_levels (int): The number of feature map used in - Attention. Default: 4. - num_points (int): The number of sampling points for - each query in each head. Default: 4. - im2col_step (int): The step used in image_to_column. - Default: 64. - dropout (float): A Dropout layer on `inp_identity`. - Default: 0.1. - batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Default to False. - norm_cfg (dict): Config dict for normalization layer. - Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. 
- """ - - def __init__(self, - embed_dims=256, - num_heads=8, - num_levels=4, - num_points=4, - im2col_step=64, - dropout=0.1, - batch_first=False, - norm_cfg=None, - init_cfg=None): - super().__init__(init_cfg) - if embed_dims % num_heads != 0: - raise ValueError(f'embed_dims must be divisible by num_heads, ' - f'but got {embed_dims} and {num_heads}') - dim_per_head = embed_dims // num_heads - self.norm_cfg = norm_cfg - self.dropout = nn.Dropout(dropout) - self.batch_first = batch_first - - # you'd better set dim_per_head to a power of 2 - # which is more efficient in the CUDA implementation - def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError( - 'invalid input for _is_power_of_2: {} (type: {})'.format( - n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - if not _is_power_of_2(dim_per_head): - warnings.warn( - "You'd better set embed_dims in " - 'MultiScaleDeformAttention to make ' - 'the dimension of each attention head a power of 2 ' - 'which is more efficient in our CUDA implementation.') - - self.im2col_step = im2col_step - self.embed_dims = embed_dims - self.num_levels = num_levels - self.num_heads = num_heads - self.num_points = num_points - self.sampling_offsets = nn.Linear( - embed_dims, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dims, - num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dims, embed_dims) - self.output_proj = nn.Linear(embed_dims, embed_dims) - self.init_weights() - - def init_weights(self): - """Default initialization for Parameters of Module.""" - constant_init(self.sampling_offsets, 0.) - thetas = torch.arange( - self.num_heads, - dtype=torch.float32) * (2.0 * math.pi / self.num_heads) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = (grid_init / - grid_init.abs().max(-1, keepdim=True)[0]).view( - self.num_heads, 1, 1, - 2).repeat(1, self.num_levels, self.num_points, 1) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - - self.sampling_offsets.bias.data = grid_init.view(-1) - constant_init(self.attention_weights, val=0., bias=0.) - xavier_init(self.value_proj, distribution='uniform', bias=0.) - xavier_init(self.output_proj, distribution='uniform', bias=0.) - self._is_init = True - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiScaleDeformableAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_padding_mask=None, - reference_points=None, - spatial_shapes=None, - level_start_index=None, - **kwargs): - """Forward Function of MultiScaleDeformAttention. - - Args: - query (torch.Tensor): Query of Transformer with shape - (num_query, bs, embed_dims). - key (torch.Tensor): The key tensor with shape - `(num_key, bs, embed_dims)`. - value (torch.Tensor): The value tensor with shape - `(num_key, bs, embed_dims)`. - identity (torch.Tensor): The tensor used for addition, with the - same shape as `query`. Default None. If None, - `query` will be used. - query_pos (torch.Tensor): The positional encoding for `query`. - Default: None. - key_pos (torch.Tensor): The positional encoding for `key`. Default - None. - reference_points (torch.Tensor): The normalized reference - points with shape (bs, num_query, num_levels, 2), - all elements is range in [0, 1], top-left (0,0), - bottom-right (1, 1), including padding area. - or (N, Length_{query}, num_levels, 4), add - additional two dimensions is (w, h) to - form reference boxes. 
- key_padding_mask (torch.Tensor): ByteTensor for `query`, with - shape [bs, num_key]. - spatial_shapes (torch.Tensor): Spatial shape of features in - different levels. With shape (num_levels, 2), - last dimension represents (h, w). - level_start_index (torch.Tensor): The start index of each level. - A tensor has shape ``(num_levels, )`` and can be represented - as [0, h_0*w_0, h_0*w_0+h_1*w_1, ...]. - - Returns: - torch.Tensor: forwarded results with shape - [num_query, bs, embed_dims]. - """ - - if value is None: - value = query - - if identity is None: - identity = query - if query_pos is not None: - query = query + query_pos - if not self.batch_first: - # change to (bs, num_query ,embed_dims) - query = query.permute(1, 0, 2) - value = value.permute(1, 0, 2) - - bs, num_query, _ = query.shape - bs, num_value, _ = value.shape - assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value - - value = self.value_proj(value) - if key_padding_mask is not None: - value = value.masked_fill(key_padding_mask[..., None], 0.0) - value = value.view(bs, num_value, self.num_heads, -1) - sampling_offsets = self.sampling_offsets(query).view( - bs, num_query, self.num_heads, self.num_levels, self.num_points, 2) - attention_weights = self.attention_weights(query).view( - bs, num_query, self.num_heads, self.num_levels * self.num_points) - attention_weights = attention_weights.softmax(-1) - - attention_weights = attention_weights.view(bs, num_query, - self.num_heads, - self.num_levels, - self.num_points) - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack( - [spatial_shapes[..., 1], spatial_shapes[..., 0]], -1) - sampling_locations = reference_points[:, :, None, :, None, :] \ - + sampling_offsets \ - / offset_normalizer[None, None, None, :, None, :] - elif reference_points.shape[-1] == 4: - sampling_locations = reference_points[:, :, None, :, None, :2] \ - + sampling_offsets / self.num_points \ - * reference_points[:, :, None, :, None, 2:] \ - * 0.5 - else: - raise ValueError( - f'Last dim of reference_points must be' - f' 2 or 4, but get {reference_points.shape[-1]} instead.') - if torch.cuda.is_available() and value.is_cuda: - output = MultiScaleDeformableAttnFunction.apply( - value, spatial_shapes, level_start_index, sampling_locations, - attention_weights, self.im2col_step) - else: - output = multi_scale_deformable_attn_pytorch( - value, spatial_shapes, sampling_locations, attention_weights) - - output = self.output_proj(output) - - if not self.batch_first: - # (num_query, bs ,embed_dims) - output = output.permute(1, 0, 2) - - return self.dropout(output) + identity diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/nms.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/nms.py deleted file mode 100644 index d41b1ac9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/nms.py +++ /dev/null @@ -1,477 +0,0 @@ -import os -from typing import Any, Dict, List, Optional, Tuple, Union - -import numpy as np -import torch -from torch import Tensor - -from mmcv.utils import deprecated_api_warning -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['nms', 'softnms', 'nms_match', 'nms_rotated']) - - -# This function is modified from: https://github.com/pytorch/vision/ -class NMSop(torch.autograd.Function): - - @staticmethod - def forward(ctx: Any, bboxes: Tensor, scores: Tensor, iou_threshold: float, - offset: int, score_threshold: float, max_num: int) -> Tensor: - is_filtering_by_score = score_threshold > 0 - 
if is_filtering_by_score: - valid_mask = scores > score_threshold - bboxes, scores = bboxes[valid_mask], scores[valid_mask] - valid_inds = torch.nonzero( - valid_mask, as_tuple=False).squeeze(dim=1) - - inds = ext_module.nms( - bboxes, scores, iou_threshold=float(iou_threshold), offset=offset) - - if max_num > 0: - inds = inds[:max_num] - if is_filtering_by_score: - inds = valid_inds[inds] - return inds - - @staticmethod - def symbolic(g, bboxes, scores, iou_threshold, offset, score_threshold, - max_num): - from ..onnx import is_custom_op_loaded - has_custom_op = is_custom_op_loaded() - # TensorRT nms plugin is aligned with original nms in ONNXRuntime - is_trt_backend = os.environ.get('ONNX_BACKEND') == 'MMCVTensorRT' - if has_custom_op and (not is_trt_backend): - return g.op( - 'mmcv::NonMaxSuppression', - bboxes, - scores, - iou_threshold_f=float(iou_threshold), - offset_i=int(offset)) - else: - from torch.onnx.symbolic_opset9 import select, squeeze, unsqueeze - - from ..onnx.onnx_utils.symbolic_helper import _size_helper - - boxes = unsqueeze(g, bboxes, 0) - scores = unsqueeze(g, unsqueeze(g, scores, 0), 0) - - if max_num > 0: - max_num = g.op( - 'Constant', - value_t=torch.tensor(max_num, dtype=torch.long)) - else: - dim = g.op('Constant', value_t=torch.tensor(0)) - max_num = _size_helper(g, bboxes, dim) - max_output_per_class = max_num - iou_threshold = g.op( - 'Constant', - value_t=torch.tensor([iou_threshold], dtype=torch.float)) - score_threshold = g.op( - 'Constant', - value_t=torch.tensor([score_threshold], dtype=torch.float)) - nms_out = g.op('NonMaxSuppression', boxes, scores, - max_output_per_class, iou_threshold, - score_threshold) - return squeeze( - g, - select( - g, nms_out, 1, - g.op( - 'Constant', - value_t=torch.tensor([2], dtype=torch.long))), 1) - - -class SoftNMSop(torch.autograd.Function): - - @staticmethod - def forward(ctx: Any, boxes: Tensor, scores: Tensor, iou_threshold: float, - sigma: float, min_score: float, method: int, - offset: int) -> Tuple[Tensor, Tensor]: - dets = boxes.new_empty((boxes.size(0), 5), device='cpu') - inds = ext_module.softnms( - boxes.cpu(), - scores.cpu(), - dets.cpu(), - iou_threshold=float(iou_threshold), - sigma=float(sigma), - min_score=float(min_score), - method=int(method), - offset=int(offset)) - return dets, inds - - @staticmethod - def symbolic(g, boxes, scores, iou_threshold, sigma, min_score, method, - offset): - from packaging import version - assert version.parse(torch.__version__) >= version.parse('1.7.0') - nms_out = g.op( - 'mmcv::SoftNonMaxSuppression', - boxes, - scores, - iou_threshold_f=float(iou_threshold), - sigma_f=float(sigma), - min_score_f=float(min_score), - method_i=int(method), - offset_i=int(offset), - outputs=2) - return nms_out - - -array_like_type = Union[Tensor, np.ndarray] - - -@deprecated_api_warning({'iou_thr': 'iou_threshold'}) -def nms(boxes: array_like_type, - scores: array_like_type, - iou_threshold: float, - offset: int = 0, - score_threshold: float = 0, - max_num: int = -1) -> Tuple[array_like_type, array_like_type]: - """Dispatch to either CPU or GPU NMS implementations. - - The input can be either torch tensor or numpy array. GPU NMS will be used - if the input is gpu tensor, otherwise CPU NMS - will be used. The returned type will always be the same as inputs. - - Arguments: - boxes (torch.Tensor or np.ndarray): boxes in shape (N, 4). - scores (torch.Tensor or np.ndarray): scores in shape (N, ). - iou_threshold (float): IoU threshold for NMS. 
- offset (int, 0 or 1): boxes' width or height is (x2 - x1 + offset). - score_threshold (float): score threshold for NMS. - max_num (int): maximum number of boxes after NMS. - - Returns: - tuple: kept dets (boxes and scores) and indice, which always have - the same data type as the input. - - Example: - >>> boxes = np.array([[49.1, 32.4, 51.0, 35.9], - >>> [49.3, 32.9, 51.0, 35.3], - >>> [49.2, 31.8, 51.0, 35.4], - >>> [35.1, 11.5, 39.1, 15.7], - >>> [35.6, 11.8, 39.3, 14.2], - >>> [35.3, 11.5, 39.9, 14.5], - >>> [35.2, 11.7, 39.7, 15.7]], dtype=np.float32) - >>> scores = np.array([0.9, 0.9, 0.5, 0.5, 0.5, 0.4, 0.3],\ - dtype=np.float32) - >>> iou_threshold = 0.6 - >>> dets, inds = nms(boxes, scores, iou_threshold) - >>> assert len(inds) == len(dets) == 3 - """ - assert isinstance(boxes, (Tensor, np.ndarray)) - assert isinstance(scores, (Tensor, np.ndarray)) - is_numpy = False - if isinstance(boxes, np.ndarray): - is_numpy = True - boxes = torch.from_numpy(boxes) - if isinstance(scores, np.ndarray): - scores = torch.from_numpy(scores) - assert boxes.size(1) == 4 - assert boxes.size(0) == scores.size(0) - assert offset in (0, 1) - - inds = NMSop.apply(boxes, scores, iou_threshold, offset, score_threshold, - max_num) - dets = torch.cat((boxes[inds], scores[inds].reshape(-1, 1)), dim=1) - if is_numpy: - dets = dets.cpu().numpy() - inds = inds.cpu().numpy() - return dets, inds - - -@deprecated_api_warning({'iou_thr': 'iou_threshold'}) -def soft_nms(boxes: array_like_type, - scores: array_like_type, - iou_threshold: float = 0.3, - sigma: float = 0.5, - min_score: float = 1e-3, - method: str = 'linear', - offset: int = 0) -> Tuple[array_like_type, array_like_type]: - """Dispatch to only CPU Soft NMS implementations. - - The input can be either a torch tensor or numpy array. - The returned type will always be the same as inputs. - - Args: - boxes (torch.Tensor or np.ndarray): boxes in shape (N, 4). - scores (torch.Tensor or np.ndarray): scores in shape (N, ). - iou_threshold (float): IoU threshold for NMS. - sigma (float): hyperparameter for gaussian method - min_score (float): score filter threshold - method (str): either 'linear' or 'gaussian' - offset (int, 0 or 1): boxes' width or height is (x2 - x1 + offset). - - Returns: - tuple: kept dets (boxes and scores) and indice, which always have - the same data type as the input. 
- - Example: - >>> boxes = np.array([[4., 3., 5., 3.], - >>> [4., 3., 5., 4.], - >>> [3., 1., 3., 1.], - >>> [3., 1., 3., 1.], - >>> [3., 1., 3., 1.], - >>> [3., 1., 3., 1.]], dtype=np.float32) - >>> scores = np.array([0.9, 0.9, 0.5, 0.5, 0.4, 0.0], dtype=np.float32) - >>> iou_threshold = 0.6 - >>> dets, inds = soft_nms(boxes, scores, iou_threshold, sigma=0.5) - >>> assert len(inds) == len(dets) == 5 - """ - - assert isinstance(boxes, (Tensor, np.ndarray)) - assert isinstance(scores, (Tensor, np.ndarray)) - is_numpy = False - if isinstance(boxes, np.ndarray): - is_numpy = True - boxes = torch.from_numpy(boxes) - if isinstance(scores, np.ndarray): - scores = torch.from_numpy(scores) - assert boxes.size(1) == 4 - assert boxes.size(0) == scores.size(0) - assert offset in (0, 1) - method_dict = {'naive': 0, 'linear': 1, 'gaussian': 2} - assert method in method_dict.keys() - - if torch.__version__ == 'parrots': - dets = boxes.new_empty((boxes.size(0), 5), device='cpu') - indata_list = [boxes.cpu(), scores.cpu(), dets.cpu()] - indata_dict = { - 'iou_threshold': float(iou_threshold), - 'sigma': float(sigma), - 'min_score': min_score, - 'method': method_dict[method], - 'offset': int(offset) - } - inds = ext_module.softnms(*indata_list, **indata_dict) - else: - dets, inds = SoftNMSop.apply(boxes.cpu(), scores.cpu(), - float(iou_threshold), float(sigma), - float(min_score), method_dict[method], - int(offset)) - - dets = dets[:inds.size(0)] - - if is_numpy: - dets = dets.cpu().numpy() - inds = inds.cpu().numpy() - return dets, inds - else: - return dets.to(device=boxes.device), inds.to(device=boxes.device) - - -def batched_nms(boxes: Tensor, - scores: Tensor, - idxs: Tensor, - nms_cfg: Optional[Dict], - class_agnostic: bool = False) -> Tuple[Tensor, Tensor]: - r"""Performs non-maximum suppression in a batched fashion. - - Modified from `torchvision/ops/boxes.py#L39 - `_. - In order to perform NMS independently per class, we add an offset to all - the boxes. The offset is dependent only on the class idx, and is large - enough so that boxes from different classes do not overlap. - - Note: - In v1.4.1 and later, ``batched_nms`` supports skipping the NMS and - returns sorted raw results when `nms_cfg` is None. - - Args: - boxes (torch.Tensor): boxes in shape (N, 4) or (N, 5). - scores (torch.Tensor): scores in shape (N, ). - idxs (torch.Tensor): each index value correspond to a bbox cluster, - and NMS will not be applied between elements of different idxs, - shape (N, ). - nms_cfg (dict | optional): Supports skipping the nms when `nms_cfg` - is None, otherwise it should specify nms type and other - parameters like `iou_thr`. Possible keys includes the following. - - - iou_threshold (float): IoU threshold used for NMS. - - split_thr (float): threshold number of boxes. In some cases the - number of boxes is large (e.g., 200k). To avoid OOM during - training, the users could set `split_thr` to a small value. - If the number of boxes is greater than the threshold, it will - perform NMS on each group of boxes separately and sequentially. - Defaults to 10000. - class_agnostic (bool): if true, nms is class agnostic, - i.e. IoU thresholding happens over all boxes, - regardless of the predicted class. Defaults to False. - - Returns: - tuple: kept dets and indice. - - - boxes (Tensor): Bboxes with score after nms, has shape - (num_bboxes, 5). last dimension 5 arrange as - (x1, y1, x2, y2, score) - - keep (Tensor): The indices of remaining boxes in input - boxes. 
- """ - # skip nms when nms_cfg is None - if nms_cfg is None: - scores, inds = scores.sort(descending=True) - boxes = boxes[inds] - return torch.cat([boxes, scores[:, None]], -1), inds - - nms_cfg_ = nms_cfg.copy() - class_agnostic = nms_cfg_.pop('class_agnostic', class_agnostic) - if class_agnostic: - boxes_for_nms = boxes - else: - # When using rotated boxes, only apply offsets on center. - if boxes.size(-1) == 5: - # Strictly, the maximum coordinates of the rotating box - # (x,y,w,h,a) should be calculated by polygon coordinates. - # But the conversion from rotated box to polygon will - # slow down the speed. - # So we use max(x,y) + max(w,h) as max coordinate - # which is larger than polygon max coordinate - # max(x1, y1, x2, y2,x3, y3, x4, y4) - max_coordinate = boxes[..., :2].max() + boxes[..., 2:4].max() - offsets = idxs.to(boxes) * ( - max_coordinate + torch.tensor(1).to(boxes)) - boxes_ctr_for_nms = boxes[..., :2] + offsets[:, None] - boxes_for_nms = torch.cat([boxes_ctr_for_nms, boxes[..., 2:5]], - dim=-1) - else: - max_coordinate = boxes.max() - offsets = idxs.to(boxes) * ( - max_coordinate + torch.tensor(1).to(boxes)) - boxes_for_nms = boxes + offsets[:, None] - - nms_type = nms_cfg_.pop('type', 'nms') - nms_op = eval(nms_type) - - split_thr = nms_cfg_.pop('split_thr', 10000) - # Won't split to multiple nms nodes when exporting to onnx - if boxes_for_nms.shape[0] < split_thr or torch.onnx.is_in_onnx_export(): - dets, keep = nms_op(boxes_for_nms, scores, **nms_cfg_) - boxes = boxes[keep] - - # This assumes `dets` has arbitrary dimensions where - # the last dimension is score. - # Currently it supports bounding boxes [x1, y1, x2, y2, score] or - # rotated boxes [cx, cy, w, h, angle_radian, score]. - - scores = dets[:, -1] - else: - max_num = nms_cfg_.pop('max_num', -1) - total_mask = scores.new_zeros(scores.size(), dtype=torch.bool) - # Some type of nms would reweight the score, such as SoftNMS - scores_after_nms = scores.new_zeros(scores.size()) - for id in torch.unique(idxs): - mask = (idxs == id).nonzero(as_tuple=False).view(-1) - dets, keep = nms_op(boxes_for_nms[mask], scores[mask], **nms_cfg_) - total_mask[mask[keep]] = True - scores_after_nms[mask[keep]] = dets[:, -1] - keep = total_mask.nonzero(as_tuple=False).view(-1) - - scores, inds = scores_after_nms[keep].sort(descending=True) - keep = keep[inds] - boxes = boxes[keep] - - if max_num > 0: - keep = keep[:max_num] - boxes = boxes[:max_num] - scores = scores[:max_num] - - boxes = torch.cat([boxes, scores[:, None]], -1) - return boxes, keep - - -def nms_match(dets: array_like_type, - iou_threshold: float) -> List[array_like_type]: - """Matched dets into different groups by NMS. - - NMS match is Similar to NMS but when a bbox is suppressed, nms match will - record the indice of suppressed bbox and form a group with the indice of - kept bbox. In each group, indice is sorted as score order. - - Args: - dets (torch.Tensor | np.ndarray): Det boxes with scores, shape (N, 5). - iou_threshold (float): IoU thresh for NMS. - - Returns: - list[torch.Tensor | np.ndarray]: The outer list corresponds different - matched group, the inner Tensor corresponds the indices for a group - in score order. 
- """ - if dets.shape[0] == 0: - matched = [] - else: - assert dets.shape[-1] == 5, 'inputs dets.shape should be (N, 5), ' \ - f'but get {dets.shape}' - if isinstance(dets, Tensor): - dets_t = dets.detach().cpu() - else: - dets_t = torch.from_numpy(dets) - indata_list = [dets_t] - indata_dict = {'iou_threshold': float(iou_threshold)} - matched = ext_module.nms_match(*indata_list, **indata_dict) - if torch.__version__ == 'parrots': - matched = matched.tolist() # type: ignore - - if isinstance(dets, Tensor): - return [dets.new_tensor(m, dtype=torch.long) for m in matched] - else: - return [np.array(m, dtype=int) for m in matched] - - -def nms_rotated(dets: Tensor, - scores: Tensor, - iou_threshold: float, - labels: Optional[Tensor] = None, - clockwise: bool = True) -> Tuple[Tensor, Tensor]: - """Performs non-maximum suppression (NMS) on the rotated boxes according to - their intersection-over-union (IoU). - - Rotated NMS iteratively removes lower scoring rotated boxes which have an - IoU greater than iou_threshold with another (higher scoring) rotated box. - - Args: - dets (torch.Tensor): Rotated boxes in shape (N, 5). - They are expected to be in - (x_ctr, y_ctr, width, height, angle_radian) format. - scores (torch.Tensor): scores in shape (N, ). - iou_threshold (float): IoU thresh for NMS. - labels (torch.Tensor, optional): boxes' label in shape (N,). - clockwise (bool): flag indicating whether the positive angular - orientation is clockwise. default True. - `New in version 1.4.3.` - - Returns: - tuple: kept dets(boxes and scores) and indice, which is always the - same data type as the input. - """ - if dets.shape[0] == 0: - return dets, None - if not clockwise: - flip_mat = dets.new_ones(dets.shape[-1]) - flip_mat[-1] = -1 - dets_cw = dets * flip_mat - else: - dets_cw = dets - multi_label = labels is not None - if multi_label: - dets_wl = torch.cat((dets_cw, labels.unsqueeze(1)), 1) # type: ignore - else: - dets_wl = dets_cw - _, order = scores.sort(0, descending=True) - dets_sorted = dets_wl.index_select(0, order) - - if torch.__version__ == 'parrots': - keep_inds = ext_module.nms_rotated( - dets_wl, - scores, - order, - dets_sorted, - iou_threshold=iou_threshold, - multi_label=multi_label) - else: - keep_inds = ext_module.nms_rotated(dets_wl, scores, order, dets_sorted, - iou_threshold, multi_label) - dets = torch.cat((dets[keep_inds], scores[keep_inds].reshape(-1, 1)), - dim=1) - return dets, keep_inds diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/pixel_group.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/pixel_group.py deleted file mode 100644 index cf73e326..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/pixel_group.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Union - -import numpy as np -import torch -from torch import Tensor - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['pixel_group']) - - -def pixel_group( - score: Union[np.ndarray, Tensor], - mask: Union[np.ndarray, Tensor], - embedding: Union[np.ndarray, Tensor], - kernel_label: Union[np.ndarray, Tensor], - kernel_contour: Union[np.ndarray, Tensor], - kernel_region_num: int, - distance_threshold: float, -) -> List[List[float]]: - """Group pixels into text instances, which is widely used text detection - methods. - - Arguments: - score (np.array or torch.Tensor): The foreground score with size hxw. - mask (np.array or Tensor): The foreground mask with size hxw. 
- embedding (np.array or torch.Tensor): The embedding with size hxwxc to - distinguish instances. - kernel_label (np.array or torch.Tensor): The instance kernel index with - size hxw. - kernel_contour (np.array or torch.Tensor): The kernel contour with - size hxw. - kernel_region_num (int): The instance kernel region number. - distance_threshold (float): The embedding distance threshold between - kernel and pixel in one instance. - - Returns: - list[list[float]]: The instance coordinates and attributes list. Each - element consists of averaged confidence, pixel number, and coordinates - (x_i, y_i for all pixels) in order. - """ - assert isinstance(score, (torch.Tensor, np.ndarray)) - assert isinstance(mask, (torch.Tensor, np.ndarray)) - assert isinstance(embedding, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_label, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_contour, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_region_num, int) - assert isinstance(distance_threshold, float) - - if isinstance(score, np.ndarray): - score = torch.from_numpy(score) - if isinstance(mask, np.ndarray): - mask = torch.from_numpy(mask) - if isinstance(embedding, np.ndarray): - embedding = torch.from_numpy(embedding) - if isinstance(kernel_label, np.ndarray): - kernel_label = torch.from_numpy(kernel_label) - if isinstance(kernel_contour, np.ndarray): - kernel_contour = torch.from_numpy(kernel_contour) - - if torch.__version__ == 'parrots': - label = ext_module.pixel_group( - score, - mask, - embedding, - kernel_label, - kernel_contour, - kernel_region_num=kernel_region_num, - distance_threshold=distance_threshold) - label = label.tolist() - label = label[0] - list_index = kernel_region_num - pixel_assignment = [] - for x in range(kernel_region_num): - pixel_assignment.append( - np.array( - label[list_index:list_index + int(label[x])], - dtype=np.float)) - list_index = list_index + int(label[x]) - else: - pixel_assignment = ext_module.pixel_group(score, mask, embedding, - kernel_label, kernel_contour, - kernel_region_num, - distance_threshold) - return pixel_assignment diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/point_sample.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/point_sample.py deleted file mode 100644 index b40ccaba..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/point_sample.py +++ /dev/null @@ -1,360 +0,0 @@ -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend # noqa - -from os import path as osp -from typing import Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import Tensor -from torch.nn.modules.utils import _pair -from torch.onnx.operators import shape_as_tensor - - -def bilinear_grid_sample(im: Tensor, - grid: Tensor, - align_corners: bool = False) -> Tensor: - """Given an input and a flow-field grid, computes the output using input - values and pixel locations from grid. Supported only bilinear interpolation - method to sample the input pixels. - - Args: - im (torch.Tensor): Input feature map, shape (N, C, H, W) - grid (torch.Tensor): Point coordinates, shape (N, Hg, Wg, 2) - align_corners (bool): If set to True, the extrema (-1 and 1) are - considered as referring to the center points of the input’s - corner pixels. If set to False, they are instead considered as - referring to the corner points of the input’s corner pixels, - making the sampling more resolution agnostic. 
- - Returns: - torch.Tensor: A tensor with sampled points, shape (N, C, Hg, Wg) - """ - n, c, h, w = im.shape - gn, gh, gw, _ = grid.shape - assert n == gn - - x = grid[:, :, :, 0] - y = grid[:, :, :, 1] - - if align_corners: - x = ((x + 1) / 2) * (w - 1) - y = ((y + 1) / 2) * (h - 1) - else: - x = ((x + 1) * w - 1) / 2 - y = ((y + 1) * h - 1) / 2 - - x = x.view(n, -1) - y = y.view(n, -1) - - x0 = torch.floor(x).long() - y0 = torch.floor(y).long() - x1 = x0 + 1 - y1 = y0 + 1 - - wa = ((x1 - x) * (y1 - y)).unsqueeze(1) - wb = ((x1 - x) * (y - y0)).unsqueeze(1) - wc = ((x - x0) * (y1 - y)).unsqueeze(1) - wd = ((x - x0) * (y - y0)).unsqueeze(1) - - # Apply default for grid_sample function zero padding - im_padded = F.pad(im, pad=[1, 1, 1, 1], mode='constant', value=0) - padded_h = h + 2 - padded_w = w + 2 - # save points positions after padding - x0, x1, y0, y1 = x0 + 1, x1 + 1, y0 + 1, y1 + 1 - - # Clip coordinates to padded image size - x0 = torch.where(x0 < 0, torch.tensor(0), x0) - x0 = torch.where(x0 > padded_w - 1, torch.tensor(padded_w - 1), x0) - x1 = torch.where(x1 < 0, torch.tensor(0), x1) - x1 = torch.where(x1 > padded_w - 1, torch.tensor(padded_w - 1), x1) - y0 = torch.where(y0 < 0, torch.tensor(0), y0) - y0 = torch.where(y0 > padded_h - 1, torch.tensor(padded_h - 1), y0) - y1 = torch.where(y1 < 0, torch.tensor(0), y1) - y1 = torch.where(y1 > padded_h - 1, torch.tensor(padded_h - 1), y1) - - im_padded = im_padded.view(n, c, -1) - - x0_y0 = (x0 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1) - x0_y1 = (x0 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1) - x1_y0 = (x1 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1) - x1_y1 = (x1 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1) - - Ia = torch.gather(im_padded, 2, x0_y0) - Ib = torch.gather(im_padded, 2, x0_y1) - Ic = torch.gather(im_padded, 2, x1_y0) - Id = torch.gather(im_padded, 2, x1_y1) - - return (Ia * wa + Ib * wb + Ic * wc + Id * wd).reshape(n, c, gh, gw) - - -def is_in_onnx_export_without_custom_ops() -> bool: - from mmcv.ops import get_onnxruntime_op_path - ort_custom_op_path = get_onnxruntime_op_path() - return torch.onnx.is_in_onnx_export( - ) and not osp.exists(ort_custom_op_path) - - -def normalize(grid: Tensor) -> Tensor: - """Normalize input grid from [-1, 1] to [0, 1] - - Args: - grid (torch.Tensor): The grid to be normalize, range [-1, 1]. - - Returns: - torch.Tensor: Normalized grid, range [0, 1]. - """ - - return (grid + 1.0) / 2.0 - - -def denormalize(grid: Tensor) -> Tensor: - """Denormalize input grid from range [0, 1] to [-1, 1] - - Args: - grid (torch.Tensor): The grid to be denormalize, range [0, 1]. - - Returns: - torch.Tensor: Denormalized grid, range [-1, 1]. - """ - - return grid * 2.0 - 1.0 - - -def generate_grid(num_grid: int, size: Tuple[int, int], - device: torch.device) -> Tensor: - """Generate regular square grid of points in [0, 1] x [0, 1] coordinate - space. - - Args: - num_grid (int): The number of grids to sample, one for each region. - size (tuple[int, int]): The side size of the regular grid. - device (torch.device): Desired device of returned tensor. - - Returns: - torch.Tensor: A tensor of shape (num_grid, size[0]*size[1], 2) that - contains coordinates for the regular grids. 
- """ - - affine_trans = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]], device=device) - grid = F.affine_grid( - affine_trans, torch.Size((1, 1, *size)), align_corners=False) - grid = normalize(grid) - return grid.view(1, -1, 2).expand(num_grid, -1, -1) - - -def rel_roi_point_to_abs_img_point(rois: Tensor, - rel_roi_points: Tensor) -> Tensor: - """Convert roi based relative point coordinates to image based absolute - point coordinates. - - Args: - rois (torch.Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5) - rel_roi_points (torch.Tensor): Point coordinates inside RoI, relative - to RoI, location, range (0, 1), shape (N, P, 2) - Returns: - torch.Tensor: Image based absolute point coordinates, shape (N, P, 2) - """ - - with torch.no_grad(): - assert rel_roi_points.size(0) == rois.size(0) - assert rois.dim() == 2 - assert rel_roi_points.dim() == 3 - assert rel_roi_points.size(2) == 2 - # remove batch idx - if rois.size(1) == 5: - rois = rois[:, 1:] - abs_img_points = rel_roi_points.clone() - # To avoid an error during exporting to onnx use independent - # variables instead inplace computation - xs = abs_img_points[:, :, 0] * (rois[:, None, 2] - rois[:, None, 0]) - ys = abs_img_points[:, :, 1] * (rois[:, None, 3] - rois[:, None, 1]) - xs += rois[:, None, 0] - ys += rois[:, None, 1] - abs_img_points = torch.stack([xs, ys], dim=2) - return abs_img_points - - -def get_shape_from_feature_map(x: Tensor) -> Tensor: - """Get spatial resolution of input feature map considering exporting to - onnx mode. - - Args: - x (torch.Tensor): Input tensor, shape (N, C, H, W) - - Returns: - torch.Tensor: Spatial resolution (width, height), shape (1, 1, 2) - """ - if torch.onnx.is_in_onnx_export(): - img_shape = shape_as_tensor(x)[2:].flip(0).view(1, 1, 2).to( - x.device).float() - else: - img_shape = torch.tensor(x.shape[2:]).flip(0).view(1, 1, 2).to( - x.device).float() - return img_shape - - -def abs_img_point_to_rel_img_point(abs_img_points: Tensor, - img: Union[tuple, Tensor], - spatial_scale: float = 1.) -> Tensor: - """Convert image based absolute point coordinates to image based relative - coordinates for sampling. - - Args: - abs_img_points (torch.Tensor): Image based absolute point coordinates, - shape (N, P, 2) - img (tuple or torch.Tensor): (height, width) of image or feature map. - spatial_scale (float, optional): Scale points by this factor. - Default: 1. - - Returns: - Tensor: Image based relative point coordinates for sampling, shape - (N, P, 2). - """ - - assert (isinstance(img, tuple) and len(img) == 2) or \ - (isinstance(img, torch.Tensor) and len(img.shape) == 4) - - if isinstance(img, tuple): - h, w = img - scale = torch.tensor([w, h], - dtype=torch.float, - device=abs_img_points.device) - scale = scale.view(1, 1, 2) - else: - scale = get_shape_from_feature_map(img) - - return abs_img_points / scale * spatial_scale - - -def rel_roi_point_to_rel_img_point(rois: Tensor, - rel_roi_points: Tensor, - img: Union[tuple, Tensor], - spatial_scale: float = 1.) -> Tensor: - """Convert roi based relative point coordinates to image based absolute - point coordinates. - - Args: - rois (torch.Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5) - rel_roi_points (torch.Tensor): Point coordinates inside RoI, relative - to RoI, location, range (0, 1), shape (N, P, 2) - img (tuple or torch.Tensor): (height, width) of image or feature map. - spatial_scale (float, optional): Scale points by this factor. - Default: 1. - - Returns: - torch.Tensor: Image based relative point coordinates for sampling, - shape (N, P, 2). 
- """ - - abs_img_point = rel_roi_point_to_abs_img_point(rois, rel_roi_points) - rel_img_point = abs_img_point_to_rel_img_point(abs_img_point, img, - spatial_scale) - - return rel_img_point - - -def point_sample(input: Tensor, - points: Tensor, - align_corners: bool = False, - **kwargs) -> Tensor: - """A wrapper around :func:`grid_sample` to support 3D point_coords tensors - Unlike :func:`torch.nn.functional.grid_sample` it assumes point_coords to - lie inside ``[0, 1] x [0, 1]`` square. - - Args: - input (torch.Tensor): Feature map, shape (N, C, H, W). - points (torch.Tensor): Image based absolute point coordinates - (normalized), range [0, 1] x [0, 1], shape (N, P, 2) or - (N, Hgrid, Wgrid, 2). - align_corners (bool, optional): Whether align_corners. - Default: False - - Returns: - torch.Tensor: Features of `point` on `input`, shape (N, C, P) or - (N, C, Hgrid, Wgrid). - """ - - add_dim = False - if points.dim() == 3: - add_dim = True - points = points.unsqueeze(2) - if is_in_onnx_export_without_custom_ops(): - # If custom ops for onnx runtime not compiled use python - # implementation of grid_sample function to make onnx graph - # with supported nodes - output = bilinear_grid_sample( - input, denormalize(points), align_corners=align_corners) - else: - output = F.grid_sample( - input, denormalize(points), align_corners=align_corners, **kwargs) - if add_dim: - output = output.squeeze(3) - return output - - -class SimpleRoIAlign(nn.Module): - - def __init__(self, - output_size: Tuple[int], - spatial_scale: float, - aligned: bool = True) -> None: - """Simple RoI align in PointRend, faster than standard RoIAlign. - - Args: - output_size (tuple[int]): h, w - spatial_scale (float): scale the input boxes by this number - aligned (bool): if False, use the legacy implementation in - MMDetection, align_corners=True will be used in F.grid_sample. - If True, align the results more perfectly. 
- """ - - super().__init__() - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - # to be consistent with other RoI ops - self.use_torchvision = False - self.aligned = aligned - - def forward(self, features: Tensor, rois: Tensor) -> Tensor: - num_imgs = features.size(0) - num_rois = rois.size(0) - rel_roi_points = generate_grid( - num_rois, self.output_size, device=rois.device) - - if torch.onnx.is_in_onnx_export(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois, rel_roi_points, features, self.spatial_scale) - rel_img_points = rel_img_points.reshape(num_imgs, -1, - *rel_img_points.shape[1:]) - point_feats = point_sample( - features, rel_img_points, align_corners=not self.aligned) - point_feats = point_feats.transpose(1, 2) - else: - point_feats = [] - for batch_ind in range(num_imgs): - # unravel batch dim - feat = features[batch_ind].unsqueeze(0) - inds = (rois[:, 0].long() == batch_ind) - if inds.any(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois[inds], rel_roi_points[inds], feat, - self.spatial_scale).unsqueeze(0) - point_feat = point_sample( - feat, rel_img_points, align_corners=not self.aligned) - point_feat = point_feat.squeeze(0).transpose(0, 1) - point_feats.append(point_feat) - - point_feats = torch.cat(point_feats, dim=0) - - channels = features.size(1) - roi_feats = point_feats.reshape(num_rois, channels, *self.output_size) - - return roi_feats - - def __repr__(self) -> str: - format_str = self.__class__.__name__ - format_str += '(output_size={}, spatial_scale={}'.format( - self.output_size, self.spatial_scale) - return format_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/points_in_boxes.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/points_in_boxes.py deleted file mode 100644 index 4915e6b5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/points_in_boxes.py +++ /dev/null @@ -1,137 +0,0 @@ -import torch -from torch import Tensor - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'points_in_boxes_part_forward', 'points_in_boxes_cpu_forward', - 'points_in_boxes_all_forward' -]) - - -def points_in_boxes_part(points: Tensor, boxes: Tensor) -> Tensor: - """Find the box in which each point is (CUDA). - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate. - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz] in - LiDAR/DEPTH coordinate, (x, y, z) is the bottom center. - - Returns: - torch.Tensor: Return the box indices of points with the shape of - (B, M). Default background = -1. - """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - - box_idxs_of_pts = points.new_zeros((batch_size, num_points), - dtype=torch.int).fill_(-1) - - # If manually put the tensor 'points' or 'boxes' on a device - # which is not the current device, some temporary variables - # will be created on the current device in the cuda op, - # and the output will be incorrect. - # Therefore, we force the current device to be the same - # as the device of the tensors if it was not. 
- # Please refer to https://github.com/open-mmlab/mmdetection3d/issues/305 - # for the incorrect output before the fix. - points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_part_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts - - -def points_in_boxes_cpu(points: Tensor, boxes: Tensor) -> Tensor: - """Find all boxes in which each point is (CPU). The CPU version of - :meth:`points_in_boxes_all`. - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in - LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - torch.Tensor: Return the box indices of points with the shape of - (B, M, T). Default background = 0. - """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - num_boxes = boxes.shape[1] - - point_indices = points.new_zeros((batch_size, num_boxes, num_points), - dtype=torch.int) - for b in range(batch_size): - ext_module.points_in_boxes_cpu_forward(boxes[b].float().contiguous(), - points[b].float().contiguous(), - point_indices[b]) - point_indices = point_indices.transpose(1, 2) - - return point_indices - - -def points_in_boxes_all(points: Tensor, boxes: Tensor) -> Tensor: - """Find all boxes in which each point is (CUDA). - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - torch.Tensor: Return the box indices of points with the shape of - (B, M, T). Default background = 0. 
- """ - assert boxes.shape[0] == points.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {boxes.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - num_boxes = boxes.shape[1] - - box_idxs_of_pts = points.new_zeros((batch_size, num_points, num_boxes), - dtype=torch.int).fill_(0) - - # Same reason as line 25-32 - points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_all_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/points_in_polygons.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/points_in_polygons.py deleted file mode 100644 index cf16d408..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/points_in_polygons.py +++ /dev/null @@ -1,40 +0,0 @@ -import torch -from torch import Tensor - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['points_in_polygons_forward']) - - -def points_in_polygons(points: Tensor, polygons: Tensor) -> Tensor: - """Judging whether points are inside polygons, which is used in the ATSS - assignment for the rotated boxes. - - It should be noted that when the point is just at the polygon boundary, the - judgment will be inaccurate, but the effect on assignment is limited. - - Args: - points (torch.Tensor): It has shape (B, 2), indicating (x, y). - M means the number of predicted points. - polygons (torch.Tensor): It has shape (M, 8), indicating - (x1, y1, x2, y2, x3, y3, x4, y4). M means the number of - ground truth polygons. - - Returns: - torch.Tensor: Return the result with the shape of (B, M), - 1 indicates that the point is inside the polygon, - 0 indicates that the point is outside the polygon. - """ - assert points.shape[1] == 2, \ - 'points dimension should be 2, ' \ - f'but got unexpected shape {points.shape[1]}' - assert polygons.shape[1] == 8, \ - 'polygons dimension should be 8, ' \ - f'but got unexpected shape {polygons.shape[1]}' - # output = torch.full([points.shape[0], polygons.shape[0]], - # 0.).cuda().float() - output = torch.full([points.shape[0], polygons.shape[0]], - 0.).float().cuda() - ext_module.points_in_polygons_forward(points.contiguous(), - polygons.contiguous(), output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/points_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/points_sampler.py deleted file mode 100644 index f9d1c29b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/points_sampler.py +++ /dev/null @@ -1,181 +0,0 @@ -from typing import List - -import torch -from torch import Tensor -from torch import nn as nn - -from mmcv.runner import force_fp32 -from .furthest_point_sample import (furthest_point_sample, - furthest_point_sample_with_dist) - - -def calc_square_dist(point_feat_a: Tensor, - point_feat_b: Tensor, - norm: bool = True) -> Tensor: - """Calculating square distance between a and b. - - Args: - point_feat_a (torch.Tensor): (B, N, C) Feature vector of each point. 
- point_feat_b (torch.Tensor): (B, M, C) Feature vector of each point. - norm (bool, optional): Whether to normalize the distance. - Default: True. - - Returns: - torch.Tensor: (B, N, M) Square distance between each point pair. - """ - num_channel = point_feat_a.shape[-1] - # [bs, n, 1] - a_square = torch.sum(point_feat_a.unsqueeze(dim=2).pow(2), dim=-1) - # [bs, 1, m] - b_square = torch.sum(point_feat_b.unsqueeze(dim=1).pow(2), dim=-1) - - corr_matrix = torch.matmul(point_feat_a, point_feat_b.transpose(1, 2)) - - dist = a_square + b_square - 2 * corr_matrix - if norm: - dist = torch.sqrt(dist) / num_channel - return dist - - -def get_sampler_cls(sampler_type: str) -> nn.Module: - """Get the type and mode of points sampler. - - Args: - sampler_type (str): The type of points sampler. - The valid value are "D-FPS", "F-FPS", or "FS". - - Returns: - class: Points sampler type. - """ - sampler_mappings = { - 'D-FPS': DFPSSampler, - 'F-FPS': FFPSSampler, - 'FS': FSSampler, - } - try: - return sampler_mappings[sampler_type] - except KeyError: - raise KeyError( - f'Supported `sampler_type` are {sampler_mappings.keys()}, but got \ - {sampler_type}') - - -class PointsSampler(nn.Module): - """Points sampling. - - Args: - num_point (list[int]): Number of sample points. - fps_mod_list (list[str], optional): Type of FPS method, valid mod - ['F-FPS', 'D-FPS', 'FS'], Default: ['D-FPS']. - F-FPS: using feature distances for FPS. - D-FPS: using Euclidean distances of points for FPS. - FS: using F-FPS and D-FPS simultaneously. - fps_sample_range_list (list[int], optional): - Range of points to apply FPS. Default: [-1]. - """ - - def __init__(self, - num_point: List[int], - fps_mod_list: List[str] = ['D-FPS'], - fps_sample_range_list: List[int] = [-1]) -> None: - super().__init__() - # FPS would be applied to different fps_mod in the list, - # so the length of the num_point should be equal to - # fps_mod_list and fps_sample_range_list. - assert len(num_point) == len(fps_mod_list) == len( - fps_sample_range_list) - self.num_point = num_point - self.fps_sample_range_list = fps_sample_range_list - self.samplers = nn.ModuleList() - for fps_mod in fps_mod_list: - self.samplers.append(get_sampler_cls(fps_mod)()) - self.fp16_enabled = False - - @force_fp32() - def forward(self, points_xyz: Tensor, features: Tensor) -> Tensor: - """ - Args: - points_xyz (torch.Tensor): (B, N, 3) xyz coordinates of - the points. - features (torch.Tensor): (B, C, N) features of the points. - - Returns: - torch.Tensor: (B, npoint, sample_num) Indices of sampled points. 
- """ - indices = [] - last_fps_end_index = 0 - - for fps_sample_range, sampler, npoint in zip( - self.fps_sample_range_list, self.samplers, self.num_point): - assert fps_sample_range < points_xyz.shape[1] - - if fps_sample_range == -1: - sample_points_xyz = points_xyz[:, last_fps_end_index:] - if features is not None: - sample_features = features[:, :, last_fps_end_index:] - else: - sample_features = None - else: - sample_points_xyz = \ - points_xyz[:, last_fps_end_index:fps_sample_range] - if features is not None: - sample_features = features[:, :, last_fps_end_index: - fps_sample_range] - else: - sample_features = None - - fps_idx = sampler(sample_points_xyz.contiguous(), sample_features, - npoint) - - indices.append(fps_idx + last_fps_end_index) - last_fps_end_index += fps_sample_range - indices = torch.cat(indices, dim=1) - - return indices - - -class DFPSSampler(nn.Module): - """Using Euclidean distances of points for FPS.""" - - def __init__(self) -> None: - super().__init__() - - def forward(self, points: Tensor, features: Tensor, npoint: int) -> Tensor: - """Sampling points with D-FPS.""" - fps_idx = furthest_point_sample(points.contiguous(), npoint) - return fps_idx - - -class FFPSSampler(nn.Module): - """Using feature distances for FPS.""" - - def __init__(self) -> None: - super().__init__() - - def forward(self, points: Tensor, features: Tensor, npoint: int) -> Tensor: - """Sampling points with F-FPS.""" - assert features is not None, \ - 'feature input to FFPS_Sampler should not be None' - features_for_fps = torch.cat([points, features.transpose(1, 2)], dim=2) - features_dist = calc_square_dist( - features_for_fps, features_for_fps, norm=False) - fps_idx = furthest_point_sample_with_dist(features_dist, npoint) - return fps_idx - - -class FSSampler(nn.Module): - """Using F-FPS and D-FPS simultaneously.""" - - def __init__(self) -> None: - super().__init__() - - def forward(self, points: Tensor, features: Tensor, npoint: int) -> Tensor: - """Sampling points with FS_Sampling.""" - assert features is not None, \ - 'feature input to FS_Sampler should not be None' - ffps_sampler = FFPSSampler() - dfps_sampler = DFPSSampler() - fps_idx_ffps = ffps_sampler(points, features, npoint) - fps_idx_dfps = dfps_sampler(points, features, npoint) - fps_idx = torch.cat([fps_idx_ffps, fps_idx_dfps], dim=1) - return fps_idx diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/psa_mask.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/psa_mask.py deleted file mode 100644 index 0786e2d2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/psa_mask.py +++ /dev/null @@ -1,92 +0,0 @@ -# Modified from https://github.com/hszhao/semseg/blob/master/lib/psa -from torch import nn -from torch.autograd import Function -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', - ['psamask_forward', 'psamask_backward']) - - -class PSAMaskFunction(Function): - - @staticmethod - def symbolic(g, input, psa_type, mask_size): - return g.op( - 'mmcv::MMCVPSAMask', - input, - psa_type_i=psa_type, - mask_size_i=mask_size) - - @staticmethod - def forward(ctx, input, psa_type, mask_size): - ctx.psa_type = psa_type - ctx.mask_size = _pair(mask_size) - ctx.save_for_backward(input) - - h_mask, w_mask = ctx.mask_size - batch_size, channels, h_feature, w_feature = input.size() - assert channels == h_mask * w_mask - output = input.new_zeros( - (batch_size, h_feature * w_feature, h_feature, w_feature)) - - 
ext_module.psamask_forward( - input, - output, - psa_type=psa_type, - num_=batch_size, - h_feature=h_feature, - w_feature=w_feature, - h_mask=h_mask, - w_mask=w_mask, - half_h_mask=(h_mask - 1) // 2, - half_w_mask=(w_mask - 1) // 2) - return output - - @staticmethod - def backward(ctx, grad_output): - input = ctx.saved_tensors[0] - psa_type = ctx.psa_type - h_mask, w_mask = ctx.mask_size - batch_size, channels, h_feature, w_feature = input.size() - grad_input = grad_output.new_zeros( - (batch_size, channels, h_feature, w_feature)) - ext_module.psamask_backward( - grad_output, - grad_input, - psa_type=psa_type, - num_=batch_size, - h_feature=h_feature, - w_feature=w_feature, - h_mask=h_mask, - w_mask=w_mask, - half_h_mask=(h_mask - 1) // 2, - half_w_mask=(w_mask - 1) // 2) - return grad_input, None, None, None - - -psa_mask = PSAMaskFunction.apply - - -class PSAMask(nn.Module): - - def __init__(self, psa_type, mask_size=None): - super().__init__() - assert psa_type in ['collect', 'distribute'] - if psa_type == 'collect': - psa_type_enum = 0 - else: - psa_type_enum = 1 - self.psa_type_enum = psa_type_enum - self.mask_size = mask_size - self.psa_type = psa_type - - def forward(self, input): - return psa_mask(input, self.psa_type_enum, self.mask_size) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(psa_type={self.psa_type}, ' - s += f'mask_size={self.mask_size})' - return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/riroi_align_rotated.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/riroi_align_rotated.py deleted file mode 100644 index af1e6098..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/riroi_align_rotated.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch.nn as nn -from torch.autograd import Function - -from ..utils import ext_loader, is_tuple_of - -ext_module = ext_loader.load_ext( - '_ext', ['riroi_align_rotated_forward', 'riroi_align_rotated_backward']) - - -class RiRoIAlignRotatedFunction(Function): - - @staticmethod - def forward(ctx, - features, - rois, - out_size, - spatial_scale, - num_samples=0, - num_orientations=8, - clockwise=False): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif is_tuple_of(out_size, int): - assert len(out_size) == 2 - out_h, out_w = out_size - else: - raise TypeError( - f'"out_size" should be an integer or tuple of integers,' - f' but got {out_size}') - ctx.spatial_scale = spatial_scale - ctx.num_samples = num_samples - ctx.num_orientations = num_orientations - ctx.clockwise = clockwise - ctx.save_for_backward(rois) - ctx.feature_size = features.size() - - batch_size, num_channels, _, _ = features.size() - num_rois = rois.size(0) - - output = features.new_zeros(num_rois, num_channels, out_h, out_w) - - ext_module.riroi_align_rotated_forward( - features, - rois, - output, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - num_samples=num_samples, - num_orientations=num_orientations, - clockwise=clockwise) - return output - - @staticmethod - def backward(ctx, grad_output): - feature_size = ctx.feature_size - spatial_scale = ctx.spatial_scale - num_orientations = ctx.num_orientations - clockwise = ctx.clockwise - num_samples = ctx.num_samples - rois = ctx.saved_tensors[0] - assert feature_size is not None - batch_size, num_channels, feature_h, feature_w = feature_size - - out_w = grad_output.size(3) - out_h = grad_output.size(2) - - grad_input = grad_rois = None - - if ctx.needs_input_grad[0]: - grad_input = rois.new_zeros(batch_size, num_channels, feature_h, - feature_w) - ext_module.riroi_align_rotated_backward( - grad_output.contiguous(), - rois, - grad_input, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - num_samples=num_samples, - num_orientations=num_orientations, - clockwise=clockwise) - - return grad_input, grad_rois, None, None, None, None, None - - -riroi_align_rotated = RiRoIAlignRotatedFunction.apply - - -class RiRoIAlignRotated(nn.Module): - """Rotation-invariant RoI align pooling layer for rotated proposals. - - It accepts a feature map of shape (N, C, H, W) and rois with shape - (n, 6) with each roi decoded as (batch_index, center_x, center_y, - w, h, angle). The angle is in radian. - - The details are described in the paper `ReDet: A Rotation-equivariant - Detector for Aerial Object Detection `_. - - Args: - out_size (tuple): fixed dimensional RoI output with shape (h, w). - spatial_scale (float): scale the input boxes by this number - num_samples (int): number of inputs samples to take for each - output sample. 0 to take samples densely for current models. - num_orientations (int): number of oriented channels. - clockwise (bool): If True, the angle in each proposal follows a - clockwise fashion in image space, otherwise, the angle is - counterclockwise. Default: False. 
- """ - - def __init__(self, - out_size, - spatial_scale, - num_samples=0, - num_orientations=8, - clockwise=False): - super().__init__() - - self.out_size = out_size - self.spatial_scale = float(spatial_scale) - self.num_samples = int(num_samples) - self.num_orientations = int(num_orientations) - self.clockwise = clockwise - - def forward(self, features, rois): - return RiRoIAlignRotatedFunction.apply(features, rois, self.out_size, - self.spatial_scale, - self.num_samples, - self.num_orientations, - self.clockwise) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roi_align.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roi_align.py deleted file mode 100644 index 7e387f94..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roi_align.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import deprecated_api_warning, ext_loader - -ext_module = ext_loader.load_ext('_ext', - ['roi_align_forward', 'roi_align_backward']) - - -class RoIAlignFunction(Function): - - @staticmethod - def symbolic(g, input, rois, output_size, spatial_scale, sampling_ratio, - pool_mode, aligned): - from ..onnx import is_custom_op_loaded - has_custom_op = is_custom_op_loaded() - if has_custom_op: - return g.op( - 'mmcv::MMCVRoiAlign', - input, - rois, - output_height_i=output_size[0], - output_width_i=output_size[1], - spatial_scale_f=spatial_scale, - sampling_ratio_i=sampling_ratio, - mode_s=pool_mode, - aligned_i=aligned) - else: - from torch.onnx import TensorProtoDataType - from torch.onnx.symbolic_helper import _slice_helper - from torch.onnx.symbolic_opset9 import squeeze, sub - - # batch_indices = rois[:, 0].long() - batch_indices = _slice_helper( - g, rois, axes=[1], starts=[0], ends=[1]) - batch_indices = squeeze(g, batch_indices, 1) - batch_indices = g.op( - 'Cast', batch_indices, to_i=TensorProtoDataType.INT64) - # rois = rois[:, 1:] - rois = _slice_helper(g, rois, axes=[1], starts=[1], ends=[5]) - if aligned: - # rois -= 0.5/spatial_scale - aligned_offset = g.op( - 'Constant', - value_t=torch.tensor([0.5 / spatial_scale], - dtype=torch.float32)) - rois = sub(g, rois, aligned_offset) - # roi align - return g.op( - 'RoiAlign', - input, - rois, - batch_indices, - output_height_i=output_size[0], - output_width_i=output_size[1], - spatial_scale_f=spatial_scale, - sampling_ratio_i=max(0, sampling_ratio), - mode_s=pool_mode) - - @staticmethod - def forward(ctx, - input, - rois, - output_size, - spatial_scale=1.0, - sampling_ratio=0, - pool_mode='avg', - aligned=True): - ctx.output_size = _pair(output_size) - ctx.spatial_scale = spatial_scale - ctx.sampling_ratio = sampling_ratio - assert pool_mode in ('max', 'avg') - ctx.pool_mode = 0 if pool_mode == 'max' else 1 - ctx.aligned = aligned - ctx.input_shape = input.size() - - assert rois.size(1) == 5, 'RoI must be (idx, x1, y1, x2, y2)!' 
- - output_shape = (rois.size(0), input.size(1), ctx.output_size[0], - ctx.output_size[1]) - output = input.new_zeros(output_shape) - if ctx.pool_mode == 0: - argmax_y = input.new_zeros(output_shape) - argmax_x = input.new_zeros(output_shape) - else: - argmax_y = input.new_zeros(0) - argmax_x = input.new_zeros(0) - - ext_module.roi_align_forward( - input, - rois, - output, - argmax_y, - argmax_x, - aligned_height=ctx.output_size[0], - aligned_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale, - sampling_ratio=ctx.sampling_ratio, - pool_mode=ctx.pool_mode, - aligned=ctx.aligned) - - ctx.save_for_backward(rois, argmax_y, argmax_x) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - rois, argmax_y, argmax_x = ctx.saved_tensors - grad_input = grad_output.new_zeros(ctx.input_shape) - # complex head architecture may cause grad_output uncontiguous. - grad_output = grad_output.contiguous() - ext_module.roi_align_backward( - grad_output, - rois, - argmax_y, - argmax_x, - grad_input, - aligned_height=ctx.output_size[0], - aligned_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale, - sampling_ratio=ctx.sampling_ratio, - pool_mode=ctx.pool_mode, - aligned=ctx.aligned) - return grad_input, None, None, None, None, None, None - - -roi_align = RoIAlignFunction.apply - - -class RoIAlign(nn.Module): - """RoI align pooling layer. - - Args: - output_size (tuple): h, w - spatial_scale (float): scale the input boxes by this number - sampling_ratio (int): number of inputs samples to take for each - output sample. 0 to take samples densely for current models. - pool_mode (str, 'avg' or 'max'): pooling mode in each bin. - aligned (bool): if False, use the legacy implementation in - MMDetection. If True, align the results more perfectly. - use_torchvision (bool): whether to use roi_align from torchvision. - - Note: - The implementation of RoIAlign when aligned=True is modified from - https://github.com/facebookresearch/detectron2/ - - The meaning of aligned=True: - - Given a continuous coordinate c, its two neighboring pixel - indices (in our pixel model) are computed by floor(c - 0.5) and - ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete - indices [0] and [1] (which are sampled from the underlying signal - at continuous coordinates 0.5 and 1.5). But the original roi_align - (aligned=False) does not subtract the 0.5 when computing - neighboring pixel indices and therefore it uses pixels with a - slightly incorrect alignment (relative to our pixel model) when - performing bilinear interpolation. - - With `aligned=True`, - we first appropriately scale the ROI and then shift it by -0.5 - prior to calling roi_align. This produces the correct neighbors; - - The difference does not make a difference to the model's - performance if ROIAlign is used together with conv layers. - """ - - @deprecated_api_warning( - { - 'out_size': 'output_size', - 'sample_num': 'sampling_ratio' - }, - cls_name='RoIAlign') - def __init__(self, - output_size, - spatial_scale=1.0, - sampling_ratio=0, - pool_mode='avg', - aligned=True, - use_torchvision=False): - super().__init__() - - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - self.sampling_ratio = int(sampling_ratio) - self.pool_mode = pool_mode - self.aligned = aligned - self.use_torchvision = use_torchvision - - def forward(self, input, rois): - """ - Args: - input: NCHW images - rois: Bx5 boxes. First column is the index into N.\ - The other 4 columns are xyxy. 
- """ - if self.use_torchvision: - from torchvision.ops import roi_align as tv_roi_align - if 'aligned' in tv_roi_align.__code__.co_varnames: - return tv_roi_align(input, rois, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.aligned) - else: - if self.aligned: - rois -= rois.new_tensor([0.] + - [0.5 / self.spatial_scale] * 4) - return tv_roi_align(input, rois, self.output_size, - self.spatial_scale, self.sampling_ratio) - else: - return roi_align(input, rois, self.output_size, self.spatial_scale, - self.sampling_ratio, self.pool_mode, self.aligned) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(output_size={self.output_size}, ' - s += f'spatial_scale={self.spatial_scale}, ' - s += f'sampling_ratio={self.sampling_ratio}, ' - s += f'pool_mode={self.pool_mode}, ' - s += f'aligned={self.aligned}, ' - s += f'use_torchvision={self.use_torchvision})' - return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roi_align_rotated.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roi_align_rotated.py deleted file mode 100644 index 8551b172..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roi_align_rotated.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from torch.autograd import Function -from torch.nn.modules.utils import _pair - -from ..utils import deprecated_api_warning, ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward']) - - -class RoIAlignRotatedFunction(Function): - - @staticmethod - def symbolic(g, input, rois, output_size, spatial_scale, sampling_ratio, - aligned, clockwise): - if isinstance(output_size, int): - out_h = output_size - out_w = output_size - elif isinstance(output_size, tuple): - assert len(output_size) == 2 - assert isinstance(output_size[0], int) - assert isinstance(output_size[1], int) - out_h, out_w = output_size - else: - raise TypeError( - '"output_size" must be an integer or tuple of integers') - return g.op( - 'mmcv::MMCVRoIAlignRotated', - input, - rois, - output_height_i=out_h, - output_width_i=out_h, - spatial_scale_f=spatial_scale, - sampling_ratio_i=sampling_ratio, - aligned_i=aligned, - clockwise_i=clockwise) - - @staticmethod - def forward(ctx, - input, - rois, - output_size, - spatial_scale, - sampling_ratio=0, - aligned=True, - clockwise=False): - ctx.output_size = _pair(output_size) - ctx.spatial_scale = spatial_scale - ctx.sampling_ratio = sampling_ratio - ctx.aligned = aligned - ctx.clockwise = clockwise - ctx.save_for_backward(rois) - ctx.feature_size = input.size() - - batch_size, num_channels, data_height, data_width = input.size() - num_rois = rois.size(0) - - output = input.new_zeros(num_rois, num_channels, ctx.output_size[0], - ctx.output_size[1]) - ext_module.roi_align_rotated_forward( - input, - rois, - output, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale, - sampling_ratio=ctx.sampling_ratio, - aligned=ctx.aligned, - clockwise=ctx.clockwise) - return output - - @staticmethod - def backward(ctx, grad_output): - feature_size = ctx.feature_size - rois = ctx.saved_tensors[0] - assert feature_size is not None - batch_size, num_channels, data_height, data_width = feature_size - - out_w = grad_output.size(3) - out_h = grad_output.size(2) - - grad_input = grad_rois = None - - if ctx.needs_input_grad[0]: - grad_input = rois.new_zeros(batch_size, num_channels, data_height, - 
data_width) - ext_module.roi_align_rotated_backward( - grad_output.contiguous(), - rois, - grad_input, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=ctx.spatial_scale, - sampling_ratio=ctx.sampling_ratio, - aligned=ctx.aligned, - clockwise=ctx.clockwise) - return grad_input, grad_rois, None, None, None, None, None - - -roi_align_rotated = RoIAlignRotatedFunction.apply - - -class RoIAlignRotated(nn.Module): - """RoI align pooling layer for rotated proposals. - - It accepts a feature map of shape (N, C, H, W) and rois with shape - (n, 6) with each roi decoded as (batch_index, center_x, center_y, - w, h, angle). The angle is in radian. - - Args: - output_size (tuple): h, w - spatial_scale (float): scale the input boxes by this number - sampling_ratio(int): number of inputs samples to take for each - output sample. 0 to take samples densely for current models. - aligned (bool): if False, use the legacy implementation in - MMDetection. If True, align the results more perfectly. - Default: True. - clockwise (bool): If True, the angle in each proposal follows a - clockwise fashion in image space, otherwise, the angle is - counterclockwise. Default: False. - - Note: - The implementation of RoIAlign when aligned=True is modified from - https://github.com/facebookresearch/detectron2/ - - The meaning of aligned=True: - - Given a continuous coordinate c, its two neighboring pixel - indices (in our pixel model) are computed by floor(c - 0.5) and - ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete - indices [0] and [1] (which are sampled from the underlying signal - at continuous coordinates 0.5 and 1.5). But the original roi_align - (aligned=False) does not subtract the 0.5 when computing - neighboring pixel indices and therefore it uses pixels with a - slightly incorrect alignment (relative to our pixel model) when - performing bilinear interpolation. - - With `aligned=True`, - we first appropriately scale the ROI and then shift it by -0.5 - prior to calling roi_align. This produces the correct neighbors; - - The difference does not make a difference to the model's - performance if ROIAlign is used together with conv layers. - """ - - @deprecated_api_warning( - { - 'out_size': 'output_size', - 'sample_num': 'sampling_ratio' - }, - cls_name='RoIAlignRotated') - def __init__(self, - output_size, - spatial_scale, - sampling_ratio=0, - aligned=True, - clockwise=False): - super().__init__() - - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - self.sampling_ratio = int(sampling_ratio) - self.aligned = aligned - self.clockwise = clockwise - - def forward(self, input, rois): - return RoIAlignRotatedFunction.apply(input, rois, self.output_size, - self.spatial_scale, - self.sampling_ratio, self.aligned, - self.clockwise) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(output_size={self.output_size}, ' - s += f'spatial_scale={self.spatial_scale}, ' - s += f'sampling_ratio={self.sampling_ratio}, ' - s += f'aligned={self.aligned}, ' - s += f'clockwise={self.clockwise})' - return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roi_pool.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roi_pool.py deleted file mode 100644 index f63faabb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roi_pool.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', - ['roi_pool_forward', 'roi_pool_backward']) - - -class RoIPoolFunction(Function): - - @staticmethod - def symbolic(g, input, rois, output_size, spatial_scale): - return g.op( - 'MaxRoiPool', - input, - rois, - pooled_shape_i=output_size, - spatial_scale_f=spatial_scale) - - @staticmethod - def forward(ctx, input, rois, output_size, spatial_scale=1.0): - ctx.output_size = _pair(output_size) - ctx.spatial_scale = spatial_scale - ctx.input_shape = input.size() - - assert rois.size(1) == 5, 'RoI must be (idx, x1, y1, x2, y2)!' - - output_shape = (rois.size(0), input.size(1), ctx.output_size[0], - ctx.output_size[1]) - output = input.new_zeros(output_shape) - argmax = input.new_zeros(output_shape, dtype=torch.int) - - ext_module.roi_pool_forward( - input, - rois, - output, - argmax, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale) - - ctx.save_for_backward(rois, argmax) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - rois, argmax = ctx.saved_tensors - grad_input = grad_output.new_zeros(ctx.input_shape) - - ext_module.roi_pool_backward( - grad_output, - rois, - argmax, - grad_input, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale) - - return grad_input, None, None, None - - -roi_pool = RoIPoolFunction.apply - - -class RoIPool(nn.Module): - - def __init__(self, output_size, spatial_scale=1.0): - super().__init__() - - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - - def forward(self, input, rois): - return roi_pool(input, rois, self.output_size, self.spatial_scale) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(output_size={self.output_size}, ' - s += f'spatial_scale={self.spatial_scale})' - return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roiaware_pool3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roiaware_pool3d.py deleted file mode 100644 index 2096b436..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roiaware_pool3d.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn -from torch.autograd import Function - -import mmcv -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['roiaware_pool3d_forward', 'roiaware_pool3d_backward']) - - -class RoIAwarePool3d(nn.Module): - """Encode the geometry-specific features of each 3D proposal. - - Please refer to `PartA2 `_ for more - details. - - Args: - out_size (int or tuple): The size of output features. n or - [n1, n2, n3]. - max_pts_per_voxel (int, optional): The maximum number of points per - voxel. Default: 128. - mode (str, optional): Pooling method of RoIAware, 'max' or 'avg'. - Default: 'max'. - """ - - def __init__(self, out_size, max_pts_per_voxel=128, mode='max'): - super().__init__() - - self.out_size = out_size - self.max_pts_per_voxel = max_pts_per_voxel - assert mode in ['max', 'avg'] - pool_mapping = {'max': 0, 'avg': 1} - self.mode = pool_mapping[mode] - - def forward(self, rois, pts, pts_feature): - """ - Args: - rois (torch.Tensor): [N, 7], in LiDAR coordinate, - (x, y, z) is the bottom center of rois. 
- pts (torch.Tensor): [npoints, 3], coordinates of input points. - pts_feature (torch.Tensor): [npoints, C], features of input points. - - Returns: - torch.Tensor: Pooled features whose shape is - [N, out_x, out_y, out_z, C]. - """ - - return RoIAwarePool3dFunction.apply(rois, pts, pts_feature, - self.out_size, - self.max_pts_per_voxel, self.mode) - - -class RoIAwarePool3dFunction(Function): - - @staticmethod - def forward(ctx, rois, pts, pts_feature, out_size, max_pts_per_voxel, - mode): - """ - Args: - rois (torch.Tensor): [N, 7], in LiDAR coordinate, - (x, y, z) is the bottom center of rois. - pts (torch.Tensor): [npoints, 3], coordinates of input points. - pts_feature (torch.Tensor): [npoints, C], features of input points. - out_size (int or tuple): The size of output features. n or - [n1, n2, n3]. - max_pts_per_voxel (int): The maximum number of points per voxel. - Default: 128. - mode (int): Pooling method of RoIAware, 0 (max pool) or 1 (average - pool). - - Returns: - torch.Tensor: Pooled features whose shape is - [N, out_x, out_y, out_z, C]. - """ - - if isinstance(out_size, int): - out_x = out_y = out_z = out_size - else: - assert len(out_size) == 3 - assert mmcv.is_tuple_of(out_size, int) - out_x, out_y, out_z = out_size - - num_rois = rois.shape[0] - num_channels = pts_feature.shape[-1] - num_pts = pts.shape[0] - - pooled_features = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, num_channels)) - argmax = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, num_channels), dtype=torch.int) - pts_idx_of_voxels = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, max_pts_per_voxel), - dtype=torch.int) - - ext_module.roiaware_pool3d_forward( - rois, - pts, - pts_feature, - argmax, - pts_idx_of_voxels, - pooled_features, - pool_method=mode) - - ctx.roiaware_pool3d_for_backward = (pts_idx_of_voxels, argmax, mode, - num_pts, num_channels) - return pooled_features - - @staticmethod - def backward(ctx, grad_out): - ret = ctx.roiaware_pool3d_for_backward - pts_idx_of_voxels, argmax, mode, num_pts, num_channels = ret - - grad_in = grad_out.new_zeros((num_pts, num_channels)) - ext_module.roiaware_pool3d_backward( - pts_idx_of_voxels, - argmax, - grad_out.contiguous(), - grad_in, - pool_method=mode) - - return None, None, grad_in, None, None, None diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roipoint_pool3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roipoint_pool3d.py deleted file mode 100644 index 2a9315bd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/roipoint_pool3d.py +++ /dev/null @@ -1,77 +0,0 @@ -from torch import nn as nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['roipoint_pool3d_forward']) - - -class RoIPointPool3d(nn.Module): - """Encode the geometry-specific features of each 3D proposal. - - Please refer to `Paper of PartA2 `_ - for more details. - - Args: - num_sampled_points (int, optional): Number of samples in each roi. - Default: 512. - """ - - def __init__(self, num_sampled_points=512): - super().__init__() - self.num_sampled_points = num_sampled_points - - def forward(self, points, point_features, boxes3d): - """ - Args: - points (torch.Tensor): Input points whose shape is (B, N, C). - point_features (torch.Tensor): Features of input points whose shape - is (B, N, C). - boxes3d (B, M, 7), Input bounding boxes whose shape is (B, M, 7). - - Returns: - tuple[torch.Tensor]: A tuple contains two elements. 
The first one - is the pooled features whose shape is (B, M, 512, 3 + C). The - second is an empty flag whose shape is (B, M). - """ - return RoIPointPool3dFunction.apply(points, point_features, boxes3d, - self.num_sampled_points) - - -class RoIPointPool3dFunction(Function): - - @staticmethod - def forward(ctx, points, point_features, boxes3d, num_sampled_points=512): - """ - Args: - points (torch.Tensor): Input points whose shape is (B, N, C). - point_features (torch.Tensor): Features of input points whose shape - is (B, N, C). - boxes3d (B, M, 7), Input bounding boxes whose shape is (B, M, 7). - num_sampled_points (int, optional): The num of sampled points. - Default: 512. - - Returns: - tuple[torch.Tensor]: A tuple contains two elements. The first one - is the pooled features whose shape is (B, M, 512, 3 + C). The - second is an empty flag whose shape is (B, M). - """ - assert len(points.shape) == 3 and points.shape[2] == 3 - batch_size, boxes_num, feature_len = points.shape[0], boxes3d.shape[ - 1], point_features.shape[2] - pooled_boxes3d = boxes3d.view(batch_size, -1, 7) - pooled_features = point_features.new_zeros( - (batch_size, boxes_num, num_sampled_points, 3 + feature_len)) - pooled_empty_flag = point_features.new_zeros( - (batch_size, boxes_num)).int() - - ext_module.roipoint_pool3d_forward(points.contiguous(), - pooled_boxes3d.contiguous(), - point_features.contiguous(), - pooled_features, pooled_empty_flag) - - return pooled_features, pooled_empty_flag - - @staticmethod - def backward(ctx, grad_out): - raise NotImplementedError diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/rotated_feature_align.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/rotated_feature_align.py deleted file mode 100644 index 95353b70..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/rotated_feature_align.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', - ['rotated_feature_align_forward', 'rotated_feature_align_backward']) - - -class RotatedFeatureAlignFunction(Function): - """Using the feature interpolation to obtain the position information - correspond to the refined rotate anchors and reconstruct the feature maps - in pixel-wise manner to achieve feature alignment. - - The details are described in the paper - `R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating - Object `_. - """ - - @staticmethod - def symbolic(g, features, best_rbboxes, spatial_scale, points): - assert points in [1, 5] - return g.op( - 'mmcv::MMCVRotatedFeatureAlign', - features, - best_rbboxes, - spatial_scale_f=spatial_scale, - points_i=points) - - @staticmethod - def forward(ctx, features, best_rbboxes, spatial_scale, points): - """ - Args: - features (torch.Tensor): Input features with shape [N,C,H,W]. - best_rbboxes (torch.Tensor): Refined rotate anchors with - shape [N,H,W,5]. Coordinate format (cx,cx,h,w,a). - spatial_scale (float): The scale of feature map size and - input image size. - points (int, optional): The number of sample points. - Only 1 and 5 are supported. Defaults to 1. - - Returns: - torch.Tensor: Refined features with shape [N,C,H,W]. 
- """ - ctx.spatial_scale = spatial_scale - ctx.points = points - ctx.save_for_backward(best_rbboxes) - assert points in [1, 5] - output = torch.zeros_like(features) - ext_module.rotated_feature_align_forward( - features, - best_rbboxes, - output, - spatial_scale=spatial_scale, - points=points) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - """ - Args: - grad_output (torch.Tensor): The gradiant of output features - with shape [N,C,H,W]. - - Returns: - torch.Tensor: The gradiant of input features with shape [N,C,H,W]. - """ - best_rbboxes = ctx.saved_tensors[0] - points = ctx.points - spatial_scale = ctx.spatial_scale - grad_input = None - if ctx.needs_input_grad[0]: - grad_input = torch.zeros_like(grad_output) - ext_module.rotated_feature_align_backward( - grad_output.contiguous(), - best_rbboxes, - grad_input, - spatial_scale=spatial_scale, - points=points) - return grad_input, None, None, None - - -def rotated_feature_align(features, - best_rbboxes, - spatial_scale=1 / 8, - points=1): - return RotatedFeatureAlignFunction.apply(features, best_rbboxes, - spatial_scale, points) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/saconv.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/saconv.py deleted file mode 100644 index 817ef949..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/saconv.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from mmcv.cnn import CONV_LAYERS, ConvAWS2d, constant_init -from mmcv.ops.deform_conv import deform_conv2d -from mmcv.utils import TORCH_VERSION, digit_version - - -@CONV_LAYERS.register_module(name='SAC') -class SAConv2d(ConvAWS2d): - """SAC (Switchable Atrous Convolution) - - This is an implementation of `DetectoRS: Detecting Objects with Recursive - Feature Pyramid and Switchable Atrous Convolution - `_. - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the convolving kernel - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. Default: 0 - padding_mode (string, optional): ``'zeros'``, ``'reflect'``, - ``'replicate'`` or ``'circular'``. Default: ``'zeros'`` - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 1 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If ``True``, adds a learnable bias to the - output. Default: ``True`` - use_deform: If ``True``, replace convolution with deformable - convolution. Default: ``False``. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True, - use_deform=False): - super().__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.use_deform = use_deform - self.switch = nn.Conv2d( - self.in_channels, 1, kernel_size=1, stride=stride, bias=True) - self.weight_diff = nn.Parameter(torch.Tensor(self.weight.size())) - self.pre_context = nn.Conv2d( - self.in_channels, self.in_channels, kernel_size=1, bias=True) - self.post_context = nn.Conv2d( - self.out_channels, self.out_channels, kernel_size=1, bias=True) - if self.use_deform: - self.offset_s = nn.Conv2d( - self.in_channels, - 18, - kernel_size=3, - padding=1, - stride=stride, - bias=True) - self.offset_l = nn.Conv2d( - self.in_channels, - 18, - kernel_size=3, - padding=1, - stride=stride, - bias=True) - self.init_weights() - - def init_weights(self): - constant_init(self.switch, 0, bias=1) - self.weight_diff.data.zero_() - constant_init(self.pre_context, 0) - constant_init(self.post_context, 0) - if self.use_deform: - constant_init(self.offset_s, 0) - constant_init(self.offset_l, 0) - - def forward(self, x): - # pre-context - avg_x = F.adaptive_avg_pool2d(x, output_size=1) - avg_x = self.pre_context(avg_x) - avg_x = avg_x.expand_as(x) - x = x + avg_x - # switch - avg_x = F.pad(x, pad=(2, 2, 2, 2), mode='reflect') - avg_x = F.avg_pool2d(avg_x, kernel_size=5, stride=1, padding=0) - switch = self.switch(avg_x) - # sac - weight = self._get_weight(self.weight) - zero_bias = torch.zeros( - self.out_channels, device=weight.device, dtype=weight.dtype) - - if self.use_deform: - offset = self.offset_s(avg_x) - out_s = deform_conv2d(x, offset, weight, self.stride, self.padding, - self.dilation, self.groups, 1) - else: - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.5.0')): - out_s = super().conv2d_forward(x, weight) - elif digit_version(TORCH_VERSION) >= digit_version('1.8.0'): - # bias is a required argument of _conv_forward in torch 1.8.0 - out_s = super()._conv_forward(x, weight, zero_bias) - else: - out_s = super()._conv_forward(x, weight) - ori_p = self.padding - ori_d = self.dilation - self.padding = tuple(3 * p for p in self.padding) - self.dilation = tuple(3 * d for d in self.dilation) - weight = weight + self.weight_diff - if self.use_deform: - offset = self.offset_l(avg_x) - out_l = deform_conv2d(x, offset, weight, self.stride, self.padding, - self.dilation, self.groups, 1) - else: - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.5.0')): - out_l = super().conv2d_forward(x, weight) - elif digit_version(TORCH_VERSION) >= digit_version('1.8.0'): - # bias is a required argument of _conv_forward in torch 1.8.0 - out_l = super()._conv_forward(x, weight, zero_bias) - else: - out_l = super()._conv_forward(x, weight) - - out = switch * out_s + (1 - switch) * out_l - self.padding = ori_p - self.dilation = ori_d - # post-context - avg_x = F.adaptive_avg_pool2d(out, output_size=1) - avg_x = self.post_context(avg_x) - avg_x = avg_x.expand_as(out) - out = out + avg_x - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/scatter_points.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/scatter_points.py deleted file mode 100644 index f8a4a556..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/scatter_points.py +++ /dev/null @@ -1,137 
+0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F -from torch import nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', - ['dynamic_point_to_voxel_forward', 'dynamic_point_to_voxel_backward']) - - -class _DynamicScatter(Function): - - @staticmethod - def forward(ctx, feats, coors, reduce_type='max'): - """convert kitti points(N, >=3) to voxels. - - Args: - feats (torch.Tensor): [N, C]. Points features to be reduced - into voxels. - coors (torch.Tensor): [N, ndim]. Corresponding voxel coordinates - (specifically multi-dim voxel index) of each points. - reduce_type (str, optional): Reduce op. support 'max', 'sum' and - 'mean'. Default: 'max'. - - Returns: - tuple[torch.Tensor]: A tuple contains two elements. The first one - is the voxel features with shape [M, C] which are respectively - reduced from input features that share the same voxel coordinates. - The second is voxel coordinates with shape [M, ndim]. - """ - results = ext_module.dynamic_point_to_voxel_forward( - feats, coors, reduce_type) - (voxel_feats, voxel_coors, point2voxel_map, - voxel_points_count) = results - ctx.reduce_type = reduce_type - ctx.save_for_backward(feats, voxel_feats, point2voxel_map, - voxel_points_count) - ctx.mark_non_differentiable(voxel_coors) - return voxel_feats, voxel_coors - - @staticmethod - def backward(ctx, grad_voxel_feats, grad_voxel_coors=None): - (feats, voxel_feats, point2voxel_map, - voxel_points_count) = ctx.saved_tensors - grad_feats = torch.zeros_like(feats) - # TODO: whether to use index put or use cuda_backward - # To use index put, need point to voxel index - ext_module.dynamic_point_to_voxel_backward( - grad_feats, grad_voxel_feats.contiguous(), feats, voxel_feats, - point2voxel_map, voxel_points_count, ctx.reduce_type) - return grad_feats, None, None - - -dynamic_scatter = _DynamicScatter.apply - - -class DynamicScatter(nn.Module): - """Scatters points into voxels, used in the voxel encoder with dynamic - voxelization. - - Note: - The CPU and GPU implementation get the same output, but have numerical - difference after summation and division (e.g., 5e-7). - - Args: - voxel_size (list): list [x, y, z] size of three dimension. - point_cloud_range (list): The coordinate range of points, [x_min, - y_min, z_min, x_max, y_max, z_max]. - average_points (bool): whether to use avg pooling to scatter points - into voxel. - """ - - def __init__(self, voxel_size, point_cloud_range, average_points: bool): - super().__init__() - - self.voxel_size = voxel_size - self.point_cloud_range = point_cloud_range - self.average_points = average_points - - def forward_single(self, points, coors): - """Scatters points into voxels. - - Args: - points (torch.Tensor): Points to be reduced into voxels. - coors (torch.Tensor): Corresponding voxel coordinates (specifically - multi-dim voxel index) of each points. - - Returns: - tuple[torch.Tensor]: A tuple contains two elements. The first one - is the voxel features with shape [M, C] which are respectively - reduced from input features that share the same voxel coordinates. - The second is voxel coordinates with shape [M, ndim]. - """ - reduce = 'mean' if self.average_points else 'max' - return dynamic_scatter(points.contiguous(), coors.contiguous(), reduce) - - def forward(self, points, coors): - """Scatters points/features into voxels. - - Args: - points (torch.Tensor): Points to be reduced into voxels. 
- coors (torch.Tensor): Corresponding voxel coordinates (specifically - multi-dim voxel index) of each points. - - Returns: - tuple[torch.Tensor]: A tuple contains two elements. The first one - is the voxel features with shape [M, C] which are respectively - reduced from input features that share the same voxel coordinates. - The second is voxel coordinates with shape [M, ndim]. - """ - if coors.size(-1) == 3: - return self.forward_single(points, coors) - else: - batch_size = coors[-1, 0] + 1 - voxels, voxel_coors = [], [] - for i in range(batch_size): - inds = torch.where(coors[:, 0] == i) - voxel, voxel_coor = self.forward_single( - points[inds], coors[inds][:, 1:]) - coor_pad = F.pad(voxel_coor, (1, 0), mode='constant', value=i) - voxel_coors.append(coor_pad) - voxels.append(voxel) - features = torch.cat(voxels, dim=0) - feature_coors = torch.cat(voxel_coors, dim=0) - - return features, feature_coors - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += 'voxel_size=' + str(self.voxel_size) - s += ', point_cloud_range=' + str(self.point_cloud_range) - s += ', average_points=' + str(self.average_points) - s += ')' - return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/sync_bn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/sync_bn.py deleted file mode 100644 index e24ecb1a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/sync_bn.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.distributed as dist -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.module import Module -from torch.nn.parameter import Parameter - -from mmcv.cnn import NORM_LAYERS -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'sync_bn_forward_mean', 'sync_bn_forward_var', 'sync_bn_forward_output', - 'sync_bn_backward_param', 'sync_bn_backward_data' -]) - - -class SyncBatchNormFunction(Function): - - @staticmethod - def symbolic(g, input, running_mean, running_var, weight, bias, momentum, - eps, group, group_size, stats_mode): - return g.op( - 'mmcv::MMCVSyncBatchNorm', - input, - running_mean, - running_var, - weight, - bias, - momentum_f=momentum, - eps_f=eps, - group_i=group, - group_size_i=group_size, - stats_mode=stats_mode) - - @staticmethod - def forward(self, input, running_mean, running_var, weight, bias, momentum, - eps, group, group_size, stats_mode): - self.momentum = momentum - self.eps = eps - self.group = group - self.group_size = group_size - self.stats_mode = stats_mode - - assert isinstance( - input, (torch.HalfTensor, torch.FloatTensor, - torch.cuda.HalfTensor, torch.cuda.FloatTensor)), \ - f'only support Half or Float Tensor, but {input.type()}' - output = torch.zeros_like(input) - input3d = input.flatten(start_dim=2) - output3d = output.view_as(input3d) - num_channels = input3d.size(1) - - # ensure mean/var/norm/std are initialized as zeros - # ``torch.empty()`` does not guarantee that - mean = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - var = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - norm = torch.zeros_like( - input3d, dtype=torch.float, device=input3d.device) - std = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - - batch_size = input3d.size(0) - if batch_size > 0: - ext_module.sync_bn_forward_mean(input3d, mean) - batch_flag = torch.ones([1], 
device=mean.device, dtype=mean.dtype) - else: - # skip updating mean and leave it as zeros when the input is empty - batch_flag = torch.zeros([1], device=mean.device, dtype=mean.dtype) - - # synchronize mean and the batch flag - vec = torch.cat([mean, batch_flag]) - if self.stats_mode == 'N': - vec *= batch_size - if self.group_size > 1: - dist.all_reduce(vec, group=self.group) - total_batch = vec[-1].detach() - mean = vec[:num_channels] - - if self.stats_mode == 'default': - mean = mean / self.group_size - elif self.stats_mode == 'N': - mean = mean / total_batch.clamp(min=1) - else: - raise NotImplementedError - - # leave var as zeros when the input is empty - if batch_size > 0: - ext_module.sync_bn_forward_var(input3d, mean, var) - - if self.stats_mode == 'N': - var *= batch_size - if self.group_size > 1: - dist.all_reduce(var, group=self.group) - - if self.stats_mode == 'default': - var /= self.group_size - elif self.stats_mode == 'N': - var /= total_batch.clamp(min=1) - else: - raise NotImplementedError - - # if the total batch size over all the ranks is zero, - # we should not update the statistics in the current batch - update_flag = total_batch.clamp(max=1) - momentum = update_flag * self.momentum - ext_module.sync_bn_forward_output( - input3d, - mean, - var, - weight, - bias, - running_mean, - running_var, - norm, - std, - output3d, - eps=self.eps, - momentum=momentum, - group_size=self.group_size) - self.save_for_backward(norm, std, weight) - return output - - @staticmethod - @once_differentiable - def backward(self, grad_output): - norm, std, weight = self.saved_tensors - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(weight) - grad_input = torch.zeros_like(grad_output) - grad_output3d = grad_output.flatten(start_dim=2) - grad_input3d = grad_input.view_as(grad_output3d) - - batch_size = grad_input3d.size(0) - if batch_size > 0: - ext_module.sync_bn_backward_param(grad_output3d, norm, grad_weight, - grad_bias) - - # all reduce - if self.group_size > 1: - dist.all_reduce(grad_weight, group=self.group) - dist.all_reduce(grad_bias, group=self.group) - grad_weight /= self.group_size - grad_bias /= self.group_size - - if batch_size > 0: - ext_module.sync_bn_backward_data(grad_output3d, weight, - grad_weight, grad_bias, norm, std, - grad_input3d) - - return grad_input, None, None, grad_weight, grad_bias, \ - None, None, None, None, None - - -@NORM_LAYERS.register_module(name='MMSyncBN') -class SyncBatchNorm(Module): - """Synchronized Batch Normalization. - - Args: - num_features (int): number of features/chennels in input tensor - eps (float, optional): a value added to the denominator for numerical - stability. Defaults to 1e-5. - momentum (float, optional): the value used for the running_mean and - running_var computation. Defaults to 0.1. - affine (bool, optional): whether to use learnable affine parameters. - Defaults to True. - track_running_stats (bool, optional): whether to track the running - mean and variance during training. When set to False, this - module does not track such statistics, and initializes statistics - buffers ``running_mean`` and ``running_var`` as ``None``. When - these buffers are ``None``, this module always uses batch - statistics in both training and eval modes. Defaults to True. - group (int, optional): synchronization of stats happen within - each process group individually. By default it is synchronization - across the whole world. Defaults to None. - stats_mode (str, optional): The statistical mode. 
Available options - includes ``'default'`` and ``'N'``. Defaults to 'default'. - When ``stats_mode=='default'``, it computes the overall statistics - using those from each worker with equal weight, i.e., the - statistics are synchronized and simply divied by ``group``. This - mode will produce inaccurate statistics when empty tensors occur. - When ``stats_mode=='N'``, it compute the overall statistics using - the total number of batches in each worker ignoring the number of - group, i.e., the statistics are synchronized and then divied by - the total batch ``N``. This mode is beneficial when empty tensors - occur during training, as it average the total mean by the real - number of batch. - """ - - def __init__(self, - num_features, - eps=1e-5, - momentum=0.1, - affine=True, - track_running_stats=True, - group=None, - stats_mode='default'): - super().__init__() - self.num_features = num_features - self.eps = eps - self.momentum = momentum - self.affine = affine - self.track_running_stats = track_running_stats - group = dist.group.WORLD if group is None else group - self.group = group - self.group_size = dist.get_world_size(group) - assert stats_mode in ['default', 'N'], \ - f'"stats_mode" only accepts "default" and "N", got "{stats_mode}"' - self.stats_mode = stats_mode - if self.affine: - self.weight = Parameter(torch.Tensor(num_features)) - self.bias = Parameter(torch.Tensor(num_features)) - else: - self.register_parameter('weight', None) - self.register_parameter('bias', None) - if self.track_running_stats: - self.register_buffer('running_mean', torch.zeros(num_features)) - self.register_buffer('running_var', torch.ones(num_features)) - self.register_buffer('num_batches_tracked', - torch.tensor(0, dtype=torch.long)) - else: - self.register_buffer('running_mean', None) - self.register_buffer('running_var', None) - self.register_buffer('num_batches_tracked', None) - self.reset_parameters() - - def reset_running_stats(self): - if self.track_running_stats: - self.running_mean.zero_() - self.running_var.fill_(1) - self.num_batches_tracked.zero_() - - def reset_parameters(self): - self.reset_running_stats() - if self.affine: - self.weight.data.uniform_() # pytorch use ones_() - self.bias.data.zero_() - - def forward(self, input): - if input.dim() < 2: - raise ValueError( - f'expected at least 2D input, got {input.dim()}D input') - if self.momentum is None: - exponential_average_factor = 0.0 - else: - exponential_average_factor = self.momentum - - if self.training and self.track_running_stats: - if self.num_batches_tracked is not None: - self.num_batches_tracked += 1 - if self.momentum is None: # use cumulative moving average - exponential_average_factor = 1.0 / float( - self.num_batches_tracked) - else: # use exponential moving average - exponential_average_factor = self.momentum - - if self.training or not self.track_running_stats: - return SyncBatchNormFunction.apply( - input, self.running_mean, self.running_var, self.weight, - self.bias, exponential_average_factor, self.eps, self.group, - self.group_size, self.stats_mode) - else: - return F.batch_norm(input, self.running_mean, self.running_var, - self.weight, self.bias, False, - exponential_average_factor, self.eps) - - def __repr__(self): - s = self.__class__.__name__ - s += f'({self.num_features}, ' - s += f'eps={self.eps}, ' - s += f'momentum={self.momentum}, ' - s += f'affine={self.affine}, ' - s += f'track_running_stats={self.track_running_stats}, ' - s += f'group_size={self.group_size},' - s += f'stats_mode={self.stats_mode})' - 
return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/three_interpolate.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/three_interpolate.py deleted file mode 100644 index 7256e91f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/three_interpolate.py +++ /dev/null @@ -1,69 +0,0 @@ -from typing import Tuple - -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['three_interpolate_forward', 'three_interpolate_backward']) - - -class ThreeInterpolate(Function): - """Performs weighted linear interpolation on 3 features. - - Please refer to `Paper of PointNet++ `_ - for more details. - """ - - @staticmethod - def forward(ctx, features: torch.Tensor, indices: torch.Tensor, - weight: torch.Tensor) -> torch.Tensor: - """ - Args: - features (torch.Tensor): (B, C, M) Features descriptors to be - interpolated. - indices (torch.Tensor): (B, n, 3) indices of three nearest - neighbor features for the target features. - weight (torch.Tensor): (B, n, 3) weights of three nearest - neighbor features for the target features. - - Returns: - torch.Tensor: (B, C, N) tensor of the interpolated features - """ - assert features.is_contiguous() - assert indices.is_contiguous() - assert weight.is_contiguous() - - B, c, m = features.size() - n = indices.size(1) - ctx.three_interpolate_for_backward = (indices, weight, m) - output = torch.cuda.FloatTensor(B, c, n) - - ext_module.three_interpolate_forward( - features, indices, weight, output, b=B, c=c, m=m, n=n) - return output - - @staticmethod - def backward( - ctx, grad_out: torch.Tensor - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Args: - grad_out (torch.Tensor): (B, C, N) tensor with gradients of outputs - - Returns: - torch.Tensor: (B, C, M) tensor with gradients of features - """ - idx, weight, m = ctx.three_interpolate_for_backward - B, c, n = grad_out.size() - - grad_features = torch.cuda.FloatTensor(B, c, m).zero_() - grad_out_data = grad_out.data.contiguous() - - ext_module.three_interpolate_backward( - grad_out_data, idx, weight, grad_features.data, b=B, c=c, n=n, m=m) - return grad_features, None, None - - -three_interpolate = ThreeInterpolate.apply diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/three_nn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/three_nn.py deleted file mode 100644 index 25d60bb2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/three_nn.py +++ /dev/null @@ -1,51 +0,0 @@ -from typing import Tuple - -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['three_nn_forward']) - - -class ThreeNN(Function): - """Find the top-3 nearest neighbors of the target set from the source set. - - Please refer to `Paper of PointNet++ `_ - for more details. - """ - - @staticmethod - def forward(ctx, target: torch.Tensor, - source: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Args: - target (torch.Tensor): shape (B, N, 3), points set that needs to - find the nearest neighbors. - source (torch.Tensor): shape (B, M, 3), points set that is used - to find the nearest neighbors of points in target set. - - Returns: - torch.Tensor: shape (B, N, 3), L2 distance of each point in target - set to their corresponding top three nearest neighbors. 
- """ - target = target.contiguous() - source = source.contiguous() - - B, N, _ = target.size() - m = source.size(1) - dist2 = torch.cuda.FloatTensor(B, N, 3) - idx = torch.cuda.IntTensor(B, N, 3) - - ext_module.three_nn_forward(target, source, dist2, idx, b=B, n=N, m=m) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(idx) - - return torch.sqrt(dist2), idx - - @staticmethod - def backward(ctx, a=None, b=None): - return None, None - - -three_nn = ThreeNN.apply diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/tin_shift.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/tin_shift.py deleted file mode 100755 index 473231cc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/tin_shift.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Code reference from "Temporal Interlacing Network" -# https://github.com/deepcs233/TIN/blob/master/cuda_shift/rtc_wrap.py -# Hao Shao, Shengju Qian, Yu Liu -# shaoh19@mails.tsinghua.edu.cn, sjqian@cse.cuhk.edu.hk, yuliu@ee.cuhk.edu.hk - -import torch -import torch.nn as nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', - ['tin_shift_forward', 'tin_shift_backward']) - - -class TINShiftFunction(Function): - - @staticmethod - def forward(ctx, input, shift): - if input.size(0) != shift.size(0): - raise ValueError( - 'The first dim (batch) of `input` and `shift` should be ' - f'same, but got {input.size(0)} and {shift.size(0)}.') - C = input.size(2) - num_segments = shift.size(1) - if C // num_segments <= 0 or C % num_segments != 0: - raise ValueError('C should be a multiple of num_segments, ' - f'but got C={C} and num_segments={num_segments}.') - - ctx.save_for_backward(shift) - - out = torch.zeros_like(input) - ext_module.tin_shift_forward(input, shift, out) - - return out - - @staticmethod - def backward(ctx, grad_output): - - shift = ctx.saved_tensors[0] - data_grad_input = grad_output.new(*grad_output.size()).zero_() - shift_grad_input = shift.new(*shift.size()).zero_() - ext_module.tin_shift_backward(grad_output, shift, data_grad_input) - - return data_grad_input, shift_grad_input - - -tin_shift = TINShiftFunction.apply - - -class TINShift(nn.Module): - """Temporal Interlace Shift. - - Temporal Interlace shift is a differentiable temporal-wise frame shifting - which is proposed in "Temporal Interlacing Network" - - Please refer to `Temporal Interlacing Network - `_ for more details. - - Code is modified from https://github.com/mit-han-lab/temporal-shift-module - """ - - def forward(self, input, shift): - """Perform temporal interlace shift. - - Args: - input (torch.Tensor): Feature map with shape - [N, num_segments, C, H * W]. - shift (torch.Tensor): Shift tensor with shape [N, num_segments]. - - Returns: - Feature map after temporal interlace shift. - """ - return tin_shift(input, shift) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/upfirdn2d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/upfirdn2d.py deleted file mode 100644 index 255354da..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/upfirdn2d.py +++ /dev/null @@ -1,330 +0,0 @@ -# modified from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.py # noqa:E501 - -# Copyright (c) 2021, NVIDIA Corporation. All rights reserved. 
-# NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -# Augmentation (ADA) -# ======================================================================= - -# 1. Definitions - -# "Licensor" means any person or entity that distributes its Work. - -# "Software" means the original work of authorship made available under -# this License. - -# "Work" means the Software and any additions to or derivative works of -# the Software that are made available under this License. - -# The terms "reproduce," "reproduction," "derivative works," and -# "distribution" have the meaning as provided under U.S. copyright law; -# provided, however, that for the purposes of this License, derivative -# works shall not include works that remain separable from, or merely -# link (or bind by name) to the interfaces of, the Work. - -# Works, including the Software, are "made available" under this License -# by including in or with the Work either (a) a copyright notice -# referencing the applicability of this License to the Work, or (b) a -# copy of this License. - -# 2. License Grants - -# 2.1 Copyright Grant. Subject to the terms and conditions of this -# License, each Licensor grants to you a perpetual, worldwide, -# non-exclusive, royalty-free, copyright license to reproduce, -# prepare derivative works of, publicly display, publicly perform, -# sublicense and distribute its Work and any resulting derivative -# works in any form. - -# 3. Limitations - -# 3.1 Redistribution. You may reproduce or distribute the Work only -# if (a) you do so under this License, (b) you include a complete -# copy of this License with your distribution, and (c) you retain -# without modification any copyright, patent, trademark, or -# attribution notices that are present in the Work. - -# 3.2 Derivative Works. You may specify that additional or different -# terms apply to the use, reproduction, and distribution of your -# derivative works of the Work ("Your Terms") only if (a) Your Terms -# provide that the use limitation in Section 3.3 applies to your -# derivative works, and (b) you identify the specific derivative -# works that are subject to Your Terms. Notwithstanding Your Terms, -# this License (including the redistribution requirements in Section -# 3.1) will continue to apply to the Work itself. - -# 3.3 Use Limitation. The Work and any derivative works thereof only -# may be used or intended for use non-commercially. Notwithstanding -# the foregoing, NVIDIA and its affiliates may use the Work and any -# derivative works commercially. As used herein, "non-commercially" -# means for research or evaluation purposes only. - -# 3.4 Patent Claims. If you bring or threaten to bring a patent claim -# against any Licensor (including any claim, cross-claim or -# counterclaim in a lawsuit) to enforce any patents that you allege -# are infringed by any Work, then your rights under this License from -# such Licensor (including the grant in Section 2.1) will terminate -# immediately. - -# 3.5 Trademarks. This License does not grant any rights to use any -# Licensor’s or its affiliates’ names, logos, or trademarks, except -# as necessary to reproduce the notices described in this License. - -# 3.6 Termination. If you violate any term of this License, then your -# rights under this License (including the grant in Section 2.1) will -# terminate immediately. - -# 4. Disclaimer of Warranty. 
- -# THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -# NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -# THIS LICENSE. - -# 5. Limitation of Liability. - -# EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -# THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -# SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -# INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -# OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -# (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -# LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -# COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -# THE POSSIBILITY OF SUCH DAMAGES. - -# ======================================================================= - -import torch -from torch.autograd import Function -from torch.nn import functional as F - -from mmcv.utils import to_2tuple -from ..utils import ext_loader - -upfirdn2d_ext = ext_loader.load_ext('_ext', ['upfirdn2d']) - - -class UpFirDn2dBackward(Function): - - @staticmethod - def forward(ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, - in_size, out_size): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_ext.upfirdn2d( - grad_output, - grad_kernel, - up_x=down_x, - up_y=down_y, - down_x=up_x, - down_y=up_y, - pad_x0=g_pad_x0, - pad_x1=g_pad_x1, - pad_y0=g_pad_y0, - pad_y1=g_pad_y1) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], - in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], - ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_ext.upfirdn2d( - gradgrad_input, - kernel, - up_x=ctx.up_x, - up_y=ctx.up_y, - down_x=ctx.down_x, - down_y=ctx.down_y, - pad_x0=ctx.pad_x0, - pad_x1=ctx.pad_x1, - pad_y0=ctx.pad_y0, - pad_y1=ctx.pad_y1) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], - # ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.in_size[1], - ctx.out_size[0], ctx.out_size[1]) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w 
- pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_ext.upfirdn2d( - input, - kernel, - up_x=up_x, - up_y=up_y, - down_x=down_x, - down_y=down_y, - pad_x0=pad_x0, - pad_x1=pad_x1, - pad_y0=pad_y0, - pad_y1=pad_y1) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - """UpFRIDn for 2d features. - - UpFIRDn is short for upsample, apply FIR filter and downsample. More - details can be found in: - https://www.mathworks.com/help/signal/ref/upfirdn.html - - Args: - input (torch.Tensor): Tensor with shape of (n, c, h, w). - kernel (torch.Tensor): Filter kernel. - up (int | tuple[int], optional): Upsampling factor. If given a number, - we will use this factor for the both height and width side. - Defaults to 1. - down (int | tuple[int], optional): Downsampling factor. If given a - number, we will use this factor for the both height and width side. - Defaults to 1. - pad (tuple[int], optional): Padding for tensors, (x_pad, y_pad) or - (x_pad_0, x_pad_1, y_pad_0, y_pad_1). Defaults to (0, 0). - - Returns: - torch.Tensor: Tensor after UpFIRDn. - """ - if input.device.type == 'cpu': - if len(pad) == 2: - pad = (pad[0], pad[1], pad[0], pad[1]) - - up = to_2tuple(up) - - down = to_2tuple(down) - - out = upfirdn2d_native(input, kernel, up[0], up[1], down[0], down[1], - pad[0], pad[1], pad[2], pad[3]) - else: - _up = to_2tuple(up) - - _down = to_2tuple(down) - - if len(pad) == 4: - _pad = pad - elif len(pad) == 2: - _pad = (pad[0], pad[1], pad[0], pad[1]) - - out = UpFirDn2d.apply(input, kernel, _up, _down, _pad) - - return out - - -def upfirdn2d_native(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, - pad_y0, pad_y1): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, - [0, 0, - max(pad_x0, 0), - max(pad_x1, 0), - max(pad_y0, 0), - max(pad_y1, 0)]) - out = out[:, - max(-pad_y0, 0):out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0):out.shape[2] - max(-pad_x1, 0), :, ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/voxelize.py 
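For reference, the `upfirdn2d` helper removed above (upsample, FIR filter, downsample) can be exercised through its pure-PyTorch CPU path. A minimal sketch, assuming an `mmcv-full` 1.x install (the extension module is loaded at import time even when only the CPU path is used); the tensor sizes and the 3x3 averaging kernel are purely illustrative:

```python
import torch
from mmcv.ops import upfirdn2d  # public path of the op in the removed tree

# 1 image, 3 channels, 8x8, on CPU so the pure-PyTorch fallback is taken
x = torch.randn(1, 3, 8, 8)
kernel = torch.ones(3, 3) / 9.0       # normalized 3x3 box filter

# upsample by 2, no downsampling, pad 1 pixel on each side
y = upfirdn2d(x, kernel, up=2, down=1, pad=(1, 1))

# out_h = (8*2 + 1 + 1 - 3) // 1 + 1 = 16, and likewise for the width
assert y.shape == (1, 3, 16, 16)
```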
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/voxelize.py deleted file mode 100644 index ee4e0ae8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/ops/voxelize.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn -from torch.autograd import Function -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['dynamic_voxelize_forward', 'hard_voxelize_forward']) - - -class _Voxelization(Function): - - @staticmethod - def forward(ctx, - points, - voxel_size, - coors_range, - max_points=35, - max_voxels=20000, - deterministic=True): - """Convert kitti points(N, >=3) to voxels. - - Args: - points (torch.Tensor): [N, ndim]. Points[:, :3] contain xyz points - and points[:, 3:] contain other information like reflectivity. - voxel_size (tuple or float): The size of voxel with the shape of - [3]. - coors_range (tuple or float): The coordinate range of voxel with - the shape of [6]. - max_points (int, optional): maximum points contained in a voxel. if - max_points=-1, it means using dynamic_voxelize. Default: 35. - max_voxels (int, optional): maximum voxels this function create. - for second, 20000 is a good choice. Users should shuffle points - before call this function because max_voxels may drop points. - Default: 20000. - deterministic: bool. whether to invoke the non-deterministic - version of hard-voxelization implementations. non-deterministic - version is considerablly fast but is not deterministic. only - affects hard voxelization. default True. for more information - of this argument and the implementation insights, please refer - to the following links: - https://github.com/open-mmlab/mmdetection3d/issues/894 - https://github.com/open-mmlab/mmdetection3d/pull/904 - it is an experimental feature and we will appreciate it if - you could share with us the failing cases. - - Returns: - tuple[torch.Tensor]: tuple[torch.Tensor]: A tuple contains three - elements. The first one is the output voxels with the shape of - [M, max_points, n_dim], which only contain points and returned - when max_points != -1. The second is the voxel coordinates with - shape of [M, 3]. The last is number of point per voxel with the - shape of [M], which only returned when max_points != -1. 
- """ - if max_points == -1 or max_voxels == -1: - coors = points.new_zeros(size=(points.size(0), 3), dtype=torch.int) - ext_module.dynamic_voxelize_forward( - points, - torch.tensor(voxel_size, dtype=torch.float), - torch.tensor(coors_range, dtype=torch.float), - coors, - NDim=3) - return coors - else: - voxels = points.new_zeros( - size=(max_voxels, max_points, points.size(1))) - coors = points.new_zeros(size=(max_voxels, 3), dtype=torch.int) - num_points_per_voxel = points.new_zeros( - size=(max_voxels, ), dtype=torch.int) - voxel_num = torch.zeros(size=(), dtype=torch.long) - ext_module.hard_voxelize_forward( - points, - torch.tensor(voxel_size, dtype=torch.float), - torch.tensor(coors_range, dtype=torch.float), - voxels, - coors, - num_points_per_voxel, - voxel_num, - max_points=max_points, - max_voxels=max_voxels, - NDim=3, - deterministic=deterministic) - # select the valid voxels - voxels_out = voxels[:voxel_num] - coors_out = coors[:voxel_num] - num_points_per_voxel_out = num_points_per_voxel[:voxel_num] - return voxels_out, coors_out, num_points_per_voxel_out - - -voxelization = _Voxelization.apply - - -class Voxelization(nn.Module): - """Convert kitti points(N, >=3) to voxels. - - Please refer to `Point-Voxel CNN for Efficient 3D Deep Learning - `_ for more details. - - Args: - voxel_size (tuple or float): The size of voxel with the shape of [3]. - point_cloud_range (tuple or float): The coordinate range of voxel with - the shape of [6]. - max_num_points (int): maximum points contained in a voxel. if - max_points=-1, it means using dynamic_voxelize. - max_voxels (int, optional): maximum voxels this function create. - for second, 20000 is a good choice. Users should shuffle points - before call this function because max_voxels may drop points. - Default: 20000. - """ - - def __init__(self, - voxel_size, - point_cloud_range, - max_num_points, - max_voxels=20000, - deterministic=True): - """ - Args: - voxel_size (list): list [x, y, z] size of three dimension - point_cloud_range (list): - [x_min, y_min, z_min, x_max, y_max, z_max] - max_num_points (int): max number of points per voxel - max_voxels (tuple or int): max number of voxels in - (training, testing) time - deterministic: bool. whether to invoke the non-deterministic - version of hard-voxelization implementations. non-deterministic - version is considerablly fast but is not deterministic. only - affects hard voxelization. default True. for more information - of this argument and the implementation insights, please refer - to the following links: - https://github.com/open-mmlab/mmdetection3d/issues/894 - https://github.com/open-mmlab/mmdetection3d/pull/904 - it is an experimental feature and we will appreciate it if - you could share with us the failing cases. 
- """ - super().__init__() - - self.voxel_size = voxel_size - self.point_cloud_range = point_cloud_range - self.max_num_points = max_num_points - if isinstance(max_voxels, tuple): - self.max_voxels = max_voxels - else: - self.max_voxels = _pair(max_voxels) - self.deterministic = deterministic - - point_cloud_range = torch.tensor( - point_cloud_range, dtype=torch.float32) - voxel_size = torch.tensor(voxel_size, dtype=torch.float32) - grid_size = (point_cloud_range[3:] - - point_cloud_range[:3]) / voxel_size - grid_size = torch.round(grid_size).long() - input_feat_shape = grid_size[:2] - self.grid_size = grid_size - # the origin shape is as [x-len, y-len, z-len] - # [w, h, d] -> [d, h, w] - self.pcd_shape = [*input_feat_shape, 1][::-1] - - def forward(self, input): - if self.training: - max_voxels = self.max_voxels[0] - else: - max_voxels = self.max_voxels[1] - - return voxelization(input, self.voxel_size, self.point_cloud_range, - self.max_num_points, max_voxels, - self.deterministic) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += 'voxel_size=' + str(self.voxel_size) - s += ', point_cloud_range=' + str(self.point_cloud_range) - s += ', max_num_points=' + str(self.max_num_points) - s += ', max_voxels=' + str(self.max_voxels) - s += ', deterministic=' + str(self.deterministic) - s += ')' - return s diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/__init__.py deleted file mode 100644 index 2ed2c17a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .collate import collate -from .data_container import DataContainer -from .data_parallel import MMDataParallel -from .distributed import MMDistributedDataParallel -from .registry import MODULE_WRAPPERS -from .scatter_gather import scatter, scatter_kwargs -from .utils import is_module_wrapper - -__all__ = [ - 'collate', 'DataContainer', 'MMDataParallel', 'MMDistributedDataParallel', - 'scatter', 'scatter_kwargs', 'is_module_wrapper', 'MODULE_WRAPPERS' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/_functions.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/_functions.py deleted file mode 100644 index 95c58bf1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/_functions.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from torch.nn.parallel._functions import _get_stream - - -def scatter(input, devices, streams=None): - """Scatters tensor across multiple GPUs.""" - if streams is None: - streams = [None] * len(devices) - - if isinstance(input, list): - chunk_size = (len(input) - 1) // len(devices) + 1 - outputs = [ - scatter(input[i], [devices[i // chunk_size]], - [streams[i // chunk_size]]) for i in range(len(input)) - ] - return outputs - elif isinstance(input, torch.Tensor): - output = input.contiguous() - # TODO: copy to a pinned buffer first (if copying from CPU) - stream = streams[0] if output.numel() > 0 else None - if devices != [-1]: - with torch.cuda.device(devices[0]), torch.cuda.stream(stream): - output = output.cuda(devices[0], non_blocking=True) - - return output - else: - raise Exception(f'Unknown type {type(input)}.') - - -def synchronize_stream(output, devices, streams): - if isinstance(output, list): - chunk_size = len(output) // len(devices) - for i in range(len(devices)): - for j in range(chunk_size): - synchronize_stream(output[i * chunk_size + j], [devices[i]], - [streams[i]]) - elif isinstance(output, torch.Tensor): - if output.numel() != 0: - with torch.cuda.device(devices[0]): - main_stream = torch.cuda.current_stream() - main_stream.wait_stream(streams[0]) - output.record_stream(main_stream) - else: - raise Exception(f'Unknown type {type(output)}.') - - -def get_input_device(input): - if isinstance(input, list): - for item in input: - input_device = get_input_device(item) - if input_device != -1: - return input_device - return -1 - elif isinstance(input, torch.Tensor): - return input.get_device() if input.is_cuda else -1 - else: - raise Exception(f'Unknown type {type(input)}.') - - -class Scatter: - - @staticmethod - def forward(target_gpus, input): - input_device = get_input_device(input) - streams = None - if input_device == -1 and target_gpus != [-1]: - # Perform CPU to GPU copies in a background stream - streams = [_get_stream(device) for device in target_gpus] - - outputs = scatter(input, target_gpus, streams) - # Synchronize with the copy stream - if streams is not None: - synchronize_stream(outputs, target_gpus, streams) - - return tuple(outputs) if isinstance(outputs, list) else (outputs, ) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/collate.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/collate.py deleted file mode 100644 index ad749197..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/collate.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Mapping, Sequence - -import torch -import torch.nn.functional as F -from torch.utils.data.dataloader import default_collate - -from .data_container import DataContainer - - -def collate(batch, samples_per_gpu=1): - """Puts each data field into a tensor/DataContainer with outer dimension - batch size. - - Extend default_collate to add support for - :type:`~mmcv.parallel.DataContainer`. There are 3 cases. - - 1. cpu_only = True, e.g., meta data - 2. cpu_only = False, stack = True, e.g., images tensors - 3. 
cpu_only = False, stack = False, e.g., gt bboxes - """ - - if not isinstance(batch, Sequence): - raise TypeError(f'{batch.dtype} is not supported.') - - if isinstance(batch[0], DataContainer): - stacked = [] - if batch[0].cpu_only: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer( - stacked, batch[0].stack, batch[0].padding_value, cpu_only=True) - elif batch[0].stack: - for i in range(0, len(batch), samples_per_gpu): - assert isinstance(batch[i].data, torch.Tensor) - - if batch[i].pad_dims is not None: - ndim = batch[i].dim() - assert ndim > batch[i].pad_dims - max_shape = [0 for _ in range(batch[i].pad_dims)] - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = batch[i].size(-dim) - for sample in batch[i:i + samples_per_gpu]: - for dim in range(0, ndim - batch[i].pad_dims): - assert batch[i].size(dim) == sample.size(dim) - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = max(max_shape[dim - 1], - sample.size(-dim)) - padded_samples = [] - for sample in batch[i:i + samples_per_gpu]: - pad = [0 for _ in range(batch[i].pad_dims * 2)] - for dim in range(1, batch[i].pad_dims + 1): - pad[2 * dim - - 1] = max_shape[dim - 1] - sample.size(-dim) - padded_samples.append( - F.pad( - sample.data, pad, value=sample.padding_value)) - stacked.append(default_collate(padded_samples)) - elif batch[i].pad_dims is None: - stacked.append( - default_collate([ - sample.data - for sample in batch[i:i + samples_per_gpu] - ])) - else: - raise ValueError( - 'pad_dims should be either None or integers (1-3)') - - else: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer(stacked, batch[0].stack, batch[0].padding_value) - elif isinstance(batch[0], Sequence): - transposed = zip(*batch) - return [collate(samples, samples_per_gpu) for samples in transposed] - elif isinstance(batch[0], Mapping): - return { - key: collate([d[key] for d in batch], samples_per_gpu) - for key in batch[0] - } - else: - return default_collate(batch) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/data_container.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/data_container.py deleted file mode 100644 index cedb0d32..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/data_container.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import torch - - -def assert_tensor_type(func): - - @functools.wraps(func) - def wrapper(*args, **kwargs): - if not isinstance(args[0].data, torch.Tensor): - raise AttributeError( - f'{args[0].__class__.__name__} has no attribute ' - f'{func.__name__} for type {args[0].datatype}') - return func(*args, **kwargs) - - return wrapper - - -class DataContainer: - """A container for any type of objects. - - Typically tensors will be stacked in the collate function and sliced along - some dimension in the scatter function. This behavior has some limitations. - 1. All tensors have to be the same size. - 2. Types are limited (numpy array or Tensor). - - We design `DataContainer` and `MMDataParallel` to overcome these - limitations. The behavior can be either of the following. 
- - - copy to GPU, pad all tensors to the same size and stack them - - copy to GPU without stacking - - leave the objects as is and pass it to the model - - pad_dims specifies the number of last few dimensions to do padding - """ - - def __init__(self, - data, - stack=False, - padding_value=0, - cpu_only=False, - pad_dims=2): - self._data = data - self._cpu_only = cpu_only - self._stack = stack - self._padding_value = padding_value - assert pad_dims in [None, 1, 2, 3] - self._pad_dims = pad_dims - - def __repr__(self): - return f'{self.__class__.__name__}({repr(self.data)})' - - def __len__(self): - return len(self._data) - - @property - def data(self): - return self._data - - @property - def datatype(self): - if isinstance(self.data, torch.Tensor): - return self.data.type() - else: - return type(self.data) - - @property - def cpu_only(self): - return self._cpu_only - - @property - def stack(self): - return self._stack - - @property - def padding_value(self): - return self._padding_value - - @property - def pad_dims(self): - return self._pad_dims - - @assert_tensor_type - def size(self, *args, **kwargs): - return self.data.size(*args, **kwargs) - - @assert_tensor_type - def dim(self): - return self.data.dim() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/data_parallel.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/data_parallel.py deleted file mode 100644 index e86b47a5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/data_parallel.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from itertools import chain - -from torch.nn.parallel import DataParallel - -from .scatter_gather import scatter_kwargs - - -class MMDataParallel(DataParallel): - """The DataParallel module that supports DataContainer. - - MMDataParallel has two main differences with PyTorch DataParallel: - - - It supports a custom type :class:`DataContainer` which allows more - flexible control of input data during both GPU and CPU inference. - - It implement two more APIs ``train_step()`` and ``val_step()``. - - .. warning:: - MMDataParallel only supports single GPU training, if you need to - train with multiple GPUs, please use MMDistributedDataParallel - instead. If you have multiple GPUs and you just want to use - MMDataParallel, you can set the environment variable - ``CUDA_VISIBLE_DEVICES=0`` or instantiate ``MMDataParallel`` with - ``device_ids=[0]``. - - Args: - module (:class:`nn.Module`): Module to be encapsulated. - device_ids (list[int]): Device IDS of modules to be scattered to. - Defaults to None when GPU is not available. - output_device (str | int): Device ID for output. Defaults to None. - dim (int): Dimension used to scatter the data. Defaults to 0. - """ - - def __init__(self, *args, dim=0, **kwargs): - super().__init__(*args, dim=dim, **kwargs) - self.dim = dim - - def forward(self, *inputs, **kwargs): - """Override the original forward function. - - The main difference lies in the CPU inference where the data in - :class:`DataContainers` will still be gathered. 
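`DataContainer` and the `collate` function removed above work together: tensors marked `stack=True` are zero-padded to a common shape and stacked per GPU, while `cpu_only` metadata is kept as plain Python objects. A small sketch with illustrative shapes and filenames, assuming `mmcv` 1.x:

```python
import torch
from mmcv.parallel import DataContainer as DC, collate

# two samples: images of different sizes plus CPU-only meta info
batch = [
    dict(
        img=DC(torch.randn(3, 224, 200), stack=True, pad_dims=2),
        img_metas=DC(dict(filename='a.jpg'), cpu_only=True)),
    dict(
        img=DC(torch.randn(3, 210, 224), stack=True, pad_dims=2),
        img_metas=DC(dict(filename='b.jpg'), cpu_only=True)),
]

out = collate(batch, samples_per_gpu=2)

# images are zero-padded to 3 x 224 x 224 and stacked for the single GPU group
print(out['img'].data[0].shape)   # torch.Size([2, 3, 224, 224])
print(out['img_metas'].data[0])   # [{'filename': 'a.jpg'}, {'filename': 'b.jpg'}]
```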
- """ - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module(*inputs[0], **kwargs[0]) - else: - return super().forward(*inputs, **kwargs) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def train_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.train_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - ' instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.train_step(*inputs[0], **kwargs[0]) - - def val_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.val_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - ' instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.val_step(*inputs[0], **kwargs[0]) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/distributed.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/distributed.py deleted file mode 100644 index 0188ca4a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/distributed.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel.distributed import (DistributedDataParallel, - _find_tensors) - -from mmcv import print_log -from mmcv.utils import TORCH_VERSION, digit_version -from .scatter_gather import scatter_kwargs - - -class MMDistributedDataParallel(DistributedDataParallel): - """The DDP module that supports DataContainer. - - MMDDP has two main differences with PyTorch DDP: - - - It supports a custom type :class:`DataContainer` which allows more - flexible control of input data. - - It implement two APIs ``train_step()`` and ``val_step()``. 
- """ - - def to_kwargs(self, inputs, kwargs, device_id): - # Use `self.to_kwargs` instead of `self.scatter` in pytorch1.8 - # to move all tensors to device_id - return scatter_kwargs(inputs, kwargs, [device_id], dim=self.dim) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def train_step(self, *inputs, **kwargs): - """train_step() API for module wrapped by DistributedDataParallel. - - This method is basically the same as - ``DistributedDataParallel.forward()``, while replacing - ``self.module.forward()`` with ``self.module.train_step()``. - It is compatible with PyTorch 1.1 - 1.5. - """ - - # In PyTorch >= 1.7, ``reducer._rebuild_buckets()`` is moved from the - # end of backward to the beginning of forward. - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.7') - and self.reducer._rebuild_buckets()): - print_log( - 'Reducer buckets have been rebuilt in this iteration.', - logger='mmcv') - - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.11.0a0')): - if self._check_sync_bufs_pre_fwd(): - self._sync_buffers() - else: - if (getattr(self, 'require_forward_param_sync', False) - and self.require_forward_param_sync): - self._sync_params() - - if self.device_ids: - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - if len(self.device_ids) == 1: - output = self.module.train_step(*inputs[0], **kwargs[0]) - else: - outputs = self.parallel_apply( - self._module_copies[:len(inputs)], inputs, kwargs) - output = self.gather(outputs, self.output_device) - else: - output = self.module.train_step(*inputs, **kwargs) - - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.11.0a0')): - if self._check_sync_bufs_post_fwd(): - self._sync_buffers() - - if (torch.is_grad_enabled() - and getattr(self, 'require_backward_grad_sync', False) - and self.require_backward_grad_sync): - if self.find_unused_parameters: - self.reducer.prepare_for_backward(list(_find_tensors(output))) - else: - self.reducer.prepare_for_backward([]) - else: - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) > digit_version('1.2')): - self.require_forward_param_sync = False - return output - - def val_step(self, *inputs, **kwargs): - """val_step() API for module wrapped by DistributedDataParallel. - - This method is basically the same as - ``DistributedDataParallel.forward()``, while replacing - ``self.module.forward()`` with ``self.module.val_step()``. - It is compatible with PyTorch 1.1 - 1.5. - """ - # In PyTorch >= 1.7, ``reducer._rebuild_buckets()`` is moved from the - # end of backward to the beginning of forward. 
- if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.7') - and self.reducer._rebuild_buckets()): - print_log( - 'Reducer buckets have been rebuilt in this iteration.', - logger='mmcv') - - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.11.0a0')): - if self._check_sync_bufs_pre_fwd(): - self._sync_buffers() - else: - if (getattr(self, 'require_forward_param_sync', False) - and self.require_forward_param_sync): - self._sync_params() - - if self.device_ids: - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - if len(self.device_ids) == 1: - output = self.module.val_step(*inputs[0], **kwargs[0]) - else: - outputs = self.parallel_apply( - self._module_copies[:len(inputs)], inputs, kwargs) - output = self.gather(outputs, self.output_device) - else: - output = self.module.val_step(*inputs, **kwargs) - - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.11.0a0')): - if self._check_sync_bufs_post_fwd(): - self._sync_buffers() - - if (torch.is_grad_enabled() - and getattr(self, 'require_backward_grad_sync', False) - and self.require_backward_grad_sync): - if self.find_unused_parameters: - self.reducer.prepare_for_backward(list(_find_tensors(output))) - else: - self.reducer.prepare_for_backward([]) - else: - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) > digit_version('1.2')): - self.require_forward_param_sync = False - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/distributed_deprecated.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/distributed_deprecated.py deleted file mode 100644 index d438b58b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/distributed_deprecated.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.distributed as dist -import torch.nn as nn -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - -from mmcv.utils import TORCH_VERSION, digit_version -from .registry import MODULE_WRAPPERS -from .scatter_gather import scatter_kwargs - - -@MODULE_WRAPPERS.register_module() -class MMDistributedDataParallel(nn.Module): - - def __init__(self, - module, - dim=0, - broadcast_buffers=True, - bucket_cap_mb=25): - super().__init__() - self.module = module - self.dim = dim - self.broadcast_buffers = broadcast_buffers - - self.broadcast_bucket_size = bucket_cap_mb * 1024 * 1024 - self._sync_params() - - def _dist_broadcast_coalesced(self, tensors, buffer_size): - for tensors in _take_tensors(tensors, buffer_size): - flat_tensors = _flatten_dense_tensors(tensors) - dist.broadcast(flat_tensors, 0) - for tensor, synced in zip( - tensors, _unflatten_dense_tensors(flat_tensors, tensors)): - tensor.copy_(synced) - - def _sync_params(self): - module_states = list(self.module.state_dict().values()) - if len(module_states) > 0: - self._dist_broadcast_coalesced(module_states, - self.broadcast_bucket_size) - if self.broadcast_buffers: - if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) < digit_version('1.0')): - buffers = [b.data for b in self.module._all_buffers()] - else: - buffers = [b.data for b in self.module.buffers()] - if len(buffers) > 0: - self._dist_broadcast_coalesced(buffers, - self.broadcast_bucket_size) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def forward(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - return self.module(*inputs[0], **kwargs[0]) - - def train_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.train_step(*inputs[0], **kwargs[0]) - return output - - def val_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.val_step(*inputs[0], **kwargs[0]) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/registry.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/registry.py deleted file mode 100644 index 144f9fb1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/registry.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from torch.nn.parallel import DataParallel, DistributedDataParallel - -from mmcv.utils import Registry - -MODULE_WRAPPERS = Registry('module wrapper') -MODULE_WRAPPERS.register_module(module=DataParallel) -MODULE_WRAPPERS.register_module(module=DistributedDataParallel) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/scatter_gather.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/scatter_gather.py deleted file mode 100644 index 900ff885..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/scatter_gather.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import Scatter as OrigScatter - -from ._functions import Scatter -from .data_container import DataContainer - - -def scatter(inputs, target_gpus, dim=0): - """Scatter inputs to target gpus. 
- - The only difference from original :func:`scatter` is to add support for - :type:`~mmcv.parallel.DataContainer`. - """ - - def scatter_map(obj): - if isinstance(obj, torch.Tensor): - if target_gpus != [-1]: - return OrigScatter.apply(target_gpus, None, dim, obj) - else: - # for CPU inference we use self-implemented scatter - return Scatter.forward(target_gpus, obj) - if isinstance(obj, DataContainer): - if obj.cpu_only: - return obj.data - else: - return Scatter.forward(target_gpus, obj.data) - if isinstance(obj, tuple) and len(obj) > 0: - return list(zip(*map(scatter_map, obj))) - if isinstance(obj, list) and len(obj) > 0: - out = list(map(list, zip(*map(scatter_map, obj)))) - return out - if isinstance(obj, dict) and len(obj) > 0: - out = list(map(type(obj), zip(*map(scatter_map, obj.items())))) - return out - return [obj for targets in target_gpus] - - # After scatter_map is called, a scatter_map cell will exist. This cell - # has a reference to the actual function scatter_map, which has references - # to a closure that has a reference to the scatter_map cell (because the - # fn is recursive). To avoid this reference cycle, we set the function to - # None, clearing the cell - try: - return scatter_map(inputs) - finally: - scatter_map = None - - -def scatter_kwargs(inputs, kwargs, target_gpus, dim=0): - """Scatter with support for kwargs dictionary.""" - inputs = scatter(inputs, target_gpus, dim) if inputs else [] - kwargs = scatter(kwargs, target_gpus, dim) if kwargs else [] - if len(inputs) < len(kwargs): - inputs.extend([() for _ in range(len(kwargs) - len(inputs))]) - elif len(kwargs) < len(inputs): - kwargs.extend([{} for _ in range(len(inputs) - len(kwargs))]) - inputs = tuple(inputs) - kwargs = tuple(kwargs) - return inputs, kwargs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/utils.py deleted file mode 100644 index cc800d6b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/parallel/utils.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .registry import MODULE_WRAPPERS - - -def is_module_wrapper(module): - """Check if a module is a module wrapper. - - The following 3 modules in MMCV (and their subclasses) are regarded as - module wrappers: DataParallel, DistributedDataParallel, - MMDistributedDataParallel (the deprecated version). You may add you own - module wrapper by registering it to mmcv.parallel.MODULE_WRAPPERS or - its children registries. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: True if the input module is a module wrapper. - """ - - def is_module_in_wrapper(module, module_wrapper): - module_wrappers = tuple(module_wrapper.module_dict.values()) - if isinstance(module, module_wrappers): - return True - for child in module_wrapper.children.values(): - if is_module_in_wrapper(module, child): - return True - return False - - return is_module_in_wrapper(module, MODULE_WRAPPERS) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/__init__.py deleted file mode 100644 index 183d5367..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/__init__.py +++ /dev/null @@ -1,73 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base_module import BaseModule, ModuleDict, ModuleList, Sequential -from .base_runner import BaseRunner -from .builder import RUNNERS, build_runner -from .checkpoint import (CheckpointLoader, _load_checkpoint, - _load_checkpoint_with_prefix, load_checkpoint, - load_state_dict, save_checkpoint, weights_to_cpu) -from .default_constructor import DefaultRunnerConstructor -from .dist_utils import (allreduce_grads, allreduce_params, get_dist_info, - init_dist, master_only) -from .epoch_based_runner import EpochBasedRunner, Runner -from .fp16_utils import LossScaler, auto_fp16, force_fp32, wrap_fp16_model -from .hooks import (HOOKS, CheckpointHook, ClearMLLoggerHook, ClosureHook, - DistEvalHook, DistSamplerSeedHook, DvcliveLoggerHook, - EMAHook, EvalHook, Fp16OptimizerHook, - GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, Hook, IterTimerHook, - LoggerHook, MlflowLoggerHook, NeptuneLoggerHook, - OptimizerHook, PaviLoggerHook, SegmindLoggerHook, - SyncBuffersHook, TensorboardLoggerHook, TextLoggerHook, - WandbLoggerHook) -from .hooks.lr_updater import StepLrUpdaterHook # noqa -from .hooks.lr_updater import (CosineAnnealingLrUpdaterHook, - CosineRestartLrUpdaterHook, CyclicLrUpdaterHook, - ExpLrUpdaterHook, FixedLrUpdaterHook, - FlatCosineAnnealingLrUpdaterHook, - InvLrUpdaterHook, LinearAnnealingLrUpdaterHook, - LrUpdaterHook, OneCycleLrUpdaterHook, - PolyLrUpdaterHook) -from .hooks.momentum_updater import (CosineAnnealingMomentumUpdaterHook, - CyclicMomentumUpdaterHook, - LinearAnnealingMomentumUpdaterHook, - MomentumUpdaterHook, - OneCycleMomentumUpdaterHook, - StepMomentumUpdaterHook) -from .iter_based_runner import IterBasedRunner, IterLoader -from .log_buffer import LogBuffer -from .optimizer import (OPTIMIZER_BUILDERS, OPTIMIZERS, - DefaultOptimizerConstructor, build_optimizer, - build_optimizer_constructor) -from .priority import Priority, get_priority -from .utils import get_host_info, get_time_str, obj_from_dict, set_random_seed - -# initialize ipu to registor ipu runner to RUNNERS -from mmcv.device import ipu # isort:skip # noqa - -__all__ = [ - 'BaseRunner', 'Runner', 'EpochBasedRunner', 'IterBasedRunner', 'LogBuffer', - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'FixedLrUpdaterHook', 'StepLrUpdaterHook', 'ExpLrUpdaterHook', - 'PolyLrUpdaterHook', 'InvLrUpdaterHook', 'CosineAnnealingLrUpdaterHook', - 'FlatCosineAnnealingLrUpdaterHook', 'CosineRestartLrUpdaterHook', - 'CyclicLrUpdaterHook', 'OneCycleLrUpdaterHook', 'MomentumUpdaterHook', - 'StepMomentumUpdaterHook', 'CosineAnnealingMomentumUpdaterHook', - 'CyclicMomentumUpdaterHook', 'OneCycleMomentumUpdaterHook', - 'OptimizerHook', 'IterTimerHook', 'DistSamplerSeedHook', 'LoggerHook', - 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook', - 'NeptuneLoggerHook', 'WandbLoggerHook', 'MlflowLoggerHook', - 'DvcliveLoggerHook', '_load_checkpoint', 'load_state_dict', - 'load_checkpoint', 'weights_to_cpu', 'save_checkpoint', 'Priority', - 'get_priority', 'get_host_info', 'get_time_str', 'obj_from_dict', - 'init_dist', 'get_dist_info', 'master_only', 'OPTIMIZER_BUILDERS', - 'OPTIMIZERS', 'DefaultOptimizerConstructor', 'build_optimizer', - 'build_optimizer_constructor', 'IterLoader', 'set_random_seed', - 'auto_fp16', 'force_fp32', 'wrap_fp16_model', 'Fp16OptimizerHook', - 'SyncBuffersHook', 'EMAHook', 'build_runner', 'RUNNERS', 'allreduce_grads', - 'allreduce_params', 'LossScaler', 'CheckpointLoader', 'BaseModule', - '_load_checkpoint_with_prefix', 'EvalHook', 'DistEvalHook', 
'Sequential', - 'ModuleDict', 'ModuleList', 'GradientCumulativeOptimizerHook', - 'GradientCumulativeFp16OptimizerHook', 'DefaultRunnerConstructor', - 'SegmindLoggerHook', 'LinearAnnealingMomentumUpdaterHook', - 'LinearAnnealingLrUpdaterHook', 'ClearMLLoggerHook' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/base_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/base_module.py deleted file mode 100644 index 7e64bdfb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/base_module.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings -from abc import ABCMeta -from collections import defaultdict -from logging import FileHandler -from typing import Iterable, Optional - -import torch.nn as nn - -from mmcv.runner.dist_utils import master_only -from mmcv.utils.logging import get_logger, logger_initialized, print_log - - -class BaseModule(nn.Module, metaclass=ABCMeta): - """Base module for all modules in openmmlab. - - ``BaseModule`` is a wrapper of ``torch.nn.Module`` with additional - functionality of parameter initialization. Compared with - ``torch.nn.Module``, ``BaseModule`` mainly adds three attributes. - - - ``init_cfg``: the config to control the initialization. - - ``init_weights``: The function of parameter initialization and recording - initialization information. - - ``_params_init_info``: Used to track the parameter initialization - information. This attribute only exists during executing the - ``init_weights``. - - Args: - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, init_cfg: Optional[dict] = None): - """Initialize BaseModule, inherited from `torch.nn.Module`""" - - # NOTE init_cfg can be defined in different levels, but init_cfg - # in low levels has a higher priority. - - super().__init__() - # define default value of init_cfg instead of hard code - # in init_weights() function - self._is_init = False - - self.init_cfg = copy.deepcopy(init_cfg) - - # Backward compatibility in derived classes - # if pretrained is not None: - # warnings.warn('DeprecationWarning: pretrained is a deprecated \ - # key, please consider using init_cfg') - # self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - @property - def is_init(self): - return self._is_init - - def init_weights(self): - """Initialize the weights.""" - - is_top_level_module = False - # check if it is top-level module - if not hasattr(self, '_params_init_info'): - # The `_params_init_info` is used to record the initialization - # information of the parameters - # the key should be the obj:`nn.Parameter` of model and the value - # should be a dict containing - # - init_info (str): The string that describes the initialization. - # - tmp_mean_value (FloatTensor): The mean of the parameter, - # which indicates whether the parameter has been modified. - # this attribute would be deleted after all parameters - # is initialized. 
- self._params_init_info = defaultdict(dict) - is_top_level_module = True - - # Initialize the `_params_init_info`, - # When detecting the `tmp_mean_value` of - # the corresponding parameter is changed, update related - # initialization information - for name, param in self.named_parameters(): - self._params_init_info[param][ - 'init_info'] = f'The value is the same before and ' \ - f'after calling `init_weights` ' \ - f'of {self.__class__.__name__} ' - self._params_init_info[param][ - 'tmp_mean_value'] = param.data.mean() - - # pass `params_init_info` to all submodules - # All submodules share the same `params_init_info`, - # so it will be updated when parameters are - # modified at any level of the model. - for sub_module in self.modules(): - sub_module._params_init_info = self._params_init_info - - # Get the initialized logger, if not exist, - # create a logger named `mmcv` - logger_names = list(logger_initialized.keys()) - logger_name = logger_names[0] if logger_names else 'mmcv' - - from ..cnn import initialize - from ..cnn.utils.weight_init import update_init_info - module_name = self.__class__.__name__ - if not self._is_init: - if self.init_cfg: - print_log( - f'initialize {module_name} with init_cfg {self.init_cfg}', - logger=logger_name) - initialize(self, self.init_cfg) - if isinstance(self.init_cfg, dict): - # prevent the parameters of - # the pre-trained model - # from being overwritten by - # the `init_weights` - if self.init_cfg['type'] == 'Pretrained': - return - - for m in self.children(): - if hasattr(m, 'init_weights'): - m.init_weights() - # users may overload the `init_weights` - update_init_info( - m, - init_info=f'Initialized by ' - f'user-defined `init_weights`' - f' in {m.__class__.__name__} ') - - self._is_init = True - else: - warnings.warn(f'init_weights of {self.__class__.__name__} has ' - f'been called more than once.') - - if is_top_level_module: - self._dump_init_info(logger_name) - - for sub_module in self.modules(): - del sub_module._params_init_info - - @master_only - def _dump_init_info(self, logger_name: str): - """Dump the initialization information to a file named - `initialization.log.json` in workdir. - - Args: - logger_name (str): The name of logger. - """ - - logger = get_logger(logger_name) - - with_file_handler = False - # dump the information to the logger file if there is a `FileHandler` - for handler in logger.handlers: - if isinstance(handler, FileHandler): - handler.stream.write( - 'Name of parameter - Initialization information\n') - for name, param in self.named_parameters(): - handler.stream.write( - f'\n{name} - {param.shape}: ' - f"\n{self._params_init_info[param]['init_info']} \n") - handler.stream.flush() - with_file_handler = True - if not with_file_handler: - for name, param in self.named_parameters(): - print_log( - f'\n{name} - {param.shape}: ' - f"\n{self._params_init_info[param]['init_info']} \n ", - logger=logger_name) - - def __repr__(self): - s = super().__repr__() - if self.init_cfg: - s += f'\ninit_cfg={self.init_cfg}' - return s - - -class Sequential(BaseModule, nn.Sequential): - """Sequential module in openmmlab. - - Args: - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, *args, init_cfg: Optional[dict] = None): - BaseModule.__init__(self, init_cfg) - nn.Sequential.__init__(self, *args) - - -class ModuleList(BaseModule, nn.ModuleList): - """ModuleList in openmmlab. - - Args: - modules (iterable, optional): an iterable of modules to add. 
- init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, - modules: Optional[Iterable] = None, - init_cfg: Optional[dict] = None): - BaseModule.__init__(self, init_cfg) - nn.ModuleList.__init__(self, modules) - - -class ModuleDict(BaseModule, nn.ModuleDict): - """ModuleDict in openmmlab. - - Args: - modules (dict, optional): a mapping (dictionary) of (string: module) - or an iterable of key-value pairs of type (string, module). - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, - modules: Optional[dict] = None, - init_cfg: Optional[dict] = None): - BaseModule.__init__(self, init_cfg) - nn.ModuleDict.__init__(self, modules) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/base_runner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/base_runner.py deleted file mode 100644 index 12a0025f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/base_runner.py +++ /dev/null @@ -1,544 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import logging -import os.path as osp -import warnings -from abc import ABCMeta, abstractmethod - -import torch -from torch.optim import Optimizer - -import mmcv -from ..parallel import is_module_wrapper -from .checkpoint import load_checkpoint -from .dist_utils import get_dist_info -from .hooks import HOOKS, Hook -from .log_buffer import LogBuffer -from .priority import Priority, get_priority -from .utils import get_time_str - - -class BaseRunner(metaclass=ABCMeta): - """The base class of Runner, a training helper for PyTorch. - - All subclasses should implement the following APIs: - - - ``run()`` - - ``train()`` - - ``val()`` - - ``save_checkpoint()`` - - Args: - model (:obj:`torch.nn.Module`): The model to be run. - batch_processor (callable): A callable method that process a data - batch. The interface of this method should be - `batch_processor(model, data, train_mode) -> dict` - optimizer (dict or :obj:`torch.optim.Optimizer`): It can be either an - optimizer (in most cases) or a dict of optimizers (in models that - requires more than one optimizer, e.g., GAN). - work_dir (str, optional): The working directory to save checkpoints - and logs. Defaults to None. - logger (:obj:`logging.Logger`): Logger used during training. - Defaults to None. (The default value is just for backward - compatibility) - meta (dict | None): A dict records some import information such as - environment info and seed, which will be logged in logger hook. - Defaults to None. - max_epochs (int, optional): Total training epochs. - max_iters (int, optional): Total training iterations. - """ - - def __init__(self, - model, - batch_processor=None, - optimizer=None, - work_dir=None, - logger=None, - meta=None, - max_iters=None, - max_epochs=None): - if batch_processor is not None: - if not callable(batch_processor): - raise TypeError('batch_processor must be callable, ' - f'but got {type(batch_processor)}') - warnings.warn( - 'batch_processor is deprecated, please implement ' - 'train_step() and val_step() in the model instead.', - DeprecationWarning) - # raise an error is `batch_processor` is not None and - # `model.train_step()` exists. 
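`BaseModule`, removed above, layers `init_cfg`-driven weight initialization on top of `torch.nn.Module`. A short sketch of the intended subclassing pattern; the `TinyHead` name and the `Normal` initializer config are illustrative assumptions based on mmcv's weight-init registry rather than anything in this patch:

```python
import torch.nn as nn
from mmcv.runner import BaseModule

class TinyHead(BaseModule):
    """Toy module whose weights are set up via ``init_cfg``."""

    def __init__(self, init_cfg=dict(type='Normal', layer='Linear', std=0.01)):
        super().__init__(init_cfg=init_cfg)
        self.fc = nn.Linear(16, 4)

    def forward(self, x):
        return self.fc(x)

head = TinyHead()
head.init_weights()   # applies the Normal init and records what was initialized
```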
- if is_module_wrapper(model): - _model = model.module - else: - _model = model - if hasattr(_model, 'train_step') or hasattr(_model, 'val_step'): - raise RuntimeError( - 'batch_processor and model.train_step()/model.val_step() ' - 'cannot be both available.') - else: - assert hasattr(model, 'train_step') - - # check the type of `optimizer` - if isinstance(optimizer, dict): - for name, optim in optimizer.items(): - if not isinstance(optim, Optimizer): - raise TypeError( - f'optimizer must be a dict of torch.optim.Optimizers, ' - f'but optimizer["{name}"] is a {type(optim)}') - elif not isinstance(optimizer, Optimizer) and optimizer is not None: - raise TypeError( - f'optimizer must be a torch.optim.Optimizer object ' - f'or dict or None, but got {type(optimizer)}') - - # check the type of `logger` - if not isinstance(logger, logging.Logger): - raise TypeError(f'logger must be a logging.Logger object, ' - f'but got {type(logger)}') - - # check the type of `meta` - if meta is not None and not isinstance(meta, dict): - raise TypeError( - f'meta must be a dict or None, but got {type(meta)}') - - self.model = model - self.batch_processor = batch_processor - self.optimizer = optimizer - self.logger = logger - self.meta = meta - # create work_dir - if mmcv.is_str(work_dir): - self.work_dir = osp.abspath(work_dir) - mmcv.mkdir_or_exist(self.work_dir) - elif work_dir is None: - self.work_dir = None - else: - raise TypeError('"work_dir" must be a str or None') - - # get model name from the model class - if hasattr(self.model, 'module'): - self._model_name = self.model.module.__class__.__name__ - else: - self._model_name = self.model.__class__.__name__ - - self._rank, self._world_size = get_dist_info() - self.timestamp = get_time_str() - self.mode = None - self._hooks = [] - self._epoch = 0 - self._iter = 0 - self._inner_iter = 0 - - if max_epochs is not None and max_iters is not None: - raise ValueError( - 'Only one of `max_epochs` or `max_iters` can be set.') - - self._max_epochs = max_epochs - self._max_iters = max_iters - # TODO: Redesign LogBuffer, it is not flexible and elegant enough - self.log_buffer = LogBuffer() - - @property - def model_name(self): - """str: Name of the model, usually the module class name.""" - return self._model_name - - @property - def rank(self): - """int: Rank of current process. (distributed training)""" - return self._rank - - @property - def world_size(self): - """int: Number of processes participating in the job. - (distributed training)""" - return self._world_size - - @property - def hooks(self): - """list[:obj:`Hook`]: A list of registered hooks.""" - return self._hooks - - @property - def epoch(self): - """int: Current epoch.""" - return self._epoch - - @property - def iter(self): - """int: Current iteration.""" - return self._iter - - @property - def inner_iter(self): - """int: Iteration in an epoch.""" - return self._inner_iter - - @property - def max_epochs(self): - """int: Maximum training epochs.""" - return self._max_epochs - - @property - def max_iters(self): - """int: Maximum training iterations.""" - return self._max_iters - - @abstractmethod - def train(self): - pass - - @abstractmethod - def val(self): - pass - - @abstractmethod - def run(self, data_loaders, workflow, **kwargs): - pass - - @abstractmethod - def save_checkpoint(self, - out_dir, - filename_tmpl, - save_optimizer=True, - meta=None, - create_symlink=True): - pass - - def current_lr(self): - """Get current learning rates. 
- - Returns: - list[float] | dict[str, list[float]]: Current learning rates of all - param groups. If the runner has a dict of optimizers, this method - will return a dict. - """ - if isinstance(self.optimizer, torch.optim.Optimizer): - lr = [group['lr'] for group in self.optimizer.param_groups] - elif isinstance(self.optimizer, dict): - lr = dict() - for name, optim in self.optimizer.items(): - lr[name] = [group['lr'] for group in optim.param_groups] - else: - raise RuntimeError( - 'lr is not applicable because optimizer does not exist.') - return lr - - def current_momentum(self): - """Get current momentums. - - Returns: - list[float] | dict[str, list[float]]: Current momentums of all - param groups. If the runner has a dict of optimizers, this method - will return a dict. - """ - - def _get_momentum(optimizer): - momentums = [] - for group in optimizer.param_groups: - if 'momentum' in group.keys(): - momentums.append(group['momentum']) - elif 'betas' in group.keys(): - momentums.append(group['betas'][0]) - else: - momentums.append(0) - return momentums - - if self.optimizer is None: - raise RuntimeError( - 'momentum is not applicable because optimizer does not exist.') - elif isinstance(self.optimizer, torch.optim.Optimizer): - momentums = _get_momentum(self.optimizer) - elif isinstance(self.optimizer, dict): - momentums = dict() - for name, optim in self.optimizer.items(): - momentums[name] = _get_momentum(optim) - return momentums - - def register_hook(self, hook, priority='NORMAL'): - """Register a hook into the hook list. - - The hook will be inserted into a priority queue, with the specified - priority (See :class:`Priority` for details of priorities). - For hooks with the same priority, they will be triggered in the same - order as they are registered. - - Args: - hook (:obj:`Hook`): The hook to be registered. - priority (int or str or :obj:`Priority`): Hook priority. - Lower value means higher priority. - """ - assert isinstance(hook, Hook) - if hasattr(hook, 'priority'): - raise ValueError('"priority" is a reserved attribute for hooks') - priority = get_priority(priority) - hook.priority = priority - # insert the hook to a sorted list - inserted = False - for i in range(len(self._hooks) - 1, -1, -1): - if priority >= self._hooks[i].priority: - self._hooks.insert(i + 1, hook) - inserted = True - break - if not inserted: - self._hooks.insert(0, hook) - - def register_hook_from_cfg(self, hook_cfg): - """Register a hook from its cfg. - - Args: - hook_cfg (dict): Hook config. It should have at least keys 'type' - and 'priority' indicating its type and priority. - - Note: - The specific hook class to register should not use 'type' and - 'priority' arguments during initialization. - """ - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = mmcv.build_from_cfg(hook_cfg, HOOKS) - self.register_hook(hook, priority=priority) - - def call_hook(self, fn_name): - """Call all hooks. - - Args: - fn_name (str): The function name in each hook to be called, such as - "before_train_epoch". 
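`BaseRunner.register_hook`, shown above, keeps hooks in a priority-sorted list so that hooks with lower priority values fire first for each event. A minimal sketch using `EpochBasedRunner` from the same removed runner package; `SimpleModel` and `PrintHook` are illustrative stand-ins:

```python
import logging
import torch
import torch.nn as nn
from mmcv.runner import EpochBasedRunner, Hook

class SimpleModel(nn.Module):
    """Tiny model exposing the ``train_step`` API the runner expects."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2, 1)

    def train_step(self, data, optimizer=None, **kwargs):
        loss = self.fc(data).mean()
        return dict(loss=loss, log_vars=dict(loss=loss.item()), num_samples=len(data))

class PrintHook(Hook):
    def before_train_epoch(self, runner):
        print(f'starting epoch {runner.epoch}')

model = SimpleModel()
runner = EpochBasedRunner(
    model=model,
    optimizer=torch.optim.SGD(model.parameters(), lr=0.1),
    work_dir=None,                       # no checkpoints needed for this sketch
    logger=logging.getLogger('mmcv'),
    max_epochs=1)

# 'LOW' sorts after 'NORMAL', so this hook fires after the default ones
runner.register_hook(PrintHook(), priority='LOW')
```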
- """ - for hook in self._hooks: - getattr(hook, fn_name)(self) - - def get_hook_info(self): - # Get hooks info in each stage - stage_hook_map = {stage: [] for stage in Hook.stages} - for hook in self.hooks: - try: - priority = Priority(hook.priority).name - except ValueError: - priority = hook.priority - classname = hook.__class__.__name__ - hook_info = f'({priority:<12}) {classname:<35}' - for trigger_stage in hook.get_triggered_stages(): - stage_hook_map[trigger_stage].append(hook_info) - - stage_hook_infos = [] - for stage in Hook.stages: - hook_infos = stage_hook_map[stage] - if len(hook_infos) > 0: - info = f'{stage}:\n' - info += '\n'.join(hook_infos) - info += '\n -------------------- ' - stage_hook_infos.append(info) - return '\n'.join(stage_hook_infos) - - def load_checkpoint(self, - filename, - map_location='cpu', - strict=False, - revise_keys=[(r'^module.', '')]): - return load_checkpoint( - self.model, - filename, - map_location, - strict, - self.logger, - revise_keys=revise_keys) - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - if map_location == 'default': - if torch.cuda.is_available(): - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint(checkpoint) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - if self.meta is None: - self.meta = {} - self.meta.setdefault('hook_msgs', {}) - # load `last_ckpt`, `best_score`, `best_ckpt`, etc. for hook messages - self.meta['hook_msgs'].update(checkpoint['meta'].get('hook_msgs', {})) - - # Re-calculate the number of iterations when resuming - # models with different number of GPUs - if 'config' in checkpoint['meta']: - config = mmcv.Config.fromstring( - checkpoint['meta']['config'], file_format='.py') - previous_gpu_ids = config.get('gpu_ids', None) - if previous_gpu_ids and len(previous_gpu_ids) > 0 and len( - previous_gpu_ids) != self.world_size: - self._iter = int(self._iter * len(previous_gpu_ids) / - self.world_size) - self.logger.info('the iteration number is changed due to ' - 'change of GPU number') - - # resume meta information meta - self.meta = checkpoint['meta'] - - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - self.logger.info('resumed epoch %d, iter %d', self.epoch, self.iter) - - def register_lr_hook(self, lr_config): - if lr_config is None: - return - elif isinstance(lr_config, dict): - assert 'policy' in lr_config - policy_type = lr_config.pop('policy') - # If the type of policy is all in lower case, e.g., 'cyclic', - # then its first letter will be capitalized, e.g., to be 'Cyclic'. - # This is for the convenient usage of Lr updater. - # Since this is not applicable for ` - # CosineAnnealingLrUpdater`, - # the string will not be changed if it contains capital letters. 
- if policy_type == policy_type.lower(): - policy_type = policy_type.title() - hook_type = policy_type + 'LrUpdaterHook' - lr_config['type'] = hook_type - hook = mmcv.build_from_cfg(lr_config, HOOKS) - else: - hook = lr_config - self.register_hook(hook, priority='VERY_HIGH') - - def register_momentum_hook(self, momentum_config): - if momentum_config is None: - return - if isinstance(momentum_config, dict): - assert 'policy' in momentum_config - policy_type = momentum_config.pop('policy') - # If the type of policy is all in lower case, e.g., 'cyclic', - # then its first letter will be capitalized, e.g., to be 'Cyclic'. - # This is for the convenient usage of momentum updater. - # Since this is not applicable for - # `CosineAnnealingMomentumUpdater`, - # the string will not be changed if it contains capital letters. - if policy_type == policy_type.lower(): - policy_type = policy_type.title() - hook_type = policy_type + 'MomentumUpdaterHook' - momentum_config['type'] = hook_type - hook = mmcv.build_from_cfg(momentum_config, HOOKS) - else: - hook = momentum_config - self.register_hook(hook, priority='HIGH') - - def register_optimizer_hook(self, optimizer_config): - if optimizer_config is None: - return - if isinstance(optimizer_config, dict): - optimizer_config.setdefault('type', 'OptimizerHook') - hook = mmcv.build_from_cfg(optimizer_config, HOOKS) - else: - hook = optimizer_config - self.register_hook(hook, priority='ABOVE_NORMAL') - - def register_checkpoint_hook(self, checkpoint_config): - if checkpoint_config is None: - return - if isinstance(checkpoint_config, dict): - checkpoint_config.setdefault('type', 'CheckpointHook') - hook = mmcv.build_from_cfg(checkpoint_config, HOOKS) - else: - hook = checkpoint_config - self.register_hook(hook, priority='NORMAL') - - def register_logger_hooks(self, log_config): - if log_config is None: - return - log_interval = log_config['interval'] - for info in log_config['hooks']: - logger_hook = mmcv.build_from_cfg( - info, HOOKS, default_args=dict(interval=log_interval)) - self.register_hook(logger_hook, priority='VERY_LOW') - - def register_timer_hook(self, timer_config): - if timer_config is None: - return - if isinstance(timer_config, dict): - timer_config_ = copy.deepcopy(timer_config) - hook = mmcv.build_from_cfg(timer_config_, HOOKS) - else: - hook = timer_config - self.register_hook(hook, priority='LOW') - - def register_custom_hooks(self, custom_config): - if custom_config is None: - return - - if not isinstance(custom_config, list): - custom_config = [custom_config] - - for item in custom_config: - if isinstance(item, dict): - self.register_hook_from_cfg(item) - else: - self.register_hook(item, priority='NORMAL') - - def register_profiler_hook(self, profiler_config): - if profiler_config is None: - return - if isinstance(profiler_config, dict): - profiler_config.setdefault('type', 'ProfilerHook') - hook = mmcv.build_from_cfg(profiler_config, HOOKS) - else: - hook = profiler_config - self.register_hook(hook) - - def register_training_hooks(self, - lr_config, - optimizer_config=None, - checkpoint_config=None, - log_config=None, - momentum_config=None, - timer_config=dict(type='IterTimerHook'), - custom_hooks_config=None): - """Register default and custom hooks for training. 
- - Default and custom hooks include: - - +----------------------+-------------------------+ - | Hooks | Priority | - +======================+=========================+ - | LrUpdaterHook | VERY_HIGH (10) | - +----------------------+-------------------------+ - | MomentumUpdaterHook | HIGH (30) | - +----------------------+-------------------------+ - | OptimizerStepperHook | ABOVE_NORMAL (40) | - +----------------------+-------------------------+ - | CheckpointSaverHook | NORMAL (50) | - +----------------------+-------------------------+ - | IterTimerHook | LOW (70) | - +----------------------+-------------------------+ - | LoggerHook(s) | VERY_LOW (90) | - +----------------------+-------------------------+ - | CustomHook(s) | defaults to NORMAL (50) | - +----------------------+-------------------------+ - - If custom hooks have same priority with default hooks, custom hooks - will be triggered after default hooks. - """ - self.register_lr_hook(lr_config) - self.register_momentum_hook(momentum_config) - self.register_optimizer_hook(optimizer_config) - self.register_checkpoint_hook(checkpoint_config) - self.register_timer_hook(timer_config) - self.register_logger_hooks(log_config) - self.register_custom_hooks(custom_hooks_config) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/builder.py deleted file mode 100644 index 008da32a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/builder.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -from typing import Optional - -from ..utils import Registry - -RUNNERS = Registry('runner') -RUNNER_BUILDERS = Registry('runner builder') - - -def build_runner_constructor(cfg: dict): - return RUNNER_BUILDERS.build(cfg) - - -def build_runner(cfg: dict, default_args: Optional[dict] = None): - runner_cfg = copy.deepcopy(cfg) - constructor_type = runner_cfg.pop('constructor', - 'DefaultRunnerConstructor') - runner_constructor = build_runner_constructor( - dict( - type=constructor_type, - runner_cfg=runner_cfg, - default_args=default_args)) - runner = runner_constructor() - return runner diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/checkpoint.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/checkpoint.py deleted file mode 100644 index 693dee93..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/checkpoint.py +++ /dev/null @@ -1,800 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import io -import logging -import os -import os.path as osp -import pkgutil -import re -import time -import warnings -from collections import OrderedDict -from importlib import import_module -from tempfile import TemporaryDirectory -from typing import Callable, List, Optional, Sequence, Union - -import torch -import torchvision -from torch.optim import Optimizer - -import mmcv -from ..fileio import FileClient -from ..fileio import load as load_file -from ..parallel import is_module_wrapper -from ..utils import digit_version, load_url, mkdir_or_exist -from .dist_utils import get_dist_info - -ENV_MMCV_HOME = 'MMCV_HOME' -ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME' -DEFAULT_CACHE_DIR = '~/.cache' - - -def _get_mmcv_home(): - mmcv_home = os.path.expanduser( - os.getenv( - ENV_MMCV_HOME, - os.path.join( - os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv'))) - - mkdir_or_exist(mmcv_home) - return mmcv_home - - -def load_state_dict(module: torch.nn.Module, - state_dict: Union[dict, OrderedDict], - strict: bool = False, - logger: Optional[logging.Logger] = None) -> None: - """Load state_dict to a module. - - This method is modified from :meth:`torch.nn.Module.load_state_dict`. - Default value for ``strict`` is set to ``False`` and the message for - param mismatch will be shown even if strict is False. - - Args: - module (Module): Module that receives the state_dict. - state_dict (OrderedDict): Weights. - strict (bool): whether to strictly enforce that the keys - in :attr:`state_dict` match the keys returned by this module's - :meth:`~torch.nn.Module.state_dict` function. Default: ``False``. - logger (:obj:`logging.Logger`, optional): Logger to log the error - message. If not specified, print function will be used. - """ - unexpected_keys: List = [] - all_missing_keys: List = [] - err_msg: List = [] - - metadata = getattr(state_dict, '_metadata', None) - state_dict = state_dict.copy() - if metadata is not None: - state_dict._metadata = metadata # type: ignore - - # use _load_from_state_dict to enable checkpoint version control - def load(module, prefix=''): - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - local_metadata = {} if metadata is None else metadata.get( - prefix[:-1], {}) - module._load_from_state_dict(state_dict, prefix, local_metadata, True, - all_missing_keys, unexpected_keys, - err_msg) - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + '.') - - load(module) - # break load->load reference cycle - load = None # type: ignore - - # ignore "num_batches_tracked" of BN layers - missing_keys = [ - key for key in all_missing_keys if 'num_batches_tracked' not in key - ] - - if unexpected_keys: - err_msg.append('unexpected key in source ' - f'state_dict: {", ".join(unexpected_keys)}\n') - if missing_keys: - err_msg.append( - f'missing keys in source state_dict: {", ".join(missing_keys)}\n') - - rank, _ = get_dist_info() - if len(err_msg) > 0 and rank == 0: - err_msg.insert( - 0, 'The model and loaded state dict do not match exactly\n') - err_msg = '\n'.join(err_msg) # type: ignore - if strict: - raise RuntimeError(err_msg) - elif logger is not None: - logger.warning(err_msg) - else: - print(err_msg) - - -def get_torchvision_models(): - if digit_version(torchvision.__version__) < digit_version('0.13.0a0'): - model_urls = dict() - # When the version of torchvision is lower than 0.13, the model url is - # not 
declared in `torchvision.model.__init__.py`, so we need to - # iterate through `torchvision.models.__path__` to get the url for each - # model. - for _, name, ispkg in pkgutil.walk_packages( - torchvision.models.__path__): - if ispkg: - continue - _zoo = import_module(f'torchvision.models.{name}') - if hasattr(_zoo, 'model_urls'): - _urls = getattr(_zoo, 'model_urls') - model_urls.update(_urls) - else: - # Since torchvision bumps to v0.13, the weight loading logic, - # model keys and model urls have been changed. Here the URLs of old - # version is loaded to avoid breaking back compatibility. If the - # torchvision version>=0.13.0, new URLs will be added. Users can get - # the resnet50 checkpoint by setting 'resnet50.imagent1k_v1', - # 'resnet50' or 'ResNet50_Weights.IMAGENET1K_V1' in the config. - json_path = osp.join(mmcv.__path__[0], - 'model_zoo/torchvision_0.12.json') - model_urls = mmcv.load(json_path) - for cls_name, cls in torchvision.models.__dict__.items(): - # The name of torchvision model weights classes ends with - # `_Weights` such as `ResNet18_Weights`. However, some model weight - # classes, such as `MNASNet0_75_Weights` does not have any urls in - # torchvision 0.13.0 and cannot be iterated. Here we simply check - # `DEFAULT` attribute to ensure the class is not empty. - if (not cls_name.endswith('_Weights') - or not hasattr(cls, 'DEFAULT')): - continue - # Since `cls.DEFAULT` can not be accessed by iterating cls, we set - # default urls explicitly. - cls_key = cls_name.replace('_Weights', '').lower() - model_urls[f'{cls_key}.default'] = cls.DEFAULT.url - for weight_enum in cls: - cls_key = cls_name.replace('_Weights', '').lower() - cls_key = f'{cls_key}.{weight_enum.name.lower()}' - model_urls[cls_key] = weight_enum.url - - return model_urls - - -def get_external_models(): - mmcv_home = _get_mmcv_home() - default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json') - default_urls = load_file(default_json_path) - assert isinstance(default_urls, dict) - external_json_path = osp.join(mmcv_home, 'open_mmlab.json') - if osp.exists(external_json_path): - external_urls = load_file(external_json_path) - assert isinstance(external_urls, dict) - default_urls.update(external_urls) - - return default_urls - - -def get_mmcls_models(): - mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json') - mmcls_urls = load_file(mmcls_json_path) - - return mmcls_urls - - -def get_deprecated_model_names(): - deprecate_json_path = osp.join(mmcv.__path__[0], - 'model_zoo/deprecated.json') - deprecate_urls = load_file(deprecate_json_path) - assert isinstance(deprecate_urls, dict) - - return deprecate_urls - - -def _process_mmcls_checkpoint(checkpoint): - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - # Some checkpoints converted from 3rd-party repo don't - # have the "state_dict" key. 
- state_dict = checkpoint - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k.startswith('backbone.'): - new_state_dict[k[9:]] = v - new_checkpoint = dict(state_dict=new_state_dict) - - return new_checkpoint - - -class CheckpointLoader: - """A general checkpoint loader to manage all schemes.""" - - _schemes: dict = {} - - @classmethod - def _register_scheme(cls, prefixes, loader, force=False): - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if (prefix not in cls._schemes) or force: - cls._schemes[prefix] = loader - else: - raise KeyError( - f'{prefix} is already registered as a loader backend, ' - 'add "force=True" if you want to override it') - # sort, longer prefixes take priority - cls._schemes = OrderedDict( - sorted(cls._schemes.items(), key=lambda t: t[0], reverse=True)) - - @classmethod - def register_scheme(cls, - prefixes: Union[str, Sequence[str]], - loader: Optional[Callable] = None, - force: bool = False): - """Register a loader to CheckpointLoader. - - This method can be used as a normal class method or a decorator. - - Args: - prefixes (str or Sequence[str]): - The prefix of the registered loader. - loader (function, optional): The loader function to be registered. - When this method is used as a decorator, loader is None. - Defaults to None. - force (bool, optional): Whether to override the loader - if the prefix has already been registered. Defaults to False. - """ - - if loader is not None: - cls._register_scheme(prefixes, loader, force=force) - return - - def _register(loader_cls): - cls._register_scheme(prefixes, loader_cls, force=force) - return loader_cls - - return _register - - @classmethod - def _get_checkpoint_loader(cls, path): - """Finds a loader that supports the given path. Falls back to the local - loader if no other loader is found. - - Args: - path (str): checkpoint path - - Returns: - callable: checkpoint loader - """ - for p in cls._schemes: - # use regular match to handle some cases that where the prefix of - # loader has a prefix. For example, both 's3://path' and - # 'open-mmlab:s3://path' should return `load_from_ceph` - if re.match(p, path) is not None: - return cls._schemes[p] - - @classmethod - def load_checkpoint( - cls, - filename: str, - map_location: Optional[str] = None, - logger: Optional[logging.Logger] = None - ) -> Union[dict, OrderedDict]: - """load checkpoint through URL scheme path. - - Args: - filename (str): checkpoint file name with given prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - logger (:mod:`logging.Logger`, optional): The logger for message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - checkpoint_loader = cls._get_checkpoint_loader(filename) - class_name = checkpoint_loader.__name__ - mmcv.print_log( - f'load checkpoint from {class_name[10:]} path: {filename}', logger) - return checkpoint_loader(filename, map_location) - - -@CheckpointLoader.register_scheme(prefixes='') -def load_from_local( - filename: str, - map_location: Optional[str] = None) -> Union[dict, OrderedDict]: - """load checkpoint by local file path. - - Args: - filename (str): local checkpoint file path - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - filename = osp.expanduser(filename) - if not osp.isfile(filename): - raise FileNotFoundError(f'{filename} can not be found.') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=('http://', 'https://')) -def load_from_http( - filename: str, - map_location: Optional[str] = None, - model_dir: Optional[str] = None) -> Union[dict, OrderedDict]: - """load checkpoint through HTTP or HTTPS scheme path. In distributed - setting, this function only download checkpoint at local rank 0. - - Args: - filename (str): checkpoint file path with modelzoo or - torchvision prefix - map_location (str, optional): Same as :func:`torch.load`. - model_dir (str, optional): directory in which to save the object, - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - rank, world_size = get_dist_info() - if rank == 0: - checkpoint = load_url( - filename, model_dir=model_dir, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - checkpoint = load_url( - filename, model_dir=model_dir, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='pavi://') -def load_from_pavi( - filename: str, - map_location: Optional[str] = None) -> Union[dict, OrderedDict]: - """load checkpoint through the file path prefixed with pavi. In distributed - setting, this function download ckpt at all ranks to different temporary - directories. - - Args: - filename (str): checkpoint file path with pavi prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - assert filename.startswith('pavi://'), \ - f'Expected filename startswith `pavi://`, but get {filename}' - model_path = filename[7:] - - try: - from pavi import modelcloud - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load(downloaded_file, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=r'(\S+\:)?s3://') -def load_from_ceph(filename: str, - map_location: Optional[str] = None, - backend: str = 'petrel') -> Union[dict, OrderedDict]: - """load checkpoint through the file path prefixed with s3. In distributed - setting, this function download ckpt at all ranks to different temporary - directories. - - Note: - Since v1.4.1, the registered scheme prefixes have been enhanced to - support bucket names in the path prefix, e.g. 's3://xx.xx/xx.path', - 'bucket1:s3://xx.xx/xx.path'. - - Args: - filename (str): checkpoint file path with s3 prefix - map_location (str, optional): Same as :func:`torch.load`. - backend (str): The storage backend type. Options are 'ceph', - 'petrel'. Default: 'petrel'. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - allowed_backends = ['ceph', 'petrel'] - if backend not in allowed_backends: - raise ValueError(f'Load from Backend {backend} is not supported.') - - if backend == 'ceph': - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead', - DeprecationWarning) - - # CephClient and PetrelBackend have the same prefix 's3://' and the latter - # will be chosen as default. If PetrelBackend can not be instantiated - # successfully, the CephClient will be chosen. - try: - file_client = FileClient(backend=backend) - except ImportError: - allowed_backends.remove(backend) - file_client = FileClient(backend=allowed_backends[0]) - - with io.BytesIO(file_client.get(filename)) as buffer: - checkpoint = torch.load(buffer, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=('modelzoo://', 'torchvision://')) -def load_from_torchvision( - filename: str, - map_location: Optional[str] = None) -> Union[dict, OrderedDict]: - """load checkpoint through the file path prefixed with modelzoo or - torchvision. - - Args: - filename (str): checkpoint file path with modelzoo or - torchvision prefix - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - model_urls = get_torchvision_models() - if filename.startswith('modelzoo://'): - warnings.warn( - 'The URL scheme of "modelzoo://" is deprecated, please ' - 'use "torchvision://" instead', DeprecationWarning) - model_name = filename[11:] - else: - model_name = filename[14:] - - # Support getting model urls in the same way as torchvision - # `ResNet50_Weights.IMAGENET1K_V1` will be mapped to - # resnet50.imagenet1k_v1. - model_name = model_name.lower().replace('_weights', '') - return load_from_http(model_urls[model_name], map_location=map_location) - - -@CheckpointLoader.register_scheme(prefixes=('open-mmlab://', 'openmmlab://')) -def load_from_openmmlab( - filename: str, - map_location: Optional[str] = None) -> Union[dict, OrderedDict]: - """load checkpoint through the file path prefixed with open-mmlab or - openmmlab. - - Args: - filename (str): checkpoint file path with open-mmlab or - openmmlab prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - model_urls = get_external_models() - prefix_str = 'open-mmlab://' - if filename.startswith(prefix_str): - model_name = filename[13:] - else: - model_name = filename[12:] - prefix_str = 'openmmlab://' - - deprecated_urls = get_deprecated_model_names() - if model_name in deprecated_urls: - warnings.warn( - f'{prefix_str}{model_name} is deprecated in favor ' - f'of {prefix_str}{deprecated_urls[model_name]}', - DeprecationWarning) - model_name = deprecated_urls[model_name] - model_url = model_urls[model_name] - # check if is url - if model_url.startswith(('http://', 'https://')): - checkpoint = load_from_http(model_url, map_location=map_location) - else: - filename = osp.join(_get_mmcv_home(), model_url) - if not osp.isfile(filename): - raise FileNotFoundError(f'{filename} can not be found.') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='mmcls://') -def load_from_mmcls( - filename: str, - map_location: Optional[str] = None) -> Union[dict, OrderedDict]: - """load checkpoint through the file path prefixed with mmcls. 
- - Args: - filename (str): checkpoint file path with mmcls prefix - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - model_urls = get_mmcls_models() - model_name = filename[8:] - checkpoint = load_from_http( - model_urls[model_name], map_location=map_location) - checkpoint = _process_mmcls_checkpoint(checkpoint) - return checkpoint - - -def _load_checkpoint( - filename: str, - map_location: Optional[str] = None, - logger: Optional[logging.Logger] = None) -> Union[dict, OrderedDict]: - """Load checkpoint from somewhere (modelzoo, file, url). - - Args: - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str, optional): Same as :func:`torch.load`. - Default: None. - logger (:mod:`logging.Logger`, optional): The logger for error message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. It can be either an - OrderedDict storing model weights or a dict containing other - information, which depends on the checkpoint. - """ - return CheckpointLoader.load_checkpoint(filename, map_location, logger) - - -def _load_checkpoint_with_prefix( - prefix: str, - filename: str, - map_location: Optional[str] = None) -> Union[dict, OrderedDict]: - """Load partial pretrained model with specific prefix. - - Args: - prefix (str): The prefix of sub-module. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str | None): Same as :func:`torch.load`. Default: None. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - checkpoint = _load_checkpoint(filename, map_location=map_location) - - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - if not prefix.endswith('.'): - prefix += '.' - prefix_len = len(prefix) - - state_dict = { - k[prefix_len:]: v - for k, v in state_dict.items() if k.startswith(prefix) - } - - assert state_dict, f'{prefix} is not in the pretrained model' - return state_dict - - -def load_checkpoint( - model: torch.nn.Module, - filename: str, - map_location: Optional[str] = None, - strict: bool = False, - logger: Optional[logging.Logger] = None, - revise_keys: list = [(r'^module\.', '')]) -> Union[dict, OrderedDict]: - """Load checkpoint from a file or URI. - - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - revise_keys (list): A list of customized keywords to modify the - state_dict in checkpoint. Each item is a (pattern, replacement) - pair of the regular expression operations. Default: strip - the prefix 'module.' by [(r'^module\\.', '')]. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - checkpoint = _load_checkpoint(filename, map_location, logger) - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - # strip prefix of state_dict - metadata = getattr(state_dict, '_metadata', OrderedDict()) - for p, r in revise_keys: - state_dict = OrderedDict( - {re.sub(p, r, k): v - for k, v in state_dict.items()}) - # Keep metadata in state_dict - state_dict._metadata = metadata - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - - -def weights_to_cpu(state_dict: OrderedDict) -> OrderedDict: - """Copy a model state_dict to cpu. - - Args: - state_dict (OrderedDict): Model weights on GPU. - - Returns: - OrderedDict: Model weights on GPU. - """ - state_dict_cpu = OrderedDict() - for key, val in state_dict.items(): - state_dict_cpu[key] = val.cpu() - # Keep metadata in state_dict - state_dict_cpu._metadata = getattr( # type: ignore - state_dict, '_metadata', OrderedDict()) - return state_dict_cpu - - -def _save_to_state_dict(module: torch.nn.Module, destination: dict, - prefix: str, keep_vars: bool) -> None: - """Saves module state to `destination` dictionary. - - This method is modified from :meth:`torch.nn.Module._save_to_state_dict`. - - Args: - module (nn.Module): The module to generate state_dict. - destination (dict): A dict where state will be stored. - prefix (str): The prefix for parameters and buffers used in this - module. - """ - for name, param in module._parameters.items(): - if param is not None: - destination[prefix + name] = param if keep_vars else param.detach() - for name, buf in module._buffers.items(): - # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d - if buf is not None: - destination[prefix + name] = buf if keep_vars else buf.detach() - - -def get_state_dict(module: torch.nn.Module, - destination: Optional[OrderedDict] = None, - prefix: str = '', - keep_vars: bool = False) -> OrderedDict: - """Returns a dictionary containing a whole state of the module. - - Both parameters and persistent buffers (e.g. running averages) are - included. Keys are corresponding parameter and buffer names. - - This method is modified from :meth:`torch.nn.Module.state_dict` to - recursively check parallel module in case that the model has a complicated - structure, e.g., nn.Module(nn.Module(DDP)). - - Args: - module (nn.Module): The module to generate state_dict. - destination (OrderedDict): Returned dict for the state of the - module. - prefix (str): Prefix of the key. - keep_vars (bool): Whether to keep the variable property of the - parameters. Default: False. - - Returns: - dict: A dictionary containing a whole state of the module. 
- """ - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - - # below is the same as torch.nn.Module.state_dict() - if destination is None: - destination = OrderedDict() - destination._metadata = OrderedDict() # type: ignore - destination._metadata[prefix[:-1]] = local_metadata = dict( # type: ignore - version=module._version) - _save_to_state_dict(module, destination, prefix, keep_vars) - for name, child in module._modules.items(): - if child is not None: - get_state_dict( - child, destination, prefix + name + '.', keep_vars=keep_vars) - for hook in module._state_dict_hooks.values(): - hook_result = hook(module, destination, prefix, local_metadata) - if hook_result is not None: - destination = hook_result - return destination # type: ignore - - -def save_checkpoint(model: torch.nn.Module, - filename: str, - optimizer: Optional[Optimizer] = None, - meta: Optional[dict] = None, - file_client_args: Optional[dict] = None) -> None: - """Save checkpoint to file. - - The checkpoint will have 3 fields: ``meta``, ``state_dict`` and - ``optimizer``. By default ``meta`` will contain version and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - if filename.startswith('pavi://'): - if file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" if filename starts with' - f'"pavi://", but got {file_client_args}') - try: - from pavi import exception, modelcloud - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except exception.NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - file_client = FileClient.infer_client(file_client_args, filename) - with io.BytesIO() as f: - torch.save(checkpoint, f) - file_client.put(f.getvalue(), filename) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/default_constructor.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/default_constructor.py deleted file mode 100644 index 394b51cf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/default_constructor.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Optional - -from .builder import RUNNER_BUILDERS, RUNNERS - - -@RUNNER_BUILDERS.register_module() -class DefaultRunnerConstructor: - """Default constructor for runners. - - Custom existing `Runner` like `EpocBasedRunner` though `RunnerConstructor`. - For example, We can inject some new properties and functions for `Runner`. - - Example: - >>> from mmcv.runner import RUNNER_BUILDERS, build_runner - >>> # Define a new RunnerReconstructor - >>> @RUNNER_BUILDERS.register_module() - >>> class MyRunnerConstructor: - ... def __init__(self, runner_cfg, default_args=None): - ... if not isinstance(runner_cfg, dict): - ... raise TypeError('runner_cfg should be a dict', - ... f'but got {type(runner_cfg)}') - ... self.runner_cfg = runner_cfg - ... self.default_args = default_args - ... - ... def __call__(self): - ... runner = RUNNERS.build(self.runner_cfg, - ... default_args=self.default_args) - ... # Add new properties for existing runner - ... runner.my_name = 'my_runner' - ... runner.my_function = lambda self: print(self.my_name) - ... ... - >>> # build your runner - >>> runner_cfg = dict(type='EpochBasedRunner', max_epochs=40, - ... constructor='MyRunnerConstructor') - >>> runner = build_runner(runner_cfg) - """ - - def __init__(self, runner_cfg: dict, default_args: Optional[dict] = None): - if not isinstance(runner_cfg, dict): - raise TypeError('runner_cfg should be a dict', - f'but got {type(runner_cfg)}') - self.runner_cfg = runner_cfg - self.default_args = default_args - - def __call__(self): - return RUNNERS.build(self.runner_cfg, default_args=self.default_args) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/dist_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/dist_utils.py deleted file mode 100644 index abed57d2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/dist_utils.py +++ /dev/null @@ -1,209 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import functools -import os -import socket -import subprocess -from collections import OrderedDict -from typing import Callable, List, Optional, Tuple - -import torch -import torch.multiprocessing as mp -from torch import distributed as dist -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - -from mmcv.utils import IS_MLU_AVAILABLE - - -def _find_free_port(): - # Copied from https://github.com/facebookresearch/detectron2/blob/main/detectron2/engine/launch.py # noqa: E501 - sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - # Binding to port 0 will cause the OS to find an available port for us - sock.bind(('', 0)) - port = sock.getsockname()[1] - sock.close() - # NOTE: there is still a chance the port could be taken by other processes. 
- return port - - -def _is_free_port(port): - ips = socket.gethostbyname_ex(socket.gethostname())[-1] - ips.append('localhost') - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - return all(s.connect_ex((ip, port)) != 0 for ip in ips) - - -def init_dist(launcher: str, backend: str = 'nccl', **kwargs) -> None: - if mp.get_start_method(allow_none=True) is None: - mp.set_start_method('spawn') - if launcher == 'pytorch': - _init_dist_pytorch(backend, **kwargs) - elif launcher == 'mpi': - _init_dist_mpi(backend, **kwargs) - elif launcher == 'slurm': - _init_dist_slurm(backend, **kwargs) - else: - raise ValueError(f'Invalid launcher type: {launcher}') - - -def _init_dist_pytorch(backend: str, **kwargs): - # TODO: use local_rank instead of rank % num_gpus - rank = int(os.environ['RANK']) - if IS_MLU_AVAILABLE: - import torch_mlu # noqa: F401 - torch.mlu.set_device(rank) - dist.init_process_group( - backend='cncl', - rank=rank, - world_size=int(os.environ['WORLD_SIZE']), - **kwargs) - else: - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(rank % num_gpus) - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_mpi(backend: str, **kwargs): - local_rank = int(os.environ['OMPI_COMM_WORLD_LOCAL_RANK']) - torch.cuda.set_device(local_rank) - if 'MASTER_PORT' not in os.environ: - # 29500 is torch.distributed default port - os.environ['MASTER_PORT'] = '29500' - if 'MASTER_ADDR' not in os.environ: - raise KeyError('The environment variable MASTER_ADDR is not set') - os.environ['WORLD_SIZE'] = os.environ['OMPI_COMM_WORLD_SIZE'] - os.environ['RANK'] = os.environ['OMPI_COMM_WORLD_RANK'] - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_slurm(backend: str, port: Optional[int] = None): - """Initialize slurm distributed training environment. - - If argument ``port`` is not specified, then the master port will be system - environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system - environment variable, then a default port ``29500`` will be used. - - Args: - backend (str): Backend of torch.distributed. - port (int, optional): Master port. Defaults to None. 
- """ - proc_id = int(os.environ['SLURM_PROCID']) - ntasks = int(os.environ['SLURM_NTASKS']) - node_list = os.environ['SLURM_NODELIST'] - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(proc_id % num_gpus) - addr = subprocess.getoutput( - f'scontrol show hostname {node_list} | head -n1') - # specify master port - if port is not None: - os.environ['MASTER_PORT'] = str(port) - elif 'MASTER_PORT' in os.environ: - pass # use MASTER_PORT in the environment variable - else: - # if torch.distributed default port(29500) is available - # then use it, else find a free port - if _is_free_port(29500): - os.environ['MASTER_PORT'] = '29500' - else: - os.environ['MASTER_PORT'] = str(_find_free_port()) - # use MASTER_ADDR in the environment variable if it already exists - if 'MASTER_ADDR' not in os.environ: - os.environ['MASTER_ADDR'] = addr - os.environ['WORLD_SIZE'] = str(ntasks) - os.environ['LOCAL_RANK'] = str(proc_id % num_gpus) - os.environ['RANK'] = str(proc_id) - dist.init_process_group(backend=backend) - - -def get_dist_info() -> Tuple[int, int]: - if dist.is_available() and dist.is_initialized(): - rank = dist.get_rank() - world_size = dist.get_world_size() - else: - rank = 0 - world_size = 1 - return rank, world_size - - -def master_only(func: Callable) -> Callable: - - @functools.wraps(func) - def wrapper(*args, **kwargs): - rank, _ = get_dist_info() - if rank == 0: - return func(*args, **kwargs) - - return wrapper - - -def allreduce_params(params: List[torch.nn.Parameter], - coalesce: bool = True, - bucket_size_mb: int = -1) -> None: - """Allreduce parameters. - - Args: - params (list[torch.nn.Parameter]): List of parameters or buffers - of a model. - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. - """ - _, world_size = get_dist_info() - if world_size == 1: - return - params = [param.data for param in params] - if coalesce: - _allreduce_coalesced(params, world_size, bucket_size_mb) - else: - for tensor in params: - dist.all_reduce(tensor.div_(world_size)) - - -def allreduce_grads(params: List[torch.nn.Parameter], - coalesce: bool = True, - bucket_size_mb: int = -1) -> None: - """Allreduce gradients. - - Args: - params (list[torch.nn.Parameter]): List of parameters of a model. - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. 
- """ - grads = [ - param.grad.data for param in params - if param.requires_grad and param.grad is not None - ] - _, world_size = get_dist_info() - if world_size == 1: - return - if coalesce: - _allreduce_coalesced(grads, world_size, bucket_size_mb) - else: - for tensor in grads: - dist.all_reduce(tensor.div_(world_size)) - - -def _allreduce_coalesced(tensors, world_size, bucket_size_mb=-1): - if bucket_size_mb > 0: - bucket_size_bytes = bucket_size_mb * 1024 * 1024 - buckets = _take_tensors(tensors, bucket_size_bytes) - else: - buckets = OrderedDict() - for tensor in tensors: - tp = tensor.type() - if tp not in buckets: - buckets[tp] = [] - buckets[tp].append(tensor) - buckets = buckets.values() - - for bucket in buckets: - flat_tensors = _flatten_dense_tensors(bucket) - dist.all_reduce(flat_tensors) - flat_tensors.div_(world_size) - for tensor, synced in zip( - bucket, _unflatten_dense_tensors(flat_tensors, bucket)): - tensor.copy_(synced) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/epoch_based_runner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/epoch_based_runner.py deleted file mode 100644 index 7a1b1030..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/epoch_based_runner.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import platform -import shutil -import time -import warnings - -import torch - -import mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .utils import get_host_info - - -@RUNNERS.register_module() -class EpochBasedRunner(BaseRunner): - """Epoch-based Runner. - - This runner train models epoch by epoch. - """ - - def run_iter(self, data_batch, train_mode, **kwargs): - if self.batch_processor is not None: - outputs = self.batch_processor( - self.model, data_batch, train_mode=train_mode, **kwargs) - elif train_mode: - outputs = self.model.train_step(data_batch, self.optimizer, - **kwargs) - else: - outputs = self.model.val_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('"batch_processor()" or "model.train_step()"' - 'and "model.val_step()" must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._max_iters = self._max_epochs * len(self.data_loader) - self.call_hook('before_train_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self.data_batch = data_batch - self._inner_iter = i - self.call_hook('before_train_iter') - self.run_iter(data_batch, train_mode=True, **kwargs) - self.call_hook('after_train_iter') - del self.data_batch - self._iter += 1 - - self.call_hook('after_train_epoch') - self._epoch += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - self.call_hook('before_val_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self.data_batch = data_batch - self._inner_iter = i - self.call_hook('before_val_iter') - self.run_iter(data_batch, train_mode=False) - self.call_hook('after_val_iter') - del self.data_batch - self.call_hook('after_val_epoch') - - def run(self, 
data_loaders, workflow, max_epochs=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, epochs) to specify the - running order and epochs. E.g, [('train', 2), ('val', 1)] means - running 2 epochs for training and 1 epoch for validation, - iteratively. - """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_epochs is not None: - warnings.warn( - 'setting max_epochs in run is deprecated, ' - 'please set max_epochs in runner_config', DeprecationWarning) - self._max_epochs = max_epochs - - assert self._max_epochs is not None, ( - 'max_epochs must be specified during instantiation') - - for i, flow in enumerate(workflow): - mode, epochs = flow - if mode == 'train': - self._max_iters = self._max_epochs * len(data_loaders[i]) - break - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d epochs', workflow, - self._max_epochs) - self.call_hook('before_run') - - while self.epoch < self._max_epochs: - for i, flow in enumerate(workflow): - mode, epochs = flow - if isinstance(mode, str): # self.train() - if not hasattr(self, mode): - raise ValueError( - f'runner has no method named "{mode}" to run an ' - 'epoch') - epoch_runner = getattr(self, mode) - else: - raise TypeError( - 'mode in workflow must be a str, but got {}'.format( - type(mode))) - - for _ in range(epochs): - if mode == 'train' and self.epoch >= self._max_epochs: - break - epoch_runner(data_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_run') - - def save_checkpoint(self, - out_dir, - filename_tmpl='epoch_{}.pth', - save_optimizer=True, - meta=None, - create_symlink=True): - """Save the checkpoint. - - Args: - out_dir (str): The directory that checkpoints are saved. - filename_tmpl (str, optional): The checkpoint filename template, - which contains a placeholder for the epoch number. - Defaults to 'epoch_{}.pth'. - save_optimizer (bool, optional): Whether to save the optimizer to - the checkpoint. Defaults to True. - meta (dict, optional): The meta information to be saved in the - checkpoint. Defaults to None. - create_symlink (bool, optional): Whether to create a symlink - "latest.pth" to point to the latest checkpoint. - Defaults to True. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. 
- # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.epoch + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - -@RUNNERS.register_module() -class Runner(EpochBasedRunner): - """Deprecated name of EpochBasedRunner.""" - - def __init__(self, *args, **kwargs): - warnings.warn( - 'Runner was deprecated, please use EpochBasedRunner instead', - DeprecationWarning) - super().__init__(*args, **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/fp16_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/fp16_utils.py deleted file mode 100644 index 9deb228a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/fp16_utils.py +++ /dev/null @@ -1,435 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools -import warnings -from collections import abc -from inspect import getfullargspec -from typing import Callable, Iterable, List, Optional - -import numpy as np -import torch -import torch.nn as nn -from torch.nn.parameter import Parameter - -from mmcv.utils import TORCH_VERSION, digit_version -from .dist_utils import allreduce_grads as _allreduce_grads - -try: - # If PyTorch version >= 1.6.0, torch.cuda.amp.autocast would be imported - # and used; otherwise, auto fp16 will adopt mmcv's implementation. - # Note that when PyTorch >= 1.6.0, we still cast tensor types to fp16 - # manually, so the behavior may not be consistent with real amp. - from torch.cuda.amp import autocast -except ImportError: - pass - - -def cast_tensor_type(inputs, src_type: torch.dtype, dst_type: torch.dtype): - """Recursively convert Tensor in inputs from src_type to dst_type. - - Note: - In v1.4.4 and later, ``cast_tersor_type`` will only convert the - torch.Tensor which is consistent with ``src_type`` to the ``dst_type``. - Before v1.4.4, it ignores the ``src_type`` argument, leading to some - potential problems. For example, - ``cast_tensor_type(inputs, torch.float, torch.half)`` will convert all - tensors in inputs to ``torch.half`` including those originally in - ``torch.Int`` or other types, which is not expected. - - Args: - inputs: Inputs that to be casted. - src_type (torch.dtype): Source type.. - dst_type (torch.dtype): Destination type. - - Returns: - The same type with inputs, but all contained Tensors have been cast. - """ - if isinstance(inputs, nn.Module): - return inputs - elif isinstance(inputs, torch.Tensor): - # we need to ensure that the type of inputs to be casted are the same - # as the argument `src_type`. 
- return inputs.to(dst_type) if inputs.dtype == src_type else inputs - elif isinstance(inputs, str): - return inputs - elif isinstance(inputs, np.ndarray): - return inputs - elif isinstance(inputs, abc.Mapping): - return type(inputs)({ # type: ignore - k: cast_tensor_type(v, src_type, dst_type) - for k, v in inputs.items() - }) - elif isinstance(inputs, abc.Iterable): - return type(inputs)( # type: ignore - cast_tensor_type(item, src_type, dst_type) for item in inputs) - else: - return inputs - - -def auto_fp16( - apply_to: Optional[Iterable] = None, - out_fp32: bool = False, - supported_types: tuple = (nn.Module, ), -) -> Callable: - """Decorator to enable fp16 training automatically. - - This decorator is useful when you write custom modules and want to support - mixed precision training. If inputs arguments are fp32 tensors, they will - be converted to fp16 automatically. Arguments other than fp32 tensors are - ignored. If you are using PyTorch >= 1.6, torch.cuda.amp is used as the - backend, otherwise, original mmcv implementation will be adopted. - - Args: - apply_to (Iterable, optional): The argument names to be converted. - `None` indicates all arguments. - out_fp32 (bool): Whether to convert the output back to fp32. - supported_types (tuple): Classes can be decorated by ``auto_fp16``. - `New in version 1.5.0.` - Example: - - >>> import torch.nn as nn - >>> class MyModule1(nn.Module): - >>> - >>> # Convert x and y to fp16 - >>> @auto_fp16() - >>> def forward(self, x, y): - >>> pass - - >>> import torch.nn as nn - >>> class MyModule2(nn.Module): - >>> - >>> # convert pred to fp16 - >>> @auto_fp16(apply_to=('pred', )) - >>> def do_something(self, pred, others): - >>> pass - """ - - def auto_fp16_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # check if the module has set the attribute `fp16_enabled`, if not, - # just fallback to the original method. 
- if not isinstance(args[0], supported_types): - raise TypeError('@auto_fp16 can only be used to decorate the ' - f'method of those classes {supported_types}') - if not (hasattr(args[0], 'fp16_enabled') and args[0].fp16_enabled): - return old_func(*args, **kwargs) - - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get the argument names to be casted - args_to_cast = args_info.args if apply_to is None else apply_to - # convert the args that need to be processed - new_args = [] - # NOTE: default args are not taken into consideration - if args: - arg_names = args_info.args[:len(args)] - for i, arg_name in enumerate(arg_names): - if arg_name in args_to_cast: - new_args.append( - cast_tensor_type(args[i], torch.float, torch.half)) - else: - new_args.append(args[i]) - # convert the kwargs that need to be processed - new_kwargs = {} - if kwargs: - for arg_name, arg_value in kwargs.items(): - if arg_name in args_to_cast: - new_kwargs[arg_name] = cast_tensor_type( - arg_value, torch.float, torch.half) - else: - new_kwargs[arg_name] = arg_value - # apply converted arguments to the decorated method - if (TORCH_VERSION != 'parrots' and - digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - with autocast(enabled=True): - output = old_func(*new_args, **new_kwargs) - else: - output = old_func(*new_args, **new_kwargs) - # cast the results back to fp32 if necessary - if out_fp32: - output = cast_tensor_type(output, torch.half, torch.float) - return output - - return new_func - - return auto_fp16_wrapper - - -def force_fp32(apply_to: Optional[Iterable] = None, - out_fp16: bool = False) -> Callable: - """Decorator to convert input arguments to fp32 in force. - - This decorator is useful when you write custom modules and want to support - mixed precision training. If there are some inputs that must be processed - in fp32 mode, then this decorator can handle it. If inputs arguments are - fp16 tensors, they will be converted to fp32 automatically. Arguments other - than fp16 tensors are ignored. If you are using PyTorch >= 1.6, - torch.cuda.amp is used as the backend, otherwise, original mmcv - implementation will be adopted. - - Args: - apply_to (Iterable, optional): The argument names to be converted. - `None` indicates all arguments. - out_fp16 (bool): Whether to convert the output back to fp16. - - Example: - - >>> import torch.nn as nn - >>> class MyModule1(nn.Module): - >>> - >>> # Convert x and y to fp32 - >>> @force_fp32() - >>> def loss(self, x, y): - >>> pass - - >>> import torch.nn as nn - >>> class MyModule2(nn.Module): - >>> - >>> # convert pred to fp32 - >>> @force_fp32(apply_to=('pred', )) - >>> def post_process(self, pred, others): - >>> pass - """ - - def force_fp32_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # check if the module has set the attribute `fp16_enabled`, if not, - # just fallback to the original method. 
- if not isinstance(args[0], torch.nn.Module): - raise TypeError('@force_fp32 can only be used to decorate the ' - 'method of nn.Module') - if not (hasattr(args[0], 'fp16_enabled') and args[0].fp16_enabled): - return old_func(*args, **kwargs) - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get the argument names to be casted - args_to_cast = args_info.args if apply_to is None else apply_to - # convert the args that need to be processed - new_args = [] - if args: - arg_names = args_info.args[:len(args)] - for i, arg_name in enumerate(arg_names): - if arg_name in args_to_cast: - new_args.append( - cast_tensor_type(args[i], torch.half, torch.float)) - else: - new_args.append(args[i]) - # convert the kwargs that need to be processed - new_kwargs = dict() - if kwargs: - for arg_name, arg_value in kwargs.items(): - if arg_name in args_to_cast: - new_kwargs[arg_name] = cast_tensor_type( - arg_value, torch.half, torch.float) - else: - new_kwargs[arg_name] = arg_value - # apply converted arguments to the decorated method - if (TORCH_VERSION != 'parrots' and - digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - with autocast(enabled=False): - output = old_func(*new_args, **new_kwargs) - else: - output = old_func(*new_args, **new_kwargs) - # cast the results back to fp32 if necessary - if out_fp16: - output = cast_tensor_type(output, torch.float, torch.half) - return output - - return new_func - - return force_fp32_wrapper - - -def allreduce_grads(params: List[Parameter], - coalesce: bool = True, - bucket_size_mb: int = -1) -> None: - warnings.warn( - '"mmcv.runner.fp16_utils.allreduce_grads" is deprecated, and will be ' - 'removed in v2.8. Please switch to "mmcv.runner.allreduce_grads', - DeprecationWarning) - _allreduce_grads(params, coalesce=coalesce, bucket_size_mb=bucket_size_mb) - - -def wrap_fp16_model(model: nn.Module) -> None: - """Wrap the FP32 model to FP16. - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the - backend, otherwise, original mmcv implementation will be adopted. - - For PyTorch >= 1.6, this function will - 1. Set fp16 flag inside the model to True. - - Otherwise: - 1. Convert FP32 model to FP16. - 2. Remain some necessary layers to be FP32, e.g., normalization layers. - 3. Set `fp16_enabled` flag inside the model to True. - - Args: - model (nn.Module): Model in FP32. - """ - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.6.0')): - # convert model to fp16 - model.half() - # patch the normalization layers to make it work in fp32 mode - patch_norm_fp32(model) - # set `fp16_enabled` flag - for m in model.modules(): - if hasattr(m, 'fp16_enabled'): - m.fp16_enabled = True - - -def patch_norm_fp32(module: nn.Module) -> nn.Module: - """Recursively convert normalization layers from FP16 to FP32. - - Args: - module (nn.Module): The modules to be converted in FP16. - - Returns: - nn.Module: The converted module, the normalization layers have been - converted to FP32. - """ - if isinstance(module, (nn.modules.batchnorm._BatchNorm, nn.GroupNorm)): - module.float() - if isinstance(module, nn.GroupNorm) or torch.__version__ < '1.3': - module.forward = patch_forward_method(module.forward, torch.half, - torch.float) - for child in module.children(): - patch_norm_fp32(child) - return module - - -def patch_forward_method(func: Callable, - src_type: torch.dtype, - dst_type: torch.dtype, - convert_output: bool = True) -> Callable: - """Patch the forward method of a module. 
- - Args: - func (callable): The original forward method. - src_type (torch.dtype): Type of input arguments to be converted from. - dst_type (torch.dtype): Type of input arguments to be converted to. - convert_output (bool): Whether to convert the output back to src_type. - - Returns: - callable: The patched forward method. - """ - - def new_forward(*args, **kwargs): - output = func(*cast_tensor_type(args, src_type, dst_type), - **cast_tensor_type(kwargs, src_type, dst_type)) - if convert_output: - output = cast_tensor_type(output, dst_type, src_type) - return output - - return new_forward - - -class LossScaler: - """Class that manages loss scaling in mixed precision training which - supports both dynamic or static mode. - - The implementation refers to - https://github.com/NVIDIA/apex/blob/master/apex/fp16_utils/loss_scaler.py. - Indirectly, by supplying ``mode='dynamic'`` for dynamic loss scaling. - It's important to understand how :class:`LossScaler` operates. - Loss scaling is designed to combat the problem of underflowing - gradients encountered at long times when training fp16 networks. - Dynamic loss scaling begins by attempting a very high loss - scale. Ironically, this may result in OVERflowing gradients. - If overflowing gradients are encountered, :class:`FP16_Optimizer` then - skips the update step for this particular iteration/minibatch, - and :class:`LossScaler` adjusts the loss scale to a lower value. - If a certain number of iterations occur without overflowing gradients - detected,:class:`LossScaler` increases the loss scale once more. - In this way :class:`LossScaler` attempts to "ride the edge" of always - using the highest loss scale possible without incurring overflow. - - Args: - init_scale (float): Initial loss scale value, default: 2**32. - scale_factor (float): Factor used when adjusting the loss scale. - Default: 2. - mode (str): Loss scaling mode. 'dynamic' or 'static' - scale_window (int): Number of consecutive iterations without an - overflow to wait before increasing the loss scale. Default: 1000. 
- """ - - def __init__(self, - init_scale: float = 2**32, - mode: str = 'dynamic', - scale_factor: float = 2., - scale_window: int = 1000): - self.cur_scale = init_scale - self.cur_iter = 0 - assert mode in ('dynamic', - 'static'), 'mode can only be dynamic or static' - self.mode = mode - self.last_overflow_iter = -1 - self.scale_factor = scale_factor - self.scale_window = scale_window - - def has_overflow(self, params: List[Parameter]) -> bool: - """Check if params contain overflow.""" - if self.mode != 'dynamic': - return False - for p in params: - if p.grad is not None and LossScaler._has_inf_or_nan(p.grad.data): - return True - return False - - def _has_inf_or_nan(x): - """Check if params contain NaN.""" - try: - cpu_sum = float(x.float().sum()) - except RuntimeError as instance: - if 'value cannot be converted' not in instance.args[0]: - raise - return True - else: - if cpu_sum == float('inf') or cpu_sum == -float('inf') \ - or cpu_sum != cpu_sum: - return True - return False - - def update_scale(self, overflow: bool) -> None: - """update the current loss scale value when overflow happens.""" - if self.mode != 'dynamic': - return - if overflow: - self.cur_scale = max(self.cur_scale / self.scale_factor, 1) - self.last_overflow_iter = self.cur_iter - else: - if (self.cur_iter - self.last_overflow_iter) % \ - self.scale_window == 0: - self.cur_scale *= self.scale_factor - self.cur_iter += 1 - - def state_dict(self): - """Returns the state of the scaler as a :class:`dict`.""" - return dict( - cur_scale=self.cur_scale, - cur_iter=self.cur_iter, - mode=self.mode, - last_overflow_iter=self.last_overflow_iter, - scale_factor=self.scale_factor, - scale_window=self.scale_window) - - def load_state_dict(self, state_dict: dict) -> None: - """Loads the loss_scaler state dict. - - Args: - state_dict (dict): scaler state. - """ - self.cur_scale = state_dict['cur_scale'] - self.cur_iter = state_dict['cur_iter'] - self.mode = state_dict['mode'] - self.last_overflow_iter = state_dict['last_overflow_iter'] - self.scale_factor = state_dict['scale_factor'] - self.scale_window = state_dict['scale_window'] - - @property - def loss_scale(self): - return self.cur_scale diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/__init__.py deleted file mode 100644 index 03e2a619..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/__init__.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .checkpoint import CheckpointHook -from .closure import ClosureHook -from .ema import EMAHook -from .evaluation import DistEvalHook, EvalHook -from .hook import HOOKS, Hook -from .iter_timer import IterTimerHook -from .logger import (ClearMLLoggerHook, DvcliveLoggerHook, LoggerHook, - MlflowLoggerHook, NeptuneLoggerHook, PaviLoggerHook, - SegmindLoggerHook, TensorboardLoggerHook, TextLoggerHook, - WandbLoggerHook) -from .lr_updater import (CosineAnnealingLrUpdaterHook, - CosineRestartLrUpdaterHook, CyclicLrUpdaterHook, - ExpLrUpdaterHook, FixedLrUpdaterHook, - FlatCosineAnnealingLrUpdaterHook, InvLrUpdaterHook, - LinearAnnealingLrUpdaterHook, LrUpdaterHook, - OneCycleLrUpdaterHook, PolyLrUpdaterHook, - StepLrUpdaterHook) -from .memory import EmptyCacheHook -from .momentum_updater import (CosineAnnealingMomentumUpdaterHook, - CyclicMomentumUpdaterHook, - LinearAnnealingMomentumUpdaterHook, - MomentumUpdaterHook, - OneCycleMomentumUpdaterHook, - StepMomentumUpdaterHook) -from .optimizer import (Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, OptimizerHook) -from .profiler import ProfilerHook -from .sampler_seed import DistSamplerSeedHook -from .sync_buffer import SyncBuffersHook - -__all__ = [ - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'FixedLrUpdaterHook', 'StepLrUpdaterHook', 'ExpLrUpdaterHook', - 'PolyLrUpdaterHook', 'InvLrUpdaterHook', 'CosineAnnealingLrUpdaterHook', - 'FlatCosineAnnealingLrUpdaterHook', 'CosineRestartLrUpdaterHook', - 'CyclicLrUpdaterHook', 'OneCycleLrUpdaterHook', 'OptimizerHook', - 'Fp16OptimizerHook', 'IterTimerHook', 'DistSamplerSeedHook', - 'EmptyCacheHook', 'LoggerHook', 'MlflowLoggerHook', 'PaviLoggerHook', - 'TextLoggerHook', 'TensorboardLoggerHook', 'NeptuneLoggerHook', - 'WandbLoggerHook', 'DvcliveLoggerHook', 'MomentumUpdaterHook', - 'StepMomentumUpdaterHook', 'CosineAnnealingMomentumUpdaterHook', - 'CyclicMomentumUpdaterHook', 'OneCycleMomentumUpdaterHook', - 'SyncBuffersHook', 'EMAHook', 'EvalHook', 'DistEvalHook', 'ProfilerHook', - 'GradientCumulativeOptimizerHook', 'GradientCumulativeFp16OptimizerHook', - 'SegmindLoggerHook', 'LinearAnnealingLrUpdaterHook', - 'LinearAnnealingMomentumUpdaterHook', 'ClearMLLoggerHook' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/checkpoint.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/checkpoint.py deleted file mode 100644 index 3449e98a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/checkpoint.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings - -from mmcv.fileio import FileClient -from ..dist_utils import allreduce_params, master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class CheckpointHook(Hook): - """Save checkpoints periodically. - - Args: - interval (int): The saving period. If ``by_epoch=True``, interval - indicates epochs, otherwise it indicates iterations. - Default: -1, which means "never". - by_epoch (bool): Saving checkpoints by epoch or by iteration. - Default: True. - save_optimizer (bool): Whether to save optimizer state_dict in the - checkpoint. It is usually used for resuming experiments. - Default: True. - out_dir (str, optional): The root directory to save checkpoints. If not - specified, ``runner.work_dir`` will be used by default. 
If - specified, the ``out_dir`` will be the concatenation of ``out_dir`` - and the last level directory of ``runner.work_dir``. - `Changed in version 1.3.16.` - max_keep_ckpts (int, optional): The maximum checkpoints to keep. - In some cases we want only the latest few checkpoints and would - like to delete old ones to save the disk space. - Default: -1, which means unlimited. - save_last (bool, optional): Whether to force the last checkpoint to be - saved regardless of interval. Default: True. - sync_buffer (bool, optional): Whether to synchronize buffers in - different gpus. Default: False. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - - .. warning:: - Before v1.3.16, the ``out_dir`` argument indicates the path where the - checkpoint is stored. However, since v1.3.16, ``out_dir`` indicates the - root directory and the final path to save checkpoint is the - concatenation of ``out_dir`` and the last level directory of - ``runner.work_dir``. Suppose the value of ``out_dir`` is "/path/of/A" - and the value of ``runner.work_dir`` is "/path/of/B", then the final - path will be "/path/of/A/B". - """ - - def __init__(self, - interval=-1, - by_epoch=True, - save_optimizer=True, - out_dir=None, - max_keep_ckpts=-1, - save_last=True, - sync_buffer=False, - file_client_args=None, - **kwargs): - self.interval = interval - self.by_epoch = by_epoch - self.save_optimizer = save_optimizer - self.out_dir = out_dir - self.max_keep_ckpts = max_keep_ckpts - self.save_last = save_last - self.args = kwargs - self.sync_buffer = sync_buffer - self.file_client_args = file_client_args - - def before_run(self, runner): - if not self.out_dir: - self.out_dir = runner.work_dir - - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - - # if `self.out_dir` is not equal to `runner.work_dir`, it means that - # `self.out_dir` is set so the final `self.out_dir` is the - # concatenation of `self.out_dir` and the last level directory of - # `runner.work_dir` - if self.out_dir != runner.work_dir: - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - - runner.logger.info(f'Checkpoints will be saved to {self.out_dir} by ' - f'{self.file_client.name}.') - - # disable the create_symlink option because some file backends do not - # allow to create a symlink - if 'create_symlink' in self.args: - if self.args[ - 'create_symlink'] and not self.file_client.allow_symlink: - self.args['create_symlink'] = False - warnings.warn( - 'create_symlink is set as True by the user but is changed' - 'to be False because creating symbolic link is not ' - f'allowed in {self.file_client.name}') - else: - self.args['create_symlink'] = self.file_client.allow_symlink - - def after_train_epoch(self, runner): - if not self.by_epoch: - return - - # save checkpoint for following cases: - # 1. every ``self.interval`` epochs - # 2. 
reach the last epoch of training - if self.every_n_epochs( - runner, self.interval) or (self.save_last - and self.is_last_epoch(runner)): - runner.logger.info( - f'Saving checkpoint at {runner.epoch + 1} epochs') - if self.sync_buffer: - allreduce_params(runner.model.buffers()) - self._save_checkpoint(runner) - - @master_only - def _save_checkpoint(self, runner): - """Save the current checkpoint and delete unwanted checkpoint.""" - runner.save_checkpoint( - self.out_dir, save_optimizer=self.save_optimizer, **self.args) - if runner.meta is not None: - if self.by_epoch: - cur_ckpt_filename = self.args.get( - 'filename_tmpl', 'epoch_{}.pth').format(runner.epoch + 1) - else: - cur_ckpt_filename = self.args.get( - 'filename_tmpl', 'iter_{}.pth').format(runner.iter + 1) - runner.meta.setdefault('hook_msgs', dict()) - runner.meta['hook_msgs']['last_ckpt'] = self.file_client.join_path( - self.out_dir, cur_ckpt_filename) - # remove other checkpoints - if self.max_keep_ckpts > 0: - if self.by_epoch: - name = 'epoch_{}.pth' - current_ckpt = runner.epoch + 1 - else: - name = 'iter_{}.pth' - current_ckpt = runner.iter + 1 - redundant_ckpts = range( - current_ckpt - self.max_keep_ckpts * self.interval, 0, - -self.interval) - filename_tmpl = self.args.get('filename_tmpl', name) - for _step in redundant_ckpts: - ckpt_path = self.file_client.join_path( - self.out_dir, filename_tmpl.format(_step)) - if self.file_client.isfile(ckpt_path): - self.file_client.remove(ckpt_path) - else: - break - - def after_train_iter(self, runner): - if self.by_epoch: - return - - # save checkpoint for following cases: - # 1. every ``self.interval`` iterations - # 2. reach the last iteration of training - if self.every_n_iters( - runner, self.interval) or (self.save_last - and self.is_last_iter(runner)): - runner.logger.info( - f'Saving checkpoint at {runner.iter + 1} iterations') - if self.sync_buffer: - allreduce_params(runner.model.buffers()) - self._save_checkpoint(runner) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/closure.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/closure.py deleted file mode 100644 index b955f81f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/closure.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class ClosureHook(Hook): - - def __init__(self, fn_name, fn): - assert hasattr(self, fn_name) - assert callable(fn) - setattr(self, fn_name, fn) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/ema.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/ema.py deleted file mode 100644 index 6ed77b84..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/ema.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...parallel import is_module_wrapper -from ..hooks.hook import HOOKS, Hook - - -@HOOKS.register_module() -class EMAHook(Hook): - r"""Exponential Moving Average Hook. - - Use Exponential Moving Average on all parameters of model in training - process. All parameters have a ema backup, which update by the formula - as below. EMAHook takes priority over EvalHook and CheckpointSaverHook. - - .. math:: - - Xema\_{t+1} = (1 - \text{momentum}) \times - Xema\_{t} + \text{momentum} \times X_t - - Args: - momentum (float): The momentum used for updating ema parameter. - Defaults to 0.0002. 
- interval (int): Update ema parameter every interval iteration. - Defaults to 1. - warm_up (int): During first warm_up steps, we may use smaller momentum - to update ema parameters more slowly. Defaults to 100. - resume_from (str): The checkpoint path. Defaults to None. - """ - - def __init__(self, - momentum=0.0002, - interval=1, - warm_up=100, - resume_from=None): - assert isinstance(interval, int) and interval > 0 - self.warm_up = warm_up - self.interval = interval - assert momentum > 0 and momentum < 1 - self.momentum = momentum**interval - self.checkpoint = resume_from - - def before_run(self, runner): - """To resume model with it's ema parameters more friendly. - - Register ema parameter as ``named_buffer`` to model - """ - model = runner.model - if is_module_wrapper(model): - model = model.module - self.param_ema_buffer = {} - self.model_parameters = dict(model.named_parameters(recurse=True)) - for name, value in self.model_parameters.items(): - # "." is not allowed in module's buffer name - buffer_name = f"ema_{name.replace('.', '_')}" - self.param_ema_buffer[name] = buffer_name - model.register_buffer(buffer_name, value.data.clone()) - self.model_buffers = dict(model.named_buffers(recurse=True)) - if self.checkpoint is not None: - runner.resume(self.checkpoint) - - def after_train_iter(self, runner): - """Update ema parameter every self.interval iterations.""" - curr_step = runner.iter - # We warm up the momentum considering the instability at beginning - momentum = min(self.momentum, - (1 + curr_step) / (self.warm_up + curr_step)) - if curr_step % self.interval != 0: - return - for name, parameter in self.model_parameters.items(): - buffer_name = self.param_ema_buffer[name] - buffer_parameter = self.model_buffers[buffer_name] - buffer_parameter.mul_(1 - momentum).add_(momentum, parameter.data) - - def after_train_epoch(self, runner): - """We load parameter values from ema backup to model before the - EvalHook.""" - self._swap_ema_parameters() - - def before_train_epoch(self, runner): - """We recover model's parameter from ema backup after last epoch's - EvalHook.""" - self._swap_ema_parameters() - - def _swap_ema_parameters(self): - """Swap the parameter of model with parameter in ema_buffer.""" - for name, value in self.model_parameters.items(): - temp = value.data.clone() - ema_buffer = self.model_buffers[self.param_ema_buffer[name]] - value.data.copy_(ema_buffer.data) - ema_buffer.data.copy_(temp) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/evaluation.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/evaluation.py deleted file mode 100644 index c79628f9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/evaluation.py +++ /dev/null @@ -1,511 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings -from math import inf - -import torch.distributed as dist -from torch.nn.modules.batchnorm import _BatchNorm -from torch.utils.data import DataLoader - -from mmcv.fileio import FileClient -from mmcv.utils import is_seq_of -from .hook import Hook -from .logger import LoggerHook - - -class EvalHook(Hook): - """Non-Distributed evaluation hook. - - This hook will regularly perform evaluation in a given interval when - performing in non-distributed environment. - - Args: - dataloader (DataLoader): A PyTorch dataloader, whose dataset has - implemented ``evaluate`` function. - start (int | None, optional): Evaluation starting epoch. 
It enables - evaluation before the training starts if ``start`` <= the resuming - epoch. If None, whether to evaluate is merely decided by - ``interval``. Default: None. - interval (int): Evaluation interval. Default: 1. - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: True. - save_best (str, optional): If a metric is specified, it would measure - the best checkpoint during evaluation. The information about best - checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep - best score value and best checkpoint path, which will be also - loaded when resume checkpoint. Options are the evaluation metrics - on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox - detection and instance segmentation. ``AR@100`` for proposal - recall. If ``save_best`` is ``auto``, the first key of the returned - ``OrderedDict`` result will be used. Default: None. - rule (str | None, optional): Comparison rule for best score. If set to - None, it will infer a reasonable rule. Keys such as 'acc', 'top' - .etc will be inferred by 'greater' rule. Keys contain 'loss' will - be inferred by 'less' rule. Options are 'greater', 'less', None. - Default: None. - test_fn (callable, optional): test a model with samples from a - dataloader, and return the test results. If ``None``, the default - test function ``mmcv.engine.single_gpu_test`` will be used. - (default: ``None``) - greater_keys (List[str] | None, optional): Metric keys that will be - inferred by 'greater' comparison rule. If ``None``, - _default_greater_keys will be used. (default: ``None``) - less_keys (List[str] | None, optional): Metric keys that will be - inferred by 'less' comparison rule. If ``None``, _default_less_keys - will be used. (default: ``None``) - out_dir (str, optional): The root directory to save checkpoints. If not - specified, `runner.work_dir` will be used by default. If specified, - the `out_dir` will be the concatenation of `out_dir` and the last - level directory of `runner.work_dir`. - `New in version 1.3.16.` - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. Default: None. - `New in version 1.3.16.` - **eval_kwargs: Evaluation arguments fed into the evaluate function of - the dataset. - - Note: - If new arguments are added for EvalHook, tools/test.py, - tools/eval_metric.py may be affected. - """ - - # Since the key for determine greater or less is related to the downstream - # tasks, downstream repos may need to overwrite the following inner - # variable accordingly. 
- - rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y} - init_value_map = {'greater': -inf, 'less': inf} - _default_greater_keys = [ - 'acc', 'top', 'AR@', 'auc', 'precision', 'mAP', 'mDice', 'mIoU', - 'mAcc', 'aAcc' - ] - _default_less_keys = ['loss'] - - def __init__(self, - dataloader, - start=None, - interval=1, - by_epoch=True, - save_best=None, - rule=None, - test_fn=None, - greater_keys=None, - less_keys=None, - out_dir=None, - file_client_args=None, - **eval_kwargs): - if not isinstance(dataloader, DataLoader): - raise TypeError(f'dataloader must be a pytorch DataLoader, ' - f'but got {type(dataloader)}') - - if interval <= 0: - raise ValueError(f'interval must be a positive number, ' - f'but got {interval}') - - assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean' - - if start is not None and start < 0: - raise ValueError(f'The evaluation start epoch {start} is smaller ' - f'than 0') - - self.dataloader = dataloader - self.interval = interval - self.start = start - self.by_epoch = by_epoch - - assert isinstance(save_best, str) or save_best is None, \ - '""save_best"" should be a str or None ' \ - f'rather than {type(save_best)}' - self.save_best = save_best - self.eval_kwargs = eval_kwargs - self.initial_flag = True - - if test_fn is None: - from mmcv.engine import single_gpu_test - self.test_fn = single_gpu_test - else: - self.test_fn = test_fn - - if greater_keys is None: - self.greater_keys = self._default_greater_keys - else: - if not isinstance(greater_keys, (list, tuple)): - greater_keys = (greater_keys, ) - assert is_seq_of(greater_keys, str) - self.greater_keys = greater_keys - - if less_keys is None: - self.less_keys = self._default_less_keys - else: - if not isinstance(less_keys, (list, tuple)): - less_keys = (less_keys, ) - assert is_seq_of(less_keys, str) - self.less_keys = less_keys - - if self.save_best is not None: - self.best_ckpt_path = None - self._init_rule(rule, self.save_best) - - self.out_dir = out_dir - self.file_client_args = file_client_args - - def _init_rule(self, rule, key_indicator): - """Initialize rule, key_indicator, comparison_func, and best score. - - Here is the rule to determine which rule is used for key indicator - when the rule is not specific (note that the key indicator matching - is case-insensitive): - 1. If the key indicator is in ``self.greater_keys``, the rule will be - specified as 'greater'. - 2. Or if the key indicator is in ``self.less_keys``, the rule will be - specified as 'less'. - 3. Or if any one item in ``self.greater_keys`` is a substring of - key_indicator , the rule will be specified as 'greater'. - 4. Or if any one item in ``self.less_keys`` is a substring of - key_indicator , the rule will be specified as 'less'. - - Args: - rule (str | None): Comparison rule for best score. - key_indicator (str | None): Key indicator to determine the - comparison rule. 
- """ - if rule not in self.rule_map and rule is not None: - raise KeyError(f'rule must be greater, less or None, ' - f'but got {rule}.') - - if rule is None: - if key_indicator != 'auto': - # `_lc` here means we use the lower case of keys for - # case-insensitive matching - key_indicator_lc = key_indicator.lower() - greater_keys = [key.lower() for key in self.greater_keys] - less_keys = [key.lower() for key in self.less_keys] - - if key_indicator_lc in greater_keys: - rule = 'greater' - elif key_indicator_lc in less_keys: - rule = 'less' - elif any(key in key_indicator_lc for key in greater_keys): - rule = 'greater' - elif any(key in key_indicator_lc for key in less_keys): - rule = 'less' - else: - raise ValueError(f'Cannot infer the rule for key ' - f'{key_indicator}, thus a specific rule ' - f'must be specified.') - self.rule = rule - self.key_indicator = key_indicator - if self.rule is not None: - self.compare_func = self.rule_map[self.rule] - - def before_run(self, runner): - if not self.out_dir: - self.out_dir = runner.work_dir - - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - - # if `self.out_dir` is not equal to `runner.work_dir`, it means that - # `self.out_dir` is set so the final `self.out_dir` is the - # concatenation of `self.out_dir` and the last level directory of - # `runner.work_dir` - if self.out_dir != runner.work_dir: - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - runner.logger.info( - f'The best checkpoint will be saved to {self.out_dir} by ' - f'{self.file_client.name}') - - if self.save_best is not None: - if runner.meta is None: - warnings.warn('runner.meta is None. Creating an empty one.') - runner.meta = dict() - runner.meta.setdefault('hook_msgs', dict()) - self.best_ckpt_path = runner.meta['hook_msgs'].get( - 'best_ckpt', None) - - def before_train_iter(self, runner): - """Evaluate the model only at the start of training by iteration.""" - if self.by_epoch or not self.initial_flag: - return - if self.start is not None and runner.iter >= self.start: - self.after_train_iter(runner) - self.initial_flag = False - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training by epoch.""" - if not (self.by_epoch and self.initial_flag): - return - if self.start is not None and runner.epoch >= self.start: - self.after_train_epoch(runner) - self.initial_flag = False - - def after_train_iter(self, runner): - """Called after every training iter to evaluate the results.""" - if not self.by_epoch and self._should_evaluate(runner): - # Because the priority of EvalHook is higher than LoggerHook, the - # training log and the evaluating log are mixed. Therefore, - # we need to dump the training log and clear it before evaluating - # log is generated. In addition, this problem will only appear in - # `IterBasedRunner` whose `self.by_epoch` is False, because - # `EpochBasedRunner` whose `self.by_epoch` is True calls - # `_do_evaluate` in `after_train_epoch` stage, and at this stage - # the training log has been printed, so it will not cause any - # problem. 
more details at - # https://github.com/open-mmlab/mmsegmentation/issues/694 - for hook in runner._hooks: - if isinstance(hook, LoggerHook): - hook.after_train_iter(runner) - runner.log_buffer.clear() - - self._do_evaluate(runner) - - def after_train_epoch(self, runner): - """Called after every training epoch to evaluate the results.""" - if self.by_epoch and self._should_evaluate(runner): - self._do_evaluate(runner) - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - results = self.test_fn(runner.model, self.dataloader) - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - # the key_score may be `None` so it needs to skip the action to save - # the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) - - def _should_evaluate(self, runner): - """Judge whether to perform evaluation. - - Here is the rule to judge whether to perform evaluation: - 1. It will not perform evaluation during the epoch/iteration interval, - which is determined by ``self.interval``. - 2. It will not perform evaluation if the start time is larger than - current time. - 3. It will not perform evaluation when current time is larger than - the start time but during epoch/iteration interval. - - Returns: - bool: The flag indicating whether to perform evaluation. - """ - if self.by_epoch: - current = runner.epoch - check_time = self.every_n_epochs - else: - current = runner.iter - check_time = self.every_n_iters - - if self.start is None: - if not check_time(runner, self.interval): - # No evaluation during the interval. - return False - elif (current + 1) < self.start: - # No evaluation if start is larger than the current time. - return False - else: - # Evaluation only at epochs/iters 3, 5, 7... - # if start==3 and interval==2 - if (current + 1 - self.start) % self.interval: - return False - return True - - def _save_ckpt(self, runner, key_score): - """Save the best checkpoint. - - It will compare the score according to the compare function, write - related information (best score, best checkpoint path) and save the - best checkpoint into ``work_dir``. - """ - if self.by_epoch: - current = f'epoch_{runner.epoch + 1}' - cur_type, cur_time = 'epoch', runner.epoch + 1 - else: - current = f'iter_{runner.iter + 1}' - cur_type, cur_time = 'iter', runner.iter + 1 - - best_score = runner.meta['hook_msgs'].get( - 'best_score', self.init_value_map[self.rule]) - if self.compare_func(key_score, best_score): - best_score = key_score - runner.meta['hook_msgs']['best_score'] = best_score - - if self.best_ckpt_path and self.file_client.isfile( - self.best_ckpt_path): - self.file_client.remove(self.best_ckpt_path) - runner.logger.info( - f'The previous best checkpoint {self.best_ckpt_path} was ' - 'removed') - - best_ckpt_name = f'best_{self.key_indicator}_{current}.pth' - self.best_ckpt_path = self.file_client.join_path( - self.out_dir, best_ckpt_name) - runner.meta['hook_msgs']['best_ckpt'] = self.best_ckpt_path - - runner.save_checkpoint( - self.out_dir, - filename_tmpl=best_ckpt_name, - create_symlink=False) - runner.logger.info( - f'Now best checkpoint is saved as {best_ckpt_name}.') - runner.logger.info( - f'Best {self.key_indicator} is {best_score:0.4f} ' - f'at {cur_time} {cur_type}.') - - def evaluate(self, runner, results): - """Evaluate the results. - - Args: - runner (:obj:`mmcv.Runner`): The underlined training runner. - results (list): Output results. 
- """ - eval_res = self.dataloader.dataset.evaluate( - results, logger=runner.logger, **self.eval_kwargs) - - for name, val in eval_res.items(): - runner.log_buffer.output[name] = val - runner.log_buffer.ready = True - - if self.save_best is not None: - # If the performance of model is pool, the `eval_res` may be an - # empty dict and it will raise exception when `self.save_best` is - # not None. More details at - # https://github.com/open-mmlab/mmdetection/issues/6265. - if not eval_res: - warnings.warn( - 'Since `eval_res` is an empty dict, the behavior to save ' - 'the best checkpoint will be skipped in this evaluation.') - return None - - if self.key_indicator == 'auto': - # infer from eval_results - self._init_rule(self.rule, list(eval_res.keys())[0]) - return eval_res[self.key_indicator] - - return None - - -class DistEvalHook(EvalHook): - """Distributed evaluation hook. - - This hook will regularly perform evaluation in a given interval when - performing in distributed environment. - - Args: - dataloader (DataLoader): A PyTorch dataloader, whose dataset has - implemented ``evaluate`` function. - start (int | None, optional): Evaluation starting epoch. It enables - evaluation before the training starts if ``start`` <= the resuming - epoch. If None, whether to evaluate is merely decided by - ``interval``. Default: None. - interval (int): Evaluation interval. Default: 1. - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - default: True. - save_best (str, optional): If a metric is specified, it would measure - the best checkpoint during evaluation. The information about best - checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep - best score value and best checkpoint path, which will be also - loaded when resume checkpoint. Options are the evaluation metrics - on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox - detection and instance segmentation. ``AR@100`` for proposal - recall. If ``save_best`` is ``auto``, the first key of the returned - ``OrderedDict`` result will be used. Default: None. - rule (str | None, optional): Comparison rule for best score. If set to - None, it will infer a reasonable rule. Keys such as 'acc', 'top' - .etc will be inferred by 'greater' rule. Keys contain 'loss' will - be inferred by 'less' rule. Options are 'greater', 'less', None. - Default: None. - test_fn (callable, optional): test a model with samples from a - dataloader in a multi-gpu manner, and return the test results. If - ``None``, the default test function ``mmcv.engine.multi_gpu_test`` - will be used. (default: ``None``) - tmpdir (str | None): Temporary directory to save the results of all - processes. Default: None. - gpu_collect (bool): Whether to use gpu or cpu to collect results. - Default: False. - broadcast_bn_buffer (bool): Whether to broadcast the - buffer(running_mean and running_var) of rank 0 to other rank - before evaluation. Default: True. - out_dir (str, optional): The root directory to save checkpoints. If not - specified, `runner.work_dir` will be used by default. If specified, - the `out_dir` will be the concatenation of `out_dir` and the last - level directory of `runner.work_dir`. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. Default: None. - **eval_kwargs: Evaluation arguments fed into the evaluate function of - the dataset. 
- """ - - def __init__(self, - dataloader, - start=None, - interval=1, - by_epoch=True, - save_best=None, - rule=None, - test_fn=None, - greater_keys=None, - less_keys=None, - broadcast_bn_buffer=True, - tmpdir=None, - gpu_collect=False, - out_dir=None, - file_client_args=None, - **eval_kwargs): - - if test_fn is None: - from mmcv.engine import multi_gpu_test - test_fn = multi_gpu_test - - super().__init__( - dataloader, - start=start, - interval=interval, - by_epoch=by_epoch, - save_best=save_best, - rule=rule, - test_fn=test_fn, - greater_keys=greater_keys, - less_keys=less_keys, - out_dir=out_dir, - file_client_args=file_client_args, - **eval_kwargs) - - self.broadcast_bn_buffer = broadcast_bn_buffer - self.tmpdir = tmpdir - self.gpu_collect = gpu_collect - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - # Synchronization of BatchNorm's buffer (running_mean - # and running_var) is not supported in the DDP of pytorch, - # which may cause the inconsistent performance of models in - # different ranks, so we broadcast BatchNorm's buffers - # of rank 0 to other ranks to avoid this. - if self.broadcast_bn_buffer: - model = runner.model - for name, module in model.named_modules(): - if isinstance(module, - _BatchNorm) and module.track_running_stats: - dist.broadcast(module.running_var, 0) - dist.broadcast(module.running_mean, 0) - - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - - results = self.test_fn( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect) - if runner.rank == 0: - print('\n') - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - # the key_score may be `None` so it needs to skip the action to - # save the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/hook.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/hook.py deleted file mode 100644 index f2d1c986..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/hook.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmcv.utils import Registry, is_method_overridden - -HOOKS = Registry('hook') - - -class Hook: - stages = ('before_run', 'before_train_epoch', 'before_train_iter', - 'after_train_iter', 'after_train_epoch', 'before_val_epoch', - 'before_val_iter', 'after_val_iter', 'after_val_epoch', - 'after_run') - - def before_run(self, runner): - pass - - def after_run(self, runner): - pass - - def before_epoch(self, runner): - pass - - def after_epoch(self, runner): - pass - - def before_iter(self, runner): - pass - - def after_iter(self, runner): - pass - - def before_train_epoch(self, runner): - self.before_epoch(runner) - - def before_val_epoch(self, runner): - self.before_epoch(runner) - - def after_train_epoch(self, runner): - self.after_epoch(runner) - - def after_val_epoch(self, runner): - self.after_epoch(runner) - - def before_train_iter(self, runner): - self.before_iter(runner) - - def before_val_iter(self, runner): - self.before_iter(runner) - - def after_train_iter(self, runner): - self.after_iter(runner) - - def after_val_iter(self, runner): - self.after_iter(runner) - - def every_n_epochs(self, runner, n): - return (runner.epoch + 1) % n == 0 if n > 0 else False - - def every_n_inner_iters(self, runner, n): - return (runner.inner_iter + 1) % n == 0 if n > 0 else False - - def every_n_iters(self, runner, n): - return (runner.iter + 1) % n == 0 if n > 0 else False - - def end_of_epoch(self, runner): - return runner.inner_iter + 1 == len(runner.data_loader) - - def is_last_epoch(self, runner): - return runner.epoch + 1 == runner._max_epochs - - def is_last_iter(self, runner): - return runner.iter + 1 == runner._max_iters - - def get_triggered_stages(self): - trigger_stages = set() - for stage in Hook.stages: - if is_method_overridden(stage, Hook, self): - trigger_stages.add(stage) - - # some methods will be triggered in multi stages - # use this dict to map method to stages. - method_stages_map = { - 'before_epoch': ['before_train_epoch', 'before_val_epoch'], - 'after_epoch': ['after_train_epoch', 'after_val_epoch'], - 'before_iter': ['before_train_iter', 'before_val_iter'], - 'after_iter': ['after_train_iter', 'after_val_iter'], - } - - for method, map_stages in method_stages_map.items(): - if is_method_overridden(method, Hook, self): - trigger_stages.update(map_stages) - - return [stage for stage in Hook.stages if stage in trigger_stages] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/iter_timer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/iter_timer.py deleted file mode 100644 index cfd5002f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/iter_timer.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import time - -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class IterTimerHook(Hook): - - def before_epoch(self, runner): - self.t = time.time() - - def before_iter(self, runner): - runner.log_buffer.update({'data_time': time.time() - self.t}) - - def after_iter(self, runner): - runner.log_buffer.update({'time': time.time() - self.t}) - self.t = time.time() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/__init__.py deleted file mode 100644 index 062709e7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -from .base import LoggerHook -from .clearml import ClearMLLoggerHook -from .dvclive import DvcliveLoggerHook -from .mlflow import MlflowLoggerHook -from .neptune import NeptuneLoggerHook -from .pavi import PaviLoggerHook -from .segmind import SegmindLoggerHook -from .tensorboard import TensorboardLoggerHook -from .text import TextLoggerHook -from .wandb import WandbLoggerHook - -__all__ = [ - 'LoggerHook', 'MlflowLoggerHook', 'PaviLoggerHook', - 'TensorboardLoggerHook', 'TextLoggerHook', 'WandbLoggerHook', - 'NeptuneLoggerHook', 'DvcliveLoggerHook', 'SegmindLoggerHook', - 'ClearMLLoggerHook' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/base.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/base.py deleted file mode 100644 index 416a1b75..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/base.py +++ /dev/null @@ -1,172 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers -from abc import ABCMeta, abstractmethod -from typing import Dict - -import numpy as np -import torch - -from ..hook import Hook - - -class LoggerHook(Hook): - """Base class for logger hooks. - - Args: - interval (int): Logging interval (every k iterations). Default 10. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. Default True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default False. - by_epoch (bool): Whether EpochBasedRunner is used. Default True. - """ - - __metaclass__ = ABCMeta - - def __init__(self, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = False, - by_epoch: bool = True): - self.interval = interval - self.ignore_last = ignore_last - self.reset_flag = reset_flag - self.by_epoch = by_epoch - - @abstractmethod - def log(self, runner): - pass - - @staticmethod - def is_scalar(val, - include_np: bool = True, - include_torch: bool = True) -> bool: - """Tell the input variable is a scalar or not. - - Args: - val: Input variable. - include_np (bool): Whether include 0-d np.ndarray as a scalar. - include_torch (bool): Whether include 0-d torch.Tensor as a scalar. - - Returns: - bool: True or False. 
- """ - if isinstance(val, numbers.Number): - return True - elif include_np and isinstance(val, np.ndarray) and val.ndim == 0: - return True - elif include_torch and isinstance(val, torch.Tensor) and len(val) == 1: - return True - else: - return False - - def get_mode(self, runner) -> str: - if runner.mode == 'train': - if 'time' in runner.log_buffer.output: - mode = 'train' - else: - mode = 'val' - elif runner.mode == 'val': - mode = 'val' - else: - raise ValueError(f"runner mode should be 'train' or 'val', " - f'but got {runner.mode}') - return mode - - def get_epoch(self, runner) -> int: - if runner.mode == 'train': - epoch = runner.epoch + 1 - elif runner.mode == 'val': - # normal val mode - # runner.epoch += 1 has been done before val workflow - epoch = runner.epoch - else: - raise ValueError(f"runner mode should be 'train' or 'val', " - f'but got {runner.mode}') - return epoch - - def get_iter(self, runner, inner_iter: bool = False) -> int: - """Get the current training iteration step.""" - if self.by_epoch and inner_iter: - current_iter = runner.inner_iter + 1 - else: - current_iter = runner.iter + 1 - return current_iter - - def get_lr_tags(self, runner) -> Dict[str, float]: - tags = {} - lrs = runner.current_lr() - if isinstance(lrs, dict): - for name, value in lrs.items(): - tags[f'learning_rate/{name}'] = value[0] - else: - tags['learning_rate'] = lrs[0] - return tags - - def get_momentum_tags(self, runner) -> Dict[str, float]: - tags = {} - momentums = runner.current_momentum() - if isinstance(momentums, dict): - for name, value in momentums.items(): - tags[f'momentum/{name}'] = value[0] - else: - tags['momentum'] = momentums[0] - return tags - - def get_loggable_tags( - self, - runner, - allow_scalar: bool = True, - allow_text: bool = False, - add_mode: bool = True, - tags_to_skip: tuple = ('time', 'data_time') - ) -> Dict: - tags = {} - for var, val in runner.log_buffer.output.items(): - if var in tags_to_skip: - continue - if self.is_scalar(val) and not allow_scalar: - continue - if isinstance(val, str) and not allow_text: - continue - if add_mode: - var = f'{self.get_mode(runner)}/{var}' - tags[var] = val - tags.update(self.get_lr_tags(runner)) - tags.update(self.get_momentum_tags(runner)) - return tags - - def before_run(self, runner) -> None: - for hook in runner.hooks[::-1]: - if isinstance(hook, LoggerHook): - hook.reset_flag = True - break - - def before_epoch(self, runner) -> None: - runner.log_buffer.clear() # clear logs of last epoch - - def after_train_iter(self, runner) -> None: - if self.by_epoch and self.every_n_inner_iters(runner, self.interval): - runner.log_buffer.average(self.interval) - elif not self.by_epoch and self.every_n_iters(runner, self.interval): - runner.log_buffer.average(self.interval) - elif self.end_of_epoch(runner) and not self.ignore_last: - # not precise but more stable - runner.log_buffer.average(self.interval) - - if runner.log_buffer.ready: - self.log(runner) - if self.reset_flag: - runner.log_buffer.clear_output() - - def after_train_epoch(self, runner) -> None: - if runner.log_buffer.ready: - self.log(runner) - if self.reset_flag: - runner.log_buffer.clear_output() - - def after_val_epoch(self, runner) -> None: - runner.log_buffer.average() - self.log(runner) - if self.reset_flag: - runner.log_buffer.clear_output() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/clearml.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/clearml.py deleted file mode 100644 index 
7db651f0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/clearml.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -from typing import Dict, Optional - -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class ClearMLLoggerHook(LoggerHook): - """Class to log metrics with clearml. - - It requires `clearml`_ to be installed. - - - Args: - init_kwargs (dict): A dict contains the `clearml.Task.init` - initialization keys. See `taskinit`_ for more details. - interval (int): Logging interval (every k iterations). Default 10. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. Default: True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default: False. - by_epoch (bool): Whether EpochBasedRunner is used. Default: True. - - .. _clearml: - https://clear.ml/docs/latest/docs/ - .. _taskinit: - https://clear.ml/docs/latest/docs/references/sdk/task/#taskinit - """ - - def __init__(self, - init_kwargs: Optional[Dict] = None, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = False, - by_epoch: bool = True): - super().__init__(interval, ignore_last, reset_flag, by_epoch) - self.import_clearml() - self.init_kwargs = init_kwargs - - def import_clearml(self): - try: - import clearml - except ImportError: - raise ImportError( - 'Please run "pip install clearml" to install clearml') - self.clearml = clearml - - @master_only - def before_run(self, runner) -> None: - super().before_run(runner) - task_kwargs = self.init_kwargs if self.init_kwargs else {} - self.task = self.clearml.Task.init(**task_kwargs) - self.task_logger = self.task.get_logger() - - @master_only - def log(self, runner) -> None: - tags = self.get_loggable_tags(runner) - for tag, val in tags.items(): - self.task_logger.report_scalar(tag, tag, val, - self.get_iter(runner)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/dvclive.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/dvclive.py deleted file mode 100644 index fc0a58c4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/dvclive.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from pathlib import Path -from typing import Optional - -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class DvcliveLoggerHook(LoggerHook): - """Class to log metrics with dvclive. - - It requires `dvclive`_ to be installed. - - Args: - model_file (str): Default None. If not None, after each epoch the - model will be saved to {model_file}. - interval (int): Logging interval (every k iterations). Default 10. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. Default: True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default: False. - by_epoch (bool): Whether EpochBasedRunner is used. Default: True. - kwargs: Arguments for instantiating `Live`_. - - .. _dvclive: - https://dvc.org/doc/dvclive - - .. 
_Live: - https://dvc.org/doc/dvclive/api-reference/live#parameters - """ - - def __init__(self, - model_file: Optional[str] = None, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = False, - by_epoch: bool = True, - **kwargs): - super().__init__(interval, ignore_last, reset_flag, by_epoch) - self.model_file = model_file - self.import_dvclive(**kwargs) - - def import_dvclive(self, **kwargs) -> None: - try: - from dvclive import Live - except ImportError: - raise ImportError( - 'Please run "pip install dvclive" to install dvclive') - self.dvclive = Live(**kwargs) - - @master_only - def log(self, runner) -> None: - tags = self.get_loggable_tags(runner) - if tags: - self.dvclive.set_step(self.get_iter(runner)) - for k, v in tags.items(): - self.dvclive.log(k, v) - - @master_only - def after_train_epoch(self, runner) -> None: - super().after_train_epoch(runner) - if self.model_file is not None: - runner.save_checkpoint( - Path(self.model_file).parent, - filename_tmpl=Path(self.model_file).name, - create_symlink=False, - ) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/mlflow.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/mlflow.py deleted file mode 100644 index a76b0426..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/mlflow.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Dict, Optional - -from mmcv.utils import TORCH_VERSION -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class MlflowLoggerHook(LoggerHook): - """Class to log metrics and (optionally) a trained model to MLflow. - - It requires `MLflow`_ to be installed. - - Args: - exp_name (str, optional): Name of the experiment to be used. - Default None. If not None, set the active experiment. - If experiment does not exist, an experiment with provided name - will be created. - tags (Dict[str], optional): Tags for the current run. - Default None. If not None, set tags for the current run. - log_model (bool, optional): Whether to log an MLflow artifact. - Default True. If True, log runner.model as an MLflow artifact - for the current run. - interval (int): Logging interval (every k iterations). Default: 10. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. Default: True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default: False. - by_epoch (bool): Whether EpochBasedRunner is used. Default: True. - - .. 
_MLflow: - https://www.mlflow.org/docs/latest/index.html - """ - - def __init__(self, - exp_name: Optional[str] = None, - tags: Optional[Dict] = None, - log_model: bool = True, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = False, - by_epoch: bool = True): - super().__init__(interval, ignore_last, reset_flag, by_epoch) - self.import_mlflow() - self.exp_name = exp_name - self.tags = tags - self.log_model = log_model - - def import_mlflow(self) -> None: - try: - import mlflow - import mlflow.pytorch as mlflow_pytorch - except ImportError: - raise ImportError( - 'Please run "pip install mlflow" to install mlflow') - self.mlflow = mlflow - self.mlflow_pytorch = mlflow_pytorch - - @master_only - def before_run(self, runner) -> None: - super().before_run(runner) - if self.exp_name is not None: - self.mlflow.set_experiment(self.exp_name) - if self.tags is not None: - self.mlflow.set_tags(self.tags) - - @master_only - def log(self, runner) -> None: - tags = self.get_loggable_tags(runner) - if tags: - self.mlflow.log_metrics(tags, step=self.get_iter(runner)) - - @master_only - def after_run(self, runner) -> None: - if self.log_model: - self.mlflow_pytorch.log_model( - runner.model, - 'models', - pip_requirements=[f'torch=={TORCH_VERSION}']) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/neptune.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/neptune.py deleted file mode 100644 index e398fe1e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/neptune.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Dict, Optional - -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class NeptuneLoggerHook(LoggerHook): - """Class to log metrics to NeptuneAI. - - It requires `Neptune`_ to be installed. - - Args: - init_kwargs (dict): a dict contains the initialization keys as below: - - - project (str): Name of a project in a form of - namespace/project_name. If None, the value of NEPTUNE_PROJECT - environment variable will be taken. - - api_token (str): User’s API token. If None, the value of - NEPTUNE_API_TOKEN environment variable will be taken. Note: It is - strongly recommended to use NEPTUNE_API_TOKEN environment - variable rather than placing your API token in plain text in your - source code. - - name (str, optional, default is 'Untitled'): Editable name of the - run. Name is displayed in the run's Details and in Runs table as - a column. - - Check https://docs.neptune.ai/api-reference/neptune#init for more - init arguments. - interval (int): Logging interval (every k iterations). Default: 10. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than ``interval``. Default: True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default: True. - with_step (bool): If True, the step will be logged from - ``self.get_iters``. Otherwise, step will not be logged. - Default: True. - by_epoch (bool): Whether EpochBasedRunner is used. Default: True. - - .. 
_Neptune: - https://docs.neptune.ai - """ - - def __init__(self, - init_kwargs: Optional[Dict] = None, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = True, - with_step: bool = True, - by_epoch: bool = True): - - super().__init__(interval, ignore_last, reset_flag, by_epoch) - self.import_neptune() - self.init_kwargs = init_kwargs - self.with_step = with_step - - def import_neptune(self) -> None: - try: - import neptune.new as neptune - except ImportError: - raise ImportError( - 'Please run "pip install neptune-client" to install neptune') - self.neptune = neptune - self.run = None - - @master_only - def before_run(self, runner) -> None: - if self.init_kwargs: - self.run = self.neptune.init(**self.init_kwargs) - else: - self.run = self.neptune.init() - - @master_only - def log(self, runner) -> None: - tags = self.get_loggable_tags(runner) - if tags: - for tag_name, tag_value in tags.items(): - if self.with_step: - self.run[tag_name].log( # type: ignore - tag_value, step=self.get_iter(runner)) - else: - tags['global_step'] = self.get_iter(runner) - self.run[tag_name].log(tags) # type: ignore - - @master_only - def after_run(self, runner) -> None: - self.run.stop() # type: ignore diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/pavi.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/pavi.py deleted file mode 100644 index 13fbf7b2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/pavi.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json -import os -import os.path as osp -from typing import Dict, Optional - -import torch -import yaml - -import mmcv -from ....parallel.utils import is_module_wrapper -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class PaviLoggerHook(LoggerHook): - """Class to visual model, log metrics (for internal use). - - Args: - init_kwargs (dict): A dict contains the initialization keys. - add_graph (bool): Whether to visual model. Default: False. - add_last_ckpt (bool): Whether to save checkpoint after run. - Default: False. - interval (int): Logging interval (every k iterations). Default: True. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. Default: True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default: False. - by_epoch (bool): Whether EpochBasedRunner is used. Default: True. - img_key (string): Get image data from Dataset. Default: 'img_info'. 
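All of the backend-specific logger hooks being removed in this file set (Dvclive, MLflow, Neptune, Pavi, and the ones further below) share the same shape: a `@master_only`-guarded `log()` that forwards whatever `LoggerHook.get_loggable_tags(runner)` returns to the backend. The toy, mmcv-free sketch below only illustrates that shape; the class and names here are invented for illustration and are not mmcv API.

```python
# Toy illustration of the pattern shared by the logger hooks deleted in this patch.
# A real hook subclasses mmcv's LoggerHook; this stand-in just mirrors the flow.
class FakeLoggerHook:
    def __init__(self, interval=10):
        self.interval = interval      # logging interval, as in the real hooks
        self.history = []             # stand-in for the external backend

    def get_loggable_tags(self, runner):
        # Stand-in for LoggerHook.get_loggable_tags(): scalar metrics for this step.
        return dict(runner['metrics'])

    def log(self, runner):
        tags = self.get_loggable_tags(runner)
        if tags:
            # A real hook would call e.g. mlflow.log_metrics(...) or run[tag].log(...) here.
            self.history.append((runner['iter'], tags))

hook = FakeLoggerHook()
hook.log({'iter': 10, 'metrics': {'loss': 0.73, 'lr': 1e-3}})
print(hook.history)
```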
- """ - - def __init__(self, - init_kwargs: Optional[Dict] = None, - add_graph: bool = False, - add_last_ckpt: bool = False, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = False, - by_epoch: bool = True, - img_key: str = 'img_info'): - super().__init__(interval, ignore_last, reset_flag, by_epoch) - self.init_kwargs = init_kwargs - self.add_graph = add_graph - self.add_last_ckpt = add_last_ckpt - self.img_key = img_key - - @master_only - def before_run(self, runner) -> None: - super().before_run(runner) - try: - from pavi import SummaryWriter - except ImportError: - raise ImportError('Please run "pip install pavi" to install pavi.') - - self.run_name = runner.work_dir.split('/')[-1] - - if not self.init_kwargs: - self.init_kwargs = dict() - self.init_kwargs['name'] = self.run_name - self.init_kwargs['model'] = runner._model_name - if runner.meta is not None: - if 'config_dict' in runner.meta: - config_dict = runner.meta['config_dict'] - assert isinstance( - config_dict, - dict), ('meta["config_dict"] has to be of a dict, ' - f'but got {type(config_dict)}') - elif 'config_file' in runner.meta: - config_file = runner.meta['config_file'] - config_dict = dict(mmcv.Config.fromfile(config_file)) - else: - config_dict = None - if config_dict is not None: - # 'max_.*iter' is parsed in pavi sdk as the maximum iterations - # to properly set up the progress bar. - config_dict = config_dict.copy() - config_dict.setdefault('max_iter', runner.max_iters) - # non-serializable values are first converted in - # mmcv.dump to json - config_dict = json.loads( - mmcv.dump(config_dict, file_format='json')) - session_text = yaml.dump(config_dict) - self.init_kwargs['session_text'] = session_text - self.writer = SummaryWriter(**self.init_kwargs) - - def get_step(self, runner) -> int: - """Get the total training step/epoch.""" - if self.get_mode(runner) == 'val' and self.by_epoch: - return self.get_epoch(runner) - else: - return self.get_iter(runner) - - @master_only - def log(self, runner) -> None: - tags = self.get_loggable_tags(runner, add_mode=False) - if tags: - self.writer.add_scalars( - self.get_mode(runner), tags, self.get_step(runner)) - - @master_only - def after_run(self, runner) -> None: - if self.add_last_ckpt: - ckpt_path = osp.join(runner.work_dir, 'latest.pth') - if osp.islink(ckpt_path): - ckpt_path = osp.join(runner.work_dir, os.readlink(ckpt_path)) - - if osp.isfile(ckpt_path): - # runner.epoch += 1 has been done before `after_run`. - iteration = runner.epoch if self.by_epoch else runner.iter - return self.writer.add_snapshot_file( - tag=self.run_name, - snapshot_file_path=ckpt_path, - iteration=iteration) - - # flush the buffer and send a task ending signal to Pavi - self.writer.close() - - @master_only - def before_epoch(self, runner) -> None: - if runner.epoch == 0 and self.add_graph: - if is_module_wrapper(runner.model): - _model = runner.model.module - else: - _model = runner.model - device = next(_model.parameters()).device - data = next(iter(runner.data_loader)) - image = data[self.img_key][0:1].to(device) - with torch.no_grad(): - self.writer.add_graph(_model, image) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/segmind.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/segmind.py deleted file mode 100644 index ecb3751e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/segmind.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class SegmindLoggerHook(LoggerHook): - """Class to log metrics to Segmind. - - It requires `Segmind`_ to be installed. - - Args: - interval (int): Logging interval (every k iterations). Default: 10. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. Default True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default False. - by_epoch (bool): Whether EpochBasedRunner is used. Default True. - - .. _Segmind: - https://docs.segmind.com/python-library - """ - - def __init__(self, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = False, - by_epoch=True): - super().__init__(interval, ignore_last, reset_flag, by_epoch) - self.import_segmind() - - def import_segmind(self) -> None: - try: - import segmind - except ImportError: - raise ImportError( - "Please run 'pip install segmind' to install segmind") - self.log_metrics = segmind.tracking.fluent.log_metrics - self.mlflow_log = segmind.utils.logging_utils.try_mlflow_log - - @master_only - def log(self, runner) -> None: - tags = self.get_loggable_tags(runner) - if tags: - # logging metrics to segmind - self.mlflow_log( - self.log_metrics, tags, step=runner.epoch, epoch=runner.epoch) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/tensorboard.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/tensorboard.py deleted file mode 100644 index 11d07991..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/tensorboard.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -from typing import Optional - -from mmcv.utils import TORCH_VERSION, digit_version -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class TensorboardLoggerHook(LoggerHook): - """Class to log metrics to Tensorboard. - - Args: - log_dir (string): Save directory location. Default: None. If default - values are used, directory location is ``runner.work_dir``/tf_logs. - interval (int): Logging interval (every k iterations). Default: True. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. Default: True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default: False. - by_epoch (bool): Whether EpochBasedRunner is used. Default: True. 
- """ - - def __init__(self, - log_dir: Optional[str] = None, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = False, - by_epoch: bool = True): - super().__init__(interval, ignore_last, reset_flag, by_epoch) - self.log_dir = log_dir - - @master_only - def before_run(self, runner) -> None: - super().before_run(runner) - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.1')): - try: - from tensorboardX import SummaryWriter - except ImportError: - raise ImportError('Please install tensorboardX to use ' - 'TensorboardLoggerHook.') - else: - try: - from torch.utils.tensorboard import SummaryWriter - except ImportError: - raise ImportError( - 'Please run "pip install future tensorboard" to install ' - 'the dependencies to use torch.utils.tensorboard ' - '(applicable to PyTorch 1.1 or higher)') - - if self.log_dir is None: - self.log_dir = osp.join(runner.work_dir, 'tf_logs') - self.writer = SummaryWriter(self.log_dir) - - @master_only - def log(self, runner) -> None: - tags = self.get_loggable_tags(runner, allow_text=True) - for tag, val in tags.items(): - if isinstance(val, str): - self.writer.add_text(tag, val, self.get_iter(runner)) - else: - self.writer.add_scalar(tag, val, self.get_iter(runner)) - - @master_only - def after_run(self, runner) -> None: - self.writer.close() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/text.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/text.py deleted file mode 100644 index 05d3442c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/text.py +++ /dev/null @@ -1,256 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import datetime -import os -import os.path as osp -from collections import OrderedDict -from typing import Dict, Optional, Union - -import torch -import torch.distributed as dist - -import mmcv -from mmcv.fileio.file_client import FileClient -from mmcv.utils import is_tuple_of, scandir -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class TextLoggerHook(LoggerHook): - """Logger hook in text. - - In this logger hook, the information will be printed on terminal and - saved in json file. - - Args: - by_epoch (bool, optional): Whether EpochBasedRunner is used. - Default: True. - interval (int, optional): Logging interval (every k iterations). - Default: 10. - ignore_last (bool, optional): Ignore the log of last iterations in each - epoch if less than :attr:`interval`. Default: True. - reset_flag (bool, optional): Whether to clear the output buffer after - logging. Default: False. - interval_exp_name (int, optional): Logging interval for experiment - name. This feature is to help users conveniently get the experiment - information from screen or log file. Default: 1000. - out_dir (str, optional): Logs are saved in ``runner.work_dir`` default. - If ``out_dir`` is specified, logs will be copied to a new directory - which is the concatenation of ``out_dir`` and the last level - directory of ``runner.work_dir``. Default: None. - `New in version 1.3.16.` - out_suffix (str or tuple[str], optional): Those filenames ending with - ``out_suffix`` will be copied to ``out_dir``. - Default: ('.log.json', '.log', '.py'). - `New in version 1.3.16.` - keep_local (bool, optional): Whether to keep local log when - :attr:`out_dir` is specified. If False, the local log will be - removed. Default: True. 
- `New in version 1.3.16.` - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - """ - - def __init__(self, - by_epoch: bool = True, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = False, - interval_exp_name: int = 1000, - out_dir: Optional[str] = None, - out_suffix: Union[str, tuple] = ('.log.json', '.log', '.py'), - keep_local: bool = True, - file_client_args: Optional[Dict] = None): - super().__init__(interval, ignore_last, reset_flag, by_epoch) - self.by_epoch = by_epoch - self.time_sec_tot = 0 - self.interval_exp_name = interval_exp_name - - if out_dir is None and file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" when `out_dir` is not' - 'specified.') - self.out_dir = out_dir - - if not (out_dir is None or isinstance(out_dir, str) - or is_tuple_of(out_dir, str)): - raise TypeError('out_dir should be "None" or string or tuple of ' - 'string, but got {out_dir}') - self.out_suffix = out_suffix - - self.keep_local = keep_local - self.file_client_args = file_client_args - if self.out_dir is not None: - self.file_client = FileClient.infer_client(file_client_args, - self.out_dir) - - def before_run(self, runner) -> None: - super().before_run(runner) - - if self.out_dir is not None: - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - # The final `self.out_dir` is the concatenation of `self.out_dir` - # and the last level directory of `runner.work_dir` - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - runner.logger.info( - f'Text logs will be saved to {self.out_dir} by ' - f'{self.file_client.name} after the training process.') - - self.start_iter = runner.iter - self.json_log_path = osp.join(runner.work_dir, - f'{runner.timestamp}.log.json') - if runner.meta is not None: - self._dump_log(runner.meta, runner) - - def _get_max_memory(self, runner) -> int: - device = getattr(runner.model, 'output_device', None) - mem = torch.cuda.max_memory_allocated(device=device) - mem_mb = torch.tensor([int(mem) // (1024 * 1024)], - dtype=torch.int, - device=device) - if runner.world_size > 1: - dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX) - return mem_mb.item() - - def _log_info(self, log_dict: Dict, runner) -> None: - # print exp name for users to distinguish experiments - # at every ``interval_exp_name`` iterations and the end of each epoch - if runner.meta is not None and 'exp_name' in runner.meta: - if (self.every_n_iters(runner, self.interval_exp_name)) or ( - self.by_epoch and self.end_of_epoch(runner)): - exp_info = f'Exp name: {runner.meta["exp_name"]}' - runner.logger.info(exp_info) - - if log_dict['mode'] == 'train': - if isinstance(log_dict['lr'], dict): - lr_str = [] - for k, val in log_dict['lr'].items(): - lr_str.append(f'lr_{k}: {val:.3e}') - lr_str = ' '.join(lr_str) # type: ignore - else: - lr_str = f'lr: {log_dict["lr"]:.3e}' # type: ignore - - # by epoch: Epoch [4][100/1000] - # by iter: Iter [100/100000] - if self.by_epoch: - log_str = f'Epoch [{log_dict["epoch"]}]' \ - f'[{log_dict["iter"]}/{len(runner.data_loader)}]\t' - else: - log_str = f'Iter [{log_dict["iter"]}/{runner.max_iters}]\t' - log_str += f'{lr_str}, ' - - if 'time' in log_dict.keys(): - self.time_sec_tot += (log_dict['time'] * self.interval) - time_sec_avg = self.time_sec_tot / ( - runner.iter - self.start_iter + 1) - eta_sec = 
time_sec_avg * (runner.max_iters - runner.iter - 1) - eta_str = str(datetime.timedelta(seconds=int(eta_sec))) - log_str += f'eta: {eta_str}, ' - log_str += f'time: {log_dict["time"]:.3f}, ' \ - f'data_time: {log_dict["data_time"]:.3f}, ' - # statistic memory - #if torch.cuda.is_available(): - #log_str += f'memory: {log_dict["memory"]}, ' - else: - # val/test time - # here 1000 is the length of the val dataloader - # by epoch: Epoch[val] [4][1000] - # by iter: Iter[val] [1000] - if self.by_epoch: - log_str = f'Epoch({log_dict["mode"]}) ' \ - f'[{log_dict["epoch"]}][{log_dict["iter"]}]\t' - else: - log_str = f'Iter({log_dict["mode"]}) [{log_dict["iter"]}]\t' - - log_items = [] - for name, val in log_dict.items(): - # TODO: resolve this hack - # these items have been in log_str - if name in [ - 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time', - 'epoch' - ]: - continue - if isinstance(val, float): - val = f'{val:.4f}' - log_items.append(f'{name}: {val}') - log_str += ', '.join(log_items) - - runner.logger.info(log_str) - - def _dump_log(self, log_dict: Dict, runner) -> None: - # dump log in json format - json_log = OrderedDict() - for k, v in log_dict.items(): - json_log[k] = self._round_float(v) - # only append log at last line - if runner.rank == 0: - with open(self.json_log_path, 'a+') as f: - mmcv.dump(json_log, f, file_format='json') - f.write('\n') - - def _round_float(self, items): - if isinstance(items, list): - return [self._round_float(item) for item in items] - elif isinstance(items, float): - return round(items, 5) - else: - return items - - def log(self, runner) -> OrderedDict: - if 'eval_iter_num' in runner.log_buffer.output: - # this doesn't modify runner.iter and is regardless of by_epoch - cur_iter = runner.log_buffer.output.pop('eval_iter_num') - else: - cur_iter = self.get_iter(runner, inner_iter=True) - - log_dict = OrderedDict( - mode=self.get_mode(runner), - epoch=self.get_epoch(runner), - iter=cur_iter) - - # only record lr of the first param group - cur_lr = runner.current_lr() - if isinstance(cur_lr, list): - log_dict['lr'] = cur_lr[0] - else: - assert isinstance(cur_lr, dict) - log_dict['lr'] = {} - for k, lr_ in cur_lr.items(): - assert isinstance(lr_, list) - log_dict['lr'].update({k: lr_[0]}) - - #if 'time' in runner.log_buffer.output: - # statistic memory - #if torch.cuda.is_available(): - #log_dict['memory'] = self._get_max_memory(runner) - - log_dict = dict(log_dict, **runner.log_buffer.output) # type: ignore - - self._log_info(log_dict, runner) - self._dump_log(log_dict, runner) - return log_dict - - def after_run(self, runner) -> None: - # copy or upload logs to self.out_dir - if self.out_dir is not None: - for filename in scandir(runner.work_dir, self.out_suffix, True): - local_filepath = osp.join(runner.work_dir, filename) - out_filepath = self.file_client.join_path( - self.out_dir, filename) - with open(local_filepath) as f: - self.file_client.put_text(f.read(), out_filepath) - - runner.logger.info( - f'The file {local_filepath} has been uploaded to ' - f'{out_filepath}.') - - if not self.keep_local: - os.remove(local_filepath) - runner.logger.info( - f'{local_filepath} was removed due to the ' - '`self.keep_local=False`') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/wandb.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/wandb.py deleted file mode 100644 index 1cf16550..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/logger/wandb.py +++ /dev/null @@ 
-1,107 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -from typing import Dict, Optional, Union - -from mmcv.utils import scandir -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class WandbLoggerHook(LoggerHook): - """Class to log metrics with wandb. - - It requires `wandb`_ to be installed. - - - Args: - init_kwargs (dict): A dict contains the initialization keys. Check - https://docs.wandb.ai/ref/python/init for more init arguments. - interval (int): Logging interval (every k iterations). - Default 10. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - Default: True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default: False. - commit (bool): Save the metrics dict to the wandb server and increment - the step. If false ``wandb.log`` just updates the current metrics - dict with the row argument and metrics won't be saved until - ``wandb.log`` is called with ``commit=True``. - Default: True. - by_epoch (bool): Whether EpochBasedRunner is used. - Default: True. - with_step (bool): If True, the step will be logged from - ``self.get_iters``. Otherwise, step will not be logged. - Default: True. - log_artifact (bool): If True, artifacts in {work_dir} will be uploaded - to wandb after training ends. - Default: True - `New in version 1.4.3.` - out_suffix (str or tuple[str], optional): Those filenames ending with - ``out_suffix`` will be uploaded to wandb. - Default: ('.log.json', '.log', '.py'). - `New in version 1.4.3.` - - .. _wandb: - https://docs.wandb.ai - """ - - def __init__(self, - init_kwargs: Optional[Dict] = None, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = False, - commit: bool = True, - by_epoch: bool = True, - with_step: bool = True, - log_artifact: bool = True, - out_suffix: Union[str, tuple] = ('.log.json', '.log', '.py')): - super().__init__(interval, ignore_last, reset_flag, by_epoch) - self.import_wandb() - self.init_kwargs = init_kwargs - self.commit = commit - self.with_step = with_step - self.log_artifact = log_artifact - self.out_suffix = out_suffix - - def import_wandb(self) -> None: - try: - import wandb - except ImportError: - raise ImportError( - 'Please run "pip install wandb" to install wandb') - self.wandb = wandb - - @master_only - def before_run(self, runner) -> None: - super().before_run(runner) - if self.wandb is None: - self.import_wandb() - if self.init_kwargs: - self.wandb.init(**self.init_kwargs) # type: ignore - else: - self.wandb.init() # type: ignore - - @master_only - def log(self, runner) -> None: - tags = self.get_loggable_tags(runner) - if tags: - if self.with_step: - self.wandb.log( - tags, step=self.get_iter(runner), commit=self.commit) - else: - tags['global_step'] = self.get_iter(runner) - self.wandb.log(tags, commit=self.commit) - - @master_only - def after_run(self, runner) -> None: - if self.log_artifact: - wandb_artifact = self.wandb.Artifact( - name='artifacts', type='model') - for filename in scandir(runner.work_dir, self.out_suffix, True): - local_filepath = osp.join(runner.work_dir, filename) - wandb_artifact.add_file(local_filepath) - self.wandb.log_artifact(wandb_artifact) - self.wandb.join() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/lr_updater.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/lr_updater.py deleted file mode 100644 index 3d3351d5..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/lr_updater.py +++ /dev/null @@ -1,751 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers -from math import cos, pi -from typing import Callable, List, Optional, Union - -import mmcv -from mmcv import runner -from .hook import HOOKS, Hook - - -class LrUpdaterHook(Hook): - """LR Scheduler in MMCV. - - Args: - by_epoch (bool): LR changes epoch by epoch - warmup (string): Type of warmup used. It can be None(use no warmup), - 'constant', 'linear' or 'exp' - warmup_iters (int): The number of iterations or epochs that warmup - lasts - warmup_ratio (float): LR used at the beginning of warmup equals to - warmup_ratio * initial_lr - warmup_by_epoch (bool): When warmup_by_epoch == True, warmup_iters - means the number of epochs that warmup lasts, otherwise means the - number of iteration that warmup lasts - """ - - def __init__(self, - by_epoch: bool = True, - warmup: Optional[str] = None, - warmup_iters: int = 0, - warmup_ratio: float = 0.1, - warmup_by_epoch: bool = False) -> None: - # validate the "warmup" argument - if warmup is not None: - if warmup not in ['constant', 'linear', 'exp']: - raise ValueError( - f'"{warmup}" is not a supported type for warming up, valid' - ' types are "constant", "linear" and "exp"') - if warmup is not None: - assert warmup_iters > 0, \ - '"warmup_iters" must be a positive integer' - assert 0 < warmup_ratio <= 1.0, \ - '"warmup_ratio" must be in range (0,1]' - - self.by_epoch = by_epoch - self.warmup = warmup - self.warmup_iters: Optional[int] = warmup_iters - self.warmup_ratio = warmup_ratio - self.warmup_by_epoch = warmup_by_epoch - - if self.warmup_by_epoch: - self.warmup_epochs: Optional[int] = self.warmup_iters - self.warmup_iters = None - else: - self.warmup_epochs = None - - self.base_lr: Union[list, dict] = [] # initial lr for all param groups - self.regular_lr: list = [] # expected lr if no warming up is performed - - def _set_lr(self, runner, lr_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, lr in zip(optim.param_groups, lr_groups[k]): - param_group['lr'] = lr - else: - for param_group, lr in zip(runner.optimizer.param_groups, - lr_groups): - param_group['lr'] = lr - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - raise NotImplementedError - - def get_regular_lr(self, runner: 'runner.BaseRunner'): - if isinstance(runner.optimizer, dict): - lr_groups = {} - for k in runner.optimizer.keys(): - _lr_group = [ - self.get_lr(runner, _base_lr) - for _base_lr in self.base_lr[k] - ] - lr_groups.update({k: _lr_group}) - - return lr_groups - else: - return [self.get_lr(runner, _base_lr) for _base_lr in self.base_lr] - - def get_warmup_lr(self, cur_iters: int): - - def _get_warmup_lr(cur_iters, regular_lr): - if self.warmup == 'constant': - warmup_lr = [_lr * self.warmup_ratio for _lr in regular_lr] - elif self.warmup == 'linear': - k = (1 - cur_iters / self.warmup_iters) * (1 - - self.warmup_ratio) - warmup_lr = [_lr * (1 - k) for _lr in regular_lr] - elif self.warmup == 'exp': - k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters) - warmup_lr = [_lr * k for _lr in regular_lr] - return warmup_lr - - if isinstance(self.regular_lr, dict): - lr_groups = {} - for key, regular_lr in self.regular_lr.items(): - lr_groups[key] = _get_warmup_lr(cur_iters, regular_lr) - return lr_groups - else: - return _get_warmup_lr(cur_iters, self.regular_lr) - - def before_run(self, runner: 'runner.BaseRunner'): 
- # NOTE: when resuming from a checkpoint, if 'initial_lr' is not saved, - # it will be set according to the optimizer params - if isinstance(runner.optimizer, dict): - self.base_lr = {} - for k, optim in runner.optimizer.items(): - for group in optim.param_groups: - group.setdefault('initial_lr', group['lr']) - _base_lr = [ - group['initial_lr'] for group in optim.param_groups - ] - self.base_lr.update({k: _base_lr}) - else: - for group in runner.optimizer.param_groups: - group.setdefault('initial_lr', group['lr']) - self.base_lr = [ - group['initial_lr'] for group in runner.optimizer.param_groups - ] - - def before_train_epoch(self, runner: 'runner.BaseRunner'): - if self.warmup_iters is None: - epoch_len = len(runner.data_loader) # type: ignore - self.warmup_iters = self.warmup_epochs * epoch_len # type: ignore - - if not self.by_epoch: - return - - self.regular_lr = self.get_regular_lr(runner) - self._set_lr(runner, self.regular_lr) - - def before_train_iter(self, runner: 'runner.BaseRunner'): - cur_iter = runner.iter - if not self.by_epoch: - self.regular_lr = self.get_regular_lr(runner) - if self.warmup is None or cur_iter >= self.warmup_iters: - self._set_lr(runner, self.regular_lr) - else: - warmup_lr = self.get_warmup_lr(cur_iter) - self._set_lr(runner, warmup_lr) - elif self.by_epoch: - if self.warmup is None or cur_iter > self.warmup_iters: - return - elif cur_iter == self.warmup_iters: - self._set_lr(runner, self.regular_lr) - else: - warmup_lr = self.get_warmup_lr(cur_iter) - self._set_lr(runner, warmup_lr) - - -@HOOKS.register_module() -class FixedLrUpdaterHook(LrUpdaterHook): - - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def get_lr(self, runner, base_lr): - return base_lr - - -@HOOKS.register_module() -class StepLrUpdaterHook(LrUpdaterHook): - """Step LR scheduler with min_lr clipping. - - Args: - step (int | list[int]): Step to decay the LR. If an int value is given, - regard it as the decay interval. If a list is given, decay LR at - these steps. - gamma (float): Decay LR ratio. Defaults to 0.1. - min_lr (float, optional): Minimum LR value to keep. If LR after decay - is lower than `min_lr`, it will be clipped to this value. If None - is given, we don't perform lr clipping. Default: None. 
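The warm-up logic in `LrUpdaterHook.get_warmup_lr` above supports three modes, each scaling the regular LR by a factor derived from `warmup_ratio` and the warm-up progress. A stand-alone restatement of those three formulas, with invented values:

```python
# Stand-alone restatement of LrUpdaterHook's warm-up formulas (illustration only).
def warmup_lr(regular_lr, warmup, cur_iter, warmup_iters, warmup_ratio):
    if warmup == 'constant':
        # constant fraction of the regular LR for the whole warm-up
        return regular_lr * warmup_ratio
    if warmup == 'linear':
        # linearly interpolate from warmup_ratio * lr up to lr
        k = (1 - cur_iter / warmup_iters) * (1 - warmup_ratio)
        return regular_lr * (1 - k)
    if warmup == 'exp':
        # exponential ramp from warmup_ratio * lr up to lr
        k = warmup_ratio ** (1 - cur_iter / warmup_iters)
        return regular_lr * k
    raise ValueError(f'unsupported warmup type: {warmup}')

base_lr = 0.01
for mode in ('constant', 'linear', 'exp'):
    # LR at the very start and halfway through a 500-iteration warm-up
    print(mode,
          round(warmup_lr(base_lr, mode, 0, 500, 0.1), 5),
          round(warmup_lr(base_lr, mode, 250, 500, 0.1), 5))
# constant: 0.001 at both points; linear: 0.001 -> 0.0055; exp: 0.001 -> ~0.00316
```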
- """ - - def __init__(self, - step: Union[int, List[int]], - gamma: float = 0.1, - min_lr: Optional[float] = None, - **kwargs) -> None: - if isinstance(step, list): - assert mmcv.is_list_of(step, int) - assert all([s > 0 for s in step]) - elif isinstance(step, int): - assert step > 0 - else: - raise TypeError('"step" must be a list or integer') - self.step = step - self.gamma = gamma - self.min_lr = min_lr - super().__init__(**kwargs) - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - progress = runner.epoch if self.by_epoch else runner.iter - - # calculate exponential term - if isinstance(self.step, int): - exp = progress // self.step - else: - exp = len(self.step) - for i, s in enumerate(self.step): - if progress < s: - exp = i - break - - lr = base_lr * (self.gamma**exp) - if self.min_lr is not None: - # clip to a minimum value - lr = max(lr, self.min_lr) - return lr - - -@HOOKS.register_module() -class ExpLrUpdaterHook(LrUpdaterHook): - - def __init__(self, gamma: float, **kwargs) -> None: - self.gamma = gamma - super().__init__(**kwargs) - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - progress = runner.epoch if self.by_epoch else runner.iter - return base_lr * self.gamma**progress - - -@HOOKS.register_module() -class PolyLrUpdaterHook(LrUpdaterHook): - - def __init__(self, - power: float = 1., - min_lr: float = 0., - **kwargs) -> None: - self.power = power - self.min_lr = min_lr - super().__init__(**kwargs) - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - coeff = (1 - progress / max_progress)**self.power - return (base_lr - self.min_lr) * coeff + self.min_lr - - -@HOOKS.register_module() -class InvLrUpdaterHook(LrUpdaterHook): - - def __init__(self, gamma: float, power: float = 1., **kwargs) -> None: - self.gamma = gamma - self.power = power - super().__init__(**kwargs) - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - progress = runner.epoch if self.by_epoch else runner.iter - return base_lr * (1 + self.gamma * progress)**(-self.power) - - -@HOOKS.register_module() -class CosineAnnealingLrUpdaterHook(LrUpdaterHook): - """CosineAnnealing LR scheduler. - - Args: - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. - """ - - def __init__(self, - min_lr: Optional[float] = None, - min_lr_ratio: Optional[float] = None, - **kwargs) -> None: - assert (min_lr is None) ^ (min_lr_ratio is None) - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super().__init__(**kwargs) - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr # type:ignore - return annealing_cos(base_lr, target_lr, progress / max_progress) - - -@HOOKS.register_module() -class FlatCosineAnnealingLrUpdaterHook(LrUpdaterHook): - """Flat + Cosine lr schedule. 
- - Modified from https://github.com/fastai/fastai/blob/master/fastai/callback/schedule.py#L128 # noqa: E501 - - Args: - start_percent (float): When to start annealing the learning rate - after the percentage of the total training steps. - The value should be in range [0, 1). - Default: 0.75 - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. - """ - - def __init__(self, - start_percent: float = 0.75, - min_lr: Optional[float] = None, - min_lr_ratio: Optional[float] = None, - **kwargs) -> None: - assert (min_lr is None) ^ (min_lr_ratio is None) - if start_percent < 0 or start_percent > 1 or not isinstance( - start_percent, float): - raise ValueError( - 'expected float between 0 and 1 start_percent, but ' - f'got {start_percent}') - self.start_percent = start_percent - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super().__init__(**kwargs) - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - if self.by_epoch: - start = round(runner.max_epochs * self.start_percent) - progress = runner.epoch - start - max_progress = runner.max_epochs - start - else: - start = round(runner.max_iters * self.start_percent) - progress = runner.iter - start - max_progress = runner.max_iters - start - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr # type:ignore - - if progress < 0: - return base_lr - else: - return annealing_cos(base_lr, target_lr, progress / max_progress) - - -@HOOKS.register_module() -class CosineRestartLrUpdaterHook(LrUpdaterHook): - """Cosine annealing with restarts learning rate scheme. - - Args: - periods (list[int]): Periods for each cosine anneling cycle. - restart_weights (list[float]): Restart weights at each - restart iteration. Defaults to [1]. - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. - """ - - def __init__(self, - periods: List[int], - restart_weights: List[float] = [1], - min_lr: Optional[float] = None, - min_lr_ratio: Optional[float] = None, - **kwargs) -> None: - assert (min_lr is None) ^ (min_lr_ratio is None) - self.periods = periods - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - self.restart_weights = restart_weights - assert (len(self.periods) == len(self.restart_weights) - ), 'periods and restart_weights should have the same length.' - super().__init__(**kwargs) - - self.cumulative_periods = [ - sum(self.periods[0:i + 1]) for i in range(0, len(self.periods)) - ] - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - if self.by_epoch: - progress = runner.epoch - else: - progress = runner.iter - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr # type:ignore - - idx = get_position_from_periods(progress, self.cumulative_periods) - current_weight = self.restart_weights[idx] - nearest_restart = 0 if idx == 0 else self.cumulative_periods[idx - 1] - current_periods = self.periods[idx] - - alpha = min((progress - nearest_restart) / current_periods, 1) - return annealing_cos(base_lr, target_lr, alpha, current_weight) - - -def get_position_from_periods(iteration: int, cumulative_periods: List[int]): - """Get the position from a period list. 
- - It will return the index of the right-closest number in the period list. - For example, the cumulative_periods = [100, 200, 300, 400], - if iteration == 50, return 0; - if iteration == 210, return 2; - if iteration == 300, return 3. - - Args: - iteration (int): Current iteration. - cumulative_periods (list[int]): Cumulative period list. - - Returns: - int: The position of the right-closest number in the period list. - """ - for i, period in enumerate(cumulative_periods): - if iteration < period: - return i - raise ValueError(f'Current iteration {iteration} exceeds ' - f'cumulative_periods {cumulative_periods}') - - -@HOOKS.register_module() -class CyclicLrUpdaterHook(LrUpdaterHook): - """Cyclic LR Scheduler. - - Implement the cyclical learning rate policy (CLR) described in - https://arxiv.org/pdf/1506.01186.pdf - - Different from the original paper, we use cosine annealing rather than - triangular policy inside a cycle. This improves the performance in the - 3D detection area. - - Args: - by_epoch (bool, optional): Whether to update LR by epoch. - target_ratio (tuple[float], optional): Relative ratio of the highest LR - and the lowest LR to the initial LR. - cyclic_times (int, optional): Number of cycles during training - step_ratio_up (float, optional): The ratio of the increasing process of - LR in the total cycle. - anneal_strategy (str, optional): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. Default: 'cos'. - gamma (float, optional): Cycle decay ratio. Default: 1. - It takes values in the range (0, 1]. The difference between the - maximum learning rate and the minimum learning rate decreases - periodically when it is less than 1. `New in version 1.4.4.` - """ - - def __init__(self, - by_epoch: bool = False, - target_ratio: Union[float, tuple] = (10, 1e-4), - cyclic_times: int = 1, - step_ratio_up: float = 0.4, - anneal_strategy: str = 'cos', - gamma: float = 1, - **kwargs) -> None: - if isinstance(target_ratio, float): - target_ratio = (target_ratio, target_ratio / 1e5) - elif isinstance(target_ratio, tuple): - target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \ - if len(target_ratio) == 1 else target_ratio - else: - raise ValueError('target_ratio should be either float ' - f'or tuple, got {type(target_ratio)}') - - assert len(target_ratio) == 2, \ - '"target_ratio" must be list or tuple of two floats' - assert 0 <= step_ratio_up < 1.0, \ - '"step_ratio_up" must be in range [0,1)' - assert 0 < gamma <= 1, \ - '"gamma" must be in range (0, 1]' - - self.target_ratio = target_ratio - self.cyclic_times = cyclic_times - self.step_ratio_up = step_ratio_up - self.gamma = gamma - self.max_iter_per_phase = None - self.lr_phases: list = [] # init lr_phases - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func: Callable[[float, float, float], - float] = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - - assert not by_epoch, \ - 'currently only support "by_epoch" = False' - super().__init__(by_epoch, **kwargs) - - def before_run(self, runner: 'runner.BaseRunner'): - super().before_run(runner) - # initiate lr_phases - # total lr_phases are separated as up and down - self.max_iter_per_phase = runner.max_iters // self.cyclic_times - iter_up_phase = int(self.step_ratio_up * - self.max_iter_per_phase) # 
type:ignore - self.lr_phases.append([0, iter_up_phase, 1, self.target_ratio[0]]) - self.lr_phases.append([ - iter_up_phase, self.max_iter_per_phase, self.target_ratio[0], - self.target_ratio[1] - ]) - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - curr_iter = runner.iter % self.max_iter_per_phase - curr_cycle = runner.iter // self.max_iter_per_phase - # Update weight decay - scale = self.gamma**curr_cycle - - for (start_iter, end_iter, start_ratio, end_ratio) in self.lr_phases: - if start_iter <= curr_iter < end_iter: - # Apply cycle scaling to gradually reduce the difference - # between max_lr and base lr. The target end_ratio can be - # expressed as: - # end_ratio = (base_lr + scale * (max_lr - base_lr)) / base_lr - # iteration: 0-iter_up_phase: - if start_iter == 0: - end_ratio = 1 - scale + end_ratio * scale - # iteration: iter_up_phase-self.max_iter_per_phase - else: - start_ratio = 1 - scale + start_ratio * scale - progress = curr_iter - start_iter - return self.anneal_func(base_lr * start_ratio, - base_lr * end_ratio, - progress / (end_iter - start_iter)) - - -@HOOKS.register_module() -class OneCycleLrUpdaterHook(LrUpdaterHook): - """One Cycle LR Scheduler. - - The 1cycle learning rate policy changes the learning rate after every - batch. The one cycle learning rate policy is described in - https://arxiv.org/pdf/1708.07120.pdf - - Args: - max_lr (float or list): Upper learning rate boundaries in the cycle - for each parameter group. - total_steps (int, optional): The total number of steps in the cycle. - Note that if a value is not provided here, it will be the max_iter - of runner. Default: None. - pct_start (float): The percentage of the cycle (in number of steps) - spent increasing the learning rate. - Default: 0.3 - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. - Default: 'cos' - div_factor (float): Determines the initial learning rate via - initial_lr = max_lr/div_factor - Default: 25 - final_div_factor (float): Determines the minimum learning rate via - min_lr = initial_lr/final_div_factor - Default: 1e4 - three_phase (bool): If three_phase is True, use a third phase of the - schedule to annihilate the learning rate according to - final_div_factor instead of modifying the second phase (the first - two phases will be symmetrical about the step indicated by - pct_start). 
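The phase bookkeeping in `CyclicLrUpdaterHook` above is easier to see with concrete numbers. The sketch below replays the two phases built in its `before_run` for an invented 1000-iteration run with two cycles; `gamma` is left at its default of 1 so the per-cycle amplitude scaling in `get_lr` drops out (with `gamma < 1` the ratios would additionally shrink each cycle, per the scaling code above).

```python
from math import cos, pi

# Toy walk-through of CyclicLrUpdaterHook's two phases (gamma=1, illustration only).
def annealing_cos(start, end, factor, weight=1):         # same formula as in this file
    return end + 0.5 * weight * (start - end) * (cos(pi * factor) + 1)

base_lr, target_ratio = 0.01, (10, 1e-4)
max_iters, cyclic_times, step_ratio_up = 1000, 2, 0.4
max_iter_per_phase = max_iters // cyclic_times            # 500
iter_up_phase = int(step_ratio_up * max_iter_per_phase)   # 200
phases = [(0, iter_up_phase, 1, target_ratio[0]),                                  # ramp up to 10x
          (iter_up_phase, max_iter_per_phase, target_ratio[0], target_ratio[1])]   # anneal back down

def cyclic_lr(it):
    cur = it % max_iter_per_phase                          # position within the current cycle
    for start_it, end_it, start_ratio, end_ratio in phases:
        if start_it <= cur < end_it:
            progress = (cur - start_it) / (end_it - start_it)
            return annealing_cos(base_lr * start_ratio, base_lr * end_ratio, progress)

print(cyclic_lr(0), cyclic_lr(100), cyclic_lr(200))  # 0.01 -> 0.055 -> 0.1, then decays toward 1e-6
```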
- Default: False - """ - - def __init__(self, - max_lr: Union[float, List], - total_steps: Optional[int] = None, - pct_start: float = 0.3, - anneal_strategy: str = 'cos', - div_factor: float = 25, - final_div_factor: float = 1e4, - three_phase: bool = False, - **kwargs) -> None: - # validate by_epoch, currently only support by_epoch = False - if 'by_epoch' not in kwargs: - kwargs['by_epoch'] = False - else: - assert not kwargs['by_epoch'], \ - 'currently only support "by_epoch" = False' - if not isinstance(max_lr, (numbers.Number, list, dict)): - raise ValueError('the type of max_lr must be the one of list or ' - f'dict, but got {type(max_lr)}') - self._max_lr = max_lr - if total_steps is not None: - if not isinstance(total_steps, int): - raise ValueError('the type of total_steps must be int, but' - f'got {type(total_steps)}') - self.total_steps = total_steps - # validate pct_start - if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float): - raise ValueError('expected float between 0 and 1 pct_start, but ' - f'got {pct_start}') - self.pct_start = pct_start - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func: Callable[[float, float, float], - float] = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - self.div_factor = div_factor - self.final_div_factor = final_div_factor - self.three_phase = three_phase - self.lr_phases: list = [] # init lr_phases - super().__init__(**kwargs) - - def before_run(self, runner: 'runner.BaseRunner'): - if hasattr(self, 'total_steps'): - total_steps = self.total_steps - else: - total_steps = runner.max_iters - if total_steps < runner.max_iters: - raise ValueError( - 'The total steps must be greater than or equal to max ' - f'iterations {runner.max_iters} of runner, but total steps ' - f'is {total_steps}.') - - if isinstance(runner.optimizer, dict): - self.base_lr = {} - for k, optim in runner.optimizer.items(): - _max_lr = format_param(k, optim, self._max_lr) - self.base_lr[k] = [lr / self.div_factor for lr in _max_lr] - for group, lr in zip(optim.param_groups, self.base_lr[k]): - group.setdefault('initial_lr', lr) - else: - k = type(runner.optimizer).__name__ - _max_lr = format_param(k, runner.optimizer, self._max_lr) - self.base_lr = [lr / self.div_factor for lr in _max_lr] - for group, lr in zip(runner.optimizer.param_groups, self.base_lr): - group.setdefault('initial_lr', lr) - - if self.three_phase: - self.lr_phases.append( - [float(self.pct_start * total_steps) - 1, 1, self.div_factor]) - self.lr_phases.append([ - float(2 * self.pct_start * total_steps) - 2, self.div_factor, 1 - ]) - self.lr_phases.append( - [total_steps - 1, 1, 1 / self.final_div_factor]) - else: - self.lr_phases.append( - [float(self.pct_start * total_steps) - 1, 1, self.div_factor]) - self.lr_phases.append( - [total_steps - 1, self.div_factor, 1 / self.final_div_factor]) - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - curr_iter = runner.iter - start_iter = 0 - for i, (end_iter, start_lr, end_lr) in enumerate(self.lr_phases): - if curr_iter <= end_iter: - pct = (curr_iter - start_iter) / (end_iter - start_iter) - lr = self.anneal_func(base_lr * start_lr, base_lr * end_lr, - pct) - break - start_iter = end_iter - return lr - - -@HOOKS.register_module() -class LinearAnnealingLrUpdaterHook(LrUpdaterHook): - """Linear annealing LR 
Scheduler decays the learning rate of each parameter - group linearly. - - Args: - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. - """ - - def __init__(self, - min_lr: Optional[float] = None, - min_lr_ratio: Optional[float] = None, - **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super().__init__(**kwargs) - - def get_lr(self, runner: 'runner.BaseRunner', base_lr: float): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr # type:ignore - return annealing_linear(base_lr, target_lr, progress / max_progress) - - -def annealing_cos(start: float, - end: float, - factor: float, - weight: float = 1) -> float: - """Calculate annealing cos learning rate. - - Cosine anneal from `weight * start + (1 - weight) * end` to `end` as - percentage goes from 0.0 to 1.0. - - Args: - start (float): The starting learning rate of the cosine annealing. - end (float): The ending learing rate of the cosine annealing. - factor (float): The coefficient of `pi` when calculating the current - percentage. Range from 0.0 to 1.0. - weight (float, optional): The combination factor of `start` and `end` - when calculating the actual starting learning rate. Default to 1. - """ - cos_out = cos(pi * factor) + 1 - return end + 0.5 * weight * (start - end) * cos_out - - -def annealing_linear(start: float, end: float, factor: float) -> float: - """Calculate annealing linear learning rate. - - Linear anneal from `start` to `end` as percentage goes from 0.0 to 1.0. - - Args: - start (float): The starting learning rate of the linear annealing. - end (float): The ending learing rate of the linear annealing. - factor (float): The coefficient of `pi` when calculating the current - percentage. Range from 0.0 to 1.0. - """ - return start + (end - start) * factor - - -def format_param(name, optim, param): - if isinstance(param, numbers.Number): - return [param] * len(optim.param_groups) - elif isinstance(param, (list, tuple)): # multi param groups - if len(param) != len(optim.param_groups): - raise ValueError(f'expected {len(optim.param_groups)} ' - f'values for {name}, got {len(param)}') - return param - else: # multi optimizers - if name not in param: - raise KeyError(f'{name} is not found in {param.keys()}') - return param[name] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/memory.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/memory.py deleted file mode 100644 index 70cf9a83..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/memory.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
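`annealing_cos` and `annealing_linear` above are the two primitives that the cosine, flat-cosine, restart, cyclic and one-cycle schedulers in this file build on. A few spot values (inputs invented) make their behaviour concrete, along with the `initial_lr = max_lr / div_factor` and `min_lr = initial_lr / final_div_factor` bounds that `OneCycleLrUpdaterHook` derives from its arguments:

```python
from math import cos, pi

# Spot-checking the two annealing primitives defined in this file (illustration only).
def annealing_cos(start, end, factor, weight=1):
    return end + 0.5 * weight * (start - end) * (cos(pi * factor) + 1)

def annealing_linear(start, end, factor):
    return start + (end - start) * factor

print(annealing_cos(0.1, 0.0, 0.0))      # 0.1   (start of the anneal)
print(annealing_cos(0.1, 0.0, 0.5))      # 0.05  (cosine midpoint)
print(annealing_cos(0.1, 0.0, 1.0))      # 0.0   (fully annealed)
print(annealing_linear(0.1, 0.0, 0.25))  # 0.075

# Boundaries that OneCycleLrUpdaterHook derives from max_lr (invented max_lr value):
max_lr, div_factor, final_div_factor = 0.4, 25, 1e4
initial_lr = max_lr / div_factor         # 0.016  -> LR at step 0
min_lr = initial_lr / final_div_factor   # 1.6e-6 -> LR at the last step
print(initial_lr, min_lr)
```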
-import torch - -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class EmptyCacheHook(Hook): - - def __init__(self, before_epoch=False, after_epoch=True, after_iter=False): - self._before_epoch = before_epoch - self._after_epoch = after_epoch - self._after_iter = after_iter - - def after_iter(self, runner): - if self._after_iter: - torch.cuda.empty_cache() - - def before_epoch(self, runner): - if self._before_epoch: - torch.cuda.empty_cache() - - def after_epoch(self, runner): - if self._after_epoch: - torch.cuda.empty_cache() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/momentum_updater.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/momentum_updater.py deleted file mode 100644 index c86d6e90..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/momentum_updater.py +++ /dev/null @@ -1,566 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -from .hook import HOOKS, Hook -from .lr_updater import annealing_cos, annealing_linear, format_param - - -class MomentumUpdaterHook(Hook): - - def __init__(self, - by_epoch=True, - warmup=None, - warmup_iters=0, - warmup_ratio=0.9): - # validate the "warmup" argument - if warmup is not None: - if warmup not in ['constant', 'linear', 'exp']: - raise ValueError( - f'"{warmup}" is not a supported type for warming up, valid' - ' types are "constant" and "linear"') - if warmup is not None: - assert warmup_iters > 0, \ - '"warmup_iters" must be a positive integer' - assert 0 < warmup_ratio <= 1.0, \ - '"warmup_momentum" must be in range (0,1]' - - self.by_epoch = by_epoch - self.warmup = warmup - self.warmup_iters = warmup_iters - self.warmup_ratio = warmup_ratio - - self.base_momentum = [] # initial momentum for all param groups - self.regular_momentum = [ - ] # expected momentum if no warming up is performed - - def _set_momentum(self, runner, momentum_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, mom in zip(optim.param_groups, - momentum_groups[k]): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - else: - for param_group, mom in zip(runner.optimizer.param_groups, - momentum_groups): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - - def get_momentum(self, runner, base_momentum): - raise NotImplementedError - - def get_regular_momentum(self, runner): - if isinstance(runner.optimizer, dict): - momentum_groups = {} - for k in runner.optimizer.keys(): - _momentum_group = [ - self.get_momentum(runner, _base_momentum) - for _base_momentum in self.base_momentum[k] - ] - momentum_groups.update({k: _momentum_group}) - return momentum_groups - else: - return [ - self.get_momentum(runner, _base_momentum) - for _base_momentum in self.base_momentum - ] - - def get_warmup_momentum(self, cur_iters): - - def _get_warmup_momentum(cur_iters, regular_momentum): - if self.warmup == 'constant': - warmup_momentum = [ - _momentum / self.warmup_ratio - for _momentum in regular_momentum - ] - elif self.warmup == 'linear': - k = (1 - cur_iters / self.warmup_iters) * (1 - - self.warmup_ratio) - warmup_momentum = [ - _momentum / (1 - k) for _momentum in regular_momentum - ] - elif self.warmup == 'exp': - k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters) - 
warmup_momentum = [ - _momentum / k for _momentum in regular_momentum - ] - return warmup_momentum - - if isinstance(self.regular_momentum, dict): - momentum_groups = {} - for key, regular_momentum in self.regular_momentum.items(): - momentum_groups[key] = _get_warmup_momentum( - cur_iters, regular_momentum) - return momentum_groups - else: - return _get_warmup_momentum(cur_iters, self.regular_momentum) - - def before_run(self, runner): - # NOTE: when resuming from a checkpoint, - # if 'initial_momentum' is not saved, - # it will be set according to the optimizer params - if isinstance(runner.optimizer, dict): - self.base_momentum = {} - for k, optim in runner.optimizer.items(): - for group in optim.param_groups: - if 'momentum' in group.keys(): - group.setdefault('initial_momentum', group['momentum']) - else: - group.setdefault('initial_momentum', group['betas'][0]) - _base_momentum = [ - group['initial_momentum'] for group in optim.param_groups - ] - self.base_momentum.update({k: _base_momentum}) - else: - for group in runner.optimizer.param_groups: - if 'momentum' in group.keys(): - group.setdefault('initial_momentum', group['momentum']) - else: - group.setdefault('initial_momentum', group['betas'][0]) - self.base_momentum = [ - group['initial_momentum'] - for group in runner.optimizer.param_groups - ] - - def before_train_epoch(self, runner): - if not self.by_epoch: - return - self.regular_momentum = self.get_regular_momentum(runner) - self._set_momentum(runner, self.regular_momentum) - - def before_train_iter(self, runner): - cur_iter = runner.iter - if not self.by_epoch: - self.regular_momentum = self.get_regular_momentum(runner) - if self.warmup is None or cur_iter >= self.warmup_iters: - self._set_momentum(runner, self.regular_momentum) - else: - warmup_momentum = self.get_warmup_momentum(cur_iter) - self._set_momentum(runner, warmup_momentum) - elif self.by_epoch: - if self.warmup is None or cur_iter > self.warmup_iters: - return - elif cur_iter == self.warmup_iters: - self._set_momentum(runner, self.regular_momentum) - else: - warmup_momentum = self.get_warmup_momentum(cur_iter) - self._set_momentum(runner, warmup_momentum) - - -@HOOKS.register_module() -class StepMomentumUpdaterHook(MomentumUpdaterHook): - """Step momentum scheduler with min value clipping. - - Args: - step (int | list[int]): Step to decay the momentum. If an int value is - given, regard it as the decay interval. If a list is given, decay - momentum at these steps. - gamma (float, optional): Decay momentum ratio. Default: 0.5. - min_momentum (float, optional): Minimum momentum value to keep. If - momentum after decay is lower than this value, it will be clipped - accordingly. If None is given, we don't perform lr clipping. - Default: None. 
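Note the asymmetry between LR and momentum warm-up in the two base hooks: `LrUpdaterHook` multiplies the regular LR by a factor of at most 1, while `MomentumUpdaterHook` above divides the regular momentum by the same kind of factor, so momentum starts above its regular value and settles down as the LR ramps up. A small side-by-side sketch with invented numbers:

```python
# LR warm-up vs momentum warm-up under the 'linear' mode (illustration only).
def linear_warmup_lr(regular_lr, cur_iter, warmup_iters, warmup_ratio=0.1):
    k = (1 - cur_iter / warmup_iters) * (1 - warmup_ratio)
    return regular_lr * (1 - k)           # LrUpdaterHook: scale the LR down, then ramp up

def linear_warmup_momentum(regular_m, cur_iter, warmup_iters, warmup_ratio=0.9):
    k = (1 - cur_iter / warmup_iters) * (1 - warmup_ratio)
    return regular_m / (1 - k)            # MomentumUpdaterHook: boost momentum, then settle

for it in (0, 250, 500):
    print(it,
          round(linear_warmup_lr(0.01, it, 500), 5),        # 0.001 -> 0.0055 -> 0.01
          round(linear_warmup_momentum(0.9, it, 500), 5))   # 1.0   -> ~0.947 -> 0.9
```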
- """ - - def __init__(self, step, gamma=0.5, min_momentum=None, **kwargs): - if isinstance(step, list): - assert mmcv.is_list_of(step, int) - assert all([s > 0 for s in step]) - elif isinstance(step, int): - assert step > 0 - else: - raise TypeError('"step" must be a list or integer') - self.step = step - self.gamma = gamma - self.min_momentum = min_momentum - super().__init__(**kwargs) - - def get_momentum(self, runner, base_momentum): - progress = runner.epoch if self.by_epoch else runner.iter - - # calculate exponential term - if isinstance(self.step, int): - exp = progress // self.step - else: - exp = len(self.step) - for i, s in enumerate(self.step): - if progress < s: - exp = i - break - - momentum = base_momentum * (self.gamma**exp) - if self.min_momentum is not None: - # clip to a minimum value - momentum = max(momentum, self.min_momentum) - return momentum - - -@HOOKS.register_module() -class CosineAnnealingMomentumUpdaterHook(MomentumUpdaterHook): - """Cosine annealing LR Momentum decays the Momentum of each parameter group - linearly. - - Args: - min_momentum (float, optional): The minimum momentum. Default: None. - min_momentum_ratio (float, optional): The ratio of minimum momentum to - the base momentum. Either `min_momentum` or `min_momentum_ratio` - should be specified. Default: None. - """ - - def __init__(self, min_momentum=None, min_momentum_ratio=None, **kwargs): - assert (min_momentum is None) ^ (min_momentum_ratio is None) - self.min_momentum = min_momentum - self.min_momentum_ratio = min_momentum_ratio - super().__init__(**kwargs) - - def get_momentum(self, runner, base_momentum): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - if self.min_momentum_ratio is not None: - target_momentum = base_momentum * self.min_momentum_ratio - else: - target_momentum = self.min_momentum - return annealing_cos(base_momentum, target_momentum, - progress / max_progress) - - -@HOOKS.register_module() -class LinearAnnealingMomentumUpdaterHook(MomentumUpdaterHook): - """Linear annealing LR Momentum decays the Momentum of each parameter group - linearly. - - Args: - min_momentum (float, optional): The minimum momentum. Default: None. - min_momentum_ratio (float, optional): The ratio of minimum momentum to - the base momentum. Either `min_momentum` or `min_momentum_ratio` - should be specified. Default: None. - """ - - def __init__(self, min_momentum=None, min_momentum_ratio=None, **kwargs): - assert (min_momentum is None) ^ (min_momentum_ratio is None) - self.min_momentum = min_momentum - self.min_momentum_ratio = min_momentum_ratio - super().__init__(**kwargs) - - def get_momentum(self, runner, base_momentum): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - if self.min_momentum_ratio is not None: - target_momentum = base_momentum * self.min_momentum_ratio - else: - target_momentum = self.min_momentum - return annealing_linear(base_momentum, target_momentum, - progress / max_progress) - - -@HOOKS.register_module() -class CyclicMomentumUpdaterHook(MomentumUpdaterHook): - """Cyclic momentum Scheduler. - - Implement the cyclical momentum scheduler policy described in - https://arxiv.org/pdf/1708.07120.pdf - - This momentum scheduler usually used together with the CyclicLRUpdater - to improve the performance in the 3D detection area. 
- - Args: - target_ratio (tuple[float]): Relative ratio of the lowest momentum and - the highest momentum to the initial momentum. - cyclic_times (int): Number of cycles during training - step_ratio_up (float): The ratio of the increasing process of momentum - in the total cycle. - by_epoch (bool): Whether to update momentum by epoch. - anneal_strategy (str, optional): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. Default: 'cos'. - gamma (float, optional): Cycle decay ratio. Default: 1. - It takes values in the range (0, 1]. The difference between the - maximum learning rate and the minimum learning rate decreases - periodically when it is less than 1. `New in version 1.4.4.` - """ - - def __init__(self, - by_epoch=False, - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4, - anneal_strategy='cos', - gamma=1, - **kwargs): - if isinstance(target_ratio, float): - target_ratio = (target_ratio, target_ratio / 1e5) - elif isinstance(target_ratio, tuple): - target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \ - if len(target_ratio) == 1 else target_ratio - else: - raise ValueError('target_ratio should be either float ' - f'or tuple, got {type(target_ratio)}') - - assert len(target_ratio) == 2, \ - '"target_ratio" must be list or tuple of two floats' - assert 0 <= step_ratio_up < 1.0, \ - '"step_ratio_up" must be in range [0,1)' - - self.target_ratio = target_ratio - self.cyclic_times = cyclic_times - self.step_ratio_up = step_ratio_up - self.gamma = gamma - self.momentum_phases = [] # init momentum_phases - - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - # currently only support by_epoch=False - assert not by_epoch, \ - 'currently only support "by_epoch" = False' - super().__init__(by_epoch, **kwargs) - - def before_run(self, runner): - super().before_run(runner) - # initiate momentum_phases - # total momentum_phases are separated as up and down - max_iter_per_phase = runner.max_iters // self.cyclic_times - iter_up_phase = int(self.step_ratio_up * max_iter_per_phase) - self.max_iter_per_phase = max_iter_per_phase - self.momentum_phases.append( - [0, iter_up_phase, 1, self.target_ratio[0]]) - self.momentum_phases.append([ - iter_up_phase, max_iter_per_phase, self.target_ratio[0], - self.target_ratio[1] - ]) - - def get_momentum(self, runner, base_momentum): - curr_iter = runner.iter % self.max_iter_per_phase - curr_cycle = runner.iter // self.max_iter_per_phase - scale = self.gamma**curr_cycle - for (start_iter, end_iter, start_ratio, end_ratio) \ - in self.momentum_phases: - if start_iter <= curr_iter < end_iter: - # Apply cycle scaling to gradually reduce the difference - # between max_momentum and base momentum. 
The target end_ratio - # can be expressed as: - # end_ratio = (base_momentum + scale * \ - # (max_momentum - base_momentum)) / base_momentum - # iteration: 0-iter_up_phase: - if start_iter == 0: - end_ratio = 1 - scale + end_ratio * scale - # iteration: iter_up_phase-self.max_iter_per_phase - else: - start_ratio = 1 - scale + start_ratio * scale - progress = curr_iter - start_iter - return self.anneal_func(base_momentum * start_ratio, - base_momentum * end_ratio, - progress / (end_iter - start_iter)) - - -@HOOKS.register_module() -class OneCycleMomentumUpdaterHook(MomentumUpdaterHook): - """OneCycle momentum Scheduler. - - This momentum scheduler usually used together with the OneCycleLrUpdater - to improve the performance. - - Args: - base_momentum (float or list): Lower momentum boundaries in the cycle - for each parameter group. Note that momentum is cycled inversely - to learning rate; at the peak of a cycle, momentum is - 'base_momentum' and learning rate is 'max_lr'. - Default: 0.85 - max_momentum (float or list): Upper momentum boundaries in the cycle - for each parameter group. Functionally, - it defines the cycle amplitude (max_momentum - base_momentum). - Note that momentum is cycled inversely - to learning rate; at the start of a cycle, momentum is - 'max_momentum' and learning rate is 'base_lr' - Default: 0.95 - pct_start (float): The percentage of the cycle (in number of steps) - spent increasing the learning rate. - Default: 0.3 - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. - Default: 'cos' - three_phase (bool): If three_phase is True, use a third phase of the - schedule to annihilate the learning rate according to - final_div_factor instead of modifying the second phase (the first - two phases will be symmetrical about the step indicated by - pct_start). 
- Default: False - """ - - def __init__(self, - base_momentum=0.85, - max_momentum=0.95, - pct_start=0.3, - anneal_strategy='cos', - three_phase=False, - **kwargs): - # validate by_epoch, currently only support by_epoch=False - if 'by_epoch' not in kwargs: - kwargs['by_epoch'] = False - else: - assert not kwargs['by_epoch'], \ - 'currently only support "by_epoch" = False' - if not isinstance(base_momentum, (float, list, dict)): - raise ValueError('base_momentum must be the type among of float,' - 'list or dict.') - self._base_momentum = base_momentum - if not isinstance(max_momentum, (float, list, dict)): - raise ValueError('max_momentum must be the type among of float,' - 'list or dict.') - self._max_momentum = max_momentum - # validate pct_start - if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float): - raise ValueError('Expected float between 0 and 1 pct_start, but ' - f'got {pct_start}') - self.pct_start = pct_start - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must by one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - self.three_phase = three_phase - self.momentum_phases = [] # init momentum_phases - super().__init__(**kwargs) - - def before_run(self, runner): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - if ('momentum' not in optim.defaults - and 'betas' not in optim.defaults): - raise ValueError('optimizer must support momentum with' - 'option enabled') - self.use_beta1 = 'betas' in optim.defaults - _base_momentum = format_param(k, optim, self._base_momentum) - _max_momentum = format_param(k, optim, self._max_momentum) - for group, b_momentum, m_momentum in zip( - optim.param_groups, _base_momentum, _max_momentum): - if self.use_beta1: - _, beta2 = group['betas'] - group['betas'] = (m_momentum, beta2) - else: - group['momentum'] = m_momentum - group['base_momentum'] = b_momentum - group['max_momentum'] = m_momentum - else: - optim = runner.optimizer - if ('momentum' not in optim.defaults - and 'betas' not in optim.defaults): - raise ValueError('optimizer must support momentum with' - 'option enabled') - self.use_beta1 = 'betas' in optim.defaults - k = type(optim).__name__ - _base_momentum = format_param(k, optim, self._base_momentum) - _max_momentum = format_param(k, optim, self._max_momentum) - for group, b_momentum, m_momentum in zip(optim.param_groups, - _base_momentum, - _max_momentum): - if self.use_beta1: - _, beta2 = group['betas'] - group['betas'] = (m_momentum, beta2) - else: - group['momentum'] = m_momentum - group['base_momentum'] = b_momentum - group['max_momentum'] = m_momentum - - if self.three_phase: - self.momentum_phases.append({ - 'end_iter': - float(self.pct_start * runner.max_iters) - 1, - 'start_momentum': - 'max_momentum', - 'end_momentum': - 'base_momentum' - }) - self.momentum_phases.append({ - 'end_iter': - float(2 * self.pct_start * runner.max_iters) - 2, - 'start_momentum': - 'base_momentum', - 'end_momentum': - 'max_momentum' - }) - self.momentum_phases.append({ - 'end_iter': runner.max_iters - 1, - 'start_momentum': 'max_momentum', - 'end_momentum': 'max_momentum' - }) - else: - self.momentum_phases.append({ - 'end_iter': - float(self.pct_start * runner.max_iters) - 1, - 'start_momentum': - 'max_momentum', - 'end_momentum': - 'base_momentum' - }) - self.momentum_phases.append({ - 
'end_iter': runner.max_iters - 1, - 'start_momentum': 'base_momentum', - 'end_momentum': 'max_momentum' - }) - - def _set_momentum(self, runner, momentum_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, mom in zip(optim.param_groups, - momentum_groups[k]): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - else: - for param_group, mom in zip(runner.optimizer.param_groups, - momentum_groups): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - - def get_momentum(self, runner, param_group): - curr_iter = runner.iter - start_iter = 0 - for i, phase in enumerate(self.momentum_phases): - end_iter = phase['end_iter'] - if curr_iter <= end_iter or i == len(self.momentum_phases) - 1: - pct = (curr_iter - start_iter) / (end_iter - start_iter) - momentum = self.anneal_func( - param_group[phase['start_momentum']], - param_group[phase['end_momentum']], pct) - break - start_iter = end_iter - return momentum - - def get_regular_momentum(self, runner): - if isinstance(runner.optimizer, dict): - momentum_groups = {} - for k, optim in runner.optimizer.items(): - _momentum_group = [ - self.get_momentum(runner, param_group) - for param_group in optim.param_groups - ] - momentum_groups.update({k: _momentum_group}) - return momentum_groups - else: - momentum_groups = [] - for param_group in runner.optimizer.param_groups: - momentum_groups.append(self.get_momentum(runner, param_group)) - return momentum_groups diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/optimizer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/optimizer.py deleted file mode 100644 index a5c10988..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/optimizer.py +++ /dev/null @@ -1,554 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import logging -from collections import defaultdict -from itertools import chain - -from torch.nn.utils import clip_grad - -from mmcv.utils import TORCH_VERSION, _BatchNorm, digit_version -from ..dist_utils import allreduce_grads -from ..fp16_utils import LossScaler, wrap_fp16_model -from .hook import HOOKS, Hook - -try: - # If PyTorch version >= 1.6.0, torch.cuda.amp.GradScaler would be imported - # and used; otherwise, auto fp16 will adopt mmcv's implementation. - from torch.cuda.amp import GradScaler -except ImportError: - pass - - -@HOOKS.register_module() -class OptimizerHook(Hook): - """A hook contains custom operations for the optimizer. - - Args: - grad_clip (dict, optional): A config dict to control the clip_grad. - Default: None. - detect_anomalous_params (bool): This option is only used for - debugging which will slow down the training speed. - Detect anomalous parameters that are not included in - the computational graph with `loss` as the root. - There are two cases - - - Parameters were not used during - forward pass. - - Parameters were not used to produce - loss. - Default: False. 
- """ - - def __init__(self, grad_clip=None, detect_anomalous_params=False): - self.grad_clip = grad_clip - self.detect_anomalous_params = detect_anomalous_params - - def clip_grads(self, params): - params = list( - filter(lambda p: p.requires_grad and p.grad is not None, params)) - if len(params) > 0: - return clip_grad.clip_grad_norm_(params, **self.grad_clip) - - def after_train_iter(self, runner): - runner.optimizer.zero_grad() - if self.detect_anomalous_params: - self.detect_anomalous_parameters(runner.outputs['loss'], runner) - runner.outputs['loss'].backward() - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - - def detect_anomalous_parameters(self, loss, runner): - logger = runner.logger - parameters_in_graph = set() - visited = set() - - def traverse(grad_fn): - if grad_fn is None: - return - if grad_fn not in visited: - visited.add(grad_fn) - if hasattr(grad_fn, 'variable'): - parameters_in_graph.add(grad_fn.variable) - parents = grad_fn.next_functions - if parents is not None: - for parent in parents: - grad_fn = parent[0] - traverse(grad_fn) - - traverse(loss.grad_fn) - for n, p in runner.model.named_parameters(): - if p not in parameters_in_graph and p.requires_grad: - logger.log( - level=logging.ERROR, - msg=f'{n} with shape {p.size()} is not ' - f'in the computational graph \n') - - -@HOOKS.register_module() -class GradientCumulativeOptimizerHook(OptimizerHook): - """Optimizer Hook implements multi-iters gradient cumulating. - - Args: - cumulative_iters (int, optional): Num of gradient cumulative iters. - The optimizer will step every `cumulative_iters` iters. - Defaults to 1. - - Examples: - >>> # Use cumulative_iters to simulate a large batch size - >>> # It is helpful when the hardware cannot handle a large batch size. - >>> loader = DataLoader(data, batch_size=64) - >>> optim_hook = GradientCumulativeOptimizerHook(cumulative_iters=4) - >>> # almost equals to - >>> loader = DataLoader(data, batch_size=256) - >>> optim_hook = OptimizerHook() - """ - - def __init__(self, cumulative_iters=1, **kwargs): - super().__init__(**kwargs) - - assert isinstance(cumulative_iters, int) and cumulative_iters > 0, \ - f'cumulative_iters only accepts positive int, but got ' \ - f'{type(cumulative_iters)} instead.' - - self.cumulative_iters = cumulative_iters - self.divisible_iters = 0 - self.remainder_iters = 0 - self.initialized = False - - def has_batch_norm(self, module): - if isinstance(module, _BatchNorm): - return True - for m in module.children(): - if self.has_batch_norm(m): - return True - return False - - def _init(self, runner): - if runner.iter % self.cumulative_iters != 0: - runner.logger.warning( - 'Resume iter number is not divisible by cumulative_iters in ' - 'GradientCumulativeOptimizerHook, which means the gradient of ' - 'some iters is lost and the result may be influenced slightly.' 
- ) - - if self.has_batch_norm(runner.model) and self.cumulative_iters > 1: - runner.logger.warning( - 'GradientCumulativeOptimizerHook may slightly decrease ' - 'performance if the model has BatchNorm layers.') - - residual_iters = runner.max_iters - runner.iter - - self.divisible_iters = ( - residual_iters // self.cumulative_iters * self.cumulative_iters) - self.remainder_iters = residual_iters - self.divisible_iters - - self.initialized = True - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - runner.optimizer.zero_grad() - - -if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (using PyTorch's implementation). - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of GradScalar. - Defaults to 512. For Pytorch >= 1.6, mmcv uses official - implementation of GradScaler. If you use a dict version of - loss_scale to create GradScaler, please refer to: - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler - for the parameters. - - Examples: - >>> loss_scale = dict( - ... init_scale=65536.0, - ... growth_factor=2.0, - ... backoff_factor=0.5, - ... growth_interval=2000 - ... 
) - >>> optimizer_hook = Fp16OptimizerHook(loss_scale=loss_scale) - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - self._scale_update_param = None - if loss_scale == 'dynamic': - self.loss_scaler = GradScaler() - elif isinstance(loss_scale, float): - self._scale_update_param = loss_scale - self.loss_scaler = GradScaler(init_scale=loss_scale) - elif isinstance(loss_scale, dict): - self.loss_scaler = GradScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training.""" - # wrap model mode to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer to - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler. - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients. - 3. Unscale the optimizer’s gradient tensors. - 4. Call optimizer.step() and update scale factor. - 5. Save loss_scaler state_dict for resume purpose. - """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - - self.loss_scaler.scale(runner.outputs['loss']).backward() - self.loss_scaler.unscale_(runner.optimizer) - # grad clip - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using PyTorch's implementation) implements - multi-iters gradient cumulating. - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. 
- """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - - self.loss_scaler.scale(loss).backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - self.loss_scaler.unscale_(runner.optimizer) - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() - -else: - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): # type: ignore - """FP16 optimizer hook (mmcv's implementation). - - The steps of fp16 optimizer is as follows. - 1. Scale the loss value. - 2. BP in the fp16 model. - 2. Copy gradients from fp16 model to fp32 weights. - 3. Update fp32 weights. - 4. Copy updated parameters from fp32 weights to fp16 model. - - Refer to https://arxiv.org/abs/1710.03740 for more details. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of LossScaler. - Defaults to 512. - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - if loss_scale == 'dynamic': - self.loss_scaler = LossScaler(mode='dynamic') - elif isinstance(loss_scale, float): - self.loss_scaler = LossScaler( - init_scale=loss_scale, mode='static') - elif isinstance(loss_scale, dict): - self.loss_scaler = LossScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training. - - 1. Make a master copy of fp32 weights for optimization. - 2. Convert the main model from fp32 to fp16. 
- """ - # keep a copy of fp32 weights - old_groups = runner.optimizer.param_groups - runner.optimizer.param_groups = copy.deepcopy( - runner.optimizer.param_groups) - state = defaultdict(dict) - p_map = { - old_p: p - for old_p, p in zip( - chain(*(g['params'] for g in old_groups)), - chain(*(g['params'] - for g in runner.optimizer.param_groups))) - } - for k, v in runner.optimizer.state.items(): - state[p_map[k]] = v - runner.optimizer.state = state - # convert model to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer `loss_scalar.py` - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients (fp16). - 3. Copy gradients from the model to the fp32 weight copy. - 4. Scale the gradients back and update the fp32 weight copy. - 5. Copy back the params from fp32 weight copy to the fp16 model. - 6. Save loss_scaler state_dict for resume purpose. 
- """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - # scale the loss value - scaled_loss = runner.outputs['loss'] * self.loss_scaler.loss_scale - scaled_loss.backward() - # copy fp16 grads in the model to fp32 params in the optimizer - - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - self.loss_scaler.update_scale(has_overflow) - if has_overflow: - runner.logger.warning('Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook( # type: ignore - GradientCumulativeOptimizerHook, Fp16OptimizerHook): - """Fp16 optimizer Hook (using mmcv implementation) implements multi- - iters gradient cumulating.""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - - loss = runner.outputs['loss'] - loss = loss / loss_factor - - # scale the loss value - scaled_loss = loss * self.loss_scaler.loss_scale - scaled_loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - else: - runner.logger.warning( - 'Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - self.loss_scaler.update_scale(has_overflow) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - 
- # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/profiler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/profiler.py deleted file mode 100644 index fef9adc1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/profiler.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Callable, List, Optional, Union - -import torch - -from ..dist_utils import master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class ProfilerHook(Hook): - """Profiler to analyze performance during training. - - PyTorch Profiler is a tool that allows the collection of the performance - metrics during the training. More details on Profiler can be found at - https://pytorch.org/docs/1.8.1/profiler.html#torch.profiler.profile - - Args: - by_epoch (bool): Profile performance by epoch or by iteration. - Default: True. - profile_iters (int): Number of iterations for profiling. - If ``by_epoch=True``, profile_iters indicates that they are the - first profile_iters epochs at the beginning of the - training, otherwise it indicates the first profile_iters - iterations. Default: 1. - activities (list[str]): List of activity groups (CPU, CUDA) to use in - profiling. Default: ['cpu', 'cuda']. - schedule (dict, optional): Config of generating the callable schedule. - if schedule is None, profiler will not add step markers into the - trace and table view. Default: None. - on_trace_ready (callable, dict): Either a handler or a dict of generate - handler. Default: None. - record_shapes (bool): Save information about operator's input shapes. - Default: False. - profile_memory (bool): Track tensor memory allocation/deallocation. - Default: False. - with_stack (bool): Record source information (file and line number) - for the ops. Default: False. - with_flops (bool): Use formula to estimate the FLOPS of specific - operators (matrix multiplication and 2D convolution). - Default: False. - json_trace_path (str, optional): Exports the collected trace in Chrome - JSON format. Default: None. - - Example: - >>> runner = ... # instantiate a Runner - >>> # tensorboard trace - >>> trace_config = dict(type='tb_trace', dir_name='work_dir') - >>> profiler_config = dict(on_trace_ready=trace_config) - >>> runner.register_profiler_hook(profiler_config) - >>> runner.run(data_loaders=[trainloader], workflow=[('train', 1)]) - """ - - def __init__(self, - by_epoch: bool = True, - profile_iters: int = 1, - activities: List[str] = ['cpu', 'cuda'], - schedule: Optional[dict] = None, - on_trace_ready: Optional[Union[Callable, dict]] = None, - record_shapes: bool = False, - profile_memory: bool = False, - with_stack: bool = False, - with_flops: bool = False, - json_trace_path: Optional[str] = None) -> None: - try: - from torch import profiler # torch version >= 1.8.1 - except ImportError: - raise ImportError('profiler is the new feature of torch1.8.1, ' - f'but your version is {torch.__version__}') - - assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean.' 
- self.by_epoch = by_epoch - - if profile_iters < 1: - raise ValueError('profile_iters should be greater than 0, but got ' - f'{profile_iters}') - self.profile_iters = profile_iters - - if not isinstance(activities, list): - raise ValueError( - f'activities should be list, but got {type(activities)}') - self.activities = [] - for activity in activities: - activity = activity.lower() - if activity == 'cpu': - self.activities.append(profiler.ProfilerActivity.CPU) - elif activity == 'cuda': - self.activities.append(profiler.ProfilerActivity.CUDA) - else: - raise ValueError( - f'activity should be "cpu" or "cuda", but got {activity}') - - if schedule is not None: - self.schedule = profiler.schedule(**schedule) - else: - self.schedule = None - - self.on_trace_ready = on_trace_ready - self.record_shapes = record_shapes - self.profile_memory = profile_memory - self.with_stack = with_stack - self.with_flops = with_flops - self.json_trace_path = json_trace_path - - @master_only - def before_run(self, runner): - if self.by_epoch and runner.max_epochs < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_epochs}') - - if not self.by_epoch and runner.max_iters < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_iters}') - - if callable(self.on_trace_ready): # handler - _on_trace_ready = self.on_trace_ready - elif isinstance(self.on_trace_ready, dict): # config of handler - trace_cfg = self.on_trace_ready.copy() - trace_type = trace_cfg.pop('type') # log_trace handler - if trace_type == 'log_trace': - - def _log_handler(prof): - print(prof.key_averages().table(**trace_cfg)) - - _on_trace_ready = _log_handler - elif trace_type == 'tb_trace': # tensorboard_trace handler - try: - import torch_tb_profiler # noqa: F401 - except ImportError: - raise ImportError('please run "pip install ' - 'torch-tb-profiler" to install ' - 'torch_tb_profiler') - _on_trace_ready = torch.profiler.tensorboard_trace_handler( - **trace_cfg) - else: - raise ValueError('trace_type should be "log_trace" or ' - f'"tb_trace", but got {trace_type}') - elif self.on_trace_ready is None: - _on_trace_ready = None # type: ignore - else: - raise ValueError('on_trace_ready should be handler, dict or None, ' - f'but got {type(self.on_trace_ready)}') - - if self.by_epoch and runner.max_epochs > 1: - warnings.warn(f'profiler will profile {runner.max_epochs} epochs ' - 'instead of 1 epoch. Since profiler will slow down ' - 'the training, it is recommended to train 1 epoch ' - 'with ProfilerHook and adjust your setting according' - ' to the profiler summary. 
During normal training ' - '(epoch > 1), you may disable the ProfilerHook.') - - self.profiler = torch.profiler.profile( - activities=self.activities, - schedule=self.schedule, - on_trace_ready=_on_trace_ready, - record_shapes=self.record_shapes, - profile_memory=self.profile_memory, - with_stack=self.with_stack, - with_flops=self.with_flops) - - self.profiler.__enter__() - runner.logger.info('profiler is profiling...') - - @master_only - def after_train_epoch(self, runner): - if self.by_epoch and runner.epoch == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) - - @master_only - def after_train_iter(self, runner): - self.profiler.step() - if not self.by_epoch and runner.iter == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/sampler_seed.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/sampler_seed.py deleted file mode 100644 index ee0dc6bd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/sampler_seed.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class DistSamplerSeedHook(Hook): - """Data-loading sampler for distributed training. - - When distributed training, it is only useful in conjunction with - :obj:`EpochBasedRunner`, while :obj:`IterBasedRunner` achieves the same - purpose with :obj:`IterLoader`. - """ - - def before_epoch(self, runner): - if hasattr(runner.data_loader.sampler, 'set_epoch'): - # in case the data loader uses `SequentialSampler` in Pytorch - runner.data_loader.sampler.set_epoch(runner.epoch) - elif hasattr(runner.data_loader.batch_sampler.sampler, 'set_epoch'): - # batch sampler in pytorch warps the sampler as its attributes. - runner.data_loader.batch_sampler.sampler.set_epoch(runner.epoch) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/sync_buffer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/sync_buffer.py deleted file mode 100644 index 6376b7ff..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/hooks/sync_buffer.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..dist_utils import allreduce_params -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class SyncBuffersHook(Hook): - """Synchronize model buffers such as running_mean and running_var in BN at - the end of each epoch. - - Args: - distributed (bool): Whether distributed training is used. It is - effective only for distributed training. Defaults to True. 
- """ - - def __init__(self, distributed=True): - self.distributed = distributed - - def after_epoch(self, runner): - """All-reduce model buffers at the end of each epoch.""" - if self.distributed: - allreduce_params(runner.model.buffers()) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/iter_based_runner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/iter_based_runner.py deleted file mode 100644 index 2de7317b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/iter_based_runner.py +++ /dev/null @@ -1,277 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import platform -import shutil -import time -import warnings - -import torch -from torch.optim import Optimizer - -import mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .hooks import IterTimerHook -from .utils import get_host_info - - -class IterLoader: - - def __init__(self, dataloader): - self._dataloader = dataloader - self.iter_loader = iter(self._dataloader) - self._epoch = 0 - - @property - def epoch(self): - return self._epoch - - def __next__(self): - try: - data = next(self.iter_loader) - except StopIteration: - self._epoch += 1 - if hasattr(self._dataloader.sampler, 'set_epoch'): - self._dataloader.sampler.set_epoch(self._epoch) - time.sleep(2) # Prevent possible deadlock during epoch transition - self.iter_loader = iter(self._dataloader) - data = next(self.iter_loader) - - return data - - def __len__(self): - return len(self._dataloader) - - -@RUNNERS.register_module() -class IterBasedRunner(BaseRunner): - """Iteration-based Runner. - - This runner train models iteration by iteration. - """ - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._epoch = data_loader.epoch - data_batch = next(data_loader) - self.data_batch = data_batch - self.call_hook('before_train_iter') - outputs = self.model.train_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.train_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_train_iter') - del self.data_batch - self._inner_iter += 1 - self._iter += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - data_batch = next(data_loader) - self.data_batch = data_batch - self.call_hook('before_val_iter') - outputs = self.model.val_step(data_batch, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.val_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_val_iter') - del self.data_batch - self._inner_iter += 1 - - def run(self, data_loaders, workflow, max_iters=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, iters) to specify the - running order and iterations. E.g, [('train', 10000), - ('val', 1000)] means running 10000 iterations for training and - 1000 iterations for validation, iteratively. 
- """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_iters is not None: - warnings.warn( - 'setting max_iters in run is deprecated, ' - 'please set max_iters in runner_config', DeprecationWarning) - self._max_iters = max_iters - assert self._max_iters is not None, ( - 'max_iters must be specified during instantiation') - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d iters', workflow, - self._max_iters) - self.call_hook('before_run') - - iter_loaders = [IterLoader(x) for x in data_loaders] - - self.call_hook('before_epoch') - - while self.iter < self._max_iters: - for i, flow in enumerate(workflow): - self._inner_iter = 0 - mode, iters = flow - if not isinstance(mode, str) or not hasattr(self, mode): - raise ValueError( - 'runner has no method named "{}" to run a workflow'. - format(mode)) - iter_runner = getattr(self, mode) - for _ in range(iters): - if mode == 'train' and self.iter >= self._max_iters: - break - iter_runner(iter_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_epoch') - self.call_hook('after_run') - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - """Resume model from checkpoint. - - Args: - checkpoint (str): Checkpoint to resume from. - resume_optimizer (bool, optional): Whether resume the optimizer(s) - if the checkpoint file includes optimizer(s). Default to True. - map_location (str, optional): Same as :func:`torch.load`. - Default to 'default'. - """ - if map_location == 'default': - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - self._inner_iter = checkpoint['meta']['iter'] - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}') - - def save_checkpoint(self, - out_dir, - filename_tmpl='iter_{}.pth', - meta=None, - save_optimizer=True, - create_symlink=True): - """Save checkpoint to file. - - Args: - out_dir (str): Directory to save checkpoint files. - filename_tmpl (str, optional): Checkpoint file template. - Defaults to 'iter_{}.pth'. - meta (dict, optional): Metadata to be saved in checkpoint. - Defaults to None. - save_optimizer (bool, optional): Whether save optimizer. - Defaults to True. - create_symlink (bool, optional): Whether create symlink to the - latest checkpoint file. Defaults to True. 
- """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. - # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.iter + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - def register_training_hooks(self, - lr_config, - optimizer_config=None, - checkpoint_config=None, - log_config=None, - momentum_config=None, - custom_hooks_config=None): - """Register default hooks for iter-based training. - - Checkpoint hook, optimizer stepper hook and logger hooks will be set to - `by_epoch=False` by default. - - Default hooks include: - - +----------------------+-------------------------+ - | Hooks | Priority | - +======================+=========================+ - | LrUpdaterHook | VERY_HIGH (10) | - +----------------------+-------------------------+ - | MomentumUpdaterHook | HIGH (30) | - +----------------------+-------------------------+ - | OptimizerStepperHook | ABOVE_NORMAL (40) | - +----------------------+-------------------------+ - | CheckpointSaverHook | NORMAL (50) | - +----------------------+-------------------------+ - | IterTimerHook | LOW (70) | - +----------------------+-------------------------+ - | LoggerHook(s) | VERY_LOW (90) | - +----------------------+-------------------------+ - | CustomHook(s) | defaults to NORMAL (50) | - +----------------------+-------------------------+ - - If custom hooks have same priority with default hooks, custom hooks - will be triggered after default hooks. - """ - if checkpoint_config is not None: - checkpoint_config.setdefault('by_epoch', False) - if lr_config is not None: - lr_config.setdefault('by_epoch', False) - if log_config is not None: - for info in log_config['hooks']: - info.setdefault('by_epoch', False) - super().register_training_hooks( - lr_config=lr_config, - momentum_config=momentum_config, - optimizer_config=optimizer_config, - checkpoint_config=checkpoint_config, - log_config=log_config, - timer_config=IterTimerHook(), - custom_hooks_config=custom_hooks_config) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/log_buffer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/log_buffer.py deleted file mode 100644 index 3c9f3796..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/log_buffer.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections import OrderedDict - -import numpy as np - - -class LogBuffer: - - def __init__(self): - self.val_history = OrderedDict() - self.n_history = OrderedDict() - self.output = OrderedDict() - self.ready = False - - def clear(self) -> None: - self.val_history.clear() - self.n_history.clear() - self.clear_output() - - def clear_output(self) -> None: - self.output.clear() - self.ready = False - - def update(self, vars: dict, count: int = 1) -> None: - assert isinstance(vars, dict) - for key, var in vars.items(): - if key not in self.val_history: - self.val_history[key] = [] - self.n_history[key] = [] - self.val_history[key].append(var) - self.n_history[key].append(count) - - def average(self, n: int = 0) -> None: - """Average latest n values or all values.""" - assert n >= 0 - for key in self.val_history: - values = np.array(self.val_history[key][-n:]) - nums = np.array(self.n_history[key][-n:]) - avg = np.sum(values * nums) / np.sum(nums) - self.output[key] = avg - self.ready = True diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/optimizer/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/optimizer/__init__.py deleted file mode 100644 index 53c34d04..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/optimizer/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import (OPTIMIZER_BUILDERS, OPTIMIZERS, build_optimizer, - build_optimizer_constructor) -from .default_constructor import DefaultOptimizerConstructor - -__all__ = [ - 'OPTIMIZER_BUILDERS', 'OPTIMIZERS', 'DefaultOptimizerConstructor', - 'build_optimizer', 'build_optimizer_constructor' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/optimizer/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/optimizer/builder.py deleted file mode 100644 index 49d8f05a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/optimizer/builder.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import inspect -from typing import Dict, List - -import torch - -from ...utils import Registry, build_from_cfg - -OPTIMIZERS = Registry('optimizer') -OPTIMIZER_BUILDERS = Registry('optimizer builder') - - -def register_torch_optimizers() -> List: - torch_optimizers = [] - for module_name in dir(torch.optim): - if module_name.startswith('__'): - continue - _optim = getattr(torch.optim, module_name) - if inspect.isclass(_optim) and issubclass(_optim, - torch.optim.Optimizer): - OPTIMIZERS.register_module()(_optim) - torch_optimizers.append(module_name) - return torch_optimizers - - -TORCH_OPTIMIZERS = register_torch_optimizers() - - -def build_optimizer_constructor(cfg: Dict): - return build_from_cfg(cfg, OPTIMIZER_BUILDERS) - - -def build_optimizer(model, cfg: Dict): - optimizer_cfg = copy.deepcopy(cfg) - constructor_type = optimizer_cfg.pop('constructor', - 'DefaultOptimizerConstructor') - paramwise_cfg = optimizer_cfg.pop('paramwise_cfg', None) - optim_constructor = build_optimizer_constructor( - dict( - type=constructor_type, - optimizer_cfg=optimizer_cfg, - paramwise_cfg=paramwise_cfg)) - optimizer = optim_constructor(model) - return optimizer diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/optimizer/default_constructor.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/optimizer/default_constructor.py deleted file mode 100644 index c82b56e5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/optimizer/default_constructor.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Dict, List, Optional, Union - -import torch -import torch.nn as nn -from torch.nn import GroupNorm, LayerNorm - -from mmcv.utils import _BatchNorm, _InstanceNorm, build_from_cfg, is_list_of -from mmcv.utils.ext_loader import check_ops_exist -from .builder import OPTIMIZER_BUILDERS, OPTIMIZERS - - -@OPTIMIZER_BUILDERS.register_module() -class DefaultOptimizerConstructor: - """Default constructor for optimizers. - - By default each parameter share the same optimizer settings, and we - provide an argument ``paramwise_cfg`` to specify parameter-wise settings. - It is a dict and may contain the following fields: - - - ``custom_keys`` (dict): Specified parameters-wise settings by keys. If - one of the keys in ``custom_keys`` is a substring of the name of one - parameter, then the setting of the parameter will be specified by - ``custom_keys[key]`` and other setting like ``bias_lr_mult`` etc. will - be ignored. It should be noted that the aforementioned ``key`` is the - longest key that is a substring of the name of the parameter. If there - are multiple matched keys with the same length, then the key with lower - alphabet order will be chosen. - ``custom_keys[key]`` should be a dict and may contain fields ``lr_mult`` - and ``decay_mult``. See Example 2 below. - - ``bias_lr_mult`` (float): It will be multiplied to the learning - rate for all bias parameters (except for those in normalization - layers and offset layers of DCN). - - ``bias_decay_mult`` (float): It will be multiplied to the weight - decay for all bias parameters (except for those in - normalization layers, depthwise conv layers, offset layers of DCN). - - ``norm_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of normalization - layers. 
- - ``dwconv_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of depthwise conv - layers. - - ``dcn_offset_lr_mult`` (float): It will be multiplied to the learning - rate for parameters of offset layer in the deformable convs - of a model. - - ``bypass_duplicate`` (bool): If true, the duplicate parameters - would not be added into optimizer. Default: False. - - Note: - - 1. If the option ``dcn_offset_lr_mult`` is used, the constructor will - override the effect of ``bias_lr_mult`` in the bias of offset layer. - So be careful when using both ``bias_lr_mult`` and - ``dcn_offset_lr_mult``. If you wish to apply both of them to the offset - layer in deformable convs, set ``dcn_offset_lr_mult`` to the original - ``dcn_offset_lr_mult`` * ``bias_lr_mult``. - - 2. If the option ``dcn_offset_lr_mult`` is used, the constructor will - apply it to all the DCN layers in the model. So be careful when the - model contains multiple DCN layers in places other than backbone. - - Args: - model (:obj:`nn.Module`): The model with parameters to be optimized. - optimizer_cfg (dict): The config dict of the optimizer. - Positional fields are - - - `type`: class name of the optimizer. - - Optional fields are - - - any arguments of the corresponding optimizer type, e.g., - lr, weight_decay, momentum, etc. - paramwise_cfg (dict, optional): Parameter-wise options. - - Example 1: - >>> model = torch.nn.modules.Conv1d(1, 1, 1) - >>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9, - >>> weight_decay=0.0001) - >>> paramwise_cfg = dict(norm_decay_mult=0.) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - - Example 2: - >>> # assume model have attribute model.backbone and model.cls_head - >>> optimizer_cfg = dict(type='SGD', lr=0.01, weight_decay=0.95) - >>> paramwise_cfg = dict(custom_keys={ - 'backbone': dict(lr_mult=0.1, decay_mult=0.9)}) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - >>> # Then the `lr` and `weight_decay` for model.backbone is - >>> # (0.01 * 0.1, 0.95 * 0.9). `lr` and `weight_decay` for - >>> # model.cls_head is (0.01, 0.95). 
- """ - - def __init__(self, - optimizer_cfg: Dict, - paramwise_cfg: Optional[Dict] = None): - if not isinstance(optimizer_cfg, dict): - raise TypeError('optimizer_cfg should be a dict', - f'but got {type(optimizer_cfg)}') - self.optimizer_cfg = optimizer_cfg - self.paramwise_cfg = {} if paramwise_cfg is None else paramwise_cfg - self.base_lr = optimizer_cfg.get('lr', None) - self.base_wd = optimizer_cfg.get('weight_decay', None) - self._validate_cfg() - - def _validate_cfg(self) -> None: - if not isinstance(self.paramwise_cfg, dict): - raise TypeError('paramwise_cfg should be None or a dict, ' - f'but got {type(self.paramwise_cfg)}') - - if 'custom_keys' in self.paramwise_cfg: - if not isinstance(self.paramwise_cfg['custom_keys'], dict): - raise TypeError( - 'If specified, custom_keys must be a dict, ' - f'but got {type(self.paramwise_cfg["custom_keys"])}') - if self.base_wd is None: - for key in self.paramwise_cfg['custom_keys']: - if 'decay_mult' in self.paramwise_cfg['custom_keys'][key]: - raise ValueError('base_wd should not be None') - - # get base lr and weight decay - # weight_decay must be explicitly specified if mult is specified - if ('bias_decay_mult' in self.paramwise_cfg - or 'norm_decay_mult' in self.paramwise_cfg - or 'dwconv_decay_mult' in self.paramwise_cfg): - if self.base_wd is None: - raise ValueError('base_wd should not be None') - - def _is_in(self, param_group: Dict, param_group_list: List) -> bool: - assert is_list_of(param_group_list, dict) - param = set(param_group['params']) - param_set = set() - for group in param_group_list: - param_set.update(set(group['params'])) - - return not param.isdisjoint(param_set) - - def add_params(self, - params: List[Dict], - module: nn.Module, - prefix: str = '', - is_dcn_module: Union[int, float, None] = None) -> None: - """Add all parameters of module to the params list. - - The parameters of the given module will be added to the list of param - groups, with specific rules defined by paramwise_cfg. - - Args: - params (list[dict]): A list of param groups, it will be modified - in place. - module (nn.Module): The module to be added. - prefix (str): The prefix of the module - is_dcn_module (int|float|None): If the current module is a - submodule of DCN, `is_dcn_module` will be passed to - control conv_offset layer's learning rate. Defaults to None. - """ - # get param-wise options - custom_keys = self.paramwise_cfg.get('custom_keys', {}) - # first sort with alphabet order and then sort with reversed len of str - sorted_keys = sorted(sorted(custom_keys.keys()), key=len, reverse=True) - - bias_lr_mult = self.paramwise_cfg.get('bias_lr_mult', 1.) - bias_decay_mult = self.paramwise_cfg.get('bias_decay_mult', 1.) - norm_decay_mult = self.paramwise_cfg.get('norm_decay_mult', 1.) - dwconv_decay_mult = self.paramwise_cfg.get('dwconv_decay_mult', 1.) - bypass_duplicate = self.paramwise_cfg.get('bypass_duplicate', False) - dcn_offset_lr_mult = self.paramwise_cfg.get('dcn_offset_lr_mult', 1.) - - # special rules for norm layers and depth-wise conv layers - is_norm = isinstance(module, - (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm)) - is_dwconv = ( - isinstance(module, torch.nn.Conv2d) - and module.in_channels == module.groups) - - for name, param in module.named_parameters(recurse=False): - param_group = {'params': [param]} - if not param.requires_grad: - params.append(param_group) - continue - if bypass_duplicate and self._is_in(param_group, params): - warnings.warn(f'{prefix} is duplicate. 
It is skipped since ' - f'bypass_duplicate={bypass_duplicate}') - continue - # if the parameter match one of the custom keys, ignore other rules - is_custom = False - for key in sorted_keys: - if key in f'{prefix}.{name}': - is_custom = True - lr_mult = custom_keys[key].get('lr_mult', 1.) - param_group['lr'] = self.base_lr * lr_mult - if self.base_wd is not None: - decay_mult = custom_keys[key].get('decay_mult', 1.) - param_group['weight_decay'] = self.base_wd * decay_mult - break - - if not is_custom: - # bias_lr_mult affects all bias parameters - # except for norm.bias dcn.conv_offset.bias - if name == 'bias' and not (is_norm or is_dcn_module): - param_group['lr'] = self.base_lr * bias_lr_mult - - if (prefix.find('conv_offset') != -1 and is_dcn_module - and isinstance(module, torch.nn.Conv2d)): - # deal with both dcn_offset's bias & weight - param_group['lr'] = self.base_lr * dcn_offset_lr_mult - - # apply weight decay policies - if self.base_wd is not None: - # norm decay - if is_norm: - param_group[ - 'weight_decay'] = self.base_wd * norm_decay_mult - # depth-wise conv - elif is_dwconv: - param_group[ - 'weight_decay'] = self.base_wd * dwconv_decay_mult - # bias lr and decay - elif name == 'bias' and not is_dcn_module: - # TODO: current bias_decay_mult will have affect on DCN - param_group[ - 'weight_decay'] = self.base_wd * bias_decay_mult - params.append(param_group) - - if check_ops_exist(): - from mmcv.ops import DeformConv2d, ModulatedDeformConv2d - is_dcn_module = isinstance(module, - (DeformConv2d, ModulatedDeformConv2d)) - else: - is_dcn_module = False - for child_name, child_mod in module.named_children(): - child_prefix = f'{prefix}.{child_name}' if prefix else child_name - self.add_params( - params, - child_mod, - prefix=child_prefix, - is_dcn_module=is_dcn_module) - - def __call__(self, model: nn.Module): - if hasattr(model, 'module'): - model = model.module - - optimizer_cfg = self.optimizer_cfg.copy() - # if no paramwise option is specified, just use the global setting - if not self.paramwise_cfg: - optimizer_cfg['params'] = model.parameters() - return build_from_cfg(optimizer_cfg, OPTIMIZERS) - - # set param-wise lr and weight decay recursively - params: List[Dict] = [] - self.add_params(params, model) - optimizer_cfg['params'] = params - - return build_from_cfg(optimizer_cfg, OPTIMIZERS) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/priority.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/priority.py deleted file mode 100644 index ff644043..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/priority.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import Enum -from typing import Union - - -class Priority(Enum): - """Hook priority levels. 
- - +--------------+------------+ - | Level | Value | - +==============+============+ - | HIGHEST | 0 | - +--------------+------------+ - | VERY_HIGH | 10 | - +--------------+------------+ - | HIGH | 30 | - +--------------+------------+ - | ABOVE_NORMAL | 40 | - +--------------+------------+ - | NORMAL | 50 | - +--------------+------------+ - | BELOW_NORMAL | 60 | - +--------------+------------+ - | LOW | 70 | - +--------------+------------+ - | VERY_LOW | 90 | - +--------------+------------+ - | LOWEST | 100 | - +--------------+------------+ - """ - - HIGHEST = 0 - VERY_HIGH = 10 - HIGH = 30 - ABOVE_NORMAL = 40 - NORMAL = 50 - BELOW_NORMAL = 60 - LOW = 70 - VERY_LOW = 90 - LOWEST = 100 - - -def get_priority(priority: Union[int, str, Priority]) -> int: - """Get priority value. - - Args: - priority (int or str or :obj:`Priority`): Priority. - - Returns: - int: The priority value. - """ - if isinstance(priority, int): - if priority < 0 or priority > 100: - raise ValueError('priority must be between 0 and 100') - return priority - elif isinstance(priority, Priority): - return priority.value - elif isinstance(priority, str): - return Priority[priority.upper()].value - else: - raise TypeError('priority must be an integer or Priority enum value') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/utils.py deleted file mode 100644 index 34e4ed5a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/runner/utils.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import random -import sys -import time -import warnings -from getpass import getuser -from socket import gethostname -from types import ModuleType -from typing import Optional - -import numpy as np -import torch - -import mmcv - - -def get_host_info(): - """Get hostname and username. - - Return empty string if exception raised, e.g. ``getpass.getuser()`` will - lead to error in docker container - """ - host = '' - try: - host = f'{getuser()}@{gethostname()}' - except Exception as e: - warnings.warn(f'Host or user not found: {str(e)}') - finally: - return host - - -def get_time_str(): - return time.strftime('%Y%m%d_%H%M%S', time.localtime()) - - -def obj_from_dict(info: dict, - parent: Optional[ModuleType] = None, - default_args: Optional[dict] = None): - """Initialize an object from dict. - - The dict must contain the key "type", which indicates the object type, it - can be either a string or type, such as "list" or ``list``. Remaining - fields are treated as the arguments for constructing the object. - - Args: - info (dict): Object types and arguments. - parent (:class:`module`): Module which may containing expected object - classes. - default_args (dict, optional): Default arguments for initializing the - object. - - Returns: - any type: Object built from the dict. 
- """ - assert isinstance(info, dict) and 'type' in info - assert isinstance(default_args, dict) or default_args is None - args = info.copy() - obj_type = args.pop('type') - if mmcv.is_str(obj_type): - if parent is not None: - obj_type = getattr(parent, obj_type) - else: - obj_type = sys.modules[obj_type] - elif not isinstance(obj_type, type): - raise TypeError('type must be a str or valid type, but ' - f'got {type(obj_type)}') - if default_args is not None: - for name, value in default_args.items(): - args.setdefault(name, value) - return obj_type(**args) - - -def set_random_seed(seed: int, - deterministic: bool = False, - use_rank_shift: bool = False) -> None: - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - rank_shift (bool): Whether to add rank number to the random seed to - have different random seed in different threads. Default: False. - """ - if use_rank_shift: - rank, _ = mmcv.runner.get_dist_info() - seed += rank - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/__init__.py deleted file mode 100644 index d86ddbf4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# flake8: noqa -from .init_plugins import is_tensorrt_plugin_loaded, load_tensorrt_plugin -from .preprocess import preprocess_onnx - - -def is_tensorrt_available(): - try: - import tensorrt - del tensorrt - return True - except ModuleNotFoundError: - return False - - -__all__ = [] - -if is_tensorrt_available(): - from .tensorrt_utils import (TRTWraper, TRTWrapper, load_trt_engine, - onnx2trt, save_trt_engine) - - # load tensorrt plugin lib - load_tensorrt_plugin() - - __all__.extend([ - 'onnx2trt', 'save_trt_engine', 'load_trt_engine', 'TRTWraper', - 'TRTWrapper' - ]) - -__all__.extend(['is_tensorrt_plugin_loaded', 'preprocess_onnx']) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/init_plugins.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/init_plugins.py deleted file mode 100644 index 909b9ae2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/init_plugins.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import ctypes -import glob -import os -import warnings - - -def get_tensorrt_op_path() -> str: - """Get TensorRT plugins library path.""" - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This function will be deprecated in future. 
' - msg += blue_text + 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - wildcard = os.path.join( - os.path.abspath(os.path.dirname(os.path.dirname(__file__))), - '_ext_trt.*.so') - - paths = glob.glob(wildcard) - lib_path = paths[0] if len(paths) > 0 else '' - return lib_path - - -plugin_is_loaded = False - - -def is_tensorrt_plugin_loaded() -> bool: - """Check if TensorRT plugins library is loaded or not. - - Returns: - bool: plugin_is_loaded flag - """ - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This function will be deprecated in future. ' - msg += blue_text + 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - global plugin_is_loaded - return plugin_is_loaded - - -def load_tensorrt_plugin() -> None: - """load TensorRT plugins library.""" - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This function will be deprecated in future. ' - msg += blue_text + 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - global plugin_is_loaded - lib_path = get_tensorrt_op_path() - if (not plugin_is_loaded) and os.path.exists(lib_path): - ctypes.CDLL(lib_path) - plugin_is_loaded = True diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/preprocess.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/preprocess.py deleted file mode 100644 index a0ad2542..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/preprocess.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import numpy as np -import onnx - - -def preprocess_onnx(onnx_model: onnx.ModelProto) -> onnx.ModelProto: - """Modify onnx model to match with TensorRT plugins in mmcv. - - There are some conflict between onnx node definition and TensorRT limit. - This function perform preprocess on the onnx model to solve the conflicts. - For example, onnx `attribute` is loaded in TensorRT on host and onnx - `input` is loaded on device. The shape inference is performed on host, so - any `input` related to shape (such as `max_output_boxes_per_class` in - NonMaxSuppression) should be transformed to `attribute` before conversion. - - Arguments: - onnx_model (onnx.ModelProto): Input onnx model. - - Returns: - onnx.ModelProto: Modified onnx model. - """ - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This function will be deprecated in future. 
' - msg += blue_text + 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - graph = onnx_model.graph - nodes = graph.node - initializers = graph.initializer - node_dict = {} - for node in nodes: - node_outputs = node.output - for output in node_outputs: - if len(output) > 0: - node_dict[output] = node - - init_dict = {_.name: _ for _ in initializers} - - nodes_name_to_remove = set() - - def is_node_without_output(name): - for node_name, node in node_dict.items(): - if node_name not in nodes_name_to_remove: - if name in node.input: - return False - return True - - def mark_nodes_to_remove(name): - node = node_dict[name] - nodes_name_to_remove.add(name) - for input_node_name in node.input: - if is_node_without_output(input_node_name): - mark_nodes_to_remove(input_node_name) - - def parse_data(name, typ, default_value=0): - if name in node_dict: - node = node_dict[name] - if node.op_type == 'Constant': - raw_data = node.attribute[0].t.raw_data - else: - mark_nodes_to_remove(name) - return default_value - elif name in init_dict: - raw_data = init_dict[name].raw_data - else: - raise ValueError(f'{name} not found in node or initilizer.') - return np.frombuffer(raw_data, typ).item() - - nrof_node = len(nodes) - for idx in range(nrof_node): - node = nodes[idx] - node_attributes = node.attribute - node_inputs = node.input - node_outputs = node.output - node_name = node.name - # process NonMaxSuppression node - if node.op_type == 'NonMaxSuppression': - center_point_box = 0 - max_output_boxes_per_class = 1000000 - iou_threshold = 0.3 - score_threshold = 0.0 - offset = 0 - for attribute in node_attributes: - if attribute.name == 'center_point_box': - center_point_box = attribute.i - elif attribute.name == 'offset': - offset = attribute.i - - if len(node_inputs) >= 3: - max_output_boxes_per_class = parse_data( - node_inputs[2], np.int64, max_output_boxes_per_class) - mark_nodes_to_remove(node_inputs[2]) - - if len(node_inputs) >= 4: - iou_threshold = parse_data(node_inputs[3], np.float32, - iou_threshold) - mark_nodes_to_remove(node_inputs[3]) - - if len(node_inputs) >= 5: - score_threshold = parse_data(node_inputs[4], np.float32) - mark_nodes_to_remove(node_inputs[4]) - - new_node = onnx.helper.make_node( - 'NonMaxSuppression', - node_inputs[:2], - node_outputs, - name=node_name, - center_point_box=center_point_box, - max_output_boxes_per_class=max_output_boxes_per_class, - iou_threshold=iou_threshold, - score_threshold=score_threshold, - offset=offset) - - for output in node_outputs: - if output in node_dict: - node_dict[output] = new_node - nodes.insert(idx, new_node) - nodes.remove(node) - elif node.op_type == 'InstanceNormalization': - # directly change op name - node.op_type = 'MMCVInstanceNormalization' - - for node_name in nodes_name_to_remove: - nodes.remove(node_dict[node_name]) - - return onnx_model diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/tensorrt_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/tensorrt_utils.py deleted file mode 100644 index b415abcd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/tensorrt/tensorrt_utils.py +++ /dev/null @@ -1,291 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings -from typing import Union - -import onnx -import tensorrt as trt -import torch - -from .preprocess import preprocess_onnx - - -def onnx2trt(onnx_model: Union[str, onnx.ModelProto], - opt_shape_dict: dict, - log_level: trt.ILogger.Severity = trt.Logger.ERROR, - fp16_mode: bool = False, - max_workspace_size: int = 0, - device_id: int = 0) -> trt.ICudaEngine: - """Convert onnx model to tensorrt engine. - - Arguments: - onnx_model (str or onnx.ModelProto): the onnx model to convert from - opt_shape_dict (dict): the min/opt/max shape of each input - log_level (TensorRT log level): the log level of TensorRT - fp16_mode (bool): enable fp16 mode - max_workspace_size (int): set max workspace size of TensorRT engine. - some tactic and layers need large workspace. - device_id (int): choice the device to create engine. - - Returns: - tensorrt.ICudaEngine: the TensorRT engine created from onnx_model - - Example: - >>> engine = onnx2trt( - >>> "onnx_model.onnx", - >>> {'input': [[1, 3, 160, 160], - >>> [1, 3, 320, 320], - >>> [1, 3, 640, 640]]}, - >>> log_level=trt.Logger.WARNING, - >>> fp16_mode=True, - >>> max_workspace_size=1 << 30, - >>> device_id=0) - >>> }) - """ - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This function will be deprecated in future. ' - msg += blue_text + 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - device = torch.device(f'cuda:{device_id}') - # create builder and network - logger = trt.Logger(log_level) - builder = trt.Builder(logger) - EXPLICIT_BATCH = 1 << (int)( - trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH) - network = builder.create_network(EXPLICIT_BATCH) - - # parse onnx - parser = trt.OnnxParser(network, logger) - - if isinstance(onnx_model, str): - onnx_model = onnx.load(onnx_model) - - onnx_model = preprocess_onnx(onnx_model) - - if not parser.parse(onnx_model.SerializeToString()): - error_msgs = '' - for error in range(parser.num_errors): - error_msgs += f'{parser.get_error(error)}\n' - raise RuntimeError(f'parse onnx failed:\n{error_msgs}') - - # config builder - builder.max_workspace_size = max_workspace_size - - config = builder.create_builder_config() - config.max_workspace_size = max_workspace_size - profile = builder.create_optimization_profile() - - for input_name, param in opt_shape_dict.items(): - min_shape = tuple(param[0][:]) - opt_shape = tuple(param[1][:]) - max_shape = tuple(param[2][:]) - profile.set_shape(input_name, min_shape, opt_shape, max_shape) - config.add_optimization_profile(profile) - - if fp16_mode: - builder.fp16_mode = fp16_mode - config.set_flag(trt.BuilderFlag.FP16) - - # create engine - with torch.cuda.device(device): - engine = builder.build_engine(network, config) - - return engine - - -def save_trt_engine(engine: trt.ICudaEngine, path: str) -> None: - """Serialize TensorRT engine to disk. 
- - Arguments: - engine (tensorrt.ICudaEngine): TensorRT engine to serialize - path (str): disk path to write the engine - """ - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This function will be deprecated in future. ' - msg += blue_text + 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - with open(path, mode='wb') as f: - f.write(bytearray(engine.serialize())) - - -def load_trt_engine(path: str) -> trt.ICudaEngine: - """Deserialize TensorRT engine from disk. - - Arguments: - path (str): disk path to read the engine - - Returns: - tensorrt.ICudaEngine: the TensorRT engine loaded from disk - """ - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This function will be deprecated in future. ' - msg += blue_text + 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - with trt.Logger() as logger, trt.Runtime(logger) as runtime: - with open(path, mode='rb') as f: - engine_bytes = f.read() - engine = runtime.deserialize_cuda_engine(engine_bytes) - return engine - - -def torch_dtype_from_trt(dtype: trt.DataType) -> Union[torch.dtype, TypeError]: - """Convert pytorch dtype to TensorRT dtype.""" - if dtype == trt.bool: - return torch.bool - elif dtype == trt.int8: - return torch.int8 - elif dtype == trt.int32: - return torch.int32 - elif dtype == trt.float16: - return torch.float16 - elif dtype == trt.float32: - return torch.float32 - else: - raise TypeError('%s is not supported by torch' % dtype) - - -def torch_device_from_trt( - device: trt.TensorLocation) -> Union[torch.device, TypeError]: - """Convert pytorch device to TensorRT device.""" - if device == trt.TensorLocation.DEVICE: - return torch.device('cuda') - elif device == trt.TensorLocation.HOST: - return torch.device('cpu') - else: - return TypeError('%s is not supported by torch' % device) - - -class TRTWrapper(torch.nn.Module): - """TensorRT engine Wrapper. - - Arguments: - engine (tensorrt.ICudaEngine): TensorRT engine to wrap - input_names (list[str]): names of each inputs - output_names (list[str]): names of each outputs - - Note: - If the engine is converted from onnx model. The input_names and - output_names should be the same as onnx model. - """ - - def __init__(self, engine, input_names=None, output_names=None): - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This tool will be deprecated in future. 
' - msg += blue_text + \ - 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - super().__init__() - self.engine = engine - if isinstance(self.engine, str): - self.engine = load_trt_engine(engine) - - if not isinstance(self.engine, trt.ICudaEngine): - raise TypeError('engine should be str or trt.ICudaEngine') - - self._register_state_dict_hook(TRTWrapper._on_state_dict) - self.context = self.engine.create_execution_context() - - # get input and output names from engine - if input_names is None or output_names is None: - names = [_ for _ in self.engine] - input_names = list(filter(self.engine.binding_is_input, names)) - output_names = list(set(names) - set(input_names)) - self.input_names = input_names - self.output_names = output_names - - def _on_state_dict(self, state_dict, prefix, local_metadata): - state_dict[prefix + 'engine'] = bytearray(self.engine.serialize()) - state_dict[prefix + 'input_names'] = self.input_names - state_dict[prefix + 'output_names'] = self.output_names - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - engine_bytes = state_dict[prefix + 'engine'] - - with trt.Logger() as logger, trt.Runtime(logger) as runtime: - self.engine = runtime.deserialize_cuda_engine(engine_bytes) - self.context = self.engine.create_execution_context() - - self.input_names = state_dict[prefix + 'input_names'] - self.output_names = state_dict[prefix + 'output_names'] - - def forward(self, inputs): - """ - Arguments: - inputs (dict): dict of input name-tensors pair - - Return: - dict: dict of output name-tensors pair - """ - assert self.input_names is not None - assert self.output_names is not None - bindings = [None] * (len(self.input_names) + len(self.output_names)) - - for input_name, input_tensor in inputs.items(): - idx = self.engine.get_binding_index(input_name) - - if input_tensor.dtype == torch.long: - input_tensor = input_tensor.int() - self.context.set_binding_shape(idx, tuple(input_tensor.shape)) - bindings[idx] = input_tensor.contiguous().data_ptr() - - # create output tensors - outputs = {} - for i, output_name in enumerate(self.output_names): - idx = self.engine.get_binding_index(output_name) - dtype = torch_dtype_from_trt(self.engine.get_binding_dtype(idx)) - shape = tuple(self.context.get_binding_shape(idx)) - - device = torch_device_from_trt(self.engine.get_location(idx)) - output = torch.empty(size=shape, dtype=dtype, device=device) - outputs[output_name] = output - bindings[idx] = output.data_ptr() - - self.context.execute_async_v2(bindings, - torch.cuda.current_stream().cuda_stream) - - return outputs - - -class TRTWraper(TRTWrapper): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'TRTWraper will be deprecated in' - ' future. Please use TRTWrapper instead', DeprecationWarning) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/__init__.py deleted file mode 100644 index e0825ed6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/__init__.py +++ /dev/null @@ -1,78 +0,0 @@ -# flake8: noqa -# Copyright (c) OpenMMLab. All rights reserved. 
-from .config import Config, ConfigDict, DictAction -from .misc import (check_prerequisites, concat_list, deprecated_api_warning, - has_method, import_modules_from_strings, is_list_of, - is_method_overridden, is_seq_of, is_str, is_tuple_of, - iter_cast, list_cast, requires_executable, requires_package, - slice_list, to_1tuple, to_2tuple, to_3tuple, to_4tuple, - to_ntuple, tuple_cast) -from .path import (check_file_exist, fopen, is_filepath, mkdir_or_exist, - scandir, symlink) -from .progressbar import (ProgressBar, track_iter_progress, - track_parallel_progress, track_progress) -from .testing import (assert_attrs_equal, assert_dict_contains_subset, - assert_dict_has_keys, assert_is_norm_layer, - assert_keys_equal, assert_params_all_zeros, - check_python_script) -from .timer import Timer, TimerError, check_time -from .version_utils import digit_version, get_git_hash - -try: - import torch -except ImportError: - __all__ = [ - 'Config', 'ConfigDict', 'DictAction', 'is_str', 'iter_cast', - 'list_cast', 'tuple_cast', 'is_seq_of', 'is_list_of', 'is_tuple_of', - 'slice_list', 'concat_list', 'check_prerequisites', 'requires_package', - 'requires_executable', 'is_filepath', 'fopen', 'check_file_exist', - 'mkdir_or_exist', 'symlink', 'scandir', 'ProgressBar', - 'track_progress', 'track_iter_progress', 'track_parallel_progress', - 'Timer', 'TimerError', 'check_time', 'deprecated_api_warning', - 'digit_version', 'get_git_hash', 'import_modules_from_strings', - 'assert_dict_contains_subset', 'assert_attrs_equal', - 'assert_dict_has_keys', 'assert_keys_equal', 'check_python_script', - 'to_1tuple', 'to_2tuple', 'to_3tuple', 'to_4tuple', 'to_ntuple', - 'is_method_overridden', 'has_method' - ] -else: - from .device_type import IS_IPU_AVAILABLE, IS_MLU_AVAILABLE - from .env import collect_env - from .hub import load_url - from .logging import get_logger, print_log - from .parrots_jit import jit, skip_no_elena - # yapf: disable - from .parrots_wrapper import (IS_CUDA_AVAILABLE, TORCH_VERSION, - BuildExtension, CppExtension, CUDAExtension, - DataLoader, PoolDataLoader, SyncBatchNorm, - _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, - _AvgPoolNd, _BatchNorm, _ConvNd, - _ConvTransposeMixin, _get_cuda_home, - _InstanceNorm, _MaxPoolNd, get_build_config, - is_rocm_pytorch) - # yapf: enable - from .registry import Registry, build_from_cfg - from .seed import worker_init_fn - from .trace import is_jit_tracing - __all__ = [ - 'Config', 'ConfigDict', 'DictAction', 'collect_env', 'get_logger', - 'print_log', 'is_str', 'iter_cast', 'list_cast', 'tuple_cast', - 'is_seq_of', 'is_list_of', 'is_tuple_of', 'slice_list', 'concat_list', - 'check_prerequisites', 'requires_package', 'requires_executable', - 'is_filepath', 'fopen', 'check_file_exist', 'mkdir_or_exist', - 'symlink', 'scandir', 'ProgressBar', 'track_progress', - 'track_iter_progress', 'track_parallel_progress', 'Registry', - 'build_from_cfg', 'Timer', 'TimerError', 'check_time', 'SyncBatchNorm', - '_AdaptiveAvgPoolNd', '_AdaptiveMaxPoolNd', '_AvgPoolNd', '_BatchNorm', - '_ConvNd', '_ConvTransposeMixin', '_InstanceNorm', '_MaxPoolNd', - 'get_build_config', 'BuildExtension', 'CppExtension', 'CUDAExtension', - 'DataLoader', 'PoolDataLoader', 'TORCH_VERSION', - 'deprecated_api_warning', 'digit_version', 'get_git_hash', - 'import_modules_from_strings', 'jit', 'skip_no_elena', - 'assert_dict_contains_subset', 'assert_attrs_equal', - 'assert_dict_has_keys', 'assert_keys_equal', 'assert_is_norm_layer', - 'assert_params_all_zeros', 'check_python_script', - 'is_method_overridden', 
'is_jit_tracing', 'is_rocm_pytorch', - '_get_cuda_home', 'load_url', 'has_method', 'IS_CUDA_AVAILABLE', - 'worker_init_fn', 'IS_MLU_AVAILABLE', 'IS_IPU_AVAILABLE' - ] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/config.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/config.py deleted file mode 100644 index a76bc487..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/config.py +++ /dev/null @@ -1,741 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import ast -import copy -import os -import os.path as osp -import platform -import shutil -import sys -import tempfile -import types -import uuid -import warnings -from argparse import Action, ArgumentParser -from collections import abc -from importlib import import_module -from pathlib import Path - -from addict import Dict -from yapf.yapflib.yapf_api import FormatCode - -from .misc import import_modules_from_strings -from .path import check_file_exist - -if platform.system() == 'Windows': - import regex as re # type: ignore -else: - import re # type: ignore - -BASE_KEY = '_base_' -DELETE_KEY = '_delete_' -DEPRECATION_KEY = '_deprecation_' -RESERVED_KEYS = ['filename', 'text', 'pretty_text'] - - -class ConfigDict(Dict): - - def __missing__(self, name): - raise KeyError(name) - - def __getattr__(self, name): - try: - value = super().__getattr__(name) - except KeyError: - ex = AttributeError(f"'{self.__class__.__name__}' object has no " - f"attribute '{name}'") - except Exception as e: - ex = e - else: - return value - raise ex - - -def add_args(parser, cfg, prefix=''): - for k, v in cfg.items(): - if isinstance(v, str): - parser.add_argument('--' + prefix + k) - elif isinstance(v, int): - parser.add_argument('--' + prefix + k, type=int) - elif isinstance(v, float): - parser.add_argument('--' + prefix + k, type=float) - elif isinstance(v, bool): - parser.add_argument('--' + prefix + k, action='store_true') - elif isinstance(v, dict): - add_args(parser, v, prefix + k + '.') - elif isinstance(v, abc.Iterable): - parser.add_argument('--' + prefix + k, type=type(v[0]), nargs='+') - else: - print(f'cannot parse key {prefix + k} of type {type(v)}') - return parser - - -class Config: - """A facility for config and config files. - - It supports common file formats as configs: python/json/yaml. The interface - is the same as a dict object and also allows access config values as - attributes. 
- - Example: - >>> cfg = Config(dict(a=1, b=dict(b1=[0, 1]))) - >>> cfg.a - 1 - >>> cfg.b - {'b1': [0, 1]} - >>> cfg.b.b1 - [0, 1] - >>> cfg = Config.fromfile('tests/data/config/a.py') - >>> cfg.filename - "/home/kchen/projects/mmcv/tests/data/config/a.py" - >>> cfg.item4 - 'test' - >>> cfg - "Config [path: /home/kchen/projects/mmcv/tests/data/config/a.py]: " - "{'item1': [1, 2], 'item2': {'a': 0}, 'item3': True, 'item4': 'test'}" - """ - - @staticmethod - def _validate_py_syntax(filename): - with open(filename, encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - content = f.read() - try: - ast.parse(content) - except SyntaxError as e: - raise SyntaxError('There are syntax errors in config ' - f'file {filename}: {e}') - - @staticmethod - def _substitute_predefined_vars(filename, temp_config_name): - file_dirname = osp.dirname(filename) - file_basename = osp.basename(filename) - file_basename_no_extension = osp.splitext(file_basename)[0] - file_extname = osp.splitext(filename)[1] - support_templates = dict( - fileDirname=file_dirname, - fileBasename=file_basename, - fileBasenameNoExtension=file_basename_no_extension, - fileExtname=file_extname) - with open(filename, encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - config_file = f.read() - for key, value in support_templates.items(): - regexp = r'\{\{\s*' + str(key) + r'\s*\}\}' - value = value.replace('\\', '/') - config_file = re.sub(regexp, value, config_file) - with open(temp_config_name, 'w', encoding='utf-8') as tmp_config_file: - tmp_config_file.write(config_file) - - @staticmethod - def _pre_substitute_base_vars(filename, temp_config_name): - """Substitute base variable placehoders to string, so that parsing - would work.""" - with open(filename, encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - config_file = f.read() - base_var_dict = {} - regexp = r'\{\{\s*' + BASE_KEY + r'\.([\w\.]+)\s*\}\}' - base_vars = set(re.findall(regexp, config_file)) - for base_var in base_vars: - randstr = f'_{base_var}_{uuid.uuid4().hex.lower()[:6]}' - base_var_dict[randstr] = base_var - regexp = r'\{\{\s*' + BASE_KEY + r'\.' 
+ base_var + r'\s*\}\}' - config_file = re.sub(regexp, f'"{randstr}"', config_file) - with open(temp_config_name, 'w', encoding='utf-8') as tmp_config_file: - tmp_config_file.write(config_file) - return base_var_dict - - @staticmethod - def _substitute_base_vars(cfg, base_var_dict, base_cfg): - """Substitute variable strings to their actual values.""" - cfg = copy.deepcopy(cfg) - - if isinstance(cfg, dict): - for k, v in cfg.items(): - if isinstance(v, str) and v in base_var_dict: - new_v = base_cfg - for new_k in base_var_dict[v].split('.'): - new_v = new_v[new_k] - cfg[k] = new_v - elif isinstance(v, (list, tuple, dict)): - cfg[k] = Config._substitute_base_vars( - v, base_var_dict, base_cfg) - elif isinstance(cfg, tuple): - cfg = tuple( - Config._substitute_base_vars(c, base_var_dict, base_cfg) - for c in cfg) - elif isinstance(cfg, list): - cfg = [ - Config._substitute_base_vars(c, base_var_dict, base_cfg) - for c in cfg - ] - elif isinstance(cfg, str) and cfg in base_var_dict: - new_v = base_cfg - for new_k in base_var_dict[cfg].split('.'): - new_v = new_v[new_k] - cfg = new_v - - return cfg - - @staticmethod - def _file2dict(filename, use_predefined_variables=True): - filename = osp.abspath(osp.expanduser(filename)) - check_file_exist(filename) - fileExtname = osp.splitext(filename)[1] - if fileExtname not in ['.py', '.json', '.yaml', '.yml']: - raise OSError('Only py/yml/yaml/json type are supported now!') - - with tempfile.TemporaryDirectory() as temp_config_dir: - temp_config_file = tempfile.NamedTemporaryFile( - dir=temp_config_dir, suffix=fileExtname) - if platform.system() == 'Windows': - temp_config_file.close() - temp_config_name = osp.basename(temp_config_file.name) - # Substitute predefined variables - if use_predefined_variables: - Config._substitute_predefined_vars(filename, - temp_config_file.name) - else: - shutil.copyfile(filename, temp_config_file.name) - # Substitute base variables from placeholders to strings - base_var_dict = Config._pre_substitute_base_vars( - temp_config_file.name, temp_config_file.name) - - if filename.endswith('.py'): - temp_module_name = osp.splitext(temp_config_name)[0] - sys.path.insert(0, temp_config_dir) - Config._validate_py_syntax(filename) - mod = import_module(temp_module_name) - sys.path.pop(0) - cfg_dict = { - name: value - for name, value in mod.__dict__.items() - if not name.startswith('__') - and not isinstance(value, types.ModuleType) - and not isinstance(value, types.FunctionType) - } - # delete imported module - del sys.modules[temp_module_name] - elif filename.endswith(('.yml', '.yaml', '.json')): - import mmcv - cfg_dict = mmcv.load(temp_config_file.name) - # close temp file - temp_config_file.close() - - # check deprecation information - if DEPRECATION_KEY in cfg_dict: - deprecation_info = cfg_dict.pop(DEPRECATION_KEY) - warning_msg = f'The config file {filename} will be deprecated ' \ - 'in the future.' - if 'expected' in deprecation_info: - warning_msg += f' Please use {deprecation_info["expected"]} ' \ - 'instead.' 
- if 'reference' in deprecation_info: - warning_msg += ' More information can be found at ' \ - f'{deprecation_info["reference"]}' - warnings.warn(warning_msg, DeprecationWarning) - - cfg_text = filename + '\n' - with open(filename, encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - cfg_text += f.read() - - if BASE_KEY in cfg_dict: - cfg_dir = osp.dirname(filename) - base_filename = cfg_dict.pop(BASE_KEY) - base_filename = base_filename if isinstance( - base_filename, list) else [base_filename] - - cfg_dict_list = list() - cfg_text_list = list() - for f in base_filename: - _cfg_dict, _cfg_text = Config._file2dict(osp.join(cfg_dir, f)) - cfg_dict_list.append(_cfg_dict) - cfg_text_list.append(_cfg_text) - - base_cfg_dict = dict() - for c in cfg_dict_list: - duplicate_keys = base_cfg_dict.keys() & c.keys() - if len(duplicate_keys) > 0: - raise KeyError('Duplicate key is not allowed among bases. ' - f'Duplicate keys: {duplicate_keys}') - base_cfg_dict.update(c) - - # Substitute base variables from strings to their actual values - cfg_dict = Config._substitute_base_vars(cfg_dict, base_var_dict, - base_cfg_dict) - - base_cfg_dict = Config._merge_a_into_b(cfg_dict, base_cfg_dict) - cfg_dict = base_cfg_dict - - # merge cfg_text - cfg_text_list.append(cfg_text) - cfg_text = '\n'.join(cfg_text_list) - - return cfg_dict, cfg_text - - @staticmethod - def _merge_a_into_b(a, b, allow_list_keys=False): - """merge dict ``a`` into dict ``b`` (non-inplace). - - Values in ``a`` will overwrite ``b``. ``b`` is copied first to avoid - in-place modifications. - - Args: - a (dict): The source dict to be merged into ``b``. - b (dict): The origin dict to be fetch keys from ``a``. - allow_list_keys (bool): If True, int string keys (e.g. '0', '1') - are allowed in source ``a`` and will replace the element of the - corresponding index in b if b is a list. Default: False. - - Returns: - dict: The modified dict of ``b`` using ``a``. - - Examples: - # Normally merge a into b. - >>> Config._merge_a_into_b( - ... dict(obj=dict(a=2)), dict(obj=dict(a=1))) - {'obj': {'a': 2}} - - # Delete b first and merge a into b. - >>> Config._merge_a_into_b( - ... dict(obj=dict(_delete_=True, a=2)), dict(obj=dict(a=1))) - {'obj': {'a': 2}} - - # b is a list - >>> Config._merge_a_into_b( - ... {'0': dict(a=2)}, [dict(a=1), dict(b=2)], True) - [{'a': 2}, {'b': 2}] - """ - b = b.copy() - for k, v in a.items(): - if allow_list_keys and k.isdigit() and isinstance(b, list): - k = int(k) - if len(b) <= k: - raise KeyError(f'Index {k} exceeds the length of list {b}') - b[k] = Config._merge_a_into_b(v, b[k], allow_list_keys) - elif isinstance(v, dict): - if k in b and not v.pop(DELETE_KEY, False): - allowed_types = (dict, list) if allow_list_keys else dict - if not isinstance(b[k], allowed_types): - raise TypeError( - f'{k}={v} in child config cannot inherit from ' - f'base because {k} is a dict in the child config ' - f'but is of type {type(b[k])} in base config. 
' - f'You may set `{DELETE_KEY}=True` to ignore the ' - f'base config.') - b[k] = Config._merge_a_into_b(v, b[k], allow_list_keys) - else: - b[k] = ConfigDict(v) - else: - b[k] = v - return b - - @staticmethod - def fromfile(filename, - use_predefined_variables=True, - import_custom_modules=True): - if isinstance(filename, Path): - filename = str(filename) - cfg_dict, cfg_text = Config._file2dict(filename, - use_predefined_variables) - if import_custom_modules and cfg_dict.get('custom_imports', None): - import_modules_from_strings(**cfg_dict['custom_imports']) - return Config(cfg_dict, cfg_text=cfg_text, filename=filename) - - @staticmethod - def fromstring(cfg_str, file_format): - """Generate config from config str. - - Args: - cfg_str (str): Config str. - file_format (str): Config file format corresponding to the - config str. Only py/yml/yaml/json type are supported now! - - Returns: - :obj:`Config`: Config obj. - """ - if file_format not in ['.py', '.json', '.yaml', '.yml']: - raise OSError('Only py/yml/yaml/json type are supported now!') - if file_format != '.py' and 'dict(' in cfg_str: - # check if users specify a wrong suffix for python - warnings.warn( - 'Please check "file_format", the file format may be .py') - with tempfile.NamedTemporaryFile( - 'w', encoding='utf-8', suffix=file_format, - delete=False) as temp_file: - temp_file.write(cfg_str) - # on windows, previous implementation cause error - # see PR 1077 for details - cfg = Config.fromfile(temp_file.name) - os.remove(temp_file.name) - return cfg - - @staticmethod - def auto_argparser(description=None): - """Generate argparser from config file automatically (experimental)""" - partial_parser = ArgumentParser(description=description) - partial_parser.add_argument('config', help='config file path') - cfg_file = partial_parser.parse_known_args()[0].config - cfg = Config.fromfile(cfg_file) - parser = ArgumentParser(description=description) - parser.add_argument('config', help='config file path') - add_args(parser, cfg) - return parser, cfg - - def __init__(self, cfg_dict=None, cfg_text=None, filename=None): - if cfg_dict is None: - cfg_dict = dict() - elif not isinstance(cfg_dict, dict): - raise TypeError('cfg_dict must be a dict, but ' - f'got {type(cfg_dict)}') - for key in cfg_dict: - if key in RESERVED_KEYS: - raise KeyError(f'{key} is reserved for config file') - - if isinstance(filename, Path): - filename = str(filename) - - super().__setattr__('_cfg_dict', ConfigDict(cfg_dict)) - super().__setattr__('_filename', filename) - if cfg_text: - text = cfg_text - elif filename: - with open(filename) as f: - text = f.read() - else: - text = '' - super().__setattr__('_text', text) - - @property - def filename(self): - return self._filename - - @property - def text(self): - return self._text - - @property - def pretty_text(self): - - indent = 4 - - def _indent(s_, num_spaces): - s = s_.split('\n') - if len(s) == 1: - return s_ - first = s.pop(0) - s = [(num_spaces * ' ') + line for line in s] - s = '\n'.join(s) - s = first + '\n' + s - return s - - def _format_basic_types(k, v, use_mapping=False): - if isinstance(v, str): - v_str = f"'{v}'" - else: - v_str = str(v) - - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f'{k_str}: {v_str}' - else: - attr_str = f'{str(k)}={v_str}' - attr_str = _indent(attr_str, indent) - - return attr_str - - def _format_list(k, v, use_mapping=False): - # check if all items in the list are dict - if all(isinstance(_, dict) for _ in v): - v_str = '[\n' - v_str += 
'\n'.join( - f'dict({_indent(_format_dict(v_), indent)}),' - for v_ in v).rstrip(',') - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f'{k_str}: {v_str}' - else: - attr_str = f'{str(k)}={v_str}' - attr_str = _indent(attr_str, indent) + ']' - else: - attr_str = _format_basic_types(k, v, use_mapping) - return attr_str - - def _contain_invalid_identifier(dict_str): - contain_invalid_identifier = False - for key_name in dict_str: - contain_invalid_identifier |= \ - (not str(key_name).isidentifier()) - return contain_invalid_identifier - - def _format_dict(input_dict, outest_level=False): - r = '' - s = [] - - use_mapping = _contain_invalid_identifier(input_dict) - if use_mapping: - r += '{' - for idx, (k, v) in enumerate(input_dict.items()): - is_last = idx >= len(input_dict) - 1 - end = '' if outest_level or is_last else ',' - if isinstance(v, dict): - v_str = '\n' + _format_dict(v) - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f'{k_str}: dict({v_str}' - else: - attr_str = f'{str(k)}=dict({v_str}' - attr_str = _indent(attr_str, indent) + ')' + end - elif isinstance(v, list): - attr_str = _format_list(k, v, use_mapping) + end - else: - attr_str = _format_basic_types(k, v, use_mapping) + end - - s.append(attr_str) - r += '\n'.join(s) - if use_mapping: - r += '}' - return r - - cfg_dict = self._cfg_dict.to_dict() - text = _format_dict(cfg_dict, outest_level=True) - # copied from setup.cfg - yapf_style = dict( - based_on_style='pep8', - blank_line_before_nested_class_or_def=True, - split_before_expression_after_opening_paren=True) - text, _ = FormatCode(text, style_config=yapf_style, verify=True) - - return text - - def __repr__(self): - return f'Config (path: {self.filename}): {self._cfg_dict.__repr__()}' - - def __len__(self): - return len(self._cfg_dict) - - def __getattr__(self, name): - return getattr(self._cfg_dict, name) - - def __getitem__(self, name): - return self._cfg_dict.__getitem__(name) - - def __setattr__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setattr__(name, value) - - def __setitem__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setitem__(name, value) - - def __iter__(self): - return iter(self._cfg_dict) - - def __getstate__(self): - return (self._cfg_dict, self._filename, self._text) - - def __copy__(self): - cls = self.__class__ - other = cls.__new__(cls) - other.__dict__.update(self.__dict__) - - return other - - def __deepcopy__(self, memo): - cls = self.__class__ - other = cls.__new__(cls) - memo[id(self)] = other - - for key, value in self.__dict__.items(): - super(Config, other).__setattr__(key, copy.deepcopy(value, memo)) - - return other - - def __setstate__(self, state): - _cfg_dict, _filename, _text = state - super().__setattr__('_cfg_dict', _cfg_dict) - super().__setattr__('_filename', _filename) - super().__setattr__('_text', _text) - - def dump(self, file=None): - """Dumps config into a file or returns a string representation of the - config. - - If a file argument is given, saves the config to that file using the - format defined by the file argument extension. - - Otherwise, returns a string representing the config. The formatting of - this returned string is defined by the extension of `self.filename`. If - `self.filename` is not defined, returns a string representation of a - dict (lowercased and using ' for strings). 
- - Examples: - >>> cfg_dict = dict(item1=[1, 2], item2=dict(a=0), - ... item3=True, item4='test') - >>> cfg = Config(cfg_dict=cfg_dict) - >>> dump_file = "a.py" - >>> cfg.dump(dump_file) - - Args: - file (str, optional): Path of the output file where the config - will be dumped. Defaults to None. - """ - import mmcv - cfg_dict = super().__getattribute__('_cfg_dict').to_dict() - if file is None: - if self.filename is None or self.filename.endswith('.py'): - return self.pretty_text - else: - file_format = self.filename.split('.')[-1] - return mmcv.dump(cfg_dict, file_format=file_format) - elif file.endswith('.py'): - with open(file, 'w', encoding='utf-8') as f: - f.write(self.pretty_text) - else: - file_format = file.split('.')[-1] - return mmcv.dump(cfg_dict, file=file, file_format=file_format) - - def merge_from_dict(self, options, allow_list_keys=True): - """Merge list into cfg_dict. - - Merge the dict parsed by MultipleKVAction into this cfg. - - Examples: - >>> options = {'model.backbone.depth': 50, - ... 'model.backbone.with_cp':True} - >>> cfg = Config(dict(model=dict(backbone=dict(type='ResNet')))) - >>> cfg.merge_from_dict(options) - >>> cfg_dict = super(Config, self).__getattribute__('_cfg_dict') - >>> assert cfg_dict == dict( - ... model=dict(backbone=dict(depth=50, with_cp=True))) - - >>> # Merge list element - >>> cfg = Config(dict(pipeline=[ - ... dict(type='LoadImage'), dict(type='LoadAnnotations')])) - >>> options = dict(pipeline={'0': dict(type='SelfLoadImage')}) - >>> cfg.merge_from_dict(options, allow_list_keys=True) - >>> cfg_dict = super(Config, self).__getattribute__('_cfg_dict') - >>> assert cfg_dict == dict(pipeline=[ - ... dict(type='SelfLoadImage'), dict(type='LoadAnnotations')]) - - Args: - options (dict): dict of configs to merge from. - allow_list_keys (bool): If True, int string keys (e.g. '0', '1') - are allowed in ``options`` and will replace the element of the - corresponding index in the config if the config is a list. - Default: True. - """ - option_cfg_dict = {} - for full_key, v in options.items(): - d = option_cfg_dict - key_list = full_key.split('.') - for subkey in key_list[:-1]: - d.setdefault(subkey, ConfigDict()) - d = d[subkey] - subkey = key_list[-1] - d[subkey] = v - - cfg_dict = super().__getattribute__('_cfg_dict') - super().__setattr__( - '_cfg_dict', - Config._merge_a_into_b( - option_cfg_dict, cfg_dict, allow_list_keys=allow_list_keys)) - - -class DictAction(Action): - """ - argparse action to split an argument into KEY=VALUE form - on the first = and append to a dictionary. List options can - be passed as comma separated values, i.e 'KEY=V1,V2,V3', or with explicit - brackets, i.e. 'KEY=[V1,V2,V3]'. It also support nested brackets to build - list/tuple values. e.g. 'KEY=[(V1,V2),(V3,V4)]' - """ - - @staticmethod - def _parse_int_float_bool(val): - try: - return int(val) - except ValueError: - pass - try: - return float(val) - except ValueError: - pass - if val.lower() in ['true', 'false']: - return True if val.lower() == 'true' else False - if val == 'None': - return None - return val - - @staticmethod - def _parse_iterable(val): - """Parse iterable values in the string. - - All elements inside '()' or '[]' are treated as iterable values. - - Args: - val (str): Value string. - - Returns: - list | tuple: The expanded list or tuple from the string. 
- - Examples: - >>> DictAction._parse_iterable('1,2,3') - [1, 2, 3] - >>> DictAction._parse_iterable('[a, b, c]') - ['a', 'b', 'c'] - >>> DictAction._parse_iterable('[(1, 2, 3), [a, b], c]') - [(1, 2, 3), ['a', 'b'], 'c'] - """ - - def find_next_comma(string): - """Find the position of next comma in the string. - - If no ',' is found in the string, return the string length. All - chars inside '()' and '[]' are treated as one element and thus ',' - inside these brackets are ignored. - """ - assert (string.count('(') == string.count(')')) and ( - string.count('[') == string.count(']')), \ - f'Imbalanced brackets exist in {string}' - end = len(string) - for idx, char in enumerate(string): - pre = string[:idx] - # The string before this ',' is balanced - if ((char == ',') and (pre.count('(') == pre.count(')')) - and (pre.count('[') == pre.count(']'))): - end = idx - break - return end - - # Strip ' and " characters and replace whitespace. - val = val.strip('\'\"').replace(' ', '') - is_tuple = False - if val.startswith('(') and val.endswith(')'): - is_tuple = True - val = val[1:-1] - elif val.startswith('[') and val.endswith(']'): - val = val[1:-1] - elif ',' not in val: - # val is a single value - return DictAction._parse_int_float_bool(val) - - values = [] - while len(val) > 0: - comma_idx = find_next_comma(val) - element = DictAction._parse_iterable(val[:comma_idx]) - values.append(element) - val = val[comma_idx + 1:] - if is_tuple: - values = tuple(values) - return values - - def __call__(self, parser, namespace, values, option_string=None): - options = {} - for kv in values: - key, val = kv.split('=', maxsplit=1) - options[key] = self._parse_iterable(val) - setattr(namespace, self.dest, options) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/device_type.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/device_type.py deleted file mode 100644 index c66052c2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/device_type.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - - -def is_ipu_available() -> bool: - try: - import poptorch - return poptorch.ipuHardwareIsAvailable() - except ImportError: - return False - - -IS_IPU_AVAILABLE = is_ipu_available() - - -def is_mlu_available() -> bool: - try: - import torch - return (hasattr(torch, 'is_mlu_available') - and torch.is_mlu_available()) - except Exception: - return False - - -IS_MLU_AVAILABLE = is_mlu_available() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/env.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/env.py deleted file mode 100644 index 51133250..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/env.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""This file holding some environment constant for sharing by other files.""" - -import os.path as osp -import subprocess -import sys -from collections import defaultdict - -import cv2 -import torch - -import mmcv -from .parrots_wrapper import get_build_config - - -def collect_env(): - """Collect the information of the running environments. - - Returns: - dict: The environment information. The following fields are contained. - - - sys.platform: The variable of ``sys.platform``. - - Python: Python version. - - CUDA available: Bool, indicating if CUDA is available. - - GPU devices: Device type of each GPU. - - CUDA_HOME (optional): The env var ``CUDA_HOME``. - - NVCC (optional): NVCC version. 
- - GCC: GCC version, "n/a" if GCC is not installed. - - MSVC: Microsoft Virtual C++ Compiler version, Windows only. - - PyTorch: PyTorch version. - - PyTorch compiling details: The output of \ - ``torch.__config__.show()``. - - TorchVision (optional): TorchVision version. - - OpenCV: OpenCV version. - - MMCV: MMCV version. - - MMCV Compiler: The GCC version for compiling MMCV ops. - - MMCV CUDA Compiler: The CUDA version for compiling MMCV ops. - """ - env_info = {} - env_info['sys.platform'] = sys.platform - env_info['Python'] = sys.version.replace('\n', '') - - cuda_available = torch.cuda.is_available() - env_info['CUDA available'] = cuda_available - - if cuda_available: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - devices[torch.cuda.get_device_name(k)].append(str(k)) - for name, device_ids in devices.items(): - env_info['GPU ' + ','.join(device_ids)] = name - - from mmcv.utils.parrots_wrapper import _get_cuda_home - CUDA_HOME = _get_cuda_home() - env_info['CUDA_HOME'] = CUDA_HOME - - if CUDA_HOME is not None and osp.isdir(CUDA_HOME): - try: - nvcc = osp.join(CUDA_HOME, 'bin/nvcc') - nvcc = subprocess.check_output(f'"{nvcc}" -V', shell=True) - nvcc = nvcc.decode('utf-8').strip() - release = nvcc.rfind('Cuda compilation tools') - build = nvcc.rfind('Build ') - nvcc = nvcc[release:build].strip() - except subprocess.SubprocessError: - nvcc = 'Not Available' - env_info['NVCC'] = nvcc - - try: - # Check C++ Compiler. - # For Unix-like, sysconfig has 'CC' variable like 'gcc -pthread ...', - # indicating the compiler used, we use this to get the compiler name - import sysconfig - cc = sysconfig.get_config_var('CC') - if cc: - cc = osp.basename(cc.split()[0]) - cc_info = subprocess.check_output(f'{cc} --version', shell=True) - env_info['GCC'] = cc_info.decode('utf-8').partition( - '\n')[0].strip() - else: - # on Windows, cl.exe is not in PATH. We need to find the path. - # distutils.ccompiler.new_compiler() returns a msvccompiler - # object and after initialization, path to cl.exe is found. - import locale - import os - from distutils.ccompiler import new_compiler - ccompiler = new_compiler() - ccompiler.initialize() - cc = subprocess.check_output( - f'{ccompiler.cc}', stderr=subprocess.STDOUT, shell=True) - encoding = os.device_encoding( - sys.stdout.fileno()) or locale.getpreferredencoding() - env_info['MSVC'] = cc.decode(encoding).partition('\n')[0].strip() - env_info['GCC'] = 'n/a' - except subprocess.CalledProcessError: - env_info['GCC'] = 'n/a' - - env_info['PyTorch'] = torch.__version__ - env_info['PyTorch compiling details'] = get_build_config() - - try: - import torchvision - env_info['TorchVision'] = torchvision.__version__ - except ModuleNotFoundError: - pass - - env_info['OpenCV'] = cv2.__version__ - - env_info['MMCV'] = mmcv.__version__ - - try: - from mmcv.ops import get_compiler_version, get_compiling_cuda_version - except ModuleNotFoundError: - env_info['MMCV Compiler'] = 'n/a' - env_info['MMCV CUDA Compiler'] = 'n/a' - else: - env_info['MMCV Compiler'] = get_compiler_version() - env_info['MMCV CUDA Compiler'] = get_compiling_cuda_version() - - return env_info diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/ext_loader.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/ext_loader.py deleted file mode 100644 index a31e107d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/ext_loader.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import importlib -import os -import pkgutil -import warnings -from collections import namedtuple - -import torch - -if torch.__version__ != 'parrots': - - def load_ext(name, funcs): - ext = importlib.import_module('mmcv.' + name) - for fun in funcs: - assert hasattr(ext, fun), f'{fun} miss in module {name}' - return ext -else: - from parrots import extension - from parrots.base import ParrotsException - - has_return_value_ops = [ - 'nms', - 'softnms', - 'nms_match', - 'nms_rotated', - 'top_pool_forward', - 'top_pool_backward', - 'bottom_pool_forward', - 'bottom_pool_backward', - 'left_pool_forward', - 'left_pool_backward', - 'right_pool_forward', - 'right_pool_backward', - 'fused_bias_leakyrelu', - 'upfirdn2d', - 'ms_deform_attn_forward', - 'pixel_group', - 'contour_expand', - 'diff_iou_rotated_sort_vertices_forward', - ] - - def get_fake_func(name, e): - - def fake_func(*args, **kwargs): - warnings.warn(f'{name} is not supported in parrots now') - raise e - - return fake_func - - def load_ext(name, funcs): - ExtModule = namedtuple('ExtModule', funcs) - ext_list = [] - lib_root = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) - for fun in funcs: - try: - ext_fun = extension.load(fun, name, lib_dir=lib_root) - except ParrotsException as e: - if 'No element registered' not in e.message: - warnings.warn(e.message) - ext_fun = get_fake_func(fun, e) - ext_list.append(ext_fun) - else: - if fun in has_return_value_ops: - ext_list.append(ext_fun.op) - else: - ext_list.append(ext_fun.op_) - return ExtModule(*ext_list) - - -def check_ops_exist() -> bool: - ext_loader = pkgutil.find_loader('mmcv._ext') - return ext_loader is not None diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/hub.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/hub.py deleted file mode 100644 index a9cbbc95..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/hub.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# The 1.6 release of PyTorch switched torch.save to use a new zipfile-based -# file format. It will cause RuntimeError when a checkpoint was saved in -# torch >= 1.6.0 but loaded in torch < 1.7.0. -# More details at https://github.com/open-mmlab/mmpose/issues/904 -from .parrots_wrapper import TORCH_VERSION -from .path import mkdir_or_exist -from .version_utils import digit_version - -if TORCH_VERSION != 'parrots' and digit_version(TORCH_VERSION) < digit_version( - '1.7.0'): - # Modified from https://github.com/pytorch/pytorch/blob/master/torch/hub.py - import os - import sys - import warnings - import zipfile - from urllib.parse import urlparse - - import torch - from torch.hub import HASH_REGEX, _get_torch_home, download_url_to_file - - # Hub used to support automatically extracts from zipfile manually - # compressed by users. The legacy zip format expects only one file from - # torch.save() < 1.6 in the zip. We should remove this support since - # zipfile is now default zipfile format for torch.save(). - def _is_legacy_zip_format(filename): - if zipfile.is_zipfile(filename): - infolist = zipfile.ZipFile(filename).infolist() - return len(infolist) == 1 and not infolist[0].is_dir() - return False - - def _legacy_zip_load(filename, model_dir, map_location): - warnings.warn( - 'Falling back to the old format < 1.6. This support will' - ' be deprecated in favor of default zipfile format ' - 'introduced in 1.6. 
Please redo torch.save() to save it ' - 'in the new zipfile format.', DeprecationWarning) - # Note: extractall() defaults to overwrite file if exists. No need to - # clean up beforehand. We deliberately don't handle tarfile here - # since our legacy serialization format was in tar. - # E.g. resnet18-5c106cde.pth which is widely used. - with zipfile.ZipFile(filename) as f: - members = f.infolist() - if len(members) != 1: - raise RuntimeError( - 'Only one file(not dir) is allowed in the zipfile') - f.extractall(model_dir) - extraced_name = members[0].filename - extracted_file = os.path.join(model_dir, extraced_name) - return torch.load(extracted_file, map_location=map_location) - - def load_url(url, - model_dir=None, - map_location=None, - progress=True, - check_hash=False, - file_name=None): - r"""Loads the Torch serialized object at the given URL. - - If downloaded file is a zip file, it will be automatically decompressed - - If the object is already present in `model_dir`, it's deserialized and - returned. - The default value of ``model_dir`` is ``/checkpoints`` where - ``hub_dir`` is the directory returned by :func:`~torch.hub.get_dir`. - - Args: - url (str): URL of the object to download - model_dir (str, optional): directory in which to save the object - map_location (optional): a function or a dict specifying how to - remap storage locations (see torch.load) - progress (bool, optional): whether or not to display a progress bar - to stderr. Default: True - check_hash(bool, optional): If True, the filename part of the URL - should follow the naming convention ``filename-.ext`` - where ```` is the first eight or more digits of the - SHA256 hash of the contents of the file. The hash is used to - ensure unique names and to verify the contents of the file. - Default: False - file_name (str, optional): name for the downloaded file. Filename - from ``url`` will be used if not set. Default: None. - - Example: - >>> url = ('https://s3.amazonaws.com/pytorch/models/resnet18-5c106' - ... 'cde.pth') - >>> state_dict = torch.hub.load_state_dict_from_url(url) - """ - # Issue warning to move data if old env is set - if os.getenv('TORCH_MODEL_ZOO'): - warnings.warn( - 'TORCH_MODEL_ZOO is deprecated, please use env ' - 'TORCH_HOME instead', DeprecationWarning) - - if model_dir is None: - torch_home = _get_torch_home() - model_dir = os.path.join(torch_home, 'checkpoints') - - mkdir_or_exist(model_dir) - - parts = urlparse(url) - filename = os.path.basename(parts.path) - if file_name is not None: - filename = file_name - cached_file = os.path.join(model_dir, filename) - if not os.path.exists(cached_file): - sys.stderr.write('Downloading: "{}" to {}\n'.format( - url, cached_file)) - hash_prefix = None - if check_hash: - r = HASH_REGEX.search(filename) # r is Optional[Match[str]] - hash_prefix = r.group(1) if r else None - download_url_to_file( - url, cached_file, hash_prefix, progress=progress) - - if _is_legacy_zip_format(cached_file): - return _legacy_zip_load(cached_file, model_dir, map_location) - - try: - return torch.load(cached_file, map_location=map_location) - except RuntimeError as error: - if digit_version(TORCH_VERSION) < digit_version('1.5.0'): - warnings.warn( - f'If the error is the same as "{cached_file} is a zip ' - 'archive (did you mean to use torch.jit.load()?)", you can' - ' upgrade your torch to 1.5.0 or higher (current torch ' - f'version is {TORCH_VERSION}). 
The error was raised ' - ' because the checkpoint was saved in torch>=1.6.0 but ' - 'loaded in torch<1.5.') - raise error -else: - from torch.utils.model_zoo import load_url # type: ignore # noqa: F401 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/logging.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/logging.py deleted file mode 100644 index 5a90aac8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/logging.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.distributed as dist - -logger_initialized: dict = {} - - -def get_logger(name, log_file=None, log_level=logging.INFO, file_mode='w'): - """Initialize and get a logger by name. - - If the logger has not been initialized, this method will initialize the - logger by adding one or two handlers, otherwise the initialized logger will - be directly returned. During initialization, a StreamHandler will always be - added. If `log_file` is specified and the process rank is 0, a FileHandler - will also be added. - - Args: - name (str): Logger name. - log_file (str | None): The log filename. If specified, a FileHandler - will be added to the logger. - log_level (int): The logger level. Note that only the process of - rank 0 is affected, and other processes will set the level to - "Error" thus be silent most of the time. - file_mode (str): The file mode used in opening log file. - Defaults to 'w'. - - Returns: - logging.Logger: The expected logger. - """ - logger = logging.getLogger(name) - if name in logger_initialized: - return logger - # handle hierarchical names - # e.g., logger "a" is initialized, then logger "a.b" will skip the - # initialization since it is a child of "a". - for logger_name in logger_initialized: - if name.startswith(logger_name): - return logger - - # handle duplicate logs to the console - # Starting in 1.8.0, PyTorch DDP attaches a StreamHandler (NOTSET) - # to the root logger. As logger.propagate is True by default, this root - # level handler causes logging messages from rank>0 processes to - # unexpectedly show up on the console, creating much unwanted clutter. - # To fix this issue, we set the root logger's StreamHandler, if any, to log - # at the ERROR level. - for handler in logger.root.handlers: - if type(handler) is logging.StreamHandler: - handler.setLevel(logging.ERROR) - - stream_handler = logging.StreamHandler() - handlers = [stream_handler] - - if dist.is_available() and dist.is_initialized(): - rank = dist.get_rank() - else: - rank = 0 - - # only rank 0 will add a FileHandler - if rank == 0 and log_file is not None: - # Here, the default behaviour of the official logger is 'a'. Thus, we - # provide an interface to change the file mode to the default - # behaviour. - file_handler = logging.FileHandler(log_file, file_mode) - handlers.append(file_handler) - - formatter = logging.Formatter( - '%(asctime)s - %(name)s - %(levelname)s - %(message)s') - for handler in handlers: - handler.setFormatter(formatter) - handler.setLevel(log_level) - logger.addHandler(handler) - - if rank == 0: - logger.setLevel(log_level) - else: - logger.setLevel(logging.ERROR) - - logger_initialized[name] = True - - return logger - - -def print_log(msg, logger=None, level=logging.INFO): - """Print a log message. - - Args: - msg (str): The message to be logged. - logger (logging.Logger | str | None): The logger to be used. - Some special loggers are: - - - "silent": no message will be printed. 
- - other str: the logger obtained with `get_root_logger(logger)`. - - None: The `print()` method will be used to print log messages. - level (int): Logging level. Only available when `logger` is a Logger - object or "root". - """ - if logger is None: - print(msg) - elif isinstance(logger, logging.Logger): - logger.log(level, msg) - elif logger == 'silent': - pass - elif isinstance(logger, str): - _logger = get_logger(logger) - _logger.log(level, msg) - else: - raise TypeError( - 'logger should be either a logging.Logger object, str, ' - f'"silent" or None, but got {type(logger)}') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/misc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/misc.py deleted file mode 100644 index 7957ea89..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/misc.py +++ /dev/null @@ -1,377 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections.abc -import functools -import itertools -import subprocess -import warnings -from collections import abc -from importlib import import_module -from inspect import getfullargspec -from itertools import repeat - - -# From PyTorch internals -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def import_modules_from_strings(imports, allow_failed_imports=False): - """Import modules from the given list of strings. - - Args: - imports (list | str | None): The given module names to be imported. - allow_failed_imports (bool): If True, the failed imports will return - None. Otherwise, an ImportError is raise. Default: False. - - Returns: - list[module] | module | None: The imported modules. - - Examples: - >>> osp, sys = import_modules_from_strings( - ... ['os.path', 'sys']) - >>> import os.path as osp_ - >>> import sys as sys_ - >>> assert osp == osp_ - >>> assert sys == sys_ - """ - if not imports: - return - single_import = False - if isinstance(imports, str): - single_import = True - imports = [imports] - if not isinstance(imports, list): - raise TypeError( - f'custom_imports must be a list but got type {type(imports)}') - imported = [] - for imp in imports: - if not isinstance(imp, str): - raise TypeError( - f'{imp} is of type {type(imp)} and cannot be imported.') - try: - imported_tmp = import_module(imp) - except ImportError: - if allow_failed_imports: - warnings.warn(f'{imp} failed to import and is ignored.', - UserWarning) - imported_tmp = None - else: - raise ImportError - imported.append(imported_tmp) - if single_import: - imported = imported[0] - return imported - - -def iter_cast(inputs, dst_type, return_type=None): - """Cast elements of an iterable object into some type. - - Args: - inputs (Iterable): The input object. - dst_type (type): Destination type. - return_type (type, optional): If specified, the output object will be - converted to this type, otherwise an iterator. - - Returns: - iterator or specified type: The converted object. 
- """ - if not isinstance(inputs, abc.Iterable): - raise TypeError('inputs must be an iterable object') - if not isinstance(dst_type, type): - raise TypeError('"dst_type" must be a valid type') - - out_iterable = map(dst_type, inputs) - - if return_type is None: - return out_iterable - else: - return return_type(out_iterable) - - -def list_cast(inputs, dst_type): - """Cast elements of an iterable object into a list of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=list) - - -def tuple_cast(inputs, dst_type): - """Cast elements of an iterable object into a tuple of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=tuple) - - -def is_seq_of(seq, expected_type, seq_type=None): - """Check whether it is a sequence of some type. - - Args: - seq (Sequence): The sequence to be checked. - expected_type (type): Expected type of sequence items. - seq_type (type, optional): Expected sequence type. - - Returns: - bool: Whether the sequence is valid. - """ - if seq_type is None: - exp_seq_type = abc.Sequence - else: - assert isinstance(seq_type, type) - exp_seq_type = seq_type - if not isinstance(seq, exp_seq_type): - return False - for item in seq: - if not isinstance(item, expected_type): - return False - return True - - -def is_list_of(seq, expected_type): - """Check whether it is a list of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=list) - - -def is_tuple_of(seq, expected_type): - """Check whether it is a tuple of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=tuple) - - -def slice_list(in_list, lens): - """Slice a list into several sub lists by a list of given length. - - Args: - in_list (list): The list to be sliced. - lens(int or list): The expected length of each out list. - - Returns: - list: A list of sliced list. - """ - if isinstance(lens, int): - assert len(in_list) % lens == 0 - lens = [lens] * int(len(in_list) / lens) - if not isinstance(lens, list): - raise TypeError('"indices" must be an integer or a list of integers') - elif sum(lens) != len(in_list): - raise ValueError('sum of lens and list length does not ' - f'match: {sum(lens)} != {len(in_list)}') - out_list = [] - idx = 0 - for i in range(len(lens)): - out_list.append(in_list[idx:idx + lens[i]]) - idx += lens[i] - return out_list - - -def concat_list(in_list): - """Concatenate a list of list into a single list. - - Args: - in_list (list): The list of list to be merged. - - Returns: - list: The concatenated flat list. - """ - return list(itertools.chain(*in_list)) - - -def check_prerequisites( - prerequisites, - checker, - msg_tmpl='Prerequisites "{}" are required in method "{}" but not ' - 'found, please install them first.'): # yapf: disable - """A decorator factory to check if prerequisites are satisfied. - - Args: - prerequisites (str of list[str]): Prerequisites to be checked. - checker (callable): The checker method that returns True if a - prerequisite is meet, False otherwise. - msg_tmpl (str): The message template with two variables. - - Returns: - decorator: A specific decorator. 
- """ - - def wrap(func): - - @functools.wraps(func) - def wrapped_func(*args, **kwargs): - requirements = [prerequisites] if isinstance( - prerequisites, str) else prerequisites - missing = [] - for item in requirements: - if not checker(item): - missing.append(item) - if missing: - print(msg_tmpl.format(', '.join(missing), func.__name__)) - raise RuntimeError('Prerequisites not meet.') - else: - return func(*args, **kwargs) - - return wrapped_func - - return wrap - - -def _check_py_package(package): - try: - import_module(package) - except ImportError: - return False - else: - return True - - -def _check_executable(cmd): - if subprocess.call(f'which {cmd}', shell=True) != 0: - return False - else: - return True - - -def requires_package(prerequisites): - """A decorator to check if some python packages are installed. - - Example: - >>> @requires_package('numpy') - >>> func(arg1, args): - >>> return numpy.zeros(1) - array([0.]) - >>> @requires_package(['numpy', 'non_package']) - >>> func(arg1, args): - >>> return numpy.zeros(1) - ImportError - """ - return check_prerequisites(prerequisites, checker=_check_py_package) - - -def requires_executable(prerequisites): - """A decorator to check if some executable files are installed. - - Example: - >>> @requires_executable('ffmpeg') - >>> func(arg1, args): - >>> print(1) - 1 - """ - return check_prerequisites(prerequisites, checker=_check_executable) - - -def deprecated_api_warning(name_dict, cls_name=None): - """A decorator to check if some arguments are deprecate and try to replace - deprecate src_arg_name to dst_arg_name. - - Args: - name_dict(dict): - key (str): Deprecate argument names. - val (str): Expected argument names. - - Returns: - func: New function. - """ - - def api_warning_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get name of the function - func_name = old_func.__name__ - if cls_name is not None: - func_name = f'{cls_name}.{func_name}' - if args: - arg_names = args_info.args[:len(args)] - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in arg_names: - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead', DeprecationWarning) - arg_names[arg_names.index(src_arg_name)] = dst_arg_name - if kwargs: - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in kwargs: - - assert dst_arg_name not in kwargs, ( - f'The expected behavior is to replace ' - f'the deprecated key `{src_arg_name}` to ' - f'new key `{dst_arg_name}`, but got them ' - f'in the arguments at the same time, which ' - f'is confusing. `{src_arg_name} will be ' - f'deprecated in the future, please ' - f'use `{dst_arg_name}` instead.') - - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead', DeprecationWarning) - kwargs[dst_arg_name] = kwargs.pop(src_arg_name) - - # apply converted arguments to the decorated method - output = old_func(*args, **kwargs) - return output - - return new_func - - return api_warning_wrapper - - -def is_method_overridden(method, base_class, derived_class): - """Check if a method of base class is overridden in derived class. - - Args: - method (str): the method name to check. - base_class (type): the class of the base class. - derived_class (type | Any): the class or instance of the derived class. 
- """ - assert isinstance(base_class, type), \ - "base_class doesn't accept instance, Please pass class instead." - - if not isinstance(derived_class, type): - derived_class = derived_class.__class__ - - base_method = getattr(base_class, method) - derived_method = getattr(derived_class, method) - return derived_method != base_method - - -def has_method(obj: object, method: str) -> bool: - """Check whether the object has a method. - - Args: - method (str): The method name to check. - obj (object): The object to check. - - Returns: - bool: True if the object has the method else False. - """ - return hasattr(obj, method) and callable(getattr(obj, method)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/parrots_jit.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/parrots_jit.py deleted file mode 100644 index 61873f6d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/parrots_jit.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os - -from .parrots_wrapper import TORCH_VERSION - -parrots_jit_option = os.getenv('PARROTS_JIT_OPTION') - -if TORCH_VERSION == 'parrots' and parrots_jit_option == 'ON': - from parrots.jit import pat as jit -else: - - def jit(func=None, - check_input=None, - full_shape=True, - derivate=False, - coderize=False, - optimize=False): - - def wrapper(func): - - def wrapper_inner(*args, **kargs): - return func(*args, **kargs) - - return wrapper_inner - - if func is None: - return wrapper - else: - return func - - -if TORCH_VERSION == 'parrots': - from parrots.utils.tester import skip_no_elena -else: - - def skip_no_elena(func): - - def wrapper(*args, **kargs): - return func(*args, **kargs) - - return wrapper diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/parrots_wrapper.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/parrots_wrapper.py deleted file mode 100644 index cf2c7e5c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/parrots_wrapper.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from functools import partial - -import torch - -TORCH_VERSION = torch.__version__ - - -def is_cuda_available() -> bool: - return torch.cuda.is_available() - - -IS_CUDA_AVAILABLE = is_cuda_available() - - -def is_rocm_pytorch() -> bool: - is_rocm = False - if TORCH_VERSION != 'parrots': - try: - from torch.utils.cpp_extension import ROCM_HOME - is_rocm = True if ((torch.version.hip is not None) and - (ROCM_HOME is not None)) else False - except ImportError: - pass - return is_rocm - - -def _get_cuda_home(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import CUDA_HOME - else: - if is_rocm_pytorch(): - from torch.utils.cpp_extension import ROCM_HOME - CUDA_HOME = ROCM_HOME - else: - from torch.utils.cpp_extension import CUDA_HOME - return CUDA_HOME - - -def get_build_config(): - if TORCH_VERSION == 'parrots': - from parrots.config import get_build_info - return get_build_info() - else: - return torch.__config__.show() - - -def _get_conv(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.conv import _ConvNd, _ConvTransposeMixin - else: - from torch.nn.modules.conv import _ConvNd, _ConvTransposeMixin - return _ConvNd, _ConvTransposeMixin - - -def _get_dataloader(): - if TORCH_VERSION == 'parrots': - from torch.utils.data import DataLoader, PoolDataLoader - else: - from torch.utils.data import DataLoader - PoolDataLoader = DataLoader - return DataLoader, PoolDataLoader - - -def _get_extension(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import BuildExtension, Extension - CppExtension = partial(Extension, cuda=False) - CUDAExtension = partial(Extension, cuda=True) - else: - from torch.utils.cpp_extension import (BuildExtension, CppExtension, - CUDAExtension) - return BuildExtension, CppExtension, CUDAExtension - - -def _get_pool(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.pool import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - else: - from torch.nn.modules.pooling import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - return _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd - - -def _get_norm(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.batchnorm import _BatchNorm, _InstanceNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm2d - else: - from torch.nn.modules.batchnorm import _BatchNorm - from torch.nn.modules.instancenorm import _InstanceNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm - return _BatchNorm, _InstanceNorm, SyncBatchNorm_ - - -_ConvNd, _ConvTransposeMixin = _get_conv() -DataLoader, PoolDataLoader = _get_dataloader() -BuildExtension, CppExtension, CUDAExtension = _get_extension() -_BatchNorm, _InstanceNorm, SyncBatchNorm_ = _get_norm() -_AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd = _get_pool() - - -class SyncBatchNorm(SyncBatchNorm_): # type: ignore - - def _check_input_dim(self, input): - if TORCH_VERSION == 'parrots': - if input.dim() < 2: - raise ValueError( - f'expected at least 2D input (got {input.dim()}D input)') - else: - super()._check_input_dim(input) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/path.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/path.py deleted file mode 100644 index 56808183..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/path.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os -import os.path as osp -from pathlib import Path - -from .misc import is_str - - -def is_filepath(x): - return is_str(x) or isinstance(x, Path) - - -def fopen(filepath, *args, **kwargs): - if is_str(filepath): - return open(filepath, *args, **kwargs) - elif isinstance(filepath, Path): - return filepath.open(*args, **kwargs) - raise ValueError('`filepath` should be a string or a Path') - - -def check_file_exist(filename, msg_tmpl='file "{}" does not exist'): - if not osp.isfile(filename): - raise FileNotFoundError(msg_tmpl.format(filename)) - - -def mkdir_or_exist(dir_name, mode=0o777): - if dir_name == '': - return - dir_name = osp.expanduser(dir_name) - os.makedirs(dir_name, mode=mode, exist_ok=True) - - -def symlink(src, dst, overwrite=True, **kwargs): - if os.path.lexists(dst) and overwrite: - os.remove(dst) - os.symlink(src, dst, **kwargs) - - -def scandir(dir_path, suffix=None, recursive=False, case_sensitive=True): - """Scan a directory to find the interested files. - - Args: - dir_path (str | :obj:`Path`): Path of the directory. - suffix (str | tuple(str), optional): File suffix that we are - interested in. Default: None. - recursive (bool, optional): If set to True, recursively scan the - directory. Default: False. - case_sensitive (bool, optional) : If set to False, ignore the case of - suffix. Default: True. - - Returns: - A generator for all the interested files with relative paths. - """ - if isinstance(dir_path, (str, Path)): - dir_path = str(dir_path) - else: - raise TypeError('"dir_path" must be a string or Path object') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('"suffix" must be a string or tuple of strings') - - if suffix is not None and not case_sensitive: - suffix = suffix.lower() if isinstance(suffix, str) else tuple( - item.lower() for item in suffix) - - root = dir_path - - def _scandir(dir_path, suffix, recursive, case_sensitive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - _rel_path = rel_path if case_sensitive else rel_path.lower() - if suffix is None or _rel_path.endswith(suffix): - yield rel_path - elif recursive and os.path.isdir(entry.path): - # scan recursively if entry.path is a directory - yield from _scandir(entry.path, suffix, recursive, - case_sensitive) - - return _scandir(dir_path, suffix, recursive, case_sensitive) - - -def find_vcs_root(path, markers=('.git', )): - """Finds the root directory (including itself) of specified markers. - - Args: - path (str): Path of directory or file. - markers (list[str], optional): List of file or directory names. - - Returns: - The directory contained one of the markers or None if not found. - """ - if osp.isfile(path): - path = osp.dirname(path) - - prev, cur = None, osp.abspath(osp.expanduser(path)) - while cur != prev: - if any(osp.exists(osp.join(cur, marker)) for marker in markers): - return cur - prev, cur = cur, osp.split(cur)[0] - return None diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/progressbar.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/progressbar.py deleted file mode 100644 index 0062f670..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/progressbar.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import sys -from collections.abc import Iterable -from multiprocessing import Pool -from shutil import get_terminal_size - -from .timer import Timer - - -class ProgressBar: - """A progress bar which can print the progress.""" - - def __init__(self, task_num=0, bar_width=50, start=True, file=sys.stdout): - self.task_num = task_num - self.bar_width = bar_width - self.completed = 0 - self.file = file - if start: - self.start() - - @property - def terminal_width(self): - width, _ = get_terminal_size() - return width - - def start(self): - if self.task_num > 0: - self.file.write(f'[{" " * self.bar_width}] 0/{self.task_num}, ' - 'elapsed: 0s, ETA:') - else: - self.file.write('completed: 0, elapsed: 0s') - self.file.flush() - self.timer = Timer() - - def update(self, num_tasks=1): - assert num_tasks > 0 - self.completed += num_tasks - elapsed = self.timer.since_start() - if elapsed > 0: - fps = self.completed / elapsed - else: - fps = float('inf') - if self.task_num > 0: - percentage = self.completed / float(self.task_num) - eta = int(elapsed * (1 - percentage) / percentage + 0.5) - msg = f'\r[{{}}] {self.completed}/{self.task_num}, ' \ - f'{fps:.1f} task/s, elapsed: {int(elapsed + 0.5)}s, ' \ - f'ETA: {eta:5}s' - - bar_width = min(self.bar_width, - int(self.terminal_width - len(msg)) + 2, - int(self.terminal_width * 0.6)) - bar_width = max(2, bar_width) - mark_width = int(bar_width * percentage) - bar_chars = '>' * mark_width + ' ' * (bar_width - mark_width) - self.file.write(msg.format(bar_chars)) - else: - self.file.write( - f'completed: {self.completed}, elapsed: {int(elapsed + 0.5)}s,' - f' {fps:.1f} tasks/s') - self.file.flush() - - -def track_progress(func, tasks, bar_width=50, file=sys.stdout, **kwargs): - """Track the progress of tasks execution with a progress bar. - - Tasks are done with a simple for-loop. - - Args: - func (callable): The function to be applied to each task. - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - bar_width (int): Width of progress bar. - - Returns: - list: The task results. - """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - prog_bar = ProgressBar(task_num, bar_width, file=file) - results = [] - for task in tasks: - results.append(func(task, **kwargs)) - prog_bar.update() - prog_bar.file.write('\n') - return results - - -def init_pool(process_num, initializer=None, initargs=None): - if initializer is None: - return Pool(process_num) - elif initargs is None: - return Pool(process_num, initializer) - else: - if not isinstance(initargs, tuple): - raise TypeError('"initargs" must be a tuple') - return Pool(process_num, initializer, initargs) - - -def track_parallel_progress(func, - tasks, - nproc, - initializer=None, - initargs=None, - bar_width=50, - chunksize=1, - skip_first=False, - keep_order=True, - file=sys.stdout): - """Track the progress of parallel task execution with a progress bar. - - The built-in :mod:`multiprocessing` module is used for process pools and - tasks are done with :func:`Pool.map` or :func:`Pool.imap_unordered`. - - Args: - func (callable): The function to be applied to each task. - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - nproc (int): Process (worker) number. 
- initializer (None or callable): Refer to :class:`multiprocessing.Pool` - for details. - initargs (None or tuple): Refer to :class:`multiprocessing.Pool` for - details. - chunksize (int): Refer to :class:`multiprocessing.Pool` for details. - bar_width (int): Width of progress bar. - skip_first (bool): Whether to skip the first sample for each worker - when estimating fps, since the initialization step may takes - longer. - keep_order (bool): If True, :func:`Pool.imap` is used, otherwise - :func:`Pool.imap_unordered` is used. - - Returns: - list: The task results. - """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - pool = init_pool(nproc, initializer, initargs) - start = not skip_first - task_num -= nproc * chunksize * int(skip_first) - prog_bar = ProgressBar(task_num, bar_width, start, file=file) - results = [] - if keep_order: - gen = pool.imap(func, tasks, chunksize) - else: - gen = pool.imap_unordered(func, tasks, chunksize) - for result in gen: - results.append(result) - if skip_first: - if len(results) < nproc * chunksize: - continue - elif len(results) == nproc * chunksize: - prog_bar.start() - continue - prog_bar.update() - prog_bar.file.write('\n') - pool.close() - pool.join() - return results - - -def track_iter_progress(tasks, bar_width=50, file=sys.stdout): - """Track the progress of tasks iteration or enumeration with a progress - bar. - - Tasks are yielded with a simple for-loop. - - Args: - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - bar_width (int): Width of progress bar. - - Yields: - list: The task results. - """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - prog_bar = ProgressBar(task_num, bar_width, file=file) - for task in tasks: - yield task - prog_bar.update() - prog_bar.file.write('\n') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/registry.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/registry.py deleted file mode 100644 index a7db6bd4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/registry.py +++ /dev/null @@ -1,340 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import warnings -from functools import partial -from typing import Any, Dict, Optional - -from .misc import deprecated_api_warning, is_seq_of - - -def build_from_cfg(cfg: Dict, - registry: 'Registry', - default_args: Optional[Dict] = None) -> Any: - """Build a module from config dict when it is a class configuration, or - call a function from config dict when it is a function configuration. - - Example: - >>> MODELS = Registry('models') - >>> @MODELS.register_module() - >>> class ResNet: - >>> pass - >>> resnet = build_from_cfg(dict(type='Resnet'), MODELS) - >>> # Returns an instantiated object - >>> @MODELS.register_module() - >>> def resnet50(): - >>> pass - >>> resnet = build_from_cfg(dict(type='resnet50'), MODELS) - >>> # Return a result of the calling function - - Args: - cfg (dict): Config dict. 
It should at least contain the key "type". - registry (:obj:`Registry`): The registry to search the type from. - default_args (dict, optional): Default initialization arguments. - - Returns: - object: The constructed object. - """ - if not isinstance(cfg, dict): - raise TypeError(f'cfg must be a dict, but got {type(cfg)}') - if 'type' not in cfg: - if default_args is None or 'type' not in default_args: - raise KeyError( - '`cfg` or `default_args` must contain the key "type", ' - f'but got {cfg}\n{default_args}') - if not isinstance(registry, Registry): - raise TypeError('registry must be an mmcv.Registry object, ' - f'but got {type(registry)}') - if not (isinstance(default_args, dict) or default_args is None): - raise TypeError('default_args must be a dict or None, ' - f'but got {type(default_args)}') - - args = cfg.copy() - - if default_args is not None: - for name, value in default_args.items(): - args.setdefault(name, value) - - obj_type = args.pop('type') - if isinstance(obj_type, str): - obj_cls = registry.get(obj_type) - if obj_cls is None: - raise KeyError( - f'{obj_type} is not in the {registry.name} registry') - elif inspect.isclass(obj_type) or inspect.isfunction(obj_type): - obj_cls = obj_type - else: - raise TypeError( - f'type must be a str or valid type, but got {type(obj_type)}') - try: - return obj_cls(**args) - except Exception as e: - # Normal TypeError does not print class name. - raise type(e)(f'{obj_cls.__name__}: {e}') - - -class Registry: - """A registry to map strings to classes or functions. - - Registered object could be built from registry. Meanwhile, registered - functions could be called from registry. - - Example: - >>> MODELS = Registry('models') - >>> @MODELS.register_module() - >>> class ResNet: - >>> pass - >>> resnet = MODELS.build(dict(type='ResNet')) - >>> @MODELS.register_module() - >>> def resnet50(): - >>> pass - >>> resnet = MODELS.build(dict(type='resnet50')) - - Please refer to - https://mmcv.readthedocs.io/en/latest/understand_mmcv/registry.html for - advanced usage. - - Args: - name (str): Registry name. - build_func(func, optional): Build function to construct instance from - Registry, func:`build_from_cfg` is used if neither ``parent`` or - ``build_func`` is specified. If ``parent`` is specified and - ``build_func`` is not given, ``build_func`` will be inherited - from ``parent``. Default: None. - parent (Registry, optional): Parent registry. The class registered in - children registry could be built from parent. Default: None. - scope (str, optional): The scope of registry. It is the key to search - for children registry. If not specified, scope will be the name of - the package where class is defined, e.g. mmdet, mmcls, mmseg. - Default: None. - """ - - def __init__(self, name, build_func=None, parent=None, scope=None): - self._name = name - self._module_dict = dict() - self._children = dict() - self._scope = self.infer_scope() if scope is None else scope - - # self.build_func will be set with the following priority: - # 1. build_func - # 2. parent.build_func - # 3. 
build_from_cfg - if build_func is None: - if parent is not None: - self.build_func = parent.build_func - else: - self.build_func = build_from_cfg - else: - self.build_func = build_func - if parent is not None: - assert isinstance(parent, Registry) - parent._add_children(self) - self.parent = parent - else: - self.parent = None - - def __len__(self): - return len(self._module_dict) - - def __contains__(self, key): - return self.get(key) is not None - - def __repr__(self): - format_str = self.__class__.__name__ + \ - f'(name={self._name}, ' \ - f'items={self._module_dict})' - return format_str - - @staticmethod - def infer_scope(): - """Infer the scope of registry. - - The name of the package where registry is defined will be returned. - - Example: - >>> # in mmdet/models/backbone/resnet.py - >>> MODELS = Registry('models') - >>> @MODELS.register_module() - >>> class ResNet: - >>> pass - The scope of ``ResNet`` will be ``mmdet``. - - Returns: - str: The inferred scope name. - """ - # We access the caller using inspect.currentframe() instead of - # inspect.stack() for performance reasons. See details in PR #1844 - frame = inspect.currentframe() - # get the frame where `infer_scope()` is called - infer_scope_caller = frame.f_back.f_back - filename = inspect.getmodule(infer_scope_caller).__name__ - split_filename = filename.split('.') - return split_filename[0] - - @staticmethod - def split_scope_key(key): - """Split scope and key. - - The first scope will be split from key. - - Examples: - >>> Registry.split_scope_key('mmdet.ResNet') - 'mmdet', 'ResNet' - >>> Registry.split_scope_key('ResNet') - None, 'ResNet' - - Return: - tuple[str | None, str]: The former element is the first scope of - the key, which can be ``None``. The latter is the remaining key. - """ - split_index = key.find('.') - if split_index != -1: - return key[:split_index], key[split_index + 1:] - else: - return None, key - - @property - def name(self): - return self._name - - @property - def scope(self): - return self._scope - - @property - def module_dict(self): - return self._module_dict - - @property - def children(self): - return self._children - - def get(self, key): - """Get the registry record. - - Args: - key (str): The class name in string format. - - Returns: - class: The corresponding class. - """ - scope, real_key = self.split_scope_key(key) - if scope is None or scope == self._scope: - # get from self - if real_key in self._module_dict: - return self._module_dict[real_key] - else: - # get from self._children - if scope in self._children: - return self._children[scope].get(real_key) - else: - # goto root - parent = self.parent - while parent.parent is not None: - parent = parent.parent - return parent.get(key) - - def build(self, *args, **kwargs): - return self.build_func(*args, **kwargs, registry=self) - - def _add_children(self, registry): - """Add children for a registry. - - The ``registry`` will be added as children based on its scope. - The parent registry could build objects from children registry. 
- - Example: - >>> models = Registry('models') - >>> mmdet_models = Registry('models', parent=models) - >>> @mmdet_models.register_module() - >>> class ResNet: - >>> pass - >>> resnet = models.build(dict(type='mmdet.ResNet')) - """ - - assert isinstance(registry, Registry) - assert registry.scope is not None - assert registry.scope not in self.children, \ - f'scope {registry.scope} exists in {self.name} registry' - self.children[registry.scope] = registry - - @deprecated_api_warning(name_dict=dict(module_class='module')) - def _register_module(self, module, module_name=None, force=False): - if not inspect.isclass(module) and not inspect.isfunction(module): - raise TypeError('module must be a class or a function, ' - f'but got {type(module)}') - - if module_name is None: - module_name = module.__name__ - if isinstance(module_name, str): - module_name = [module_name] - for name in module_name: - if not force and name in self._module_dict: - raise KeyError(f'{name} is already registered ' - f'in {self.name}') - self._module_dict[name] = module - - def deprecated_register_module(self, cls=None, force=False): - warnings.warn( - 'The old API of register_module(module, force=False) ' - 'is deprecated and will be removed, please use the new API ' - 'register_module(name=None, force=False, module=None) instead.', - DeprecationWarning) - if cls is None: - return partial(self.deprecated_register_module, force=force) - self._register_module(cls, force=force) - return cls - - def register_module(self, name=None, force=False, module=None): - """Register a module. - - A record will be added to `self._module_dict`, whose key is the class - name or the specified name, and value is the class itself. - It can be used as a decorator or a normal function. - - Example: - >>> backbones = Registry('backbone') - >>> @backbones.register_module() - >>> class ResNet: - >>> pass - - >>> backbones = Registry('backbone') - >>> @backbones.register_module(name='mnet') - >>> class MobileNet: - >>> pass - - >>> backbones = Registry('backbone') - >>> class ResNet: - >>> pass - >>> backbones.register_module(ResNet) - - Args: - name (str | None): The module name to be registered. If not - specified, the class name will be used. - force (bool, optional): Whether to override an existing class with - the same name. Default: False. - module (type): Module class or function to be registered. - """ - if not isinstance(force, bool): - raise TypeError(f'force must be a boolean, but got {type(force)}') - # NOTE: This is a walkaround to be compatible with the old api, - # while it may introduce unexpected bugs. 
- if isinstance(name, type): - return self.deprecated_register_module(name, force=force) - - # raise the error ahead of time - if not (name is None or isinstance(name, str) or is_seq_of(name, str)): - raise TypeError( - 'name must be either of None, an instance of str or a sequence' - f' of str, but got {type(name)}') - - # use it as a normal method: x.register_module(module=SomeClass) - if module is not None: - self._register_module(module=module, module_name=name, force=force) - return module - - # use it as a decorator: @x.register_module() - def _register(module): - self._register_module(module=module, module_name=name, force=force) - return module - - return _register diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/seed.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/seed.py deleted file mode 100644 index 003f9236..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/seed.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import random - -import numpy as np -import torch - - -def worker_init_fn(worker_id: int, num_workers: int, rank: int, seed: int): - """Function to initialize each worker. - - The seed of each worker equals to - ``num_worker * rank + worker_id + user_seed``. - - Args: - worker_id (int): Id for each worker. - num_workers (int): Number of workers. - rank (int): Rank in distributed training. - seed (int): Random seed. - """ - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) - torch.manual_seed(worker_seed) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/testing.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/testing.py deleted file mode 100644 index 7b64e8fa..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/testing.py +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright (c) Open-MMLab. -import sys -from collections.abc import Iterable -from runpy import run_path -from shlex import split -from typing import Any, Dict, List -from unittest.mock import patch - - -def check_python_script(cmd): - """Run the python cmd script with `__main__`. The difference between - `os.system` is that, this function exectues code in the current process, so - that it can be tracked by coverage tools. Currently it supports two forms: - - - ./tests/data/scripts/hello.py zz - - python tests/data/scripts/hello.py zz - """ - args = split(cmd) - if args[0] == 'python': - args = args[1:] - with patch.object(sys, 'argv', args): - run_path(args[0], run_name='__main__') - - -def _any(judge_result): - """Since built-in ``any`` works only when the element of iterable is not - iterable, implement the function.""" - if not isinstance(judge_result, Iterable): - return judge_result - - try: - for element in judge_result: - if _any(element): - return True - except TypeError: - # Maybe encounter the case: torch.tensor(True) | torch.tensor(False) - if judge_result: - return True - return False - - -def assert_dict_contains_subset(dict_obj: Dict[Any, Any], - expected_subset: Dict[Any, Any]) -> bool: - """Check if the dict_obj contains the expected_subset. - - Args: - dict_obj (Dict[Any, Any]): Dict object to be checked. - expected_subset (Dict[Any, Any]): Subset expected to be contained in - dict_obj. - - Returns: - bool: Whether the dict_obj contains the expected_subset. 
- """ - - for key, value in expected_subset.items(): - if key not in dict_obj.keys() or _any(dict_obj[key] != value): - return False - return True - - -def assert_attrs_equal(obj: Any, expected_attrs: Dict[str, Any]) -> bool: - """Check if attribute of class object is correct. - - Args: - obj (object): Class object to be checked. - expected_attrs (Dict[str, Any]): Dict of the expected attrs. - - Returns: - bool: Whether the attribute of class object is correct. - """ - for attr, value in expected_attrs.items(): - if not hasattr(obj, attr) or _any(getattr(obj, attr) != value): - return False - return True - - -def assert_dict_has_keys(obj: Dict[str, Any], - expected_keys: List[str]) -> bool: - """Check if the obj has all the expected_keys. - - Args: - obj (Dict[str, Any]): Object to be checked. - expected_keys (List[str]): Keys expected to contained in the keys of - the obj. - - Returns: - bool: Whether the obj has the expected keys. - """ - return set(expected_keys).issubset(set(obj.keys())) - - -def assert_keys_equal(result_keys: List[str], target_keys: List[str]) -> bool: - """Check if target_keys is equal to result_keys. - - Args: - result_keys (List[str]): Result keys to be checked. - target_keys (List[str]): Target keys to be checked. - - Returns: - bool: Whether target_keys is equal to result_keys. - """ - return set(result_keys) == set(target_keys) - - -def assert_is_norm_layer(module) -> bool: - """Check if the module is a norm layer. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: Whether the module is a norm layer. - """ - from torch.nn import GroupNorm, LayerNorm - - from .parrots_wrapper import _BatchNorm, _InstanceNorm - norm_layer_candidates = (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm) - return isinstance(module, norm_layer_candidates) - - -def assert_params_all_zeros(module) -> bool: - """Check if the parameters of the module is all zeros. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: Whether the parameters of the module is all zeros. - """ - weight_data = module.weight.data - is_weight_zero = weight_data.allclose( - weight_data.new_zeros(weight_data.size())) - - if hasattr(module, 'bias') and module.bias is not None: - bias_data = module.bias.data - is_bias_zero = bias_data.allclose( - bias_data.new_zeros(bias_data.size())) - else: - is_bias_zero = True - - return is_weight_zero and is_bias_zero diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/timer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/timer.py deleted file mode 100644 index 087a969c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/timer.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from time import time - - -class TimerError(Exception): - - def __init__(self, message): - self.message = message - super().__init__(message) - - -class Timer: - """A flexible Timer class. 
- - Examples: - >>> import time - >>> import mmcv - >>> with mmcv.Timer(): - >>> # simulate a code block that will run for 1s - >>> time.sleep(1) - 1.000 - >>> with mmcv.Timer(print_tmpl='it takes {:.1f} seconds'): - >>> # simulate a code block that will run for 1s - >>> time.sleep(1) - it takes 1.0 seconds - >>> timer = mmcv.Timer() - >>> time.sleep(0.5) - >>> print(timer.since_start()) - 0.500 - >>> time.sleep(0.5) - >>> print(timer.since_last_check()) - 0.500 - >>> print(timer.since_start()) - 1.000 - """ - - def __init__(self, start=True, print_tmpl=None): - self._is_running = False - self.print_tmpl = print_tmpl if print_tmpl else '{:.3f}' - if start: - self.start() - - @property - def is_running(self): - """bool: indicate whether the timer is running""" - return self._is_running - - def __enter__(self): - self.start() - return self - - def __exit__(self, type, value, traceback): - print(self.print_tmpl.format(self.since_last_check())) - self._is_running = False - - def start(self): - """Start the timer.""" - if not self._is_running: - self._t_start = time() - self._is_running = True - self._t_last = time() - - def since_start(self): - """Total time since the timer is started. - - Returns: - float: Time in seconds. - """ - if not self._is_running: - raise TimerError('timer is not running') - self._t_last = time() - return self._t_last - self._t_start - - def since_last_check(self): - """Time since the last checking. - - Either :func:`since_start` or :func:`since_last_check` is a checking - operation. - - Returns: - float: Time in seconds. - """ - if not self._is_running: - raise TimerError('timer is not running') - dur = time() - self._t_last - self._t_last = time() - return dur - - -_g_timers = {} # global timers - - -def check_time(timer_id): - """Add check points in a single line. - - This method is suitable for running a task on a list of items. A timer will - be registered when the method is called for the first time. - - Examples: - >>> import time - >>> import mmcv - >>> for i in range(1, 6): - >>> # simulate a code block - >>> time.sleep(i) - >>> mmcv.check_time('task1') - 2.000 - 3.000 - 4.000 - 5.000 - - Args: - str: Timer identifier. - """ - if timer_id not in _g_timers: - _g_timers[timer_id] = Timer() - return 0 - else: - return _g_timers[timer_id].since_last_check() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/trace.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/trace.py deleted file mode 100644 index 45423bd0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/trace.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch - -from mmcv.utils import digit_version - - -def is_jit_tracing() -> bool: - if (torch.__version__ != 'parrots' - and digit_version(torch.__version__) >= digit_version('1.6.0')): - on_trace = torch.jit.is_tracing() - # In PyTorch 1.6, torch.jit.is_tracing has a bug. - # Refers to https://github.com/pytorch/pytorch/issues/42448 - if isinstance(on_trace, bool): - return on_trace - else: - return torch._C._is_tracing() - else: - warnings.warn( - 'torch.jit.is_tracing is only supported after v1.6.0. ' - 'Therefore is_tracing returns False automatically. 
Please ' - 'set on_trace manually if you are using trace.', UserWarning) - return False diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/version_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/version_utils.py deleted file mode 100644 index 77c41f60..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/utils/version_utils.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import subprocess -import warnings - -from packaging.version import parse - - -def digit_version(version_str: str, length: int = 4): - """Convert a version string into a tuple of integers. - - This method is usually used for comparing two versions. For pre-release - versions: alpha < beta < rc. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. - - Returns: - tuple[int]: The version info in digits (integers). - """ - assert 'parrots' not in version_str - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - mapping = {'a': -3, 'b': -2, 'rc': -1} - val = -4 - # version.pre can be None - if version.pre: - if version.pre[0] not in mapping: - warnings.warn(f'unknown prerelease version {version.pre[0]}, ' - 'version checking may go wrong') - else: - val = mapping[version.pre[0]] - release.extend([val, version.pre[-1]]) - else: - release.extend([val, 0]) - - elif version.is_postrelease: - release.extend([1, version.post]) # type: ignore - else: - release.extend([0, 0]) - return tuple(release) - - -def _minimal_ext_cmd(cmd): - # construct minimal environment - env = {} - for k in ['SYSTEMROOT', 'PATH', 'HOME']: - v = os.environ.get(k) - if v is not None: - env[k] = v - # LANGUAGE is used on win32 - env['LANGUAGE'] = 'C' - env['LANG'] = 'C' - env['LC_ALL'] = 'C' - out = subprocess.Popen( - cmd, stdout=subprocess.PIPE, env=env).communicate()[0] - return out - - -def get_git_hash(fallback='unknown', digits=None): - """Get the git hash of the current repo. - - Args: - fallback (str, optional): The fallback string when git hash is - unavailable. Defaults to 'unknown'. - digits (int, optional): kept digits of the hash. Defaults to None, - meaning all digits are kept. - - Returns: - str: Git commit hash. - """ - - if digits is not None and not isinstance(digits, int): - raise TypeError('digits must be None or an integer') - - try: - out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD']) - sha = out.strip().decode('ascii') - if digits is not None: - sha = sha[:digits] - except OSError: - sha = fallback - - return sha diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/version.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/version.py deleted file mode 100644 index a65e3278..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/version.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -__version__ = '1.5.3' - - -def parse_version_info(version_str: str, length: int = 4) -> tuple: - """Parse a version string into a tuple. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. 
- - Returns: - tuple[int | str]: The version info, e.g., "1.3.0" is parsed into - (1, 3, 0, 0, 0, 0), and "2.0.0rc1" is parsed into - (2, 0, 0, 0, 'rc', 1) (when length is set to 4). - """ - from packaging.version import parse - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - release.extend(list(version.pre)) # type: ignore - elif version.is_postrelease: - release.extend(list(version.post)) # type: ignore - else: - release.extend([0, 0]) - return tuple(release) - - -version_info = tuple(int(x) for x in __version__.split('.')[:3]) - -__all__ = ['__version__', 'version_info', 'parse_version_info'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/__init__.py deleted file mode 100644 index 73199b01..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .io import Cache, VideoReader, frames2video -from .optflow import (dequantize_flow, flow_from_bytes, flow_warp, flowread, - flowwrite, quantize_flow, sparse_flow_from_bytes) -from .processing import concat_video, convert_video, cut_video, resize_video - -__all__ = [ - 'Cache', 'VideoReader', 'frames2video', 'convert_video', 'resize_video', - 'cut_video', 'concat_video', 'flowread', 'flowwrite', 'quantize_flow', - 'dequantize_flow', 'flow_warp', 'flow_from_bytes', 'sparse_flow_from_bytes' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/io.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/io.py deleted file mode 100644 index 09fa770d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/io.py +++ /dev/null @@ -1,317 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -from collections import OrderedDict - -import cv2 -from cv2 import (CAP_PROP_FOURCC, CAP_PROP_FPS, CAP_PROP_FRAME_COUNT, - CAP_PROP_FRAME_HEIGHT, CAP_PROP_FRAME_WIDTH, - CAP_PROP_POS_FRAMES, VideoWriter_fourcc) - -from mmcv.utils import (check_file_exist, mkdir_or_exist, scandir, - track_progress) - - -class Cache: - - def __init__(self, capacity): - self._cache = OrderedDict() - self._capacity = int(capacity) - if capacity <= 0: - raise ValueError('capacity must be a positive integer') - - @property - def capacity(self): - return self._capacity - - @property - def size(self): - return len(self._cache) - - def put(self, key, val): - if key in self._cache: - return - if len(self._cache) >= self.capacity: - self._cache.popitem(last=False) - self._cache[key] = val - - def get(self, key, default=None): - val = self._cache[key] if key in self._cache else default - return val - - -class VideoReader: - """Video class with similar usage to a list object. - - This video warpper class provides convenient apis to access frames. - There exists an issue of OpenCV's VideoCapture class that jumping to a - certain frame may be inaccurate. It is fixed in this class by checking - the position after jumping each time. - Cache is used when decoding videos. So if the same frame is visited for - the second time, there is no need to decode again if it is stored in the - cache. 
- - Examples: - >>> import mmcv - >>> v = mmcv.VideoReader('sample.mp4') - >>> len(v) # get the total frame number with `len()` - 120 - >>> for img in v: # v is iterable - >>> mmcv.imshow(img) - >>> v[5] # get the 6th frame - """ - - def __init__(self, filename, cache_capacity=10): - # Check whether the video path is a url - if not filename.startswith(('https://', 'http://')): - check_file_exist(filename, 'Video file not found: ' + filename) - self._vcap = cv2.VideoCapture(filename) - assert cache_capacity > 0 - self._cache = Cache(cache_capacity) - self._position = 0 - # get basic info - self._width = int(self._vcap.get(CAP_PROP_FRAME_WIDTH)) - self._height = int(self._vcap.get(CAP_PROP_FRAME_HEIGHT)) - self._fps = self._vcap.get(CAP_PROP_FPS) - self._frame_cnt = int(self._vcap.get(CAP_PROP_FRAME_COUNT)) - self._fourcc = self._vcap.get(CAP_PROP_FOURCC) - - @property - def vcap(self): - """:obj:`cv2.VideoCapture`: The raw VideoCapture object.""" - return self._vcap - - @property - def opened(self): - """bool: Indicate whether the video is opened.""" - return self._vcap.isOpened() - - @property - def width(self): - """int: Width of video frames.""" - return self._width - - @property - def height(self): - """int: Height of video frames.""" - return self._height - - @property - def resolution(self): - """tuple: Video resolution (width, height).""" - return (self._width, self._height) - - @property - def fps(self): - """float: FPS of the video.""" - return self._fps - - @property - def frame_cnt(self): - """int: Total frames of the video.""" - return self._frame_cnt - - @property - def fourcc(self): - """str: "Four character code" of the video.""" - return self._fourcc - - @property - def position(self): - """int: Current cursor position, indicating frame decoded.""" - return self._position - - def _get_real_position(self): - return int(round(self._vcap.get(CAP_PROP_POS_FRAMES))) - - def _set_real_position(self, frame_id): - self._vcap.set(CAP_PROP_POS_FRAMES, frame_id) - pos = self._get_real_position() - for _ in range(frame_id - pos): - self._vcap.read() - self._position = frame_id - - def read(self): - """Read the next frame. - - If the next frame have been decoded before and in the cache, then - return it directly, otherwise decode, cache and return it. - - Returns: - ndarray or None: Return the frame if successful, otherwise None. - """ - # pos = self._position - if self._cache: - img = self._cache.get(self._position) - if img is not None: - ret = True - else: - if self._position != self._get_real_position(): - self._set_real_position(self._position) - ret, img = self._vcap.read() - if ret: - self._cache.put(self._position, img) - else: - ret, img = self._vcap.read() - if ret: - self._position += 1 - return img - - def get_frame(self, frame_id): - """Get frame by index. - - Args: - frame_id (int): Index of the expected frame, 0-based. - - Returns: - ndarray or None: Return the frame if successful, otherwise None. - """ - if frame_id < 0 or frame_id >= self._frame_cnt: - raise IndexError( - f'"frame_id" must be between 0 and {self._frame_cnt - 1}') - if frame_id == self._position: - return self.read() - if self._cache: - img = self._cache.get(frame_id) - if img is not None: - self._position = frame_id + 1 - return img - self._set_real_position(frame_id) - ret, img = self._vcap.read() - if ret: - if self._cache: - self._cache.put(self._position, img) - self._position += 1 - return img - - def current_frame(self): - """Get the current frame (frame that is just visited). 
- - Returns: - ndarray or None: If the video is fresh, return None, otherwise - return the frame. - """ - if self._position == 0: - return None - return self._cache.get(self._position - 1) - - def cvt2frames(self, - frame_dir, - file_start=0, - filename_tmpl='{:06d}.jpg', - start=0, - max_num=0, - show_progress=True): - """Convert a video to frame images. - - Args: - frame_dir (str): Output directory to store all the frame images. - file_start (int): Filenames will start from the specified number. - filename_tmpl (str): Filename template with the index as the - placeholder. - start (int): The starting frame index. - max_num (int): Maximum number of frames to be written. - show_progress (bool): Whether to show a progress bar. - """ - mkdir_or_exist(frame_dir) - if max_num == 0: - task_num = self.frame_cnt - start - else: - task_num = min(self.frame_cnt - start, max_num) - if task_num <= 0: - raise ValueError('start must be less than total frame number') - if start > 0: - self._set_real_position(start) - - def write_frame(file_idx): - img = self.read() - if img is None: - return - filename = osp.join(frame_dir, filename_tmpl.format(file_idx)) - cv2.imwrite(filename, img) - - if show_progress: - track_progress(write_frame, range(file_start, - file_start + task_num)) - else: - for i in range(task_num): - write_frame(file_start + i) - - def __len__(self): - return self.frame_cnt - - def __getitem__(self, index): - if isinstance(index, slice): - return [ - self.get_frame(i) - for i in range(*index.indices(self.frame_cnt)) - ] - # support negative indexing - if index < 0: - index += self.frame_cnt - if index < 0: - raise IndexError('index out of range') - return self.get_frame(index) - - def __iter__(self): - self._set_real_position(0) - return self - - def __next__(self): - img = self.read() - if img is not None: - return img - else: - raise StopIteration - - next = __next__ - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - self._vcap.release() - - -def frames2video(frame_dir: str, - video_file: str, - fps: float = 30, - fourcc: str = 'XVID', - filename_tmpl: str = '{:06d}.jpg', - start: int = 0, - end: int = 0, - show_progress: bool = True) -> None: - """Read the frame images from a directory and join them as a video. - - Args: - frame_dir (str): The directory containing video frames. - video_file (str): Output filename. - fps (float): FPS of the output video. - fourcc (str): Fourcc of the output video, this should be compatible - with the output file type. - filename_tmpl (str): Filename template with the index as the variable. - start (int): Starting frame index. - end (int): Ending frame index. - show_progress (bool): Whether to show a progress bar. 
- """ - if end == 0: - ext = filename_tmpl.split('.')[-1] - end = len([name for name in scandir(frame_dir, ext)]) - first_file = osp.join(frame_dir, filename_tmpl.format(start)) - check_file_exist(first_file, 'The start frame not found: ' + first_file) - img = cv2.imread(first_file) - height, width = img.shape[:2] - resolution = (width, height) - vwriter = cv2.VideoWriter(video_file, VideoWriter_fourcc(*fourcc), fps, - resolution) - - def write_frame(file_idx): - filename = osp.join(frame_dir, filename_tmpl.format(file_idx)) - img = cv2.imread(filename) - vwriter.write(img) - - if show_progress: - track_progress(write_frame, range(start, end)) - else: - for i in range(start, end): - write_frame(i) - vwriter.release() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/optflow.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/optflow.py deleted file mode 100644 index 91ce0045..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/optflow.py +++ /dev/null @@ -1,272 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Tuple, Union - -import cv2 -import numpy as np - -from mmcv.arraymisc import dequantize, quantize -from mmcv.image import imread, imwrite -from mmcv.utils import is_str - - -def flowread(flow_or_path: Union[np.ndarray, str], - quantize: bool = False, - concat_axis: int = 0, - *args, - **kwargs) -> np.ndarray: - """Read an optical flow map. - - Args: - flow_or_path (ndarray or str): A flow map or filepath. - quantize (bool): whether to read quantized pair, if set to True, - remaining args will be passed to :func:`dequantize_flow`. - concat_axis (int): The axis that dx and dy are concatenated, - can be either 0 or 1. Ignored if quantize is False. - - Returns: - ndarray: Optical flow represented as a (h, w, 2) numpy array - """ - if isinstance(flow_or_path, np.ndarray): - if (flow_or_path.ndim != 3) or (flow_or_path.shape[-1] != 2): - raise ValueError(f'Invalid flow with shape {flow_or_path.shape}') - return flow_or_path - elif not is_str(flow_or_path): - raise TypeError(f'"flow_or_path" must be a filename or numpy array, ' - f'not {type(flow_or_path)}') - - if not quantize: - with open(flow_or_path, 'rb') as f: - try: - header = f.read(4).decode('utf-8') - except Exception: - raise OSError(f'Invalid flow file: {flow_or_path}') - else: - if header != 'PIEH': - raise OSError(f'Invalid flow file: {flow_or_path}, ' - 'header does not contain PIEH') - - w = np.fromfile(f, np.int32, 1).squeeze() - h = np.fromfile(f, np.int32, 1).squeeze() - flow = np.fromfile(f, np.float32, w * h * 2).reshape((h, w, 2)) - else: - assert concat_axis in [0, 1] - cat_flow = imread(flow_or_path, flag='unchanged') - if cat_flow.ndim != 2: - raise OSError( - f'{flow_or_path} is not a valid quantized flow file, ' - f'its dimension is {cat_flow.ndim}.') - assert cat_flow.shape[concat_axis] % 2 == 0 - dx, dy = np.split(cat_flow, 2, axis=concat_axis) - flow = dequantize_flow(dx, dy, *args, **kwargs) - - return flow.astype(np.float32) - - -def flowwrite(flow: np.ndarray, - filename: str, - quantize: bool = False, - concat_axis: int = 0, - *args, - **kwargs) -> None: - """Write optical flow to file. - - If the flow is not quantized, it will be saved as a .flo file losslessly, - otherwise a jpeg image which is lossy but of much smaller size. (dx and dy - will be concatenated horizontally into a single image if quantize is True.) - - Args: - flow (ndarray): (h, w, 2) array of optical flow. - filename (str): Output filepath. 
- quantize (bool): Whether to quantize the flow and save it to 2 jpeg - images. If set to True, remaining args will be passed to - :func:`quantize_flow`. - concat_axis (int): The axis that dx and dy are concatenated, - can be either 0 or 1. Ignored if quantize is False. - """ - if not quantize: - with open(filename, 'wb') as f: - f.write(b'PIEH') - np.array([flow.shape[1], flow.shape[0]], dtype=np.int32).tofile(f) - flow = flow.astype(np.float32) - flow.tofile(f) - f.flush() - else: - assert concat_axis in [0, 1] - dx, dy = quantize_flow(flow, *args, **kwargs) - dxdy = np.concatenate((dx, dy), axis=concat_axis) - imwrite(dxdy, filename) - - -def quantize_flow(flow: np.ndarray, - max_val: float = 0.02, - norm: bool = True) -> tuple: - """Quantize flow to [0, 255]. - - After this step, the size of flow will be much smaller, and can be - dumped as jpeg images. - - Args: - flow (ndarray): (h, w, 2) array of optical flow. - max_val (float): Maximum value of flow, values beyond - [-max_val, max_val] will be truncated. - norm (bool): Whether to divide flow values by image width/height. - - Returns: - tuple[ndarray]: Quantized dx and dy. - """ - h, w, _ = flow.shape - dx = flow[..., 0] - dy = flow[..., 1] - if norm: - dx = dx / w # avoid inplace operations - dy = dy / h - # use 255 levels instead of 256 to make sure 0 is 0 after dequantization. - flow_comps = [ - quantize(d, -max_val, max_val, 255, np.uint8) for d in [dx, dy] - ] - return tuple(flow_comps) - - -def dequantize_flow(dx: np.ndarray, - dy: np.ndarray, - max_val: float = 0.02, - denorm: bool = True) -> np.ndarray: - """Recover from quantized flow. - - Args: - dx (ndarray): Quantized dx. - dy (ndarray): Quantized dy. - max_val (float): Maximum value used when quantizing. - denorm (bool): Whether to multiply flow values with width/height. - - Returns: - ndarray: Dequantized flow. - """ - assert dx.shape == dy.shape - assert dx.ndim == 2 or (dx.ndim == 3 and dx.shape[-1] == 1) - - dx, dy = (dequantize(d, -max_val, max_val, 255) for d in [dx, dy]) - - if denorm: - dx *= dx.shape[1] - dy *= dx.shape[0] - flow = np.dstack((dx, dy)) - return flow - - -def flow_warp(img: np.ndarray, - flow: np.ndarray, - filling_value: int = 0, - interpolate_mode: str = 'nearest') -> np.ndarray: - """Use flow to warp img. - - Args: - img (ndarray): Image to be warped. - flow (ndarray): Optical Flow. - filling_value (int): The missing pixels will be set with filling_value. - interpolate_mode (str): bilinear -> Bilinear Interpolation; - nearest -> Nearest Neighbor. - - Returns: - ndarray: Warped image with the same shape of img - """ - warnings.warn('This function is just for prototyping and cannot ' - 'guarantee the computational efficiency.') - assert flow.ndim == 3, 'Flow must be in 3D arrays.' 
- height = flow.shape[0] - width = flow.shape[1] - channels = img.shape[2] - - output = np.ones( - (height, width, channels), dtype=img.dtype) * filling_value - - grid = np.indices((height, width)).swapaxes(0, 1).swapaxes(1, 2) - dx = grid[:, :, 0] + flow[:, :, 1] - dy = grid[:, :, 1] + flow[:, :, 0] - sx = np.floor(dx).astype(int) - sy = np.floor(dy).astype(int) - valid = (sx >= 0) & (sx < height - 1) & (sy >= 0) & (sy < width - 1) - - if interpolate_mode == 'nearest': - output[valid, :] = img[dx[valid].round().astype(int), - dy[valid].round().astype(int), :] - elif interpolate_mode == 'bilinear': - # dirty walkround for integer positions - eps_ = 1e-6 - dx, dy = dx + eps_, dy + eps_ - left_top_ = img[np.floor(dx[valid]).astype(int), - np.floor(dy[valid]).astype(int), :] * ( - np.ceil(dx[valid]) - dx[valid])[:, None] * ( - np.ceil(dy[valid]) - dy[valid])[:, None] - left_down_ = img[np.ceil(dx[valid]).astype(int), - np.floor(dy[valid]).astype(int), :] * ( - dx[valid] - np.floor(dx[valid]))[:, None] * ( - np.ceil(dy[valid]) - dy[valid])[:, None] - right_top_ = img[np.floor(dx[valid]).astype(int), - np.ceil(dy[valid]).astype(int), :] * ( - np.ceil(dx[valid]) - dx[valid])[:, None] * ( - dy[valid] - np.floor(dy[valid]))[:, None] - right_down_ = img[np.ceil(dx[valid]).astype(int), - np.ceil(dy[valid]).astype(int), :] * ( - dx[valid] - np.floor(dx[valid]))[:, None] * ( - dy[valid] - np.floor(dy[valid]))[:, None] - output[valid, :] = left_top_ + left_down_ + right_top_ + right_down_ - else: - raise NotImplementedError( - 'We only support interpolation modes of nearest and bilinear, ' - f'but got {interpolate_mode}.') - return output.astype(img.dtype) - - -def flow_from_bytes(content: bytes) -> np.ndarray: - """Read dense optical flow from bytes. - - .. note:: - This load optical flow function works for FlyingChairs, FlyingThings3D, - Sintel, FlyingChairsOcc datasets, but cannot load the data from - ChairsSDHom. - - Args: - content (bytes): Optical flow bytes got from files or other streams. - - Returns: - ndarray: Loaded optical flow with the shape (H, W, 2). - """ - - # header in first 4 bytes - header = content[:4] - if header.decode('utf-8') != 'PIEH': - raise Exception('Flow file header does not contain PIEH') - # width in second 4 bytes - width = np.frombuffer(content[4:], np.int32, 1).squeeze() - # height in third 4 bytes - height = np.frombuffer(content[8:], np.int32, 1).squeeze() - # after first 12 bytes, all bytes are flow - flow = np.frombuffer(content[12:], np.float32, width * height * 2).reshape( - (height, width, 2)) - - return flow - - -def sparse_flow_from_bytes(content: bytes) -> Tuple[np.ndarray, np.ndarray]: - """Read the optical flow in KITTI datasets from bytes. - - This function is modified from RAFT load the `KITTI datasets - `_. - - Args: - content (bytes): Optical flow bytes got from files or other streams. - - Returns: - Tuple(ndarray, ndarray): Loaded optical flow with the shape (H, W, 2) - and flow valid mask with the shape (H, W). 
- """ # nopa - - content = np.frombuffer(content, np.uint8) - flow = cv2.imdecode(content, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR) - flow = flow[:, :, ::-1].astype(np.float32) - # flow shape (H, W, 2) valid shape (H, W) - flow, valid = flow[:, :, :2], flow[:, :, 2] - flow = (flow - 2**15) / 64.0 - return flow, valid diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/processing.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/processing.py deleted file mode 100644 index 90e2a4c0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/video/processing.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import os.path as osp -import subprocess -import tempfile -from typing import List, Optional, Union - -from mmcv.utils import requires_executable - - -@requires_executable('ffmpeg') -def convert_video(in_file: str, - out_file: str, - print_cmd: bool = False, - pre_options: str = '', - **kwargs) -> None: - """Convert a video with ffmpeg. - - This provides a general api to ffmpeg, the executed command is:: - - `ffmpeg -y -i ` - - Options(kwargs) are mapped to ffmpeg commands with the following rules: - - - key=val: "-key val" - - key=True: "-key" - - key=False: "" - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - pre_options (str): Options appears before "-i ". - print_cmd (bool): Whether to print the final ffmpeg command. - """ - options = [] - for k, v in kwargs.items(): - if isinstance(v, bool): - if v: - options.append(f'-{k}') - elif k == 'log_level': - assert v in [ - 'quiet', 'panic', 'fatal', 'error', 'warning', 'info', - 'verbose', 'debug', 'trace' - ] - options.append(f'-loglevel {v}') - else: - options.append(f'-{k} {v}') - cmd = f'ffmpeg -y {pre_options} -i {in_file} {" ".join(options)} ' \ - f'{out_file}' - if print_cmd: - print(cmd) - subprocess.call(cmd, shell=True) - - -@requires_executable('ffmpeg') -def resize_video(in_file: str, - out_file: str, - size: Optional[tuple] = None, - ratio: Union[tuple, float, None] = None, - keep_ar: bool = False, - log_level: str = 'info', - print_cmd: bool = False) -> None: - """Resize a video. - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1). - ratio (tuple or float): Expected resize ratio, (2, 0.5) means - (w*2, h*0.5). - keep_ar (bool): Whether to keep original aspect ratio. - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. - """ - if size is None and ratio is None: - raise ValueError('expected size or ratio must be specified') - if size is not None and ratio is not None: - raise ValueError('size and ratio cannot be specified at the same time') - options = {'log_level': log_level} - if size: - if not keep_ar: - options['vf'] = f'scale={size[0]}:{size[1]}' - else: - options['vf'] = f'scale=w={size[0]}:h={size[1]}:' \ - 'force_original_aspect_ratio=decrease' - else: - if not isinstance(ratio, tuple): - ratio = (ratio, ratio) - options['vf'] = f'scale="trunc(iw*{ratio[0]}):trunc(ih*{ratio[1]})"' - convert_video(in_file, out_file, print_cmd, **options) - - -@requires_executable('ffmpeg') -def cut_video(in_file: str, - out_file: str, - start: Optional[float] = None, - end: Optional[float] = None, - vcodec: Optional[str] = None, - acodec: Optional[str] = None, - log_level: str = 'info', - print_cmd: bool = False) -> None: - """Cut a clip from a video. 
- - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - start (None or float): Start time (in seconds). - end (None or float): End time (in seconds). - vcodec (None or str): Output video codec, None for unchanged. - acodec (None or str): Output audio codec, None for unchanged. - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. - """ - options = {'log_level': log_level} - if vcodec is None: - options['vcodec'] = 'copy' - if acodec is None: - options['acodec'] = 'copy' - if start: - options['ss'] = start # type: ignore - else: - start = 0 - if end: - options['t'] = end - start # type: ignore - convert_video(in_file, out_file, print_cmd, **options) - - -@requires_executable('ffmpeg') -def concat_video(video_list: List, - out_file: str, - vcodec: Optional[str] = None, - acodec: Optional[str] = None, - log_level: str = 'info', - print_cmd: bool = False) -> None: - """Concatenate multiple videos into a single one. - - Args: - video_list (list): A list of video filenames - out_file (str): Output video filename - vcodec (None or str): Output video codec, None for unchanged - acodec (None or str): Output audio codec, None for unchanged - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. - """ - tmp_filehandler, tmp_filename = tempfile.mkstemp(suffix='.txt', text=True) - with open(tmp_filename, 'w') as f: - for filename in video_list: - f.write(f'file {osp.abspath(filename)}\n') - options = {'log_level': log_level} - if vcodec is None: - options['vcodec'] = 'copy' - if acodec is None: - options['acodec'] = 'copy' - convert_video( - tmp_filename, - out_file, - print_cmd, - pre_options='-f concat -safe 0', - **options) - os.close(tmp_filehandler) - os.remove(tmp_filename) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/__init__.py deleted file mode 100644 index 835df136..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .color import Color, color_val -from .image import imshow, imshow_bboxes, imshow_det_bboxes -from .optflow import flow2rgb, flowshow, make_color_wheel - -__all__ = [ - 'Color', 'color_val', 'imshow', 'imshow_bboxes', 'imshow_det_bboxes', - 'flowshow', 'flow2rgb', 'make_color_wheel' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/color.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/color.py deleted file mode 100644 index 2cc0b523..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/color.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import Enum -from typing import Union - -import numpy as np - -from mmcv.utils import is_str - - -class Color(Enum): - """An enum that defines common colors. - - Contains red, green, blue, cyan, yellow, magenta, white and black. - """ - red = (0, 0, 255) - green = (0, 255, 0) - blue = (255, 0, 0) - cyan = (255, 255, 0) - yellow = (0, 255, 255) - magenta = (255, 0, 255) - white = (255, 255, 255) - black = (0, 0, 0) - - -def color_val(color: Union[Color, str, tuple, int, np.ndarray]) -> tuple: - """Convert various input to color tuples. 
- - Args: - color (:obj:`Color`/str/tuple/int/ndarray): Color inputs - - Returns: - tuple[int]: A tuple of 3 integers indicating BGR channels. - """ - if is_str(color): - return Color[color].value # type: ignore - elif isinstance(color, Color): - return color.value - elif isinstance(color, tuple): - assert len(color) == 3 - for channel in color: - assert 0 <= channel <= 255 - return color - elif isinstance(color, int): - assert 0 <= color <= 255 - return color, color, color - elif isinstance(color, np.ndarray): - assert color.ndim == 1 and color.size == 3 - assert np.all((color >= 0) & (color <= 255)) - color = color.astype(np.uint8) - return tuple(color) - else: - raise TypeError(f'Invalid type for color: {type(color)}') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/image.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/image.py deleted file mode 100644 index e7ac4c18..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/image.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Optional, Union - -import cv2 -import numpy as np - -from mmcv.image import imread, imwrite -from .color import Color, color_val - -# a type alias declares the optional types of color argument -ColorType = Union[Color, str, tuple, int, np.ndarray] - - -def imshow(img: Union[str, np.ndarray], - win_name: str = '', - wait_time: int = 0): - """Show an image. - - Args: - img (str or ndarray): The image to be displayed. - win_name (str): The window name. - wait_time (int): Value of waitKey param. - """ - cv2.imshow(win_name, imread(img)) - if wait_time == 0: # prevent from hanging if windows was closed - while True: - ret = cv2.waitKey(1) - - closed = cv2.getWindowProperty(win_name, cv2.WND_PROP_VISIBLE) < 1 - # if user closed window or if some key pressed - if closed or ret != -1: - break - else: - ret = cv2.waitKey(wait_time) - - -def imshow_bboxes(img: Union[str, np.ndarray], - bboxes: Union[list, np.ndarray], - colors: ColorType = 'green', - top_k: int = -1, - thickness: int = 1, - show: bool = True, - win_name: str = '', - wait_time: int = 0, - out_file: Optional[str] = None): - """Draw bboxes on an image. - - Args: - img (str or ndarray): The image to be displayed. - bboxes (list or ndarray): A list of ndarray of shape (k, 4). - colors (Color or str or tuple or int or ndarray): A list of colors. - top_k (int): Plot the first k bboxes only if set positive. - thickness (int): Thickness of lines. - show (bool): Whether to show the image. - win_name (str): The window name. - wait_time (int): Value of waitKey param. - out_file (str, optional): The filename to write the image. - - Returns: - ndarray: The image with bboxes drawn on it. 
- """ - img = imread(img) - img = np.ascontiguousarray(img) - - if isinstance(bboxes, np.ndarray): - bboxes = [bboxes] - if not isinstance(colors, list): - colors = [colors for _ in range(len(bboxes))] - colors = [color_val(c) for c in colors] - assert len(bboxes) == len(colors) - - for i, _bboxes in enumerate(bboxes): - _bboxes = _bboxes.astype(np.int32) - if top_k <= 0: - _top_k = _bboxes.shape[0] - else: - _top_k = min(top_k, _bboxes.shape[0]) - for j in range(_top_k): - left_top = (_bboxes[j, 0], _bboxes[j, 1]) - right_bottom = (_bboxes[j, 2], _bboxes[j, 3]) - cv2.rectangle( - img, left_top, right_bottom, colors[i], thickness=thickness) - - if show: - imshow(img, win_name, wait_time) - if out_file is not None: - imwrite(img, out_file) - return img - - -def imshow_det_bboxes(img: Union[str, np.ndarray], - bboxes: np.ndarray, - labels: np.ndarray, - class_names: List[str] = None, - score_thr: float = 0, - bbox_color: ColorType = 'green', - text_color: ColorType = 'green', - thickness: int = 1, - font_scale: float = 0.5, - show: bool = True, - win_name: str = '', - wait_time: int = 0, - out_file: Optional[str] = None): - """Draw bboxes and class labels (with scores) on an image. - - Args: - img (str or ndarray): The image to be displayed. - bboxes (ndarray): Bounding boxes (with scores), shaped (n, 4) or - (n, 5). - labels (ndarray): Labels of bboxes. - class_names (list[str]): Names of each classes. - score_thr (float): Minimum score of bboxes to be shown. - bbox_color (Color or str or tuple or int or ndarray): Color - of bbox lines. - text_color (Color or str or tuple or int or ndarray): Color - of texts. - thickness (int): Thickness of lines. - font_scale (float): Font scales of texts. - show (bool): Whether to show the image. - win_name (str): The window name. - wait_time (int): Value of waitKey param. - out_file (str or None): The filename to write the image. - - Returns: - ndarray: The image with bboxes drawn on it. - """ - assert bboxes.ndim == 2 - assert labels.ndim == 1 - assert bboxes.shape[0] == labels.shape[0] - assert bboxes.shape[1] == 4 or bboxes.shape[1] == 5 - img = imread(img) - img = np.ascontiguousarray(img) - - if score_thr > 0: - assert bboxes.shape[1] == 5 - scores = bboxes[:, -1] - inds = scores > score_thr - bboxes = bboxes[inds, :] - labels = labels[inds] - - bbox_color = color_val(bbox_color) - text_color = color_val(text_color) - - for bbox, label in zip(bboxes, labels): - bbox_int = bbox.astype(np.int32) - left_top = (bbox_int[0], bbox_int[1]) - right_bottom = (bbox_int[2], bbox_int[3]) - cv2.rectangle( - img, left_top, right_bottom, bbox_color, thickness=thickness) - label_text = class_names[ - label] if class_names is not None else f'cls {label}' - if len(bbox) > 4: - label_text += f'|{bbox[-1]:.02f}' - cv2.putText(img, label_text, (bbox_int[0], bbox_int[1] - 2), - cv2.FONT_HERSHEY_COMPLEX, font_scale, text_color) - - if show: - imshow(img, win_name, wait_time) - if out_file is not None: - imwrite(img, out_file) - return img diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/optflow.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/optflow.py deleted file mode 100644 index 080b0e61..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmcv/visualization/optflow.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from typing import Optional, Union - -import numpy as np - -from mmcv.image import rgb2bgr -from mmcv.video import flowread -from .image import imshow - - -def flowshow(flow: Union[np.ndarray, str], - win_name: str = '', - wait_time: int = 0) -> None: - """Show optical flow. - - Args: - flow (ndarray or str): The optical flow to be displayed. - win_name (str): The window name. - wait_time (int): Value of waitKey param. - """ - flow = flowread(flow) - flow_img = flow2rgb(flow) - imshow(rgb2bgr(flow_img), win_name, wait_time) - - -def flow2rgb(flow: np.ndarray, - color_wheel: Optional[np.ndarray] = None, - unknown_thr: float = 1e6) -> np.ndarray: - """Convert flow map to RGB image. - - Args: - flow (ndarray): Array of optical flow. - color_wheel (ndarray or None): Color wheel used to map flow field to - RGB colorspace. Default color wheel will be used if not specified. - unknown_thr (float): Values above this threshold will be marked as - unknown and thus ignored. - - Returns: - ndarray: RGB image that can be visualized. - """ - assert flow.ndim == 3 and flow.shape[-1] == 2 - if color_wheel is None: - color_wheel = make_color_wheel() - assert color_wheel.ndim == 2 and color_wheel.shape[1] == 3 - num_bins = color_wheel.shape[0] - - dx = flow[:, :, 0].copy() - dy = flow[:, :, 1].copy() - - ignore_inds = ( - np.isnan(dx) | np.isnan(dy) | (np.abs(dx) > unknown_thr) | - (np.abs(dy) > unknown_thr)) - dx[ignore_inds] = 0 - dy[ignore_inds] = 0 - - rad = np.sqrt(dx**2 + dy**2) - if np.any(rad > np.finfo(float).eps): - max_rad = np.max(rad) - dx /= max_rad - dy /= max_rad - - rad = np.sqrt(dx**2 + dy**2) - angle = np.arctan2(-dy, -dx) / np.pi - - bin_real = (angle + 1) / 2 * (num_bins - 1) - bin_left = np.floor(bin_real).astype(int) - bin_right = (bin_left + 1) % num_bins - w = (bin_real - bin_left.astype(np.float32))[..., None] - flow_img = (1 - - w) * color_wheel[bin_left, :] + w * color_wheel[bin_right, :] - small_ind = rad <= 1 - flow_img[small_ind] = 1 - rad[small_ind, None] * (1 - flow_img[small_ind]) - flow_img[np.logical_not(small_ind)] *= 0.75 - - flow_img[ignore_inds, :] = 0 - - return flow_img - - -def make_color_wheel(bins: Optional[Union[list, tuple]] = None) -> np.ndarray: - """Build a color wheel. - - Args: - bins(list or tuple, optional): Specify the number of bins for each - color range, corresponding to six ranges: red -> yellow, - yellow -> green, green -> cyan, cyan -> blue, blue -> magenta, - magenta -> red. [15, 6, 4, 11, 13, 6] is used for default - (see Middlebury). - - Returns: - ndarray: Color wheel of shape (total_bins, 3). - """ - if bins is None: - bins = [15, 6, 4, 11, 13, 6] - assert len(bins) == 6 - - RY, YG, GC, CB, BM, MR = tuple(bins) - - ry = [1, np.arange(RY) / RY, 0] - yg = [1 - np.arange(YG) / YG, 1, 0] - gc = [0, 1, np.arange(GC) / GC] - cb = [0, 1 - np.arange(CB) / CB, 1] - bm = [np.arange(BM) / BM, 0, 1] - mr = [1, 0, 1 - np.arange(MR) / MR] - - num_bins = RY + YG + GC + CB + BM + MR - - color_wheel = np.zeros((3, num_bins), dtype=np.float32) - - col = 0 - for i, color in enumerate([ry, yg, gc, cb, bm, mr]): - for j in range(3): - color_wheel[j, col:col + bins[i]] = color[j] - col += bins[i] - - return color_wheel.T diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/__init__.py deleted file mode 100644 index 1f8ee169..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import mmcv - -from .version import __version__, short_version - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -mmcv_minimum_version = '1.3.17' -mmcv_maximum_version = '1.6.0' -mmcv_version = digit_version(mmcv.__version__) - - -assert (mmcv_version >= digit_version(mmcv_minimum_version) - and mmcv_version <= digit_version(mmcv_maximum_version)), \ - f'MMCV=={mmcv.__version__} is used but incompatible. ' \ - f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.' - -__all__ = ['__version__', 'short_version'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/__init__.py deleted file mode 100644 index a865e942..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .inference import (async_inference_detector, inference_detector, - init_detector, show_result_pyplot) -from .test import multi_gpu_test, single_gpu_test -from .train import (get_root_logger, init_random_seed, set_random_seed, - train_detector) - -__all__ = [ - 'get_root_logger', 'set_random_seed', 'train_detector', 'init_detector', - 'async_inference_detector', 'inference_detector', 'show_result_pyplot', - 'multi_gpu_test', 'single_gpu_test', 'init_random_seed' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/inference.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/inference.py deleted file mode 100644 index 795fce51..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/inference.py +++ /dev/null @@ -1,251 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from pathlib import Path - -import mmcv -import numpy as np -import torch -from mmcv.ops import RoIPool -from mmcv.parallel import collate, scatter -from mmcv.runner import load_checkpoint - -from mmdet.core import get_classes -from mmdet.datasets import replace_ImageToTensor -from mmdet.datasets.pipelines import Compose -from mmdet.models import build_detector - - -def init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None): - """Initialize a detector from config file. - - Args: - config (str, :obj:`Path`, or :obj:`mmcv.Config`): Config file path, - :obj:`Path`, or the config object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - cfg_options (dict): Options to override some settings in the used - config. - - Returns: - nn.Module: The constructed detector. 
- """ - if isinstance(config, (str, Path)): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - f'but got {type(config)}') - if cfg_options is not None: - config.merge_from_dict(cfg_options) - if 'pretrained' in config.model: - config.model.pretrained = None - elif 'init_cfg' in config.model.backbone: - config.model.backbone.init_cfg = None - config.model.train_cfg = None - model = build_detector(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - warnings.simplefilter('once') - warnings.warn('Class names are not saved in the checkpoint\'s ' - 'meta data, use COCO classes by default.') - model.CLASSES = get_classes('coco') - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -class LoadImage: - """Deprecated. - - A simple pipeline to load image. - """ - - def __call__(self, results): - """Call function to load images into results. - - Args: - results (dict): A result dict contains the file name - of the image to be read. - Returns: - dict: ``results`` will be returned containing loaded image. - """ - warnings.simplefilter('once') - warnings.warn('`LoadImage` is deprecated and will be removed in ' - 'future releases. You may use `LoadImageFromWebcam` ' - 'from `mmdet.datasets.pipelines.` instead.') - if isinstance(results['img'], str): - results['filename'] = results['img'] - results['ori_filename'] = results['img'] - else: - results['filename'] = None - results['ori_filename'] = None - img = mmcv.imread(results['img']) - results['img'] = img - results['img_fields'] = ['img'] - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - return results - - -def inference_detector(model, imgs): - """Inference image(s) with the detector. - - Args: - model (nn.Module): The loaded detector. - imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]): - Either image files or loaded images. - - Returns: - If imgs is a list or tuple, the same length list type results - will be returned, otherwise return the detection results directly. 
- """ - - if isinstance(imgs, (list, tuple)): - is_batch = True - else: - imgs = [imgs] - is_batch = False - - cfg = model.cfg - device = next(model.parameters()).device # model device - - if isinstance(imgs[0], np.ndarray): - cfg = cfg.copy() - # set loading pipeline type - cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam' - - cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline) - test_pipeline = Compose(cfg.data.test.pipeline) - - datas = [] - for img in imgs: - # prepare data - if isinstance(img, np.ndarray): - # directly add img - data = dict(img=img) - else: - # add information into dict - data = dict(img_info=dict(filename=img), img_prefix=None) - # build the data pipeline - data = test_pipeline(data) - datas.append(data) - - data = collate(datas, samples_per_gpu=len(imgs)) - # just get the actual data from DataContainer - data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']] - data['img'] = [img.data[0] for img in data['img']] - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - for m in model.modules(): - assert not isinstance( - m, RoIPool - ), 'CPU inference with RoIPool is not supported currently.' - - # forward the model - with torch.no_grad(): - results = model(return_loss=False, rescale=True, **data) - - if not is_batch: - return results[0] - else: - return results - - -async def async_inference_detector(model, imgs): - """Async inference image(s) with the detector. - - Args: - model (nn.Module): The loaded detector. - img (str | ndarray): Either image files or loaded images. - - Returns: - Awaitable detection results. - """ - if not isinstance(imgs, (list, tuple)): - imgs = [imgs] - - cfg = model.cfg - device = next(model.parameters()).device # model device - - if isinstance(imgs[0], np.ndarray): - cfg = cfg.copy() - # set loading pipeline type - cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam' - - cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline) - test_pipeline = Compose(cfg.data.test.pipeline) - - datas = [] - for img in imgs: - # prepare data - if isinstance(img, np.ndarray): - # directly add img - data = dict(img=img) - else: - # add information into dict - data = dict(img_info=dict(filename=img), img_prefix=None) - # build the data pipeline - data = test_pipeline(data) - datas.append(data) - - data = collate(datas, samples_per_gpu=len(imgs)) - # just get the actual data from DataContainer - data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']] - data['img'] = [img.data[0] for img in data['img']] - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - for m in model.modules(): - assert not isinstance( - m, RoIPool - ), 'CPU inference with RoIPool is not supported currently.' - - # We don't restore `torch.is_grad_enabled()` value during concurrent - # inference since execution can overlap - torch.set_grad_enabled(False) - results = await model.aforward_test(rescale=True, **data) - return results - - -def show_result_pyplot(model, - img, - result, - score_thr=0.3, - title='result', - wait_time=0, - palette=None, - out_file=None): - """Visualize the detection results on the image. - - Args: - model (nn.Module): The loaded detector. - img (str or np.ndarray): Image filename or loaded image. - result (tuple[list] or list): The detection result, can be either - (bbox, segm) or just bbox. - score_thr (float): The threshold to visualize the bboxes and masks. 
- title (str): Title of the pyplot figure. - wait_time (float): Value of waitKey param. Default: 0. - palette (str or tuple(int) or :obj:`Color`): Color. - The tuple of color should be in BGR order. - out_file (str or None): The path to write the image. - Default: None. - """ - if hasattr(model, 'module'): - model = model.module - model.show_result( - img, - result, - score_thr=score_thr, - show=True, - wait_time=wait_time, - win_name=title, - bbox_color=palette, - text_color=(200, 200, 200), - mask_color=palette, - out_file=out_file) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/test.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/test.py deleted file mode 100644 index 973d3623..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/test.py +++ /dev/null @@ -1,209 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import pickle -import shutil -import tempfile -import time - -import mmcv -import torch -import torch.distributed as dist -from mmcv.image import tensor2imgs -from mmcv.runner import get_dist_info - -from mmdet.core import encode_mask_results - - -def single_gpu_test(model, - data_loader, - show=False, - out_dir=None, - show_score_thr=0.3): - model.eval() - results = [] - dataset = data_loader.dataset - PALETTE = getattr(dataset, 'PALETTE', None) - prog_bar = mmcv.ProgressBar(len(dataset)) - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - - batch_size = len(result) - if show or out_dir: - if batch_size == 1 and isinstance(data['img'][0], torch.Tensor): - img_tensor = data['img'][0] - else: - img_tensor = data['img'][0].data[0] - img_metas = data['img_metas'][0].data[0] - imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg']) - assert len(imgs) == len(img_metas) - - for i, (img, img_meta) in enumerate(zip(imgs, img_metas)): - h, w, _ = img_meta['img_shape'] - img_show = img[:h, :w, :] - - ori_h, ori_w = img_meta['ori_shape'][:-1] - img_show = mmcv.imresize(img_show, (ori_w, ori_h)) - - if out_dir: - out_file = osp.join(out_dir, img_meta['ori_filename']) - else: - out_file = None - - model.module.show_result( - img_show, - result[i], - bbox_color=PALETTE, - text_color=PALETTE, - mask_color=PALETTE, - show=show, - out_file=out_file, - score_thr=show_score_thr) - - # encode mask results - if isinstance(result[0], tuple): - result = [(bbox_results, encode_mask_results(mask_results)) - for bbox_results, mask_results in result] - # This logic is only used in panoptic segmentation test. - elif isinstance(result[0], dict) and 'ins_results' in result[0]: - for j in range(len(result)): - bbox_results, mask_results = result[j]['ins_results'] - result[j]['ins_results'] = (bbox_results, - encode_mask_results(mask_results)) - - results.extend(result) - - for _ in range(batch_size): - prog_bar.update() - return results - - -def multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False): - """Test model with multiple gpus. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting 'gpu_collect=True' - it encodes results to gpu tensors and use gpu communication for results - collection. On cpu mode it saves the results on different gpus to 'tmpdir' - and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. 
- tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - - Returns: - list: The prediction results. - """ - model.eval() - results = [] - dataset = data_loader.dataset - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - time.sleep(2) # This line can prevent deadlock problem in some cases. - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - # encode mask results - if isinstance(result[0], tuple): - result = [(bbox_results, encode_mask_results(mask_results)) - for bbox_results, mask_results in result] - # This logic is only used in panoptic segmentation test. - elif isinstance(result[0], dict) and 'ins_results' in result[0]: - for j in range(len(result)): - bbox_results, mask_results = result[j]['ins_results'] - result[j]['ins_results'] = ( - bbox_results, encode_mask_results(mask_results)) - - results.extend(result) - - if rank == 0: - batch_size = len(result) - for _ in range(batch_size * world_size): - prog_bar.update() - - # collect results from all ranks - if gpu_collect: - results = collect_results_gpu(results, len(dataset)) - else: - results = collect_results_cpu(results, len(dataset), tmpdir) - return results - - -def collect_results_cpu(result_part, size, tmpdir=None): - rank, world_size = get_dist_info() - # create a tmp dir if it is not specified - if tmpdir is None: - MAX_LEN = 512 - # 32 is whitespace - dir_tensor = torch.full((MAX_LEN, ), - 32, - dtype=torch.uint8, - device='cuda') - if rank == 0: - mmcv.mkdir_or_exist('.dist_test') - tmpdir = tempfile.mkdtemp(dir='.dist_test') - tmpdir = torch.tensor( - bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda') - dir_tensor[:len(tmpdir)] = tmpdir - dist.broadcast(dir_tensor, 0) - tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip() - else: - mmcv.mkdir_or_exist(tmpdir) - # dump the part result to the dir - mmcv.dump(result_part, osp.join(tmpdir, f'part_{rank}.pkl')) - dist.barrier() - # collect all parts - if rank != 0: - return None - else: - # load results of all parts from tmp dir - part_list = [] - for i in range(world_size): - part_file = osp.join(tmpdir, f'part_{i}.pkl') - part_list.append(mmcv.load(part_file)) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - # remove tmp dir - shutil.rmtree(tmpdir) - return ordered_results - - -def collect_results_gpu(result_part, size): - rank, world_size = get_dist_info() - # dump result part to tensor with pickle - part_tensor = torch.tensor( - bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda') - # gather all result part tensor shape - shape_tensor = torch.tensor(part_tensor.shape, device='cuda') - shape_list = [shape_tensor.clone() for _ in range(world_size)] - dist.all_gather(shape_list, shape_tensor) - # padding result part tensor to max length - shape_max = torch.tensor(shape_list).max() - part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda') - part_send[:shape_tensor[0]] = part_tensor - part_recv_list = [ - part_tensor.new_zeros(shape_max) for _ in range(world_size) - ] - # gather all result part - dist.all_gather(part_recv_list, part_send) - - if rank == 0: - part_list = [] - for recv, shape in zip(part_recv_list, shape_list): - part_list.append( - 
pickle.loads(recv[:shape[0]].cpu().numpy().tobytes())) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - return ordered_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/train.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/train.py deleted file mode 100644 index 3bba6600..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/apis/train.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import random - -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import (DistSamplerSeedHook, EpochBasedRunner, - Fp16OptimizerHook, OptimizerHook, build_optimizer, - build_runner, get_dist_info) - -from mmdet.core import DistEvalHook, EvalHook -from mmdet.datasets import (build_dataloader, build_dataset, - replace_ImageToTensor) -from mmdet.utils import (build_ddp, build_dp, compat_cfg, - find_latest_checkpoint, get_root_logger) - - -def init_random_seed(seed=None, device='cuda'): - """Initialize random seed. - - If the seed is not set, the seed will be automatically randomized, - and then broadcast to all processes to prevent some potential bugs. - - Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - - Returns: - int: Seed to be used. - """ - if seed is not None: - return seed - - # Make sure all ranks share the same random seed to prevent - # some potential bugs. Please refer to - # https://github.com/open-mmlab/mmdetection/issues/6339 - rank, world_size = get_dist_info() - seed = np.random.randint(2**31) - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def auto_scale_lr(cfg, distributed, logger): - """Automatically scaling LR according to GPU number and sample per GPU. - - Args: - cfg (config): Training config. - distributed (bool): Using distributed or not. - logger (logging.Logger): Logger. - """ - # Get flag from config - if ('auto_scale_lr' not in cfg) or \ - (not cfg.auto_scale_lr.get('enable', False)): - logger.info('Automatic scaling of learning rate (LR)' - ' has been disabled.') - return - - # Get base batch size from config - base_batch_size = cfg.auto_scale_lr.get('base_batch_size', None) - if base_batch_size is None: - return - - # Get gpu number - if distributed: - _, world_size = get_dist_info() - num_gpus = len(range(world_size)) - else: - num_gpus = len(cfg.gpu_ids) - - # calculate the batch size - batch_size = num_gpus * cfg.data.samples_per_gpu - logger.info(f'You are using {num_gpus} GPU(s) ' - f'and {cfg.data.samples_per_gpu} samples per GPU. 
' - f'Total batch size is {batch_size}.') - - if batch_size != base_batch_size: - # scale LR with - # [linear scaling rule](https://arxiv.org/abs/1706.02677) - scaled_lr = (batch_size / base_batch_size) * cfg.optimizer.lr - logger.info('LR has been automatically scaled ' - f'from {cfg.optimizer.lr} to {scaled_lr}') - cfg.optimizer.lr = scaled_lr - else: - logger.info('The batch size match the ' - f'base batch size: {base_batch_size}, ' - f'will not scaling the LR ({cfg.optimizer.lr}).') - - -def train_detector(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - - cfg = compat_cfg(cfg) - logger = get_root_logger(log_level=cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - - runner_type = 'EpochBasedRunner' if 'runner' not in cfg else cfg.runner[ - 'type'] - - train_dataloader_default_args = dict( - samples_per_gpu=2, - workers_per_gpu=2, - # `num_gpus` will be ignored if distributed - num_gpus=len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - runner_type=runner_type, - persistent_workers=False) - - train_loader_cfg = { - **train_dataloader_default_args, - **cfg.data.get('train_dataloader', {}) - } - - data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = build_ddp( - model, - cfg.device, - device_ids=[int(os.environ['LOCAL_RANK'])], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = build_dp(model, cfg.device, device_ids=cfg.gpu_ids) - - # build optimizer - auto_scale_lr(cfg, distributed, logger) - optimizer = build_optimizer(model, cfg.optimizer) - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # an ugly workaround to make .log and .log.json filenames the same - runner.timestamp = timestamp - - # fp16 setting - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - optimizer_config = Fp16OptimizerHook( - **cfg.optimizer_config, **fp16_cfg, distributed=distributed) - elif distributed and 'type' not in cfg.optimizer_config: - optimizer_config = OptimizerHook(**cfg.optimizer_config) - else: - optimizer_config = cfg.optimizer_config - - # register hooks - runner.register_training_hooks( - cfg.lr_config, - optimizer_config, - cfg.checkpoint_config, - cfg.log_config, - cfg.get('momentum_config', None), - custom_hooks_config=cfg.get('custom_hooks', None)) - - if distributed: - if isinstance(runner, EpochBasedRunner): - runner.register_hook(DistSamplerSeedHook()) - - # register eval hooks - if validate: - val_dataloader_default_args = dict( - samples_per_gpu=1, - workers_per_gpu=2, - dist=distributed, - shuffle=False, - persistent_workers=False) - - val_dataloader_args = { - **val_dataloader_default_args, - **cfg.data.get('val_dataloader', {}) - } - # Support batch_size > 1 in validation - - if val_dataloader_args['samples_per_gpu'] > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.val.pipeline = replace_ImageToTensor( - cfg.data.val.pipeline) - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - - val_dataloader = build_dataloader(val_dataset, **val_dataloader_args) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 
'IterBasedRunner' - eval_hook = DistEvalHook if distributed else EvalHook - # In this PR (https://github.com/open-mmlab/mmcv/pull/1193), the - # priority of IterTimerHook has been modified from 'NORMAL' to 'LOW'. - runner.register_hook( - eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - resume_from = None - if cfg.resume_from is None and cfg.get('auto_resume'): - resume_from = find_latest_checkpoint(cfg.work_dir) - if resume_from is not None: - cfg.resume_from = resume_from - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/__init__.py deleted file mode 100644 index 7eca58cf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .anchor import * # noqa: F401, F403 -from .bbox import * # noqa: F401, F403 -from .data_structures import * # noqa: F401, F403 -from .evaluation import * # noqa: F401, F403 -from .hook import * # noqa: F401, F403 -from .mask import * # noqa: F401, F403 -from .post_processing import * # noqa: F401, F403 -from .utils import * # noqa: F401, F403 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/__init__.py deleted file mode 100644 index fcc7e4af..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .anchor_generator import (AnchorGenerator, LegacyAnchorGenerator, - YOLOAnchorGenerator) -from .builder import (ANCHOR_GENERATORS, PRIOR_GENERATORS, - build_anchor_generator, build_prior_generator) -from .point_generator import MlvlPointGenerator, PointGenerator -from .utils import anchor_inside_flags, calc_region, images_to_levels - -__all__ = [ - 'AnchorGenerator', 'LegacyAnchorGenerator', 'anchor_inside_flags', - 'PointGenerator', 'images_to_levels', 'calc_region', - 'build_anchor_generator', 'ANCHOR_GENERATORS', 'YOLOAnchorGenerator', - 'build_prior_generator', 'PRIOR_GENERATORS', 'MlvlPointGenerator' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/anchor_generator.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/anchor_generator.py deleted file mode 100644 index 20886fbd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/anchor_generator.py +++ /dev/null @@ -1,866 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -import numpy as np -import torch -from torch.nn.modules.utils import _pair - -from .builder import PRIOR_GENERATORS - - -@PRIOR_GENERATORS.register_module() -class AnchorGenerator: - """Standard anchor generator for 2D anchor-based detectors. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels in order (w, h). - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int] | None): The basic sizes - of anchors in multiple levels. 
- If None is given, strides will be used as base_sizes. - (If strides are non square, the shortest stride is taken.) - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. - scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. If a list of tuple of - float is given, they will be used to shift the centers of anchors. - center_offset (float): The offset of center in proportion to anchors' - width and height. By default it is 0 in V2.0. - - Examples: - >>> from mmdet.core import AnchorGenerator - >>> self = AnchorGenerator([16], [1.], [1.], [9]) - >>> all_anchors = self.grid_priors([(2, 2)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]])] - >>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18]) - >>> all_anchors = self.grid_priors([(2, 2), (1, 1)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]]), \ - tensor([[-9., -9., 9., 9.]])] - """ - - def __init__(self, - strides, - ratios, - scales=None, - base_sizes=None, - scale_major=True, - octave_base_scale=None, - scales_per_octave=None, - centers=None, - center_offset=0.): - # check center and center_offset - if center_offset != 0: - assert centers is None, 'center cannot be set when center_offset' \ - f'!=0, {centers} is given.' 
- if not (0 <= center_offset <= 1): - raise ValueError('center_offset should be in range [0, 1], ' - f'{center_offset} is given.') - if centers is not None: - assert len(centers) == len(strides), \ - 'The number of strides should be the same as centers, got ' \ - f'{strides} and {centers}' - - # calculate base sizes of anchors - self.strides = [_pair(stride) for stride in strides] - self.base_sizes = [min(stride) for stride in self.strides - ] if base_sizes is None else base_sizes - assert len(self.base_sizes) == len(self.strides), \ - 'The number of strides should be the same as base sizes, got ' \ - f'{self.strides} and {self.base_sizes}' - - # calculate scales of anchors - assert ((octave_base_scale is not None - and scales_per_octave is not None) ^ (scales is not None)), \ - 'scales and octave_base_scale with scales_per_octave cannot' \ - ' be set at the same time' - if scales is not None: - self.scales = torch.Tensor(scales) - elif octave_base_scale is not None and scales_per_octave is not None: - octave_scales = np.array( - [2**(i / scales_per_octave) for i in range(scales_per_octave)]) - scales = octave_scales * octave_base_scale - self.scales = torch.Tensor(scales) - else: - raise ValueError('Either scales or octave_base_scale with ' - 'scales_per_octave should be set') - - self.octave_base_scale = octave_base_scale - self.scales_per_octave = scales_per_octave - self.ratios = torch.Tensor(ratios) - self.scale_major = scale_major - self.centers = centers - self.center_offset = center_offset - self.base_anchors = self.gen_base_anchors() - - @property - def num_base_anchors(self): - """list[int]: total number of base anchors in a feature grid""" - return self.num_base_priors - - @property - def num_base_priors(self): - """list[int]: The number of priors (anchors) at a point - on the feature grid""" - return [base_anchors.size(0) for base_anchors in self.base_anchors] - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.strides) - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. - """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors( - base_size, - scales=self.scales, - ratios=self.ratios, - center=center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, - base_size, - scales, - ratios, - center=None): - """Generate base anchors of a single level. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between between the height - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. 
- """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * w - y_center = self.center_offset * h - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * ws, y_center - 0.5 * hs, x_center + 0.5 * ws, - y_center + 0.5 * hs - ] - base_anchors = torch.stack(base_anchors, dim=-1) - - return base_anchors - - def _meshgrid(self, x, y, row_major=True): - """Generate mesh grid of x and y. - - Args: - x (torch.Tensor): Grids of x dimension. - y (torch.Tensor): Grids of y dimension. - row_major (bool, optional): Whether to return y grids first. - Defaults to True. - - Returns: - tuple[torch.Tensor]: The mesh grids of x and y. - """ - # use shape instead of len to keep tracing while exporting to onnx - xx = x.repeat(y.shape[0]) - yy = y.view(-1, 1).repeat(1, x.shape[0]).view(-1) - if row_major: - return xx, yy - else: - return yy, xx - - def grid_priors(self, featmap_sizes, dtype=torch.float32, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - dtype (:obj:`torch.dtype`): Dtype of priors. - Default: torch.float32. - device (str): The device where the anchors will be put on. - - Return: - list[torch.Tensor]: Anchors in multiple feature levels. \ - The sizes of each tensor should be [N, 4], where \ - N = width * height * num_base_anchors, width and height \ - are the sizes of the corresponding feature level, \ - num_base_anchors is the number of anchors for that level. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_priors( - featmap_sizes[i], level_idx=i, dtype=dtype, device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_priors(self, - featmap_size, - level_idx, - dtype=torch.float32, - device='cuda'): - """Generate grid anchors of a single level. - - Note: - This function is usually called by method ``self.grid_priors``. - - Args: - featmap_size (tuple[int]): Size of the feature maps. - level_idx (int): The index of corresponding feature map level. - dtype (obj:`torch.dtype`): Date type of points.Defaults to - ``torch.float32``. - device (str, optional): The device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature maps. - """ - - base_anchors = self.base_anchors[level_idx].to(device).to(dtype) - feat_h, feat_w = featmap_size - stride_w, stride_h = self.strides[level_idx] - # First create Range with the default dtype, than convert to - # target `dtype` for onnx exporting. 
- shift_x = torch.arange(0, feat_w, device=device).to(dtype) * stride_w - shift_y = torch.arange(0, feat_h, device=device).to(dtype) * stride_h - - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1) - # first feat_w elements correspond to the first row of shifts - # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get - # shifted anchors (K, A, 4), reshape to (K*A, 4) - - all_anchors = base_anchors[None, :, :] + shifts[:, None, :] - all_anchors = all_anchors.view(-1, 4) - # first A rows correspond to A anchors of (0, 0) in feature map, - # then (0, 1), (0, 2), ... - return all_anchors - - def sparse_priors(self, - prior_idxs, - featmap_size, - level_idx, - dtype=torch.float32, - device='cuda'): - """Generate sparse anchors according to the ``prior_idxs``. - - Args: - prior_idxs (Tensor): The index of corresponding anchors - in the feature map. - featmap_size (tuple[int]): feature map size arrange as (h, w). - level_idx (int): The level index of corresponding feature - map. - dtype (obj:`torch.dtype`): Date type of points.Defaults to - ``torch.float32``. - device (obj:`torch.device`): The device where the points is - located. - Returns: - Tensor: Anchor with shape (N, 4), N should be equal to - the length of ``prior_idxs``. - """ - - height, width = featmap_size - num_base_anchors = self.num_base_anchors[level_idx] - base_anchor_id = prior_idxs % num_base_anchors - x = (prior_idxs // - num_base_anchors) % width * self.strides[level_idx][0] - y = (prior_idxs // width // - num_base_anchors) % height * self.strides[level_idx][1] - priors = torch.stack([x, y, x, y], 1).to(dtype).to(device) + \ - self.base_anchors[level_idx][base_anchor_id, :].to(device) - - return priors - - def grid_anchors(self, featmap_sizes, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - device (str): Device where the anchors will be put on. - - Return: - list[torch.Tensor]: Anchors in multiple feature levels. \ - The sizes of each tensor should be [N, 4], where \ - N = width * height * num_base_anchors, width and height \ - are the sizes of the corresponding feature level, \ - num_base_anchors is the number of anchors for that level. - """ - warnings.warn('``grid_anchors`` would be deprecated soon. ' - 'Please use ``grid_priors`` ') - - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_anchors( - self.base_anchors[i].to(device), - featmap_sizes[i], - self.strides[i], - device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_anchors(self, - base_anchors, - featmap_size, - stride=(16, 16), - device='cuda'): - """Generate grid anchors of a single level. - - Note: - This function is usually called by method ``self.grid_anchors``. - - Args: - base_anchors (torch.Tensor): The base anchors of a feature grid. - featmap_size (tuple[int]): Size of the feature maps. - stride (tuple[int], optional): Stride of the feature map in order - (w, h). Defaults to (16, 16). - device (str, optional): Device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature maps. - """ - - warnings.warn( - '``single_level_grid_anchors`` would be deprecated soon. 
' - 'Please use ``single_level_grid_priors`` ') - - # keep featmap_size as Tensor instead of int, so that we - # can convert to ONNX correctly - feat_h, feat_w = featmap_size - shift_x = torch.arange(0, feat_w, device=device) * stride[0] - shift_y = torch.arange(0, feat_h, device=device) * stride[1] - - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1) - shifts = shifts.type_as(base_anchors) - # first feat_w elements correspond to the first row of shifts - # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get - # shifted anchors (K, A, 4), reshape to (K*A, 4) - - all_anchors = base_anchors[None, :, :] + shifts[:, None, :] - all_anchors = all_anchors.view(-1, 4) - # first A rows correspond to A anchors of (0, 0) in feature map, - # then (0, 1), (0, 2), ... - return all_anchors - - def valid_flags(self, featmap_sizes, pad_shape, device='cuda'): - """Generate valid flags of anchors in multiple feature levels. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in - multiple feature levels. - pad_shape (tuple): The padded shape of the image. - device (str): Device where the anchors will be put on. - - Return: - list(torch.Tensor): Valid flags of anchors in multiple levels. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - feat_h, feat_w = featmap_sizes[i] - h, w = pad_shape[:2] - valid_feat_h = min(int(np.ceil(h / anchor_stride[1])), feat_h) - valid_feat_w = min(int(np.ceil(w / anchor_stride[0])), feat_w) - flags = self.single_level_valid_flags((feat_h, feat_w), - (valid_feat_h, valid_feat_w), - self.num_base_anchors[i], - device=device) - multi_level_flags.append(flags) - return multi_level_flags - - def single_level_valid_flags(self, - featmap_size, - valid_size, - num_base_anchors, - device='cuda'): - """Generate the valid flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps, arrange - as (h, w). - valid_size (tuple[int]): The valid size of the feature maps. - num_base_anchors (int): The number of base anchors. - device (str, optional): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. 
- """ - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - valid = valid[:, None].expand(valid.size(0), - num_base_anchors).contiguous().view(-1) - return valid - - def __repr__(self): - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}octave_base_scale=' - repr_str += f'{self.octave_base_scale},\n' - repr_str += f'{indent_str}scales_per_octave=' - repr_str += f'{self.scales_per_octave},\n' - repr_str += f'{indent_str}num_levels={self.num_levels}\n' - repr_str += f'{indent_str}centers={self.centers},\n' - repr_str += f'{indent_str}center_offset={self.center_offset})' - return repr_str - - -@PRIOR_GENERATORS.register_module() -class SSDAnchorGenerator(AnchorGenerator): - """Anchor generator for SSD. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - min_sizes (list[float]): The list of minimum anchor sizes on each - level. - max_sizes (list[float]): The list of maximum anchor sizes on each - level. - basesize_ratio_range (tuple(float)): Ratio range of anchors. Being - used when not setting min_sizes and max_sizes. - input_size (int): Size of feature map, 300 for SSD300, 512 for - SSD512. Being used when not setting min_sizes and max_sizes. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. It is always set to be False in SSD. - """ - - def __init__(self, - strides, - ratios, - min_sizes=None, - max_sizes=None, - basesize_ratio_range=(0.15, 0.9), - input_size=300, - scale_major=True): - assert len(strides) == len(ratios) - assert not (min_sizes is None) ^ (max_sizes is None) - self.strides = [_pair(stride) for stride in strides] - self.centers = [(stride[0] / 2., stride[1] / 2.) 
- for stride in self.strides] - - if min_sizes is None and max_sizes is None: - # use hard code to generate SSD anchors - self.input_size = input_size - assert mmcv.is_tuple_of(basesize_ratio_range, float) - self.basesize_ratio_range = basesize_ratio_range - # calculate anchor ratios and sizes - min_ratio, max_ratio = basesize_ratio_range - min_ratio = int(min_ratio * 100) - max_ratio = int(max_ratio * 100) - step = int(np.floor(max_ratio - min_ratio) / (self.num_levels - 2)) - min_sizes = [] - max_sizes = [] - for ratio in range(int(min_ratio), int(max_ratio) + 1, step): - min_sizes.append(int(self.input_size * ratio / 100)) - max_sizes.append(int(self.input_size * (ratio + step) / 100)) - if self.input_size == 300: - if basesize_ratio_range[0] == 0.15: # SSD300 COCO - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - elif basesize_ratio_range[0] == 0.2: # SSD300 VOC - min_sizes.insert(0, int(self.input_size * 10 / 100)) - max_sizes.insert(0, int(self.input_size * 20 / 100)) - else: - raise ValueError( - 'basesize_ratio_range[0] should be either 0.15' - 'or 0.2 when input_size is 300, got ' - f'{basesize_ratio_range[0]}.') - elif self.input_size == 512: - if basesize_ratio_range[0] == 0.1: # SSD512 COCO - min_sizes.insert(0, int(self.input_size * 4 / 100)) - max_sizes.insert(0, int(self.input_size * 10 / 100)) - elif basesize_ratio_range[0] == 0.15: # SSD512 VOC - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - else: - raise ValueError( - 'When not setting min_sizes and max_sizes,' - 'basesize_ratio_range[0] should be either 0.1' - 'or 0.15 when input_size is 512, got' - f' {basesize_ratio_range[0]}.') - else: - raise ValueError( - 'Only support 300 or 512 in SSDAnchorGenerator when ' - 'not setting min_sizes and max_sizes, ' - f'got {self.input_size}.') - - assert len(min_sizes) == len(max_sizes) == len(strides) - - anchor_ratios = [] - anchor_scales = [] - for k in range(len(self.strides)): - scales = [1., np.sqrt(max_sizes[k] / min_sizes[k])] - anchor_ratio = [1.] - for r in ratios[k]: - anchor_ratio += [1 / r, r] # 4 or 6 ratio - anchor_ratios.append(torch.Tensor(anchor_ratio)) - anchor_scales.append(torch.Tensor(scales)) - - self.base_sizes = min_sizes - self.scales = anchor_scales - self.ratios = anchor_ratios - self.scale_major = scale_major - self.center_offset = 0 - self.base_anchors = self.gen_base_anchors() - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. 
- """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - base_anchors = self.gen_single_level_base_anchors( - base_size, - scales=self.scales[i], - ratios=self.ratios[i], - center=self.centers[i]) - indices = list(range(len(self.ratios[i]))) - indices.insert(1, len(indices)) - base_anchors = torch.index_select(base_anchors, 0, - torch.LongTensor(indices)) - multi_level_base_anchors.append(base_anchors) - return multi_level_base_anchors - - def __repr__(self): - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}input_size={self.input_size},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}num_levels={self.num_levels},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}basesize_ratio_range=' - repr_str += f'{self.basesize_ratio_range})' - return repr_str - - -@PRIOR_GENERATORS.register_module() -class LegacyAnchorGenerator(AnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - Note: - Difference to the V2.0 anchor generator: - - 1. The center offset of V1.x anchors are set to be 0.5 rather than 0. - 2. The width/height are minused by 1 when calculating the anchors' \ - centers and corners to meet the V1.x coordinate system. - 3. The anchors' corners are quantized. - - Args: - strides (list[int] | list[tuple[int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int]): The basic sizes of anchors in multiple levels. - If None is given, strides will be used to generate base_sizes. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. - scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. It a list of float - is given, this list will be used to shift the centers of anchors. - center_offset (float): The offset of center in proportion to anchors' - width and height. By default it is 0.5 in V2.0 but it should be 0.5 - in v1.x models. - - Examples: - >>> from mmdet.core import LegacyAnchorGenerator - >>> self = LegacyAnchorGenerator( - >>> [16], [1.], [1.], [9], center_offset=0.5) - >>> all_anchors = self.grid_anchors(((2, 2),), device='cpu') - >>> print(all_anchors) - [tensor([[ 0., 0., 8., 8.], - [16., 0., 24., 8.], - [ 0., 16., 8., 24.], - [16., 16., 24., 24.]])] - """ - - def gen_single_level_base_anchors(self, - base_size, - scales, - ratios, - center=None): - """Generate base anchors of a single level. 
- - Note: - The width/height of anchors are minused by 1 when calculating \ - the centers and corners to meet the V1.x coordinate system. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between between the height. - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature map. - """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * (w - 1) - y_center = self.center_offset * (h - 1) - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * (ws - 1), y_center - 0.5 * (hs - 1), - x_center + 0.5 * (ws - 1), y_center + 0.5 * (hs - 1) - ] - base_anchors = torch.stack(base_anchors, dim=-1).round() - - return base_anchors - - -@PRIOR_GENERATORS.register_module() -class LegacySSDAnchorGenerator(SSDAnchorGenerator, LegacyAnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - The difference between `LegacySSDAnchorGenerator` and `SSDAnchorGenerator` - can be found in `LegacyAnchorGenerator`. - """ - - def __init__(self, - strides, - ratios, - basesize_ratio_range, - input_size=300, - scale_major=True): - super(LegacySSDAnchorGenerator, self).__init__( - strides=strides, - ratios=ratios, - basesize_ratio_range=basesize_ratio_range, - input_size=input_size, - scale_major=scale_major) - self.centers = [((stride - 1) / 2., (stride - 1) / 2.) - for stride in strides] - self.base_anchors = self.gen_base_anchors() - - -@PRIOR_GENERATORS.register_module() -class YOLOAnchorGenerator(AnchorGenerator): - """Anchor generator for YOLO. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - base_sizes (list[list[tuple[int, int]]]): The basic sizes - of anchors in multiple levels. - """ - - def __init__(self, strides, base_sizes): - self.strides = [_pair(stride) for stride in strides] - self.centers = [(stride[0] / 2., stride[1] / 2.) - for stride in self.strides] - self.base_sizes = [] - num_anchor_per_level = len(base_sizes[0]) - for base_sizes_per_level in base_sizes: - assert num_anchor_per_level == len(base_sizes_per_level) - self.base_sizes.append( - [_pair(base_size) for base_size in base_sizes_per_level]) - self.base_anchors = self.gen_base_anchors() - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.base_sizes) - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. 
- """ - multi_level_base_anchors = [] - for i, base_sizes_per_level in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors(base_sizes_per_level, - center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, base_sizes_per_level, center=None): - """Generate base anchors of a single level. - - Args: - base_sizes_per_level (list[tuple[int, int]]): Basic sizes of - anchors. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. - """ - x_center, y_center = center - base_anchors = [] - for base_size in base_sizes_per_level: - w, h = base_size - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchor = torch.Tensor([ - x_center - 0.5 * w, y_center - 0.5 * h, x_center + 0.5 * w, - y_center + 0.5 * h - ]) - base_anchors.append(base_anchor) - base_anchors = torch.stack(base_anchors, dim=0) - - return base_anchors - - def responsible_flags(self, featmap_sizes, gt_bboxes, device='cuda'): - """Generate responsible anchor flags of grid cells in multiple scales. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in multiple - feature levels. - gt_bboxes (Tensor): Ground truth boxes, shape (n, 4). - device (str): Device where the anchors will be put on. - - Return: - list(torch.Tensor): responsible flags of anchors in multiple level - """ - assert self.num_levels == len(featmap_sizes) - multi_level_responsible_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - flags = self.single_level_responsible_flags( - featmap_sizes[i], - gt_bboxes, - anchor_stride, - self.num_base_anchors[i], - device=device) - multi_level_responsible_flags.append(flags) - return multi_level_responsible_flags - - def single_level_responsible_flags(self, - featmap_size, - gt_bboxes, - stride, - num_base_anchors, - device='cuda'): - """Generate the responsible flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps. - gt_bboxes (Tensor): Ground truth boxes, shape (n, 4). - stride (tuple(int)): stride of current level - num_base_anchors (int): The number of base anchors. - device (str, optional): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. 
- """ - feat_h, feat_w = featmap_size - gt_bboxes_cx = ((gt_bboxes[:, 0] + gt_bboxes[:, 2]) * 0.5).to(device) - gt_bboxes_cy = ((gt_bboxes[:, 1] + gt_bboxes[:, 3]) * 0.5).to(device) - gt_bboxes_grid_x = torch.floor(gt_bboxes_cx / stride[0]).long() - gt_bboxes_grid_y = torch.floor(gt_bboxes_cy / stride[1]).long() - - # row major indexing - gt_bboxes_grid_idx = gt_bboxes_grid_y * feat_w + gt_bboxes_grid_x - - responsible_grid = torch.zeros( - feat_h * feat_w, dtype=torch.uint8, device=device) - responsible_grid[gt_bboxes_grid_idx] = 1 - - responsible_grid = responsible_grid[:, None].expand( - responsible_grid.size(0), num_base_anchors).contiguous().view(-1) - return responsible_grid diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/builder.py deleted file mode 100644 index ddb25ad3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/builder.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -from mmcv.utils import Registry, build_from_cfg - -PRIOR_GENERATORS = Registry('Generator for anchors and points') - -ANCHOR_GENERATORS = PRIOR_GENERATORS - - -def build_prior_generator(cfg, default_args=None): - return build_from_cfg(cfg, PRIOR_GENERATORS, default_args) - - -def build_anchor_generator(cfg, default_args=None): - warnings.warn( - '``build_anchor_generator`` would be deprecated soon, please use ' - '``build_prior_generator`` ') - return build_prior_generator(cfg, default_args=default_args) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/point_generator.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/point_generator.py deleted file mode 100644 index cc9c3887..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/point_generator.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from torch.nn.modules.utils import _pair - -from .builder import PRIOR_GENERATORS - - -@PRIOR_GENERATORS.register_module() -class PointGenerator: - - def _meshgrid(self, x, y, row_major=True): - xx = x.repeat(len(y)) - yy = y.view(-1, 1).repeat(1, len(x)).view(-1) - if row_major: - return xx, yy - else: - return yy, xx - - def grid_points(self, featmap_size, stride=16, device='cuda'): - feat_h, feat_w = featmap_size - shift_x = torch.arange(0., feat_w, device=device) * stride - shift_y = torch.arange(0., feat_h, device=device) * stride - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - stride = shift_x.new_full((shift_xx.shape[0], ), stride) - shifts = torch.stack([shift_xx, shift_yy, stride], dim=-1) - all_points = shifts.to(device) - return all_points - - def valid_flags(self, featmap_size, valid_size, device='cuda'): - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - return valid - - -@PRIOR_GENERATORS.register_module() -class MlvlPointGenerator: - """Standard points generator for multi-level (Mlvl) feature maps in 2D - points-based detectors. 
- - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels in order (w, h). - offset (float): The offset of points, the value is normalized with - corresponding stride. Defaults to 0.5. - """ - - def __init__(self, strides, offset=0.5): - self.strides = [_pair(stride) for stride in strides] - self.offset = offset - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.strides) - - @property - def num_base_priors(self): - """list[int]: The number of priors (points) at a point - on the feature grid""" - return [1 for _ in range(len(self.strides))] - - def _meshgrid(self, x, y, row_major=True): - yy, xx = torch.meshgrid(y, x) - if row_major: - # warning .flatten() would cause error in ONNX exporting - # have to use reshape here - return xx.reshape(-1), yy.reshape(-1) - - else: - return yy.reshape(-1), xx.reshape(-1) - - def grid_priors(self, - featmap_sizes, - dtype=torch.float32, - device='cuda', - with_stride=False): - """Generate grid points of multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels, each size arrange as - as (h, w). - dtype (:obj:`dtype`): Dtype of priors. Default: torch.float32. - device (str): The device where the anchors will be put on. - with_stride (bool): Whether to concatenate the stride to - the last dimension of points. - - Return: - list[torch.Tensor]: Points of multiple feature levels. - The sizes of each tensor should be (N, 2) when with stride is - ``False``, where N = width * height, width and height - are the sizes of the corresponding feature level, - and the last dimension 2 represent (coord_x, coord_y), - otherwise the shape should be (N, 4), - and the last dimension 4 represent - (coord_x, coord_y, stride_w, stride_h). - """ - - assert self.num_levels == len(featmap_sizes) - multi_level_priors = [] - for i in range(self.num_levels): - priors = self.single_level_grid_priors( - featmap_sizes[i], - level_idx=i, - dtype=dtype, - device=device, - with_stride=with_stride) - multi_level_priors.append(priors) - return multi_level_priors - - def single_level_grid_priors(self, - featmap_size, - level_idx, - dtype=torch.float32, - device='cuda', - with_stride=False): - """Generate grid Points of a single level. - - Note: - This function is usually called by method ``self.grid_priors``. - - Args: - featmap_size (tuple[int]): Size of the feature maps, arrange as - (h, w). - level_idx (int): The index of corresponding feature map level. - dtype (:obj:`dtype`): Dtype of priors. Default: torch.float32. - device (str, optional): The device the tensor will be put on. - Defaults to 'cuda'. - with_stride (bool): Concatenate the stride to the last dimension - of points. - - Return: - Tensor: Points of single feature levels. - The shape of tensor should be (N, 2) when with stride is - ``False``, where N = width * height, width and height - are the sizes of the corresponding feature level, - and the last dimension 2 represent (coord_x, coord_y), - otherwise the shape should be (N, 4), - and the last dimension 4 represent - (coord_x, coord_y, stride_w, stride_h). 
- """ - feat_h, feat_w = featmap_size - stride_w, stride_h = self.strides[level_idx] - shift_x = (torch.arange(0, feat_w, device=device) + - self.offset) * stride_w - # keep featmap_size as Tensor instead of int, so that we - # can convert to ONNX correctly - shift_x = shift_x.to(dtype) - - shift_y = (torch.arange(0, feat_h, device=device) + - self.offset) * stride_h - # keep featmap_size as Tensor instead of int, so that we - # can convert to ONNX correctly - shift_y = shift_y.to(dtype) - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - if not with_stride: - shifts = torch.stack([shift_xx, shift_yy], dim=-1) - else: - # use `shape[0]` instead of `len(shift_xx)` for ONNX export - stride_w = shift_xx.new_full((shift_xx.shape[0], ), - stride_w).to(dtype) - stride_h = shift_xx.new_full((shift_yy.shape[0], ), - stride_h).to(dtype) - shifts = torch.stack([shift_xx, shift_yy, stride_w, stride_h], - dim=-1) - all_points = shifts.to(device) - return all_points - - def valid_flags(self, featmap_sizes, pad_shape, device='cuda'): - """Generate valid flags of points of multiple feature levels. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in - multiple feature levels, each size arrange as - as (h, w). - pad_shape (tuple(int)): The padded shape of the image, - arrange as (h, w). - device (str): The device where the anchors will be put on. - - Return: - list(torch.Tensor): Valid flags of points of multiple levels. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_flags = [] - for i in range(self.num_levels): - point_stride = self.strides[i] - feat_h, feat_w = featmap_sizes[i] - h, w = pad_shape[:2] - valid_feat_h = min(int(np.ceil(h / point_stride[1])), feat_h) - valid_feat_w = min(int(np.ceil(w / point_stride[0])), feat_w) - flags = self.single_level_valid_flags((feat_h, feat_w), - (valid_feat_h, valid_feat_w), - device=device) - multi_level_flags.append(flags) - return multi_level_flags - - def single_level_valid_flags(self, - featmap_size, - valid_size, - device='cuda'): - """Generate the valid flags of points of a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps, arrange as - as (h, w). - valid_size (tuple[int]): The valid size of the feature maps. - The size arrange as as (h, w). - device (str, optional): The device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each points in a single level \ - feature map. - """ - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - return valid - - def sparse_priors(self, - prior_idxs, - featmap_size, - level_idx, - dtype=torch.float32, - device='cuda'): - """Generate sparse points according to the ``prior_idxs``. - - Args: - prior_idxs (Tensor): The index of corresponding anchors - in the feature map. - featmap_size (tuple[int]): feature map size arrange as (w, h). - level_idx (int): The level index of corresponding feature - map. - dtype (obj:`torch.dtype`): Date type of points. Defaults to - ``torch.float32``. - device (obj:`torch.device`): The device where the points is - located. - Returns: - Tensor: Anchor with shape (N, 2), N should be equal to - the length of ``prior_idxs``. 
And last dimension - 2 represent (coord_x, coord_y). - """ - height, width = featmap_size - x = (prior_idxs % width + self.offset) * self.strides[level_idx][0] - y = ((prior_idxs // width) % height + - self.offset) * self.strides[level_idx][1] - prioris = torch.stack([x, y], 1).to(dtype) - prioris = prioris.to(device) - return prioris diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/utils.py deleted file mode 100644 index c2f20247..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/anchor/utils.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def images_to_levels(target, num_levels): - """Convert targets by image to targets by feature level. - - [target_img0, target_img1] -> [target_level0, target_level1, ...] - """ - target = torch.stack(target, 0) - level_targets = [] - start = 0 - for n in num_levels: - end = start + n - # level_targets.append(target[:, start:end].squeeze(0)) - level_targets.append(target[:, start:end]) - start = end - return level_targets - - -def anchor_inside_flags(flat_anchors, - valid_flags, - img_shape, - allowed_border=0): - """Check whether the anchors are inside the border. - - Args: - flat_anchors (torch.Tensor): Flatten anchors, shape (n, 4). - valid_flags (torch.Tensor): An existing valid flags of anchors. - img_shape (tuple(int)): Shape of current image. - allowed_border (int, optional): The border to allow the valid anchor. - Defaults to 0. - - Returns: - torch.Tensor: Flags indicating whether the anchors are inside a \ - valid range. - """ - img_h, img_w = img_shape[:2] - if allowed_border >= 0: - inside_flags = valid_flags & \ - (flat_anchors[:, 0] >= -allowed_border) & \ - (flat_anchors[:, 1] >= -allowed_border) & \ - (flat_anchors[:, 2] < img_w + allowed_border) & \ - (flat_anchors[:, 3] < img_h + allowed_border) - else: - inside_flags = valid_flags - return inside_flags - - -def calc_region(bbox, ratio, featmap_size=None): - """Calculate a proportional bbox region. - - The bbox center are fixed and the new h' and w' is h * ratio and w * ratio. - - Args: - bbox (Tensor): Bboxes to calculate regions, shape (n, 4). - ratio (float): Ratio of the output region. - featmap_size (tuple): Feature map size used for clipping the boundary. - - Returns: - tuple: x1, y1, x2, y2 - """ - x1 = torch.round((1 - ratio) * bbox[0] + ratio * bbox[2]).long() - y1 = torch.round((1 - ratio) * bbox[1] + ratio * bbox[3]).long() - x2 = torch.round(ratio * bbox[0] + (1 - ratio) * bbox[2]).long() - y2 = torch.round(ratio * bbox[1] + (1 - ratio) * bbox[3]).long() - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/__init__.py deleted file mode 100644 index 371eba19..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .assigners import (AssignResult, BaseAssigner, CenterRegionAssigner, - MaxIoUAssigner, RegionAssigner) -from .builder import build_assigner, build_bbox_coder, build_sampler -from .coder import (BaseBBoxCoder, DeltaXYWHBBoxCoder, DistancePointBBoxCoder, - PseudoBBoxCoder, TBLRBBoxCoder) -from .iou_calculators import BboxOverlaps2D, bbox_overlaps -from .samplers import (BaseSampler, CombinedSampler, - InstanceBalancedPosSampler, IoUBalancedNegSampler, - OHEMSampler, PseudoSampler, RandomSampler, - SamplingResult, ScoreHLRSampler) -from .transforms import (bbox2distance, bbox2result, bbox2roi, - bbox_cxcywh_to_xyxy, bbox_flip, bbox_mapping, - bbox_mapping_back, bbox_rescale, bbox_xyxy_to_cxcywh, - distance2bbox, find_inside_bboxes, roi2bbox) - -__all__ = [ - 'bbox_overlaps', 'BboxOverlaps2D', 'BaseAssigner', 'MaxIoUAssigner', - 'AssignResult', 'BaseSampler', 'PseudoSampler', 'RandomSampler', - 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler', - 'OHEMSampler', 'SamplingResult', 'ScoreHLRSampler', 'build_assigner', - 'build_sampler', 'bbox_flip', 'bbox_mapping', 'bbox_mapping_back', - 'bbox2roi', 'roi2bbox', 'bbox2result', 'distance2bbox', 'bbox2distance', - 'build_bbox_coder', 'BaseBBoxCoder', 'PseudoBBoxCoder', - 'DeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'DistancePointBBoxCoder', - 'CenterRegionAssigner', 'bbox_rescale', 'bbox_cxcywh_to_xyxy', - 'bbox_xyxy_to_cxcywh', 'RegionAssigner', 'find_inside_bboxes' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/__init__.py deleted file mode 100644 index 5eaf7fa3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .approx_max_iou_assigner import ApproxMaxIoUAssigner -from .assign_result import AssignResult -from .atss_assigner import ATSSAssigner -from .base_assigner import BaseAssigner -from .center_region_assigner import CenterRegionAssigner -from .grid_assigner import GridAssigner -from .hungarian_assigner import HungarianAssigner -from .mask_hungarian_assigner import MaskHungarianAssigner -from .max_iou_assigner import MaxIoUAssigner -from .point_assigner import PointAssigner -from .region_assigner import RegionAssigner -from .sim_ota_assigner import SimOTAAssigner -from .task_aligned_assigner import TaskAlignedAssigner -from .uniform_assigner import UniformAssigner - -__all__ = [ - 'BaseAssigner', 'MaxIoUAssigner', 'ApproxMaxIoUAssigner', 'AssignResult', - 'PointAssigner', 'ATSSAssigner', 'CenterRegionAssigner', 'GridAssigner', - 'HungarianAssigner', 'RegionAssigner', 'UniformAssigner', 'SimOTAAssigner', - 'TaskAlignedAssigner', 'MaskHungarianAssigner' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/approx_max_iou_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/approx_max_iou_assigner.py deleted file mode 100644 index 304d09c3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/approx_max_iou_assigner.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .max_iou_assigner import MaxIoUAssigner - - -@BBOX_ASSIGNERS.register_module() -class ApproxMaxIoUAssigner(MaxIoUAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with an integer indicating the ground truth - index. (semi-positive index: gt label (0-based), -1: background) - - - -1: negative sample, no assigned gt - - semi-positive integer: positive sample, index (0-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - ignore_iof_thr (float): IoF threshold for ignoring bboxes (if - `gt_bboxes_ignore` is specified). Negative values mean not - ignoring any bboxes. - ignore_wrt_candidates (bool): Whether to compute the iof between - `bboxes` and `gt_bboxes_ignore`, or the contrary. - match_low_quality (bool): Whether to allow quality matches. This is - usually allowed for RPN and single stage detectors, but not allowed - in the second stage. - gpu_assign_thr (int): The upper bound of the number of GT for GPU - assign. When the number of gt is above this threshold, will assign - on CPU device. Negative values mean not assign on CPU. - """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - ignore_iof_thr=-1, - ignore_wrt_candidates=True, - match_low_quality=True, - gpu_assign_thr=-1, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.ignore_iof_thr = ignore_iof_thr - self.ignore_wrt_candidates = ignore_wrt_candidates - self.gpu_assign_thr = gpu_assign_thr - self.match_low_quality = match_low_quality - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, - approxs, - squares, - approxs_per_octave, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - """Assign gt to approxs. - - This method assign a gt bbox to each group of approxs (bboxes), - each group of approxs is represent by a base approx (bbox) and - will be assigned with -1, or a semi-positive number. - background_label (-1) means negative sample, - semi-positive number is the index (0-based) of assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to background_label (-1) - 2. use the max IoU of each group of approxs to assign - 2. assign proposals whose iou with all gts < neg_iou_thr to background - 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals (may be more than - one) to itself - - Args: - approxs (Tensor): Bounding boxes to be assigned, - shape(approxs_per_octave*n, 4). - squares (Tensor): Base Bounding boxes to be assigned, - shape(n, 4). - approxs_per_octave (int): number of approxs per octave - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. 
- gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_squares = squares.size(0) - num_gts = gt_bboxes.size(0) - - if num_squares == 0 or num_gts == 0: - # No predictions and/or truth, return empty assignment - overlaps = approxs.new(num_gts, num_squares) - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - return assign_result - - # re-organize anchors by approxs_per_octave x num_squares - approxs = torch.transpose( - approxs.view(num_squares, approxs_per_octave, 4), 0, - 1).contiguous().view(-1, 4) - assign_on_cpu = True if (self.gpu_assign_thr > 0) and ( - num_gts > self.gpu_assign_thr) else False - # compute overlap and assign gt on CPU when number of GT is large - if assign_on_cpu: - device = approxs.device - approxs = approxs.cpu() - gt_bboxes = gt_bboxes.cpu() - if gt_bboxes_ignore is not None: - gt_bboxes_ignore = gt_bboxes_ignore.cpu() - if gt_labels is not None: - gt_labels = gt_labels.cpu() - all_overlaps = self.iou_calculator(approxs, gt_bboxes) - - overlaps, _ = all_overlaps.view(approxs_per_octave, num_squares, - num_gts).max(dim=0) - overlaps = torch.transpose(overlaps, 0, 1) - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and squares.numel() > 0): - if self.ignore_wrt_candidates: - ignore_overlaps = self.iou_calculator( - squares, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - else: - ignore_overlaps = self.iou_calculator( - gt_bboxes_ignore, squares, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=0) - overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1 - - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - if assign_on_cpu: - assign_result.gt_inds = assign_result.gt_inds.to(device) - assign_result.max_overlaps = assign_result.max_overlaps.to(device) - if assign_result.labels is not None: - assign_result.labels = assign_result.labels.to(device) - return assign_result diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/assign_result.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/assign_result.py deleted file mode 100644 index 488010b5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/assign_result.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.utils import util_mixins - - -class AssignResult(util_mixins.NiceRepr): - """Stores assignments between predicted and truth boxes. - - Attributes: - num_gts (int): the number of truth boxes considered when computing this - assignment - - gt_inds (LongTensor): for each predicted box indicates the 1-based - index of the assigned truth box. 0 means unassigned and -1 means - ignore. - - max_overlaps (FloatTensor): the iou between the predicted box and its - assigned truth box. - - labels (None | LongTensor): If specified, for each predicted box - indicates the category label of the assigned truth box. - - Example: - >>> # An assign result between 4 predicted boxes and 9 true boxes - >>> # where only two boxes were assigned. 
- >>> num_gts = 9 - >>> max_overlaps = torch.LongTensor([0, .5, .9, 0]) - >>> gt_inds = torch.LongTensor([-1, 1, 2, 0]) - >>> labels = torch.LongTensor([0, 3, 4, 0]) - >>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - >>> # Force addition of gt labels (when adding gt as proposals) - >>> new_labels = torch.LongTensor([3, 4, 5]) - >>> self.add_gt_(new_labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - """ - - def __init__(self, num_gts, gt_inds, max_overlaps, labels=None): - self.num_gts = num_gts - self.gt_inds = gt_inds - self.max_overlaps = max_overlaps - self.labels = labels - # Interface for possible user-defined properties - self._extra_properties = {} - - @property - def num_preds(self): - """int: the number of predictions in this assignment""" - return len(self.gt_inds) - - def set_extra_property(self, key, value): - """Set user-defined new property.""" - assert key not in self.info - self._extra_properties[key] = value - - def get_extra_property(self, key): - """Get user-defined property.""" - return self._extra_properties.get(key, None) - - @property - def info(self): - """dict: a dictionary of info about the object""" - basic_info = { - 'num_gts': self.num_gts, - 'num_preds': self.num_preds, - 'gt_inds': self.gt_inds, - 'max_overlaps': self.max_overlaps, - 'labels': self.labels, - } - basic_info.update(self._extra_properties) - return basic_info - - def __nice__(self): - """str: a "nice" summary string describing this assign result""" - parts = [] - parts.append(f'num_gts={self.num_gts!r}') - if self.gt_inds is None: - parts.append(f'gt_inds={self.gt_inds!r}') - else: - parts.append(f'gt_inds.shape={tuple(self.gt_inds.shape)!r}') - if self.max_overlaps is None: - parts.append(f'max_overlaps={self.max_overlaps!r}') - else: - parts.append('max_overlaps.shape=' - f'{tuple(self.max_overlaps.shape)!r}') - if self.labels is None: - parts.append(f'labels={self.labels!r}') - else: - parts.append(f'labels.shape={tuple(self.labels.shape)!r}') - return ', '.join(parts) - - @classmethod - def random(cls, **kwargs): - """Create random AssignResult for tests or debugging. - - Args: - num_preds: number of predicted boxes - num_gts: number of true boxes - p_ignore (float): probability of a predicted box assigned to an - ignored truth - p_assigned (float): probability of a predicted box not being - assigned - p_use_label (float | bool): with labels or not - rng (None | int | numpy.random.RandomState): seed or state - - Returns: - :obj:`AssignResult`: Randomly generated assign results. 
- - Example: - >>> from mmdet.core.bbox.assigners.assign_result import * # NOQA - >>> self = AssignResult.random() - >>> print(self.info) - """ - from mmdet.core.bbox import demodata - rng = demodata.ensure_rng(kwargs.get('rng', None)) - - num_gts = kwargs.get('num_gts', None) - num_preds = kwargs.get('num_preds', None) - p_ignore = kwargs.get('p_ignore', 0.3) - p_assigned = kwargs.get('p_assigned', 0.7) - p_use_label = kwargs.get('p_use_label', 0.5) - num_classes = kwargs.get('p_use_label', 3) - - if num_gts is None: - num_gts = rng.randint(0, 8) - if num_preds is None: - num_preds = rng.randint(0, 16) - - if num_gts == 0: - max_overlaps = torch.zeros(num_preds, dtype=torch.float32) - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - if p_use_label is True or p_use_label < rng.rand(): - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = None - else: - import numpy as np - - # Create an overlap for each predicted box - max_overlaps = torch.from_numpy(rng.rand(num_preds)) - - # Construct gt_inds for each predicted box - is_assigned = torch.from_numpy(rng.rand(num_preds) < p_assigned) - # maximum number of assignments constraints - n_assigned = min(num_preds, min(num_gts, is_assigned.sum())) - - assigned_idxs = np.where(is_assigned)[0] - rng.shuffle(assigned_idxs) - assigned_idxs = assigned_idxs[0:n_assigned] - assigned_idxs.sort() - - is_assigned[:] = 0 - is_assigned[assigned_idxs] = True - - is_ignore = torch.from_numpy( - rng.rand(num_preds) < p_ignore) & is_assigned - - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - - true_idxs = np.arange(num_gts) - rng.shuffle(true_idxs) - true_idxs = torch.from_numpy(true_idxs) - gt_inds[is_assigned] = true_idxs[:n_assigned].long() - - gt_inds = torch.from_numpy( - rng.randint(1, num_gts + 1, size=num_preds)) - gt_inds[is_ignore] = -1 - gt_inds[~is_assigned] = 0 - max_overlaps[~is_assigned] = 0 - - if p_use_label is True or p_use_label < rng.rand(): - if num_classes == 0: - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = torch.from_numpy( - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - rng.randint(0, num_classes, size=num_preds)) - labels[~is_assigned] = 0 - else: - labels = None - - self = cls(num_gts, gt_inds, max_overlaps, labels) - return self - - def add_gt_(self, gt_labels): - """Add ground truth as assigned results. - - Args: - gt_labels (torch.Tensor): Labels of gt boxes - """ - self_inds = torch.arange( - 1, len(gt_labels) + 1, dtype=torch.long, device=gt_labels.device) - self.gt_inds = torch.cat([self_inds, self.gt_inds]) - - self.max_overlaps = torch.cat( - [self.max_overlaps.new_ones(len(gt_labels)), self.max_overlaps]) - - if self.labels is not None: - self.labels = torch.cat([gt_labels, self.labels]) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/atss_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/atss_assigner.py deleted file mode 100644 index 7b195303..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/atss_assigner.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class ATSSAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `0` or a positive integer - indicating the ground truth index. - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - topk (float): number of bbox selected in each level - """ - - def __init__(self, - topk, - iou_calculator=dict(type='BboxOverlaps2D'), - ignore_iof_thr=-1): - self.topk = topk - self.iou_calculator = build_iou_calculator(iou_calculator) - self.ignore_iof_thr = ignore_iof_thr - - # https://github.com/sfzhang15/ATSS/blob/master/atss_core/modeling/rpn/atss/loss.py - - def assign(self, - bboxes, - num_level_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - """Assign gt to bboxes. - - The assignment is done in following steps - - 1. compute iou between all bbox (bbox of all pyramid levels) and gt - 2. compute center distance between all bbox and gt - 3. on each pyramid level, for each gt, select k bbox whose center - are closest to the gt center, so we total select k*l bbox as - candidates for each gt - 4. get corresponding iou for the these candidates, and compute the - mean and std, set mean + std as the iou threshold - 5. select these candidates whose iou are greater than or equal to - the threshold as positive - 6. limit the positive sample's center in gt - - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - num_level_bboxes (List): num of bboxes in each level - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. 
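The selection rule the docstring above describes (per-level top-k candidates by center distance, then an adaptive IoU threshold of mean + std) can be illustrated with a short, self-contained PyTorch sketch. The helper names (`box_iou`, `atss_positive_mask`) and the single-function interface are assumptions for illustration, not the deleted module's API; step 6 (requiring the anchor center to lie inside the gt) is omitted for brevity.

```python
# Minimal sketch of ATSS candidate selection (steps 1-5 above); illustrative only.
import torch

def box_iou(a, b):
    """IoU between two sets of xyxy boxes, shapes (n, 4) and (k, 4)."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = torch.max(a[:, None, :2], b[None, :, :2])
    rb = torch.min(a[:, None, 2:], b[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-6)

def atss_positive_mask(anchors, num_level_anchors, gt_bboxes, topk=9):
    """Return a (num_anchors, num_gts) bool mask of ATSS positives."""
    overlaps = box_iou(anchors, gt_bboxes)                    # (n, k)
    anchor_ctr = (anchors[:, :2] + anchors[:, 2:]) / 2
    gt_ctr = (gt_bboxes[:, :2] + gt_bboxes[:, 2:]) / 2
    distances = torch.cdist(anchor_ctr, gt_ctr)               # (n, k)

    # Step 3: per pyramid level, keep the k anchors closest to each gt center.
    candidate_idxs, start = [], 0
    for num in num_level_anchors:
        end = start + num
        k = min(topk, num)
        _, idxs = distances[start:end].topk(k, dim=0, largest=False)
        candidate_idxs.append(idxs + start)
        start = end
    candidate_idxs = torch.cat(candidate_idxs, dim=0)         # (k*l, num_gts)

    # Steps 4-5: per-gt IoU threshold = mean + std of the candidates' IoUs.
    num_gts = gt_bboxes.size(0)
    cand_ious = overlaps[candidate_idxs, torch.arange(num_gts)]
    thr = cand_ious.mean(0) + cand_ious.std(0)

    pos = torch.zeros_like(overlaps, dtype=torch.bool)
    is_pos = cand_ious >= thr[None, :]
    for g in range(num_gts):
        pos[candidate_idxs[is_pos[:, g], g], g] = True
    return pos
```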
- """ - INF = 100000000 - bboxes = bboxes[:, :4] - num_gt, num_bboxes = gt_bboxes.size(0), bboxes.size(0) - - # compute iou between all bbox and gt - overlaps = self.iou_calculator(bboxes, gt_bboxes) - - # assign 0 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - 0, - dtype=torch.long) - - if num_gt == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gt == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - - # compute center distance between all bbox and gt - gt_cx = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2.0 - gt_cy = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2.0 - gt_points = torch.stack((gt_cx, gt_cy), dim=1) - - bboxes_cx = (bboxes[:, 0] + bboxes[:, 2]) / 2.0 - bboxes_cy = (bboxes[:, 1] + bboxes[:, 3]) / 2.0 - bboxes_points = torch.stack((bboxes_cx, bboxes_cy), dim=1) - - distances = (bboxes_points[:, None, :] - - gt_points[None, :, :]).pow(2).sum(-1).sqrt() - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0): - ignore_overlaps = self.iou_calculator( - bboxes, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - ignore_idxs = ignore_max_overlaps > self.ignore_iof_thr - distances[ignore_idxs, :] = INF - assigned_gt_inds[ignore_idxs] = -1 - - # Selecting candidates based on the center distance - candidate_idxs = [] - start_idx = 0 - for level, bboxes_per_level in enumerate(num_level_bboxes): - # on each pyramid level, for each gt, - # select k bbox whose center are closest to the gt center - end_idx = start_idx + bboxes_per_level - distances_per_level = distances[start_idx:end_idx, :] - selectable_k = min(self.topk, bboxes_per_level) - _, topk_idxs_per_level = distances_per_level.topk( - selectable_k, dim=0, largest=False) - candidate_idxs.append(topk_idxs_per_level + start_idx) - start_idx = end_idx - candidate_idxs = torch.cat(candidate_idxs, dim=0) - - # get corresponding iou for the these candidates, and compute the - # mean and std, set mean + std as the iou threshold - candidate_overlaps = overlaps[candidate_idxs, torch.arange(num_gt)] - overlaps_mean_per_gt = candidate_overlaps.mean(0) - overlaps_std_per_gt = candidate_overlaps.std(0) - overlaps_thr_per_gt = overlaps_mean_per_gt + overlaps_std_per_gt - - is_pos = candidate_overlaps >= overlaps_thr_per_gt[None, :] - - # limit the positive sample's center in gt - for gt_idx in range(num_gt): - candidate_idxs[:, gt_idx] += gt_idx * num_bboxes - ep_bboxes_cx = bboxes_cx.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - ep_bboxes_cy = bboxes_cy.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - candidate_idxs = candidate_idxs.view(-1) - - # calculate the left, top, right, bottom distance between positive - # bbox center and gt side - l_ = ep_bboxes_cx[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 0] - t_ = ep_bboxes_cy[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 1] - r_ = gt_bboxes[:, 2] - ep_bboxes_cx[candidate_idxs].view(-1, num_gt) - b_ = gt_bboxes[:, 3] - ep_bboxes_cy[candidate_idxs].view(-1, num_gt) - is_in_gts = torch.stack([l_, t_, r_, b_], dim=1).min(dim=1)[0] > 0.01 - is_pos = is_pos & is_in_gts - - # if an anchor box is assigned to 
multiple gts, - # the one with the highest IoU will be selected. - overlaps_inf = torch.full_like(overlaps, - -INF).t().contiguous().view(-1) - index = candidate_idxs.view(-1)[is_pos.view(-1)] - overlaps_inf[index] = overlaps.t().contiguous().view(-1)[index] - overlaps_inf = overlaps_inf.view(num_gt, -1).t() - - max_overlaps, argmax_overlaps = overlaps_inf.max(dim=1) - assigned_gt_inds[ - max_overlaps != -INF] = argmax_overlaps[max_overlaps != -INF] + 1 - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/base_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/base_assigner.py deleted file mode 100644 index 3c2d597a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/base_assigner.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BaseAssigner(metaclass=ABCMeta): - """Base assigner that assigns boxes to ground truth boxes.""" - - @abstractmethod - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign boxes to either a ground truth boxes or a negative boxes.""" diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/center_region_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/center_region_assigner.py deleted file mode 100644 index 86e78597..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/center_region_assigner.py +++ /dev/null @@ -1,336 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -def scale_boxes(bboxes, scale): - """Expand an array of boxes by a given scale. - - Args: - bboxes (Tensor): Shape (m, 4) - scale (float): The scale factor of bboxes - - Returns: - (Tensor): Shape (m, 4). Scaled bboxes - """ - assert bboxes.size(1) == 4 - w_half = (bboxes[:, 2] - bboxes[:, 0]) * .5 - h_half = (bboxes[:, 3] - bboxes[:, 1]) * .5 - x_c = (bboxes[:, 2] + bboxes[:, 0]) * .5 - y_c = (bboxes[:, 3] + bboxes[:, 1]) * .5 - - w_half *= scale - h_half *= scale - - boxes_scaled = torch.zeros_like(bboxes) - boxes_scaled[:, 0] = x_c - w_half - boxes_scaled[:, 2] = x_c + w_half - boxes_scaled[:, 1] = y_c - h_half - boxes_scaled[:, 3] = y_c + h_half - return boxes_scaled - - -def is_located_in(points, bboxes): - """Are points located in bboxes. - - Args: - points (Tensor): Points, shape: (m, 2). - bboxes (Tensor): Bounding boxes, shape: (n, 4). - - Return: - Tensor: Flags indicating if points are located in bboxes, shape: (m, n). 
- """ - assert points.size(1) == 2 - assert bboxes.size(1) == 4 - return (points[:, 0].unsqueeze(1) > bboxes[:, 0].unsqueeze(0)) & \ - (points[:, 0].unsqueeze(1) < bboxes[:, 2].unsqueeze(0)) & \ - (points[:, 1].unsqueeze(1) > bboxes[:, 1].unsqueeze(0)) & \ - (points[:, 1].unsqueeze(1) < bboxes[:, 3].unsqueeze(0)) - - -def bboxes_area(bboxes): - """Compute the area of an array of bboxes. - - Args: - bboxes (Tensor): The coordinates ox bboxes. Shape: (m, 4) - - Returns: - Tensor: Area of the bboxes. Shape: (m, ) - """ - assert bboxes.size(1) == 4 - w = (bboxes[:, 2] - bboxes[:, 0]) - h = (bboxes[:, 3] - bboxes[:, 1]) - areas = w * h - return areas - - -@BBOX_ASSIGNERS.register_module() -class CenterRegionAssigner(BaseAssigner): - """Assign pixels at the center region of a bbox as positive. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - -1: negative samples - - semi-positive numbers: positive sample, index (0-based) of assigned gt - - Args: - pos_scale (float): Threshold within which pixels are - labelled as positive. - neg_scale (float): Threshold above which pixels are - labelled as positive. - min_pos_iof (float): Minimum iof of a pixel with a gt to be - labelled as positive. Default: 1e-2 - ignore_gt_scale (float): Threshold within which the pixels - are ignored when the gt is labelled as shadowed. Default: 0.5 - foreground_dominate (bool): If True, the bbox will be assigned as - positive when a gt's kernel region overlaps with another's shadowed - (ignored) region, otherwise it is set as ignored. Default to False. - """ - - def __init__(self, - pos_scale, - neg_scale, - min_pos_iof=1e-2, - ignore_gt_scale=0.5, - foreground_dominate=False, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_scale = pos_scale - self.neg_scale = neg_scale - self.min_pos_iof = min_pos_iof - self.ignore_gt_scale = ignore_gt_scale - self.foreground_dominate = foreground_dominate - self.iou_calculator = build_iou_calculator(iou_calculator) - - def get_gt_priorities(self, gt_bboxes): - """Get gt priorities according to their areas. - - Smaller gt has higher priority. - - Args: - gt_bboxes (Tensor): Ground truth boxes, shape (k, 4). - - Returns: - Tensor: The priority of gts so that gts with larger priority is \ - more likely to be assigned. Shape (k, ) - """ - gt_areas = bboxes_area(gt_bboxes) - # Rank all gt bbox areas. Smaller objects has larger priority - _, sort_idx = gt_areas.sort(descending=True) - sort_idx = sort_idx.argsort() - return sort_idx - - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to bboxes. - - This method assigns gts to every bbox (proposal/anchor), each bbox \ - will be assigned with -1, or a semi-positive number. -1 means \ - negative sample, semi-positive number is the index (0-based) of \ - assigned gt. - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (tensor, optional): Label of gt_bboxes, shape (num_gts,). - - Returns: - :obj:`AssignResult`: The assigned result. Note that \ - shadowed_labels of shape (N, 2) is also added as an \ - `assign_result` attribute. 
`shadowed_labels` is a tensor \ - composed of N pairs of anchor_ind, class_label], where N \ - is the number of anchors that lie in the outer region of a \ - gt, anchor_ind is the shadowed anchor index and class_label \ - is the shadowed class label. - - Example: - >>> self = CenterRegionAssigner(0.2, 0.2) - >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]]) - >>> gt_bboxes = torch.Tensor([[0, 0, 10, 10]]) - >>> assign_result = self.assign(bboxes, gt_bboxes) - >>> expected_gt_inds = torch.LongTensor([1, 0]) - >>> assert torch.all(assign_result.gt_inds == expected_gt_inds) - """ - # There are in total 5 steps in the pixel assignment - # 1. Find core (the center region, say inner 0.2) - # and shadow (the relatively ourter part, say inner 0.2-0.5) - # regions of every gt. - # 2. Find all prior bboxes that lie in gt_core and gt_shadow regions - # 3. Assign prior bboxes in gt_core with a one-hot id of the gt in - # the image. - # 3.1. For overlapping objects, the prior bboxes in gt_core is - # assigned with the object with smallest area - # 4. Assign prior bboxes with class label according to its gt id. - # 4.1. Assign -1 to prior bboxes lying in shadowed gts - # 4.2. Assign positive prior boxes with the corresponding label - # 5. Find pixels lying in the shadow of an object and assign them with - # background label, but set the loss weight of its corresponding - # gt to zero. - assert bboxes.size(1) == 4, 'bboxes must have size of 4' - # 1. Find core positive and shadow region of every gt - gt_core = scale_boxes(gt_bboxes, self.pos_scale) - gt_shadow = scale_boxes(gt_bboxes, self.neg_scale) - - # 2. Find prior bboxes that lie in gt_core and gt_shadow regions - bbox_centers = (bboxes[:, 2:4] + bboxes[:, 0:2]) / 2 - # The center points lie within the gt boxes - is_bbox_in_gt = is_located_in(bbox_centers, gt_bboxes) - # Only calculate bbox and gt_core IoF. This enables small prior bboxes - # to match large gts - bbox_and_gt_core_overlaps = self.iou_calculator( - bboxes, gt_core, mode='iof') - # The center point of effective priors should be within the gt box - is_bbox_in_gt_core = is_bbox_in_gt & ( - bbox_and_gt_core_overlaps > self.min_pos_iof) # shape (n, k) - - is_bbox_in_gt_shadow = ( - self.iou_calculator(bboxes, gt_shadow, mode='iof') > - self.min_pos_iof) - # Rule out center effective positive pixels - is_bbox_in_gt_shadow &= (~is_bbox_in_gt_core) - - num_gts, num_bboxes = gt_bboxes.size(0), bboxes.size(0) - if num_gts == 0 or num_bboxes == 0: - # If no gts exist, assign all pixels to negative - assigned_gt_ids = \ - is_bbox_in_gt_core.new_zeros((num_bboxes,), - dtype=torch.long) - pixels_in_gt_shadow = assigned_gt_ids.new_empty((0, 2)) - else: - # Step 3: assign a one-hot gt id to each pixel, and smaller objects - # have high priority to assign the pixel. - sort_idx = self.get_gt_priorities(gt_bboxes) - assigned_gt_ids, pixels_in_gt_shadow = \ - self.assign_one_hot_gt_indices(is_bbox_in_gt_core, - is_bbox_in_gt_shadow, - gt_priority=sort_idx) - - if gt_bboxes_ignore is not None and gt_bboxes_ignore.numel() > 0: - # No ground truth or boxes, return empty assignment - gt_bboxes_ignore = scale_boxes( - gt_bboxes_ignore, scale=self.ignore_gt_scale) - is_bbox_in_ignored_gts = is_located_in(bbox_centers, - gt_bboxes_ignore) - is_bbox_in_ignored_gts = is_bbox_in_ignored_gts.any(dim=1) - assigned_gt_ids[is_bbox_in_ignored_gts] = -1 - - # 4. Assign prior bboxes with class label according to its gt id. 
- assigned_labels = None - shadowed_pixel_labels = None - if gt_labels is not None: - # Default assigned label is the background (-1) - assigned_labels = assigned_gt_ids.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_ids > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[assigned_gt_ids[pos_inds] - - 1] - # 5. Find pixels lying in the shadow of an object - shadowed_pixel_labels = pixels_in_gt_shadow.clone() - if pixels_in_gt_shadow.numel() > 0: - pixel_idx, gt_idx =\ - pixels_in_gt_shadow[:, 0], pixels_in_gt_shadow[:, 1] - assert (assigned_gt_ids[pixel_idx] != gt_idx).all(), \ - 'Some pixels are dually assigned to ignore and gt!' - shadowed_pixel_labels[:, 1] = gt_labels[gt_idx - 1] - override = ( - assigned_labels[pixel_idx] == shadowed_pixel_labels[:, 1]) - if self.foreground_dominate: - # When a pixel is both positive and shadowed, set it as pos - shadowed_pixel_labels = shadowed_pixel_labels[~override] - else: - # When a pixel is both pos and shadowed, set it as shadowed - assigned_labels[pixel_idx[override]] = -1 - assigned_gt_ids[pixel_idx[override]] = 0 - - assign_result = AssignResult( - num_gts, assigned_gt_ids, None, labels=assigned_labels) - # Add shadowed_labels as assign_result property. Shape: (num_shadow, 2) - assign_result.set_extra_property('shadowed_labels', - shadowed_pixel_labels) - return assign_result - - def assign_one_hot_gt_indices(self, - is_bbox_in_gt_core, - is_bbox_in_gt_shadow, - gt_priority=None): - """Assign only one gt index to each prior box. - - Gts with large gt_priority are more likely to be assigned. - - Args: - is_bbox_in_gt_core (Tensor): Bool tensor indicating the bbox center - is in the core area of a gt (e.g. 0-0.2). - Shape: (num_prior, num_gt). - is_bbox_in_gt_shadow (Tensor): Bool tensor indicating the bbox - center is in the shadowed area of a gt (e.g. 0.2-0.5). - Shape: (num_prior, num_gt). - gt_priority (Tensor): Priorities of gts. The gt with a higher - priority is more likely to be assigned to the bbox when the bbox - match with multiple gts. Shape: (num_gt, ). - - Returns: - tuple: Returns (assigned_gt_inds, shadowed_gt_inds). - - - assigned_gt_inds: The assigned gt index of each prior bbox \ - (i.e. index from 1 to num_gts). Shape: (num_prior, ). - - shadowed_gt_inds: shadowed gt indices. It is a tensor of \ - shape (num_ignore, 2) with first column being the \ - shadowed prior bbox indices and the second column the \ - shadowed gt indices (1-based). - """ - num_bboxes, num_gts = is_bbox_in_gt_core.shape - - if gt_priority is None: - gt_priority = torch.arange( - num_gts, device=is_bbox_in_gt_core.device) - assert gt_priority.size(0) == num_gts - # The bigger gt_priority, the more preferable to be assigned - # The assigned inds are by default 0 (background) - assigned_gt_inds = is_bbox_in_gt_core.new_zeros((num_bboxes, ), - dtype=torch.long) - # Shadowed bboxes are assigned to be background. But the corresponding - # label is ignored during loss calculation, which is done through - # shadowed_gt_inds - shadowed_gt_inds = torch.nonzero(is_bbox_in_gt_shadow, as_tuple=False) - if is_bbox_in_gt_core.sum() == 0: # No gt match - shadowed_gt_inds[:, 1] += 1 # 1-based. For consistency issue - return assigned_gt_inds, shadowed_gt_inds - - # The priority of each prior box and gt pair. If one prior box is - # matched bo multiple gts. 
Only the pair with the highest priority - # is saved - pair_priority = is_bbox_in_gt_core.new_full((num_bboxes, num_gts), - -1, - dtype=torch.long) - - # Each bbox could match with multiple gts. - # The following codes deal with this situation - # Matched bboxes (to any gt). Shape: (num_pos_anchor, ) - inds_of_match = torch.any(is_bbox_in_gt_core, dim=1) - # The matched gt index of each positive bbox. Length >= num_pos_anchor - # , since one bbox could match multiple gts - matched_bbox_gt_inds = torch.nonzero( - is_bbox_in_gt_core, as_tuple=False)[:, 1] - # Assign priority to each bbox-gt pair. - pair_priority[is_bbox_in_gt_core] = gt_priority[matched_bbox_gt_inds] - _, argmax_priority = pair_priority[inds_of_match].max(dim=1) - assigned_gt_inds[inds_of_match] = argmax_priority + 1 # 1-based - # Zero-out the assigned anchor box to filter the shadowed gt indices - is_bbox_in_gt_core[inds_of_match, argmax_priority] = 0 - # Concat the shadowed indices due to overlapping with that out side of - # effective scale. shape: (total_num_ignore, 2) - shadowed_gt_inds = torch.cat( - (shadowed_gt_inds, torch.nonzero( - is_bbox_in_gt_core, as_tuple=False)), - dim=0) - # `is_bbox_in_gt_core` should be changed back to keep arguments intact. - is_bbox_in_gt_core[inds_of_match, argmax_priority] = 1 - # 1-based shadowed gt indices, to be consistent with `assigned_gt_inds` - if shadowed_gt_inds.numel() > 0: - shadowed_gt_inds[:, 1] += 1 - return assigned_gt_inds, shadowed_gt_inds diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/grid_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/grid_assigner.py deleted file mode 100644 index a0c814e7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/grid_assigner.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class GridAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - - -1: don't care - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, bboxes, box_responsible_flags, gt_bboxes, gt_labels=None): - """Assign gt to bboxes. The process is very much like the max iou - assigner, except that positive samples are constrained within the cell - that the gt boxes fell in. 
- - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, 0, or a positive number. -1 means don't care, - 0 means negative sample, positive number is the index (1-based) of - assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to -1 - 2. assign proposals whose iou with all gts <= neg_iou_thr to 0 - 3. for each bbox within a cell, if the iou with its nearest gt > - pos_iou_thr and the center of that gt falls inside the cell, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals within the cell the - gt bbox falls in to itself. - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - box_responsible_flags (Tensor): flag to indicate whether box is - responsible for prediction, shape(n, ) - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_gts, num_bboxes = gt_bboxes.size(0), bboxes.size(0) - - # compute iou between all gt and bboxes - overlaps = self.iou_calculator(gt_bboxes, bboxes) - - # 1. assign -1 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gts == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - # 2. assign negative: below - # for each anchor, which gt best overlaps with it - # for each anchor, the max iou of all gts - # shape of max_overlaps == argmax_overlaps == num_bboxes - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - if isinstance(self.neg_iou_thr, float): - assigned_gt_inds[(max_overlaps >= 0) - & (max_overlaps <= self.neg_iou_thr)] = 0 - elif isinstance(self.neg_iou_thr, (tuple, list)): - assert len(self.neg_iou_thr) == 2 - assigned_gt_inds[(max_overlaps > self.neg_iou_thr[0]) - & (max_overlaps <= self.neg_iou_thr[1])] = 0 - - # 3. assign positive: falls into responsible cell and above - # positive IOU threshold, the order matters. - # the prior condition of comparison is to filter out all - # unrelated anchors, i.e. not box_responsible_flags - overlaps[:, ~box_responsible_flags.type(torch.bool)] = -1. - - # calculate max_overlaps again, but this time we only consider IOUs - # for anchors responsible for prediction - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - # for each gt, which anchor best overlaps with it - # for each gt, the max iou of all proposals - # shape of gt_max_overlaps == gt_argmax_overlaps == num_gts - gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1) - - pos_inds = (max_overlaps > - self.pos_iou_thr) & box_responsible_flags.type(torch.bool) - assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1 - - # 4. 
assign positive to max overlapped anchors within responsible cell - for i in range(num_gts): - if gt_max_overlaps[i] > self.min_pos_iou: - if self.gt_max_assign_all: - max_iou_inds = (overlaps[i, :] == gt_max_overlaps[i]) & \ - box_responsible_flags.type(torch.bool) - assigned_gt_inds[max_iou_inds] = i + 1 - elif box_responsible_flags[gt_argmax_overlaps[i]]: - assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1 - - # assign labels of positive anchors - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/hungarian_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/hungarian_assigner.py deleted file mode 100644 index 4105fb5c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/hungarian_assigner.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..match_costs import build_match_cost -from ..transforms import bbox_cxcywh_to_xyxy -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - -try: - from scipy.optimize import linear_sum_assignment -except ImportError: - linear_sum_assignment = None - - -@BBOX_ASSIGNERS.register_module() -class HungarianAssigner(BaseAssigner): - """Computes one-to-one matching between predictions and ground truth. - - This class computes an assignment between the targets and the predictions - based on the costs. The costs are weighted sum of three components: - classification cost, regression L1 cost and regression iou cost. The - targets don't include the no_object, so generally there are more - predictions than targets. After the one-to-one matching, the un-matched - are treated as backgrounds. Thus each query prediction will be assigned - with `0` or a positive integer indicating the ground truth index: - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - cls_weight (int | float, optional): The scale factor for classification - cost. Default 1.0. - bbox_weight (int | float, optional): The scale factor for regression - L1 cost. Default 1.0. - iou_weight (int | float, optional): The scale factor for regression - iou cost. Default 1.0. - iou_calculator (dict | optional): The config for the iou calculation. - Default type `BboxOverlaps2D`. - iou_mode (str | optional): "iou" (intersection over union), "iof" - (intersection over foreground), or "giou" (generalized - intersection over union). Default "giou". - """ - - def __init__(self, - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=1.0), - iou_cost=dict(type='IoUCost', iou_mode='giou', weight=1.0)): - self.cls_cost = build_match_cost(cls_cost) - self.reg_cost = build_match_cost(reg_cost) - self.iou_cost = build_match_cost(iou_cost) - - def assign(self, - bbox_pred, - cls_pred, - gt_bboxes, - gt_labels, - img_meta, - gt_bboxes_ignore=None, - eps=1e-7): - """Computes one-to-one matching based on the weighted costs. 
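The matching step that the removed `HungarianAssigner` delegates to SciPy can be condensed into a few lines. The sketch below assumes the three cost matrices are already computed (here they are random placeholders) and only mirrors the bookkeeping described in the docstrings above: unmatched queries stay at background `0`, matched queries receive a 1-based gt index.

```python
# Condensed illustration of the Hungarian matching step; not the deleted class.
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_assign(cls_cost, reg_cost, iou_cost):
    """cls_cost/reg_cost/iou_cost: (num_query, num_gt) tensors."""
    cost = (cls_cost + reg_cost + iou_cost).detach().cpu()
    row, col = linear_sum_assignment(cost.numpy())            # one-to-one, on CPU
    assigned = torch.zeros(cost.size(0), dtype=torch.long)    # 0 = background
    assigned[torch.as_tensor(row)] = torch.as_tensor(col) + 1 # 1-based gt index
    return assigned

# Example: 4 queries, 2 gts -> at most 2 queries become foreground.
torch.manual_seed(0)
costs = [torch.rand(4, 2) for _ in range(3)]
print(hungarian_assign(*costs))
```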
- - This method assign each query prediction to a ground truth or - background. The `assigned_gt_inds` with -1 means don't care, - 0 means negative sample, and positive number is the index (1-based) - of assigned gt. - The assignment is done in the following steps, the order matters. - - 1. assign every prediction to -1 - 2. compute the weighted costs - 3. do Hungarian matching on CPU based on the costs - 4. assign all to 0 (background) first, then for each matched pair - between predictions and gts, treat this prediction as foreground - and assign the corresponding gt index (plus 1) to it. - - Args: - bbox_pred (Tensor): Predicted boxes with normalized coordinates - (cx, cy, w, h), which are all in range [0, 1]. Shape - [num_query, 4]. - cls_pred (Tensor): Predicted classification logits, shape - [num_query, num_class]. - gt_bboxes (Tensor): Ground truth boxes with unnormalized - coordinates (x1, y1, x2, y2). Shape [num_gt, 4]. - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - img_meta (dict): Meta information for current image. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`. Default None. - eps (int | float, optional): A value added to the denominator for - numerical stability. Default 1e-7. - - Returns: - :obj:`AssignResult`: The assigned result. - """ - assert gt_bboxes_ignore is None, \ - 'Only case when gt_bboxes_ignore is None is supported.' - num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0) - - # 1. assign -1 by default - assigned_gt_inds = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - assigned_labels = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - if num_gts == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - img_h, img_w, _ = img_meta['img_shape'] - factor = gt_bboxes.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - - # 2. compute the weighted costs - # classification and bboxcost. - cls_cost = self.cls_cost(cls_pred, gt_labels) - # regression L1 cost - normalize_gt_bboxes = gt_bboxes / factor - reg_cost = self.reg_cost(bbox_pred, normalize_gt_bboxes) - # regression iou cost, defaultly giou is used in official DETR. - bboxes = bbox_cxcywh_to_xyxy(bbox_pred) * factor - iou_cost = self.iou_cost(bboxes, gt_bboxes) - # weighted sum of above three costs - cost = cls_cost + reg_cost + iou_cost - - # 3. do Hungarian matching on CPU using linear_sum_assignment - cost = cost.detach().cpu() - if linear_sum_assignment is None: - raise ImportError('Please run "pip install scipy" ' - 'to install scipy first.') - matched_row_inds, matched_col_inds = linear_sum_assignment(cost) - matched_row_inds = torch.from_numpy(matched_row_inds).to( - bbox_pred.device) - matched_col_inds = torch.from_numpy(matched_col_inds).to( - bbox_pred.device) - - # 4. 
assign backgrounds and foregrounds - # assign all indices to backgrounds first - assigned_gt_inds[:] = 0 - # assign foregrounds based on matching results - assigned_gt_inds[matched_row_inds] = matched_col_inds + 1 - assigned_labels[matched_row_inds] = gt_labels[matched_col_inds] - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/mask_hungarian_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/mask_hungarian_assigner.py deleted file mode 100644 index f5f27f3f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/mask_hungarian_assigner.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core.bbox.builder import BBOX_ASSIGNERS -from mmdet.core.bbox.match_costs.builder import build_match_cost -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - -try: - from scipy.optimize import linear_sum_assignment -except ImportError: - linear_sum_assignment = None - - -@BBOX_ASSIGNERS.register_module() -class MaskHungarianAssigner(BaseAssigner): - """Computes one-to-one matching between predictions and ground truth for - mask. - - This class computes an assignment between the targets and the predictions - based on the costs. The costs are weighted sum of three components: - classification cost, mask focal cost and mask dice cost. The - targets don't include the no_object, so generally there are more - predictions than targets. After the one-to-one matching, the un-matched - are treated as backgrounds. Thus each query prediction will be assigned - with `0` or a positive integer indicating the ground truth index: - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - cls_cost (:obj:`mmcv.ConfigDict` | dict): Classification cost config. - mask_cost (:obj:`mmcv.ConfigDict` | dict): Mask cost config. - dice_cost (:obj:`mmcv.ConfigDict` | dict): Dice cost config. - """ - - def __init__(self, - cls_cost=dict(type='ClassificationCost', weight=1.0), - mask_cost=dict( - type='FocalLossCost', weight=1.0, binary_input=True), - dice_cost=dict(type='DiceCost', weight=1.0)): - self.cls_cost = build_match_cost(cls_cost) - self.mask_cost = build_match_cost(mask_cost) - self.dice_cost = build_match_cost(dice_cost) - - def assign(self, - cls_pred, - mask_pred, - gt_labels, - gt_mask, - img_meta, - gt_bboxes_ignore=None, - eps=1e-7): - """Computes one-to-one matching based on the weighted costs. - - Args: - cls_pred (Tensor | None): Class prediction in shape - (num_query, cls_out_channels). - mask_pred (Tensor): Mask prediction in shape (num_query, H, W). - gt_labels (Tensor): Label of 'gt_mask'in shape = (num_gt, ). - gt_mask (Tensor): Ground truth mask in shape = (num_gt, H, W). - img_meta (dict): Meta information for current image. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`. Default None. - eps (int | float, optional): A value added to the denominator for - numerical stability. Default 1e-7. - - Returns: - :obj:`AssignResult`: The assigned result. - """ - assert gt_bboxes_ignore is None, \ - 'Only case when gt_bboxes_ignore is None is supported.' - # K-Net sometimes passes cls_pred=None to this assigner. - # So we should use the shape of mask_pred - num_gt, num_query = gt_labels.shape[0], mask_pred.shape[0] - - # 1. 
assign -1 by default - assigned_gt_inds = mask_pred.new_full((num_query, ), - -1, - dtype=torch.long) - assigned_labels = mask_pred.new_full((num_query, ), - -1, - dtype=torch.long) - if num_gt == 0 or num_query == 0: - # No ground truth or boxes, return empty assignment - if num_gt == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - return AssignResult( - num_gt, assigned_gt_inds, None, labels=assigned_labels) - - # 2. compute the weighted costs - # classification and maskcost. - if self.cls_cost.weight != 0 and cls_pred is not None: - cls_cost = self.cls_cost(cls_pred, gt_labels) - else: - cls_cost = 0 - - if self.mask_cost.weight != 0: - # mask_pred shape = [num_query, h, w] - # gt_mask shape = [num_gt, h, w] - # mask_cost shape = [num_query, num_gt] - mask_cost = self.mask_cost(mask_pred, gt_mask) - else: - mask_cost = 0 - - if self.dice_cost.weight != 0: - dice_cost = self.dice_cost(mask_pred, gt_mask) - else: - dice_cost = 0 - cost = cls_cost + mask_cost + dice_cost - - # 3. do Hungarian matching on CPU using linear_sum_assignment - cost = cost.detach().cpu() - if linear_sum_assignment is None: - raise ImportError('Please run "pip install scipy" ' - 'to install scipy first.') - - matched_row_inds, matched_col_inds = linear_sum_assignment(cost) - matched_row_inds = torch.from_numpy(matched_row_inds).to( - mask_pred.device) - matched_col_inds = torch.from_numpy(matched_col_inds).to( - mask_pred.device) - - # 4. assign backgrounds and foregrounds - # assign all indices to backgrounds first - assigned_gt_inds[:] = 0 - # assign foregrounds based on matching results - assigned_gt_inds[matched_row_inds] = matched_col_inds + 1 - assigned_labels[matched_row_inds] = gt_labels[matched_col_inds] - return AssignResult( - num_gt, assigned_gt_inds, None, labels=assigned_labels) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/max_iou_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/max_iou_assigner.py deleted file mode 100644 index 676421f7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/max_iou_assigner.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class MaxIoUAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, or a semi-positive integer - indicating the ground truth index. - - - -1: negative sample, no assigned gt - - semi-positive integer: positive sample, index (0-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - `min_pos_iou` is set to avoid assigning bboxes that have extremely - small iou with GT as positive samples. It brings about 0.3 mAP - improvements in 1x schedule but does not affect the performance of - 3x schedule. More comparisons can be found in - `PR #7464 `_. - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. 
- ignore_iof_thr (float): IoF threshold for ignoring bboxes (if - `gt_bboxes_ignore` is specified). Negative values mean not - ignoring any bboxes. - ignore_wrt_candidates (bool): Whether to compute the iof between - `bboxes` and `gt_bboxes_ignore`, or the contrary. - match_low_quality (bool): Whether to allow low quality matches. This is - usually allowed for RPN and single stage detectors, but not allowed - in the second stage. Details are demonstrated in Step 4. - gpu_assign_thr (int): The upper bound of the number of GT for GPU - assign. When the number of gt is above this threshold, will assign - on CPU device. Negative values mean not assign on CPU. - """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - ignore_iof_thr=-1, - ignore_wrt_candidates=True, - match_low_quality=True, - gpu_assign_thr=-1, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.ignore_iof_thr = ignore_iof_thr - self.ignore_wrt_candidates = ignore_wrt_candidates - self.gpu_assign_thr = gpu_assign_thr - self.match_low_quality = match_low_quality - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to bboxes. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, or a semi-positive number. -1 means negative - sample, semi-positive number is the index (0-based) of assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to the background - 2. assign proposals whose iou with all gts < neg_iou_thr to 0 - 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals (may be more than - one) to itself - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. 
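A minimal sketch of the threshold rules spelled out in steps 2-4 above, written against a plain (num_gts, num_bboxes) IoU matrix. It illustrates the idea rather than the deleted class, and step 4 is shown in its simplest form (each gt keeps only its single best box, i.e. without `gt_max_assign_all`).

```python
# Illustrative max-IoU assignment: -1 = ignore, 0 = background, >0 = 1-based gt.
import torch

def assign_by_iou(overlaps, pos_iou_thr=0.5, neg_iou_thr=0.4, min_pos_iou=0.0):
    num_gts, num_bboxes = overlaps.shape
    assigned = overlaps.new_full((num_bboxes,), -1, dtype=torch.long)

    # Step 2: negatives -- best IoU over all gts below neg_iou_thr.
    max_overlaps, argmax_overlaps = overlaps.max(dim=0)
    assigned[(max_overlaps >= 0) & (max_overlaps < neg_iou_thr)] = 0

    # Step 3: positives -- best IoU above pos_iou_thr.
    pos = max_overlaps >= pos_iou_thr
    assigned[pos] = argmax_overlaps[pos] + 1

    # Step 4: low-quality matches -- each gt keeps its best box.
    gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1)
    for i in range(num_gts):
        if gt_max_overlaps[i] >= min_pos_iou:
            assigned[gt_argmax_overlaps[i]] = i + 1
    return assigned

overlaps = torch.tensor([[0.9, 0.3, 0.0], [0.1, 0.45, 0.2]])
print(assign_by_iou(overlaps))  # tensor([1, 2, 0])
```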
- - Example: - >>> self = MaxIoUAssigner(0.5, 0.5) - >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]]) - >>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]]) - >>> assign_result = self.assign(bboxes, gt_bboxes) - >>> expected_gt_inds = torch.LongTensor([1, 0]) - >>> assert torch.all(assign_result.gt_inds == expected_gt_inds) - """ - assign_on_cpu = True if (self.gpu_assign_thr > 0) and ( - gt_bboxes.shape[0] > self.gpu_assign_thr) else False - # compute overlap and assign gt on CPU when number of GT is large - if assign_on_cpu: - device = bboxes.device - bboxes = bboxes.cpu() - gt_bboxes = gt_bboxes.cpu() - if gt_bboxes_ignore is not None: - gt_bboxes_ignore = gt_bboxes_ignore.cpu() - if gt_labels is not None: - gt_labels = gt_labels.cpu() - - overlaps = self.iou_calculator(gt_bboxes, bboxes) - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0): - if self.ignore_wrt_candidates: - ignore_overlaps = self.iou_calculator( - bboxes, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - else: - ignore_overlaps = self.iou_calculator( - gt_bboxes_ignore, bboxes, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=0) - overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1 - - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - if assign_on_cpu: - assign_result.gt_inds = assign_result.gt_inds.to(device) - assign_result.max_overlaps = assign_result.max_overlaps.to(device) - if assign_result.labels is not None: - assign_result.labels = assign_result.labels.to(device) - return assign_result - - def assign_wrt_overlaps(self, overlaps, gt_labels=None): - """Assign w.r.t. the overlaps of bboxes with gts. - - Args: - overlaps (Tensor): Overlaps between k gt_bboxes and n bboxes, - shape(k, n). - gt_labels (Tensor, optional): Labels of k gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_gts, num_bboxes = overlaps.size(0), overlaps.size(1) - - # 1. assign -1 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gts == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - # for each anchor, which gt best overlaps with it - # for each anchor, the max iou of all gts - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - # for each gt, which anchor best overlaps with it - # for each gt, the max iou of all proposals - gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1) - - # 2. assign negative: below - # the negative inds are set to be 0 - if isinstance(self.neg_iou_thr, float): - assigned_gt_inds[(max_overlaps >= 0) - & (max_overlaps < self.neg_iou_thr)] = 0 - elif isinstance(self.neg_iou_thr, tuple): - assert len(self.neg_iou_thr) == 2 - assigned_gt_inds[(max_overlaps >= self.neg_iou_thr[0]) - & (max_overlaps < self.neg_iou_thr[1])] = 0 - - # 3. 
assign positive: above positive IoU threshold - pos_inds = max_overlaps >= self.pos_iou_thr - assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1 - - if self.match_low_quality: - # Low-quality matching will overwrite the assigned_gt_inds assigned - # in Step 3. Thus, the assigned gt might not be the best one for - # prediction. - # For example, if bbox A has 0.9 and 0.8 iou with GT bbox 1 & 2, - # bbox 1 will be assigned as the best target for bbox A in step 3. - # However, if GT bbox 2's gt_argmax_overlaps = A, bbox A's - # assigned_gt_inds will be overwritten to be bbox 2. - # This might be the reason that it is not used in ROI Heads. - for i in range(num_gts): - if gt_max_overlaps[i] >= self.min_pos_iou: - if self.gt_max_assign_all: - max_iou_inds = overlaps[i, :] == gt_max_overlaps[i] - assigned_gt_inds[max_iou_inds] = i + 1 - else: - assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1 - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/point_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/point_assigner.py deleted file mode 100644 index b0dc2246..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/point_assigner.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class PointAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each point. - - Each proposals will be assigned with `0`, or a positive integer - indicating the ground truth index. - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - """ - - def __init__(self, scale=4, pos_num=3): - self.scale = scale - self.pos_num = pos_num - - def assign(self, points, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to points. - - This method assign a gt bbox to every points set, each points set - will be assigned with the background_label (-1), or a label number. - -1 is background, and semi-positive number is the index (0-based) of - assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every points to the background_label (-1) - 2. A point is assigned to some gt bbox if - (i) the point is within the k closest points to the gt bbox - (ii) the distance between this point and the gt is smaller than - other gt bboxes - - Args: - points (Tensor): points to be assigned, shape(n, 3) while last - dimension stands for (x, y, stride). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - NOTE: currently unused. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. 
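The point-to-gt rule described above (each gt claims its `pos_num` nearest points, with ties between gts resolved in favour of the smaller normalized distance) can be sketched without the pyramid-level bookkeeping. This is a simplified illustration; the level/stride matching of the removed `PointAssigner` is deliberately left out.

```python
# Toy sketch of nearest-k point assignment; 0 = background, >0 = 1-based gt.
import torch

def assign_points(points_xy, gt_bboxes, pos_num=3):
    num_points, num_gts = points_xy.size(0), gt_bboxes.size(0)
    gt_ctr = (gt_bboxes[:, :2] + gt_bboxes[:, 2:]) / 2
    gt_wh = (gt_bboxes[:, 2:] - gt_bboxes[:, :2]).clamp(min=1e-6)

    assigned = points_xy.new_zeros((num_points,), dtype=torch.long)
    best_dist = points_xy.new_full((num_points,), float('inf'))
    for g in range(num_gts):
        # gt-size-normalized distance from every point to this gt center
        dist = ((points_xy - gt_ctr[g]) / gt_wh[g]).norm(dim=1)
        min_dist, idx = dist.topk(min(pos_num, num_points), largest=False)
        closer = min_dist < best_dist[idx]      # closer than any earlier gt
        assigned[idx[closer]] = g + 1
        best_dist[idx[closer]] = min_dist[closer]
    return assigned
```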
- """ - num_points = points.shape[0] - num_gts = gt_bboxes.shape[0] - - if num_gts == 0 or num_points == 0: - # If no truth assign everything to the background - assigned_gt_inds = points.new_full((num_points, ), - 0, - dtype=torch.long) - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = points.new_full((num_points, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - - points_xy = points[:, :2] - points_stride = points[:, 2] - points_lvl = torch.log2( - points_stride).int() # [3...,4...,5...,6...,7...] - lvl_min, lvl_max = points_lvl.min(), points_lvl.max() - - # assign gt box - gt_bboxes_xy = (gt_bboxes[:, :2] + gt_bboxes[:, 2:]) / 2 - gt_bboxes_wh = (gt_bboxes[:, 2:] - gt_bboxes[:, :2]).clamp(min=1e-6) - scale = self.scale - gt_bboxes_lvl = ((torch.log2(gt_bboxes_wh[:, 0] / scale) + - torch.log2(gt_bboxes_wh[:, 1] / scale)) / 2).int() - gt_bboxes_lvl = torch.clamp(gt_bboxes_lvl, min=lvl_min, max=lvl_max) - - # stores the assigned gt index of each point - assigned_gt_inds = points.new_zeros((num_points, ), dtype=torch.long) - # stores the assigned gt dist (to this point) of each point - assigned_gt_dist = points.new_full((num_points, ), float('inf')) - points_range = torch.arange(points.shape[0]) - - for idx in range(num_gts): - gt_lvl = gt_bboxes_lvl[idx] - # get the index of points in this level - lvl_idx = gt_lvl == points_lvl - points_index = points_range[lvl_idx] - # get the points in this level - lvl_points = points_xy[lvl_idx, :] - # get the center point of gt - gt_point = gt_bboxes_xy[[idx], :] - # get width and height of gt - gt_wh = gt_bboxes_wh[[idx], :] - # compute the distance between gt center and - # all points in this level - points_gt_dist = ((lvl_points - gt_point) / gt_wh).norm(dim=1) - # find the nearest k points to gt center in this level - min_dist, min_dist_index = torch.topk( - points_gt_dist, self.pos_num, largest=False) - # the index of nearest k points to gt center in this level - min_dist_points_index = points_index[min_dist_index] - # The less_than_recorded_index stores the index - # of min_dist that is less then the assigned_gt_dist. Where - # assigned_gt_dist stores the dist from previous assigned gt - # (if exist) to each point. - less_than_recorded_index = min_dist < assigned_gt_dist[ - min_dist_points_index] - # The min_dist_points_index stores the index of points satisfy: - # (1) it is k nearest to current gt center in this level. - # (2) it is closer to current gt center than other gt center. 
- min_dist_points_index = min_dist_points_index[ - less_than_recorded_index] - # assign the result - assigned_gt_inds[min_dist_points_index] = idx + 1 - assigned_gt_dist[min_dist_points_index] = min_dist[ - less_than_recorded_index] - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_points, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/region_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/region_assigner.py deleted file mode 100644 index 1833b894..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/region_assigner.py +++ /dev/null @@ -1,222 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import anchor_inside_flags -from ..builder import BBOX_ASSIGNERS -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -def calc_region(bbox, ratio, stride, featmap_size=None): - """Calculate region of the box defined by the ratio, the ratio is from the - center of the box to every edge.""" - # project bbox on the feature - f_bbox = bbox / stride - x1 = torch.round((1 - ratio) * f_bbox[0] + ratio * f_bbox[2]) - y1 = torch.round((1 - ratio) * f_bbox[1] + ratio * f_bbox[3]) - x2 = torch.round(ratio * f_bbox[0] + (1 - ratio) * f_bbox[2]) - y2 = torch.round(ratio * f_bbox[1] + (1 - ratio) * f_bbox[3]) - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) - - -def anchor_ctr_inside_region_flags(anchors, stride, region): - """Get the flag indicate whether anchor centers are inside regions.""" - x1, y1, x2, y2 = region - f_anchors = anchors / stride - x = (f_anchors[:, 0] + f_anchors[:, 2]) * 0.5 - y = (f_anchors[:, 1] + f_anchors[:, 3]) * 0.5 - flags = (x >= x1) & (x <= x2) & (y >= y1) & (y <= y2) - return flags - - -@BBOX_ASSIGNERS.register_module() -class RegionAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - - -1: don't care - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - center_ratio: ratio of the region in the center of the bbox to - define positive sample. - ignore_ratio: ratio of the region to define ignore samples. - """ - - def __init__(self, center_ratio=0.2, ignore_ratio=0.5): - self.center_ratio = center_ratio - self.ignore_ratio = ignore_ratio - - def assign(self, - mlvl_anchors, - mlvl_valid_flags, - gt_bboxes, - img_meta, - featmap_sizes, - anchor_scale, - anchor_strides, - gt_bboxes_ignore=None, - gt_labels=None, - allowed_border=0): - """Assign gt to anchors. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, 0, or a positive number. -1 means don't care, - 0 means negative sample, positive number is the index (1-based) of - assigned gt. - - The assignment is done in following steps, and the order matters. - - 1. 
Assign every anchor to 0 (negative) - 2. (For each gt_bboxes) Compute ignore flags based on ignore_region - then assign -1 to anchors w.r.t. ignore flags - 3. (For each gt_bboxes) Compute pos flags based on center_region then - assign gt_bboxes to anchors w.r.t. pos flags - 4. (For each gt_bboxes) Compute ignore flags based on adjacent anchor - level then assign -1 to anchors w.r.t. ignore flags - 5. Assign anchor outside of image to -1 - - Args: - mlvl_anchors (list[Tensor]): Multi level anchors. - mlvl_valid_flags (list[Tensor]): Multi level valid flags. - gt_bboxes (Tensor): Ground truth bboxes of image - img_meta (dict): Meta info of image. - featmap_sizes (list[Tensor]): Feature mapsize each level - anchor_scale (int): Scale of the anchor. - anchor_strides (list[int]): Stride of the anchor. - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - allowed_border (int, optional): The border to allow the valid - anchor. Defaults to 0. - - Returns: - :obj:`AssignResult`: The assign result. - """ - if gt_bboxes_ignore is not None: - raise NotImplementedError - - num_gts = gt_bboxes.shape[0] - num_bboxes = sum(x.shape[0] for x in mlvl_anchors) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = gt_bboxes.new_zeros((num_bboxes, )) - assigned_gt_inds = gt_bboxes.new_zeros((num_bboxes, ), - dtype=torch.long) - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = gt_bboxes.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - num_lvls = len(mlvl_anchors) - r1 = (1 - self.center_ratio) / 2 - r2 = (1 - self.ignore_ratio) / 2 - - scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - min_anchor_size = scale.new_full( - (1, ), float(anchor_scale * anchor_strides[0])) - target_lvls = torch.floor( - torch.log2(scale) - torch.log2(min_anchor_size) + 0.5) - target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long() - - # 1. assign 0 (negative) by default - mlvl_assigned_gt_inds = [] - mlvl_ignore_flags = [] - for lvl in range(num_lvls): - h, w = featmap_sizes[lvl] - assert h * w == mlvl_anchors[lvl].shape[0] - assigned_gt_inds = gt_bboxes.new_full((h * w, ), - 0, - dtype=torch.long) - ignore_flags = torch.zeros_like(assigned_gt_inds) - mlvl_assigned_gt_inds.append(assigned_gt_inds) - mlvl_ignore_flags.append(ignore_flags) - - for gt_id in range(num_gts): - lvl = target_lvls[gt_id].item() - featmap_size = featmap_sizes[lvl] - stride = anchor_strides[lvl] - anchors = mlvl_anchors[lvl] - gt_bbox = gt_bboxes[gt_id, :4] - - # Compute regions - ignore_region = calc_region(gt_bbox, r2, stride, featmap_size) - ctr_region = calc_region(gt_bbox, r1, stride, featmap_size) - - # 2. Assign -1 to ignore flags - ignore_flags = anchor_ctr_inside_region_flags( - anchors, stride, ignore_region) - mlvl_assigned_gt_inds[lvl][ignore_flags] = -1 - - # 3. Assign gt_bboxes to pos flags - pos_flags = anchor_ctr_inside_region_flags(anchors, stride, - ctr_region) - mlvl_assigned_gt_inds[lvl][pos_flags] = gt_id + 1 - - # 4. 
Assign -1 to ignore adjacent lvl - if lvl > 0: - d_lvl = lvl - 1 - d_anchors = mlvl_anchors[d_lvl] - d_featmap_size = featmap_sizes[d_lvl] - d_stride = anchor_strides[d_lvl] - d_ignore_region = calc_region(gt_bbox, r2, d_stride, - d_featmap_size) - ignore_flags = anchor_ctr_inside_region_flags( - d_anchors, d_stride, d_ignore_region) - mlvl_ignore_flags[d_lvl][ignore_flags] = 1 - if lvl < num_lvls - 1: - u_lvl = lvl + 1 - u_anchors = mlvl_anchors[u_lvl] - u_featmap_size = featmap_sizes[u_lvl] - u_stride = anchor_strides[u_lvl] - u_ignore_region = calc_region(gt_bbox, r2, u_stride, - u_featmap_size) - ignore_flags = anchor_ctr_inside_region_flags( - u_anchors, u_stride, u_ignore_region) - mlvl_ignore_flags[u_lvl][ignore_flags] = 1 - - # 4. (cont.) Assign -1 to ignore adjacent lvl - for lvl in range(num_lvls): - ignore_flags = mlvl_ignore_flags[lvl] - mlvl_assigned_gt_inds[lvl][ignore_flags] = -1 - - # 5. Assign -1 to anchor outside of image - flat_assigned_gt_inds = torch.cat(mlvl_assigned_gt_inds) - flat_anchors = torch.cat(mlvl_anchors) - flat_valid_flags = torch.cat(mlvl_valid_flags) - assert (flat_assigned_gt_inds.shape[0] == flat_anchors.shape[0] == - flat_valid_flags.shape[0]) - inside_flags = anchor_inside_flags(flat_anchors, flat_valid_flags, - img_meta['img_shape'], - allowed_border) - outside_flags = ~inside_flags - flat_assigned_gt_inds[outside_flags] = -1 - - if gt_labels is not None: - assigned_labels = torch.zeros_like(flat_assigned_gt_inds) - pos_flags = assigned_gt_inds > 0 - assigned_labels[pos_flags] = gt_labels[ - flat_assigned_gt_inds[pos_flags] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, flat_assigned_gt_inds, None, labels=assigned_labels) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/sim_ota_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/sim_ota_assigner.py deleted file mode 100644 index 58bfef43..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/sim_ota_assigner.py +++ /dev/null @@ -1,257 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn.functional as F - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import bbox_overlaps -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class SimOTAAssigner(BaseAssigner): - """Computes matching between predictions and ground truth. - - Args: - center_radius (int | float, optional): Ground truth center size - to judge whether a prior is in center. Default 2.5. - candidate_topk (int, optional): The candidate top-k which used to - get top-k ious to calculate dynamic-k. Default 10. - iou_weight (int | float, optional): The scale factor for regression - iou cost. Default 3.0. - cls_weight (int | float, optional): The scale factor for classification - cost. Default 1.0. - """ - - def __init__(self, - center_radius=2.5, - candidate_topk=10, - iou_weight=3.0, - cls_weight=1.0): - self.center_radius = center_radius - self.candidate_topk = candidate_topk - self.iou_weight = iou_weight - self.cls_weight = cls_weight - - def assign(self, - pred_scores, - priors, - decoded_bboxes, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - eps=1e-7): - """Assign gt to priors using SimOTA. It will switch to CPU mode when - GPU is out of memory. 
- Args: - pred_scores (Tensor): Classification scores of one image, - a 2D-Tensor with shape [num_priors, num_classes] - priors (Tensor): All priors of one image, a 2D-Tensor with shape - [num_priors, 4] in [cx, xy, stride_w, stride_y] format. - decoded_bboxes (Tensor): Predicted bboxes, a 2D-Tensor with shape - [num_priors, 4] in [tl_x, tl_y, br_x, br_y] format. - gt_bboxes (Tensor): Ground truth bboxes of one image, a 2D-Tensor - with shape [num_gts, 4] in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth labels of one image, a Tensor - with shape [num_gts]. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - eps (float): A value added to the denominator for numerical - stability. Default 1e-7. - Returns: - assign_result (obj:`AssignResult`): The assigned result. - """ - try: - assign_result = self._assign(pred_scores, priors, decoded_bboxes, - gt_bboxes, gt_labels, - gt_bboxes_ignore, eps) - return assign_result - except RuntimeError: - origin_device = pred_scores.device - warnings.warn('OOM RuntimeError is raised due to the huge memory ' - 'cost during label assignment. CPU mode is applied ' - 'in this batch. If you want to avoid this issue, ' - 'try to reduce the batch size or image size.') - torch.cuda.empty_cache() - - pred_scores = pred_scores.cpu() - priors = priors.cpu() - decoded_bboxes = decoded_bboxes.cpu() - gt_bboxes = gt_bboxes.cpu().float() - gt_labels = gt_labels.cpu() - - assign_result = self._assign(pred_scores, priors, decoded_bboxes, - gt_bboxes, gt_labels, - gt_bboxes_ignore, eps) - assign_result.gt_inds = assign_result.gt_inds.to(origin_device) - assign_result.max_overlaps = assign_result.max_overlaps.to( - origin_device) - assign_result.labels = assign_result.labels.to(origin_device) - - return assign_result - - def _assign(self, - pred_scores, - priors, - decoded_bboxes, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - eps=1e-7): - """Assign gt to priors using SimOTA. - Args: - pred_scores (Tensor): Classification scores of one image, - a 2D-Tensor with shape [num_priors, num_classes] - priors (Tensor): All priors of one image, a 2D-Tensor with shape - [num_priors, 4] in [cx, xy, stride_w, stride_y] format. - decoded_bboxes (Tensor): Predicted bboxes, a 2D-Tensor with shape - [num_priors, 4] in [tl_x, tl_y, br_x, br_y] format. - gt_bboxes (Tensor): Ground truth bboxes of one image, a 2D-Tensor - with shape [num_gts, 4] in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth labels of one image, a Tensor - with shape [num_gts]. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - eps (float): A value added to the denominator for numerical - stability. Default 1e-7. - Returns: - :obj:`AssignResult`: The assigned result. 
- """ - INF = 100000.0 - num_gt = gt_bboxes.size(0) - num_bboxes = decoded_bboxes.size(0) - - # assign 0 by default - assigned_gt_inds = decoded_bboxes.new_full((num_bboxes, ), - 0, - dtype=torch.long) - valid_mask, is_in_boxes_and_center = self.get_in_gt_and_in_center_info( - priors, gt_bboxes) - valid_decoded_bbox = decoded_bboxes[valid_mask] - valid_pred_scores = pred_scores[valid_mask] - num_valid = valid_decoded_bbox.size(0) - - if num_gt == 0 or num_bboxes == 0 or num_valid == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = decoded_bboxes.new_zeros((num_bboxes, )) - if num_gt == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = decoded_bboxes.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - - pairwise_ious = bbox_overlaps(valid_decoded_bbox, gt_bboxes) - iou_cost = -torch.log(pairwise_ious + eps) - - gt_onehot_label = ( - F.one_hot(gt_labels.to(torch.int64), - pred_scores.shape[-1]).float().unsqueeze(0).repeat( - num_valid, 1, 1)) - - valid_pred_scores = valid_pred_scores.unsqueeze(1).repeat(1, num_gt, 1) - cls_cost = ( - F.binary_cross_entropy( - valid_pred_scores.to(dtype=torch.float32).sqrt_(), - gt_onehot_label, - reduction='none', - ).sum(-1).to(dtype=valid_pred_scores.dtype)) - - cost_matrix = ( - cls_cost * self.cls_weight + iou_cost * self.iou_weight + - (~is_in_boxes_and_center) * INF) - - matched_pred_ious, matched_gt_inds = \ - self.dynamic_k_matching( - cost_matrix, pairwise_ious, num_gt, valid_mask) - - # convert to AssignResult format - assigned_gt_inds[valid_mask] = matched_gt_inds + 1 - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - assigned_labels[valid_mask] = gt_labels[matched_gt_inds].long() - max_overlaps = assigned_gt_inds.new_full((num_bboxes, ), - -INF, - dtype=torch.float32) - max_overlaps[valid_mask] = matched_pred_ious - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - - def get_in_gt_and_in_center_info(self, priors, gt_bboxes): - num_gt = gt_bboxes.size(0) - - repeated_x = priors[:, 0].unsqueeze(1).repeat(1, num_gt) - repeated_y = priors[:, 1].unsqueeze(1).repeat(1, num_gt) - repeated_stride_x = priors[:, 2].unsqueeze(1).repeat(1, num_gt) - repeated_stride_y = priors[:, 3].unsqueeze(1).repeat(1, num_gt) - - # is prior centers in gt bboxes, shape: [n_prior, n_gt] - l_ = repeated_x - gt_bboxes[:, 0] - t_ = repeated_y - gt_bboxes[:, 1] - r_ = gt_bboxes[:, 2] - repeated_x - b_ = gt_bboxes[:, 3] - repeated_y - - deltas = torch.stack([l_, t_, r_, b_], dim=1) - is_in_gts = deltas.min(dim=1).values > 0 - is_in_gts_all = is_in_gts.sum(dim=1) > 0 - - # is prior centers in gt centers - gt_cxs = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2.0 - gt_cys = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2.0 - ct_box_l = gt_cxs - self.center_radius * repeated_stride_x - ct_box_t = gt_cys - self.center_radius * repeated_stride_y - ct_box_r = gt_cxs + self.center_radius * repeated_stride_x - ct_box_b = gt_cys + self.center_radius * repeated_stride_y - - cl_ = repeated_x - ct_box_l - ct_ = repeated_y - ct_box_t - cr_ = ct_box_r - repeated_x - cb_ = ct_box_b - repeated_y - - ct_deltas = torch.stack([cl_, ct_, cr_, cb_], dim=1) - is_in_cts = ct_deltas.min(dim=1).values > 0 - is_in_cts_all = is_in_cts.sum(dim=1) > 0 - - # in boxes or in centers, shape: [num_priors] - is_in_gts_or_centers = is_in_gts_all | is_in_cts_all 
- - # both in boxes and centers, shape: [num_fg, num_gt] - is_in_boxes_and_centers = ( - is_in_gts[is_in_gts_or_centers, :] - & is_in_cts[is_in_gts_or_centers, :]) - return is_in_gts_or_centers, is_in_boxes_and_centers - - def dynamic_k_matching(self, cost, pairwise_ious, num_gt, valid_mask): - matching_matrix = torch.zeros_like(cost, dtype=torch.uint8) - # select candidate topk ious for dynamic-k calculation - candidate_topk = min(self.candidate_topk, pairwise_ious.size(0)) - topk_ious, _ = torch.topk(pairwise_ious, candidate_topk, dim=0) - # calculate dynamic k for each gt - dynamic_ks = torch.clamp(topk_ious.sum(0).int(), min=1) - for gt_idx in range(num_gt): - _, pos_idx = torch.topk( - cost[:, gt_idx], k=dynamic_ks[gt_idx], largest=False) - matching_matrix[:, gt_idx][pos_idx] = 1 - - del topk_ious, dynamic_ks, pos_idx - - prior_match_gt_mask = matching_matrix.sum(1) > 1 - if prior_match_gt_mask.sum() > 0: - cost_min, cost_argmin = torch.min( - cost[prior_match_gt_mask, :], dim=1) - matching_matrix[prior_match_gt_mask, :] *= 0 - matching_matrix[prior_match_gt_mask, cost_argmin] = 1 - # get foreground mask inside box and center prior - fg_mask_inboxes = matching_matrix.sum(1) > 0 - valid_mask[valid_mask.clone()] = fg_mask_inboxes - - matched_gt_inds = matching_matrix[fg_mask_inboxes, :].argmax(1) - matched_pred_ious = (matching_matrix * - pairwise_ious).sum(1)[fg_mask_inboxes] - return matched_pred_ious, matched_gt_inds diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/task_aligned_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/task_aligned_assigner.py deleted file mode 100644 index 1872de4a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/task_aligned_assigner.py +++ /dev/null @@ -1,151 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - -INF = 100000000 - - -@BBOX_ASSIGNERS.register_module() -class TaskAlignedAssigner(BaseAssigner): - """Task aligned assigner used in the paper: - `TOOD: Task-aligned One-stage Object Detection. - `_. - - Assign a corresponding gt bbox or background to each predicted bbox. - Each bbox will be assigned with `0` or a positive integer - indicating the ground truth index. - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - topk (int): number of bbox selected in each level - iou_calculator (dict): Config dict for iou calculator. - Default: dict(type='BboxOverlaps2D') - """ - - def __init__(self, topk, iou_calculator=dict(type='BboxOverlaps2D')): - assert topk >= 1 - self.topk = topk - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, - pred_scores, - decode_bboxes, - anchors, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None, - alpha=1, - beta=6): - """Assign gt to bboxes. - - The assignment is done in following steps - - 1. compute alignment metric between all bbox (bbox of all pyramid - levels) and gt - 2. select top-k bbox as candidates for each gt - 3. 
limit the positive sample's center in gt (because the anchor-free - detector only can predict positive distance) - - - Args: - pred_scores (Tensor): predicted class probability, - shape(n, num_classes) - decode_bboxes (Tensor): predicted bounding boxes, shape(n, 4) - anchors (Tensor): pre-defined anchors, shape(n, 4). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`TaskAlignedAssignResult`: The assign result. - """ - anchors = anchors[:, :4] - num_gt, num_bboxes = gt_bboxes.size(0), anchors.size(0) - # compute alignment metric between all bbox and gt - overlaps = self.iou_calculator(decode_bboxes, gt_bboxes).detach() - bbox_scores = pred_scores[:, gt_labels].detach() - # assign 0 by default - assigned_gt_inds = anchors.new_full((num_bboxes, ), - 0, - dtype=torch.long) - assign_metrics = anchors.new_zeros((num_bboxes, )) - - if num_gt == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = anchors.new_zeros((num_bboxes, )) - if num_gt == 0: - # No gt boxes, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = anchors.new_full((num_bboxes, ), - -1, - dtype=torch.long) - assign_result = AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - assign_result.assign_metrics = assign_metrics - return assign_result - - # select top-k bboxes as candidates for each gt - alignment_metrics = bbox_scores**alpha * overlaps**beta - topk = min(self.topk, alignment_metrics.size(0)) - _, candidate_idxs = alignment_metrics.topk(topk, dim=0, largest=True) - candidate_metrics = alignment_metrics[candidate_idxs, - torch.arange(num_gt)] - is_pos = candidate_metrics > 0 - - # limit the positive sample's center in gt - anchors_cx = (anchors[:, 0] + anchors[:, 2]) / 2.0 - anchors_cy = (anchors[:, 1] + anchors[:, 3]) / 2.0 - for gt_idx in range(num_gt): - candidate_idxs[:, gt_idx] += gt_idx * num_bboxes - ep_anchors_cx = anchors_cx.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - ep_anchors_cy = anchors_cy.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - candidate_idxs = candidate_idxs.view(-1) - - # calculate the left, top, right, bottom distance between positive - # bbox center and gt side - l_ = ep_anchors_cx[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 0] - t_ = ep_anchors_cy[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 1] - r_ = gt_bboxes[:, 2] - ep_anchors_cx[candidate_idxs].view(-1, num_gt) - b_ = gt_bboxes[:, 3] - ep_anchors_cy[candidate_idxs].view(-1, num_gt) - is_in_gts = torch.stack([l_, t_, r_, b_], dim=1).min(dim=1)[0] > 0.01 - is_pos = is_pos & is_in_gts - - # if an anchor box is assigned to multiple gts, - # the one with the highest iou will be selected. 
- overlaps_inf = torch.full_like(overlaps, - -INF).t().contiguous().view(-1) - index = candidate_idxs.view(-1)[is_pos.view(-1)] - overlaps_inf[index] = overlaps.t().contiguous().view(-1)[index] - overlaps_inf = overlaps_inf.view(num_gt, -1).t() - - max_overlaps, argmax_overlaps = overlaps_inf.max(dim=1) - assigned_gt_inds[ - max_overlaps != -INF] = argmax_overlaps[max_overlaps != -INF] + 1 - assign_metrics[max_overlaps != -INF] = alignment_metrics[ - max_overlaps != -INF, argmax_overlaps[max_overlaps != -INF]] - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - assign_result = AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - assign_result.assign_metrics = assign_metrics - return assign_result diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/uniform_assigner.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/uniform_assigner.py deleted file mode 100644 index 70294fc4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/assigners/uniform_assigner.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from ..transforms import bbox_xyxy_to_cxcywh -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class UniformAssigner(BaseAssigner): - """Uniform Matching between the anchors and gt boxes, which can achieve - balance in positive anchors, and gt_bboxes_ignore was not considered for - now. - - Args: - pos_ignore_thr (float): the threshold to ignore positive anchors - neg_ignore_thr (float): the threshold to ignore negative anchors - match_times(int): Number of positive anchors for each gt box. - Default 4. - iou_calculator (dict): iou_calculator config - """ - - def __init__(self, - pos_ignore_thr, - neg_ignore_thr, - match_times=4, - iou_calculator=dict(type='BboxOverlaps2D')): - self.match_times = match_times - self.pos_ignore_thr = pos_ignore_thr - self.neg_ignore_thr = neg_ignore_thr - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, - bbox_pred, - anchor, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0) - - # 1. assign -1 by default - assigned_gt_inds = bbox_pred.new_full((num_bboxes, ), - 0, - dtype=torch.long) - assigned_labels = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - if num_gts == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - assign_result = AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - assign_result.set_extra_property( - 'pos_idx', bbox_pred.new_empty(0, dtype=torch.bool)) - assign_result.set_extra_property('pos_predicted_boxes', - bbox_pred.new_empty((0, 4))) - assign_result.set_extra_property('target_boxes', - bbox_pred.new_empty((0, 4))) - return assign_result - - # 2. 
Compute the L1 cost between boxes - # Note that we use anchors and predict boxes both - cost_bbox = torch.cdist( - bbox_xyxy_to_cxcywh(bbox_pred), - bbox_xyxy_to_cxcywh(gt_bboxes), - p=1) - cost_bbox_anchors = torch.cdist( - bbox_xyxy_to_cxcywh(anchor), bbox_xyxy_to_cxcywh(gt_bboxes), p=1) - - # We found that topk function has different results in cpu and - # cuda mode. In order to ensure consistency with the source code, - # we also use cpu mode. - # TODO: Check whether the performance of cpu and cuda are the same. - C = cost_bbox.cpu() - C1 = cost_bbox_anchors.cpu() - - # self.match_times x n - index = torch.topk( - C, # c=b,n,x c[i]=n,x - k=self.match_times, - dim=0, - largest=False)[1] - - # self.match_times x n - index1 = torch.topk(C1, k=self.match_times, dim=0, largest=False)[1] - # (self.match_times*2) x n - indexes = torch.cat((index, index1), - dim=1).reshape(-1).to(bbox_pred.device) - - pred_overlaps = self.iou_calculator(bbox_pred, gt_bboxes) - anchor_overlaps = self.iou_calculator(anchor, gt_bboxes) - pred_max_overlaps, _ = pred_overlaps.max(dim=1) - anchor_max_overlaps, _ = anchor_overlaps.max(dim=0) - - # 3. Compute the ignore indexes use gt_bboxes and predict boxes - ignore_idx = pred_max_overlaps > self.neg_ignore_thr - assigned_gt_inds[ignore_idx] = -1 - - # 4. Compute the ignore indexes of positive sample use anchors - # and predict boxes - pos_gt_index = torch.arange( - 0, C1.size(1), - device=bbox_pred.device).repeat(self.match_times * 2) - pos_ious = anchor_overlaps[indexes, pos_gt_index] - pos_ignore_idx = pos_ious < self.pos_ignore_thr - - pos_gt_index_with_ignore = pos_gt_index + 1 - pos_gt_index_with_ignore[pos_ignore_idx] = -1 - assigned_gt_inds[indexes] = pos_gt_index_with_ignore - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - assign_result = AssignResult( - num_gts, - assigned_gt_inds, - anchor_max_overlaps, - labels=assigned_labels) - assign_result.set_extra_property('pos_idx', ~pos_ignore_idx) - assign_result.set_extra_property('pos_predicted_boxes', - bbox_pred[indexes]) - assign_result.set_extra_property('target_boxes', - gt_bboxes[pos_gt_index]) - return assign_result diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/builder.py deleted file mode 100644 index 9cfa055b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/builder.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmcv.utils import Registry, build_from_cfg - -BBOX_ASSIGNERS = Registry('bbox_assigner') -BBOX_SAMPLERS = Registry('bbox_sampler') -BBOX_CODERS = Registry('bbox_coder') - - -def build_assigner(cfg, **default_args): - """Builder of box assigner.""" - return build_from_cfg(cfg, BBOX_ASSIGNERS, default_args) - - -def build_sampler(cfg, **default_args): - """Builder of box sampler.""" - return build_from_cfg(cfg, BBOX_SAMPLERS, default_args) - - -def build_bbox_coder(cfg, **default_args): - """Builder of box coder.""" - return build_from_cfg(cfg, BBOX_CODERS, default_args) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/__init__.py deleted file mode 100644 index e12fd64e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_bbox_coder import BaseBBoxCoder -from .bucketing_bbox_coder import BucketingBBoxCoder -from .delta_xywh_bbox_coder import DeltaXYWHBBoxCoder -from .distance_point_bbox_coder import DistancePointBBoxCoder -from .legacy_delta_xywh_bbox_coder import LegacyDeltaXYWHBBoxCoder -from .pseudo_bbox_coder import PseudoBBoxCoder -from .tblr_bbox_coder import TBLRBBoxCoder -from .yolo_bbox_coder import YOLOBBoxCoder - -__all__ = [ - 'BaseBBoxCoder', 'PseudoBBoxCoder', 'DeltaXYWHBBoxCoder', - 'LegacyDeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'YOLOBBoxCoder', - 'BucketingBBoxCoder', 'DistancePointBBoxCoder' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/base_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/base_bbox_coder.py deleted file mode 100644 index a7ed041a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/base_bbox_coder.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BaseBBoxCoder(metaclass=ABCMeta): - """Base bounding box coder.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def encode(self, bboxes, gt_bboxes): - """Encode deltas between bboxes and ground truth boxes.""" - - @abstractmethod - def decode(self, bboxes, bboxes_pred): - """Decode the predicted bboxes according to prediction and base - boxes.""" diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/bucketing_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/bucketing_bbox_coder.py deleted file mode 100644 index 4be0ada0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/bucketing_bbox_coder.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch -import torch.nn.functional as F - -from ..builder import BBOX_CODERS -from ..transforms import bbox_rescale -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class BucketingBBoxCoder(BaseBBoxCoder): - """Bucketing BBox Coder for Side-Aware Boundary Localization (SABL). - - Boundary Localization with Bucketing and Bucketing Guided Rescoring - are implemented here. - - Please refer to https://arxiv.org/abs/1912.04260 for more details. - - Args: - num_buckets (int): Number of buckets. - scale_factor (int): Scale factor of proposals to generate buckets. 
- offset_topk (int): Topk buckets are used to generate - bucket fine regression targets. Defaults to 2. - offset_upperbound (float): Offset upperbound to generate - bucket fine regression targets. - To avoid too large offset displacements. Defaults to 1.0. - cls_ignore_neighbor (bool): Ignore second nearest bucket or Not. - Defaults to True. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - """ - - def __init__(self, - num_buckets, - scale_factor, - offset_topk=2, - offset_upperbound=1.0, - cls_ignore_neighbor=True, - clip_border=True): - super(BucketingBBoxCoder, self).__init__() - self.num_buckets = num_buckets - self.scale_factor = scale_factor - self.offset_topk = offset_topk - self.offset_upperbound = offset_upperbound - self.cls_ignore_neighbor = cls_ignore_neighbor - self.clip_border = clip_border - - def encode(self, bboxes, gt_bboxes): - """Get bucketing estimation and fine regression targets during - training. - - Args: - bboxes (torch.Tensor): source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): target of the transformation, e.g., - ground truth boxes. - - Returns: - encoded_bboxes(tuple[Tensor]): bucketing estimation - and fine regression targets and weights - """ - - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bbox2bucket(bboxes, gt_bboxes, self.num_buckets, - self.scale_factor, self.offset_topk, - self.offset_upperbound, - self.cls_ignore_neighbor) - return encoded_bboxes - - def decode(self, bboxes, pred_bboxes, max_shape=None): - """Apply transformation `pred_bboxes` to `boxes`. - Args: - boxes (torch.Tensor): Basic boxes. - pred_bboxes (torch.Tensor): Predictions for bucketing estimation - and fine regression - max_shape (tuple[int], optional): Maximum shape of boxes. - Defaults to None. - - Returns: - torch.Tensor: Decoded boxes. - """ - assert len(pred_bboxes) == 2 - cls_preds, offset_preds = pred_bboxes - assert cls_preds.size(0) == bboxes.size(0) and offset_preds.size( - 0) == bboxes.size(0) - decoded_bboxes = bucket2bbox(bboxes, cls_preds, offset_preds, - self.num_buckets, self.scale_factor, - max_shape, self.clip_border) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def generat_buckets(proposals, num_buckets, scale_factor=1.0): - """Generate buckets w.r.t bucket number and scale factor of proposals. - - Args: - proposals (Tensor): Shape (n, 4) - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - - Returns: - tuple[Tensor]: (bucket_w, bucket_h, l_buckets, r_buckets, - t_buckets, d_buckets) - - - bucket_w: Width of buckets on x-axis. Shape (n, ). - - bucket_h: Height of buckets on y-axis. Shape (n, ). - - l_buckets: Left buckets. Shape (n, ceil(side_num/2)). - - r_buckets: Right buckets. Shape (n, ceil(side_num/2)). - - t_buckets: Top buckets. Shape (n, ceil(side_num/2)). - - d_buckets: Down buckets. Shape (n, ceil(side_num/2)). 
- """ - proposals = bbox_rescale(proposals, scale_factor) - - # number of buckets in each side - side_num = int(np.ceil(num_buckets / 2.0)) - pw = proposals[..., 2] - proposals[..., 0] - ph = proposals[..., 3] - proposals[..., 1] - px1 = proposals[..., 0] - py1 = proposals[..., 1] - px2 = proposals[..., 2] - py2 = proposals[..., 3] - - bucket_w = pw / num_buckets - bucket_h = ph / num_buckets - - # left buckets - l_buckets = px1[:, None] + (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_w[:, None] - # right buckets - r_buckets = px2[:, None] - (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_w[:, None] - # top buckets - t_buckets = py1[:, None] + (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_h[:, None] - # down buckets - d_buckets = py2[:, None] - (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_h[:, None] - return bucket_w, bucket_h, l_buckets, r_buckets, t_buckets, d_buckets - - -@mmcv.jit(coderize=True) -def bbox2bucket(proposals, - gt, - num_buckets, - scale_factor, - offset_topk=2, - offset_upperbound=1.0, - cls_ignore_neighbor=True): - """Generate buckets estimation and fine regression targets. - - Args: - proposals (Tensor): Shape (n, 4) - gt (Tensor): Shape (n, 4) - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - offset_topk (int): Topk buckets are used to generate - bucket fine regression targets. Defaults to 2. - offset_upperbound (float): Offset allowance to generate - bucket fine regression targets. - To avoid too large offset displacements. Defaults to 1.0. - cls_ignore_neighbor (bool): Ignore second nearest bucket or Not. - Defaults to True. - - Returns: - tuple[Tensor]: (offsets, offsets_weights, bucket_labels, cls_weights). - - - offsets: Fine regression targets. \ - Shape (n, num_buckets*2). - - offsets_weights: Fine regression weights. \ - Shape (n, num_buckets*2). - - bucket_labels: Bucketing estimation labels. \ - Shape (n, num_buckets*2). - - cls_weights: Bucketing estimation weights. \ - Shape (n, num_buckets*2). 
- """ - assert proposals.size() == gt.size() - - # generate buckets - proposals = proposals.float() - gt = gt.float() - (bucket_w, bucket_h, l_buckets, r_buckets, t_buckets, - d_buckets) = generat_buckets(proposals, num_buckets, scale_factor) - - gx1 = gt[..., 0] - gy1 = gt[..., 1] - gx2 = gt[..., 2] - gy2 = gt[..., 3] - - # generate offset targets and weights - # offsets from buckets to gts - l_offsets = (l_buckets - gx1[:, None]) / bucket_w[:, None] - r_offsets = (r_buckets - gx2[:, None]) / bucket_w[:, None] - t_offsets = (t_buckets - gy1[:, None]) / bucket_h[:, None] - d_offsets = (d_buckets - gy2[:, None]) / bucket_h[:, None] - - # select top-k nearest buckets - l_topk, l_label = l_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - r_topk, r_label = r_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - t_topk, t_label = t_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - d_topk, d_label = d_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - - offset_l_weights = l_offsets.new_zeros(l_offsets.size()) - offset_r_weights = r_offsets.new_zeros(r_offsets.size()) - offset_t_weights = t_offsets.new_zeros(t_offsets.size()) - offset_d_weights = d_offsets.new_zeros(d_offsets.size()) - inds = torch.arange(0, proposals.size(0)).to(proposals).long() - - # generate offset weights of top-k nearest buckets - for k in range(offset_topk): - if k >= 1: - offset_l_weights[inds, l_label[:, - k]] = (l_topk[:, k] < - offset_upperbound).float() - offset_r_weights[inds, r_label[:, - k]] = (r_topk[:, k] < - offset_upperbound).float() - offset_t_weights[inds, t_label[:, - k]] = (t_topk[:, k] < - offset_upperbound).float() - offset_d_weights[inds, d_label[:, - k]] = (d_topk[:, k] < - offset_upperbound).float() - else: - offset_l_weights[inds, l_label[:, k]] = 1.0 - offset_r_weights[inds, r_label[:, k]] = 1.0 - offset_t_weights[inds, t_label[:, k]] = 1.0 - offset_d_weights[inds, d_label[:, k]] = 1.0 - - offsets = torch.cat([l_offsets, r_offsets, t_offsets, d_offsets], dim=-1) - offsets_weights = torch.cat([ - offset_l_weights, offset_r_weights, offset_t_weights, offset_d_weights - ], - dim=-1) - - # generate bucket labels and weight - side_num = int(np.ceil(num_buckets / 2.0)) - labels = torch.stack( - [l_label[:, 0], r_label[:, 0], t_label[:, 0], d_label[:, 0]], dim=-1) - - batch_size = labels.size(0) - bucket_labels = F.one_hot(labels.view(-1), side_num).view(batch_size, - -1).float() - bucket_cls_l_weights = (l_offsets.abs() < 1).float() - bucket_cls_r_weights = (r_offsets.abs() < 1).float() - bucket_cls_t_weights = (t_offsets.abs() < 1).float() - bucket_cls_d_weights = (d_offsets.abs() < 1).float() - bucket_cls_weights = torch.cat([ - bucket_cls_l_weights, bucket_cls_r_weights, bucket_cls_t_weights, - bucket_cls_d_weights - ], - dim=-1) - # ignore second nearest buckets for cls if necessary - if cls_ignore_neighbor: - bucket_cls_weights = (~((bucket_cls_weights == 1) & - (bucket_labels == 0))).float() - else: - bucket_cls_weights[:] = 1.0 - return offsets, offsets_weights, bucket_labels, bucket_cls_weights - - -@mmcv.jit(coderize=True) -def bucket2bbox(proposals, - cls_preds, - offset_preds, - num_buckets, - scale_factor=1.0, - max_shape=None, - clip_border=True): - """Apply bucketing estimation (cls preds) and fine regression (offset - preds) to generate det bboxes. - - Args: - proposals (Tensor): Boxes to be transformed. Shape (n, 4) - cls_preds (Tensor): bucketing estimation. Shape (n, num_buckets*2). 
- offset_preds (Tensor): fine regression. Shape (n, num_buckets*2). - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - max_shape (tuple[int, int]): Maximum bounds for boxes. specifies (H, W) - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - - Returns: - tuple[Tensor]: (bboxes, loc_confidence). - - - bboxes: predicted bboxes. Shape (n, 4) - - loc_confidence: localization confidence of predicted bboxes. - Shape (n,). - """ - - side_num = int(np.ceil(num_buckets / 2.0)) - cls_preds = cls_preds.view(-1, side_num) - offset_preds = offset_preds.view(-1, side_num) - - scores = F.softmax(cls_preds, dim=1) - score_topk, score_label = scores.topk(2, dim=1, largest=True, sorted=True) - - rescaled_proposals = bbox_rescale(proposals, scale_factor) - - pw = rescaled_proposals[..., 2] - rescaled_proposals[..., 0] - ph = rescaled_proposals[..., 3] - rescaled_proposals[..., 1] - px1 = rescaled_proposals[..., 0] - py1 = rescaled_proposals[..., 1] - px2 = rescaled_proposals[..., 2] - py2 = rescaled_proposals[..., 3] - - bucket_w = pw / num_buckets - bucket_h = ph / num_buckets - - score_inds_l = score_label[0::4, 0] - score_inds_r = score_label[1::4, 0] - score_inds_t = score_label[2::4, 0] - score_inds_d = score_label[3::4, 0] - l_buckets = px1 + (0.5 + score_inds_l.float()) * bucket_w - r_buckets = px2 - (0.5 + score_inds_r.float()) * bucket_w - t_buckets = py1 + (0.5 + score_inds_t.float()) * bucket_h - d_buckets = py2 - (0.5 + score_inds_d.float()) * bucket_h - - offsets = offset_preds.view(-1, 4, side_num) - inds = torch.arange(proposals.size(0)).to(proposals).long() - l_offsets = offsets[:, 0, :][inds, score_inds_l] - r_offsets = offsets[:, 1, :][inds, score_inds_r] - t_offsets = offsets[:, 2, :][inds, score_inds_t] - d_offsets = offsets[:, 3, :][inds, score_inds_d] - - x1 = l_buckets - l_offsets * bucket_w - x2 = r_buckets - r_offsets * bucket_w - y1 = t_buckets - t_offsets * bucket_h - y2 = d_buckets - d_offsets * bucket_h - - if clip_border and max_shape is not None: - x1 = x1.clamp(min=0, max=max_shape[1] - 1) - y1 = y1.clamp(min=0, max=max_shape[0] - 1) - x2 = x2.clamp(min=0, max=max_shape[1] - 1) - y2 = y2.clamp(min=0, max=max_shape[0] - 1) - bboxes = torch.cat([x1[:, None], y1[:, None], x2[:, None], y2[:, None]], - dim=-1) - - # bucketing guided rescoring - loc_confidence = score_topk[:, 0] - top2_neighbor_inds = (score_label[:, 0] - score_label[:, 1]).abs() == 1 - loc_confidence += score_topk[:, 1] * top2_neighbor_inds.float() - loc_confidence = loc_confidence.view(-1, 4).mean(dim=1) - - return bboxes, loc_confidence diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py deleted file mode 100644 index a7f1c62f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py +++ /dev/null @@ -1,392 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -import numpy as np -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class DeltaXYWHBBoxCoder(BaseBBoxCoder): - """Delta XYWH BBox coder. - - Following the practice in `R-CNN `_, - this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and - decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2). 
- - Args: - target_means (Sequence[float]): Denormalizing means of target for - delta coordinates - target_stds (Sequence[float]): Denormalizing standard deviation of - target for delta coordinates - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - add_ctr_clamp (bool): Whether to add center clamp, when added, the - predicted box is clamped is its center is too far away from - the original anchor's center. Only used by YOLOF. Default False. - ctr_clamp (int): the maximum pixel shift to clamp. Only used by YOLOF. - Default 32. - """ - - def __init__(self, - target_means=(0., 0., 0., 0.), - target_stds=(1., 1., 1., 1.), - clip_border=True, - add_ctr_clamp=False, - ctr_clamp=32): - super(BaseBBoxCoder, self).__init__() - self.means = target_means - self.stds = target_stds - self.clip_border = clip_border - self.add_ctr_clamp = add_ctr_clamp - self.ctr_clamp = ctr_clamp - - def encode(self, bboxes, gt_bboxes): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes``. - - Args: - bboxes (torch.Tensor): Source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): Target of the transformation, e.g., - ground-truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bbox2delta(bboxes, gt_bboxes, self.means, self.stds) - return encoded_bboxes - - def decode(self, - bboxes, - pred_bboxes, - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - bboxes (torch.Tensor): Basic boxes. Shape (B, N, 4) or (N, 4) - pred_bboxes (Tensor): Encoded offsets with respect to each roi. - Has shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H - when rois is a grid of anchors.Offset encoding follows [1]_. - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - wh_ratio_clip (float, optional): The allowed ratio between - width and height. - - Returns: - torch.Tensor: Decoded boxes. - """ - - assert pred_bboxes.size(0) == bboxes.size(0) - if pred_bboxes.ndim == 3: - assert pred_bboxes.size(1) == bboxes.size(1) - - if pred_bboxes.ndim == 2 and not torch.onnx.is_in_onnx_export(): - # single image decode - decoded_bboxes = delta2bbox(bboxes, pred_bboxes, self.means, - self.stds, max_shape, wh_ratio_clip, - self.clip_border, self.add_ctr_clamp, - self.ctr_clamp) - else: - if pred_bboxes.ndim == 3 and not torch.onnx.is_in_onnx_export(): - warnings.warn( - 'DeprecationWarning: onnx_delta2bbox is deprecated ' - 'in the case of batch decoding and non-ONNX, ' - 'please use “delta2bbox” instead. In order to improve ' - 'the decoding speed, the batch function will no ' - 'longer be supported. ') - decoded_bboxes = onnx_delta2bbox(bboxes, pred_bboxes, self.means, - self.stds, max_shape, - wh_ratio_clip, self.clip_border, - self.add_ctr_clamp, - self.ctr_clamp) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def bbox2delta(proposals, gt, means=(0., 0., 0., 0.), stds=(1., 1., 1., 1.)): - """Compute deltas of proposals w.r.t. gt. - - We usually compute the deltas of x, y, w, h of proposals w.r.t ground - truth bboxes to get regression target. 
- This is the inverse function of :func:`delta2bbox`. - - Args: - proposals (Tensor): Boxes to be transformed, shape (N, ..., 4) - gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4) - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - - Returns: - Tensor: deltas with shape (N, 4), where columns represent dx, dy, - dw, dh. - """ - assert proposals.size() == gt.size() - - proposals = proposals.float() - gt = gt.float() - px = (proposals[..., 0] + proposals[..., 2]) * 0.5 - py = (proposals[..., 1] + proposals[..., 3]) * 0.5 - pw = proposals[..., 2] - proposals[..., 0] - ph = proposals[..., 3] - proposals[..., 1] - - gx = (gt[..., 0] + gt[..., 2]) * 0.5 - gy = (gt[..., 1] + gt[..., 3]) * 0.5 - gw = gt[..., 2] - gt[..., 0] - gh = gt[..., 3] - gt[..., 1] - - dx = (gx - px) / pw - dy = (gy - py) / ph - dw = torch.log(gw / pw) - dh = torch.log(gh / ph) - deltas = torch.stack([dx, dy, dw, dh], dim=-1) - - means = deltas.new_tensor(means).unsqueeze(0) - stds = deltas.new_tensor(stds).unsqueeze(0) - deltas = deltas.sub_(means).div_(stds) - - return deltas - - -@mmcv.jit(coderize=True) -def delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000, - clip_border=True, - add_ctr_clamp=False, - ctr_clamp=32): - """Apply deltas to shift/scale base boxes. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. - This is the inverse function of :func:`bbox2delta`. - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4). - deltas (Tensor): Encoded offsets relative to each roi. - Has shape (N, num_classes * 4) or (N, 4). Note - N = num_base_anchors * W * H, when rois is a grid of - anchors. Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates. - Default (0., 0., 0., 0.). - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates. Default (1., 1., 1., 1.). - max_shape (tuple[int, int]): Maximum bounds for boxes, specifies - (H, W). Default None. - wh_ratio_clip (float): Maximum aspect ratio for boxes. Default - 16 / 1000. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Default True. - add_ctr_clamp (bool): Whether to add center clamp. When set to True, - the center of the prediction bounding box will be clamped to - avoid being too far away from the center of the anchor. - Only used by YOLOF. Default False. - ctr_clamp (int): the maximum pixel shift to clamp. Only used by YOLOF. - Default 32. - - Returns: - Tensor: Boxes with shape (N, num_classes * 4) or (N, 4), where 4 - represent tl_x, tl_y, br_x, br_y. - - References: - .. 
[1] https://arxiv.org/abs/1311.2524 - - Example: - >>> rois = torch.Tensor([[ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 5., 5., 5., 5.]]) - >>> deltas = torch.Tensor([[ 0., 0., 0., 0.], - >>> [ 1., 1., 1., 1.], - >>> [ 0., 0., 2., -1.], - >>> [ 0.7, -1.9, -0.5, 0.3]]) - >>> delta2bbox(rois, deltas, max_shape=(32, 32, 3)) - tensor([[0.0000, 0.0000, 1.0000, 1.0000], - [0.1409, 0.1409, 2.8591, 2.8591], - [0.0000, 0.3161, 4.1945, 0.6839], - [5.0000, 5.0000, 5.0000, 5.0000]]) - """ - num_bboxes, num_classes = deltas.size(0), deltas.size(1) // 4 - if num_bboxes == 0: - return deltas - - deltas = deltas.reshape(-1, 4) - - means = deltas.new_tensor(means).view(1, -1) - stds = deltas.new_tensor(stds).view(1, -1) - denorm_deltas = deltas * stds + means - - dxy = denorm_deltas[:, :2] - dwh = denorm_deltas[:, 2:] - - # Compute width/height of each roi - rois_ = rois.repeat(1, num_classes).reshape(-1, 4) - pxy = ((rois_[:, :2] + rois_[:, 2:]) * 0.5) - pwh = (rois_[:, 2:] - rois_[:, :2]) - - dxy_wh = pwh * dxy - - max_ratio = np.abs(np.log(wh_ratio_clip)) - if add_ctr_clamp: - dxy_wh = torch.clamp(dxy_wh, max=ctr_clamp, min=-ctr_clamp) - dwh = torch.clamp(dwh, max=max_ratio) - else: - dwh = dwh.clamp(min=-max_ratio, max=max_ratio) - - gxy = pxy + dxy_wh - gwh = pwh * dwh.exp() - x1y1 = gxy - (gwh * 0.5) - x2y2 = gxy + (gwh * 0.5) - bboxes = torch.cat([x1y1, x2y2], dim=-1) - if clip_border and max_shape is not None: - bboxes[..., 0::2].clamp_(min=0, max=max_shape[1]) - bboxes[..., 1::2].clamp_(min=0, max=max_shape[0]) - bboxes = bboxes.reshape(num_bboxes, -1) - return bboxes - - -def onnx_delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000, - clip_border=True, - add_ctr_clamp=False, - ctr_clamp=32): - """Apply deltas to shift/scale base boxes. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. - This is the inverse function of :func:`bbox2delta`. - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4) or (B, N, 4) - deltas (Tensor): Encoded offsets with respect to each roi. - Has shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H - when rois is a grid of anchors.Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates. - Default (0., 0., 0., 0.). - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates. Default (1., 1., 1., 1.). - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If rois shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. Default None. - wh_ratio_clip (float): Maximum aspect ratio for boxes. - Default 16 / 1000. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Default True. - add_ctr_clamp (bool): Whether to add center clamp, when added, the - predicted box is clamped is its center is too far away from - the original anchor's center. Only used by YOLOF. Default False. - ctr_clamp (int): the maximum pixel shift to clamp. Only used by YOLOF. - Default 32. - - Returns: - Tensor: Boxes with shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4), where 4 represent - tl_x, tl_y, br_x, br_y. - - References: - .. 
[1] https://arxiv.org/abs/1311.2524 - - Example: - >>> rois = torch.Tensor([[ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 5., 5., 5., 5.]]) - >>> deltas = torch.Tensor([[ 0., 0., 0., 0.], - >>> [ 1., 1., 1., 1.], - >>> [ 0., 0., 2., -1.], - >>> [ 0.7, -1.9, -0.5, 0.3]]) - >>> delta2bbox(rois, deltas, max_shape=(32, 32, 3)) - tensor([[0.0000, 0.0000, 1.0000, 1.0000], - [0.1409, 0.1409, 2.8591, 2.8591], - [0.0000, 0.3161, 4.1945, 0.6839], - [5.0000, 5.0000, 5.0000, 5.0000]]) - """ - means = deltas.new_tensor(means).view(1, - -1).repeat(1, - deltas.size(-1) // 4) - stds = deltas.new_tensor(stds).view(1, -1).repeat(1, deltas.size(-1) // 4) - denorm_deltas = deltas * stds + means - dx = denorm_deltas[..., 0::4] - dy = denorm_deltas[..., 1::4] - dw = denorm_deltas[..., 2::4] - dh = denorm_deltas[..., 3::4] - - x1, y1 = rois[..., 0], rois[..., 1] - x2, y2 = rois[..., 2], rois[..., 3] - # Compute center of each roi - px = ((x1 + x2) * 0.5).unsqueeze(-1).expand_as(dx) - py = ((y1 + y2) * 0.5).unsqueeze(-1).expand_as(dy) - # Compute width/height of each roi - pw = (x2 - x1).unsqueeze(-1).expand_as(dw) - ph = (y2 - y1).unsqueeze(-1).expand_as(dh) - - dx_width = pw * dx - dy_height = ph * dy - - max_ratio = np.abs(np.log(wh_ratio_clip)) - if add_ctr_clamp: - dx_width = torch.clamp(dx_width, max=ctr_clamp, min=-ctr_clamp) - dy_height = torch.clamp(dy_height, max=ctr_clamp, min=-ctr_clamp) - dw = torch.clamp(dw, max=max_ratio) - dh = torch.clamp(dh, max=max_ratio) - else: - dw = dw.clamp(min=-max_ratio, max=max_ratio) - dh = dh.clamp(min=-max_ratio, max=max_ratio) - # Use exp(network energy) to enlarge/shrink each roi - gw = pw * dw.exp() - gh = ph * dh.exp() - # Use network energy to shift the center of each roi - gx = px + dx_width - gy = py + dy_height - # Convert center-xy/width/height to top-left, bottom-right - x1 = gx - gw * 0.5 - y1 = gy - gh * 0.5 - x2 = gx + gw * 0.5 - y2 = gy + gh * 0.5 - - bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view(deltas.size()) - - if clip_border and max_shape is not None: - # clip bboxes with dynamic `min` and `max` for onnx - if torch.onnx.is_in_onnx_export(): - from mmdet.core.export import dynamic_clip_for_onnx - x1, y1, x2, y2 = dynamic_clip_for_onnx(x1, y1, x2, y2, max_shape) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view(deltas.size()) - return bboxes - if not isinstance(max_shape, torch.Tensor): - max_shape = x1.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(x1) - if max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = x1.new_tensor(0) - max_xy = torch.cat( - [max_shape] * (deltas.size(-1) // 2), - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/distance_point_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/distance_point_bbox_coder.py deleted file mode 100644 index 9f308a84..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/distance_point_bbox_coder.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import BBOX_CODERS -from ..transforms import bbox2distance, distance2bbox -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class DistancePointBBoxCoder(BaseBBoxCoder): - """Distance Point BBox coder. 
- - This coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, - right) and decode it back to the original. - - Args: - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - """ - - def __init__(self, clip_border=True): - super(BaseBBoxCoder, self).__init__() - self.clip_border = clip_border - - def encode(self, points, gt_bboxes, max_dis=None, eps=0.1): - """Encode bounding box to distances. - - Args: - points (Tensor): Shape (N, 2), The format is [x, y]. - gt_bboxes (Tensor): Shape (N, 4), The format is "xyxy" - max_dis (float): Upper bound of the distance. Default None. - eps (float): a small value to ensure target < max_dis, instead <=. - Default 0.1. - - Returns: - Tensor: Box transformation deltas. The shape is (N, 4). - """ - assert points.size(0) == gt_bboxes.size(0) - assert points.size(-1) == 2 - assert gt_bboxes.size(-1) == 4 - return bbox2distance(points, gt_bboxes, max_dis, eps) - - def decode(self, points, pred_bboxes, max_shape=None): - """Decode distance prediction to bounding box. - - Args: - points (Tensor): Shape (B, N, 2) or (N, 2). - pred_bboxes (Tensor): Distance from the given point to 4 - boundaries (left, top, right, bottom). Shape (B, N, 4) - or (N, 4) - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If priors shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]], - and the length of max_shape should also be B. - Default None. - Returns: - Tensor: Boxes with shape (N, 4) or (B, N, 4) - """ - assert points.size(0) == pred_bboxes.size(0) - assert points.size(-1) == 2 - assert pred_bboxes.size(-1) == 4 - if self.clip_border is False: - max_shape = None - return distance2bbox(points, pred_bboxes, max_shape) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py deleted file mode 100644 index 7fa348b2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py +++ /dev/null @@ -1,216 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class LegacyDeltaXYWHBBoxCoder(BaseBBoxCoder): - """Legacy Delta XYWH BBox coder used in MMDet V1.x. - - Following the practice in R-CNN [1]_, this coder encodes bbox (x1, y1, x2, - y2) into delta (dx, dy, dw, dh) and decodes delta (dx, dy, dw, dh) - back to original bbox (x1, y1, x2, y2). - - Note: - The main difference between :class`LegacyDeltaXYWHBBoxCoder` and - :class:`DeltaXYWHBBoxCoder` is whether ``+ 1`` is used during width and - height calculation. We suggest to only use this coder when testing with - MMDet V1.x models. - - References: - .. 
[1] https://arxiv.org/abs/1311.2524 - - Args: - target_means (Sequence[float]): denormalizing means of target for - delta coordinates - target_stds (Sequence[float]): denormalizing standard deviation of - target for delta coordinates - """ - - def __init__(self, - target_means=(0., 0., 0., 0.), - target_stds=(1., 1., 1., 1.)): - super(BaseBBoxCoder, self).__init__() - self.means = target_means - self.stds = target_stds - - def encode(self, bboxes, gt_bboxes): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes``. - - Args: - bboxes (torch.Tensor): source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): target of the transformation, e.g., - ground-truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = legacy_bbox2delta(bboxes, gt_bboxes, self.means, - self.stds) - return encoded_bboxes - - def decode(self, - bboxes, - pred_bboxes, - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - boxes (torch.Tensor): Basic boxes. - pred_bboxes (torch.Tensor): Encoded boxes with shape - max_shape (tuple[int], optional): Maximum shape of boxes. - Defaults to None. - wh_ratio_clip (float, optional): The allowed ratio between - width and height. - - Returns: - torch.Tensor: Decoded boxes. - """ - assert pred_bboxes.size(0) == bboxes.size(0) - decoded_bboxes = legacy_delta2bbox(bboxes, pred_bboxes, self.means, - self.stds, max_shape, wh_ratio_clip) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def legacy_bbox2delta(proposals, - gt, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.)): - """Compute deltas of proposals w.r.t. gt in the MMDet V1.x manner. - - We usually compute the deltas of x, y, w, h of proposals w.r.t ground - truth bboxes to get regression target. - This is the inverse function of `delta2bbox()` - - Args: - proposals (Tensor): Boxes to be transformed, shape (N, ..., 4) - gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4) - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - - Returns: - Tensor: deltas with shape (N, 4), where columns represent dx, dy, - dw, dh. - """ - assert proposals.size() == gt.size() - - proposals = proposals.float() - gt = gt.float() - px = (proposals[..., 0] + proposals[..., 2]) * 0.5 - py = (proposals[..., 1] + proposals[..., 3]) * 0.5 - pw = proposals[..., 2] - proposals[..., 0] + 1.0 - ph = proposals[..., 3] - proposals[..., 1] + 1.0 - - gx = (gt[..., 0] + gt[..., 2]) * 0.5 - gy = (gt[..., 1] + gt[..., 3]) * 0.5 - gw = gt[..., 2] - gt[..., 0] + 1.0 - gh = gt[..., 3] - gt[..., 1] + 1.0 - - dx = (gx - px) / pw - dy = (gy - py) / ph - dw = torch.log(gw / pw) - dh = torch.log(gh / ph) - deltas = torch.stack([dx, dy, dw, dh], dim=-1) - - means = deltas.new_tensor(means).unsqueeze(0) - stds = deltas.new_tensor(stds).unsqueeze(0) - deltas = deltas.sub_(means).div_(stds) - - return deltas - - -@mmcv.jit(coderize=True) -def legacy_delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply deltas to shift/scale base boxes in the MMDet V1.x manner. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. 
- This is the inverse function of `bbox2delta()` - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4) - deltas (Tensor): Encoded offsets with respect to each roi. - Has shape (N, 4 * num_classes). Note N = num_anchors * W * H when - rois is a grid of anchors. Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - max_shape (tuple[int, int]): Maximum bounds for boxes. specifies (H, W) - wh_ratio_clip (float): Maximum aspect ratio for boxes. - - Returns: - Tensor: Boxes with shape (N, 4), where columns represent - tl_x, tl_y, br_x, br_y. - - References: - .. [1] https://arxiv.org/abs/1311.2524 - - Example: - >>> rois = torch.Tensor([[ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 5., 5., 5., 5.]]) - >>> deltas = torch.Tensor([[ 0., 0., 0., 0.], - >>> [ 1., 1., 1., 1.], - >>> [ 0., 0., 2., -1.], - >>> [ 0.7, -1.9, -0.5, 0.3]]) - >>> legacy_delta2bbox(rois, deltas, max_shape=(32, 32)) - tensor([[0.0000, 0.0000, 1.5000, 1.5000], - [0.0000, 0.0000, 5.2183, 5.2183], - [0.0000, 0.1321, 7.8891, 0.8679], - [5.3967, 2.4251, 6.0033, 3.7749]]) - """ - means = deltas.new_tensor(means).repeat(1, deltas.size(1) // 4) - stds = deltas.new_tensor(stds).repeat(1, deltas.size(1) // 4) - denorm_deltas = deltas * stds + means - dx = denorm_deltas[:, 0::4] - dy = denorm_deltas[:, 1::4] - dw = denorm_deltas[:, 2::4] - dh = denorm_deltas[:, 3::4] - max_ratio = np.abs(np.log(wh_ratio_clip)) - dw = dw.clamp(min=-max_ratio, max=max_ratio) - dh = dh.clamp(min=-max_ratio, max=max_ratio) - # Compute center of each roi - px = ((rois[:, 0] + rois[:, 2]) * 0.5).unsqueeze(1).expand_as(dx) - py = ((rois[:, 1] + rois[:, 3]) * 0.5).unsqueeze(1).expand_as(dy) - # Compute width/height of each roi - pw = (rois[:, 2] - rois[:, 0] + 1.0).unsqueeze(1).expand_as(dw) - ph = (rois[:, 3] - rois[:, 1] + 1.0).unsqueeze(1).expand_as(dh) - # Use exp(network energy) to enlarge/shrink each roi - gw = pw * dw.exp() - gh = ph * dh.exp() - # Use network energy to shift the center of each roi - gx = px + pw * dx - gy = py + ph * dy - # Convert center-xy/width/height to top-left, bottom-right - - # The true legacy box coder should +- 0.5 here. - # However, current implementation improves the performance when testing - # the models trained in MMDetection 1.X (~0.5 bbox AP, 0.2 mask AP) - x1 = gx - gw * 0.5 - y1 = gy - gh * 0.5 - x2 = gx + gw * 0.5 - y2 = gy + gh * 0.5 - if max_shape is not None: - x1 = x1.clamp(min=0, max=max_shape[1] - 1) - y1 = y1.clamp(min=0, max=max_shape[0] - 1) - x2 = x2.clamp(min=0, max=max_shape[1] - 1) - y2 = y2.clamp(min=0, max=max_shape[0] - 1) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view_as(deltas) - return bboxes diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/pseudo_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/pseudo_bbox_coder.py deleted file mode 100644 index fe71f369..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/pseudo_bbox_coder.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
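The only behavioral difference from the current delta coder is the legacy `+ 1` when measuring width and height, so the same proposal/gt pair encodes to slightly different log-scale deltas; a quick illustrative check with toy boxes:

```python
# Legacy coder measures size as (x2 - x1 + 1); the current coder uses (x2 - x1).
import torch

proposal = torch.tensor([0., 0., 9., 9.])
gt = torch.tensor([0., 0., 19., 19.])

def log_dw(p, g, legacy):
    extra = 1.0 if legacy else 0.0
    return torch.log((g[2] - g[0] + extra) / (p[2] - p[0] + extra))

print(float(log_dw(proposal, gt, legacy=True)))    # log(20/10) ~= 0.693
print(float(log_dw(proposal, gt, legacy=False)))   # log(19/9)  ~= 0.747
```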
-from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class PseudoBBoxCoder(BaseBBoxCoder): - """Pseudo bounding box coder.""" - - def __init__(self, **kwargs): - super(BaseBBoxCoder, self).__init__(**kwargs) - - def encode(self, bboxes, gt_bboxes): - """torch.Tensor: return the given ``bboxes``""" - return gt_bboxes - - def decode(self, bboxes, pred_bboxes): - """torch.Tensor: return the given ``pred_bboxes``""" - return pred_bboxes diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/tblr_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/tblr_bbox_coder.py deleted file mode 100644 index cb420663..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/tblr_bbox_coder.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class TBLRBBoxCoder(BaseBBoxCoder): - """TBLR BBox coder. - - Following the practice in `FSAF `_, - this coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, - right) and decode it back to the original. - - Args: - normalizer (list | float): Normalization factor to be - divided with when coding the coordinates. If it is a list, it should - have length of 4 indicating normalization factor in tblr dims. - Otherwise it is a unified float factor for all dims. Default: 4.0 - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - """ - - def __init__(self, normalizer=4.0, clip_border=True): - super(BaseBBoxCoder, self).__init__() - self.normalizer = normalizer - self.clip_border = clip_border - - def encode(self, bboxes, gt_bboxes): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes`` in the (top, left, - bottom, right) order. - - Args: - bboxes (torch.Tensor): source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): target of the transformation, e.g., - ground truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bboxes2tblr( - bboxes, gt_bboxes, normalizer=self.normalizer) - return encoded_bboxes - - def decode(self, bboxes, pred_bboxes, max_shape=None): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - bboxes (torch.Tensor): Basic boxes.Shape (B, N, 4) or (N, 4) - pred_bboxes (torch.Tensor): Encoded boxes with shape - (B, N, 4) or (N, 4) - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - - Returns: - torch.Tensor: Decoded boxes. - """ - decoded_bboxes = tblr2bboxes( - bboxes, - pred_bboxes, - normalizer=self.normalizer, - max_shape=max_shape, - clip_border=self.clip_border) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def bboxes2tblr(priors, gts, normalizer=4.0, normalize_by_wh=True): - """Encode ground truth boxes to tblr coordinate. - - It first convert the gt coordinate to tblr format, - (top, bottom, left, right), relative to prior box centers. 
- The tblr coordinate may be normalized by the side length of prior bboxes - if `normalize_by_wh` is specified as True, and it is then normalized by - the `normalizer` factor. - - Args: - priors (Tensor): Prior boxes in point form - Shape: (num_proposals,4). - gts (Tensor): Coords of ground truth for each prior in point-form - Shape: (num_proposals, 4). - normalizer (Sequence[float] | float): normalization parameter of - encoded boxes. If it is a list, it has to have length = 4. - Default: 4.0 - normalize_by_wh (bool): Whether to normalize tblr coordinate by the - side length (wh) of prior bboxes. - - Return: - encoded boxes (Tensor), Shape: (num_proposals, 4) - """ - - # dist b/t match center and prior's center - if not isinstance(normalizer, float): - normalizer = torch.tensor(normalizer, device=priors.device) - assert len(normalizer) == 4, 'Normalizer must have length = 4' - assert priors.size(0) == gts.size(0) - prior_centers = (priors[:, 0:2] + priors[:, 2:4]) / 2 - xmin, ymin, xmax, ymax = gts.split(1, dim=1) - top = prior_centers[:, 1].unsqueeze(1) - ymin - bottom = ymax - prior_centers[:, 1].unsqueeze(1) - left = prior_centers[:, 0].unsqueeze(1) - xmin - right = xmax - prior_centers[:, 0].unsqueeze(1) - loc = torch.cat((top, bottom, left, right), dim=1) - if normalize_by_wh: - # Normalize tblr by anchor width and height - wh = priors[:, 2:4] - priors[:, 0:2] - w, h = torch.split(wh, 1, dim=1) - loc[:, :2] /= h # tb is normalized by h - loc[:, 2:] /= w # lr is normalized by w - # Normalize tblr by the given normalization factor - return loc / normalizer - - -@mmcv.jit(coderize=True) -def tblr2bboxes(priors, - tblr, - normalizer=4.0, - normalize_by_wh=True, - max_shape=None, - clip_border=True): - """Decode tblr outputs to prediction boxes. - - The process includes 3 steps: 1) De-normalize tblr coordinates by - multiplying it with `normalizer`; 2) De-normalize tblr coordinates by the - prior bbox width and height if `normalize_by_wh` is `True`; 3) Convert - tblr (top, bottom, left, right) pair relative to the center of priors back - to (xmin, ymin, xmax, ymax) coordinate. - - Args: - priors (Tensor): Prior boxes in point form (x0, y0, x1, y1) - Shape: (N,4) or (B, N, 4). - tblr (Tensor): Coords of network output in tblr form - Shape: (N, 4) or (B, N, 4). - normalizer (Sequence[float] | float): Normalization parameter of - encoded boxes. By list, it represents the normalization factors at - tblr dims. By float, it is the unified normalization factor at all - dims. Default: 4.0 - normalize_by_wh (bool): Whether the tblr coordinates have been - normalized by the side length (wh) of prior bboxes. - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If priors shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. 
- - Return: - encoded boxes (Tensor): Boxes with shape (N, 4) or (B, N, 4) - """ - if not isinstance(normalizer, float): - normalizer = torch.tensor(normalizer, device=priors.device) - assert len(normalizer) == 4, 'Normalizer must have length = 4' - assert priors.size(0) == tblr.size(0) - if priors.ndim == 3: - assert priors.size(1) == tblr.size(1) - - loc_decode = tblr * normalizer - prior_centers = (priors[..., 0:2] + priors[..., 2:4]) / 2 - if normalize_by_wh: - wh = priors[..., 2:4] - priors[..., 0:2] - w, h = torch.split(wh, 1, dim=-1) - # Inplace operation with slice would failed for exporting to ONNX - th = h * loc_decode[..., :2] # tb - tw = w * loc_decode[..., 2:] # lr - loc_decode = torch.cat([th, tw], dim=-1) - # Cannot be exported using onnx when loc_decode.split(1, dim=-1) - top, bottom, left, right = loc_decode.split((1, 1, 1, 1), dim=-1) - xmin = prior_centers[..., 0].unsqueeze(-1) - left - xmax = prior_centers[..., 0].unsqueeze(-1) + right - ymin = prior_centers[..., 1].unsqueeze(-1) - top - ymax = prior_centers[..., 1].unsqueeze(-1) + bottom - - bboxes = torch.cat((xmin, ymin, xmax, ymax), dim=-1) - - if clip_border and max_shape is not None: - # clip bboxes with dynamic `min` and `max` for onnx - if torch.onnx.is_in_onnx_export(): - from mmdet.core.export import dynamic_clip_for_onnx - xmin, ymin, xmax, ymax = dynamic_clip_for_onnx( - xmin, ymin, xmax, ymax, max_shape) - bboxes = torch.cat([xmin, ymin, xmax, ymax], dim=-1) - return bboxes - if not isinstance(max_shape, torch.Tensor): - max_shape = priors.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(priors) - if max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = priors.new_tensor(0) - max_xy = torch.cat([max_shape, max_shape], - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/yolo_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/yolo_bbox_coder.py deleted file mode 100644 index 2852eca7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/coder/yolo_bbox_coder.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class YOLOBBoxCoder(BaseBBoxCoder): - """YOLO BBox coder. - - Following `YOLO `_, this coder divide - image into grids, and encode bbox (x1, y1, x2, y2) into (cx, cy, dw, dh). - cx, cy in [0., 1.], denotes relative center position w.r.t the center of - bboxes. dw, dh are the same as :obj:`DeltaXYWHBBoxCoder`. - - Args: - eps (float): Min value of cx, cy when encoding. - """ - - def __init__(self, eps=1e-6): - super(BaseBBoxCoder, self).__init__() - self.eps = eps - - @mmcv.jit(coderize=True) - def encode(self, bboxes, gt_bboxes, stride): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes``. - - Args: - bboxes (torch.Tensor): Source boxes, e.g., anchors. - gt_bboxes (torch.Tensor): Target of the transformation, e.g., - ground-truth boxes. - stride (torch.Tensor | int): Stride of bboxes. 
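The TBLR encode/decode pair above can be exercised end to end with plain tensors. A minimal sketch assuming a scalar `normalizer` and `normalize_by_wh=True` (the common FSAF setup); helper names are illustrative:

```python
# TBLR coding: distances (top, bottom, left, right) from the prior center,
# normalized by the prior's height/width and a scalar normalizer.
import torch

def encode_tblr(priors, gts, normalizer=4.0):
    cx = (priors[:, 0] + priors[:, 2]) / 2
    cy = (priors[:, 1] + priors[:, 3]) / 2
    w = priors[:, 2] - priors[:, 0]
    h = priors[:, 3] - priors[:, 1]
    top = (cy - gts[:, 1]) / h
    bottom = (gts[:, 3] - cy) / h
    left = (cx - gts[:, 0]) / w
    right = (gts[:, 2] - cx) / w
    return torch.stack([top, bottom, left, right], dim=-1) / normalizer

def decode_tblr(priors, tblr, normalizer=4.0):
    cx = (priors[:, 0] + priors[:, 2]) / 2
    cy = (priors[:, 1] + priors[:, 3]) / 2
    w = priors[:, 2] - priors[:, 0]
    h = priors[:, 3] - priors[:, 1]
    top, bottom, left, right = (tblr * normalizer).unbind(dim=-1)
    return torch.stack([cx - left * w, cy - top * h,
                        cx + right * w, cy + bottom * h], dim=-1)

priors = torch.tensor([[10., 10., 30., 50.]])
gts = torch.tensor([[12., 8., 28., 44.]])
assert torch.allclose(decode_tblr(priors, encode_tblr(priors, gts)), gts)
```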
- - Returns: - torch.Tensor: Box transformation deltas - """ - - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - x_center_gt = (gt_bboxes[..., 0] + gt_bboxes[..., 2]) * 0.5 - y_center_gt = (gt_bboxes[..., 1] + gt_bboxes[..., 3]) * 0.5 - w_gt = gt_bboxes[..., 2] - gt_bboxes[..., 0] - h_gt = gt_bboxes[..., 3] - gt_bboxes[..., 1] - x_center = (bboxes[..., 0] + bboxes[..., 2]) * 0.5 - y_center = (bboxes[..., 1] + bboxes[..., 3]) * 0.5 - w = bboxes[..., 2] - bboxes[..., 0] - h = bboxes[..., 3] - bboxes[..., 1] - w_target = torch.log((w_gt / w).clamp(min=self.eps)) - h_target = torch.log((h_gt / h).clamp(min=self.eps)) - x_center_target = ((x_center_gt - x_center) / stride + 0.5).clamp( - self.eps, 1 - self.eps) - y_center_target = ((y_center_gt - y_center) / stride + 0.5).clamp( - self.eps, 1 - self.eps) - encoded_bboxes = torch.stack( - [x_center_target, y_center_target, w_target, h_target], dim=-1) - return encoded_bboxes - - @mmcv.jit(coderize=True) - def decode(self, bboxes, pred_bboxes, stride): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - boxes (torch.Tensor): Basic boxes, e.g. anchors. - pred_bboxes (torch.Tensor): Encoded boxes with shape - stride (torch.Tensor | int): Strides of bboxes. - - Returns: - torch.Tensor: Decoded boxes. - """ - assert pred_bboxes.size(-1) == bboxes.size(-1) == 4 - xy_centers = (bboxes[..., :2] + bboxes[..., 2:]) * 0.5 + ( - pred_bboxes[..., :2] - 0.5) * stride - whs = (bboxes[..., 2:] - - bboxes[..., :2]) * 0.5 * pred_bboxes[..., 2:].exp() - decoded_bboxes = torch.stack( - (xy_centers[..., 0] - whs[..., 0], xy_centers[..., 1] - - whs[..., 1], xy_centers[..., 0] + whs[..., 0], - xy_centers[..., 1] + whs[..., 1]), - dim=-1) - return decoded_bboxes diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/demodata.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/demodata.py deleted file mode 100644 index eb24b34b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/demodata.py +++ /dev/null @@ -1,42 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.utils.util_random import ensure_rng - - -def random_boxes(num=1, scale=1, rng=None): - """Simple version of ``kwimage.Boxes.random`` - - Returns: - Tensor: shape (n, 4) in x1, y1, x2, y2 format. 
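A compact sketch of the decode step above: grid-relative center offsets plus exponential width/height scaling. The function name and example values are illustrative only:

```python
# YOLO-style decode: predicted (cx_offset, cy_offset, dw, dh) relative to an
# anchor box and its stride -> (x1, y1, x2, y2).
import torch

def yolo_decode(anchors, preds, stride):
    centers = (anchors[..., :2] + anchors[..., 2:]) * 0.5
    centers = centers + (preds[..., :2] - 0.5) * stride   # shift within the cell
    half_wh = (anchors[..., 2:] - anchors[..., :2]) * 0.5 * preds[..., 2:].exp()
    return torch.cat([centers - half_wh, centers + half_wh], dim=-1)

anchors = torch.tensor([[0., 0., 32., 32.]])
preds = torch.tensor([[0.5, 0.5, 0.0, 0.0]])    # stay at the anchor center/size
print(yolo_decode(anchors, preds, stride=32))   # -> [[0., 0., 32., 32.]]
```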
- - References: - https://gitlab.kitware.com/computer-vision/kwimage/blob/master/kwimage/structs/boxes.py#L1390 - - Example: - >>> num = 3 - >>> scale = 512 - >>> rng = 0 - >>> boxes = random_boxes(num, scale, rng) - >>> print(boxes) - tensor([[280.9925, 278.9802, 308.6148, 366.1769], - [216.9113, 330.6978, 224.0446, 456.5878], - [405.3632, 196.3221, 493.3953, 270.7942]]) - """ - rng = ensure_rng(rng) - - tlbr = rng.rand(num, 4).astype(np.float32) - - tl_x = np.minimum(tlbr[:, 0], tlbr[:, 2]) - tl_y = np.minimum(tlbr[:, 1], tlbr[:, 3]) - br_x = np.maximum(tlbr[:, 0], tlbr[:, 2]) - br_y = np.maximum(tlbr[:, 1], tlbr[:, 3]) - - tlbr[:, 0] = tl_x * scale - tlbr[:, 1] = tl_y * scale - tlbr[:, 2] = br_x * scale - tlbr[:, 3] = br_y * scale - - boxes = torch.from_numpy(tlbr) - return boxes diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/iou_calculators/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/iou_calculators/__init__.py deleted file mode 100644 index 04ba925b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/iou_calculators/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import build_iou_calculator -from .iou2d_calculator import BboxOverlaps2D, bbox_overlaps - -__all__ = ['build_iou_calculator', 'BboxOverlaps2D', 'bbox_overlaps'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/iou_calculators/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/iou_calculators/builder.py deleted file mode 100644 index 378ee269..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/iou_calculators/builder.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry, build_from_cfg - -IOU_CALCULATORS = Registry('IoU calculator') - - -def build_iou_calculator(cfg, default_args=None): - """Builder of IoU calculator.""" - return build_from_cfg(cfg, IOU_CALCULATORS, default_args) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/iou_calculators/iou2d_calculator.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/iou_calculators/iou2d_calculator.py deleted file mode 100644 index 4656d619..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/iou_calculators/iou2d_calculator.py +++ /dev/null @@ -1,261 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from .builder import IOU_CALCULATORS - - -def cast_tensor_type(x, scale=1., dtype=None): - if dtype == 'fp16': - # scale is for preventing overflows - x = (x / scale).half() - return x - - -def fp16_clamp(x, min=None, max=None): - if not x.is_cuda and x.dtype == torch.float16: - # clamp for cpu float16, tensor fp16 has no clamp implementation - return x.float().clamp(min, max).half() - - return x.clamp(min, max) - - -@IOU_CALCULATORS.register_module() -class BboxOverlaps2D: - """2D Overlaps (e.g. IoUs, GIoUs) Calculator.""" - - def __init__(self, scale=1., dtype=None): - self.scale = scale - self.dtype = dtype - - def __call__(self, bboxes1, bboxes2, mode='iou', is_aligned=False): - """Calculate IoU between 2D bboxes. - - Args: - bboxes1 (Tensor): bboxes have shape (m, 4) in - format, or shape (m, 5) in format. - bboxes2 (Tensor): bboxes have shape (m, 4) in - format, shape (m, 5) in format, or be - empty. If ``is_aligned `` is ``True``, then m and n must be - equal. 
- mode (str): "iou" (intersection over union), "iof" (intersection - over foreground), or "giou" (generalized intersection over - union). - is_aligned (bool, optional): If True, then m and n must be equal. - Default False. - - Returns: - Tensor: shape (m, n) if ``is_aligned `` is False else shape (m,) - """ - assert bboxes1.size(-1) in [0, 4, 5] - assert bboxes2.size(-1) in [0, 4, 5] - if bboxes2.size(-1) == 5: - bboxes2 = bboxes2[..., :4] - if bboxes1.size(-1) == 5: - bboxes1 = bboxes1[..., :4] - - if self.dtype == 'fp16': - # change tensor type to save cpu and cuda memory and keep speed - bboxes1 = cast_tensor_type(bboxes1, self.scale, self.dtype) - bboxes2 = cast_tensor_type(bboxes2, self.scale, self.dtype) - overlaps = bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) - if not overlaps.is_cuda and overlaps.dtype == torch.float16: - # resume cpu float32 - overlaps = overlaps.float() - return overlaps - - return bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) - - def __repr__(self): - """str: a string describing the module""" - repr_str = self.__class__.__name__ + f'(' \ - f'scale={self.scale}, dtype={self.dtype})' - return repr_str - - -def bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False, eps=1e-6): - """Calculate overlap between two set of bboxes. - - FP16 Contributed by https://github.com/open-mmlab/mmdetection/pull/4889 - Note: - Assume bboxes1 is M x 4, bboxes2 is N x 4, when mode is 'iou', - there are some new generated variable when calculating IOU - using bbox_overlaps function: - - 1) is_aligned is False - area1: M x 1 - area2: N x 1 - lt: M x N x 2 - rb: M x N x 2 - wh: M x N x 2 - overlap: M x N x 1 - union: M x N x 1 - ious: M x N x 1 - - Total memory: - S = (9 x N x M + N + M) * 4 Byte, - - When using FP16, we can reduce: - R = (9 x N x M + N + M) * 4 / 2 Byte - R large than (N + M) * 4 * 2 is always true when N and M >= 1. - Obviously, N + M <= N * M < 3 * N * M, when N >=2 and M >=2, - N + 1 < 3 * N, when N or M is 1. - - Given M = 40 (ground truth), N = 400000 (three anchor boxes - in per grid, FPN, R-CNNs), - R = 275 MB (one times) - - A special case (dense detection), M = 512 (ground truth), - R = 3516 MB = 3.43 GB - - When the batch size is B, reduce: - B x R - - Therefore, CUDA memory runs out frequently. - - Experiments on GeForce RTX 2080Ti (11019 MiB): - - | dtype | M | N | Use | Real | Ideal | - |:----:|:----:|:----:|:----:|:----:|:----:| - | FP32 | 512 | 400000 | 8020 MiB | -- | -- | - | FP16 | 512 | 400000 | 4504 MiB | 3516 MiB | 3516 MiB | - | FP32 | 40 | 400000 | 1540 MiB | -- | -- | - | FP16 | 40 | 400000 | 1264 MiB | 276MiB | 275 MiB | - - 2) is_aligned is True - area1: N x 1 - area2: N x 1 - lt: N x 2 - rb: N x 2 - wh: N x 2 - overlap: N x 1 - union: N x 1 - ious: N x 1 - - Total memory: - S = 11 x N * 4 Byte - - When using FP16, we can reduce: - R = 11 x N * 4 / 2 Byte - - So do the 'giou' (large than 'iou'). - - Time-wise, FP16 is generally faster than FP32. - - When gpu_assign_thr is not -1, it takes more time on cpu - but not reduce memory. - There, we can reduce half the memory and keep the speed. - - If ``is_aligned`` is ``False``, then calculate the overlaps between each - bbox of bboxes1 and bboxes2, otherwise the overlaps between each aligned - pair of bboxes1 and bboxes2. - - Args: - bboxes1 (Tensor): shape (B, m, 4) in format or empty. - bboxes2 (Tensor): shape (B, n, 4) in format or empty. - B indicates the batch dim, in shape (B1, B2, ..., Bn). - If ``is_aligned`` is ``True``, then m and n must be equal. 
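The aligned IoU case that this calculator ultimately computes fits in a few lines. A standalone sketch for xyxy boxes in plain "iou" mode (no fp16 handling, no giou branch):

```python
# Aligned IoU between two equally sized sets of (x1, y1, x2, y2) boxes.
import torch

def iou_aligned(b1, b2, eps=1e-6):
    lt = torch.max(b1[:, :2], b2[:, :2])            # intersection top-left
    rb = torch.min(b1[:, 2:], b2[:, 2:])            # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area1 = (b1[:, 2] - b1[:, 0]) * (b1[:, 3] - b1[:, 1])
    area2 = (b2[:, 2] - b2[:, 0]) * (b2[:, 3] - b2[:, 1])
    return inter / (area1 + area2 - inter + eps)

a = torch.tensor([[0., 0., 10., 10.]])
b = torch.tensor([[0., 0., 10., 20.]])
print(iou_aligned(a, b))   # ~0.5: intersection 100, union 200
```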
- mode (str): "iou" (intersection over union), "iof" (intersection over - foreground) or "giou" (generalized intersection over union). - Default "iou". - is_aligned (bool, optional): If True, then m and n must be equal. - Default False. - eps (float, optional): A value added to the denominator for numerical - stability. Default 1e-6. - - Returns: - Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,) - - Example: - >>> bboxes1 = torch.FloatTensor([ - >>> [0, 0, 10, 10], - >>> [10, 10, 20, 20], - >>> [32, 32, 38, 42], - >>> ]) - >>> bboxes2 = torch.FloatTensor([ - >>> [0, 0, 10, 20], - >>> [0, 10, 10, 19], - >>> [10, 10, 20, 20], - >>> ]) - >>> overlaps = bbox_overlaps(bboxes1, bboxes2) - >>> assert overlaps.shape == (3, 3) - >>> overlaps = bbox_overlaps(bboxes1, bboxes2, is_aligned=True) - >>> assert overlaps.shape == (3, ) - - Example: - >>> empty = torch.empty(0, 4) - >>> nonempty = torch.FloatTensor([[0, 0, 10, 9]]) - >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1) - >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0) - >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0) - """ - - assert mode in ['iou', 'iof', 'giou'], f'Unsupported mode {mode}' - # Either the boxes are empty or the length of boxes' last dimension is 4 - assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0) - assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0) - - # Batch dim must be the same - # Batch dim: (B1, B2, ... Bn) - assert bboxes1.shape[:-2] == bboxes2.shape[:-2] - batch_shape = bboxes1.shape[:-2] - - rows = bboxes1.size(-2) - cols = bboxes2.size(-2) - if is_aligned: - assert rows == cols - - if rows * cols == 0: - if is_aligned: - return bboxes1.new(batch_shape + (rows, )) - else: - return bboxes1.new(batch_shape + (rows, cols)) - - area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * ( - bboxes1[..., 3] - bboxes1[..., 1]) - area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * ( - bboxes2[..., 3] - bboxes2[..., 1]) - - if is_aligned: - lt = torch.max(bboxes1[..., :2], bboxes2[..., :2]) # [B, rows, 2] - rb = torch.min(bboxes1[..., 2:], bboxes2[..., 2:]) # [B, rows, 2] - - wh = fp16_clamp(rb - lt, min=0) - overlap = wh[..., 0] * wh[..., 1] - - if mode in ['iou', 'giou']: - union = area1 + area2 - overlap - else: - union = area1 - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :2], bboxes2[..., :2]) - enclosed_rb = torch.max(bboxes1[..., 2:], bboxes2[..., 2:]) - else: - lt = torch.max(bboxes1[..., :, None, :2], - bboxes2[..., None, :, :2]) # [B, rows, cols, 2] - rb = torch.min(bboxes1[..., :, None, 2:], - bboxes2[..., None, :, 2:]) # [B, rows, cols, 2] - - wh = fp16_clamp(rb - lt, min=0) - overlap = wh[..., 0] * wh[..., 1] - - if mode in ['iou', 'giou']: - union = area1[..., None] + area2[..., None, :] - overlap - else: - union = area1[..., None] - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :, None, :2], - bboxes2[..., None, :, :2]) - enclosed_rb = torch.max(bboxes1[..., :, None, 2:], - bboxes2[..., None, :, 2:]) - - eps = union.new_tensor([eps]) - union = torch.max(union, eps) - ious = overlap / union - if mode in ['iou', 'iof']: - return ious - # calculate gious - enclose_wh = fp16_clamp(enclosed_rb - enclosed_lt, min=0) - enclose_area = enclose_wh[..., 0] * enclose_wh[..., 1] - enclose_area = torch.max(enclose_area, eps) - gious = ious - (enclose_area - union) / enclose_area - return gious diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/match_costs/__init__.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/match_costs/__init__.py deleted file mode 100644 index 1b636795..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/match_costs/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import build_match_cost -from .match_cost import (BBoxL1Cost, ClassificationCost, CrossEntropyLossCost, - DiceCost, FocalLossCost, IoUCost) - -__all__ = [ - 'build_match_cost', 'ClassificationCost', 'BBoxL1Cost', 'IoUCost', - 'FocalLossCost', 'DiceCost', 'CrossEntropyLossCost' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/match_costs/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/match_costs/builder.py deleted file mode 100644 index ea086adf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/match_costs/builder.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry, build_from_cfg - -MATCH_COST = Registry('Match Cost') - - -def build_match_cost(cfg, default_args=None): - """Builder of IoU calculator.""" - return build_from_cfg(cfg, MATCH_COST, default_args) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/match_costs/match_cost.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/match_costs/match_cost.py deleted file mode 100644 index 4342b024..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/match_costs/match_cost.py +++ /dev/null @@ -1,359 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F - -from mmdet.core.bbox.iou_calculators import bbox_overlaps -from mmdet.core.bbox.transforms import bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh -from .builder import MATCH_COST - - -@MATCH_COST.register_module() -class BBoxL1Cost: - """BBoxL1Cost. - - Args: - weight (int | float, optional): loss_weight - box_format (str, optional): 'xyxy' for DETR, 'xywh' for Sparse_RCNN - - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import BBoxL1Cost - >>> import torch - >>> self = BBoxL1Cost() - >>> bbox_pred = torch.rand(1, 4) - >>> gt_bboxes= torch.FloatTensor([[0, 0, 2, 4], [1, 2, 3, 4]]) - >>> factor = torch.tensor([10, 8, 10, 8]) - >>> self(bbox_pred, gt_bboxes, factor) - tensor([[1.6172, 1.6422]]) - """ - - def __init__(self, weight=1., box_format='xyxy'): - self.weight = weight - assert box_format in ['xyxy', 'xywh'] - self.box_format = box_format - - def __call__(self, bbox_pred, gt_bboxes): - """ - Args: - bbox_pred (Tensor): Predicted boxes with normalized coordinates - (cx, cy, w, h), which are all in range [0, 1]. Shape - (num_query, 4). - gt_bboxes (Tensor): Ground truth boxes with normalized - coordinates (x1, y1, x2, y2). Shape (num_gt, 4). - - Returns: - torch.Tensor: bbox_cost value with weight - """ - if self.box_format == 'xywh': - gt_bboxes = bbox_xyxy_to_cxcywh(gt_bboxes) - elif self.box_format == 'xyxy': - bbox_pred = bbox_cxcywh_to_xyxy(bbox_pred) - bbox_cost = torch.cdist(bbox_pred, gt_bboxes, p=1) - return bbox_cost * self.weight - - -@MATCH_COST.register_module() -class FocalLossCost: - """FocalLossCost. - - Args: - weight (int | float, optional): loss_weight - alpha (int | float, optional): focal_loss alpha - gamma (int | float, optional): focal_loss gamma - eps (float, optional): default 1e-12 - binary_input (bool, optional): Whether the input is binary, - default False. 
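The L1 matching cost above is simply a pairwise distance between every prediction and every ground-truth box once both are in the same coordinate convention; a short sketch with made-up shapes:

```python
# Pairwise L1 matching cost between num_query predictions and num_gt boxes,
# both already in the same normalized (x1, y1, x2, y2) format.
import torch

bbox_pred = torch.rand(5, 4)                      # 5 query predictions
gt_bboxes = torch.tensor([[0.1, 0.1, 0.4, 0.5],   # 2 ground-truth boxes
                          [0.3, 0.2, 0.9, 0.8]])
cost = torch.cdist(bbox_pred, gt_bboxes, p=1)     # shape (5, 2)
print(cost.shape)
```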
- - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import FocalLossCost - >>> import torch - >>> self = FocalLossCost() - >>> cls_pred = torch.rand(4, 3) - >>> gt_labels = torch.tensor([0, 1, 2]) - >>> factor = torch.tensor([10, 8, 10, 8]) - >>> self(cls_pred, gt_labels) - tensor([[-0.3236, -0.3364, -0.2699], - [-0.3439, -0.3209, -0.4807], - [-0.4099, -0.3795, -0.2929], - [-0.1950, -0.1207, -0.2626]]) - """ - - def __init__(self, - weight=1., - alpha=0.25, - gamma=2, - eps=1e-12, - binary_input=False): - self.weight = weight - self.alpha = alpha - self.gamma = gamma - self.eps = eps - self.binary_input = binary_input - - def _focal_loss_cost(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classification logits, shape - (num_query, num_class). - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - - Returns: - torch.Tensor: cls_cost value with weight - """ - cls_pred = cls_pred.sigmoid() - neg_cost = -(1 - cls_pred + self.eps).log() * ( - 1 - self.alpha) * cls_pred.pow(self.gamma) - pos_cost = -(cls_pred + self.eps).log() * self.alpha * ( - 1 - cls_pred).pow(self.gamma) - - cls_cost = pos_cost[:, gt_labels] - neg_cost[:, gt_labels] - return cls_cost * self.weight - - def _mask_focal_loss_cost(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classfication logits - in shape (num_query, d1, ..., dn), dtype=torch.float32. - gt_labels (Tensor): Ground truth in shape (num_gt, d1, ..., dn), - dtype=torch.long. Labels should be binary. - - Returns: - Tensor: Focal cost matrix with weight in shape\ - (num_query, num_gt). - """ - cls_pred = cls_pred.flatten(1) - gt_labels = gt_labels.flatten(1).float() - n = cls_pred.shape[1] - cls_pred = cls_pred.sigmoid() - neg_cost = -(1 - cls_pred + self.eps).log() * ( - 1 - self.alpha) * cls_pred.pow(self.gamma) - pos_cost = -(cls_pred + self.eps).log() * self.alpha * ( - 1 - cls_pred).pow(self.gamma) - - cls_cost = torch.einsum('nc,mc->nm', pos_cost, gt_labels) + \ - torch.einsum('nc,mc->nm', neg_cost, (1 - gt_labels)) - return cls_cost / n * self.weight - - def __call__(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classfication logits. - gt_labels (Tensor)): Labels. - - Returns: - Tensor: Focal cost matrix with weight in shape\ - (num_query, num_gt). - """ - if self.binary_input: - return self._mask_focal_loss_cost(cls_pred, gt_labels) - else: - return self._focal_loss_cost(cls_pred, gt_labels) - - -@MATCH_COST.register_module() -class ClassificationCost: - """ClsSoftmaxCost. - - Args: - weight (int | float, optional): loss_weight - - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import \ - ... ClassificationCost - >>> import torch - >>> self = ClassificationCost() - >>> cls_pred = torch.rand(4, 3) - >>> gt_labels = torch.tensor([0, 1, 2]) - >>> factor = torch.tensor([10, 8, 10, 8]) - >>> self(cls_pred, gt_labels) - tensor([[-0.3430, -0.3525, -0.3045], - [-0.3077, -0.2931, -0.3992], - [-0.3664, -0.3455, -0.2881], - [-0.3343, -0.2701, -0.3956]]) - """ - - def __init__(self, weight=1.): - self.weight = weight - - def __call__(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classification logits, shape - (num_query, num_class). - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - - Returns: - torch.Tensor: cls_cost value with weight - """ - # Following the official DETR repo, contrary to the loss that - # NLL is used, we approximate it in 1 - cls_score[gt_label]. 
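The focal cost above assigns each (query, gt) pair the positive focal term minus the negative focal term at the gt's class; a standalone sketch of that computation (illustrative function name, random inputs):

```python
# Focal-loss matching cost matrix of shape (num_query, num_gt).
import torch

def focal_cost(cls_logits, gt_labels, alpha=0.25, gamma=2, eps=1e-12):
    p = cls_logits.sigmoid()
    neg = -(1 - p + eps).log() * (1 - alpha) * p.pow(gamma)
    pos = -(p + eps).log() * alpha * (1 - p).pow(gamma)
    return pos[:, gt_labels] - neg[:, gt_labels]

cls_pred = torch.randn(4, 3)            # 4 queries, 3 classes
gt_labels = torch.tensor([0, 2])        # 2 ground-truth labels
print(focal_cost(cls_pred, gt_labels).shape)   # torch.Size([4, 2])
```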
- # The 1 is a constant that doesn't change the matching, - # so it can be omitted. - cls_score = cls_pred.softmax(-1) - cls_cost = -cls_score[:, gt_labels] - return cls_cost * self.weight - - -@MATCH_COST.register_module() -class IoUCost: - """IoUCost. - - Args: - iou_mode (str, optional): iou mode such as 'iou' | 'giou' - weight (int | float, optional): loss weight - - Examples: - >>> from mmdet.core.bbox.match_costs.match_cost import IoUCost - >>> import torch - >>> self = IoUCost() - >>> bboxes = torch.FloatTensor([[1,1, 2, 2], [2, 2, 3, 4]]) - >>> gt_bboxes = torch.FloatTensor([[0, 0, 2, 4], [1, 2, 3, 4]]) - >>> self(bboxes, gt_bboxes) - tensor([[-0.1250, 0.1667], - [ 0.1667, -0.5000]]) - """ - - def __init__(self, iou_mode='giou', weight=1.): - self.weight = weight - self.iou_mode = iou_mode - - def __call__(self, bboxes, gt_bboxes): - """ - Args: - bboxes (Tensor): Predicted boxes with unnormalized coordinates - (x1, y1, x2, y2). Shape (num_query, 4). - gt_bboxes (Tensor): Ground truth boxes with unnormalized - coordinates (x1, y1, x2, y2). Shape (num_gt, 4). - - Returns: - torch.Tensor: iou_cost value with weight - """ - # overlaps: [num_bboxes, num_gt] - overlaps = bbox_overlaps( - bboxes, gt_bboxes, mode=self.iou_mode, is_aligned=False) - # The 1 is a constant that doesn't change the matching, so omitted. - iou_cost = -overlaps - return iou_cost * self.weight - - -@MATCH_COST.register_module() -class DiceCost: - """Cost of mask assignments based on dice losses. - - Args: - weight (int | float, optional): loss_weight. Defaults to 1. - pred_act (bool, optional): Whether to apply sigmoid to mask_pred. - Defaults to False. - eps (float, optional): default 1e-12. - naive_dice (bool, optional): If True, use the naive dice loss - in which the power of the number in the denominator is - the first power. If Flase, use the second power that - is adopted by K-Net and SOLO. - Defaults to True. - """ - - def __init__(self, weight=1., pred_act=False, eps=1e-3, naive_dice=True): - self.weight = weight - self.pred_act = pred_act - self.eps = eps - self.naive_dice = naive_dice - - def binary_mask_dice_loss(self, mask_preds, gt_masks): - """ - Args: - mask_preds (Tensor): Mask prediction in shape (num_query, *). - gt_masks (Tensor): Ground truth in shape (num_gt, *) - store 0 or 1, 0 for negative class and 1 for - positive class. - - Returns: - Tensor: Dice cost matrix in shape (num_query, num_gt). - """ - mask_preds = mask_preds.flatten(1) - gt_masks = gt_masks.flatten(1).float() - numerator = 2 * torch.einsum('nc,mc->nm', mask_preds, gt_masks) - if self.naive_dice: - denominator = mask_preds.sum(-1)[:, None] + \ - gt_masks.sum(-1)[None, :] - else: - denominator = mask_preds.pow(2).sum(1)[:, None] + \ - gt_masks.pow(2).sum(1)[None, :] - loss = 1 - (numerator + self.eps) / (denominator + self.eps) - return loss - - def __call__(self, mask_preds, gt_masks): - """ - Args: - mask_preds (Tensor): Mask prediction logits in shape (num_query, *) - gt_masks (Tensor): Ground truth in shape (num_gt, *) - - Returns: - Tensor: Dice cost matrix with weight in shape (num_query, num_gt). - """ - if self.pred_act: - mask_preds = mask_preds.sigmoid() - dice_cost = self.binary_mask_dice_loss(mask_preds, gt_masks) - return dice_cost * self.weight - - -@MATCH_COST.register_module() -class CrossEntropyLossCost: - """CrossEntropyLossCost. - - Args: - weight (int | float, optional): loss weight. Defaults to 1. - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Defaults to True. 
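The pairwise dice cost above, in its naive-dice form, can be reproduced directly with `einsum` over flattened masks; a small sketch assuming the predictions are already probabilities:

```python
# Pairwise (naive) dice cost between predicted mask probabilities and
# binary ground-truth masks, both flattened to (num, H*W).
import torch

def dice_cost(mask_preds, gt_masks, eps=1e-3):
    mask_preds = mask_preds.flatten(1)
    gt_masks = gt_masks.flatten(1).float()
    numerator = 2 * torch.einsum('nc,mc->nm', mask_preds, gt_masks)
    denominator = mask_preds.sum(-1)[:, None] + gt_masks.sum(-1)[None, :]
    return 1 - (numerator + eps) / (denominator + eps)

preds = torch.rand(3, 8, 8)                 # 3 query masks (already sigmoided)
gts = (torch.rand(2, 8, 8) > 0.5)           # 2 binary gt masks
print(dice_cost(preds, gts).shape)          # torch.Size([3, 2])
```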
- Examples: - >>> from mmdet.core.bbox.match_costs import CrossEntropyLossCost - >>> import torch - >>> bce = CrossEntropyLossCost(use_sigmoid=True) - >>> cls_pred = torch.tensor([[7.6, 1.2], [-1.3, 10]]) - >>> gt_labels = torch.tensor([[1, 1], [1, 0]]) - >>> print(bce(cls_pred, gt_labels)) - """ - - def __init__(self, weight=1., use_sigmoid=True): - assert use_sigmoid, 'use_sigmoid = False is not supported yet.' - self.weight = weight - self.use_sigmoid = use_sigmoid - - def _binary_cross_entropy(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): The prediction with shape (num_query, 1, *) or - (num_query, *). - gt_labels (Tensor): The learning label of prediction with - shape (num_gt, *). - - Returns: - Tensor: Cross entropy cost matrix in shape (num_query, num_gt). - """ - cls_pred = cls_pred.flatten(1).float() - gt_labels = gt_labels.flatten(1).float() - n = cls_pred.shape[1] - pos = F.binary_cross_entropy_with_logits( - cls_pred, torch.ones_like(cls_pred), reduction='none') - neg = F.binary_cross_entropy_with_logits( - cls_pred, torch.zeros_like(cls_pred), reduction='none') - cls_cost = torch.einsum('nc,mc->nm', pos, gt_labels) + \ - torch.einsum('nc,mc->nm', neg, 1 - gt_labels) - cls_cost = cls_cost / n - - return cls_cost - - def __call__(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classification logits. - gt_labels (Tensor): Labels. - - Returns: - Tensor: Cross entropy cost matrix with weight in - shape (num_query, num_gt). - """ - if self.use_sigmoid: - cls_cost = self._binary_cross_entropy(cls_pred, gt_labels) - else: - raise NotImplementedError - - return cls_cost * self.weight diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/__init__.py deleted file mode 100644 index f58505b5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_sampler import BaseSampler -from .combined_sampler import CombinedSampler -from .instance_balanced_pos_sampler import InstanceBalancedPosSampler -from .iou_balanced_neg_sampler import IoUBalancedNegSampler -from .mask_pseudo_sampler import MaskPseudoSampler -from .mask_sampling_result import MaskSamplingResult -from .ohem_sampler import OHEMSampler -from .pseudo_sampler import PseudoSampler -from .random_sampler import RandomSampler -from .sampling_result import SamplingResult -from .score_hlr_sampler import ScoreHLRSampler - -__all__ = [ - 'BaseSampler', 'PseudoSampler', 'RandomSampler', - 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler', - 'OHEMSampler', 'SamplingResult', 'ScoreHLRSampler', 'MaskPseudoSampler', - 'MaskSamplingResult' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/base_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/base_sampler.py deleted file mode 100644 index bd15c7c6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/base_sampler.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from abc import ABCMeta, abstractmethod - -import torch - -from .sampling_result import SamplingResult - - -class BaseSampler(metaclass=ABCMeta): - """Base class of samplers.""" - - def __init__(self, - num, - pos_fraction, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - self.num = num - self.pos_fraction = pos_fraction - self.neg_pos_ub = neg_pos_ub - self.add_gt_as_proposals = add_gt_as_proposals - self.pos_sampler = self - self.neg_sampler = self - - @abstractmethod - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Sample positive samples.""" - pass - - @abstractmethod - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Sample negative samples.""" - pass - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - bboxes (Tensor): Boxes to be sampled from. - gt_bboxes (Tensor): Ground truth bboxes. - gt_labels (Tensor, optional): Class labels of ground truth bboxes. - - Returns: - :obj:`SamplingResult`: Sampling result. - - Example: - >>> from mmdet.core.bbox import RandomSampler - >>> from mmdet.core.bbox import AssignResult - >>> from mmdet.core.bbox.demodata import ensure_rng, random_boxes - >>> rng = ensure_rng(None) - >>> assign_result = AssignResult.random(rng=rng) - >>> bboxes = random_boxes(assign_result.num_preds, rng=rng) - >>> gt_bboxes = random_boxes(assign_result.num_gts, rng=rng) - >>> gt_labels = None - >>> self = RandomSampler(num=32, pos_fraction=0.5, neg_pos_ub=-1, - >>> add_gt_as_proposals=False) - >>> self = self.sample(assign_result, bboxes, gt_bboxes, gt_labels) - """ - if len(bboxes.shape) < 2: - bboxes = bboxes[None, :] - - bboxes = bboxes[:, :4] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8) - if self.add_gt_as_proposals and len(gt_bboxes) > 0: - if gt_labels is None: - raise ValueError( - 'gt_labels must be given when add_gt_as_proposals is True') - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - # We found that sampled indices have duplicated items occasionally. 
- # (may be a bug of PyTorch) - pos_inds = pos_inds.unique() - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds = self.neg_sampler._sample_neg( - assign_result, num_expected_neg, bboxes=bboxes, **kwargs) - neg_inds = neg_inds.unique() - - sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags) - return sampling_result diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/combined_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/combined_sampler.py deleted file mode 100644 index 4f6d86ff..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/combined_sampler.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import BBOX_SAMPLERS, build_sampler -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class CombinedSampler(BaseSampler): - """A sampler that combines positive sampler and negative sampler.""" - - def __init__(self, pos_sampler, neg_sampler, **kwargs): - super(CombinedSampler, self).__init__(**kwargs) - self.pos_sampler = build_sampler(pos_sampler, **kwargs) - self.neg_sampler = build_sampler(neg_sampler, **kwargs) - - def _sample_pos(self, **kwargs): - """Sample positive samples.""" - raise NotImplementedError - - def _sample_neg(self, **kwargs): - """Sample negative samples.""" - raise NotImplementedError diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py deleted file mode 100644 index 5e0d9cc0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ..builder import BBOX_SAMPLERS -from .random_sampler import RandomSampler - - -@BBOX_SAMPLERS.register_module() -class InstanceBalancedPosSampler(RandomSampler): - """Instance balanced sampler that samples equal number of positive samples - for each instance.""" - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Sample positive boxes. - - Args: - assign_result (:obj:`AssignResult`): The assigned results of boxes. - num_expected (int): The number of expected positive samples - - Returns: - Tensor or ndarray: sampled indices. 
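The positive/negative quota arithmetic in `sample()` is easy to check in isolation; a small sketch with hypothetical numbers:

```python
# Quota logic used by sample(): positives are capped by pos_fraction, the
# remainder of the budget goes to negatives, optionally capped by neg_pos_ub.
num, pos_fraction, neg_pos_ub = 512, 0.25, 3
num_available_pos = 40                      # positives produced by the assigner

num_expected_pos = int(num * pos_fraction)                   # 128
num_sampled_pos = min(num_available_pos, num_expected_pos)   # 40
num_expected_neg = num - num_sampled_pos                     # 472
if neg_pos_ub >= 0:
    num_expected_neg = min(num_expected_neg,
                           neg_pos_ub * max(1, num_sampled_pos))  # 120
print(num_sampled_pos, num_expected_neg)    # 40 120
```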
- """ - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - unique_gt_inds = assign_result.gt_inds[pos_inds].unique() - num_gts = len(unique_gt_inds) - num_per_gt = int(round(num_expected / float(num_gts)) + 1) - sampled_inds = [] - for i in unique_gt_inds: - inds = torch.nonzero( - assign_result.gt_inds == i.item(), as_tuple=False) - if inds.numel() != 0: - inds = inds.squeeze(1) - else: - continue - if len(inds) > num_per_gt: - inds = self.random_choice(inds, num_per_gt) - sampled_inds.append(inds) - sampled_inds = torch.cat(sampled_inds) - if len(sampled_inds) < num_expected: - num_extra = num_expected - len(sampled_inds) - extra_inds = np.array( - list(set(pos_inds.cpu()) - set(sampled_inds.cpu()))) - if len(extra_inds) > num_extra: - extra_inds = self.random_choice(extra_inds, num_extra) - extra_inds = torch.from_numpy(extra_inds).to( - assign_result.gt_inds.device).long() - sampled_inds = torch.cat([sampled_inds, extra_inds]) - elif len(sampled_inds) > num_expected: - sampled_inds = self.random_choice(sampled_inds, num_expected) - return sampled_inds diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py deleted file mode 100644 index 56e2874a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ..builder import BBOX_SAMPLERS -from .random_sampler import RandomSampler - - -@BBOX_SAMPLERS.register_module() -class IoUBalancedNegSampler(RandomSampler): - """IoU Balanced Sampling. - - arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019) - - Sampling proposals according to their IoU. `floor_fraction` of needed RoIs - are sampled from proposals whose IoU are lower than `floor_thr` randomly. - The others are sampled from proposals whose IoU are higher than - `floor_thr`. These proposals are sampled from some bins evenly, which are - split by `num_bins` via IoU evenly. - - Args: - num (int): number of proposals. - pos_fraction (float): fraction of positive proposals. - floor_thr (float): threshold (minimum) IoU for IoU balanced sampling, - set to -1 if all using IoU balanced sampling. - floor_fraction (float): sampling fraction of proposals under floor_thr. - num_bins (int): number of bins in IoU balanced sampling. - """ - - def __init__(self, - num, - pos_fraction, - floor_thr=-1, - floor_fraction=0, - num_bins=3, - **kwargs): - super(IoUBalancedNegSampler, self).__init__(num, pos_fraction, - **kwargs) - assert floor_thr >= 0 or floor_thr == -1 - assert 0 <= floor_fraction <= 1 - assert num_bins >= 1 - - self.floor_thr = floor_thr - self.floor_fraction = floor_fraction - self.num_bins = num_bins - - def sample_via_interval(self, max_overlaps, full_set, num_expected): - """Sample according to the iou interval. - - Args: - max_overlaps (torch.Tensor): IoU between bounding boxes and ground - truth boxes. 
- full_set (set(int)): A full set of indices of boxes。 - num_expected (int): Number of expected samples。 - - Returns: - np.ndarray: Indices of samples - """ - max_iou = max_overlaps.max() - iou_interval = (max_iou - self.floor_thr) / self.num_bins - per_num_expected = int(num_expected / self.num_bins) - - sampled_inds = [] - for i in range(self.num_bins): - start_iou = self.floor_thr + i * iou_interval - end_iou = self.floor_thr + (i + 1) * iou_interval - tmp_set = set( - np.where( - np.logical_and(max_overlaps >= start_iou, - max_overlaps < end_iou))[0]) - tmp_inds = list(tmp_set & full_set) - if len(tmp_inds) > per_num_expected: - tmp_sampled_set = self.random_choice(tmp_inds, - per_num_expected) - else: - tmp_sampled_set = np.array(tmp_inds, dtype=np.int) - sampled_inds.append(tmp_sampled_set) - - sampled_inds = np.concatenate(sampled_inds) - if len(sampled_inds) < num_expected: - num_extra = num_expected - len(sampled_inds) - extra_inds = np.array(list(full_set - set(sampled_inds))) - if len(extra_inds) > num_extra: - extra_inds = self.random_choice(extra_inds, num_extra) - sampled_inds = np.concatenate([sampled_inds, extra_inds]) - - return sampled_inds - - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Sample negative boxes. - - Args: - assign_result (:obj:`AssignResult`): The assigned results of boxes. - num_expected (int): The number of expected negative samples - - Returns: - Tensor or ndarray: sampled indices. - """ - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - max_overlaps = assign_result.max_overlaps.cpu().numpy() - # balance sampling for negative samples - neg_set = set(neg_inds.cpu().numpy()) - - if self.floor_thr > 0: - floor_set = set( - np.where( - np.logical_and(max_overlaps >= 0, - max_overlaps < self.floor_thr))[0]) - iou_sampling_set = set( - np.where(max_overlaps >= self.floor_thr)[0]) - elif self.floor_thr == 0: - floor_set = set(np.where(max_overlaps == 0)[0]) - iou_sampling_set = set( - np.where(max_overlaps > self.floor_thr)[0]) - else: - floor_set = set() - iou_sampling_set = set( - np.where(max_overlaps > self.floor_thr)[0]) - # for sampling interval calculation - self.floor_thr = 0 - - floor_neg_inds = list(floor_set & neg_set) - iou_sampling_neg_inds = list(iou_sampling_set & neg_set) - num_expected_iou_sampling = int(num_expected * - (1 - self.floor_fraction)) - if len(iou_sampling_neg_inds) > num_expected_iou_sampling: - if self.num_bins >= 2: - iou_sampled_inds = self.sample_via_interval( - max_overlaps, set(iou_sampling_neg_inds), - num_expected_iou_sampling) - else: - iou_sampled_inds = self.random_choice( - iou_sampling_neg_inds, num_expected_iou_sampling) - else: - iou_sampled_inds = np.array( - iou_sampling_neg_inds, dtype=np.int) - num_expected_floor = num_expected - len(iou_sampled_inds) - if len(floor_neg_inds) > num_expected_floor: - sampled_floor_inds = self.random_choice( - floor_neg_inds, num_expected_floor) - else: - sampled_floor_inds = np.array(floor_neg_inds, dtype=np.int) - sampled_inds = np.concatenate( - (sampled_floor_inds, iou_sampled_inds)) - if len(sampled_inds) < num_expected: - num_extra = num_expected - len(sampled_inds) - extra_inds = np.array(list(neg_set - set(sampled_inds))) - if len(extra_inds) > num_extra: - extra_inds = self.random_choice(extra_inds, num_extra) - sampled_inds = np.concatenate((sampled_inds, extra_inds)) - sampled_inds = 
torch.from_numpy(sampled_inds).long().to( - assign_result.gt_inds.device) - return sampled_inds diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/mask_pseudo_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/mask_pseudo_sampler.py deleted file mode 100644 index b5f69658..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/mask_pseudo_sampler.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""copy from -https://github.com/ZwwWayne/K-Net/blob/main/knet/det/mask_pseudo_sampler.py.""" - -import torch - -from mmdet.core.bbox.builder import BBOX_SAMPLERS -from .base_sampler import BaseSampler -from .mask_sampling_result import MaskSamplingResult - - -@BBOX_SAMPLERS.register_module() -class MaskPseudoSampler(BaseSampler): - """A pseudo sampler that does not do sampling actually.""" - - def __init__(self, **kwargs): - pass - - def _sample_pos(self, **kwargs): - """Sample positive samples.""" - raise NotImplementedError - - def _sample_neg(self, **kwargs): - """Sample negative samples.""" - raise NotImplementedError - - def sample(self, assign_result, masks, gt_masks, **kwargs): - """Directly returns the positive and negative indices of samples. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - masks (torch.Tensor): Bounding boxes - gt_masks (torch.Tensor): Ground truth boxes - Returns: - :obj:`SamplingResult`: sampler results - """ - pos_inds = torch.nonzero( - assign_result.gt_inds > 0, as_tuple=False).squeeze(-1).unique() - neg_inds = torch.nonzero( - assign_result.gt_inds == 0, as_tuple=False).squeeze(-1).unique() - gt_flags = masks.new_zeros(masks.shape[0], dtype=torch.uint8) - sampling_result = MaskSamplingResult(pos_inds, neg_inds, masks, - gt_masks, assign_result, gt_flags) - return sampling_result diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/mask_sampling_result.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/mask_sampling_result.py deleted file mode 100644 index 3d109432..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/mask_sampling_result.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
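The "pseudo" sampling above keeps every assigned candidate rather than subsampling: positives are the entries assigned to a gt (index > 0), negatives the ones assigned to background (index 0). A tiny sketch of that index split with an illustrative assignment vector:

```python
# Split assigned candidates into positive and negative indices.
import torch

gt_inds = torch.tensor([0, 2, 0, 1, 1, 0])   # assign_result.gt_inds (illustrative)
pos_inds = torch.nonzero(gt_inds > 0, as_tuple=False).squeeze(-1).unique()
neg_inds = torch.nonzero(gt_inds == 0, as_tuple=False).squeeze(-1).unique()
print(pos_inds.tolist(), neg_inds.tolist())  # [1, 3, 4] [0, 2, 5]
```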
-"""copy from -https://github.com/ZwwWayne/K-Net/blob/main/knet/det/mask_pseudo_sampler.py.""" - -import torch - -from .sampling_result import SamplingResult - - -class MaskSamplingResult(SamplingResult): - """Mask sampling result.""" - - def __init__(self, pos_inds, neg_inds, masks, gt_masks, assign_result, - gt_flags): - self.pos_inds = pos_inds - self.neg_inds = neg_inds - self.pos_masks = masks[pos_inds] - self.neg_masks = masks[neg_inds] - self.pos_is_gt = gt_flags[pos_inds] - - self.num_gts = gt_masks.shape[0] - self.pos_assigned_gt_inds = assign_result.gt_inds[pos_inds] - 1 - - if gt_masks.numel() == 0: - # hack for index error case - assert self.pos_assigned_gt_inds.numel() == 0 - self.pos_gt_masks = torch.empty_like(gt_masks) - else: - self.pos_gt_masks = gt_masks[self.pos_assigned_gt_inds, :] - - if assign_result.labels is not None: - self.pos_gt_labels = assign_result.labels[pos_inds] - else: - self.pos_gt_labels = None - - @property - def masks(self): - """torch.Tensor: concatenated positive and negative boxes""" - return torch.cat([self.pos_masks, self.neg_masks]) - - def __nice__(self): - data = self.info.copy() - data['pos_masks'] = data.pop('pos_masks').shape - data['neg_masks'] = data.pop('neg_masks').shape - parts = [f"'{k}': {v!r}" for k, v in sorted(data.items())] - body = ' ' + ',\n '.join(parts) - return '{\n' + body + '\n}' - - @property - def info(self): - """Returns a dictionary of info about the object.""" - return { - 'pos_inds': self.pos_inds, - 'neg_inds': self.neg_inds, - 'pos_masks': self.pos_masks, - 'neg_masks': self.neg_masks, - 'pos_is_gt': self.pos_is_gt, - 'num_gts': self.num_gts, - 'pos_assigned_gt_inds': self.pos_assigned_gt_inds, - } diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/ohem_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/ohem_sampler.py deleted file mode 100644 index 7eb06663..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/ohem_sampler.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_SAMPLERS -from ..transforms import bbox2roi -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class OHEMSampler(BaseSampler): - r"""Online Hard Example Mining Sampler described in `Training Region-based - Object Detectors with Online Hard Example Mining - `_. 
- """ - - def __init__(self, - num, - pos_fraction, - context, - neg_pos_ub=-1, - add_gt_as_proposals=True, - loss_key='loss_cls', - **kwargs): - super(OHEMSampler, self).__init__(num, pos_fraction, neg_pos_ub, - add_gt_as_proposals) - self.context = context - if not hasattr(self.context, 'num_stages'): - self.bbox_head = self.context.bbox_head - else: - self.bbox_head = self.context.bbox_head[self.context.current_stage] - - self.loss_key = loss_key - - def hard_mining(self, inds, num_expected, bboxes, labels, feats): - with torch.no_grad(): - rois = bbox2roi([bboxes]) - if not hasattr(self.context, 'num_stages'): - bbox_results = self.context._bbox_forward(feats, rois) - else: - bbox_results = self.context._bbox_forward( - self.context.current_stage, feats, rois) - cls_score = bbox_results['cls_score'] - loss = self.bbox_head.loss( - cls_score=cls_score, - bbox_pred=None, - rois=rois, - labels=labels, - label_weights=cls_score.new_ones(cls_score.size(0)), - bbox_targets=None, - bbox_weights=None, - reduction_override='none')[self.loss_key] - _, topk_loss_inds = loss.topk(num_expected) - return inds[topk_loss_inds] - - def _sample_pos(self, - assign_result, - num_expected, - bboxes=None, - feats=None, - **kwargs): - """Sample positive boxes. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - num_expected (int): Number of expected positive samples - bboxes (torch.Tensor, optional): Boxes. Defaults to None. - feats (list[torch.Tensor], optional): Multi-level features. - Defaults to None. - - Returns: - torch.Tensor: Indices of positive samples - """ - # Sample some hard positive samples - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.hard_mining(pos_inds, num_expected, bboxes[pos_inds], - assign_result.labels[pos_inds], feats) - - def _sample_neg(self, - assign_result, - num_expected, - bboxes=None, - feats=None, - **kwargs): - """Sample negative boxes. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - num_expected (int): Number of expected negative samples - bboxes (torch.Tensor, optional): Boxes. Defaults to None. - feats (list[torch.Tensor], optional): Multi-level features. - Defaults to None. - - Returns: - torch.Tensor: Indices of negative samples - """ - # Sample some hard negative samples - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - neg_labels = assign_result.labels.new_empty( - neg_inds.size(0)).fill_(self.bbox_head.num_classes) - return self.hard_mining(neg_inds, num_expected, bboxes[neg_inds], - neg_labels, feats) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/pseudo_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/pseudo_sampler.py deleted file mode 100644 index b5ce298e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/pseudo_sampler.py +++ /dev/null @@ -1,42 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from ..builder import BBOX_SAMPLERS -from .base_sampler import BaseSampler -from .sampling_result import SamplingResult - - -@BBOX_SAMPLERS.register_module() -class PseudoSampler(BaseSampler): - """A pseudo sampler that does not do sampling actually.""" - - def __init__(self, **kwargs): - pass - - def _sample_pos(self, **kwargs): - """Sample positive samples.""" - raise NotImplementedError - - def _sample_neg(self, **kwargs): - """Sample negative samples.""" - raise NotImplementedError - - def sample(self, assign_result, bboxes, gt_bboxes, *args, **kwargs): - """Directly returns the positive and negative indices of samples. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - bboxes (torch.Tensor): Bounding boxes - gt_bboxes (torch.Tensor): Ground truth boxes - - Returns: - :obj:`SamplingResult`: sampler results - """ - pos_inds = torch.nonzero( - assign_result.gt_inds > 0, as_tuple=False).squeeze(-1).unique() - neg_inds = torch.nonzero( - assign_result.gt_inds == 0, as_tuple=False).squeeze(-1).unique() - gt_flags = bboxes.new_zeros(bboxes.shape[0], dtype=torch.uint8) - sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags) - return sampling_result diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/random_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/random_sampler.py deleted file mode 100644 index d09207e7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/random_sampler.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_SAMPLERS -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class RandomSampler(BaseSampler): - """Random sampler. - - Args: - num (int): Number of samples - pos_fraction (float): Fraction of positive samples - neg_pos_up (int, optional): Upper bound number of negative and - positive samples. Defaults to -1. - add_gt_as_proposals (bool, optional): Whether to add ground truth - boxes as proposals. Defaults to True. - """ - - def __init__(self, - num, - pos_fraction, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - from mmdet.core.bbox import demodata - super(RandomSampler, self).__init__(num, pos_fraction, neg_pos_ub, - add_gt_as_proposals) - self.rng = demodata.ensure_rng(kwargs.get('rng', None)) - - def random_choice(self, gallery, num): - """Random select some elements from the gallery. - - If `gallery` is a Tensor, the returned indices will be a Tensor; - If `gallery` is a ndarray or list, the returned indices will be a - ndarray. - - Args: - gallery (Tensor | ndarray | list): indices pool. - num (int): expected sample num. - - Returns: - Tensor or ndarray: sampled indices. - """ - assert len(gallery) >= num - - is_tensor = isinstance(gallery, torch.Tensor) - if not is_tensor: - if torch.cuda.is_available(): - device = torch.cuda.current_device() - else: - device = 'cpu' - gallery = torch.tensor(gallery, dtype=torch.long, device=device) - # This is a temporary fix. We can revert the following code - # when PyTorch fixes the abnormal return of torch.randperm. 
- # See: https://github.com/open-mmlab/mmdetection/pull/5014 - perm = torch.randperm(gallery.numel())[:num].to(device=gallery.device) - rand_inds = gallery[perm] - if not is_tensor: - rand_inds = rand_inds.cpu().numpy() - return rand_inds - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Randomly sample some positive samples.""" - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Randomly sample some negative samples.""" - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - return self.random_choice(neg_inds, num_expected) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/sampling_result.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/sampling_result.py deleted file mode 100644 index 50676d04..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/sampling_result.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.utils import util_mixins - - -class SamplingResult(util_mixins.NiceRepr): - """Bbox sampling result. - - Example: - >>> # xdoctest: +IGNORE_WANT - >>> from mmdet.core.bbox.samplers.sampling_result import * # NOQA - >>> self = SamplingResult.random(rng=10) - >>> print(f'self = {self}') - self = - """ - - def __init__(self, pos_inds, neg_inds, bboxes, gt_bboxes, assign_result, - gt_flags): - self.pos_inds = pos_inds - self.neg_inds = neg_inds - self.pos_bboxes = bboxes[pos_inds] - self.neg_bboxes = bboxes[neg_inds] - self.pos_is_gt = gt_flags[pos_inds] - - self.num_gts = gt_bboxes.shape[0] - self.pos_assigned_gt_inds = assign_result.gt_inds[pos_inds] - 1 - - if gt_bboxes.numel() == 0: - # hack for index error case - assert self.pos_assigned_gt_inds.numel() == 0 - self.pos_gt_bboxes = torch.empty_like(gt_bboxes).view(-1, 4) - else: - if len(gt_bboxes.shape) < 2: - gt_bboxes = gt_bboxes.view(-1, 4) - - self.pos_gt_bboxes = gt_bboxes[self.pos_assigned_gt_inds.long(), :] - - if assign_result.labels is not None: - self.pos_gt_labels = assign_result.labels[pos_inds] - else: - self.pos_gt_labels = None - - @property - def bboxes(self): - """torch.Tensor: concatenated positive and negative boxes""" - return torch.cat([self.pos_bboxes, self.neg_bboxes]) - - def to(self, device): - """Change the device of the data inplace. 
- - Example: - >>> self = SamplingResult.random() - >>> print(f'self = {self.to(None)}') - >>> # xdoctest: +REQUIRES(--gpu) - >>> print(f'self = {self.to(0)}') - """ - _dict = self.__dict__ - for key, value in _dict.items(): - if isinstance(value, torch.Tensor): - _dict[key] = value.to(device) - return self - - def __nice__(self): - data = self.info.copy() - data['pos_bboxes'] = data.pop('pos_bboxes').shape - data['neg_bboxes'] = data.pop('neg_bboxes').shape - parts = [f"'{k}': {v!r}" for k, v in sorted(data.items())] - body = ' ' + ',\n '.join(parts) - return '{\n' + body + '\n}' - - @property - def info(self): - """Returns a dictionary of info about the object.""" - return { - 'pos_inds': self.pos_inds, - 'neg_inds': self.neg_inds, - 'pos_bboxes': self.pos_bboxes, - 'neg_bboxes': self.neg_bboxes, - 'pos_is_gt': self.pos_is_gt, - 'num_gts': self.num_gts, - 'pos_assigned_gt_inds': self.pos_assigned_gt_inds, - } - - @classmethod - def random(cls, rng=None, **kwargs): - """ - Args: - rng (None | int | numpy.random.RandomState): seed or state. - kwargs (keyword arguments): - - num_preds: number of predicted boxes - - num_gts: number of true boxes - - p_ignore (float): probability of a predicted box assigned to \ - an ignored truth. - - p_assigned (float): probability of a predicted box not being \ - assigned. - - p_use_label (float | bool): with labels or not. - - Returns: - :obj:`SamplingResult`: Randomly generated sampling result. - - Example: - >>> from mmdet.core.bbox.samplers.sampling_result import * # NOQA - >>> self = SamplingResult.random() - >>> print(self.__dict__) - """ - from mmdet.core.bbox import demodata - from mmdet.core.bbox.assigners.assign_result import AssignResult - from mmdet.core.bbox.samplers.random_sampler import RandomSampler - rng = demodata.ensure_rng(rng) - - # make probabalistic? - num = 32 - pos_fraction = 0.5 - neg_pos_ub = -1 - - assign_result = AssignResult.random(rng=rng, **kwargs) - - # Note we could just compute an assignment - bboxes = demodata.random_boxes(assign_result.num_preds, rng=rng) - gt_bboxes = demodata.random_boxes(assign_result.num_gts, rng=rng) - - if rng.rand() > 0.2: - # sometimes algorithms squeeze their data, be robust to that - gt_bboxes = gt_bboxes.squeeze() - bboxes = bboxes.squeeze() - - if assign_result.labels is None: - gt_labels = None - else: - gt_labels = None # todo - - if gt_labels is None: - add_gt_as_proposals = False - else: - add_gt_as_proposals = True # make probabalistic? - - sampler = RandomSampler( - num, - pos_fraction, - neg_pos_ub=neg_pos_ub, - add_gt_as_proposals=add_gt_as_proposals, - rng=rng) - self = sampler.sample(assign_result, bboxes, gt_bboxes, gt_labels) - return self diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/score_hlr_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/score_hlr_sampler.py deleted file mode 100644 index f4be9b8c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/samplers/score_hlr_sampler.py +++ /dev/null @@ -1,265 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import nms_match - -from ..builder import BBOX_SAMPLERS -from ..transforms import bbox2roi -from .base_sampler import BaseSampler -from .sampling_result import SamplingResult - - -@BBOX_SAMPLERS.register_module() -class ScoreHLRSampler(BaseSampler): - r"""Importance-based Sample Reweighting (ISR_N), described in `Prime Sample - Attention in Object Detection `_. 
- - Score hierarchical local rank (HLR) differentiates with RandomSampler in - negative part. It firstly computes Score-HLR in a two-step way, - then linearly maps score hlr to the loss weights. - - Args: - num (int): Total number of sampled RoIs. - pos_fraction (float): Fraction of positive samples. - context (:class:`BaseRoIHead`): RoI head that the sampler belongs to. - neg_pos_ub (int): Upper bound of the ratio of num negative to num - positive, -1 means no upper bound. - add_gt_as_proposals (bool): Whether to add ground truth as proposals. - k (float): Power of the non-linear mapping. - bias (float): Shift of the non-linear mapping. - score_thr (float): Minimum score that a negative sample is to be - considered as valid bbox. - """ - - def __init__(self, - num, - pos_fraction, - context, - neg_pos_ub=-1, - add_gt_as_proposals=True, - k=0.5, - bias=0, - score_thr=0.05, - iou_thr=0.5, - **kwargs): - super().__init__(num, pos_fraction, neg_pos_ub, add_gt_as_proposals) - self.k = k - self.bias = bias - self.score_thr = score_thr - self.iou_thr = iou_thr - self.context = context - # context of cascade detectors is a list, so distinguish them here. - if not hasattr(context, 'num_stages'): - self.bbox_roi_extractor = context.bbox_roi_extractor - self.bbox_head = context.bbox_head - self.with_shared_head = context.with_shared_head - if self.with_shared_head: - self.shared_head = context.shared_head - else: - self.bbox_roi_extractor = context.bbox_roi_extractor[ - context.current_stage] - self.bbox_head = context.bbox_head[context.current_stage] - - @staticmethod - def random_choice(gallery, num): - """Randomly select some elements from the gallery. - - If `gallery` is a Tensor, the returned indices will be a Tensor; - If `gallery` is a ndarray or list, the returned indices will be a - ndarray. - - Args: - gallery (Tensor | ndarray | list): indices pool. - num (int): expected sample num. - - Returns: - Tensor or ndarray: sampled indices. - """ - assert len(gallery) >= num - - is_tensor = isinstance(gallery, torch.Tensor) - if not is_tensor: - if torch.cuda.is_available(): - device = torch.cuda.current_device() - else: - device = 'cpu' - gallery = torch.tensor(gallery, dtype=torch.long, device=device) - perm = torch.randperm(gallery.numel(), device=gallery.device)[:num] - rand_inds = gallery[perm] - if not is_tensor: - rand_inds = rand_inds.cpu().numpy() - return rand_inds - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Randomly sample some positive samples.""" - pos_inds = torch.nonzero(assign_result.gt_inds > 0).flatten() - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, - assign_result, - num_expected, - bboxes, - feats=None, - img_meta=None, - **kwargs): - """Sample negative samples. - - Score-HLR sampler is done in the following steps: - 1. Take the maximum positive score prediction of each negative samples - as s_i. - 2. Filter out negative samples whose s_i <= score_thr, the left samples - are called valid samples. - 3. Use NMS-Match to divide valid samples into different groups, - samples in the same group will greatly overlap with each other - 4. Rank the matched samples in two-steps to get Score-HLR. - (1) In the same group, rank samples with their scores. - (2) In the same score rank across different groups, - rank samples with their scores again. - 5. Linearly map Score-HLR to the final label weights. - - Args: - assign_result (:obj:`AssignResult`): result of assigner. 
- num_expected (int): Expected number of samples. - bboxes (Tensor): bbox to be sampled. - feats (Tensor): Features come from FPN. - img_meta (dict): Meta information dictionary. - """ - neg_inds = torch.nonzero(assign_result.gt_inds == 0).flatten() - num_neg = neg_inds.size(0) - if num_neg == 0: - return neg_inds, None - with torch.no_grad(): - neg_bboxes = bboxes[neg_inds] - neg_rois = bbox2roi([neg_bboxes]) - bbox_result = self.context._bbox_forward(feats, neg_rois) - cls_score, bbox_pred = bbox_result['cls_score'], bbox_result[ - 'bbox_pred'] - - ori_loss = self.bbox_head.loss( - cls_score=cls_score, - bbox_pred=None, - rois=None, - labels=neg_inds.new_full((num_neg, ), - self.bbox_head.num_classes), - label_weights=cls_score.new_ones(num_neg), - bbox_targets=None, - bbox_weights=None, - reduction_override='none')['loss_cls'] - - # filter out samples with the max score lower than score_thr - max_score, argmax_score = cls_score.softmax(-1)[:, :-1].max(-1) - valid_inds = (max_score > self.score_thr).nonzero().view(-1) - invalid_inds = (max_score <= self.score_thr).nonzero().view(-1) - num_valid = valid_inds.size(0) - num_invalid = invalid_inds.size(0) - - num_expected = min(num_neg, num_expected) - num_hlr = min(num_valid, num_expected) - num_rand = num_expected - num_hlr - if num_valid > 0: - valid_rois = neg_rois[valid_inds] - valid_max_score = max_score[valid_inds] - valid_argmax_score = argmax_score[valid_inds] - valid_bbox_pred = bbox_pred[valid_inds] - - # valid_bbox_pred shape: [num_valid, #num_classes, 4] - valid_bbox_pred = valid_bbox_pred.view( - valid_bbox_pred.size(0), -1, 4) - selected_bbox_pred = valid_bbox_pred[range(num_valid), - valid_argmax_score] - pred_bboxes = self.bbox_head.bbox_coder.decode( - valid_rois[:, 1:], selected_bbox_pred) - pred_bboxes_with_score = torch.cat( - [pred_bboxes, valid_max_score[:, None]], -1) - group = nms_match(pred_bboxes_with_score, self.iou_thr) - - # imp: importance - imp = cls_score.new_zeros(num_valid) - for g in group: - g_score = valid_max_score[g] - # g_score has already sorted - rank = g_score.new_tensor(range(g_score.size(0))) - imp[g] = num_valid - rank + g_score - _, imp_rank_inds = imp.sort(descending=True) - _, imp_rank = imp_rank_inds.sort() - hlr_inds = imp_rank_inds[:num_expected] - - if num_rand > 0: - rand_inds = torch.randperm(num_invalid)[:num_rand] - select_inds = torch.cat( - [valid_inds[hlr_inds], invalid_inds[rand_inds]]) - else: - select_inds = valid_inds[hlr_inds] - - neg_label_weights = cls_score.new_ones(num_expected) - - up_bound = max(num_expected, num_valid) - imp_weights = (up_bound - - imp_rank[hlr_inds].float()) / up_bound - neg_label_weights[:num_hlr] = imp_weights - neg_label_weights[num_hlr:] = imp_weights.min() - neg_label_weights = (self.bias + - (1 - self.bias) * neg_label_weights).pow( - self.k) - ori_selected_loss = ori_loss[select_inds] - new_loss = ori_selected_loss * neg_label_weights - norm_ratio = ori_selected_loss.sum() / new_loss.sum() - neg_label_weights *= norm_ratio - else: - neg_label_weights = cls_score.new_ones(num_expected) - select_inds = torch.randperm(num_neg)[:num_expected] - - return neg_inds[select_inds], neg_label_weights - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - img_meta=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. 
- bboxes (Tensor): Boxes to be sampled from. - gt_bboxes (Tensor): Ground truth bboxes. - gt_labels (Tensor, optional): Class labels of ground truth bboxes. - - Returns: - tuple[:obj:`SamplingResult`, Tensor]: Sampling result and negative - label weights. - """ - bboxes = bboxes[:, :4] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8) - if self.add_gt_as_proposals: - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds, neg_label_weights = self.neg_sampler._sample_neg( - assign_result, - num_expected_neg, - bboxes, - img_meta=img_meta, - **kwargs) - - return SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags), neg_label_weights diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/transforms.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/transforms.py deleted file mode 100644 index 6d72076a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/bbox/transforms.py +++ /dev/null @@ -1,270 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - - -def find_inside_bboxes(bboxes, img_h, img_w): - """Find bboxes as long as a part of bboxes is inside the image. - - Args: - bboxes (Tensor): Shape (N, 4). - img_h (int): Image height. - img_w (int): Image width. - - Returns: - Tensor: Index of the remaining bboxes. - """ - inside_inds = (bboxes[:, 0] < img_w) & (bboxes[:, 2] > 0) \ - & (bboxes[:, 1] < img_h) & (bboxes[:, 3] > 0) - return inside_inds - - -def bbox_flip(bboxes, img_shape, direction='horizontal'): - """Flip bboxes horizontally or vertically. - - Args: - bboxes (Tensor): Shape (..., 4*k) - img_shape (tuple): Image shape. - direction (str): Flip direction, options are "horizontal", "vertical", - "diagonal". Default: "horizontal" - - Returns: - Tensor: Flipped bboxes. 
- """ - assert bboxes.shape[-1] % 4 == 0 - assert direction in ['horizontal', 'vertical', 'diagonal'] - flipped = bboxes.clone() - if direction == 'horizontal': - flipped[..., 0::4] = img_shape[1] - bboxes[..., 2::4] - flipped[..., 2::4] = img_shape[1] - bboxes[..., 0::4] - elif direction == 'vertical': - flipped[..., 1::4] = img_shape[0] - bboxes[..., 3::4] - flipped[..., 3::4] = img_shape[0] - bboxes[..., 1::4] - else: - flipped[..., 0::4] = img_shape[1] - bboxes[..., 2::4] - flipped[..., 1::4] = img_shape[0] - bboxes[..., 3::4] - flipped[..., 2::4] = img_shape[1] - bboxes[..., 0::4] - flipped[..., 3::4] = img_shape[0] - bboxes[..., 1::4] - return flipped - - -def bbox_mapping(bboxes, - img_shape, - scale_factor, - flip, - flip_direction='horizontal'): - """Map bboxes from the original image scale to testing scale.""" - new_bboxes = bboxes * bboxes.new_tensor(scale_factor) - if flip: - new_bboxes = bbox_flip(new_bboxes, img_shape, flip_direction) - return new_bboxes - - -def bbox_mapping_back(bboxes, - img_shape, - scale_factor, - flip, - flip_direction='horizontal'): - """Map bboxes from testing scale to original image scale.""" - new_bboxes = bbox_flip(bboxes, img_shape, - flip_direction) if flip else bboxes - new_bboxes = new_bboxes.view(-1, 4) / new_bboxes.new_tensor(scale_factor) - return new_bboxes.view(bboxes.shape) - - -def bbox2roi(bbox_list): - """Convert a list of bboxes to roi format. - - Args: - bbox_list (list[Tensor]): a list of bboxes corresponding to a batch - of images. - - Returns: - Tensor: shape (n, 5), [batch_ind, x1, y1, x2, y2] - """ - rois_list = [] - for img_id, bboxes in enumerate(bbox_list): - if bboxes.size(0) > 0: - img_inds = bboxes.new_full((bboxes.size(0), 1), img_id) - rois = torch.cat([img_inds, bboxes[:, :4]], dim=-1) - else: - rois = bboxes.new_zeros((0, 5)) - rois_list.append(rois) - rois = torch.cat(rois_list, 0) - return rois - - -def roi2bbox(rois): - """Convert rois to bounding box format. - - Args: - rois (torch.Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - Returns: - list[torch.Tensor]: Converted boxes of corresponding rois. - """ - bbox_list = [] - img_ids = torch.unique(rois[:, 0].cpu(), sorted=True) - for img_id in img_ids: - inds = (rois[:, 0] == img_id.item()) - bbox = rois[inds, 1:] - bbox_list.append(bbox) - return bbox_list - - -def bbox2result(bboxes, labels, num_classes): - """Convert detection results to a list of numpy arrays. - - Args: - bboxes (torch.Tensor | np.ndarray): shape (n, 5) - labels (torch.Tensor | np.ndarray): shape (n, ) - num_classes (int): class number, including background class - - Returns: - list(ndarray): bbox results of each class - """ - if bboxes.shape[0] == 0: - return [np.zeros((0, 5), dtype=np.float32) for i in range(num_classes)] - else: - if isinstance(bboxes, torch.Tensor): - bboxes = bboxes.detach().cpu().numpy() - labels = labels.detach().cpu().numpy() - return [bboxes[labels == i, :] for i in range(num_classes)] - - -def distance2bbox(points, distance, max_shape=None): - """Decode distance prediction to bounding box. - - Args: - points (Tensor): Shape (B, N, 2) or (N, 2). - distance (Tensor): Distance from the given point to 4 - boundaries (left, top, right, bottom). Shape (B, N, 4) or (N, 4) - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). 
If priors shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - - Returns: - Tensor: Boxes with shape (N, 4) or (B, N, 4) - """ - - x1 = points[..., 0] - distance[..., 0] - y1 = points[..., 1] - distance[..., 1] - x2 = points[..., 0] + distance[..., 2] - y2 = points[..., 1] + distance[..., 3] - - bboxes = torch.stack([x1, y1, x2, y2], -1) - - if max_shape is not None: - if bboxes.dim() == 2 and not torch.onnx.is_in_onnx_export(): - # speed up - bboxes[:, 0::2].clamp_(min=0, max=max_shape[1]) - bboxes[:, 1::2].clamp_(min=0, max=max_shape[0]) - return bboxes - - # clip bboxes with dynamic `min` and `max` for onnx - if torch.onnx.is_in_onnx_export(): - from mmdet.core.export import dynamic_clip_for_onnx - x1, y1, x2, y2 = dynamic_clip_for_onnx(x1, y1, x2, y2, max_shape) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - return bboxes - if not isinstance(max_shape, torch.Tensor): - max_shape = x1.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(x1) - if max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = x1.new_tensor(0) - max_xy = torch.cat([max_shape, max_shape], - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes - - -def bbox2distance(points, bbox, max_dis=None, eps=0.1): - """Decode bounding box based on distances. - - Args: - points (Tensor): Shape (n, 2), [x, y]. - bbox (Tensor): Shape (n, 4), "xyxy" format - max_dis (float): Upper bound of the distance. - eps (float): a small value to ensure target < max_dis, instead <= - - Returns: - Tensor: Decoded distances. - """ - left = points[:, 0] - bbox[:, 0] - top = points[:, 1] - bbox[:, 1] - right = bbox[:, 2] - points[:, 0] - bottom = bbox[:, 3] - points[:, 1] - if max_dis is not None: - left = left.clamp(min=0, max=max_dis - eps) - top = top.clamp(min=0, max=max_dis - eps) - right = right.clamp(min=0, max=max_dis - eps) - bottom = bottom.clamp(min=0, max=max_dis - eps) - return torch.stack([left, top, right, bottom], -1) - - -def bbox_rescale(bboxes, scale_factor=1.0): - """Rescale bounding box w.r.t. scale_factor. - - Args: - bboxes (Tensor): Shape (n, 4) for bboxes or (n, 5) for rois - scale_factor (float): rescale factor - - Returns: - Tensor: Rescaled bboxes. - """ - if bboxes.size(1) == 5: - bboxes_ = bboxes[:, 1:] - inds_ = bboxes[:, 0] - else: - bboxes_ = bboxes - cx = (bboxes_[:, 0] + bboxes_[:, 2]) * 0.5 - cy = (bboxes_[:, 1] + bboxes_[:, 3]) * 0.5 - w = bboxes_[:, 2] - bboxes_[:, 0] - h = bboxes_[:, 3] - bboxes_[:, 1] - w = w * scale_factor - h = h * scale_factor - x1 = cx - 0.5 * w - x2 = cx + 0.5 * w - y1 = cy - 0.5 * h - y2 = cy + 0.5 * h - if bboxes.size(1) == 5: - rescaled_bboxes = torch.stack([inds_, x1, y1, x2, y2], dim=-1) - else: - rescaled_bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - return rescaled_bboxes - - -def bbox_cxcywh_to_xyxy(bbox): - """Convert bbox coordinates from (cx, cy, w, h) to (x1, y1, x2, y2). - - Args: - bbox (Tensor): Shape (n, 4) for bboxes. - - Returns: - Tensor: Converted bboxes. - """ - cx, cy, w, h = bbox.split((1, 1, 1, 1), dim=-1) - bbox_new = [(cx - 0.5 * w), (cy - 0.5 * h), (cx + 0.5 * w), (cy + 0.5 * h)] - return torch.cat(bbox_new, dim=-1) - - -def bbox_xyxy_to_cxcywh(bbox): - """Convert bbox coordinates from (x1, y1, x2, y2) to (cx, cy, w, h). - - Args: - bbox (Tensor): Shape (n, 4) for bboxes. - - Returns: - Tensor: Converted bboxes. 
- """ - x1, y1, x2, y2 = bbox.split((1, 1, 1, 1), dim=-1) - bbox_new = [(x1 + x2) / 2, (y1 + y2) / 2, (x2 - x1), (y2 - y1)] - return torch.cat(bbox_new, dim=-1) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/data_structures/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/data_structures/__init__.py deleted file mode 100644 index 11ab96c5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/data_structures/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .general_data import GeneralData -from .instance_data import InstanceData - -__all__ = ['GeneralData', 'InstanceData'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/data_structures/general_data.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/data_structures/general_data.py deleted file mode 100644 index 99316e41..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/data_structures/general_data.py +++ /dev/null @@ -1,326 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import numpy as np -import torch - -from mmdet.utils.util_mixins import NiceRepr - - -class GeneralData(NiceRepr): - """A general data structure of OpenMMlab. - - A data structure that stores the meta information, - the annotations of the images or the model predictions, - which can be used in communication between components. - - The attributes in `GeneralData` are divided into two parts, - the `meta_info_fields` and the `data_fields` respectively. - - - `meta_info_fields`: Usually contains the - information about the image such as filename, - image_shape, pad_shape, etc. All attributes in - it are immutable once set, - but the user can add new meta information with - `set_meta_info` function, all information can be accessed - with methods `meta_info_keys`, `meta_info_values`, - `meta_info_items`. - - - `data_fields`: Annotations or model predictions are - stored. The attributes can be accessed or modified by - dict-like or object-like operations, such as - `.` , `[]`, `in`, `del`, `pop(str)` `get(str)`, `keys()`, - `values()`, `items()`. Users can also apply tensor-like methods - to all obj:`torch.Tensor` in the `data_fileds`, - such as `.cuda()`, `.cpu()`, `.numpy()`, `device`, `.to()` - `.detach()`, `.numpy()` - - Args: - meta_info (dict, optional): A dict contains the meta information - of single image. such as `img_shape`, `scale_factor`, etc. - Default: None. - data (dict, optional): A dict contains annotations of single image or - model predictions. Default: None. 
- - Examples: - >>> from mmdet.core import GeneralData - >>> img_meta = dict(img_shape=(800, 1196, 3), pad_shape=(800, 1216, 3)) - >>> instance_data = GeneralData(meta_info=img_meta) - >>> img_shape in instance_data - True - >>> instance_data.det_labels = torch.LongTensor([0, 1, 2, 3]) - >>> instance_data["det_scores"] = torch.Tensor([0.01, 0.1, 0.2, 0.3]) - >>> print(results) - - >>> instance_data.det_scores - tensor([0.0100, 0.1000, 0.2000, 0.3000]) - >>> instance_data.det_labels - tensor([0, 1, 2, 3]) - >>> instance_data['det_labels'] - tensor([0, 1, 2, 3]) - >>> 'det_labels' in instance_data - True - >>> instance_data.img_shape - (800, 1196, 3) - >>> 'det_scores' in instance_data - True - >>> del instance_data.det_scores - >>> 'det_scores' in instance_data - False - >>> det_labels = instance_data.pop('det_labels', None) - >>> det_labels - tensor([0, 1, 2, 3]) - >>> 'det_labels' in instance_data - >>> False - """ - - def __init__(self, meta_info=None, data=None): - - self._meta_info_fields = set() - self._data_fields = set() - - if meta_info is not None: - self.set_meta_info(meta_info=meta_info) - if data is not None: - self.set_data(data) - - def set_meta_info(self, meta_info): - """Add meta information. - - Args: - meta_info (dict): A dict contains the meta information - of image. such as `img_shape`, `scale_factor`, etc. - Default: None. - """ - assert isinstance(meta_info, - dict), f'meta should be a `dict` but get {meta_info}' - meta = copy.deepcopy(meta_info) - for k, v in meta.items(): - # should be consistent with original meta_info - if k in self._meta_info_fields: - ori_value = getattr(self, k) - if isinstance(ori_value, (torch.Tensor, np.ndarray)): - if (ori_value == v).all(): - continue - else: - raise KeyError( - f'img_meta_info {k} has been set as ' - f'{getattr(self, k)} before, which is immutable ') - elif ori_value == v: - continue - else: - raise KeyError( - f'img_meta_info {k} has been set as ' - f'{getattr(self, k)} before, which is immutable ') - else: - self._meta_info_fields.add(k) - self.__dict__[k] = v - - def set_data(self, data): - """Update a dict to `data_fields`. - - Args: - data (dict): A dict contains annotations of image or - model predictions. Default: None. - """ - assert isinstance(data, - dict), f'meta should be a `dict` but get {data}' - for k, v in data.items(): - self.__setattr__(k, v) - - def new(self, meta_info=None, data=None): - """Return a new results with same image meta information. - - Args: - meta_info (dict, optional): A dict contains the meta information - of image. such as `img_shape`, `scale_factor`, etc. - Default: None. - data (dict, optional): A dict contains annotations of image or - model predictions. Default: None. - """ - new_data = self.__class__() - new_data.set_meta_info(dict(self.meta_info_items())) - if meta_info is not None: - new_data.set_meta_info(meta_info) - if data is not None: - new_data.set_data(data) - return new_data - - def keys(self): - """ - Returns: - list: Contains all keys in data_fields. - """ - return [key for key in self._data_fields] - - def meta_info_keys(self): - """ - Returns: - list: Contains all keys in meta_info_fields. - """ - return [key for key in self._meta_info_fields] - - def values(self): - """ - Returns: - list: Contains all values in data_fields. - """ - return [getattr(self, k) for k in self.keys()] - - def meta_info_values(self): - """ - Returns: - list: Contains all values in meta_info_fields. 
- """ - return [getattr(self, k) for k in self.meta_info_keys()] - - def items(self): - for k in self.keys(): - yield (k, getattr(self, k)) - - def meta_info_items(self): - for k in self.meta_info_keys(): - yield (k, getattr(self, k)) - - def __setattr__(self, name, val): - if name in ('_meta_info_fields', '_data_fields'): - if not hasattr(self, name): - super().__setattr__(name, val) - else: - raise AttributeError( - f'{name} has been used as a ' - f'private attribute, which is immutable. ') - else: - if name in self._meta_info_fields: - raise AttributeError(f'`{name}` is used in meta information,' - f'which is immutable') - - self._data_fields.add(name) - super().__setattr__(name, val) - - def __delattr__(self, item): - - if item in ('_meta_info_fields', '_data_fields'): - raise AttributeError(f'{item} has been used as a ' - f'private attribute, which is immutable. ') - - if item in self._meta_info_fields: - raise KeyError(f'{item} is used in meta information, ' - f'which is immutable.') - super().__delattr__(item) - if item in self._data_fields: - self._data_fields.remove(item) - - # dict-like methods - __setitem__ = __setattr__ - __delitem__ = __delattr__ - - def __getitem__(self, name): - return getattr(self, name) - - def get(self, *args): - assert len(args) < 3, '`get` get more than 2 arguments' - return self.__dict__.get(*args) - - def pop(self, *args): - assert len(args) < 3, '`pop` get more than 2 arguments' - name = args[0] - if name in self._meta_info_fields: - raise KeyError(f'{name} is a key in meta information, ' - f'which is immutable') - - if args[0] in self._data_fields: - self._data_fields.remove(args[0]) - return self.__dict__.pop(*args) - - # with default value - elif len(args) == 2: - return args[1] - else: - raise KeyError(f'{args[0]}') - - def __contains__(self, item): - return item in self._data_fields or \ - item in self._meta_info_fields - - # Tensor-like methods - def to(self, *args, **kwargs): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if hasattr(v, 'to'): - v = v.to(*args, **kwargs) - new_data[k] = v - return new_data - - # Tensor-like methods - def cpu(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.cpu() - new_data[k] = v - return new_data - - # Tensor-like methods - def mlu(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.mlu() - new_data[k] = v - return new_data - - # Tensor-like methods - def cuda(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.cuda() - new_data[k] = v - return new_data - - # Tensor-like methods - def detach(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.detach() - new_data[k] = v - return new_data - - # Tensor-like methods - def numpy(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.detach().cpu().numpy() - new_data[k] = v - return new_data - - def __nice__(self): - repr = '\n \n META INFORMATION \n' - for k, v in self.meta_info_items(): - repr += f'{k}: {v} \n' - repr += '\n DATA FIELDS \n' 
- for k, v in self.items(): - if isinstance(v, (torch.Tensor, np.ndarray)): - repr += f'shape of {k}: {v.shape} \n' - else: - repr += f'{k}: {v} \n' - return repr + '\n' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/data_structures/instance_data.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/data_structures/instance_data.py deleted file mode 100644 index eef2065c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/data_structures/instance_data.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import itertools - -import numpy as np -import torch - -from .general_data import GeneralData - - -class InstanceData(GeneralData): - """Data structure for instance-level annnotations or predictions. - - Subclass of :class:`GeneralData`. All value in `data_fields` - should have the same length. This design refer to - https://github.com/facebookresearch/detectron2/blob/master/detectron2/structures/instances.py # noqa E501 - - Examples: - >>> from mmdet.core import InstanceData - >>> import numpy as np - >>> img_meta = dict(img_shape=(800, 1196, 3), pad_shape=(800, 1216, 3)) - >>> results = InstanceData(img_meta) - >>> img_shape in results - True - >>> results.det_labels = torch.LongTensor([0, 1, 2, 3]) - >>> results["det_scores"] = torch.Tensor([0.01, 0.7, 0.6, 0.3]) - >>> results["det_masks"] = np.ndarray(4, 2, 2) - >>> len(results) - 4 - >>> print(resutls) - - >>> sorted_results = results[results.det_scores.sort().indices] - >>> sorted_results.det_scores - tensor([0.0100, 0.3000, 0.6000, 0.7000]) - >>> sorted_results.det_labels - tensor([0, 3, 2, 1]) - >>> print(results[results.scores > 0.5]) - - >>> results[results.det_scores > 0.5].det_labels - tensor([1, 2]) - >>> results[results.det_scores > 0.5].det_scores - tensor([0.7000, 0.6000]) - """ - - def __setattr__(self, name, value): - - if name in ('_meta_info_fields', '_data_fields'): - if not hasattr(self, name): - super().__setattr__(name, value) - else: - raise AttributeError( - f'{name} has been used as a ' - f'private attribute, which is immutable. ') - - else: - assert isinstance(value, (torch.Tensor, np.ndarray, list)), \ - f'Can set {type(value)}, only support' \ - f' {(torch.Tensor, np.ndarray, list)}' - - if self._data_fields: - assert len(value) == len(self), f'the length of ' \ - f'values {len(value)} is ' \ - f'not consistent with' \ - f' the length ' \ - f'of this :obj:`InstanceData` ' \ - f'{len(self)} ' - super().__setattr__(name, value) - - def __getitem__(self, item): - """ - Args: - item (str, obj:`slice`, - obj`torch.LongTensor`, obj:`torch.BoolTensor`): - get the corresponding values according to item. - - Returns: - obj:`InstanceData`: Corresponding values. - """ - assert len(self), ' This is a empty instance' - - assert isinstance( - item, (str, slice, int, torch.LongTensor, torch.BoolTensor)) - - if isinstance(item, str): - return getattr(self, item) - - if type(item) == int: - if item >= len(self) or item < -len(self): - raise IndexError(f'Index {item} out of range!') - else: - # keep the dimension - item = slice(item, None, len(self)) - - new_data = self.new() - if isinstance(item, (torch.Tensor)): - assert item.dim() == 1, 'Only support to get the' \ - ' values along the first dimension.' 
- if isinstance(item, torch.BoolTensor): - assert len(item) == len(self), f'The shape of the' \ - f' input(BoolTensor)) ' \ - f'{len(item)} ' \ - f' does not match the shape ' \ - f'of the indexed tensor ' \ - f'in results_filed ' \ - f'{len(self)} at ' \ - f'first dimension. ' - - for k, v in self.items(): - if isinstance(v, torch.Tensor): - new_data[k] = v[item] - elif isinstance(v, np.ndarray): - new_data[k] = v[item.cpu().numpy()] - elif isinstance(v, list): - r_list = [] - # convert to indexes from boolTensor - if isinstance(item, torch.BoolTensor): - indexes = torch.nonzero(item).view(-1) - else: - indexes = item - for index in indexes: - r_list.append(v[index]) - new_data[k] = r_list - else: - # item is a slice - for k, v in self.items(): - new_data[k] = v[item] - return new_data - - @staticmethod - def cat(instances_list): - """Concat the predictions of all :obj:`InstanceData` in the list. - - Args: - instances_list (list[:obj:`InstanceData`]): A list - of :obj:`InstanceData`. - - Returns: - obj:`InstanceData` - """ - assert all( - isinstance(results, InstanceData) for results in instances_list) - assert len(instances_list) > 0 - if len(instances_list) == 1: - return instances_list[0] - - new_data = instances_list[0].new() - for k in instances_list[0]._data_fields: - values = [results[k] for results in instances_list] - v0 = values[0] - if isinstance(v0, torch.Tensor): - values = torch.cat(values, dim=0) - elif isinstance(v0, np.ndarray): - values = np.concatenate(values, axis=0) - elif isinstance(v0, list): - values = list(itertools.chain(*values)) - else: - raise ValueError( - f'Can not concat the {k} which is a {type(v0)}') - new_data[k] = values - return new_data - - def __len__(self): - if len(self._data_fields): - for v in self.values(): - return len(v) - else: - raise AssertionError('This is an empty `InstanceData`.') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/__init__.py deleted file mode 100644 index 67e7c55b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .class_names import (cityscapes_classes, coco_classes, dataset_aliases, - get_classes, imagenet_det_classes, - imagenet_vid_classes, oid_challenge_classes, - oid_v6_classes, voc_classes) -from .eval_hooks import DistEvalHook, EvalHook -from .mean_ap import average_precision, eval_map, print_map_summary -from .panoptic_utils import INSTANCE_OFFSET -from .recall import (eval_recalls, plot_iou_recall, plot_num_recall, - print_recall_summary) - -__all__ = [ - 'voc_classes', 'imagenet_det_classes', 'imagenet_vid_classes', - 'coco_classes', 'cityscapes_classes', 'dataset_aliases', 'get_classes', - 'DistEvalHook', 'EvalHook', 'average_precision', 'eval_map', - 'print_map_summary', 'eval_recalls', 'print_recall_summary', - 'plot_num_recall', 'plot_iou_recall', 'oid_v6_classes', - 'oid_challenge_classes', 'INSTANCE_OFFSET' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/bbox_overlaps.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/bbox_overlaps.py deleted file mode 100644 index 5d6eb82f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/bbox_overlaps.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np - - -def bbox_overlaps(bboxes1, - bboxes2, - mode='iou', - eps=1e-6, - use_legacy_coordinate=False): - """Calculate the ious between each bbox of bboxes1 and bboxes2. - - Args: - bboxes1 (ndarray): Shape (n, 4) - bboxes2 (ndarray): Shape (k, 4) - mode (str): IOU (intersection over union) or IOF (intersection - over foreground) - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Note when function is used in `VOCDataset`, it should be - True to align with the official implementation - `http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar` - Default: False. - - Returns: - ious (ndarray): Shape (n, k) - """ - - assert mode in ['iou', 'iof'] - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. - bboxes1 = bboxes1.astype(np.float32) - bboxes2 = bboxes2.astype(np.float32) - rows = bboxes1.shape[0] - cols = bboxes2.shape[0] - ious = np.zeros((rows, cols), dtype=np.float32) - if rows * cols == 0: - return ious - exchange = False - if bboxes1.shape[0] > bboxes2.shape[0]: - bboxes1, bboxes2 = bboxes2, bboxes1 - ious = np.zeros((cols, rows), dtype=np.float32) - exchange = True - area1 = (bboxes1[:, 2] - bboxes1[:, 0] + extra_length) * ( - bboxes1[:, 3] - bboxes1[:, 1] + extra_length) - area2 = (bboxes2[:, 2] - bboxes2[:, 0] + extra_length) * ( - bboxes2[:, 3] - bboxes2[:, 1] + extra_length) - for i in range(bboxes1.shape[0]): - x_start = np.maximum(bboxes1[i, 0], bboxes2[:, 0]) - y_start = np.maximum(bboxes1[i, 1], bboxes2[:, 1]) - x_end = np.minimum(bboxes1[i, 2], bboxes2[:, 2]) - y_end = np.minimum(bboxes1[i, 3], bboxes2[:, 3]) - overlap = np.maximum(x_end - x_start + extra_length, 0) * np.maximum( - y_end - y_start + extra_length, 0) - if mode == 'iou': - union = area1[i] + area2 - overlap - else: - union = area1[i] if not exchange else area2 - union = np.maximum(union, eps) - ious[i, :] = overlap / union - if exchange: - ious = ious.T - return ious diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/class_names.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/class_names.py deleted file mode 100644 index 73797118..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/class_names.py +++ /dev/null @@ -1,332 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import mmcv - - -def wider_face_classes(): - return ['face'] - - -def voc_classes(): - return [ - 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', - 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', - 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor' - ] - - -def imagenet_det_classes(): - return [ - 'accordion', 'airplane', 'ant', 'antelope', 'apple', 'armadillo', - 'artichoke', 'axe', 'baby_bed', 'backpack', 'bagel', 'balance_beam', - 'banana', 'band_aid', 'banjo', 'baseball', 'basketball', 'bathing_cap', - 'beaker', 'bear', 'bee', 'bell_pepper', 'bench', 'bicycle', 'binder', - 'bird', 'bookshelf', 'bow_tie', 'bow', 'bowl', 'brassiere', 'burrito', - 'bus', 'butterfly', 'camel', 'can_opener', 'car', 'cart', 'cattle', - 'cello', 'centipede', 'chain_saw', 'chair', 'chime', 'cocktail_shaker', - 'coffee_maker', 'computer_keyboard', 'computer_mouse', 'corkscrew', - 'cream', 'croquet_ball', 'crutch', 'cucumber', 'cup_or_mug', 'diaper', - 'digital_clock', 'dishwasher', 'dog', 'domestic_cat', 'dragonfly', - 'drum', 'dumbbell', 'electric_fan', 'elephant', 'face_powder', 'fig', - 'filing_cabinet', 'flower_pot', 'flute', 'fox', 'french_horn', 'frog', - 'frying_pan', 'giant_panda', 'goldfish', 'golf_ball', 'golfcart', - 'guacamole', 'guitar', 'hair_dryer', 'hair_spray', 'hamburger', - 'hammer', 'hamster', 'harmonica', 'harp', 'hat_with_a_wide_brim', - 'head_cabbage', 'helmet', 'hippopotamus', 'horizontal_bar', 'horse', - 'hotdog', 'iPod', 'isopod', 'jellyfish', 'koala_bear', 'ladle', - 'ladybug', 'lamp', 'laptop', 'lemon', 'lion', 'lipstick', 'lizard', - 'lobster', 'maillot', 'maraca', 'microphone', 'microwave', 'milk_can', - 'miniskirt', 'monkey', 'motorcycle', 'mushroom', 'nail', 'neck_brace', - 'oboe', 'orange', 'otter', 'pencil_box', 'pencil_sharpener', 'perfume', - 'person', 'piano', 'pineapple', 'ping-pong_ball', 'pitcher', 'pizza', - 'plastic_bag', 'plate_rack', 'pomegranate', 'popsicle', 'porcupine', - 'power_drill', 'pretzel', 'printer', 'puck', 'punching_bag', 'purse', - 'rabbit', 'racket', 'ray', 'red_panda', 'refrigerator', - 'remote_control', 'rubber_eraser', 'rugby_ball', 'ruler', - 'salt_or_pepper_shaker', 'saxophone', 'scorpion', 'screwdriver', - 'seal', 'sheep', 'ski', 'skunk', 'snail', 'snake', 'snowmobile', - 'snowplow', 'soap_dispenser', 'soccer_ball', 'sofa', 'spatula', - 'squirrel', 'starfish', 'stethoscope', 'stove', 'strainer', - 'strawberry', 'stretcher', 'sunglasses', 'swimming_trunks', 'swine', - 'syringe', 'table', 'tape_player', 'tennis_ball', 'tick', 'tie', - 'tiger', 'toaster', 'traffic_light', 'train', 'trombone', 'trumpet', - 'turtle', 'tv_or_monitor', 'unicycle', 'vacuum', 'violin', - 'volleyball', 'waffle_iron', 'washer', 'water_bottle', 'watercraft', - 'whale', 'wine_bottle', 'zebra' - ] - - -def imagenet_vid_classes(): - return [ - 'airplane', 'antelope', 'bear', 'bicycle', 'bird', 'bus', 'car', - 'cattle', 'dog', 'domestic_cat', 'elephant', 'fox', 'giant_panda', - 'hamster', 'horse', 'lion', 'lizard', 'monkey', 'motorcycle', 'rabbit', - 'red_panda', 'sheep', 'snake', 'squirrel', 'tiger', 'train', 'turtle', - 'watercraft', 'whale', 'zebra' - ] - - -def coco_classes(): - return [ - 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', - 'truck', 'boat', 'traffic_light', 'fire_hydrant', 'stop_sign', - 'parking_meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', - 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', - 
'sports_ball', 'kite', 'baseball_bat', 'baseball_glove', 'skateboard', - 'surfboard', 'tennis_racket', 'bottle', 'wine_glass', 'cup', 'fork', - 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', - 'broccoli', 'carrot', 'hot_dog', 'pizza', 'donut', 'cake', 'chair', - 'couch', 'potted_plant', 'bed', 'dining_table', 'toilet', 'tv', - 'laptop', 'mouse', 'remote', 'keyboard', 'cell_phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', - 'scissors', 'teddy_bear', 'hair_drier', 'toothbrush' - ] - - -def cityscapes_classes(): - return [ - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle' - ] - - -def oid_challenge_classes(): - return [ - 'Footwear', 'Jeans', 'House', 'Tree', 'Woman', 'Man', 'Land vehicle', - 'Person', 'Wheel', 'Bus', 'Human face', 'Bird', 'Dress', 'Girl', - 'Vehicle', 'Building', 'Cat', 'Car', 'Belt', 'Elephant', 'Dessert', - 'Butterfly', 'Train', 'Guitar', 'Poster', 'Book', 'Boy', 'Bee', - 'Flower', 'Window', 'Hat', 'Human head', 'Dog', 'Human arm', 'Drink', - 'Human mouth', 'Human hair', 'Human nose', 'Human hand', 'Table', - 'Marine invertebrates', 'Fish', 'Sculpture', 'Rose', 'Street light', - 'Glasses', 'Fountain', 'Skyscraper', 'Swimwear', 'Brassiere', 'Drum', - 'Duck', 'Countertop', 'Furniture', 'Ball', 'Human leg', 'Boat', - 'Balloon', 'Bicycle helmet', 'Goggles', 'Door', 'Human eye', 'Shirt', - 'Toy', 'Teddy bear', 'Pasta', 'Tomato', 'Human ear', - 'Vehicle registration plate', 'Microphone', 'Musical keyboard', - 'Tower', 'Houseplant', 'Flowerpot', 'Fruit', 'Vegetable', - 'Musical instrument', 'Suit', 'Motorcycle', 'Bagel', 'French fries', - 'Hamburger', 'Chair', 'Salt and pepper shakers', 'Snail', 'Airplane', - 'Horse', 'Laptop', 'Computer keyboard', 'Football helmet', 'Cocktail', - 'Juice', 'Tie', 'Computer monitor', 'Human beard', 'Bottle', - 'Saxophone', 'Lemon', 'Mouse', 'Sock', 'Cowboy hat', 'Sun hat', - 'Football', 'Porch', 'Sunglasses', 'Lobster', 'Crab', 'Picture frame', - 'Van', 'Crocodile', 'Surfboard', 'Shorts', 'Helicopter', 'Helmet', - 'Sports uniform', 'Taxi', 'Swan', 'Goose', 'Coat', 'Jacket', 'Handbag', - 'Flag', 'Skateboard', 'Television', 'Tire', 'Spoon', 'Palm tree', - 'Stairs', 'Salad', 'Castle', 'Oven', 'Microwave oven', 'Wine', - 'Ceiling fan', 'Mechanical fan', 'Cattle', 'Truck', 'Box', 'Ambulance', - 'Desk', 'Wine glass', 'Reptile', 'Tank', 'Traffic light', 'Billboard', - 'Tent', 'Insect', 'Spider', 'Treadmill', 'Cupboard', 'Shelf', - 'Seat belt', 'Human foot', 'Bicycle', 'Bicycle wheel', 'Couch', - 'Bookcase', 'Fedora', 'Backpack', 'Bench', 'Oyster', - 'Moths and butterflies', 'Lavender', 'Waffle', 'Fork', 'Animal', - 'Accordion', 'Mobile phone', 'Plate', 'Coffee cup', 'Saucer', - 'Platter', 'Dagger', 'Knife', 'Bull', 'Tortoise', 'Sea turtle', 'Deer', - 'Weapon', 'Apple', 'Ski', 'Taco', 'Traffic sign', 'Beer', 'Necklace', - 'Sunflower', 'Piano', 'Organ', 'Harpsichord', 'Bed', 'Cabinetry', - 'Nightstand', 'Curtain', 'Chest of drawers', 'Drawer', 'Parrot', - 'Sandal', 'High heels', 'Tableware', 'Cart', 'Mushroom', 'Kite', - 'Missile', 'Seafood', 'Camera', 'Paper towel', 'Toilet paper', - 'Sombrero', 'Radish', 'Lighthouse', 'Segway', 'Pig', 'Watercraft', - 'Golf cart', 'studio couch', 'Dolphin', 'Whale', 'Earrings', 'Otter', - 'Sea lion', 'Whiteboard', 'Monkey', 'Gondola', 'Zebra', - 'Baseball glove', 'Scarf', 'Adhesive tape', 'Trousers', 'Scoreboard', - 'Lily', 'Carnivore', 'Power plugs and sockets', 'Office building', - 'Sandwich', 'Swimming pool', 'Headphones', 'Tin 
can', 'Crown', 'Doll', - 'Cake', 'Frog', 'Beetle', 'Ant', 'Gas stove', 'Canoe', 'Falcon', - 'Blue jay', 'Egg', 'Fire hydrant', 'Raccoon', 'Muffin', 'Wall clock', - 'Coffee', 'Mug', 'Tea', 'Bear', 'Waste container', 'Home appliance', - 'Candle', 'Lion', 'Mirror', 'Starfish', 'Marine mammal', 'Wheelchair', - 'Umbrella', 'Alpaca', 'Violin', 'Cello', 'Brown bear', 'Canary', 'Bat', - 'Ruler', 'Plastic bag', 'Penguin', 'Watermelon', 'Harbor seal', 'Pen', - 'Pumpkin', 'Harp', 'Kitchen appliance', 'Roller skates', 'Bust', - 'Coffee table', 'Tennis ball', 'Tennis racket', 'Ladder', 'Boot', - 'Bowl', 'Stop sign', 'Volleyball', 'Eagle', 'Paddle', 'Chicken', - 'Skull', 'Lamp', 'Beehive', 'Maple', 'Sink', 'Goldfish', 'Tripod', - 'Coconut', 'Bidet', 'Tap', 'Bathroom cabinet', 'Toilet', - 'Filing cabinet', 'Pretzel', 'Table tennis racket', 'Bronze sculpture', - 'Rocket', 'Mouse', 'Hamster', 'Lizard', 'Lifejacket', 'Goat', - 'Washing machine', 'Trumpet', 'Horn', 'Trombone', 'Sheep', - 'Tablet computer', 'Pillow', 'Kitchen & dining room table', - 'Parachute', 'Raven', 'Glove', 'Loveseat', 'Christmas tree', - 'Shellfish', 'Rifle', 'Shotgun', 'Sushi', 'Sparrow', 'Bread', - 'Toaster', 'Watch', 'Asparagus', 'Artichoke', 'Suitcase', 'Antelope', - 'Broccoli', 'Ice cream', 'Racket', 'Banana', 'Cookie', 'Cucumber', - 'Dragonfly', 'Lynx', 'Caterpillar', 'Light bulb', 'Office supplies', - 'Miniskirt', 'Skirt', 'Fireplace', 'Potato', 'Light switch', - 'Croissant', 'Cabbage', 'Ladybug', 'Handgun', 'Luggage and bags', - 'Window blind', 'Snowboard', 'Baseball bat', 'Digital clock', - 'Serving tray', 'Infant bed', 'Sofa bed', 'Guacamole', 'Fox', 'Pizza', - 'Snowplow', 'Jet ski', 'Refrigerator', 'Lantern', 'Convenience store', - 'Sword', 'Rugby ball', 'Owl', 'Ostrich', 'Pancake', 'Strawberry', - 'Carrot', 'Tart', 'Dice', 'Turkey', 'Rabbit', 'Invertebrate', 'Vase', - 'Stool', 'Swim cap', 'Shower', 'Clock', 'Jellyfish', 'Aircraft', - 'Chopsticks', 'Orange', 'Snake', 'Sewing machine', 'Kangaroo', 'Mixer', - 'Food processor', 'Shrimp', 'Towel', 'Porcupine', 'Jaguar', 'Cannon', - 'Limousine', 'Mule', 'Squirrel', 'Kitchen knife', 'Tiara', 'Tiger', - 'Bow and arrow', 'Candy', 'Rhinoceros', 'Shark', 'Cricket ball', - 'Doughnut', 'Plumbing fixture', 'Camel', 'Polar bear', 'Coin', - 'Printer', 'Blender', 'Giraffe', 'Billiard table', 'Kettle', - 'Dinosaur', 'Pineapple', 'Zucchini', 'Jug', 'Barge', 'Teapot', - 'Golf ball', 'Binoculars', 'Scissors', 'Hot dog', 'Door handle', - 'Seahorse', 'Bathtub', 'Leopard', 'Centipede', 'Grapefruit', 'Snowman', - 'Cheetah', 'Alarm clock', 'Grape', 'Wrench', 'Wok', 'Bell pepper', - 'Cake stand', 'Barrel', 'Woodpecker', 'Flute', 'Corded phone', - 'Willow', 'Punching bag', 'Pomegranate', 'Telephone', 'Pear', - 'Common fig', 'Bench', 'Wood-burning stove', 'Burrito', 'Nail', - 'Turtle', 'Submarine sandwich', 'Drinking straw', 'Peach', 'Popcorn', - 'Frying pan', 'Picnic basket', 'Honeycomb', 'Envelope', 'Mango', - 'Cutting board', 'Pitcher', 'Stationary bicycle', 'Dumbbell', - 'Personal care', 'Dog bed', 'Snowmobile', 'Oboe', 'Briefcase', - 'Squash', 'Tick', 'Slow cooker', 'Coffeemaker', 'Measuring cup', - 'Crutch', 'Stretcher', 'Screwdriver', 'Flashlight', 'Spatula', - 'Pressure cooker', 'Ring binder', 'Beaker', 'Torch', 'Winter melon' - ] - - -def oid_v6_classes(): - return [ - 'Tortoise', 'Container', 'Magpie', 'Sea turtle', 'Football', - 'Ambulance', 'Ladder', 'Toothbrush', 'Syringe', 'Sink', 'Toy', - 'Organ (Musical Instrument)', 'Cassette deck', 'Apple', 'Human eye', - 'Cosmetics', 'Paddle', 
'Snowman', 'Beer', 'Chopsticks', 'Human beard', - 'Bird', 'Parking meter', 'Traffic light', 'Croissant', 'Cucumber', - 'Radish', 'Towel', 'Doll', 'Skull', 'Washing machine', 'Glove', 'Tick', - 'Belt', 'Sunglasses', 'Banjo', 'Cart', 'Ball', 'Backpack', 'Bicycle', - 'Home appliance', 'Centipede', 'Boat', 'Surfboard', 'Boot', - 'Headphones', 'Hot dog', 'Shorts', 'Fast food', 'Bus', 'Boy', - 'Screwdriver', 'Bicycle wheel', 'Barge', 'Laptop', 'Miniskirt', - 'Drill (Tool)', 'Dress', 'Bear', 'Waffle', 'Pancake', 'Brown bear', - 'Woodpecker', 'Blue jay', 'Pretzel', 'Bagel', 'Tower', 'Teapot', - 'Person', 'Bow and arrow', 'Swimwear', 'Beehive', 'Brassiere', 'Bee', - 'Bat (Animal)', 'Starfish', 'Popcorn', 'Burrito', 'Chainsaw', - 'Balloon', 'Wrench', 'Tent', 'Vehicle registration plate', 'Lantern', - 'Toaster', 'Flashlight', 'Billboard', 'Tiara', 'Limousine', 'Necklace', - 'Carnivore', 'Scissors', 'Stairs', 'Computer keyboard', 'Printer', - 'Traffic sign', 'Chair', 'Shirt', 'Poster', 'Cheese', 'Sock', - 'Fire hydrant', 'Land vehicle', 'Earrings', 'Tie', 'Watercraft', - 'Cabinetry', 'Suitcase', 'Muffin', 'Bidet', 'Snack', 'Snowmobile', - 'Clock', 'Medical equipment', 'Cattle', 'Cello', 'Jet ski', 'Camel', - 'Coat', 'Suit', 'Desk', 'Cat', 'Bronze sculpture', 'Juice', 'Gondola', - 'Beetle', 'Cannon', 'Computer mouse', 'Cookie', 'Office building', - 'Fountain', 'Coin', 'Calculator', 'Cocktail', 'Computer monitor', - 'Box', 'Stapler', 'Christmas tree', 'Cowboy hat', 'Hiking equipment', - 'Studio couch', 'Drum', 'Dessert', 'Wine rack', 'Drink', 'Zucchini', - 'Ladle', 'Human mouth', 'Dairy Product', 'Dice', 'Oven', 'Dinosaur', - 'Ratchet (Device)', 'Couch', 'Cricket ball', 'Winter melon', 'Spatula', - 'Whiteboard', 'Pencil sharpener', 'Door', 'Hat', 'Shower', 'Eraser', - 'Fedora', 'Guacamole', 'Dagger', 'Scarf', 'Dolphin', 'Sombrero', - 'Tin can', 'Mug', 'Tap', 'Harbor seal', 'Stretcher', 'Can opener', - 'Goggles', 'Human body', 'Roller skates', 'Coffee cup', - 'Cutting board', 'Blender', 'Plumbing fixture', 'Stop sign', - 'Office supplies', 'Volleyball (Ball)', 'Vase', 'Slow cooker', - 'Wardrobe', 'Coffee', 'Whisk', 'Paper towel', 'Personal care', 'Food', - 'Sun hat', 'Tree house', 'Flying disc', 'Skirt', 'Gas stove', - 'Salt and pepper shakers', 'Mechanical fan', 'Face powder', 'Fax', - 'Fruit', 'French fries', 'Nightstand', 'Barrel', 'Kite', 'Tart', - 'Treadmill', 'Fox', 'Flag', 'French horn', 'Window blind', - 'Human foot', 'Golf cart', 'Jacket', 'Egg (Food)', 'Street light', - 'Guitar', 'Pillow', 'Human leg', 'Isopod', 'Grape', 'Human ear', - 'Power plugs and sockets', 'Panda', 'Giraffe', 'Woman', 'Door handle', - 'Rhinoceros', 'Bathtub', 'Goldfish', 'Houseplant', 'Goat', - 'Baseball bat', 'Baseball glove', 'Mixing bowl', - 'Marine invertebrates', 'Kitchen utensil', 'Light switch', 'House', - 'Horse', 'Stationary bicycle', 'Hammer', 'Ceiling fan', 'Sofa bed', - 'Adhesive tape', 'Harp', 'Sandal', 'Bicycle helmet', 'Saucer', - 'Harpsichord', 'Human hair', 'Heater', 'Harmonica', 'Hamster', - 'Curtain', 'Bed', 'Kettle', 'Fireplace', 'Scale', 'Drinking straw', - 'Insect', 'Hair dryer', 'Kitchenware', 'Indoor rower', 'Invertebrate', - 'Food processor', 'Bookcase', 'Refrigerator', 'Wood-burning stove', - 'Punching bag', 'Common fig', 'Cocktail shaker', 'Jaguar (Animal)', - 'Golf ball', 'Fashion accessory', 'Alarm clock', 'Filing cabinet', - 'Artichoke', 'Table', 'Tableware', 'Kangaroo', 'Koala', 'Knife', - 'Bottle', 'Bottle opener', 'Lynx', 'Lavender (Plant)', 'Lighthouse', - 'Dumbbell', 'Human head', 
'Bowl', 'Humidifier', 'Porch', 'Lizard', - 'Billiard table', 'Mammal', 'Mouse', 'Motorcycle', - 'Musical instrument', 'Swim cap', 'Frying pan', 'Snowplow', - 'Bathroom cabinet', 'Missile', 'Bust', 'Man', 'Waffle iron', 'Milk', - 'Ring binder', 'Plate', 'Mobile phone', 'Baked goods', 'Mushroom', - 'Crutch', 'Pitcher (Container)', 'Mirror', 'Personal flotation device', - 'Table tennis racket', 'Pencil case', 'Musical keyboard', 'Scoreboard', - 'Briefcase', 'Kitchen knife', 'Nail (Construction)', 'Tennis ball', - 'Plastic bag', 'Oboe', 'Chest of drawers', 'Ostrich', 'Piano', 'Girl', - 'Plant', 'Potato', 'Hair spray', 'Sports equipment', 'Pasta', - 'Penguin', 'Pumpkin', 'Pear', 'Infant bed', 'Polar bear', 'Mixer', - 'Cupboard', 'Jacuzzi', 'Pizza', 'Digital clock', 'Pig', 'Reptile', - 'Rifle', 'Lipstick', 'Skateboard', 'Raven', 'High heels', 'Red panda', - 'Rose', 'Rabbit', 'Sculpture', 'Saxophone', 'Shotgun', 'Seafood', - 'Submarine sandwich', 'Snowboard', 'Sword', 'Picture frame', 'Sushi', - 'Loveseat', 'Ski', 'Squirrel', 'Tripod', 'Stethoscope', 'Submarine', - 'Scorpion', 'Segway', 'Training bench', 'Snake', 'Coffee table', - 'Skyscraper', 'Sheep', 'Television', 'Trombone', 'Tea', 'Tank', 'Taco', - 'Telephone', 'Torch', 'Tiger', 'Strawberry', 'Trumpet', 'Tree', - 'Tomato', 'Train', 'Tool', 'Picnic basket', 'Cooking spray', - 'Trousers', 'Bowling equipment', 'Football helmet', 'Truck', - 'Measuring cup', 'Coffeemaker', 'Violin', 'Vehicle', 'Handbag', - 'Paper cutter', 'Wine', 'Weapon', 'Wheel', 'Worm', 'Wok', 'Whale', - 'Zebra', 'Auto part', 'Jug', 'Pizza cutter', 'Cream', 'Monkey', 'Lion', - 'Bread', 'Platter', 'Chicken', 'Eagle', 'Helicopter', 'Owl', 'Duck', - 'Turtle', 'Hippopotamus', 'Crocodile', 'Toilet', 'Toilet paper', - 'Squid', 'Clothing', 'Footwear', 'Lemon', 'Spider', 'Deer', 'Frog', - 'Banana', 'Rocket', 'Wine glass', 'Countertop', 'Tablet computer', - 'Waste container', 'Swimming pool', 'Dog', 'Book', 'Elephant', 'Shark', - 'Candle', 'Leopard', 'Axe', 'Hand dryer', 'Soap dispenser', - 'Porcupine', 'Flower', 'Canary', 'Cheetah', 'Palm tree', 'Hamburger', - 'Maple', 'Building', 'Fish', 'Lobster', 'Garden Asparagus', - 'Furniture', 'Hedgehog', 'Airplane', 'Spoon', 'Otter', 'Bull', - 'Oyster', 'Horizontal bar', 'Convenience store', 'Bomb', 'Bench', - 'Ice cream', 'Caterpillar', 'Butterfly', 'Parachute', 'Orange', - 'Antelope', 'Beaker', 'Moths and butterflies', 'Window', 'Closet', - 'Castle', 'Jellyfish', 'Goose', 'Mule', 'Swan', 'Peach', 'Coconut', - 'Seat belt', 'Raccoon', 'Chisel', 'Fork', 'Lamp', 'Camera', - 'Squash (Plant)', 'Racket', 'Human face', 'Human arm', 'Vegetable', - 'Diaper', 'Unicycle', 'Falcon', 'Chime', 'Snail', 'Shellfish', - 'Cabbage', 'Carrot', 'Mango', 'Jeans', 'Flowerpot', 'Pineapple', - 'Drawer', 'Stool', 'Envelope', 'Cake', 'Dragonfly', 'Common sunflower', - 'Microwave oven', 'Honeycomb', 'Marine mammal', 'Sea lion', 'Ladybug', - 'Shelf', 'Watch', 'Candy', 'Salad', 'Parrot', 'Handgun', 'Sparrow', - 'Van', 'Grinder', 'Spice rack', 'Light bulb', 'Corded phone', - 'Sports uniform', 'Tennis racket', 'Wall clock', 'Serving tray', - 'Kitchen & dining room table', 'Dog bed', 'Cake stand', - 'Cat furniture', 'Bathroom accessory', 'Facial tissue holder', - 'Pressure cooker', 'Kitchen appliance', 'Tire', 'Ruler', - 'Luggage and bags', 'Microphone', 'Broccoli', 'Umbrella', 'Pastry', - 'Grapefruit', 'Band-aid', 'Animal', 'Bell pepper', 'Turkey', 'Lily', - 'Pomegranate', 'Doughnut', 'Glasses', 'Human nose', 'Pen', 'Ant', - 'Car', 'Aircraft', 'Human hand', 'Skunk', 'Teddy 
bear', 'Watermelon', - 'Cantaloupe', 'Dishwasher', 'Flute', 'Balance beam', 'Sandwich', - 'Shrimp', 'Sewing machine', 'Binoculars', 'Rays and skates', 'Ipod', - 'Accordion', 'Willow', 'Crab', 'Crown', 'Seahorse', 'Perfume', - 'Alpaca', 'Taxi', 'Canoe', 'Remote control', 'Wheelchair', - 'Rugby ball', 'Armadillo', 'Maracas', 'Helmet' - ] - - -dataset_aliases = { - 'voc': ['voc', 'pascal_voc', 'voc07', 'voc12'], - 'imagenet_det': ['det', 'imagenet_det', 'ilsvrc_det'], - 'imagenet_vid': ['vid', 'imagenet_vid', 'ilsvrc_vid'], - 'coco': ['coco', 'mscoco', 'ms_coco'], - 'wider_face': ['WIDERFaceDataset', 'wider_face', 'WIDERFace'], - 'cityscapes': ['cityscapes'], - 'oid_challenge': ['oid_challenge', 'openimages_challenge'], - 'oid_v6': ['oid_v6', 'openimages_v6'] -} - - -def get_classes(dataset): - """Get class names of a dataset.""" - alias2name = {} - for name, aliases in dataset_aliases.items(): - for alias in aliases: - alias2name[alias] = name - - if mmcv.is_str(dataset): - if dataset in alias2name: - labels = eval(alias2name[dataset] + '_classes()') - else: - raise ValueError(f'Unrecognized dataset: {dataset}') - else: - raise TypeError(f'dataset must a str, but got {type(dataset)}') - return labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/eval_hooks.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/eval_hooks.py deleted file mode 100644 index 7c1fbe96..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/eval_hooks.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import bisect -import os.path as osp - -import mmcv -import torch.distributed as dist -from mmcv.runner import DistEvalHook as BaseDistEvalHook -from mmcv.runner import EvalHook as BaseEvalHook -from torch.nn.modules.batchnorm import _BatchNorm - - -def _calc_dynamic_intervals(start_interval, dynamic_interval_list): - assert mmcv.is_list_of(dynamic_interval_list, tuple) - - dynamic_milestones = [0] - dynamic_milestones.extend( - [dynamic_interval[0] for dynamic_interval in dynamic_interval_list]) - dynamic_intervals = [start_interval] - dynamic_intervals.extend( - [dynamic_interval[1] for dynamic_interval in dynamic_interval_list]) - return dynamic_milestones, dynamic_intervals - - -class EvalHook(BaseEvalHook): - - def __init__(self, *args, dynamic_intervals=None, **kwargs): - super(EvalHook, self).__init__(*args, **kwargs) - - self.use_dynamic_intervals = dynamic_intervals is not None - if self.use_dynamic_intervals: - self.dynamic_milestones, self.dynamic_intervals = \ - _calc_dynamic_intervals(self.interval, dynamic_intervals) - - def _decide_interval(self, runner): - if self.use_dynamic_intervals: - progress = runner.epoch if self.by_epoch else runner.iter - step = bisect.bisect(self.dynamic_milestones, (progress + 1)) - # Dynamically modify the evaluation interval - self.interval = self.dynamic_intervals[step - 1] - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training by epoch.""" - self._decide_interval(runner) - super().before_train_epoch(runner) - - def before_train_iter(self, runner): - self._decide_interval(runner) - super().before_train_iter(runner) - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - if not self._should_evaluate(runner): - return - - from mmdet.apis import single_gpu_test - results = single_gpu_test(runner.model, self.dataloader, show=False) - runner.log_buffer.output['eval_iter_num'] = 
len(self.dataloader) - key_score = self.evaluate(runner, results) - # the key_score may be `None` so it needs to skip the action to save - # the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) - - -# Note: Considering that MMCV's EvalHook updated its interface in V1.3.16, -# in order to avoid strong version dependency, we did not directly -# inherit EvalHook but BaseDistEvalHook. -class DistEvalHook(BaseDistEvalHook): - - def __init__(self, *args, dynamic_intervals=None, **kwargs): - super(DistEvalHook, self).__init__(*args, **kwargs) - - self.use_dynamic_intervals = dynamic_intervals is not None - if self.use_dynamic_intervals: - self.dynamic_milestones, self.dynamic_intervals = \ - _calc_dynamic_intervals(self.interval, dynamic_intervals) - - def _decide_interval(self, runner): - if self.use_dynamic_intervals: - progress = runner.epoch if self.by_epoch else runner.iter - step = bisect.bisect(self.dynamic_milestones, (progress + 1)) - # Dynamically modify the evaluation interval - self.interval = self.dynamic_intervals[step - 1] - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training by epoch.""" - self._decide_interval(runner) - super().before_train_epoch(runner) - - def before_train_iter(self, runner): - self._decide_interval(runner) - super().before_train_iter(runner) - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - # Synchronization of BatchNorm's buffer (running_mean - # and running_var) is not supported in the DDP of pytorch, - # which may cause the inconsistent performance of models in - # different ranks, so we broadcast BatchNorm's buffers - # of rank 0 to other ranks to avoid this. - if self.broadcast_bn_buffer: - model = runner.model - for name, module in model.named_modules(): - if isinstance(module, - _BatchNorm) and module.track_running_stats: - dist.broadcast(module.running_var, 0) - dist.broadcast(module.running_mean, 0) - - if not self._should_evaluate(runner): - return - - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - - from mmdet.apis import multi_gpu_test - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect) - if runner.rank == 0: - print('\n') - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - - # the key_score may be `None` so it needs to skip - # the action to save the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/mean_ap.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/mean_ap.py deleted file mode 100644 index fc1274ae..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/mean_ap.py +++ /dev/null @@ -1,753 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from multiprocessing import Pool - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .bbox_overlaps import bbox_overlaps -from .class_names import get_classes - - -def average_precision(recalls, precisions, mode='area'): - """Calculate average precision (for single or multiple scales). 
- - Args: - recalls (ndarray): shape (num_scales, num_dets) or (num_dets, ) - precisions (ndarray): shape (num_scales, num_dets) or (num_dets, ) - mode (str): 'area' or '11points', 'area' means calculating the area - under precision-recall curve, '11points' means calculating - the average precision of recalls at [0, 0.1, ..., 1] - - Returns: - float or ndarray: calculated average precision - """ - no_scale = False - if recalls.ndim == 1: - no_scale = True - recalls = recalls[np.newaxis, :] - precisions = precisions[np.newaxis, :] - assert recalls.shape == precisions.shape and recalls.ndim == 2 - num_scales = recalls.shape[0] - ap = np.zeros(num_scales, dtype=np.float32) - if mode == 'area': - zeros = np.zeros((num_scales, 1), dtype=recalls.dtype) - ones = np.ones((num_scales, 1), dtype=recalls.dtype) - mrec = np.hstack((zeros, recalls, ones)) - mpre = np.hstack((zeros, precisions, zeros)) - for i in range(mpre.shape[1] - 1, 0, -1): - mpre[:, i - 1] = np.maximum(mpre[:, i - 1], mpre[:, i]) - for i in range(num_scales): - ind = np.where(mrec[i, 1:] != mrec[i, :-1])[0] - ap[i] = np.sum( - (mrec[i, ind + 1] - mrec[i, ind]) * mpre[i, ind + 1]) - elif mode == '11points': - for i in range(num_scales): - for thr in np.arange(0, 1 + 1e-3, 0.1): - precs = precisions[i, recalls[i, :] >= thr] - prec = precs.max() if precs.size > 0 else 0 - ap[i] += prec - ap /= 11 - else: - raise ValueError( - 'Unrecognized mode, only "area" and "11points" are supported') - if no_scale: - ap = ap[0] - return ap - - -def tpfp_imagenet(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - default_iou_thr=0.5, - area_ranges=None, - use_legacy_coordinate=False): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - default_iou_thr (float): IoU threshold to be considered as matched for - medium and large bboxes (small ones have special rules). - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be evaluated, - in the format [(min1, max1), (min2, max2), ...]. Default: None. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - - Returns: - tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of - each array is (num_scales, m). - """ - - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. - - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp - # of a certain scale. - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] 
= 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp - ious = bbox_overlaps( - det_bboxes, gt_bboxes - 1, use_legacy_coordinate=use_legacy_coordinate) - gt_w = gt_bboxes[:, 2] - gt_bboxes[:, 0] + extra_length - gt_h = gt_bboxes[:, 3] - gt_bboxes[:, 1] + extra_length - iou_thrs = np.minimum((gt_w * gt_h) / ((gt_w + 10.0) * (gt_h + 10.0)), - default_iou_thr) - # sort all detections by scores in descending order - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = gt_w * gt_h - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - max_iou = -1 - matched_gt = -1 - # find best overlapped available gt - for j in range(num_gts): - # different from PASCAL VOC: allow finding other gts if the - # best overlapped ones are already matched by other det bboxes - if gt_covered[j]: - continue - elif ious[i, j] >= iou_thrs[j] and ious[i, j] > max_iou: - max_iou = ious[i, j] - matched_gt = j - # there are 4 cases for a det bbox: - # 1. it matches a gt, tp = 1, fp = 0 - # 2. it matches an ignored gt, tp = 0, fp = 0 - # 3. it matches no gt and within area range, tp = 0, fp = 1 - # 4. it matches no gt but is beyond area range, tp = 0, fp = 0 - if matched_gt >= 0: - gt_covered[matched_gt] = 1 - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - tp[k, i] = 1 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0] + extra_length) * ( - bbox[3] - bbox[1] + extra_length) - if area >= min_area and area < max_area: - fp[k, i] = 1 - return tp, fp - - -def tpfp_default(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - iou_thr=0.5, - area_ranges=None, - use_legacy_coordinate=False): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be - evaluated, in the format [(min1, max1), (min2, max2), ...]. - Default: None. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - - Returns: - tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of - each array is (num_scales, m). - """ - - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. 
- - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp of - # a certain scale - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - - # if there is no gt bboxes in this image, then all det bboxes - # within area range are false positives - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] = 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp - - ious = bbox_overlaps( - det_bboxes, gt_bboxes, use_legacy_coordinate=use_legacy_coordinate) - # for each det, the max iou with all gts - ious_max = ious.max(axis=1) - # for each det, which gt overlaps most with it - ious_argmax = ious.argmax(axis=1) - # sort all dets in descending order by scores - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0] + extra_length) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1] + extra_length) - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - if ious_max[i] >= iou_thr: - matched_gt = ious_argmax[i] - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - if not gt_covered[matched_gt]: - gt_covered[matched_gt] = True - tp[k, i] = 1 - else: - fp[k, i] = 1 - # otherwise ignore this detected bbox, tp = 0, fp = 0 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0] + extra_length) * ( - bbox[3] - bbox[1] + extra_length) - if area >= min_area and area < max_area: - fp[k, i] = 1 - return tp, fp - - -def tpfp_openimages(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - iou_thr=0.5, - area_ranges=None, - use_legacy_coordinate=False, - gt_bboxes_group_of=None, - use_group_of=True, - ioa_thr=0.5): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be - evaluated, in the format [(min1, max1), (min2, max2), ...]. - Default: None. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - gt_bboxes_group_of (ndarray): GT group_of of this image, of shape - (k, 1). 
Default: None - use_group_of (bool): Whether to use group of when calculate TP and FP, - which only used in OpenImages evaluation. Default: True. - ioa_thr (float | None): IoA threshold to be considered as matched, - which only used in OpenImages evaluation. Default: 0.5. - - Returns: - tuple[np.ndarray]: Returns a tuple (tp, fp, det_bboxes), where - (tp, fp) whose elements are 0 and 1. The shape of each array is - (num_scales, m). (det_bboxes) whose will filter those are not - matched by group of gts when processing Open Images evaluation. - The shape is (num_scales, m). - """ - - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. - - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp of - # a certain scale - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - - # if there is no gt bboxes in this image, then all det bboxes - # within area range are false positives - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] = 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp, det_bboxes - - if gt_bboxes_group_of is not None and use_group_of: - # if handle group-of boxes, divided gt boxes into two parts: - # non-group-of and group-of.Then calculate ious and ioas through - # non-group-of group-of gts respectively. This only used in - # OpenImages evaluation. 
- assert gt_bboxes_group_of.shape[0] == gt_bboxes.shape[0] - non_group_gt_bboxes = gt_bboxes[~gt_bboxes_group_of] - group_gt_bboxes = gt_bboxes[gt_bboxes_group_of] - num_gts_group = group_gt_bboxes.shape[0] - ious = bbox_overlaps(det_bboxes, non_group_gt_bboxes) - ioas = bbox_overlaps(det_bboxes, group_gt_bboxes, mode='iof') - else: - # if not consider group-of boxes, only calculate ious through gt boxes - ious = bbox_overlaps( - det_bboxes, gt_bboxes, use_legacy_coordinate=use_legacy_coordinate) - ioas = None - - if ious.shape[1] > 0: - # for each det, the max iou with all gts - ious_max = ious.max(axis=1) - # for each det, which gt overlaps most with it - ious_argmax = ious.argmax(axis=1) - # sort all dets in descending order by scores - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = ( - gt_bboxes[:, 2] - gt_bboxes[:, 0] + extra_length) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1] + extra_length) - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - if ious_max[i] >= iou_thr: - matched_gt = ious_argmax[i] - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - if not gt_covered[matched_gt]: - gt_covered[matched_gt] = True - tp[k, i] = 1 - else: - fp[k, i] = 1 - # otherwise ignore this detected bbox, tp = 0, fp = 0 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0] + extra_length) * ( - bbox[3] - bbox[1] + extra_length) - if area >= min_area and area < max_area: - fp[k, i] = 1 - else: - # if there is no no-group-of gt bboxes in this image, - # then all det bboxes within area range are false positives. - # Only used in OpenImages evaluation. - if area_ranges == [(None, None)]: - fp[...] = 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - - if ioas is None or ioas.shape[1] <= 0: - return tp, fp, det_bboxes - else: - # The evaluation of group-of TP and FP are done in two stages: - # 1. All detections are first matched to non group-of boxes; true - # positives are determined. - # 2. Detections that are determined as false positives are matched - # against group-of boxes and calculated group-of TP and FP. - # Only used in OpenImages evaluation. 
- det_bboxes_group = np.zeros( - (num_scales, ioas.shape[1], det_bboxes.shape[1]), dtype=float) - match_group_of = np.zeros((num_scales, num_dets), dtype=bool) - tp_group = np.zeros((num_scales, num_gts_group), dtype=np.float32) - ioas_max = ioas.max(axis=1) - # for each det, which gt overlaps most with it - ioas_argmax = ioas.argmax(axis=1) - # sort all dets in descending order by scores - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - box_is_covered = tp[k] - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1]) - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - matched_gt = ioas_argmax[i] - if not box_is_covered[i]: - if ioas_max[i] >= ioa_thr: - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - if not tp_group[k, matched_gt]: - tp_group[k, matched_gt] = 1 - match_group_of[k, i] = True - else: - match_group_of[k, i] = True - - if det_bboxes_group[k, matched_gt, -1] < \ - det_bboxes[i, -1]: - det_bboxes_group[k, matched_gt] = \ - det_bboxes[i] - - fp_group = (tp_group <= 0).astype(float) - tps = [] - fps = [] - # concatenate tp, fp, and det-boxes which not matched group of - # gt boxes and tp_group, fp_group, and det_bboxes_group which - # matched group of boxes respectively. - for i in range(num_scales): - tps.append( - np.concatenate((tp[i][~match_group_of[i]], tp_group[i]))) - fps.append( - np.concatenate((fp[i][~match_group_of[i]], fp_group[i]))) - det_bboxes = np.concatenate( - (det_bboxes[~match_group_of[i]], det_bboxes_group[i])) - - tp = np.vstack(tps) - fp = np.vstack(fps) - return tp, fp, det_bboxes - - -def get_cls_results(det_results, annotations, class_id): - """Get det results and gt information of a certain class. - - Args: - det_results (list[list]): Same as `eval_map()`. - annotations (list[dict]): Same as `eval_map()`. - class_id (int): ID of a specific class. - - Returns: - tuple[list[np.ndarray]]: detected bboxes, gt bboxes, ignored gt bboxes - """ - cls_dets = [img_res[class_id] for img_res in det_results] - cls_gts = [] - cls_gts_ignore = [] - for ann in annotations: - gt_inds = ann['labels'] == class_id - cls_gts.append(ann['bboxes'][gt_inds, :]) - - if ann.get('labels_ignore', None) is not None: - ignore_inds = ann['labels_ignore'] == class_id - cls_gts_ignore.append(ann['bboxes_ignore'][ignore_inds, :]) - else: - cls_gts_ignore.append(np.empty((0, 4), dtype=np.float32)) - - return cls_dets, cls_gts, cls_gts_ignore - - -def get_cls_group_ofs(annotations, class_id): - """Get `gt_group_of` of a certain class, which is used in Open Images. - - Args: - annotations (list[dict]): Same as `eval_map()`. - class_id (int): ID of a specific class. - - Returns: - list[np.ndarray]: `gt_group_of` of a certain class. - """ - gt_group_ofs = [] - for ann in annotations: - gt_inds = ann['labels'] == class_id - if ann.get('gt_is_group_ofs', None) is not None: - gt_group_ofs.append(ann['gt_is_group_ofs'][gt_inds]) - else: - gt_group_ofs.append(np.empty((0, 1), dtype=np.bool)) - - return gt_group_ofs - - -def eval_map(det_results, - annotations, - scale_ranges=None, - iou_thr=0.5, - ioa_thr=None, - dataset=None, - logger=None, - tpfp_fn=None, - nproc=4, - use_legacy_coordinate=False, - use_group_of=False): - """Evaluate mAP of a dataset. 
- - Args: - det_results (list[list]): [[cls1_det, cls2_det, ...], ...]. - The outer list indicates images, and the inner list indicates - per-class detected bboxes. - annotations (list[dict]): Ground truth annotations where each item of - the list indicates an image. Keys of annotations are: - - - `bboxes`: numpy array of shape (n, 4) - - `labels`: numpy array of shape (n, ) - - `bboxes_ignore` (optional): numpy array of shape (k, 4) - - `labels_ignore` (optional): numpy array of shape (k, ) - scale_ranges (list[tuple] | None): Range of scales to be evaluated, - in the format [(min1, max1), (min2, max2), ...]. A range of - (32, 64) means the area range between (32**2, 64**2). - Default: None. - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - ioa_thr (float | None): IoA threshold to be considered as matched, - which only used in OpenImages evaluation. Default: None. - dataset (list[str] | str | None): Dataset name or dataset classes, - there are minor differences in metrics for different datasets, e.g. - "voc07", "imagenet_det", etc. Default: None. - logger (logging.Logger | str | None): The way to print the mAP - summary. See `mmcv.utils.print_log()` for details. Default: None. - tpfp_fn (callable | None): The function used to determine true/ - false positives. If None, :func:`tpfp_default` is used as default - unless dataset is 'det' or 'vid' (:func:`tpfp_imagenet` in this - case). If it is given as a function, then this function is used - to evaluate tp & fp. Default None. - nproc (int): Processes used for computing TP and FP. - Default: 4. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - use_group_of (bool): Whether to use group of when calculate TP and FP, - which only used in OpenImages evaluation. Default: False. - - Returns: - tuple: (mAP, [dict, dict, ...]) - """ - assert len(det_results) == len(annotations) - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. 
- - num_imgs = len(det_results) - num_scales = len(scale_ranges) if scale_ranges is not None else 1 - num_classes = len(det_results[0]) # positive class num - area_ranges = ([(rg[0]**2, rg[1]**2) for rg in scale_ranges] - if scale_ranges is not None else None) - - pool = Pool(nproc) - eval_results = [] - for i in range(num_classes): - # get gt and det bboxes of this class - cls_dets, cls_gts, cls_gts_ignore = get_cls_results( - det_results, annotations, i) - # choose proper function according to datasets to compute tp and fp - if tpfp_fn is None: - if dataset in ['det', 'vid']: - tpfp_fn = tpfp_imagenet - elif dataset in ['oid_challenge', 'oid_v6'] \ - or use_group_of is True: - tpfp_fn = tpfp_openimages - else: - tpfp_fn = tpfp_default - if not callable(tpfp_fn): - raise ValueError( - f'tpfp_fn has to be a function or None, but got {tpfp_fn}') - args = [] - if use_group_of: - # used in Open Images Dataset evaluation - gt_group_ofs = get_cls_group_ofs(annotations, i) - args.append(gt_group_ofs) - args.append([use_group_of for _ in range(num_imgs)]) - if ioa_thr is not None: - args.append([ioa_thr for _ in range(num_imgs)]) - # compute tp and fp for each image with multiple processes - tpfp = pool.starmap( - tpfp_fn, - zip(cls_dets, cls_gts, cls_gts_ignore, - [iou_thr for _ in range(num_imgs)], - [area_ranges for _ in range(num_imgs)], - [use_legacy_coordinate for _ in range(num_imgs)], *args)) - if use_group_of: - tp, fp, cls_dets = tuple(zip(*tpfp)) - else: - tp, fp = tuple(zip(*tpfp)) - # calculate gt number of each scale - # ignored gts or gts beyond the specific scale are not counted - num_gts = np.zeros(num_scales, dtype=int) - for j, bbox in enumerate(cls_gts): - if area_ranges is None: - num_gts[0] += bbox.shape[0] - else: - gt_areas = (bbox[:, 2] - bbox[:, 0] + extra_length) * ( - bbox[:, 3] - bbox[:, 1] + extra_length) - for k, (min_area, max_area) in enumerate(area_ranges): - num_gts[k] += np.sum((gt_areas >= min_area) - & (gt_areas < max_area)) - # sort all det bboxes by score, also sort tp and fp - cls_dets = np.vstack(cls_dets) - num_dets = cls_dets.shape[0] - sort_inds = np.argsort(-cls_dets[:, -1]) - tp = np.hstack(tp)[:, sort_inds] - fp = np.hstack(fp)[:, sort_inds] - # calculate recall and precision with tp and fp - tp = np.cumsum(tp, axis=1) - fp = np.cumsum(fp, axis=1) - eps = np.finfo(np.float32).eps - recalls = tp / np.maximum(num_gts[:, np.newaxis], eps) - precisions = tp / np.maximum((tp + fp), eps) - # calculate AP - if scale_ranges is None: - recalls = recalls[0, :] - precisions = precisions[0, :] - num_gts = num_gts.item() - mode = 'area' if dataset != 'voc07' else '11points' - ap = average_precision(recalls, precisions, mode) - eval_results.append({ - 'num_gts': num_gts, - 'num_dets': num_dets, - 'recall': recalls, - 'precision': precisions, - 'ap': ap - }) - pool.close() - if scale_ranges is not None: - # shape (num_classes, num_scales) - all_ap = np.vstack([cls_result['ap'] for cls_result in eval_results]) - all_num_gts = np.vstack( - [cls_result['num_gts'] for cls_result in eval_results]) - mean_ap = [] - for i in range(num_scales): - if np.any(all_num_gts[:, i] > 0): - mean_ap.append(all_ap[all_num_gts[:, i] > 0, i].mean()) - else: - mean_ap.append(0.0) - else: - aps = [] - for cls_result in eval_results: - if cls_result['num_gts'] > 0: - aps.append(cls_result['ap']) - mean_ap = np.array(aps).mean().item() if aps else 0.0 - - print_map_summary( - mean_ap, eval_results, dataset, area_ranges, logger=logger) - - return mean_ap, eval_results - - -def 
print_map_summary(mean_ap, - results, - dataset=None, - scale_ranges=None, - logger=None): - """Print mAP and results of each class. - - A table will be printed to show the gts/dets/recall/AP of each class and - the mAP. - - Args: - mean_ap (float): Calculated from `eval_map()`. - results (list[dict]): Calculated from `eval_map()`. - dataset (list[str] | str | None): Dataset name or dataset classes. - scale_ranges (list[tuple] | None): Range of scales to be evaluated. - logger (logging.Logger | str | None): The way to print the mAP - summary. See `mmcv.utils.print_log()` for details. Default: None. - """ - - if logger == 'silent': - return - - if isinstance(results[0]['ap'], np.ndarray): - num_scales = len(results[0]['ap']) - else: - num_scales = 1 - - if scale_ranges is not None: - assert len(scale_ranges) == num_scales - - num_classes = len(results) - - recalls = np.zeros((num_scales, num_classes), dtype=np.float32) - aps = np.zeros((num_scales, num_classes), dtype=np.float32) - num_gts = np.zeros((num_scales, num_classes), dtype=int) - for i, cls_result in enumerate(results): - if cls_result['recall'].size > 0: - recalls[:, i] = np.array(cls_result['recall'], ndmin=2)[:, -1] - aps[:, i] = cls_result['ap'] - num_gts[:, i] = cls_result['num_gts'] - - if dataset is None: - label_names = [str(i) for i in range(num_classes)] - elif mmcv.is_str(dataset): - label_names = get_classes(dataset) - else: - label_names = dataset - - if not isinstance(mean_ap, list): - mean_ap = [mean_ap] - - header = ['class', 'gts', 'dets', 'recall', 'ap'] - for i in range(num_scales): - if scale_ranges is not None: - print_log(f'Scale range {scale_ranges[i]}', logger=logger) - table_data = [header] - for j in range(num_classes): - row_data = [ - label_names[j], num_gts[i, j], results[j]['num_dets'], - f'{recalls[i, j]:.3f}', f'{aps[i, j]:.3f}' - ] - table_data.append(row_data) - table_data.append(['mAP', '', '', '', f'{mean_ap[i]:.3f}']) - table = AsciiTable(table_data) - table.inner_footing_row_border = True - print_log('\n' + table.table, logger=logger) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/panoptic_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/panoptic_utils.py deleted file mode 100644 index 10c9ad93..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/panoptic_utils.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# A custom value to distinguish instance ID and category ID; need to -# be greater than the number of categories. -# For a pixel in the panoptic result map: -# pan_id = ins_id * INSTANCE_OFFSET + cat_id -INSTANCE_OFFSET = 1000 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/recall.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/recall.py deleted file mode 100644 index 82b3c909..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/evaluation/recall.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections.abc import Sequence - -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .bbox_overlaps import bbox_overlaps - - -def _recalls(all_ious, proposal_nums, thrs): - - img_num = all_ious.shape[0] - total_gt_num = sum([ious.shape[0] for ious in all_ious]) - - _ious = np.zeros((proposal_nums.size, total_gt_num), dtype=np.float32) - for k, proposal_num in enumerate(proposal_nums): - tmp_ious = np.zeros(0) - for i in range(img_num): - ious = all_ious[i][:, :proposal_num].copy() - gt_ious = np.zeros((ious.shape[0])) - if ious.size == 0: - tmp_ious = np.hstack((tmp_ious, gt_ious)) - continue - for j in range(ious.shape[0]): - gt_max_overlaps = ious.argmax(axis=1) - max_ious = ious[np.arange(0, ious.shape[0]), gt_max_overlaps] - gt_idx = max_ious.argmax() - gt_ious[j] = max_ious[gt_idx] - box_idx = gt_max_overlaps[gt_idx] - ious[gt_idx, :] = -1 - ious[:, box_idx] = -1 - tmp_ious = np.hstack((tmp_ious, gt_ious)) - _ious[k, :] = tmp_ious - - _ious = np.fliplr(np.sort(_ious, axis=1)) - recalls = np.zeros((proposal_nums.size, thrs.size)) - for i, thr in enumerate(thrs): - recalls[:, i] = (_ious >= thr).sum(axis=1) / float(total_gt_num) - - return recalls - - -def set_recall_param(proposal_nums, iou_thrs): - """Check proposal_nums and iou_thrs and set correct format.""" - if isinstance(proposal_nums, Sequence): - _proposal_nums = np.array(proposal_nums) - elif isinstance(proposal_nums, int): - _proposal_nums = np.array([proposal_nums]) - else: - _proposal_nums = proposal_nums - - if iou_thrs is None: - _iou_thrs = np.array([0.5]) - elif isinstance(iou_thrs, Sequence): - _iou_thrs = np.array(iou_thrs) - elif isinstance(iou_thrs, float): - _iou_thrs = np.array([iou_thrs]) - else: - _iou_thrs = iou_thrs - - return _proposal_nums, _iou_thrs - - -def eval_recalls(gts, - proposals, - proposal_nums=None, - iou_thrs=0.5, - logger=None, - use_legacy_coordinate=False): - """Calculate recalls. - - Args: - gts (list[ndarray]): a list of arrays of shape (n, 4) - proposals (list[ndarray]): a list of arrays of shape (k, 4) or (k, 5) - proposal_nums (int | Sequence[int]): Top N proposals to be evaluated. - iou_thrs (float | Sequence[float]): IoU thresholds. Default: 0.5. - logger (logging.Logger | str | None): The way to print the recall - summary. See `mmcv.utils.print_log()` for details. Default: None. - use_legacy_coordinate (bool): Whether use coordinate system - in mmdet v1.x. "1" was added to both height and width - which means w, h should be - computed as 'x2 - x1 + 1` and 'y2 - y1 + 1'. Default: False. 
- - - Returns: - ndarray: recalls of different ious and proposal nums - """ - - img_num = len(gts) - assert img_num == len(proposals) - proposal_nums, iou_thrs = set_recall_param(proposal_nums, iou_thrs) - all_ious = [] - for i in range(img_num): - if proposals[i].ndim == 2 and proposals[i].shape[1] == 5: - scores = proposals[i][:, 4] - sort_idx = np.argsort(scores)[::-1] - img_proposal = proposals[i][sort_idx, :] - else: - img_proposal = proposals[i] - prop_num = min(img_proposal.shape[0], proposal_nums[-1]) - if gts[i] is None or gts[i].shape[0] == 0: - ious = np.zeros((0, img_proposal.shape[0]), dtype=np.float32) - else: - ious = bbox_overlaps( - gts[i], - img_proposal[:prop_num, :4], - use_legacy_coordinate=use_legacy_coordinate) - all_ious.append(ious) - all_ious = np.array(all_ious) - recalls = _recalls(all_ious, proposal_nums, iou_thrs) - - print_recall_summary(recalls, proposal_nums, iou_thrs, logger=logger) - return recalls - - -def print_recall_summary(recalls, - proposal_nums, - iou_thrs, - row_idxs=None, - col_idxs=None, - logger=None): - """Print recalls in a table. - - Args: - recalls (ndarray): calculated from `bbox_recalls` - proposal_nums (ndarray or list): top N proposals - iou_thrs (ndarray or list): iou thresholds - row_idxs (ndarray): which rows(proposal nums) to print - col_idxs (ndarray): which cols(iou thresholds) to print - logger (logging.Logger | str | None): The way to print the recall - summary. See `mmcv.utils.print_log()` for details. Default: None. - """ - proposal_nums = np.array(proposal_nums, dtype=np.int32) - iou_thrs = np.array(iou_thrs) - if row_idxs is None: - row_idxs = np.arange(proposal_nums.size) - if col_idxs is None: - col_idxs = np.arange(iou_thrs.size) - row_header = [''] + iou_thrs[col_idxs].tolist() - table_data = [row_header] - for i, num in enumerate(proposal_nums[row_idxs]): - row = [f'{val:.3f}' for val in recalls[row_idxs[i], col_idxs].tolist()] - row.insert(0, num) - table_data.append(row) - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - -def plot_num_recall(recalls, proposal_nums): - """Plot Proposal_num-Recalls curve. - - Args: - recalls(ndarray or list): shape (k,) - proposal_nums(ndarray or list): same shape as `recalls` - """ - if isinstance(proposal_nums, np.ndarray): - _proposal_nums = proposal_nums.tolist() - else: - _proposal_nums = proposal_nums - if isinstance(recalls, np.ndarray): - _recalls = recalls.tolist() - else: - _recalls = recalls - - import matplotlib.pyplot as plt - f = plt.figure() - plt.plot([0] + _proposal_nums, [0] + _recalls) - plt.xlabel('Proposal num') - plt.ylabel('Recall') - plt.axis([0, proposal_nums.max(), 0, 1]) - f.show() - - -def plot_iou_recall(recalls, iou_thrs): - """Plot IoU-Recalls curve. 
- - Args: - recalls(ndarray or list): shape (k,) - iou_thrs(ndarray or list): same shape as `recalls` - """ - if isinstance(iou_thrs, np.ndarray): - _iou_thrs = iou_thrs.tolist() - else: - _iou_thrs = iou_thrs - if isinstance(recalls, np.ndarray): - _recalls = recalls.tolist() - else: - _recalls = recalls - - import matplotlib.pyplot as plt - f = plt.figure() - plt.plot(_iou_thrs + [1.0], _recalls + [0.]) - plt.xlabel('IoU') - plt.ylabel('Recall') - plt.axis([iou_thrs.min(), 1, 0, 1]) - f.show() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/__init__.py deleted file mode 100644 index a8179c93..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .onnx_helper import (add_dummy_nms_for_onnx, dynamic_clip_for_onnx, - get_k_for_topk) -from .pytorch2onnx import (build_model_from_cfg, - generate_inputs_and_wrap_model, - preprocess_example_input) - -__all__ = [ - 'build_model_from_cfg', 'generate_inputs_and_wrap_model', - 'preprocess_example_input', 'get_k_for_topk', 'add_dummy_nms_for_onnx', - 'dynamic_clip_for_onnx' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/model_wrappers.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/model_wrappers.py deleted file mode 100644 index 2f62bb03..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/model_wrappers.py +++ /dev/null @@ -1,183 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings - -import numpy as np -import torch - -from mmdet.core import bbox2result -from mmdet.models import BaseDetector - - -class DeployBaseDetector(BaseDetector): - """DeployBaseDetector.""" - - def __init__(self, class_names, device_id): - super(DeployBaseDetector, self).__init__() - self.CLASSES = class_names - self.device_id = device_id - - def simple_test(self, img, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def aug_test(self, imgs, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def extract_feat(self, imgs): - raise NotImplementedError('This method is not implemented.') - - def forward_train(self, imgs, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def val_step(self, data, optimizer): - raise NotImplementedError('This method is not implemented.') - - def train_step(self, data, optimizer): - raise NotImplementedError('This method is not implemented.') - - def forward_test(self, *, img, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def async_simple_test(self, img, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def forward(self, img, img_metas, return_loss=True, **kwargs): - outputs = self.forward_test(img, img_metas, **kwargs) - batch_dets, batch_labels = outputs[:2] - batch_masks = outputs[2] if len(outputs) == 3 else None - batch_size = img[0].shape[0] - img_metas = img_metas[0] - results = [] - rescale = kwargs.get('rescale', True) - for i in range(batch_size): - dets, labels = batch_dets[i], batch_labels[i] - if rescale: - scale_factor = img_metas[i]['scale_factor'] - - if isinstance(scale_factor, (list, tuple, np.ndarray)): - assert len(scale_factor) == 4 - scale_factor = 
np.array(scale_factor)[None, :] # [1,4] - dets[:, :4] /= scale_factor - - if 'border' in img_metas[i]: - # offset pixel of the top-left corners between original image - # and padded/enlarged image, 'border' is used when exporting - # CornerNet and CentripetalNet to onnx - x_off = img_metas[i]['border'][2] - y_off = img_metas[i]['border'][0] - dets[:, [0, 2]] -= x_off - dets[:, [1, 3]] -= y_off - dets[:, :4] *= (dets[:, :4] > 0).astype(dets.dtype) - - dets_results = bbox2result(dets, labels, len(self.CLASSES)) - - if batch_masks is not None: - masks = batch_masks[i] - img_h, img_w = img_metas[i]['img_shape'][:2] - ori_h, ori_w = img_metas[i]['ori_shape'][:2] - masks = masks[:, :img_h, :img_w] - if rescale: - masks = masks.astype(np.float32) - masks = torch.from_numpy(masks) - masks = torch.nn.functional.interpolate( - masks.unsqueeze(0), size=(ori_h, ori_w)) - masks = masks.squeeze(0).detach().numpy() - if masks.dtype != np.bool: - masks = masks >= 0.5 - segms_results = [[] for _ in range(len(self.CLASSES))] - for j in range(len(dets)): - segms_results[labels[j]].append(masks[j]) - results.append((dets_results, segms_results)) - else: - results.append(dets_results) - return results - - -class ONNXRuntimeDetector(DeployBaseDetector): - """Wrapper for detector's inference with ONNXRuntime.""" - - def __init__(self, onnx_file, class_names, device_id): - super(ONNXRuntimeDetector, self).__init__(class_names, device_id) - import onnxruntime as ort - - # get the custom op path - ort_custom_op_path = '' - try: - from mmcv.ops import get_onnxruntime_op_path - ort_custom_op_path = get_onnxruntime_op_path() - except (ImportError, ModuleNotFoundError): - warnings.warn('If input model has custom op from mmcv, \ - you may have to build mmcv with ONNXRuntime from source.') - session_options = ort.SessionOptions() - # register custom op for onnxruntime - if osp.exists(ort_custom_op_path): - session_options.register_custom_ops_library(ort_custom_op_path) - sess = ort.InferenceSession(onnx_file, session_options) - providers = ['CPUExecutionProvider'] - options = [{}] - is_cuda_available = ort.get_device() == 'GPU' - if is_cuda_available: - providers.insert(0, 'CUDAExecutionProvider') - options.insert(0, {'device_id': device_id}) - - sess.set_providers(providers, options) - - self.sess = sess - self.io_binding = sess.io_binding() - self.output_names = [_.name for _ in sess.get_outputs()] - self.is_cuda_available = is_cuda_available - - def forward_test(self, imgs, img_metas, **kwargs): - input_data = imgs[0] - # set io binding for inputs/outputs - device_type = 'cuda' if self.is_cuda_available else 'cpu' - if not self.is_cuda_available: - input_data = input_data.cpu() - self.io_binding.bind_input( - name='input', - device_type=device_type, - device_id=self.device_id, - element_type=np.float32, - shape=input_data.shape, - buffer_ptr=input_data.data_ptr()) - - for name in self.output_names: - self.io_binding.bind_output(name) - # run session to get outputs - self.sess.run_with_iobinding(self.io_binding) - ort_outputs = self.io_binding.copy_outputs_to_cpu() - return ort_outputs - - -class TensorRTDetector(DeployBaseDetector): - """Wrapper for detector's inference with TensorRT.""" - - def __init__(self, engine_file, class_names, device_id, output_names=None): - super(TensorRTDetector, self).__init__(class_names, device_id) - warnings.warn('`output_names` is deprecated and will be removed in ' - 'future releases.') - from mmcv.tensorrt import TRTWraper, load_tensorrt_plugin - try: - load_tensorrt_plugin() - 
except (ImportError, ModuleNotFoundError): - warnings.warn('If input model has custom op from mmcv, \ - you may have to build mmcv with TensorRT from source.') - - output_names = ['dets', 'labels'] - model = TRTWraper(engine_file, ['input'], output_names) - with_masks = False - # if TensorRT has totally 4 inputs/outputs, then - # the detector should have `mask` output. - if len(model.engine) == 4: - model.output_names = output_names + ['masks'] - with_masks = True - self.model = model - self.with_masks = with_masks - - def forward_test(self, imgs, img_metas, **kwargs): - input_data = imgs[0].contiguous() - with torch.cuda.device(self.device_id), torch.no_grad(): - outputs = self.model({'input': input_data}) - outputs = [outputs[name] for name in self.model.output_names] - outputs = [out.detach().cpu().numpy() for out in outputs] - return outputs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/onnx_helper.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/onnx_helper.py deleted file mode 100644 index 9f6b9a01..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/onnx_helper.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os - -import torch - - -def dynamic_clip_for_onnx(x1, y1, x2, y2, max_shape): - """Clip boxes dynamically for onnx. - - Since torch.clamp cannot have dynamic `min` and `max`, we scale the - boxes by 1/max_shape and clamp in the range [0, 1]. - - Args: - x1 (Tensor): The x1 for bounding boxes. - y1 (Tensor): The y1 for bounding boxes. - x2 (Tensor): The x2 for bounding boxes. - y2 (Tensor): The y2 for bounding boxes. - max_shape (Tensor or torch.Size): The (H,W) of original image. - Returns: - tuple(Tensor): The clipped x1, y1, x2, y2. - """ - assert isinstance( - max_shape, - torch.Tensor), '`max_shape` should be tensor of (h,w) for onnx' - - # scale by 1/max_shape - x1 = x1 / max_shape[1] - y1 = y1 / max_shape[0] - x2 = x2 / max_shape[1] - y2 = y2 / max_shape[0] - - # clamp [0, 1] - x1 = torch.clamp(x1, 0, 1) - y1 = torch.clamp(y1, 0, 1) - x2 = torch.clamp(x2, 0, 1) - y2 = torch.clamp(y2, 0, 1) - - # scale back - x1 = x1 * max_shape[1] - y1 = y1 * max_shape[0] - x2 = x2 * max_shape[1] - y2 = y2 * max_shape[0] - return x1, y1, x2, y2 - - -def get_k_for_topk(k, size): - """Get k of TopK for onnx exporting. - - The K of TopK in TensorRT should not be a Tensor, while in ONNX Runtime - it could be a Tensor.Due to dynamic shape feature, we have to decide - whether to do TopK and what K it should be while exporting to ONNX. - If returned K is less than zero, it means we do not have to do - TopK operation. - - Args: - k (int or Tensor): The set k value for nms from config file. - size (Tensor or torch.Size): The number of elements of \ - TopK's input tensor - Returns: - tuple: (int or Tensor): The final K for TopK. 
- """ - ret_k = -1 - if k <= 0 or size <= 0: - return ret_k - if torch.onnx.is_in_onnx_export(): - is_trt_backend = os.environ.get('ONNX_BACKEND') == 'MMCVTensorRT' - if is_trt_backend: - # TensorRT does not support dynamic K with TopK op - if 0 < k < size: - ret_k = k - else: - # Always keep topk op for dynamic input in onnx for ONNX Runtime - ret_k = torch.where(k < size, k, size) - elif k < size: - ret_k = k - else: - # ret_k is -1 - pass - return ret_k - - -def add_dummy_nms_for_onnx(boxes, - scores, - max_output_boxes_per_class=1000, - iou_threshold=0.5, - score_threshold=0.05, - pre_top_k=-1, - after_top_k=-1, - labels=None): - """Create a dummy onnx::NonMaxSuppression op while exporting to ONNX. - - This function helps exporting to onnx with batch and multiclass NMS op. - It only supports class-agnostic detection results. That is, the scores - is of shape (N, num_bboxes, num_classes) and the boxes is of shape - (N, num_boxes, 4). - - Args: - boxes (Tensor): The bounding boxes of shape [N, num_boxes, 4] - scores (Tensor): The detection scores of shape - [N, num_boxes, num_classes] - max_output_boxes_per_class (int): Maximum number of output - boxes per class of nms. Defaults to 1000. - iou_threshold (float): IOU threshold of nms. Defaults to 0.5 - score_threshold (float): score threshold of nms. - Defaults to 0.05. - pre_top_k (bool): Number of top K boxes to keep before nms. - Defaults to -1. - after_top_k (int): Number of top K boxes to keep after nms. - Defaults to -1. - labels (Tensor, optional): It not None, explicit labels would be used. - Otherwise, labels would be automatically generated using - num_classed. Defaults to None. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - max_output_boxes_per_class = torch.LongTensor([max_output_boxes_per_class]) - iou_threshold = torch.tensor([iou_threshold], dtype=torch.float32) - score_threshold = torch.tensor([score_threshold], dtype=torch.float32) - batch_size = scores.shape[0] - num_class = scores.shape[2] - - nms_pre = torch.tensor(pre_top_k, device=scores.device, dtype=torch.long) - nms_pre = get_k_for_topk(nms_pre, boxes.shape[1]) - - if nms_pre > 0: - max_scores, _ = scores.max(-1) - _, topk_inds = max_scores.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - transformed_inds = boxes.shape[1] * batch_inds + topk_inds - boxes = boxes.reshape(-1, 4)[transformed_inds, :].reshape( - batch_size, -1, 4) - scores = scores.reshape(-1, num_class)[transformed_inds, :].reshape( - batch_size, -1, num_class) - if labels is not None: - labels = labels.reshape(-1, 1)[transformed_inds].reshape( - batch_size, -1) - - scores = scores.permute(0, 2, 1) - num_box = boxes.shape[1] - # turn off tracing to create a dummy output of nms - state = torch._C._get_tracing_state() - # dummy indices of nms's output - num_fake_det = 2 - batch_inds = torch.randint(batch_size, (num_fake_det, 1)) - cls_inds = torch.randint(num_class, (num_fake_det, 1)) - box_inds = torch.randint(num_box, (num_fake_det, 1)) - indices = torch.cat([batch_inds, cls_inds, box_inds], dim=1) - output = indices - setattr(DummyONNXNMSop, 'output', output) - - # open tracing - torch._C._set_tracing_state(state) - selected_indices = DummyONNXNMSop.apply(boxes, scores, - max_output_boxes_per_class, - iou_threshold, score_threshold) - - batch_inds, cls_inds = selected_indices[:, 0], 
selected_indices[:, 1] - box_inds = selected_indices[:, 2] - if labels is None: - labels = torch.arange(num_class, dtype=torch.long).to(scores.device) - labels = labels.view(1, num_class, 1).expand_as(scores) - scores = scores.reshape(-1, 1) - boxes = boxes.reshape(batch_size, -1).repeat(1, num_class).reshape(-1, 4) - pos_inds = (num_class * batch_inds + cls_inds) * num_box + box_inds - mask = scores.new_zeros(scores.shape) - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - # PyTorch style code: mask[batch_inds, box_inds] += 1 - mask[pos_inds, :] += 1 - scores = scores * mask - boxes = boxes * mask - - scores = scores.reshape(batch_size, -1) - boxes = boxes.reshape(batch_size, -1, 4) - labels = labels.reshape(batch_size, -1) - - nms_after = torch.tensor( - after_top_k, device=scores.device, dtype=torch.long) - nms_after = get_k_for_topk(nms_after, num_box * num_class) - - if nms_after > 0: - _, topk_inds = scores.topk(nms_after) - batch_inds = torch.arange(batch_size).view(-1, 1).expand_as(topk_inds) - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - transformed_inds = scores.shape[1] * batch_inds + topk_inds - scores = scores.reshape(-1, 1)[transformed_inds, :].reshape( - batch_size, -1) - boxes = boxes.reshape(-1, 4)[transformed_inds, :].reshape( - batch_size, -1, 4) - labels = labels.reshape(-1, 1)[transformed_inds, :].reshape( - batch_size, -1) - - scores = scores.unsqueeze(2) - dets = torch.cat([boxes, scores], dim=2) - return dets, labels - - -class DummyONNXNMSop(torch.autograd.Function): - """DummyONNXNMSop. - - This class is only for creating onnx::NonMaxSuppression. - """ - - @staticmethod - def forward(ctx, boxes, scores, max_output_boxes_per_class, iou_threshold, - score_threshold): - - return DummyONNXNMSop.output - - @staticmethod - def symbolic(g, boxes, scores, max_output_boxes_per_class, iou_threshold, - score_threshold): - return g.op( - 'NonMaxSuppression', - boxes, - scores, - max_output_boxes_per_class, - iou_threshold, - score_threshold, - outputs=1) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/pytorch2onnx.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/pytorch2onnx.py deleted file mode 100644 index b8261eed..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/export/pytorch2onnx.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial - -import mmcv -import numpy as np -import torch -from mmcv.runner import load_checkpoint - - -def generate_inputs_and_wrap_model(config_path, - checkpoint_path, - input_config, - cfg_options=None): - """Prepare sample input and wrap model for ONNX export. - - The ONNX export API only accept args, and all inputs should be - torch.Tensor or corresponding types (such as tuple of tensor). - So we should call this function before exporting. This function will: - - 1. generate corresponding inputs which are used to execute the model. - 2. Wrap the model's forward function. - - For example, the MMDet models' forward function has a parameter - ``return_loss:bool``. As we want to set it as False while export API - supports neither bool type or kwargs. So we have to replace the forward - method like ``model.forward = partial(model.forward, return_loss=False)``. 
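`DummyONNXNMSop` uses a common export pattern: a `torch.autograd.Function` whose `forward` only returns a pre-built placeholder so tracing can continue, while its `symbolic` method emits the real `NonMaxSuppression` node into the exported graph. A minimal sketch of the same pattern with a hypothetical op (the op name and class below are not part of mmdet):

```python
import torch


class FakeRoundOp(torch.autograd.Function):
    """Hypothetical example of the dummy-op pattern used by DummyONNXNMSop:
    forward() only needs to return something with a usable shape/dtype so
    tracing can proceed, while symbolic() emits the real ONNX node."""

    @staticmethod
    def forward(ctx, x):
        # Placeholder result used only during tracing, never for real inference.
        return torch.zeros_like(x)

    @staticmethod
    def symbolic(g, x):
        # Emit a standard ONNX operator into the exported graph.
        return g.op('Round', x)


x = torch.randn(3)
print(FakeRoundOp.apply(x))  # placeholder zeros outside of ONNX export
```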
- - Args: - config_path (str): the OpenMMLab config for the model we want to - export to ONNX - checkpoint_path (str): Path to the corresponding checkpoint - input_config (dict): the exactly data in this dict depends on the - framework. For MMSeg, we can just declare the input shape, - and generate the dummy data accordingly. However, for MMDet, - we may pass the real img path, or the NMS will return None - as there is no legal bbox. - - Returns: - tuple: (model, tensor_data) wrapped model which can be called by - ``model(*tensor_data)`` and a list of inputs which are used to - execute the model while exporting. - """ - - model = build_model_from_cfg( - config_path, checkpoint_path, cfg_options=cfg_options) - one_img, one_meta = preprocess_example_input(input_config) - tensor_data = [one_img] - model.forward = partial( - model.forward, img_metas=[[one_meta]], return_loss=False) - - # pytorch has some bug in pytorch1.3, we have to fix it - # by replacing these existing op - opset_version = 11 - # put the import within the function thus it will not cause import error - # when not using this function - try: - from mmcv.onnx.symbolic import register_extra_symbolics - except ModuleNotFoundError: - raise NotImplementedError('please update mmcv to version>=v1.0.4') - register_extra_symbolics(opset_version) - - return model, tensor_data - - -def build_model_from_cfg(config_path, checkpoint_path, cfg_options=None): - """Build a model from config and load the given checkpoint. - - Args: - config_path (str): the OpenMMLab config for the model we want to - export to ONNX - checkpoint_path (str): Path to the corresponding checkpoint - - Returns: - torch.nn.Module: the built model - """ - from mmdet.models import build_detector - - cfg = mmcv.Config.fromfile(config_path) - if cfg_options is not None: - cfg.merge_from_dict(cfg_options) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - cfg.model.pretrained = None - cfg.data.test.test_mode = True - - # build the model - cfg.model.train_cfg = None - model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg')) - checkpoint = load_checkpoint(model, checkpoint_path, map_location='cpu') - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - from mmdet.datasets import DATASETS - dataset = DATASETS.get(cfg.data.test['type']) - assert (dataset is not None) - model.CLASSES = dataset.CLASSES - model.cpu().eval() - return model - - -def preprocess_example_input(input_config): - """Prepare an example input image for ``generate_inputs_and_wrap_model``. - - Args: - input_config (dict): customized config describing the example input. - - Returns: - tuple: (one_img, one_meta), tensor of the example input image and \ - meta information for the example input image. 
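`generate_inputs_and_wrap_model` freezes keyword arguments with `functools.partial` because the ONNX export API only feeds positional tensor inputs to `forward`. A toy sketch of just the wrapping step, assuming a hypothetical detector-like module (not the real MMDet forward signature):

```python
from functools import partial

import torch


class ToyDetector(torch.nn.Module):
    def forward(self, img, img_metas=None, return_loss=True):
        # Stand-in for an MMDet-style forward that branches on return_loss.
        return img.mean() if return_loss else img * 2


model = ToyDetector().eval()
# torch.onnx.export passes only positional tensors, so freeze the kwargs:
model.forward = partial(model.forward, img_metas=[[{}]], return_loss=False)

dummy = torch.randn(1, 3, 8, 8)
print(model(dummy).shape)  # torch.Size([1, 3, 8, 8]); test-mode branch taken
# The wrapped model can now be handed to torch.onnx.export(model, (dummy,), ...).
```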
- - Examples: - >>> from mmdet.core.export import preprocess_example_input - >>> input_config = { - >>> 'input_shape': (1,3,224,224), - >>> 'input_path': 'demo/demo.jpg', - >>> 'normalize_cfg': { - >>> 'mean': (123.675, 116.28, 103.53), - >>> 'std': (58.395, 57.12, 57.375) - >>> } - >>> } - >>> one_img, one_meta = preprocess_example_input(input_config) - >>> print(one_img.shape) - torch.Size([1, 3, 224, 224]) - >>> print(one_meta) - {'img_shape': (224, 224, 3), - 'ori_shape': (224, 224, 3), - 'pad_shape': (224, 224, 3), - 'filename': '.png', - 'scale_factor': 1.0, - 'flip': False} - """ - input_path = input_config['input_path'] - input_shape = input_config['input_shape'] - one_img = mmcv.imread(input_path) - one_img = mmcv.imresize(one_img, input_shape[2:][::-1]) - show_img = one_img.copy() - if 'normalize_cfg' in input_config.keys(): - normalize_cfg = input_config['normalize_cfg'] - mean = np.array(normalize_cfg['mean'], dtype=np.float32) - std = np.array(normalize_cfg['std'], dtype=np.float32) - to_rgb = normalize_cfg.get('to_rgb', True) - one_img = mmcv.imnormalize(one_img, mean, std, to_rgb=to_rgb) - one_img = one_img.transpose(2, 0, 1) - one_img = torch.from_numpy(one_img).unsqueeze(0).float().requires_grad_( - True) - (_, C, H, W) = input_shape - one_meta = { - 'img_shape': (H, W, C), - 'ori_shape': (H, W, C), - 'pad_shape': (H, W, C), - 'filename': '.png', - 'scale_factor': np.ones(4, dtype=np.float32), - 'flip': False, - 'show_img': show_img, - 'flip_direction': None - } - - return one_img, one_meta diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/__init__.py deleted file mode 100644 index 788ab494..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .checkloss_hook import CheckInvalidLossHook -from .ema import ExpMomentumEMAHook, LinearMomentumEMAHook -from .memory_profiler_hook import MemoryProfilerHook -from .set_epoch_info_hook import SetEpochInfoHook -from .sync_norm_hook import SyncNormHook -from .sync_random_size_hook import SyncRandomSizeHook -from .yolox_lrupdater_hook import YOLOXLrUpdaterHook -from .yolox_mode_switch_hook import YOLOXModeSwitchHook - -__all__ = [ - 'SyncRandomSizeHook', 'YOLOXModeSwitchHook', 'SyncNormHook', - 'ExpMomentumEMAHook', 'LinearMomentumEMAHook', 'YOLOXLrUpdaterHook', - 'CheckInvalidLossHook', 'SetEpochInfoHook', 'MemoryProfilerHook' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/checkloss_hook.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/checkloss_hook.py deleted file mode 100644 index 754e61be..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/checkloss_hook.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner.hooks import HOOKS, Hook - - -@HOOKS.register_module() -class CheckInvalidLossHook(Hook): - """Check invalid loss hook. - - This hook will regularly check whether the loss is valid - during training. - - Args: - interval (int): Checking interval (every k iterations). - Default: 50. 
- """ - - def __init__(self, interval=50): - self.interval = interval - - def after_train_iter(self, runner): - if self.every_n_iters(runner, self.interval): - assert torch.isfinite(runner.outputs['loss']), \ - runner.logger.info('loss become infinite or NaN!') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/ema.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/ema.py deleted file mode 100644 index ff7bfbab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/ema.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from mmcv.parallel import is_module_wrapper -from mmcv.runner.hooks import HOOKS, Hook - - -class BaseEMAHook(Hook): - """Exponential Moving Average Hook. - - Use Exponential Moving Average on all parameters of model in training - process. All parameters have a ema backup, which update by the formula - as below. EMAHook takes priority over EvalHook and CheckpointHook. Note, - the original model parameters are actually saved in ema field after train. - - Args: - momentum (float): The momentum used for updating ema parameter. - Ema's parameter are updated with the formula: - `ema_param = (1-momentum) * ema_param + momentum * cur_param`. - Defaults to 0.0002. - skip_buffers (bool): Whether to skip the model buffers, such as - batchnorm running stats (running_mean, running_var), it does not - perform the ema operation. Default to False. - interval (int): Update ema parameter every interval iteration. - Defaults to 1. - resume_from (str, optional): The checkpoint path. Defaults to None. - momentum_fun (func, optional): The function to change momentum - during early iteration (also warmup) to help early training. - It uses `momentum` as a constant. Defaults to None. - """ - - def __init__(self, - momentum=0.0002, - interval=1, - skip_buffers=False, - resume_from=None, - momentum_fun=None): - assert 0 < momentum < 1 - self.momentum = momentum - self.skip_buffers = skip_buffers - self.interval = interval - self.checkpoint = resume_from - self.momentum_fun = momentum_fun - - def before_run(self, runner): - """To resume model with it's ema parameters more friendly. - - Register ema parameter as ``named_buffer`` to model. - """ - model = runner.model - if is_module_wrapper(model): - model = model.module - self.param_ema_buffer = {} - if self.skip_buffers: - self.model_parameters = dict(model.named_parameters()) - else: - self.model_parameters = model.state_dict() - for name, value in self.model_parameters.items(): - # "." 
is not allowed in module's buffer name - buffer_name = f"ema_{name.replace('.', '_')}" - self.param_ema_buffer[name] = buffer_name - model.register_buffer(buffer_name, value.data.clone()) - self.model_buffers = dict(model.named_buffers()) - if self.checkpoint is not None: - runner.resume(self.checkpoint) - - def get_momentum(self, runner): - return self.momentum_fun(runner.iter) if self.momentum_fun else \ - self.momentum - - def after_train_iter(self, runner): - """Update ema parameter every self.interval iterations.""" - if (runner.iter + 1) % self.interval != 0: - return - momentum = self.get_momentum(runner) - for name, parameter in self.model_parameters.items(): - # exclude num_tracking - if parameter.dtype.is_floating_point: - buffer_name = self.param_ema_buffer[name] - buffer_parameter = self.model_buffers[buffer_name] - buffer_parameter.mul_(1 - momentum).add_( - parameter.data, alpha=momentum) - - def after_train_epoch(self, runner): - """We load parameter values from ema backup to model before the - EvalHook.""" - self._swap_ema_parameters() - - def before_train_epoch(self, runner): - """We recover model's parameter from ema backup after last epoch's - EvalHook.""" - self._swap_ema_parameters() - - def _swap_ema_parameters(self): - """Swap the parameter of model with parameter in ema_buffer.""" - for name, value in self.model_parameters.items(): - temp = value.data.clone() - ema_buffer = self.model_buffers[self.param_ema_buffer[name]] - value.data.copy_(ema_buffer.data) - ema_buffer.data.copy_(temp) - - -@HOOKS.register_module() -class ExpMomentumEMAHook(BaseEMAHook): - """EMAHook using exponential momentum strategy. - - Args: - total_iter (int): The total number of iterations of EMA momentum. - Defaults to 2000. - """ - - def __init__(self, total_iter=2000, **kwargs): - super(ExpMomentumEMAHook, self).__init__(**kwargs) - self.momentum_fun = lambda x: (1 - self.momentum) * math.exp(-( - 1 + x) / total_iter) + self.momentum - - -@HOOKS.register_module() -class LinearMomentumEMAHook(BaseEMAHook): - """EMAHook using linear momentum strategy. - - Args: - warm_up (int): During first warm_up steps, we may use smaller decay - to update ema parameters more slowly. Defaults to 100. - """ - - def __init__(self, warm_up=100, **kwargs): - super(LinearMomentumEMAHook, self).__init__(**kwargs) - self.momentum_fun = lambda x: min(self.momentum**self.interval, - (1 + x) / (warm_up + x)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/memory_profiler_hook.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/memory_profiler_hook.py deleted file mode 100644 index e78a2838..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/memory_profiler_hook.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner.hooks import HOOKS, Hook - - -@HOOKS.register_module() -class MemoryProfilerHook(Hook): - """Memory profiler hook recording memory information: virtual memory, swap - memory and memory of current process. - - Args: - interval (int): Checking interval (every k iterations). - Default: 50. 
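The EMA hooks above keep a shadow copy of every parameter and update it with `ema = (1 - momentum) * ema + momentum * cur`; `ExpMomentumEMAHook` additionally decays the effective momentum over the first iterations. A small numeric sketch of both pieces, using the defaults quoted in the docstrings:

```python
import math

import torch

momentum = 0.0002    # BaseEMAHook default
total_iter = 2000    # ExpMomentumEMAHook default

ema_param = torch.ones(3)            # EMA backup buffer
cur_param = torch.full((3,), 2.0)    # current model parameter

# One EMA step, done in place exactly as in after_train_iter().
ema_param.mul_(1 - momentum).add_(cur_param, alpha=momentum)
print(ema_param)  # tensor([1.0002, 1.0002, 1.0002])

# Exponential momentum schedule of ExpMomentumEMAHook: the effective momentum
# starts near 1 (EMA tracks the raw weights closely) and decays to `momentum`.
mom_fun = lambda x: (1 - momentum) * math.exp(-(1 + x) / total_iter) + momentum
print(mom_fun(0), mom_fun(10000))  # ~0.9995 and ~0.0069
```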
- """ - - def __init__(self, interval=50): - try: - from psutil import swap_memory, virtual_memory - self._swap_memory = swap_memory - self._virtual_memory = virtual_memory - except ImportError: - raise ImportError('psutil is not installed, please install it by: ' - 'pip install psutil') - - try: - from memory_profiler import memory_usage - self._memory_usage = memory_usage - except ImportError: - raise ImportError( - 'memory_profiler is not installed, please install it by: ' - 'pip install memory_profiler') - - self.interval = interval - - def after_iter(self, runner): - if self.every_n_iters(runner, self.interval): - # in Byte - virtual_memory = self._virtual_memory() - swap_memory = self._swap_memory() - # in MB - process_memory = self._memory_usage()[0] - factor = 1024 * 1024 - runner.logger.info( - 'Memory information ' - 'available_memory: ' - f'{round(virtual_memory.available / factor)} MB, ' - 'used_memory: ' - f'{round(virtual_memory.used / factor)} MB, ' - f'memory_utilization: {virtual_memory.percent} %, ' - 'available_swap_memory: ' - f'{round((swap_memory.total - swap_memory.used) / factor)}' - 'MB, ' - f'used_swap_memory: {round(swap_memory.used / factor)} MB, ' - f'swap_memory_utilization: {swap_memory.percent} %, ' - 'current_process_memory: ' - f'{round(process_memory)} MB') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/set_epoch_info_hook.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/set_epoch_info_hook.py deleted file mode 100644 index c2b134ce..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/set_epoch_info_hook.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.parallel import is_module_wrapper -from mmcv.runner import HOOKS, Hook - - -@HOOKS.register_module() -class SetEpochInfoHook(Hook): - """Set runner's epoch information to the model.""" - - def before_train_epoch(self, runner): - epoch = runner.epoch - model = runner.model - if is_module_wrapper(model): - model = model.module - model.set_epoch(epoch) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/sync_norm_hook.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/sync_norm_hook.py deleted file mode 100644 index 82931cef..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/sync_norm_hook.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections import OrderedDict - -from mmcv.runner import get_dist_info -from mmcv.runner.hooks import HOOKS, Hook -from torch import nn - -from ..utils.dist_utils import all_reduce_dict - - -def get_norm_states(module): - async_norm_states = OrderedDict() - for name, child in module.named_modules(): - if isinstance(child, nn.modules.batchnorm._NormBase): - for k, v in child.state_dict().items(): - async_norm_states['.'.join([name, k])] = v - return async_norm_states - - -@HOOKS.register_module() -class SyncNormHook(Hook): - """Synchronize Norm states after training epoch, currently used in YOLOX. - - Args: - num_last_epochs (int): The number of latter epochs in the end of the - training to switch to synchronizing norm interval. Default: 15. - interval (int): Synchronizing norm interval. Default: 1. 
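`get_norm_states` walks the model and collects the state of every normalisation layer; `SyncNormHook` then averages those states across ranks with `all_reduce_dict`. A minimal single-process sketch of just the collection step on a toy model:

```python
from torch import nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())

# Same traversal as get_norm_states(): gather the state of every _NormBase layer.
norm_states = {}
for name, child in model.named_modules():
    if isinstance(child, nn.modules.batchnorm._NormBase):
        for k, v in child.state_dict().items():
            norm_states['.'.join([name, k])] = v

print(sorted(norm_states))
# ['1.bias', '1.num_batches_tracked', '1.running_mean', '1.running_var', '1.weight']
```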
- """ - - def __init__(self, num_last_epochs=15, interval=1): - self.interval = interval - self.num_last_epochs = num_last_epochs - - def before_train_epoch(self, runner): - epoch = runner.epoch - if (epoch + 1) == runner.max_epochs - self.num_last_epochs: - # Synchronize norm every epoch. - self.interval = 1 - - def after_train_epoch(self, runner): - """Synchronizing norm.""" - epoch = runner.epoch - module = runner.model - if (epoch + 1) % self.interval == 0: - _, world_size = get_dist_info() - if world_size == 1: - return - norm_states = get_norm_states(module) - if len(norm_states) == 0: - return - norm_states = all_reduce_dict(norm_states, op='mean') - module.load_state_dict(norm_states, strict=False) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/sync_random_size_hook.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/sync_random_size_hook.py deleted file mode 100644 index 6d7e96c6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/sync_random_size_hook.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import random -import warnings - -import torch -from mmcv.runner import get_dist_info -from mmcv.runner.hooks import HOOKS, Hook -from torch import distributed as dist - - -@HOOKS.register_module() -class SyncRandomSizeHook(Hook): - """Change and synchronize the random image size across ranks. - SyncRandomSizeHook is deprecated, please use Resize pipeline to achieve - similar functions. Such as `dict(type='Resize', img_scale=[(448, 448), - (832, 832)], multiscale_mode='range', keep_ratio=True)`. - - Note: Due to the multi-process dataloader, its behavior is different - from YOLOX's official implementation, the official is to change the - size every fixed iteration interval and what we achieved is a fixed - epoch interval. - - Args: - ratio_range (tuple[int]): Random ratio range. It will be multiplied - by 32, and then change the dataset output image size. - Default: (14, 26). - img_scale (tuple[int]): Size of input image. Default: (640, 640). - interval (int): The epoch interval of change image size. Default: 1. - device (torch.device | str): device for returned tensors. - Default: 'cuda'. - """ - - def __init__(self, - ratio_range=(14, 26), - img_scale=(640, 640), - interval=1, - device='cuda'): - warnings.warn('DeprecationWarning: SyncRandomSizeHook is deprecated. ' - 'Please use Resize pipeline to achieve similar ' - 'functions. Due to the multi-process dataloader, ' - 'its behavior is different from YOLOX\'s official ' - 'implementation, the official is to change the size ' - 'every fixed iteration interval and what we achieved ' - 'is a fixed epoch interval.') - self.rank, world_size = get_dist_info() - self.is_distributed = world_size > 1 - self.ratio_range = ratio_range - self.img_scale = img_scale - self.interval = interval - self.device = device - - def after_train_epoch(self, runner): - """Change the dataset output image size.""" - if self.ratio_range is not None and (runner.epoch + - 1) % self.interval == 0: - # Due to DDP and DP get the device behavior inconsistent, - # so we did not get the device from runner.model. - tensor = torch.LongTensor(2).to(self.device) - - if self.rank == 0: - size_factor = self.img_scale[1] * 1. 
/ self.img_scale[0] - size = random.randint(*self.ratio_range) - size = (int(32 * size), 32 * int(size * size_factor)) - tensor[0] = size[0] - tensor[1] = size[1] - - if self.is_distributed: - dist.barrier() - dist.broadcast(tensor, 0) - - runner.data_loader.dataset.update_dynamic_scale( - (tensor[0].item(), tensor[1].item())) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/yolox_lrupdater_hook.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/yolox_lrupdater_hook.py deleted file mode 100644 index ecb028ed..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/yolox_lrupdater_hook.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner.hooks import HOOKS -from mmcv.runner.hooks.lr_updater import (CosineAnnealingLrUpdaterHook, - annealing_cos) - - -@HOOKS.register_module() -class YOLOXLrUpdaterHook(CosineAnnealingLrUpdaterHook): - """YOLOX learning rate scheme. - - There are two main differences between YOLOXLrUpdaterHook - and CosineAnnealingLrUpdaterHook. - - 1. When the current running epoch is greater than - `max_epoch-last_epoch`, a fixed learning rate will be used - 2. The exp warmup scheme is different with LrUpdaterHook in MMCV - - Args: - num_last_epochs (int): The number of epochs with a fixed learning rate - before the end of the training. - """ - - def __init__(self, num_last_epochs, **kwargs): - self.num_last_epochs = num_last_epochs - super(YOLOXLrUpdaterHook, self).__init__(**kwargs) - - def get_warmup_lr(self, cur_iters): - - def _get_warmup_lr(cur_iters, regular_lr): - # exp warmup scheme - k = self.warmup_ratio * pow( - (cur_iters + 1) / float(self.warmup_iters), 2) - warmup_lr = [_lr * k for _lr in regular_lr] - return warmup_lr - - if isinstance(self.base_lr, dict): - lr_groups = {} - for key, base_lr in self.base_lr.items(): - lr_groups[key] = _get_warmup_lr(cur_iters, base_lr) - return lr_groups - else: - return _get_warmup_lr(cur_iters, self.base_lr) - - def get_lr(self, runner, base_lr): - last_iter = len(runner.data_loader) * self.num_last_epochs - - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - - progress += 1 - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - - if progress >= max_progress - last_iter: - # fixed learning rate - return target_lr - else: - return annealing_cos( - base_lr, target_lr, (progress - self.warmup_iters) / - (max_progress - self.warmup_iters - last_iter)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/yolox_mode_switch_hook.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/yolox_mode_switch_hook.py deleted file mode 100644 index 10834e68..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/hook/yolox_mode_switch_hook.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.parallel import is_module_wrapper -from mmcv.runner.hooks import HOOKS, Hook - - -@HOOKS.register_module() -class YOLOXModeSwitchHook(Hook): - """Switch the mode of YOLOX during training. - - This hook turns off the mosaic and mixup data augmentation and switches - to use L1 loss in bbox_head. - - Args: - num_last_epochs (int): The number of latter epochs in the end of the - training to close the data augmentation and switch to L1 loss. - Default: 15. 
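`YOLOXLrUpdaterHook` differs from the stock cosine schedule in the two ways its docstring lists: a quadratic ("exp") warmup and a fixed learning rate over the last `num_last_epochs`. A simplified, epoch-based sketch of both pieces; all numbers are illustrative and `annealing_cos` is re-implemented locally rather than imported from mmcv:

```python
import math

base_lr, warmup_ratio, warmup_iters = 0.01, 1.0, 5
min_lr_ratio, num_last_epochs, max_epochs = 0.05, 15, 300
target_lr = base_lr * min_lr_ratio

# Exp warmup: lr ramps up quadratically towards base_lr.
for it in range(warmup_iters):
    k = warmup_ratio * ((it + 1) / float(warmup_iters)) ** 2
    print(f'warmup iter {it}: lr = {base_lr * k:.5f}')


def annealing_cos(start, end, factor):
    # Cosine interpolation from start to end as factor goes 0 -> 1.
    return end + 0.5 * (start - end) * (math.cos(math.pi * factor) + 1)


# After warmup: cosine annealing, then a constant lr for the last 15 epochs.
for epoch in (0, 150, 290):
    if epoch + 1 >= max_epochs - num_last_epochs:
        lr = target_lr
    else:
        lr = annealing_cos(base_lr, target_lr,
                           (epoch + 1) / (max_epochs - num_last_epochs))
    print(f'epoch {epoch}: lr = {lr:.5f}')
```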
- skip_type_keys (list[str], optional): Sequence of type string to be - skip pipeline. Default: ('Mosaic', 'RandomAffine', 'MixUp') - """ - - def __init__(self, - num_last_epochs=15, - skip_type_keys=('Mosaic', 'RandomAffine', 'MixUp')): - self.num_last_epochs = num_last_epochs - self.skip_type_keys = skip_type_keys - self._restart_dataloader = False - - def before_train_epoch(self, runner): - """Close mosaic and mixup augmentation and switches to use L1 loss.""" - epoch = runner.epoch - train_loader = runner.data_loader - model = runner.model - if is_module_wrapper(model): - model = model.module - if (epoch + 1) == runner.max_epochs - self.num_last_epochs: - runner.logger.info('No mosaic and mixup aug now!') - # The dataset pipeline cannot be updated when persistent_workers - # is True, so we need to force the dataloader's multi-process - # restart. This is a very hacky approach. - train_loader.dataset.update_skip_type_keys(self.skip_type_keys) - if hasattr(train_loader, 'persistent_workers' - ) and train_loader.persistent_workers is True: - train_loader._DataLoader__initialized = False - train_loader._iterator = None - self._restart_dataloader = True - runner.logger.info('Add additional L1 loss now!') - model.bbox_head.use_l1 = True - else: - # Once the restart is complete, we need to restore - # the initialization flag. - if self._restart_dataloader: - train_loader._DataLoader__initialized = True diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/__init__.py deleted file mode 100644 index 644a9b1d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .mask_target import mask_target -from .structures import BaseInstanceMasks, BitmapMasks, PolygonMasks -from .utils import encode_mask_results, mask2bbox, split_combined_polys - -__all__ = [ - 'split_combined_polys', 'mask_target', 'BaseInstanceMasks', 'BitmapMasks', - 'PolygonMasks', 'encode_mask_results', 'mask2bbox' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/mask_target.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/mask_target.py deleted file mode 100644 index 273e7678..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/mask_target.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from torch.nn.modules.utils import _pair - - -def mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list, - cfg): - """Compute mask target for positive proposals in multiple images. - - Args: - pos_proposals_list (list[Tensor]): Positive proposals in multiple - images. - pos_assigned_gt_inds_list (list[Tensor]): Assigned GT indices for each - positive proposals. - gt_masks_list (list[:obj:`BaseInstanceMasks`]): Ground truth masks of - each image. - cfg (dict): Config dict that specifies the mask size. - - Returns: - list[Tensor]: Mask target of each image. 
- - Example: - >>> import mmcv - >>> import mmdet - >>> from mmdet.core.mask import BitmapMasks - >>> from mmdet.core.mask.mask_target import * - >>> H, W = 17, 18 - >>> cfg = mmcv.Config({'mask_size': (13, 14)}) - >>> rng = np.random.RandomState(0) - >>> # Positive proposals (tl_x, tl_y, br_x, br_y) for each image - >>> pos_proposals_list = [ - >>> torch.Tensor([ - >>> [ 7.2425, 5.5929, 13.9414, 14.9541], - >>> [ 7.3241, 3.6170, 16.3850, 15.3102], - >>> ]), - >>> torch.Tensor([ - >>> [ 4.8448, 6.4010, 7.0314, 9.7681], - >>> [ 5.9790, 2.6989, 7.4416, 4.8580], - >>> [ 0.0000, 0.0000, 0.1398, 9.8232], - >>> ]), - >>> ] - >>> # Corresponding class index for each proposal for each image - >>> pos_assigned_gt_inds_list = [ - >>> torch.LongTensor([7, 0]), - >>> torch.LongTensor([5, 4, 1]), - >>> ] - >>> # Ground truth mask for each true object for each image - >>> gt_masks_list = [ - >>> BitmapMasks(rng.rand(8, H, W), height=H, width=W), - >>> BitmapMasks(rng.rand(6, H, W), height=H, width=W), - >>> ] - >>> mask_targets = mask_target( - >>> pos_proposals_list, pos_assigned_gt_inds_list, - >>> gt_masks_list, cfg) - >>> assert mask_targets.shape == (5,) + cfg['mask_size'] - """ - cfg_list = [cfg for _ in range(len(pos_proposals_list))] - mask_targets = map(mask_target_single, pos_proposals_list, - pos_assigned_gt_inds_list, gt_masks_list, cfg_list) - mask_targets = list(mask_targets) - if len(mask_targets) > 0: - mask_targets = torch.cat(mask_targets) - return mask_targets - - -def mask_target_single(pos_proposals, pos_assigned_gt_inds, gt_masks, cfg): - """Compute mask target for each positive proposal in the image. - - Args: - pos_proposals (Tensor): Positive proposals. - pos_assigned_gt_inds (Tensor): Assigned GT inds of positive proposals. - gt_masks (:obj:`BaseInstanceMasks`): GT masks in the format of Bitmap - or Polygon. - cfg (dict): Config dict that indicate the mask size. - - Returns: - Tensor: Mask target of each positive proposals in the image. 
- - Example: - >>> import mmcv - >>> import mmdet - >>> from mmdet.core.mask import BitmapMasks - >>> from mmdet.core.mask.mask_target import * # NOQA - >>> H, W = 32, 32 - >>> cfg = mmcv.Config({'mask_size': (7, 11)}) - >>> rng = np.random.RandomState(0) - >>> # Masks for each ground truth box (relative to the image) - >>> gt_masks_data = rng.rand(3, H, W) - >>> gt_masks = BitmapMasks(gt_masks_data, height=H, width=W) - >>> # Predicted positive boxes in one image - >>> pos_proposals = torch.FloatTensor([ - >>> [ 16.2, 5.5, 19.9, 20.9], - >>> [ 17.3, 13.6, 19.3, 19.3], - >>> [ 14.8, 16.4, 17.0, 23.7], - >>> [ 0.0, 0.0, 16.0, 16.0], - >>> [ 4.0, 0.0, 20.0, 16.0], - >>> ]) - >>> # For each predicted proposal, its assignment to a gt mask - >>> pos_assigned_gt_inds = torch.LongTensor([0, 1, 2, 1, 1]) - >>> mask_targets = mask_target_single( - >>> pos_proposals, pos_assigned_gt_inds, gt_masks, cfg) - >>> assert mask_targets.shape == (5,) + cfg['mask_size'] - """ - device = pos_proposals.device - mask_size = _pair(cfg.mask_size) - binarize = not cfg.get('soft_mask_target', False) - num_pos = pos_proposals.size(0) - if num_pos > 0: - proposals_np = pos_proposals.cpu().numpy() - maxh, maxw = gt_masks.height, gt_masks.width - proposals_np[:, [0, 2]] = np.clip(proposals_np[:, [0, 2]], 0, maxw) - proposals_np[:, [1, 3]] = np.clip(proposals_np[:, [1, 3]], 0, maxh) - pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy() - - mask_targets = gt_masks.crop_and_resize( - proposals_np, - mask_size, - device=device, - inds=pos_assigned_gt_inds, - binarize=binarize).to_ndarray() - - mask_targets = torch.from_numpy(mask_targets).float().to(device) - else: - mask_targets = pos_proposals.new_zeros((0, ) + mask_size) - - return mask_targets diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/structures.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/structures.py deleted file mode 100644 index a9d0ebb4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/structures.py +++ /dev/null @@ -1,1102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -import cv2 -import mmcv -import numpy as np -import pycocotools.mask as maskUtils -import torch -from mmcv.ops.roi_align import roi_align - - -class BaseInstanceMasks(metaclass=ABCMeta): - """Base class for instance masks.""" - - @abstractmethod - def rescale(self, scale, interpolation='nearest'): - """Rescale masks as large as possible while keeping the aspect ratio. - For details can refer to `mmcv.imrescale`. - - Args: - scale (tuple[int]): The maximum size (h, w) of rescaled mask. - interpolation (str): Same as :func:`mmcv.imrescale`. - - Returns: - BaseInstanceMasks: The rescaled masks. - """ - - @abstractmethod - def resize(self, out_shape, interpolation='nearest'): - """Resize masks to the given out_shape. - - Args: - out_shape: Target (h, w) of resized mask. - interpolation (str): See :func:`mmcv.imresize`. - - Returns: - BaseInstanceMasks: The resized masks. - """ - - @abstractmethod - def flip(self, flip_direction='horizontal'): - """Flip masks alone the given direction. - - Args: - flip_direction (str): Either 'horizontal' or 'vertical'. - - Returns: - BaseInstanceMasks: The flipped masks. - """ - - @abstractmethod - def pad(self, out_shape, pad_val): - """Pad masks to the given size of (h, w). - - Args: - out_shape (tuple[int]): Target (h, w) of padded mask. - pad_val (int): The padded value. 
- - Returns: - BaseInstanceMasks: The padded masks. - """ - - @abstractmethod - def crop(self, bbox): - """Crop each mask by the given bbox. - - Args: - bbox (ndarray): Bbox in format [x1, y1, x2, y2], shape (4, ). - - Return: - BaseInstanceMasks: The cropped masks. - """ - - @abstractmethod - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device, - interpolation='bilinear', - binarize=True): - """Crop and resize masks by the given bboxes. - - This function is mainly used in mask targets computation. - It firstly align mask to bboxes by assigned_inds, then crop mask by the - assigned bbox and resize to the size of (mask_h, mask_w) - - Args: - bboxes (Tensor): Bboxes in format [x1, y1, x2, y2], shape (N, 4) - out_shape (tuple[int]): Target (h, w) of resized mask - inds (ndarray): Indexes to assign masks to each bbox, - shape (N,) and values should be between [0, num_masks - 1]. - device (str): Device of bboxes - interpolation (str): See `mmcv.imresize` - binarize (bool): if True fractional values are rounded to 0 or 1 - after the resize operation. if False and unsupported an error - will be raised. Defaults to True. - - Return: - BaseInstanceMasks: the cropped and resized masks. - """ - - @abstractmethod - def expand(self, expanded_h, expanded_w, top, left): - """see :class:`Expand`.""" - - @property - @abstractmethod - def areas(self): - """ndarray: areas of each instance.""" - - @abstractmethod - def to_ndarray(self): - """Convert masks to the format of ndarray. - - Return: - ndarray: Converted masks in the format of ndarray. - """ - - @abstractmethod - def to_tensor(self, dtype, device): - """Convert masks to the format of Tensor. - - Args: - dtype (str): Dtype of converted mask. - device (torch.device): Device of converted masks. - - Returns: - Tensor: Converted masks in the format of Tensor. - """ - - @abstractmethod - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Translate the masks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - fill_val (int | float): Border value. Default 0. - interpolation (str): Same as :func:`mmcv.imtranslate`. - - Returns: - Translated masks. - """ - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear the masks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - magnitude (int | float): The magnitude used for shear. - direction (str): The shear direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. Default 0. - interpolation (str): Same as in :func:`mmcv.imshear`. - - Returns: - ndarray: Sheared masks. - """ - - @abstractmethod - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """Rotate the masks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - angle (int | float): Rotation angle in degrees. Positive values - mean counter-clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the - rotation in source image. If not specified, the center of - the image will be used. - scale (int | float): Isotropic scale factor. - fill_val (int | float): Border value. Default 0 for masks. - - Returns: - Rotated masks. 
- """ - - -class BitmapMasks(BaseInstanceMasks): - """This class represents masks in the form of bitmaps. - - Args: - masks (ndarray): ndarray of masks in shape (N, H, W), where N is - the number of objects. - height (int): height of masks - width (int): width of masks - - Example: - >>> from mmdet.core.mask.structures import * # NOQA - >>> num_masks, H, W = 3, 32, 32 - >>> rng = np.random.RandomState(0) - >>> masks = (rng.rand(num_masks, H, W) > 0.1).astype(np.int) - >>> self = BitmapMasks(masks, height=H, width=W) - - >>> # demo crop_and_resize - >>> num_boxes = 5 - >>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes) - >>> out_shape = (14, 14) - >>> inds = torch.randint(0, len(self), size=(num_boxes,)) - >>> device = 'cpu' - >>> interpolation = 'bilinear' - >>> new = self.crop_and_resize( - ... bboxes, out_shape, inds, device, interpolation) - >>> assert len(new) == num_boxes - >>> assert new.height, new.width == out_shape - """ - - def __init__(self, masks, height, width): - self.height = height - self.width = width - if len(masks) == 0: - self.masks = np.empty((0, self.height, self.width), dtype=np.uint8) - else: - assert isinstance(masks, (list, np.ndarray)) - if isinstance(masks, list): - assert isinstance(masks[0], np.ndarray) - assert masks[0].ndim == 2 # (H, W) - else: - assert masks.ndim == 3 # (N, H, W) - - self.masks = np.stack(masks).reshape(-1, height, width) - assert self.masks.shape[1] == self.height - assert self.masks.shape[2] == self.width - - def __getitem__(self, index): - """Index the BitmapMask. - - Args: - index (int | ndarray): Indices in the format of integer or ndarray. - - Returns: - :obj:`BitmapMasks`: Indexed bitmap masks. - """ - masks = self.masks[index].reshape(-1, self.height, self.width) - return BitmapMasks(masks, self.height, self.width) - - def __iter__(self): - return iter(self.masks) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += f'num_masks={len(self.masks)}, ' - s += f'height={self.height}, ' - s += f'width={self.width})' - return s - - def __len__(self): - """Number of masks.""" - return len(self.masks) - - def rescale(self, scale, interpolation='nearest'): - """See :func:`BaseInstanceMasks.rescale`.""" - if len(self.masks) == 0: - new_w, new_h = mmcv.rescale_size((self.width, self.height), scale) - rescaled_masks = np.empty((0, new_h, new_w), dtype=np.uint8) - else: - rescaled_masks = np.stack([ - mmcv.imrescale(mask, scale, interpolation=interpolation) - for mask in self.masks - ]) - height, width = rescaled_masks.shape[1:] - return BitmapMasks(rescaled_masks, height, width) - - def resize(self, out_shape, interpolation='nearest'): - """See :func:`BaseInstanceMasks.resize`.""" - if len(self.masks) == 0: - resized_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - resized_masks = np.stack([ - mmcv.imresize( - mask, out_shape[::-1], interpolation=interpolation) - for mask in self.masks - ]) - return BitmapMasks(resized_masks, *out_shape) - - def flip(self, flip_direction='horizontal'): - """See :func:`BaseInstanceMasks.flip`.""" - assert flip_direction in ('horizontal', 'vertical', 'diagonal') - - if len(self.masks) == 0: - flipped_masks = self.masks - else: - flipped_masks = np.stack([ - mmcv.imflip(mask, direction=flip_direction) - for mask in self.masks - ]) - return BitmapMasks(flipped_masks, self.height, self.width) - - def pad(self, out_shape, pad_val=0): - """See :func:`BaseInstanceMasks.pad`.""" - if len(self.masks) == 0: - padded_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - padded_masks = 
np.stack([ - mmcv.impad(mask, shape=out_shape, pad_val=pad_val) - for mask in self.masks - ]) - return BitmapMasks(padded_masks, *out_shape) - - def crop(self, bbox): - """See :func:`BaseInstanceMasks.crop`.""" - assert isinstance(bbox, np.ndarray) - assert bbox.ndim == 1 - - # clip the boundary - bbox = bbox.copy() - bbox[0::2] = np.clip(bbox[0::2], 0, self.width) - bbox[1::2] = np.clip(bbox[1::2], 0, self.height) - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - - if len(self.masks) == 0: - cropped_masks = np.empty((0, h, w), dtype=np.uint8) - else: - cropped_masks = self.masks[:, y1:y1 + h, x1:x1 + w] - return BitmapMasks(cropped_masks, h, w) - - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device='cpu', - interpolation='bilinear', - binarize=True): - """See :func:`BaseInstanceMasks.crop_and_resize`.""" - if len(self.masks) == 0: - empty_masks = np.empty((0, *out_shape), dtype=np.uint8) - return BitmapMasks(empty_masks, *out_shape) - - # convert bboxes to tensor - if isinstance(bboxes, np.ndarray): - bboxes = torch.from_numpy(bboxes).to(device=device) - if isinstance(inds, np.ndarray): - inds = torch.from_numpy(inds).to(device=device) - - num_bbox = bboxes.shape[0] - fake_inds = torch.arange( - num_bbox, device=device).to(dtype=bboxes.dtype)[:, None] - rois = torch.cat([fake_inds, bboxes], dim=1) # Nx5 - rois = rois.to(device=device) - if num_bbox > 0: - gt_masks_th = torch.from_numpy(self.masks).to(device).index_select( - 0, inds).to(dtype=rois.dtype) - targets = roi_align(gt_masks_th[:, None, :, :], rois, out_shape, - 1.0, 0, 'avg', True).squeeze(1) - if binarize: - resized_masks = (targets >= 0.5).cpu().numpy() - else: - resized_masks = targets.cpu().numpy() - else: - resized_masks = [] - return BitmapMasks(resized_masks, *out_shape) - - def expand(self, expanded_h, expanded_w, top, left): - """See :func:`BaseInstanceMasks.expand`.""" - if len(self.masks) == 0: - expanded_mask = np.empty((0, expanded_h, expanded_w), - dtype=np.uint8) - else: - expanded_mask = np.zeros((len(self), expanded_h, expanded_w), - dtype=np.uint8) - expanded_mask[:, top:top + self.height, - left:left + self.width] = self.masks - return BitmapMasks(expanded_mask, expanded_h, expanded_w) - - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Translate the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - fill_val (int | float): Border value. Default 0 for masks. - interpolation (str): Same as :func:`mmcv.imtranslate`. - - Returns: - BitmapMasks: Translated BitmapMasks. 
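`BitmapMasks.crop_and_resize` above crops and resizes ground-truth masks with `roi_align` by giving every box its own batch index, producing `Nx5` rois. A rough standalone sketch of that trick; it substitutes `torchvision.ops.roi_align` (an assumption, similar in spirit but not identical to `mmcv.ops.roi_align`) so it runs without mmcv:

```python
import torch
from torchvision.ops import roi_align  # stand-in for mmcv.ops.roi_align

# Two binary instance masks on a 16x16 image.
masks = torch.zeros(2, 16, 16)
masks[0, 2:10, 2:10] = 1
masks[1, 5:16, 8:16] = 1

bboxes = torch.tensor([[2., 2., 10., 10.],
                       [8., 5., 16., 16.]])
inds = torch.tensor([0, 1])   # which GT mask each proposal is assigned to
out_shape = (7, 7)            # cfg.mask_size

# Same trick as crop_and_resize: prepend a per-box batch index to get Nx5 rois,
# then treat each mask as a 1-channel feature map for roi_align.
fake_inds = torch.arange(len(bboxes), dtype=bboxes.dtype)[:, None]
rois = torch.cat([fake_inds, bboxes], dim=1)        # Nx5
gt = masks[inds][:, None, :, :]                     # Nx1xHxW
targets = roi_align(gt, rois, out_shape, spatial_scale=1.0,
                    sampling_ratio=0, aligned=True).squeeze(1)
mask_targets = targets >= 0.5                       # binarize
print(mask_targets.shape)  # torch.Size([2, 7, 7])
```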
- - Example: - >>> from mmdet.core.mask.structures import BitmapMasks - >>> self = BitmapMasks.random(dtype=np.uint8) - >>> out_shape = (32, 32) - >>> offset = 4 - >>> direction = 'horizontal' - >>> fill_val = 0 - >>> interpolation = 'bilinear' - >>> # Note, There seem to be issues when: - >>> # * out_shape is different than self's shape - >>> # * the mask dtype is not supported by cv2.AffineWarp - >>> new = self.translate(out_shape, offset, direction, fill_val, - >>> interpolation) - >>> assert len(new) == len(self) - >>> assert new.height, new.width == out_shape - """ - if len(self.masks) == 0: - translated_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - translated_masks = mmcv.imtranslate( - self.masks.transpose((1, 2, 0)), - offset, - direction, - border_value=fill_val, - interpolation=interpolation) - if translated_masks.ndim == 2: - translated_masks = translated_masks[:, :, None] - translated_masks = translated_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(translated_masks, *out_shape) - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - magnitude (int | float): The magnitude used for shear. - direction (str): The shear direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as in :func:`mmcv.imshear`. - - Returns: - BitmapMasks: The sheared masks. - """ - if len(self.masks) == 0: - sheared_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - sheared_masks = mmcv.imshear( - self.masks.transpose((1, 2, 0)), - magnitude, - direction, - border_value=border_value, - interpolation=interpolation) - if sheared_masks.ndim == 2: - sheared_masks = sheared_masks[:, :, None] - sheared_masks = sheared_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(sheared_masks, *out_shape) - - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """Rotate the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - angle (int | float): Rotation angle in degrees. Positive values - mean counter-clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the - rotation in source image. If not specified, the center of - the image will be used. - scale (int | float): Isotropic scale factor. - fill_val (int | float): Border value. Default 0 for masks. - - Returns: - BitmapMasks: Rotated BitmapMasks. 
- """ - if len(self.masks) == 0: - rotated_masks = np.empty((0, *out_shape), dtype=self.masks.dtype) - else: - rotated_masks = mmcv.imrotate( - self.masks.transpose((1, 2, 0)), - angle, - center=center, - scale=scale, - border_value=fill_val) - if rotated_masks.ndim == 2: - # case when only one mask, (h, w) - rotated_masks = rotated_masks[:, :, None] # (h, w, 1) - rotated_masks = rotated_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(rotated_masks, *out_shape) - - @property - def areas(self): - """See :py:attr:`BaseInstanceMasks.areas`.""" - return self.masks.sum((1, 2)) - - def to_ndarray(self): - """See :func:`BaseInstanceMasks.to_ndarray`.""" - return self.masks - - def to_tensor(self, dtype, device): - """See :func:`BaseInstanceMasks.to_tensor`.""" - return torch.tensor(self.masks, dtype=dtype, device=device) - - @classmethod - def random(cls, - num_masks=3, - height=32, - width=32, - dtype=np.uint8, - rng=None): - """Generate random bitmap masks for demo / testing purposes. - - Example: - >>> from mmdet.core.mask.structures import BitmapMasks - >>> self = BitmapMasks.random() - >>> print('self = {}'.format(self)) - self = BitmapMasks(num_masks=3, height=32, width=32) - """ - from mmdet.utils.util_random import ensure_rng - rng = ensure_rng(rng) - masks = (rng.rand(num_masks, height, width) > 0.1).astype(dtype) - self = cls(masks, height=height, width=width) - return self - - def get_bboxes(self): - num_masks = len(self) - boxes = np.zeros((num_masks, 4), dtype=np.float32) - x_any = self.masks.any(axis=1) - y_any = self.masks.any(axis=2) - for idx in range(num_masks): - x = np.where(x_any[idx, :])[0] - y = np.where(y_any[idx, :])[0] - if len(x) > 0 and len(y) > 0: - # use +1 for x_max and y_max so that the right and bottom - # boundary of instance masks are fully included by the box - boxes[idx, :] = np.array([x[0], y[0], x[-1] + 1, y[-1] + 1], - dtype=np.float32) - return boxes - - -class PolygonMasks(BaseInstanceMasks): - """This class represents masks in the form of polygons. - - Polygons is a list of three levels. The first level of the list - corresponds to objects, the second level to the polys that compose the - object, the third level to the poly coordinates - - Args: - masks (list[list[ndarray]]): The first level of the list - corresponds to objects, the second level to the polys that - compose the object, the third level to the poly coordinates - height (int): height of masks - width (int): width of masks - - Example: - >>> from mmdet.core.mask.structures import * # NOQA - >>> masks = [ - >>> [ np.array([0, 0, 10, 0, 10, 10., 0, 10, 0, 0]) ] - >>> ] - >>> height, width = 16, 16 - >>> self = PolygonMasks(masks, height, width) - - >>> # demo translate - >>> new = self.translate((16, 16), 4., direction='horizontal') - >>> assert np.all(new.masks[0][0][1::2] == masks[0][0][1::2]) - >>> assert np.all(new.masks[0][0][0::2] == masks[0][0][0::2] + 4) - - >>> # demo crop_and_resize - >>> num_boxes = 3 - >>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes) - >>> out_shape = (16, 16) - >>> inds = torch.randint(0, len(self), size=(num_boxes,)) - >>> device = 'cpu' - >>> interpolation = 'bilinear' - >>> new = self.crop_and_resize( - ... 
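`BitmapMasks.get_bboxes` derives tight boxes from bitmap masks by reducing each mask with `any()` along rows and columns and taking the first and last foreground index (+1 on the max side so the box fully encloses the mask). A small NumPy sketch with hand-made masks:

```python
import numpy as np

# Two 6x6 binary masks; derive tight boxes the same way as get_bboxes().
masks = np.zeros((2, 6, 6), dtype=np.uint8)
masks[0, 1:3, 2:5] = 1
masks[1, 4, 0:2] = 1

boxes = np.zeros((2, 4), dtype=np.float32)
x_any = masks.any(axis=1)   # which columns contain foreground, per mask
y_any = masks.any(axis=2)   # which rows contain foreground, per mask
for idx in range(len(masks)):
    x = np.where(x_any[idx])[0]
    y = np.where(y_any[idx])[0]
    if len(x) > 0 and len(y) > 0:
        # +1 so the right/bottom edge fully encloses the mask
        boxes[idx] = [x[0], y[0], x[-1] + 1, y[-1] + 1]

print(boxes)
# [[2. 1. 5. 3.]
#  [0. 4. 2. 5.]]
```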
bboxes, out_shape, inds, device, interpolation) - >>> assert len(new) == num_boxes - >>> assert new.height, new.width == out_shape - """ - - def __init__(self, masks, height, width): - assert isinstance(masks, list) - if len(masks) > 0: - assert isinstance(masks[0], list) - assert isinstance(masks[0][0], np.ndarray) - - self.height = height - self.width = width - self.masks = masks - - def __getitem__(self, index): - """Index the polygon masks. - - Args: - index (ndarray | List): The indices. - - Returns: - :obj:`PolygonMasks`: The indexed polygon masks. - """ - if isinstance(index, np.ndarray): - index = index.tolist() - if isinstance(index, list): - masks = [self.masks[i] for i in index] - else: - try: - masks = self.masks[index] - except Exception: - raise ValueError( - f'Unsupported input of type {type(index)} for indexing!') - if len(masks) and isinstance(masks[0], np.ndarray): - masks = [masks] # ensure a list of three levels - return PolygonMasks(masks, self.height, self.width) - - def __iter__(self): - return iter(self.masks) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += f'num_masks={len(self.masks)}, ' - s += f'height={self.height}, ' - s += f'width={self.width})' - return s - - def __len__(self): - """Number of masks.""" - return len(self.masks) - - def rescale(self, scale, interpolation=None): - """see :func:`BaseInstanceMasks.rescale`""" - new_w, new_h = mmcv.rescale_size((self.width, self.height), scale) - if len(self.masks) == 0: - rescaled_masks = PolygonMasks([], new_h, new_w) - else: - rescaled_masks = self.resize((new_h, new_w)) - return rescaled_masks - - def resize(self, out_shape, interpolation=None): - """see :func:`BaseInstanceMasks.resize`""" - if len(self.masks) == 0: - resized_masks = PolygonMasks([], *out_shape) - else: - h_scale = out_shape[0] / self.height - w_scale = out_shape[1] / self.width - resized_masks = [] - for poly_per_obj in self.masks: - resized_poly = [] - for p in poly_per_obj: - p = p.copy() - p[0::2] = p[0::2] * w_scale - p[1::2] = p[1::2] * h_scale - resized_poly.append(p) - resized_masks.append(resized_poly) - resized_masks = PolygonMasks(resized_masks, *out_shape) - return resized_masks - - def flip(self, flip_direction='horizontal'): - """see :func:`BaseInstanceMasks.flip`""" - assert flip_direction in ('horizontal', 'vertical', 'diagonal') - if len(self.masks) == 0: - flipped_masks = PolygonMasks([], self.height, self.width) - else: - flipped_masks = [] - for poly_per_obj in self.masks: - flipped_poly_per_obj = [] - for p in poly_per_obj: - p = p.copy() - if flip_direction == 'horizontal': - p[0::2] = self.width - p[0::2] - elif flip_direction == 'vertical': - p[1::2] = self.height - p[1::2] - else: - p[0::2] = self.width - p[0::2] - p[1::2] = self.height - p[1::2] - flipped_poly_per_obj.append(p) - flipped_masks.append(flipped_poly_per_obj) - flipped_masks = PolygonMasks(flipped_masks, self.height, - self.width) - return flipped_masks - - def crop(self, bbox): - """see :func:`BaseInstanceMasks.crop`""" - assert isinstance(bbox, np.ndarray) - assert bbox.ndim == 1 - - # clip the boundary - bbox = bbox.copy() - bbox[0::2] = np.clip(bbox[0::2], 0, self.width) - bbox[1::2] = np.clip(bbox[1::2], 0, self.height) - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - - if len(self.masks) == 0: - cropped_masks = PolygonMasks([], h, w) - else: - cropped_masks = [] - for poly_per_obj in self.masks: - cropped_poly_per_obj = [] - for p in poly_per_obj: - # pycocotools will clip the boundary - p = 
p.copy() - p[0::2] = p[0::2] - bbox[0] - p[1::2] = p[1::2] - bbox[1] - cropped_poly_per_obj.append(p) - cropped_masks.append(cropped_poly_per_obj) - cropped_masks = PolygonMasks(cropped_masks, h, w) - return cropped_masks - - def pad(self, out_shape, pad_val=0): - """padding has no effect on polygons`""" - return PolygonMasks(self.masks, *out_shape) - - def expand(self, *args, **kwargs): - """TODO: Add expand for polygon""" - raise NotImplementedError - - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device='cpu', - interpolation='bilinear', - binarize=True): - """see :func:`BaseInstanceMasks.crop_and_resize`""" - out_h, out_w = out_shape - if len(self.masks) == 0: - return PolygonMasks([], out_h, out_w) - - if not binarize: - raise ValueError('Polygons are always binary, ' - 'setting binarize=False is unsupported') - - resized_masks = [] - for i in range(len(bboxes)): - mask = self.masks[inds[i]] - bbox = bboxes[i, :] - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - h_scale = out_h / max(h, 0.1) # avoid too large scale - w_scale = out_w / max(w, 0.1) - - resized_mask = [] - for p in mask: - p = p.copy() - # crop - # pycocotools will clip the boundary - p[0::2] = p[0::2] - bbox[0] - p[1::2] = p[1::2] - bbox[1] - - # resize - p[0::2] = p[0::2] * w_scale - p[1::2] = p[1::2] * h_scale - resized_mask.append(p) - resized_masks.append(resized_mask) - return PolygonMasks(resized_masks, *out_shape) - - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=None, - interpolation=None): - """Translate the PolygonMasks. - - Example: - >>> self = PolygonMasks.random(dtype=np.int) - >>> out_shape = (self.height, self.width) - >>> new = self.translate(out_shape, 4., direction='horizontal') - >>> assert np.all(new.masks[0][0][1::2] == self.masks[0][0][1::2]) - >>> assert np.all(new.masks[0][0][0::2] == self.masks[0][0][0::2] + 4) # noqa: E501 - """ - assert fill_val is None or fill_val == 0, 'Here fill_val is not '\ - f'used, and defaultly should be None or 0. got {fill_val}.' 
- if len(self.masks) == 0: - translated_masks = PolygonMasks([], *out_shape) - else: - translated_masks = [] - for poly_per_obj in self.masks: - translated_poly_per_obj = [] - for p in poly_per_obj: - p = p.copy() - if direction == 'horizontal': - p[0::2] = np.clip(p[0::2] + offset, 0, out_shape[1]) - elif direction == 'vertical': - p[1::2] = np.clip(p[1::2] + offset, 0, out_shape[0]) - translated_poly_per_obj.append(p) - translated_masks.append(translated_poly_per_obj) - translated_masks = PolygonMasks(translated_masks, *out_shape) - return translated_masks - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """See :func:`BaseInstanceMasks.shear`.""" - if len(self.masks) == 0: - sheared_masks = PolygonMasks([], *out_shape) - else: - sheared_masks = [] - if direction == 'horizontal': - shear_matrix = np.stack([[1, magnitude], - [0, 1]]).astype(np.float32) - elif direction == 'vertical': - shear_matrix = np.stack([[1, 0], [magnitude, - 1]]).astype(np.float32) - for poly_per_obj in self.masks: - sheared_poly = [] - for p in poly_per_obj: - p = np.stack([p[0::2], p[1::2]], axis=0) # [2, n] - new_coords = np.matmul(shear_matrix, p) # [2, n] - new_coords[0, :] = np.clip(new_coords[0, :], 0, - out_shape[1]) - new_coords[1, :] = np.clip(new_coords[1, :], 0, - out_shape[0]) - sheared_poly.append( - new_coords.transpose((1, 0)).reshape(-1)) - sheared_masks.append(sheared_poly) - sheared_masks = PolygonMasks(sheared_masks, *out_shape) - return sheared_masks - - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """See :func:`BaseInstanceMasks.rotate`.""" - if len(self.masks) == 0: - rotated_masks = PolygonMasks([], *out_shape) - else: - rotated_masks = [] - rotate_matrix = cv2.getRotationMatrix2D(center, -angle, scale) - for poly_per_obj in self.masks: - rotated_poly = [] - for p in poly_per_obj: - p = p.copy() - coords = np.stack([p[0::2], p[1::2]], axis=1) # [n, 2] - # pad 1 to convert from format [x, y] to homogeneous - # coordinates format [x, y, 1] - coords = np.concatenate( - (coords, np.ones((coords.shape[0], 1), coords.dtype)), - axis=1) # [n, 3] - rotated_coords = np.matmul( - rotate_matrix[None, :, :], - coords[:, :, None])[..., 0] # [n, 2, 1] -> [n, 2] - rotated_coords[:, 0] = np.clip(rotated_coords[:, 0], 0, - out_shape[1]) - rotated_coords[:, 1] = np.clip(rotated_coords[:, 1], 0, - out_shape[0]) - rotated_poly.append(rotated_coords.reshape(-1)) - rotated_masks.append(rotated_poly) - rotated_masks = PolygonMasks(rotated_masks, *out_shape) - return rotated_masks - - def to_bitmap(self): - """convert polygon masks to bitmap masks.""" - bitmap_masks = self.to_ndarray() - return BitmapMasks(bitmap_masks, self.height, self.width) - - @property - def areas(self): - """Compute areas of masks. - - This func is modified from `detectron2 - `_. - The function only works with Polygons using the shoelace formula. - - Return: - ndarray: areas of each instance - """ # noqa: W501 - area = [] - for polygons_per_obj in self.masks: - area_per_obj = 0 - for p in polygons_per_obj: - area_per_obj += self._polygon_area(p[0::2], p[1::2]) - area.append(area_per_obj) - return np.asarray(area) - - def _polygon_area(self, x, y): - """Compute the area of a component of a polygon. 
- - Using the shoelace formula: - https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates - - Args: - x (ndarray): x coordinates of the component - y (ndarray): y coordinates of the component - - Return: - float: the are of the component - """ # noqa: 501 - return 0.5 * np.abs( - np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))) - - def to_ndarray(self): - """Convert masks to the format of ndarray.""" - if len(self.masks) == 0: - return np.empty((0, self.height, self.width), dtype=np.uint8) - bitmap_masks = [] - for poly_per_obj in self.masks: - bitmap_masks.append( - polygon_to_bitmap(poly_per_obj, self.height, self.width)) - return np.stack(bitmap_masks) - - def to_tensor(self, dtype, device): - """See :func:`BaseInstanceMasks.to_tensor`.""" - if len(self.masks) == 0: - return torch.empty((0, self.height, self.width), - dtype=dtype, - device=device) - ndarray_masks = self.to_ndarray() - return torch.tensor(ndarray_masks, dtype=dtype, device=device) - - @classmethod - def random(cls, - num_masks=3, - height=32, - width=32, - n_verts=5, - dtype=np.float32, - rng=None): - """Generate random polygon masks for demo / testing purposes. - - Adapted from [1]_ - - References: - .. [1] https://gitlab.kitware.com/computer-vision/kwimage/-/blob/928cae35ca8/kwimage/structs/polygon.py#L379 # noqa: E501 - - Example: - >>> from mmdet.core.mask.structures import PolygonMasks - >>> self = PolygonMasks.random() - >>> print('self = {}'.format(self)) - """ - from mmdet.utils.util_random import ensure_rng - rng = ensure_rng(rng) - - def _gen_polygon(n, irregularity, spikeyness): - """Creates the polygon by sampling points on a circle around the - centre. Random noise is added by varying the angular spacing - between sequential points, and by varying the radial distance of - each point from the centre. - - Based on original code by Mike Ounsworth - - Args: - n (int): number of vertices - irregularity (float): [0,1] indicating how much variance there - is in the angular spacing of vertices. [0,1] will map to - [0, 2pi/numberOfVerts] - spikeyness (float): [0,1] indicating how much variance there is - in each vertex from the circle of radius aveRadius. [0,1] - will map to [0, aveRadius] - - Returns: - a list of vertices, in CCW order. 
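The `_polygon_area` helper shown above is the shoelace formula. A minimal standalone NumPy sketch of the same computation (illustrative only, not the mmdet implementation; the function name is made up):

```python
import numpy as np

def shoelace_area(x, y):
    # x, y: 1-D arrays of vertex coordinates, listed in order around the
    # polygon; the absolute value makes the result orientation-independent.
    return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Unit square (0,0)-(1,0)-(1,1)-(0,1) -> area 1.0
x = np.array([0.0, 1.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(shoelace_area(x, y))  # 1.0
```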
- """ - from scipy.stats import truncnorm - - # Generate around the unit circle - cx, cy = (0.0, 0.0) - radius = 1 - - tau = np.pi * 2 - - irregularity = np.clip(irregularity, 0, 1) * 2 * np.pi / n - spikeyness = np.clip(spikeyness, 1e-9, 1) - - # generate n angle steps - lower = (tau / n) - irregularity - upper = (tau / n) + irregularity - angle_steps = rng.uniform(lower, upper, n) - - # normalize the steps so that point 0 and point n+1 are the same - k = angle_steps.sum() / (2 * np.pi) - angles = (angle_steps / k).cumsum() + rng.uniform(0, tau) - - # Convert high and low values to be wrt the standard normal range - # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncnorm.html - low = 0 - high = 2 * radius - mean = radius - std = spikeyness - a = (low - mean) / std - b = (high - mean) / std - tnorm = truncnorm(a=a, b=b, loc=mean, scale=std) - - # now generate the points - radii = tnorm.rvs(n, random_state=rng) - x_pts = cx + radii * np.cos(angles) - y_pts = cy + radii * np.sin(angles) - - points = np.hstack([x_pts[:, None], y_pts[:, None]]) - - # Scale to 0-1 space - points = points - points.min(axis=0) - points = points / points.max(axis=0) - - # Randomly place within 0-1 space - points = points * (rng.rand() * .8 + .2) - min_pt = points.min(axis=0) - max_pt = points.max(axis=0) - - high = (1 - max_pt) - low = (0 - min_pt) - offset = (rng.rand(2) * (high - low)) + low - points = points + offset - return points - - def _order_vertices(verts): - """ - References: - https://stackoverflow.com/questions/1709283/how-can-i-sort-a-coordinate-list-for-a-rectangle-counterclockwise - """ - mlat = verts.T[0].sum() / len(verts) - mlng = verts.T[1].sum() / len(verts) - - tau = np.pi * 2 - angle = (np.arctan2(mlat - verts.T[0], verts.T[1] - mlng) + - tau) % tau - sortx = angle.argsort() - verts = verts.take(sortx, axis=0) - return verts - - # Generate a random exterior for each requested mask - masks = [] - for _ in range(num_masks): - exterior = _order_vertices(_gen_polygon(n_verts, 0.9, 0.9)) - exterior = (exterior * [(width, height)]).astype(dtype) - masks.append([exterior.ravel()]) - - self = cls(masks, height, width) - return self - - def get_bboxes(self): - num_masks = len(self) - boxes = np.zeros((num_masks, 4), dtype=np.float32) - for idx, poly_per_obj in enumerate(self.masks): - # simply use a number that is big enough for comparison with - # coordinates - xy_min = np.array([self.width * 2, self.height * 2], - dtype=np.float32) - xy_max = np.zeros(2, dtype=np.float32) - for p in poly_per_obj: - xy = np.array(p).reshape(-1, 2).astype(np.float32) - xy_min = np.minimum(xy_min, np.min(xy, axis=0)) - xy_max = np.maximum(xy_max, np.max(xy, axis=0)) - boxes[idx, :2] = xy_min - boxes[idx, 2:] = xy_max - - return boxes - - -def polygon_to_bitmap(polygons, height, width): - """Convert masks from the form of polygons to bitmaps. - - Args: - polygons (list[ndarray]): masks in polygon representation - height (int): mask height - width (int): mask width - - Return: - ndarray: the converted masks in bitmap representation - """ - rles = maskUtils.frPyObjects(polygons, height, width) - rle = maskUtils.merge(rles) - bitmap_mask = maskUtils.decode(rle).astype(np.bool) - return bitmap_mask - - -def bitmap_to_polygon(bitmap): - """Convert masks from the form of bitmaps to polygons. - - Args: - bitmap (ndarray): masks in bitmap representation. - - Return: - list[ndarray]: the converted mask in polygon representation. - bool: whether the mask has holes. 
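`polygon_to_bitmap` above delegates rasterisation to pycocotools, which the repository already depends on. A short usage sketch (the triangle coordinates are made up for illustration):

```python
import numpy as np
import pycocotools.mask as maskUtils

height, width = 8, 8
# One polygon per instance, in flat [x1, y1, x2, y2, ...] format: a small triangle.
polygons = [np.array([1.0, 1.0, 6.0, 1.0, 1.0, 6.0])]

rles = maskUtils.frPyObjects(polygons, height, width)  # one RLE per polygon
rle = maskUtils.merge(rles)                            # union of all polygons
bitmap = maskUtils.decode(rle).astype(bool)            # (height, width) mask
print(bitmap.sum())                                    # foreground pixel count
```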
- """ - bitmap = np.ascontiguousarray(bitmap).astype(np.uint8) - # cv2.RETR_CCOMP: retrieves all of the contours and organizes them - # into a two-level hierarchy. At the top level, there are external - # boundaries of the components. At the second level, there are - # boundaries of the holes. If there is another contour inside a hole - # of a connected component, it is still put at the top level. - # cv2.CHAIN_APPROX_NONE: stores absolutely all the contour points. - outs = cv2.findContours(bitmap, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE) - contours = outs[-2] - hierarchy = outs[-1] - if hierarchy is None: - return [], False - # hierarchy[i]: 4 elements, for the indexes of next, previous, - # parent, or nested contours. If there is no corresponding contour, - # it will be -1. - with_hole = (hierarchy.reshape(-1, 4)[:, 3] >= 0).any() - contours = [c.reshape(-1, 2) for c in contours] - return contours, with_hole diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/utils.py deleted file mode 100644 index 90544b34..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/mask/utils.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import pycocotools.mask as mask_util -import torch - - -def split_combined_polys(polys, poly_lens, polys_per_mask): - """Split the combined 1-D polys into masks. - - A mask is represented as a list of polys, and a poly is represented as - a 1-D array. In dataset, all masks are concatenated into a single 1-D - tensor. Here we need to split the tensor into original representations. - - Args: - polys (list): a list (length = image num) of 1-D tensors - poly_lens (list): a list (length = image num) of poly length - polys_per_mask (list): a list (length = image num) of poly number - of each mask - - Returns: - list: a list (length = image num) of list (length = mask num) of \ - list (length = poly num) of numpy array. - """ - mask_polys_list = [] - for img_id in range(len(polys)): - polys_single = polys[img_id] - polys_lens_single = poly_lens[img_id].tolist() - polys_per_mask_single = polys_per_mask[img_id].tolist() - - split_polys = mmcv.slice_list(polys_single, polys_lens_single) - mask_polys = mmcv.slice_list(split_polys, polys_per_mask_single) - mask_polys_list.append(mask_polys) - return mask_polys_list - - -# TODO: move this function to more proper place -def encode_mask_results(mask_results): - """Encode bitmap mask to RLE code. - - Args: - mask_results (list | tuple[list]): bitmap mask results. - In mask scoring rcnn, mask_results is a tuple of (segm_results, - segm_cls_score). - - Returns: - list | tuple: RLE encoded mask. - """ - if isinstance(mask_results, tuple): # mask scoring - cls_segms, cls_mask_scores = mask_results - else: - cls_segms = mask_results - num_classes = len(cls_segms) - encoded_mask_results = [[] for _ in range(num_classes)] - for i in range(len(cls_segms)): - for cls_segm in cls_segms[i]: - encoded_mask_results[i].append( - mask_util.encode( - np.array( - cls_segm[:, :, np.newaxis], order='F', - dtype='uint8'))[0]) # encoded with RLE - if isinstance(mask_results, tuple): - return encoded_mask_results, cls_mask_scores - else: - return encoded_mask_results - - -def mask2bbox(masks): - """Obtain tight bounding boxes of binary masks. - - Args: - masks (Tensor): Binary mask of shape (n, h, w). 
- - Returns: - Tensor: Bboxe with shape (n, 4) of \ - positive region in binary mask. - """ - N = masks.shape[0] - bboxes = masks.new_zeros((N, 4), dtype=torch.float32) - x_any = torch.any(masks, dim=1) - y_any = torch.any(masks, dim=2) - for i in range(N): - x = torch.where(x_any[i, :])[0] - y = torch.where(y_any[i, :])[0] - if len(x) > 0 and len(y) > 0: - bboxes[i, :] = bboxes.new_tensor( - [x[0], y[0], x[-1] + 1, y[-1] + 1]) - - return bboxes diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/__init__.py deleted file mode 100644 index 00376bd4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .bbox_nms import fast_nms, multiclass_nms -from .matrix_nms import mask_matrix_nms -from .merge_augs import (merge_aug_bboxes, merge_aug_masks, - merge_aug_proposals, merge_aug_scores) - -__all__ = [ - 'multiclass_nms', 'merge_aug_proposals', 'merge_aug_bboxes', - 'merge_aug_scores', 'merge_aug_masks', 'mask_matrix_nms', 'fast_nms' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/bbox_nms.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/bbox_nms.py deleted file mode 100644 index 4fcf57bb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/bbox_nms.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops.nms import batched_nms - -from mmdet.core.bbox.iou_calculators import bbox_overlaps - - -def multiclass_nms(multi_bboxes, - multi_scores, - score_thr, - nms_cfg, - max_num=-1, - score_factors=None, - return_inds=False): - """NMS for multi-class bboxes. - - Args: - multi_bboxes (Tensor): shape (n, #class*4) or (n, 4) - multi_scores (Tensor): shape (n, #class), where the last column - contains scores of the background class, but this will be ignored. - score_thr (float): bbox threshold, bboxes with scores lower than it - will not be considered. - nms_cfg (dict): a dict that contains the arguments of nms operations - max_num (int, optional): if there are more than max_num bboxes after - NMS, only top max_num will be kept. Default to -1. - score_factors (Tensor, optional): The factors multiplied to scores - before applying NMS. Default to None. - return_inds (bool, optional): Whether return the indices of kept - bboxes. Default to False. - - Returns: - tuple: (dets, labels, indices (optional)), tensors of shape (k, 5), - (k), and (k). Dets are boxes with scores. Labels are 0-based. 
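`multiclass_nms` above ultimately calls `mmcv.ops.batched_nms`, whose core trick is to run per-class NMS in one pass by offsetting each class's boxes so that boxes of different classes can never overlap. A plain-PyTorch sketch of that idea (simplified, not the mmcv kernel; the helper names are illustrative):

```python
import torch

def nms(boxes, scores, iou_thr):
    # Greedy NMS on (N, 4) xyxy boxes; returns indices of kept boxes.
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(i.item())
        if order.numel() == 1:
            break
        rest = boxes[order[1:]]
        x1 = torch.maximum(boxes[i, 0], rest[:, 0])
        y1 = torch.maximum(boxes[i, 1], rest[:, 1])
        x2 = torch.minimum(boxes[i, 2], rest[:, 2])
        y2 = torch.minimum(boxes[i, 3], rest[:, 3])
        inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thr]
    return torch.tensor(keep, dtype=torch.long)

def class_aware_nms(boxes, scores, labels, iou_thr=0.5):
    # Shift each class into its own coordinate range, then run one NMS pass.
    offsets = labels.to(boxes) * (boxes.max() + 1)
    return nms(boxes + offsets[:, None], scores, iou_thr)
```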
- """ - num_classes = multi_scores.size(1) - 1 - # exclude background category - if multi_bboxes.shape[1] > 4: - bboxes = multi_bboxes.view(multi_scores.size(0), -1, 4) - else: - bboxes = multi_bboxes[:, None].expand( - multi_scores.size(0), num_classes, 4) - - scores = multi_scores[:, :-1] - - labels = torch.arange(num_classes, dtype=torch.long, device=scores.device) - labels = labels.view(1, -1).expand_as(scores) - - bboxes = bboxes.reshape(-1, 4) - scores = scores.reshape(-1) - labels = labels.reshape(-1) - - if not torch.onnx.is_in_onnx_export(): - # NonZero not supported in TensorRT - # remove low scoring boxes - valid_mask = scores > score_thr - # multiply score_factor after threshold to preserve more bboxes, improve - # mAP by 1% for YOLOv3 - if score_factors is not None: - # expand the shape to match original shape of score - score_factors = score_factors.view(-1, 1).expand( - multi_scores.size(0), num_classes) - score_factors = score_factors.reshape(-1) - scores = scores * score_factors - - if not torch.onnx.is_in_onnx_export(): - # NonZero not supported in TensorRT - inds = valid_mask.nonzero(as_tuple=False).squeeze(1) - bboxes, scores, labels = bboxes[inds], scores[inds], labels[inds] - else: - # TensorRT NMS plugin has invalid output filled with -1 - # add dummy data to make detection output correct. - bboxes = torch.cat([bboxes, bboxes.new_zeros(1, 4)], dim=0) - scores = torch.cat([scores, scores.new_zeros(1)], dim=0) - labels = torch.cat([labels, labels.new_zeros(1)], dim=0) - - if bboxes.numel() == 0: - if torch.onnx.is_in_onnx_export(): - raise RuntimeError('[ONNX Error] Can not record NMS ' - 'as it has not been executed this time') - dets = torch.cat([bboxes, scores[:, None]], -1) - if return_inds: - return dets, labels, inds - else: - return dets, labels - - dets, keep = batched_nms(bboxes, scores, labels, nms_cfg) - - if max_num > 0: - dets = dets[:max_num] - keep = keep[:max_num] - - if return_inds: - return dets, labels[keep], inds[keep] - else: - return dets, labels[keep] - - -def fast_nms(multi_bboxes, - multi_scores, - multi_coeffs, - score_thr, - iou_thr, - top_k, - max_num=-1): - """Fast NMS in `YOLACT `_. - - Fast NMS allows already-removed detections to suppress other detections so - that every instance can be decided to be kept or discarded in parallel, - which is not possible in traditional NMS. This relaxation allows us to - implement Fast NMS entirely in standard GPU-accelerated matrix operations. - - Args: - multi_bboxes (Tensor): shape (n, #class*4) or (n, 4) - multi_scores (Tensor): shape (n, #class+1), where the last column - contains scores of the background class, but this will be ignored. - multi_coeffs (Tensor): shape (n, #class*coeffs_dim). - score_thr (float): bbox threshold, bboxes with scores lower than it - will not be considered. - iou_thr (float): IoU threshold to be considered as conflicted. - top_k (int): if there are more than top_k bboxes before NMS, - only top top_k will be kept. - max_num (int): if there are more than max_num bboxes after NMS, - only top max_num will be kept. If -1, keep all the bboxes. - Default: -1. - - Returns: - tuple: (dets, labels, coefficients), tensors of shape (k, 5), (k, 1), - and (k, coeffs_dim). Dets are boxes with scores. - Labels are 0-based. 
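Fast NMS, as the docstring above explains, lets already-suppressed detections suppress others, so the whole decision reduces to one matrix operation: a detection is kept only if its maximum IoU with any higher-scored detection stays below the threshold. A single-class sketch (helper names are illustrative, not the YOLACT code):

```python
import torch

def pairwise_iou(boxes):
    # IoU between every pair of (N, 4) xyxy boxes -> (N, N) matrix.
    x1 = torch.maximum(boxes[:, None, 0], boxes[None, :, 0])
    y1 = torch.maximum(boxes[:, None, 1], boxes[None, :, 1])
    x2 = torch.minimum(boxes[:, None, 2], boxes[None, :, 2])
    y2 = torch.minimum(boxes[:, None, 3], boxes[None, :, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area[:, None] + area[None, :] - inter)

def fast_nms_single_class(boxes, scores, iou_thr=0.5):
    order = scores.argsort(descending=True)            # high scores first
    iou = pairwise_iou(boxes[order]).triu(diagonal=1)   # rows = higher-scored
    iou_max, _ = iou.max(dim=0)                        # worst overlap per box
    return order[iou_max <= iou_thr]                   # indices into the input
```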
- """ - - scores = multi_scores[:, :-1].t() # [#class, n] - scores, idx = scores.sort(1, descending=True) - - idx = idx[:, :top_k].contiguous() - scores = scores[:, :top_k] # [#class, topk] - num_classes, num_dets = idx.size() - boxes = multi_bboxes[idx.view(-1), :].view(num_classes, num_dets, 4) - coeffs = multi_coeffs[idx.view(-1), :].view(num_classes, num_dets, -1) - - iou = bbox_overlaps(boxes, boxes) # [#class, topk, topk] - iou.triu_(diagonal=1) - iou_max, _ = iou.max(dim=1) - - # Now just filter out the ones higher than the threshold - keep = iou_max <= iou_thr - - # Second thresholding introduces 0.2 mAP gain at negligible time cost - keep *= scores > score_thr - - # Assign each kept detection to its corresponding class - classes = torch.arange( - num_classes, device=boxes.device)[:, None].expand_as(keep) - classes = classes[keep] - - boxes = boxes[keep] - coeffs = coeffs[keep] - scores = scores[keep] - - # Only keep the top max_num highest scores across all classes - scores, idx = scores.sort(0, descending=True) - if max_num > 0: - idx = idx[:max_num] - scores = scores[:max_num] - - classes = classes[idx] - boxes = boxes[idx] - coeffs = coeffs[idx] - - cls_dets = torch.cat([boxes, scores[:, None]], dim=1) - return cls_dets, classes, coeffs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/matrix_nms.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/matrix_nms.py deleted file mode 100644 index 9dc8c4f7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/matrix_nms.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def mask_matrix_nms(masks, - labels, - scores, - filter_thr=-1, - nms_pre=-1, - max_num=-1, - kernel='gaussian', - sigma=2.0, - mask_area=None): - """Matrix NMS for multi-class masks. - - Args: - masks (Tensor): Has shape (num_instances, h, w) - labels (Tensor): Labels of corresponding masks, - has shape (num_instances,). - scores (Tensor): Mask scores of corresponding masks, - has shape (num_instances). - filter_thr (float): Score threshold to filter the masks - after matrix nms. Default: -1, which means do not - use filter_thr. - nms_pre (int): The max number of instances to do the matrix nms. - Default: -1, which means do not use nms_pre. - max_num (int, optional): If there are more than max_num masks after - matrix, only top max_num will be kept. Default: -1, which means - do not use max_num. - kernel (str): 'linear' or 'gaussian'. - sigma (float): std in gaussian method. - mask_area (Tensor): The sum of seg_masks. - - Returns: - tuple(Tensor): Processed mask results. - - - scores (Tensor): Updated scores, has shape (n,). - - labels (Tensor): Remained labels, has shape (n,). - - masks (Tensor): Remained masks, has shape (n, w, h). - - keep_inds (Tensor): The indices number of - the remaining mask in the input mask, has shape (n,). 
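Matrix NMS (used by SOLO-style instance segmentation) re-scores overlapping instances instead of discarding them outright. Given the label-masked, upper-triangular IoU matrix built in the code above, the decay coefficient can be sketched as follows (a simplified rewrite of the same computation, assuming `iou` is already masked per class):

```python
import torch

def matrix_nms_decay(iou, sigma=2.0, kernel='gaussian'):
    # iou[i, j]: IoU between higher-scored instance i and instance j
    # (same class only; zero on and below the diagonal).
    num = iou.size(0)
    # How much each suppressor i is itself overlapped by something stronger.
    compensate, _ = iou.max(dim=0)
    compensate = compensate.expand(num, num).transpose(1, 0)
    if kernel == 'gaussian':
        decay = torch.exp(-sigma * iou ** 2) / torch.exp(-sigma * compensate ** 2)
    else:  # 'linear'
        decay = (1 - iou) / (1 - compensate)
    coeff, _ = decay.min(dim=0)   # strongest decay wins
    return coeff                  # multiply into the original scores
```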
- """ - assert len(labels) == len(masks) == len(scores) - if len(labels) == 0: - return scores.new_zeros(0), labels.new_zeros(0), masks.new_zeros( - 0, *masks.shape[-2:]), labels.new_zeros(0) - if mask_area is None: - mask_area = masks.sum((1, 2)).float() - else: - assert len(masks) == len(mask_area) - - # sort and keep top nms_pre - scores, sort_inds = torch.sort(scores, descending=True) - - keep_inds = sort_inds - if nms_pre > 0 and len(sort_inds) > nms_pre: - sort_inds = sort_inds[:nms_pre] - keep_inds = keep_inds[:nms_pre] - scores = scores[:nms_pre] - masks = masks[sort_inds] - mask_area = mask_area[sort_inds] - labels = labels[sort_inds] - - num_masks = len(labels) - flatten_masks = masks.reshape(num_masks, -1).float() - # inter. - inter_matrix = torch.mm(flatten_masks, flatten_masks.transpose(1, 0)) - expanded_mask_area = mask_area.expand(num_masks, num_masks) - # Upper triangle iou matrix. - iou_matrix = (inter_matrix / - (expanded_mask_area + expanded_mask_area.transpose(1, 0) - - inter_matrix)).triu(diagonal=1) - # label_specific matrix. - expanded_labels = labels.expand(num_masks, num_masks) - # Upper triangle label matrix. - label_matrix = (expanded_labels == expanded_labels.transpose( - 1, 0)).triu(diagonal=1) - - # IoU compensation - compensate_iou, _ = (iou_matrix * label_matrix).max(0) - compensate_iou = compensate_iou.expand(num_masks, - num_masks).transpose(1, 0) - - # IoU decay - decay_iou = iou_matrix * label_matrix - - # Calculate the decay_coefficient - if kernel == 'gaussian': - decay_matrix = torch.exp(-1 * sigma * (decay_iou**2)) - compensate_matrix = torch.exp(-1 * sigma * (compensate_iou**2)) - decay_coefficient, _ = (decay_matrix / compensate_matrix).min(0) - elif kernel == 'linear': - decay_matrix = (1 - decay_iou) / (1 - compensate_iou) - decay_coefficient, _ = decay_matrix.min(0) - else: - raise NotImplementedError( - f'{kernel} kernel is not supported in matrix nms!') - # update the score. - scores = scores * decay_coefficient - - if filter_thr > 0: - keep = scores >= filter_thr - keep_inds = keep_inds[keep] - if not keep.any(): - return scores.new_zeros(0), labels.new_zeros(0), masks.new_zeros( - 0, *masks.shape[-2:]), labels.new_zeros(0) - masks = masks[keep] - scores = scores[keep] - labels = labels[keep] - - # sort and keep top max_num - scores, sort_inds = torch.sort(scores, descending=True) - keep_inds = keep_inds[sort_inds] - if max_num > 0 and len(sort_inds) > max_num: - sort_inds = sort_inds[:max_num] - keep_inds = keep_inds[:max_num] - scores = scores[:max_num] - masks = masks[sort_inds] - labels = labels[sort_inds] - - return scores, labels, masks, keep_inds diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/merge_augs.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/merge_augs.py deleted file mode 100644 index 2ac4603a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/post_processing/merge_augs.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import numpy as np -import torch -from mmcv import ConfigDict -from mmcv.ops import nms - -from ..bbox import bbox_mapping_back - - -def merge_aug_proposals(aug_proposals, img_metas, cfg): - """Merge augmented proposals (multiscale, flip, etc.) - - Args: - aug_proposals (list[Tensor]): proposals from different testing - schemes, shape (n, 5). Note that they are not rescaled to the - original image size. 
- - img_metas (list[dict]): list of image info dict where each dict has: - 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - cfg (dict): rpn test config. - - Returns: - Tensor: shape (n, 4), proposals corresponding to original image scale. - """ - - cfg = copy.deepcopy(cfg) - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You set max_num and ' \ - f'max_per_img at the same time, but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - f'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set ' \ - f'iou_threshold in nms and ' \ - f'nms_thr at the same time, but get ' \ - f'{cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the nms_thr ' \ - f'which will be deprecated.' - - recovered_proposals = [] - for proposals, img_info in zip(aug_proposals, img_metas): - img_shape = img_info['img_shape'] - scale_factor = img_info['scale_factor'] - flip = img_info['flip'] - flip_direction = img_info['flip_direction'] - _proposals = proposals.clone() - _proposals[:, :4] = bbox_mapping_back(_proposals[:, :4], img_shape, - scale_factor, flip, - flip_direction) - recovered_proposals.append(_proposals) - aug_proposals = torch.cat(recovered_proposals, dim=0) - merged_proposals, _ = nms(aug_proposals[:, :4].contiguous(), - aug_proposals[:, -1].contiguous(), - cfg.nms.iou_threshold) - scores = merged_proposals[:, 4] - _, order = scores.sort(0, descending=True) - num = min(cfg.max_per_img, merged_proposals.shape[0]) - order = order[:num] - merged_proposals = merged_proposals[order, :] - return merged_proposals - - -def merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg): - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - img_shapes (list[Tensor]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. 
- - Returns: - tuple: (bboxes, scores) - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.stack(recovered_bboxes).mean(dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.stack(aug_scores).mean(dim=0) - return bboxes, scores - - -def merge_aug_scores(aug_scores): - """Merge augmented bbox scores.""" - if isinstance(aug_scores[0], torch.Tensor): - return torch.mean(torch.stack(aug_scores), dim=0) - else: - return np.mean(aug_scores, axis=0) - - -def merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None): - """Merge augmented mask prediction. - - Args: - aug_masks (list[ndarray]): shape (n, #class, h, w) - img_shapes (list[ndarray]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. - - Returns: - tuple: (bboxes, scores) - """ - recovered_masks = [] - for mask, img_info in zip(aug_masks, img_metas): - flip = img_info[0]['flip'] - if flip: - flip_direction = img_info[0]['flip_direction'] - if flip_direction == 'horizontal': - mask = mask[:, :, :, ::-1] - elif flip_direction == 'vertical': - mask = mask[:, :, ::-1, :] - elif flip_direction == 'diagonal': - mask = mask[:, :, :, ::-1] - mask = mask[:, :, ::-1, :] - else: - raise ValueError( - f"Invalid flipping direction '{flip_direction}'") - recovered_masks.append(mask) - - if weights is None: - merged_masks = np.mean(recovered_masks, axis=0) - else: - merged_masks = np.average( - np.array(recovered_masks), axis=0, weights=np.array(weights)) - return merged_masks diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/utils/__init__.py deleted file mode 100644 index 3f0d0708..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/utils/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .dist_utils import (DistOptimizerHook, all_reduce_dict, allreduce_grads, - reduce_mean, sync_random_seed) -from .misc import (center_of_mass, filter_scores_and_topk, flip_tensor, - generate_coordinate, mask2ndarray, multi_apply, - select_single_mlvl, unmap) - -__all__ = [ - 'allreduce_grads', 'DistOptimizerHook', 'reduce_mean', 'multi_apply', - 'unmap', 'mask2ndarray', 'flip_tensor', 'all_reduce_dict', - 'center_of_mass', 'generate_coordinate', 'select_single_mlvl', - 'filter_scores_and_topk', 'sync_random_seed' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/utils/dist_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/utils/dist_utils.py deleted file mode 100644 index 8760774f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/utils/dist_utils.py +++ /dev/null @@ -1,193 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import functools -import pickle -import warnings -from collections import OrderedDict - -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import OptimizerHook, get_dist_info -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - - -def _allreduce_coalesced(tensors, world_size, bucket_size_mb=-1): - if bucket_size_mb > 0: - bucket_size_bytes = bucket_size_mb * 1024 * 1024 - buckets = _take_tensors(tensors, bucket_size_bytes) - else: - buckets = OrderedDict() - for tensor in tensors: - tp = tensor.type() - if tp not in buckets: - buckets[tp] = [] - buckets[tp].append(tensor) - buckets = buckets.values() - - for bucket in buckets: - flat_tensors = _flatten_dense_tensors(bucket) - dist.all_reduce(flat_tensors) - flat_tensors.div_(world_size) - for tensor, synced in zip( - bucket, _unflatten_dense_tensors(flat_tensors, bucket)): - tensor.copy_(synced) - - -def allreduce_grads(params, coalesce=True, bucket_size_mb=-1): - """Allreduce gradients. - - Args: - params (list[torch.Parameters]): List of parameters of a model - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. - """ - grads = [ - param.grad.data for param in params - if param.requires_grad and param.grad is not None - ] - world_size = dist.get_world_size() - if coalesce: - _allreduce_coalesced(grads, world_size, bucket_size_mb) - else: - for tensor in grads: - dist.all_reduce(tensor.div_(world_size)) - - -class DistOptimizerHook(OptimizerHook): - """Deprecated optimizer hook for distributed training.""" - - def __init__(self, *args, **kwargs): - warnings.warn('"DistOptimizerHook" is deprecated, please switch to' - '"mmcv.runner.OptimizerHook".') - super().__init__(*args, **kwargs) - - -def reduce_mean(tensor): - """"Obtain the mean of tensor on different GPUs.""" - if not (dist.is_available() and dist.is_initialized()): - return tensor - tensor = tensor.clone() - dist.all_reduce(tensor.div_(dist.get_world_size()), op=dist.ReduceOp.SUM) - return tensor - - -def obj2tensor(pyobj, device='cuda'): - """Serialize picklable python object to tensor.""" - storage = torch.ByteStorage.from_buffer(pickle.dumps(pyobj)) - return torch.ByteTensor(storage).to(device=device) - - -def tensor2obj(tensor): - """Deserialize tensor to picklable python object.""" - return pickle.loads(tensor.cpu().numpy().tobytes()) - - -@functools.lru_cache() -def _get_global_gloo_group(): - """Return a process group based on gloo backend, containing all the ranks - The result is cached.""" - if dist.get_backend() == 'nccl': - return dist.new_group(backend='gloo') - else: - return dist.group.WORLD - - -def all_reduce_dict(py_dict, op='sum', group=None, to_float=True): - """Apply all reduce function for python dict object. - - The code is modified from https://github.com/Megvii- - BaseDetection/YOLOX/blob/main/yolox/utils/allreduce_norm.py. - - NOTE: make sure that py_dict in different ranks has the same keys and - the values should be in the same shape. Currently only supports - nccl backend. - - Args: - py_dict (dict): Dict to be applied all reduce op. - op (str): Operator, could be 'sum' or 'mean'. Default: 'sum' - group (:obj:`torch.distributed.group`, optional): Distributed group, - Default: None. - to_float (bool): Whether to convert all values of dict to float. - Default: True. - - Returns: - OrderedDict: reduced python dict object. 
- """ - warnings.warn( - 'group` is deprecated. Currently only supports NCCL backend.') - _, world_size = get_dist_info() - if world_size == 1: - return py_dict - - # all reduce logic across different devices. - py_key = list(py_dict.keys()) - if not isinstance(py_dict, OrderedDict): - py_key_tensor = obj2tensor(py_key) - dist.broadcast(py_key_tensor, src=0) - py_key = tensor2obj(py_key_tensor) - - tensor_shapes = [py_dict[k].shape for k in py_key] - tensor_numels = [py_dict[k].numel() for k in py_key] - - if to_float: - warnings.warn('Note: the "to_float" is True, you need to ' - 'ensure that the behavior is reasonable.') - flatten_tensor = torch.cat( - [py_dict[k].flatten().float() for k in py_key]) - else: - flatten_tensor = torch.cat([py_dict[k].flatten() for k in py_key]) - - dist.all_reduce(flatten_tensor, op=dist.ReduceOp.SUM) - if op == 'mean': - flatten_tensor /= world_size - - split_tensors = [ - x.reshape(shape) for x, shape in zip( - torch.split(flatten_tensor, tensor_numels), tensor_shapes) - ] - out_dict = {k: v for k, v in zip(py_key, split_tensors)} - if isinstance(py_dict, OrderedDict): - out_dict = OrderedDict(out_dict) - return out_dict - - -def sync_random_seed(seed=None, device='cuda'): - """Make sure different ranks share the same seed. - - All workers must call this function, otherwise it will deadlock. - This method is generally used in `DistributedSampler`, - because the seed should be identical across all processes - in the distributed group. - - In distributed sampling, different ranks should sample non-overlapped - data in the dataset. Therefore, this function is used to make sure that - each rank shuffles the data indices in the same order based - on the same seed. Then different ranks could use different indices - to select non-overlapped data from the same data list. - - Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - - Returns: - int: Seed to be used. - """ - if seed is None: - seed = np.random.randint(2**31) - assert isinstance(seed, int) - - rank, world_size = get_dist_info() - - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/utils/misc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/utils/misc.py deleted file mode 100644 index 14cb745e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/utils/misc.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial - -import numpy as np -import torch -from six.moves import map, zip - -from ..mask.structures import BitmapMasks, PolygonMasks - - -def multi_apply(func, *args, **kwargs): - """Apply function to a list of arguments. - - Note: - This function applies the ``func`` to multiple inputs and - map the multiple outputs of the ``func`` into different - list. Each list contains the same type of outputs corresponding - to different inputs. 
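`multi_apply`, documented above, is just `map` plus `zip`: call a function over parallel lists of inputs and transpose the per-call tuples into per-field lists. A minimal usage sketch with a toy function:

```python
from functools import partial

def multi_apply(func, *args, **kwargs):
    pfunc = partial(func, **kwargs) if kwargs else func
    return tuple(map(list, zip(*map(pfunc, *args))))

def stats(x, y, scale=1):
    return (x + y) * scale, (x - y) * scale

sums, diffs = multi_apply(stats, [1, 2, 3], [4, 5, 6], scale=10)
print(sums)   # [50, 70, 90]
print(diffs)  # [-30, -30, -30]
```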
- - Args: - func (Function): A function that will be applied to a list of - arguments - - Returns: - tuple(list): A tuple containing multiple list, each list contains \ - a kind of returned results by the function - """ - pfunc = partial(func, **kwargs) if kwargs else func - map_results = map(pfunc, *args) - return tuple(map(list, zip(*map_results))) - - -def unmap(data, count, inds, fill=0): - """Unmap a subset of item (data) back to the original set of items (of size - count)""" - if data.dim() == 1: - ret = data.new_full((count, ), fill) - ret[inds.type(torch.bool)] = data - else: - new_size = (count, ) + data.size()[1:] - ret = data.new_full(new_size, fill) - ret[inds.type(torch.bool), :] = data - return ret - - -def mask2ndarray(mask): - """Convert Mask to ndarray.. - - Args: - mask (:obj:`BitmapMasks` or :obj:`PolygonMasks` or - torch.Tensor or np.ndarray): The mask to be converted. - - Returns: - np.ndarray: Ndarray mask of shape (n, h, w) that has been converted - """ - if isinstance(mask, (BitmapMasks, PolygonMasks)): - mask = mask.to_ndarray() - elif isinstance(mask, torch.Tensor): - mask = mask.detach().cpu().numpy() - elif not isinstance(mask, np.ndarray): - raise TypeError(f'Unsupported {type(mask)} data type') - return mask - - -def flip_tensor(src_tensor, flip_direction): - """flip tensor base on flip_direction. - - Args: - src_tensor (Tensor): input feature map, shape (B, C, H, W). - flip_direction (str): The flipping direction. Options are - 'horizontal', 'vertical', 'diagonal'. - - Returns: - out_tensor (Tensor): Flipped tensor. - """ - assert src_tensor.ndim == 4 - valid_directions = ['horizontal', 'vertical', 'diagonal'] - assert flip_direction in valid_directions - if flip_direction == 'horizontal': - out_tensor = torch.flip(src_tensor, [3]) - elif flip_direction == 'vertical': - out_tensor = torch.flip(src_tensor, [2]) - else: - out_tensor = torch.flip(src_tensor, [2, 3]) - return out_tensor - - -def select_single_mlvl(mlvl_tensors, batch_id, detach=True): - """Extract a multi-scale single image tensor from a multi-scale batch - tensor based on batch index. - - Note: The default value of detach is True, because the proposal gradient - needs to be detached during the training of the two-stage model. E.g - Cascade Mask R-CNN. - - Args: - mlvl_tensors (list[Tensor]): Batch tensor for all scale levels, - each is a 4D-tensor. - batch_id (int): Batch index. - detach (bool): Whether detach gradient. Default True. - - Returns: - list[Tensor]: Multi-scale single image tensor. - """ - assert isinstance(mlvl_tensors, (list, tuple)) - num_levels = len(mlvl_tensors) - - if detach: - mlvl_tensor_list = [ - mlvl_tensors[i][batch_id].detach() for i in range(num_levels) - ] - else: - mlvl_tensor_list = [ - mlvl_tensors[i][batch_id] for i in range(num_levels) - ] - return mlvl_tensor_list - - -def filter_scores_and_topk(scores, score_thr, topk, results=None): - """Filter results using score threshold and topk candidates. - - Args: - scores (Tensor): The scores, shape (num_bboxes, K). - score_thr (float): The score filter threshold. - topk (int): The number of topk candidates. - results (dict or list or Tensor, Optional): The results to - which the filtering rule is to be applied. The shape - of each item is (num_bboxes, N). - - Returns: - tuple: Filtered results - - - scores (Tensor): The scores after being filtered, \ - shape (num_bboxes_filtered, ). - - labels (Tensor): The class labels, shape \ - (num_bboxes_filtered, ). 
- - anchor_idxs (Tensor): The anchor indexes, shape \ - (num_bboxes_filtered, ). - - filtered_results (dict or list or Tensor, Optional): \ - The filtered results. The shape of each item is \ - (num_bboxes_filtered, N). - """ - valid_mask = scores > score_thr - scores = scores[valid_mask] - valid_idxs = torch.nonzero(valid_mask) - - num_topk = min(topk, valid_idxs.size(0)) - # torch.sort is actually faster than .topk (at least on GPUs) - scores, idxs = scores.sort(descending=True) - scores = scores[:num_topk] - topk_idxs = valid_idxs[idxs[:num_topk]] - keep_idxs, labels = topk_idxs.unbind(dim=1) - - filtered_results = None - if results is not None: - if isinstance(results, dict): - filtered_results = {k: v[keep_idxs] for k, v in results.items()} - elif isinstance(results, list): - filtered_results = [result[keep_idxs] for result in results] - elif isinstance(results, torch.Tensor): - filtered_results = results[keep_idxs] - else: - raise NotImplementedError(f'Only supports dict or list or Tensor, ' - f'but get {type(results)}.') - return scores, labels, keep_idxs, filtered_results - - -def center_of_mass(mask, esp=1e-6): - """Calculate the centroid coordinates of the mask. - - Args: - mask (Tensor): The mask to be calculated, shape (h, w). - esp (float): Avoid dividing by zero. Default: 1e-6. - - Returns: - tuple[Tensor]: the coordinates of the center point of the mask. - - - center_h (Tensor): the center point of the height. - - center_w (Tensor): the center point of the width. - """ - h, w = mask.shape - grid_h = torch.arange(h, device=mask.device)[:, None] - grid_w = torch.arange(w, device=mask.device) - normalizer = mask.sum().float().clamp(min=esp) - center_h = (mask * grid_h).sum() / normalizer - center_w = (mask * grid_w).sum() / normalizer - return center_h, center_w - - -def generate_coordinate(featmap_sizes, device='cuda'): - """Generate the coordinate. - - Args: - featmap_sizes (tuple): The feature to be calculated, - of shape (N, C, W, H). - device (str): The device where the feature will be put on. - Returns: - coord_feat (Tensor): The coordinate feature, of shape (N, 2, W, H). - """ - - x_range = torch.linspace(-1, 1, featmap_sizes[-1], device=device) - y_range = torch.linspace(-1, 1, featmap_sizes[-2], device=device) - y, x = torch.meshgrid(y_range, x_range) - y = y.expand([featmap_sizes[0], 1, -1, -1]) - x = x.expand([featmap_sizes[0], 1, -1, -1]) - coord_feat = torch.cat([x, y], 1) - - return coord_feat diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/visualization/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/visualization/__init__.py deleted file mode 100644 index 2eb17c4b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/visualization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .image import (color_val_matplotlib, imshow_det_bboxes, - imshow_gt_det_bboxes) -from .palette import get_palette, palette_val - -__all__ = [ - 'imshow_det_bboxes', 'imshow_gt_det_bboxes', 'color_val_matplotlib', - 'palette_val', 'get_palette' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/visualization/image.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/visualization/image.py deleted file mode 100644 index c574b2d4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/visualization/image.py +++ /dev/null @@ -1,524 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
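`generate_coordinate` in the misc.py hunk above builds a CoordConv-style map of normalised x/y coordinates that is concatenated to feature maps. A version-agnostic sketch of the same idea (the helper name here is made up):

```python
import torch

def coord_features(n, h, w, device='cpu'):
    # (N, 2, H, W) feature map holding x and y coordinates in [-1, 1].
    xs = torch.linspace(-1, 1, w, device=device).view(1, 1, 1, w)
    ys = torch.linspace(-1, 1, h, device=device).view(1, 1, h, 1)
    x = xs.expand(n, 1, h, w)
    y = ys.expand(n, 1, h, w)
    return torch.cat([x, y], dim=1)

feat = torch.randn(2, 256, 32, 32)
feat = torch.cat([feat, coord_features(2, 32, 32)], dim=1)  # now 258 channels
```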
-import cv2 -import matplotlib.pyplot as plt -import mmcv -import numpy as np -import pycocotools.mask as mask_util -from matplotlib.collections import PatchCollection -from matplotlib.patches import Polygon - -from mmdet.core.evaluation.panoptic_utils import INSTANCE_OFFSET -from ..mask.structures import bitmap_to_polygon -from ..utils import mask2ndarray -from .palette import get_palette, palette_val - -__all__ = [ - 'color_val_matplotlib', 'draw_masks', 'draw_bboxes', 'draw_labels', - 'imshow_det_bboxes', 'imshow_gt_det_bboxes' -] - -EPS = 1e-2 - - -def color_val_matplotlib(color): - """Convert various input in BGR order to normalized RGB matplotlib color - tuples. - - Args: - color (:obj`Color` | str | tuple | int | ndarray): Color inputs. - - Returns: - tuple[float]: A tuple of 3 normalized floats indicating RGB channels. - """ - color = mmcv.color_val(color) - color = [color / 255 for color in color[::-1]] - return tuple(color) - - -def _get_adaptive_scales(areas, min_area=800, max_area=30000): - """Get adaptive scales according to areas. - - The scale range is [0.5, 1.0]. When the area is less than - ``'min_area'``, the scale is 0.5 while the area is larger than - ``'max_area'``, the scale is 1.0. - - Args: - areas (ndarray): The areas of bboxes or masks with the - shape of (n, ). - min_area (int): Lower bound areas for adaptive scales. - Default: 800. - max_area (int): Upper bound areas for adaptive scales. - Default: 30000. - - Returns: - ndarray: The adaotive scales with the shape of (n, ). - """ - scales = 0.5 + (areas - min_area) / (max_area - min_area) - scales = np.clip(scales, 0.5, 1.0) - return scales - - -def _get_bias_color(base, max_dist=30): - """Get different colors for each masks. - - Get different colors for each masks by adding a bias - color to the base category color. - Args: - base (ndarray): The base category color with the shape - of (3, ). - max_dist (int): The max distance of bias. Default: 30. - - Returns: - ndarray: The new color for a mask with the shape of (3, ). - """ - new_color = base + np.random.randint( - low=-max_dist, high=max_dist + 1, size=3) - return np.clip(new_color, 0, 255, new_color) - - -def draw_bboxes(ax, bboxes, color='g', alpha=0.8, thickness=2): - """Draw bounding boxes on the axes. - - Args: - ax (matplotlib.Axes): The input axes. - bboxes (ndarray): The input bounding boxes with the shape - of (n, 4). - color (list[tuple] | matplotlib.color): the colors for each - bounding boxes. - alpha (float): Transparency of bounding boxes. Default: 0.8. - thickness (int): Thickness of lines. Default: 2. - - Returns: - matplotlib.Axes: The result axes. - """ - polygons = [] - for i, bbox in enumerate(bboxes): - bbox_int = bbox.astype(np.int32) - poly = [[bbox_int[0], bbox_int[1]], [bbox_int[0], bbox_int[3]], - [bbox_int[2], bbox_int[3]], [bbox_int[2], bbox_int[1]]] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - p = PatchCollection( - polygons, - facecolor='none', - edgecolors=color, - linewidths=thickness, - alpha=alpha) - ax.add_collection(p) - - return ax - - -def draw_labels(ax, - labels, - positions, - scores=None, - class_names=None, - color='w', - font_size=8, - scales=None, - horizontal_alignment='left'): - """Draw labels on the axes. - - Args: - ax (matplotlib.Axes): The input axes. - labels (ndarray): The labels with the shape of (n, ). - positions (ndarray): The positions to draw each labels. - scores (ndarray): The scores for each labels. - class_names (list[str]): The class names. 
- color (list[tuple] | matplotlib.color): The colors for labels. - font_size (int): Font size of texts. Default: 8. - scales (list[float]): Scales of texts. Default: None. - horizontal_alignment (str): The horizontal alignment method of - texts. Default: 'left'. - - Returns: - matplotlib.Axes: The result axes. - """ - for i, (pos, label) in enumerate(zip(positions, labels)): - label_text = class_names[ - label] if class_names is not None else f'class {label}' - if scores is not None: - label_text += f'|{scores[i]:.02f}' - text_color = color[i] if isinstance(color, list) else color - - font_size_mask = font_size if scales is None else font_size * scales[i] - ax.text( - pos[0], - pos[1], - f'{label_text}', - bbox={ - 'facecolor': 'black', - 'alpha': 0.8, - 'pad': 0.7, - 'edgecolor': 'none' - }, - color=text_color, - fontsize=font_size_mask, - verticalalignment='top', - horizontalalignment=horizontal_alignment) - - return ax - - -def draw_masks(ax, img, masks, color=None, with_edge=True, alpha=0.8): - """Draw masks on the image and their edges on the axes. - - Args: - ax (matplotlib.Axes): The input axes. - img (ndarray): The image with the shape of (3, h, w). - masks (ndarray): The masks with the shape of (n, h, w). - color (ndarray): The colors for each masks with the shape - of (n, 3). - with_edge (bool): Whether to draw edges. Default: True. - alpha (float): Transparency of bounding boxes. Default: 0.8. - - Returns: - matplotlib.Axes: The result axes. - ndarray: The result image. - """ - taken_colors = set([0, 0, 0]) - if color is None: - random_colors = np.random.randint(0, 255, (masks.size(0), 3)) - color = [tuple(c) for c in random_colors] - color = np.array(color, dtype=np.uint8) - polygons = [] - for i, mask in enumerate(masks): - if with_edge: - contours, _ = bitmap_to_polygon(mask) - polygons += [Polygon(c) for c in contours] - - color_mask = color[i] - while tuple(color_mask) in taken_colors: - color_mask = _get_bias_color(color_mask) - taken_colors.add(tuple(color_mask)) - - mask = mask.astype(bool) - img[mask] = img[mask] * (1 - alpha) + color_mask * alpha - - p = PatchCollection( - polygons, facecolor='none', edgecolors='w', linewidths=1, alpha=0.8) - ax.add_collection(p) - - return ax, img - - -def imshow_det_bboxes(img, - bboxes=None, - labels=None, - segms=None, - class_names=None, - score_thr=0, - bbox_color='green', - text_color='green', - mask_color=None, - thickness=2, - font_size=8, - win_name='', - show=True, - wait_time=0, - out_file=None): - """Draw bboxes and class labels (with scores) on an image. - - Args: - img (str | ndarray): The image to be displayed. - bboxes (ndarray): Bounding boxes (with scores), shaped (n, 4) or - (n, 5). - labels (ndarray): Labels of bboxes. - segms (ndarray | None): Masks, shaped (n,h,w) or None. - class_names (list[str]): Names of each classes. - score_thr (float): Minimum score of bboxes to be shown. Default: 0. - bbox_color (list[tuple] | tuple | str | None): Colors of bbox lines. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: 'green'. - text_color (list[tuple] | tuple | str | None): Colors of texts. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: 'green'. - mask_color (list[tuple] | tuple | str | None, optional): Colors of - masks. If a single color is given, it will be applied to all - classes. The tuple of color should be in RGB order. - Default: None. - thickness (int): Thickness of lines. 
Default: 2. - font_size (int): Font size of texts. Default: 13. - show (bool): Whether to show the image. Default: True. - win_name (str): The window name. Default: ''. - wait_time (float): Value of waitKey param. Default: 0. - out_file (str, optional): The filename to write the image. - Default: None. - - Returns: - ndarray: The image with bboxes drawn on it. - """ - assert bboxes is None or bboxes.ndim == 2, \ - f' bboxes ndim should be 2, but its ndim is {bboxes.ndim}.' - assert labels.ndim == 1, \ - f' labels ndim should be 1, but its ndim is {labels.ndim}.' - assert bboxes is None or bboxes.shape[1] == 4 or bboxes.shape[1] == 5, \ - f' bboxes.shape[1] should be 4 or 5, but its {bboxes.shape[1]}.' - assert bboxes is None or bboxes.shape[0] <= labels.shape[0], \ - 'labels.shape[0] should not be less than bboxes.shape[0].' - assert segms is None or segms.shape[0] == labels.shape[0], \ - 'segms.shape[0] and labels.shape[0] should have the same length.' - assert segms is not None or bboxes is not None, \ - 'segms and bboxes should not be None at the same time.' - - img = mmcv.imread(img).astype(np.uint8) - - if score_thr > 0: - assert bboxes is not None and bboxes.shape[1] == 5 - scores = bboxes[:, -1] - inds = scores > score_thr - bboxes = bboxes[inds, :] - labels = labels[inds] - if segms is not None: - segms = segms[inds, ...] - - img = mmcv.bgr2rgb(img) - width, height = img.shape[1], img.shape[0] - img = np.ascontiguousarray(img) - - fig = plt.figure(win_name, frameon=False) - plt.title(win_name) - canvas = fig.canvas - dpi = fig.get_dpi() - # add a small EPS to avoid precision lost due to matplotlib's truncation - # (https://github.com/matplotlib/matplotlib/issues/15363) - fig.set_size_inches((width + EPS) / dpi, (height + EPS) / dpi) - - # remove white edges by set subplot margin - plt.subplots_adjust(left=0, right=1, bottom=0, top=1) - ax = plt.gca() - ax.axis('off') - - max_label = int(max(labels) if len(labels) > 0 else 0) - text_palette = palette_val(get_palette(text_color, max_label + 1)) - text_colors = [text_palette[label] for label in labels] - - num_bboxes = 0 - if bboxes is not None: - num_bboxes = bboxes.shape[0] - bbox_palette = palette_val(get_palette(bbox_color, max_label + 1)) - colors = [bbox_palette[label] for label in labels[:num_bboxes]] - draw_bboxes(ax, bboxes, colors, alpha=0.8, thickness=thickness) - - horizontal_alignment = 'left' - positions = bboxes[:, :2].astype(np.int32) + thickness - areas = (bboxes[:, 3] - bboxes[:, 1]) * (bboxes[:, 2] - bboxes[:, 0]) - scales = _get_adaptive_scales(areas) - scores = bboxes[:, 4] if bboxes.shape[1] == 5 else None - draw_labels( - ax, - labels[:num_bboxes], - positions, - scores=scores, - class_names=class_names, - color=text_colors, - font_size=font_size, - scales=scales, - horizontal_alignment=horizontal_alignment) - - if segms is not None: - mask_palette = get_palette(mask_color, max_label + 1) - colors = [mask_palette[label] for label in labels] - colors = np.array(colors, dtype=np.uint8) - draw_masks(ax, img, segms, colors, with_edge=True) - - if num_bboxes < segms.shape[0]: - segms = segms[num_bboxes:] - horizontal_alignment = 'center' - areas = [] - positions = [] - for mask in segms: - _, _, stats, centroids = cv2.connectedComponentsWithStats( - mask.astype(np.uint8), connectivity=8) - largest_id = np.argmax(stats[1:, -1]) + 1 - positions.append(centroids[largest_id]) - areas.append(stats[largest_id, -1]) - areas = np.stack(areas, axis=0) - scales = _get_adaptive_scales(areas) - draw_labels( - ax, - 
labels[num_bboxes:], - positions, - class_names=class_names, - color=text_colors, - font_size=font_size, - scales=scales, - horizontal_alignment=horizontal_alignment) - - plt.imshow(img) - - stream, _ = canvas.print_to_buffer() - buffer = np.frombuffer(stream, dtype='uint8') - img_rgba = buffer.reshape(height, width, 4) - rgb, alpha = np.split(img_rgba, [3], axis=2) - img = rgb.astype('uint8') - img = mmcv.rgb2bgr(img) - - if show: - # We do not use cv2 for display because in some cases, opencv will - # conflict with Qt, it will output a warning: Current thread - # is not the object's thread. You can refer to - # https://github.com/opencv/opencv-python/issues/46 for details - if wait_time == 0: - plt.show() - else: - plt.show(block=False) - plt.pause(wait_time) - if out_file is not None: - mmcv.imwrite(img, out_file) - - plt.close() - - return img - - -def imshow_gt_det_bboxes(img, - annotation, - result, - class_names=None, - score_thr=0, - gt_bbox_color=(61, 102, 255), - gt_text_color=(200, 200, 200), - gt_mask_color=(61, 102, 255), - det_bbox_color=(241, 101, 72), - det_text_color=(200, 200, 200), - det_mask_color=(241, 101, 72), - thickness=2, - font_size=13, - win_name='', - show=True, - wait_time=0, - out_file=None): - """General visualization GT and result function. - - Args: - img (str | ndarray): The image to be displayed. - annotation (dict): Ground truth annotations where contain keys of - 'gt_bboxes' and 'gt_labels' or 'gt_masks'. - result (tuple[list] | list): The detection result, can be either - (bbox, segm) or just bbox. - class_names (list[str]): Names of each classes. - score_thr (float): Minimum score of bboxes to be shown. Default: 0. - gt_bbox_color (list[tuple] | tuple | str | None): Colors of bbox lines. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (61, 102, 255). - gt_text_color (list[tuple] | tuple | str | None): Colors of texts. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (200, 200, 200). - gt_mask_color (list[tuple] | tuple | str | None, optional): Colors of - masks. If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (61, 102, 255). - det_bbox_color (list[tuple] | tuple | str | None):Colors of bbox lines. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (241, 101, 72). - det_text_color (list[tuple] | tuple | str | None):Colors of texts. - If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (200, 200, 200). - det_mask_color (list[tuple] | tuple | str | None, optional): Color of - masks. If a single color is given, it will be applied to all classes. - The tuple of color should be in RGB order. Default: (241, 101, 72). - thickness (int): Thickness of lines. Default: 2. - font_size (int): Font size of texts. Default: 13. - win_name (str): The window name. Default: ''. - show (bool): Whether to show the image. Default: True. - wait_time (float): Value of waitKey param. Default: 0. - out_file (str, optional): The filename to write the image. - Default: None. - - Returns: - ndarray: The image with bboxes or masks drawn on it. 
- """ - assert 'gt_bboxes' in annotation - assert 'gt_labels' in annotation - assert isinstance(result, (tuple, list, dict)), 'Expected ' \ - f'tuple or list or dict, but get {type(result)}' - - gt_bboxes = annotation['gt_bboxes'] - gt_labels = annotation['gt_labels'] - gt_masks = annotation.get('gt_masks', None) - if gt_masks is not None: - gt_masks = mask2ndarray(gt_masks) - - gt_seg = annotation.get('gt_semantic_seg', None) - if gt_seg is not None: - pad_value = 255 # the padding value of gt_seg - sem_labels = np.unique(gt_seg) - all_labels = np.concatenate((gt_labels, sem_labels), axis=0) - all_labels, counts = np.unique(all_labels, return_counts=True) - stuff_labels = all_labels[np.logical_and(counts < 2, - all_labels != pad_value)] - stuff_masks = gt_seg[None] == stuff_labels[:, None, None] - gt_labels = np.concatenate((gt_labels, stuff_labels), axis=0) - gt_masks = np.concatenate((gt_masks, stuff_masks.astype(np.uint8)), - axis=0) - # If you need to show the bounding boxes, - # please comment the following line - # gt_bboxes = None - - img = mmcv.imread(img) - - img = imshow_det_bboxes( - img, - gt_bboxes, - gt_labels, - gt_masks, - class_names=class_names, - bbox_color=gt_bbox_color, - text_color=gt_text_color, - mask_color=gt_mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=False) - - if not isinstance(result, dict): - if isinstance(result, tuple): - bbox_result, segm_result = result - if isinstance(segm_result, tuple): - segm_result = segm_result[0] # ms rcnn - else: - bbox_result, segm_result = result, None - - bboxes = np.vstack(bbox_result) - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - - segms = None - if segm_result is not None and len(labels) > 0: # non empty - segms = mmcv.concat_list(segm_result) - segms = mask_util.decode(segms) - segms = segms.transpose(2, 0, 1) - else: - assert class_names is not None, 'We need to know the number ' \ - 'of classes.' - VOID = len(class_names) - bboxes = None - pan_results = result['pan_results'] - # keep objects ahead - ids = np.unique(pan_results)[::-1] - legal_indices = ids != VOID - ids = ids[legal_indices] - labels = np.array([id % INSTANCE_OFFSET for id in ids], dtype=np.int64) - segms = (pan_results[None] == ids[:, None, None]) - - img = imshow_det_bboxes( - img, - bboxes, - labels, - segms=segms, - class_names=class_names, - score_thr=score_thr, - bbox_color=det_bbox_color, - text_color=det_text_color, - mask_color=det_mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - return img diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/visualization/palette.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/visualization/palette.py deleted file mode 100644 index 11692cdd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/core/visualization/palette.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np - - -def palette_val(palette): - """Convert palette to matplotlib palette. - - Args: - palette List[tuple]: A list of color tuples. - - Returns: - List[tuple[float]]: A list of RGB matplotlib color tuples. 
- """ - new_palette = [] - for color in palette: - color = [c / 255 for c in color] - new_palette.append(tuple(color)) - return new_palette - - -def get_palette(palette, num_classes): - """Get palette from various inputs. - - Args: - palette (list[tuple] | str | tuple | :obj:`Color`): palette inputs. - num_classes (int): the number of classes. - - Returns: - list[tuple[int]]: A list of color tuples. - """ - assert isinstance(num_classes, int) - - if isinstance(palette, list): - dataset_palette = palette - elif isinstance(palette, tuple): - dataset_palette = [palette] * num_classes - elif palette == 'random' or palette is None: - state = np.random.get_state() - # random color - np.random.seed(42) - palette = np.random.randint(0, 256, size=(num_classes, 3)) - np.random.set_state(state) - dataset_palette = [tuple(c) for c in palette] - elif palette == 'coco': - from mmdet.datasets import CocoDataset, CocoPanopticDataset - dataset_palette = CocoDataset.PALETTE - if len(dataset_palette) < num_classes: - dataset_palette = CocoPanopticDataset.PALETTE - elif palette == 'citys': - from mmdet.datasets import CityscapesDataset - dataset_palette = CityscapesDataset.PALETTE - elif palette == 'voc': - from mmdet.datasets import VOCDataset - dataset_palette = VOCDataset.PALETTE - elif mmcv.is_str(palette): - dataset_palette = [mmcv.color_val(palette)[::-1]] * num_classes - else: - raise TypeError(f'Invalid type for palette: {type(palette)}') - - assert len(dataset_palette) >= num_classes, \ - 'The length of palette should not be less than `num_classes`.' - return dataset_palette diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/__init__.py deleted file mode 100644 index f251d07e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset -from .cityscapes import CityscapesDataset -from .coco import CocoDataset -from .coco_panoptic import CocoPanopticDataset -from .custom import CustomDataset -from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset, - MultiImageMixDataset, RepeatDataset) -from .deepfashion import DeepFashionDataset -from .lvis import LVISDataset, LVISV1Dataset, LVISV05Dataset -from .openimages import OpenImagesChallengeDataset, OpenImagesDataset -from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler -from .utils import (NumClassCheckHook, get_loading_pipeline, - replace_ImageToTensor) -from .voc import VOCDataset -from .wider_face import WIDERFaceDataset -from .xml_style import XMLDataset - -__all__ = [ - 'CustomDataset', 'XMLDataset', 'CocoDataset', 'DeepFashionDataset', - 'VOCDataset', 'CityscapesDataset', 'LVISDataset', 'LVISV05Dataset', - 'LVISV1Dataset', 'GroupSampler', 'DistributedGroupSampler', - 'DistributedSampler', 'build_dataloader', 'ConcatDataset', 'RepeatDataset', - 'ClassBalancedDataset', 'WIDERFaceDataset', 'DATASETS', 'PIPELINES', - 'build_dataset', 'replace_ImageToTensor', 'get_loading_pipeline', - 'NumClassCheckHook', 'CocoPanopticDataset', 'MultiImageMixDataset', - 'OpenImagesDataset', 'OpenImagesChallengeDataset' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/api_wrappers/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/api_wrappers/__init__.py deleted file mode 100644 index af855759..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/api_wrappers/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .coco_api import COCO, COCOeval -from .panoptic_evaluation import pq_compute_multi_core, pq_compute_single_core - -__all__ = [ - 'COCO', 'COCOeval', 'pq_compute_multi_core', 'pq_compute_single_core' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/api_wrappers/coco_api.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/api_wrappers/coco_api.py deleted file mode 100644 index eef6341e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/api_wrappers/coco_api.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# This file add snake case alias for coco api - -import warnings - -import pycocotools -from pycocotools.coco import COCO as _COCO -from pycocotools.cocoeval import COCOeval as _COCOeval - - -class COCO(_COCO): - """This class is almost the same as official pycocotools package. - - It implements some snake case function aliases. So that the COCO class has - the same interface as LVIS class. - """ - - def __init__(self, annotation_file=None): - if getattr(pycocotools, '__version__', '0') >= '12.0.2': - warnings.warn( - 'mmpycocotools is deprecated. 
Please install official pycocotools by "pip install pycocotools"', # noqa: E501 - UserWarning) - super().__init__(annotation_file=annotation_file) - self.img_ann_map = self.imgToAnns - self.cat_img_map = self.catToImgs - - def get_ann_ids(self, img_ids=[], cat_ids=[], area_rng=[], iscrowd=None): - return self.getAnnIds(img_ids, cat_ids, area_rng, iscrowd) - - def get_cat_ids(self, cat_names=[], sup_names=[], cat_ids=[]): - return self.getCatIds(cat_names, sup_names, cat_ids) - - def get_img_ids(self, img_ids=[], cat_ids=[]): - return self.getImgIds(img_ids, cat_ids) - - def load_anns(self, ids): - return self.loadAnns(ids) - - def load_cats(self, ids): - return self.loadCats(ids) - - def load_imgs(self, ids): - return self.loadImgs(ids) - - -# just for the ease of import -COCOeval = _COCOeval diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/api_wrappers/panoptic_evaluation.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/api_wrappers/panoptic_evaluation.py deleted file mode 100644 index b29d5007..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/api_wrappers/panoptic_evaluation.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -# Copyright (c) 2018, Alexander Kirillov -# This file supports `file_client` for `panopticapi`, -# the source code is copied from `panopticapi`, -# only the way to load the gt images is modified. -import multiprocessing -import os - -import mmcv -import numpy as np - -try: - from panopticapi.evaluation import OFFSET, VOID, PQStat - from panopticapi.utils import rgb2id -except ImportError: - PQStat = None - rgb2id = None - VOID = 0 - OFFSET = 256 * 256 * 256 - - -def pq_compute_single_core(proc_id, - annotation_set, - gt_folder, - pred_folder, - categories, - file_client=None): - """The single core function to evaluate the metric of Panoptic - Segmentation. - - Same as the function with the same name in `panopticapi`. Only the function - to load the images is changed to use the file client. - - Args: - proc_id (int): The id of the mini process. - gt_folder (str): The path of the ground truth images. - pred_folder (str): The path of the prediction images. - categories (str): The categories of the dataset. - file_client (object): The file client of the dataset. If None, - the backend will be set to `disk`. - """ - if PQStat is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - if file_client is None: - file_client_args = dict(backend='disk') - file_client = mmcv.FileClient(**file_client_args) - - pq_stat = PQStat() - - idx = 0 - for gt_ann, pred_ann in annotation_set: - if idx % 100 == 0: - print('Core: {}, {} from {} images processed'.format( - proc_id, idx, len(annotation_set))) - idx += 1 - # The gt images can be on the local disk or `ceph`, so we use - # file_client here. - img_bytes = file_client.get( - os.path.join(gt_folder, gt_ann['file_name'])) - pan_gt = mmcv.imfrombytes(img_bytes, flag='color', channel_order='rgb') - pan_gt = rgb2id(pan_gt) - - # The predictions can only be on the local dist now. 
- pan_pred = mmcv.imread( - os.path.join(pred_folder, pred_ann['file_name']), - flag='color', - channel_order='rgb') - pan_pred = rgb2id(pan_pred) - - gt_segms = {el['id']: el for el in gt_ann['segments_info']} - pred_segms = {el['id']: el for el in pred_ann['segments_info']} - - # predicted segments area calculation + prediction sanity checks - pred_labels_set = set(el['id'] for el in pred_ann['segments_info']) - labels, labels_cnt = np.unique(pan_pred, return_counts=True) - for label, label_cnt in zip(labels, labels_cnt): - if label not in pred_segms: - if label == VOID: - continue - raise KeyError( - 'In the image with ID {} segment with ID {} is ' - 'presented in PNG and not presented in JSON.'.format( - gt_ann['image_id'], label)) - pred_segms[label]['area'] = label_cnt - pred_labels_set.remove(label) - if pred_segms[label]['category_id'] not in categories: - raise KeyError( - 'In the image with ID {} segment with ID {} has ' - 'unknown category_id {}.'.format( - gt_ann['image_id'], label, - pred_segms[label]['category_id'])) - if len(pred_labels_set) != 0: - raise KeyError( - 'In the image with ID {} the following segment IDs {} ' - 'are presented in JSON and not presented in PNG.'.format( - gt_ann['image_id'], list(pred_labels_set))) - - # confusion matrix calculation - pan_gt_pred = pan_gt.astype(np.uint64) * OFFSET + pan_pred.astype( - np.uint64) - gt_pred_map = {} - labels, labels_cnt = np.unique(pan_gt_pred, return_counts=True) - for label, intersection in zip(labels, labels_cnt): - gt_id = label // OFFSET - pred_id = label % OFFSET - gt_pred_map[(gt_id, pred_id)] = intersection - - # count all matched pairs - gt_matched = set() - pred_matched = set() - for label_tuple, intersection in gt_pred_map.items(): - gt_label, pred_label = label_tuple - if gt_label not in gt_segms: - continue - if pred_label not in pred_segms: - continue - if gt_segms[gt_label]['iscrowd'] == 1: - continue - if gt_segms[gt_label]['category_id'] != pred_segms[pred_label][ - 'category_id']: - continue - - union = pred_segms[pred_label]['area'] + gt_segms[gt_label][ - 'area'] - intersection - gt_pred_map.get((VOID, pred_label), 0) - iou = intersection / union - if iou > 0.5: - pq_stat[gt_segms[gt_label]['category_id']].tp += 1 - pq_stat[gt_segms[gt_label]['category_id']].iou += iou - gt_matched.add(gt_label) - pred_matched.add(pred_label) - - # count false negatives - crowd_labels_dict = {} - for gt_label, gt_info in gt_segms.items(): - if gt_label in gt_matched: - continue - # crowd segments are ignored - if gt_info['iscrowd'] == 1: - crowd_labels_dict[gt_info['category_id']] = gt_label - continue - pq_stat[gt_info['category_id']].fn += 1 - - # count false positives - for pred_label, pred_info in pred_segms.items(): - if pred_label in pred_matched: - continue - # intersection of the segment with VOID - intersection = gt_pred_map.get((VOID, pred_label), 0) - # plus intersection with corresponding CROWD region if it exists - if pred_info['category_id'] in crowd_labels_dict: - intersection += gt_pred_map.get( - (crowd_labels_dict[pred_info['category_id']], pred_label), - 0) - # predicted segment is ignored if more than half of - # the segment correspond to VOID and CROWD regions - if intersection / pred_info['area'] > 0.5: - continue - pq_stat[pred_info['category_id']].fp += 1 - print('Core: {}, all {} images processed'.format(proc_id, - len(annotation_set))) - return pq_stat - - -def pq_compute_multi_core(matched_annotations_list, - gt_folder, - pred_folder, - categories, - file_client=None, - nproc=32): - 
"""Evaluate the metrics of Panoptic Segmentation with multithreading. - - Same as the function with the same name in `panopticapi`. - - Args: - matched_annotations_list (list): The matched annotation list. Each - element is a tuple of annotations of the same image with the - format (gt_anns, pred_anns). - gt_folder (str): The path of the ground truth images. - pred_folder (str): The path of the prediction images. - categories (str): The categories of the dataset. - file_client (object): The file client of the dataset. If None, - the backend will be set to `disk`. - nproc (int): Number of processes for panoptic quality computing. - Defaults to 32. When `nproc` exceeds the number of cpu cores, - the number of cpu cores is used. - """ - if PQStat is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - if file_client is None: - file_client_args = dict(backend='disk') - file_client = mmcv.FileClient(**file_client_args) - - cpu_num = min(nproc, multiprocessing.cpu_count()) - - annotations_split = np.array_split(matched_annotations_list, cpu_num) - print('Number of cores: {}, images per core: {}'.format( - cpu_num, len(annotations_split[0]))) - workers = multiprocessing.Pool(processes=cpu_num) - processes = [] - for proc_id, annotation_set in enumerate(annotations_split): - p = workers.apply_async(pq_compute_single_core, - (proc_id, annotation_set, gt_folder, - pred_folder, categories, file_client)) - processes.append(p) - - # Close the process pool, otherwise it will lead to memory - # leaking problems. - workers.close() - workers.join() - - pq_stat = PQStat() - for p in processes: - pq_stat += p.get() - - return pq_stat diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/builder.py deleted file mode 100644 index 1936296a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/builder.py +++ /dev/null @@ -1,215 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import platform -import random -import warnings -from functools import partial - -import numpy as np -import torch -from mmcv.parallel import collate -from mmcv.runner import get_dist_info -from mmcv.utils import TORCH_VERSION, Registry, build_from_cfg, digit_version -from torch.utils.data import DataLoader - -from .samplers import (ClassAwareSampler, DistributedGroupSampler, - DistributedSampler, GroupSampler, InfiniteBatchSampler, - InfiniteGroupBatchSampler) - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - base_soft_limit = rlimit[0] - hard_limit = rlimit[1] - soft_limit = min(max(4096, base_soft_limit), hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def _concat_dataset(cfg, default_args=None): - from .dataset_wrappers import ConcatDataset - ann_files = cfg['ann_file'] - img_prefixes = cfg.get('img_prefix', None) - seg_prefixes = cfg.get('seg_prefix', None) - proposal_files = cfg.get('proposal_file', None) - separate_eval = cfg.get('separate_eval', True) - - datasets = [] - num_dset = len(ann_files) - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - # pop 'separate_eval' since it is not a valid key for common datasets. 
- if 'separate_eval' in data_cfg: - data_cfg.pop('separate_eval') - data_cfg['ann_file'] = ann_files[i] - if isinstance(img_prefixes, (list, tuple)): - data_cfg['img_prefix'] = img_prefixes[i] - if isinstance(seg_prefixes, (list, tuple)): - data_cfg['seg_prefix'] = seg_prefixes[i] - if isinstance(proposal_files, (list, tuple)): - data_cfg['proposal_file'] = proposal_files[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets, separate_eval) - - -def build_dataset(cfg, default_args=None): - from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset, - MultiImageMixDataset, RepeatDataset) - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'ConcatDataset': - dataset = ConcatDataset( - [build_dataset(c, default_args) for c in cfg['datasets']], - cfg.get('separate_eval', True)) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif cfg['type'] == 'ClassBalancedDataset': - dataset = ClassBalancedDataset( - build_dataset(cfg['dataset'], default_args), cfg['oversample_thr']) - elif cfg['type'] == 'MultiImageMixDataset': - cp_cfg = copy.deepcopy(cfg) - cp_cfg['dataset'] = build_dataset(cp_cfg['dataset']) - cp_cfg.pop('type') - dataset = MultiImageMixDataset(**cp_cfg) - elif isinstance(cfg.get('ann_file'), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - runner_type='EpochBasedRunner', - persistent_workers=False, - class_aware_sampler=None, - **kwargs): - """Build PyTorch DataLoader. - - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (Dataset): A PyTorch dataset. - samples_per_gpu (int): Number of training samples on each GPU, i.e., - batch size of each GPU. - workers_per_gpu (int): How many subprocesses to use for data loading - for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed training. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - seed (int, Optional): Seed to be used. Default: None. - runner_type (str): Type of runner. Default: `EpochBasedRunner` - persistent_workers (bool): If True, the data loader will not shutdown - the worker processes after a dataset has been consumed once. - This allows to maintain the workers `Dataset` instances alive. - This argument is only valid when PyTorch>=1.7.0. Default: False. - class_aware_sampler (dict): Whether to use `ClassAwareSampler` - during training. Default: None. - kwargs: any keyword argument to be used to initialize DataLoader - - Returns: - DataLoader: A PyTorch dataloader. - """ - rank, world_size = get_dist_info() - - if dist: - # When model is :obj:`DistributedDataParallel`, - # `batch_size` of :obj:`dataloader` is the - # number of training samples on each GPU. 
- batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - # When model is obj:`DataParallel` - # the batch size is samples on all the GPUS - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - if runner_type == 'IterBasedRunner': - # this is a batch sampler, which can yield - # a mini-batch indices each time. - # it can be used in both `DataParallel` and - # `DistributedDataParallel` - if shuffle: - batch_sampler = InfiniteGroupBatchSampler( - dataset, batch_size, world_size, rank, seed=seed) - else: - batch_sampler = InfiniteBatchSampler( - dataset, - batch_size, - world_size, - rank, - seed=seed, - shuffle=False) - batch_size = 1 - sampler = None - else: - if class_aware_sampler is not None: - # ClassAwareSampler can be used in both distributed and - # non-distributed training. - num_sample_class = class_aware_sampler.get('num_sample_class', 1) - sampler = ClassAwareSampler( - dataset, - samples_per_gpu, - world_size, - rank, - seed=seed, - num_sample_class=num_sample_class) - elif dist: - # DistributedGroupSampler will definitely shuffle the data to - # satisfy that images on each GPU are in the same group - if shuffle: - sampler = DistributedGroupSampler( - dataset, samples_per_gpu, world_size, rank, seed=seed) - else: - sampler = DistributedSampler( - dataset, world_size, rank, shuffle=False, seed=seed) - else: - sampler = GroupSampler(dataset, - samples_per_gpu) if shuffle else None - batch_sampler = None - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) >= digit_version('1.7.0')): - kwargs['persistent_workers'] = persistent_workers - elif persistent_workers is True: - warnings.warn('persistent_workers is invalid because your pytorch ' - 'version is lower than 1.7.0') - - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - batch_sampler=batch_sampler, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=kwargs.pop('pin_memory', False), - worker_init_fn=init_fn, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - # The seed of each worker equals to - # num_worker * rank + worker_id + user_seed - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) - torch.manual_seed(worker_seed) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/cityscapes.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/cityscapes.py deleted file mode 100644 index da6a2adc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/cityscapes.py +++ /dev/null @@ -1,338 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-# Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/cityscapes.py # noqa -# and https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa - -import glob -import os -import os.path as osp -import tempfile -from collections import OrderedDict - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils -from mmcv.utils import print_log - -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class CityscapesDataset(CocoDataset): - - CLASSES = ('person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle') - - PALETTE = [(220, 20, 60), (255, 0, 0), (0, 0, 142), (0, 0, 70), - (0, 60, 100), (0, 80, 100), (0, 0, 230), (119, 11, 32)] - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - # obtain images that contain annotation - ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values()) - # obtain images that contain annotations of the required categories - ids_in_cat = set() - for i, class_id in enumerate(self.cat_ids): - ids_in_cat |= set(self.coco.cat_img_map[class_id]) - # merge the image id sets of the two conditions and use the merged set - # to filter out images if self.filter_empty_gt=True - ids_in_cat &= ids_with_ann - - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = img_info['id'] - ann_ids = self.coco.getAnnIds(imgIds=[img_id]) - ann_info = self.coco.loadAnns(ann_ids) - all_iscrowd = all([_['iscrowd'] for _ in ann_info]) - if self.filter_empty_gt and (self.img_ids[i] not in ids_in_cat - or all_iscrowd): - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - img_info (dict): Image info of an image. - ann_info (list[dict]): Annotation info of an image. - - Returns: - dict: A dict containing the following keys: bboxes, \ - bboxes_ignore, labels, masks, seg_map. \ - "masks" are already decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann['segmentation']) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=img_info['segm_file']) - - return ann - - def results2txt(self, results, outfile_prefix): - """Dump the detection results to a txt file. - - Args: - results (list[list | tuple]): Testing results of the - dataset. 
- outfile_prefix (str): The filename prefix of the json files. - If the prefix is "somepath/xxx", - the txt files will be named "somepath/xxx.txt". - - Returns: - list[str]: Result txt files which contains corresponding \ - instance segmentation images. - """ - try: - import cityscapesscripts.helpers.labels as CSLabels - except ImportError: - raise ImportError('Please run "pip install citscapesscripts" to ' - 'install cityscapesscripts first.') - result_files = [] - os.makedirs(outfile_prefix, exist_ok=True) - prog_bar = mmcv.ProgressBar(len(self)) - for idx in range(len(self)): - result = results[idx] - filename = self.data_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - pred_txt = osp.join(outfile_prefix, basename + '_pred.txt') - - bbox_result, segm_result = result - bboxes = np.vstack(bbox_result) - # segm results - if isinstance(segm_result, tuple): - # Some detectors use different scores for bbox and mask, - # like Mask Scoring R-CNN. Score of segm will be used instead - # of bbox score. - segms = mmcv.concat_list(segm_result[0]) - mask_score = segm_result[1] - else: - # use bbox score for mask score - segms = mmcv.concat_list(segm_result) - mask_score = [bbox[-1] for bbox in bboxes] - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - - assert len(bboxes) == len(segms) == len(labels) - num_instances = len(bboxes) - prog_bar.update() - with open(pred_txt, 'w') as fout: - for i in range(num_instances): - pred_class = labels[i] - classes = self.CLASSES[pred_class] - class_id = CSLabels.name2label[classes].id - score = mask_score[i] - mask = maskUtils.decode(segms[i]).astype(np.uint8) - png_filename = osp.join(outfile_prefix, - basename + f'_{i}_{classes}.png') - mmcv.imwrite(mask, png_filename) - fout.write(f'{osp.basename(png_filename)} {class_id} ' - f'{score}\n') - result_files.append(pred_txt) - - return result_files - - def format_results(self, results, txtfile_prefix=None): - """Format the results to txt (standard format for Cityscapes - evaluation). - - Args: - results (list): Testing results of the dataset. - txtfile_prefix (str | None): The prefix of txt files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing \ - the json filepaths, tmp_dir is the temporal directory created \ - for saving txt/png files when txtfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if txtfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - txtfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2txt(results, txtfile_prefix) - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - outfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=np.arange(0.5, 0.96, 0.05)): - """Evaluation in Cityscapes/COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. 
- metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - outfile_prefix (str | None): The prefix of output file. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If results are evaluated with COCO protocol, it would be the - prefix of output json file. For example, the metric is 'bbox' - and 'segm', then json files would be "a/b/prefix.bbox.json" and - "a/b/prefix.segm.json". - If results are evaluated with cityscapes protocol, it would be - the prefix of output txt/png files. The output files would be - png images under folder "a/b/prefix/xxx/" and the file name of - images would be written into a txt file - "a/b/prefix/xxx_pred.txt", where "xxx" is the video name of - cityscapes. If not specified, a temp file will be created. - Default: None. - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float]): IoU threshold used for evaluating - recalls. If set to a list, the average recall of all IoUs will - also be computed. Default: 0.5. - - Returns: - dict[str, float]: COCO style evaluation metric or cityscapes mAP \ - and AP@50. - """ - eval_results = dict() - - metrics = metric.copy() if isinstance(metric, list) else [metric] - - if 'cityscapes' in metrics: - eval_results.update( - self._evaluate_cityscapes(results, outfile_prefix, logger)) - metrics.remove('cityscapes') - - # left metrics are all coco metric - if len(metrics) > 0: - # create CocoDataset with CityscapesDataset annotation - self_coco = CocoDataset(self.ann_file, self.pipeline.transforms, - None, self.data_root, self.img_prefix, - self.seg_prefix, self.proposal_file, - self.test_mode, self.filter_empty_gt) - # TODO: remove this in the future - # reload annotations of correct class - self_coco.CLASSES = self.CLASSES - self_coco.data_infos = self_coco.load_annotations(self.ann_file) - eval_results.update( - self_coco.evaluate(results, metrics, logger, outfile_prefix, - classwise, proposal_nums, iou_thrs)) - - return eval_results - - def _evaluate_cityscapes(self, results, txtfile_prefix, logger): - """Evaluation in Cityscapes protocol. - - Args: - results (list): Testing results of the dataset. - txtfile_prefix (str | None): The prefix of output txt file - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: Cityscapes evaluation results, contains 'mAP' \ - and 'AP@50'. 
- """ - - try: - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as CSEval # noqa - except ImportError: - raise ImportError('Please run "pip install citscapesscripts" to ' - 'install cityscapesscripts first.') - msg = 'Evaluating in Cityscapes style' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - result_files, tmp_dir = self.format_results(results, txtfile_prefix) - - if tmp_dir is None: - result_dir = osp.join(txtfile_prefix, 'results') - else: - result_dir = osp.join(tmp_dir.name, 'results') - - eval_results = OrderedDict() - print_log(f'Evaluating results under {result_dir} ...', logger=logger) - - # set global states in cityscapes evaluation API - CSEval.args.cityscapesPath = os.path.join(self.img_prefix, '../..') - CSEval.args.predictionPath = os.path.abspath(result_dir) - CSEval.args.predictionWalk = None - CSEval.args.JSONOutput = False - CSEval.args.colorized = False - CSEval.args.gtInstancesFile = os.path.join(result_dir, - 'gtInstances.json') - CSEval.args.groundTruthSearch = os.path.join( - self.img_prefix.replace('leftImg8bit', 'gtFine'), - '*/*_gtFine_instanceIds.png') - - groundTruthImgList = glob.glob(CSEval.args.groundTruthSearch) - assert len(groundTruthImgList), 'Cannot find ground truth images' \ - f' in {CSEval.args.groundTruthSearch}.' - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(CSEval.getPrediction(gt, CSEval.args)) - CSEval_results = CSEval.evaluateImgLists(predictionImgList, - groundTruthImgList, - CSEval.args)['averages'] - - eval_results['mAP'] = CSEval_results['allAp'] - eval_results['AP@50'] = CSEval_results['allAp50%'] - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/coco.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/coco.py deleted file mode 100644 index bcdd4df3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/coco.py +++ /dev/null @@ -1,649 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import contextlib -import io -import itertools -import logging -import os.path as osp -import tempfile -import warnings -from collections import OrderedDict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from mmdet.core import eval_recalls -from .api_wrappers import COCO, COCOeval -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class CocoDataset(CustomDataset): - - CLASSES = ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', - 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', - 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', - 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', - 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', - 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', - 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', - 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', - 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', - 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', - 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', - 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', - 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush') - - PALETTE = [(220, 20, 60), (119, 11, 32), (0, 0, 142), (0, 0, 230), - (106, 0, 228), (0, 60, 100), (0, 80, 100), (0, 0, 70), - (0, 0, 192), (250, 170, 30), (100, 170, 30), (220, 220, 0), - (175, 116, 175), (250, 0, 30), (165, 42, 42), (255, 77, 255), - (0, 226, 252), (182, 182, 255), (0, 82, 0), (120, 166, 157), - (110, 76, 0), (174, 57, 255), (199, 100, 0), (72, 0, 118), - (255, 179, 240), (0, 125, 92), (209, 0, 151), (188, 208, 182), - (0, 220, 176), (255, 99, 164), (92, 0, 73), (133, 129, 255), - (78, 180, 255), (0, 228, 0), (174, 255, 243), (45, 89, 255), - (134, 134, 103), (145, 148, 174), (255, 208, 186), - (197, 226, 255), (171, 134, 1), (109, 63, 54), (207, 138, 255), - (151, 0, 95), (9, 80, 61), (84, 105, 51), (74, 65, 105), - (166, 196, 102), (208, 195, 210), (255, 109, 65), (0, 143, 149), - (179, 0, 194), (209, 99, 106), (5, 121, 0), (227, 255, 205), - (147, 186, 208), (153, 69, 1), (3, 95, 161), (163, 255, 0), - (119, 0, 170), (0, 182, 199), (0, 165, 120), (183, 130, 88), - (95, 32, 0), (130, 114, 135), (110, 129, 133), (166, 74, 118), - (219, 142, 185), (79, 210, 114), (178, 90, 62), (65, 70, 15), - (127, 167, 115), (59, 105, 106), (142, 108, 45), (196, 172, 0), - (95, 54, 80), (128, 76, 255), (201, 57, 1), (246, 0, 122), - (191, 162, 208)] - - def load_annotations(self, ann_file): - """Load annotation from COCO style annotation file. - - Args: - ann_file (str): Path of annotation file. - - Returns: - list[dict]: Annotation info from COCO api. - """ - - self.coco = COCO(ann_file) - # The order of returned `cat_ids` will not - # change with the order of the CLASSES - self.cat_ids = self.coco.get_cat_ids(cat_names=self.CLASSES) - - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.img_ids = self.coco.get_img_ids() - data_infos = [] - total_ann_ids = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - info['filename'] = info['file_name'] - data_infos.append(info) - ann_ids = self.coco.get_ann_ids(img_ids=[i]) - total_ann_ids.extend(ann_ids) - assert len(set(total_ann_ids)) == len( - total_ann_ids), f"Annotation ids in '{ann_file}' are not unique!" 
- return data_infos - - def get_ann_info(self, idx): - """Get COCO annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - return self._parse_ann_info(self.data_infos[idx], ann_info) - - def get_cat_ids(self, idx): - """Get COCO category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - return [ann['category_id'] for ann in ann_info] - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - # obtain images that contain annotation - ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values()) - # obtain images that contain annotations of the required categories - ids_in_cat = set() - for i, class_id in enumerate(self.cat_ids): - ids_in_cat |= set(self.coco.cat_img_map[class_id]) - # merge the image id sets of the two conditions and use the merged set - # to filter out images if self.filter_empty_gt=True - ids_in_cat &= ids_with_ann - - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = self.img_ids[i] - if self.filter_empty_gt and img_id not in ids_in_cat: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - ann_info (list[dict]): Annotation info of an image. - with_mask (bool): Whether to parse mask annotations. - - Returns: - dict: A dict containing the following keys: bboxes, bboxes_ignore,\ - labels, masks, seg_map. "masks" are raw annotations and not \ - decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - inter_w = max(0, min(x1 + w, img_info['width']) - max(x1, 0)) - inter_h = max(0, min(y1 + h, img_info['height']) - max(y1, 0)) - if inter_w * inter_h == 0: - continue - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann.get('segmentation', None)) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - seg_map = img_info['filename'].replace('jpg', 'png') - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=seg_map) - - return ann - - def xyxy2xywh(self, bbox): - """Convert ``xyxy`` style bounding boxes to ``xywh`` style for COCO - evaluation. - - Args: - bbox (numpy.ndarray): The bounding boxes, shape (4, ), in - ``xyxy`` order. 
- - Returns: - list[float]: The converted bounding boxes, in ``xywh`` order. - """ - - _bbox = bbox.tolist() - return [ - _bbox[0], - _bbox[1], - _bbox[2] - _bbox[0], - _bbox[3] - _bbox[1], - ] - - def _proposal2json(self, results): - """Convert proposal results to COCO json style.""" - json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - bboxes = results[idx] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = 1 - json_results.append(data) - return json_results - - def _det2json(self, results): - """Convert detection results to COCO json style.""" - json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - result = results[idx] - for label in range(len(result)): - bboxes = result[label] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = self.cat_ids[label] - json_results.append(data) - return json_results - - def _segm2json(self, results): - """Convert instance segmentation results to COCO json style.""" - bbox_json_results = [] - segm_json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - det, seg = results[idx] - for label in range(len(det)): - # bbox results - bboxes = det[label] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = self.cat_ids[label] - bbox_json_results.append(data) - - # segm results - # some detectors use different scores for bbox and mask - if isinstance(seg, tuple): - segms = seg[0][label] - mask_score = seg[1][label] - else: - segms = seg[label] - mask_score = [bbox[4] for bbox in bboxes] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(mask_score[i]) - data['category_id'] = self.cat_ids[label] - if isinstance(segms[i]['counts'], bytes): - segms[i]['counts'] = segms[i]['counts'].decode() - data['segmentation'] = segms[i] - segm_json_results.append(data) - return bbox_json_results, segm_json_results - - def results2json(self, results, outfile_prefix): - """Dump the detection results to a COCO style json file. - - There are 3 types of results: proposals, bbox predictions, mask - predictions, and they have different data types. This method will - automatically recognize the type, and dump them to json files. - - Args: - results (list[list | tuple | ndarray]): Testing results of the - dataset. - outfile_prefix (str): The filename prefix of the json files. If the - prefix is "somepath/xxx", the json files will be named - "somepath/xxx.bbox.json", "somepath/xxx.segm.json", - "somepath/xxx.proposal.json". - - Returns: - dict[str: str]: Possible keys are "bbox", "segm", "proposal", and \ - values are corresponding filenames. 
- """ - result_files = dict() - if isinstance(results[0], list): - json_results = self._det2json(results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - mmcv.dump(json_results, result_files['bbox']) - elif isinstance(results[0], tuple): - json_results = self._segm2json(results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - result_files['segm'] = f'{outfile_prefix}.segm.json' - mmcv.dump(json_results[0], result_files['bbox']) - mmcv.dump(json_results[1], result_files['segm']) - elif isinstance(results[0], np.ndarray): - json_results = self._proposal2json(results) - result_files['proposal'] = f'{outfile_prefix}.proposal.json' - mmcv.dump(json_results, result_files['proposal']) - else: - raise TypeError('invalid type of results') - return result_files - - def fast_eval_recall(self, results, proposal_nums, iou_thrs, logger=None): - gt_bboxes = [] - for i in range(len(self.img_ids)): - ann_ids = self.coco.get_ann_ids(img_ids=self.img_ids[i]) - ann_info = self.coco.load_anns(ann_ids) - if len(ann_info) == 0: - gt_bboxes.append(np.zeros((0, 4))) - continue - bboxes = [] - for ann in ann_info: - if ann.get('ignore', False) or ann['iscrowd']: - continue - x1, y1, w, h = ann['bbox'] - bboxes.append([x1, y1, x1 + w, y1 + h]) - bboxes = np.array(bboxes, dtype=np.float32) - if bboxes.shape[0] == 0: - bboxes = np.zeros((0, 4)) - gt_bboxes.append(bboxes) - - recalls = eval_recalls( - gt_bboxes, results, proposal_nums, iou_thrs, logger=logger) - ar = recalls.mean(axis=1) - return ar - - def format_results(self, results, jsonfile_prefix=None, **kwargs): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[tuple | numpy.ndarray]): Testing results of the - dataset. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing \ - the json filepaths, tmp_dir is the temporal directory created \ - for saving json files when jsonfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2json(results, jsonfile_prefix) - return result_files, tmp_dir - - def evaluate_det_segm(self, - results, - result_files, - coco_gt, - metrics, - logger=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=None, - metric_items=None): - """Instance segmentation and object detection evaluation in COCO - protocol. - - Args: - results (list[list | tuple | dict]): Testing results of the - dataset. - result_files (dict[str, str]): a dict contains json file path. - coco_gt (COCO): COCO API object with ground truth annotation. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - classwise (bool): Whether to evaluating the AP for each class. 
- proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float], optional): IoU threshold used for - evaluating recalls/mAPs. If set to a list, the average of all - IoUs will also be computed. If not specified, [0.50, 0.55, - 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. - Default: None. - metric_items (list[str] | str, optional): Metric items that will - be returned. If not specified, ``['AR@100', 'AR@300', - 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be - used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', - 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when - ``metric=='bbox' or metric=='segm'``. - - Returns: - dict[str, float]: COCO style evaluation metric. - """ - if iou_thrs is None: - iou_thrs = np.linspace( - .5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True) - if metric_items is not None: - if not isinstance(metric_items, list): - metric_items = [metric_items] - - eval_results = OrderedDict() - for metric in metrics: - msg = f'Evaluating {metric}...' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - if metric == 'proposal_fast': - if isinstance(results[0], tuple): - raise KeyError('proposal_fast is not supported for ' - 'instance segmentation result.') - ar = self.fast_eval_recall( - results, proposal_nums, iou_thrs, logger='silent') - log_msg = [] - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - log_msg.append(f'\nAR@{num}\t{ar[i]:.4f}') - log_msg = ''.join(log_msg) - print_log(log_msg, logger=logger) - continue - - iou_type = 'bbox' if metric == 'proposal' else metric - if metric not in result_files: - raise KeyError(f'{metric} is not in results') - try: - predictions = mmcv.load(result_files[metric]) - if iou_type == 'segm': - # Refer to https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/coco.py#L331 # noqa - # When evaluating mask AP, if the results contain bbox, - # cocoapi will use the box area instead of the mask area - # for calculating the instance area. Though the overall AP - # is not affected, this leads to different - # small/medium/large mask AP results. - for x in predictions: - x.pop('bbox') - warnings.simplefilter('once') - warnings.warn( - 'The key "bbox" is deleted for more accurate mask AP ' - 'of small/medium/large instances since v2.12.0. 
This ' - 'does not change the overall mAP calculation.', - UserWarning) - coco_det = coco_gt.loadRes(predictions) - except IndexError: - print_log( - 'The testing results of the whole dataset is empty.', - logger=logger, - level=logging.ERROR) - break - - cocoEval = COCOeval(coco_gt, coco_det, iou_type) - cocoEval.params.catIds = self.cat_ids - cocoEval.params.imgIds = self.img_ids - cocoEval.params.maxDets = list(proposal_nums) - cocoEval.params.iouThrs = iou_thrs - # mapping of cocoEval.stats - coco_metric_names = { - 'mAP': 0, - 'mAP_50': 1, - 'mAP_75': 2, - 'mAP_s': 3, - 'mAP_m': 4, - 'mAP_l': 5, - 'AR@100': 6, - 'AR@300': 7, - 'AR@1000': 8, - 'AR_s@1000': 9, - 'AR_m@1000': 10, - 'AR_l@1000': 11 - } - if metric_items is not None: - for metric_item in metric_items: - if metric_item not in coco_metric_names: - raise KeyError( - f'metric item {metric_item} is not supported') - - if metric == 'proposal': - cocoEval.params.useCats = 0 - cocoEval.evaluate() - cocoEval.accumulate() - - # Save coco summarize print information to logger - redirect_string = io.StringIO() - with contextlib.redirect_stdout(redirect_string): - cocoEval.summarize() - print_log('\n' + redirect_string.getvalue(), logger=logger) - - if metric_items is None: - metric_items = [ - 'AR@100', 'AR@300', 'AR@1000', 'AR_s@1000', - 'AR_m@1000', 'AR_l@1000' - ] - - for item in metric_items: - val = float( - f'{cocoEval.stats[coco_metric_names[item]]:.3f}') - eval_results[item] = val - else: - cocoEval.evaluate() - cocoEval.accumulate() - - # Save coco summarize print information to logger - redirect_string = io.StringIO() - with contextlib.redirect_stdout(redirect_string): - cocoEval.summarize() - print_log('\n' + redirect_string.getvalue(), logger=logger) - - if classwise: # Compute per-category AP - # Compute per-category AP - # from https://github.com/facebookresearch/detectron2/ - precisions = cocoEval.eval['precision'] - # precision: (iou, recall, cls, area range, max dets) - assert len(self.cat_ids) == precisions.shape[2] - - results_per_category = [] - for idx, catId in enumerate(self.cat_ids): - # area range index 0: all area ranges - # max dets index -1: typically 100 per image - nm = self.coco.loadCats(catId)[0] - precision = precisions[:, :, idx, 0, -1] - precision = precision[precision > -1] - if precision.size: - ap = np.mean(precision) - else: - ap = float('nan') - results_per_category.append( - (f'{nm["name"]}', f'{float(ap):0.3f}')) - - num_columns = min(6, len(results_per_category) * 2) - results_flatten = list( - itertools.chain(*results_per_category)) - headers = ['category', 'AP'] * (num_columns // 2) - results_2d = itertools.zip_longest(*[ - results_flatten[i::num_columns] - for i in range(num_columns) - ]) - table_data = [headers] - table_data += [result for result in results_2d] - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - if metric_items is None: - metric_items = [ - 'mAP', 'mAP_50', 'mAP_75', 'mAP_s', 'mAP_m', 'mAP_l' - ] - - for metric_item in metric_items: - key = f'{metric}_{metric_item}' - val = float( - f'{cocoEval.stats[coco_metric_names[metric_item]]:.3f}' - ) - eval_results[key] = val - ap = cocoEval.stats[:6] - eval_results[f'{metric}_mAP_copypaste'] = ( - f'{ap[0]:.3f} {ap[1]:.3f} {ap[2]:.3f} {ap[3]:.3f} ' - f'{ap[4]:.3f} {ap[5]:.3f}') - - return eval_results - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=None, - metric_items=None): - """Evaluation in 
COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float], optional): IoU threshold used for - evaluating recalls/mAPs. If set to a list, the average of all - IoUs will also be computed. If not specified, [0.50, 0.55, - 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. - Default: None. - metric_items (list[str] | str, optional): Metric items that will - be returned. If not specified, ``['AR@100', 'AR@300', - 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be - used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', - 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when - ``metric=='bbox' or metric=='segm'``. - - Returns: - dict[str, float]: COCO style evaluation metric. - """ - - metrics = metric if isinstance(metric, list) else [metric] - allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast'] - for metric in metrics: - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - - coco_gt = self.coco - self.cat_ids = coco_gt.get_cat_ids(cat_names=self.CLASSES) - - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - eval_results = self.evaluate_det_segm(results, result_files, coco_gt, - metrics, logger, classwise, - proposal_nums, iou_thrs, - metric_items) - - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/coco_panoptic.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/coco_panoptic.py deleted file mode 100644 index 53ef5947..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/coco_panoptic.py +++ /dev/null @@ -1,692 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import itertools -import os -from collections import defaultdict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from mmdet.core import INSTANCE_OFFSET -from .api_wrappers import COCO, pq_compute_multi_core -from .builder import DATASETS -from .coco import CocoDataset - -try: - import panopticapi - from panopticapi.evaluation import VOID - from panopticapi.utils import id2rgb -except ImportError: - panopticapi = None - id2rgb = None - VOID = None - -__all__ = ['CocoPanopticDataset'] - - -class COCOPanoptic(COCO): - """This wrapper is for loading the panoptic style annotation file. - - The format is shown in the CocoPanopticDataset class. - - Args: - annotation_file (str): Path of annotation file. 
- """ - - def __init__(self, annotation_file=None): - if panopticapi is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - super(COCOPanoptic, self).__init__(annotation_file) - - def createIndex(self): - # create index - print('creating index...') - # anns stores 'segment_id -> annotation' - anns, cats, imgs = {}, {}, {} - img_to_anns, cat_to_imgs = defaultdict(list), defaultdict(list) - if 'annotations' in self.dataset: - for ann, img_info in zip(self.dataset['annotations'], - self.dataset['images']): - img_info['segm_file'] = ann['file_name'] - for seg_ann in ann['segments_info']: - # to match with instance.json - seg_ann['image_id'] = ann['image_id'] - seg_ann['height'] = img_info['height'] - seg_ann['width'] = img_info['width'] - img_to_anns[ann['image_id']].append(seg_ann) - # segment_id is not unique in coco dataset orz... - if seg_ann['id'] in anns.keys(): - anns[seg_ann['id']].append(seg_ann) - else: - anns[seg_ann['id']] = [seg_ann] - - if 'images' in self.dataset: - for img in self.dataset['images']: - imgs[img['id']] = img - - if 'categories' in self.dataset: - for cat in self.dataset['categories']: - cats[cat['id']] = cat - - if 'annotations' in self.dataset and 'categories' in self.dataset: - for ann in self.dataset['annotations']: - for seg_ann in ann['segments_info']: - cat_to_imgs[seg_ann['category_id']].append(ann['image_id']) - - print('index created!') - - self.anns = anns - self.imgToAnns = img_to_anns - self.catToImgs = cat_to_imgs - self.imgs = imgs - self.cats = cats - - def load_anns(self, ids=[]): - """Load anns with the specified ids. - - self.anns is a list of annotation lists instead of a - list of annotations. - - Args: - ids (int array): integer ids specifying anns - - Returns: - anns (object array): loaded ann objects - """ - anns = [] - - if hasattr(ids, '__iter__') and hasattr(ids, '__len__'): - # self.anns is a list of annotation lists instead of - # a list of annotations - for id in ids: - anns += self.anns[id] - return anns - elif type(ids) == int: - return self.anns[ids] - - -@DATASETS.register_module() -class CocoPanopticDataset(CocoDataset): - """Coco dataset for Panoptic segmentation. - - The annotation format is shown as follows. The `ann` field is optional - for testing. - - .. code-block:: none - - [ - { - 'filename': f'{image_id:012}.png', - 'image_id':9 - 'segments_info': { - [ - { - 'id': 8345037, (segment_id in panoptic png, - convert from rgb) - 'category_id': 51, - 'iscrowd': 0, - 'bbox': (x1, y1, w, h), - 'area': 24315, - 'segmentation': list,(coded mask) - }, - ... - } - } - }, - ... - ] - - Args: - ann_file (str): Panoptic segmentation annotation file path. - pipeline (list[dict]): Processing pipeline. - ins_ann_file (str): Instance segmentation annotation file path. - Defaults to None. - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. Defaults to None. - data_root (str, optional): Data root for ``ann_file``, - ``ins_ann_file`` ``img_prefix``, ``seg_prefix``, ``proposal_file`` - if specified. Defaults to None. - img_prefix (str, optional): Prefix of path to images. Defaults to ''. - seg_prefix (str, optional): Prefix of path to segmentation files. - Defaults to None. - proposal_file (str, optional): Path to proposal file. Defaults to None. - test_mode (bool, optional): If set True, annotation will not be loaded. - Defaults to False. 
- filter_empty_gt (bool, optional): If set true, images without bounding - boxes of the dataset's classes will be filtered out. This option - only works when `test_mode=False`, i.e., we never filter images - during tests. Defaults to True. - file_client_args (:obj:`mmcv.ConfigDict` | dict): file client args. - Defaults to dict(backend='disk'). - """ - CLASSES = [ - 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', - ' truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', - 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', - 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', - 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', - 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', - 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', - 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', - 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', - 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', - 'scissors', 'teddy bear', 'hair drier', 'toothbrush', 'banner', - 'blanket', 'bridge', 'cardboard', 'counter', 'curtain', 'door-stuff', - 'floor-wood', 'flower', 'fruit', 'gravel', 'house', 'light', - 'mirror-stuff', 'net', 'pillow', 'platform', 'playingfield', - 'railroad', 'river', 'road', 'roof', 'sand', 'sea', 'shelf', 'snow', - 'stairs', 'tent', 'towel', 'wall-brick', 'wall-stone', 'wall-tile', - 'wall-wood', 'water-other', 'window-blind', 'window-other', - 'tree-merged', 'fence-merged', 'ceiling-merged', 'sky-other-merged', - 'cabinet-merged', 'table-merged', 'floor-other-merged', - 'pavement-merged', 'mountain-merged', 'grass-merged', 'dirt-merged', - 'paper-merged', 'food-other-merged', 'building-other-merged', - 'rock-merged', 'wall-other-merged', 'rug-merged' - ] - THING_CLASSES = [ - 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', - 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', - 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', - 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', - 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', - 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', - 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', - 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', - 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', - 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', - 'scissors', 'teddy bear', 'hair drier', 'toothbrush' - ] - STUFF_CLASSES = [ - 'banner', 'blanket', 'bridge', 'cardboard', 'counter', 'curtain', - 'door-stuff', 'floor-wood', 'flower', 'fruit', 'gravel', 'house', - 'light', 'mirror-stuff', 'net', 'pillow', 'platform', 'playingfield', - 'railroad', 'river', 'road', 'roof', 'sand', 'sea', 'shelf', 'snow', - 'stairs', 'tent', 'towel', 'wall-brick', 'wall-stone', 'wall-tile', - 'wall-wood', 'water-other', 'window-blind', 'window-other', - 'tree-merged', 'fence-merged', 'ceiling-merged', 'sky-other-merged', - 'cabinet-merged', 'table-merged', 'floor-other-merged', - 'pavement-merged', 'mountain-merged', 'grass-merged', 'dirt-merged', - 'paper-merged', 
'food-other-merged', 'building-other-merged', - 'rock-merged', 'wall-other-merged', 'rug-merged' - ] - - PALETTE = [(220, 20, 60), (119, 11, 32), (0, 0, 142), (0, 0, 230), - (106, 0, 228), (0, 60, 100), (0, 80, 100), (0, 0, 70), - (0, 0, 192), (250, 170, 30), (100, 170, 30), (220, 220, 0), - (175, 116, 175), (250, 0, 30), (165, 42, 42), (255, 77, 255), - (0, 226, 252), (182, 182, 255), (0, 82, 0), (120, 166, 157), - (110, 76, 0), (174, 57, 255), (199, 100, 0), (72, 0, 118), - (255, 179, 240), (0, 125, 92), (209, 0, 151), (188, 208, 182), - (0, 220, 176), (255, 99, 164), (92, 0, 73), (133, 129, 255), - (78, 180, 255), (0, 228, 0), (174, 255, 243), (45, 89, 255), - (134, 134, 103), (145, 148, 174), (255, 208, 186), - (197, 226, 255), (171, 134, 1), (109, 63, 54), (207, 138, 255), - (151, 0, 95), (9, 80, 61), (84, 105, 51), (74, 65, 105), - (166, 196, 102), (208, 195, 210), (255, 109, 65), (0, 143, 149), - (179, 0, 194), (209, 99, 106), (5, 121, 0), (227, 255, 205), - (147, 186, 208), (153, 69, 1), (3, 95, 161), (163, 255, 0), - (119, 0, 170), (0, 182, 199), (0, 165, 120), (183, 130, 88), - (95, 32, 0), (130, 114, 135), (110, 129, 133), (166, 74, 118), - (219, 142, 185), (79, 210, 114), (178, 90, 62), (65, 70, 15), - (127, 167, 115), (59, 105, 106), (142, 108, 45), (196, 172, 0), - (95, 54, 80), (128, 76, 255), (201, 57, 1), (246, 0, 122), - (191, 162, 208), (255, 255, 128), (147, 211, 203), - (150, 100, 100), (168, 171, 172), (146, 112, 198), - (210, 170, 100), (92, 136, 89), (218, 88, 184), (241, 129, 0), - (217, 17, 255), (124, 74, 181), (70, 70, 70), (255, 228, 255), - (154, 208, 0), (193, 0, 92), (76, 91, 113), (255, 180, 195), - (106, 154, 176), - (230, 150, 140), (60, 143, 255), (128, 64, 128), (92, 82, 55), - (254, 212, 124), (73, 77, 174), (255, 160, 98), (255, 255, 255), - (104, 84, 109), (169, 164, 131), (225, 199, 255), (137, 54, 74), - (135, 158, 223), (7, 246, 231), (107, 255, 200), (58, 41, 149), - (183, 121, 142), (255, 73, 97), (107, 142, 35), (190, 153, 153), - (146, 139, 141), - (70, 130, 180), (134, 199, 156), (209, 226, 140), (96, 36, 108), - (96, 96, 96), (64, 170, 64), (152, 251, 152), (208, 229, 228), - (206, 186, 171), (152, 161, 64), (116, 112, 0), (0, 114, 143), - (102, 102, 156), (250, 141, 255)] - - def __init__(self, - ann_file, - pipeline, - ins_ann_file=None, - classes=None, - data_root=None, - img_prefix='', - seg_prefix=None, - proposal_file=None, - test_mode=False, - filter_empty_gt=True, - file_client_args=dict(backend='disk')): - super().__init__( - ann_file, - pipeline, - classes=classes, - data_root=data_root, - img_prefix=img_prefix, - seg_prefix=seg_prefix, - proposal_file=proposal_file, - test_mode=test_mode, - filter_empty_gt=filter_empty_gt, - file_client_args=file_client_args) - self.ins_ann_file = ins_ann_file - - def load_annotations(self, ann_file): - """Load annotation from COCO Panoptic style annotation file. - - Args: - ann_file (str): Path of annotation file. - - Returns: - list[dict]: Annotation info from COCO api. - """ - self.coco = COCOPanoptic(ann_file) - self.cat_ids = self.coco.get_cat_ids() - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.categories = self.coco.cats - self.img_ids = self.coco.get_img_ids() - data_infos = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - info['filename'] = info['file_name'] - info['segm_file'] = info['filename'].replace('jpg', 'png') - data_infos.append(info) - return data_infos - - def get_ann_info(self, idx): - """Get COCO annotation by index. 
- - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - # filter out unmatched images - ann_info = [i for i in ann_info if i['image_id'] == img_id] - return self._parse_ann_info(self.data_infos[idx], ann_info) - - def _parse_ann_info(self, img_info, ann_info): - """Parse annotations and load panoptic ground truths. - - Args: - img_info (int): Image info of an image. - ann_info (list[dict]): Annotation info of an image. - - Returns: - dict: A dict containing the following keys: bboxes, bboxes_ignore, - labels, masks, seg_map. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_mask_infos = [] - - for i, ann in enumerate(ann_info): - x1, y1, w, h = ann['bbox'] - if ann['area'] <= 0 or w < 1 or h < 1: - continue - bbox = [x1, y1, x1 + w, y1 + h] - - category_id = ann['category_id'] - contiguous_cat_id = self.cat2label[category_id] - - is_thing = self.coco.load_cats(ids=category_id)[0]['isthing'] - if is_thing: - is_crowd = ann.get('iscrowd', False) - if not is_crowd: - gt_bboxes.append(bbox) - gt_labels.append(contiguous_cat_id) - else: - gt_bboxes_ignore.append(bbox) - is_thing = False - - mask_info = { - 'id': ann['id'], - 'category': contiguous_cat_id, - 'is_thing': is_thing - } - gt_mask_infos.append(mask_info) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_mask_infos, - seg_map=img_info['segm_file']) - - return ann - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - ids_with_ann = [] - # check whether images have legal thing annotations. 
- for lists in self.coco.anns.values(): - for item in lists: - category_id = item['category_id'] - is_thing = self.coco.load_cats(ids=category_id)[0]['isthing'] - if not is_thing: - continue - ids_with_ann.append(item['image_id']) - ids_with_ann = set(ids_with_ann) - - valid_inds = [] - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = self.img_ids[i] - if self.filter_empty_gt and img_id not in ids_with_ann: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _pan2json(self, results, outfile_prefix): - """Convert panoptic results to COCO panoptic json style.""" - label2cat = dict((v, k) for (k, v) in self.cat2label.items()) - pred_annotations = [] - outdir = os.path.join(os.path.dirname(outfile_prefix), 'panoptic') - - for idx in range(len(self)): - img_id = self.img_ids[idx] - segm_file = self.data_infos[idx]['segm_file'] - pan = results[idx] - - pan_labels = np.unique(pan) - segm_info = [] - for pan_label in pan_labels: - sem_label = pan_label % INSTANCE_OFFSET - # We reserve the length of self.CLASSES for VOID label - if sem_label == len(self.CLASSES): - continue - # convert sem_label to json label - cat_id = label2cat[sem_label] - is_thing = self.categories[cat_id]['isthing'] - mask = pan == pan_label - area = mask.sum() - segm_info.append({ - 'id': int(pan_label), - 'category_id': cat_id, - 'isthing': is_thing, - 'area': int(area) - }) - # evaluation script uses 0 for VOID label. - pan[pan % INSTANCE_OFFSET == len(self.CLASSES)] = VOID - pan = id2rgb(pan).astype(np.uint8) - mmcv.imwrite(pan[:, :, ::-1], os.path.join(outdir, segm_file)) - record = { - 'image_id': img_id, - 'segments_info': segm_info, - 'file_name': segm_file - } - pred_annotations.append(record) - pan_json_results = dict(annotations=pred_annotations) - return pan_json_results - - def results2json(self, results, outfile_prefix): - """Dump the results to a COCO style json file. - - There are 4 types of results: proposals, bbox predictions, mask - predictions, panoptic segmentation predictions, and they have - different data types. This method will automatically recognize - the type, and dump them to json files. - - .. code-block:: none - - [ - { - 'pan_results': np.array, # shape (h, w) - # ins_results which includes bboxes and RLE encoded masks - # is optional. - 'ins_results': (list[np.array], list[list[str]]) - }, - ... - ] - - Args: - results (list[dict]): Testing results of the dataset. - outfile_prefix (str): The filename prefix of the json files. If the - prefix is "somepath/xxx", the json files will be named - "somepath/xxx.panoptic.json", "somepath/xxx.bbox.json", - "somepath/xxx.segm.json" - - Returns: - dict[str: str]: Possible keys are "panoptic", "bbox", "segm", \ - "proposal", and values are corresponding filenames. 
- """ - result_files = dict() - # panoptic segmentation results - if 'pan_results' in results[0]: - pan_results = [result['pan_results'] for result in results] - pan_json_results = self._pan2json(pan_results, outfile_prefix) - result_files['panoptic'] = f'{outfile_prefix}.panoptic.json' - mmcv.dump(pan_json_results, result_files['panoptic']) - - # instance segmentation results - if 'ins_results' in results[0]: - ins_results = [result['ins_results'] for result in results] - bbox_json_results, segm_json_results = self._segm2json(ins_results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - result_files['segm'] = f'{outfile_prefix}.segm.json' - mmcv.dump(bbox_json_results, result_files['bbox']) - mmcv.dump(segm_json_results, result_files['segm']) - - return result_files - - def evaluate_pan_json(self, - result_files, - outfile_prefix, - logger=None, - classwise=False, - nproc=32): - """Evaluate PQ according to the panoptic results json file.""" - imgs = self.coco.imgs - gt_json = self.coco.img_ann_map # image to annotations - gt_json = [{ - 'image_id': k, - 'segments_info': v, - 'file_name': imgs[k]['segm_file'] - } for k, v in gt_json.items()] - pred_json = mmcv.load(result_files['panoptic']) - pred_json = dict( - (el['image_id'], el) for el in pred_json['annotations']) - - # match the gt_anns and pred_anns in the same image - matched_annotations_list = [] - for gt_ann in gt_json: - img_id = gt_ann['image_id'] - if img_id not in pred_json.keys(): - raise Exception('no prediction for the image' - ' with id: {}'.format(img_id)) - matched_annotations_list.append((gt_ann, pred_json[img_id])) - - gt_folder = self.seg_prefix - pred_folder = os.path.join(os.path.dirname(outfile_prefix), 'panoptic') - - pq_stat = pq_compute_multi_core( - matched_annotations_list, - gt_folder, - pred_folder, - self.categories, - self.file_client, - nproc=nproc) - - metrics = [('All', None), ('Things', True), ('Stuff', False)] - pq_results = {} - - for name, isthing in metrics: - pq_results[name], classwise_results = pq_stat.pq_average( - self.categories, isthing=isthing) - if name == 'All': - pq_results['classwise'] = classwise_results - - classwise_results = None - if classwise: - classwise_results = { - k: v - for k, v in zip(self.CLASSES, pq_results['classwise'].values()) - } - print_panoptic_table(pq_results, classwise_results, logger=logger) - results = parse_pq_results(pq_results) - results['PQ_copypaste'] = ( - f'{results["PQ"]:.3f} {results["SQ"]:.3f} ' - f'{results["RQ"]:.3f} ' - f'{results["PQ_th"]:.3f} {results["SQ_th"]:.3f} ' - f'{results["RQ_th"]:.3f} ' - f'{results["PQ_st"]:.3f} {results["SQ_st"]:.3f} ' - f'{results["RQ_st"]:.3f}') - - return results - - def evaluate(self, - results, - metric='PQ', - logger=None, - jsonfile_prefix=None, - classwise=False, - nproc=32, - **kwargs): - """Evaluation in COCO Panoptic protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. 'PQ', 'bbox', - 'segm', 'proposal' are supported. 'pq' will be regarded as 'PQ. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - classwise (bool): Whether to print classwise evaluation results. - Default: False. 
- nproc (int): Number of processes for panoptic quality computing. - Defaults to 32. When `nproc` exceeds the number of cpu cores, - the number of cpu cores is used. - - Returns: - dict[str, float]: COCO Panoptic style evaluation metric. - """ - metrics = metric if isinstance(metric, list) else [metric] - # Compatible with lowercase 'pq' - metrics = ['PQ' if metric == 'pq' else metric for metric in metrics] - allowed_metrics = ['PQ', 'bbox', 'segm', 'proposal'] - for metric in metrics: - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - eval_results = {} - - outfile_prefix = os.path.join(tmp_dir.name, 'results') \ - if tmp_dir is not None else jsonfile_prefix - if 'PQ' in metrics: - eval_pan_results = self.evaluate_pan_json( - result_files, outfile_prefix, logger, classwise, nproc=nproc) - - eval_results.update(eval_pan_results) - metrics.remove('PQ') - - if (('bbox' in metrics) or ('segm' in metrics) - or ('proposal' in metrics)): - - assert 'ins_results' in results[0], 'instance segmentation' \ - 'results are absent from results' - - assert self.ins_ann_file is not None, 'Annotation '\ - 'file for instance segmentation or object detection ' \ - 'shuold not be None' - - coco_gt = COCO(self.ins_ann_file) - panoptic_cat_ids = self.cat_ids - self.cat_ids = coco_gt.get_cat_ids(cat_names=self.THING_CLASSES) - - eval_ins_results = self.evaluate_det_segm(results, result_files, - coco_gt, metrics, logger, - classwise, **kwargs) - self.cat_ids = panoptic_cat_ids - eval_results.update(eval_ins_results) - - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results - - -def parse_pq_results(pq_results): - """Parse the Panoptic Quality results.""" - result = dict() - result['PQ'] = 100 * pq_results['All']['pq'] - result['SQ'] = 100 * pq_results['All']['sq'] - result['RQ'] = 100 * pq_results['All']['rq'] - result['PQ_th'] = 100 * pq_results['Things']['pq'] - result['SQ_th'] = 100 * pq_results['Things']['sq'] - result['RQ_th'] = 100 * pq_results['Things']['rq'] - result['PQ_st'] = 100 * pq_results['Stuff']['pq'] - result['SQ_st'] = 100 * pq_results['Stuff']['sq'] - result['RQ_st'] = 100 * pq_results['Stuff']['rq'] - return result - - -def print_panoptic_table(pq_results, classwise_results=None, logger=None): - """Print the panoptic evaluation results table. - - Args: - pq_results(dict): The Panoptic Quality results. - classwise_results(dict | None): The classwise Panoptic Quality results. - The keys are class names and the values are metrics. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. 
- """ - - headers = ['', 'PQ', 'SQ', 'RQ', 'categories'] - data = [headers] - for name in ['All', 'Things', 'Stuff']: - numbers = [ - f'{(pq_results[name][k] * 100):0.3f}' for k in ['pq', 'sq', 'rq'] - ] - row = [name] + numbers + [pq_results[name]['n']] - data.append(row) - table = AsciiTable(data) - print_log('Panoptic Evaluation Results:\n' + table.table, logger=logger) - - if classwise_results is not None: - class_metrics = [(name, ) + tuple(f'{(metrics[k] * 100):0.3f}' - for k in ['pq', 'sq', 'rq']) - for name, metrics in classwise_results.items()] - num_columns = min(8, len(class_metrics) * 4) - results_flatten = list(itertools.chain(*class_metrics)) - headers = ['category', 'PQ', 'SQ', 'RQ'] * (num_columns // 4) - results_2d = itertools.zip_longest( - *[results_flatten[i::num_columns] for i in range(num_columns)]) - data = [headers] - data += [result for result in results_2d] - table = AsciiTable(data) - print_log( - 'Classwise Panoptic Evaluation Results:\n' + table.table, - logger=logger) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/custom.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/custom.py deleted file mode 100644 index a4d82589..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/custom.py +++ /dev/null @@ -1,410 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings -from collections import OrderedDict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable -from torch.utils.data import Dataset - -from mmdet.core import eval_map, eval_recalls -from .builder import DATASETS -from .pipelines import Compose - - -@DATASETS.register_module() -class CustomDataset(Dataset): - """Custom dataset for detection. - - The annotation format is shown as follows. The `ann` field is optional for - testing. - - .. code-block:: none - - [ - { - 'filename': 'a.jpg', - 'width': 1280, - 'height': 720, - 'ann': { - 'bboxes': (n, 4) in (x1, y1, x2, y2) order. - 'labels': (n, ), - 'bboxes_ignore': (k, 4), (optional field) - 'labels_ignore': (k, 4) (optional field) - } - }, - ... - ] - - Args: - ann_file (str): Annotation file path. - pipeline (list[dict]): Processing pipeline. - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. Default: None. - data_root (str, optional): Data root for ``ann_file``, - ``img_prefix``, ``seg_prefix``, ``proposal_file`` if specified. - test_mode (bool, optional): If set True, annotation will not be loaded. - filter_empty_gt (bool, optional): If set true, images without bounding - boxes of the dataset's classes will be filtered out. This option - only works when `test_mode=False`, i.e., we never filter images - during tests. 
- """ - - CLASSES = None - - PALETTE = None - - def __init__(self, - ann_file, - pipeline, - classes=None, - data_root=None, - img_prefix='', - seg_prefix=None, - proposal_file=None, - test_mode=False, - filter_empty_gt=True, - file_client_args=dict(backend='disk')): - self.ann_file = ann_file - self.data_root = data_root - self.img_prefix = img_prefix - self.seg_prefix = seg_prefix - self.proposal_file = proposal_file - self.test_mode = test_mode - self.filter_empty_gt = filter_empty_gt - self.file_client = mmcv.FileClient(**file_client_args) - self.CLASSES = self.get_classes(classes) - - # join paths if data_root is specified - if self.data_root is not None: - if not osp.isabs(self.ann_file): - self.ann_file = osp.join(self.data_root, self.ann_file) - if not (self.img_prefix is None or osp.isabs(self.img_prefix)): - self.img_prefix = osp.join(self.data_root, self.img_prefix) - if not (self.seg_prefix is None or osp.isabs(self.seg_prefix)): - self.seg_prefix = osp.join(self.data_root, self.seg_prefix) - if not (self.proposal_file is None - or osp.isabs(self.proposal_file)): - self.proposal_file = osp.join(self.data_root, - self.proposal_file) - # load annotations (and proposals) - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(self.ann_file) as local_path: - self.data_infos = self.load_annotations(local_path) - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {self.ann_file} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - self.data_infos = self.load_annotations(self.ann_file) - - if self.proposal_file is not None: - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path( - self.proposal_file) as local_path: - self.proposals = self.load_proposals(local_path) - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {self.ann_file} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - self.proposals = self.load_proposals(self.proposal_file) - else: - self.proposals = None - - # filter images too small and containing no annotations - if not test_mode: - valid_inds = self._filter_imgs() - self.data_infos = [self.data_infos[i] for i in valid_inds] - if self.proposals is not None: - self.proposals = [self.proposals[i] for i in valid_inds] - # set group flag for the sampler - self._set_group_flag() - - # processing pipeline - self.pipeline = Compose(pipeline) - - def __len__(self): - """Total number of samples of data.""" - return len(self.data_infos) - - def load_annotations(self, ann_file): - """Load annotation from annotation file.""" - return mmcv.load(ann_file) - - def load_proposals(self, proposal_file): - """Load proposal from proposal file.""" - return mmcv.load(proposal_file) - - def get_ann_info(self, idx): - """Get annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.data_infos[idx]['ann'] - - def get_cat_ids(self, idx): - """Get category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. 
- """ - - return self.data_infos[idx]['ann']['labels'].astype(np.int).tolist() - - def pre_pipeline(self, results): - """Prepare results dict for pipeline.""" - results['img_prefix'] = self.img_prefix - results['seg_prefix'] = self.seg_prefix - results['proposal_file'] = self.proposal_file - results['bbox_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - - def _filter_imgs(self, min_size=32): - """Filter images too small.""" - if self.filter_empty_gt: - warnings.warn( - 'CustomDataset does not support filtering empty gt images.') - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - - def _set_group_flag(self): - """Set flag according to image aspect ratio. - - Images with aspect ratio greater than 1 will be set as group 1, - otherwise group 0. - """ - self.flag = np.zeros(len(self), dtype=np.uint8) - for i in range(len(self)): - img_info = self.data_infos[i] - if img_info['width'] / img_info['height'] > 1: - self.flag[i] = 1 - - def _rand_another(self, idx): - """Get another random index from the same group as the given index.""" - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) - - def __getitem__(self, idx): - """Get training/test data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training/test data (with annotation if `test_mode` is set \ - True). - """ - - if self.test_mode: - return self.prepare_test_img(idx) - while True: - data = self.prepare_train_img(idx) - if data is None: - idx = self._rand_another(idx) - continue - return data - - def prepare_train_img(self, idx): - """Get training data and annotations after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training data and annotation after pipeline with new keys \ - introduced by pipeline. - """ - - img_info = self.data_infos[idx] - ann_info = self.get_ann_info(idx) - results = dict(img_info=img_info, ann_info=ann_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Get testing data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Testing data after pipeline with new keys introduced by \ - pipeline. - """ - - img_info = self.data_infos[idx] - results = dict(img_info=img_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - @classmethod - def get_classes(cls, classes=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - - Returns: - tuple[str] or list[str]: Names of categories of the dataset. 
- """ - if classes is None: - return cls.CLASSES - - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - return class_names - - def get_cat2imgs(self): - """Get a dict with class as key and img_ids as values, which will be - used in :class:`ClassAwareSampler`. - - Returns: - dict[list]: A dict of per-label image list, - the item of the dict indicates a label index, - corresponds to the image index that contains the label. - """ - if self.CLASSES is None: - raise ValueError('self.CLASSES can not be None') - # sort the label index - cat2imgs = {i: [] for i in range(len(self.CLASSES))} - for i in range(len(self)): - cat_ids = set(self.get_cat_ids(i)) - for cat in cat_ids: - cat2imgs[cat].append(i) - return cat2imgs - - def format_results(self, results, **kwargs): - """Place holder to format result to dataset specific output.""" - - def evaluate(self, - results, - metric='mAP', - logger=None, - proposal_nums=(100, 300, 1000), - iou_thr=0.5, - scale_ranges=None): - """Evaluate the dataset. - - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thr (float | list[float]): IoU threshold. Default: 0.5. - scale_ranges (list[tuple] | None): Scale ranges for evaluating mAP. - Default: None. - """ - - if not isinstance(metric, str): - assert len(metric) == 1 - metric = metric[0] - allowed_metrics = ['mAP', 'recall'] - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - annotations = [self.get_ann_info(i) for i in range(len(self))] - eval_results = OrderedDict() - iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr - if metric == 'mAP': - assert isinstance(iou_thrs, list) - mean_aps = [] - for iou_thr in iou_thrs: - print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}') - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=scale_ranges, - iou_thr=iou_thr, - dataset=self.CLASSES, - logger=logger) - mean_aps.append(mean_ap) - eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3) - eval_results['mAP'] = sum(mean_aps) / len(mean_aps) - elif metric == 'recall': - gt_bboxes = [ann['bboxes'] for ann in annotations] - recalls = eval_recalls( - gt_bboxes, results, proposal_nums, iou_thr, logger=logger) - for i, num in enumerate(proposal_nums): - for j, iou in enumerate(iou_thrs): - eval_results[f'recall@{num}@{iou}'] = recalls[i, j] - if recalls.shape[1] > 1: - ar = recalls.mean(axis=1) - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - return eval_results - - def __repr__(self): - """Print the number of instance number.""" - dataset_type = 'Test' if self.test_mode else 'Train' - result = (f'\n{self.__class__.__name__} {dataset_type} dataset ' - f'with number of images {len(self)}, ' - f'and instance counts: \n') - if self.CLASSES is None: - result += 'Category names are not provided. 
\n' - return result - instance_count = np.zeros(len(self.CLASSES) + 1).astype(int) - # count the instance number in each image - for idx in range(len(self)): - label = self.get_ann_info(idx)['labels'] - unique, counts = np.unique(label, return_counts=True) - if len(unique) > 0: - # add the occurrence number to each class - instance_count[unique] += counts - else: - # background is the last index - instance_count[-1] += 1 - # create a table with category count - table_data = [['category', 'count'] * 5] - row_data = [] - for cls, count in enumerate(instance_count): - if cls < len(self.CLASSES): - row_data += [f'{cls} [{self.CLASSES[cls]}]', f'{count}'] - else: - # add the background number - row_data += ['-1 background', f'{count}'] - if len(row_data) == 10: - table_data.append(row_data) - row_data = [] - if len(row_data) >= 2: - if row_data[-1] == '0': - row_data = row_data[:-2] - if len(row_data) >= 2: - table_data.append([]) - table_data.append(row_data) - - table = AsciiTable(table_data) - result += table.table - return result diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/dataset_wrappers.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/dataset_wrappers.py deleted file mode 100644 index e62b88eb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/dataset_wrappers.py +++ /dev/null @@ -1,456 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import bisect -import collections -import copy -import math -from collections import defaultdict - -import numpy as np -from mmcv.utils import build_from_cfg, print_log -from torch.utils.data.dataset import ConcatDataset as _ConcatDataset - -from .builder import DATASETS, PIPELINES -from .coco import CocoDataset - - -@DATASETS.register_module() -class ConcatDataset(_ConcatDataset): - """A wrapper of concatenated dataset. - - Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but - concat the group flag for image aspect ratio. - - Args: - datasets (list[:obj:`Dataset`]): A list of datasets. - separate_eval (bool): Whether to evaluate the results - separately if it is used as validation dataset. - Defaults to True. - """ - - def __init__(self, datasets, separate_eval=True): - super(ConcatDataset, self).__init__(datasets) - self.CLASSES = datasets[0].CLASSES - self.PALETTE = getattr(datasets[0], 'PALETTE', None) - self.separate_eval = separate_eval - if not separate_eval: - if any([isinstance(ds, CocoDataset) for ds in datasets]): - raise NotImplementedError( - 'Evaluating concatenated CocoDataset as a whole is not' - ' supported! Please set "separate_eval=True"') - elif len(set([type(ds) for ds in datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types') - - if hasattr(datasets[0], 'flag'): - flags = [] - for i in range(0, len(datasets)): - flags.append(datasets[i].flag) - self.flag = np.concatenate(flags) - - def get_cat_ids(self, idx): - """Get category ids of concatenated dataset by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. 
- """ - - if idx < 0: - if -idx > len(self): - raise ValueError( - 'absolute value of index should not exceed dataset length') - idx = len(self) + idx - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - return self.datasets[dataset_idx].get_cat_ids(sample_idx) - - def get_ann_info(self, idx): - """Get annotation of concatenated dataset by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - if idx < 0: - if -idx > len(self): - raise ValueError( - 'absolute value of index should not exceed dataset length') - idx = len(self) + idx - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - return self.datasets[dataset_idx].get_ann_info(sample_idx) - - def evaluate(self, results, logger=None, **kwargs): - """Evaluate the results. - - Args: - results (list[list | tuple]): Testing results of the dataset. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: AP results of the total dataset or each separate - dataset if `self.separate_eval=True`. - """ - assert len(results) == self.cumulative_sizes[-1], \ - ('Dataset and results have different sizes: ' - f'{self.cumulative_sizes[-1]} v.s. {len(results)}') - - # Check whether all the datasets support evaluation - for dataset in self.datasets: - assert hasattr(dataset, 'evaluate'), \ - f'{type(dataset)} does not implement evaluate function' - - if self.separate_eval: - dataset_idx = -1 - total_eval_results = dict() - for size, dataset in zip(self.cumulative_sizes, self.datasets): - start_idx = 0 if dataset_idx == -1 else \ - self.cumulative_sizes[dataset_idx] - end_idx = self.cumulative_sizes[dataset_idx + 1] - - results_per_dataset = results[start_idx:end_idx] - print_log( - f'\nEvaluateing {dataset.ann_file} with ' - f'{len(results_per_dataset)} images now', - logger=logger) - - eval_results_per_dataset = dataset.evaluate( - results_per_dataset, logger=logger, **kwargs) - dataset_idx += 1 - for k, v in eval_results_per_dataset.items(): - total_eval_results.update({f'{dataset_idx}_{k}': v}) - - return total_eval_results - elif any([isinstance(ds, CocoDataset) for ds in self.datasets]): - raise NotImplementedError( - 'Evaluating concatenated CocoDataset as a whole is not' - ' supported! Please set "separate_eval=True"') - elif len(set([type(ds) for ds in self.datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types') - else: - original_data_infos = self.datasets[0].data_infos - self.datasets[0].data_infos = sum( - [dataset.data_infos for dataset in self.datasets], []) - eval_results = self.datasets[0].evaluate( - results, logger=logger, **kwargs) - self.datasets[0].data_infos = original_data_infos - return eval_results - - -@DATASETS.register_module() -class RepeatDataset: - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. 
- """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - self.CLASSES = dataset.CLASSES - self.PALETTE = getattr(dataset, 'PALETTE', None) - if hasattr(self.dataset, 'flag'): - self.flag = np.tile(self.dataset.flag, times) - - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - return self.dataset[idx % self._ori_len] - - def get_cat_ids(self, idx): - """Get category ids of repeat dataset by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - return self.dataset.get_cat_ids(idx % self._ori_len) - - def get_ann_info(self, idx): - """Get annotation of repeat dataset by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.dataset.get_ann_info(idx % self._ori_len) - - def __len__(self): - """Length after repetition.""" - return self.times * self._ori_len - - -# Modified from https://github.com/facebookresearch/detectron2/blob/41d475b75a230221e21d9cac5d69655e3415e3a4/detectron2/data/samplers/distributed_sampler.py#L57 # noqa -@DATASETS.register_module() -class ClassBalancedDataset: - """A wrapper of repeated dataset with repeat factor. - - Suitable for training on class imbalanced datasets like LVIS. Following - the sampling strategy in the `paper `_, - in each epoch, an image may appear multiple times based on its - "repeat factor". - The repeat factor for an image is a function of the frequency the rarest - category labeled in that image. The "frequency of category c" in [0, 1] - is defined by the fraction of images in the training set (without repeats) - in which category c appears. - The dataset needs to instantiate :func:`self.get_cat_ids` to support - ClassBalancedDataset. - - The repeat factor is computed as followed. - - 1. For each category c, compute the fraction # of images - that contain it: :math:`f(c)` - 2. For each category c, compute the category-level repeat factor: - :math:`r(c) = max(1, sqrt(t/f(c)))` - 3. For each image I, compute the image-level repeat factor: - :math:`r(I) = max_{c in I} r(c)` - - Args: - dataset (:obj:`CustomDataset`): The dataset to be repeated. - oversample_thr (float): frequency threshold below which data is - repeated. For categories with ``f_c >= oversample_thr``, there is - no oversampling. For categories with ``f_c < oversample_thr``, the - degree of oversampling following the square-root inverse frequency - heuristic above. - filter_empty_gt (bool, optional): If set true, images without bounding - boxes will not be oversampled. Otherwise, they will be categorized - as the pure background class and involved into the oversampling. - Default: True. 
- """ - - def __init__(self, dataset, oversample_thr, filter_empty_gt=True): - self.dataset = dataset - self.oversample_thr = oversample_thr - self.filter_empty_gt = filter_empty_gt - self.CLASSES = dataset.CLASSES - self.PALETTE = getattr(dataset, 'PALETTE', None) - - repeat_factors = self._get_repeat_factors(dataset, oversample_thr) - repeat_indices = [] - for dataset_idx, repeat_factor in enumerate(repeat_factors): - repeat_indices.extend([dataset_idx] * math.ceil(repeat_factor)) - self.repeat_indices = repeat_indices - - flags = [] - if hasattr(self.dataset, 'flag'): - for flag, repeat_factor in zip(self.dataset.flag, repeat_factors): - flags.extend([flag] * int(math.ceil(repeat_factor))) - assert len(flags) == len(repeat_indices) - self.flag = np.asarray(flags, dtype=np.uint8) - - def _get_repeat_factors(self, dataset, repeat_thr): - """Get repeat factor for each images in the dataset. - - Args: - dataset (:obj:`CustomDataset`): The dataset - repeat_thr (float): The threshold of frequency. If an image - contains the categories whose frequency below the threshold, - it would be repeated. - - Returns: - list[float]: The repeat factors for each images in the dataset. - """ - - # 1. For each category c, compute the fraction # of images - # that contain it: f(c) - category_freq = defaultdict(int) - num_images = len(dataset) - for idx in range(num_images): - cat_ids = set(self.dataset.get_cat_ids(idx)) - if len(cat_ids) == 0 and not self.filter_empty_gt: - cat_ids = set([len(self.CLASSES)]) - for cat_id in cat_ids: - category_freq[cat_id] += 1 - for k, v in category_freq.items(): - category_freq[k] = v / num_images - - # 2. For each category c, compute the category-level repeat factor: - # r(c) = max(1, sqrt(t/f(c))) - category_repeat = { - cat_id: max(1.0, math.sqrt(repeat_thr / cat_freq)) - for cat_id, cat_freq in category_freq.items() - } - - # 3. For each image I, compute the image-level repeat factor: - # r(I) = max_{c in I} r(c) - repeat_factors = [] - for idx in range(num_images): - cat_ids = set(self.dataset.get_cat_ids(idx)) - if len(cat_ids) == 0 and not self.filter_empty_gt: - cat_ids = set([len(self.CLASSES)]) - repeat_factor = 1 - if len(cat_ids) > 0: - repeat_factor = max( - {category_repeat[cat_id] - for cat_id in cat_ids}) - repeat_factors.append(repeat_factor) - - return repeat_factors - - def __getitem__(self, idx): - ori_index = self.repeat_indices[idx] - return self.dataset[ori_index] - - def get_ann_info(self, idx): - """Get annotation of dataset by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - ori_index = self.repeat_indices[idx] - return self.dataset.get_ann_info(ori_index) - - def __len__(self): - """Length after repetition.""" - return len(self.repeat_indices) - - -@DATASETS.register_module() -class MultiImageMixDataset: - """A wrapper of multiple images mixed dataset. - - Suitable for training on multiple images mixed data augmentation like - mosaic and mixup. For the augmentation pipeline of mixed image data, - the `get_indexes` method needs to be provided to obtain the image - indexes, and you can set `skip_flags` to change the pipeline running - process. At the same time, we provide the `dynamic_scale` parameter - to dynamically change the output image size. - - Args: - dataset (:obj:`CustomDataset`): The dataset to be mixed. - pipeline (Sequence[dict]): Sequence of transform object or - config dict to be composed. - dynamic_scale (tuple[int], optional): The image scale can be changed - dynamically. 
Default to None. It is deprecated. - skip_type_keys (list[str], optional): Sequence of type string to - be skip pipeline. Default to None. - max_refetch (int): The maximum number of retry iterations for getting - valid results from the pipeline. If the number of iterations is - greater than `max_refetch`, but results is still None, then the - iteration is terminated and raise the error. Default: 15. - """ - - def __init__(self, - dataset, - pipeline, - dynamic_scale=None, - skip_type_keys=None, - max_refetch=15): - if dynamic_scale is not None: - raise RuntimeError( - 'dynamic_scale is deprecated. Please use Resize pipeline ' - 'to achieve similar functions') - assert isinstance(pipeline, collections.abc.Sequence) - if skip_type_keys is not None: - assert all([ - isinstance(skip_type_key, str) - for skip_type_key in skip_type_keys - ]) - self._skip_type_keys = skip_type_keys - - self.pipeline = [] - self.pipeline_types = [] - for transform in pipeline: - if isinstance(transform, dict): - self.pipeline_types.append(transform['type']) - transform = build_from_cfg(transform, PIPELINES) - self.pipeline.append(transform) - else: - raise TypeError('pipeline must be a dict') - - self.dataset = dataset - self.CLASSES = dataset.CLASSES - self.PALETTE = getattr(dataset, 'PALETTE', None) - if hasattr(self.dataset, 'flag'): - self.flag = dataset.flag - self.num_samples = len(dataset) - self.max_refetch = max_refetch - - def __len__(self): - return self.num_samples - - def __getitem__(self, idx): - results = copy.deepcopy(self.dataset[idx]) - for (transform, transform_type) in zip(self.pipeline, - self.pipeline_types): - if self._skip_type_keys is not None and \ - transform_type in self._skip_type_keys: - continue - - if hasattr(transform, 'get_indexes'): - for i in range(self.max_refetch): - # Make sure the results passed the loading pipeline - # of the original dataset is not None. - indexes = transform.get_indexes(self.dataset) - if not isinstance(indexes, collections.abc.Sequence): - indexes = [indexes] - mix_results = [ - copy.deepcopy(self.dataset[index]) for index in indexes - ] - if None not in mix_results: - results['mix_results'] = mix_results - break - else: - raise RuntimeError( - 'The loading pipeline of the original dataset' - ' always return None. Please check the correctness ' - 'of the dataset and its pipeline.') - - for i in range(self.max_refetch): - # To confirm the results passed the training pipeline - # of the wrapper is not None. - updated_results = transform(copy.deepcopy(results)) - if updated_results is not None: - results = updated_results - break - else: - raise RuntimeError( - 'The training pipeline of the dataset wrapper' - ' always return None.Please check the correctness ' - 'of the dataset and its pipeline.') - - if 'mix_results' in results: - results.pop('mix_results') - - return results - - def update_skip_type_keys(self, skip_type_keys): - """Update skip_type_keys. It is called by an external hook. - - Args: - skip_type_keys (list[str], optional): Sequence of type - string to be skip pipeline. 
- """ - assert all([ - isinstance(skip_type_key, str) for skip_type_key in skip_type_keys - ]) - self._skip_type_keys = skip_type_keys diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/deepfashion.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/deepfashion.py deleted file mode 100644 index 609f8091..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/deepfashion.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class DeepFashionDataset(CocoDataset): - - CLASSES = ('top', 'skirt', 'leggings', 'dress', 'outer', 'pants', 'bag', - 'neckwear', 'headwear', 'eyeglass', 'belt', 'footwear', 'hair', - 'skin', 'face') - - PALETTE = [(0, 192, 64), (0, 64, 96), (128, 192, 192), (0, 64, 64), - (0, 192, 224), (0, 192, 192), (128, 192, 64), (0, 192, 96), - (128, 32, 192), (0, 0, 224), (0, 0, 64), (0, 160, 192), - (128, 0, 96), (128, 0, 192), (0, 32, 192)] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/lvis.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/lvis.py deleted file mode 100644 index 511e31ae..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/lvis.py +++ /dev/null @@ -1,742 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import itertools -import logging -import os.path as osp -import tempfile -import warnings -from collections import OrderedDict - -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class LVISV05Dataset(CocoDataset): - - CLASSES = ( - 'acorn', 'aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock', - 'alcohol', 'alligator', 'almond', 'ambulance', 'amplifier', 'anklet', - 'antenna', 'apple', 'apple_juice', 'applesauce', 'apricot', 'apron', - 'aquarium', 'armband', 'armchair', 'armoire', 'armor', 'artichoke', - 'trash_can', 'ashtray', 'asparagus', 'atomizer', 'avocado', 'award', - 'awning', 'ax', 'baby_buggy', 'basketball_backboard', 'backpack', - 'handbag', 'suitcase', 'bagel', 'bagpipe', 'baguet', 'bait', 'ball', - 'ballet_skirt', 'balloon', 'bamboo', 'banana', 'Band_Aid', 'bandage', - 'bandanna', 'banjo', 'banner', 'barbell', 'barge', 'barrel', - 'barrette', 'barrow', 'baseball_base', 'baseball', 'baseball_bat', - 'baseball_cap', 'baseball_glove', 'basket', 'basketball_hoop', - 'basketball', 'bass_horn', 'bat_(animal)', 'bath_mat', 'bath_towel', - 'bathrobe', 'bathtub', 'batter_(food)', 'battery', 'beachball', 'bead', - 'beaker', 'bean_curd', 'beanbag', 'beanie', 'bear', 'bed', - 'bedspread', 'cow', 'beef_(food)', 'beeper', 'beer_bottle', 'beer_can', - 'beetle', 'bell', 'bell_pepper', 'belt', 'belt_buckle', 'bench', - 'beret', 'bib', 'Bible', 'bicycle', 'visor', 'binder', 'binoculars', - 'bird', 'birdfeeder', 'birdbath', 'birdcage', 'birdhouse', - 'birthday_cake', 'birthday_card', 'biscuit_(bread)', 'pirate_flag', - 'black_sheep', 'blackboard', 'blanket', 'blazer', 'blender', 'blimp', - 'blinker', 'blueberry', 'boar', 'gameboard', 'boat', 'bobbin', - 'bobby_pin', 'boiled_egg', 'bolo_tie', 'deadbolt', 'bolt', 'bonnet', - 'book', 'book_bag', 'bookcase', 'booklet', 'bookmark', - 'boom_microphone', 'boot', 'bottle', 'bottle_opener', 'bouquet', - 'bow_(weapon)', 'bow_(decorative_ribbons)', 'bow-tie', 'bowl', - 'pipe_bowl', 'bowler_hat', 'bowling_ball', 
'bowling_pin', - 'boxing_glove', 'suspenders', 'bracelet', 'brass_plaque', 'brassiere', - 'bread-bin', 'breechcloth', 'bridal_gown', 'briefcase', - 'bristle_brush', 'broccoli', 'broach', 'broom', 'brownie', - 'brussels_sprouts', 'bubble_gum', 'bucket', 'horse_buggy', 'bull', - 'bulldog', 'bulldozer', 'bullet_train', 'bulletin_board', - 'bulletproof_vest', 'bullhorn', 'corned_beef', 'bun', 'bunk_bed', - 'buoy', 'burrito', 'bus_(vehicle)', 'business_card', 'butcher_knife', - 'butter', 'butterfly', 'button', 'cab_(taxi)', 'cabana', 'cabin_car', - 'cabinet', 'locker', 'cake', 'calculator', 'calendar', 'calf', - 'camcorder', 'camel', 'camera', 'camera_lens', 'camper_(vehicle)', - 'can', 'can_opener', 'candelabrum', 'candle', 'candle_holder', - 'candy_bar', 'candy_cane', 'walking_cane', 'canister', 'cannon', - 'canoe', 'cantaloup', 'canteen', 'cap_(headwear)', 'bottle_cap', - 'cape', 'cappuccino', 'car_(automobile)', 'railcar_(part_of_a_train)', - 'elevator_car', 'car_battery', 'identity_card', 'card', 'cardigan', - 'cargo_ship', 'carnation', 'horse_carriage', 'carrot', 'tote_bag', - 'cart', 'carton', 'cash_register', 'casserole', 'cassette', 'cast', - 'cat', 'cauliflower', 'caviar', 'cayenne_(spice)', 'CD_player', - 'celery', 'cellular_telephone', 'chain_mail', 'chair', 'chaise_longue', - 'champagne', 'chandelier', 'chap', 'checkbook', 'checkerboard', - 'cherry', 'chessboard', 'chest_of_drawers_(furniture)', - 'chicken_(animal)', 'chicken_wire', 'chickpea', 'Chihuahua', - 'chili_(vegetable)', 'chime', 'chinaware', 'crisp_(potato_chip)', - 'poker_chip', 'chocolate_bar', 'chocolate_cake', 'chocolate_milk', - 'chocolate_mousse', 'choker', 'chopping_board', 'chopstick', - 'Christmas_tree', 'slide', 'cider', 'cigar_box', 'cigarette', - 'cigarette_case', 'cistern', 'clarinet', 'clasp', 'cleansing_agent', - 'clementine', 'clip', 'clipboard', 'clock', 'clock_tower', - 'clothes_hamper', 'clothespin', 'clutch_bag', 'coaster', 'coat', - 'coat_hanger', 'coatrack', 'cock', 'coconut', 'coffee_filter', - 'coffee_maker', 'coffee_table', 'coffeepot', 'coil', 'coin', - 'colander', 'coleslaw', 'coloring_material', 'combination_lock', - 'pacifier', 'comic_book', 'computer_keyboard', 'concrete_mixer', - 'cone', 'control', 'convertible_(automobile)', 'sofa_bed', 'cookie', - 'cookie_jar', 'cooking_utensil', 'cooler_(for_food)', - 'cork_(bottle_plug)', 'corkboard', 'corkscrew', 'edible_corn', - 'cornbread', 'cornet', 'cornice', 'cornmeal', 'corset', - 'romaine_lettuce', 'costume', 'cougar', 'coverall', 'cowbell', - 'cowboy_hat', 'crab_(animal)', 'cracker', 'crape', 'crate', 'crayon', - 'cream_pitcher', 'credit_card', 'crescent_roll', 'crib', 'crock_pot', - 'crossbar', 'crouton', 'crow', 'crown', 'crucifix', 'cruise_ship', - 'police_cruiser', 'crumb', 'crutch', 'cub_(animal)', 'cube', - 'cucumber', 'cufflink', 'cup', 'trophy_cup', 'cupcake', 'hair_curler', - 'curling_iron', 'curtain', 'cushion', 'custard', 'cutting_tool', - 'cylinder', 'cymbal', 'dachshund', 'dagger', 'dartboard', - 'date_(fruit)', 'deck_chair', 'deer', 'dental_floss', 'desk', - 'detergent', 'diaper', 'diary', 'die', 'dinghy', 'dining_table', 'tux', - 'dish', 'dish_antenna', 'dishrag', 'dishtowel', 'dishwasher', - 'dishwasher_detergent', 'diskette', 'dispenser', 'Dixie_cup', 'dog', - 'dog_collar', 'doll', 'dollar', 'dolphin', 'domestic_ass', 'eye_mask', - 'doorbell', 'doorknob', 'doormat', 'doughnut', 'dove', 'dragonfly', - 'drawer', 'underdrawers', 'dress', 'dress_hat', 'dress_suit', - 'dresser', 'drill', 'drinking_fountain', 'drone', 'dropper', - 
'drum_(musical_instrument)', 'drumstick', 'duck', 'duckling', - 'duct_tape', 'duffel_bag', 'dumbbell', 'dumpster', 'dustpan', - 'Dutch_oven', 'eagle', 'earphone', 'earplug', 'earring', 'easel', - 'eclair', 'eel', 'egg', 'egg_roll', 'egg_yolk', 'eggbeater', - 'eggplant', 'electric_chair', 'refrigerator', 'elephant', 'elk', - 'envelope', 'eraser', 'escargot', 'eyepatch', 'falcon', 'fan', - 'faucet', 'fedora', 'ferret', 'Ferris_wheel', 'ferry', 'fig_(fruit)', - 'fighter_jet', 'figurine', 'file_cabinet', 'file_(tool)', 'fire_alarm', - 'fire_engine', 'fire_extinguisher', 'fire_hose', 'fireplace', - 'fireplug', 'fish', 'fish_(food)', 'fishbowl', 'fishing_boat', - 'fishing_rod', 'flag', 'flagpole', 'flamingo', 'flannel', 'flash', - 'flashlight', 'fleece', 'flip-flop_(sandal)', 'flipper_(footwear)', - 'flower_arrangement', 'flute_glass', 'foal', 'folding_chair', - 'food_processor', 'football_(American)', 'football_helmet', - 'footstool', 'fork', 'forklift', 'freight_car', 'French_toast', - 'freshener', 'frisbee', 'frog', 'fruit_juice', 'fruit_salad', - 'frying_pan', 'fudge', 'funnel', 'futon', 'gag', 'garbage', - 'garbage_truck', 'garden_hose', 'gargle', 'gargoyle', 'garlic', - 'gasmask', 'gazelle', 'gelatin', 'gemstone', 'giant_panda', - 'gift_wrap', 'ginger', 'giraffe', 'cincture', - 'glass_(drink_container)', 'globe', 'glove', 'goat', 'goggles', - 'goldfish', 'golf_club', 'golfcart', 'gondola_(boat)', 'goose', - 'gorilla', 'gourd', 'surgical_gown', 'grape', 'grasshopper', 'grater', - 'gravestone', 'gravy_boat', 'green_bean', 'green_onion', 'griddle', - 'grillroom', 'grinder_(tool)', 'grits', 'grizzly', 'grocery_bag', - 'guacamole', 'guitar', 'gull', 'gun', 'hair_spray', 'hairbrush', - 'hairnet', 'hairpin', 'ham', 'hamburger', 'hammer', 'hammock', - 'hamper', 'hamster', 'hair_dryer', 'hand_glass', 'hand_towel', - 'handcart', 'handcuff', 'handkerchief', 'handle', 'handsaw', - 'hardback_book', 'harmonium', 'hat', 'hatbox', 'hatch', 'veil', - 'headband', 'headboard', 'headlight', 'headscarf', 'headset', - 'headstall_(for_horses)', 'hearing_aid', 'heart', 'heater', - 'helicopter', 'helmet', 'heron', 'highchair', 'hinge', 'hippopotamus', - 'hockey_stick', 'hog', 'home_plate_(baseball)', 'honey', 'fume_hood', - 'hook', 'horse', 'hose', 'hot-air_balloon', 'hotplate', 'hot_sauce', - 'hourglass', 'houseboat', 'hummingbird', 'hummus', 'polar_bear', - 'icecream', 'popsicle', 'ice_maker', 'ice_pack', 'ice_skate', - 'ice_tea', 'igniter', 'incense', 'inhaler', 'iPod', - 'iron_(for_clothing)', 'ironing_board', 'jacket', 'jam', 'jean', - 'jeep', 'jelly_bean', 'jersey', 'jet_plane', 'jewelry', 'joystick', - 'jumpsuit', 'kayak', 'keg', 'kennel', 'kettle', 'key', 'keycard', - 'kilt', 'kimono', 'kitchen_sink', 'kitchen_table', 'kite', 'kitten', - 'kiwi_fruit', 'knee_pad', 'knife', 'knight_(chess_piece)', - 'knitting_needle', 'knob', 'knocker_(on_a_door)', 'koala', 'lab_coat', - 'ladder', 'ladle', 'ladybug', 'lamb_(animal)', 'lamb-chop', 'lamp', - 'lamppost', 'lampshade', 'lantern', 'lanyard', 'laptop_computer', - 'lasagna', 'latch', 'lawn_mower', 'leather', 'legging_(clothing)', - 'Lego', 'lemon', 'lemonade', 'lettuce', 'license_plate', 'life_buoy', - 'life_jacket', 'lightbulb', 'lightning_rod', 'lime', 'limousine', - 'linen_paper', 'lion', 'lip_balm', 'lipstick', 'liquor', 'lizard', - 'Loafer_(type_of_shoe)', 'log', 'lollipop', 'lotion', - 'speaker_(stereo_equipment)', 'loveseat', 'machine_gun', 'magazine', - 'magnet', 'mail_slot', 'mailbox_(at_home)', 'mallet', 'mammoth', - 'mandarin_orange', 'manger', 'manhole', 
'map', 'marker', 'martini', - 'mascot', 'mashed_potato', 'masher', 'mask', 'mast', - 'mat_(gym_equipment)', 'matchbox', 'mattress', 'measuring_cup', - 'measuring_stick', 'meatball', 'medicine', 'melon', 'microphone', - 'microscope', 'microwave_oven', 'milestone', 'milk', 'minivan', - 'mint_candy', 'mirror', 'mitten', 'mixer_(kitchen_tool)', 'money', - 'monitor_(computer_equipment) computer_monitor', 'monkey', 'motor', - 'motor_scooter', 'motor_vehicle', 'motorboat', 'motorcycle', - 'mound_(baseball)', 'mouse_(animal_rodent)', - 'mouse_(computer_equipment)', 'mousepad', 'muffin', 'mug', 'mushroom', - 'music_stool', 'musical_instrument', 'nailfile', 'nameplate', 'napkin', - 'neckerchief', 'necklace', 'necktie', 'needle', 'nest', 'newsstand', - 'nightshirt', 'nosebag_(for_animals)', 'noseband_(for_animals)', - 'notebook', 'notepad', 'nut', 'nutcracker', 'oar', 'octopus_(food)', - 'octopus_(animal)', 'oil_lamp', 'olive_oil', 'omelet', 'onion', - 'orange_(fruit)', 'orange_juice', 'oregano', 'ostrich', 'ottoman', - 'overalls_(clothing)', 'owl', 'packet', 'inkpad', 'pad', 'paddle', - 'padlock', 'paintbox', 'paintbrush', 'painting', 'pajamas', 'palette', - 'pan_(for_cooking)', 'pan_(metal_container)', 'pancake', 'pantyhose', - 'papaya', 'paperclip', 'paper_plate', 'paper_towel', 'paperback_book', - 'paperweight', 'parachute', 'parakeet', 'parasail_(sports)', - 'parchment', 'parka', 'parking_meter', 'parrot', - 'passenger_car_(part_of_a_train)', 'passenger_ship', 'passport', - 'pastry', 'patty_(food)', 'pea_(food)', 'peach', 'peanut_butter', - 'pear', 'peeler_(tool_for_fruit_and_vegetables)', 'pegboard', - 'pelican', 'pen', 'pencil', 'pencil_box', 'pencil_sharpener', - 'pendulum', 'penguin', 'pennant', 'penny_(coin)', 'pepper', - 'pepper_mill', 'perfume', 'persimmon', 'baby', 'pet', 'petfood', - 'pew_(church_bench)', 'phonebook', 'phonograph_record', 'piano', - 'pickle', 'pickup_truck', 'pie', 'pigeon', 'piggy_bank', 'pillow', - 'pin_(non_jewelry)', 'pineapple', 'pinecone', 'ping-pong_ball', - 'pinwheel', 'tobacco_pipe', 'pipe', 'pistol', 'pita_(bread)', - 'pitcher_(vessel_for_liquid)', 'pitchfork', 'pizza', 'place_mat', - 'plate', 'platter', 'playing_card', 'playpen', 'pliers', - 'plow_(farm_equipment)', 'pocket_watch', 'pocketknife', - 'poker_(fire_stirring_tool)', 'pole', 'police_van', 'polo_shirt', - 'poncho', 'pony', 'pool_table', 'pop_(soda)', 'portrait', - 'postbox_(public)', 'postcard', 'poster', 'pot', 'flowerpot', 'potato', - 'potholder', 'pottery', 'pouch', 'power_shovel', 'prawn', 'printer', - 'projectile_(weapon)', 'projector', 'propeller', 'prune', 'pudding', - 'puffer_(fish)', 'puffin', 'pug-dog', 'pumpkin', 'puncher', 'puppet', - 'puppy', 'quesadilla', 'quiche', 'quilt', 'rabbit', 'race_car', - 'racket', 'radar', 'radiator', 'radio_receiver', 'radish', 'raft', - 'rag_doll', 'raincoat', 'ram_(animal)', 'raspberry', 'rat', - 'razorblade', 'reamer_(juicer)', 'rearview_mirror', 'receipt', - 'recliner', 'record_player', 'red_cabbage', 'reflector', - 'remote_control', 'rhinoceros', 'rib_(food)', 'rifle', 'ring', - 'river_boat', 'road_map', 'robe', 'rocking_chair', 'roller_skate', - 'Rollerblade', 'rolling_pin', 'root_beer', - 'router_(computer_equipment)', 'rubber_band', 'runner_(carpet)', - 'plastic_bag', 'saddle_(on_an_animal)', 'saddle_blanket', 'saddlebag', - 'safety_pin', 'sail', 'salad', 'salad_plate', 'salami', - 'salmon_(fish)', 'salmon_(food)', 'salsa', 'saltshaker', - 'sandal_(type_of_shoe)', 'sandwich', 'satchel', 'saucepan', 'saucer', - 'sausage', 'sawhorse', 'saxophone', 
'scale_(measuring_instrument)', - 'scarecrow', 'scarf', 'school_bus', 'scissors', 'scoreboard', - 'scrambled_eggs', 'scraper', 'scratcher', 'screwdriver', - 'scrubbing_brush', 'sculpture', 'seabird', 'seahorse', 'seaplane', - 'seashell', 'seedling', 'serving_dish', 'sewing_machine', 'shaker', - 'shampoo', 'shark', 'sharpener', 'Sharpie', 'shaver_(electric)', - 'shaving_cream', 'shawl', 'shears', 'sheep', 'shepherd_dog', - 'sherbert', 'shield', 'shirt', 'shoe', 'shopping_bag', 'shopping_cart', - 'short_pants', 'shot_glass', 'shoulder_bag', 'shovel', 'shower_head', - 'shower_curtain', 'shredder_(for_paper)', 'sieve', 'signboard', 'silo', - 'sink', 'skateboard', 'skewer', 'ski', 'ski_boot', 'ski_parka', - 'ski_pole', 'skirt', 'sled', 'sleeping_bag', 'sling_(bandage)', - 'slipper_(footwear)', 'smoothie', 'snake', 'snowboard', 'snowman', - 'snowmobile', 'soap', 'soccer_ball', 'sock', 'soda_fountain', - 'carbonated_water', 'sofa', 'softball', 'solar_array', 'sombrero', - 'soup', 'soup_bowl', 'soupspoon', 'sour_cream', 'soya_milk', - 'space_shuttle', 'sparkler_(fireworks)', 'spatula', 'spear', - 'spectacles', 'spice_rack', 'spider', 'sponge', 'spoon', 'sportswear', - 'spotlight', 'squirrel', 'stapler_(stapling_machine)', 'starfish', - 'statue_(sculpture)', 'steak_(food)', 'steak_knife', - 'steamer_(kitchen_appliance)', 'steering_wheel', 'stencil', - 'stepladder', 'step_stool', 'stereo_(sound_system)', 'stew', 'stirrer', - 'stirrup', 'stockings_(leg_wear)', 'stool', 'stop_sign', 'brake_light', - 'stove', 'strainer', 'strap', 'straw_(for_drinking)', 'strawberry', - 'street_sign', 'streetlight', 'string_cheese', 'stylus', 'subwoofer', - 'sugar_bowl', 'sugarcane_(plant)', 'suit_(clothing)', 'sunflower', - 'sunglasses', 'sunhat', 'sunscreen', 'surfboard', 'sushi', 'mop', - 'sweat_pants', 'sweatband', 'sweater', 'sweatshirt', 'sweet_potato', - 'swimsuit', 'sword', 'syringe', 'Tabasco_sauce', 'table-tennis_table', - 'table', 'table_lamp', 'tablecloth', 'tachometer', 'taco', 'tag', - 'taillight', 'tambourine', 'army_tank', 'tank_(storage_vessel)', - 'tank_top_(clothing)', 'tape_(sticky_cloth_or_paper)', 'tape_measure', - 'tapestry', 'tarp', 'tartan', 'tassel', 'tea_bag', 'teacup', - 'teakettle', 'teapot', 'teddy_bear', 'telephone', 'telephone_booth', - 'telephone_pole', 'telephoto_lens', 'television_camera', - 'television_set', 'tennis_ball', 'tennis_racket', 'tequila', - 'thermometer', 'thermos_bottle', 'thermostat', 'thimble', 'thread', - 'thumbtack', 'tiara', 'tiger', 'tights_(clothing)', 'timer', 'tinfoil', - 'tinsel', 'tissue_paper', 'toast_(food)', 'toaster', 'toaster_oven', - 'toilet', 'toilet_tissue', 'tomato', 'tongs', 'toolbox', 'toothbrush', - 'toothpaste', 'toothpick', 'cover', 'tortilla', 'tow_truck', 'towel', - 'towel_rack', 'toy', 'tractor_(farm_equipment)', 'traffic_light', - 'dirt_bike', 'trailer_truck', 'train_(railroad_vehicle)', 'trampoline', - 'tray', 'tree_house', 'trench_coat', 'triangle_(musical_instrument)', - 'tricycle', 'tripod', 'trousers', 'truck', 'truffle_(chocolate)', - 'trunk', 'vat', 'turban', 'turkey_(bird)', 'turkey_(food)', 'turnip', - 'turtle', 'turtleneck_(clothing)', 'typewriter', 'umbrella', - 'underwear', 'unicycle', 'urinal', 'urn', 'vacuum_cleaner', 'valve', - 'vase', 'vending_machine', 'vent', 'videotape', 'vinegar', 'violin', - 'vodka', 'volleyball', 'vulture', 'waffle', 'waffle_iron', 'wagon', - 'wagon_wheel', 'walking_stick', 'wall_clock', 'wall_socket', 'wallet', - 'walrus', 'wardrobe', 'wasabi', 'automatic_washer', 'watch', - 'water_bottle', 
'water_cooler', 'water_faucet', 'water_filter', - 'water_heater', 'water_jug', 'water_gun', 'water_scooter', 'water_ski', - 'water_tower', 'watering_can', 'watermelon', 'weathervane', 'webcam', - 'wedding_cake', 'wedding_ring', 'wet_suit', 'wheel', 'wheelchair', - 'whipped_cream', 'whiskey', 'whistle', 'wick', 'wig', 'wind_chime', - 'windmill', 'window_box_(for_plants)', 'windshield_wiper', 'windsock', - 'wine_bottle', 'wine_bucket', 'wineglass', 'wing_chair', - 'blinder_(for_horses)', 'wok', 'wolf', 'wooden_spoon', 'wreath', - 'wrench', 'wristband', 'wristlet', 'yacht', 'yak', 'yogurt', - 'yoke_(animal_equipment)', 'zebra', 'zucchini') - - PALETTE = None - - def load_annotations(self, ann_file): - """Load annotation from lvis style annotation file. - - Args: - ann_file (str): Path of annotation file. - - Returns: - list[dict]: Annotation info from LVIS api. - """ - - try: - import lvis - if getattr(lvis, '__version__', '0') >= '10.5.3': - warnings.warn( - 'mmlvis is deprecated, please install official lvis-api by "pip install git+https://github.com/lvis-dataset/lvis-api.git"', # noqa: E501 - UserWarning) - from lvis import LVIS - except ImportError: - raise ImportError( - 'Package lvis is not installed. Please run "pip install git+https://github.com/lvis-dataset/lvis-api.git".' # noqa: E501 - ) - self.coco = LVIS(ann_file) - self.cat_ids = self.coco.get_cat_ids() - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.img_ids = self.coco.get_img_ids() - data_infos = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - if info['file_name'].startswith('COCO'): - # Convert form the COCO 2014 file naming convention of - # COCO_[train/val/test]2014_000000000000.jpg to the 2017 - # naming convention of 000000000000.jpg - # (LVIS v1 will fix this naming issue) - info['filename'] = info['file_name'][-16:] - else: - info['filename'] = info['file_name'] - data_infos.append(info) - return data_infos - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=np.arange(0.5, 0.96, 0.05)): - """Evaluation in LVIS protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float]): IoU threshold used for evaluating - recalls. If set to a list, the average recall of all IoUs will - also be computed. Default: 0.5. - - Returns: - dict[str, float]: LVIS style metrics. - """ - - try: - import lvis - if getattr(lvis, '__version__', '0') >= '10.5.3': - warnings.warn( - 'mmlvis is deprecated, please install official lvis-api by "pip install git+https://github.com/lvis-dataset/lvis-api.git"', # noqa: E501 - UserWarning) - from lvis import LVISEval, LVISResults - except ImportError: - raise ImportError( - 'Package lvis is not installed. Please run "pip install git+https://github.com/lvis-dataset/lvis-api.git".' 
# noqa: E501 - ) - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - metrics = metric if isinstance(metric, list) else [metric] - allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast'] - for metric in metrics: - if metric not in allowed_metrics: - raise KeyError('metric {} is not supported'.format(metric)) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2json(results, jsonfile_prefix) - - eval_results = OrderedDict() - # get original api - lvis_gt = self.coco - for metric in metrics: - msg = 'Evaluating {}...'.format(metric) - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - if metric == 'proposal_fast': - ar = self.fast_eval_recall( - results, proposal_nums, iou_thrs, logger='silent') - log_msg = [] - for i, num in enumerate(proposal_nums): - eval_results['AR@{}'.format(num)] = ar[i] - log_msg.append('\nAR@{}\t{:.4f}'.format(num, ar[i])) - log_msg = ''.join(log_msg) - print_log(log_msg, logger=logger) - continue - - if metric not in result_files: - raise KeyError('{} is not in results'.format(metric)) - try: - lvis_dt = LVISResults(lvis_gt, result_files[metric]) - except IndexError: - print_log( - 'The testing results of the whole dataset is empty.', - logger=logger, - level=logging.ERROR) - break - - iou_type = 'bbox' if metric == 'proposal' else metric - lvis_eval = LVISEval(lvis_gt, lvis_dt, iou_type) - lvis_eval.params.imgIds = self.img_ids - if metric == 'proposal': - lvis_eval.params.useCats = 0 - lvis_eval.params.maxDets = list(proposal_nums) - lvis_eval.evaluate() - lvis_eval.accumulate() - lvis_eval.summarize() - for k, v in lvis_eval.get_results().items(): - if k.startswith('AR'): - val = float('{:.3f}'.format(float(v))) - eval_results[k] = val - else: - lvis_eval.evaluate() - lvis_eval.accumulate() - lvis_eval.summarize() - lvis_results = lvis_eval.get_results() - if classwise: # Compute per-category AP - # Compute per-category AP - # from https://github.com/facebookresearch/detectron2/ - precisions = lvis_eval.eval['precision'] - # precision: (iou, recall, cls, area range, max dets) - assert len(self.cat_ids) == precisions.shape[2] - - results_per_category = [] - for idx, catId in enumerate(self.cat_ids): - # area range index 0: all area ranges - # max dets index -1: typically 100 per image - # the dimensions of precisions are - # [num_thrs, num_recalls, num_cats, num_area_rngs] - nm = self.coco.load_cats([catId])[0] - precision = precisions[:, :, idx, 0] - precision = precision[precision > -1] - if precision.size: - ap = np.mean(precision) - else: - ap = float('nan') - results_per_category.append( - (f'{nm["name"]}', f'{float(ap):0.3f}')) - - num_columns = min(6, len(results_per_category) * 2) - results_flatten = list( - itertools.chain(*results_per_category)) - headers = ['category', 'AP'] * (num_columns // 2) - results_2d = itertools.zip_longest(*[ - results_flatten[i::num_columns] - for i in range(num_columns) - ]) - table_data = [headers] - table_data += [result for result in results_2d] - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - for k, v in lvis_results.items(): - if k.startswith('AP'): - key = '{}_{}'.format(metric, k) - val = float('{:.3f}'.format(float(v))) - eval_results[key] = val - ap_summary = ' '.join([ - 
'{}:{:.3f}'.format(k, float(v)) - for k, v in lvis_results.items() if k.startswith('AP') - ]) - eval_results['{}_mAP_copypaste'.format(metric)] = ap_summary - lvis_eval.print_results() - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results - - -LVISDataset = LVISV05Dataset -DATASETS.register_module(name='LVISDataset', module=LVISDataset) - - -@DATASETS.register_module() -class LVISV1Dataset(LVISDataset): - - CLASSES = ( - 'aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock', 'alcohol', - 'alligator', 'almond', 'ambulance', 'amplifier', 'anklet', 'antenna', - 'apple', 'applesauce', 'apricot', 'apron', 'aquarium', - 'arctic_(type_of_shoe)', 'armband', 'armchair', 'armoire', 'armor', - 'artichoke', 'trash_can', 'ashtray', 'asparagus', 'atomizer', - 'avocado', 'award', 'awning', 'ax', 'baboon', 'baby_buggy', - 'basketball_backboard', 'backpack', 'handbag', 'suitcase', 'bagel', - 'bagpipe', 'baguet', 'bait', 'ball', 'ballet_skirt', 'balloon', - 'bamboo', 'banana', 'Band_Aid', 'bandage', 'bandanna', 'banjo', - 'banner', 'barbell', 'barge', 'barrel', 'barrette', 'barrow', - 'baseball_base', 'baseball', 'baseball_bat', 'baseball_cap', - 'baseball_glove', 'basket', 'basketball', 'bass_horn', 'bat_(animal)', - 'bath_mat', 'bath_towel', 'bathrobe', 'bathtub', 'batter_(food)', - 'battery', 'beachball', 'bead', 'bean_curd', 'beanbag', 'beanie', - 'bear', 'bed', 'bedpan', 'bedspread', 'cow', 'beef_(food)', 'beeper', - 'beer_bottle', 'beer_can', 'beetle', 'bell', 'bell_pepper', 'belt', - 'belt_buckle', 'bench', 'beret', 'bib', 'Bible', 'bicycle', 'visor', - 'billboard', 'binder', 'binoculars', 'bird', 'birdfeeder', 'birdbath', - 'birdcage', 'birdhouse', 'birthday_cake', 'birthday_card', - 'pirate_flag', 'black_sheep', 'blackberry', 'blackboard', 'blanket', - 'blazer', 'blender', 'blimp', 'blinker', 'blouse', 'blueberry', - 'gameboard', 'boat', 'bob', 'bobbin', 'bobby_pin', 'boiled_egg', - 'bolo_tie', 'deadbolt', 'bolt', 'bonnet', 'book', 'bookcase', - 'booklet', 'bookmark', 'boom_microphone', 'boot', 'bottle', - 'bottle_opener', 'bouquet', 'bow_(weapon)', 'bow_(decorative_ribbons)', - 'bow-tie', 'bowl', 'pipe_bowl', 'bowler_hat', 'bowling_ball', 'box', - 'boxing_glove', 'suspenders', 'bracelet', 'brass_plaque', 'brassiere', - 'bread-bin', 'bread', 'breechcloth', 'bridal_gown', 'briefcase', - 'broccoli', 'broach', 'broom', 'brownie', 'brussels_sprouts', - 'bubble_gum', 'bucket', 'horse_buggy', 'bull', 'bulldog', 'bulldozer', - 'bullet_train', 'bulletin_board', 'bulletproof_vest', 'bullhorn', - 'bun', 'bunk_bed', 'buoy', 'burrito', 'bus_(vehicle)', 'business_card', - 'butter', 'butterfly', 'button', 'cab_(taxi)', 'cabana', 'cabin_car', - 'cabinet', 'locker', 'cake', 'calculator', 'calendar', 'calf', - 'camcorder', 'camel', 'camera', 'camera_lens', 'camper_(vehicle)', - 'can', 'can_opener', 'candle', 'candle_holder', 'candy_bar', - 'candy_cane', 'walking_cane', 'canister', 'canoe', 'cantaloup', - 'canteen', 'cap_(headwear)', 'bottle_cap', 'cape', 'cappuccino', - 'car_(automobile)', 'railcar_(part_of_a_train)', 'elevator_car', - 'car_battery', 'identity_card', 'card', 'cardigan', 'cargo_ship', - 'carnation', 'horse_carriage', 'carrot', 'tote_bag', 'cart', 'carton', - 'cash_register', 'casserole', 'cassette', 'cast', 'cat', 'cauliflower', - 'cayenne_(spice)', 'CD_player', 'celery', 'cellular_telephone', - 'chain_mail', 'chair', 'chaise_longue', 'chalice', 'chandelier', - 'chap', 'checkbook', 'checkerboard', 'cherry', 'chessboard', - 'chicken_(animal)', 'chickpea', 'chili_(vegetable)', 
'chime', - 'chinaware', 'crisp_(potato_chip)', 'poker_chip', 'chocolate_bar', - 'chocolate_cake', 'chocolate_milk', 'chocolate_mousse', 'choker', - 'chopping_board', 'chopstick', 'Christmas_tree', 'slide', 'cider', - 'cigar_box', 'cigarette', 'cigarette_case', 'cistern', 'clarinet', - 'clasp', 'cleansing_agent', 'cleat_(for_securing_rope)', 'clementine', - 'clip', 'clipboard', 'clippers_(for_plants)', 'cloak', 'clock', - 'clock_tower', 'clothes_hamper', 'clothespin', 'clutch_bag', 'coaster', - 'coat', 'coat_hanger', 'coatrack', 'cock', 'cockroach', - 'cocoa_(beverage)', 'coconut', 'coffee_maker', 'coffee_table', - 'coffeepot', 'coil', 'coin', 'colander', 'coleslaw', - 'coloring_material', 'combination_lock', 'pacifier', 'comic_book', - 'compass', 'computer_keyboard', 'condiment', 'cone', 'control', - 'convertible_(automobile)', 'sofa_bed', 'cooker', 'cookie', - 'cooking_utensil', 'cooler_(for_food)', 'cork_(bottle_plug)', - 'corkboard', 'corkscrew', 'edible_corn', 'cornbread', 'cornet', - 'cornice', 'cornmeal', 'corset', 'costume', 'cougar', 'coverall', - 'cowbell', 'cowboy_hat', 'crab_(animal)', 'crabmeat', 'cracker', - 'crape', 'crate', 'crayon', 'cream_pitcher', 'crescent_roll', 'crib', - 'crock_pot', 'crossbar', 'crouton', 'crow', 'crowbar', 'crown', - 'crucifix', 'cruise_ship', 'police_cruiser', 'crumb', 'crutch', - 'cub_(animal)', 'cube', 'cucumber', 'cufflink', 'cup', 'trophy_cup', - 'cupboard', 'cupcake', 'hair_curler', 'curling_iron', 'curtain', - 'cushion', 'cylinder', 'cymbal', 'dagger', 'dalmatian', 'dartboard', - 'date_(fruit)', 'deck_chair', 'deer', 'dental_floss', 'desk', - 'detergent', 'diaper', 'diary', 'die', 'dinghy', 'dining_table', 'tux', - 'dish', 'dish_antenna', 'dishrag', 'dishtowel', 'dishwasher', - 'dishwasher_detergent', 'dispenser', 'diving_board', 'Dixie_cup', - 'dog', 'dog_collar', 'doll', 'dollar', 'dollhouse', 'dolphin', - 'domestic_ass', 'doorknob', 'doormat', 'doughnut', 'dove', 'dragonfly', - 'drawer', 'underdrawers', 'dress', 'dress_hat', 'dress_suit', - 'dresser', 'drill', 'drone', 'dropper', 'drum_(musical_instrument)', - 'drumstick', 'duck', 'duckling', 'duct_tape', 'duffel_bag', 'dumbbell', - 'dumpster', 'dustpan', 'eagle', 'earphone', 'earplug', 'earring', - 'easel', 'eclair', 'eel', 'egg', 'egg_roll', 'egg_yolk', 'eggbeater', - 'eggplant', 'electric_chair', 'refrigerator', 'elephant', 'elk', - 'envelope', 'eraser', 'escargot', 'eyepatch', 'falcon', 'fan', - 'faucet', 'fedora', 'ferret', 'Ferris_wheel', 'ferry', 'fig_(fruit)', - 'fighter_jet', 'figurine', 'file_cabinet', 'file_(tool)', 'fire_alarm', - 'fire_engine', 'fire_extinguisher', 'fire_hose', 'fireplace', - 'fireplug', 'first-aid_kit', 'fish', 'fish_(food)', 'fishbowl', - 'fishing_rod', 'flag', 'flagpole', 'flamingo', 'flannel', 'flap', - 'flash', 'flashlight', 'fleece', 'flip-flop_(sandal)', - 'flipper_(footwear)', 'flower_arrangement', 'flute_glass', 'foal', - 'folding_chair', 'food_processor', 'football_(American)', - 'football_helmet', 'footstool', 'fork', 'forklift', 'freight_car', - 'French_toast', 'freshener', 'frisbee', 'frog', 'fruit_juice', - 'frying_pan', 'fudge', 'funnel', 'futon', 'gag', 'garbage', - 'garbage_truck', 'garden_hose', 'gargle', 'gargoyle', 'garlic', - 'gasmask', 'gazelle', 'gelatin', 'gemstone', 'generator', - 'giant_panda', 'gift_wrap', 'ginger', 'giraffe', 'cincture', - 'glass_(drink_container)', 'globe', 'glove', 'goat', 'goggles', - 'goldfish', 'golf_club', 'golfcart', 'gondola_(boat)', 'goose', - 'gorilla', 'gourd', 'grape', 'grater', 'gravestone', 
'gravy_boat', - 'green_bean', 'green_onion', 'griddle', 'grill', 'grits', 'grizzly', - 'grocery_bag', 'guitar', 'gull', 'gun', 'hairbrush', 'hairnet', - 'hairpin', 'halter_top', 'ham', 'hamburger', 'hammer', 'hammock', - 'hamper', 'hamster', 'hair_dryer', 'hand_glass', 'hand_towel', - 'handcart', 'handcuff', 'handkerchief', 'handle', 'handsaw', - 'hardback_book', 'harmonium', 'hat', 'hatbox', 'veil', 'headband', - 'headboard', 'headlight', 'headscarf', 'headset', - 'headstall_(for_horses)', 'heart', 'heater', 'helicopter', 'helmet', - 'heron', 'highchair', 'hinge', 'hippopotamus', 'hockey_stick', 'hog', - 'home_plate_(baseball)', 'honey', 'fume_hood', 'hook', 'hookah', - 'hornet', 'horse', 'hose', 'hot-air_balloon', 'hotplate', 'hot_sauce', - 'hourglass', 'houseboat', 'hummingbird', 'hummus', 'polar_bear', - 'icecream', 'popsicle', 'ice_maker', 'ice_pack', 'ice_skate', - 'igniter', 'inhaler', 'iPod', 'iron_(for_clothing)', 'ironing_board', - 'jacket', 'jam', 'jar', 'jean', 'jeep', 'jelly_bean', 'jersey', - 'jet_plane', 'jewel', 'jewelry', 'joystick', 'jumpsuit', 'kayak', - 'keg', 'kennel', 'kettle', 'key', 'keycard', 'kilt', 'kimono', - 'kitchen_sink', 'kitchen_table', 'kite', 'kitten', 'kiwi_fruit', - 'knee_pad', 'knife', 'knitting_needle', 'knob', 'knocker_(on_a_door)', - 'koala', 'lab_coat', 'ladder', 'ladle', 'ladybug', 'lamb_(animal)', - 'lamb-chop', 'lamp', 'lamppost', 'lampshade', 'lantern', 'lanyard', - 'laptop_computer', 'lasagna', 'latch', 'lawn_mower', 'leather', - 'legging_(clothing)', 'Lego', 'legume', 'lemon', 'lemonade', 'lettuce', - 'license_plate', 'life_buoy', 'life_jacket', 'lightbulb', - 'lightning_rod', 'lime', 'limousine', 'lion', 'lip_balm', 'liquor', - 'lizard', 'log', 'lollipop', 'speaker_(stereo_equipment)', 'loveseat', - 'machine_gun', 'magazine', 'magnet', 'mail_slot', 'mailbox_(at_home)', - 'mallard', 'mallet', 'mammoth', 'manatee', 'mandarin_orange', 'manger', - 'manhole', 'map', 'marker', 'martini', 'mascot', 'mashed_potato', - 'masher', 'mask', 'mast', 'mat_(gym_equipment)', 'matchbox', - 'mattress', 'measuring_cup', 'measuring_stick', 'meatball', 'medicine', - 'melon', 'microphone', 'microscope', 'microwave_oven', 'milestone', - 'milk', 'milk_can', 'milkshake', 'minivan', 'mint_candy', 'mirror', - 'mitten', 'mixer_(kitchen_tool)', 'money', - 'monitor_(computer_equipment) computer_monitor', 'monkey', 'motor', - 'motor_scooter', 'motor_vehicle', 'motorcycle', 'mound_(baseball)', - 'mouse_(computer_equipment)', 'mousepad', 'muffin', 'mug', 'mushroom', - 'music_stool', 'musical_instrument', 'nailfile', 'napkin', - 'neckerchief', 'necklace', 'necktie', 'needle', 'nest', 'newspaper', - 'newsstand', 'nightshirt', 'nosebag_(for_animals)', - 'noseband_(for_animals)', 'notebook', 'notepad', 'nut', 'nutcracker', - 'oar', 'octopus_(food)', 'octopus_(animal)', 'oil_lamp', 'olive_oil', - 'omelet', 'onion', 'orange_(fruit)', 'orange_juice', 'ostrich', - 'ottoman', 'oven', 'overalls_(clothing)', 'owl', 'packet', 'inkpad', - 'pad', 'paddle', 'padlock', 'paintbrush', 'painting', 'pajamas', - 'palette', 'pan_(for_cooking)', 'pan_(metal_container)', 'pancake', - 'pantyhose', 'papaya', 'paper_plate', 'paper_towel', 'paperback_book', - 'paperweight', 'parachute', 'parakeet', 'parasail_(sports)', 'parasol', - 'parchment', 'parka', 'parking_meter', 'parrot', - 'passenger_car_(part_of_a_train)', 'passenger_ship', 'passport', - 'pastry', 'patty_(food)', 'pea_(food)', 'peach', 'peanut_butter', - 'pear', 'peeler_(tool_for_fruit_and_vegetables)', 'wooden_leg', - 'pegboard', 'pelican', 
'pen', 'pencil', 'pencil_box', - 'pencil_sharpener', 'pendulum', 'penguin', 'pennant', 'penny_(coin)', - 'pepper', 'pepper_mill', 'perfume', 'persimmon', 'person', 'pet', - 'pew_(church_bench)', 'phonebook', 'phonograph_record', 'piano', - 'pickle', 'pickup_truck', 'pie', 'pigeon', 'piggy_bank', 'pillow', - 'pin_(non_jewelry)', 'pineapple', 'pinecone', 'ping-pong_ball', - 'pinwheel', 'tobacco_pipe', 'pipe', 'pistol', 'pita_(bread)', - 'pitcher_(vessel_for_liquid)', 'pitchfork', 'pizza', 'place_mat', - 'plate', 'platter', 'playpen', 'pliers', 'plow_(farm_equipment)', - 'plume', 'pocket_watch', 'pocketknife', 'poker_(fire_stirring_tool)', - 'pole', 'polo_shirt', 'poncho', 'pony', 'pool_table', 'pop_(soda)', - 'postbox_(public)', 'postcard', 'poster', 'pot', 'flowerpot', 'potato', - 'potholder', 'pottery', 'pouch', 'power_shovel', 'prawn', 'pretzel', - 'printer', 'projectile_(weapon)', 'projector', 'propeller', 'prune', - 'pudding', 'puffer_(fish)', 'puffin', 'pug-dog', 'pumpkin', 'puncher', - 'puppet', 'puppy', 'quesadilla', 'quiche', 'quilt', 'rabbit', - 'race_car', 'racket', 'radar', 'radiator', 'radio_receiver', 'radish', - 'raft', 'rag_doll', 'raincoat', 'ram_(animal)', 'raspberry', 'rat', - 'razorblade', 'reamer_(juicer)', 'rearview_mirror', 'receipt', - 'recliner', 'record_player', 'reflector', 'remote_control', - 'rhinoceros', 'rib_(food)', 'rifle', 'ring', 'river_boat', 'road_map', - 'robe', 'rocking_chair', 'rodent', 'roller_skate', 'Rollerblade', - 'rolling_pin', 'root_beer', 'router_(computer_equipment)', - 'rubber_band', 'runner_(carpet)', 'plastic_bag', - 'saddle_(on_an_animal)', 'saddle_blanket', 'saddlebag', 'safety_pin', - 'sail', 'salad', 'salad_plate', 'salami', 'salmon_(fish)', - 'salmon_(food)', 'salsa', 'saltshaker', 'sandal_(type_of_shoe)', - 'sandwich', 'satchel', 'saucepan', 'saucer', 'sausage', 'sawhorse', - 'saxophone', 'scale_(measuring_instrument)', 'scarecrow', 'scarf', - 'school_bus', 'scissors', 'scoreboard', 'scraper', 'screwdriver', - 'scrubbing_brush', 'sculpture', 'seabird', 'seahorse', 'seaplane', - 'seashell', 'sewing_machine', 'shaker', 'shampoo', 'shark', - 'sharpener', 'Sharpie', 'shaver_(electric)', 'shaving_cream', 'shawl', - 'shears', 'sheep', 'shepherd_dog', 'sherbert', 'shield', 'shirt', - 'shoe', 'shopping_bag', 'shopping_cart', 'short_pants', 'shot_glass', - 'shoulder_bag', 'shovel', 'shower_head', 'shower_cap', - 'shower_curtain', 'shredder_(for_paper)', 'signboard', 'silo', 'sink', - 'skateboard', 'skewer', 'ski', 'ski_boot', 'ski_parka', 'ski_pole', - 'skirt', 'skullcap', 'sled', 'sleeping_bag', 'sling_(bandage)', - 'slipper_(footwear)', 'smoothie', 'snake', 'snowboard', 'snowman', - 'snowmobile', 'soap', 'soccer_ball', 'sock', 'sofa', 'softball', - 'solar_array', 'sombrero', 'soup', 'soup_bowl', 'soupspoon', - 'sour_cream', 'soya_milk', 'space_shuttle', 'sparkler_(fireworks)', - 'spatula', 'spear', 'spectacles', 'spice_rack', 'spider', 'crawfish', - 'sponge', 'spoon', 'sportswear', 'spotlight', 'squid_(food)', - 'squirrel', 'stagecoach', 'stapler_(stapling_machine)', 'starfish', - 'statue_(sculpture)', 'steak_(food)', 'steak_knife', 'steering_wheel', - 'stepladder', 'step_stool', 'stereo_(sound_system)', 'stew', 'stirrer', - 'stirrup', 'stool', 'stop_sign', 'brake_light', 'stove', 'strainer', - 'strap', 'straw_(for_drinking)', 'strawberry', 'street_sign', - 'streetlight', 'string_cheese', 'stylus', 'subwoofer', 'sugar_bowl', - 'sugarcane_(plant)', 'suit_(clothing)', 'sunflower', 'sunglasses', - 'sunhat', 'surfboard', 'sushi', 'mop', 
'sweat_pants', 'sweatband', - 'sweater', 'sweatshirt', 'sweet_potato', 'swimsuit', 'sword', - 'syringe', 'Tabasco_sauce', 'table-tennis_table', 'table', - 'table_lamp', 'tablecloth', 'tachometer', 'taco', 'tag', 'taillight', - 'tambourine', 'army_tank', 'tank_(storage_vessel)', - 'tank_top_(clothing)', 'tape_(sticky_cloth_or_paper)', 'tape_measure', - 'tapestry', 'tarp', 'tartan', 'tassel', 'tea_bag', 'teacup', - 'teakettle', 'teapot', 'teddy_bear', 'telephone', 'telephone_booth', - 'telephone_pole', 'telephoto_lens', 'television_camera', - 'television_set', 'tennis_ball', 'tennis_racket', 'tequila', - 'thermometer', 'thermos_bottle', 'thermostat', 'thimble', 'thread', - 'thumbtack', 'tiara', 'tiger', 'tights_(clothing)', 'timer', 'tinfoil', - 'tinsel', 'tissue_paper', 'toast_(food)', 'toaster', 'toaster_oven', - 'toilet', 'toilet_tissue', 'tomato', 'tongs', 'toolbox', 'toothbrush', - 'toothpaste', 'toothpick', 'cover', 'tortilla', 'tow_truck', 'towel', - 'towel_rack', 'toy', 'tractor_(farm_equipment)', 'traffic_light', - 'dirt_bike', 'trailer_truck', 'train_(railroad_vehicle)', 'trampoline', - 'tray', 'trench_coat', 'triangle_(musical_instrument)', 'tricycle', - 'tripod', 'trousers', 'truck', 'truffle_(chocolate)', 'trunk', 'vat', - 'turban', 'turkey_(food)', 'turnip', 'turtle', 'turtleneck_(clothing)', - 'typewriter', 'umbrella', 'underwear', 'unicycle', 'urinal', 'urn', - 'vacuum_cleaner', 'vase', 'vending_machine', 'vent', 'vest', - 'videotape', 'vinegar', 'violin', 'vodka', 'volleyball', 'vulture', - 'waffle', 'waffle_iron', 'wagon', 'wagon_wheel', 'walking_stick', - 'wall_clock', 'wall_socket', 'wallet', 'walrus', 'wardrobe', - 'washbasin', 'automatic_washer', 'watch', 'water_bottle', - 'water_cooler', 'water_faucet', 'water_heater', 'water_jug', - 'water_gun', 'water_scooter', 'water_ski', 'water_tower', - 'watering_can', 'watermelon', 'weathervane', 'webcam', 'wedding_cake', - 'wedding_ring', 'wet_suit', 'wheel', 'wheelchair', 'whipped_cream', - 'whistle', 'wig', 'wind_chime', 'windmill', 'window_box_(for_plants)', - 'windshield_wiper', 'windsock', 'wine_bottle', 'wine_bucket', - 'wineglass', 'blinder_(for_horses)', 'wok', 'wolf', 'wooden_spoon', - 'wreath', 'wrench', 'wristband', 'wristlet', 'yacht', 'yogurt', - 'yoke_(animal_equipment)', 'zebra', 'zucchini') - - def load_annotations(self, ann_file): - try: - import lvis - if getattr(lvis, '__version__', '0') >= '10.5.3': - warnings.warn( - 'mmlvis is deprecated, please install official lvis-api by "pip install git+https://github.com/lvis-dataset/lvis-api.git"', # noqa: E501 - UserWarning) - from lvis import LVIS - except ImportError: - raise ImportError( - 'Package lvis is not installed. Please run "pip install git+https://github.com/lvis-dataset/lvis-api.git".' # noqa: E501 - ) - self.coco = LVIS(ann_file) - self.cat_ids = self.coco.get_cat_ids() - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.img_ids = self.coco.get_img_ids() - data_infos = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - # coco_url is used in LVISv1 instead of file_name - # e.g. 
http://images.cocodataset.org/train2017/000000391895.jpg - # train/val split in specified in url - info['filename'] = info['coco_url'].replace( - 'http://images.cocodataset.org/', '') - data_infos.append(info) - return data_infos diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/openimages.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/openimages.py deleted file mode 100644 index fba660c3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/openimages.py +++ /dev/null @@ -1,891 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import csv -import json -import os.path as osp -import warnings -from collections import OrderedDict, defaultdict - -import mmcv -import numpy as np -import torch.distributed as dist -from mmcv.runner import get_dist_info -from mmcv.utils import print_log - -from mmdet.core import eval_map -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class OpenImagesDataset(CustomDataset): - """Open Images dataset for detection. - - Args: - ann_file (str): Annotation file path. - label_file (str): File path of the label description file that - maps the classes names in MID format to their short - descriptions. - image_level_ann_file (str): Image level annotation, which is used - in evaluation. - get_supercategory (bool): Whether to get parent class of the - current class. Default: True. - hierarchy_file (str): The file path of the class hierarchy. - Default: None. - get_metas (bool): Whether to get image metas in testing or - validation time. This should be `True` during evaluation. - Default: True. The OpenImages annotations do not have image - metas (width and height of the image), which will be used - during evaluation. We provide two ways to get image metas - in `OpenImagesDataset`: - - - 1. `load from file`: Load image metas from pkl file, which - is suggested to use. We provided a script to get image metas: - `tools/misc/get_image_metas.py`, which need to run - this script before training/testing. Please refer to - `config/openimages/README.md` for more details. - - - 2. `load from pipeline`, which will get image metas during - test time. However, this may reduce the inference speed, - especially when using distribution. - - load_from_file (bool): Whether to get image metas from pkl file. - meta_file (str): File path to get image metas. - filter_labels (bool): Whether filter unannotated classes. - Default: True. - load_image_level_labels (bool): Whether load and consider image - level labels during evaluation. Default: True. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - ann_file, - label_file='', - image_level_ann_file='', - get_supercategory=True, - hierarchy_file=None, - get_metas=True, - load_from_file=True, - meta_file='', - filter_labels=True, - load_image_level_labels=True, - file_client_args=dict(backend='disk'), - **kwargs): - # may get error if use other file_client - self.file_client_args = file_client_args - - self.cat2label = defaultdict(str) - self.index_dict = {} - - # Although it will init file_client in `CustomDataset`, - # it needs to be init here. 
- file_client = mmcv.FileClient(**file_client_args) - # need get `index_dict` before load annotations - assert label_file.endswith('csv') - if hasattr(file_client, 'get_local_path'): - with file_client.get_local_path(label_file) as local_path: - class_names = self.get_classes_from_csv(local_path) - else: - class_names = self.get_classes_from_csv(label_file) - super(OpenImagesDataset, self).__init__( - ann_file=ann_file, file_client_args=file_client_args, **kwargs) - self.CLASSES = class_names - self.image_level_ann_file = image_level_ann_file - self.load_image_level_labels = load_image_level_labels - if get_supercategory is True: - assert hierarchy_file is not None - if self.__class__.__name__ == 'OpenImagesDataset': - assert hierarchy_file.endswith('json') - elif self.__class__.__name__ == 'OpenImagesChallengeDataset': - assert hierarchy_file.endswith('np') - else: - raise NotImplementedError - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path( - hierarchy_file) as local_path: - self.class_label_tree = self.get_relation_matrix( - local_path) - else: - self.class_label_tree = self.get_relation_matrix( - hierarchy_file) - self.get_supercategory = get_supercategory - self.get_metas = get_metas - self.load_from_file = load_from_file - self.meta_file = meta_file - if self.data_root is not None: - if not osp.isabs(self.meta_file): - self.meta_file = osp.join(self.data_root, self.meta_file) - self.filter_labels = filter_labels - self.rank, self.world_size = get_dist_info() - self.temp_img_metas = [] - self.test_img_metas = [] - self.test_img_shapes = [] - self.load_from_pipeline = False if load_from_file else True - - def get_classes_from_csv(self, label_file): - """Get classes name from file. - - Args: - label_file (str): File path of the label description file that - maps the classes names in MID format to their short - descriptions. - - Returns: - list[str]: Class name of OpenImages. - """ - - index_list = [] - classes_names = [] - with open(label_file, 'r') as f: - reader = csv.reader(f) - for line in reader: - self.cat2label[line[0]] = line[1] - classes_names.append(line[1]) - index_list.append(line[0]) - self.index_dict = {index: i for i, index in enumerate(index_list)} - return classes_names - - def load_annotations(self, ann_file): - """Load annotation from annotation file. - - Special described `self.data_infos` (defaultdict[list[dict]]) - in this function: Annotations where item of the defaultdict - indicates an image, each of which has (n) dicts. Keys of dicts are: - - - `bbox` (list): coordinates of the box, in normalized image - coordinates, of shape 4. - - `label` (int): the label id. - - `is_group_of` (bool): Indicates that the box spans a group - of objects (e.g., a bed of flowers or a crowd of people). - - `is_occluded` (bool): Indicates that the object is occluded - by another object in the image. - - `is_truncated` (bool): Indicates that the object extends - beyond the boundary of the image. - - `is_depiction` (bool): Indicates that the object is a - depiction. - - `is_inside` (bool): Indicates a picture taken from the - inside of the object. - - Args: - ann_file (str): CSV style annotation file path. - - Returns: - list[dict]: Data infos where each item of the list - indicates an image. Keys of annotations are: - - - `img_id` (str): Image name. - - `filename` (str): Image name with suffix. 
- """ - self.ann_infos = defaultdict(list) - data_infos = [] - cp_filename = None - with open(ann_file, 'r') as f: - reader = csv.reader(f) - for i, line in enumerate(reader): - if i == 0: - continue - img_id = line[0] - filename = f'{img_id}.jpg' - label_id = line[2] - assert label_id in self.index_dict - label = int(self.index_dict[label_id]) - bbox = [ - float(line[4]), # xmin - float(line[6]), # ymin - float(line[5]), # xmax - float(line[7]) # ymax - ] - is_occluded = True if int(line[8]) == 1 else False - is_truncated = True if int(line[9]) == 1 else False - is_group_of = True if int(line[10]) == 1 else False - is_depiction = True if int(line[11]) == 1 else False - is_inside = True if int(line[12]) == 1 else False - - self.ann_infos[img_id].append( - dict( - bbox=bbox, - label=label, - is_occluded=is_occluded, - is_truncated=is_truncated, - is_group_of=is_group_of, - is_depiction=is_depiction, - is_inside=is_inside)) - if filename != cp_filename: - data_infos.append(dict(img_id=img_id, filename=filename)) - cp_filename = filename - return data_infos - - def get_ann_info(self, idx): - """Get OpenImages annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - img_id = self.data_infos[idx]['img_id'] - bboxes = [] - labels = [] - bboxes_ignore = [] - labels_ignore = [] - is_occludeds = [] - is_truncateds = [] - is_group_ofs = [] - is_depictions = [] - is_insides = [] - for obj in self.ann_infos[img_id]: - label = int(obj['label']) - bbox = [ - float(obj['bbox'][0]), - float(obj['bbox'][1]), - float(obj['bbox'][2]), - float(obj['bbox'][3]) - ] - bboxes.append(bbox) - labels.append(label) - - # Other parameters - is_occludeds.append(obj['is_occluded']) - is_truncateds.append(obj['is_truncated']) - is_group_ofs.append(obj['is_group_of']) - is_depictions.append(obj['is_depiction']) - is_insides.append(obj['is_inside']) - if not bboxes: - bboxes = np.zeros((0, 4)) - labels = np.zeros((0, )) - else: - bboxes = np.array(bboxes) - labels = np.array(labels) - if not bboxes_ignore: - bboxes_ignore = np.zeros((0, 4)) - labels_ignore = np.zeros((0, )) - else: - bboxes_ignore = np.array(bboxes_ignore) - labels_ignore = np.array(labels_ignore) - - assert len(is_group_ofs) == len(labels) == len(bboxes) - gt_is_group_ofs = np.array(is_group_ofs, dtype=np.bool) - - # These parameters is not used yet. 
- is_occludeds = np.array(is_occludeds, dtype=np.bool) - is_truncateds = np.array(is_truncateds, dtype=np.bool) - is_depictions = np.array(is_depictions, dtype=np.bool) - is_insides = np.array(is_insides, dtype=np.bool) - - ann = dict( - bboxes=bboxes.astype(np.float32), - labels=labels.astype(np.int64), - bboxes_ignore=bboxes_ignore.astype(np.float32), - labels_ignore=labels_ignore.astype(np.int64), - gt_is_group_ofs=gt_is_group_ofs, - is_occludeds=is_occludeds, - is_truncateds=is_truncateds, - is_depictions=is_depictions, - is_insides=is_insides) - - return ann - - def get_meta_from_file(self, meta_file=''): - """Get image metas from pkl file.""" - metas = mmcv.load( - meta_file, - file_format='pkl', - file_client_args=self.file_client_args) - assert len(metas) == len(self) - for i in range(len(metas)): - file_name = osp.split(metas[i]['filename'])[-1] - img_info = self.data_infos[i].get('img_info', None) - if img_info is not None: - assert file_name == osp.split(img_info['filename'])[-1] - else: - assert file_name == self.data_infos[i]['filename'] - hw = metas[i]['ori_shape'][:2] - self.test_img_shapes.append(hw) - - def get_meta_from_pipeline(self, results): - """Get image metas from pipeline.""" - self.temp_img_metas.extend(results['img_metas']) - if dist.is_available() and self.world_size > 1: - from mmdet.apis.test import collect_results_cpu - - self.test_img_metas = collect_results_cpu(self.temp_img_metas, - len(self)) - else: - self.test_img_metas = self.temp_img_metas - - def get_img_shape(self, metas): - """Set images original shape into data_infos.""" - assert len(metas) == len(self) - for i in range(len(metas)): - file_name = osp.split(metas[i].data['ori_filename'])[-1] - img_info = self.data_infos[i].get('img_info', None) - if img_info is not None: - assert file_name == osp.split(img_info['filename'])[-1] - else: - assert file_name == self.data_infos[i]['filename'] - hw = metas[i].data['ori_shape'][:2] - self.test_img_shapes.append(hw) - - def prepare_test_img(self, idx): - """Get testing data after pipeline.""" - img_info = self.data_infos[idx] - results = dict(img_info=img_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - results = self.pipeline(results) - if self.get_metas and self.load_from_pipeline: - self.get_meta_from_pipeline(results) - return results - - def _filter_imgs(self, min_size=32): - """Filter images too small.""" - if self.filter_empty_gt: - warnings.warn('OpenImageDatasets does not support ' - 'filtering empty gt images.') - valid_inds = [i for i in range(len(self))] - return valid_inds - - def _set_group_flag(self): - """Set flag according to image aspect ratio.""" - self.flag = np.zeros(len(self), dtype=np.uint8) - # TODO: set flag without width and height - - def get_relation_matrix(self, hierarchy_file): - """Get hierarchy for classes. - - Args: - hierarchy_file (sty): File path to the hierarchy for classes. - - Returns: - ndarray: The matrix of the corresponding relationship between - the parent class and the child class, of shape - (class_num, class_num). 
- """ - - if self.data_root is not None: - if not osp.isabs(hierarchy_file): - hierarchy_file = osp.join(self.data_root, hierarchy_file) - with open(hierarchy_file, 'r') as f: - hierarchy = json.load(f) - class_num = len(self.CLASSES) - class_label_tree = np.eye(class_num, class_num) - class_label_tree = self._convert_hierarchy_tree( - hierarchy, class_label_tree) - return class_label_tree - - def _convert_hierarchy_tree(self, - hierarchy_map, - class_label_tree, - parents=[], - get_all_parents=True): - """Get matrix of the corresponding relationship between the parent - class and the child class. - - Args: - hierarchy_map (dict): Including label name and corresponding - subcategory. Keys of dicts are: - - - `LabeName` (str): Name of the label. - - `Subcategory` (dict | list): Corresponding subcategory(ies). - class_label_tree (ndarray): The matrix of the corresponding - relationship between the parent class and the child class, - of shape (class_num, class_num). - parents (list): Corresponding parent class. - get_all_parents (bool): Whether get all parent names. - Default: True - - Returns: - ndarray: The matrix of the corresponding relationship between - the parent class and the child class, of shape - (class_num, class_num). - """ - - if 'Subcategory' in hierarchy_map: - for node in hierarchy_map['Subcategory']: - if 'LabelName' in node: - children_name = node['LabelName'] - children_index = self.index_dict[children_name] - children = [children_index] - else: - continue - if len(parents) > 0: - for parent_index in parents: - if get_all_parents: - children.append(parent_index) - class_label_tree[children_index, parent_index] = 1 - - class_label_tree = self._convert_hierarchy_tree( - node, class_label_tree, parents=children) - - return class_label_tree - - def add_supercategory_ann(self, annotations): - """Add parent classes of the corresponding class of the ground truth - bboxes.""" - for i, ann in enumerate(annotations): - assert len(ann['labels']) == len(ann['bboxes']) == \ - len(ann['gt_is_group_ofs']) - gt_bboxes = [] - gt_is_group_ofs = [] - gt_labels = [] - for j in range(len(ann['labels'])): - label = ann['labels'][j] - bbox = ann['bboxes'][j] - is_group = ann['gt_is_group_ofs'][j] - label = np.where(self.class_label_tree[label])[0] - if len(label) > 1: - for k in range(len(label)): - gt_bboxes.append(bbox) - gt_is_group_ofs.append(is_group) - gt_labels.append(label[k]) - else: - gt_bboxes.append(bbox) - gt_is_group_ofs.append(is_group) - gt_labels.append(label[0]) - annotations[i] = dict( - bboxes=np.array(gt_bboxes).astype(np.float32), - labels=np.array(gt_labels).astype(np.int64), - bboxes_ignore=ann['bboxes_ignore'], - gt_is_group_ofs=np.array(gt_is_group_ofs).astype(np.bool)) - - return annotations - - def process_results(self, det_results, annotations, - image_level_annotations): - """Process results of the corresponding class of the detection bboxes. - - Note: It will choose to do the following two processing according to - the parameters: - - 1. Whether to add parent classes of the corresponding class of the - detection bboxes. - - 2. Whether to ignore the classes that unannotated on that image. 
- """ - if image_level_annotations is not None: - assert len(annotations) == \ - len(image_level_annotations) == \ - len(det_results) - else: - assert len(annotations) == len(det_results) - for i in range(len(det_results)): - results = copy.deepcopy(det_results[i]) - valid_classes = np.where( - np.array([[bbox.shape[0]] for bbox in det_results[i]]) != 0)[0] - if image_level_annotations is not None: - labels = annotations[i]['labels'] - image_level_labels = \ - image_level_annotations[i]['image_level_labels'] - allowed_labeles = np.unique( - np.append(labels, image_level_labels)) - else: - allowed_labeles = np.unique(annotations[i]['labels']) - - for valid_class in valid_classes: - det_cls = np.where(self.class_label_tree[valid_class])[0] - for index in det_cls: - if index in allowed_labeles and \ - index != valid_class and \ - self.get_supercategory: - det_results[i][index] = \ - np.concatenate((det_results[i][index], - results[valid_class])) - elif index not in allowed_labeles and self.filter_labels: - # Remove useless parts - det_results[i][index] = np.empty( - (0, 5)).astype(np.float32) - return det_results - - def load_image_label_from_csv(self, image_level_ann_file): - """Load image level annotations from csv style ann_file. - - Args: - image_level_ann_file (str): CSV style image level annotation - file path. - - Returns: - defaultdict[list[dict]]: Annotations where item of the defaultdict - indicates an image, each of which has (n) dicts. - Keys of dicts are: - - - `image_level_label` (int): Label id. - - `confidence` (float): Labels that are human-verified to be - present in an image have confidence = 1 (positive labels). - Labels that are human-verified to be absent from an image - have confidence = 0 (negative labels). Machine-generated - labels have fractional confidences, generally >= 0.5. - The higher the confidence, the smaller the chance for - the label to be a false positive. - """ - - item_lists = defaultdict(list) - with open(image_level_ann_file, 'r') as f: - reader = csv.reader(f) - for i, line in enumerate(reader): - if i == 0: - continue - img_id = line[0] - item_lists[img_id].append( - dict( - image_level_label=int(self.index_dict[line[2]]), - confidence=float(line[3]))) - return item_lists - - def get_image_level_ann(self, image_level_ann_file): - """Get OpenImages annotation by index. - - Args: - image_level_ann_file (str): CSV style image level annotation - file path. - - Returns: - dict: Annotation info of specified index. 
- """ - - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(image_level_ann_file) \ - as local_path: - item_lists = self.load_image_label_from_csv(local_path) - else: - item_lists = self.load_image_label_from_csv(image_level_ann_file) - image_level_annotations = [] - for i in range(len(self)): - img_info = self.data_infos[i].get('img_info', None) - if img_info is not None: - # for Open Images Challenges - img_id = osp.split(img_info['filename'])[-1][:-4] - else: - # for Open Images v6 - img_id = self.data_infos[i]['img_id'] - item_list = item_lists.get(img_id, None) - if item_list is not None: - image_level_labels = [] - confidences = [] - for obj in item_list: - image_level_label = int(obj['image_level_label']) - confidence = float(obj['confidence']) - - image_level_labels.append(image_level_label) - confidences.append(confidence) - - if not image_level_labels: - image_level_labels = np.zeros((0, )) - confidences = np.zeros((0, )) - else: - image_level_labels = np.array(image_level_labels) - confidences = np.array(confidences) - else: - image_level_labels = np.zeros((0, )) - confidences = np.zeros((0, )) - ann = dict( - image_level_labels=image_level_labels.astype(np.int64), - confidences=confidences.astype(np.float32)) - image_level_annotations.append(ann) - - return image_level_annotations - - def denormalize_gt_bboxes(self, annotations): - """Convert ground truth bboxes from relative position to absolute - position. - - Only used in evaluating time. - """ - assert len(self.test_img_shapes) == len(annotations) - for i in range(len(annotations)): - h, w = self.test_img_shapes[i] - annotations[i]['bboxes'][:, 0::2] *= w - annotations[i]['bboxes'][:, 1::2] *= h - return annotations - - def get_cat_ids(self, idx): - """Get category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - return self.get_ann_info(idx)['labels'].astype(np.int).tolist() - - def evaluate(self, - results, - metric='mAP', - logger=None, - iou_thr=0.5, - ioa_thr=0.5, - scale_ranges=None, - denorm_gt_bbox=True, - use_group_of=True): - """Evaluate in OpenImages. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Option is - 'mAP'. Default: 'mAP'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - iou_thr (float | list[float]): IoU threshold. Default: 0.5. - ioa_thr (float | list[float]): IoA threshold. Default: 0.5. - scale_ranges (list[tuple], optional): Scale ranges for evaluating - mAP. If not specified, all bounding boxes would be included in - evaluation. Default: None - denorm_gt_bbox (bool): Whether to denorm ground truth bboxes from - relative position to absolute position. Default: True - use_group_of (bool): Whether consider group of groud truth bboxes - during evaluating. Default: True. - - Returns: - dict[str, float]: AP metrics. 
- """ - - if not isinstance(metric, str): - assert len(metric) == 1 - metric = metric[0] - allowed_metrics = ['mAP'] - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - annotations = [self.get_ann_info(i) for i in range(len(self))] - - if self.load_image_level_labels: - image_level_annotations = \ - self.get_image_level_ann(self.image_level_ann_file) - else: - image_level_annotations = None - - # load metas from file - if self.get_metas and self.load_from_file: - assert self.meta_file.endswith( - 'pkl'), 'File name must be pkl suffix' - self.get_meta_from_file(self.meta_file) - # load metas from pipeline - else: - self.get_img_shape(self.test_img_metas) - - if len(self.test_img_shapes) > len(self): - self.test_img_shapes = self.test_img_shapes[:len(self)] - - if denorm_gt_bbox: - annotations = self.denormalize_gt_bboxes(annotations) - - # Reset test_image_metas, temp_image_metas and test_img_shapes - # to avoid potential error - self.temp_img_metas = [] - self.test_img_shapes = [] - self.test_img_metas = [] - if self.get_supercategory: - annotations = self.add_supercategory_ann(annotations) - - results = self.process_results(results, annotations, - image_level_annotations) - if use_group_of: - assert ioa_thr is not None, \ - 'ioa_thr must have value when using group_of in evaluation.' - - eval_results = OrderedDict() - iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr - ioa_thrs = [ioa_thr] if isinstance(ioa_thr, float) or ioa_thr is None \ - else ioa_thr - - # get dataset type - if len(self.CLASSES) == 500: - ds_name = 'oid_challenge' - elif len(self.CLASSES) == 601: - ds_name = 'oid_v6' - else: - ds_name = self.CLASSES - warnings.warn('Cannot infer dataset type from the length of the ' - 'classes. Set `oid_v6` as dataset type.') - - if metric == 'mAP': - assert isinstance(iou_thrs, list) and isinstance(ioa_thrs, list) - assert len(ioa_thrs) == len(iou_thrs) - mean_aps = [] - for iou_thr, ioa_thr in zip(iou_thrs, ioa_thrs): - print_log(f'\n{"-" * 15}iou_thr, ioa_thr: {iou_thr}, {ioa_thr}' - f'{"-" * 15}') - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=scale_ranges, - iou_thr=iou_thr, - ioa_thr=ioa_thr, - dataset=ds_name, - logger=logger, - use_group_of=use_group_of) - mean_aps.append(mean_ap) - eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3) - eval_results['mAP'] = sum(mean_aps) / len(mean_aps) - return eval_results - - -@DATASETS.register_module() -class OpenImagesChallengeDataset(OpenImagesDataset): - """Open Images Challenge dataset for detection.""" - - def __init__(self, ann_file, **kwargs): - assert ann_file.endswith('txt') - super(OpenImagesChallengeDataset, self).__init__( - ann_file=ann_file, **kwargs) - - def get_classes_from_csv(self, label_file): - """Get classes name from file. - - Args: - label_file (str): File path of the label description file that - maps the classes names in MID format to their short - descriptions. - - Returns: - list: Class name of OpenImages. 
- """ - - label_list = [] - id_list = [] - with open(label_file, 'r') as f: - reader = csv.reader(f) - for line in reader: - label_name = line[0] - label_id = int(line[2]) - - label_list.append(line[1]) - id_list.append(label_id) - self.index_dict[label_name] = label_id - 1 - - indexes = np.argsort(id_list) - classes_names = [] - for index in indexes: - classes_names.append(label_list[index]) - return classes_names - - def load_annotations(self, ann_file): - """Load annotation from annotation file.""" - with open(ann_file) as f: - lines = f.readlines() - i = 0 - ann_infos = [] - while i < len(lines): - bboxes = [] - labels = [] - is_group_ofs = [] - filename = lines[i].rstrip() - i += 2 - img_gt_size = int(lines[i]) - i += 1 - for j in range(img_gt_size): - sp = lines[i + j].split() - bboxes.append( - [float(sp[1]), - float(sp[2]), - float(sp[3]), - float(sp[4])]) - labels.append(int(sp[0]) - 1) # labels begin from 1 - is_group_ofs.append(True if int(sp[5]) == 1 else False) - i += img_gt_size - - gt_bboxes = np.array(bboxes, dtype=np.float32) - gt_labels = np.array(labels, dtype=np.int64) - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - gt_is_group_ofs = np.array(is_group_ofs, dtype=np.bool) - - img_info = dict(filename=filename) - ann_info = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - gt_is_group_ofs=gt_is_group_ofs) - ann_infos.append(dict(img_info=img_info, ann_info=ann_info)) - - return ann_infos - - def prepare_train_img(self, idx): - """Get training data and annotations after pipeline.""" - ann_info = self.data_infos[idx] - results = dict( - img_info=ann_info['img_info'], - ann_info=ann_info['ann_info'], - ) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Get testing data after pipeline.""" - ann_info = self.data_infos[idx] - results = dict(img_info=ann_info['img_info']) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - - results = self.pipeline(results) - if self.get_metas and self.load_from_pipeline: - self.get_meta_from_pipeline(results) - return results - - def get_relation_matrix(self, hierarchy_file): - """Get hierarchy for classes. - - Args: - hierarchy_file (str): File path to the hierarchy for classes. - - Returns: - ndarray: The matrix of the corresponding - relationship between the parent class and the child class, - of shape (class_num, class_num). - """ - class_label_tree = np.load(hierarchy_file, allow_pickle=True) - return class_label_tree[1:, 1:] - - def get_ann_info(self, idx): - """Get OpenImages annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - # avoid some potential error - data_infos = copy.deepcopy(self.data_infos[idx]['ann_info']) - return data_infos - - def load_image_label_from_csv(self, image_level_ann_file): - """Load image level annotations from csv style ann_file. - - Args: - image_level_ann_file (str): CSV style image level annotation - file path. - - Returns: - defaultdict[list[dict]]: Annotations where item of the defaultdict - indicates an image, each of which has (n) dicts. - Keys of dicts are: - - - `image_level_label` (int): of shape 1. - - `confidence` (float): of shape 1. 
- """ - - item_lists = defaultdict(list) - with open(image_level_ann_file, 'r') as f: - reader = csv.reader(f) - i = -1 - for line in reader: - i += 1 - if i == 0: - continue - else: - img_id = line[0] - label_id = line[1] - assert label_id in self.index_dict - image_level_label = int(self.index_dict[label_id]) - confidence = float(line[2]) - item_lists[img_id].append( - dict( - image_level_label=image_level_label, - confidence=confidence)) - return item_lists diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/__init__.py deleted file mode 100644 index dae4b8b1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .auto_augment import (AutoAugment, BrightnessTransform, ColorTransform, - ContrastTransform, EqualizeTransform, Rotate, Shear, - Translate) -from .compose import Compose -from .formatting import (Collect, DefaultFormatBundle, ImageToTensor, - ToDataContainer, ToTensor, Transpose, to_tensor) -from .instaboost import InstaBoost -from .loading import (LoadAnnotations, LoadImageFromFile, LoadImageFromWebcam, - LoadMultiChannelImageFromFiles, LoadPanopticAnnotations, - LoadProposals) -from .test_time_aug import MultiScaleFlipAug -from .transforms import (Albu, CopyPaste, CutOut, Expand, MinIoURandomCrop, - MixUp, Mosaic, Normalize, Pad, PhotoMetricDistortion, - RandomAffine, RandomCenterCropPad, RandomCrop, - RandomFlip, RandomShift, Resize, SegRescale, - YOLOXHSVRandomAug) - -__all__ = [ - 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer', - 'Transpose', 'Collect', 'DefaultFormatBundle', 'LoadAnnotations', - 'LoadImageFromFile', 'LoadImageFromWebcam', 'LoadPanopticAnnotations', - 'LoadMultiChannelImageFromFiles', 'LoadProposals', 'MultiScaleFlipAug', - 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', 'Normalize', 'SegRescale', - 'MinIoURandomCrop', 'Expand', 'PhotoMetricDistortion', 'Albu', - 'InstaBoost', 'RandomCenterCropPad', 'AutoAugment', 'CutOut', 'Shear', - 'Rotate', 'ColorTransform', 'EqualizeTransform', 'BrightnessTransform', - 'ContrastTransform', 'Translate', 'RandomShift', 'Mosaic', 'MixUp', - 'RandomAffine', 'YOLOXHSVRandomAug', 'CopyPaste' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/auto_augment.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/auto_augment.py deleted file mode 100644 index b0ff67db..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/auto_augment.py +++ /dev/null @@ -1,894 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy - -import cv2 -import mmcv -import numpy as np - -from ..builder import PIPELINES -from .compose import Compose - -_MAX_LEVEL = 10 - - -def level_to_value(level, max_value): - """Map from level to values based on max_value.""" - return (level / _MAX_LEVEL) * max_value - - -def enhance_level_to_value(level, a=1.8, b=0.1): - """Map from level to values.""" - return (level / _MAX_LEVEL) * a + b - - -def random_negative(value, random_negative_prob): - """Randomly negate value based on random_negative_prob.""" - return -value if np.random.rand() < random_negative_prob else value - - -def bbox2fields(): - """The key correspondence from bboxes to labels, masks and - segmentations.""" - bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - bbox2seg = { - 'gt_bboxes': 'gt_semantic_seg', - } - return bbox2label, bbox2mask, bbox2seg - - -@PIPELINES.register_module() -class AutoAugment: - """Auto augmentation. - - This data augmentation is proposed in `Learning Data Augmentation - Strategies for Object Detection `_. - - TODO: Implement 'Shear', 'Sharpness' and 'Rotate' transforms - - Args: - policies (list[list[dict]]): The policies of auto augmentation. Each - policy in ``policies`` is a specific augmentation policy, and is - composed by several augmentations (dict). When AutoAugment is - called, a random policy in ``policies`` will be selected to - augment images. - - Examples: - >>> replace = (104, 116, 124) - >>> policies = [ - >>> [ - >>> dict(type='Sharpness', prob=0.0, level=8), - >>> dict( - >>> type='Shear', - >>> prob=0.4, - >>> level=0, - >>> replace=replace, - >>> axis='x') - >>> ], - >>> [ - >>> dict( - >>> type='Rotate', - >>> prob=0.6, - >>> level=10, - >>> replace=replace), - >>> dict(type='Color', prob=1.0, level=6) - >>> ] - >>> ] - >>> augmentation = AutoAugment(policies) - >>> img = np.ones(100, 100, 3) - >>> gt_bboxes = np.ones(10, 4) - >>> results = dict(img=img, gt_bboxes=gt_bboxes) - >>> results = augmentation(results) - """ - - def __init__(self, policies): - assert isinstance(policies, list) and len(policies) > 0, \ - 'Policies must be a non-empty list.' - for policy in policies: - assert isinstance(policy, list) and len(policy) > 0, \ - 'Each policy in policies must be a non-empty list.' - for augment in policy: - assert isinstance(augment, dict) and 'type' in augment, \ - 'Each specific augmentation must be a dict with key' \ - ' "type".' - - self.policies = copy.deepcopy(policies) - self.transforms = [Compose(policy) for policy in self.policies] - - def __call__(self, results): - transform = np.random.choice(self.transforms) - return transform(results) - - def __repr__(self): - return f'{self.__class__.__name__}(policies={self.policies})' - - -@PIPELINES.register_module() -class Shear: - """Apply Shear Transformation to image (and its corresponding bbox, mask, - segmentation). - - Args: - level (int | float): The level should be in range [0,_MAX_LEVEL]. - img_fill_val (int | float | tuple): The filled values for image border. - If float, the same fill value will be used for all the three - channels of image. If tuple, the should be 3 elements. - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - prob (float): The probability for performing Shear and should be in - range [0, 1]. 
- direction (str): The direction for shear, either "horizontal" - or "vertical". - max_shear_magnitude (float): The maximum magnitude for Shear - transformation. - random_negative_prob (float): The probability that turns the - offset negative. Should be in range [0,1] - interpolation (str): Same as in :func:`mmcv.imshear`. - """ - - def __init__(self, - level, - img_fill_val=128, - seg_ignore_label=255, - prob=0.5, - direction='horizontal', - max_shear_magnitude=0.3, - random_negative_prob=0.5, - interpolation='bilinear'): - assert isinstance(level, (int, float)), 'The level must be type ' \ - f'int or float, got {type(level)}.' - assert 0 <= level <= _MAX_LEVEL, 'The level should be in range ' \ - f'[0,{_MAX_LEVEL}], got {level}.' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, 'img_fill_val as tuple must ' \ - f'have 3 elements. got {len(img_fill_val)}.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError( - 'img_fill_val must be float or tuple with 3 elements.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), 'all ' \ - 'elements of img_fill_val should between range [0,255].' \ - f'got {img_fill_val}.' - assert 0 <= prob <= 1.0, 'The probability of shear should be in ' \ - f'range [0,1]. got {prob}.' - assert direction in ('horizontal', 'vertical'), 'direction must ' \ - f'in be either "horizontal" or "vertical". got {direction}.' - assert isinstance(max_shear_magnitude, float), 'max_shear_magnitude ' \ - f'should be type float. got {type(max_shear_magnitude)}.' - assert 0. <= max_shear_magnitude <= 1., 'Defaultly ' \ - 'max_shear_magnitude should be in range [0,1]. ' \ - f'got {max_shear_magnitude}.' - self.level = level - self.magnitude = level_to_value(level, max_shear_magnitude) - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.prob = prob - self.direction = direction - self.max_shear_magnitude = max_shear_magnitude - self.random_negative_prob = random_negative_prob - self.interpolation = interpolation - - def _shear_img(self, - results, - magnitude, - direction='horizontal', - interpolation='bilinear'): - """Shear the image. - - Args: - results (dict): Result dict from loading pipeline. - magnitude (int | float): The magnitude used for shear. - direction (str): The direction for shear, either "horizontal" - or "vertical". - interpolation (str): Same as in :func:`mmcv.imshear`. 
- """ - for key in results.get('img_fields', ['img']): - img = results[key] - img_sheared = mmcv.imshear( - img, - magnitude, - direction, - border_value=self.img_fill_val, - interpolation=interpolation) - results[key] = img_sheared.astype(img.dtype) - results['img_shape'] = results[key].shape - - def _shear_bboxes(self, results, magnitude): - """Shear the bboxes.""" - h, w, c = results['img_shape'] - if self.direction == 'horizontal': - shear_matrix = np.stack([[1, magnitude], - [0, 1]]).astype(np.float32) # [2, 2] - else: - shear_matrix = np.stack([[1, 0], [magnitude, - 1]]).astype(np.float32) - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - coordinates = np.stack([[min_x, min_y], [max_x, min_y], - [min_x, max_y], - [max_x, max_y]]) # [4, 2, nb_box, 1] - coordinates = coordinates[..., 0].transpose( - (2, 1, 0)).astype(np.float32) # [nb_box, 2, 4] - new_coords = np.matmul(shear_matrix[None, :, :], - coordinates) # [nb_box, 2, 4] - min_x = np.min(new_coords[:, 0, :], axis=-1) - min_y = np.min(new_coords[:, 1, :], axis=-1) - max_x = np.max(new_coords[:, 0, :], axis=-1) - max_y = np.max(new_coords[:, 1, :], axis=-1) - min_x = np.clip(min_x, a_min=0, a_max=w) - min_y = np.clip(min_y, a_min=0, a_max=h) - max_x = np.clip(max_x, a_min=min_x, a_max=w) - max_y = np.clip(max_y, a_min=min_y, a_max=h) - results[key] = np.stack([min_x, min_y, max_x, max_y], - axis=-1).astype(results[key].dtype) - - def _shear_masks(self, - results, - magnitude, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Shear the masks.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.shear((h, w), - magnitude, - direction, - border_value=fill_val, - interpolation=interpolation) - - def _shear_seg(self, - results, - magnitude, - direction='horizontal', - fill_val=255, - interpolation='bilinear'): - """Shear the segmentation maps.""" - for key in results.get('seg_fields', []): - seg = results[key] - results[key] = mmcv.imshear( - seg, - magnitude, - direction, - border_value=fill_val, - interpolation=interpolation).astype(seg.dtype) - - def _filter_invalid(self, results, min_bbox_size=0): - """Filter bboxes and corresponding masks too small after shear - augmentation.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - - def __call__(self, results): - """Call function to shear images, bounding boxes, masks and semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Sheared results. - """ - if np.random.rand() > self.prob: - return results - magnitude = random_negative(self.magnitude, self.random_negative_prob) - self._shear_img(results, magnitude, self.direction, self.interpolation) - self._shear_bboxes(results, magnitude) - # fill_val set to 0 for background of mask. 
- self._shear_masks( - results, - magnitude, - self.direction, - fill_val=0, - interpolation=self.interpolation) - self._shear_seg( - results, - magnitude, - self.direction, - fill_val=self.seg_ignore_label, - interpolation=self.interpolation) - self._filter_invalid(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'img_fill_val={self.img_fill_val}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label}, ' - repr_str += f'prob={self.prob}, ' - repr_str += f'direction={self.direction}, ' - repr_str += f'max_shear_magnitude={self.max_shear_magnitude}, ' - repr_str += f'random_negative_prob={self.random_negative_prob}, ' - repr_str += f'interpolation={self.interpolation})' - return repr_str - - -@PIPELINES.register_module() -class Rotate: - """Apply Rotate Transformation to image (and its corresponding bbox, mask, - segmentation). - - Args: - level (int | float): The level should be in range (0,_MAX_LEVEL]. - scale (int | float): Isotropic scale factor. Same in - ``mmcv.imrotate``. - center (int | float | tuple[float]): Center point (w, h) of the - rotation in the source image. If None, the center of the - image will be used. Same in ``mmcv.imrotate``. - img_fill_val (int | float | tuple): The fill value for image border. - If float, the same value will be used for all the three - channels of image. If tuple, the should be 3 elements (e.g. - equals the number of channels for image). - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - prob (float): The probability for perform transformation and - should be in range 0 to 1. - max_rotate_angle (int | float): The maximum angles for rotate - transformation. - random_negative_prob (float): The probability that turns the - offset negative. - """ - - def __init__(self, - level, - scale=1, - center=None, - img_fill_val=128, - seg_ignore_label=255, - prob=0.5, - max_rotate_angle=30, - random_negative_prob=0.5): - assert isinstance(level, (int, float)), \ - f'The level must be type int or float. got {type(level)}.' - assert 0 <= level <= _MAX_LEVEL, \ - f'The level should be in range (0,{_MAX_LEVEL}]. got {level}.' - assert isinstance(scale, (int, float)), \ - f'The scale must be type int or float. got type {type(scale)}.' - if isinstance(center, (int, float)): - center = (center, center) - elif isinstance(center, tuple): - assert len(center) == 2, 'center with type tuple must have '\ - f'2 elements. got {len(center)} elements.' - else: - assert center is None, 'center must be None or type int, '\ - f'float or tuple, got type {type(center)}.' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, 'img_fill_val as tuple must '\ - f'have 3 elements. got {len(img_fill_val)}.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError( - 'img_fill_val must be float or tuple with 3 elements.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), \ - 'all elements of img_fill_val should between range [0,255]. '\ - f'got {img_fill_val}.' - assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. '\ - f'got {prob}.' - assert isinstance(max_rotate_angle, (int, float)), 'max_rotate_angle '\ - f'should be type int or float. got type {type(max_rotate_angle)}.' 
- self.level = level - self.scale = scale - # Rotation angle in degrees. Positive values mean - # clockwise rotation. - self.angle = level_to_value(level, max_rotate_angle) - self.center = center - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.prob = prob - self.max_rotate_angle = max_rotate_angle - self.random_negative_prob = random_negative_prob - - def _rotate_img(self, results, angle, center=None, scale=1.0): - """Rotate the image. - - Args: - results (dict): Result dict from loading pipeline. - angle (float): Rotation angle in degrees, positive values - mean clockwise rotation. Same in ``mmcv.imrotate``. - center (tuple[float], optional): Center point (w, h) of the - rotation. Same in ``mmcv.imrotate``. - scale (int | float): Isotropic scale factor. Same in - ``mmcv.imrotate``. - """ - for key in results.get('img_fields', ['img']): - img = results[key].copy() - img_rotated = mmcv.imrotate( - img, angle, center, scale, border_value=self.img_fill_val) - results[key] = img_rotated.astype(img.dtype) - results['img_shape'] = results[key].shape - - def _rotate_bboxes(self, results, rotate_matrix): - """Rotate the bboxes.""" - h, w, c = results['img_shape'] - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - coordinates = np.stack([[min_x, min_y], [max_x, min_y], - [min_x, max_y], - [max_x, max_y]]) # [4, 2, nb_bbox, 1] - # pad 1 to convert from format [x, y] to homogeneous - # coordinates format [x, y, 1] - coordinates = np.concatenate( - (coordinates, - np.ones((4, 1, coordinates.shape[2], 1), coordinates.dtype)), - axis=1) # [4, 3, nb_bbox, 1] - coordinates = coordinates.transpose( - (2, 0, 1, 3)) # [nb_bbox, 4, 3, 1] - rotated_coords = np.matmul(rotate_matrix, - coordinates) # [nb_bbox, 4, 2, 1] - rotated_coords = rotated_coords[..., 0] # [nb_bbox, 4, 2] - min_x, min_y = np.min( - rotated_coords[:, :, 0], axis=1), np.min( - rotated_coords[:, :, 1], axis=1) - max_x, max_y = np.max( - rotated_coords[:, :, 0], axis=1), np.max( - rotated_coords[:, :, 1], axis=1) - min_x, min_y = np.clip( - min_x, a_min=0, a_max=w), np.clip( - min_y, a_min=0, a_max=h) - max_x, max_y = np.clip( - max_x, a_min=min_x, a_max=w), np.clip( - max_y, a_min=min_y, a_max=h) - results[key] = np.stack([min_x, min_y, max_x, max_y], - axis=-1).astype(results[key].dtype) - - def _rotate_masks(self, - results, - angle, - center=None, - scale=1.0, - fill_val=0): - """Rotate the masks.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.rotate((h, w), angle, center, scale, fill_val) - - def _rotate_seg(self, - results, - angle, - center=None, - scale=1.0, - fill_val=255): - """Rotate the segmentation map.""" - for key in results.get('seg_fields', []): - seg = results[key].copy() - results[key] = mmcv.imrotate( - seg, angle, center, scale, - border_value=fill_val).astype(seg.dtype) - - def _filter_invalid(self, results, min_bbox_size=0): - """Filter bboxes and corresponding masks too small after rotate - augmentation.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. 
gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - - def __call__(self, results): - """Call function to rotate images, bounding boxes, masks and semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Rotated results. - """ - if np.random.rand() > self.prob: - return results - h, w = results['img'].shape[:2] - center = self.center - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - angle = random_negative(self.angle, self.random_negative_prob) - self._rotate_img(results, angle, center, self.scale) - rotate_matrix = cv2.getRotationMatrix2D(center, -angle, self.scale) - self._rotate_bboxes(results, rotate_matrix) - self._rotate_masks(results, angle, center, self.scale, fill_val=0) - self._rotate_seg( - results, angle, center, self.scale, fill_val=self.seg_ignore_label) - self._filter_invalid(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'scale={self.scale}, ' - repr_str += f'center={self.center}, ' - repr_str += f'img_fill_val={self.img_fill_val}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label}, ' - repr_str += f'prob={self.prob}, ' - repr_str += f'max_rotate_angle={self.max_rotate_angle}, ' - repr_str += f'random_negative_prob={self.random_negative_prob})' - return repr_str - - -@PIPELINES.register_module() -class Translate: - """Translate the images, bboxes, masks and segmentation maps horizontally - or vertically. - - Args: - level (int | float): The level for Translate and should be in - range [0,_MAX_LEVEL]. - prob (float): The probability for performing translation and - should be in range [0, 1]. - img_fill_val (int | float | tuple): The filled value for image - border. If float, the same fill value will be used for all - the three channels of image. If tuple, the should be 3 - elements (e.g. equals the number of channels for image). - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - direction (str): The translate direction, either "horizontal" - or "vertical". - max_translate_offset (int | float): The maximum pixel's offset for - Translate. - random_negative_prob (float): The probability that turns the - offset negative. - min_size (int | float): The minimum pixel for filtering - invalid bboxes after the translation. - """ - - def __init__(self, - level, - prob=0.5, - img_fill_val=128, - seg_ignore_label=255, - direction='horizontal', - max_translate_offset=250., - random_negative_prob=0.5, - min_size=0): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level used for calculating Translate\'s offset should be ' \ - 'in range [0,_MAX_LEVEL]' - assert 0 <= prob <= 1.0, \ - 'The probability of translation should be in range [0, 1].' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, \ - 'img_fill_val as tuple must have 3 elements.' 
- img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError('img_fill_val must be type float or tuple.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), \ - 'all elements of img_fill_val should between range [0,255].' - assert direction in ('horizontal', 'vertical'), \ - 'direction should be "horizontal" or "vertical".' - assert isinstance(max_translate_offset, (int, float)), \ - 'The max_translate_offset must be type int or float.' - # the offset used for translation - self.offset = int(level_to_value(level, max_translate_offset)) - self.level = level - self.prob = prob - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.direction = direction - self.max_translate_offset = max_translate_offset - self.random_negative_prob = random_negative_prob - self.min_size = min_size - - def _translate_img(self, results, offset, direction='horizontal'): - """Translate the image. - - Args: - results (dict): Result dict from loading pipeline. - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - """ - for key in results.get('img_fields', ['img']): - img = results[key].copy() - results[key] = mmcv.imtranslate( - img, offset, direction, self.img_fill_val).astype(img.dtype) - results['img_shape'] = results[key].shape - - def _translate_bboxes(self, results, offset): - """Shift bboxes horizontally or vertically, according to offset.""" - h, w, c = results['img_shape'] - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - if self.direction == 'horizontal': - min_x = np.maximum(0, min_x + offset) - max_x = np.minimum(w, max_x + offset) - elif self.direction == 'vertical': - min_y = np.maximum(0, min_y + offset) - max_y = np.minimum(h, max_y + offset) - - # the boxes translated outside of image will be filtered along with - # the corresponding masks, by invoking ``_filter_invalid``. - results[key] = np.concatenate([min_x, min_y, max_x, max_y], - axis=-1) - - def _translate_masks(self, - results, - offset, - direction='horizontal', - fill_val=0): - """Translate masks horizontally or vertically.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.translate((h, w), offset, direction, fill_val) - - def _translate_seg(self, - results, - offset, - direction='horizontal', - fill_val=255): - """Translate segmentation maps horizontally or vertically.""" - for key in results.get('seg_fields', []): - seg = results[key].copy() - results[key] = mmcv.imtranslate(seg, offset, direction, - fill_val).astype(seg.dtype) - - def _filter_invalid(self, results, min_size=0): - """Filter bboxes and masks too small or translated out of image.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_size) & (bbox_h > min_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. 
gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - return results - - def __call__(self, results): - """Call function to translate images, bounding boxes, masks and - semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Translated results. - """ - if np.random.rand() > self.prob: - return results - offset = random_negative(self.offset, self.random_negative_prob) - self._translate_img(results, offset, self.direction) - self._translate_bboxes(results, offset) - # fill_val defaultly 0 for BitmapMasks and None for PolygonMasks. - self._translate_masks(results, offset, self.direction) - # fill_val set to ``seg_ignore_label`` for the ignored value - # of segmentation map. - self._translate_seg( - results, offset, self.direction, fill_val=self.seg_ignore_label) - self._filter_invalid(results, min_size=self.min_size) - return results - - -@PIPELINES.register_module() -class ColorTransform: - """Apply Color transformation to image. The bboxes, masks, and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Color transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_color_img(self, results, factor=1.0): - """Apply Color transformation to image.""" - for key in results.get('img_fields', ['img']): - # NOTE defaultly the image should be BGR format - img = results[key] - results[key] = mmcv.adjust_color(img, factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Color transformation. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Colored results. - """ - if np.random.rand() > self.prob: - return results - self._adjust_color_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str - - -@PIPELINES.register_module() -class EqualizeTransform: - """Apply Equalize transformation to image. The bboxes, masks and - segmentations are not modified. - - Args: - prob (float): The probability for performing Equalize transformation. - """ - - def __init__(self, prob=0.5): - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.prob = prob - - def _imequalize(self, results): - """Equalizes the histogram of one image.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.imequalize(img).astype(img.dtype) - - def __call__(self, results): - """Call function for Equalize transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._imequalize(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(prob={self.prob})' - - -@PIPELINES.register_module() -class BrightnessTransform: - """Apply Brightness transformation to image. The bboxes, masks and - segmentations are not modified. 
- - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Brightness transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_brightness_img(self, results, factor=1.0): - """Adjust the brightness of image.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.adjust_brightness(img, - factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Brightness transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._adjust_brightness_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str - - -@PIPELINES.register_module() -class ContrastTransform: - """Apply Contrast transformation to image. The bboxes, masks and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Contrast transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_contrast_img(self, results, factor=1.0): - """Adjust the image contrast.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.adjust_contrast(img, factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Contrast transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._adjust_contrast_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/compose.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/compose.py deleted file mode 100644 index d7592200..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/compose.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections - -from mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose: - """Compose multiple transforms sequentially. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. 
- """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. - """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - str_ = t.__repr__() - if 'Compose(' in str_: - str_ = str_.replace('\n', '\n ') - format_string += '\n' - format_string += f' {str_}' - format_string += '\n)' - return format_string diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/formating.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/formating.py deleted file mode 100644 index 3b3e45ab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/formating.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# flake8: noqa -import warnings - -from .formatting import * - -warnings.warn('DeprecationWarning: mmdet.datasets.pipelines.formating will be ' - 'deprecated, please replace it with ' - 'mmdet.datasets.pipelines.formatting.') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/formatting.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/formatting.py deleted file mode 100644 index 45ca69cf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/formatting.py +++ /dev/null @@ -1,392 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Sequence - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC - -from ..builder import PIPELINES - - -def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. - - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, - :class:`Sequence`, :class:`int` and :class:`float`. - - Args: - data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to - be converted. - """ - - if isinstance(data, torch.Tensor): - return data - elif isinstance(data, np.ndarray): - return torch.from_numpy(data) - elif isinstance(data, Sequence) and not mmcv.is_str(data): - return torch.tensor(data) - elif isinstance(data, int): - return torch.LongTensor([data]) - elif isinstance(data, float): - return torch.FloatTensor([data]) - else: - raise TypeError(f'type {type(data)} cannot be converted to tensor.') - - -@PIPELINES.register_module() -class ToTensor: - """Convert some results to :obj:`torch.Tensor` by given keys. - - Args: - keys (Sequence[str]): Keys that need to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert data in results to :obj:`torch.Tensor`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted - to :obj:`torch.Tensor`. 
- """ - for key in self.keys: - results[key] = to_tensor(results[key]) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class ImageToTensor: - """Convert image to :obj:`torch.Tensor` by given keys. - - The dimension order of input image is (H, W, C). The pipeline will convert - it to (C, H, W). If only 2 dimension (H, W) is given, the output would be - (1, H, W). - - Args: - keys (Sequence[str]): Key of images to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - for key in self.keys: - img = results[key] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - results[key] = (to_tensor(img.transpose(2, 0, 1))).contiguous() - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class Transpose: - """Transpose some results by given keys. - - Args: - keys (Sequence[str]): Keys of results to be transposed. - order (Sequence[int]): Order of transpose. - """ - - def __init__(self, keys, order): - self.keys = keys - self.order = order - - def __call__(self, results): - """Call function to transpose the channel order of data in results. - - Args: - results (dict): Result dict contains the data to transpose. - - Returns: - dict: The result dict contains the data transposed to \ - ``self.order``. - """ - for key in self.keys: - results[key] = results[key].transpose(self.order) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, order={self.order})' - - -@PIPELINES.register_module() -class ToDataContainer: - """Convert results to :obj:`mmcv.DataContainer` by given fields. - - Args: - fields (Sequence[dict]): Each field is a dict like - ``dict(key='xxx', **kwargs)``. The ``key`` in result will - be converted to :obj:`mmcv.DataContainer` with ``**kwargs``. - Default: ``(dict(key='img', stack=True), dict(key='gt_bboxes'), - dict(key='gt_labels'))``. - """ - - def __init__(self, - fields=(dict(key='img', stack=True), dict(key='gt_bboxes'), - dict(key='gt_labels'))): - self.fields = fields - - def __call__(self, results): - """Call function to convert data in results to - :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted to \ - :obj:`mmcv.DataContainer`. - """ - - for field in self.fields: - field = field.copy() - key = field.pop('key') - results[key] = DC(results[key], **field) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(fields={self.fields})' - - -@PIPELINES.register_module() -class DefaultFormatBundle: - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img", - "proposals", "gt_bboxes", "gt_labels", "gt_masks" and "gt_semantic_seg". - These fields are formatted as follows. 
- - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - proposals: (1)to tensor, (2)to DataContainer - - gt_bboxes: (1)to tensor, (2)to DataContainer - - gt_bboxes_ignore: (1)to tensor, (2)to DataContainer - - gt_labels: (1)to tensor, (2)to DataContainer - - gt_masks: (1)to tensor, (2)to DataContainer (cpu_only=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, \ - (3)to DataContainer (stack=True) - - Args: - img_to_float (bool): Whether to force the image to be converted to - float type. Default: True. - pad_val (dict): A dict for padding value in batch collating, - the default value is `dict(img=0, masks=0, seg=255)`. - Without this argument, the padding value of "gt_semantic_seg" - will be set to 0 by default, which should be 255. - """ - - def __init__(self, - img_to_float=True, - pad_val=dict(img=0, masks=0, seg=255)): - self.img_to_float = img_to_float - self.pad_val = pad_val - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with \ - default bundle. - """ - - if 'img' in results: - img = results['img'] - if self.img_to_float is True and img.dtype == np.uint8: - # Normally, image is of uint8 type without normalization. - # At this time, it needs to be forced to be converted to - # flot32, otherwise the model training and inference - # will be wrong. Only used for YOLOX currently . - img = img.astype(np.float32) - # add default meta keys - results = self._add_default_meta_keys(results) - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - results['img'] = DC( - to_tensor(img), padding_value=self.pad_val['img'], stack=True) - for key in ['proposals', 'gt_bboxes', 'gt_bboxes_ignore', 'gt_labels']: - if key not in results: - continue - results[key] = DC(to_tensor(results[key])) - if 'gt_masks' in results: - results['gt_masks'] = DC( - results['gt_masks'], - padding_value=self.pad_val['masks'], - cpu_only=True) - if 'gt_semantic_seg' in results: - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, ...]), - padding_value=self.pad_val['seg'], - stack=True) - return results - - def _add_default_meta_keys(self, results): - """Add default meta keys. - - We set default meta keys including `pad_shape`, `scale_factor` and - `img_norm_cfg` to avoid the case where no `Resize`, `Normalize` and - `Pad` are implemented during the whole pipeline. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - results (dict): Updated result dict contains the data to convert. - """ - img = results['img'] - results.setdefault('pad_shape', img.shape) - results.setdefault('scale_factor', 1.0) - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results.setdefault( - 'img_norm_cfg', - dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False)) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(img_to_float={self.img_to_float})' - - -@PIPELINES.register_module() -class Collect: - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "proposals", "gt_bboxes", - "gt_bboxes_ignore", "gt_labels", and/or "gt_masks". - - The "img_meta" item is always populated. 
The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - "img_shape": shape of the image input to the network as a tuple \ - (h, w, c). Note that images may be zero padded on the \ - bottom/right if the batch tensor is larger than this shape. - - - "scale_factor": a float indicating the preprocessing scale - - - "flip": a boolean indicating if image flip transform was used - - - "filename": path to the image file - - - "ori_shape": original shape of the image as a tuple (h, w, c) - - - "pad_shape": image shape after padding - - - "img_norm_cfg": a dict of normalization information: - - - mean - per channel mean subtraction - - std - per channel std divisor - - to_rgb - bool indicating if bgr was converted to rgb - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. - Default: ``('filename', 'ori_filename', 'ori_shape', 'img_shape', - 'pad_shape', 'scale_factor', 'flip', 'flip_direction', - 'img_norm_cfg')`` - """ - - def __init__(self, - keys, - meta_keys=('filename', 'ori_filename', 'ori_shape', - 'img_shape', 'pad_shape', 'scale_factor', 'flip', - 'flip_direction', 'img_norm_cfg')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:mmcv.DataContainer. - - Args: - results (dict): Result dict contains the data to collect. - - Returns: - dict: The result dict contains the following keys - - - keys in``self.keys`` - - ``img_metas`` - """ - - data = {} - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data['img_metas'] = DC(img_meta, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' - - -@PIPELINES.register_module() -class WrapFieldsToLists: - """Wrap fields of the data dictionary into lists for evaluation. - - This class can be used as a last step of a test or validation - pipeline for single image evaluation or inference. - - Example: - >>> test_pipeline = [ - >>> dict(type='LoadImageFromFile'), - >>> dict(type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - >>> dict(type='Pad', size_divisor=32), - >>> dict(type='ImageToTensor', keys=['img']), - >>> dict(type='Collect', keys=['img']), - >>> dict(type='WrapFieldsToLists') - >>> ] - """ - - def __call__(self, results): - """Call function to wrap fields into lists. - - Args: - results (dict): Result dict contains the data to wrap. - - Returns: - dict: The result dict where value of ``self.keys`` are wrapped \ - into list. - """ - - # Wrap dict fields into lists - for key, val in results.items(): - results[key] = [val] - return results - - def __repr__(self): - return f'{self.__class__.__name__}()' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/instaboost.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/instaboost.py deleted file mode 100644 index ca10c4c7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/instaboost.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
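
The deleted `formatting.py` above implements the final formatting stage of that legacy pipeline. A minimal sketch of what `DefaultFormatBundle` and `Collect` do to one sample is shown below; it assumes the old `mmdet.datasets.pipelines` package (removed by this patch) is still importable, and the sample contents and shapes are fabricated for illustration.

```python
# Sketch only: assumes the legacy mmdet 2.x pipeline classes deleted above
# are importable; the sample dict below is made up for illustration.
import numpy as np
from mmdet.datasets.pipelines import Collect, DefaultFormatBundle

# A fabricated post-augmentation sample carrying the meta keys Collect expects.
results = dict(
    img=np.zeros((800, 1216, 3), dtype=np.uint8),
    gt_bboxes=np.array([[10., 10., 100., 120.]], dtype=np.float32),
    gt_labels=np.array([3], dtype=np.int64),
    bbox_fields=['gt_bboxes'], img_fields=['img'],
    filename='demo.jpg', ori_filename='demo.jpg',
    ori_shape=(800, 1216, 3), img_shape=(800, 1216, 3), pad_shape=(800, 1216, 3),
    scale_factor=1.0, flip=False, flip_direction=None,
    img_norm_cfg=dict(mean=np.zeros(3), std=np.ones(3), to_rgb=False))

data = Collect(keys=['img', 'gt_bboxes', 'gt_labels'])(DefaultFormatBundle()(results))
# data['img'] is a DataContainer holding a (3, 800, 1216) float32 CHW tensor;
# data['img_metas'] is a cpu-only DataContainer with the meta keys listed above.
```
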
-import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class InstaBoost: - r"""Data augmentation method in `InstaBoost: Boosting Instance - Segmentation Via Probability Map Guided Copy-Pasting - `_. - - Refer to https://github.com/GothicAi/Instaboost for implementation details. - - Args: - action_candidate (tuple): Action candidates. "normal", "horizontal", \ - "vertical", "skip" are supported. Default: ('normal', \ - 'horizontal', 'skip'). - action_prob (tuple): Corresponding action probabilities. Should be \ - the same length as action_candidate. Default: (1, 0, 0). - scale (tuple): (min scale, max scale). Default: (0.8, 1.2). - dx (int): The maximum x-axis shift will be (instance width) / dx. - Default 15. - dy (int): The maximum y-axis shift will be (instance height) / dy. - Default 15. - theta (tuple): (min rotation degree, max rotation degree). \ - Default: (-1, 1). - color_prob (float): Probability of images for color augmentation. - Default 0.5. - heatmap_flag (bool): Whether to use heatmap guided. Default False. - aug_ratio (float): Probability of applying this transformation. \ - Default 0.5. - """ - - def __init__(self, - action_candidate=('normal', 'horizontal', 'skip'), - action_prob=(1, 0, 0), - scale=(0.8, 1.2), - dx=15, - dy=15, - theta=(-1, 1), - color_prob=0.5, - hflag=False, - aug_ratio=0.5): - try: - import instaboostfast as instaboost - except ImportError: - raise ImportError( - 'Please run "pip install instaboostfast" ' - 'to install instaboostfast first for instaboost augmentation.') - self.cfg = instaboost.InstaBoostConfig(action_candidate, action_prob, - scale, dx, dy, theta, - color_prob, hflag) - self.aug_ratio = aug_ratio - - def _load_anns(self, results): - labels = results['ann_info']['labels'] - masks = results['ann_info']['masks'] - bboxes = results['ann_info']['bboxes'] - n = len(labels) - - anns = [] - for i in range(n): - label = labels[i] - bbox = bboxes[i] - mask = masks[i] - x1, y1, x2, y2 = bbox - # assert (x2 - x1) >= 1 and (y2 - y1) >= 1 - bbox = [x1, y1, x2 - x1, y2 - y1] - anns.append({ - 'category_id': label, - 'segmentation': mask, - 'bbox': bbox - }) - - return anns - - def _parse_anns(self, results, anns, img): - gt_bboxes = [] - gt_labels = [] - gt_masks_ann = [] - for ann in anns: - x1, y1, w, h = ann['bbox'] - # TODO: more essential bug need to be fixed in instaboost - if w <= 0 or h <= 0: - continue - bbox = [x1, y1, x1 + w, y1 + h] - gt_bboxes.append(bbox) - gt_labels.append(ann['category_id']) - gt_masks_ann.append(ann['segmentation']) - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - results['ann_info']['labels'] = gt_labels - results['ann_info']['bboxes'] = gt_bboxes - results['ann_info']['masks'] = gt_masks_ann - results['img'] = img - return results - - def __call__(self, results): - img = results['img'] - ori_type = img.dtype - anns = self._load_anns(results) - if np.random.choice([0, 1], p=[1 - self.aug_ratio, self.aug_ratio]): - try: - import instaboostfast as instaboost - except ImportError: - raise ImportError('Please run "pip install instaboostfast" ' - 'to install instaboostfast first.') - anns, img = instaboost.get_new_data( - anns, img.astype(np.uint8), self.cfg, background=None) - - results = self._parse_anns(results, anns, img.astype(ori_type)) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(cfg={self.cfg}, aug_ratio={self.aug_ratio})' - return repr_str diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/loading.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/loading.py deleted file mode 100644 index 41ccff5d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/loading.py +++ /dev/null @@ -1,609 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils - -from mmdet.core import BitmapMasks, PolygonMasks -from ..builder import PIPELINES - -try: - from panopticapi.utils import rgb2id -except ImportError: - rgb2id = None - - -@PIPELINES.register_module() -class LoadImageFromFile: - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='color', - channel_order='bgr', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.channel_order = channel_order - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, channel_order=self.channel_order) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f"channel_order='{self.channel_order}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadImageFromWebcam(LoadImageFromFile): - """Load an image from webcam. - - Similar with :obj:`LoadImageFromFile`, but the image read from webcam is in - ``results['img']``. - """ - - def __call__(self, results): - """Call functions to add image meta information. - - Args: - results (dict): Result dict with Webcam read image in - ``results['img']``. - - Returns: - dict: The dict contains loaded image and meta information. 
- """ - - img = results['img'] - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = None - results['ori_filename'] = None - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - -@PIPELINES.register_module() -class LoadMultiChannelImageFromFiles: - """Load multi-channel images from a list of separate channel files. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename", which is expected to be a list of filenames). - Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='unchanged', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load multiple images and get images meta - information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded images and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = [ - osp.join(results['img_prefix'], fname) - for fname in results['img_info']['filename'] - ] - else: - filename = results['img_info']['filename'] - - img = [] - for name in filename: - img_bytes = self.file_client.get(name) - img.append(mmcv.imfrombytes(img_bytes, flag=self.color_type)) - img = np.stack(img, axis=-1) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations: - """Load multiple types of annotations. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. - with_mask (bool): Whether to parse and load the mask annotation. - Default: False. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: False. - poly2mask (bool): Whether to convert the instance masks from polygons - to bitmaps. Default: True. 
- denorm_bbox (bool): Whether to convert bbox from relative value to - absolute value. Only used in OpenImage Dataset. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=False, - with_seg=False, - poly2mask=True, - denorm_bbox=False, - file_client_args=dict(backend='disk')): - self.with_bbox = with_bbox - self.with_label = with_label - self.with_mask = with_mask - self.with_seg = with_seg - self.poly2mask = poly2mask - self.denorm_bbox = denorm_bbox - self.file_client_args = file_client_args.copy() - self.file_client = None - - def _load_bboxes(self, results): - """Private function to load bounding box annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box annotations. - """ - - ann_info = results['ann_info'] - results['gt_bboxes'] = ann_info['bboxes'].copy() - - if self.denorm_bbox: - bbox_num = results['gt_bboxes'].shape[0] - if bbox_num != 0: - h, w = results['img_shape'][:2] - results['gt_bboxes'][:, 0::2] *= w - results['gt_bboxes'][:, 1::2] *= h - - gt_bboxes_ignore = ann_info.get('bboxes_ignore', None) - if gt_bboxes_ignore is not None: - results['gt_bboxes_ignore'] = gt_bboxes_ignore.copy() - results['bbox_fields'].append('gt_bboxes_ignore') - results['bbox_fields'].append('gt_bboxes') - - gt_is_group_ofs = ann_info.get('gt_is_group_ofs', None) - if gt_is_group_ofs is not None: - results['gt_is_group_ofs'] = gt_is_group_ofs.copy() - - return results - - def _load_labels(self, results): - """Private function to load label annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded label annotations. - """ - - results['gt_labels'] = results['ann_info']['labels'].copy() - return results - - def _poly2mask(self, mask_ann, img_h, img_w): - """Private function to convert masks represented with polygon to - bitmaps. - - Args: - mask_ann (list | dict): Polygon mask annotation input. - img_h (int): The height of output mask. - img_w (int): The width of output mask. - - Returns: - numpy.ndarray: The decode bitmap mask of shape (img_h, img_w). - """ - - if isinstance(mask_ann, list): - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = maskUtils.frPyObjects(mask_ann, img_h, img_w) - rle = maskUtils.merge(rles) - elif isinstance(mask_ann['counts'], list): - # uncompressed RLE - rle = maskUtils.frPyObjects(mask_ann, img_h, img_w) - else: - # rle - rle = mask_ann - mask = maskUtils.decode(rle) - return mask - - def process_polygons(self, polygons): - """Convert polygons to list of ndarray and filter invalid polygons. - - Args: - polygons (list[list]): Polygons of one instance. - - Returns: - list[numpy.ndarray]: Processed polygons. - """ - - polygons = [np.array(p) for p in polygons] - valid_polygons = [] - for polygon in polygons: - if len(polygon) % 2 == 0 and len(polygon) >= 6: - valid_polygons.append(polygon) - return valid_polygons - - def _load_masks(self, results): - """Private function to load mask annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded mask annotations. - If ``self.poly2mask`` is set ``True``, `gt_mask` will contain - :obj:`PolygonMasks`. 
Otherwise, :obj:`BitmapMasks` is used. - """ - - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = results['ann_info']['masks'] - if self.poly2mask: - gt_masks = BitmapMasks( - [self._poly2mask(mask, h, w) for mask in gt_masks], h, w) - else: - gt_masks = PolygonMasks( - [self.process_polygons(polygons) for polygons in gt_masks], h, - w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - return results - - def _load_semantic_seg(self, results): - """Private function to load semantic segmentation annotations. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - img_bytes = self.file_client.get(filename) - results['gt_semantic_seg'] = mmcv.imfrombytes( - img_bytes, flag='unchanged').squeeze() - results['seg_fields'].append('gt_semantic_seg') - return results - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box, label, mask and - semantic segmentation annotations. - """ - - if self.with_bbox: - results = self._load_bboxes(results) - if results is None: - return None - if self.with_label: - results = self._load_labels(results) - if self.with_mask: - results = self._load_masks(results) - if self.with_seg: - results = self._load_semantic_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(with_bbox={self.with_bbox}, ' - repr_str += f'with_label={self.with_label}, ' - repr_str += f'with_mask={self.with_mask}, ' - repr_str += f'with_seg={self.with_seg}, ' - repr_str += f'poly2mask={self.poly2mask}, ' - repr_str += f'poly2mask={self.file_client_args})' - return repr_str - - -@PIPELINES.register_module() -class LoadPanopticAnnotations(LoadAnnotations): - """Load multiple types of panoptic annotations. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. - with_mask (bool): Whether to parse and load the mask annotation. - Default: True. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: True. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=True, - with_seg=True, - file_client_args=dict(backend='disk')): - if rgb2id is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - super(LoadPanopticAnnotations, self).__init__( - with_bbox=with_bbox, - with_label=with_label, - with_mask=with_mask, - with_seg=with_seg, - poly2mask=True, - denorm_bbox=False, - file_client_args=file_client_args) - - def _load_masks_and_semantic_segs(self, results): - """Private function to load mask and semantic segmentation annotations. - - In gt_semantic_seg, the foreground label is from `0` to - `num_things - 1`, the background label is from `num_things` to - `num_things + num_stuff - 1`, 255 means the ignored label (`VOID`). 
- - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded mask and semantic segmentation - annotations. `BitmapMasks` is used for mask annotations. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - img_bytes = self.file_client.get(filename) - pan_png = mmcv.imfrombytes( - img_bytes, flag='color', channel_order='rgb').squeeze() - pan_png = rgb2id(pan_png) - - gt_masks = [] - gt_seg = np.zeros_like(pan_png) + 255 # 255 as ignore - - for mask_info in results['ann_info']['masks']: - mask = (pan_png == mask_info['id']) - gt_seg = np.where(mask, mask_info['category'], gt_seg) - - # The legal thing masks - if mask_info.get('is_thing'): - gt_masks.append(mask.astype(np.uint8)) - - if self.with_mask: - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = BitmapMasks(gt_masks, h, w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - - if self.with_seg: - results['gt_semantic_seg'] = gt_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __call__(self, results): - """Call function to load multiple types panoptic annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box, label, mask and - semantic segmentation annotations. - """ - - if self.with_bbox: - results = self._load_bboxes(results) - if results is None: - return None - if self.with_label: - results = self._load_labels(results) - if self.with_mask or self.with_seg: - # The tasks completed by '_load_masks' and '_load_semantic_segs' - # in LoadAnnotations are merged to one function. - results = self._load_masks_and_semantic_segs(results) - - return results - - -@PIPELINES.register_module() -class LoadProposals: - """Load proposal pipeline. - - Required key is "proposals". Updated keys are "proposals", "bbox_fields". - - Args: - num_max_proposals (int, optional): Maximum number of proposals to load. - If not specified, all proposals will be loaded. - """ - - def __init__(self, num_max_proposals=None): - self.num_max_proposals = num_max_proposals - - def __call__(self, results): - """Call function to load proposals from file. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded proposal annotations. - """ - - proposals = results['proposals'] - if proposals.shape[1] not in (4, 5): - raise AssertionError( - 'proposals should have shapes (n, 4) or (n, 5), ' - f'but found {proposals.shape}') - proposals = proposals[:, :4] - - if self.num_max_proposals is not None: - proposals = proposals[:self.num_max_proposals] - - if len(proposals) == 0: - proposals = np.array([[0, 0, 0, 0]], dtype=np.float32) - results['proposals'] = proposals - results['bbox_fields'].append('proposals') - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(num_max_proposals={self.num_max_proposals})' - - -@PIPELINES.register_module() -class FilterAnnotations: - """Filter invalid annotations. - - Args: - min_gt_bbox_wh (tuple[int]): Minimum width and height of ground truth - boxes. - keep_empty (bool): Whether to return None when it - becomes an empty bbox after filtering. 
Default: True - """ - - def __init__(self, min_gt_bbox_wh, keep_empty=True): - # TODO: add more filter options - self.min_gt_bbox_wh = min_gt_bbox_wh - self.keep_empty = keep_empty - - def __call__(self, results): - assert 'gt_bboxes' in results - gt_bboxes = results['gt_bboxes'] - if gt_bboxes.shape[0] == 0: - return results - w = gt_bboxes[:, 2] - gt_bboxes[:, 0] - h = gt_bboxes[:, 3] - gt_bboxes[:, 1] - keep = (w > self.min_gt_bbox_wh[0]) & (h > self.min_gt_bbox_wh[1]) - if not keep.any(): - if self.keep_empty: - return None - else: - return results - else: - keys = ('gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg') - for key in keys: - if key in results: - results[key] = results[key][keep] - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(min_gt_bbox_wh={self.min_gt_bbox_wh},' \ - f'always_keep={self.always_keep})' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/test_time_aug.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/test_time_aug.py deleted file mode 100644 index 5f1ab7b7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug: - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. code-block:: - - img_scale=[(1333, 400), (1333, 800)], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple] | None): Images scales for resizing. - scale_factor (float | list[float] | None): Scale factors for resizing. - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal", "vertical" and "diagonal". If - flip_direction is a list, multiple flip augmentations will be - applied. It has no effect when flip == False. Default: - "horizontal". 
- """ - - def __init__(self, - transforms, - img_scale=None, - scale_factor=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - assert (img_scale is None) ^ (scale_factor is None), ( - 'Must have but only one variable can be set') - if img_scale is not None: - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.scale_key = 'scale' - assert mmcv.is_list_of(self.img_scale, tuple) - else: - self.img_scale = scale_factor if isinstance( - scale_factor, list) else [scale_factor] - self.scale_key = 'scale_factor' - - self.flip = flip - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - flip_args = [(False, None)] - if self.flip: - flip_args += [(True, direction) - for direction in self.flip_direction] - for scale in self.img_scale: - for flip, direction in flip_args: - _results = results.copy() - _results[self.scale_key] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/transforms.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/transforms.py deleted file mode 100644 index 0a1b3891..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/pipelines/transforms.py +++ /dev/null @@ -1,2919 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import inspect -import math -import warnings - -import cv2 -import mmcv -import numpy as np -from numpy import random - -from mmdet.core import BitmapMasks, PolygonMasks, find_inside_bboxes -from mmdet.core.evaluation.bbox_overlaps import bbox_overlaps -from mmdet.utils import log_img_scale -from ..builder import PIPELINES - -try: - from imagecorruptions import corrupt -except ImportError: - corrupt = None - -try: - import albumentations - from albumentations import Compose -except ImportError: - albumentations = None - Compose = None - - -@PIPELINES.register_module() -class Resize: - """Resize images & bbox & mask. - - This transform resizes the input image to some scale. Bboxes and masks are - then resized with the same scale factor. If the input dict contains the key - "scale", then the scale in the input dict is used, otherwise the specified - scale in the init method is used. 
If the input dict contains the key - "scale_factor" (if MultiScaleFlipAug does not give img_scale but - scale_factor), the actual scale will be computed by image shape and - scale_factor. - - `img_scale` can either be a tuple (single-scale) or a list of tuple - (multi-scale). There are 3 multiscale modes: - - - ``ratio_range is not None``: randomly sample a ratio from the ratio \ - range and multiply it with the image scale. - - ``ratio_range is None`` and ``multiscale_mode == "range"``: randomly \ - sample a scale from the multiscale range. - - ``ratio_range is None`` and ``multiscale_mode == "value"``: randomly \ - sample a scale from multiple scales. - - Args: - img_scale (tuple or list[tuple]): Images scales for resizing. - multiscale_mode (str): Either "range" or "value". - ratio_range (tuple[float]): (min_ratio, max_ratio) - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - backend (str): Image resize backend, choices are 'cv2' and 'pillow'. - These two backends generates slightly different results. Defaults - to 'cv2'. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - override (bool, optional): Whether to override `scale` and - `scale_factor` so as to call resize twice. Default False. If True, - after the first resizing, the existed `scale` and `scale_factor` - will be ignored so the second resizing can be allowed. - This option is a work-around for multiple times of resize in DETR. - Defaults to False. - """ - - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=True, - bbox_clip_border=True, - backend='cv2', - interpolation='bilinear', - override=False): - if img_scale is None: - self.img_scale = None - else: - if isinstance(img_scale, list): - self.img_scale = img_scale - else: - self.img_scale = [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) - - if ratio_range is not None: - # mode 1: given a scale and a range of image ratio - assert len(self.img_scale) == 1 - else: - # mode 2: given multiple scales or a range of scales - assert multiscale_mode in ['value', 'range'] - - self.backend = backend - self.multiscale_mode = multiscale_mode - self.ratio_range = ratio_range - self.keep_ratio = keep_ratio - # TODO: refactor the override option in Resize - self.interpolation = interpolation - self.override = override - self.bbox_clip_border = bbox_clip_border - - @staticmethod - def random_select(img_scales): - """Randomly select an img_scale from given candidates. - - Args: - img_scales (list[tuple]): Images scales for selection. - - Returns: - (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, \ - where ``img_scale`` is the selected image scale and \ - ``scale_idx`` is the selected index in the given candidates. - """ - - assert mmcv.is_list_of(img_scales, tuple) - scale_idx = np.random.randint(len(img_scales)) - img_scale = img_scales[scale_idx] - return img_scale, scale_idx - - @staticmethod - def random_sample(img_scales): - """Randomly sample an img_scale when ``multiscale_mode=='range'``. - - Args: - img_scales (list[tuple]): Images scale range for sampling. 
- There must be two tuples in img_scales, which specify the lower - and upper bound of image scales. - - Returns: - (tuple, None): Returns a tuple ``(img_scale, None)``, where \ - ``img_scale`` is sampled scale and None is just a placeholder \ - to be consistent with :func:`random_select`. - """ - - assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 - img_scale_long = [max(s) for s in img_scales] - img_scale_short = [min(s) for s in img_scales] - long_edge = np.random.randint( - min(img_scale_long), - max(img_scale_long) + 1) - short_edge = np.random.randint( - min(img_scale_short), - max(img_scale_short) + 1) - img_scale = (long_edge, short_edge) - return img_scale, None - - @staticmethod - def random_sample_ratio(img_scale, ratio_range): - """Randomly sample an img_scale when ``ratio_range`` is specified. - - A ratio will be randomly sampled from the range specified by - ``ratio_range``. Then it would be multiplied with ``img_scale`` to - generate sampled scale. - - Args: - img_scale (tuple): Images scale base to multiply with ratio. - ratio_range (tuple[float]): The minimum and maximum ratio to scale - the ``img_scale``. - - Returns: - (tuple, None): Returns a tuple ``(scale, None)``, where \ - ``scale`` is sampled ratio multiplied with ``img_scale`` and \ - None is just a placeholder to be consistent with \ - :func:`random_select`. - """ - - assert isinstance(img_scale, tuple) and len(img_scale) == 2 - min_ratio, max_ratio = ratio_range - assert min_ratio <= max_ratio - ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio - scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) - return scale, None - - def _random_scale(self, results): - """Randomly sample an img_scale according to ``ratio_range`` and - ``multiscale_mode``. - - If ``ratio_range`` is specified, a ratio will be sampled and be - multiplied with ``img_scale``. - If multiple scales are specified by ``img_scale``, a scale will be - sampled according to ``multiscale_mode``. - Otherwise, single scale will be used. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: Two new keys 'scale` and 'scale_idx` are added into \ - ``results``, which would be used by subsequent pipelines. 
- """ - - if self.ratio_range is not None: - scale, scale_idx = self.random_sample_ratio( - self.img_scale[0], self.ratio_range) - elif len(self.img_scale) == 1: - scale, scale_idx = self.img_scale[0], 0 - elif self.multiscale_mode == 'range': - scale, scale_idx = self.random_sample(self.img_scale) - elif self.multiscale_mode == 'value': - scale, scale_idx = self.random_select(self.img_scale) - else: - raise NotImplementedError - - results['scale'] = scale - results['scale_idx'] = scale_idx - - def _resize_img(self, results): - """Resize images with ``results['scale']``.""" - for key in results.get('img_fields', ['img']): - if self.keep_ratio: - img, scale_factor = mmcv.imrescale( - results[key], - results['scale'], - return_scale=True, - interpolation=self.interpolation, - backend=self.backend) - # the w_scale and h_scale has minor difference - # a real fix should be done in the mmcv.imrescale in the future - new_h, new_w = img.shape[:2] - h, w = results[key].shape[:2] - w_scale = new_w / w - h_scale = new_h / h - else: - img, w_scale, h_scale = mmcv.imresize( - results[key], - results['scale'], - return_scale=True, - interpolation=self.interpolation, - backend=self.backend) - results[key] = img - - scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], - dtype=np.float32) - results['img_shape'] = img.shape - # in case that there is no padding - results['pad_shape'] = img.shape - results['scale_factor'] = scale_factor - results['keep_ratio'] = self.keep_ratio - - def _resize_bboxes(self, results): - """Resize bounding boxes with ``results['scale_factor']``.""" - for key in results.get('bbox_fields', []): - bboxes = results[key] * results['scale_factor'] - if self.bbox_clip_border: - img_shape = results['img_shape'] - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - results[key] = bboxes - - def _resize_masks(self, results): - """Resize masks with ``results['scale']``""" - for key in results.get('mask_fields', []): - if results[key] is None: - continue - if self.keep_ratio: - results[key] = results[key].rescale(results['scale']) - else: - results[key] = results[key].resize(results['img_shape'][:2]) - - def _resize_seg(self, results): - """Resize semantic segmentation map with ``results['scale']``.""" - for key in results.get('seg_fields', []): - if self.keep_ratio: - gt_seg = mmcv.imrescale( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - else: - gt_seg = mmcv.imresize( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - results[key] = gt_seg - - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \ - 'keep_ratio' keys are added into result dict. 
- """ - - if 'scale' not in results: - if 'scale_factor' in results: - img_shape = results['img'].shape[:2] - scale_factor = results['scale_factor'] - assert isinstance(scale_factor, float) - results['scale'] = tuple( - [int(x * scale_factor) for x in img_shape][::-1]) - else: - self._random_scale(results) - else: - if not self.override: - assert 'scale_factor' not in results, ( - 'scale and scale_factor cannot be both set.') - else: - results.pop('scale') - if 'scale_factor' in results: - results.pop('scale_factor') - self._random_scale(results) - - self._resize_img(results) - self._resize_bboxes(results) - self._resize_masks(results) - self._resize_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(img_scale={self.img_scale}, ' - repr_str += f'multiscale_mode={self.multiscale_mode}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'keep_ratio={self.keep_ratio}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class RandomFlip: - """Flip the image & bbox & mask. - - If the input dict contains the key "flip", then the flag will be used, - otherwise it will be randomly decided by a ratio specified in the init - method. - - When random flip is enabled, ``flip_ratio``/``direction`` can either be a - float/string or tuple of float/string. There are 3 flip modes: - - - ``flip_ratio`` is float, ``direction`` is string: the image will be - ``direction``ly flipped with probability of ``flip_ratio`` . - E.g., ``flip_ratio=0.5``, ``direction='horizontal'``, - then image will be horizontally flipped with probability of 0.5. - - ``flip_ratio`` is float, ``direction`` is list of string: the image will - be ``direction[i]``ly flipped with probability of - ``flip_ratio/len(direction)``. - E.g., ``flip_ratio=0.5``, ``direction=['horizontal', 'vertical']``, - then image will be horizontally flipped with probability of 0.25, - vertically with probability of 0.25. - - ``flip_ratio`` is list of float, ``direction`` is list of string: - given ``len(flip_ratio) == len(direction)``, the image will - be ``direction[i]``ly flipped with probability of ``flip_ratio[i]``. - E.g., ``flip_ratio=[0.3, 0.5]``, ``direction=['horizontal', - 'vertical']``, then image will be horizontally flipped with probability - of 0.3, vertically with probability of 0.5. - - Args: - flip_ratio (float | list[float], optional): The flipping probability. - Default: None. - direction(str | list[str], optional): The flipping direction. Options - are 'horizontal', 'vertical', 'diagonal'. Default: 'horizontal'. - If input is a list, the length must equal ``flip_ratio``. Each - element in ``flip_ratio`` indicates the flip probability of - corresponding direction. 
- """ - - def __init__(self, flip_ratio=None, direction='horizontal'): - if isinstance(flip_ratio, list): - assert mmcv.is_list_of(flip_ratio, float) - assert 0 <= sum(flip_ratio) <= 1 - elif isinstance(flip_ratio, float): - assert 0 <= flip_ratio <= 1 - elif flip_ratio is None: - pass - else: - raise ValueError('flip_ratios must be None, float, ' - 'or list of float') - self.flip_ratio = flip_ratio - - valid_directions = ['horizontal', 'vertical', 'diagonal'] - if isinstance(direction, str): - assert direction in valid_directions - elif isinstance(direction, list): - assert mmcv.is_list_of(direction, str) - assert set(direction).issubset(set(valid_directions)) - else: - raise ValueError('direction must be either str or list of str') - self.direction = direction - - if isinstance(flip_ratio, list): - assert len(self.flip_ratio) == len(self.direction) - - def bbox_flip(self, bboxes, img_shape, direction): - """Flip bboxes horizontally. - - Args: - bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k) - img_shape (tuple[int]): Image shape (height, width) - direction (str): Flip direction. Options are 'horizontal', - 'vertical'. - - Returns: - numpy.ndarray: Flipped bounding boxes. - """ - - assert bboxes.shape[-1] % 4 == 0 - flipped = bboxes.copy() - if direction == 'horizontal': - w = img_shape[1] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - elif direction == 'vertical': - h = img_shape[0] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - elif direction == 'diagonal': - w = img_shape[1] - h = img_shape[0] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - else: - raise ValueError(f"Invalid flipping direction '{direction}'") - return flipped - - def __call__(self, results): - """Call function to flip bounding boxes, masks, semantic segmentation - maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Flipped results, 'flip', 'flip_direction' keys are added \ - into result dict. 
- """ - - if 'flip' not in results: - if isinstance(self.direction, list): - # None means non-flip - direction_list = self.direction + [None] - else: - # None means non-flip - direction_list = [self.direction, None] - - if isinstance(self.flip_ratio, list): - non_flip_ratio = 1 - sum(self.flip_ratio) - flip_ratio_list = self.flip_ratio + [non_flip_ratio] - else: - non_flip_ratio = 1 - self.flip_ratio - # exclude non-flip - single_ratio = self.flip_ratio / (len(direction_list) - 1) - flip_ratio_list = [single_ratio] * (len(direction_list) - - 1) + [non_flip_ratio] - - cur_dir = np.random.choice(direction_list, p=flip_ratio_list) - - results['flip'] = cur_dir is not None - if 'flip_direction' not in results: - results['flip_direction'] = cur_dir - if results['flip']: - # flip image - for key in results.get('img_fields', ['img']): - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']) - # flip bboxes - for key in results.get('bbox_fields', []): - results[key] = self.bbox_flip(results[key], - results['img_shape'], - results['flip_direction']) - # flip masks - for key in results.get('mask_fields', []): - results[key] = results[key].flip(results['flip_direction']) - - # flip segs - for key in results.get('seg_fields', []): - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' - - -@PIPELINES.register_module() -class RandomShift: - """Shift the image and box given shift pixels and probability. - - Args: - shift_ratio (float): Probability of shifts. Default 0.5. - max_shift_px (int): The max pixels for shifting. Default 32. - filter_thr_px (int): The width and height threshold for filtering. - The bbox and the rest of the targets below the width and - height threshold will be filtered. Default 1. - """ - - def __init__(self, shift_ratio=0.5, max_shift_px=32, filter_thr_px=1): - assert 0 <= shift_ratio <= 1 - assert max_shift_px >= 0 - self.shift_ratio = shift_ratio - self.max_shift_px = max_shift_px - self.filter_thr_px = int(filter_thr_px) - # The key correspondence from bboxes to labels. - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - - def __call__(self, results): - """Call function to random shift images, bounding boxes. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Shift results. - """ - if random.random() < self.shift_ratio: - img_shape = results['img'].shape[:2] - - random_shift_x = random.randint(-self.max_shift_px, - self.max_shift_px) - random_shift_y = random.randint(-self.max_shift_px, - self.max_shift_px) - new_x = max(0, random_shift_x) - ori_x = max(0, -random_shift_x) - new_y = max(0, random_shift_y) - ori_y = max(0, -random_shift_y) - - # TODO: support mask and semantic segmentation maps. - for key in results.get('bbox_fields', []): - bboxes = results[key].copy() - bboxes[..., 0::2] += random_shift_x - bboxes[..., 1::2] += random_shift_y - - # clip border - bboxes[..., 0::2] = np.clip(bboxes[..., 0::2], 0, img_shape[1]) - bboxes[..., 1::2] = np.clip(bboxes[..., 1::2], 0, img_shape[0]) - - # remove invalid bboxes - bbox_w = bboxes[..., 2] - bboxes[..., 0] - bbox_h = bboxes[..., 3] - bboxes[..., 1] - valid_inds = (bbox_w > self.filter_thr_px) & ( - bbox_h > self.filter_thr_px) - # If the shift does not contain any gt-bbox area, skip this - # image. 
- if key == 'gt_bboxes' and not valid_inds.any(): - return results - bboxes = bboxes[valid_inds] - results[key] = bboxes - - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - - for key in results.get('img_fields', ['img']): - img = results[key] - new_img = np.zeros_like(img) - img_h, img_w = img.shape[:2] - new_h = img_h - np.abs(random_shift_y) - new_w = img_w - np.abs(random_shift_x) - new_img[new_y:new_y + new_h, new_x:new_x + new_w] \ - = img[ori_y:ori_y + new_h, ori_x:ori_x + new_w] - results[key] = new_img - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(max_shift_px={self.max_shift_px}, ' - return repr_str - - -@PIPELINES.register_module() -class Pad: - """Pad the image & masks & segmentation map. - - There are two padding modes: (1) pad to a fixed size and (2) pad to the - minimum size that is divisible by some number. - Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor", - - Args: - size (tuple, optional): Fixed padding size. - size_divisor (int, optional): The divisor of padded size. - pad_to_square (bool): Whether to pad the image into a square. - Currently only used for YOLOX. Default: False. - pad_val (dict, optional): A dict for padding value, the default - value is `dict(img=0, masks=0, seg=255)`. - """ - - def __init__(self, - size=None, - size_divisor=None, - pad_to_square=False, - pad_val=dict(img=0, masks=0, seg=255)): - self.size = size - self.size_divisor = size_divisor - if isinstance(pad_val, float) or isinstance(pad_val, int): - warnings.warn( - 'pad_val of float type is deprecated now, ' - f'please use pad_val=dict(img={pad_val}, ' - f'masks={pad_val}, seg=255) instead.', DeprecationWarning) - pad_val = dict(img=pad_val, masks=pad_val, seg=255) - assert isinstance(pad_val, dict) - self.pad_val = pad_val - self.pad_to_square = pad_to_square - - if pad_to_square: - assert size is None and size_divisor is None, \ - 'The size and size_divisor must be None ' \ - 'when pad2square is True' - else: - assert size is not None or size_divisor is not None, \ - 'only one of size and size_divisor should be valid' - assert size is None or size_divisor is None - - def _pad_img(self, results): - """Pad images according to ``self.size``.""" - pad_val = self.pad_val.get('img', 0) - for key in results.get('img_fields', ['img']): - if self.pad_to_square: - max_size = max(results[key].shape[:2]) - self.size = (max_size, max_size) - if self.size is not None: - padded_img = mmcv.impad( - results[key], shape=self.size, pad_val=pad_val) - elif self.size_divisor is not None: - padded_img = mmcv.impad_to_multiple( - results[key], self.size_divisor, pad_val=pad_val) - results[key] = padded_img - results['pad_shape'] = padded_img.shape - results['pad_fixed_size'] = self.size - results['pad_size_divisor'] = self.size_divisor - - def _pad_masks(self, results): - """Pad masks according to ``results['pad_shape']``.""" - pad_shape = results['pad_shape'][:2] - pad_val = self.pad_val.get('masks', 0) - for key in results.get('mask_fields', []): - results[key] = results[key].pad(pad_shape, pad_val=pad_val) - - def _pad_seg(self, results): - """Pad semantic segmentation map according to - ``results['pad_shape']``.""" - pad_val = self.pad_val.get('seg', 255) - for key in results.get('seg_fields', []): - results[key] = mmcv.impad( - results[key], shape=results['pad_shape'][:2], pad_val=pad_val) - - def __call__(self, 
results): - """Call function to pad images, masks, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Updated result dict. - """ - self._pad_img(results) - self._pad_masks(results) - self._pad_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(size={self.size}, ' - repr_str += f'size_divisor={self.size_divisor}, ' - repr_str += f'pad_to_square={self.pad_to_square}, ' - repr_str += f'pad_val={self.pad_val})' - return repr_str - - -@PIPELINES.register_module() -class Normalize: - """Normalize the image. - - Added key is "img_norm_cfg". - - Args: - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB, - default is true. - """ - - def __init__(self, mean, std, to_rgb=True): - self.mean = np.array(mean, dtype=np.float32) - self.std = np.array(std, dtype=np.float32) - self.to_rgb = to_rgb - - def __call__(self, results): - """Call function to normalize images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Normalized results, 'img_norm_cfg' key is added into - result dict. - """ - for key in results.get('img_fields', ['img']): - results[key] = mmcv.imnormalize(results[key], self.mean, self.std, - self.to_rgb) - results['img_norm_cfg'] = dict( - mean=self.mean, std=self.std, to_rgb=self.to_rgb) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, std={self.std}, to_rgb={self.to_rgb})' - return repr_str - - -@PIPELINES.register_module() -class RandomCrop: - """Random crop the image & bboxes & masks. - - The absolute `crop_size` is sampled based on `crop_type` and `image_size`, - then the cropped results are generated. - - Args: - crop_size (tuple): The relative ratio or absolute pixels of - height and width. - crop_type (str, optional): one of "relative_range", "relative", - "absolute", "absolute_range". "relative" randomly crops - (h * crop_size[0], w * crop_size[1]) part from an input of size - (h, w). "relative_range" uniformly samples relative crop size from - range [crop_size[0], 1] and [crop_size[1], 1] for height and width - respectively. "absolute" crops from an input with absolute size - (crop_size[0], crop_size[1]). "absolute_range" uniformly samples - crop_h in range [crop_size[0], min(h, crop_size[1])] and crop_w - in range [crop_size[0], min(w, crop_size[1])]. Default "absolute". - allow_negative_crop (bool, optional): Whether to allow a crop that does - not contain any bbox area. Default False. - recompute_bbox (bool, optional): Whether to re-compute the boxes based - on cropped instance masks. Default False. - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - - Note: - - If the image is smaller than the absolute crop size, return the - original image. - - The keys for bboxes, labels and masks must be aligned. That is, - `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and - `gt_bboxes_ignore` corresponds to `gt_labels_ignore` and - `gt_masks_ignore`. - - If the crop does not contain any gt-bbox region and - `allow_negative_crop` is set to False, skip this image. 
- """ - - def __init__(self, - crop_size, - crop_type='absolute', - allow_negative_crop=False, - recompute_bbox=False, - bbox_clip_border=True): - if crop_type not in [ - 'relative_range', 'relative', 'absolute', 'absolute_range' - ]: - raise ValueError(f'Invalid crop_type {crop_type}.') - if crop_type in ['absolute', 'absolute_range']: - assert crop_size[0] > 0 and crop_size[1] > 0 - assert isinstance(crop_size[0], int) and isinstance( - crop_size[1], int) - else: - assert 0 < crop_size[0] <= 1 and 0 < crop_size[1] <= 1 - self.crop_size = crop_size - self.crop_type = crop_type - self.allow_negative_crop = allow_negative_crop - self.bbox_clip_border = bbox_clip_border - self.recompute_bbox = recompute_bbox - # The key correspondence from bboxes to labels and masks. - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - self.bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - - def _crop_data(self, results, crop_size, allow_negative_crop): - """Function to randomly crop images, bounding boxes, masks, semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - crop_size (tuple): Expected absolute size after cropping, (h, w). - allow_negative_crop (bool): Whether to allow a crop that does not - contain any bbox area. Default to False. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - assert crop_size[0] > 0 and crop_size[1] > 0 - for key in results.get('img_fields', ['img']): - img = results[key] - margin_h = max(img.shape[0] - crop_size[0], 0) - margin_w = max(img.shape[1] - crop_size[1], 0) - offset_h = np.random.randint(0, margin_h + 1) - offset_w = np.random.randint(0, margin_w + 1) - crop_y1, crop_y2 = offset_h, offset_h + crop_size[0] - crop_x1, crop_x2 = offset_w, offset_w + crop_size[1] - - # crop the image - img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] - img_shape = img.shape - results[key] = img - results['img_shape'] = img_shape - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - # e.g. gt_bboxes and gt_bboxes_ignore - bbox_offset = np.array([offset_w, offset_h, offset_w, offset_h], - dtype=np.float32) - bboxes = results[key] - bbox_offset - if self.bbox_clip_border: - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - valid_inds = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - # If the crop does not contain any gt-bbox area and - # allow_negative_crop is False, skip this image. - if (key == 'gt_bboxes' and not valid_inds.any() - and not allow_negative_crop): - return None - results[key] = bboxes[valid_inds, :] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - - # mask fields, e.g. 
gt_masks and gt_masks_ignore - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - valid_inds.nonzero()[0]].crop( - np.asarray([crop_x1, crop_y1, crop_x2, crop_y2])) - if self.recompute_bbox: - results[key] = results[mask_key].get_bboxes() - - # crop semantic seg - for key in results.get('seg_fields', []): - results[key] = results[key][crop_y1:crop_y2, crop_x1:crop_x2] - - return results - - def _get_crop_size(self, image_size): - """Randomly generates the absolute crop size based on `crop_type` and - `image_size`. - - Args: - image_size (tuple): (h, w). - - Returns: - crop_size (tuple): (crop_h, crop_w) in absolute pixels. - """ - h, w = image_size - if self.crop_type == 'absolute': - return (min(self.crop_size[0], h), min(self.crop_size[1], w)) - elif self.crop_type == 'absolute_range': - assert self.crop_size[0] <= self.crop_size[1] - crop_h = np.random.randint( - min(h, self.crop_size[0]), - min(h, self.crop_size[1]) + 1) - crop_w = np.random.randint( - min(w, self.crop_size[0]), - min(w, self.crop_size[1]) + 1) - return crop_h, crop_w - elif self.crop_type == 'relative': - crop_h, crop_w = self.crop_size - return int(h * crop_h + 0.5), int(w * crop_w + 0.5) - elif self.crop_type == 'relative_range': - crop_size = np.asarray(self.crop_size, dtype=np.float32) - crop_h, crop_w = crop_size + np.random.rand(2) * (1 - crop_size) - return int(h * crop_h + 0.5), int(w * crop_w + 0.5) - - def __call__(self, results): - """Call function to randomly crop images, bounding boxes, masks, - semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - image_size = results['img'].shape[:2] - crop_size = self._get_crop_size(image_size) - results = self._crop_data(results, crop_size, self.allow_negative_crop) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(crop_size={self.crop_size}, ' - repr_str += f'crop_type={self.crop_type}, ' - repr_str += f'allow_negative_crop={self.allow_negative_crop}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class SegRescale: - """Rescale semantic segmentation maps. - - Args: - scale_factor (float): The scale factor of the final output. - backend (str): Image rescale backend, choices are 'cv2' and 'pillow'. - These two backends generates slightly different results. Defaults - to 'cv2'. - """ - - def __init__(self, scale_factor=1, backend='cv2'): - self.scale_factor = scale_factor - self.backend = backend - - def __call__(self, results): - """Call function to scale the semantic segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with semantic segmentation map scaled. - """ - - for key in results.get('seg_fields', []): - if self.scale_factor != 1: - results[key] = mmcv.imrescale( - results[key], - self.scale_factor, - interpolation='nearest', - backend=self.backend) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(scale_factor={self.scale_factor})' - - -@PIPELINES.register_module() -class PhotoMetricDistortion: - """Apply photometric distortion to image sequentially, every transformation - is applied with a probability of 0.5. The position of random contrast is in - second or second to last. - - 1. random brightness - 2. random contrast (mode 0) - 3. 
convert color from BGR to HSV - 4. random saturation - 5. random hue - 6. convert color from HSV to BGR - 7. random contrast (mode 1) - 8. randomly swap channels - - Args: - brightness_delta (int): delta of brightness. - contrast_range (tuple): range of contrast. - saturation_range (tuple): range of saturation. - hue_delta (int): delta of hue. - """ - - def __init__(self, - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18): - self.brightness_delta = brightness_delta - self.contrast_lower, self.contrast_upper = contrast_range - self.saturation_lower, self.saturation_upper = saturation_range - self.hue_delta = hue_delta - - def __call__(self, results): - """Call function to perform photometric distortion on images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images distorted. - """ - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - img = img.astype(np.float32) - # random brightness - if random.randint(2): - delta = random.uniform(-self.brightness_delta, - self.brightness_delta) - img += delta - - # mode == 0 --> do random contrast first - # mode == 1 --> do random contrast last - mode = random.randint(2) - if mode == 1: - if random.randint(2): - alpha = random.uniform(self.contrast_lower, - self.contrast_upper) - img *= alpha - - # convert color from BGR to HSV - img = mmcv.bgr2hsv(img) - - # random saturation - if random.randint(2): - img[..., 1] *= random.uniform(self.saturation_lower, - self.saturation_upper) - - # random hue - if random.randint(2): - img[..., 0] += random.uniform(-self.hue_delta, self.hue_delta) - img[..., 0][img[..., 0] > 360] -= 360 - img[..., 0][img[..., 0] < 0] += 360 - - # convert color from HSV to BGR - img = mmcv.hsv2bgr(img) - - # random contrast - if mode == 0: - if random.randint(2): - alpha = random.uniform(self.contrast_lower, - self.contrast_upper) - img *= alpha - - # randomly swap channels - if random.randint(2): - img = img[..., random.permutation(3)] - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(\nbrightness_delta={self.brightness_delta},\n' - repr_str += 'contrast_range=' - repr_str += f'{(self.contrast_lower, self.contrast_upper)},\n' - repr_str += 'saturation_range=' - repr_str += f'{(self.saturation_lower, self.saturation_upper)},\n' - repr_str += f'hue_delta={self.hue_delta})' - return repr_str - - -@PIPELINES.register_module() -class Expand: - """Random expand the image & bboxes. - - Randomly place the original image on a canvas of 'ratio' x original image - size filled with mean values. The ratio is in the range of ratio_range. - - Args: - mean (tuple): mean value of dataset. - to_rgb (bool): if need to convert the order of mean to align with RGB. - ratio_range (tuple): range of expand ratio. - prob (float): probability of applying this transformation - """ - - def __init__(self, - mean=(0, 0, 0), - to_rgb=True, - ratio_range=(1, 4), - seg_ignore_label=None, - prob=0.5): - self.to_rgb = to_rgb - self.ratio_range = ratio_range - if to_rgb: - self.mean = mean[::-1] - else: - self.mean = mean - self.min_ratio, self.max_ratio = ratio_range - self.seg_ignore_label = seg_ignore_label - self.prob = prob - - def __call__(self, results): - """Call function to expand images, bounding boxes. - - Args: - results (dict): Result dict from loading pipeline. 
- - Returns: - dict: Result dict with images, bounding boxes expanded - """ - - if random.uniform(0, 1) > self.prob: - return results - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - - h, w, c = img.shape - ratio = random.uniform(self.min_ratio, self.max_ratio) - # speedup expand when meets large image - if np.all(self.mean == self.mean[0]): - expand_img = np.empty((int(h * ratio), int(w * ratio), c), - img.dtype) - expand_img.fill(self.mean[0]) - else: - expand_img = np.full((int(h * ratio), int(w * ratio), c), - self.mean, - dtype=img.dtype) - left = int(random.uniform(0, w * ratio - w)) - top = int(random.uniform(0, h * ratio - h)) - expand_img[top:top + h, left:left + w] = img - - results['img'] = expand_img - # expand bboxes - for key in results.get('bbox_fields', []): - results[key] = results[key] + np.tile( - (left, top), 2).astype(results[key].dtype) - - # expand masks - for key in results.get('mask_fields', []): - results[key] = results[key].expand( - int(h * ratio), int(w * ratio), top, left) - - # expand segs - for key in results.get('seg_fields', []): - gt_seg = results[key] - expand_gt_seg = np.full((int(h * ratio), int(w * ratio)), - self.seg_ignore_label, - dtype=gt_seg.dtype) - expand_gt_seg[top:top + h, left:left + w] = gt_seg - results[key] = expand_gt_seg - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, to_rgb={self.to_rgb}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label})' - return repr_str - - -@PIPELINES.register_module() -class MinIoURandomCrop: - """Random crop the image & bboxes, the cropped patches have minimum IoU - requirement with original image & bboxes, the IoU threshold is randomly - selected from min_ious. - - Args: - min_ious (tuple): minimum IoU threshold for all intersections with - bounding boxes - min_crop_size (float): minimum crop's size (i.e. h,w := a*h, a*w, - where a >= min_crop_size). - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - - Note: - The keys for bboxes, labels and masks should be paired. That is, \ - `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and \ - `gt_bboxes_ignore` to `gt_labels_ignore` and `gt_masks_ignore`. - """ - - def __init__(self, - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3, - bbox_clip_border=True): - # 1: return ori img - self.min_ious = min_ious - self.sample_mode = (1, *min_ious, 0) - self.min_crop_size = min_crop_size - self.bbox_clip_border = bbox_clip_border - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - self.bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - - def __call__(self, results): - """Call function to crop images and bounding boxes with minimum IoU - constraint. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images and bounding boxes cropped, \ - 'img_shape' key is updated. 
- """ - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - assert 'bbox_fields' in results - boxes = [results[key] for key in results['bbox_fields']] - boxes = np.concatenate(boxes, 0) - h, w, c = img.shape - while True: - mode = random.choice(self.sample_mode) - self.mode = mode - if mode == 1: - return results - - min_iou = mode - for i in range(50): - new_w = random.uniform(self.min_crop_size * w, w) - new_h = random.uniform(self.min_crop_size * h, h) - - # h / w in [0.5, 2] - if new_h / new_w < 0.5 or new_h / new_w > 2: - continue - - left = random.uniform(w - new_w) - top = random.uniform(h - new_h) - - patch = np.array( - (int(left), int(top), int(left + new_w), int(top + new_h))) - # Line or point crop is not allowed - if patch[2] == patch[0] or patch[3] == patch[1]: - continue - overlaps = bbox_overlaps( - patch.reshape(-1, 4), boxes.reshape(-1, 4)).reshape(-1) - if len(overlaps) > 0 and overlaps.min() < min_iou: - continue - - # center of boxes should inside the crop img - # only adjust boxes and instance masks when the gt is not empty - if len(overlaps) > 0: - # adjust boxes - def is_center_of_bboxes_in_patch(boxes, patch): - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = ((center[:, 0] > patch[0]) * - (center[:, 1] > patch[1]) * - (center[:, 0] < patch[2]) * - (center[:, 1] < patch[3])) - return mask - - mask = is_center_of_bboxes_in_patch(boxes, patch) - if not mask.any(): - continue - for key in results.get('bbox_fields', []): - boxes = results[key].copy() - mask = is_center_of_bboxes_in_patch(boxes, patch) - boxes = boxes[mask] - if self.bbox_clip_border: - boxes[:, 2:] = boxes[:, 2:].clip(max=patch[2:]) - boxes[:, :2] = boxes[:, :2].clip(min=patch[:2]) - boxes -= np.tile(patch[:2], 2) - - results[key] = boxes - # labels - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][mask] - - # mask fields - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - mask.nonzero()[0]].crop(patch) - # adjust the img no matter whether the gt is empty before crop - img = img[patch[1]:patch[3], patch[0]:patch[2]] - results['img'] = img - results['img_shape'] = img.shape - - # seg fields - for key in results.get('seg_fields', []): - results[key] = results[key][patch[1]:patch[3], - patch[0]:patch[2]] - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(min_ious={self.min_ious}, ' - repr_str += f'min_crop_size={self.min_crop_size}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class Corrupt: - """Corruption augmentation. - - Corruption transforms implemented based on - `imagecorruptions `_. - - Args: - corruption (str): Corruption name. - severity (int, optional): The severity of corruption. Default: 1. - """ - - def __init__(self, corruption, severity=1): - self.corruption = corruption - self.severity = severity - - def __call__(self, results): - """Call function to corrupt image. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images corrupted. 
- """ - - if corrupt is None: - raise RuntimeError('imagecorruptions is not installed') - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - results['img'] = corrupt( - results['img'].astype(np.uint8), - corruption_name=self.corruption, - severity=self.severity) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(corruption={self.corruption}, ' - repr_str += f'severity={self.severity})' - return repr_str - - -@PIPELINES.register_module() -class Albu: - """Albumentation augmentation. - - Adds custom transformations from Albumentations library. - Please, visit `https://albumentations.readthedocs.io` - to get more information. - - An example of ``transforms`` is as followed: - - .. code-block:: - - [ - dict( - type='ShiftScaleRotate', - shift_limit=0.0625, - scale_limit=0.0, - rotate_limit=0, - interpolation=1, - p=0.5), - dict( - type='RandomBrightnessContrast', - brightness_limit=[0.1, 0.3], - contrast_limit=[0.1, 0.3], - p=0.2), - dict(type='ChannelShuffle', p=0.1), - dict( - type='OneOf', - transforms=[ - dict(type='Blur', blur_limit=3, p=1.0), - dict(type='MedianBlur', blur_limit=3, p=1.0) - ], - p=0.1), - ] - - Args: - transforms (list[dict]): A list of albu transformations - bbox_params (dict): Bbox_params for albumentation `Compose` - keymap (dict): Contains {'input key':'albumentation-style key'} - skip_img_without_anno (bool): Whether to skip the image if no ann left - after aug - """ - - def __init__(self, - transforms, - bbox_params=None, - keymap=None, - update_pad_shape=False, - skip_img_without_anno=False): - if Compose is None: - raise RuntimeError('albumentations is not installed') - - # Args will be modified later, copying it will be safer - transforms = copy.deepcopy(transforms) - if bbox_params is not None: - bbox_params = copy.deepcopy(bbox_params) - if keymap is not None: - keymap = copy.deepcopy(keymap) - self.transforms = transforms - self.filter_lost_elements = False - self.update_pad_shape = update_pad_shape - self.skip_img_without_anno = skip_img_without_anno - - # A simple workaround to remove masks without boxes - if (isinstance(bbox_params, dict) and 'label_fields' in bbox_params - and 'filter_lost_elements' in bbox_params): - self.filter_lost_elements = True - self.origin_label_fields = bbox_params['label_fields'] - bbox_params['label_fields'] = ['idx_mapper'] - del bbox_params['filter_lost_elements'] - - self.bbox_params = ( - self.albu_builder(bbox_params) if bbox_params else None) - self.aug = Compose([self.albu_builder(t) for t in self.transforms], - bbox_params=self.bbox_params) - - if not keymap: - self.keymap_to_albu = { - 'img': 'image', - 'gt_masks': 'masks', - 'gt_bboxes': 'bboxes' - } - else: - self.keymap_to_albu = keymap - self.keymap_back = {v: k for k, v in self.keymap_to_albu.items()} - - def albu_builder(self, cfg): - """Import a module from albumentations. - - It inherits some of :func:`build_from_cfg` logic. - - Args: - cfg (dict): Config dict. It should at least contain the key "type". - - Returns: - obj: The constructed object. 
- """ - - assert isinstance(cfg, dict) and 'type' in cfg - args = cfg.copy() - - obj_type = args.pop('type') - if mmcv.is_str(obj_type): - if albumentations is None: - raise RuntimeError('albumentations is not installed') - obj_cls = getattr(albumentations, obj_type) - elif inspect.isclass(obj_type): - obj_cls = obj_type - else: - raise TypeError( - f'type must be a str or valid type, but got {type(obj_type)}') - - if 'transforms' in args: - args['transforms'] = [ - self.albu_builder(transform) - for transform in args['transforms'] - ] - - return obj_cls(**args) - - @staticmethod - def mapper(d, keymap): - """Dictionary mapper. Renames keys according to keymap provided. - - Args: - d (dict): old dict - keymap (dict): {'old_key':'new_key'} - Returns: - dict: new dict. - """ - - updated_dict = {} - for k, v in zip(d.keys(), d.values()): - new_k = keymap.get(k, k) - updated_dict[new_k] = d[k] - return updated_dict - - def __call__(self, results): - # dict to albumentations format - results = self.mapper(results, self.keymap_to_albu) - # TODO: add bbox_fields - if 'bboxes' in results: - # to list of boxes - if isinstance(results['bboxes'], np.ndarray): - results['bboxes'] = [x for x in results['bboxes']] - # add pseudo-field for filtration - if self.filter_lost_elements: - results['idx_mapper'] = np.arange(len(results['bboxes'])) - - # TODO: Support mask structure in albu - if 'masks' in results: - if isinstance(results['masks'], PolygonMasks): - raise NotImplementedError( - 'Albu only supports BitMap masks now') - ori_masks = results['masks'] - if albumentations.__version__ < '0.5': - results['masks'] = results['masks'].masks - else: - results['masks'] = [mask for mask in results['masks'].masks] - - results = self.aug(**results) - - if 'bboxes' in results: - if isinstance(results['bboxes'], list): - results['bboxes'] = np.array( - results['bboxes'], dtype=np.float32) - results['bboxes'] = results['bboxes'].reshape(-1, 4) - - # filter label_fields - if self.filter_lost_elements: - - for label in self.origin_label_fields: - results[label] = np.array( - [results[label][i] for i in results['idx_mapper']]) - if 'masks' in results: - results['masks'] = np.array( - [results['masks'][i] for i in results['idx_mapper']]) - results['masks'] = ori_masks.__class__( - results['masks'], results['image'].shape[0], - results['image'].shape[1]) - - if (not len(results['idx_mapper']) - and self.skip_img_without_anno): - return None - - if 'gt_labels' in results: - if isinstance(results['gt_labels'], list): - results['gt_labels'] = np.array(results['gt_labels']) - results['gt_labels'] = results['gt_labels'].astype(np.int64) - - # back to the original format - results = self.mapper(results, self.keymap_back) - - # update final shape - if self.update_pad_shape: - results['pad_shape'] = results['img'].shape - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ + f'(transforms={self.transforms})' - return repr_str - - -@PIPELINES.register_module() -class RandomCenterCropPad: - """Random center crop and random around padding for CornerNet. - - This operation generates randomly cropped image from the original image and - pads it simultaneously. Different from :class:`RandomCrop`, the output - shape may not equal to ``crop_size`` strictly. We choose a random value - from ``ratios`` and the output shape could be larger or smaller than - ``crop_size``. The padding operation is also different from :class:`Pad`, - here we use around padding instead of right-bottom padding. 
- - The relation between output image (padding image) and original image: - - .. code:: text - - output image - - +----------------------------+ - | padded area | - +------|----------------------------|----------+ - | | cropped area | | - | | +---------------+ | | - | | | . center | | | original image - | | | range | | | - | | +---------------+ | | - +------|----------------------------|----------+ - | padded area | - +----------------------------+ - - There are 5 main areas in the figure: - - - output image: output image of this operation, also called padding - image in following instruction. - - original image: input image of this operation. - - padded area: non-intersect area of output image and original image. - - cropped area: the overlap of output image and original image. - - center range: a smaller area where random center chosen from. - center range is computed by ``border`` and original image's shape - to avoid our random center is too close to original image's border. - - Also this operation act differently in train and test mode, the summary - pipeline is listed below. - - Train pipeline: - - 1. Choose a ``random_ratio`` from ``ratios``, the shape of padding image - will be ``random_ratio * crop_size``. - 2. Choose a ``random_center`` in center range. - 3. Generate padding image with center matches the ``random_center``. - 4. Initialize the padding image with pixel value equals to ``mean``. - 5. Copy the cropped area to padding image. - 6. Refine annotations. - - Test pipeline: - - 1. Compute output shape according to ``test_pad_mode``. - 2. Generate padding image with center matches the original image - center. - 3. Initialize the padding image with pixel value equals to ``mean``. - 4. Copy the ``cropped area`` to padding image. - - Args: - crop_size (tuple | None): expected size after crop, final size will - computed according to ratio. Requires (h, w) in train mode, and - None in test mode. - ratios (tuple): random select a ratio from tuple and crop image to - (crop_size[0] * ratio) * (crop_size[1] * ratio). - Only available in train mode. - border (int): max distance from center select area to image border. - Only available in train mode. - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB. - test_mode (bool): whether involve random variables in transform. - In train mode, crop_size is fixed, center coords and ratio is - random selected from predefined lists. In test mode, crop_size - is image's original shape, center coords and ratio is fixed. - test_pad_mode (tuple): padding method and padding shape value, only - available in test mode. Default is using 'logical_or' with - 127 as padding shape value. - - - 'logical_or': final_shape = input_shape | padding_shape_value - - 'size_divisor': final_shape = int( - ceil(input_shape / padding_shape_value) * padding_shape_value) - test_pad_add_pix (int): Extra padding pixel in test mode. Default 0. - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. 
- """ - - def __init__(self, - crop_size=None, - ratios=(0.9, 1.0, 1.1), - border=128, - mean=None, - std=None, - to_rgb=None, - test_mode=False, - test_pad_mode=('logical_or', 127), - test_pad_add_pix=0, - bbox_clip_border=True): - if test_mode: - assert crop_size is None, 'crop_size must be None in test mode' - assert ratios is None, 'ratios must be None in test mode' - assert border is None, 'border must be None in test mode' - assert isinstance(test_pad_mode, (list, tuple)) - assert test_pad_mode[0] in ['logical_or', 'size_divisor'] - else: - assert isinstance(crop_size, (list, tuple)) - assert crop_size[0] > 0 and crop_size[1] > 0, ( - 'crop_size must > 0 in train mode') - assert isinstance(ratios, (list, tuple)) - assert test_pad_mode is None, ( - 'test_pad_mode must be None in train mode') - - self.crop_size = crop_size - self.ratios = ratios - self.border = border - # We do not set default value to mean, std and to_rgb because these - # hyper-parameters are easy to forget but could affect the performance. - # Please use the same setting as Normalize for performance assurance. - assert mean is not None and std is not None and to_rgb is not None - self.to_rgb = to_rgb - self.input_mean = mean - self.input_std = std - if to_rgb: - self.mean = mean[::-1] - self.std = std[::-1] - else: - self.mean = mean - self.std = std - self.test_mode = test_mode - self.test_pad_mode = test_pad_mode - self.test_pad_add_pix = test_pad_add_pix - self.bbox_clip_border = bbox_clip_border - - def _get_border(self, border, size): - """Get final border for the target size. - - This function generates a ``final_border`` according to image's shape. - The area between ``final_border`` and ``size - final_border`` is the - ``center range``. We randomly choose center from the ``center range`` - to avoid our random center is too close to original image's border. - Also ``center range`` should be larger than 0. - - Args: - border (int): The initial border, default is 128. - size (int): The width or height of original image. - Returns: - int: The final border. - """ - k = 2 * border / size - i = pow(2, np.ceil(np.log2(np.ceil(k))) + (k == int(k))) - return border // i - - def _filter_boxes(self, patch, boxes): - """Check whether the center of each box is in the patch. - - Args: - patch (list[int]): The cropped area, [left, top, right, bottom]. - boxes (numpy array, (N x 4)): Ground truth boxes. - - Returns: - mask (numpy array, (N,)): Each box is inside or outside the patch. - """ - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = (center[:, 0] > patch[0]) * (center[:, 1] > patch[1]) * ( - center[:, 0] < patch[2]) * ( - center[:, 1] < patch[3]) - return mask - - def _crop_image_and_paste(self, image, center, size): - """Crop image with a given center and size, then paste the cropped - image to a blank image with two centers align. - - This function is equivalent to generating a blank image with ``size`` - as its shape. Then cover it on the original image with two centers ( - the center of blank image and the random center of original image) - aligned. The overlap area is paste from the original image and the - outside area is filled with ``mean pixel``. - - Args: - image (np array, H x W x C): Original image. - center (list[int]): Target crop center coord. - size (list[int]): Target crop size. [target_h, target_w] - - Returns: - cropped_img (np array, target_h x target_w x C): Cropped image. 
- border (np array, 4): The distance of four border of - ``cropped_img`` to the original image area, [top, bottom, - left, right] - patch (list[int]): The cropped area, [left, top, right, bottom]. - """ - center_y, center_x = center - target_h, target_w = size - img_h, img_w, img_c = image.shape - - x0 = max(0, center_x - target_w // 2) - x1 = min(center_x + target_w // 2, img_w) - y0 = max(0, center_y - target_h // 2) - y1 = min(center_y + target_h // 2, img_h) - patch = np.array((int(x0), int(y0), int(x1), int(y1))) - - left, right = center_x - x0, x1 - center_x - top, bottom = center_y - y0, y1 - center_y - - cropped_center_y, cropped_center_x = target_h // 2, target_w // 2 - cropped_img = np.zeros((target_h, target_w, img_c), dtype=image.dtype) - for i in range(img_c): - cropped_img[:, :, i] += self.mean[i] - y_slice = slice(cropped_center_y - top, cropped_center_y + bottom) - x_slice = slice(cropped_center_x - left, cropped_center_x + right) - cropped_img[y_slice, x_slice, :] = image[y0:y1, x0:x1, :] - - border = np.array([ - cropped_center_y - top, cropped_center_y + bottom, - cropped_center_x - left, cropped_center_x + right - ], - dtype=np.float32) - - return cropped_img, border, patch - - def _train_aug(self, results): - """Random crop and around padding the original image. - - Args: - results (dict): Image infomations in the augment pipeline. - - Returns: - results (dict): The updated dict. - """ - img = results['img'] - h, w, c = img.shape - boxes = results['gt_bboxes'] - while True: - scale = random.choice(self.ratios) - new_h = int(self.crop_size[0] * scale) - new_w = int(self.crop_size[1] * scale) - h_border = self._get_border(self.border, h) - w_border = self._get_border(self.border, w) - - for i in range(50): - center_x = random.randint(low=w_border, high=w - w_border) - center_y = random.randint(low=h_border, high=h - h_border) - - cropped_img, border, patch = self._crop_image_and_paste( - img, [center_y, center_x], [new_h, new_w]) - - mask = self._filter_boxes(patch, boxes) - # if image do not have valid bbox, any crop patch is valid. - if not mask.any() and len(boxes) > 0: - continue - - results['img'] = cropped_img - results['img_shape'] = cropped_img.shape - results['pad_shape'] = cropped_img.shape - - x0, y0, x1, y1 = patch - - left_w, top_h = center_x - x0, center_y - y0 - cropped_center_x, cropped_center_y = new_w // 2, new_h // 2 - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - mask = self._filter_boxes(patch, results[key]) - bboxes = results[key][mask] - bboxes[:, 0:4:2] += cropped_center_x - left_w - x0 - bboxes[:, 1:4:2] += cropped_center_y - top_h - y0 - if self.bbox_clip_border: - bboxes[:, 0:4:2] = np.clip(bboxes[:, 0:4:2], 0, new_w) - bboxes[:, 1:4:2] = np.clip(bboxes[:, 1:4:2], 0, new_h) - keep = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - bboxes = bboxes[keep] - results[key] = bboxes - if key in ['gt_bboxes']: - if 'gt_labels' in results: - labels = results['gt_labels'][mask] - labels = labels[keep] - results['gt_labels'] = labels - if 'gt_masks' in results: - raise NotImplementedError( - 'RandomCenterCropPad only supports bbox.') - - # crop semantic seg - for key in results.get('seg_fields', []): - raise NotImplementedError( - 'RandomCenterCropPad only supports bbox.') - return results - - def _test_aug(self, results): - """Around padding the original image without cropping. - - The padding mode and value are from ``test_pad_mode``. 
- - Args: - results (dict): Image infomations in the augment pipeline. - - Returns: - results (dict): The updated dict. - """ - img = results['img'] - h, w, c = img.shape - results['img_shape'] = img.shape - if self.test_pad_mode[0] in ['logical_or']: - # self.test_pad_add_pix is only used for centernet - target_h = (h | self.test_pad_mode[1]) + self.test_pad_add_pix - target_w = (w | self.test_pad_mode[1]) + self.test_pad_add_pix - elif self.test_pad_mode[0] in ['size_divisor']: - divisor = self.test_pad_mode[1] - target_h = int(np.ceil(h / divisor)) * divisor - target_w = int(np.ceil(w / divisor)) * divisor - else: - raise NotImplementedError( - 'RandomCenterCropPad only support two testing pad mode:' - 'logical-or and size_divisor.') - - cropped_img, border, _ = self._crop_image_and_paste( - img, [h // 2, w // 2], [target_h, target_w]) - results['img'] = cropped_img - results['pad_shape'] = cropped_img.shape - results['border'] = border - return results - - def __call__(self, results): - img = results['img'] - assert img.dtype == np.float32, ( - 'RandomCenterCropPad needs the input image of dtype np.float32,' - ' please set "to_float32=True" in "LoadImageFromFile" pipeline') - h, w, c = img.shape - assert c == len(self.mean) - if self.test_mode: - return self._test_aug(results) - else: - return self._train_aug(results) - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(crop_size={self.crop_size}, ' - repr_str += f'ratios={self.ratios}, ' - repr_str += f'border={self.border}, ' - repr_str += f'mean={self.input_mean}, ' - repr_str += f'std={self.input_std}, ' - repr_str += f'to_rgb={self.to_rgb}, ' - repr_str += f'test_mode={self.test_mode}, ' - repr_str += f'test_pad_mode={self.test_pad_mode}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class CutOut: - """CutOut operation. - - Randomly drop some regions of image used in - `Cutout `_. - - Args: - n_holes (int | tuple[int, int]): Number of regions to be dropped. - If it is given as a list, number of holes will be randomly - selected from the closed interval [`n_holes[0]`, `n_holes[1]`]. - cutout_shape (tuple[int, int] | list[tuple[int, int]]): The candidate - shape of dropped regions. It can be `tuple[int, int]` to use a - fixed cutout shape, or `list[tuple[int, int]]` to randomly choose - shape from the list. - cutout_ratio (tuple[float, float] | list[tuple[float, float]]): The - candidate ratio of dropped regions. It can be `tuple[float, float]` - to use a fixed ratio or `list[tuple[float, float]]` to randomly - choose ratio from the list. Please note that `cutout_shape` - and `cutout_ratio` cannot be both given at the same time. - fill_in (tuple[float, float, float] | tuple[int, int, int]): The value - of pixel to fill in the dropped regions. Default: (0, 0, 0). - """ - - def __init__(self, - n_holes, - cutout_shape=None, - cutout_ratio=None, - fill_in=(0, 0, 0)): - - assert (cutout_shape is None) ^ (cutout_ratio is None), \ - 'Either cutout_shape or cutout_ratio should be specified.' 
- assert (isinstance(cutout_shape, (list, tuple)) - or isinstance(cutout_ratio, (list, tuple))) - if isinstance(n_holes, tuple): - assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1] - else: - n_holes = (n_holes, n_holes) - self.n_holes = n_holes - self.fill_in = fill_in - self.with_ratio = cutout_ratio is not None - self.candidates = cutout_ratio if self.with_ratio else cutout_shape - if not isinstance(self.candidates, list): - self.candidates = [self.candidates] - - def __call__(self, results): - """Call function to drop some regions of image.""" - h, w, c = results['img'].shape - n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1) - for _ in range(n_holes): - x1 = np.random.randint(0, w) - y1 = np.random.randint(0, h) - index = np.random.randint(0, len(self.candidates)) - if not self.with_ratio: - cutout_w, cutout_h = self.candidates[index] - else: - cutout_w = int(self.candidates[index][0] * w) - cutout_h = int(self.candidates[index][1] * h) - - x2 = np.clip(x1 + cutout_w, 0, w) - y2 = np.clip(y1 + cutout_h, 0, h) - results['img'][y1:y2, x1:x2, :] = self.fill_in - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(n_holes={self.n_holes}, ' - repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio - else f'cutout_shape={self.candidates}, ') - repr_str += f'fill_in={self.fill_in})' - return repr_str - - -@PIPELINES.register_module() -class Mosaic: - """Mosaic augmentation. - - Given 4 images, mosaic transform combines them into - one output image. The output image is composed of the parts from each sub- - image. - - .. code:: text - - mosaic transform - center_x - +------------------------------+ - | pad | pad | - | +-----------+ | - | | | | - | | image1 |--------+ | - | | | | | - | | | image2 | | - center_y |----+-------------+-----------| - | | cropped | | - |pad | image3 | image4 | - | | | | - +----|-------------+-----------+ - | | - +-------------+ - - The mosaic transform steps are as follows: - - 1. Choose the mosaic center as the intersections of 4 images - 2. Get the left top image according to the index, and randomly - sample another 3 images from the custom dataset. - 3. Sub image will be cropped if image is larger than mosaic patch - - Args: - img_scale (Sequence[int]): Image size after mosaic pipeline of single - image. The shape order should be (height, width). - Default to (640, 640). - center_ratio_range (Sequence[float]): Center ratio range of mosaic - output. Default to (0.5, 1.5). - min_bbox_size (int | float): The minimum pixel for filtering - invalid bboxes after the mosaic pipeline. Default to 0. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - skip_filter (bool): Whether to skip filtering rules. If it - is True, the filter rule will not be applied, and the - `min_bbox_size` is invalid. Default to True. - pad_val (int): Pad value. Default to 114. - prob (float): Probability of applying this transformation. - Default to 1.0. - """ - - def __init__(self, - img_scale=(640, 640), - center_ratio_range=(0.5, 1.5), - min_bbox_size=0, - bbox_clip_border=True, - skip_filter=True, - pad_val=114, - prob=1.0): - assert isinstance(img_scale, tuple) - assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. '\ - f'got {prob}.' 
- - log_img_scale(img_scale, skip_square=True) - self.img_scale = img_scale - self.center_ratio_range = center_ratio_range - self.min_bbox_size = min_bbox_size - self.bbox_clip_border = bbox_clip_border - self.skip_filter = skip_filter - self.pad_val = pad_val - self.prob = prob - - def __call__(self, results): - """Call function to make a mosaic of image. - - Args: - results (dict): Result dict. - - Returns: - dict: Result dict with mosaic transformed. - """ - - if random.uniform(0, 1) > self.prob: - return results - - results = self._mosaic_transform(results) - return results - - def get_indexes(self, dataset): - """Call function to collect indexes. - - Args: - dataset (:obj:`MultiImageMixDataset`): The dataset. - - Returns: - list: indexes. - """ - - indexes = [random.randint(0, len(dataset)) for _ in range(3)] - return indexes - - def _mosaic_transform(self, results): - """Mosaic transform function. - - Args: - results (dict): Result dict. - - Returns: - dict: Updated result dict. - """ - - assert 'mix_results' in results - mosaic_labels = [] - mosaic_bboxes = [] - if len(results['img'].shape) == 3: - mosaic_img = np.full( - (int(self.img_scale[0] * 2), int(self.img_scale[1] * 2), 3), - self.pad_val, - dtype=results['img'].dtype) - else: - mosaic_img = np.full( - (int(self.img_scale[0] * 2), int(self.img_scale[1] * 2)), - self.pad_val, - dtype=results['img'].dtype) - - # mosaic center x, y - center_x = int( - random.uniform(*self.center_ratio_range) * self.img_scale[1]) - center_y = int( - random.uniform(*self.center_ratio_range) * self.img_scale[0]) - center_position = (center_x, center_y) - - loc_strs = ('top_left', 'top_right', 'bottom_left', 'bottom_right') - for i, loc in enumerate(loc_strs): - if loc == 'top_left': - results_patch = copy.deepcopy(results) - else: - results_patch = copy.deepcopy(results['mix_results'][i - 1]) - - img_i = results_patch['img'] - h_i, w_i = img_i.shape[:2] - # keep_ratio resize - scale_ratio_i = min(self.img_scale[0] / h_i, - self.img_scale[1] / w_i) - img_i = mmcv.imresize( - img_i, (int(w_i * scale_ratio_i), int(h_i * scale_ratio_i))) - - # compute the combine parameters - paste_coord, crop_coord = self._mosaic_combine( - loc, center_position, img_i.shape[:2][::-1]) - x1_p, y1_p, x2_p, y2_p = paste_coord - x1_c, y1_c, x2_c, y2_c = crop_coord - - # crop and paste image - mosaic_img[y1_p:y2_p, x1_p:x2_p] = img_i[y1_c:y2_c, x1_c:x2_c] - - # adjust coordinate - gt_bboxes_i = results_patch['gt_bboxes'] - gt_labels_i = results_patch['gt_labels'] - - if gt_bboxes_i.shape[0] > 0: - padw = x1_p - x1_c - padh = y1_p - y1_c - gt_bboxes_i[:, 0::2] = \ - scale_ratio_i * gt_bboxes_i[:, 0::2] + padw - gt_bboxes_i[:, 1::2] = \ - scale_ratio_i * gt_bboxes_i[:, 1::2] + padh - - mosaic_bboxes.append(gt_bboxes_i) - mosaic_labels.append(gt_labels_i) - - if len(mosaic_labels) > 0: - mosaic_bboxes = np.concatenate(mosaic_bboxes, 0) - mosaic_labels = np.concatenate(mosaic_labels, 0) - - if self.bbox_clip_border: - mosaic_bboxes[:, 0::2] = np.clip(mosaic_bboxes[:, 0::2], 0, - 2 * self.img_scale[1]) - mosaic_bboxes[:, 1::2] = np.clip(mosaic_bboxes[:, 1::2], 0, - 2 * self.img_scale[0]) - - if not self.skip_filter: - mosaic_bboxes, mosaic_labels = \ - self._filter_box_candidates(mosaic_bboxes, mosaic_labels) - - # remove outside bboxes - inside_inds = find_inside_bboxes(mosaic_bboxes, 2 * self.img_scale[0], - 2 * self.img_scale[1]) - mosaic_bboxes = mosaic_bboxes[inside_inds] - mosaic_labels = mosaic_labels[inside_inds] - - results['img'] = mosaic_img - results['img_shape'] = 
mosaic_img.shape - results['gt_bboxes'] = mosaic_bboxes - results['gt_labels'] = mosaic_labels - - return results - - def _mosaic_combine(self, loc, center_position_xy, img_shape_wh): - """Calculate global coordinate of mosaic image and local coordinate of - cropped sub-image. - - Args: - loc (str): Index for the sub-image, loc in ('top_left', - 'top_right', 'bottom_left', 'bottom_right'). - center_position_xy (Sequence[float]): Mixing center for 4 images, - (x, y). - img_shape_wh (Sequence[int]): Width and height of sub-image - - Returns: - tuple[tuple[float]]: Corresponding coordinate of pasting and - cropping - - paste_coord (tuple): paste corner coordinate in mosaic image. - - crop_coord (tuple): crop corner coordinate in mosaic image. - """ - assert loc in ('top_left', 'top_right', 'bottom_left', 'bottom_right') - if loc == 'top_left': - # index0 to top left part of image - x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \ - max(center_position_xy[1] - img_shape_wh[1], 0), \ - center_position_xy[0], \ - center_position_xy[1] - crop_coord = img_shape_wh[0] - (x2 - x1), img_shape_wh[1] - ( - y2 - y1), img_shape_wh[0], img_shape_wh[1] - - elif loc == 'top_right': - # index1 to top right part of image - x1, y1, x2, y2 = center_position_xy[0], \ - max(center_position_xy[1] - img_shape_wh[1], 0), \ - min(center_position_xy[0] + img_shape_wh[0], - self.img_scale[1] * 2), \ - center_position_xy[1] - crop_coord = 0, img_shape_wh[1] - (y2 - y1), min( - img_shape_wh[0], x2 - x1), img_shape_wh[1] - - elif loc == 'bottom_left': - # index2 to bottom left part of image - x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \ - center_position_xy[1], \ - center_position_xy[0], \ - min(self.img_scale[0] * 2, center_position_xy[1] + - img_shape_wh[1]) - crop_coord = img_shape_wh[0] - (x2 - x1), 0, img_shape_wh[0], min( - y2 - y1, img_shape_wh[1]) - - else: - # index3 to bottom right part of image - x1, y1, x2, y2 = center_position_xy[0], \ - center_position_xy[1], \ - min(center_position_xy[0] + img_shape_wh[0], - self.img_scale[1] * 2), \ - min(self.img_scale[0] * 2, center_position_xy[1] + - img_shape_wh[1]) - crop_coord = 0, 0, min(img_shape_wh[0], - x2 - x1), min(y2 - y1, img_shape_wh[1]) - - paste_coord = x1, y1, x2, y2 - return paste_coord, crop_coord - - def _filter_box_candidates(self, bboxes, labels): - """Filter out bboxes too small after Mosaic.""" - bbox_w = bboxes[:, 2] - bboxes[:, 0] - bbox_h = bboxes[:, 3] - bboxes[:, 1] - valid_inds = (bbox_w > self.min_bbox_size) & \ - (bbox_h > self.min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - return bboxes[valid_inds], labels[valid_inds] - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'img_scale={self.img_scale}, ' - repr_str += f'center_ratio_range={self.center_ratio_range}, ' - repr_str += f'pad_val={self.pad_val}, ' - repr_str += f'min_bbox_size={self.min_bbox_size}, ' - repr_str += f'skip_filter={self.skip_filter})' - return repr_str - - -@PIPELINES.register_module() -class MixUp: - """MixUp data augmentation. - - .. code:: text - - mixup transform - +------------------------------+ - | mixup image | | - | +--------|--------+ | - | | | | | - |---------------+ | | - | | | | - | | image | | - | | | | - | | | | - | |-----------------+ | - | pad | - +------------------------------+ - - The mixup transform steps are as follows: - - 1. Another random image is picked by dataset and embedded in - the top left patch(after padding and resizing) - 2. 
The target of mixup transform is the weighted average of mixup - image and origin image. - - Args: - img_scale (Sequence[int]): Image output size after mixup pipeline. - The shape order should be (height, width). Default: (640, 640). - ratio_range (Sequence[float]): Scale ratio of mixup image. - Default: (0.5, 1.5). - flip_ratio (float): Horizontal flip ratio of mixup image. - Default: 0.5. - pad_val (int): Pad value. Default: 114. - max_iters (int): The maximum number of iterations. If the number of - iterations is greater than `max_iters`, but gt_bbox is still - empty, then the iteration is terminated. Default: 15. - min_bbox_size (float): Width and height threshold to filter bboxes. - If the height or width of a box is smaller than this value, it - will be removed. Default: 5. - min_area_ratio (float): Threshold of area ratio between - original bboxes and wrapped bboxes. If smaller than this value, - the box will be removed. Default: 0.2. - max_aspect_ratio (float): Aspect ratio of width and height - threshold to filter bboxes. If max(h/w, w/h) larger than this - value, the box will be removed. Default: 20. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - skip_filter (bool): Whether to skip filtering rules. If it - is True, the filter rule will not be applied, and the - `min_bbox_size` and `min_area_ratio` and `max_aspect_ratio` - is invalid. Default to True. - """ - - def __init__(self, - img_scale=(640, 640), - ratio_range=(0.5, 1.5), - flip_ratio=0.5, - pad_val=114, - max_iters=15, - min_bbox_size=5, - min_area_ratio=0.2, - max_aspect_ratio=20, - bbox_clip_border=True, - skip_filter=True): - assert isinstance(img_scale, tuple) - log_img_scale(img_scale, skip_square=True) - self.dynamic_scale = img_scale - self.ratio_range = ratio_range - self.flip_ratio = flip_ratio - self.pad_val = pad_val - self.max_iters = max_iters - self.min_bbox_size = min_bbox_size - self.min_area_ratio = min_area_ratio - self.max_aspect_ratio = max_aspect_ratio - self.bbox_clip_border = bbox_clip_border - self.skip_filter = skip_filter - - def __call__(self, results): - """Call function to make a mixup of image. - - Args: - results (dict): Result dict. - - Returns: - dict: Result dict with mixup transformed. - """ - - results = self._mixup_transform(results) - return results - - def get_indexes(self, dataset): - """Call function to collect indexes. - - Args: - dataset (:obj:`MultiImageMixDataset`): The dataset. - - Returns: - list: indexes. - """ - - for i in range(self.max_iters): - index = random.randint(0, len(dataset)) - gt_bboxes_i = dataset.get_ann_info(index)['bboxes'] - if len(gt_bboxes_i) != 0: - break - - return index - - def _mixup_transform(self, results): - """MixUp transform function. - - Args: - results (dict): Result dict. - - Returns: - dict: Updated result dict. - """ - - assert 'mix_results' in results - assert len( - results['mix_results']) == 1, 'MixUp only support 2 images now !' 
- - if results['mix_results'][0]['gt_bboxes'].shape[0] == 0: - # empty bbox - return results - - retrieve_results = results['mix_results'][0] - retrieve_img = retrieve_results['img'] - - jit_factor = random.uniform(*self.ratio_range) - is_filp = random.uniform(0, 1) > self.flip_ratio - - if len(retrieve_img.shape) == 3: - out_img = np.ones( - (self.dynamic_scale[0], self.dynamic_scale[1], 3), - dtype=retrieve_img.dtype) * self.pad_val - else: - out_img = np.ones( - self.dynamic_scale, dtype=retrieve_img.dtype) * self.pad_val - - # 1. keep_ratio resize - scale_ratio = min(self.dynamic_scale[0] / retrieve_img.shape[0], - self.dynamic_scale[1] / retrieve_img.shape[1]) - retrieve_img = mmcv.imresize( - retrieve_img, (int(retrieve_img.shape[1] * scale_ratio), - int(retrieve_img.shape[0] * scale_ratio))) - - # 2. paste - out_img[:retrieve_img.shape[0], :retrieve_img.shape[1]] = retrieve_img - - # 3. scale jit - scale_ratio *= jit_factor - out_img = mmcv.imresize(out_img, (int(out_img.shape[1] * jit_factor), - int(out_img.shape[0] * jit_factor))) - - # 4. flip - if is_filp: - out_img = out_img[:, ::-1, :] - - # 5. random crop - ori_img = results['img'] - origin_h, origin_w = out_img.shape[:2] - target_h, target_w = ori_img.shape[:2] - padded_img = np.zeros( - (max(origin_h, target_h), max(origin_w, - target_w), 3)).astype(np.uint8) - padded_img[:origin_h, :origin_w] = out_img - - x_offset, y_offset = 0, 0 - if padded_img.shape[0] > target_h: - y_offset = random.randint(0, padded_img.shape[0] - target_h) - if padded_img.shape[1] > target_w: - x_offset = random.randint(0, padded_img.shape[1] - target_w) - padded_cropped_img = padded_img[y_offset:y_offset + target_h, - x_offset:x_offset + target_w] - - # 6. adjust bbox - retrieve_gt_bboxes = retrieve_results['gt_bboxes'] - retrieve_gt_bboxes[:, 0::2] = retrieve_gt_bboxes[:, 0::2] * scale_ratio - retrieve_gt_bboxes[:, 1::2] = retrieve_gt_bboxes[:, 1::2] * scale_ratio - if self.bbox_clip_border: - retrieve_gt_bboxes[:, 0::2] = np.clip(retrieve_gt_bboxes[:, 0::2], - 0, origin_w) - retrieve_gt_bboxes[:, 1::2] = np.clip(retrieve_gt_bboxes[:, 1::2], - 0, origin_h) - - if is_filp: - retrieve_gt_bboxes[:, 0::2] = ( - origin_w - retrieve_gt_bboxes[:, 0::2][:, ::-1]) - - # 7. filter - cp_retrieve_gt_bboxes = retrieve_gt_bboxes.copy() - cp_retrieve_gt_bboxes[:, 0::2] = \ - cp_retrieve_gt_bboxes[:, 0::2] - x_offset - cp_retrieve_gt_bboxes[:, 1::2] = \ - cp_retrieve_gt_bboxes[:, 1::2] - y_offset - if self.bbox_clip_border: - cp_retrieve_gt_bboxes[:, 0::2] = np.clip( - cp_retrieve_gt_bboxes[:, 0::2], 0, target_w) - cp_retrieve_gt_bboxes[:, 1::2] = np.clip( - cp_retrieve_gt_bboxes[:, 1::2], 0, target_h) - - # 8. 
mix up - ori_img = ori_img.astype(np.float32) - mixup_img = 0.5 * ori_img + 0.5 * padded_cropped_img.astype(np.float32) - - retrieve_gt_labels = retrieve_results['gt_labels'] - if not self.skip_filter: - keep_list = self._filter_box_candidates(retrieve_gt_bboxes.T, - cp_retrieve_gt_bboxes.T) - - retrieve_gt_labels = retrieve_gt_labels[keep_list] - cp_retrieve_gt_bboxes = cp_retrieve_gt_bboxes[keep_list] - - mixup_gt_bboxes = np.concatenate( - (results['gt_bboxes'], cp_retrieve_gt_bboxes), axis=0) - mixup_gt_labels = np.concatenate( - (results['gt_labels'], retrieve_gt_labels), axis=0) - - # remove outside bbox - inside_inds = find_inside_bboxes(mixup_gt_bboxes, target_h, target_w) - mixup_gt_bboxes = mixup_gt_bboxes[inside_inds] - mixup_gt_labels = mixup_gt_labels[inside_inds] - - results['img'] = mixup_img.astype(np.uint8) - results['img_shape'] = mixup_img.shape - results['gt_bboxes'] = mixup_gt_bboxes - results['gt_labels'] = mixup_gt_labels - - return results - - def _filter_box_candidates(self, bbox1, bbox2): - """Compute candidate boxes which include following 5 things: - - bbox1 before augment, bbox2 after augment, min_bbox_size (pixels), - min_area_ratio, max_aspect_ratio. - """ - - w1, h1 = bbox1[2] - bbox1[0], bbox1[3] - bbox1[1] - w2, h2 = bbox2[2] - bbox2[0], bbox2[3] - bbox2[1] - ar = np.maximum(w2 / (h2 + 1e-16), h2 / (w2 + 1e-16)) - return ((w2 > self.min_bbox_size) - & (h2 > self.min_bbox_size) - & (w2 * h2 / (w1 * h1 + 1e-16) > self.min_area_ratio) - & (ar < self.max_aspect_ratio)) - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'dynamic_scale={self.dynamic_scale}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'flip_ratio={self.flip_ratio}, ' - repr_str += f'pad_val={self.pad_val}, ' - repr_str += f'max_iters={self.max_iters}, ' - repr_str += f'min_bbox_size={self.min_bbox_size}, ' - repr_str += f'min_area_ratio={self.min_area_ratio}, ' - repr_str += f'max_aspect_ratio={self.max_aspect_ratio}, ' - repr_str += f'skip_filter={self.skip_filter})' - return repr_str - - -@PIPELINES.register_module() -class RandomAffine: - """Random affine transform data augmentation. - - This operation randomly generates affine transform matrix which including - rotation, translation, shear and scaling transforms. - - Args: - max_rotate_degree (float): Maximum degrees of rotation transform. - Default: 10. - max_translate_ratio (float): Maximum ratio of translation. - Default: 0.1. - scaling_ratio_range (tuple[float]): Min and max ratio of - scaling transform. Default: (0.5, 1.5). - max_shear_degree (float): Maximum degrees of shear - transform. Default: 2. - border (tuple[int]): Distance from height and width sides of input - image to adjust output shape. Only used in mosaic dataset. - Default: (0, 0). - border_val (tuple[int]): Border padding values of 3 channels. - Default: (114, 114, 114). - min_bbox_size (float): Width and height threshold to filter bboxes. - If the height or width of a box is smaller than this value, it - will be removed. Default: 2. - min_area_ratio (float): Threshold of area ratio between - original bboxes and wrapped bboxes. If smaller than this value, - the box will be removed. Default: 0.2. - max_aspect_ratio (float): Aspect ratio of width and height - threshold to filter bboxes. If max(h/w, w/h) larger than this - value, the box will be removed. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. 
In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - skip_filter (bool): Whether to skip filtering rules. If it - is True, the filter rule will not be applied, and the - `min_bbox_size` and `min_area_ratio` and `max_aspect_ratio` - is invalid. Default to True. - """ - - def __init__(self, - max_rotate_degree=10.0, - max_translate_ratio=0.1, - scaling_ratio_range=(0.5, 1.5), - max_shear_degree=2.0, - border=(0, 0), - border_val=(114, 114, 114), - min_bbox_size=2, - min_area_ratio=0.2, - max_aspect_ratio=20, - bbox_clip_border=True, - skip_filter=True): - assert 0 <= max_translate_ratio <= 1 - assert scaling_ratio_range[0] <= scaling_ratio_range[1] - assert scaling_ratio_range[0] > 0 - self.max_rotate_degree = max_rotate_degree - self.max_translate_ratio = max_translate_ratio - self.scaling_ratio_range = scaling_ratio_range - self.max_shear_degree = max_shear_degree - self.border = border - self.border_val = border_val - self.min_bbox_size = min_bbox_size - self.min_area_ratio = min_area_ratio - self.max_aspect_ratio = max_aspect_ratio - self.bbox_clip_border = bbox_clip_border - self.skip_filter = skip_filter - - def __call__(self, results): - img = results['img'] - height = img.shape[0] + self.border[0] * 2 - width = img.shape[1] + self.border[1] * 2 - - # Rotation - rotation_degree = random.uniform(-self.max_rotate_degree, - self.max_rotate_degree) - rotation_matrix = self._get_rotation_matrix(rotation_degree) - - # Scaling - scaling_ratio = random.uniform(self.scaling_ratio_range[0], - self.scaling_ratio_range[1]) - scaling_matrix = self._get_scaling_matrix(scaling_ratio) - - # Shear - x_degree = random.uniform(-self.max_shear_degree, - self.max_shear_degree) - y_degree = random.uniform(-self.max_shear_degree, - self.max_shear_degree) - shear_matrix = self._get_shear_matrix(x_degree, y_degree) - - # Translation - trans_x = random.uniform(-self.max_translate_ratio, - self.max_translate_ratio) * width - trans_y = random.uniform(-self.max_translate_ratio, - self.max_translate_ratio) * height - translate_matrix = self._get_translation_matrix(trans_x, trans_y) - - warp_matrix = ( - translate_matrix @ shear_matrix @ rotation_matrix @ scaling_matrix) - - img = cv2.warpPerspective( - img, - warp_matrix, - dsize=(width, height), - borderValue=self.border_val) - results['img'] = img - results['img_shape'] = img.shape - - for key in results.get('bbox_fields', []): - bboxes = results[key] - num_bboxes = len(bboxes) - if num_bboxes: - # homogeneous coordinates - xs = bboxes[:, [0, 0, 2, 2]].reshape(num_bboxes * 4) - ys = bboxes[:, [1, 3, 3, 1]].reshape(num_bboxes * 4) - ones = np.ones_like(xs) - points = np.vstack([xs, ys, ones]) - - warp_points = warp_matrix @ points - warp_points = warp_points[:2] / warp_points[2] - xs = warp_points[0].reshape(num_bboxes, 4) - ys = warp_points[1].reshape(num_bboxes, 4) - - warp_bboxes = np.vstack( - (xs.min(1), ys.min(1), xs.max(1), ys.max(1))).T - - if self.bbox_clip_border: - warp_bboxes[:, [0, 2]] = \ - warp_bboxes[:, [0, 2]].clip(0, width) - warp_bboxes[:, [1, 3]] = \ - warp_bboxes[:, [1, 3]].clip(0, height) - - # remove outside bbox - valid_index = find_inside_bboxes(warp_bboxes, height, width) - if not self.skip_filter: - # filter bboxes - filter_index = self.filter_gt_bboxes( - bboxes * scaling_ratio, warp_bboxes) - valid_index = valid_index & filter_index - - results[key] = warp_bboxes[valid_index] - if key in ['gt_bboxes']: - if 
'gt_labels' in results: - results['gt_labels'] = results['gt_labels'][ - valid_index] - - if 'gt_masks' in results: - raise NotImplementedError( - 'RandomAffine only supports bbox.') - return results - - def filter_gt_bboxes(self, origin_bboxes, wrapped_bboxes): - origin_w = origin_bboxes[:, 2] - origin_bboxes[:, 0] - origin_h = origin_bboxes[:, 3] - origin_bboxes[:, 1] - wrapped_w = wrapped_bboxes[:, 2] - wrapped_bboxes[:, 0] - wrapped_h = wrapped_bboxes[:, 3] - wrapped_bboxes[:, 1] - aspect_ratio = np.maximum(wrapped_w / (wrapped_h + 1e-16), - wrapped_h / (wrapped_w + 1e-16)) - - wh_valid_idx = (wrapped_w > self.min_bbox_size) & \ - (wrapped_h > self.min_bbox_size) - area_valid_idx = wrapped_w * wrapped_h / (origin_w * origin_h + - 1e-16) > self.min_area_ratio - aspect_ratio_valid_idx = aspect_ratio < self.max_aspect_ratio - return wh_valid_idx & area_valid_idx & aspect_ratio_valid_idx - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(max_rotate_degree={self.max_rotate_degree}, ' - repr_str += f'max_translate_ratio={self.max_translate_ratio}, ' - repr_str += f'scaling_ratio={self.scaling_ratio_range}, ' - repr_str += f'max_shear_degree={self.max_shear_degree}, ' - repr_str += f'border={self.border}, ' - repr_str += f'border_val={self.border_val}, ' - repr_str += f'min_bbox_size={self.min_bbox_size}, ' - repr_str += f'min_area_ratio={self.min_area_ratio}, ' - repr_str += f'max_aspect_ratio={self.max_aspect_ratio}, ' - repr_str += f'skip_filter={self.skip_filter})' - return repr_str - - @staticmethod - def _get_rotation_matrix(rotate_degrees): - radian = math.radians(rotate_degrees) - rotation_matrix = np.array( - [[np.cos(radian), -np.sin(radian), 0.], - [np.sin(radian), np.cos(radian), 0.], [0., 0., 1.]], - dtype=np.float32) - return rotation_matrix - - @staticmethod - def _get_scaling_matrix(scale_ratio): - scaling_matrix = np.array( - [[scale_ratio, 0., 0.], [0., scale_ratio, 0.], [0., 0., 1.]], - dtype=np.float32) - return scaling_matrix - - @staticmethod - def _get_share_matrix(scale_ratio): - scaling_matrix = np.array( - [[scale_ratio, 0., 0.], [0., scale_ratio, 0.], [0., 0., 1.]], - dtype=np.float32) - return scaling_matrix - - @staticmethod - def _get_shear_matrix(x_shear_degrees, y_shear_degrees): - x_radian = math.radians(x_shear_degrees) - y_radian = math.radians(y_shear_degrees) - shear_matrix = np.array([[1, np.tan(x_radian), 0.], - [np.tan(y_radian), 1, 0.], [0., 0., 1.]], - dtype=np.float32) - return shear_matrix - - @staticmethod - def _get_translation_matrix(x, y): - translation_matrix = np.array([[1, 0., x], [0., 1, y], [0., 0., 1.]], - dtype=np.float32) - return translation_matrix - - -@PIPELINES.register_module() -class YOLOXHSVRandomAug: - """Apply HSV augmentation to image sequentially. It is referenced from - https://github.com/Megvii- - BaseDetection/YOLOX/blob/main/yolox/data/data_augment.py#L21. - - Args: - hue_delta (int): delta of hue. Default: 5. - saturation_delta (int): delta of saturation. Default: 30. - value_delta (int): delat of value. Default: 30. 
- """ - - def __init__(self, hue_delta=5, saturation_delta=30, value_delta=30): - self.hue_delta = hue_delta - self.saturation_delta = saturation_delta - self.value_delta = value_delta - - def __call__(self, results): - img = results['img'] - hsv_gains = np.random.uniform(-1, 1, 3) * [ - self.hue_delta, self.saturation_delta, self.value_delta - ] - # random selection of h, s, v - hsv_gains *= np.random.randint(0, 2, 3) - # prevent overflow - hsv_gains = hsv_gains.astype(np.int16) - img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.int16) - - img_hsv[..., 0] = (img_hsv[..., 0] + hsv_gains[0]) % 180 - img_hsv[..., 1] = np.clip(img_hsv[..., 1] + hsv_gains[1], 0, 255) - img_hsv[..., 2] = np.clip(img_hsv[..., 2] + hsv_gains[2], 0, 255) - cv2.cvtColor(img_hsv.astype(img.dtype), cv2.COLOR_HSV2BGR, dst=img) - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(hue_delta={self.hue_delta}, ' - repr_str += f'saturation_delta={self.saturation_delta}, ' - repr_str += f'value_delta={self.value_delta})' - return repr_str - - -@PIPELINES.register_module() -class CopyPaste: - """Simple Copy-Paste is a Strong Data Augmentation Method for Instance - Segmentation The simple copy-paste transform steps are as follows: - - 1. The destination image is already resized with aspect ratio kept, - cropped and padded. - 2. Randomly select a source image, which is also already resized - with aspect ratio kept, cropped and padded in a similar way - as the destination image. - 3. Randomly select some objects from the source image. - 4. Paste these source objects to the destination image directly, - due to the source and destination image have the same size. - 5. Update object masks of the destination image, for some origin objects - may be occluded. - 6. Generate bboxes from the updated destination masks and - filter some objects which are totally occluded, and adjust bboxes - which are partly occluded. - 7. Append selected source bboxes, masks, and labels. - - Args: - max_num_pasted (int): The maximum number of pasted objects. - Default: 100. - bbox_occluded_thr (int): The threshold of occluded bbox. - Default: 10. - mask_occluded_thr (int): The threshold of occluded mask. - Default: 300. - selected (bool): Whether select objects or not. If select is False, - all objects of the source image will be pasted to the - destination image. - Default: True. - """ - - def __init__( - self, - max_num_pasted=100, - bbox_occluded_thr=10, - mask_occluded_thr=300, - selected=True, - ): - self.max_num_pasted = max_num_pasted - self.bbox_occluded_thr = bbox_occluded_thr - self.mask_occluded_thr = mask_occluded_thr - self.selected = selected - - def get_indexes(self, dataset): - """Call function to collect indexes.s. - - Args: - dataset (:obj:`MultiImageMixDataset`): The dataset. - Returns: - list: Indexes. - """ - return random.randint(0, len(dataset)) - - def __call__(self, results): - """Call function to make a copy-paste of image. - - Args: - results (dict): Result dict. - Returns: - dict: Result dict with copy-paste transformed. 
- """ - - assert 'mix_results' in results - num_images = len(results['mix_results']) - assert num_images == 1, \ - f'CopyPaste only supports processing 2 images, got {num_images}' - if self.selected: - selected_results = self._select_object(results['mix_results'][0]) - else: - selected_results = results['mix_results'][0] - return self._copy_paste(results, selected_results) - - def _select_object(self, results): - """Select some objects from the source results.""" - bboxes = results['gt_bboxes'] - labels = results['gt_labels'] - masks = results['gt_masks'] - max_num_pasted = min(bboxes.shape[0] + 1, self.max_num_pasted) - num_pasted = np.random.randint(0, max_num_pasted) - selected_inds = np.random.choice( - bboxes.shape[0], size=num_pasted, replace=False) - - selected_bboxes = bboxes[selected_inds] - selected_labels = labels[selected_inds] - selected_masks = masks[selected_inds] - - results['gt_bboxes'] = selected_bboxes - results['gt_labels'] = selected_labels - results['gt_masks'] = selected_masks - return results - - def _copy_paste(self, dst_results, src_results): - """CopyPaste transform function. - - Args: - dst_results (dict): Result dict of the destination image. - src_results (dict): Result dict of the source image. - Returns: - dict: Updated result dict. - """ - dst_img = dst_results['img'] - dst_bboxes = dst_results['gt_bboxes'] - dst_labels = dst_results['gt_labels'] - dst_masks = dst_results['gt_masks'] - - src_img = src_results['img'] - src_bboxes = src_results['gt_bboxes'] - src_labels = src_results['gt_labels'] - src_masks = src_results['gt_masks'] - - if len(src_bboxes) == 0: - return dst_results - - # update masks and generate bboxes from updated masks - composed_mask = np.where(np.any(src_masks.masks, axis=0), 1, 0) - updated_dst_masks = self.get_updated_masks(dst_masks, composed_mask) - updated_dst_bboxes = updated_dst_masks.get_bboxes() - assert len(updated_dst_bboxes) == len(updated_dst_masks) - - # filter totally occluded objects - bboxes_inds = np.all( - np.abs( - (updated_dst_bboxes - dst_bboxes)) <= self.bbox_occluded_thr, - axis=-1) - masks_inds = updated_dst_masks.masks.sum( - axis=(1, 2)) > self.mask_occluded_thr - valid_inds = bboxes_inds | masks_inds - - # Paste source objects to destination image directly - img = dst_img * (1 - composed_mask[..., np.newaxis] - ) + src_img * composed_mask[..., np.newaxis] - bboxes = np.concatenate([updated_dst_bboxes[valid_inds], src_bboxes]) - labels = np.concatenate([dst_labels[valid_inds], src_labels]) - masks = np.concatenate( - [updated_dst_masks.masks[valid_inds], src_masks.masks]) - - dst_results['img'] = img - dst_results['gt_bboxes'] = bboxes - dst_results['gt_labels'] = labels - dst_results['gt_masks'] = BitmapMasks(masks, masks.shape[1], - masks.shape[2]) - - return dst_results - - def get_updated_masks(self, masks, composed_mask): - assert masks.masks.shape[-2:] == composed_mask.shape[-2:], \ - 'Cannot compare two arrays of different size' - masks.masks = np.where(composed_mask, 0, masks.masks) - return masks - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'max_num_pasted={self.max_num_pasted}, ' - repr_str += f'bbox_occluded_thr={self.bbox_occluded_thr}, ' - repr_str += f'mask_occluded_thr={self.mask_occluded_thr}, ' - repr_str += f'selected={self.selected}, ' - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/__init__.py deleted file mode 100644 
index a4c7ea13..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .class_aware_sampler import ClassAwareSampler -from .distributed_sampler import DistributedSampler -from .group_sampler import DistributedGroupSampler, GroupSampler -from .infinite_sampler import InfiniteBatchSampler, InfiniteGroupBatchSampler - -__all__ = [ - 'DistributedSampler', 'DistributedGroupSampler', 'GroupSampler', - 'InfiniteGroupBatchSampler', 'InfiniteBatchSampler', 'ClassAwareSampler' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/class_aware_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/class_aware_sampler.py deleted file mode 100644 index c52708eb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/class_aware_sampler.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -from mmcv.runner import get_dist_info -from torch.utils.data import Sampler - -from mmdet.core.utils import sync_random_seed - - -class ClassAwareSampler(Sampler): - r"""Sampler that restricts data loading to the label of the dataset. - - A class-aware sampling strategy to effectively tackle the - non-uniform class distribution. The length of the training data is - consistent with source data. Simple improvements based on `Relay - Backpropagation for Effective Learning of Deep Convolutional - Neural Networks `_ - - The implementation logic is referred to - https://github.com/Sense-X/TSD/blob/master/mmdet/datasets/samplers/distributed_classaware_sampler.py - - Args: - dataset: Dataset used for sampling. - samples_per_gpu (int): When model is :obj:`DistributedDataParallel`, - it is the number of training samples on each GPU. - When model is :obj:`DataParallel`, it is - `num_gpus * samples_per_gpu`. - Default : 1. - num_replicas (optional): Number of processes participating in - distributed training. - rank (optional): Rank of the current process within num_replicas. - seed (int, optional): random seed used to shuffle the sampler if - ``shuffle=True``. This number should be identical across all - processes in the distributed group. Default: 0. - num_sample_class (int): The number of samples taken from each - per-label list. Default: 1 - """ - - def __init__(self, - dataset, - samples_per_gpu=1, - num_replicas=None, - rank=None, - seed=0, - num_sample_class=1): - _rank, _num_replicas = get_dist_info() - if num_replicas is None: - num_replicas = _num_replicas - if rank is None: - rank = _rank - - self.dataset = dataset - self.num_replicas = num_replicas - self.samples_per_gpu = samples_per_gpu - self.rank = rank - self.epoch = 0 - # Must be the same across all workers. 
If None, will use a - # random seed shared among workers - # (require synchronization among all workers) - self.seed = sync_random_seed(seed) - - # The number of samples taken from each per-label list - assert num_sample_class > 0 and isinstance(num_sample_class, int) - self.num_sample_class = num_sample_class - # Get per-label image list from dataset - assert hasattr(dataset, 'get_cat2imgs'), \ - 'dataset must have `get_cat2imgs` function' - self.cat_dict = dataset.get_cat2imgs() - - self.num_samples = int( - math.ceil( - len(self.dataset) * 1.0 / self.num_replicas / - self.samples_per_gpu)) * self.samples_per_gpu - self.total_size = self.num_samples * self.num_replicas - - # get number of images containing each category - self.num_cat_imgs = [len(x) for x in self.cat_dict.values()] - # filter labels without images - self.valid_cat_inds = [ - i for i, length in enumerate(self.num_cat_imgs) if length != 0 - ] - self.num_classes = len(self.valid_cat_inds) - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch + self.seed) - - # initialize label list - label_iter_list = RandomCycleIter(self.valid_cat_inds, generator=g) - # initialize each per-label image list - data_iter_dict = dict() - for i in self.valid_cat_inds: - data_iter_dict[i] = RandomCycleIter(self.cat_dict[i], generator=g) - - def gen_cat_img_inds(cls_list, data_dict, num_sample_cls): - """Traverse the categories and extract `num_sample_cls` image - indexes of the corresponding categories one by one.""" - id_indices = [] - for _ in range(len(cls_list)): - cls_idx = next(cls_list) - for _ in range(num_sample_cls): - id = next(data_dict[cls_idx]) - id_indices.append(id) - return id_indices - - # deterministically shuffle based on epoch - num_bins = int( - math.ceil(self.total_size * 1.0 / self.num_classes / - self.num_sample_class)) - indices = [] - for i in range(num_bins): - indices += gen_cat_img_inds(label_iter_list, data_iter_dict, - self.num_sample_class) - - # fix extra samples to make it evenly divisible - if len(indices) >= self.total_size: - indices = indices[:self.total_size] - else: - indices += indices[:(self.total_size - len(indices))] - assert len(indices) == self.total_size - - # subsample - offset = self.num_samples * self.rank - indices = indices[offset:offset + self.num_samples] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch - - -class RandomCycleIter: - """Shuffle the list and do it again after the list have traversed. - - The implementation logic is referred to - https://github.com/wutong16/DistributionBalancedLoss/blob/master/mllt/datasets/loader/sampler.py - - Example: - >>> label_list = [0, 1, 2, 4, 5] - >>> g = torch.Generator() - >>> g.manual_seed(0) - >>> label_iter_list = RandomCycleIter(label_list, generator=g) - >>> index = next(label_iter_list) - Args: - data (list or ndarray): The data that needs to be shuffled. - generator: An torch.Generator object, which is used in setting the seed - for generating random numbers. 
- """ # noqa: W605 - - def __init__(self, data, generator=None): - self.data = data - self.length = len(data) - self.index = torch.randperm(self.length, generator=generator).numpy() - self.i = 0 - self.generator = generator - - def __iter__(self): - return self - - def __len__(self): - return len(self.data) - - def __next__(self): - if self.i == self.length: - self.index = torch.randperm( - self.length, generator=self.generator).numpy() - self.i = 0 - idx = self.data[self.index[self.i]] - self.i += 1 - return idx diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/distributed_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/distributed_sampler.py deleted file mode 100644 index 1bc8b7c3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/distributed_sampler.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -from torch.utils.data import DistributedSampler as _DistributedSampler - -from mmdet.core.utils import sync_random_seed -from mmdet.utils import get_device - - -class DistributedSampler(_DistributedSampler): - - def __init__(self, - dataset, - num_replicas=None, - rank=None, - shuffle=True, - seed=0): - super().__init__( - dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. - device = get_device() - self.seed = sync_random_seed(seed, device) - - def __iter__(self): - # deterministically shuffle based on epoch - if self.shuffle: - g = torch.Generator() - # When :attr:`shuffle=True`, this ensures all replicas - # use a different random ordering for each epoch. - # Otherwise, the next iteration of this sampler will - # yield the same ordering. - g.manual_seed(self.epoch + self.seed) - indices = torch.randperm(len(self.dataset), generator=g).tolist() - else: - indices = torch.arange(len(self.dataset)).tolist() - - # add extra samples to make it evenly divisible - # in case that indices is shorter than half of total_size - indices = (indices * - math.ceil(self.total_size / len(indices)))[:self.total_size] - assert len(indices) == self.total_size - - # subsample - indices = indices[self.rank:self.total_size:self.num_replicas] - assert len(indices) == self.num_samples - - return iter(indices) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/group_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/group_sampler.py deleted file mode 100644 index 783d2b21..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/group_sampler.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math - -import numpy as np -import torch -from mmcv.runner import get_dist_info -from torch.utils.data import Sampler - - -class GroupSampler(Sampler): - - def __init__(self, dataset, samples_per_gpu=1): - assert hasattr(dataset, 'flag') - self.dataset = dataset - self.samples_per_gpu = samples_per_gpu - self.flag = dataset.flag.astype(np.int64) - self.group_sizes = np.bincount(self.flag) - self.num_samples = 0 - for i, size in enumerate(self.group_sizes): - self.num_samples += int(np.ceil( - size / self.samples_per_gpu)) * self.samples_per_gpu - - def __iter__(self): - indices = [] - for i, size in enumerate(self.group_sizes): - if size == 0: - continue - indice = np.where(self.flag == i)[0] - assert len(indice) == size - np.random.shuffle(indice) - num_extra = int(np.ceil(size / self.samples_per_gpu) - ) * self.samples_per_gpu - len(indice) - indice = np.concatenate( - [indice, np.random.choice(indice, num_extra)]) - indices.append(indice) - indices = np.concatenate(indices) - indices = [ - indices[i * self.samples_per_gpu:(i + 1) * self.samples_per_gpu] - for i in np.random.permutation( - range(len(indices) // self.samples_per_gpu)) - ] - indices = np.concatenate(indices) - indices = indices.astype(np.int64).tolist() - assert len(indices) == self.num_samples - return iter(indices) - - def __len__(self): - return self.num_samples - - -class DistributedGroupSampler(Sampler): - """Sampler that restricts data loading to a subset of the dataset. - - It is especially useful in conjunction with - :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each - process can pass a DistributedSampler instance as a DataLoader sampler, - and load a subset of the original dataset that is exclusive to it. - - .. note:: - Dataset is assumed to be of constant size. - - Arguments: - dataset: Dataset used for sampling. - num_replicas (optional): Number of processes participating in - distributed training. - rank (optional): Rank of the current process within num_replicas. - seed (int, optional): random seed used to shuffle the sampler if - ``shuffle=True``. This number should be identical across all - processes in the distributed group. Default: 0. - """ - - def __init__(self, - dataset, - samples_per_gpu=1, - num_replicas=None, - rank=None, - seed=0): - _rank, _num_replicas = get_dist_info() - if num_replicas is None: - num_replicas = _num_replicas - if rank is None: - rank = _rank - self.dataset = dataset - self.samples_per_gpu = samples_per_gpu - self.num_replicas = num_replicas - self.rank = rank - self.epoch = 0 - self.seed = seed if seed is not None else 0 - - assert hasattr(self.dataset, 'flag') - self.flag = self.dataset.flag - self.group_sizes = np.bincount(self.flag) - - self.num_samples = 0 - for i, j in enumerate(self.group_sizes): - self.num_samples += int( - math.ceil(self.group_sizes[i] * 1.0 / self.samples_per_gpu / - self.num_replicas)) * self.samples_per_gpu - self.total_size = self.num_samples * self.num_replicas - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch + self.seed) - - indices = [] - for i, size in enumerate(self.group_sizes): - if size > 0: - indice = np.where(self.flag == i)[0] - assert len(indice) == size - # add .numpy() to avoid bug when selecting indice in parrots. - # TODO: check whether torch.randperm() can be replaced by - # numpy.random.permutation(). 
- indice = indice[list( - torch.randperm(int(size), generator=g).numpy())].tolist() - extra = int( - math.ceil( - size * 1.0 / self.samples_per_gpu / self.num_replicas) - ) * self.samples_per_gpu * self.num_replicas - len(indice) - # pad indice - tmp = indice.copy() - for _ in range(extra // size): - indice.extend(tmp) - indice.extend(tmp[:extra % size]) - indices.extend(indice) - - assert len(indices) == self.total_size - - indices = [ - indices[j] for i in list( - torch.randperm( - len(indices) // self.samples_per_gpu, generator=g)) - for j in range(i * self.samples_per_gpu, (i + 1) * - self.samples_per_gpu) - ] - - # subsample - offset = self.num_samples * self.rank - indices = indices[offset:offset + self.num_samples] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/infinite_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/infinite_sampler.py deleted file mode 100644 index d42487e6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/samplers/infinite_sampler.py +++ /dev/null @@ -1,186 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import itertools - -import numpy as np -import torch -from mmcv.runner import get_dist_info -from torch.utils.data.sampler import Sampler - -from mmdet.core.utils import sync_random_seed - - -class InfiniteGroupBatchSampler(Sampler): - """Similar to `BatchSampler` warping a `GroupSampler. It is designed for - iteration-based runners like `IterBasedRunner` and yields a mini-batch - indices each time, all indices in a batch should be in the same group. - - The implementation logic is referred to - https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/samplers/grouped_batch_sampler.py - - Args: - dataset (object): The dataset. - batch_size (int): When model is :obj:`DistributedDataParallel`, - it is the number of training samples on each GPU. - When model is :obj:`DataParallel`, it is - `num_gpus * samples_per_gpu`. - Default : 1. - world_size (int, optional): Number of processes participating in - distributed training. Default: None. - rank (int, optional): Rank of current process. Default: None. - seed (int): Random seed. Default: 0. - shuffle (bool): Whether shuffle the indices of a dummy `epoch`, it - should be noted that `shuffle` can not guarantee that you can - generate sequential indices because it need to ensure - that all indices in a batch is in a group. Default: True. - """ # noqa: W605 - - def __init__(self, - dataset, - batch_size=1, - world_size=None, - rank=None, - seed=0, - shuffle=True): - _rank, _world_size = get_dist_info() - if world_size is None: - world_size = _world_size - if rank is None: - rank = _rank - self.rank = rank - self.world_size = world_size - self.dataset = dataset - self.batch_size = batch_size - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. 
- self.seed = sync_random_seed(seed) - self.shuffle = shuffle - - assert hasattr(self.dataset, 'flag') - self.flag = self.dataset.flag - self.group_sizes = np.bincount(self.flag) - # buffer used to save indices of each group - self.buffer_per_group = {k: [] for k in range(len(self.group_sizes))} - - self.size = len(dataset) - self.indices = self._indices_of_rank() - - def _infinite_indices(self): - """Infinitely yield a sequence of indices.""" - g = torch.Generator() - g.manual_seed(self.seed) - while True: - if self.shuffle: - yield from torch.randperm(self.size, generator=g).tolist() - - else: - yield from torch.arange(self.size).tolist() - - def _indices_of_rank(self): - """Slice the infinite indices by rank.""" - yield from itertools.islice(self._infinite_indices(), self.rank, None, - self.world_size) - - def __iter__(self): - # once batch size is reached, yield the indices - for idx in self.indices: - flag = self.flag[idx] - group_buffer = self.buffer_per_group[flag] - group_buffer.append(idx) - if len(group_buffer) == self.batch_size: - yield group_buffer[:] - del group_buffer[:] - - def __len__(self): - """Length of base dataset.""" - return self.size - - def set_epoch(self, epoch): - """Not supported in `IterationBased` runner.""" - raise NotImplementedError - - -class InfiniteBatchSampler(Sampler): - """Similar to `BatchSampler` warping a `DistributedSampler. It is designed - iteration-based runners like `IterBasedRunner` and yields a mini-batch - indices each time. - - The implementation logic is referred to - https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/samplers/grouped_batch_sampler.py - - Args: - dataset (object): The dataset. - batch_size (int): When model is :obj:`DistributedDataParallel`, - it is the number of training samples on each GPU, - When model is :obj:`DataParallel`, it is - `num_gpus * samples_per_gpu`. - Default : 1. - world_size (int, optional): Number of processes participating in - distributed training. Default: None. - rank (int, optional): Rank of current process. Default: None. - seed (int): Random seed. Default: 0. - shuffle (bool): Whether shuffle the dataset or not. Default: True. - """ # noqa: W605 - - def __init__(self, - dataset, - batch_size=1, - world_size=None, - rank=None, - seed=0, - shuffle=True): - _rank, _world_size = get_dist_info() - if world_size is None: - world_size = _world_size - if rank is None: - rank = _rank - self.rank = rank - self.world_size = world_size - self.dataset = dataset - self.batch_size = batch_size - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. 
- self.seed = sync_random_seed(seed) - self.shuffle = shuffle - self.size = len(dataset) - self.indices = self._indices_of_rank() - - def _infinite_indices(self): - """Infinitely yield a sequence of indices.""" - g = torch.Generator() - g.manual_seed(self.seed) - while True: - if self.shuffle: - yield from torch.randperm(self.size, generator=g).tolist() - - else: - yield from torch.arange(self.size).tolist() - - def _indices_of_rank(self): - """Slice the infinite indices by rank.""" - yield from itertools.islice(self._infinite_indices(), self.rank, None, - self.world_size) - - def __iter__(self): - # once batch size is reached, yield the indices - batch_buffer = [] - for idx in self.indices: - batch_buffer.append(idx) - if len(batch_buffer) == self.batch_size: - yield batch_buffer - batch_buffer = [] - - def __len__(self): - """Length of base dataset.""" - return self.size - - def set_epoch(self, epoch): - """Not supported in `IterationBased` runner.""" - raise NotImplementedError diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/utils.py deleted file mode 100644 index 26e922d2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/utils.py +++ /dev/null @@ -1,166 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -from mmcv.cnn import VGG -from mmcv.runner.hooks import HOOKS, Hook - -from mmdet.datasets.builder import PIPELINES -from mmdet.datasets.pipelines import (LoadAnnotations, LoadImageFromFile, - LoadPanopticAnnotations) -from mmdet.models.dense_heads import GARPNHead, RPNHead -from mmdet.models.roi_heads.mask_heads import FusedSemanticHead - - -def replace_ImageToTensor(pipelines): - """Replace the ImageToTensor transform in a data pipeline to - DefaultFormatBundle, which is normally useful in batch inference. - - Args: - pipelines (list[dict]): Data pipeline configs. - - Returns: - list: The new pipeline list with all ImageToTensor replaced by - DefaultFormatBundle. - - Examples: - >>> pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict( - ... type='MultiScaleFlipAug', - ... img_scale=(1333, 800), - ... flip=False, - ... transforms=[ - ... dict(type='Resize', keep_ratio=True), - ... dict(type='RandomFlip'), - ... dict(type='Normalize', mean=[0, 0, 0], std=[1, 1, 1]), - ... dict(type='Pad', size_divisor=32), - ... dict(type='ImageToTensor', keys=['img']), - ... dict(type='Collect', keys=['img']), - ... ]) - ... ] - >>> expected_pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict( - ... type='MultiScaleFlipAug', - ... img_scale=(1333, 800), - ... flip=False, - ... transforms=[ - ... dict(type='Resize', keep_ratio=True), - ... dict(type='RandomFlip'), - ... dict(type='Normalize', mean=[0, 0, 0], std=[1, 1, 1]), - ... dict(type='Pad', size_divisor=32), - ... dict(type='DefaultFormatBundle'), - ... dict(type='Collect', keys=['img']), - ... ]) - ... ] - >>> assert expected_pipelines == replace_ImageToTensor(pipelines) - """ - pipelines = copy.deepcopy(pipelines) - for i, pipeline in enumerate(pipelines): - if pipeline['type'] == 'MultiScaleFlipAug': - assert 'transforms' in pipeline - pipeline['transforms'] = replace_ImageToTensor( - pipeline['transforms']) - elif pipeline['type'] == 'ImageToTensor': - warnings.warn( - '"ImageToTensor" pipeline is replaced by ' - '"DefaultFormatBundle" for batch inference. 
It is ' - 'recommended to manually replace it in the test ' - 'data pipeline in your config file.', UserWarning) - pipelines[i] = {'type': 'DefaultFormatBundle'} - return pipelines - - -def get_loading_pipeline(pipeline): - """Only keep loading image and annotations related configuration. - - Args: - pipeline (list[dict]): Data pipeline configs. - - Returns: - list[dict]: The new pipeline list with only keep - loading image and annotations related configuration. - - Examples: - >>> pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations', with_bbox=True), - ... dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - ... dict(type='RandomFlip', flip_ratio=0.5), - ... dict(type='Normalize', **img_norm_cfg), - ... dict(type='Pad', size_divisor=32), - ... dict(type='DefaultFormatBundle'), - ... dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) - ... ] - >>> expected_pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations', with_bbox=True) - ... ] - >>> assert expected_pipelines ==\ - ... get_loading_pipeline(pipelines) - """ - loading_pipeline_cfg = [] - for cfg in pipeline: - obj_cls = PIPELINES.get(cfg['type']) - # TODO:use more elegant way to distinguish loading modules - if obj_cls is not None and obj_cls in (LoadImageFromFile, - LoadAnnotations, - LoadPanopticAnnotations): - loading_pipeline_cfg.append(cfg) - assert len(loading_pipeline_cfg) == 2, \ - 'The data pipeline in your config file must include ' \ - 'loading image and annotations related pipeline.' - return loading_pipeline_cfg - - -@HOOKS.register_module() -class NumClassCheckHook(Hook): - - def _check_head(self, runner): - """Check whether the `num_classes` in head matches the length of - `CLASSES` in `dataset`. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - model = runner.model - dataset = runner.data_loader.dataset - if dataset.CLASSES is None: - runner.logger.warning( - f'Please set `CLASSES` ' - f'in the {dataset.__class__.__name__} and' - f'check if it is consistent with the `num_classes` ' - f'of head') - else: - assert type(dataset.CLASSES) is not str, \ - (f'`CLASSES` in {dataset.__class__.__name__}' - f'should be a tuple of str.' - f'Add comma if number of classes is 1 as ' - f'CLASSES = ({dataset.CLASSES},)') - for name, module in model.named_modules(): - if hasattr(module, 'num_classes') and not isinstance( - module, (RPNHead, VGG, FusedSemanticHead, GARPNHead)): - assert module.num_classes == len(dataset.CLASSES), \ - (f'The `num_classes` ({module.num_classes}) in ' - f'{module.__class__.__name__} of ' - f'{model.__class__.__name__} does not matches ' - f'the length of `CLASSES` ' - f'{len(dataset.CLASSES)}) in ' - f'{dataset.__class__.__name__}') - - def before_train_epoch(self, runner): - """Check whether the training dataset is compatible with head. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - self._check_head(runner) - - def before_val_epoch(self, runner): - """Check whether the dataset in val epoch is compatible with head. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - self._check_head(runner) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/voc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/voc.py deleted file mode 100644 index 0a3ea7aa..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/voc.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections import OrderedDict - -from mmcv.utils import print_log - -from mmdet.core import eval_map, eval_recalls -from .builder import DATASETS -from .xml_style import XMLDataset - - -@DATASETS.register_module() -class VOCDataset(XMLDataset): - - CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', - 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', - 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', - 'tvmonitor') - - PALETTE = [(106, 0, 228), (119, 11, 32), (165, 42, 42), (0, 0, 192), - (197, 226, 255), (0, 60, 100), (0, 0, 142), (255, 77, 255), - (153, 69, 1), (120, 166, 157), (0, 182, 199), (0, 226, 252), - (182, 182, 255), (0, 0, 230), (220, 20, 60), (163, 255, 0), - (0, 82, 0), (3, 95, 161), (0, 80, 100), (183, 130, 88)] - - def __init__(self, **kwargs): - super(VOCDataset, self).__init__(**kwargs) - if 'VOC2007' in self.img_prefix: - self.year = 2007 - elif 'VOC2012' in self.img_prefix: - self.year = 2012 - else: - raise ValueError('Cannot infer dataset year from img_prefix') - - def evaluate(self, - results, - metric='mAP', - logger=None, - proposal_nums=(100, 300, 1000), - iou_thr=0.5, - scale_ranges=None): - """Evaluate in VOC protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'mAP', 'recall'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thr (float | list[float]): IoU threshold. Default: 0.5. - scale_ranges (list[tuple], optional): Scale ranges for evaluating - mAP. If not specified, all bounding boxes would be included in - evaluation. Default: None. - - Returns: - dict[str, float]: AP/recall metrics. 
- """ - - if not isinstance(metric, str): - assert len(metric) == 1 - metric = metric[0] - allowed_metrics = ['mAP', 'recall'] - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - annotations = [self.get_ann_info(i) for i in range(len(self))] - eval_results = OrderedDict() - iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr - if metric == 'mAP': - assert isinstance(iou_thrs, list) - if self.year == 2007: - ds_name = 'voc07' - else: - ds_name = self.CLASSES - mean_aps = [] - for iou_thr in iou_thrs: - print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}') - # Follow the official implementation, - # http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar - # we should use the legacy coordinate system in mmdet 1.x, - # which means w, h should be computed as 'x2 - x1 + 1` and - # `y2 - y1 + 1` - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=None, - iou_thr=iou_thr, - dataset=ds_name, - logger=logger, - use_legacy_coordinate=True) - mean_aps.append(mean_ap) - eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3) - eval_results['mAP'] = sum(mean_aps) / len(mean_aps) - eval_results.move_to_end('mAP', last=False) - elif metric == 'recall': - gt_bboxes = [ann['bboxes'] for ann in annotations] - recalls = eval_recalls( - gt_bboxes, - results, - proposal_nums, - iou_thrs, - logger=logger, - use_legacy_coordinate=True) - for i, num in enumerate(proposal_nums): - for j, iou_thr in enumerate(iou_thrs): - eval_results[f'recall@{num}@{iou_thr}'] = recalls[i, j] - if recalls.shape[1] > 1: - ar = recalls.mean(axis=1) - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - return eval_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/wider_face.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/wider_face.py deleted file mode 100644 index 85a5fdc5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/wider_face.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import xml.etree.ElementTree as ET - -import mmcv - -from .builder import DATASETS -from .xml_style import XMLDataset - - -@DATASETS.register_module() -class WIDERFaceDataset(XMLDataset): - """Reader for the WIDER Face dataset in PASCAL VOC format. - - Conversion scripts can be found in - https://github.com/sovrasov/wider-face-pascal-voc-annotations - """ - CLASSES = ('face', ) - - PALETTE = [(0, 255, 0)] - - def __init__(self, **kwargs): - super(WIDERFaceDataset, self).__init__(**kwargs) - - def load_annotations(self, ann_file): - """Load annotation from WIDERFace XML style annotation file. - - Args: - ann_file (str): Path of XML file. - - Returns: - list[dict]: Annotation info from XML file. 
- """ - - data_infos = [] - img_ids = mmcv.list_from_file(ann_file) - for img_id in img_ids: - filename = f'{img_id}.jpg' - xml_path = osp.join(self.img_prefix, 'Annotations', - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - size = root.find('size') - width = int(size.find('width').text) - height = int(size.find('height').text) - folder = root.find('folder').text - data_infos.append( - dict( - id=img_id, - filename=osp.join(folder, filename), - width=width, - height=height)) - - return data_infos diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/xml_style.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/xml_style.py deleted file mode 100644 index 039d5d7d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/datasets/xml_style.py +++ /dev/null @@ -1,178 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import xml.etree.ElementTree as ET - -import mmcv -import numpy as np -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class XMLDataset(CustomDataset): - """XML dataset for detection. - - Args: - min_size (int | float, optional): The minimum size of bounding - boxes in the images. If the size of a bounding box is less than - ``min_size``, it would be add to ignored field. - img_subdir (str): Subdir where images are stored. Default: JPEGImages. - ann_subdir (str): Subdir where annotations are. Default: Annotations. - """ - - def __init__(self, - min_size=None, - img_subdir='JPEGImages', - ann_subdir='Annotations', - **kwargs): - assert self.CLASSES or kwargs.get( - 'classes', None), 'CLASSES in `XMLDataset` can not be None.' - self.img_subdir = img_subdir - self.ann_subdir = ann_subdir - super(XMLDataset, self).__init__(**kwargs) - self.cat2label = {cat: i for i, cat in enumerate(self.CLASSES)} - self.min_size = min_size - - def load_annotations(self, ann_file): - """Load annotation from XML style ann_file. - - Args: - ann_file (str): Path of XML file. - - Returns: - list[dict]: Annotation info from XML file. - """ - - data_infos = [] - img_ids = mmcv.list_from_file(ann_file) - for img_id in img_ids: - filename = osp.join(self.img_subdir, f'{img_id}.jpg') - xml_path = osp.join(self.img_prefix, self.ann_subdir, - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - size = root.find('size') - if size is not None: - width = int(size.find('width').text) - height = int(size.find('height').text) - else: - img_path = osp.join(self.img_prefix, filename) - img = Image.open(img_path) - width, height = img.size - data_infos.append( - dict(id=img_id, filename=filename, width=width, height=height)) - - return data_infos - - def _filter_imgs(self, min_size=32): - """Filter images too small or without annotation.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if min(img_info['width'], img_info['height']) < min_size: - continue - if self.filter_empty_gt: - img_id = img_info['id'] - xml_path = osp.join(self.img_prefix, self.ann_subdir, - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - for obj in root.findall('object'): - name = obj.find('name').text - if name in self.CLASSES: - valid_inds.append(i) - break - else: - valid_inds.append(i) - return valid_inds - - def get_ann_info(self, idx): - """Get annotation from XML file by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. 
- """ - - img_id = self.data_infos[idx]['id'] - xml_path = osp.join(self.img_prefix, self.ann_subdir, f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - bboxes = [] - labels = [] - bboxes_ignore = [] - labels_ignore = [] - for obj in root.findall('object'): - name = obj.find('name').text - if name not in self.CLASSES: - continue - label = self.cat2label[name] - difficult = obj.find('difficult') - difficult = 0 if difficult is None else int(difficult.text) - bnd_box = obj.find('bndbox') - # TODO: check whether it is necessary to use int - # Coordinates may be float type - bbox = [ - int(float(bnd_box.find('xmin').text)), - int(float(bnd_box.find('ymin').text)), - int(float(bnd_box.find('xmax').text)), - int(float(bnd_box.find('ymax').text)) - ] - ignore = False - if self.min_size: - assert not self.test_mode - w = bbox[2] - bbox[0] - h = bbox[3] - bbox[1] - if w < self.min_size or h < self.min_size: - ignore = True - if difficult or ignore: - bboxes_ignore.append(bbox) - labels_ignore.append(label) - else: - bboxes.append(bbox) - labels.append(label) - if not bboxes: - bboxes = np.zeros((0, 4)) - labels = np.zeros((0, )) - else: - bboxes = np.array(bboxes, ndmin=2) - 1 - labels = np.array(labels) - if not bboxes_ignore: - bboxes_ignore = np.zeros((0, 4)) - labels_ignore = np.zeros((0, )) - else: - bboxes_ignore = np.array(bboxes_ignore, ndmin=2) - 1 - labels_ignore = np.array(labels_ignore) - ann = dict( - bboxes=bboxes.astype(np.float32), - labels=labels.astype(np.int64), - bboxes_ignore=bboxes_ignore.astype(np.float32), - labels_ignore=labels_ignore.astype(np.int64)) - return ann - - def get_cat_ids(self, idx): - """Get category ids in XML file by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - cat_ids = [] - img_id = self.data_infos[idx]['id'] - xml_path = osp.join(self.img_prefix, self.ann_subdir, f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - for obj in root.findall('object'): - name = obj.find('name').text - if name not in self.CLASSES: - continue - label = self.cat2label[name] - cat_ids.append(label) - - return cat_ids diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/__init__.py deleted file mode 100644 index 12efb013..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .backbones import * # noqa: F401,F403 -from .builder import (BACKBONES, DETECTORS, HEADS, LOSSES, NECKS, - ROI_EXTRACTORS, SHARED_HEADS, build_backbone, - build_detector, build_head, build_loss, build_neck, - build_roi_extractor, build_shared_head) -from .dense_heads import * # noqa: F401,F403 -from .detectors import * # noqa: F401,F403 -from .losses import * # noqa: F401,F403 -from .necks import * # noqa: F401,F403 -from .plugins import * # noqa: F401,F403 -from .roi_heads import * # noqa: F401,F403 -from .seg_heads import * # noqa: F401,F403 - -__all__ = [ - 'BACKBONES', 'NECKS', 'ROI_EXTRACTORS', 'SHARED_HEADS', 'HEADS', 'LOSSES', - 'DETECTORS', 'build_backbone', 'build_neck', 'build_roi_extractor', - 'build_shared_head', 'build_head', 'build_loss', 'build_detector' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/__init__.py deleted file mode 100644 index 91b50d25..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .csp_darknet import CSPDarknet -from .darknet import Darknet -from .detectors_resnet import DetectoRS_ResNet -from .detectors_resnext import DetectoRS_ResNeXt -from .efficientnet import EfficientNet -from .hourglass import HourglassNet -from .hrnet import HRNet -from .mobilenet_v2 import MobileNetV2 -from .pvt import PyramidVisionTransformer, PyramidVisionTransformerV2 -from .regnet import RegNet -from .res2net import Res2Net -from .resnest import ResNeSt -from .resnet import ResNet, ResNetV1d -from .resnext import ResNeXt -from .ssd_vgg import SSDVGG -from .swin import SwinTransformer -from .trident_resnet import TridentResNet - -__all__ = [ - 'RegNet', 'ResNet', 'ResNetV1d', 'ResNeXt', 'SSDVGG', 'HRNet', - 'MobileNetV2', 'Res2Net', 'HourglassNet', 'DetectoRS_ResNet', - 'DetectoRS_ResNeXt', 'Darknet', 'ResNeSt', 'TridentResNet', 'CSPDarknet', - 'SwinTransformer', 'PyramidVisionTransformer', - 'PyramidVisionTransformerV2', 'EfficientNet' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/csp_darknet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/csp_darknet.py deleted file mode 100644 index 2bbf3968..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/csp_darknet.py +++ /dev/null @@ -1,284 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import CSPLayer - - -class Focus(nn.Module): - """Focus width and height information into channel space. - - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - kernel_size (int): The kernel size of the convolution. Default: 1 - stride (int): The stride of the convolution. Default: 1 - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN', momentum=0.03, eps=0.001). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish'). 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=1, - stride=1, - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish')): - super().__init__() - self.conv = ConvModule( - in_channels * 4, - out_channels, - kernel_size, - stride, - padding=(kernel_size - 1) // 2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - # shape of x (b,c,w,h) -> y(b,4c,w/2,h/2) - patch_top_left = x[..., ::2, ::2] - patch_top_right = x[..., ::2, 1::2] - patch_bot_left = x[..., 1::2, ::2] - patch_bot_right = x[..., 1::2, 1::2] - x = torch.cat( - ( - patch_top_left, - patch_bot_left, - patch_top_right, - patch_bot_right, - ), - dim=1, - ) - return self.conv(x) - - -class SPPBottleneck(BaseModule): - """Spatial pyramid pooling layer used in YOLOv3-SPP. - - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - kernel_sizes (tuple[int]): Sequential of kernel sizes of pooling - layers. Default: (5, 9, 13). - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_sizes=(5, 9, 13), - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - init_cfg=None): - super().__init__(init_cfg) - mid_channels = in_channels // 2 - self.conv1 = ConvModule( - in_channels, - mid_channels, - 1, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.poolings = nn.ModuleList([ - nn.MaxPool2d(kernel_size=ks, stride=1, padding=ks // 2) - for ks in kernel_sizes - ]) - conv2_channels = mid_channels * (len(kernel_sizes) + 1) - self.conv2 = ConvModule( - conv2_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - x = self.conv1(x) - x = torch.cat([x] + [pooling(x) for pooling in self.poolings], dim=1) - x = self.conv2(x) - return x - - -@BACKBONES.register_module() -class CSPDarknet(BaseModule): - """CSP-Darknet backbone used in YOLOv5 and YOLOX. - - Args: - arch (str): Architecture of CSP-Darknet, from {P5, P6}. - Default: P5. - deepen_factor (float): Depth multiplier, multiply number of - blocks in CSP layer by this amount. Default: 1.0. - widen_factor (float): Width multiplier, multiply number of - channels in each layer by this amount. Default: 1.0. - out_indices (Sequence[int]): Output from which stages. - Default: (2, 3, 4). - frozen_stages (int): Stages to be frozen (stop grad and set eval - mode). -1 means not freezing any parameters. Default: -1. - use_depthwise (bool): Whether to use depthwise separable convolution. - Default: False. - arch_ovewrite(list): Overwrite default arch settings. Default: None. - spp_kernal_sizes: (tuple[int]): Sequential of kernel sizes of SPP - layers. Default: (5, 9, 13). - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). 
- norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Example: - >>> from mmdet.models import CSPDarknet - >>> import torch - >>> self = CSPDarknet(depth=53) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 416, 416) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - ... - (1, 256, 52, 52) - (1, 512, 26, 26) - (1, 1024, 13, 13) - """ - # From left to right: - # in_channels, out_channels, num_blocks, add_identity, use_spp - arch_settings = { - 'P5': [[64, 128, 3, True, False], [128, 256, 9, True, False], - [256, 512, 9, True, False], [512, 1024, 3, False, True]], - 'P6': [[64, 128, 3, True, False], [128, 256, 9, True, False], - [256, 512, 9, True, False], [512, 768, 3, True, False], - [768, 1024, 3, False, True]] - } - - def __init__(self, - arch='P5', - deepen_factor=1.0, - widen_factor=1.0, - out_indices=(2, 3, 4), - frozen_stages=-1, - use_depthwise=False, - arch_ovewrite=None, - spp_kernal_sizes=(5, 9, 13), - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - norm_eval=False, - init_cfg=dict( - type='Kaiming', - layer='Conv2d', - a=math.sqrt(5), - distribution='uniform', - mode='fan_in', - nonlinearity='leaky_relu')): - super().__init__(init_cfg) - arch_setting = self.arch_settings[arch] - if arch_ovewrite: - arch_setting = arch_ovewrite - assert set(out_indices).issubset( - i for i in range(len(arch_setting) + 1)) - if frozen_stages not in range(-1, len(arch_setting) + 1): - raise ValueError('frozen_stages must be in range(-1, ' - 'len(arch_setting) + 1). 
But received ' - f'{frozen_stages}') - - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.use_depthwise = use_depthwise - self.norm_eval = norm_eval - conv = DepthwiseSeparableConvModule if use_depthwise else ConvModule - - self.stem = Focus( - 3, - int(arch_setting[0][0] * widen_factor), - kernel_size=3, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.layers = ['stem'] - - for i, (in_channels, out_channels, num_blocks, add_identity, - use_spp) in enumerate(arch_setting): - in_channels = int(in_channels * widen_factor) - out_channels = int(out_channels * widen_factor) - num_blocks = max(round(num_blocks * deepen_factor), 1) - stage = [] - conv_layer = conv( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - stage.append(conv_layer) - if use_spp: - spp = SPPBottleneck( - out_channels, - out_channels, - kernel_sizes=spp_kernal_sizes, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - stage.append(spp) - csp_layer = CSPLayer( - out_channels, - out_channels, - num_blocks=num_blocks, - add_identity=add_identity, - use_depthwise=use_depthwise, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - stage.append(csp_layer) - self.add_module(f'stage{i + 1}', nn.Sequential(*stage)) - self.layers.append(f'stage{i + 1}') - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for i in range(self.frozen_stages + 1): - m = getattr(self, self.layers[i]) - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(CSPDarknet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/darknet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/darknet.py deleted file mode 100644 index adfb1159..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/darknet.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -import warnings - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES - - -class ResBlock(BaseModule): - """The basic residual block used in Darknet. Each ResBlock consists of two - ConvModules and the input is added to the final output. Each ConvModule is - composed of Conv, BN, and LeakyReLU. In YoloV3 paper, the first convLayer - has half of the number of the filters as much as the second convLayer. The - first convLayer has filter size of 1x1 and the second one has the filter - size of 3x3. - - Args: - in_channels (int): The input channels. Must be even. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None): - super(ResBlock, self).__init__(init_cfg) - assert in_channels % 2 == 0 # ensure the in_channels is even - half_in_channels = in_channels // 2 - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - self.conv1 = ConvModule(in_channels, half_in_channels, 1, **cfg) - self.conv2 = ConvModule( - half_in_channels, in_channels, 3, padding=1, **cfg) - - def forward(self, x): - residual = x - out = self.conv1(x) - out = self.conv2(out) - out = out + residual - - return out - - -@BACKBONES.register_module() -class Darknet(BaseModule): - """Darknet backbone. - - Args: - depth (int): Depth of Darknet. Currently only support 53. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. Default: -1. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import Darknet - >>> import torch - >>> self = Darknet(depth=53) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 416, 416) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - ... 
- (1, 256, 52, 52) - (1, 512, 26, 26) - (1, 1024, 13, 13) - """ - - # Dict(depth: (layers, channels)) - arch_settings = { - 53: ((1, 2, 8, 8, 4), ((32, 64), (64, 128), (128, 256), (256, 512), - (512, 1024))) - } - - def __init__(self, - depth=53, - out_indices=(3, 4, 5), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - norm_eval=True, - pretrained=None, - init_cfg=None): - super(Darknet, self).__init__(init_cfg) - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for darknet') - - self.depth = depth - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.layers, self.channels = self.arch_settings[depth] - - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - self.conv1 = ConvModule(3, 32, 3, padding=1, **cfg) - - self.cr_blocks = ['conv1'] - for i, n_layers in enumerate(self.layers): - layer_name = f'conv_res_block{i + 1}' - in_c, out_c = self.channels[i] - self.add_module( - layer_name, - self.make_conv_res_block(in_c, out_c, n_layers, **cfg)) - self.cr_blocks.append(layer_name) - - self.norm_eval = norm_eval - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.cr_blocks): - cr_block = getattr(self, layer_name) - x = cr_block(x) - if i in self.out_indices: - outs.append(x) - - return tuple(outs) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for i in range(self.frozen_stages): - m = getattr(self, self.cr_blocks[i]) - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(Darknet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() - - @staticmethod - def make_conv_res_block(in_channels, - out_channels, - res_repeat, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', - negative_slope=0.1)): - """In Darknet backbone, ConvLayer is usually followed by ResBlock. This - function will make that. The Conv layers always have 3x3 filters with - stride=2. The number of the filters in Conv layer is the same as the - out channels of the ResBlock. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - res_repeat (int): The number of ResBlocks. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). 
- """ - - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - model = nn.Sequential() - model.add_module( - 'conv', - ConvModule( - in_channels, out_channels, 3, stride=2, padding=1, **cfg)) - for idx in range(res_repeat): - model.add_module('res{}'.format(idx), - ResBlock(out_channels, **cfg)) - return model diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/detectors_resnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/detectors_resnet.py deleted file mode 100644 index a3c0d40b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/detectors_resnet.py +++ /dev/null @@ -1,353 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init, - kaiming_init) -from mmcv.runner import Sequential, load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from mmdet.utils import get_root_logger -from ..builder import BACKBONES -from .resnet import BasicBlock -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - r"""Bottleneck for the ResNet backbone in `DetectoRS - `_. - - This bottleneck allows the users to specify whether to use - SAC (Switchable Atrous Convolution) and RFP (Recursive Feature Pyramid). - - Args: - inplanes (int): The number of input channels. - planes (int): The number of output channels before expansion. - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. - sac (dict, optional): Dictionary to construct SAC. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - rfp_inplanes=None, - sac=None, - init_cfg=None, - **kwargs): - super(Bottleneck, self).__init__( - inplanes, planes, init_cfg=init_cfg, **kwargs) - - assert sac is None or isinstance(sac, dict) - self.sac = sac - self.with_sac = sac is not None - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False) - - self.rfp_inplanes = rfp_inplanes - if self.rfp_inplanes: - self.rfp_conv = build_conv_layer( - None, - self.rfp_inplanes, - planes * self.expansion, - 1, - stride=1, - bias=True) - if init_cfg is None: - self.init_cfg = dict( - type='Constant', val=0, override=dict(name='rfp_conv')) - - def rfp_forward(self, x, rfp_feat): - """The forward function that also takes the RFP features as input.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - if self.rfp_inplanes: - rfp_feat = self.rfp_conv(rfp_feat) - out = out + rfp_feat - - out = self.relu(out) - - return out - - -class ResLayer(Sequential): - """ResLayer to build ResNet style backbone for RPF in detectoRS. - - The difference between this module and base class is that we pass - ``rfp_inplanes`` to the first block. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. Default: True - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. 
- """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - rfp_inplanes=None, - **kwargs): - self.block = block - assert downsample_first, f'downsample_first={downsample_first} is ' \ - 'not supported in DetectoRS' - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down and stride != 1: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - rfp_inplanes=rfp_inplanes, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - super(ResLayer, self).__init__(*layers) - - -@BACKBONES.register_module() -class DetectoRS_ResNet(ResNet): - """ResNet backbone for DetectoRS. - - Args: - sac (dict, optional): Dictionary to construct SAC (Switchable Atrous - Convolution). Default: None. - stage_with_sac (list): Which stage to use sac. Default: (False, False, - False, False). - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. - output_img (bool): If ``True``, the input image will be inserted into - the starting position of output. Default: False. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - sac=None, - stage_with_sac=(False, False, False, False), - rfp_inplanes=None, - output_img=False, - pretrained=None, - init_cfg=None, - **kwargs): - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - self.pretrained = pretrained - if init_cfg is not None: - assert isinstance(init_cfg, dict), \ - f'init_cfg must be a dict, but got {type(init_cfg)}' - if 'type' in init_cfg: - assert init_cfg.get('type') == 'Pretrained', \ - 'Only can initialize module by loading a pretrained model' - else: - raise KeyError('`init_cfg` must contain the key "type"') - self.pretrained = init_cfg.get('checkpoint') - self.sac = sac - self.stage_with_sac = stage_with_sac - self.rfp_inplanes = rfp_inplanes - self.output_img = output_img - super(DetectoRS_ResNet, self).__init__(**kwargs) - - self.inplanes = self.stem_channels - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = self.strides[i] - dilation = self.dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - sac = self.sac if self.stage_with_sac[i] else None - if self.plugins is not None: - stage_plugins = self.make_stage_plugins(self.plugins, i) - else: - stage_plugins = None - planes = self.base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=self.with_cp, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=dcn, - sac=sac, - rfp_inplanes=rfp_inplanes if i > 0 else None, - plugins=stage_plugins) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - # In order to be properly initialized by RFP - def init_weights(self): - # Calling this method will cause parameter initialization exception - # super(DetectoRS_ResNet, self).init_weights() - - if isinstance(self.pretrained, str): - logger = get_root_logger() - load_checkpoint(self, self.pretrained, strict=False, logger=logger) - elif self.pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - if self.dcn is not None: - for m in self.modules(): - if isinstance(m, Bottleneck) and hasattr( - m.conv2, 'conv_offset'): - constant_init(m.conv2.conv_offset, 0) - - if self.zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - constant_init(m.norm3, 0) - elif isinstance(m, BasicBlock): - constant_init(m.norm2, 0) - else: - raise TypeError('pretrained must be a str or None') - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer`` for DetectoRS.""" - return ResLayer(**kwargs) - - def forward(self, x): - """Forward function.""" - outs = list(super(DetectoRS_ResNet, self).forward(x)) - if self.output_img: - outs.insert(0, x) - return tuple(outs) - - def rfp_forward(self, x, rfp_feats): - """Forward function for RFP.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - rfp_feat = rfp_feats[i] 
if i > 0 else None - for layer in res_layer: - x = layer.rfp_forward(x, rfp_feat) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/detectors_resnext.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/detectors_resnext.py deleted file mode 100644 index 5e8b20a0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/detectors_resnext.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from .detectors_resnet import Bottleneck as _Bottleneck -from .detectors_resnet import DetectoRS_ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - elif not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@BACKBONES.register_module() -class DetectoRS_ResNeXt(DetectoRS_ResNet): - """ResNeXt backbone for DetectoRS. - - Args: - groups (int): The number of groups in ResNeXt. - base_width (int): The base width of ResNeXt. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(DetectoRS_ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - return super().make_res_layer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/efficientnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/efficientnet.py deleted file mode 100644 index 7ee35956..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/efficientnet.py +++ /dev/null @@ -1,417 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import math -from functools import partial - -import torch -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn.bricks import ConvModule, DropPath -from mmcv.runner import BaseModule, Sequential - -from ..builder import BACKBONES -from ..utils import InvertedResidual, SELayer, make_divisible - - -class EdgeResidual(BaseModule): - """Edge Residual Block. - - Args: - in_channels (int): The input channels of this module. - out_channels (int): The output channels of this module. - mid_channels (int): The input channels of the second convolution. - kernel_size (int): The kernel size of the first convolution. - Defaults to 3. - stride (int): The stride of the first convolution. Defaults to 1. - se_cfg (dict, optional): Config dict for se layer. Defaults to None, - which means no se layer. - with_residual (bool): Use residual connection. Defaults to True. - conv_cfg (dict, optional): Config dict for convolution layer. - Defaults to None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Defaults to ``dict(type='BN')``. - act_cfg (dict): Config dict for activation layer. - Defaults to ``dict(type='ReLU')``. - drop_path_rate (float): stochastic depth rate. Defaults to 0. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Defaults to False. - init_cfg (dict | list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - out_channels, - mid_channels, - kernel_size=3, - stride=1, - se_cfg=None, - with_residual=True, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - drop_path_rate=0., - with_cp=False, - init_cfg=None, - **kwargs): - super(EdgeResidual, self).__init__(init_cfg=init_cfg) - assert stride in [1, 2] - self.with_cp = with_cp - self.drop_path = DropPath( - drop_path_rate) if drop_path_rate > 0 else nn.Identity() - self.with_se = se_cfg is not None - self.with_residual = ( - stride == 1 and in_channels == out_channels and with_residual) - - if self.with_se: - assert isinstance(se_cfg, dict) - - self.conv1 = ConvModule( - in_channels=in_channels, - out_channels=mid_channels, - kernel_size=kernel_size, - stride=1, - padding=kernel_size // 2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - if self.with_se: - self.se = SELayer(**se_cfg) - - self.conv2 = ConvModule( - in_channels=mid_channels, - out_channels=out_channels, - kernel_size=1, - stride=stride, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, x): - - def _inner_forward(x): - out = x - out = self.conv1(out) - - if self.with_se: - out = self.se(out) - - out = self.conv2(out) - - if self.with_residual: - return x + self.drop_path(out) - else: - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -def model_scaling(layer_setting, arch_setting): - """Scaling operation to the layer's parameters according to the - arch_setting.""" - # scale width - new_layer_setting = copy.deepcopy(layer_setting) - for layer_cfg in new_layer_setting: - for block_cfg in layer_cfg: - block_cfg[1] = make_divisible(block_cfg[1] * arch_setting[0], 8) - - # scale depth - split_layer_setting = [new_layer_setting[0]] - for layer_cfg in new_layer_setting[1:-1]: - tmp_index = [0] - for i in range(len(layer_cfg) - 1): - if layer_cfg[i + 1][1] != layer_cfg[i][1]: - tmp_index.append(i + 1) - tmp_index.append(len(layer_cfg)) - for i in range(len(tmp_index) - 1): - split_layer_setting.append(layer_cfg[tmp_index[i]:tmp_index[i + - 1]]) - split_layer_setting.append(new_layer_setting[-1]) - - num_of_layers = [len(layer_cfg) for layer_cfg in split_layer_setting[1:-1]] - new_layers = [ - int(math.ceil(arch_setting[1] * num)) for num in num_of_layers - ] - - merge_layer_setting = [split_layer_setting[0]] - for i, layer_cfg in enumerate(split_layer_setting[1:-1]): - if new_layers[i] <= num_of_layers[i]: - tmp_layer_cfg = layer_cfg[:new_layers[i]] - else: - tmp_layer_cfg = copy.deepcopy(layer_cfg) + [layer_cfg[-1]] * ( - new_layers[i] - num_of_layers[i]) - if tmp_layer_cfg[0][3] == 1 and i != 0: - merge_layer_setting[-1] += tmp_layer_cfg.copy() - else: - merge_layer_setting.append(tmp_layer_cfg.copy()) - merge_layer_setting.append(split_layer_setting[-1]) - - return merge_layer_setting - - -@BACKBONES.register_module() -class EfficientNet(BaseModule): - """EfficientNet backbone. - - Args: - arch (str): Architecture of efficientnet. Defaults to b0. - out_indices (Sequence[int]): Output from which stages. - Defaults to (6, ). - frozen_stages (int): Stages to be frozen (all param fixed). - Defaults to 0, which means not freezing any parameters. - conv_cfg (dict): Config dict for convolution layer. - Defaults to None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Defaults to dict(type='BN'). 
- act_cfg (dict): Config dict for activation layer. - Defaults to dict(type='Swish'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Defaults to False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Defaults to False. - """ - - # Parameters to build layers. - # 'b' represents the architecture of normal EfficientNet family includes - # 'b0', 'b1', 'b2', 'b3', 'b4', 'b5', 'b6', 'b7', 'b8'. - # 'e' represents the architecture of EfficientNet-EdgeTPU including 'es', - # 'em', 'el'. - # 6 parameters are needed to construct a layer, From left to right: - # - kernel_size: The kernel size of the block - # - out_channel: The number of out_channels of the block - # - se_ratio: The sequeeze ratio of SELayer. - # - stride: The stride of the block - # - expand_ratio: The expand_ratio of the mid_channels - # - block_type: -1: Not a block, 0: InvertedResidual, 1: EdgeResidual - layer_settings = { - 'b': [[[3, 32, 0, 2, 0, -1]], - [[3, 16, 4, 1, 1, 0]], - [[3, 24, 4, 2, 6, 0], - [3, 24, 4, 1, 6, 0]], - [[5, 40, 4, 2, 6, 0], - [5, 40, 4, 1, 6, 0]], - [[3, 80, 4, 2, 6, 0], - [3, 80, 4, 1, 6, 0], - [3, 80, 4, 1, 6, 0], - [5, 112, 4, 1, 6, 0], - [5, 112, 4, 1, 6, 0], - [5, 112, 4, 1, 6, 0]], - [[5, 192, 4, 2, 6, 0], - [5, 192, 4, 1, 6, 0], - [5, 192, 4, 1, 6, 0], - [5, 192, 4, 1, 6, 0], - [3, 320, 4, 1, 6, 0]], - [[1, 1280, 0, 1, 0, -1]] - ], - 'e': [[[3, 32, 0, 2, 0, -1]], - [[3, 24, 0, 1, 3, 1]], - [[3, 32, 0, 2, 8, 1], - [3, 32, 0, 1, 8, 1]], - [[3, 48, 0, 2, 8, 1], - [3, 48, 0, 1, 8, 1], - [3, 48, 0, 1, 8, 1], - [3, 48, 0, 1, 8, 1]], - [[5, 96, 0, 2, 8, 0], - [5, 96, 0, 1, 8, 0], - [5, 96, 0, 1, 8, 0], - [5, 96, 0, 1, 8, 0], - [5, 96, 0, 1, 8, 0], - [5, 144, 0, 1, 8, 0], - [5, 144, 0, 1, 8, 0], - [5, 144, 0, 1, 8, 0], - [5, 144, 0, 1, 8, 0]], - [[5, 192, 0, 2, 8, 0], - [5, 192, 0, 1, 8, 0]], - [[1, 1280, 0, 1, 0, -1]] - ] - } # yapf: disable - - # Parameters to build different kinds of architecture. - # From left to right: scaling factor for width, scaling factor for depth, - # resolution. - arch_settings = { - 'b0': (1.0, 1.0, 224), - 'b1': (1.0, 1.1, 240), - 'b2': (1.1, 1.2, 260), - 'b3': (1.2, 1.4, 300), - 'b4': (1.4, 1.8, 380), - 'b5': (1.6, 2.2, 456), - 'b6': (1.8, 2.6, 528), - 'b7': (2.0, 3.1, 600), - 'b8': (2.2, 3.6, 672), - 'es': (1.0, 1.0, 224), - 'em': (1.0, 1.1, 240), - 'el': (1.2, 1.4, 300) - } - - def __init__(self, - arch='b0', - drop_path_rate=0., - out_indices=(6, ), - frozen_stages=0, - conv_cfg=dict(type='Conv2dAdaptivePadding'), - norm_cfg=dict(type='BN', eps=1e-3), - act_cfg=dict(type='Swish'), - norm_eval=False, - with_cp=False, - init_cfg=[ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - layer=['_BatchNorm', 'GroupNorm'], - val=1) - ]): - super(EfficientNet, self).__init__(init_cfg) - assert arch in self.arch_settings, \ - f'"{arch}" is not one of the arch_settings ' \ - f'({", ".join(self.arch_settings.keys())})' - self.arch_setting = self.arch_settings[arch] - self.layer_setting = self.layer_settings[arch[:1]] - for index in out_indices: - if index not in range(0, len(self.layer_setting)): - raise ValueError('the item in out_indices must in ' - f'range(0, {len(self.layer_setting)}). ' - f'But received {index}') - - if frozen_stages not in range(len(self.layer_setting) + 1): - raise ValueError('frozen_stages must be in range(0, ' - f'{len(self.layer_setting) + 1}). 
' - f'But received {frozen_stages}') - self.drop_path_rate = drop_path_rate - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - - self.layer_setting = model_scaling(self.layer_setting, - self.arch_setting) - block_cfg_0 = self.layer_setting[0][0] - block_cfg_last = self.layer_setting[-1][0] - self.in_channels = make_divisible(block_cfg_0[1], 8) - self.out_channels = block_cfg_last[1] - self.layers = nn.ModuleList() - self.layers.append( - ConvModule( - in_channels=3, - out_channels=self.in_channels, - kernel_size=block_cfg_0[0], - stride=block_cfg_0[3], - padding=block_cfg_0[0] // 2, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.make_layer() - # Avoid building unused layers in mmdetection. - if len(self.layers) < max(self.out_indices) + 1: - self.layers.append( - ConvModule( - in_channels=self.in_channels, - out_channels=self.out_channels, - kernel_size=block_cfg_last[0], - stride=block_cfg_last[3], - padding=block_cfg_last[0] // 2, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - def make_layer(self): - # Without the first and the final conv block. - layer_setting = self.layer_setting[1:-1] - - total_num_blocks = sum([len(x) for x in layer_setting]) - block_idx = 0 - dpr = [ - x.item() - for x in torch.linspace(0, self.drop_path_rate, total_num_blocks) - ] # stochastic depth decay rule - - for i, layer_cfg in enumerate(layer_setting): - # Avoid building unused layers in mmdetection. - if i > max(self.out_indices) - 1: - break - layer = [] - for i, block_cfg in enumerate(layer_cfg): - (kernel_size, out_channels, se_ratio, stride, expand_ratio, - block_type) = block_cfg - - mid_channels = int(self.in_channels * expand_ratio) - out_channels = make_divisible(out_channels, 8) - if se_ratio <= 0: - se_cfg = None - else: - # In mmdetection, the `divisor` is deleted to align - # the logic of SELayer with mmcls. - se_cfg = dict( - channels=mid_channels, - ratio=expand_ratio * se_ratio, - act_cfg=(self.act_cfg, dict(type='Sigmoid'))) - if block_type == 1: # edge tpu - if i > 0 and expand_ratio == 3: - with_residual = False - expand_ratio = 4 - else: - with_residual = True - mid_channels = int(self.in_channels * expand_ratio) - if se_cfg is not None: - # In mmdetection, the `divisor` is deleted to align - # the logic of SELayer with mmcls. - se_cfg = dict( - channels=mid_channels, - ratio=se_ratio * expand_ratio, - act_cfg=(self.act_cfg, dict(type='Sigmoid'))) - block = partial(EdgeResidual, with_residual=with_residual) - else: - block = InvertedResidual - layer.append( - block( - in_channels=self.in_channels, - out_channels=out_channels, - mid_channels=mid_channels, - kernel_size=kernel_size, - stride=stride, - se_cfg=se_cfg, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - drop_path_rate=dpr[block_idx], - with_cp=self.with_cp, - # In mmdetection, `with_expand_conv` is set to align - # the logic of InvertedResidual with mmcls. 
- with_expand_conv=(mid_channels != self.in_channels))) - self.in_channels = out_channels - block_idx += 1 - self.layers.append(Sequential(*layer)) - - def forward(self, x): - outs = [] - for i, layer in enumerate(self.layers): - x = layer(x) - if i in self.out_indices: - outs.append(x) - - return tuple(outs) - - def _freeze_stages(self): - for i in range(self.frozen_stages): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(EfficientNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/hourglass.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/hourglass.py deleted file mode 100644 index f0dfb434..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/hourglass.py +++ /dev/null @@ -1,222 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import BasicBlock - - -class HourglassModule(BaseModule): - """Hourglass Module for HourglassNet backbone. - - Generate module recursively and use BasicBlock as the base unit. - - Args: - depth (int): Depth of current HourglassModule. - stage_channels (list[int]): Feature channels of sub-modules in current - and follow-up HourglassModule. - stage_blocks (list[int]): Number of sub-modules stacked in current and - follow-up HourglassModule. - norm_cfg (dict): Dictionary to construct and config norm layer. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - upsample_cfg (dict, optional): Config dict for interpolate layer. - Default: `dict(mode='nearest')` - """ - - def __init__(self, - depth, - stage_channels, - stage_blocks, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=None, - upsample_cfg=dict(mode='nearest')): - super(HourglassModule, self).__init__(init_cfg) - - self.depth = depth - - cur_block = stage_blocks[0] - next_block = stage_blocks[1] - - cur_channel = stage_channels[0] - next_channel = stage_channels[1] - - self.up1 = ResLayer( - BasicBlock, cur_channel, cur_channel, cur_block, norm_cfg=norm_cfg) - - self.low1 = ResLayer( - BasicBlock, - cur_channel, - next_channel, - cur_block, - stride=2, - norm_cfg=norm_cfg) - - if self.depth > 1: - self.low2 = HourglassModule(depth - 1, stage_channels[1:], - stage_blocks[1:]) - else: - self.low2 = ResLayer( - BasicBlock, - next_channel, - next_channel, - next_block, - norm_cfg=norm_cfg) - - self.low3 = ResLayer( - BasicBlock, - next_channel, - cur_channel, - cur_block, - norm_cfg=norm_cfg, - downsample_first=False) - - self.up2 = F.interpolate - self.upsample_cfg = upsample_cfg - - def forward(self, x): - """Forward function.""" - up1 = self.up1(x) - low1 = self.low1(x) - low2 = self.low2(low1) - low3 = self.low3(low2) - # Fixing `scale factor` (e.g. 2) is common for upsampling, but - # in some cases the spatial size is mismatched and error will arise. - if 'scale_factor' in self.upsample_cfg: - up2 = self.up2(low3, **self.upsample_cfg) - else: - shape = up1.shape[2:] - up2 = self.up2(low3, size=shape, **self.upsample_cfg) - return up1 + up2 - - -@BACKBONES.register_module() -class HourglassNet(BaseModule): - """HourglassNet backbone. 
- - Stacked Hourglass Networks for Human Pose Estimation. - More details can be found in the `paper - `_ . - - Args: - downsample_times (int): Downsample times in a HourglassModule. - num_stacks (int): Number of HourglassModule modules stacked, - 1 for Hourglass-52, 2 for Hourglass-104. - stage_channels (list[int]): Feature channel of each sub-module in a - HourglassModule. - stage_blocks (list[int]): Number of sub-modules stacked in a - HourglassModule. - feat_channel (int): Feature channel of conv after a HourglassModule. - norm_cfg (dict): Dictionary to construct and config norm layer. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import HourglassNet - >>> import torch - >>> self = HourglassNet() - >>> self.eval() - >>> inputs = torch.rand(1, 3, 511, 511) - >>> level_outputs = self.forward(inputs) - >>> for level_output in level_outputs: - ... print(tuple(level_output.shape)) - (1, 256, 128, 128) - (1, 256, 128, 128) - """ - - def __init__(self, - downsample_times=5, - num_stacks=2, - stage_channels=(256, 256, 384, 384, 384, 512), - stage_blocks=(2, 2, 2, 2, 2, 4), - feat_channel=256, - norm_cfg=dict(type='BN', requires_grad=True), - pretrained=None, - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(HourglassNet, self).__init__(init_cfg) - - self.num_stacks = num_stacks - assert self.num_stacks >= 1 - assert len(stage_channels) == len(stage_blocks) - assert len(stage_channels) > downsample_times - - cur_channel = stage_channels[0] - - self.stem = nn.Sequential( - ConvModule( - 3, cur_channel // 2, 7, padding=3, stride=2, - norm_cfg=norm_cfg), - ResLayer( - BasicBlock, - cur_channel // 2, - cur_channel, - 1, - stride=2, - norm_cfg=norm_cfg)) - - self.hourglass_modules = nn.ModuleList([ - HourglassModule(downsample_times, stage_channels, stage_blocks) - for _ in range(num_stacks) - ]) - - self.inters = ResLayer( - BasicBlock, - cur_channel, - cur_channel, - num_stacks - 1, - norm_cfg=norm_cfg) - - self.conv1x1s = nn.ModuleList([ - ConvModule( - cur_channel, cur_channel, 1, norm_cfg=norm_cfg, act_cfg=None) - for _ in range(num_stacks - 1) - ]) - - self.out_convs = nn.ModuleList([ - ConvModule( - cur_channel, feat_channel, 3, padding=1, norm_cfg=norm_cfg) - for _ in range(num_stacks) - ]) - - self.remap_convs = nn.ModuleList([ - ConvModule( - feat_channel, cur_channel, 1, norm_cfg=norm_cfg, act_cfg=None) - for _ in range(num_stacks - 1) - ]) - - self.relu = nn.ReLU(inplace=True) - - def init_weights(self): - """Init module weights.""" - # Training Centripetal Model needs to reset parameters for Conv2d - super(HourglassNet, self).init_weights() - for m in self.modules(): - if isinstance(m, nn.Conv2d): - m.reset_parameters() - - def forward(self, x): - """Forward function.""" - inter_feat = self.stem(x) - out_feats = [] - - for ind in range(self.num_stacks): - single_hourglass = self.hourglass_modules[ind] - out_conv = self.out_convs[ind] - - hourglass_feat = single_hourglass(inter_feat) - out_feat = out_conv(hourglass_feat) - out_feats.append(out_feat) - - if ind < self.num_stacks - 1: - inter_feat = self.conv1x1s[ind]( - inter_feat) + self.remap_convs[ind]( - out_feat) - inter_feat = self.inters[ind](self.relu(inter_feat)) - - return out_feats diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/hrnet.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/hrnet.py deleted file mode 100644 index 06c210a6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/hrnet.py +++ /dev/null @@ -1,589 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule, ModuleList, Sequential -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from .resnet import BasicBlock, Bottleneck - - -class HRModule(BaseModule): - """High-Resolution Module for HRNet. - - In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/Exchange - is in this module. - """ - - def __init__(self, - num_branches, - blocks, - num_blocks, - in_channels, - num_channels, - multiscale_output=True, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - block_init_cfg=None, - init_cfg=None): - super(HRModule, self).__init__(init_cfg) - self.block_init_cfg = block_init_cfg - self._check_branches(num_branches, num_blocks, in_channels, - num_channels) - - self.in_channels = in_channels - self.num_branches = num_branches - - self.multiscale_output = multiscale_output - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - self.with_cp = with_cp - self.branches = self._make_branches(num_branches, blocks, num_blocks, - num_channels) - self.fuse_layers = self._make_fuse_layers() - self.relu = nn.ReLU(inplace=False) - - def _check_branches(self, num_branches, num_blocks, in_channels, - num_channels): - if num_branches != len(num_blocks): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_BLOCKS({len(num_blocks)})' - raise ValueError(error_msg) - - if num_branches != len(num_channels): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_CHANNELS({len(num_channels)})' - raise ValueError(error_msg) - - if num_branches != len(in_channels): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_INCHANNELS({len(in_channels)})' - raise ValueError(error_msg) - - def _make_one_branch(self, - branch_index, - block, - num_blocks, - num_channels, - stride=1): - downsample = None - if stride != 1 or \ - self.in_channels[branch_index] != \ - num_channels[branch_index] * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - self.in_channels[branch_index], - num_channels[branch_index] * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, num_channels[branch_index] * - block.expansion)[1]) - - layers = [] - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=self.block_init_cfg)) - self.in_channels[branch_index] = \ - num_channels[branch_index] * block.expansion - for i in range(1, num_blocks[branch_index]): - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=self.block_init_cfg)) - - return Sequential(*layers) - - def _make_branches(self, num_branches, block, num_blocks, num_channels): - branches = [] - - for i in range(num_branches): - branches.append( - self._make_one_branch(i, block, num_blocks, num_channels)) - - return ModuleList(branches) - - def _make_fuse_layers(self): - if self.num_branches == 1: - return None - - num_branches = 
self.num_branches - in_channels = self.in_channels - fuse_layers = [] - num_out_branches = num_branches if self.multiscale_output else 1 - for i in range(num_out_branches): - fuse_layer = [] - for j in range(num_branches): - if j > i: - fuse_layer.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=1, - stride=1, - padding=0, - bias=False), - build_norm_layer(self.norm_cfg, in_channels[i])[1], - nn.Upsample( - scale_factor=2**(j - i), mode='nearest'))) - elif j == i: - fuse_layer.append(None) - else: - conv_downsamples = [] - for k in range(i - j): - if k == i - j - 1: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[i])[1])) - else: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[j], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[j])[1], - nn.ReLU(inplace=False))) - fuse_layer.append(nn.Sequential(*conv_downsamples)) - fuse_layers.append(nn.ModuleList(fuse_layer)) - - return nn.ModuleList(fuse_layers) - - def forward(self, x): - """Forward function.""" - if self.num_branches == 1: - return [self.branches[0](x[0])] - - for i in range(self.num_branches): - x[i] = self.branches[i](x[i]) - - x_fuse = [] - for i in range(len(self.fuse_layers)): - y = 0 - for j in range(self.num_branches): - if i == j: - y += x[j] - else: - y += self.fuse_layers[i][j](x[j]) - x_fuse.append(self.relu(y)) - return x_fuse - - -@BACKBONES.register_module() -class HRNet(BaseModule): - """HRNet backbone. - - `High-Resolution Representations for Labeling Pixels and Regions - arXiv: `_. - - Args: - extra (dict): Detailed configuration for each stage of HRNet. - There must be 4 stages, the configuration for each stage must have - 5 keys: - - - num_modules(int): The number of HRModule in this stage. - - num_branches(int): The number of branches in the HRModule. - - block(str): The type of convolution block. - - num_blocks(tuple): The number of blocks in each branch. - The length must be equal to num_branches. - - num_channels(tuple): The number of channels in each branch. - The length must be equal to num_branches. - in_channels (int): Number of input image channels. Default: 3. - conv_cfg (dict): Dictionary to construct and config conv layer. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: True. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. Default: False. - multiscale_output (bool): Whether to output multi-level features - produced by multiple branches. If False, only the first level - feature will be output. Default: True. - pretrained (str, optional): Model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- - Example: - >>> from mmdet.models import HRNet - >>> import torch - >>> extra = dict( - >>> stage1=dict( - >>> num_modules=1, - >>> num_branches=1, - >>> block='BOTTLENECK', - >>> num_blocks=(4, ), - >>> num_channels=(64, )), - >>> stage2=dict( - >>> num_modules=1, - >>> num_branches=2, - >>> block='BASIC', - >>> num_blocks=(4, 4), - >>> num_channels=(32, 64)), - >>> stage3=dict( - >>> num_modules=4, - >>> num_branches=3, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4), - >>> num_channels=(32, 64, 128)), - >>> stage4=dict( - >>> num_modules=3, - >>> num_branches=4, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4, 4), - >>> num_channels=(32, 64, 128, 256))) - >>> self = HRNet(extra, in_channels=1) - >>> self.eval() - >>> inputs = torch.rand(1, 1, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 32, 8, 8) - (1, 64, 4, 4) - (1, 128, 2, 2) - (1, 256, 1, 1) - """ - - blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck} - - def __init__(self, - extra, - in_channels=3, - conv_cfg=None, - norm_cfg=dict(type='BN'), - norm_eval=True, - with_cp=False, - zero_init_residual=False, - multiscale_output=True, - pretrained=None, - init_cfg=None): - super(HRNet, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - # Assert configurations of 4 stages are in extra - assert 'stage1' in extra and 'stage2' in extra \ - and 'stage3' in extra and 'stage4' in extra - # Assert whether the length of `num_blocks` and `num_channels` are - # equal to `num_branches` - for i in range(4): - cfg = extra[f'stage{i + 1}'] - assert len(cfg['num_blocks']) == cfg['num_branches'] and \ - len(cfg['num_channels']) == cfg['num_branches'] - - self.extra = extra - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - self.zero_init_residual = zero_init_residual - - # stem net - self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1) - self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2) - - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - self.conv_cfg, - 64, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.relu = nn.ReLU(inplace=True) - - # stage 1 - self.stage1_cfg = self.extra['stage1'] - num_channels = self.stage1_cfg['num_channels'][0] - block_type = self.stage1_cfg['block'] - num_blocks = self.stage1_cfg['num_blocks'][0] - - block = self.blocks_dict[block_type] - stage1_out_channels = num_channels * block.expansion - self.layer1 = self._make_layer(block, 64, num_channels, num_blocks) - - # stage 2 - self.stage2_cfg = self.extra['stage2'] - num_channels = self.stage2_cfg['num_channels'] - block_type = self.stage2_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = 
[channel * block.expansion for channel in num_channels] - self.transition1 = self._make_transition_layer([stage1_out_channels], - num_channels) - self.stage2, pre_stage_channels = self._make_stage( - self.stage2_cfg, num_channels) - - # stage 3 - self.stage3_cfg = self.extra['stage3'] - num_channels = self.stage3_cfg['num_channels'] - block_type = self.stage3_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition2 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage3, pre_stage_channels = self._make_stage( - self.stage3_cfg, num_channels) - - # stage 4 - self.stage4_cfg = self.extra['stage4'] - num_channels = self.stage4_cfg['num_channels'] - block_type = self.stage4_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition3 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage4, pre_stage_channels = self._make_stage( - self.stage4_cfg, num_channels, multiscale_output=multiscale_output) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: the normalization layer named "norm2" """ - return getattr(self, self.norm2_name) - - def _make_transition_layer(self, num_channels_pre_layer, - num_channels_cur_layer): - num_branches_cur = len(num_channels_cur_layer) - num_branches_pre = len(num_channels_pre_layer) - - transition_layers = [] - for i in range(num_branches_cur): - if i < num_branches_pre: - if num_channels_cur_layer[i] != num_channels_pre_layer[i]: - transition_layers.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - num_channels_pre_layer[i], - num_channels_cur_layer[i], - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - num_channels_cur_layer[i])[1], - nn.ReLU(inplace=True))) - else: - transition_layers.append(None) - else: - conv_downsamples = [] - for j in range(i + 1 - num_branches_pre): - in_channels = num_channels_pre_layer[-1] - out_channels = num_channels_cur_layer[i] \ - if j == i - num_branches_pre else in_channels - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - out_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, out_channels)[1], - nn.ReLU(inplace=True))) - transition_layers.append(nn.Sequential(*conv_downsamples)) - - return nn.ModuleList(transition_layers) - - def _make_layer(self, block, inplanes, planes, blocks, stride=1): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, planes * block.expansion)[1]) - - layers = [] - block_init_cfg = None - if self.pretrained is None and not hasattr( - self, 'init_cfg') and self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm3')) - layers.append( - block( - inplanes, - planes, - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=block_init_cfg, - )) - inplanes = planes * 
block.expansion - for i in range(1, blocks): - layers.append( - block( - inplanes, - planes, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=block_init_cfg)) - - return Sequential(*layers) - - def _make_stage(self, layer_config, in_channels, multiscale_output=True): - num_modules = layer_config['num_modules'] - num_branches = layer_config['num_branches'] - num_blocks = layer_config['num_blocks'] - num_channels = layer_config['num_channels'] - block = self.blocks_dict[layer_config['block']] - - hr_modules = [] - block_init_cfg = None - if self.pretrained is None and not hasattr( - self, 'init_cfg') and self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm3')) - - for i in range(num_modules): - # multi_scale_output is only used for the last module - if not multiscale_output and i == num_modules - 1: - reset_multiscale_output = False - else: - reset_multiscale_output = True - - hr_modules.append( - HRModule( - num_branches, - block, - num_blocks, - in_channels, - num_channels, - reset_multiscale_output, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - block_init_cfg=block_init_cfg)) - - return Sequential(*hr_modules), in_channels - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.conv2(x) - x = self.norm2(x) - x = self.relu(x) - x = self.layer1(x) - - x_list = [] - for i in range(self.stage2_cfg['num_branches']): - if self.transition1[i] is not None: - x_list.append(self.transition1[i](x)) - else: - x_list.append(x) - y_list = self.stage2(x_list) - - x_list = [] - for i in range(self.stage3_cfg['num_branches']): - if self.transition2[i] is not None: - x_list.append(self.transition2[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage3(x_list) - - x_list = [] - for i in range(self.stage4_cfg['num_branches']): - if self.transition3[i] is not None: - x_list.append(self.transition3[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage4(x_list) - - return y_list - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(HRNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/mobilenet_v2.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/mobilenet_v2.py deleted file mode 100644 index 8c6fcfaa..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/mobilenet_v2.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidual, make_divisible - - -@BACKBONES.register_module() -class MobileNetV2(BaseModule): - """MobileNetV2 backbone. - - Args: - widen_factor (float): Width multiplier, multiply number of - channels in each layer by this amount. Default: 1.0. - out_indices (Sequence[int], optional): Output from which stages. - Default: (1, 2, 4, 7). 
- frozen_stages (int): Stages to be frozen (all param fixed). - Default: -1, which means not freezing any parameters. - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU6'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - # Parameters to build layers. 4 parameters are needed to construct a - # layer, from left to right: expand_ratio, channel, num_blocks, stride. - arch_settings = [[1, 16, 1, 1], [6, 24, 2, 2], [6, 32, 3, 2], - [6, 64, 4, 2], [6, 96, 3, 1], [6, 160, 3, 2], - [6, 320, 1, 1]] - - def __init__(self, - widen_factor=1., - out_indices=(1, 2, 4, 7), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU6'), - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - super(MobileNetV2, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - self.widen_factor = widen_factor - self.out_indices = out_indices - if not set(out_indices).issubset(set(range(0, 8))): - raise ValueError('out_indices must be a subset of range' - f'(0, 8). But received {out_indices}') - - if frozen_stages not in range(-1, 8): - raise ValueError('frozen_stages must be in range(-1, 8). 
' - f'But received {frozen_stages}') - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - - self.in_channels = make_divisible(32 * widen_factor, 8) - - self.conv1 = ConvModule( - in_channels=3, - out_channels=self.in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.layers = [] - - for i, layer_cfg in enumerate(self.arch_settings): - expand_ratio, channel, num_blocks, stride = layer_cfg - out_channels = make_divisible(channel * widen_factor, 8) - inverted_res_layer = self.make_layer( - out_channels=out_channels, - num_blocks=num_blocks, - stride=stride, - expand_ratio=expand_ratio) - layer_name = f'layer{i + 1}' - self.add_module(layer_name, inverted_res_layer) - self.layers.append(layer_name) - - if widen_factor > 1.0: - self.out_channel = int(1280 * widen_factor) - else: - self.out_channel = 1280 - - layer = ConvModule( - in_channels=self.in_channels, - out_channels=self.out_channel, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.add_module('conv2', layer) - self.layers.append('conv2') - - def make_layer(self, out_channels, num_blocks, stride, expand_ratio): - """Stack InvertedResidual blocks to build a layer for MobileNetV2. - - Args: - out_channels (int): out_channels of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - expand_ratio (int): Expand the number of channels of the - hidden layer in InvertedResidual by this ratio. Default: 6. - """ - layers = [] - for i in range(num_blocks): - if i >= 1: - stride = 1 - layers.append( - InvertedResidual( - self.in_channels, - out_channels, - mid_channels=int(round(self.in_channels * expand_ratio)), - stride=stride, - with_expand_conv=expand_ratio != 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - with_cp=self.with_cp)) - self.in_channels = out_channels - - return nn.Sequential(*layers) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for i in range(1, self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - frozen.""" - super(MobileNetV2, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/pvt.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/pvt.py deleted file mode 100644 index 8b7d5d53..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/pvt.py +++ /dev/null @@ -1,591 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math -import warnings - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (Conv2d, build_activation_layer, build_norm_layer, - constant_init, normal_init, trunc_normal_init) -from mmcv.cnn.bricks.drop import build_dropout -from mmcv.cnn.bricks.transformer import MultiheadAttention -from mmcv.cnn.utils.weight_init import trunc_normal_ -from mmcv.runner import (BaseModule, ModuleList, Sequential, _load_checkpoint, - load_state_dict) -from torch.nn.modules.utils import _pair as to_2tuple - -from ...utils import get_root_logger -from ..builder import BACKBONES -from ..utils import PatchEmbed, nchw_to_nlc, nlc_to_nchw, pvt_convert - - -class MixFFN(BaseModule): - """An implementation of MixFFN of PVT. - - The differences between MixFFN & FFN: - 1. Use 1X1 Conv to replace Linear layer. - 2. Introduce 3X3 Depth-wise Conv to encode positional information. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. - feedforward_channels (int): The hidden dimension of FFNs. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='GELU'). - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - Default: None. - use_conv (bool): If True, add 3x3 DWConv between two Linear layers. - Defaults: False. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - feedforward_channels, - act_cfg=dict(type='GELU'), - ffn_drop=0., - dropout_layer=None, - use_conv=False, - init_cfg=None): - super(MixFFN, self).__init__(init_cfg=init_cfg) - - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.act_cfg = act_cfg - activate = build_activation_layer(act_cfg) - - in_channels = embed_dims - fc1 = Conv2d( - in_channels=in_channels, - out_channels=feedforward_channels, - kernel_size=1, - stride=1, - bias=True) - if use_conv: - # 3x3 depth wise conv to provide positional encode information - dw_conv = Conv2d( - in_channels=feedforward_channels, - out_channels=feedforward_channels, - kernel_size=3, - stride=1, - padding=(3 - 1) // 2, - bias=True, - groups=feedforward_channels) - fc2 = Conv2d( - in_channels=feedforward_channels, - out_channels=in_channels, - kernel_size=1, - stride=1, - bias=True) - drop = nn.Dropout(ffn_drop) - layers = [fc1, activate, drop, fc2, drop] - if use_conv: - layers.insert(1, dw_conv) - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - - def forward(self, x, hw_shape, identity=None): - out = nlc_to_nchw(x, hw_shape) - out = self.layers(out) - out = nchw_to_nlc(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -class SpatialReductionAttention(MultiheadAttention): - """An implementation of Spatial Reduction Attention of PVT. - - This module is modified from MultiheadAttention which is a module from - mmcv.cnn.bricks.transformer. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. Default: None. 
- batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Default: False. - qkv_bias (bool): enable bias for qkv if True. Default: True. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - sr_ratio (int): The ratio of spatial reduction of Spatial Reduction - Attention of PVT. Default: 1. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=None, - batch_first=True, - qkv_bias=True, - norm_cfg=dict(type='LN'), - sr_ratio=1, - init_cfg=None): - super().__init__( - embed_dims, - num_heads, - attn_drop, - proj_drop, - batch_first=batch_first, - dropout_layer=dropout_layer, - bias=qkv_bias, - init_cfg=init_cfg) - - self.sr_ratio = sr_ratio - if sr_ratio > 1: - self.sr = Conv2d( - in_channels=embed_dims, - out_channels=embed_dims, - kernel_size=sr_ratio, - stride=sr_ratio) - # The ret[0] of build_norm_layer is norm name. - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - - # handle the BC-breaking from https://github.com/open-mmlab/mmcv/pull/1418 # noqa - from mmdet import digit_version, mmcv_version - if mmcv_version < digit_version('1.3.17'): - warnings.warn('The legacy version of forward function in' - 'SpatialReductionAttention is deprecated in' - 'mmcv>=1.3.17 and will no longer support in the' - 'future. Please upgrade your mmcv.') - self.forward = self.legacy_forward - - def forward(self, x, hw_shape, identity=None): - - x_q = x - if self.sr_ratio > 1: - x_kv = nlc_to_nchw(x, hw_shape) - x_kv = self.sr(x_kv) - x_kv = nchw_to_nlc(x_kv) - x_kv = self.norm(x_kv) - else: - x_kv = x - - if identity is None: - identity = x_q - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - x_q = x_q.transpose(0, 1) - x_kv = x_kv.transpose(0, 1) - - out = self.attn(query=x_q, key=x_kv, value=x_kv)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - def legacy_forward(self, x, hw_shape, identity=None): - """multi head attention forward in mmcv version < 1.3.17.""" - x_q = x - if self.sr_ratio > 1: - x_kv = nlc_to_nchw(x, hw_shape) - x_kv = self.sr(x_kv) - x_kv = nchw_to_nlc(x_kv) - x_kv = self.norm(x_kv) - else: - x_kv = x - - if identity is None: - identity = x_q - - out = self.attn(query=x_q, key=x_kv, value=x_kv)[0] - - return identity + self.dropout_layer(self.proj_drop(out)) - - -class PVTEncoderLayer(BaseModule): - """Implements one encoder layer in PVT. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - drop_rate (float): Probability of an element to be zeroed. - after the feed forward layer. Default: 0.0. - attn_drop_rate (float): The drop out rate for attention layer. - Default: 0.0. - drop_path_rate (float): stochastic depth rate. Default: 0.0. - qkv_bias (bool): enable bias for qkv if True. - Default: True. - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). 
- sr_ratio (int): The ratio of spatial reduction of Spatial Reduction - Attention of PVT. Default: 1. - use_conv_ffn (bool): If True, use Convolutional FFN to replace FFN. - Default: False. - init_cfg (dict, optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - qkv_bias=True, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - sr_ratio=1, - use_conv_ffn=False, - init_cfg=None): - super(PVTEncoderLayer, self).__init__(init_cfg=init_cfg) - - # The ret[0] of build_norm_layer is norm name. - self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] - - self.attn = SpatialReductionAttention( - embed_dims=embed_dims, - num_heads=num_heads, - attn_drop=attn_drop_rate, - proj_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - qkv_bias=qkv_bias, - norm_cfg=norm_cfg, - sr_ratio=sr_ratio) - - # The ret[0] of build_norm_layer is norm name. - self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] - - self.ffn = MixFFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - use_conv=use_conv_ffn, - act_cfg=act_cfg) - - def forward(self, x, hw_shape): - x = self.attn(self.norm1(x), hw_shape, identity=x) - x = self.ffn(self.norm2(x), hw_shape, identity=x) - - return x - - -class AbsolutePositionEmbedding(BaseModule): - """An implementation of the absolute position embedding in PVT. - - Args: - pos_shape (int): The shape of the absolute position embedding. - pos_dim (int): The dimension of the absolute position embedding. - drop_rate (float): Probability of an element to be zeroed. - Default: 0.0. - """ - - def __init__(self, pos_shape, pos_dim, drop_rate=0., init_cfg=None): - super().__init__(init_cfg=init_cfg) - - if isinstance(pos_shape, int): - pos_shape = to_2tuple(pos_shape) - elif isinstance(pos_shape, tuple): - if len(pos_shape) == 1: - pos_shape = to_2tuple(pos_shape[0]) - assert len(pos_shape) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(pos_shape)}' - self.pos_shape = pos_shape - self.pos_dim = pos_dim - - self.pos_embed = nn.Parameter( - torch.zeros(1, pos_shape[0] * pos_shape[1], pos_dim)) - self.drop = nn.Dropout(p=drop_rate) - - def init_weights(self): - trunc_normal_(self.pos_embed, std=0.02) - - def resize_pos_embed(self, pos_embed, input_shape, mode='bilinear'): - """Resize pos_embed weights. - - Resize pos_embed using bilinear interpolate method. - - Args: - pos_embed (torch.Tensor): Position embedding weights. - input_shape (tuple): Tuple for (downsampled input image height, - downsampled input image width). - mode (str): Algorithm used for upsampling: - ``'nearest'`` | ``'linear'`` | ``'bilinear'`` | ``'bicubic'`` | - ``'trilinear'``. Default: ``'bilinear'``. - - Return: - torch.Tensor: The resized pos_embed of shape [B, L_new, C]. 
- """ - assert pos_embed.ndim == 3, 'shape of pos_embed must be [B, L, C]' - pos_h, pos_w = self.pos_shape - pos_embed_weight = pos_embed[:, (-1 * pos_h * pos_w):] - pos_embed_weight = pos_embed_weight.reshape( - 1, pos_h, pos_w, self.pos_dim).permute(0, 3, 1, 2).contiguous() - pos_embed_weight = F.interpolate( - pos_embed_weight, size=input_shape, mode=mode) - pos_embed_weight = torch.flatten(pos_embed_weight, - 2).transpose(1, 2).contiguous() - pos_embed = pos_embed_weight - - return pos_embed - - def forward(self, x, hw_shape, mode='bilinear'): - pos_embed = self.resize_pos_embed(self.pos_embed, hw_shape, mode) - return self.drop(x + pos_embed) - - -@BACKBONES.register_module() -class PyramidVisionTransformer(BaseModule): - """Pyramid Vision Transformer (PVT) - - Implementation of `Pyramid Vision Transformer: A Versatile Backbone for - Dense Prediction without Convolutions - `_. - - Args: - pretrain_img_size (int | tuple[int]): The size of input image when - pretrain. Defaults: 224. - in_channels (int): Number of input channels. Default: 3. - embed_dims (int): Embedding dimension. Default: 64. - num_stags (int): The num of stages. Default: 4. - num_layers (Sequence[int]): The layer number of each transformer encode - layer. Default: [3, 4, 6, 3]. - num_heads (Sequence[int]): The attention heads of each transformer - encode layer. Default: [1, 2, 5, 8]. - patch_sizes (Sequence[int]): The patch_size of each patch embedding. - Default: [4, 2, 2, 2]. - strides (Sequence[int]): The stride of each patch embedding. - Default: [4, 2, 2, 2]. - paddings (Sequence[int]): The padding of each patch embedding. - Default: [0, 0, 0, 0]. - sr_ratios (Sequence[int]): The spatial reduction rate of each - transformer encode layer. Default: [8, 4, 2, 1]. - out_indices (Sequence[int] | int): Output from which stages. - Default: (0, 1, 2, 3). - mlp_ratios (Sequence[int]): The ratio of the mlp hidden dim to the - embedding dim of each transformer encode layer. - Default: [8, 8, 4, 4]. - qkv_bias (bool): Enable bias for qkv if True. Default: True. - drop_rate (float): Probability of an element to be zeroed. - Default 0.0. - attn_drop_rate (float): The drop out rate for attention layer. - Default 0.0. - drop_path_rate (float): stochastic depth rate. Default 0.1. - use_abs_pos_embed (bool): If True, add absolute position embedding to - the patch embedding. Defaults: True. - use_conv_ffn (bool): If True, use Convolutional FFN to replace FFN. - Default: False. - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - pretrained (str, optional): model pretrained path. Default: None. - convert_weights (bool): The flag indicates whether the - pre-trained model is from the original repo. We may need - to convert some keys to make it compatible. - Default: True. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - pretrain_img_size=224, - in_channels=3, - embed_dims=64, - num_stages=4, - num_layers=[3, 4, 6, 3], - num_heads=[1, 2, 5, 8], - patch_sizes=[4, 2, 2, 2], - strides=[4, 2, 2, 2], - paddings=[0, 0, 0, 0], - sr_ratios=[8, 4, 2, 1], - out_indices=(0, 1, 2, 3), - mlp_ratios=[8, 8, 4, 4], - qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1, - use_abs_pos_embed=True, - norm_after_stage=False, - use_conv_ffn=False, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN', eps=1e-6), - pretrained=None, - convert_weights=True, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - - self.convert_weights = convert_weights - if isinstance(pretrain_img_size, int): - pretrain_img_size = to_2tuple(pretrain_img_size) - elif isinstance(pretrain_img_size, tuple): - if len(pretrain_img_size) == 1: - pretrain_img_size = to_2tuple(pretrain_img_size[0]) - assert len(pretrain_img_size) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(pretrain_img_size)}' - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - self.init_cfg = init_cfg - else: - raise TypeError('pretrained must be a str or None') - - self.embed_dims = embed_dims - - self.num_stages = num_stages - self.num_layers = num_layers - self.num_heads = num_heads - self.patch_sizes = patch_sizes - self.strides = strides - self.sr_ratios = sr_ratios - assert num_stages == len(num_layers) == len(num_heads) \ - == len(patch_sizes) == len(strides) == len(sr_ratios) - - self.out_indices = out_indices - assert max(out_indices) < self.num_stages - self.pretrained = pretrained - - # transformer encoder - dpr = [ - x.item() - for x in torch.linspace(0, drop_path_rate, sum(num_layers)) - ] # stochastic num_layer decay rule - - cur = 0 - self.layers = ModuleList() - for i, num_layer in enumerate(num_layers): - embed_dims_i = embed_dims * num_heads[i] - patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims_i, - kernel_size=patch_sizes[i], - stride=strides[i], - padding=paddings[i], - bias=True, - norm_cfg=norm_cfg) - - layers = ModuleList() - if use_abs_pos_embed: - pos_shape = pretrain_img_size // np.prod(patch_sizes[:i + 1]) - pos_embed = AbsolutePositionEmbedding( - pos_shape=pos_shape, - pos_dim=embed_dims_i, - drop_rate=drop_rate) - layers.append(pos_embed) - layers.extend([ - PVTEncoderLayer( - embed_dims=embed_dims_i, - num_heads=num_heads[i], - feedforward_channels=mlp_ratios[i] * embed_dims_i, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[cur + idx], - qkv_bias=qkv_bias, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - sr_ratio=sr_ratios[i], - use_conv_ffn=use_conv_ffn) for idx in range(num_layer) - ]) - in_channels = embed_dims_i - # The ret[0] of build_norm_layer is norm name. - if norm_after_stage: - norm = build_norm_layer(norm_cfg, embed_dims_i)[1] - else: - norm = nn.Identity() - self.layers.append(ModuleList([patch_embed, layers, norm])) - cur += num_layer - - def init_weights(self): - logger = get_root_logger() - if self.init_cfg is None: - logger.warn(f'No pre-trained weights for ' - f'{self.__class__.__name__}, ' - f'training start from scratch') - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) 
- elif isinstance(m, nn.LayerNorm): - constant_init(m, 1.0) - elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[ - 1] * m.out_channels - fan_out //= m.groups - normal_init(m, 0, math.sqrt(2.0 / fan_out)) - elif isinstance(m, AbsolutePositionEmbedding): - m.init_weights() - else: - assert 'checkpoint' in self.init_cfg, f'Only support ' \ - f'specify `Pretrained` in ' \ - f'`init_cfg` in ' \ - f'{self.__class__.__name__} ' - checkpoint = _load_checkpoint( - self.init_cfg.checkpoint, logger=logger, map_location='cpu') - logger.warn(f'Load pre-trained model for ' - f'{self.__class__.__name__} from original repo') - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - elif 'model' in checkpoint: - state_dict = checkpoint['model'] - else: - state_dict = checkpoint - if self.convert_weights: - # Because pvt backbones are not supported by mmcls, - # so we need to convert pre-trained weights to match this - # implementation. - state_dict = pvt_convert(state_dict) - load_state_dict(self, state_dict, strict=False, logger=logger) - - def forward(self, x): - outs = [] - - for i, layer in enumerate(self.layers): - x, hw_shape = layer[0](x) - - for block in layer[1]: - x = block(x, hw_shape) - x = layer[2](x) - x = nlc_to_nchw(x, hw_shape) - if i in self.out_indices: - outs.append(x) - - return outs - - -@BACKBONES.register_module() -class PyramidVisionTransformerV2(PyramidVisionTransformer): - """Implementation of `PVTv2: Improved Baselines with Pyramid Vision - Transformer `_.""" - - def __init__(self, **kwargs): - super(PyramidVisionTransformerV2, self).__init__( - patch_sizes=[7, 3, 3, 3], - paddings=[3, 1, 1, 1], - use_abs_pos_embed=False, - norm_after_stage=True, - use_conv_ffn=True, - **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/regnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/regnet.py deleted file mode 100644 index 63adc3c1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/regnet.py +++ /dev/null @@ -1,356 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import numpy as np -import torch.nn as nn -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from .resnet import ResNet -from .resnext import Bottleneck - - -@BACKBONES.register_module() -class RegNet(ResNet): - """RegNet backbone. - - More details can be found in `paper `_ . - - Args: - arch (dict): The parameter of RegNets. - - - w0 (int): initial width - - wa (float): slope of width - - wm (float): quantization parameter to quantize the width - - depth (int): depth of the backbone - - group_w (int): width of group - - bot_mul (float): bottleneck ratio, i.e. expansion of bottleneck. - strides (Sequence[int]): Strides of the first block of each stage. - base_channels (int): Base channels after stem layer. - in_channels (int): Number of input image channels. Default: 3. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). 
Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import RegNet - >>> import torch - >>> self = RegNet( - arch=dict( - w0=88, - wa=26.31, - wm=2.25, - group_w=48, - depth=25, - bot_mul=1.0)) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 96, 8, 8) - (1, 192, 4, 4) - (1, 432, 2, 2) - (1, 1008, 1, 1) - """ - arch_settings = { - 'regnetx_400mf': - dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - 'regnetx_800mf': - dict(w0=56, wa=35.73, wm=2.28, group_w=16, depth=16, bot_mul=1.0), - 'regnetx_1.6gf': - dict(w0=80, wa=34.01, wm=2.25, group_w=24, depth=18, bot_mul=1.0), - 'regnetx_3.2gf': - dict(w0=88, wa=26.31, wm=2.25, group_w=48, depth=25, bot_mul=1.0), - 'regnetx_4.0gf': - dict(w0=96, wa=38.65, wm=2.43, group_w=40, depth=23, bot_mul=1.0), - 'regnetx_6.4gf': - dict(w0=184, wa=60.83, wm=2.07, group_w=56, depth=17, bot_mul=1.0), - 'regnetx_8.0gf': - dict(w0=80, wa=49.56, wm=2.88, group_w=120, depth=23, bot_mul=1.0), - 'regnetx_12gf': - dict(w0=168, wa=73.36, wm=2.37, group_w=112, depth=19, bot_mul=1.0), - } - - def __init__(self, - arch, - in_channels=3, - stem_channels=32, - base_channels=32, - strides=(2, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - with_cp=False, - zero_init_residual=True, - pretrained=None, - init_cfg=None): - super(ResNet, self).__init__(init_cfg) - - # Generate RegNet parameters first - if isinstance(arch, str): - assert arch in self.arch_settings, \ - f'"arch": "{arch}" is not one of the' \ - ' arch_settings' - arch = self.arch_settings[arch] - elif not isinstance(arch, dict): - raise ValueError('Expect "arch" to be either a string ' - f'or a dict, got {type(arch)}') - - widths, num_stages = self.generate_regnet( - arch['w0'], - arch['wa'], - arch['wm'], - arch['depth'], - ) - # Convert to per stage format - stage_widths, stage_blocks = self.get_stages_from_blocks(widths) - # Generate group widths and bot muls - group_widths = [arch['group_w'] for _ in range(num_stages)] - self.bottleneck_ratio = [arch['bot_mul'] for _ in range(num_stages)] - # Adjust the compatibility of stage_widths and group_widths - stage_widths, group_widths = self.adjust_width_group( - stage_widths, self.bottleneck_ratio, group_widths) - - # Group params by stage - self.stage_widths = stage_widths - self.group_widths = group_widths - self.depth = sum(stage_blocks) - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - 
self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.zero_init_residual = zero_init_residual - self.block = Bottleneck - expansion_bak = self.block.expansion - self.block.expansion = 1 - self.stage_blocks = stage_blocks[:num_stages] - - self._make_stem_layer(in_channels, stem_channels) - - block_init_cfg = None - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - if self.zero_init_residual: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm3')) - else: - raise TypeError('pretrained must be a str or None') - - self.inplanes = stem_channels - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = self.strides[i] - dilation = self.dilations[i] - group_width = self.group_widths[i] - width = int(round(self.stage_widths[i] * self.bottleneck_ratio[i])) - stage_groups = width // group_width - - dcn = self.dcn if self.stage_with_dcn[i] else None - if self.plugins is not None: - stage_plugins = self.make_stage_plugins(self.plugins, i) - else: - stage_plugins = None - - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=self.stage_widths[i], - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=self.with_cp, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=dcn, - plugins=stage_plugins, - groups=stage_groups, - base_width=group_width, - base_channels=self.stage_widths[i], - init_cfg=block_init_cfg) - self.inplanes = self.stage_widths[i] - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = stage_widths[-1] - self.block.expansion = expansion_bak - - def _make_stem_layer(self, in_channels, base_channels): - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - base_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, base_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - - def generate_regnet(self, - initial_width, - width_slope, - width_parameter, - depth, - divisor=8): - """Generates per block width from RegNet parameters. - - Args: - initial_width ([int]): Initial width of the backbone - width_slope ([float]): Slope of the quantized linear function - width_parameter ([int]): Parameter used to quantize the width. - depth ([int]): Depth of the backbone. - divisor (int, optional): The divisor of channels. Defaults to 8. 
- - Returns: - list, int: return a list of widths of each stage and the number \ - of stages - """ - assert width_slope >= 0 - assert initial_width > 0 - assert width_parameter > 1 - assert initial_width % divisor == 0 - widths_cont = np.arange(depth) * width_slope + initial_width - ks = np.round( - np.log(widths_cont / initial_width) / np.log(width_parameter)) - widths = initial_width * np.power(width_parameter, ks) - widths = np.round(np.divide(widths, divisor)) * divisor - num_stages = len(np.unique(widths)) - widths, widths_cont = widths.astype(int).tolist(), widths_cont.tolist() - return widths, num_stages - - @staticmethod - def quantize_float(number, divisor): - """Converts a float to closest non-zero int divisible by divisor. - - Args: - number (int): Original number to be quantized. - divisor (int): Divisor used to quantize the number. - - Returns: - int: quantized number that is divisible by devisor. - """ - return int(round(number / divisor) * divisor) - - def adjust_width_group(self, widths, bottleneck_ratio, groups): - """Adjusts the compatibility of widths and groups. - - Args: - widths (list[int]): Width of each stage. - bottleneck_ratio (float): Bottleneck ratio. - groups (int): number of groups in each stage - - Returns: - tuple(list): The adjusted widths and groups of each stage. - """ - bottleneck_width = [ - int(w * b) for w, b in zip(widths, bottleneck_ratio) - ] - groups = [min(g, w_bot) for g, w_bot in zip(groups, bottleneck_width)] - bottleneck_width = [ - self.quantize_float(w_bot, g) - for w_bot, g in zip(bottleneck_width, groups) - ] - widths = [ - int(w_bot / b) - for w_bot, b in zip(bottleneck_width, bottleneck_ratio) - ] - return widths, groups - - def get_stages_from_blocks(self, widths): - """Gets widths/stage_blocks of network at each stage. - - Args: - widths (list[int]): Width in each stage. - - Returns: - tuple(list): width and depth of each stage - """ - width_diff = [ - width != width_prev - for width, width_prev in zip(widths + [0], [0] + widths) - ] - stage_widths = [ - width for width, diff in zip(widths, width_diff[:-1]) if diff - ] - stage_blocks = np.diff([ - depth for depth, diff in zip(range(len(width_diff)), width_diff) - if diff - ]).tolist() - return stage_widths, stage_blocks - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/res2net.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/res2net.py deleted file mode 100644 index 96afb2fb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/res2net.py +++ /dev/null @@ -1,327 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import Sequential - -from ..builder import BACKBONES -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottle2neck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - scales=4, - base_width=26, - base_channels=64, - stage_type='normal', - **kwargs): - """Bottle2neck block for Res2Net. 
- - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottle2neck, self).__init__(inplanes, planes, **kwargs) - assert scales > 1, 'Res2Net degenerates to ResNet when scales = 1.' - width = int(math.floor(self.planes * (base_width / base_channels))) - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width * scales, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width * scales, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - - if stage_type == 'stage' and self.conv2_stride != 1: - self.pool = nn.AvgPool2d( - kernel_size=3, stride=self.conv2_stride, padding=1) - convs = [] - bns = [] - - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - for i in range(scales - 1): - convs.append( - build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False)) - bns.append( - build_norm_layer(self.norm_cfg, width, postfix=i + 1)[1]) - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList(bns) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - for i in range(scales - 1): - convs.append( - build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False)) - bns.append( - build_norm_layer(self.norm_cfg, width, postfix=i + 1)[1]) - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList(bns) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width * scales, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.stage_type = stage_type - self.scales = scales - self.width = width - delattr(self, 'conv2') - delattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - spx = torch.split(out, self.width, 1) - sp = self.convs[0](spx[0].contiguous()) - sp = self.relu(self.bns[0](sp)) - out = sp - for i in range(1, self.scales - 1): - if self.stage_type == 'stage': - sp = spx[i] - else: - sp = sp + spx[i] - sp = self.convs[i](sp.contiguous()) - sp = self.relu(self.bns[i](sp)) - out = torch.cat((out, sp), 1) - - if self.stage_type == 'normal' or self.conv2_stride == 1: - out = torch.cat((out, spx[self.scales - 1]), 1) - elif self.stage_type == 'stage': - out = torch.cat((out, self.pool(spx[self.scales - 1])), 1) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Res2Layer(Sequential): - """Res2Layer to build Res2Net style backbone. 
- - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottle2neck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - scales (int): Scales used in Res2Net. Default: 4 - base_width (int): Basic width of each scale. Default: 26 - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=True, - conv_cfg=None, - norm_cfg=dict(type='BN'), - scales=4, - base_width=26, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False), - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=1, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1], - ) - - layers = [] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - scales=scales, - base_width=base_width, - stage_type='stage', - **kwargs)) - inplanes = planes * block.expansion - for i in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - scales=scales, - base_width=base_width, - **kwargs)) - super(Res2Layer, self).__init__(*layers) - - -@BACKBONES.register_module() -class Res2Net(ResNet): - """Res2Net backbone. - - Args: - scales (int): Scales used in Res2Net. Default: 4 - base_width (int): Basic width of each scale. Default: 26 - depth (int): Depth of res2net, from {50, 101, 152}. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Res2net stages. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottle2neck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - position (str, required): Position inside block to insert - plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'. - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import Res2Net - >>> import torch - >>> self = Res2Net(depth=50, scales=4, base_width=26) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 256, 8, 8) - (1, 512, 4, 4) - (1, 1024, 2, 2) - (1, 2048, 1, 1) - """ - - arch_settings = { - 50: (Bottle2neck, (3, 4, 6, 3)), - 101: (Bottle2neck, (3, 4, 23, 3)), - 152: (Bottle2neck, (3, 8, 36, 3)) - } - - def __init__(self, - scales=4, - base_width=26, - style='pytorch', - deep_stem=True, - avg_down=True, - pretrained=None, - init_cfg=None, - **kwargs): - self.scales = scales - self.base_width = base_width - super(Res2Net, self).__init__( - style='pytorch', - deep_stem=True, - avg_down=True, - pretrained=pretrained, - init_cfg=init_cfg, - **kwargs) - - def make_res_layer(self, **kwargs): - return Res2Layer( - scales=self.scales, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/resnest.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/resnest.py deleted file mode 100644 index 69629b96..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/resnest.py +++ /dev/null @@ -1,322 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNetV1d - - -class RSoftmax(nn.Module): - """Radix Softmax module in ``SplitAttentionConv2d``. - - Args: - radix (int): Radix of input. - groups (int): Groups of input. - """ - - def __init__(self, radix, groups): - super().__init__() - self.radix = radix - self.groups = groups - - def forward(self, x): - batch = x.size(0) - if self.radix > 1: - x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2) - x = F.softmax(x, dim=1) - x = x.reshape(batch, -1) - else: - x = torch.sigmoid(x) - return x - - -class SplitAttentionConv2d(BaseModule): - """Split-Attention Conv2d in ResNeSt. - - Args: - in_channels (int): Number of channels in the input feature map. - channels (int): Number of intermediate channels. - kernel_size (int | tuple[int]): Size of the convolution kernel. - stride (int | tuple[int]): Stride of the convolution. - padding (int | tuple[int]): Zero-padding added to both sides of - dilation (int | tuple[int]): Spacing between kernel elements. - groups (int): Number of blocked connections from input channels to - output channels. - groups (int): Same as nn.Conv2d. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels. Default: 4. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - dcn (dict): Config dict for DCN. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - radix=2, - reduction_factor=4, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - init_cfg=None): - super(SplitAttentionConv2d, self).__init__(init_cfg) - inter_channels = max(in_channels * radix // reduction_factor, 32) - self.radix = radix - self.groups = groups - self.channels = channels - self.with_dcn = dcn is not None - self.dcn = dcn - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_dcn and not fallback_on_stride: - assert conv_cfg is None, 'conv_cfg must be None for DCN' - conv_cfg = dcn - self.conv = build_conv_layer( - conv_cfg, - in_channels, - channels * radix, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups * radix, - bias=False) - # To be consistent with original implementation, starting from 0 - self.norm0_name, norm0 = build_norm_layer( - norm_cfg, channels * radix, postfix=0) - self.add_module(self.norm0_name, norm0) - self.relu = nn.ReLU(inplace=True) - self.fc1 = build_conv_layer( - None, channels, inter_channels, 1, groups=self.groups) - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, inter_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.fc2 = build_conv_layer( - None, inter_channels, channels * radix, 1, groups=self.groups) - self.rsoftmax = RSoftmax(radix, groups) - - @property - def norm0(self): - """nn.Module: the normalization layer named "norm0" """ - return getattr(self, self.norm0_name) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def forward(self, x): - x = self.conv(x) - x = self.norm0(x) - x = self.relu(x) - - batch, rchannel = x.shape[:2] - batch = x.size(0) - if self.radix > 1: - splits = x.view(batch, self.radix, -1, *x.shape[2:]) - gap = splits.sum(dim=1) - else: - gap = x - gap = F.adaptive_avg_pool2d(gap, 1) - gap = self.fc1(gap) - - gap = self.norm1(gap) - gap = self.relu(gap) - - atten = self.fc2(gap) - atten = self.rsoftmax(atten).view(batch, -1, 1, 1) - - if self.radix > 1: - attens = atten.view(batch, self.radix, -1, *atten.shape[2:]) - out = torch.sum(attens * splits, dim=1) - else: - out = atten * x - return out.contiguous() - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeSt. - - Args: - inplane (int): Input planes of this block. - planes (int): Middle planes of this block. - groups (int): Groups of conv2. - base_width (int): Base of width in terms of base channels. Default: 4. - base_channels (int): Base of channels for calculating width. - Default: 64. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Key word arguments for base class. 
- """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - """Bottleneck block for ResNeSt.""" - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.avg_down_stride = avg_down_stride and self.conv2_stride > 1 - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - self.with_modulated_dcn = False - self.conv2 = SplitAttentionConv2d( - width, - width, - kernel_size=3, - stride=1 if self.avg_down_stride else self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - radix=radix, - reduction_factor=reduction_factor, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=self.dcn) - delattr(self, self.norm2_name) - - if self.avg_down_stride: - self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - def forward(self, x): - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - - if self.avg_down_stride: - out = self.avd_layer(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNeSt(ResNetV1d): - """ResNeSt backbone. - - Args: - groups (int): Number of groups of Bottleneck. Default: 1 - base_width (int): Base width of Bottleneck. Default: 4 - radix (int): Radix of SplitAttentionConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Keyword arguments for ResNet. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)), - 200: (Bottleneck, (3, 24, 36, 3)) - } - - def __init__(self, - groups=1, - base_width=4, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - self.groups = groups - self.base_width = base_width - self.radix = radix - self.reduction_factor = reduction_factor - self.avg_down_stride = avg_down_stride - super(ResNeSt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - radix=self.radix, - reduction_factor=self.reduction_factor, - avg_down_stride=self.avg_down_stride, - **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/resnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/resnet.py deleted file mode 100644 index 1eaaae67..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/resnet.py +++ /dev/null @@ -1,672 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer, build_plugin_layer -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import ResLayer - - -class BasicBlock(BaseModule): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - super(BasicBlock, self).__init__(init_cfg) - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' 
- - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=False) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Bottleneck(BaseModule): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - """Bottleneck block for ResNet. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - super(Bottleneck, self).__init__(init_cfg) - assert style in ['pytorch', 'caffe'] - assert dcn is None or isinstance(dcn, dict) - assert plugins is None or isinstance(plugins, list) - if plugins is not None: - allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] - assert all(p['position'] in allowed_position for p in plugins) - - self.inplanes = inplanes - self.planes = planes - self.stride = stride - self.dilation = dilation - self.style = style - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dcn = dcn - self.with_dcn = dcn is not None - self.plugins = plugins - self.with_plugins = plugins is not None - - if self.with_plugins: - # collect plugins for conv1/conv2/conv3 - self.after_conv1_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_conv3_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv3' - ] - - if self.style == 'pytorch': - self.conv1_stride = 1 - self.conv2_stride = stride - else: - self.conv1_stride = stride - self.conv2_stride = 1 - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - norm_cfg, planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - conv_cfg, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - dcn, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - conv_cfg, - planes, - planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - - if self.with_plugins: - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - planes * self.expansion, self.after_conv3_plugins) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. 
- """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - out = x - for name in plugin_names: - out = getattr(self, name)(out) - return out - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - @property - def norm3(self): - """nn.Module: normalization layer after the third convolution layer""" - return getattr(self, self.norm3_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNet(BaseModule): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - stem_channels (int | None): Number of stem channels. If not specified, - it will be the same as `base_channels`. Default: None. - base_channels (int): Number of base channels of res layer. Default: 64. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Resnet stages. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - position (str, required): Position inside block to insert - plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'. - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import ResNet - >>> import torch - >>> self = ResNet(depth=18) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 64, 8, 8) - (1, 128, 4, 4) - (1, 256, 2, 2) - (1, 512, 1, 1) - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - in_channels=3, - stem_channels=None, - base_channels=64, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - with_cp=False, - zero_init_residual=True, - pretrained=None, - init_cfg=None): - super(ResNet, self).__init__(init_cfg) - self.zero_init_residual = zero_init_residual - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - - block_init_cfg = None - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - block = self.arch_settings[depth][0] - if self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm3')) - else: - raise TypeError('pretrained must be a str or None') - - self.depth = depth - if stem_channels is None: - stem_channels = base_channels - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.block, stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - self.inplanes = stem_channels - - self._make_stem_layer(in_channels, stem_channels) - - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = strides[i] - dilation = dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - if plugins is not None: - 
stage_plugins = self.make_stage_plugins(plugins, i) - else: - stage_plugins = None - planes = base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=stage_plugins, - init_cfg=block_init_cfg) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = self.block.expansion * base_channels * 2**( - len(self.stage_blocks) - 1) - - def make_stage_plugins(self, plugins, stage_idx): - """Make plugins for ResNet ``stage_idx`` th stage. - - Currently we support to insert ``context_block``, - ``empirical_attention_block``, ``nonlocal_block`` into the backbone - like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of - Bottleneck. - - An example of plugins format could be: - - Examples: - >>> plugins=[ - ... dict(cfg=dict(type='xxx', arg1='xxx'), - ... stages=(False, True, True, True), - ... position='after_conv2'), - ... dict(cfg=dict(type='yyy'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='1'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='2'), - ... stages=(True, True, True, True), - ... position='after_conv3') - ... ] - >>> self = ResNet(depth=18) - >>> stage_plugins = self.make_stage_plugins(plugins, 0) - >>> assert len(stage_plugins) == 3 - - Suppose ``stage_idx=0``, the structure of blocks in the stage would be: - - .. code-block:: none - - conv1-> conv2->conv3->yyy->zzz1->zzz2 - - Suppose 'stage_idx=1', the structure of blocks in the stage would be: - - .. code-block:: none - - conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 - - If stages is missing, the plugin would be applied to all stages. - - Args: - plugins (list[dict]): List of plugins cfg to build. The postfix is - required if multiple same type plugins are inserted. 
- stage_idx (int): Index of stage to build - - Returns: - list[dict]: Plugins for current stage - """ - stage_plugins = [] - for plugin in plugins: - plugin = plugin.copy() - stages = plugin.pop('stages', None) - assert stages is None or len(stages) == self.num_stages - # whether to insert plugin into current stage - if stages is None or stages[stage_idx]: - stage_plugins.append(plugin) - - return stage_plugins - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer(**kwargs) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def _make_stem_layer(self, in_channels, stem_channels): - if self.deep_stem: - self.stem = nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels)[1], - nn.ReLU(inplace=True)) - else: - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, stem_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - if self.deep_stem: - self.stem.eval() - for param in self.stem.parameters(): - param.requires_grad = False - else: - self.norm1.eval() - for m in [self.conv1, self.norm1]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'layer{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(ResNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - -@BACKBONES.register_module() -class ResNetV1d(ResNet): - r"""ResNetV1d variant described in `Bag of Tricks - `_. - - Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in - the input stem with three 3x3 convs. And in the downsampling block, a 2x2 - avg_pool with stride 2 is added before conv, whose stride is changed to 1. 
- """ - - def __init__(self, **kwargs): - super(ResNetV1d, self).__init__( - deep_stem=True, avg_down=True, **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/resnext.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/resnext.py deleted file mode 100644 index 8675d7c1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/resnext.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - if self.with_plugins: - self._del_block_plugins(self.after_conv1_plugin_names + - self.after_conv2_plugin_names + - self.after_conv3_plugin_names) - self.after_conv1_plugin_names = self.make_block_plugins( - width, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - width, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - self.planes * self.expansion, self.after_conv3_plugins) - - def _del_block_plugins(self, plugin_names): - """delete plugins for block if exist. - - Args: - plugin_names (list[str]): List of plugins name to delete. - """ - assert isinstance(plugin_names, list) - for plugin_name in plugin_names: - del self._modules[plugin_name] - - -@BACKBONES.register_module() -class ResNeXt(ResNet): - """ResNeXt backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Resnet stages. Default: 4. 
- groups (int): Group of resnext. - base_width (int): Base width of resnext. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/ssd_vgg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/ssd_vgg.py deleted file mode 100644 index c15aeac0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/ssd_vgg.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -from mmcv.cnn import VGG -from mmcv.runner import BaseModule - -from ..builder import BACKBONES -from ..necks import ssd_neck - - -@BACKBONES.register_module() -class SSDVGG(VGG, BaseModule): - """VGG Backbone network for single-shot-detection. - - Args: - depth (int): Depth of vgg, from {11, 13, 16, 19}. - with_last_pool (bool): Whether to add a pooling layer at the last - of the model - ceil_mode (bool): When True, will use `ceil` instead of `floor` - to compute the output shape. - out_indices (Sequence[int]): Output from which stages. - out_feature_indices (Sequence[int]): Output from which feature map. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - input_size (int, optional): Deprecated argumment. - Width and height of input, from {300, 512}. - l2_norm_scale (float, optional) : Deprecated argumment. - L2 normalization layer init scale. - - Example: - >>> self = SSDVGG(input_size=300, depth=11) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 300, 300) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 1024, 19, 19) - (1, 512, 10, 10) - (1, 256, 5, 5) - (1, 256, 3, 3) - (1, 256, 1, 1) - """ - extra_setting = { - 300: (256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256), - 512: (256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256, 128), - } - - def __init__(self, - depth, - with_last_pool=False, - ceil_mode=True, - out_indices=(3, 4), - out_feature_indices=(22, 34), - pretrained=None, - init_cfg=None, - input_size=None, - l2_norm_scale=None): - # TODO: in_channels for mmcv.VGG - super(SSDVGG, self).__init__( - depth, - with_last_pool=with_last_pool, - ceil_mode=ceil_mode, - out_indices=out_indices) - - self.features.add_module( - str(len(self.features)), - nn.MaxPool2d(kernel_size=3, stride=1, padding=1)) - self.features.add_module( - str(len(self.features)), - nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)) - self.features.add_module( - str(len(self.features)), nn.ReLU(inplace=True)) - self.features.add_module( - str(len(self.features)), nn.Conv2d(1024, 1024, kernel_size=1)) - self.features.add_module( - str(len(self.features)), nn.ReLU(inplace=True)) - self.out_feature_indices = out_feature_indices - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - - if init_cfg is not None: - self.init_cfg = init_cfg - elif isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict(type='Constant', val=1, layer='BatchNorm2d'), - dict(type='Normal', std=0.01, layer='Linear'), - ] - else: - raise TypeError('pretrained must be a str or None') - - if input_size is not None: - warnings.warn('DeprecationWarning: input_size is deprecated') - if l2_norm_scale is not None: - warnings.warn('DeprecationWarning: l2_norm_scale in VGG is ' - 'deprecated, it has been moved to SSDNeck.') - - def init_weights(self, pretrained=None): - super(VGG, self).init_weights() - - def forward(self, x): - """Forward function.""" - outs = [] - for i, layer in enumerate(self.features): - x = layer(x) - if i in self.out_feature_indices: - outs.append(x) - - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - -class L2Norm(ssd_neck.L2Norm): - - def __init__(self, **kwargs): - super(L2Norm, self).__init__(**kwargs) - warnings.warn('DeprecationWarning: L2Norm in ssd_vgg.py ' - 'is deprecated, please use L2Norm in ' - 'mmdet/models/necks/ssd_neck.py instead') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/swin.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/swin.py deleted file mode 100644 index c9f1455a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/swin.py +++ /dev/null @@ -1,763 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
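Before the Swin Transformer file below, a minimal sketch of the fc6/fc7-style layers that the SSDVGG code above appends after the VGG conv5 stage (the 19x19 input assumes the 300x300 SSD setting):

```python
import torch
import torch.nn as nn

# The layers SSDVGG adds after conv5: a stride-1 pool, a dilated 3x3 conv
# (the "fc6" replacement) and a 1x1 conv ("fc7").
extra = nn.Sequential(
    nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
    nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6),
    nn.ReLU(inplace=True),
    nn.Conv2d(1024, 1024, kernel_size=1),
    nn.ReLU(inplace=True))

feat = torch.randn(1, 512, 19, 19)   # conv5 output for a 300x300 input
print(extra(feat).shape)             # torch.Size([1, 1024, 19, 19])
```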
-import warnings -from collections import OrderedDict -from copy import deepcopy - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_norm_layer, constant_init, trunc_normal_init -from mmcv.cnn.bricks.transformer import FFN, build_dropout -from mmcv.cnn.utils.weight_init import trunc_normal_ -from mmcv.runner import BaseModule, ModuleList, _load_checkpoint -from mmcv.utils import to_2tuple - -from ...utils import get_root_logger -from ..builder import BACKBONES -from ..utils.ckpt_convert import swin_converter -from ..utils.transformer import PatchEmbed, PatchMerging - - -class WindowMSA(BaseModule): - """Window based multi-head self-attention (W-MSA) module with relative - position bias. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (tuple[int]): The height and width of the window. - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. - Default: 0.0 - proj_drop_rate (float, optional): Dropout ratio of output. Default: 0. - init_cfg (dict | None, optional): The Config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - window_size, - qkv_bias=True, - qk_scale=None, - attn_drop_rate=0., - proj_drop_rate=0., - init_cfg=None): - - super().__init__() - self.embed_dims = embed_dims - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_embed_dims = embed_dims // num_heads - self.scale = qk_scale or head_embed_dims**-0.5 - self.init_cfg = init_cfg - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), - num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # About 2x faster than original impl - Wh, Ww = self.window_size - rel_index_coords = self.double_step_seq(2 * Ww - 1, Wh, 1, Ww) - rel_position_index = rel_index_coords + rel_index_coords.T - rel_position_index = rel_position_index.flip(1).contiguous() - self.register_buffer('relative_position_index', rel_position_index) - - self.qkv = nn.Linear(embed_dims, embed_dims * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop_rate) - self.proj = nn.Linear(embed_dims, embed_dims) - self.proj_drop = nn.Dropout(proj_drop_rate) - - self.softmax = nn.Softmax(dim=-1) - - def init_weights(self): - trunc_normal_(self.relative_position_bias_table, std=0.02) - - def forward(self, x, mask=None): - """ - Args: - - x (tensor): input features with shape of (num_windows*B, N, C) - mask (tensor | None, Optional): mask with shape of (num_windows, - Wh*Ww, Wh*Ww), value should be between (-inf, 0]. 
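The WindowMSA module above precomputes its relative position index with double_step_seq; a standalone reproduction for a 7x7 window (window size assumed for illustration) shows the resulting shapes:

```python
import torch

def double_step_seq(step1, len1, step2, len2):
    seq1 = torch.arange(0, step1 * len1, step1)
    seq2 = torch.arange(0, step2 * len2, step2)
    return (seq1[:, None] + seq2[None, :]).reshape(1, -1)

Wh, Ww = 7, 7
rel_index_coords = double_step_seq(2 * Ww - 1, Wh, 1, Ww)
rel_position_index = (rel_index_coords + rel_index_coords.T).flip(1).contiguous()

# (Wh*Ww, Wh*Ww) index matrix whose entries select rows of the
# (2*Wh-1)*(2*Ww-1) relative position bias table
print(rel_position_index.shape)         # torch.Size([49, 49])
print(rel_position_index.max().item())  # 168 == (2*7-1)*(2*7-1) - 1
```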
- """ - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, - C // self.num_heads).permute(2, 0, 3, 1, 4) - # make torchscript happy (cannot use tensor as tuple) - q, k, v = qkv[0], qkv[1], qkv[2] - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], - self.window_size[0] * self.window_size[1], - -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B // nW, nW, self.num_heads, N, - N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - @staticmethod - def double_step_seq(step1, len1, step2, len2): - seq1 = torch.arange(0, step1 * len1, step1) - seq2 = torch.arange(0, step2 * len2, step2) - return (seq1[:, None] + seq2[None, :]).reshape(1, -1) - - -class ShiftWindowMSA(BaseModule): - """Shifted Window Multihead Self-Attention Module. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): The height and width of the window. - shift_size (int, optional): The shift step of each window towards - right-bottom. If zero, act as regular window-msa. Defaults to 0. - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Defaults: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. - Defaults: 0. - proj_drop_rate (float, optional): Dropout ratio of output. - Defaults: 0. - dropout_layer (dict, optional): The dropout_layer used before output. - Defaults: dict(type='DropPath', drop_prob=0.). - init_cfg (dict, optional): The extra config for initialization. - Default: None. 
- """ - - def __init__(self, - embed_dims, - num_heads, - window_size, - shift_size=0, - qkv_bias=True, - qk_scale=None, - attn_drop_rate=0, - proj_drop_rate=0, - dropout_layer=dict(type='DropPath', drop_prob=0.), - init_cfg=None): - super().__init__(init_cfg) - - self.window_size = window_size - self.shift_size = shift_size - assert 0 <= self.shift_size < self.window_size - - self.w_msa = WindowMSA( - embed_dims=embed_dims, - num_heads=num_heads, - window_size=to_2tuple(window_size), - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop_rate=attn_drop_rate, - proj_drop_rate=proj_drop_rate, - init_cfg=None) - - self.drop = build_dropout(dropout_layer) - - def forward(self, query, hw_shape): - B, L, C = query.shape - H, W = hw_shape - assert L == H * W, 'input feature has wrong size' - query = query.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - query = F.pad(query, (0, 0, 0, pad_r, 0, pad_b)) - H_pad, W_pad = query.shape[1], query.shape[2] - - # cyclic shift - if self.shift_size > 0: - shifted_query = torch.roll( - query, - shifts=(-self.shift_size, -self.shift_size), - dims=(1, 2)) - - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H_pad, W_pad, 1), device=query.device) - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, - -self.shift_size), slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, - -self.shift_size), slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - # nW, window_size, window_size, 1 - mask_windows = self.window_partition(img_mask) - mask_windows = mask_windows.view( - -1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, - float(-100.0)).masked_fill( - attn_mask == 0, float(0.0)) - else: - shifted_query = query - attn_mask = None - - # nW*B, window_size, window_size, C - query_windows = self.window_partition(shifted_query) - # nW*B, window_size*window_size, C - query_windows = query_windows.view(-1, self.window_size**2, C) - - # W-MSA/SW-MSA (nW*B, window_size*window_size, C) - attn_windows = self.w_msa(query_windows, mask=attn_mask) - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, - self.window_size, C) - - # B H' W' C - shifted_x = self.window_reverse(attn_windows, H_pad, W_pad) - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll( - shifted_x, - shifts=(self.shift_size, self.shift_size), - dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - x = self.drop(x) - return x - - def window_reverse(self, windows, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - window_size = self.window_size - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, - window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - def window_partition(self, x): - """ - Args: - x: (B, H, W, C) - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - window_size = self.window_size - x = 
x.view(B, H // window_size, window_size, W // window_size, - window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous() - windows = windows.view(-1, window_size, window_size, C) - return windows - - -class SwinBlock(BaseModule): - """" - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - window_size (int, optional): The local window scale. Default: 7. - shift (bool, optional): whether to shift window or not. Default False. - qkv_bias (bool, optional): enable bias for qkv if True. Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - drop_rate (float, optional): Dropout rate. Default: 0. - attn_drop_rate (float, optional): Attention dropout rate. Default: 0. - drop_path_rate (float, optional): Stochastic depth rate. Default: 0. - act_cfg (dict, optional): The config dict of activation function. - Default: dict(type='GELU'). - norm_cfg (dict, optional): The config dict of normalization. - Default: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - init_cfg (dict | list | None, optional): The init config. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - window_size=7, - shift=False, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - init_cfg=None): - - super(SwinBlock, self).__init__() - - self.init_cfg = init_cfg - self.with_cp = with_cp - - self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] - self.attn = ShiftWindowMSA( - embed_dims=embed_dims, - num_heads=num_heads, - window_size=window_size, - shift_size=window_size // 2 if shift else 0, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop_rate=attn_drop_rate, - proj_drop_rate=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - init_cfg=None) - - self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=2, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg, - add_identity=True, - init_cfg=None) - - def forward(self, x, hw_shape): - - def _inner_forward(x): - identity = x - x = self.norm1(x) - x = self.attn(x, hw_shape) - - x = x + identity - - identity = x - x = self.norm2(x) - x = self.ffn(x, identity=identity) - - return x - - if self.with_cp and x.requires_grad: - x = cp.checkpoint(_inner_forward, x) - else: - x = _inner_forward(x) - - return x - - -class SwinBlockSequence(BaseModule): - """Implements one stage in Swin Transformer. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - depth (int): The number of blocks in this stage. - window_size (int, optional): The local window scale. Default: 7. - qkv_bias (bool, optional): enable bias for qkv if True. Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - drop_rate (float, optional): Dropout rate. Default: 0. - attn_drop_rate (float, optional): Attention dropout rate. Default: 0. - drop_path_rate (float | list[float], optional): Stochastic depth - rate. Default: 0. 
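The window_partition and window_reverse helpers above are exact inverses of each other; a minimal roundtrip check (tensor sizes chosen for illustration):

```python
import torch

def window_partition(x, window_size):
    """(B, H, W, C) -> (num_windows*B, window_size, window_size, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)

def window_reverse(windows, window_size, H, W):
    """Inverse of window_partition: back to (B, H, W, C)."""
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)

x = torch.randn(2, 14, 14, 96)
wins = window_partition(x, 7)                      # (8, 7, 7, 96)
assert torch.equal(window_reverse(wins, 7, 14, 14), x)
```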
- downsample (BaseModule | None, optional): The downsample operation - module. Default: None. - act_cfg (dict, optional): The config dict of activation function. - Default: dict(type='GELU'). - norm_cfg (dict, optional): The config dict of normalization. - Default: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - init_cfg (dict | list | None, optional): The init config. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - depth, - window_size=7, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - downsample=None, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - - if isinstance(drop_path_rate, list): - drop_path_rates = drop_path_rate - assert len(drop_path_rates) == depth - else: - drop_path_rates = [deepcopy(drop_path_rate) for _ in range(depth)] - - self.blocks = ModuleList() - for i in range(depth): - block = SwinBlock( - embed_dims=embed_dims, - num_heads=num_heads, - feedforward_channels=feedforward_channels, - window_size=window_size, - shift=False if i % 2 == 0 else True, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=drop_path_rates[i], - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp, - init_cfg=None) - self.blocks.append(block) - - self.downsample = downsample - - def forward(self, x, hw_shape): - for block in self.blocks: - x = block(x, hw_shape) - - if self.downsample: - x_down, down_hw_shape = self.downsample(x, hw_shape) - return x_down, down_hw_shape, x, hw_shape - else: - return x, hw_shape, x, hw_shape - - -@BACKBONES.register_module() -class SwinTransformer(BaseModule): - """ Swin Transformer - A PyTorch implement of : `Swin Transformer: - Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/abs/2103.14030 - - Inspiration from - https://github.com/microsoft/Swin-Transformer - - Args: - pretrain_img_size (int | tuple[int]): The size of input image when - pretrain. Defaults: 224. - in_channels (int): The num of input channels. - Defaults: 3. - embed_dims (int): The feature dimension. Default: 96. - patch_size (int | tuple[int]): Patch size. Default: 4. - window_size (int): Window size. Default: 7. - mlp_ratio (int): Ratio of mlp hidden dim to embedding dim. - Default: 4. - depths (tuple[int]): Depths of each Swin Transformer stage. - Default: (2, 2, 6, 2). - num_heads (tuple[int]): Parallel attention heads of each Swin - Transformer stage. Default: (3, 6, 12, 24). - strides (tuple[int]): The patch merging or patch embedding stride of - each Swin Transformer stage. (In swin, we set kernel size equal to - stride.) Default: (4, 2, 2, 2). - out_indices (tuple[int]): Output from which stages. - Default: (0, 1, 2, 3). - qkv_bias (bool, optional): If True, add a learnable bias to query, key, - value. Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - patch_norm (bool): If add a norm layer for patch embed and patch - merging. Default: True. - drop_rate (float): Dropout rate. Defaults: 0. - attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Defaults: 0.1. - use_abs_pos_embed (bool): If True, add absolute position embedding to - the patch embedding. Defaults: False. 
- act_cfg (dict): Config dict for activation layer. - Default: dict(type='LN'). - norm_cfg (dict): Config dict for normalization layer at - output of backone. Defaults: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - pretrained (str, optional): model pretrained path. Default: None. - convert_weights (bool): The flag indicates whether the - pre-trained model is from the original repo. We may need - to convert some keys to make it compatible. - Default: False. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - Default: -1 (-1 means not freezing any parameters). - init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - pretrain_img_size=224, - in_channels=3, - embed_dims=96, - patch_size=4, - window_size=7, - mlp_ratio=4, - depths=(2, 2, 6, 2), - num_heads=(3, 6, 12, 24), - strides=(4, 2, 2, 2), - out_indices=(0, 1, 2, 3), - qkv_bias=True, - qk_scale=None, - patch_norm=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1, - use_abs_pos_embed=False, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - pretrained=None, - convert_weights=False, - frozen_stages=-1, - init_cfg=None): - self.convert_weights = convert_weights - self.frozen_stages = frozen_stages - if isinstance(pretrain_img_size, int): - pretrain_img_size = to_2tuple(pretrain_img_size) - elif isinstance(pretrain_img_size, tuple): - if len(pretrain_img_size) == 1: - pretrain_img_size = to_2tuple(pretrain_img_size[0]) - assert len(pretrain_img_size) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(pretrain_img_size)}' - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - self.init_cfg = init_cfg - else: - raise TypeError('pretrained must be a str or None') - - super(SwinTransformer, self).__init__(init_cfg=init_cfg) - - num_layers = len(depths) - self.out_indices = out_indices - self.use_abs_pos_embed = use_abs_pos_embed - - assert strides[0] == patch_size, 'Use non-overlapping patch embed.' 
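The stage construction that follows spreads one linearly increasing drop-path schedule over every block; a standalone sketch using Swin-T's block counts (assumed here for illustration):

```python
import torch

depths = (2, 2, 6, 2)                 # Swin-T blocks per stage (illustrative)
drop_path_rate = 0.1
total_depth = sum(depths)
# one linearly increasing drop-path probability per block across the whole network
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, total_depth)]
# slice the flat schedule into a per-stage list, one list per SwinBlockSequence
per_stage = [dpr[sum(depths[:i]):sum(depths[:i + 1])] for i in range(len(depths))]
print([len(s) for s in per_stage])    # [2, 2, 6, 2]
```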
- - self.patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims, - conv_type='Conv2d', - kernel_size=patch_size, - stride=strides[0], - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None) - - if self.use_abs_pos_embed: - patch_row = pretrain_img_size[0] // patch_size - patch_col = pretrain_img_size[1] // patch_size - num_patches = patch_row * patch_col - self.absolute_pos_embed = nn.Parameter( - torch.zeros((1, num_patches, embed_dims))) - - self.drop_after_pos = nn.Dropout(p=drop_rate) - - # set stochastic depth decay rule - total_depth = sum(depths) - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, total_depth) - ] - - self.stages = ModuleList() - in_channels = embed_dims - for i in range(num_layers): - if i < num_layers - 1: - downsample = PatchMerging( - in_channels=in_channels, - out_channels=2 * in_channels, - stride=strides[i + 1], - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None) - else: - downsample = None - - stage = SwinBlockSequence( - embed_dims=in_channels, - num_heads=num_heads[i], - feedforward_channels=mlp_ratio * in_channels, - depth=depths[i], - window_size=window_size, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[sum(depths[:i]):sum(depths[:i + 1])], - downsample=downsample, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp, - init_cfg=None) - self.stages.append(stage) - if downsample: - in_channels = downsample.out_channels - - self.num_features = [int(embed_dims * 2**i) for i in range(num_layers)] - # Add a norm layer for each output - for i in out_indices: - layer = build_norm_layer(norm_cfg, self.num_features[i])[1] - layer_name = f'norm{i}' - self.add_module(layer_name, layer) - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - if self.use_abs_pos_embed: - self.absolute_pos_embed.requires_grad = False - self.drop_after_pos.eval() - - for i in range(1, self.frozen_stages + 1): - - if (i - 1) in self.out_indices: - norm_layer = getattr(self, f'norm{i-1}') - norm_layer.eval() - for param in norm_layer.parameters(): - param.requires_grad = False - - m = self.stages[i - 1] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self): - logger = get_root_logger() - if self.init_cfg is None: - logger.warn(f'No pre-trained weights for ' - f'{self.__class__.__name__}, ' - f'training start from scratch') - if self.use_abs_pos_embed: - trunc_normal_(self.absolute_pos_embed, std=0.02) - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) 
- elif isinstance(m, nn.LayerNorm): - constant_init(m, 1.0) - else: - assert 'checkpoint' in self.init_cfg, f'Only support ' \ - f'specify `Pretrained` in ' \ - f'`init_cfg` in ' \ - f'{self.__class__.__name__} ' - ckpt = _load_checkpoint( - self.init_cfg.checkpoint, logger=logger, map_location='cpu') - if 'state_dict' in ckpt: - _state_dict = ckpt['state_dict'] - elif 'model' in ckpt: - _state_dict = ckpt['model'] - else: - _state_dict = ckpt - if self.convert_weights: - # supported loading weight from original repo, - _state_dict = swin_converter(_state_dict) - - state_dict = OrderedDict() - for k, v in _state_dict.items(): - if k.startswith('backbone.'): - state_dict[k[9:]] = v - - # strip prefix of state_dict - if list(state_dict.keys())[0].startswith('module.'): - state_dict = {k[7:]: v for k, v in state_dict.items()} - - # reshape absolute position embedding - if state_dict.get('absolute_pos_embed') is not None: - absolute_pos_embed = state_dict['absolute_pos_embed'] - N1, L, C1 = absolute_pos_embed.size() - N2, C2, H, W = self.absolute_pos_embed.size() - if N1 != N2 or C1 != C2 or L != H * W: - logger.warning('Error in loading absolute_pos_embed, pass') - else: - state_dict['absolute_pos_embed'] = absolute_pos_embed.view( - N2, H, W, C2).permute(0, 3, 1, 2).contiguous() - - # interpolate position bias table if needed - relative_position_bias_table_keys = [ - k for k in state_dict.keys() - if 'relative_position_bias_table' in k - ] - for table_key in relative_position_bias_table_keys: - table_pretrained = state_dict[table_key] - table_current = self.state_dict()[table_key] - L1, nH1 = table_pretrained.size() - L2, nH2 = table_current.size() - if nH1 != nH2: - logger.warning(f'Error in loading {table_key}, pass') - elif L1 != L2: - S1 = int(L1**0.5) - S2 = int(L2**0.5) - table_pretrained_resized = F.interpolate( - table_pretrained.permute(1, 0).reshape(1, nH1, S1, S1), - size=(S2, S2), - mode='bicubic') - state_dict[table_key] = table_pretrained_resized.view( - nH2, L2).permute(1, 0).contiguous() - - # load state_dict - self.load_state_dict(state_dict, False) - - def forward(self, x): - x, hw_shape = self.patch_embed(x) - - if self.use_abs_pos_embed: - x = x + self.absolute_pos_embed - x = self.drop_after_pos(x) - - outs = [] - for i, stage in enumerate(self.stages): - x, hw_shape, out, out_hw_shape = stage(x, hw_shape) - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - out = norm_layer(out) - out = out.view(-1, *out_hw_shape, - self.num_features[i]).permute(0, 3, 1, - 2).contiguous() - outs.append(out) - - return outs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/trident_resnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/trident_resnet.py deleted file mode 100644 index 013ba64b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/backbones/trident_resnet.py +++ /dev/null @@ -1,298 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule -from torch.nn.modules.utils import _pair - -from mmdet.models.backbones.resnet import Bottleneck, ResNet -from mmdet.models.builder import BACKBONES - - -class TridentConv(BaseModule): - """Trident Convolution Module. - - Args: - in_channels (int): Number of channels in input. - out_channels (int): Number of channels in output. 
- kernel_size (int): Size of convolution kernel. - stride (int, optional): Convolution stride. Default: 1. - trident_dilations (tuple[int, int, int], optional): Dilations of - different trident branch. Default: (1, 2, 3). - test_branch_idx (int, optional): In inference, all 3 branches will - be used if `test_branch_idx==-1`, otherwise only branch with - index `test_branch_idx` will be used. Default: 1. - bias (bool, optional): Whether to use bias in convolution or not. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - trident_dilations=(1, 2, 3), - test_branch_idx=1, - bias=False, - init_cfg=None): - super(TridentConv, self).__init__(init_cfg) - self.num_branch = len(trident_dilations) - self.with_bias = bias - self.test_branch_idx = test_branch_idx - self.stride = _pair(stride) - self.kernel_size = _pair(kernel_size) - self.paddings = _pair(trident_dilations) - self.dilations = trident_dilations - self.in_channels = in_channels - self.out_channels = out_channels - self.bias = bias - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels, *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.bias = None - - def extra_repr(self): - tmpstr = f'in_channels={self.in_channels}' - tmpstr += f', out_channels={self.out_channels}' - tmpstr += f', kernel_size={self.kernel_size}' - tmpstr += f', num_branch={self.num_branch}' - tmpstr += f', test_branch_idx={self.test_branch_idx}' - tmpstr += f', stride={self.stride}' - tmpstr += f', paddings={self.paddings}' - tmpstr += f', dilations={self.dilations}' - tmpstr += f', bias={self.bias}' - return tmpstr - - def forward(self, inputs): - if self.training or self.test_branch_idx == -1: - outputs = [ - F.conv2d(input, self.weight, self.bias, self.stride, padding, - dilation) for input, dilation, padding in zip( - inputs, self.dilations, self.paddings) - ] - else: - assert len(inputs) == 1 - outputs = [ - F.conv2d(inputs[0], self.weight, self.bias, self.stride, - self.paddings[self.test_branch_idx], - self.dilations[self.test_branch_idx]) - ] - - return outputs - - -# Since TridentNet is defined over ResNet50 and ResNet101, here we -# only support TridentBottleneckBlock. -class TridentBottleneck(Bottleneck): - """BottleBlock for TridentResNet. - - Args: - trident_dilations (tuple[int, int, int]): Dilations of different - trident branch. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. - concat_output (bool): Whether to concat the output list to a Tensor. - `True` only in the last Block. 
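The TridentConv forward above applies a single shared weight with a different dilation per branch; a minimal functional sketch (shapes chosen for illustration):

```python
import torch
import torch.nn.functional as F

# One weight tensor shared by all branches; only the dilation differs.
weight = torch.randn(8, 3, 3, 3)              # (out_ch, in_ch, kH, kW)
x = torch.randn(1, 3, 32, 32)
dilations = (1, 2, 3)
# padding == dilation keeps the spatial size for a 3x3 kernel
outs = [F.conv2d(x, weight, stride=1, padding=d, dilation=d) for d in dilations]
print([tuple(o.shape) for o in outs])         # three (1, 8, 32, 32) outputs
```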
- """ - - def __init__(self, trident_dilations, test_branch_idx, concat_output, - **kwargs): - - super(TridentBottleneck, self).__init__(**kwargs) - self.trident_dilations = trident_dilations - self.num_branch = len(trident_dilations) - self.concat_output = concat_output - self.test_branch_idx = test_branch_idx - self.conv2 = TridentConv( - self.planes, - self.planes, - kernel_size=3, - stride=self.conv2_stride, - bias=False, - trident_dilations=self.trident_dilations, - test_branch_idx=test_branch_idx, - init_cfg=dict( - type='Kaiming', - distribution='uniform', - mode='fan_in', - override=dict(name='conv2'))) - - def forward(self, x): - - def _inner_forward(x): - num_branch = ( - self.num_branch - if self.training or self.test_branch_idx == -1 else 1) - identity = x - if not isinstance(x, list): - x = (x, ) * num_branch - identity = x - if self.downsample is not None: - identity = [self.downsample(b) for b in x] - - out = [self.conv1(b) for b in x] - out = [self.norm1(b) for b in out] - out = [self.relu(b) for b in out] - - if self.with_plugins: - for k in range(len(out)): - out[k] = self.forward_plugin(out[k], - self.after_conv1_plugin_names) - - out = self.conv2(out) - out = [self.norm2(b) for b in out] - out = [self.relu(b) for b in out] - if self.with_plugins: - for k in range(len(out)): - out[k] = self.forward_plugin(out[k], - self.after_conv2_plugin_names) - - out = [self.conv3(b) for b in out] - out = [self.norm3(b) for b in out] - - if self.with_plugins: - for k in range(len(out)): - out[k] = self.forward_plugin(out[k], - self.after_conv3_plugin_names) - - out = [ - out_b + identity_b for out_b, identity_b in zip(out, identity) - ] - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = [self.relu(b) for b in out] - if self.concat_output: - out = torch.cat(out, dim=0) - return out - - -def make_trident_res_layer(block, - inplanes, - planes, - num_blocks, - stride=1, - trident_dilations=(1, 2, 3), - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - test_branch_idx=-1): - """Build Trident Res Layers.""" - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - for i in range(num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride if i == 0 else 1, - trident_dilations=trident_dilations, - downsample=downsample if i == 0 else None, - style=style, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=plugins, - test_branch_idx=test_branch_idx, - concat_output=True if i == num_blocks - 1 else False)) - inplanes = planes * block.expansion - return nn.Sequential(*layers) - - -@BACKBONES.register_module() -class TridentResNet(ResNet): - """The stem layer, stage 1 and stage 2 in Trident ResNet are identical to - ResNet, while in stage 3, Trident BottleBlock is utilized to replace the - normal BottleBlock to yield trident output. Different branch shares the - convolution weight but uses different dilations to achieve multi-scale - output. 
- - / stage3(b0) \ - x - stem - stage1 - stage2 - stage3(b1) - output - \ stage3(b2) / - - Args: - depth (int): Depth of resnet, from {50, 101, 152}. - num_branch (int): Number of branches in TridentNet. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. - trident_dilations (tuple[int]): Dilations of different trident branch. - len(trident_dilations) should be equal to num_branch. - """ # noqa - - def __init__(self, depth, num_branch, test_branch_idx, trident_dilations, - **kwargs): - - assert num_branch == len(trident_dilations) - assert depth in (50, 101, 152) - super(TridentResNet, self).__init__(depth, **kwargs) - assert self.num_stages == 3 - self.test_branch_idx = test_branch_idx - self.num_branch = num_branch - - last_stage_idx = self.num_stages - 1 - stride = self.strides[last_stage_idx] - dilation = trident_dilations - dcn = self.dcn if self.stage_with_dcn[last_stage_idx] else None - if self.plugins is not None: - stage_plugins = self.make_stage_plugins(self.plugins, - last_stage_idx) - else: - stage_plugins = None - planes = self.base_channels * 2**last_stage_idx - res_layer = make_trident_res_layer( - TridentBottleneck, - inplanes=(self.block.expansion * self.base_channels * - 2**(last_stage_idx - 1)), - planes=planes, - num_blocks=self.stage_blocks[last_stage_idx], - stride=stride, - trident_dilations=dilation, - style=self.style, - with_cp=self.with_cp, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=dcn, - plugins=stage_plugins, - test_branch_idx=self.test_branch_idx) - - layer_name = f'layer{last_stage_idx + 1}' - - self.__setattr__(layer_name, res_layer) - self.res_layers.pop(last_stage_idx) - self.res_layers.insert(last_stage_idx, layer_name) - - self._freeze_stages() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/builder.py deleted file mode 100644 index ace6209f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/builder.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
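The builder module that follows routes every component type through one shared registry; a minimal usage sketch of that pattern, assuming mmcv 1.x is installed (TinyBackbone is a made-up class for illustration only):

```python
from mmcv.utils import Registry

# one registry for every component type, mirroring the builder below
MODELS = Registry('models')
BACKBONES = MODELS

@BACKBONES.register_module()
class TinyBackbone:                 # made-up example component
    def __init__(self, depth=18):
        self.depth = depth

backbone = BACKBONES.build(dict(type='TinyBackbone', depth=50))
print(type(backbone).__name__, backbone.depth)   # TinyBackbone 50
```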
-import warnings - -from mmcv.cnn import MODELS as MMCV_MODELS -from mmcv.utils import Registry - -MODELS = Registry('models', parent=MMCV_MODELS) - -BACKBONES = MODELS -NECKS = MODELS -ROI_EXTRACTORS = MODELS -SHARED_HEADS = MODELS -HEADS = MODELS -LOSSES = MODELS -DETECTORS = MODELS - - -def build_backbone(cfg): - """Build backbone.""" - return BACKBONES.build(cfg) - - -def build_neck(cfg): - """Build neck.""" - return NECKS.build(cfg) - - -def build_roi_extractor(cfg): - """Build roi extractor.""" - return ROI_EXTRACTORS.build(cfg) - - -def build_shared_head(cfg): - """Build shared head.""" - return SHARED_HEADS.build(cfg) - - -def build_head(cfg): - """Build head.""" - return HEADS.build(cfg) - - -def build_loss(cfg): - """Build loss.""" - return LOSSES.build(cfg) - - -def build_detector(cfg, train_cfg=None, test_cfg=None): - """Build detector.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return DETECTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/__init__.py deleted file mode 100644 index 375197a6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .anchor_free_head import AnchorFreeHead -from .anchor_head import AnchorHead -from .atss_head import ATSSHead -from .autoassign_head import AutoAssignHead -from .cascade_rpn_head import CascadeRPNHead, StageCascadeRPNHead -from .centernet_head import CenterNetHead -from .centripetal_head import CentripetalHead -from .corner_head import CornerHead -from .deformable_detr_head import DeformableDETRHead -from .detr_head import DETRHead -from .embedding_rpn_head import EmbeddingRPNHead -from .fcos_head import FCOSHead -from .fovea_head import FoveaHead -from .free_anchor_retina_head import FreeAnchorRetinaHead -from .fsaf_head import FSAFHead -from .ga_retina_head import GARetinaHead -from .ga_rpn_head import GARPNHead -from .gfl_head import GFLHead -from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead -from .lad_head import LADHead -from .ld_head import LDHead -from .mask2former_head import Mask2FormerHead -from .maskformer_head import MaskFormerHead -from .nasfcos_head import NASFCOSHead -from .paa_head import PAAHead -from .pisa_retinanet_head import PISARetinaHead -from .pisa_ssd_head import PISASSDHead -from .reppoints_head import RepPointsHead -from .retina_head import RetinaHead -from .retina_sepbn_head import RetinaSepBNHead -from .rpn_head import RPNHead -from .sabl_retina_head import SABLRetinaHead -from .solo_head import DecoupledSOLOHead, DecoupledSOLOLightHead, SOLOHead -from .ssd_head import SSDHead -from .tood_head import TOODHead -from .vfnet_head import VFNetHead -from .yolact_head import YOLACTHead, YOLACTProtonet, YOLACTSegmHead -from .yolo_head import YOLOV3Head -from .yolof_head import YOLOFHead -from .yolox_head import YOLOXHead - -__all__ = [ - 'AnchorFreeHead', 'AnchorHead', 'GuidedAnchorHead', 'FeatureAdaption', - 
'RPNHead', 'GARPNHead', 'RetinaHead', 'RetinaSepBNHead', 'GARetinaHead', - 'SSDHead', 'FCOSHead', 'RepPointsHead', 'FoveaHead', - 'FreeAnchorRetinaHead', 'ATSSHead', 'FSAFHead', 'NASFCOSHead', - 'PISARetinaHead', 'PISASSDHead', 'GFLHead', 'CornerHead', 'YOLACTHead', - 'YOLACTSegmHead', 'YOLACTProtonet', 'YOLOV3Head', 'PAAHead', - 'SABLRetinaHead', 'CentripetalHead', 'VFNetHead', 'StageCascadeRPNHead', - 'CascadeRPNHead', 'EmbeddingRPNHead', 'LDHead', 'CascadeRPNHead', - 'AutoAssignHead', 'DETRHead', 'YOLOFHead', 'DeformableDETRHead', - 'SOLOHead', 'DecoupledSOLOHead', 'CenterNetHead', 'YOLOXHead', - 'DecoupledSOLOLightHead', 'LADHead', 'TOODHead', 'MaskFormerHead', - 'Mask2FormerHead' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/anchor_free_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/anchor_free_head.py deleted file mode 100644 index b0460b94..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/anchor_free_head.py +++ /dev/null @@ -1,350 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from abc import abstractmethod - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import force_fp32 - -from mmdet.core import build_bbox_coder, multi_apply -from mmdet.core.anchor.point_generator import MlvlPointGenerator -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class AnchorFreeHead(BaseDenseHead, BBoxTestMixin): - """Anchor-free head (FCOS, Fovea, RepPoints, etc.). - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - stacked_convs (int): Number of stacking convs of the head. - strides (tuple): Downsample factor of each feature map. - dcn_on_last_conv (bool): If true, use dcn in the last layer of - towers. Default: False. - conv_bias (bool | str): If specified as `auto`, it will be decided by - the norm_cfg. Bias of conv will be set as True if `norm_cfg` is - None, otherwise False. Default: "auto". - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - bbox_coder (dict): Config of bbox coder. Defaults - 'DistancePointBBoxCoder'. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ # noqa: W605 - - _version = 1 - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - stacked_convs=4, - strides=(4, 8, 16, 32, 64), - dcn_on_last_conv=False, - conv_bias='auto', - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='IoULoss', loss_weight=1.0), - bbox_coder=dict(type='DistancePointBBoxCoder'), - conv_cfg=None, - norm_cfg=None, - train_cfg=None, - test_cfg=None, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01))): - super(AnchorFreeHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.dcn_on_last_conv = dcn_on_last_conv - assert conv_bias == 'auto' or isinstance(conv_bias, bool) - self.conv_bias = conv_bias - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.bbox_coder = build_bbox_coder(bbox_coder) - - self.prior_generator = MlvlPointGenerator(strides) - - # In order to keep a more general interface and be consistent with - # anchor_head. We can think of point like one anchor - self.num_base_priors = self.prior_generator.num_base_priors[0] - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - - self._init_layers() - - def _init_layers(self): - """Initialize layers of the head.""" - self._init_cls_convs() - self._init_reg_convs() - self._init_predictor() - - def _init_cls_convs(self): - """Initialize classification conv layers of the head.""" - self.cls_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_reg_convs(self): - """Initialize bbox regression conv layers of the head.""" - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_predictor(self): - """Initialize predictor layers of the head.""" - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Hack some keys of the model state dict so that can load checkpoints - of previous version.""" - version = local_metadata.get('version', None) - if version is None: - # the key is different in early versions - # for example, 'fcos_cls' become 'conv_cls' now - bbox_head_keys = [ - k for k in state_dict.keys() if 
k.startswith(prefix) - ] - ori_predictor_keys = [] - new_predictor_keys = [] - # e.g. 'fcos_cls' or 'fcos_reg' - for key in bbox_head_keys: - ori_predictor_keys.append(key) - key = key.split('.') - conv_name = None - if key[1].endswith('cls'): - conv_name = 'conv_cls' - elif key[1].endswith('reg'): - conv_name = 'conv_reg' - elif key[1].endswith('centerness'): - conv_name = 'conv_centerness' - else: - assert NotImplementedError - if conv_name is not None: - key[1] = conv_name - new_predictor_keys.append('.'.join(key)) - else: - ori_predictor_keys.pop(-1) - for i in range(len(new_predictor_keys)): - state_dict[new_predictor_keys[i]] = state_dict.pop( - ori_predictor_keys[i]) - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually contain classification scores and bbox predictions. - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - """ - return multi_apply(self.forward_single, feats)[:2] - - def forward_single(self, x): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - - Returns: - tuple: Scores for each class, bbox predictions, features - after classification and regression conv layers, some - models needs these features like FCOS. - """ - cls_feat = x - reg_feat = x - - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - cls_score = self.conv_cls(cls_feat) - - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = self.conv_reg(reg_feat) - return cls_score, bbox_pred, cls_feat, reg_feat - - @abstractmethod - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - """ - - raise NotImplementedError - - @abstractmethod - def get_targets(self, points, gt_bboxes_list, gt_labels_list): - """Compute regression, classification and centerness targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). 
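The forward above maps forward_single over the feature levels with multi_apply; a simplified standalone sketch of that helper and its transposed return value (toy head function assumed for illustration):

```python
from functools import partial

def multi_apply(func, *args, **kwargs):
    """Apply func to each per-level input, then transpose the per-level
    tuples into a tuple of per-output lists (simplified sketch)."""
    pfunc = partial(func, **kwargs) if kwargs else func
    map_results = map(pfunc, *args)
    return tuple(map(list, zip(*map_results)))

def toy_head(feat):
    return feat * 2, feat + 1        # (cls_score, bbox_pred) for one level

cls_scores, bbox_preds = multi_apply(toy_head, [1, 2, 3])
print(cls_scores, bbox_preds)        # [2, 4, 6] [2, 3, 4]
```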
- """ - raise NotImplementedError - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points of a single scale level. - - This function will be deprecated soon. - """ - - warnings.warn( - '`_get_points_single` in `AnchorFreeHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map ' - 'with `self.prior_generator.single_level_grid_priors` ') - - h, w = featmap_size - # First create Range with the default dtype, than convert to - # target `dtype` for onnx exporting. - x_range = torch.arange(w, device=device).to(dtype) - y_range = torch.arange(h, device=device).to(dtype) - y, x = torch.meshgrid(y_range, x_range) - if flatten: - y = y.flatten() - x = x.flatten() - return y, x - - def get_points(self, featmap_sizes, dtype, device, flatten=False): - """Get points according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - dtype (torch.dtype): Type of points. - device (torch.device): Device of points. - - Returns: - tuple: points of each image. - """ - warnings.warn( - '`get_points` in `AnchorFreeHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of all levels ' - 'with `self.prior_generator.grid_priors` ') - - mlvl_points = [] - for i in range(len(featmap_sizes)): - mlvl_points.append( - self._get_points_single(featmap_sizes[i], self.strides[i], - dtype, device, flatten)) - return mlvl_points - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[ndarray]: bbox results of each class - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/anchor_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/anchor_head.py deleted file mode 100644 index d1bfab62..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/anchor_head.py +++ /dev/null @@ -1,542 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, images_to_levels, - multi_apply, unmap) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class AnchorHead(BaseDenseHead, BBoxTestMixin): - """Anchor-based head (RPN, RetinaNet, SSD, etc.). - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. 
- reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=(.0, .0, .0, .0), - target_stds=(1.0, 1.0, 1.0, 1.0)), - reg_decoded_bbox=False, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=dict(type='Normal', layer='Conv2d', std=0.01)): - super(AnchorHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - - if self.cls_out_channels <= 0: - raise ValueError(f'num_classes={num_classes} is too small') - self.reg_decoded_bbox = reg_decoded_bbox - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - if hasattr(self.train_cfg, - 'sampler') and self.train_cfg.sampler.type.split( - '.')[-1] != 'PseudoSampler': - self.sampling = True - sampler_cfg = self.train_cfg.sampler - # avoid BC-breaking - if loss_cls['type'] in [ - 'FocalLoss', 'GHMC', 'QualityFocalLoss' - ]: - warnings.warn( - 'DeprecationWarning: Determining whether to sampling' - 'by loss type is deprecated, please delete sampler in' - 'your config when using `FocalLoss`, `GHMC`, ' - '`QualityFocalLoss` or other FocalLoss variant.') - self.sampling = False - sampler_cfg = dict(type='PseudoSampler') - else: - self.sampling = False - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - self.prior_generator = build_prior_generator(anchor_generator) - - # Usually the numbers of anchors for each level are the same - # except SSD detectors. 
So it is an int in the most dense - # heads but a list of int in SSDHead - self.num_base_priors = self.prior_generator.num_base_priors[0] - self._init_layers() - - @property - def num_anchors(self): - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'for consistency or also use ' - '`num_base_priors` instead') - return self.prior_generator.num_base_priors[0] - - @property - def anchor_generator(self): - warnings.warn('DeprecationWarning: anchor_generator is deprecated, ' - 'please use "prior_generator" instead') - return self.prior_generator - - def _init_layers(self): - """Initialize layers of the head.""" - self.conv_cls = nn.Conv2d(self.in_channels, - self.num_base_priors * self.cls_out_channels, - 1) - self.conv_reg = nn.Conv2d(self.in_channels, self.num_base_priors * 4, - 1) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level \ - the channels number is num_base_priors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale \ - level, the channels number is num_base_priors * 4. - """ - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - return cls_score, bbox_pred - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: A tuple of classification scores and bbox prediction. - - - cls_scores (list[Tensor]): Classification scores for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_base_priors * num_classes. - - bbox_preds (list[Tensor]): Box energies / deltas for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_base_priors * 4. - """ - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get anchors according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): Device for returned tensors - - Returns: - tuple: - anchor_list (list[Tensor]): Anchors of each image. - valid_flag_list (list[Tensor]): Valid flags of each image. - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # anchors for one time - multi_level_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - anchor_list = [multi_level_anchors for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level anchors - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.prior_generator.valid_flags( - featmap_sizes, img_meta['pad_shape'], device) - valid_flag_list.append(multi_level_flags) - - return anchor_list, valid_flag_list - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). 
- gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - img_meta (dict): Meta info of the image. - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: - labels_list (list[Tensor]): Labels of each level - label_weights_list (list[Tensor]): Label weights of each level - bbox_targets_list (list[Tensor]): BBox targets of each level - bbox_weights_list (list[Tensor]): BBox weights of each level - num_total_pos (int): Number of positive samples in all images - num_total_neg (int): Number of negative samples in all images - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - assign_result = self.assigner.assign( - anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, - fill=self.num_classes) # fill bg label - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds, sampling_result) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - return_sampling_results=False): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. 
The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each - level. - - bbox_targets_list (list[Tensor]): BBox targets of each level. - - bbox_weights_list (list[Tensor]): BBox weights of each level. - - num_total_pos (int): Number of positive samples in all - images. - - num_total_neg (int): Number of negative samples in all - images. - - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors to a single tensor - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - pos_inds_list, neg_inds_list, sampling_results_list) = results[:7] - rest_results = list(results[7:]) # user-added return values - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. 
multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - res = (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - if return_sampling_results: - res = res + (sampling_results_list, ) - for i, r in enumerate(rest_results): # user-added return values - rest_results[i] = images_to_levels(r, num_level_anchors) - - return res + tuple(rest_results) - - def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Compute loss of a single scale level. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - bbox_weights (Tensor): BBox regression loss weights of each anchor - with shape (N, num_total_anchors, 4). - num_total_samples (int): If sampling, num total samples equal to - the number of total anchors; Otherwise, it is the number of - positive anchors. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # classification loss - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - # regression loss - bbox_targets = bbox_targets.reshape(-1, 4) - bbox_weights = bbox_weights.reshape(-1, 4) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - anchors = anchors.reshape(-1, 4) - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_bbox = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - return loss_cls, loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), where - 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,), The length of list should always be 1. - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/atss_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/atss_head.py deleted file mode 100644 index e8f401ca..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/atss_head.py +++ /dev/null @@ -1,501 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, Scale -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, build_sampler, - images_to_levels, multi_apply, reduce_mean, unmap) -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class ATSSHead(AnchorHead): - """Bridging the Gap Between Anchor-based and Anchor-free Detection via - Adaptive Training Sample Selection. - - ATSS head structure is similar with FCOS, however ATSS use anchor boxes - and assign label by Adaptive Training Sample Selection instead max-iou. 
- - https://arxiv.org/abs/1912.02424 - """ - - def __init__(self, - num_classes, - in_channels, - pred_kernel_size=3, - stacked_convs=4, - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - reg_decoded_bbox=True, - loss_centerness=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='atss_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.pred_kernel_size = pred_kernel_size - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(ATSSHead, self).__init__( - num_classes, - in_channels, - reg_decoded_bbox=reg_decoded_bbox, - init_cfg=init_cfg, - **kwargs) - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.loss_centerness = build_loss(loss_centerness) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - pred_pad_size = self.pred_kernel_size // 2 - self.atss_cls = nn.Conv2d( - self.feat_channels, - self.num_anchors * self.cls_out_channels, - self.pred_kernel_size, - padding=pred_pad_size) - self.atss_reg = nn.Conv2d( - self.feat_channels, - self.num_base_priors * 4, - self.pred_kernel_size, - padding=pred_pad_size) - self.atss_centerness = nn.Conv2d( - self.feat_channels, - self.num_base_priors * 1, - self.pred_kernel_size, - padding=pred_pad_size) - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.prior_generator.strides]) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - return multi_apply(self.forward_single, feats, self.scales) - - def forward_single(self, x, scale): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale - level, the channels number is num_anchors * 4. - centerness (Tensor): Centerness for a single scale level, the - channel number is (N, num_anchors * 1, H, W). 
- """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.atss_cls(cls_feat) - # we just follow atss, not apply exp in bbox_pred - bbox_pred = scale(self.atss_reg(reg_feat)).float() - centerness = self.atss_centerness(reg_feat) - return cls_score, bbox_pred, centerness - - def loss_single(self, anchors, cls_score, bbox_pred, centerness, labels, - label_weights, bbox_targets, num_total_samples): - """Compute loss of a single scale level. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - num_total_samples (int): Number os positive samples that is - reduced over all GPUs. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, 1).reshape( - -1, self.cls_out_channels).contiguous() - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - centerness = centerness.permute(0, 2, 3, 1).reshape(-1) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # classification loss - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_centerness = centerness[pos_inds] - - centerness_targets = self.centerness_target( - pos_anchors, pos_bbox_targets) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchors, pos_bbox_pred) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_bbox_targets, - weight=centerness_targets, - avg_factor=1.0) - - # centerness loss - loss_centerness = self.loss_centerness( - pos_centerness, - centerness_targets, - avg_factor=num_total_samples) - - else: - loss_bbox = bbox_pred.sum() * 0 - loss_centerness = centerness.sum() * 0 - centerness_targets = bbox_targets.new_tensor(0.) - - return loss_cls, loss_bbox, loss_centerness, centerness_targets.sum() - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses')) - def loss(self, - cls_scores, - bbox_preds, - centernesses, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - centernesses (list[Tensor]): Centerness for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. 
- gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, loss_centerness,\ - bbox_avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - centernesses, - labels_list, - label_weights_list, - bbox_targets_list, - num_total_samples=num_total_samples) - - bbox_avg_factor = sum(bbox_avg_factor) - bbox_avg_factor = reduce_mean(bbox_avg_factor).clamp_(min=1).item() - losses_bbox = list(map(lambda x: x / bbox_avg_factor, losses_bbox)) - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_centerness=loss_centerness) - - def centerness_target(self, anchors, gts): - # only calculate pos centerness targets, otherwise there may be nan - anchors_cx = (anchors[:, 2] + anchors[:, 0]) / 2 - anchors_cy = (anchors[:, 3] + anchors[:, 1]) / 2 - l_ = anchors_cx - gts[:, 0] - t_ = anchors_cy - gts[:, 1] - r_ = gts[:, 2] - anchors_cx - b_ = gts[:, 3] - anchors_cy - - left_right = torch.stack([l_, r_], dim=1) - top_bottom = torch.stack([t_, b_], dim=1) - centerness = torch.sqrt( - (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * - (top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0])) - assert not torch.isnan(centerness).any() - return centerness - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Get targets for ATSS head. - - This method is almost the same as `AnchorHead.get_targets()`. Besides - returning the targets as the parent method does, it also returns the - anchors as the first element of the returned tuple. 
- """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, bbox_weights_list, num_total_pos, - num_total_neg) - - def _get_target_single(self, - flat_anchors, - valid_flags, - num_level_anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression, classification targets for anchors in a single - image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - num_level_anchors Tensor): Number of anchors of each scale level. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - labels (Tensor): Labels of all anchors in the image with shape - (N,). - label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). - bbox_weights (Tensor): BBox weights of all anchors in the - image with shape (N, 4) - pos_inds (Tensor): Indices of positive anchor with shape - (num_pos,). - neg_inds (Tensor): Indices of negative anchor with shape - (num_neg,). 
- """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - num_level_anchors_inside = self.get_num_level_anchors_inside( - num_level_anchors, inside_flags) - assign_result = self.assigner.assign(anchors, num_level_anchors_inside, - gt_bboxes, gt_bboxes_ignore, - gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if self.reg_decoded_bbox: - pos_bbox_targets = sampling_result.pos_gt_bboxes - else: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (anchors, labels, label_weights, bbox_targets, bbox_weights, - pos_inds, neg_inds) - - def get_num_level_anchors_inside(self, num_level_anchors, inside_flags): - split_inside_flags = torch.split(inside_flags, num_level_anchors) - num_level_anchors_inside = [ - int(flags.sum()) for flags in split_inside_flags - ] - return num_level_anchors_inside diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/autoassign_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/autoassign_head.py deleted file mode 100644 index 446da244..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/autoassign_head.py +++ /dev/null @@ -1,527 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import bias_init_with_prob, normal_init -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply -from mmdet.core.anchor.point_generator import MlvlPointGenerator -from mmdet.core.bbox import bbox_overlaps -from mmdet.models import HEADS -from mmdet.models.dense_heads.atss_head import reduce_mean -from mmdet.models.dense_heads.fcos_head import FCOSHead -from mmdet.models.dense_heads.paa_head import levels_to_images - -EPS = 1e-12 - - -class CenterPrior(nn.Module): - """Center Weighting module to adjust the category-specific prior - distributions. 
- - Args: - force_topk (bool): When no point falls into gt_bbox, forcibly - select the k points closest to the center to calculate - the center prior. Defaults to False. - topk (int): The number of points used to calculate the - center prior when no point falls in gt_bbox. Only work when - force_topk if True. Defaults to 9. - num_classes (int): The class number of dataset. Defaults to 80. - strides (tuple[int]): The stride of each input feature map. Defaults - to (8, 16, 32, 64, 128). - """ - - def __init__(self, - force_topk=False, - topk=9, - num_classes=80, - strides=(8, 16, 32, 64, 128)): - super(CenterPrior, self).__init__() - self.mean = nn.Parameter(torch.zeros(num_classes, 2)) - self.sigma = nn.Parameter(torch.ones(num_classes, 2)) - self.strides = strides - self.force_topk = force_topk - self.topk = topk - - def forward(self, anchor_points_list, gt_bboxes, labels, - inside_gt_bbox_mask): - """Get the center prior of each point on the feature map for each - instance. - - Args: - anchor_points_list (list[Tensor]): list of coordinate - of points on feature map. Each with shape - (num_points, 2). - gt_bboxes (Tensor): The gt_bboxes with shape of - (num_gt, 4). - labels (Tensor): The gt_labels with shape of (num_gt). - inside_gt_bbox_mask (Tensor): Tensor of bool type, - with shape of (num_points, num_gt), each - value is used to mark whether this point falls - within a certain gt. - - Returns: - tuple(Tensor): - - - center_prior_weights(Tensor): Float tensor with shape \ - of (num_points, num_gt). Each value represents \ - the center weighting coefficient. - - inside_gt_bbox_mask (Tensor): Tensor of bool type, \ - with shape of (num_points, num_gt), each \ - value is used to mark whether this point falls \ - within a certain gt or is the topk nearest points for \ - a specific gt_bbox. 
- """ - inside_gt_bbox_mask = inside_gt_bbox_mask.clone() - num_gts = len(labels) - num_points = sum([len(item) for item in anchor_points_list]) - if num_gts == 0: - return gt_bboxes.new_zeros(num_points, - num_gts), inside_gt_bbox_mask - center_prior_list = [] - for slvl_points, stride in zip(anchor_points_list, self.strides): - # slvl_points: points from single level in FPN, has shape (h*w, 2) - # single_level_points has shape (h*w, num_gt, 2) - single_level_points = slvl_points[:, None, :].expand( - (slvl_points.size(0), len(gt_bboxes), 2)) - gt_center_x = ((gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2) - gt_center_y = ((gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2) - gt_center = torch.stack((gt_center_x, gt_center_y), dim=1) - gt_center = gt_center[None] - # instance_center has shape (1, num_gt, 2) - instance_center = self.mean[labels][None] - # instance_sigma has shape (1, num_gt, 2) - instance_sigma = self.sigma[labels][None] - # distance has shape (num_points, num_gt, 2) - distance = (((single_level_points - gt_center) / float(stride) - - instance_center)**2) - center_prior = torch.exp(-distance / - (2 * instance_sigma**2)).prod(dim=-1) - center_prior_list.append(center_prior) - center_prior_weights = torch.cat(center_prior_list, dim=0) - - if self.force_topk: - gt_inds_no_points_inside = torch.nonzero( - inside_gt_bbox_mask.sum(0) == 0).reshape(-1) - if gt_inds_no_points_inside.numel(): - topk_center_index = \ - center_prior_weights[:, gt_inds_no_points_inside].topk( - self.topk, - dim=0)[1] - temp_mask = inside_gt_bbox_mask[:, gt_inds_no_points_inside] - inside_gt_bbox_mask[:, gt_inds_no_points_inside] = \ - torch.scatter(temp_mask, - dim=0, - index=topk_center_index, - src=torch.ones_like( - topk_center_index, - dtype=torch.bool)) - - center_prior_weights[~inside_gt_bbox_mask] = 0 - return center_prior_weights, inside_gt_bbox_mask - - -@HEADS.register_module() -class AutoAssignHead(FCOSHead): - """AutoAssignHead head used in AutoAssign. - - More details can be found in the `paper - `_ . - - Args: - force_topk (bool): Used in center prior initialization to - handle extremely small gt. Default is False. - topk (int): The number of points used to calculate the - center prior when no point falls in gt_bbox. Only work when - force_topk if True. Defaults to 9. - pos_loss_weight (float): The loss weight of positive loss - and with default value 0.25. - neg_loss_weight (float): The loss weight of negative loss - and with default value 0.75. - center_loss_weight (float): The loss weight of center prior - loss and with default value 0.75. - """ - - def __init__(self, - *args, - force_topk=False, - topk=9, - pos_loss_weight=0.25, - neg_loss_weight=0.75, - center_loss_weight=0.75, - **kwargs): - super().__init__(*args, conv_bias=True, **kwargs) - self.center_prior = CenterPrior( - force_topk=force_topk, - topk=topk, - num_classes=self.num_classes, - strides=self.strides) - self.pos_loss_weight = pos_loss_weight - self.neg_loss_weight = neg_loss_weight - self.center_loss_weight = center_loss_weight - self.prior_generator = MlvlPointGenerator(self.strides, offset=0) - - def init_weights(self): - """Initialize weights of the head. 
- - In particular, we have special initialization for classified conv's and - regression conv's bias - """ - - super(AutoAssignHead, self).init_weights() - bias_cls = bias_init_with_prob(0.02) - normal_init(self.conv_cls, std=0.01, bias=bias_cls) - normal_init(self.conv_reg, std=0.01, bias=4.0) - - def forward_single(self, x, scale, stride): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - stride (int): The corresponding stride for feature maps, only - used to normalize the bbox prediction when self.norm_on_bbox - is True. - - Returns: - tuple: scores for each class, bbox predictions and centerness \ - predictions of input feature maps. - """ - cls_score, bbox_pred, cls_feat, reg_feat = super( - FCOSHead, self).forward_single(x) - centerness = self.conv_centerness(reg_feat) - # scale the bbox_pred of different level - # float to avoid overflow when enabling FP16 - bbox_pred = scale(bbox_pred).float() - # bbox_pred needed for gradient computation has been modified - # by F.relu(bbox_pred) when run with PyTorch 1.10. So replace - # F.relu(bbox_pred) with bbox_pred.clamp(min=0) - bbox_pred = bbox_pred.clamp(min=0) - bbox_pred *= stride - return cls_score, bbox_pred, centerness - - def get_pos_loss_single(self, cls_score, objectness, reg_loss, gt_labels, - center_prior_weights): - """Calculate the positive loss of all points in gt_bboxes. - - Args: - cls_score (Tensor): All category scores for each point on - the feature map. The shape is (num_points, num_class). - objectness (Tensor): Foreground probability of all points, - has shape (num_points, 1). - reg_loss (Tensor): The regression loss of each gt_bbox and each - prediction box, has shape of (num_points, num_gt). - gt_labels (Tensor): The zeros based gt_labels of all gt - with shape of (num_gt,). - center_prior_weights (Tensor): Float tensor with shape - of (num_points, num_gt). Each value represents - the center weighting coefficient. - - Returns: - tuple[Tensor]: - - - pos_loss (Tensor): The positive loss of all points - in the gt_bboxes. - """ - # p_loc: localization confidence - p_loc = torch.exp(-reg_loss) - # p_cls: classification confidence - p_cls = (cls_score * objectness)[:, gt_labels] - # p_pos: joint confidence indicator - p_pos = p_cls * p_loc - - # 3 is a hyper-parameter to control the contributions of high and - # low confidence locations towards positive losses. - confidence_weight = torch.exp(p_pos * 3) - p_pos_weight = (confidence_weight * center_prior_weights) / ( - (confidence_weight * center_prior_weights).sum( - 0, keepdim=True)).clamp(min=EPS) - reweighted_p_pos = (p_pos * p_pos_weight).sum(0) - pos_loss = F.binary_cross_entropy( - reweighted_p_pos, - torch.ones_like(reweighted_p_pos), - reduction='none') - pos_loss = pos_loss.sum() * self.pos_loss_weight - return pos_loss, - - def get_neg_loss_single(self, cls_score, objectness, gt_labels, ious, - inside_gt_bbox_mask): - """Calculate the negative loss of all points in feature map. - - Args: - cls_score (Tensor): All category scores for each point on - the feature map. The shape is (num_points, num_class). - objectness (Tensor): Foreground probability of all points - and is shape of (num_points, 1). - gt_labels (Tensor): The zeros based label of all gt with shape of - (num_gt). - ious (Tensor): Float tensor with shape of (num_points, num_gt). - Each value represent the iou of pred_bbox and gt_bboxes. 
- inside_gt_bbox_mask (Tensor): Tensor of bool type, - with shape of (num_points, num_gt), each - value is used to mark whether this point falls - within a certain gt. - - Returns: - tuple[Tensor]: - - - neg_loss (Tensor): The negative loss of all points - in the feature map. - """ - num_gts = len(gt_labels) - joint_conf = (cls_score * objectness) - p_neg_weight = torch.ones_like(joint_conf) - if num_gts > 0: - # the order of dinmension would affect the value of - # p_neg_weight, we strictly follow the original - # implementation. - inside_gt_bbox_mask = inside_gt_bbox_mask.permute(1, 0) - ious = ious.permute(1, 0) - - foreground_idxs = torch.nonzero(inside_gt_bbox_mask, as_tuple=True) - temp_weight = (1 / (1 - ious[foreground_idxs]).clamp_(EPS)) - - def normalize(x): - return (x - x.min() + EPS) / (x.max() - x.min() + EPS) - - for instance_idx in range(num_gts): - idxs = foreground_idxs[0] == instance_idx - if idxs.any(): - temp_weight[idxs] = normalize(temp_weight[idxs]) - - p_neg_weight[foreground_idxs[1], - gt_labels[foreground_idxs[0]]] = 1 - temp_weight - - logits = (joint_conf * p_neg_weight) - neg_loss = ( - logits**2 * F.binary_cross_entropy( - logits, torch.zeros_like(logits), reduction='none')) - neg_loss = neg_loss.sum() * self.neg_loss_weight - return neg_loss, - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'objectnesses')) - def loss(self, - cls_scores, - bbox_preds, - objectnesses, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - objectnesses (list[Tensor]): objectness for each scale level, each - is a 4D-tensor, the channel number is num_points * 1. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - - assert len(cls_scores) == len(bbox_preds) == len(objectnesses) - all_num_gt = sum([len(item) for item in gt_bboxes]) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - inside_gt_bbox_mask_list, bbox_targets_list = self.get_targets( - all_level_points, gt_bboxes) - - center_prior_weight_list = [] - temp_inside_gt_bbox_mask_list = [] - for gt_bboxe, gt_label, inside_gt_bbox_mask in zip( - gt_bboxes, gt_labels, inside_gt_bbox_mask_list): - center_prior_weight, inside_gt_bbox_mask = \ - self.center_prior(all_level_points, gt_bboxe, gt_label, - inside_gt_bbox_mask) - center_prior_weight_list.append(center_prior_weight) - temp_inside_gt_bbox_mask_list.append(inside_gt_bbox_mask) - inside_gt_bbox_mask_list = temp_inside_gt_bbox_mask_list - mlvl_points = torch.cat(all_level_points, dim=0) - bbox_preds = levels_to_images(bbox_preds) - cls_scores = levels_to_images(cls_scores) - objectnesses = levels_to_images(objectnesses) - - reg_loss_list = [] - ious_list = [] - num_points = len(mlvl_points) - - for bbox_pred, encoded_targets, inside_gt_bbox_mask in zip( - bbox_preds, bbox_targets_list, inside_gt_bbox_mask_list): - temp_num_gt = encoded_targets.size(1) - expand_mlvl_points = mlvl_points[:, None, :].expand( - num_points, temp_num_gt, 2).reshape(-1, 2) - encoded_targets = encoded_targets.reshape(-1, 4) - expand_bbox_pred = bbox_pred[:, None, :].expand( - num_points, temp_num_gt, 4).reshape(-1, 4) - decoded_bbox_preds = self.bbox_coder.decode( - expand_mlvl_points, expand_bbox_pred) - decoded_target_preds = self.bbox_coder.decode( - expand_mlvl_points, encoded_targets) - with torch.no_grad(): - ious = bbox_overlaps( - decoded_bbox_preds, decoded_target_preds, is_aligned=True) - ious = ious.reshape(num_points, temp_num_gt) - if temp_num_gt: - ious = ious.max( - dim=-1, keepdim=True).values.repeat(1, temp_num_gt) - else: - ious = ious.new_zeros(num_points, temp_num_gt) - ious[~inside_gt_bbox_mask] = 0 - ious_list.append(ious) - loss_bbox = self.loss_bbox( - decoded_bbox_preds, - decoded_target_preds, - weight=None, - reduction_override='none') - reg_loss_list.append(loss_bbox.reshape(num_points, temp_num_gt)) - - cls_scores = [item.sigmoid() for item in cls_scores] - objectnesses = [item.sigmoid() for item in objectnesses] - pos_loss_list, = multi_apply(self.get_pos_loss_single, cls_scores, - objectnesses, reg_loss_list, gt_labels, - center_prior_weight_list) - pos_avg_factor = reduce_mean( - bbox_pred.new_tensor(all_num_gt)).clamp_(min=1) - pos_loss = sum(pos_loss_list) / pos_avg_factor - - neg_loss_list, = multi_apply(self.get_neg_loss_single, cls_scores, - objectnesses, gt_labels, ious_list, - inside_gt_bbox_mask_list) - neg_avg_factor = sum(item.data.sum() - for item in center_prior_weight_list) - neg_avg_factor = reduce_mean(neg_avg_factor).clamp_(min=1) - neg_loss = sum(neg_loss_list) / neg_avg_factor - - center_loss = [] - for i in range(len(img_metas)): - - if inside_gt_bbox_mask_list[i].any(): - center_loss.append( - len(gt_bboxes[i]) / - center_prior_weight_list[i].sum().clamp_(min=EPS)) - # when width or height of gt_bbox is smaller than stride of p3 - else: - center_loss.append(center_prior_weight_list[i].sum() * 0) - - center_loss = torch.stack(center_loss).mean() * self.center_loss_weight - - # avoid dead lock in DDP - if all_num_gt == 0: - pos_loss = bbox_preds[0].sum() * 0 - dummy_center_prior_loss = self.center_prior.mean.sum( - 
) * 0 + self.center_prior.sigma.sum() * 0 - center_loss = objectnesses[0].sum() * 0 + dummy_center_prior_loss - - loss = dict( - loss_pos=pos_loss, loss_neg=neg_loss, loss_center=center_loss) - - return loss - - def get_targets(self, points, gt_bboxes_list): - """Compute regression targets and each point inside or outside gt_bbox - in multiple images. - - Args: - points (list[Tensor]): Points of all fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - - Returns: - tuple(list[Tensor]): - - - inside_gt_bbox_mask_list (list[Tensor]): Each - Tensor is with bool type and shape of - (num_points, num_gt), each value - is used to mark whether this point falls - within a certain gt. - - concat_lvl_bbox_targets (list[Tensor]): BBox - targets of each level. Each tensor has shape - (num_points, num_gt, 4). - """ - - concat_points = torch.cat(points, dim=0) - # the number of points per img, per lvl - inside_gt_bbox_mask_list, bbox_targets_list = multi_apply( - self._get_target_single, gt_bboxes_list, points=concat_points) - return inside_gt_bbox_mask_list, bbox_targets_list - - def _get_target_single(self, gt_bboxes, points): - """Compute regression targets and each point inside or outside gt_bbox - for a single image. - - Args: - gt_bboxes (Tensor): gt_bbox of single image, has shape - (num_gt, 4). - points (Tensor): Points of all fpn level, has shape - (num_points, 2). - - Returns: - tuple[Tensor]: Containing the following Tensors: - - - inside_gt_bbox_mask (Tensor): Bool tensor with shape - (num_points, num_gt), each value is used to mark - whether this point falls within a certain gt. - - bbox_targets (Tensor): BBox targets of each points with - each gt_bboxes, has shape (num_points, num_gt, 4). - """ - num_points = points.size(0) - num_gts = gt_bboxes.size(0) - gt_bboxes = gt_bboxes[None].expand(num_points, num_gts, 4) - xs, ys = points[:, 0], points[:, 1] - xs = xs[:, None] - ys = ys[:, None] - left = xs - gt_bboxes[..., 0] - right = gt_bboxes[..., 2] - xs - top = ys - gt_bboxes[..., 1] - bottom = gt_bboxes[..., 3] - ys - bbox_targets = torch.stack((left, top, right, bottom), -1) - if num_gts: - inside_gt_bbox_mask = bbox_targets.min(-1)[0] > 0 - else: - inside_gt_bbox_mask = bbox_targets.new_zeros((num_points, num_gts), - dtype=torch.bool) - - return inside_gt_bbox_mask, bbox_targets - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Almost the same as the implementation in fcos, we remove half stride - offset to align with the original implementation. - - This function will be deprecated soon. - """ - warnings.warn( - '`_get_points_single` in `AutoAssignHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map ' - 'with `self.prior_generator.single_level_grid_priors` ') - y, x = super(FCOSHead, - self)._get_points_single(featmap_size, stride, dtype, - device) - points = torch.stack((x.reshape(-1) * stride, y.reshape(-1) * stride), - dim=-1) - return points diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/base_dense_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/base_dense_head.py deleted file mode 100644 index 0c7abb7b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/base_dense_head.py +++ /dev/null @@ -1,526 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from abc import ABCMeta, abstractmethod - -import torch -from mmcv.cnn.utils.weight_init import constant_init -from mmcv.ops import batched_nms -from mmcv.runner import BaseModule, force_fp32 - -from mmdet.core.utils import filter_scores_and_topk, select_single_mlvl - - -class BaseDenseHead(BaseModule, metaclass=ABCMeta): - """Base class for DenseHeads.""" - - def __init__(self, init_cfg=None): - super(BaseDenseHead, self).__init__(init_cfg) - - def init_weights(self): - super(BaseDenseHead, self).init_weights() - # avoid init_cfg overwrite the initialization of `conv_offset` - for m in self.modules(): - # DeformConv2dPack, ModulatedDeformConv2dPack - if hasattr(m, 'conv_offset'): - constant_init(m.conv_offset, 0) - - @abstractmethod - def loss(self, **kwargs): - """Compute losses of the head.""" - pass - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - score_factors=None, - img_metas=None, - cfg=None, - rescale=False, - with_nms=True, - **kwargs): - """Transform network outputs of a batch into bbox results. - - Note: When score_factors is not None, the cls_scores are - usually multiplied by it then obtain the real score used in NMS, - such as CenterNess in FCOS, IoU branch in ATSS. - - Args: - cls_scores (list[Tensor]): Classification scores for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * 4, H, W). - score_factors (list[Tensor], Optional): Score factor for - all scale level, each is a 4D-tensor, has shape - (batch_size, num_priors * 1, H, W). Default None. - img_metas (list[dict], Optional): Image meta info. Default None. - cfg (mmcv.Config, Optional): Test / postprocessing configuration, - if None, test_cfg would be used. Default None. - rescale (bool): If True, return boxes in original image space. - Default False. - with_nms (bool): If True, do nms before return boxes. - Default True. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. The second item is a - (n,) tensor where each item is the predicted class label of - the corresponding box. - """ - assert len(cls_scores) == len(bbox_preds) - - if score_factors is None: - # e.g. Retina, FreeAnchor, Foveabox, etc. - with_score_factors = False - else: - # e.g. FCOS, PAA, ATSS, AutoAssign, etc. 
- with_score_factors = True - assert len(cls_scores) == len(score_factors) - - num_levels = len(cls_scores) - - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=cls_scores[0].dtype, - device=cls_scores[0].device) - - result_list = [] - - for img_id in range(len(img_metas)): - img_meta = img_metas[img_id] - cls_score_list = select_single_mlvl(cls_scores, img_id) - bbox_pred_list = select_single_mlvl(bbox_preds, img_id) - if with_score_factors: - score_factor_list = select_single_mlvl(score_factors, img_id) - else: - score_factor_list = [None for _ in range(num_levels)] - - results = self._get_bboxes_single(cls_score_list, bbox_pred_list, - score_factor_list, mlvl_priors, - img_meta, cfg, rescale, with_nms, - **kwargs) - result_list.append(results) - return result_list - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image, each item has shape - (num_priors * 1, H, W). - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid. In all - anchor-based methods, it has shape (num_priors, 4). In - all anchor-free methods, it has shape (num_priors, 2) - when `with_stride=True`, otherwise it still has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - if score_factor_list[0] is None: - # e.g. Retina, FreeAnchor, etc. - with_score_factors = False - else: - # e.g. FCOS, PAA, ATSS, etc. 
- with_score_factors = True - - cfg = self.test_cfg if cfg is None else cfg - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - if with_score_factors: - mlvl_score_factors = [] - else: - mlvl_score_factors = None - for level_idx, (cls_score, bbox_pred, score_factor, priors) in \ - enumerate(zip(cls_score_list, bbox_pred_list, - score_factor_list, mlvl_priors)): - - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - if with_score_factors: - score_factor = score_factor.permute(1, 2, - 0).reshape(-1).sigmoid() - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - scores = cls_score.softmax(-1)[:, :-1] - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, keep_idxs, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - if with_score_factors: - score_factor = score_factor[keep_idxs] - - bboxes = self.bbox_coder.decode( - priors, bbox_pred, max_shape=img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - if with_score_factors: - mlvl_score_factors.append(score_factor) - - return self._bbox_post_process(mlvl_scores, mlvl_labels, mlvl_bboxes, - img_meta['scale_factor'], cfg, rescale, - with_nms, mlvl_score_factors, **kwargs) - - def _bbox_post_process(self, - mlvl_scores, - mlvl_labels, - mlvl_bboxes, - scale_factor, - cfg, - rescale=False, - with_nms=True, - mlvl_score_factors=None, - **kwargs): - """bbox post-processing method. - - The boxes would be rescaled to the original image scale and do - the nms operation. Usually `with_nms` is False is used for aug test. - - Args: - mlvl_scores (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_bboxes, ). - mlvl_labels (list[Tensor]): Box class labels from all scale - levels of a single image, each item has shape - (num_bboxes, ). - mlvl_bboxes (list[Tensor]): Decoded bboxes from all scale - levels of a single image, each item has shape (num_bboxes, 4). - scale_factor (ndarray, optional): Scale factor of the image arange - as (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - mlvl_score_factors (list[Tensor], optional): Score factor from - all scale levels of a single image, each item has shape - (num_bboxes, ). Default: None. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. 
If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - assert len(mlvl_scores) == len(mlvl_bboxes) == len(mlvl_labels) - - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_labels = torch.cat(mlvl_labels) - - if mlvl_score_factors is not None: - # TODO: Add sqrt operation in order to be consistent with - # the paper. - mlvl_score_factors = torch.cat(mlvl_score_factors) - mlvl_scores = mlvl_scores * mlvl_score_factors - - if with_nms: - if mlvl_bboxes.numel() == 0: - det_bboxes = torch.cat([mlvl_bboxes, mlvl_scores[:, None]], -1) - return det_bboxes, mlvl_labels - - det_bboxes, keep_idxs = batched_nms(mlvl_bboxes, mlvl_scores, - mlvl_labels, cfg.nms) - det_bboxes = det_bboxes[:cfg.max_per_img] - det_labels = mlvl_labels[keep_idxs][:cfg.max_per_img] - return det_bboxes, det_labels - else: - return mlvl_bboxes, mlvl_scores, mlvl_labels - - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple: - losses: (dict[str, Tensor]): A dictionary of loss components. - proposal_list (list[Tensor]): Proposals of each image. - """ - outs = self(x) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes( - *outs, img_metas=img_metas, cfg=proposal_cfg) - return losses, proposal_list - - def simple_test(self, feats, img_metas, rescale=False): - """Test function without test-time augmentation. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n, ). - """ - return self.simple_test_bboxes(feats, img_metas, rescale=rescale) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def onnx_export(self, - cls_scores, - bbox_preds, - score_factors=None, - img_metas=None, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - with shape (N, num_points * num_classes, H, W). 
- bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W). - score_factors (list[Tensor]): score_factors for each s - cale level with shape (N, num_points * 1, H, W). - Default: None. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. Default: None. - with_nms (bool): Whether apply nms to the bboxes. Default: True. - - Returns: - tuple[Tensor, Tensor] | list[tuple]: When `with_nms` is True, - it is tuple[Tensor, Tensor], first tensor bboxes with shape - [N, num_det, 5], 5 arrange as (x1, y1, x2, y2, score) - and second element is class labels of shape [N, num_det]. - When `with_nms` is False, first tensor is bboxes with - shape [N, num_det, 4], second tensor is raw score has - shape [N, num_det, num_classes]. - """ - assert len(cls_scores) == len(bbox_preds) - - num_levels = len(cls_scores) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - - mlvl_cls_scores = [cls_scores[i].detach() for i in range(num_levels)] - mlvl_bbox_preds = [bbox_preds[i].detach() for i in range(num_levels)] - - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shape = img_metas[0]['img_shape_for_onnx'] - - cfg = self.test_cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_priors) - device = cls_scores[0].device - batch_size = cls_scores[0].shape[0] - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), device=device, dtype=torch.long) - - # e.g. Retina, FreeAnchor, etc. - if score_factors is None: - with_score_factors = False - mlvl_score_factor = [None for _ in range(num_levels)] - else: - # e.g. FCOS, PAA, ATSS, etc. - with_score_factors = True - mlvl_score_factor = [ - score_factors[i].detach() for i in range(num_levels) - ] - mlvl_score_factors = [] - - mlvl_batch_bboxes = [] - mlvl_scores = [] - - for cls_score, bbox_pred, score_factors, priors in zip( - mlvl_cls_scores, mlvl_bbox_preds, mlvl_score_factor, - mlvl_priors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - scores = cls_score.permute(0, 2, 3, - 1).reshape(batch_size, -1, - self.cls_out_channels) - if self.use_sigmoid_cls: - scores = scores.sigmoid() - nms_pre_score = scores - else: - scores = scores.softmax(-1) - nms_pre_score = scores - - if with_score_factors: - score_factors = score_factors.permute(0, 2, 3, 1).reshape( - batch_size, -1).sigmoid() - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - priors = priors.expand(batch_size, -1, priors.size(-1)) - # Get top-k predictions - from mmdet.core.export import get_k_for_topk - nms_pre = get_k_for_topk(nms_pre_tensor, bbox_pred.shape[1]) - if nms_pre > 0: - - if with_score_factors: - nms_pre_score = (nms_pre_score * score_factors[..., None]) - else: - nms_pre_score = nms_pre_score - - # Get maximum scores for foreground classes. 
- if self.use_sigmoid_cls: - max_scores, _ = nms_pre_score.max(-1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = nms_pre_score[..., :-1].max(-1) - _, topk_inds = max_scores.topk(nms_pre) - - batch_inds = torch.arange( - batch_size, device=bbox_pred.device).view( - -1, 1).expand_as(topk_inds).long() - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - transformed_inds = bbox_pred.shape[1] * batch_inds + topk_inds - priors = priors.reshape( - -1, priors.size(-1))[transformed_inds, :].reshape( - batch_size, -1, priors.size(-1)) - bbox_pred = bbox_pred.reshape(-1, - 4)[transformed_inds, :].reshape( - batch_size, -1, 4) - scores = scores.reshape( - -1, self.cls_out_channels)[transformed_inds, :].reshape( - batch_size, -1, self.cls_out_channels) - if with_score_factors: - score_factors = score_factors.reshape( - -1, 1)[transformed_inds].reshape(batch_size, -1) - - bboxes = self.bbox_coder.decode( - priors, bbox_pred, max_shape=img_shape) - - mlvl_batch_bboxes.append(bboxes) - mlvl_scores.append(scores) - if with_score_factors: - mlvl_score_factors.append(score_factors) - - batch_bboxes = torch.cat(mlvl_batch_bboxes, dim=1) - batch_scores = torch.cat(mlvl_scores, dim=1) - if with_score_factors: - batch_score_factors = torch.cat(mlvl_score_factors, dim=1) - - # Replace multiclass_nms with ONNX::NonMaxSuppression in deployment - - from mmdet.core.export import add_dummy_nms_for_onnx - - if not self.use_sigmoid_cls: - batch_scores = batch_scores[..., :self.num_classes] - - if with_score_factors: - batch_scores = batch_scores * (batch_score_factors.unsqueeze(2)) - - if with_nms: - max_output_boxes_per_class = cfg.nms.get( - 'max_output_boxes_per_class', 200) - iou_threshold = cfg.nms.get('iou_threshold', 0.5) - score_threshold = cfg.score_thr - nms_pre = cfg.get('deploy_nms_pre', -1) - return add_dummy_nms_for_onnx(batch_bboxes, batch_scores, - max_output_boxes_per_class, - iou_threshold, score_threshold, - nms_pre, cfg.max_per_img) - else: - return batch_bboxes, batch_scores diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/base_mask_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/base_mask_head.py deleted file mode 100644 index 5eb94fb2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/base_mask_head.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from mmcv.runner import BaseModule - - -class BaseMaskHead(BaseModule, metaclass=ABCMeta): - """Base class for mask heads used in One-Stage Instance Segmentation.""" - - def __init__(self, init_cfg): - super(BaseMaskHead, self).__init__(init_cfg) - - @abstractmethod - def loss(self, **kwargs): - pass - - @abstractmethod - def get_results(self, **kwargs): - """Get precessed :obj:`InstanceData` of multiple images.""" - pass - - def forward_train(self, - x, - gt_labels, - gt_masks, - img_metas, - gt_bboxes=None, - gt_bboxes_ignore=None, - positive_infos=None, - **kwargs): - """ - Args: - x (list[Tensor] | tuple[Tensor]): Features from FPN. - Each has a shape (B, C, H, W). - gt_labels (list[Tensor]): Ground truth labels of all images. - each has a shape (num_gts,). - gt_masks (list[Tensor]) : Masks for each bbox, has a shape - (num_gts, h , w). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. 
- gt_bboxes (list[Tensor]): Ground truth bboxes of the image, - each item has a shape (num_gts, 4). - gt_bboxes_ignore (list[Tensor], None): Ground truth bboxes to be - ignored, each item has a shape (num_ignored_gts, 4). - positive_infos (list[:obj:`InstanceData`], optional): Information - of positive samples. Used when the label assignment is - done outside the MaskHead, e.g., in BboxHead in - YOLACT or CondInst, etc. When the label assignment is done in - MaskHead, it would be None, like SOLO. All values - in it should have shape (num_positive_samples, *). - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - if positive_infos is None: - outs = self(x) - else: - outs = self(x, positive_infos) - - assert isinstance(outs, tuple), 'Forward results should be a tuple, ' \ - 'even if only one item is returned' - loss = self.loss( - *outs, - gt_labels=gt_labels, - gt_masks=gt_masks, - img_metas=img_metas, - gt_bboxes=gt_bboxes, - gt_bboxes_ignore=gt_bboxes_ignore, - positive_infos=positive_infos, - **kwargs) - return loss - - def simple_test(self, - feats, - img_metas, - rescale=False, - instances_list=None, - **kwargs): - """Test function without test-time augmentation. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - instances_list (list[obj:`InstanceData`], optional): Detection - results of each image after the post process. Only exist - if there is a `bbox_head`, like `YOLACT`, `CondInst`, etc. - - Returns: - list[obj:`InstanceData`]: Instance segmentation \ - results of each image after the post process. \ - Each item usually contains following keys. \ - - - scores (Tensor): Classification scores, has a shape - (num_instance,) - - labels (Tensor): Has a shape (num_instances,). - - masks (Tensor): Processed mask results, has a - shape (num_instances, h, w). - """ - if instances_list is None: - outs = self(feats) - else: - outs = self(feats, instances_list=instances_list) - mask_inputs = outs + (img_metas, ) - results_list = self.get_results( - *mask_inputs, - rescale=rescale, - instances_list=instances_list, - **kwargs) - return results_list - - def onnx_export(self, img, img_metas): - raise NotImplementedError(f'{self.__class__.__name__} does ' - f'not support ONNX EXPORT') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/cascade_rpn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/cascade_rpn_head.py deleted file mode 100644 index 69347e00..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/cascade_rpn_head.py +++ /dev/null @@ -1,801 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from __future__ import division -import copy -import warnings - -import torch -import torch.nn as nn -from mmcv import ConfigDict -from mmcv.ops import DeformConv2d, batched_nms -from mmcv.runner import BaseModule, ModuleList - -from mmdet.core import (RegionAssigner, build_assigner, build_sampler, - images_to_levels, multi_apply) -from mmdet.core.utils import select_single_mlvl -from ..builder import HEADS, build_head -from .base_dense_head import BaseDenseHead -from .rpn_head import RPNHead - - -class AdaptiveConv(BaseModule): - """AdaptiveConv used to adapt the sampling location with the anchors. 
- - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the conv kernel. Default: 3 - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. Default: 1 - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 3 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If set True, adds a learnable bias to the - output. Default: False. - type (str, optional): Type of adaptive conv, can be either 'offset' - (arbitrary anchors) or 'dilation' (uniform anchor). - Default: 'dilation'. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - dilation=3, - groups=1, - bias=False, - type='dilation', - init_cfg=dict( - type='Normal', std=0.01, override=dict(name='conv'))): - super(AdaptiveConv, self).__init__(init_cfg) - assert type in ['offset', 'dilation'] - self.adapt_type = type - - assert kernel_size == 3, 'Adaptive conv only supports kernels 3' - if self.adapt_type == 'offset': - assert stride == 1 and padding == 1 and groups == 1, \ - 'Adaptive conv offset mode only supports padding: {1}, ' \ - f'stride: {1}, groups: {1}' - self.conv = DeformConv2d( - in_channels, - out_channels, - kernel_size, - padding=padding, - stride=stride, - groups=groups, - bias=bias) - else: - self.conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size, - padding=dilation, - dilation=dilation) - - def forward(self, x, offset): - """Forward function.""" - if self.adapt_type == 'offset': - N, _, H, W = x.shape - assert offset is not None - assert H * W == offset.shape[1] - # reshape [N, NA, 18] to (N, 18, H, W) - offset = offset.permute(0, 2, 1).reshape(N, -1, H, W) - offset = offset.contiguous() - x = self.conv(x, offset) - else: - assert offset is None - x = self.conv(x) - return x - - -@HEADS.register_module() -class StageCascadeRPNHead(RPNHead): - """Stage of CascadeRPNHead. - - Args: - in_channels (int): Number of channels in the input feature map. - anchor_generator (dict): anchor generator config. - adapt_cfg (dict): adaptation config. - bridged_feature (bool, optional): whether update rpn feature. - Default: False. - with_cls (bool, optional): whether use classification branch. - Default: True. - sampling (bool, optional): whether use sampling. Default: True. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[1.0], - strides=[4, 8, 16, 32, 64]), - adapt_cfg=dict(type='dilation', dilation=3), - bridged_feature=False, - with_cls=True, - sampling=True, - init_cfg=None, - **kwargs): - self.with_cls = with_cls - self.anchor_strides = anchor_generator['strides'] - self.anchor_scales = anchor_generator['scales'] - self.bridged_feature = bridged_feature - self.adapt_cfg = adapt_cfg - super(StageCascadeRPNHead, self).__init__( - in_channels, - anchor_generator=anchor_generator, - init_cfg=init_cfg, - **kwargs) - - # override sampling and sampler - self.sampling = sampling - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - if init_cfg is None: - self.init_cfg = dict( - type='Normal', std=0.01, override=[dict(name='rpn_reg')]) - if self.with_cls: - self.init_cfg['override'].append(dict(name='rpn_cls')) - - def _init_layers(self): - """Init layers of a CascadeRPN stage.""" - self.rpn_conv = AdaptiveConv(self.in_channels, self.feat_channels, - **self.adapt_cfg) - if self.with_cls: - self.rpn_cls = nn.Conv2d(self.feat_channels, - self.num_anchors * self.cls_out_channels, - 1) - self.rpn_reg = nn.Conv2d(self.feat_channels, self.num_anchors * 4, 1) - self.relu = nn.ReLU(inplace=True) - - def forward_single(self, x, offset): - """Forward function of single scale.""" - bridged_x = x - x = self.relu(self.rpn_conv(x, offset)) - if self.bridged_feature: - bridged_x = x # update feature - cls_score = self.rpn_cls(x) if self.with_cls else None - bbox_pred = self.rpn_reg(x) - return bridged_x, cls_score, bbox_pred - - def forward(self, feats, offset_list=None): - """Forward function.""" - if offset_list is None: - offset_list = [None for _ in range(len(feats))] - return multi_apply(self.forward_single, feats, offset_list) - - def _region_targets_single(self, - anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - featmap_sizes, - label_channels=1): - """Get anchor targets based on region for single level.""" - assign_result = self.assigner.assign( - anchors, - valid_flags, - gt_bboxes, - img_meta, - featmap_sizes, - self.anchor_scales[0], - self.anchor_strides, - gt_bboxes_ignore=gt_bboxes_ignore, - gt_labels=None, - allowed_border=self.train_cfg.allowed_border) - flat_anchors = torch.cat(anchors) - sampling_result = self.sampler.sample(assign_result, flat_anchors, - gt_bboxes) - - num_anchors = flat_anchors.shape[0] - bbox_targets = torch.zeros_like(flat_anchors) - bbox_weights = torch.zeros_like(flat_anchors) - labels = flat_anchors.new_zeros(num_anchors, dtype=torch.long) - label_weights = flat_anchors.new_zeros(num_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - labels[pos_inds] = 1 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 
0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds) - - def region_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - featmap_sizes, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """See :func:`StageCascadeRPNHead.get_targets`.""" - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - pos_inds_list, neg_inds_list) = multi_apply( - self._region_targets_single, - anchor_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - featmap_sizes=featmap_sizes, - label_channels=label_channels) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - featmap_sizes, - gt_bboxes_ignore=None, - label_channels=1): - """Compute regression and classification targets for anchors. - - Args: - anchor_list (list[list]): Multi level anchors of each image. - valid_flag_list (list[list]): Multi level valid flags of each - image. - gt_bboxes (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - featmap_sizes (list[Tensor]): Feature mapsize each level - gt_bboxes_ignore (list[Tensor]): Ignore bboxes of each images - label_channels (int): Channel of label. 
- - Returns: - cls_reg_targets (tuple) - """ - if isinstance(self.assigner, RegionAssigner): - cls_reg_targets = self.region_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - featmap_sizes, - gt_bboxes_ignore_list=gt_bboxes_ignore, - label_channels=label_channels) - else: - cls_reg_targets = super(StageCascadeRPNHead, self).get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - label_channels=label_channels) - return cls_reg_targets - - def anchor_offset(self, anchor_list, anchor_strides, featmap_sizes): - """ Get offset for deformable conv based on anchor shape - NOTE: currently support deformable kernel_size=3 and dilation=1 - - Args: - anchor_list (list[list[tensor])): [NI, NLVL, NA, 4] list of - multi-level anchors - anchor_strides (list[int]): anchor stride of each level - - Returns: - offset_list (list[tensor]): [NLVL, NA, 2, 18]: offset of DeformConv - kernel. - """ - - def _shape_offset(anchors, stride, ks=3, dilation=1): - # currently support kernel_size=3 and dilation=1 - assert ks == 3 and dilation == 1 - pad = (ks - 1) // 2 - idx = torch.arange(-pad, pad + 1, dtype=dtype, device=device) - yy, xx = torch.meshgrid(idx, idx) # return order matters - xx = xx.reshape(-1) - yy = yy.reshape(-1) - w = (anchors[:, 2] - anchors[:, 0]) / stride - h = (anchors[:, 3] - anchors[:, 1]) / stride - w = w / (ks - 1) - dilation - h = h / (ks - 1) - dilation - offset_x = w[:, None] * xx # (NA, ks**2) - offset_y = h[:, None] * yy # (NA, ks**2) - return offset_x, offset_y - - def _ctr_offset(anchors, stride, featmap_size): - feat_h, feat_w = featmap_size - assert len(anchors) == feat_h * feat_w - - x = (anchors[:, 0] + anchors[:, 2]) * 0.5 - y = (anchors[:, 1] + anchors[:, 3]) * 0.5 - # compute centers on feature map - x = x / stride - y = y / stride - # compute predefine centers - xx = torch.arange(0, feat_w, device=anchors.device) - yy = torch.arange(0, feat_h, device=anchors.device) - yy, xx = torch.meshgrid(yy, xx) - xx = xx.reshape(-1).type_as(x) - yy = yy.reshape(-1).type_as(y) - - offset_x = x - xx # (NA, ) - offset_y = y - yy # (NA, ) - return offset_x, offset_y - - num_imgs = len(anchor_list) - num_lvls = len(anchor_list[0]) - dtype = anchor_list[0][0].dtype - device = anchor_list[0][0].device - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - - offset_list = [] - for i in range(num_imgs): - mlvl_offset = [] - for lvl in range(num_lvls): - c_offset_x, c_offset_y = _ctr_offset(anchor_list[i][lvl], - anchor_strides[lvl], - featmap_sizes[lvl]) - s_offset_x, s_offset_y = _shape_offset(anchor_list[i][lvl], - anchor_strides[lvl]) - - # offset = ctr_offset + shape_offset - offset_x = s_offset_x + c_offset_x[:, None] - offset_y = s_offset_y + c_offset_y[:, None] - - # offset order (y0, x0, y1, x2, .., y8, x8, y9, x9) - offset = torch.stack([offset_y, offset_x], dim=-1) - offset = offset.reshape(offset.size(0), -1) # [NA, 2*ks**2] - mlvl_offset.append(offset) - offset_list.append(torch.cat(mlvl_offset)) # [totalNA, 2*ks**2] - offset_list = images_to_levels(offset_list, num_level_anchors) - return offset_list - - def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Loss function on single scale.""" - # classification loss - if self.with_cls: - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( 
- cls_score, labels, label_weights, avg_factor=num_total_samples) - # regression loss - bbox_targets = bbox_targets.reshape(-1, 4) - bbox_weights = bbox_weights.reshape(-1, 4) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - anchors = anchors.reshape(-1, 4) - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_reg = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - if self.with_cls: - return loss_cls, loss_reg - return None, loss_reg - - def loss(self, - anchor_list, - valid_flag_list, - cls_scores, - bbox_preds, - gt_bboxes, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - anchor_list (list[list]): Multi level anchors of each image. - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in bbox_preds] - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - featmap_sizes, - gt_bboxes_ignore=gt_bboxes_ignore, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - if self.sampling: - num_total_samples = num_total_pos + num_total_neg - else: - # 200 is hard-coded average factor, - # which follows guided anchoring. - num_total_samples = sum([label.numel() - for label in labels_list]) / 200.0 - - # change per image, per level anchor_list to per_level, per_image - mlvl_anchor_list = list(zip(*anchor_list)) - # concat mlvl_anchor_list - mlvl_anchor_list = [ - torch.cat(anchors, dim=0) for anchors in mlvl_anchor_list - ] - - losses = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - mlvl_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - if self.with_cls: - return dict(loss_rpn_cls=losses[0], loss_rpn_reg=losses[1]) - return dict(loss_rpn_reg=losses[1]) - - def get_bboxes(self, - anchor_list, - cls_scores, - bbox_preds, - img_metas, - cfg, - rescale=False): - """Get proposal predict. - - Args: - anchor_list (list[list]): Multi level anchors of each image. - cls_scores (list[Tensor]): Classification scores for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * 4, H, W). - img_metas (list[dict], Optional): Image meta info. Default None. 
- cfg (mmcv.Config, Optional): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Returns: - Tensor: Labeled boxes in shape (n, 5), where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. - """ - assert len(cls_scores) == len(bbox_preds) - - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = select_single_mlvl(cls_scores, img_id) - bbox_pred_list = select_single_mlvl(bbox_preds, img_id) - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score_list, bbox_pred_list, - anchor_list[img_id], img_shape, - scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - img_shape, - scale_factor, - cfg, - rescale=False): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has - shape (num_anchors * 4, H, W). - mlvl_anchors (list[Tensor]): Box reference from all scale - levels of a single image, each item has shape - (num_total_anchors, 4). - img_shape (tuple[int]): Shape of the input image, - (height, width, 3). - scale_factor (ndarray): Scale factor of the image arange as - (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default False. - - Returns: - Tensor: Labeled boxes in shape (n, 5), where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. - """ - cfg = self.test_cfg if cfg is None else cfg - cfg = copy.deepcopy(cfg) - # bboxes from different level should be independent during NMS, - # level_ids are used as labels for batched NMS to separate them - level_ids = [] - mlvl_scores = [] - mlvl_bbox_preds = [] - mlvl_valid_anchors = [] - nms_pre = cfg.get('nms_pre', -1) - for idx in range(len(cls_scores)): - rpn_cls_score = cls_scores[idx] - rpn_bbox_pred = bbox_preds[idx] - assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:] - rpn_cls_score = rpn_cls_score.permute(1, 2, 0) - if self.use_sigmoid_cls: - rpn_cls_score = rpn_cls_score.reshape(-1) - scores = rpn_cls_score.sigmoid() - else: - rpn_cls_score = rpn_cls_score.reshape(-1, 2) - # We set FG labels to [0, num_class-1] and BG label to - # num_class in RPN head since mmdet v2.5, which is unified to - # be consistent with other head since mmdet v2.0. In mmdet v2.0 - # to v2.4 we keep BG label as 0 and FG label as 1 in rpn head. 
- scores = rpn_cls_score.softmax(dim=1)[:, 0] - rpn_bbox_pred = rpn_bbox_pred.permute(1, 2, 0).reshape(-1, 4) - anchors = mlvl_anchors[idx] - - if 0 < nms_pre < scores.shape[0]: - # sort is faster than topk - # _, topk_inds = scores.topk(cfg.nms_pre) - ranked_scores, rank_inds = scores.sort(descending=True) - topk_inds = rank_inds[:nms_pre] - scores = ranked_scores[:nms_pre] - rpn_bbox_pred = rpn_bbox_pred[topk_inds, :] - anchors = anchors[topk_inds, :] - mlvl_scores.append(scores) - mlvl_bbox_preds.append(rpn_bbox_pred) - mlvl_valid_anchors.append(anchors) - level_ids.append( - scores.new_full((scores.size(0), ), idx, dtype=torch.long)) - - scores = torch.cat(mlvl_scores) - anchors = torch.cat(mlvl_valid_anchors) - rpn_bbox_pred = torch.cat(mlvl_bbox_preds) - proposals = self.bbox_coder.decode( - anchors, rpn_bbox_pred, max_shape=img_shape) - ids = torch.cat(level_ids) - - if cfg.min_bbox_size >= 0: - w = proposals[:, 2] - proposals[:, 0] - h = proposals[:, 3] - proposals[:, 1] - valid_mask = (w > cfg.min_bbox_size) & (h > cfg.min_bbox_size) - if not valid_mask.all(): - proposals = proposals[valid_mask] - scores = scores[valid_mask] - ids = ids[valid_mask] - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You ' \ - f'set max_num and ' \ - f'max_per_img at the same time, but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - 'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set' \ - f' iou_threshold in nms and ' \ - f'nms_thr at the same time, but get' \ - f' {cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the nms_thr ' \ - f'which will be deprecated.' - - if proposals.numel() > 0: - dets, _ = batched_nms(proposals, scores, ids, cfg.nms) - else: - return proposals.new_zeros(0, 5) - - return dets[:cfg.max_per_img] - - def refine_bboxes(self, anchor_list, bbox_preds, img_metas): - """Refine bboxes through stages.""" - num_levels = len(bbox_preds) - new_anchor_list = [] - for img_id in range(len(img_metas)): - mlvl_anchors = [] - for i in range(num_levels): - bbox_pred = bbox_preds[i][img_id].detach() - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - img_shape = img_metas[img_id]['img_shape'] - bboxes = self.bbox_coder.decode(anchor_list[img_id][i], - bbox_pred, img_shape) - mlvl_anchors.append(bboxes) - new_anchor_list.append(mlvl_anchors) - return new_anchor_list - - -@HEADS.register_module() -class CascadeRPNHead(BaseDenseHead): - """The CascadeRPNHead will predict more accurate region proposals, which is - required for two-stage detectors (such as Fast/Faster R-CNN). CascadeRPN - consists of a sequence of RPNStage to progressively improve the accuracy of - the detected proposals. - - More details can be found in ``https://arxiv.org/abs/1909.06720``. - - Args: - num_stages (int): number of CascadeRPN stages. - stages (list[dict]): list of configs to build the stages. - train_cfg (list[dict]): list of configs at training time each stage. 
- test_cfg (dict): config at testing time. - """ - - def __init__(self, num_stages, stages, train_cfg, test_cfg, init_cfg=None): - super(CascadeRPNHead, self).__init__(init_cfg) - assert num_stages == len(stages) - self.num_stages = num_stages - # Be careful! Pretrained weights cannot be loaded when use - # nn.ModuleList - self.stages = ModuleList() - for i in range(len(stages)): - train_cfg_i = train_cfg[i] if train_cfg is not None else None - stages[i].update(train_cfg=train_cfg_i) - stages[i].update(test_cfg=test_cfg) - self.stages.append(build_head(stages[i])) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def loss(self): - """loss() is implemented in StageCascadeRPNHead.""" - pass - - def get_bboxes(self): - """get_bboxes() is implemented in StageCascadeRPNHead.""" - pass - - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None): - """Forward train function.""" - assert gt_labels is None, 'RPN does not require gt_labels' - - featmap_sizes = [featmap.size()[-2:] for featmap in x] - device = x[0].device - anchor_list, valid_flag_list = self.stages[0].get_anchors( - featmap_sizes, img_metas, device=device) - - losses = dict() - - for i in range(self.num_stages): - stage = self.stages[i] - - if stage.adapt_cfg['type'] == 'offset': - offset_list = stage.anchor_offset(anchor_list, - stage.anchor_strides, - featmap_sizes) - else: - offset_list = None - x, cls_score, bbox_pred = stage(x, offset_list) - rpn_loss_inputs = (anchor_list, valid_flag_list, cls_score, - bbox_pred, gt_bboxes, img_metas) - stage_loss = stage.loss(*rpn_loss_inputs) - for name, value in stage_loss.items(): - losses['s{}.{}'.format(i, name)] = value - - # refine boxes - if i < self.num_stages - 1: - anchor_list = stage.refine_bboxes(anchor_list, bbox_pred, - img_metas) - if proposal_cfg is None: - return losses - else: - proposal_list = self.stages[-1].get_bboxes(anchor_list, cls_score, - bbox_pred, img_metas, - self.test_cfg) - return losses, proposal_list - - def simple_test_rpn(self, x, img_metas): - """Simple forward test function.""" - featmap_sizes = [featmap.size()[-2:] for featmap in x] - device = x[0].device - anchor_list, _ = self.stages[0].get_anchors( - featmap_sizes, img_metas, device=device) - - for i in range(self.num_stages): - stage = self.stages[i] - if stage.adapt_cfg['type'] == 'offset': - offset_list = stage.anchor_offset(anchor_list, - stage.anchor_strides, - featmap_sizes) - else: - offset_list = None - x, cls_score, bbox_pred = stage(x, offset_list) - if i < self.num_stages - 1: - anchor_list = stage.refine_bboxes(anchor_list, bbox_pred, - img_metas) - - proposal_list = self.stages[-1].get_bboxes(anchor_list, cls_score, - bbox_pred, img_metas, - self.test_cfg) - return proposal_list - - def aug_test_rpn(self, x, img_metas): - """Augmented forward test function.""" - raise NotImplementedError( - 'CascadeRPNHead does not support test-time augmentation') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/centernet_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/centernet_head.py deleted file mode 100644 index b9d5d2f0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/centernet_head.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import bias_init_with_prob, normal_init -from mmcv.ops import batched_nms -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply -from mmdet.models import HEADS, build_loss -from mmdet.models.utils import gaussian_radius, gen_gaussian_target -from ..utils.gaussian_target import (get_local_maximum, get_topk_from_heatmap, - transpose_and_gather_feat) -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class CenterNetHead(BaseDenseHead, BBoxTestMixin): - """Objects as Points Head. CenterHead use center_point to indicate object's - position. Paper link - - Args: - in_channel (int): Number of channel in the input feature map. - feat_channel (int): Number of channel in the intermediate feature map. - num_classes (int): Number of categories excluding the background - category. - loss_center_heatmap (dict | None): Config of center heatmap loss. - Default: GaussianFocalLoss. - loss_wh (dict | None): Config of wh loss. Default: L1Loss. - loss_offset (dict | None): Config of offset loss. Default: L1Loss. - train_cfg (dict | None): Training config. Useless in CenterNet, - but we keep this variable for SingleStageDetector. Default: None. - test_cfg (dict | None): Testing config of CenterNet. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channel, - feat_channel, - num_classes, - loss_center_heatmap=dict( - type='GaussianFocalLoss', loss_weight=1.0), - loss_wh=dict(type='L1Loss', loss_weight=0.1), - loss_offset=dict(type='L1Loss', loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=None): - super(CenterNetHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.heatmap_head = self._build_head(in_channel, feat_channel, - num_classes) - self.wh_head = self._build_head(in_channel, feat_channel, 2) - self.offset_head = self._build_head(in_channel, feat_channel, 2) - - self.loss_center_heatmap = build_loss(loss_center_heatmap) - self.loss_wh = build_loss(loss_wh) - self.loss_offset = build_loss(loss_offset) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.fp16_enabled = False - - def _build_head(self, in_channel, feat_channel, out_channel): - """Build head for each branch.""" - layer = nn.Sequential( - nn.Conv2d(in_channel, feat_channel, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(feat_channel, out_channel, kernel_size=1)) - return layer - - def init_weights(self): - """Initialize weights of the head.""" - bias_init = bias_init_with_prob(0.1) - self.heatmap_head[-1].bias.data.fill_(bias_init) - for head in [self.wh_head, self.offset_head]: - for m in head.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, std=0.001) - - def forward(self, feats): - """Forward features. Notice CenterNet head does not use FPN. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - center_heatmap_preds (List[Tensor]): center predict heatmaps for - all levels, the channels number is num_classes. - wh_preds (List[Tensor]): wh predicts for all levels, the channels - number is 2. - offset_preds (List[Tensor]): offset predicts for all levels, the - channels number is 2. - """ - return multi_apply(self.forward_single, feats) - - def forward_single(self, feat): - """Forward feature of a single level. - - Args: - feat (Tensor): Feature of a single level. 
- - Returns: - center_heatmap_pred (Tensor): center predict heatmaps, the - channels number is num_classes. - wh_pred (Tensor): wh predicts, the channels number is 2. - offset_pred (Tensor): offset predicts, the channels number is 2. - """ - center_heatmap_pred = self.heatmap_head(feat).sigmoid() - wh_pred = self.wh_head(feat) - offset_pred = self.offset_head(feat) - return center_heatmap_pred, wh_pred, offset_pred - - @force_fp32(apply_to=('center_heatmap_preds', 'wh_preds', 'offset_preds')) - def loss(self, - center_heatmap_preds, - wh_preds, - offset_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - center_heatmap_preds (list[Tensor]): center predict heatmaps for - all levels with shape (B, num_classes, H, W). - wh_preds (list[Tensor]): wh predicts for all levels with - shape (B, 2, H, W). - offset_preds (list[Tensor]): offset predicts for all levels - with shape (B, 2, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: which has components below: - - loss_center_heatmap (Tensor): loss of center heatmap. - - loss_wh (Tensor): loss of hw heatmap - - loss_offset (Tensor): loss of offset heatmap. - """ - assert len(center_heatmap_preds) == len(wh_preds) == len( - offset_preds) == 1 - center_heatmap_pred = center_heatmap_preds[0] - wh_pred = wh_preds[0] - offset_pred = offset_preds[0] - - target_result, avg_factor = self.get_targets(gt_bboxes, gt_labels, - center_heatmap_pred.shape, - img_metas[0]['pad_shape']) - - center_heatmap_target = target_result['center_heatmap_target'] - wh_target = target_result['wh_target'] - offset_target = target_result['offset_target'] - wh_offset_target_weight = target_result['wh_offset_target_weight'] - - # Since the channel of wh_target and offset_target is 2, the avg_factor - # of loss_center_heatmap is always 1/2 of loss_wh and loss_offset. - loss_center_heatmap = self.loss_center_heatmap( - center_heatmap_pred, center_heatmap_target, avg_factor=avg_factor) - loss_wh = self.loss_wh( - wh_pred, - wh_target, - wh_offset_target_weight, - avg_factor=avg_factor * 2) - loss_offset = self.loss_offset( - offset_pred, - offset_target, - wh_offset_target_weight, - avg_factor=avg_factor * 2) - return dict( - loss_center_heatmap=loss_center_heatmap, - loss_wh=loss_wh, - loss_offset=loss_offset) - - def get_targets(self, gt_bboxes, gt_labels, feat_shape, img_shape): - """Compute regression and classification targets in multiple images. - - Args: - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box. - feat_shape (list[int]): feature map shape with value [B, _, H, W] - img_shape (list[int]): image shape in [h, w] format. - - Returns: - tuple[dict,float]: The float value is mean avg_factor, the dict has - components below: - - center_heatmap_target (Tensor): targets of center heatmap, \ - shape (B, num_classes, H, W). - - wh_target (Tensor): targets of wh predict, shape \ - (B, 2, H, W). - - offset_target (Tensor): targets of offset predict, shape \ - (B, 2, H, W). 
- - wh_offset_target_weight (Tensor): weights of wh and offset \ - predict, shape (B, 2, H, W). - """ - img_h, img_w = img_shape[:2] - bs, _, feat_h, feat_w = feat_shape - - width_ratio = float(feat_w / img_w) - height_ratio = float(feat_h / img_h) - - center_heatmap_target = gt_bboxes[-1].new_zeros( - [bs, self.num_classes, feat_h, feat_w]) - wh_target = gt_bboxes[-1].new_zeros([bs, 2, feat_h, feat_w]) - offset_target = gt_bboxes[-1].new_zeros([bs, 2, feat_h, feat_w]) - wh_offset_target_weight = gt_bboxes[-1].new_zeros( - [bs, 2, feat_h, feat_w]) - - for batch_id in range(bs): - gt_bbox = gt_bboxes[batch_id] - gt_label = gt_labels[batch_id] - center_x = (gt_bbox[:, [0]] + gt_bbox[:, [2]]) * width_ratio / 2 - center_y = (gt_bbox[:, [1]] + gt_bbox[:, [3]]) * height_ratio / 2 - gt_centers = torch.cat((center_x, center_y), dim=1) - - for j, ct in enumerate(gt_centers): - ctx_int, cty_int = ct.int() - ctx, cty = ct - scale_box_h = (gt_bbox[j][3] - gt_bbox[j][1]) * height_ratio - scale_box_w = (gt_bbox[j][2] - gt_bbox[j][0]) * width_ratio - radius = gaussian_radius([scale_box_h, scale_box_w], - min_overlap=0.3) - radius = max(0, int(radius)) - ind = gt_label[j] - gen_gaussian_target(center_heatmap_target[batch_id, ind], - [ctx_int, cty_int], radius) - - wh_target[batch_id, 0, cty_int, ctx_int] = scale_box_w - wh_target[batch_id, 1, cty_int, ctx_int] = scale_box_h - - offset_target[batch_id, 0, cty_int, ctx_int] = ctx - ctx_int - offset_target[batch_id, 1, cty_int, ctx_int] = cty - cty_int - - wh_offset_target_weight[batch_id, :, cty_int, ctx_int] = 1 - - avg_factor = max(1, center_heatmap_target.eq(1).sum()) - target_result = dict( - center_heatmap_target=center_heatmap_target, - wh_target=wh_target, - offset_target=offset_target, - wh_offset_target_weight=wh_offset_target_weight) - return target_result, avg_factor - - @force_fp32(apply_to=('center_heatmap_preds', 'wh_preds', 'offset_preds')) - def get_bboxes(self, - center_heatmap_preds, - wh_preds, - offset_preds, - img_metas, - rescale=True, - with_nms=False): - """Transform network output for a batch into bbox predictions. - - Args: - center_heatmap_preds (list[Tensor]): Center predict heatmaps for - all levels with shape (B, num_classes, H, W). - wh_preds (list[Tensor]): WH predicts for all levels with - shape (B, 2, H, W). - offset_preds (list[Tensor]): Offset predicts for all levels - with shape (B, 2, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: True. - with_nms (bool): If True, do nms before return boxes. - Default: False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- """ - assert len(center_heatmap_preds) == len(wh_preds) == len( - offset_preds) == 1 - result_list = [] - for img_id in range(len(img_metas)): - result_list.append( - self._get_bboxes_single( - center_heatmap_preds[0][img_id:img_id + 1, ...], - wh_preds[0][img_id:img_id + 1, ...], - offset_preds[0][img_id:img_id + 1, ...], - img_metas[img_id], - rescale=rescale, - with_nms=with_nms)) - return result_list - - def _get_bboxes_single(self, - center_heatmap_pred, - wh_pred, - offset_pred, - img_meta, - rescale=False, - with_nms=True): - """Transform outputs of a single image into bbox results. - - Args: - center_heatmap_pred (Tensor): Center heatmap for current level with - shape (1, num_classes, H, W). - wh_pred (Tensor): WH heatmap for current level with shape - (1, num_classes, H, W). - offset_pred (Tensor): Offset for current level with shape - (1, corner_offset_channels, H, W). - img_meta (dict): Meta information of current image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor, Tensor]: The first item is an (n, 5) tensor, where - 5 represent (tl_x, tl_y, br_x, br_y, score) and the score - between 0 and 1. The shape of the second tensor in the tuple - is (n,), and each element represents the class label of the - corresponding box. - """ - batch_det_bboxes, batch_labels = self.decode_heatmap( - center_heatmap_pred, - wh_pred, - offset_pred, - img_meta['batch_input_shape'], - k=self.test_cfg.topk, - kernel=self.test_cfg.local_maximum_kernel) - - det_bboxes = batch_det_bboxes.view([-1, 5]) - det_labels = batch_labels.view(-1) - - batch_border = det_bboxes.new_tensor(img_meta['border'])[..., - [2, 0, 2, 0]] - det_bboxes[..., :4] -= batch_border - - if rescale: - det_bboxes[..., :4] /= det_bboxes.new_tensor( - img_meta['scale_factor']) - - if with_nms: - det_bboxes, det_labels = self._bboxes_nms(det_bboxes, det_labels, - self.test_cfg) - return det_bboxes, det_labels - - def decode_heatmap(self, - center_heatmap_pred, - wh_pred, - offset_pred, - img_shape, - k=100, - kernel=3): - """Transform outputs into detections raw bbox prediction. - - Args: - center_heatmap_pred (Tensor): center predict heatmap, - shape (B, num_classes, H, W). - wh_pred (Tensor): wh predict, shape (B, 2, H, W). - offset_pred (Tensor): offset predict, shape (B, 2, H, W). - img_shape (list[int]): image shape in [h, w] format. - k (int): Get top k center keypoints from heatmap. Default 100. - kernel (int): Max pooling kernel for extract local maximum pixels. - Default 3. 
- - Returns: - tuple[torch.Tensor]: Decoded output of CenterNetHead, containing - the following Tensors: - - - batch_bboxes (Tensor): Coords of each box with shape (B, k, 5) - - batch_topk_labels (Tensor): Categories of each box with \ - shape (B, k) - """ - height, width = center_heatmap_pred.shape[2:] - inp_h, inp_w = img_shape - - center_heatmap_pred = get_local_maximum( - center_heatmap_pred, kernel=kernel) - - *batch_dets, topk_ys, topk_xs = get_topk_from_heatmap( - center_heatmap_pred, k=k) - batch_scores, batch_index, batch_topk_labels = batch_dets - - wh = transpose_and_gather_feat(wh_pred, batch_index) - offset = transpose_and_gather_feat(offset_pred, batch_index) - topk_xs = topk_xs + offset[..., 0] - topk_ys = topk_ys + offset[..., 1] - tl_x = (topk_xs - wh[..., 0] / 2) * (inp_w / width) - tl_y = (topk_ys - wh[..., 1] / 2) * (inp_h / height) - br_x = (topk_xs + wh[..., 0] / 2) * (inp_w / width) - br_y = (topk_ys + wh[..., 1] / 2) * (inp_h / height) - - batch_bboxes = torch.stack([tl_x, tl_y, br_x, br_y], dim=2) - batch_bboxes = torch.cat((batch_bboxes, batch_scores[..., None]), - dim=-1) - return batch_bboxes, batch_topk_labels - - def _bboxes_nms(self, bboxes, labels, cfg): - if labels.numel() > 0: - max_num = cfg.max_per_img - bboxes, keep = batched_nms(bboxes[:, :4], bboxes[:, - -1].contiguous(), - labels, cfg.nms) - if max_num > 0: - bboxes = bboxes[:max_num] - labels = labels[keep][:max_num] - - return bboxes, labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/centripetal_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/centripetal_head.py deleted file mode 100644 index ebc721b7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/centripetal_head.py +++ /dev/null @@ -1,430 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, normal_init -from mmcv.ops import DeformConv2d -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply -from ..builder import HEADS, build_loss -from .corner_head import CornerHead - - -@HEADS.register_module() -class CentripetalHead(CornerHead): - """Head of CentripetalNet: Pursuing High-quality Keypoint Pairs for Object - Detection. - - CentripetalHead inherits from :class:`CornerHead`. It removes the - embedding branch and adds guiding shift and centripetal shift branches. - More details can be found in the `paper - `_ . - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - num_feat_levels (int): Levels of feature from the previous module. 2 - for HourglassNet-104 and 1 for HourglassNet-52. HourglassNet-104 - outputs the final feature and intermediate supervision feature and - HourglassNet-52 only outputs the final feature. Default: 2. - corner_emb_channels (int): Channel of embedding vector. Default: 1. - train_cfg (dict | None): Training config. Useless in CornerHead, - but we keep this variable for SingleStageDetector. Default: None. - test_cfg (dict | None): Testing config of CornerHead. Default: None. - loss_heatmap (dict | None): Config of corner heatmap loss. Default: - GaussianFocalLoss. - loss_embedding (dict | None): Config of corner embedding loss. Default: - AssociativeEmbeddingLoss. - loss_offset (dict | None): Config of corner offset loss. Default: - SmoothL1Loss. - loss_guiding_shift (dict): Config of guiding shift loss. Default: - SmoothL1Loss. 
- loss_centripetal_shift (dict): Config of centripetal shift loss. - Default: SmoothL1Loss. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - *args, - centripetal_shift_channels=2, - guiding_shift_channels=2, - feat_adaption_conv_kernel=3, - loss_guiding_shift=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=0.05), - loss_centripetal_shift=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1), - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - assert centripetal_shift_channels == 2, ( - 'CentripetalHead only support centripetal_shift_channels == 2') - self.centripetal_shift_channels = centripetal_shift_channels - assert guiding_shift_channels == 2, ( - 'CentripetalHead only support guiding_shift_channels == 2') - self.guiding_shift_channels = guiding_shift_channels - self.feat_adaption_conv_kernel = feat_adaption_conv_kernel - super(CentripetalHead, self).__init__( - *args, init_cfg=init_cfg, **kwargs) - self.loss_guiding_shift = build_loss(loss_guiding_shift) - self.loss_centripetal_shift = build_loss(loss_centripetal_shift) - - def _init_centripetal_layers(self): - """Initialize centripetal layers. - - Including feature adaption deform convs (feat_adaption), deform offset - prediction convs (dcn_off), guiding shift (guiding_shift) and - centripetal shift ( centripetal_shift). Each branch has two parts: - prefix `tl_` for top-left and `br_` for bottom-right. - """ - self.tl_feat_adaption = nn.ModuleList() - self.br_feat_adaption = nn.ModuleList() - self.tl_dcn_offset = nn.ModuleList() - self.br_dcn_offset = nn.ModuleList() - self.tl_guiding_shift = nn.ModuleList() - self.br_guiding_shift = nn.ModuleList() - self.tl_centripetal_shift = nn.ModuleList() - self.br_centripetal_shift = nn.ModuleList() - - for _ in range(self.num_feat_levels): - self.tl_feat_adaption.append( - DeformConv2d(self.in_channels, self.in_channels, - self.feat_adaption_conv_kernel, 1, 1)) - self.br_feat_adaption.append( - DeformConv2d(self.in_channels, self.in_channels, - self.feat_adaption_conv_kernel, 1, 1)) - - self.tl_guiding_shift.append( - self._make_layers( - out_channels=self.guiding_shift_channels, - in_channels=self.in_channels)) - self.br_guiding_shift.append( - self._make_layers( - out_channels=self.guiding_shift_channels, - in_channels=self.in_channels)) - - self.tl_dcn_offset.append( - ConvModule( - self.guiding_shift_channels, - self.feat_adaption_conv_kernel**2 * - self.guiding_shift_channels, - 1, - bias=False, - act_cfg=None)) - self.br_dcn_offset.append( - ConvModule( - self.guiding_shift_channels, - self.feat_adaption_conv_kernel**2 * - self.guiding_shift_channels, - 1, - bias=False, - act_cfg=None)) - - self.tl_centripetal_shift.append( - self._make_layers( - out_channels=self.centripetal_shift_channels, - in_channels=self.in_channels)) - self.br_centripetal_shift.append( - self._make_layers( - out_channels=self.centripetal_shift_channels, - in_channels=self.in_channels)) - - def _init_layers(self): - """Initialize layers for CentripetalHead. 
- - Including two parts: CornerHead layers and CentripetalHead layers - """ - super()._init_layers() # using _init_layers in CornerHead - self._init_centripetal_layers() - - def init_weights(self): - super(CentripetalHead, self).init_weights() - for i in range(self.num_feat_levels): - normal_init(self.tl_feat_adaption[i], std=0.01) - normal_init(self.br_feat_adaption[i], std=0.01) - normal_init(self.tl_dcn_offset[i].conv, std=0.1) - normal_init(self.br_dcn_offset[i].conv, std=0.1) - _ = [x.conv.reset_parameters() for x in self.tl_guiding_shift[i]] - _ = [x.conv.reset_parameters() for x in self.br_guiding_shift[i]] - _ = [ - x.conv.reset_parameters() for x in self.tl_centripetal_shift[i] - ] - _ = [ - x.conv.reset_parameters() for x in self.br_centripetal_shift[i] - ] - - def forward_single(self, x, lvl_ind): - """Forward feature of a single level. - - Args: - x (Tensor): Feature of a single level. - lvl_ind (int): Level index of current feature. - - Returns: - tuple[Tensor]: A tuple of CentripetalHead's output for current - feature level. Containing the following Tensors: - - - tl_heat (Tensor): Predicted top-left corner heatmap. - - br_heat (Tensor): Predicted bottom-right corner heatmap. - - tl_off (Tensor): Predicted top-left offset heatmap. - - br_off (Tensor): Predicted bottom-right offset heatmap. - - tl_guiding_shift (Tensor): Predicted top-left guiding shift - heatmap. - - br_guiding_shift (Tensor): Predicted bottom-right guiding - shift heatmap. - - tl_centripetal_shift (Tensor): Predicted top-left centripetal - shift heatmap. - - br_centripetal_shift (Tensor): Predicted bottom-right - centripetal shift heatmap. - """ - tl_heat, br_heat, _, _, tl_off, br_off, tl_pool, br_pool = super( - ).forward_single( - x, lvl_ind, return_pool=True) - - tl_guiding_shift = self.tl_guiding_shift[lvl_ind](tl_pool) - br_guiding_shift = self.br_guiding_shift[lvl_ind](br_pool) - - tl_dcn_offset = self.tl_dcn_offset[lvl_ind](tl_guiding_shift.detach()) - br_dcn_offset = self.br_dcn_offset[lvl_ind](br_guiding_shift.detach()) - - tl_feat_adaption = self.tl_feat_adaption[lvl_ind](tl_pool, - tl_dcn_offset) - br_feat_adaption = self.br_feat_adaption[lvl_ind](br_pool, - br_dcn_offset) - - tl_centripetal_shift = self.tl_centripetal_shift[lvl_ind]( - tl_feat_adaption) - br_centripetal_shift = self.br_centripetal_shift[lvl_ind]( - br_feat_adaption) - - result_list = [ - tl_heat, br_heat, tl_off, br_off, tl_guiding_shift, - br_guiding_shift, tl_centripetal_shift, br_centripetal_shift - ] - return result_list - - @force_fp32() - def loss(self, - tl_heats, - br_heats, - tl_offs, - br_offs, - tl_guiding_shifts, - br_guiding_shifts, - tl_centripetal_shifts, - br_centripetal_shifts, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). - br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - tl_guiding_shifts (list[Tensor]): Top-left guiding shifts for each - level with shape (N, guiding_shift_channels, H, W). - br_guiding_shifts (list[Tensor]): Bottom-right guiding shifts for - each level with shape (N, guiding_shift_channels, H, W). 
- tl_centripetal_shifts (list[Tensor]): Top-left centripetal shifts - for each level with shape (N, centripetal_shift_channels, H, - W). - br_centripetal_shifts (list[Tensor]): Bottom-right centripetal - shifts for each level with shape (N, - centripetal_shift_channels, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [left, top, right, bottom] format. - gt_labels (list[Tensor]): Class indices corresponding to each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. Containing the - following losses: - - - det_loss (list[Tensor]): Corner keypoint losses of all - feature levels. - - off_loss (list[Tensor]): Corner offset losses of all feature - levels. - - guiding_loss (list[Tensor]): Guiding shift losses of all - feature levels. - - centripetal_loss (list[Tensor]): Centripetal shift losses of - all feature levels. - """ - targets = self.get_targets( - gt_bboxes, - gt_labels, - tl_heats[-1].shape, - img_metas[0]['pad_shape'], - with_corner_emb=self.with_corner_emb, - with_guiding_shift=True, - with_centripetal_shift=True) - mlvl_targets = [targets for _ in range(self.num_feat_levels)] - [det_losses, off_losses, guiding_losses, centripetal_losses - ] = multi_apply(self.loss_single, tl_heats, br_heats, tl_offs, - br_offs, tl_guiding_shifts, br_guiding_shifts, - tl_centripetal_shifts, br_centripetal_shifts, - mlvl_targets) - loss_dict = dict( - det_loss=det_losses, - off_loss=off_losses, - guiding_loss=guiding_losses, - centripetal_loss=centripetal_losses) - return loss_dict - - def loss_single(self, tl_hmp, br_hmp, tl_off, br_off, tl_guiding_shift, - br_guiding_shift, tl_centripetal_shift, - br_centripetal_shift, targets): - """Compute losses for single level. - - Args: - tl_hmp (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). - br_hmp (Tensor): Bottom-right corner heatmap for current level with - shape (N, num_classes, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - tl_guiding_shift (Tensor): Top-left guiding shift for current level - with shape (N, guiding_shift_channels, H, W). - br_guiding_shift (Tensor): Bottom-right guiding shift for current - level with shape (N, guiding_shift_channels, H, W). - tl_centripetal_shift (Tensor): Top-left centripetal shift for - current level with shape (N, centripetal_shift_channels, H, W). - br_centripetal_shift (Tensor): Bottom-right centripetal shift for - current level with shape (N, centripetal_shift_channels, H, W). - targets (dict): Corner target generated by `get_targets`. - - Returns: - tuple[torch.Tensor]: Losses of the head's different branches - containing the following losses: - - - det_loss (Tensor): Corner keypoint loss. - - off_loss (Tensor): Corner offset loss. - - guiding_loss (Tensor): Guiding shift loss. - - centripetal_loss (Tensor): Centripetal shift loss. 
- """ - targets['corner_embedding'] = None - - det_loss, _, _, off_loss = super().loss_single(tl_hmp, br_hmp, None, - None, tl_off, br_off, - targets) - - gt_tl_guiding_shift = targets['topleft_guiding_shift'] - gt_br_guiding_shift = targets['bottomright_guiding_shift'] - gt_tl_centripetal_shift = targets['topleft_centripetal_shift'] - gt_br_centripetal_shift = targets['bottomright_centripetal_shift'] - - gt_tl_heatmap = targets['topleft_heatmap'] - gt_br_heatmap = targets['bottomright_heatmap'] - # We only compute the offset loss at the real corner position. - # The value of real corner would be 1 in heatmap ground truth. - # The mask is computed in class agnostic mode and its shape is - # batch * 1 * width * height. - tl_mask = gt_tl_heatmap.eq(1).sum(1).gt(0).unsqueeze(1).type_as( - gt_tl_heatmap) - br_mask = gt_br_heatmap.eq(1).sum(1).gt(0).unsqueeze(1).type_as( - gt_br_heatmap) - - # Guiding shift loss - tl_guiding_loss = self.loss_guiding_shift( - tl_guiding_shift, - gt_tl_guiding_shift, - tl_mask, - avg_factor=tl_mask.sum()) - br_guiding_loss = self.loss_guiding_shift( - br_guiding_shift, - gt_br_guiding_shift, - br_mask, - avg_factor=br_mask.sum()) - guiding_loss = (tl_guiding_loss + br_guiding_loss) / 2.0 - # Centripetal shift loss - tl_centripetal_loss = self.loss_centripetal_shift( - tl_centripetal_shift, - gt_tl_centripetal_shift, - tl_mask, - avg_factor=tl_mask.sum()) - br_centripetal_loss = self.loss_centripetal_shift( - br_centripetal_shift, - gt_br_centripetal_shift, - br_mask, - avg_factor=br_mask.sum()) - centripetal_loss = (tl_centripetal_loss + br_centripetal_loss) / 2.0 - - return det_loss, off_loss, guiding_loss, centripetal_loss - - @force_fp32() - def get_bboxes(self, - tl_heats, - br_heats, - tl_offs, - br_offs, - tl_guiding_shifts, - br_guiding_shifts, - tl_centripetal_shifts, - br_centripetal_shifts, - img_metas, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). - br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - tl_guiding_shifts (list[Tensor]): Top-left guiding shifts for each - level with shape (N, guiding_shift_channels, H, W). Useless in - this function, we keep this arg because it's the raw output - from CentripetalHead. - br_guiding_shifts (list[Tensor]): Bottom-right guiding shifts for - each level with shape (N, guiding_shift_channels, H, W). - Useless in this function, we keep this arg because it's the - raw output from CentripetalHead. - tl_centripetal_shifts (list[Tensor]): Top-left centripetal shifts - for each level with shape (N, centripetal_shift_channels, H, - W). - br_centripetal_shifts (list[Tensor]): Bottom-right centripetal - shifts for each level with shape (N, - centripetal_shift_channels, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. 
- """ - assert tl_heats[-1].shape[0] == br_heats[-1].shape[0] == len(img_metas) - result_list = [] - for img_id in range(len(img_metas)): - result_list.append( - self._get_bboxes_single( - tl_heats[-1][img_id:img_id + 1, :], - br_heats[-1][img_id:img_id + 1, :], - tl_offs[-1][img_id:img_id + 1, :], - br_offs[-1][img_id:img_id + 1, :], - img_metas[img_id], - tl_emb=None, - br_emb=None, - tl_centripetal_shift=tl_centripetal_shifts[-1][ - img_id:img_id + 1, :], - br_centripetal_shift=br_centripetal_shifts[-1][ - img_id:img_id + 1, :], - rescale=rescale, - with_nms=with_nms)) - - return result_list diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/corner_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/corner_head.py deleted file mode 100644 index c6a2866f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/corner_head.py +++ /dev/null @@ -1,1086 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from logging import warning -from math import ceil, log - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob -from mmcv.ops import CornerPool, batched_nms -from mmcv.runner import BaseModule, force_fp32 - -from mmdet.core import multi_apply -from ..builder import HEADS, build_loss -from ..utils import gaussian_radius, gen_gaussian_target -from ..utils.gaussian_target import (gather_feat, get_local_maximum, - get_topk_from_heatmap, - transpose_and_gather_feat) -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -class BiCornerPool(BaseModule): - """Bidirectional Corner Pooling Module (TopLeft, BottomRight, etc.) - - Args: - in_channels (int): Input channels of module. - out_channels (int): Output channels of module. - feat_channels (int): Feature channels of module. - directions (list[str]): Directions of two CornerPools. - norm_cfg (dict): Dictionary to construct and config norm layer. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - directions, - feat_channels=128, - out_channels=128, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=None): - super(BiCornerPool, self).__init__(init_cfg) - self.direction1_conv = ConvModule( - in_channels, feat_channels, 3, padding=1, norm_cfg=norm_cfg) - self.direction2_conv = ConvModule( - in_channels, feat_channels, 3, padding=1, norm_cfg=norm_cfg) - - self.aftpool_conv = ConvModule( - feat_channels, - out_channels, - 3, - padding=1, - norm_cfg=norm_cfg, - act_cfg=None) - - self.conv1 = ConvModule( - in_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=None) - self.conv2 = ConvModule( - in_channels, out_channels, 3, padding=1, norm_cfg=norm_cfg) - - self.direction1_pool = CornerPool(directions[0]) - self.direction2_pool = CornerPool(directions[1]) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward features from the upstream network. - - Args: - x (tensor): Input feature of BiCornerPool. - - Returns: - conv2 (tensor): Output feature of BiCornerPool. 
- """ - direction1_conv = self.direction1_conv(x) - direction2_conv = self.direction2_conv(x) - direction1_feat = self.direction1_pool(direction1_conv) - direction2_feat = self.direction2_pool(direction2_conv) - aftpool_conv = self.aftpool_conv(direction1_feat + direction2_feat) - conv1 = self.conv1(x) - relu = self.relu(aftpool_conv + conv1) - conv2 = self.conv2(relu) - return conv2 - - -@HEADS.register_module() -class CornerHead(BaseDenseHead, BBoxTestMixin): - """Head of CornerNet: Detecting Objects as Paired Keypoints. - - Code is modified from the `official github repo - `_ . - - More details can be found in the `paper - `_ . - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - num_feat_levels (int): Levels of feature from the previous module. 2 - for HourglassNet-104 and 1 for HourglassNet-52. Because - HourglassNet-104 outputs the final feature and intermediate - supervision feature and HourglassNet-52 only outputs the final - feature. Default: 2. - corner_emb_channels (int): Channel of embedding vector. Default: 1. - train_cfg (dict | None): Training config. Useless in CornerHead, - but we keep this variable for SingleStageDetector. Default: None. - test_cfg (dict | None): Testing config of CornerHead. Default: None. - loss_heatmap (dict | None): Config of corner heatmap loss. Default: - GaussianFocalLoss. - loss_embedding (dict | None): Config of corner embedding loss. Default: - AssociativeEmbeddingLoss. - loss_offset (dict | None): Config of corner offset loss. Default: - SmoothL1Loss. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - num_classes, - in_channels, - num_feat_levels=2, - corner_emb_channels=1, - train_cfg=None, - test_cfg=None, - loss_heatmap=dict( - type='GaussianFocalLoss', - alpha=2.0, - gamma=4.0, - loss_weight=1), - loss_embedding=dict( - type='AssociativeEmbeddingLoss', - pull_weight=0.25, - push_weight=0.25), - loss_offset=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1), - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(CornerHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.in_channels = in_channels - self.corner_emb_channels = corner_emb_channels - self.with_corner_emb = self.corner_emb_channels > 0 - self.corner_offset_channels = 2 - self.num_feat_levels = num_feat_levels - self.loss_heatmap = build_loss( - loss_heatmap) if loss_heatmap is not None else None - self.loss_embedding = build_loss( - loss_embedding) if loss_embedding is not None else None - self.loss_offset = build_loss( - loss_offset) if loss_offset is not None else None - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - self.fp16_enabled = False - self._init_layers() - - def _make_layers(self, out_channels, in_channels=256, feat_channels=256): - """Initialize conv sequential for CornerHead.""" - return nn.Sequential( - ConvModule(in_channels, feat_channels, 3, padding=1), - ConvModule( - feat_channels, out_channels, 1, norm_cfg=None, act_cfg=None)) - - def _init_corner_kpt_layers(self): - """Initialize corner keypoint layers. - - Including corner heatmap branch and corner offset branch. Each branch - has two parts: prefix `tl_` for top-left and `br_` for bottom-right. 
- """ - self.tl_pool, self.br_pool = nn.ModuleList(), nn.ModuleList() - self.tl_heat, self.br_heat = nn.ModuleList(), nn.ModuleList() - self.tl_off, self.br_off = nn.ModuleList(), nn.ModuleList() - - for _ in range(self.num_feat_levels): - self.tl_pool.append( - BiCornerPool( - self.in_channels, ['top', 'left'], - out_channels=self.in_channels)) - self.br_pool.append( - BiCornerPool( - self.in_channels, ['bottom', 'right'], - out_channels=self.in_channels)) - - self.tl_heat.append( - self._make_layers( - out_channels=self.num_classes, - in_channels=self.in_channels)) - self.br_heat.append( - self._make_layers( - out_channels=self.num_classes, - in_channels=self.in_channels)) - - self.tl_off.append( - self._make_layers( - out_channels=self.corner_offset_channels, - in_channels=self.in_channels)) - self.br_off.append( - self._make_layers( - out_channels=self.corner_offset_channels, - in_channels=self.in_channels)) - - def _init_corner_emb_layers(self): - """Initialize corner embedding layers. - - Only include corner embedding branch with two parts: prefix `tl_` for - top-left and `br_` for bottom-right. - """ - self.tl_emb, self.br_emb = nn.ModuleList(), nn.ModuleList() - - for _ in range(self.num_feat_levels): - self.tl_emb.append( - self._make_layers( - out_channels=self.corner_emb_channels, - in_channels=self.in_channels)) - self.br_emb.append( - self._make_layers( - out_channels=self.corner_emb_channels, - in_channels=self.in_channels)) - - def _init_layers(self): - """Initialize layers for CornerHead. - - Including two parts: corner keypoint layers and corner embedding layers - """ - self._init_corner_kpt_layers() - if self.with_corner_emb: - self._init_corner_emb_layers() - - def init_weights(self): - super(CornerHead, self).init_weights() - bias_init = bias_init_with_prob(0.1) - for i in range(self.num_feat_levels): - # The initialization of parameters are different between - # nn.Conv2d and ConvModule. Our experiments show that - # using the original initialization of nn.Conv2d increases - # the final mAP by about 0.2% - self.tl_heat[i][-1].conv.reset_parameters() - self.tl_heat[i][-1].conv.bias.data.fill_(bias_init) - self.br_heat[i][-1].conv.reset_parameters() - self.br_heat[i][-1].conv.bias.data.fill_(bias_init) - self.tl_off[i][-1].conv.reset_parameters() - self.br_off[i][-1].conv.reset_parameters() - if self.with_corner_emb: - self.tl_emb[i][-1].conv.reset_parameters() - self.br_emb[i][-1].conv.reset_parameters() - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of corner heatmaps, offset heatmaps and - embedding heatmaps. - - tl_heats (list[Tensor]): Top-left corner heatmaps for all - levels, each is a 4D-tensor, the channels number is - num_classes. - - br_heats (list[Tensor]): Bottom-right corner heatmaps for all - levels, each is a 4D-tensor, the channels number is - num_classes. - - tl_embs (list[Tensor] | list[None]): Top-left embedding - heatmaps for all levels, each is a 4D-tensor or None. - If not None, the channels number is corner_emb_channels. - - br_embs (list[Tensor] | list[None]): Bottom-right embedding - heatmaps for all levels, each is a 4D-tensor or None. - If not None, the channels number is corner_emb_channels. - - tl_offs (list[Tensor]): Top-left offset heatmaps for all - levels, each is a 4D-tensor. The channels number is - corner_offset_channels. 
- - br_offs (list[Tensor]): Bottom-right offset heatmaps for all - levels, each is a 4D-tensor. The channels number is - corner_offset_channels. - """ - lvl_ind = list(range(self.num_feat_levels)) - return multi_apply(self.forward_single, feats, lvl_ind) - - def forward_single(self, x, lvl_ind, return_pool=False): - """Forward feature of a single level. - - Args: - x (Tensor): Feature of a single level. - lvl_ind (int): Level index of current feature. - return_pool (bool): Return corner pool feature or not. - - Returns: - tuple[Tensor]: A tuple of CornerHead's output for current feature - level. Containing the following Tensors: - - - tl_heat (Tensor): Predicted top-left corner heatmap. - - br_heat (Tensor): Predicted bottom-right corner heatmap. - - tl_emb (Tensor | None): Predicted top-left embedding heatmap. - None for `self.with_corner_emb == False`. - - br_emb (Tensor | None): Predicted bottom-right embedding - heatmap. None for `self.with_corner_emb == False`. - - tl_off (Tensor): Predicted top-left offset heatmap. - - br_off (Tensor): Predicted bottom-right offset heatmap. - - tl_pool (Tensor): Top-left corner pool feature. Not must - have. - - br_pool (Tensor): Bottom-right corner pool feature. Not must - have. - """ - tl_pool = self.tl_pool[lvl_ind](x) - tl_heat = self.tl_heat[lvl_ind](tl_pool) - br_pool = self.br_pool[lvl_ind](x) - br_heat = self.br_heat[lvl_ind](br_pool) - - tl_emb, br_emb = None, None - if self.with_corner_emb: - tl_emb = self.tl_emb[lvl_ind](tl_pool) - br_emb = self.br_emb[lvl_ind](br_pool) - - tl_off = self.tl_off[lvl_ind](tl_pool) - br_off = self.br_off[lvl_ind](br_pool) - - result_list = [tl_heat, br_heat, tl_emb, br_emb, tl_off, br_off] - if return_pool: - result_list.append(tl_pool) - result_list.append(br_pool) - - return result_list - - def get_targets(self, - gt_bboxes, - gt_labels, - feat_shape, - img_shape, - with_corner_emb=False, - with_guiding_shift=False, - with_centripetal_shift=False): - """Generate corner targets. - - Including corner heatmap, corner offset. - - Optional: corner embedding, corner guiding shift, centripetal shift. - - For CornerNet, we generate corner heatmap, corner offset and corner - embedding from this function. - - For CentripetalNet, we generate corner heatmap, corner offset, guiding - shift and centripetal shift from this function. - - Args: - gt_bboxes (list[Tensor]): Ground truth bboxes of each image, each - has shape (num_gt, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, each has - shape (num_gt,). - feat_shape (list[int]): Shape of output feature, - [batch, channel, height, width]. - img_shape (list[int]): Shape of input image, - [height, width, channel]. - with_corner_emb (bool): Generate corner embedding target or not. - Default: False. - with_guiding_shift (bool): Generate guiding shift target or not. - Default: False. - with_centripetal_shift (bool): Generate centripetal shift target or - not. Default: False. - - Returns: - dict: Ground truth of corner heatmap, corner offset, corner - embedding, guiding shift and centripetal shift. Containing the - following keys: - - - topleft_heatmap (Tensor): Ground truth top-left corner - heatmap. - - bottomright_heatmap (Tensor): Ground truth bottom-right - corner heatmap. - - topleft_offset (Tensor): Ground truth top-left corner offset. - - bottomright_offset (Tensor): Ground truth bottom-right corner - offset. - - corner_embedding (list[list[list[int]]]): Ground truth corner - embedding. Not must have. 
- - topleft_guiding_shift (Tensor): Ground truth top-left corner - guiding shift. Not must have. - - bottomright_guiding_shift (Tensor): Ground truth bottom-right - corner guiding shift. Not must have. - - topleft_centripetal_shift (Tensor): Ground truth top-left - corner centripetal shift. Not must have. - - bottomright_centripetal_shift (Tensor): Ground truth - bottom-right corner centripetal shift. Not must have. - """ - batch_size, _, height, width = feat_shape - img_h, img_w = img_shape[:2] - - width_ratio = float(width / img_w) - height_ratio = float(height / img_h) - - gt_tl_heatmap = gt_bboxes[-1].new_zeros( - [batch_size, self.num_classes, height, width]) - gt_br_heatmap = gt_bboxes[-1].new_zeros( - [batch_size, self.num_classes, height, width]) - gt_tl_offset = gt_bboxes[-1].new_zeros([batch_size, 2, height, width]) - gt_br_offset = gt_bboxes[-1].new_zeros([batch_size, 2, height, width]) - - if with_corner_emb: - match = [] - - # Guiding shift is a kind of offset, from center to corner - if with_guiding_shift: - gt_tl_guiding_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - gt_br_guiding_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - # Centripetal shift is also a kind of offset, from center to corner - # and normalized by log. - if with_centripetal_shift: - gt_tl_centripetal_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - gt_br_centripetal_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - - for batch_id in range(batch_size): - # Ground truth of corner embedding per image is a list of coord set - corner_match = [] - for box_id in range(len(gt_labels[batch_id])): - left, top, right, bottom = gt_bboxes[batch_id][box_id] - center_x = (left + right) / 2.0 - center_y = (top + bottom) / 2.0 - label = gt_labels[batch_id][box_id] - - # Use coords in the feature level to generate ground truth - scale_left = left * width_ratio - scale_right = right * width_ratio - scale_top = top * height_ratio - scale_bottom = bottom * height_ratio - scale_center_x = center_x * width_ratio - scale_center_y = center_y * height_ratio - - # Int coords on feature map/ground truth tensor - left_idx = int(min(scale_left, width - 1)) - right_idx = int(min(scale_right, width - 1)) - top_idx = int(min(scale_top, height - 1)) - bottom_idx = int(min(scale_bottom, height - 1)) - - # Generate gaussian heatmap - scale_box_width = ceil(scale_right - scale_left) - scale_box_height = ceil(scale_bottom - scale_top) - radius = gaussian_radius((scale_box_height, scale_box_width), - min_overlap=0.3) - radius = max(0, int(radius)) - gt_tl_heatmap[batch_id, label] = gen_gaussian_target( - gt_tl_heatmap[batch_id, label], [left_idx, top_idx], - radius) - gt_br_heatmap[batch_id, label] = gen_gaussian_target( - gt_br_heatmap[batch_id, label], [right_idx, bottom_idx], - radius) - - # Generate corner offset - left_offset = scale_left - left_idx - top_offset = scale_top - top_idx - right_offset = scale_right - right_idx - bottom_offset = scale_bottom - bottom_idx - gt_tl_offset[batch_id, 0, top_idx, left_idx] = left_offset - gt_tl_offset[batch_id, 1, top_idx, left_idx] = top_offset - gt_br_offset[batch_id, 0, bottom_idx, right_idx] = right_offset - gt_br_offset[batch_id, 1, bottom_idx, - right_idx] = bottom_offset - - # Generate corner embedding - if with_corner_emb: - corner_match.append([[top_idx, left_idx], - [bottom_idx, right_idx]]) - # Generate guiding shift - if with_guiding_shift: - gt_tl_guiding_shift[batch_id, 0, top_idx, - left_idx] = 
scale_center_x - left_idx - gt_tl_guiding_shift[batch_id, 1, top_idx, - left_idx] = scale_center_y - top_idx - gt_br_guiding_shift[batch_id, 0, bottom_idx, - right_idx] = right_idx - scale_center_x - gt_br_guiding_shift[ - batch_id, 1, bottom_idx, - right_idx] = bottom_idx - scale_center_y - # Generate centripetal shift - if with_centripetal_shift: - gt_tl_centripetal_shift[batch_id, 0, top_idx, - left_idx] = log(scale_center_x - - scale_left) - gt_tl_centripetal_shift[batch_id, 1, top_idx, - left_idx] = log(scale_center_y - - scale_top) - gt_br_centripetal_shift[batch_id, 0, bottom_idx, - right_idx] = log(scale_right - - scale_center_x) - gt_br_centripetal_shift[batch_id, 1, bottom_idx, - right_idx] = log(scale_bottom - - scale_center_y) - - if with_corner_emb: - match.append(corner_match) - - target_result = dict( - topleft_heatmap=gt_tl_heatmap, - topleft_offset=gt_tl_offset, - bottomright_heatmap=gt_br_heatmap, - bottomright_offset=gt_br_offset) - - if with_corner_emb: - target_result.update(corner_embedding=match) - if with_guiding_shift: - target_result.update( - topleft_guiding_shift=gt_tl_guiding_shift, - bottomright_guiding_shift=gt_br_guiding_shift) - if with_centripetal_shift: - target_result.update( - topleft_centripetal_shift=gt_tl_centripetal_shift, - bottomright_centripetal_shift=gt_br_centripetal_shift) - - return target_result - - @force_fp32() - def loss(self, - tl_heats, - br_heats, - tl_embs, - br_embs, - tl_offs, - br_offs, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). - br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_embs (list[Tensor]): Top-left corner embeddings for each level - with shape (N, corner_emb_channels, H, W). - br_embs (list[Tensor]): Bottom-right corner embeddings for each - level with shape (N, corner_emb_channels, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [left, top, right, bottom] format. - gt_labels (list[Tensor]): Class indices corresponding to each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. Containing the - following losses: - - - det_loss (list[Tensor]): Corner keypoint losses of all - feature levels. - - pull_loss (list[Tensor]): Part one of AssociativeEmbedding - losses of all feature levels. - - push_loss (list[Tensor]): Part two of AssociativeEmbedding - losses of all feature levels. - - off_loss (list[Tensor]): Corner offset losses of all feature - levels. 
- """ - targets = self.get_targets( - gt_bboxes, - gt_labels, - tl_heats[-1].shape, - img_metas[0]['pad_shape'], - with_corner_emb=self.with_corner_emb) - mlvl_targets = [targets for _ in range(self.num_feat_levels)] - det_losses, pull_losses, push_losses, off_losses = multi_apply( - self.loss_single, tl_heats, br_heats, tl_embs, br_embs, tl_offs, - br_offs, mlvl_targets) - loss_dict = dict(det_loss=det_losses, off_loss=off_losses) - if self.with_corner_emb: - loss_dict.update(pull_loss=pull_losses, push_loss=push_losses) - return loss_dict - - def loss_single(self, tl_hmp, br_hmp, tl_emb, br_emb, tl_off, br_off, - targets): - """Compute losses for single level. - - Args: - tl_hmp (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). - br_hmp (Tensor): Bottom-right corner heatmap for current level with - shape (N, num_classes, H, W). - tl_emb (Tensor): Top-left corner embedding for current level with - shape (N, corner_emb_channels, H, W). - br_emb (Tensor): Bottom-right corner embedding for current level - with shape (N, corner_emb_channels, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - targets (dict): Corner target generated by `get_targets`. - - Returns: - tuple[torch.Tensor]: Losses of the head's different branches - containing the following losses: - - - det_loss (Tensor): Corner keypoint loss. - - pull_loss (Tensor): Part one of AssociativeEmbedding loss. - - push_loss (Tensor): Part two of AssociativeEmbedding loss. - - off_loss (Tensor): Corner offset loss. - """ - gt_tl_hmp = targets['topleft_heatmap'] - gt_br_hmp = targets['bottomright_heatmap'] - gt_tl_off = targets['topleft_offset'] - gt_br_off = targets['bottomright_offset'] - gt_embedding = targets['corner_embedding'] - - # Detection loss - tl_det_loss = self.loss_heatmap( - tl_hmp.sigmoid(), - gt_tl_hmp, - avg_factor=max(1, - gt_tl_hmp.eq(1).sum())) - br_det_loss = self.loss_heatmap( - br_hmp.sigmoid(), - gt_br_hmp, - avg_factor=max(1, - gt_br_hmp.eq(1).sum())) - det_loss = (tl_det_loss + br_det_loss) / 2.0 - - # AssociativeEmbedding loss - if self.with_corner_emb and self.loss_embedding is not None: - pull_loss, push_loss = self.loss_embedding(tl_emb, br_emb, - gt_embedding) - else: - pull_loss, push_loss = None, None - - # Offset loss - # We only compute the offset loss at the real corner position. - # The value of real corner would be 1 in heatmap ground truth. - # The mask is computed in class agnostic mode and its shape is - # batch * 1 * width * height. - tl_off_mask = gt_tl_hmp.eq(1).sum(1).gt(0).unsqueeze(1).type_as( - gt_tl_hmp) - br_off_mask = gt_br_hmp.eq(1).sum(1).gt(0).unsqueeze(1).type_as( - gt_br_hmp) - tl_off_loss = self.loss_offset( - tl_off, - gt_tl_off, - tl_off_mask, - avg_factor=max(1, tl_off_mask.sum())) - br_off_loss = self.loss_offset( - br_off, - gt_br_off, - br_off_mask, - avg_factor=max(1, br_off_mask.sum())) - - off_loss = (tl_off_loss + br_off_loss) / 2.0 - - return det_loss, pull_loss, push_loss, off_loss - - @force_fp32() - def get_bboxes(self, - tl_heats, - br_heats, - tl_embs, - br_embs, - tl_offs, - br_offs, - img_metas, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). 
- br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_embs (list[Tensor]): Top-left corner embeddings for each level - with shape (N, corner_emb_channels, H, W). - br_embs (list[Tensor]): Bottom-right corner embeddings for each - level with shape (N, corner_emb_channels, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - """ - assert tl_heats[-1].shape[0] == br_heats[-1].shape[0] == len(img_metas) - result_list = [] - for img_id in range(len(img_metas)): - result_list.append( - self._get_bboxes_single( - tl_heats[-1][img_id:img_id + 1, :], - br_heats[-1][img_id:img_id + 1, :], - tl_offs[-1][img_id:img_id + 1, :], - br_offs[-1][img_id:img_id + 1, :], - img_metas[img_id], - tl_emb=tl_embs[-1][img_id:img_id + 1, :], - br_emb=br_embs[-1][img_id:img_id + 1, :], - rescale=rescale, - with_nms=with_nms)) - - return result_list - - def _get_bboxes_single(self, - tl_heat, - br_heat, - tl_off, - br_off, - img_meta, - tl_emb=None, - br_emb=None, - tl_centripetal_shift=None, - br_centripetal_shift=None, - rescale=False, - with_nms=True): - """Transform outputs for a single batch item into bbox predictions. - - Args: - tl_heat (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). - br_heat (Tensor): Bottom-right corner heatmap for current level - with shape (N, num_classes, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - img_meta (dict): Meta information of current image, e.g., - image size, scaling factor, etc. - tl_emb (Tensor): Top-left corner embedding for current level with - shape (N, corner_emb_channels, H, W). - br_emb (Tensor): Bottom-right corner embedding for current level - with shape (N, corner_emb_channels, H, W). - tl_centripetal_shift: Top-left corner's centripetal shift for - current level with shape (N, 2, H, W). - br_centripetal_shift: Bottom-right corner's centripetal shift for - current level with shape (N, 2, H, W). - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. 
- """ - if isinstance(img_meta, (list, tuple)): - img_meta = img_meta[0] - - batch_bboxes, batch_scores, batch_clses = self.decode_heatmap( - tl_heat=tl_heat.sigmoid(), - br_heat=br_heat.sigmoid(), - tl_off=tl_off, - br_off=br_off, - tl_emb=tl_emb, - br_emb=br_emb, - tl_centripetal_shift=tl_centripetal_shift, - br_centripetal_shift=br_centripetal_shift, - img_meta=img_meta, - k=self.test_cfg.corner_topk, - kernel=self.test_cfg.local_maximum_kernel, - distance_threshold=self.test_cfg.distance_threshold) - - if rescale: - batch_bboxes /= batch_bboxes.new_tensor(img_meta['scale_factor']) - - bboxes = batch_bboxes.view([-1, 4]) - scores = batch_scores.view(-1) - clses = batch_clses.view(-1) - - detections = torch.cat([bboxes, scores.unsqueeze(-1)], -1) - keepinds = (detections[:, -1] > -0.1) - detections = detections[keepinds] - labels = clses[keepinds] - - if with_nms: - detections, labels = self._bboxes_nms(detections, labels, - self.test_cfg) - - return detections, labels - - def _bboxes_nms(self, bboxes, labels, cfg): - if 'nms_cfg' in cfg: - warning.warn('nms_cfg in test_cfg will be deprecated. ' - 'Please rename it as nms') - if 'nms' not in cfg: - cfg.nms = cfg.nms_cfg - - if labels.numel() > 0: - max_num = cfg.max_per_img - bboxes, keep = batched_nms(bboxes[:, :4], bboxes[:, - -1].contiguous(), - labels, cfg.nms) - if max_num > 0: - bboxes = bboxes[:max_num] - labels = labels[keep][:max_num] - - return bboxes, labels - - def decode_heatmap(self, - tl_heat, - br_heat, - tl_off, - br_off, - tl_emb=None, - br_emb=None, - tl_centripetal_shift=None, - br_centripetal_shift=None, - img_meta=None, - k=100, - kernel=3, - distance_threshold=0.5, - num_dets=1000): - """Transform outputs for a single batch item into raw bbox predictions. - - Args: - tl_heat (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). - br_heat (Tensor): Bottom-right corner heatmap for current level - with shape (N, num_classes, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - tl_emb (Tensor | None): Top-left corner embedding for current - level with shape (N, corner_emb_channels, H, W). - br_emb (Tensor | None): Bottom-right corner embedding for current - level with shape (N, corner_emb_channels, H, W). - tl_centripetal_shift (Tensor | None): Top-left centripetal shift - for current level with shape (N, 2, H, W). - br_centripetal_shift (Tensor | None): Bottom-right centripetal - shift for current level with shape (N, 2, H, W). - img_meta (dict): Meta information of current image, e.g., - image size, scaling factor, etc. - k (int): Get top k corner keypoints from heatmap. - kernel (int): Max pooling kernel for extract local maximum pixels. - distance_threshold (float): Distance threshold. Top-left and - bottom-right corner keypoints with feature distance less than - the threshold will be regarded as keypoints from same object. - num_dets (int): Num of raw boxes before doing nms. - - Returns: - tuple[torch.Tensor]: Decoded output of CornerHead, containing the - following Tensors: - - - bboxes (Tensor): Coords of each box. - - scores (Tensor): Scores of each box. - - clses (Tensor): Categories of each box. 
- """ - with_embedding = tl_emb is not None and br_emb is not None - with_centripetal_shift = ( - tl_centripetal_shift is not None - and br_centripetal_shift is not None) - assert with_embedding + with_centripetal_shift == 1 - batch, _, height, width = tl_heat.size() - if torch.onnx.is_in_onnx_export(): - inp_h, inp_w = img_meta['pad_shape_for_onnx'][:2] - else: - inp_h, inp_w, _ = img_meta['pad_shape'] - - # perform nms on heatmaps - tl_heat = get_local_maximum(tl_heat, kernel=kernel) - br_heat = get_local_maximum(br_heat, kernel=kernel) - - tl_scores, tl_inds, tl_clses, tl_ys, tl_xs = get_topk_from_heatmap( - tl_heat, k=k) - br_scores, br_inds, br_clses, br_ys, br_xs = get_topk_from_heatmap( - br_heat, k=k) - - # We use repeat instead of expand here because expand is a - # shallow-copy function. Thus it could cause unexpected testing result - # sometimes. Using expand will decrease about 10% mAP during testing - # compared to repeat. - tl_ys = tl_ys.view(batch, k, 1).repeat(1, 1, k) - tl_xs = tl_xs.view(batch, k, 1).repeat(1, 1, k) - br_ys = br_ys.view(batch, 1, k).repeat(1, k, 1) - br_xs = br_xs.view(batch, 1, k).repeat(1, k, 1) - - tl_off = transpose_and_gather_feat(tl_off, tl_inds) - tl_off = tl_off.view(batch, k, 1, 2) - br_off = transpose_and_gather_feat(br_off, br_inds) - br_off = br_off.view(batch, 1, k, 2) - - tl_xs = tl_xs + tl_off[..., 0] - tl_ys = tl_ys + tl_off[..., 1] - br_xs = br_xs + br_off[..., 0] - br_ys = br_ys + br_off[..., 1] - - if with_centripetal_shift: - tl_centripetal_shift = transpose_and_gather_feat( - tl_centripetal_shift, tl_inds).view(batch, k, 1, 2).exp() - br_centripetal_shift = transpose_and_gather_feat( - br_centripetal_shift, br_inds).view(batch, 1, k, 2).exp() - - tl_ctxs = tl_xs + tl_centripetal_shift[..., 0] - tl_ctys = tl_ys + tl_centripetal_shift[..., 1] - br_ctxs = br_xs - br_centripetal_shift[..., 0] - br_ctys = br_ys - br_centripetal_shift[..., 1] - - # all possible boxes based on top k corners (ignoring class) - tl_xs *= (inp_w / width) - tl_ys *= (inp_h / height) - br_xs *= (inp_w / width) - br_ys *= (inp_h / height) - - if with_centripetal_shift: - tl_ctxs *= (inp_w / width) - tl_ctys *= (inp_h / height) - br_ctxs *= (inp_w / width) - br_ctys *= (inp_h / height) - - x_off, y_off = 0, 0 # no crop - if not torch.onnx.is_in_onnx_export(): - # since `RandomCenterCropPad` is done on CPU with numpy and it's - # not dynamic traceable when exporting to ONNX, thus 'border' - # does not appears as key in 'img_meta'. As a tmp solution, - # we move this 'border' handle part to the postprocess after - # finished exporting to ONNX, which is handle in - # `mmdet/core/export/model_wrappers.py`. Though difference between - # pytorch and exported onnx model, it might be ignored since - # comparable performance is achieved between them (e.g. 
40.4 vs - # 40.6 on COCO val2017, for CornerNet without test-time flip) - if 'border' in img_meta: - x_off = img_meta['border'][2] - y_off = img_meta['border'][0] - - tl_xs -= x_off - tl_ys -= y_off - br_xs -= x_off - br_ys -= y_off - - zeros = tl_xs.new_zeros(*tl_xs.size()) - tl_xs = torch.where(tl_xs > 0.0, tl_xs, zeros) - tl_ys = torch.where(tl_ys > 0.0, tl_ys, zeros) - br_xs = torch.where(br_xs > 0.0, br_xs, zeros) - br_ys = torch.where(br_ys > 0.0, br_ys, zeros) - - bboxes = torch.stack((tl_xs, tl_ys, br_xs, br_ys), dim=3) - area_bboxes = ((br_xs - tl_xs) * (br_ys - tl_ys)).abs() - - if with_centripetal_shift: - tl_ctxs -= x_off - tl_ctys -= y_off - br_ctxs -= x_off - br_ctys -= y_off - - tl_ctxs *= tl_ctxs.gt(0.0).type_as(tl_ctxs) - tl_ctys *= tl_ctys.gt(0.0).type_as(tl_ctys) - br_ctxs *= br_ctxs.gt(0.0).type_as(br_ctxs) - br_ctys *= br_ctys.gt(0.0).type_as(br_ctys) - - ct_bboxes = torch.stack((tl_ctxs, tl_ctys, br_ctxs, br_ctys), - dim=3) - area_ct_bboxes = ((br_ctxs - tl_ctxs) * (br_ctys - tl_ctys)).abs() - - rcentral = torch.zeros_like(ct_bboxes) - # magic nums from paper section 4.1 - mu = torch.ones_like(area_bboxes) / 2.4 - mu[area_bboxes > 3500] = 1 / 2.1 # large bbox have smaller mu - - bboxes_center_x = (bboxes[..., 0] + bboxes[..., 2]) / 2 - bboxes_center_y = (bboxes[..., 1] + bboxes[..., 3]) / 2 - rcentral[..., 0] = bboxes_center_x - mu * (bboxes[..., 2] - - bboxes[..., 0]) / 2 - rcentral[..., 1] = bboxes_center_y - mu * (bboxes[..., 3] - - bboxes[..., 1]) / 2 - rcentral[..., 2] = bboxes_center_x + mu * (bboxes[..., 2] - - bboxes[..., 0]) / 2 - rcentral[..., 3] = bboxes_center_y + mu * (bboxes[..., 3] - - bboxes[..., 1]) / 2 - area_rcentral = ((rcentral[..., 2] - rcentral[..., 0]) * - (rcentral[..., 3] - rcentral[..., 1])).abs() - dists = area_ct_bboxes / area_rcentral - - tl_ctx_inds = (ct_bboxes[..., 0] <= rcentral[..., 0]) | ( - ct_bboxes[..., 0] >= rcentral[..., 2]) - tl_cty_inds = (ct_bboxes[..., 1] <= rcentral[..., 1]) | ( - ct_bboxes[..., 1] >= rcentral[..., 3]) - br_ctx_inds = (ct_bboxes[..., 2] <= rcentral[..., 0]) | ( - ct_bboxes[..., 2] >= rcentral[..., 2]) - br_cty_inds = (ct_bboxes[..., 3] <= rcentral[..., 1]) | ( - ct_bboxes[..., 3] >= rcentral[..., 3]) - - if with_embedding: - tl_emb = transpose_and_gather_feat(tl_emb, tl_inds) - tl_emb = tl_emb.view(batch, k, 1) - br_emb = transpose_and_gather_feat(br_emb, br_inds) - br_emb = br_emb.view(batch, 1, k) - dists = torch.abs(tl_emb - br_emb) - - tl_scores = tl_scores.view(batch, k, 1).repeat(1, 1, k) - br_scores = br_scores.view(batch, 1, k).repeat(1, k, 1) - - scores = (tl_scores + br_scores) / 2 # scores for all possible boxes - - # tl and br should have same class - tl_clses = tl_clses.view(batch, k, 1).repeat(1, 1, k) - br_clses = br_clses.view(batch, 1, k).repeat(1, k, 1) - cls_inds = (tl_clses != br_clses) - - # reject boxes based on distances - dist_inds = dists > distance_threshold - - # reject boxes based on widths and heights - width_inds = (br_xs <= tl_xs) - height_inds = (br_ys <= tl_ys) - - # No use `scores[cls_inds]`, instead we use `torch.where` here. - # Since only 1-D indices with type 'tensor(bool)' are supported - # when exporting to ONNX, any other bool indices with more dimensions - # (e.g. 
2-D bool tensor) as input parameter in node is invalid - negative_scores = -1 * torch.ones_like(scores) - scores = torch.where(cls_inds, negative_scores, scores) - scores = torch.where(width_inds, negative_scores, scores) - scores = torch.where(height_inds, negative_scores, scores) - scores = torch.where(dist_inds, negative_scores, scores) - - if with_centripetal_shift: - scores[tl_ctx_inds] = -1 - scores[tl_cty_inds] = -1 - scores[br_ctx_inds] = -1 - scores[br_cty_inds] = -1 - - scores = scores.view(batch, -1) - scores, inds = torch.topk(scores, num_dets) - scores = scores.unsqueeze(2) - - bboxes = bboxes.view(batch, -1, 4) - bboxes = gather_feat(bboxes, inds) - - clses = tl_clses.contiguous().view(batch, -1, 1) - clses = gather_feat(clses, inds).float() - - return bboxes, scores, clses - - def onnx_export(self, - tl_heats, - br_heats, - tl_embs, - br_embs, - tl_offs, - br_offs, - img_metas, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). - br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_embs (list[Tensor]): Top-left corner embeddings for each level - with shape (N, corner_emb_channels, H, W). - br_embs (list[Tensor]): Bottom-right corner embeddings for each - level with shape (N, corner_emb_channels, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor, Tensor]: First tensor bboxes with shape - [N, num_det, 5], 5 arrange as (x1, y1, x2, y2, score) - and second element is class labels of shape [N, num_det]. - """ - assert tl_heats[-1].shape[0] == br_heats[-1].shape[0] == len( - img_metas) == 1 - result_list = [] - for img_id in range(len(img_metas)): - result_list.append( - self._get_bboxes_single( - tl_heats[-1][img_id:img_id + 1, :], - br_heats[-1][img_id:img_id + 1, :], - tl_offs[-1][img_id:img_id + 1, :], - br_offs[-1][img_id:img_id + 1, :], - img_metas[img_id], - tl_emb=tl_embs[-1][img_id:img_id + 1, :], - br_emb=br_embs[-1][img_id:img_id + 1, :], - rescale=rescale, - with_nms=with_nms)) - - detections, labels = result_list[0] - # batch_size 1 here, [1, num_det, 5], [1, num_det] - return detections.unsqueeze(0), labels.unsqueeze(0) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/deformable_detr_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/deformable_detr_head.py deleted file mode 100644 index 71c27852..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/deformable_detr_head.py +++ /dev/null @@ -1,318 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Linear, bias_init_with_prob, constant_init -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply -from mmdet.models.utils.transformer import inverse_sigmoid -from ..builder import HEADS -from .detr_head import DETRHead - - -@HEADS.register_module() -class DeformableDETRHead(DETRHead): - """Head of DeformDETR: Deformable DETR: Deformable Transformers for End-to- - End Object Detection. - - Code is modified from the `official github repo - `_. - - More details can be found in the `paper - `_ . - - Args: - with_box_refine (bool): Whether to refine the reference points - in the decoder. Defaults to False. - as_two_stage (bool) : Whether to generate the proposal from - the outputs of encoder. - transformer (obj:`ConfigDict`): ConfigDict is used for building - the Encoder and Decoder. - """ - - def __init__(self, - *args, - with_box_refine=False, - as_two_stage=False, - transformer=None, - **kwargs): - self.with_box_refine = with_box_refine - self.as_two_stage = as_two_stage - if self.as_two_stage: - transformer['as_two_stage'] = self.as_two_stage - - super(DeformableDETRHead, self).__init__( - *args, transformer=transformer, **kwargs) - - def _init_layers(self): - """Initialize classification branch and regression branch of head.""" - - fc_cls = Linear(self.embed_dims, self.cls_out_channels) - reg_branch = [] - for _ in range(self.num_reg_fcs): - reg_branch.append(Linear(self.embed_dims, self.embed_dims)) - reg_branch.append(nn.ReLU()) - reg_branch.append(Linear(self.embed_dims, 4)) - reg_branch = nn.Sequential(*reg_branch) - - def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - # last reg_branch is used to generate proposal from - # encode feature map when as_two_stage is True. - num_pred = (self.transformer.decoder.num_layers + 1) if \ - self.as_two_stage else self.transformer.decoder.num_layers - - if self.with_box_refine: - self.cls_branches = _get_clones(fc_cls, num_pred) - self.reg_branches = _get_clones(reg_branch, num_pred) - else: - - self.cls_branches = nn.ModuleList( - [fc_cls for _ in range(num_pred)]) - self.reg_branches = nn.ModuleList( - [reg_branch for _ in range(num_pred)]) - - if not self.as_two_stage: - self.query_embedding = nn.Embedding(self.num_query, - self.embed_dims * 2) - - def init_weights(self): - """Initialize weights of the DeformDETR head.""" - self.transformer.init_weights() - if self.loss_cls.use_sigmoid: - bias_init = bias_init_with_prob(0.01) - for m in self.cls_branches: - nn.init.constant_(m.bias, bias_init) - for m in self.reg_branches: - constant_init(m[-1], 0, bias=0) - nn.init.constant_(self.reg_branches[0][-1].bias.data[2:], -2.0) - if self.as_two_stage: - for m in self.reg_branches: - nn.init.constant_(m[-1].bias.data[2:], 0.0) - - def forward(self, mlvl_feats, img_metas): - """Forward function. - - Args: - mlvl_feats (tuple[Tensor]): Features from the upstream - network, each is a 4D-tensor with shape - (N, C, H, W). - img_metas (list[dict]): List of image information. - - Returns: - all_cls_scores (Tensor): Outputs from the classification head, \ - shape [nb_dec, bs, num_query, cls_out_channels]. Note \ - cls_out_channels should includes background. - all_bbox_preds (Tensor): Sigmoid outputs from the regression \ - head with normalized coordinate format (cx, cy, w, h). \ - Shape [nb_dec, bs, num_query, 4]. 
- enc_outputs_class (Tensor): The score of each point on encode \ - feature map, has shape (N, h*w, num_class). Only when \ - as_two_stage is True it would be returned, otherwise \ - `None` would be returned. - enc_outputs_coord (Tensor): The proposal generate from the \ - encode feature map, has shape (N, h*w, 4). Only when \ - as_two_stage is True it would be returned, otherwise \ - `None` would be returned. - """ - - batch_size = mlvl_feats[0].size(0) - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - img_masks = mlvl_feats[0].new_ones( - (batch_size, input_img_h, input_img_w)) - for img_id in range(batch_size): - img_h, img_w, _ = img_metas[img_id]['img_shape'] - img_masks[img_id, :img_h, :img_w] = 0 - - mlvl_masks = [] - mlvl_positional_encodings = [] - for feat in mlvl_feats: - mlvl_masks.append( - F.interpolate(img_masks[None], - size=feat.shape[-2:]).to(torch.bool).squeeze(0)) - mlvl_positional_encodings.append( - self.positional_encoding(mlvl_masks[-1])) - - query_embeds = None - if not self.as_two_stage: - query_embeds = self.query_embedding.weight - hs, init_reference, inter_references, \ - enc_outputs_class, enc_outputs_coord = self.transformer( - mlvl_feats, - mlvl_masks, - query_embeds, - mlvl_positional_encodings, - reg_branches=self.reg_branches if self.with_box_refine else None, # noqa:E501 - cls_branches=self.cls_branches if self.as_two_stage else None # noqa:E501 - ) - hs = hs.permute(0, 2, 1, 3) - outputs_classes = [] - outputs_coords = [] - - for lvl in range(hs.shape[0]): - if lvl == 0: - reference = init_reference - else: - reference = inter_references[lvl - 1] - reference = inverse_sigmoid(reference) - outputs_class = self.cls_branches[lvl](hs[lvl]) - tmp = self.reg_branches[lvl](hs[lvl]) - if reference.shape[-1] == 4: - tmp += reference - else: - assert reference.shape[-1] == 2 - tmp[..., :2] += reference - outputs_coord = tmp.sigmoid() - outputs_classes.append(outputs_class) - outputs_coords.append(outputs_coord) - - outputs_classes = torch.stack(outputs_classes) - outputs_coords = torch.stack(outputs_coords) - if self.as_two_stage: - return outputs_classes, outputs_coords, \ - enc_outputs_class, \ - enc_outputs_coord.sigmoid() - else: - return outputs_classes, outputs_coords, \ - None, None - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def loss(self, - all_cls_scores, - all_bbox_preds, - enc_cls_scores, - enc_bbox_preds, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore=None): - """"Loss function. - - Args: - all_cls_scores (Tensor): Classification score of all - decoder layers, has shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds (Tensor): Sigmoid regression - outputs of all decode layers. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - enc_cls_scores (Tensor): Classification scores of - points on encode feature map , has shape - (N, h*w, num_classes). Only be passed when as_two_stage is - True, otherwise is None. - enc_bbox_preds (Tensor): Regression results of each points - on the encode feature map, has shape (N, h*w, 4). Only be - passed when as_two_stage is True, otherwise is None. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. 
- gt_bboxes_ignore (list[Tensor], optional): Bounding boxes - which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert gt_bboxes_ignore is None, \ - f'{self.__class__.__name__} only supports ' \ - f'for gt_bboxes_ignore setting to None.' - - num_dec_layers = len(all_cls_scores) - all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)] - all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)] - all_gt_bboxes_ignore_list = [ - gt_bboxes_ignore for _ in range(num_dec_layers) - ] - img_metas_list = [img_metas for _ in range(num_dec_layers)] - - losses_cls, losses_bbox, losses_iou = multi_apply( - self.loss_single, all_cls_scores, all_bbox_preds, - all_gt_bboxes_list, all_gt_labels_list, img_metas_list, - all_gt_bboxes_ignore_list) - - loss_dict = dict() - # loss of proposal generated from encode feature map. - if enc_cls_scores is not None: - binary_labels_list = [ - torch.zeros_like(gt_labels_list[i]) - for i in range(len(img_metas)) - ] - enc_loss_cls, enc_losses_bbox, enc_losses_iou = \ - self.loss_single(enc_cls_scores, enc_bbox_preds, - gt_bboxes_list, binary_labels_list, - img_metas, gt_bboxes_ignore) - loss_dict['enc_loss_cls'] = enc_loss_cls - loss_dict['enc_loss_bbox'] = enc_losses_bbox - loss_dict['enc_loss_iou'] = enc_losses_iou - - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_bbox'] = losses_bbox[-1] - loss_dict['loss_iou'] = losses_iou[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1], - losses_bbox[:-1], - losses_iou[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i - loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i - num_dec_layer += 1 - return loss_dict - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def get_bboxes(self, - all_cls_scores, - all_bbox_preds, - enc_cls_scores, - enc_bbox_preds, - img_metas, - rescale=False): - """Transform network outputs for a batch into bbox predictions. - - Args: - all_cls_scores (Tensor): Classification score of all - decoder layers, has shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds (Tensor): Sigmoid regression - outputs of all decode layers. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - enc_cls_scores (Tensor): Classification scores of - points on encode feature map , has shape - (N, h*w, num_classes). Only be passed when as_two_stage is - True, otherwise is None. - enc_bbox_preds (Tensor): Regression results of each points - on the encode feature map, has shape (N, h*w, 4). Only be - passed when as_two_stage is True, otherwise is None. - img_metas (list[dict]): Meta information of each image. - rescale (bool, optional): If True, return boxes in original - image space. Default False. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. \ - The first item is an (n, 5) tensor, where the first 4 columns \ - are bounding box positions (tl_x, tl_y, br_x, br_y) and the \ - 5-th column is a score between 0 and 1. The second item is a \ - (n,) tensor where each item is the predicted class label of \ - the corresponding box. 
- """ - cls_scores = all_cls_scores[-1] - bbox_preds = all_bbox_preds[-1] - - result_list = [] - for img_id in range(len(img_metas)): - cls_score = cls_scores[img_id] - bbox_pred = bbox_preds[img_id] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score, bbox_pred, - img_shape, scale_factor, - rescale) - result_list.append(proposals) - return result_list diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/dense_test_mixins.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/dense_test_mixins.py deleted file mode 100644 index 34215489..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/dense_test_mixins.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import sys -from inspect import signature - -import torch -from mmcv.ops import batched_nms - -from mmdet.core import bbox_mapping_back, merge_aug_proposals - -if sys.version_info >= (3, 7): - from mmdet.utils.contextmanagers import completed - - -class BBoxTestMixin(object): - """Mixin class for testing det bboxes via DenseHead.""" - - def simple_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes without test-time augmentation, can be applied in - DenseHead except for ``RPNHead`` and its variants, e.g., ``GARPNHead``, - etc. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,) - """ - outs = self.forward(feats) - results_list = self.get_bboxes( - *outs, img_metas=img_metas, rescale=rescale) - return results_list - - def aug_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes with test time augmentation, can be applied in - DenseHead except for ``RPNHead`` and its variants, e.g., ``GARPNHead``, - etc. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,). The length of list should always be 1. 
- """ - # check with_nms argument - gb_sig = signature(self.get_bboxes) - gb_args = [p.name for p in gb_sig.parameters.values()] - gbs_sig = signature(self._get_bboxes_single) - gbs_args = [p.name for p in gbs_sig.parameters.values()] - assert ('with_nms' in gb_args) and ('with_nms' in gbs_args), \ - f'{self.__class__.__name__}' \ - ' does not support test-time augmentation' - - aug_bboxes = [] - aug_scores = [] - aug_labels = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - outs = self.forward(x) - bbox_outputs = self.get_bboxes( - *outs, - img_metas=img_meta, - cfg=self.test_cfg, - rescale=False, - with_nms=False)[0] - aug_bboxes.append(bbox_outputs[0]) - aug_scores.append(bbox_outputs[1]) - if len(bbox_outputs) >= 3: - aug_labels.append(bbox_outputs[2]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = self.merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas) - merged_labels = torch.cat(aug_labels, dim=0) if aug_labels else None - - if merged_bboxes.numel() == 0: - det_bboxes = torch.cat([merged_bboxes, merged_scores[:, None]], -1) - return [ - (det_bboxes, merged_labels), - ] - - det_bboxes, keep_idxs = batched_nms(merged_bboxes, merged_scores, - merged_labels, self.test_cfg.nms) - det_bboxes = det_bboxes[:self.test_cfg.max_per_img] - det_labels = merged_labels[keep_idxs][:self.test_cfg.max_per_img] - - if rescale: - _det_bboxes = det_bboxes - else: - _det_bboxes = det_bboxes.clone() - _det_bboxes[:, :4] *= det_bboxes.new_tensor( - img_metas[0][0]['scale_factor']) - - return [ - (_det_bboxes, det_labels), - ] - - def simple_test_rpn(self, x, img_metas): - """Test without augmentation, only for ``RPNHead`` and its variants, - e.g., ``GARPNHead``, etc. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Proposals of each image, each item has shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - """ - rpn_outs = self(x) - proposal_list = self.get_bboxes(*rpn_outs, img_metas=img_metas) - return proposal_list - - def aug_test_rpn(self, feats, img_metas): - """Test with augmentation for only for ``RPNHead`` and its variants, - e.g., ``GARPNHead``, etc. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Proposals of each image, each item has shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). 
- """ - samples_per_gpu = len(img_metas[0]) - aug_proposals = [[] for _ in range(samples_per_gpu)] - for x, img_meta in zip(feats, img_metas): - proposal_list = self.simple_test_rpn(x, img_meta) - for i, proposals in enumerate(proposal_list): - aug_proposals[i].append(proposals) - # reorganize the order of 'img_metas' to match the dimensions - # of 'aug_proposals' - aug_img_metas = [] - for i in range(samples_per_gpu): - aug_img_meta = [] - for j in range(len(img_metas)): - aug_img_meta.append(img_metas[j][i]) - aug_img_metas.append(aug_img_meta) - # after merging, proposals will be rescaled to the original image size - merged_proposals = [ - merge_aug_proposals(proposals, aug_img_meta, self.test_cfg) - for proposals, aug_img_meta in zip(aug_proposals, aug_img_metas) - ] - return merged_proposals - - if sys.version_info >= (3, 7): - - async def async_simple_test_rpn(self, x, img_metas): - sleep_interval = self.test_cfg.pop('async_sleep_interval', 0.025) - async with completed( - __name__, 'rpn_head_forward', - sleep_interval=sleep_interval): - rpn_outs = self(x) - - proposal_list = self.get_bboxes(*rpn_outs, img_metas=img_metas) - return proposal_list - - def merge_aug_bboxes(self, aug_bboxes, aug_scores, img_metas): - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - img_shapes (list[Tensor]): shape (3, ). - - Returns: - tuple[Tensor]: ``bboxes`` with shape (n,4), where - 4 represent (tl_x, tl_y, br_x, br_y) - and ``scores`` with shape (n,). - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.cat(recovered_bboxes, dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.cat(aug_scores, dim=0) - return bboxes, scores diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/detr_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/detr_head.py deleted file mode 100644 index de1913c9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/detr_head.py +++ /dev/null @@ -1,844 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, Linear, build_activation_layer -from mmcv.cnn.bricks.transformer import FFN, build_positional_encoding -from mmcv.runner import force_fp32 - -from mmdet.core import (bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh, - build_assigner, build_sampler, multi_apply, - reduce_mean) -from mmdet.models.utils import build_transformer -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class DETRHead(AnchorFreeHead): - """Implements the DETR transformer head. - - See `paper: End-to-End Object Detection with Transformers - `_ for details. - - Args: - num_classes (int): Number of categories excluding the background. - in_channels (int): Number of channels in the input feature map. - num_query (int): Number of query in Transformer. - num_reg_fcs (int, optional): Number of fully-connected layers used in - `FFN`, which is then used for the regression head. Default 2. 
- transformer (obj:`mmcv.ConfigDict`|dict): Config for transformer. - Default: None. - sync_cls_avg_factor (bool): Whether to sync the avg_factor of - all ranks. Default to False. - positional_encoding (obj:`mmcv.ConfigDict`|dict): - Config for position encoding. - loss_cls (obj:`mmcv.ConfigDict`|dict): Config of the - classification loss. Default `CrossEntropyLoss`. - loss_bbox (obj:`mmcv.ConfigDict`|dict): Config of the - regression loss. Default `L1Loss`. - loss_iou (obj:`mmcv.ConfigDict`|dict): Config of the - regression iou loss. Default `GIoULoss`. - tran_cfg (obj:`mmcv.ConfigDict`|dict): Training config of - transformer head. - test_cfg (obj:`mmcv.ConfigDict`|dict): Testing config of - transformer head. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - _version = 2 - - def __init__(self, - num_classes, - in_channels, - num_query=100, - num_reg_fcs=2, - transformer=None, - sync_cls_avg_factor=False, - positional_encoding=dict( - type='SinePositionalEncoding', - num_feats=128, - normalize=True), - loss_cls=dict( - type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=5.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0), - train_cfg=dict( - assigner=dict( - type='HungarianAssigner', - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=5.0), - iou_cost=dict( - type='IoUCost', iou_mode='giou', weight=2.0))), - test_cfg=dict(max_per_img=100), - init_cfg=None, - **kwargs): - # NOTE here use `AnchorFreeHead` instead of `TransformerHead`, - # since it brings inconvenience when the initialization of - # `AnchorFreeHead` is called. - super(AnchorFreeHead, self).__init__(init_cfg) - self.bg_cls_weight = 0 - self.sync_cls_avg_factor = sync_cls_avg_factor - class_weight = loss_cls.get('class_weight', None) - if class_weight is not None and (self.__class__ is DETRHead): - assert isinstance(class_weight, float), 'Expected ' \ - 'class_weight to have type float. Found ' \ - f'{type(class_weight)}.' - # NOTE following the official DETR rep0, bg_cls_weight means - # relative classification weight of the no-object class. - bg_cls_weight = loss_cls.get('bg_cls_weight', class_weight) - assert isinstance(bg_cls_weight, float), 'Expected ' \ - 'bg_cls_weight to have type float. Found ' \ - f'{type(bg_cls_weight)}.' - class_weight = torch.ones(num_classes + 1) * class_weight - # set background class as the last indice - class_weight[num_classes] = bg_cls_weight - loss_cls.update({'class_weight': class_weight}) - if 'bg_cls_weight' in loss_cls: - loss_cls.pop('bg_cls_weight') - self.bg_cls_weight = bg_cls_weight - - if train_cfg: - assert 'assigner' in train_cfg, 'assigner should be provided '\ - 'when train_cfg is set.' - assigner = train_cfg['assigner'] - assert loss_cls['loss_weight'] == assigner['cls_cost']['weight'], \ - 'The classification weight for loss and matcher should be' \ - 'exactly the same.' - assert loss_bbox['loss_weight'] == assigner['reg_cost'][ - 'weight'], 'The regression L1 weight for loss and matcher ' \ - 'should be exactly the same.' - assert loss_iou['loss_weight'] == assigner['iou_cost']['weight'], \ - 'The regression iou weight for loss and matcher should be' \ - 'exactly the same.' 
- self.assigner = build_assigner(assigner) - # DETR sampling=False, so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.num_query = num_query - self.num_classes = num_classes - self.in_channels = in_channels - self.num_reg_fcs = num_reg_fcs - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.fp16_enabled = False - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_iou = build_loss(loss_iou) - - if self.loss_cls.use_sigmoid: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - self.act_cfg = transformer.get('act_cfg', - dict(type='ReLU', inplace=True)) - self.activate = build_activation_layer(self.act_cfg) - self.positional_encoding = build_positional_encoding( - positional_encoding) - self.transformer = build_transformer(transformer) - self.embed_dims = self.transformer.embed_dims - assert 'num_feats' in positional_encoding - num_feats = positional_encoding['num_feats'] - assert num_feats * 2 == self.embed_dims, 'embed_dims should' \ - f' be exactly 2 times of num_feats. Found {self.embed_dims}' \ - f' and {num_feats}.' - self._init_layers() - - def _init_layers(self): - """Initialize layers of the transformer head.""" - self.input_proj = Conv2d( - self.in_channels, self.embed_dims, kernel_size=1) - self.fc_cls = Linear(self.embed_dims, self.cls_out_channels) - self.reg_ffn = FFN( - self.embed_dims, - self.embed_dims, - self.num_reg_fcs, - self.act_cfg, - dropout=0.0, - add_residual=False) - self.fc_reg = Linear(self.embed_dims, 4) - self.query_embedding = nn.Embedding(self.num_query, self.embed_dims) - - def init_weights(self): - """Initialize weights of the transformer head.""" - # The initialization for transformer is important - self.transformer.init_weights() - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """load checkpoints.""" - # NOTE here use `AnchorFreeHead` instead of `TransformerHead`, - # since `AnchorFreeHead._load_from_state_dict` should not be - # called here. Invoking the default `Module._load_from_state_dict` - # is enough. - - # Names of some parameters in has been changed. - version = local_metadata.get('version', None) - if (version is None or version < 2) and self.__class__ is DETRHead: - convert_dict = { - '.self_attn.': '.attentions.0.', - '.ffn.': '.ffns.0.', - '.multihead_attn.': '.attentions.1.', - '.decoder.norm.': '.decoder.post_norm.' - } - state_dict_keys = list(state_dict.keys()) - for k in state_dict_keys: - for ori_key, convert_key in convert_dict.items(): - if ori_key in k: - convert_key = k.replace(ori_key, convert_key) - state_dict[convert_key] = state_dict[k] - del state_dict[k] - - super(AnchorFreeHead, - self)._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, - unexpected_keys, error_msgs) - - def forward(self, feats, img_metas): - """Forward function. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple[list[Tensor], list[Tensor]]: Outputs for all scale levels. - - - all_cls_scores_list (list[Tensor]): Classification scores \ - for each scale level. Each is a 4D-tensor with shape \ - [nb_dec, bs, num_query, cls_out_channels]. Note \ - `cls_out_channels` should includes background. 
- - all_bbox_preds_list (list[Tensor]): Sigmoid regression \ - outputs for each scale level. Each is a 4D-tensor with \ - normalized coordinate format (cx, cy, w, h) and shape \ - [nb_dec, bs, num_query, 4]. - """ - num_levels = len(feats) - img_metas_list = [img_metas for _ in range(num_levels)] - return multi_apply(self.forward_single, feats, img_metas_list) - - def forward_single(self, x, img_metas): - """"Forward function for a single feature level. - - Args: - x (Tensor): Input feature from backbone's single stage, shape - [bs, c, h, w]. - img_metas (list[dict]): List of image information. - - Returns: - all_cls_scores (Tensor): Outputs from the classification head, - shape [nb_dec, bs, num_query, cls_out_channels]. Note - cls_out_channels should includes background. - all_bbox_preds (Tensor): Sigmoid outputs from the regression - head with normalized coordinate format (cx, cy, w, h). - Shape [nb_dec, bs, num_query, 4]. - """ - # construct binary masks which used for the transformer. - # NOTE following the official DETR repo, non-zero values representing - # ignored positions, while zero values means valid positions. - batch_size = x.size(0) - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - masks = x.new_ones((batch_size, input_img_h, input_img_w)) - for img_id in range(batch_size): - img_h, img_w, _ = img_metas[img_id]['img_shape'] - masks[img_id, :img_h, :img_w] = 0 - - x = self.input_proj(x) - # interpolate masks to have the same spatial shape with x - masks = F.interpolate( - masks.unsqueeze(1), size=x.shape[-2:]).to(torch.bool).squeeze(1) - # position encoding - pos_embed = self.positional_encoding(masks) # [bs, embed_dim, h, w] - # outs_dec: [nb_dec, bs, num_query, embed_dim] - outs_dec, _ = self.transformer(x, masks, self.query_embedding.weight, - pos_embed) - - all_cls_scores = self.fc_cls(outs_dec) - all_bbox_preds = self.fc_reg(self.activate( - self.reg_ffn(outs_dec))).sigmoid() - return all_cls_scores, all_bbox_preds - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def loss(self, - all_cls_scores_list, - all_bbox_preds_list, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore=None): - """"Loss function. - - Only outputs from the last feature level are used for computing - losses by default. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore (list[Tensor], optional): Bounding boxes - which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # NOTE defaultly only the outputs from the last feature scale is used. - all_cls_scores = all_cls_scores_list[-1] - all_bbox_preds = all_bbox_preds_list[-1] - assert gt_bboxes_ignore is None, \ - 'Only supports for gt_bboxes_ignore setting to None.' 
- - num_dec_layers = len(all_cls_scores) - all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)] - all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)] - all_gt_bboxes_ignore_list = [ - gt_bboxes_ignore for _ in range(num_dec_layers) - ] - img_metas_list = [img_metas for _ in range(num_dec_layers)] - - losses_cls, losses_bbox, losses_iou = multi_apply( - self.loss_single, all_cls_scores, all_bbox_preds, - all_gt_bboxes_list, all_gt_labels_list, img_metas_list, - all_gt_bboxes_ignore_list) - - loss_dict = dict() - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_bbox'] = losses_bbox[-1] - loss_dict['loss_iou'] = losses_iou[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1], - losses_bbox[:-1], - losses_iou[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i - loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i - num_dec_layer += 1 - return loss_dict - - def loss_single(self, - cls_scores, - bbox_preds, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore_list=None): - """"Loss function for outputs from a single decoder layer of a single - feature level. - - Args: - cls_scores (Tensor): Box score logits from a single decoder layer - for all images. Shape [bs, num_query, cls_out_channels]. - bbox_preds (Tensor): Sigmoid outputs from a single decoder layer - for all images, with normalized coordinate (cx, cy, w, h) and - shape [bs, num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore_list (list[Tensor], optional): Bounding - boxes which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components for outputs from - a single decoder layer. 
- """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - bbox_preds_list = [bbox_preds[i] for i in range(num_imgs)] - cls_reg_targets = self.get_targets(cls_scores_list, bbox_preds_list, - gt_bboxes_list, gt_labels_list, - img_metas, gt_bboxes_ignore_list) - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - labels = torch.cat(labels_list, 0) - label_weights = torch.cat(label_weights_list, 0) - bbox_targets = torch.cat(bbox_targets_list, 0) - bbox_weights = torch.cat(bbox_weights_list, 0) - - # classification loss - cls_scores = cls_scores.reshape(-1, self.cls_out_channels) - # construct weighted avg_factor to match with the official DETR repo - cls_avg_factor = num_total_pos * 1.0 + \ - num_total_neg * self.bg_cls_weight - if self.sync_cls_avg_factor: - cls_avg_factor = reduce_mean( - cls_scores.new_tensor([cls_avg_factor])) - cls_avg_factor = max(cls_avg_factor, 1) - - loss_cls = self.loss_cls( - cls_scores, labels, label_weights, avg_factor=cls_avg_factor) - - # Compute the average number of gt boxes across all gpus, for - # normalization purposes - num_total_pos = loss_cls.new_tensor([num_total_pos]) - num_total_pos = torch.clamp(reduce_mean(num_total_pos), min=1).item() - - # construct factors used for rescale bboxes - factors = [] - for img_meta, bbox_pred in zip(img_metas, bbox_preds): - img_h, img_w, _ = img_meta['img_shape'] - factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0).repeat( - bbox_pred.size(0), 1) - factors.append(factor) - factors = torch.cat(factors, 0) - - # DETR regress the relative position of boxes (cxcywh) in the image, - # thus the learning target is normalized by the image size. So here - # we need to re-scale them for calculating IoU loss - bbox_preds = bbox_preds.reshape(-1, 4) - bboxes = bbox_cxcywh_to_xyxy(bbox_preds) * factors - bboxes_gt = bbox_cxcywh_to_xyxy(bbox_targets) * factors - - # regression IoU loss, defaultly GIoU loss - loss_iou = self.loss_iou( - bboxes, bboxes_gt, bbox_weights, avg_factor=num_total_pos) - - # regression L1 loss - loss_bbox = self.loss_bbox( - bbox_preds, bbox_targets, bbox_weights, avg_factor=num_total_pos) - return loss_cls, loss_bbox, loss_iou - - def get_targets(self, - cls_scores_list, - bbox_preds_list, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore_list=None): - """"Compute regression and classification targets for a batch image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_scores_list (list[Tensor]): Box score logits from a single - decoder layer for each image with shape [num_query, - cls_out_channels]. - bbox_preds_list (list[Tensor]): Sigmoid outputs from a single - decoder layer for each image, with normalized coordinate - (cx, cy, w, h) and shape [num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore_list (list[Tensor], optional): Bounding - boxes which can be ignored for each image. Default None. - - Returns: - tuple: a tuple containing the following targets. - - - labels_list (list[Tensor]): Labels for all images. - - label_weights_list (list[Tensor]): Label weights for all \ - images. 
- - bbox_targets_list (list[Tensor]): BBox targets for all \ - images. - - bbox_weights_list (list[Tensor]): BBox weights for all \ - images. - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - """ - assert gt_bboxes_ignore_list is None, \ - 'Only supports for gt_bboxes_ignore setting to None.' - num_imgs = len(cls_scores_list) - gt_bboxes_ignore_list = [ - gt_bboxes_ignore_list for _ in range(num_imgs) - ] - - (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, cls_scores_list, bbox_preds_list, - gt_bboxes_list, gt_labels_list, img_metas, gt_bboxes_ignore_list) - num_total_pos = sum((inds.numel() for inds in pos_inds_list)) - num_total_neg = sum((inds.numel() for inds in neg_inds_list)) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - - def _get_target_single(self, - cls_score, - bbox_pred, - gt_bboxes, - gt_labels, - img_meta, - gt_bboxes_ignore=None): - """"Compute regression and classification targets for one image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_score (Tensor): Box score logits from a single decoder layer - for one image. Shape [num_query, cls_out_channels]. - bbox_pred (Tensor): Sigmoid outputs from a single decoder layer - for one image, with normalized coordinate (cx, cy, w, h) and - shape [num_query, 4]. - gt_bboxes (Tensor): Ground truth bboxes for one image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth class indices for one image - with shape (num_gts, ). - img_meta (dict): Meta information for one image. - gt_bboxes_ignore (Tensor, optional): Bounding boxes - which can be ignored. Default None. - - Returns: - tuple[Tensor]: a tuple containing the following for one image. - - - labels (Tensor): Labels of each image. - - label_weights (Tensor]): Label weights of each image. - - bbox_targets (Tensor): BBox targets of each image. - - bbox_weights (Tensor): BBox weights of each image. - - pos_inds (Tensor): Sampled positive indices for each image. - - neg_inds (Tensor): Sampled negative indices for each image. - """ - - num_bboxes = bbox_pred.size(0) - # assigner and sampler - assign_result = self.assigner.assign(bbox_pred, cls_score, gt_bboxes, - gt_labels, img_meta, - gt_bboxes_ignore) - sampling_result = self.sampler.sample(assign_result, bbox_pred, - gt_bboxes) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - # label targets - labels = gt_bboxes.new_full((num_bboxes, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds] - label_weights = gt_bboxes.new_ones(num_bboxes) - - # bbox targets - bbox_targets = torch.zeros_like(bbox_pred) - bbox_weights = torch.zeros_like(bbox_pred) - bbox_weights[pos_inds] = 1.0 - img_h, img_w, _ = img_meta['img_shape'] - - # DETR regress the relative position of boxes (cxcywh) in the image. - # Thus the learning target should be normalized by the image size, also - # the box format should be converted from defaultly x1y1x2y2 to cxcywh. 
- factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - pos_gt_bboxes_normalized = sampling_result.pos_gt_bboxes / factor - pos_gt_bboxes_targets = bbox_xyxy_to_cxcywh(pos_gt_bboxes_normalized) - bbox_targets[pos_inds] = pos_gt_bboxes_targets - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds) - - # over-write because img_metas are needed as inputs for bbox_head. - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """Forward function for training mode. - - Args: - x (list[Tensor]): Features from backbone. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert proposal_cfg is None, '"proposal_cfg" must be None' - outs = self(x, img_metas) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def get_bboxes(self, - all_cls_scores_list, - all_bbox_preds_list, - img_metas, - rescale=False): - """Transform network outputs for a batch into bbox predictions. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - img_metas (list[dict]): Meta information of each image. - rescale (bool, optional): If True, return boxes in original - image space. Default False. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. \ - The first item is an (n, 5) tensor, where the first 4 columns \ - are bounding box positions (tl_x, tl_y, br_x, br_y) and the \ - 5-th column is a score between 0 and 1. The second item is a \ - (n,) tensor where each item is the predicted class label of \ - the corresponding box. - """ - # NOTE defaultly only using outputs from the last feature level, - # and only the outputs from the last decoder layer is used. - cls_scores = all_cls_scores_list[-1][-1] - bbox_preds = all_bbox_preds_list[-1][-1] - - result_list = [] - for img_id in range(len(img_metas)): - cls_score = cls_scores[img_id] - bbox_pred = bbox_preds[img_id] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score, bbox_pred, - img_shape, scale_factor, - rescale) - result_list.append(proposals) - - return result_list - - def _get_bboxes_single(self, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=False): - """Transform outputs from the last decoder layer into bbox predictions - for each image. - - Args: - cls_score (Tensor): Box score logits from the last decoder layer - for each image. Shape [num_query, cls_out_channels]. 
- bbox_pred (Tensor): Sigmoid outputs from the last decoder layer - for each image, with coordinate format (cx, cy, w, h) and - shape [num_query, 4]. - img_shape (tuple[int]): Shape of input image, (height, width, 3). - scale_factor (ndarray, optional): Scale factor of the image arange - as (w_scale, h_scale, w_scale, h_scale). - rescale (bool, optional): If True, return boxes in original image - space. Default False. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. - - - det_bboxes: Predicted bboxes with shape [num_query, 5], \ - where the first 4 columns are bounding box positions \ - (tl_x, tl_y, br_x, br_y) and the 5-th column are scores \ - between 0 and 1. - - det_labels: Predicted labels of the corresponding box with \ - shape [num_query]. - """ - assert len(cls_score) == len(bbox_pred) - max_per_img = self.test_cfg.get('max_per_img', self.num_query) - # exclude background - if self.loss_cls.use_sigmoid: - cls_score = cls_score.sigmoid() - scores, indexes = cls_score.view(-1).topk(max_per_img) - det_labels = indexes % self.num_classes - bbox_index = indexes // self.num_classes - bbox_pred = bbox_pred[bbox_index] - else: - scores, det_labels = F.softmax(cls_score, dim=-1)[..., :-1].max(-1) - scores, bbox_index = scores.topk(max_per_img) - bbox_pred = bbox_pred[bbox_index] - det_labels = det_labels[bbox_index] - - det_bboxes = bbox_cxcywh_to_xyxy(bbox_pred) - det_bboxes[:, 0::2] = det_bboxes[:, 0::2] * img_shape[1] - det_bboxes[:, 1::2] = det_bboxes[:, 1::2] * img_shape[0] - det_bboxes[:, 0::2].clamp_(min=0, max=img_shape[1]) - det_bboxes[:, 1::2].clamp_(min=0, max=img_shape[0]) - if rescale: - det_bboxes /= det_bboxes.new_tensor(scale_factor) - det_bboxes = torch.cat((det_bboxes, scores.unsqueeze(1)), -1) - - return det_bboxes, det_labels - - def simple_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes without test-time augmentation. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,) - """ - # forward of this head requires img_metas - outs = self.forward(feats, img_metas) - results_list = self.get_bboxes(*outs, img_metas, rescale=rescale) - return results_list - - def forward_onnx(self, feats, img_metas): - """Forward function for exporting to ONNX. - - Over-write `forward` because: `masks` is directly created with - zero (valid position tag) and has the same spatial size as `x`. - Thus the construction of `masks` is different from that in `forward`. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple[list[Tensor], list[Tensor]]: Outputs for all scale levels. - - - all_cls_scores_list (list[Tensor]): Classification scores \ - for each scale level. Each is a 4D-tensor with shape \ - [nb_dec, bs, num_query, cls_out_channels]. Note \ - `cls_out_channels` should includes background. - - all_bbox_preds_list (list[Tensor]): Sigmoid regression \ - outputs for each scale level. 
Each is a 4D-tensor with \ - normalized coordinate format (cx, cy, w, h) and shape \ - [nb_dec, bs, num_query, 4]. - """ - num_levels = len(feats) - img_metas_list = [img_metas for _ in range(num_levels)] - return multi_apply(self.forward_single_onnx, feats, img_metas_list) - - def forward_single_onnx(self, x, img_metas): - """"Forward function for a single feature level with ONNX exportation. - - Args: - x (Tensor): Input feature from backbone's single stage, shape - [bs, c, h, w]. - img_metas (list[dict]): List of image information. - - Returns: - all_cls_scores (Tensor): Outputs from the classification head, - shape [nb_dec, bs, num_query, cls_out_channels]. Note - cls_out_channels should includes background. - all_bbox_preds (Tensor): Sigmoid outputs from the regression - head with normalized coordinate format (cx, cy, w, h). - Shape [nb_dec, bs, num_query, 4]. - """ - # Note `img_shape` is not dynamically traceable to ONNX, - # since the related augmentation was done with numpy under - # CPU. Thus `masks` is directly created with zeros (valid tag) - # and the same spatial shape as `x`. - # The difference between torch and exported ONNX model may be - # ignored, since the same performance is achieved (e.g. - # 40.1 vs 40.1 for DETR) - batch_size = x.size(0) - h, w = x.size()[-2:] - masks = x.new_zeros((batch_size, h, w)) # [B,h,w] - - x = self.input_proj(x) - # interpolate masks to have the same spatial shape with x - masks = F.interpolate( - masks.unsqueeze(1), size=x.shape[-2:]).to(torch.bool).squeeze(1) - pos_embed = self.positional_encoding(masks) - outs_dec, _ = self.transformer(x, masks, self.query_embedding.weight, - pos_embed) - - all_cls_scores = self.fc_cls(outs_dec) - all_bbox_preds = self.fc_reg(self.activate( - self.reg_ffn(outs_dec))).sigmoid() - return all_cls_scores, all_bbox_preds - - def onnx_export(self, all_cls_scores_list, all_bbox_preds_list, img_metas): - """Transform network outputs into bbox predictions, with ONNX - exportation. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - img_metas (list[dict]): Meta information of each image. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - assert len(img_metas) == 1, \ - 'Only support one input image while in exporting to ONNX' - - cls_scores = all_cls_scores_list[-1][-1] - bbox_preds = all_bbox_preds_list[-1][-1] - - # Note `img_shape` is not dynamically traceable to ONNX, - # here `img_shape_for_onnx` (padded shape of image tensor) - # is used. 
- img_shape = img_metas[0]['img_shape_for_onnx'] - max_per_img = self.test_cfg.get('max_per_img', self.num_query) - batch_size = cls_scores.size(0) - # `batch_index_offset` is used for the gather of concatenated tensor - batch_index_offset = torch.arange(batch_size).to( - cls_scores.device) * max_per_img - batch_index_offset = batch_index_offset.unsqueeze(1).expand( - batch_size, max_per_img) - - # supports dynamical batch inference - if self.loss_cls.use_sigmoid: - cls_scores = cls_scores.sigmoid() - scores, indexes = cls_scores.view(batch_size, -1).topk( - max_per_img, dim=1) - det_labels = indexes % self.num_classes - bbox_index = indexes // self.num_classes - bbox_index = (bbox_index + batch_index_offset).view(-1) - bbox_preds = bbox_preds.view(-1, 4)[bbox_index] - bbox_preds = bbox_preds.view(batch_size, -1, 4) - else: - scores, det_labels = F.softmax( - cls_scores, dim=-1)[..., :-1].max(-1) - scores, bbox_index = scores.topk(max_per_img, dim=1) - bbox_index = (bbox_index + batch_index_offset).view(-1) - bbox_preds = bbox_preds.view(-1, 4)[bbox_index] - det_labels = det_labels.view(-1)[bbox_index] - bbox_preds = bbox_preds.view(batch_size, -1, 4) - det_labels = det_labels.view(batch_size, -1) - - det_bboxes = bbox_cxcywh_to_xyxy(bbox_preds) - # use `img_shape_tensor` for dynamically exporting to ONNX - img_shape_tensor = img_shape.flip(0).repeat(2) # [w,h,w,h] - img_shape_tensor = img_shape_tensor.unsqueeze(0).unsqueeze(0).expand( - batch_size, det_bboxes.size(1), 4) - det_bboxes = det_bboxes * img_shape_tensor - # dynamically clip bboxes - x1, y1, x2, y2 = det_bboxes.split((1, 1, 1, 1), dim=-1) - from mmdet.core.export import dynamic_clip_for_onnx - x1, y1, x2, y2 = dynamic_clip_for_onnx(x1, y1, x2, y2, img_shape) - det_bboxes = torch.cat([x1, y1, x2, y2], dim=-1) - det_bboxes = torch.cat((det_bboxes, scores.unsqueeze(-1)), -1) - - return det_bboxes, det_labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/embedding_rpn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/embedding_rpn_head.py deleted file mode 100644 index 22060b96..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/embedding_rpn_head.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.runner import BaseModule - -from mmdet.models.builder import HEADS -from ...core import bbox_cxcywh_to_xyxy - - -@HEADS.register_module() -class EmbeddingRPNHead(BaseModule): - """RPNHead in the `Sparse R-CNN `_ . - - Unlike traditional RPNHead, this module does not need FPN input, but just - decode `init_proposal_bboxes` and expand the first dimension of - `init_proposal_bboxes` and `init_proposal_features` to the batch_size. - - Args: - num_proposals (int): Number of init_proposals. Default 100. - proposal_feature_channel (int): Channel number of - init_proposal_feature. Defaults to 256. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - num_proposals=100, - proposal_feature_channel=256, - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(EmbeddingRPNHead, self).__init__(init_cfg) - self.num_proposals = num_proposals - self.proposal_feature_channel = proposal_feature_channel - self._init_layers() - - def _init_layers(self): - """Initialize a sparse set of proposal boxes and proposal features.""" - self.init_proposal_bboxes = nn.Embedding(self.num_proposals, 4) - self.init_proposal_features = nn.Embedding( - self.num_proposals, self.proposal_feature_channel) - - def init_weights(self): - """Initialize the init_proposal_bboxes as normalized. - - [c_x, c_y, w, h], and we initialize it to the size of the entire - image. - """ - super(EmbeddingRPNHead, self).init_weights() - nn.init.constant_(self.init_proposal_bboxes.weight[:, :2], 0.5) - nn.init.constant_(self.init_proposal_bboxes.weight[:, 2:], 1) - - def _decode_init_proposals(self, imgs, img_metas): - """Decode init_proposal_bboxes according to the size of images and - expand dimension of init_proposal_features to batch_size. - - Args: - imgs (list[Tensor]): List of FPN features. - img_metas (list[dict]): List of meta-information of - images. Need the img_shape to decode the init_proposals. - - Returns: - Tuple(Tensor): - - - proposals (Tensor): Decoded proposal bboxes, - has shape (batch_size, num_proposals, 4). - - init_proposal_features (Tensor): Expanded proposal - features, has shape - (batch_size, num_proposals, proposal_feature_channel). - - imgs_whwh (Tensor): Tensor with shape - (batch_size, 4), the dimension means - [img_width, img_height, img_width, img_height]. - """ - proposals = self.init_proposal_bboxes.weight.clone() - proposals = bbox_cxcywh_to_xyxy(proposals) - num_imgs = len(imgs[0]) - imgs_whwh = [] - for meta in img_metas: - h, w, _ = meta['img_shape'] - imgs_whwh.append(imgs[0].new_tensor([[w, h, w, h]])) - imgs_whwh = torch.cat(imgs_whwh, dim=0) - imgs_whwh = imgs_whwh[:, None, :] - - # imgs_whwh has shape (batch_size, 1, 4) - # The shape of proposals change from (num_proposals, 4) - # to (batch_size ,num_proposals, 4) - proposals = proposals * imgs_whwh - - init_proposal_features = self.init_proposal_features.weight.clone() - init_proposal_features = init_proposal_features[None].expand( - num_imgs, *init_proposal_features.size()) - return proposals, init_proposal_features, imgs_whwh - - def forward_dummy(self, img, img_metas): - """Dummy forward function. - - Used in flops calculation. 
- """ - return self._decode_init_proposals(img, img_metas) - - def forward_train(self, img, img_metas): - """Forward function in training stage.""" - return self._decode_init_proposals(img, img_metas) - - def simple_test_rpn(self, img, img_metas): - """Forward function in testing stage.""" - return self._decode_init_proposals(img, img_metas) - - def simple_test(self, img, img_metas): - """Forward function in testing stage.""" - raise NotImplementedError - - def aug_test_rpn(self, feats, img_metas): - raise NotImplementedError( - 'EmbeddingRPNHead does not support test-time augmentation') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/fcos_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/fcos_head.py deleted file mode 100644 index d72fb56c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/fcos_head.py +++ /dev/null @@ -1,455 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.cnn import Scale -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply, reduce_mean -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - -INF = 1e8 - - -@HEADS.register_module() -class FCOSHead(AnchorFreeHead): - """Anchor-free head used in `FCOS `_. - - The FCOS head does not use anchor boxes. Instead bounding boxes are - predicted at each pixel and a centerness measure is used to suppress - low-quality predictions. - Here norm_on_bbox, centerness_on_reg, dcn_on_last_conv are training - tricks used in official repo, which will bring remarkable mAP gains - of up to 4.9. Please see https://github.com/tianzhi0549/FCOS for - more detail. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - strides (list[int] | list[tuple[int, int]]): Strides of points - in multiple feature levels. Default: (4, 8, 16, 32, 64). - regress_ranges (tuple[tuple[int, int]]): Regress range of multiple - level points. - center_sampling (bool): If true, use center sampling. Default: False. - center_sample_radius (float): Radius of center sampling. Default: 1.5. - norm_on_bbox (bool): If true, normalize the regression targets - with FPN strides. Default: False. - centerness_on_reg (bool): If true, position centerness on the - regress branch. Please refer to https://github.com/tianzhi0549/FCOS/issues/89#issuecomment-516877042. - Default: False. - conv_bias (bool | str): If specified as `auto`, it will be decided by the - norm_cfg. Bias of conv will be set as True if `norm_cfg` is None, otherwise - False. Default: "auto". - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - loss_centerness (dict): Config of centerness loss. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, requires_grad=True). - init_cfg (dict or list[dict], optional): Initialization config dict. 
- - Example: - >>> self = FCOSHead(11, 7) - >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]] - >>> cls_score, bbox_pred, centerness = self.forward(feats) - >>> assert len(cls_score) == len(self.scales) - """ # noqa: E501 - - def __init__(self, - num_classes, - in_channels, - regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), - (512, INF)), - center_sampling=False, - center_sample_radius=1.5, - norm_on_bbox=False, - centerness_on_reg=False, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='IoULoss', loss_weight=1.0), - loss_centerness=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.regress_ranges = regress_ranges - self.center_sampling = center_sampling - self.center_sample_radius = center_sample_radius - self.norm_on_bbox = norm_on_bbox - self.centerness_on_reg = centerness_on_reg - super().__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - loss_bbox=loss_bbox, - norm_cfg=norm_cfg, - init_cfg=init_cfg, - **kwargs) - self.loss_centerness = build_loss(loss_centerness) - - def _init_layers(self): - """Initialize layers of the head.""" - super()._init_layers() - self.conv_centerness = nn.Conv2d(self.feat_channels, 1, 3, padding=1) - self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides]) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Box scores for each scale level, \ - each is a 4D-tensor, the channel number is \ - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each \ - scale level, each is a 4D-tensor, the channel number is \ - num_points * 4. - centernesses (list[Tensor]): centerness for each scale level, \ - each is a 4D-tensor, the channel number is num_points * 1. - """ - return multi_apply(self.forward_single, feats, self.scales, - self.strides) - - def forward_single(self, x, scale, stride): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - stride (int): The corresponding stride for feature maps, only - used to normalize the bbox prediction when self.norm_on_bbox - is True. - - Returns: - tuple: scores for each class, bbox predictions and centerness \ - predictions of input feature maps. - """ - cls_score, bbox_pred, cls_feat, reg_feat = super().forward_single(x) - if self.centerness_on_reg: - centerness = self.conv_centerness(reg_feat) - else: - centerness = self.conv_centerness(cls_feat) - # scale the bbox_pred of different level - # float to avoid overflow when enabling FP16 - bbox_pred = scale(bbox_pred).float() - if self.norm_on_bbox: - # bbox_pred needed for gradient computation has been modified - # by F.relu(bbox_pred) when run with PyTorch 1.10. 
So replace - # F.relu(bbox_pred) with bbox_pred.clamp(min=0) - bbox_pred = bbox_pred.clamp(min=0) - if not self.training: - bbox_pred *= stride - else: - bbox_pred = bbox_pred.exp() - return cls_score, bbox_pred, centerness - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses')) - def loss(self, - cls_scores, - bbox_preds, - centernesses, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - centernesses (list[Tensor]): centerness for each scale level, each - is a 4D-tensor, the channel number is num_points * 1. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) == len(centernesses) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - labels, bbox_targets = self.get_targets(all_level_points, gt_bboxes, - gt_labels) - - num_imgs = cls_scores[0].size(0) - # flatten cls_scores, bbox_preds and centerness - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - for bbox_pred in bbox_preds - ] - flatten_centerness = [ - centerness.permute(0, 2, 3, 1).reshape(-1) - for centerness in centernesses - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_centerness = torch.cat(flatten_centerness) - flatten_labels = torch.cat(labels) - flatten_bbox_targets = torch.cat(bbox_targets) - # repeat points to align with bbox_preds - flatten_points = torch.cat( - [points.repeat(num_imgs, 1) for points in all_level_points]) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((flatten_labels >= 0) - & (flatten_labels < bg_class_ind)).nonzero().reshape(-1) - num_pos = torch.tensor( - len(pos_inds), dtype=torch.float, device=bbox_preds[0].device) - num_pos = max(reduce_mean(num_pos), 1.0) - loss_cls = self.loss_cls( - flatten_cls_scores, flatten_labels, avg_factor=num_pos) - - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_centerness = flatten_centerness[pos_inds] - pos_bbox_targets = flatten_bbox_targets[pos_inds] - pos_centerness_targets = self.centerness_target(pos_bbox_targets) - # centerness weighted iou loss - centerness_denorm = max( - reduce_mean(pos_centerness_targets.sum().detach()), 1e-6) - - if len(pos_inds) > 0: - pos_points = flatten_points[pos_inds] - pos_decoded_bbox_preds = self.bbox_coder.decode( - pos_points, pos_bbox_preds) - pos_decoded_target_preds = self.bbox_coder.decode( - pos_points, pos_bbox_targets) - loss_bbox = self.loss_bbox( - pos_decoded_bbox_preds, - pos_decoded_target_preds, 
- weight=pos_centerness_targets, - avg_factor=centerness_denorm) - loss_centerness = self.loss_centerness( - pos_centerness, pos_centerness_targets, avg_factor=num_pos) - else: - loss_bbox = pos_bbox_preds.sum() - loss_centerness = pos_centerness.sum() - - return dict( - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_centerness=loss_centerness) - - def get_targets(self, points, gt_bboxes_list, gt_labels_list): - """Compute regression, classification and centerness targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - - Returns: - tuple: - concat_lvl_labels (list[Tensor]): Labels of each level. \ - concat_lvl_bbox_targets (list[Tensor]): BBox targets of each \ - level. - """ - assert len(points) == len(self.regress_ranges) - num_levels = len(points) - # expand regress ranges to align with points - expanded_regress_ranges = [ - points[i].new_tensor(self.regress_ranges[i])[None].expand_as( - points[i]) for i in range(num_levels) - ] - # concat all levels points and regress ranges - concat_regress_ranges = torch.cat(expanded_regress_ranges, dim=0) - concat_points = torch.cat(points, dim=0) - - # the number of points per img, per lvl - num_points = [center.size(0) for center in points] - - # get labels and bbox_targets of each image - labels_list, bbox_targets_list = multi_apply( - self._get_target_single, - gt_bboxes_list, - gt_labels_list, - points=concat_points, - regress_ranges=concat_regress_ranges, - num_points_per_lvl=num_points) - - # split to per img, per level - labels_list = [labels.split(num_points, 0) for labels in labels_list] - bbox_targets_list = [ - bbox_targets.split(num_points, 0) - for bbox_targets in bbox_targets_list - ] - - # concat per level image - concat_lvl_labels = [] - concat_lvl_bbox_targets = [] - for i in range(num_levels): - concat_lvl_labels.append( - torch.cat([labels[i] for labels in labels_list])) - bbox_targets = torch.cat( - [bbox_targets[i] for bbox_targets in bbox_targets_list]) - if self.norm_on_bbox: - bbox_targets = bbox_targets / self.strides[i] - concat_lvl_bbox_targets.append(bbox_targets) - return concat_lvl_labels, concat_lvl_bbox_targets - - def _get_target_single(self, gt_bboxes, gt_labels, points, regress_ranges, - num_points_per_lvl): - """Compute regression and classification targets for a single image.""" - num_points = points.size(0) - num_gts = gt_labels.size(0) - if num_gts == 0: - return gt_labels.new_full((num_points,), self.num_classes), \ - gt_bboxes.new_zeros((num_points, 4)) - - areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1]) - # TODO: figure out why these two are different - # areas = areas[None].expand(num_points, num_gts) - areas = areas[None].repeat(num_points, 1) - regress_ranges = regress_ranges[:, None, :].expand( - num_points, num_gts, 2) - gt_bboxes = gt_bboxes[None].expand(num_points, num_gts, 4) - xs, ys = points[:, 0], points[:, 1] - xs = xs[:, None].expand(num_points, num_gts) - ys = ys[:, None].expand(num_points, num_gts) - - left = xs - gt_bboxes[..., 0] - right = gt_bboxes[..., 2] - xs - top = ys - gt_bboxes[..., 1] - bottom = gt_bboxes[..., 3] - ys - bbox_targets = torch.stack((left, top, right, bottom), -1) - - if self.center_sampling: - # condition1: inside a `center bbox` - radius = 
self.center_sample_radius - center_xs = (gt_bboxes[..., 0] + gt_bboxes[..., 2]) / 2 - center_ys = (gt_bboxes[..., 1] + gt_bboxes[..., 3]) / 2 - center_gts = torch.zeros_like(gt_bboxes) - stride = center_xs.new_zeros(center_xs.shape) - - # project the points on current lvl back to the `original` sizes - lvl_begin = 0 - for lvl_idx, num_points_lvl in enumerate(num_points_per_lvl): - lvl_end = lvl_begin + num_points_lvl - stride[lvl_begin:lvl_end] = self.strides[lvl_idx] * radius - lvl_begin = lvl_end - - x_mins = center_xs - stride - y_mins = center_ys - stride - x_maxs = center_xs + stride - y_maxs = center_ys + stride - center_gts[..., 0] = torch.where(x_mins > gt_bboxes[..., 0], - x_mins, gt_bboxes[..., 0]) - center_gts[..., 1] = torch.where(y_mins > gt_bboxes[..., 1], - y_mins, gt_bboxes[..., 1]) - center_gts[..., 2] = torch.where(x_maxs > gt_bboxes[..., 2], - gt_bboxes[..., 2], x_maxs) - center_gts[..., 3] = torch.where(y_maxs > gt_bboxes[..., 3], - gt_bboxes[..., 3], y_maxs) - - cb_dist_left = xs - center_gts[..., 0] - cb_dist_right = center_gts[..., 2] - xs - cb_dist_top = ys - center_gts[..., 1] - cb_dist_bottom = center_gts[..., 3] - ys - center_bbox = torch.stack( - (cb_dist_left, cb_dist_top, cb_dist_right, cb_dist_bottom), -1) - inside_gt_bbox_mask = center_bbox.min(-1)[0] > 0 - else: - # condition1: inside a gt bbox - inside_gt_bbox_mask = bbox_targets.min(-1)[0] > 0 - - # condition2: limit the regression range for each location - max_regress_distance = bbox_targets.max(-1)[0] - inside_regress_range = ( - (max_regress_distance >= regress_ranges[..., 0]) - & (max_regress_distance <= regress_ranges[..., 1])) - - # if there are still more than one objects for a location, - # we choose the one with minimal area - areas[inside_gt_bbox_mask == 0] = INF - areas[inside_regress_range == 0] = INF - min_area, min_area_inds = areas.min(dim=1) - - labels = gt_labels[min_area_inds] - labels[min_area == INF] = self.num_classes # set as BG - bbox_targets = bbox_targets[range(num_points), min_area_inds] - - return labels, bbox_targets - - def centerness_target(self, pos_bbox_targets): - """Compute centerness targets. - - Args: - pos_bbox_targets (Tensor): BBox targets of positive bboxes in shape - (num_pos, 4) - - Returns: - Tensor: Centerness target. - """ - # only calculate pos centerness targets, otherwise there may be nan - left_right = pos_bbox_targets[:, [0, 2]] - top_bottom = pos_bbox_targets[:, [1, 3]] - if len(left_right) == 0: - centerness_targets = left_right[..., 0] - else: - centerness_targets = ( - left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * ( - top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0]) - return torch.sqrt(centerness_targets) - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points according to feature map size. - - This function will be deprecated soon. 
- """ - warnings.warn( - '`_get_points_single` in `FCOSHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map ' - 'with `self.prior_generator.single_level_grid_priors` ') - - y, x = super()._get_points_single(featmap_size, stride, dtype, device) - points = torch.stack((x.reshape(-1) * stride, y.reshape(-1) * stride), - dim=-1) + stride // 2 - return points diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/fovea_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/fovea_head.py deleted file mode 100644 index 8be7fc94..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/fovea_head.py +++ /dev/null @@ -1,385 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import DeformConv2d -from mmcv.runner import BaseModule - -from mmdet.core import multi_apply -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS -from .anchor_free_head import AnchorFreeHead - -INF = 1e8 - - -class FeatureAlign(BaseModule): - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - deform_groups=4, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.1, - override=dict( - type='Normal', name='conv_adaption', std=0.01))): - super(FeatureAlign, self).__init__(init_cfg) - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 4, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x, shape): - offset = self.conv_offset(shape) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@HEADS.register_module() -class FoveaHead(AnchorFreeHead): - """FoveaBox: Beyond Anchor-based Object Detector - https://arxiv.org/abs/1904.03797 - """ - - def __init__(self, - num_classes, - in_channels, - base_edge_list=(16, 32, 64, 128, 256), - scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, - 512)), - sigma=0.4, - with_deform=False, - deform_groups=4, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.base_edge_list = base_edge_list - self.scale_ranges = scale_ranges - self.sigma = sigma - self.with_deform = with_deform - self.deform_groups = deform_groups - super().__init__(num_classes, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - # box branch - super()._init_reg_convs() - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - # cls branch - if not self.with_deform: - super()._init_cls_convs() - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - else: - self.cls_convs = nn.ModuleList() - self.cls_convs.append( - ConvModule( - self.feat_channels, (self.feat_channels * 4), - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.cls_convs.append( - ConvModule((self.feat_channels * 4), (self.feat_channels * 4), - 1, - stride=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.feature_adaption = FeatureAlign( - self.feat_channels, - self.feat_channels, 
- kernel_size=3, - deform_groups=self.deform_groups) - self.conv_cls = nn.Conv2d( - int(self.feat_channels * 4), - self.cls_out_channels, - 3, - padding=1) - - def forward_single(self, x): - cls_feat = x - reg_feat = x - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = self.conv_reg(reg_feat) - if self.with_deform: - cls_feat = self.feature_adaption(cls_feat, bbox_pred.exp()) - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - cls_score = self.conv_cls(cls_feat) - return cls_score, bbox_pred - - def loss(self, - cls_scores, - bbox_preds, - gt_bbox_list, - gt_label_list, - img_metas, - gt_bboxes_ignore=None): - assert len(cls_scores) == len(bbox_preds) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - points = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - num_imgs = cls_scores[0].size(0) - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - for bbox_pred in bbox_preds - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_labels, flatten_bbox_targets = self.get_targets( - gt_bbox_list, gt_label_list, featmap_sizes, points) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((flatten_labels >= 0) - & (flatten_labels < self.num_classes)).nonzero().view(-1) - num_pos = len(pos_inds) - - loss_cls = self.loss_cls( - flatten_cls_scores, flatten_labels, avg_factor=num_pos + num_imgs) - if num_pos > 0: - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_bbox_targets = flatten_bbox_targets[pos_inds] - pos_weights = pos_bbox_targets.new_zeros( - pos_bbox_targets.size()) + 1.0 - loss_bbox = self.loss_bbox( - pos_bbox_preds, - pos_bbox_targets, - pos_weights, - avg_factor=num_pos) - else: - loss_bbox = torch.tensor( - 0, - dtype=flatten_bbox_preds.dtype, - device=flatten_bbox_preds.device) - return dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - def get_targets(self, gt_bbox_list, gt_label_list, featmap_sizes, points): - label_list, bbox_target_list = multi_apply( - self._get_target_single, - gt_bbox_list, - gt_label_list, - featmap_size_list=featmap_sizes, - point_list=points) - flatten_labels = [ - torch.cat([ - labels_level_img.flatten() for labels_level_img in labels_level - ]) for labels_level in zip(*label_list) - ] - flatten_bbox_targets = [ - torch.cat([ - bbox_targets_level_img.reshape(-1, 4) - for bbox_targets_level_img in bbox_targets_level - ]) for bbox_targets_level in zip(*bbox_target_list) - ] - flatten_labels = torch.cat(flatten_labels) - flatten_bbox_targets = torch.cat(flatten_bbox_targets) - return flatten_labels, flatten_bbox_targets - - def _get_target_single(self, - gt_bboxes_raw, - gt_labels_raw, - featmap_size_list=None, - point_list=None): - - gt_areas = torch.sqrt((gt_bboxes_raw[:, 2] - gt_bboxes_raw[:, 0]) * - (gt_bboxes_raw[:, 3] - gt_bboxes_raw[:, 1])) - label_list = [] - bbox_target_list = [] - # for each pyramid, find the cls and box target - for base_len, (lower_bound, upper_bound), stride, featmap_size, \ - points in zip(self.base_edge_list, self.scale_ranges, - self.strides, featmap_size_list, point_list): - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - points = points.view(*featmap_size, 2) - x, y = points[..., 0], points[..., 1] - labels = gt_labels_raw.new_zeros(featmap_size) + 
self.num_classes - bbox_targets = gt_bboxes_raw.new(featmap_size[0], featmap_size[1], - 4) + 1 - # scale assignment - hit_indices = ((gt_areas >= lower_bound) & - (gt_areas <= upper_bound)).nonzero().flatten() - if len(hit_indices) == 0: - label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - continue - _, hit_index_order = torch.sort(-gt_areas[hit_indices]) - hit_indices = hit_indices[hit_index_order] - gt_bboxes = gt_bboxes_raw[hit_indices, :] / stride - gt_labels = gt_labels_raw[hit_indices] - half_w = 0.5 * (gt_bboxes[:, 2] - gt_bboxes[:, 0]) - half_h = 0.5 * (gt_bboxes[:, 3] - gt_bboxes[:, 1]) - # valid fovea area: left, right, top, down - pos_left = torch.ceil( - gt_bboxes[:, 0] + (1 - self.sigma) * half_w - 0.5).long(). \ - clamp(0, featmap_size[1] - 1) - pos_right = torch.floor( - gt_bboxes[:, 0] + (1 + self.sigma) * half_w - 0.5).long(). \ - clamp(0, featmap_size[1] - 1) - pos_top = torch.ceil( - gt_bboxes[:, 1] + (1 - self.sigma) * half_h - 0.5).long(). \ - clamp(0, featmap_size[0] - 1) - pos_down = torch.floor( - gt_bboxes[:, 1] + (1 + self.sigma) * half_h - 0.5).long(). \ - clamp(0, featmap_size[0] - 1) - for px1, py1, px2, py2, label, (gt_x1, gt_y1, gt_x2, gt_y2) in \ - zip(pos_left, pos_top, pos_right, pos_down, gt_labels, - gt_bboxes_raw[hit_indices, :]): - labels[py1:py2 + 1, px1:px2 + 1] = label - bbox_targets[py1:py2 + 1, px1:px2 + 1, 0] = \ - (x[py1:py2 + 1, px1:px2 + 1] - gt_x1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 1] = \ - (y[py1:py2 + 1, px1:px2 + 1] - gt_y1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 2] = \ - (gt_x2 - x[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 3] = \ - (gt_y2 - y[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets = bbox_targets.clamp(min=1. / 16, max=16.) - label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - return label_list, bbox_target_list - - # Same as base_dense_head/_get_bboxes_single except self._bbox_decode - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. Fovea head does not need this value. - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 2). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. 
If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_score_list) == len(bbox_pred_list) - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for level_idx, (cls_score, bbox_pred, stride, base_len, priors) in \ - enumerate(zip(cls_score_list, bbox_pred_list, self.strides, - self.base_edge_list, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, _, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - bboxes = self._bbox_decode(priors, bbox_pred, base_len, img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process(mlvl_scores, mlvl_labels, mlvl_bboxes, - img_meta['scale_factor'], cfg, rescale, - with_nms) - - def _bbox_decode(self, priors, bbox_pred, base_len, max_shape): - bbox_pred = bbox_pred.exp() - - y = priors[:, 1] - x = priors[:, 0] - x1 = (x - base_len * bbox_pred[:, 0]). \ - clamp(min=0, max=max_shape[1] - 1) - y1 = (y - base_len * bbox_pred[:, 1]). \ - clamp(min=0, max=max_shape[0] - 1) - x2 = (x + base_len * bbox_pred[:, 2]). \ - clamp(min=0, max=max_shape[1] - 1) - y2 = (y + base_len * bbox_pred[:, 3]). \ - clamp(min=0, max=max_shape[0] - 1) - decoded_bboxes = torch.stack([x1, y1, x2, y2], -1) - return decoded_bboxes - - def _get_points_single(self, *args, **kwargs): - """Get points according to feature map size. - - This function will be deprecated soon. - """ - warnings.warn( - '`_get_points_single` in `FoveaHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map ' - 'with `self.prior_generator.single_level_grid_priors` ') - y, x = super()._get_points_single(*args, **kwargs) - return y + 0.5, x + 0.5 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/free_anchor_retina_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/free_anchor_retina_head.py deleted file mode 100644 index 3acd25ec..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/free_anchor_retina_head.py +++ /dev/null @@ -1,272 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F - -from mmdet.core import bbox_overlaps -from ..builder import HEADS -from .retina_head import RetinaHead - -EPS = 1e-12 - - -@HEADS.register_module() -class FreeAnchorRetinaHead(RetinaHead): - """FreeAnchor RetinaHead used in https://arxiv.org/abs/1909.02466. 
- - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of conv layers in cls and reg tower. - Default: 4. - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, - requires_grad=True). - pre_anchor_topk (int): Number of boxes that be token in each bag. - bbox_thr (float): The threshold of the saturated linear function. It is - usually the same with the IoU threshold used in NMS. - gamma (float): Gamma parameter in focal loss. - alpha (float): Alpha parameter in focal loss. - """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - pre_anchor_topk=50, - bbox_thr=0.6, - gamma=2.0, - alpha=0.5, - **kwargs): - super(FreeAnchorRetinaHead, - self).__init__(num_classes, in_channels, stacked_convs, conv_cfg, - norm_cfg, **kwargs) - - self.pre_anchor_topk = pre_anchor_topk - self.bbox_thr = bbox_thr - self.gamma = gamma - self.alpha = alpha - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - device = cls_scores[0].device - anchor_list, _ = self.get_anchors( - featmap_sizes, img_metas, device=device) - anchors = [torch.cat(anchor) for anchor in anchor_list] - - # concatenate each level - cls_scores = [ - cls.permute(0, 2, 3, - 1).reshape(cls.size(0), -1, self.cls_out_channels) - for cls in cls_scores - ] - bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(bbox_pred.size(0), -1, 4) - for bbox_pred in bbox_preds - ] - cls_scores = torch.cat(cls_scores, dim=1) - bbox_preds = torch.cat(bbox_preds, dim=1) - - cls_prob = torch.sigmoid(cls_scores) - box_prob = [] - num_pos = 0 - positive_losses = [] - for _, (anchors_, gt_labels_, gt_bboxes_, cls_prob_, - bbox_preds_) in enumerate( - zip(anchors, gt_labels, gt_bboxes, cls_prob, bbox_preds)): - - with torch.no_grad(): - if len(gt_bboxes_) == 0: - image_box_prob = torch.zeros( - anchors_.size(0), - self.cls_out_channels).type_as(bbox_preds_) - else: - # box_localization: a_{j}^{loc}, shape: [j, 4] - pred_boxes = self.bbox_coder.decode(anchors_, bbox_preds_) - - # object_box_iou: IoU_{ij}^{loc}, shape: [i, j] - object_box_iou = bbox_overlaps(gt_bboxes_, pred_boxes) - - # object_box_prob: P{a_{j} -> b_{i}}, shape: [i, j] - t1 = self.bbox_thr - t2 = object_box_iou.max( - dim=1, keepdim=True).values.clamp(min=t1 + 1e-12) - object_box_prob = ((object_box_iou - t1) / - (t2 - t1)).clamp( - min=0, max=1) - - # object_cls_box_prob: P{a_{j} -> b_{i}}, shape: [i, c, j] - num_obj = gt_labels_.size(0) - indices = torch.stack([ - torch.arange(num_obj).type_as(gt_labels_), gt_labels_ - ], - dim=0) - object_cls_box_prob = torch.sparse_coo_tensor( - indices, object_box_prob) - - # image_box_iou: P{a_{j} \in A_{+}}, shape: [c, j] - """ - from "start" to "end" implement: - image_box_iou = torch.sparse.max(object_cls_box_prob, - dim=0).t() - - """ - # start - box_cls_prob = torch.sparse.sum( - object_cls_box_prob, dim=0).to_dense() - - indices = torch.nonzero(box_cls_prob, as_tuple=False).t_() - if indices.numel() == 0: - image_box_prob = torch.zeros( - anchors_.size(0), - self.cls_out_channels).type_as(object_box_prob) - else: - nonzero_box_prob = torch.where( - (gt_labels_.unsqueeze(dim=-1) == indices[0]), - object_box_prob[:, indices[1]], - torch.tensor([ - 0 - ]).type_as(object_box_prob)).max(dim=0).values - - # upmap to shape [j, c] - image_box_prob = torch.sparse_coo_tensor( - indices.flip([0]), - nonzero_box_prob, - size=(anchors_.size(0), - self.cls_out_channels)).to_dense() - # end - - box_prob.append(image_box_prob) - - # construct bags for objects - match_quality_matrix = bbox_overlaps(gt_bboxes_, anchors_) - _, matched = torch.topk( - match_quality_matrix, - self.pre_anchor_topk, - dim=1, - sorted=False) - del match_quality_matrix - - # matched_cls_prob: P_{ij}^{cls} - matched_cls_prob = torch.gather( - cls_prob_[matched], 2, - gt_labels_.view(-1, 1, 1).repeat(1, self.pre_anchor_topk, - 1)).squeeze(2) - - # matched_box_prob: P_{ij}^{loc} - matched_anchors = anchors_[matched] - matched_object_targets = self.bbox_coder.encode( - matched_anchors, - gt_bboxes_.unsqueeze(dim=1).expand_as(matched_anchors)) - loss_bbox = self.loss_bbox( - bbox_preds_[matched], - matched_object_targets, - reduction_override='none').sum(-1) - matched_box_prob = torch.exp(-loss_bbox) - - # positive_losses: {-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )} - num_pos += len(gt_bboxes_) - positive_losses.append( - 
self.positive_bag_loss(matched_cls_prob, matched_box_prob)) - positive_loss = torch.cat(positive_losses).sum() / max(1, num_pos) - - # box_prob: P{a_{j} \in A_{+}} - box_prob = torch.stack(box_prob, dim=0) - - # negative_loss: - # \sum_{j}{ FL((1 - P{a_{j} \in A_{+}}) * (1 - P_{j}^{bg})) } / n||B|| - negative_loss = self.negative_bag_loss(cls_prob, box_prob).sum() / max( - 1, num_pos * self.pre_anchor_topk) - - # avoid the absence of gradients in regression subnet - # when no ground-truth in a batch - if num_pos == 0: - positive_loss = bbox_preds.sum() * 0 - - losses = { - 'positive_bag_loss': positive_loss, - 'negative_bag_loss': negative_loss - } - return losses - - def positive_bag_loss(self, matched_cls_prob, matched_box_prob): - """Compute positive bag loss. - - :math:`-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )`. - - :math:`P_{ij}^{cls}`: matched_cls_prob, classification probability of matched samples. - - :math:`P_{ij}^{loc}`: matched_box_prob, box probability of matched samples. - - Args: - matched_cls_prob (Tensor): Classification probability of matched - samples in shape (num_gt, pre_anchor_topk). - matched_box_prob (Tensor): BBox probability of matched samples, - in shape (num_gt, pre_anchor_topk). - - Returns: - Tensor: Positive bag loss in shape (num_gt,). - """ # noqa: E501, W605 - # bag_prob = Mean-max(matched_prob) - matched_prob = matched_cls_prob * matched_box_prob - weight = 1 / torch.clamp(1 - matched_prob, 1e-12, None) - weight /= weight.sum(dim=1).unsqueeze(dim=-1) - bag_prob = (weight * matched_prob).sum(dim=1) - # positive_bag_loss = -self.alpha * log(bag_prob) - return self.alpha * F.binary_cross_entropy( - bag_prob, torch.ones_like(bag_prob), reduction='none') - - def negative_bag_loss(self, cls_prob, box_prob): - """Compute negative bag loss. - - :math:`FL((1 - P_{a_{j} \in A_{+}}) * (1 - P_{j}^{bg}))`. - - :math:`P_{a_{j} \in A_{+}}`: Box_probability of matched samples. - - :math:`P_{j}^{bg}`: Classification probability of negative samples. - - Args: - cls_prob (Tensor): Classification probability, in shape - (num_img, num_anchors, num_classes). - box_prob (Tensor): Box probability, in shape - (num_img, num_anchors, num_classes). - - Returns: - Tensor: Negative bag loss in shape (num_img, num_anchors, num_classes). - """ # noqa: E501, W605 - prob = cls_prob * (1 - box_prob) - # There are some cases when neg_prob = 0. - # This will cause the neg_prob.log() to be inf without clamp. - prob = prob.clamp(min=EPS, max=1 - EPS) - negative_bag_loss = prob**self.gamma * F.binary_cross_entropy( - prob, torch.zeros_like(prob), reduction='none') - return (1 - self.alpha) * negative_bag_loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/fsaf_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/fsaf_head.py deleted file mode 100644 index 2d2b7879..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/fsaf_head.py +++ /dev/null @@ -1,433 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, images_to_levels, multi_apply, - unmap) -from ..builder import HEADS -from ..losses.accuracy import accuracy -from ..losses.utils import weight_reduce_loss -from .retina_head import RetinaHead - - -@HEADS.register_module() -class FSAFHead(RetinaHead): - """Anchor-free head used in `FSAF `_. - - The head contains two subnetworks. 
The first classifies anchor boxes and - the second regresses deltas for the anchors (num_anchors is 1 for anchor- - free methods) - - Args: - *args: Same as its base class in :class:`RetinaHead` - score_threshold (float, optional): The score_threshold to calculate - positive recall. If given, prediction scores lower than this value - is counted as incorrect prediction. Default to None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - **kwargs: Same as its base class in :class:`RetinaHead` - - Example: - >>> import torch - >>> self = FSAFHead(11, 7) - >>> x = torch.rand(1, 7, 32, 32) - >>> cls_score, bbox_pred = self.forward_single(x) - >>> # Each anchor predicts a score for each class except background - >>> cls_per_anchor = cls_score.shape[1] / self.num_anchors - >>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors - >>> assert cls_per_anchor == self.num_classes - >>> assert box_per_anchor == 4 - """ - - def __init__(self, *args, score_threshold=None, init_cfg=None, **kwargs): - # The positive bias in self.retina_reg conv is to prevent predicted \ - # bbox with 0 area - if init_cfg is None: - init_cfg = dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=[ - dict( - type='Normal', - name='retina_cls', - std=0.01, - bias_prob=0.01), - dict( - type='Normal', name='retina_reg', std=0.01, bias=0.25) - ]) - super().__init__(*args, init_cfg=init_cfg, **kwargs) - self.score_threshold = score_threshold - - def forward_single(self, x): - """Forward feature map of a single scale level. - - Args: - x (Tensor): Feature map of a single scale level. - - Returns: - tuple (Tensor): - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W). - """ - cls_score, bbox_pred = super().forward_single(x) - # relu: TBLR encoder only accepts positive bbox_pred - return cls_score, self.relu(bbox_pred) - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Most of the codes are the same with the base class - :obj: `AnchorHead`, except that it also collects and returns - the matched gt index in the image (from 0 to num_gt-1). If the - anchor bbox is not matched to any gt, the corresponding value in - pos_gt_inds is -1. 
- """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # Assign gt and sample anchors - anchors = flat_anchors[inside_flags.type(torch.bool), :] - assign_result = self.assigner.assign( - anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros((num_valid_anchors, label_channels), - dtype=torch.float) - pos_gt_inds = anchors.new_full((num_valid_anchors, ), - -1, - dtype=torch.long) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, both - # the predicted boxes and regression targets should be with - # absolute coordinate format. - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - # The assigned gt_index for each anchor. (0-based) - pos_gt_inds[pos_inds] = sampling_result.pos_assigned_gt_inds - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # shadowed_labels is a tensor composed of tuples - # (anchor_inds, class_label) that indicate those anchors lying in the - # outer region of a gt or overlapped by another gt with a smaller - # area. - # - # Therefore, only the shadowed labels are ignored for loss calculation. - # the key `shadowed_labels` is defined in :obj:`CenterRegionAssigner` - shadowed_labels = assign_result.get_extra_property('shadowed_labels') - if shadowed_labels is not None and shadowed_labels.numel(): - if len(shadowed_labels.shape) == 2: - idx_, label_ = shadowed_labels[:, 0], shadowed_labels[:, 1] - assert (labels[idx_] != label_).all(), \ - 'One label cannot be both positive and ignored' - label_weights[idx_, label_] = 0 - else: - label_weights[shadowed_labels] = 0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap(labels, num_total_anchors, inside_flags) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - pos_gt_inds = unmap( - pos_gt_inds, num_total_anchors, inside_flags, fill=-1) - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds, sampling_result, pos_gt_inds) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. 
- - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W). - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - for i in range(len(bbox_preds)): # loop over fpn level - # avoid 0 area of the predicted bbox - bbox_preds[i] = bbox_preds[i].clamp(min=1e-4) - # TODO: It may directly use the base-class loss function. - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - batch_size = len(gt_bboxes) - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, - pos_assigned_gt_inds_list) = cls_reg_targets - - num_gts = np.array(list(map(len, gt_labels))) - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - # `pos_assigned_gt_inds_list` (length: fpn_levels) stores the assigned - # gt index of each anchor bbox in each fpn level. - cum_num_gts = list(np.cumsum(num_gts)) # length of batch_size - for i, assign in enumerate(pos_assigned_gt_inds_list): - # loop over fpn levels - for j in range(1, batch_size): - # loop over batch size - # Convert gt indices in each img to those in the batch - assign[j][assign[j] >= 0] += int(cum_num_gts[j - 1]) - pos_assigned_gt_inds_list[i] = assign.flatten() - labels_list[i] = labels_list[i].flatten() - num_gts = sum(map(len, gt_labels)) # total number of gt in the batch - # The unique label index of each gt in the batch - label_sequence = torch.arange(num_gts, device=device) - # Collect the average loss of each gt in each level - with torch.no_grad(): - loss_levels, = multi_apply( - self.collect_loss_level_single, - losses_cls, - losses_bbox, - pos_assigned_gt_inds_list, - labels_seq=label_sequence) - # Shape: (fpn_levels, num_gts). 
Loss of each gt at each fpn level - loss_levels = torch.stack(loss_levels, dim=0) - # Locate the best fpn level for loss back-propagation - if loss_levels.numel() == 0: # zero gt - argmin = loss_levels.new_empty((num_gts, ), dtype=torch.long) - else: - _, argmin = loss_levels.min(dim=0) - - # Reweight the loss of each (anchor, label) pair, so that only those - # at the best gt level are back-propagated. - losses_cls, losses_bbox, pos_inds = multi_apply( - self.reweight_loss_single, - losses_cls, - losses_bbox, - pos_assigned_gt_inds_list, - labels_list, - list(range(len(losses_cls))), - min_levels=argmin) - num_pos = torch.cat(pos_inds, 0).sum().float() - pos_recall = self.calculate_pos_recall(cls_scores, labels_list, - pos_inds) - - if num_pos == 0: # No gt - avg_factor = num_pos + float(num_total_neg) - else: - avg_factor = num_pos - for i in range(len(losses_cls)): - losses_cls[i] /= avg_factor - losses_bbox[i] /= avg_factor - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - num_pos=num_pos / batch_size, - pos_recall=pos_recall) - - def calculate_pos_recall(self, cls_scores, labels_list, pos_inds): - """Calculate positive recall with score threshold. - - Args: - cls_scores (list[Tensor]): Classification scores at all fpn levels. - Each tensor is in shape (N, num_classes * num_anchors, H, W) - labels_list (list[Tensor]): The label that each anchor is assigned - to. Shape (N * H * W * num_anchors, ) - pos_inds (list[Tensor]): List of bool tensors indicating whether - the anchor is assigned to a positive label. - Shape (N * H * W * num_anchors, ) - - Returns: - Tensor: A single float number indicating the positive recall. - """ - with torch.no_grad(): - num_class = self.num_classes - scores = [ - cls.permute(0, 2, 3, 1).reshape(-1, num_class)[pos] - for cls, pos in zip(cls_scores, pos_inds) - ] - labels = [ - label.reshape(-1)[pos] - for label, pos in zip(labels_list, pos_inds) - ] - scores = torch.cat(scores, dim=0) - labels = torch.cat(labels, dim=0) - if self.use_sigmoid_cls: - scores = scores.sigmoid() - else: - scores = scores.softmax(dim=1) - - return accuracy(scores, labels, thresh=self.score_threshold) - - def collect_loss_level_single(self, cls_loss, reg_loss, assigned_gt_inds, - labels_seq): - """Get the average loss in each FPN level w.r.t. each gt label. - - Args: - cls_loss (Tensor): Classification loss of each feature map pixel, - shape (num_anchor, num_class) - reg_loss (Tensor): Regression loss of each feature map pixel, - shape (num_anchor, 4) - assigned_gt_inds (Tensor): It indicates which gt the prior is - assigned to (0-based, -1: no assignment). shape (num_anchor), - labels_seq: The rank of labels. shape (num_gt) - - Returns: - shape: (num_gt), average loss of each gt in this level - """ - if len(reg_loss.shape) == 2: # iou loss has shape (num_prior, 4) - reg_loss = reg_loss.sum(dim=-1) # sum loss in tblr dims - if len(cls_loss.shape) == 2: - cls_loss = cls_loss.sum(dim=-1) # sum loss in class dims - loss = cls_loss + reg_loss - assert loss.size(0) == assigned_gt_inds.size(0) - # Default loss value is 1e6 for a layer where no anchor is positive - # to ensure it will not be chosen to back-propagate gradient - losses_ = loss.new_full(labels_seq.shape, 1e6) - for i, l in enumerate(labels_seq): - match = assigned_gt_inds == l - if match.any(): - losses_[i] = loss[match].mean() - return losses_, - - def reweight_loss_single(self, cls_loss, reg_loss, assigned_gt_inds, - labels, level, min_levels): - """Reweight loss values at each level. 
- - Reassign loss values at each level by masking those where the - pre-calculated loss is too large. Then return the reduced losses. - - Args: - cls_loss (Tensor): Element-wise classification loss. - Shape: (num_anchors, num_classes) - reg_loss (Tensor): Element-wise regression loss. - Shape: (num_anchors, 4) - assigned_gt_inds (Tensor): The gt indices that each anchor bbox - is assigned to. -1 denotes a negative anchor, otherwise it is the - gt index (0-based). Shape: (num_anchors, ), - labels (Tensor): Label assigned to anchors. Shape: (num_anchors, ). - level (int): The current level index in the pyramid - (0-4 for RetinaNet) - min_levels (Tensor): The best-matching level for each gt. - Shape: (num_gts, ), - - Returns: - tuple: - - cls_loss: Reduced corrected classification loss. Scalar. - - reg_loss: Reduced corrected regression loss. Scalar. - - pos_flags (Tensor): Corrected bool tensor indicating the - final positive anchors. Shape: (num_anchors, ). - """ - loc_weight = torch.ones_like(reg_loss) - cls_weight = torch.ones_like(cls_loss) - pos_flags = assigned_gt_inds >= 0 # positive pixel flag - pos_indices = torch.nonzero(pos_flags, as_tuple=False).flatten() - - if pos_flags.any(): # pos pixels exist - pos_assigned_gt_inds = assigned_gt_inds[pos_flags] - zeroing_indices = (min_levels[pos_assigned_gt_inds] != level) - neg_indices = pos_indices[zeroing_indices] - - if neg_indices.numel(): - pos_flags[neg_indices] = 0 - loc_weight[neg_indices] = 0 - # Only the weight corresponding to the label is - # zeroed out if not selected - zeroing_labels = labels[neg_indices] - assert (zeroing_labels >= 0).all() - cls_weight[neg_indices, zeroing_labels] = 0 - - # Weighted loss for both cls and reg loss - cls_loss = weight_reduce_loss(cls_loss, cls_weight, reduction='sum') - reg_loss = weight_reduce_loss(reg_loss, loc_weight, reduction='sum') - - return cls_loss, reg_loss, pos_flags diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ga_retina_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ga_retina_head.py deleted file mode 100644 index 6d9e874c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ga_retina_head.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import MaskedConv2d - -from ..builder import HEADS -from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead - - -@HEADS.register_module() -class GARetinaHead(GuidedAnchorHead): - """Guided-Anchor-based RetinaNet head.""" - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - init_cfg=None, - **kwargs): - if init_cfg is None: - init_cfg = dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=[ - dict( - type='Normal', - name='conv_loc', - std=0.01, - bias_prob=0.01), - dict( - type='Normal', - name='retina_cls', - std=0.01, - bias_prob=0.01) - ]) - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(GARetinaHead, self).__init__( - num_classes, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - - self.conv_loc = nn.Conv2d(self.feat_channels, 1, 1) - self.conv_shape = nn.Conv2d(self.feat_channels, self.num_anchors * 2, - 1) - self.feature_adaption_cls = FeatureAdaption( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.feature_adaption_reg = FeatureAdaption( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.retina_cls = MaskedConv2d( - self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 3, - padding=1) - self.retina_reg = MaskedConv2d( - self.feat_channels, self.num_base_priors * 4, 3, padding=1) - - def forward_single(self, x): - """Forward feature map of a single scale level.""" - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - - loc_pred = self.conv_loc(cls_feat) - shape_pred = self.conv_shape(reg_feat) - - cls_feat = self.feature_adaption_cls(cls_feat, shape_pred) - reg_feat = self.feature_adaption_reg(reg_feat, shape_pred) - - if not self.training: - mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr - else: - mask = None - cls_score = self.retina_cls(cls_feat, mask) - bbox_pred = self.retina_reg(reg_feat, mask) - return cls_score, bbox_pred, shape_pred, loc_pred diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ga_rpn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ga_rpn_head.py deleted file mode 100644 index 4123c8b3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ga_rpn_head.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv import ConfigDict -from mmcv.ops import nms - -from ..builder import HEADS -from .guided_anchor_head import GuidedAnchorHead - - -@HEADS.register_module() -class GARPNHead(GuidedAnchorHead): - """Guided-Anchor-based RPN head.""" - - def __init__(self, - in_channels, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='conv_loc', - std=0.01, - bias_prob=0.01)), - **kwargs): - super(GARPNHead, self).__init__( - 1, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.rpn_conv = nn.Conv2d( - self.in_channels, self.feat_channels, 3, padding=1) - super(GARPNHead, self)._init_layers() - - def forward_single(self, x): - """Forward feature of a single scale level.""" - - x = self.rpn_conv(x) - x = F.relu(x, inplace=True) - (cls_score, bbox_pred, shape_pred, - loc_pred) = super(GARPNHead, self).forward_single(x) - return cls_score, bbox_pred, shape_pred, loc_pred - - def loss(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - img_metas, - gt_bboxes_ignore=None): - losses = super(GARPNHead, self).loss( - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - None, - img_metas, - gt_bboxes_ignore=gt_bboxes_ignore) - return dict( - loss_rpn_cls=losses['loss_cls'], - loss_rpn_bbox=losses['loss_bbox'], - loss_anchor_shape=losses['loss_shape'], - loss_anchor_loc=losses['loss_loc']) - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - mlvl_masks, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - - cfg = copy.deepcopy(cfg) - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You ' \ - f'set max_num and max_per_img at the same time, ' \ - f'but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - 'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set ' \ - f'iou_threshold in nms and ' \ - f'nms_thr at the same time, but get ' \ - f'{cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the ' \ - f'nms_thr which will be deprecated.' - - assert cfg.nms.get('type', 'nms') == 'nms', 'GARPNHead only support ' \ - 'naive nms.' - - mlvl_proposals = [] - for idx in range(len(cls_scores)): - rpn_cls_score = cls_scores[idx] - rpn_bbox_pred = bbox_preds[idx] - anchors = mlvl_anchors[idx] - mask = mlvl_masks[idx] - assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:] - # if no location is kept, end. 
- if mask.sum() == 0: - continue - rpn_cls_score = rpn_cls_score.permute(1, 2, 0) - if self.use_sigmoid_cls: - rpn_cls_score = rpn_cls_score.reshape(-1) - scores = rpn_cls_score.sigmoid() - else: - rpn_cls_score = rpn_cls_score.reshape(-1, 2) - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - scores = rpn_cls_score.softmax(dim=1)[:, :-1] - # filter scores, bbox_pred w.r.t. mask. - # anchors are filtered in get_anchors() beforehand. - scores = scores[mask] - rpn_bbox_pred = rpn_bbox_pred.permute(1, 2, 0).reshape(-1, - 4)[mask, :] - if scores.dim() == 0: - rpn_bbox_pred = rpn_bbox_pred.unsqueeze(0) - anchors = anchors.unsqueeze(0) - scores = scores.unsqueeze(0) - # filter anchors, bbox_pred, scores w.r.t. scores - if cfg.nms_pre > 0 and scores.shape[0] > cfg.nms_pre: - _, topk_inds = scores.topk(cfg.nms_pre) - rpn_bbox_pred = rpn_bbox_pred[topk_inds, :] - anchors = anchors[topk_inds, :] - scores = scores[topk_inds] - # get proposals w.r.t. anchors and rpn_bbox_pred - proposals = self.bbox_coder.decode( - anchors, rpn_bbox_pred, max_shape=img_shape) - # filter out too small bboxes - if cfg.min_bbox_size >= 0: - w = proposals[:, 2] - proposals[:, 0] - h = proposals[:, 3] - proposals[:, 1] - valid_mask = (w > cfg.min_bbox_size) & (h > cfg.min_bbox_size) - if not valid_mask.all(): - proposals = proposals[valid_mask] - scores = scores[valid_mask] - - # NMS in current level - proposals, _ = nms(proposals, scores, cfg.nms.iou_threshold) - proposals = proposals[:cfg.nms_post, :] - mlvl_proposals.append(proposals) - proposals = torch.cat(mlvl_proposals, 0) - if cfg.get('nms_across_levels', False): - # NMS across multi levels - proposals, _ = nms(proposals[:, :4], proposals[:, -1], - cfg.nms.iou_threshold) - proposals = proposals[:cfg.max_per_img, :] - else: - scores = proposals[:, 4] - num = min(cfg.max_per_img, proposals.shape[0]) - _, topk_inds = scores.topk(num) - proposals = proposals[topk_inds, :] - return proposals diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/gfl_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/gfl_head.py deleted file mode 100644 index 12eb89db..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/gfl_head.py +++ /dev/null @@ -1,648 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, Scale -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, bbox_overlaps, build_assigner, - build_sampler, images_to_levels, multi_apply, - reduce_mean, unmap) -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -class Integral(nn.Module): - """A fixed layer for calculating integral result from distribution. - - This layer calculates the target location by :math: `sum{P(y_i) * y_i}`, - P(y_i) denotes the softmax vector that represents the discrete distribution - y_i denotes the discrete set, usually {0, 1, 2, ..., reg_max} - - Args: - reg_max (int): The maximal value of the discrete set. Default: 16. You - may want to reset it according to your new dataset or related - settings. 
- """ - - def __init__(self, reg_max=16): - super(Integral, self).__init__() - self.reg_max = reg_max - self.register_buffer('project', - torch.linspace(0, self.reg_max, self.reg_max + 1)) - - def forward(self, x): - """Forward feature from the regression head to get integral result of - bounding box location. - - Args: - x (Tensor): Features of the regression head, shape (N, 4*(n+1)), - n is self.reg_max. - - Returns: - x (Tensor): Integral result of box locations, i.e., distance - offsets from the box center in four directions, shape (N, 4). - """ - x = F.softmax(x.reshape(-1, self.reg_max + 1), dim=1) - x = F.linear(x, self.project.type_as(x)).reshape(-1, 4) - return x - - -@HEADS.register_module() -class GFLHead(AnchorHead): - """Generalized Focal Loss: Learning Qualified and Distributed Bounding - Boxes for Dense Object Detection. - - GFL head structure is similar with ATSS, however GFL uses - 1) joint representation for classification and localization quality, and - 2) flexible General distribution for bounding box locations, - which are supervised by - Quality Focal Loss (QFL) and Distribution Focal Loss (DFL), respectively - - https://arxiv.org/abs/2006.04388 - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of conv layers in cls and reg tower. - Default: 4. - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='GN', num_groups=32, requires_grad=True). - loss_qfl (dict): Config of Quality Focal Loss (QFL). - bbox_coder (dict): Config of bbox coder. Defaults - 'DistancePointBBoxCoder'. - reg_max (int): Max value of integral set :math: `{0, ..., reg_max}` - in QFL setting. Default: 16. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Example: - >>> self = GFLHead(11, 7) - >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]] - >>> cls_quality_score, bbox_pred = self.forward(feats) - >>> assert len(cls_quality_score) == len(self.scales) - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25), - bbox_coder=dict(type='DistancePointBBoxCoder'), - reg_max=16, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='gfl_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.reg_max = reg_max - super(GFLHead, self).__init__( - num_classes, - in_channels, - bbox_coder=bbox_coder, - init_cfg=init_cfg, - **kwargs) - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.integral = Integral(self.reg_max) - self.loss_dfl = build_loss(loss_dfl) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - assert self.num_anchors == 1, 'anchor free version' - self.gfl_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.gfl_reg = nn.Conv2d( - self.feat_channels, 4 * (self.reg_max + 1), 3, padding=1) - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.prior_generator.strides]) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification and quality (IoU) - joint scores for all scale levels, each is a 4D-tensor, - the channel number is num_classes. - bbox_preds (list[Tensor]): Box distribution logits for all - scale levels, each is a 4D-tensor, the channel number is - 4*(n+1), n is max value of integral set. - """ - return multi_apply(self.forward_single, feats, self.scales) - - def forward_single(self, x, scale): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - - Returns: - tuple: - cls_score (Tensor): Cls and quality joint scores for a single - scale level the channel number is num_classes. - bbox_pred (Tensor): Box distribution logits for a single scale - level, the channel number is 4*(n+1), n is max value of - integral set. 
- """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.gfl_cls(cls_feat) - bbox_pred = scale(self.gfl_reg(reg_feat)).float() - return cls_score, bbox_pred - - def anchor_center(self, anchors): - """Get anchor centers from anchors. - - Args: - anchors (Tensor): Anchor list with shape (N, 4), "xyxy" format. - - Returns: - Tensor: Anchor centers with shape (N, 2), "xy" format. - """ - anchors_cx = (anchors[..., 2] + anchors[..., 0]) / 2 - anchors_cy = (anchors[..., 3] + anchors[..., 1]) / 2 - return torch.stack([anchors_cx, anchors_cy], dim=-1) - - def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights, - bbox_targets, stride, num_total_samples): - """Compute loss of a single scale level. - - Args: - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - cls_score (Tensor): Cls and quality joint scores for each scale - level has shape (N, num_classes, H, W). - bbox_pred (Tensor): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - stride (tuple): Stride in this scale level. - num_total_samples (int): Number of positive samples that is - reduced over all GPUs. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert stride[0] == stride[1], 'h stride is not equal to w stride!' - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, 4 * (self.reg_max + 1)) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - score = label_weights.new_zeros(labels.shape) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0] - - weight_targets = cls_score.detach().sigmoid() - weight_targets = weight_targets.max(dim=1)[0][pos_inds] - pos_bbox_pred_corners = self.integral(pos_bbox_pred) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchor_centers, pos_bbox_pred_corners) - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - score[pos_inds] = bbox_overlaps( - pos_decode_bbox_pred.detach(), - pos_decode_bbox_targets, - is_aligned=True) - pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1) - target_corners = self.bbox_coder.encode(pos_anchor_centers, - pos_decode_bbox_targets, - self.reg_max).reshape(-1) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=weight_targets, - avg_factor=1.0) - - # dfl loss - loss_dfl = self.loss_dfl( - pred_corners, - target_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - else: - loss_bbox = bbox_pred.sum() * 0 - loss_dfl = bbox_pred.sum() * 0 - weight_targets = bbox_pred.new_tensor(0) - - # cls (qfl) loss - loss_cls = 
self.loss_cls( - cls_score, (labels, score), - weight=label_weights, - avg_factor=num_total_samples) - - return loss_cls, loss_bbox, loss_dfl, weight_targets.sum() - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Cls and quality scores for each scale - level has shape (N, num_classes, H, W). - bbox_preds (list[Tensor]): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, losses_dfl,\ - avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - self.prior_generator.strides, - num_total_samples=num_total_samples) - - avg_factor = sum(avg_factor) - avg_factor = reduce_mean(avg_factor).clamp_(min=1).item() - losses_bbox = list(map(lambda x: x / avg_factor, losses_bbox)) - losses_dfl = list(map(lambda x: x / avg_factor, losses_dfl)) - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_dfl=losses_dfl) - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. GFL head does not need this value. - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. 
- with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - cfg = self.test_cfg if cfg is None else cfg - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for level_idx, (cls_score, bbox_pred, stride, priors) in enumerate( - zip(cls_score_list, bbox_pred_list, - self.prior_generator.strides, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - assert stride[0] == stride[1] - - bbox_pred = bbox_pred.permute(1, 2, 0) - bbox_pred = self.integral(bbox_pred) * stride[0] - - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, _, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - bboxes = self.bbox_coder.decode( - self.anchor_center(priors), bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process( - mlvl_scores, - mlvl_labels, - mlvl_bboxes, - img_meta['scale_factor'], - cfg, - rescale=rescale, - with_nms=with_nms) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Get targets for GFL head. - - This method is almost the same as `AnchorHead.get_targets()`. Besides - returning the targets as the parent method does, it also returns the - anchors as the first element of the returned tuple. 
- """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, bbox_weights_list, num_total_pos, - num_total_neg) - - def _get_target_single(self, - flat_anchors, - valid_flags, - num_level_anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression, classification targets for anchors in a single - image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors, 4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - num_level_anchors Tensor): Number of anchors of each scale level. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - anchors (Tensor): All anchors in the image with shape (N, 4). - labels (Tensor): Labels of all anchors in the image with shape - (N,). - label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). - bbox_weights (Tensor): BBox weights of all anchors in the - image with shape (N, 4). - pos_inds (Tensor): Indices of positive anchor with shape - (num_pos,). - neg_inds (Tensor): Indices of negative anchor with shape - (num_neg,). 
- """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - num_level_anchors_inside = self.get_num_level_anchors_inside( - num_level_anchors, inside_flags) - assign_result = self.assigner.assign(anchors, num_level_anchors_inside, - gt_bboxes, gt_bboxes_ignore, - gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (anchors, labels, label_weights, bbox_targets, bbox_weights, - pos_inds, neg_inds) - - def get_num_level_anchors_inside(self, num_level_anchors, inside_flags): - split_inside_flags = torch.split(inside_flags, num_level_anchors) - num_level_anchors_inside = [ - int(flags.sum()) for flags in split_inside_flags - ] - return num_level_anchors_inside diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/guided_anchor_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/guided_anchor_head.py deleted file mode 100644 index 53e8cd8a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/guided_anchor_head.py +++ /dev/null @@ -1,868 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.ops import DeformConv2d, MaskedConv2d -from mmcv.runner import BaseModule, force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, calc_region, - images_to_levels, multi_apply, multiclass_nms, unmap) -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -class FeatureAdaption(BaseModule): - """Feature Adaption Module. - - Feature Adaption Module is implemented based on DCN v1. - It uses anchor shape prediction rather than feature map to - predict offsets of deform conv layer. - - Args: - in_channels (int): Number of channels in the input feature map. - out_channels (int): Number of channels in the output feature map. 
- kernel_size (int): Deformable conv kernel size. - deform_groups (int): Deformable conv group size. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - deform_groups=4, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.1, - override=dict( - type='Normal', name='conv_adaption', std=0.01))): - super(FeatureAdaption, self).__init__(init_cfg) - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 2, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x, shape): - offset = self.conv_offset(shape.detach()) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@HEADS.register_module() -class GuidedAnchorHead(AnchorHead): - """Guided-Anchor-based head (GA-RPN, GA-RetinaNet, etc.). - - This GuidedAnchorHead will predict high-quality feature guided - anchors and locations where anchors will be kept in inference. - There are mainly 3 categories of bounding-boxes. - - - Sampled 9 pairs for target assignment. (approxes) - - The square boxes where the predicted anchors are based on. (squares) - - Guided anchors. - - Please refer to https://arxiv.org/abs/1901.03278 for more details. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. - approx_anchor_generator (dict): Config dict for approx generator - square_anchor_generator (dict): Config dict for square generator - anchor_coder (dict): Config dict for anchor coder - bbox_coder (dict): Config dict for bbox coder - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - deform_groups: (int): Group number of DCN in - FeatureAdaption module. - loc_filter_thr (float): Threshold to filter out unconcerned regions. - loss_loc (dict): Config of location loss. - loss_shape (dict): Config of anchor shape loss. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of bbox regression loss. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__( - self, - num_classes, - in_channels, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=8, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[8], - strides=[4, 8, 16, 32, 64]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0] - ), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0] - ), - reg_decoded_bbox=False, - deform_groups=4, - loc_filter_thr=0.01, - train_cfg=None, - test_cfg=None, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0), - init_cfg=dict(type='Normal', layer='Conv2d', std=0.01, - override=dict(type='Normal', - name='conv_loc', - std=0.01, - bias_prob=0.01))): # yapf: disable - super(AnchorHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.deform_groups = deform_groups - self.loc_filter_thr = loc_filter_thr - - # build approx_anchor_generator and square_anchor_generator - assert (approx_anchor_generator['octave_base_scale'] == - square_anchor_generator['scales'][0]) - assert (approx_anchor_generator['strides'] == - square_anchor_generator['strides']) - self.approx_anchor_generator = build_prior_generator( - approx_anchor_generator) - self.square_anchor_generator = build_prior_generator( - square_anchor_generator) - self.approxs_per_octave = self.approx_anchor_generator \ - .num_base_priors[0] - - self.reg_decoded_bbox = reg_decoded_bbox - - # one anchor per location - self.num_base_priors = self.square_anchor_generator.num_base_priors[0] - - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - self.loc_focal_loss = loss_loc['type'] in ['FocalLoss'] - self.sampling = loss_cls['type'] not in ['FocalLoss'] - self.ga_sampling = train_cfg is not None and hasattr( - train_cfg, 'ga_sampler') - if self.use_sigmoid_cls: - self.cls_out_channels = self.num_classes - else: - self.cls_out_channels = self.num_classes + 1 - - # build bbox_coder - self.anchor_coder = build_bbox_coder(anchor_coder) - self.bbox_coder = build_bbox_coder(bbox_coder) - - # build losses - self.loss_loc = build_loss(loss_loc) - self.loss_shape = build_loss(loss_shape) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.ga_assigner = build_assigner(self.train_cfg.ga_assigner) - if self.ga_sampling: - ga_sampler_cfg = self.train_cfg.ga_sampler - else: - ga_sampler_cfg = dict(type='PseudoSampler') - self.ga_sampler = build_sampler(ga_sampler_cfg, context=self) - - self.fp16_enabled = False - - self._init_layers() - - @property - def num_anchors(self): - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 
'please use "num_base_priors" instead') - return self.square_anchor_generator.num_base_priors[0] - - def _init_layers(self): - self.relu = nn.ReLU(inplace=True) - self.conv_loc = nn.Conv2d(self.in_channels, 1, 1) - self.conv_shape = nn.Conv2d(self.in_channels, self.num_base_priors * 2, - 1) - self.feature_adaption = FeatureAdaption( - self.in_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.conv_cls = MaskedConv2d( - self.feat_channels, self.num_base_priors * self.cls_out_channels, - 1) - self.conv_reg = MaskedConv2d(self.feat_channels, - self.num_base_priors * 4, 1) - - def forward_single(self, x): - loc_pred = self.conv_loc(x) - shape_pred = self.conv_shape(x) - x = self.feature_adaption(x, shape_pred) - # masked conv is only used during inference for speed-up - if not self.training: - mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr - else: - mask = None - cls_score = self.conv_cls(x, mask) - bbox_pred = self.conv_reg(x, mask) - return cls_score, bbox_pred, shape_pred, loc_pred - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def get_sampled_approxs(self, featmap_sizes, img_metas, device='cuda'): - """Get sampled approxs and inside flags according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): device for returned tensors - - Returns: - tuple: approxes of each image, inside flags of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # approxes for one time - multi_level_approxs = self.approx_anchor_generator.grid_priors( - featmap_sizes, device=device) - approxs_list = [multi_level_approxs for _ in range(num_imgs)] - - # for each image, we compute inside flags of multi level approxes - inside_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = [] - multi_level_approxs = approxs_list[img_id] - - # obtain valid flags for each approx first - multi_level_approx_flags = self.approx_anchor_generator \ - .valid_flags(featmap_sizes, - img_meta['pad_shape'], - device=device) - - for i, flags in enumerate(multi_level_approx_flags): - approxs = multi_level_approxs[i] - inside_flags_list = [] - for i in range(self.approxs_per_octave): - split_valid_flags = flags[i::self.approxs_per_octave] - split_approxs = approxs[i::self.approxs_per_octave, :] - inside_flags = anchor_inside_flags( - split_approxs, split_valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - inside_flags_list.append(inside_flags) - # inside_flag for a position is true if any anchor in this - # position is true - inside_flags = ( - torch.stack(inside_flags_list, 0).sum(dim=0) > 0) - multi_level_flags.append(inside_flags) - inside_flag_list.append(multi_level_flags) - return approxs_list, inside_flag_list - - def get_anchors(self, - featmap_sizes, - shape_preds, - loc_preds, - img_metas, - use_loc_filter=False, - device='cuda'): - """Get squares according to feature map sizes and guided anchors. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - shape_preds (list[tensor]): Multi-level shape predictions. - loc_preds (list[tensor]): Multi-level location predictions. - img_metas (list[dict]): Image meta info. - use_loc_filter (bool): Use loc filter or not. 
- device (torch.device | str): device for returned tensors - - Returns: - tuple: square approxs of each image, guided anchors of each image, - loc masks of each image - """ - num_imgs = len(img_metas) - num_levels = len(featmap_sizes) - - # since feature map sizes of all images are the same, we only compute - # squares for one time - multi_level_squares = self.square_anchor_generator.grid_priors( - featmap_sizes, device=device) - squares_list = [multi_level_squares for _ in range(num_imgs)] - - # for each image, we compute multi level guided anchors - guided_anchors_list = [] - loc_mask_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_guided_anchors = [] - multi_level_loc_mask = [] - for i in range(num_levels): - squares = squares_list[img_id][i] - shape_pred = shape_preds[i][img_id] - loc_pred = loc_preds[i][img_id] - guided_anchors, loc_mask = self._get_guided_anchors_single( - squares, - shape_pred, - loc_pred, - use_loc_filter=use_loc_filter) - multi_level_guided_anchors.append(guided_anchors) - multi_level_loc_mask.append(loc_mask) - guided_anchors_list.append(multi_level_guided_anchors) - loc_mask_list.append(multi_level_loc_mask) - return squares_list, guided_anchors_list, loc_mask_list - - def _get_guided_anchors_single(self, - squares, - shape_pred, - loc_pred, - use_loc_filter=False): - """Get guided anchors and loc masks for a single level. - - Args: - square (tensor): Squares of a single level. - shape_pred (tensor): Shape predictions of a single level. - loc_pred (tensor): Loc predictions of a single level. - use_loc_filter (list[tensor]): Use loc filter or not. - - Returns: - tuple: guided anchors, location masks - """ - # calculate location filtering mask - loc_pred = loc_pred.sigmoid().detach() - if use_loc_filter: - loc_mask = loc_pred >= self.loc_filter_thr - else: - loc_mask = loc_pred >= 0.0 - mask = loc_mask.permute(1, 2, 0).expand(-1, -1, self.num_base_priors) - mask = mask.contiguous().view(-1) - # calculate guided anchors - squares = squares[mask] - anchor_deltas = shape_pred.permute(1, 2, 0).contiguous().view( - -1, 2).detach()[mask] - bbox_deltas = anchor_deltas.new_full(squares.size(), 0) - bbox_deltas[:, 2:] = anchor_deltas - guided_anchors = self.anchor_coder.decode( - squares, bbox_deltas, wh_ratio_clip=1e-6) - return guided_anchors, mask - - def ga_loc_targets(self, gt_bboxes_list, featmap_sizes): - """Compute location targets for guided anchoring. - - Each feature map is divided into positive, negative and ignore regions. - - positive regions: target 1, weight 1 - - ignore regions: target 0, weight 0 - - negative regions: target 0, weight 0.1 - - Args: - gt_bboxes_list (list[Tensor]): Gt bboxes of each image. - featmap_sizes (list[tuple]): Multi level sizes of each feature - maps. - - Returns: - tuple - """ - anchor_scale = self.approx_anchor_generator.octave_base_scale - anchor_strides = self.approx_anchor_generator.strides - # Currently only supports same stride in x and y direction. 
- for stride in anchor_strides: - assert (stride[0] == stride[1]) - anchor_strides = [stride[0] for stride in anchor_strides] - - center_ratio = self.train_cfg.center_ratio - ignore_ratio = self.train_cfg.ignore_ratio - img_per_gpu = len(gt_bboxes_list) - num_lvls = len(featmap_sizes) - r1 = (1 - center_ratio) / 2 - r2 = (1 - ignore_ratio) / 2 - all_loc_targets = [] - all_loc_weights = [] - all_ignore_map = [] - for lvl_id in range(num_lvls): - h, w = featmap_sizes[lvl_id] - loc_targets = torch.zeros( - img_per_gpu, - 1, - h, - w, - device=gt_bboxes_list[0].device, - dtype=torch.float32) - loc_weights = torch.full_like(loc_targets, -1) - ignore_map = torch.zeros_like(loc_targets) - all_loc_targets.append(loc_targets) - all_loc_weights.append(loc_weights) - all_ignore_map.append(ignore_map) - for img_id in range(img_per_gpu): - gt_bboxes = gt_bboxes_list[img_id] - scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - min_anchor_size = scale.new_full( - (1, ), float(anchor_scale * anchor_strides[0])) - # assign gt bboxes to different feature levels w.r.t. their scales - target_lvls = torch.floor( - torch.log2(scale) - torch.log2(min_anchor_size) + 0.5) - target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long() - for gt_id in range(gt_bboxes.size(0)): - lvl = target_lvls[gt_id].item() - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[lvl] - # calculate ignore regions - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[lvl]) - # calculate positive (center) regions - ctr_x1, ctr_y1, ctr_x2, ctr_y2 = calc_region( - gt_, r1, featmap_sizes[lvl]) - all_loc_targets[lvl][img_id, 0, ctr_y1:ctr_y2 + 1, - ctr_x1:ctr_x2 + 1] = 1 - all_loc_weights[lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 0 - all_loc_weights[lvl][img_id, 0, ctr_y1:ctr_y2 + 1, - ctr_x1:ctr_x2 + 1] = 1 - # calculate ignore map on nearby low level feature - if lvl > 0: - d_lvl = lvl - 1 - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[d_lvl] - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[d_lvl]) - all_ignore_map[d_lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 1 - # calculate ignore map on nearby high level feature - if lvl < num_lvls - 1: - u_lvl = lvl + 1 - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[u_lvl] - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[u_lvl]) - all_ignore_map[u_lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 1 - for lvl_id in range(num_lvls): - # ignore negative regions w.r.t. ignore map - all_loc_weights[lvl_id][(all_loc_weights[lvl_id] < 0) - & (all_ignore_map[lvl_id] > 0)] = 0 - # set negative regions with weight 0.1 - all_loc_weights[lvl_id][all_loc_weights[lvl_id] < 0] = 0.1 - # loc average factor to balance loss - loc_avg_factor = sum( - [t.size(0) * t.size(-1) * t.size(-2) - for t in all_loc_targets]) / 200 - return all_loc_targets, all_loc_weights, loc_avg_factor - - def _ga_shape_target_single(self, - flat_approxs, - inside_flags, - flat_squares, - gt_bboxes, - gt_bboxes_ignore, - img_meta, - unmap_outputs=True): - """Compute guided anchoring targets. - - This function returns sampled anchors and gt bboxes directly - rather than calculates regression targets. 
- - Args: - flat_approxs (Tensor): flat approxs of a single image, - shape (n, 4) - inside_flags (Tensor): inside flags of a single image, - shape (n, ). - flat_squares (Tensor): flat squares of a single image, - shape (approxs_per_octave * n, 4) - gt_bboxes (Tensor): Ground truth bboxes of a single image. - img_meta (dict): Meta info of a single image. - approxs_per_octave (int): number of approxs per octave - cfg (dict): RPN train configs. - unmap_outputs (bool): unmap outputs or not. - - Returns: - tuple - """ - if not inside_flags.any(): - return (None, ) * 5 - # assign gt and sample anchors - expand_inside_flags = inside_flags[:, None].expand( - -1, self.approxs_per_octave).reshape(-1) - approxs = flat_approxs[expand_inside_flags, :] - squares = flat_squares[inside_flags, :] - - assign_result = self.ga_assigner.assign(approxs, squares, - self.approxs_per_octave, - gt_bboxes, gt_bboxes_ignore) - sampling_result = self.ga_sampler.sample(assign_result, squares, - gt_bboxes) - - bbox_anchors = torch.zeros_like(squares) - bbox_gts = torch.zeros_like(squares) - bbox_weights = torch.zeros_like(squares) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - bbox_anchors[pos_inds, :] = sampling_result.pos_bboxes - bbox_gts[pos_inds, :] = sampling_result.pos_gt_bboxes - bbox_weights[pos_inds, :] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_squares.size(0) - bbox_anchors = unmap(bbox_anchors, num_total_anchors, inside_flags) - bbox_gts = unmap(bbox_gts, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (bbox_anchors, bbox_gts, bbox_weights, pos_inds, neg_inds) - - def ga_shape_targets(self, - approx_list, - inside_flag_list, - square_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - unmap_outputs=True): - """Compute guided anchoring targets. - - Args: - approx_list (list[list]): Multi level approxs of each image. - inside_flag_list (list[list]): Multi level inside flags of each - image. - square_list (list[list]): Multi level squares of each image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes. - unmap_outputs (bool): unmap outputs or not. 
- - Returns: - tuple - """ - num_imgs = len(img_metas) - assert len(approx_list) == len(inside_flag_list) == len( - square_list) == num_imgs - # anchor number of multi levels - num_level_squares = [squares.size(0) for squares in square_list[0]] - # concat all level anchors and flags to a single tensor - inside_flag_flat_list = [] - approx_flat_list = [] - square_flat_list = [] - for i in range(num_imgs): - assert len(square_list[i]) == len(inside_flag_list[i]) - inside_flag_flat_list.append(torch.cat(inside_flag_list[i])) - approx_flat_list.append(torch.cat(approx_list[i])) - square_flat_list.append(torch.cat(square_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - (all_bbox_anchors, all_bbox_gts, all_bbox_weights, pos_inds_list, - neg_inds_list) = multi_apply( - self._ga_shape_target_single, - approx_flat_list, - inside_flag_flat_list, - square_flat_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - img_metas, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([bbox_anchors is None for bbox_anchors in all_bbox_anchors]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - bbox_anchors_list = images_to_levels(all_bbox_anchors, - num_level_squares) - bbox_gts_list = images_to_levels(all_bbox_gts, num_level_squares) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_squares) - return (bbox_anchors_list, bbox_gts_list, bbox_weights_list, - num_total_pos, num_total_neg) - - def loss_shape_single(self, shape_pred, bbox_anchors, bbox_gts, - anchor_weights, anchor_total_num): - shape_pred = shape_pred.permute(0, 2, 3, 1).contiguous().view(-1, 2) - bbox_anchors = bbox_anchors.contiguous().view(-1, 4) - bbox_gts = bbox_gts.contiguous().view(-1, 4) - anchor_weights = anchor_weights.contiguous().view(-1, 4) - bbox_deltas = bbox_anchors.new_full(bbox_anchors.size(), 0) - bbox_deltas[:, 2:] += shape_pred - # filter out negative samples to speed-up weighted_bounded_iou_loss - inds = torch.nonzero( - anchor_weights[:, 0] > 0, as_tuple=False).squeeze(1) - bbox_deltas_ = bbox_deltas[inds] - bbox_anchors_ = bbox_anchors[inds] - bbox_gts_ = bbox_gts[inds] - anchor_weights_ = anchor_weights[inds] - pred_anchors_ = self.anchor_coder.decode( - bbox_anchors_, bbox_deltas_, wh_ratio_clip=1e-6) - loss_shape = self.loss_shape( - pred_anchors_, - bbox_gts_, - anchor_weights_, - avg_factor=anchor_total_num) - return loss_shape - - def loss_loc_single(self, loc_pred, loc_target, loc_weight, - loc_avg_factor): - loss_loc = self.loss_loc( - loc_pred.reshape(-1, 1), - loc_target.reshape(-1).long(), - loc_weight.reshape(-1), - avg_factor=loc_avg_factor) - return loss_loc - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds')) - def loss(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.approx_anchor_generator.num_levels - - device = cls_scores[0].device - - # get loc targets - loc_targets, loc_weights, loc_avg_factor = self.ga_loc_targets( - gt_bboxes, featmap_sizes) - - # get sampled approxes - approxs_list, inside_flag_list = self.get_sampled_approxs( - featmap_sizes, img_metas, device=device) - # get 
squares and guided anchors - squares_list, guided_anchors_list, _ = self.get_anchors( - featmap_sizes, shape_preds, loc_preds, img_metas, device=device) - - # get shape targets - shape_targets = self.ga_shape_targets(approxs_list, inside_flag_list, - squares_list, gt_bboxes, - img_metas) - if shape_targets is None: - return None - (bbox_anchors_list, bbox_gts_list, anchor_weights_list, anchor_fg_num, - anchor_bg_num) = shape_targets - anchor_total_num = ( - anchor_fg_num if not self.ga_sampling else anchor_fg_num + - anchor_bg_num) - - # get anchor targets - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - guided_anchors_list, - inside_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [ - anchors.size(0) for anchors in guided_anchors_list[0] - ] - # concat all level anchors to a single tensor - concat_anchor_list = [] - for i in range(len(guided_anchors_list)): - concat_anchor_list.append(torch.cat(guided_anchors_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - # get classification and bbox regression losses - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - # get anchor location loss - losses_loc = [] - for i in range(len(loc_preds)): - loss_loc = self.loss_loc_single( - loc_preds[i], - loc_targets[i], - loc_weights[i], - loc_avg_factor=loc_avg_factor) - losses_loc.append(loss_loc) - - # get anchor shape loss - losses_shape = [] - for i in range(len(shape_preds)): - loss_shape = self.loss_shape_single( - shape_preds[i], - bbox_anchors_list[i], - bbox_gts_list[i], - anchor_weights_list[i], - anchor_total_num=anchor_total_num) - losses_shape.append(loss_shape) - - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_shape=losses_shape, - loss_loc=losses_loc) - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - img_metas, - cfg=None, - rescale=False): - assert len(cls_scores) == len(bbox_preds) == len(shape_preds) == len( - loc_preds) - num_levels = len(cls_scores) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - device = cls_scores[0].device - # get guided anchors - _, guided_anchors, loc_masks = self.get_anchors( - featmap_sizes, - shape_preds, - loc_preds, - img_metas, - use_loc_filter=not self.training, - device=device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - guided_anchor_list = [ - guided_anchors[img_id][i].detach() for i in range(num_levels) - ] - loc_mask_list = [ - loc_masks[img_id][i].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score_list, 
bbox_pred_list, - guided_anchor_list, - loc_mask_list, img_shape, - scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - mlvl_masks, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_scores = [] - for cls_score, bbox_pred, anchors, mask in zip(cls_scores, bbox_preds, - mlvl_anchors, - mlvl_masks): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - # if no location is kept, end. - if mask.sum() == 0: - continue - # reshape scores and bbox_pred - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - # filter scores, bbox_pred w.r.t. mask. - # anchors are filtered in get_anchors() beforehand. - scores = scores[mask, :] - bbox_pred = bbox_pred[mask, :] - if scores.dim() == 0: - anchors = anchors.unsqueeze(0) - scores = scores.unsqueeze(0) - bbox_pred = bbox_pred.unsqueeze(0) - # filter anchors, bbox_pred, scores w.r.t. scores - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - # multi class NMS - det_bboxes, det_labels = multiclass_nms(mlvl_bboxes, mlvl_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - return det_bboxes, det_labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/lad_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/lad_head.py deleted file mode 100644 index 85273bcb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/lad_head.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import bbox_overlaps, multi_apply -from ..builder import HEADS -from .paa_head import PAAHead, levels_to_images - - -@HEADS.register_module() -class LADHead(PAAHead): - """Label Assignment Head from the paper: `Improving Object Detection by - Label Assignment Distillation `_""" - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds')) - def get_label_assignment(self, - cls_scores, - bbox_preds, - iou_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Get label assignment (from teacher). - - Args: - cls_scores (list[Tensor]): Box scores for each scale level. 
- Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - iou_preds (list[Tensor]): iou_preds for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when are computing the loss. - - Returns: - tuple: Returns a tuple containing label assignment variables. - - - labels (Tensor): Labels of all anchors, each with - shape (num_anchors,). - - labels_weight (Tensor): Label weights of all anchor. - each with shape (num_anchors,). - - bboxes_target (Tensor): BBox targets of all anchors. - each with shape (num_anchors, 4). - - bboxes_weight (Tensor): BBox weights of all anchors. - each with shape (num_anchors, 4). - - pos_inds_flatten (Tensor): Contains all index of positive - sample in all anchor. - - pos_anchors (Tensor): Positive anchors. - - num_pos (int): Number of positive anchors. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - ) - (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds, - pos_gt_index) = cls_reg_targets - cls_scores = levels_to_images(cls_scores) - cls_scores = [ - item.reshape(-1, self.cls_out_channels) for item in cls_scores - ] - bbox_preds = levels_to_images(bbox_preds) - bbox_preds = [item.reshape(-1, 4) for item in bbox_preds] - pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list, - cls_scores, bbox_preds, labels, - labels_weight, bboxes_target, - bboxes_weight, pos_inds) - - with torch.no_grad(): - reassign_labels, reassign_label_weight, \ - reassign_bbox_weights, num_pos = multi_apply( - self.paa_reassign, - pos_losses_list, - labels, - labels_weight, - bboxes_weight, - pos_inds, - pos_gt_index, - anchor_list) - num_pos = sum(num_pos) - # convert all tensor list to a flatten tensor - labels = torch.cat(reassign_labels, 0).view(-1) - flatten_anchors = torch.cat( - [torch.cat(item, 0) for item in anchor_list]) - labels_weight = torch.cat(reassign_label_weight, 0).view(-1) - bboxes_target = torch.cat(bboxes_target, - 0).view(-1, bboxes_target[0].size(-1)) - - pos_inds_flatten = ((labels >= 0) - & - (labels < self.num_classes)).nonzero().reshape(-1) - - if num_pos: - pos_anchors = flatten_anchors[pos_inds_flatten] - else: - pos_anchors = None - - label_assignment_results = (labels, labels_weight, bboxes_target, - bboxes_weight, pos_inds_flatten, - pos_anchors, num_pos) - return label_assignment_results - - def forward_train(self, - x, - label_assignment_results, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - **kwargs): - """Forward train with the available label assignment (student receives - from teacher). - - Args: - x (list[Tensor]): Features from FPN. 
- label_assignment_results (tuple): As the outputs defined in the - function `self.get_label_assignment`. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - - Returns: - losses: (dict[str, Tensor]): A dictionary of loss components. - """ - outs = self(x) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss( - *loss_inputs, - gt_bboxes_ignore=gt_bboxes_ignore, - label_assignment_results=label_assignment_results) - return losses - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds')) - def loss(self, - cls_scores, - bbox_preds, - iou_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None, - label_assignment_results=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - iou_preds (list[Tensor]): iou_preds for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when are computing the loss. - label_assignment_results (tuple): As the outputs defined in the - function `self.get_label_assignment`. - - Returns: - dict[str, Tensor]: A dictionary of loss gmm_assignment. 
- """ - - (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds_flatten, - pos_anchors, num_pos) = label_assignment_results - - cls_scores = levels_to_images(cls_scores) - cls_scores = [ - item.reshape(-1, self.cls_out_channels) for item in cls_scores - ] - bbox_preds = levels_to_images(bbox_preds) - bbox_preds = [item.reshape(-1, 4) for item in bbox_preds] - iou_preds = levels_to_images(iou_preds) - iou_preds = [item.reshape(-1, 1) for item in iou_preds] - - # convert all tensor list to a flatten tensor - cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1)) - bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1)) - iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1)) - - losses_cls = self.loss_cls( - cls_scores, - labels, - labels_weight, - avg_factor=max(num_pos, len(img_metas))) # avoid num_pos=0 - if num_pos: - pos_bbox_pred = self.bbox_coder.decode( - pos_anchors, bbox_preds[pos_inds_flatten]) - pos_bbox_target = bboxes_target[pos_inds_flatten] - iou_target = bbox_overlaps( - pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True) - losses_iou = self.loss_centerness( - iou_preds[pos_inds_flatten], - iou_target.unsqueeze(-1), - avg_factor=num_pos) - losses_bbox = self.loss_bbox( - pos_bbox_pred, pos_bbox_target, avg_factor=num_pos) - - else: - losses_iou = iou_preds.sum() * 0 - losses_bbox = bbox_preds.sum() * 0 - - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ld_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ld_head.py deleted file mode 100644 index c5a945fe..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ld_head.py +++ /dev/null @@ -1,261 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import bbox_overlaps, multi_apply, reduce_mean -from ..builder import HEADS, build_loss -from .gfl_head import GFLHead - - -@HEADS.register_module() -class LDHead(GFLHead): - """Localization distillation Head. (Short description) - - It utilizes the learned bbox distributions to transfer the localization - dark knowledge from teacher to student. Original paper: `Localization - Distillation for Object Detection. `_ - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - loss_ld (dict): Config of Localization Distillation Loss (LD), - T is the temperature for distillation. - """ - - def __init__(self, - num_classes, - in_channels, - loss_ld=dict( - type='LocalizationDistillationLoss', - loss_weight=0.25, - T=10), - **kwargs): - - super(LDHead, self).__init__(num_classes, in_channels, **kwargs) - self.loss_ld = build_loss(loss_ld) - - def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights, - bbox_targets, stride, soft_targets, num_total_samples): - """Compute loss of a single scale level. - - Args: - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - cls_score (Tensor): Cls and quality joint scores for each scale - level has shape (N, num_classes, H, W). - bbox_pred (Tensor): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). 
- label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - stride (tuple): Stride in this scale level. - num_total_samples (int): Number of positive samples that is - reduced over all GPUs. - - Returns: - dict[tuple, Tensor]: Loss components and weight targets. - """ - assert stride[0] == stride[1], 'h stride is not equal to w stride!' - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, 4 * (self.reg_max + 1)) - soft_targets = soft_targets.permute(0, 2, 3, - 1).reshape(-1, - 4 * (self.reg_max + 1)) - - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - score = label_weights.new_zeros(labels.shape) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0] - - weight_targets = cls_score.detach().sigmoid() - weight_targets = weight_targets.max(dim=1)[0][pos_inds] - pos_bbox_pred_corners = self.integral(pos_bbox_pred) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchor_centers, pos_bbox_pred_corners) - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - score[pos_inds] = bbox_overlaps( - pos_decode_bbox_pred.detach(), - pos_decode_bbox_targets, - is_aligned=True) - pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1) - pos_soft_targets = soft_targets[pos_inds] - soft_corners = pos_soft_targets.reshape(-1, self.reg_max + 1) - - target_corners = self.bbox_coder.encode(pos_anchor_centers, - pos_decode_bbox_targets, - self.reg_max).reshape(-1) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=weight_targets, - avg_factor=1.0) - - # dfl loss - loss_dfl = self.loss_dfl( - pred_corners, - target_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - - # ld loss - loss_ld = self.loss_ld( - pred_corners, - soft_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - - else: - loss_ld = bbox_pred.sum() * 0 - loss_bbox = bbox_pred.sum() * 0 - loss_dfl = bbox_pred.sum() * 0 - weight_targets = bbox_pred.new_tensor(0) - - # cls (qfl) loss - loss_cls = self.loss_cls( - cls_score, (labels, score), - weight=label_weights, - avg_factor=num_total_samples) - - return loss_cls, loss_bbox, loss_dfl, loss_ld, weight_targets.sum() - - def forward_train(self, - x, - out_teacher, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). 
- proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple[dict, list]: The loss components and proposals of each image. - - - losses (dict[str, Tensor]): A dictionary of loss components. - - proposal_list (list[Tensor]): Proposals of each image. - """ - outs = self(x) - soft_target = out_teacher[1] - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, soft_target, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, soft_target, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg) - return losses, proposal_list - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - soft_target, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Cls and quality scores for each scale - level has shape (N, num_classes, H, W). - bbox_preds (list[Tensor]): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, losses_dfl, losses_ld, \ - avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - self.prior_generator.strides, - soft_target, - num_total_samples=num_total_samples) - - avg_factor = sum(avg_factor) + 1e-6 - avg_factor = reduce_mean(avg_factor).item() - losses_bbox = [x / avg_factor for x in losses_bbox] - losses_dfl = [x / avg_factor for x in losses_dfl] - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_dfl=losses_dfl, - loss_ld=losses_ld) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/mask2former_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/mask2former_head.py deleted file mode 100644 index 78e4d49b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/mask2former_head.py +++ /dev/null @@ -1,430 +0,0 @@ -# 
Copyright (c) OpenMMLab. All rights reserved. -import copy - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, build_plugin_layer, caffe2_xavier_init -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer_sequence) -from mmcv.ops import point_sample -from mmcv.runner import ModuleList - -from mmdet.core import build_assigner, build_sampler, reduce_mean -from mmdet.models.utils import get_uncertain_point_coords_with_randomness -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead -from .maskformer_head import MaskFormerHead - - -@HEADS.register_module() -class Mask2FormerHead(MaskFormerHead): - """Implements the Mask2Former head. - - See `Masked-attention Mask Transformer for Universal Image - Segmentation `_ for details. - - Args: - in_channels (list[int]): Number of channels in the input feature map. - feat_channels (int): Number of channels for features. - out_channels (int): Number of channels for output. - num_things_classes (int): Number of things. - num_stuff_classes (int): Number of stuff. - num_queries (int): Number of query in Transformer decoder. - pixel_decoder (:obj:`mmcv.ConfigDict` | dict): Config for pixel - decoder. Defaults to None. - enforce_decoder_input_project (bool, optional): Whether to add - a layer to change the embed_dim of tranformer encoder in - pixel decoder to the embed_dim of transformer decoder. - Defaults to False. - transformer_decoder (:obj:`mmcv.ConfigDict` | dict): Config for - transformer decoder. Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer decoder position encoding. Defaults to None. - loss_cls (:obj:`mmcv.ConfigDict` | dict): Config of the classification - loss. Defaults to None. - loss_mask (:obj:`mmcv.ConfigDict` | dict): Config of the mask loss. - Defaults to None. - loss_dice (:obj:`mmcv.ConfigDict` | dict): Config of the dice loss. - Defaults to None. - train_cfg (:obj:`mmcv.ConfigDict` | dict): Training config of - Mask2Former head. - test_cfg (:obj:`mmcv.ConfigDict` | dict): Testing config of - Mask2Former head. - init_cfg (dict or list[dict], optional): Initialization config dict. - Defaults to None. 
- """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - num_things_classes=80, - num_stuff_classes=53, - num_queries=100, - num_transformer_feat_level=3, - pixel_decoder=None, - enforce_decoder_input_project=False, - transformer_decoder=None, - positional_encoding=None, - loss_cls=None, - loss_mask=None, - loss_dice=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - **kwargs): - super(AnchorFreeHead, self).__init__(init_cfg) - self.num_things_classes = num_things_classes - self.num_stuff_classes = num_stuff_classes - self.num_classes = self.num_things_classes + self.num_stuff_classes - self.num_queries = num_queries - self.num_transformer_feat_level = num_transformer_feat_level - self.num_heads = transformer_decoder.transformerlayers.\ - attn_cfgs.num_heads - self.num_transformer_decoder_layers = transformer_decoder.num_layers - assert pixel_decoder.encoder.transformerlayers.\ - attn_cfgs.num_levels == num_transformer_feat_level - pixel_decoder_ = copy.deepcopy(pixel_decoder) - pixel_decoder_.update( - in_channels=in_channels, - feat_channels=feat_channels, - out_channels=out_channels) - self.pixel_decoder = build_plugin_layer(pixel_decoder_)[1] - self.transformer_decoder = build_transformer_layer_sequence( - transformer_decoder) - self.decoder_embed_dims = self.transformer_decoder.embed_dims - - self.decoder_input_projs = ModuleList() - # from low resolution to high resolution - for _ in range(num_transformer_feat_level): - if (self.decoder_embed_dims != feat_channels - or enforce_decoder_input_project): - self.decoder_input_projs.append( - Conv2d( - feat_channels, self.decoder_embed_dims, kernel_size=1)) - else: - self.decoder_input_projs.append(nn.Identity()) - self.decoder_positional_encoding = build_positional_encoding( - positional_encoding) - self.query_embed = nn.Embedding(self.num_queries, feat_channels) - self.query_feat = nn.Embedding(self.num_queries, feat_channels) - # from low resolution to high resolution - self.level_embed = nn.Embedding(self.num_transformer_feat_level, - feat_channels) - - self.cls_embed = nn.Linear(feat_channels, self.num_classes + 1) - self.mask_embed = nn.Sequential( - nn.Linear(feat_channels, feat_channels), nn.ReLU(inplace=True), - nn.Linear(feat_channels, feat_channels), nn.ReLU(inplace=True), - nn.Linear(feat_channels, out_channels)) - - self.test_cfg = test_cfg - self.train_cfg = train_cfg - if train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - self.sampler = build_sampler(self.train_cfg.sampler, context=self) - self.num_points = self.train_cfg.get('num_points', 12544) - self.oversample_ratio = self.train_cfg.get('oversample_ratio', 3.0) - self.importance_sample_ratio = self.train_cfg.get( - 'importance_sample_ratio', 0.75) - - self.class_weight = loss_cls.class_weight - self.loss_cls = build_loss(loss_cls) - self.loss_mask = build_loss(loss_mask) - self.loss_dice = build_loss(loss_dice) - - def init_weights(self): - for m in self.decoder_input_projs: - if isinstance(m, Conv2d): - caffe2_xavier_init(m, bias=0) - - self.pixel_decoder.init_weights() - - for p in self.transformer_decoder.parameters(): - if p.dim() > 1: - nn.init.xavier_normal_(p) - - def _get_target_single(self, cls_score, mask_pred, gt_labels, gt_masks, - img_metas): - """Compute classification and mask targets for one image. - - Args: - cls_score (Tensor): Mask score logits from a single decoder layer - for one image. Shape (num_queries, cls_out_channels). 
- mask_pred (Tensor): Mask logits for a single decoder layer for one - image. Shape (num_queries, h, w). - gt_labels (Tensor): Ground truth class indices for one image with - shape (num_gts, ). - gt_masks (Tensor): Ground truth mask for each image, each with - shape (num_gts, h, w). - img_metas (dict): Image informtation. - - Returns: - tuple[Tensor]: A tuple containing the following for one image. - - - labels (Tensor): Labels of each image. \ - shape (num_queries, ). - - label_weights (Tensor): Label weights of each image. \ - shape (num_queries, ). - - mask_targets (Tensor): Mask targets of each image. \ - shape (num_queries, h, w). - - mask_weights (Tensor): Mask weights of each image. \ - shape (num_queries, ). - - pos_inds (Tensor): Sampled positive indices for each \ - image. - - neg_inds (Tensor): Sampled negative indices for each \ - image. - """ - # sample points - num_queries = cls_score.shape[0] - num_gts = gt_labels.shape[0] - - point_coords = torch.rand((1, self.num_points, 2), - device=cls_score.device) - # shape (num_queries, num_points) - mask_points_pred = point_sample( - mask_pred.unsqueeze(1), point_coords.repeat(num_queries, 1, - 1)).squeeze(1) - # shape (num_gts, num_points) - gt_points_masks = point_sample( - gt_masks.unsqueeze(1).float(), point_coords.repeat(num_gts, 1, - 1)).squeeze(1) - - # assign and sample - assign_result = self.assigner.assign(cls_score, mask_points_pred, - gt_labels, gt_points_masks, - img_metas) - sampling_result = self.sampler.sample(assign_result, mask_pred, - gt_masks) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - # label target - labels = gt_labels.new_full((self.num_queries, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds] - label_weights = gt_labels.new_ones((self.num_queries, )) - - # mask target - mask_targets = gt_masks[sampling_result.pos_assigned_gt_inds] - mask_weights = mask_pred.new_zeros((self.num_queries, )) - mask_weights[pos_inds] = 1.0 - - return (labels, label_weights, mask_targets, mask_weights, pos_inds, - neg_inds) - - def loss_single(self, cls_scores, mask_preds, gt_labels_list, - gt_masks_list, img_metas): - """Loss function for outputs from a single decoder layer. - - Args: - cls_scores (Tensor): Mask score logits from a single decoder layer - for all images. Shape (batch_size, num_queries, - cls_out_channels). Note `cls_out_channels` should includes - background. - mask_preds (Tensor): Mask logits for a pixel decoder for all - images. Shape (batch_size, num_queries, h, w). - gt_labels_list (list[Tensor]): Ground truth class indices for each - image, each with shape (num_gts, ). - gt_masks_list (list[Tensor]): Ground truth mask for each image, - each with shape (num_gts, h, w). - img_metas (list[dict]): List of image meta information. - - Returns: - tuple[Tensor]: Loss components for outputs from a single \ - decoder layer. 
- """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - mask_preds_list = [mask_preds[i] for i in range(num_imgs)] - (labels_list, label_weights_list, mask_targets_list, mask_weights_list, - num_total_pos, - num_total_neg) = self.get_targets(cls_scores_list, mask_preds_list, - gt_labels_list, gt_masks_list, - img_metas) - # shape (batch_size, num_queries) - labels = torch.stack(labels_list, dim=0) - # shape (batch_size, num_queries) - label_weights = torch.stack(label_weights_list, dim=0) - # shape (num_total_gts, h, w) - mask_targets = torch.cat(mask_targets_list, dim=0) - # shape (batch_size, num_queries) - mask_weights = torch.stack(mask_weights_list, dim=0) - - # classfication loss - # shape (batch_size * num_queries, ) - cls_scores = cls_scores.flatten(0, 1) - labels = labels.flatten(0, 1) - label_weights = label_weights.flatten(0, 1) - - class_weight = cls_scores.new_tensor(self.class_weight) - loss_cls = self.loss_cls( - cls_scores, - labels, - label_weights, - avg_factor=class_weight[labels].sum()) - - num_total_masks = reduce_mean(cls_scores.new_tensor([num_total_pos])) - num_total_masks = max(num_total_masks, 1) - - # extract positive ones - # shape (batch_size, num_queries, h, w) -> (num_total_gts, h, w) - mask_preds = mask_preds[mask_weights > 0] - - if mask_targets.shape[0] == 0: - # zero match - loss_dice = mask_preds.sum() - loss_mask = mask_preds.sum() - return loss_cls, loss_mask, loss_dice - - with torch.no_grad(): - points_coords = get_uncertain_point_coords_with_randomness( - mask_preds.unsqueeze(1), None, self.num_points, - self.oversample_ratio, self.importance_sample_ratio) - # shape (num_total_gts, h, w) -> (num_total_gts, num_points) - mask_point_targets = point_sample( - mask_targets.unsqueeze(1).float(), points_coords).squeeze(1) - # shape (num_queries, h, w) -> (num_queries, num_points) - mask_point_preds = point_sample( - mask_preds.unsqueeze(1), points_coords).squeeze(1) - - # dice loss - loss_dice = self.loss_dice( - mask_point_preds, mask_point_targets, avg_factor=num_total_masks) - - # mask loss - # shape (num_queries, num_points) -> (num_queries * num_points, ) - mask_point_preds = mask_point_preds.reshape(-1) - # shape (num_total_gts, num_points) -> (num_total_gts * num_points, ) - mask_point_targets = mask_point_targets.reshape(-1) - loss_mask = self.loss_mask( - mask_point_preds, - mask_point_targets, - avg_factor=num_total_masks * self.num_points) - - return loss_cls, loss_mask, loss_dice - - def forward_head(self, decoder_out, mask_feature, attn_mask_target_size): - """Forward for head part which is called after every decoder layer. - - Args: - decoder_out (Tensor): in shape (num_queries, batch_size, c). - mask_feature (Tensor): in shape (batch_size, c, h, w). - attn_mask_target_size (tuple[int, int]): target attention - mask size. - - Returns: - tuple: A tuple contain three elements. - - - cls_pred (Tensor): Classification scores in shape \ - (batch_size, num_queries, cls_out_channels). \ - Note `cls_out_channels` should includes background. - - mask_pred (Tensor): Mask scores in shape \ - (batch_size, num_queries,h, w). - - attn_mask (Tensor): Attention mask in shape \ - (batch_size * num_heads, num_queries, h, w). 
- """ - decoder_out = self.transformer_decoder.post_norm(decoder_out) - decoder_out = decoder_out.transpose(0, 1) - # shape (num_queries, batch_size, c) - cls_pred = self.cls_embed(decoder_out) - # shape (num_queries, batch_size, c) - mask_embed = self.mask_embed(decoder_out) - # shape (num_queries, batch_size, h, w) - mask_pred = torch.einsum('bqc,bchw->bqhw', mask_embed, mask_feature) - attn_mask = F.interpolate( - mask_pred, - attn_mask_target_size, - mode='bilinear', - align_corners=False) - # shape (num_queries, batch_size, h, w) -> - # (batch_size * num_head, num_queries, h, w) - attn_mask = attn_mask.flatten(2).unsqueeze(1).repeat( - (1, self.num_heads, 1, 1)).flatten(0, 1) - attn_mask = attn_mask.sigmoid() < 0.5 - attn_mask = attn_mask.detach() - - return cls_pred, mask_pred, attn_mask - - def forward(self, feats, img_metas): - """Forward function. - - Args: - feats (list[Tensor]): Multi scale Features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple: A tuple contains two elements. - - - cls_pred_list (list[Tensor)]: Classification logits \ - for each decoder layer. Each is a 3D-tensor with shape \ - (batch_size, num_queries, cls_out_channels). \ - Note `cls_out_channels` should includes background. - - mask_pred_list (list[Tensor]): Mask logits for each \ - decoder layer. Each with shape (batch_size, num_queries, \ - h, w). - """ - batch_size = len(img_metas) - mask_features, multi_scale_memorys = self.pixel_decoder(feats) - # multi_scale_memorys (from low resolution to high resolution) - decoder_inputs = [] - decoder_positional_encodings = [] - for i in range(self.num_transformer_feat_level): - decoder_input = self.decoder_input_projs[i](multi_scale_memorys[i]) - # shape (batch_size, c, h, w) -> (h*w, batch_size, c) - decoder_input = decoder_input.flatten(2).permute(2, 0, 1) - level_embed = self.level_embed.weight[i].view(1, 1, -1) - decoder_input = decoder_input + level_embed - # shape (batch_size, c, h, w) -> (h*w, batch_size, c) - mask = decoder_input.new_zeros( - (batch_size, ) + multi_scale_memorys[i].shape[-2:], - dtype=torch.bool) - decoder_positional_encoding = self.decoder_positional_encoding( - mask) - decoder_positional_encoding = decoder_positional_encoding.flatten( - 2).permute(2, 0, 1) - decoder_inputs.append(decoder_input) - decoder_positional_encodings.append(decoder_positional_encoding) - # shape (num_queries, c) -> (num_queries, batch_size, c) - query_feat = self.query_feat.weight.unsqueeze(1).repeat( - (1, batch_size, 1)) - query_embed = self.query_embed.weight.unsqueeze(1).repeat( - (1, batch_size, 1)) - - cls_pred_list = [] - mask_pred_list = [] - cls_pred, mask_pred, attn_mask = self.forward_head( - query_feat, mask_features, multi_scale_memorys[0].shape[-2:]) - cls_pred_list.append(cls_pred) - mask_pred_list.append(mask_pred) - - for i in range(self.num_transformer_decoder_layers): - level_idx = i % self.num_transformer_feat_level - # if a mask is all True(all background), then set it all False. 
- attn_mask[torch.where( - attn_mask.sum(-1) == attn_mask.shape[-1])] = False - - # cross_attn + self_attn - layer = self.transformer_decoder.layers[i] - attn_masks = [attn_mask, None] - query_feat = layer( - query=query_feat, - key=decoder_inputs[level_idx], - value=decoder_inputs[level_idx], - query_pos=query_embed, - key_pos=decoder_positional_encodings[level_idx], - attn_masks=attn_masks, - query_key_padding_mask=None, - # here we do not apply masking on padded region - key_padding_mask=None) - cls_pred, mask_pred, attn_mask = self.forward_head( - query_feat, mask_features, multi_scale_memorys[ - (i + 1) % self.num_transformer_feat_level].shape[-2:]) - - cls_pred_list.append(cls_pred) - mask_pred_list.append(mask_pred) - - return cls_pred_list, mask_pred_list diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/maskformer_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/maskformer_head.py deleted file mode 100644 index 4541e018..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/maskformer_head.py +++ /dev/null @@ -1,553 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, build_plugin_layer, caffe2_xavier_init -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer_sequence) -from mmcv.runner import force_fp32 - -from mmdet.core import build_assigner, build_sampler, multi_apply, reduce_mean -from mmdet.models.utils import preprocess_panoptic_gt -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class MaskFormerHead(AnchorFreeHead): - """Implements the MaskFormer head. - - See `Per-Pixel Classification is Not All You Need for Semantic - Segmentation `_ for details. - - Args: - in_channels (list[int]): Number of channels in the input feature map. - feat_channels (int): Number of channels for feature. - out_channels (int): Number of channels for output. - num_things_classes (int): Number of things. - num_stuff_classes (int): Number of stuff. - num_queries (int): Number of query in Transformer. - pixel_decoder (:obj:`mmcv.ConfigDict` | dict): Config for pixel - decoder. Defaults to None. - enforce_decoder_input_project (bool, optional): Whether to add a layer - to change the embed_dim of tranformer encoder in pixel decoder to - the embed_dim of transformer decoder. Defaults to False. - transformer_decoder (:obj:`mmcv.ConfigDict` | dict): Config for - transformer decoder. Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer decoder position encoding. Defaults to None. - loss_cls (:obj:`mmcv.ConfigDict` | dict): Config of the classification - loss. Defaults to `CrossEntropyLoss`. - loss_mask (:obj:`mmcv.ConfigDict` | dict): Config of the mask loss. - Defaults to `FocalLoss`. - loss_dice (:obj:`mmcv.ConfigDict` | dict): Config of the dice loss. - Defaults to `DiceLoss`. - train_cfg (:obj:`mmcv.ConfigDict` | dict): Training config of - Maskformer head. - test_cfg (:obj:`mmcv.ConfigDict` | dict): Testing config of Maskformer - head. - init_cfg (dict or list[dict], optional): Initialization config dict. - Defaults to None. 
- """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - num_things_classes=80, - num_stuff_classes=53, - num_queries=100, - pixel_decoder=None, - enforce_decoder_input_project=False, - transformer_decoder=None, - positional_encoding=None, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0, - class_weight=[1.0] * 133 + [0.1]), - loss_mask=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=20.0), - loss_dice=dict( - type='DiceLoss', - use_sigmoid=True, - activate=True, - naive_dice=True, - loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=None, - **kwargs): - super(AnchorFreeHead, self).__init__(init_cfg) - self.num_things_classes = num_things_classes - self.num_stuff_classes = num_stuff_classes - self.num_classes = self.num_things_classes + self.num_stuff_classes - self.num_queries = num_queries - - pixel_decoder.update( - in_channels=in_channels, - feat_channels=feat_channels, - out_channels=out_channels) - self.pixel_decoder = build_plugin_layer(pixel_decoder)[1] - self.transformer_decoder = build_transformer_layer_sequence( - transformer_decoder) - self.decoder_embed_dims = self.transformer_decoder.embed_dims - pixel_decoder_type = pixel_decoder.get('type') - if pixel_decoder_type == 'PixelDecoder' and ( - self.decoder_embed_dims != in_channels[-1] - or enforce_decoder_input_project): - self.decoder_input_proj = Conv2d( - in_channels[-1], self.decoder_embed_dims, kernel_size=1) - else: - self.decoder_input_proj = nn.Identity() - self.decoder_pe = build_positional_encoding(positional_encoding) - self.query_embed = nn.Embedding(self.num_queries, out_channels) - - self.cls_embed = nn.Linear(feat_channels, self.num_classes + 1) - self.mask_embed = nn.Sequential( - nn.Linear(feat_channels, feat_channels), nn.ReLU(inplace=True), - nn.Linear(feat_channels, feat_channels), nn.ReLU(inplace=True), - nn.Linear(feat_channels, out_channels)) - - self.test_cfg = test_cfg - self.train_cfg = train_cfg - if train_cfg: - self.assigner = build_assigner(train_cfg.assigner) - self.sampler = build_sampler(train_cfg.sampler, context=self) - - self.class_weight = loss_cls.class_weight - self.loss_cls = build_loss(loss_cls) - self.loss_mask = build_loss(loss_mask) - self.loss_dice = build_loss(loss_dice) - - def init_weights(self): - if isinstance(self.decoder_input_proj, Conv2d): - caffe2_xavier_init(self.decoder_input_proj, bias=0) - - self.pixel_decoder.init_weights() - - for p in self.transformer_decoder.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def preprocess_gt(self, gt_labels_list, gt_masks_list, gt_semantic_segs): - """Preprocess the ground truth for all images. - - Args: - gt_labels_list (list[Tensor]): Each is ground truth - labels of each bbox, with shape (num_gts, ). - gt_masks_list (list[BitmapMasks]): Each is ground truth - masks of each instances of a image, shape - (num_gts, h, w). - gt_semantic_seg (Tensor): Ground truth of semantic - segmentation with the shape (batch_size, n, h, w). - [0, num_thing_class - 1] means things, - [num_thing_class, num_class-1] means stuff, - 255 means VOID. - target_shape (tuple[int]): Shape of output mask_preds. - Resize the masks to shape of mask_preds. - - Returns: - tuple: a tuple containing the following targets. - - labels (list[Tensor]): Ground truth class indices\ - for all images. Each with shape (n, ), n is the sum of\ - number of stuff type and number of instance in a image. 
- - masks (list[Tensor]): Ground truth mask for each\ - image, each with shape (n, h, w). - """ - num_things_list = [self.num_things_classes] * len(gt_labels_list) - num_stuff_list = [self.num_stuff_classes] * len(gt_labels_list) - - targets = multi_apply(preprocess_panoptic_gt, gt_labels_list, - gt_masks_list, gt_semantic_segs, num_things_list, - num_stuff_list) - labels, masks = targets - return labels, masks - - def get_targets(self, cls_scores_list, mask_preds_list, gt_labels_list, - gt_masks_list, img_metas): - """Compute classification and mask targets for all images for a decoder - layer. - - Args: - cls_scores_list (list[Tensor]): Mask score logits from a single - decoder layer for all images. Each with shape (num_queries, - cls_out_channels). - mask_preds_list (list[Tensor]): Mask logits from a single decoder - layer for all images. Each with shape (num_queries, h, w). - gt_labels_list (list[Tensor]): Ground truth class indices for all - images. Each with shape (n, ), n is the sum of number of stuff - type and number of instance in a image. - gt_masks_list (list[Tensor]): Ground truth mask for each image, - each with shape (n, h, w). - img_metas (list[dict]): List of image meta information. - - Returns: - tuple[list[Tensor]]: a tuple containing the following targets. - - labels_list (list[Tensor]): Labels of all images.\ - Each with shape (num_queries, ). - - label_weights_list (list[Tensor]): Label weights\ - of all images. Each with shape (num_queries, ). - - mask_targets_list (list[Tensor]): Mask targets of\ - all images. Each with shape (num_queries, h, w). - - mask_weights_list (list[Tensor]): Mask weights of\ - all images. Each with shape (num_queries, ). - - num_total_pos (int): Number of positive samples in\ - all images. - - num_total_neg (int): Number of negative samples in\ - all images. - """ - (labels_list, label_weights_list, mask_targets_list, mask_weights_list, - pos_inds_list, - neg_inds_list) = multi_apply(self._get_target_single, cls_scores_list, - mask_preds_list, gt_labels_list, - gt_masks_list, img_metas) - - num_total_pos = sum((inds.numel() for inds in pos_inds_list)) - num_total_neg = sum((inds.numel() for inds in neg_inds_list)) - return (labels_list, label_weights_list, mask_targets_list, - mask_weights_list, num_total_pos, num_total_neg) - - def _get_target_single(self, cls_score, mask_pred, gt_labels, gt_masks, - img_metas): - """Compute classification and mask targets for one image. - - Args: - cls_score (Tensor): Mask score logits from a single decoder layer - for one image. Shape (num_queries, cls_out_channels). - mask_pred (Tensor): Mask logits for a single decoder layer for one - image. Shape (num_queries, h, w). - gt_labels (Tensor): Ground truth class indices for one image with - shape (n, ). n is the sum of number of stuff type and number - of instance in a image. - gt_masks (Tensor): Ground truth mask for each image, each with - shape (n, h, w). - img_metas (dict): Image informtation. - - Returns: - tuple[Tensor]: a tuple containing the following for one image. - - labels (Tensor): Labels of each image. - shape (num_queries, ). - - label_weights (Tensor): Label weights of each image. - shape (num_queries, ). - - mask_targets (Tensor): Mask targets of each image. - shape (num_queries, h, w). - - mask_weights (Tensor): Mask weights of each image. - shape (num_queries, ). - - pos_inds (Tensor): Sampled positive indices for each image. - - neg_inds (Tensor): Sampled negative indices for each image. 
- """ - target_shape = mask_pred.shape[-2:] - if gt_masks.shape[0] > 0: - gt_masks_downsampled = F.interpolate( - gt_masks.unsqueeze(1).float(), target_shape, - mode='nearest').squeeze(1).long() - else: - gt_masks_downsampled = gt_masks - - # assign and sample - assign_result = self.assigner.assign(cls_score, mask_pred, gt_labels, - gt_masks_downsampled, img_metas) - sampling_result = self.sampler.sample(assign_result, mask_pred, - gt_masks) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - # label target - labels = gt_labels.new_full((self.num_queries, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds] - label_weights = gt_labels.new_ones(self.num_queries) - - # mask target - mask_targets = gt_masks[sampling_result.pos_assigned_gt_inds] - mask_weights = mask_pred.new_zeros((self.num_queries, )) - mask_weights[pos_inds] = 1.0 - - return (labels, label_weights, mask_targets, mask_weights, pos_inds, - neg_inds) - - @force_fp32(apply_to=('all_cls_scores', 'all_mask_preds')) - def loss(self, all_cls_scores, all_mask_preds, gt_labels_list, - gt_masks_list, img_metas): - """Loss function. - - Args: - all_cls_scores (Tensor): Classification scores for all decoder - layers with shape (num_decoder, batch_size, num_queries, - cls_out_channels). Note `cls_out_channels` should includes - background. - all_mask_preds (Tensor): Mask scores for all decoder layers with - shape (num_decoder, batch_size, num_queries, h, w). - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (n, ). n is the sum of number of stuff type - and number of instance in a image. - gt_masks_list (list[Tensor]): Ground truth mask for each image with - shape (n, h, w). - img_metas (list[dict]): List of image meta information. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - num_dec_layers = len(all_cls_scores) - all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)] - all_gt_masks_list = [gt_masks_list for _ in range(num_dec_layers)] - img_metas_list = [img_metas for _ in range(num_dec_layers)] - losses_cls, losses_mask, losses_dice = multi_apply( - self.loss_single, all_cls_scores, all_mask_preds, - all_gt_labels_list, all_gt_masks_list, img_metas_list) - - loss_dict = dict() - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_mask'] = losses_mask[-1] - loss_dict['loss_dice'] = losses_dice[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_mask_i, loss_dice_i in zip( - losses_cls[:-1], losses_mask[:-1], losses_dice[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_mask'] = loss_mask_i - loss_dict[f'd{num_dec_layer}.loss_dice'] = loss_dice_i - num_dec_layer += 1 - return loss_dict - - def loss_single(self, cls_scores, mask_preds, gt_labels_list, - gt_masks_list, img_metas): - """Loss function for outputs from a single decoder layer. - - Args: - cls_scores (Tensor): Mask score logits from a single decoder layer - for all images. Shape (batch_size, num_queries, - cls_out_channels). Note `cls_out_channels` should includes - background. - mask_preds (Tensor): Mask logits for a pixel decoder for all - images. Shape (batch_size, num_queries, h, w). - gt_labels_list (list[Tensor]): Ground truth class indices for each - image, each with shape (n, ). n is the sum of number of stuff - types and number of instances in a image. 
- gt_masks_list (list[Tensor]): Ground truth mask for each image, - each with shape (n, h, w). - img_metas (list[dict]): List of image meta information. - - Returns: - tuple[Tensor]: Loss components for outputs from a single decoder\ - layer. - """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - mask_preds_list = [mask_preds[i] for i in range(num_imgs)] - - (labels_list, label_weights_list, mask_targets_list, mask_weights_list, - num_total_pos, - num_total_neg) = self.get_targets(cls_scores_list, mask_preds_list, - gt_labels_list, gt_masks_list, - img_metas) - # shape (batch_size, num_queries) - labels = torch.stack(labels_list, dim=0) - # shape (batch_size, num_queries) - label_weights = torch.stack(label_weights_list, dim=0) - # shape (num_total_gts, h, w) - mask_targets = torch.cat(mask_targets_list, dim=0) - # shape (batch_size, num_queries) - mask_weights = torch.stack(mask_weights_list, dim=0) - - # classfication loss - # shape (batch_size * num_queries, ) - cls_scores = cls_scores.flatten(0, 1) - labels = labels.flatten(0, 1) - label_weights = label_weights.flatten(0, 1) - - class_weight = cls_scores.new_tensor(self.class_weight) - loss_cls = self.loss_cls( - cls_scores, - labels, - label_weights, - avg_factor=class_weight[labels].sum()) - - num_total_masks = reduce_mean(cls_scores.new_tensor([num_total_pos])) - num_total_masks = max(num_total_masks, 1) - - # extract positive ones - # shape (batch_size, num_queries, h, w) -> (num_total_gts, h, w) - mask_preds = mask_preds[mask_weights > 0] - target_shape = mask_targets.shape[-2:] - - if mask_targets.shape[0] == 0: - # zero match - loss_dice = mask_preds.sum() - loss_mask = mask_preds.sum() - return loss_cls, loss_mask, loss_dice - - # upsample to shape of target - # shape (num_total_gts, h, w) - mask_preds = F.interpolate( - mask_preds.unsqueeze(1), - target_shape, - mode='bilinear', - align_corners=False).squeeze(1) - - # dice loss - loss_dice = self.loss_dice( - mask_preds, mask_targets, avg_factor=num_total_masks) - - # mask loss - # FocalLoss support input of shape (n, num_class) - h, w = mask_preds.shape[-2:] - # shape (num_total_gts, h, w) -> (num_total_gts * h * w, 1) - mask_preds = mask_preds.reshape(-1, 1) - # shape (num_total_gts, h, w) -> (num_total_gts * h * w) - mask_targets = mask_targets.reshape(-1) - # target is (1 - mask_targets) !!! - loss_mask = self.loss_mask( - mask_preds, 1 - mask_targets, avg_factor=num_total_masks * h * w) - - return loss_cls, loss_mask, loss_dice - - def forward(self, feats, img_metas): - """Forward function. - - Args: - feats (list[Tensor]): Features from the upstream network, each - is a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple: a tuple contains two elements. - - all_cls_scores (Tensor): Classification scores for each\ - scale level. Each is a 4D-tensor with shape\ - (num_decoder, batch_size, num_queries, cls_out_channels).\ - Note `cls_out_channels` should includes background. - - all_mask_preds (Tensor): Mask scores for each decoder\ - layer. Each with shape (num_decoder, batch_size,\ - num_queries, h, w). 
- """ - batch_size = len(img_metas) - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - padding_mask = feats[-1].new_ones( - (batch_size, input_img_h, input_img_w), dtype=torch.float32) - for i in range(batch_size): - img_h, img_w, _ = img_metas[i]['img_shape'] - padding_mask[i, :img_h, :img_w] = 0 - padding_mask = F.interpolate( - padding_mask.unsqueeze(1), - size=feats[-1].shape[-2:], - mode='nearest').to(torch.bool).squeeze(1) - # when backbone is swin, memory is output of last stage of swin. - # when backbone is r50, memory is output of tranformer encoder. - mask_features, memory = self.pixel_decoder(feats, img_metas) - pos_embed = self.decoder_pe(padding_mask) - memory = self.decoder_input_proj(memory) - # shape (batch_size, c, h, w) -> (h*w, batch_size, c) - memory = memory.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - # shape (batch_size, h * w) - padding_mask = padding_mask.flatten(1) - # shape = (num_queries, embed_dims) - query_embed = self.query_embed.weight - # shape = (num_queries, batch_size, embed_dims) - query_embed = query_embed.unsqueeze(1).repeat(1, batch_size, 1) - target = torch.zeros_like(query_embed) - # shape (num_decoder, num_queries, batch_size, embed_dims) - out_dec = self.transformer_decoder( - query=target, - key=memory, - value=memory, - key_pos=pos_embed, - query_pos=query_embed, - key_padding_mask=padding_mask) - # shape (num_decoder, batch_size, num_queries, embed_dims) - out_dec = out_dec.transpose(1, 2) - - # cls_scores - all_cls_scores = self.cls_embed(out_dec) - - # mask_preds - mask_embed = self.mask_embed(out_dec) - all_mask_preds = torch.einsum('lbqc,bchw->lbqhw', mask_embed, - mask_features) - - return all_cls_scores, all_mask_preds - - def forward_train(self, - feats, - img_metas, - gt_bboxes, - gt_labels, - gt_masks, - gt_semantic_seg, - gt_bboxes_ignore=None): - """Forward function for training mode. - - Args: - feats (list[Tensor]): Multi-level features from the upstream - network, each is a 4D-tensor. - img_metas (list[Dict]): List of image information. - gt_bboxes (list[Tensor]): Each element is ground truth bboxes of - the image, shape (num_gts, 4). Not used here. - gt_labels (list[Tensor]): Each element is ground truth labels of - each box, shape (num_gts,). - gt_masks (list[BitmapMasks]): Each element is masks of instances - of a image, shape (num_gts, h, w). - gt_semantic_seg (list[tensor]):Each element is the ground truth - of semantic segmentation with the shape (N, H, W). - [0, num_thing_class - 1] means things, - [num_thing_class, num_class-1] means stuff, - 255 means VOID. - gt_bboxes_ignore (list[Tensor]): Ground truth bboxes to be - ignored. Defaults to None. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # not consider ignoring bboxes - assert gt_bboxes_ignore is None - - # forward - all_cls_scores, all_mask_preds = self(feats, img_metas) - - # preprocess ground truth - gt_labels, gt_masks = self.preprocess_gt(gt_labels, gt_masks, - gt_semantic_seg) - - # loss - losses = self.loss(all_cls_scores, all_mask_preds, gt_labels, gt_masks, - img_metas) - - return losses - - def simple_test(self, feats, img_metas, **kwargs): - """Test without augmentaton. - - Args: - feats (list[Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple: A tuple contains two tensors. 
- - - mask_cls_results (Tensor): Mask classification logits,\ - shape (batch_size, num_queries, cls_out_channels). - Note `cls_out_channels` should includes background. - - mask_pred_results (Tensor): Mask logits, shape \ - (batch_size, num_queries, h, w). - """ - all_cls_scores, all_mask_preds = self(feats, img_metas) - mask_cls_results = all_cls_scores[-1] - mask_pred_results = all_mask_preds[-1] - - # upsample masks - img_shape = img_metas[0]['batch_input_shape'] - mask_pred_results = F.interpolate( - mask_pred_results, - size=(img_shape[0], img_shape[1]), - mode='bilinear', - align_corners=False) - - return mask_cls_results, mask_pred_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/nasfcos_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/nasfcos_head.py deleted file mode 100644 index 380c912c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/nasfcos_head.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import torch.nn as nn -from mmcv.cnn import ConvModule, Scale - -from mmdet.models.dense_heads.fcos_head import FCOSHead -from ..builder import HEADS - - -@HEADS.register_module() -class NASFCOSHead(FCOSHead): - """Anchor-free head used in `NASFCOS `_. - - It is quite similar with FCOS head, except for the searched structure of - classification branch and bbox regression branch, where a structure of - "dconv3x3, conv3x3, dconv3x3, conv1x1" is utilized instead. - """ - - def __init__(self, *args, init_cfg=None, **kwargs): - if init_cfg is None: - init_cfg = [ - dict(type='Caffe2Xavier', layer=['ConvModule', 'Conv2d']), - dict( - type='Normal', - std=0.01, - override=[ - dict(name='conv_reg'), - dict(name='conv_centerness'), - dict( - name='conv_cls', - type='Normal', - std=0.01, - bias_prob=0.01) - ]), - ] - super(NASFCOSHead, self).__init__(*args, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - dconv3x3_config = dict( - type='DCNv2', - kernel_size=3, - use_bias=True, - deform_groups=2, - padding=1) - conv3x3_config = dict(type='Conv', kernel_size=3, padding=1) - conv1x1_config = dict(type='Conv', kernel_size=1) - - self.arch_config = [ - dconv3x3_config, conv3x3_config, dconv3x3_config, conv1x1_config - ] - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i, op_ in enumerate(self.arch_config): - op = copy.deepcopy(op_) - chn = self.in_channels if i == 0 else self.feat_channels - assert isinstance(op, dict) - use_bias = op.pop('use_bias', False) - padding = op.pop('padding', 0) - kernel_size = op.pop('kernel_size') - module = ConvModule( - chn, - self.feat_channels, - kernel_size, - stride=1, - padding=padding, - norm_cfg=self.norm_cfg, - bias=use_bias, - conv_cfg=op) - - self.cls_convs.append(copy.deepcopy(module)) - self.reg_convs.append(copy.deepcopy(module)) - - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - self.conv_centerness = nn.Conv2d(self.feat_channels, 1, 3, padding=1) - - self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides]) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/paa_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/paa_head.py deleted file mode 100644 index d79b5b9f..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/paa_head.py +++ /dev/null @@ -1,756 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply, multiclass_nms -from mmdet.core.bbox.iou_calculators import bbox_overlaps -from mmdet.models import HEADS -from mmdet.models.dense_heads import ATSSHead - -EPS = 1e-12 -try: - import sklearn.mixture as skm -except ImportError: - skm = None - - -def levels_to_images(mlvl_tensor): - """Concat multi-level feature maps by image. - - [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] - Convert the shape of each element in mlvl_tensor from (N, C, H, W) to - (N, H*W , C), then split the element to N elements with shape (H*W, C), and - concat elements in same image of all level along first dimension. - - Args: - mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from - corresponding level. Each element is of shape (N, C, H, W) - - Returns: - list[torch.Tensor]: A list that contains N tensors and each tensor is - of shape (num_elements, C) - """ - batch_size = mlvl_tensor[0].size(0) - batch_list = [[] for _ in range(batch_size)] - channels = mlvl_tensor[0].size(1) - for t in mlvl_tensor: - t = t.permute(0, 2, 3, 1) - t = t.view(batch_size, -1, channels).contiguous() - for img in range(batch_size): - batch_list[img].append(t[img]) - return [torch.cat(item, 0) for item in batch_list] - - -@HEADS.register_module() -class PAAHead(ATSSHead): - """Head of PAAAssignment: Probabilistic Anchor Assignment with IoU - Prediction for Object Detection. - - Code is modified from the `official github repo - `_. - - More details can be found in the `paper - `_ . - - Args: - topk (int): Select topk samples with smallest loss in - each level. - score_voting (bool): Whether to use score voting in post-process. - covariance_type : String describing the type of covariance parameters - to be used in :class:`sklearn.mixture.GaussianMixture`. - It must be one of: - - - 'full': each component has its own general covariance matrix - - 'tied': all components share the same general covariance matrix - - 'diag': each component has its own diagonal covariance matrix - - 'spherical': each component has its own single variance - Default: 'diag'. From 'full' to 'spherical', the gmm fitting - process is faster yet the performance could be influenced. For most - cases, 'diag' should be a good choice. - """ - - def __init__(self, - *args, - topk=9, - score_voting=True, - covariance_type='diag', - **kwargs): - # topk used in paa reassign process - self.topk = topk - self.with_score_voting = score_voting - self.covariance_type = covariance_type - super(PAAHead, self).__init__(*args, **kwargs) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds')) - def loss(self, - cls_scores, - bbox_preds, - iou_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - iou_preds (list[Tensor]): iou_preds for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. 
- gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when are computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss gmm_assignment. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - ) - (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds, - pos_gt_index) = cls_reg_targets - cls_scores = levels_to_images(cls_scores) - cls_scores = [ - item.reshape(-1, self.cls_out_channels) for item in cls_scores - ] - bbox_preds = levels_to_images(bbox_preds) - bbox_preds = [item.reshape(-1, 4) for item in bbox_preds] - iou_preds = levels_to_images(iou_preds) - iou_preds = [item.reshape(-1, 1) for item in iou_preds] - pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list, - cls_scores, bbox_preds, labels, - labels_weight, bboxes_target, - bboxes_weight, pos_inds) - - with torch.no_grad(): - reassign_labels, reassign_label_weight, \ - reassign_bbox_weights, num_pos = multi_apply( - self.paa_reassign, - pos_losses_list, - labels, - labels_weight, - bboxes_weight, - pos_inds, - pos_gt_index, - anchor_list) - num_pos = sum(num_pos) - # convert all tensor list to a flatten tensor - cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1)) - bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1)) - iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1)) - labels = torch.cat(reassign_labels, 0).view(-1) - flatten_anchors = torch.cat( - [torch.cat(item, 0) for item in anchor_list]) - labels_weight = torch.cat(reassign_label_weight, 0).view(-1) - bboxes_target = torch.cat(bboxes_target, - 0).view(-1, bboxes_target[0].size(-1)) - - pos_inds_flatten = ((labels >= 0) - & - (labels < self.num_classes)).nonzero().reshape(-1) - - losses_cls = self.loss_cls( - cls_scores, - labels, - labels_weight, - avg_factor=max(num_pos, len(img_metas))) # avoid num_pos=0 - if num_pos: - pos_bbox_pred = self.bbox_coder.decode( - flatten_anchors[pos_inds_flatten], - bbox_preds[pos_inds_flatten]) - pos_bbox_target = bboxes_target[pos_inds_flatten] - iou_target = bbox_overlaps( - pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True) - losses_iou = self.loss_centerness( - iou_preds[pos_inds_flatten], - iou_target.unsqueeze(-1), - avg_factor=num_pos) - losses_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_target, - iou_target.clamp(min=EPS), - avg_factor=iou_target.sum()) - else: - losses_iou = iou_preds.sum() * 0 - losses_bbox = bbox_preds.sum() * 0 - - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou) - - def get_pos_loss(self, anchors, cls_score, bbox_pred, label, label_weight, - bbox_target, bbox_weight, pos_inds): - """Calculate loss of all potential positive samples obtained from first - match process. - - Args: - anchors (list[Tensor]): Anchors of each scale. 
- cls_score (Tensor): Box scores of single image with shape - (num_anchors, num_classes) - bbox_pred (Tensor): Box energies / deltas of single image - with shape (num_anchors, 4) - label (Tensor): classification target of each anchor with - shape (num_anchors,) - label_weight (Tensor): Classification loss weight of each - anchor with shape (num_anchors). - bbox_target (dict): Regression target of each anchor with - shape (num_anchors, 4). - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - pos_inds (Tensor): Index of all positive samples got from - first assign process. - - Returns: - Tensor: Losses of all positive samples in single image. - """ - if not len(pos_inds): - return cls_score.new([]), - anchors_all_level = torch.cat(anchors, 0) - pos_scores = cls_score[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_label = label[pos_inds] - pos_label_weight = label_weight[pos_inds] - pos_bbox_target = bbox_target[pos_inds] - pos_bbox_weight = bbox_weight[pos_inds] - pos_anchors = anchors_all_level[pos_inds] - pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred) - - # to keep loss dimension - loss_cls = self.loss_cls( - pos_scores, - pos_label, - pos_label_weight, - avg_factor=1.0, - reduction_override='none') - - loss_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_target, - pos_bbox_weight, - avg_factor=1.0, # keep same loss weight before reassign - reduction_override='none') - - loss_cls = loss_cls.sum(-1) - pos_loss = loss_bbox + loss_cls - return pos_loss, - - def paa_reassign(self, pos_losses, label, label_weight, bbox_weight, - pos_inds, pos_gt_inds, anchors): - """Fit loss to GMM distribution and separate positive, ignore, negative - samples again with GMM model. - - Args: - pos_losses (Tensor): Losses of all positive samples in - single image. - label (Tensor): classification target of each anchor with - shape (num_anchors,) - label_weight (Tensor): Classification loss weight of each - anchor with shape (num_anchors). - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - pos_inds (Tensor): Index of all positive samples got from - first assign process. - pos_gt_inds (Tensor): Gt_index of all positive samples got - from first assign process. - anchors (list[Tensor]): Anchors of each scale. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - label (Tensor): classification target of each anchor after - paa assign, with shape (num_anchors,) - - label_weight (Tensor): Classification loss weight of each - anchor after paa assign, with shape (num_anchors). - - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - - num_pos (int): The number of positive samples after paa - assign. 
- """ - if not len(pos_inds): - return label, label_weight, bbox_weight, 0 - label = label.clone() - label_weight = label_weight.clone() - bbox_weight = bbox_weight.clone() - num_gt = pos_gt_inds.max() + 1 - num_level = len(anchors) - num_anchors_each_level = [item.size(0) for item in anchors] - num_anchors_each_level.insert(0, 0) - inds_level_interval = np.cumsum(num_anchors_each_level) - pos_level_mask = [] - for i in range(num_level): - mask = (pos_inds >= inds_level_interval[i]) & ( - pos_inds < inds_level_interval[i + 1]) - pos_level_mask.append(mask) - pos_inds_after_paa = [label.new_tensor([])] - ignore_inds_after_paa = [label.new_tensor([])] - for gt_ind in range(num_gt): - pos_inds_gmm = [] - pos_loss_gmm = [] - gt_mask = pos_gt_inds == gt_ind - for level in range(num_level): - level_mask = pos_level_mask[level] - level_gt_mask = level_mask & gt_mask - value, topk_inds = pos_losses[level_gt_mask].topk( - min(level_gt_mask.sum(), self.topk), largest=False) - pos_inds_gmm.append(pos_inds[level_gt_mask][topk_inds]) - pos_loss_gmm.append(value) - pos_inds_gmm = torch.cat(pos_inds_gmm) - pos_loss_gmm = torch.cat(pos_loss_gmm) - # fix gmm need at least two sample - if len(pos_inds_gmm) < 2: - continue - device = pos_inds_gmm.device - pos_loss_gmm, sort_inds = pos_loss_gmm.sort() - pos_inds_gmm = pos_inds_gmm[sort_inds] - pos_loss_gmm = pos_loss_gmm.view(-1, 1).cpu().numpy() - min_loss, max_loss = pos_loss_gmm.min(), pos_loss_gmm.max() - means_init = np.array([min_loss, max_loss]).reshape(2, 1) - weights_init = np.array([0.5, 0.5]) - precisions_init = np.array([1.0, 1.0]).reshape(2, 1, 1) # full - if self.covariance_type == 'spherical': - precisions_init = precisions_init.reshape(2) - elif self.covariance_type == 'diag': - precisions_init = precisions_init.reshape(2, 1) - elif self.covariance_type == 'tied': - precisions_init = np.array([[1.0]]) - if skm is None: - raise ImportError('Please run "pip install sklearn" ' - 'to install sklearn first.') - gmm = skm.GaussianMixture( - 2, - weights_init=weights_init, - means_init=means_init, - precisions_init=precisions_init, - covariance_type=self.covariance_type) - gmm.fit(pos_loss_gmm) - gmm_assignment = gmm.predict(pos_loss_gmm) - scores = gmm.score_samples(pos_loss_gmm) - gmm_assignment = torch.from_numpy(gmm_assignment).to(device) - scores = torch.from_numpy(scores).to(device) - - pos_inds_temp, ignore_inds_temp = self.gmm_separation_scheme( - gmm_assignment, scores, pos_inds_gmm) - pos_inds_after_paa.append(pos_inds_temp) - ignore_inds_after_paa.append(ignore_inds_temp) - - pos_inds_after_paa = torch.cat(pos_inds_after_paa) - ignore_inds_after_paa = torch.cat(ignore_inds_after_paa) - reassign_mask = (pos_inds.unsqueeze(1) != pos_inds_after_paa).all(1) - reassign_ids = pos_inds[reassign_mask] - label[reassign_ids] = self.num_classes - label_weight[ignore_inds_after_paa] = 0 - bbox_weight[reassign_ids] = 0 - num_pos = len(pos_inds_after_paa) - return label, label_weight, bbox_weight, num_pos - - def gmm_separation_scheme(self, gmm_assignment, scores, pos_inds_gmm): - """A general separation scheme for gmm model. - - It separates a GMM distribution of candidate samples into three - parts, 0 1 and uncertain areas, and you can implement other - separation schemes by rewriting this function. - - Args: - gmm_assignment (Tensor): The prediction of GMM which is of shape - (num_samples,). The 0/1 value indicates the distribution - that each sample comes from. - scores (Tensor): The probability of sample coming from the - fit GMM distribution. 
The tensor is of shape (num_samples,). - pos_inds_gmm (Tensor): All the indexes of samples which are used - to fit GMM model. The tensor is of shape (num_samples,) - - Returns: - tuple[Tensor]: The indices of positive and ignored samples. - - - pos_inds_temp (Tensor): Indices of positive samples. - - ignore_inds_temp (Tensor): Indices of ignore samples. - """ - # The implementation is (c) in Fig.3 in origin paper instead of (b). - # You can refer to issues such as - # https://github.com/kkhoot/PAA/issues/8 and - # https://github.com/kkhoot/PAA/issues/9. - fgs = gmm_assignment == 0 - pos_inds_temp = fgs.new_tensor([], dtype=torch.long) - ignore_inds_temp = fgs.new_tensor([], dtype=torch.long) - if fgs.nonzero().numel(): - _, pos_thr_ind = scores[fgs].topk(1) - pos_inds_temp = pos_inds_gmm[fgs][:pos_thr_ind + 1] - ignore_inds_temp = pos_inds_gmm.new_tensor([]) - return pos_inds_temp, ignore_inds_temp - - def get_targets( - self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - ): - """Get targets for PAA head. - - This method is almost the same as `AnchorHead.get_targets()`. We direct - return the results from _get_targets_single instead map it to levels - by images_to_levels function. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels (list[Tensor]): Labels of all anchors, each with - shape (num_anchors,). - - label_weights (list[Tensor]): Label weights of all anchor. - each with shape (num_anchors,). - - bbox_targets (list[Tensor]): BBox targets of all anchors. - each with shape (num_anchors, 4). - - bbox_weights (list[Tensor]): BBox weights of all anchors. - each with shape (num_anchors, 4). - - pos_inds (list[Tensor]): Contains all index of positive - sample in all anchor. - - gt_inds (list[Tensor]): Contains all gt_index of positive - sample in all anchor. 
- """ - - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - - (labels, label_weights, bbox_targets, bbox_weights, valid_pos_inds, - valid_neg_inds, sampling_result) = results - - # Due to valid flag of anchors, we have to calculate the real pos_inds - # in origin anchor set. - pos_inds = [] - for i, single_labels in enumerate(labels): - pos_mask = (0 <= single_labels) & ( - single_labels < self.num_classes) - pos_inds.append(pos_mask.nonzero().view(-1)) - - gt_inds = [item.pos_assigned_gt_inds for item in sampling_result] - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - gt_inds) - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - This method is same as `AnchorHead._get_targets_single()`. - """ - assert unmap_outputs, 'We must map outputs back to the original' \ - 'set of anchors in PAAhead' - return super(ATSSHead, self)._get_targets_single( - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - score_factors=None, - img_metas=None, - cfg=None, - rescale=False, - with_nms=True, - **kwargs): - assert with_nms, 'PAA only supports "with_nms=True" now and it ' \ - 'means PAAHead does not support ' \ - 'test-time augmentation' - return super(ATSSHead, self).get_bboxes(cls_scores, bbox_preds, - score_factors, img_metas, cfg, - rescale, with_nms, **kwargs) - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factors from all scale - levels of a single image, each item has shape - (num_priors * 1, H, W). - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. 
- - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - cfg = self.test_cfg if cfg is None else cfg - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_score_factors = [] - for level_idx, (cls_score, bbox_pred, score_factor, priors) in \ - enumerate(zip(cls_score_list, bbox_pred_list, - score_factor_list, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - score_factor = score_factor.permute(1, 2, 0).reshape(-1).sigmoid() - - if 0 < nms_pre < scores.shape[0]: - max_scores, _ = (scores * - score_factor[:, None]).sqrt().max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - priors = priors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - score_factor = score_factor[topk_inds] - - bboxes = self.bbox_coder.decode( - priors, bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_score_factors.append(score_factor) - - return self._bbox_post_process(mlvl_scores, mlvl_bboxes, - img_meta['scale_factor'], cfg, rescale, - with_nms, mlvl_score_factors, **kwargs) - - def _bbox_post_process(self, - mlvl_scores, - mlvl_bboxes, - scale_factor, - cfg, - rescale=False, - with_nms=True, - mlvl_score_factors=None, - **kwargs): - """bbox post-processing method. - - The boxes would be rescaled to the original image scale and do - the nms operation. Usually with_nms is False is used for aug test. - - Args: - mlvl_scores (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_bboxes, num_class). - mlvl_bboxes (list[Tensor]): Decoded bboxes from all scale - levels of a single image, each item has shape (num_bboxes, 4). - scale_factor (ndarray, optional): Scale factor of the image arange - as (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - mlvl_score_factors (list[Tensor], optional): Score factor from - all scale levels of a single image, each item has shape - (num_bboxes, ). Default: None. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. 
- - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - - mlvl_iou_preds = torch.cat(mlvl_score_factors) - mlvl_nms_scores = (mlvl_scores * mlvl_iou_preds[:, None]).sqrt() - det_bboxes, det_labels = multiclass_nms( - mlvl_bboxes, - mlvl_nms_scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=None) - if self.with_score_voting and len(det_bboxes) > 0: - det_bboxes, det_labels = self.score_voting(det_bboxes, det_labels, - mlvl_bboxes, - mlvl_nms_scores, - cfg.score_thr) - - return det_bboxes, det_labels - - def score_voting(self, det_bboxes, det_labels, mlvl_bboxes, - mlvl_nms_scores, score_thr): - """Implementation of score voting method works on each remaining boxes - after NMS procedure. - - Args: - det_bboxes (Tensor): Remaining boxes after NMS procedure, - with shape (k, 5), each dimension means - (x1, y1, x2, y2, score). - det_labels (Tensor): The label of remaining boxes, with shape - (k, 1),Labels are 0-based. - mlvl_bboxes (Tensor): All boxes before the NMS procedure, - with shape (num_anchors,4). - mlvl_nms_scores (Tensor): The scores of all boxes which is used - in the NMS procedure, with shape (num_anchors, num_class) - score_thr (float): The score threshold of bboxes. - - Returns: - tuple: Usually returns a tuple containing voting results. - - - det_bboxes_voted (Tensor): Remaining boxes after - score voting procedure, with shape (k, 5), each - dimension means (x1, y1, x2, y2, score). - - det_labels_voted (Tensor): Label of remaining bboxes - after voting, with shape (num_anchors,). 
- """ - candidate_mask = mlvl_nms_scores > score_thr - candidate_mask_nonzeros = candidate_mask.nonzero(as_tuple=False) - candidate_inds = candidate_mask_nonzeros[:, 0] - candidate_labels = candidate_mask_nonzeros[:, 1] - candidate_bboxes = mlvl_bboxes[candidate_inds] - candidate_scores = mlvl_nms_scores[candidate_mask] - det_bboxes_voted = [] - det_labels_voted = [] - for cls in range(self.cls_out_channels): - candidate_cls_mask = candidate_labels == cls - if not candidate_cls_mask.any(): - continue - candidate_cls_scores = candidate_scores[candidate_cls_mask] - candidate_cls_bboxes = candidate_bboxes[candidate_cls_mask] - det_cls_mask = det_labels == cls - det_cls_bboxes = det_bboxes[det_cls_mask].view( - -1, det_bboxes.size(-1)) - det_candidate_ious = bbox_overlaps(det_cls_bboxes[:, :4], - candidate_cls_bboxes) - for det_ind in range(len(det_cls_bboxes)): - single_det_ious = det_candidate_ious[det_ind] - pos_ious_mask = single_det_ious > 0.01 - pos_ious = single_det_ious[pos_ious_mask] - pos_bboxes = candidate_cls_bboxes[pos_ious_mask] - pos_scores = candidate_cls_scores[pos_ious_mask] - pis = (torch.exp(-(1 - pos_ious)**2 / 0.025) * - pos_scores)[:, None] - voted_box = torch.sum( - pis * pos_bboxes, dim=0) / torch.sum( - pis, dim=0) - voted_score = det_cls_bboxes[det_ind][-1:][None, :] - det_bboxes_voted.append( - torch.cat((voted_box[None, :], voted_score), dim=1)) - det_labels_voted.append(cls) - - det_bboxes_voted = torch.cat(det_bboxes_voted, dim=0) - det_labels_voted = det_labels.new_tensor(det_labels_voted) - return det_bboxes_voted, det_labels_voted diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/pisa_retinanet_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/pisa_retinanet_head.py deleted file mode 100644 index 8654ef45..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/pisa_retinanet_head.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import images_to_levels -from ..builder import HEADS -from ..losses import carl_loss, isr_p -from .retina_head import RetinaHead - - -@HEADS.register_module() -class PISARetinaHead(RetinaHead): - """PISA Retinanet Head. - - The head owns the same structure with Retinanet Head, but differs in two - aspects: - 1. Importance-based Sample Reweighting Positive (ISR-P) is applied to - change the positive loss weights. - 2. Classification-aware regression loss is adopted as a third loss. - """ - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes of each image - with shape (num_obj, 4). - gt_labels (list[Tensor]): Ground truth labels of each image - with shape (num_obj, 4). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): Ignored gt bboxes of each image. - Default: None. - - Returns: - dict: Loss dict, comprise classification loss, regression loss and - carl loss. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - return_sampling_results=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, sampling_results_list) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - num_imgs = len(img_metas) - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(num_imgs, -1, label_channels) - for cls_score in cls_scores - ] - flatten_cls_scores = torch.cat( - flatten_cls_scores, dim=1).reshape(-1, - flatten_cls_scores[0].size(-1)) - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) - for bbox_pred in bbox_preds - ] - flatten_bbox_preds = torch.cat( - flatten_bbox_preds, dim=1).view(-1, flatten_bbox_preds[0].size(-1)) - flatten_labels = torch.cat(labels_list, dim=1).reshape(-1) - flatten_label_weights = torch.cat( - label_weights_list, dim=1).reshape(-1) - flatten_anchors = torch.cat(all_anchor_list, dim=1).reshape(-1, 4) - flatten_bbox_targets = torch.cat( - bbox_targets_list, dim=1).reshape(-1, 4) - flatten_bbox_weights = torch.cat( - bbox_weights_list, dim=1).reshape(-1, 4) - - # Apply ISR-P - isr_cfg = self.train_cfg.get('isr', None) - if isr_cfg is not None: - all_targets = (flatten_labels, flatten_label_weights, - flatten_bbox_targets, flatten_bbox_weights) - with torch.no_grad(): - all_targets = isr_p( - flatten_cls_scores, - flatten_bbox_preds, - all_targets, - flatten_anchors, - sampling_results_list, - bbox_coder=self.bbox_coder, - loss_cls=self.loss_cls, - num_class=self.num_classes, - **self.train_cfg.isr) - (flatten_labels, flatten_label_weights, flatten_bbox_targets, - flatten_bbox_weights) = all_targets - - # For convenience we compute loss once instead separating by fpn level, - # so that we don't need to separate the weights by level again. 
- # The result should be the same - losses_cls = self.loss_cls( - flatten_cls_scores, - flatten_labels, - flatten_label_weights, - avg_factor=num_total_samples) - losses_bbox = self.loss_bbox( - flatten_bbox_preds, - flatten_bbox_targets, - flatten_bbox_weights, - avg_factor=num_total_samples) - loss_dict = dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - # CARL Loss - carl_cfg = self.train_cfg.get('carl', None) - if carl_cfg is not None: - loss_carl = carl_loss( - flatten_cls_scores, - flatten_labels, - flatten_bbox_preds, - flatten_bbox_targets, - self.loss_bbox, - **self.train_cfg.carl, - avg_factor=num_total_pos, - sigmoid=True, - num_class=self.num_classes) - loss_dict.update(loss_carl) - - return loss_dict diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/pisa_ssd_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/pisa_ssd_head.py deleted file mode 100644 index 86b67abe..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/pisa_ssd_head.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import multi_apply -from ..builder import HEADS -from ..losses import CrossEntropyLoss, SmoothL1Loss, carl_loss, isr_p -from .ssd_head import SSDHead - - -# TODO: add loss evaluator for SSD -@HEADS.register_module() -class PISASSDHead(SSDHead): - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes of each image - with shape (num_obj, 4). - gt_labels (list[Tensor]): Ground truth labels of each image - with shape (num_obj, 4). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): Ignored gt bboxes of each image. - Default: None. - - Returns: - dict: Loss dict, comprise classification loss regression loss and - carl loss. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=1, - unmap_outputs=False, - return_sampling_results=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, sampling_results_list) = cls_reg_targets - - num_images = len(img_metas) - all_cls_scores = torch.cat([ - s.permute(0, 2, 3, 1).reshape( - num_images, -1, self.cls_out_channels) for s in cls_scores - ], 1) - all_labels = torch.cat(labels_list, -1).view(num_images, -1) - all_label_weights = torch.cat(label_weights_list, - -1).view(num_images, -1) - all_bbox_preds = torch.cat([ - b.permute(0, 2, 3, 1).reshape(num_images, -1, 4) - for b in bbox_preds - ], -2) - all_bbox_targets = torch.cat(bbox_targets_list, - -2).view(num_images, -1, 4) - all_bbox_weights = torch.cat(bbox_weights_list, - -2).view(num_images, -1, 4) - - # concat all level anchors to a single tensor - all_anchors = [] - for i in range(num_images): - all_anchors.append(torch.cat(anchor_list[i])) - - isr_cfg = self.train_cfg.get('isr', None) - all_targets = (all_labels.view(-1), all_label_weights.view(-1), - all_bbox_targets.view(-1, - 4), all_bbox_weights.view(-1, 4)) - # apply ISR-P - if isr_cfg is not None: - all_targets = isr_p( - all_cls_scores.view(-1, all_cls_scores.size(-1)), - all_bbox_preds.view(-1, 4), - all_targets, - torch.cat(all_anchors), - sampling_results_list, - loss_cls=CrossEntropyLoss(), - bbox_coder=self.bbox_coder, - **self.train_cfg.isr, - num_class=self.num_classes) - (new_labels, new_label_weights, new_bbox_targets, - new_bbox_weights) = all_targets - all_labels = new_labels.view(all_labels.shape) - all_label_weights = new_label_weights.view(all_label_weights.shape) - all_bbox_targets = new_bbox_targets.view(all_bbox_targets.shape) - all_bbox_weights = new_bbox_weights.view(all_bbox_weights.shape) - - # add CARL loss - carl_loss_cfg = self.train_cfg.get('carl', None) - if carl_loss_cfg is not None: - loss_carl = carl_loss( - all_cls_scores.view(-1, all_cls_scores.size(-1)), - all_targets[0], - all_bbox_preds.view(-1, 4), - all_targets[2], - SmoothL1Loss(beta=1.), - **self.train_cfg.carl, - avg_factor=num_total_pos, - num_class=self.num_classes) - - # check NaN and Inf - assert torch.isfinite(all_cls_scores).all().item(), \ - 'classification scores become infinite or NaN!' - assert torch.isfinite(all_bbox_preds).all().item(), \ - 'bbox predications become infinite or NaN!' 
- - losses_cls, losses_bbox = multi_apply( - self.loss_single, - all_cls_scores, - all_bbox_preds, - all_anchors, - all_labels, - all_label_weights, - all_bbox_targets, - all_bbox_weights, - num_total_samples=num_total_pos) - loss_dict = dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - if carl_loss_cfg is not None: - loss_dict.update(loss_carl) - return loss_dict diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/reppoints_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/reppoints_head.py deleted file mode 100644 index f7204141..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/reppoints_head.py +++ /dev/null @@ -1,764 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import DeformConv2d - -from mmdet.core import (build_assigner, build_sampler, images_to_levels, - multi_apply, unmap) -from mmdet.core.anchor.point_generator import MlvlPointGenerator -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class RepPointsHead(AnchorFreeHead): - """RepPoint head. - - Args: - point_feat_channels (int): Number of channels of points features. - gradient_mul (float): The multiplier to gradients from - points refinement and recognition. - point_strides (Iterable): points strides. - point_base_scale (int): bbox scale for assigning labels. - loss_cls (dict): Config of classification loss. - loss_bbox_init (dict): Config of initial points loss. - loss_bbox_refine (dict): Config of points loss in refinement. - use_grid_points (bool): If we use bounding box representation, the - reppoints is represented as grid points on the bounding box. - center_init (bool): Whether to use center point assignment. - transform_method (str): The methods to transform RepPoints to bbox. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - point_feat_channels=256, - num_points=9, - gradient_mul=0.1, - point_strides=[8, 16, 32, 64, 128], - point_base_scale=4, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_init=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=0.5), - loss_bbox_refine=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - use_grid_points=False, - center_init=True, - transform_method='moment', - moment_mul=0.01, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='reppoints_cls_out', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.num_points = num_points - self.point_feat_channels = point_feat_channels - self.use_grid_points = use_grid_points - self.center_init = center_init - - # we use deform conv to extract points features - self.dcn_kernel = int(np.sqrt(num_points)) - self.dcn_pad = int((self.dcn_kernel - 1) / 2) - assert self.dcn_kernel * self.dcn_kernel == num_points, \ - 'The points number should be a square number.' - assert self.dcn_kernel % 2 == 1, \ - 'The points number should be an odd square number.' 
- dcn_base = np.arange(-self.dcn_pad, - self.dcn_pad + 1).astype(np.float64) - dcn_base_y = np.repeat(dcn_base, self.dcn_kernel) - dcn_base_x = np.tile(dcn_base, self.dcn_kernel) - dcn_base_offset = np.stack([dcn_base_y, dcn_base_x], axis=1).reshape( - (-1)) - self.dcn_base_offset = torch.tensor(dcn_base_offset).view(1, -1, 1, 1) - - super().__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - init_cfg=init_cfg, - **kwargs) - - self.gradient_mul = gradient_mul - self.point_base_scale = point_base_scale - self.point_strides = point_strides - self.prior_generator = MlvlPointGenerator( - self.point_strides, offset=0.) - - self.sampling = loss_cls['type'] not in ['FocalLoss'] - if self.train_cfg: - self.init_assigner = build_assigner(self.train_cfg.init.assigner) - self.refine_assigner = build_assigner( - self.train_cfg.refine.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.transform_method = transform_method - if self.transform_method == 'moment': - self.moment_transfer = nn.Parameter( - data=torch.zeros(2), requires_grad=True) - self.moment_mul = moment_mul - - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - if self.use_sigmoid_cls: - self.cls_out_channels = self.num_classes - else: - self.cls_out_channels = self.num_classes + 1 - self.loss_bbox_init = build_loss(loss_bbox_init) - self.loss_bbox_refine = build_loss(loss_bbox_refine) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - pts_out_dim = 4 if self.use_grid_points else 2 * self.num_points - self.reppoints_cls_conv = DeformConv2d(self.feat_channels, - self.point_feat_channels, - self.dcn_kernel, 1, - self.dcn_pad) - self.reppoints_cls_out = nn.Conv2d(self.point_feat_channels, - self.cls_out_channels, 1, 1, 0) - self.reppoints_pts_init_conv = nn.Conv2d(self.feat_channels, - self.point_feat_channels, 3, - 1, 1) - self.reppoints_pts_init_out = nn.Conv2d(self.point_feat_channels, - pts_out_dim, 1, 1, 0) - self.reppoints_pts_refine_conv = DeformConv2d(self.feat_channels, - self.point_feat_channels, - self.dcn_kernel, 1, - self.dcn_pad) - self.reppoints_pts_refine_out = nn.Conv2d(self.point_feat_channels, - pts_out_dim, 1, 1, 0) - - def points2bbox(self, pts, y_first=True): - """Converting the points set into bounding box. - - :param pts: the input points sets (fields), each points - set (fields) is represented as 2n scalar. - :param y_first: if y_first=True, the point set is represented as - [y1, x1, y2, x2 ... yn, xn], otherwise the point set is - represented as [x1, y1, x2, y2 ... xn, yn]. - :return: each points set is converting to a bbox [x1, y1, x2, y2]. - """ - pts_reshape = pts.view(pts.shape[0], -1, 2, *pts.shape[2:]) - pts_y = pts_reshape[:, :, 0, ...] if y_first else pts_reshape[:, :, 1, - ...] - pts_x = pts_reshape[:, :, 1, ...] if y_first else pts_reshape[:, :, 0, - ...] 
- if self.transform_method == 'minmax': - bbox_left = pts_x.min(dim=1, keepdim=True)[0] - bbox_right = pts_x.max(dim=1, keepdim=True)[0] - bbox_up = pts_y.min(dim=1, keepdim=True)[0] - bbox_bottom = pts_y.max(dim=1, keepdim=True)[0] - bbox = torch.cat([bbox_left, bbox_up, bbox_right, bbox_bottom], - dim=1) - elif self.transform_method == 'partial_minmax': - pts_y = pts_y[:, :4, ...] - pts_x = pts_x[:, :4, ...] - bbox_left = pts_x.min(dim=1, keepdim=True)[0] - bbox_right = pts_x.max(dim=1, keepdim=True)[0] - bbox_up = pts_y.min(dim=1, keepdim=True)[0] - bbox_bottom = pts_y.max(dim=1, keepdim=True)[0] - bbox = torch.cat([bbox_left, bbox_up, bbox_right, bbox_bottom], - dim=1) - elif self.transform_method == 'moment': - pts_y_mean = pts_y.mean(dim=1, keepdim=True) - pts_x_mean = pts_x.mean(dim=1, keepdim=True) - pts_y_std = torch.std(pts_y - pts_y_mean, dim=1, keepdim=True) - pts_x_std = torch.std(pts_x - pts_x_mean, dim=1, keepdim=True) - moment_transfer = (self.moment_transfer * self.moment_mul) + ( - self.moment_transfer.detach() * (1 - self.moment_mul)) - moment_width_transfer = moment_transfer[0] - moment_height_transfer = moment_transfer[1] - half_width = pts_x_std * torch.exp(moment_width_transfer) - half_height = pts_y_std * torch.exp(moment_height_transfer) - bbox = torch.cat([ - pts_x_mean - half_width, pts_y_mean - half_height, - pts_x_mean + half_width, pts_y_mean + half_height - ], - dim=1) - else: - raise NotImplementedError - return bbox - - def gen_grid_from_reg(self, reg, previous_boxes): - """Base on the previous bboxes and regression values, we compute the - regressed bboxes and generate the grids on the bboxes. - - :param reg: the regression value to previous bboxes. - :param previous_boxes: previous bboxes. - :return: generate grids on the regressed bboxes. - """ - b, _, h, w = reg.shape - bxy = (previous_boxes[:, :2, ...] + previous_boxes[:, 2:, ...]) / 2. - bwh = (previous_boxes[:, 2:, ...] - - previous_boxes[:, :2, ...]).clamp(min=1e-6) - grid_topleft = bxy + bwh * reg[:, :2, ...] - 0.5 * bwh * torch.exp( - reg[:, 2:, ...]) - grid_wh = bwh * torch.exp(reg[:, 2:, ...]) - grid_left = grid_topleft[:, [0], ...] - grid_top = grid_topleft[:, [1], ...] - grid_width = grid_wh[:, [0], ...] - grid_height = grid_wh[:, [1], ...] - intervel = torch.linspace(0., 1., self.dcn_kernel).view( - 1, self.dcn_kernel, 1, 1).type_as(reg) - grid_x = grid_left + grid_width * intervel - grid_x = grid_x.unsqueeze(1).repeat(1, self.dcn_kernel, 1, 1, 1) - grid_x = grid_x.view(b, -1, h, w) - grid_y = grid_top + grid_height * intervel - grid_y = grid_y.unsqueeze(2).repeat(1, 1, self.dcn_kernel, 1, 1) - grid_y = grid_y.view(b, -1, h, w) - grid_yx = torch.stack([grid_y, grid_x], dim=2) - grid_yx = grid_yx.view(b, -1, h, w) - regressed_bbox = torch.cat([ - grid_left, grid_top, grid_left + grid_width, grid_top + grid_height - ], 1) - return grid_yx, regressed_bbox - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def forward_single(self, x): - """Forward feature map of a single FPN level.""" - dcn_base_offset = self.dcn_base_offset.type_as(x) - # If we use center_init, the initial reppoints is from center points. - # If we use bounding bbox representation, the initial reppoints is - # from regular grid placed on a pre-defined bbox. 
- if self.use_grid_points or not self.center_init: - scale = self.point_base_scale / 2 - points_init = dcn_base_offset / dcn_base_offset.max() * scale - bbox_init = x.new_tensor([-scale, -scale, scale, - scale]).view(1, 4, 1, 1) - else: - points_init = 0 - cls_feat = x - pts_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - pts_feat = reg_conv(pts_feat) - # initialize reppoints - pts_out_init = self.reppoints_pts_init_out( - self.relu(self.reppoints_pts_init_conv(pts_feat))) - if self.use_grid_points: - pts_out_init, bbox_out_init = self.gen_grid_from_reg( - pts_out_init, bbox_init.detach()) - else: - pts_out_init = pts_out_init + points_init - # refine and classify reppoints - pts_out_init_grad_mul = (1 - self.gradient_mul) * pts_out_init.detach( - ) + self.gradient_mul * pts_out_init - dcn_offset = pts_out_init_grad_mul - dcn_base_offset - cls_out = self.reppoints_cls_out( - self.relu(self.reppoints_cls_conv(cls_feat, dcn_offset))) - pts_out_refine = self.reppoints_pts_refine_out( - self.relu(self.reppoints_pts_refine_conv(pts_feat, dcn_offset))) - if self.use_grid_points: - pts_out_refine, bbox_out_refine = self.gen_grid_from_reg( - pts_out_refine, bbox_out_init.detach()) - else: - pts_out_refine = pts_out_refine + pts_out_init.detach() - - if self.training: - return cls_out, pts_out_init, pts_out_refine - else: - return cls_out, self.points2bbox(pts_out_refine) - - def get_points(self, featmap_sizes, img_metas, device): - """Get points according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - - Returns: - tuple: points of each image, valid flags of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # points center for one time - multi_level_points = self.prior_generator.grid_priors( - featmap_sizes, device=device, with_stride=True) - points_list = [[point.clone() for point in multi_level_points] - for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level grids - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.prior_generator.valid_flags( - featmap_sizes, img_meta['pad_shape']) - valid_flag_list.append(multi_level_flags) - - return points_list, valid_flag_list - - def centers_to_bboxes(self, point_list): - """Get bboxes according to center points. - - Only used in :class:`MaxIoUAssigner`. 
- """ - bbox_list = [] - for i_img, point in enumerate(point_list): - bbox = [] - for i_lvl in range(len(self.point_strides)): - scale = self.point_base_scale * self.point_strides[i_lvl] * 0.5 - bbox_shift = torch.Tensor([-scale, -scale, scale, - scale]).view(1, 4).type_as(point[0]) - bbox_center = torch.cat( - [point[i_lvl][:, :2], point[i_lvl][:, :2]], dim=1) - bbox.append(bbox_center + bbox_shift) - bbox_list.append(bbox) - return bbox_list - - def offset_to_pts(self, center_list, pred_list): - """Change from point offset to point coordinate.""" - pts_list = [] - for i_lvl in range(len(self.point_strides)): - pts_lvl = [] - for i_img in range(len(center_list)): - pts_center = center_list[i_img][i_lvl][:, :2].repeat( - 1, self.num_points) - pts_shift = pred_list[i_lvl][i_img] - yx_pts_shift = pts_shift.permute(1, 2, 0).view( - -1, 2 * self.num_points) - y_pts_shift = yx_pts_shift[..., 0::2] - x_pts_shift = yx_pts_shift[..., 1::2] - xy_pts_shift = torch.stack([x_pts_shift, y_pts_shift], -1) - xy_pts_shift = xy_pts_shift.view(*yx_pts_shift.shape[:-1], -1) - pts = xy_pts_shift * self.point_strides[i_lvl] + pts_center - pts_lvl.append(pts) - pts_lvl = torch.stack(pts_lvl, 0) - pts_list.append(pts_lvl) - return pts_list - - def _point_target_single(self, - flat_proposals, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - stage='init', - unmap_outputs=True): - inside_flags = valid_flags - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample proposals - proposals = flat_proposals[inside_flags, :] - - if stage == 'init': - assigner = self.init_assigner - pos_weight = self.train_cfg.init.pos_weight - else: - assigner = self.refine_assigner - pos_weight = self.train_cfg.refine.pos_weight - assign_result = assigner.assign(proposals, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - sampling_result = self.sampler.sample(assign_result, proposals, - gt_bboxes) - - num_valid_proposals = proposals.shape[0] - bbox_gt = proposals.new_zeros([num_valid_proposals, 4]) - pos_proposals = torch.zeros_like(proposals) - proposals_weights = proposals.new_zeros([num_valid_proposals, 4]) - labels = proposals.new_full((num_valid_proposals, ), - self.num_classes, - dtype=torch.long) - label_weights = proposals.new_zeros( - num_valid_proposals, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - pos_gt_bboxes = sampling_result.pos_gt_bboxes - bbox_gt[pos_inds, :] = pos_gt_bboxes - pos_proposals[pos_inds, :] = proposals[pos_inds, :] - proposals_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of proposals - if unmap_outputs: - num_total_proposals = flat_proposals.size(0) - labels = unmap(labels, num_total_proposals, inside_flags) - label_weights = unmap(label_weights, num_total_proposals, - inside_flags) - bbox_gt = unmap(bbox_gt, num_total_proposals, inside_flags) - pos_proposals = unmap(pos_proposals, num_total_proposals, - inside_flags) - proposals_weights = unmap(proposals_weights, num_total_proposals, - inside_flags) - - return (labels, label_weights, bbox_gt, pos_proposals, - proposals_weights, pos_inds, neg_inds) - - def 
get_targets(self, - proposals_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - stage='init', - label_channels=1, - unmap_outputs=True): - """Compute corresponding GT box and classification targets for - proposals. - - Args: - proposals_list (list[list]): Multi level points/bboxes of each - image. - valid_flag_list (list[list]): Multi level valid flags of each - image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_bboxes_list (list[Tensor]): Ground truth labels of each box. - stage (str): `init` or `refine`. Generate target for init stage or - refine stage - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each level. # noqa: E501 - - bbox_gt_list (list[Tensor]): Ground truth bbox of each level. - - proposal_list (list[Tensor]): Proposals(points/bboxes) of each level. # noqa: E501 - - proposal_weights_list (list[Tensor]): Proposal weights of each level. # noqa: E501 - - num_total_pos (int): Number of positive samples in all images. # noqa: E501 - - num_total_neg (int): Number of negative samples in all images. # noqa: E501 - """ - assert stage in ['init', 'refine'] - num_imgs = len(img_metas) - assert len(proposals_list) == len(valid_flag_list) == num_imgs - - # points number of multi levels - num_level_proposals = [points.size(0) for points in proposals_list[0]] - - # concat all level points and flags to a single tensor - for i in range(num_imgs): - assert len(proposals_list[i]) == len(valid_flag_list[i]) - proposals_list[i] = torch.cat(proposals_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_labels, all_label_weights, all_bbox_gt, all_proposals, - all_proposal_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._point_target_single, - proposals_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - stage=stage, - unmap_outputs=unmap_outputs) - # no valid points - if any([labels is None for labels in all_labels]): - return None - # sampled points of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - labels_list = images_to_levels(all_labels, num_level_proposals) - label_weights_list = images_to_levels(all_label_weights, - num_level_proposals) - bbox_gt_list = images_to_levels(all_bbox_gt, num_level_proposals) - proposals_list = images_to_levels(all_proposals, num_level_proposals) - proposal_weights_list = images_to_levels(all_proposal_weights, - num_level_proposals) - return (labels_list, label_weights_list, bbox_gt_list, proposals_list, - proposal_weights_list, num_total_pos, num_total_neg) - - def loss_single(self, cls_score, pts_pred_init, pts_pred_refine, labels, - label_weights, bbox_gt_init, bbox_weights_init, - bbox_gt_refine, bbox_weights_refine, stride, - num_total_samples_init, num_total_samples_refine): - # classification loss - labels = labels.reshape(-1) - label_weights = 
label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - cls_score = cls_score.contiguous() - loss_cls = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=num_total_samples_refine) - - # points loss - bbox_gt_init = bbox_gt_init.reshape(-1, 4) - bbox_weights_init = bbox_weights_init.reshape(-1, 4) - bbox_pred_init = self.points2bbox( - pts_pred_init.reshape(-1, 2 * self.num_points), y_first=False) - bbox_gt_refine = bbox_gt_refine.reshape(-1, 4) - bbox_weights_refine = bbox_weights_refine.reshape(-1, 4) - bbox_pred_refine = self.points2bbox( - pts_pred_refine.reshape(-1, 2 * self.num_points), y_first=False) - normalize_term = self.point_base_scale * stride - loss_pts_init = self.loss_bbox_init( - bbox_pred_init / normalize_term, - bbox_gt_init / normalize_term, - bbox_weights_init, - avg_factor=num_total_samples_init) - loss_pts_refine = self.loss_bbox_refine( - bbox_pred_refine / normalize_term, - bbox_gt_refine / normalize_term, - bbox_weights_refine, - avg_factor=num_total_samples_refine) - return loss_cls, loss_pts_init, loss_pts_refine - - def loss(self, - cls_scores, - pts_preds_init, - pts_preds_refine, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - device = cls_scores[0].device - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - # target for initial stage - center_list, valid_flag_list = self.get_points(featmap_sizes, - img_metas, device) - pts_coordinate_preds_init = self.offset_to_pts(center_list, - pts_preds_init) - if self.train_cfg.init.assigner['type'] == 'PointAssigner': - # Assign target for center list - candidate_list = center_list - else: - # transform center list to bbox list and - # assign target for bbox list - bbox_list = self.centers_to_bboxes(center_list) - candidate_list = bbox_list - cls_reg_targets_init = self.get_targets( - candidate_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - stage='init', - label_channels=label_channels) - (*_, bbox_gt_list_init, candidate_list_init, bbox_weights_list_init, - num_total_pos_init, num_total_neg_init) = cls_reg_targets_init - num_total_samples_init = ( - num_total_pos_init + - num_total_neg_init if self.sampling else num_total_pos_init) - - # target for refinement stage - center_list, valid_flag_list = self.get_points(featmap_sizes, - img_metas, device) - pts_coordinate_preds_refine = self.offset_to_pts( - center_list, pts_preds_refine) - bbox_list = [] - for i_img, center in enumerate(center_list): - bbox = [] - for i_lvl in range(len(pts_preds_refine)): - bbox_preds_init = self.points2bbox( - pts_preds_init[i_lvl].detach()) - bbox_shift = bbox_preds_init * self.point_strides[i_lvl] - bbox_center = torch.cat( - [center[i_lvl][:, :2], center[i_lvl][:, :2]], dim=1) - bbox.append(bbox_center + - bbox_shift[i_img].permute(1, 2, 0).reshape(-1, 4)) - bbox_list.append(bbox) - cls_reg_targets_refine = self.get_targets( - bbox_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - stage='refine', - label_channels=label_channels) - (labels_list, label_weights_list, bbox_gt_list_refine, - candidate_list_refine, bbox_weights_list_refine, num_total_pos_refine, - num_total_neg_refine) = cls_reg_targets_refine - num_total_samples_refine = ( - num_total_pos_refine + - num_total_neg_refine if self.sampling else 
num_total_pos_refine) - - # compute loss - losses_cls, losses_pts_init, losses_pts_refine = multi_apply( - self.loss_single, - cls_scores, - pts_coordinate_preds_init, - pts_coordinate_preds_refine, - labels_list, - label_weights_list, - bbox_gt_list_init, - bbox_weights_list_init, - bbox_gt_list_refine, - bbox_weights_list_refine, - self.point_strides, - num_total_samples_init=num_total_samples_init, - num_total_samples_refine=num_total_samples_refine) - loss_dict_all = { - 'loss_cls': losses_cls, - 'loss_pts_init': losses_pts_init, - 'loss_pts_refine': losses_pts_refine - } - return loss_dict_all - - # Same as base_dense_head/_get_bboxes_single except self._bbox_decode - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. RepPoints head does not need - this value. - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 2). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_score_list) == len(bbox_pred_list) - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for level_idx, (cls_score, bbox_pred, priors) in enumerate( - zip(cls_score_list, bbox_pred_list, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1)[:, :-1] - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. 
- results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, _, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - bboxes = self._bbox_decode(priors, bbox_pred, - self.point_strides[level_idx], - img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process( - mlvl_scores, - mlvl_labels, - mlvl_bboxes, - img_meta['scale_factor'], - cfg, - rescale=rescale, - with_nms=with_nms) - - def _bbox_decode(self, points, bbox_pred, stride, max_shape): - bbox_pos_center = torch.cat([points[:, :2], points[:, :2]], dim=1) - bboxes = bbox_pred * stride + bbox_pos_center - x1 = bboxes[:, 0].clamp(min=0, max=max_shape[1]) - y1 = bboxes[:, 1].clamp(min=0, max=max_shape[0]) - x2 = bboxes[:, 2].clamp(min=0, max=max_shape[1]) - y2 = bboxes[:, 3].clamp(min=0, max=max_shape[0]) - decoded_bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - return decoded_bboxes diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/retina_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/retina_head.py deleted file mode 100644 index a48720c2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/retina_head.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class RetinaHead(AnchorHead): - r"""An anchor-based head used in `RetinaNet - `_. - - The head contains two subnetworks. The first classifies anchor boxes and - the second regresses deltas for the anchors. 
- - Example: - >>> import torch - >>> self = RetinaHead(11, 7) - >>> x = torch.rand(1, 7, 32, 32) - >>> cls_score, bbox_pred = self.forward_single(x) - >>> # Each anchor predicts a score for each class except background - >>> cls_per_anchor = cls_score.shape[1] / self.num_anchors - >>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors - >>> assert cls_per_anchor == (self.num_classes) - >>> assert box_per_anchor == 4 - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='retina_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(RetinaHead, self).__init__( - num_classes, - in_channels, - anchor_generator=anchor_generator, - init_cfg=init_cfg, - **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.retina_cls = nn.Conv2d( - self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 3, - padding=1) - self.retina_reg = nn.Conv2d( - self.feat_channels, self.num_base_priors * 4, 3, padding=1) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale - level, the channels number is num_anchors * 4. - """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.retina_cls(cls_feat) - bbox_pred = self.retina_reg(reg_feat) - return cls_score, bbox_pred diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/retina_sepbn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/retina_sepbn_head.py deleted file mode 100644 index b385c618..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/retina_sepbn_head.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init - -from ..builder import HEADS -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class RetinaSepBNHead(AnchorHead): - """"RetinaHead with separate BN. - - In RetinaHead, conv/norm layers are shared across different FPN levels, - while in RetinaSepBNHead, conv layers are shared across different FPN - levels, but BN layers are separated. 
- """ - - def __init__(self, - num_classes, - num_ins, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.num_ins = num_ins - super(RetinaSepBNHead, self).__init__( - num_classes, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.num_ins): - cls_convs = nn.ModuleList() - reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.cls_convs.append(cls_convs) - self.reg_convs.append(reg_convs) - for i in range(self.stacked_convs): - for j in range(1, self.num_ins): - self.cls_convs[j][i].conv = self.cls_convs[0][i].conv - self.reg_convs[j][i].conv = self.reg_convs[0][i].conv - self.retina_cls = nn.Conv2d( - self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 3, - padding=1) - self.retina_reg = nn.Conv2d( - self.feat_channels, self.num_base_priors * 4, 3, padding=1) - - def init_weights(self): - """Initialize weights of the head.""" - super(RetinaSepBNHead, self).init_weights() - for m in self.cls_convs[0]: - normal_init(m.conv, std=0.01) - for m in self.reg_convs[0]: - normal_init(m.conv, std=0.01) - bias_cls = bias_init_with_prob(0.01) - normal_init(self.retina_cls, std=0.01, bias=bias_cls) - normal_init(self.retina_reg, std=0.01) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - cls_scores = [] - bbox_preds = [] - for i, x in enumerate(feats): - cls_feat = feats[i] - reg_feat = feats[i] - for cls_conv in self.cls_convs[i]: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs[i]: - reg_feat = reg_conv(reg_feat) - cls_score = self.retina_cls(cls_feat) - bbox_pred = self.retina_reg(reg_feat) - cls_scores.append(cls_score) - bbox_preds.append(bbox_pred) - return cls_scores, bbox_preds diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/rpn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/rpn_head.py deleted file mode 100644 index f5d6a3b3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/rpn_head.py +++ /dev/null @@ -1,265 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.ops import batched_nms - -from ..builder import HEADS -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class RPNHead(AnchorHead): - """RPN head. - - Args: - in_channels (int): Number of channels in the input feature map. - init_cfg (dict or list[dict], optional): Initialization config dict. - num_convs (int): Number of convolution layers in the head. Default 1. - """ # noqa: W605 - - def __init__(self, - in_channels, - init_cfg=dict(type='Normal', layer='Conv2d', std=0.01), - num_convs=1, - **kwargs): - self.num_convs = num_convs - super(RPNHead, self).__init__( - 1, in_channels, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - if self.num_convs > 1: - rpn_convs = [] - for i in range(self.num_convs): - if i == 0: - in_channels = self.in_channels - else: - in_channels = self.feat_channels - # use ``inplace=False`` to avoid error: one of the variables - # needed for gradient computation has been modified by an - # inplace operation. - rpn_convs.append( - ConvModule( - in_channels, - self.feat_channels, - 3, - padding=1, - inplace=False)) - self.rpn_conv = nn.Sequential(*rpn_convs) - else: - self.rpn_conv = nn.Conv2d( - self.in_channels, self.feat_channels, 3, padding=1) - self.rpn_cls = nn.Conv2d(self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 1) - self.rpn_reg = nn.Conv2d(self.feat_channels, self.num_base_priors * 4, - 1) - - def forward_single(self, x): - """Forward feature map of a single scale level.""" - x = self.rpn_conv(x) - x = F.relu(x, inplace=True) - rpn_cls_score = self.rpn_cls(x) - rpn_bbox_pred = self.rpn_reg(x) - return rpn_cls_score, rpn_bbox_pred - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - losses = super(RPNHead, self).loss( - cls_scores, - bbox_preds, - gt_bboxes, - None, - img_metas, - gt_bboxes_ignore=gt_bboxes_ignore) - return dict( - loss_rpn_cls=losses['loss_cls'], loss_rpn_bbox=losses['loss_bbox']) - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_anchors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_anchors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has - shape (num_anchors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. RPN head does not need this value. 
- mlvl_anchors (list[Tensor]): Anchors of all scale level - each item has shape (num_anchors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - Tensor: Labeled boxes in shape (n, 5), where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. - """ - cfg = self.test_cfg if cfg is None else cfg - cfg = copy.deepcopy(cfg) - img_shape = img_meta['img_shape'] - - # bboxes from different level should be independent during NMS, - # level_ids are used as labels for batched NMS to separate them - level_ids = [] - mlvl_scores = [] - mlvl_bbox_preds = [] - mlvl_valid_anchors = [] - nms_pre = cfg.get('nms_pre', -1) - for level_idx in range(len(cls_score_list)): - rpn_cls_score = cls_score_list[level_idx] - rpn_bbox_pred = bbox_pred_list[level_idx] - assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:] - rpn_cls_score = rpn_cls_score.permute(1, 2, 0) - if self.use_sigmoid_cls: - rpn_cls_score = rpn_cls_score.reshape(-1) - scores = rpn_cls_score.sigmoid() - else: - rpn_cls_score = rpn_cls_score.reshape(-1, 2) - # We set FG labels to [0, num_class-1] and BG label to - # num_class in RPN head since mmdet v2.5, which is unified to - # be consistent with other head since mmdet v2.0. In mmdet v2.0 - # to v2.4 we keep BG label as 0 and FG label as 1 in rpn head. - scores = rpn_cls_score.softmax(dim=1)[:, 0] - rpn_bbox_pred = rpn_bbox_pred.permute(1, 2, 0).reshape(-1, 4) - - anchors = mlvl_anchors[level_idx] - if 0 < nms_pre < scores.shape[0]: - # sort is faster than topk - # _, topk_inds = scores.topk(cfg.nms_pre) - ranked_scores, rank_inds = scores.sort(descending=True) - topk_inds = rank_inds[:nms_pre] - scores = ranked_scores[:nms_pre] - rpn_bbox_pred = rpn_bbox_pred[topk_inds, :] - anchors = anchors[topk_inds, :] - - mlvl_scores.append(scores) - mlvl_bbox_preds.append(rpn_bbox_pred) - mlvl_valid_anchors.append(anchors) - level_ids.append( - scores.new_full((scores.size(0), ), - level_idx, - dtype=torch.long)) - - return self._bbox_post_process(mlvl_scores, mlvl_bbox_preds, - mlvl_valid_anchors, level_ids, cfg, - img_shape) - - def _bbox_post_process(self, mlvl_scores, mlvl_bboxes, mlvl_valid_anchors, - level_ids, cfg, img_shape, **kwargs): - """bbox post-processing method. - - Do the nms operation for bboxes in same level. - - Args: - mlvl_scores (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_bboxes, ). - mlvl_bboxes (list[Tensor]): Decoded bboxes from all scale - levels of a single image, each item has shape (num_bboxes, 4). - mlvl_valid_anchors (list[Tensor]): Anchors of all scale level - each item has shape (num_bboxes, 4). - level_ids (list[Tensor]): Indexes from all scale levels of a - single image, each item has shape (num_bboxes, ). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, `self.test_cfg` would be used. - img_shape (tuple(int)): The shape of model's input image. - - Returns: - Tensor: Labeled boxes in shape (n, 5), where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. 
- """ - scores = torch.cat(mlvl_scores) - anchors = torch.cat(mlvl_valid_anchors) - rpn_bbox_pred = torch.cat(mlvl_bboxes) - proposals = self.bbox_coder.decode( - anchors, rpn_bbox_pred, max_shape=img_shape) - ids = torch.cat(level_ids) - - if cfg.min_bbox_size >= 0: - w = proposals[:, 2] - proposals[:, 0] - h = proposals[:, 3] - proposals[:, 1] - valid_mask = (w > cfg.min_bbox_size) & (h > cfg.min_bbox_size) - if not valid_mask.all(): - proposals = proposals[valid_mask] - scores = scores[valid_mask] - ids = ids[valid_mask] - - if proposals.numel() > 0: - dets, _ = batched_nms(proposals, scores, ids, cfg.nms) - else: - return proposals.new_zeros(0, 5) - - return dets[:cfg.max_per_img] - - def onnx_export(self, x, img_metas): - """Test without augmentation. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): Meta info of each image. - Returns: - Tensor: dets of shape [N, num_det, 5]. - """ - cls_scores, bbox_preds = self(x) - - assert len(cls_scores) == len(bbox_preds) - - batch_bboxes, batch_scores = super(RPNHead, self).onnx_export( - cls_scores, bbox_preds, img_metas=img_metas, with_nms=False) - # Use ONNX::NonMaxSuppression in deployment - from mmdet.core.export import add_dummy_nms_for_onnx - cfg = copy.deepcopy(self.test_cfg) - score_threshold = cfg.nms.get('score_thr', 0.0) - nms_pre = cfg.get('deploy_nms_pre', -1) - # Different from the normal forward doing NMS level by level, - # we do NMS across all levels when exporting ONNX. - dets, _ = add_dummy_nms_for_onnx(batch_bboxes, batch_scores, - cfg.max_per_img, - cfg.nms.iou_threshold, - score_threshold, nms_pre, - cfg.max_per_img) - return dets diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/sabl_retina_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/sabl_retina_head.py deleted file mode 100644 index 4fede710..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/sabl_retina_head.py +++ /dev/null @@ -1,630 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import force_fp32 - -from mmdet.core import (build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, images_to_levels, - multi_apply, unmap) -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin -from .guided_anchor_head import GuidedAnchorHead - - -@HEADS.register_module() -class SABLRetinaHead(BaseDenseHead, BBoxTestMixin): - """Side-Aware Boundary Localization (SABL) for RetinaNet. - - The anchor generation, assigning and sampling in SABLRetinaHead - are the same as GuidedAnchorHead for guided anchoring. - - Please refer to https://arxiv.org/abs/1912.04260 for more details. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of Convs for classification \ - and regression branches. Defaults to 4. - feat_channels (int): Number of hidden channels. \ - Defaults to 256. - approx_anchor_generator (dict): Config dict for approx generator. - square_anchor_generator (dict): Config dict for square generator. - conv_cfg (dict): Config dict for ConvModule. Defaults to None. - norm_cfg (dict): Config dict for Norm Layer. Defaults to None. 
- bbox_coder (dict): Config dict for bbox coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - train_cfg (dict): Training config of SABLRetinaHead. - test_cfg (dict): Testing config of SABLRetinaHead. - loss_cls (dict): Config of classification loss. - loss_bbox_cls (dict): Config of classification loss for bbox branch. - loss_bbox_reg (dict): Config of regression loss for bbox branch. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - conv_cfg=None, - norm_cfg=None, - bbox_coder=dict( - type='BucketingBBoxCoder', - num_buckets=14, - scale_factor=3.0), - reg_decoded_bbox=False, - train_cfg=None, - test_cfg=None, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.5), - loss_bbox_reg=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='retina_cls', - std=0.01, - bias_prob=0.01))): - super(SABLRetinaHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.num_buckets = bbox_coder['num_buckets'] - self.side_num = int(np.ceil(self.num_buckets / 2)) - - assert (approx_anchor_generator['octave_base_scale'] == - square_anchor_generator['scales'][0]) - assert (approx_anchor_generator['strides'] == - square_anchor_generator['strides']) - - self.approx_anchor_generator = build_prior_generator( - approx_anchor_generator) - self.square_anchor_generator = build_prior_generator( - square_anchor_generator) - self.approxs_per_octave = ( - self.approx_anchor_generator.num_base_priors[0]) - - # one anchor per location - self.num_base_priors = self.square_anchor_generator.num_base_priors[0] - - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - self.reg_decoded_bbox = reg_decoded_bbox - - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - self.sampling = loss_cls['type'] not in [ - 'FocalLoss', 'GHMC', 'QualityFocalLoss' - ] - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox_cls = build_loss(loss_bbox_cls) - self.loss_bbox_reg = build_loss(loss_bbox_reg) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.fp16_enabled = False - self._init_layers() - - @property 
- def num_anchors(self): - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'please use "num_base_priors" instead') - return self.square_anchor_generator.num_base_priors[0] - - def _init_layers(self): - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.retina_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.retina_bbox_reg = nn.Conv2d( - self.feat_channels, self.side_num * 4, 3, padding=1) - self.retina_bbox_cls = nn.Conv2d( - self.feat_channels, self.side_num * 4, 3, padding=1) - - def forward_single(self, x): - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.retina_cls(cls_feat) - bbox_cls_pred = self.retina_bbox_cls(reg_feat) - bbox_reg_pred = self.retina_bbox_reg(reg_feat) - bbox_pred = (bbox_cls_pred, bbox_reg_pred) - return cls_score, bbox_pred - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get squares according to feature map sizes and guided anchors. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): device for returned tensors - - Returns: - tuple: square approxs of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # squares for one time - multi_level_squares = self.square_anchor_generator.grid_priors( - featmap_sizes, device=device) - squares_list = [multi_level_squares for _ in range(num_imgs)] - - return squares_list - - def get_target(self, - approx_list, - inside_flag_list, - square_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=None, - sampling=True, - unmap_outputs=True): - """Compute bucketing targets. - Args: - approx_list (list[list]): Multi level approxs of each image. - inside_flag_list (list[list]): Multi level inside flags of each - image. - square_list (list[list]): Multi level squares of each image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes. - gt_bboxes_list (list[Tensor]): Gt bboxes of each image. - label_channels (int): Channel of label. - sampling (bool): Sample Anchors or not. - unmap_outputs (bool): unmap outputs or not. - - Returns: - tuple: Returns a tuple containing learning targets. - - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each \ - level. - - bbox_cls_targets_list (list[Tensor]): BBox cls targets of \ - each level. - - bbox_cls_weights_list (list[Tensor]): BBox cls weights of \ - each level. - - bbox_reg_targets_list (list[Tensor]): BBox reg targets of \ - each level. - - bbox_reg_weights_list (list[Tensor]): BBox reg weights of \ - each level. 
- - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - """ - num_imgs = len(img_metas) - assert len(approx_list) == len(inside_flag_list) == len( - square_list) == num_imgs - # anchor number of multi levels - num_level_squares = [squares.size(0) for squares in square_list[0]] - # concat all level anchors and flags to a single tensor - inside_flag_flat_list = [] - approx_flat_list = [] - square_flat_list = [] - for i in range(num_imgs): - assert len(square_list[i]) == len(inside_flag_list[i]) - inside_flag_flat_list.append(torch.cat(inside_flag_list[i])) - approx_flat_list.append(torch.cat(approx_list[i])) - square_flat_list.append(torch.cat(square_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_labels, all_label_weights, all_bbox_cls_targets, - all_bbox_cls_weights, all_bbox_reg_targets, all_bbox_reg_weights, - pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - approx_flat_list, - inside_flag_flat_list, - square_flat_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - sampling=sampling, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_squares) - label_weights_list = images_to_levels(all_label_weights, - num_level_squares) - bbox_cls_targets_list = images_to_levels(all_bbox_cls_targets, - num_level_squares) - bbox_cls_weights_list = images_to_levels(all_bbox_cls_weights, - num_level_squares) - bbox_reg_targets_list = images_to_levels(all_bbox_reg_targets, - num_level_squares) - bbox_reg_weights_list = images_to_levels(all_bbox_reg_weights, - num_level_squares) - return (labels_list, label_weights_list, bbox_cls_targets_list, - bbox_cls_weights_list, bbox_reg_targets_list, - bbox_reg_weights_list, num_total_pos, num_total_neg) - - def _get_target_single(self, - flat_approxs, - inside_flags, - flat_squares, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=None, - sampling=True, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - flat_approxs (Tensor): flat approxs of a single image, - shape (n, 4) - inside_flags (Tensor): inside flags of a single image, - shape (n, ). - flat_squares (Tensor): flat squares of a single image, - shape (approxs_per_octave * n, 4) - gt_bboxes (Tensor): Ground truth bboxes of a single image, \ - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - sampling (bool): Sample Anchors or not. - unmap_outputs (bool): unmap outputs or not. 
- - Returns: - tuple: - - - labels_list (Tensor): Labels in a single image - - label_weights (Tensor): Label weights in a single image - - bbox_cls_targets (Tensor): BBox cls targets in a single image - - bbox_cls_weights (Tensor): BBox cls weights in a single image - - bbox_reg_targets (Tensor): BBox reg targets in a single image - - bbox_reg_weights (Tensor): BBox reg weights in a single image - - num_total_pos (int): Number of positive samples \ - in a single image - - num_total_neg (int): Number of negative samples \ - in a single image - """ - if not inside_flags.any(): - return (None, ) * 8 - # assign gt and sample anchors - expand_inside_flags = inside_flags[:, None].expand( - -1, self.approxs_per_octave).reshape(-1) - approxs = flat_approxs[expand_inside_flags, :] - squares = flat_squares[inside_flags, :] - - assign_result = self.assigner.assign(approxs, squares, - self.approxs_per_octave, - gt_bboxes, gt_bboxes_ignore) - sampling_result = self.sampler.sample(assign_result, squares, - gt_bboxes) - - num_valid_squares = squares.shape[0] - bbox_cls_targets = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - bbox_cls_weights = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - bbox_reg_targets = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - bbox_reg_weights = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - labels = squares.new_full((num_valid_squares, ), - self.num_classes, - dtype=torch.long) - label_weights = squares.new_zeros(num_valid_squares, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - (pos_bbox_reg_targets, pos_bbox_reg_weights, pos_bbox_cls_targets, - pos_bbox_cls_weights) = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - - bbox_cls_targets[pos_inds, :] = pos_bbox_cls_targets - bbox_reg_targets[pos_inds, :] = pos_bbox_reg_targets - bbox_cls_weights[pos_inds, :] = pos_bbox_cls_weights - bbox_reg_weights[pos_inds, :] = pos_bbox_reg_weights - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_squares.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_cls_targets = unmap(bbox_cls_targets, num_total_anchors, - inside_flags) - bbox_cls_weights = unmap(bbox_cls_weights, num_total_anchors, - inside_flags) - bbox_reg_targets = unmap(bbox_reg_targets, num_total_anchors, - inside_flags) - bbox_reg_weights = unmap(bbox_reg_weights, num_total_anchors, - inside_flags) - return (labels, label_weights, bbox_cls_targets, bbox_cls_weights, - bbox_reg_targets, bbox_reg_weights, pos_inds, neg_inds) - - def loss_single(self, cls_score, bbox_pred, labels, label_weights, - bbox_cls_targets, bbox_cls_weights, bbox_reg_targets, - bbox_reg_weights, num_total_samples): - # classification loss - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( - cls_score, labels, 
label_weights, avg_factor=num_total_samples) - # regression loss - bbox_cls_targets = bbox_cls_targets.reshape(-1, self.side_num * 4) - bbox_cls_weights = bbox_cls_weights.reshape(-1, self.side_num * 4) - bbox_reg_targets = bbox_reg_targets.reshape(-1, self.side_num * 4) - bbox_reg_weights = bbox_reg_weights.reshape(-1, self.side_num * 4) - (bbox_cls_pred, bbox_reg_pred) = bbox_pred - bbox_cls_pred = bbox_cls_pred.permute(0, 2, 3, 1).reshape( - -1, self.side_num * 4) - bbox_reg_pred = bbox_reg_pred.permute(0, 2, 3, 1).reshape( - -1, self.side_num * 4) - loss_bbox_cls = self.loss_bbox_cls( - bbox_cls_pred, - bbox_cls_targets.long(), - bbox_cls_weights, - avg_factor=num_total_samples * 4 * self.side_num) - loss_bbox_reg = self.loss_bbox_reg( - bbox_reg_pred, - bbox_reg_targets, - bbox_reg_weights, - avg_factor=num_total_samples * 4 * self.bbox_coder.offset_topk) - return loss_cls, loss_bbox_cls, loss_bbox_reg - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.approx_anchor_generator.num_levels - - device = cls_scores[0].device - - # get sampled approxes - approxs_list, inside_flag_list = GuidedAnchorHead.get_sampled_approxs( - self, featmap_sizes, img_metas, device=device) - - square_list = self.get_anchors(featmap_sizes, img_metas, device=device) - - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_target( - approxs_list, - inside_flag_list, - square_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - sampling=self.sampling) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_cls_targets_list, - bbox_cls_weights_list, bbox_reg_targets_list, bbox_reg_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - losses_cls, losses_bbox_cls, losses_bbox_reg = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_cls_targets_list, - bbox_cls_weights_list, - bbox_reg_targets_list, - bbox_reg_weights_list, - num_total_samples=num_total_samples) - return dict( - loss_cls=losses_cls, - loss_bbox_cls=losses_bbox_cls, - loss_bbox_reg=losses_bbox_reg) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - img_metas, - cfg=None, - rescale=False): - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - - device = cls_scores[0].device - mlvl_anchors = self.get_anchors( - featmap_sizes, img_metas, device=device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_cls_pred_list = [ - bbox_preds[i][0][img_id].detach() for i in range(num_levels) - ] - bbox_reg_pred_list = [ - bbox_preds[i][1][img_id].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single( - cls_score_list, bbox_cls_pred_list, bbox_reg_pred_list, - mlvl_anchors[img_id], img_shape, scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - def 
_get_bboxes_single(self, - cls_scores, - bbox_cls_preds, - bbox_reg_preds, - mlvl_anchors, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_confids = [] - mlvl_labels = [] - assert len(cls_scores) == len(bbox_cls_preds) == len( - bbox_reg_preds) == len(mlvl_anchors) - for cls_score, bbox_cls_pred, bbox_reg_pred, anchors in zip( - cls_scores, bbox_cls_preds, bbox_reg_preds, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_cls_pred.size( - )[-2:] == bbox_reg_pred.size()[-2::] - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1)[:, :-1] - bbox_cls_pred = bbox_cls_pred.permute(1, 2, 0).reshape( - -1, self.side_num * 4) - bbox_reg_pred = bbox_reg_pred.permute(1, 2, 0).reshape( - -1, self.side_num * 4) - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict( - anchors=anchors, - bbox_cls_pred=bbox_cls_pred, - bbox_reg_pred=bbox_reg_pred)) - scores, labels, _, filtered_results = results - - anchors = filtered_results['anchors'] - bbox_cls_pred = filtered_results['bbox_cls_pred'] - bbox_reg_pred = filtered_results['bbox_reg_pred'] - - bbox_preds = [ - bbox_cls_pred.contiguous(), - bbox_reg_pred.contiguous() - ] - bboxes, confids = self.bbox_coder.decode( - anchors.contiguous(), bbox_preds, max_shape=img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_confids.append(confids) - mlvl_labels.append(labels) - return self._bbox_post_process(mlvl_scores, mlvl_labels, mlvl_bboxes, - scale_factor, cfg, rescale, True, - mlvl_confids) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/solo_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/solo_head.py deleted file mode 100644 index 148f819f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/solo_head.py +++ /dev/null @@ -1,1177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from mmdet.core import InstanceData, mask_matrix_nms, multi_apply -from mmdet.core.utils import center_of_mass, generate_coordinate -from mmdet.models.builder import HEADS, build_loss -from .base_mask_head import BaseMaskHead - - -@HEADS.register_module() -class SOLOHead(BaseMaskHead): - """SOLO mask head used in `SOLO: Segmenting Objects by Locations. - - `_ - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - Default: 256. - stacked_convs (int): Number of stacking convs of the head. - Default: 4. - strides (tuple): Downsample factor of each feature map. - scale_ranges (tuple[tuple[int, int]]): Area range of multiple - level masks, in the format [(min1, max1), (min2, max2), ...]. - A range of (16, 64) means the area range between (16, 64). 
- pos_scale (float): Constant scale factor to control the center region. - num_grids (list[int]): Divided image into a uniform grids, each - feature map has a different grid value. The number of output - channels is grid ** 2. Default: [40, 36, 24, 16, 12]. - cls_down_index (int): The index of downsample operation in - classification branch. Default: 0. - loss_mask (dict): Config of mask loss. - loss_cls (dict): Config of classification loss. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, - requires_grad=True). - train_cfg (dict): Training config of head. - test_cfg (dict): Testing config of head. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__( - self, - num_classes, - in_channels, - feat_channels=256, - stacked_convs=4, - strides=(4, 8, 16, 32, 64), - scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, 512)), - pos_scale=0.2, - num_grids=[40, 36, 24, 16, 12], - cls_down_index=0, - loss_mask=None, - loss_cls=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - train_cfg=None, - test_cfg=None, - init_cfg=[ - dict(type='Normal', layer='Conv2d', std=0.01), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_cls')) - ], - ): - super(SOLOHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.cls_out_channels = self.num_classes - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.num_grids = num_grids - # number of FPN feats - self.num_levels = len(strides) - assert self.num_levels == len(scale_ranges) == len(num_grids) - self.scale_ranges = scale_ranges - self.pos_scale = pos_scale - - self.cls_down_index = cls_down_index - self.loss_cls = build_loss(loss_cls) - self.loss_mask = build_loss(loss_mask) - self.norm_cfg = norm_cfg - self.init_cfg = init_cfg - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self._init_layers() - - def _init_layers(self): - self.mask_convs = nn.ModuleList() - self.cls_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels + 2 if i == 0 else self.feat_channels - self.mask_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - self.conv_mask_list = nn.ModuleList() - for num_grid in self.num_grids: - self.conv_mask_list.append( - nn.Conv2d(self.feat_channels, num_grid**2, 1)) - - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def resize_feats(self, feats): - """Downsample the first feat and upsample last feat in feats.""" - out = [] - for i in range(len(feats)): - if i == 0: - out.append( - F.interpolate(feats[0], scale_factor=0.5, mode='bilinear')) - elif i == len(feats) - 1: - out.append( - F.interpolate( - feats[i], - size=feats[i - 1].shape[-2:], - mode='bilinear')) - else: - out.append(feats[i]) - return out - - def forward(self, feats): - assert len(feats) == self.num_levels - feats = self.resize_feats(feats) - mlvl_mask_preds = [] - mlvl_cls_preds = [] - for i in range(self.num_levels): - x = feats[i] - mask_feat = x - cls_feat = x - # generate and concat the coordinate - 
coord_feat = generate_coordinate(mask_feat.size(), - mask_feat.device) - mask_feat = torch.cat([mask_feat, coord_feat], 1) - - for mask_layer in (self.mask_convs): - mask_feat = mask_layer(mask_feat) - - mask_feat = F.interpolate( - mask_feat, scale_factor=2, mode='bilinear') - mask_pred = self.conv_mask_list[i](mask_feat) - - # cls branch - for j, cls_layer in enumerate(self.cls_convs): - if j == self.cls_down_index: - num_grid = self.num_grids[i] - cls_feat = F.interpolate( - cls_feat, size=num_grid, mode='bilinear') - cls_feat = cls_layer(cls_feat) - - cls_pred = self.conv_cls(cls_feat) - - if not self.training: - feat_wh = feats[0].size()[-2:] - upsampled_size = (feat_wh[0] * 2, feat_wh[1] * 2) - mask_pred = F.interpolate( - mask_pred.sigmoid(), size=upsampled_size, mode='bilinear') - cls_pred = cls_pred.sigmoid() - # get local maximum - local_max = F.max_pool2d(cls_pred, 2, stride=1, padding=1) - keep_mask = local_max[:, :, :-1, :-1] == cls_pred - cls_pred = cls_pred * keep_mask - - mlvl_mask_preds.append(mask_pred) - mlvl_cls_preds.append(cls_pred) - return mlvl_mask_preds, mlvl_cls_preds - - def loss(self, - mlvl_mask_preds, - mlvl_cls_preds, - gt_labels, - gt_masks, - img_metas, - gt_bboxes=None, - **kwargs): - """Calculate the loss of total batch. - - Args: - mlvl_mask_preds (list[Tensor]): Multi-level mask prediction. - Each element in the list has shape - (batch_size, num_grids**2 ,h ,w). - mlvl_cls_preds (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes, num_grids ,num_grids). - gt_labels (list[Tensor]): Labels of multiple images. - gt_masks (list[Tensor]): Ground truth masks of multiple images. - Each has shape (num_instances, h, w). - img_metas (list[dict]): Meta information of multiple images. - gt_bboxes (list[Tensor]): Ground truth bboxes of multiple - images. Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - num_levels = self.num_levels - num_imgs = len(gt_labels) - - featmap_sizes = [featmap.size()[-2:] for featmap in mlvl_mask_preds] - - # `BoolTensor` in `pos_masks` represent - # whether the corresponding point is - # positive - pos_mask_targets, labels, pos_masks = multi_apply( - self._get_targets_single, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=featmap_sizes) - - # change from the outside list meaning multi images - # to the outside list meaning multi levels - mlvl_pos_mask_targets = [[] for _ in range(num_levels)] - mlvl_pos_mask_preds = [[] for _ in range(num_levels)] - mlvl_pos_masks = [[] for _ in range(num_levels)] - mlvl_labels = [[] for _ in range(num_levels)] - for img_id in range(num_imgs): - assert num_levels == len(pos_mask_targets[img_id]) - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl].append( - pos_mask_targets[img_id][lvl]) - mlvl_pos_mask_preds[lvl].append( - mlvl_mask_preds[lvl][img_id, pos_masks[img_id][lvl], ...]) - mlvl_pos_masks[lvl].append(pos_masks[img_id][lvl].flatten()) - mlvl_labels[lvl].append(labels[img_id][lvl].flatten()) - - # cat multiple image - temp_mlvl_cls_preds = [] - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl] = torch.cat( - mlvl_pos_mask_targets[lvl], dim=0) - mlvl_pos_mask_preds[lvl] = torch.cat( - mlvl_pos_mask_preds[lvl], dim=0) - mlvl_pos_masks[lvl] = torch.cat(mlvl_pos_masks[lvl], dim=0) - mlvl_labels[lvl] = torch.cat(mlvl_labels[lvl], dim=0) - temp_mlvl_cls_preds.append(mlvl_cls_preds[lvl].permute( - 0, 2, 3, 1).reshape(-1, self.cls_out_channels)) - - num_pos = sum(item.sum() for item in mlvl_pos_masks) - # dice loss - loss_mask = [] - for pred, target in zip(mlvl_pos_mask_preds, mlvl_pos_mask_targets): - if pred.size()[0] == 0: - loss_mask.append(pred.sum().unsqueeze(0)) - continue - loss_mask.append( - self.loss_mask(pred, target, reduction_override='none')) - if num_pos > 0: - loss_mask = torch.cat(loss_mask).sum() / num_pos - else: - loss_mask = torch.cat(loss_mask).mean() - - flatten_labels = torch.cat(mlvl_labels) - flatten_cls_preds = torch.cat(temp_mlvl_cls_preds) - loss_cls = self.loss_cls( - flatten_cls_preds, flatten_labels, avg_factor=num_pos + 1) - return dict(loss_mask=loss_mask, loss_cls=loss_cls) - - def _get_targets_single(self, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=None): - """Compute targets for predictions of single image. - - Args: - gt_bboxes (Tensor): Ground truth bbox of each instance, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth label of each instance, - shape (num_gts,). - gt_masks (Tensor): Ground truth mask of each instance, - shape (num_gts, h, w). - featmap_sizes (list[:obj:`torch.size`]): Size of each - feature map from feature pyramid, each element - means (feat_h, feat_w). Default: None. - - Returns: - Tuple: Usually returns a tuple containing targets for predictions. - - - mlvl_pos_mask_targets (list[Tensor]): Each element represent - the binary mask targets for positive points in this - level, has shape (num_pos, out_h, out_w). - - mlvl_labels (list[Tensor]): Each element is - classification labels for all - points in this level, has shape - (num_grid, num_grid). - - mlvl_pos_masks (list[Tensor]): Each element is - a `BoolTensor` to represent whether the - corresponding point in single level - is positive, has shape (num_grid **2). 
- """ - device = gt_labels.device - gt_areas = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - - mlvl_pos_mask_targets = [] - mlvl_labels = [] - mlvl_pos_masks = [] - for (lower_bound, upper_bound), stride, featmap_size, num_grid \ - in zip(self.scale_ranges, self.strides, - featmap_sizes, self.num_grids): - - mask_target = torch.zeros( - [num_grid**2, featmap_size[0], featmap_size[1]], - dtype=torch.uint8, - device=device) - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - labels = torch.zeros([num_grid, num_grid], - dtype=torch.int64, - device=device) + self.num_classes - pos_mask = torch.zeros([num_grid**2], - dtype=torch.bool, - device=device) - - gt_inds = ((gt_areas >= lower_bound) & - (gt_areas <= upper_bound)).nonzero().flatten() - if len(gt_inds) == 0: - mlvl_pos_mask_targets.append( - mask_target.new_zeros(0, featmap_size[0], featmap_size[1])) - mlvl_labels.append(labels) - mlvl_pos_masks.append(pos_mask) - continue - hit_gt_bboxes = gt_bboxes[gt_inds] - hit_gt_labels = gt_labels[gt_inds] - hit_gt_masks = gt_masks[gt_inds, ...] - - pos_w_ranges = 0.5 * (hit_gt_bboxes[:, 2] - - hit_gt_bboxes[:, 0]) * self.pos_scale - pos_h_ranges = 0.5 * (hit_gt_bboxes[:, 3] - - hit_gt_bboxes[:, 1]) * self.pos_scale - - # Make sure hit_gt_masks has a value - valid_mask_flags = hit_gt_masks.sum(dim=-1).sum(dim=-1) > 0 - output_stride = stride / 2 - - for gt_mask, gt_label, pos_h_range, pos_w_range, \ - valid_mask_flag in \ - zip(hit_gt_masks, hit_gt_labels, pos_h_ranges, - pos_w_ranges, valid_mask_flags): - if not valid_mask_flag: - continue - upsampled_size = (featmap_sizes[0][0] * 4, - featmap_sizes[0][1] * 4) - center_h, center_w = center_of_mass(gt_mask) - - coord_w = int( - (center_w / upsampled_size[1]) // (1. / num_grid)) - coord_h = int( - (center_h / upsampled_size[0]) // (1. / num_grid)) - - # left, top, right, down - top_box = max( - 0, - int(((center_h - pos_h_range) / upsampled_size[0]) // - (1. / num_grid))) - down_box = min( - num_grid - 1, - int(((center_h + pos_h_range) / upsampled_size[0]) // - (1. / num_grid))) - left_box = max( - 0, - int(((center_w - pos_w_range) / upsampled_size[1]) // - (1. / num_grid))) - right_box = min( - num_grid - 1, - int(((center_w + pos_w_range) / upsampled_size[1]) // - (1. / num_grid))) - - top = max(top_box, coord_h - 1) - down = min(down_box, coord_h + 1) - left = max(coord_w - 1, left_box) - right = min(right_box, coord_w + 1) - - labels[top:(down + 1), left:(right + 1)] = gt_label - # ins - gt_mask = np.uint8(gt_mask.cpu().numpy()) - # Follow the original implementation, F.interpolate is - # different from cv2 and opencv - gt_mask = mmcv.imrescale(gt_mask, scale=1. / output_stride) - gt_mask = torch.from_numpy(gt_mask).to(device=device) - - for i in range(top, down + 1): - for j in range(left, right + 1): - index = int(i * num_grid + j) - mask_target[index, :gt_mask.shape[0], :gt_mask. - shape[1]] = gt_mask - pos_mask[index] = True - mlvl_pos_mask_targets.append(mask_target[pos_mask]) - mlvl_labels.append(labels) - mlvl_pos_masks.append(pos_mask) - return mlvl_pos_mask_targets, mlvl_labels, mlvl_pos_masks - - def get_results(self, mlvl_mask_preds, mlvl_cls_scores, img_metas, - **kwargs): - """Get multi-image mask results. - - Args: - mlvl_mask_preds (list[Tensor]): Multi-level mask prediction. - Each element in the list has shape - (batch_size, num_grids**2 ,h ,w). - mlvl_cls_scores (list[Tensor]): Multi-level scores. 
Each element - in the list has shape - (batch_size, num_classes, num_grids ,num_grids). - img_metas (list[dict]): Meta information of all images. - - Returns: - list[:obj:`InstanceData`]: Processed results of multiple - images.Each :obj:`InstanceData` usually contains - following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - mlvl_cls_scores = [ - item.permute(0, 2, 3, 1) for item in mlvl_cls_scores - ] - assert len(mlvl_mask_preds) == len(mlvl_cls_scores) - num_levels = len(mlvl_cls_scores) - - results_list = [] - for img_id in range(len(img_metas)): - cls_pred_list = [ - mlvl_cls_scores[lvl][img_id].view(-1, self.cls_out_channels) - for lvl in range(num_levels) - ] - mask_pred_list = [ - mlvl_mask_preds[lvl][img_id] for lvl in range(num_levels) - ] - - cls_pred_list = torch.cat(cls_pred_list, dim=0) - mask_pred_list = torch.cat(mask_pred_list, dim=0) - - results = self._get_results_single( - cls_pred_list, mask_pred_list, img_meta=img_metas[img_id]) - results_list.append(results) - - return results_list - - def _get_results_single(self, cls_scores, mask_preds, img_meta, cfg=None): - """Get processed mask related results of single image. - - Args: - cls_scores (Tensor): Classification score of all points - in single image, has shape (num_points, num_classes). - mask_preds (Tensor): Mask prediction of all points in - single image, has shape (num_points, feat_h, feat_w). - img_meta (dict): Meta information of corresponding image. - cfg (dict, optional): Config used in test phase. - Default: None. - - Returns: - :obj:`InstanceData`: Processed results of single image. - it usually contains following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). 
- """ - - def empty_results(results, cls_scores): - """Generate a empty results.""" - results.scores = cls_scores.new_ones(0) - results.masks = cls_scores.new_zeros(0, *results.ori_shape[:2]) - results.labels = cls_scores.new_ones(0) - return results - - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(mask_preds) - results = InstanceData(img_meta) - - featmap_size = mask_preds.size()[-2:] - - img_shape = results.img_shape - ori_shape = results.ori_shape - - h, w, _ = img_shape - upsampled_size = (featmap_size[0] * 4, featmap_size[1] * 4) - - score_mask = (cls_scores > cfg.score_thr) - cls_scores = cls_scores[score_mask] - if len(cls_scores) == 0: - return empty_results(results, cls_scores) - - inds = score_mask.nonzero() - cls_labels = inds[:, 1] - - # Filter the mask mask with an area is smaller than - # stride of corresponding feature level - lvl_interval = cls_labels.new_tensor(self.num_grids).pow(2).cumsum(0) - strides = cls_scores.new_ones(lvl_interval[-1]) - strides[:lvl_interval[0]] *= self.strides[0] - for lvl in range(1, self.num_levels): - strides[lvl_interval[lvl - - 1]:lvl_interval[lvl]] *= self.strides[lvl] - strides = strides[inds[:, 0]] - mask_preds = mask_preds[inds[:, 0]] - - masks = mask_preds > cfg.mask_thr - sum_masks = masks.sum((1, 2)).float() - keep = sum_masks > strides - if keep.sum() == 0: - return empty_results(results, cls_scores) - masks = masks[keep] - mask_preds = mask_preds[keep] - sum_masks = sum_masks[keep] - cls_scores = cls_scores[keep] - cls_labels = cls_labels[keep] - - # maskness. - mask_scores = (mask_preds * masks).sum((1, 2)) / sum_masks - cls_scores *= mask_scores - - scores, labels, _, keep_inds = mask_matrix_nms( - masks, - cls_labels, - cls_scores, - mask_area=sum_masks, - nms_pre=cfg.nms_pre, - max_num=cfg.max_per_img, - kernel=cfg.kernel, - sigma=cfg.sigma, - filter_thr=cfg.filter_thr) - mask_preds = mask_preds[keep_inds] - mask_preds = F.interpolate( - mask_preds.unsqueeze(0), size=upsampled_size, - mode='bilinear')[:, :, :h, :w] - mask_preds = F.interpolate( - mask_preds, size=ori_shape[:2], mode='bilinear').squeeze(0) - masks = mask_preds > cfg.mask_thr - - results.masks = masks - results.labels = labels - results.scores = scores - - return results - - -@HEADS.register_module() -class DecoupledSOLOHead(SOLOHead): - """Decoupled SOLO mask head used in `SOLO: Segmenting Objects by Locations. - - `_ - - Args: - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - *args, - init_cfg=[ - dict(type='Normal', layer='Conv2d', std=0.01), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_x')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_y')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_cls')) - ], - **kwargs): - super(DecoupledSOLOHead, self).__init__( - *args, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - self.mask_convs_x = nn.ModuleList() - self.mask_convs_y = nn.ModuleList() - self.cls_convs = nn.ModuleList() - - for i in range(self.stacked_convs): - chn = self.in_channels + 1 if i == 0 else self.feat_channels - self.mask_convs_x.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - self.mask_convs_y.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - norm_cfg=self.norm_cfg)) - - self.conv_mask_list_x = nn.ModuleList() - self.conv_mask_list_y = nn.ModuleList() - for num_grid in self.num_grids: - self.conv_mask_list_x.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_mask_list_y.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def forward(self, feats): - assert len(feats) == self.num_levels - feats = self.resize_feats(feats) - mask_preds_x = [] - mask_preds_y = [] - cls_preds = [] - for i in range(self.num_levels): - x = feats[i] - mask_feat = x - cls_feat = x - # generate and concat the coordinate - coord_feat = generate_coordinate(mask_feat.size(), - mask_feat.device) - mask_feat_x = torch.cat([mask_feat, coord_feat[:, 0:1, ...]], 1) - mask_feat_y = torch.cat([mask_feat, coord_feat[:, 1:2, ...]], 1) - - for mask_layer_x, mask_layer_y in \ - zip(self.mask_convs_x, self.mask_convs_y): - mask_feat_x = mask_layer_x(mask_feat_x) - mask_feat_y = mask_layer_y(mask_feat_y) - - mask_feat_x = F.interpolate( - mask_feat_x, scale_factor=2, mode='bilinear') - mask_feat_y = F.interpolate( - mask_feat_y, scale_factor=2, mode='bilinear') - - mask_pred_x = self.conv_mask_list_x[i](mask_feat_x) - mask_pred_y = self.conv_mask_list_y[i](mask_feat_y) - - # cls branch - for j, cls_layer in enumerate(self.cls_convs): - if j == self.cls_down_index: - num_grid = self.num_grids[i] - cls_feat = F.interpolate( - cls_feat, size=num_grid, mode='bilinear') - cls_feat = cls_layer(cls_feat) - - cls_pred = self.conv_cls(cls_feat) - - if not self.training: - feat_wh = feats[0].size()[-2:] - upsampled_size = (feat_wh[0] * 2, feat_wh[1] * 2) - mask_pred_x = F.interpolate( - mask_pred_x.sigmoid(), - size=upsampled_size, - mode='bilinear') - mask_pred_y = F.interpolate( - mask_pred_y.sigmoid(), - size=upsampled_size, - mode='bilinear') - cls_pred = cls_pred.sigmoid() - # get local maximum - local_max = F.max_pool2d(cls_pred, 2, stride=1, padding=1) - keep_mask = local_max[:, :, :-1, :-1] == cls_pred - cls_pred = cls_pred * keep_mask - - mask_preds_x.append(mask_pred_x) - mask_preds_y.append(mask_pred_y) - cls_preds.append(cls_pred) - return mask_preds_x, mask_preds_y, cls_preds - - def loss(self, - mlvl_mask_preds_x, - mlvl_mask_preds_y, - mlvl_cls_preds, - gt_labels, - gt_masks, - img_metas, - gt_bboxes=None, - 
**kwargs): - """Calculate the loss of total batch. - - Args: - mlvl_mask_preds_x (list[Tensor]): Multi-level mask prediction - from x branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_mask_preds_x (list[Tensor]): Multi-level mask prediction - from y branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_cls_preds (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes, num_grids ,num_grids). - gt_labels (list[Tensor]): Labels of multiple images. - gt_masks (list[Tensor]): Ground truth masks of multiple images. - Each has shape (num_instances, h, w). - img_metas (list[dict]): Meta information of multiple images. - gt_bboxes (list[Tensor]): Ground truth bboxes of multiple - images. Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - num_levels = self.num_levels - num_imgs = len(gt_labels) - featmap_sizes = [featmap.size()[-2:] for featmap in mlvl_mask_preds_x] - - pos_mask_targets, labels, \ - xy_pos_indexes = \ - multi_apply(self._get_targets_single, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=featmap_sizes) - - # change from the outside list meaning multi images - # to the outside list meaning multi levels - mlvl_pos_mask_targets = [[] for _ in range(num_levels)] - mlvl_pos_mask_preds_x = [[] for _ in range(num_levels)] - mlvl_pos_mask_preds_y = [[] for _ in range(num_levels)] - mlvl_labels = [[] for _ in range(num_levels)] - for img_id in range(num_imgs): - - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl].append( - pos_mask_targets[img_id][lvl]) - mlvl_pos_mask_preds_x[lvl].append( - mlvl_mask_preds_x[lvl][img_id, - xy_pos_indexes[img_id][lvl][:, 1]]) - mlvl_pos_mask_preds_y[lvl].append( - mlvl_mask_preds_y[lvl][img_id, - xy_pos_indexes[img_id][lvl][:, 0]]) - mlvl_labels[lvl].append(labels[img_id][lvl].flatten()) - - # cat multiple image - temp_mlvl_cls_preds = [] - for lvl in range(num_levels): - mlvl_pos_mask_targets[lvl] = torch.cat( - mlvl_pos_mask_targets[lvl], dim=0) - mlvl_pos_mask_preds_x[lvl] = torch.cat( - mlvl_pos_mask_preds_x[lvl], dim=0) - mlvl_pos_mask_preds_y[lvl] = torch.cat( - mlvl_pos_mask_preds_y[lvl], dim=0) - mlvl_labels[lvl] = torch.cat(mlvl_labels[lvl], dim=0) - temp_mlvl_cls_preds.append(mlvl_cls_preds[lvl].permute( - 0, 2, 3, 1).reshape(-1, self.cls_out_channels)) - - num_pos = 0. - # dice loss - loss_mask = [] - for pred_x, pred_y, target in \ - zip(mlvl_pos_mask_preds_x, - mlvl_pos_mask_preds_y, mlvl_pos_mask_targets): - num_masks = pred_x.size(0) - if num_masks == 0: - # make sure can get grad - loss_mask.append((pred_x.sum() + pred_y.sum()).unsqueeze(0)) - continue - num_pos += num_masks - pred_mask = pred_y.sigmoid() * pred_x.sigmoid() - loss_mask.append( - self.loss_mask(pred_mask, target, reduction_override='none')) - if num_pos > 0: - loss_mask = torch.cat(loss_mask).sum() / num_pos - else: - loss_mask = torch.cat(loss_mask).mean() - - # cate - flatten_labels = torch.cat(mlvl_labels) - flatten_cls_preds = torch.cat(temp_mlvl_cls_preds) - - loss_cls = self.loss_cls( - flatten_cls_preds, flatten_labels, avg_factor=num_pos + 1) - return dict(loss_mask=loss_mask, loss_cls=loss_cls) - - def _get_targets_single(self, - gt_bboxes, - gt_labels, - gt_masks, - featmap_sizes=None): - """Compute targets for predictions of single image. - - Args: - gt_bboxes (Tensor): Ground truth bbox of each instance, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth label of each instance, - shape (num_gts,). 
- gt_masks (Tensor): Ground truth mask of each instance, - shape (num_gts, h, w). - featmap_sizes (list[:obj:`torch.size`]): Size of each - feature map from feature pyramid, each element - means (feat_h, feat_w). Default: None. - - Returns: - Tuple: Usually returns a tuple containing targets for predictions. - - - mlvl_pos_mask_targets (list[Tensor]): Each element represent - the binary mask targets for positive points in this - level, has shape (num_pos, out_h, out_w). - - mlvl_labels (list[Tensor]): Each element is - classification labels for all - points in this level, has shape - (num_grid, num_grid). - - mlvl_xy_pos_indexes (list[Tensor]): Each element - in the list contains the index of positive samples in - corresponding level, has shape (num_pos, 2), last - dimension 2 present (index_x, index_y). - """ - mlvl_pos_mask_targets, mlvl_labels, \ - mlvl_pos_masks = \ - super()._get_targets_single(gt_bboxes, gt_labels, gt_masks, - featmap_sizes=featmap_sizes) - - mlvl_xy_pos_indexes = [(item - self.num_classes).nonzero() - for item in mlvl_labels] - - return mlvl_pos_mask_targets, mlvl_labels, mlvl_xy_pos_indexes - - def get_results(self, - mlvl_mask_preds_x, - mlvl_mask_preds_y, - mlvl_cls_scores, - img_metas, - rescale=None, - **kwargs): - """Get multi-image mask results. - - Args: - mlvl_mask_preds_x (list[Tensor]): Multi-level mask prediction - from x branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_mask_preds_y (list[Tensor]): Multi-level mask prediction - from y branch. Each element in the list has shape - (batch_size, num_grids ,h ,w). - mlvl_cls_scores (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes ,num_grids ,num_grids). - img_metas (list[dict]): Meta information of all images. - - Returns: - list[:obj:`InstanceData`]: Processed results of multiple - images.Each :obj:`InstanceData` usually contains - following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - mlvl_cls_scores = [ - item.permute(0, 2, 3, 1) for item in mlvl_cls_scores - ] - assert len(mlvl_mask_preds_x) == len(mlvl_cls_scores) - num_levels = len(mlvl_cls_scores) - - results_list = [] - for img_id in range(len(img_metas)): - cls_pred_list = [ - mlvl_cls_scores[i][img_id].view( - -1, self.cls_out_channels).detach() - for i in range(num_levels) - ] - mask_pred_list_x = [ - mlvl_mask_preds_x[i][img_id] for i in range(num_levels) - ] - mask_pred_list_y = [ - mlvl_mask_preds_y[i][img_id] for i in range(num_levels) - ] - - cls_pred_list = torch.cat(cls_pred_list, dim=0) - mask_pred_list_x = torch.cat(mask_pred_list_x, dim=0) - mask_pred_list_y = torch.cat(mask_pred_list_y, dim=0) - - results = self._get_results_single( - cls_pred_list, - mask_pred_list_x, - mask_pred_list_y, - img_meta=img_metas[img_id], - cfg=self.test_cfg) - results_list.append(results) - return results_list - - def _get_results_single(self, cls_scores, mask_preds_x, mask_preds_y, - img_meta, cfg): - """Get processed mask related results of single image. - - Args: - cls_scores (Tensor): Classification score of all points - in single image, has shape (num_points, num_classes). - mask_preds_x (Tensor): Mask prediction of x branch of - all points in single image, has shape - (sum_num_grids, feat_h, feat_w). 
- mask_preds_y (Tensor): Mask prediction of y branch of - all points in single image, has shape - (sum_num_grids, feat_h, feat_w). - img_meta (dict): Meta information of corresponding image. - cfg (dict): Config used in test phase. - - Returns: - :obj:`InstanceData`: Processed results of single image. - it usually contains following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - - def empty_results(results, cls_scores): - """Generate a empty results.""" - results.scores = cls_scores.new_ones(0) - results.masks = cls_scores.new_zeros(0, *results.ori_shape[:2]) - results.labels = cls_scores.new_ones(0) - return results - - cfg = self.test_cfg if cfg is None else cfg - - results = InstanceData(img_meta) - img_shape = results.img_shape - ori_shape = results.ori_shape - h, w, _ = img_shape - featmap_size = mask_preds_x.size()[-2:] - upsampled_size = (featmap_size[0] * 4, featmap_size[1] * 4) - - score_mask = (cls_scores > cfg.score_thr) - cls_scores = cls_scores[score_mask] - inds = score_mask.nonzero() - lvl_interval = inds.new_tensor(self.num_grids).pow(2).cumsum(0) - num_all_points = lvl_interval[-1] - lvl_start_index = inds.new_ones(num_all_points) - num_grids = inds.new_ones(num_all_points) - seg_size = inds.new_tensor(self.num_grids).cumsum(0) - mask_lvl_start_index = inds.new_ones(num_all_points) - strides = inds.new_ones(num_all_points) - - lvl_start_index[:lvl_interval[0]] *= 0 - mask_lvl_start_index[:lvl_interval[0]] *= 0 - num_grids[:lvl_interval[0]] *= self.num_grids[0] - strides[:lvl_interval[0]] *= self.strides[0] - - for lvl in range(1, self.num_levels): - lvl_start_index[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - lvl_interval[lvl - 1] - mask_lvl_start_index[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - seg_size[lvl - 1] - num_grids[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - self.num_grids[lvl] - strides[lvl_interval[lvl - 1]:lvl_interval[lvl]] *= \ - self.strides[lvl] - - lvl_start_index = lvl_start_index[inds[:, 0]] - mask_lvl_start_index = mask_lvl_start_index[inds[:, 0]] - num_grids = num_grids[inds[:, 0]] - strides = strides[inds[:, 0]] - - y_lvl_offset = (inds[:, 0] - lvl_start_index) // num_grids - x_lvl_offset = (inds[:, 0] - lvl_start_index) % num_grids - y_inds = mask_lvl_start_index + y_lvl_offset - x_inds = mask_lvl_start_index + x_lvl_offset - - cls_labels = inds[:, 1] - mask_preds = mask_preds_x[x_inds, ...] * mask_preds_y[y_inds, ...] - - masks = mask_preds > cfg.mask_thr - sum_masks = masks.sum((1, 2)).float() - keep = sum_masks > strides - if keep.sum() == 0: - return empty_results(results, cls_scores) - - masks = masks[keep] - mask_preds = mask_preds[keep] - sum_masks = sum_masks[keep] - cls_scores = cls_scores[keep] - cls_labels = cls_labels[keep] - - # maskness. 
- mask_scores = (mask_preds * masks).sum((1, 2)) / sum_masks - cls_scores *= mask_scores - - scores, labels, _, keep_inds = mask_matrix_nms( - masks, - cls_labels, - cls_scores, - mask_area=sum_masks, - nms_pre=cfg.nms_pre, - max_num=cfg.max_per_img, - kernel=cfg.kernel, - sigma=cfg.sigma, - filter_thr=cfg.filter_thr) - mask_preds = mask_preds[keep_inds] - mask_preds = F.interpolate( - mask_preds.unsqueeze(0), size=upsampled_size, - mode='bilinear')[:, :, :h, :w] - mask_preds = F.interpolate( - mask_preds, size=ori_shape[:2], mode='bilinear').squeeze(0) - masks = mask_preds > cfg.mask_thr - - results.masks = masks - results.labels = labels - results.scores = scores - - return results - - -@HEADS.register_module() -class DecoupledSOLOLightHead(DecoupledSOLOHead): - """Decoupled Light SOLO mask head used in `SOLO: Segmenting Objects by - Locations `_ - - Args: - with_dcn (bool): Whether use dcn in mask_convs and cls_convs, - default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - *args, - dcn_cfg=None, - init_cfg=[ - dict(type='Normal', layer='Conv2d', std=0.01), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_x')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_mask_list_y')), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_cls')) - ], - **kwargs): - assert dcn_cfg is None or isinstance(dcn_cfg, dict) - self.dcn_cfg = dcn_cfg - super(DecoupledSOLOLightHead, self).__init__( - *args, init_cfg=init_cfg, **kwargs) - - def _init_layers(self): - self.mask_convs = nn.ModuleList() - self.cls_convs = nn.ModuleList() - - for i in range(self.stacked_convs): - if self.dcn_cfg is not None\ - and i == self.stacked_convs - 1: - conv_cfg = self.dcn_cfg - else: - conv_cfg = None - - chn = self.in_channels + 2 if i == 0 else self.feat_channels - self.mask_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg)) - - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg)) - - self.conv_mask_list_x = nn.ModuleList() - self.conv_mask_list_y = nn.ModuleList() - for num_grid in self.num_grids: - self.conv_mask_list_x.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_mask_list_y.append( - nn.Conv2d(self.feat_channels, num_grid, 3, padding=1)) - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def forward(self, feats): - assert len(feats) == self.num_levels - feats = self.resize_feats(feats) - mask_preds_x = [] - mask_preds_y = [] - cls_preds = [] - for i in range(self.num_levels): - x = feats[i] - mask_feat = x - cls_feat = x - # generate and concat the coordinate - coord_feat = generate_coordinate(mask_feat.size(), - mask_feat.device) - mask_feat = torch.cat([mask_feat, coord_feat], 1) - - for mask_layer in self.mask_convs: - mask_feat = mask_layer(mask_feat) - - mask_feat = F.interpolate( - mask_feat, scale_factor=2, mode='bilinear') - - mask_pred_x = self.conv_mask_list_x[i](mask_feat) - mask_pred_y = self.conv_mask_list_y[i](mask_feat) - - # cls branch - for j, cls_layer in enumerate(self.cls_convs): - if j == self.cls_down_index: - num_grid = self.num_grids[i] - cls_feat = F.interpolate( - cls_feat, size=num_grid, mode='bilinear') - cls_feat = 
cls_layer(cls_feat) - - cls_pred = self.conv_cls(cls_feat) - - if not self.training: - feat_wh = feats[0].size()[-2:] - upsampled_size = (feat_wh[0] * 2, feat_wh[1] * 2) - mask_pred_x = F.interpolate( - mask_pred_x.sigmoid(), - size=upsampled_size, - mode='bilinear') - mask_pred_y = F.interpolate( - mask_pred_y.sigmoid(), - size=upsampled_size, - mode='bilinear') - cls_pred = cls_pred.sigmoid() - # get local maximum - local_max = F.max_pool2d(cls_pred, 2, stride=1, padding=1) - keep_mask = local_max[:, :, :-1, :-1] == cls_pred - cls_pred = cls_pred * keep_mask - - mask_preds_x.append(mask_pred_x) - mask_preds_y.append(mask_pred_y) - cls_preds.append(cls_pred) - return mask_preds_x, mask_preds_y, cls_preds diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ssd_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ssd_head.py deleted file mode 100644 index e362fd80..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/ssd_head.py +++ /dev/null @@ -1,357 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import force_fp32 - -from mmdet.core import (build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, multi_apply) -from ..builder import HEADS -from ..losses import smooth_l1_loss -from .anchor_head import AnchorHead - - -# TODO: add loss evaluator for SSD -@HEADS.register_module() -class SSDHead(AnchorHead): - """SSD head used in https://arxiv.org/abs/1512.02325. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of conv layers in cls and reg tower. - Default: 0. - feat_channels (int): Number of hidden channels when stacked_convs - > 0. Default: 256. - use_depthwise (bool): Whether to use DepthwiseSeparableConv. - Default: False. - conv_cfg (dict): Dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: None. - act_cfg (dict): Dictionary to construct and config activation layer. - Default: None. - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ # noqa: W605 - - def __init__(self, - num_classes=80, - in_channels=(512, 1024, 512, 256, 256, 256), - stacked_convs=0, - feat_channels=256, - use_depthwise=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - anchor_generator=dict( - type='SSDAnchorGenerator', - scale_major=False, - input_size=300, - strides=[8, 16, 32, 64, 100, 300], - ratios=([2], [2, 3], [2, 3], [2, 3], [2], [2]), - basesize_ratio_range=(0.1, 0.9)), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0], - ), - reg_decoded_bbox=False, - train_cfg=None, - test_cfg=None, - init_cfg=dict( - type='Xavier', - layer='Conv2d', - distribution='uniform', - bias=0)): - super(AnchorHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.in_channels = in_channels - self.stacked_convs = stacked_convs - self.feat_channels = feat_channels - self.use_depthwise = use_depthwise - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.cls_out_channels = num_classes + 1 # add background class - self.prior_generator = build_prior_generator(anchor_generator) - - # Usually the numbers of anchors for each level are the same - # except SSD detectors. So it is an int in the most dense - # heads but a list of int in SSDHead - self.num_base_priors = self.prior_generator.num_base_priors - - self._init_layers() - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.reg_decoded_bbox = reg_decoded_bbox - self.use_sigmoid_cls = False - self.cls_focal_loss = False - self.train_cfg = train_cfg - self.test_cfg = test_cfg - # set sampling=False for archor_target - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - @property - def num_anchors(self): - """ - Returns: - list[int]: Number of base_anchors on each point of each level. 
- """ - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'please use "num_base_priors" instead') - return self.num_base_priors - - def _init_layers(self): - """Initialize layers of the head.""" - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - # TODO: Use registry to choose ConvModule type - conv = DepthwiseSeparableConvModule \ - if self.use_depthwise else ConvModule - - for channel, num_base_priors in zip(self.in_channels, - self.num_base_priors): - cls_layers = [] - reg_layers = [] - in_channel = channel - # build stacked conv tower, not used in default ssd - for i in range(self.stacked_convs): - cls_layers.append( - conv( - in_channel, - self.feat_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - reg_layers.append( - conv( - in_channel, - self.feat_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - in_channel = self.feat_channels - # SSD-Lite head - if self.use_depthwise: - cls_layers.append( - ConvModule( - in_channel, - in_channel, - 3, - padding=1, - groups=in_channel, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - reg_layers.append( - ConvModule( - in_channel, - in_channel, - 3, - padding=1, - groups=in_channel, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - cls_layers.append( - nn.Conv2d( - in_channel, - num_base_priors * self.cls_out_channels, - kernel_size=1 if self.use_depthwise else 3, - padding=0 if self.use_depthwise else 1)) - reg_layers.append( - nn.Conv2d( - in_channel, - num_base_priors * 4, - kernel_size=1 if self.use_depthwise else 3, - padding=0 if self.use_depthwise else 1)) - self.cls_convs.append(nn.Sequential(*cls_layers)) - self.reg_convs.append(nn.Sequential(*reg_layers)) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - cls_scores = [] - bbox_preds = [] - for feat, reg_conv, cls_conv in zip(feats, self.reg_convs, - self.cls_convs): - cls_scores.append(cls_conv(feat)) - bbox_preds.append(reg_conv(feat)) - return cls_scores, bbox_preds - - def loss_single(self, cls_score, bbox_pred, anchor, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Compute loss of a single image. - - Args: - cls_score (Tensor): Box scores for eachimage - Has shape (num_total_anchors, num_classes). - bbox_pred (Tensor): Box energies / deltas for each image - level with shape (num_total_anchors, 4). - anchors (Tensor): Box reference for each scale level with shape - (num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (num_total_anchors,). - label_weights (Tensor): Label weights of each anchor with shape - (num_total_anchors,) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (num_total_anchors, 4). - bbox_weights (Tensor): BBox regression loss weights of each anchor - with shape (num_total_anchors, 4). - num_total_samples (int): If sampling, num total samples equal to - the number of total anchors; Otherwise, it is the number of - positive anchors. 
- - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - loss_cls_all = F.cross_entropy( - cls_score, labels, reduction='none') * label_weights - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((labels >= 0) & (labels < self.num_classes)).nonzero( - as_tuple=False).reshape(-1) - neg_inds = (labels == self.num_classes).nonzero( - as_tuple=False).view(-1) - - num_pos_samples = pos_inds.size(0) - num_neg_samples = self.train_cfg.neg_pos_ratio * num_pos_samples - if num_neg_samples > neg_inds.size(0): - num_neg_samples = neg_inds.size(0) - topk_loss_cls_neg, _ = loss_cls_all[neg_inds].topk(num_neg_samples) - loss_cls_pos = loss_cls_all[pos_inds].sum() - loss_cls_neg = topk_loss_cls_neg.sum() - loss_cls = (loss_cls_pos + loss_cls_neg) / num_total_samples - - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - bbox_pred = self.bbox_coder.decode(anchor, bbox_pred) - - loss_bbox = smooth_l1_loss( - bbox_pred, - bbox_targets, - bbox_weights, - beta=self.train_cfg.smoothl1_beta, - avg_factor=num_total_samples) - return loss_cls[None], loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=1, - unmap_outputs=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - - num_images = len(img_metas) - all_cls_scores = torch.cat([ - s.permute(0, 2, 3, 1).reshape( - num_images, -1, self.cls_out_channels) for s in cls_scores - ], 1) - all_labels = torch.cat(labels_list, -1).view(num_images, -1) - all_label_weights = torch.cat(label_weights_list, - -1).view(num_images, -1) - all_bbox_preds = torch.cat([ - b.permute(0, 2, 3, 1).reshape(num_images, -1, 4) - for b in bbox_preds - ], -2) - all_bbox_targets = torch.cat(bbox_targets_list, - -2).view(num_images, -1, 4) - all_bbox_weights = torch.cat(bbox_weights_list, - -2).view(num_images, -1, 4) - - # concat all level anchors to a single tensor - all_anchors = [] - for i in range(num_images): - all_anchors.append(torch.cat(anchor_list[i])) - - losses_cls, losses_bbox = multi_apply( - self.loss_single, - all_cls_scores, - all_bbox_preds, - all_anchors, - all_labels, - all_label_weights, - all_bbox_targets, - all_bbox_weights, - num_total_samples=num_total_pos) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/tood_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/tood_head.py deleted file mode 100644 index c64ebf7a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/tood_head.py +++ /dev/null @@ -1,778 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, Scale, bias_init_with_prob, normal_init -from mmcv.ops import deform_conv2d -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, distance2bbox, - images_to_levels, multi_apply, reduce_mean, unmap) -from mmdet.core.utils import filter_scores_and_topk -from mmdet.models.utils import sigmoid_geometric_mean -from ..builder import HEADS, build_loss -from .atss_head import ATSSHead - - -class TaskDecomposition(nn.Module): - """Task decomposition module in task-aligned predictor of TOOD. - - Args: - feat_channels (int): Number of feature channels in TOOD head. - stacked_convs (int): Number of conv layers in TOOD head. - la_down_rate (int): Downsample rate of layer attention. - conv_cfg (dict): Config dict for convolution layer. - norm_cfg (dict): Config dict for normalization layer. 
- """ - - def __init__(self, - feat_channels, - stacked_convs, - la_down_rate=8, - conv_cfg=None, - norm_cfg=None): - super(TaskDecomposition, self).__init__() - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.in_channels = self.feat_channels * self.stacked_convs - self.norm_cfg = norm_cfg - self.layer_attention = nn.Sequential( - nn.Conv2d(self.in_channels, self.in_channels // la_down_rate, 1), - nn.ReLU(inplace=True), - nn.Conv2d( - self.in_channels // la_down_rate, - self.stacked_convs, - 1, - padding=0), nn.Sigmoid()) - - self.reduction_conv = ConvModule( - self.in_channels, - self.feat_channels, - 1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=norm_cfg is None) - - def init_weights(self): - for m in self.layer_attention.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, std=0.001) - normal_init(self.reduction_conv.conv, std=0.01) - - def forward(self, feat, avg_feat=None): - b, c, h, w = feat.shape - if avg_feat is None: - avg_feat = F.adaptive_avg_pool2d(feat, (1, 1)) - weight = self.layer_attention(avg_feat) - - # here we first compute the product between layer attention weight and - # conv weight, and then compute the convolution between new conv weight - # and feature map, in order to save memory and FLOPs. - conv_weight = weight.reshape( - b, 1, self.stacked_convs, - 1) * self.reduction_conv.conv.weight.reshape( - 1, self.feat_channels, self.stacked_convs, self.feat_channels) - conv_weight = conv_weight.reshape(b, self.feat_channels, - self.in_channels) - feat = feat.reshape(b, self.in_channels, h * w) - feat = torch.bmm(conv_weight, feat).reshape(b, self.feat_channels, h, - w) - if self.norm_cfg is not None: - feat = self.reduction_conv.norm(feat) - feat = self.reduction_conv.activate(feat) - - return feat - - -@HEADS.register_module() -class TOODHead(ATSSHead): - """TOODHead used in `TOOD: Task-aligned One-stage Object Detection. - - `_. - - TOOD uses Task-aligned head (T-head) and is optimized by Task Alignment - Learning (TAL). - - Args: - num_dcn (int): Number of deformable convolution in the head. - Default: 0. - anchor_type (str): If set to `anchor_free`, the head will use centers - to regress bboxes. If set to `anchor_based`, the head will - regress bboxes based on anchors. Default: `anchor_free`. - initial_loss_cls (dict): Config of initial loss. - - Example: - >>> self = TOODHead(11, 7) - >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]] - >>> cls_score, bbox_pred = self.forward(feats) - >>> assert len(cls_score) == len(self.scales) - """ - - def __init__(self, - num_classes, - in_channels, - num_dcn=0, - anchor_type='anchor_free', - initial_loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - activated=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - **kwargs): - assert anchor_type in ['anchor_free', 'anchor_based'] - self.num_dcn = num_dcn - self.anchor_type = anchor_type - self.epoch = 0 # which would be update in SetEpochInfoHook! 
- super(TOODHead, self).__init__(num_classes, in_channels, **kwargs) - - if self.train_cfg: - self.initial_epoch = self.train_cfg.initial_epoch - self.initial_assigner = build_assigner( - self.train_cfg.initial_assigner) - self.initial_loss_cls = build_loss(initial_loss_cls) - self.assigner = self.initial_assigner - self.alignment_assigner = build_assigner(self.train_cfg.assigner) - self.alpha = self.train_cfg.alpha - self.beta = self.train_cfg.beta - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.inter_convs = nn.ModuleList() - for i in range(self.stacked_convs): - if i < self.num_dcn: - conv_cfg = dict(type='DCNv2', deform_groups=4) - else: - conv_cfg = self.conv_cfg - chn = self.in_channels if i == 0 else self.feat_channels - self.inter_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg)) - - self.cls_decomp = TaskDecomposition(self.feat_channels, - self.stacked_convs, - self.stacked_convs * 8, - self.conv_cfg, self.norm_cfg) - self.reg_decomp = TaskDecomposition(self.feat_channels, - self.stacked_convs, - self.stacked_convs * 8, - self.conv_cfg, self.norm_cfg) - - self.tood_cls = nn.Conv2d( - self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 3, - padding=1) - self.tood_reg = nn.Conv2d( - self.feat_channels, self.num_base_priors * 4, 3, padding=1) - - self.cls_prob_module = nn.Sequential( - nn.Conv2d(self.feat_channels * self.stacked_convs, - self.feat_channels // 4, 1), nn.ReLU(inplace=True), - nn.Conv2d(self.feat_channels // 4, 1, 3, padding=1)) - self.reg_offset_module = nn.Sequential( - nn.Conv2d(self.feat_channels * self.stacked_convs, - self.feat_channels // 4, 1), nn.ReLU(inplace=True), - nn.Conv2d(self.feat_channels // 4, 4 * 2, 3, padding=1)) - - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.prior_generator.strides]) - - def init_weights(self): - """Initialize weights of the head.""" - bias_cls = bias_init_with_prob(0.01) - for m in self.inter_convs: - normal_init(m.conv, std=0.01) - for m in self.cls_prob_module: - if isinstance(m, nn.Conv2d): - normal_init(m, std=0.01) - for m in self.reg_offset_module: - if isinstance(m, nn.Conv2d): - normal_init(m, std=0.001) - normal_init(self.cls_prob_module[-1], std=0.01, bias=bias_cls) - - self.cls_decomp.init_weights() - self.reg_decomp.init_weights() - - normal_init(self.tood_cls, std=0.01, bias=bias_cls) - normal_init(self.tood_reg, std=0.01) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Decoded box for all scale levels, - each is a 4D-tensor, the channels number is - num_anchors * 4. In [tl_x, tl_y, br_x, br_y] format. 
- """ - cls_scores = [] - bbox_preds = [] - for idx, (x, scale, stride) in enumerate( - zip(feats, self.scales, self.prior_generator.strides)): - b, c, h, w = x.shape - anchor = self.prior_generator.single_level_grid_priors( - (h, w), idx, device=x.device) - anchor = torch.cat([anchor for _ in range(b)]) - # extract task interactive features - inter_feats = [] - for inter_conv in self.inter_convs: - x = inter_conv(x) - inter_feats.append(x) - feat = torch.cat(inter_feats, 1) - - # task decomposition - avg_feat = F.adaptive_avg_pool2d(feat, (1, 1)) - cls_feat = self.cls_decomp(feat, avg_feat) - reg_feat = self.reg_decomp(feat, avg_feat) - - # cls prediction and alignment - cls_logits = self.tood_cls(cls_feat) - cls_prob = self.cls_prob_module(feat) - cls_score = sigmoid_geometric_mean(cls_logits, cls_prob) - - # reg prediction and alignment - if self.anchor_type == 'anchor_free': - reg_dist = scale(self.tood_reg(reg_feat).exp()).float() - reg_dist = reg_dist.permute(0, 2, 3, 1).reshape(-1, 4) - reg_bbox = distance2bbox( - self.anchor_center(anchor) / stride[0], - reg_dist).reshape(b, h, w, 4).permute(0, 3, 1, - 2) # (b, c, h, w) - elif self.anchor_type == 'anchor_based': - reg_dist = scale(self.tood_reg(reg_feat)).float() - reg_dist = reg_dist.permute(0, 2, 3, 1).reshape(-1, 4) - reg_bbox = self.bbox_coder.decode(anchor, reg_dist).reshape( - b, h, w, 4).permute(0, 3, 1, 2) / stride[0] - else: - raise NotImplementedError( - f'Unknown anchor type: {self.anchor_type}.' - f'Please use `anchor_free` or `anchor_based`.') - reg_offset = self.reg_offset_module(feat) - bbox_pred = self.deform_sampling(reg_bbox.contiguous(), - reg_offset.contiguous()) - - # After deform_sampling, some boxes will become invalid (The - # left-top point is at the right or bottom of the right-bottom - # point), which will make the GIoULoss negative. - invalid_bbox_idx = (bbox_pred[:, [0]] > bbox_pred[:, [2]]) | \ - (bbox_pred[:, [1]] > bbox_pred[:, [3]]) - invalid_bbox_idx = invalid_bbox_idx.expand_as(bbox_pred) - bbox_pred = torch.where(invalid_bbox_idx, reg_bbox, bbox_pred) - - cls_scores.append(cls_score) - bbox_preds.append(bbox_pred) - return tuple(cls_scores), tuple(bbox_preds) - - def deform_sampling(self, feat, offset): - """Sampling the feature x according to offset. - - Args: - feat (Tensor): Feature - offset (Tensor): Spatial offset for feature sampling - """ - # it is an equivalent implementation of bilinear interpolation - b, c, h, w = feat.shape - weight = feat.new_ones(c, 1, 1, 1) - y = deform_conv2d(feat, offset, weight, 1, 0, 1, c, c) - return y - - def anchor_center(self, anchors): - """Get anchor centers from anchors. - - Args: - anchors (Tensor): Anchor list with shape (N, 4), "xyxy" format. - - Returns: - Tensor: Anchor centers with shape (N, 2), "xy" format. - """ - anchors_cx = (anchors[:, 2] + anchors[:, 0]) / 2 - anchors_cy = (anchors[:, 3] + anchors[:, 1]) / 2 - return torch.stack([anchors_cx, anchors_cy], dim=-1) - - def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights, - bbox_targets, alignment_metrics, stride): - """Compute loss of a single scale level. - - Args: - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Decoded bboxes for each scale - level with shape (N, num_anchors * 4, H, W). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). 
- label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors). - bbox_targets (Tensor): BBox regression targets of each anchor with - shape (N, num_total_anchors, 4). - alignment_metrics (Tensor): Alignment metrics with shape - (N, num_total_anchors). - stride (tuple[int]): Downsample stride of the feature map. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert stride[0] == stride[1], 'h stride is not equal to w stride!' - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, 1).reshape( - -1, self.cls_out_channels).contiguous() - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - alignment_metrics = alignment_metrics.reshape(-1) - label_weights = label_weights.reshape(-1) - targets = labels if self.epoch < self.initial_epoch else ( - labels, alignment_metrics) - cls_loss_func = self.initial_loss_cls \ - if self.epoch < self.initial_epoch else self.loss_cls - - loss_cls = cls_loss_func( - cls_score, targets, label_weights, avg_factor=1.0) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - - pos_decode_bbox_pred = pos_bbox_pred - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - - # regression loss - pos_bbox_weight = self.centerness_target( - pos_anchors, pos_bbox_targets - ) if self.epoch < self.initial_epoch else alignment_metrics[ - pos_inds] - - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=pos_bbox_weight, - avg_factor=1.0) - else: - loss_bbox = bbox_pred.sum() * 0 - pos_bbox_weight = bbox_targets.new_tensor(0.) - - return loss_cls, loss_bbox, alignment_metrics.sum( - ), pos_bbox_weight.sum() - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Decoded box for each scale - level with shape (N, num_anchors * 4, H, W) in - [tl_x, tl_y, br_x, br_y] format. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - num_imgs = len(img_metas) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - flatten_cls_scores = torch.cat([ - cls_score.permute(0, 2, 3, 1).reshape(num_imgs, -1, - self.cls_out_channels) - for cls_score in cls_scores - ], 1) - flatten_bbox_preds = torch.cat([ - bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) * stride[0] - for bbox_pred, stride in zip(bbox_preds, - self.prior_generator.strides) - ], 1) - - cls_reg_targets = self.get_targets( - flatten_cls_scores, - flatten_bbox_preds, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - alignment_metrics_list) = cls_reg_targets - - losses_cls, losses_bbox,\ - cls_avg_factors, bbox_avg_factors = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - alignment_metrics_list, - self.prior_generator.strides) - - cls_avg_factor = reduce_mean(sum(cls_avg_factors)).clamp_(min=1).item() - losses_cls = list(map(lambda x: x / cls_avg_factor, losses_cls)) - - bbox_avg_factor = reduce_mean( - sum(bbox_avg_factors)).clamp_(min=1).item() - losses_bbox = list(map(lambda x: x / bbox_avg_factor, losses_bbox)) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image, each item has shape - (num_priors * 1, H, W). - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid. In all - anchor-based methods, it has shape (num_priors, 4). In - all anchor-free methods, it has shape (num_priors, 2) - when `with_stride=True`, otherwise it still has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. 
- - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - - cfg = self.test_cfg if cfg is None else cfg - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for cls_score, bbox_pred, priors, stride in zip( - cls_score_list, bbox_pred_list, mlvl_priors, - self.prior_generator.strides): - - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) * stride[0] - scores = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, keep_idxs, filtered_results = results - - bboxes = filtered_results['bbox_pred'] - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process(mlvl_scores, mlvl_labels, mlvl_bboxes, - img_meta['scale_factor'], cfg, rescale, - with_nms, None, **kwargs) - - def get_targets(self, - cls_scores, - bbox_preds, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - cls_scores (Tensor): Classification predictions of images, - a 3D-Tensor with shape [num_imgs, num_priors, num_classes]. - bbox_preds (Tensor): Decoded bboxes predictions of one image, - a 3D-Tensor with shape [num_imgs, num_priors, 4] in [tl_x, - tl_y, br_x, br_y] format. - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: a tuple containing learning targets. - - - anchors_list (list[list[Tensor]]): Anchors of each level. - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each - level. - - bbox_targets_list (list[Tensor]): BBox targets of each level. - - norm_alignment_metrics_list (list[Tensor]): Normalized - alignment metrics of each level. 
- """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - # anchor_list: list(b * [-1, 4]) - - if self.epoch < self.initial_epoch: - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply( - super()._get_target_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - all_assign_metrics = [ - weight[..., 0] for weight in all_bbox_weights - ] - else: - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_assign_metrics) = multi_apply( - self._get_target_single, - cls_scores, - bbox_preds, - anchor_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - norm_alignment_metrics_list = images_to_levels(all_assign_metrics, - num_level_anchors) - - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, norm_alignment_metrics_list) - - def _get_target_single(self, - cls_scores, - bbox_preds, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression, classification targets for anchors in a single - image. - - Args: - cls_scores (list(Tensor)): Box scores for each image. - bbox_preds (list(Tensor)): Box energies / deltas for each image. - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - anchors (Tensor): All anchors in the image with shape (N, 4). - labels (Tensor): Labels of all anchors in the image with shape - (N,). 
- label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). - norm_alignment_metrics (Tensor): Normalized alignment metrics - of all priors in the image with shape (N,). - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - assign_result = self.alignment_assigner.assign( - cls_scores[inside_flags, :], bbox_preds[inside_flags, :], anchors, - gt_bboxes, gt_bboxes_ignore, gt_labels, self.alpha, self.beta) - assign_ious = assign_result.max_overlaps - assign_metrics = assign_result.assign_metrics - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - norm_alignment_metrics = anchors.new_zeros( - num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - # point-based - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - class_assigned_gt_inds = torch.unique( - sampling_result.pos_assigned_gt_inds) - for gt_inds in class_assigned_gt_inds: - gt_class_inds = pos_inds[sampling_result.pos_assigned_gt_inds == - gt_inds] - pos_alignment_metrics = assign_metrics[gt_class_inds] - pos_ious = assign_ious[gt_class_inds] - pos_norm_alignment_metrics = pos_alignment_metrics / ( - pos_alignment_metrics.max() + 10e-8) * pos_ious.max() - norm_alignment_metrics[gt_class_inds] = pos_norm_alignment_metrics - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - norm_alignment_metrics = unmap(norm_alignment_metrics, - num_total_anchors, inside_flags) - return (anchors, labels, label_weights, bbox_targets, - norm_alignment_metrics) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/vfnet_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/vfnet_head.py deleted file mode 100644 index ba285e22..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/vfnet_head.py +++ /dev/null @@ -1,740 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, Scale -from mmcv.ops import DeformConv2d -from mmcv.runner import force_fp32 - -from mmdet.core import (MlvlPointGenerator, bbox_overlaps, build_assigner, - build_prior_generator, build_sampler, multi_apply, - reduce_mean) -from ..builder import HEADS, build_loss -from .atss_head import ATSSHead -from .fcos_head import FCOSHead - -INF = 1e8 - - -@HEADS.register_module() -class VFNetHead(ATSSHead, FCOSHead): - """Head of `VarifocalNet (VFNet): An IoU-aware Dense Object - Detector.`_. - - The VFNet predicts IoU-aware classification scores which mix the - object presence confidence and object localization accuracy as the - detection score. It is built on the FCOS architecture and uses ATSS - for defining positive/negative training examples. The VFNet is trained - with Varifocal Loss and empolys star-shaped deformable convolution to - extract features for a bbox. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - regress_ranges (tuple[tuple[int, int]]): Regress range of multiple - level points. - center_sampling (bool): If true, use center sampling. Default: False. - center_sample_radius (float): Radius of center sampling. Default: 1.5. - sync_num_pos (bool): If true, synchronize the number of positive - examples across GPUs. Default: True - gradient_mul (float): The multiplier to gradients from bbox refinement - and recognition. Default: 0.1. - bbox_norm_type (str): The bbox normalization type, 'reg_denom' or - 'stride'. Default: reg_denom - loss_cls_fl (dict): Config of focal loss. - use_vfl (bool): If true, use varifocal loss for training. - Default: True. - loss_cls (dict): Config of varifocal loss. - loss_bbox (dict): Config of localization loss, GIoU Loss. - loss_bbox (dict): Config of localization refinement loss, GIoU Loss. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, - requires_grad=True). - use_atss (bool): If true, use ATSS to define positive/negative - examples. Default: True. - anchor_generator (dict): Config of anchor generator for ATSS. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- - Example: - >>> self = VFNetHead(11, 7) - >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]] - >>> cls_score, bbox_pred, bbox_pred_refine= self.forward(feats) - >>> assert len(cls_score) == len(self.scales) - """ # noqa: E501 - - def __init__(self, - num_classes, - in_channels, - regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), - (512, INF)), - center_sampling=False, - center_sample_radius=1.5, - sync_num_pos=True, - gradient_mul=0.1, - bbox_norm_type='reg_denom', - loss_cls_fl=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - use_vfl=True, - loss_cls=dict( - type='VarifocalLoss', - use_sigmoid=True, - alpha=0.75, - gamma=2.0, - iou_weighted=True, - loss_weight=1.0), - loss_bbox=dict(type='GIoULoss', loss_weight=1.5), - loss_bbox_refine=dict(type='GIoULoss', loss_weight=2.0), - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - use_atss=True, - reg_decoded_bbox=True, - anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - octave_base_scale=8, - scales_per_octave=1, - center_offset=0.0, - strides=[8, 16, 32, 64, 128]), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='vfnet_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - # dcn base offsets, adapted from reppoints_head.py - self.num_dconv_points = 9 - self.dcn_kernel = int(np.sqrt(self.num_dconv_points)) - self.dcn_pad = int((self.dcn_kernel - 1) / 2) - dcn_base = np.arange(-self.dcn_pad, - self.dcn_pad + 1).astype(np.float64) - dcn_base_y = np.repeat(dcn_base, self.dcn_kernel) - dcn_base_x = np.tile(dcn_base, self.dcn_kernel) - dcn_base_offset = np.stack([dcn_base_y, dcn_base_x], axis=1).reshape( - (-1)) - self.dcn_base_offset = torch.tensor(dcn_base_offset).view(1, -1, 1, 1) - - super(FCOSHead, self).__init__( - num_classes, - in_channels, - norm_cfg=norm_cfg, - init_cfg=init_cfg, - **kwargs) - self.regress_ranges = regress_ranges - self.reg_denoms = [ - regress_range[-1] for regress_range in regress_ranges - ] - self.reg_denoms[-1] = self.reg_denoms[-2] * 2 - self.center_sampling = center_sampling - self.center_sample_radius = center_sample_radius - self.sync_num_pos = sync_num_pos - self.bbox_norm_type = bbox_norm_type - self.gradient_mul = gradient_mul - self.use_vfl = use_vfl - if self.use_vfl: - self.loss_cls = build_loss(loss_cls) - else: - self.loss_cls = build_loss(loss_cls_fl) - self.loss_bbox = build_loss(loss_bbox) - self.loss_bbox_refine = build_loss(loss_bbox_refine) - - # for getting ATSS targets - self.use_atss = use_atss - self.reg_decoded_bbox = reg_decoded_bbox - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - - self.anchor_center_offset = anchor_generator['center_offset'] - - self.num_base_priors = self.prior_generator.num_base_priors[0] - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - # only be used in `get_atss_targets` when `use_atss` is True - self.atss_prior_generator = build_prior_generator(anchor_generator) - - self.fcos_prior_generator = MlvlPointGenerator( - anchor_generator['strides'], - self.anchor_center_offset if self.use_atss else 0.5) - - # In order to reuse the `get_bboxes` in `BaseDenseHead. - # Only be used in testing phase. 
- self.prior_generator = self.fcos_prior_generator - - @property - def num_anchors(self): - """ - Returns: - int: Number of anchors on each point of feature map. - """ - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'please use "num_base_priors" instead') - return self.num_base_priors - - @property - def anchor_generator(self): - warnings.warn('DeprecationWarning: anchor_generator is deprecated, ' - 'please use "atss_prior_generator" instead') - return self.prior_generator - - def _init_layers(self): - """Initialize layers of the head.""" - super(FCOSHead, self)._init_cls_convs() - super(FCOSHead, self)._init_reg_convs() - self.relu = nn.ReLU(inplace=True) - self.vfnet_reg_conv = ConvModule( - self.feat_channels, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias) - self.vfnet_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides]) - - self.vfnet_reg_refine_dconv = DeformConv2d( - self.feat_channels, - self.feat_channels, - self.dcn_kernel, - 1, - padding=self.dcn_pad) - self.vfnet_reg_refine = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - self.scales_refine = nn.ModuleList([Scale(1.0) for _ in self.strides]) - - self.vfnet_cls_dconv = DeformConv2d( - self.feat_channels, - self.feat_channels, - self.dcn_kernel, - 1, - padding=self.dcn_pad) - self.vfnet_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Box iou-aware scores for each scale - level, each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box offsets for each - scale level, each is a 4D-tensor, the channel number is - num_points * 4. - bbox_preds_refine (list[Tensor]): Refined Box offsets for - each scale level, each is a 4D-tensor, the channel - number is num_points * 4. - """ - return multi_apply(self.forward_single, feats, self.scales, - self.scales_refine, self.strides, self.reg_denoms) - - def forward_single(self, x, scale, scale_refine, stride, reg_denom): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - scale_refine (:obj: `mmcv.cnn.Scale`): Learnable scale module to - resize the refined bbox prediction. - stride (int): The corresponding stride for feature maps, - used to normalize the bbox prediction when - bbox_norm_type = 'stride'. - reg_denom (int): The corresponding regression range for feature - maps, only used to normalize the bbox prediction when - bbox_norm_type = 'reg_denom'. - - Returns: - tuple: iou-aware cls scores for each box, bbox predictions and - refined bbox predictions of input feature maps. 
- """ - cls_feat = x - reg_feat = x - - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - - # predict the bbox_pred of different level - reg_feat_init = self.vfnet_reg_conv(reg_feat) - if self.bbox_norm_type == 'reg_denom': - bbox_pred = scale( - self.vfnet_reg(reg_feat_init)).float().exp() * reg_denom - elif self.bbox_norm_type == 'stride': - bbox_pred = scale( - self.vfnet_reg(reg_feat_init)).float().exp() * stride - else: - raise NotImplementedError - - # compute star deformable convolution offsets - # converting dcn_offset to reg_feat.dtype thus VFNet can be - # trained with FP16 - dcn_offset = self.star_dcn_offset(bbox_pred, self.gradient_mul, - stride).to(reg_feat.dtype) - - # refine the bbox_pred - reg_feat = self.relu(self.vfnet_reg_refine_dconv(reg_feat, dcn_offset)) - bbox_pred_refine = scale_refine( - self.vfnet_reg_refine(reg_feat)).float().exp() - bbox_pred_refine = bbox_pred_refine * bbox_pred.detach() - - # predict the iou-aware cls score - cls_feat = self.relu(self.vfnet_cls_dconv(cls_feat, dcn_offset)) - cls_score = self.vfnet_cls(cls_feat) - - if self.training: - return cls_score, bbox_pred, bbox_pred_refine - else: - return cls_score, bbox_pred_refine - - def star_dcn_offset(self, bbox_pred, gradient_mul, stride): - """Compute the star deformable conv offsets. - - Args: - bbox_pred (Tensor): Predicted bbox distance offsets (l, r, t, b). - gradient_mul (float): Gradient multiplier. - stride (int): The corresponding stride for feature maps, - used to project the bbox onto the feature map. - - Returns: - dcn_offsets (Tensor): The offsets for deformable convolution. - """ - dcn_base_offset = self.dcn_base_offset.type_as(bbox_pred) - bbox_pred_grad_mul = (1 - gradient_mul) * bbox_pred.detach() + \ - gradient_mul * bbox_pred - # map to the feature map scale - bbox_pred_grad_mul = bbox_pred_grad_mul / stride - N, C, H, W = bbox_pred.size() - - x1 = bbox_pred_grad_mul[:, 0, :, :] - y1 = bbox_pred_grad_mul[:, 1, :, :] - x2 = bbox_pred_grad_mul[:, 2, :, :] - y2 = bbox_pred_grad_mul[:, 3, :, :] - bbox_pred_grad_mul_offset = bbox_pred.new_zeros( - N, 2 * self.num_dconv_points, H, W) - bbox_pred_grad_mul_offset[:, 0, :, :] = -1.0 * y1 # -y1 - bbox_pred_grad_mul_offset[:, 1, :, :] = -1.0 * x1 # -x1 - bbox_pred_grad_mul_offset[:, 2, :, :] = -1.0 * y1 # -y1 - bbox_pred_grad_mul_offset[:, 4, :, :] = -1.0 * y1 # -y1 - bbox_pred_grad_mul_offset[:, 5, :, :] = x2 # x2 - bbox_pred_grad_mul_offset[:, 7, :, :] = -1.0 * x1 # -x1 - bbox_pred_grad_mul_offset[:, 11, :, :] = x2 # x2 - bbox_pred_grad_mul_offset[:, 12, :, :] = y2 # y2 - bbox_pred_grad_mul_offset[:, 13, :, :] = -1.0 * x1 # -x1 - bbox_pred_grad_mul_offset[:, 14, :, :] = y2 # y2 - bbox_pred_grad_mul_offset[:, 16, :, :] = y2 # y2 - bbox_pred_grad_mul_offset[:, 17, :, :] = x2 # x2 - dcn_offset = bbox_pred_grad_mul_offset - dcn_base_offset - - return dcn_offset - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'bbox_preds_refine')) - def loss(self, - cls_scores, - bbox_preds, - bbox_preds_refine, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box iou-aware scores for each scale - level, each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box offsets for each - scale level, each is a 4D-tensor, the channel number is - num_points * 4. 
- bbox_preds_refine (list[Tensor]): Refined Box offsets for - each scale level, each is a 4D-tensor, the channel - number is num_points * 4. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) == len(bbox_preds_refine) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.fcos_prior_generator.grid_priors( - featmap_sizes, bbox_preds[0].dtype, bbox_preds[0].device) - labels, label_weights, bbox_targets, bbox_weights = self.get_targets( - cls_scores, all_level_points, gt_bboxes, gt_labels, img_metas, - gt_bboxes_ignore) - - num_imgs = cls_scores[0].size(0) - # flatten cls_scores, bbox_preds and bbox_preds_refine - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, - 1).reshape(-1, - self.cls_out_channels).contiguous() - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4).contiguous() - for bbox_pred in bbox_preds - ] - flatten_bbox_preds_refine = [ - bbox_pred_refine.permute(0, 2, 3, 1).reshape(-1, 4).contiguous() - for bbox_pred_refine in bbox_preds_refine - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_bbox_preds_refine = torch.cat(flatten_bbox_preds_refine) - flatten_labels = torch.cat(labels) - flatten_bbox_targets = torch.cat(bbox_targets) - # repeat points to align with bbox_preds - flatten_points = torch.cat( - [points.repeat(num_imgs, 1) for points in all_level_points]) - - # FG cat_id: [0, num_classes - 1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = torch.where( - ((flatten_labels >= 0) & (flatten_labels < bg_class_ind)) > 0)[0] - num_pos = len(pos_inds) - - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_bbox_preds_refine = flatten_bbox_preds_refine[pos_inds] - pos_labels = flatten_labels[pos_inds] - - # sync num_pos across all gpus - if self.sync_num_pos: - num_pos_avg_per_gpu = reduce_mean( - pos_inds.new_tensor(num_pos).float()).item() - num_pos_avg_per_gpu = max(num_pos_avg_per_gpu, 1.0) - else: - num_pos_avg_per_gpu = num_pos - - pos_bbox_targets = flatten_bbox_targets[pos_inds] - pos_points = flatten_points[pos_inds] - - pos_decoded_bbox_preds = self.bbox_coder.decode( - pos_points, pos_bbox_preds) - pos_decoded_target_preds = self.bbox_coder.decode( - pos_points, pos_bbox_targets) - iou_targets_ini = bbox_overlaps( - pos_decoded_bbox_preds, - pos_decoded_target_preds.detach(), - is_aligned=True).clamp(min=1e-6) - bbox_weights_ini = iou_targets_ini.clone().detach() - bbox_avg_factor_ini = reduce_mean( - bbox_weights_ini.sum()).clamp_(min=1).item() - - pos_decoded_bbox_preds_refine = \ - self.bbox_coder.decode(pos_points, pos_bbox_preds_refine) - iou_targets_rf = bbox_overlaps( - pos_decoded_bbox_preds_refine, - pos_decoded_target_preds.detach(), - is_aligned=True).clamp(min=1e-6) - bbox_weights_rf = iou_targets_rf.clone().detach() - bbox_avg_factor_rf = reduce_mean( - bbox_weights_rf.sum()).clamp_(min=1).item() - - if num_pos > 0: - loss_bbox = self.loss_bbox( - pos_decoded_bbox_preds, - 
pos_decoded_target_preds.detach(), - weight=bbox_weights_ini, - avg_factor=bbox_avg_factor_ini) - - loss_bbox_refine = self.loss_bbox_refine( - pos_decoded_bbox_preds_refine, - pos_decoded_target_preds.detach(), - weight=bbox_weights_rf, - avg_factor=bbox_avg_factor_rf) - - # build IoU-aware cls_score targets - if self.use_vfl: - pos_ious = iou_targets_rf.clone().detach() - cls_iou_targets = torch.zeros_like(flatten_cls_scores) - cls_iou_targets[pos_inds, pos_labels] = pos_ious - else: - loss_bbox = pos_bbox_preds.sum() * 0 - loss_bbox_refine = pos_bbox_preds_refine.sum() * 0 - if self.use_vfl: - cls_iou_targets = torch.zeros_like(flatten_cls_scores) - - if self.use_vfl: - loss_cls = self.loss_cls( - flatten_cls_scores, - cls_iou_targets, - avg_factor=num_pos_avg_per_gpu) - else: - loss_cls = self.loss_cls( - flatten_cls_scores, - flatten_labels, - weight=label_weights, - avg_factor=num_pos_avg_per_gpu) - - return dict( - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_bbox_rf=loss_bbox_refine) - - def get_targets(self, cls_scores, mlvl_points, gt_bboxes, gt_labels, - img_metas, gt_bboxes_ignore): - """A wrapper for computing ATSS and FCOS targets for points in multiple - images. - - Args: - cls_scores (list[Tensor]): Box iou-aware scores for each scale - level with shape (N, num_points * num_classes, H, W). - mlvl_points (list[Tensor]): Points of each fpn level, each has - shape (num_points, 2). - gt_bboxes (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - - Returns: - tuple: - labels_list (list[Tensor]): Labels of each level. - label_weights (Tensor/None): Label weights of all levels. - bbox_targets_list (list[Tensor]): Regression targets of each - level, (l, t, r, b). - bbox_weights (Tensor/None): Bbox weights of all levels. - """ - if self.use_atss: - return self.get_atss_targets(cls_scores, mlvl_points, gt_bboxes, - gt_labels, img_metas, - gt_bboxes_ignore) - else: - self.norm_on_bbox = False - return self.get_fcos_targets(mlvl_points, gt_bboxes, gt_labels) - - def _get_target_single(self, *args, **kwargs): - """Avoid ambiguity in multiple inheritance.""" - if self.use_atss: - return ATSSHead._get_target_single(self, *args, **kwargs) - else: - return FCOSHead._get_target_single(self, *args, **kwargs) - - def get_fcos_targets(self, points, gt_bboxes_list, gt_labels_list): - """Compute FCOS regression and classification targets for points in - multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - - Returns: - tuple: - labels (list[Tensor]): Labels of each level. - label_weights: None, to be compatible with ATSS targets. - bbox_targets (list[Tensor]): BBox targets of each level. - bbox_weights: None, to be compatible with ATSS targets. 
- """ - labels, bbox_targets = FCOSHead.get_targets(self, points, - gt_bboxes_list, - gt_labels_list) - label_weights = None - bbox_weights = None - return labels, label_weights, bbox_targets, bbox_weights - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get anchors according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): Device for returned tensors - - Returns: - tuple: - anchor_list (list[Tensor]): Anchors of each image. - valid_flag_list (list[Tensor]): Valid flags of each image. - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # anchors for one time - multi_level_anchors = self.atss_prior_generator.grid_priors( - featmap_sizes, device=device) - anchor_list = [multi_level_anchors for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level anchors - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.atss_prior_generator.valid_flags( - featmap_sizes, img_meta['pad_shape'], device=device) - valid_flag_list.append(multi_level_flags) - - return anchor_list, valid_flag_list - - def get_atss_targets(self, - cls_scores, - mlvl_points, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """A wrapper for computing ATSS targets for points in multiple images. - - Args: - cls_scores (list[Tensor]): Box iou-aware scores for each scale - level with shape (N, num_points * num_classes, H, W). - mlvl_points (list[Tensor]): Points of each fpn level, each has - shape (num_points, 2). - gt_bboxes (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). Default: None. - - Returns: - tuple: - labels_list (list[Tensor]): Labels of each level. - label_weights (Tensor): Label weights of all levels. - bbox_targets_list (list[Tensor]): Regression targets of each - level, (l, t, r, b). - bbox_weights (Tensor): Bbox weights of all levels. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len( - featmap_sizes - ) == self.atss_prior_generator.num_levels == \ - self.fcos_prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = ATSSHead.get_targets( - self, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - unmap_outputs=True) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - bbox_targets_list = [ - bbox_targets.reshape(-1, 4) for bbox_targets in bbox_targets_list - ] - - num_imgs = len(img_metas) - # transform bbox_targets (x1, y1, x2, y2) into (l, t, r, b) format - bbox_targets_list = self.transform_bbox_targets( - bbox_targets_list, mlvl_points, num_imgs) - - labels_list = [labels.reshape(-1) for labels in labels_list] - label_weights_list = [ - label_weights.reshape(-1) for label_weights in label_weights_list - ] - bbox_weights_list = [ - bbox_weights.reshape(-1) for bbox_weights in bbox_weights_list - ] - label_weights = torch.cat(label_weights_list) - bbox_weights = torch.cat(bbox_weights_list) - return labels_list, label_weights, bbox_targets_list, bbox_weights - - def transform_bbox_targets(self, decoded_bboxes, mlvl_points, num_imgs): - """Transform bbox_targets (x1, y1, x2, y2) into (l, t, r, b) format. - - Args: - decoded_bboxes (list[Tensor]): Regression targets of each level, - in the form of (x1, y1, x2, y2). - mlvl_points (list[Tensor]): Points of each fpn level, each has - shape (num_points, 2). - num_imgs (int): the number of images in a batch. - - Returns: - bbox_targets (list[Tensor]): Regression targets of each level in - the form of (l, t, r, b). - """ - # TODO: Re-implemented in Class PointCoder - assert len(decoded_bboxes) == len(mlvl_points) - num_levels = len(decoded_bboxes) - mlvl_points = [points.repeat(num_imgs, 1) for points in mlvl_points] - bbox_targets = [] - for i in range(num_levels): - bbox_target = self.bbox_coder.encode(mlvl_points[i], - decoded_bboxes[i]) - bbox_targets.append(bbox_target) - - return bbox_targets - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Override the method in the parent class to avoid changing para's - name.""" - pass - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points according to feature map size. - - This function will be deprecated soon. 
- """ - - warnings.warn( - '`_get_points_single` in `VFNetHead` will be ' - 'deprecated soon, we support a multi level point generator now' - 'you can get points of a single level feature map' - 'with `self.fcos_prior_generator.single_level_grid_priors` ') - - h, w = featmap_size - x_range = torch.arange( - 0, w * stride, stride, dtype=dtype, device=device) - y_range = torch.arange( - 0, h * stride, stride, dtype=dtype, device=device) - y, x = torch.meshgrid(y_range, x_range) - # to be compatible with anchor points in ATSS - if self.use_atss: - points = torch.stack( - (x.reshape(-1), y.reshape(-1)), dim=-1) + \ - stride * self.anchor_center_offset - else: - points = torch.stack( - (x.reshape(-1), y.reshape(-1)), dim=-1) + stride // 2 - return points diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolact_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolact_head.py deleted file mode 100644 index 8f89a271..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolact_head.py +++ /dev/null @@ -1,1018 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, ModuleList, force_fp32 - -from mmdet.core import build_sampler, fast_nms, images_to_levels, multi_apply -from mmdet.core.utils import select_single_mlvl -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class YOLACTHead(AnchorHead): - """YOLACT box head used in https://arxiv.org/abs/1904.02689. - - Note that YOLACT head is a light version of RetinaNet head. - Four differences are described as follows: - - 1. YOLACT box head has three-times fewer anchors. - 2. YOLACT box head shares the convs for box and cls branches. - 3. YOLACT box head uses OHEM instead of Focal loss. - 4. YOLACT box head predicts a set of mask coefficients for each box. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - anchor_generator (dict): Config dict for anchor generator - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - num_head_convs (int): Number of the conv layers shared by - box and cls branches. - num_protos (int): Number of the mask coefficients. - use_ohem (bool): If true, ``loss_single_OHEM`` will be used for - cls loss calculation. If false, ``loss_single`` will be used. - conv_cfg (dict): Dictionary to construct and config conv layer. - norm_cfg (dict): Dictionary to construct and config norm layer. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - num_classes, - in_channels, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=3, - scales_per_octave=1, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - reduction='none', - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1.5), - num_head_convs=1, - num_protos=32, - use_ohem=True, - conv_cfg=None, - norm_cfg=None, - init_cfg=dict( - type='Xavier', - distribution='uniform', - bias=0, - layer='Conv2d'), - **kwargs): - self.num_head_convs = num_head_convs - self.num_protos = num_protos - self.use_ohem = use_ohem - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(YOLACTHead, self).__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - loss_bbox=loss_bbox, - anchor_generator=anchor_generator, - init_cfg=init_cfg, - **kwargs) - if self.use_ohem: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.sampling = False - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.head_convs = ModuleList() - for i in range(self.num_head_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.head_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.conv_cls = nn.Conv2d( - self.feat_channels, - self.num_base_priors * self.cls_out_channels, - 3, - padding=1) - self.conv_reg = nn.Conv2d( - self.feat_channels, self.num_base_priors * 4, 3, padding=1) - self.conv_coeff = nn.Conv2d( - self.feat_channels, - self.num_base_priors * self.num_protos, - 3, - padding=1) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level \ - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale \ - level, the channels number is num_anchors * 4. - coeff_pred (Tensor): Mask coefficients for a single scale \ - level, the channels number is num_anchors * num_protos. - """ - for head_conv in self.head_convs: - x = head_conv(x) - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - coeff_pred = self.conv_coeff(x).tanh() - return cls_score, bbox_pred, coeff_pred - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """A combination of the func:``AnchorHead.loss`` and - func:``SSDHead.loss``. - - When ``self.use_ohem == True``, it functions like ``SSDHead.loss``, - otherwise, it follows ``AnchorHead.loss``. Besides, it additionally - returns ``sampling_results``. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. 
Default: None - - Returns: - tuple: - dict[str, Tensor]: A dictionary of loss components. - List[:obj:``SamplingResult``]: Sampler results for each image. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - unmap_outputs=not self.use_ohem, - return_sampling_results=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, sampling_results) = cls_reg_targets - - if self.use_ohem: - num_images = len(img_metas) - all_cls_scores = torch.cat([ - s.permute(0, 2, 3, 1).reshape( - num_images, -1, self.cls_out_channels) for s in cls_scores - ], 1) - all_labels = torch.cat(labels_list, -1).view(num_images, -1) - all_label_weights = torch.cat(label_weights_list, - -1).view(num_images, -1) - all_bbox_preds = torch.cat([ - b.permute(0, 2, 3, 1).reshape(num_images, -1, 4) - for b in bbox_preds - ], -2) - all_bbox_targets = torch.cat(bbox_targets_list, - -2).view(num_images, -1, 4) - all_bbox_weights = torch.cat(bbox_weights_list, - -2).view(num_images, -1, 4) - - # concat all level anchors to a single tensor - all_anchors = [] - for i in range(num_images): - all_anchors.append(torch.cat(anchor_list[i])) - - # check NaN and Inf - assert torch.isfinite(all_cls_scores).all().item(), \ - 'classification scores become infinite or NaN!' - assert torch.isfinite(all_bbox_preds).all().item(), \ - 'bbox predications become infinite or NaN!' 
- - losses_cls, losses_bbox = multi_apply( - self.loss_single_OHEM, - all_cls_scores, - all_bbox_preds, - all_anchors, - all_labels, - all_label_weights, - all_bbox_targets, - all_bbox_weights, - num_total_samples=num_total_pos) - else: - num_total_samples = ( - num_total_pos + - num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox), sampling_results - - def loss_single_OHEM(self, cls_score, bbox_pred, anchors, labels, - label_weights, bbox_targets, bbox_weights, - num_total_samples): - """"See func:``SSDHead.loss``.""" - loss_cls_all = self.loss_cls(cls_score, labels, label_weights) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((labels >= 0) & (labels < self.num_classes)).nonzero( - as_tuple=False).reshape(-1) - neg_inds = (labels == self.num_classes).nonzero( - as_tuple=False).view(-1) - - num_pos_samples = pos_inds.size(0) - if num_pos_samples == 0: - num_neg_samples = neg_inds.size(0) - else: - num_neg_samples = self.train_cfg.neg_pos_ratio * num_pos_samples - if num_neg_samples > neg_inds.size(0): - num_neg_samples = neg_inds.size(0) - topk_loss_cls_neg, _ = loss_cls_all[neg_inds].topk(num_neg_samples) - loss_cls_pos = loss_cls_all[pos_inds].sum() - loss_cls_neg = topk_loss_cls_neg.sum() - loss_cls = (loss_cls_pos + loss_cls_neg) / num_total_samples - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_bbox = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - return loss_cls[None], loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'coeff_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - coeff_preds, - img_metas, - cfg=None, - rescale=False): - """"Similar to func:``AnchorHead.get_bboxes``, but additionally - processes coeff_preds. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - with shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - coeff_preds (list[Tensor]): Mask coefficients for each scale - level with shape (N, num_anchors * num_protos, H, W) - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space. - Default: False. - - Returns: - list[tuple[Tensor, Tensor, Tensor]]: Each item in result_list is - a 3-tuple. The first item is an (n, 5) tensor, where the - first 4 columns are bounding box positions - (tl_x, tl_y, br_x, br_y) and the 5-th column is a score - between 0 and 1. 
The second item is an (n,) tensor where each - item is the predicted class label of the corresponding box. - The third item is an (n, num_protos) tensor where each item - is the predicted mask coefficients of instance inside the - corresponding box. - """ - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - - device = cls_scores[0].device - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - - det_bboxes = [] - det_labels = [] - det_coeffs = [] - for img_id in range(len(img_metas)): - cls_score_list = select_single_mlvl(cls_scores, img_id) - bbox_pred_list = select_single_mlvl(bbox_preds, img_id) - coeff_pred_list = select_single_mlvl(coeff_preds, img_id) - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - bbox_res = self._get_bboxes_single(cls_score_list, bbox_pred_list, - coeff_pred_list, mlvl_anchors, - img_shape, scale_factor, cfg, - rescale) - det_bboxes.append(bbox_res[0]) - det_labels.append(bbox_res[1]) - det_coeffs.append(bbox_res[2]) - return det_bboxes, det_labels, det_coeffs - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - coeff_preds_list, - mlvl_anchors, - img_shape, - scale_factor, - cfg, - rescale=False): - """"Similar to func:``AnchorHead._get_bboxes_single``, but additionally - processes coeff_preds_list and uses fast NMS instead of traditional - NMS. - - Args: - cls_score_list (list[Tensor]): Box scores for a single scale level - Has shape (num_anchors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas for a single - scale level with shape (num_anchors * 4, H, W). - coeff_preds_list (list[Tensor]): Mask coefficients for a single - scale level with shape (num_anchors * num_protos, H, W). - mlvl_anchors (list[Tensor]): Box reference for a single scale level - with shape (num_total_anchors, 4). - img_shape (tuple[int]): Shape of the input image, - (height, width, 3). - scale_factor (ndarray): Scale factor of the image arange as - (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - - Returns: - tuple[Tensor, Tensor, Tensor]: The first item is an (n, 5) tensor, - where the first 4 columns are bounding box positions - (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between - 0 and 1. The second item is an (n,) tensor where each item is - the predicted class label of the corresponding box. The third - item is an (n, num_protos) tensor where each item is the - predicted mask coefficients of instance inside the - corresponding box. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_score_list) == len(bbox_pred_list) == len(mlvl_anchors) - nms_pre = cfg.get('nms_pre', -1) - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_coeffs = [] - for cls_score, bbox_pred, coeff_pred, anchors in \ - zip(cls_score_list, bbox_pred_list, - coeff_preds_list, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - coeff_pred = coeff_pred.permute(1, 2, - 0).reshape(-1, self.num_protos) - - if 0 < nms_pre < scores.shape[0]: - # Get maximum scores for foreground classes. 
- if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - coeff_pred = coeff_pred[topk_inds, :] - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_coeffs.append(coeff_pred) - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_coeffs = torch.cat(mlvl_coeffs) - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - det_bboxes, det_labels, det_coeffs = fast_nms(mlvl_bboxes, mlvl_scores, - mlvl_coeffs, - cfg.score_thr, - cfg.iou_thr, cfg.top_k, - cfg.max_per_img) - return det_bboxes, det_labels, det_coeffs - - -@HEADS.register_module() -class YOLACTSegmHead(BaseModule): - """YOLACT segmentation head used in https://arxiv.org/abs/1904.02689. - - Apply a semantic segmentation loss on feature space using layers that are - only evaluated during training to increase performance with no speed - penalty. - - Args: - in_channels (int): Number of channels in the input feature map. - num_classes (int): Number of categories excluding the background - category. - loss_segm (dict): Config of semantic segmentation loss. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - num_classes, - in_channels=256, - loss_segm=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - init_cfg=dict( - type='Xavier', - distribution='uniform', - override=dict(name='segm_conv'))): - super(YOLACTSegmHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.loss_segm = build_loss(loss_segm) - self._init_layers() - self.fp16_enabled = False - - def _init_layers(self): - """Initialize layers of the head.""" - self.segm_conv = nn.Conv2d( - self.in_channels, self.num_classes, kernel_size=1) - - def forward(self, x): - """Forward feature from the upstream network. - - Args: - x (Tensor): Feature from the upstream network, which is - a 4D-tensor. - - Returns: - Tensor: Predicted semantic segmentation map with shape - (N, num_classes, H, W). - """ - return self.segm_conv(x) - - @force_fp32(apply_to=('segm_pred', )) - def loss(self, segm_pred, gt_masks, gt_labels): - """Compute loss of the head. - - Args: - segm_pred (list[Tensor]): Predicted semantic segmentation map - with shape (N, num_classes, H, W). - gt_masks (list[Tensor]): Ground truth masks for each image with - the same shape of the input image. - gt_labels (list[Tensor]): Class indices corresponding to each box. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - loss_segm = [] - num_imgs, num_classes, mask_h, mask_w = segm_pred.size() - for idx in range(num_imgs): - cur_segm_pred = segm_pred[idx] - cur_gt_masks = gt_masks[idx].float() - cur_gt_labels = gt_labels[idx] - segm_targets = self.get_targets(cur_segm_pred, cur_gt_masks, - cur_gt_labels) - if segm_targets is None: - loss = self.loss_segm(cur_segm_pred, - torch.zeros_like(cur_segm_pred), - torch.zeros_like(cur_segm_pred)) - else: - loss = self.loss_segm( - cur_segm_pred, - segm_targets, - avg_factor=num_imgs * mask_h * mask_w) - loss_segm.append(loss) - return dict(loss_segm=loss_segm) - - def get_targets(self, segm_pred, gt_masks, gt_labels): - """Compute semantic segmentation targets for each image. - - Args: - segm_pred (Tensor): Predicted semantic segmentation map - with shape (num_classes, H, W). - gt_masks (Tensor): Ground truth masks for each image with - the same shape of the input image. - gt_labels (Tensor): Class indices corresponding to each box. - - Returns: - Tensor: Semantic segmentation targets with shape - (num_classes, H, W). - """ - if gt_masks.size(0) == 0: - return None - num_classes, mask_h, mask_w = segm_pred.size() - with torch.no_grad(): - downsampled_masks = F.interpolate( - gt_masks.unsqueeze(0), (mask_h, mask_w), - mode='bilinear', - align_corners=False).squeeze(0) - downsampled_masks = downsampled_masks.gt(0.5).float() - segm_targets = torch.zeros_like(segm_pred, requires_grad=False) - for obj_idx in range(downsampled_masks.size(0)): - segm_targets[gt_labels[obj_idx] - 1] = torch.max( - segm_targets[gt_labels[obj_idx] - 1], - downsampled_masks[obj_idx]) - return segm_targets - - def simple_test(self, feats, img_metas, rescale=False): - """Test function without test-time augmentation.""" - raise NotImplementedError( - 'simple_test of YOLACTSegmHead is not implemented ' - 'because this head is only evaluated during training') - - -@HEADS.register_module() -class YOLACTProtonet(BaseModule): - """YOLACT mask head used in https://arxiv.org/abs/1904.02689. - - This head outputs the mask prototypes for YOLACT. - - Args: - in_channels (int): Number of channels in the input feature map. - proto_channels (tuple[int]): Output channels of protonet convs. - proto_kernel_sizes (tuple[int]): Kernel sizes of protonet convs. - include_last_relu (Bool): If keep the last relu of protonet. - num_protos (int): Number of prototypes. - num_classes (int): Number of categories excluding the background - category. - loss_mask_weight (float): Reweight the mask loss by this factor. - max_masks_to_train (int): Maximum number of masks to train for - each image. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - num_classes, - in_channels=256, - proto_channels=(256, 256, 256, None, 256, 32), - proto_kernel_sizes=(3, 3, 3, -2, 3, 1), - include_last_relu=True, - num_protos=32, - loss_mask_weight=1.0, - max_masks_to_train=100, - init_cfg=dict( - type='Xavier', - distribution='uniform', - override=dict(name='protonet'))): - super(YOLACTProtonet, self).__init__(init_cfg) - self.in_channels = in_channels - self.proto_channels = proto_channels - self.proto_kernel_sizes = proto_kernel_sizes - self.include_last_relu = include_last_relu - self.protonet = self._init_layers() - - self.loss_mask_weight = loss_mask_weight - self.num_protos = num_protos - self.num_classes = num_classes - self.max_masks_to_train = max_masks_to_train - self.fp16_enabled = False - - def _init_layers(self): - """A helper function to take a config setting and turn it into a - network.""" - # Possible patterns: - # ( 256, 3) -> conv - # ( 256,-2) -> deconv - # (None,-2) -> bilinear interpolate - in_channels = self.in_channels - protonets = ModuleList() - for num_channels, kernel_size in zip(self.proto_channels, - self.proto_kernel_sizes): - if kernel_size > 0: - layer = nn.Conv2d( - in_channels, - num_channels, - kernel_size, - padding=kernel_size // 2) - else: - if num_channels is None: - layer = InterpolateModule( - scale_factor=-kernel_size, - mode='bilinear', - align_corners=False) - else: - layer = nn.ConvTranspose2d( - in_channels, - num_channels, - -kernel_size, - padding=kernel_size // 2) - protonets.append(layer) - protonets.append(nn.ReLU(inplace=True)) - in_channels = num_channels if num_channels is not None \ - else in_channels - if not self.include_last_relu: - protonets = protonets[:-1] - return nn.Sequential(*protonets) - - def forward_dummy(self, x): - prototypes = self.protonet(x) - return prototypes - - def forward(self, x, coeff_pred, bboxes, img_meta, sampling_results=None): - """Forward feature from the upstream network to get prototypes and - linearly combine the prototypes, using masks coefficients, into - instance masks. Finally, crop the instance masks with given bboxes. - - Args: - x (Tensor): Feature from the upstream network, which is - a 4D-tensor. - coeff_pred (list[Tensor]): Mask coefficients for each scale - level with shape (N, num_anchors * num_protos, H, W). - bboxes (list[Tensor]): Box used for cropping with shape - (N, num_anchors * 4, H, W). During training, they are - ground truth boxes. During testing, they are predicted - boxes. - img_meta (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - sampling_results (List[:obj:``SamplingResult``]): Sampler results - for each image. - - Returns: - list[Tensor]: Predicted instance segmentation masks. - """ - prototypes = self.protonet(x) - prototypes = prototypes.permute(0, 2, 3, 1).contiguous() - - num_imgs = x.size(0) - - # The reason for not using self.training is that - # val workflow will have a dimension mismatch error. - # Note that this writing method is very tricky. 
- # Fix https://github.com/open-mmlab/mmdetection/issues/5978 - is_train_or_val_workflow = (coeff_pred[0].dim() == 4) - - # Train or val workflow - if is_train_or_val_workflow: - coeff_pred_list = [] - for coeff_pred_per_level in coeff_pred: - coeff_pred_per_level = \ - coeff_pred_per_level.permute( - 0, 2, 3, 1).reshape(num_imgs, -1, self.num_protos) - coeff_pred_list.append(coeff_pred_per_level) - coeff_pred = torch.cat(coeff_pred_list, dim=1) - - mask_pred_list = [] - for idx in range(num_imgs): - cur_prototypes = prototypes[idx] - cur_coeff_pred = coeff_pred[idx] - cur_bboxes = bboxes[idx] - cur_img_meta = img_meta[idx] - - # Testing state - if not is_train_or_val_workflow: - bboxes_for_cropping = cur_bboxes - else: - cur_sampling_results = sampling_results[idx] - pos_assigned_gt_inds = \ - cur_sampling_results.pos_assigned_gt_inds - bboxes_for_cropping = cur_bboxes[pos_assigned_gt_inds].clone() - pos_inds = cur_sampling_results.pos_inds - cur_coeff_pred = cur_coeff_pred[pos_inds] - - # Linearly combine the prototypes with the mask coefficients - mask_pred = cur_prototypes @ cur_coeff_pred.t() - mask_pred = torch.sigmoid(mask_pred) - - h, w = cur_img_meta['img_shape'][:2] - bboxes_for_cropping[:, 0] /= w - bboxes_for_cropping[:, 1] /= h - bboxes_for_cropping[:, 2] /= w - bboxes_for_cropping[:, 3] /= h - - mask_pred = self.crop(mask_pred, bboxes_for_cropping) - mask_pred = mask_pred.permute(2, 0, 1).contiguous() - mask_pred_list.append(mask_pred) - return mask_pred_list - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, gt_masks, gt_bboxes, img_meta, sampling_results): - """Compute loss of the head. - - Args: - mask_pred (list[Tensor]): Predicted prototypes with shape - (num_classes, H, W). - gt_masks (list[Tensor]): Ground truth masks for each image with - the same shape of the input image. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - img_meta (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - sampling_results (List[:obj:``SamplingResult``]): Sampler results - for each image. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - loss_mask = [] - num_imgs = len(mask_pred) - total_pos = 0 - for idx in range(num_imgs): - cur_mask_pred = mask_pred[idx] - cur_gt_masks = gt_masks[idx].float() - cur_gt_bboxes = gt_bboxes[idx] - cur_img_meta = img_meta[idx] - cur_sampling_results = sampling_results[idx] - - pos_assigned_gt_inds = cur_sampling_results.pos_assigned_gt_inds - num_pos = pos_assigned_gt_inds.size(0) - # Since we're producing (near) full image masks, - # it'd take too much vram to backprop on every single mask. - # Thus we select only a subset. - if num_pos > self.max_masks_to_train: - perm = torch.randperm(num_pos) - select = perm[:self.max_masks_to_train] - cur_mask_pred = cur_mask_pred[select] - pos_assigned_gt_inds = pos_assigned_gt_inds[select] - num_pos = self.max_masks_to_train - total_pos += num_pos - - gt_bboxes_for_reweight = cur_gt_bboxes[pos_assigned_gt_inds] - - mask_targets = self.get_targets(cur_mask_pred, cur_gt_masks, - pos_assigned_gt_inds) - if num_pos == 0: - loss = cur_mask_pred.sum() * 0. 
- elif mask_targets is None: - loss = F.binary_cross_entropy(cur_mask_pred, - torch.zeros_like(cur_mask_pred), - torch.zeros_like(cur_mask_pred)) - else: - cur_mask_pred = torch.clamp(cur_mask_pred, 0, 1) - loss = F.binary_cross_entropy( - cur_mask_pred, mask_targets, - reduction='none') * self.loss_mask_weight - - h, w = cur_img_meta['img_shape'][:2] - gt_bboxes_width = (gt_bboxes_for_reweight[:, 2] - - gt_bboxes_for_reweight[:, 0]) / w - gt_bboxes_height = (gt_bboxes_for_reweight[:, 3] - - gt_bboxes_for_reweight[:, 1]) / h - loss = loss.mean(dim=(1, - 2)) / gt_bboxes_width / gt_bboxes_height - loss = torch.sum(loss) - loss_mask.append(loss) - - if total_pos == 0: - total_pos += 1 # avoid nan - loss_mask = [x / total_pos for x in loss_mask] - - return dict(loss_mask=loss_mask) - - def get_targets(self, mask_pred, gt_masks, pos_assigned_gt_inds): - """Compute instance segmentation targets for each image. - - Args: - mask_pred (Tensor): Predicted prototypes with shape - (num_classes, H, W). - gt_masks (Tensor): Ground truth masks for each image with - the same shape of the input image. - pos_assigned_gt_inds (Tensor): GT indices of the corresponding - positive samples. - Returns: - Tensor: Instance segmentation targets with shape - (num_instances, H, W). - """ - if gt_masks.size(0) == 0: - return None - mask_h, mask_w = mask_pred.shape[-2:] - gt_masks = F.interpolate( - gt_masks.unsqueeze(0), (mask_h, mask_w), - mode='bilinear', - align_corners=False).squeeze(0) - gt_masks = gt_masks.gt(0.5).float() - mask_targets = gt_masks[pos_assigned_gt_inds] - return mask_targets - - def get_seg_masks(self, mask_pred, label_pred, img_meta, rescale): - """Resize, binarize, and format the instance mask predictions. - - Args: - mask_pred (Tensor): shape (N, H, W). - label_pred (Tensor): shape (N, ). - img_meta (dict): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If rescale is False, then returned masks will - fit the scale of imgs[0]. - Returns: - list[ndarray]: Mask predictions grouped by their predicted classes. - """ - ori_shape = img_meta['ori_shape'] - scale_factor = img_meta['scale_factor'] - if rescale: - img_h, img_w = ori_shape[:2] - else: - img_h = np.round(ori_shape[0] * scale_factor[1]).astype(np.int32) - img_w = np.round(ori_shape[1] * scale_factor[0]).astype(np.int32) - - cls_segms = [[] for _ in range(self.num_classes)] - if mask_pred.size(0) == 0: - return cls_segms - - mask_pred = F.interpolate( - mask_pred.unsqueeze(0), (img_h, img_w), - mode='bilinear', - align_corners=False).squeeze(0) > 0.5 - mask_pred = mask_pred.cpu().numpy().astype(np.uint8) - - for m, l in zip(mask_pred, label_pred): - cls_segms[l].append(m) - return cls_segms - - def crop(self, masks, boxes, padding=1): - """Crop predicted masks by zeroing out everything not in the predicted - bbox. - - Args: - masks (Tensor): shape [H, W, N]. - boxes (Tensor): bbox coords in relative point form with - shape [N, 4]. - - Return: - Tensor: The cropped masks. 
- """ - h, w, n = masks.size() - x1, x2 = self.sanitize_coordinates( - boxes[:, 0], boxes[:, 2], w, padding, cast=False) - y1, y2 = self.sanitize_coordinates( - boxes[:, 1], boxes[:, 3], h, padding, cast=False) - - rows = torch.arange( - w, device=masks.device, dtype=x1.dtype).view(1, -1, - 1).expand(h, w, n) - cols = torch.arange( - h, device=masks.device, dtype=x1.dtype).view(-1, 1, - 1).expand(h, w, n) - - masks_left = rows >= x1.view(1, 1, -1) - masks_right = rows < x2.view(1, 1, -1) - masks_up = cols >= y1.view(1, 1, -1) - masks_down = cols < y2.view(1, 1, -1) - - crop_mask = masks_left * masks_right * masks_up * masks_down - - return masks * crop_mask.float() - - def sanitize_coordinates(self, x1, x2, img_size, padding=0, cast=True): - """Sanitizes the input coordinates so that x1 < x2, x1 != x2, x1 >= 0, - and x2 <= image_size. Also converts from relative to absolute - coordinates and casts the results to long tensors. - - Warning: this does things in-place behind the scenes so - copy if necessary. - - Args: - _x1 (Tensor): shape (N, ). - _x2 (Tensor): shape (N, ). - img_size (int): Size of the input image. - padding (int): x1 >= padding, x2 <= image_size-padding. - cast (bool): If cast is false, the result won't be cast to longs. - - Returns: - tuple: - x1 (Tensor): Sanitized _x1. - x2 (Tensor): Sanitized _x2. - """ - x1 = x1 * img_size - x2 = x2 * img_size - if cast: - x1 = x1.long() - x2 = x2.long() - x1 = torch.min(x1, x2) - x2 = torch.max(x1, x2) - x1 = torch.clamp(x1 - padding, min=0) - x2 = torch.clamp(x2 + padding, max=img_size) - return x1, x2 - - def simple_test(self, - feats, - det_bboxes, - det_labels, - det_coeffs, - img_metas, - rescale=False): - """Test function without test-time augmentation. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - det_bboxes (list[Tensor]): BBox results of each image. each - element is (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - det_labels (list[Tensor]): BBox results of each image. each - element is (n, ) tensor, each element represents the class - label of the corresponding box. - det_coeffs (list[Tensor]): BBox coefficient of each image. each - element is (n, m) tensor, m is vector length. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list]: encoded masks. The c-th item in the outer list - corresponds to the c-th class. Given the c-th outer list, the - i-th item in that inner list is the mask for the i-th box with - class label c. - """ - num_imgs = len(img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - segm_results = [[[] for _ in range(self.num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - mask_preds = self.forward(feats[0], det_coeffs, _bboxes, img_metas) - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append([[] for _ in range(self.num_classes)]) - else: - segm_result = self.get_seg_masks(mask_preds[i], - det_labels[i], - img_metas[i], rescale) - segm_results.append(segm_result) - return segm_results - - -class InterpolateModule(BaseModule): - """This is a module version of F.interpolate. - - Any arguments you give it just get passed along for the ride. - """ - - def __init__(self, *args, init_cfg=None, **kwargs): - super().__init__(init_cfg) - - self.args = args - self.kwargs = kwargs - - def forward(self, x): - """Forward features from the upstream network.""" - return F.interpolate(x, *self.args, **self.kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolo_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolo_head.py deleted file mode 100644 index 08957e6a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolo_head.py +++ /dev/null @@ -1,619 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (ConvModule, bias_init_with_prob, constant_init, is_norm, - normal_init) -from mmcv.runner import force_fp32 - -from mmdet.core import (build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, images_to_levels, - multi_apply, multiclass_nms) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class YOLOV3Head(BaseDenseHead, BBoxTestMixin): - """YOLOV3Head Paper link: https://arxiv.org/abs/1804.02767. - - Args: - num_classes (int): The number of object classes (w/o background) - in_channels (List[int]): Number of input channels per scale. - out_channels (List[int]): The number of output channels per scale - before the final 1x1 layer. Default: (1024, 512, 256). - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - featmap_strides (List[int]): The stride of each scale. - Should be in descending order. Default: (32, 16, 8). - one_hot_smoother (float): Set a non-zero value to enable label-smooth - Default: 0. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - loss_cls (dict): Config of classification loss. - loss_conf (dict): Config of confidence loss. - loss_xy (dict): Config of xy coordinate loss. - loss_wh (dict): Config of wh coordinate loss. - train_cfg (dict): Training config of YOLOV3 head. Default: None. - test_cfg (dict): Testing config of YOLOV3 head. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - num_classes, - in_channels, - out_channels=(1024, 512, 256), - anchor_generator=dict( - type='YOLOAnchorGenerator', - base_sizes=[[(116, 90), (156, 198), (373, 326)], - [(30, 61), (62, 45), (59, 119)], - [(10, 13), (16, 30), (33, 23)]], - strides=[32, 16, 8]), - bbox_coder=dict(type='YOLOBBoxCoder'), - featmap_strides=[32, 16, 8], - one_hot_smoother=0., - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_conf=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_xy=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_wh=dict(type='MSELoss', loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=dict( - type='Normal', std=0.01, - override=dict(name='convs_pred'))): - super(YOLOV3Head, self).__init__(init_cfg) - # Check params - assert (len(in_channels) == len(out_channels) == len(featmap_strides)) - - self.num_classes = num_classes - self.in_channels = in_channels - self.out_channels = out_channels - self.featmap_strides = featmap_strides - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - if hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - self.one_hot_smoother = one_hot_smoother - - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.bbox_coder = build_bbox_coder(bbox_coder) - - self.prior_generator = build_prior_generator(anchor_generator) - - self.loss_cls = build_loss(loss_cls) - self.loss_conf = build_loss(loss_conf) - self.loss_xy = build_loss(loss_xy) - self.loss_wh = build_loss(loss_wh) - - self.num_base_priors = self.prior_generator.num_base_priors[0] - assert len( - self.prior_generator.num_base_priors) == len(featmap_strides) - self._init_layers() - - @property - def anchor_generator(self): - - warnings.warn('DeprecationWarning: `anchor_generator` is deprecated, ' - 'please use "prior_generator" instead') - return self.prior_generator - - @property - def num_anchors(self): - """ - Returns: - int: Number of anchors on each point of feature map. 
- """ - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'please use "num_base_priors" instead') - return self.num_base_priors - - @property - def num_levels(self): - return len(self.featmap_strides) - - @property - def num_attrib(self): - """int: number of attributes in pred_map, bboxes (4) + - objectness (1) + num_classes""" - - return 5 + self.num_classes - - def _init_layers(self): - self.convs_bridge = nn.ModuleList() - self.convs_pred = nn.ModuleList() - for i in range(self.num_levels): - conv_bridge = ConvModule( - self.in_channels[i], - self.out_channels[i], - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - conv_pred = nn.Conv2d(self.out_channels[i], - self.num_base_priors * self.num_attrib, 1) - - self.convs_bridge.append(conv_bridge) - self.convs_pred.append(conv_pred) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, mean=0, std=0.01) - if is_norm(m): - constant_init(m, 1) - - # Use prior in model initialization to improve stability - for conv_pred, stride in zip(self.convs_pred, self.featmap_strides): - bias = conv_pred.bias.reshape(self.num_base_priors, -1) - # init objectness with prior of 8 objects per feature map - # refer to https://github.com/ultralytics/yolov3 - nn.init.constant_(bias.data[:, 4], - bias_init_with_prob(8 / (608 / stride)**2)) - nn.init.constant_(bias.data[:, 5:], bias_init_with_prob(0.01)) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple[Tensor]: A tuple of multi-level predication map, each is a - 4D-tensor of shape (batch_size, 5+num_classes, height, width). - """ - - assert len(feats) == self.num_levels - pred_maps = [] - for i in range(self.num_levels): - x = feats[i] - x = self.convs_bridge[i](x) - pred_map = self.convs_pred[i](x) - pred_maps.append(pred_map) - - return tuple(pred_maps), - - @force_fp32(apply_to=('pred_maps', )) - def get_bboxes(self, - pred_maps, - img_metas, - cfg=None, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. It has - been accelerated since PR #5991. - - Args: - pred_maps (list[Tensor]): Raw predictions for a batch of images. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used. Default: None. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- """ - assert len(pred_maps) == self.num_levels - cfg = self.test_cfg if cfg is None else cfg - scale_factors = [img_meta['scale_factor'] for img_meta in img_metas] - - num_imgs = len(img_metas) - featmap_sizes = [pred_map.shape[-2:] for pred_map in pred_maps] - - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=pred_maps[0].device) - flatten_preds = [] - flatten_strides = [] - for pred, stride in zip(pred_maps, self.featmap_strides): - pred = pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, - self.num_attrib) - pred[..., :2].sigmoid_() - flatten_preds.append(pred) - flatten_strides.append( - pred.new_tensor(stride).expand(pred.size(1))) - - flatten_preds = torch.cat(flatten_preds, dim=1) - flatten_bbox_preds = flatten_preds[..., :4] - flatten_objectness = flatten_preds[..., 4].sigmoid() - flatten_cls_scores = flatten_preds[..., 5:].sigmoid() - flatten_anchors = torch.cat(mlvl_anchors) - flatten_strides = torch.cat(flatten_strides) - flatten_bboxes = self.bbox_coder.decode(flatten_anchors, - flatten_bbox_preds, - flatten_strides.unsqueeze(-1)) - - if with_nms and (flatten_objectness.size(0) == 0): - return torch.zeros((0, 5)), torch.zeros((0, )) - - if rescale: - flatten_bboxes /= flatten_bboxes.new_tensor( - scale_factors).unsqueeze(1) - - padding = flatten_bboxes.new_zeros(num_imgs, flatten_bboxes.shape[1], - 1) - flatten_cls_scores = torch.cat([flatten_cls_scores, padding], dim=-1) - - det_results = [] - for (bboxes, scores, objectness) in zip(flatten_bboxes, - flatten_cls_scores, - flatten_objectness): - # Filtering out all predictions with conf < conf_thr - conf_thr = cfg.get('conf_thr', -1) - if conf_thr > 0: - conf_inds = objectness >= conf_thr - bboxes = bboxes[conf_inds, :] - scores = scores[conf_inds, :] - objectness = objectness[conf_inds] - - det_bboxes, det_labels = multiclass_nms( - bboxes, - scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=objectness) - det_results.append(tuple([det_bboxes, det_labels])) - return det_results - - @force_fp32(apply_to=('pred_maps', )) - def loss(self, - pred_maps, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - pred_maps (list[Tensor]): Prediction map for each scale level, - shape (N, num_anchors * num_attrib, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - num_imgs = len(img_metas) - device = pred_maps[0][0].device - - featmap_sizes = [ - pred_maps[i].shape[-2:] for i in range(self.num_levels) - ] - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - anchor_list = [mlvl_anchors for _ in range(num_imgs)] - - responsible_flag_list = [] - for img_id in range(len(img_metas)): - responsible_flag_list.append( - self.prior_generator.responsible_flags(featmap_sizes, - gt_bboxes[img_id], - device)) - - target_maps_list, neg_maps_list = self.get_targets( - anchor_list, responsible_flag_list, gt_bboxes, gt_labels) - - losses_cls, losses_conf, losses_xy, losses_wh = multi_apply( - self.loss_single, pred_maps, target_maps_list, neg_maps_list) - - return dict( - loss_cls=losses_cls, - loss_conf=losses_conf, - loss_xy=losses_xy, - loss_wh=losses_wh) - - def loss_single(self, pred_map, target_map, neg_map): - """Compute loss of a single image from a batch. - - Args: - pred_map (Tensor): Raw predictions for a single level. - target_map (Tensor): The Ground-Truth target for a single level. - neg_map (Tensor): The negative masks for a single level. - - Returns: - tuple: - loss_cls (Tensor): Classification loss. - loss_conf (Tensor): Confidence loss. - loss_xy (Tensor): Regression loss of x, y coordinate. - loss_wh (Tensor): Regression loss of w, h coordinate. - """ - - num_imgs = len(pred_map) - pred_map = pred_map.permute(0, 2, 3, - 1).reshape(num_imgs, -1, self.num_attrib) - neg_mask = neg_map.float() - pos_mask = target_map[..., 4] - pos_and_neg_mask = neg_mask + pos_mask - pos_mask = pos_mask.unsqueeze(dim=-1) - if torch.max(pos_and_neg_mask) > 1.: - warnings.warn('There is overlap between pos and neg sample.') - pos_and_neg_mask = pos_and_neg_mask.clamp(min=0., max=1.) - - pred_xy = pred_map[..., :2] - pred_wh = pred_map[..., 2:4] - pred_conf = pred_map[..., 4] - pred_label = pred_map[..., 5:] - - target_xy = target_map[..., :2] - target_wh = target_map[..., 2:4] - target_conf = target_map[..., 4] - target_label = target_map[..., 5:] - - loss_cls = self.loss_cls(pred_label, target_label, weight=pos_mask) - loss_conf = self.loss_conf( - pred_conf, target_conf, weight=pos_and_neg_mask) - loss_xy = self.loss_xy(pred_xy, target_xy, weight=pos_mask) - loss_wh = self.loss_wh(pred_wh, target_wh, weight=pos_mask) - - return loss_cls, loss_conf, loss_xy, loss_wh - - def get_targets(self, anchor_list, responsible_flag_list, gt_bboxes_list, - gt_labels_list): - """Compute target maps for anchors in multiple images. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_total_anchors, 4). - responsible_flag_list (list[list[Tensor]]): Multi level responsible - flags of each image. Each element is a tensor of shape - (num_total_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - target_map_list (list[Tensor]): Target map of each level. - - neg_map_list (list[Tensor]): Negative map of each level. 
- """ - num_imgs = len(anchor_list) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - - results = multi_apply(self._get_targets_single, anchor_list, - responsible_flag_list, gt_bboxes_list, - gt_labels_list) - - all_target_maps, all_neg_maps = results - assert num_imgs == len(all_target_maps) == len(all_neg_maps) - target_maps_list = images_to_levels(all_target_maps, num_level_anchors) - neg_maps_list = images_to_levels(all_neg_maps, num_level_anchors) - - return target_maps_list, neg_maps_list - - def _get_targets_single(self, anchors, responsible_flags, gt_bboxes, - gt_labels): - """Generate matching bounding box prior and converted GT. - - Args: - anchors (list[Tensor]): Multi-level anchors of the image. - responsible_flags (list[Tensor]): Multi-level responsible flags of - anchors - gt_bboxes (Tensor): Ground truth bboxes of single image. - gt_labels (Tensor): Ground truth labels of single image. - - Returns: - tuple: - target_map (Tensor): Predication target map of each - scale level, shape (num_total_anchors, - 5+num_classes) - neg_map (Tensor): Negative map of each scale level, - shape (num_total_anchors,) - """ - - anchor_strides = [] - for i in range(len(anchors)): - anchor_strides.append( - torch.tensor(self.featmap_strides[i], - device=gt_bboxes.device).repeat(len(anchors[i]))) - concat_anchors = torch.cat(anchors) - concat_responsible_flags = torch.cat(responsible_flags) - - anchor_strides = torch.cat(anchor_strides) - assert len(anchor_strides) == len(concat_anchors) == \ - len(concat_responsible_flags) - assign_result = self.assigner.assign(concat_anchors, - concat_responsible_flags, - gt_bboxes) - sampling_result = self.sampler.sample(assign_result, concat_anchors, - gt_bboxes) - - target_map = concat_anchors.new_zeros( - concat_anchors.size(0), self.num_attrib) - - target_map[sampling_result.pos_inds, :4] = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes, - anchor_strides[sampling_result.pos_inds]) - - target_map[sampling_result.pos_inds, 4] = 1 - - gt_labels_one_hot = F.one_hot( - gt_labels, num_classes=self.num_classes).float() - if self.one_hot_smoother != 0: # label smooth - gt_labels_one_hot = gt_labels_one_hot * ( - 1 - self.one_hot_smoother - ) + self.one_hot_smoother / self.num_classes - target_map[sampling_result.pos_inds, 5:] = gt_labels_one_hot[ - sampling_result.pos_assigned_gt_inds] - - neg_map = concat_anchors.new_zeros( - concat_anchors.size(0), dtype=torch.uint8) - neg_map[sampling_result.neg_inds] = 1 - - return target_map, neg_map - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. 
- - Returns: - list[ndarray]: bbox results of each class - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) - - @force_fp32(apply_to=('pred_maps')) - def onnx_export(self, pred_maps, img_metas, with_nms=True): - num_levels = len(pred_maps) - pred_maps_list = [pred_maps[i].detach() for i in range(num_levels)] - - cfg = self.test_cfg - assert len(pred_maps_list) == self.num_levels - - device = pred_maps_list[0].device - batch_size = pred_maps_list[0].shape[0] - - featmap_sizes = [ - pred_maps_list[i].shape[-2:] for i in range(self.num_levels) - ] - mlvl_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), device=device, dtype=torch.long) - - multi_lvl_bboxes = [] - multi_lvl_cls_scores = [] - multi_lvl_conf_scores = [] - for i in range(self.num_levels): - # get some key info for current scale - pred_map = pred_maps_list[i] - stride = self.featmap_strides[i] - # (b,h, w, num_anchors*num_attrib) -> - # (b,h*w*num_anchors, num_attrib) - pred_map = pred_map.permute(0, 2, 3, - 1).reshape(batch_size, -1, - self.num_attrib) - # Inplace operation like - # ```pred_map[..., :2] = \torch.sigmoid(pred_map[..., :2])``` - # would create constant tensor when exporting to onnx - pred_map_conf = torch.sigmoid(pred_map[..., :2]) - pred_map_rest = pred_map[..., 2:] - pred_map = torch.cat([pred_map_conf, pred_map_rest], dim=-1) - pred_map_boxes = pred_map[..., :4] - multi_lvl_anchor = mlvl_anchors[i] - multi_lvl_anchor = multi_lvl_anchor.expand_as(pred_map_boxes) - bbox_pred = self.bbox_coder.decode(multi_lvl_anchor, - pred_map_boxes, stride) - # conf and cls - conf_pred = torch.sigmoid(pred_map[..., 4]) - cls_pred = torch.sigmoid(pred_map[..., 5:]).view( - batch_size, -1, self.num_classes) # Cls pred one-hot. 
- - # Get top-k prediction - from mmdet.core.export import get_k_for_topk - nms_pre = get_k_for_topk(nms_pre_tensor, bbox_pred.shape[1]) - if nms_pre > 0: - _, topk_inds = conf_pred.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - transformed_inds = ( - bbox_pred.shape[1] * batch_inds + topk_inds) - bbox_pred = bbox_pred.reshape(-1, - 4)[transformed_inds, :].reshape( - batch_size, -1, 4) - cls_pred = cls_pred.reshape( - -1, self.num_classes)[transformed_inds, :].reshape( - batch_size, -1, self.num_classes) - conf_pred = conf_pred.reshape(-1, 1)[transformed_inds].reshape( - batch_size, -1) - - # Save the result of current scale - multi_lvl_bboxes.append(bbox_pred) - multi_lvl_cls_scores.append(cls_pred) - multi_lvl_conf_scores.append(conf_pred) - - # Merge the results of different scales together - batch_mlvl_bboxes = torch.cat(multi_lvl_bboxes, dim=1) - batch_mlvl_scores = torch.cat(multi_lvl_cls_scores, dim=1) - batch_mlvl_conf_scores = torch.cat(multi_lvl_conf_scores, dim=1) - - # Replace multiclass_nms with ONNX::NonMaxSuppression in deployment - from mmdet.core.export import add_dummy_nms_for_onnx - conf_thr = cfg.get('conf_thr', -1) - score_thr = cfg.get('score_thr', -1) - # follow original pipeline of YOLOv3 - if conf_thr > 0: - mask = (batch_mlvl_conf_scores >= conf_thr).float() - batch_mlvl_conf_scores *= mask - if score_thr > 0: - mask = (batch_mlvl_scores > score_thr).float() - batch_mlvl_scores *= mask - batch_mlvl_conf_scores = batch_mlvl_conf_scores.unsqueeze(2).expand_as( - batch_mlvl_scores) - batch_mlvl_scores = batch_mlvl_scores * batch_mlvl_conf_scores - if with_nms: - max_output_boxes_per_class = cfg.nms.get( - 'max_output_boxes_per_class', 200) - iou_threshold = cfg.nms.get('iou_threshold', 0.5) - # keep aligned with original pipeline, improve - # mAP by 1% for YOLOv3 in ONNX - score_threshold = 0 - nms_pre = cfg.get('deploy_nms_pre', -1) - return add_dummy_nms_for_onnx( - batch_mlvl_bboxes, - batch_mlvl_scores, - max_output_boxes_per_class, - iou_threshold, - score_threshold, - nms_pre, - cfg.max_per_img, - ) - else: - return batch_mlvl_bboxes, batch_mlvl_scores diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolof_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolof_head.py deleted file mode 100644 index 1063524a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolof_head.py +++ /dev/null @@ -1,416 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import (ConvModule, bias_init_with_prob, constant_init, is_norm, - normal_init) -from mmcv.runner import force_fp32 - -from mmdet.core import anchor_inside_flags, multi_apply, reduce_mean, unmap -from ..builder import HEADS -from .anchor_head import AnchorHead - -INF = 1e8 - - -def levels_to_images(mlvl_tensor): - """Concat multi-level feature maps by image. - - [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] - Convert the shape of each element in mlvl_tensor from (N, C, H, W) to - (N, H*W , C), then split the element to N elements with shape (H*W, C), and - concat elements in same image of all level along first dimension. - - Args: - mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from - corresponding level. 
Each element is of shape (N, C, H, W) - - Returns: - list[torch.Tensor]: A list that contains N tensors and each tensor is - of shape (num_elements, C) - """ - batch_size = mlvl_tensor[0].size(0) - batch_list = [[] for _ in range(batch_size)] - channels = mlvl_tensor[0].size(1) - for t in mlvl_tensor: - t = t.permute(0, 2, 3, 1) - t = t.view(batch_size, -1, channels).contiguous() - for img in range(batch_size): - batch_list[img].append(t[img]) - return [torch.cat(item, 0) for item in batch_list] - - -@HEADS.register_module() -class YOLOFHead(AnchorHead): - """YOLOFHead Paper link: https://arxiv.org/abs/2103.09460. - - Args: - num_classes (int): The number of object classes (w/o background) - in_channels (List[int]): The number of input channels per scale. - cls_num_convs (int): The number of convolutions of cls branch. - Default 2. - reg_num_convs (int): The number of convolutions of reg branch. - Default 4. - norm_cfg (dict): Dictionary to construct and config norm layer. - """ - - def __init__(self, - num_classes, - in_channels, - num_cls_convs=2, - num_reg_convs=4, - norm_cfg=dict(type='BN', requires_grad=True), - **kwargs): - self.num_cls_convs = num_cls_convs - self.num_reg_convs = num_reg_convs - self.norm_cfg = norm_cfg - super(YOLOFHead, self).__init__(num_classes, in_channels, **kwargs) - - def _init_layers(self): - cls_subnet = [] - bbox_subnet = [] - for i in range(self.num_cls_convs): - cls_subnet.append( - ConvModule( - self.in_channels, - self.in_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg)) - for i in range(self.num_reg_convs): - bbox_subnet.append( - ConvModule( - self.in_channels, - self.in_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg)) - self.cls_subnet = nn.Sequential(*cls_subnet) - self.bbox_subnet = nn.Sequential(*bbox_subnet) - self.cls_score = nn.Conv2d( - self.in_channels, - self.num_base_priors * self.num_classes, - kernel_size=3, - stride=1, - padding=1) - self.bbox_pred = nn.Conv2d( - self.in_channels, - self.num_base_priors * 4, - kernel_size=3, - stride=1, - padding=1) - self.object_pred = nn.Conv2d( - self.in_channels, - self.num_base_priors, - kernel_size=3, - stride=1, - padding=1) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, mean=0, std=0.01) - if is_norm(m): - constant_init(m, 1) - - # Use prior in model initialization to improve stability - bias_cls = bias_init_with_prob(0.01) - torch.nn.init.constant_(self.cls_score.bias, bias_cls) - - def forward_single(self, feature): - cls_score = self.cls_score(self.cls_subnet(feature)) - N, _, H, W = cls_score.shape - cls_score = cls_score.view(N, -1, self.num_classes, H, W) - - reg_feat = self.bbox_subnet(feature) - bbox_reg = self.bbox_pred(reg_feat) - objectness = self.object_pred(reg_feat) - - # implicit objectness - objectness = objectness.view(N, -1, 1, H, W) - normalized_cls_score = cls_score + objectness - torch.log( - 1. + torch.clamp(cls_score.exp(), max=INF) + - torch.clamp(objectness.exp(), max=INF)) - normalized_cls_score = normalized_cls_score.view(N, -1, H, W) - return normalized_cls_score, bbox_reg - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. 
- - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (batch, num_anchors * num_classes, h, w) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (batch, num_anchors * 4, h, w) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == 1 - assert self.prior_generator.num_levels == 1 - - device = cls_scores[0].device - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - - # The output level is always 1 - anchor_list = [anchors[0] for anchors in anchor_list] - valid_flag_list = [valid_flags[0] for valid_flags in valid_flag_list] - - cls_scores_list = levels_to_images(cls_scores) - bbox_preds_list = levels_to_images(bbox_preds) - - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - cls_scores_list, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (batch_labels, batch_label_weights, num_total_pos, num_total_neg, - batch_bbox_weights, batch_pos_predicted_boxes, - batch_target_boxes) = cls_reg_targets - - flatten_labels = batch_labels.reshape(-1) - batch_label_weights = batch_label_weights.reshape(-1) - cls_score = cls_scores[0].permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - - num_total_samples = (num_total_pos + - num_total_neg) if self.sampling else num_total_pos - num_total_samples = reduce_mean( - cls_score.new_tensor(num_total_samples)).clamp_(1.0).item() - - # classification loss - loss_cls = self.loss_cls( - cls_score, - flatten_labels, - batch_label_weights, - avg_factor=num_total_samples) - - # regression loss - if batch_pos_predicted_boxes.shape[0] == 0: - # no pos sample - loss_bbox = batch_pos_predicted_boxes.sum() * 0 - else: - loss_bbox = self.loss_bbox( - batch_pos_predicted_boxes, - batch_target_boxes, - batch_bbox_weights.float(), - avg_factor=num_total_samples) - - return dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - def get_targets(self, - cls_scores_list, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - cls_scores_list (list[Tensor]): Classification scores of - each image. each is a 4D-tensor, the shape is - (h * w, num_anchors * num_classes). - bbox_preds_list (list[Tensor]): Bbox preds of each image. - each is a 4D-tensor, the shape is (h * w, num_anchors * 4). - anchor_list (list[Tensor]): Anchors of each image. Each element of - is a tensor of shape (h * w * num_anchors, 4). - valid_flag_list (list[Tensor]): Valid flags of each image. Each - element of is a tensor of shape (h * w * num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. 
- img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - batch_labels (Tensor): Label of all images. Each element \ - of is a tensor of shape (batch, h * w * num_anchors) - - batch_label_weights (Tensor): Label weights of all images \ - of is a tensor of shape (batch, h * w * num_anchors) - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - (all_labels, all_label_weights, pos_inds_list, neg_inds_list, - sampling_results_list) = results[:5] - rest_results = list(results[5:]) # user-added return values - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - - batch_labels = torch.stack(all_labels, 0) - batch_label_weights = torch.stack(all_label_weights, 0) - - res = (batch_labels, batch_label_weights, num_total_pos, num_total_neg) - for i, rests in enumerate(rest_results): # user-added return values - rest_results[i] = torch.cat(rests, 0) - - return res + tuple(rest_results) - - def _get_targets_single(self, - bbox_preds, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - bbox_preds (Tensor): Bbox prediction of the image, which - shape is (h * w ,4) - flat_anchors (Tensor): Anchors of the image, which shape is - (h * w * num_anchors ,4) - valid_flags (Tensor): Valid flags of the image, which shape is - (h * w * num_anchors,). - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - img_meta (dict): Meta info of the image. - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: - labels (Tensor): Labels of image, which shape is - (h * w * num_anchors, ). - label_weights (Tensor): Label weights of image, which shape is - (h * w * num_anchors, ). - pos_inds (Tensor): Pos index of image. 
- neg_inds (Tensor): Neg index of image. - sampling_result (obj:`SamplingResult`): Sampling result. - pos_bbox_weights (Tensor): The Weight of using to calculate - the bbox branch loss, which shape is (num, ). - pos_predicted_boxes (Tensor): boxes predicted value of - using to calculate the bbox branch loss, which shape is - (num, 4). - pos_target_boxes (Tensor): boxes target value of - using to calculate the bbox branch loss, which shape is - (num, 4). - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 8 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - bbox_preds = bbox_preds.reshape(-1, 4) - bbox_preds = bbox_preds[inside_flags, :] - - # decoded bbox - decoder_bbox_preds = self.bbox_coder.decode(anchors, bbox_preds) - assign_result = self.assigner.assign( - decoder_bbox_preds, anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - - pos_bbox_weights = assign_result.get_extra_property('pos_idx') - pos_predicted_boxes = assign_result.get_extra_property( - 'pos_predicted_boxes') - pos_target_boxes = assign_result.get_extra_property('target_boxes') - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - num_valid_anchors = anchors.shape[0] - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, - fill=self.num_classes) # fill bg label - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - - return (labels, label_weights, pos_inds, neg_inds, sampling_result, - pos_bbox_weights, pos_predicted_boxes, pos_target_boxes) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolox_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolox_head.py deleted file mode 100644 index de3f93cc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/dense_heads/yolox_head.py +++ /dev/null @@ -1,491 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (ConvModule, DepthwiseSeparableConvModule, - bias_init_with_prob) -from mmcv.ops.nms import batched_nms -from mmcv.runner import force_fp32 - -from mmdet.core import (MlvlPointGenerator, bbox_xyxy_to_cxcywh, - build_assigner, build_sampler, multi_apply, - reduce_mean) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class YOLOXHead(BaseDenseHead, BBoxTestMixin): - """YOLOXHead head used in `YOLOX `_. 
- - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels in stacking convs. - Default: 256 - stacked_convs (int): Number of stacking convs of the head. - Default: 2. - strides (tuple): Downsample factor of each feature map. - use_depthwise (bool): Whether to depthwise separable convolution in - blocks. Default: False - dcn_on_last_conv (bool): If true, use dcn in the last layer of - towers. Default: False. - conv_bias (bool | str): If specified as `auto`, it will be decided by - the norm_cfg. Bias of conv will be set as True if `norm_cfg` is - None, otherwise False. Default: "auto". - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer. Default: None. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - loss_obj (dict): Config of objectness loss. - loss_l1 (dict): Config of L1 loss. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - stacked_convs=2, - strides=[8, 16, 32], - use_depthwise=False, - dcn_on_last_conv=False, - conv_bias='auto', - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0), - loss_bbox=dict( - type='IoULoss', - mode='square', - eps=1e-16, - reduction='sum', - loss_weight=5.0), - loss_obj=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0), - loss_l1=dict(type='L1Loss', reduction='sum', loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=dict( - type='Kaiming', - layer='Conv2d', - a=math.sqrt(5), - distribution='uniform', - mode='fan_in', - nonlinearity='leaky_relu')): - - super().__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.cls_out_channels = num_classes - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.use_depthwise = use_depthwise - self.dcn_on_last_conv = dcn_on_last_conv - assert conv_bias == 'auto' or isinstance(conv_bias, bool) - self.conv_bias = conv_bias - self.use_sigmoid_cls = True - - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_obj = build_loss(loss_obj) - - self.use_l1 = False # This flag will be modified by hooks. 
- self.loss_l1 = build_loss(loss_l1) - - self.prior_generator = MlvlPointGenerator(strides, offset=0) - - self.test_cfg = test_cfg - self.train_cfg = train_cfg - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.fp16_enabled = False - self._init_layers() - - def _init_layers(self): - self.multi_level_cls_convs = nn.ModuleList() - self.multi_level_reg_convs = nn.ModuleList() - self.multi_level_conv_cls = nn.ModuleList() - self.multi_level_conv_reg = nn.ModuleList() - self.multi_level_conv_obj = nn.ModuleList() - for _ in self.strides: - self.multi_level_cls_convs.append(self._build_stacked_convs()) - self.multi_level_reg_convs.append(self._build_stacked_convs()) - conv_cls, conv_reg, conv_obj = self._build_predictor() - self.multi_level_conv_cls.append(conv_cls) - self.multi_level_conv_reg.append(conv_reg) - self.multi_level_conv_obj.append(conv_obj) - - def _build_stacked_convs(self): - """Initialize conv layers of a single level head.""" - conv = DepthwiseSeparableConvModule \ - if self.use_depthwise else ConvModule - stacked_convs = [] - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - stacked_convs.append( - conv( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=self.conv_bias)) - return nn.Sequential(*stacked_convs) - - def _build_predictor(self): - """Initialize predictor layers of a single level head.""" - conv_cls = nn.Conv2d(self.feat_channels, self.cls_out_channels, 1) - conv_reg = nn.Conv2d(self.feat_channels, 4, 1) - conv_obj = nn.Conv2d(self.feat_channels, 1, 1) - return conv_cls, conv_reg, conv_obj - - def init_weights(self): - super(YOLOXHead, self).init_weights() - # Use prior in model initialization to improve stability - bias_init = bias_init_with_prob(0.01) - for conv_cls, conv_obj in zip(self.multi_level_conv_cls, - self.multi_level_conv_obj): - conv_cls.bias.data.fill_(bias_init) - conv_obj.bias.data.fill_(bias_init) - - def forward_single(self, x, cls_convs, reg_convs, conv_cls, conv_reg, - conv_obj): - """Forward feature of a single scale level.""" - - cls_feat = cls_convs(x) - reg_feat = reg_convs(x) - - cls_score = conv_cls(cls_feat) - bbox_pred = conv_reg(reg_feat) - objectness = conv_obj(reg_feat) - - return cls_score, bbox_pred, objectness - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - Returns: - tuple[Tensor]: A tuple of multi-level predication map, each is a - 4D-tensor of shape (batch_size, 5+num_classes, height, width). - """ - - return multi_apply(self.forward_single, feats, - self.multi_level_cls_convs, - self.multi_level_reg_convs, - self.multi_level_conv_cls, - self.multi_level_conv_reg, - self.multi_level_conv_obj) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'objectnesses')) - def get_bboxes(self, - cls_scores, - bbox_preds, - objectnesses, - img_metas=None, - cfg=None, - rescale=False, - with_nms=True): - """Transform network outputs of a batch into bbox results. 
- Args: - cls_scores (list[Tensor]): Classification scores for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * 4, H, W). - objectnesses (list[Tensor], Optional): Score factor for - all scale level, each is a 4D-tensor, has shape - (batch_size, 1, H, W). - img_metas (list[dict], Optional): Image meta info. Default None. - cfg (mmcv.Config, Optional): Test / postprocessing configuration, - if None, test_cfg would be used. Default None. - rescale (bool): If True, return boxes in original image space. - Default False. - with_nms (bool): If True, do nms before return boxes. - Default True. - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. The second item is a - (n,) tensor where each item is the predicted class label of - the corresponding box. - """ - assert len(cls_scores) == len(bbox_preds) == len(objectnesses) - cfg = self.test_cfg if cfg is None else cfg - scale_factors = [img_meta['scale_factor'] for img_meta in img_metas] - - num_imgs = len(img_metas) - featmap_sizes = [cls_score.shape[2:] for cls_score in cls_scores] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=cls_scores[0].dtype, - device=cls_scores[0].device, - with_stride=True) - - # flatten cls_scores, bbox_preds and objectness - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(num_imgs, -1, - self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) - for bbox_pred in bbox_preds - ] - flatten_objectness = [ - objectness.permute(0, 2, 3, 1).reshape(num_imgs, -1) - for objectness in objectnesses - ] - - flatten_cls_scores = torch.cat(flatten_cls_scores, dim=1).sigmoid() - flatten_bbox_preds = torch.cat(flatten_bbox_preds, dim=1) - flatten_objectness = torch.cat(flatten_objectness, dim=1).sigmoid() - flatten_priors = torch.cat(mlvl_priors) - - flatten_bboxes = self._bbox_decode(flatten_priors, flatten_bbox_preds) - - if rescale: - flatten_bboxes[..., :4] /= flatten_bboxes.new_tensor( - scale_factors).unsqueeze(1) - - result_list = [] - for img_id in range(len(img_metas)): - cls_scores = flatten_cls_scores[img_id] - score_factor = flatten_objectness[img_id] - bboxes = flatten_bboxes[img_id] - - result_list.append( - self._bboxes_nms(cls_scores, bboxes, score_factor, cfg)) - - return result_list - - def _bbox_decode(self, priors, bbox_preds): - xys = (bbox_preds[..., :2] * priors[:, 2:]) + priors[:, :2] - whs = bbox_preds[..., 2:].exp() * priors[:, 2:] - - tl_x = (xys[..., 0] - whs[..., 0] / 2) - tl_y = (xys[..., 1] - whs[..., 1] / 2) - br_x = (xys[..., 0] + whs[..., 0] / 2) - br_y = (xys[..., 1] + whs[..., 1] / 2) - - decoded_bboxes = torch.stack([tl_x, tl_y, br_x, br_y], -1) - return decoded_bboxes - - def _bboxes_nms(self, cls_scores, bboxes, score_factor, cfg): - max_scores, labels = torch.max(cls_scores, 1) - valid_mask = score_factor * max_scores >= cfg.score_thr - - bboxes = bboxes[valid_mask] - scores = max_scores[valid_mask] * score_factor[valid_mask] - labels = labels[valid_mask] - - if labels.numel() == 0: - return bboxes, labels - else: - dets, keep = batched_nms(bboxes, scores, labels, cfg.nms) - return dets, labels[keep] - - 
@force_fp32(apply_to=('cls_scores', 'bbox_preds', 'objectnesses')) - def loss(self, - cls_scores, - bbox_preds, - objectnesses, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_priors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_priors * 4. - objectnesses (list[Tensor], Optional): Score factor for - all scale level, each is a 4D-tensor, has shape - (batch_size, 1, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - """ - num_imgs = len(img_metas) - featmap_sizes = [cls_score.shape[2:] for cls_score in cls_scores] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=cls_scores[0].dtype, - device=cls_scores[0].device, - with_stride=True) - - flatten_cls_preds = [ - cls_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, - self.cls_out_channels) - for cls_pred in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) - for bbox_pred in bbox_preds - ] - flatten_objectness = [ - objectness.permute(0, 2, 3, 1).reshape(num_imgs, -1) - for objectness in objectnesses - ] - - flatten_cls_preds = torch.cat(flatten_cls_preds, dim=1) - flatten_bbox_preds = torch.cat(flatten_bbox_preds, dim=1) - flatten_objectness = torch.cat(flatten_objectness, dim=1) - flatten_priors = torch.cat(mlvl_priors) - flatten_bboxes = self._bbox_decode(flatten_priors, flatten_bbox_preds) - - (pos_masks, cls_targets, obj_targets, bbox_targets, l1_targets, - num_fg_imgs) = multi_apply( - self._get_target_single, flatten_cls_preds.detach(), - flatten_objectness.detach(), - flatten_priors.unsqueeze(0).repeat(num_imgs, 1, 1), - flatten_bboxes.detach(), gt_bboxes, gt_labels) - - # The experimental results show that ‘reduce_mean’ can improve - # performance on the COCO dataset. - num_pos = torch.tensor( - sum(num_fg_imgs), - dtype=torch.float, - device=flatten_cls_preds.device) - num_total_samples = max(reduce_mean(num_pos), 1.0) - - pos_masks = torch.cat(pos_masks, 0) - cls_targets = torch.cat(cls_targets, 0) - obj_targets = torch.cat(obj_targets, 0) - bbox_targets = torch.cat(bbox_targets, 0) - if self.use_l1: - l1_targets = torch.cat(l1_targets, 0) - - loss_bbox = self.loss_bbox( - flatten_bboxes.view(-1, 4)[pos_masks], - bbox_targets) / num_total_samples - loss_obj = self.loss_obj(flatten_objectness.view(-1, 1), - obj_targets) / num_total_samples - loss_cls = self.loss_cls( - flatten_cls_preds.view(-1, self.num_classes)[pos_masks], - cls_targets) / num_total_samples - - loss_dict = dict( - loss_cls=loss_cls, loss_bbox=loss_bbox, loss_obj=loss_obj) - - if self.use_l1: - loss_l1 = self.loss_l1( - flatten_bbox_preds.view(-1, 4)[pos_masks], - l1_targets) / num_total_samples - loss_dict.update(loss_l1=loss_l1) - - return loss_dict - - @torch.no_grad() - def _get_target_single(self, cls_preds, objectness, priors, decoded_bboxes, - gt_bboxes, gt_labels): - """Compute classification, regression, and objectness targets for - priors in a single image. 
- Args: - cls_preds (Tensor): Classification predictions of one image, - a 2D-Tensor with shape [num_priors, num_classes] - objectness (Tensor): Objectness predictions of one image, - a 1D-Tensor with shape [num_priors] - priors (Tensor): All priors of one image, a 2D-Tensor with shape - [num_priors, 4] in [cx, xy, stride_w, stride_y] format. - decoded_bboxes (Tensor): Decoded bboxes predictions of one image, - a 2D-Tensor with shape [num_priors, 4] in [tl_x, tl_y, - br_x, br_y] format. - gt_bboxes (Tensor): Ground truth bboxes of one image, a 2D-Tensor - with shape [num_gts, 4] in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth labels of one image, a Tensor - with shape [num_gts]. - """ - - num_priors = priors.size(0) - num_gts = gt_labels.size(0) - gt_bboxes = gt_bboxes.to(decoded_bboxes.dtype) - # No target - if num_gts == 0: - cls_target = cls_preds.new_zeros((0, self.num_classes)) - bbox_target = cls_preds.new_zeros((0, 4)) - l1_target = cls_preds.new_zeros((0, 4)) - obj_target = cls_preds.new_zeros((num_priors, 1)) - foreground_mask = cls_preds.new_zeros(num_priors).bool() - return (foreground_mask, cls_target, obj_target, bbox_target, - l1_target, 0) - - # YOLOX uses center priors with 0.5 offset to assign targets, - # but use center priors without offset to regress bboxes. - offset_priors = torch.cat( - [priors[:, :2] + priors[:, 2:] * 0.5, priors[:, 2:]], dim=-1) - - assign_result = self.assigner.assign( - cls_preds.sigmoid() * objectness.unsqueeze(1).sigmoid(), - offset_priors, decoded_bboxes, gt_bboxes, gt_labels) - - sampling_result = self.sampler.sample(assign_result, priors, gt_bboxes) - pos_inds = sampling_result.pos_inds - num_pos_per_img = pos_inds.size(0) - - pos_ious = assign_result.max_overlaps[pos_inds] - # IOU aware classification score - cls_target = F.one_hot(sampling_result.pos_gt_labels, - self.num_classes) * pos_ious.unsqueeze(-1) - obj_target = torch.zeros_like(objectness).unsqueeze(-1) - obj_target[pos_inds] = 1 - bbox_target = sampling_result.pos_gt_bboxes - l1_target = cls_preds.new_zeros((num_pos_per_img, 4)) - if self.use_l1: - l1_target = self._get_l1_target(l1_target, bbox_target, - priors[pos_inds]) - foreground_mask = torch.zeros_like(objectness).to(torch.bool) - foreground_mask[pos_inds] = 1 - return (foreground_mask, cls_target, obj_target, bbox_target, - l1_target, num_pos_per_img) - - def _get_l1_target(self, l1_target, gt_bboxes, priors, eps=1e-8): - """Convert gt bboxes to center offset and log width height.""" - gt_cxcywh = bbox_xyxy_to_cxcywh(gt_bboxes) - l1_target[:, :2] = (gt_cxcywh[:, :2] - priors[:, :2]) / priors[:, 2:] - l1_target[:, 2:] = torch.log(gt_cxcywh[:, 2:] / priors[:, 2:] + eps) - return l1_target diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/__init__.py deleted file mode 100644 index 5f2b3088..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .atss import ATSS -from .autoassign import AutoAssign -from .base import BaseDetector -from .cascade_rcnn import CascadeRCNN -from .centernet import CenterNet -from .cornernet import CornerNet -from .deformable_detr import DeformableDETR -from .detr import DETR -from .fast_rcnn import FastRCNN -from .faster_rcnn import FasterRCNN -from .fcos import FCOS -from .fovea import FOVEA -from .fsaf import FSAF -from .gfl import GFL -from .grid_rcnn import GridRCNN -from .htc import HybridTaskCascade -from .kd_one_stage import KnowledgeDistillationSingleStageDetector -from .lad import LAD -from .mask2former import Mask2Former -from .mask_rcnn import MaskRCNN -from .mask_scoring_rcnn import MaskScoringRCNN -from .maskformer import MaskFormer -from .nasfcos import NASFCOS -from .paa import PAA -from .panoptic_fpn import PanopticFPN -from .panoptic_two_stage_segmentor import TwoStagePanopticSegmentor -from .point_rend import PointRend -from .queryinst import QueryInst -from .reppoints_detector import RepPointsDetector -from .retinanet import RetinaNet -from .rpn import RPN -from .scnet import SCNet -from .single_stage import SingleStageDetector -from .solo import SOLO -from .sparse_rcnn import SparseRCNN -from .tood import TOOD -from .trident_faster_rcnn import TridentFasterRCNN -from .two_stage import TwoStageDetector -from .vfnet import VFNet -from .yolact import YOLACT -from .yolo import YOLOV3 -from .yolof import YOLOF -from .yolox import YOLOX - -__all__ = [ - 'ATSS', 'BaseDetector', 'SingleStageDetector', 'TwoStageDetector', 'RPN', - 'KnowledgeDistillationSingleStageDetector', 'FastRCNN', 'FasterRCNN', - 'MaskRCNN', 'CascadeRCNN', 'HybridTaskCascade', 'RetinaNet', 'FCOS', - 'GridRCNN', 'MaskScoringRCNN', 'RepPointsDetector', 'FOVEA', 'FSAF', - 'NASFCOS', 'PointRend', 'GFL', 'CornerNet', 'PAA', 'YOLOV3', 'YOLACT', - 'VFNet', 'DETR', 'TridentFasterRCNN', 'SparseRCNN', 'SCNet', 'SOLO', - 'DeformableDETR', 'AutoAssign', 'YOLOF', 'CenterNet', 'YOLOX', - 'TwoStagePanopticSegmentor', 'PanopticFPN', 'QueryInst', 'LAD', 'TOOD', - 'MaskFormer', 'Mask2Former' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/atss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/atss.py deleted file mode 100644 index 00f1acd9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/atss.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class ATSS(SingleStageDetector): - """Implementation of `ATSS `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(ATSS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/autoassign.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/autoassign.py deleted file mode 100644 index 30ab7207..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/autoassign.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class AutoAssign(SingleStageDetector): - """Implementation of `AutoAssign: Differentiable Label Assignment for Dense - Object Detection `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(AutoAssign, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/base.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/base.py deleted file mode 100644 index bf64bce6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/base.py +++ /dev/null @@ -1,360 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import mmcv -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import BaseModule, auto_fp16 - -from mmdet.core.visualization import imshow_det_bboxes - - -class BaseDetector(BaseModule, metaclass=ABCMeta): - """Base class for detectors.""" - - def __init__(self, init_cfg=None): - super(BaseDetector, self).__init__(init_cfg) - self.fp16_enabled = False - - @property - def with_neck(self): - """bool: whether the detector has a neck""" - return hasattr(self, 'neck') and self.neck is not None - - # TODO: these properties need to be carefully handled - # for both single stage & two stage detectors - @property - def with_shared_head(self): - """bool: whether the detector has a shared head in the RoI Head""" - return hasattr(self, 'roi_head') and self.roi_head.with_shared_head - - @property - def with_bbox(self): - """bool: whether the detector has a bbox head""" - return ((hasattr(self, 'roi_head') and self.roi_head.with_bbox) - or (hasattr(self, 'bbox_head') and self.bbox_head is not None)) - - @property - def with_mask(self): - """bool: whether the detector has a mask head""" - return ((hasattr(self, 'roi_head') and self.roi_head.with_mask) - or (hasattr(self, 'mask_head') and self.mask_head is not None)) - - @abstractmethod - def extract_feat(self, imgs): - """Extract features from images.""" - pass - - def extract_feats(self, imgs): - """Extract features from multiple images. - - Args: - imgs (list[torch.Tensor]): A list of images. The images are - augmented from the same image but in different ways. - - Returns: - list[torch.Tensor]: Features of different images - """ - assert isinstance(imgs, list) - return [self.extract_feat(img) for img in imgs] - - def forward_train(self, imgs, img_metas, **kwargs): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys, see - :class:`mmdet.datasets.pipelines.Collect`. - kwargs (keyword arguments): Specific to concrete implementation. - """ - # NOTE the batched image size information may be useful, e.g. - # in DETR, this is needed for the construction of masks, which is - # then used for the transformer_head. 
- batch_input_shape = tuple(imgs[0].size()[-2:]) - for img_meta in img_metas: - img_meta['batch_input_shape'] = batch_input_shape - - async def async_simple_test(self, img, img_metas, **kwargs): - raise NotImplementedError - - @abstractmethod - def simple_test(self, img, img_metas, **kwargs): - pass - - @abstractmethod - def aug_test(self, imgs, img_metas, **kwargs): - """Test function with test time augmentation.""" - pass - - async def aforward_test(self, *, img, img_metas, **kwargs): - for var, name in [(img, 'img'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(img) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(img)}) ' - f'!= num of image metas ({len(img_metas)})') - # TODO: remove the restriction of samples_per_gpu == 1 when prepared - samples_per_gpu = img[0].size(0) - assert samples_per_gpu == 1 - - if num_augs == 1: - return await self.async_simple_test(img[0], img_metas[0], **kwargs) - else: - raise NotImplementedError - - def forward_test(self, imgs, img_metas, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) ' - f'!= num of image meta ({len(img_metas)})') - - # NOTE the batched image size information may be useful, e.g. - # in DETR, this is needed for the construction of masks, which is - # then used for the transformer_head. - for img, img_meta in zip(imgs, img_metas): - batch_size = len(img_meta) - for img_id in range(batch_size): - img_meta[img_id]['batch_input_shape'] = tuple(img.size()[-2:]) - - if num_augs == 1: - # proposals (List[List[Tensor]]): the outer list indicates - # test-time augs (multiscale, flip, etc.) and the inner list - # indicates images in a batch. - # The Tensor should have a shape Px4, where P is the number of - # proposals. - if 'proposals' in kwargs: - kwargs['proposals'] = kwargs['proposals'][0] - return self.simple_test(imgs[0], img_metas[0], **kwargs) - else: - assert imgs[0].size(0) == 1, 'aug test does not support ' \ - 'inference with batch size ' \ - f'{imgs[0].size(0)}' - # TODO: support test augmentation for predefined proposals - assert 'proposals' not in kwargs - return self.aug_test(imgs, img_metas, **kwargs) - - @auto_fp16(apply_to=('img', )) - def forward(self, img, img_metas, return_loss=True, **kwargs): - """Calls either :func:`forward_train` or :func:`forward_test` depending - on whether ``return_loss`` is ``True``. - - Note this setting will change the expected inputs. When - ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor - and List[dict]), and when ``resturn_loss=False``, img and img_meta - should be double nested (i.e. List[Tensor], List[List[dict]]), with - the outer list indicating test time augmentations. 
- """ - if torch.onnx.is_in_onnx_export(): - assert len(img_metas) == 1 - return self.onnx_export(img[0], img_metas[0]) - - if return_loss: - return self.forward_train(img, img_metas, **kwargs) - else: - return self.forward_test(img, img_metas, **kwargs) - - def _parse_losses(self, losses): - """Parse the raw outputs (losses) of the network. - - Args: - losses (dict): Raw output of the network, which usually contain - losses and other necessary information. - - Returns: - tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor \ - which may be a weighted sum of all losses, log_vars contains \ - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - # If the loss_vars has different length, GPUs will wait infinitely - if dist.is_available() and dist.is_initialized(): - log_var_length = torch.tensor(len(log_vars), device=loss.device) - dist.all_reduce(log_var_length) - message = (f'rank {dist.get_rank()}' + - f' len(log_vars): {len(log_vars)}' + ' keys: ' + - ','.join(log_vars.keys())) - assert log_var_length == len(log_vars) * dist.get_world_size(), \ - 'loss log variables are different across GPUs!\n' + message - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def train_step(self, data, optimizer): - """The iteration step during training. - - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating is also defined in - this method, such as GAN. - - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. - - Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, \ - ``num_samples``. - - - ``loss`` is a tensor for back propagation, which can be a - weighted sum of multiple losses. - - ``log_vars`` contains all the variables to be sent to the - logger. - - ``num_samples`` indicates the batch size (when the model is - DDP, it means the batch size on each GPU), which is used for - averaging the logs. - """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img_metas'])) - - return outputs - - def val_step(self, data, optimizer=None): - """The iteration step during validation. - - This method shares the same signature as :func:`train_step`, but used - during val epochs. Note that the evaluation after training epochs is - not implemented with this method, but an evaluation hook. 
- """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img_metas'])) - - return outputs - - def show_result(self, - img, - result, - score_thr=0.3, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241), - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=False, - wait_time=0, - out_file=None): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - score_thr (float, optional): Minimum score of bboxes to be shown. - Default: 0.3. - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green' - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green' - mask_color (None or str or tuple(int) or :obj:`Color`): - Color of masks. The tuple of color should be in BGR order. - Default: None - thickness (int): Thickness of lines. Default: 2 - font_size (int): Font size of texts. Default: 13 - win_name (str): The window name. Default: '' - wait_time (float): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - - Returns: - img (Tensor): Only if not `show` or `out_file` - """ - img = mmcv.imread(img) - img = img.copy() - if isinstance(result, tuple): - bbox_result, segm_result = result - if isinstance(segm_result, tuple): - segm_result = segm_result[0] # ms rcnn - else: - bbox_result, segm_result = result, None - bboxes = np.vstack(bbox_result) - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - # draw segmentation masks - segms = None - if segm_result is not None and len(labels) > 0: # non empty - segms = mmcv.concat_list(segm_result) - if isinstance(segms[0], torch.Tensor): - segms = torch.stack(segms, dim=0).detach().cpu().numpy() - else: - segms = np.stack(segms, axis=0) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw bounding boxes - img = imshow_det_bboxes( - img, - bboxes, - labels, - segms, - class_names=self.CLASSES, - score_thr=score_thr, - bbox_color=bbox_color, - text_color=text_color, - mask_color=mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - - if not (show or out_file): - return img - - def onnx_export(self, img, img_metas): - raise NotImplementedError(f'{self.__class__.__name__} does ' - f'not support ONNX EXPORT') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/cascade_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/cascade_rcnn.py deleted file mode 100644 index d8c73827..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/cascade_rcnn.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class CascadeRCNN(TwoStageDetector): - r"""Implementation of `Cascade R-CNN: Delving into High Quality Object - Detection `_""" - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(CascadeRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def show_result(self, data, result, **kwargs): - """Show prediction results of the detector. - - Args: - data (str or np.ndarray): Image filename or loaded image. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - - Returns: - np.ndarray: The image with bboxes drawn on it. - """ - if self.with_mask: - ms_bbox_result, ms_segm_result = result - if isinstance(ms_bbox_result, dict): - result = (ms_bbox_result['ensemble'], - ms_segm_result['ensemble']) - else: - if isinstance(result, dict): - result = result['ensemble'] - return super(CascadeRCNN, self).show_result(data, result, **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/centernet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/centernet.py deleted file mode 100644 index e1e3fd3c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/centernet.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2result -from mmdet.models.builder import DETECTORS -from ...core.utils import flip_tensor -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class CenterNet(SingleStageDetector): - """Implementation of CenterNet(Objects as Points) - - . - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(CenterNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - def merge_aug_results(self, aug_results, with_nms): - """Merge augmented detection bboxes and score. - - Args: - aug_results (list[list[Tensor]]): Det_bboxes and det_labels of each - image. - with_nms (bool): If True, do nms before return boxes. - - Returns: - tuple: (out_bboxes, out_labels) - """ - recovered_bboxes, aug_labels = [], [] - for single_result in aug_results: - recovered_bboxes.append(single_result[0][0]) - aug_labels.append(single_result[0][1]) - - bboxes = torch.cat(recovered_bboxes, dim=0).contiguous() - labels = torch.cat(aug_labels).contiguous() - if with_nms: - out_bboxes, out_labels = self.bbox_head._bboxes_nms( - bboxes, labels, self.bbox_head.test_cfg) - else: - out_bboxes, out_labels = bboxes, labels - - return out_bboxes, out_labels - - def aug_test(self, imgs, img_metas, rescale=True): - """Augment testing of CenterNet. Aug test must have flipped image pair, - and unlike CornerNet, it will perform an averaging operation on the - feature map instead of detecting bbox. - - Args: - imgs (list[Tensor]): Augmented images. - img_metas (list[list[dict]]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: True. - - Note: - ``imgs`` must including flipped image pairs. 
- - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - img_inds = list(range(len(imgs))) - assert img_metas[0][0]['flip'] + img_metas[1][0]['flip'], ( - 'aug test must have flipped image pair') - aug_results = [] - for ind, flip_ind in zip(img_inds[0::2], img_inds[1::2]): - flip_direction = img_metas[flip_ind][0]['flip_direction'] - img_pair = torch.cat([imgs[ind], imgs[flip_ind]]) - x = self.extract_feat(img_pair) - center_heatmap_preds, wh_preds, offset_preds = self.bbox_head(x) - assert len(center_heatmap_preds) == len(wh_preds) == len( - offset_preds) == 1 - - # Feature map averaging - center_heatmap_preds[0] = ( - center_heatmap_preds[0][0:1] + - flip_tensor(center_heatmap_preds[0][1:2], flip_direction)) / 2 - wh_preds[0] = (wh_preds[0][0:1] + - flip_tensor(wh_preds[0][1:2], flip_direction)) / 2 - - bbox_list = self.bbox_head.get_bboxes( - center_heatmap_preds, - wh_preds, [offset_preds[0][0:1]], - img_metas[ind], - rescale=rescale, - with_nms=False) - aug_results.append(bbox_list) - - nms_cfg = self.bbox_head.test_cfg.get('nms_cfg', None) - if nms_cfg is None: - with_nms = False - else: - with_nms = True - bbox_list = [self.merge_aug_results(aug_results, with_nms)] - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in bbox_list - ] - return bbox_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/cornernet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/cornernet.py deleted file mode 100644 index ce921cc3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/cornernet.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2result, bbox_mapping_back -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class CornerNet(SingleStageDetector): - """CornerNet. - - This detector is the implementation of the paper `CornerNet: Detecting - Objects as Paired Keypoints `_ . - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(CornerNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - def merge_aug_results(self, aug_results, img_metas): - """Merge augmented detection bboxes and score. - - Args: - aug_results (list[list[Tensor]]): Det_bboxes and det_labels of each - image. - img_metas (list[list[dict]]): Meta information of each image, e.g., - image size, scaling factor, etc. 
- - Returns: - tuple: (bboxes, labels) - """ - recovered_bboxes, aug_labels = [], [] - for bboxes_labels, img_info in zip(aug_results, img_metas): - img_shape = img_info[0]['img_shape'] # using shape before padding - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - bboxes, labels = bboxes_labels - bboxes, scores = bboxes[:, :4], bboxes[:, -1:] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip) - recovered_bboxes.append(torch.cat([bboxes, scores], dim=-1)) - aug_labels.append(labels) - - bboxes = torch.cat(recovered_bboxes, dim=0) - labels = torch.cat(aug_labels) - - if bboxes.shape[0] > 0: - out_bboxes, out_labels = self.bbox_head._bboxes_nms( - bboxes, labels, self.bbox_head.test_cfg) - else: - out_bboxes, out_labels = bboxes, labels - - return out_bboxes, out_labels - - def aug_test(self, imgs, img_metas, rescale=False): - """Augment testing of CornerNet. - - Args: - imgs (list[Tensor]): Augmented images. - img_metas (list[list[dict]]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Note: - ``imgs`` must including flipped image pairs. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - img_inds = list(range(len(imgs))) - - assert img_metas[0][0]['flip'] + img_metas[1][0]['flip'], ( - 'aug test must have flipped image pair') - aug_results = [] - for ind, flip_ind in zip(img_inds[0::2], img_inds[1::2]): - img_pair = torch.cat([imgs[ind], imgs[flip_ind]]) - x = self.extract_feat(img_pair) - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, [img_metas[ind], img_metas[flip_ind]], False, False) - aug_results.append(bbox_list[0]) - aug_results.append(bbox_list[1]) - - bboxes, labels = self.merge_aug_results(aug_results, img_metas) - bbox_results = bbox2result(bboxes, labels, self.bbox_head.num_classes) - - return [bbox_results] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/deformable_detr.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/deformable_detr.py deleted file mode 100644 index b1f16422..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/deformable_detr.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .detr import DETR - - -@DETECTORS.register_module() -class DeformableDETR(DETR): - - def __init__(self, *args, **kwargs): - super(DETR, self).__init__(*args, **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/detr.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/detr.py deleted file mode 100644 index 06d76913..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/detr.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch - -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class DETR(SingleStageDetector): - r"""Implementation of `DETR: End-to-End Object Detection with - Transformers `_""" - - def __init__(self, - backbone, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(DETR, self).__init__(backbone, None, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - # over-write `forward_dummy` because: - # the forward of bbox_head requires img_metas - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - warnings.warn('Warning! MultiheadAttention in DETR does not ' - 'support flops computation! Do not use the ' - 'results in your papers!') - - batch_size, _, height, width = img.shape - dummy_img_metas = [ - dict( - batch_input_shape=(height, width), - img_shape=(height, width, 3)) for _ in range(batch_size) - ] - x = self.extract_feat(img) - outs = self.bbox_head(x, dummy_img_metas) - return outs - - # over-write `onnx_export` because: - # (1) the forward of bbox_head requires img_metas - # (2) the different behavior (e.g. construction of `masks`) between - # torch and ONNX model, during the forward of bbox_head - def onnx_export(self, img, img_metas): - """Test function for exporting to ONNX, without test time augmentation. - - Args: - img (torch.Tensor): input images. - img_metas (list[dict]): List of image information. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - x = self.extract_feat(img) - # forward of this head requires img_metas - outs = self.bbox_head.forward_onnx(x, img_metas) - # get shape as tensor - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - - det_bboxes, det_labels = self.bbox_head.onnx_export(*outs, img_metas) - - return det_bboxes, det_labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fast_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fast_rcnn.py deleted file mode 100644 index 7aebe151..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fast_rcnn.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class FastRCNN(TwoStageDetector): - """Implementation of `Fast R-CNN `_""" - - def __init__(self, - backbone, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(FastRCNN, self).__init__( - backbone=backbone, - neck=neck, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def forward_test(self, imgs, img_metas, proposals, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - proposals (List[List[Tensor]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. The Tensor should have a shape Px4, where - P is the number of proposals. 
- """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) ' - f'!= num of image meta ({len(img_metas)})') - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], proposals[0], - **kwargs) - else: - # TODO: support test-time augmentation - assert NotImplementedError diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/faster_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/faster_rcnn.py deleted file mode 100644 index 70fb662f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/faster_rcnn.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class FasterRCNN(TwoStageDetector): - """Implementation of `Faster R-CNN `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(FasterRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fcos.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fcos.py deleted file mode 100644 index d985bd02..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fcos.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FCOS(SingleStageDetector): - """Implementation of `FCOS `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(FCOS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fovea.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fovea.py deleted file mode 100644 index 6fd908c7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fovea.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FOVEA(SingleStageDetector): - """Implementation of `FoveaBox `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(FOVEA, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fsaf.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fsaf.py deleted file mode 100644 index 81ed1bde..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/fsaf.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FSAF(SingleStageDetector): - """Implementation of `FSAF `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(FSAF, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/gfl.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/gfl.py deleted file mode 100644 index 4628e2e7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/gfl.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class GFL(SingleStageDetector): - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(GFL, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/grid_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/grid_rcnn.py deleted file mode 100644 index bba7873b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/grid_rcnn.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class GridRCNN(TwoStageDetector): - """Grid R-CNN. - - This detector is the implementation of: - - Grid R-CNN (https://arxiv.org/abs/1811.12030) - - Grid R-CNN Plus: Faster and Better (https://arxiv.org/abs/1906.05688) - """ - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(GridRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/htc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/htc.py deleted file mode 100644 index f7c95338..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/htc.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .cascade_rcnn import CascadeRCNN - - -@DETECTORS.register_module() -class HybridTaskCascade(CascadeRCNN): - """Implementation of `HTC `_""" - - def __init__(self, **kwargs): - super(HybridTaskCascade, self).__init__(**kwargs) - - @property - def with_semantic(self): - """bool: whether the detector has a semantic head""" - return self.roi_head.with_semantic diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/kd_one_stage.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/kd_one_stage.py deleted file mode 100644 index fb66b515..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/kd_one_stage.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from pathlib import Path - -import mmcv -import torch -from mmcv.runner import load_checkpoint - -from .. 
import build_detector -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class KnowledgeDistillationSingleStageDetector(SingleStageDetector): - r"""Implementation of `Distilling the Knowledge in a Neural Network. - `_. - - Args: - teacher_config (str | dict): Config file path - or the config object of teacher model. - teacher_ckpt (str, optional): Checkpoint path of teacher model. - If left as None, the model will not load any weights. - """ - - def __init__(self, - backbone, - neck, - bbox_head, - teacher_config, - teacher_ckpt=None, - eval_teacher=True, - train_cfg=None, - test_cfg=None, - pretrained=None): - super().__init__(backbone, neck, bbox_head, train_cfg, test_cfg, - pretrained) - self.eval_teacher = eval_teacher - # Build teacher model - if isinstance(teacher_config, (str, Path)): - teacher_config = mmcv.Config.fromfile(teacher_config) - self.teacher_model = build_detector(teacher_config['model']) - if teacher_ckpt is not None: - load_checkpoint( - self.teacher_model, teacher_ckpt, map_location='cpu') - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - x = self.extract_feat(img) - with torch.no_grad(): - teacher_x = self.teacher_model.extract_feat(img) - out_teacher = self.teacher_model.bbox_head(teacher_x) - losses = self.bbox_head.forward_train(x, out_teacher, img_metas, - gt_bboxes, gt_labels, - gt_bboxes_ignore) - return losses - - def cuda(self, device=None): - """Since teacher_model is registered as a plain object, it is necessary - to put the teacher model to cuda when calling cuda function.""" - self.teacher_model.cuda(device=device) - return super().cuda(device=device) - - def train(self, mode=True): - """Set the same train mode for teacher and student model.""" - if self.eval_teacher: - self.teacher_model.train(False) - else: - self.teacher_model.train(mode) - super().train(mode) - - def __setattr__(self, name, value): - """Set attribute, i.e. self.name = value - - This reloading prevent the teacher model from being registered as a - nn.Module. The teacher module is registered as a plain object, so that - the teacher parameters will not show up when calling - ``self.parameters``, ``self.modules``, ``self.children`` methods. - """ - if name == 'teacher_model': - object.__setattr__(self, name, value) - else: - super().__setattr__(name, value) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/lad.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/lad.py deleted file mode 100644 index c6cc1e0b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/lad.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import torch -import torch.nn as nn -from mmcv.runner import load_checkpoint - -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .kd_one_stage import KnowledgeDistillationSingleStageDetector - - -@DETECTORS.register_module() -class LAD(KnowledgeDistillationSingleStageDetector): - """Implementation of `LAD `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - teacher_backbone, - teacher_neck, - teacher_bbox_head, - teacher_ckpt, - eval_teacher=True, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(KnowledgeDistillationSingleStageDetector, - self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg, - pretrained) - self.eval_teacher = eval_teacher - self.teacher_model = nn.Module() - self.teacher_model.backbone = build_backbone(teacher_backbone) - if teacher_neck is not None: - self.teacher_model.neck = build_neck(teacher_neck) - teacher_bbox_head.update(train_cfg=train_cfg) - teacher_bbox_head.update(test_cfg=test_cfg) - self.teacher_model.bbox_head = build_head(teacher_bbox_head) - if teacher_ckpt is not None: - load_checkpoint( - self.teacher_model, teacher_ckpt, map_location='cpu') - - @property - def with_teacher_neck(self): - """bool: whether the detector has a teacher_neck""" - return hasattr(self.teacher_model, 'neck') and \ - self.teacher_model.neck is not None - - def extract_teacher_feat(self, img): - """Directly extract teacher features from the backbone+neck.""" - x = self.teacher_model.backbone(img) - if self.with_teacher_neck: - x = self.teacher_model.neck(x) - return x - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # get label assignment from the teacher - with torch.no_grad(): - x_teacher = self.extract_teacher_feat(img) - outs_teacher = self.teacher_model.bbox_head(x_teacher) - label_assignment_results = \ - self.teacher_model.bbox_head.get_label_assignment( - *outs_teacher, gt_bboxes, gt_labels, img_metas, - gt_bboxes_ignore) - - # the student use the label assignment from the teacher to learn - x = self.extract_feat(img) - losses = self.bbox_head.forward_train(x, label_assignment_results, - img_metas, gt_bboxes, gt_labels, - gt_bboxes_ignore) - return losses diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/mask2former.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/mask2former.py deleted file mode 100644 index b9ad2ed2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/mask2former.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .maskformer import MaskFormer - - -@DETECTORS.register_module() -class Mask2Former(MaskFormer): - r"""Implementation of `Masked-attention Mask - Transformer for Universal Image Segmentation - `_.""" - - def __init__(self, - backbone, - neck=None, - panoptic_head=None, - panoptic_fusion_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None): - super().__init__( - backbone, - neck=neck, - panoptic_head=panoptic_head, - panoptic_fusion_head=panoptic_fusion_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/mask_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/mask_rcnn.py deleted file mode 100644 index c68489f9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/mask_rcnn.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class MaskRCNN(TwoStageDetector): - """Implementation of `Mask R-CNN `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(MaskRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/mask_scoring_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/mask_scoring_rcnn.py deleted file mode 100644 index 5f55656f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/mask_scoring_rcnn.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class MaskScoringRCNN(TwoStageDetector): - """Mask Scoring RCNN. - - https://arxiv.org/abs/1903.00241 - """ - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(MaskScoringRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/maskformer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/maskformer.py deleted file mode 100644 index b626e070..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/maskformer.py +++ /dev/null @@ -1,233 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import mmcv -import numpy as np - -from mmdet.core import INSTANCE_OFFSET, bbox2result -from mmdet.core.visualization import imshow_det_bboxes -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class MaskFormer(SingleStageDetector): - r"""Implementation of `Per-Pixel Classification is - NOT All You Need for Semantic Segmentation - `_.""" - - def __init__(self, - backbone, - neck=None, - panoptic_head=None, - panoptic_fusion_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None): - super(SingleStageDetector, self).__init__(init_cfg=init_cfg) - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - - panoptic_head_ = panoptic_head.deepcopy() - panoptic_head_.update(train_cfg=train_cfg) - panoptic_head_.update(test_cfg=test_cfg) - self.panoptic_head = build_head(panoptic_head_) - - panoptic_fusion_head_ = panoptic_fusion_head.deepcopy() - panoptic_fusion_head_.update(test_cfg=test_cfg) - self.panoptic_fusion_head = build_head(panoptic_fusion_head_) - - self.num_things_classes = self.panoptic_head.num_things_classes - self.num_stuff_classes = self.panoptic_head.num_stuff_classes - self.num_classes = self.panoptic_head.num_classes - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def forward_dummy(self, img, img_metas): - """Used for computing network flops. See - `mmdetection/tools/analysis_tools/get_flops.py` - - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[Dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - """ - super(SingleStageDetector, self).forward_train(img, img_metas) - x = self.extract_feat(img) - outs = self.panoptic_head(x, img_metas) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_masks, - gt_semantic_seg, - gt_bboxes_ignore=None, - **kargs): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[Dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box. - gt_masks (list[BitmapMasks]): true segmentation masks for each box - used if the architecture supports a segmentation task. - gt_semantic_seg (list[tensor]): semantic segmentation mask for - images. - gt_bboxes_ignore (list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - Defaults to None. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # add batch_input_shape in img_metas - super(SingleStageDetector, self).forward_train(img, img_metas) - x = self.extract_feat(img) - losses = self.panoptic_head.forward_train(x, img_metas, gt_bboxes, - gt_labels, gt_masks, - gt_semantic_seg, - gt_bboxes_ignore) - - return losses - - def simple_test(self, imgs, img_metas, **kwargs): - """Test without augmentation. - - Args: - imgs (Tensor): A batch of images. - img_metas (list[dict]): List of image information. - - Returns: - list[dict[str, np.array | tuple]]: Semantic segmentation \ - results and panoptic segmentation results for each \ - image. - - .. code-block:: none - - [ - { - 'pan_results': np.array, # shape = [h, w] - 'ins_results': tuple[list], - # semantic segmentation results are not supported yet - 'sem_results': np.array - }, - ... - ] - """ - feats = self.extract_feat(imgs) - mask_cls_results, mask_pred_results = self.panoptic_head.simple_test( - feats, img_metas, **kwargs) - results = self.panoptic_fusion_head.simple_test( - mask_cls_results, mask_pred_results, img_metas, **kwargs) - for i in range(len(results)): - if 'pan_results' in results[i]: - results[i]['pan_results'] = results[i]['pan_results'].detach( - ).cpu().numpy() - - if 'ins_results' in results[i]: - labels_per_image, bboxes, mask_pred_binary = results[i][ - 'ins_results'] - bbox_results = bbox2result(bboxes, labels_per_image, - self.num_things_classes) - mask_results = [[] for _ in range(self.num_things_classes)] - for j, label in enumerate(labels_per_image): - mask = mask_pred_binary[j].detach().cpu().numpy() - mask_results[label].append(mask) - results[i]['ins_results'] = bbox_results, mask_results - - assert 'sem_results' not in results[i], 'segmantic segmentation '\ - 'results are not supported yet.' - - return results - - def aug_test(self, imgs, img_metas, **kwargs): - raise NotImplementedError - - def onnx_export(self, img, img_metas): - raise NotImplementedError - - def show_result(self, - img, - result, - score_thr=0.3, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241), - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=False, - wait_time=0, - out_file=None): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (dict): The results. - - score_thr (float, optional): Minimum score of bboxes to be shown. - Default: 0.3. - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green'. - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green'. - mask_color (None or str or tuple(int) or :obj:`Color`): - Color of masks. The tuple of color should be in BGR order. - Default: None. - thickness (int): Thickness of lines. Default: 2. - font_size (int): Font size of texts. Default: 13. - win_name (str): The window name. Default: ''. - wait_time (float): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - - Returns: - img (Tensor): Only if not `show` or `out_file`. 
- """ - img = mmcv.imread(img) - img = img.copy() - pan_results = result['pan_results'] - # keep objects ahead - ids = np.unique(pan_results)[::-1] - legal_indices = ids != self.num_classes # for VOID label - ids = ids[legal_indices] - labels = np.array([id % INSTANCE_OFFSET for id in ids], dtype=np.int64) - segms = (pan_results[None] == ids[:, None, None]) - - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw bounding boxes - img = imshow_det_bboxes( - img, - segms=segms, - labels=labels, - class_names=self.CLASSES, - bbox_color=bbox_color, - text_color=text_color, - mask_color=mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - - if not (show or out_file): - return img diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/nasfcos.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/nasfcos.py deleted file mode 100644 index a34c2280..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/nasfcos.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class NASFCOS(SingleStageDetector): - """NAS-FCOS: Fast Neural Architecture Search for Object Detection. - - https://arxiv.org/abs/1906.0442 - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(NASFCOS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/paa.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/paa.py deleted file mode 100644 index f5cb8372..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/paa.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class PAA(SingleStageDetector): - """Implementation of `PAA `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(PAA, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/panoptic_fpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/panoptic_fpn.py deleted file mode 100644 index f8ac751f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/panoptic_fpn.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .panoptic_two_stage_segmentor import TwoStagePanopticSegmentor - - -@DETECTORS.register_module() -class PanopticFPN(TwoStagePanopticSegmentor): - r"""Implementation of `Panoptic feature pyramid - networks `_""" - - def __init__( - self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None, - # for panoptic segmentation - semantic_head=None, - panoptic_fusion_head=None): - super(PanopticFPN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg, - semantic_head=semantic_head, - panoptic_fusion_head=panoptic_fusion_head) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/panoptic_two_stage_segmentor.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/panoptic_two_stage_segmentor.py deleted file mode 100644 index 5ad49bac..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/panoptic_two_stage_segmentor.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch - -from mmdet.core import INSTANCE_OFFSET, bbox2roi, multiclass_nms -from mmdet.core.visualization import imshow_det_bboxes -from ..builder import DETECTORS, build_head -from ..roi_heads.mask_heads.fcn_mask_head import _do_paste_mask -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class TwoStagePanopticSegmentor(TwoStageDetector): - """Base class of Two-stage Panoptic Segmentor. - - As well as the components in TwoStageDetector, Panoptic Segmentor has extra - semantic_head and panoptic_fusion_head. - """ - - def __init__( - self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None, - # for panoptic segmentation - semantic_head=None, - panoptic_fusion_head=None): - super(TwoStagePanopticSegmentor, - self).__init__(backbone, neck, rpn_head, roi_head, train_cfg, - test_cfg, pretrained, init_cfg) - if semantic_head is not None: - self.semantic_head = build_head(semantic_head) - if panoptic_fusion_head is not None: - panoptic_cfg = test_cfg.panoptic if test_cfg is not None else None - panoptic_fusion_head_ = panoptic_fusion_head.deepcopy() - panoptic_fusion_head_.update(test_cfg=panoptic_cfg) - self.panoptic_fusion_head = build_head(panoptic_fusion_head_) - - self.num_things_classes = self.panoptic_fusion_head.\ - num_things_classes - self.num_stuff_classes = self.panoptic_fusion_head.\ - num_stuff_classes - self.num_classes = self.panoptic_fusion_head.num_classes - - @property - def with_semantic_head(self): - return hasattr(self, - 'semantic_head') and self.semantic_head is not None - - @property - def with_panoptic_fusion_head(self): - return hasattr(self, 'panoptic_fusion_heads') and \ - self.panoptic_fusion_head is not None - - def forward_dummy(self, img): - """Used for computing network flops. 
- - See `mmdetection/tools/get_flops.py` - """ - raise NotImplementedError( - f'`forward_dummy` is not implemented in {self.__class__.__name__}') - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - gt_semantic_seg=None, - proposals=None, - **kwargs): - x = self.extract_feat(img) - losses = dict() - - # RPN forward and loss - if self.with_rpn: - proposal_cfg = self.train_cfg.get('rpn_proposal', - self.test_cfg.rpn) - rpn_losses, proposal_list = self.rpn_head.forward_train( - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=gt_bboxes_ignore, - proposal_cfg=proposal_cfg) - losses.update(rpn_losses) - else: - proposal_list = proposals - - roi_losses = self.roi_head.forward_train(x, img_metas, proposal_list, - gt_bboxes, gt_labels, - gt_bboxes_ignore, gt_masks, - **kwargs) - losses.update(roi_losses) - - semantic_loss = self.semantic_head.forward_train(x, gt_semantic_seg) - losses.update(semantic_loss) - - return losses - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Simple test for mask head without augmentation.""" - img_shapes = tuple(meta['ori_shape'] - for meta in img_metas) if rescale else tuple( - meta['pad_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - masks = [] - for img_shape in img_shapes: - out_shape = (0, self.roi_head.bbox_head.num_classes) \ - + img_shape[:2] - masks.append(det_bboxes[0].new_zeros(out_shape)) - mask_pred = det_bboxes[0].new_zeros((0, 80, 28, 28)) - mask_results = dict( - masks=masks, mask_pred=mask_pred, mask_feats=None) - return mask_results - - _bboxes = [det_bboxes[i][:, :4] for i in range(len(det_bboxes))] - if rescale: - if not isinstance(scale_factors[0], float): - scale_factors = [ - det_bboxes[0].new_tensor(scale_factor) - for scale_factor in scale_factors - ] - _bboxes = [ - _bboxes[i] * scale_factors[i] for i in range(len(_bboxes)) - ] - - mask_rois = bbox2roi(_bboxes) - mask_results = self.roi_head._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - # split batch mask prediction back to each image - num_mask_roi_per_img = [len(det_bbox) for det_bbox in det_bboxes] - mask_preds = mask_pred.split(num_mask_roi_per_img, 0) - - # resize the mask_preds to (K, H, W) - masks = [] - for i in range(len(_bboxes)): - det_bbox = det_bboxes[i][:, :4] - det_label = det_labels[i] - - mask_pred = mask_preds[i].sigmoid() - - box_inds = torch.arange(mask_pred.shape[0]) - mask_pred = mask_pred[box_inds, det_label][:, None] - - img_h, img_w, _ = img_shapes[i] - mask_pred, _ = _do_paste_mask( - mask_pred, det_bbox, img_h, img_w, skip_empty=False) - masks.append(mask_pred) - - mask_results['masks'] = masks - - return mask_results - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without Augmentation.""" - x = self.extract_feat(img) - - if proposals is None: - proposal_list = self.rpn_head.simple_test_rpn(x, img_metas) - else: - proposal_list = proposals - - bboxes, scores = self.roi_head.simple_test_bboxes( - x, img_metas, proposal_list, None, rescale=rescale) - - pan_cfg = self.test_cfg.panoptic - # class-wise predictions - det_bboxes = [] - det_labels = [] - for bboxe, score in zip(bboxes, scores): - det_bbox, det_label = multiclass_nms(bboxe, score, - pan_cfg.score_thr, - pan_cfg.nms, - pan_cfg.max_per_img) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - - 
mask_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - masks = mask_results['masks'] - - seg_preds = self.semantic_head.simple_test(x, img_metas, rescale) - - results = [] - for i in range(len(det_bboxes)): - pan_results = self.panoptic_fusion_head.simple_test( - det_bboxes[i], det_labels[i], masks[i], seg_preds[i]) - pan_results = pan_results.int().detach().cpu().numpy() - result = dict(pan_results=pan_results) - results.append(result) - return results - - def show_result(self, - img, - result, - score_thr=0.3, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241), - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=False, - wait_time=0, - out_file=None): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (dict): The results. - - score_thr (float, optional): Minimum score of bboxes to be shown. - Default: 0.3. - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green'. - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green'. - mask_color (None or str or tuple(int) or :obj:`Color`): - Color of masks. The tuple of color should be in BGR order. - Default: None. - thickness (int): Thickness of lines. Default: 2. - font_size (int): Font size of texts. Default: 13. - win_name (str): The window name. Default: ''. - wait_time (float): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - - Returns: - img (Tensor): Only if not `show` or `out_file`. - """ - img = mmcv.imread(img) - img = img.copy() - pan_results = result['pan_results'] - # keep objects ahead - ids = np.unique(pan_results)[::-1] - legal_indices = ids != self.num_classes # for VOID label - ids = ids[legal_indices] - labels = np.array([id % INSTANCE_OFFSET for id in ids], dtype=np.int64) - segms = (pan_results[None] == ids[:, None, None]) - - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw bounding boxes - img = imshow_det_bboxes( - img, - segms=segms, - labels=labels, - class_names=self.CLASSES, - bbox_color=bbox_color, - text_color=text_color, - mask_color=mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - - if not (show or out_file): - return img diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/point_rend.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/point_rend.py deleted file mode 100644 index 90eb4d40..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/point_rend.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class PointRend(TwoStageDetector): - """PointRend: Image Segmentation as Rendering - - This detector is the implementation of - `PointRend `_. 
- - """ - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(PointRend, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/queryinst.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/queryinst.py deleted file mode 100644 index 5fc216c4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/queryinst.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .sparse_rcnn import SparseRCNN - - -@DETECTORS.register_module() -class QueryInst(SparseRCNN): - r"""Implementation of - `Instances as Queries `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(QueryInst, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/reppoints_detector.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/reppoints_detector.py deleted file mode 100644 index f1986cdc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/reppoints_detector.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class RepPointsDetector(SingleStageDetector): - """RepPoints: Point Set Representation for Object Detection. - - This detector is the implementation of: - - RepPoints detector (https://arxiv.org/pdf/1904.11490) - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(RepPointsDetector, - self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg, - pretrained, init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/retinanet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/retinanet.py deleted file mode 100644 index c28545ab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/retinanet.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class RetinaNet(SingleStageDetector): - """Implementation of `RetinaNet `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(RetinaNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/rpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/rpn.py deleted file mode 100644 index 6ec326b7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/rpn.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import mmcv -import torch -from mmcv.image import tensor2imgs - -from mmdet.core import bbox_mapping -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class RPN(BaseDetector): - """Implementation of Region Proposal Network.""" - - def __init__(self, - backbone, - neck, - rpn_head, - train_cfg, - test_cfg, - pretrained=None, - init_cfg=None): - super(RPN, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - self.neck = build_neck(neck) if neck is not None else None - rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None - rpn_head.update(train_cfg=rpn_train_cfg) - rpn_head.update(test_cfg=test_cfg.rpn) - self.rpn_head = build_head(rpn_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img): - """Extract features. - - Args: - img (torch.Tensor): Image tensor with shape (n, c, h ,w). - - Returns: - list[torch.Tensor]: Multi-level features that may have - different resolutions. - """ - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Dummy forward function.""" - x = self.extract_feat(img) - rpn_outs = self.rpn_head(x) - return rpn_outs - - def forward_train(self, - img, - img_metas, - gt_bboxes=None, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - if (isinstance(self.train_cfg.rpn, dict) - and self.train_cfg.rpn.get('debug', False)): - self.rpn_head.debug_imgs = tensor2imgs(img) - - x = self.extract_feat(img) - losses = self.rpn_head.forward_train(x, img_metas, gt_bboxes, None, - gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[np.ndarray]: proposals - """ - x = self.extract_feat(img) - # get origin input shape to onnx dynamic input shape - if torch.onnx.is_in_onnx_export(): - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - proposal_list = self.rpn_head.simple_test_rpn(x, img_metas) - if rescale: - for proposals, meta in zip(proposal_list, img_metas): - proposals[:, :4] /= proposals.new_tensor(meta['scale_factor']) - if torch.onnx.is_in_onnx_export(): - return proposal_list - - return [proposal.cpu().numpy() for proposal in proposal_list] - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation. 
- - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[np.ndarray]: proposals - """ - proposal_list = self.rpn_head.aug_test_rpn( - self.extract_feats(imgs), img_metas) - if not rescale: - for proposals, img_meta in zip(proposal_list, img_metas[0]): - img_shape = img_meta['img_shape'] - scale_factor = img_meta['scale_factor'] - flip = img_meta['flip'] - flip_direction = img_meta['flip_direction'] - proposals[:, :4] = bbox_mapping(proposals[:, :4], img_shape, - scale_factor, flip, - flip_direction) - return [proposal.cpu().numpy() for proposal in proposal_list] - - def show_result(self, data, result, top_k=20, **kwargs): - """Show RPN proposals on the image. - - Args: - data (str or np.ndarray): Image filename or loaded image. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - top_k (int): Plot the first k bboxes only - if set positive. Default: 20 - - Returns: - np.ndarray: The image with bboxes drawn on it. - """ - if kwargs is not None: - kwargs.pop('score_thr', None) - kwargs.pop('text_color', None) - kwargs['colors'] = kwargs.pop('bbox_color', 'green') - mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/scnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/scnet.py deleted file mode 100644 index a361d81c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/scnet.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .cascade_rcnn import CascadeRCNN - - -@DETECTORS.register_module() -class SCNet(CascadeRCNN): - """Implementation of `SCNet `_""" - - def __init__(self, **kwargs): - super(SCNet, self).__init__(**kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/single_stage.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/single_stage.py deleted file mode 100644 index c375c72d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/single_stage.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch - -from mmdet.core import bbox2result -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class SingleStageDetector(BaseDetector): - """Base class for single-stage detectors. - - Single-stage detectors directly and densely predict bounding boxes on the - output features of the backbone+neck. 
- """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(SingleStageDetector, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img): - """Directly extract features from the backbone+neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - super(SingleStageDetector, self).forward_train(img, img_metas) - x = self.extract_feat(img) - losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes, - gt_labels, gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test-time augmentation. - - Args: - img (torch.Tensor): Images with shape (N, C, H, W). - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - feat = self.extract_feat(img) - results_list = self.bbox_head.simple_test( - feat, img_metas, rescale=rescale) - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in results_list - ] - return bbox_results - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - imgs (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. 
- """ - assert hasattr(self.bbox_head, 'aug_test'), \ - f'{self.bbox_head.__class__.__name__}' \ - ' does not support test-time augmentation' - - feats = self.extract_feats(imgs) - results_list = self.bbox_head.aug_test( - feats, img_metas, rescale=rescale) - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in results_list - ] - return bbox_results - - def onnx_export(self, img, img_metas, with_nms=True): - """Test function without test time augmentation. - - Args: - img (torch.Tensor): input images. - img_metas (list[dict]): List of image information. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - # get origin input shape to support onnx dynamic shape - - # get shape as tensor - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - # get pad input shape to support onnx dynamic shape for exporting - # `CornerNet` and `CentripetalNet`, which 'pad_shape' is used - # for inference - img_metas[0]['pad_shape_for_onnx'] = img_shape - - if len(outs) == 2: - # add dummy score_factor - outs = (*outs, None) - # TODO Can we change to `get_bboxes` when `onnx_export` fail - det_bboxes, det_labels = self.bbox_head.onnx_export( - *outs, img_metas, with_nms=with_nms) - - return det_bboxes, det_labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/single_stage_instance_seg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/single_stage_instance_seg.py deleted file mode 100644 index 239b6699..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/single_stage_instance_seg.py +++ /dev/null @@ -1,363 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import mmcv -import numpy as np -import torch - -from mmdet.core.visualization.image import imshow_det_bboxes -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - -INF = 1e8 - - -@DETECTORS.register_module() -class SingleStageInstanceSegmentor(BaseDetector): - """Base class for single-stage instance segmentors.""" - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - mask_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - super(SingleStageInstanceSegmentor, self).__init__(init_cfg=init_cfg) - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - else: - self.neck = None - if bbox_head is not None: - bbox_head.update(train_cfg=copy.deepcopy(train_cfg)) - bbox_head.update(test_cfg=copy.deepcopy(test_cfg)) - self.bbox_head = build_head(bbox_head) - else: - self.bbox_head = None - - assert mask_head, f'`mask_head` must ' \ - f'be implemented in {self.__class__.__name__}' - mask_head.update(train_cfg=copy.deepcopy(train_cfg)) - mask_head.update(test_cfg=copy.deepcopy(test_cfg)) - self.mask_head = build_head(mask_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img): - """Directly extract features from the backbone and neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. 
- - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - raise NotImplementedError( - f'`forward_dummy` is not implemented in {self.__class__.__name__}') - - def forward_train(self, - img, - img_metas, - gt_masks, - gt_labels, - gt_bboxes=None, - gt_bboxes_ignore=None, - **kwargs): - """ - Args: - img (Tensor): Input images of shape (B, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_masks (list[:obj:`BitmapMasks`] | None) : The segmentation - masks for each box. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes (list[Tensor]): Each item is the truth boxes - of each image in [tl_x, tl_y, br_x, br_y] format. - Default: None. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - gt_masks = [ - gt_mask.to_tensor(dtype=torch.bool, device=img.device) - for gt_mask in gt_masks - ] - x = self.extract_feat(img) - losses = dict() - - # CondInst and YOLACT have bbox_head - if self.bbox_head: - # bbox_head_preds is a tuple - bbox_head_preds = self.bbox_head(x) - # positive_infos is a list of obj:`InstanceData` - # It contains the information about the positive samples - # CondInst, YOLACT - det_losses, positive_infos = self.bbox_head.loss( - *bbox_head_preds, - gt_bboxes=gt_bboxes, - gt_labels=gt_labels, - gt_masks=gt_masks, - img_metas=img_metas, - gt_bboxes_ignore=gt_bboxes_ignore, - **kwargs) - losses.update(det_losses) - else: - positive_infos = None - - mask_loss = self.mask_head.forward_train( - x, - gt_labels, - gt_masks, - img_metas, - positive_infos=positive_infos, - gt_bboxes=gt_bboxes, - gt_bboxes_ignore=gt_bboxes_ignore, - **kwargs) - # avoid loss override - assert not set(mask_loss.keys()) & set(losses.keys()) - - losses.update(mask_loss) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test-time augmentation. - - Args: - img (torch.Tensor): Images with shape (B, C, H, W). - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list(tuple): Formatted bbox and mask results of multiple \ - images. The outer list corresponds to each image. \ - Each tuple contains two type of results of single image: - - - bbox_results (list[np.ndarray]): BBox results of - single image. The list corresponds to each class. - each ndarray has a shape (N, 5), N is the number of - bboxes with this category, and last dimension - 5 arrange as (x1, y1, x2, y2, scores). - - mask_results (list[np.ndarray]): Mask results of - single image. The list corresponds to each class. - each ndarray has a shape (N, img_h, img_w), N - is the number of masks with this category. 
- """ - feat = self.extract_feat(img) - if self.bbox_head: - outs = self.bbox_head(feat) - # results_list is list[obj:`InstanceData`] - results_list = self.bbox_head.get_results( - *outs, img_metas=img_metas, cfg=self.test_cfg, rescale=rescale) - else: - results_list = None - - results_list = self.mask_head.simple_test( - feat, img_metas, rescale=rescale, instances_list=results_list) - - format_results_list = [] - for results in results_list: - format_results_list.append(self.format_results(results)) - - return format_results_list - - def format_results(self, results): - """Format the model predictions according to the interface with - dataset. - - Args: - results (:obj:`InstanceData`): Processed - results of single images. Usually contains - following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,) - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - - Returns: - tuple: Formatted bbox and mask results.. It contains two items: - - - bbox_results (list[np.ndarray]): BBox results of - single image. The list corresponds to each class. - each ndarray has a shape (N, 5), N is the number of - bboxes with this category, and last dimension - 5 arrange as (x1, y1, x2, y2, scores). - - mask_results (list[np.ndarray]): Mask results of - single image. The list corresponds to each class. - each ndarray has shape (N, img_h, img_w), N - is the number of masks with this category. - """ - data_keys = results.keys() - assert 'scores' in data_keys - assert 'labels' in data_keys - - assert 'masks' in data_keys, \ - 'results should contain ' \ - 'masks when format the results ' - mask_results = [[] for _ in range(self.mask_head.num_classes)] - - num_masks = len(results) - - if num_masks == 0: - bbox_results = [ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.mask_head.num_classes) - ] - return bbox_results, mask_results - - labels = results.labels.detach().cpu().numpy() - - if 'bboxes' not in results: - # create dummy bbox results to store the scores - results.bboxes = results.scores.new_zeros(len(results), 4) - - det_bboxes = torch.cat([results.bboxes, results.scores[:, None]], - dim=-1) - det_bboxes = det_bboxes.detach().cpu().numpy() - bbox_results = [ - det_bboxes[labels == i, :] - for i in range(self.mask_head.num_classes) - ] - - masks = results.masks.detach().cpu().numpy() - - for idx in range(num_masks): - mask = masks[idx] - mask_results[labels[idx]].append(mask) - - return bbox_results, mask_results - - def aug_test(self, imgs, img_metas, rescale=False): - raise NotImplementedError - - def show_result(self, - img, - result, - score_thr=0.3, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241), - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=False, - wait_time=0, - out_file=None): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (tuple): Format bbox and mask results. - It contains two items: - - - bbox_results (list[np.ndarray]): BBox results of - single image. The list corresponds to each class. - each ndarray has a shape (N, 5), N is the number of - bboxes with this category, and last dimension - 5 arrange as (x1, y1, x2, y2, scores). - - mask_results (list[np.ndarray]): Mask results of - single image. The list corresponds to each class. - each ndarray has shape (N, img_h, img_w), N - is the number of masks with this category. - - score_thr (float, optional): Minimum score of bboxes to be shown. 
- Default: 0.3. - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green' - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green' - mask_color (None or str or tuple(int) or :obj:`Color`): - Color of masks. The tuple of color should be in BGR order. - Default: None - thickness (int): Thickness of lines. Default: 2 - font_size (int): Font size of texts. Default: 13 - win_name (str): The window name. Default: '' - wait_time (float): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - - Returns: - img (Tensor): Only if not `show` or `out_file` - """ - - assert isinstance(result, tuple) - bbox_result, mask_result = result - bboxes = np.vstack(bbox_result) - img = mmcv.imread(img) - img = img.copy() - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - if len(labels) == 0: - bboxes = np.zeros([0, 5]) - masks = np.zeros([0, 0, 0]) - # draw segmentation masks - else: - masks = mmcv.concat_list(mask_result) - - if isinstance(masks[0], torch.Tensor): - masks = torch.stack(masks, dim=0).detach().cpu().numpy() - else: - masks = np.stack(masks, axis=0) - # dummy bboxes - if bboxes[:, :4].sum() == 0: - num_masks = len(bboxes) - x_any = masks.any(axis=1) - y_any = masks.any(axis=2) - for idx in range(num_masks): - x = np.where(x_any[idx, :])[0] - y = np.where(y_any[idx, :])[0] - if len(x) > 0 and len(y) > 0: - bboxes[idx, :4] = np.array( - [x[0], y[0], x[-1] + 1, y[-1] + 1], - dtype=np.float32) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw bounding boxes - img = imshow_det_bboxes( - img, - bboxes, - labels, - masks, - class_names=self.CLASSES, - score_thr=score_thr, - bbox_color=bbox_color, - text_color=text_color, - mask_color=mask_color, - thickness=thickness, - font_size=font_size, - win_name=win_name, - show=show, - wait_time=wait_time, - out_file=out_file) - - if not (show or out_file): - return img diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/solo.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/solo.py deleted file mode 100644 index df6f6de0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/solo.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .single_stage_instance_seg import SingleStageInstanceSegmentor - - -@DETECTORS.register_module() -class SOLO(SingleStageInstanceSegmentor): - """`SOLO: Segmenting Objects by Locations - `_ - - """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - mask_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super().__init__( - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - mask_head=mask_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg, - pretrained=pretrained) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/sparse_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/sparse_rcnn.py deleted file mode 100644 index e90c2a5a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/sparse_rcnn.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class SparseRCNN(TwoStageDetector): - r"""Implementation of `Sparse R-CNN: End-to-End Object Detection with - Learnable Proposals `_""" - - def __init__(self, *args, **kwargs): - super(SparseRCNN, self).__init__(*args, **kwargs) - assert self.with_rpn, 'Sparse R-CNN and QueryInst ' \ - 'do not support external proposals' - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - proposals=None, - **kwargs): - """Forward function of SparseR-CNN and QueryInst in train stage. - - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (List[Tensor], optional) : Segmentation masks for - each box. This is required to train QueryInst. - proposals (List[Tensor], optional): override rpn proposals with - custom proposals. Use when `with_rpn` is False. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - - assert proposals is None, 'Sparse R-CNN and QueryInst ' \ - 'do not support external proposals' - - x = self.extract_feat(img) - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.forward_train(x, img_metas) - roi_losses = self.roi_head.forward_train( - x, - proposal_boxes, - proposal_features, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=gt_bboxes_ignore, - gt_masks=gt_masks, - imgs_whwh=imgs_whwh) - return roi_losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. 
The inner list - corresponds to each class. - """ - x = self.extract_feat(img) - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.simple_test_rpn(x, img_metas) - results = self.roi_head.simple_test( - x, - proposal_boxes, - proposal_features, - img_metas, - imgs_whwh=imgs_whwh, - rescale=rescale) - return results - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - # backbone - x = self.extract_feat(img) - # rpn - num_imgs = len(img) - dummy_img_metas = [ - dict(img_shape=(800, 1333, 3)) for _ in range(num_imgs) - ] - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.simple_test_rpn(x, dummy_img_metas) - # roi_head - roi_outs = self.roi_head.forward_dummy(x, proposal_boxes, - proposal_features, - dummy_img_metas) - return roi_outs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/tood.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/tood.py deleted file mode 100644 index 7dd18c3c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/tood.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class TOOD(SingleStageDetector): - r"""Implementation of `TOOD: Task-aligned One-stage Object Detection. - `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(TOOD, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - def set_epoch(self, epoch): - self.bbox_head.epoch = epoch diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/trident_faster_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/trident_faster_rcnn.py deleted file mode 100644 index fb26168c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/trident_faster_rcnn.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .faster_rcnn import FasterRCNN - - -@DETECTORS.register_module() -class TridentFasterRCNN(FasterRCNN): - """Implementation of `TridentNet `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - - super(TridentFasterRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - assert self.backbone.num_branch == self.roi_head.num_branch - assert self.backbone.test_branch_idx == self.roi_head.test_branch_idx - self.num_branch = self.backbone.num_branch - self.test_branch_idx = self.backbone.test_branch_idx - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' 
- x = self.extract_feat(img) - if proposals is None: - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = img_metas * num_branch - proposal_list = self.rpn_head.simple_test_rpn(x, trident_img_metas) - else: - proposal_list = proposals - # TODO: Fix trident_img_metas undefined errors - # when proposals is specified - return self.roi_head.simple_test( - x, proposal_list, trident_img_metas, rescale=rescale) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - x = self.extract_feats(imgs) - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = [img_metas * num_branch for img_metas in img_metas] - proposal_list = self.rpn_head.aug_test_rpn(x, trident_img_metas) - return self.roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) - - def forward_train(self, img, img_metas, gt_bboxes, gt_labels, **kwargs): - """make copies of img and gts to fit multi-branch.""" - trident_gt_bboxes = tuple(gt_bboxes * self.num_branch) - trident_gt_labels = tuple(gt_labels * self.num_branch) - trident_img_metas = tuple(img_metas * self.num_branch) - - return super(TridentFasterRCNN, - self).forward_train(img, trident_img_metas, - trident_gt_bboxes, trident_gt_labels) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/two_stage.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/two_stage.py deleted file mode 100644 index 870e2b84..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/two_stage.py +++ /dev/null @@ -1,211 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch - -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class TwoStageDetector(BaseDetector): - """Base class for two-stage detectors. - - Two-stage detectors typically consisting of a region proposal network and a - task-specific regression head. 
- """ - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(TwoStageDetector, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - - if neck is not None: - self.neck = build_neck(neck) - - if rpn_head is not None: - rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None - rpn_head_ = rpn_head.copy() - rpn_head_.update(train_cfg=rpn_train_cfg, test_cfg=test_cfg.rpn) - self.rpn_head = build_head(rpn_head_) - - if roi_head is not None: - # update train and test cfg here for now - # TODO: refactor assigner & sampler - rcnn_train_cfg = train_cfg.rcnn if train_cfg is not None else None - roi_head.update(train_cfg=rcnn_train_cfg) - roi_head.update(test_cfg=test_cfg.rcnn) - roi_head.pretrained = pretrained - self.roi_head = build_head(roi_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - @property - def with_rpn(self): - """bool: whether the detector has RPN""" - return hasattr(self, 'rpn_head') and self.rpn_head is not None - - @property - def with_roi_head(self): - """bool: whether the detector has a RoI head""" - return hasattr(self, 'roi_head') and self.roi_head is not None - - def extract_feat(self, img): - """Directly extract features from the backbone+neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - outs = () - # backbone - x = self.extract_feat(img) - # rpn - if self.with_rpn: - rpn_outs = self.rpn_head(x) - outs = outs + (rpn_outs, ) - proposals = torch.randn(1000, 4).to(img.device) - # roi_head - roi_outs = self.roi_head.forward_dummy(x, proposals) - outs = outs + (roi_outs, ) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - proposals=None, - **kwargs): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - - gt_labels (list[Tensor]): class indices corresponding to each box - - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - proposals : override rpn proposals with custom proposals. Use when - `with_rpn` is False. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - x = self.extract_feat(img) - - losses = dict() - - # RPN forward and loss - if self.with_rpn: - proposal_cfg = self.train_cfg.get('rpn_proposal', - self.test_cfg.rpn) - rpn_losses, proposal_list = self.rpn_head.forward_train( - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=gt_bboxes_ignore, - proposal_cfg=proposal_cfg, - **kwargs) - losses.update(rpn_losses) - else: - proposal_list = proposals - - roi_losses = self.roi_head.forward_train(x, img_metas, proposal_list, - gt_bboxes, gt_labels, - gt_bboxes_ignore, gt_masks, - **kwargs) - losses.update(roi_losses) - - return losses - - async def async_simple_test(self, - img, - img_meta, - proposals=None, - rescale=False): - """Async test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - x = self.extract_feat(img) - - if proposals is None: - proposal_list = await self.rpn_head.async_simple_test_rpn( - x, img_meta) - else: - proposal_list = proposals - - return await self.roi_head.async_simple_test( - x, proposal_list, img_meta, rescale=rescale) - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - - assert self.with_bbox, 'Bbox head must be implemented.' - x = self.extract_feat(img) - if proposals is None: - proposal_list = self.rpn_head.simple_test_rpn(x, img_metas) - else: - proposal_list = proposals - - return self.roi_head.simple_test( - x, proposal_list, img_metas, rescale=rescale) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - x = self.extract_feats(imgs) - proposal_list = self.rpn_head.aug_test_rpn(x, img_metas) - return self.roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) - - def onnx_export(self, img, img_metas): - - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - x = self.extract_feat(img) - proposals = self.rpn_head.onnx_export(x, img_metas) - if hasattr(self.roi_head, 'onnx_export'): - return self.roi_head.onnx_export(x, proposals, img_metas) - else: - raise NotImplementedError( - f'{self.__class__.__name__} can not ' - f'be exported to ONNX. Please refer to the ' - f'list of supported models,' - f'https://mmdetection.readthedocs.io/en/latest/tutorials/pytorch2onnx.html#list-of-supported-models-exportable-to-onnx' # noqa E501 - ) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/vfnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/vfnet.py deleted file mode 100644 index 38ddcdab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/vfnet.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class VFNet(SingleStageDetector): - """Implementation of `VarifocalNet - (VFNet).`_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(VFNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolact.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolact.py deleted file mode 100644 index 4ddea0b2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolact.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2result -from ..builder import DETECTORS, build_head -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLACT(SingleStageDetector): - """Implementation of `YOLACT `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - segm_head, - mask_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(YOLACT, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - self.segm_head = build_head(segm_head) - self.mask_head = build_head(mask_head) - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - feat = self.extract_feat(img) - bbox_outs = self.bbox_head(feat) - prototypes = self.mask_head.forward_dummy(feat[0]) - return (bbox_outs, prototypes) - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # convert Bitmap mask or Polygon Mask to Tensor here - gt_masks = [ - gt_mask.to_tensor(dtype=torch.uint8, device=img.device) - for gt_mask in gt_masks - ] - - x = self.extract_feat(img) - - cls_score, bbox_pred, coeff_pred = self.bbox_head(x) - bbox_head_loss_inputs = (cls_score, bbox_pred) + (gt_bboxes, gt_labels, - img_metas) - losses, sampling_results = self.bbox_head.loss( - *bbox_head_loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - - segm_head_outs = self.segm_head(x[0]) - loss_segm = self.segm_head.loss(segm_head_outs, gt_masks, gt_labels) - losses.update(loss_segm) - - mask_pred = self.mask_head(x[0], coeff_pred, gt_bboxes, img_metas, - sampling_results) - loss_mask = self.mask_head.loss(mask_pred, gt_masks, gt_bboxes, - img_metas, sampling_results) - losses.update(loss_mask) - - # check NaN and Inf - for loss_name in losses.keys(): - assert torch.isfinite(torch.stack(losses[loss_name]))\ - .all().item(), '{} becomes infinite or NaN!'\ - .format(loss_name) - - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test-time augmentation.""" - feat = self.extract_feat(img) - det_bboxes, det_labels, det_coeffs = self.bbox_head.simple_test( - feat, img_metas, rescale=rescale) - bbox_results = [ - bbox2result(det_bbox, det_label, self.bbox_head.num_classes) - for det_bbox, det_label in zip(det_bboxes, det_labels) - ] - - segm_results = self.mask_head.simple_test( - feat, - det_bboxes, - det_labels, - det_coeffs, - img_metas, - rescale=rescale) - - return list(zip(bbox_results, segm_results)) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations.""" - raise NotImplementedError( - 'YOLACT does not support test-time augmentation') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolo.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolo.py deleted file mode 100644 index 0ccd4177..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolo.py +++ /dev/null @@ -1,42 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. -import torch - -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLOV3(SingleStageDetector): - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(YOLOV3, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - - def onnx_export(self, img, img_metas): - """Test function for exporting to ONNX, without test time augmentation. - - Args: - img (torch.Tensor): input images. - img_metas (list[dict]): List of image information. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. 
- """ - x = self.extract_feat(img) - outs = self.bbox_head.forward(x) - # get shape as tensor - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - - det_bboxes, det_labels = self.bbox_head.onnx_export(*outs, img_metas) - - return det_bboxes, det_labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolof.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolof.py deleted file mode 100644 index 6d08d16d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolof.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLOF(SingleStageDetector): - r"""Implementation of `You Only Look One-level Feature - `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(YOLOF, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolox.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolox.py deleted file mode 100644 index d26dc734..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/detectors/yolox.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import random - -import torch -import torch.distributed as dist -import torch.nn.functional as F -from mmcv.runner import get_dist_info - -from ...utils import log_img_scale -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLOX(SingleStageDetector): - r"""Implementation of `YOLOX: Exceeding YOLO Series in 2021 - `_ - - Note: Considering the trade-off between training speed and accuracy, - multi-scale training is temporarily kept. More elegant implementation - will be adopted in the future. - - Args: - backbone (nn.Module): The backbone module. - neck (nn.Module): The neck module. - bbox_head (nn.Module): The bbox head module. - train_cfg (obj:`ConfigDict`, optional): The training config - of YOLOX. Default: None. - test_cfg (obj:`ConfigDict`, optional): The testing config - of YOLOX. Default: None. - pretrained (str, optional): model pretrained path. - Default: None. - input_size (tuple): The model default input image size. The shape - order should be (height, width). Default: (640, 640). - size_multiplier (int): Image size multiplication factor. - Default: 32. - random_size_range (tuple): The multi-scale random range during - multi-scale training. The real training image size will - be multiplied by size_multiplier. Default: (15, 25). - random_size_interval (int): The iter interval of change - image size. Default: 10. - init_cfg (dict, optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - input_size=(640, 640), - size_multiplier=32, - random_size_range=(15, 25), - random_size_interval=10, - init_cfg=None): - super(YOLOX, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) - log_img_scale(input_size, skip_square=True) - self.rank, self.world_size = get_dist_info() - self._default_input_size = input_size - self._input_size = input_size - self._random_size_range = random_size_range - self._random_size_interval = random_size_interval - self._size_multiplier = size_multiplier - self._progress_in_iter = 0 - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # Multi-scale training - img, gt_bboxes = self._preprocess(img, gt_bboxes) - - losses = super(YOLOX, self).forward_train(img, img_metas, gt_bboxes, - gt_labels, gt_bboxes_ignore) - - # random resizing - if (self._progress_in_iter + 1) % self._random_size_interval == 0: - self._input_size = self._random_resize() - self._progress_in_iter += 1 - - return losses - - def _preprocess(self, img, gt_bboxes): - scale_y = self._input_size[0] / self._default_input_size[0] - scale_x = self._input_size[1] / self._default_input_size[1] - if scale_x != 1 or scale_y != 1: - img = F.interpolate( - img, - size=self._input_size, - mode='bilinear', - align_corners=False) - for gt_bbox in gt_bboxes: - gt_bbox[..., 0::2] = gt_bbox[..., 0::2] * scale_x - gt_bbox[..., 1::2] = gt_bbox[..., 1::2] * scale_y - return img, gt_bboxes - - def _random_resize(self): - tensor = torch.LongTensor(2).cuda() - - if self.rank == 0: - size = random.randint(*self._random_size_range) - aspect_ratio = float( - self._default_input_size[1]) / self._default_input_size[0] - size = (self._size_multiplier * size, - self._size_multiplier * int(aspect_ratio * size)) - tensor[0] = size[0] - tensor[1] = size[1] - - if self.world_size > 1: - dist.barrier() - dist.broadcast(tensor, 0) - - input_size = (tensor[0].item(), tensor[1].item()) - return input_size diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/__init__.py deleted file mode 100644 index 068a54d6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .accuracy import Accuracy, accuracy -from .ae_loss import AssociativeEmbeddingLoss -from .balanced_l1_loss import BalancedL1Loss, balanced_l1_loss -from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy, - cross_entropy, mask_cross_entropy) -from .dice_loss import DiceLoss -from .focal_loss import FocalLoss, sigmoid_focal_loss -from .gaussian_focal_loss import GaussianFocalLoss -from .gfocal_loss import DistributionFocalLoss, QualityFocalLoss -from .ghm_loss import GHMC, GHMR -from .iou_loss import (BoundedIoULoss, CIoULoss, DIoULoss, GIoULoss, IoULoss, - bounded_iou_loss, iou_loss) -from .kd_loss import KnowledgeDistillationKLDivLoss -from .mse_loss import MSELoss, mse_loss -from .pisa_loss import carl_loss, isr_p -from .seesaw_loss import SeesawLoss -from .smooth_l1_loss import L1Loss, SmoothL1Loss, l1_loss, smooth_l1_loss -from .utils import reduce_loss, weight_reduce_loss, weighted_loss -from .varifocal_loss import VarifocalLoss - -__all__ = [ - 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy', - 'mask_cross_entropy', 'CrossEntropyLoss', 'sigmoid_focal_loss', - 'FocalLoss', 'smooth_l1_loss', 'SmoothL1Loss', 'balanced_l1_loss', - 'BalancedL1Loss', 'mse_loss', 'MSELoss', 'iou_loss', 'bounded_iou_loss', - 'IoULoss', 'BoundedIoULoss', 'GIoULoss', 'DIoULoss', 'CIoULoss', 'GHMC', - 'GHMR', 'reduce_loss', 'weight_reduce_loss', 'weighted_loss', 'L1Loss', - 'l1_loss', 'isr_p', 'carl_loss', 'AssociativeEmbeddingLoss', - 'GaussianFocalLoss', 'QualityFocalLoss', 'DistributionFocalLoss', - 'VarifocalLoss', 'KnowledgeDistillationKLDivLoss', 'SeesawLoss', 'DiceLoss' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/accuracy.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/accuracy.py deleted file mode 100644 index fe765a39..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/accuracy.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn - - -@mmcv.jit(coderize=True) -def accuracy(pred, target, topk=1, thresh=None): - """Calculate accuracy according to the prediction and target. - - Args: - pred (torch.Tensor): The model prediction, shape (N, num_class) - target (torch.Tensor): The target of each prediction, shape (N, ) - topk (int | tuple[int], optional): If the predictions in ``topk`` - matches the target, the predictions will be regarded as - correct ones. Defaults to 1. - thresh (float, optional): If not None, predictions with scores under - this threshold are considered incorrect. Default to None. - - Returns: - float | tuple[float]: If the input ``topk`` is a single integer, - the function will return a single float as accuracy. If - ``topk`` is a tuple containing multiple integers, the - function will return a tuple containing accuracies of - each ``topk`` number. - """ - assert isinstance(topk, (int, tuple)) - if isinstance(topk, int): - topk = (topk, ) - return_single = True - else: - return_single = False - - maxk = max(topk) - if pred.size(0) == 0: - accu = [pred.new_tensor(0.) 
for i in range(len(topk))] - return accu[0] if return_single else accu - assert pred.ndim == 2 and target.ndim == 1 - assert pred.size(0) == target.size(0) - assert maxk <= pred.size(1), \ - f'maxk {maxk} exceeds pred dimension {pred.size(1)}' - pred_value, pred_label = pred.topk(maxk, dim=1) - pred_label = pred_label.t() # transpose to shape (maxk, N) - correct = pred_label.eq(target.view(1, -1).expand_as(pred_label)) - if thresh is not None: - # Only prediction values larger than thresh are counted as correct - correct = correct & (pred_value > thresh).t() - res = [] - for k in topk: - correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True) - res.append(correct_k.mul_(100.0 / pred.size(0))) - return res[0] if return_single else res - - -class Accuracy(nn.Module): - - def __init__(self, topk=(1, ), thresh=None): - """Module to calculate the accuracy. - - Args: - topk (tuple, optional): The criterion used to calculate the - accuracy. Defaults to (1,). - thresh (float, optional): If not None, predictions with scores - under this threshold are considered incorrect. Default to None. - """ - super().__init__() - self.topk = topk - self.thresh = thresh - - def forward(self, pred, target): - """Forward function to calculate accuracy. - - Args: - pred (torch.Tensor): Prediction of models. - target (torch.Tensor): Target for each prediction. - - Returns: - tuple[float]: The accuracies under different topk criterions. - """ - return accuracy(pred, target, self.topk, self.thresh) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/ae_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/ae_loss.py deleted file mode 100644 index 5c6da22a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/ae_loss.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES - - -@mmcv.jit(derivate=True, coderize=True) -def ae_loss_per_image(tl_preds, br_preds, match): - """Associative Embedding Loss in one image. - - Associative Embedding Loss including two parts: pull loss and push loss. - Pull loss makes embedding vectors from same object closer to each other. - Push loss distinguish embedding vector from different objects, and makes - the gap between them is large enough. - - During computing, usually there are 3 cases: - - no object in image: both pull loss and push loss will be 0. - - one object in image: push loss will be 0 and pull loss is computed - by the two corner of the only object. - - more than one objects in image: pull loss is computed by corner pairs - from each object, push loss is computed by each object with all - other objects. We use confusion matrix with 0 in diagonal to - compute the push loss. - - Args: - tl_preds (tensor): Embedding feature map of left-top corner. - br_preds (tensor): Embedding feature map of bottim-right corner. - match (list): Downsampled coordinates pair of each ground truth box. - """ - - tl_list, br_list, me_list = [], [], [] - if len(match) == 0: # no object in image - pull_loss = tl_preds.sum() * 0. - push_loss = tl_preds.sum() * 0. 
- else: - for m in match: - [tl_y, tl_x], [br_y, br_x] = m - tl_e = tl_preds[:, tl_y, tl_x].view(-1, 1) - br_e = br_preds[:, br_y, br_x].view(-1, 1) - tl_list.append(tl_e) - br_list.append(br_e) - me_list.append((tl_e + br_e) / 2.0) - - tl_list = torch.cat(tl_list) - br_list = torch.cat(br_list) - me_list = torch.cat(me_list) - - assert tl_list.size() == br_list.size() - - # N is object number in image, M is dimension of embedding vector - N, M = tl_list.size() - - pull_loss = (tl_list - me_list).pow(2) + (br_list - me_list).pow(2) - pull_loss = pull_loss.sum() / N - - margin = 1 # exp setting of CornerNet, details in section 3.3 of paper - - # confusion matrix of push loss - conf_mat = me_list.expand((N, N, M)).permute(1, 0, 2) - me_list - conf_weight = 1 - torch.eye(N).type_as(me_list) - conf_mat = conf_weight * (margin - conf_mat.sum(-1).abs()) - - if N > 1: # more than one object in current image - push_loss = F.relu(conf_mat).sum() / (N * (N - 1)) - else: - push_loss = tl_preds.sum() * 0. - - return pull_loss, push_loss - - -@LOSSES.register_module() -class AssociativeEmbeddingLoss(nn.Module): - """Associative Embedding Loss. - - More details can be found in - `Associative Embedding `_ and - `CornerNet `_ . - Code is modified from `kp_utils.py `_ # noqa: E501 - - Args: - pull_weight (float): Loss weight for corners from same object. - push_weight (float): Loss weight for corners from different object. - """ - - def __init__(self, pull_weight=0.25, push_weight=0.25): - super(AssociativeEmbeddingLoss, self).__init__() - self.pull_weight = pull_weight - self.push_weight = push_weight - - def forward(self, pred, target, match): - """Forward function.""" - batch = pred.size(0) - pull_all, push_all = 0.0, 0.0 - for i in range(batch): - pull, push = ae_loss_per_image(pred[i], target[i], match[i]) - - pull_all += self.pull_weight * pull - push_all += self.push_weight * push - - return pull_all, push_all diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/balanced_l1_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/balanced_l1_loss.py deleted file mode 100644 index 8500345f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/balanced_l1_loss.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def balanced_l1_loss(pred, - target, - beta=1.0, - alpha=0.5, - gamma=1.5, - reduction='mean'): - """Calculate balanced L1 loss. - - Please see the `Libra R-CNN `_ - - Args: - pred (torch.Tensor): The prediction with shape (N, 4). - target (torch.Tensor): The learning target of the prediction with - shape (N, 4). - beta (float): The loss is a piecewise function of prediction and target - and ``beta`` serves as a threshold for the difference between the - prediction and target. Defaults to 1.0. - alpha (float): The denominator ``alpha`` in the balanced L1 loss. - Defaults to 0.5. - gamma (float): The ``gamma`` in the balanced L1 loss. - Defaults to 1.5. - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". 
- - Returns: - torch.Tensor: The calculated loss - """ - assert beta > 0 - if target.numel() == 0: - return pred.sum() * 0 - - assert pred.size() == target.size() - - diff = torch.abs(pred - target) - b = np.e**(gamma / alpha) - 1 - loss = torch.where( - diff < beta, alpha / b * - (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff, - gamma * diff + gamma / b - alpha * beta) - - return loss - - -@LOSSES.register_module() -class BalancedL1Loss(nn.Module): - """Balanced L1 Loss. - - arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019) - - Args: - alpha (float): The denominator ``alpha`` in the balanced L1 loss. - Defaults to 0.5. - gamma (float): The ``gamma`` in the balanced L1 loss. Defaults to 1.5. - beta (float, optional): The loss is a piecewise function of prediction - and target. ``beta`` serves as a threshold for the difference - between the prediction and target. Defaults to 1.0. - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of the loss. Defaults to 1.0 - """ - - def __init__(self, - alpha=0.5, - gamma=1.5, - beta=1.0, - reduction='mean', - loss_weight=1.0): - super(BalancedL1Loss, self).__init__() - self.alpha = alpha - self.gamma = gamma - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function of loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 4). - target (torch.Tensor): The learning target of the prediction with - shape (N, 4). - weight (torch.Tensor, optional): Sample-wise loss weight with - shape (N, ). - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * balanced_l1_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/cross_entropy_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/cross_entropy_loss.py deleted file mode 100644 index 41411fc5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/cross_entropy_loss.py +++ /dev/null @@ -1,301 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -def cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=-100, - avg_non_ignore=False): - """Calculate the CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - reduction (str, optional): The method used to reduce the loss. 
- avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (int | None): The label index to be ignored. - If None, it will be set to default value. Default: -100. - avg_non_ignore (bool): The flag decides to whether the loss is - only averaged over non-ignored targets. Default: False. - - Returns: - torch.Tensor: The calculated loss - """ - # The default value of ignore_index is the same as F.cross_entropy - ignore_index = -100 if ignore_index is None else ignore_index - # element-wise losses - loss = F.cross_entropy( - pred, - label, - weight=class_weight, - reduction='none', - ignore_index=ignore_index) - - # average loss over non-ignored elements - # pytorch's official cross_entropy average loss over non-ignored elements - # refer to https://github.com/pytorch/pytorch/blob/56b43f4fec1f76953f15a627694d4bba34588969/torch/nn/functional.py#L2660 # noqa - if (avg_factor is None) and avg_non_ignore and reduction == 'mean': - avg_factor = label.numel() - (label == ignore_index).sum().item() - - # apply weights and do the reduction - if weight is not None: - weight = weight.float() - loss = weight_reduce_loss( - loss, weight=weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def _expand_onehot_labels(labels, label_weights, label_channels, ignore_index): - """Expand onehot labels to match the size of prediction.""" - bin_labels = labels.new_full((labels.size(0), label_channels), 0) - valid_mask = (labels >= 0) & (labels != ignore_index) - inds = torch.nonzero( - valid_mask & (labels < label_channels), as_tuple=False) - - if inds.numel() > 0: - bin_labels[inds, labels[inds]] = 1 - - valid_mask = valid_mask.view(-1, 1).expand(labels.size(0), - label_channels).float() - if label_weights is None: - bin_label_weights = valid_mask - else: - bin_label_weights = label_weights.view(-1, 1).repeat(1, label_channels) - bin_label_weights *= valid_mask - - return bin_labels, bin_label_weights, valid_mask - - -def binary_cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=-100, - avg_non_ignore=False): - """Calculate the binary CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 1) or (N, ). - When the shape of pred is (N, 1), label will be expanded to - one-hot format, and when the shape of pred is (N, ), label - will not be expanded to one-hot format. - label (torch.Tensor): The learning label of the prediction, - with shape (N, ). - weight (torch.Tensor, optional): Sample-wise loss weight. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (int | None): The label index to be ignored. - If None, it will be set to default value. Default: -100. - avg_non_ignore (bool): The flag decides to whether the loss is - only averaged over non-ignored targets. Default: False. - - Returns: - torch.Tensor: The calculated loss. 
- """ - # The default value of ignore_index is the same as F.cross_entropy - ignore_index = -100 if ignore_index is None else ignore_index - - if pred.dim() != label.dim(): - label, weight, valid_mask = _expand_onehot_labels( - label, weight, pred.size(-1), ignore_index) - else: - # should mask out the ignored elements - valid_mask = ((label >= 0) & (label != ignore_index)).float() - if weight is not None: - # The inplace writing method will have a mismatched broadcast - # shape error if the weight and valid_mask dimensions - # are inconsistent such as (B,N,1) and (B,N,C). - weight = weight * valid_mask - else: - weight = valid_mask - - # average loss over non-ignored elements - if (avg_factor is None) and avg_non_ignore and reduction == 'mean': - avg_factor = valid_mask.sum().item() - - # weighted element-wise losses - weight = weight.float() - loss = F.binary_cross_entropy_with_logits( - pred, label.float(), pos_weight=class_weight, reduction='none') - # do the reduction for the weighted loss - loss = weight_reduce_loss( - loss, weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def mask_cross_entropy(pred, - target, - label, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=None, - **kwargs): - """Calculate the CrossEntropy loss for masks. - - Args: - pred (torch.Tensor): The prediction with shape (N, C, *), C is the - number of classes. The trailing * indicates arbitrary shape. - target (torch.Tensor): The learning label of the prediction. - label (torch.Tensor): ``label`` indicates the class label of the mask - corresponding object. This will be used to select the mask in the - of the class which the object belongs to when the mask prediction - if not class-agnostic. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (None): Placeholder, to be consistent with other loss. - Default: None. - - Returns: - torch.Tensor: The calculated loss - - Example: - >>> N, C = 3, 11 - >>> H, W = 2, 2 - >>> pred = torch.randn(N, C, H, W) * 1000 - >>> target = torch.rand(N, H, W) - >>> label = torch.randint(0, C, size=(N,)) - >>> reduction = 'mean' - >>> avg_factor = None - >>> class_weights = None - >>> loss = mask_cross_entropy(pred, target, label, reduction, - >>> avg_factor, class_weights) - >>> assert loss.shape == (1,) - """ - assert ignore_index is None, 'BCE loss does not support ignore_index' - # TODO: handle these two reserved arguments - assert reduction == 'mean' and avg_factor is None - num_rois = pred.size()[0] - inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device) - pred_slice = pred[inds, label].squeeze(1) - return F.binary_cross_entropy_with_logits( - pred_slice, target, weight=class_weight, reduction='mean')[None] - - -@LOSSES.register_module() -class CrossEntropyLoss(nn.Module): - - def __init__(self, - use_sigmoid=False, - use_mask=False, - reduction='mean', - class_weight=None, - ignore_index=None, - loss_weight=1.0, - avg_non_ignore=False): - """CrossEntropyLoss. - - Args: - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Defaults to False. - use_mask (bool, optional): Whether to use mask cross entropy loss. - Defaults to False. - reduction (str, optional): . Defaults to 'mean'. - Options are "none", "mean" and "sum". 
- class_weight (list[float], optional): Weight of each class. - Defaults to None. - ignore_index (int | None): The label index to be ignored. - Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. - avg_non_ignore (bool): The flag decides to whether the loss is - only averaged over non-ignored targets. Default: False. - """ - super(CrossEntropyLoss, self).__init__() - assert (use_sigmoid is False) or (use_mask is False) - self.use_sigmoid = use_sigmoid - self.use_mask = use_mask - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = class_weight - self.ignore_index = ignore_index - self.avg_non_ignore = avg_non_ignore - if ((ignore_index is not None) and not self.avg_non_ignore - and self.reduction == 'mean'): - warnings.warn( - 'Default ``avg_non_ignore`` is False, if you would like to ' - 'ignore the certain label and average loss over non-ignore ' - 'labels, which is the same with PyTorch official ' - 'cross_entropy, set ``avg_non_ignore=True``.') - - if self.use_sigmoid: - self.cls_criterion = binary_cross_entropy - elif self.use_mask: - self.cls_criterion = mask_cross_entropy - else: - self.cls_criterion = cross_entropy - - def extra_repr(self): - """Extra repr.""" - s = f'avg_non_ignore={self.avg_non_ignore}' - return s - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - ignore_index=None, - **kwargs): - """Forward function. - - Args: - cls_score (torch.Tensor): The prediction. - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The method used to reduce the - loss. Options are "none", "mean" and "sum". - ignore_index (int | None): The label index to be ignored. - If not None, it will override the default value. Default: None. - Returns: - torch.Tensor: The calculated loss. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if ignore_index is None: - ignore_index = self.ignore_index - - if self.class_weight is not None: - class_weight = cls_score.new_tensor( - self.class_weight, device=cls_score.device) - else: - class_weight = None - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - weight, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - ignore_index=ignore_index, - avg_non_ignore=self.avg_non_ignore, - **kwargs) - return loss_cls diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/dice_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/dice_loss.py deleted file mode 100644 index 585beeaf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/dice_loss.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -def dice_loss(pred, - target, - weight=None, - eps=1e-3, - reduction='mean', - naive_dice=False, - avg_factor=None): - """Calculate dice loss, there are two forms of dice loss is supported: - - - the one proposed in `V-Net: Fully Convolutional Neural - Networks for Volumetric Medical Image Segmentation - `_. 
- - the dice loss in which the power of the number in the - denominator is the first power instead of the second - power. - - Args: - pred (torch.Tensor): The prediction, has a shape (n, *) - target (torch.Tensor): The learning label of the prediction, - shape (n, *), same shape of pred. - weight (torch.Tensor, optional): The weight of loss for each - prediction, has a shape (n,). Defaults to None. - eps (float): Avoid dividing by zero. Default: 1e-3. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. - Options are "none", "mean" and "sum". - naive_dice (bool, optional): If false, use the dice - loss defined in the V-Net paper, otherwise, use the - naive dice loss in which the power of the number in the - denominator is the first power instead of the second - power.Defaults to False. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - - input = pred.flatten(1) - target = target.flatten(1).float() - - a = torch.sum(input * target, 1) - if naive_dice: - b = torch.sum(input, 1) - c = torch.sum(target, 1) - d = (2 * a + eps) / (b + c + eps) - else: - b = torch.sum(input * input, 1) + eps - c = torch.sum(target * target, 1) + eps - d = (2 * a) / (b + c) - - loss = 1 - d - if weight is not None: - assert weight.ndim == loss.ndim - assert len(weight) == len(pred) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class DiceLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - activate=True, - reduction='mean', - naive_dice=False, - loss_weight=1.0, - eps=1e-3): - """Compute dice loss. - - Args: - use_sigmoid (bool, optional): Whether to the prediction is - used for sigmoid or softmax. Defaults to True. - activate (bool): Whether to activate the predictions inside, - this will disable the inside sigmoid operation. - Defaults to True. - reduction (str, optional): The method used - to reduce the loss. Options are "none", - "mean" and "sum". Defaults to 'mean'. - naive_dice (bool, optional): If false, use the dice - loss defined in the V-Net paper, otherwise, use the - naive dice loss in which the power of the number in the - denominator is the first power instead of the second - power. Defaults to False. - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - eps (float): Avoid dividing by zero. Defaults to 1e-3. - """ - - super(DiceLoss, self).__init__() - self.use_sigmoid = use_sigmoid - self.reduction = reduction - self.naive_dice = naive_dice - self.loss_weight = loss_weight - self.eps = eps - self.activate = activate - - def forward(self, - pred, - target, - weight=None, - reduction_override=None, - avg_factor=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction, has a shape (n, *). - target (torch.Tensor): The label of the prediction, - shape (n, *), same shape of pred. - weight (torch.Tensor, optional): The weight of loss for each - prediction, has a shape (n,). Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". 
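As a quick reference outside the patch, here is a minimal sketch of the naive dice form described in the deleted `dice_loss` above. The helper name `dice_loss_naive` and the toy tensors are illustrative only.

```python
# Sketch: 1 - Dice coefficient (naive form), computed per sample.
import torch

def dice_loss_naive(pred, target, eps=1e-3):
    pred = pred.flatten(1)                 # (n, *) -> (n, K)
    target = target.flatten(1).float()
    inter = (pred * target).sum(1)
    dice = (2 * inter + eps) / (pred.sum(1) + target.sum(1) + eps)
    return 1 - dice                        # shape (n,), one loss per sample

pred = torch.rand(2, 1, 4, 4)              # probabilities after sigmoid
target = (torch.rand(2, 1, 4, 4) > 0.5).float()
print(dice_loss_naive(pred, target))
```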
- - Returns: - torch.Tensor: The calculated loss - """ - - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - - if self.activate: - if self.use_sigmoid: - pred = pred.sigmoid() - else: - raise NotImplementedError - - loss = self.loss_weight * dice_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - naive_dice=self.naive_dice, - avg_factor=avg_factor) - - return loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/focal_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/focal_loss.py deleted file mode 100644 index 6c20fddd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/focal_loss.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.ops import sigmoid_focal_loss as _sigmoid_focal_loss - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -# This method is only for debugging -def py_sigmoid_focal_loss(pred, - target, - weight=None, - gamma=2.0, - alpha=0.25, - reduction='mean', - avg_factor=None): - """PyTorch version of `Focal Loss `_. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the - number of classes - target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - pred_sigmoid = pred.sigmoid() - target = target.type_as(pred) - pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target) - focal_weight = (alpha * target + (1 - alpha) * - (1 - target)) * pt.pow(gamma) - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') * focal_weight - if weight is not None: - if weight.shape != loss.shape: - if weight.size(0) == loss.size(0): - # For most cases, weight is of shape (num_priors, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - else: - # Sometimes, weight per anchor per class is also needed. e.g. - # in FSAF. But it may be flattened of shape - # (num_priors x num_class, ), while loss is still of shape - # (num_priors, num_class). - assert weight.numel() == loss.numel() - weight = weight.view(loss.size(0), -1) - assert weight.ndim == loss.ndim - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -def py_focal_loss_with_prob(pred, - target, - weight=None, - gamma=2.0, - alpha=0.25, - reduction='mean', - avg_factor=None): - """PyTorch version of `Focal Loss `_. - Different from `py_sigmoid_focal_loss`, this function accepts probability - as input. - - Args: - pred (torch.Tensor): The prediction probability with shape (N, C), - C is the number of classes. - target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. 
- reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - num_classes = pred.size(1) - target = F.one_hot(target, num_classes=num_classes + 1) - target = target[:, :num_classes] - - target = target.type_as(pred) - pt = (1 - pred) * target + pred * (1 - target) - focal_weight = (alpha * target + (1 - alpha) * - (1 - target)) * pt.pow(gamma) - loss = F.binary_cross_entropy( - pred, target, reduction='none') * focal_weight - if weight is not None: - if weight.shape != loss.shape: - if weight.size(0) == loss.size(0): - # For most cases, weight is of shape (num_priors, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - else: - # Sometimes, weight per anchor per class is also needed. e.g. - # in FSAF. But it may be flattened of shape - # (num_priors x num_class, ), while loss is still of shape - # (num_priors, num_class). - assert weight.numel() == loss.numel() - weight = weight.view(loss.size(0), -1) - assert weight.ndim == loss.ndim - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -def sigmoid_focal_loss(pred, - target, - weight=None, - gamma=2.0, - alpha=0.25, - reduction='mean', - avg_factor=None): - r"""A warpper of cuda version `Focal Loss - `_. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - # Function.apply does not accept keyword arguments, so the decorator - # "weighted_loss" is not applicable - loss = _sigmoid_focal_loss(pred.contiguous(), target.contiguous(), gamma, - alpha, None, 'none') - if weight is not None: - if weight.shape != loss.shape: - if weight.size(0) == loss.size(0): - # For most cases, weight is of shape (num_priors, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - else: - # Sometimes, weight per anchor per class is also needed. e.g. - # in FSAF. But it may be flattened of shape - # (num_priors x num_class, ), while loss is still of shape - # (num_priors, num_class). - assert weight.numel() == loss.numel() - weight = weight.view(loss.size(0), -1) - assert weight.ndim == loss.ndim - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class FocalLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - reduction='mean', - loss_weight=1.0, - activated=False): - """`Focal Loss `_ - - Args: - use_sigmoid (bool, optional): Whether to the prediction is - used for sigmoid or softmax. Defaults to True. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". 
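For orientation (again, not part of the patch), a pure-PyTorch sketch of the sigmoid focal loss formula used by the deleted `py_sigmoid_focal_loss`, without the mmcv CUDA op. Shapes and the random toy data are assumptions for the example.

```python
# Sketch: sigmoid focal loss on (N, C) logits with one-hot targets.
import torch
import torch.nn.functional as F

def sigmoid_focal_loss_py(pred, target, gamma=2.0, alpha=0.25):
    p = pred.sigmoid()
    pt = (1 - p) * target + p * (1 - target)                 # prob. of the "wrong" side
    focal_weight = (alpha * target + (1 - alpha) * (1 - target)) * pt.pow(gamma)
    bce = F.binary_cross_entropy_with_logits(pred, target, reduction='none')
    return (bce * focal_weight).mean()

pred = torch.randn(8, 80)
target = F.one_hot(torch.randint(0, 80, (8,)), num_classes=80).float()
print(sigmoid_focal_loss_py(pred, target))
```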
- loss_weight (float, optional): Weight of loss. Defaults to 1.0. - activated (bool, optional): Whether the input is activated. - If True, it means the input has been activated and can be - treated as probabilities. Else, it should be treated as logits. - Defaults to False. - """ - super(FocalLoss, self).__init__() - assert use_sigmoid is True, 'Only sigmoid focal loss supported now.' - self.use_sigmoid = use_sigmoid - self.gamma = gamma - self.alpha = alpha - self.reduction = reduction - self.loss_weight = loss_weight - self.activated = activated - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - if self.activated: - calculate_loss_func = py_focal_loss_with_prob - else: - if torch.cuda.is_available() and pred.is_cuda: - calculate_loss_func = sigmoid_focal_loss - else: - num_classes = pred.size(1) - target = F.one_hot(target, num_classes=num_classes + 1) - target = target[:, :num_classes] - calculate_loss_func = py_sigmoid_focal_loss - - loss_cls = self.loss_weight * calculate_loss_func( - pred, - target, - weight, - gamma=self.gamma, - alpha=self.alpha, - reduction=reduction, - avg_factor=avg_factor) - - else: - raise NotImplementedError - return loss_cls diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/gaussian_focal_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/gaussian_focal_loss.py deleted file mode 100644 index 7abcb691..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/gaussian_focal_loss.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def gaussian_focal_loss(pred, gaussian_target, alpha=2.0, gamma=4.0): - """`Focal Loss `_ for targets in gaussian - distribution. - - Args: - pred (torch.Tensor): The prediction. - gaussian_target (torch.Tensor): The learning target of the prediction - in gaussian distribution. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 2.0. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 4.0. - """ - eps = 1e-12 - pos_weights = gaussian_target.eq(1) - neg_weights = (1 - gaussian_target).pow(gamma) - pos_loss = -(pred + eps).log() * (1 - pred).pow(alpha) * pos_weights - neg_loss = -(1 - pred + eps).log() * pred.pow(alpha) * neg_weights - return pos_loss + neg_loss - - -@LOSSES.register_module() -class GaussianFocalLoss(nn.Module): - """GaussianFocalLoss is a variant of focal loss. 
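A small standalone sketch of the Gaussian focal loss just deleted above: the target is a Gaussian heatmap in [0, 1] rather than 0/1 labels, and only exact peaks count as positives. Toy tensors only.

```python
# Sketch: Gaussian focal loss on a predicted heatmap.
import torch

def gaussian_focal_loss_py(pred, gaussian_target, alpha=2.0, gamma=4.0, eps=1e-12):
    pos_weights = gaussian_target.eq(1)                       # exact centers
    neg_weights = (1 - gaussian_target).pow(gamma)            # soft negatives
    pos_loss = -(pred + eps).log() * (1 - pred).pow(alpha) * pos_weights
    neg_loss = -(1 - pred + eps).log() * pred.pow(alpha) * neg_weights
    return (pos_loss + neg_loss).mean()

pred = torch.rand(2, 1, 8, 8)              # predicted heatmap after sigmoid
target = torch.zeros(2, 1, 8, 8)
target[:, :, 4, 4] = 1.0                   # one peak per image
print(gaussian_focal_loss_py(pred, target))
```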
- - More details can be found in the `paper - `_ - Code is modified from `kp_utils.py - `_ # noqa: E501 - Please notice that the target in GaussianFocalLoss is a gaussian heatmap, - not 0/1 binary target. - - Args: - alpha (float): Power of prediction. - gamma (float): Power of target for negative samples. - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Loss weight of current loss. - """ - - def __init__(self, - alpha=2.0, - gamma=4.0, - reduction='mean', - loss_weight=1.0): - super(GaussianFocalLoss, self).__init__() - self.alpha = alpha - self.gamma = gamma - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction - in gaussian distribution. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_reg = self.loss_weight * gaussian_focal_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - reduction=reduction, - avg_factor=avg_factor) - return loss_reg diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/gfocal_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/gfocal_loss.py deleted file mode 100644 index 0e8d2637..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/gfocal_loss.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def quality_focal_loss(pred, target, beta=2.0): - r"""Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning - Qualified and Distributed Bounding Boxes for Dense Object Detection - `_. - - Args: - pred (torch.Tensor): Predicted joint representation of classification - and quality (IoU) estimation with shape (N, C), C is the number of - classes. - target (tuple([torch.Tensor])): Target category label with shape (N,) - and target quality label with shape (N,). - beta (float): The beta parameter for calculating the modulating factor. - Defaults to 2.0. - - Returns: - torch.Tensor: Loss tensor with shape (N,). 
- """ - assert len(target) == 2, """target for QFL must be a tuple of two elements, - including category label and quality label, respectively""" - # label denotes the category id, score denotes the quality score - label, score = target - - # negatives are supervised by 0 quality score - pred_sigmoid = pred.sigmoid() - scale_factor = pred_sigmoid - zerolabel = scale_factor.new_zeros(pred.shape) - loss = F.binary_cross_entropy_with_logits( - pred, zerolabel, reduction='none') * scale_factor.pow(beta) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = pred.size(1) - pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1) - pos_label = label[pos].long() - # positives are supervised by bbox quality (IoU) score - scale_factor = score[pos] - pred_sigmoid[pos, pos_label] - loss[pos, pos_label] = F.binary_cross_entropy_with_logits( - pred[pos, pos_label], score[pos], - reduction='none') * scale_factor.abs().pow(beta) - - loss = loss.sum(dim=1, keepdim=False) - return loss - - -@weighted_loss -def quality_focal_loss_with_prob(pred, target, beta=2.0): - r"""Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning - Qualified and Distributed Bounding Boxes for Dense Object Detection - `_. - Different from `quality_focal_loss`, this function accepts probability - as input. - - Args: - pred (torch.Tensor): Predicted joint representation of classification - and quality (IoU) estimation with shape (N, C), C is the number of - classes. - target (tuple([torch.Tensor])): Target category label with shape (N,) - and target quality label with shape (N,). - beta (float): The beta parameter for calculating the modulating factor. - Defaults to 2.0. - - Returns: - torch.Tensor: Loss tensor with shape (N,). - """ - assert len(target) == 2, """target for QFL must be a tuple of two elements, - including category label and quality label, respectively""" - # label denotes the category id, score denotes the quality score - label, score = target - - # negatives are supervised by 0 quality score - pred_sigmoid = pred - scale_factor = pred_sigmoid - zerolabel = scale_factor.new_zeros(pred.shape) - loss = F.binary_cross_entropy( - pred, zerolabel, reduction='none') * scale_factor.pow(beta) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = pred.size(1) - pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1) - pos_label = label[pos].long() - # positives are supervised by bbox quality (IoU) score - scale_factor = score[pos] - pred_sigmoid[pos, pos_label] - loss[pos, pos_label] = F.binary_cross_entropy( - pred[pos, pos_label], score[pos], - reduction='none') * scale_factor.abs().pow(beta) - - loss = loss.sum(dim=1, keepdim=False) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def distribution_focal_loss(pred, label): - r"""Distribution Focal Loss (DFL) is from `Generalized Focal Loss: Learning - Qualified and Distributed Bounding Boxes for Dense Object Detection - `_. - - Args: - pred (torch.Tensor): Predicted general distribution of bounding boxes - (before softmax) with shape (N, n+1), n is the max value of the - integral set `{0, ..., n}` in paper. - label (torch.Tensor): Target distance label for bounding boxes with - shape (N,). - - Returns: - torch.Tensor: Loss tensor with shape (N,). 
- """ - dis_left = label.long() - dis_right = dis_left + 1 - weight_left = dis_right.float() - label - weight_right = label - dis_left.float() - loss = F.cross_entropy(pred, dis_left, reduction='none') * weight_left \ - + F.cross_entropy(pred, dis_right, reduction='none') * weight_right - return loss - - -@LOSSES.register_module() -class QualityFocalLoss(nn.Module): - r"""Quality Focal Loss (QFL) is a variant of `Generalized Focal Loss: - Learning Qualified and Distributed Bounding Boxes for Dense Object - Detection `_. - - Args: - use_sigmoid (bool): Whether sigmoid operation is conducted in QFL. - Defaults to True. - beta (float): The beta parameter for calculating the modulating factor. - Defaults to 2.0. - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Loss weight of current loss. - activated (bool, optional): Whether the input is activated. - If True, it means the input has been activated and can be - treated as probabilities. Else, it should be treated as logits. - Defaults to False. - """ - - def __init__(self, - use_sigmoid=True, - beta=2.0, - reduction='mean', - loss_weight=1.0, - activated=False): - super(QualityFocalLoss, self).__init__() - assert use_sigmoid is True, 'Only sigmoid in QFL supported now.' - self.use_sigmoid = use_sigmoid - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - self.activated = activated - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): Predicted joint representation of - classification and quality (IoU) estimation with shape (N, C), - C is the number of classes. - target (tuple([torch.Tensor])): Target category label with shape - (N,) and target quality label with shape (N,). - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - if self.activated: - calculate_loss_func = quality_focal_loss_with_prob - else: - calculate_loss_func = quality_focal_loss - loss_cls = self.loss_weight * calculate_loss_func( - pred, - target, - weight, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor) - else: - raise NotImplementedError - return loss_cls - - -@LOSSES.register_module() -class DistributionFocalLoss(nn.Module): - r"""Distribution Focal Loss (DFL) is a variant of `Generalized Focal Loss: - Learning Qualified and Distributed Bounding Boxes for Dense Object - Detection `_. - - Args: - reduction (str): Options are `'none'`, `'mean'` and `'sum'`. - loss_weight (float): Loss weight of current loss. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(DistributionFocalLoss, self).__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): Predicted general distribution of bounding - boxes (before softmax) with shape (N, n+1), n is the max value - of the integral set `{0, ..., n}` in paper. 
- target (torch.Tensor): Target distance label for bounding boxes - with shape (N,). - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_cls = self.loss_weight * distribution_focal_loss( - pred, target, weight, reduction=reduction, avg_factor=avg_factor) - return loss_cls diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/ghm_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/ghm_loss.py deleted file mode 100644 index a4df9fe8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/ghm_loss.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -def _expand_onehot_labels(labels, label_weights, label_channels): - bin_labels = labels.new_full((labels.size(0), label_channels), 0) - inds = torch.nonzero( - (labels >= 0) & (labels < label_channels), as_tuple=False).squeeze() - if inds.numel() > 0: - bin_labels[inds, labels[inds]] = 1 - bin_label_weights = label_weights.view(-1, 1).expand( - label_weights.size(0), label_channels) - return bin_labels, bin_label_weights - - -# TODO: code refactoring to make it consistent with other losses -@LOSSES.register_module() -class GHMC(nn.Module): - """GHM Classification Loss. - - Details of the theorem can be viewed in the paper - `Gradient Harmonized Single-stage Detector - `_. - - Args: - bins (int): Number of the unit regions for distribution calculation. - momentum (float): The parameter for moving average. - use_sigmoid (bool): Can only be true for BCE based loss now. - loss_weight (float): The weight of the total GHM-C loss. - reduction (str): Options are "none", "mean" and "sum". - Defaults to "mean" - """ - - def __init__(self, - bins=10, - momentum=0, - use_sigmoid=True, - loss_weight=1.0, - reduction='mean'): - super(GHMC, self).__init__() - self.bins = bins - self.momentum = momentum - edges = torch.arange(bins + 1).float() / bins - self.register_buffer('edges', edges) - self.edges[-1] += 1e-6 - if momentum > 0: - acc_sum = torch.zeros(bins) - self.register_buffer('acc_sum', acc_sum) - self.use_sigmoid = use_sigmoid - if not self.use_sigmoid: - raise NotImplementedError - self.loss_weight = loss_weight - self.reduction = reduction - - def forward(self, - pred, - target, - label_weight, - reduction_override=None, - **kwargs): - """Calculate the GHM-C loss. - - Args: - pred (float tensor of size [batch_num, class_num]): - The direct prediction of classification fc layer. - target (float tensor of size [batch_num, class_num]): - Binary class target for each sample. - label_weight (float tensor of size [batch_num, class_num]): - the value is 1 if the sample is valid and 0 if ignored. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - Returns: - The gradient harmonized loss. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - # the target should be binary class label - if pred.dim() != target.dim(): - target, label_weight = _expand_onehot_labels( - target, label_weight, pred.size(-1)) - target, label_weight = target.float(), label_weight.float() - edges = self.edges - mmt = self.momentum - weights = torch.zeros_like(pred) - - # gradient length - g = torch.abs(pred.sigmoid().detach() - target) - - valid = label_weight > 0 - tot = max(valid.float().sum().item(), 1.0) - n = 0 # n valid bins - for i in range(self.bins): - inds = (g >= edges[i]) & (g < edges[i + 1]) & valid - num_in_bin = inds.sum().item() - if num_in_bin > 0: - if mmt > 0: - self.acc_sum[i] = mmt * self.acc_sum[i] \ - + (1 - mmt) * num_in_bin - weights[inds] = tot / self.acc_sum[i] - else: - weights[inds] = tot / num_in_bin - n += 1 - if n > 0: - weights = weights / n - - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') - loss = weight_reduce_loss( - loss, weights, reduction=reduction, avg_factor=tot) - return loss * self.loss_weight - - -# TODO: code refactoring to make it consistent with other losses -@LOSSES.register_module() -class GHMR(nn.Module): - """GHM Regression Loss. - - Details of the theorem can be viewed in the paper - `Gradient Harmonized Single-stage Detector - `_. - - Args: - mu (float): The parameter for the Authentic Smooth L1 loss. - bins (int): Number of the unit regions for distribution calculation. - momentum (float): The parameter for moving average. - loss_weight (float): The weight of the total GHM-R loss. - reduction (str): Options are "none", "mean" and "sum". - Defaults to "mean" - """ - - def __init__(self, - mu=0.02, - bins=10, - momentum=0, - loss_weight=1.0, - reduction='mean'): - super(GHMR, self).__init__() - self.mu = mu - self.bins = bins - edges = torch.arange(bins + 1).float() / bins - self.register_buffer('edges', edges) - self.edges[-1] = 1e3 - self.momentum = momentum - if momentum > 0: - acc_sum = torch.zeros(bins) - self.register_buffer('acc_sum', acc_sum) - self.loss_weight = loss_weight - self.reduction = reduction - - # TODO: support reduction parameter - def forward(self, - pred, - target, - label_weight, - avg_factor=None, - reduction_override=None): - """Calculate the GHM-R loss. - - Args: - pred (float tensor of size [batch_num, 4 (* class_num)]): - The prediction of box regression layer. Channel number can be 4 - or 4 * class_num depending on whether it is class-agnostic. - target (float tensor of size [batch_num, 4 (* class_num)]): - The target regression values with the same size of pred. - label_weight (float tensor of size [batch_num, 4 (* class_num)]): - The weight of each sample, 0 if ignored. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - Returns: - The gradient harmonized loss. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - mu = self.mu - edges = self.edges - mmt = self.momentum - - # ASL1 loss - diff = pred - target - loss = torch.sqrt(diff * diff + mu * mu) - mu - - # gradient length - g = torch.abs(diff / torch.sqrt(mu * mu + diff * diff)).detach() - weights = torch.zeros_like(g) - - valid = label_weight > 0 - tot = max(label_weight.float().sum().item(), 1.0) - n = 0 # n: valid bins - for i in range(self.bins): - inds = (g >= edges[i]) & (g < edges[i + 1]) & valid - num_in_bin = inds.sum().item() - if num_in_bin > 0: - n += 1 - if mmt > 0: - self.acc_sum[i] = mmt * self.acc_sum[i] \ - + (1 - mmt) * num_in_bin - weights[inds] = tot / self.acc_sum[i] - else: - weights[inds] = tot / num_in_bin - if n > 0: - weights /= n - loss = weight_reduce_loss( - loss, weights, reduction=reduction, avg_factor=tot) - return loss * self.loss_weight diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/iou_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/iou_loss.py deleted file mode 100644 index bf1ed04e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/iou_loss.py +++ /dev/null @@ -1,474 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -import warnings - -import mmcv -import torch -import torch.nn as nn - -from mmdet.core import bbox_overlaps -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def iou_loss(pred, target, linear=False, mode='log', eps=1e-6): - """IoU loss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - The loss is calculated as negative log of IoU. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - linear (bool, optional): If True, use linear scale of loss instead of - log scale. Default: False. - mode (str): Loss scaling mode, including "linear", "square", and "log". - Default: 'log' - eps (float): Eps to avoid log(0). - - Return: - torch.Tensor: Loss tensor. - """ - assert mode in ['linear', 'square', 'log'] - if linear: - mode = 'linear' - warnings.warn('DeprecationWarning: Setting "linear=True" in ' - 'iou_loss is deprecated, please use "mode=`linear`" ' - 'instead.') - ious = bbox_overlaps(pred, target, is_aligned=True).clamp(min=eps) - if mode == 'linear': - loss = 1 - ious - elif mode == 'square': - loss = 1 - ious**2 - elif mode == 'log': - loss = -ious.log() - else: - raise NotImplementedError - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def bounded_iou_loss(pred, target, beta=0.2, eps=1e-3): - """BIoULoss. - - This is an implementation of paper - `Improving Object Localization with Fitness NMS and Bounded IoU Loss. - `_. - - Args: - pred (torch.Tensor): Predicted bboxes. - target (torch.Tensor): Target bboxes. - beta (float): beta parameter in smoothl1. - eps (float): eps to avoid NaN. 
- """ - pred_ctrx = (pred[:, 0] + pred[:, 2]) * 0.5 - pred_ctry = (pred[:, 1] + pred[:, 3]) * 0.5 - pred_w = pred[:, 2] - pred[:, 0] - pred_h = pred[:, 3] - pred[:, 1] - with torch.no_grad(): - target_ctrx = (target[:, 0] + target[:, 2]) * 0.5 - target_ctry = (target[:, 1] + target[:, 3]) * 0.5 - target_w = target[:, 2] - target[:, 0] - target_h = target[:, 3] - target[:, 1] - - dx = target_ctrx - pred_ctrx - dy = target_ctry - pred_ctry - - loss_dx = 1 - torch.max( - (target_w - 2 * dx.abs()) / - (target_w + 2 * dx.abs() + eps), torch.zeros_like(dx)) - loss_dy = 1 - torch.max( - (target_h - 2 * dy.abs()) / - (target_h + 2 * dy.abs() + eps), torch.zeros_like(dy)) - loss_dw = 1 - torch.min(target_w / (pred_w + eps), pred_w / - (target_w + eps)) - loss_dh = 1 - torch.min(target_h / (pred_h + eps), pred_h / - (target_h + eps)) - # view(..., -1) does not work for empty tensor - loss_comb = torch.stack([loss_dx, loss_dy, loss_dw, loss_dh], - dim=-1).flatten(1) - - loss = torch.where(loss_comb < beta, 0.5 * loss_comb * loss_comb / beta, - loss_comb - 0.5 * beta) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def giou_loss(pred, target, eps=1e-7): - r"""`Generalized Intersection over Union: A Metric and A Loss for Bounding - Box Regression `_. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - - Return: - Tensor: Loss tensor. - """ - gious = bbox_overlaps(pred, target, mode='giou', is_aligned=True, eps=eps) - loss = 1 - gious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def diou_loss(pred, target, eps=1e-7): - r"""`Implementation of Distance-IoU Loss: Faster and Better - Learning for Bounding Box Regression, https://arxiv.org/abs/1911.08287`_. - - Code is modified from https://github.com/Zzh-tju/DIoU. - - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - # DIoU - dious = ious - rho2 / c2 - loss = 1 - dious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def ciou_loss(pred, target, eps=1e-7): - r"""`Implementation of paper `Enhancing Geometric Factors into - Model Learning and Inference for Object Detection and Instance - Segmentation `_. - - Code is modified from https://github.com/Zzh-tju/CIoU. 
- - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - factor = 4 / math.pi**2 - v = factor * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - - with torch.no_grad(): - alpha = (ious > 0.5).float() * v / (1 - ious + v) - - # CIoU - cious = ious - (rho2 / c2 + alpha * v) - loss = 1 - cious.clamp(min=-1.0, max=1.0) - return loss - - -@LOSSES.register_module() -class IoULoss(nn.Module): - """IoULoss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - - Args: - linear (bool): If True, use linear scale of loss else determined - by mode. Default: False. - eps (float): Eps to avoid log(0). - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Weight of loss. - mode (str): Loss scaling mode, including "linear", "square", and "log". - Default: 'log' - """ - - def __init__(self, - linear=False, - eps=1e-6, - reduction='mean', - loss_weight=1.0, - mode='log'): - super(IoULoss, self).__init__() - assert mode in ['linear', 'square', 'log'] - if linear: - mode = 'linear' - warnings.warn('DeprecationWarning: Setting "linear=True" in ' - 'IOULoss is deprecated, please use "mode=`linear`" ' - 'instead.') - self.mode = mode - self.linear = linear - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. Options are "none", "mean" and "sum". 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if (weight is not None) and (not torch.any(weight > 0)) and ( - reduction != 'none'): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # iou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * iou_loss( - pred, - target, - weight, - mode=self.mode, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class BoundedIoULoss(nn.Module): - - def __init__(self, beta=0.2, eps=1e-3, reduction='mean', loss_weight=1.0): - super(BoundedIoULoss, self).__init__() - self.beta = beta - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss = self.loss_weight * bounded_iou_loss( - pred, - target, - weight, - beta=self.beta, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class GIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(GIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * giou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class DIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(DIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - 
loss = self.loss_weight * diou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class CIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(CIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * ciou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/kd_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/kd_loss.py deleted file mode 100644 index 75c19355..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/kd_loss.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def knowledge_distillation_kl_div_loss(pred, - soft_label, - T, - detach_target=True): - r"""Loss function for knowledge distilling using KL divergence. - - Args: - pred (Tensor): Predicted logits with shape (N, n + 1). - soft_label (Tensor): Target logits with shape (N, N + 1). - T (int): Temperature for distillation. - detach_target (bool): Remove soft_label from automatic differentiation - - Returns: - torch.Tensor: Loss tensor with shape (N,). - """ - assert pred.size() == soft_label.size() - target = F.softmax(soft_label / T, dim=1) - if detach_target: - target = target.detach() - - kd_loss = F.kl_div( - F.log_softmax(pred / T, dim=1), target, reduction='none').mean(1) * ( - T * T) - - return kd_loss - - -@LOSSES.register_module() -class KnowledgeDistillationKLDivLoss(nn.Module): - """Loss function for knowledge distilling using KL divergence. - - Args: - reduction (str): Options are `'none'`, `'mean'` and `'sum'`. - loss_weight (float): Loss weight of current loss. - T (int): Temperature for distillation. - """ - - def __init__(self, reduction='mean', loss_weight=1.0, T=10): - super(KnowledgeDistillationKLDivLoss, self).__init__() - assert T >= 1 - self.reduction = reduction - self.loss_weight = loss_weight - self.T = T - - def forward(self, - pred, - soft_label, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (Tensor): Predicted logits with shape (N, n + 1). - soft_label (Tensor): Target logits with shape (N, N + 1). - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. 
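A minimal sketch of the temperature-scaled KL distillation loss implemented by the deleted `knowledge_distillation_kl_div_loss` above. The student/teacher logits are toy values; `T` is the distillation temperature.

```python
# Sketch: knowledge-distillation KL divergence with temperature T.
import torch
import torch.nn.functional as F

def kd_kl_div_loss(pred, soft_label, T=10):
    """pred, soft_label: (N, C) logits; returns (N,) per-sample losses."""
    target = F.softmax(soft_label / T, dim=1).detach()
    return F.kl_div(
        F.log_softmax(pred / T, dim=1), target, reduction='none').mean(1) * (T * T)

student_logits = torch.randn(4, 81)
teacher_logits = torch.randn(4, 81)
print(kd_kl_div_loss(student_logits, teacher_logits).mean())
```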
- reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - - reduction = ( - reduction_override if reduction_override else self.reduction) - - loss_kd = self.loss_weight * knowledge_distillation_kl_div_loss( - pred, - soft_label, - weight, - reduction=reduction, - avg_factor=avg_factor, - T=self.T) - - return loss_kd diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/mse_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/mse_loss.py deleted file mode 100644 index 4a622f86..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/mse_loss.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weighted_loss - - -@weighted_loss -def mse_loss(pred, target): - """Warpper of mse loss.""" - return F.mse_loss(pred, target, reduction='none') - - -@LOSSES.register_module() -class MSELoss(nn.Module): - """MSELoss. - - Args: - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of the loss. Defaults to 1.0 - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super().__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function of loss. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): Weight of the loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss = self.loss_weight * mse_loss( - pred, target, weight, reduction=reduction, avg_factor=avg_factor) - return loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/pisa_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/pisa_loss.py deleted file mode 100644 index 6afea0e5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/pisa_loss.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch - -from mmdet.core import bbox_overlaps - - -@mmcv.jit(derivate=True, coderize=True) -def isr_p(cls_score, - bbox_pred, - bbox_targets, - rois, - sampling_results, - loss_cls, - bbox_coder, - k=2, - bias=0, - num_class=80): - """Importance-based Sample Reweighting (ISR_P), positive part. - - Args: - cls_score (Tensor): Predicted classification scores. - bbox_pred (Tensor): Predicted bbox deltas. - bbox_targets (tuple[Tensor]): A tuple of bbox targets, the are - labels, label_weights, bbox_targets, bbox_weights, respectively. - rois (Tensor): Anchors (single_stage) in shape (n, 4) or RoIs - (two_stage) in shape (n, 5). - sampling_results (obj): Sampling results. 
- loss_cls (func): Classification loss func of the head. - bbox_coder (obj): BBox coder of the head. - k (float): Power of the non-linear mapping. - bias (float): Shift of the non-linear mapping. - num_class (int): Number of classes, default: 80. - - Return: - tuple([Tensor]): labels, imp_based_label_weights, bbox_targets, - bbox_target_weights - """ - - labels, label_weights, bbox_targets, bbox_weights = bbox_targets - pos_label_inds = ((labels >= 0) & - (labels < num_class)).nonzero().reshape(-1) - pos_labels = labels[pos_label_inds] - - # if no positive samples, return the original targets - num_pos = float(pos_label_inds.size(0)) - if num_pos == 0: - return labels, label_weights, bbox_targets, bbox_weights - - # merge pos_assigned_gt_inds of per image to a single tensor - gts = list() - last_max_gt = 0 - for i in range(len(sampling_results)): - gt_i = sampling_results[i].pos_assigned_gt_inds - gts.append(gt_i + last_max_gt) - if len(gt_i) != 0: - last_max_gt = gt_i.max() + 1 - gts = torch.cat(gts) - assert len(gts) == num_pos - - cls_score = cls_score.detach() - bbox_pred = bbox_pred.detach() - - # For single stage detectors, rois here indicate anchors, in shape (N, 4) - # For two stage detectors, rois are in shape (N, 5) - if rois.size(-1) == 5: - pos_rois = rois[pos_label_inds][:, 1:] - else: - pos_rois = rois[pos_label_inds] - - if bbox_pred.size(-1) > 4: - bbox_pred = bbox_pred.view(bbox_pred.size(0), -1, 4) - pos_delta_pred = bbox_pred[pos_label_inds, pos_labels].view(-1, 4) - else: - pos_delta_pred = bbox_pred[pos_label_inds].view(-1, 4) - - # compute iou of the predicted bbox and the corresponding GT - pos_delta_target = bbox_targets[pos_label_inds].view(-1, 4) - pos_bbox_pred = bbox_coder.decode(pos_rois, pos_delta_pred) - target_bbox_pred = bbox_coder.decode(pos_rois, pos_delta_target) - ious = bbox_overlaps(pos_bbox_pred, target_bbox_pred, is_aligned=True) - - pos_imp_weights = label_weights[pos_label_inds] - # Two steps to compute IoU-HLR. 
Samples are first sorted by IoU locally, - # then sorted again within the same-rank group - max_l_num = pos_labels.bincount().max() - for label in pos_labels.unique(): - l_inds = (pos_labels == label).nonzero().view(-1) - l_gts = gts[l_inds] - for t in l_gts.unique(): - t_inds = l_inds[l_gts == t] - t_ious = ious[t_inds] - _, t_iou_rank_idx = t_ious.sort(descending=True) - _, t_iou_rank = t_iou_rank_idx.sort() - ious[t_inds] += max_l_num - t_iou_rank.float() - l_ious = ious[l_inds] - _, l_iou_rank_idx = l_ious.sort(descending=True) - _, l_iou_rank = l_iou_rank_idx.sort() # IoU-HLR - # linearly map HLR to label weights - pos_imp_weights[l_inds] *= (max_l_num - l_iou_rank.float()) / max_l_num - - pos_imp_weights = (bias + pos_imp_weights * (1 - bias)).pow(k) - - # normalize to make the new weighted loss value equal to the original loss - pos_loss_cls = loss_cls( - cls_score[pos_label_inds], pos_labels, reduction_override='none') - if pos_loss_cls.dim() > 1: - ori_pos_loss_cls = pos_loss_cls * label_weights[pos_label_inds][:, - None] - new_pos_loss_cls = pos_loss_cls * pos_imp_weights[:, None] - else: - ori_pos_loss_cls = pos_loss_cls * label_weights[pos_label_inds] - new_pos_loss_cls = pos_loss_cls * pos_imp_weights - pos_loss_cls_ratio = ori_pos_loss_cls.sum() / new_pos_loss_cls.sum() - pos_imp_weights = pos_imp_weights * pos_loss_cls_ratio - label_weights[pos_label_inds] = pos_imp_weights - - bbox_targets = labels, label_weights, bbox_targets, bbox_weights - return bbox_targets - - -@mmcv.jit(derivate=True, coderize=True) -def carl_loss(cls_score, - labels, - bbox_pred, - bbox_targets, - loss_bbox, - k=1, - bias=0.2, - avg_factor=None, - sigmoid=False, - num_class=80): - """Classification-Aware Regression Loss (CARL). - - Args: - cls_score (Tensor): Predicted classification scores. - labels (Tensor): Targets of classification. - bbox_pred (Tensor): Predicted bbox deltas. - bbox_targets (Tensor): Target of bbox regression. - loss_bbox (func): Regression loss func of the head. - bbox_coder (obj): BBox coder of the head. - k (float): Power of the non-linear mapping. - bias (float): Shift of the non-linear mapping. - avg_factor (int): Average factor used in regression loss. - sigmoid (bool): Activation of the classification score. - num_class (int): Number of classes, default: 80. - - Return: - dict: CARL loss dict. - """ - pos_label_inds = ((labels >= 0) & - (labels < num_class)).nonzero().reshape(-1) - if pos_label_inds.numel() == 0: - return dict(loss_carl=cls_score.sum()[None] * 0.) 
- pos_labels = labels[pos_label_inds] - - # multiply pos_cls_score with the corresponding bbox weight - # and remain gradient - if sigmoid: - pos_cls_score = cls_score.sigmoid()[pos_label_inds, pos_labels] - else: - pos_cls_score = cls_score.softmax(-1)[pos_label_inds, pos_labels] - carl_loss_weights = (bias + (1 - bias) * pos_cls_score).pow(k) - - # normalize carl_loss_weight to make its sum equal to num positive - num_pos = float(pos_cls_score.size(0)) - weight_ratio = num_pos / carl_loss_weights.sum() - carl_loss_weights *= weight_ratio - - if avg_factor is None: - avg_factor = bbox_targets.size(0) - # if is class agnostic, bbox pred is in shape (N, 4) - # otherwise, bbox pred is in shape (N, #classes, 4) - if bbox_pred.size(-1) > 4: - bbox_pred = bbox_pred.view(bbox_pred.size(0), -1, 4) - pos_bbox_preds = bbox_pred[pos_label_inds, pos_labels] - else: - pos_bbox_preds = bbox_pred[pos_label_inds] - ori_loss_reg = loss_bbox( - pos_bbox_preds, - bbox_targets[pos_label_inds], - reduction_override='none') / avg_factor - loss_carl = (ori_loss_reg * carl_loss_weights[:, None]).sum() - return dict(loss_carl=loss_carl[None]) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/seesaw_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/seesaw_loss.py deleted file mode 100644 index 01040472..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/seesaw_loss.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .accuracy import accuracy -from .cross_entropy_loss import cross_entropy -from .utils import weight_reduce_loss - - -def seesaw_ce_loss(cls_score, - labels, - label_weights, - cum_samples, - num_classes, - p, - q, - eps, - reduction='mean', - avg_factor=None): - """Calculate the Seesaw CrossEntropy loss. - - Args: - cls_score (torch.Tensor): The prediction with shape (N, C), - C is the number of classes. - labels (torch.Tensor): The learning label of the prediction. - label_weights (torch.Tensor): Sample-wise loss weight. - cum_samples (torch.Tensor): Cumulative samples for each category. - num_classes (int): The number of classes. - p (float): The ``p`` in the mitigation factor. - q (float): The ``q`` in the compenstation factor. - eps (float): The minimal value of divisor to smooth - the computation of compensation factor - reduction (str, optional): The method used to reduce the loss. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. 
- - Returns: - torch.Tensor: The calculated loss - """ - assert cls_score.size(-1) == num_classes - assert len(cum_samples) == num_classes - - onehot_labels = F.one_hot(labels, num_classes) - seesaw_weights = cls_score.new_ones(onehot_labels.size()) - - # mitigation factor - if p > 0: - sample_ratio_matrix = cum_samples[None, :].clamp( - min=1) / cum_samples[:, None].clamp(min=1) - index = (sample_ratio_matrix < 1.0).float() - sample_weights = sample_ratio_matrix.pow(p) * index + (1 - index) - mitigation_factor = sample_weights[labels.long(), :] - seesaw_weights = seesaw_weights * mitigation_factor - - # compensation factor - if q > 0: - scores = F.softmax(cls_score.detach(), dim=1) - self_scores = scores[ - torch.arange(0, len(scores)).to(scores.device).long(), - labels.long()] - score_matrix = scores / self_scores[:, None].clamp(min=eps) - index = (score_matrix > 1.0).float() - compensation_factor = score_matrix.pow(q) * index + (1 - index) - seesaw_weights = seesaw_weights * compensation_factor - - cls_score = cls_score + (seesaw_weights.log() * (1 - onehot_labels)) - - loss = F.cross_entropy(cls_score, labels, weight=None, reduction='none') - - if label_weights is not None: - label_weights = label_weights.float() - loss = weight_reduce_loss( - loss, weight=label_weights, reduction=reduction, avg_factor=avg_factor) - return loss - - -@LOSSES.register_module() -class SeesawLoss(nn.Module): - """ - Seesaw Loss for Long-Tailed Instance Segmentation (CVPR 2021) - arXiv: https://arxiv.org/abs/2008.10032 - - Args: - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Only False is supported. - p (float, optional): The ``p`` in the mitigation factor. - Defaults to 0.8. - q (float, optional): The ``q`` in the compenstation factor. - Defaults to 2.0. - num_classes (int, optional): The number of classes. - Default to 1203 for LVIS v1 dataset. - eps (float, optional): The minimal value of divisor to smooth - the computation of compensation factor - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of the loss. Defaults to 1.0 - return_dict (bool, optional): Whether return the losses as a dict. - Default to True. - """ - - def __init__(self, - use_sigmoid=False, - p=0.8, - q=2.0, - num_classes=1203, - eps=1e-2, - reduction='mean', - loss_weight=1.0, - return_dict=True): - super(SeesawLoss, self).__init__() - assert not use_sigmoid - self.use_sigmoid = False - self.p = p - self.q = q - self.num_classes = num_classes - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - self.return_dict = return_dict - - # 0 for pos, 1 for neg - self.cls_criterion = seesaw_ce_loss - - # cumulative samples for each category - self.register_buffer( - 'cum_samples', - torch.zeros(self.num_classes + 1, dtype=torch.float)) - - # custom output channels of the classifier - self.custom_cls_channels = True - # custom activation of cls_score - self.custom_activation = True - # custom accuracy of the classsifier - self.custom_accuracy = True - - def _split_cls_score(self, cls_score): - # split cls_score to cls_score_classes and cls_score_objectness - assert cls_score.size(-1) == self.num_classes + 2 - cls_score_classes = cls_score[..., :-2] - cls_score_objectness = cls_score[..., -2:] - return cls_score_classes, cls_score_objectness - - def get_cls_channels(self, num_classes): - """Get custom classification channels. 
- - Args: - num_classes (int): The number of classes. - - Returns: - int: The custom classification channels. - """ - assert num_classes == self.num_classes - return num_classes + 2 - - def get_activation(self, cls_score): - """Get custom activation of cls_score. - - Args: - cls_score (torch.Tensor): The prediction with shape (N, C + 2). - - Returns: - torch.Tensor: The custom activation of cls_score with shape - (N, C + 1). - """ - cls_score_classes, cls_score_objectness = self._split_cls_score( - cls_score) - score_classes = F.softmax(cls_score_classes, dim=-1) - score_objectness = F.softmax(cls_score_objectness, dim=-1) - score_pos = score_objectness[..., [0]] - score_neg = score_objectness[..., [1]] - score_classes = score_classes * score_pos - scores = torch.cat([score_classes, score_neg], dim=-1) - return scores - - def get_accuracy(self, cls_score, labels): - """Get custom accuracy w.r.t. cls_score and labels. - - Args: - cls_score (torch.Tensor): The prediction with shape (N, C + 2). - labels (torch.Tensor): The learning label of the prediction. - - Returns: - Dict [str, torch.Tensor]: The accuracy for objectness and classes, - respectively. - """ - pos_inds = labels < self.num_classes - obj_labels = (labels == self.num_classes).long() - cls_score_classes, cls_score_objectness = self._split_cls_score( - cls_score) - acc_objectness = accuracy(cls_score_objectness, obj_labels) - acc_classes = accuracy(cls_score_classes[pos_inds], labels[pos_inds]) - acc = dict() - acc['acc_objectness'] = acc_objectness - acc['acc_classes'] = acc_classes - return acc - - def forward(self, - cls_score, - labels, - label_weights=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - cls_score (torch.Tensor): The prediction with shape (N, C + 2). - labels (torch.Tensor): The learning label of the prediction. - label_weights (torch.Tensor, optional): Sample-wise loss weight. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - Returns: - torch.Tensor | Dict [str, torch.Tensor]: - if return_dict == False: The calculated loss | - if return_dict == True: The dict of calculated losses - for objectness and classes, respectively. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - assert cls_score.size(-1) == self.num_classes + 2 - pos_inds = labels < self.num_classes - # 0 for pos, 1 for neg - obj_labels = (labels == self.num_classes).long() - - # accumulate the samples for each category - unique_labels = labels.unique() - for u_l in unique_labels: - inds_ = labels == u_l.item() - self.cum_samples[u_l] += inds_.sum() - - if label_weights is not None: - label_weights = label_weights.float() - else: - label_weights = labels.new_ones(labels.size(), dtype=torch.float) - - cls_score_classes, cls_score_objectness = self._split_cls_score( - cls_score) - # calculate loss_cls_classes (only need pos samples) - if pos_inds.sum() > 0: - loss_cls_classes = self.loss_weight * self.cls_criterion( - cls_score_classes[pos_inds], labels[pos_inds], - label_weights[pos_inds], self.cum_samples[:self.num_classes], - self.num_classes, self.p, self.q, self.eps, reduction, - avg_factor) - else: - loss_cls_classes = cls_score_classes[pos_inds].sum() - # calculate loss_cls_objectness - loss_cls_objectness = self.loss_weight * cross_entropy( - cls_score_objectness, obj_labels, label_weights, reduction, - avg_factor) - - if self.return_dict: - loss_cls = dict() - loss_cls['loss_cls_objectness'] = loss_cls_objectness - loss_cls['loss_cls_classes'] = loss_cls_classes - else: - loss_cls = loss_cls_classes + loss_cls_objectness - return loss_cls diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/smooth_l1_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/smooth_l1_loss.py deleted file mode 100644 index 55117467..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/smooth_l1_loss.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def smooth_l1_loss(pred, target, beta=1.0): - """Smooth L1 loss. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - - Returns: - torch.Tensor: Calculated loss - """ - assert beta > 0 - if target.numel() == 0: - return pred.sum() * 0 - - assert pred.size() == target.size() - diff = torch.abs(pred - target) - loss = torch.where(diff < beta, 0.5 * diff * diff / beta, - diff - 0.5 * beta) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def l1_loss(pred, target): - """L1 loss. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - - Returns: - torch.Tensor: Calculated loss - """ - if target.numel() == 0: - return pred.sum() * 0 - - assert pred.size() == target.size() - loss = torch.abs(pred - target) - return loss - - -@LOSSES.register_module() -class SmoothL1Loss(nn.Module): - """Smooth L1 loss. - - Args: - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - reduction (str, optional): The method to reduce the loss. - Options are "none", "mean" and "sum". Defaults to "mean". - loss_weight (float, optional): The weight of loss. 
- """ - - def __init__(self, beta=1.0, reduction='mean', loss_weight=1.0): - super(SmoothL1Loss, self).__init__() - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * smooth_l1_loss( - pred, - target, - weight, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox - - -@LOSSES.register_module() -class L1Loss(nn.Module): - """L1 loss. - - Args: - reduction (str, optional): The method to reduce the loss. - Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of loss. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(L1Loss, self).__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * l1_loss( - pred, target, weight, reduction=reduction, avg_factor=avg_factor) - return loss_bbox diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/utils.py deleted file mode 100644 index 778237eb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/utils.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import mmcv -import torch -import torch.nn.functional as F - - -def reduce_loss(loss, reduction): - """Reduce loss as specified. - - Args: - loss (Tensor): Elementwise loss tensor. - reduction (str): Options are "none", "mean" and "sum". - - Return: - Tensor: Reduced loss tensor. - """ - reduction_enum = F._Reduction.get_enum(reduction) - # none: 0, elementwise_mean:1, sum: 2 - if reduction_enum == 0: - return loss - elif reduction_enum == 1: - return loss.mean() - elif reduction_enum == 2: - return loss.sum() - - -@mmcv.jit(derivate=True, coderize=True) -def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None): - """Apply element-wise weight and reduce loss. - - Args: - loss (Tensor): Element-wise loss. - weight (Tensor): Element-wise weights. 
- reduction (str): Same as built-in losses of PyTorch. - avg_factor (float): Average factor when computing the mean of losses. - - Returns: - Tensor: Processed loss values. - """ - # if weight is specified, apply element-wise weight - if weight is not None: - loss = loss * weight - - # if avg_factor is not specified, just reduce the loss - if avg_factor is None: - loss = reduce_loss(loss, reduction) - else: - # if reduction is mean, then average the loss by avg_factor - if reduction == 'mean': - # Avoid causing ZeroDivisionError when avg_factor is 0.0, - # i.e., all labels of an image belong to ignore index. - eps = torch.finfo(torch.float32).eps - loss = loss.sum() / (avg_factor + eps) - # if reduction is 'none', then do nothing, otherwise raise an error - elif reduction != 'none': - raise ValueError('avg_factor can not be used with reduction="sum"') - return loss - - -def weighted_loss(loss_func): - """Create a weighted version of a given loss function. - - To use this decorator, the loss function must have the signature like - `loss_func(pred, target, **kwargs)`. The function only needs to compute - element-wise loss without any reduction. This decorator will add weight - and reduction arguments to the function. The decorated function will have - the signature like `loss_func(pred, target, weight=None, reduction='mean', - avg_factor=None, **kwargs)`. - - :Example: - - >>> import torch - >>> @weighted_loss - >>> def l1_loss(pred, target): - >>> return (pred - target).abs() - - >>> pred = torch.Tensor([0, 2, 3]) - >>> target = torch.Tensor([1, 1, 1]) - >>> weight = torch.Tensor([1, 0, 1]) - - >>> l1_loss(pred, target) - tensor(1.3333) - >>> l1_loss(pred, target, weight) - tensor(1.) - >>> l1_loss(pred, target, reduction='none') - tensor([1., 1., 2.]) - >>> l1_loss(pred, target, weight, avg_factor=2) - tensor(1.5000) - """ - - @functools.wraps(loss_func) - def wrapper(pred, - target, - weight=None, - reduction='mean', - avg_factor=None, - **kwargs): - # get element-wise loss - loss = loss_func(pred, target, **kwargs) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - return wrapper diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/varifocal_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/varifocal_loss.py deleted file mode 100644 index 42f0eef9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/losses/varifocal_loss.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -@mmcv.jit(derivate=True, coderize=True) -def varifocal_loss(pred, - target, - weight=None, - alpha=0.75, - gamma=2.0, - iou_weighted=True, - reduction='mean', - avg_factor=None): - """`Varifocal Loss `_ - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the - number of classes - target (torch.Tensor): The learning target of the iou-aware - classification score with shape (N, C), C is the number of classes. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - alpha (float, optional): A balance factor for the negative part of - Varifocal Loss, which is different from the alpha of Focal Loss. - Defaults to 0.75. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. 
- iou_weighted (bool, optional): Whether to weight the loss of the - positive example with the iou target. Defaults to True. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - # pred and target should be of the same size - assert pred.size() == target.size() - pred_sigmoid = pred.sigmoid() - target = target.type_as(pred) - if iou_weighted: - focal_weight = target * (target > 0.0).float() + \ - alpha * (pred_sigmoid - target).abs().pow(gamma) * \ - (target <= 0.0).float() - else: - focal_weight = (target > 0.0).float() + \ - alpha * (pred_sigmoid - target).abs().pow(gamma) * \ - (target <= 0.0).float() - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') * focal_weight - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class VarifocalLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - alpha=0.75, - gamma=2.0, - iou_weighted=True, - reduction='mean', - loss_weight=1.0): - """`Varifocal Loss `_ - - Args: - use_sigmoid (bool, optional): Whether the prediction is - used for sigmoid or softmax. Defaults to True. - alpha (float, optional): A balance factor for the negative part of - Varifocal Loss, which is different from the alpha of Focal - Loss. Defaults to 0.75. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - iou_weighted (bool, optional): Whether to weight the loss of the - positive examples with the iou target. Defaults to True. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - """ - super(VarifocalLoss, self).__init__() - assert use_sigmoid is True, \ - 'Only sigmoid varifocal loss supported now.' - assert alpha >= 0.0 - self.use_sigmoid = use_sigmoid - self.alpha = alpha - self.gamma = gamma - self.iou_weighted = iou_weighted - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". 
- - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - loss_cls = self.loss_weight * varifocal_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - iou_weighted=self.iou_weighted, - reduction=reduction, - avg_factor=avg_factor) - else: - raise NotImplementedError - return loss_cls diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/__init__.py deleted file mode 100644 index 6f2fa823..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .bfp import BFP -from .channel_mapper import ChannelMapper -from .ct_resnet_neck import CTResNetNeck -from .dilated_encoder import DilatedEncoder -from .dyhead import DyHead -from .fpg import FPG -from .fpn import FPN -from .fpn_carafe import FPN_CARAFE -from .hrfpn import HRFPN -from .nas_fpn import NASFPN -from .nasfcos_fpn import NASFCOS_FPN -from .pafpn import PAFPN -from .rfp import RFP -from .ssd_neck import SSDNeck -from .yolo_neck import YOLOV3Neck -from .yolox_pafpn import YOLOXPAFPN - -__all__ = [ - 'FPN', 'BFP', 'ChannelMapper', 'HRFPN', 'NASFPN', 'FPN_CARAFE', 'PAFPN', - 'NASFCOS_FPN', 'RFP', 'YOLOV3Neck', 'FPG', 'DilatedEncoder', - 'CTResNetNeck', 'SSDNeck', 'YOLOXPAFPN', 'DyHead' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/bfp.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/bfp.py deleted file mode 100644 index 9fdfa036..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/bfp.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.cnn.bricks import NonLocal2d -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -@NECKS.register_module() -class BFP(BaseModule): - """BFP (Balanced Feature Pyramids) - - BFP takes multi-level features as inputs and gather them into a single one, - then refine the gathered feature and scatter the refined results to - multi-level features. This module is used in Libra R-CNN (CVPR 2019), see - the paper `Libra R-CNN: Towards Balanced Learning for Object Detection - `_ for details. - - Args: - in_channels (int): Number of input channels (feature maps of all levels - should have the same channels). - num_levels (int): Number of input feature levels. - conv_cfg (dict): The config dict for convolution layers. - norm_cfg (dict): The config dict for normalization layers. - refine_level (int): Index of integration and refine level of BSF in - multi-level features from bottom to top. - refine_type (str): Type of the refine op, currently support - [None, 'conv', 'non_local']. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - num_levels, - refine_level=2, - refine_type=None, - conv_cfg=None, - norm_cfg=None, - init_cfg=dict( - type='Xavier', layer='Conv2d', distribution='uniform')): - super(BFP, self).__init__(init_cfg) - assert refine_type in [None, 'conv', 'non_local'] - - self.in_channels = in_channels - self.num_levels = num_levels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - self.refine_level = refine_level - self.refine_type = refine_type - assert 0 <= self.refine_level < self.num_levels - - if self.refine_type == 'conv': - self.refine = ConvModule( - self.in_channels, - self.in_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - elif self.refine_type == 'non_local': - self.refine = NonLocal2d( - self.in_channels, - reduction=1, - use_scale=False, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == self.num_levels - - # step 1: gather multi-level features by resize and average - feats = [] - gather_size = inputs[self.refine_level].size()[2:] - for i in range(self.num_levels): - if i < self.refine_level: - gathered = F.adaptive_max_pool2d( - inputs[i], output_size=gather_size) - else: - gathered = F.interpolate( - inputs[i], size=gather_size, mode='nearest') - feats.append(gathered) - - bsf = sum(feats) / len(feats) - - # step 2: refine gathered features - if self.refine_type is not None: - bsf = self.refine(bsf) - - # step 3: scatter refined features to multi-levels by a residual path - outs = [] - for i in range(self.num_levels): - out_size = inputs[i].size()[2:] - if i < self.refine_level: - residual = F.interpolate(bsf, size=out_size, mode='nearest') - else: - residual = F.adaptive_max_pool2d(bsf, output_size=out_size) - outs.append(residual + inputs[i]) - - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/channel_mapper.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/channel_mapper.py deleted file mode 100644 index 774bdb1d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/channel_mapper.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -@NECKS.register_module() -class ChannelMapper(BaseModule): - r"""Channel Mapper to reduce/increase channels of backbone features. - - This is used to reduce/increase channels of backbone features. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - kernel_size (int, optional): kernel_size for reducing channels (used - at each scale). Default: 3. - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - act_cfg (dict, optional): Config dict for activation layer in - ConvModule. Default: dict(type='ReLU'). - num_outs (int, optional): Number of output feature maps. There - would be extra_convs when num_outs larger than the length - of in_channels. - init_cfg (dict or list[dict], optional): Initialization config dict. - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... 
for c, s in zip(in_channels, scales)] - >>> self = ChannelMapper(in_channels, 11, 3).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - num_outs=None, - init_cfg=dict( - type='Xavier', layer='Conv2d', distribution='uniform')): - super(ChannelMapper, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.extra_convs = None - if num_outs is None: - num_outs = len(in_channels) - self.convs = nn.ModuleList() - for in_channel in in_channels: - self.convs.append( - ConvModule( - in_channel, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - if num_outs > len(in_channels): - self.extra_convs = nn.ModuleList() - for i in range(len(in_channels), num_outs): - if i == len(in_channels): - in_channel = in_channels[-1] - else: - in_channel = out_channels - self.extra_convs.append( - ConvModule( - in_channel, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.convs) - outs = [self.convs[i](inputs[i]) for i in range(len(inputs))] - if self.extra_convs: - for i in range(len(self.extra_convs)): - if i == 0: - outs.append(self.extra_convs[0](inputs[-1])) - else: - outs.append(self.extra_convs[i](outs[-1])) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/ct_resnet_neck.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/ct_resnet_neck.py deleted file mode 100644 index 40eb2685..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/ct_resnet_neck.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16 - -from mmdet.models.builder import NECKS - - -@NECKS.register_module() -class CTResNetNeck(BaseModule): - """The neck used in `CenterNet `_ for - object classification and box regression. - - Args: - in_channel (int): Number of input channels. - num_deconv_filters (tuple[int]): Number of filters per stage. - num_deconv_kernels (tuple[int]): Number of kernels per stage. - use_dcn (bool): If True, use DCNv2. Default: True. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channel, - num_deconv_filters, - num_deconv_kernels, - use_dcn=True, - init_cfg=None): - super(CTResNetNeck, self).__init__(init_cfg) - assert len(num_deconv_filters) == len(num_deconv_kernels) - self.fp16_enabled = False - self.use_dcn = use_dcn - self.in_channel = in_channel - self.deconv_layers = self._make_deconv_layer(num_deconv_filters, - num_deconv_kernels) - - def _make_deconv_layer(self, num_deconv_filters, num_deconv_kernels): - """use deconv layers to upsample backbone's output.""" - layers = [] - for i in range(len(num_deconv_filters)): - feat_channel = num_deconv_filters[i] - conv_module = ConvModule( - self.in_channel, - feat_channel, - 3, - padding=1, - conv_cfg=dict(type='DCNv2') if self.use_dcn else None, - norm_cfg=dict(type='BN')) - layers.append(conv_module) - upsample_module = ConvModule( - feat_channel, - feat_channel, - num_deconv_kernels[i], - stride=2, - padding=1, - conv_cfg=dict(type='deconv'), - norm_cfg=dict(type='BN')) - layers.append(upsample_module) - self.in_channel = feat_channel - - return nn.Sequential(*layers) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.ConvTranspose2d): - # In order to be consistent with the source code, - # reset the ConvTranspose2d initialization parameters - m.reset_parameters() - # Simulated bilinear upsampling kernel - w = m.weight.data - f = math.ceil(w.size(2) / 2) - c = (2 * f - 1 - f % 2) / (2. * f) - for i in range(w.size(2)): - for j in range(w.size(3)): - w[0, 0, i, j] = \ - (1 - math.fabs(i / f - c)) * ( - 1 - math.fabs(j / f - c)) - for c in range(1, w.size(0)): - w[c, 0, :, :] = w[0, 0, :, :] - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - # self.use_dcn is False - elif not self.use_dcn and isinstance(m, nn.Conv2d): - # In order to be consistent with the source code, - # reset the Conv2d initialization parameters - m.reset_parameters() - - @auto_fp16() - def forward(self, inputs): - assert isinstance(inputs, (list, tuple)) - outs = self.deconv_layers(inputs[-1]) - return outs, diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/dilated_encoder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/dilated_encoder.py deleted file mode 100644 index 6679835b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/dilated_encoder.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import (ConvModule, caffe2_xavier_init, constant_init, is_norm, - normal_init) -from torch.nn import BatchNorm2d - -from ..builder import NECKS - - -class Bottleneck(nn.Module): - """Bottleneck block for DilatedEncoder used in `YOLOF. - - `. - - The Bottleneck contains three ConvLayers and one residual connection. - - Args: - in_channels (int): The number of input channels. - mid_channels (int): The number of middle output channels. - dilation (int): Dilation rate. - norm_cfg (dict): Dictionary to construct and config norm layer. 
- """ - - def __init__(self, - in_channels, - mid_channels, - dilation, - norm_cfg=dict(type='BN', requires_grad=True)): - super(Bottleneck, self).__init__() - self.conv1 = ConvModule( - in_channels, mid_channels, 1, norm_cfg=norm_cfg) - self.conv2 = ConvModule( - mid_channels, - mid_channels, - 3, - padding=dilation, - dilation=dilation, - norm_cfg=norm_cfg) - self.conv3 = ConvModule( - mid_channels, in_channels, 1, norm_cfg=norm_cfg) - - def forward(self, x): - identity = x - out = self.conv1(x) - out = self.conv2(out) - out = self.conv3(out) - out = out + identity - return out - - -@NECKS.register_module() -class DilatedEncoder(nn.Module): - """Dilated Encoder for YOLOF `. - - This module contains two types of components: - - the original FPN lateral convolution layer and fpn convolution layer, - which are 1x1 conv + 3x3 conv - - the dilated residual block - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - block_mid_channels (int): The number of middle block output channels - num_residual_blocks (int): The number of residual blocks. - """ - - def __init__(self, in_channels, out_channels, block_mid_channels, - num_residual_blocks): - super(DilatedEncoder, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.block_mid_channels = block_mid_channels - self.num_residual_blocks = num_residual_blocks - self.block_dilations = [2, 4, 6, 8] - self._init_layers() - - def _init_layers(self): - self.lateral_conv = nn.Conv2d( - self.in_channels, self.out_channels, kernel_size=1) - self.lateral_norm = BatchNorm2d(self.out_channels) - self.fpn_conv = nn.Conv2d( - self.out_channels, self.out_channels, kernel_size=3, padding=1) - self.fpn_norm = BatchNorm2d(self.out_channels) - encoder_blocks = [] - for i in range(self.num_residual_blocks): - dilation = self.block_dilations[i] - encoder_blocks.append( - Bottleneck( - self.out_channels, - self.block_mid_channels, - dilation=dilation)) - self.dilated_encoder_blocks = nn.Sequential(*encoder_blocks) - - def init_weights(self): - caffe2_xavier_init(self.lateral_conv) - caffe2_xavier_init(self.fpn_conv) - for m in [self.lateral_norm, self.fpn_norm]: - constant_init(m, 1) - for m in self.dilated_encoder_blocks.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, mean=0, std=0.01) - if is_norm(m): - constant_init(m, 1) - - def forward(self, feature): - out = self.lateral_norm(self.lateral_conv(feature[-1])) - out = self.fpn_norm(self.fpn_conv(out)) - return self.dilated_encoder_blocks(out), diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/dyhead.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/dyhead.py deleted file mode 100644 index 5d752c34..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/dyhead.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (build_activation_layer, build_norm_layer, constant_init, - normal_init) -from mmcv.ops.modulated_deform_conv import ModulatedDeformConv2d -from mmcv.runner import BaseModule - -from ..builder import NECKS -from ..utils import DyReLU - -# Reference: -# https://github.com/microsoft/DynamicHead -# https://github.com/jshilong/SEPC - - -class DyDCNv2(nn.Module): - """ModulatedDeformConv2d with normalization layer used in DyHead. 
- - This module cannot be configured with `conv_cfg=dict(type='DCNv2')` - because DyHead calculates offset and mask from middle-level feature. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - stride (int | tuple[int], optional): Stride of the convolution. - Default: 1. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: dict(type='GN', num_groups=16, requires_grad=True). - """ - - def __init__(self, - in_channels, - out_channels, - stride=1, - norm_cfg=dict(type='GN', num_groups=16, requires_grad=True)): - super().__init__() - self.with_norm = norm_cfg is not None - bias = not self.with_norm - self.conv = ModulatedDeformConv2d( - in_channels, out_channels, 3, stride=stride, padding=1, bias=bias) - if self.with_norm: - self.norm = build_norm_layer(norm_cfg, out_channels)[1] - - def forward(self, x, offset, mask): - """Forward function.""" - x = self.conv(x.contiguous(), offset, mask) - if self.with_norm: - x = self.norm(x) - return x - - -class DyHeadBlock(nn.Module): - """DyHead Block with three types of attention. - - HSigmoid arguments in default act_cfg follow official code, not paper. - https://github.com/microsoft/DynamicHead/blob/master/dyhead/dyrelu.py - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - zero_init_offset (bool, optional): Whether to use zero init for - `spatial_conv_offset`. Default: True. - act_cfg (dict, optional): Config dict for the last activation layer of - scale-aware attention. Default: dict(type='HSigmoid', bias=3.0, - divisor=6.0). - """ - - def __init__(self, - in_channels, - out_channels, - zero_init_offset=True, - act_cfg=dict(type='HSigmoid', bias=3.0, divisor=6.0)): - super().__init__() - self.zero_init_offset = zero_init_offset - # (offset_x, offset_y, mask) * kernel_size_y * kernel_size_x - self.offset_and_mask_dim = 3 * 3 * 3 - self.offset_dim = 2 * 3 * 3 - - self.spatial_conv_high = DyDCNv2(in_channels, out_channels) - self.spatial_conv_mid = DyDCNv2(in_channels, out_channels) - self.spatial_conv_low = DyDCNv2(in_channels, out_channels, stride=2) - self.spatial_conv_offset = nn.Conv2d( - in_channels, self.offset_and_mask_dim, 3, padding=1) - self.scale_attn_module = nn.Sequential( - nn.AdaptiveAvgPool2d(1), nn.Conv2d(out_channels, 1, 1), - nn.ReLU(inplace=True), build_activation_layer(act_cfg)) - self.task_attn_module = DyReLU(out_channels) - self._init_weights() - - def _init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, 0, 0.01) - if self.zero_init_offset: - constant_init(self.spatial_conv_offset, 0) - - def forward(self, x): - """Forward function.""" - outs = [] - for level in range(len(x)): - # calculate offset and mask of DCNv2 from middle-level feature - offset_and_mask = self.spatial_conv_offset(x[level]) - offset = offset_and_mask[:, :self.offset_dim, :, :] - mask = offset_and_mask[:, self.offset_dim:, :, :].sigmoid() - - mid_feat = self.spatial_conv_mid(x[level], offset, mask) - sum_feat = mid_feat * self.scale_attn_module(mid_feat) - summed_levels = 1 - if level > 0: - low_feat = self.spatial_conv_low(x[level - 1], offset, mask) - sum_feat += low_feat * self.scale_attn_module(low_feat) - summed_levels += 1 - if level < len(x) - 1: - # this upsample order is weird, but faster than natural order - # https://github.com/microsoft/DynamicHead/issues/25 - high_feat = F.interpolate( - self.spatial_conv_high(x[level + 1], offset, mask), - size=x[level].shape[-2:], - 
mode='bilinear', - align_corners=True) - sum_feat += high_feat * self.scale_attn_module(high_feat) - summed_levels += 1 - outs.append(self.task_attn_module(sum_feat / summed_levels)) - - return outs - - -@NECKS.register_module() -class DyHead(BaseModule): - """DyHead neck consisting of multiple DyHead Blocks. - - See `Dynamic Head: Unifying Object Detection Heads with Attentions - `_ for details. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_blocks (int, optional): Number of DyHead Blocks. Default: 6. - zero_init_offset (bool, optional): Whether to use zero init for - `spatial_conv_offset`. Default: True. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - num_blocks=6, - zero_init_offset=True, - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_blocks = num_blocks - self.zero_init_offset = zero_init_offset - - dyhead_blocks = [] - for i in range(num_blocks): - in_channels = self.in_channels if i == 0 else self.out_channels - dyhead_blocks.append( - DyHeadBlock( - in_channels, - self.out_channels, - zero_init_offset=zero_init_offset)) - self.dyhead_blocks = nn.Sequential(*dyhead_blocks) - - def forward(self, inputs): - """Forward function.""" - assert isinstance(inputs, (tuple, list)) - outs = self.dyhead_blocks(inputs) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/fpg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/fpg.py deleted file mode 100644 index a6a2a12e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/fpg.py +++ /dev/null @@ -1,406 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -class Transition(BaseModule): - """Base class for transition. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - """ - - def __init__(self, in_channels, out_channels, init_cfg=None): - super().__init__(init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - - def forward(x): - pass - - -class UpInterpolationConv(Transition): - """A transition used for up-sampling. - - Up-sample the input by interpolation then refines the feature by - a convolution layer. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Up-sampling factor. Default: 2. - mode (int): Interpolation mode. Default: nearest. - align_corners (bool): Whether align corners when interpolation. - Default: None. - kernel_size (int): Kernel size for the conv. Default: 3. 
- """ - - def __init__(self, - in_channels, - out_channels, - scale_factor=2, - mode='nearest', - align_corners=None, - kernel_size=3, - init_cfg=None, - **kwargs): - super().__init__(in_channels, out_channels, init_cfg) - self.mode = mode - self.scale_factor = scale_factor - self.align_corners = align_corners - self.conv = ConvModule( - in_channels, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - **kwargs) - - def forward(self, x): - x = F.interpolate( - x, - scale_factor=self.scale_factor, - mode=self.mode, - align_corners=self.align_corners) - x = self.conv(x) - return x - - -class LastConv(Transition): - """A transition used for refining the output of the last stage. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_inputs (int): Number of inputs of the FPN features. - kernel_size (int): Kernel size for the conv. Default: 3. - """ - - def __init__(self, - in_channels, - out_channels, - num_inputs, - kernel_size=3, - init_cfg=None, - **kwargs): - super().__init__(in_channels, out_channels, init_cfg) - self.num_inputs = num_inputs - self.conv_out = ConvModule( - in_channels, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - **kwargs) - - def forward(self, inputs): - assert len(inputs) == self.num_inputs - return self.conv_out(inputs[-1]) - - -@NECKS.register_module() -class FPG(BaseModule): - """FPG. - - Implementation of `Feature Pyramid Grids (FPG) - `_. - This implementation only gives the basic structure stated in the paper. - But users can implement different type of transitions to fully explore the - the potential power of the structure of FPG. - - Args: - in_channels (int): Number of input channels (feature maps of all levels - should have the same channels). - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - stack_times (int): The number of times the pyramid architecture will - be stacked. - paths (list[str]): Specify the path order of each stack level. - Each element in the list should be either 'bu' (bottom-up) or - 'td' (top-down). - inter_channels (int): Number of inter channels. - same_up_trans (dict): Transition that goes down at the same stage. - same_down_trans (dict): Transition that goes up at the same stage. - across_lateral_trans (dict): Across-pathway same-stage - across_down_trans (dict): Across-pathway bottom-up connection. - across_up_trans (dict): Across-pathway top-down connection. - across_skip_trans (dict): Across-pathway skip connection. - output_trans (dict): Transition that trans the output of the - last stage. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - norm_cfg (dict): Config dict for normalization layer. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - transition_types = { - 'conv': ConvModule, - 'interpolation_conv': UpInterpolationConv, - 'last_conv': LastConv, - } - - def __init__(self, - in_channels, - out_channels, - num_outs, - stack_times, - paths, - inter_channels=None, - same_down_trans=None, - same_up_trans=dict( - type='conv', kernel_size=3, stride=2, padding=1), - across_lateral_trans=dict(type='conv', kernel_size=1), - across_down_trans=dict(type='conv', kernel_size=3), - across_up_trans=None, - across_skip_trans=dict(type='identity'), - output_trans=dict(type='last_conv', kernel_size=3), - start_level=0, - end_level=-1, - add_extra_convs=False, - norm_cfg=None, - skip_inds=None, - init_cfg=[ - dict(type='Caffe2Xavier', layer='Conv2d'), - dict( - type='Constant', - layer=[ - '_BatchNorm', '_InstanceNorm', 'GroupNorm', - 'LayerNorm' - ], - val=1.0) - ]): - super(FPG, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - if inter_channels is None: - self.inter_channels = [out_channels for _ in range(num_outs)] - elif isinstance(inter_channels, int): - self.inter_channels = [inter_channels for _ in range(num_outs)] - else: - assert isinstance(inter_channels, list) - assert len(inter_channels) == num_outs - self.inter_channels = inter_channels - self.stack_times = stack_times - self.paths = paths - assert isinstance(paths, list) and len(paths) == stack_times - for d in paths: - assert d in ('bu', 'td') - - self.same_down_trans = same_down_trans - self.same_up_trans = same_up_trans - self.across_lateral_trans = across_lateral_trans - self.across_down_trans = across_down_trans - self.across_up_trans = across_up_trans - self.output_trans = output_trans - self.across_skip_trans = across_skip_trans - - self.with_bias = norm_cfg is None - # skip inds must be specified if across skip trans is not None - if self.across_skip_trans is not None: - skip_inds is not None - self.skip_inds = skip_inds - assert len(self.skip_inds[0]) <= self.stack_times - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - # build lateral 1x1 convs to reduce channels - self.lateral_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - l_conv = nn.Conv2d(self.in_channels[i], - self.inter_channels[i - self.start_level], 1) - self.lateral_convs.append(l_conv) - - extra_levels = num_outs - self.backbone_end_level + self.start_level - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - if self.add_extra_convs: - fpn_idx = self.backbone_end_level - self.start_level + i - extra_conv = nn.Conv2d( - self.inter_channels[fpn_idx - 1], - self.inter_channels[fpn_idx], - 3, - stride=2, - padding=1) - self.extra_downsamples.append(extra_conv) - else: - self.extra_downsamples.append(nn.MaxPool2d(1, stride=2)) - - self.fpn_transitions = nn.ModuleList() # stack times - for s in range(self.stack_times): - stage_trans = nn.ModuleList() # num of feature levels - for i in range(self.num_outs): - # same, across_lateral, across_down, across_up - trans = nn.ModuleDict() - if s in 
self.skip_inds[i]: - stage_trans.append(trans) - continue - # build same-stage down trans (used in bottom-up paths) - if i == 0 or self.same_up_trans is None: - same_up_trans = None - else: - same_up_trans = self.build_trans( - self.same_up_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['same_up'] = same_up_trans - # build same-stage up trans (used in top-down paths) - if i == self.num_outs - 1 or self.same_down_trans is None: - same_down_trans = None - else: - same_down_trans = self.build_trans( - self.same_down_trans, self.inter_channels[i + 1], - self.inter_channels[i]) - trans['same_down'] = same_down_trans - # build across lateral trans - across_lateral_trans = self.build_trans( - self.across_lateral_trans, self.inter_channels[i], - self.inter_channels[i]) - trans['across_lateral'] = across_lateral_trans - # build across down trans - if i == self.num_outs - 1 or self.across_down_trans is None: - across_down_trans = None - else: - across_down_trans = self.build_trans( - self.across_down_trans, self.inter_channels[i + 1], - self.inter_channels[i]) - trans['across_down'] = across_down_trans - # build across up trans - if i == 0 or self.across_up_trans is None: - across_up_trans = None - else: - across_up_trans = self.build_trans( - self.across_up_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['across_up'] = across_up_trans - if self.across_skip_trans is None: - across_skip_trans = None - else: - across_skip_trans = self.build_trans( - self.across_skip_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['across_skip'] = across_skip_trans - # build across_skip trans - stage_trans.append(trans) - self.fpn_transitions.append(stage_trans) - - self.output_transition = nn.ModuleList() # output levels - for i in range(self.num_outs): - trans = self.build_trans( - self.output_trans, - self.inter_channels[i], - self.out_channels, - num_inputs=self.stack_times + 1) - self.output_transition.append(trans) - - self.relu = nn.ReLU(inplace=True) - - def build_trans(self, cfg, in_channels, out_channels, **extra_args): - cfg_ = cfg.copy() - trans_type = cfg_.pop('type') - trans_cls = self.transition_types[trans_type] - return trans_cls(in_channels, out_channels, **cfg_, **extra_args) - - def fuse(self, fuse_dict): - out = None - for item in fuse_dict.values(): - if item is not None: - if out is None: - out = item - else: - out = out + item - return out - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # build all levels from original feature maps - feats = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - for downsample in self.extra_downsamples: - feats.append(downsample(feats[-1])) - - outs = [feats] - - for i in range(self.stack_times): - current_outs = outs[-1] - next_outs = [] - direction = self.paths[i] - for j in range(self.num_outs): - if i in self.skip_inds[j]: - next_outs.append(outs[-1][j]) - continue - # feature level - if direction == 'td': - lvl = self.num_outs - j - 1 - else: - lvl = j - # get transitions - if direction == 'td': - same_trans = self.fpn_transitions[i][lvl]['same_down'] - else: - same_trans = self.fpn_transitions[i][lvl]['same_up'] - across_lateral_trans = self.fpn_transitions[i][lvl][ - 'across_lateral'] - across_down_trans = self.fpn_transitions[i][lvl]['across_down'] - across_up_trans = self.fpn_transitions[i][lvl]['across_up'] - across_skip_trans = self.fpn_transitions[i][lvl]['across_skip'] - # init output - to_fuse = dict( 
- same=None, lateral=None, across_up=None, across_down=None) - # same downsample/upsample - if same_trans is not None: - to_fuse['same'] = same_trans(next_outs[-1]) - # across lateral - if across_lateral_trans is not None: - to_fuse['lateral'] = across_lateral_trans( - current_outs[lvl]) - # across downsample - if lvl > 0 and across_up_trans is not None: - to_fuse['across_up'] = across_up_trans(current_outs[lvl - - 1]) - # across upsample - if (lvl < self.num_outs - 1 and across_down_trans is not None): - to_fuse['across_down'] = across_down_trans( - current_outs[lvl + 1]) - if across_skip_trans is not None: - to_fuse['across_skip'] = across_skip_trans(outs[0][lvl]) - x = self.fuse(to_fuse) - next_outs.append(x) - - if direction == 'td': - outs.append(next_outs[::-1]) - else: - outs.append(next_outs) - - # output trans - final_outs = [] - for i in range(self.num_outs): - lvl_out_list = [] - for s in range(len(outs)): - lvl_out_list.append(outs[s][i]) - lvl_out = self.output_transition[i](lvl_out_list) - final_outs.append(lvl_out) - - return final_outs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/fpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/fpn.py deleted file mode 100644 index 4bdb5b22..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/fpn.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16 - -from ..builder import NECKS - - -@NECKS.register_module() -class FPN(BaseModule): - r"""Feature Pyramid Network. - - This is an implementation of paper `Feature Pyramid Networks for Object - Detection `_. - - Args: - in_channels (list[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool | str): If bool, it decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, it is equivalent to `add_extra_convs='on_input'`. - If str, it specifies the source feature map of the extra convs. - Only the following options are allowed - - - 'on_input': Last feat map of neck inputs (i.e. backbone feature). - - 'on_lateral': Last feature map after lateral convs. - - 'on_output': The last output feature map after fpn convs. - relu_before_extra_convs (bool): Whether to apply relu before the extra - conv. Default: False. - no_norm_on_lateral (bool): Whether to apply norm on lateral. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer in ConvModule. - Default: None. - upsample_cfg (dict): Config dict for interpolate layer. - Default: dict(mode='nearest'). - init_cfg (dict or list[dict], optional): Initialization config dict. - - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... 
for c, s in zip(in_channels, scales)] - >>> self = FPN(in_channels, 11, len(in_channels)).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - add_extra_convs=False, - relu_before_extra_convs=False, - no_norm_on_lateral=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - upsample_cfg=dict(mode='nearest'), - init_cfg=dict( - type='Xavier', layer='Conv2d', distribution='uniform')): - super(FPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.relu_before_extra_convs = relu_before_extra_convs - self.no_norm_on_lateral = no_norm_on_lateral - self.fp16_enabled = False - self.upsample_cfg = upsample_cfg.copy() - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - assert isinstance(add_extra_convs, (str, bool)) - if isinstance(add_extra_convs, str): - # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output' - assert add_extra_convs in ('on_input', 'on_lateral', 'on_output') - elif add_extra_convs: # True - self.add_extra_convs = 'on_input' - - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg if not self.no_norm_on_lateral else None, - act_cfg=act_cfg, - inplace=False) - fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - # add extra conv layers (e.g., RetinaNet) - extra_levels = num_outs - self.backbone_end_level + self.start_level - if self.add_extra_convs and extra_levels >= 1: - for i in range(extra_levels): - if i == 0 and self.add_extra_convs == 'on_input': - in_channels = self.in_channels[self.backbone_end_level - 1] - else: - in_channels = out_channels - extra_fpn_conv = ConvModule( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - self.fpn_convs.append(extra_fpn_conv) - - @auto_fp16() - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - # In some cases, fixing `scale factor` (e.g. 2) is preferred, but - # it cannot co-exist with `size` in `F.interpolate`. 
- if 'scale_factor' in self.upsample_cfg: - # fix runtime error of "+=" inplace operation in PyTorch 1.10 - laterals[i - 1] = laterals[i - 1] + F.interpolate( - laterals[i], **self.upsample_cfg) - else: - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] = laterals[i - 1] + F.interpolate( - laterals[i], size=prev_shape, **self.upsample_cfg) - - # build outputs - # part 1: from original levels - outs = [ - self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels) - ] - # part 2: add extra levels - if self.num_outs > len(outs): - # use max pool to get more levels on top of outputs - # (e.g., Faster R-CNN, Mask R-CNN) - if not self.add_extra_convs: - for i in range(self.num_outs - used_backbone_levels): - outs.append(F.max_pool2d(outs[-1], 1, stride=2)) - # add conv layers on top of original feature maps (RetinaNet) - else: - if self.add_extra_convs == 'on_input': - extra_source = inputs[self.backbone_end_level - 1] - elif self.add_extra_convs == 'on_lateral': - extra_source = laterals[-1] - elif self.add_extra_convs == 'on_output': - extra_source = outs[-1] - else: - raise NotImplementedError - outs.append(self.fpn_convs[used_backbone_levels](extra_source)) - for i in range(used_backbone_levels + 1, self.num_outs): - if self.relu_before_extra_convs: - outs.append(self.fpn_convs[i](F.relu(outs[-1]))) - else: - outs.append(self.fpn_convs[i](outs[-1])) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/fpn_carafe.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/fpn_carafe.py deleted file mode 100644 index fdd91f34..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/fpn_carafe.py +++ /dev/null @@ -1,275 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, build_upsample_layer, xavier_init -from mmcv.ops.carafe import CARAFEPack -from mmcv.runner import BaseModule, ModuleList - -from ..builder import NECKS - - -@NECKS.register_module() -class FPN_CARAFE(BaseModule): - """FPN_CARAFE is a more flexible implementation of FPN. It allows more - choice for upsample methods during the top-down pathway. - - It can reproduce the performance of ICCV 2019 paper - CARAFE: Content-Aware ReAssembly of FEatures - Please refer to https://arxiv.org/abs/1905.02188 for more details. - - Args: - in_channels (list[int]): Number of channels for each input feature map. - out_channels (int): Output channels of feature pyramids. - num_outs (int): Number of output stages. - start_level (int): Start level of feature pyramids. - (Default: 0) - end_level (int): End level of feature pyramids. - (Default: -1 indicates the last level). - norm_cfg (dict): Dictionary to construct and config norm layer. - activate (str): Type of activation function in ConvModule - (Default: None indicates w/o activation). - order (dict): Order of components in ConvModule. - upsample (str): Type of upsample layer. - upsample_cfg (dict): Dictionary to construct and config upsample layer. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - norm_cfg=None, - act_cfg=None, - order=('conv', 'norm', 'act'), - upsample_cfg=dict( - type='carafe', - up_kernel=5, - up_group=1, - encoder_kernel=3, - encoder_dilation=1), - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(FPN_CARAFE, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.with_bias = norm_cfg is None - self.upsample_cfg = upsample_cfg.copy() - self.upsample = self.upsample_cfg.get('type') - self.relu = nn.ReLU(inplace=False) - - self.order = order - assert order in [('conv', 'norm', 'act'), ('act', 'conv', 'norm')] - - assert self.upsample in [ - 'nearest', 'bilinear', 'deconv', 'pixel_shuffle', 'carafe', None - ] - if self.upsample in ['deconv', 'pixel_shuffle']: - assert hasattr( - self.upsample_cfg, - 'upsample_kernel') and self.upsample_cfg.upsample_kernel > 0 - self.upsample_kernel = self.upsample_cfg.pop('upsample_kernel') - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - - self.lateral_convs = ModuleList() - self.fpn_convs = ModuleList() - self.upsample_modules = ModuleList() - - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - norm_cfg=norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - if i != self.backbone_end_level - 1: - upsample_cfg_ = self.upsample_cfg.copy() - if self.upsample == 'deconv': - upsample_cfg_.update( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=self.upsample_kernel, - stride=2, - padding=(self.upsample_kernel - 1) // 2, - output_padding=(self.upsample_kernel - 1) // 2) - elif self.upsample == 'pixel_shuffle': - upsample_cfg_.update( - in_channels=out_channels, - out_channels=out_channels, - scale_factor=2, - upsample_kernel=self.upsample_kernel) - elif self.upsample == 'carafe': - upsample_cfg_.update(channels=out_channels, scale_factor=2) - else: - # suppress warnings - align_corners = (None - if self.upsample == 'nearest' else False) - upsample_cfg_.update( - scale_factor=2, - mode=self.upsample, - align_corners=align_corners) - upsample_module = build_upsample_layer(upsample_cfg_) - self.upsample_modules.append(upsample_module) - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - # add extra conv layers (e.g., RetinaNet) - extra_out_levels = ( - num_outs - self.backbone_end_level + self.start_level) - if extra_out_levels >= 1: - for i in range(extra_out_levels): - in_channels = ( - self.in_channels[self.backbone_end_level - - 1] if i == 0 else out_channels) - extra_l_conv = ConvModule( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - 
norm_cfg=norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - if self.upsample == 'deconv': - upsampler_cfg_ = dict( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=self.upsample_kernel, - stride=2, - padding=(self.upsample_kernel - 1) // 2, - output_padding=(self.upsample_kernel - 1) // 2) - elif self.upsample == 'pixel_shuffle': - upsampler_cfg_ = dict( - in_channels=out_channels, - out_channels=out_channels, - scale_factor=2, - upsample_kernel=self.upsample_kernel) - elif self.upsample == 'carafe': - upsampler_cfg_ = dict( - channels=out_channels, - scale_factor=2, - **self.upsample_cfg) - else: - # suppress warnings - align_corners = (None - if self.upsample == 'nearest' else False) - upsampler_cfg_ = dict( - scale_factor=2, - mode=self.upsample, - align_corners=align_corners) - upsampler_cfg_['type'] = self.upsample - upsample_module = build_upsample_layer(upsampler_cfg_) - extra_fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - bias=self.with_bias, - act_cfg=act_cfg, - inplace=False, - order=self.order) - self.upsample_modules.append(upsample_module) - self.fpn_convs.append(extra_fpn_conv) - self.lateral_convs.append(extra_l_conv) - - # default init_weights for conv(msra) and norm in ConvModule - def init_weights(self): - """Initialize the weights of module.""" - super(FPN_CARAFE, self).init_weights() - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - xavier_init(m, distribution='uniform') - for m in self.modules(): - if isinstance(m, CARAFEPack): - m.init_weights() - - def slice_as(self, src, dst): - """Slice ``src`` as ``dst`` - - Note: - ``src`` should have the same or larger size than ``dst``. - - Args: - src (torch.Tensor): Tensors to be sliced. - dst (torch.Tensor): ``src`` will be sliced to have the same - size as ``dst``. - - Returns: - torch.Tensor: Sliced tensor. 
- """ - assert (src.size(2) >= dst.size(2)) and (src.size(3) >= dst.size(3)) - if src.size(2) == dst.size(2) and src.size(3) == dst.size(3): - return src - else: - return src[:, :, :dst.size(2), :dst.size(3)] - - def tensor_add(self, a, b): - """Add tensors ``a`` and ``b`` that might have different sizes.""" - if a.size() == b.size(): - c = a + b - else: - c = a + self.slice_as(b, a) - return c - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [] - for i, lateral_conv in enumerate(self.lateral_convs): - if i <= self.backbone_end_level - self.start_level: - input = inputs[min(i + self.start_level, len(inputs) - 1)] - else: - input = laterals[-1] - lateral = lateral_conv(input) - laterals.append(lateral) - - # build top-down path - for i in range(len(laterals) - 1, 0, -1): - if self.upsample is not None: - upsample_feat = self.upsample_modules[i - 1](laterals[i]) - else: - upsample_feat = laterals[i] - laterals[i - 1] = self.tensor_add(laterals[i - 1], upsample_feat) - - # build outputs - num_conv_outs = len(self.fpn_convs) - outs = [] - for i in range(num_conv_outs): - out = self.fpn_convs[i](laterals[i]) - outs.append(out) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/hrfpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/hrfpn.py deleted file mode 100644 index ca15be6b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/hrfpn.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch.utils.checkpoint import checkpoint - -from ..builder import NECKS - - -@NECKS.register_module() -class HRFPN(BaseModule): - """HRFPN (High Resolution Feature Pyramids) - - paper: `High-Resolution Representations for Labeling Pixels and Regions - `_. - - Args: - in_channels (list): number of channels for each branch. - out_channels (int): output channels of feature pyramids. - num_outs (int): number of output stages. - pooling_type (str): pooling for generating feature pyramids - from {MAX, AVG}. - conv_cfg (dict): dictionary to construct and config conv layer. - norm_cfg (dict): dictionary to construct and config norm layer. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - stride (int): stride of 3x3 convolutional layers - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_outs=5, - pooling_type='AVG', - conv_cfg=None, - norm_cfg=None, - with_cp=False, - stride=1, - init_cfg=dict(type='Caffe2Xavier', layer='Conv2d')): - super(HRFPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - self.reduction_conv = ConvModule( - sum(in_channels), - out_channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - act_cfg=None) - - self.fpn_convs = nn.ModuleList() - for i in range(self.num_outs): - self.fpn_convs.append( - ConvModule( - out_channels, - out_channels, - kernel_size=3, - padding=1, - stride=stride, - conv_cfg=self.conv_cfg, - act_cfg=None)) - - if pooling_type == 'MAX': - self.pooling = F.max_pool2d - else: - self.pooling = F.avg_pool2d - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == self.num_ins - outs = [inputs[0]] - for i in range(1, self.num_ins): - outs.append( - F.interpolate(inputs[i], scale_factor=2**i, mode='bilinear')) - out = torch.cat(outs, dim=1) - if out.requires_grad and self.with_cp: - out = checkpoint(self.reduction_conv, out) - else: - out = self.reduction_conv(out) - outs = [out] - for i in range(1, self.num_outs): - outs.append(self.pooling(out, kernel_size=2**i, stride=2**i)) - outputs = [] - - for i in range(self.num_outs): - if outs[i].requires_grad and self.with_cp: - tmp_out = checkpoint(self.fpn_convs[i], outs[i]) - else: - tmp_out = self.fpn_convs[i](outs[i]) - outputs.append(tmp_out) - return tuple(outputs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/nas_fpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/nas_fpn.py deleted file mode 100644 index 710592ec..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/nas_fpn.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops.merge_cells import GlobalPoolingCell, SumCell -from mmcv.runner import BaseModule, ModuleList - -from ..builder import NECKS - - -@NECKS.register_module() -class NASFPN(BaseModule): - """NAS-FPN. - - Implementation of `NAS-FPN: Learning Scalable Feature Pyramid Architecture - for Object Detection `_ - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - stack_times (int): The number of times the pyramid architecture will - be stacked. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - stack_times, - start_level=0, - end_level=-1, - add_extra_convs=False, - norm_cfg=None, - init_cfg=dict(type='Caffe2Xavier', layer='Conv2d')): - super(NASFPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) # num of input feature levels - self.num_outs = num_outs # num of output feature levels - self.stack_times = stack_times - self.norm_cfg = norm_cfg - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - # add lateral connections - self.lateral_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - norm_cfg=norm_cfg, - act_cfg=None) - self.lateral_convs.append(l_conv) - - # add extra downsample layers (stride-2 pooling or conv) - extra_levels = num_outs - self.backbone_end_level + self.start_level - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - extra_conv = ConvModule( - out_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=None) - self.extra_downsamples.append( - nn.Sequential(extra_conv, nn.MaxPool2d(2, 2))) - - # add NAS FPN connections - self.fpn_stages = ModuleList() - for _ in range(self.stack_times): - stage = nn.ModuleDict() - # gp(p6, p4) -> p4_1 - stage['gp_64_4'] = GlobalPoolingCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p4_1, p4) -> p4_2 - stage['sum_44_4'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p4_2, p3) -> p3_out - stage['sum_43_3'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p3_out, p4_2) -> p4_out - stage['sum_34_4'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p5, gp(p4_out, p3_out)) -> p5_out - stage['gp_43_5'] = GlobalPoolingCell(with_out_conv=False) - stage['sum_55_5'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p7, gp(p5_out, p4_2)) -> p7_out - stage['gp_54_7'] = GlobalPoolingCell(with_out_conv=False) - stage['sum_77_7'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # gp(p7_out, p5_out) -> p6_out - stage['gp_75_6'] = GlobalPoolingCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - self.fpn_stages.append(stage) - - def forward(self, inputs): - """Forward function.""" - # build P3-P5 - feats = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - # build P6-P7 on top of P5 - for downsample in self.extra_downsamples: - feats.append(downsample(feats[-1])) - - p3, p4, p5, p6, p7 = feats - - for stage in self.fpn_stages: - # gp(p6, p4) -> p4_1 - p4_1 = stage['gp_64_4'](p6, p4, out_size=p4.shape[-2:]) - # sum(p4_1, p4) -> p4_2 - p4_2 = stage['sum_44_4'](p4_1, p4, out_size=p4.shape[-2:]) - # sum(p4_2, p3) -> p3_out - p3 = stage['sum_43_3'](p4_2, p3, 
out_size=p3.shape[-2:]) - # sum(p3_out, p4_2) -> p4_out - p4 = stage['sum_34_4'](p3, p4_2, out_size=p4.shape[-2:]) - # sum(p5, gp(p4_out, p3_out)) -> p5_out - p5_tmp = stage['gp_43_5'](p4, p3, out_size=p5.shape[-2:]) - p5 = stage['sum_55_5'](p5, p5_tmp, out_size=p5.shape[-2:]) - # sum(p7, gp(p5_out, p4_2)) -> p7_out - p7_tmp = stage['gp_54_7'](p5, p4_2, out_size=p7.shape[-2:]) - p7 = stage['sum_77_7'](p7, p7_tmp, out_size=p7.shape[-2:]) - # gp(p7_out, p5_out) -> p6_out - p6 = stage['gp_75_6'](p7, p5, out_size=p6.shape[-2:]) - - return p3, p4, p5, p6, p7 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/nasfcos_fpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/nasfcos_fpn.py deleted file mode 100644 index c4abfe7b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/nasfcos_fpn.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, caffe2_xavier_init -from mmcv.ops.merge_cells import ConcatCell -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -@NECKS.register_module() -class NASFCOS_FPN(BaseModule): - """FPN structure in NASFPN. - - Implementation of paper `NAS-FCOS: Fast Neural Architecture Search for - Object Detection `_ - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - conv_cfg (dict): dictionary to construct and config conv layer. - norm_cfg (dict): dictionary to construct and config norm layer. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=1, - end_level=-1, - add_extra_convs=False, - conv_cfg=None, - norm_cfg=None, - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(NASFCOS_FPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - self.adapt_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - adapt_conv = ConvModule( - in_channels[i], - out_channels, - 1, - stride=1, - padding=0, - bias=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU', inplace=False)) - self.adapt_convs.append(adapt_conv) - - # C2 is omitted according to the paper - extra_levels = num_outs - self.backbone_end_level + self.start_level - - def build_concat_cell(with_input1_conv, with_input2_conv): - cell_conv_cfg = dict( - kernel_size=1, padding=0, bias=False, groups=out_channels) - return ConcatCell( - in_channels=out_channels, - out_channels=out_channels, - with_out_conv=True, - out_conv_cfg=cell_conv_cfg, - out_norm_cfg=dict(type='BN'), - out_conv_order=('norm', 'act', 'conv'), - with_input1_conv=with_input1_conv, - with_input2_conv=with_input2_conv, - input_conv_cfg=conv_cfg, - input_norm_cfg=norm_cfg, - upsample_mode='nearest') - - # Denote c3=f0, c4=f1, c5=f2 for convince - self.fpn = nn.ModuleDict() - self.fpn['c22_1'] = build_concat_cell(True, True) - self.fpn['c22_2'] = build_concat_cell(True, True) - self.fpn['c32'] = build_concat_cell(True, False) - self.fpn['c02'] = build_concat_cell(True, False) - self.fpn['c42'] = build_concat_cell(True, True) - self.fpn['c36'] = build_concat_cell(True, True) - self.fpn['c61'] = build_concat_cell(True, True) # f9 - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - extra_act_cfg = None if i == 0 \ - else dict(type='ReLU', inplace=False) - self.extra_downsamples.append( - ConvModule( - out_channels, - out_channels, - 3, - stride=2, - padding=1, - act_cfg=extra_act_cfg, - order=('act', 'norm', 'conv'))) - - def forward(self, inputs): - """Forward function.""" - feats = [ - adapt_conv(inputs[i + self.start_level]) - for i, adapt_conv in enumerate(self.adapt_convs) - ] - - for (i, module_name) in enumerate(self.fpn): - idx_1, idx_2 = int(module_name[1]), int(module_name[2]) - res = self.fpn[module_name](feats[idx_1], feats[idx_2]) - feats.append(res) - - ret = [] - for (idx, input_idx) in zip([9, 8, 7], [1, 2, 3]): # add P3, P4, P5 - feats1, feats2 = feats[idx], feats[5] - feats2_resize = F.interpolate( - feats2, - size=feats1.size()[2:], - mode='bilinear', - align_corners=False) - - feats_sum = feats1 + feats2_resize - ret.append( - F.interpolate( - feats_sum, - size=inputs[input_idx].size()[2:], - mode='bilinear', - align_corners=False)) - - for submodule in self.extra_downsamples: - ret.append(submodule(ret[-1])) - - 
return tuple(ret) - - def init_weights(self): - """Initialize the weights of module.""" - super(NASFCOS_FPN, self).init_weights() - for module in self.fpn.values(): - if hasattr(module, 'conv_out'): - caffe2_xavier_init(module.out_conv.conv) - - for modules in [ - self.adapt_convs.modules(), - self.extra_downsamples.modules() - ]: - for module in modules: - if isinstance(module, nn.Conv2d): - caffe2_xavier_init(module) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/pafpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/pafpn.py deleted file mode 100644 index 8d5e32f0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/pafpn.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import auto_fp16 - -from ..builder import NECKS -from .fpn import FPN - - -@NECKS.register_module() -class PAFPN(FPN): - """Path Aggregation Network for Instance Segmentation. - - This is an implementation of the `PAFPN in Path Aggregation Network - `_. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool | str): If bool, it decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, it is equivalent to `add_extra_convs='on_input'`. - If str, it specifies the source feature map of the extra convs. - Only the following options are allowed - - - 'on_input': Last feat map of neck inputs (i.e. backbone feature). - - 'on_lateral': Last feature map after lateral convs. - - 'on_output': The last output feature map after fpn convs. - relu_before_extra_convs (bool): Whether to apply relu before the extra - conv. Default: False. - no_norm_on_lateral (bool): Whether to apply norm on lateral. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (str): Config dict for activation layer in ConvModule. - Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - add_extra_convs=False, - relu_before_extra_convs=False, - no_norm_on_lateral=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - init_cfg=dict( - type='Xavier', layer='Conv2d', distribution='uniform')): - super(PAFPN, self).__init__( - in_channels, - out_channels, - num_outs, - start_level, - end_level, - add_extra_convs, - relu_before_extra_convs, - no_norm_on_lateral, - conv_cfg, - norm_cfg, - act_cfg, - init_cfg=init_cfg) - # add extra bottom up pathway - self.downsample_convs = nn.ModuleList() - self.pafpn_convs = nn.ModuleList() - for i in range(self.start_level + 1, self.backbone_end_level): - d_conv = ConvModule( - out_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - pafpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - self.downsample_convs.append(d_conv) - self.pafpn_convs.append(pafpn_conv) - - @auto_fp16() - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] += F.interpolate( - laterals[i], size=prev_shape, mode='nearest') - - # build outputs - # part 1: from original levels - inter_outs = [ - self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels) - ] - - # part 2: add bottom-up path - for i in range(0, used_backbone_levels - 1): - inter_outs[i + 1] += self.downsample_convs[i](inter_outs[i]) - - outs = [] - outs.append(inter_outs[0]) - outs.extend([ - self.pafpn_convs[i - 1](inter_outs[i]) - for i in range(1, used_backbone_levels) - ]) - - # part 3: add extra levels - if self.num_outs > len(outs): - # use max pool to get more levels on top of outputs - # (e.g., Faster R-CNN, Mask R-CNN) - if not self.add_extra_convs: - for i in range(self.num_outs - used_backbone_levels): - outs.append(F.max_pool2d(outs[-1], 1, stride=2)) - # add conv layers on top of original feature maps (RetinaNet) - else: - if self.add_extra_convs == 'on_input': - orig = inputs[self.backbone_end_level - 1] - outs.append(self.fpn_convs[used_backbone_levels](orig)) - elif self.add_extra_convs == 'on_lateral': - outs.append(self.fpn_convs[used_backbone_levels]( - laterals[-1])) - elif self.add_extra_convs == 'on_output': - outs.append(self.fpn_convs[used_backbone_levels](outs[-1])) - else: - raise NotImplementedError - for i in range(used_backbone_levels + 1, self.num_outs): - if self.relu_before_extra_convs: - outs.append(self.fpn_convs[i](F.relu(outs[-1]))) - else: - outs.append(self.fpn_convs[i](outs[-1])) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/rfp.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/rfp.py deleted file mode 100644 index 6976f4da..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/rfp.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import constant_init, xavier_init -from mmcv.runner import BaseModule, ModuleList - -from ..builder import NECKS, build_backbone -from .fpn import FPN - - -class ASPP(BaseModule): - """ASPP (Atrous Spatial Pyramid Pooling) - - This is an implementation of the ASPP module used in DetectoRS - (https://arxiv.org/pdf/2006.02334.pdf) - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of channels produced by this module - dilations (tuple[int]): Dilations of the four branches. - Default: (1, 3, 6, 1) - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - dilations=(1, 3, 6, 1), - init_cfg=dict(type='Kaiming', layer='Conv2d')): - super().__init__(init_cfg) - assert dilations[-1] == 1 - self.aspp = nn.ModuleList() - for dilation in dilations: - kernel_size = 3 if dilation > 1 else 1 - padding = dilation if dilation > 1 else 0 - conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=1, - dilation=dilation, - padding=padding, - bias=True) - self.aspp.append(conv) - self.gap = nn.AdaptiveAvgPool2d(1) - - def forward(self, x): - avg_x = self.gap(x) - out = [] - for aspp_idx in range(len(self.aspp)): - inp = avg_x if (aspp_idx == len(self.aspp) - 1) else x - out.append(F.relu_(self.aspp[aspp_idx](inp))) - out[-1] = out[-1].expand_as(out[-2]) - out = torch.cat(out, dim=1) - return out - - -@NECKS.register_module() -class RFP(FPN): - """RFP (Recursive Feature Pyramid) - - This is an implementation of RFP in `DetectoRS - `_. Different from standard FPN, the - input of RFP should be multi level features along with origin input image - of backbone. - - Args: - rfp_steps (int): Number of unrolled steps of RFP. - rfp_backbone (dict): Configuration of the backbone for RFP. - aspp_out_channels (int): Number of output channels of ASPP module. - aspp_dilations (tuple[int]): Dilation rates of four branches. - Default: (1, 3, 6, 1) - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - rfp_steps, - rfp_backbone, - aspp_out_channels, - aspp_dilations=(1, 3, 6, 1), - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super().__init__(init_cfg=init_cfg, **kwargs) - self.rfp_steps = rfp_steps - # Be careful! Pretrained weights cannot be loaded when use - # nn.ModuleList - self.rfp_modules = ModuleList() - for rfp_idx in range(1, rfp_steps): - rfp_module = build_backbone(rfp_backbone) - self.rfp_modules.append(rfp_module) - self.rfp_aspp = ASPP(self.out_channels, aspp_out_channels, - aspp_dilations) - self.rfp_weight = nn.Conv2d( - self.out_channels, - 1, - kernel_size=1, - stride=1, - padding=0, - bias=True) - - def init_weights(self): - # Avoid using super().init_weights(), which may alter the default - # initialization of the modules in self.rfp_modules that have missing - # keys in the pretrained checkpoint. 
- for convs in [self.lateral_convs, self.fpn_convs]: - for m in convs.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - for rfp_idx in range(self.rfp_steps - 1): - self.rfp_modules[rfp_idx].init_weights() - constant_init(self.rfp_weight, 0) - - def forward(self, inputs): - inputs = list(inputs) - assert len(inputs) == len(self.in_channels) + 1 # +1 for input image - img = inputs.pop(0) - # FPN forward - x = super().forward(tuple(inputs)) - for rfp_idx in range(self.rfp_steps - 1): - rfp_feats = [x[0]] + list( - self.rfp_aspp(x[i]) for i in range(1, len(x))) - x_idx = self.rfp_modules[rfp_idx].rfp_forward(img, rfp_feats) - # FPN forward - x_idx = super().forward(x_idx) - x_new = [] - for ft_idx in range(len(x_idx)): - add_weight = torch.sigmoid(self.rfp_weight(x_idx[ft_idx])) - x_new.append(add_weight * x_idx[ft_idx] + - (1 - add_weight) * x[ft_idx]) - x = x_new - return x diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/ssd_neck.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/ssd_neck.py deleted file mode 100644 index 179d575e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/ssd_neck.py +++ /dev/null @@ -1,129 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -@NECKS.register_module() -class SSDNeck(BaseModule): - """Extra layers of SSD backbone to generate multi-scale feature maps. - - Args: - in_channels (Sequence[int]): Number of input channels per scale. - out_channels (Sequence[int]): Number of output channels per scale. - level_strides (Sequence[int]): Stride of 3x3 conv per level. - level_paddings (Sequence[int]): Padding size of 3x3 conv per level. - l2_norm_scale (float|None): L2 normalization layer init scale. - If None, not use L2 normalization on the first input feature. - last_kernel_size (int): Kernel size of the last conv layer. - Default: 3. - use_depthwise (bool): Whether to use DepthwiseSeparableConv. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: None. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - out_channels, - level_strides, - level_paddings, - l2_norm_scale=20., - last_kernel_size=3, - use_depthwise=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - init_cfg=[ - dict( - type='Xavier', distribution='uniform', - layer='Conv2d'), - dict(type='Constant', val=1, layer='BatchNorm2d'), - ]): - super(SSDNeck, self).__init__(init_cfg) - assert len(out_channels) > len(in_channels) - assert len(out_channels) - len(in_channels) == len(level_strides) - assert len(level_strides) == len(level_paddings) - assert in_channels == out_channels[:len(in_channels)] - - if l2_norm_scale: - self.l2_norm = L2Norm(in_channels[0], l2_norm_scale) - self.init_cfg += [ - dict( - type='Constant', - val=self.l2_norm.scale, - override=dict(name='l2_norm')) - ] - - self.extra_layers = nn.ModuleList() - extra_layer_channels = out_channels[len(in_channels):] - second_conv = DepthwiseSeparableConvModule if \ - use_depthwise else ConvModule - - for i, (out_channel, stride, padding) in enumerate( - zip(extra_layer_channels, level_strides, level_paddings)): - kernel_size = last_kernel_size \ - if i == len(extra_layer_channels) - 1 else 3 - per_lvl_convs = nn.Sequential( - ConvModule( - out_channels[len(in_channels) - 1 + i], - out_channel // 2, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - second_conv( - out_channel // 2, - out_channel, - kernel_size, - stride=stride, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.extra_layers.append(per_lvl_convs) - - def forward(self, inputs): - """Forward function.""" - outs = [feat for feat in inputs] - if hasattr(self, 'l2_norm'): - outs[0] = self.l2_norm(outs[0]) - - feat = outs[-1] - for layer in self.extra_layers: - feat = layer(feat) - outs.append(feat) - return tuple(outs) - - -class L2Norm(nn.Module): - - def __init__(self, n_dims, scale=20., eps=1e-10): - """L2 normalization layer. - - Args: - n_dims (int): Number of dimensions to be normalized - scale (float, optional): Defaults to 20.. - eps (float, optional): Used to avoid division by zero. - Defaults to 1e-10. - """ - super(L2Norm, self).__init__() - self.n_dims = n_dims - self.weight = nn.Parameter(torch.Tensor(self.n_dims)) - self.eps = eps - self.scale = scale - - def forward(self, x): - """Forward function.""" - # normalization layer convert to FP32 in FP16 training - x_float = x.float() - norm = x_float.pow(2).sum(1, keepdim=True).sqrt() + self.eps - return (self.weight[None, :, None, None].float().expand_as(x_float) * - x_float / norm).type_as(x) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/yolo_neck.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/yolo_neck.py deleted file mode 100644 index c8eeb573..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/yolo_neck.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -import torch -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS - - -class DetectionBlock(BaseModule): - """Detection block in YOLO neck. - - Let out_channels = n, the DetectionBlock contains: - Six ConvLayers, 1 Conv2D Layer and 1 YoloLayer. - The first 6 ConvLayers are formed the following way: - 1x1xn, 3x3x2n, 1x1xn, 3x3x2n, 1x1xn, 3x3x2n. - The Conv2D layer is 1x1x255. 
- Some block will have branch after the fifth ConvLayer. - The input channel is arbitrary (in_channels) - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None): - super(DetectionBlock, self).__init__(init_cfg) - double_out_channels = out_channels * 2 - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - self.conv1 = ConvModule(in_channels, out_channels, 1, **cfg) - self.conv2 = ConvModule( - out_channels, double_out_channels, 3, padding=1, **cfg) - self.conv3 = ConvModule(double_out_channels, out_channels, 1, **cfg) - self.conv4 = ConvModule( - out_channels, double_out_channels, 3, padding=1, **cfg) - self.conv5 = ConvModule(double_out_channels, out_channels, 1, **cfg) - - def forward(self, x): - tmp = self.conv1(x) - tmp = self.conv2(tmp) - tmp = self.conv3(tmp) - tmp = self.conv4(tmp) - out = self.conv5(tmp) - return out - - -@NECKS.register_module() -class YOLOV3Neck(BaseModule): - """The neck of YOLOV3. - - It can be treated as a simplified version of FPN. It - will take the result from Darknet backbone and do some upsampling and - concatenation. It will finally output the detection result. - - Note: - The input feats should be from top to bottom. - i.e., from high-lvl to low-lvl - But YOLOV3Neck will process them in reversed order. - i.e., from bottom (high-lvl) to top (low-lvl) - - Args: - num_scales (int): The number of scales / stages. - in_channels (List[int]): The number of input channels per scale. - out_channels (List[int]): The number of output channels per scale. - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None. - norm_cfg (dict, optional): Dictionary to construct and config norm - layer. Default: dict(type='BN', requires_grad=True) - act_cfg (dict, optional): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - num_scales, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None): - super(YOLOV3Neck, self).__init__(init_cfg) - assert (num_scales == len(in_channels) == len(out_channels)) - self.num_scales = num_scales - self.in_channels = in_channels - self.out_channels = out_channels - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - # To support arbitrary scales, the code looks awful, but it works. - # Better solution is welcomed. 
- self.detect1 = DetectionBlock(in_channels[0], out_channels[0], **cfg) - for i in range(1, self.num_scales): - in_c, out_c = self.in_channels[i], self.out_channels[i] - inter_c = out_channels[i - 1] - self.add_module(f'conv{i}', ConvModule(inter_c, out_c, 1, **cfg)) - # in_c + out_c : High-lvl feats will be cat with low-lvl feats - self.add_module(f'detect{i+1}', - DetectionBlock(in_c + out_c, out_c, **cfg)) - - def forward(self, feats): - assert len(feats) == self.num_scales - - # processed from bottom (high-lvl) to top (low-lvl) - outs = [] - out = self.detect1(feats[-1]) - outs.append(out) - - for i, x in enumerate(reversed(feats[:-1])): - conv = getattr(self, f'conv{i+1}') - tmp = conv(out) - - # Cat with low-lvl feats - tmp = F.interpolate(tmp, scale_factor=2) - tmp = torch.cat((tmp, x), 1) - - detect = getattr(self, f'detect{i+2}') - out = detect(tmp) - outs.append(out) - - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/yolox_pafpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/yolox_pafpn.py deleted file mode 100644 index b0f6f706..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/necks/yolox_pafpn.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule - -from ..builder import NECKS -from ..utils import CSPLayer - - -@NECKS.register_module() -class YOLOXPAFPN(BaseModule): - """Path Aggregation Network used in YOLOX. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_csp_blocks (int): Number of bottlenecks in CSPLayer. Default: 3 - use_depthwise (bool): Whether to depthwise separable convolution in - blocks. Default: False - upsample_cfg (dict): Config dict for interpolate layer. - Default: `dict(scale_factor=2, mode='nearest')` - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN') - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish') - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_csp_blocks=3, - use_depthwise=False, - upsample_cfg=dict(scale_factor=2, mode='nearest'), - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - init_cfg=dict( - type='Kaiming', - layer='Conv2d', - a=math.sqrt(5), - distribution='uniform', - mode='fan_in', - nonlinearity='leaky_relu')): - super(YOLOXPAFPN, self).__init__(init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - - conv = DepthwiseSeparableConvModule if use_depthwise else ConvModule - - # build top-down blocks - self.upsample = nn.Upsample(**upsample_cfg) - self.reduce_layers = nn.ModuleList() - self.top_down_blocks = nn.ModuleList() - for idx in range(len(in_channels) - 1, 0, -1): - self.reduce_layers.append( - ConvModule( - in_channels[idx], - in_channels[idx - 1], - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.top_down_blocks.append( - CSPLayer( - in_channels[idx - 1] * 2, - in_channels[idx - 1], - num_blocks=num_csp_blocks, - add_identity=False, - use_depthwise=use_depthwise, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - # build bottom-up blocks - self.downsamples = nn.ModuleList() - self.bottom_up_blocks = nn.ModuleList() - for idx in range(len(in_channels) - 1): - self.downsamples.append( - conv( - in_channels[idx], - in_channels[idx], - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottom_up_blocks.append( - CSPLayer( - in_channels[idx] * 2, - in_channels[idx + 1], - num_blocks=num_csp_blocks, - add_identity=False, - use_depthwise=use_depthwise, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - self.out_convs = nn.ModuleList() - for i in range(len(in_channels)): - self.out_convs.append( - ConvModule( - in_channels[i], - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, inputs): - """ - Args: - inputs (tuple[Tensor]): input features. - - Returns: - tuple[Tensor]: YOLOXPAFPN features. - """ - assert len(inputs) == len(self.in_channels) - - # top-down path - inner_outs = [inputs[-1]] - for idx in range(len(self.in_channels) - 1, 0, -1): - feat_heigh = inner_outs[0] - feat_low = inputs[idx - 1] - feat_heigh = self.reduce_layers[len(self.in_channels) - 1 - idx]( - feat_heigh) - inner_outs[0] = feat_heigh - - upsample_feat = self.upsample(feat_heigh) - - inner_out = self.top_down_blocks[len(self.in_channels) - 1 - idx]( - torch.cat([upsample_feat, feat_low], 1)) - inner_outs.insert(0, inner_out) - - # bottom-up path - outs = [inner_outs[0]] - for idx in range(len(self.in_channels) - 1): - feat_low = outs[-1] - feat_height = inner_outs[idx + 1] - downsample_feat = self.downsamples[idx](feat_low) - out = self.bottom_up_blocks[idx]( - torch.cat([downsample_feat, feat_height], 1)) - outs.append(out) - - # out convs - for idx, conv in enumerate(self.out_convs): - outs[idx] = conv(outs[idx]) - - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/__init__.py deleted file mode 100644 index a455c07b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .dropblock import DropBlock -from .msdeformattn_pixel_decoder import MSDeformAttnPixelDecoder -from .pixel_decoder import PixelDecoder, TransformerEncoderPixelDecoder - -__all__ = [ - 'DropBlock', 'PixelDecoder', 'TransformerEncoderPixelDecoder', - 'MSDeformAttnPixelDecoder' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/dropblock.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/dropblock.py deleted file mode 100644 index bb00ade7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/dropblock.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import PLUGIN_LAYERS - -eps = 1e-6 - - -@PLUGIN_LAYERS.register_module() -class DropBlock(nn.Module): - """Randomly drop some regions of feature maps. - - Please refer to the method proposed in `DropBlock - `_ for details. - - Args: - drop_prob (float): The probability of dropping each block. - block_size (int): The size of dropped blocks. - warmup_iters (int): The drop probability will linearly increase - from `0` to `drop_prob` during the first `warmup_iters` iterations. - Default: 2000. - """ - - def __init__(self, drop_prob, block_size, warmup_iters=2000, **kwargs): - super(DropBlock, self).__init__() - assert block_size % 2 == 1 - assert 0 < drop_prob <= 1 - assert warmup_iters >= 0 - self.drop_prob = drop_prob - self.block_size = block_size - self.warmup_iters = warmup_iters - self.iter_cnt = 0 - - def forward(self, x): - """ - Args: - x (Tensor): Input feature map on which some areas will be randomly - dropped. - - Returns: - Tensor: The tensor after DropBlock layer. - """ - if not self.training: - return x - self.iter_cnt += 1 - N, C, H, W = list(x.shape) - gamma = self._compute_gamma((H, W)) - mask_shape = (N, C, H - self.block_size + 1, W - self.block_size + 1) - mask = torch.bernoulli(torch.full(mask_shape, gamma, device=x.device)) - - mask = F.pad(mask, [self.block_size // 2] * 4, value=0) - mask = F.max_pool2d( - input=mask, - stride=(1, 1), - kernel_size=(self.block_size, self.block_size), - padding=self.block_size // 2) - mask = 1 - mask - x = x * mask * mask.numel() / (eps + mask.sum()) - return x - - def _compute_gamma(self, feat_size): - """Compute the value of gamma according to paper. gamma is the - parameter of bernoulli distribution, which controls the number of - features to drop. - - gamma = (drop_prob * fm_area) / (drop_area * keep_area) - - Args: - feat_size (tuple[int, int]): The height and width of feature map. - - Returns: - float: The value of gamma. - """ - gamma = (self.drop_prob * feat_size[0] * feat_size[1]) - gamma /= ((feat_size[0] - self.block_size + 1) * - (feat_size[1] - self.block_size + 1)) - gamma /= (self.block_size**2) - factor = (1.0 if self.iter_cnt > self.warmup_iters else self.iter_cnt / - self.warmup_iters) - return gamma * factor - - def extra_repr(self): - return (f'drop_prob={self.drop_prob}, block_size={self.block_size}, ' - f'warmup_iters={self.warmup_iters}') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/msdeformattn_pixel_decoder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/msdeformattn_pixel_decoder.py deleted file mode 100644 index d553582b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/msdeformattn_pixel_decoder.py +++ /dev/null @@ -1,269 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (PLUGIN_LAYERS, Conv2d, ConvModule, caffe2_xavier_init, - normal_init, xavier_init) -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer_sequence) -from mmcv.runner import BaseModule, ModuleList - -from mmdet.core.anchor import MlvlPointGenerator -from mmdet.models.utils.transformer import MultiScaleDeformableAttention - - -@PLUGIN_LAYERS.register_module() -class MSDeformAttnPixelDecoder(BaseModule): - """Pixel decoder with multi-scale deformable attention. - - Args: - in_channels (list[int] | tuple[int]): Number of channels in the - input feature maps. - strides (list[int] | tuple[int]): Output strides of feature from - backbone. - feat_channels (int): Number of channels for feature. - out_channels (int): Number of channels for output. - num_outs (int): Number of output scales. - norm_cfg (:obj:`mmcv.ConfigDict` | dict): Config for normalization. - Defaults to dict(type='GN', num_groups=32). - act_cfg (:obj:`mmcv.ConfigDict` | dict): Config for activation. - Defaults to dict(type='ReLU'). - encoder (:obj:`mmcv.ConfigDict` | dict): Config for transformer - encoder. Defaults to `DetrTransformerEncoder`. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer encoder position encoding. Defaults to - dict(type='SinePositionalEncoding', num_feats=128, - normalize=True). - init_cfg (:obj:`mmcv.ConfigDict` | dict): Initialization config dict. - """ - - def __init__(self, - in_channels=[256, 512, 1024, 2048], - strides=[4, 8, 16, 32], - feat_channels=256, - out_channels=256, - num_outs=3, - norm_cfg=dict(type='GN', num_groups=32), - act_cfg=dict(type='ReLU'), - encoder=dict( - type='DetrTransformerEncoder', - num_layers=6, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=dict( - type='MultiScaleDeformableAttention', - embed_dims=256, - num_heads=8, - num_levels=3, - num_points=4, - im2col_step=64, - dropout=0.0, - batch_first=False, - norm_cfg=None, - init_cfg=None), - feedforward_channels=1024, - ffn_dropout=0.0, - operation_order=('self_attn', 'norm', 'ffn', 'norm')), - init_cfg=None), - positional_encoding=dict( - type='SinePositionalEncoding', - num_feats=128, - normalize=True), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.strides = strides - self.num_input_levels = len(in_channels) - self.num_encoder_levels = \ - encoder.transformerlayers.attn_cfgs.num_levels - assert self.num_encoder_levels >= 1, \ - 'num_levels in attn_cfgs must be at least one' - input_conv_list = [] - # from top to down (low to high resolution) - for i in range(self.num_input_levels - 1, - self.num_input_levels - self.num_encoder_levels - 1, - -1): - input_conv = ConvModule( - in_channels[i], - feat_channels, - kernel_size=1, - norm_cfg=norm_cfg, - act_cfg=None, - bias=True) - input_conv_list.append(input_conv) - self.input_convs = ModuleList(input_conv_list) - - self.encoder = build_transformer_layer_sequence(encoder) - self.postional_encoding = build_positional_encoding( - positional_encoding) - # high resolution to low resolution - self.level_encoding = nn.Embedding(self.num_encoder_levels, - feat_channels) - - # fpn-like structure - self.lateral_convs = ModuleList() - self.output_convs = ModuleList() - self.use_bias = norm_cfg is None - # from top to down (low to high resolution) - # fpn for the rest features that didn't pass in encoder - for i in range(self.num_input_levels - self.num_encoder_levels - 
1, -1, - -1): - lateral_conv = ConvModule( - in_channels[i], - feat_channels, - kernel_size=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=None) - output_conv = ConvModule( - feat_channels, - feat_channels, - kernel_size=3, - stride=1, - padding=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.lateral_convs.append(lateral_conv) - self.output_convs.append(output_conv) - - self.mask_feature = Conv2d( - feat_channels, out_channels, kernel_size=1, stride=1, padding=0) - - self.num_outs = num_outs - self.point_generator = MlvlPointGenerator(strides) - - def init_weights(self): - """Initialize weights.""" - for i in range(0, self.num_encoder_levels): - xavier_init( - self.input_convs[i].conv, - gain=1, - bias=0, - distribution='uniform') - - for i in range(0, self.num_input_levels - self.num_encoder_levels): - caffe2_xavier_init(self.lateral_convs[i].conv, bias=0) - caffe2_xavier_init(self.output_convs[i].conv, bias=0) - - caffe2_xavier_init(self.mask_feature, bias=0) - - normal_init(self.level_encoding, mean=0, std=1) - for p in self.encoder.parameters(): - if p.dim() > 1: - nn.init.xavier_normal_(p) - - # init_weights defined in MultiScaleDeformableAttention - for layer in self.encoder.layers: - for attn in layer.attentions: - if isinstance(attn, MultiScaleDeformableAttention): - attn.init_weights() - - def forward(self, feats): - """ - Args: - feats (list[Tensor]): Feature maps of each level. Each has - shape of (batch_size, c, h, w). - - Returns: - tuple: A tuple containing the following: - - - mask_feature (Tensor): shape (batch_size, c, h, w). - - multi_scale_features (list[Tensor]): Multi scale \ - features, each in shape (batch_size, c, h, w). - """ - # generate padding mask for each level, for each image - batch_size = feats[0].shape[0] - encoder_input_list = [] - padding_mask_list = [] - level_positional_encoding_list = [] - spatial_shapes = [] - reference_points_list = [] - for i in range(self.num_encoder_levels): - level_idx = self.num_input_levels - i - 1 - feat = feats[level_idx] - feat_projected = self.input_convs[i](feat) - h, w = feat.shape[-2:] - - # no padding - padding_mask_resized = feat.new_zeros( - (batch_size, ) + feat.shape[-2:], dtype=torch.bool) - pos_embed = self.postional_encoding(padding_mask_resized) - level_embed = self.level_encoding.weight[i] - level_pos_embed = level_embed.view(1, -1, 1, 1) + pos_embed - # (h_i * w_i, 2) - reference_points = self.point_generator.single_level_grid_priors( - feat.shape[-2:], level_idx, device=feat.device) - # normalize - factor = feat.new_tensor([[w, h]]) * self.strides[level_idx] - reference_points = reference_points / factor - - # shape (batch_size, c, h_i, w_i) -> (h_i * w_i, batch_size, c) - feat_projected = feat_projected.flatten(2).permute(2, 0, 1) - level_pos_embed = level_pos_embed.flatten(2).permute(2, 0, 1) - padding_mask_resized = padding_mask_resized.flatten(1) - - encoder_input_list.append(feat_projected) - padding_mask_list.append(padding_mask_resized) - level_positional_encoding_list.append(level_pos_embed) - spatial_shapes.append(feat.shape[-2:]) - reference_points_list.append(reference_points) - # shape (batch_size, total_num_query), - # total_num_query=sum([., h_i * w_i,.]) - padding_masks = torch.cat(padding_mask_list, dim=1) - # shape (total_num_query, batch_size, c) - encoder_inputs = torch.cat(encoder_input_list, dim=0) - level_positional_encodings = torch.cat( - level_positional_encoding_list, dim=0) - device = encoder_inputs.device - # shape (num_encoder_levels, 2), from low 
- # resolution to high resolution - spatial_shapes = torch.as_tensor( - spatial_shapes, dtype=torch.long, device=device) - # shape (0, h_0*w_0, h_0*w_0+h_1*w_1, ...) - level_start_index = torch.cat((spatial_shapes.new_zeros( - (1, )), spatial_shapes.prod(1).cumsum(0)[:-1])) - reference_points = torch.cat(reference_points_list, dim=0) - reference_points = reference_points[None, :, None].repeat( - batch_size, 1, self.num_encoder_levels, 1) - valid_radios = reference_points.new_ones( - (batch_size, self.num_encoder_levels, 2)) - # shape (num_total_query, batch_size, c) - memory = self.encoder( - query=encoder_inputs, - key=None, - value=None, - query_pos=level_positional_encodings, - key_pos=None, - attn_masks=None, - key_padding_mask=None, - query_key_padding_mask=padding_masks, - spatial_shapes=spatial_shapes, - reference_points=reference_points, - level_start_index=level_start_index, - valid_radios=valid_radios) - # (num_total_query, batch_size, c) -> (batch_size, c, num_total_query) - memory = memory.permute(1, 2, 0) - - # from low resolution to high resolution - num_query_per_level = [e[0] * e[1] for e in spatial_shapes] - outs = torch.split(memory, num_query_per_level, dim=-1) - outs = [ - x.reshape(batch_size, -1, spatial_shapes[i][0], - spatial_shapes[i][1]) for i, x in enumerate(outs) - ] - - for i in range(self.num_input_levels - self.num_encoder_levels - 1, -1, - -1): - x = feats[i] - cur_feat = self.lateral_convs[i](x) - y = cur_feat + F.interpolate( - outs[-1], - size=cur_feat.shape[-2:], - mode='bilinear', - align_corners=False) - y = self.output_convs[i](y) - outs.append(y) - multi_scale_features = outs[:self.num_outs] - - mask_feature = self.mask_feature(outs[-1]) - return mask_feature, multi_scale_features diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/pixel_decoder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/pixel_decoder.py deleted file mode 100644 index 537a187d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/plugins/pixel_decoder.py +++ /dev/null @@ -1,243 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import PLUGIN_LAYERS, Conv2d, ConvModule, caffe2_xavier_init -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer_sequence) -from mmcv.runner import BaseModule, ModuleList - - -@PLUGIN_LAYERS.register_module() -class PixelDecoder(BaseModule): - """Pixel decoder with a structure like fpn. - - Args: - in_channels (list[int] | tuple[int]): Number of channels in the - input feature maps. - feat_channels (int): Number channels for feature. - out_channels (int): Number channels for output. - norm_cfg (:obj:`mmcv.ConfigDict` | dict): Config for normalization. - Defaults to dict(type='GN', num_groups=32). - act_cfg (:obj:`mmcv.ConfigDict` | dict): Config for activation. - Defaults to dict(type='ReLU'). - encoder (:obj:`mmcv.ConfigDict` | dict): Config for transorformer - encoder.Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer encoder position encoding. Defaults to - dict(type='SinePositionalEncoding', num_feats=128, - normalize=True). - init_cfg (:obj:`mmcv.ConfigDict` | dict): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - norm_cfg=dict(type='GN', num_groups=32), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.num_inputs = len(in_channels) - self.lateral_convs = ModuleList() - self.output_convs = ModuleList() - self.use_bias = norm_cfg is None - for i in range(0, self.num_inputs - 1): - lateral_conv = ConvModule( - in_channels[i], - feat_channels, - kernel_size=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=None) - output_conv = ConvModule( - feat_channels, - feat_channels, - kernel_size=3, - stride=1, - padding=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.lateral_convs.append(lateral_conv) - self.output_convs.append(output_conv) - - self.last_feat_conv = ConvModule( - in_channels[-1], - feat_channels, - kernel_size=3, - padding=1, - stride=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.mask_feature = Conv2d( - feat_channels, out_channels, kernel_size=3, stride=1, padding=1) - - def init_weights(self): - """Initialize weights.""" - for i in range(0, self.num_inputs - 2): - caffe2_xavier_init(self.lateral_convs[i].conv, bias=0) - caffe2_xavier_init(self.output_convs[i].conv, bias=0) - - caffe2_xavier_init(self.mask_feature, bias=0) - caffe2_xavier_init(self.last_feat_conv, bias=0) - - def forward(self, feats, img_metas): - """ - Args: - feats (list[Tensor]): Feature maps of each level. Each has - shape of (batch_size, c, h, w). - img_metas (list[dict]): List of image information. Pass in - for creating more accurate padding mask. Not used here. - - Returns: - tuple: a tuple containing the following: - - mask_feature (Tensor): Shape (batch_size, c, h, w). - - memory (Tensor): Output of last stage of backbone.\ - Shape (batch_size, c, h, w). - """ - y = self.last_feat_conv(feats[-1]) - for i in range(self.num_inputs - 2, -1, -1): - x = feats[i] - cur_feat = self.lateral_convs[i](x) - y = cur_feat + \ - F.interpolate(y, size=cur_feat.shape[-2:], mode='nearest') - y = self.output_convs[i](y) - - mask_feature = self.mask_feature(y) - memory = feats[-1] - return mask_feature, memory - - -@PLUGIN_LAYERS.register_module() -class TransformerEncoderPixelDecoder(PixelDecoder): - """Pixel decoder with transormer encoder inside. - - Args: - in_channels (list[int] | tuple[int]): Number of channels in the - input feature maps. - feat_channels (int): Number channels for feature. - out_channels (int): Number channels for output. - norm_cfg (:obj:`mmcv.ConfigDict` | dict): Config for normalization. - Defaults to dict(type='GN', num_groups=32). - act_cfg (:obj:`mmcv.ConfigDict` | dict): Config for activation. - Defaults to dict(type='ReLU'). - encoder (:obj:`mmcv.ConfigDict` | dict): Config for transorformer - encoder.Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer encoder position encoding. Defaults to - dict(type='SinePositionalEncoding', num_feats=128, - normalize=True). - init_cfg (:obj:`mmcv.ConfigDict` | dict): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - norm_cfg=dict(type='GN', num_groups=32), - act_cfg=dict(type='ReLU'), - encoder=None, - positional_encoding=dict( - type='SinePositionalEncoding', - num_feats=128, - normalize=True), - init_cfg=None): - super(TransformerEncoderPixelDecoder, self).__init__( - in_channels, - feat_channels, - out_channels, - norm_cfg, - act_cfg, - init_cfg=init_cfg) - self.last_feat_conv = None - - self.encoder = build_transformer_layer_sequence(encoder) - self.encoder_embed_dims = self.encoder.embed_dims - assert self.encoder_embed_dims == feat_channels, 'embed_dims({}) of ' \ - 'tranformer encoder must equal to feat_channels({})'.format( - feat_channels, self.encoder_embed_dims) - self.positional_encoding = build_positional_encoding( - positional_encoding) - self.encoder_in_proj = Conv2d( - in_channels[-1], feat_channels, kernel_size=1) - self.encoder_out_proj = ConvModule( - feat_channels, - feat_channels, - kernel_size=3, - stride=1, - padding=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def init_weights(self): - """Initialize weights.""" - for i in range(0, self.num_inputs - 2): - caffe2_xavier_init(self.lateral_convs[i].conv, bias=0) - caffe2_xavier_init(self.output_convs[i].conv, bias=0) - - caffe2_xavier_init(self.mask_feature, bias=0) - caffe2_xavier_init(self.encoder_in_proj, bias=0) - caffe2_xavier_init(self.encoder_out_proj.conv, bias=0) - - for p in self.encoder.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, feats, img_metas): - """ - Args: - feats (list[Tensor]): Feature maps of each level. Each has - shape of (batch_size, c, h, w). - img_metas (list[dict]): List of image information. Pass in - for creating more accurate padding mask. - - Returns: - tuple: a tuple containing the following: - - mask_feature (Tensor): shape (batch_size, c, h, w). - - memory (Tensor): shape (batch_size, c, h, w). 
- """ - feat_last = feats[-1] - bs, c, h, w = feat_last.shape - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - padding_mask = feat_last.new_ones((bs, input_img_h, input_img_w), - dtype=torch.float32) - for i in range(bs): - img_h, img_w, _ = img_metas[i]['img_shape'] - padding_mask[i, :img_h, :img_w] = 0 - padding_mask = F.interpolate( - padding_mask.unsqueeze(1), - size=feat_last.shape[-2:], - mode='nearest').to(torch.bool).squeeze(1) - - pos_embed = self.positional_encoding(padding_mask) - feat_last = self.encoder_in_proj(feat_last) - # (batch_size, c, h, w) -> (num_queries, batch_size, c) - feat_last = feat_last.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - # (batch_size, h, w) -> (batch_size, h*w) - padding_mask = padding_mask.flatten(1) - memory = self.encoder( - query=feat_last, - key=None, - value=None, - query_pos=pos_embed, - query_key_padding_mask=padding_mask) - # (num_queries, batch_size, c) -> (batch_size, c, h, w) - memory = memory.permute(1, 2, 0).view(bs, self.encoder_embed_dims, h, - w) - y = self.encoder_out_proj(memory) - for i in range(self.num_inputs - 2, -1, -1): - x = feats[i] - cur_feat = self.lateral_convs[i](x) - y = cur_feat + \ - F.interpolate(y, size=cur_feat.shape[-2:], mode='nearest') - y = self.output_convs[i](y) - - mask_feature = self.mask_feature(y) - return mask_feature, memory diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/__init__.py deleted file mode 100644 index baae2a05..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/__init__.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base_roi_head import BaseRoIHead -from .bbox_heads import (BBoxHead, ConvFCBBoxHead, DIIHead, - DoubleConvFCBBoxHead, SABLHead, SCNetBBoxHead, - Shared2FCBBoxHead, Shared4Conv1FCBBoxHead) -from .cascade_roi_head import CascadeRoIHead -from .double_roi_head import DoubleHeadRoIHead -from .dynamic_roi_head import DynamicRoIHead -from .grid_roi_head import GridRoIHead -from .htc_roi_head import HybridTaskCascadeRoIHead -from .mask_heads import (CoarseMaskHead, FCNMaskHead, FeatureRelayHead, - FusedSemanticHead, GlobalContextHead, GridHead, - HTCMaskHead, MaskIoUHead, MaskPointHead, - SCNetMaskHead, SCNetSemanticHead) -from .mask_scoring_roi_head import MaskScoringRoIHead -from .pisa_roi_head import PISARoIHead -from .point_rend_roi_head import PointRendRoIHead -from .roi_extractors import (BaseRoIExtractor, GenericRoIExtractor, - SingleRoIExtractor) -from .scnet_roi_head import SCNetRoIHead -from .shared_heads import ResLayer -from .sparse_roi_head import SparseRoIHead -from .standard_roi_head import StandardRoIHead -from .trident_roi_head import TridentRoIHead - -__all__ = [ - 'BaseRoIHead', 'CascadeRoIHead', 'DoubleHeadRoIHead', 'MaskScoringRoIHead', - 'HybridTaskCascadeRoIHead', 'GridRoIHead', 'ResLayer', 'BBoxHead', - 'ConvFCBBoxHead', 'DIIHead', 'SABLHead', 'Shared2FCBBoxHead', - 'StandardRoIHead', 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', - 'FCNMaskHead', 'HTCMaskHead', 'FusedSemanticHead', 'GridHead', - 'MaskIoUHead', 'BaseRoIExtractor', 'GenericRoIExtractor', - 'SingleRoIExtractor', 'PISARoIHead', 'PointRendRoIHead', 'MaskPointHead', - 'CoarseMaskHead', 'DynamicRoIHead', 'SparseRoIHead', 'TridentRoIHead', - 'SCNetRoIHead', 'SCNetMaskHead', 'SCNetSemanticHead', 'SCNetBBoxHead', - 'FeatureRelayHead', 'GlobalContextHead' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/base_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/base_roi_head.py deleted file mode 100644 index 4adbdef8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/base_roi_head.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from abc import ABCMeta, abstractmethod - -from mmcv.runner import BaseModule - -from ..builder import build_shared_head - - -class BaseRoIHead(BaseModule, metaclass=ABCMeta): - """Base class for RoIHeads.""" - - def __init__(self, - bbox_roi_extractor=None, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - shared_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(BaseRoIHead, self).__init__(init_cfg) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if shared_head is not None: - shared_head.pretrained = pretrained - self.shared_head = build_shared_head(shared_head) - - if bbox_head is not None: - self.init_bbox_head(bbox_roi_extractor, bbox_head) - - if mask_head is not None: - self.init_mask_head(mask_roi_extractor, mask_head) - - self.init_assigner_sampler() - - @property - def with_bbox(self): - """bool: whether the RoI head contains a `bbox_head`""" - return hasattr(self, 'bbox_head') and self.bbox_head is not None - - @property - def with_mask(self): - """bool: whether the RoI head contains a `mask_head`""" - return hasattr(self, 'mask_head') and self.mask_head is not None - - @property - def with_shared_head(self): - """bool: whether the RoI head contains a `shared_head`""" - return hasattr(self, 'shared_head') and self.shared_head is not None - - @abstractmethod - def init_bbox_head(self): - """Initialize ``bbox_head``""" - pass - - @abstractmethod - def init_mask_head(self): - """Initialize ``mask_head``""" - pass - - @abstractmethod - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - pass - - @abstractmethod - def forward_train(self, - x, - img_meta, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - **kwargs): - """Forward function during training.""" - - async def async_simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False, - **kwargs): - """Asynchronized test function.""" - raise NotImplementedError - - def simple_test(self, - x, - proposal_list, - img_meta, - proposals=None, - rescale=False, - **kwargs): - """Test without augmentation.""" - - def aug_test(self, x, proposal_list, img_metas, rescale=False, **kwargs): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/__init__.py deleted file mode 100644 index d1207dbe..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
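`BaseRoIHead`, removed above, mostly pins down an interface: bbox/mask/shared heads are optional sub-modules, `with_bbox`/`with_mask`/`with_shared_head` report what was configured, and subclasses fill in the abstract init and forward hooks. A toy illustration of that capability-property pattern (the `ToyRoIHead` name and shapes are hypothetical, not part of mmdet):

```python
import torch.nn as nn


class ToyRoIHead(nn.Module):
    """Toy stand-in for the optional-sub-head pattern used by BaseRoIHead."""

    def __init__(self, bbox_head=None, mask_head=None):
        super().__init__()
        # Sub-heads are only attached when configured, mirroring the deleted class.
        if bbox_head is not None:
            self.bbox_head = bbox_head
        if mask_head is not None:
            self.mask_head = mask_head

    @property
    def with_bbox(self):
        return getattr(self, 'bbox_head', None) is not None

    @property
    def with_mask(self):
        return getattr(self, 'mask_head', None) is not None


head = ToyRoIHead(bbox_head=nn.Linear(256, 81))
print(head.with_bbox, head.with_mask)  # True False
```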
-from .bbox_head import BBoxHead -from .convfc_bbox_head import (ConvFCBBoxHead, Shared2FCBBoxHead, - Shared4Conv1FCBBoxHead) -from .dii_head import DIIHead -from .double_bbox_head import DoubleConvFCBBoxHead -from .sabl_head import SABLHead -from .scnet_bbox_head import SCNetBBoxHead - -__all__ = [ - 'BBoxHead', 'ConvFCBBoxHead', 'Shared2FCBBoxHead', - 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', 'SABLHead', 'DIIHead', - 'SCNetBBoxHead' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/bbox_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/bbox_head.py deleted file mode 100644 index 461b18b7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/bbox_head.py +++ /dev/null @@ -1,594 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.runner import BaseModule, auto_fp16, force_fp32 -from torch.nn.modules.utils import _pair - -from mmdet.core import build_bbox_coder, multi_apply, multiclass_nms -from mmdet.models.builder import HEADS, build_loss -from mmdet.models.losses import accuracy -from mmdet.models.utils import build_linear_layer - - -@HEADS.register_module() -class BBoxHead(BaseModule): - """Simplest RoI head, with only two fc layers for classification and - regression respectively.""" - - def __init__(self, - with_avg_pool=False, - with_cls=True, - with_reg=True, - roi_feat_size=7, - in_channels=256, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - reg_decoded_bbox=False, - reg_predictor_cfg=dict(type='Linear'), - cls_predictor_cfg=dict(type='Linear'), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1.0), - init_cfg=None): - super(BBoxHead, self).__init__(init_cfg) - assert with_cls or with_reg - self.with_avg_pool = with_avg_pool - self.with_cls = with_cls - self.with_reg = with_reg - self.roi_feat_size = _pair(roi_feat_size) - self.roi_feat_area = self.roi_feat_size[0] * self.roi_feat_size[1] - self.in_channels = in_channels - self.num_classes = num_classes - self.reg_class_agnostic = reg_class_agnostic - self.reg_decoded_bbox = reg_decoded_bbox - self.reg_predictor_cfg = reg_predictor_cfg - self.cls_predictor_cfg = cls_predictor_cfg - self.fp16_enabled = False - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - - in_channels = self.in_channels - if self.with_avg_pool: - self.avg_pool = nn.AvgPool2d(self.roi_feat_size) - else: - in_channels *= self.roi_feat_area - if self.with_cls: - # need to add background class - if self.custom_cls_channels: - cls_channels = self.loss_cls.get_cls_channels(self.num_classes) - else: - cls_channels = num_classes + 1 - self.fc_cls = build_linear_layer( - self.cls_predictor_cfg, - in_features=in_channels, - out_features=cls_channels) - if self.with_reg: - out_dim_reg = 4 if reg_class_agnostic else 4 * num_classes - self.fc_reg = build_linear_layer( - self.reg_predictor_cfg, - in_features=in_channels, - out_features=out_dim_reg) - self.debug_imgs = None - if init_cfg is None: - self.init_cfg = [] - if self.with_cls: - self.init_cfg += [ - dict( - type='Normal', std=0.01, override=dict(name='fc_cls')) - ] - if 
self.with_reg: - self.init_cfg += [ - dict( - type='Normal', std=0.001, override=dict(name='fc_reg')) - ] - - @property - def custom_cls_channels(self): - return getattr(self.loss_cls, 'custom_cls_channels', False) - - @property - def custom_activation(self): - return getattr(self.loss_cls, 'custom_activation', False) - - @property - def custom_accuracy(self): - return getattr(self.loss_cls, 'custom_accuracy', False) - - @auto_fp16() - def forward(self, x): - if self.with_avg_pool: - if x.numel() > 0: - x = self.avg_pool(x) - x = x.view(x.size(0), -1) - else: - # avg_pool does not support empty tensor, - # so use torch.mean instead it - x = torch.mean(x, dim=(-1, -2)) - cls_score = self.fc_cls(x) if self.with_cls else None - bbox_pred = self.fc_reg(x) if self.with_reg else None - return cls_score, bbox_pred - - def _get_target_single(self, pos_bboxes, neg_bboxes, pos_gt_bboxes, - pos_gt_labels, cfg): - """Calculate the ground truth for proposals in the single image - according to the sampling results. - - Args: - pos_bboxes (Tensor): Contains all the positive boxes, - has shape (num_pos, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - neg_bboxes (Tensor): Contains all the negative boxes, - has shape (num_neg, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_bboxes (Tensor): Contains gt_boxes for - all positive samples, has shape (num_pos, 4), - the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_labels (Tensor): Contains gt_labels for - all positive samples, has shape (num_pos, ). - cfg (obj:`ConfigDict`): `train_cfg` of R-CNN. - - Returns: - Tuple[Tensor]: Ground truth for proposals - in a single image. Containing the following Tensors: - - - labels(Tensor): Gt_labels for all proposals, has - shape (num_proposals,). - - label_weights(Tensor): Labels_weights for all - proposals, has shape (num_proposals,). - - bbox_targets(Tensor):Regression target for all - proposals, has shape (num_proposals, 4), the - last dimension 4 represents [tl_x, tl_y, br_x, br_y]. - - bbox_weights(Tensor):Regression weights for all - proposals, has shape (num_proposals, 4). - """ - num_pos = pos_bboxes.size(0) - num_neg = neg_bboxes.size(0) - num_samples = num_pos + num_neg - - # original implementation uses new_zeros since BG are set to be 0 - # now use empty & fill because BG cat_id = num_classes, - # FG cat_id = [0, num_classes-1] - labels = pos_bboxes.new_full((num_samples, ), - self.num_classes, - dtype=torch.long) - label_weights = pos_bboxes.new_zeros(num_samples) - bbox_targets = pos_bboxes.new_zeros(num_samples, 4) - bbox_weights = pos_bboxes.new_zeros(num_samples, 4) - if num_pos > 0: - labels[:num_pos] = pos_gt_labels - pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight - label_weights[:num_pos] = pos_weight - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - pos_bboxes, pos_gt_bboxes) - else: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, both - # the predicted boxes and regression targets should be with - # absolute coordinate format. 
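The comment above is where `BBoxHead` switches between its two target formats: with the default `DeltaXYWHBBoxCoder` (zero `target_means`, `target_stds=[0.1, 0.1, 0.2, 0.2]`) the regression targets are normalized center/size deltas, while `reg_decoded_bbox=True` keeps absolute boxes for IoU-style losses. A small numeric sketch of the standard delta encoding, written from the usual R-CNN formulation rather than copied from mmcv:

```python
import math


def encode_delta(proposal, gt, means=(0., 0., 0., 0.), stds=(0.1, 0.1, 0.2, 0.2)):
    """Encode a ground-truth box against a proposal, both given as (x1, y1, x2, y2)."""
    px, py = (proposal[0] + proposal[2]) / 2, (proposal[1] + proposal[3]) / 2
    pw, ph = proposal[2] - proposal[0], proposal[3] - proposal[1]
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    deltas = ((gx - px) / pw, (gy - py) / ph, math.log(gw / pw), math.log(gh / ph))
    # Normalizing by the configured means/stds keeps the targets roughly unit-scale.
    return [(d - m) / s for d, m, s in zip(deltas, means, stds)]


# A 100x100 proposal whose ground truth is shifted right and 25% wider.
print(encode_delta((100, 100, 200, 200), (110, 100, 235, 200)))
# -> [2.25, 0.0, ~1.116, 0.0]
```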
- pos_bbox_targets = pos_gt_bboxes - bbox_targets[:num_pos, :] = pos_bbox_targets - bbox_weights[:num_pos, :] = 1 - if num_neg > 0: - label_weights[-num_neg:] = 1.0 - - return labels, label_weights, bbox_targets, bbox_weights - - def get_targets(self, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - concat=True): - """Calculate the ground truth for all samples in a batch according to - the sampling_results. - - Almost the same as the implementation in bbox_head, we passed - additional parameters pos_inds_list and neg_inds_list to - `_get_target_single` function. - - Args: - sampling_results (List[obj:SamplingResults]): Assign results of - all images in a batch after sampling. - gt_bboxes (list[Tensor]): Gt_bboxes of all images in a batch, - each tensor has shape (num_gt, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - gt_labels (list[Tensor]): Gt_labels of all images in a batch, - each tensor has shape (num_gt,). - rcnn_train_cfg (obj:ConfigDict): `train_cfg` of RCNN. - concat (bool): Whether to concatenate the results of all - the images in a single batch. - - Returns: - Tuple[Tensor]: Ground truth for proposals in a single image. - Containing the following list of Tensors: - - - labels (list[Tensor],Tensor): Gt_labels for all - proposals in a batch, each tensor in list has - shape (num_proposals,) when `concat=False`, otherwise - just a single tensor has shape (num_all_proposals,). - - label_weights (list[Tensor]): Labels_weights for - all proposals in a batch, each tensor in list has - shape (num_proposals,) when `concat=False`, otherwise - just a single tensor has shape (num_all_proposals,). - - bbox_targets (list[Tensor],Tensor): Regression target - for all proposals in a batch, each tensor in list - has shape (num_proposals, 4) when `concat=False`, - otherwise just a single tensor has shape - (num_all_proposals, 4), the last dimension 4 represents - [tl_x, tl_y, br_x, br_y]. - - bbox_weights (list[tensor],Tensor): Regression weights for - all proposals in a batch, each tensor in list has shape - (num_proposals, 4) when `concat=False`, otherwise just a - single tensor has shape (num_all_proposals, 4). - """ - pos_bboxes_list = [res.pos_bboxes for res in sampling_results] - neg_bboxes_list = [res.neg_bboxes for res in sampling_results] - pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results] - pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results] - labels, label_weights, bbox_targets, bbox_weights = multi_apply( - self._get_target_single, - pos_bboxes_list, - neg_bboxes_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - cfg=rcnn_train_cfg) - - if concat: - labels = torch.cat(labels, 0) - label_weights = torch.cat(label_weights, 0) - bbox_targets = torch.cat(bbox_targets, 0) - bbox_weights = torch.cat(bbox_weights, 0) - return labels, label_weights, bbox_targets, bbox_weights - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def loss(self, - cls_score, - bbox_pred, - rois, - labels, - label_weights, - bbox_targets, - bbox_weights, - reduction_override=None): - losses = dict() - if cls_score is not None: - avg_factor = max(torch.sum(label_weights > 0).float().item(), 1.) 
- if cls_score.numel() > 0: - loss_cls_ = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=avg_factor, - reduction_override=reduction_override) - if isinstance(loss_cls_, dict): - losses.update(loss_cls_) - else: - losses['loss_cls'] = loss_cls_ - if self.custom_activation: - acc_ = self.loss_cls.get_accuracy(cls_score, labels) - losses.update(acc_) - else: - losses['acc'] = accuracy(cls_score, labels) - if bbox_pred is not None: - bg_class_ind = self.num_classes - # 0~self.num_classes-1 are FG, self.num_classes is BG - pos_inds = (labels >= 0) & (labels < bg_class_ind) - # do not perform bounding box regression for BG anymore. - if pos_inds.any(): - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, - # `GIouLoss`, `DIouLoss`) is applied directly on - # the decoded bounding boxes, it decodes the - # already encoded coordinates to absolute format. - bbox_pred = self.bbox_coder.decode(rois[:, 1:], bbox_pred) - if self.reg_class_agnostic: - pos_bbox_pred = bbox_pred.view( - bbox_pred.size(0), 4)[pos_inds.type(torch.bool)] - else: - pos_bbox_pred = bbox_pred.view( - bbox_pred.size(0), -1, - 4)[pos_inds.type(torch.bool), - labels[pos_inds.type(torch.bool)]] - losses['loss_bbox'] = self.loss_bbox( - pos_bbox_pred, - bbox_targets[pos_inds.type(torch.bool)], - bbox_weights[pos_inds.type(torch.bool)], - avg_factor=bbox_targets.size(0), - reduction_override=reduction_override) - else: - losses['loss_bbox'] = bbox_pred[pos_inds].sum() - return losses - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def get_bboxes(self, - rois, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=False, - cfg=None): - """Transform network output for a batch into bbox predictions. - - Args: - rois (Tensor): Boxes to be transformed. Has shape (num_boxes, 5). - last dimension 5 arrange as (batch_index, x1, y1, x2, y2). - cls_score (Tensor): Box scores, has shape - (num_boxes, num_classes + 1). - bbox_pred (Tensor, optional): Box energies / deltas. - has shape (num_boxes, num_classes * 4). - img_shape (Sequence[int], optional): Maximum bounds for boxes, - specifies (H, W, C) or (H, W). - scale_factor (ndarray): Scale factor of the - image arrange as (w_scale, h_scale, w_scale, h_scale). - rescale (bool): If True, return boxes in original image space. - Default: False. - cfg (obj:`ConfigDict`): `test_cfg` of Bbox Head. Default: None - - Returns: - tuple[Tensor, Tensor]: - First tensor is `det_bboxes`, has the shape - (num_boxes, 5) and last - dimension 5 represent (tl_x, tl_y, br_x, br_y, score). - Second tensor is the labels with shape (num_boxes, ). - """ - - # some loss (Seesaw loss..) may have custom activation - if self.custom_cls_channels: - scores = self.loss_cls.get_activation(cls_score) - else: - scores = F.softmax( - cls_score, dim=-1) if cls_score is not None else None - # bbox_pred would be None in some detector when with_reg is False, - # e.g. Grid R-CNN. 
- if bbox_pred is not None: - bboxes = self.bbox_coder.decode( - rois[..., 1:], bbox_pred, max_shape=img_shape) - else: - bboxes = rois[:, 1:].clone() - if img_shape is not None: - bboxes[:, [0, 2]].clamp_(min=0, max=img_shape[1]) - bboxes[:, [1, 3]].clamp_(min=0, max=img_shape[0]) - - if rescale and bboxes.size(0) > 0: - scale_factor = bboxes.new_tensor(scale_factor) - bboxes = (bboxes.view(bboxes.size(0), -1, 4) / scale_factor).view( - bboxes.size()[0], -1) - - if cfg is None: - return bboxes, scores - else: - det_bboxes, det_labels = multiclass_nms(bboxes, scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - - return det_bboxes, det_labels - - @force_fp32(apply_to=('bbox_preds', )) - def refine_bboxes(self, rois, labels, bbox_preds, pos_is_gts, img_metas): - """Refine bboxes during training. - - Args: - rois (Tensor): Shape (n*bs, 5), where n is image number per GPU, - and bs is the sampled RoIs per image. The first column is - the image id and the next 4 columns are x1, y1, x2, y2. - labels (Tensor): Shape (n*bs, ). - bbox_preds (Tensor): Shape (n*bs, 4) or (n*bs, 4*#class). - pos_is_gts (list[Tensor]): Flags indicating if each positive bbox - is a gt bbox. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Refined bboxes of each image in a mini-batch. - - Example: - >>> # xdoctest: +REQUIRES(module:kwarray) - >>> import kwarray - >>> import numpy as np - >>> from mmdet.core.bbox.demodata import random_boxes - >>> self = BBoxHead(reg_class_agnostic=True) - >>> n_roi = 2 - >>> n_img = 4 - >>> scale = 512 - >>> rng = np.random.RandomState(0) - >>> img_metas = [{'img_shape': (scale, scale)} - ... for _ in range(n_img)] - >>> # Create rois in the expected format - >>> roi_boxes = random_boxes(n_roi, scale=scale, rng=rng) - >>> img_ids = torch.randint(0, n_img, (n_roi,)) - >>> img_ids = img_ids.float() - >>> rois = torch.cat([img_ids[:, None], roi_boxes], dim=1) - >>> # Create other args - >>> labels = torch.randint(0, 2, (n_roi,)).long() - >>> bbox_preds = random_boxes(n_roi, scale=scale, rng=rng) - >>> # For each image, pretend random positive boxes are gts - >>> is_label_pos = (labels.numpy() > 0).astype(np.int) - >>> lbl_per_img = kwarray.group_items(is_label_pos, - ... img_ids.numpy()) - >>> pos_per_img = [sum(lbl_per_img.get(gid, [])) - ... for gid in range(n_img)] - >>> pos_is_gts = [ - >>> torch.randint(0, 2, (npos,)).byte().sort( - >>> descending=True)[0] - >>> for npos in pos_per_img - >>> ] - >>> bboxes_list = self.refine_bboxes(rois, labels, bbox_preds, - >>> pos_is_gts, img_metas) - >>> print(bboxes_list) - """ - img_ids = rois[:, 0].long().unique(sorted=True) - assert img_ids.numel() <= len(img_metas) - - bboxes_list = [] - for i in range(len(img_metas)): - inds = torch.nonzero( - rois[:, 0] == i, as_tuple=False).squeeze(dim=1) - num_rois = inds.numel() - - bboxes_ = rois[inds, 1:] - label_ = labels[inds] - bbox_pred_ = bbox_preds[inds] - img_meta_ = img_metas[i] - pos_is_gts_ = pos_is_gts[i] - - bboxes = self.regress_by_class(bboxes_, label_, bbox_pred_, - img_meta_) - - # filter gt bboxes - pos_keep = 1 - pos_is_gts_ - keep_inds = pos_is_gts_.new_ones(num_rois) - keep_inds[:len(pos_is_gts_)] = pos_keep - - bboxes_list.append(bboxes[keep_inds.type(torch.bool)]) - - return bboxes_list - - @force_fp32(apply_to=('bbox_pred', )) - def regress_by_class(self, rois, label, bbox_pred, img_meta): - """Regress the bbox for the predicted class. Used in Cascade R-CNN. 
- - Args: - rois (Tensor): Rois from `rpn_head` or last stage - `bbox_head`, has shape (num_proposals, 4) or - (num_proposals, 5). - label (Tensor): Only used when `self.reg_class_agnostic` - is False, has shape (num_proposals, ). - bbox_pred (Tensor): Regression prediction of - current stage `bbox_head`. When `self.reg_class_agnostic` - is False, it has shape (n, num_classes * 4), otherwise - it has shape (n, 4). - img_meta (dict): Image meta info. - - Returns: - Tensor: Regressed bboxes, the same shape as input rois. - """ - - assert rois.size(1) == 4 or rois.size(1) == 5, repr(rois.shape) - - if not self.reg_class_agnostic: - label = label * 4 - inds = torch.stack((label, label + 1, label + 2, label + 3), 1) - bbox_pred = torch.gather(bbox_pred, 1, inds) - assert bbox_pred.size(1) == 4 - - max_shape = img_meta['img_shape'] - - if rois.size(1) == 4: - new_rois = self.bbox_coder.decode( - rois, bbox_pred, max_shape=max_shape) - else: - bboxes = self.bbox_coder.decode( - rois[:, 1:], bbox_pred, max_shape=max_shape) - new_rois = torch.cat((rois[:, [0]], bboxes), dim=1) - - return new_rois - - def onnx_export(self, - rois, - cls_score, - bbox_pred, - img_shape, - cfg=None, - **kwargs): - """Transform network output for a batch into bbox predictions. - - Args: - rois (Tensor): Boxes to be transformed. - Has shape (B, num_boxes, 5) - cls_score (Tensor): Box scores. has shape - (B, num_boxes, num_classes + 1), 1 represent the background. - bbox_pred (Tensor, optional): Box energies / deltas for, - has shape (B, num_boxes, num_classes * 4) when. - img_shape (torch.Tensor): Shape of image. - cfg (obj:`ConfigDict`): `test_cfg` of Bbox Head. Default: None - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - - assert rois.ndim == 3, 'Only support export two stage ' \ - 'model to ONNX ' \ - 'with batch dimension. ' - if self.custom_cls_channels: - scores = self.loss_cls.get_activation(cls_score) - else: - scores = F.softmax( - cls_score, dim=-1) if cls_score is not None else None - - if bbox_pred is not None: - bboxes = self.bbox_coder.decode( - rois[..., 1:], bbox_pred, max_shape=img_shape) - else: - bboxes = rois[..., 1:].clone() - if img_shape is not None: - max_shape = bboxes.new_tensor(img_shape)[..., :2] - min_xy = bboxes.new_tensor(0) - max_xy = torch.cat( - [max_shape] * 2, dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - # Replace multiclass_nms with ONNX::NonMaxSuppression in deployment - from mmdet.core.export import add_dummy_nms_for_onnx - max_output_boxes_per_class = cfg.nms.get('max_output_boxes_per_class', - cfg.max_per_img) - iou_threshold = cfg.nms.get('iou_threshold', 0.5) - score_threshold = cfg.score_thr - nms_pre = cfg.get('deploy_nms_pre', -1) - - scores = scores[..., :self.num_classes] - if self.reg_class_agnostic: - return add_dummy_nms_for_onnx( - bboxes, - scores, - max_output_boxes_per_class, - iou_threshold, - score_threshold, - pre_top_k=nms_pre, - after_top_k=cfg.max_per_img) - else: - batch_size = scores.shape[0] - labels = torch.arange( - self.num_classes, dtype=torch.long).to(scores.device) - labels = labels.view(1, 1, -1).expand_as(scores) - labels = labels.reshape(batch_size, -1) - scores = scores.reshape(batch_size, -1) - bboxes = bboxes.reshape(batch_size, -1, 4) - - max_size = torch.max(img_shape) - # Offset bboxes of each class so that bboxes of different labels - # do not overlap. 
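The comment above describes the standard trick for getting per-class NMS out of a single class-agnostic NMS pass during ONNX export: shift every box by `label * max_size` so boxes with different labels land in disjoint coordinate ranges, run NMS once, then undo the offsets. A small sketch of the same idea using `torchvision.ops.nms` (torchvision is an assumption here for illustration; the deleted code routes through `add_dummy_nms_for_onnx` instead):

```python
import torch
from torchvision.ops import nms

boxes = torch.tensor([[10., 10., 50., 50.],
                      [12., 12., 52., 52.],   # overlaps box 0, same class
                      [11., 11., 51., 51.]])  # overlaps box 0, different class
scores = torch.tensor([0.9, 0.8, 0.7])
labels = torch.tensor([0, 0, 1])

# Offset each box by its label times a value larger than any coordinate, so boxes of
# different classes are pushed into disjoint regions and never suppress each other.
max_size = boxes.max() + 1
offsets = (labels.float() * max_size).unsqueeze(1)
keep = nms(boxes + offsets, scores, iou_threshold=0.5)
print(keep)  # tensor([0, 2]): box 1 is suppressed by box 0, box 2 survives in its own class
```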
- offsets = (labels * max_size + 1).unsqueeze(2) - bboxes_for_nms = bboxes + offsets - - batch_dets, labels = add_dummy_nms_for_onnx( - bboxes_for_nms, - scores.unsqueeze(2), - max_output_boxes_per_class, - iou_threshold, - score_threshold, - pre_top_k=nms_pre, - after_top_k=cfg.max_per_img, - labels=labels) - # Offset the bboxes back after dummy nms. - offsets = (labels * max_size + 1).unsqueeze(2) - # Indexing + inplace operation fails with dynamic shape in ONNX - # original style: batch_dets[..., :4] -= offsets - bboxes, scores = batch_dets[..., 0:4], batch_dets[..., 4:5] - bboxes -= offsets - batch_dets = torch.cat([bboxes, scores], dim=2) - return batch_dets, labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py deleted file mode 100644 index 21124b9c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py +++ /dev/null @@ -1,229 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmdet.models.builder import HEADS -from mmdet.models.utils import build_linear_layer -from .bbox_head import BBoxHead - - -@HEADS.register_module() -class ConvFCBBoxHead(BBoxHead): - r"""More general bbox head, with shared conv and fc layers and two optional - separated branches. - - .. code-block:: none - - /-> cls convs -> cls fcs -> cls - shared convs -> shared fcs - \-> reg convs -> reg fcs -> reg - """ # noqa: W605 - - def __init__(self, - num_shared_convs=0, - num_shared_fcs=0, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - conv_out_channels=256, - fc_out_channels=1024, - conv_cfg=None, - norm_cfg=None, - init_cfg=None, - *args, - **kwargs): - super(ConvFCBBoxHead, self).__init__( - *args, init_cfg=init_cfg, **kwargs) - assert (num_shared_convs + num_shared_fcs + num_cls_convs + - num_cls_fcs + num_reg_convs + num_reg_fcs > 0) - if num_cls_convs > 0 or num_reg_convs > 0: - assert num_shared_fcs == 0 - if not self.with_cls: - assert num_cls_convs == 0 and num_cls_fcs == 0 - if not self.with_reg: - assert num_reg_convs == 0 and num_reg_fcs == 0 - self.num_shared_convs = num_shared_convs - self.num_shared_fcs = num_shared_fcs - self.num_cls_convs = num_cls_convs - self.num_cls_fcs = num_cls_fcs - self.num_reg_convs = num_reg_convs - self.num_reg_fcs = num_reg_fcs - self.conv_out_channels = conv_out_channels - self.fc_out_channels = fc_out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - # add shared convs and fcs - self.shared_convs, self.shared_fcs, last_layer_dim = \ - self._add_conv_fc_branch( - self.num_shared_convs, self.num_shared_fcs, self.in_channels, - True) - self.shared_out_channels = last_layer_dim - - # add cls specific branch - self.cls_convs, self.cls_fcs, self.cls_last_dim = \ - self._add_conv_fc_branch( - self.num_cls_convs, self.num_cls_fcs, self.shared_out_channels) - - # add reg specific branch - self.reg_convs, self.reg_fcs, self.reg_last_dim = \ - self._add_conv_fc_branch( - self.num_reg_convs, self.num_reg_fcs, self.shared_out_channels) - - if self.num_shared_fcs == 0 and not self.with_avg_pool: - if self.num_cls_fcs == 0: - self.cls_last_dim *= self.roi_feat_area - if self.num_reg_fcs == 0: - self.reg_last_dim *= self.roi_feat_area - - self.relu = nn.ReLU(inplace=True) - # reconstruct fc_cls and fc_reg since input channels are changed - if 
self.with_cls: - if self.custom_cls_channels: - cls_channels = self.loss_cls.get_cls_channels(self.num_classes) - else: - cls_channels = self.num_classes + 1 - self.fc_cls = build_linear_layer( - self.cls_predictor_cfg, - in_features=self.cls_last_dim, - out_features=cls_channels) - if self.with_reg: - out_dim_reg = (4 if self.reg_class_agnostic else 4 * - self.num_classes) - self.fc_reg = build_linear_layer( - self.reg_predictor_cfg, - in_features=self.reg_last_dim, - out_features=out_dim_reg) - - if init_cfg is None: - # when init_cfg is None, - # It has been set to - # [[dict(type='Normal', std=0.01, override=dict(name='fc_cls'))], - # [dict(type='Normal', std=0.001, override=dict(name='fc_reg'))] - # after `super(ConvFCBBoxHead, self).__init__()` - # we only need to append additional configuration - # for `shared_fcs`, `cls_fcs` and `reg_fcs` - self.init_cfg += [ - dict( - type='Xavier', - distribution='uniform', - override=[ - dict(name='shared_fcs'), - dict(name='cls_fcs'), - dict(name='reg_fcs') - ]) - ] - - def _add_conv_fc_branch(self, - num_branch_convs, - num_branch_fcs, - in_channels, - is_shared=False): - """Add shared or separable branch. - - convs -> avg pool (optional) -> fcs - """ - last_layer_dim = in_channels - # add branch specific conv layers - branch_convs = nn.ModuleList() - if num_branch_convs > 0: - for i in range(num_branch_convs): - conv_in_channels = ( - last_layer_dim if i == 0 else self.conv_out_channels) - branch_convs.append( - ConvModule( - conv_in_channels, - self.conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - last_layer_dim = self.conv_out_channels - # add branch specific fc layers - branch_fcs = nn.ModuleList() - if num_branch_fcs > 0: - # for shared branch, only consider self.with_avg_pool - # for separated branches, also consider self.num_shared_fcs - if (is_shared - or self.num_shared_fcs == 0) and not self.with_avg_pool: - last_layer_dim *= self.roi_feat_area - for i in range(num_branch_fcs): - fc_in_channels = ( - last_layer_dim if i == 0 else self.fc_out_channels) - branch_fcs.append( - nn.Linear(fc_in_channels, self.fc_out_channels)) - last_layer_dim = self.fc_out_channels - return branch_convs, branch_fcs, last_layer_dim - - def forward(self, x): - # shared part - if self.num_shared_convs > 0: - for conv in self.shared_convs: - x = conv(x) - - if self.num_shared_fcs > 0: - if self.with_avg_pool: - x = self.avg_pool(x) - - x = x.flatten(1) - - for fc in self.shared_fcs: - x = self.relu(fc(x)) - # separate branches - x_cls = x - x_reg = x - - for conv in self.cls_convs: - x_cls = conv(x_cls) - if x_cls.dim() > 2: - if self.with_avg_pool: - x_cls = self.avg_pool(x_cls) - x_cls = x_cls.flatten(1) - for fc in self.cls_fcs: - x_cls = self.relu(fc(x_cls)) - - for conv in self.reg_convs: - x_reg = conv(x_reg) - if x_reg.dim() > 2: - if self.with_avg_pool: - x_reg = self.avg_pool(x_reg) - x_reg = x_reg.flatten(1) - for fc in self.reg_fcs: - x_reg = self.relu(fc(x_reg)) - - cls_score = self.fc_cls(x_cls) if self.with_cls else None - bbox_pred = self.fc_reg(x_reg) if self.with_reg else None - return cls_score, bbox_pred - - -@HEADS.register_module() -class Shared2FCBBoxHead(ConvFCBBoxHead): - - def __init__(self, fc_out_channels=1024, *args, **kwargs): - super(Shared2FCBBoxHead, self).__init__( - num_shared_convs=0, - num_shared_fcs=2, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - fc_out_channels=fc_out_channels, - *args, - **kwargs) - - -@HEADS.register_module() -class 
Shared4Conv1FCBBoxHead(ConvFCBBoxHead): - - def __init__(self, fc_out_channels=1024, *args, **kwargs): - super(Shared4Conv1FCBBoxHead, self).__init__( - num_shared_convs=4, - num_shared_fcs=1, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - fc_out_channels=fc_out_channels, - *args, - **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/dii_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/dii_head.py deleted file mode 100644 index 3777f52b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/dii_head.py +++ /dev/null @@ -1,426 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import (bias_init_with_prob, build_activation_layer, - build_norm_layer) -from mmcv.cnn.bricks.transformer import FFN, MultiheadAttention -from mmcv.runner import auto_fp16, force_fp32 - -from mmdet.core import multi_apply -from mmdet.models.builder import HEADS, build_loss -from mmdet.models.dense_heads.atss_head import reduce_mean -from mmdet.models.losses import accuracy -from mmdet.models.utils import build_transformer -from .bbox_head import BBoxHead - - -@HEADS.register_module() -class DIIHead(BBoxHead): - r"""Dynamic Instance Interactive Head for `Sparse R-CNN: End-to-End Object - Detection with Learnable Proposals `_ - - Args: - num_classes (int): Number of class in dataset. - Defaults to 80. - num_ffn_fcs (int): The number of fully-connected - layers in FFNs. Defaults to 2. - num_heads (int): The hidden dimension of FFNs. - Defaults to 8. - num_cls_fcs (int): The number of fully-connected - layers in classification subnet. Defaults to 1. - num_reg_fcs (int): The number of fully-connected - layers in regression subnet. Defaults to 3. - feedforward_channels (int): The hidden dimension - of FFNs. Defaults to 2048 - in_channels (int): Hidden_channels of MultiheadAttention. - Defaults to 256. - dropout (float): Probability of drop the channel. - Defaults to 0.0 - ffn_act_cfg (dict): The activation config for FFNs. - dynamic_conv_cfg (dict): The convolution config - for DynamicConv. - loss_iou (dict): The config for iou or giou loss. 
- - """ - - def __init__(self, - num_classes=80, - num_ffn_fcs=2, - num_heads=8, - num_cls_fcs=1, - num_reg_fcs=3, - feedforward_channels=2048, - in_channels=256, - dropout=0.0, - ffn_act_cfg=dict(type='ReLU', inplace=True), - dynamic_conv_cfg=dict( - type='DynamicConv', - in_channels=256, - feat_channels=64, - out_channels=256, - input_feat_shape=7, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')), - loss_iou=dict(type='GIoULoss', loss_weight=2.0), - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(DIIHead, self).__init__( - num_classes=num_classes, - reg_decoded_bbox=True, - reg_class_agnostic=True, - init_cfg=init_cfg, - **kwargs) - self.loss_iou = build_loss(loss_iou) - self.in_channels = in_channels - self.fp16_enabled = False - self.attention = MultiheadAttention(in_channels, num_heads, dropout) - self.attention_norm = build_norm_layer(dict(type='LN'), in_channels)[1] - - self.instance_interactive_conv = build_transformer(dynamic_conv_cfg) - self.instance_interactive_conv_dropout = nn.Dropout(dropout) - self.instance_interactive_conv_norm = build_norm_layer( - dict(type='LN'), in_channels)[1] - - self.ffn = FFN( - in_channels, - feedforward_channels, - num_ffn_fcs, - act_cfg=ffn_act_cfg, - dropout=dropout) - self.ffn_norm = build_norm_layer(dict(type='LN'), in_channels)[1] - - self.cls_fcs = nn.ModuleList() - for _ in range(num_cls_fcs): - self.cls_fcs.append( - nn.Linear(in_channels, in_channels, bias=False)) - self.cls_fcs.append( - build_norm_layer(dict(type='LN'), in_channels)[1]) - self.cls_fcs.append( - build_activation_layer(dict(type='ReLU', inplace=True))) - - # over load the self.fc_cls in BBoxHead - if self.loss_cls.use_sigmoid: - self.fc_cls = nn.Linear(in_channels, self.num_classes) - else: - self.fc_cls = nn.Linear(in_channels, self.num_classes + 1) - - self.reg_fcs = nn.ModuleList() - for _ in range(num_reg_fcs): - self.reg_fcs.append( - nn.Linear(in_channels, in_channels, bias=False)) - self.reg_fcs.append( - build_norm_layer(dict(type='LN'), in_channels)[1]) - self.reg_fcs.append( - build_activation_layer(dict(type='ReLU', inplace=True))) - # over load the self.fc_cls in BBoxHead - self.fc_reg = nn.Linear(in_channels, 4) - - assert self.reg_class_agnostic, 'DIIHead only ' \ - 'suppport `reg_class_agnostic=True` ' - assert self.reg_decoded_bbox, 'DIIHead only ' \ - 'suppport `reg_decoded_bbox=True`' - - def init_weights(self): - """Use xavier initialization for all weight parameter and set - classification head bias as a specific value when use focal loss.""" - super(DIIHead, self).init_weights() - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - else: - # adopt the default initialization for - # the weight and bias of the layer norm - pass - if self.loss_cls.use_sigmoid: - bias_init = bias_init_with_prob(0.01) - nn.init.constant_(self.fc_cls.bias, bias_init) - - @auto_fp16() - def forward(self, roi_feat, proposal_feat): - """Forward function of Dynamic Instance Interactive Head. - - Args: - roi_feat (Tensor): Roi-pooling features with shape - (batch_size*num_proposals, feature_dimensions, - pooling_h , pooling_w). - proposal_feat (Tensor): Intermediate feature get from - diihead in last stage, has shape - (batch_size, num_proposals, feature_dimensions) - - Returns: - tuple[Tensor]: Usually a tuple of classification scores - and bbox prediction and a intermediate feature. 
- - - cls_scores (Tensor): Classification scores for - all proposals, has shape - (batch_size, num_proposals, num_classes). - - bbox_preds (Tensor): Box energies / deltas for - all proposals, has shape - (batch_size, num_proposals, 4). - - obj_feat (Tensor): Object feature before classification - and regression subnet, has shape - (batch_size, num_proposal, feature_dimensions). - """ - N, num_proposals = proposal_feat.shape[:2] - - # Self attention - proposal_feat = proposal_feat.permute(1, 0, 2) - proposal_feat = self.attention_norm(self.attention(proposal_feat)) - attn_feats = proposal_feat.permute(1, 0, 2) - - # instance interactive - proposal_feat = attn_feats.reshape(-1, self.in_channels) - proposal_feat_iic = self.instance_interactive_conv( - proposal_feat, roi_feat) - proposal_feat = proposal_feat + self.instance_interactive_conv_dropout( - proposal_feat_iic) - obj_feat = self.instance_interactive_conv_norm(proposal_feat) - - # FFN - obj_feat = self.ffn_norm(self.ffn(obj_feat)) - - cls_feat = obj_feat - reg_feat = obj_feat - - for cls_layer in self.cls_fcs: - cls_feat = cls_layer(cls_feat) - for reg_layer in self.reg_fcs: - reg_feat = reg_layer(reg_feat) - - cls_score = self.fc_cls(cls_feat).view( - N, num_proposals, self.num_classes - if self.loss_cls.use_sigmoid else self.num_classes + 1) - bbox_delta = self.fc_reg(reg_feat).view(N, num_proposals, 4) - - return cls_score, bbox_delta, obj_feat.view( - N, num_proposals, self.in_channels), attn_feats - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def loss(self, - cls_score, - bbox_pred, - labels, - label_weights, - bbox_targets, - bbox_weights, - imgs_whwh=None, - reduction_override=None, - **kwargs): - """"Loss function of DIIHead, get loss of all images. - - Args: - cls_score (Tensor): Classification prediction - results of all class, has shape - (batch_size * num_proposals_single_image, num_classes) - bbox_pred (Tensor): Regression prediction results, - has shape - (batch_size * num_proposals_single_image, 4), the last - dimension 4 represents [tl_x, tl_y, br_x, br_y]. - labels (Tensor): Label of each proposals, has shape - (batch_size * num_proposals_single_image - label_weights (Tensor): Classification loss - weight of each proposals, has shape - (batch_size * num_proposals_single_image - bbox_targets (Tensor): Regression targets of each - proposals, has shape - (batch_size * num_proposals_single_image, 4), - the last dimension 4 represents - [tl_x, tl_y, br_x, br_y]. - bbox_weights (Tensor): Regression loss weight of each - proposals's coordinate, has shape - (batch_size * num_proposals_single_image, 4), - imgs_whwh (Tensor): imgs_whwh (Tensor): Tensor with\ - shape (batch_size, num_proposals, 4), the last - dimension means - [img_width,img_height, img_width, img_height]. - reduction_override (str, optional): The reduction - method used to override the original reduction - method of the loss. Options are "none", - "mean" and "sum". 
Defaults to None, - - Returns: - dict[str, Tensor]: Dictionary of loss components - """ - losses = dict() - bg_class_ind = self.num_classes - # note in spare rcnn num_gt == num_pos - pos_inds = (labels >= 0) & (labels < bg_class_ind) - num_pos = pos_inds.sum().float() - avg_factor = reduce_mean(num_pos) - if cls_score is not None: - if cls_score.numel() > 0: - losses['loss_cls'] = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=avg_factor, - reduction_override=reduction_override) - losses['pos_acc'] = accuracy(cls_score[pos_inds], - labels[pos_inds]) - if bbox_pred is not None: - # 0~self.num_classes-1 are FG, self.num_classes is BG - # do not perform bounding box regression for BG anymore. - if pos_inds.any(): - pos_bbox_pred = bbox_pred.reshape(bbox_pred.size(0), - 4)[pos_inds.type(torch.bool)] - imgs_whwh = imgs_whwh.reshape(bbox_pred.size(0), - 4)[pos_inds.type(torch.bool)] - losses['loss_bbox'] = self.loss_bbox( - pos_bbox_pred / imgs_whwh, - bbox_targets[pos_inds.type(torch.bool)] / imgs_whwh, - bbox_weights[pos_inds.type(torch.bool)], - avg_factor=avg_factor) - losses['loss_iou'] = self.loss_iou( - pos_bbox_pred, - bbox_targets[pos_inds.type(torch.bool)], - bbox_weights[pos_inds.type(torch.bool)], - avg_factor=avg_factor) - else: - losses['loss_bbox'] = bbox_pred.sum() * 0 - losses['loss_iou'] = bbox_pred.sum() * 0 - return losses - - def _get_target_single(self, pos_inds, neg_inds, pos_bboxes, neg_bboxes, - pos_gt_bboxes, pos_gt_labels, cfg): - """Calculate the ground truth for proposals in the single image - according to the sampling results. - - Almost the same as the implementation in `bbox_head`, - we add pos_inds and neg_inds to select positive and - negative samples instead of selecting the first num_pos - as positive samples. - - Args: - pos_inds (Tensor): The length is equal to the - positive sample numbers contain all index - of the positive sample in the origin proposal set. - neg_inds (Tensor): The length is equal to the - negative sample numbers contain all index - of the negative sample in the origin proposal set. - pos_bboxes (Tensor): Contains all the positive boxes, - has shape (num_pos, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - neg_bboxes (Tensor): Contains all the negative boxes, - has shape (num_neg, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_bboxes (Tensor): Contains gt_boxes for - all positive samples, has shape (num_pos, 4), - the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_labels (Tensor): Contains gt_labels for - all positive samples, has shape (num_pos, ). - cfg (obj:`ConfigDict`): `train_cfg` of R-CNN. - - Returns: - Tuple[Tensor]: Ground truth for proposals in a single image. - Containing the following Tensors: - - - labels(Tensor): Gt_labels for all proposals, has - shape (num_proposals,). - - label_weights(Tensor): Labels_weights for all proposals, has - shape (num_proposals,). - - bbox_targets(Tensor):Regression target for all proposals, has - shape (num_proposals, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - - bbox_weights(Tensor):Regression weights for all proposals, - has shape (num_proposals, 4). 
- """ - num_pos = pos_bboxes.size(0) - num_neg = neg_bboxes.size(0) - num_samples = num_pos + num_neg - - # original implementation uses new_zeros since BG are set to be 0 - # now use empty & fill because BG cat_id = num_classes, - # FG cat_id = [0, num_classes-1] - labels = pos_bboxes.new_full((num_samples, ), - self.num_classes, - dtype=torch.long) - label_weights = pos_bboxes.new_zeros(num_samples) - bbox_targets = pos_bboxes.new_zeros(num_samples, 4) - bbox_weights = pos_bboxes.new_zeros(num_samples, 4) - if num_pos > 0: - labels[pos_inds] = pos_gt_labels - pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight - label_weights[pos_inds] = pos_weight - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - pos_bboxes, pos_gt_bboxes) - else: - pos_bbox_targets = pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1 - if num_neg > 0: - label_weights[neg_inds] = 1.0 - - return labels, label_weights, bbox_targets, bbox_weights - - def get_targets(self, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - concat=True): - """Calculate the ground truth for all samples in a batch according to - the sampling_results. - - Almost the same as the implementation in bbox_head, we passed - additional parameters pos_inds_list and neg_inds_list to - `_get_target_single` function. - - Args: - sampling_results (List[obj:SamplingResults]): Assign results of - all images in a batch after sampling. - gt_bboxes (list[Tensor]): Gt_bboxes of all images in a batch, - each tensor has shape (num_gt, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - gt_labels (list[Tensor]): Gt_labels of all images in a batch, - each tensor has shape (num_gt,). - rcnn_train_cfg (obj:`ConfigDict`): `train_cfg` of RCNN. - concat (bool): Whether to concatenate the results of all - the images in a single batch. - - Returns: - Tuple[Tensor]: Ground truth for proposals in a single image. - Containing the following list of Tensors: - - - labels (list[Tensor],Tensor): Gt_labels for all - proposals in a batch, each tensor in list has - shape (num_proposals,) when `concat=False`, otherwise just - a single tensor has shape (num_all_proposals,). - - label_weights (list[Tensor]): Labels_weights for - all proposals in a batch, each tensor in list has shape - (num_proposals,) when `concat=False`, otherwise just a - single tensor has shape (num_all_proposals,). - - bbox_targets (list[Tensor],Tensor): Regression target - for all proposals in a batch, each tensor in list has - shape (num_proposals, 4) when `concat=False`, otherwise - just a single tensor has shape (num_all_proposals, 4), - the last dimension 4 represents [tl_x, tl_y, br_x, br_y]. - - bbox_weights (list[tensor],Tensor): Regression weights for - all proposals in a batch, each tensor in list has shape - (num_proposals, 4) when `concat=False`, otherwise just a - single tensor has shape (num_all_proposals, 4). 
- """ - pos_inds_list = [res.pos_inds for res in sampling_results] - neg_inds_list = [res.neg_inds for res in sampling_results] - pos_bboxes_list = [res.pos_bboxes for res in sampling_results] - neg_bboxes_list = [res.neg_bboxes for res in sampling_results] - pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results] - pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results] - labels, label_weights, bbox_targets, bbox_weights = multi_apply( - self._get_target_single, - pos_inds_list, - neg_inds_list, - pos_bboxes_list, - neg_bboxes_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - cfg=rcnn_train_cfg) - if concat: - labels = torch.cat(labels, 0) - label_weights = torch.cat(label_weights, 0) - bbox_targets = torch.cat(bbox_targets, 0) - bbox_weights = torch.cat(bbox_weights, 0) - return labels, label_weights, bbox_targets, bbox_weights diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py deleted file mode 100644 index 2a38d591..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py +++ /dev/null @@ -1,178 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, ModuleList - -from mmdet.models.backbones.resnet import Bottleneck -from mmdet.models.builder import HEADS -from .bbox_head import BBoxHead - - -class BasicResBlock(BaseModule): - """Basic residual block. - - This block is a little different from the block in the ResNet backbone. - The kernel size of conv1 is 1 in this block while 3 in ResNet BasicBlock. - - Args: - in_channels (int): Channels of the input feature map. - out_channels (int): Channels of the output feature map. - conv_cfg (dict): The config dict for convolution layers. - norm_cfg (dict): The config dict for normalization layers. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - init_cfg=None): - super(BasicResBlock, self).__init__(init_cfg) - - # main path - self.conv1 = ConvModule( - in_channels, - in_channels, - kernel_size=3, - padding=1, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg) - self.conv2 = ConvModule( - in_channels, - out_channels, - kernel_size=1, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - # identity path - self.conv_identity = ConvModule( - in_channels, - out_channels, - kernel_size=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - identity = x - - x = self.conv1(x) - x = self.conv2(x) - - identity = self.conv_identity(identity) - out = x + identity - - out = self.relu(out) - return out - - -@HEADS.register_module() -class DoubleConvFCBBoxHead(BBoxHead): - r"""Bbox head used in Double-Head R-CNN - - .. 
code-block:: none - - /-> cls - /-> shared convs -> - \-> reg - roi features - /-> cls - \-> shared fc -> - \-> reg - """ # noqa: W605 - - def __init__(self, - num_convs=0, - num_fcs=0, - conv_out_channels=1024, - fc_out_channels=1024, - conv_cfg=None, - norm_cfg=dict(type='BN'), - init_cfg=dict( - type='Normal', - override=[ - dict(type='Normal', name='fc_cls', std=0.01), - dict(type='Normal', name='fc_reg', std=0.001), - dict( - type='Xavier', - name='fc_branch', - distribution='uniform') - ]), - **kwargs): - kwargs.setdefault('with_avg_pool', True) - super(DoubleConvFCBBoxHead, self).__init__(init_cfg=init_cfg, **kwargs) - assert self.with_avg_pool - assert num_convs > 0 - assert num_fcs > 0 - self.num_convs = num_convs - self.num_fcs = num_fcs - self.conv_out_channels = conv_out_channels - self.fc_out_channels = fc_out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - # increase the channel of input features - self.res_block = BasicResBlock(self.in_channels, - self.conv_out_channels) - - # add conv heads - self.conv_branch = self._add_conv_branch() - # add fc heads - self.fc_branch = self._add_fc_branch() - - out_dim_reg = 4 if self.reg_class_agnostic else 4 * self.num_classes - self.fc_reg = nn.Linear(self.conv_out_channels, out_dim_reg) - - self.fc_cls = nn.Linear(self.fc_out_channels, self.num_classes + 1) - self.relu = nn.ReLU(inplace=True) - - def _add_conv_branch(self): - """Add the fc branch which consists of a sequential of conv layers.""" - branch_convs = ModuleList() - for i in range(self.num_convs): - branch_convs.append( - Bottleneck( - inplanes=self.conv_out_channels, - planes=self.conv_out_channels // 4, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - return branch_convs - - def _add_fc_branch(self): - """Add the fc branch which consists of a sequential of fc layers.""" - branch_fcs = ModuleList() - for i in range(self.num_fcs): - fc_in_channels = ( - self.in_channels * - self.roi_feat_area if i == 0 else self.fc_out_channels) - branch_fcs.append(nn.Linear(fc_in_channels, self.fc_out_channels)) - return branch_fcs - - def forward(self, x_cls, x_reg): - # conv head - x_conv = self.res_block(x_reg) - - for conv in self.conv_branch: - x_conv = conv(x_conv) - - if self.with_avg_pool: - x_conv = self.avg_pool(x_conv) - - x_conv = x_conv.view(x_conv.size(0), -1) - bbox_pred = self.fc_reg(x_conv) - - # fc head - x_fc = x_cls.view(x_cls.size(0), -1) - for fc in self.fc_branch: - x_fc = self.relu(fc(x_fc)) - - cls_score = self.fc_cls(x_fc) - - return cls_score, bbox_pred diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/sabl_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/sabl_head.py deleted file mode 100644 index 0ce986b9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/sabl_head.py +++ /dev/null @@ -1,596 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, force_fp32 - -from mmdet.core import build_bbox_coder, multi_apply, multiclass_nms -from mmdet.models.builder import HEADS, build_loss -from mmdet.models.losses import accuracy - - -@HEADS.register_module() -class SABLHead(BaseModule): - """Side-Aware Boundary Localization (SABL) for RoI-Head. - - Side-Aware features are extracted by conv layers - with an attention mechanism. 
- Boundary Localization with Bucketing and Bucketing Guided Rescoring - are implemented in BucketingBBoxCoder. - - Please refer to https://arxiv.org/abs/1912.04260 for more details. - - Args: - cls_in_channels (int): Input channels of cls RoI feature. \ - Defaults to 256. - reg_in_channels (int): Input channels of reg RoI feature. \ - Defaults to 256. - roi_feat_size (int): Size of RoI features. Defaults to 7. - reg_feat_up_ratio (int): Upsample ratio of reg features. \ - Defaults to 2. - reg_pre_kernel (int): Kernel of 2D conv layers before \ - attention pooling. Defaults to 3. - reg_post_kernel (int): Kernel of 1D conv layers after \ - attention pooling. Defaults to 3. - reg_pre_num (int): Number of pre convs. Defaults to 2. - reg_post_num (int): Number of post convs. Defaults to 1. - num_classes (int): Number of classes in dataset. Defaults to 80. - cls_out_channels (int): Hidden channels in cls fcs. Defaults to 1024. - reg_offset_out_channels (int): Hidden and output channel \ - of reg offset branch. Defaults to 256. - reg_cls_out_channels (int): Hidden and output channel \ - of reg cls branch. Defaults to 256. - num_cls_fcs (int): Number of fcs for cls branch. Defaults to 1. - num_reg_fcs (int): Number of fcs for reg branch.. Defaults to 0. - reg_class_agnostic (bool): Class agnostic regression or not. \ - Defaults to True. - norm_cfg (dict): Config of norm layers. Defaults to None. - bbox_coder (dict): Config of bbox coder. Defaults 'BucketingBBoxCoder'. - loss_cls (dict): Config of classification loss. - loss_bbox_cls (dict): Config of classification loss for bbox branch. - loss_bbox_reg (dict): Config of regression loss for bbox branch. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - num_classes, - cls_in_channels=256, - reg_in_channels=256, - roi_feat_size=7, - reg_feat_up_ratio=2, - reg_pre_kernel=3, - reg_post_kernel=3, - reg_pre_num=2, - reg_post_num=1, - cls_out_channels=1024, - reg_offset_out_channels=256, - reg_cls_out_channels=256, - num_cls_fcs=1, - num_reg_fcs=0, - reg_class_agnostic=True, - norm_cfg=None, - bbox_coder=dict( - type='BucketingBBoxCoder', - num_buckets=14, - scale_factor=1.7), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox_reg=dict( - type='SmoothL1Loss', beta=0.1, loss_weight=1.0), - init_cfg=None): - super(SABLHead, self).__init__(init_cfg) - self.cls_in_channels = cls_in_channels - self.reg_in_channels = reg_in_channels - self.roi_feat_size = roi_feat_size - self.reg_feat_up_ratio = int(reg_feat_up_ratio) - self.num_buckets = bbox_coder['num_buckets'] - assert self.reg_feat_up_ratio // 2 >= 1 - self.up_reg_feat_size = roi_feat_size * self.reg_feat_up_ratio - assert self.up_reg_feat_size == bbox_coder['num_buckets'] - self.reg_pre_kernel = reg_pre_kernel - self.reg_post_kernel = reg_post_kernel - self.reg_pre_num = reg_pre_num - self.reg_post_num = reg_post_num - self.num_classes = num_classes - self.cls_out_channels = cls_out_channels - self.reg_offset_out_channels = reg_offset_out_channels - self.reg_cls_out_channels = reg_cls_out_channels - self.num_cls_fcs = num_cls_fcs - self.num_reg_fcs = num_reg_fcs - self.reg_class_agnostic = reg_class_agnostic - assert self.reg_class_agnostic - self.norm_cfg = norm_cfg - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox_cls = build_loss(loss_bbox_cls) - 
self.loss_bbox_reg = build_loss(loss_bbox_reg) - - self.cls_fcs = self._add_fc_branch(self.num_cls_fcs, - self.cls_in_channels, - self.roi_feat_size, - self.cls_out_channels) - - self.side_num = int(np.ceil(self.num_buckets / 2)) - - if self.reg_feat_up_ratio > 1: - self.upsample_x = nn.ConvTranspose1d( - reg_in_channels, - reg_in_channels, - self.reg_feat_up_ratio, - stride=self.reg_feat_up_ratio) - self.upsample_y = nn.ConvTranspose1d( - reg_in_channels, - reg_in_channels, - self.reg_feat_up_ratio, - stride=self.reg_feat_up_ratio) - - self.reg_pre_convs = nn.ModuleList() - for i in range(self.reg_pre_num): - reg_pre_conv = ConvModule( - reg_in_channels, - reg_in_channels, - kernel_size=reg_pre_kernel, - padding=reg_pre_kernel // 2, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU')) - self.reg_pre_convs.append(reg_pre_conv) - - self.reg_post_conv_xs = nn.ModuleList() - for i in range(self.reg_post_num): - reg_post_conv_x = ConvModule( - reg_in_channels, - reg_in_channels, - kernel_size=(1, reg_post_kernel), - padding=(0, reg_post_kernel // 2), - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU')) - self.reg_post_conv_xs.append(reg_post_conv_x) - self.reg_post_conv_ys = nn.ModuleList() - for i in range(self.reg_post_num): - reg_post_conv_y = ConvModule( - reg_in_channels, - reg_in_channels, - kernel_size=(reg_post_kernel, 1), - padding=(reg_post_kernel // 2, 0), - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU')) - self.reg_post_conv_ys.append(reg_post_conv_y) - - self.reg_conv_att_x = nn.Conv2d(reg_in_channels, 1, 1) - self.reg_conv_att_y = nn.Conv2d(reg_in_channels, 1, 1) - - self.fc_cls = nn.Linear(self.cls_out_channels, self.num_classes + 1) - self.relu = nn.ReLU(inplace=True) - - self.reg_cls_fcs = self._add_fc_branch(self.num_reg_fcs, - self.reg_in_channels, 1, - self.reg_cls_out_channels) - self.reg_offset_fcs = self._add_fc_branch(self.num_reg_fcs, - self.reg_in_channels, 1, - self.reg_offset_out_channels) - self.fc_reg_cls = nn.Linear(self.reg_cls_out_channels, 1) - self.fc_reg_offset = nn.Linear(self.reg_offset_out_channels, 1) - - if init_cfg is None: - self.init_cfg = [ - dict( - type='Xavier', - layer='Linear', - distribution='uniform', - override=[ - dict(type='Normal', name='reg_conv_att_x', std=0.01), - dict(type='Normal', name='reg_conv_att_y', std=0.01), - dict(type='Normal', name='fc_reg_cls', std=0.01), - dict(type='Normal', name='fc_cls', std=0.01), - dict(type='Normal', name='fc_reg_offset', std=0.001) - ]) - ] - if self.reg_feat_up_ratio > 1: - self.init_cfg += [ - dict( - type='Kaiming', - distribution='normal', - override=[ - dict(name='upsample_x'), - dict(name='upsample_y') - ]) - ] - - @property - def custom_cls_channels(self): - return getattr(self.loss_cls, 'custom_cls_channels', False) - - @property - def custom_activation(self): - return getattr(self.loss_cls, 'custom_activation', False) - - @property - def custom_accuracy(self): - return getattr(self.loss_cls, 'custom_accuracy', False) - - def _add_fc_branch(self, num_branch_fcs, in_channels, roi_feat_size, - fc_out_channels): - in_channels = in_channels * roi_feat_size * roi_feat_size - branch_fcs = nn.ModuleList() - for i in range(num_branch_fcs): - fc_in_channels = (in_channels if i == 0 else fc_out_channels) - branch_fcs.append(nn.Linear(fc_in_channels, fc_out_channels)) - return branch_fcs - - def cls_forward(self, cls_x): - cls_x = cls_x.view(cls_x.size(0), -1) - for fc in self.cls_fcs: - cls_x = self.relu(fc(cls_x)) - cls_score = self.fc_cls(cls_x) - return cls_score - - def attention_pool(self, reg_x): - 
"""Extract direction-specific features fx and fy with attention - methanism.""" - reg_fx = reg_x - reg_fy = reg_x - reg_fx_att = self.reg_conv_att_x(reg_fx).sigmoid() - reg_fy_att = self.reg_conv_att_y(reg_fy).sigmoid() - reg_fx_att = reg_fx_att / reg_fx_att.sum(dim=2).unsqueeze(2) - reg_fy_att = reg_fy_att / reg_fy_att.sum(dim=3).unsqueeze(3) - reg_fx = (reg_fx * reg_fx_att).sum(dim=2) - reg_fy = (reg_fy * reg_fy_att).sum(dim=3) - return reg_fx, reg_fy - - def side_aware_feature_extractor(self, reg_x): - """Refine and extract side-aware features without split them.""" - for reg_pre_conv in self.reg_pre_convs: - reg_x = reg_pre_conv(reg_x) - reg_fx, reg_fy = self.attention_pool(reg_x) - - if self.reg_post_num > 0: - reg_fx = reg_fx.unsqueeze(2) - reg_fy = reg_fy.unsqueeze(3) - for i in range(self.reg_post_num): - reg_fx = self.reg_post_conv_xs[i](reg_fx) - reg_fy = self.reg_post_conv_ys[i](reg_fy) - reg_fx = reg_fx.squeeze(2) - reg_fy = reg_fy.squeeze(3) - if self.reg_feat_up_ratio > 1: - reg_fx = self.relu(self.upsample_x(reg_fx)) - reg_fy = self.relu(self.upsample_y(reg_fy)) - reg_fx = torch.transpose(reg_fx, 1, 2) - reg_fy = torch.transpose(reg_fy, 1, 2) - return reg_fx.contiguous(), reg_fy.contiguous() - - def reg_pred(self, x, offset_fcs, cls_fcs): - """Predict bucketing estimation (cls_pred) and fine regression (offset - pred) with side-aware features.""" - x_offset = x.view(-1, self.reg_in_channels) - x_cls = x.view(-1, self.reg_in_channels) - - for fc in offset_fcs: - x_offset = self.relu(fc(x_offset)) - for fc in cls_fcs: - x_cls = self.relu(fc(x_cls)) - offset_pred = self.fc_reg_offset(x_offset) - cls_pred = self.fc_reg_cls(x_cls) - - offset_pred = offset_pred.view(x.size(0), -1) - cls_pred = cls_pred.view(x.size(0), -1) - - return offset_pred, cls_pred - - def side_aware_split(self, feat): - """Split side-aware features aligned with orders of bucketing - targets.""" - l_end = int(np.ceil(self.up_reg_feat_size / 2)) - r_start = int(np.floor(self.up_reg_feat_size / 2)) - feat_fl = feat[:, :l_end] - feat_fr = feat[:, r_start:].flip(dims=(1, )) - feat_fl = feat_fl.contiguous() - feat_fr = feat_fr.contiguous() - feat = torch.cat([feat_fl, feat_fr], dim=-1) - return feat - - def bbox_pred_split(self, bbox_pred, num_proposals_per_img): - """Split batch bbox prediction back to each image.""" - bucket_cls_preds, bucket_offset_preds = bbox_pred - bucket_cls_preds = bucket_cls_preds.split(num_proposals_per_img, 0) - bucket_offset_preds = bucket_offset_preds.split( - num_proposals_per_img, 0) - bbox_pred = tuple(zip(bucket_cls_preds, bucket_offset_preds)) - return bbox_pred - - def reg_forward(self, reg_x): - outs = self.side_aware_feature_extractor(reg_x) - edge_offset_preds = [] - edge_cls_preds = [] - reg_fx = outs[0] - reg_fy = outs[1] - offset_pred_x, cls_pred_x = self.reg_pred(reg_fx, self.reg_offset_fcs, - self.reg_cls_fcs) - offset_pred_y, cls_pred_y = self.reg_pred(reg_fy, self.reg_offset_fcs, - self.reg_cls_fcs) - offset_pred_x = self.side_aware_split(offset_pred_x) - offset_pred_y = self.side_aware_split(offset_pred_y) - cls_pred_x = self.side_aware_split(cls_pred_x) - cls_pred_y = self.side_aware_split(cls_pred_y) - edge_offset_preds = torch.cat([offset_pred_x, offset_pred_y], dim=-1) - edge_cls_preds = torch.cat([cls_pred_x, cls_pred_y], dim=-1) - - return (edge_cls_preds, edge_offset_preds) - - def forward(self, x): - - bbox_pred = self.reg_forward(x) - cls_score = self.cls_forward(x) - - return cls_score, bbox_pred - - def get_targets(self, sampling_results, gt_bboxes, 
gt_labels, - rcnn_train_cfg): - pos_proposals = [res.pos_bboxes for res in sampling_results] - neg_proposals = [res.neg_bboxes for res in sampling_results] - pos_gt_bboxes = [res.pos_gt_bboxes for res in sampling_results] - pos_gt_labels = [res.pos_gt_labels for res in sampling_results] - cls_reg_targets = self.bucket_target(pos_proposals, neg_proposals, - pos_gt_bboxes, pos_gt_labels, - rcnn_train_cfg) - (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) = cls_reg_targets - return (labels, label_weights, (bucket_cls_targets, - bucket_offset_targets), - (bucket_cls_weights, bucket_offset_weights)) - - def bucket_target(self, - pos_proposals_list, - neg_proposals_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - rcnn_train_cfg, - concat=True): - (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) = multi_apply( - self._bucket_target_single, - pos_proposals_list, - neg_proposals_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - cfg=rcnn_train_cfg) - - if concat: - labels = torch.cat(labels, 0) - label_weights = torch.cat(label_weights, 0) - bucket_cls_targets = torch.cat(bucket_cls_targets, 0) - bucket_cls_weights = torch.cat(bucket_cls_weights, 0) - bucket_offset_targets = torch.cat(bucket_offset_targets, 0) - bucket_offset_weights = torch.cat(bucket_offset_weights, 0) - return (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) - - def _bucket_target_single(self, pos_proposals, neg_proposals, - pos_gt_bboxes, pos_gt_labels, cfg): - """Compute bucketing estimation targets and fine regression targets for - a single image. - - Args: - pos_proposals (Tensor): positive proposals of a single image, - Shape (n_pos, 4) - neg_proposals (Tensor): negative proposals of a single image, - Shape (n_neg, 4). - pos_gt_bboxes (Tensor): gt bboxes assigned to positive proposals - of a single image, Shape (n_pos, 4). - pos_gt_labels (Tensor): gt labels assigned to positive proposals - of a single image, Shape (n_pos, ). - cfg (dict): Config of calculating targets - - Returns: - tuple: - - - labels (Tensor): Labels in a single image. \ - Shape (n,). - - label_weights (Tensor): Label weights in a single image.\ - Shape (n,) - - bucket_cls_targets (Tensor): Bucket cls targets in \ - a single image. Shape (n, num_buckets*2). - - bucket_cls_weights (Tensor): Bucket cls weights in \ - a single image. Shape (n, num_buckets*2). - - bucket_offset_targets (Tensor): Bucket offset targets \ - in a single image. Shape (n, num_buckets*2). - - bucket_offset_targets (Tensor): Bucket offset weights \ - in a single image. Shape (n, num_buckets*2). 
- """ - num_pos = pos_proposals.size(0) - num_neg = neg_proposals.size(0) - num_samples = num_pos + num_neg - labels = pos_gt_bboxes.new_full((num_samples, ), - self.num_classes, - dtype=torch.long) - label_weights = pos_proposals.new_zeros(num_samples) - bucket_cls_targets = pos_proposals.new_zeros(num_samples, - 4 * self.side_num) - bucket_cls_weights = pos_proposals.new_zeros(num_samples, - 4 * self.side_num) - bucket_offset_targets = pos_proposals.new_zeros( - num_samples, 4 * self.side_num) - bucket_offset_weights = pos_proposals.new_zeros( - num_samples, 4 * self.side_num) - if num_pos > 0: - labels[:num_pos] = pos_gt_labels - label_weights[:num_pos] = 1.0 - (pos_bucket_offset_targets, pos_bucket_offset_weights, - pos_bucket_cls_targets, - pos_bucket_cls_weights) = self.bbox_coder.encode( - pos_proposals, pos_gt_bboxes) - bucket_cls_targets[:num_pos, :] = pos_bucket_cls_targets - bucket_cls_weights[:num_pos, :] = pos_bucket_cls_weights - bucket_offset_targets[:num_pos, :] = pos_bucket_offset_targets - bucket_offset_weights[:num_pos, :] = pos_bucket_offset_weights - if num_neg > 0: - label_weights[-num_neg:] = 1.0 - return (labels, label_weights, bucket_cls_targets, bucket_cls_weights, - bucket_offset_targets, bucket_offset_weights) - - def loss(self, - cls_score, - bbox_pred, - rois, - labels, - label_weights, - bbox_targets, - bbox_weights, - reduction_override=None): - losses = dict() - if cls_score is not None: - avg_factor = max(torch.sum(label_weights > 0).float().item(), 1.) - losses['loss_cls'] = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=avg_factor, - reduction_override=reduction_override) - losses['acc'] = accuracy(cls_score, labels) - - if bbox_pred is not None: - bucket_cls_preds, bucket_offset_preds = bbox_pred - bucket_cls_targets, bucket_offset_targets = bbox_targets - bucket_cls_weights, bucket_offset_weights = bbox_weights - # edge cls - bucket_cls_preds = bucket_cls_preds.view(-1, self.side_num) - bucket_cls_targets = bucket_cls_targets.view(-1, self.side_num) - bucket_cls_weights = bucket_cls_weights.view(-1, self.side_num) - losses['loss_bbox_cls'] = self.loss_bbox_cls( - bucket_cls_preds, - bucket_cls_targets, - bucket_cls_weights, - avg_factor=bucket_cls_targets.size(0), - reduction_override=reduction_override) - - losses['loss_bbox_reg'] = self.loss_bbox_reg( - bucket_offset_preds, - bucket_offset_targets, - bucket_offset_weights, - avg_factor=bucket_offset_targets.size(0), - reduction_override=reduction_override) - - return losses - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def get_bboxes(self, - rois, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=False, - cfg=None): - if isinstance(cls_score, list): - cls_score = sum(cls_score) / float(len(cls_score)) - scores = F.softmax(cls_score, dim=1) if cls_score is not None else None - - if bbox_pred is not None: - bboxes, confidences = self.bbox_coder.decode( - rois[:, 1:], bbox_pred, img_shape) - else: - bboxes = rois[:, 1:].clone() - confidences = None - if img_shape is not None: - bboxes[:, [0, 2]].clamp_(min=0, max=img_shape[1] - 1) - bboxes[:, [1, 3]].clamp_(min=0, max=img_shape[0] - 1) - - if rescale and bboxes.size(0) > 0: - if isinstance(scale_factor, float): - bboxes /= scale_factor - else: - bboxes /= torch.from_numpy(scale_factor).to(bboxes.device) - - if cfg is None: - return bboxes, scores - else: - det_bboxes, det_labels = multiclass_nms( - bboxes, - scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=confidences) - - return 
det_bboxes, det_labels - - @force_fp32(apply_to=('bbox_preds', )) - def refine_bboxes(self, rois, labels, bbox_preds, pos_is_gts, img_metas): - """Refine bboxes during training. - - Args: - rois (Tensor): Shape (n*bs, 5), where n is image number per GPU, - and bs is the sampled RoIs per image. - labels (Tensor): Shape (n*bs, ). - bbox_preds (list[Tensor]): Shape [(n*bs, num_buckets*2), \ - (n*bs, num_buckets*2)]. - pos_is_gts (list[Tensor]): Flags indicating if each positive bbox - is a gt bbox. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Refined bboxes of each image in a mini-batch. - """ - img_ids = rois[:, 0].long().unique(sorted=True) - assert img_ids.numel() == len(img_metas) - - bboxes_list = [] - for i in range(len(img_metas)): - inds = torch.nonzero( - rois[:, 0] == i, as_tuple=False).squeeze(dim=1) - num_rois = inds.numel() - - bboxes_ = rois[inds, 1:] - label_ = labels[inds] - edge_cls_preds, edge_offset_preds = bbox_preds - edge_cls_preds_ = edge_cls_preds[inds] - edge_offset_preds_ = edge_offset_preds[inds] - bbox_pred_ = [edge_cls_preds_, edge_offset_preds_] - img_meta_ = img_metas[i] - pos_is_gts_ = pos_is_gts[i] - - bboxes = self.regress_by_class(bboxes_, label_, bbox_pred_, - img_meta_) - # filter gt bboxes - pos_keep = 1 - pos_is_gts_ - keep_inds = pos_is_gts_.new_ones(num_rois) - keep_inds[:len(pos_is_gts_)] = pos_keep - - bboxes_list.append(bboxes[keep_inds.type(torch.bool)]) - - return bboxes_list - - @force_fp32(apply_to=('bbox_pred', )) - def regress_by_class(self, rois, label, bbox_pred, img_meta): - """Regress the bbox for the predicted class. Used in Cascade R-CNN. - - Args: - rois (Tensor): shape (n, 4) or (n, 5) - label (Tensor): shape (n, ) - bbox_pred (list[Tensor]): shape [(n, num_buckets *2), \ - (n, num_buckets *2)] - img_meta (dict): Image meta info. - - Returns: - Tensor: Regressed bboxes, the same shape as input rois. - """ - assert rois.size(1) == 4 or rois.size(1) == 5 - - if rois.size(1) == 4: - new_rois, _ = self.bbox_coder.decode(rois, bbox_pred, - img_meta['img_shape']) - else: - bboxes, _ = self.bbox_coder.decode(rois[:, 1:], bbox_pred, - img_meta['img_shape']) - new_rois = torch.cat((rois[:, [0]], bboxes), dim=1) - - return new_rois diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py deleted file mode 100644 index cf39ebef..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.builder import HEADS -from .convfc_bbox_head import ConvFCBBoxHead - - -@HEADS.register_module() -class SCNetBBoxHead(ConvFCBBoxHead): - """BBox head for `SCNet `_. - - This inherits ``ConvFCBBoxHead`` with modified forward() function, allow us - to get intermediate shared feature. 
- """ - - def _forward_shared(self, x): - """Forward function for shared part.""" - if self.num_shared_convs > 0: - for conv in self.shared_convs: - x = conv(x) - - if self.num_shared_fcs > 0: - if self.with_avg_pool: - x = self.avg_pool(x) - - x = x.flatten(1) - - for fc in self.shared_fcs: - x = self.relu(fc(x)) - - return x - - def _forward_cls_reg(self, x): - """Forward function for classification and regression parts.""" - x_cls = x - x_reg = x - - for conv in self.cls_convs: - x_cls = conv(x_cls) - if x_cls.dim() > 2: - if self.with_avg_pool: - x_cls = self.avg_pool(x_cls) - x_cls = x_cls.flatten(1) - for fc in self.cls_fcs: - x_cls = self.relu(fc(x_cls)) - - for conv in self.reg_convs: - x_reg = conv(x_reg) - if x_reg.dim() > 2: - if self.with_avg_pool: - x_reg = self.avg_pool(x_reg) - x_reg = x_reg.flatten(1) - for fc in self.reg_fcs: - x_reg = self.relu(fc(x_reg)) - - cls_score = self.fc_cls(x_cls) if self.with_cls else None - bbox_pred = self.fc_reg(x_reg) if self.with_reg else None - - return cls_score, bbox_pred - - def forward(self, x, return_shared_feat=False): - """Forward function. - - Args: - x (Tensor): input features - return_shared_feat (bool): If True, return cls-reg-shared feature. - - Return: - out (tuple[Tensor]): contain ``cls_score`` and ``bbox_pred``, - if ``return_shared_feat`` is True, append ``x_shared`` to the - returned tuple. - """ - x_shared = self._forward_shared(x) - out = self._forward_cls_reg(x_shared) - - if return_shared_feat: - out += (x_shared, ) - - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/cascade_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/cascade_roi_head.py deleted file mode 100644 index e17313f2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/cascade_roi_head.py +++ /dev/null @@ -1,631 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -from mmcv.runner import ModuleList - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, build_assigner, - build_sampler, merge_aug_bboxes, merge_aug_masks, - multiclass_nms) -from ..builder import HEADS, build_head, build_roi_extractor -from .base_roi_head import BaseRoIHead -from .test_mixins import BBoxTestMixin, MaskTestMixin - - -@HEADS.register_module() -class CascadeRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin): - """Cascade roi head including one bbox head and one mask head. - - https://arxiv.org/abs/1712.00726 - """ - - def __init__(self, - num_stages, - stage_loss_weights, - bbox_roi_extractor=None, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - shared_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - assert bbox_roi_extractor is not None - assert bbox_head is not None - assert shared_head is None, \ - 'Shared head is not supported in Cascade RCNN anymore' - - self.num_stages = num_stages - self.stage_loss_weights = stage_loss_weights - super(CascadeRoIHead, self).__init__( - bbox_roi_extractor=bbox_roi_extractor, - bbox_head=bbox_head, - mask_roi_extractor=mask_roi_extractor, - mask_head=mask_head, - shared_head=shared_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def init_bbox_head(self, bbox_roi_extractor, bbox_head): - """Initialize box head and box roi extractor. - - Args: - bbox_roi_extractor (dict): Config of box roi extractor. - bbox_head (dict): Config of box in box head. 
- """ - self.bbox_roi_extractor = ModuleList() - self.bbox_head = ModuleList() - if not isinstance(bbox_roi_extractor, list): - bbox_roi_extractor = [ - bbox_roi_extractor for _ in range(self.num_stages) - ] - if not isinstance(bbox_head, list): - bbox_head = [bbox_head for _ in range(self.num_stages)] - assert len(bbox_roi_extractor) == len(bbox_head) == self.num_stages - for roi_extractor, head in zip(bbox_roi_extractor, bbox_head): - self.bbox_roi_extractor.append(build_roi_extractor(roi_extractor)) - self.bbox_head.append(build_head(head)) - - def init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize mask head and mask roi extractor. - - Args: - mask_roi_extractor (dict): Config of mask roi extractor. - mask_head (dict): Config of mask in mask head. - """ - self.mask_head = nn.ModuleList() - if not isinstance(mask_head, list): - mask_head = [mask_head for _ in range(self.num_stages)] - assert len(mask_head) == self.num_stages - for head in mask_head: - self.mask_head.append(build_head(head)) - if mask_roi_extractor is not None: - self.share_roi_extractor = False - self.mask_roi_extractor = ModuleList() - if not isinstance(mask_roi_extractor, list): - mask_roi_extractor = [ - mask_roi_extractor for _ in range(self.num_stages) - ] - assert len(mask_roi_extractor) == self.num_stages - for roi_extractor in mask_roi_extractor: - self.mask_roi_extractor.append( - build_roi_extractor(roi_extractor)) - else: - self.share_roi_extractor = True - self.mask_roi_extractor = self.bbox_roi_extractor - - def init_assigner_sampler(self): - """Initialize assigner and sampler for each stage.""" - self.bbox_assigner = [] - self.bbox_sampler = [] - if self.train_cfg is not None: - for idx, rcnn_train_cfg in enumerate(self.train_cfg): - self.bbox_assigner.append( - build_assigner(rcnn_train_cfg.assigner)) - self.current_stage = idx - self.bbox_sampler.append( - build_sampler(rcnn_train_cfg.sampler, context=self)) - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask heads - if self.with_mask: - mask_rois = rois[:100] - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def _bbox_forward(self, stage, x, rois): - """Box head forward function used in both training and testing.""" - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - cls_score, bbox_pred = bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, stage, x, sampling_results, gt_bboxes, - gt_labels, rcnn_train_cfg): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(stage, x, rois) - bbox_targets = self.bbox_head[stage].get_targets( - sampling_results, gt_bboxes, gt_labels, rcnn_train_cfg) - loss_bbox = self.bbox_head[stage].loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update( - loss_bbox=loss_bbox, rois=rois, bbox_targets=bbox_targets) - 
return bbox_results - - def _mask_forward(self, stage, x, rois): - """Mask head forward function used in both training and testing.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - mask_pred = mask_head(mask_feats) - - mask_results = dict(mask_pred=mask_pred) - return mask_results - - def _mask_forward_train(self, - stage, - x, - sampling_results, - gt_masks, - rcnn_train_cfg, - bbox_feats=None): - """Run forward function and calculate loss for mask head in - training.""" - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward(stage, x, pos_rois) - - mask_targets = self.mask_head[stage].get_targets( - sampling_results, gt_masks, rcnn_train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head[stage].loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results.update(loss_mask=loss_mask) - return mask_results - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): list of region proposals. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - losses = dict() - for i in range(self.num_stages): - self.current_stage = i - rcnn_train_cfg = self.train_cfg[i] - lw = self.stage_loss_weights[i] - - # assign gts and sample proposals - sampling_results = [] - if self.with_bbox or self.with_mask: - bbox_assigner = self.bbox_assigner[i] - bbox_sampler = self.bbox_sampler[i] - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - - for j in range(num_imgs): - assign_result = bbox_assigner.assign( - proposal_list[j], gt_bboxes[j], gt_bboxes_ignore[j], - gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - # bbox head forward and loss - bbox_results = self._bbox_forward_train(i, x, sampling_results, - gt_bboxes, gt_labels, - rcnn_train_cfg) - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train( - i, x, sampling_results, gt_masks, rcnn_train_cfg, - bbox_results['bbox_feats']) - for name, value in mask_results['loss_mask'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # refine bboxes - if i < self.num_stages - 1: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - # bbox_targets is a tuple - roi_labels = bbox_results['bbox_targets'][0] - with torch.no_grad(): - cls_score = bbox_results['cls_score'] - if self.bbox_head[i].custom_activation: - cls_score = self.bbox_head[i].loss_cls.get_activation( - cls_score) - - # Empty proposal. - if cls_score.numel() == 0: - break - - roi_labels = torch.where( - roi_labels == self.bbox_head[i].num_classes, - cls_score[:, :-1].argmax(1), roi_labels) - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, img_metas) - - return losses - - def simple_test(self, x, proposal_list, img_metas, rescale=False): - """Test without augmentation. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (batch_size, c, h, w). - proposal_list (list(Tensor)): Proposals from rpn head. - Each has shape (num_proposals, 5), last dimension - 5 represent (x1, y1, x2, y2, score). - img_metas (list[dict]): Meta information of images. - rescale (bool): Whether to rescale the results to - the original image. Default: True. - - Returns: - list[list[np.ndarray]] or list[tuple]: When no mask branch, - it is bbox results of each image and classes with type - `list[list[np.ndarray]]`. The outer list - corresponds to each image. The inner list - corresponds to each class. When the model has mask branch, - it contains bbox results and mask results. - The outer list corresponds to each image, and first element - of tuple is bbox results, second element is mask results. - """ - assert self.with_bbox, 'Bbox head must be implemented.' 
- num_imgs = len(proposal_list) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # "ms" in variable names means multi-stage - ms_bbox_result = {} - ms_segm_result = {} - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - rois = bbox2roi(proposal_list) - - if rois.shape[0] == 0: - # There is no proposal in the whole batch - bbox_results = [[ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.bbox_head[-1].num_classes) - ]] * num_imgs - - if self.with_mask: - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - results = list(zip(bbox_results, segm_results)) - else: - results = bbox_results - - return results - - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple( - len(proposals) for proposals in proposal_list) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - if isinstance(bbox_pred, torch.Tensor): - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - else: - bbox_pred = self.bbox_head[i].bbox_pred_split( - bbox_pred, num_proposals_per_img) - ms_scores.append(cls_score) - - if i < self.num_stages - 1: - if self.bbox_head[i].custom_activation: - cls_score = [ - self.bbox_head[i].loss_cls.get_activation(s) - for s in cls_score - ] - refine_rois_list = [] - for j in range(num_imgs): - if rois[j].shape[0] > 0: - bbox_label = cls_score[j][:, :-1].argmax(dim=1) - refined_rois = self.bbox_head[i].regress_by_class( - rois[j], bbox_label, bbox_pred[j], img_metas[j]) - refine_rois_list.append(refined_rois) - rois = torch.cat(refine_rois_list) - - # average scores of each image by stages - cls_score = [ - sum([score[i] for score in ms_scores]) / float(len(ms_scores)) - for i in range(num_imgs) - ] - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(num_imgs): - det_bbox, det_label = self.bbox_head[-1].get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head[-1].num_classes) - for i in range(num_imgs) - ] - ms_bbox_result['ensemble'] = bbox_results - - if self.with_mask: - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - else: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - mask_rois = bbox2roi(_bboxes) - num_mask_rois_per_img = tuple( - _bbox.size(0) for _bbox in _bboxes) - aug_masks = [] - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - mask_pred = mask_results['mask_pred'] - # split batch mask prediction back to each image - mask_pred = mask_pred.split(num_mask_rois_per_img, 0) - aug_masks.append([ - 
m.sigmoid().cpu().detach().numpy() for m in mask_pred - ]) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] - for _ in range(self.mask_head[-1].num_classes)]) - else: - aug_mask = [mask[i] for mask in aug_masks] - merged_masks = merge_aug_masks( - aug_mask, [[img_metas[i]]] * self.num_stages, - rcnn_test_cfg) - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, _bboxes[i], det_labels[i], - rcnn_test_cfg, ori_shapes[i], scale_factors[i], - rescale) - segm_results.append(segm_result) - ms_segm_result['ensemble'] = segm_results - - if self.with_mask: - results = list( - zip(ms_bbox_result['ensemble'], ms_segm_result['ensemble'])) - else: - results = ms_bbox_result['ensemble'] - - return results - - def aug_test(self, features, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - rcnn_test_cfg = self.test_cfg - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(features, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - # "ms" in variable names means multi-stage - ms_scores = [] - - rois = bbox2roi([proposals]) - - if rois.shape[0] == 0: - # There is no proposal in the single image - aug_bboxes.append(rois.new_zeros(0, 4)) - aug_scores.append(rois.new_zeros(0, 1)) - continue - - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - ms_scores.append(bbox_results['cls_score']) - - if i < self.num_stages - 1: - cls_score = bbox_results['cls_score'] - if self.bbox_head[i].custom_activation: - cls_score = self.bbox_head[i].loss_cls.get_activation( - cls_score) - bbox_label = cls_score[:, :-1].argmax(dim=1) - rois = self.bbox_head[i].regress_by_class( - rois, bbox_label, bbox_results['bbox_pred'], - img_meta[0]) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bboxes, scores = self.bbox_head[-1].get_bboxes( - rois, - cls_score, - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - - bbox_result = bbox2result(det_bboxes, det_labels, - self.bbox_head[-1].num_classes) - - if self.with_mask: - if det_bboxes.shape[0] == 0: - segm_result = [[] - for _ in range(self.mask_head[-1].num_classes)] - else: - aug_masks = [] - aug_img_metas = [] - for x, img_meta in zip(features, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - 
aug_img_metas.append(img_meta) - merged_masks = merge_aug_masks(aug_masks, aug_img_metas, - self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - dummy_scale_factor = np.ones(4) - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - rcnn_test_cfg, - ori_shape, - scale_factor=dummy_scale_factor, - rescale=False) - return [(bbox_result, segm_result)] - else: - return [bbox_result] - - def onnx_export(self, x, proposals, img_metas): - - assert self.with_bbox, 'Bbox head must be implemented.' - assert proposals.shape[0] == 1, 'Only support one input image ' \ - 'while in exporting to ONNX' - # remove the scores - rois = proposals[..., :-1] - batch_size = rois.shape[0] - num_proposals_per_img = rois.shape[1] - # Eliminate the batch dimension - rois = rois.view(-1, 4) - - # add dummy batch index - rois = torch.cat([rois.new_zeros(rois.shape[0], 1), rois], dim=-1) - - max_shape = img_metas[0]['img_shape_for_onnx'] - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - # Recover the batch dimension - rois = rois.reshape(batch_size, num_proposals_per_img, - rois.size(-1)) - cls_score = cls_score.reshape(batch_size, num_proposals_per_img, - cls_score.size(-1)) - bbox_pred = bbox_pred.reshape(batch_size, num_proposals_per_img, 4) - ms_scores.append(cls_score) - if i < self.num_stages - 1: - assert self.bbox_head[i].reg_class_agnostic - new_rois = self.bbox_head[i].bbox_coder.decode( - rois[..., 1:], bbox_pred, max_shape=max_shape) - rois = new_rois.reshape(-1, new_rois.shape[-1]) - # add dummy batch index - rois = torch.cat([rois.new_zeros(rois.shape[0], 1), rois], - dim=-1) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bbox_pred = bbox_pred.reshape(batch_size, num_proposals_per_img, 4) - rois = rois.reshape(batch_size, num_proposals_per_img, -1) - det_bboxes, det_labels = self.bbox_head[-1].onnx_export( - rois, cls_score, bbox_pred, max_shape, cfg=rcnn_test_cfg) - - if not self.with_mask: - return det_bboxes, det_labels - else: - batch_index = torch.arange( - det_bboxes.size(0), - device=det_bboxes.device).float().view(-1, 1, 1).expand( - det_bboxes.size(0), det_bboxes.size(1), 1) - rois = det_bboxes[..., :4] - mask_rois = torch.cat([batch_index, rois], dim=-1) - mask_rois = mask_rois.view(-1, 5) - aug_masks = [] - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - mask_pred = mask_results['mask_pred'] - aug_masks.append(mask_pred) - max_shape = img_metas[0]['img_shape_for_onnx'] - # calculate the mean of masks from several stage - mask_pred = sum(aug_masks) / len(aug_masks) - segm_results = self.mask_head[-1].onnx_export( - mask_pred, rois.reshape(-1, 4), det_labels.reshape(-1), - self.test_cfg, max_shape) - segm_results = segm_results.reshape(batch_size, - det_bboxes.shape[1], - max_shape[0], max_shape[1]) - return det_bboxes, det_labels, segm_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/double_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/double_roi_head.py deleted file mode 100644 index 895b5d30..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/double_roi_head.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import HEADS -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class DoubleHeadRoIHead(StandardRoIHead): - """RoI head for Double Head RCNN. - - https://arxiv.org/abs/1904.06493 - """ - - def __init__(self, reg_roi_scale_factor, **kwargs): - super(DoubleHeadRoIHead, self).__init__(**kwargs) - self.reg_roi_scale_factor = reg_roi_scale_factor - - def _bbox_forward(self, x, rois): - """Box head forward function used in both training and testing time.""" - bbox_cls_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - bbox_reg_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], - rois, - roi_scale_factor=self.reg_roi_scale_factor) - if self.with_shared_head: - bbox_cls_feats = self.shared_head(bbox_cls_feats) - bbox_reg_feats = self.shared_head(bbox_reg_feats) - cls_score, bbox_pred = self.bbox_head(bbox_cls_feats, bbox_reg_feats) - - bbox_results = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - bbox_feats=bbox_cls_feats) - return bbox_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/dynamic_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/dynamic_roi_head.py deleted file mode 100644 index 4c2b6cda..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/dynamic_roi_head.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core import bbox2roi -from mmdet.models.losses import SmoothL1Loss -from ..builder import HEADS -from .standard_roi_head import StandardRoIHead - -EPS = 1e-15 - - -@HEADS.register_module() -class DynamicRoIHead(StandardRoIHead): - """RoI head for `Dynamic R-CNN `_.""" - - def __init__(self, **kwargs): - super(DynamicRoIHead, self).__init__(**kwargs) - assert isinstance(self.bbox_head.loss_bbox, SmoothL1Loss) - # the IoU history of the past `update_iter_interval` iterations - self.iou_history = [] - # the beta history of the past `update_iter_interval` iterations - self.beta_history = [] - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """Forward function for training. - - Args: - x (list[Tensor]): list of multi-level img features. - - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - proposals (list[Tensors]): list of region proposals. - - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - - gt_labels (list[Tensor]): class indices corresponding to each box - - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - cur_iou = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - # record the `iou_topk`-th largest IoU in an image - iou_topk = min(self.train_cfg.dynamic_rcnn.iou_topk, - len(assign_result.max_overlaps)) - ious, _ = torch.topk(assign_result.max_overlaps, iou_topk) - cur_iou.append(ious[-1].item()) - sampling_results.append(sampling_result) - # average the current IoUs over images - cur_iou = np.mean(cur_iou) - self.iou_history.append(cur_iou) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - # update IoU threshold and SmoothL1 beta - update_iter_interval = self.train_cfg.dynamic_rcnn.update_iter_interval - if len(self.iou_history) % update_iter_interval == 0: - new_iou_thr, new_beta = self.update_hyperparameters() - - return losses - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - num_imgs = len(img_metas) - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - # record the `beta_topk`-th smallest target - # `bbox_targets[2]` and `bbox_targets[3]` stand for bbox_targets - # and bbox_weights, respectively - pos_inds = bbox_targets[3][:, 0].nonzero().squeeze(1) - num_pos = len(pos_inds) - cur_target = bbox_targets[2][pos_inds, :2].abs().mean(dim=1) - beta_topk = min(self.train_cfg.dynamic_rcnn.beta_topk * num_imgs, - num_pos) - cur_target = torch.kthvalue(cur_target, beta_topk)[0].item() - self.beta_history.append(cur_target) - loss_bbox = self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def update_hyperparameters(self): - """Update hyperparameters like IoU thresholds for assigner and beta for - SmoothL1 loss based on the training statistics. - - Returns: - tuple[float]: the updated ``iou_thr`` and ``beta``. 
- """ - new_iou_thr = max(self.train_cfg.dynamic_rcnn.initial_iou, - np.mean(self.iou_history)) - self.iou_history = [] - self.bbox_assigner.pos_iou_thr = new_iou_thr - self.bbox_assigner.neg_iou_thr = new_iou_thr - self.bbox_assigner.min_pos_iou = new_iou_thr - if (np.median(self.beta_history) < EPS): - # avoid 0 or too small value for new_beta - new_beta = self.bbox_head.loss_bbox.beta - else: - new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta, - np.median(self.beta_history)) - self.beta_history = [] - self.bbox_head.loss_bbox.beta = new_beta - return new_iou_thr, new_beta diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/grid_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/grid_roi_head.py deleted file mode 100644 index 333f6297..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/grid_roi_head.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core import bbox2result, bbox2roi -from ..builder import HEADS, build_head, build_roi_extractor -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class GridRoIHead(StandardRoIHead): - """Grid roi head for Grid R-CNN. - - https://arxiv.org/abs/1811.12030 - """ - - def __init__(self, grid_roi_extractor, grid_head, **kwargs): - assert grid_head is not None - super(GridRoIHead, self).__init__(**kwargs) - if grid_roi_extractor is not None: - self.grid_roi_extractor = build_roi_extractor(grid_roi_extractor) - self.share_roi_extractor = False - else: - self.share_roi_extractor = True - self.grid_roi_extractor = self.bbox_roi_extractor - self.grid_head = build_head(grid_head) - - def _random_jitter(self, sampling_results, img_metas, amplitude=0.15): - """Ramdom jitter positive proposals for training.""" - for sampling_result, img_meta in zip(sampling_results, img_metas): - bboxes = sampling_result.pos_bboxes - random_offsets = bboxes.new_empty(bboxes.shape[0], 4).uniform_( - -amplitude, amplitude) - # before jittering - cxcy = (bboxes[:, 2:4] + bboxes[:, :2]) / 2 - wh = (bboxes[:, 2:4] - bboxes[:, :2]).abs() - # after jittering - new_cxcy = cxcy + wh * random_offsets[:, :2] - new_wh = wh * (1 + random_offsets[:, 2:]) - # xywh to xyxy - new_x1y1 = (new_cxcy - new_wh / 2) - new_x2y2 = (new_cxcy + new_wh / 2) - new_bboxes = torch.cat([new_x1y1, new_x2y2], dim=1) - # clip bboxes - max_shape = img_meta['img_shape'] - if max_shape is not None: - new_bboxes[:, 0::2].clamp_(min=0, max=max_shape[1] - 1) - new_bboxes[:, 1::2].clamp_(min=0, max=max_shape[0] - 1) - - sampling_result.pos_bboxes = new_bboxes - return sampling_results - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - bbox_results = self._bbox_forward(x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - - # grid head - grid_rois = rois[:100] - grid_feats = self.grid_roi_extractor( - x[:self.grid_roi_extractor.num_inputs], grid_rois) - if self.with_shared_head: - grid_feats = self.shared_head(grid_feats) - grid_pred = self.grid_head(grid_feats) - outs = outs + (grid_pred, ) - - # mask head - if self.with_mask: - mask_rois = rois[:100] - mask_results = self._mask_forward(x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - """Run forward 
function and calculate loss for box head in training.""" - bbox_results = super(GridRoIHead, - self)._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - - # Grid head forward and loss - sampling_results = self._random_jitter(sampling_results, img_metas) - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - - # GN in head does not support zero shape input - if pos_rois.shape[0] == 0: - return bbox_results - - grid_feats = self.grid_roi_extractor( - x[:self.grid_roi_extractor.num_inputs], pos_rois) - if self.with_shared_head: - grid_feats = self.shared_head(grid_feats) - # Accelerate training - max_sample_num_grid = self.train_cfg.get('max_num_grid', 192) - sample_idx = torch.randperm( - grid_feats.shape[0])[:min(grid_feats.shape[0], max_sample_num_grid - )] - grid_feats = grid_feats[sample_idx] - - grid_pred = self.grid_head(grid_feats) - - grid_targets = self.grid_head.get_targets(sampling_results, - self.train_cfg) - grid_targets = grid_targets[sample_idx] - - loss_grid = self.grid_head.loss(grid_pred, grid_targets) - - bbox_results['loss_bbox'].update(loss_grid) - return bbox_results - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=False) - # pack rois into bboxes - grid_rois = bbox2roi([det_bbox[:, :4] for det_bbox in det_bboxes]) - if grid_rois.shape[0] != 0: - grid_feats = self.grid_roi_extractor( - x[:len(self.grid_roi_extractor.featmap_strides)], grid_rois) - self.grid_head.test_mode = True - grid_pred = self.grid_head(grid_feats) - # split batch grid head prediction back to each image - num_roi_per_img = tuple(len(det_bbox) for det_bbox in det_bboxes) - grid_pred = { - k: v.split(num_roi_per_img, 0) - for k, v in grid_pred.items() - } - - # apply bbox post-processing to each image individually - bbox_results = [] - num_imgs = len(det_bboxes) - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - bbox_results.append([ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.bbox_head.num_classes) - ]) - else: - det_bbox = self.grid_head.get_bboxes( - det_bboxes[i], grid_pred['fused'][i], [img_metas[i]]) - if rescale: - det_bbox[:, :4] /= img_metas[i]['scale_factor'] - bbox_results.append( - bbox2result(det_bbox, det_labels[i], - self.bbox_head.num_classes)) - else: - bbox_results = [[ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.bbox_head.num_classes) - ] for _ in range(len(det_bboxes))] - - if not self.with_mask: - return bbox_results - else: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return list(zip(bbox_results, segm_results)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/htc_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/htc_roi_head.py deleted file mode 100644 index 08bc1dbf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/htc_roi_head.py +++ /dev/null @@ -1,628 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch -import torch.nn.functional as F - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, merge_aug_bboxes, - merge_aug_masks, multiclass_nms) -from ..builder import HEADS, build_head, build_roi_extractor -from ..utils.brick_wrappers import adaptive_avg_pool2d -from .cascade_roi_head import CascadeRoIHead - - -@HEADS.register_module() -class HybridTaskCascadeRoIHead(CascadeRoIHead): - """Hybrid task cascade roi head including one bbox head and one mask head. - - https://arxiv.org/abs/1901.07518 - """ - - def __init__(self, - num_stages, - stage_loss_weights, - semantic_roi_extractor=None, - semantic_head=None, - semantic_fusion=('bbox', 'mask'), - interleaved=True, - mask_info_flow=True, - **kwargs): - super(HybridTaskCascadeRoIHead, - self).__init__(num_stages, stage_loss_weights, **kwargs) - assert self.with_bbox - assert not self.with_shared_head # shared head is not supported - - if semantic_head is not None: - self.semantic_roi_extractor = build_roi_extractor( - semantic_roi_extractor) - self.semantic_head = build_head(semantic_head) - - self.semantic_fusion = semantic_fusion - self.interleaved = interleaved - self.mask_info_flow = mask_info_flow - - @property - def with_semantic(self): - """bool: whether the head has semantic head""" - if hasattr(self, 'semantic_head') and self.semantic_head is not None: - return True - else: - return False - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - outs = () - # semantic head - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - # bbox heads - rois = bbox2roi([proposals]) - for i in range(self.num_stages): - bbox_results = self._bbox_forward( - i, x, rois, semantic_feat=semantic_feat) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask heads - if self.with_mask: - mask_rois = rois[:100] - mask_roi_extractor = self.mask_roi_extractor[-1] - mask_feats = mask_roi_extractor( - x[:len(mask_roi_extractor.featmap_strides)], mask_rois) - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor( - [semantic_feat], mask_rois) - mask_feats += mask_semantic_feat - last_feat = None - for i in range(self.num_stages): - mask_head = self.mask_head[i] - if self.mask_info_flow: - mask_pred, last_feat = mask_head(mask_feats, last_feat) - else: - mask_pred = mask_head(mask_feats) - outs = outs + (mask_pred, ) - return outs - - def _bbox_forward_train(self, - stage, - x, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - semantic_feat=None): - """Run forward function and calculate loss for box head in training.""" - bbox_head = self.bbox_head[stage] - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward( - stage, x, rois, semantic_feat=semantic_feat) - - bbox_targets = bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, rcnn_train_cfg) - loss_bbox = bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update( - loss_bbox=loss_bbox, - rois=rois, - bbox_targets=bbox_targets, - ) - return bbox_results - - def _mask_forward_train(self, - stage, - x, - sampling_results, - gt_masks, - rcnn_train_cfg, - semantic_feat=None): - """Run forward function and calculate loss for mask head in - training.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - pos_rois = bbox2roi([res.pos_bboxes for res in 
sampling_results]) - mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - pos_rois) - - # semantic feature fusion - # element-wise sum for original features and pooled semantic features - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor([semantic_feat], - pos_rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - - # mask information flow - # forward all previous mask heads to obtain last_feat, and fuse it - # with the normal mask feature - if self.mask_info_flow: - last_feat = None - for i in range(stage): - last_feat = self.mask_head[i]( - mask_feats, last_feat, return_logits=False) - mask_pred = mask_head(mask_feats, last_feat, return_feat=False) - else: - mask_pred = mask_head(mask_feats, return_feat=False) - - mask_targets = mask_head.get_targets(sampling_results, gt_masks, - rcnn_train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = mask_head.loss(mask_pred, mask_targets, pos_labels) - - mask_results = dict(loss_mask=loss_mask) - return mask_results - - def _bbox_forward(self, stage, x, rois, semantic_feat=None): - """Box head forward function used in both training and testing.""" - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor( - x[:len(bbox_roi_extractor.featmap_strides)], rois) - if self.with_semantic and 'bbox' in self.semantic_fusion: - bbox_semantic_feat = self.semantic_roi_extractor([semantic_feat], - rois) - if bbox_semantic_feat.shape[-2:] != bbox_feats.shape[-2:]: - bbox_semantic_feat = adaptive_avg_pool2d( - bbox_semantic_feat, bbox_feats.shape[-2:]) - bbox_feats += bbox_semantic_feat - cls_score, bbox_pred = bbox_head(bbox_feats) - - bbox_results = dict(cls_score=cls_score, bbox_pred=bbox_pred) - return bbox_results - - def _mask_forward_test(self, stage, x, bboxes, semantic_feat=None): - """Mask head forward function for testing.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_rois = bbox2roi([bboxes]) - mask_feats = mask_roi_extractor( - x[:len(mask_roi_extractor.featmap_strides)], mask_rois) - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor([semantic_feat], - mask_rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - if self.mask_info_flow: - last_feat = None - last_pred = None - for i in range(stage): - mask_pred, last_feat = self.mask_head[i](mask_feats, last_feat) - if last_pred is not None: - mask_pred = mask_pred + last_pred - last_pred = mask_pred - mask_pred = mask_head(mask_feats, last_feat, return_feat=False) - if last_pred is not None: - mask_pred = mask_pred + last_pred - else: - mask_pred = mask_head(mask_feats) - return mask_pred - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - gt_semantic_seg=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. 
- For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - proposal_list (list[Tensors]): list of region proposals. - - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - - gt_labels (list[Tensor]): class indices corresponding to each box - - gt_bboxes_ignore (None, list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - gt_masks (None, Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - gt_semantic_seg (None, list[Tensor]): semantic segmentation masks - used if the architecture supports semantic segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # semantic segmentation part - # 2 outputs: segmentation prediction and embedded features - losses = dict() - if self.with_semantic: - semantic_pred, semantic_feat = self.semantic_head(x) - loss_seg = self.semantic_head.loss(semantic_pred, gt_semantic_seg) - losses['loss_semantic_seg'] = loss_seg - else: - semantic_feat = None - - for i in range(self.num_stages): - self.current_stage = i - rcnn_train_cfg = self.train_cfg[i] - lw = self.stage_loss_weights[i] - - # assign gts and sample proposals - sampling_results = [] - bbox_assigner = self.bbox_assigner[i] - bbox_sampler = self.bbox_sampler[i] - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - - for j in range(num_imgs): - assign_result = bbox_assigner.assign(proposal_list[j], - gt_bboxes[j], - gt_bboxes_ignore[j], - gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - # bbox head forward and loss - bbox_results = \ - self._bbox_forward_train( - i, x, sampling_results, gt_bboxes, gt_labels, - rcnn_train_cfg, semantic_feat) - roi_labels = bbox_results['bbox_targets'][0] - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # mask head forward and loss - if self.with_mask: - # interleaved execution: use regressed bboxes by the box branch - # to train the mask branch - if self.interleaved: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - with torch.no_grad(): - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, img_metas) - # re-assign and sample 512 RoIs from 512 RoIs - sampling_results = [] - for j in range(num_imgs): - assign_result = bbox_assigner.assign( - proposal_list[j], gt_bboxes[j], - gt_bboxes_ignore[j], gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - mask_results = self._mask_forward_train( - i, x, sampling_results, gt_masks, rcnn_train_cfg, - semantic_feat) - for name, value in mask_results['loss_mask'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # refine bboxes (same as Cascade R-CNN) - if i < self.num_stages - 1 and not self.interleaved: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - with torch.no_grad(): - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, 
img_metas) - - return losses - - def simple_test(self, x, proposal_list, img_metas, rescale=False): - """Test without augmentation. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (batch_size, c, h, w). - proposal_list (list(Tensor)): Proposals from rpn head. - Each has shape (num_proposals, 5), last dimension - 5 represent (x1, y1, x2, y2, score). - img_metas (list[dict]): Meta information of images. - rescale (bool): Whether to rescale the results to - the original image. Default: True. - - Returns: - list[list[np.ndarray]] or list[tuple]: When no mask branch, - it is bbox results of each image and classes with type - `list[list[np.ndarray]]`. The outer list - corresponds to each image. The inner list - corresponds to each class. When the model has mask branch, - it contains bbox results and mask results. - The outer list corresponds to each image, and first element - of tuple is bbox results, second element is mask results. - """ - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - - num_imgs = len(proposal_list) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # "ms" in variable names means multi-stage - ms_bbox_result = {} - ms_segm_result = {} - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - rois = bbox2roi(proposal_list) - - if rois.shape[0] == 0: - # There is no proposal in the whole batch - bbox_results = [[ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.bbox_head[-1].num_classes) - ]] * num_imgs - - if self.with_mask: - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - results = list(zip(bbox_results, segm_results)) - else: - results = bbox_results - - return results - - for i in range(self.num_stages): - bbox_head = self.bbox_head[i] - bbox_results = self._bbox_forward( - i, x, rois, semantic_feat=semantic_feat) - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple(len(p) for p in proposal_list) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - ms_scores.append(cls_score) - - if i < self.num_stages - 1: - refine_rois_list = [] - for j in range(num_imgs): - if rois[j].shape[0] > 0: - bbox_label = cls_score[j][:, :-1].argmax(dim=1) - refine_rois = bbox_head.regress_by_class( - rois[j], bbox_label, bbox_pred[j], img_metas[j]) - refine_rois_list.append(refine_rois) - rois = torch.cat(refine_rois_list) - - # average scores of each image by stages - cls_score = [ - sum([score[i] for score in ms_scores]) / float(len(ms_scores)) - for i in range(num_imgs) - ] - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(num_imgs): - det_bbox, det_label = self.bbox_head[-1].get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - bbox_result = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head[-1].num_classes) - for i in range(num_imgs) - ] - ms_bbox_result['ensemble'] = bbox_result - - if self.with_mask: - if all(det_bbox.shape[0] == 0 for det_bbox in 
det_bboxes): - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - else: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i] - for i in range(num_imgs) - ] - mask_rois = bbox2roi(_bboxes) - aug_masks = [] - mask_roi_extractor = self.mask_roi_extractor[-1] - mask_feats = mask_roi_extractor( - x[:len(mask_roi_extractor.featmap_strides)], mask_rois) - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor( - [semantic_feat], mask_rois) - mask_feats += mask_semantic_feat - last_feat = None - - num_bbox_per_img = tuple(len(_bbox) for _bbox in _bboxes) - for i in range(self.num_stages): - mask_head = self.mask_head[i] - if self.mask_info_flow: - mask_pred, last_feat = mask_head(mask_feats, last_feat) - else: - mask_pred = mask_head(mask_feats) - - # split batch mask prediction back to each image - mask_pred = mask_pred.split(num_bbox_per_img, 0) - aug_masks.append( - [mask.sigmoid().cpu().numpy() for mask in mask_pred]) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] - for _ in range(self.mask_head[-1].num_classes)]) - else: - aug_mask = [mask[i] for mask in aug_masks] - merged_mask = merge_aug_masks( - aug_mask, [[img_metas[i]]] * self.num_stages, - rcnn_test_cfg) - segm_result = self.mask_head[-1].get_seg_masks( - merged_mask, _bboxes[i], det_labels[i], - rcnn_test_cfg, ori_shapes[i], scale_factors[i], - rescale) - segm_results.append(segm_result) - ms_segm_result['ensemble'] = segm_results - - if self.with_mask: - results = list( - zip(ms_bbox_result['ensemble'], ms_segm_result['ensemble'])) - else: - results = ms_bbox_result['ensemble'] - - return results - - def aug_test(self, img_feats, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. 
- """ - if self.with_semantic: - semantic_feats = [ - self.semantic_head(feat)[1] for feat in img_feats - ] - else: - semantic_feats = [None] * len(img_metas) - - rcnn_test_cfg = self.test_cfg - aug_bboxes = [] - aug_scores = [] - for x, img_meta, semantic in zip(img_feats, img_metas, semantic_feats): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - # "ms" in variable names means multi-stage - ms_scores = [] - - rois = bbox2roi([proposals]) - - if rois.shape[0] == 0: - # There is no proposal in the single image - aug_bboxes.append(rois.new_zeros(0, 4)) - aug_scores.append(rois.new_zeros(0, 1)) - continue - - for i in range(self.num_stages): - bbox_head = self.bbox_head[i] - bbox_results = self._bbox_forward( - i, x, rois, semantic_feat=semantic) - ms_scores.append(bbox_results['cls_score']) - - if i < self.num_stages - 1: - bbox_label = bbox_results['cls_score'].argmax(dim=1) - rois = bbox_head.regress_by_class( - rois, bbox_label, bbox_results['bbox_pred'], - img_meta[0]) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bboxes, scores = self.bbox_head[-1].get_bboxes( - rois, - cls_score, - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - - bbox_result = bbox2result(det_bboxes, det_labels, - self.bbox_head[-1].num_classes) - - if self.with_mask: - if det_bboxes.shape[0] == 0: - segm_result = [[] - for _ in range(self.mask_head[-1].num_classes)] - else: - aug_masks = [] - aug_img_metas = [] - for x, img_meta, semantic in zip(img_feats, img_metas, - semantic_feats): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - mask_feats = self.mask_roi_extractor[-1]( - x[:len(self.mask_roi_extractor[-1].featmap_strides)], - mask_rois) - if self.with_semantic: - semantic_feat = semantic - mask_semantic_feat = self.semantic_roi_extractor( - [semantic_feat], mask_rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[ - -2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - last_feat = None - for i in range(self.num_stages): - mask_head = self.mask_head[i] - if self.mask_info_flow: - mask_pred, last_feat = mask_head( - mask_feats, last_feat) - else: - mask_pred = mask_head(mask_feats) - aug_masks.append(mask_pred.sigmoid().cpu().numpy()) - aug_img_metas.append(img_meta) - merged_masks = merge_aug_masks(aug_masks, aug_img_metas, - self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - rcnn_test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return [(bbox_result, segm_result)] - else: - return [bbox_result] diff 
--git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/__init__.py deleted file mode 100644 index 48a5d422..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .coarse_mask_head import CoarseMaskHead -from .dynamic_mask_head import DynamicMaskHead -from .fcn_mask_head import FCNMaskHead -from .feature_relay_head import FeatureRelayHead -from .fused_semantic_head import FusedSemanticHead -from .global_context_head import GlobalContextHead -from .grid_head import GridHead -from .htc_mask_head import HTCMaskHead -from .mask_point_head import MaskPointHead -from .maskiou_head import MaskIoUHead -from .scnet_mask_head import SCNetMaskHead -from .scnet_semantic_head import SCNetSemanticHead - -__all__ = [ - 'FCNMaskHead', 'HTCMaskHead', 'FusedSemanticHead', 'GridHead', - 'MaskIoUHead', 'CoarseMaskHead', 'MaskPointHead', 'SCNetMaskHead', - 'SCNetSemanticHead', 'GlobalContextHead', 'FeatureRelayHead', - 'DynamicMaskHead' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py deleted file mode 100644 index 946254cb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule, Linear -from mmcv.runner import ModuleList, auto_fp16 - -from mmdet.models.builder import HEADS -from .fcn_mask_head import FCNMaskHead - - -@HEADS.register_module() -class CoarseMaskHead(FCNMaskHead): - """Coarse mask head used in PointRend. - - Compared with standard ``FCNMaskHead``, ``CoarseMaskHead`` will downsample - the input feature map instead of upsample it. - - Args: - num_convs (int): Number of conv layers in the head. Default: 0. - num_fcs (int): Number of fc layers in the head. Default: 2. - fc_out_channels (int): Number of output channels of fc layer. - Default: 1024. - downsample_factor (int): The factor that feature map is downsampled by. - Default: 2. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - num_convs=0, - num_fcs=2, - fc_out_channels=1024, - downsample_factor=2, - init_cfg=dict( - type='Xavier', - override=[ - dict(name='fcs'), - dict(type='Constant', val=0.001, name='fc_logits') - ]), - *arg, - **kwarg): - super(CoarseMaskHead, self).__init__( - *arg, - num_convs=num_convs, - upsample_cfg=dict(type=None), - init_cfg=None, - **kwarg) - self.init_cfg = init_cfg - self.num_fcs = num_fcs - assert self.num_fcs > 0 - self.fc_out_channels = fc_out_channels - self.downsample_factor = downsample_factor - assert self.downsample_factor >= 1 - # remove conv_logit - delattr(self, 'conv_logits') - - if downsample_factor > 1: - downsample_in_channels = ( - self.conv_out_channels - if self.num_convs > 0 else self.in_channels) - self.downsample_conv = ConvModule( - downsample_in_channels, - self.conv_out_channels, - kernel_size=downsample_factor, - stride=downsample_factor, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - else: - self.downsample_conv = None - - self.output_size = (self.roi_feat_size[0] // downsample_factor, - self.roi_feat_size[1] // downsample_factor) - self.output_area = self.output_size[0] * self.output_size[1] - - last_layer_dim = self.conv_out_channels * self.output_area - - self.fcs = ModuleList() - for i in range(num_fcs): - fc_in_channels = ( - last_layer_dim if i == 0 else self.fc_out_channels) - self.fcs.append(Linear(fc_in_channels, self.fc_out_channels)) - last_layer_dim = self.fc_out_channels - output_channels = self.num_classes * self.output_area - self.fc_logits = Linear(last_layer_dim, output_channels) - - def init_weights(self): - super(FCNMaskHead, self).init_weights() - - @auto_fp16() - def forward(self, x): - for conv in self.convs: - x = conv(x) - - if self.downsample_conv is not None: - x = self.downsample_conv(x) - - x = x.flatten(1) - for fc in self.fcs: - x = self.relu(fc(x)) - mask_pred = self.fc_logits(x).view( - x.size(0), self.num_classes, *self.output_size) - return mask_pred diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/dynamic_mask_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/dynamic_mask_head.py deleted file mode 100644 index 5bbe7eea..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/dynamic_mask_head.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.runner import auto_fp16, force_fp32 - -from mmdet.core import mask_target -from mmdet.models.builder import HEADS -from mmdet.models.dense_heads.atss_head import reduce_mean -from mmdet.models.utils import build_transformer -from .fcn_mask_head import FCNMaskHead - - -@HEADS.register_module() -class DynamicMaskHead(FCNMaskHead): - r"""Dynamic Mask Head for - `Instances as Queries `_ - - Args: - num_convs (int): Number of convolution layer. - Defaults to 4. - roi_feat_size (int): The output size of RoI extractor, - Defaults to 14. - in_channels (int): Input feature channels. - Defaults to 256. - conv_kernel_size (int): Kernel size of convolution layers. - Defaults to 3. - conv_out_channels (int): Output channels of convolution layers. - Defaults to 256. - num_classes (int): Number of classes. - Defaults to 80 - class_agnostic (int): Whether generate class agnostic prediction. - Defaults to False. - dropout (float): Probability of drop the channel. - Defaults to 0.0 - upsample_cfg (dict): The config for upsample layer. 
- conv_cfg (dict): The convolution layer config. - norm_cfg (dict): The norm layer config. - dynamic_conv_cfg (dict): The dynamic convolution layer config. - loss_mask (dict): The config for mask loss. - """ - - def __init__(self, - num_convs=4, - roi_feat_size=14, - in_channels=256, - conv_kernel_size=3, - conv_out_channels=256, - num_classes=80, - class_agnostic=False, - upsample_cfg=dict(type='deconv', scale_factor=2), - conv_cfg=None, - norm_cfg=None, - dynamic_conv_cfg=dict( - type='DynamicConv', - in_channels=256, - feat_channels=64, - out_channels=256, - input_feat_shape=14, - with_proj=False, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')), - loss_mask=dict(type='DiceLoss', loss_weight=8.0), - **kwargs): - super(DynamicMaskHead, self).__init__( - num_convs=num_convs, - roi_feat_size=roi_feat_size, - in_channels=in_channels, - conv_kernel_size=conv_kernel_size, - conv_out_channels=conv_out_channels, - num_classes=num_classes, - class_agnostic=class_agnostic, - upsample_cfg=upsample_cfg, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - loss_mask=loss_mask, - **kwargs) - assert class_agnostic is False, \ - 'DynamicMaskHead only support class_agnostic=False' - self.fp16_enabled = False - - self.instance_interactive_conv = build_transformer(dynamic_conv_cfg) - - def init_weights(self): - """Use xavier initialization for all weight parameter and set - classification head bias as a specific value when use focal loss.""" - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - nn.init.constant_(self.conv_logits.bias, 0.) - - @auto_fp16() - def forward(self, roi_feat, proposal_feat): - """Forward function of DynamicMaskHead. - - Args: - roi_feat (Tensor): Roi-pooling features with shape - (batch_size*num_proposals, feature_dimensions, - pooling_h , pooling_w). - proposal_feat (Tensor): Intermediate feature get from - diihead in last stage, has shape - (batch_size*num_proposals, feature_dimensions) - - Returns: - mask_pred (Tensor): Predicted foreground masks with shape - (batch_size*num_proposals, num_classes, - pooling_h*2, pooling_w*2). 
- """ - - proposal_feat = proposal_feat.reshape(-1, self.in_channels) - proposal_feat_iic = self.instance_interactive_conv( - proposal_feat, roi_feat) - - x = proposal_feat_iic.permute(0, 2, 1).reshape(roi_feat.size()) - - for conv in self.convs: - x = conv(x) - if self.upsample is not None: - x = self.upsample(x) - if self.upsample_method == 'deconv': - x = self.relu(x) - mask_pred = self.conv_logits(x) - return mask_pred - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, mask_targets, labels): - num_pos = labels.new_ones(labels.size()).float().sum() - avg_factor = torch.clamp(reduce_mean(num_pos), min=1.).item() - loss = dict() - if mask_pred.size(0) == 0: - loss_mask = mask_pred.sum() - else: - loss_mask = self.loss_mask( - mask_pred[torch.arange(num_pos).long(), labels, ...].sigmoid(), - mask_targets, - avg_factor=avg_factor) - loss['loss_mask'] = loss_mask - return loss - - def get_targets(self, sampling_results, gt_masks, rcnn_train_cfg): - - pos_proposals = [res.pos_bboxes for res in sampling_results] - pos_assigned_gt_inds = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - mask_targets = mask_target(pos_proposals, pos_assigned_gt_inds, - gt_masks, rcnn_train_cfg) - return mask_targets diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py deleted file mode 100644 index 355d8822..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from warnings import warn - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, build_conv_layer, build_upsample_layer -from mmcv.ops.carafe import CARAFEPack -from mmcv.runner import BaseModule, ModuleList, auto_fp16, force_fp32 -from torch.nn.modules.utils import _pair - -from mmdet.core import mask_target -from mmdet.models.builder import HEADS, build_loss - -BYTES_PER_FLOAT = 4 -# TODO: This memory limit may be too much or too little. It would be better to -# determine it based on available resources. 
-GPU_MEM_LIMIT = 1024**3 # 1 GB memory limit - - -@HEADS.register_module() -class FCNMaskHead(BaseModule): - - def __init__(self, - num_convs=4, - roi_feat_size=14, - in_channels=256, - conv_kernel_size=3, - conv_out_channels=256, - num_classes=80, - class_agnostic=False, - upsample_cfg=dict(type='deconv', scale_factor=2), - conv_cfg=None, - norm_cfg=None, - predictor_cfg=dict(type='Conv'), - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0), - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(FCNMaskHead, self).__init__(init_cfg) - self.upsample_cfg = upsample_cfg.copy() - if self.upsample_cfg['type'] not in [ - None, 'deconv', 'nearest', 'bilinear', 'carafe' - ]: - raise ValueError( - f'Invalid upsample method {self.upsample_cfg["type"]}, ' - 'accepted methods are "deconv", "nearest", "bilinear", ' - '"carafe"') - self.num_convs = num_convs - # WARN: roi_feat_size is reserved and not used - self.roi_feat_size = _pair(roi_feat_size) - self.in_channels = in_channels - self.conv_kernel_size = conv_kernel_size - self.conv_out_channels = conv_out_channels - self.upsample_method = self.upsample_cfg.get('type') - self.scale_factor = self.upsample_cfg.pop('scale_factor', None) - self.num_classes = num_classes - self.class_agnostic = class_agnostic - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.predictor_cfg = predictor_cfg - self.fp16_enabled = False - self.loss_mask = build_loss(loss_mask) - - self.convs = ModuleList() - for i in range(self.num_convs): - in_channels = ( - self.in_channels if i == 0 else self.conv_out_channels) - padding = (self.conv_kernel_size - 1) // 2 - self.convs.append( - ConvModule( - in_channels, - self.conv_out_channels, - self.conv_kernel_size, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) - upsample_in_channels = ( - self.conv_out_channels if self.num_convs > 0 else in_channels) - upsample_cfg_ = self.upsample_cfg.copy() - if self.upsample_method is None: - self.upsample = None - elif self.upsample_method == 'deconv': - upsample_cfg_.update( - in_channels=upsample_in_channels, - out_channels=self.conv_out_channels, - kernel_size=self.scale_factor, - stride=self.scale_factor) - self.upsample = build_upsample_layer(upsample_cfg_) - elif self.upsample_method == 'carafe': - upsample_cfg_.update( - channels=upsample_in_channels, scale_factor=self.scale_factor) - self.upsample = build_upsample_layer(upsample_cfg_) - else: - # suppress warnings - align_corners = (None - if self.upsample_method == 'nearest' else False) - upsample_cfg_.update( - scale_factor=self.scale_factor, - mode=self.upsample_method, - align_corners=align_corners) - self.upsample = build_upsample_layer(upsample_cfg_) - - out_channels = 1 if self.class_agnostic else self.num_classes - logits_in_channel = ( - self.conv_out_channels - if self.upsample_method == 'deconv' else upsample_in_channels) - self.conv_logits = build_conv_layer(self.predictor_cfg, - logits_in_channel, out_channels, 1) - self.relu = nn.ReLU(inplace=True) - self.debug_imgs = None - - def init_weights(self): - super(FCNMaskHead, self).init_weights() - for m in [self.upsample, self.conv_logits]: - if m is None: - continue - elif isinstance(m, CARAFEPack): - m.init_weights() - elif hasattr(m, 'weight') and hasattr(m, 'bias'): - nn.init.kaiming_normal_( - m.weight, mode='fan_out', nonlinearity='relu') - nn.init.constant_(m.bias, 0) - - @auto_fp16() - def forward(self, x): - for conv in self.convs: - 
x = conv(x) - if self.upsample is not None: - x = self.upsample(x) - if self.upsample_method == 'deconv': - x = self.relu(x) - mask_pred = self.conv_logits(x) - return mask_pred - - def get_targets(self, sampling_results, gt_masks, rcnn_train_cfg): - pos_proposals = [res.pos_bboxes for res in sampling_results] - pos_assigned_gt_inds = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - mask_targets = mask_target(pos_proposals, pos_assigned_gt_inds, - gt_masks, rcnn_train_cfg) - return mask_targets - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, mask_targets, labels): - """ - Example: - >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA - >>> N = 7 # N = number of extracted ROIs - >>> C, H, W = 11, 32, 32 - >>> # Create example instance of FCN Mask Head. - >>> # There are lots of variations depending on the configuration - >>> self = FCNMaskHead(num_classes=C, num_convs=1) - >>> inputs = torch.rand(N, self.in_channels, H, W) - >>> mask_pred = self.forward(inputs) - >>> sf = self.scale_factor - >>> labels = torch.randint(0, C, size=(N,)) - >>> # With the default properties the mask targets should indicate - >>> # a (potentially soft) single-class label - >>> mask_targets = torch.rand(N, H * sf, W * sf) - >>> loss = self.loss(mask_pred, mask_targets, labels) - >>> print('loss = {!r}'.format(loss)) - """ - loss = dict() - if mask_pred.size(0) == 0: - loss_mask = mask_pred.sum() - else: - if self.class_agnostic: - loss_mask = self.loss_mask(mask_pred, mask_targets, - torch.zeros_like(labels)) - else: - loss_mask = self.loss_mask(mask_pred, mask_targets, labels) - loss['loss_mask'] = loss_mask - return loss - - def get_seg_masks(self, mask_pred, det_bboxes, det_labels, rcnn_test_cfg, - ori_shape, scale_factor, rescale): - """Get segmentation masks from mask_pred and bboxes. - - Args: - mask_pred (Tensor or ndarray): shape (n, #class, h, w). - For single-scale testing, mask_pred is the direct output of - model, whose type is Tensor, while for multi-scale testing, - it will be converted to numpy array outside of this method. - det_bboxes (Tensor): shape (n, 4/5) - det_labels (Tensor): shape (n, ) - rcnn_test_cfg (dict): rcnn testing config - ori_shape (Tuple): original image height and width, shape (2,) - scale_factor(ndarray | Tensor): If ``rescale is True``, box - coordinates are divided by this scale factor to fit - ``ori_shape``. - rescale (bool): If True, the resulting masks will be rescaled to - ``ori_shape``. - - Returns: - list[list]: encoded masks. The c-th item in the outer list - corresponds to the c-th class. Given the c-th outer list, the - i-th item in that inner list is the mask for the i-th box with - class label c. - - Example: - >>> import mmcv - >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA - >>> N = 7 # N = number of extracted ROIs - >>> C, H, W = 11, 32, 32 - >>> # Create example instance of FCN Mask Head. - >>> self = FCNMaskHead(num_classes=C, num_convs=0) - >>> inputs = torch.rand(N, self.in_channels, H, W) - >>> mask_pred = self.forward(inputs) - >>> # Each input is associated with some bounding box - >>> det_bboxes = torch.Tensor([[1, 1, 42, 42 ]] * N) - >>> det_labels = torch.randint(0, C, size=(N,)) - >>> rcnn_test_cfg = mmcv.Config({'mask_thr_binary': 0, }) - >>> ori_shape = (H * 4, W * 4) - >>> scale_factor = torch.FloatTensor((1, 1)) - >>> rescale = False - >>> # Encoded masks are a list for each category. 
- >>> encoded_masks = self.get_seg_masks( - >>> mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape, - >>> scale_factor, rescale - >>> ) - >>> assert len(encoded_masks) == C - >>> assert sum(list(map(len, encoded_masks))) == N - """ - if isinstance(mask_pred, torch.Tensor): - mask_pred = mask_pred.sigmoid() - else: - # In AugTest, has been activated before - mask_pred = det_bboxes.new_tensor(mask_pred) - - device = mask_pred.device - cls_segms = [[] for _ in range(self.num_classes) - ] # BG is not included in num_classes - bboxes = det_bboxes[:, :4] - labels = det_labels - - # In most cases, scale_factor should have been - # converted to Tensor when rescale the bbox - if not isinstance(scale_factor, torch.Tensor): - if isinstance(scale_factor, float): - scale_factor = np.array([scale_factor] * 4) - warn('Scale_factor should be a Tensor or ndarray ' - 'with shape (4,), float would be deprecated. ') - assert isinstance(scale_factor, np.ndarray) - scale_factor = torch.Tensor(scale_factor) - - if rescale: - img_h, img_w = ori_shape[:2] - bboxes = bboxes / scale_factor.to(bboxes) - else: - w_scale, h_scale = scale_factor[0], scale_factor[1] - img_h = np.round(ori_shape[0] * h_scale.item()).astype(np.int32) - img_w = np.round(ori_shape[1] * w_scale.item()).astype(np.int32) - - N = len(mask_pred) - # The actual implementation split the input into chunks, - # and paste them chunk by chunk. - if device.type == 'cpu': - # CPU is most efficient when they are pasted one by one with - # skip_empty=True, so that it performs minimal number of - # operations. - num_chunks = N - else: - # GPU benefits from parallelism for larger chunks, - # but may have memory issue - # the types of img_w and img_h are np.int32, - # when the image resolution is large, - # the calculation of num_chunks will overflow. - # so we need to change the types of img_w and img_h to int. - # See https://github.com/open-mmlab/mmdetection/pull/5191 - num_chunks = int( - np.ceil(N * int(img_h) * int(img_w) * BYTES_PER_FLOAT / - GPU_MEM_LIMIT)) - assert (num_chunks <= - N), 'Default GPU_MEM_LIMIT is too small; try increasing it' - chunks = torch.chunk(torch.arange(N, device=device), num_chunks) - - threshold = rcnn_test_cfg.mask_thr_binary - im_mask = torch.zeros( - N, - img_h, - img_w, - device=device, - dtype=torch.bool if threshold >= 0 else torch.uint8) - - if not self.class_agnostic: - mask_pred = mask_pred[range(N), labels][:, None] - - for inds in chunks: - masks_chunk, spatial_inds = _do_paste_mask( - mask_pred[inds], - bboxes[inds], - img_h, - img_w, - skip_empty=device.type == 'cpu') - - if threshold >= 0: - masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool) - else: - # for visualization and debugging - masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8) - - im_mask[(inds, ) + spatial_inds] = masks_chunk - - for i in range(N): - cls_segms[labels[i]].append(im_mask[i].detach().cpu().numpy()) - return cls_segms - - def onnx_export(self, mask_pred, det_bboxes, det_labels, rcnn_test_cfg, - ori_shape, **kwargs): - """Get segmentation masks from mask_pred and bboxes. - - Args: - mask_pred (Tensor): shape (n, #class, h, w). - det_bboxes (Tensor): shape (n, 4/5) - det_labels (Tensor): shape (n, ) - rcnn_test_cfg (dict): rcnn testing config - ori_shape (Tuple): original image height and width, shape (2,) - - Returns: - Tensor: a mask of shape (N, img_h, img_w). 
- """ - - mask_pred = mask_pred.sigmoid() - bboxes = det_bboxes[:, :4] - labels = det_labels - # No need to consider rescale and scale_factor while exporting to ONNX - img_h, img_w = ori_shape[:2] - threshold = rcnn_test_cfg.mask_thr_binary - if not self.class_agnostic: - box_inds = torch.arange(mask_pred.shape[0]) - mask_pred = mask_pred[box_inds, labels][:, None] - masks, _ = _do_paste_mask( - mask_pred, bboxes, img_h, img_w, skip_empty=False) - if threshold >= 0: - # should convert to float to avoid problems in TRT - masks = (masks >= threshold).to(dtype=torch.float) - return masks - - -def _do_paste_mask(masks, boxes, img_h, img_w, skip_empty=True): - """Paste instance masks according to boxes. - - This implementation is modified from - https://github.com/facebookresearch/detectron2/ - - Args: - masks (Tensor): N, 1, H, W - boxes (Tensor): N, 4 - img_h (int): Height of the image to be pasted. - img_w (int): Width of the image to be pasted. - skip_empty (bool): Only paste masks within the region that - tightly bound all boxes, and returns the results this region only. - An important optimization for CPU. - - Returns: - tuple: (Tensor, tuple). The first item is mask tensor, the second one - is the slice object. - If skip_empty == False, the whole image will be pasted. It will - return a mask of shape (N, img_h, img_w) and an empty tuple. - If skip_empty == True, only area around the mask will be pasted. - A mask of shape (N, h', w') and its start and end coordinates - in the original image will be returned. - """ - # On GPU, paste all masks together (up to chunk size) - # by using the entire image to sample the masks - # Compared to pasting them one by one, - # this has more operations but is faster on COCO-scale dataset. - device = masks.device - if skip_empty: - x0_int, y0_int = torch.clamp( - boxes.min(dim=0).values.floor()[:2] - 1, - min=0).to(dtype=torch.int32) - x1_int = torch.clamp( - boxes[:, 2].max().ceil() + 1, max=img_w).to(dtype=torch.int32) - y1_int = torch.clamp( - boxes[:, 3].max().ceil() + 1, max=img_h).to(dtype=torch.int32) - else: - x0_int, y0_int = 0, 0 - x1_int, y1_int = img_w, img_h - x0, y0, x1, y1 = torch.split(boxes, 1, dim=1) # each is Nx1 - - N = masks.shape[0] - - img_y = torch.arange(y0_int, y1_int, device=device).to(torch.float32) + 0.5 - img_x = torch.arange(x0_int, x1_int, device=device).to(torch.float32) + 0.5 - img_y = (img_y - y0) / (y1 - y0) * 2 - 1 - img_x = (img_x - x0) / (x1 - x0) * 2 - 1 - # img_x, img_y have shapes (N, w), (N, h) - # IsInf op is not supported with ONNX<=1.7.0 - if not torch.onnx.is_in_onnx_export(): - if torch.isinf(img_x).any(): - inds = torch.where(torch.isinf(img_x)) - img_x[inds] = 0 - if torch.isinf(img_y).any(): - inds = torch.where(torch.isinf(img_y)) - img_y[inds] = 0 - - gx = img_x[:, None, :].expand(N, img_y.size(1), img_x.size(1)) - gy = img_y[:, :, None].expand(N, img_y.size(1), img_x.size(1)) - grid = torch.stack([gx, gy], dim=3) - - img_masks = F.grid_sample( - masks.to(dtype=torch.float32), grid, align_corners=False) - - if skip_empty: - return img_masks[:, 0], (slice(y0_int, y1_int), slice(x0_int, x1_int)) - else: - return img_masks[:, 0], () diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/feature_relay_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/feature_relay_head.py deleted file mode 100644 index 452f37af..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/feature_relay_head.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.runner import BaseModule, auto_fp16 - -from mmdet.models.builder import HEADS - - -@HEADS.register_module() -class FeatureRelayHead(BaseModule): - """Feature Relay Head used in `SCNet `_. - - Args: - in_channels (int, optional): number of input channels. Default: 256. - conv_out_channels (int, optional): number of output channels before - classification layer. Default: 256. - roi_feat_size (int, optional): roi feat size at box head. Default: 7. - scale_factor (int, optional): scale factor to match roi feat size - at mask head. Default: 2. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels=1024, - out_conv_channels=256, - roi_feat_size=7, - scale_factor=2, - init_cfg=dict(type='Kaiming', layer='Linear')): - super(FeatureRelayHead, self).__init__(init_cfg) - assert isinstance(roi_feat_size, int) - - self.in_channels = in_channels - self.out_conv_channels = out_conv_channels - self.roi_feat_size = roi_feat_size - self.out_channels = (roi_feat_size**2) * out_conv_channels - self.scale_factor = scale_factor - self.fp16_enabled = False - - self.fc = nn.Linear(self.in_channels, self.out_channels) - self.upsample = nn.Upsample( - scale_factor=scale_factor, mode='bilinear', align_corners=True) - - @auto_fp16() - def forward(self, x): - """Forward function.""" - N, in_C = x.shape - if N > 0: - out_C = self.out_conv_channels - out_HW = self.roi_feat_size - x = self.fc(x) - x = x.reshape(N, out_C, out_HW, out_HW) - x = self.upsample(x) - return x - return None diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py deleted file mode 100644 index 8494f7e4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16, force_fp32 - -from mmdet.models.builder import HEADS, build_loss - - -@HEADS.register_module() -class FusedSemanticHead(BaseModule): - r"""Multi-level fused semantic segmentation head. - - .. 
code-block:: none - - in_1 -> 1x1 conv --- - | - in_2 -> 1x1 conv -- | - || - in_3 -> 1x1 conv - || - ||| /-> 1x1 conv (mask prediction) - in_4 -> 1x1 conv -----> 3x3 convs (*4) - | \-> 1x1 conv (feature) - in_5 -> 1x1 conv --- - """ # noqa: W605 - - def __init__(self, - num_ins, - fusion_level, - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=183, - conv_cfg=None, - norm_cfg=None, - ignore_label=None, - loss_weight=None, - loss_seg=dict( - type='CrossEntropyLoss', - ignore_index=255, - loss_weight=0.2), - init_cfg=dict( - type='Kaiming', override=dict(name='conv_logits'))): - super(FusedSemanticHead, self).__init__(init_cfg) - self.num_ins = num_ins - self.fusion_level = fusion_level - self.num_convs = num_convs - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.num_classes = num_classes - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - - self.lateral_convs = nn.ModuleList() - for i in range(self.num_ins): - self.lateral_convs.append( - ConvModule( - self.in_channels, - self.in_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - inplace=False)) - - self.convs = nn.ModuleList() - for i in range(self.num_convs): - in_channels = self.in_channels if i == 0 else conv_out_channels - self.convs.append( - ConvModule( - in_channels, - conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.conv_embedding = ConvModule( - conv_out_channels, - conv_out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.conv_logits = nn.Conv2d(conv_out_channels, self.num_classes, 1) - if ignore_label: - loss_seg['ignore_index'] = ignore_label - if loss_weight: - loss_seg['loss_weight'] = loss_weight - if ignore_label or loss_weight: - warnings.warn('``ignore_label`` and ``loss_weight`` would be ' - 'deprecated soon. Please set ``ingore_index`` and ' - '``loss_weight`` in ``loss_seg`` instead.') - self.criterion = build_loss(loss_seg) - - @auto_fp16() - def forward(self, feats): - x = self.lateral_convs[self.fusion_level](feats[self.fusion_level]) - fused_size = tuple(x.shape[-2:]) - for i, feat in enumerate(feats): - if i != self.fusion_level: - feat = F.interpolate( - feat, size=fused_size, mode='bilinear', align_corners=True) - x += self.lateral_convs[i](feat) - - for i in range(self.num_convs): - x = self.convs[i](x) - - mask_pred = self.conv_logits(x) - x = self.conv_embedding(x) - return mask_pred, x - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, labels): - labels = labels.squeeze(1).long() - loss_semantic_seg = self.criterion(mask_pred, labels) - return loss_semantic_seg diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/global_context_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/global_context_head.py deleted file mode 100644 index af76a174..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/global_context_head.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16, force_fp32 - -from mmdet.models.builder import HEADS -from mmdet.models.utils import ResLayer, SimplifiedBasicBlock - - -@HEADS.register_module() -class GlobalContextHead(BaseModule): - """Global context head used in `SCNet `_. 
- - Args: - num_convs (int, optional): number of convolutional layer in GlbCtxHead. - Default: 4. - in_channels (int, optional): number of input channels. Default: 256. - conv_out_channels (int, optional): number of output channels before - classification layer. Default: 256. - num_classes (int, optional): number of classes. Default: 80. - loss_weight (float, optional): global context loss weight. Default: 1. - conv_cfg (dict, optional): config to init conv layer. Default: None. - norm_cfg (dict, optional): config to init norm layer. Default: None. - conv_to_res (bool, optional): if True, 2 convs will be grouped into - 1 `SimplifiedBasicBlock` using a skip connection. Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_weight=1.0, - conv_cfg=None, - norm_cfg=None, - conv_to_res=False, - init_cfg=dict( - type='Normal', std=0.01, override=dict(name='fc'))): - super(GlobalContextHead, self).__init__(init_cfg) - self.num_convs = num_convs - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.num_classes = num_classes - self.loss_weight = loss_weight - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.conv_to_res = conv_to_res - self.fp16_enabled = False - - if self.conv_to_res: - num_res_blocks = num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - in_channels, - self.conv_out_channels, - num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.num_convs = num_res_blocks - else: - self.convs = nn.ModuleList() - for i in range(self.num_convs): - in_channels = self.in_channels if i == 0 else conv_out_channels - self.convs.append( - ConvModule( - in_channels, - conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - - self.pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Linear(conv_out_channels, num_classes) - - self.criterion = nn.BCEWithLogitsLoss() - - @auto_fp16() - def forward(self, feats): - """Forward function.""" - x = feats[-1] - for i in range(self.num_convs): - x = self.convs[i](x) - x = self.pool(x) - - # multi-class prediction - mc_pred = x.reshape(x.size(0), -1) - mc_pred = self.fc(mc_pred) - - return mc_pred, x - - @force_fp32(apply_to=('pred', )) - def loss(self, pred, labels): - """Loss function.""" - labels = [lbl.unique() for lbl in labels] - targets = pred.new_zeros(pred.size()) - for i, label in enumerate(labels): - targets[i, label] = 1.0 - loss = self.loss_weight * self.criterion(pred, targets) - return loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/grid_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/grid_head.py deleted file mode 100644 index 0c0702d2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/grid_head.py +++ /dev/null @@ -1,363 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from mmdet.models.builder import HEADS, build_loss - - -@HEADS.register_module() -class GridHead(BaseModule): - - def __init__(self, - grid_points=9, - num_convs=8, - roi_feat_size=14, - in_channels=256, - conv_kernel_size=3, - point_feat_channels=64, - deconv_kernel_size=4, - class_agnostic=False, - loss_grid=dict( - type='CrossEntropyLoss', use_sigmoid=True, - loss_weight=15), - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=36), - init_cfg=[ - dict(type='Kaiming', layer=['Conv2d', 'Linear']), - dict( - type='Normal', - layer='ConvTranspose2d', - std=0.001, - override=dict( - type='Normal', - name='deconv2', - std=0.001, - bias=-np.log(0.99 / 0.01))) - ]): - super(GridHead, self).__init__(init_cfg) - self.grid_points = grid_points - self.num_convs = num_convs - self.roi_feat_size = roi_feat_size - self.in_channels = in_channels - self.conv_kernel_size = conv_kernel_size - self.point_feat_channels = point_feat_channels - self.conv_out_channels = self.point_feat_channels * self.grid_points - self.class_agnostic = class_agnostic - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - if isinstance(norm_cfg, dict) and norm_cfg['type'] == 'GN': - assert self.conv_out_channels % norm_cfg['num_groups'] == 0 - - assert self.grid_points >= 4 - self.grid_size = int(np.sqrt(self.grid_points)) - if self.grid_size * self.grid_size != self.grid_points: - raise ValueError('grid_points must be a square number') - - # the predicted heatmap is half of whole_map_size - if not isinstance(self.roi_feat_size, int): - raise ValueError('Only square RoIs are supporeted in Grid R-CNN') - self.whole_map_size = self.roi_feat_size * 4 - - # compute point-wise sub-regions - self.sub_regions = self.calc_sub_regions() - - self.convs = [] - for i in range(self.num_convs): - in_channels = ( - self.in_channels if i == 0 else self.conv_out_channels) - stride = 2 if i == 0 else 1 - padding = (self.conv_kernel_size - 1) // 2 - self.convs.append( - ConvModule( - in_channels, - self.conv_out_channels, - self.conv_kernel_size, - stride=stride, - padding=padding, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=True)) - self.convs = nn.Sequential(*self.convs) - - self.deconv1 = nn.ConvTranspose2d( - self.conv_out_channels, - self.conv_out_channels, - kernel_size=deconv_kernel_size, - stride=2, - padding=(deconv_kernel_size - 2) // 2, - groups=grid_points) - self.norm1 = nn.GroupNorm(grid_points, self.conv_out_channels) - self.deconv2 = nn.ConvTranspose2d( - self.conv_out_channels, - grid_points, - kernel_size=deconv_kernel_size, - stride=2, - padding=(deconv_kernel_size - 2) // 2, - groups=grid_points) - - # find the 4-neighbor of each grid point - self.neighbor_points = [] - grid_size = self.grid_size - for i in range(grid_size): # i-th column - for j in range(grid_size): # j-th row - neighbors = [] - if i > 0: # left: (i - 1, j) - neighbors.append((i - 1) * grid_size + j) - if j > 0: # up: (i, j - 1) - neighbors.append(i * grid_size + j - 1) - if j < grid_size - 1: # down: (i, j + 1) - neighbors.append(i * grid_size + j + 1) - if i < grid_size - 1: # right: (i + 1, j) - neighbors.append((i + 1) * grid_size + j) - self.neighbor_points.append(tuple(neighbors)) - # total edges in the grid - self.num_edges = sum([len(p) for p in self.neighbor_points]) - - self.forder_trans = nn.ModuleList() # first-order feature transition - self.sorder_trans = 
nn.ModuleList() # second-order feature transition - for neighbors in self.neighbor_points: - fo_trans = nn.ModuleList() - so_trans = nn.ModuleList() - for _ in range(len(neighbors)): - # each transition module consists of a 5x5 depth-wise conv and - # 1x1 conv. - fo_trans.append( - nn.Sequential( - nn.Conv2d( - self.point_feat_channels, - self.point_feat_channels, - 5, - stride=1, - padding=2, - groups=self.point_feat_channels), - nn.Conv2d(self.point_feat_channels, - self.point_feat_channels, 1))) - so_trans.append( - nn.Sequential( - nn.Conv2d( - self.point_feat_channels, - self.point_feat_channels, - 5, - 1, - 2, - groups=self.point_feat_channels), - nn.Conv2d(self.point_feat_channels, - self.point_feat_channels, 1))) - self.forder_trans.append(fo_trans) - self.sorder_trans.append(so_trans) - - self.loss_grid = build_loss(loss_grid) - - def forward(self, x): - assert x.shape[-1] == x.shape[-2] == self.roi_feat_size - # RoI feature transformation, downsample 2x - x = self.convs(x) - - c = self.point_feat_channels - # first-order fusion - x_fo = [None for _ in range(self.grid_points)] - for i, points in enumerate(self.neighbor_points): - x_fo[i] = x[:, i * c:(i + 1) * c] - for j, point_idx in enumerate(points): - x_fo[i] = x_fo[i] + self.forder_trans[i][j]( - x[:, point_idx * c:(point_idx + 1) * c]) - - # second-order fusion - x_so = [None for _ in range(self.grid_points)] - for i, points in enumerate(self.neighbor_points): - x_so[i] = x[:, i * c:(i + 1) * c] - for j, point_idx in enumerate(points): - x_so[i] = x_so[i] + self.sorder_trans[i][j](x_fo[point_idx]) - - # predicted heatmap with fused features - x2 = torch.cat(x_so, dim=1) - x2 = self.deconv1(x2) - x2 = F.relu(self.norm1(x2), inplace=True) - heatmap = self.deconv2(x2) - - # predicted heatmap with original features (applicable during training) - if self.training: - x1 = x - x1 = self.deconv1(x1) - x1 = F.relu(self.norm1(x1), inplace=True) - heatmap_unfused = self.deconv2(x1) - else: - heatmap_unfused = heatmap - - return dict(fused=heatmap, unfused=heatmap_unfused) - - def calc_sub_regions(self): - """Compute point specific representation regions. - - See Grid R-CNN Plus (https://arxiv.org/abs/1906.05688) for details. - """ - # to make it consistent with the original implementation, half_size - # is computed as 2 * quarter_size, which is smaller - half_size = self.whole_map_size // 4 * 2 - sub_regions = [] - for i in range(self.grid_points): - x_idx = i // self.grid_size - y_idx = i % self.grid_size - if x_idx == 0: - sub_x1 = 0 - elif x_idx == self.grid_size - 1: - sub_x1 = half_size - else: - ratio = x_idx / (self.grid_size - 1) - 0.25 - sub_x1 = max(int(ratio * self.whole_map_size), 0) - - if y_idx == 0: - sub_y1 = 0 - elif y_idx == self.grid_size - 1: - sub_y1 = half_size - else: - ratio = y_idx / (self.grid_size - 1) - 0.25 - sub_y1 = max(int(ratio * self.whole_map_size), 0) - sub_regions.append( - (sub_x1, sub_y1, sub_x1 + half_size, sub_y1 + half_size)) - return sub_regions - - def get_targets(self, sampling_results, rcnn_train_cfg): - # mix all samples (across images) together. 
- pos_bboxes = torch.cat([res.pos_bboxes for res in sampling_results], - dim=0).cpu() - pos_gt_bboxes = torch.cat( - [res.pos_gt_bboxes for res in sampling_results], dim=0).cpu() - assert pos_bboxes.shape == pos_gt_bboxes.shape - - # expand pos_bboxes to 2x of original size - x1 = pos_bboxes[:, 0] - (pos_bboxes[:, 2] - pos_bboxes[:, 0]) / 2 - y1 = pos_bboxes[:, 1] - (pos_bboxes[:, 3] - pos_bboxes[:, 1]) / 2 - x2 = pos_bboxes[:, 2] + (pos_bboxes[:, 2] - pos_bboxes[:, 0]) / 2 - y2 = pos_bboxes[:, 3] + (pos_bboxes[:, 3] - pos_bboxes[:, 1]) / 2 - pos_bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - pos_bbox_ws = (pos_bboxes[:, 2] - pos_bboxes[:, 0]).unsqueeze(-1) - pos_bbox_hs = (pos_bboxes[:, 3] - pos_bboxes[:, 1]).unsqueeze(-1) - - num_rois = pos_bboxes.shape[0] - map_size = self.whole_map_size - # this is not the final target shape - targets = torch.zeros((num_rois, self.grid_points, map_size, map_size), - dtype=torch.float) - - # pre-compute interpolation factors for all grid points. - # the first item is the factor of x-dim, and the second is y-dim. - # for a 9-point grid, factors are like (1, 0), (0.5, 0.5), (0, 1) - factors = [] - for j in range(self.grid_points): - x_idx = j // self.grid_size - y_idx = j % self.grid_size - factors.append((1 - x_idx / (self.grid_size - 1), - 1 - y_idx / (self.grid_size - 1))) - - radius = rcnn_train_cfg.pos_radius - radius2 = radius**2 - for i in range(num_rois): - # ignore small bboxes - if (pos_bbox_ws[i] <= self.grid_size - or pos_bbox_hs[i] <= self.grid_size): - continue - # for each grid point, mark a small circle as positive - for j in range(self.grid_points): - factor_x, factor_y = factors[j] - gridpoint_x = factor_x * pos_gt_bboxes[i, 0] + ( - 1 - factor_x) * pos_gt_bboxes[i, 2] - gridpoint_y = factor_y * pos_gt_bboxes[i, 1] + ( - 1 - factor_y) * pos_gt_bboxes[i, 3] - - cx = int((gridpoint_x - pos_bboxes[i, 0]) / pos_bbox_ws[i] * - map_size) - cy = int((gridpoint_y - pos_bboxes[i, 1]) / pos_bbox_hs[i] * - map_size) - - for x in range(cx - radius, cx + radius + 1): - for y in range(cy - radius, cy + radius + 1): - if x >= 0 and x < map_size and y >= 0 and y < map_size: - if (x - cx)**2 + (y - cy)**2 <= radius2: - targets[i, j, y, x] = 1 - # reduce the target heatmap size by a half - # proposed in Grid R-CNN Plus (https://arxiv.org/abs/1906.05688). 
- sub_targets = [] - for i in range(self.grid_points): - sub_x1, sub_y1, sub_x2, sub_y2 = self.sub_regions[i] - sub_targets.append(targets[:, [i], sub_y1:sub_y2, sub_x1:sub_x2]) - sub_targets = torch.cat(sub_targets, dim=1) - sub_targets = sub_targets.to(sampling_results[0].pos_bboxes.device) - return sub_targets - - def loss(self, grid_pred, grid_targets): - loss_fused = self.loss_grid(grid_pred['fused'], grid_targets) - loss_unfused = self.loss_grid(grid_pred['unfused'], grid_targets) - loss_grid = loss_fused + loss_unfused - return dict(loss_grid=loss_grid) - - def get_bboxes(self, det_bboxes, grid_pred, img_metas): - # TODO: refactoring - assert det_bboxes.shape[0] == grid_pred.shape[0] - det_bboxes = det_bboxes.cpu() - cls_scores = det_bboxes[:, [4]] - det_bboxes = det_bboxes[:, :4] - grid_pred = grid_pred.sigmoid().cpu() - - R, c, h, w = grid_pred.shape - half_size = self.whole_map_size // 4 * 2 - assert h == w == half_size - assert c == self.grid_points - - # find the point with max scores in the half-sized heatmap - grid_pred = grid_pred.view(R * c, h * w) - pred_scores, pred_position = grid_pred.max(dim=1) - xs = pred_position % w - ys = pred_position // w - - # get the position in the whole heatmap instead of half-sized heatmap - for i in range(self.grid_points): - xs[i::self.grid_points] += self.sub_regions[i][0] - ys[i::self.grid_points] += self.sub_regions[i][1] - - # reshape to (num_rois, grid_points) - pred_scores, xs, ys = tuple( - map(lambda x: x.view(R, c), [pred_scores, xs, ys])) - - # get expanded pos_bboxes - widths = (det_bboxes[:, 2] - det_bboxes[:, 0]).unsqueeze(-1) - heights = (det_bboxes[:, 3] - det_bboxes[:, 1]).unsqueeze(-1) - x1 = (det_bboxes[:, 0, None] - widths / 2) - y1 = (det_bboxes[:, 1, None] - heights / 2) - # map the grid point to the absolute coordinates - abs_xs = (xs.float() + 0.5) / w * widths + x1 - abs_ys = (ys.float() + 0.5) / h * heights + y1 - - # get the grid points indices that fall on the bbox boundaries - x1_inds = [i for i in range(self.grid_size)] - y1_inds = [i * self.grid_size for i in range(self.grid_size)] - x2_inds = [ - self.grid_points - self.grid_size + i - for i in range(self.grid_size) - ] - y2_inds = [(i + 1) * self.grid_size - 1 for i in range(self.grid_size)] - - # voting of all grid points on some boundary - bboxes_x1 = (abs_xs[:, x1_inds] * pred_scores[:, x1_inds]).sum( - dim=1, keepdim=True) / ( - pred_scores[:, x1_inds].sum(dim=1, keepdim=True)) - bboxes_y1 = (abs_ys[:, y1_inds] * pred_scores[:, y1_inds]).sum( - dim=1, keepdim=True) / ( - pred_scores[:, y1_inds].sum(dim=1, keepdim=True)) - bboxes_x2 = (abs_xs[:, x2_inds] * pred_scores[:, x2_inds]).sum( - dim=1, keepdim=True) / ( - pred_scores[:, x2_inds].sum(dim=1, keepdim=True)) - bboxes_y2 = (abs_ys[:, y2_inds] * pred_scores[:, y2_inds]).sum( - dim=1, keepdim=True) / ( - pred_scores[:, y2_inds].sum(dim=1, keepdim=True)) - - bbox_res = torch.cat( - [bboxes_x1, bboxes_y1, bboxes_x2, bboxes_y2, cls_scores], dim=1) - bbox_res[:, [0, 2]].clamp_(min=0, max=img_metas[0]['img_shape'][1]) - bbox_res[:, [1, 3]].clamp_(min=0, max=img_metas[0]['img_shape'][0]) - - return bbox_res diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/htc_mask_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/htc_mask_head.py deleted file mode 100644 index 7ad8592b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/htc_mask_head.py +++ /dev/null @@ -1,39 +0,0 @@ -# 
Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule - -from mmdet.models.builder import HEADS -from .fcn_mask_head import FCNMaskHead - - -@HEADS.register_module() -class HTCMaskHead(FCNMaskHead): - - def __init__(self, with_conv_res=True, *args, **kwargs): - super(HTCMaskHead, self).__init__(*args, **kwargs) - self.with_conv_res = with_conv_res - if self.with_conv_res: - self.conv_res = ConvModule( - self.conv_out_channels, - self.conv_out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - - def forward(self, x, res_feat=None, return_logits=True, return_feat=True): - if res_feat is not None: - assert self.with_conv_res - res_feat = self.conv_res(res_feat) - x = x + res_feat - for conv in self.convs: - x = conv(x) - res_feat = x - outs = [] - if return_logits: - x = self.upsample(x) - if self.upsample_method == 'deconv': - x = self.relu(x) - mask_pred = self.conv_logits(x) - outs.append(mask_pred) - if return_feat: - outs.append(res_feat) - return outs if len(outs) > 1 else outs[0] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/mask_point_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/mask_point_head.py deleted file mode 100644 index c77c46d2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/mask_point_head.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend/point_head/point_head.py # noqa - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import point_sample, rel_roi_point_to_rel_img_point -from mmcv.runner import BaseModule - -from mmdet.models.builder import HEADS, build_loss -from mmdet.models.utils import (get_uncertain_point_coords_with_randomness, - get_uncertainty) - - -@HEADS.register_module() -class MaskPointHead(BaseModule): - """A mask point head use in PointRend. - - ``MaskPointHead`` use shared multi-layer perceptron (equivalent to - nn.Conv1d) to predict the logit of input points. The fine-grained feature - and coarse feature will be concatenate together for predication. - - Args: - num_fcs (int): Number of fc layers in the head. Default: 3. - in_channels (int): Number of input channels. Default: 256. - fc_channels (int): Number of fc channels. Default: 256. - num_classes (int): Number of classes for logits. Default: 80. - class_agnostic (bool): Whether use class agnostic classification. - If so, the output channels of logits will be 1. Default: False. - coarse_pred_each_layer (bool): Whether concatenate coarse feature with - the output of each fc layer. Default: True. - conv_cfg (dict | None): Dictionary to construct and config conv layer. - Default: dict(type='Conv1d')) - norm_cfg (dict | None): Dictionary to construct and config norm layer. - Default: None. - loss_point (dict): Dictionary to construct and config loss layer of - point head. Default: dict(type='CrossEntropyLoss', use_mask=True, - loss_weight=1.0). - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - num_classes, - num_fcs=3, - in_channels=256, - fc_channels=256, - class_agnostic=False, - coarse_pred_each_layer=True, - conv_cfg=dict(type='Conv1d'), - norm_cfg=None, - act_cfg=dict(type='ReLU'), - loss_point=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0), - init_cfg=dict( - type='Normal', std=0.001, - override=dict(name='fc_logits'))): - super().__init__(init_cfg) - self.num_fcs = num_fcs - self.in_channels = in_channels - self.fc_channels = fc_channels - self.num_classes = num_classes - self.class_agnostic = class_agnostic - self.coarse_pred_each_layer = coarse_pred_each_layer - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.loss_point = build_loss(loss_point) - - fc_in_channels = in_channels + num_classes - self.fcs = nn.ModuleList() - for _ in range(num_fcs): - fc = ConvModule( - fc_in_channels, - fc_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.fcs.append(fc) - fc_in_channels = fc_channels - fc_in_channels += num_classes if self.coarse_pred_each_layer else 0 - - out_channels = 1 if self.class_agnostic else self.num_classes - self.fc_logits = nn.Conv1d( - fc_in_channels, out_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, fine_grained_feats, coarse_feats): - """Classify each point base on fine grained and coarse feats. - - Args: - fine_grained_feats (Tensor): Fine grained feature sampled from FPN, - shape (num_rois, in_channels, num_points). - coarse_feats (Tensor): Coarse feature sampled from CoarseMaskHead, - shape (num_rois, num_classes, num_points). - - Returns: - Tensor: Point classification results, - shape (num_rois, num_class, num_points). - """ - - x = torch.cat([fine_grained_feats, coarse_feats], dim=1) - for fc in self.fcs: - x = fc(x) - if self.coarse_pred_each_layer: - x = torch.cat((x, coarse_feats), dim=1) - return self.fc_logits(x) - - def get_targets(self, rois, rel_roi_points, sampling_results, gt_masks, - cfg): - """Get training targets of MaskPointHead for all images. - - Args: - rois (Tensor): Region of Interest, shape (num_rois, 5). - rel_roi_points: Points coordinates relative to RoI, shape - (num_rois, num_points, 2). - sampling_results (:obj:`SamplingResult`): Sampling result after - sampling and assignment. - gt_masks (Tensor) : Ground truth segmentation masks of - corresponding boxes, shape (num_rois, height, width). - cfg (dict): Training cfg. - - Returns: - Tensor: Point target, shape (num_rois, num_points). 
- """ - - num_imgs = len(sampling_results) - rois_list = [] - rel_roi_points_list = [] - for batch_ind in range(num_imgs): - inds = (rois[:, 0] == batch_ind) - rois_list.append(rois[inds]) - rel_roi_points_list.append(rel_roi_points[inds]) - pos_assigned_gt_inds_list = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - cfg_list = [cfg for _ in range(num_imgs)] - - point_targets = map(self._get_target_single, rois_list, - rel_roi_points_list, pos_assigned_gt_inds_list, - gt_masks, cfg_list) - point_targets = list(point_targets) - - if len(point_targets) > 0: - point_targets = torch.cat(point_targets) - - return point_targets - - def _get_target_single(self, rois, rel_roi_points, pos_assigned_gt_inds, - gt_masks, cfg): - """Get training target of MaskPointHead for each image.""" - num_pos = rois.size(0) - num_points = cfg.num_points - if num_pos > 0: - gt_masks_th = ( - gt_masks.to_tensor(rois.dtype, rois.device).index_select( - 0, pos_assigned_gt_inds)) - gt_masks_th = gt_masks_th.unsqueeze(1) - rel_img_points = rel_roi_point_to_rel_img_point( - rois, rel_roi_points, gt_masks_th) - point_targets = point_sample(gt_masks_th, - rel_img_points).squeeze(1) - else: - point_targets = rois.new_zeros((0, num_points)) - return point_targets - - def loss(self, point_pred, point_targets, labels): - """Calculate loss for MaskPointHead. - - Args: - point_pred (Tensor): Point predication result, shape - (num_rois, num_classes, num_points). - point_targets (Tensor): Point targets, shape (num_roi, num_points). - labels (Tensor): Class label of corresponding boxes, - shape (num_rois, ) - - Returns: - dict[str, Tensor]: a dictionary of point loss components - """ - - loss = dict() - if self.class_agnostic: - loss_point = self.loss_point(point_pred, point_targets, - torch.zeros_like(labels)) - else: - loss_point = self.loss_point(point_pred, point_targets, labels) - loss['loss_point'] = loss_point - return loss - - def get_roi_rel_points_train(self, mask_pred, labels, cfg): - """Get ``num_points`` most uncertain points with random points during - train. - - Sample points in [0, 1] x [0, 1] coordinate space based on their - uncertainty. The uncertainties are calculated for each point using - '_get_uncertainty()' function that takes point's logit prediction as - input. - - Args: - mask_pred (Tensor): A tensor of shape (num_rois, num_classes, - mask_height, mask_width) for class-specific or class-agnostic - prediction. - labels (list): The ground truth class for each instance. - cfg (dict): Training config of point head. - - Returns: - point_coords (Tensor): A tensor of shape (num_rois, num_points, 2) - that contains the coordinates sampled points. - """ - point_coords = get_uncertain_point_coords_with_randomness( - mask_pred, labels, cfg.num_points, cfg.oversample_ratio, - cfg.importance_sample_ratio) - return point_coords - - def get_roi_rel_points_test(self, mask_pred, pred_label, cfg): - """Get ``num_points`` most uncertain points during test. - - Args: - mask_pred (Tensor): A tensor of shape (num_rois, num_classes, - mask_height, mask_width) for class-specific or class-agnostic - prediction. - pred_label (list): The predication class for each instance. - cfg (dict): Testing config of point head. - - Returns: - point_indices (Tensor): A tensor of shape (num_rois, num_points) - that contains indices from [0, mask_height x mask_width) of the - most uncertain points. 
- point_coords (Tensor): A tensor of shape (num_rois, num_points, 2) - that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the [mask_height, mask_width] grid . - """ - num_points = cfg.subdivision_num_points - uncertainty_map = get_uncertainty(mask_pred, pred_label) - num_rois, _, mask_height, mask_width = uncertainty_map.shape - - # During ONNX exporting, the type of each elements of 'shape' is - # `Tensor(float)`, while it is `float` during PyTorch inference. - if isinstance(mask_height, torch.Tensor): - h_step = 1.0 / mask_height.float() - w_step = 1.0 / mask_width.float() - else: - h_step = 1.0 / mask_height - w_step = 1.0 / mask_width - # cast to int to avoid dynamic K for TopK op in ONNX - mask_size = int(mask_height * mask_width) - uncertainty_map = uncertainty_map.view(num_rois, mask_size) - num_points = min(mask_size, num_points) - point_indices = uncertainty_map.topk(num_points, dim=1)[1] - xs = w_step / 2.0 + (point_indices % mask_width).float() * w_step - ys = h_step / 2.0 + (point_indices // mask_width).float() * h_step - point_coords = torch.stack([xs, ys], dim=2) - return point_indices, point_coords diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/maskiou_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/maskiou_head.py deleted file mode 100644 index a7ff7c7c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/maskiou_head.py +++ /dev/null @@ -1,183 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import Conv2d, Linear, MaxPool2d -from mmcv.runner import BaseModule, force_fp32 -from torch.nn.modules.utils import _pair - -from mmdet.models.builder import HEADS, build_loss - - -@HEADS.register_module() -class MaskIoUHead(BaseModule): - """Mask IoU Head. - - This head predicts the IoU of predicted masks and corresponding gt masks. 
- """ - - def __init__(self, - num_convs=4, - num_fcs=2, - roi_feat_size=14, - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - num_classes=80, - loss_iou=dict(type='MSELoss', loss_weight=0.5), - init_cfg=[ - dict(type='Kaiming', override=dict(name='convs')), - dict(type='Caffe2Xavier', override=dict(name='fcs')), - dict( - type='Normal', - std=0.01, - override=dict(name='fc_mask_iou')) - ]): - super(MaskIoUHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.fc_out_channels = fc_out_channels - self.num_classes = num_classes - self.fp16_enabled = False - - self.convs = nn.ModuleList() - for i in range(num_convs): - if i == 0: - # concatenation of mask feature and mask prediction - in_channels = self.in_channels + 1 - else: - in_channels = self.conv_out_channels - stride = 2 if i == num_convs - 1 else 1 - self.convs.append( - Conv2d( - in_channels, - self.conv_out_channels, - 3, - stride=stride, - padding=1)) - - roi_feat_size = _pair(roi_feat_size) - pooled_area = (roi_feat_size[0] // 2) * (roi_feat_size[1] // 2) - self.fcs = nn.ModuleList() - for i in range(num_fcs): - in_channels = ( - self.conv_out_channels * - pooled_area if i == 0 else self.fc_out_channels) - self.fcs.append(Linear(in_channels, self.fc_out_channels)) - - self.fc_mask_iou = Linear(self.fc_out_channels, self.num_classes) - self.relu = nn.ReLU() - self.max_pool = MaxPool2d(2, 2) - self.loss_iou = build_loss(loss_iou) - - def forward(self, mask_feat, mask_pred): - mask_pred = mask_pred.sigmoid() - mask_pred_pooled = self.max_pool(mask_pred.unsqueeze(1)) - - x = torch.cat((mask_feat, mask_pred_pooled), 1) - - for conv in self.convs: - x = self.relu(conv(x)) - x = x.flatten(1) - for fc in self.fcs: - x = self.relu(fc(x)) - mask_iou = self.fc_mask_iou(x) - return mask_iou - - @force_fp32(apply_to=('mask_iou_pred', )) - def loss(self, mask_iou_pred, mask_iou_targets): - pos_inds = mask_iou_targets > 0 - if pos_inds.sum() > 0: - loss_mask_iou = self.loss_iou(mask_iou_pred[pos_inds], - mask_iou_targets[pos_inds]) - else: - loss_mask_iou = mask_iou_pred.sum() * 0 - return dict(loss_mask_iou=loss_mask_iou) - - @force_fp32(apply_to=('mask_pred', )) - def get_targets(self, sampling_results, gt_masks, mask_pred, mask_targets, - rcnn_train_cfg): - """Compute target of mask IoU. - - Mask IoU target is the IoU of the predicted mask (inside a bbox) and - the gt mask of corresponding gt mask (the whole instance). - The intersection area is computed inside the bbox, and the gt mask area - is computed with two steps, firstly we compute the gt area inside the - bbox, then divide it by the area ratio of gt area inside the bbox and - the gt area of the whole instance. - - Args: - sampling_results (list[:obj:`SamplingResult`]): sampling results. - gt_masks (BitmapMask | PolygonMask): Gt masks (the whole instance) - of each image, with the same shape of the input image. - mask_pred (Tensor): Predicted masks of each positive proposal, - shape (num_pos, h, w). - mask_targets (Tensor): Gt mask of each positive proposal, - binary map of the shape (num_pos, h, w). - rcnn_train_cfg (dict): Training config for R-CNN part. - - Returns: - Tensor: mask iou target (length == num positive). 
- """ - pos_proposals = [res.pos_bboxes for res in sampling_results] - pos_assigned_gt_inds = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - - # compute the area ratio of gt areas inside the proposals and - # the whole instance - area_ratios = map(self._get_area_ratio, pos_proposals, - pos_assigned_gt_inds, gt_masks) - area_ratios = torch.cat(list(area_ratios)) - assert mask_targets.size(0) == area_ratios.size(0) - - mask_pred = (mask_pred > rcnn_train_cfg.mask_thr_binary).float() - mask_pred_areas = mask_pred.sum((-1, -2)) - - # mask_pred and mask_targets are binary maps - overlap_areas = (mask_pred * mask_targets).sum((-1, -2)) - - # compute the mask area of the whole instance - gt_full_areas = mask_targets.sum((-1, -2)) / (area_ratios + 1e-7) - - mask_iou_targets = overlap_areas / ( - mask_pred_areas + gt_full_areas - overlap_areas) - return mask_iou_targets - - def _get_area_ratio(self, pos_proposals, pos_assigned_gt_inds, gt_masks): - """Compute area ratio of the gt mask inside the proposal and the gt - mask of the corresponding instance.""" - num_pos = pos_proposals.size(0) - if num_pos > 0: - area_ratios = [] - proposals_np = pos_proposals.cpu().numpy() - pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy() - # compute mask areas of gt instances (batch processing for speedup) - gt_instance_mask_area = gt_masks.areas - for i in range(num_pos): - gt_mask = gt_masks[pos_assigned_gt_inds[i]] - - # crop the gt mask inside the proposal - bbox = proposals_np[i, :].astype(np.int32) - gt_mask_in_proposal = gt_mask.crop(bbox) - - ratio = gt_mask_in_proposal.areas[0] / ( - gt_instance_mask_area[pos_assigned_gt_inds[i]] + 1e-7) - area_ratios.append(ratio) - area_ratios = torch.from_numpy(np.stack(area_ratios)).float().to( - pos_proposals.device) - else: - area_ratios = pos_proposals.new_zeros((0, )) - return area_ratios - - @force_fp32(apply_to=('mask_iou_pred', )) - def get_mask_scores(self, mask_iou_pred, det_bboxes, det_labels): - """Get the mask scores. - - mask_score = bbox_score * mask_iou - """ - inds = range(det_labels.size(0)) - mask_scores = mask_iou_pred[inds, det_labels] * det_bboxes[inds, -1] - mask_scores = mask_scores.cpu().numpy() - det_labels = det_labels.cpu().numpy() - return [mask_scores[det_labels == i] for i in range(self.num_classes)] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py deleted file mode 100644 index ca624866..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.builder import HEADS -from mmdet.models.utils import ResLayer, SimplifiedBasicBlock -from .fcn_mask_head import FCNMaskHead - - -@HEADS.register_module() -class SCNetMaskHead(FCNMaskHead): - """Mask head for `SCNet `_. - - Args: - conv_to_res (bool, optional): if True, change the conv layers to - ``SimplifiedBasicBlock``. 
- """ - - def __init__(self, conv_to_res=True, **kwargs): - super(SCNetMaskHead, self).__init__(**kwargs) - self.conv_to_res = conv_to_res - if conv_to_res: - assert self.conv_kernel_size == 3 - self.num_res_blocks = self.num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - self.in_channels, - self.conv_out_channels, - self.num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py deleted file mode 100644 index 2b8c5c32..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.builder import HEADS -from mmdet.models.utils import ResLayer, SimplifiedBasicBlock -from .fused_semantic_head import FusedSemanticHead - - -@HEADS.register_module() -class SCNetSemanticHead(FusedSemanticHead): - """Mask head for `SCNet `_. - - Args: - conv_to_res (bool, optional): if True, change the conv layers to - ``SimplifiedBasicBlock``. - """ - - def __init__(self, conv_to_res=True, **kwargs): - super(SCNetSemanticHead, self).__init__(**kwargs) - self.conv_to_res = conv_to_res - if self.conv_to_res: - num_res_blocks = self.num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - self.in_channels, - self.conv_out_channels, - num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.num_convs = num_res_blocks diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_scoring_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_scoring_roi_head.py deleted file mode 100644 index 4617988e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/mask_scoring_roi_head.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2roi -from ..builder import HEADS, build_head -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class MaskScoringRoIHead(StandardRoIHead): - """Mask Scoring RoIHead for Mask Scoring RCNN. 
- - https://arxiv.org/abs/1903.00241 - """ - - def __init__(self, mask_iou_head, **kwargs): - assert mask_iou_head is not None - super(MaskScoringRoIHead, self).__init__(**kwargs) - self.mask_iou_head = build_head(mask_iou_head) - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for Mask head in - training.""" - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - mask_results = super(MaskScoringRoIHead, - self)._mask_forward_train(x, sampling_results, - bbox_feats, gt_masks, - img_metas) - if mask_results['loss_mask'] is None: - return mask_results - - # mask iou head forward and loss - pos_mask_pred = mask_results['mask_pred'][ - range(mask_results['mask_pred'].size(0)), pos_labels] - mask_iou_pred = self.mask_iou_head(mask_results['mask_feats'], - pos_mask_pred) - pos_mask_iou_pred = mask_iou_pred[range(mask_iou_pred.size(0)), - pos_labels] - - mask_iou_targets = self.mask_iou_head.get_targets( - sampling_results, gt_masks, pos_mask_pred, - mask_results['mask_targets'], self.train_cfg) - loss_mask_iou = self.mask_iou_head.loss(pos_mask_iou_pred, - mask_iou_targets) - mask_results['loss_mask'].update(loss_mask_iou) - return mask_results - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Obtain mask prediction without augmentation.""" - # image shapes of images in the batch - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - num_imgs = len(det_bboxes) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - num_classes = self.mask_head.num_classes - segm_results = [[[] for _ in range(num_classes)] - for _ in range(num_imgs)] - mask_scores = [[[] for _ in range(num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i] - for i in range(num_imgs) - ] - mask_rois = bbox2roi(_bboxes) - mask_results = self._mask_forward(x, mask_rois) - concat_det_labels = torch.cat(det_labels) - # get mask scores with mask iou head - mask_feats = mask_results['mask_feats'] - mask_pred = mask_results['mask_pred'] - mask_iou_pred = self.mask_iou_head( - mask_feats, mask_pred[range(concat_det_labels.size(0)), - concat_det_labels]) - # split batch mask prediction back to each image - num_bboxes_per_img = tuple(len(_bbox) for _bbox in _bboxes) - mask_preds = mask_pred.split(num_bboxes_per_img, 0) - mask_iou_preds = mask_iou_pred.split(num_bboxes_per_img, 0) - - # apply mask post-processing to each image individually - segm_results = [] - mask_scores = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - mask_scores.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_preds[i], _bboxes[i], det_labels[i], - self.test_cfg, ori_shapes[i], scale_factors[i], - rescale) - # get mask scores with mask iou head - mask_score = self.mask_iou_head.get_mask_scores( - mask_iou_preds[i], det_bboxes[i], det_labels[i]) - segm_results.append(segm_result) - mask_scores.append(mask_score) - return list(zip(segm_results, mask_scores)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/pisa_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/pisa_roi_head.py deleted file mode 100644 index 92a51186..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/pisa_roi_head.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.core import bbox2roi -from ..builder import HEADS -from ..losses.pisa_loss import carl_loss, isr_p -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class PISARoIHead(StandardRoIHead): - r"""The RoI head for `Prime Sample Attention in Object Detection - `_.""" - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """Forward function for training. - - Args: - x (list[Tensor]): List of multi-level img features. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): List of region proposals. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (list[Tensor], optional): Specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : True segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - neg_label_weights = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - # neg label weight is obtained by sampling when using ISR-N - neg_label_weight = None - if isinstance(sampling_result, tuple): - sampling_result, neg_label_weight = sampling_result - sampling_results.append(sampling_result) - neg_label_weights.append(neg_label_weight) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train( - x, - sampling_results, - gt_bboxes, - gt_labels, - img_metas, - neg_label_weights=neg_label_weights) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - return losses - - def _bbox_forward(self, x, rois): - """Box forward function used in both training and testing.""" - # TODO: a more flexible way to decide which feature maps to use - bbox_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - if self.with_shared_head: - bbox_feats = self.shared_head(bbox_feats) - cls_score, bbox_pred = self.bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, - x, - sampling_results, - gt_bboxes, - gt_labels, - img_metas, - neg_label_weights=None): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - - bbox_results = self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - - # neg_label_weights obtained by sampler is image-wise, mapping back to - # the corresponding location in label weights - if neg_label_weights[0] is not None: - label_weights = bbox_targets[1] - cur_num_rois = 0 - for i in range(len(sampling_results)): - num_pos = sampling_results[i].pos_inds.size(0) - num_neg = sampling_results[i].neg_inds.size(0) - label_weights[cur_num_rois + num_pos:cur_num_rois + num_pos + - num_neg] = neg_label_weights[i] - cur_num_rois += num_pos + num_neg - - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - - # Apply ISR-P - isr_cfg = self.train_cfg.get('isr', None) - if isr_cfg is not None: - bbox_targets = isr_p( - cls_score, - bbox_pred, - bbox_targets, - rois, - sampling_results, - self.bbox_head.loss_cls, - self.bbox_head.bbox_coder, - **isr_cfg, - num_class=self.bbox_head.num_classes) - loss_bbox = self.bbox_head.loss(cls_score, bbox_pred, rois, - *bbox_targets) - - # Add CARL Loss - carl_cfg = self.train_cfg.get('carl', None) - if carl_cfg is not None: - loss_carl = carl_loss( - cls_score, - bbox_targets[0], - bbox_pred, - bbox_targets[2], - self.bbox_head.loss_bbox, - **carl_cfg, - num_class=self.bbox_head.num_classes) - loss_bbox.update(loss_carl) - - 
bbox_results.update(loss_bbox=loss_bbox) - return bbox_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/point_rend_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/point_rend_roi_head.py deleted file mode 100644 index 9f667793..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/point_rend_roi_head.py +++ /dev/null @@ -1,393 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend # noqa -import os -import warnings - -import numpy as np -import torch -import torch.nn.functional as F -from mmcv.ops import point_sample, rel_roi_point_to_rel_img_point - -from mmdet.core import bbox2roi, bbox_mapping, merge_aug_masks -from .. import builder -from ..builder import HEADS -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class PointRendRoIHead(StandardRoIHead): - """`PointRend `_.""" - - def __init__(self, point_head, *args, **kwargs): - super().__init__(*args, **kwargs) - assert self.with_bbox and self.with_mask - self.init_point_head(point_head) - - def init_point_head(self, point_head): - """Initialize ``point_head``""" - self.point_head = builder.build_head(point_head) - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for mask head and point head - in training.""" - mask_results = super()._mask_forward_train(x, sampling_results, - bbox_feats, gt_masks, - img_metas) - if mask_results['loss_mask'] is not None: - loss_point = self._mask_point_forward_train( - x, sampling_results, mask_results['mask_pred'], gt_masks, - img_metas) - mask_results['loss_mask'].update(loss_point) - - return mask_results - - def _mask_point_forward_train(self, x, sampling_results, mask_pred, - gt_masks, img_metas): - """Run forward function and calculate loss for point head in - training.""" - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - rel_roi_points = self.point_head.get_roi_rel_points_train( - mask_pred, pos_labels, cfg=self.train_cfg) - rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, rois, rel_roi_points, img_metas) - coarse_point_feats = point_sample(mask_pred, rel_roi_points) - mask_point_pred = self.point_head(fine_grained_point_feats, - coarse_point_feats) - mask_point_target = self.point_head.get_targets( - rois, rel_roi_points, sampling_results, gt_masks, self.train_cfg) - loss_mask_point = self.point_head.loss(mask_point_pred, - mask_point_target, pos_labels) - - return loss_mask_point - - def _get_fine_grained_point_feats(self, x, rois, rel_roi_points, - img_metas): - """Sample fine grained feats from each level feature map and - concatenate them together. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - rel_roi_points (Tensor): A tensor of shape (num_rois, num_points, - 2) that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the [mask_height, mask_width] grid. - img_metas (list[dict]): Image meta info. - - Returns: - Tensor: The fine grained features for each points, - has shape (num_rois, feats_channels, num_points). - """ - num_imgs = len(img_metas) - fine_grained_feats = [] - for idx in range(self.mask_roi_extractor.num_inputs): - feats = x[idx] - spatial_scale = 1. 
/ float( - self.mask_roi_extractor.featmap_strides[idx]) - point_feats = [] - for batch_ind in range(num_imgs): - # unravel batch dim - feat = feats[batch_ind].unsqueeze(0) - inds = (rois[:, 0].long() == batch_ind) - if inds.any(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois[inds], rel_roi_points[inds], feat.shape[2:], - spatial_scale).unsqueeze(0) - point_feat = point_sample(feat, rel_img_points) - point_feat = point_feat.squeeze(0).transpose(0, 1) - point_feats.append(point_feat) - fine_grained_feats.append(torch.cat(point_feats, dim=0)) - return torch.cat(fine_grained_feats, dim=1) - - def _mask_point_forward_test(self, x, rois, label_pred, mask_pred, - img_metas): - """Mask refining process with point head in testing. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - label_pred (Tensor): The predication class for each rois. - mask_pred (Tensor): The predication coarse masks of - shape (num_rois, num_classes, small_size, small_size). - img_metas (list[dict]): Image meta info. - - Returns: - Tensor: The refined masks of shape (num_rois, num_classes, - large_size, large_size). - """ - refined_mask_pred = mask_pred.clone() - for subdivision_step in range(self.test_cfg.subdivision_steps): - refined_mask_pred = F.interpolate( - refined_mask_pred, - scale_factor=self.test_cfg.scale_factor, - mode='bilinear', - align_corners=False) - # If `subdivision_num_points` is larger or equal to the - # resolution of the next step, then we can skip this step - num_rois, channels, mask_height, mask_width = \ - refined_mask_pred.shape - if (self.test_cfg.subdivision_num_points >= - self.test_cfg.scale_factor**2 * mask_height * mask_width - and - subdivision_step < self.test_cfg.subdivision_steps - 1): - continue - point_indices, rel_roi_points = \ - self.point_head.get_roi_rel_points_test( - refined_mask_pred, label_pred, cfg=self.test_cfg) - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, rois, rel_roi_points, img_metas) - coarse_point_feats = point_sample(mask_pred, rel_roi_points) - mask_point_pred = self.point_head(fine_grained_point_feats, - coarse_point_feats) - - point_indices = point_indices.unsqueeze(1).expand(-1, channels, -1) - refined_mask_pred = refined_mask_pred.reshape( - num_rois, channels, mask_height * mask_width) - refined_mask_pred = refined_mask_pred.scatter_( - 2, point_indices, mask_point_pred) - refined_mask_pred = refined_mask_pred.view(num_rois, channels, - mask_height, mask_width) - - return refined_mask_pred - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Obtain mask prediction without augmentation.""" - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - if isinstance(scale_factors[0], float): - warnings.warn( - 'Scale factor in img_metas should be a ' - 'ndarray with shape (4,) ' - 'arrange as (factor_w, factor_h, factor_w, factor_h), ' - 'The scale_factor with float type has been deprecated. ') - scale_factors = np.array([scale_factors] * 4, dtype=np.float32) - - num_imgs = len(det_bboxes) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - segm_results = [[[] for _ in range(self.mask_head.num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- _bboxes = [det_bboxes[i][:, :4] for i in range(len(det_bboxes))] - if rescale: - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - _bboxes[i] * scale_factors[i] for i in range(len(_bboxes)) - ] - - mask_rois = bbox2roi(_bboxes) - mask_results = self._mask_forward(x, mask_rois) - # split batch mask prediction back to each image - mask_pred = mask_results['mask_pred'] - num_mask_roi_per_img = [len(det_bbox) for det_bbox in det_bboxes] - mask_preds = mask_pred.split(num_mask_roi_per_img, 0) - mask_rois = mask_rois.split(num_mask_roi_per_img, 0) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - x_i = [xx[[i]] for xx in x] - mask_rois_i = mask_rois[i] - mask_rois_i[:, 0] = 0 # TODO: remove this hack - mask_pred_i = self._mask_point_forward_test( - x_i, mask_rois_i, det_labels[i], mask_preds[i], - [img_metas]) - segm_result = self.mask_head.get_seg_masks( - mask_pred_i, _bboxes[i], det_labels[i], self.test_cfg, - ori_shapes[i], scale_factors[i], rescale) - segm_results.append(segm_result) - return segm_results - - def aug_test_mask(self, feats, img_metas, det_bboxes, det_labels): - """Test for mask head with test time augmentation.""" - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - aug_masks = [] - for x, img_meta in zip(feats, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip) - mask_rois = bbox2roi([_bboxes]) - mask_results = self._mask_forward(x, mask_rois) - mask_results['mask_pred'] = self._mask_point_forward_test( - x, mask_rois, det_labels, mask_results['mask_pred'], - img_meta) - # convert to numpy array to save memory - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - merged_masks = merge_aug_masks(aug_masks, img_metas, self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head.get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - self.test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return segm_result - - def _onnx_get_fine_grained_point_feats(self, x, rois, rel_roi_points): - """Export the process of sampling fine grained feats to onnx. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - rel_roi_points (Tensor): A tensor of shape (num_rois, num_points, - 2) that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the [mask_height, mask_width] grid. - - Returns: - Tensor: The fine grained features for each points, - has shape (num_rois, feats_channels, num_points). - """ - batch_size = x[0].shape[0] - num_rois = rois.shape[0] - fine_grained_feats = [] - for idx in range(self.mask_roi_extractor.num_inputs): - feats = x[idx] - spatial_scale = 1. 
/ float( - self.mask_roi_extractor.featmap_strides[idx]) - - rel_img_points = rel_roi_point_to_rel_img_point( - rois, rel_roi_points, feats, spatial_scale) - channels = feats.shape[1] - num_points = rel_img_points.shape[1] - rel_img_points = rel_img_points.reshape(batch_size, -1, num_points, - 2) - point_feats = point_sample(feats, rel_img_points) - point_feats = point_feats.transpose(1, 2).reshape( - num_rois, channels, num_points) - fine_grained_feats.append(point_feats) - return torch.cat(fine_grained_feats, dim=1) - - def _mask_point_onnx_export(self, x, rois, label_pred, mask_pred): - """Export mask refining process with point head to onnx. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - label_pred (Tensor): The predication class for each rois. - mask_pred (Tensor): The predication coarse masks of - shape (num_rois, num_classes, small_size, small_size). - - Returns: - Tensor: The refined masks of shape (num_rois, num_classes, - large_size, large_size). - """ - refined_mask_pred = mask_pred.clone() - for subdivision_step in range(self.test_cfg.subdivision_steps): - refined_mask_pred = F.interpolate( - refined_mask_pred, - scale_factor=self.test_cfg.scale_factor, - mode='bilinear', - align_corners=False) - # If `subdivision_num_points` is larger or equal to the - # resolution of the next step, then we can skip this step - num_rois, channels, mask_height, mask_width = \ - refined_mask_pred.shape - if (self.test_cfg.subdivision_num_points >= - self.test_cfg.scale_factor**2 * mask_height * mask_width - and - subdivision_step < self.test_cfg.subdivision_steps - 1): - continue - point_indices, rel_roi_points = \ - self.point_head.get_roi_rel_points_test( - refined_mask_pred, label_pred, cfg=self.test_cfg) - fine_grained_point_feats = self._onnx_get_fine_grained_point_feats( - x, rois, rel_roi_points) - coarse_point_feats = point_sample(mask_pred, rel_roi_points) - mask_point_pred = self.point_head(fine_grained_point_feats, - coarse_point_feats) - - point_indices = point_indices.unsqueeze(1).expand(-1, channels, -1) - refined_mask_pred = refined_mask_pred.reshape( - num_rois, channels, mask_height * mask_width) - - is_trt_backend = os.environ.get('ONNX_BACKEND') == 'MMCVTensorRT' - # avoid ScatterElements op in ONNX for TensorRT - if is_trt_backend: - mask_shape = refined_mask_pred.shape - point_shape = point_indices.shape - inds_dim0 = torch.arange(point_shape[0]).reshape( - point_shape[0], 1, 1).expand_as(point_indices) - inds_dim1 = torch.arange(point_shape[1]).reshape( - 1, point_shape[1], 1).expand_as(point_indices) - inds_1d = inds_dim0.reshape( - -1) * mask_shape[1] * mask_shape[2] + inds_dim1.reshape( - -1) * mask_shape[2] + point_indices.reshape(-1) - refined_mask_pred = refined_mask_pred.reshape(-1) - refined_mask_pred[inds_1d] = mask_point_pred.reshape(-1) - refined_mask_pred = refined_mask_pred.reshape(*mask_shape) - else: - refined_mask_pred = refined_mask_pred.scatter_( - 2, point_indices, mask_point_pred) - - refined_mask_pred = refined_mask_pred.view(num_rois, channels, - mask_height, mask_width) - - return refined_mask_pred - - def mask_onnx_export(self, x, img_metas, det_bboxes, det_labels, **kwargs): - """Export mask branch to onnx which supports batch inference. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - det_bboxes (Tensor): Bboxes and corresponding scores. - has shape [N, num_bboxes, 5]. - det_labels (Tensor): class labels of - shape [N, num_bboxes]. 
- - Returns: - Tensor: The segmentation results of shape [N, num_bboxes, - image_height, image_width]. - """ - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - raise RuntimeError('[ONNX Error] Can not record MaskHead ' - 'as it has not been executed this time') - batch_size = det_bboxes.size(0) - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. - det_bboxes = det_bboxes[..., :4] - batch_index = torch.arange( - det_bboxes.size(0), device=det_bboxes.device).float().view( - -1, 1, 1).expand(det_bboxes.size(0), det_bboxes.size(1), 1) - mask_rois = torch.cat([batch_index, det_bboxes], dim=-1) - mask_rois = mask_rois.view(-1, 5) - mask_results = self._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - max_shape = img_metas[0]['img_shape_for_onnx'] - num_det = det_bboxes.shape[1] - det_bboxes = det_bboxes.reshape(-1, 4) - det_labels = det_labels.reshape(-1) - - mask_pred = self._mask_point_onnx_export(x, mask_rois, det_labels, - mask_pred) - - segm_results = self.mask_head.onnx_export(mask_pred, det_bboxes, - det_labels, self.test_cfg, - max_shape) - segm_results = segm_results.reshape(batch_size, num_det, max_shape[0], - max_shape[1]) - return segm_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/__init__.py deleted file mode 100644 index 0f602149..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_roi_extractor import BaseRoIExtractor -from .generic_roi_extractor import GenericRoIExtractor -from .single_level_roi_extractor import SingleRoIExtractor - -__all__ = ['BaseRoIExtractor', 'SingleRoIExtractor', 'GenericRoIExtractor'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py deleted file mode 100644 index 82629757..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -import torch -import torch.nn as nn -from mmcv import ops -from mmcv.runner import BaseModule - - -class BaseRoIExtractor(BaseModule, metaclass=ABCMeta): - """Base class for RoI extractor. - - Args: - roi_layer (dict): Specify RoI layer type and arguments. - out_channels (int): Output channels of RoI layers. - featmap_strides (int): Strides of input feature maps. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - roi_layer, - out_channels, - featmap_strides, - init_cfg=None): - super(BaseRoIExtractor, self).__init__(init_cfg) - self.roi_layers = self.build_roi_layers(roi_layer, featmap_strides) - self.out_channels = out_channels - self.featmap_strides = featmap_strides - self.fp16_enabled = False - - @property - def num_inputs(self): - """int: Number of input feature maps.""" - return len(self.featmap_strides) - - def build_roi_layers(self, layer_cfg, featmap_strides): - """Build RoI operator to extract feature from each level feature map. 
- - Args: - layer_cfg (dict): Dictionary to construct and config RoI layer - operation. Options are modules under ``mmcv/ops`` such as - ``RoIAlign``. - featmap_strides (List[int]): The stride of input feature map w.r.t - to the original image size, which would be used to scale RoI - coordinate (original image coordinate system) to feature - coordinate system. - - Returns: - nn.ModuleList: The RoI extractor modules for each level feature - map. - """ - - cfg = layer_cfg.copy() - layer_type = cfg.pop('type') - assert hasattr(ops, layer_type) - layer_cls = getattr(ops, layer_type) - roi_layers = nn.ModuleList( - [layer_cls(spatial_scale=1 / s, **cfg) for s in featmap_strides]) - return roi_layers - - def roi_rescale(self, rois, scale_factor): - """Scale RoI coordinates by scale factor. - - Args: - rois (torch.Tensor): RoI (Region of Interest), shape (n, 5) - scale_factor (float): Scale factor that RoI will be multiplied by. - - Returns: - torch.Tensor: Scaled RoI. - """ - - cx = (rois[:, 1] + rois[:, 3]) * 0.5 - cy = (rois[:, 2] + rois[:, 4]) * 0.5 - w = rois[:, 3] - rois[:, 1] - h = rois[:, 4] - rois[:, 2] - new_w = w * scale_factor - new_h = h * scale_factor - x1 = cx - new_w * 0.5 - x2 = cx + new_w * 0.5 - y1 = cy - new_h * 0.5 - y2 = cy + new_h * 0.5 - new_rois = torch.stack((rois[:, 0], x1, y1, x2, y2), dim=-1) - return new_rois - - @abstractmethod - def forward(self, feats, rois, roi_scale_factor=None): - pass diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py deleted file mode 100644 index 566d3de8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/generic_roi_extractor.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn.bricks import build_plugin_layer -from mmcv.runner import force_fp32 - -from mmdet.models.builder import ROI_EXTRACTORS -from .base_roi_extractor import BaseRoIExtractor - - -@ROI_EXTRACTORS.register_module() -class GenericRoIExtractor(BaseRoIExtractor): - """Extract RoI features from all level feature maps levels. - - This is the implementation of `A novel Region of Interest Extraction Layer - for Instance Segmentation `_. - - Args: - aggregation (str): The method to aggregate multiple feature maps. - Options are 'sum', 'concat'. Default: 'sum'. - pre_cfg (dict | None): Specify pre-processing modules. Default: None. - post_cfg (dict | None): Specify post-processing modules. Default: None. - kwargs (keyword arguments): Arguments that are the same - as :class:`BaseRoIExtractor`. 
- """ - - def __init__(self, - aggregation='sum', - pre_cfg=None, - post_cfg=None, - **kwargs): - super(GenericRoIExtractor, self).__init__(**kwargs) - - assert aggregation in ['sum', 'concat'] - - self.aggregation = aggregation - self.with_post = post_cfg is not None - self.with_pre = pre_cfg is not None - # build pre/post processing modules - if self.with_post: - self.post_module = build_plugin_layer(post_cfg, '_post_module')[1] - if self.with_pre: - self.pre_module = build_plugin_layer(pre_cfg, '_pre_module')[1] - - @force_fp32(apply_to=('feats', ), out_fp16=True) - def forward(self, feats, rois, roi_scale_factor=None): - """Forward function.""" - if len(feats) == 1: - return self.roi_layers[0](feats[0], rois) - - out_size = self.roi_layers[0].output_size - num_levels = len(feats) - roi_feats = feats[0].new_zeros( - rois.size(0), self.out_channels, *out_size) - - # some times rois is an empty tensor - if roi_feats.shape[0] == 0: - return roi_feats - - if roi_scale_factor is not None: - rois = self.roi_rescale(rois, roi_scale_factor) - - # mark the starting channels for concat mode - start_channels = 0 - for i in range(num_levels): - roi_feats_t = self.roi_layers[i](feats[i], rois) - end_channels = start_channels + roi_feats_t.size(1) - if self.with_pre: - # apply pre-processing to a RoI extracted from each layer - roi_feats_t = self.pre_module(roi_feats_t) - if self.aggregation == 'sum': - # and sum them all - roi_feats += roi_feats_t - else: - # and concat them along channel dimension - roi_feats[:, start_channels:end_channels] = roi_feats_t - # update channels starting position - start_channels = end_channels - # check if concat channels match at the end - if self.aggregation == 'concat': - assert start_channels == self.out_channels - - if self.with_post: - # apply post-processing before return the result - roi_feats = self.post_module(roi_feats) - return roi_feats diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py deleted file mode 100644 index 1b569ce1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 - -from mmdet.models.builder import ROI_EXTRACTORS -from .base_roi_extractor import BaseRoIExtractor - - -@ROI_EXTRACTORS.register_module() -class SingleRoIExtractor(BaseRoIExtractor): - """Extract RoI features from a single level feature map. - - If there are multiple input feature levels, each RoI is mapped to a level - according to its scale. The mapping rule is proposed in - `FPN `_. - - Args: - roi_layer (dict): Specify RoI layer type and arguments. - out_channels (int): Output channels of RoI layers. - featmap_strides (List[int]): Strides of input feature maps. - finest_scale (int): Scale threshold of mapping to level 0. Default: 56. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - roi_layer, - out_channels, - featmap_strides, - finest_scale=56, - init_cfg=None): - super(SingleRoIExtractor, self).__init__(roi_layer, out_channels, - featmap_strides, init_cfg) - self.finest_scale = finest_scale - - def map_roi_levels(self, rois, num_levels): - """Map rois to corresponding feature levels by scales. 
- - - scale < finest_scale * 2: level 0 - - finest_scale * 2 <= scale < finest_scale * 4: level 1 - - finest_scale * 4 <= scale < finest_scale * 8: level 2 - - scale >= finest_scale * 8: level 3 - - Args: - rois (Tensor): Input RoIs, shape (k, 5). - num_levels (int): Total level number. - - Returns: - Tensor: Level index (0-based) of each RoI, shape (k, ) - """ - scale = torch.sqrt( - (rois[:, 3] - rois[:, 1]) * (rois[:, 4] - rois[:, 2])) - target_lvls = torch.floor(torch.log2(scale / self.finest_scale + 1e-6)) - target_lvls = target_lvls.clamp(min=0, max=num_levels - 1).long() - return target_lvls - - @force_fp32(apply_to=('feats', ), out_fp16=True) - def forward(self, feats, rois, roi_scale_factor=None): - """Forward function.""" - out_size = self.roi_layers[0].output_size - num_levels = len(feats) - expand_dims = (-1, self.out_channels * out_size[0] * out_size[1]) - if torch.onnx.is_in_onnx_export(): - # Work around to export mask-rcnn to onnx - roi_feats = rois[:, :1].clone().detach() - roi_feats = roi_feats.expand(*expand_dims) - roi_feats = roi_feats.reshape(-1, self.out_channels, *out_size) - roi_feats = roi_feats * 0 - else: - roi_feats = feats[0].new_zeros( - rois.size(0), self.out_channels, *out_size) - # TODO: remove this when parrots supports - if torch.__version__ == 'parrots': - roi_feats.requires_grad = True - - if num_levels == 1: - if len(rois) == 0: - return roi_feats - return self.roi_layers[0](feats[0], rois) - - target_lvls = self.map_roi_levels(rois, num_levels) - - if roi_scale_factor is not None: - rois = self.roi_rescale(rois, roi_scale_factor) - - for i in range(num_levels): - mask = target_lvls == i - if torch.onnx.is_in_onnx_export(): - # To keep all roi_align nodes exported to onnx - # and skip nonzero op - mask = mask.float().unsqueeze(-1) - # select target level rois and reset the rest rois to zero. - rois_i = rois.clone().detach() - rois_i *= mask - mask_exp = mask.expand(*expand_dims).reshape(roi_feats.shape) - roi_feats_t = self.roi_layers[i](feats[i], rois_i) - roi_feats_t *= mask_exp - roi_feats += roi_feats_t - continue - inds = mask.nonzero(as_tuple=False).squeeze(1) - if inds.numel() > 0: - rois_ = rois[inds] - roi_feats_t = self.roi_layers[i](feats[i], rois_) - roi_feats[inds] = roi_feats_t - else: - # Sometimes some pyramid levels will not be used for RoI - # feature extraction and this will cause an incomplete - # computation graph in one GPU, which is different from those - # in other GPUs and will cause a hanging error. - # Therefore, we add it to ensure each feature pyramid is - # included in the computation graph to avoid runtime bugs. - roi_feats += sum( - x.view(-1)[0] - for x in self.parameters()) * 0. + feats[i].sum() * 0. - return roi_feats diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/scnet_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/scnet_roi_head.py deleted file mode 100644 index 705430a2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/scnet_roi_head.py +++ /dev/null @@ -1,605 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch -import torch.nn.functional as F - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, merge_aug_bboxes, - merge_aug_masks, multiclass_nms) -from ..builder import HEADS, build_head, build_roi_extractor -from ..utils.brick_wrappers import adaptive_avg_pool2d -from .cascade_roi_head import CascadeRoIHead - - -@HEADS.register_module() -class SCNetRoIHead(CascadeRoIHead): - """RoIHead for `SCNet `_. - - Args: - num_stages (int): number of cascade stages. - stage_loss_weights (list): loss weight of cascade stages. - semantic_roi_extractor (dict): config to init semantic roi extractor. - semantic_head (dict): config to init semantic head. - feat_relay_head (dict): config to init feature_relay_head. - glbctx_head (dict): config to init global context head. - """ - - def __init__(self, - num_stages, - stage_loss_weights, - semantic_roi_extractor=None, - semantic_head=None, - feat_relay_head=None, - glbctx_head=None, - **kwargs): - super(SCNetRoIHead, self).__init__(num_stages, stage_loss_weights, - **kwargs) - assert self.with_bbox and self.with_mask - assert not self.with_shared_head # shared head is not supported - - if semantic_head is not None: - self.semantic_roi_extractor = build_roi_extractor( - semantic_roi_extractor) - self.semantic_head = build_head(semantic_head) - - if feat_relay_head is not None: - self.feat_relay_head = build_head(feat_relay_head) - - if glbctx_head is not None: - self.glbctx_head = build_head(glbctx_head) - - def init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize ``mask_head``""" - if mask_roi_extractor is not None: - self.mask_roi_extractor = build_roi_extractor(mask_roi_extractor) - self.mask_head = build_head(mask_head) - - @property - def with_semantic(self): - """bool: whether the head has semantic head""" - return hasattr(self, - 'semantic_head') and self.semantic_head is not None - - @property - def with_feat_relay(self): - """bool: whether the head has feature relay head""" - return (hasattr(self, 'feat_relay_head') - and self.feat_relay_head is not None) - - @property - def with_glbctx(self): - """bool: whether the head has global context head""" - return hasattr(self, 'glbctx_head') and self.glbctx_head is not None - - def _fuse_glbctx(self, roi_feats, glbctx_feat, rois): - """Fuse global context feats with roi feats.""" - assert roi_feats.size(0) == rois.size(0) - img_inds = torch.unique(rois[:, 0].cpu(), sorted=True).long() - fused_feats = torch.zeros_like(roi_feats) - for img_id in img_inds: - inds = (rois[:, 0] == img_id.item()) - fused_feats[inds] = roi_feats[inds] + glbctx_feat[img_id] - return fused_feats - - def _slice_pos_feats(self, feats, sampling_results): - """Get features from pos rois.""" - num_rois = [res.bboxes.size(0) for res in sampling_results] - num_pos_rois = [res.pos_bboxes.size(0) for res in sampling_results] - inds = torch.zeros(sum(num_rois), dtype=torch.bool) - start = 0 - for i in range(len(num_rois)): - start = 0 if i == 0 else start + num_rois[i - 1] - stop = start + num_pos_rois[i] - inds[start:stop] = 1 - sliced_feats = feats[inds] - return sliced_feats - - def _bbox_forward(self, - stage, - x, - rois, - semantic_feat=None, - glbctx_feat=None): - """Box head forward function used in both training and testing.""" - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor( - x[:len(bbox_roi_extractor.featmap_strides)], rois) - if self.with_semantic and semantic_feat is not None: - bbox_semantic_feat = 
self.semantic_roi_extractor([semantic_feat], - rois) - if bbox_semantic_feat.shape[-2:] != bbox_feats.shape[-2:]: - bbox_semantic_feat = adaptive_avg_pool2d( - bbox_semantic_feat, bbox_feats.shape[-2:]) - bbox_feats += bbox_semantic_feat - if self.with_glbctx and glbctx_feat is not None: - bbox_feats = self._fuse_glbctx(bbox_feats, glbctx_feat, rois) - cls_score, bbox_pred, relayed_feat = bbox_head( - bbox_feats, return_shared_feat=True) - - bbox_results = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - relayed_feat=relayed_feat) - return bbox_results - - def _mask_forward(self, - x, - rois, - semantic_feat=None, - glbctx_feat=None, - relayed_feat=None): - """Mask head forward function used in both training and testing.""" - mask_feats = self.mask_roi_extractor( - x[:self.mask_roi_extractor.num_inputs], rois) - if self.with_semantic and semantic_feat is not None: - mask_semantic_feat = self.semantic_roi_extractor([semantic_feat], - rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats += mask_semantic_feat - if self.with_glbctx and glbctx_feat is not None: - mask_feats = self._fuse_glbctx(mask_feats, glbctx_feat, rois) - if self.with_feat_relay and relayed_feat is not None: - mask_feats = mask_feats + relayed_feat - mask_pred = self.mask_head(mask_feats) - mask_results = dict(mask_pred=mask_pred) - - return mask_results - - def _bbox_forward_train(self, - stage, - x, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - semantic_feat=None, - glbctx_feat=None): - """Run forward function and calculate loss for box head in training.""" - bbox_head = self.bbox_head[stage] - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward( - stage, - x, - rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - - bbox_targets = bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, rcnn_train_cfg) - loss_bbox = bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update( - loss_bbox=loss_bbox, rois=rois, bbox_targets=bbox_targets) - return bbox_results - - def _mask_forward_train(self, - x, - sampling_results, - gt_masks, - rcnn_train_cfg, - semantic_feat=None, - glbctx_feat=None, - relayed_feat=None): - """Run forward function and calculate loss for mask head in - training.""" - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward( - x, - pos_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - relayed_feat=relayed_feat) - - mask_targets = self.mask_head.get_targets(sampling_results, gt_masks, - rcnn_train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head.loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results = loss_mask - return mask_results - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - gt_semantic_seg=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposal_list (list[Tensors]): list of region proposals. 
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None, list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None, Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - gt_semantic_seg (None, list[Tensor]): semantic segmentation masks - used if the architecture supports semantic segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - losses = dict() - - # semantic segmentation branch - if self.with_semantic: - semantic_pred, semantic_feat = self.semantic_head(x) - loss_seg = self.semantic_head.loss(semantic_pred, gt_semantic_seg) - losses['loss_semantic_seg'] = loss_seg - else: - semantic_feat = None - - # global context branch - if self.with_glbctx: - mc_pred, glbctx_feat = self.glbctx_head(x) - loss_glbctx = self.glbctx_head.loss(mc_pred, gt_labels) - losses['loss_glbctx'] = loss_glbctx - else: - glbctx_feat = None - - for i in range(self.num_stages): - self.current_stage = i - rcnn_train_cfg = self.train_cfg[i] - lw = self.stage_loss_weights[i] - - # assign gts and sample proposals - sampling_results = [] - bbox_assigner = self.bbox_assigner[i] - bbox_sampler = self.bbox_sampler[i] - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - - for j in range(num_imgs): - assign_result = bbox_assigner.assign(proposal_list[j], - gt_bboxes[j], - gt_bboxes_ignore[j], - gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - bbox_results = \ - self._bbox_forward_train( - i, x, sampling_results, gt_bboxes, gt_labels, - rcnn_train_cfg, semantic_feat, glbctx_feat) - roi_labels = bbox_results['bbox_targets'][0] - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # refine boxes - if i < self.num_stages - 1: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - with torch.no_grad(): - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, img_metas) - - if self.with_feat_relay: - relayed_feat = self._slice_pos_feats(bbox_results['relayed_feat'], - sampling_results) - relayed_feat = self.feat_relay_head(relayed_feat) - else: - relayed_feat = None - - mask_results = self._mask_forward_train(x, sampling_results, gt_masks, - rcnn_train_cfg, semantic_feat, - glbctx_feat, relayed_feat) - mask_lw = sum(self.stage_loss_weights) - losses['loss_mask'] = mask_lw * mask_results['loss_mask'] - - return losses - - def simple_test(self, x, proposal_list, img_metas, rescale=False): - """Test without augmentation. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (batch_size, c, h, w). - proposal_list (list(Tensor)): Proposals from rpn head. - Each has shape (num_proposals, 5), last dimension - 5 represent (x1, y1, x2, y2, score). - img_metas (list[dict]): Meta information of images. - rescale (bool): Whether to rescale the results to - the original image. Default: True. 
- - Returns: - list[list[np.ndarray]] or list[tuple]: When no mask branch, - it is bbox results of each image and classes with type - `list[list[np.ndarray]]`. The outer list - corresponds to each image. The inner list - corresponds to each class. When the model has mask branch, - it contains bbox results and mask results. - The outer list corresponds to each image, and first element - of tuple is bbox results, second element is mask results. - """ - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - - if self.with_glbctx: - mc_pred, glbctx_feat = self.glbctx_head(x) - else: - glbctx_feat = None - - num_imgs = len(proposal_list) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # "ms" in variable names means multi-stage - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - rois = bbox2roi(proposal_list) - - if rois.shape[0] == 0: - # There is no proposal in the whole batch - bbox_results = [[ - np.zeros((0, 5), dtype=np.float32) - for _ in range(self.bbox_head[-1].num_classes) - ]] * num_imgs - - if self.with_mask: - mask_classes = self.mask_head.num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - results = list(zip(bbox_results, segm_results)) - else: - results = bbox_results - - return results - - for i in range(self.num_stages): - bbox_head = self.bbox_head[i] - bbox_results = self._bbox_forward( - i, - x, - rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple(len(p) for p in proposal_list) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - ms_scores.append(cls_score) - - if i < self.num_stages - 1: - refine_rois_list = [] - for j in range(num_imgs): - if rois[j].shape[0] > 0: - bbox_label = cls_score[j][:, :-1].argmax(dim=1) - refine_rois = bbox_head.regress_by_class( - rois[j], bbox_label, bbox_pred[j], img_metas[j]) - refine_rois_list.append(refine_rois) - rois = torch.cat(refine_rois_list) - - # average scores of each image by stages - cls_score = [ - sum([score[i] for score in ms_scores]) / float(len(ms_scores)) - for i in range(num_imgs) - ] - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(num_imgs): - det_bbox, det_label = self.bbox_head[-1].get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - det_bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head[-1].num_classes) - for i in range(num_imgs) - ] - - if self.with_mask: - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - mask_classes = self.mask_head.num_classes - det_segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - else: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i] - for i in range(num_imgs) - ] - mask_rois = bbox2roi(_bboxes) - - # get 
relay feature on mask_rois - bbox_results = self._bbox_forward( - -1, - x, - mask_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - relayed_feat = bbox_results['relayed_feat'] - relayed_feat = self.feat_relay_head(relayed_feat) - - mask_results = self._mask_forward( - x, - mask_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - relayed_feat=relayed_feat) - mask_pred = mask_results['mask_pred'] - - # split batch mask prediction back to each image - num_bbox_per_img = tuple(len(_bbox) for _bbox in _bboxes) - mask_preds = mask_pred.split(num_bbox_per_img, 0) - - # apply mask post-processing to each image individually - det_segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - det_segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_preds[i], _bboxes[i], det_labels[i], - self.test_cfg, ori_shapes[i], scale_factors[i], - rescale) - det_segm_results.append(segm_result) - - # return results - if self.with_mask: - return list(zip(det_bbox_results, det_segm_results)) - else: - return det_bbox_results - - def aug_test(self, img_feats, proposal_list, img_metas, rescale=False): - if self.with_semantic: - semantic_feats = [ - self.semantic_head(feat)[1] for feat in img_feats - ] - else: - semantic_feats = [None] * len(img_metas) - - if self.with_glbctx: - glbctx_feats = [self.glbctx_head(feat)[1] for feat in img_feats] - else: - glbctx_feats = [None] * len(img_metas) - - rcnn_test_cfg = self.test_cfg - aug_bboxes = [] - aug_scores = [] - for x, img_meta, semantic_feat, glbctx_feat in zip( - img_feats, img_metas, semantic_feats, glbctx_feats): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip) - # "ms" in variable names means multi-stage - ms_scores = [] - - rois = bbox2roi([proposals]) - - if rois.shape[0] == 0: - # There is no proposal in the single image - aug_bboxes.append(rois.new_zeros(0, 4)) - aug_scores.append(rois.new_zeros(0, 1)) - continue - - for i in range(self.num_stages): - bbox_head = self.bbox_head[i] - bbox_results = self._bbox_forward( - i, - x, - rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - ms_scores.append(bbox_results['cls_score']) - if i < self.num_stages - 1: - bbox_label = bbox_results['cls_score'].argmax(dim=1) - rois = bbox_head.regress_by_class( - rois, bbox_label, bbox_results['bbox_pred'], - img_meta[0]) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bboxes, scores = self.bbox_head[-1].get_bboxes( - rois, - cls_score, - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - - det_bbox_results = bbox2result(det_bboxes, det_labels, - self.bbox_head[-1].num_classes) - - if self.with_mask: - if det_bboxes.shape[0] == 0: - det_segm_results = [[] - for _ in range(self.mask_head.num_classes)] - else: - aug_masks = [] - for x, img_meta, semantic_feat, glbctx_feat in zip( - img_feats, img_metas, semantic_feats, glbctx_feats): - 
img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip) - mask_rois = bbox2roi([_bboxes]) - # get relay feature on mask_rois - bbox_results = self._bbox_forward( - -1, - x, - mask_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat) - relayed_feat = bbox_results['relayed_feat'] - relayed_feat = self.feat_relay_head(relayed_feat) - mask_results = self._mask_forward( - x, - mask_rois, - semantic_feat=semantic_feat, - glbctx_feat=glbctx_feat, - relayed_feat=relayed_feat) - mask_pred = mask_results['mask_pred'] - aug_masks.append(mask_pred.sigmoid().cpu().numpy()) - merged_masks = merge_aug_masks(aug_masks, img_metas, - self.test_cfg) - ori_shape = img_metas[0][0]['ori_shape'] - det_segm_results = self.mask_head.get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - rcnn_test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return [(det_bbox_results, det_segm_results)] - else: - return [det_bbox_results] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/shared_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/shared_heads/__init__.py deleted file mode 100644 index d56636ab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/shared_heads/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .res_layer import ResLayer - -__all__ = ['ResLayer'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/shared_heads/res_layer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/shared_heads/res_layer.py deleted file mode 100644 index bef00a05..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/shared_heads/res_layer.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch.nn as nn -from mmcv.runner import BaseModule, auto_fp16 - -from mmdet.models.backbones import ResNet -from mmdet.models.builder import SHARED_HEADS -from mmdet.models.utils import ResLayer as _ResLayer - - -@SHARED_HEADS.register_module() -class ResLayer(BaseModule): - - def __init__(self, - depth, - stage=3, - stride=2, - dilation=1, - style='pytorch', - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - with_cp=False, - dcn=None, - pretrained=None, - init_cfg=None): - super(ResLayer, self).__init__(init_cfg) - - self.norm_eval = norm_eval - self.norm_cfg = norm_cfg - self.stage = stage - self.fp16_enabled = False - block, stage_blocks = ResNet.arch_settings[depth] - stage_block = stage_blocks[stage] - planes = 64 * 2**stage - inplanes = 64 * 2**(stage - 1) * block.expansion - - res_layer = _ResLayer( - block, - inplanes, - planes, - stage_block, - stride=stride, - dilation=dilation, - style=style, - with_cp=with_cp, - norm_cfg=self.norm_cfg, - dcn=dcn) - self.add_module(f'layer{stage + 1}', res_layer) - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - @auto_fp16() - def forward(self, x): - res_layer = getattr(self, f'layer{self.stage + 1}') - out = res_layer(x) - return out - - def train(self, mode=True): - super(ResLayer, self).train(mode) - if self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/sparse_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/sparse_roi_head.py deleted file mode 100644 index 2613469e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/sparse_roi_head.py +++ /dev/null @@ -1,424 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core import bbox2result, bbox2roi, bbox_xyxy_to_cxcywh -from mmdet.core.bbox.samplers import PseudoSampler -from ..builder import HEADS -from .cascade_roi_head import CascadeRoIHead - - -@HEADS.register_module() -class SparseRoIHead(CascadeRoIHead): - r"""The RoIHead for `Sparse R-CNN: End-to-End Object Detection with - Learnable Proposals `_ - and `Instances as Queries `_ - - Args: - num_stages (int): Number of stage whole iterative process. - Defaults to 6. - stage_loss_weights (Tuple[float]): The loss - weight of each stage. By default all stages have - the same weight 1. - bbox_roi_extractor (dict): Config of box roi extractor. - mask_roi_extractor (dict): Config of mask roi extractor. - bbox_head (dict): Config of box head. - mask_head (dict): Config of mask head. - train_cfg (dict, optional): Configuration information in train stage. - Defaults to None. - test_cfg (dict, optional): Configuration information in test stage. - Defaults to None. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - - """ - - def __init__(self, - num_stages=6, - stage_loss_weights=(1, 1, 1, 1, 1, 1), - proposal_feature_channel=256, - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict( - type='RoIAlign', output_size=7, sampling_ratio=2), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_roi_extractor=None, - bbox_head=dict( - type='DIIHead', - num_classes=80, - num_fcs=2, - num_heads=8, - num_cls_fcs=1, - num_reg_fcs=3, - feedforward_channels=2048, - hidden_channels=256, - dropout=0.0, - roi_feat_size=7, - ffn_act_cfg=dict(type='ReLU', inplace=True)), - mask_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - assert bbox_roi_extractor is not None - assert bbox_head is not None - assert len(stage_loss_weights) == num_stages - self.num_stages = num_stages - self.stage_loss_weights = stage_loss_weights - self.proposal_feature_channel = proposal_feature_channel - super(SparseRoIHead, self).__init__( - num_stages, - stage_loss_weights, - bbox_roi_extractor=bbox_roi_extractor, - mask_roi_extractor=mask_roi_extractor, - bbox_head=bbox_head, - mask_head=mask_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - # train_cfg would be None when run the test.py - if train_cfg is not None: - for stage in range(num_stages): - assert isinstance(self.bbox_sampler[stage], PseudoSampler), \ - 'Sparse R-CNN and QueryInst only support `PseudoSampler`' - - def _bbox_forward(self, stage, x, rois, object_feats, img_metas): - """Box head forward function used in both training and testing. Returns - all regression, classification results and a intermediate feature. - - Args: - stage (int): The index of current stage in - iterative process. - x (List[Tensor]): List of FPN features - rois (Tensor): Rois in total batch. With shape (num_proposal, 5). - the last dimension 5 represents (img_index, x1, y1, x2, y2). - object_feats (Tensor): The object feature extracted from - the previous stage. - img_metas (dict): meta information of images. - - Returns: - dict[str, Tensor]: a dictionary of bbox head outputs, - Containing the following results: - - - cls_score (Tensor): The score of each class, has - shape (batch_size, num_proposals, num_classes) - when use focal loss or - (batch_size, num_proposals, num_classes+1) - otherwise. - - decode_bbox_pred (Tensor): The regression results - with shape (batch_size, num_proposal, 4). - The last dimension 4 represents - [tl_x, tl_y, br_x, br_y]. - - object_feats (Tensor): The object feature extracted - from current stage - - detach_cls_score_list (list[Tensor]): The detached - classification results, length is batch_size, and - each tensor has shape (num_proposal, num_classes). - - detach_proposal_list (list[tensor]): The detached - regression results, length is batch_size, and each - tensor has shape (num_proposal, 4). The last - dimension 4 represents [tl_x, tl_y, br_x, br_y]. 
- """ - num_imgs = len(img_metas) - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs], - rois) - cls_score, bbox_pred, object_feats, attn_feats = bbox_head( - bbox_feats, object_feats) - proposal_list = self.bbox_head[stage].refine_bboxes( - rois, - rois.new_zeros(len(rois)), # dummy arg - bbox_pred.view(-1, bbox_pred.size(-1)), - [rois.new_zeros(object_feats.size(1)) for _ in range(num_imgs)], - img_metas) - bbox_results = dict( - cls_score=cls_score, - decode_bbox_pred=torch.cat(proposal_list), - object_feats=object_feats, - attn_feats=attn_feats, - # detach then use it in label assign - detach_cls_score_list=[ - cls_score[i].detach() for i in range(num_imgs) - ], - detach_proposal_list=[item.detach() for item in proposal_list]) - - return bbox_results - - def _mask_forward(self, stage, x, rois, attn_feats): - """Mask head forward function used in both training and testing.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - mask_pred = mask_head(mask_feats, attn_feats) - - mask_results = dict(mask_pred=mask_pred) - return mask_results - - def _mask_forward_train(self, stage, x, attn_feats, sampling_results, - gt_masks, rcnn_train_cfg): - """Run forward function and calculate loss for mask head in - training.""" - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - attn_feats = torch.cat([ - feats[res.pos_inds] - for (feats, res) in zip(attn_feats, sampling_results) - ]) - mask_results = self._mask_forward(stage, x, pos_rois, attn_feats) - - mask_targets = self.mask_head[stage].get_targets( - sampling_results, gt_masks, rcnn_train_cfg) - - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - - loss_mask = self.mask_head[stage].loss(mask_results['mask_pred'], - mask_targets, pos_labels) - mask_results.update(loss_mask) - return mask_results - - def forward_train(self, - x, - proposal_boxes, - proposal_features, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - imgs_whwh=None, - gt_masks=None): - """Forward function in training stage. - - Args: - x (list[Tensor]): list of multi-level img features. - proposals (Tensor): Decoded proposal bboxes, has shape - (batch_size, num_proposals, 4) - proposal_features (Tensor): Expanded proposal - features, has shape - (batch_size, num_proposals, proposal_feature_channel) - img_metas (list[dict]): list of image info dict where - each dict has: 'img_shape', 'scale_factor', 'flip', - and may also contain 'filename', 'ori_shape', - 'pad_shape', and 'img_norm_cfg'. For details on the - values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - imgs_whwh (Tensor): Tensor with shape (batch_size, 4), - the dimension means - [img_width,img_height, img_width, img_height]. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components of all stage. 
- """ - - num_imgs = len(img_metas) - num_proposals = proposal_boxes.size(1) - imgs_whwh = imgs_whwh.repeat(1, num_proposals, 1) - all_stage_bbox_results = [] - proposal_list = [proposal_boxes[i] for i in range(len(proposal_boxes))] - object_feats = proposal_features - all_stage_loss = {} - for stage in range(self.num_stages): - rois = bbox2roi(proposal_list) - bbox_results = self._bbox_forward(stage, x, rois, object_feats, - img_metas) - all_stage_bbox_results.append(bbox_results) - if gt_bboxes_ignore is None: - # TODO support ignore - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - cls_pred_list = bbox_results['detach_cls_score_list'] - proposal_list = bbox_results['detach_proposal_list'] - for i in range(num_imgs): - normalize_bbox_ccwh = bbox_xyxy_to_cxcywh(proposal_list[i] / - imgs_whwh[i]) - assign_result = self.bbox_assigner[stage].assign( - normalize_bbox_ccwh, cls_pred_list[i], gt_bboxes[i], - gt_labels[i], img_metas[i]) - sampling_result = self.bbox_sampler[stage].sample( - assign_result, proposal_list[i], gt_bboxes[i]) - sampling_results.append(sampling_result) - bbox_targets = self.bbox_head[stage].get_targets( - sampling_results, gt_bboxes, gt_labels, self.train_cfg[stage], - True) - cls_score = bbox_results['cls_score'] - decode_bbox_pred = bbox_results['decode_bbox_pred'] - - single_stage_loss = self.bbox_head[stage].loss( - cls_score.view(-1, cls_score.size(-1)), - decode_bbox_pred.view(-1, 4), - *bbox_targets, - imgs_whwh=imgs_whwh) - - if self.with_mask: - mask_results = self._mask_forward_train( - stage, x, bbox_results['attn_feats'], sampling_results, - gt_masks, self.train_cfg[stage]) - single_stage_loss['loss_mask'] = mask_results['loss_mask'] - - for key, value in single_stage_loss.items(): - all_stage_loss[f'stage{stage}_{key}'] = value * \ - self.stage_loss_weights[stage] - object_feats = bbox_results['object_feats'] - - return all_stage_loss - - def simple_test(self, - x, - proposal_boxes, - proposal_features, - img_metas, - imgs_whwh, - rescale=False): - """Test without augmentation. - - Args: - x (list[Tensor]): list of multi-level img features. - proposal_boxes (Tensor): Decoded proposal bboxes, has shape - (batch_size, num_proposals, 4) - proposal_features (Tensor): Expanded proposal - features, has shape - (batch_size, num_proposals, proposal_feature_channel) - img_metas (dict): meta information of images. - imgs_whwh (Tensor): Tensor with shape (batch_size, 4), - the dimension means - [img_width,img_height, img_width, img_height]. - rescale (bool): If True, return boxes in original image - space. Defaults to False. - - Returns: - list[list[np.ndarray]] or list[tuple]: When no mask branch, - it is bbox results of each image and classes with type - `list[list[np.ndarray]]`. The outer list - corresponds to each image. The inner list - corresponds to each class. When the model has a mask branch, - it is a list[tuple] that contains bbox results and mask results. - The outer list corresponds to each image, and first element - of tuple is bbox results, second element is mask results. - """ - assert self.with_bbox, 'Bbox head must be implemented.' 
- # Decode initial proposals - num_imgs = len(img_metas) - proposal_list = [proposal_boxes[i] for i in range(num_imgs)] - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - object_feats = proposal_features - if all([proposal.shape[0] == 0 for proposal in proposal_list]): - # There is no proposal in the whole batch - bbox_results = [[ - np.zeros((0, 5), dtype=np.float32) - for i in range(self.bbox_head[-1].num_classes) - ]] * num_imgs - return bbox_results - - for stage in range(self.num_stages): - rois = bbox2roi(proposal_list) - bbox_results = self._bbox_forward(stage, x, rois, object_feats, - img_metas) - object_feats = bbox_results['object_feats'] - cls_score = bbox_results['cls_score'] - proposal_list = bbox_results['detach_proposal_list'] - - if self.with_mask: - rois = bbox2roi(proposal_list) - mask_results = self._mask_forward(stage, x, rois, - bbox_results['attn_feats']) - mask_results['mask_pred'] = mask_results['mask_pred'].reshape( - num_imgs, -1, *mask_results['mask_pred'].size()[1:]) - - num_classes = self.bbox_head[-1].num_classes - det_bboxes = [] - det_labels = [] - - if self.bbox_head[-1].loss_cls.use_sigmoid: - cls_score = cls_score.sigmoid() - else: - cls_score = cls_score.softmax(-1)[..., :-1] - - for img_id in range(num_imgs): - cls_score_per_img = cls_score[img_id] - scores_per_img, topk_indices = cls_score_per_img.flatten( - 0, 1).topk( - self.test_cfg.max_per_img, sorted=False) - labels_per_img = topk_indices % num_classes - bbox_pred_per_img = proposal_list[img_id][topk_indices // - num_classes] - if rescale: - scale_factor = img_metas[img_id]['scale_factor'] - bbox_pred_per_img /= bbox_pred_per_img.new_tensor(scale_factor) - det_bboxes.append( - torch.cat([bbox_pred_per_img, scores_per_img[:, None]], dim=1)) - det_labels.append(labels_per_img) - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], num_classes) - for i in range(num_imgs) - ] - - if self.with_mask: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - segm_results = [] - mask_pred = mask_results['mask_pred'] - for img_id in range(num_imgs): - mask_pred_per_img = mask_pred[img_id].flatten(0, - 1)[topk_indices] - mask_pred_per_img = mask_pred_per_img[:, None, ...].repeat( - 1, num_classes, 1, 1) - segm_result = self.mask_head[-1].get_seg_masks( - mask_pred_per_img, _bboxes[img_id], det_labels[img_id], - self.test_cfg, ori_shapes[img_id], scale_factors[img_id], - rescale) - segm_results.append(segm_result) - - if self.with_mask: - results = list(zip(bbox_results, segm_results)) - else: - results = bbox_results - - return results - - def aug_test(self, features, proposal_list, img_metas, rescale=False): - raise NotImplementedError( - 'Sparse R-CNN and QueryInst does not support `aug_test`') - - def forward_dummy(self, x, proposal_boxes, proposal_features, img_metas): - """Dummy forward function when do the flops computing.""" - all_stage_bbox_results = [] - proposal_list = [proposal_boxes[i] for i in range(len(proposal_boxes))] - object_feats = proposal_features - if self.with_bbox: - for stage in range(self.num_stages): - rois = bbox2roi(proposal_list) - bbox_results = self._bbox_forward(stage, x, rois, object_feats, - img_metas) - - 
all_stage_bbox_results.append((bbox_results, )) - proposal_list = bbox_results['detach_proposal_list'] - object_feats = bbox_results['object_feats'] - - if self.with_mask: - rois = bbox2roi(proposal_list) - mask_results = self._mask_forward( - stage, x, rois, bbox_results['attn_feats']) - all_stage_bbox_results[-1] += (mask_results, ) - return all_stage_bbox_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/standard_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/standard_roi_head.py deleted file mode 100644 index 3fdd82ad..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/standard_roi_head.py +++ /dev/null @@ -1,397 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler -from ..builder import HEADS, build_head, build_roi_extractor -from .base_roi_head import BaseRoIHead -from .test_mixins import BBoxTestMixin, MaskTestMixin - - -@HEADS.register_module() -class StandardRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin): - """Simplest base roi head including one bbox head and one mask head.""" - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - self.bbox_assigner = None - self.bbox_sampler = None - if self.train_cfg: - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - self.bbox_sampler = build_sampler( - self.train_cfg.sampler, context=self) - - def init_bbox_head(self, bbox_roi_extractor, bbox_head): - """Initialize ``bbox_head``""" - self.bbox_roi_extractor = build_roi_extractor(bbox_roi_extractor) - self.bbox_head = build_head(bbox_head) - - def init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize ``mask_head``""" - if mask_roi_extractor is not None: - self.mask_roi_extractor = build_roi_extractor(mask_roi_extractor) - self.share_roi_extractor = False - else: - self.share_roi_extractor = True - self.mask_roi_extractor = self.bbox_roi_extractor - self.mask_head = build_head(mask_head) - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - bbox_results = self._bbox_forward(x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask head - if self.with_mask: - mask_rois = rois[:100] - mask_results = self._mask_forward(x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - **kwargs): - """ - Args: - x (list[Tensor]): list of multi-level img features. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): list of region proposals. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - return losses - - def _bbox_forward(self, x, rois): - """Box head forward function used in both training and testing.""" - # TODO: a more flexible way to decide which feature maps to use - bbox_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - if self.with_shared_head: - bbox_feats = self.shared_head(bbox_feats) - cls_score, bbox_pred = self.bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - loss_bbox = self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for mask head in - training.""" - if not self.share_roi_extractor: - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward(x, pos_rois) - else: - pos_inds = [] - device = bbox_feats.device - for res in sampling_results: - pos_inds.append( - torch.ones( - res.pos_bboxes.shape[0], - device=device, - dtype=torch.uint8)) - pos_inds.append( - torch.zeros( - res.neg_bboxes.shape[0], - device=device, - dtype=torch.uint8)) - pos_inds = torch.cat(pos_inds) - - mask_results = self._mask_forward( - x, pos_inds=pos_inds, bbox_feats=bbox_feats) - - mask_targets = self.mask_head.get_targets(sampling_results, gt_masks, - self.train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head.loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results.update(loss_mask=loss_mask, mask_targets=mask_targets) - return mask_results - - def _mask_forward(self, x, rois=None, pos_inds=None, bbox_feats=None): - """Mask head forward function used in both training and testing.""" - assert ((rois is not None) ^ - (pos_inds is not None and bbox_feats is not None)) - if rois is not None: - mask_feats = self.mask_roi_extractor( - x[:self.mask_roi_extractor.num_inputs], rois) - if 
self.with_shared_head: - mask_feats = self.shared_head(mask_feats) - else: - assert bbox_feats is not None - mask_feats = bbox_feats[pos_inds] - - mask_pred = self.mask_head(mask_feats) - mask_results = dict(mask_pred=mask_pred, mask_feats=mask_feats) - return mask_results - - async def async_simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Async test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = await self.async_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - bbox_results = bbox2result(det_bboxes, det_labels, - self.bbox_head.num_classes) - if not self.with_mask: - return bbox_results - else: - segm_results = await self.async_test_mask( - x, - img_metas, - det_bboxes, - det_labels, - rescale=rescale, - mask_test_cfg=self.test_cfg.get('mask')) - return bbox_results, segm_results - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (batch_size, c, h, w). - proposal_list (list(Tensor)): Proposals from rpn head. - Each has shape (num_proposals, 5), last dimension - 5 represent (x1, y1, x2, y2, score). - img_metas (list[dict]): Meta information of images. - rescale (bool): Whether to rescale the results to - the original image. Default: True. - - Returns: - list[list[np.ndarray]] or list[tuple]: When no mask branch, - it is bbox results of each image and classes with type - `list[list[np.ndarray]]`. The outer list - corresponds to each image. The inner list - corresponds to each class. When the model has mask branch, - it contains bbox results and mask results. - The outer list corresponds to each image, and first element - of tuple is bbox results, second element is mask results. - """ - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head.num_classes) - for i in range(len(det_bboxes)) - ] - - if not self.with_mask: - return bbox_results - else: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return list(zip(bbox_results, segm_results)) - - def aug_test(self, x, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - det_bboxes, det_labels = self.aug_test_bboxes(x, img_metas, - proposal_list, - self.test_cfg) - if rescale: - _det_bboxes = det_bboxes - else: - _det_bboxes = det_bboxes.clone() - _det_bboxes[:, :4] *= det_bboxes.new_tensor( - img_metas[0][0]['scale_factor']) - bbox_results = bbox2result(_det_bboxes, det_labels, - self.bbox_head.num_classes) - - # det_bboxes always keep the original scale - if self.with_mask: - segm_results = self.aug_test_mask(x, img_metas, det_bboxes, - det_labels) - return [(bbox_results, segm_results)] - else: - return [bbox_results] - - def onnx_export(self, x, proposals, img_metas, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' 
- det_bboxes, det_labels = self.bbox_onnx_export( - x, img_metas, proposals, self.test_cfg, rescale=rescale) - - if not self.with_mask: - return det_bboxes, det_labels - else: - segm_results = self.mask_onnx_export( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return det_bboxes, det_labels, segm_results - - def mask_onnx_export(self, x, img_metas, det_bboxes, det_labels, **kwargs): - """Export mask branch to onnx which supports batch inference. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - det_bboxes (Tensor): Bboxes and corresponding scores. - has shape [N, num_bboxes, 5]. - det_labels (Tensor): class labels of - shape [N, num_bboxes]. - - Returns: - Tensor: The segmentation results of shape [N, num_bboxes, - image_height, image_width]. - """ - # image shapes of images in the batch - - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - raise RuntimeError('[ONNX Error] Can not record MaskHead ' - 'as it has not been executed this time') - batch_size = det_bboxes.size(0) - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. - det_bboxes = det_bboxes[..., :4] - batch_index = torch.arange( - det_bboxes.size(0), device=det_bboxes.device).float().view( - -1, 1, 1).expand(det_bboxes.size(0), det_bboxes.size(1), 1) - mask_rois = torch.cat([batch_index, det_bboxes], dim=-1) - mask_rois = mask_rois.view(-1, 5) - mask_results = self._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - max_shape = img_metas[0]['img_shape_for_onnx'] - num_det = det_bboxes.shape[1] - det_bboxes = det_bboxes.reshape(-1, 4) - det_labels = det_labels.reshape(-1) - segm_results = self.mask_head.onnx_export(mask_pred, det_bboxes, - det_labels, self.test_cfg, - max_shape) - segm_results = segm_results.reshape(batch_size, num_det, max_shape[0], - max_shape[1]) - return segm_results - - def bbox_onnx_export(self, x, img_metas, proposals, rcnn_test_cfg, - **kwargs): - """Export bbox branch to onnx which supports batch inference. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - proposals (Tensor): Region proposals with - batch dimension, has shape [N, num_bboxes, 5]. - rcnn_test_cfg (obj:`ConfigDict`): `test_cfg` of R-CNN. - - Returns: - tuple[Tensor, Tensor]: bboxes of shape [N, num_bboxes, 5] - and class labels of shape [N, num_bboxes]. 
- """ - # get origin input shape to support onnx dynamic input shape - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shapes = img_metas[0]['img_shape_for_onnx'] - - rois = proposals - - batch_index = torch.arange( - rois.size(0), device=rois.device).float().view(-1, 1, 1).expand( - rois.size(0), rois.size(1), 1) - - rois = torch.cat([batch_index, rois[..., :4]], dim=-1) - batch_size = rois.shape[0] - num_proposals_per_img = rois.shape[1] - - # Eliminate the batch dimension - rois = rois.view(-1, 5) - bbox_results = self._bbox_forward(x, rois) - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - - # Recover the batch dimension - rois = rois.reshape(batch_size, num_proposals_per_img, rois.size(-1)) - cls_score = cls_score.reshape(batch_size, num_proposals_per_img, - cls_score.size(-1)) - - bbox_pred = bbox_pred.reshape(batch_size, num_proposals_per_img, - bbox_pred.size(-1)) - det_bboxes, det_labels = self.bbox_head.onnx_export( - rois, cls_score, bbox_pred, img_shapes, cfg=rcnn_test_cfg) - - return det_bboxes, det_labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/test_mixins.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/test_mixins.py deleted file mode 100644 index ae6e79ae..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/test_mixins.py +++ /dev/null @@ -1,311 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import sys -import warnings - -import numpy as np -import torch - -from mmdet.core import (bbox2roi, bbox_mapping, merge_aug_bboxes, - merge_aug_masks, multiclass_nms) - -if sys.version_info >= (3, 7): - from mmdet.utils.contextmanagers import completed - - -class BBoxTestMixin: - - if sys.version_info >= (3, 7): - - async def async_test_bboxes(self, - x, - img_metas, - proposals, - rcnn_test_cfg, - rescale=False, - **kwargs): - """Asynchronized test for box head without augmentation.""" - rois = bbox2roi(proposals) - roi_feats = self.bbox_roi_extractor( - x[:len(self.bbox_roi_extractor.featmap_strides)], rois) - if self.with_shared_head: - roi_feats = self.shared_head(roi_feats) - sleep_interval = rcnn_test_cfg.get('async_sleep_interval', 0.017) - - async with completed( - __name__, 'bbox_head_forward', - sleep_interval=sleep_interval): - cls_score, bbox_pred = self.bbox_head(roi_feats) - - img_shape = img_metas[0]['img_shape'] - scale_factor = img_metas[0]['scale_factor'] - det_bboxes, det_labels = self.bbox_head.get_bboxes( - rois, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=rescale, - cfg=rcnn_test_cfg) - return det_bboxes, det_labels - - def simple_test_bboxes(self, - x, - img_metas, - proposals, - rcnn_test_cfg, - rescale=False): - """Test only det bboxes without augmentation. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - proposals (List[Tensor]): Region proposals. - rcnn_test_cfg (obj:`ConfigDict`): `test_cfg` of R-CNN. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Returns: - tuple[list[Tensor], list[Tensor]]: The first list contains - the boxes of the corresponding image in a batch, each - tensor has the shape (num_boxes, 5) and last dimension - 5 represent (tl_x, tl_y, br_x, br_y, score). Each Tensor - in the second list is the labels with shape (num_boxes, ). - The length of both lists should be equal to batch_size. 
- """ - - rois = bbox2roi(proposals) - - if rois.shape[0] == 0: - batch_size = len(proposals) - det_bbox = rois.new_zeros(0, 5) - det_label = rois.new_zeros((0, ), dtype=torch.long) - if rcnn_test_cfg is None: - det_bbox = det_bbox[:, :4] - det_label = rois.new_zeros( - (0, self.bbox_head.fc_cls.out_features)) - # There is no proposal in the whole batch - return [det_bbox] * batch_size, [det_label] * batch_size - - bbox_results = self._bbox_forward(x, rois) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple(len(p) for p in proposals) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - - # some detector with_reg is False, bbox_pred will be None - if bbox_pred is not None: - # TODO move this to a sabl_roi_head - # the bbox prediction of some detectors like SABL is not Tensor - if isinstance(bbox_pred, torch.Tensor): - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - else: - bbox_pred = self.bbox_head.bbox_pred_split( - bbox_pred, num_proposals_per_img) - else: - bbox_pred = (None, ) * len(proposals) - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(len(proposals)): - if rois[i].shape[0] == 0: - # There is no proposal in the single image - det_bbox = rois[i].new_zeros(0, 5) - det_label = rois[i].new_zeros((0, ), dtype=torch.long) - if rcnn_test_cfg is None: - det_bbox = det_bbox[:, :4] - det_label = rois[i].new_zeros( - (0, self.bbox_head.fc_cls.out_features)) - - else: - det_bbox, det_label = self.bbox_head.get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - return det_bboxes, det_labels - - def aug_test_bboxes(self, feats, img_metas, proposal_list, rcnn_test_cfg): - """Test det bboxes with test time augmentation.""" - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - # TODO more flexible - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - rois = bbox2roi([proposals]) - bbox_results = self._bbox_forward(x, rois) - bboxes, scores = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - if merged_bboxes.shape[0] == 0: - # There is no proposal in the single image - det_bboxes = merged_bboxes.new_zeros(0, 5) - det_labels = merged_bboxes.new_zeros((0, ), dtype=torch.long) - else: - det_bboxes, det_labels = multiclass_nms(merged_bboxes, - merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - return det_bboxes, det_labels - - -class MaskTestMixin: - - if sys.version_info >= (3, 7): - - async def async_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - 
rescale=False, - mask_test_cfg=None): - """Asynchronized test for mask head without augmentation.""" - # image shape of the first image in the batch (only one) - ori_shape = img_metas[0]['ori_shape'] - scale_factor = img_metas[0]['scale_factor'] - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - if rescale and not isinstance(scale_factor, - (float, torch.Tensor)): - scale_factor = det_bboxes.new_tensor(scale_factor) - _bboxes = ( - det_bboxes[:, :4] * - scale_factor if rescale else det_bboxes) - mask_rois = bbox2roi([_bboxes]) - mask_feats = self.mask_roi_extractor( - x[:len(self.mask_roi_extractor.featmap_strides)], - mask_rois) - - if self.with_shared_head: - mask_feats = self.shared_head(mask_feats) - if mask_test_cfg and mask_test_cfg.get('async_sleep_interval'): - sleep_interval = mask_test_cfg['async_sleep_interval'] - else: - sleep_interval = 0.035 - async with completed( - __name__, - 'mask_head_forward', - sleep_interval=sleep_interval): - mask_pred = self.mask_head(mask_feats) - segm_result = self.mask_head.get_seg_masks( - mask_pred, _bboxes, det_labels, self.test_cfg, ori_shape, - scale_factor, rescale) - return segm_result - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Simple test for mask head without augmentation.""" - # image shapes of images in the batch - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - if isinstance(scale_factors[0], float): - warnings.warn( - 'Scale factor in img_metas should be a ' - 'ndarray with shape (4,) ' - 'arrange as (factor_w, factor_h, factor_w, factor_h), ' - 'The scale_factor with float type has been deprecated. ') - scale_factors = np.array([scale_factors] * 4, dtype=np.float32) - - num_imgs = len(det_bboxes) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - segm_results = [[[] for _ in range(self.mask_head.num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
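The comment above is the key piece of bookkeeping in `simple_test_mask`: when `rescale=True` the detected boxes are in original-image coordinates, so they are multiplied by the per-image scale factor, stored as `(factor_w, factor_h, factor_w, factor_h)`, to bring them back to the resized test scale before mask RoIs are pooled. A toy sketch of just that step (hypothetical helper name):

```python
import numpy as np
import torch

def boxes_to_test_scale(det_bboxes, scale_factor):
    """Map (N, 4) boxes from original-image space to the resized test scale,
    with scale_factor given as (factor_w, factor_h, factor_w, factor_h)."""
    scale = torch.from_numpy(np.asarray(scale_factor, dtype=np.float32))
    return det_bboxes[:, :4] * scale.to(det_bboxes.device)

boxes = torch.tensor([[10.0, 20.0, 50.0, 80.0]])
print(boxes_to_test_scale(boxes, (0.5, 0.5, 0.5, 0.5)))
# tensor([[ 5., 10., 25., 40.]])
```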
- if rescale: - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - mask_rois = bbox2roi(_bboxes) - mask_results = self._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - # split batch mask prediction back to each image - num_mask_roi_per_img = [len(det_bbox) for det_bbox in det_bboxes] - mask_preds = mask_pred.split(num_mask_roi_per_img, 0) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_preds[i], _bboxes[i], det_labels[i], - self.test_cfg, ori_shapes[i], scale_factors[i], - rescale) - segm_results.append(segm_result) - return segm_results - - def aug_test_mask(self, feats, img_metas, det_bboxes, det_labels): - """Test for mask head with test time augmentation.""" - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - aug_masks = [] - for x, img_meta in zip(feats, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - mask_results = self._mask_forward(x, mask_rois) - # convert to numpy array to save memory - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - merged_masks = merge_aug_masks(aug_masks, img_metas, self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - scale_factor = det_bboxes.new_ones(4) - segm_result = self.mask_head.get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - self.test_cfg, - ori_shape, - scale_factor=scale_factor, - rescale=False) - return segm_result diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/trident_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/trident_roi_head.py deleted file mode 100644 index 09758792..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/roi_heads/trident_roi_head.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import batched_nms - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, merge_aug_bboxes, - multiclass_nms) -from mmdet.models.roi_heads.standard_roi_head import StandardRoIHead -from ..builder import HEADS - - -@HEADS.register_module() -class TridentRoIHead(StandardRoIHead): - """Trident roi head. - - Args: - num_branch (int): Number of branches in TridentNet. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. 
- """ - - def __init__(self, num_branch, test_branch_idx, **kwargs): - self.num_branch = num_branch - self.test_branch_idx = test_branch_idx - super(TridentRoIHead, self).__init__(**kwargs) - - def merge_trident_bboxes(self, trident_det_bboxes, trident_det_labels): - """Merge bbox predictions of each branch.""" - if trident_det_bboxes.numel() == 0: - det_bboxes = trident_det_bboxes.new_zeros((0, 5)) - det_labels = trident_det_bboxes.new_zeros((0, ), dtype=torch.long) - else: - nms_bboxes = trident_det_bboxes[:, :4] - nms_scores = trident_det_bboxes[:, 4].contiguous() - nms_inds = trident_det_labels - nms_cfg = self.test_cfg['nms'] - det_bboxes, keep = batched_nms(nms_bboxes, nms_scores, nms_inds, - nms_cfg) - det_labels = trident_det_labels[keep] - if self.test_cfg['max_per_img'] > 0: - det_labels = det_labels[:self.test_cfg['max_per_img']] - det_bboxes = det_bboxes[:self.test_cfg['max_per_img']] - - return det_bboxes, det_labels - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation as follows: - - 1. Compute prediction bbox and label per branch. - 2. Merge predictions of each branch according to scores of - bboxes, i.e., bboxes with higher score are kept to give - top-k prediction. - """ - assert self.with_bbox, 'Bbox head must be implemented.' - det_bboxes_list, det_labels_list = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - num_branch = self.num_branch if self.test_branch_idx == -1 else 1 - for _ in range(len(det_bboxes_list)): - if det_bboxes_list[_].shape[0] == 0: - det_bboxes_list[_] = det_bboxes_list[_].new_empty((0, 5)) - det_bboxes, det_labels = [], [] - for i in range(len(img_metas) // num_branch): - det_result = self.merge_trident_bboxes( - torch.cat(det_bboxes_list[i * num_branch:(i + 1) * - num_branch]), - torch.cat(det_labels_list[i * num_branch:(i + 1) * - num_branch])) - det_bboxes.append(det_result[0]) - det_labels.append(det_result[1]) - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head.num_classes) - for i in range(len(det_bboxes)) - ] - return bbox_results - - def aug_test_bboxes(self, feats, img_metas, proposal_list, rcnn_test_cfg): - """Test det bboxes with test time augmentation.""" - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - trident_bboxes, trident_scores = [], [] - for branch_idx in range(len(proposal_list)): - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - rois = bbox2roi([proposals]) - bbox_results = self._bbox_forward(x, rois) - bboxes, scores = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - trident_bboxes.append(bboxes) - trident_scores.append(scores) - - aug_bboxes.append(torch.cat(trident_bboxes, 0)) - aug_scores.append(torch.cat(trident_scores, 0)) - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - return det_bboxes, det_labels diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/__init__.py deleted file mode 100644 index b489a905..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .panoptic_fpn_head import PanopticFPNHead # noqa: F401,F403 -from .panoptic_fusion_heads import * # noqa: F401,F403 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/base_semantic_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/base_semantic_head.py deleted file mode 100644 index 2b6ca145..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/base_semantic_head.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -import torch.nn.functional as F -from mmcv.runner import BaseModule, force_fp32 - -from ..builder import build_loss -from ..utils import interpolate_as - - -class BaseSemanticHead(BaseModule, metaclass=ABCMeta): - """Base module of Semantic Head. - - Args: - num_classes (int): the number of classes. - init_cfg (dict): the initialization config. - loss_seg (dict): the loss of the semantic head. - """ - - def __init__(self, - num_classes, - init_cfg=None, - loss_seg=dict( - type='CrossEntropyLoss', - ignore_index=255, - loss_weight=1.0)): - super(BaseSemanticHead, self).__init__(init_cfg) - self.loss_seg = build_loss(loss_seg) - self.num_classes = num_classes - - @force_fp32(apply_to=('seg_preds', )) - def loss(self, seg_preds, gt_semantic_seg): - """Get the loss of semantic head. - - Args: - seg_preds (Tensor): The input logits with the shape (N, C, H, W). - gt_semantic_seg: The ground truth of semantic segmentation with - the shape (N, H, W). - label_bias: The starting number of the semantic label. - Default: 1. - - Returns: - dict: the loss of semantic head. - """ - if seg_preds.shape[-2:] != gt_semantic_seg.shape[-2:]: - seg_preds = interpolate_as(seg_preds, gt_semantic_seg) - seg_preds = seg_preds.permute((0, 2, 3, 1)) - - loss_seg = self.loss_seg( - seg_preds.reshape(-1, self.num_classes), # => [NxHxW, C] - gt_semantic_seg.reshape(-1).long()) - return dict(loss_seg=loss_seg) - - @abstractmethod - def forward(self, x): - """Placeholder of forward function. - - Returns: - dict[str, Tensor]: A dictionary, including features - and predicted scores. Required keys: 'seg_preds' - and 'feats'. 
- """ - pass - - def forward_train(self, x, gt_semantic_seg): - output = self.forward(x) - seg_preds = output['seg_preds'] - return self.loss(seg_preds, gt_semantic_seg) - - def simple_test(self, x, img_metas, rescale=False): - output = self.forward(x) - seg_preds = output['seg_preds'] - seg_preds = F.interpolate( - seg_preds, - size=img_metas[0]['pad_shape'][:2], - mode='bilinear', - align_corners=False) - - if rescale: - h, w, _ = img_metas[0]['img_shape'] - seg_preds = seg_preds[:, :, :h, :w] - - h, w, _ = img_metas[0]['ori_shape'] - seg_preds = F.interpolate( - seg_preds, size=(h, w), mode='bilinear', align_corners=False) - return seg_preds diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fpn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fpn_head.py deleted file mode 100644 index f1df2976..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fpn_head.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.runner import ModuleList - -from ..builder import HEADS -from ..utils import ConvUpsample -from .base_semantic_head import BaseSemanticHead - - -@HEADS.register_module() -class PanopticFPNHead(BaseSemanticHead): - """PanopticFPNHead used in Panoptic FPN. - - In this head, the number of output channels is ``num_stuff_classes - + 1``, including all stuff classes and one thing class. The stuff - classes will be reset from ``0`` to ``num_stuff_classes - 1``, the - thing classes will be merged to ``num_stuff_classes``-th channel. - - Arg: - num_things_classes (int): Number of thing classes. Default: 80. - num_stuff_classes (int): Number of stuff classes. Default: 53. - num_classes (int): Number of classes, including all stuff - classes and one thing class. This argument is deprecated, - please use ``num_things_classes`` and ``num_stuff_classes``. - The module will automatically infer the num_classes by - ``num_stuff_classes + 1``. - in_channels (int): Number of channels in the input feature - map. - inner_channels (int): Number of channels in inner features. - start_level (int): The start level of the input features - used in PanopticFPN. - end_level (int): The end level of the used features, the - ``end_level``-th layer will not be used. - fg_range (tuple): Range of the foreground classes. It starts - from ``0`` to ``num_things_classes-1``. Deprecated, please use - ``num_things_classes`` directly. - bg_range (tuple): Range of the background classes. It starts - from ``num_things_classes`` to ``num_things_classes + - num_stuff_classes - 1``. Deprecated, please use - ``num_stuff_classes`` and ``num_things_classes`` directly. - conv_cfg (dict): Dictionary to construct and config - conv layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Use ``GN`` by default. - init_cfg (dict or list[dict], optional): Initialization config dict. - loss_seg (dict): the loss of the semantic head. 
- """ - - def __init__(self, - num_things_classes=80, - num_stuff_classes=53, - num_classes=None, - in_channels=256, - inner_channels=128, - start_level=0, - end_level=4, - fg_range=None, - bg_range=None, - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - init_cfg=None, - loss_seg=dict( - type='CrossEntropyLoss', ignore_index=-1, - loss_weight=1.0)): - if num_classes is not None: - warnings.warn( - '`num_classes` is deprecated now, please set ' - '`num_stuff_classes` directly, the `num_classes` will be ' - 'set to `num_stuff_classes + 1`') - # num_classes = num_stuff_classes + 1 for PanopticFPN. - assert num_classes == num_stuff_classes + 1 - super(PanopticFPNHead, self).__init__(num_stuff_classes + 1, init_cfg, - loss_seg) - self.num_things_classes = num_things_classes - self.num_stuff_classes = num_stuff_classes - if fg_range is not None and bg_range is not None: - self.fg_range = fg_range - self.bg_range = bg_range - self.num_things_classes = fg_range[1] - fg_range[0] + 1 - self.num_stuff_classes = bg_range[1] - bg_range[0] + 1 - warnings.warn( - '`fg_range` and `bg_range` are deprecated now, ' - f'please use `num_things_classes`={self.num_things_classes} ' - f'and `num_stuff_classes`={self.num_stuff_classes} instead.') - - # Used feature layers are [start_level, end_level) - self.start_level = start_level - self.end_level = end_level - self.num_stages = end_level - start_level - self.inner_channels = inner_channels - - self.conv_upsample_layers = ModuleList() - for i in range(start_level, end_level): - self.conv_upsample_layers.append( - ConvUpsample( - in_channels, - inner_channels, - num_layers=i if i > 0 else 1, - num_upsample=i if i > 0 else 0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - )) - self.conv_logits = nn.Conv2d(inner_channels, self.num_classes, 1) - - def _set_things_to_void(self, gt_semantic_seg): - """Merge thing classes to one class. - - In PanopticFPN, the background labels will be reset from `0` to - `self.num_stuff_classes-1`, the foreground labels will be merged to - `self.num_stuff_classes`-th channel. - """ - gt_semantic_seg = gt_semantic_seg.int() - fg_mask = gt_semantic_seg < self.num_things_classes - bg_mask = (gt_semantic_seg >= self.num_things_classes) * ( - gt_semantic_seg < self.num_things_classes + self.num_stuff_classes) - - new_gt_seg = torch.clone(gt_semantic_seg) - new_gt_seg = torch.where(bg_mask, - gt_semantic_seg - self.num_things_classes, - new_gt_seg) - new_gt_seg = torch.where(fg_mask, - fg_mask.int() * self.num_stuff_classes, - new_gt_seg) - return new_gt_seg - - def loss(self, seg_preds, gt_semantic_seg): - """The loss of PanopticFPN head. - - Things classes will be merged to one class in PanopticFPN. - """ - gt_semantic_seg = self._set_things_to_void(gt_semantic_seg) - return super().loss(seg_preds, gt_semantic_seg) - - def init_weights(self): - super().init_weights() - nn.init.normal_(self.conv_logits.weight.data, 0, 0.01) - self.conv_logits.bias.data.zero_() - - def forward(self, x): - # the number of subnets must be not more than - # the length of features. 
- assert self.num_stages <= len(x) - - feats = [] - for i, layer in enumerate(self.conv_upsample_layers): - f = layer(x[self.start_level + i]) - feats.append(f) - - feats = torch.sum(torch.stack(feats, dim=0), dim=0) - seg_preds = self.conv_logits(feats) - out = dict(seg_preds=seg_preds, feats=feats) - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py deleted file mode 100644 index 41625a61..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_panoptic_fusion_head import \ - BasePanopticFusionHead # noqa: F401,F403 -from .heuristic_fusion_head import HeuristicFusionHead # noqa: F401,F403 -from .maskformer_fusion_head import MaskFormerFusionHead # noqa: F401,F403 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/base_panoptic_fusion_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/base_panoptic_fusion_head.py deleted file mode 100644 index a38ac1c6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/base_panoptic_fusion_head.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from mmcv.runner import BaseModule - -from ...builder import build_loss - - -class BasePanopticFusionHead(BaseModule, metaclass=ABCMeta): - """Base class for panoptic heads.""" - - def __init__(self, - num_things_classes=80, - num_stuff_classes=53, - test_cfg=None, - loss_panoptic=None, - init_cfg=None, - **kwargs): - super(BasePanopticFusionHead, self).__init__(init_cfg) - self.num_things_classes = num_things_classes - self.num_stuff_classes = num_stuff_classes - self.num_classes = num_things_classes + num_stuff_classes - self.test_cfg = test_cfg - - if loss_panoptic: - self.loss_panoptic = build_loss(loss_panoptic) - else: - self.loss_panoptic = None - - @property - def with_loss(self): - """bool: whether the panoptic head contains loss function.""" - return self.loss_panoptic is not None - - @abstractmethod - def forward_train(self, gt_masks=None, gt_semantic_seg=None, **kwargs): - """Forward function during training.""" - - @abstractmethod - def simple_test(self, - img_metas, - det_labels, - mask_preds, - seg_preds, - det_bboxes, - cfg=None, - **kwargs): - """Test without augmentation.""" diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/heuristic_fusion_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/heuristic_fusion_head.py deleted file mode 100644 index 06c1de2b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/heuristic_fusion_head.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from mmdet.core.evaluation.panoptic_utils import INSTANCE_OFFSET -from mmdet.models.builder import HEADS -from .base_panoptic_fusion_head import BasePanopticFusionHead - - -@HEADS.register_module() -class HeuristicFusionHead(BasePanopticFusionHead): - """Fusion Head with Heuristic method.""" - - def __init__(self, - num_things_classes=80, - num_stuff_classes=53, - test_cfg=None, - init_cfg=None, - **kwargs): - super(HeuristicFusionHead, - self).__init__(num_things_classes, num_stuff_classes, test_cfg, - None, init_cfg, **kwargs) - - def forward_train(self, gt_masks=None, gt_semantic_seg=None, **kwargs): - """HeuristicFusionHead has no training loss.""" - return dict() - - def _lay_masks(self, bboxes, labels, masks, overlap_thr=0.5): - """Lay instance masks to a result map. - - Args: - bboxes: The bboxes results, (K, 4). - labels: The labels of bboxes, (K, ). - masks: The instance masks, (K, H, W). - overlap_thr: Threshold to determine whether two masks overlap. - default: 0.5. - - Returns: - Tensor: The result map, (H, W). - """ - num_insts = bboxes.shape[0] - id_map = torch.zeros( - masks.shape[-2:], device=bboxes.device, dtype=torch.long) - if num_insts == 0: - return id_map, labels - - scores, bboxes = bboxes[:, -1], bboxes[:, :4] - - # Sort by score to use heuristic fusion - order = torch.argsort(-scores) - bboxes = bboxes[order] - labels = labels[order] - segm_masks = masks[order] - - instance_id = 1 - left_labels = [] - for idx in range(bboxes.shape[0]): - _cls = labels[idx] - _mask = segm_masks[idx] - instance_id_map = torch.ones_like( - _mask, dtype=torch.long) * instance_id - area = _mask.sum() - if area == 0: - continue - - pasted = id_map > 0 - intersect = (_mask * pasted).sum() - if (intersect / (area + 1e-5)) > overlap_thr: - continue - - _part = _mask * (~pasted) - id_map = torch.where(_part, instance_id_map, id_map) - left_labels.append(_cls) - instance_id += 1 - - if len(left_labels) > 0: - instance_labels = torch.stack(left_labels) - else: - instance_labels = bboxes.new_zeros((0, ), dtype=torch.long) - assert instance_id == (len(instance_labels) + 1) - return id_map, instance_labels - - def simple_test(self, det_bboxes, det_labels, mask_preds, seg_preds, - **kwargs): - """Fuse the results of instance and semantic segmentations. - - Args: - det_bboxes: The bboxes results, (K, 4). - det_labels: The labels of bboxes, (K,). - mask_preds: The masks results, (K, H, W). - seg_preds: The semantic segmentation results, - (K, num_stuff + 1, H, W). - - Returns: - Tensor : The panoptic segmentation result, (H, W). 
- """ - mask_preds = mask_preds >= self.test_cfg.mask_thr_binary - id_map, labels = self._lay_masks(det_bboxes, det_labels, mask_preds, - self.test_cfg.mask_overlap) - - seg_results = seg_preds.argmax(dim=0) - seg_results = seg_results + self.num_things_classes - - pan_results = seg_results - instance_id = 1 - for idx in range(det_labels.shape[0]): - _mask = id_map == (idx + 1) - if _mask.sum() == 0: - continue - _cls = labels[idx] - # simply trust detection - segment_id = _cls + instance_id * INSTANCE_OFFSET - pan_results[_mask] = segment_id - instance_id += 1 - - ids, counts = torch.unique( - pan_results % INSTANCE_OFFSET, return_counts=True) - stuff_ids = ids[ids >= self.num_things_classes] - stuff_counts = counts[ids >= self.num_things_classes] - ignore_stuff_ids = stuff_ids[ - stuff_counts < self.test_cfg.stuff_area_limit] - - assert pan_results.ndim == 2 - pan_results[(pan_results.unsqueeze(2) == ignore_stuff_ids.reshape( - 1, 1, -1)).any(dim=2)] = self.num_classes - - return pan_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/maskformer_fusion_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/maskformer_fusion_head.py deleted file mode 100644 index 5b59ce4d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/seg_heads/panoptic_fusion_heads/maskformer_fusion_head.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F - -from mmdet.core.evaluation.panoptic_utils import INSTANCE_OFFSET -from mmdet.core.mask import mask2bbox -from mmdet.models.builder import HEADS -from .base_panoptic_fusion_head import BasePanopticFusionHead - - -@HEADS.register_module() -class MaskFormerFusionHead(BasePanopticFusionHead): - - def __init__(self, - num_things_classes=80, - num_stuff_classes=53, - test_cfg=None, - loss_panoptic=None, - init_cfg=None, - **kwargs): - super().__init__(num_things_classes, num_stuff_classes, test_cfg, - loss_panoptic, init_cfg, **kwargs) - - def forward_train(self, **kwargs): - """MaskFormerFusionHead has no training loss.""" - return dict() - - def panoptic_postprocess(self, mask_cls, mask_pred): - """Panoptic segmengation inference. - - Args: - mask_cls (Tensor): Classfication outputs of shape - (num_queries, cls_out_channels) for a image. - Note `cls_out_channels` should includes - background. - mask_pred (Tensor): Mask outputs of shape - (num_queries, h, w) for a image. - - Returns: - Tensor: Panoptic segment result of shape \ - (h, w), each element in Tensor means: \ - ``segment_id = _cls + instance_id * INSTANCE_OFFSET``. 
- """ - object_mask_thr = self.test_cfg.get('object_mask_thr', 0.8) - iou_thr = self.test_cfg.get('iou_thr', 0.8) - filter_low_score = self.test_cfg.get('filter_low_score', False) - - scores, labels = F.softmax(mask_cls, dim=-1).max(-1) - mask_pred = mask_pred.sigmoid() - - keep = labels.ne(self.num_classes) & (scores > object_mask_thr) - cur_scores = scores[keep] - cur_classes = labels[keep] - cur_masks = mask_pred[keep] - - cur_prob_masks = cur_scores.view(-1, 1, 1) * cur_masks - - h, w = cur_masks.shape[-2:] - panoptic_seg = torch.full((h, w), - self.num_classes, - dtype=torch.int32, - device=cur_masks.device) - if cur_masks.shape[0] == 0: - # We didn't detect any mask :( - pass - else: - cur_mask_ids = cur_prob_masks.argmax(0) - instance_id = 1 - for k in range(cur_classes.shape[0]): - pred_class = int(cur_classes[k].item()) - isthing = pred_class < self.num_things_classes - mask = cur_mask_ids == k - mask_area = mask.sum().item() - original_area = (cur_masks[k] >= 0.5).sum().item() - - if filter_low_score: - mask = mask & (cur_masks[k] >= 0.5) - - if mask_area > 0 and original_area > 0: - if mask_area / original_area < iou_thr: - continue - - if not isthing: - # different stuff regions of same class will be - # merged here, and stuff share the instance_id 0. - panoptic_seg[mask] = pred_class - else: - panoptic_seg[mask] = ( - pred_class + instance_id * INSTANCE_OFFSET) - instance_id += 1 - - return panoptic_seg - - def semantic_postprocess(self, mask_cls, mask_pred): - """Semantic segmengation postprocess. - - Args: - mask_cls (Tensor): Classfication outputs of shape - (num_queries, cls_out_channels) for a image. - Note `cls_out_channels` should includes - background. - mask_pred (Tensor): Mask outputs of shape - (num_queries, h, w) for a image. - - Returns: - Tensor: Semantic segment result of shape \ - (cls_out_channels, h, w). - """ - # TODO add semantic segmentation result - raise NotImplementedError - - def instance_postprocess(self, mask_cls, mask_pred): - """Instance segmengation postprocess. - - Args: - mask_cls (Tensor): Classfication outputs of shape - (num_queries, cls_out_channels) for a image. - Note `cls_out_channels` should includes - background. - mask_pred (Tensor): Mask outputs of shape - (num_queries, h, w) for a image. - - Returns: - tuple[Tensor]: Instance segmentation results. - - - labels_per_image (Tensor): Predicted labels,\ - shape (n, ). - - bboxes (Tensor): Bboxes and scores with shape (n, 5) of \ - positive region in binary mask, the last column is scores. - - mask_pred_binary (Tensor): Instance masks of \ - shape (n, h, w). 
- """ - max_per_image = self.test_cfg.get('max_per_image', 100) - num_queries = mask_cls.shape[0] - # shape (num_queries, num_class) - scores = F.softmax(mask_cls, dim=-1)[:, :-1] - # shape (num_queries * num_class, ) - labels = torch.arange(self.num_classes, device=mask_cls.device).\ - unsqueeze(0).repeat(num_queries, 1).flatten(0, 1) - scores_per_image, top_indices = scores.flatten(0, 1).topk( - max_per_image, sorted=False) - labels_per_image = labels[top_indices] - - query_indices = top_indices // self.num_classes - mask_pred = mask_pred[query_indices] - - # extract things - is_thing = labels_per_image < self.num_things_classes - scores_per_image = scores_per_image[is_thing] - labels_per_image = labels_per_image[is_thing] - mask_pred = mask_pred[is_thing] - - mask_pred_binary = (mask_pred > 0).float() - mask_scores_per_image = (mask_pred.sigmoid() * - mask_pred_binary).flatten(1).sum(1) / ( - mask_pred_binary.flatten(1).sum(1) + 1e-6) - det_scores = scores_per_image * mask_scores_per_image - mask_pred_binary = mask_pred_binary.bool() - bboxes = mask2bbox(mask_pred_binary) - bboxes = torch.cat([bboxes, det_scores[:, None]], dim=-1) - - return labels_per_image, bboxes, mask_pred_binary - - def simple_test(self, - mask_cls_results, - mask_pred_results, - img_metas, - rescale=False, - **kwargs): - """Test segment without test-time aumengtation. - - Only the output of last decoder layers was used. - - Args: - mask_cls_results (Tensor): Mask classification logits, - shape (batch_size, num_queries, cls_out_channels). - Note `cls_out_channels` should includes background. - mask_pred_results (Tensor): Mask logits, shape - (batch_size, num_queries, h, w). - img_metas (list[dict]): List of image information. - rescale (bool, optional): If True, return boxes in - original image space. Default False. - - Returns: - list[dict[str, Tensor | tuple[Tensor]]]: Semantic segmentation \ - results and panoptic segmentation results for each \ - image. - - .. code-block:: none - - [ - { - 'pan_results': Tensor, # shape = [h, w] - 'ins_results': tuple[Tensor], - # semantic segmentation results are not supported yet - 'sem_results': Tensor - }, - ... - ] - """ - panoptic_on = self.test_cfg.get('panoptic_on', True) - semantic_on = self.test_cfg.get('semantic_on', False) - instance_on = self.test_cfg.get('instance_on', False) - assert not semantic_on, 'segmantic segmentation '\ - 'results are not supported yet.' 
- - results = [] - for mask_cls_result, mask_pred_result, meta in zip( - mask_cls_results, mask_pred_results, img_metas): - # remove padding - img_height, img_width = meta['img_shape'][:2] - mask_pred_result = mask_pred_result[:, :img_height, :img_width] - - if rescale: - # return result in original resolution - ori_height, ori_width = meta['ori_shape'][:2] - mask_pred_result = F.interpolate( - mask_pred_result[:, None], - size=(ori_height, ori_width), - mode='bilinear', - align_corners=False)[:, 0] - - result = dict() - if panoptic_on: - pan_results = self.panoptic_postprocess( - mask_cls_result, mask_pred_result) - result['pan_results'] = pan_results - - if instance_on: - ins_results = self.instance_postprocess( - mask_cls_result, mask_pred_result) - result['ins_results'] = ins_results - - if semantic_on: - sem_results = self.semantic_postprocess( - mask_cls_result, mask_pred_result) - result['sem_results'] = sem_results - - results.append(result) - - return results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/__init__.py deleted file mode 100644 index e74ba89e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .brick_wrappers import AdaptiveAvgPool2d, adaptive_avg_pool2d -from .builder import build_linear_layer, build_transformer -from .ckpt_convert import pvt_convert -from .conv_upsample import ConvUpsample -from .csp_layer import CSPLayer -from .gaussian_target import gaussian_radius, gen_gaussian_target -from .inverted_residual import InvertedResidual -from .make_divisible import make_divisible -from .misc import interpolate_as, sigmoid_geometric_mean -from .normed_predictor import NormedConv2d, NormedLinear -from .panoptic_gt_processing import preprocess_panoptic_gt -from .point_sample import (get_uncertain_point_coords_with_randomness, - get_uncertainty) -from .positional_encoding import (LearnedPositionalEncoding, - SinePositionalEncoding) -from .res_layer import ResLayer, SimplifiedBasicBlock -from .se_layer import DyReLU, SELayer -from .transformer import (DetrTransformerDecoder, DetrTransformerDecoderLayer, - DynamicConv, PatchEmbed, Transformer, nchw_to_nlc, - nlc_to_nchw) - -__all__ = [ - 'ResLayer', 'gaussian_radius', 'gen_gaussian_target', - 'DetrTransformerDecoderLayer', 'DetrTransformerDecoder', 'Transformer', - 'build_transformer', 'build_linear_layer', 'SinePositionalEncoding', - 'LearnedPositionalEncoding', 'DynamicConv', 'SimplifiedBasicBlock', - 'NormedLinear', 'NormedConv2d', 'make_divisible', 'InvertedResidual', - 'SELayer', 'interpolate_as', 'ConvUpsample', 'CSPLayer', - 'adaptive_avg_pool2d', 'AdaptiveAvgPool2d', 'PatchEmbed', 'nchw_to_nlc', - 'nlc_to_nchw', 'pvt_convert', 'sigmoid_geometric_mean', - 'preprocess_panoptic_gt', 'DyReLU', - 'get_uncertain_point_coords_with_randomness', 'get_uncertainty' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/brick_wrappers.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/brick_wrappers.py deleted file mode 100644 index fa0279ab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/brick_wrappers.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn.bricks.wrappers import NewEmptyTensorOp, obsolete_torch_version - -if torch.__version__ == 'parrots': - TORCH_VERSION = torch.__version__ -else: - # torch.__version__ could be 1.3.1+cu92, we only need the first two - # for comparison - TORCH_VERSION = tuple(int(x) for x in torch.__version__.split('.')[:2]) - - -def adaptive_avg_pool2d(input, output_size): - """Handle empty batch dimension to adaptive_avg_pool2d. - - Args: - input (tensor): 4D tensor. - output_size (int, tuple[int,int]): the target output size. - """ - if input.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)): - if isinstance(output_size, int): - output_size = [output_size, output_size] - output_size = [*input.shape[:2], *output_size] - empty = NewEmptyTensorOp.apply(input, output_size) - return empty - else: - return F.adaptive_avg_pool2d(input, output_size) - - -class AdaptiveAvgPool2d(nn.AdaptiveAvgPool2d): - """Handle empty batch dimension to AdaptiveAvgPool2d.""" - - def forward(self, x): - # PyTorch 1.9 does not support empty tensor inference yet - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)): - output_size = self.output_size - if isinstance(output_size, int): - output_size = [output_size, output_size] - else: - output_size = [ - v if v is not None else d - for v, d in zip(output_size, - x.size()[-2:]) - ] - output_size = [*x.shape[:2], *output_size] - empty = NewEmptyTensorOp.apply(x, output_size) - return empty - - return super().forward(x) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/builder.py deleted file mode 100644 index 20fe7a6d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/builder.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.utils import Registry, build_from_cfg - -TRANSFORMER = Registry('Transformer') -LINEAR_LAYERS = Registry('linear layers') - - -def build_transformer(cfg, default_args=None): - """Builder for Transformer.""" - return build_from_cfg(cfg, TRANSFORMER, default_args) - - -LINEAR_LAYERS.register_module('Linear', module=nn.Linear) - - -def build_linear_layer(cfg, *args, **kwargs): - """Build linear layer. - Args: - cfg (None or dict): The linear layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an linear layer. - args (argument list): Arguments passed to the `__init__` - method of the corresponding linear layer. - kwargs (keyword arguments): Keyword arguments passed to the `__init__` - method of the corresponding linear layer. - Returns: - nn.Module: Created linear layer. 
- """ - if cfg is None: - cfg_ = dict(type='Linear') - else: - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in LINEAR_LAYERS: - raise KeyError(f'Unrecognized linear type {layer_type}') - else: - linear_layer = LINEAR_LAYERS.get(layer_type) - - layer = linear_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/ckpt_convert.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/ckpt_convert.py deleted file mode 100644 index 4d660c4e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/ckpt_convert.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -# This script consists of several convert functions which -# can modify the weights of model in original repo to be -# pre-trained weights. - -from collections import OrderedDict - -import torch - - -def pvt_convert(ckpt): - new_ckpt = OrderedDict() - # Process the concat between q linear weights and kv linear weights - use_abs_pos_embed = False - use_conv_ffn = False - for k in ckpt.keys(): - if k.startswith('pos_embed'): - use_abs_pos_embed = True - if k.find('dwconv') >= 0: - use_conv_ffn = True - for k, v in ckpt.items(): - if k.startswith('head'): - continue - if k.startswith('norm.'): - continue - if k.startswith('cls_token'): - continue - if k.startswith('pos_embed'): - stage_i = int(k.replace('pos_embed', '')) - new_k = k.replace(f'pos_embed{stage_i}', - f'layers.{stage_i - 1}.1.0.pos_embed') - if stage_i == 4 and v.size(1) == 50: # 1 (cls token) + 7 * 7 - new_v = v[:, 1:, :] # remove cls token - else: - new_v = v - elif k.startswith('patch_embed'): - stage_i = int(k.split('.')[0].replace('patch_embed', '')) - new_k = k.replace(f'patch_embed{stage_i}', - f'layers.{stage_i - 1}.0') - new_v = v - if 'proj.' in new_k: - new_k = new_k.replace('proj.', 'projection.') - elif k.startswith('block'): - stage_i = int(k.split('.')[0].replace('block', '')) - layer_i = int(k.split('.')[1]) - new_layer_i = layer_i + use_abs_pos_embed - new_k = k.replace(f'block{stage_i}.{layer_i}', - f'layers.{stage_i - 1}.1.{new_layer_i}') - new_v = v - if 'attn.q.' in new_k: - sub_item_k = k.replace('q.', 'kv.') - new_k = new_k.replace('q.', 'attn.in_proj_') - new_v = torch.cat([v, ckpt[sub_item_k]], dim=0) - elif 'attn.kv.' in new_k: - continue - elif 'attn.proj.' in new_k: - new_k = new_k.replace('proj.', 'attn.out_proj.') - elif 'attn.sr.' in new_k: - new_k = new_k.replace('sr.', 'sr.') - elif 'mlp.' 
in new_k: - string = f'{new_k}-' - new_k = new_k.replace('mlp.', 'ffn.layers.') - if 'fc1.weight' in new_k or 'fc2.weight' in new_k: - new_v = v.reshape((*v.shape, 1, 1)) - new_k = new_k.replace('fc1.', '0.') - new_k = new_k.replace('dwconv.dwconv.', '1.') - if use_conv_ffn: - new_k = new_k.replace('fc2.', '4.') - else: - new_k = new_k.replace('fc2.', '3.') - string += f'{new_k} {v.shape}-{new_v.shape}' - elif k.startswith('norm'): - stage_i = int(k[4]) - new_k = k.replace(f'norm{stage_i}', f'layers.{stage_i - 1}.2') - new_v = v - else: - new_k = k - new_v = v - new_ckpt[new_k] = new_v - - return new_ckpt - - -def swin_converter(ckpt): - - new_ckpt = OrderedDict() - - def correct_unfold_reduction_order(x): - out_channel, in_channel = x.shape - x = x.reshape(out_channel, 4, in_channel // 4) - x = x[:, [0, 2, 1, 3], :].transpose(1, - 2).reshape(out_channel, in_channel) - return x - - def correct_unfold_norm_order(x): - in_channel = x.shape[0] - x = x.reshape(4, in_channel // 4) - x = x[[0, 2, 1, 3], :].transpose(0, 1).reshape(in_channel) - return x - - for k, v in ckpt.items(): - if k.startswith('head'): - continue - elif k.startswith('layers'): - new_v = v - if 'attn.' in k: - new_k = k.replace('attn.', 'attn.w_msa.') - elif 'mlp.' in k: - if 'mlp.fc1.' in k: - new_k = k.replace('mlp.fc1.', 'ffn.layers.0.0.') - elif 'mlp.fc2.' in k: - new_k = k.replace('mlp.fc2.', 'ffn.layers.1.') - else: - new_k = k.replace('mlp.', 'ffn.') - elif 'downsample' in k: - new_k = k - if 'reduction.' in k: - new_v = correct_unfold_reduction_order(v) - elif 'norm.' in k: - new_v = correct_unfold_norm_order(v) - else: - new_k = k - new_k = new_k.replace('layers', 'stages', 1) - elif k.startswith('patch_embed'): - new_v = v - if 'proj' in k: - new_k = k.replace('proj', 'projection') - else: - new_k = k - else: - new_v = v - new_k = k - - new_ckpt['backbone.' + new_k] = new_v - - return new_ckpt diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/conv_upsample.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/conv_upsample.py deleted file mode 100644 index bb5ba767..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/conv_upsample.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, ModuleList - - -class ConvUpsample(BaseModule): - """ConvUpsample performs 2x upsampling after Conv. - - There are several `ConvModule` layers. In the first few layers, upsampling - will be applied after each layer of convolution. The number of upsampling - must be no more than the number of ConvModule layers. - - Args: - in_channels (int): Number of channels in the input feature map. - inner_channels (int): Number of channels produced by the convolution. - num_layers (int): Number of convolution layers. - num_upsample (int | optional): Number of upsampling layer. Must be no - more than num_layers. Upsampling will be applied after the first - ``num_upsample`` layers of convolution. Default: ``num_layers``. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - init_cfg (dict): Config dict for initialization. Default: None. - kwargs (key word augments): Other augments used in ConvModule. 
- """ - - def __init__(self, - in_channels, - inner_channels, - num_layers=1, - num_upsample=None, - conv_cfg=None, - norm_cfg=None, - init_cfg=None, - **kwargs): - super(ConvUpsample, self).__init__(init_cfg) - if num_upsample is None: - num_upsample = num_layers - assert num_upsample <= num_layers, \ - f'num_upsample({num_upsample})must be no more than ' \ - f'num_layers({num_layers})' - self.num_layers = num_layers - self.num_upsample = num_upsample - self.conv = ModuleList() - for i in range(num_layers): - self.conv.append( - ConvModule( - in_channels, - inner_channels, - 3, - padding=1, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - in_channels = inner_channels - - def forward(self, x): - num_upsample = self.num_upsample - for i in range(self.num_layers): - x = self.conv[i](x) - if num_upsample > 0: - num_upsample -= 1 - x = F.interpolate( - x, scale_factor=2, mode='bilinear', align_corners=False) - return x diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/csp_layer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/csp_layer.py deleted file mode 100644 index 5760b014..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/csp_layer.py +++ /dev/null @@ -1,150 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule - - -class DarknetBottleneck(BaseModule): - """The basic bottleneck block used in Darknet. - - Each ResBlock consists of two ConvModules and the input is added to the - final output. Each ConvModule is composed of Conv, BN, and LeakyReLU. - The first convLayer has filter size of 1x1 and the second one has the - filter size of 3x3. - - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - expansion (int): The kernel size of the convolution. Default: 0.5 - add_identity (bool): Whether to add identity to the out. - Default: True - use_depthwise (bool): Whether to use depthwise separable convolution. - Default: False - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish'). - """ - - def __init__(self, - in_channels, - out_channels, - expansion=0.5, - add_identity=True, - use_depthwise=False, - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - init_cfg=None): - super().__init__(init_cfg) - hidden_channels = int(out_channels * expansion) - conv = DepthwiseSeparableConvModule if use_depthwise else ConvModule - self.conv1 = ConvModule( - in_channels, - hidden_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.conv2 = conv( - hidden_channels, - out_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.add_identity = \ - add_identity and in_channels == out_channels - - def forward(self, x): - identity = x - out = self.conv1(x) - out = self.conv2(out) - - if self.add_identity: - return out + identity - else: - return out - - -class CSPLayer(BaseModule): - """Cross Stage Partial Layer. - - Args: - in_channels (int): The input channels of the CSP layer. - out_channels (int): The output channels of the CSP layer. 
- expand_ratio (float): Ratio to adjust the number of channels of the - hidden layer. Default: 0.5 - num_blocks (int): Number of blocks. Default: 1 - add_identity (bool): Whether to add identity in blocks. - Default: True - use_depthwise (bool): Whether to depthwise separable convolution in - blocks. Default: False - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN') - act_cfg (dict): Config dict for activation layer. - Default: dict(type='Swish') - """ - - def __init__(self, - in_channels, - out_channels, - expand_ratio=0.5, - num_blocks=1, - add_identity=True, - use_depthwise=False, - conv_cfg=None, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='Swish'), - init_cfg=None): - super().__init__(init_cfg) - mid_channels = int(out_channels * expand_ratio) - self.main_conv = ConvModule( - in_channels, - mid_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.short_conv = ConvModule( - in_channels, - mid_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.final_conv = ConvModule( - 2 * mid_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - self.blocks = nn.Sequential(*[ - DarknetBottleneck( - mid_channels, - mid_channels, - 1.0, - add_identity, - use_depthwise, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) for _ in range(num_blocks) - ]) - - def forward(self, x): - x_short = self.short_conv(x) - - x_main = self.main_conv(x) - x_main = self.blocks(x_main) - - x_final = torch.cat((x_main, x_short), dim=1) - return self.final_conv(x_final) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/gaussian_target.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/gaussian_target.py deleted file mode 100644 index 5bf4d558..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/gaussian_target.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from math import sqrt - -import torch -import torch.nn.functional as F - - -def gaussian2D(radius, sigma=1, dtype=torch.float32, device='cpu'): - """Generate 2D gaussian kernel. - - Args: - radius (int): Radius of gaussian kernel. - sigma (int): Sigma of gaussian function. Default: 1. - dtype (torch.dtype): Dtype of gaussian tensor. Default: torch.float32. - device (str): Device of gaussian tensor. Default: 'cpu'. - - Returns: - h (Tensor): Gaussian kernel with a - ``(2 * radius + 1) * (2 * radius + 1)`` shape. - """ - x = torch.arange( - -radius, radius + 1, dtype=dtype, device=device).view(1, -1) - y = torch.arange( - -radius, radius + 1, dtype=dtype, device=device).view(-1, 1) - - h = (-(x * x + y * y) / (2 * sigma * sigma)).exp() - - h[h < torch.finfo(h.dtype).eps * h.max()] = 0 - return h - - -def gen_gaussian_target(heatmap, center, radius, k=1): - """Generate 2D gaussian heatmap. - - Args: - heatmap (Tensor): Input heatmap, the gaussian kernel will cover on - it and maintain the max value. - center (list[int]): Coord of gaussian kernel's center. - radius (int): Radius of gaussian kernel. - k (int): Coefficient of gaussian kernel. Default: 1. - - Returns: - out_heatmap (Tensor): Updated heatmap covered by gaussian kernel. 
- """ - diameter = 2 * radius + 1 - gaussian_kernel = gaussian2D( - radius, sigma=diameter / 6, dtype=heatmap.dtype, device=heatmap.device) - - x, y = center - - height, width = heatmap.shape[:2] - - left, right = min(x, radius), min(width - x, radius + 1) - top, bottom = min(y, radius), min(height - y, radius + 1) - - masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right] - masked_gaussian = gaussian_kernel[radius - top:radius + bottom, - radius - left:radius + right] - out_heatmap = heatmap - torch.max( - masked_heatmap, - masked_gaussian * k, - out=out_heatmap[y - top:y + bottom, x - left:x + right]) - - return out_heatmap - - -def gaussian_radius(det_size, min_overlap): - r"""Generate 2D gaussian radius. - - This function is modified from the `official github repo - `_. - - Given ``min_overlap``, radius could computed by a quadratic equation - according to Vieta's formulas. - - There are 3 cases for computing gaussian radius, details are following: - - - Explanation of figure: ``lt`` and ``br`` indicates the left-top and - bottom-right corner of ground truth box. ``x`` indicates the - generated corner at the limited position when ``radius=r``. - - - Case1: one corner is inside the gt box and the other is outside. - - .. code:: text - - |< width >| - - lt-+----------+ - - | | | ^ - +--x----------+--+ - | | | | - | | | | height - | | overlap | | - | | | | - | | | | v - +--+---------br--+ - - | | | - +----------+--x - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{(w-r)*(h-r)}{w*h+(w+h)r-r^2} \ge {iou} \quad\Rightarrow\quad - {r^2-(w+h)r+\cfrac{1-iou}{1+iou}*w*h} \ge 0 \\ - {a} = 1,\quad{b} = {-(w+h)},\quad{c} = {\cfrac{1-iou}{1+iou}*w*h} - {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a} - - - Case2: both two corners are inside the gt box. - - .. code:: text - - |< width >| - - lt-+----------+ - - | | | ^ - +--x-------+ | - | | | | - | |overlap| | height - | | | | - | +-------x--+ - | | | v - +----------+-br - - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{(w-2*r)*(h-2*r)}{w*h} \ge {iou} \quad\Rightarrow\quad - {4r^2-2(w+h)r+(1-iou)*w*h} \ge 0 \\ - {a} = 4,\quad {b} = {-2(w+h)},\quad {c} = {(1-iou)*w*h} - {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a} - - - Case3: both two corners are outside the gt box. - - .. code:: text - - |< width >| - - x--+----------------+ - | | | - +-lt-------------+ | - - | | | | ^ - | | | | - | | overlap | | height - | | | | - | | | | v - | +------------br--+ - - | | | - +----------------+--x - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{w*h}{(w+2*r)*(h+2*r)} \ge {iou} \quad\Rightarrow\quad - {4*iou*r^2+2*iou*(w+h)r+(iou-1)*w*h} \le 0 \\ - {a} = {4*iou},\quad {b} = {2*iou*(w+h)},\quad {c} = {(iou-1)*w*h} \\ - {r} \le \cfrac{-b+\sqrt{b^2-4*a*c}}{2*a} - - Args: - det_size (list[int]): Shape of object. - min_overlap (float): Min IoU with ground truth for boxes generated by - keypoints inside the gaussian kernel. - - Returns: - radius (int): Radius of gaussian kernel. 
- """ - height, width = det_size - - a1 = 1 - b1 = (height + width) - c1 = width * height * (1 - min_overlap) / (1 + min_overlap) - sq1 = sqrt(b1**2 - 4 * a1 * c1) - r1 = (b1 - sq1) / (2 * a1) - - a2 = 4 - b2 = 2 * (height + width) - c2 = (1 - min_overlap) * width * height - sq2 = sqrt(b2**2 - 4 * a2 * c2) - r2 = (b2 - sq2) / (2 * a2) - - a3 = 4 * min_overlap - b3 = -2 * min_overlap * (height + width) - c3 = (min_overlap - 1) * width * height - sq3 = sqrt(b3**2 - 4 * a3 * c3) - r3 = (b3 + sq3) / (2 * a3) - return min(r1, r2, r3) - - -def get_local_maximum(heat, kernel=3): - """Extract local maximum pixel with given kernel. - - Args: - heat (Tensor): Target heatmap. - kernel (int): Kernel size of max pooling. Default: 3. - - Returns: - heat (Tensor): A heatmap where local maximum pixels maintain its - own value and other positions are 0. - """ - pad = (kernel - 1) // 2 - hmax = F.max_pool2d(heat, kernel, stride=1, padding=pad) - keep = (hmax == heat).float() - return heat * keep - - -def get_topk_from_heatmap(scores, k=20): - """Get top k positions from heatmap. - - Args: - scores (Tensor): Target heatmap with shape - [batch, num_classes, height, width]. - k (int): Target number. Default: 20. - - Returns: - tuple[torch.Tensor]: Scores, indexes, categories and coords of - topk keypoint. Containing following Tensors: - - - topk_scores (Tensor): Max scores of each topk keypoint. - - topk_inds (Tensor): Indexes of each topk keypoint. - - topk_clses (Tensor): Categories of each topk keypoint. - - topk_ys (Tensor): Y-coord of each topk keypoint. - - topk_xs (Tensor): X-coord of each topk keypoint. - """ - batch, _, height, width = scores.size() - topk_scores, topk_inds = torch.topk(scores.view(batch, -1), k) - topk_clses = topk_inds // (height * width) - topk_inds = topk_inds % (height * width) - topk_ys = topk_inds // width - topk_xs = (topk_inds % width).int().float() - return topk_scores, topk_inds, topk_clses, topk_ys, topk_xs - - -def gather_feat(feat, ind, mask=None): - """Gather feature according to index. - - Args: - feat (Tensor): Target feature map. - ind (Tensor): Target coord index. - mask (Tensor | None): Mask of feature map. Default: None. - - Returns: - feat (Tensor): Gathered feature. - """ - dim = feat.size(2) - ind = ind.unsqueeze(2).repeat(1, 1, dim) - feat = feat.gather(1, ind) - if mask is not None: - mask = mask.unsqueeze(2).expand_as(feat) - feat = feat[mask] - feat = feat.view(-1, dim) - return feat - - -def transpose_and_gather_feat(feat, ind): - """Transpose and gather feature according to index. - - Args: - feat (Tensor): Target feature map. - ind (Tensor): Target coord index. - - Returns: - feat (Tensor): Transposed and gathered feature. - """ - feat = feat.permute(0, 2, 3, 1).contiguous() - feat = feat.view(feat.size(0), -1, feat.size(3)) - feat = gather_feat(feat, ind) - return feat diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/inverted_residual.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/inverted_residual.py deleted file mode 100644 index 1f241ae3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/inverted_residual.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import ConvModule -from mmcv.cnn.bricks import DropPath -from mmcv.runner import BaseModule - -from .se_layer import SELayer - - -class InvertedResidual(BaseModule): - """Inverted Residual Block. 
- - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - mid_channels (int): The input channels of the depthwise convolution. - kernel_size (int): The kernel size of the depthwise convolution. - Default: 3. - stride (int): The stride of the depthwise convolution. Default: 1. - se_cfg (dict): Config dict for se layer. Default: None, which means no - se layer. - with_expand_conv (bool): Use expand conv or not. If set False, - mid_channels must be the same with in_channels. - Default: True. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - drop_path_rate (float): stochastic depth rate. Defaults to 0. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, - in_channels, - out_channels, - mid_channels, - kernel_size=3, - stride=1, - se_cfg=None, - with_expand_conv=True, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - drop_path_rate=0., - with_cp=False, - init_cfg=None): - super(InvertedResidual, self).__init__(init_cfg) - self.with_res_shortcut = (stride == 1 and in_channels == out_channels) - assert stride in [1, 2], f'stride must in [1, 2]. ' \ - f'But received {stride}.' - self.with_cp = with_cp - self.drop_path = DropPath( - drop_path_rate) if drop_path_rate > 0 else nn.Identity() - self.with_se = se_cfg is not None - self.with_expand_conv = with_expand_conv - - if self.with_se: - assert isinstance(se_cfg, dict) - if not self.with_expand_conv: - assert mid_channels == in_channels - - if self.with_expand_conv: - self.expand_conv = ConvModule( - in_channels=in_channels, - out_channels=mid_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.depthwise_conv = ConvModule( - in_channels=mid_channels, - out_channels=mid_channels, - kernel_size=kernel_size, - stride=stride, - padding=kernel_size // 2, - groups=mid_channels, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - if self.with_se: - self.se = SELayer(**se_cfg) - - self.linear_conv = ConvModule( - in_channels=mid_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, x): - - def _inner_forward(x): - out = x - - if self.with_expand_conv: - out = self.expand_conv(out) - - out = self.depthwise_conv(out) - - if self.with_se: - out = self.se(out) - - out = self.linear_conv(out) - - if self.with_res_shortcut: - return x + self.drop_path(out) - else: - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/make_divisible.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/make_divisible.py deleted file mode 100644 index ed42c2ee..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/make_divisible.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-def make_divisible(value, divisor, min_value=None, min_ratio=0.9): - """Make divisible function. - - This function rounds the channel number to the nearest value that can be - divisible by the divisor. It is taken from the original tf repo. It ensures - that all layers have a channel number that is divisible by divisor. It can - be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py # noqa - - Args: - value (int): The original channel number. - divisor (int): The divisor to fully divide the channel number. - min_value (int): The minimum value of the output channel. - Default: None, means that the minimum value equal to the divisor. - min_ratio (float): The minimum ratio of the rounded channel number to - the original channel number. Default: 0.9. - - Returns: - int: The modified output channel number. - """ - - if min_value is None: - min_value = divisor - new_value = max(min_value, int(value + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than (1-min_ratio). - if new_value < min_ratio * value: - new_value += divisor - return new_value diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/misc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/misc.py deleted file mode 100644 index 8f9be9ab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/misc.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from torch.autograd import Function -from torch.nn import functional as F - - -class SigmoidGeometricMean(Function): - """Forward and backward function of geometric mean of two sigmoid - functions. - - This implementation with analytical gradient function substitutes - the autograd function of (x.sigmoid() * y.sigmoid()).sqrt(). The - original implementation incurs none during gradient backprapagation - if both x and y are very small values. - """ - - @staticmethod - def forward(ctx, x, y): - x_sigmoid = x.sigmoid() - y_sigmoid = y.sigmoid() - z = (x_sigmoid * y_sigmoid).sqrt() - ctx.save_for_backward(x_sigmoid, y_sigmoid, z) - return z - - @staticmethod - def backward(ctx, grad_output): - x_sigmoid, y_sigmoid, z = ctx.saved_tensors - grad_x = grad_output * z * (1 - x_sigmoid) / 2 - grad_y = grad_output * z * (1 - y_sigmoid) / 2 - return grad_x, grad_y - - -sigmoid_geometric_mean = SigmoidGeometricMean.apply - - -def interpolate_as(source, target, mode='bilinear', align_corners=False): - """Interpolate the `source` to the shape of the `target`. - - The `source` must be a Tensor, but the `target` can be a Tensor or a - np.ndarray with the shape (..., target_h, target_w). - - Args: - source (Tensor): A 3D/4D Tensor with the shape (N, H, W) or - (N, C, H, W). - target (Tensor | np.ndarray): The interpolation target with the shape - (..., target_h, target_w). - mode (str): Algorithm used for interpolation. The options are the - same as those in F.interpolate(). Default: ``'bilinear'``. - align_corners (bool): The same as the argument in F.interpolate(). - - Returns: - Tensor: The interpolated source Tensor. 
- """ - assert len(target.shape) >= 2 - - def _interpolate_as(source, target, mode='bilinear', align_corners=False): - """Interpolate the `source` (4D) to the shape of the `target`.""" - target_h, target_w = target.shape[-2:] - source_h, source_w = source.shape[-2:] - if target_h != source_h or target_w != source_w: - source = F.interpolate( - source, - size=(target_h, target_w), - mode=mode, - align_corners=align_corners) - return source - - if len(source.shape) == 3: - source = source[:, None, :, :] - source = _interpolate_as(source, target, mode, align_corners) - return source[:, 0, :, :] - else: - return _interpolate_as(source, target, mode, align_corners) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/normed_predictor.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/normed_predictor.py deleted file mode 100644 index f0eeef7d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/normed_predictor.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import CONV_LAYERS - -from .builder import LINEAR_LAYERS - - -@LINEAR_LAYERS.register_module(name='NormedLinear') -class NormedLinear(nn.Linear): - """Normalized Linear Layer. - - Args: - tempeature (float, optional): Tempeature term. Default to 20. - power (int, optional): Power term. Default to 1.0. - eps (float, optional): The minimal value of divisor to - keep numerical stability. Default to 1e-6. - """ - - def __init__(self, *args, tempearture=20, power=1.0, eps=1e-6, **kwargs): - super(NormedLinear, self).__init__(*args, **kwargs) - self.tempearture = tempearture - self.power = power - self.eps = eps - self.init_weights() - - def init_weights(self): - nn.init.normal_(self.weight, mean=0, std=0.01) - if self.bias is not None: - nn.init.constant_(self.bias, 0) - - def forward(self, x): - weight_ = self.weight / ( - self.weight.norm(dim=1, keepdim=True).pow(self.power) + self.eps) - x_ = x / (x.norm(dim=1, keepdim=True).pow(self.power) + self.eps) - x_ = x_ * self.tempearture - - return F.linear(x_, weight_, self.bias) - - -@CONV_LAYERS.register_module(name='NormedConv2d') -class NormedConv2d(nn.Conv2d): - """Normalized Conv2d Layer. - - Args: - tempeature (float, optional): Tempeature term. Default to 20. - power (int, optional): Power term. Default to 1.0. - eps (float, optional): The minimal value of divisor to - keep numerical stability. Default to 1e-6. - norm_over_kernel (bool, optional): Normalize over kernel. - Default to False. 
- """ - - def __init__(self, - *args, - tempearture=20, - power=1.0, - eps=1e-6, - norm_over_kernel=False, - **kwargs): - super(NormedConv2d, self).__init__(*args, **kwargs) - self.tempearture = tempearture - self.power = power - self.norm_over_kernel = norm_over_kernel - self.eps = eps - - def forward(self, x): - if not self.norm_over_kernel: - weight_ = self.weight / ( - self.weight.norm(dim=1, keepdim=True).pow(self.power) + - self.eps) - else: - weight_ = self.weight / ( - self.weight.view(self.weight.size(0), -1).norm( - dim=1, keepdim=True).pow(self.power)[..., None, None] + - self.eps) - x_ = x / (x.norm(dim=1, keepdim=True).pow(self.power) + self.eps) - x_ = x_ * self.tempearture - - if hasattr(self, 'conv2d_forward'): - x_ = self.conv2d_forward(x_, weight_) - else: - if torch.__version__ >= '1.8': - x_ = self._conv_forward(x_, weight_, self.bias) - else: - x_ = self._conv_forward(x_, weight_) - return x_ diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/panoptic_gt_processing.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/panoptic_gt_processing.py deleted file mode 100644 index 513f6449..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/panoptic_gt_processing.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def preprocess_panoptic_gt(gt_labels, gt_masks, gt_semantic_seg, num_things, - num_stuff): - """Preprocess the ground truth for a image. - - Args: - gt_labels (Tensor): Ground truth labels of each bbox, - with shape (num_gts, ). - gt_masks (BitmapMasks): Ground truth masks of each instances - of a image, shape (num_gts, h, w). - gt_semantic_seg (Tensor): Ground truth of semantic - segmentation with the shape (1, h, w). - [0, num_thing_class - 1] means things, - [num_thing_class, num_class-1] means stuff, - 255 means VOID. - target_shape (tuple[int]): Shape of output mask_preds. - Resize the masks to shape of mask_preds. - - Returns: - tuple: a tuple containing the following targets. - - - labels (Tensor): Ground truth class indices for a - image, with shape (n, ), n is the sum of number - of stuff type and number of instance in a image. - - masks (Tensor): Ground truth mask for a image, with - shape (n, h, w). 
- """ - num_classes = num_things + num_stuff - things_labels = gt_labels - gt_semantic_seg = gt_semantic_seg.squeeze(0) - - things_masks = gt_masks.pad(gt_semantic_seg.shape[-2:], pad_val=0)\ - .to_tensor(dtype=torch.bool, device=gt_labels.device) - - semantic_labels = torch.unique( - gt_semantic_seg, - sorted=False, - return_inverse=False, - return_counts=False) - stuff_masks_list = [] - stuff_labels_list = [] - for label in semantic_labels: - if label < num_things or label >= num_classes: - continue - stuff_mask = gt_semantic_seg == label - stuff_masks_list.append(stuff_mask) - stuff_labels_list.append(label) - - if len(stuff_masks_list) > 0: - stuff_masks = torch.stack(stuff_masks_list, dim=0) - stuff_labels = torch.stack(stuff_labels_list, dim=0) - labels = torch.cat([things_labels, stuff_labels], dim=0) - masks = torch.cat([things_masks, stuff_masks], dim=0) - else: - labels = things_labels - masks = things_masks - - masks = masks.long() - return labels, masks diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/point_sample.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/point_sample.py deleted file mode 100644 index c2c3cf91..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/point_sample.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import point_sample - - -def get_uncertainty(mask_pred, labels): - """Estimate uncertainty based on pred logits. - - We estimate uncertainty as L1 distance between 0.0 and the logits - prediction in 'mask_pred' for the foreground class in `classes`. - - Args: - mask_pred (Tensor): mask predication logits, shape (num_rois, - num_classes, mask_height, mask_width). - - labels (list[Tensor]): Either predicted or ground truth label for - each predicted mask, of length num_rois. - - Returns: - scores (Tensor): Uncertainty scores with the most uncertain - locations having the highest uncertainty score, - shape (num_rois, 1, mask_height, mask_width) - """ - if mask_pred.shape[1] == 1: - gt_class_logits = mask_pred.clone() - else: - inds = torch.arange(mask_pred.shape[0], device=mask_pred.device) - gt_class_logits = mask_pred[inds, labels].unsqueeze(1) - return -torch.abs(gt_class_logits) - - -def get_uncertain_point_coords_with_randomness(mask_pred, labels, num_points, - oversample_ratio, - importance_sample_ratio): - """Get ``num_points`` most uncertain points with random points during - train. - - Sample points in [0, 1] x [0, 1] coordinate space based on their - uncertainty. The uncertainties are calculated for each point using - 'get_uncertainty()' function that takes point's logit prediction as - input. - - Args: - mask_pred (Tensor): A tensor of shape (num_rois, num_classes, - mask_height, mask_width) for class-specific or class-agnostic - prediction. - labels (list): The ground truth class for each instance. - num_points (int): The number of points to sample. - oversample_ratio (int): Oversampling parameter. - importance_sample_ratio (float): Ratio of points that are sampled - via importnace sampling. - - Returns: - point_coords (Tensor): A tensor of shape (num_rois, num_points, 2) - that contains the coordinates sampled points. 
- """ - assert oversample_ratio >= 1 - assert 0 <= importance_sample_ratio <= 1 - batch_size = mask_pred.shape[0] - num_sampled = int(num_points * oversample_ratio) - point_coords = torch.rand( - batch_size, num_sampled, 2, device=mask_pred.device) - point_logits = point_sample(mask_pred, point_coords) - # It is crucial to calculate uncertainty based on the sampled - # prediction value for the points. Calculating uncertainties of the - # coarse predictions first and sampling them for points leads to - # incorrect results. To illustrate this: assume uncertainty func( - # logits)=-abs(logits), a sampled point between two coarse - # predictions with -1 and 1 logits has 0 logits, and therefore 0 - # uncertainty value. However, if we calculate uncertainties for the - # coarse predictions first, both will have -1 uncertainty, - # and sampled point will get -1 uncertainty. - point_uncertainties = get_uncertainty(point_logits, labels) - num_uncertain_points = int(importance_sample_ratio * num_points) - num_random_points = num_points - num_uncertain_points - idx = torch.topk( - point_uncertainties[:, 0, :], k=num_uncertain_points, dim=1)[1] - shift = num_sampled * torch.arange( - batch_size, dtype=torch.long, device=mask_pred.device) - idx += shift[:, None] - point_coords = point_coords.view(-1, 2)[idx.view(-1), :].view( - batch_size, num_uncertain_points, 2) - if num_random_points > 0: - rand_roi_coords = torch.rand( - batch_size, num_random_points, 2, device=mask_pred.device) - point_coords = torch.cat((point_coords, rand_roi_coords), dim=1) - return point_coords diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/positional_encoding.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/positional_encoding.py deleted file mode 100644 index dd29cd65..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/positional_encoding.py +++ /dev/null @@ -1,163 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -from mmcv.cnn.bricks.transformer import POSITIONAL_ENCODING -from mmcv.runner import BaseModule - - -@POSITIONAL_ENCODING.register_module() -class SinePositionalEncoding(BaseModule): - """Position encoding with sine and cosine functions. - - See `End-to-End Object Detection with Transformers - `_ for details. - - Args: - num_feats (int): The feature dimension for each position - along x-axis or y-axis. Note the final returned dimension - for each position is 2 times of this value. - temperature (int, optional): The temperature used for scaling - the position embedding. Defaults to 10000. - normalize (bool, optional): Whether to normalize the position - embedding. Defaults to False. - scale (float, optional): A scale factor that scales the position - embedding. The scale will be used only when `normalize` is True. - Defaults to 2*pi. - eps (float, optional): A value added to the denominator for - numerical stability. Defaults to 1e-6. - offset (float): offset add to embed when do the normalization. - Defaults to 0. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - num_feats, - temperature=10000, - normalize=False, - scale=2 * math.pi, - eps=1e-6, - offset=0., - init_cfg=None): - super(SinePositionalEncoding, self).__init__(init_cfg) - if normalize: - assert isinstance(scale, (float, int)), 'when normalize is set,' \ - 'scale should be provided and in float or int type, ' \ - f'found {type(scale)}' - self.num_feats = num_feats - self.temperature = temperature - self.normalize = normalize - self.scale = scale - self.eps = eps - self.offset = offset - - def forward(self, mask): - """Forward function for `SinePositionalEncoding`. - - Args: - mask (Tensor): ByteTensor mask. Non-zero values representing - ignored positions, while zero values means valid positions - for this image. Shape [bs, h, w]. - - Returns: - pos (Tensor): Returned position embedding with shape - [bs, num_feats*2, h, w]. - """ - # For convenience of exporting to ONNX, it's required to convert - # `masks` from bool to int. - mask = mask.to(torch.int) - not_mask = 1 - mask # logical_not - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - if self.normalize: - y_embed = (y_embed + self.offset) / \ - (y_embed[:, -1:, :] + self.eps) * self.scale - x_embed = (x_embed + self.offset) / \ - (x_embed[:, :, -1:] + self.eps) * self.scale - dim_t = torch.arange( - self.num_feats, dtype=torch.float32, device=mask.device) - dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats) - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - # use `view` instead of `flatten` for dynamically exporting to ONNX - B, H, W = mask.size() - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), - dim=4).view(B, H, W, -1) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), - dim=4).view(B, H, W, -1) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_feats={self.num_feats}, ' - repr_str += f'temperature={self.temperature}, ' - repr_str += f'normalize={self.normalize}, ' - repr_str += f'scale={self.scale}, ' - repr_str += f'eps={self.eps})' - return repr_str - - -@POSITIONAL_ENCODING.register_module() -class LearnedPositionalEncoding(BaseModule): - """Position embedding with learnable embedding weights. - - Args: - num_feats (int): The feature dimension for each position - along x-axis or y-axis. The final returned dimension for - each position is 2 times of this value. - row_num_embed (int, optional): The dictionary size of row embeddings. - Default 50. - col_num_embed (int, optional): The dictionary size of col embeddings. - Default 50. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - num_feats, - row_num_embed=50, - col_num_embed=50, - init_cfg=dict(type='Uniform', layer='Embedding')): - super(LearnedPositionalEncoding, self).__init__(init_cfg) - self.row_embed = nn.Embedding(row_num_embed, num_feats) - self.col_embed = nn.Embedding(col_num_embed, num_feats) - self.num_feats = num_feats - self.row_num_embed = row_num_embed - self.col_num_embed = col_num_embed - - def forward(self, mask): - """Forward function for `LearnedPositionalEncoding`. - - Args: - mask (Tensor): ByteTensor mask. Non-zero values representing - ignored positions, while zero values means valid positions - for this image. Shape [bs, h, w]. 
- - Returns: - pos (Tensor): Returned position embedding with shape - [bs, num_feats*2, h, w]. - """ - h, w = mask.shape[-2:] - x = torch.arange(w, device=mask.device) - y = torch.arange(h, device=mask.device) - x_embed = self.col_embed(x) - y_embed = self.row_embed(y) - pos = torch.cat( - (x_embed.unsqueeze(0).repeat(h, 1, 1), y_embed.unsqueeze(1).repeat( - 1, w, 1)), - dim=-1).permute(2, 0, - 1).unsqueeze(0).repeat(mask.shape[0], 1, 1, 1) - return pos - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_feats={self.num_feats}, ' - repr_str += f'row_num_embed={self.row_num_embed}, ' - repr_str += f'col_num_embed={self.col_num_embed})' - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/res_layer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/res_layer.py deleted file mode 100644 index 5c3e89fb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/res_layer.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule, Sequential -from torch import nn as nn - - -class ResLayer(Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. 
Default: True - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if downsample_first: - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - else: # downsample_first=False is for HourglassModule - for _ in range(num_blocks - 1): - layers.append( - block( - inplanes=inplanes, - planes=inplanes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) - - -class SimplifiedBasicBlock(BaseModule): - """Simplified version of original basic residual block. This is used in - `SCNet `_. - - - Norm layer is now optional - - Last ReLU in forward function is removed - """ - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_fg=None): - super(SimplifiedBasicBlock, self).__init__(init_fg) - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert not with_cp, 'Not implemented yet.' 
- self.with_norm = norm_cfg is not None - with_bias = True if norm_cfg is None else False - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=with_bias) - if self.with_norm: - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, planes, postfix=1) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=with_bias) - if self.with_norm: - self.norm2_name, norm2 = build_norm_layer( - norm_cfg, planes, postfix=2) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) if self.with_norm else None - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) if self.with_norm else None - - def forward(self, x): - """Forward function.""" - - identity = x - - out = self.conv1(x) - if self.with_norm: - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - if self.with_norm: - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/se_layer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/se_layer.py deleted file mode 100644 index a2492103..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/se_layer.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - - -class SELayer(BaseModule): - """Squeeze-and-Excitation Module. - - Args: - channels (int): The input (and output) channels of the SE layer. - ratio (int): Squeeze ratio in SELayer, the intermediate channel will be - ``int(channels/ratio)``. Default: 16. - conv_cfg (None or dict): Config dict for convolution layer. - Default: None, which means using conv2d. - act_cfg (dict or Sequence[dict]): Config dict for activation layer. - If act_cfg is a dict, two activation layers will be configurated - by this dict. If act_cfg is a sequence of dicts, the first - activation layer will be configurated by the first dict and the - second activation layer will be configurated by the second dict. - Default: (dict(type='ReLU'), dict(type='Sigmoid')) - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - channels, - ratio=16, - conv_cfg=None, - act_cfg=(dict(type='ReLU'), dict(type='Sigmoid')), - init_cfg=None): - super(SELayer, self).__init__(init_cfg) - if isinstance(act_cfg, dict): - act_cfg = (act_cfg, act_cfg) - assert len(act_cfg) == 2 - assert mmcv.is_tuple_of(act_cfg, dict) - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.conv1 = ConvModule( - in_channels=channels, - out_channels=int(channels / ratio), - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[0]) - self.conv2 = ConvModule( - in_channels=int(channels / ratio), - out_channels=channels, - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[1]) - - def forward(self, x): - out = self.global_avgpool(x) - out = self.conv1(out) - out = self.conv2(out) - return x * out - - -class DyReLU(BaseModule): - """Dynamic ReLU (DyReLU) module. - - See `Dynamic ReLU `_ for details. - Current implementation is specialized for task-aware attention in DyHead. - HSigmoid arguments in default act_cfg follow DyHead official code. - https://github.com/microsoft/DynamicHead/blob/master/dyhead/dyrelu.py - - Args: - channels (int): The input (and output) channels of DyReLU module. - ratio (int): Squeeze ratio in Squeeze-and-Excitation-like module, - the intermediate channel will be ``int(channels/ratio)``. - Default: 4. - conv_cfg (None or dict): Config dict for convolution layer. - Default: None, which means using conv2d. - act_cfg (dict or Sequence[dict]): Config dict for activation layer. - If act_cfg is a dict, two activation layers will be configurated - by this dict. If act_cfg is a sequence of dicts, the first - activation layer will be configurated by the first dict and the - second activation layer will be configurated by the second dict. - Default: (dict(type='ReLU'), dict(type='HSigmoid', bias=3.0, - divisor=6.0)) - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - channels, - ratio=4, - conv_cfg=None, - act_cfg=(dict(type='ReLU'), - dict(type='HSigmoid', bias=3.0, divisor=6.0)), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - if isinstance(act_cfg, dict): - act_cfg = (act_cfg, act_cfg) - assert len(act_cfg) == 2 - assert mmcv.is_tuple_of(act_cfg, dict) - self.channels = channels - self.expansion = 4 # for a1, b1, a2, b2 - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.conv1 = ConvModule( - in_channels=channels, - out_channels=int(channels / ratio), - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[0]) - self.conv2 = ConvModule( - in_channels=int(channels / ratio), - out_channels=channels * self.expansion, - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[1]) - - def forward(self, x): - """Forward function.""" - coeffs = self.global_avgpool(x) - coeffs = self.conv1(coeffs) - coeffs = self.conv2(coeffs) - 0.5 # value range: [-0.5, 0.5] - a1, b1, a2, b2 = torch.split(coeffs, self.channels, dim=1) - a1 = a1 * 2.0 + 1.0 # [-1.0, 1.0] + 1.0 - a2 = a2 * 2.0 # [-1.0, 1.0] - out = torch.max(x * a1 + b1, x * a2 + b2) - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/transformer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/transformer.py deleted file mode 100644 index 3c390c83..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/models/utils/transformer.py +++ /dev/null @@ -1,1167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math -import warnings -from typing import Sequence - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import (build_activation_layer, build_conv_layer, - build_norm_layer, xavier_init) -from mmcv.cnn.bricks.registry import (TRANSFORMER_LAYER, - TRANSFORMER_LAYER_SEQUENCE) -from mmcv.cnn.bricks.transformer import (BaseTransformerLayer, - TransformerLayerSequence, - build_transformer_layer_sequence) -from mmcv.runner.base_module import BaseModule -from mmcv.utils import to_2tuple -from torch.nn.init import normal_ - -from mmdet.models.utils.builder import TRANSFORMER - -try: - from mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention - -except ImportError: - warnings.warn( - '`MultiScaleDeformableAttention` in MMCV has been moved to ' - '`mmcv.ops.multi_scale_deform_attn`, please update your MMCV') - from mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention - - -def nlc_to_nchw(x, hw_shape): - """Convert [N, L, C] shape tensor to [N, C, H, W] shape tensor. - - Args: - x (Tensor): The input tensor of shape [N, L, C] before conversion. - hw_shape (Sequence[int]): The height and width of output feature map. - - Returns: - Tensor: The output tensor of shape [N, C, H, W] after conversion. - """ - H, W = hw_shape - assert len(x.shape) == 3 - B, L, C = x.shape - assert L == H * W, 'The seq_len does not match H, W' - return x.transpose(1, 2).reshape(B, C, H, W).contiguous() - - -def nchw_to_nlc(x): - """Flatten [N, C, H, W] shape tensor to [N, L, C] shape tensor. - - Args: - x (Tensor): The input tensor of shape [N, C, H, W] before conversion. - - Returns: - Tensor: The output tensor of shape [N, L, C] after conversion. - """ - assert len(x.shape) == 4 - return x.flatten(2).transpose(1, 2).contiguous() - - -class AdaptivePadding(nn.Module): - """Applies padding to input (if needed) so that input can get fully covered - by filter you specified. It support two modes "same" and "corner". The - "same" mode is same with "SAME" padding mode in TensorFlow, pad zero around - input. The "corner" mode would pad zero to bottom right. - - Args: - kernel_size (int | tuple): Size of the kernel: - stride (int | tuple): Stride of the filter. Default: 1: - dilation (int | tuple): Spacing between kernel elements. - Default: 1 - padding (str): Support "same" and "corner", "corner" mode - would pad zero to bottom right, and "same" mode would - pad zero around input. Default: "corner". 
- Example: - >>> kernel_size = 16 - >>> stride = 16 - >>> dilation = 1 - >>> input = torch.rand(1, 1, 15, 17) - >>> adap_pad = AdaptivePadding( - >>> kernel_size=kernel_size, - >>> stride=stride, - >>> dilation=dilation, - >>> padding="corner") - >>> out = adap_pad(input) - >>> assert (out.shape[2], out.shape[3]) == (16, 32) - >>> input = torch.rand(1, 1, 16, 17) - >>> out = adap_pad(input) - >>> assert (out.shape[2], out.shape[3]) == (16, 32) - """ - - def __init__(self, kernel_size=1, stride=1, dilation=1, padding='corner'): - - super(AdaptivePadding, self).__init__() - - assert padding in ('same', 'corner') - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - padding = to_2tuple(padding) - dilation = to_2tuple(dilation) - - self.padding = padding - self.kernel_size = kernel_size - self.stride = stride - self.dilation = dilation - - def get_pad_shape(self, input_shape): - input_h, input_w = input_shape - kernel_h, kernel_w = self.kernel_size - stride_h, stride_w = self.stride - output_h = math.ceil(input_h / stride_h) - output_w = math.ceil(input_w / stride_w) - pad_h = max((output_h - 1) * stride_h + - (kernel_h - 1) * self.dilation[0] + 1 - input_h, 0) - pad_w = max((output_w - 1) * stride_w + - (kernel_w - 1) * self.dilation[1] + 1 - input_w, 0) - return pad_h, pad_w - - def forward(self, x): - pad_h, pad_w = self.get_pad_shape(x.size()[-2:]) - if pad_h > 0 or pad_w > 0: - if self.padding == 'corner': - x = F.pad(x, [0, pad_w, 0, pad_h]) - elif self.padding == 'same': - x = F.pad(x, [ - pad_w // 2, pad_w - pad_w // 2, pad_h // 2, - pad_h - pad_h // 2 - ]) - return x - - -class PatchEmbed(BaseModule): - """Image to Patch Embedding. - - We use a conv layer to implement PatchEmbed. - - Args: - in_channels (int): The num of input channels. Default: 3 - embed_dims (int): The dimensions of embedding. Default: 768 - conv_type (str): The config dict for embedding - conv layer type selection. Default: "Conv2d. - kernel_size (int): The kernel_size of embedding conv. Default: 16. - stride (int): The slide stride of embedding conv. - Default: None (Would be set as `kernel_size`). - padding (int | tuple | string ): The padding length of - embedding conv. When it is a string, it means the mode - of adaptive padding, support "same" and "corner" now. - Default: "corner". - dilation (int): The dilation rate of embedding conv. Default: 1. - bias (bool): Bias of embed conv. Default: True. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - input_size (int | tuple | None): The size of input, which will be - used to calculate the out size. Only work when `dynamic_size` - is False. Default: None. - init_cfg (`mmcv.ConfigDict`, optional): The Config for initialization. - Default: None. 
- """ - - def __init__( - self, - in_channels=3, - embed_dims=768, - conv_type='Conv2d', - kernel_size=16, - stride=16, - padding='corner', - dilation=1, - bias=True, - norm_cfg=None, - input_size=None, - init_cfg=None, - ): - super(PatchEmbed, self).__init__(init_cfg=init_cfg) - - self.embed_dims = embed_dims - if stride is None: - stride = kernel_size - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - if isinstance(padding, str): - self.adap_padding = AdaptivePadding( - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=padding) - # disable the padding of conv - padding = 0 - else: - self.adap_padding = None - padding = to_2tuple(padding) - - self.projection = build_conv_layer( - dict(type=conv_type), - in_channels=in_channels, - out_channels=embed_dims, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - bias=bias) - - if norm_cfg is not None: - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - else: - self.norm = None - - if input_size: - input_size = to_2tuple(input_size) - # `init_out_size` would be used outside to - # calculate the num_patches - # when `use_abs_pos_embed` outside - self.init_input_size = input_size - if self.adap_padding: - pad_h, pad_w = self.adap_padding.get_pad_shape(input_size) - input_h, input_w = input_size - input_h = input_h + pad_h - input_w = input_w + pad_w - input_size = (input_h, input_w) - - # https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html - h_out = (input_size[0] + 2 * padding[0] - dilation[0] * - (kernel_size[0] - 1) - 1) // stride[0] + 1 - w_out = (input_size[1] + 2 * padding[1] - dilation[1] * - (kernel_size[1] - 1) - 1) // stride[1] + 1 - self.init_out_size = (h_out, w_out) - else: - self.init_input_size = None - self.init_out_size = None - - def forward(self, x): - """ - Args: - x (Tensor): Has shape (B, C, H, W). In most case, C is 3. - - Returns: - tuple: Contains merged results and its spatial shape. - - - x (Tensor): Has shape (B, out_h * out_w, embed_dims) - - out_size (tuple[int]): Spatial shape of x, arrange as - (out_h, out_w). - """ - - if self.adap_padding: - x = self.adap_padding(x) - - x = self.projection(x) - out_size = (x.shape[2], x.shape[3]) - x = x.flatten(2).transpose(1, 2) - if self.norm is not None: - x = self.norm(x) - return x, out_size - - -class PatchMerging(BaseModule): - """Merge patch feature map. - - This layer groups feature map by kernel_size, and applies norm and linear - layers to the grouped feature map. Our implementation uses `nn.Unfold` to - merge patch, which is about 25% faster than original implementation. - Instead, we need to modify pretrained models for compatibility. - - Args: - in_channels (int): The num of input channels. - to gets fully covered by filter and stride you specified.. - Default: True. - out_channels (int): The num of output channels. - kernel_size (int | tuple, optional): the kernel size in the unfold - layer. Defaults to 2. - stride (int | tuple, optional): the stride of the sliding blocks in the - unfold layer. Default: None. (Would be set as `kernel_size`) - padding (int | tuple | string ): The padding length of - embedding conv. When it is a string, it means the mode - of adaptive padding, support "same" and "corner" now. - Default: "corner". - dilation (int | tuple, optional): dilation parameter in the unfold - layer. Default: 1. - bias (bool, optional): Whether to add bias in linear layer or not. - Defaults: False. 
- norm_cfg (dict, optional): Config dict for normalization layer. - Default: dict(type='LN'). - init_cfg (dict, optional): The extra config for initialization. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=2, - stride=None, - padding='corner', - dilation=1, - bias=False, - norm_cfg=dict(type='LN'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - if stride: - stride = stride - else: - stride = kernel_size - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - if isinstance(padding, str): - self.adap_padding = AdaptivePadding( - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=padding) - # disable the padding of unfold - padding = 0 - else: - self.adap_padding = None - - padding = to_2tuple(padding) - self.sampler = nn.Unfold( - kernel_size=kernel_size, - dilation=dilation, - padding=padding, - stride=stride) - - sample_dim = kernel_size[0] * kernel_size[1] * in_channels - - if norm_cfg is not None: - self.norm = build_norm_layer(norm_cfg, sample_dim)[1] - else: - self.norm = None - - self.reduction = nn.Linear(sample_dim, out_channels, bias=bias) - - def forward(self, x, input_size): - """ - Args: - x (Tensor): Has shape (B, H*W, C_in). - input_size (tuple[int]): The spatial shape of x, arrange as (H, W). - Default: None. - - Returns: - tuple: Contains merged results and its spatial shape. - - - x (Tensor): Has shape (B, Merged_H * Merged_W, C_out) - - out_size (tuple[int]): Spatial shape of x, arrange as - (Merged_H, Merged_W). - """ - B, L, C = x.shape - assert isinstance(input_size, Sequence), f'Expect ' \ - f'input_size is ' \ - f'`Sequence` ' \ - f'but get {input_size}' - - H, W = input_size - assert L == H * W, 'input feature has wrong size' - - x = x.view(B, H, W, C).permute([0, 3, 1, 2]) # B, C, H, W - # Use nn.Unfold to merge patch. About 25% faster than original method, - # but need to modify pretrained model for compatibility - - if self.adap_padding: - x = self.adap_padding(x) - H, W = x.shape[-2:] - - x = self.sampler(x) - # if kernel_size=2 and stride=2, x should has shape (B, 4*C, H/2*W/2) - - out_h = (H + 2 * self.sampler.padding[0] - self.sampler.dilation[0] * - (self.sampler.kernel_size[0] - 1) - - 1) // self.sampler.stride[0] + 1 - out_w = (W + 2 * self.sampler.padding[1] - self.sampler.dilation[1] * - (self.sampler.kernel_size[1] - 1) - - 1) // self.sampler.stride[1] + 1 - - output_size = (out_h, out_w) - x = x.transpose(1, 2) # B, H/2*W/2, 4*C - x = self.norm(x) if self.norm else x - x = self.reduction(x) - return x, output_size - - -def inverse_sigmoid(x, eps=1e-5): - """Inverse function of sigmoid. - - Args: - x (Tensor): The tensor to do the - inverse. - eps (float): EPS avoid numerical - overflow. Defaults 1e-5. - Returns: - Tensor: The x has passed the inverse - function of sigmoid, has same - shape with input. - """ - x = x.clamp(min=0, max=1) - x1 = x.clamp(min=eps) - x2 = (1 - x).clamp(min=eps) - return torch.log(x1 / x2) - - -@TRANSFORMER_LAYER.register_module() -class DetrTransformerDecoderLayer(BaseTransformerLayer): - """Implements decoder layer in DETR transformer. - - Args: - attn_cfgs (list[`mmcv.ConfigDict`] | list[dict] | dict )): - Configs for self_attention or cross_attention, the order - should be consistent with it in `operation_order`. If it is - a dict, it would be expand to the number of attention in - `operation_order`. 
- feedforward_channels (int): The hidden dimension for FFNs. - ffn_dropout (float): Probability of an element to be zeroed - in ffn. Default 0.0. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm'). - Default:None - act_cfg (dict): The activation config for FFNs. Default: `LN` - norm_cfg (dict): Config dict for normalization layer. - Default: `LN`. - ffn_num_fcs (int): The number of fully-connected layers in FFNs. - Default:2. - """ - - def __init__(self, - attn_cfgs, - feedforward_channels, - ffn_dropout=0.0, - operation_order=None, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - ffn_num_fcs=2, - **kwargs): - super(DetrTransformerDecoderLayer, self).__init__( - attn_cfgs=attn_cfgs, - feedforward_channels=feedforward_channels, - ffn_dropout=ffn_dropout, - operation_order=operation_order, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - ffn_num_fcs=ffn_num_fcs, - **kwargs) - assert len(operation_order) == 6 - assert set(operation_order) == set( - ['self_attn', 'norm', 'cross_attn', 'ffn']) - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class DetrTransformerEncoder(TransformerLayerSequence): - """TransformerEncoder of DETR. - - Args: - post_norm_cfg (dict): Config of last normalization layer. Default: - `LN`. Only used when `self.pre_norm` is `True` - """ - - def __init__(self, *args, post_norm_cfg=dict(type='LN'), **kwargs): - super(DetrTransformerEncoder, self).__init__(*args, **kwargs) - if post_norm_cfg is not None: - self.post_norm = build_norm_layer( - post_norm_cfg, self.embed_dims)[1] if self.pre_norm else None - else: - assert not self.pre_norm, f'Use prenorm in ' \ - f'{self.__class__.__name__},' \ - f'Please specify post_norm_cfg' - self.post_norm = None - - def forward(self, *args, **kwargs): - """Forward function for `TransformerCoder`. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. - """ - x = super(DetrTransformerEncoder, self).forward(*args, **kwargs) - if self.post_norm is not None: - x = self.post_norm(x) - return x - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class DetrTransformerDecoder(TransformerLayerSequence): - """Implements the decoder in DETR transformer. - - Args: - return_intermediate (bool): Whether to return intermediate outputs. - post_norm_cfg (dict): Config of last normalization layer. Default: - `LN`. - """ - - def __init__(self, - *args, - post_norm_cfg=dict(type='LN'), - return_intermediate=False, - **kwargs): - - super(DetrTransformerDecoder, self).__init__(*args, **kwargs) - self.return_intermediate = return_intermediate - if post_norm_cfg is not None: - self.post_norm = build_norm_layer(post_norm_cfg, - self.embed_dims)[1] - else: - self.post_norm = None - - def forward(self, query, *args, **kwargs): - """Forward function for `TransformerDecoder`. - - Args: - query (Tensor): Input query with shape - `(num_query, bs, embed_dims)`. - - Returns: - Tensor: Results with shape [1, num_query, bs, embed_dims] when - return_intermediate is `False`, otherwise it has shape - [num_layers, num_query, bs, embed_dims]. 
- """ - if not self.return_intermediate: - x = super().forward(query, *args, **kwargs) - if self.post_norm: - x = self.post_norm(x)[None] - return x - - intermediate = [] - for layer in self.layers: - query = layer(query, *args, **kwargs) - if self.return_intermediate: - if self.post_norm is not None: - intermediate.append(self.post_norm(query)) - else: - intermediate.append(query) - return torch.stack(intermediate) - - -@TRANSFORMER.register_module() -class Transformer(BaseModule): - """Implements the DETR transformer. - - Following the official DETR implementation, this module copy-paste - from torch.nn.Transformer with modifications: - - * positional encodings are passed in MultiheadAttention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers - - See `paper: End-to-End Object Detection with Transformers - `_ for details. - - Args: - encoder (`mmcv.ConfigDict` | Dict): Config of - TransformerEncoder. Defaults to None. - decoder ((`mmcv.ConfigDict` | Dict)): Config of - TransformerDecoder. Defaults to None - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Defaults to None. - """ - - def __init__(self, encoder=None, decoder=None, init_cfg=None): - super(Transformer, self).__init__(init_cfg=init_cfg) - self.encoder = build_transformer_layer_sequence(encoder) - self.decoder = build_transformer_layer_sequence(decoder) - self.embed_dims = self.encoder.embed_dims - - def init_weights(self): - # follow the official DETR to init parameters - for m in self.modules(): - if hasattr(m, 'weight') and m.weight.dim() > 1: - xavier_init(m, distribution='uniform') - self._is_init = True - - def forward(self, x, mask, query_embed, pos_embed): - """Forward function for `Transformer`. - - Args: - x (Tensor): Input query with shape [bs, c, h, w] where - c = embed_dims. - mask (Tensor): The key_padding_mask used for encoder and decoder, - with shape [bs, h, w]. - query_embed (Tensor): The query embedding for decoder, with shape - [num_query, c]. - pos_embed (Tensor): The positional encoding for encoder and - decoder, with the same shape as `x`. - - Returns: - tuple[Tensor]: results of decoder containing the following tensor. - - - out_dec: Output from decoder. If return_intermediate_dec \ - is True output has shape [num_dec_layers, bs, - num_query, embed_dims], else has shape [1, bs, \ - num_query, embed_dims]. - - memory: Output results from encoder, with shape \ - [bs, embed_dims, h, w]. - """ - bs, c, h, w = x.shape - # use `view` instead of `flatten` for dynamically exporting to ONNX - x = x.view(bs, c, -1).permute(2, 0, 1) # [bs, c, h, w] -> [h*w, bs, c] - pos_embed = pos_embed.view(bs, c, -1).permute(2, 0, 1) - query_embed = query_embed.unsqueeze(1).repeat( - 1, bs, 1) # [num_query, dim] -> [num_query, bs, dim] - mask = mask.view(bs, -1) # [bs, h, w] -> [bs, h*w] - memory = self.encoder( - query=x, - key=None, - value=None, - query_pos=pos_embed, - query_key_padding_mask=mask) - target = torch.zeros_like(query_embed) - # out_dec: [num_layers, num_query, bs, dim] - out_dec = self.decoder( - query=target, - key=memory, - value=memory, - key_pos=pos_embed, - query_pos=query_embed, - key_padding_mask=mask) - out_dec = out_dec.transpose(1, 2) - memory = memory.permute(1, 2, 0).reshape(bs, c, h, w) - return out_dec, memory - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class DeformableDetrTransformerDecoder(TransformerLayerSequence): - """Implements the decoder in DETR transformer. 
- - Args: - return_intermediate (bool): Whether to return intermediate outputs. - coder_norm_cfg (dict): Config of last normalization layer. Default: - `LN`. - """ - - def __init__(self, *args, return_intermediate=False, **kwargs): - - super(DeformableDetrTransformerDecoder, self).__init__(*args, **kwargs) - self.return_intermediate = return_intermediate - - def forward(self, - query, - *args, - reference_points=None, - valid_ratios=None, - reg_branches=None, - **kwargs): - """Forward function for `TransformerDecoder`. - - Args: - query (Tensor): Input query with shape - `(num_query, bs, embed_dims)`. - reference_points (Tensor): The reference - points of offset. has shape - (bs, num_query, 4) when as_two_stage, - otherwise has shape ((bs, num_query, 2). - valid_ratios (Tensor): The radios of valid - points on the feature map, has shape - (bs, num_levels, 2) - reg_branch: (obj:`nn.ModuleList`): Used for - refining the regression results. Only would - be passed when with_box_refine is True, - otherwise would be passed a `None`. - - Returns: - Tensor: Results with shape [1, num_query, bs, embed_dims] when - return_intermediate is `False`, otherwise it has shape - [num_layers, num_query, bs, embed_dims]. - """ - output = query - intermediate = [] - intermediate_reference_points = [] - for lid, layer in enumerate(self.layers): - if reference_points.shape[-1] == 4: - reference_points_input = reference_points[:, :, None] * \ - torch.cat([valid_ratios, valid_ratios], -1)[:, None] - else: - assert reference_points.shape[-1] == 2 - reference_points_input = reference_points[:, :, None] * \ - valid_ratios[:, None] - output = layer( - output, - *args, - reference_points=reference_points_input, - **kwargs) - output = output.permute(1, 0, 2) - - if reg_branches is not None: - tmp = reg_branches[lid](output) - if reference_points.shape[-1] == 4: - new_reference_points = tmp + inverse_sigmoid( - reference_points) - new_reference_points = new_reference_points.sigmoid() - else: - assert reference_points.shape[-1] == 2 - new_reference_points = tmp - new_reference_points[..., :2] = tmp[ - ..., :2] + inverse_sigmoid(reference_points) - new_reference_points = new_reference_points.sigmoid() - reference_points = new_reference_points.detach() - - output = output.permute(1, 0, 2) - if self.return_intermediate: - intermediate.append(output) - intermediate_reference_points.append(reference_points) - - if self.return_intermediate: - return torch.stack(intermediate), torch.stack( - intermediate_reference_points) - - return output, reference_points - - -@TRANSFORMER.register_module() -class DeformableDetrTransformer(Transformer): - """Implements the DeformableDETR transformer. - - Args: - as_two_stage (bool): Generate query from encoder features. - Default: False. - num_feature_levels (int): Number of feature maps from FPN: - Default: 4. - two_stage_num_proposals (int): Number of proposals when set - `as_two_stage` as True. Default: 300. 
- """ - - def __init__(self, - as_two_stage=False, - num_feature_levels=4, - two_stage_num_proposals=300, - **kwargs): - super(DeformableDetrTransformer, self).__init__(**kwargs) - self.as_two_stage = as_two_stage - self.num_feature_levels = num_feature_levels - self.two_stage_num_proposals = two_stage_num_proposals - self.embed_dims = self.encoder.embed_dims - self.init_layers() - - def init_layers(self): - """Initialize layers of the DeformableDetrTransformer.""" - self.level_embeds = nn.Parameter( - torch.Tensor(self.num_feature_levels, self.embed_dims)) - - if self.as_two_stage: - self.enc_output = nn.Linear(self.embed_dims, self.embed_dims) - self.enc_output_norm = nn.LayerNorm(self.embed_dims) - self.pos_trans = nn.Linear(self.embed_dims * 2, - self.embed_dims * 2) - self.pos_trans_norm = nn.LayerNorm(self.embed_dims * 2) - else: - self.reference_points = nn.Linear(self.embed_dims, 2) - - def init_weights(self): - """Initialize the transformer weights.""" - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - for m in self.modules(): - if isinstance(m, MultiScaleDeformableAttention): - m.init_weights() - if not self.as_two_stage: - xavier_init(self.reference_points, distribution='uniform', bias=0.) - normal_(self.level_embeds) - - def gen_encoder_output_proposals(self, memory, memory_padding_mask, - spatial_shapes): - """Generate proposals from encoded memory. - - Args: - memory (Tensor) : The output of encoder, - has shape (bs, num_key, embed_dim). num_key is - equal the number of points on feature map from - all level. - memory_padding_mask (Tensor): Padding mask for memory. - has shape (bs, num_key). - spatial_shapes (Tensor): The shape of all feature maps. - has shape (num_level, 2). - - Returns: - tuple: A tuple of feature map and bbox prediction. - - - output_memory (Tensor): The input of decoder, \ - has shape (bs, num_key, embed_dim). num_key is \ - equal the number of points on feature map from \ - all levels. - - output_proposals (Tensor): The normalized proposal \ - after a inverse sigmoid, has shape \ - (bs, num_keys, 4). 
- """ - - N, S, C = memory.shape - proposals = [] - _cur = 0 - for lvl, (H, W) in enumerate(spatial_shapes): - mask_flatten_ = memory_padding_mask[:, _cur:(_cur + H * W)].view( - N, H, W, 1) - valid_H = torch.sum(~mask_flatten_[:, :, 0, 0], 1) - valid_W = torch.sum(~mask_flatten_[:, 0, :, 0], 1) - - grid_y, grid_x = torch.meshgrid( - torch.linspace( - 0, H - 1, H, dtype=torch.float32, device=memory.device), - torch.linspace( - 0, W - 1, W, dtype=torch.float32, device=memory.device)) - grid = torch.cat([grid_x.unsqueeze(-1), grid_y.unsqueeze(-1)], -1) - - scale = torch.cat([valid_W.unsqueeze(-1), - valid_H.unsqueeze(-1)], 1).view(N, 1, 1, 2) - grid = (grid.unsqueeze(0).expand(N, -1, -1, -1) + 0.5) / scale - wh = torch.ones_like(grid) * 0.05 * (2.0**lvl) - proposal = torch.cat((grid, wh), -1).view(N, -1, 4) - proposals.append(proposal) - _cur += (H * W) - output_proposals = torch.cat(proposals, 1) - output_proposals_valid = ((output_proposals > 0.01) & - (output_proposals < 0.99)).all( - -1, keepdim=True) - output_proposals = torch.log(output_proposals / (1 - output_proposals)) - output_proposals = output_proposals.masked_fill( - memory_padding_mask.unsqueeze(-1), float('inf')) - output_proposals = output_proposals.masked_fill( - ~output_proposals_valid, float('inf')) - - output_memory = memory - output_memory = output_memory.masked_fill( - memory_padding_mask.unsqueeze(-1), float(0)) - output_memory = output_memory.masked_fill(~output_proposals_valid, - float(0)) - output_memory = self.enc_output_norm(self.enc_output(output_memory)) - return output_memory, output_proposals - - @staticmethod - def get_reference_points(spatial_shapes, valid_ratios, device): - """Get the reference points used in decoder. - - Args: - spatial_shapes (Tensor): The shape of all - feature maps, has shape (num_level, 2). - valid_ratios (Tensor): The radios of valid - points on the feature map, has shape - (bs, num_levels, 2) - device (obj:`device`): The device where - reference_points should be. - - Returns: - Tensor: reference points used in decoder, has \ - shape (bs, num_keys, num_levels, 2). 
- """ - reference_points_list = [] - for lvl, (H, W) in enumerate(spatial_shapes): - # TODO check this 0.5 - ref_y, ref_x = torch.meshgrid( - torch.linspace( - 0.5, H - 0.5, H, dtype=torch.float32, device=device), - torch.linspace( - 0.5, W - 0.5, W, dtype=torch.float32, device=device)) - ref_y = ref_y.reshape(-1)[None] / ( - valid_ratios[:, None, lvl, 1] * H) - ref_x = ref_x.reshape(-1)[None] / ( - valid_ratios[:, None, lvl, 0] * W) - ref = torch.stack((ref_x, ref_y), -1) - reference_points_list.append(ref) - reference_points = torch.cat(reference_points_list, 1) - reference_points = reference_points[:, :, None] * valid_ratios[:, None] - return reference_points - - def get_valid_ratio(self, mask): - """Get the valid radios of feature maps of all level.""" - _, H, W = mask.shape - valid_H = torch.sum(~mask[:, :, 0], 1) - valid_W = torch.sum(~mask[:, 0, :], 1) - valid_ratio_h = valid_H.float() / H - valid_ratio_w = valid_W.float() / W - valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1) - return valid_ratio - - def get_proposal_pos_embed(self, - proposals, - num_pos_feats=128, - temperature=10000): - """Get the position embedding of proposal.""" - scale = 2 * math.pi - dim_t = torch.arange( - num_pos_feats, dtype=torch.float32, device=proposals.device) - dim_t = temperature**(2 * (dim_t // 2) / num_pos_feats) - # N, L, 4 - proposals = proposals.sigmoid() * scale - # N, L, 4, 128 - pos = proposals[:, :, :, None] / dim_t - # N, L, 4, 64, 2 - pos = torch.stack((pos[:, :, :, 0::2].sin(), pos[:, :, :, 1::2].cos()), - dim=4).flatten(2) - return pos - - def forward(self, - mlvl_feats, - mlvl_masks, - query_embed, - mlvl_pos_embeds, - reg_branches=None, - cls_branches=None, - **kwargs): - """Forward function for `Transformer`. - - Args: - mlvl_feats (list(Tensor)): Input queries from - different level. Each element has shape - [bs, embed_dims, h, w]. - mlvl_masks (list(Tensor)): The key_padding_mask from - different level used for encoder and decoder, - each element has shape [bs, h, w]. - query_embed (Tensor): The query embedding for decoder, - with shape [num_query, c]. - mlvl_pos_embeds (list(Tensor)): The positional encoding - of feats from different level, has the shape - [bs, embed_dims, h, w]. - reg_branches (obj:`nn.ModuleList`): Regression heads for - feature maps from each decoder layer. Only would - be passed when - `with_box_refine` is True. Default to None. - cls_branches (obj:`nn.ModuleList`): Classification heads - for feature maps from each decoder layer. Only would - be passed when `as_two_stage` - is True. Default to None. - - - Returns: - tuple[Tensor]: results of decoder containing the following tensor. - - - inter_states: Outputs from decoder. If - return_intermediate_dec is True output has shape \ - (num_dec_layers, bs, num_query, embed_dims), else has \ - shape (1, bs, num_query, embed_dims). - - init_reference_out: The initial value of reference \ - points, has shape (bs, num_queries, 4). - - inter_references_out: The internal value of reference \ - points in decoder, has shape \ - (num_dec_layers, bs,num_query, embed_dims) - - enc_outputs_class: The classification score of \ - proposals generated from \ - encoder's feature maps, has shape \ - (batch, h*w, num_classes). \ - Only would be returned when `as_two_stage` is True, \ - otherwise None. - - enc_outputs_coord_unact: The regression results \ - generated from encoder's feature maps., has shape \ - (batch, h*w, 4). Only would \ - be returned when `as_two_stage` is True, \ - otherwise None. 
- """ - assert self.as_two_stage or query_embed is not None - - feat_flatten = [] - mask_flatten = [] - lvl_pos_embed_flatten = [] - spatial_shapes = [] - for lvl, (feat, mask, pos_embed) in enumerate( - zip(mlvl_feats, mlvl_masks, mlvl_pos_embeds)): - bs, c, h, w = feat.shape - spatial_shape = (h, w) - spatial_shapes.append(spatial_shape) - feat = feat.flatten(2).transpose(1, 2) - mask = mask.flatten(1) - pos_embed = pos_embed.flatten(2).transpose(1, 2) - lvl_pos_embed = pos_embed + self.level_embeds[lvl].view(1, 1, -1) - lvl_pos_embed_flatten.append(lvl_pos_embed) - feat_flatten.append(feat) - mask_flatten.append(mask) - feat_flatten = torch.cat(feat_flatten, 1) - mask_flatten = torch.cat(mask_flatten, 1) - lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1) - spatial_shapes = torch.as_tensor( - spatial_shapes, dtype=torch.long, device=feat_flatten.device) - level_start_index = torch.cat((spatial_shapes.new_zeros( - (1, )), spatial_shapes.prod(1).cumsum(0)[:-1])) - valid_ratios = torch.stack( - [self.get_valid_ratio(m) for m in mlvl_masks], 1) - - reference_points = \ - self.get_reference_points(spatial_shapes, - valid_ratios, - device=feat.device) - - feat_flatten = feat_flatten.permute(1, 0, 2) # (H*W, bs, embed_dims) - lvl_pos_embed_flatten = lvl_pos_embed_flatten.permute( - 1, 0, 2) # (H*W, bs, embed_dims) - memory = self.encoder( - query=feat_flatten, - key=None, - value=None, - query_pos=lvl_pos_embed_flatten, - query_key_padding_mask=mask_flatten, - spatial_shapes=spatial_shapes, - reference_points=reference_points, - level_start_index=level_start_index, - valid_ratios=valid_ratios, - **kwargs) - - memory = memory.permute(1, 0, 2) - bs, _, c = memory.shape - if self.as_two_stage: - output_memory, output_proposals = \ - self.gen_encoder_output_proposals( - memory, mask_flatten, spatial_shapes) - enc_outputs_class = cls_branches[self.decoder.num_layers]( - output_memory) - enc_outputs_coord_unact = \ - reg_branches[ - self.decoder.num_layers](output_memory) + output_proposals - - topk = self.two_stage_num_proposals - # We only use the first channel in enc_outputs_class as foreground, - # the other (num_classes - 1) channels are actually not used. - # Its targets are set to be 0s, which indicates the first - # class (foreground) because we use [0, num_classes - 1] to - # indicate class labels, background class is indicated by - # num_classes (similar convention in RPN). - # See https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/dense_heads/deformable_detr_head.py#L241 # noqa - # This follows the official implementation of Deformable DETR. 
- topk_proposals = torch.topk( - enc_outputs_class[..., 0], topk, dim=1)[1] - topk_coords_unact = torch.gather( - enc_outputs_coord_unact, 1, - topk_proposals.unsqueeze(-1).repeat(1, 1, 4)) - topk_coords_unact = topk_coords_unact.detach() - reference_points = topk_coords_unact.sigmoid() - init_reference_out = reference_points - pos_trans_out = self.pos_trans_norm( - self.pos_trans(self.get_proposal_pos_embed(topk_coords_unact))) - query_pos, query = torch.split(pos_trans_out, c, dim=2) - else: - query_pos, query = torch.split(query_embed, c, dim=1) - query_pos = query_pos.unsqueeze(0).expand(bs, -1, -1) - query = query.unsqueeze(0).expand(bs, -1, -1) - reference_points = self.reference_points(query_pos).sigmoid() - init_reference_out = reference_points - - # decoder - query = query.permute(1, 0, 2) - memory = memory.permute(1, 0, 2) - query_pos = query_pos.permute(1, 0, 2) - inter_states, inter_references = self.decoder( - query=query, - key=None, - value=memory, - query_pos=query_pos, - key_padding_mask=mask_flatten, - reference_points=reference_points, - spatial_shapes=spatial_shapes, - level_start_index=level_start_index, - valid_ratios=valid_ratios, - reg_branches=reg_branches, - **kwargs) - - inter_references_out = inter_references - if self.as_two_stage: - return inter_states, init_reference_out,\ - inter_references_out, enc_outputs_class,\ - enc_outputs_coord_unact - return inter_states, init_reference_out, \ - inter_references_out, None, None - - -@TRANSFORMER.register_module() -class DynamicConv(BaseModule): - """Implements Dynamic Convolution. - - This module generate parameters for each sample and - use bmm to implement 1*1 convolution. Code is modified - from the `official github repo `_ . - - Args: - in_channels (int): The input feature channel. - Defaults to 256. - feat_channels (int): The inner feature channel. - Defaults to 64. - out_channels (int, optional): The output feature channel. - When not specified, it will be set to `in_channels` - by default - input_feat_shape (int): The shape of input feature. - Defaults to 7. - with_proj (bool): Project two-dimentional feature to - one-dimentional feature. Default to True. - act_cfg (dict): The activation config for DynamicConv. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. 
- """ - - def __init__(self, - in_channels=256, - feat_channels=64, - out_channels=None, - input_feat_shape=7, - with_proj=True, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - init_cfg=None): - super(DynamicConv, self).__init__(init_cfg) - self.in_channels = in_channels - self.feat_channels = feat_channels - self.out_channels_raw = out_channels - self.input_feat_shape = input_feat_shape - self.with_proj = with_proj - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.out_channels = out_channels if out_channels else in_channels - - self.num_params_in = self.in_channels * self.feat_channels - self.num_params_out = self.out_channels * self.feat_channels - self.dynamic_layer = nn.Linear( - self.in_channels, self.num_params_in + self.num_params_out) - - self.norm_in = build_norm_layer(norm_cfg, self.feat_channels)[1] - self.norm_out = build_norm_layer(norm_cfg, self.out_channels)[1] - - self.activation = build_activation_layer(act_cfg) - - num_output = self.out_channels * input_feat_shape**2 - if self.with_proj: - self.fc_layer = nn.Linear(num_output, self.out_channels) - self.fc_norm = build_norm_layer(norm_cfg, self.out_channels)[1] - - def forward(self, param_feature, input_feature): - """Forward function for `DynamicConv`. - - Args: - param_feature (Tensor): The feature can be used - to generate the parameter, has shape - (num_all_proposals, in_channels). - input_feature (Tensor): Feature that - interact with parameters, has shape - (num_all_proposals, in_channels, H, W). - - Returns: - Tensor: The output feature has shape - (num_all_proposals, out_channels). - """ - input_feature = input_feature.flatten(2).permute(2, 0, 1) - - input_feature = input_feature.permute(1, 0, 2) - parameters = self.dynamic_layer(param_feature) - - param_in = parameters[:, :self.num_params_in].view( - -1, self.in_channels, self.feat_channels) - param_out = parameters[:, -self.num_params_out:].view( - -1, self.feat_channels, self.out_channels) - - # input_feature has shape (num_all_proposals, H*W, in_channels) - # param_in has shape (num_all_proposals, in_channels, feat_channels) - # feature has shape (num_all_proposals, H*W, feat_channels) - features = torch.bmm(input_feature, param_in) - features = self.norm_in(features) - features = self.activation(features) - - # param_out has shape (batch_size, feat_channels, out_channels) - features = torch.bmm(features, param_out) - features = self.norm_out(features) - features = self.activation(features) - - if self.with_proj: - features = features.flatten(1) - features = self.fc_layer(features) - features = self.fc_norm(features) - features = self.activation(features) - - return features diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/__init__.py deleted file mode 100644 index 350452a9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .collect_env import collect_env -from .compat_config import compat_cfg -from .logger import get_caller_name, get_root_logger, log_img_scale -from .misc import find_latest_checkpoint, update_data_root -from .setup_env import setup_multi_processes -from .split_batch import split_batch -from .util_distribution import build_ddp, build_dp, get_device - -__all__ = [ - 'get_root_logger', 'collect_env', 'find_latest_checkpoint', - 'update_data_root', 'setup_multi_processes', 'get_caller_name', - 'log_img_scale', 'compat_cfg', 'split_batch', 'build_ddp', 'build_dp', - 'get_device' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/collect_env.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/collect_env.py deleted file mode 100644 index 97e25c0e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/collect_env.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import collect_env as collect_base_env -from mmcv.utils import get_git_hash - -import mmdet - - -def collect_env(): - """Collect the information of the running environments.""" - env_info = collect_base_env() - env_info['MMDetection'] = mmdet.__version__ + '+' + get_git_hash()[:7] - return env_info - - -if __name__ == '__main__': - for name, val in collect_env().items(): - print(f'{name}: {val}') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/compat_config.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/compat_config.py deleted file mode 100644 index 05aa37dc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/compat_config.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -from mmcv import ConfigDict - - -def compat_cfg(cfg): - """This function would modify some filed to keep the compatibility of - config. - - For example, it will move some args which will be deprecated to the correct - fields. - """ - cfg = copy.deepcopy(cfg) - cfg = compat_imgs_per_gpu(cfg) - cfg = compat_loader_args(cfg) - cfg = compat_runner_args(cfg) - return cfg - - -def compat_runner_args(cfg): - if 'runner' not in cfg: - cfg.runner = ConfigDict({ - 'type': 'EpochBasedRunner', - 'max_epochs': cfg.total_epochs - }) - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - else: - if 'total_epochs' in cfg: - assert cfg.total_epochs == cfg.runner.max_epochs - return cfg - - -def compat_imgs_per_gpu(cfg): - cfg = copy.deepcopy(cfg) - if 'imgs_per_gpu' in cfg.data: - warnings.warn('"imgs_per_gpu" is deprecated in MMDet V2.0. 
' - 'Please use "samples_per_gpu" instead') - if 'samples_per_gpu' in cfg.data: - warnings.warn( - f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and ' - f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"' - f'={cfg.data.imgs_per_gpu} is used in this experiments') - else: - warnings.warn('Automatically set "samples_per_gpu"="imgs_per_gpu"=' - f'{cfg.data.imgs_per_gpu} in this experiments') - cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu - return cfg - - -def compat_loader_args(cfg): - """Deprecated sample_per_gpu in cfg.data.""" - - cfg = copy.deepcopy(cfg) - if 'train_dataloader' not in cfg.data: - cfg.data['train_dataloader'] = ConfigDict() - if 'val_dataloader' not in cfg.data: - cfg.data['val_dataloader'] = ConfigDict() - if 'test_dataloader' not in cfg.data: - cfg.data['test_dataloader'] = ConfigDict() - - # special process for train_dataloader - if 'samples_per_gpu' in cfg.data: - - samples_per_gpu = cfg.data.pop('samples_per_gpu') - assert 'samples_per_gpu' not in \ - cfg.data.train_dataloader, ('`samples_per_gpu` are set ' - 'in `data` field and ` ' - 'data.train_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.train_dataloader`. ') - cfg.data.train_dataloader['samples_per_gpu'] = samples_per_gpu - - if 'persistent_workers' in cfg.data: - - persistent_workers = cfg.data.pop('persistent_workers') - assert 'persistent_workers' not in \ - cfg.data.train_dataloader, ('`persistent_workers` are set ' - 'in `data` field and ` ' - 'data.train_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.train_dataloader`. ') - cfg.data.train_dataloader['persistent_workers'] = persistent_workers - - if 'workers_per_gpu' in cfg.data: - - workers_per_gpu = cfg.data.pop('workers_per_gpu') - cfg.data.train_dataloader['workers_per_gpu'] = workers_per_gpu - cfg.data.val_dataloader['workers_per_gpu'] = workers_per_gpu - cfg.data.test_dataloader['workers_per_gpu'] = workers_per_gpu - - # special process for val_dataloader - if 'samples_per_gpu' in cfg.data.val: - # keep default value of `sample_per_gpu` is 1 - assert 'samples_per_gpu' not in \ - cfg.data.val_dataloader, ('`samples_per_gpu` are set ' - 'in `data.val` field and ` ' - 'data.val_dataloader` at ' - 'the same time. ' - 'Please only set it in ' - '`data.val_dataloader`. ') - cfg.data.val_dataloader['samples_per_gpu'] = \ - cfg.data.val.pop('samples_per_gpu') - # special process for val_dataloader - - # in case the test dataset is concatenated - if isinstance(cfg.data.test, dict): - if 'samples_per_gpu' in cfg.data.test: - assert 'samples_per_gpu' not in \ - cfg.data.test_dataloader, ('`samples_per_gpu` are set ' - 'in `data.test` field and ` ' - 'data.test_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.test_dataloader`. ') - - cfg.data.test_dataloader['samples_per_gpu'] = \ - cfg.data.test.pop('samples_per_gpu') - - elif isinstance(cfg.data.test, list): - for ds_cfg in cfg.data.test: - if 'samples_per_gpu' in ds_cfg: - assert 'samples_per_gpu' not in \ - cfg.data.test_dataloader, ('`samples_per_gpu` are set ' - 'in `data.test` field and ` ' - 'data.test_dataloader` at' - ' the same time. ' - 'Please only set it in ' - '`data.test_dataloader`. 
') - samples_per_gpu = max( - [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test]) - cfg.data.test_dataloader['samples_per_gpu'] = samples_per_gpu - - return cfg diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/contextmanagers.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/contextmanagers.py deleted file mode 100644 index fa12bfca..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/contextmanagers.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import asyncio -import contextlib -import logging -import os -import time -from typing import List - -import torch - -logger = logging.getLogger(__name__) - -DEBUG_COMPLETED_TIME = bool(os.environ.get('DEBUG_COMPLETED_TIME', False)) - - -@contextlib.asynccontextmanager -async def completed(trace_name='', - name='', - sleep_interval=0.05, - streams: List[torch.cuda.Stream] = None): - """Async context manager that waits for work to complete on given CUDA - streams.""" - if not torch.cuda.is_available(): - yield - return - - stream_before_context_switch = torch.cuda.current_stream() - if not streams: - streams = [stream_before_context_switch] - else: - streams = [s if s else stream_before_context_switch for s in streams] - - end_events = [ - torch.cuda.Event(enable_timing=DEBUG_COMPLETED_TIME) for _ in streams - ] - - if DEBUG_COMPLETED_TIME: - start = torch.cuda.Event(enable_timing=True) - stream_before_context_switch.record_event(start) - - cpu_start = time.monotonic() - logger.debug('%s %s starting, streams: %s', trace_name, name, streams) - grad_enabled_before = torch.is_grad_enabled() - try: - yield - finally: - current_stream = torch.cuda.current_stream() - assert current_stream == stream_before_context_switch - - if DEBUG_COMPLETED_TIME: - cpu_end = time.monotonic() - for i, stream in enumerate(streams): - event = end_events[i] - stream.record_event(event) - - grad_enabled_after = torch.is_grad_enabled() - - # observed change of torch.is_grad_enabled() during concurrent run of - # async_test_bboxes code - assert (grad_enabled_before == grad_enabled_after - ), 'Unexpected is_grad_enabled() value change' - - are_done = [e.query() for e in end_events] - logger.debug('%s %s completed: %s streams: %s', trace_name, name, - are_done, streams) - with torch.cuda.stream(stream_before_context_switch): - while not all(are_done): - await asyncio.sleep(sleep_interval) - are_done = [e.query() for e in end_events] - logger.debug( - '%s %s completed: %s streams: %s', - trace_name, - name, - are_done, - streams, - ) - - current_stream = torch.cuda.current_stream() - assert current_stream == stream_before_context_switch - - if DEBUG_COMPLETED_TIME: - cpu_time = (cpu_end - cpu_start) * 1000 - stream_times_ms = '' - for i, stream in enumerate(streams): - elapsed_time = start.elapsed_time(end_events[i]) - stream_times_ms += f' {stream} {elapsed_time:.2f} ms' - logger.info('%s %s %.2f ms %s', trace_name, name, cpu_time, - stream_times_ms) - - -@contextlib.asynccontextmanager -async def concurrent(streamqueue: asyncio.Queue, - trace_name='concurrent', - name='stream'): - """Run code concurrently in different streams. - - :param streamqueue: asyncio.Queue instance. - - Queue tasks define the pool of streams used for concurrent execution. 
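The stream pool that `concurrent` expects is just an `asyncio.Queue` pre-filled with CUDA streams. A minimal setup sketch (assumes a CUDA device is available; the pool size and the awaited call are placeholders, not part of this code base):

```python
import asyncio
import torch

streamqueue: asyncio.Queue = asyncio.Queue()
for _ in range(2):                          # pool size is arbitrary
    streamqueue.put_nowait(torch.cuda.Stream())

# A request handler would then run its forward pass on a pooled stream:
#     async with concurrent(streamqueue):
#         result = await some_async_forward(...)   # hypothetical coroutine
```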
- """ - if not torch.cuda.is_available(): - yield - return - - initial_stream = torch.cuda.current_stream() - - with torch.cuda.stream(initial_stream): - stream = await streamqueue.get() - assert isinstance(stream, torch.cuda.Stream) - - try: - with torch.cuda.stream(stream): - logger.debug('%s %s is starting, stream: %s', trace_name, name, - stream) - yield - current = torch.cuda.current_stream() - assert current == stream - logger.debug('%s %s has finished, stream: %s', trace_name, - name, stream) - finally: - streamqueue.task_done() - streamqueue.put_nowait(stream) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/logger.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/logger.py deleted file mode 100644 index 485f641b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/logger.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import logging - -from mmcv.utils import get_logger - - -def get_root_logger(log_file=None, log_level=logging.INFO): - """Get root logger. - - Args: - log_file (str, optional): File path of log. Defaults to None. - log_level (int, optional): The level of logger. - Defaults to logging.INFO. - - Returns: - :obj:`logging.Logger`: The obtained logger - """ - logger = get_logger(name='mmdet', log_file=log_file, log_level=log_level) - - return logger - - -def get_caller_name(): - """Get name of caller method.""" - # this_func_frame = inspect.stack()[0][0] # i.e., get_caller_name - # callee_frame = inspect.stack()[1][0] # e.g., log_img_scale - caller_frame = inspect.stack()[2][0] # e.g., caller of log_img_scale - caller_method = caller_frame.f_code.co_name - try: - caller_class = caller_frame.f_locals['self'].__class__.__name__ - return f'{caller_class}.{caller_method}' - except KeyError: # caller is a function - return caller_method - - -def log_img_scale(img_scale, shape_order='hw', skip_square=False): - """Log image size. - - Args: - img_scale (tuple): Image size to be logged. - shape_order (str, optional): The order of image shape. - 'hw' for (height, width) and 'wh' for (width, height). - Defaults to 'hw'. - skip_square (bool, optional): Whether to skip logging for square - img_scale. Defaults to False. - - Returns: - bool: Whether to have done logging. - """ - if shape_order == 'hw': - height, width = img_scale - elif shape_order == 'wh': - width, height = img_scale - else: - raise ValueError(f'Invalid shape_order {shape_order}.') - - if skip_square and (height == width): - return False - - logger = get_root_logger() - caller = get_caller_name() - logger.info(f'image shape: height={height}, width={width} in {caller}') - - return True diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/misc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/misc.py deleted file mode 100644 index 4113672a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/misc.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import glob -import os -import os.path as osp -import warnings - -import mmcv -from mmcv.utils import print_log - - -def find_latest_checkpoint(path, suffix='pth'): - """Find the latest checkpoint from the working directory. - - Args: - path(str): The path to find checkpoints. - suffix(str): File extension. - Defaults to pth. - - Returns: - latest_path(str | None): File path of the latest checkpoint. - References: - .. 
[1] https://github.com/microsoft/SoftTeacher - /blob/main/ssod/utils/patch.py - """ - if not osp.exists(path): - warnings.warn('The path of checkpoints does not exist.') - return None - if osp.exists(osp.join(path, f'latest.{suffix}')): - return osp.join(path, f'latest.{suffix}') - - checkpoints = glob.glob(osp.join(path, f'*.{suffix}')) - if len(checkpoints) == 0: - warnings.warn('There are no checkpoints in the path.') - return None - latest = -1 - latest_path = None - for checkpoint in checkpoints: - count = int(osp.basename(checkpoint).split('_')[-1].split('.')[0]) - if count > latest: - latest = count - latest_path = checkpoint - return latest_path - - -def update_data_root(cfg, logger=None): - """Update data root according to env MMDET_DATASETS. - - If set env MMDET_DATASETS, update cfg.data_root according to - MMDET_DATASETS. Otherwise, using cfg.data_root as default. - - Args: - cfg (mmcv.Config): The model config need to modify - logger (logging.Logger | str | None): the way to print msg - """ - assert isinstance(cfg, mmcv.Config), \ - f'cfg got wrong type: {type(cfg)}, expected mmcv.Config' - - if 'MMDET_DATASETS' in os.environ: - dst_root = os.environ['MMDET_DATASETS'] - print_log(f'MMDET_DATASETS has been set to be {dst_root}.' - f'Using {dst_root} as data root.') - else: - return - - assert isinstance(cfg, mmcv.Config), \ - f'cfg got wrong type: {type(cfg)}, expected mmcv.Config' - - def update(cfg, src_str, dst_str): - for k, v in cfg.items(): - if isinstance(v, mmcv.ConfigDict): - update(cfg[k], src_str, dst_str) - if isinstance(v, str) and src_str in v: - cfg[k] = v.replace(src_str, dst_str) - - update(cfg.data, cfg.data_root, dst_root) - cfg.data_root = dst_root diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/profiling.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/profiling.py deleted file mode 100644 index 2f53f456..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/profiling.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import contextlib -import sys -import time - -import torch - -if sys.version_info >= (3, 7): - - @contextlib.contextmanager - def profile_time(trace_name, - name, - enabled=True, - stream=None, - end_stream=None): - """Print time spent by CPU and GPU. - - Useful as a temporary context manager to find sweet spots of code - suitable for async implementation. - """ - if (not enabled) or not torch.cuda.is_available(): - yield - return - stream = stream if stream else torch.cuda.current_stream() - end_stream = end_stream if end_stream else stream - start = torch.cuda.Event(enable_timing=True) - end = torch.cuda.Event(enable_timing=True) - stream.record_event(start) - try: - cpu_start = time.monotonic() - yield - finally: - cpu_end = time.monotonic() - end_stream.record_event(end) - end.synchronize() - cpu_time = (cpu_end - cpu_start) * 1000 - gpu_time = start.elapsed_time(end) - msg = f'{trace_name} {name} cpu_time {cpu_time:.2f} ms ' - msg += f'gpu_time {gpu_time:.2f} ms stream {stream}' - print(msg, end_stream) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/setup_env.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/setup_env.py deleted file mode 100644 index 6637cf87..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/setup_env.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
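Referring back to `find_latest_checkpoint` in `misc.py` above: it keeps the checkpoint whose trailing epoch/iteration number is largest. A toy check of that selection rule with hypothetical file names:

```python
# Hypothetical checkpoint names; the largest trailing number wins.
import os.path as osp

names = ['epoch_1.pth', 'epoch_2.pth', 'epoch_12.pth']
latest = max(names, key=lambda n: int(osp.basename(n).split('_')[-1].split('.')[0]))
print(latest)  # epoch_12.pth
```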
-import os -import platform -import warnings - -import cv2 -import torch.multiprocessing as mp - - -def setup_multi_processes(cfg): - """Setup multi-processing environment variables.""" - # set multi-process start method as `fork` to speed up the training - if platform.system() != 'Windows': - mp_start_method = cfg.get('mp_start_method', 'fork') - current_method = mp.get_start_method(allow_none=True) - if current_method is not None and current_method != mp_start_method: - warnings.warn( - f'Multi-processing start method `{mp_start_method}` is ' - f'different from the previous setting `{current_method}`.' - f'It will be force set to `{mp_start_method}`. You can change ' - f'this behavior by changing `mp_start_method` in your config.') - mp.set_start_method(mp_start_method, force=True) - - # disable opencv multithreading to avoid system being overloaded - opencv_num_threads = cfg.get('opencv_num_threads', 0) - cv2.setNumThreads(opencv_num_threads) - - # setup OMP threads - # This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py # noqa - workers_per_gpu = cfg.data.get('workers_per_gpu', 1) - if 'train_dataloader' in cfg.data: - workers_per_gpu = \ - max(cfg.data.train_dataloader.get('workers_per_gpu', 1), - workers_per_gpu) - - if 'OMP_NUM_THREADS' not in os.environ and workers_per_gpu > 1: - omp_num_threads = 1 - warnings.warn( - f'Setting OMP_NUM_THREADS environment variable for each process ' - f'to be {omp_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['OMP_NUM_THREADS'] = str(omp_num_threads) - - # setup MKL threads - if 'MKL_NUM_THREADS' not in os.environ and workers_per_gpu > 1: - mkl_num_threads = 1 - warnings.warn( - f'Setting MKL_NUM_THREADS environment variable for each process ' - f'to be {mkl_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['MKL_NUM_THREADS'] = str(mkl_num_threads) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/split_batch.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/split_batch.py deleted file mode 100644 index 0276fb33..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/split_batch.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def split_batch(img, img_metas, kwargs): - """Split data_batch by tags. - - Code is modified from - # noqa: E501 - - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys, see - :class:`mmdet.datasets.pipelines.Collect`. - kwargs (dict): Specific to concrete implementation. - - Returns: - data_groups (dict): a dict that data_batch splited by tags, - such as 'sup', 'unsup_teacher', and 'unsup_student'. 
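The grouping described above can be pictured with plain Python containers (hypothetical tags and entries); `split_batch` does the same thing, but stacks tensor values per group:

```python
# Hypothetical mini-batch: entries sharing a tag end up in the same group.
batch = {'img': ['a', 'b', 'c'], 'tag': ['sup', 'unsup_teacher', 'sup']}

groups = {}
for i, tag in enumerate(batch['tag']):
    groups.setdefault(tag, {'img': []})['img'].append(batch['img'][i])
print(groups)  # {'sup': {'img': ['a', 'c']}, 'unsup_teacher': {'img': ['b']}}
```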
- """ - - # only stack img in the batch - def fuse_list(obj_list, obj): - return torch.stack(obj_list) if isinstance(obj, - torch.Tensor) else obj_list - - # select data with tag from data_batch - def select_group(data_batch, current_tag): - group_flag = [tag == current_tag for tag in data_batch['tag']] - return { - k: fuse_list([vv for vv, gf in zip(v, group_flag) if gf], v) - for k, v in data_batch.items() - } - - kwargs.update({'img': img, 'img_metas': img_metas}) - kwargs.update({'tag': [meta['tag'] for meta in img_metas]}) - tags = list(set(kwargs['tag'])) - data_groups = {tag: select_group(kwargs, tag) for tag in tags} - for tag, group in data_groups.items(): - group.pop('tag') - return data_groups diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/util_distribution.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/util_distribution.py deleted file mode 100644 index a186bf6c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/util_distribution.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel - -dp_factory = {'cuda': MMDataParallel, 'cpu': MMDataParallel} - -ddp_factory = {'cuda': MMDistributedDataParallel} - - -def build_dp(model, device='cuda', dim=0, *args, **kwargs): - """build DataParallel module by device type. - - if device is cuda, return a MMDataParallel model; if device is mlu, - return a MLUDataParallel model. - - Args: - model (:class:`nn.Module`): model to be parallelized. - device (str): device type, cuda, cpu or mlu. Defaults to cuda. - dim (int): Dimension used to scatter the data. Defaults to 0. - - Returns: - nn.Module: the model to be parallelized. - """ - if device == 'cuda': - model = model.cuda() - elif device == 'mlu': - from mmcv.device.mlu import MLUDataParallel - dp_factory['mlu'] = MLUDataParallel - model = model.mlu() - - return dp_factory[device](model, dim=dim, *args, **kwargs) - - -def build_ddp(model, device='cuda', *args, **kwargs): - """Build DistributedDataParallel module by device type. - - If device is cuda, return a MMDistributedDataParallel model; - if device is mlu, return a MLUDistributedDataParallel model. - - Args: - model (:class:`nn.Module`): module to be parallelized. - device (str): device type, mlu or cuda. - - Returns: - :class:`nn.Module`: the module to be parallelized - - References: - .. [1] https://pytorch.org/docs/stable/generated/torch.nn.parallel. - DistributedDataParallel.html - """ - assert device in ['cuda', 'mlu'], 'Only available for cuda or mlu devices.' 
- if device == 'cuda': - model = model.cuda() - elif device == 'mlu': - from mmcv.device.mlu import MLUDistributedDataParallel - ddp_factory['mlu'] = MLUDistributedDataParallel - model = model.mlu() - - return ddp_factory[device](model, *args, **kwargs) - - -def is_mlu_available(): - """Returns a bool indicating if MLU is currently available.""" - return hasattr(torch, 'is_mlu_available') and torch.is_mlu_available() - - -def get_device(): - """Returns an available device, cpu, cuda or mlu.""" - is_device_available = { - 'cuda': torch.cuda.is_available(), - 'mlu': is_mlu_available() - } - device_list = [k for k, v in is_device_available.items() if v] - return device_list[0] if len(device_list) == 1 else 'cpu' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/util_mixins.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/util_mixins.py deleted file mode 100644 index b83b6617..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/util_mixins.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""This module defines the :class:`NiceRepr` mixin class, which defines a -``__repr__`` and ``__str__`` method that only depend on a custom ``__nice__`` -method, which you must define. This means you only have to overload one -function instead of two. Furthermore, if the object defines a ``__len__`` -method, then the ``__nice__`` method defaults to something sensible, otherwise -it is treated as abstract and raises ``NotImplementedError``. - -To use simply have your object inherit from :class:`NiceRepr` -(multi-inheritance should be ok). - -This code was copied from the ubelt library: https://github.com/Erotemic/ubelt - -Example: - >>> # Objects that define __nice__ have a default __str__ and __repr__ - >>> class Student(NiceRepr): - ... def __init__(self, name): - ... self.name = name - ... def __nice__(self): - ... return self.name - >>> s1 = Student('Alice') - >>> s2 = Student('Bob') - >>> print(f's1 = {s1}') - >>> print(f's2 = {s2}') - s1 = - s2 = - -Example: - >>> # Objects that define __len__ have a default __nice__ - >>> class Group(NiceRepr): - ... def __init__(self, data): - ... self.data = data - ... def __len__(self): - ... return len(self.data) - >>> g = Group([1, 2, 3]) - >>> print(f'g = {g}') - g = -""" -import warnings - - -class NiceRepr: - """Inherit from this class and define ``__nice__`` to "nicely" print your - objects. - - Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function - Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``. - If the inheriting class has a ``__len__``, method then the default - ``__nice__`` method will return its length. - - Example: - >>> class Foo(NiceRepr): - ... def __nice__(self): - ... return 'info' - >>> foo = Foo() - >>> assert str(foo) == '' - >>> assert repr(foo).startswith('>> class Bar(NiceRepr): - ... pass - >>> bar = Bar() - >>> import pytest - >>> with pytest.warns(None) as record: - >>> assert 'object at' in str(bar) - >>> assert 'object at' in repr(bar) - - Example: - >>> class Baz(NiceRepr): - ... def __len__(self): - ... 
return 5 - >>> baz = Baz() - >>> assert str(baz) == '' - """ - - def __nice__(self): - """str: a "nice" summary string describing this module""" - if hasattr(self, '__len__'): - # It is a common pattern for objects to use __len__ in __nice__ - # As a convenience we define a default __nice__ for these objects - return str(len(self)) - else: - # In all other cases force the subclass to overload __nice__ - raise NotImplementedError( - f'Define the __nice__ method for {self.__class__!r}') - - def __repr__(self): - """str: the string of the module""" - try: - nice = self.__nice__() - classname = self.__class__.__name__ - return f'<{classname}({nice}) at {hex(id(self))}>' - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - def __str__(self): - """str: the string of the module""" - try: - classname = self.__class__.__name__ - nice = self.__nice__() - return f'<{classname}({nice})>' - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/util_random.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/util_random.py deleted file mode 100644 index dc1ecb6c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/utils/util_random.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Helpers for random number generators.""" -import numpy as np - - -def ensure_rng(rng=None): - """Coerces input into a random number generator. - - If the input is None, then a global random state is returned. - - If the input is a numeric value, then that is used as a seed to construct a - random state. Otherwise the input is returned as-is. - - Adapted from [1]_. - - Args: - rng (int | numpy.random.RandomState | None): - if None, then defaults to the global rng. Otherwise this can be an - integer or a RandomState class - Returns: - (numpy.random.RandomState) : rng - - a numpy random number generator - - References: - .. [1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270 # noqa: E501 - """ - - if rng is None: - rng = np.random.mtrand._rand - elif isinstance(rng, int): - rng = np.random.RandomState(rng) - else: - rng = rng - return rng diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/version.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/version.py deleted file mode 100644 index 0e03a9d3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet/version.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -__version__ = '2.24.0' -short_version = __version__ - - -def parse_version_info(version_str): - version_info = [] - for x in version_str.split('.'): - if x.isdigit(): - version_info.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - version_info.append(int(patch_version[0])) - version_info.append(f'rc{patch_version[1]}') - return tuple(version_info) - - -version_info = parse_version_info(__version__) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/__init__.py deleted file mode 100644 index 3c7ec9a3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/__init__.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. 
All rights reserved. -import mmcv - -import mmdet -import mmseg -from .version import __version__, short_version - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -mmcv_minimum_version = '1.4.8' -mmcv_maximum_version = '1.6.0' -mmcv_version = digit_version(mmcv.__version__) - - -assert (mmcv_version >= digit_version(mmcv_minimum_version) - and mmcv_version <= digit_version(mmcv_maximum_version)), \ - f'MMCV=={mmcv.__version__} is used but incompatible. ' \ - f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.' - -mmdet_minimum_version = '2.24.0' -mmdet_maximum_version = '3.0.0' -mmdet_version = digit_version(mmdet.__version__) -assert (mmdet_version >= digit_version(mmdet_minimum_version) - and mmdet_version <= digit_version(mmdet_maximum_version)), \ - f'MMDET=={mmdet.__version__} is used but incompatible. ' \ - f'Please install mmdet>={mmdet_minimum_version}, ' \ - f'<={mmdet_maximum_version}.' - -mmseg_minimum_version = '0.20.0' -mmseg_maximum_version = '1.0.0' -mmseg_version = digit_version(mmseg.__version__) -assert (mmseg_version >= digit_version(mmseg_minimum_version) - and mmseg_version <= digit_version(mmseg_maximum_version)), \ - f'MMSEG=={mmseg.__version__} is used but incompatible. ' \ - f'Please install mmseg>={mmseg_minimum_version}, ' \ - f'<={mmseg_maximum_version}.' - -__all__ = ['__version__', 'short_version'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/__init__.py deleted file mode 100644 index c0cbe80e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from .inference import (convert_SyncBN, inference_detector, - inference_mono_3d_detector, - inference_multi_modality_detector, inference_segmentor, - init_model, show_result_meshlab) -from .test import single_gpu_test -from .train import init_random_seed, train_model - -__all__ = [ - 'inference_detector', 'init_model', 'single_gpu_test', - 'inference_mono_3d_detector', 'show_result_meshlab', 'convert_SyncBN', - 'train_model', 'inference_multi_modality_detector', 'inference_segmentor', - 'init_random_seed' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/inference.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/inference.py deleted file mode 100644 index 9a04b3fe..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/inference.py +++ /dev/null @@ -1,528 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
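The `rc` branch in `digit_version` above makes a release candidate sort just below its final release, so a candidate of the upper bound still passes the range check. A standalone check (the helper is adapted from the code above; the version strings are arbitrary examples):

```python
# Adapted from digit_version above; 'rc' versions sort below the final release.
def digit_version(version_str):
    out = []
    for x in version_str.split('.'):
        if x.isdigit():
            out.append(int(x))
        elif 'rc' in x:
            major, rc = x.split('rc')
            out.append(int(major) - 1)
            out.append(int(rc))
    return out

assert digit_version('1.4.8') <= digit_version('1.5.0rc1') <= digit_version('1.6.0')
print(digit_version('1.5.0rc1'))  # [1, 5, -1, 1]
```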
-import re -from copy import deepcopy -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.parallel import collate, scatter -from mmcv.runner import load_checkpoint - -from mmdet3d.core import (Box3DMode, CameraInstance3DBoxes, Coord3DMode, - DepthInstance3DBoxes, LiDARInstance3DBoxes, - show_multi_modality_result, show_result, - show_seg_result) -from mmdet3d.core.bbox import get_box_type -from mmdet3d.datasets.pipelines import Compose -from mmdet3d.models import build_model -from mmdet3d.utils import get_root_logger - - -def convert_SyncBN(config): - """Convert config's naiveSyncBN to BN. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - """ - if isinstance(config, dict): - for item in config: - if item == 'norm_cfg': - config[item]['type'] = config[item]['type']. \ - replace('naiveSyncBN', 'BN') - else: - convert_SyncBN(config[item]) - - -def init_model(config, checkpoint=None, device='cuda:0'): - """Initialize a model from config file, which could be a 3D detector or a - 3D segmentor. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str): Device to use. - - Returns: - nn.Module: The constructed detector. - """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - f'but got {type(config)}') - config.model.pretrained = None - convert_SyncBN(config.model) - config.model.train_cfg = None - model = build_model(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - if 'CLASSES' in checkpoint['meta']: - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - model.CLASSES = config.class_names - if 'PALETTE' in checkpoint['meta']: # 3D Segmentor - model.PALETTE = checkpoint['meta']['PALETTE'] - model.cfg = config # save the config in the model for convenience - if device != 'cpu': - torch.cuda.set_device(device) - else: - logger = get_root_logger() - logger.warning('Don\'t suggest using CPU device. ' - 'Some functions are not supported for now.') - model.to(device) - model.eval() - return model - - -def inference_detector(model, pcd): - """Inference point cloud with the detector. - - Args: - model (nn.Module): The loaded detector. - pcd (str): Point cloud files. - - Returns: - tuple: Predicted results and data from pipeline. 
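For orientation, a usage sketch of the `mmdet3d.apis` entry points documented above, as they existed before this removal (the config, checkpoint, and point-cloud paths are placeholders, not files shipped in this repository):

```python
# Placeholder paths; init_model and inference_detector follow the signatures above.
from mmdet3d.apis import init_model, inference_detector

model = init_model('configs/some_detector_config.py',      # placeholder config
                   'checkpoints/some_detector.pth',        # placeholder checkpoint
                   device='cuda:0')
result, data = inference_detector(model, 'demo/points.bin')  # placeholder point cloud
```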
- """ - cfg = model.cfg - device = next(model.parameters()).device # model device - - if not isinstance(pcd, str): - cfg = cfg.copy() - # set loading pipeline type - cfg.data.test.pipeline[0].type = 'LoadPointsFromDict' - - # build the data pipeline - test_pipeline = deepcopy(cfg.data.test.pipeline) - test_pipeline = Compose(test_pipeline) - box_type_3d, box_mode_3d = get_box_type(cfg.data.test.box_type_3d) - - if isinstance(pcd, str): - # load from point clouds file - data = dict( - pts_filename=pcd, - box_type_3d=box_type_3d, - box_mode_3d=box_mode_3d, - # for ScanNet demo we need axis_align_matrix - ann_info=dict(axis_align_matrix=np.eye(4)), - sweeps=[], - # set timestamp = 0 - timestamp=[0], - img_fields=[], - bbox3d_fields=[], - pts_mask_fields=[], - pts_seg_fields=[], - bbox_fields=[], - mask_fields=[], - seg_fields=[]) - else: - # load from http - data = dict( - points=pcd, - box_type_3d=box_type_3d, - box_mode_3d=box_mode_3d, - # for ScanNet demo we need axis_align_matrix - ann_info=dict(axis_align_matrix=np.eye(4)), - sweeps=[], - # set timestamp = 0 - timestamp=[0], - img_fields=[], - bbox3d_fields=[], - pts_mask_fields=[], - pts_seg_fields=[], - bbox_fields=[], - mask_fields=[], - seg_fields=[]) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device.index])[0] - else: - # this is a workaround to avoid the bug of MMDataParallel - data['img_metas'] = data['img_metas'][0].data - data['points'] = data['points'][0].data - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result, data - - -def inference_multi_modality_detector(model, pcd, image, ann_file): - """Inference point cloud with the multi-modality detector. - - Args: - model (nn.Module): The loaded detector. - pcd (str): Point cloud files. - image (str): Image files. - ann_file (str): Annotation files. - - Returns: - tuple: Predicted results and data from pipeline. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = deepcopy(cfg.data.test.pipeline) - test_pipeline = Compose(test_pipeline) - box_type_3d, box_mode_3d = get_box_type(cfg.data.test.box_type_3d) - # get data info containing calib - data_infos = mmcv.load(ann_file) - image_idx = int(re.findall(r'\d+', image)[-1]) # xxx/sunrgbd_000017.jpg - for x in data_infos: - if int(x['image']['image_idx']) != image_idx: - continue - info = x - break - data = dict( - pts_filename=pcd, - img_prefix=osp.dirname(image), - img_info=dict(filename=osp.basename(image)), - box_type_3d=box_type_3d, - box_mode_3d=box_mode_3d, - img_fields=[], - bbox3d_fields=[], - pts_mask_fields=[], - pts_seg_fields=[], - bbox_fields=[], - mask_fields=[], - seg_fields=[]) - data = test_pipeline(data) - - # TODO: this code is dataset-specific. Move lidar2img and - # depth2img to .pkl annotations in the future. 
- # LiDAR to image conversion - if box_mode_3d == Box3DMode.LIDAR: - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P2 = info['calib']['P2'].astype(np.float32) - lidar2img = P2 @ rect @ Trv2c - data['img_metas'][0].data['lidar2img'] = lidar2img - # Depth to image conversion - elif box_mode_3d == Box3DMode.DEPTH: - rt_mat = info['calib']['Rt'] - # follow Coord3DMode.convert_point - rt_mat = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0] - ]) @ rt_mat.transpose(1, 0) - depth2img = info['calib']['K'] @ rt_mat - data['img_metas'][0].data['depth2img'] = depth2img - - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device.index])[0] - else: - # this is a workaround to avoid the bug of MMDataParallel - data['img_metas'] = data['img_metas'][0].data - data['points'] = data['points'][0].data - data['img'] = data['img'][0].data - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result, data - - -def inference_mono_3d_detector(model, image, ann_file): - """Inference image with the monocular 3D detector. - - Args: - model (nn.Module): The loaded detector. - image (str): Image files. - ann_file (str): Annotation files. - - Returns: - tuple: Predicted results and data from pipeline. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = deepcopy(cfg.data.test.pipeline) - test_pipeline = Compose(test_pipeline) - box_type_3d, box_mode_3d = get_box_type(cfg.data.test.box_type_3d) - # get data info containing calib - data_infos = mmcv.load(ann_file) - # find the info corresponding to this image - for x in data_infos['images']: - if osp.basename(x['file_name']) != osp.basename(image): - continue - img_info = x - break - data = dict( - img_prefix=osp.dirname(image), - img_info=dict(filename=osp.basename(image)), - box_type_3d=box_type_3d, - box_mode_3d=box_mode_3d, - img_fields=[], - bbox3d_fields=[], - pts_mask_fields=[], - pts_seg_fields=[], - bbox_fields=[], - mask_fields=[], - seg_fields=[]) - - # camera points to image conversion - if box_mode_3d == Box3DMode.CAM: - data['img_info'].update(dict(cam_intrinsic=img_info['cam_intrinsic'])) - - data = test_pipeline(data) - - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device.index])[0] - else: - # this is a workaround to avoid the bug of MMDataParallel - data['img_metas'] = data['img_metas'][0].data - data['img'] = data['img'][0].data - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result, data - - -def inference_segmentor(model, pcd): - """Inference point cloud with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - pcd (str): Point cloud files. - - Returns: - tuple: Predicted results and data from pipeline. 
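The segmentation path works the same way; the `semantic_mask` key read back here matches what `show_seg_result_meshlab` below consumes (paths remain placeholders):

```python
# Placeholder paths; inference_segmentor returns (result, data) as documented above.
from mmdet3d.apis import init_model, inference_segmentor

segmentor = init_model('configs/some_segmentor_config.py',   # placeholder config
                       'checkpoints/some_segmentor.pth',     # placeholder checkpoint
                       device='cuda:0')
result, data = inference_segmentor(segmentor, 'demo/scene_points.bin')  # placeholder
pred_labels = result[0]['semantic_mask']  # per-point class ids
```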
- """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = deepcopy(cfg.data.test.pipeline) - test_pipeline = Compose(test_pipeline) - data = dict( - pts_filename=pcd, - img_fields=[], - bbox3d_fields=[], - pts_mask_fields=[], - pts_seg_fields=[], - bbox_fields=[], - mask_fields=[], - seg_fields=[]) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device.index])[0] - else: - # this is a workaround to avoid the bug of MMDataParallel - data['img_metas'] = data['img_metas'][0].data - data['points'] = data['points'][0].data - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result, data - - -def show_det_result_meshlab(data, - result, - out_dir, - score_thr=0.0, - show=False, - snapshot=False): - """Show 3D detection result by meshlab.""" - points = data['points'][0][0].cpu().numpy() - pts_filename = data['img_metas'][0][0]['pts_filename'] - file_name = osp.split(pts_filename)[-1].split('.')[0] - - if 'pts_bbox' in result[0].keys(): - pred_bboxes = result[0]['pts_bbox']['boxes_3d'].tensor.numpy() - pred_scores = result[0]['pts_bbox']['scores_3d'].numpy() - else: - pred_bboxes = result[0]['boxes_3d'].tensor.numpy() - pred_scores = result[0]['scores_3d'].numpy() - - # filter out low score bboxes for visualization - if score_thr > 0: - inds = pred_scores > score_thr - pred_bboxes = pred_bboxes[inds] - - # for now we convert points into depth mode - box_mode = data['img_metas'][0][0]['box_mode_3d'] - if box_mode != Box3DMode.DEPTH: - points = Coord3DMode.convert(points, box_mode, Coord3DMode.DEPTH) - show_bboxes = Box3DMode.convert(pred_bboxes, box_mode, Box3DMode.DEPTH) - else: - show_bboxes = deepcopy(pred_bboxes) - - show_result( - points, - None, - show_bboxes, - out_dir, - file_name, - show=show, - snapshot=snapshot) - - return file_name - - -def show_seg_result_meshlab(data, - result, - out_dir, - palette, - show=False, - snapshot=False): - """Show 3D segmentation result by meshlab.""" - points = data['points'][0][0].cpu().numpy() - pts_filename = data['img_metas'][0][0]['pts_filename'] - file_name = osp.split(pts_filename)[-1].split('.')[0] - - pred_seg = result[0]['semantic_mask'].numpy() - - if palette is None: - # generate random color map - max_idx = pred_seg.max() - palette = np.random.randint(0, 256, size=(max_idx + 1, 3)) - palette = np.array(palette).astype(np.int) - - show_seg_result( - points, - None, - pred_seg, - out_dir, - file_name, - palette=palette, - show=show, - snapshot=snapshot) - - return file_name - - -def show_proj_det_result_meshlab(data, - result, - out_dir, - score_thr=0.0, - show=False, - snapshot=False): - """Show result of projecting 3D bbox to 2D image by meshlab.""" - assert 'img' in data.keys(), 'image data is not provided for visualization' - - img_filename = data['img_metas'][0][0]['filename'] - file_name = osp.split(img_filename)[-1].split('.')[0] - - # read from file because img in data_dict has undergone pipeline transform - img = mmcv.imread(img_filename) - - if 'pts_bbox' in result[0].keys(): - result[0] = result[0]['pts_bbox'] - elif 'img_bbox' in result[0].keys(): - result[0] = result[0]['img_bbox'] - pred_bboxes = result[0]['boxes_3d'].tensor.numpy() - pred_scores = result[0]['scores_3d'].numpy() - - # filter out low score bboxes for visualization - if score_thr > 0: - inds = pred_scores > score_thr - 
pred_bboxes = pred_bboxes[inds] - - box_mode = data['img_metas'][0][0]['box_mode_3d'] - if box_mode == Box3DMode.LIDAR: - if 'lidar2img' not in data['img_metas'][0][0]: - raise NotImplementedError( - 'LiDAR to image transformation matrix is not provided') - - show_bboxes = LiDARInstance3DBoxes(pred_bboxes, origin=(0.5, 0.5, 0)) - - show_multi_modality_result( - img, - None, - show_bboxes, - data['img_metas'][0][0]['lidar2img'], - out_dir, - file_name, - box_mode='lidar', - show=show) - elif box_mode == Box3DMode.DEPTH: - show_bboxes = DepthInstance3DBoxes(pred_bboxes, origin=(0.5, 0.5, 0)) - - show_multi_modality_result( - img, - None, - show_bboxes, - None, - out_dir, - file_name, - box_mode='depth', - img_metas=data['img_metas'][0][0], - show=show) - elif box_mode == Box3DMode.CAM: - if 'cam2img' not in data['img_metas'][0][0]: - raise NotImplementedError( - 'camera intrinsic matrix is not provided') - - show_bboxes = CameraInstance3DBoxes( - pred_bboxes, box_dim=pred_bboxes.shape[-1], origin=(0.5, 1.0, 0.5)) - - show_multi_modality_result( - img, - None, - show_bboxes, - data['img_metas'][0][0]['cam2img'], - out_dir, - file_name, - box_mode='camera', - show=show) - else: - raise NotImplementedError( - f'visualization of {box_mode} bbox is not supported') - - return file_name - - -def show_result_meshlab(data, - result, - out_dir, - score_thr=0.0, - show=False, - snapshot=False, - task='det', - palette=None): - """Show result by meshlab. - - Args: - data (dict): Contain data from pipeline. - result (dict): Predicted result from model. - out_dir (str): Directory to save visualized result. - score_thr (float, optional): Minimum score of bboxes to be shown. - Default: 0.0 - show (bool, optional): Visualize the results online. Defaults to False. - snapshot (bool, optional): Whether to save the online results. - Defaults to False. - task (str, optional): Distinguish which task result to visualize. - Currently we support 3D detection, multi-modality detection and - 3D segmentation. Defaults to 'det'. - palette (list[list[int]]] | np.ndarray, optional): The palette - of segmentation map. If None is given, random palette will be - generated. Defaults to None. - """ - assert task in ['det', 'multi_modality-det', 'seg', 'mono-det'], \ - f'unsupported visualization task {task}' - assert out_dir is not None, 'Expect out_dir, got none.' - - if task in ['det', 'multi_modality-det']: - file_name = show_det_result_meshlab(data, result, out_dir, score_thr, - show, snapshot) - - if task in ['seg']: - file_name = show_seg_result_meshlab(data, result, out_dir, palette, - show, snapshot) - - if task in ['multi_modality-det', 'mono-det']: - file_name = show_proj_det_result_meshlab(data, result, out_dir, - score_thr, show, snapshot) - - return out_dir, file_name diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/test.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/test.py deleted file mode 100644 index c0e66c07..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/test.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -import mmcv -import torch -from mmcv.image import tensor2imgs - -from mmdet3d.models import (Base3DDetector, Base3DSegmentor, - SingleStageMono3DDetector) - - -def single_gpu_test(model, - data_loader, - show=False, - out_dir=None, - show_score_thr=0.3): - """Test model with single gpu. - - This method tests model with single gpu and gives the 'show' option. 
- By setting ``show=True``, it saves the visualization results under - ``out_dir``. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - show (bool, optional): Whether to save viualization results. - Default: True. - out_dir (str, optional): The path to save visualization results. - Default: None. - - Returns: - list[dict]: The prediction results. - """ - model.eval() - results = [] - dataset = data_loader.dataset - prog_bar = mmcv.ProgressBar(len(dataset)) - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - - if show: - # Visualize the results of MMDetection3D model - # 'show_results' is MMdetection3D visualization API - models_3d = (Base3DDetector, Base3DSegmentor, - SingleStageMono3DDetector) - if isinstance(model.module, models_3d): - model.module.show_results( - data, - result, - out_dir=out_dir, - show=show, - score_thr=show_score_thr) - # Visualize the results of MMDetection model - # 'show_result' is MMdetection visualization API - else: - batch_size = len(result) - if batch_size == 1 and isinstance(data['img'][0], - torch.Tensor): - img_tensor = data['img'][0] - else: - img_tensor = data['img'][0].data[0] - img_metas = data['img_metas'][0].data[0] - imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg']) - assert len(imgs) == len(img_metas) - - for i, (img, img_meta) in enumerate(zip(imgs, img_metas)): - h, w, _ = img_meta['img_shape'] - img_show = img[:h, :w, :] - - ori_h, ori_w = img_meta['ori_shape'][:-1] - img_show = mmcv.imresize(img_show, (ori_w, ori_h)) - - if out_dir: - out_file = osp.join(out_dir, img_meta['ori_filename']) - else: - out_file = None - - model.module.show_result( - img_show, - result[i], - show=show, - out_file=out_file, - score_thr=show_score_thr) - results.extend(result) - - batch_size = len(result) - for _ in range(batch_size): - prog_bar.update() - return results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/train.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/train.py deleted file mode 100644 index 4d970264..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/apis/train.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import random -import warnings - -import numpy as np -import torch -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner, - Fp16OptimizerHook, OptimizerHook, build_optimizer, - build_runner, get_dist_info) -from mmcv.utils import build_from_cfg -from torch import distributed as dist - -from mmdet3d.datasets import build_dataset -from mmdet3d.utils import find_latest_checkpoint -from mmdet.core import DistEvalHook as MMDET_DistEvalHook -from mmdet.core import EvalHook as MMDET_EvalHook -from mmdet.datasets import build_dataloader as build_mmdet_dataloader -from mmdet.datasets import replace_ImageToTensor -from mmdet.utils import get_root_logger as get_mmdet_root_logger -from mmseg.core import DistEvalHook as MMSEG_DistEvalHook -from mmseg.core import EvalHook as MMSEG_EvalHook -from mmseg.datasets import build_dataloader as build_mmseg_dataloader -from mmseg.utils import get_root_logger as get_mmseg_root_logger - - -def init_random_seed(seed=None, device='cuda'): - """Initialize random seed. 
- - If the seed is not set, the seed will be automatically randomized, - and then broadcast to all processes to prevent some potential bugs. - Args: - seed (int, optional): The seed. Default to None. - device (str, optional): The device where the seed will be put on. - Default to 'cuda'. - Returns: - int: Seed to be used. - """ - if seed is not None: - return seed - - # Make sure all ranks share the same random seed to prevent - # some potential bugs. Please refer to - # https://github.com/open-mmlab/mmdetection/issues/6339 - rank, world_size = get_dist_info() - seed = np.random.randint(2**31) - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def train_segmentor(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - """Launch segmentor training.""" - logger = get_mmseg_root_logger(cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - data_loaders = [ - build_mmseg_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # cfg.gpus will be ignored if distributed - len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - drop_last=True) for ds in dataset - ] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - # build runner - optimizer = build_optimizer(model, cfg.optimizer) - - if cfg.get('runner') is None: - cfg.runner = {'type': 'IterBasedRunner', 'max_iters': cfg.total_iters} - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - batch_processor=None, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # register hooks - runner.register_training_hooks(cfg.lr_config, cfg.optimizer_config, - cfg.checkpoint_config, cfg.log_config, - cfg.get('momentum_config', None)) - - # an ugly walkaround to make the .log and .log.json filenames the same - runner.timestamp = timestamp - - # register eval hooks - if validate: - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_mmseg_dataloader( - val_dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' - 
eval_hook = MMSEG_DistEvalHook if distributed else MMSEG_EvalHook - # In this PR (https://github.com/open-mmlab/mmcv/pull/1193), the - # priority of IterTimerHook has been modified from 'NORMAL' to 'LOW'. - runner.register_hook( - eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - # user-defined hooks - if cfg.get('custom_hooks', None): - custom_hooks = cfg.custom_hooks - assert isinstance(custom_hooks, list), \ - f'custom_hooks expect list type, but got {type(custom_hooks)}' - for hook_cfg in cfg.custom_hooks: - assert isinstance(hook_cfg, dict), \ - 'Each item in custom_hooks expects dict type, but got ' \ - f'{type(hook_cfg)}' - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = build_from_cfg(hook_cfg, HOOKS) - runner.register_hook(hook, priority=priority) - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) - - -def train_detector(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - logger = get_mmdet_root_logger(log_level=cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - if 'imgs_per_gpu' in cfg.data: - logger.warning('"imgs_per_gpu" is deprecated in MMDet V2.0. ' - 'Please use "samples_per_gpu" instead') - if 'samples_per_gpu' in cfg.data: - logger.warning( - f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and ' - f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"' - f'={cfg.data.imgs_per_gpu} is used in this experiments') - else: - logger.warning( - 'Automatically set "samples_per_gpu"="imgs_per_gpu"=' - f'{cfg.data.imgs_per_gpu} in this experiments') - cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu - - runner_type = 'EpochBasedRunner' if 'runner' not in cfg else cfg.runner[ - 'type'] - data_loaders = [ - build_mmdet_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # `num_gpus` will be ignored if distributed - num_gpus=len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - runner_type=runner_type, - persistent_workers=cfg.data.get('persistent_workers', False)) - for ds in dataset - ] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - # build runner - optimizer = build_optimizer(model, cfg.optimizer) - - if 'runner' not in cfg: - cfg.runner = { - 'type': 'EpochBasedRunner', - 'max_epochs': cfg.total_epochs - } - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - else: - if 'total_epochs' in cfg: - assert cfg.total_epochs == cfg.runner.max_epochs - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # an ugly workaround to make .log and .log.json filenames the same - runner.timestamp = timestamp - - # fp16 setting - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - optimizer_config = Fp16OptimizerHook( - **cfg.optimizer_config, **fp16_cfg, distributed=distributed) - elif distributed and 
'type' not in cfg.optimizer_config: - optimizer_config = OptimizerHook(**cfg.optimizer_config) - else: - optimizer_config = cfg.optimizer_config - - # register hooks - runner.register_training_hooks( - cfg.lr_config, - optimizer_config, - cfg.checkpoint_config, - cfg.log_config, - cfg.get('momentum_config', None), - custom_hooks_config=cfg.get('custom_hooks', None)) - - if distributed: - if isinstance(runner, EpochBasedRunner): - runner.register_hook(DistSamplerSeedHook()) - - # register eval hooks - if validate: - # Support batch_size > 1 in validation - val_samples_per_gpu = cfg.data.val.pop('samples_per_gpu', 1) - if val_samples_per_gpu > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.val.pipeline = replace_ImageToTensor( - cfg.data.val.pipeline) - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_mmdet_dataloader( - val_dataset, - samples_per_gpu=val_samples_per_gpu, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' - eval_hook = MMDET_DistEvalHook if distributed else MMDET_EvalHook - # In this PR (https://github.com/open-mmlab/mmcv/pull/1193), the - # priority of IterTimerHook has been modified from 'NORMAL' to 'LOW'. - runner.register_hook( - eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - resume_from = None - if cfg.resume_from is None and cfg.get('auto_resume'): - resume_from = find_latest_checkpoint(cfg.work_dir) - - if resume_from is not None: - cfg.resume_from = resume_from - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) - - -def train_model(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - """A function wrapper for launching model training according to cfg. - - Because we need different eval_hook in runner. Should be deprecated in the - future. - """ - if cfg.model.type in ['EncoderDecoder3D']: - train_segmentor( - model, - dataset, - cfg, - distributed=distributed, - validate=validate, - timestamp=timestamp, - meta=meta) - else: - train_detector( - model, - dataset, - cfg, - distributed=distributed, - validate=validate, - timestamp=timestamp, - meta=meta) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/__init__.py deleted file mode 100644 index ffb0c1ac..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .anchor import * # noqa: F401, F403 -from .bbox import * # noqa: F401, F403 -from .evaluation import * # noqa: F401, F403 -from .points import * # noqa: F401, F403 -from .post_processing import * # noqa: F401, F403 -from .utils import * # noqa: F401, F403 -from .visualizer import * # noqa: F401, F403 -from .voxel import * # noqa: F401, F403 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/anchor/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/anchor/__init__.py deleted file mode 100644 index 7a34bf56..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/anchor/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
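# Sketch of the seeding behaviour implemented by set_random_seed() in the
# removed train.py above: seed Python, NumPy and all CUDA RNGs, and optionally
# force deterministic cuDNN kernels (slower, but reproducible runs).
import random
import numpy as np
import torch

def seed_everything(seed: int, deterministic: bool = False) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    if deterministic:
        torch.backends.cudnn.deterministic = True   # reproducible conv algorithms
        torch.backends.cudnn.benchmark = False      # disable the cuDNN autotuner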
-from mmdet.core.anchor import build_prior_generator -from .anchor_3d_generator import (AlignedAnchor3DRangeGenerator, - AlignedAnchor3DRangeGeneratorPerCls, - Anchor3DRangeGenerator) - -__all__ = [ - 'AlignedAnchor3DRangeGenerator', 'Anchor3DRangeGenerator', - 'build_prior_generator', 'AlignedAnchor3DRangeGeneratorPerCls' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/anchor/anchor_3d_generator.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/anchor/anchor_3d_generator.py deleted file mode 100644 index e8681b71..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/anchor/anchor_3d_generator.py +++ /dev/null @@ -1,419 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch - -from mmdet.core.anchor import ANCHOR_GENERATORS - - -@ANCHOR_GENERATORS.register_module() -class Anchor3DRangeGenerator(object): - """3D Anchor Generator by range. - - This anchor generator generates anchors by the given range in different - feature levels. - Due the convention in 3D detection, different anchor sizes are related to - different ranges for different categories. However we find this setting - does not effect the performance much in some datasets, e.g., nuScenes. - - Args: - ranges (list[list[float]]): Ranges of different anchors. - The ranges are the same across different feature levels. But may - vary for different anchor sizes if size_per_range is True. - sizes (list[list[float]], optional): 3D sizes of anchors. - Defaults to [[3.9, 1.6, 1.56]]. - scales (list[int], optional): Scales of anchors in different feature - levels. Defaults to [1]. - rotations (list[float], optional): Rotations of anchors in a feature - grid. Defaults to [0, 1.5707963]. - custom_values (tuple[float], optional): Customized values of that - anchor. For example, in nuScenes the anchors have velocities. - Defaults to (). - reshape_out (bool, optional): Whether to reshape the output into - (N x 4). Defaults to True. - size_per_range (bool, optional): Whether to use separate ranges for - different sizes. If size_per_range is True, the ranges should have - the same length as the sizes, if not, it will be duplicated. - Defaults to True. 
- """ - - def __init__(self, - ranges, - sizes=[[3.9, 1.6, 1.56]], - scales=[1], - rotations=[0, 1.5707963], - custom_values=(), - reshape_out=True, - size_per_range=True): - assert mmcv.is_list_of(ranges, list) - if size_per_range: - if len(sizes) != len(ranges): - assert len(ranges) == 1 - ranges = ranges * len(sizes) - assert len(ranges) == len(sizes) - else: - assert len(ranges) == 1 - assert mmcv.is_list_of(sizes, list) - assert isinstance(scales, list) - - self.sizes = sizes - self.scales = scales - self.ranges = ranges - self.rotations = rotations - self.custom_values = custom_values - self.cached_anchors = None - self.reshape_out = reshape_out - self.size_per_range = size_per_range - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += f'anchor_range={self.ranges},\n' - s += f'scales={self.scales},\n' - s += f'sizes={self.sizes},\n' - s += f'rotations={self.rotations},\n' - s += f'reshape_out={self.reshape_out},\n' - s += f'size_per_range={self.size_per_range})' - return s - - @property - def num_base_anchors(self): - """list[int]: Total number of base anchors in a feature grid.""" - num_rot = len(self.rotations) - num_size = torch.tensor(self.sizes).reshape(-1, 3).size(0) - return num_rot * num_size - - @property - def num_levels(self): - """int: Number of feature levels that the generator is applied to.""" - return len(self.scales) - - def grid_anchors(self, featmap_sizes, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - device (str, optional): Device where the anchors will be put on. - Defaults to 'cuda'. - - Returns: - list[torch.Tensor]: Anchors in multiple feature levels. - The sizes of each tensor should be [N, 4], where - N = width * height * num_base_anchors, width and height - are the sizes of the corresponding feature level, - num_base_anchors is the number of anchors for that level. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_anchors( - featmap_sizes[i], self.scales[i], device=device) - if self.reshape_out: - anchors = anchors.reshape(-1, anchors.size(-1)) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_anchors(self, featmap_size, scale, device='cuda'): - """Generate grid anchors of a single level feature map. - - This function is usually called by method ``self.grid_anchors``. - - Args: - featmap_size (tuple[int]): Size of the feature map. - scale (float): Scale factor of the anchors in the current level. - device (str, optional): Device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature map. 
- """ - # We reimplement the anchor generator using torch in cuda - # torch: 0.6975 s for 1000 times - # numpy: 4.3345 s for 1000 times - # which is ~5 times faster than the numpy implementation - if not self.size_per_range: - return self.anchors_single_range( - featmap_size, - self.ranges[0], - scale, - self.sizes, - self.rotations, - device=device) - - mr_anchors = [] - for anchor_range, anchor_size in zip(self.ranges, self.sizes): - mr_anchors.append( - self.anchors_single_range( - featmap_size, - anchor_range, - scale, - anchor_size, - self.rotations, - device=device)) - mr_anchors = torch.cat(mr_anchors, dim=-3) - return mr_anchors - - def anchors_single_range(self, - feature_size, - anchor_range, - scale=1, - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.5707963], - device='cuda'): - """Generate anchors in a single range. - - Args: - feature_size (list[float] | tuple[float]): Feature map size. It is - either a list of a tuple of [D, H, W](in order of z, y, and x). - anchor_range (torch.Tensor | list[float]): Range of anchors with - shape [6]. The order is consistent with that of anchors, i.e., - (x_min, y_min, z_min, x_max, y_max, z_max). - scale (float | int, optional): The scale factor of anchors. - Defaults to 1. - sizes (list[list] | np.ndarray | torch.Tensor, optional): - Anchor size with shape [N, 3], in order of x, y, z. - Defaults to [[3.9, 1.6, 1.56]]. - rotations (list[float] | np.ndarray | torch.Tensor, optional): - Rotations of anchors in a single feature grid. - Defaults to [0, 1.5707963]. - device (str): Devices that the anchors will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors with shape - [*feature_size, num_sizes, num_rots, 7]. - """ - if len(feature_size) == 2: - feature_size = [1, feature_size[0], feature_size[1]] - anchor_range = torch.tensor(anchor_range, device=device) - z_centers = torch.linspace( - anchor_range[2], anchor_range[5], feature_size[0], device=device) - y_centers = torch.linspace( - anchor_range[1], anchor_range[4], feature_size[1], device=device) - x_centers = torch.linspace( - anchor_range[0], anchor_range[3], feature_size[2], device=device) - sizes = torch.tensor(sizes, device=device).reshape(-1, 3) * scale - rotations = torch.tensor(rotations, device=device) - - # torch.meshgrid default behavior is 'id', np's default is 'xy' - rets = torch.meshgrid(x_centers, y_centers, z_centers, rotations) - # torch.meshgrid returns a tuple rather than list - rets = list(rets) - tile_shape = [1] * 5 - tile_shape[-2] = int(sizes.shape[0]) - for i in range(len(rets)): - rets[i] = rets[i].unsqueeze(-2).repeat(tile_shape).unsqueeze(-1) - - sizes = sizes.reshape([1, 1, 1, -1, 1, 3]) - tile_size_shape = list(rets[0].shape) - tile_size_shape[3] = 1 - sizes = sizes.repeat(tile_size_shape) - rets.insert(3, sizes) - - ret = torch.cat(rets, dim=-1).permute([2, 1, 0, 3, 4, 5]) - # [1, 200, 176, N, 2, 7] for kitti after permute - - if len(self.custom_values) > 0: - custom_ndim = len(self.custom_values) - custom = ret.new_zeros([*ret.shape[:-1], custom_ndim]) - # custom[:] = self.custom_values - ret = torch.cat([ret, custom], dim=-1) - # [1, 200, 176, N, 2, 9] for nus dataset after permute - return ret - - -@ANCHOR_GENERATORS.register_module() -class AlignedAnchor3DRangeGenerator(Anchor3DRangeGenerator): - """Aligned 3D Anchor Generator by range. - - This anchor generator uses a different manner to generate the positions - of anchors' centers from :class:`Anchor3DRangeGenerator`. 
- - Note: - The `align` means that the anchor's center is aligned with the voxel - grid, which is also the feature grid. The previous implementation of - :class:`Anchor3DRangeGenerator` does not generate the anchors' center - according to the voxel grid. Rather, it generates the center by - uniformly distributing the anchors inside the minimum and maximum - anchor ranges according to the feature map sizes. - However, this makes the anchors center does not match the feature grid. - The :class:`AlignedAnchor3DRangeGenerator` add + 1 when using the - feature map sizes to obtain the corners of the voxel grid. Then it - shifts the coordinates to the center of voxel grid and use the left - up corner to distribute anchors. - - Args: - anchor_corner (bool, optional): Whether to align with the corner of the - voxel grid. By default it is False and the anchor's center will be - the same as the corresponding voxel's center, which is also the - center of the corresponding greature grid. Defaults to False. - """ - - def __init__(self, align_corner=False, **kwargs): - super(AlignedAnchor3DRangeGenerator, self).__init__(**kwargs) - self.align_corner = align_corner - - def anchors_single_range(self, - feature_size, - anchor_range, - scale, - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.5707963], - device='cuda'): - """Generate anchors in a single range. - - Args: - feature_size (list[float] | tuple[float]): Feature map size. It is - either a list of a tuple of [D, H, W](in order of z, y, and x). - anchor_range (torch.Tensor | list[float]): Range of anchors with - shape [6]. The order is consistent with that of anchors, i.e., - (x_min, y_min, z_min, x_max, y_max, z_max). - scale (float | int): The scale factor of anchors. - sizes (list[list] | np.ndarray | torch.Tensor, optional): - Anchor size with shape [N, 3], in order of x, y, z. - Defaults to [[3.9, 1.6, 1.56]]. - rotations (list[float] | np.ndarray | torch.Tensor, optional): - Rotations of anchors in a single feature grid. - Defaults to [0, 1.5707963]. - device (str, optional): Devices that the anchors will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors with shape - [*feature_size, num_sizes, num_rots, 7]. 
- """ - if len(feature_size) == 2: - feature_size = [1, feature_size[0], feature_size[1]] - anchor_range = torch.tensor(anchor_range, device=device) - z_centers = torch.linspace( - anchor_range[2], - anchor_range[5], - feature_size[0] + 1, - device=device) - y_centers = torch.linspace( - anchor_range[1], - anchor_range[4], - feature_size[1] + 1, - device=device) - x_centers = torch.linspace( - anchor_range[0], - anchor_range[3], - feature_size[2] + 1, - device=device) - sizes = torch.tensor(sizes, device=device).reshape(-1, 3) * scale - rotations = torch.tensor(rotations, device=device) - - # shift the anchor center - if not self.align_corner: - z_shift = (z_centers[1] - z_centers[0]) / 2 - y_shift = (y_centers[1] - y_centers[0]) / 2 - x_shift = (x_centers[1] - x_centers[0]) / 2 - z_centers += z_shift - y_centers += y_shift - x_centers += x_shift - - # torch.meshgrid default behavior is 'id', np's default is 'xy' - rets = torch.meshgrid(x_centers[:feature_size[2]], - y_centers[:feature_size[1]], - z_centers[:feature_size[0]], rotations) - - # torch.meshgrid returns a tuple rather than list - rets = list(rets) - tile_shape = [1] * 5 - tile_shape[-2] = int(sizes.shape[0]) - for i in range(len(rets)): - rets[i] = rets[i].unsqueeze(-2).repeat(tile_shape).unsqueeze(-1) - - sizes = sizes.reshape([1, 1, 1, -1, 1, 3]) - tile_size_shape = list(rets[0].shape) - tile_size_shape[3] = 1 - sizes = sizes.repeat(tile_size_shape) - rets.insert(3, sizes) - - ret = torch.cat(rets, dim=-1).permute([2, 1, 0, 3, 4, 5]) - - if len(self.custom_values) > 0: - custom_ndim = len(self.custom_values) - custom = ret.new_zeros([*ret.shape[:-1], custom_ndim]) - # TODO: check the support of custom values - # custom[:] = self.custom_values - ret = torch.cat([ret, custom], dim=-1) - return ret - - -@ANCHOR_GENERATORS.register_module() -class AlignedAnchor3DRangeGeneratorPerCls(AlignedAnchor3DRangeGenerator): - """3D Anchor Generator by range for per class. - - This anchor generator generates anchors by the given range for per class. - Note that feature maps of different classes may be different. - - Args: - kwargs (dict): Arguments are the same as those in - :class:`AlignedAnchor3DRangeGenerator`. - """ - - def __init__(self, **kwargs): - super(AlignedAnchor3DRangeGeneratorPerCls, self).__init__(**kwargs) - assert len(self.scales) == 1, 'Multi-scale feature map levels are' + \ - ' not supported currently in this kind of anchor generator.' - - def grid_anchors(self, featmap_sizes, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes for - different classes in a single feature level. - device (str, optional): Device where the anchors will be put on. - Defaults to 'cuda'. - - Returns: - list[list[torch.Tensor]]: Anchors in multiple feature levels. - Note that in this anchor generator, we currently only - support single feature level. The sizes of each tensor - should be [num_sizes/ranges*num_rots*featmap_size, - box_code_size]. - """ - multi_level_anchors = [] - anchors = self.multi_cls_grid_anchors( - featmap_sizes, self.scales[0], device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def multi_cls_grid_anchors(self, featmap_sizes, scale, device='cuda'): - """Generate grid anchors of a single level feature map for multi-class - with different feature map sizes. - - This function is usually called by method ``self.grid_anchors``. 
- - Args: - featmap_sizes (list[tuple]): List of feature map sizes for - different classes in a single feature level. - scale (float): Scale factor of the anchors in the current level. - device (str, optional): Device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature map. - """ - assert len(featmap_sizes) == len(self.sizes) == len(self.ranges), \ - 'The number of different feature map sizes anchor sizes and ' + \ - 'ranges should be the same.' - - multi_cls_anchors = [] - for i in range(len(featmap_sizes)): - anchors = self.anchors_single_range( - featmap_sizes[i], - self.ranges[i], - scale, - self.sizes[i], - self.rotations, - device=device) - # [*featmap_size, num_sizes/ranges, num_rots, box_code_size] - ndim = len(featmap_sizes[i]) - anchors = anchors.view(*featmap_sizes[i], -1, anchors.size(-1)) - # [*featmap_size, num_sizes/ranges*num_rots, box_code_size] - anchors = anchors.permute(ndim, *range(0, ndim), ndim + 1) - # [num_sizes/ranges*num_rots, *featmap_size, box_code_size] - multi_cls_anchors.append(anchors.reshape(-1, anchors.size(-1))) - # [num_sizes/ranges*num_rots*featmap_size, box_code_size] - return multi_cls_anchors diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/__init__.py deleted file mode 100644 index 8c666306..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .assigners import AssignResult, BaseAssigner, MaxIoUAssigner -from .coders import DeltaXYZWLHRBBoxCoder -# from .bbox_target import bbox_target -from .iou_calculators import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D, - BboxOverlapsNearest3D, - axis_aligned_bbox_overlaps_3d, bbox_overlaps_3d, - bbox_overlaps_nearest_3d) -from .samplers import (BaseSampler, CombinedSampler, - InstanceBalancedPosSampler, IoUBalancedNegSampler, - PseudoSampler, RandomSampler, SamplingResult) -from .structures import (BaseInstance3DBoxes, Box3DMode, CameraInstance3DBoxes, - Coord3DMode, DepthInstance3DBoxes, - LiDARInstance3DBoxes, get_box_type, limit_period, - mono_cam_box2vis, points_cam2img, points_img2cam, - xywhr2xyxyr) -from .transforms import bbox3d2result, bbox3d2roi, bbox3d_mapping_back - -__all__ = [ - 'BaseSampler', 'AssignResult', 'BaseAssigner', 'MaxIoUAssigner', - 'PseudoSampler', 'RandomSampler', 'InstanceBalancedPosSampler', - 'IoUBalancedNegSampler', 'CombinedSampler', 'SamplingResult', - 'DeltaXYZWLHRBBoxCoder', 'BboxOverlapsNearest3D', 'BboxOverlaps3D', - 'bbox_overlaps_nearest_3d', 'bbox_overlaps_3d', - 'AxisAlignedBboxOverlaps3D', 'axis_aligned_bbox_overlaps_3d', 'Box3DMode', - 'LiDARInstance3DBoxes', 'CameraInstance3DBoxes', 'bbox3d2roi', - 'bbox3d2result', 'DepthInstance3DBoxes', 'BaseInstance3DBoxes', - 'bbox3d_mapping_back', 'xywhr2xyxyr', 'limit_period', 'points_cam2img', - 'points_img2cam', 'get_box_type', 'Coord3DMode', 'mono_cam_box2vis' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/assigners/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/assigners/__init__.py deleted file mode 100644 index d1493687..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/assigners/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
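# Back-of-the-envelope check of the layout produced by the range-based 3D anchor
# generators removed above. The feature-map size and anchor settings below are
# illustrative (typical single-class KITTI values), not taken from this patch.
H, W = 200, 176                     # BEV feature-map height and width
num_sizes, num_rots = 1, 2          # one anchor size, two yaw rotations
num_anchors = H * W * num_sizes * num_rots
assert num_anchors == 70400         # grid_anchors() with reshape_out=True flattens
                                    # to (num_anchors, 7): (x, y, z, dx, dy, dz, yaw)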
-from mmdet.core.bbox import AssignResult, BaseAssigner, MaxIoUAssigner - -__all__ = ['BaseAssigner', 'MaxIoUAssigner', 'AssignResult'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/box_np_ops.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/box_np_ops.py deleted file mode 100644 index bb52bbbf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/box_np_ops.py +++ /dev/null @@ -1,827 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# TODO: clean the functions in this file and move the APIs into box structures -# in the future -# NOTICE: All functions in this file are valid for LiDAR or depth boxes only -# if we use default parameters. - -import numba -import numpy as np - -from .structures.utils import limit_period, points_cam2img, rotation_3d_in_axis - - -def camera_to_lidar(points, r_rect, velo2cam): - """Convert points in camera coordinate to lidar coordinate. - - Note: - This function is for KITTI only. - - Args: - points (np.ndarray, shape=[N, 3]): Points in camera coordinate. - r_rect (np.ndarray, shape=[4, 4]): Matrix to project points in - specific camera coordinate (e.g. CAM2) to CAM0. - velo2cam (np.ndarray, shape=[4, 4]): Matrix to project points in - camera coordinate to lidar coordinate. - - Returns: - np.ndarray, shape=[N, 3]: Points in lidar coordinate. - """ - points_shape = list(points.shape[0:-1]) - if points.shape[-1] == 3: - points = np.concatenate([points, np.ones(points_shape + [1])], axis=-1) - lidar_points = points @ np.linalg.inv((r_rect @ velo2cam).T) - return lidar_points[..., :3] - - -def box_camera_to_lidar(data, r_rect, velo2cam): - """Convert boxes in camera coordinate to lidar coordinate. - - Note: - This function is for KITTI only. - - Args: - data (np.ndarray, shape=[N, 7]): Boxes in camera coordinate. - r_rect (np.ndarray, shape=[4, 4]): Matrix to project points in - specific camera coordinate (e.g. CAM2) to CAM0. - velo2cam (np.ndarray, shape=[4, 4]): Matrix to project points in - camera coordinate to lidar coordinate. - - Returns: - np.ndarray, shape=[N, 3]: Boxes in lidar coordinate. - """ - xyz = data[:, 0:3] - x_size, y_size, z_size = data[:, 3:4], data[:, 4:5], data[:, 5:6] - r = data[:, 6:7] - xyz_lidar = camera_to_lidar(xyz, r_rect, velo2cam) - # yaw and dims also needs to be converted - r_new = -r - np.pi / 2 - r_new = limit_period(r_new, period=np.pi * 2) - return np.concatenate([xyz_lidar, x_size, z_size, y_size, r_new], axis=1) - - -def corners_nd(dims, origin=0.5): - """Generate relative box corners based on length per dim and origin point. - - Args: - dims (np.ndarray, shape=[N, ndim]): Array of length per dim - origin (list or array or float, optional): origin point relate to - smallest point. Defaults to 0.5 - - Returns: - np.ndarray, shape=[N, 2 ** ndim, ndim]: Returned corners. - point layout example: (2d) x0y0, x0y1, x1y0, x1y1; - (3d) x0y0z0, x0y0z1, x0y1z0, x0y1z1, x1y0z0, x1y0z1, x1y1z0, x1y1z1 - where x0 < x1, y0 < y1, z0 < z1. - """ - ndim = int(dims.shape[1]) - corners_norm = np.stack( - np.unravel_index(np.arange(2**ndim), [2] * ndim), - axis=1).astype(dims.dtype) - # now corners_norm has format: (2d) x0y0, x0y1, x1y0, x1y1 - # (3d) x0y0z0, x0y0z1, x0y1z0, x0y1z1, x1y0z0, x1y0z1, x1y1z0, x1y1z1 - # so need to convert to a format which is convenient to do other computing. - # for 2d boxes, format is clockwise start with minimum point - # for 3d boxes, please draw lines by your hand. 
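# Minimal numpy sketch of the camera -> LiDAR point conversion implemented by
# camera_to_lidar() above (KITTI convention). The calibration matrices are
# identity placeholders here; real values come from a KITTI calib file.
import numpy as np

r_rect = np.eye(4, dtype=np.float32)     # R0_rect padded to 4x4 (placeholder)
velo2cam = np.eye(4, dtype=np.float32)   # Tr_velo_to_cam padded to 4x4 (placeholder)
pts_cam = np.array([[1.0, 2.0, 10.0]], dtype=np.float32)   # (N, 3) points, camera coords
pts_hom = np.concatenate([pts_cam, np.ones((1, 1), np.float32)], axis=-1)
pts_lidar = (pts_hom @ np.linalg.inv((r_rect @ velo2cam).T))[..., :3]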
- if ndim == 2: - # generate clockwise box corners - corners_norm = corners_norm[[0, 1, 3, 2]] - elif ndim == 3: - corners_norm = corners_norm[[0, 1, 3, 2, 4, 5, 7, 6]] - corners_norm = corners_norm - np.array(origin, dtype=dims.dtype) - corners = dims.reshape([-1, 1, ndim]) * corners_norm.reshape( - [1, 2**ndim, ndim]) - return corners - - -def center_to_corner_box2d(centers, dims, angles=None, origin=0.5): - """Convert kitti locations, dimensions and angles to corners. - format: center(xy), dims(xy), angles(counterclockwise when positive) - - Args: - centers (np.ndarray): Locations in kitti label file with shape (N, 2). - dims (np.ndarray): Dimensions in kitti label file with shape (N, 2). - angles (np.ndarray, optional): Rotation_y in kitti label file with - shape (N). Defaults to None. - origin (list or array or float, optional): origin point relate to - smallest point. Defaults to 0.5. - - Returns: - np.ndarray: Corners with the shape of (N, 4, 2). - """ - # 'length' in kitti format is in x axis. - # xyz(hwl)(kitti label file)<->xyz(lhw)(camera)<->z(-x)(-y)(wlh)(lidar) - # center in kitti format is [0.5, 1.0, 0.5] in xyz. - corners = corners_nd(dims, origin=origin) - # corners: [N, 4, 2] - if angles is not None: - corners = rotation_3d_in_axis(corners, angles) - corners += centers.reshape([-1, 1, 2]) - return corners - - -@numba.jit(nopython=True) -def depth_to_points(depth, trunc_pixel): - """Convert depth map to points. - - Args: - depth (np.array, shape=[H, W]): Depth map which - the row of [0~`trunc_pixel`] are truncated. - trunc_pixel (int): The number of truncated row. - - Returns: - np.ndarray: Points in camera coordinates. - """ - num_pts = np.sum(depth[trunc_pixel:, ] > 0.1) - points = np.zeros((num_pts, 3), dtype=depth.dtype) - x = np.array([0, 0, 1], dtype=depth.dtype) - k = 0 - for i in range(trunc_pixel, depth.shape[0]): - for j in range(depth.shape[1]): - if depth[i, j] > 0.1: - x = np.array([j, i, 1], dtype=depth.dtype) - points[k] = x * depth[i, j] - k += 1 - return points - - -def depth_to_lidar_points(depth, trunc_pixel, P2, r_rect, velo2cam): - """Convert depth map to points in lidar coordinate. - - Args: - depth (np.array, shape=[H, W]): Depth map which - the row of [0~`trunc_pixel`] are truncated. - trunc_pixel (int): The number of truncated row. - P2 (p.array, shape=[4, 4]): Intrinsics of Camera2. - r_rect (np.ndarray, shape=[4, 4]): Matrix to project points in - specific camera coordinate (e.g. CAM2) to CAM0. - velo2cam (np.ndarray, shape=[4, 4]): Matrix to project points in - camera coordinate to lidar coordinate. - - Returns: - np.ndarray: Points in lidar coordinates. - """ - pts = depth_to_points(depth, trunc_pixel) - points_shape = list(pts.shape[0:-1]) - points = np.concatenate([pts, np.ones(points_shape + [1])], axis=-1) - points = points @ np.linalg.inv(P2.T) - lidar_points = camera_to_lidar(points, r_rect, velo2cam) - return lidar_points - - -def center_to_corner_box3d(centers, - dims, - angles=None, - origin=(0.5, 1.0, 0.5), - axis=1): - """Convert kitti locations, dimensions and angles to corners. - - Args: - centers (np.ndarray): Locations in kitti label file with shape (N, 3). - dims (np.ndarray): Dimensions in kitti label file with shape (N, 3). - angles (np.ndarray, optional): Rotation_y in kitti label file with - shape (N). Defaults to None. - origin (list or array or float, optional): Origin point relate to - smallest point. Use (0.5, 1.0, 0.5) in camera and (0.5, 0.5, 0) - in lidar. Defaults to (0.5, 1.0, 0.5). 
- axis (int, optional): Rotation axis. 1 for camera and 2 for lidar. - Defaults to 1. - - Returns: - np.ndarray: Corners with the shape of (N, 8, 3). - """ - # 'length' in kitti format is in x axis. - # yzx(hwl)(kitti label file)<->xyz(lhw)(camera)<->z(-x)(-y)(lwh)(lidar) - # center in kitti format is [0.5, 1.0, 0.5] in xyz. - corners = corners_nd(dims, origin=origin) - # corners: [N, 8, 3] - if angles is not None: - corners = rotation_3d_in_axis(corners, angles, axis=axis) - corners += centers.reshape([-1, 1, 3]) - return corners - - -@numba.jit(nopython=True) -def box2d_to_corner_jit(boxes): - """Convert box2d to corner. - - Args: - boxes (np.ndarray, shape=[N, 5]): Boxes2d with rotation. - - Returns: - box_corners (np.ndarray, shape=[N, 4, 2]): Box corners. - """ - num_box = boxes.shape[0] - corners_norm = np.zeros((4, 2), dtype=boxes.dtype) - corners_norm[1, 1] = 1.0 - corners_norm[2] = 1.0 - corners_norm[3, 0] = 1.0 - corners_norm -= np.array([0.5, 0.5], dtype=boxes.dtype) - corners = boxes.reshape(num_box, 1, 5)[:, :, 2:4] * corners_norm.reshape( - 1, 4, 2) - rot_mat_T = np.zeros((2, 2), dtype=boxes.dtype) - box_corners = np.zeros((num_box, 4, 2), dtype=boxes.dtype) - for i in range(num_box): - rot_sin = np.sin(boxes[i, -1]) - rot_cos = np.cos(boxes[i, -1]) - rot_mat_T[0, 0] = rot_cos - rot_mat_T[0, 1] = rot_sin - rot_mat_T[1, 0] = -rot_sin - rot_mat_T[1, 1] = rot_cos - box_corners[i] = corners[i] @ rot_mat_T + boxes[i, :2] - return box_corners - - -@numba.njit -def corner_to_standup_nd_jit(boxes_corner): - """Convert boxes_corner to aligned (min-max) boxes. - - Args: - boxes_corner (np.ndarray, shape=[N, 2**dim, dim]): Boxes corners. - - Returns: - np.ndarray, shape=[N, dim*2]: Aligned (min-max) boxes. - """ - num_boxes = boxes_corner.shape[0] - ndim = boxes_corner.shape[-1] - result = np.zeros((num_boxes, ndim * 2), dtype=boxes_corner.dtype) - for i in range(num_boxes): - for j in range(ndim): - result[i, j] = np.min(boxes_corner[i, :, j]) - for j in range(ndim): - result[i, j + ndim] = np.max(boxes_corner[i, :, j]) - return result - - -@numba.jit(nopython=True) -def corner_to_surfaces_3d_jit(corners): - """Convert 3d box corners from corner function above to surfaces that - normal vectors all direct to internal. - - Args: - corners (np.ndarray): 3d box corners with the shape of (N, 8, 3). - - Returns: - np.ndarray: Surfaces with the shape of (N, 6, 4, 3). - """ - # box_corners: [N, 8, 3], must from corner functions in this module - num_boxes = corners.shape[0] - surfaces = np.zeros((num_boxes, 6, 4, 3), dtype=corners.dtype) - corner_idxes = np.array([ - 0, 1, 2, 3, 7, 6, 5, 4, 0, 3, 7, 4, 1, 5, 6, 2, 0, 4, 5, 1, 3, 2, 6, 7 - ]).reshape(6, 4) - for i in range(num_boxes): - for j in range(6): - for k in range(4): - surfaces[i, j, k] = corners[i, corner_idxes[j, k]] - return surfaces - - -def rotation_points_single_angle(points, angle, axis=0): - """Rotate points with a single angle. - - Args: - points (np.ndarray, shape=[N, 3]]): - angle (np.ndarray, shape=[1]]): - axis (int, optional): Axis to rotate at. Defaults to 0. - - Returns: - np.ndarray: Rotated points. 
- """ - # points: [N, 3] - rot_sin = np.sin(angle) - rot_cos = np.cos(angle) - if axis == 1: - rot_mat_T = np.array( - [[rot_cos, 0, rot_sin], [0, 1, 0], [-rot_sin, 0, rot_cos]], - dtype=points.dtype) - elif axis == 2 or axis == -1: - rot_mat_T = np.array( - [[rot_cos, rot_sin, 0], [-rot_sin, rot_cos, 0], [0, 0, 1]], - dtype=points.dtype) - elif axis == 0: - rot_mat_T = np.array( - [[1, 0, 0], [0, rot_cos, rot_sin], [0, -rot_sin, rot_cos]], - dtype=points.dtype) - else: - raise ValueError('axis should in range') - - return points @ rot_mat_T, rot_mat_T - - -def box3d_to_bbox(box3d, P2): - """Convert box3d in camera coordinates to bbox in image coordinates. - - Args: - box3d (np.ndarray, shape=[N, 7]): Boxes in camera coordinate. - P2 (np.array, shape=[4, 4]): Intrinsics of Camera2. - - Returns: - np.ndarray, shape=[N, 4]: Boxes 2d in image coordinates. - """ - box_corners = center_to_corner_box3d( - box3d[:, :3], box3d[:, 3:6], box3d[:, 6], [0.5, 1.0, 0.5], axis=1) - box_corners_in_image = points_cam2img(box_corners, P2) - # box_corners_in_image: [N, 8, 2] - minxy = np.min(box_corners_in_image, axis=1) - maxxy = np.max(box_corners_in_image, axis=1) - bbox = np.concatenate([minxy, maxxy], axis=1) - return bbox - - -def corner_to_surfaces_3d(corners): - """convert 3d box corners from corner function above to surfaces that - normal vectors all direct to internal. - - Args: - corners (np.ndarray): 3D box corners with shape of (N, 8, 3). - - Returns: - np.ndarray: Surfaces with the shape of (N, 6, 4, 3). - """ - # box_corners: [N, 8, 3], must from corner functions in this module - surfaces = np.array([ - [corners[:, 0], corners[:, 1], corners[:, 2], corners[:, 3]], - [corners[:, 7], corners[:, 6], corners[:, 5], corners[:, 4]], - [corners[:, 0], corners[:, 3], corners[:, 7], corners[:, 4]], - [corners[:, 1], corners[:, 5], corners[:, 6], corners[:, 2]], - [corners[:, 0], corners[:, 4], corners[:, 5], corners[:, 1]], - [corners[:, 3], corners[:, 2], corners[:, 6], corners[:, 7]], - ]).transpose([2, 0, 1, 3]) - return surfaces - - -def points_in_rbbox(points, rbbox, z_axis=2, origin=(0.5, 0.5, 0)): - """Check points in rotated bbox and return indices. - - Note: - This function is for counterclockwise boxes. - - Args: - points (np.ndarray, shape=[N, 3+dim]): Points to query. - rbbox (np.ndarray, shape=[M, 7]): Boxes3d with rotation. - z_axis (int, optional): Indicate which axis is height. - Defaults to 2. - origin (tuple[int], optional): Indicate the position of - box center. Defaults to (0.5, 0.5, 0). - - Returns: - np.ndarray, shape=[N, M]: Indices of points in each box. - """ - # TODO: this function is different from PointCloud3D, be careful - # when start to use nuscene, check the input - rbbox_corners = center_to_corner_box3d( - rbbox[:, :3], rbbox[:, 3:6], rbbox[:, 6], origin=origin, axis=z_axis) - surfaces = corner_to_surfaces_3d(rbbox_corners) - indices = points_in_convex_polygon_3d_jit(points[:, :3], surfaces) - return indices - - -def minmax_to_corner_2d(minmax_box): - """Convert minmax box to corners2d. - - Args: - minmax_box (np.ndarray, shape=[N, dims]): minmax boxes. - - Returns: - np.ndarray: 2d corners of boxes - """ - ndim = minmax_box.shape[-1] // 2 - center = minmax_box[..., :ndim] - dims = minmax_box[..., ndim:] - center - return center_to_corner_box2d(center, dims, origin=0.0) - - -def create_anchors_3d_range(feature_size, - anchor_range, - sizes=((3.9, 1.6, 1.56), ), - rotations=(0, np.pi / 2), - dtype=np.float32): - """Create anchors 3d by range. 
- - Args: - feature_size (list[float] | tuple[float]): Feature map size. It is - either a list of a tuple of [D, H, W](in order of z, y, and x). - anchor_range (torch.Tensor | list[float]): Range of anchors with - shape [6]. The order is consistent with that of anchors, i.e., - (x_min, y_min, z_min, x_max, y_max, z_max). - sizes (list[list] | np.ndarray | torch.Tensor, optional): - Anchor size with shape [N, 3], in order of x, y, z. - Defaults to ((3.9, 1.6, 1.56), ). - rotations (list[float] | np.ndarray | torch.Tensor, optional): - Rotations of anchors in a single feature grid. - Defaults to (0, np.pi / 2). - dtype (type, optional): Data type. Defaults to np.float32. - - Returns: - np.ndarray: Range based anchors with shape of - (*feature_size, num_sizes, num_rots, 7). - """ - anchor_range = np.array(anchor_range, dtype) - z_centers = np.linspace( - anchor_range[2], anchor_range[5], feature_size[0], dtype=dtype) - y_centers = np.linspace( - anchor_range[1], anchor_range[4], feature_size[1], dtype=dtype) - x_centers = np.linspace( - anchor_range[0], anchor_range[3], feature_size[2], dtype=dtype) - sizes = np.reshape(np.array(sizes, dtype=dtype), [-1, 3]) - rotations = np.array(rotations, dtype=dtype) - rets = np.meshgrid( - x_centers, y_centers, z_centers, rotations, indexing='ij') - tile_shape = [1] * 5 - tile_shape[-2] = int(sizes.shape[0]) - for i in range(len(rets)): - rets[i] = np.tile(rets[i][..., np.newaxis, :], tile_shape) - rets[i] = rets[i][..., np.newaxis] # for concat - sizes = np.reshape(sizes, [1, 1, 1, -1, 1, 3]) - tile_size_shape = list(rets[0].shape) - tile_size_shape[3] = 1 - sizes = np.tile(sizes, tile_size_shape) - rets.insert(3, sizes) - ret = np.concatenate(rets, axis=-1) - return np.transpose(ret, [2, 1, 0, 3, 4, 5]) - - -def center_to_minmax_2d(centers, dims, origin=0.5): - """Center to minmax. - - Args: - centers (np.ndarray): Center points. - dims (np.ndarray): Dimensions. - origin (list or array or float, optional): Origin point relate - to smallest point. Defaults to 0.5. - - Returns: - np.ndarray: Minmax points. - """ - if origin == 0.5: - return np.concatenate([centers - dims / 2, centers + dims / 2], - axis=-1) - corners = center_to_corner_box2d(centers, dims, origin=origin) - return corners[:, [0, 2]].reshape([-1, 4]) - - -def rbbox2d_to_near_bbox(rbboxes): - """convert rotated bbox to nearest 'standing' or 'lying' bbox. - - Args: - rbboxes (np.ndarray): Rotated bboxes with shape of - (N, 5(x, y, xdim, ydim, rad)). - - Returns: - np.ndarray: Bounding boxes with the shape of - (N, 4(xmin, ymin, xmax, ymax)). - """ - rots = rbboxes[..., -1] - rots_0_pi_div_2 = np.abs(limit_period(rots, 0.5, np.pi)) - cond = (rots_0_pi_div_2 > np.pi / 4)[..., np.newaxis] - bboxes_center = np.where(cond, rbboxes[:, [0, 1, 3, 2]], rbboxes[:, :4]) - bboxes = center_to_minmax_2d(bboxes_center[:, :2], bboxes_center[:, 2:]) - return bboxes - - -@numba.jit(nopython=True) -def iou_jit(boxes, query_boxes, mode='iou', eps=0.0): - """Calculate box iou. Note that jit version runs ~10x faster than the - box_overlaps function in mmdet3d.core.evaluation. - - Note: - This function is for counterclockwise boxes. - - Args: - boxes (np.ndarray): Input bounding boxes with shape of (N, 4). - query_boxes (np.ndarray): Query boxes with shape of (K, 4). - mode (str, optional): IoU mode. Defaults to 'iou'. - eps (float, optional): Value added to denominator. Defaults to 0. - - Returns: - np.ndarray: Overlap between boxes and query_boxes - with the shape of [N, K]. 
- """ - N = boxes.shape[0] - K = query_boxes.shape[0] - overlaps = np.zeros((N, K), dtype=boxes.dtype) - for k in range(K): - box_area = ((query_boxes[k, 2] - query_boxes[k, 0] + eps) * - (query_boxes[k, 3] - query_boxes[k, 1] + eps)) - for n in range(N): - iw = ( - min(boxes[n, 2], query_boxes[k, 2]) - - max(boxes[n, 0], query_boxes[k, 0]) + eps) - if iw > 0: - ih = ( - min(boxes[n, 3], query_boxes[k, 3]) - - max(boxes[n, 1], query_boxes[k, 1]) + eps) - if ih > 0: - if mode == 'iou': - ua = ((boxes[n, 2] - boxes[n, 0] + eps) * - (boxes[n, 3] - boxes[n, 1] + eps) + box_area - - iw * ih) - else: - ua = ((boxes[n, 2] - boxes[n, 0] + eps) * - (boxes[n, 3] - boxes[n, 1] + eps)) - overlaps[n, k] = iw * ih / ua - return overlaps - - -def projection_matrix_to_CRT_kitti(proj): - """Split projection matrix of KITTI. - - Note: - This function is for KITTI only. - - P = C @ [R|T] - C is upper triangular matrix, so we need to inverse CR and use QR - stable for all kitti camera projection matrix. - - Args: - proj (p.array, shape=[4, 4]): Intrinsics of camera. - - Returns: - tuple[np.ndarray]: Splited matrix of C, R and T. - """ - - CR = proj[0:3, 0:3] - CT = proj[0:3, 3] - RinvCinv = np.linalg.inv(CR) - Rinv, Cinv = np.linalg.qr(RinvCinv) - C = np.linalg.inv(Cinv) - R = np.linalg.inv(Rinv) - T = Cinv @ CT - return C, R, T - - -def remove_outside_points(points, rect, Trv2c, P2, image_shape): - """Remove points which are outside of image. - - Note: - This function is for KITTI only. - - Args: - points (np.ndarray, shape=[N, 3+dims]): Total points. - rect (np.ndarray, shape=[4, 4]): Matrix to project points in - specific camera coordinate (e.g. CAM2) to CAM0. - Trv2c (np.ndarray, shape=[4, 4]): Matrix to project points in - camera coordinate to lidar coordinate. - P2 (p.array, shape=[4, 4]): Intrinsics of Camera2. - image_shape (list[int]): Shape of image. - - Returns: - np.ndarray, shape=[N, 3+dims]: Filtered points. - """ - # 5x faster than remove_outside_points_v1(2ms vs 10ms) - C, R, T = projection_matrix_to_CRT_kitti(P2) - image_bbox = [0, 0, image_shape[1], image_shape[0]] - frustum = get_frustum(image_bbox, C) - frustum -= T - frustum = np.linalg.inv(R) @ frustum.T - frustum = camera_to_lidar(frustum.T, rect, Trv2c) - frustum_surfaces = corner_to_surfaces_3d_jit(frustum[np.newaxis, ...]) - indices = points_in_convex_polygon_3d_jit(points[:, :3], frustum_surfaces) - points = points[indices.reshape([-1])] - return points - - -def get_frustum(bbox_image, C, near_clip=0.001, far_clip=100): - """Get frustum corners in camera coordinates. - - Args: - bbox_image (list[int]): box in image coordinates. - C (np.ndarray): Intrinsics. - near_clip (float, optional): Nearest distance of frustum. - Defaults to 0.001. - far_clip (float, optional): Farthest distance of frustum. - Defaults to 100. - - Returns: - np.ndarray, shape=[8, 3]: coordinates of frustum corners. 
- """ - fku = C[0, 0] - fkv = -C[1, 1] - u0v0 = C[0:2, 2] - z_points = np.array( - [near_clip] * 4 + [far_clip] * 4, dtype=C.dtype)[:, np.newaxis] - b = bbox_image - box_corners = np.array( - [[b[0], b[1]], [b[0], b[3]], [b[2], b[3]], [b[2], b[1]]], - dtype=C.dtype) - near_box_corners = (box_corners - u0v0) / np.array( - [fku / near_clip, -fkv / near_clip], dtype=C.dtype) - far_box_corners = (box_corners - u0v0) / np.array( - [fku / far_clip, -fkv / far_clip], dtype=C.dtype) - ret_xy = np.concatenate([near_box_corners, far_box_corners], - axis=0) # [8, 2] - ret_xyz = np.concatenate([ret_xy, z_points], axis=1) - return ret_xyz - - -def surface_equ_3d(polygon_surfaces): - """ - - Args: - polygon_surfaces (np.ndarray): Polygon surfaces with shape of - [num_polygon, max_num_surfaces, max_num_points_of_surface, 3]. - All surfaces' normal vector must direct to internal. - Max_num_points_of_surface must at least 3. - - Returns: - tuple: normal vector and its direction. - """ - # return [a, b, c], d in ax+by+cz+d=0 - # polygon_surfaces: [num_polygon, num_surfaces, num_points_of_polygon, 3] - surface_vec = polygon_surfaces[:, :, :2, :] - \ - polygon_surfaces[:, :, 1:3, :] - # normal_vec: [..., 3] - normal_vec = np.cross(surface_vec[:, :, 0, :], surface_vec[:, :, 1, :]) - # print(normal_vec.shape, points[..., 0, :].shape) - # d = -np.inner(normal_vec, points[..., 0, :]) - d = np.einsum('aij, aij->ai', normal_vec, polygon_surfaces[:, :, 0, :]) - return normal_vec, -d - - -@numba.njit -def _points_in_convex_polygon_3d_jit(points, polygon_surfaces, normal_vec, d, - num_surfaces): - """ - Args: - points (np.ndarray): Input points with shape of (num_points, 3). - polygon_surfaces (np.ndarray): Polygon surfaces with shape of - (num_polygon, max_num_surfaces, max_num_points_of_surface, 3). - All surfaces' normal vector must direct to internal. - Max_num_points_of_surface must at least 3. - normal_vec (np.ndarray): Normal vector of polygon_surfaces. - d (int): Directions of normal vector. - num_surfaces (np.ndarray): Number of surfaces a polygon contains - shape of (num_polygon). - - Returns: - np.ndarray: Result matrix with the shape of [num_points, num_polygon]. - """ - max_num_surfaces, max_num_points_of_surface = polygon_surfaces.shape[1:3] - num_points = points.shape[0] - num_polygons = polygon_surfaces.shape[0] - ret = np.ones((num_points, num_polygons), dtype=np.bool_) - sign = 0.0 - for i in range(num_points): - for j in range(num_polygons): - for k in range(max_num_surfaces): - if k > num_surfaces[j]: - break - sign = ( - points[i, 0] * normal_vec[j, k, 0] + - points[i, 1] * normal_vec[j, k, 1] + - points[i, 2] * normal_vec[j, k, 2] + d[j, k]) - if sign >= 0: - ret[i, j] = False - break - return ret - - -def points_in_convex_polygon_3d_jit(points, - polygon_surfaces, - num_surfaces=None): - """Check points is in 3d convex polygons. - - Args: - points (np.ndarray): Input points with shape of (num_points, 3). - polygon_surfaces (np.ndarray): Polygon surfaces with shape of - (num_polygon, max_num_surfaces, max_num_points_of_surface, 3). - All surfaces' normal vector must direct to internal. - Max_num_points_of_surface must at least 3. - num_surfaces (np.ndarray, optional): Number of surfaces a polygon - contains shape of (num_polygon). Defaults to None. - - Returns: - np.ndarray: Result matrix with the shape of [num_points, num_polygon]. 
- """ - max_num_surfaces, max_num_points_of_surface = polygon_surfaces.shape[1:3] - # num_points = points.shape[0] - num_polygons = polygon_surfaces.shape[0] - if num_surfaces is None: - num_surfaces = np.full((num_polygons, ), 9999999, dtype=np.int64) - normal_vec, d = surface_equ_3d(polygon_surfaces[:, :, :3, :]) - # normal_vec: [num_polygon, max_num_surfaces, 3] - # d: [num_polygon, max_num_surfaces] - return _points_in_convex_polygon_3d_jit(points, polygon_surfaces, - normal_vec, d, num_surfaces) - - -@numba.njit -def points_in_convex_polygon_jit(points, polygon, clockwise=False): - """Check points is in 2d convex polygons. True when point in polygon. - - Args: - points (np.ndarray): Input points with the shape of [num_points, 2]. - polygon (np.ndarray): Input polygon with the shape of - [num_polygon, num_points_of_polygon, 2]. - clockwise (bool, optional): Indicate polygon is clockwise. Defaults - to True. - - Returns: - np.ndarray: Result matrix with the shape of [num_points, num_polygon]. - """ - # first convert polygon to directed lines - num_points_of_polygon = polygon.shape[1] - num_points = points.shape[0] - num_polygons = polygon.shape[0] - # vec for all the polygons - if clockwise: - vec1 = polygon - polygon[:, - np.array([num_points_of_polygon - 1] + list( - range(num_points_of_polygon - 1))), :] - else: - vec1 = polygon[:, - np.array([num_points_of_polygon - 1] + - list(range(num_points_of_polygon - - 1))), :] - polygon - ret = np.zeros((num_points, num_polygons), dtype=np.bool_) - success = True - cross = 0.0 - for i in range(num_points): - for j in range(num_polygons): - success = True - for k in range(num_points_of_polygon): - vec = vec1[j, k] - cross = vec[1] * (polygon[j, k, 0] - points[i, 0]) - cross -= vec[0] * (polygon[j, k, 1] - points[i, 1]) - if cross >= 0: - success = False - break - ret[i, j] = success - return ret - - -def boxes3d_to_corners3d_lidar(boxes3d, bottom_center=True): - """Convert kitti center boxes to corners. - - 7 -------- 4 - /| /| - 6 -------- 5 . - | | | | - . 3 -------- 0 - |/ |/ - 2 -------- 1 - - Note: - This function is for LiDAR boxes only. - - Args: - boxes3d (np.ndarray): Boxes with shape of (N, 7) - [x, y, z, x_size, y_size, z_size, ry] in LiDAR coords, - see the definition of ry in KITTI dataset. - bottom_center (bool, optional): Whether z is on the bottom center - of object. Defaults to True. - - Returns: - np.ndarray: Box corners with the shape of [N, 8, 3]. - """ - boxes_num = boxes3d.shape[0] - x_size, y_size, z_size = boxes3d[:, 3], boxes3d[:, 4], boxes3d[:, 5] - x_corners = np.array([ - x_size / 2., -x_size / 2., -x_size / 2., x_size / 2., x_size / 2., - -x_size / 2., -x_size / 2., x_size / 2. - ], - dtype=np.float32).T - y_corners = np.array([ - -y_size / 2., -y_size / 2., y_size / 2., y_size / 2., -y_size / 2., - -y_size / 2., y_size / 2., y_size / 2. - ], - dtype=np.float32).T - if bottom_center: - z_corners = np.zeros((boxes_num, 8), dtype=np.float32) - z_corners[:, 4:8] = z_size.reshape(boxes_num, 1).repeat( - 4, axis=1) # (N, 8) - else: - z_corners = np.array([ - -z_size / 2., -z_size / 2., -z_size / 2., -z_size / 2., - z_size / 2., z_size / 2., z_size / 2., z_size / 2. 
- ], - dtype=np.float32).T - - ry = boxes3d[:, 6] - zeros, ones = np.zeros( - ry.size, dtype=np.float32), np.ones( - ry.size, dtype=np.float32) - rot_list = np.array([[np.cos(ry), np.sin(ry), zeros], - [-np.sin(ry), np.cos(ry), zeros], - [zeros, zeros, ones]]) # (3, 3, N) - R_list = np.transpose(rot_list, (2, 0, 1)) # (N, 3, 3) - - temp_corners = np.concatenate((x_corners.reshape( - -1, 8, 1), y_corners.reshape(-1, 8, 1), z_corners.reshape(-1, 8, 1)), - axis=2) # (N, 8, 3) - rotated_corners = np.matmul(temp_corners, R_list) # (N, 8, 3) - x_corners = rotated_corners[:, :, 0] - y_corners = rotated_corners[:, :, 1] - z_corners = rotated_corners[:, :, 2] - - x_loc, y_loc, z_loc = boxes3d[:, 0], boxes3d[:, 1], boxes3d[:, 2] - - x = x_loc.reshape(-1, 1) + x_corners.reshape(-1, 8) - y = y_loc.reshape(-1, 1) + y_corners.reshape(-1, 8) - z = z_loc.reshape(-1, 1) + z_corners.reshape(-1, 8) - - corners = np.concatenate( - (x.reshape(-1, 8, 1), y.reshape(-1, 8, 1), z.reshape(-1, 8, 1)), - axis=2) - - return corners.astype(np.float32) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/__init__.py deleted file mode 100644 index b306525c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.core.bbox import build_bbox_coder -from .anchor_free_bbox_coder import AnchorFreeBBoxCoder -from .centerpoint_bbox_coders import CenterPointBBoxCoder -from .delta_xyzwhlr_bbox_coder import DeltaXYZWLHRBBoxCoder -from .fcos3d_bbox_coder import FCOS3DBBoxCoder -from .groupfree3d_bbox_coder import GroupFree3DBBoxCoder -from .monoflex_bbox_coder import MonoFlexCoder -from .partial_bin_based_bbox_coder import PartialBinBasedBBoxCoder -from .pgd_bbox_coder import PGDBBoxCoder -from .point_xyzwhlr_bbox_coder import PointXYZWHLRBBoxCoder -from .smoke_bbox_coder import SMOKECoder - -__all__ = [ - 'build_bbox_coder', 'DeltaXYZWLHRBBoxCoder', 'PartialBinBasedBBoxCoder', - 'CenterPointBBoxCoder', 'AnchorFreeBBoxCoder', 'GroupFree3DBBoxCoder', - 'PointXYZWHLRBBoxCoder', 'FCOS3DBBoxCoder', 'PGDBBoxCoder', 'SMOKECoder', - 'MonoFlexCoder' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/anchor_free_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/anchor_free_bbox_coder.py deleted file mode 100644 index d64f38b5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/anchor_free_bbox_coder.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core.bbox.builder import BBOX_CODERS -from .partial_bin_based_bbox_coder import PartialBinBasedBBoxCoder - - -@BBOX_CODERS.register_module() -class AnchorFreeBBoxCoder(PartialBinBasedBBoxCoder): - """Anchor free bbox coder for 3D boxes. - - Args: - num_dir_bins (int): Number of bins to encode direction angle. - with_rot (bool): Whether the bbox is with rotation. - """ - - def __init__(self, num_dir_bins, with_rot=True): - super(AnchorFreeBBoxCoder, self).__init__( - num_dir_bins, 0, [], with_rot=with_rot) - self.num_dir_bins = num_dir_bins - self.with_rot = with_rot - - def encode(self, gt_bboxes_3d, gt_labels_3d): - """Encode ground truth to prediction targets. - - Args: - gt_bboxes_3d (BaseInstance3DBoxes): Ground truth bboxes - with shape (n, 7). 
- gt_labels_3d (torch.Tensor): Ground truth classes. - - Returns: - tuple: Targets of center, size and direction. - """ - # generate center target - center_target = gt_bboxes_3d.gravity_center - - # generate bbox size target - size_res_target = gt_bboxes_3d.dims / 2 - - # generate dir target - box_num = gt_labels_3d.shape[0] - if self.with_rot: - (dir_class_target, - dir_res_target) = self.angle2class(gt_bboxes_3d.yaw) - dir_res_target /= (2 * np.pi / self.num_dir_bins) - else: - dir_class_target = gt_labels_3d.new_zeros(box_num) - dir_res_target = gt_bboxes_3d.tensor.new_zeros(box_num) - - return (center_target, size_res_target, dir_class_target, - dir_res_target) - - def decode(self, bbox_out): - """Decode predicted parts to bbox3d. - - Args: - bbox_out (dict): Predictions from model, should contain keys below. - - - center: predicted bottom center of bboxes. - - dir_class: predicted bbox direction class. - - dir_res: predicted bbox direction residual. - - size: predicted bbox size. - - Returns: - torch.Tensor: Decoded bbox3d with shape (batch, n, 7). - """ - center = bbox_out['center'] - batch_size, num_proposal = center.shape[:2] - - # decode heading angle - if self.with_rot: - dir_class = torch.argmax(bbox_out['dir_class'], -1) - dir_res = torch.gather(bbox_out['dir_res'], 2, - dir_class.unsqueeze(-1)) - dir_res.squeeze_(2) - dir_angle = self.class2angle(dir_class, dir_res).reshape( - batch_size, num_proposal, 1) - else: - dir_angle = center.new_zeros(batch_size, num_proposal, 1) - - # decode bbox size - bbox_size = torch.clamp(bbox_out['size'] * 2, min=0.1) - - bbox3d = torch.cat([center, bbox_size, dir_angle], dim=-1) - return bbox3d - - def split_pred(self, cls_preds, reg_preds, base_xyz): - """Split predicted features to specific parts. - - Args: - cls_preds (torch.Tensor): Class predicted features to split. - reg_preds (torch.Tensor): Regression predicted features to split. - base_xyz (torch.Tensor): Coordinates of points. - - Returns: - dict[str, torch.Tensor]: Split results. - """ - results = {} - results['obj_scores'] = cls_preds - - start, end = 0, 0 - reg_preds_trans = reg_preds.transpose(2, 1) - - # decode center - end += 3 - # (batch_size, num_proposal, 3) - results['center_offset'] = reg_preds_trans[..., start:end] - results['center'] = base_xyz.detach() + reg_preds_trans[..., start:end] - start = end - - # decode center - end += 3 - # (batch_size, num_proposal, 3) - results['size'] = reg_preds_trans[..., start:end] - start = end - - # decode direction - end += self.num_dir_bins - results['dir_class'] = reg_preds_trans[..., start:end] - start = end - - end += self.num_dir_bins - dir_res_norm = reg_preds_trans[..., start:end] - start = end - - results['dir_res_norm'] = dir_res_norm - results['dir_res'] = dir_res_norm * (2 * np.pi / self.num_dir_bins) - - return results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/centerpoint_bbox_coders.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/centerpoint_bbox_coders.py deleted file mode 100644 index 6d43a63d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/centerpoint_bbox_coders.py +++ /dev/null @@ -1,229 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class CenterPointBBoxCoder(BaseBBoxCoder): - """Bbox coder for CenterPoint. 
- - Args: - pc_range (list[float]): Range of point cloud. - out_size_factor (int): Downsample factor of the model. - voxel_size (list[float]): Size of voxel. - post_center_range (list[float], optional): Limit of the center. - Default: None. - max_num (int, optional): Max number to be kept. Default: 100. - score_threshold (float, optional): Threshold to filter boxes - based on score. Default: None. - code_size (int, optional): Code size of bboxes. Default: 9 - """ - - def __init__(self, - pc_range, - out_size_factor, - voxel_size, - post_center_range=None, - max_num=100, - score_threshold=None, - code_size=9): - - self.pc_range = pc_range - self.out_size_factor = out_size_factor - self.voxel_size = voxel_size - self.post_center_range = post_center_range - self.max_num = max_num - self.score_threshold = score_threshold - self.code_size = code_size - - def _gather_feat(self, feats, inds, feat_masks=None): - """Given feats and indexes, returns the gathered feats. - - Args: - feats (torch.Tensor): Features to be transposed and gathered - with the shape of [B, 2, W, H]. - inds (torch.Tensor): Indexes with the shape of [B, N]. - feat_masks (torch.Tensor, optional): Mask of the feats. - Default: None. - - Returns: - torch.Tensor: Gathered feats. - """ - dim = feats.size(2) - inds = inds.unsqueeze(2).expand(inds.size(0), inds.size(1), dim) - feats = feats.gather(1, inds) - if feat_masks is not None: - feat_masks = feat_masks.unsqueeze(2).expand_as(feats) - feats = feats[feat_masks] - feats = feats.view(-1, dim) - return feats - - def _topk(self, scores, K=80): - """Get indexes based on scores. - - Args: - scores (torch.Tensor): scores with the shape of [B, N, W, H]. - K (int, optional): Number to be kept. Defaults to 80. - - Returns: - tuple[torch.Tensor] - torch.Tensor: Selected scores with the shape of [B, K]. - torch.Tensor: Selected indexes with the shape of [B, K]. - torch.Tensor: Selected classes with the shape of [B, K]. - torch.Tensor: Selected y coord with the shape of [B, K]. - torch.Tensor: Selected x coord with the shape of [B, K]. - """ - batch, cat, height, width = scores.size() - - topk_scores, topk_inds = torch.topk(scores.view(batch, cat, -1), K) - - topk_inds = topk_inds % (height * width) - topk_ys = (topk_inds.float() / - torch.tensor(width, dtype=torch.float)).int().float() - topk_xs = (topk_inds % width).int().float() - - topk_score, topk_ind = torch.topk(topk_scores.view(batch, -1), K) - topk_clses = (topk_ind / torch.tensor(K, dtype=torch.float)).int() - topk_inds = self._gather_feat(topk_inds.view(batch, -1, 1), - topk_ind).view(batch, K) - topk_ys = self._gather_feat(topk_ys.view(batch, -1, 1), - topk_ind).view(batch, K) - topk_xs = self._gather_feat(topk_xs.view(batch, -1, 1), - topk_ind).view(batch, K) - - return topk_score, topk_inds, topk_clses, topk_ys, topk_xs - - def _transpose_and_gather_feat(self, feat, ind): - """Given feats and indexes, returns the transposed and gathered feats. - - Args: - feat (torch.Tensor): Features to be transposed and gathered - with the shape of [B, 2, W, H]. - ind (torch.Tensor): Indexes with the shape of [B, N]. - - Returns: - torch.Tensor: Transposed and gathered feats. - """ - feat = feat.permute(0, 2, 3, 1).contiguous() - feat = feat.view(feat.size(0), -1, feat.size(3)) - feat = self._gather_feat(feat, ind) - return feat - - def encode(self): - pass - - def decode(self, - heat, - rot_sine, - rot_cosine, - hei, - dim, - vel, - reg=None, - task_id=-1): - """Decode bboxes. 
- - Args: - heat (torch.Tensor): Heatmap with the shape of [B, N, W, H]. - rot_sine (torch.Tensor): Sine of rotation with the shape of - [B, 1, W, H]. - rot_cosine (torch.Tensor): Cosine of rotation with the shape of - [B, 1, W, H]. - hei (torch.Tensor): Height of the boxes with the shape - of [B, 1, W, H]. - dim (torch.Tensor): Dim of the boxes with the shape of - [B, 1, W, H]. - vel (torch.Tensor): Velocity with the shape of [B, 1, W, H]. - reg (torch.Tensor, optional): Regression value of the boxes in - 2D with the shape of [B, 2, W, H]. Default: None. - task_id (int, optional): Index of task. Default: -1. - - Returns: - list[dict]: Decoded boxes. - """ - batch, cat, _, _ = heat.size() - - scores, inds, clses, ys, xs = self._topk(heat, K=self.max_num) - - if reg is not None: - reg = self._transpose_and_gather_feat(reg, inds) - reg = reg.view(batch, self.max_num, 2) - xs = xs.view(batch, self.max_num, 1) + reg[:, :, 0:1] - ys = ys.view(batch, self.max_num, 1) + reg[:, :, 1:2] - else: - xs = xs.view(batch, self.max_num, 1) + 0.5 - ys = ys.view(batch, self.max_num, 1) + 0.5 - - # rotation value and direction label - rot_sine = self._transpose_and_gather_feat(rot_sine, inds) - rot_sine = rot_sine.view(batch, self.max_num, 1) - - rot_cosine = self._transpose_and_gather_feat(rot_cosine, inds) - rot_cosine = rot_cosine.view(batch, self.max_num, 1) - rot = torch.atan2(rot_sine, rot_cosine) - - # height in the bev - hei = self._transpose_and_gather_feat(hei, inds) - hei = hei.view(batch, self.max_num, 1) - - # dim of the box - dim = self._transpose_and_gather_feat(dim, inds) - dim = dim.view(batch, self.max_num, 3) - - # class label - clses = clses.view(batch, self.max_num).float() - scores = scores.view(batch, self.max_num) - - xs = xs.view( - batch, self.max_num, - 1) * self.out_size_factor * self.voxel_size[0] + self.pc_range[0] - ys = ys.view( - batch, self.max_num, - 1) * self.out_size_factor * self.voxel_size[1] + self.pc_range[1] - - if vel is None: # KITTI FORMAT - final_box_preds = torch.cat([xs, ys, hei, dim, rot], dim=2) - else: # exist velocity, nuscene format - vel = self._transpose_and_gather_feat(vel, inds) - vel = vel.view(batch, self.max_num, 2) - final_box_preds = torch.cat([xs, ys, hei, dim, rot, vel], dim=2) - - final_scores = scores - final_preds = clses - - # use score threshold - if self.score_threshold is not None: - thresh_mask = final_scores > self.score_threshold - - if self.post_center_range is not None: - self.post_center_range = torch.tensor( - self.post_center_range, device=heat.device) - mask = (final_box_preds[..., :3] >= - self.post_center_range[:3]).all(2) - mask &= (final_box_preds[..., :3] <= - self.post_center_range[3:]).all(2) - - predictions_dicts = [] - for i in range(batch): - cmask = mask[i, :] - if self.score_threshold: - cmask &= thresh_mask[i] - - boxes3d = final_box_preds[i, cmask] - scores = final_scores[i, cmask] - labels = final_preds[i, cmask] - predictions_dict = { - 'bboxes': boxes3d, - 'scores': scores, - 'labels': labels - } - - predictions_dicts.append(predictions_dict) - else: - raise NotImplementedError( - 'Need to reorganize output as a batch, only ' - 'support post_center_range is not None for now!') - - return predictions_dicts diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/delta_xyzwhlr_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/delta_xyzwhlr_bbox_coder.py deleted file mode 100644 index 931e8398..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/delta_xyzwhlr_bbox_coder.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class DeltaXYZWLHRBBoxCoder(BaseBBoxCoder): - """Bbox Coder for 3D boxes. - - Args: - code_size (int): The dimension of boxes to be encoded. - """ - - def __init__(self, code_size=7): - super(DeltaXYZWLHRBBoxCoder, self).__init__() - self.code_size = code_size - - @staticmethod - def encode(src_boxes, dst_boxes): - """Get box regression transformation deltas (dx, dy, dz, dx_size, - dy_size, dz_size, dr, dv*) that can be used to transform the - `src_boxes` into the `target_boxes`. - - Args: - src_boxes (torch.Tensor): source boxes, e.g., object proposals. - dst_boxes (torch.Tensor): target of the transformation, e.g., - ground-truth boxes. - - Returns: - torch.Tensor: Box transformation deltas. - """ - box_ndim = src_boxes.shape[-1] - cas, cgs, cts = [], [], [] - if box_ndim > 7: - xa, ya, za, wa, la, ha, ra, *cas = torch.split( - src_boxes, 1, dim=-1) - xg, yg, zg, wg, lg, hg, rg, *cgs = torch.split( - dst_boxes, 1, dim=-1) - cts = [g - a for g, a in zip(cgs, cas)] - else: - xa, ya, za, wa, la, ha, ra = torch.split(src_boxes, 1, dim=-1) - xg, yg, zg, wg, lg, hg, rg = torch.split(dst_boxes, 1, dim=-1) - za = za + ha / 2 - zg = zg + hg / 2 - diagonal = torch.sqrt(la**2 + wa**2) - xt = (xg - xa) / diagonal - yt = (yg - ya) / diagonal - zt = (zg - za) / ha - lt = torch.log(lg / la) - wt = torch.log(wg / wa) - ht = torch.log(hg / ha) - rt = rg - ra - return torch.cat([xt, yt, zt, wt, lt, ht, rt, *cts], dim=-1) - - @staticmethod - def decode(anchors, deltas): - """Apply transformation `deltas` (dx, dy, dz, dx_size, dy_size, - dz_size, dr, dv*) to `boxes`. - - Args: - anchors (torch.Tensor): Parameters of anchors with shape (N, 7). - deltas (torch.Tensor): Encoded boxes with shape - (N, 7+n) [x, y, z, x_size, y_size, z_size, r, velo*]. - - Returns: - torch.Tensor: Decoded boxes. - """ - cas, cts = [], [] - box_ndim = anchors.shape[-1] - if box_ndim > 7: - xa, ya, za, wa, la, ha, ra, *cas = torch.split(anchors, 1, dim=-1) - xt, yt, zt, wt, lt, ht, rt, *cts = torch.split(deltas, 1, dim=-1) - else: - xa, ya, za, wa, la, ha, ra = torch.split(anchors, 1, dim=-1) - xt, yt, zt, wt, lt, ht, rt = torch.split(deltas, 1, dim=-1) - - za = za + ha / 2 - diagonal = torch.sqrt(la**2 + wa**2) - xg = xt * diagonal + xa - yg = yt * diagonal + ya - zg = zt * ha + za - - lg = torch.exp(lt) * la - wg = torch.exp(wt) * wa - hg = torch.exp(ht) * ha - rg = rt + ra - zg = zg - hg / 2 - cgs = [t + a for t, a in zip(cts, cas)] - return torch.cat([xg, yg, zg, wg, lg, hg, rg, *cgs], dim=-1) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/fcos3d_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/fcos3d_bbox_coder.py deleted file mode 100644 index 7cb6b1a3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/fcos3d_bbox_coder.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS -from ..structures import limit_period - - -@BBOX_CODERS.register_module() -class FCOS3DBBoxCoder(BaseBBoxCoder): - """Bounding box coder for FCOS3D. 
- - Args: - base_depths (tuple[tuple[float]]): Depth references for decode box - depth. Defaults to None. - base_dims (tuple[tuple[float]]): Dimension references for decode box - dimension. Defaults to None. - code_size (int): The dimension of boxes to be encoded. Defaults to 7. - norm_on_bbox (bool): Whether to apply normalization on the bounding - box 2D attributes. Defaults to True. - """ - - def __init__(self, - base_depths=None, - base_dims=None, - code_size=7, - norm_on_bbox=True): - super(FCOS3DBBoxCoder, self).__init__() - self.base_depths = base_depths - self.base_dims = base_dims - self.bbox_code_size = code_size - self.norm_on_bbox = norm_on_bbox - - def encode(self, gt_bboxes_3d, gt_labels_3d, gt_bboxes, gt_labels): - # TODO: refactor the encoder in the FCOS3D and PGD head - pass - - def decode(self, bbox, scale, stride, training, cls_score=None): - """Decode regressed results into 3D predictions. - - Note that offsets are not transformed to the projected 3D centers. - - Args: - bbox (torch.Tensor): Raw bounding box predictions in shape - [N, C, H, W]. - scale (tuple[`Scale`]): Learnable scale parameters. - stride (int): Stride for a specific feature level. - training (bool): Whether the decoding is in the training - procedure. - cls_score (torch.Tensor): Classification score map for deciding - which base depth or dim is used. Defaults to None. - - Returns: - torch.Tensor: Decoded boxes. - """ - # scale the bbox of different level - # only apply to offset, depth and size prediction - scale_offset, scale_depth, scale_size = scale[0:3] - - clone_bbox = bbox.clone() - bbox[:, :2] = scale_offset(clone_bbox[:, :2]).float() - bbox[:, 2] = scale_depth(clone_bbox[:, 2]).float() - bbox[:, 3:6] = scale_size(clone_bbox[:, 3:6]).float() - - if self.base_depths is None: - bbox[:, 2] = bbox[:, 2].exp() - elif len(self.base_depths) == 1: # only single prior - mean = self.base_depths[0][0] - std = self.base_depths[0][1] - bbox[:, 2] = mean + bbox.clone()[:, 2] * std - else: # multi-class priors - assert len(self.base_depths) == cls_score.shape[1], \ - 'The number of multi-class depth priors should be equal to ' \ - 'the number of categories.' - indices = cls_score.max(dim=1)[1] - depth_priors = cls_score.new_tensor( - self.base_depths)[indices, :].permute(0, 3, 1, 2) - mean = depth_priors[:, 0] - std = depth_priors[:, 1] - bbox[:, 2] = mean + bbox.clone()[:, 2] * std - - bbox[:, 3:6] = bbox[:, 3:6].exp() - if self.base_dims is not None: - assert len(self.base_dims) == cls_score.shape[1], \ - 'The number of anchor sizes should be equal to the number ' \ - 'of categories.' - indices = cls_score.max(dim=1)[1] - size_priors = cls_score.new_tensor( - self.base_dims)[indices, :].permute(0, 3, 1, 2) - bbox[:, 3:6] = size_priors * bbox.clone()[:, 3:6] - - assert self.norm_on_bbox is True, 'Setting norm_on_bbox to False '\ - 'has not been thoroughly tested for FCOS3D.' - if self.norm_on_bbox: - if not training: - # Note that this line is conducted only when testing - bbox[:, :2] *= stride - - return bbox - - @staticmethod - def decode_yaw(bbox, centers2d, dir_cls, dir_offset, cam2img): - """Decode yaw angle and change it from local to global.i. - - Args: - bbox (torch.Tensor): Bounding box predictions in shape - [N, C] with yaws to be decoded. - centers2d (torch.Tensor): Projected 3D-center on the image planes - corresponding to the box predictions. - dir_cls (torch.Tensor): Predicted direction classes. - dir_offset (float): Direction offset before dividing all the - directions into several classes. 
- cam2img (torch.Tensor): Camera intrinsic matrix in shape [4, 4]. - - Returns: - torch.Tensor: Bounding boxes with decoded yaws. - """ - if bbox.shape[0] > 0: - dir_rot = limit_period(bbox[..., 6] - dir_offset, 0, np.pi) - bbox[..., 6] = \ - dir_rot + dir_offset + np.pi * dir_cls.to(bbox.dtype) - - bbox[:, 6] = torch.atan2(centers2d[:, 0] - cam2img[0, 2], - cam2img[0, 0]) + bbox[:, 6] - - return bbox diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/groupfree3d_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/groupfree3d_bbox_coder.py deleted file mode 100644 index 08d83e92..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/groupfree3d_bbox_coder.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core.bbox.builder import BBOX_CODERS -from .partial_bin_based_bbox_coder import PartialBinBasedBBoxCoder - - -@BBOX_CODERS.register_module() -class GroupFree3DBBoxCoder(PartialBinBasedBBoxCoder): - """Modified partial bin based bbox coder for GroupFree3D. - - Args: - num_dir_bins (int): Number of bins to encode direction angle. - num_sizes (int): Number of size clusters. - mean_sizes (list[list[int]]): Mean size of bboxes in each class. - with_rot (bool, optional): Whether the bbox is with rotation. - Defaults to True. - size_cls_agnostic (bool, optional): Whether the predicted size is - class-agnostic. Defaults to True. - """ - - def __init__(self, - num_dir_bins, - num_sizes, - mean_sizes, - with_rot=True, - size_cls_agnostic=True): - super(GroupFree3DBBoxCoder, self).__init__( - num_dir_bins=num_dir_bins, - num_sizes=num_sizes, - mean_sizes=mean_sizes, - with_rot=with_rot) - self.size_cls_agnostic = size_cls_agnostic - - def encode(self, gt_bboxes_3d, gt_labels_3d): - """Encode ground truth to prediction targets. - - Args: - gt_bboxes_3d (BaseInstance3DBoxes): Ground truth bboxes - with shape (n, 7). - gt_labels_3d (torch.Tensor): Ground truth classes. - - Returns: - tuple: Targets of center, size and direction. - """ - # generate center target - center_target = gt_bboxes_3d.gravity_center - - # generate bbox size target - size_target = gt_bboxes_3d.dims - size_class_target = gt_labels_3d - size_res_target = gt_bboxes_3d.dims - gt_bboxes_3d.tensor.new_tensor( - self.mean_sizes)[size_class_target] - - # generate dir target - box_num = gt_labels_3d.shape[0] - if self.with_rot: - (dir_class_target, - dir_res_target) = self.angle2class(gt_bboxes_3d.yaw) - else: - dir_class_target = gt_labels_3d.new_zeros(box_num) - dir_res_target = gt_bboxes_3d.tensor.new_zeros(box_num) - - return (center_target, size_target, size_class_target, size_res_target, - dir_class_target, dir_res_target) - - def decode(self, bbox_out, prefix=''): - """Decode predicted parts to bbox3d. - - Args: - bbox_out (dict): Predictions from model, should contain keys below. - - - center: predicted bottom center of bboxes. - - dir_class: predicted bbox direction class. - - dir_res: predicted bbox direction residual. - - size_class: predicted bbox size class. - - size_res: predicted bbox size residual. - - size: predicted class-agnostic bbox size - prefix (str, optional): Decode predictions with specific prefix. - Defaults to ''. - - Returns: - torch.Tensor: Decoded bbox3d with shape (batch, n, 7). 
- """ - center = bbox_out[f'{prefix}center'] - batch_size, num_proposal = center.shape[:2] - - # decode heading angle - if self.with_rot: - dir_class = torch.argmax(bbox_out[f'{prefix}dir_class'], -1) - dir_res = torch.gather(bbox_out[f'{prefix}dir_res'], 2, - dir_class.unsqueeze(-1)) - dir_res.squeeze_(2) - dir_angle = self.class2angle(dir_class, dir_res).reshape( - batch_size, num_proposal, 1) - else: - dir_angle = center.new_zeros(batch_size, num_proposal, 1) - - # decode bbox size - if self.size_cls_agnostic: - bbox_size = bbox_out[f'{prefix}size'].reshape( - batch_size, num_proposal, 3) - else: - size_class = torch.argmax( - bbox_out[f'{prefix}size_class'], -1, keepdim=True) - size_res = torch.gather( - bbox_out[f'{prefix}size_res'], 2, - size_class.unsqueeze(-1).repeat(1, 1, 1, 3)) - mean_sizes = center.new_tensor(self.mean_sizes) - size_base = torch.index_select(mean_sizes, 0, - size_class.reshape(-1)) - bbox_size = size_base.reshape(batch_size, num_proposal, - -1) + size_res.squeeze(2) - - bbox3d = torch.cat([center, bbox_size, dir_angle], dim=-1) - return bbox3d - - def split_pred(self, cls_preds, reg_preds, base_xyz, prefix=''): - """Split predicted features to specific parts. - - Args: - cls_preds (torch.Tensor): Class predicted features to split. - reg_preds (torch.Tensor): Regression predicted features to split. - base_xyz (torch.Tensor): Coordinates of points. - prefix (str, optional): Decode predictions with specific prefix. - Defaults to ''. - - Returns: - dict[str, torch.Tensor]: Split results. - """ - results = {} - start, end = 0, 0 - - cls_preds_trans = cls_preds.transpose(2, 1) - reg_preds_trans = reg_preds.transpose(2, 1) - - # decode center - end += 3 - # (batch_size, num_proposal, 3) - results[f'{prefix}center_residual'] = \ - reg_preds_trans[..., start:end].contiguous() - results[f'{prefix}center'] = base_xyz + \ - reg_preds_trans[..., start:end].contiguous() - start = end - - # decode direction - end += self.num_dir_bins - results[f'{prefix}dir_class'] = \ - reg_preds_trans[..., start:end].contiguous() - start = end - - end += self.num_dir_bins - dir_res_norm = reg_preds_trans[..., start:end].contiguous() - start = end - - results[f'{prefix}dir_res_norm'] = dir_res_norm - results[f'{prefix}dir_res'] = dir_res_norm * ( - np.pi / self.num_dir_bins) - - # decode size - if self.size_cls_agnostic: - end += 3 - results[f'{prefix}size'] = \ - reg_preds_trans[..., start:end].contiguous() - else: - end += self.num_sizes - results[f'{prefix}size_class'] = reg_preds_trans[ - ..., start:end].contiguous() - start = end - - end += self.num_sizes * 3 - size_res_norm = reg_preds_trans[..., start:end] - batch_size, num_proposal = reg_preds_trans.shape[:2] - size_res_norm = size_res_norm.view( - [batch_size, num_proposal, self.num_sizes, 3]) - start = end - - results[f'{prefix}size_res_norm'] = size_res_norm.contiguous() - mean_sizes = reg_preds.new_tensor(self.mean_sizes) - results[f'{prefix}size_res'] = ( - size_res_norm * mean_sizes.unsqueeze(0).unsqueeze(0)) - - # decode objectness score - # Group-Free-3D objectness output shape (batch, proposal, 1) - results[f'{prefix}obj_scores'] = cls_preds_trans[..., :1].contiguous() - - # decode semantic score - results[f'{prefix}sem_scores'] = cls_preds_trans[..., 1:].contiguous() - - return results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/monoflex_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/monoflex_bbox_coder.py deleted file mode 100644 index 
e2ada29a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/monoflex_bbox_coder.py +++ /dev/null @@ -1,515 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from torch.nn import functional as F - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class MonoFlexCoder(BaseBBoxCoder): - """Bbox Coder for MonoFlex. - - Args: - depth_mode (str): The mode for depth calculation. - Available options are "linear", "inv_sigmoid", and "exp". - base_depth (tuple[float]): References for decoding box depth. - depth_range (list): Depth range of predicted depth. - combine_depth (bool): Whether to use combined depth (direct depth - and depth from keypoints) or use direct depth only. - uncertainty_range (list): Uncertainty range of predicted depth. - base_dims (tuple[tuple[float]]): Dimensions mean and std of decode bbox - dimensions [l, h, w] for each category. - dims_mode (str): The mode for dimension calculation. - Available options are "linear" and "exp". - multibin (bool): Whether to use multibin representation. - num_dir_bins (int): Number of Number of bins to encode - direction angle. - bin_centers (list[float]): Local yaw centers while using multibin - representations. - bin_margin (float): Margin of multibin representations. - code_size (int): The dimension of boxes to be encoded. - eps (float, optional): A value added to the denominator for numerical - stability. Default 1e-3. - """ - - def __init__(self, - depth_mode, - base_depth, - depth_range, - combine_depth, - uncertainty_range, - base_dims, - dims_mode, - multibin, - num_dir_bins, - bin_centers, - bin_margin, - code_size, - eps=1e-3): - super(MonoFlexCoder, self).__init__() - - # depth related - self.depth_mode = depth_mode - self.base_depth = base_depth - self.depth_range = depth_range - self.combine_depth = combine_depth - self.uncertainty_range = uncertainty_range - - # dimensions related - self.base_dims = base_dims - self.dims_mode = dims_mode - - # orientation related - self.multibin = multibin - self.num_dir_bins = num_dir_bins - self.bin_centers = bin_centers - self.bin_margin = bin_margin - - # output related - self.bbox_code_size = code_size - self.eps = eps - - def encode(self, gt_bboxes_3d): - """Encode ground truth to prediction targets. - - Args: - gt_bboxes_3d (`BaseInstance3DBoxes`): Ground truth 3D bboxes. - shape: (N, 7). - - Returns: - torch.Tensor: Targets of orientations. - """ - local_yaw = gt_bboxes_3d.local_yaw - # encode local yaw (-pi ~ pi) to multibin format - encode_local_yaw = local_yaw.new_zeros( - [local_yaw.shape[0], self.num_dir_bins * 2]) - bin_size = 2 * np.pi / self.num_dir_bins - margin_size = bin_size * self.bin_margin - - bin_centers = local_yaw.new_tensor(self.bin_centers) - range_size = bin_size / 2 + margin_size - - offsets = local_yaw.unsqueeze(1) - bin_centers.unsqueeze(0) - offsets[offsets > np.pi] = offsets[offsets > np.pi] - 2 * np.pi - offsets[offsets < -np.pi] = offsets[offsets < -np.pi] + 2 * np.pi - - for i in range(self.num_dir_bins): - offset = offsets[:, i] - inds = abs(offset) < range_size - encode_local_yaw[inds, i] = 1 - encode_local_yaw[inds, i + self.num_dir_bins] = offset[inds] - - orientation_target = encode_local_yaw - - return orientation_target - - def decode(self, bbox, base_centers2d, labels, downsample_ratio, cam2imgs): - """Decode bounding box regression into 3D predictions. 
- - Args: - bbox (Tensor): Raw bounding box predictions for each - predict center2d point. - shape: (N, C) - base_centers2d (torch.Tensor): Base centers2d for 3D bboxes. - shape: (N, 2). - labels (Tensor): Batch predict class label for each predict - center2d point. - shape: (N, ) - downsample_ratio (int): The stride of feature map. - cam2imgs (Tensor): Batch images' camera intrinsic matrix. - shape: kitti (N, 4, 4) nuscenes (N, 3, 3) - - Return: - dict: The 3D prediction dict decoded from regression map. - the dict has components below: - - bboxes2d (torch.Tensor): Decoded [x1, y1, x2, y2] format - 2D bboxes. - - dimensions (torch.Tensor): Decoded dimensions for each - object. - - offsets2d (torch.Tenosr): Offsets between base centers2d - and real centers2d. - - direct_depth (torch.Tensor): Decoded directly regressed - depth. - - keypoints2d (torch.Tensor): Keypoints of each projected - 3D box on image. - - keypoints_depth (torch.Tensor): Decoded depth from keypoints. - - combined_depth (torch.Tensor): Combined depth using direct - depth and keypoints depth with depth uncertainty. - - orientations (torch.Tensor): Multibin format orientations - (local yaw) for each objects. - """ - - # 4 dimensions for FCOS style regression - pred_bboxes2d = bbox[:, 0:4] - - # change FCOS style to [x1, y1, x2, y2] format for IOU Loss - pred_bboxes2d = self.decode_bboxes2d(pred_bboxes2d, base_centers2d) - - # 2 dimensions for projected centers2d offsets - pred_offsets2d = bbox[:, 4:6] - - # 3 dimensions for 3D bbox dimensions offsets - pred_dimensions_offsets3d = bbox[:, 29:32] - - # the first 8 dimensions are for orientation bin classification - # and the second 8 dimensions are for orientation offsets. - pred_orientations = torch.cat((bbox[:, 32:40], bbox[:, 40:48]), dim=1) - - # 3 dimensions for the uncertainties of the solved depths from - # groups of keypoints - pred_keypoints_depth_uncertainty = bbox[:, 26:29] - - # 1 dimension for the uncertainty of directly regressed depth - pred_direct_depth_uncertainty = bbox[:, 49:50].squeeze(-1) - - # 2 dimension of offsets x keypoints (8 corners + top/bottom center) - pred_keypoints2d = bbox[:, 6:26].reshape(-1, 10, 2) - - # 1 dimension for depth offsets - pred_direct_depth_offsets = bbox[:, 48:49].squeeze(-1) - - # decode the pred residual dimensions to real dimensions - pred_dimensions = self.decode_dims(labels, pred_dimensions_offsets3d) - pred_direct_depth = self.decode_direct_depth(pred_direct_depth_offsets) - pred_keypoints_depth = self.keypoints2depth(pred_keypoints2d, - pred_dimensions, cam2imgs, - downsample_ratio) - - pred_direct_depth_uncertainty = torch.clamp( - pred_direct_depth_uncertainty, self.uncertainty_range[0], - self.uncertainty_range[1]) - pred_keypoints_depth_uncertainty = torch.clamp( - pred_keypoints_depth_uncertainty, self.uncertainty_range[0], - self.uncertainty_range[1]) - - if self.combine_depth: - pred_depth_uncertainty = torch.cat( - (pred_direct_depth_uncertainty.unsqueeze(-1), - pred_keypoints_depth_uncertainty), - dim=1).exp() - pred_depth = torch.cat( - (pred_direct_depth.unsqueeze(-1), pred_keypoints_depth), dim=1) - pred_combined_depth = \ - self.combine_depths(pred_depth, pred_depth_uncertainty) - else: - pred_combined_depth = None - - preds = dict( - bboxes2d=pred_bboxes2d, - dimensions=pred_dimensions, - offsets2d=pred_offsets2d, - keypoints2d=pred_keypoints2d, - orientations=pred_orientations, - direct_depth=pred_direct_depth, - keypoints_depth=pred_keypoints_depth, - combined_depth=pred_combined_depth, - 
direct_depth_uncertainty=pred_direct_depth_uncertainty, - keypoints_depth_uncertainty=pred_keypoints_depth_uncertainty, - ) - - return preds - - def decode_direct_depth(self, depth_offsets): - """Transform depth offset to directly regressed depth. - - Args: - depth_offsets (torch.Tensor): Predicted depth offsets. - shape: (N, ) - - Return: - torch.Tensor: Directly regressed depth. - shape: (N, ) - """ - if self.depth_mode == 'exp': - direct_depth = depth_offsets.exp() - elif self.depth_mode == 'linear': - base_depth = depth_offsets.new_tensor(self.base_depth) - direct_depth = depth_offsets * base_depth[1] + base_depth[0] - elif self.depth_mode == 'inv_sigmoid': - direct_depth = 1 / torch.sigmoid(depth_offsets) - 1 - else: - raise ValueError - - if self.depth_range is not None: - direct_depth = torch.clamp( - direct_depth, min=self.depth_range[0], max=self.depth_range[1]) - - return direct_depth - - def decode_location(self, - base_centers2d, - offsets2d, - depths, - cam2imgs, - downsample_ratio, - pad_mode='default'): - """Retrieve object location. - - Args: - base_centers2d (torch.Tensor): predicted base centers2d. - shape: (N, 2) - offsets2d (torch.Tensor): The offsets between real centers2d - and base centers2d. - shape: (N , 2) - depths (torch.Tensor): Depths of objects. - shape: (N, ) - cam2imgs (torch.Tensor): Batch images' camera intrinsic matrix. - shape: kitti (N, 4, 4) nuscenes (N, 3, 3) - downsample_ratio (int): The stride of feature map. - pad_mode (str, optional): Padding mode used in - training data augmentation. - - Return: - tuple(torch.Tensor): Centers of 3D boxes. - shape: (N, 3) - """ - N = cam2imgs.shape[0] - # (N, 4, 4) - cam2imgs_inv = cam2imgs.inverse() - if pad_mode == 'default': - centers2d_img = (base_centers2d + offsets2d) * downsample_ratio - else: - raise NotImplementedError - # (N, 3) - centers2d_img = \ - torch.cat((centers2d_img, depths.unsqueeze(-1)), dim=1) - # (N, 4, 1) - centers2d_extend = \ - torch.cat((centers2d_img, centers2d_img.new_ones(N, 1)), - dim=1).unsqueeze(-1) - locations = torch.matmul(cam2imgs_inv, centers2d_extend).squeeze(-1) - - return locations[:, :3] - - def keypoints2depth(self, - keypoints2d, - dimensions, - cam2imgs, - downsample_ratio=4, - group0_index=[(7, 3), (0, 4)], - group1_index=[(2, 6), (1, 5)]): - """Decode depth form three groups of keypoints and geometry projection - model. 2D keypoints inlucding 8 coreners and top/bottom centers will be - divided into three groups which will be used to calculate three depths - of object. - - .. code-block:: none - - Group center keypoints: - - + --------------- + - /| top center /| - / | . / | - / | | / | - + ---------|----- + + - | / | | / - | / . | / - |/ bottom center |/ - + --------------- + - - Group 0 keypoints: - - 0 - + -------------- + - /| /| - / | / | - / | 5/ | - + -------------- + + - | /3 | / - | / | / - |/ |/ - + -------------- + 6 - - Group 1 keypoints: - - 4 - + -------------- + - /| /| - / | / | - / | / | - 1 + -------------- + + 7 - | / | / - | / | / - |/ |/ - 2 + -------------- + - - - Args: - keypoints2d (torch.Tensor): Keypoints of objects. - 8 vertices + top/bottom center. - shape: (N, 10, 2) - dimensions (torch.Tensor): Dimensions of objetcts. - shape: (N, 3) - cam2imgs (torch.Tensor): Batch images' camera intrinsic matrix. - shape: kitti (N, 4, 4) nuscenes (N, 3, 3) - downsample_ratio (int, opitonal): The stride of feature map. - Defaults: 4. - group0_index(list[tuple[int]], optional): Keypoints group 0 - of index to calculate the depth. - Defaults: [0, 3, 4, 7]. 
- group1_index(list[tuple[int]], optional): Keypoints group 1 - of index to calculate the depth. - Defaults: [1, 2, 5, 6] - - Return: - tuple(torch.Tensor): Depth computed from three groups of - keypoints (top/bottom, group0, group1) - shape: (N, 3) - """ - - pred_height_3d = dimensions[:, 1].clone() - f_u = cam2imgs[:, 0, 0] - center_height = keypoints2d[:, -2, 1] - keypoints2d[:, -1, 1] - corner_group0_height = keypoints2d[:, group0_index[0], 1] \ - - keypoints2d[:, group0_index[1], 1] - corner_group1_height = keypoints2d[:, group1_index[0], 1] \ - - keypoints2d[:, group1_index[1], 1] - center_depth = f_u * pred_height_3d / ( - F.relu(center_height) * downsample_ratio + self.eps) - corner_group0_depth = (f_u * pred_height_3d).unsqueeze(-1) / ( - F.relu(corner_group0_height) * downsample_ratio + self.eps) - corner_group1_depth = (f_u * pred_height_3d).unsqueeze(-1) / ( - F.relu(corner_group1_height) * downsample_ratio + self.eps) - - corner_group0_depth = corner_group0_depth.mean(dim=1) - corner_group1_depth = corner_group1_depth.mean(dim=1) - - keypoints_depth = torch.stack( - (center_depth, corner_group0_depth, corner_group1_depth), dim=1) - keypoints_depth = torch.clamp( - keypoints_depth, min=self.depth_range[0], max=self.depth_range[1]) - - return keypoints_depth - - def decode_dims(self, labels, dims_offset): - """Retrieve object dimensions. - - Args: - labels (torch.Tensor): Each points' category id. - shape: (N, K) - dims_offset (torch.Tensor): Dimension offsets. - shape: (N, 3) - - Returns: - torch.Tensor: Shape (N, 3) - """ - - if self.dims_mode == 'exp': - dims_offset = dims_offset.exp() - elif self.dims_mode == 'linear': - labels = labels.long() - base_dims = dims_offset.new_tensor(self.base_dims) - dims_mean = base_dims[:, :3] - dims_std = base_dims[:, 3:6] - cls_dimension_mean = dims_mean[labels, :] - cls_dimension_std = dims_std[labels, :] - dimensions = dims_offset * cls_dimension_mean + cls_dimension_std - else: - raise ValueError - - return dimensions - - def decode_orientation(self, ori_vector, locations): - """Retrieve object orientation. - - Args: - ori_vector (torch.Tensor): Local orientation vector - in [axis_cls, head_cls, sin, cos] format. - shape: (N, num_dir_bins * 4) - locations (torch.Tensor): Object location. - shape: (N, 3) - - Returns: - tuple[torch.Tensor]: yaws and local yaws of 3d bboxes. 
- """ - if self.multibin: - pred_bin_cls = ori_vector[:, :self.num_dir_bins * 2].view( - -1, self.num_dir_bins, 2) - pred_bin_cls = pred_bin_cls.softmax(dim=2)[..., 1] - orientations = ori_vector.new_zeros(ori_vector.shape[0]) - for i in range(self.num_dir_bins): - mask_i = (pred_bin_cls.argmax(dim=1) == i) - start_bin = self.num_dir_bins * 2 + i * 2 - end_bin = start_bin + 2 - pred_bin_offset = ori_vector[mask_i, start_bin:end_bin] - orientations[mask_i] = pred_bin_offset[:, 0].atan2( - pred_bin_offset[:, 1]) + self.bin_centers[i] - else: - axis_cls = ori_vector[:, :2].softmax(dim=1) - axis_cls = axis_cls[:, 0] < axis_cls[:, 1] - head_cls = ori_vector[:, 2:4].softmax(dim=1) - head_cls = head_cls[:, 0] < head_cls[:, 1] - # cls axis - orientations = self.bin_centers[axis_cls + head_cls * 2] - sin_cos_offset = F.normalize(ori_vector[:, 4:]) - orientations += sin_cos_offset[:, 0].atan(sin_cos_offset[:, 1]) - - locations = locations.view(-1, 3) - rays = locations[:, 0].atan2(locations[:, 2]) - local_yaws = orientations - yaws = local_yaws + rays - - larger_idx = (yaws > np.pi).nonzero(as_tuple=False) - small_idx = (yaws < -np.pi).nonzero(as_tuple=False) - if len(larger_idx) != 0: - yaws[larger_idx] -= 2 * np.pi - if len(small_idx) != 0: - yaws[small_idx] += 2 * np.pi - - larger_idx = (local_yaws > np.pi).nonzero(as_tuple=False) - small_idx = (local_yaws < -np.pi).nonzero(as_tuple=False) - if len(larger_idx) != 0: - local_yaws[larger_idx] -= 2 * np.pi - if len(small_idx) != 0: - local_yaws[small_idx] += 2 * np.pi - - return yaws, local_yaws - - def decode_bboxes2d(self, reg_bboxes2d, base_centers2d): - """Retrieve [x1, y1, x2, y2] format 2D bboxes. - - Args: - reg_bboxes2d (torch.Tensor): Predicted FCOS style - 2D bboxes. - shape: (N, 4) - base_centers2d (torch.Tensor): predicted base centers2d. - shape: (N, 2) - - Returns: - torch.Tenosr: [x1, y1, x2, y2] format 2D bboxes. - """ - centers_x = base_centers2d[:, 0] - centers_y = base_centers2d[:, 1] - - xs_min = centers_x - reg_bboxes2d[..., 0] - ys_min = centers_y - reg_bboxes2d[..., 1] - xs_max = centers_x + reg_bboxes2d[..., 2] - ys_max = centers_y + reg_bboxes2d[..., 3] - - bboxes2d = torch.stack([xs_min, ys_min, xs_max, ys_max], dim=-1) - - return bboxes2d - - def combine_depths(self, depth, depth_uncertainty): - """Combine all the prediced depths with depth uncertainty. - - Args: - depth (torch.Tensor): Predicted depths of each object. - 2D bboxes. - shape: (N, 4) - depth_uncertainty (torch.Tensor): Depth uncertainty for - each depth of each object. - shape: (N, 4) - - Returns: - torch.Tenosr: combined depth. - """ - uncertainty_weights = 1 / depth_uncertainty - uncertainty_weights = \ - uncertainty_weights / \ - uncertainty_weights.sum(dim=1, keepdim=True) - combined_depth = torch.sum(depth * uncertainty_weights, dim=1) - - return combined_depth diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/partial_bin_based_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/partial_bin_based_bbox_coder.py deleted file mode 100644 index ed8020d7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/partial_bin_based_bbox_coder.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class PartialBinBasedBBoxCoder(BaseBBoxCoder): - """Partial bin based bbox coder. - - Args: - num_dir_bins (int): Number of bins to encode direction angle. - num_sizes (int): Number of size clusters. - mean_sizes (list[list[int]]): Mean size of bboxes in each class. - with_rot (bool): Whether the bbox is with rotation. - """ - - def __init__(self, num_dir_bins, num_sizes, mean_sizes, with_rot=True): - super(PartialBinBasedBBoxCoder, self).__init__() - assert len(mean_sizes) == num_sizes - self.num_dir_bins = num_dir_bins - self.num_sizes = num_sizes - self.mean_sizes = mean_sizes - self.with_rot = with_rot - - def encode(self, gt_bboxes_3d, gt_labels_3d): - """Encode ground truth to prediction targets. - - Args: - gt_bboxes_3d (BaseInstance3DBoxes): Ground truth bboxes - with shape (n, 7). - gt_labels_3d (torch.Tensor): Ground truth classes. - - Returns: - tuple: Targets of center, size and direction. - """ - # generate center target - center_target = gt_bboxes_3d.gravity_center - - # generate bbox size target - size_class_target = gt_labels_3d - size_res_target = gt_bboxes_3d.dims - gt_bboxes_3d.tensor.new_tensor( - self.mean_sizes)[size_class_target] - - # generate dir target - box_num = gt_labels_3d.shape[0] - if self.with_rot: - (dir_class_target, - dir_res_target) = self.angle2class(gt_bboxes_3d.yaw) - else: - dir_class_target = gt_labels_3d.new_zeros(box_num) - dir_res_target = gt_bboxes_3d.tensor.new_zeros(box_num) - - return (center_target, size_class_target, size_res_target, - dir_class_target, dir_res_target) - - def decode(self, bbox_out, suffix=''): - """Decode predicted parts to bbox3d. - - Args: - bbox_out (dict): Predictions from model, should contain keys below. - - - center: predicted bottom center of bboxes. - - dir_class: predicted bbox direction class. - - dir_res: predicted bbox direction residual. - - size_class: predicted bbox size class. - - size_res: predicted bbox size residual. - suffix (str): Decode predictions with specific suffix. - - Returns: - torch.Tensor: Decoded bbox3d with shape (batch, n, 7). - """ - center = bbox_out['center' + suffix] - batch_size, num_proposal = center.shape[:2] - - # decode heading angle - if self.with_rot: - dir_class = torch.argmax(bbox_out['dir_class' + suffix], -1) - dir_res = torch.gather(bbox_out['dir_res' + suffix], 2, - dir_class.unsqueeze(-1)) - dir_res.squeeze_(2) - dir_angle = self.class2angle(dir_class, dir_res).reshape( - batch_size, num_proposal, 1) - else: - dir_angle = center.new_zeros(batch_size, num_proposal, 1) - - # decode bbox size - size_class = torch.argmax( - bbox_out['size_class' + suffix], -1, keepdim=True) - size_res = torch.gather(bbox_out['size_res' + suffix], 2, - size_class.unsqueeze(-1).repeat(1, 1, 1, 3)) - mean_sizes = center.new_tensor(self.mean_sizes) - size_base = torch.index_select(mean_sizes, 0, size_class.reshape(-1)) - bbox_size = size_base.reshape(batch_size, num_proposal, - -1) + size_res.squeeze(2) - - bbox3d = torch.cat([center, bbox_size, dir_angle], dim=-1) - return bbox3d - - def decode_corners(self, center, size_res, size_class): - """Decode center, size residuals and class to corners. Only useful for - axis-aligned bounding boxes, so angle isn't considered. 
- - Args: - center (torch.Tensor): Shape [B, N, 3] - size_res (torch.Tensor): Shape [B, N, 3] or [B, N, C, 3] - size_class (torch.Tensor): Shape: [B, N] or [B, N, 1] - or [B, N, C, 3] - - Returns: - torch.Tensor: Corners with shape [B, N, 6] - """ - if len(size_class.shape) == 2 or size_class.shape[-1] == 1: - batch_size, proposal_num = size_class.shape[:2] - one_hot_size_class = size_res.new_zeros( - (batch_size, proposal_num, self.num_sizes)) - if len(size_class.shape) == 2: - size_class = size_class.unsqueeze(-1) - one_hot_size_class.scatter_(2, size_class, 1) - one_hot_size_class_expand = one_hot_size_class.unsqueeze( - -1).repeat(1, 1, 1, 3).contiguous() - else: - one_hot_size_class_expand = size_class - - if len(size_res.shape) == 4: - size_res = torch.sum(size_res * one_hot_size_class_expand, 2) - - mean_sizes = size_res.new_tensor(self.mean_sizes) - mean_sizes = torch.sum(mean_sizes * one_hot_size_class_expand, 2) - size_full = (size_res + 1) * mean_sizes - size_full = torch.clamp(size_full, 0) - half_size_full = size_full / 2 - corner1 = center - half_size_full - corner2 = center + half_size_full - corners = torch.cat([corner1, corner2], dim=-1) - return corners - - def split_pred(self, cls_preds, reg_preds, base_xyz): - """Split predicted features to specific parts. - - Args: - cls_preds (torch.Tensor): Class predicted features to split. - reg_preds (torch.Tensor): Regression predicted features to split. - base_xyz (torch.Tensor): Coordinates of points. - - Returns: - dict[str, torch.Tensor]: Split results. - """ - results = {} - start, end = 0, 0 - - cls_preds_trans = cls_preds.transpose(2, 1) - reg_preds_trans = reg_preds.transpose(2, 1) - - # decode center - end += 3 - # (batch_size, num_proposal, 3) - results['center'] = base_xyz + \ - reg_preds_trans[..., start:end].contiguous() - start = end - - # decode direction - end += self.num_dir_bins - results['dir_class'] = reg_preds_trans[..., start:end].contiguous() - start = end - - end += self.num_dir_bins - dir_res_norm = reg_preds_trans[..., start:end].contiguous() - start = end - - results['dir_res_norm'] = dir_res_norm - results['dir_res'] = dir_res_norm * (np.pi / self.num_dir_bins) - - # decode size - end += self.num_sizes - results['size_class'] = reg_preds_trans[..., start:end].contiguous() - start = end - - end += self.num_sizes * 3 - size_res_norm = reg_preds_trans[..., start:end] - batch_size, num_proposal = reg_preds_trans.shape[:2] - size_res_norm = size_res_norm.view( - [batch_size, num_proposal, self.num_sizes, 3]) - start = end - - results['size_res_norm'] = size_res_norm.contiguous() - mean_sizes = reg_preds.new_tensor(self.mean_sizes) - results['size_res'] = ( - size_res_norm * mean_sizes.unsqueeze(0).unsqueeze(0)) - - # decode objectness score - start = 0 - end = 2 - results['obj_scores'] = cls_preds_trans[..., start:end].contiguous() - start = end - - # decode semantic score - results['sem_scores'] = cls_preds_trans[..., start:].contiguous() - - return results - - def angle2class(self, angle): - """Convert continuous angle to a discrete class and a residual. - - Convert continuous angle to a discrete class and a small - regression number from class center angle to current angle. - - Args: - angle (torch.Tensor): Angle is from 0-2pi (or -pi~pi), - class center at 0, 1*(2pi/N), 2*(2pi/N) ... (N-1)*(2pi/N). - - Returns: - tuple: Encoded discrete class and residual. 
- """ - angle = angle % (2 * np.pi) - angle_per_class = 2 * np.pi / float(self.num_dir_bins) - shifted_angle = (angle + angle_per_class / 2) % (2 * np.pi) - angle_cls = shifted_angle // angle_per_class - angle_res = shifted_angle - ( - angle_cls * angle_per_class + angle_per_class / 2) - return angle_cls.long(), angle_res - - def class2angle(self, angle_cls, angle_res, limit_period=True): - """Inverse function to angle2class. - - Args: - angle_cls (torch.Tensor): Angle class to decode. - angle_res (torch.Tensor): Angle residual to decode. - limit_period (bool): Whether to limit angle to [-pi, pi]. - - Returns: - torch.Tensor: Angle decoded from angle_cls and angle_res. - """ - angle_per_class = 2 * np.pi / float(self.num_dir_bins) - angle_center = angle_cls.float() * angle_per_class - angle = angle_center + angle_res - if limit_period: - angle[angle > np.pi] -= 2 * np.pi - return angle diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/pgd_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/pgd_bbox_coder.py deleted file mode 100644 index 094ed39d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/pgd_bbox_coder.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from torch.nn import functional as F - -from mmdet.core.bbox.builder import BBOX_CODERS -from .fcos3d_bbox_coder import FCOS3DBBoxCoder - - -@BBOX_CODERS.register_module() -class PGDBBoxCoder(FCOS3DBBoxCoder): - """Bounding box coder for PGD.""" - - def encode(self, gt_bboxes_3d, gt_labels_3d, gt_bboxes, gt_labels): - # TODO: refactor the encoder codes in the FCOS3D and PGD head - pass - - def decode_2d(self, - bbox, - scale, - stride, - max_regress_range, - training, - pred_keypoints=False, - pred_bbox2d=True): - """Decode regressed 2D attributes. - - Args: - bbox (torch.Tensor): Raw bounding box predictions in shape - [N, C, H, W]. - scale (tuple[`Scale`]): Learnable scale parameters. - stride (int): Stride for a specific feature level. - max_regress_range (int): Maximum regression range for a specific - feature level. - training (bool): Whether the decoding is in the training - procedure. - pred_keypoints (bool, optional): Whether to predict keypoints. - Defaults to False. - pred_bbox2d (bool, optional): Whether to predict 2D bounding - boxes. Defaults to False. - - Returns: - torch.Tensor: Decoded boxes. - """ - clone_bbox = bbox.clone() - if pred_keypoints: - scale_kpts = scale[3] - # 2 dimension of offsets x 8 corners of a 3D bbox - bbox[:, self.bbox_code_size:self.bbox_code_size + 16] = \ - torch.tanh(scale_kpts(clone_bbox[ - :, self.bbox_code_size:self.bbox_code_size + 16]).float()) - - if pred_bbox2d: - scale_bbox2d = scale[-1] - # The last four dimensions are offsets to four sides of a 2D bbox - bbox[:, -4:] = scale_bbox2d(clone_bbox[:, -4:]).float() - - if self.norm_on_bbox: - if pred_bbox2d: - bbox[:, -4:] = F.relu(bbox.clone()[:, -4:]) - if not training: - if pred_keypoints: - bbox[ - :, self.bbox_code_size:self.bbox_code_size + 16] *= \ - max_regress_range - if pred_bbox2d: - bbox[:, -4:] *= stride - else: - if pred_bbox2d: - bbox[:, -4:] = bbox.clone()[:, -4:].exp() - return bbox - - def decode_prob_depth(self, depth_cls_preds, depth_range, depth_unit, - division, num_depth_cls): - """Decode probabilistic depth map. 
- - Args: - depth_cls_preds (torch.Tensor): Depth probabilistic map in shape - [..., self.num_depth_cls] (raw output before softmax). - depth_range (tuple[float]): Range of depth estimation. - depth_unit (int): Unit of depth range division. - division (str): Depth division method. Options include 'uniform', - 'linear', 'log', 'loguniform'. - num_depth_cls (int): Number of depth classes. - - Returns: - torch.Tensor: Decoded probabilistic depth estimation. - """ - if division == 'uniform': - depth_multiplier = depth_unit * \ - depth_cls_preds.new_tensor( - list(range(num_depth_cls))).reshape([1, -1]) - prob_depth_preds = (F.softmax(depth_cls_preds.clone(), dim=-1) * - depth_multiplier).sum(dim=-1) - return prob_depth_preds - elif division == 'linear': - split_pts = depth_cls_preds.new_tensor(list( - range(num_depth_cls))).reshape([1, -1]) - depth_multiplier = depth_range[0] + ( - depth_range[1] - depth_range[0]) / \ - (num_depth_cls * (num_depth_cls - 1)) * \ - (split_pts * (split_pts+1)) - prob_depth_preds = (F.softmax(depth_cls_preds.clone(), dim=-1) * - depth_multiplier).sum(dim=-1) - return prob_depth_preds - elif division == 'log': - split_pts = depth_cls_preds.new_tensor(list( - range(num_depth_cls))).reshape([1, -1]) - start = max(depth_range[0], 1) - end = depth_range[1] - depth_multiplier = (np.log(start) + - split_pts * np.log(end / start) / - (num_depth_cls - 1)).exp() - prob_depth_preds = (F.softmax(depth_cls_preds.clone(), dim=-1) * - depth_multiplier).sum(dim=-1) - return prob_depth_preds - elif division == 'loguniform': - split_pts = depth_cls_preds.new_tensor(list( - range(num_depth_cls))).reshape([1, -1]) - start = max(depth_range[0], 1) - end = depth_range[1] - log_multiplier = np.log(start) + \ - split_pts * np.log(end / start) / (num_depth_cls - 1) - prob_depth_preds = (F.softmax(depth_cls_preds.clone(), dim=-1) * - log_multiplier).sum(dim=-1).exp() - return prob_depth_preds - else: - raise NotImplementedError diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/point_xyzwhlr_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/point_xyzwhlr_bbox_coder.py deleted file mode 100644 index d246777b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/point_xyzwhlr_bbox_coder.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class PointXYZWHLRBBoxCoder(BaseBBoxCoder): - """Point based bbox coder for 3D boxes. - - Args: - code_size (int): The dimension of boxes to be encoded. - use_mean_size (bool, optional): Whether using anchors based on class. - Defaults to True. - mean_size (list[list[float]], optional): Mean size of bboxes in - each class. Defaults to None. - """ - - def __init__(self, code_size=7, use_mean_size=True, mean_size=None): - super(PointXYZWHLRBBoxCoder, self).__init__() - self.code_size = code_size - self.use_mean_size = use_mean_size - if self.use_mean_size: - self.mean_size = torch.from_numpy(np.array(mean_size)).float() - assert self.mean_size.min() > 0, \ - f'The min of mean_size should > 0, however currently it is '\ - f'{self.mean_size.min()}, please check it in your config.' - - def encode(self, gt_bboxes_3d, points, gt_labels_3d=None): - """Encode ground truth to prediction targets. 
- - Args: - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth bboxes - with shape (N, 7 + C). - points (torch.Tensor): Point cloud with shape (N, 3). - gt_labels_3d (torch.Tensor, optional): Ground truth classes. - Defaults to None. - - Returns: - torch.Tensor: Encoded boxes with shape (N, 8 + C). - """ - gt_bboxes_3d[:, 3:6] = torch.clamp_min(gt_bboxes_3d[:, 3:6], min=1e-5) - - xg, yg, zg, dxg, dyg, dzg, rg, *cgs = torch.split( - gt_bboxes_3d, 1, dim=-1) - xa, ya, za = torch.split(points, 1, dim=-1) - - if self.use_mean_size: - assert gt_labels_3d.max() <= self.mean_size.shape[0] - 1, \ - f'the max gt label {gt_labels_3d.max()} is bigger than' \ - f'anchor types {self.mean_size.shape[0] - 1}.' - self.mean_size = self.mean_size.to(gt_labels_3d.device) - point_anchor_size = self.mean_size[gt_labels_3d] - dxa, dya, dza = torch.split(point_anchor_size, 1, dim=-1) - diagonal = torch.sqrt(dxa**2 + dya**2) - xt = (xg - xa) / diagonal - yt = (yg - ya) / diagonal - zt = (zg - za) / dza - dxt = torch.log(dxg / dxa) - dyt = torch.log(dyg / dya) - dzt = torch.log(dzg / dza) - else: - xt = (xg - xa) - yt = (yg - ya) - zt = (zg - za) - dxt = torch.log(dxg) - dyt = torch.log(dyg) - dzt = torch.log(dzg) - - return torch.cat( - [xt, yt, zt, dxt, dyt, dzt, - torch.cos(rg), - torch.sin(rg), *cgs], - dim=-1) - - def decode(self, box_encodings, points, pred_labels_3d=None): - """Decode predicted parts and points to bbox3d. - - Args: - box_encodings (torch.Tensor): Encoded boxes with shape (N, 8 + C). - points (torch.Tensor): Point cloud with shape (N, 3). - pred_labels_3d (torch.Tensor): Bbox predicted labels (N, M). - - Returns: - torch.Tensor: Decoded boxes with shape (N, 7 + C) - """ - xt, yt, zt, dxt, dyt, dzt, cost, sint, *cts = torch.split( - box_encodings, 1, dim=-1) - xa, ya, za = torch.split(points, 1, dim=-1) - - if self.use_mean_size: - assert pred_labels_3d.max() <= self.mean_size.shape[0] - 1, \ - f'The max pred label {pred_labels_3d.max()} is bigger than' \ - f'anchor types {self.mean_size.shape[0] - 1}.' - self.mean_size = self.mean_size.to(pred_labels_3d.device) - point_anchor_size = self.mean_size[pred_labels_3d] - dxa, dya, dza = torch.split(point_anchor_size, 1, dim=-1) - diagonal = torch.sqrt(dxa**2 + dya**2) - xg = xt * diagonal + xa - yg = yt * diagonal + ya - zg = zt * dza + za - - dxg = torch.exp(dxt) * dxa - dyg = torch.exp(dyt) * dya - dzg = torch.exp(dzt) * dza - else: - xg = xt + xa - yg = yt + ya - zg = zt + za - dxg, dyg, dzg = torch.split( - torch.exp(box_encodings[..., 3:6]), 1, dim=-1) - - rg = torch.atan2(sint, cost) - - return torch.cat([xg, yg, zg, dxg, dyg, dzg, rg, *cts], dim=-1) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/smoke_bbox_coder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/smoke_bbox_coder.py deleted file mode 100644 index 66aae917..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/coders/smoke_bbox_coder.py +++ /dev/null @@ -1,216 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -import numpy as np -import torch - -from mmdet.core.bbox import BaseBBoxCoder -from mmdet.core.bbox.builder import BBOX_CODERS - - -@BBOX_CODERS.register_module() -class SMOKECoder(BaseBBoxCoder): - """Bbox Coder for SMOKE. - - Args: - base_depth (tuple[float]): Depth references for decode box depth. 
- base_dims (tuple[tuple[float]]): Dimension references [l, h, w] - for decode box dimension for each category. - code_size (int): The dimension of boxes to be encoded. - """ - - def __init__(self, base_depth, base_dims, code_size): - super(SMOKECoder, self).__init__() - self.base_depth = base_depth - self.base_dims = base_dims - self.bbox_code_size = code_size - - def encode(self, locations, dimensions, orientations, input_metas): - """Encode CameraInstance3DBoxes by locations, dimensions, orientations. - - Args: - locations (Tensor): Center location for 3D boxes. - (N, 3) - dimensions (Tensor): Dimensions for 3D boxes. - shape (N, 3) - orientations (Tensor): Orientations for 3D boxes. - shape (N, 1) - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Return: - :obj:`CameraInstance3DBoxes`: 3D bboxes of batch images, - shape (N, bbox_code_size). - """ - - bboxes = torch.cat((locations, dimensions, orientations), dim=1) - assert bboxes.shape[1] == self.bbox_code_size, 'bboxes shape dose not'\ - 'match the bbox_code_size.' - batch_bboxes = input_metas[0]['box_type_3d']( - bboxes, box_dim=self.bbox_code_size) - - return batch_bboxes - - def decode(self, - reg, - points, - labels, - cam2imgs, - trans_mats, - locations=None): - """Decode regression into locations, dimensions, orientations. - - Args: - reg (Tensor): Batch regression for each predict center2d point. - shape: (batch * K (max_objs), C) - points(Tensor): Batch projected bbox centers on image plane. - shape: (batch * K (max_objs) , 2) - labels (Tensor): Batch predict class label for each predict - center2d point. - shape: (batch, K (max_objs)) - cam2imgs (Tensor): Batch images' camera intrinsic matrix. - shape: kitti (batch, 4, 4) nuscenes (batch, 3, 3) - trans_mats (Tensor): transformation matrix from original image - to feature map. - shape: (batch, 3, 3) - locations (None | Tensor): if locations is None, this function - is used to decode while inference, otherwise, it's used while - training using the ground truth 3d bbox locations. - shape: (batch * K (max_objs), 3) - - Return: - tuple(Tensor): The tuple has components below: - - locations (Tensor): Centers of 3D boxes. - shape: (batch * K (max_objs), 3) - - dimensions (Tensor): Dimensions of 3D boxes. - shape: (batch * K (max_objs), 3) - - orientations (Tensor): Orientations of 3D - boxes. - shape: (batch * K (max_objs), 1) - """ - depth_offsets = reg[:, 0] - centers2d_offsets = reg[:, 1:3] - dimensions_offsets = reg[:, 3:6] - orientations = reg[:, 6:8] - depths = self._decode_depth(depth_offsets) - # get the 3D Bounding box's center location. - pred_locations = self._decode_location(points, centers2d_offsets, - depths, cam2imgs, trans_mats) - pred_dimensions = self._decode_dimension(labels, dimensions_offsets) - if locations is None: - pred_orientations = self._decode_orientation( - orientations, pred_locations) - else: - pred_orientations = self._decode_orientation( - orientations, locations) - - return pred_locations, pred_dimensions, pred_orientations - - def _decode_depth(self, depth_offsets): - """Transform depth offset to depth.""" - base_depth = depth_offsets.new_tensor(self.base_depth) - depths = depth_offsets * base_depth[1] + base_depth[0] - - return depths - - def _decode_location(self, points, centers2d_offsets, depths, cam2imgs, - trans_mats): - """Retrieve objects location in camera coordinate based on projected - points. 
- - Args: - points (Tensor): Projected points on feature map in (x, y) - shape: (batch * K, 2) - centers2d_offset (Tensor): Project points offset in - (delta_x, delta_y). shape: (batch * K, 2) - depths (Tensor): Object depth z. - shape: (batch * K) - cam2imgs (Tensor): Batch camera intrinsics matrix. - shape: kitti (batch, 4, 4) nuscenes (batch, 3, 3) - trans_mats (Tensor): transformation matrix from original image - to feature map. - shape: (batch, 3, 3) - """ - # number of points - N = centers2d_offsets.shape[0] - # batch_size - N_batch = cam2imgs.shape[0] - batch_id = torch.arange(N_batch).unsqueeze(1) - obj_id = batch_id.repeat(1, N // N_batch).flatten() - # trans_mats_inv = trans_mats.inverse()[obj_id] - # cam2imgs_inv = cam2imgs.inverse()[obj_id] - - #change for smoke - device = trans_mats.device - trans_mats_inv = trans_mats.cpu().inverse()[obj_id].to(device) - cam2imgs_inv = cam2imgs.cpu().inverse()[obj_id].to(device) - - centers2d = points + centers2d_offsets - centers2d_extend = torch.cat((centers2d, centers2d.new_ones(N, 1)), - dim=1) - # expand project points as [N, 3, 1] - centers2d_extend = centers2d_extend.unsqueeze(-1) - # transform project points back on original image - centers2d_img = torch.matmul(trans_mats_inv, centers2d_extend) - centers2d_img = centers2d_img * depths.view(N, -1, 1) - if cam2imgs.shape[1] == 4: - centers2d_img = torch.cat( - (centers2d_img, centers2d.new_ones(N, 1, 1)), dim=1) - locations = torch.matmul(cam2imgs_inv, centers2d_img).squeeze(2) - - return locations[:, :3] - - def _decode_dimension(self, labels, dims_offset): - """Transform dimension offsets to dimension according to its category. - - Args: - labels (Tensor): Each points' category id. - shape: (N, K) - dims_offset (Tensor): Dimension offsets. - shape: (N, 3) - """ - labels = labels.flatten().long() - base_dims = dims_offset.new_tensor(self.base_dims) - dims_select = base_dims[labels, :] - dimensions = dims_offset.exp() * dims_select - - return dimensions - - def _decode_orientation(self, ori_vector, locations): - """Retrieve object orientation. - - Args: - ori_vector (Tensor): Local orientation in [sin, cos] format. - shape: (N, 2) - locations (Tensor): Object location. - shape: (N, 3) - - Return: - Tensor: yaw(Orientation). Notice that the yaw's - range is [-np.pi, np.pi]. - shape:(N, 1) - """ - assert len(ori_vector) == len(locations) - locations = locations.view(-1, 3) - rays = torch.atan(locations[:, 0] / (locations[:, 2] + 1e-7)) - alphas = torch.atan(ori_vector[:, 0] / (ori_vector[:, 1] + 1e-7)) - - # get cosine value positive and negative index. - cos_pos_inds = (ori_vector[:, 1] >= 0).nonzero(as_tuple=False) - cos_neg_inds = (ori_vector[:, 1] < 0).nonzero(as_tuple=False) - - alphas[cos_pos_inds] -= np.pi / 2 - alphas[cos_neg_inds] += np.pi / 2 - # retrieve object rotation y angle. - yaws = alphas + rays - - larger_inds = (yaws > np.pi).nonzero(as_tuple=False) - small_inds = (yaws < -np.pi).nonzero(as_tuple=False) - - if len(larger_inds) != 0: - yaws[larger_inds] -= 2 * np.pi - if len(small_inds) != 0: - yaws[small_inds] += 2 * np.pi - - yaws = yaws.unsqueeze(-1) - return yaws diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/iou_calculators/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/iou_calculators/__init__.py deleted file mode 100644 index d2faf69c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/iou_calculators/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -from .iou3d_calculator import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D, - BboxOverlapsNearest3D, - axis_aligned_bbox_overlaps_3d, bbox_overlaps_3d, - bbox_overlaps_nearest_3d) - -__all__ = [ - 'BboxOverlapsNearest3D', 'BboxOverlaps3D', 'bbox_overlaps_nearest_3d', - 'bbox_overlaps_3d', 'AxisAlignedBboxOverlaps3D', - 'axis_aligned_bbox_overlaps_3d' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py deleted file mode 100644 index 2b1d8eab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py +++ /dev/null @@ -1,329 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core.bbox import bbox_overlaps -from mmdet.core.bbox.iou_calculators.builder import IOU_CALCULATORS -from ..structures import get_box_type - - -@IOU_CALCULATORS.register_module() -class BboxOverlapsNearest3D(object): - """Nearest 3D IoU Calculator. - - Note: - This IoU calculator first finds the nearest 2D boxes in bird eye view - (BEV), and then calculates the 2D IoU using :meth:`bbox_overlaps`. - - Args: - coordinate (str): 'camera', 'lidar', or 'depth' coordinate system. - """ - - def __init__(self, coordinate='lidar'): - assert coordinate in ['camera', 'lidar', 'depth'] - self.coordinate = coordinate - - def __call__(self, bboxes1, bboxes2, mode='iou', is_aligned=False): - """Calculate nearest 3D IoU. - - Note: - If ``is_aligned`` is ``False``, then it calculates the ious between - each bbox of bboxes1 and bboxes2, otherwise it calculates the ious - between each aligned pair of bboxes1 and bboxes2. - - Args: - bboxes1 (torch.Tensor): shape (N, 7+N) - [x, y, z, x_size, y_size, z_size, ry, v]. - bboxes2 (torch.Tensor): shape (M, 7+N) - [x, y, z, x_size, y_size, z_size, ry, v]. - mode (str): "iou" (intersection over union) or iof - (intersection over foreground). - is_aligned (bool): Whether the calculation is aligned. - - Return: - torch.Tensor: If ``is_aligned`` is ``True``, return ious between - bboxes1 and bboxes2 with shape (M, N). If ``is_aligned`` is - ``False``, return shape is M. - """ - return bbox_overlaps_nearest_3d(bboxes1, bboxes2, mode, is_aligned, - self.coordinate) - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(coordinate={self.coordinate}' - return repr_str - - -@IOU_CALCULATORS.register_module() -class BboxOverlaps3D(object): - """3D IoU Calculator. - - Args: - coordinate (str): The coordinate system, valid options are - 'camera', 'lidar', and 'depth'. - """ - - def __init__(self, coordinate): - assert coordinate in ['camera', 'lidar', 'depth'] - self.coordinate = coordinate - - def __call__(self, bboxes1, bboxes2, mode='iou'): - """Calculate 3D IoU using cuda implementation. - - Note: - This function calculate the IoU of 3D boxes based on their volumes. - IoU calculator ``:class:BboxOverlaps3D`` uses this function to - calculate the actual 3D IoUs of boxes. - - Args: - bboxes1 (torch.Tensor): with shape (N, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - bboxes2 (torch.Tensor): with shape (M, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - mode (str): "iou" (intersection over union) or - iof (intersection over foreground). 
- - Return: - torch.Tensor: Bbox overlaps results of bboxes1 and bboxes2 - with shape (M, N) (aligned mode is not supported currently). - """ - return bbox_overlaps_3d(bboxes1, bboxes2, mode, self.coordinate) - - def __repr__(self): - """str: return a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(coordinate={self.coordinate}' - return repr_str - - -def bbox_overlaps_nearest_3d(bboxes1, - bboxes2, - mode='iou', - is_aligned=False, - coordinate='lidar'): - """Calculate nearest 3D IoU. - - Note: - This function first finds the nearest 2D boxes in bird eye view - (BEV), and then calculates the 2D IoU using :meth:`bbox_overlaps`. - This IoU calculator :class:`BboxOverlapsNearest3D` uses this - function to calculate IoUs of boxes. - - If ``is_aligned`` is ``False``, then it calculates the ious between - each bbox of bboxes1 and bboxes2, otherwise the ious between each - aligned pair of bboxes1 and bboxes2. - - Args: - bboxes1 (torch.Tensor): with shape (N, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - bboxes2 (torch.Tensor): with shape (M, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - mode (str): "iou" (intersection over union) or iof - (intersection over foreground). - is_aligned (bool): Whether the calculation is aligned - - Return: - torch.Tensor: If ``is_aligned`` is ``True``, return ious between - bboxes1 and bboxes2 with shape (M, N). If ``is_aligned`` is - ``False``, return shape is M. - """ - assert bboxes1.size(-1) == bboxes2.size(-1) >= 7 - - box_type, _ = get_box_type(coordinate) - - bboxes1 = box_type(bboxes1, box_dim=bboxes1.shape[-1]) - bboxes2 = box_type(bboxes2, box_dim=bboxes2.shape[-1]) - - # Change the bboxes to bev - # box conversion and iou calculation in torch version on CUDA - # is 10x faster than that in numpy version - bboxes1_bev = bboxes1.nearest_bev - bboxes2_bev = bboxes2.nearest_bev - - ret = bbox_overlaps( - bboxes1_bev, bboxes2_bev, mode=mode, is_aligned=is_aligned) - return ret - - -def bbox_overlaps_3d(bboxes1, bboxes2, mode='iou', coordinate='camera'): - """Calculate 3D IoU using cuda implementation. - - Note: - This function calculates the IoU of 3D boxes based on their volumes. - IoU calculator :class:`BboxOverlaps3D` uses this function to - calculate the actual IoUs of boxes. - - Args: - bboxes1 (torch.Tensor): with shape (N, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - bboxes2 (torch.Tensor): with shape (M, 7+C), - (x, y, z, x_size, y_size, z_size, ry, v*). - mode (str): "iou" (intersection over union) or - iof (intersection over foreground). - coordinate (str): 'camera' or 'lidar' coordinate system. - - Return: - torch.Tensor: Bbox overlaps results of bboxes1 and bboxes2 - with shape (M, N) (aligned mode is not supported currently). - """ - assert bboxes1.size(-1) == bboxes2.size(-1) >= 7 - - box_type, _ = get_box_type(coordinate) - - bboxes1 = box_type(bboxes1, box_dim=bboxes1.shape[-1]) - bboxes2 = box_type(bboxes2, box_dim=bboxes2.shape[-1]) - - return bboxes1.overlaps(bboxes1, bboxes2, mode=mode) - - -@IOU_CALCULATORS.register_module() -class AxisAlignedBboxOverlaps3D(object): - """Axis-aligned 3D Overlaps (IoU) Calculator.""" - - def __call__(self, bboxes1, bboxes2, mode='iou', is_aligned=False): - """Calculate IoU between 2D bboxes. - - Args: - bboxes1 (Tensor): shape (B, m, 6) in - format or empty. - bboxes2 (Tensor): shape (B, n, 6) in - format or empty. - B indicates the batch dim, in shape (B1, B2, ..., Bn). - If ``is_aligned`` is ``True``, then m and n must be equal. 
- mode (str): "iou" (intersection over union) or "giou" (generalized - intersection over union). - is_aligned (bool, optional): If True, then m and n must be equal. - Defaults to False. - Returns: - Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,) - """ - assert bboxes1.size(-1) == bboxes2.size(-1) == 6 - return axis_aligned_bbox_overlaps_3d(bboxes1, bboxes2, mode, - is_aligned) - - def __repr__(self): - """str: a string describing the module""" - repr_str = self.__class__.__name__ + '()' - return repr_str - - -def axis_aligned_bbox_overlaps_3d(bboxes1, - bboxes2, - mode='iou', - is_aligned=False, - eps=1e-6): - """Calculate overlap between two set of axis aligned 3D bboxes. If - ``is_aligned`` is ``False``, then calculate the overlaps between each bbox - of bboxes1 and bboxes2, otherwise the overlaps between each aligned pair of - bboxes1 and bboxes2. - - Args: - bboxes1 (Tensor): shape (B, m, 6) in - format or empty. - bboxes2 (Tensor): shape (B, n, 6) in - format or empty. - B indicates the batch dim, in shape (B1, B2, ..., Bn). - If ``is_aligned`` is ``True``, then m and n must be equal. - mode (str): "iou" (intersection over union) or "giou" (generalized - intersection over union). - is_aligned (bool, optional): If True, then m and n must be equal. - Defaults to False. - eps (float, optional): A value added to the denominator for numerical - stability. Defaults to 1e-6. - - Returns: - Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,) - - Example: - >>> bboxes1 = torch.FloatTensor([ - >>> [0, 0, 0, 10, 10, 10], - >>> [10, 10, 10, 20, 20, 20], - >>> [32, 32, 32, 38, 40, 42], - >>> ]) - >>> bboxes2 = torch.FloatTensor([ - >>> [0, 0, 0, 10, 20, 20], - >>> [0, 10, 10, 10, 19, 20], - >>> [10, 10, 10, 20, 20, 20], - >>> ]) - >>> overlaps = axis_aligned_bbox_overlaps_3d(bboxes1, bboxes2) - >>> assert overlaps.shape == (3, 3) - >>> overlaps = bbox_overlaps(bboxes1, bboxes2, is_aligned=True) - >>> assert overlaps.shape == (3, ) - Example: - >>> empty = torch.empty(0, 6) - >>> nonempty = torch.FloatTensor([[0, 0, 0, 10, 9, 10]]) - >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1) - >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0) - >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0) - """ - - assert mode in ['iou', 'giou'], f'Unsupported mode {mode}' - # Either the boxes are empty or the length of boxes's last dimension is 6 - assert (bboxes1.size(-1) == 6 or bboxes1.size(0) == 0) - assert (bboxes2.size(-1) == 6 or bboxes2.size(0) == 0) - - # Batch dim must be the same - # Batch dim: (B1, B2, ... 
Bn) - assert bboxes1.shape[:-2] == bboxes2.shape[:-2] - batch_shape = bboxes1.shape[:-2] - - rows = bboxes1.size(-2) - cols = bboxes2.size(-2) - if is_aligned: - assert rows == cols - - if rows * cols == 0: - if is_aligned: - return bboxes1.new(batch_shape + (rows, )) - else: - return bboxes1.new(batch_shape + (rows, cols)) - - area1 = (bboxes1[..., 3] - - bboxes1[..., 0]) * (bboxes1[..., 4] - bboxes1[..., 1]) * ( - bboxes1[..., 5] - bboxes1[..., 2]) - area2 = (bboxes2[..., 3] - - bboxes2[..., 0]) * (bboxes2[..., 4] - bboxes2[..., 1]) * ( - bboxes2[..., 5] - bboxes2[..., 2]) - - if is_aligned: - lt = torch.max(bboxes1[..., :3], bboxes2[..., :3]) # [B, rows, 3] - rb = torch.min(bboxes1[..., 3:], bboxes2[..., 3:]) # [B, rows, 3] - - wh = (rb - lt).clamp(min=0) # [B, rows, 2] - overlap = wh[..., 0] * wh[..., 1] * wh[..., 2] - - if mode in ['iou', 'giou']: - union = area1 + area2 - overlap - else: - union = area1 - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :3], bboxes2[..., :3]) - enclosed_rb = torch.max(bboxes1[..., 3:], bboxes2[..., 3:]) - else: - lt = torch.max(bboxes1[..., :, None, :3], - bboxes2[..., None, :, :3]) # [B, rows, cols, 3] - rb = torch.min(bboxes1[..., :, None, 3:], - bboxes2[..., None, :, 3:]) # [B, rows, cols, 3] - - wh = (rb - lt).clamp(min=0) # [B, rows, cols, 3] - overlap = wh[..., 0] * wh[..., 1] * wh[..., 2] - - if mode in ['iou', 'giou']: - union = area1[..., None] + area2[..., None, :] - overlap - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :, None, :3], - bboxes2[..., None, :, :3]) - enclosed_rb = torch.max(bboxes1[..., :, None, 3:], - bboxes2[..., None, :, 3:]) - - eps = union.new_tensor([eps]) - union = torch.max(union, eps) - ious = overlap / union - if mode in ['iou']: - return ious - # calculate gious - enclose_wh = (enclosed_rb - enclosed_lt).clamp(min=0) - enclose_area = enclose_wh[..., 0] * enclose_wh[..., 1] * enclose_wh[..., 2] - enclose_area = torch.max(enclose_area, eps) - gious = ious - (enclose_area - union) / enclose_area - return gious diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/samplers/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/samplers/__init__.py deleted file mode 100644 index 168780b2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/samplers/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.core.bbox.samplers import (BaseSampler, CombinedSampler, - InstanceBalancedPosSampler, - IoUBalancedNegSampler, OHEMSampler, - PseudoSampler, RandomSampler, - SamplingResult) -from .iou_neg_piecewise_sampler import IoUNegPiecewiseSampler - -__all__ = [ - 'BaseSampler', 'PseudoSampler', 'RandomSampler', - 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler', - 'OHEMSampler', 'SamplingResult', 'IoUNegPiecewiseSampler' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/samplers/iou_neg_piecewise_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/samplers/iou_neg_piecewise_sampler.py deleted file mode 100644 index cbd8483c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/samplers/iou_neg_piecewise_sampler.py +++ /dev/null @@ -1,183 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core.bbox.builder import BBOX_SAMPLERS -from . 
import RandomSampler, SamplingResult - - -@BBOX_SAMPLERS.register_module() -class IoUNegPiecewiseSampler(RandomSampler): - """IoU Piece-wise Sampling. - - Sampling negative proposals according to a list of IoU thresholds. - The negative proposals are divided into several pieces according - to `neg_iou_piece_thrs`. And the ratio of each piece is indicated - by `neg_piece_fractions`. - - Args: - num (int): Number of proposals. - pos_fraction (float): The fraction of positive proposals. - neg_piece_fractions (list): A list contains fractions that indicates - the ratio of each piece of total negative samplers. - neg_iou_piece_thrs (list): A list contains IoU thresholds that - indicate the upper bound of this piece. - neg_pos_ub (float): The total ratio to limit the upper bound - number of negative samples. - add_gt_as_proposals (bool): Whether to add gt as proposals. - """ - - def __init__(self, - num, - pos_fraction=None, - neg_piece_fractions=None, - neg_iou_piece_thrs=None, - neg_pos_ub=-1, - add_gt_as_proposals=False, - return_iou=False): - super(IoUNegPiecewiseSampler, - self).__init__(num, pos_fraction, neg_pos_ub, - add_gt_as_proposals) - assert isinstance(neg_piece_fractions, list) - assert len(neg_piece_fractions) == len(neg_iou_piece_thrs) - self.neg_piece_fractions = neg_piece_fractions - self.neg_iou_thr = neg_iou_piece_thrs - self.return_iou = return_iou - self.neg_piece_num = len(self.neg_piece_fractions) - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Randomly sample some positive samples.""" - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Randomly sample some negative samples.""" - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= 0: - return neg_inds.squeeze(1) - else: - neg_inds_choice = neg_inds.new_zeros([0]) - extend_num = 0 - max_overlaps = assign_result.max_overlaps[neg_inds] - - for piece_inds in range(self.neg_piece_num): - if piece_inds == self.neg_piece_num - 1: # for the last piece - piece_expected_num = num_expected - len(neg_inds_choice) - min_iou_thr = 0 - else: - # if the numbers of negative samplers in previous - # pieces are less than the expected number, extend - # the same number in the current piece. 
- piece_expected_num = int( - num_expected * - self.neg_piece_fractions[piece_inds]) + extend_num - min_iou_thr = self.neg_iou_thr[piece_inds + 1] - max_iou_thr = self.neg_iou_thr[piece_inds] - piece_neg_inds = torch.nonzero( - (max_overlaps >= min_iou_thr) - & (max_overlaps < max_iou_thr), - as_tuple=False).view(-1) - - if len(piece_neg_inds) < piece_expected_num: - neg_inds_choice = torch.cat( - [neg_inds_choice, neg_inds[piece_neg_inds]], dim=0) - extend_num += piece_expected_num - len(piece_neg_inds) - - # for the last piece - if piece_inds == self.neg_piece_num - 1: - extend_neg_num = num_expected - len(neg_inds_choice) - # if the numbers of nagetive samples > 0, we will - # randomly select num_expected samples in last piece - if piece_neg_inds.numel() > 0: - rand_idx = torch.randint( - low=0, - high=piece_neg_inds.numel(), - size=(extend_neg_num, )).long() - neg_inds_choice = torch.cat( - [neg_inds_choice, piece_neg_inds[rand_idx]], - dim=0) - # if the numbers of nagetive samples == 0, we will - # randomly select num_expected samples in all - # previous pieces - else: - rand_idx = torch.randint( - low=0, - high=neg_inds_choice.numel(), - size=(extend_neg_num, )).long() - neg_inds_choice = torch.cat( - [neg_inds_choice, neg_inds_choice[rand_idx]], - dim=0) - else: - piece_choice = self.random_choice(piece_neg_inds, - piece_expected_num) - neg_inds_choice = torch.cat( - [neg_inds_choice, neg_inds[piece_choice]], dim=0) - extend_num = 0 - assert len(neg_inds_choice) == num_expected - return neg_inds_choice - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - bboxes (torch.Tensor): Boxes to be sampled from. - gt_bboxes (torch.Tensor): Ground truth bboxes. - gt_labels (torch.Tensor, optional): Class labels of ground truth - bboxes. - - Returns: - :obj:`SamplingResult`: Sampling result. - """ - if len(bboxes.shape) < 2: - bboxes = bboxes[None, :] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.bool) - if self.add_gt_as_proposals and len(gt_bboxes) > 0: - if gt_labels is None: - raise ValueError( - 'gt_labels must be given when add_gt_as_proposals is True') - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.bool) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - # We found that sampled indices have duplicated items occasionally. - # (may be a bug of PyTorch) - pos_inds = pos_inds.unique() - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds = self.neg_sampler._sample_neg( - assign_result, num_expected_neg, bboxes=bboxes, **kwargs) - - sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags) - if self.return_iou: - # PartA2 needs iou score to regression. 
- sampling_result.iou = assign_result.max_overlaps[torch.cat( - [pos_inds, neg_inds])] - sampling_result.iou.detach_() - - return sampling_result diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/__init__.py deleted file mode 100644 index 460035a5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_box3d import BaseInstance3DBoxes -from .box_3d_mode import Box3DMode -from .cam_box3d import CameraInstance3DBoxes -from .coord_3d_mode import Coord3DMode -from .depth_box3d import DepthInstance3DBoxes -from .lidar_box3d import LiDARInstance3DBoxes -from .utils import (get_box_type, get_proj_mat_by_coord_type, limit_period, - mono_cam_box2vis, points_cam2img, points_img2cam, - rotation_3d_in_axis, xywhr2xyxyr) - -__all__ = [ - 'Box3DMode', 'BaseInstance3DBoxes', 'LiDARInstance3DBoxes', - 'CameraInstance3DBoxes', 'DepthInstance3DBoxes', 'xywhr2xyxyr', - 'get_box_type', 'rotation_3d_in_axis', 'limit_period', 'points_cam2img', - 'points_img2cam', 'Coord3DMode', 'mono_cam_box2vis', - 'get_proj_mat_by_coord_type' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/base_box3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/base_box3d.py deleted file mode 100644 index 3c74f670..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/base_box3d.py +++ /dev/null @@ -1,578 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from abc import abstractmethod - -import numpy as np -import torch -from mmcv.ops import box_iou_rotated, points_in_boxes_all, points_in_boxes_part - -from .utils import limit_period - - -class BaseInstance3DBoxes(object): - """Base class for 3D Boxes. - - Note: - The box is bottom centered, i.e. the relative position of origin in - the box is (0.5, 0.5, 0). - - Args: - tensor (torch.Tensor | np.ndarray | list): a N x box_dim matrix. - box_dim (int): Number of the dimension of a box. - Each row is (x, y, z, x_size, y_size, z_size, yaw). - Defaults to 7. - with_yaw (bool): Whether the box is with yaw rotation. - If False, the value of yaw will be set to 0 as minmax boxes. - Defaults to True. - origin (tuple[float], optional): Relative position of the box origin. - Defaults to (0.5, 0.5, 0). This will guide the box be converted to - (0.5, 0.5, 0) mode. - - Attributes: - tensor (torch.Tensor): Float matrix of N x box_dim. - box_dim (int): Integer indicating the dimension of a box. - Each row is (x, y, z, x_size, y_size, z_size, yaw, ...). - with_yaw (bool): If True, the value of yaw will be set to 0 as minmax - boxes. 
- """ - - def __init__(self, tensor, box_dim=7, with_yaw=True, origin=(0.5, 0.5, 0)): - if isinstance(tensor, torch.Tensor): - device = tensor.device - else: - device = torch.device('cpu') - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=device) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that - # does not depend on the inputs (and consequently confuses jit) - tensor = tensor.reshape((0, box_dim)).to( - dtype=torch.float32, device=device) - assert tensor.dim() == 2 and tensor.size(-1) == box_dim, tensor.size() - - if tensor.shape[-1] == 6: - # If the dimension of boxes is 6, we expand box_dim by padding - # 0 as a fake yaw and set with_yaw to False. - assert box_dim == 6 - fake_rot = tensor.new_zeros(tensor.shape[0], 1) - tensor = torch.cat((tensor, fake_rot), dim=-1) - self.box_dim = box_dim + 1 - self.with_yaw = False - else: - self.box_dim = box_dim - self.with_yaw = with_yaw - self.tensor = tensor.clone() - - if origin != (0.5, 0.5, 0): - dst = self.tensor.new_tensor((0.5, 0.5, 0)) - src = self.tensor.new_tensor(origin) - self.tensor[:, :3] += self.tensor[:, 3:6] * (dst - src) - - @property - def volume(self): - """torch.Tensor: A vector with volume of each box.""" - return self.tensor[:, 3] * self.tensor[:, 4] * self.tensor[:, 5] - - @property - def dims(self): - """torch.Tensor: Size dimensions of each box in shape (N, 3).""" - return self.tensor[:, 3:6] - - @property - def yaw(self): - """torch.Tensor: A vector with yaw of each box in shape (N, ).""" - return self.tensor[:, 6] - - @property - def height(self): - """torch.Tensor: A vector with height of each box in shape (N, ).""" - return self.tensor[:, 5] - - @property - def top_height(self): - """torch.Tensor: - A vector with the top height of each box in shape (N, ).""" - return self.bottom_height + self.height - - @property - def bottom_height(self): - """torch.Tensor: - A vector with bottom's height of each box in shape (N, ).""" - return self.tensor[:, 2] - - @property - def center(self): - """Calculate the center of all the boxes. - - Note: - In MMDetection3D's convention, the bottom center is - usually taken as the default center. - - The relative position of the centers in different kinds of - boxes are different, e.g., the relative center of a boxes is - (0.5, 1.0, 0.5) in camera and (0.5, 0.5, 0) in lidar. - It is recommended to use ``bottom_center`` or ``gravity_center`` - for clearer usage. - - Returns: - torch.Tensor: A tensor with center of each box in shape (N, 3). 
- """ - return self.bottom_center - - @property - def bottom_center(self): - """torch.Tensor: A tensor with center of each box in shape (N, 3).""" - return self.tensor[:, :3] - - @property - def gravity_center(self): - """torch.Tensor: A tensor with center of each box in shape (N, 3).""" - pass - - @property - def corners(self): - """torch.Tensor: - a tensor with 8 corners of each box in shape (N, 8, 3).""" - pass - - @property - def bev(self): - """torch.Tensor: 2D BEV box of each box with rotation - in XYWHR format, in shape (N, 5).""" - return self.tensor[:, [0, 1, 3, 4, 6]] - - @property - def nearest_bev(self): - """torch.Tensor: A tensor of 2D BEV box of each box - without rotation.""" - # Obtain BEV boxes with rotation in XYWHR format - bev_rotated_boxes = self.bev - # convert the rotation to a valid range - rotations = bev_rotated_boxes[:, -1] - normed_rotations = torch.abs(limit_period(rotations, 0.5, np.pi)) - - # find the center of boxes - conditions = (normed_rotations > np.pi / 4)[..., None] - bboxes_xywh = torch.where(conditions, bev_rotated_boxes[:, - [0, 1, 3, 2]], - bev_rotated_boxes[:, :4]) - - centers = bboxes_xywh[:, :2] - dims = bboxes_xywh[:, 2:] - bev_boxes = torch.cat([centers - dims / 2, centers + dims / 2], dim=-1) - return bev_boxes - - def in_range_bev(self, box_range): - """Check whether the boxes are in the given range. - - Args: - box_range (list | torch.Tensor): the range of box - (x_min, y_min, x_max, y_max) - - Note: - The original implementation of SECOND checks whether boxes in - a range by checking whether the points are in a convex - polygon, we reduce the burden for simpler cases. - - Returns: - torch.Tensor: Whether each box is inside the reference range. - """ - in_range_flags = ((self.bev[:, 0] > box_range[0]) - & (self.bev[:, 1] > box_range[1]) - & (self.bev[:, 0] < box_range[2]) - & (self.bev[:, 1] < box_range[3])) - return in_range_flags - - @abstractmethod - def rotate(self, angle, points=None): - """Rotate boxes with points (optional) with the given angle or rotation - matrix. - - Args: - angle (float | torch.Tensor | np.ndarray): - Rotation angle or rotation matrix. - points (torch.Tensor | numpy.ndarray | - :obj:`BasePoints`, optional): - Points to rotate. Defaults to None. - """ - pass - - @abstractmethod - def flip(self, bev_direction='horizontal'): - """Flip the boxes in BEV along given BEV direction. - - Args: - bev_direction (str, optional): Direction by which to flip. - Can be chosen from 'horizontal' and 'vertical'. - Defaults to 'horizontal'. - """ - pass - - def translate(self, trans_vector): - """Translate boxes with the given translation vector. - - Args: - trans_vector (torch.Tensor): Translation vector of size (1, 3). - """ - if not isinstance(trans_vector, torch.Tensor): - trans_vector = self.tensor.new_tensor(trans_vector) - self.tensor[:, :3] += trans_vector - - def in_range_3d(self, box_range): - """Check whether the boxes are in the given range. - - Args: - box_range (list | torch.Tensor): The range of box - (x_min, y_min, z_min, x_max, y_max, z_max) - - Note: - In the original implementation of SECOND, checking whether - a box in the range checks whether the points are in a convex - polygon, we try to reduce the burden for simpler cases. - - Returns: - torch.Tensor: A binary vector indicating whether each box is - inside the reference range. 
- """ - in_range_flags = ((self.tensor[:, 0] > box_range[0]) - & (self.tensor[:, 1] > box_range[1]) - & (self.tensor[:, 2] > box_range[2]) - & (self.tensor[:, 0] < box_range[3]) - & (self.tensor[:, 1] < box_range[4]) - & (self.tensor[:, 2] < box_range[5])) - return in_range_flags - - @abstractmethod - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`Box3DMode`): The target Box mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BaseInstance3DBoxes`: The converted box of the same type - in the `dst` mode. - """ - pass - - def scale(self, scale_factor): - """Scale the box with horizontal and vertical scaling factors. - - Args: - scale_factors (float): Scale factors to scale the boxes. - """ - self.tensor[:, :6] *= scale_factor - self.tensor[:, 7:] *= scale_factor # velocity - - def limit_yaw(self, offset=0.5, period=np.pi): - """Limit the yaw to a given period and offset. - - Args: - offset (float, optional): The offset of the yaw. Defaults to 0.5. - period (float, optional): The expected period. Defaults to np.pi. - """ - self.tensor[:, 6] = limit_period(self.tensor[:, 6], offset, period) - - def nonempty(self, threshold=0.0): - """Find boxes that are non-empty. - - A box is considered empty, - if either of its side is no larger than threshold. - - Args: - threshold (float, optional): The threshold of minimal sizes. - Defaults to 0.0. - - Returns: - torch.Tensor: A binary vector which represents whether each - box is empty (False) or non-empty (True). - """ - box = self.tensor - size_x = box[..., 3] - size_y = box[..., 4] - size_z = box[..., 5] - keep = ((size_x > threshold) - & (size_y > threshold) & (size_z > threshold)) - return keep - - def __getitem__(self, item): - """ - Note: - The following usage are allowed: - 1. `new_boxes = boxes[3]`: - return a `Boxes` that contains only one box. - 2. `new_boxes = boxes[2:10]`: - return a slice of boxes. - 3. `new_boxes = boxes[vector]`: - where vector is a torch.BoolTensor with `length = len(boxes)`. - Nonzero elements in the vector will be selected. - Note that the returned Boxes might share storage with this Boxes, - subject to Pytorch's indexing semantics. - - Returns: - :obj:`BaseInstance3DBoxes`: A new object of - :class:`BaseInstance3DBoxes` after indexing. - """ - original_type = type(self) - if isinstance(item, int): - return original_type( - self.tensor[item].view(1, -1), - box_dim=self.box_dim, - with_yaw=self.with_yaw) - b = self.tensor[item] - assert b.dim() == 2, \ - f'Indexing on Boxes with {item} failed to return a matrix!' - return original_type(b, box_dim=self.box_dim, with_yaw=self.with_yaw) - - def __len__(self): - """int: Number of boxes in the current object.""" - return self.tensor.shape[0] - - def __repr__(self): - """str: Return a strings that describes the object.""" - return self.__class__.__name__ + '(\n ' + str(self.tensor) + ')' - - @classmethod - def cat(cls, boxes_list): - """Concatenate a list of Boxes into a single Boxes. - - Args: - boxes_list (list[:obj:`BaseInstance3DBoxes`]): List of boxes. - - Returns: - :obj:`BaseInstance3DBoxes`: The concatenated Boxes. 
- """ - assert isinstance(boxes_list, (list, tuple)) - if len(boxes_list) == 0: - return cls(torch.empty(0)) - assert all(isinstance(box, cls) for box in boxes_list) - - # use torch.cat (v.s. layers.cat) - # so the returned boxes never share storage with input - cat_boxes = cls( - torch.cat([b.tensor for b in boxes_list], dim=0), - box_dim=boxes_list[0].tensor.shape[1], - with_yaw=boxes_list[0].with_yaw) - return cat_boxes - - def to(self, device): - """Convert current boxes to a specific device. - - Args: - device (str | :obj:`torch.device`): The name of the device. - - Returns: - :obj:`BaseInstance3DBoxes`: A new boxes object on the - specific device. - """ - original_type = type(self) - return original_type( - self.tensor.to(device), - box_dim=self.box_dim, - with_yaw=self.with_yaw) - - def clone(self): - """Clone the Boxes. - - Returns: - :obj:`BaseInstance3DBoxes`: Box object with the same properties - as self. - """ - original_type = type(self) - return original_type( - self.tensor.clone(), box_dim=self.box_dim, with_yaw=self.with_yaw) - - @property - def device(self): - """str: The device of the boxes are on.""" - return self.tensor.device - - def __iter__(self): - """Yield a box as a Tensor of shape (4,) at a time. - - Returns: - torch.Tensor: A box of shape (4,). - """ - yield from self.tensor - - @classmethod - def height_overlaps(cls, boxes1, boxes2, mode='iou'): - """Calculate height overlaps of two boxes. - - Note: - This function calculates the height overlaps between boxes1 and - boxes2, boxes1 and boxes2 should be in the same type. - - Args: - boxes1 (:obj:`BaseInstance3DBoxes`): Boxes 1 contain N boxes. - boxes2 (:obj:`BaseInstance3DBoxes`): Boxes 2 contain M boxes. - mode (str, optional): Mode of IoU calculation. Defaults to 'iou'. - - Returns: - torch.Tensor: Calculated iou of boxes. - """ - assert isinstance(boxes1, BaseInstance3DBoxes) - assert isinstance(boxes2, BaseInstance3DBoxes) - assert type(boxes1) == type(boxes2), '"boxes1" and "boxes2" should' \ - f'be in the same type, got {type(boxes1)} and {type(boxes2)}.' - - boxes1_top_height = boxes1.top_height.view(-1, 1) - boxes1_bottom_height = boxes1.bottom_height.view(-1, 1) - boxes2_top_height = boxes2.top_height.view(1, -1) - boxes2_bottom_height = boxes2.bottom_height.view(1, -1) - - heighest_of_bottom = torch.max(boxes1_bottom_height, - boxes2_bottom_height) - lowest_of_top = torch.min(boxes1_top_height, boxes2_top_height) - overlaps_h = torch.clamp(lowest_of_top - heighest_of_bottom, min=0) - return overlaps_h - - @classmethod - def overlaps(cls, boxes1, boxes2, mode='iou'): - """Calculate 3D overlaps of two boxes. - - Note: - This function calculates the overlaps between ``boxes1`` and - ``boxes2``, ``boxes1`` and ``boxes2`` should be in the same type. - - Args: - boxes1 (:obj:`BaseInstance3DBoxes`): Boxes 1 contain N boxes. - boxes2 (:obj:`BaseInstance3DBoxes`): Boxes 2 contain M boxes. - mode (str, optional): Mode of iou calculation. Defaults to 'iou'. - - Returns: - torch.Tensor: Calculated 3D overlaps of the boxes. - """ - assert isinstance(boxes1, BaseInstance3DBoxes) - assert isinstance(boxes2, BaseInstance3DBoxes) - assert type(boxes1) == type(boxes2), '"boxes1" and "boxes2" should' \ - f'be in the same type, got {type(boxes1)} and {type(boxes2)}.' 
- - assert mode in ['iou', 'iof'] - - rows = len(boxes1) - cols = len(boxes2) - if rows * cols == 0: - return boxes1.tensor.new(rows, cols) - - # height overlap - overlaps_h = cls.height_overlaps(boxes1, boxes2) - - # bev overlap - iou2d = box_iou_rotated(boxes1.bev, boxes2.bev) - areas1 = (boxes1.bev[:, 2] * boxes1.bev[:, 3]).unsqueeze(1).expand( - rows, cols) - areas2 = (boxes2.bev[:, 2] * boxes2.bev[:, 3]).unsqueeze(0).expand( - rows, cols) - overlaps_bev = iou2d * (areas1 + areas2) / (1 + iou2d) - - # 3d overlaps - overlaps_3d = overlaps_bev.to(boxes1.device) * overlaps_h - - volume1 = boxes1.volume.view(-1, 1) - volume2 = boxes2.volume.view(1, -1) - - if mode == 'iou': - # the clamp func is used to avoid division of 0 - iou3d = overlaps_3d / torch.clamp( - volume1 + volume2 - overlaps_3d, min=1e-8) - else: - iou3d = overlaps_3d / torch.clamp(volume1, min=1e-8) - - return iou3d - - def new_box(self, data): - """Create a new box object with data. - - The new box and its tensor has the similar properties - as self and self.tensor, respectively. - - Args: - data (torch.Tensor | numpy.array | list): Data to be copied. - - Returns: - :obj:`BaseInstance3DBoxes`: A new bbox object with ``data``, - the object's other properties are similar to ``self``. - """ - new_tensor = self.tensor.new_tensor(data) \ - if not isinstance(data, torch.Tensor) else data.to(self.device) - original_type = type(self) - return original_type( - new_tensor, box_dim=self.box_dim, with_yaw=self.with_yaw) - - def points_in_boxes_part(self, points, boxes_override=None): - """Find the box in which each point is. - - Args: - points (torch.Tensor): Points in shape (1, M, 3) or (M, 3), - 3 dimensions are (x, y, z) in LiDAR or depth coordinate. - boxes_override (torch.Tensor, optional): Boxes to override - `self.tensor`. Defaults to None. - - Returns: - torch.Tensor: The index of the first box that each point - is in, in shape (M, ). Default value is -1 - (if the point is not enclosed by any box). - - Note: - If a point is enclosed by multiple boxes, the index of the - first box will be returned. - """ - if boxes_override is not None: - boxes = boxes_override - else: - boxes = self.tensor - if points.dim() == 2: - points = points.unsqueeze(0) - box_idx = points_in_boxes_part(points, - boxes.unsqueeze(0).to( - points.device)).squeeze(0) - return box_idx - - def points_in_boxes_all(self, points, boxes_override=None): - """Find all boxes in which each point is. - - Args: - points (torch.Tensor): Points in shape (1, M, 3) or (M, 3), - 3 dimensions are (x, y, z) in LiDAR or depth coordinate. - boxes_override (torch.Tensor, optional): Boxes to override - `self.tensor`. Defaults to None. - - Returns: - torch.Tensor: A tensor indicating whether a point is in a box, - in shape (M, T). T is the number of boxes. Denote this - tensor as A, if the m^th point is in the t^th box, then - `A[m, t] == 1`, elsewise `A[m, t] == 0`. 
- """ - if boxes_override is not None: - boxes = boxes_override - else: - boxes = self.tensor - - points_clone = points.clone()[..., :3] - if points_clone.dim() == 2: - points_clone = points_clone.unsqueeze(0) - else: - assert points_clone.dim() == 3 and points_clone.shape[0] == 1 - - boxes = boxes.to(points_clone.device).unsqueeze(0) - box_idxs_of_pts = points_in_boxes_all(points_clone, boxes) - - return box_idxs_of_pts.squeeze(0) - - def points_in_boxes(self, points, boxes_override=None): - warnings.warn('DeprecationWarning: points_in_boxes is a ' - 'deprecated method, please consider using ' - 'points_in_boxes_part.') - return self.points_in_boxes_part(points, boxes_override) - - def points_in_boxes_batch(self, points, boxes_override=None): - warnings.warn('DeprecationWarning: points_in_boxes_batch is a ' - 'deprecated method, please consider using ' - 'points_in_boxes_all.') - return self.points_in_boxes_all(points, boxes_override) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/box_3d_mode.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/box_3d_mode.py deleted file mode 100644 index 3048b0ad..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/box_3d_mode.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import IntEnum, unique - -import numpy as np -import torch - -from .base_box3d import BaseInstance3DBoxes -from .cam_box3d import CameraInstance3DBoxes -from .depth_box3d import DepthInstance3DBoxes -from .lidar_box3d import LiDARInstance3DBoxes -from .utils import limit_period - - -@unique -class Box3DMode(IntEnum): - r"""Enum of different ways to represent a box. - - Coordinates in LiDAR: - - .. code-block:: none - - up z - ^ x front - | / - | / - left y <------ 0 - - The relative coordinate of bottom center in a LiDAR box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - - Coordinates in camera: - - .. code-block:: none - - z front - / - / - 0 ------> x right - | - | - v - down y - - The relative coordinate of bottom center in a CAM box is [0.5, 1.0, 0.5], - and the yaw is around the y axis, thus the rotation axis=1. - - Coordinates in Depth mode: - - .. code-block:: none - - up z - ^ y front - | / - | / - 0 ------> x right - - The relative coordinate of bottom center in a DEPTH box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - """ - - LIDAR = 0 - CAM = 1 - DEPTH = 2 - - @staticmethod - def convert(box, src, dst, rt_mat=None, with_yaw=True): - """Convert boxes from `src` mode to `dst` mode. - - Args: - box (tuple | list | np.ndarray | - torch.Tensor | :obj:`BaseInstance3DBoxes`): - Can be a k-tuple, k-list or an Nxk array/tensor, where k = 7. - src (:obj:`Box3DMode`): The src Box mode. - dst (:obj:`Box3DMode`): The target Box mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - with_yaw (bool, optional): If `box` is an instance of - :obj:`BaseInstance3DBoxes`, whether or not it has a yaw angle. - Defaults to True. - - Returns: - (tuple | list | np.ndarray | torch.Tensor | - :obj:`BaseInstance3DBoxes`): - The converted box of the same type. 
- """ - if src == dst: - return box - - is_numpy = isinstance(box, np.ndarray) - is_Instance3DBoxes = isinstance(box, BaseInstance3DBoxes) - single_box = isinstance(box, (list, tuple)) - if single_box: - assert len(box) >= 7, ( - 'Box3DMode.convert takes either a k-tuple/list or ' - 'an Nxk array/tensor, where k >= 7') - arr = torch.tensor(box)[None, :] - else: - # avoid modifying the input box - if is_numpy: - arr = torch.from_numpy(np.asarray(box)).clone() - elif is_Instance3DBoxes: - arr = box.tensor.clone() - else: - arr = box.clone() - - if is_Instance3DBoxes: - with_yaw = box.with_yaw - - # convert box from `src` mode to `dst` mode. - x_size, y_size, z_size = arr[..., 3:4], arr[..., 4:5], arr[..., 5:6] - if with_yaw: - yaw = arr[..., 6:7] - if src == Box3DMode.LIDAR and dst == Box3DMode.CAM: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, -1, 0], [0, 0, -1], [1, 0, 0]]) - xyz_size = torch.cat([x_size, z_size, y_size], dim=-1) - if with_yaw: - yaw = -yaw - np.pi / 2 - yaw = limit_period(yaw, period=np.pi * 2) - elif src == Box3DMode.CAM and dst == Box3DMode.LIDAR: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, 0, 1], [-1, 0, 0], [0, -1, 0]]) - xyz_size = torch.cat([x_size, z_size, y_size], dim=-1) - if with_yaw: - yaw = -yaw - np.pi / 2 - yaw = limit_period(yaw, period=np.pi * 2) - elif src == Box3DMode.DEPTH and dst == Box3DMode.CAM: - if rt_mat is None: - rt_mat = arr.new_tensor([[1, 0, 0], [0, 0, -1], [0, 1, 0]]) - xyz_size = torch.cat([x_size, z_size, y_size], dim=-1) - if with_yaw: - yaw = -yaw - elif src == Box3DMode.CAM and dst == Box3DMode.DEPTH: - if rt_mat is None: - rt_mat = arr.new_tensor([[1, 0, 0], [0, 0, 1], [0, -1, 0]]) - xyz_size = torch.cat([x_size, z_size, y_size], dim=-1) - if with_yaw: - yaw = -yaw - elif src == Box3DMode.LIDAR and dst == Box3DMode.DEPTH: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, -1, 0], [1, 0, 0], [0, 0, 1]]) - xyz_size = torch.cat([x_size, y_size, z_size], dim=-1) - if with_yaw: - yaw = yaw + np.pi / 2 - yaw = limit_period(yaw, period=np.pi * 2) - elif src == Box3DMode.DEPTH and dst == Box3DMode.LIDAR: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, 1, 0], [-1, 0, 0], [0, 0, 1]]) - xyz_size = torch.cat([x_size, y_size, z_size], dim=-1) - if with_yaw: - yaw = yaw - np.pi / 2 - yaw = limit_period(yaw, period=np.pi * 2) - else: - raise NotImplementedError( - f'Conversion from Box3DMode {src} to {dst} ' - 'is not supported yet') - - if not isinstance(rt_mat, torch.Tensor): - rt_mat = arr.new_tensor(rt_mat) - if rt_mat.size(1) == 4: - extended_xyz = torch.cat( - [arr[..., :3], arr.new_ones(arr.size(0), 1)], dim=-1) - xyz = extended_xyz @ rt_mat.t() - else: - xyz = arr[..., :3] @ rt_mat.t() - - if with_yaw: - remains = arr[..., 7:] - arr = torch.cat([xyz[..., :3], xyz_size, yaw, remains], dim=-1) - else: - remains = arr[..., 6:] - arr = torch.cat([xyz[..., :3], xyz_size, remains], dim=-1) - - # convert arr to the original type - original_type = type(box) - if single_box: - return original_type(arr.flatten().tolist()) - if is_numpy: - return arr.numpy() - elif is_Instance3DBoxes: - if dst == Box3DMode.CAM: - target_type = CameraInstance3DBoxes - elif dst == Box3DMode.LIDAR: - target_type = LiDARInstance3DBoxes - elif dst == Box3DMode.DEPTH: - target_type = DepthInstance3DBoxes - else: - raise NotImplementedError( - f'Conversion to {dst} through {original_type}' - ' is not supported yet') - return target_type(arr, box_dim=arr.size(-1), with_yaw=with_yaw) - else: - return arr diff --git 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/cam_box3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/cam_box3d.py deleted file mode 100644 index b7086134..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/cam_box3d.py +++ /dev/null @@ -1,354 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ...points import BasePoints -from .base_box3d import BaseInstance3DBoxes -from .utils import rotation_3d_in_axis, yaw2local - - -class CameraInstance3DBoxes(BaseInstance3DBoxes): - """3D boxes of instances in CAM coordinates. - - Coordinates in camera: - - .. code-block:: none - - z front (yaw=-0.5*pi) - / - / - 0 ------> x right (yaw=0) - | - | - v - down y - - The relative coordinate of bottom center in a CAM box is (0.5, 1.0, 0.5), - and the yaw is around the y axis, thus the rotation axis=1. - The yaw is 0 at the positive direction of x axis, and decreases from - the positive direction of x to the positive direction of z. - - Attributes: - tensor (torch.Tensor): Float matrix in shape (N, box_dim). - box_dim (int): Integer indicating the dimension of a box - Each row is (x, y, z, x_size, y_size, z_size, yaw, ...). - with_yaw (bool): If True, the value of yaw will be set to 0 as - axis-aligned boxes tightly enclosing the original boxes. - """ - YAW_AXIS = 1 - - def __init__(self, - tensor, - box_dim=7, - with_yaw=True, - origin=(0.5, 1.0, 0.5)): - if isinstance(tensor, torch.Tensor): - device = tensor.device - else: - device = torch.device('cpu') - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=device) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that - # does not depend on the inputs (and consequently confuses jit) - tensor = tensor.reshape((0, box_dim)).to( - dtype=torch.float32, device=device) - assert tensor.dim() == 2 and tensor.size(-1) == box_dim, tensor.size() - - if tensor.shape[-1] == 6: - # If the dimension of boxes is 6, we expand box_dim by padding - # 0 as a fake yaw and set with_yaw to False. - assert box_dim == 6 - fake_rot = tensor.new_zeros(tensor.shape[0], 1) - tensor = torch.cat((tensor, fake_rot), dim=-1) - self.box_dim = box_dim + 1 - self.with_yaw = False - else: - self.box_dim = box_dim - self.with_yaw = with_yaw - self.tensor = tensor.clone() - - if origin != (0.5, 1.0, 0.5): - dst = self.tensor.new_tensor((0.5, 1.0, 0.5)) - src = self.tensor.new_tensor(origin) - self.tensor[:, :3] += self.tensor[:, 3:6] * (dst - src) - - @property - def height(self): - """torch.Tensor: A vector with height of each box in shape (N, ).""" - return self.tensor[:, 4] - - @property - def top_height(self): - """torch.Tensor: - A vector with the top height of each box in shape (N, ).""" - # the positive direction is down rather than up - return self.bottom_height - self.height - - @property - def bottom_height(self): - """torch.Tensor: - A vector with bottom's height of each box in shape (N, ).""" - return self.tensor[:, 1] - - @property - def local_yaw(self): - """torch.Tensor: - A vector with local yaw of each box in shape (N, ). - local_yaw equals to alpha in kitti, which is commonly - used in monocular 3D object detection task, so only - :obj:`CameraInstance3DBoxes` has the property. 
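The constructor above re-references incoming boxes with `tensor[:, :3] += tensor[:, 3:6] * (dst - src)`. A minimal sketch of that shift, moving a gravity-centered box (origin (0.5, 0.5, 0.5)) to the stored bottom-center camera convention (0.5, 1.0, 0.5); all values are made up:

```python
import numpy as np

# x, y, z, dx, dy, dz, yaw -- the point given is the box's gravity center.
box = np.array([1.0, 0.0, 8.0, 1.6, 1.5, 4.0, 0.0])
src = np.array([0.5, 0.5, 0.5])   # origin the input was expressed in
dst = np.array([0.5, 1.0, 0.5])   # origin CameraInstance3DBoxes stores

# Only y changes: it moves down (+y) by half the box height.
box[:3] += box[3:6] * (dst - src)
print(box[:3])                    # [1.   0.75 8.  ]
```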
- """ - yaw = self.yaw - loc = self.gravity_center - local_yaw = yaw2local(yaw, loc) - - return local_yaw - - @property - def gravity_center(self): - """torch.Tensor: A tensor with center of each box in shape (N, 3).""" - bottom_center = self.bottom_center - gravity_center = torch.zeros_like(bottom_center) - gravity_center[:, [0, 2]] = bottom_center[:, [0, 2]] - gravity_center[:, 1] = bottom_center[:, 1] - self.tensor[:, 4] * 0.5 - return gravity_center - - @property - def corners(self): - """torch.Tensor: Coordinates of corners of all the boxes in - shape (N, 8, 3). - - Convert the boxes to in clockwise order, in the form of - (x0y0z0, x0y0z1, x0y1z1, x0y1z0, x1y0z0, x1y0z1, x1y1z1, x1y1z0) - - .. code-block:: none - - front z - / - / - (x0, y0, z1) + ----------- + (x1, y0, z1) - /| / | - / | / | - (x0, y0, z0) + ----------- + + (x1, y1, z1) - | / . | / - | / origin | / - (x0, y1, z0) + ----------- + -------> x right - | (x1, y1, z0) - | - v - down y - """ - if self.tensor.numel() == 0: - return torch.empty([0, 8, 3], device=self.tensor.device) - - dims = self.dims - corners_norm = torch.from_numpy( - np.stack(np.unravel_index(np.arange(8), [2] * 3), axis=1)).to( - device=dims.device, dtype=dims.dtype) - - corners_norm = corners_norm[[0, 1, 3, 2, 4, 5, 7, 6]] - # use relative origin [0.5, 1, 0.5] - corners_norm = corners_norm - dims.new_tensor([0.5, 1, 0.5]) - corners = dims.view([-1, 1, 3]) * corners_norm.reshape([1, 8, 3]) - - corners = rotation_3d_in_axis( - corners, self.tensor[:, 6], axis=self.YAW_AXIS) - corners += self.tensor[:, :3].view(-1, 1, 3) - return corners - - @property - def bev(self): - """torch.Tensor: 2D BEV box of each box with rotation - in XYWHR format, in shape (N, 5).""" - bev = self.tensor[:, [0, 2, 3, 5, 6]].clone() - # positive direction of the gravity axis - # in cam coord system points to the earth - # so the bev yaw angle needs to be reversed - bev[:, -1] = -bev[:, -1] - return bev - - def rotate(self, angle, points=None): - """Rotate boxes with points (optional) with the given angle or rotation - matrix. - - Args: - angle (float | torch.Tensor | np.ndarray): - Rotation angle or rotation matrix. - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to rotate. Defaults to None. - - Returns: - tuple or None: When ``points`` is None, the function returns - None, otherwise it returns the rotated points and the - rotation matrix ``rot_mat_T``. - """ - if not isinstance(angle, torch.Tensor): - angle = self.tensor.new_tensor(angle) - - assert angle.shape == torch.Size([3, 3]) or angle.numel() == 1, \ - f'invalid rotation angle shape {angle.shape}' - - if angle.numel() == 1: - self.tensor[:, 0:3], rot_mat_T = rotation_3d_in_axis( - self.tensor[:, 0:3], - angle, - axis=self.YAW_AXIS, - return_mat=True) - else: - rot_mat_T = angle - rot_sin = rot_mat_T[2, 0] - rot_cos = rot_mat_T[0, 0] - angle = np.arctan2(rot_sin, rot_cos) - self.tensor[:, 0:3] = self.tensor[:, 0:3] @ rot_mat_T - - self.tensor[:, 6] += angle - - if points is not None: - if isinstance(points, torch.Tensor): - points[:, :3] = points[:, :3] @ rot_mat_T - elif isinstance(points, np.ndarray): - rot_mat_T = rot_mat_T.cpu().numpy() - points[:, :3] = np.dot(points[:, :3], rot_mat_T) - elif isinstance(points, BasePoints): - points.rotate(rot_mat_T) - else: - raise ValueError - return points, rot_mat_T - - def flip(self, bev_direction='horizontal', points=None): - """Flip the boxes in BEV along given BEV direction. - - In CAM coordinates, it flips the x (horizontal) or z (vertical) axis. 
- - Args: - bev_direction (str): Flip direction (horizontal or vertical). - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to flip. Defaults to None. - - Returns: - torch.Tensor, numpy.ndarray or None: Flipped points. - """ - assert bev_direction in ('horizontal', 'vertical') - if bev_direction == 'horizontal': - self.tensor[:, 0::7] = -self.tensor[:, 0::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] + np.pi - elif bev_direction == 'vertical': - self.tensor[:, 2::7] = -self.tensor[:, 2::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] - - if points is not None: - assert isinstance(points, (torch.Tensor, np.ndarray, BasePoints)) - if isinstance(points, (torch.Tensor, np.ndarray)): - if bev_direction == 'horizontal': - points[:, 0] = -points[:, 0] - elif bev_direction == 'vertical': - points[:, 2] = -points[:, 2] - elif isinstance(points, BasePoints): - points.flip(bev_direction) - return points - - @classmethod - def height_overlaps(cls, boxes1, boxes2, mode='iou'): - """Calculate height overlaps of two boxes. - - This function calculates the height overlaps between ``boxes1`` and - ``boxes2``, where ``boxes1`` and ``boxes2`` should be in the same type. - - Args: - boxes1 (:obj:`CameraInstance3DBoxes`): Boxes 1 contain N boxes. - boxes2 (:obj:`CameraInstance3DBoxes`): Boxes 2 contain M boxes. - mode (str, optional): Mode of iou calculation. Defaults to 'iou'. - - Returns: - torch.Tensor: Calculated iou of boxes' heights. - """ - assert isinstance(boxes1, CameraInstance3DBoxes) - assert isinstance(boxes2, CameraInstance3DBoxes) - - boxes1_top_height = boxes1.top_height.view(-1, 1) - boxes1_bottom_height = boxes1.bottom_height.view(-1, 1) - boxes2_top_height = boxes2.top_height.view(1, -1) - boxes2_bottom_height = boxes2.bottom_height.view(1, -1) - - # positive direction of the gravity axis - # in cam coord system points to the earth - heighest_of_bottom = torch.min(boxes1_bottom_height, - boxes2_bottom_height) - lowest_of_top = torch.max(boxes1_top_height, boxes2_top_height) - overlaps_h = torch.clamp(heighest_of_bottom - lowest_of_top, min=0) - return overlaps_h - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`Box3DMode`): The target Box mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from ``src`` coordinates to ``dst`` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BaseInstance3DBoxes`: - The converted box of the same type in the ``dst`` mode. - """ - from .box_3d_mode import Box3DMode - return Box3DMode.convert( - box=self, src=Box3DMode.CAM, dst=dst, rt_mat=rt_mat) - - def points_in_boxes_part(self, points, boxes_override=None): - """Find the box in which each point is. - - Args: - points (torch.Tensor): Points in shape (1, M, 3) or (M, 3), - 3 dimensions are (x, y, z) in LiDAR or depth coordinate. - boxes_override (torch.Tensor, optional): Boxes to override - `self.tensor `. Defaults to None. - - Returns: - torch.Tensor: The index of the box in which - each point is, in shape (M, ). Default value is -1 - (if the point is not enclosed by any box). 
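Because +y points down in camera coordinates, `height_overlaps` below takes the `min` of the bottoms and the `max` of the tops. A two-box toy check in plain Python (numbers invented):

```python
bottom1, h1 = 2.0, 1.5           # box 1: bottom y (larger = lower) and height
bottom2, h2 = 1.8, 1.0           # box 2
top1, top2 = bottom1 - h1, bottom2 - h2   # tops sit at smaller y values

overlap_h = max(min(bottom1, bottom2) - max(top1, top2), 0.0)
print(overlap_h)                 # 1.0 -> the boxes share 1.0 m of height
```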
- """ - from .coord_3d_mode import Coord3DMode - - points_lidar = Coord3DMode.convert(points, Coord3DMode.CAM, - Coord3DMode.LIDAR) - if boxes_override is not None: - boxes_lidar = boxes_override - else: - boxes_lidar = Coord3DMode.convert(self.tensor, Coord3DMode.CAM, - Coord3DMode.LIDAR) - - box_idx = super().points_in_boxes_part(points_lidar, boxes_lidar) - return box_idx - - def points_in_boxes_all(self, points, boxes_override=None): - """Find all boxes in which each point is. - - Args: - points (torch.Tensor): Points in shape (1, M, 3) or (M, 3), - 3 dimensions are (x, y, z) in LiDAR or depth coordinate. - boxes_override (torch.Tensor, optional): Boxes to override - `self.tensor `. Defaults to None. - - Returns: - torch.Tensor: The index of all boxes in which each point is, - in shape (B, M, T). - """ - from .coord_3d_mode import Coord3DMode - - points_lidar = Coord3DMode.convert(points, Coord3DMode.CAM, - Coord3DMode.LIDAR) - if boxes_override is not None: - boxes_lidar = boxes_override - else: - boxes_lidar = Coord3DMode.convert(self.tensor, Coord3DMode.CAM, - Coord3DMode.LIDAR) - - box_idx = super().points_in_boxes_all(points_lidar, boxes_lidar) - return box_idx diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/coord_3d_mode.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/coord_3d_mode.py deleted file mode 100644 index 6309b654..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/coord_3d_mode.py +++ /dev/null @@ -1,234 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import IntEnum, unique - -import numpy as np -import torch - -from ...points import BasePoints, CameraPoints, DepthPoints, LiDARPoints -from .base_box3d import BaseInstance3DBoxes -from .box_3d_mode import Box3DMode - - -@unique -class Coord3DMode(IntEnum): - r"""Enum of different ways to represent a box - and point cloud. - - Coordinates in LiDAR: - - .. code-block:: none - - up z - ^ x front - | / - | / - left y <------ 0 - - The relative coordinate of bottom center in a LiDAR box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - - Coordinates in camera: - - .. code-block:: none - - z front - / - / - 0 ------> x right - | - | - v - down y - - The relative coordinate of bottom center in a CAM box is [0.5, 1.0, 0.5], - and the yaw is around the y axis, thus the rotation axis=1. - - Coordinates in Depth mode: - - .. code-block:: none - - up z - ^ y front - | / - | / - 0 ------> x right - - The relative coordinate of bottom center in a DEPTH box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - """ - - LIDAR = 0 - CAM = 1 - DEPTH = 2 - - @staticmethod - def convert(input, src, dst, rt_mat=None, with_yaw=True, is_point=True): - """Convert boxes or points from `src` mode to `dst` mode. - - Args: - input (tuple | list | np.ndarray | torch.Tensor | - :obj:`BaseInstance3DBoxes` | :obj:`BasePoints`): - Can be a k-tuple, k-list or an Nxk array/tensor, where k = 7. - src (:obj:`Box3DMode` | :obj:`Coord3DMode`): The source mode. - dst (:obj:`Box3DMode` | :obj:`Coord3DMode`): The target mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. 
- with_yaw (bool): If `box` is an instance of - :obj:`BaseInstance3DBoxes`, whether or not it has a yaw angle. - Defaults to True. - is_point (bool): If `input` is neither an instance of - :obj:`BaseInstance3DBoxes` nor an instance of - :obj:`BasePoints`, whether or not it is point data. - Defaults to True. - - Returns: - (tuple | list | np.ndarray | torch.Tensor | - :obj:`BaseInstance3DBoxes` | :obj:`BasePoints`): - The converted box of the same type. - """ - if isinstance(input, BaseInstance3DBoxes): - return Coord3DMode.convert_box( - input, src, dst, rt_mat=rt_mat, with_yaw=with_yaw) - elif isinstance(input, BasePoints): - return Coord3DMode.convert_point(input, src, dst, rt_mat=rt_mat) - elif isinstance(input, (tuple, list, np.ndarray, torch.Tensor)): - if is_point: - return Coord3DMode.convert_point( - input, src, dst, rt_mat=rt_mat) - else: - return Coord3DMode.convert_box( - input, src, dst, rt_mat=rt_mat, with_yaw=with_yaw) - else: - raise NotImplementedError - - @staticmethod - def convert_box(box, src, dst, rt_mat=None, with_yaw=True): - """Convert boxes from `src` mode to `dst` mode. - - Args: - box (tuple | list | np.ndarray | - torch.Tensor | :obj:`BaseInstance3DBoxes`): - Can be a k-tuple, k-list or an Nxk array/tensor, where k = 7. - src (:obj:`Box3DMode`): The src Box mode. - dst (:obj:`Box3DMode`): The target Box mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - with_yaw (bool): If `box` is an instance of - :obj:`BaseInstance3DBoxes`, whether or not it has a yaw angle. - Defaults to True. - - Returns: - (tuple | list | np.ndarray | torch.Tensor | - :obj:`BaseInstance3DBoxes`): - The converted box of the same type. - """ - return Box3DMode.convert(box, src, dst, rt_mat=rt_mat) - - @staticmethod - def convert_point(point, src, dst, rt_mat=None): - """Convert points from `src` mode to `dst` mode. - - Args: - point (tuple | list | np.ndarray | - torch.Tensor | :obj:`BasePoints`): - Can be a k-tuple, k-list or an Nxk array/tensor. - src (:obj:`CoordMode`): The src Point mode. - dst (:obj:`CoordMode`): The target Point mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - (tuple | list | np.ndarray | torch.Tensor | :obj:`BasePoints`): - The converted point of the same type. - """ - if src == dst: - return point - - is_numpy = isinstance(point, np.ndarray) - is_InstancePoints = isinstance(point, BasePoints) - single_point = isinstance(point, (list, tuple)) - if single_point: - assert len(point) >= 3, ( - 'CoordMode.convert takes either a k-tuple/list or ' - 'an Nxk array/tensor, where k >= 3') - arr = torch.tensor(point)[None, :] - else: - # avoid modifying the input point - if is_numpy: - arr = torch.from_numpy(np.asarray(point)).clone() - elif is_InstancePoints: - arr = point.tensor.clone() - else: - arr = point.clone() - - # convert point from `src` mode to `dst` mode. 
- if src == Coord3DMode.LIDAR and dst == Coord3DMode.CAM: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, -1, 0], [0, 0, -1], [1, 0, 0]]) - elif src == Coord3DMode.CAM and dst == Coord3DMode.LIDAR: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, 0, 1], [-1, 0, 0], [0, -1, 0]]) - elif src == Coord3DMode.DEPTH and dst == Coord3DMode.CAM: - if rt_mat is None: - rt_mat = arr.new_tensor([[1, 0, 0], [0, 0, -1], [0, 1, 0]]) - elif src == Coord3DMode.CAM and dst == Coord3DMode.DEPTH: - if rt_mat is None: - rt_mat = arr.new_tensor([[1, 0, 0], [0, 0, 1], [0, -1, 0]]) - elif src == Coord3DMode.LIDAR and dst == Coord3DMode.DEPTH: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, -1, 0], [1, 0, 0], [0, 0, 1]]) - elif src == Coord3DMode.DEPTH and dst == Coord3DMode.LIDAR: - if rt_mat is None: - rt_mat = arr.new_tensor([[0, 1, 0], [-1, 0, 0], [0, 0, 1]]) - else: - raise NotImplementedError( - f'Conversion from Coord3DMode {src} to {dst} ' - 'is not supported yet') - - if not isinstance(rt_mat, torch.Tensor): - rt_mat = arr.new_tensor(rt_mat) - if rt_mat.size(1) == 4: - extended_xyz = torch.cat( - [arr[..., :3], arr.new_ones(arr.size(0), 1)], dim=-1) - xyz = extended_xyz @ rt_mat.t() - else: - xyz = arr[..., :3] @ rt_mat.t() - - remains = arr[..., 3:] - arr = torch.cat([xyz[..., :3], remains], dim=-1) - - # convert arr to the original type - original_type = type(point) - if single_point: - return original_type(arr.flatten().tolist()) - if is_numpy: - return arr.numpy() - elif is_InstancePoints: - if dst == Coord3DMode.CAM: - target_type = CameraPoints - elif dst == Coord3DMode.LIDAR: - target_type = LiDARPoints - elif dst == Coord3DMode.DEPTH: - target_type = DepthPoints - else: - raise NotImplementedError( - f'Conversion to {dst} through {original_type}' - ' is not supported yet') - return target_type( - arr, - points_dim=arr.size(-1), - attribute_dims=point.attribute_dims) - else: - return arr diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/depth_box3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/depth_box3d.py deleted file mode 100644 index dd9278bf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/depth_box3d.py +++ /dev/null @@ -1,270 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet3d.core.points import BasePoints -from .base_box3d import BaseInstance3DBoxes -from .utils import rotation_3d_in_axis - - -class DepthInstance3DBoxes(BaseInstance3DBoxes): - """3D boxes of instances in Depth coordinates. - - Coordinates in Depth: - - .. code-block:: none - - up z y front (yaw=-0.5*pi) - ^ ^ - | / - | / - 0 ------> x right (yaw=0) - - The relative coordinate of bottom center in a Depth box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - The yaw is 0 at the positive direction of x axis, and decreases from - the positive direction of x to the positive direction of y. - Also note that rotation of DepthInstance3DBoxes is counterclockwise, - which is reverse to the definition of the yaw angle (clockwise). - - A refactor is ongoing to make the three coordinate systems - easier to understand and convert between each other. - - Attributes: - tensor (torch.Tensor): Float matrix of N x box_dim. - box_dim (int): Integer indicates the dimension of a box - Each row is (x, y, z, x_size, y_size, z_size, yaw, ...). - with_yaw (bool): If True, the value of yaw will be set to 0 as minmax - boxes. 
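As a concrete instance of the point conversion above: with the DEPTH-to-LIDAR matrix `[[0, 1, 0], [-1, 0, 0], [0, 0, 1]]`, `xyz @ rt_mat.T` maps a depth-frame point (x right, y front, z up) to a LiDAR point (x front, y left, z up), i.e. (x, y, z) -> (y, -x, z). A NumPy sketch with an invented point:

```python
import numpy as np

rt_mat = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]], dtype=float)
pt_depth = np.array([3.0, 12.0, 0.8])     # 3 m right, 12 m ahead, 0.8 m up

pt_lidar = pt_depth @ rt_mat.T
print(pt_lidar)                           # [12. -3.  0.8]
assert np.allclose(pt_lidar, [pt_depth[1], -pt_depth[0], pt_depth[2]])
```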
- """ - YAW_AXIS = 2 - - @property - def gravity_center(self): - """torch.Tensor: A tensor with center of each box in shape (N, 3).""" - bottom_center = self.bottom_center - gravity_center = torch.zeros_like(bottom_center) - gravity_center[:, :2] = bottom_center[:, :2] - gravity_center[:, 2] = bottom_center[:, 2] + self.tensor[:, 5] * 0.5 - return gravity_center - - @property - def corners(self): - """torch.Tensor: Coordinates of corners of all the boxes - in shape (N, 8, 3). - - Convert the boxes to corners in clockwise order, in form of - ``(x0y0z0, x0y0z1, x0y1z1, x0y1z0, x1y0z0, x1y0z1, x1y1z1, x1y1z0)`` - - .. code-block:: none - - up z - front y ^ - / | - / | - (x0, y1, z1) + ----------- + (x1, y1, z1) - /| / | - / | / | - (x0, y0, z1) + ----------- + + (x1, y1, z0) - | / . | / - | / origin | / - (x0, y0, z0) + ----------- + --------> right x - (x1, y0, z0) - """ - if self.tensor.numel() == 0: - return torch.empty([0, 8, 3], device=self.tensor.device) - - dims = self.dims - corners_norm = torch.from_numpy( - np.stack(np.unravel_index(np.arange(8), [2] * 3), axis=1)).to( - device=dims.device, dtype=dims.dtype) - - corners_norm = corners_norm[[0, 1, 3, 2, 4, 5, 7, 6]] - # use relative origin (0.5, 0.5, 0) - corners_norm = corners_norm - dims.new_tensor([0.5, 0.5, 0]) - corners = dims.view([-1, 1, 3]) * corners_norm.reshape([1, 8, 3]) - - # rotate around z axis - corners = rotation_3d_in_axis( - corners, self.tensor[:, 6], axis=self.YAW_AXIS) - corners += self.tensor[:, :3].view(-1, 1, 3) - return corners - - def rotate(self, angle, points=None): - """Rotate boxes with points (optional) with the given angle or rotation - matrix. - - Args: - angle (float | torch.Tensor | np.ndarray): - Rotation angle or rotation matrix. - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to rotate. Defaults to None. - - Returns: - tuple or None: When ``points`` is None, the function returns - None, otherwise it returns the rotated points and the - rotation matrix ``rot_mat_T``. - """ - if not isinstance(angle, torch.Tensor): - angle = self.tensor.new_tensor(angle) - - assert angle.shape == torch.Size([3, 3]) or angle.numel() == 1, \ - f'invalid rotation angle shape {angle.shape}' - - if angle.numel() == 1: - self.tensor[:, 0:3], rot_mat_T = rotation_3d_in_axis( - self.tensor[:, 0:3], - angle, - axis=self.YAW_AXIS, - return_mat=True) - else: - rot_mat_T = angle - rot_sin = rot_mat_T[0, 1] - rot_cos = rot_mat_T[0, 0] - angle = np.arctan2(rot_sin, rot_cos) - self.tensor[:, 0:3] = self.tensor[:, 0:3] @ rot_mat_T - - if self.with_yaw: - self.tensor[:, 6] += angle - else: - # for axis-aligned boxes, we take the new - # enclosing axis-aligned boxes after rotation - corners_rot = self.corners @ rot_mat_T - new_x_size = corners_rot[..., 0].max( - dim=1, keepdim=True)[0] - corners_rot[..., 0].min( - dim=1, keepdim=True)[0] - new_y_size = corners_rot[..., 1].max( - dim=1, keepdim=True)[0] - corners_rot[..., 1].min( - dim=1, keepdim=True)[0] - self.tensor[:, 3:5] = torch.cat((new_x_size, new_y_size), dim=-1) - - if points is not None: - if isinstance(points, torch.Tensor): - points[:, :3] = points[:, :3] @ rot_mat_T - elif isinstance(points, np.ndarray): - rot_mat_T = rot_mat_T.cpu().numpy() - points[:, :3] = np.dot(points[:, :3], rot_mat_T) - elif isinstance(points, BasePoints): - points.rotate(rot_mat_T) - else: - raise ValueError - return points, rot_mat_T - - def flip(self, bev_direction='horizontal', points=None): - """Flip the boxes in BEV along given BEV direction. 
- - In Depth coordinates, it flips x (horizontal) or y (vertical) axis. - - Args: - bev_direction (str, optional): Flip direction - (horizontal or vertical). Defaults to 'horizontal'. - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to flip. Defaults to None. - - Returns: - torch.Tensor, numpy.ndarray or None: Flipped points. - """ - assert bev_direction in ('horizontal', 'vertical') - if bev_direction == 'horizontal': - self.tensor[:, 0::7] = -self.tensor[:, 0::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] + np.pi - elif bev_direction == 'vertical': - self.tensor[:, 1::7] = -self.tensor[:, 1::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] - - if points is not None: - assert isinstance(points, (torch.Tensor, np.ndarray, BasePoints)) - if isinstance(points, (torch.Tensor, np.ndarray)): - if bev_direction == 'horizontal': - points[:, 0] = -points[:, 0] - elif bev_direction == 'vertical': - points[:, 1] = -points[:, 1] - elif isinstance(points, BasePoints): - points.flip(bev_direction) - return points - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`Box3DMode`): The target Box mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from ``src`` coordinates to ``dst`` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`DepthInstance3DBoxes`: - The converted box of the same type in the ``dst`` mode. - """ - from .box_3d_mode import Box3DMode - return Box3DMode.convert( - box=self, src=Box3DMode.DEPTH, dst=dst, rt_mat=rt_mat) - - def enlarged_box(self, extra_width): - """Enlarge the length, width and height boxes. - - Args: - extra_width (float | torch.Tensor): Extra width to enlarge the box. - - Returns: - :obj:`DepthInstance3DBoxes`: Enlarged boxes. - """ - enlarged_boxes = self.tensor.clone() - enlarged_boxes[:, 3:6] += extra_width * 2 - # bottom center z minus extra_width - enlarged_boxes[:, 2] -= extra_width - return self.new_box(enlarged_boxes) - - def get_surface_line_center(self): - """Compute surface and line center of bounding boxes. - - Returns: - torch.Tensor: Surface and line center of bounding boxes. 
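`enlarged_box` above pads all three dimensions by `2 * extra_width` and lowers the bottom-center z by `extra_width`, keeping the enlargement symmetric about the original box. A plain-Python sketch with an invented box:

```python
box = [1.0, 2.0, 0.0, 0.8, 0.8, 2.0, 0.0]   # x, y, z(bottom), dx, dy, dz, yaw
extra = 0.1

enlarged = list(box)
enlarged[3:6] = [d + 2 * extra for d in box[3:6]]  # grow every dimension
enlarged[2] -= extra                                # drop the bottom center
print(enlarged)   # [1.0, 2.0, -0.1, 1.0, 1.0, 2.2, 0.0]
```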
- """ - obj_size = self.dims - center = self.gravity_center.view(-1, 1, 3) - batch_size = center.shape[0] - - rot_sin = torch.sin(-self.yaw) - rot_cos = torch.cos(-self.yaw) - rot_mat_T = self.yaw.new_zeros(tuple(list(self.yaw.shape) + [3, 3])) - rot_mat_T[..., 0, 0] = rot_cos - rot_mat_T[..., 0, 1] = -rot_sin - rot_mat_T[..., 1, 0] = rot_sin - rot_mat_T[..., 1, 1] = rot_cos - rot_mat_T[..., 2, 2] = 1 - - # Get the object surface center - offset = obj_size.new_tensor([[0, 0, 1], [0, 0, -1], [0, 1, 0], - [0, -1, 0], [1, 0, 0], [-1, 0, 0]]) - offset = offset.view(1, 6, 3) / 2 - surface_3d = (offset * - obj_size.view(batch_size, 1, 3).repeat(1, 6, 1)).reshape( - -1, 3) - - # Get the object line center - offset = obj_size.new_tensor([[1, 0, 1], [-1, 0, 1], [0, 1, 1], - [0, -1, 1], [1, 0, -1], [-1, 0, -1], - [0, 1, -1], [0, -1, -1], [1, 1, 0], - [1, -1, 0], [-1, 1, 0], [-1, -1, 0]]) - offset = offset.view(1, 12, 3) / 2 - - line_3d = (offset * - obj_size.view(batch_size, 1, 3).repeat(1, 12, 1)).reshape( - -1, 3) - - surface_rot = rot_mat_T.repeat(6, 1, 1) - surface_3d = torch.matmul(surface_3d.unsqueeze(-2), - surface_rot).squeeze(-2) - surface_center = center.repeat(1, 6, 1).reshape(-1, 3) + surface_3d - - line_rot = rot_mat_T.repeat(12, 1, 1) - line_3d = torch.matmul(line_3d.unsqueeze(-2), line_rot).squeeze(-2) - line_center = center.repeat(1, 12, 1).reshape(-1, 3) + line_3d - - return surface_center, line_center diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/lidar_box3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/lidar_box3d.py deleted file mode 100644 index 706a6c0d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/lidar_box3d.py +++ /dev/null @@ -1,210 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet3d.core.points import BasePoints -from .base_box3d import BaseInstance3DBoxes -from .utils import rotation_3d_in_axis - - -class LiDARInstance3DBoxes(BaseInstance3DBoxes): - """3D boxes of instances in LIDAR coordinates. - - Coordinates in LiDAR: - - .. code-block:: none - - up z x front (yaw=0) - ^ ^ - | / - | / - (yaw=0.5*pi) left y <------ 0 - - The relative coordinate of bottom center in a LiDAR box is (0.5, 0.5, 0), - and the yaw is around the z axis, thus the rotation axis=2. - The yaw is 0 at the positive direction of x axis, and increases from - the positive direction of x to the positive direction of y. - - A refactor is ongoing to make the three coordinate systems - easier to understand and convert between each other. - - Attributes: - tensor (torch.Tensor): Float matrix of N x box_dim. - box_dim (int): Integer indicating the dimension of a box. - Each row is (x, y, z, x_size, y_size, z_size, yaw, ...). - with_yaw (bool): If True, the value of yaw will be set to 0 as minmax - boxes. - """ - YAW_AXIS = 2 - - @property - def gravity_center(self): - """torch.Tensor: A tensor with center of each box in shape (N, 3).""" - bottom_center = self.bottom_center - gravity_center = torch.zeros_like(bottom_center) - gravity_center[:, :2] = bottom_center[:, :2] - gravity_center[:, 2] = bottom_center[:, 2] + self.tensor[:, 5] * 0.5 - return gravity_center - - @property - def corners(self): - """torch.Tensor: Coordinates of corners of all the boxes - in shape (N, 8, 3). - - Convert the boxes to corners in clockwise order, in form of - ``(x0y0z0, x0y0z1, x0y1z1, x0y1z0, x1y0z0, x1y0z1, x1y1z1, x1y1z0)`` - - .. 
code-block:: none - - up z - front x ^ - / | - / | - (x1, y0, z1) + ----------- + (x1, y1, z1) - /| / | - / | / | - (x0, y0, z1) + ----------- + + (x1, y1, z0) - | / . | / - | / origin | / - left y<-------- + ----------- + (x0, y1, z0) - (x0, y0, z0) - """ - if self.tensor.numel() == 0: - return torch.empty([0, 8, 3], device=self.tensor.device) - - dims = self.dims - corners_norm = torch.from_numpy( - np.stack(np.unravel_index(np.arange(8), [2] * 3), axis=1)).to( - device=dims.device, dtype=dims.dtype) - - corners_norm = corners_norm[[0, 1, 3, 2, 4, 5, 7, 6]] - # use relative origin [0.5, 0.5, 0] - corners_norm = corners_norm - dims.new_tensor([0.5, 0.5, 0]) - corners = dims.view([-1, 1, 3]) * corners_norm.reshape([1, 8, 3]) - - # rotate around z axis - corners = rotation_3d_in_axis( - corners, self.tensor[:, 6], axis=self.YAW_AXIS) - corners += self.tensor[:, :3].view(-1, 1, 3) - return corners - - def rotate(self, angle, points=None): - """Rotate boxes with points (optional) with the given angle or rotation - matrix. - - Args: - angles (float | torch.Tensor | np.ndarray): - Rotation angle or rotation matrix. - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to rotate. Defaults to None. - - Returns: - tuple or None: When ``points`` is None, the function returns - None, otherwise it returns the rotated points and the - rotation matrix ``rot_mat_T``. - """ - if not isinstance(angle, torch.Tensor): - angle = self.tensor.new_tensor(angle) - - assert angle.shape == torch.Size([3, 3]) or angle.numel() == 1, \ - f'invalid rotation angle shape {angle.shape}' - - if angle.numel() == 1: - self.tensor[:, 0:3], rot_mat_T = rotation_3d_in_axis( - self.tensor[:, 0:3], - angle, - axis=self.YAW_AXIS, - return_mat=True) - else: - rot_mat_T = angle - rot_sin = rot_mat_T[0, 1] - rot_cos = rot_mat_T[0, 0] - angle = np.arctan2(rot_sin, rot_cos) - self.tensor[:, 0:3] = self.tensor[:, 0:3] @ rot_mat_T - - self.tensor[:, 6] += angle - - if self.tensor.shape[1] == 9: - # rotate velo vector - self.tensor[:, 7:9] = self.tensor[:, 7:9] @ rot_mat_T[:2, :2] - - if points is not None: - if isinstance(points, torch.Tensor): - points[:, :3] = points[:, :3] @ rot_mat_T - elif isinstance(points, np.ndarray): - rot_mat_T = rot_mat_T.cpu().numpy() - points[:, :3] = np.dot(points[:, :3], rot_mat_T) - elif isinstance(points, BasePoints): - points.rotate(rot_mat_T) - else: - raise ValueError - return points, rot_mat_T - - def flip(self, bev_direction='horizontal', points=None): - """Flip the boxes in BEV along given BEV direction. - - In LIDAR coordinates, it flips the y (horizontal) or x (vertical) axis. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). - points (torch.Tensor | np.ndarray | :obj:`BasePoints`, optional): - Points to flip. Defaults to None. - - Returns: - torch.Tensor, numpy.ndarray or None: Flipped points. 
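The `rotate` implementation above multiplies box centers (and, for 9-column boxes, the velocity columns 7:9) by the transposed z-axis rotation matrix and adds the angle to the yaw. A NumPy sketch with an invented box rotated by +90 degrees, which is counterclockwise in the LiDAR frame:

```python
import numpy as np

angle = np.pi / 2
rot_mat_T = np.array([[np.cos(angle),  np.sin(angle), 0.0],
                      [-np.sin(angle), np.cos(angle), 0.0],
                      [0.0,            0.0,           1.0]])

center = np.array([[10.0, 0.0, -1.0]])   # 10 m ahead of the sensor
velo = np.array([[5.0, 0.0]])            # moving straight ahead

center = center @ rot_mat_T              # ~[[0., 10., -1.]]  (now to the left)
velo = velo @ rot_mat_T[:2, :2]          # ~[[0., 5.]]
print(center, velo)                      # up to floating-point noise
```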
- """ - assert bev_direction in ('horizontal', 'vertical') - if bev_direction == 'horizontal': - self.tensor[:, 1::7] = -self.tensor[:, 1::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] - elif bev_direction == 'vertical': - self.tensor[:, 0::7] = -self.tensor[:, 0::7] - if self.with_yaw: - self.tensor[:, 6] = -self.tensor[:, 6] + np.pi - - if points is not None: - assert isinstance(points, (torch.Tensor, np.ndarray, BasePoints)) - if isinstance(points, (torch.Tensor, np.ndarray)): - if bev_direction == 'horizontal': - points[:, 1] = -points[:, 1] - elif bev_direction == 'vertical': - points[:, 0] = -points[:, 0] - elif isinstance(points, BasePoints): - points.flip(bev_direction) - return points - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`Box3DMode`): the target Box mode - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from ``src`` coordinates to ``dst`` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BaseInstance3DBoxes`: - The converted box of the same type in the ``dst`` mode. - """ - from .box_3d_mode import Box3DMode - return Box3DMode.convert( - box=self, src=Box3DMode.LIDAR, dst=dst, rt_mat=rt_mat) - - def enlarged_box(self, extra_width): - """Enlarge the length, width and height boxes. - - Args: - extra_width (float | torch.Tensor): Extra width to enlarge the box. - - Returns: - :obj:`LiDARInstance3DBoxes`: Enlarged boxes. - """ - enlarged_boxes = self.tensor.clone() - enlarged_boxes[:, 3:6] += extra_width * 2 - # bottom center z minus extra_width - enlarged_boxes[:, 2] -= extra_width - return self.new_box(enlarged_boxes) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/utils.py deleted file mode 100644 index 6ebaabe0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/structures/utils.py +++ /dev/null @@ -1,342 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -from logging import warning - -import numpy as np -import torch - -from mmdet3d.core.utils import array_converter - - -@array_converter(apply_to=('val', )) -def limit_period(val, offset=0.5, period=np.pi): - """Limit the value into a period for periodic function. - - Args: - val (torch.Tensor | np.ndarray): The value to be converted. - offset (float, optional): Offset to set the value range. - Defaults to 0.5. - period ([type], optional): Period of the value. Defaults to np.pi. - - Returns: - (torch.Tensor | np.ndarray): Value in the range of - [-offset * period, (1-offset) * period] - """ - limited_val = val - torch.floor(val / period + offset) * period - return limited_val - - -@array_converter(apply_to=('points', 'angles')) -def rotation_3d_in_axis(points, - angles, - axis=0, - return_mat=False, - clockwise=False): - """Rotate points by angles according to axis. - - Args: - points (np.ndarray | torch.Tensor | list | tuple ): - Points of shape (N, M, 3). - angles (np.ndarray | torch.Tensor | list | tuple | float): - Vector of angles in shape (N,) - axis (int, optional): The axis to be rotated. Defaults to 0. - return_mat: Whether or not return the rotation matrix (transposed). 
- Defaults to False. - clockwise: Whether the rotation is clockwise. Defaults to False. - - Raises: - ValueError: when the axis is not in range [0, 1, 2], it will - raise value error. - - Returns: - (torch.Tensor | np.ndarray): Rotated points in shape (N, M, 3). - """ - batch_free = len(points.shape) == 2 - if batch_free: - points = points[None] - - if isinstance(angles, float) or len(angles.shape) == 0: - angles = torch.full(points.shape[:1], angles) - - assert len(points.shape) == 3 and len(angles.shape) == 1 \ - and points.shape[0] == angles.shape[0], f'Incorrect shape of points ' \ - f'angles: {points.shape}, {angles.shape}' - - assert points.shape[-1] in [2, 3], \ - f'Points size should be 2 or 3 instead of {points.shape[-1]}' - - rot_sin = torch.sin(angles) - rot_cos = torch.cos(angles) - ones = torch.ones_like(rot_cos) - zeros = torch.zeros_like(rot_cos) - - if points.shape[-1] == 3: - if axis == 1 or axis == -2: - rot_mat_T = torch.stack([ - torch.stack([rot_cos, zeros, -rot_sin]), - torch.stack([zeros, ones, zeros]), - torch.stack([rot_sin, zeros, rot_cos]) - ]) - elif axis == 2 or axis == -1: - rot_mat_T = torch.stack([ - torch.stack([rot_cos, rot_sin, zeros]), - torch.stack([-rot_sin, rot_cos, zeros]), - torch.stack([zeros, zeros, ones]) - ]) - elif axis == 0 or axis == -3: - rot_mat_T = torch.stack([ - torch.stack([ones, zeros, zeros]), - torch.stack([zeros, rot_cos, rot_sin]), - torch.stack([zeros, -rot_sin, rot_cos]) - ]) - else: - raise ValueError(f'axis should in range ' - f'[-3, -2, -1, 0, 1, 2], got {axis}') - else: - rot_mat_T = torch.stack([ - torch.stack([rot_cos, rot_sin]), - torch.stack([-rot_sin, rot_cos]) - ]) - - if clockwise: - rot_mat_T = rot_mat_T.transpose(0, 1) - - if points.shape[0] == 0: - points_new = points - else: - points_new = torch.einsum('aij,jka->aik', points, rot_mat_T) - - if batch_free: - points_new = points_new.squeeze(0) - - if return_mat: - rot_mat_T = torch.einsum('jka->ajk', rot_mat_T) - if batch_free: - rot_mat_T = rot_mat_T.squeeze(0) - return points_new, rot_mat_T - else: - return points_new - - -@array_converter(apply_to=('boxes_xywhr', )) -def xywhr2xyxyr(boxes_xywhr): - """Convert a rotated boxes in XYWHR format to XYXYR format. - - Args: - boxes_xywhr (torch.Tensor | np.ndarray): Rotated boxes in XYWHR format. - - Returns: - (torch.Tensor | np.ndarray): Converted boxes in XYXYR format. - """ - boxes = torch.zeros_like(boxes_xywhr) - half_w = boxes_xywhr[..., 2] / 2 - half_h = boxes_xywhr[..., 3] / 2 - - boxes[..., 0] = boxes_xywhr[..., 0] - half_w - boxes[..., 1] = boxes_xywhr[..., 1] - half_h - boxes[..., 2] = boxes_xywhr[..., 0] + half_w - boxes[..., 3] = boxes_xywhr[..., 1] + half_h - boxes[..., 4] = boxes_xywhr[..., 4] - return boxes - - -def get_box_type(box_type): - """Get the type and mode of box structure. - - Args: - box_type (str): The type of box structure. - The valid value are "LiDAR", "Camera", or "Depth". - - Raises: - ValueError: A ValueError is raised when `box_type` - does not belong to the three valid types. - - Returns: - tuple: Box type and box mode. 
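`xywhr2xyxyr` above only re-expresses the axis-aligned part of a rotated BEV box as corner coordinates and carries the rotation angle through unchanged. A standalone NumPy re-statement of the same arithmetic with a toy input:

```python
import numpy as np

def xywhr2xyxyr(boxes_xywhr):
    # (cx, cy, w, h, r) -> (x1, y1, x2, y2, r)
    boxes = np.zeros_like(boxes_xywhr)
    half_w = boxes_xywhr[..., 2] / 2
    half_h = boxes_xywhr[..., 3] / 2
    boxes[..., 0] = boxes_xywhr[..., 0] - half_w
    boxes[..., 1] = boxes_xywhr[..., 1] - half_h
    boxes[..., 2] = boxes_xywhr[..., 0] + half_w
    boxes[..., 3] = boxes_xywhr[..., 1] + half_h
    boxes[..., 4] = boxes_xywhr[..., 4]
    return boxes

print(xywhr2xyxyr(np.array([[4.0, 2.0, 2.0, 1.0, 0.3]])))
# [[3.  1.5 5.  2.5 0.3]]
```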
- """ - from .box_3d_mode import (Box3DMode, CameraInstance3DBoxes, - DepthInstance3DBoxes, LiDARInstance3DBoxes) - box_type_lower = box_type.lower() - if box_type_lower == 'lidar': - box_type_3d = LiDARInstance3DBoxes - box_mode_3d = Box3DMode.LIDAR - elif box_type_lower == 'camera': - box_type_3d = CameraInstance3DBoxes - box_mode_3d = Box3DMode.CAM - elif box_type_lower == 'depth': - box_type_3d = DepthInstance3DBoxes - box_mode_3d = Box3DMode.DEPTH - else: - raise ValueError('Only "box_type" of "camera", "lidar", "depth"' - f' are supported, got {box_type}') - - return box_type_3d, box_mode_3d - - -@array_converter(apply_to=('points_3d', 'proj_mat')) -def points_cam2img(points_3d, proj_mat, with_depth=False): - """Project points in camera coordinates to image coordinates. - - Args: - points_3d (torch.Tensor | np.ndarray): Points in shape (N, 3) - proj_mat (torch.Tensor | np.ndarray): - Transformation matrix between coordinates. - with_depth (bool, optional): Whether to keep depth in the output. - Defaults to False. - - Returns: - (torch.Tensor | np.ndarray): Points in image coordinates, - with shape [N, 2] if `with_depth=False`, else [N, 3]. - """ - points_shape = list(points_3d.shape) - points_shape[-1] = 1 - - assert len(proj_mat.shape) == 2, 'The dimension of the projection'\ - f' matrix should be 2 instead of {len(proj_mat.shape)}.' - d1, d2 = proj_mat.shape[:2] - assert (d1 == 3 and d2 == 3) or (d1 == 3 and d2 == 4) or ( - d1 == 4 and d2 == 4), 'The shape of the projection matrix'\ - f' ({d1}*{d2}) is not supported.' - if d1 == 3: - proj_mat_expanded = torch.eye( - 4, device=proj_mat.device, dtype=proj_mat.dtype) - proj_mat_expanded[:d1, :d2] = proj_mat - proj_mat = proj_mat_expanded - - # previous implementation use new_zeros, new_one yields better results - points_4 = torch.cat([points_3d, points_3d.new_ones(points_shape)], dim=-1) - - point_2d = points_4 @ proj_mat.T - point_2d_res = point_2d[..., :2] / point_2d[..., 2:3] - - if with_depth: - point_2d_res = torch.cat([point_2d_res, point_2d[..., 2:3]], dim=-1) - - return point_2d_res - - -@array_converter(apply_to=('points', 'cam2img')) -def points_img2cam(points, cam2img): - """Project points in image coordinates to camera coordinates. - - Args: - points (torch.Tensor): 2.5D points in 2D images, [N, 3], - 3 corresponds with x, y in the image and depth. - cam2img (torch.Tensor): Camera intrinsic matrix. The shape can be - [3, 3], [3, 4] or [4, 4]. - - Returns: - torch.Tensor: points in 3D space. [N, 3], - 3 corresponds with x, y, z in 3D space. - """ - assert cam2img.shape[0] <= 4 - assert cam2img.shape[1] <= 4 - assert points.shape[1] == 3 - - xys = points[:, :2] - depths = points[:, 2].view(-1, 1) - unnormed_xys = torch.cat([xys * depths, depths], dim=1) - - pad_cam2img = torch.eye(4, dtype=xys.dtype, device=xys.device) - pad_cam2img[:cam2img.shape[0], :cam2img.shape[1]] = cam2img - # inv_pad_cam2img = torch.inverse(pad_cam2img).transpose(0, 1) - - #change for pgd - device = pad_cam2img.device - inv_pad_cam2img = torch.inverse(pad_cam2img.cpu()).transpose(0, 1).cpu() - inv_pad_cam2img = inv_pad_cam2img.to(device) - - # Do operation in homogeneous coordinates. - num_points = unnormed_xys.shape[0] - homo_xys = torch.cat([unnormed_xys, xys.new_ones((num_points, 1))], dim=1) - points3D = torch.mm(homo_xys, inv_pad_cam2img)[:, :3] - - return points3D - - -def mono_cam_box2vis(cam_box): - """This is a post-processing function on the bboxes from Mono-3D task. If - we want to perform projection visualization, we need to: - - 1. 
rotate the box along x-axis for np.pi / 2 (roll) - 2. change orientation from local yaw to global yaw - 3. convert yaw by (np.pi / 2 - yaw) - - After applying this function, we can project and draw it on 2D images. - - Args: - cam_box (:obj:`CameraInstance3DBoxes`): 3D bbox in camera coordinate - system before conversion. Could be gt bbox loaded from dataset - or network prediction output. - - Returns: - :obj:`CameraInstance3DBoxes`: Box after conversion. - """ - warning.warn('DeprecationWarning: The hack of yaw and dimension in the ' - 'monocular 3D detection on nuScenes has been removed. The ' - 'function mono_cam_box2vis will be deprecated.') - from . import CameraInstance3DBoxes - assert isinstance(cam_box, CameraInstance3DBoxes), \ - 'input bbox should be CameraInstance3DBoxes!' - - loc = cam_box.gravity_center - dim = cam_box.dims - yaw = cam_box.yaw - feats = cam_box.tensor[:, 7:] - # rotate along x-axis for np.pi / 2 - # see also here: https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/nuscenes_mono_dataset.py#L557 # noqa - dim[:, [1, 2]] = dim[:, [2, 1]] - # change local yaw to global yaw for visualization - # refer to https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/nuscenes_mono_dataset.py#L164-L166 # noqa - yaw += torch.atan2(loc[:, 0], loc[:, 2]) - # convert yaw by (-yaw - np.pi / 2) - # this is because mono 3D box class such as `NuScenesBox` has different - # definition of rotation with our `CameraInstance3DBoxes` - yaw = -yaw - np.pi / 2 - cam_box = torch.cat([loc, dim, yaw[:, None], feats], dim=1) - cam_box = CameraInstance3DBoxes( - cam_box, box_dim=cam_box.shape[-1], origin=(0.5, 0.5, 0.5)) - - return cam_box - - -def get_proj_mat_by_coord_type(img_meta, coord_type): - """Obtain image features using points. - - Args: - img_meta (dict): Meta info. - coord_type (str): 'DEPTH' or 'CAMERA' or 'LIDAR'. - Can be case-insensitive. - - Returns: - torch.Tensor: transformation matrix. - """ - coord_type = coord_type.upper() - mapping = {'LIDAR': 'lidar2img', 'DEPTH': 'depth2img', 'CAMERA': 'cam2img'} - assert coord_type in mapping.keys() - return img_meta[mapping[coord_type]] - - -def yaw2local(yaw, loc): - """Transform global yaw to local yaw (alpha in kitti) in camera - coordinates, ranges from -pi to pi. - - Args: - yaw (torch.Tensor): A vector with local yaw of each box. - shape: (N, ) - loc (torch.Tensor): gravity center of each box. - shape: (N, 3) - - Returns: - torch.Tensor: local yaw (alpha in kitti). - """ - local_yaw = yaw - torch.atan2(loc[:, 0], loc[:, 2]) - larger_idx = (local_yaw > np.pi).nonzero(as_tuple=False) - small_idx = (local_yaw < -np.pi).nonzero(as_tuple=False) - if len(larger_idx) != 0: - local_yaw[larger_idx] -= 2 * np.pi - if len(small_idx) != 0: - local_yaw[small_idx] += 2 * np.pi - - return local_yaw diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/transforms.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/transforms.py deleted file mode 100644 index 8a2eb90f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/bbox/transforms.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def bbox3d_mapping_back(bboxes, scale_factor, flip_horizontal, flip_vertical): - """Map bboxes from testing scale to original image scale. - - Args: - bboxes (:obj:`BaseInstance3DBoxes`): Boxes to be mapped back. - scale_factor (float): Scale factor. - flip_horizontal (bool): Whether to flip horizontally. 
- flip_vertical (bool): Whether to flip vertically. - - Returns: - :obj:`BaseInstance3DBoxes`: Boxes mapped back. - """ - new_bboxes = bboxes.clone() - if flip_horizontal: - new_bboxes.flip('horizontal') - if flip_vertical: - new_bboxes.flip('vertical') - new_bboxes.scale(1 / scale_factor) - - return new_bboxes - - -def bbox3d2roi(bbox_list): - """Convert a list of bounding boxes to roi format. - - Args: - bbox_list (list[torch.Tensor]): A list of bounding boxes - corresponding to a batch of images. - - Returns: - torch.Tensor: Region of interests in shape (n, c), where - the channels are in order of [batch_ind, x, y ...]. - """ - rois_list = [] - for img_id, bboxes in enumerate(bbox_list): - if bboxes.size(0) > 0: - img_inds = bboxes.new_full((bboxes.size(0), 1), img_id) - rois = torch.cat([img_inds, bboxes], dim=-1) - else: - rois = torch.zeros_like(bboxes) - rois_list.append(rois) - rois = torch.cat(rois_list, 0) - return rois - - -def bbox3d2result(bboxes, scores, labels, attrs=None): - """Convert detection results to a list of numpy arrays. - - Args: - bboxes (torch.Tensor): Bounding boxes with shape (N, 5). - labels (torch.Tensor): Labels with shape (N, ). - scores (torch.Tensor): Scores with shape (N, ). - attrs (torch.Tensor, optional): Attributes with shape (N, ). - Defaults to None. - - Returns: - dict[str, torch.Tensor]: Bounding box results in cpu mode. - - - boxes_3d (torch.Tensor): 3D boxes. - - scores (torch.Tensor): Prediction scores. - - labels_3d (torch.Tensor): Box labels. - - attrs_3d (torch.Tensor, optional): Box attributes. - """ - result_dict = dict( - boxes_3d=bboxes.to('cpu'), - scores_3d=scores.cpu(), - labels_3d=labels.cpu()) - - if attrs is not None: - result_dict['attrs_3d'] = attrs.cpu() - - return result_dict diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/__init__.py deleted file mode 100644 index b1d489f3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .indoor_eval import indoor_eval -from .instance_seg_eval import instance_seg_eval -from .kitti_utils import kitti_eval, kitti_eval_coco_style -from .lyft_eval import lyft_eval -from .seg_eval import seg_eval - -__all__ = [ - 'kitti_eval_coco_style', 'kitti_eval', 'indoor_eval', 'lyft_eval', - 'seg_eval', 'instance_seg_eval' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/indoor_eval.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/indoor_eval.py deleted file mode 100644 index 2ff98773..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/indoor_eval.py +++ /dev/null @@ -1,309 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.utils import print_log -from terminaltables import AsciiTable - - -def average_precision(recalls, precisions, mode='area'): - """Calculate average precision (for single or multiple scales). - - Args: - recalls (np.ndarray): Recalls with shape of (num_scales, num_dets) - or (num_dets, ). - precisions (np.ndarray): Precisions with shape of - (num_scales, num_dets) or (num_dets, ). 
- mode (str): 'area' or '11points', 'area' means calculating the area - under precision-recall curve, '11points' means calculating - the average precision of recalls at [0, 0.1, ..., 1] - - Returns: - float or np.ndarray: Calculated average precision. - """ - if recalls.ndim == 1: - recalls = recalls[np.newaxis, :] - precisions = precisions[np.newaxis, :] - - assert recalls.shape == precisions.shape - assert recalls.ndim == 2 - - num_scales = recalls.shape[0] - ap = np.zeros(num_scales, dtype=np.float32) - if mode == 'area': - zeros = np.zeros((num_scales, 1), dtype=recalls.dtype) - ones = np.ones((num_scales, 1), dtype=recalls.dtype) - mrec = np.hstack((zeros, recalls, ones)) - mpre = np.hstack((zeros, precisions, zeros)) - for i in range(mpre.shape[1] - 1, 0, -1): - mpre[:, i - 1] = np.maximum(mpre[:, i - 1], mpre[:, i]) - for i in range(num_scales): - ind = np.where(mrec[i, 1:] != mrec[i, :-1])[0] - ap[i] = np.sum( - (mrec[i, ind + 1] - mrec[i, ind]) * mpre[i, ind + 1]) - elif mode == '11points': - for i in range(num_scales): - for thr in np.arange(0, 1 + 1e-3, 0.1): - precs = precisions[i, recalls[i, :] >= thr] - prec = precs.max() if precs.size > 0 else 0 - ap[i] += prec - ap /= 11 - else: - raise ValueError( - 'Unrecognized mode, only "area" and "11points" are supported') - return ap - - -def eval_det_cls(pred, gt, iou_thr=None): - """Generic functions to compute precision/recall for object detection for a - single class. - - Args: - pred (dict): Predictions mapping from image id to bounding boxes - and scores. - gt (dict): Ground truths mapping from image id to bounding boxes. - iou_thr (list[float]): A list of iou thresholds. - - Return: - tuple (np.ndarray, np.ndarray, float): Recalls, precisions and - average precision. - """ - - # {img_id: {'bbox': box structure, 'det': matched list}} - class_recs = {} - npos = 0 - for img_id in gt.keys(): - cur_gt_num = len(gt[img_id]) - if cur_gt_num != 0: - gt_cur = torch.zeros([cur_gt_num, 7], dtype=torch.float32) - for i in range(cur_gt_num): - gt_cur[i] = gt[img_id][i].tensor - bbox = gt[img_id][0].new_box(gt_cur) - else: - bbox = gt[img_id] - det = [[False] * len(bbox) for i in iou_thr] - npos += len(bbox) - class_recs[img_id] = {'bbox': bbox, 'det': det} - - # construct dets - image_ids = [] - confidence = [] - ious = [] - for img_id in pred.keys(): - cur_num = len(pred[img_id]) - if cur_num == 0: - continue - pred_cur = torch.zeros((cur_num, 7), dtype=torch.float32) - box_idx = 0 - for box, score in pred[img_id]: - image_ids.append(img_id) - confidence.append(score) - pred_cur[box_idx] = box.tensor - box_idx += 1 - pred_cur = box.new_box(pred_cur) - gt_cur = class_recs[img_id]['bbox'] - if len(gt_cur) > 0: - # calculate iou in each image - iou_cur = pred_cur.overlaps(pred_cur, gt_cur) - for i in range(cur_num): - ious.append(iou_cur[i]) - else: - for i in range(cur_num): - ious.append(np.zeros(1)) - - confidence = np.array(confidence) - - # sort by confidence - sorted_ind = np.argsort(-confidence) - image_ids = [image_ids[x] for x in sorted_ind] - ious = [ious[x] for x in sorted_ind] - - # go down dets and mark TPs and FPs - nd = len(image_ids) - tp_thr = [np.zeros(nd) for i in iou_thr] - fp_thr = [np.zeros(nd) for i in iou_thr] - for d in range(nd): - R = class_recs[image_ids[d]] - iou_max = -np.inf - BBGT = R['bbox'] - cur_iou = ious[d] - - if len(BBGT) > 0: - # compute overlaps - for j in range(len(BBGT)): - # iou = get_iou_main(get_iou_func, (bb, BBGT[j,...])) - iou = cur_iou[j] - if iou > iou_max: - iou_max = iou - jmax = j - - for 
iou_idx, thresh in enumerate(iou_thr): - if iou_max > thresh: - if not R['det'][iou_idx][jmax]: - tp_thr[iou_idx][d] = 1. - R['det'][iou_idx][jmax] = 1 - else: - fp_thr[iou_idx][d] = 1. - else: - fp_thr[iou_idx][d] = 1. - - ret = [] - for iou_idx, thresh in enumerate(iou_thr): - # compute precision recall - fp = np.cumsum(fp_thr[iou_idx]) - tp = np.cumsum(tp_thr[iou_idx]) - recall = tp / float(npos) - # avoid divide by zero in case the first detection matches a difficult - # ground truth - precision = tp / np.maximum(tp + fp, np.finfo(np.float64).eps) - ap = average_precision(recall, precision) - ret.append((recall, precision, ap)) - - return ret - - -def eval_map_recall(pred, gt, ovthresh=None): - """Evaluate mAP and recall. - - Generic functions to compute precision/recall for object detection - for multiple classes. - - Args: - pred (dict): Information of detection results, - which maps class_id and predictions. - gt (dict): Information of ground truths, which maps class_id and - ground truths. - ovthresh (list[float], optional): iou threshold. Default: None. - - Return: - tuple[dict]: dict results of recall, AP, and precision for all classes. - """ - - ret_values = {} - for classname in gt.keys(): - if classname in pred: - ret_values[classname] = eval_det_cls(pred[classname], - gt[classname], ovthresh) - recall = [{} for i in ovthresh] - precision = [{} for i in ovthresh] - ap = [{} for i in ovthresh] - - for label in gt.keys(): - for iou_idx, thresh in enumerate(ovthresh): - if label in pred: - recall[iou_idx][label], precision[iou_idx][label], ap[iou_idx][ - label] = ret_values[label][iou_idx] - else: - recall[iou_idx][label] = np.zeros(1) - precision[iou_idx][label] = np.zeros(1) - ap[iou_idx][label] = np.zeros(1) - - return recall, precision, ap - - -def indoor_eval(gt_annos, - dt_annos, - metric, - label2cat, - logger=None, - box_type_3d=None, - box_mode_3d=None): - """Indoor Evaluation. - - Evaluate the result of the detection. - - Args: - gt_annos (list[dict]): Ground truth annotations. - dt_annos (list[dict]): Detection annotations. the dict - includes the following keys - - - labels_3d (torch.Tensor): Labels of boxes. - - boxes_3d (:obj:`BaseInstance3DBoxes`): - 3D bounding boxes in Depth coordinate. - - scores_3d (torch.Tensor): Scores of boxes. - metric (list[float]): IoU thresholds for computing average precisions. - label2cat (dict): Map from label to category. - logger (logging.Logger | str, optional): The way to print the mAP - summary. See `mmdet.utils.print_log()` for details. Default: None. - - Return: - dict[str, float]: Dict of results. 
- """ - assert len(dt_annos) == len(gt_annos) - pred = {} # map {class_id: pred} - gt = {} # map {class_id: gt} - for img_id in range(len(dt_annos)): - # parse detected annotations - det_anno = dt_annos[img_id] - for i in range(len(det_anno['labels_3d'])): - label = det_anno['labels_3d'].numpy()[i] - bbox = det_anno['boxes_3d'].convert_to(box_mode_3d)[i] - score = det_anno['scores_3d'].numpy()[i] - if label not in pred: - pred[int(label)] = {} - if img_id not in pred[label]: - pred[int(label)][img_id] = [] - if label not in gt: - gt[int(label)] = {} - if img_id not in gt[label]: - gt[int(label)][img_id] = [] - pred[int(label)][img_id].append((bbox, score)) - - # parse gt annotations - gt_anno = gt_annos[img_id] - if gt_anno['gt_num'] != 0: - gt_boxes = box_type_3d( - gt_anno['gt_boxes_upright_depth'], - box_dim=gt_anno['gt_boxes_upright_depth'].shape[-1], - origin=(0.5, 0.5, 0.5)).convert_to(box_mode_3d) - labels_3d = gt_anno['class'] - else: - gt_boxes = box_type_3d(np.array([], dtype=np.float32)) - labels_3d = np.array([], dtype=np.int64) - - for i in range(len(labels_3d)): - label = labels_3d[i] - bbox = gt_boxes[i] - if label not in gt: - gt[label] = {} - if img_id not in gt[label]: - gt[label][img_id] = [] - gt[label][img_id].append(bbox) - - rec, prec, ap = eval_map_recall(pred, gt, metric) - ret_dict = dict() - header = ['classes'] - table_columns = [[label2cat[label] - for label in ap[0].keys()] + ['Overall']] - - for i, iou_thresh in enumerate(metric): - header.append(f'AP_{iou_thresh:.2f}') - header.append(f'AR_{iou_thresh:.2f}') - rec_list = [] - for label in ap[i].keys(): - ret_dict[f'{label2cat[label]}_AP_{iou_thresh:.2f}'] = float( - ap[i][label][0]) - ret_dict[f'mAP_{iou_thresh:.2f}'] = float( - np.mean(list(ap[i].values()))) - - table_columns.append(list(map(float, list(ap[i].values())))) - table_columns[-1] += [ret_dict[f'mAP_{iou_thresh:.2f}']] - table_columns[-1] = [f'{x:.4f}' for x in table_columns[-1]] - - for label in rec[i].keys(): - ret_dict[f'{label2cat[label]}_rec_{iou_thresh:.2f}'] = float( - rec[i][label][-1]) - rec_list.append(rec[i][label][-1]) - ret_dict[f'mAR_{iou_thresh:.2f}'] = float(np.mean(rec_list)) - - table_columns.append(list(map(float, rec_list))) - table_columns[-1] += [ret_dict[f'mAR_{iou_thresh:.2f}']] - table_columns[-1] = [f'{x:.4f}' for x in table_columns[-1]] - - table_data = [header] - table_rows = list(zip(*table_columns)) - table_data += table_rows - table = AsciiTable(table_data) - table.inner_footing_row_border = True - print_log('\n' + table.table, logger=logger) - - return ret_dict diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/instance_seg_eval.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/instance_seg_eval.py deleted file mode 100644 index 31f5110a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/instance_seg_eval.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .scannet_utils.evaluate_semantic_instance import scannet_eval - - -def aggregate_predictions(masks, labels, scores, valid_class_ids): - """Maps predictions to ScanNet evaluator format. - - Args: - masks (list[torch.Tensor]): Per scene predicted instance masks. - labels (list[torch.Tensor]): Per scene predicted instance labels. - scores (list[torch.Tensor]): Per scene predicted instance scores. 
- valid_class_ids (tuple[int]): Ids of valid categories. - - Returns: - list[dict]: Per scene aggregated predictions. - """ - infos = [] - for id, (mask, label, score) in enumerate(zip(masks, labels, scores)): - mask = mask.clone().numpy() - label = label.clone().numpy() - score = score.clone().numpy() - info = dict() - n_instances = mask.max() + 1 - for i in range(n_instances): - # match pred_instance['filename'] from assign_instances_for_scan - file_name = f'{id}_{i}' - info[file_name] = dict() - info[file_name]['mask'] = (mask == i).astype(np.int) - info[file_name]['label_id'] = valid_class_ids[label[i]] - info[file_name]['conf'] = score[i] - infos.append(info) - return infos - - -def rename_gt(gt_semantic_masks, gt_instance_masks, valid_class_ids): - """Maps gt instance and semantic masks to instance masks for ScanNet - evaluator. - - Args: - gt_semantic_masks (list[torch.Tensor]): Per scene gt semantic masks. - gt_instance_masks (list[torch.Tensor]): Per scene gt instance masks. - valid_class_ids (tuple[int]): Ids of valid categories. - - Returns: - list[np.array]: Per scene instance masks. - """ - renamed_instance_masks = [] - for semantic_mask, instance_mask in zip(gt_semantic_masks, - gt_instance_masks): - semantic_mask = semantic_mask.clone().numpy() - instance_mask = instance_mask.clone().numpy() - unique = np.unique(instance_mask) - assert len(unique) < 1000 - for i in unique: - semantic_instance = semantic_mask[instance_mask == i] - semantic_unique = np.unique(semantic_instance) - assert len(semantic_unique) == 1 - if semantic_unique[0] < len(valid_class_ids): - instance_mask[ - instance_mask == - i] = 1000 * valid_class_ids[semantic_unique[0]] + i - renamed_instance_masks.append(instance_mask) - return renamed_instance_masks - - -def instance_seg_eval(gt_semantic_masks, - gt_instance_masks, - pred_instance_masks, - pred_instance_labels, - pred_instance_scores, - valid_class_ids, - class_labels, - options=None, - logger=None): - """Instance Segmentation Evaluation. - - Evaluate the result of the instance segmentation. - - Args: - gt_semantic_masks (list[torch.Tensor]): Ground truth semantic masks. - gt_instance_masks (list[torch.Tensor]): Ground truth instance masks. - pred_instance_masks (list[torch.Tensor]): Predicted instance masks. - pred_instance_labels (list[torch.Tensor]): Predicted instance labels. - pred_instance_scores (list[torch.Tensor]): Predicted instance labels. - valid_class_ids (tuple[int]): Ids of valid categories. - class_labels (tuple[str]): Names of valid categories. - options (dict, optional): Additional options. Keys may contain: - `overlaps`, `min_region_sizes`, `distance_threshes`, - `distance_confs`. Default: None. - logger (logging.Logger | str, optional): The way to print the mAP - summary. See `mmdet.utils.print_log()` for details. Default: None. - - Returns: - dict[str, float]: Dict of results. 
- """ - assert len(valid_class_ids) == len(class_labels) - id_to_label = { - valid_class_ids[i]: class_labels[i] - for i in range(len(valid_class_ids)) - } - preds = aggregate_predictions( - masks=pred_instance_masks, - labels=pred_instance_labels, - scores=pred_instance_scores, - valid_class_ids=valid_class_ids) - gts = rename_gt(gt_semantic_masks, gt_instance_masks, valid_class_ids) - metrics = scannet_eval( - preds=preds, - gts=gts, - options=options, - valid_class_ids=valid_class_ids, - class_labels=class_labels, - id_to_label=id_to_label) - header = ['classes', 'AP_0.25', 'AP_0.50', 'AP'] - rows = [] - for label, data in metrics['classes'].items(): - aps = [data['ap25%'], data['ap50%'], data['ap']] - rows.append([label] + [f'{ap:.4f}' for ap in aps]) - aps = metrics['all_ap_25%'], metrics['all_ap_50%'], metrics['all_ap'] - footer = ['Overall'] + [f'{ap:.4f}' for ap in aps] - table = AsciiTable([header] + rows + [footer]) - table.inner_footing_row_border = True - print_log('\n' + table.table, logger=logger) - return metrics diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/__init__.py deleted file mode 100644 index 23c1cdf2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .eval import kitti_eval, kitti_eval_coco_style - -__all__ = ['kitti_eval', 'kitti_eval_coco_style'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/eval.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/eval.py deleted file mode 100644 index f8408dfa..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/eval.py +++ /dev/null @@ -1,950 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import gc -import io as sysio - -import numba -import numpy as np - - -@numba.jit -def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41): - scores.sort() - scores = scores[::-1] - current_recall = 0 - thresholds = [] - for i, score in enumerate(scores): - l_recall = (i + 1) / num_gt - if i < (len(scores) - 1): - r_recall = (i + 2) / num_gt - else: - r_recall = l_recall - if (((r_recall - current_recall) < (current_recall - l_recall)) - and (i < (len(scores) - 1))): - continue - # recall = l_recall - thresholds.append(score) - current_recall += 1 / (num_sample_pts - 1.0) - return thresholds - - -def clean_data(gt_anno, dt_anno, current_class, difficulty): - CLASS_NAMES = ['car', 'pedestrian', 'cyclist'] - MIN_HEIGHT = [40, 25, 25] - MAX_OCCLUSION = [0, 1, 2] - MAX_TRUNCATION = [0.15, 0.3, 0.5] - dc_bboxes, ignored_gt, ignored_dt = [], [], [] - current_cls_name = CLASS_NAMES[current_class].lower() - num_gt = len(gt_anno['name']) - num_dt = len(dt_anno['name']) - num_valid_gt = 0 - for i in range(num_gt): - bbox = gt_anno['bbox'][i] - gt_name = gt_anno['name'][i].lower() - height = bbox[3] - bbox[1] - valid_class = -1 - if (gt_name == current_cls_name): - valid_class = 1 - elif (current_cls_name == 'Pedestrian'.lower() - and 'Person_sitting'.lower() == gt_name): - valid_class = 0 - elif (current_cls_name == 'Car'.lower() and 'Van'.lower() == gt_name): - valid_class = 0 - else: - valid_class = -1 - ignore = False - if ((gt_anno['occluded'][i] > MAX_OCCLUSION[difficulty]) - or (gt_anno['truncated'][i] > MAX_TRUNCATION[difficulty]) - or (height <= MIN_HEIGHT[difficulty])): - ignore = True - if valid_class == 1 and not ignore: - ignored_gt.append(0) - num_valid_gt += 1 - elif (valid_class == 0 or (ignore and (valid_class == 1))): - ignored_gt.append(1) - else: - ignored_gt.append(-1) - # for i in range(num_gt): - if gt_anno['name'][i] == 'DontCare': - dc_bboxes.append(gt_anno['bbox'][i]) - for i in range(num_dt): - if (dt_anno['name'][i].lower() == current_cls_name): - valid_class = 1 - else: - valid_class = -1 - height = abs(dt_anno['bbox'][i, 3] - dt_anno['bbox'][i, 1]) - if height < MIN_HEIGHT[difficulty]: - ignored_dt.append(1) - elif valid_class == 1: - ignored_dt.append(0) - else: - ignored_dt.append(-1) - - return num_valid_gt, ignored_gt, ignored_dt, dc_bboxes - - -@numba.jit(nopython=True) -def image_box_overlap(boxes, query_boxes, criterion=-1): - N = boxes.shape[0] - K = query_boxes.shape[0] - overlaps = np.zeros((N, K), dtype=boxes.dtype) - for k in range(K): - qbox_area = ((query_boxes[k, 2] - query_boxes[k, 0]) * - (query_boxes[k, 3] - query_boxes[k, 1])) - for n in range(N): - iw = ( - min(boxes[n, 2], query_boxes[k, 2]) - - max(boxes[n, 0], query_boxes[k, 0])) - if iw > 0: - ih = ( - min(boxes[n, 3], query_boxes[k, 3]) - - max(boxes[n, 1], query_boxes[k, 1])) - if ih > 0: - if criterion == -1: - ua = ((boxes[n, 2] - boxes[n, 0]) * - (boxes[n, 3] - boxes[n, 1]) + qbox_area - - iw * ih) - elif criterion == 0: - ua = ((boxes[n, 2] - boxes[n, 0]) * - (boxes[n, 3] - boxes[n, 1])) - elif criterion == 1: - ua = qbox_area - else: - ua = 1.0 - overlaps[n, k] = iw * ih / ua - return overlaps - - -def bev_box_overlap(boxes, qboxes, criterion=-1): - from .rotate_iou import rotate_iou_gpu_eval - riou = rotate_iou_gpu_eval(boxes, qboxes, criterion) - return riou - - -@numba.jit(nopython=True, parallel=True) -def d3_box_overlap_kernel(boxes, qboxes, rinc, criterion=-1): - # ONLY support overlap in CAMERA, not lidar. 
- # TODO: change to use prange for parallel mode, should check the difference - N, K = boxes.shape[0], qboxes.shape[0] - for i in numba.prange(N): - for j in numba.prange(K): - if rinc[i, j] > 0: - # iw = (min(boxes[i, 1] + boxes[i, 4], qboxes[j, 1] + - # qboxes[j, 4]) - max(boxes[i, 1], qboxes[j, 1])) - iw = ( - min(boxes[i, 1], qboxes[j, 1]) - - max(boxes[i, 1] - boxes[i, 4], - qboxes[j, 1] - qboxes[j, 4])) - - if iw > 0: - area1 = boxes[i, 3] * boxes[i, 4] * boxes[i, 5] - area2 = qboxes[j, 3] * qboxes[j, 4] * qboxes[j, 5] - inc = iw * rinc[i, j] - if criterion == -1: - ua = (area1 + area2 - inc) - elif criterion == 0: - ua = area1 - elif criterion == 1: - ua = area2 - else: - ua = inc - rinc[i, j] = inc / ua - else: - rinc[i, j] = 0.0 - - -def d3_box_overlap(boxes, qboxes, criterion=-1): - from .rotate_iou import rotate_iou_gpu_eval - rinc = rotate_iou_gpu_eval(boxes[:, [0, 2, 3, 5, 6]], - qboxes[:, [0, 2, 3, 5, 6]], 2) - d3_box_overlap_kernel(boxes, qboxes, rinc, criterion) - return rinc - - -@numba.jit(nopython=True) -def compute_statistics_jit(overlaps, - gt_datas, - dt_datas, - ignored_gt, - ignored_det, - dc_bboxes, - metric, - min_overlap, - thresh=0, - compute_fp=False, - compute_aos=False): - - det_size = dt_datas.shape[0] - gt_size = gt_datas.shape[0] - dt_scores = dt_datas[:, -1] - dt_alphas = dt_datas[:, 4] - gt_alphas = gt_datas[:, 4] - dt_bboxes = dt_datas[:, :4] - # gt_bboxes = gt_datas[:, :4] - - assigned_detection = [False] * det_size - ignored_threshold = [False] * det_size - if compute_fp: - for i in range(det_size): - if (dt_scores[i] < thresh): - ignored_threshold[i] = True - NO_DETECTION = -10000000 - tp, fp, fn, similarity = 0, 0, 0, 0 - # thresholds = [0.0] - # delta = [0.0] - thresholds = np.zeros((gt_size, )) - thresh_idx = 0 - delta = np.zeros((gt_size, )) - delta_idx = 0 - for i in range(gt_size): - if ignored_gt[i] == -1: - continue - det_idx = -1 - valid_detection = NO_DETECTION - max_overlap = 0 - assigned_ignored_det = False - - for j in range(det_size): - if (ignored_det[j] == -1): - continue - if (assigned_detection[j]): - continue - if (ignored_threshold[j]): - continue - overlap = overlaps[j, i] - dt_score = dt_scores[j] - if (not compute_fp and (overlap > min_overlap) - and dt_score > valid_detection): - det_idx = j - valid_detection = dt_score - elif (compute_fp and (overlap > min_overlap) - and (overlap > max_overlap or assigned_ignored_det) - and ignored_det[j] == 0): - max_overlap = overlap - det_idx = j - valid_detection = 1 - assigned_ignored_det = False - elif (compute_fp and (overlap > min_overlap) - and (valid_detection == NO_DETECTION) - and ignored_det[j] == 1): - det_idx = j - valid_detection = 1 - assigned_ignored_det = True - - if (valid_detection == NO_DETECTION) and ignored_gt[i] == 0: - fn += 1 - elif ((valid_detection != NO_DETECTION) - and (ignored_gt[i] == 1 or ignored_det[det_idx] == 1)): - assigned_detection[det_idx] = True - elif valid_detection != NO_DETECTION: - tp += 1 - # thresholds.append(dt_scores[det_idx]) - thresholds[thresh_idx] = dt_scores[det_idx] - thresh_idx += 1 - if compute_aos: - # delta.append(gt_alphas[i] - dt_alphas[det_idx]) - delta[delta_idx] = gt_alphas[i] - dt_alphas[det_idx] - delta_idx += 1 - - assigned_detection[det_idx] = True - if compute_fp: - for i in range(det_size): - if (not (assigned_detection[i] or ignored_det[i] == -1 - or ignored_det[i] == 1 or ignored_threshold[i])): - fp += 1 - nstuff = 0 - if metric == 0: - overlaps_dt_dc = image_box_overlap(dt_bboxes, dc_bboxes, 0) - for i in 
range(dc_bboxes.shape[0]): - for j in range(det_size): - if (assigned_detection[j]): - continue - if (ignored_det[j] == -1 or ignored_det[j] == 1): - continue - if (ignored_threshold[j]): - continue - if overlaps_dt_dc[j, i] > min_overlap: - assigned_detection[j] = True - nstuff += 1 - fp -= nstuff - if compute_aos: - tmp = np.zeros((fp + delta_idx, )) - # tmp = [0] * fp - for i in range(delta_idx): - tmp[i + fp] = (1.0 + np.cos(delta[i])) / 2.0 - # tmp.append((1.0 + np.cos(delta[i])) / 2.0) - # assert len(tmp) == fp + tp - # assert len(delta) == tp - if tp > 0 or fp > 0: - similarity = np.sum(tmp) - else: - similarity = -1 - return tp, fp, fn, similarity, thresholds[:thresh_idx] - - -def get_split_parts(num, num_part): - same_part = num // num_part - remain_num = num % num_part - if remain_num == 0: - return [same_part] * num_part - else: - return [same_part] * num_part + [remain_num] - - -@numba.jit(nopython=True) -def fused_compute_statistics(overlaps, - pr, - gt_nums, - dt_nums, - dc_nums, - gt_datas, - dt_datas, - dontcares, - ignored_gts, - ignored_dets, - metric, - min_overlap, - thresholds, - compute_aos=False): - gt_num = 0 - dt_num = 0 - dc_num = 0 - for i in range(gt_nums.shape[0]): - for t, thresh in enumerate(thresholds): - overlap = overlaps[dt_num:dt_num + dt_nums[i], - gt_num:gt_num + gt_nums[i]] - - gt_data = gt_datas[gt_num:gt_num + gt_nums[i]] - dt_data = dt_datas[dt_num:dt_num + dt_nums[i]] - ignored_gt = ignored_gts[gt_num:gt_num + gt_nums[i]] - ignored_det = ignored_dets[dt_num:dt_num + dt_nums[i]] - dontcare = dontcares[dc_num:dc_num + dc_nums[i]] - tp, fp, fn, similarity, _ = compute_statistics_jit( - overlap, - gt_data, - dt_data, - ignored_gt, - ignored_det, - dontcare, - metric, - min_overlap=min_overlap, - thresh=thresh, - compute_fp=True, - compute_aos=compute_aos) - pr[t, 0] += tp - pr[t, 1] += fp - pr[t, 2] += fn - if similarity != -1: - pr[t, 3] += similarity - gt_num += gt_nums[i] - dt_num += dt_nums[i] - dc_num += dc_nums[i] - - -def calculate_iou_partly(gt_annos, dt_annos, metric, num_parts=50): - """Fast iou algorithm. this function can be used independently to do result - analysis. Must be used in CAMERA coordinate system. - - Args: - gt_annos (dict): Must from get_label_annos() in kitti_common.py. - dt_annos (dict): Must from get_label_annos() in kitti_common.py. - metric (int): Eval type. 0: bbox, 1: bev, 2: 3d. - num_parts (int): A parameter for fast calculate algorithm. 
- """ - assert len(gt_annos) == len(dt_annos) - total_dt_num = np.stack([len(a['name']) for a in dt_annos], 0) - total_gt_num = np.stack([len(a['name']) for a in gt_annos], 0) - num_examples = len(gt_annos) - split_parts = get_split_parts(num_examples, num_parts) - parted_overlaps = [] - example_idx = 0 - - for num_part in split_parts: - gt_annos_part = gt_annos[example_idx:example_idx + num_part] - dt_annos_part = dt_annos[example_idx:example_idx + num_part] - if metric == 0: - gt_boxes = np.concatenate([a['bbox'] for a in gt_annos_part], 0) - dt_boxes = np.concatenate([a['bbox'] for a in dt_annos_part], 0) - overlap_part = image_box_overlap(gt_boxes, dt_boxes) - elif metric == 1: - loc = np.concatenate( - [a['location'][:, [0, 2]] for a in gt_annos_part], 0) - dims = np.concatenate( - [a['dimensions'][:, [0, 2]] for a in gt_annos_part], 0) - rots = np.concatenate([a['rotation_y'] for a in gt_annos_part], 0) - gt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - loc = np.concatenate( - [a['location'][:, [0, 2]] for a in dt_annos_part], 0) - dims = np.concatenate( - [a['dimensions'][:, [0, 2]] for a in dt_annos_part], 0) - rots = np.concatenate([a['rotation_y'] for a in dt_annos_part], 0) - dt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - overlap_part = bev_box_overlap(gt_boxes, - dt_boxes).astype(np.float64) - elif metric == 2: - loc = np.concatenate([a['location'] for a in gt_annos_part], 0) - dims = np.concatenate([a['dimensions'] for a in gt_annos_part], 0) - rots = np.concatenate([a['rotation_y'] for a in gt_annos_part], 0) - gt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - loc = np.concatenate([a['location'] for a in dt_annos_part], 0) - dims = np.concatenate([a['dimensions'] for a in dt_annos_part], 0) - rots = np.concatenate([a['rotation_y'] for a in dt_annos_part], 0) - dt_boxes = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - overlap_part = d3_box_overlap(gt_boxes, - dt_boxes).astype(np.float64) - else: - raise ValueError('unknown metric') - parted_overlaps.append(overlap_part) - example_idx += num_part - overlaps = [] - example_idx = 0 - for j, num_part in enumerate(split_parts): - gt_annos_part = gt_annos[example_idx:example_idx + num_part] - dt_annos_part = dt_annos[example_idx:example_idx + num_part] - gt_num_idx, dt_num_idx = 0, 0 - for i in range(num_part): - gt_box_num = total_gt_num[example_idx + i] - dt_box_num = total_dt_num[example_idx + i] - overlaps.append( - parted_overlaps[j][gt_num_idx:gt_num_idx + gt_box_num, - dt_num_idx:dt_num_idx + dt_box_num]) - gt_num_idx += gt_box_num - dt_num_idx += dt_box_num - example_idx += num_part - - return overlaps, parted_overlaps, total_gt_num, total_dt_num - - -def _prepare_data(gt_annos, dt_annos, current_class, difficulty): - gt_datas_list = [] - dt_datas_list = [] - total_dc_num = [] - ignored_gts, ignored_dets, dontcares = [], [], [] - total_num_valid_gt = 0 - for i in range(len(gt_annos)): - rets = clean_data(gt_annos[i], dt_annos[i], current_class, difficulty) - num_valid_gt, ignored_gt, ignored_det, dc_bboxes = rets - ignored_gts.append(np.array(ignored_gt, dtype=np.int64)) - ignored_dets.append(np.array(ignored_det, dtype=np.int64)) - if len(dc_bboxes) == 0: - dc_bboxes = np.zeros((0, 4)).astype(np.float64) - else: - dc_bboxes = np.stack(dc_bboxes, 0).astype(np.float64) - total_dc_num.append(dc_bboxes.shape[0]) - dontcares.append(dc_bboxes) - total_num_valid_gt += num_valid_gt - gt_datas = np.concatenate( - [gt_annos[i]['bbox'], 
gt_annos[i]['alpha'][..., np.newaxis]], 1) - dt_datas = np.concatenate([ - dt_annos[i]['bbox'], dt_annos[i]['alpha'][..., np.newaxis], - dt_annos[i]['score'][..., np.newaxis] - ], 1) - gt_datas_list.append(gt_datas) - dt_datas_list.append(dt_datas) - total_dc_num = np.stack(total_dc_num, axis=0) - return (gt_datas_list, dt_datas_list, ignored_gts, ignored_dets, dontcares, - total_dc_num, total_num_valid_gt) - - -def eval_class(gt_annos, - dt_annos, - current_classes, - difficultys, - metric, - min_overlaps, - compute_aos=False, - num_parts=200): - """Kitti eval. support 2d/bev/3d/aos eval. support 0.5:0.05:0.95 coco AP. - - Args: - gt_annos (dict): Must from get_label_annos() in kitti_common.py. - dt_annos (dict): Must from get_label_annos() in kitti_common.py. - current_classes (list[int]): 0: car, 1: pedestrian, 2: cyclist. - difficultys (list[int]): Eval difficulty, 0: easy, 1: normal, 2: hard - metric (int): Eval type. 0: bbox, 1: bev, 2: 3d - min_overlaps (float): Min overlap. format: - [num_overlap, metric, class]. - num_parts (int): A parameter for fast calculate algorithm - - Returns: - dict[str, np.ndarray]: recall, precision and aos - """ - assert len(gt_annos) == len(dt_annos) - num_examples = len(gt_annos) - if num_examples < num_parts: - num_parts = num_examples - split_parts = get_split_parts(num_examples, num_parts) - - rets = calculate_iou_partly(dt_annos, gt_annos, metric, num_parts) - overlaps, parted_overlaps, total_dt_num, total_gt_num = rets - N_SAMPLE_PTS = 41 - num_minoverlap = len(min_overlaps) - num_class = len(current_classes) - num_difficulty = len(difficultys) - precision = np.zeros( - [num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS]) - recall = np.zeros( - [num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS]) - aos = np.zeros([num_class, num_difficulty, num_minoverlap, N_SAMPLE_PTS]) - for m, current_class in enumerate(current_classes): - for idx_l, difficulty in enumerate(difficultys): - rets = _prepare_data(gt_annos, dt_annos, current_class, difficulty) - (gt_datas_list, dt_datas_list, ignored_gts, ignored_dets, - dontcares, total_dc_num, total_num_valid_gt) = rets - for k, min_overlap in enumerate(min_overlaps[:, metric, m]): - thresholdss = [] - for i in range(len(gt_annos)): - rets = compute_statistics_jit( - overlaps[i], - gt_datas_list[i], - dt_datas_list[i], - ignored_gts[i], - ignored_dets[i], - dontcares[i], - metric, - min_overlap=min_overlap, - thresh=0.0, - compute_fp=False) - tp, fp, fn, similarity, thresholds = rets - thresholdss += thresholds.tolist() - thresholdss = np.array(thresholdss) - thresholds = get_thresholds(thresholdss, total_num_valid_gt) - thresholds = np.array(thresholds) - pr = np.zeros([len(thresholds), 4]) - idx = 0 - for j, num_part in enumerate(split_parts): - gt_datas_part = np.concatenate( - gt_datas_list[idx:idx + num_part], 0) - dt_datas_part = np.concatenate( - dt_datas_list[idx:idx + num_part], 0) - dc_datas_part = np.concatenate( - dontcares[idx:idx + num_part], 0) - ignored_dets_part = np.concatenate( - ignored_dets[idx:idx + num_part], 0) - ignored_gts_part = np.concatenate( - ignored_gts[idx:idx + num_part], 0) - fused_compute_statistics( - parted_overlaps[j], - pr, - total_gt_num[idx:idx + num_part], - total_dt_num[idx:idx + num_part], - total_dc_num[idx:idx + num_part], - gt_datas_part, - dt_datas_part, - dc_datas_part, - ignored_gts_part, - ignored_dets_part, - metric, - min_overlap=min_overlap, - thresholds=thresholds, - compute_aos=compute_aos) - idx += num_part - for i in range(len(thresholds)): - 
recall[m, idx_l, k, i] = pr[i, 0] / (pr[i, 0] + pr[i, 2]) - precision[m, idx_l, k, i] = pr[i, 0] / ( - pr[i, 0] + pr[i, 1]) - if compute_aos: - aos[m, idx_l, k, i] = pr[i, 3] / (pr[i, 0] + pr[i, 1]) - for i in range(len(thresholds)): - precision[m, idx_l, k, i] = np.max( - precision[m, idx_l, k, i:], axis=-1) - recall[m, idx_l, k, i] = np.max( - recall[m, idx_l, k, i:], axis=-1) - if compute_aos: - aos[m, idx_l, k, i] = np.max( - aos[m, idx_l, k, i:], axis=-1) - ret_dict = { - 'recall': recall, - 'precision': precision, - 'orientation': aos, - } - - # clean temp variables - del overlaps - del parted_overlaps - - gc.collect() - return ret_dict - - -def get_mAP11(prec): - sums = 0 - for i in range(0, prec.shape[-1], 4): - sums = sums + prec[..., i] - return sums / 11 * 100 - - -def get_mAP40(prec): - sums = 0 - for i in range(1, prec.shape[-1]): - sums = sums + prec[..., i] - return sums / 40 * 100 - - -def print_str(value, *arg, sstream=None): - if sstream is None: - sstream = sysio.StringIO() - sstream.truncate(0) - sstream.seek(0) - print(value, *arg, file=sstream) - return sstream.getvalue() - - -def do_eval(gt_annos, - dt_annos, - current_classes, - min_overlaps, - eval_types=['bbox', 'bev', '3d']): - # min_overlaps: [num_minoverlap, metric, num_class] - difficultys = [0, 1, 2] - mAP11_bbox = None - mAP11_aos = None - mAP40_bbox = None - mAP40_aos = None - if 'bbox' in eval_types: - ret = eval_class( - gt_annos, - dt_annos, - current_classes, - difficultys, - 0, - min_overlaps, - compute_aos=('aos' in eval_types)) - # ret: [num_class, num_diff, num_minoverlap, num_sample_points] - mAP11_bbox = get_mAP11(ret['precision']) - mAP40_bbox = get_mAP40(ret['precision']) - if 'aos' in eval_types: - mAP11_aos = get_mAP11(ret['orientation']) - mAP40_aos = get_mAP40(ret['orientation']) - - mAP11_bev = None - mAP40_bev = None - if 'bev' in eval_types: - ret = eval_class(gt_annos, dt_annos, current_classes, difficultys, 1, - min_overlaps) - mAP11_bev = get_mAP11(ret['precision']) - mAP40_bev = get_mAP40(ret['precision']) - - mAP11_3d = None - mAP40_3d = None - if '3d' in eval_types: - ret = eval_class(gt_annos, dt_annos, current_classes, difficultys, 2, - min_overlaps) - mAP11_3d = get_mAP11(ret['precision']) - mAP40_3d = get_mAP40(ret['precision']) - return (mAP11_bbox, mAP11_bev, mAP11_3d, mAP11_aos, mAP40_bbox, mAP40_bev, - mAP40_3d, mAP40_aos) - - -def do_coco_style_eval(gt_annos, dt_annos, current_classes, overlap_ranges, - compute_aos): - # overlap_ranges: [range, metric, num_class] - min_overlaps = np.zeros([10, *overlap_ranges.shape[1:]]) - for i in range(overlap_ranges.shape[1]): - for j in range(overlap_ranges.shape[2]): - min_overlaps[:, i, j] = np.linspace(*overlap_ranges[:, i, j]) - mAP_bbox, mAP_bev, mAP_3d, mAP_aos, _, _, \ - _, _ = do_eval(gt_annos, dt_annos, - current_classes, min_overlaps, - compute_aos) - # ret: [num_class, num_diff, num_minoverlap] - mAP_bbox = mAP_bbox.mean(-1) - mAP_bev = mAP_bev.mean(-1) - mAP_3d = mAP_3d.mean(-1) - if mAP_aos is not None: - mAP_aos = mAP_aos.mean(-1) - return mAP_bbox, mAP_bev, mAP_3d, mAP_aos - - -def kitti_eval(gt_annos, - dt_annos, - current_classes, - eval_types=['bbox', 'bev', '3d']): - """KITTI evaluation. - - Args: - gt_annos (list[dict]): Contain gt information of each sample. - dt_annos (list[dict]): Contain detected information of each sample. - current_classes (list[str]): Classes to evaluation. - eval_types (list[str], optional): Types to eval. - Defaults to ['bbox', 'bev', '3d']. 
- - Returns: - tuple: String and dict of evaluation results. - """ - assert len(eval_types) > 0, 'must contain at least one evaluation type' - if 'aos' in eval_types: - assert 'bbox' in eval_types, 'must evaluate bbox when evaluating aos' - overlap_0_7 = np.array([[0.7, 0.5, 0.5, 0.7, - 0.5], [0.7, 0.5, 0.5, 0.7, 0.5], - [0.7, 0.5, 0.5, 0.7, 0.5]]) - overlap_0_5 = np.array([[0.7, 0.5, 0.5, 0.7, 0.5], - [0.5, 0.25, 0.25, 0.5, 0.25], - [0.5, 0.25, 0.25, 0.5, 0.25]]) - min_overlaps = np.stack([overlap_0_7, overlap_0_5], axis=0) # [2, 3, 5] - class_to_name = { - 0: 'Car', - 1: 'Pedestrian', - 2: 'Cyclist', - 3: 'Van', - 4: 'Person_sitting', - } - name_to_class = {v: n for n, v in class_to_name.items()} - if not isinstance(current_classes, (list, tuple)): - current_classes = [current_classes] - current_classes_int = [] - for curcls in current_classes: - if isinstance(curcls, str): - current_classes_int.append(name_to_class[curcls]) - else: - current_classes_int.append(curcls) - current_classes = current_classes_int - min_overlaps = min_overlaps[:, :, current_classes] - result = '' - # check whether alpha is valid - compute_aos = False - pred_alpha = False - valid_alpha_gt = False - for anno in dt_annos: - mask = (anno['alpha'] != -10) - if anno['alpha'][mask].shape[0] != 0: - pred_alpha = True - break - for anno in gt_annos: - if anno['alpha'][0] != -10: - valid_alpha_gt = True - break - compute_aos = (pred_alpha and valid_alpha_gt) - if compute_aos: - eval_types.append('aos') - - mAP11_bbox, mAP11_bev, mAP11_3d, mAP11_aos, mAP40_bbox, mAP40_bev, \ - mAP40_3d, mAP40_aos = do_eval(gt_annos, dt_annos, - current_classes, min_overlaps, - eval_types) - - ret_dict = {} - difficulty = ['easy', 'moderate', 'hard'] - - # calculate AP11 - result += '\n----------- AP11 Results ------------\n\n' - for j, curcls in enumerate(current_classes): - # mAP threshold array: [num_minoverlap, metric, class] - # mAP result: [num_class, num_diff, num_minoverlap] - curcls_name = class_to_name[curcls] - for i in range(min_overlaps.shape[0]): - # prepare results for print - result += ('{} AP11@{:.2f}, {:.2f}, {:.2f}:\n'.format( - curcls_name, *min_overlaps[i, :, j])) - if mAP11_bbox is not None: - result += 'bbox AP11:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP11_bbox[j, :, i]) - if mAP11_bev is not None: - result += 'bev AP11:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP11_bev[j, :, i]) - if mAP11_3d is not None: - result += '3d AP11:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP11_3d[j, :, i]) - if compute_aos: - result += 'aos AP11:{:.2f}, {:.2f}, {:.2f}\n'.format( - *mAP11_aos[j, :, i]) - - # prepare results for logger - for idx in range(3): - if i == 0: - postfix = f'{difficulty[idx]}_strict' - else: - postfix = f'{difficulty[idx]}_loose' - prefix = f'KITTI/{curcls_name}' - if mAP11_3d is not None: - ret_dict[f'{prefix}_3D_AP11_{postfix}'] =\ - mAP11_3d[j, idx, i] - if mAP11_bev is not None: - ret_dict[f'{prefix}_BEV_AP11_{postfix}'] =\ - mAP11_bev[j, idx, i] - if mAP11_bbox is not None: - ret_dict[f'{prefix}_2D_AP11_{postfix}'] =\ - mAP11_bbox[j, idx, i] - - # calculate mAP11 over all classes if there are multiple classes - if len(current_classes) > 1: - # prepare results for print - result += ('\nOverall AP11@{}, {}, {}:\n'.format(*difficulty)) - if mAP11_bbox is not None: - mAP11_bbox = mAP11_bbox.mean(axis=0) - result += 'bbox AP11:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP11_bbox[:, 0]) - if mAP11_bev is not None: - mAP11_bev = mAP11_bev.mean(axis=0) - result += 'bev AP11:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP11_bev[:, 0]) - 
if mAP11_3d is not None: - mAP11_3d = mAP11_3d.mean(axis=0) - result += '3d AP11:{:.4f}, {:.4f}, {:.4f}\n'.format(*mAP11_3d[:, - 0]) - if compute_aos: - mAP11_aos = mAP11_aos.mean(axis=0) - result += 'aos AP11:{:.2f}, {:.2f}, {:.2f}\n'.format( - *mAP11_aos[:, 0]) - - # prepare results for logger - for idx in range(3): - postfix = f'{difficulty[idx]}' - if mAP11_3d is not None: - ret_dict[f'KITTI/Overall_3D_AP11_{postfix}'] = mAP11_3d[idx, 0] - if mAP11_bev is not None: - ret_dict[f'KITTI/Overall_BEV_AP11_{postfix}'] =\ - mAP11_bev[idx, 0] - if mAP11_bbox is not None: - ret_dict[f'KITTI/Overall_2D_AP11_{postfix}'] =\ - mAP11_bbox[idx, 0] - - # Calculate AP40 - result += '\n----------- AP40 Results ------------\n\n' - for j, curcls in enumerate(current_classes): - # mAP threshold array: [num_minoverlap, metric, class] - # mAP result: [num_class, num_diff, num_minoverlap] - curcls_name = class_to_name[curcls] - for i in range(min_overlaps.shape[0]): - # prepare results for print - result += ('{} AP40@{:.2f}, {:.2f}, {:.2f}:\n'.format( - curcls_name, *min_overlaps[i, :, j])) - if mAP40_bbox is not None: - result += 'bbox AP40:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP40_bbox[j, :, i]) - if mAP40_bev is not None: - result += 'bev AP40:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP40_bev[j, :, i]) - if mAP40_3d is not None: - result += '3d AP40:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP40_3d[j, :, i]) - if compute_aos: - result += 'aos AP40:{:.2f}, {:.2f}, {:.2f}\n'.format( - *mAP40_aos[j, :, i]) - - # prepare results for logger - for idx in range(3): - if i == 0: - postfix = f'{difficulty[idx]}_strict' - else: - postfix = f'{difficulty[idx]}_loose' - prefix = f'KITTI/{curcls_name}' - if mAP40_3d is not None: - ret_dict[f'{prefix}_3D_AP40_{postfix}'] =\ - mAP40_3d[j, idx, i] - if mAP40_bev is not None: - ret_dict[f'{prefix}_BEV_AP40_{postfix}'] =\ - mAP40_bev[j, idx, i] - if mAP40_bbox is not None: - ret_dict[f'{prefix}_2D_AP40_{postfix}'] =\ - mAP40_bbox[j, idx, i] - - # calculate mAP40 over all classes if there are multiple classes - if len(current_classes) > 1: - # prepare results for print - result += ('\nOverall AP40@{}, {}, {}:\n'.format(*difficulty)) - if mAP40_bbox is not None: - mAP40_bbox = mAP40_bbox.mean(axis=0) - result += 'bbox AP40:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP40_bbox[:, 0]) - if mAP40_bev is not None: - mAP40_bev = mAP40_bev.mean(axis=0) - result += 'bev AP40:{:.4f}, {:.4f}, {:.4f}\n'.format( - *mAP40_bev[:, 0]) - if mAP40_3d is not None: - mAP40_3d = mAP40_3d.mean(axis=0) - result += '3d AP40:{:.4f}, {:.4f}, {:.4f}\n'.format(*mAP40_3d[:, - 0]) - if compute_aos: - mAP40_aos = mAP40_aos.mean(axis=0) - result += 'aos AP40:{:.2f}, {:.2f}, {:.2f}\n'.format( - *mAP40_aos[:, 0]) - - # prepare results for logger - for idx in range(3): - postfix = f'{difficulty[idx]}' - if mAP40_3d is not None: - ret_dict[f'KITTI/Overall_3D_AP40_{postfix}'] = mAP40_3d[idx, 0] - if mAP40_bev is not None: - ret_dict[f'KITTI/Overall_BEV_AP40_{postfix}'] =\ - mAP40_bev[idx, 0] - if mAP40_bbox is not None: - ret_dict[f'KITTI/Overall_2D_AP40_{postfix}'] =\ - mAP40_bbox[idx, 0] - - return result, ret_dict - - -def kitti_eval_coco_style(gt_annos, dt_annos, current_classes): - """coco style evaluation of kitti. - - Args: - gt_annos (list[dict]): Contain gt information of each sample. - dt_annos (list[dict]): Contain detected information of each sample. - current_classes (list[str]): Classes to evaluation. - - Returns: - string: Evaluation results. 
- """ - class_to_name = { - 0: 'Car', - 1: 'Pedestrian', - 2: 'Cyclist', - 3: 'Van', - 4: 'Person_sitting', - } - class_to_range = { - 0: [0.5, 0.95, 10], - 1: [0.25, 0.7, 10], - 2: [0.25, 0.7, 10], - 3: [0.5, 0.95, 10], - 4: [0.25, 0.7, 10], - } - name_to_class = {v: n for n, v in class_to_name.items()} - if not isinstance(current_classes, (list, tuple)): - current_classes = [current_classes] - current_classes_int = [] - for curcls in current_classes: - if isinstance(curcls, str): - current_classes_int.append(name_to_class[curcls]) - else: - current_classes_int.append(curcls) - current_classes = current_classes_int - overlap_ranges = np.zeros([3, 3, len(current_classes)]) - for i, curcls in enumerate(current_classes): - overlap_ranges[:, :, i] = np.array(class_to_range[curcls])[:, - np.newaxis] - result = '' - # check whether alpha is valid - compute_aos = False - for anno in dt_annos: - if anno['alpha'].shape[0] != 0: - if anno['alpha'][0] != -10: - compute_aos = True - break - mAPbbox, mAPbev, mAP3d, mAPaos = do_coco_style_eval( - gt_annos, dt_annos, current_classes, overlap_ranges, compute_aos) - for j, curcls in enumerate(current_classes): - # mAP threshold array: [num_minoverlap, metric, class] - # mAP result: [num_class, num_diff, num_minoverlap] - o_range = np.array(class_to_range[curcls])[[0, 2, 1]] - o_range[1] = (o_range[2] - o_range[0]) / (o_range[1] - 1) - result += print_str((f'{class_to_name[curcls]} ' - 'coco AP@{:.2f}:{:.2f}:{:.2f}:'.format(*o_range))) - result += print_str((f'bbox AP:{mAPbbox[j, 0]:.2f}, ' - f'{mAPbbox[j, 1]:.2f}, ' - f'{mAPbbox[j, 2]:.2f}')) - result += print_str((f'bev AP:{mAPbev[j, 0]:.2f}, ' - f'{mAPbev[j, 1]:.2f}, ' - f'{mAPbev[j, 2]:.2f}')) - result += print_str((f'3d AP:{mAP3d[j, 0]:.2f}, ' - f'{mAP3d[j, 1]:.2f}, ' - f'{mAP3d[j, 2]:.2f}')) - if compute_aos: - result += print_str((f'aos AP:{mAPaos[j, 0]:.2f}, ' - f'{mAPaos[j, 1]:.2f}, ' - f'{mAPaos[j, 2]:.2f}')) - return result diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/rotate_iou.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/rotate_iou.py deleted file mode 100644 index 9ed75bf0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/rotate_iou.py +++ /dev/null @@ -1,379 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-##################### -# Based on https://github.com/hongzhenwang/RRPN-revise -# Licensed under The MIT License -# Author: yanyan, scrin@foxmail.com -##################### -import math - -import numba -import numpy as np -from numba import cuda - - -@numba.jit(nopython=True) -def div_up(m, n): - return m // n + (m % n > 0) - - -@cuda.jit(device=True, inline=True) -def trangle_area(a, b, c): - return ((a[0] - c[0]) * (b[1] - c[1]) - (a[1] - c[1]) * - (b[0] - c[0])) / 2.0 - - -@cuda.jit(device=True, inline=True) -def area(int_pts, num_of_inter): - area_val = 0.0 - for i in range(num_of_inter - 2): - area_val += abs( - trangle_area(int_pts[:2], int_pts[2 * i + 2:2 * i + 4], - int_pts[2 * i + 4:2 * i + 6])) - return area_val - - -@cuda.jit(device=True, inline=True) -def sort_vertex_in_convex_polygon(int_pts, num_of_inter): - if num_of_inter > 0: - center = cuda.local.array((2, ), dtype=numba.float32) - center[:] = 0.0 - for i in range(num_of_inter): - center[0] += int_pts[2 * i] - center[1] += int_pts[2 * i + 1] - center[0] /= num_of_inter - center[1] /= num_of_inter - v = cuda.local.array((2, ), dtype=numba.float32) - vs = cuda.local.array((16, ), dtype=numba.float32) - for i in range(num_of_inter): - v[0] = int_pts[2 * i] - center[0] - v[1] = int_pts[2 * i + 1] - center[1] - d = math.sqrt(v[0] * v[0] + v[1] * v[1]) - v[0] = v[0] / d - v[1] = v[1] / d - if v[1] < 0: - v[0] = -2 - v[0] - vs[i] = v[0] - j = 0 - temp = 0 - for i in range(1, num_of_inter): - if vs[i - 1] > vs[i]: - temp = vs[i] - tx = int_pts[2 * i] - ty = int_pts[2 * i + 1] - j = i - while j > 0 and vs[j - 1] > temp: - vs[j] = vs[j - 1] - int_pts[j * 2] = int_pts[j * 2 - 2] - int_pts[j * 2 + 1] = int_pts[j * 2 - 1] - j -= 1 - - vs[j] = temp - int_pts[j * 2] = tx - int_pts[j * 2 + 1] = ty - - -@cuda.jit(device=True, inline=True) -def line_segment_intersection(pts1, pts2, i, j, temp_pts): - A = cuda.local.array((2, ), dtype=numba.float32) - B = cuda.local.array((2, ), dtype=numba.float32) - C = cuda.local.array((2, ), dtype=numba.float32) - D = cuda.local.array((2, ), dtype=numba.float32) - - A[0] = pts1[2 * i] - A[1] = pts1[2 * i + 1] - - B[0] = pts1[2 * ((i + 1) % 4)] - B[1] = pts1[2 * ((i + 1) % 4) + 1] - - C[0] = pts2[2 * j] - C[1] = pts2[2 * j + 1] - - D[0] = pts2[2 * ((j + 1) % 4)] - D[1] = pts2[2 * ((j + 1) % 4) + 1] - BA0 = B[0] - A[0] - BA1 = B[1] - A[1] - DA0 = D[0] - A[0] - CA0 = C[0] - A[0] - DA1 = D[1] - A[1] - CA1 = C[1] - A[1] - acd = DA1 * CA0 > CA1 * DA0 - bcd = (D[1] - B[1]) * (C[0] - B[0]) > (C[1] - B[1]) * (D[0] - B[0]) - if acd != bcd: - abc = CA1 * BA0 > BA1 * CA0 - abd = DA1 * BA0 > BA1 * DA0 - if abc != abd: - DC0 = D[0] - C[0] - DC1 = D[1] - C[1] - ABBA = A[0] * B[1] - B[0] * A[1] - CDDC = C[0] * D[1] - D[0] * C[1] - DH = BA1 * DC0 - BA0 * DC1 - Dx = ABBA * DC0 - BA0 * CDDC - Dy = ABBA * DC1 - BA1 * CDDC - temp_pts[0] = Dx / DH - temp_pts[1] = Dy / DH - return True - return False - - -@cuda.jit(device=True, inline=True) -def line_segment_intersection_v1(pts1, pts2, i, j, temp_pts): - a = cuda.local.array((2, ), dtype=numba.float32) - b = cuda.local.array((2, ), dtype=numba.float32) - c = cuda.local.array((2, ), dtype=numba.float32) - d = cuda.local.array((2, ), dtype=numba.float32) - - a[0] = pts1[2 * i] - a[1] = pts1[2 * i + 1] - - b[0] = pts1[2 * ((i + 1) % 4)] - b[1] = pts1[2 * ((i + 1) % 4) + 1] - - c[0] = pts2[2 * j] - c[1] = pts2[2 * j + 1] - - d[0] = pts2[2 * ((j + 1) % 4)] - d[1] = pts2[2 * ((j + 1) % 4) + 1] - - area_abc = trangle_area(a, b, c) - area_abd = trangle_area(a, b, d) - - if area_abc * 
area_abd >= 0: - return False - - area_cda = trangle_area(c, d, a) - area_cdb = area_cda + area_abc - area_abd - - if area_cda * area_cdb >= 0: - return False - t = area_cda / (area_abd - area_abc) - - dx = t * (b[0] - a[0]) - dy = t * (b[1] - a[1]) - temp_pts[0] = a[0] + dx - temp_pts[1] = a[1] + dy - return True - - -@cuda.jit(device=True, inline=True) -def point_in_quadrilateral(pt_x, pt_y, corners): - ab0 = corners[2] - corners[0] - ab1 = corners[3] - corners[1] - - ad0 = corners[6] - corners[0] - ad1 = corners[7] - corners[1] - - ap0 = pt_x - corners[0] - ap1 = pt_y - corners[1] - - abab = ab0 * ab0 + ab1 * ab1 - abap = ab0 * ap0 + ab1 * ap1 - adad = ad0 * ad0 + ad1 * ad1 - adap = ad0 * ap0 + ad1 * ap1 - - return abab >= abap and abap >= 0 and adad >= adap and adap >= 0 - - -@cuda.jit(device=True, inline=True) -def quadrilateral_intersection(pts1, pts2, int_pts): - num_of_inter = 0 - for i in range(4): - if point_in_quadrilateral(pts1[2 * i], pts1[2 * i + 1], pts2): - int_pts[num_of_inter * 2] = pts1[2 * i] - int_pts[num_of_inter * 2 + 1] = pts1[2 * i + 1] - num_of_inter += 1 - if point_in_quadrilateral(pts2[2 * i], pts2[2 * i + 1], pts1): - int_pts[num_of_inter * 2] = pts2[2 * i] - int_pts[num_of_inter * 2 + 1] = pts2[2 * i + 1] - num_of_inter += 1 - temp_pts = cuda.local.array((2, ), dtype=numba.float32) - for i in range(4): - for j in range(4): - has_pts = line_segment_intersection(pts1, pts2, i, j, temp_pts) - if has_pts: - int_pts[num_of_inter * 2] = temp_pts[0] - int_pts[num_of_inter * 2 + 1] = temp_pts[1] - num_of_inter += 1 - - return num_of_inter - - -@cuda.jit(device=True, inline=True) -def rbbox_to_corners(corners, rbbox): - # generate clockwise corners and rotate it clockwise - angle = rbbox[4] - a_cos = math.cos(angle) - a_sin = math.sin(angle) - center_x = rbbox[0] - center_y = rbbox[1] - x_d = rbbox[2] - y_d = rbbox[3] - corners_x = cuda.local.array((4, ), dtype=numba.float32) - corners_y = cuda.local.array((4, ), dtype=numba.float32) - corners_x[0] = -x_d / 2 - corners_x[1] = -x_d / 2 - corners_x[2] = x_d / 2 - corners_x[3] = x_d / 2 - corners_y[0] = -y_d / 2 - corners_y[1] = y_d / 2 - corners_y[2] = y_d / 2 - corners_y[3] = -y_d / 2 - for i in range(4): - corners[2 * i] = a_cos * corners_x[i] + a_sin * corners_y[i] + center_x - corners[2 * i + - 1] = -a_sin * corners_x[i] + a_cos * corners_y[i] + center_y - - -@cuda.jit(device=True, inline=True) -def inter(rbbox1, rbbox2): - """Compute intersection of two rotated boxes. - - Args: - rbox1 (np.ndarray, shape=[5]): Rotated 2d box. - rbox2 (np.ndarray, shape=[5]): Rotated 2d box. - - Returns: - float: Intersection of two rotated boxes. - """ - corners1 = cuda.local.array((8, ), dtype=numba.float32) - corners2 = cuda.local.array((8, ), dtype=numba.float32) - intersection_corners = cuda.local.array((16, ), dtype=numba.float32) - - rbbox_to_corners(corners1, rbbox1) - rbbox_to_corners(corners2, rbbox2) - - num_intersection = quadrilateral_intersection(corners1, corners2, - intersection_corners) - sort_vertex_in_convex_polygon(intersection_corners, num_intersection) - # print(intersection_corners.reshape([-1, 2])[:num_intersection]) - - return area(intersection_corners, num_intersection) - - -@cuda.jit(device=True, inline=True) -def devRotateIoUEval(rbox1, rbox2, criterion=-1): - """Compute rotated iou on device. - - Args: - rbox1 (np.ndarray, shape=[5]): Rotated 2d box. - rbox2 (np.ndarray, shape=[5]): Rotated 2d box. - criterion (int, optional): Indicate different type of iou. 
- -1 indicate `area_inter / (area1 + area2 - area_inter)`, - 0 indicate `area_inter / area1`, - 1 indicate `area_inter / area2`. - - Returns: - float: iou between two input boxes. - """ - area1 = rbox1[2] * rbox1[3] - area2 = rbox2[2] * rbox2[3] - area_inter = inter(rbox1, rbox2) - if criterion == -1: - return area_inter / (area1 + area2 - area_inter) - elif criterion == 0: - return area_inter / area1 - elif criterion == 1: - return area_inter / area2 - else: - return area_inter - - -@cuda.jit( - '(int64, int64, float32[:], float32[:], float32[:], int32)', - fastmath=False) -def rotate_iou_kernel_eval(N, - K, - dev_boxes, - dev_query_boxes, - dev_iou, - criterion=-1): - """Kernel of computing rotated IoU. This function is for bev boxes in - camera coordinate system ONLY (the rotation is clockwise). - - Args: - N (int): The number of boxes. - K (int): The number of query boxes. - dev_boxes (np.ndarray): Boxes on device. - dev_query_boxes (np.ndarray): Query boxes on device. - dev_iou (np.ndarray): Computed iou to return. - criterion (int, optional): Indicate different type of iou. - -1 indicate `area_inter / (area1 + area2 - area_inter)`, - 0 indicate `area_inter / area1`, - 1 indicate `area_inter / area2`. - """ - threadsPerBlock = 8 * 8 - row_start = cuda.blockIdx.x - col_start = cuda.blockIdx.y - tx = cuda.threadIdx.x - row_size = min(N - row_start * threadsPerBlock, threadsPerBlock) - col_size = min(K - col_start * threadsPerBlock, threadsPerBlock) - block_boxes = cuda.shared.array(shape=(64 * 5, ), dtype=numba.float32) - block_qboxes = cuda.shared.array(shape=(64 * 5, ), dtype=numba.float32) - - dev_query_box_idx = threadsPerBlock * col_start + tx - dev_box_idx = threadsPerBlock * row_start + tx - if (tx < col_size): - block_qboxes[tx * 5 + 0] = dev_query_boxes[dev_query_box_idx * 5 + 0] - block_qboxes[tx * 5 + 1] = dev_query_boxes[dev_query_box_idx * 5 + 1] - block_qboxes[tx * 5 + 2] = dev_query_boxes[dev_query_box_idx * 5 + 2] - block_qboxes[tx * 5 + 3] = dev_query_boxes[dev_query_box_idx * 5 + 3] - block_qboxes[tx * 5 + 4] = dev_query_boxes[dev_query_box_idx * 5 + 4] - if (tx < row_size): - block_boxes[tx * 5 + 0] = dev_boxes[dev_box_idx * 5 + 0] - block_boxes[tx * 5 + 1] = dev_boxes[dev_box_idx * 5 + 1] - block_boxes[tx * 5 + 2] = dev_boxes[dev_box_idx * 5 + 2] - block_boxes[tx * 5 + 3] = dev_boxes[dev_box_idx * 5 + 3] - block_boxes[tx * 5 + 4] = dev_boxes[dev_box_idx * 5 + 4] - cuda.syncthreads() - if tx < row_size: - for i in range(col_size): - offset = ( - row_start * threadsPerBlock * K + col_start * threadsPerBlock + - tx * K + i) - dev_iou[offset] = devRotateIoUEval(block_qboxes[i * 5:i * 5 + 5], - block_boxes[tx * 5:tx * 5 + 5], - criterion) - - -def rotate_iou_gpu_eval(boxes, query_boxes, criterion=-1, device_id=0): - """Rotated box iou running in gpu. 500x faster than cpu version (take 5ms - in one example with numba.cuda code). convert from [this project]( - https://github.com/hongzhenwang/RRPN-revise/tree/master/lib/rotation). - - This function is for bev boxes in camera coordinate system ONLY - (the rotation is clockwise). - - Args: - boxes (torch.Tensor): rbboxes. format: centers, dims, - angles(clockwise when positive) with the shape of [N, 5]. - query_boxes (torch.FloatTensor, shape=(K, 5)): - rbboxes to compute iou with boxes. - device_id (int, optional): Defaults to 0. Device to use. - criterion (int, optional): Indicate different type of iou. 
- -1 indicate `area_inter / (area1 + area2 - area_inter)`, - 0 indicate `area_inter / area1`, - 1 indicate `area_inter / area2`. - - Returns: - np.ndarray: IoU results. - """ - boxes = boxes.astype(np.float32) - query_boxes = query_boxes.astype(np.float32) - N = boxes.shape[0] - K = query_boxes.shape[0] - iou = np.zeros((N, K), dtype=np.float32) - if N == 0 or K == 0: - return iou - threadsPerBlock = 8 * 8 - cuda.select_device(device_id) - blockspergrid = (div_up(N, threadsPerBlock), div_up(K, threadsPerBlock)) - - stream = cuda.stream() - with stream.auto_synchronize(): - boxes_dev = cuda.to_device(boxes.reshape([-1]), stream) - query_boxes_dev = cuda.to_device(query_boxes.reshape([-1]), stream) - iou_dev = cuda.to_device(iou.reshape([-1]), stream) - rotate_iou_kernel_eval[blockspergrid, threadsPerBlock, - stream](N, K, boxes_dev, query_boxes_dev, - iou_dev, criterion) - iou_dev.copy_to_host(iou.reshape([-1]), stream=stream) - return iou.astype(boxes.dtype) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/lyft_eval.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/lyft_eval.py deleted file mode 100644 index 47c5cd6a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/lyft_eval.py +++ /dev/null @@ -1,285 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -import mmcv -import numpy as np -from lyft_dataset_sdk.eval.detection.mAP_evaluation import (Box3D, get_ap, - get_class_names, - get_ious, - group_by_key, - wrap_in_box) -from mmcv.utils import print_log -from terminaltables import AsciiTable - - -def load_lyft_gts(lyft, data_root, eval_split, logger=None): - """Loads ground truth boxes from database. - - Args: - lyft (:obj:`LyftDataset`): Lyft class in the sdk. - data_root (str): Root of data for reading splits. - eval_split (str): Name of the split for evaluation. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - - Returns: - list[dict]: List of annotation dictionaries. - """ - split_scenes = mmcv.list_from_file( - osp.join(data_root, f'{eval_split}.txt')) - - # Read out all sample_tokens in DB. - sample_tokens_all = [s['token'] for s in lyft.sample] - assert len(sample_tokens_all) > 0, 'Error: Database has no samples!' - - if eval_split == 'test': - # Check that you aren't trying to cheat :) - assert len(lyft.sample_annotation) > 0, \ - 'Error: You are trying to evaluate on the test set \ - but you do not have the annotations!' - - sample_tokens = [] - for sample_token in sample_tokens_all: - scene_token = lyft.get('sample', sample_token)['scene_token'] - scene_record = lyft.get('scene', scene_token) - if scene_record['name'] in split_scenes: - sample_tokens.append(sample_token) - - all_annotations = [] - - print_log('Loading ground truth annotations...', logger=logger) - # Load annotations and filter predictions and annotations. - for sample_token in mmcv.track_iter_progress(sample_tokens): - sample = lyft.get('sample', sample_token) - sample_annotation_tokens = sample['anns'] - for sample_annotation_token in sample_annotation_tokens: - # Get label name in detection task and filter unused labels. 
- sample_annotation = \ - lyft.get('sample_annotation', sample_annotation_token) - detection_name = sample_annotation['category_name'] - if detection_name is None: - continue - annotation = { - 'sample_token': sample_token, - 'translation': sample_annotation['translation'], - 'size': sample_annotation['size'], - 'rotation': sample_annotation['rotation'], - 'name': detection_name, - } - all_annotations.append(annotation) - - return all_annotations - - -def load_lyft_predictions(res_path): - """Load Lyft predictions from json file. - - Args: - res_path (str): Path of result json file recording detections. - - Returns: - list[dict]: List of prediction dictionaries. - """ - predictions = mmcv.load(res_path) - predictions = predictions['results'] - all_preds = [] - for sample_token in predictions.keys(): - all_preds.extend(predictions[sample_token]) - return all_preds - - -def lyft_eval(lyft, data_root, res_path, eval_set, output_dir, logger=None): - """Evaluation API for Lyft dataset. - - Args: - lyft (:obj:`LyftDataset`): Lyft class in the sdk. - data_root (str): Root of data for reading splits. - res_path (str): Path of result json file recording detections. - eval_set (str): Name of the split for evaluation. - output_dir (str): Output directory for output json files. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str, float]: The evaluation results. - """ - # evaluate by lyft metrics - gts = load_lyft_gts(lyft, data_root, eval_set, logger) - predictions = load_lyft_predictions(res_path) - - class_names = get_class_names(gts) - print('Calculating mAP@0.5:0.95...') - - iou_thresholds = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95] - metrics = {} - average_precisions = \ - get_classwise_aps(gts, predictions, class_names, iou_thresholds) - APs_data = [['IOU', 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]] - - mAPs = np.mean(average_precisions, axis=0) - mAPs_cate = np.mean(average_precisions, axis=1) - final_mAP = np.mean(mAPs) - - metrics['average_precisions'] = average_precisions.tolist() - metrics['mAPs'] = mAPs.tolist() - metrics['Final mAP'] = float(final_mAP) - metrics['class_names'] = class_names - metrics['mAPs_cate'] = mAPs_cate.tolist() - - APs_data = [['class', 'mAP@0.5:0.95']] - for i in range(len(class_names)): - row = [class_names[i], round(mAPs_cate[i], 3)] - APs_data.append(row) - APs_data.append(['Overall', round(final_mAP, 3)]) - APs_table = AsciiTable(APs_data, title='mAPs@0.5:0.95') - APs_table.inner_footing_row_border = True - print_log(APs_table.table, logger=logger) - - res_path = osp.join(output_dir, 'lyft_metrics.json') - mmcv.dump(metrics, res_path) - return metrics - - -def get_classwise_aps(gt, predictions, class_names, iou_thresholds): - """Returns an array with an average precision per class. - - Note: Ground truth and predictions should have the following format. - - .. 
code-block:: - - gt = [{ - 'sample_token': '0f0e3ce89d2324d8b45aa55a7b4f8207 - fbb039a550991a5149214f98cec136ac', - 'translation': [974.2811881299899, 1714.6815014457964, - -23.689857123368846], - 'size': [1.796, 4.488, 1.664], - 'rotation': [0.14882026466054782, 0, 0, 0.9888642620837121], - 'name': 'car' - }] - - predictions = [{ - 'sample_token': '0f0e3ce89d2324d8b45aa55a7b4f8207 - fbb039a550991a5149214f98cec136ac', - 'translation': [971.8343488872263, 1713.6816097857359, - -25.82534357061308], - 'size': [2.519726579986132, 7.810161372666739, 3.483438286096803], - 'rotation': [0.10913582721095375, 0.04099572636992043, - 0.01927712319721745, 1.029328402625659], - 'name': 'car', - 'score': 0.3077029437237213 - }] - - Args: - gt (list[dict]): list of dictionaries in the format described below. - predictions (list[dict]): list of dictionaries in the format - described below. - class_names (list[str]): list of the class names. - iou_thresholds (list[float]): IOU thresholds used to calculate - TP / FN - - Returns: - np.ndarray: an array with an average precision per class. - """ - assert all([0 <= iou_th <= 1 for iou_th in iou_thresholds]) - - gt_by_class_name = group_by_key(gt, 'name') - pred_by_class_name = group_by_key(predictions, 'name') - - average_precisions = np.zeros((len(class_names), len(iou_thresholds))) - - for class_id, class_name in enumerate(class_names): - if class_name in pred_by_class_name: - recalls, precisions, average_precision = get_single_class_aps( - gt_by_class_name[class_name], pred_by_class_name[class_name], - iou_thresholds) - average_precisions[class_id, :] = average_precision - - return average_precisions - - -def get_single_class_aps(gt, predictions, iou_thresholds): - """Compute recall and precision for all iou thresholds. Adapted from - LyftDatasetDevkit. - - Args: - gt (list[dict]): list of dictionaries in the format described above. - predictions (list[dict]): list of dictionaries in the format - described below. - iou_thresholds (list[float]): IOU thresholds used to calculate - TP / FN - - Returns: - tuple[np.ndarray]: Returns (recalls, precisions, average precisions) - for each class. 
- """ - num_gts = len(gt) - image_gts = group_by_key(gt, 'sample_token') - image_gts = wrap_in_box(image_gts) - - sample_gt_checked = { - sample_token: np.zeros((len(boxes), len(iou_thresholds))) - for sample_token, boxes in image_gts.items() - } - - predictions = sorted(predictions, key=lambda x: x['score'], reverse=True) - - # go down dets and mark TPs and FPs - num_predictions = len(predictions) - tps = np.zeros((num_predictions, len(iou_thresholds))) - fps = np.zeros((num_predictions, len(iou_thresholds))) - - for prediction_index, prediction in enumerate(predictions): - predicted_box = Box3D(**prediction) - - sample_token = prediction['sample_token'] - - max_overlap = -np.inf - jmax = -1 - - if sample_token in image_gts: - gt_boxes = image_gts[sample_token] - # gt_boxes per sample - gt_checked = sample_gt_checked[sample_token] - # gt flags per sample - else: - gt_boxes = [] - gt_checked = None - - if len(gt_boxes) > 0: - overlaps = get_ious(gt_boxes, predicted_box) - - max_overlap = np.max(overlaps) - - jmax = np.argmax(overlaps) - - for i, iou_threshold in enumerate(iou_thresholds): - if max_overlap > iou_threshold: - if gt_checked[jmax, i] == 0: - tps[prediction_index, i] = 1.0 - gt_checked[jmax, i] = 1 - else: - fps[prediction_index, i] = 1.0 - else: - fps[prediction_index, i] = 1.0 - - # compute precision recall - fps = np.cumsum(fps, axis=0) - tps = np.cumsum(tps, axis=0) - - recalls = tps / float(num_gts) - # avoid divide by zero in case the first detection - # matches a difficult ground truth - precisions = tps / np.maximum(tps + fps, np.finfo(np.float64).eps) - - aps = [] - for i in range(len(iou_thresholds)): - recall = recalls[:, i] - precision = precisions[:, i] - assert np.all(0 <= recall) & np.all(recall <= 1) - assert np.all(0 <= precision) & np.all(precision <= 1) - ap = get_ap(recall, precision) - aps.append(ap) - - aps = np.array(aps) - - return recalls, precisions, aps diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/scannet_utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/scannet_utils/__init__.py deleted file mode 100644 index c98ea835..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/scannet_utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .evaluate_semantic_instance import evaluate_matches, scannet_eval - -__all__ = ['scannet_eval', 'evaluate_matches'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/scannet_utils/evaluate_semantic_instance.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/scannet_utils/evaluate_semantic_instance.py deleted file mode 100644 index e4b94395..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/scannet_utils/evaluate_semantic_instance.py +++ /dev/null @@ -1,347 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# adapted from https://github.com/ScanNet/ScanNet/blob/master/BenchmarkScripts/3d_evaluation/evaluate_semantic_instance.py # noqa -from copy import deepcopy - -import numpy as np - -from . import util_3d - - -def evaluate_matches(matches, class_labels, options): - """Evaluate instance segmentation from matched gt and predicted instances - for all scenes. - - Args: - matches (dict): Contains gt2pred and pred2gt infos for every scene. - class_labels (tuple[str]): Class names. - options (dict): ScanNet evaluator options. See get_options. 
- - Returns: - np.array: Average precision scores for all thresholds and categories. - """ - overlaps = options['overlaps'] - min_region_sizes = [options['min_region_sizes'][0]] - dist_threshes = [options['distance_threshes'][0]] - dist_confs = [options['distance_confs'][0]] - - # results: class x overlap - ap = np.zeros((len(dist_threshes), len(class_labels), len(overlaps)), - np.float) - for di, (min_region_size, distance_thresh, distance_conf) in enumerate( - zip(min_region_sizes, dist_threshes, dist_confs)): - for oi, overlap_th in enumerate(overlaps): - pred_visited = {} - for m in matches: - for label_name in class_labels: - for p in matches[m]['pred'][label_name]: - if 'filename' in p: - pred_visited[p['filename']] = False - for li, label_name in enumerate(class_labels): - y_true = np.empty(0) - y_score = np.empty(0) - hard_false_negatives = 0 - has_gt = False - has_pred = False - for m in matches: - pred_instances = matches[m]['pred'][label_name] - gt_instances = matches[m]['gt'][label_name] - # filter groups in ground truth - gt_instances = [ - gt for gt in gt_instances - if gt['instance_id'] >= 1000 and gt['vert_count'] >= - min_region_size and gt['med_dist'] <= distance_thresh - and gt['dist_conf'] >= distance_conf - ] - if gt_instances: - has_gt = True - if pred_instances: - has_pred = True - - cur_true = np.ones(len(gt_instances)) - cur_score = np.ones(len(gt_instances)) * (-float('inf')) - cur_match = np.zeros(len(gt_instances), dtype=np.bool) - # collect matches - for (gti, gt) in enumerate(gt_instances): - found_match = False - for pred in gt['matched_pred']: - # greedy assignments - if pred_visited[pred['filename']]: - continue - overlap = float(pred['intersection']) / ( - gt['vert_count'] + pred['vert_count'] - - pred['intersection']) - if overlap > overlap_th: - confidence = pred['confidence'] - # if already have a prediction for this gt, - # the prediction with the lower score is automatically a false positive # noqa - if cur_match[gti]: - max_score = max(cur_score[gti], confidence) - min_score = min(cur_score[gti], confidence) - cur_score[gti] = max_score - # append false positive - cur_true = np.append(cur_true, 0) - cur_score = np.append(cur_score, min_score) - cur_match = np.append(cur_match, True) - # otherwise set score - else: - found_match = True - cur_match[gti] = True - cur_score[gti] = confidence - pred_visited[pred['filename']] = True - if not found_match: - hard_false_negatives += 1 - # remove non-matched ground truth instances - cur_true = cur_true[cur_match] - cur_score = cur_score[cur_match] - - # collect non-matched predictions as false positive - for pred in pred_instances: - found_gt = False - for gt in pred['matched_gt']: - overlap = float(gt['intersection']) / ( - gt['vert_count'] + pred['vert_count'] - - gt['intersection']) - if overlap > overlap_th: - found_gt = True - break - if not found_gt: - num_ignore = pred['void_intersection'] - for gt in pred['matched_gt']: - # group? 
- if gt['instance_id'] < 1000: - num_ignore += gt['intersection'] - # small ground truth instances - if gt['vert_count'] < min_region_size or gt[ - 'med_dist'] > distance_thresh or gt[ - 'dist_conf'] < distance_conf: - num_ignore += gt['intersection'] - proportion_ignore = float( - num_ignore) / pred['vert_count'] - # if not ignored append false positive - if proportion_ignore <= overlap_th: - cur_true = np.append(cur_true, 0) - confidence = pred['confidence'] - cur_score = np.append(cur_score, confidence) - - # append to overall results - y_true = np.append(y_true, cur_true) - y_score = np.append(y_score, cur_score) - - # compute average precision - if has_gt and has_pred: - # compute precision recall curve first - - # sorting and cumsum - score_arg_sort = np.argsort(y_score) - y_score_sorted = y_score[score_arg_sort] - y_true_sorted = y_true[score_arg_sort] - y_true_sorted_cumsum = np.cumsum(y_true_sorted) - - # unique thresholds - (thresholds, unique_indices) = np.unique( - y_score_sorted, return_index=True) - num_prec_recall = len(unique_indices) + 1 - - # prepare precision recall - num_examples = len(y_score_sorted) - # follow https://github.com/ScanNet/ScanNet/pull/26 ? # noqa - num_true_examples = y_true_sorted_cumsum[-1] if len( - y_true_sorted_cumsum) > 0 else 0 - precision = np.zeros(num_prec_recall) - recall = np.zeros(num_prec_recall) - - # deal with the first point - y_true_sorted_cumsum = np.append(y_true_sorted_cumsum, 0) - # deal with remaining - for idx_res, idx_scores in enumerate(unique_indices): - cumsum = y_true_sorted_cumsum[idx_scores - 1] - tp = num_true_examples - cumsum - fp = num_examples - idx_scores - tp - fn = cumsum + hard_false_negatives - p = float(tp) / (tp + fp) - r = float(tp) / (tp + fn) - precision[idx_res] = p - recall[idx_res] = r - - # first point in curve is artificial - precision[-1] = 1. - recall[-1] = 0. - - # compute average of precision-recall curve - recall_for_conv = np.copy(recall) - recall_for_conv = np.append(recall_for_conv[0], - recall_for_conv) - recall_for_conv = np.append(recall_for_conv, 0.) - - stepWidths = np.convolve(recall_for_conv, [-0.5, 0, 0.5], - 'valid') - # integrate is now simply a dot product - ap_current = np.dot(precision, stepWidths) - - elif has_gt: - ap_current = 0.0 - else: - ap_current = float('nan') - ap[di, li, oi] = ap_current - return ap - - -def compute_averages(aps, options, class_labels): - """Averages AP scores for all categories. - - Args: - aps (np.array): AP scores for all thresholds and categories. - options (dict): ScanNet evaluator options. See get_options. - class_labels (tuple[str]): Class names. - - Returns: - dict: Overall and per-category AP scores. 
- """ - d_inf = 0 - o50 = np.where(np.isclose(options['overlaps'], 0.5)) - o25 = np.where(np.isclose(options['overlaps'], 0.25)) - o_all_but25 = np.where( - np.logical_not(np.isclose(options['overlaps'], 0.25))) - avg_dict = {} - avg_dict['all_ap'] = np.nanmean(aps[d_inf, :, o_all_but25]) - avg_dict['all_ap_50%'] = np.nanmean(aps[d_inf, :, o50]) - avg_dict['all_ap_25%'] = np.nanmean(aps[d_inf, :, o25]) - avg_dict['classes'] = {} - for (li, label_name) in enumerate(class_labels): - avg_dict['classes'][label_name] = {} - avg_dict['classes'][label_name]['ap'] = np.average(aps[d_inf, li, - o_all_but25]) - avg_dict['classes'][label_name]['ap50%'] = np.average(aps[d_inf, li, - o50]) - avg_dict['classes'][label_name]['ap25%'] = np.average(aps[d_inf, li, - o25]) - return avg_dict - - -def assign_instances_for_scan(pred_info, gt_ids, options, valid_class_ids, - class_labels, id_to_label): - """Assign gt and predicted instances for a single scene. - - Args: - pred_info (dict): Predicted masks, labels and scores. - gt_ids (np.array): Ground truth instance masks. - options (dict): ScanNet evaluator options. See get_options. - valid_class_ids (tuple[int]): Ids of valid categories. - class_labels (tuple[str]): Class names. - id_to_label (dict[int, str]): Mapping of valid class id to class label. - - Returns: - dict: Per class assigned gt to predicted instances. - dict: Per class assigned predicted to gt instances. - """ - # get gt instances - gt_instances = util_3d.get_instances(gt_ids, valid_class_ids, class_labels, - id_to_label) - # associate - gt2pred = deepcopy(gt_instances) - for label in gt2pred: - for gt in gt2pred[label]: - gt['matched_pred'] = [] - pred2gt = {} - for label in class_labels: - pred2gt[label] = [] - num_pred_instances = 0 - # mask of void labels in the ground truth - bool_void = np.logical_not(np.in1d(gt_ids // 1000, valid_class_ids)) - # go through all prediction masks - for pred_mask_file in pred_info: - label_id = int(pred_info[pred_mask_file]['label_id']) - conf = pred_info[pred_mask_file]['conf'] - if not label_id in id_to_label: # noqa E713 - continue - label_name = id_to_label[label_id] - # read the mask - pred_mask = pred_info[pred_mask_file]['mask'] - if len(pred_mask) != len(gt_ids): - raise ValueError('len(pred_mask) != len(gt_ids)') - # convert to binary - pred_mask = np.not_equal(pred_mask, 0) - num = np.count_nonzero(pred_mask) - if num < options['min_region_sizes'][0]: - continue # skip if empty - - pred_instance = {} - pred_instance['filename'] = pred_mask_file - pred_instance['pred_id'] = num_pred_instances - pred_instance['label_id'] = label_id - pred_instance['vert_count'] = num - pred_instance['confidence'] = conf - pred_instance['void_intersection'] = np.count_nonzero( - np.logical_and(bool_void, pred_mask)) - - # matched gt instances - matched_gt = [] - # go through all gt instances with matching label - for (gt_num, gt_inst) in enumerate(gt2pred[label_name]): - intersection = np.count_nonzero( - np.logical_and(gt_ids == gt_inst['instance_id'], pred_mask)) - if intersection > 0: - gt_copy = gt_inst.copy() - pred_copy = pred_instance.copy() - gt_copy['intersection'] = intersection - pred_copy['intersection'] = intersection - matched_gt.append(gt_copy) - gt2pred[label_name][gt_num]['matched_pred'].append(pred_copy) - pred_instance['matched_gt'] = matched_gt - num_pred_instances += 1 - pred2gt[label_name].append(pred_instance) - - return gt2pred, pred2gt - - -def scannet_eval(preds, gts, options, valid_class_ids, class_labels, - id_to_label): - """Evaluate 
instance segmentation in ScanNet protocol. - - Args: - preds (list[dict]): Per scene predictions of mask, label and - confidence. - gts (list[np.array]): Per scene ground truth instance masks. - options (dict): ScanNet evaluator options. See get_options. - valid_class_ids (tuple[int]): Ids of valid categories. - class_labels (tuple[str]): Class names. - id_to_label (dict[int, str]): Mapping of valid class id to class label. - - Returns: - dict: Overall and per-category AP scores. - """ - options = get_options(options) - matches = {} - for i, (pred, gt) in enumerate(zip(preds, gts)): - matches_key = i - # assign gt to predictions - gt2pred, pred2gt = assign_instances_for_scan(pred, gt, options, - valid_class_ids, - class_labels, id_to_label) - matches[matches_key] = {} - matches[matches_key]['gt'] = gt2pred - matches[matches_key]['pred'] = pred2gt - - ap_scores = evaluate_matches(matches, class_labels, options) - avgs = compute_averages(ap_scores, options, class_labels) - return avgs - - -def get_options(options=None): - """Set ScanNet evaluator options. - - Args: - options (dict, optional): Not default options. Default: None. - - Returns: - dict: Updated options with all 4 keys. - """ - assert options is None or isinstance(options, dict) - _options = dict( - overlaps=np.append(np.arange(0.5, 0.95, 0.05), 0.25), - min_region_sizes=np.array([100]), - distance_threshes=np.array([float('inf')]), - distance_confs=np.array([-float('inf')])) - if options is not None: - _options.update(options) - return _options diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/scannet_utils/util_3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/scannet_utils/util_3d.py deleted file mode 100644 index 527d3412..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/scannet_utils/util_3d.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# adapted from https://github.com/ScanNet/ScanNet/blob/master/BenchmarkScripts/util_3d.py # noqa -import json - -import numpy as np - - -class Instance: - """Single instance for ScanNet evaluator. - - Args: - mesh_vert_instances (np.array): Instance ids for each point. - instance_id: Id of single instance. 
- """ - instance_id = 0 - label_id = 0 - vert_count = 0 - med_dist = -1 - dist_conf = 0.0 - - def __init__(self, mesh_vert_instances, instance_id): - if instance_id == -1: - return - self.instance_id = int(instance_id) - self.label_id = int(self.get_label_id(instance_id)) - self.vert_count = int( - self.get_instance_verts(mesh_vert_instances, instance_id)) - - @staticmethod - def get_label_id(instance_id): - return int(instance_id // 1000) - - @staticmethod - def get_instance_verts(mesh_vert_instances, instance_id): - return (mesh_vert_instances == instance_id).sum() - - def to_json(self): - return json.dumps( - self, default=lambda o: o.__dict__, sort_keys=True, indent=4) - - def to_dict(self): - dict = {} - dict['instance_id'] = self.instance_id - dict['label_id'] = self.label_id - dict['vert_count'] = self.vert_count - dict['med_dist'] = self.med_dist - dict['dist_conf'] = self.dist_conf - return dict - - def from_json(self, data): - self.instance_id = int(data['instance_id']) - self.label_id = int(data['label_id']) - self.vert_count = int(data['vert_count']) - if 'med_dist' in data: - self.med_dist = float(data['med_dist']) - self.dist_conf = float(data['dist_conf']) - - def __str__(self): - return '(' + str(self.instance_id) + ')' - - -def get_instances(ids, class_ids, class_labels, id2label): - """Transform gt instance mask to Instance objects. - - Args: - ids (np.array): Instance ids for each point. - class_ids: (tuple[int]): Ids of valid categories. - class_labels (tuple[str]): Class names. - id2label: (dict[int, str]): Mapping of valid class id to class label. - - Returns: - dict [str, list]: Instance objects grouped by class label. - """ - instances = {} - for label in class_labels: - instances[label] = [] - instance_ids = np.unique(ids) - for id in instance_ids: - if id == 0: - continue - inst = Instance(ids, id) - if inst.label_id in class_ids: - instances[id2label[inst.label_id]].append(inst.to_dict()) - return instances diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/seg_eval.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/seg_eval.py deleted file mode 100644 index 4a3166d6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/seg_eval.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - - -def fast_hist(preds, labels, num_classes): - """Compute the confusion matrix for every batch. - - Args: - preds (np.ndarray): Prediction labels of points with shape of - (num_points, ). - labels (np.ndarray): Ground truth labels of points with shape of - (num_points, ). - num_classes (int): number of classes - - Returns: - np.ndarray: Calculated confusion matrix. - """ - - k = (labels >= 0) & (labels < num_classes) - bin_count = np.bincount( - num_classes * labels[k].astype(int) + preds[k], - minlength=num_classes**2) - return bin_count[:num_classes**2].reshape(num_classes, num_classes) - - -def per_class_iou(hist): - """Compute the per class iou. - - Args: - hist(np.ndarray): Overall confusion martix - (num_classes, num_classes ). - - Returns: - np.ndarray: Calculated per class iou - """ - - return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist)) - - -def get_acc(hist): - """Compute the overall accuracy. - - Args: - hist(np.ndarray): Overall confusion martix - (num_classes, num_classes ). 
- - Returns: - float: Calculated overall acc - """ - - return np.diag(hist).sum() / hist.sum() - - -def get_acc_cls(hist): - """Compute the class average accuracy. - - Args: - hist(np.ndarray): Overall confusion martix - (num_classes, num_classes ). - - Returns: - float: Calculated class average acc - """ - - return np.nanmean(np.diag(hist) / hist.sum(axis=1)) - - -def seg_eval(gt_labels, seg_preds, label2cat, ignore_index, logger=None): - """Semantic Segmentation Evaluation. - - Evaluate the result of the Semantic Segmentation. - - Args: - gt_labels (list[torch.Tensor]): Ground truth labels. - seg_preds (list[torch.Tensor]): Predictions. - label2cat (dict): Map from label to category name. - ignore_index (int): Index that will be ignored in evaluation. - logger (logging.Logger | str, optional): The way to print the mAP - summary. See `mmdet.utils.print_log()` for details. Default: None. - - Returns: - dict[str, float]: Dict of results. - """ - assert len(seg_preds) == len(gt_labels) - num_classes = len(label2cat) - - hist_list = [] - for i in range(len(gt_labels)): - gt_seg = gt_labels[i].clone().numpy().astype(np.int) - pred_seg = seg_preds[i].clone().numpy().astype(np.int) - - # filter out ignored points - pred_seg[gt_seg == ignore_index] = -1 - gt_seg[gt_seg == ignore_index] = -1 - - # calculate one instance result - hist_list.append(fast_hist(pred_seg, gt_seg, num_classes)) - - iou = per_class_iou(sum(hist_list)) - miou = np.nanmean(iou) - acc = get_acc(sum(hist_list)) - acc_cls = get_acc_cls(sum(hist_list)) - - header = ['classes'] - for i in range(len(label2cat)): - header.append(label2cat[i]) - header.extend(['miou', 'acc', 'acc_cls']) - - ret_dict = dict() - table_columns = [['results']] - for i in range(len(label2cat)): - ret_dict[label2cat[i]] = float(iou[i]) - table_columns.append([f'{iou[i]:.4f}']) - ret_dict['miou'] = float(miou) - ret_dict['acc'] = float(acc) - ret_dict['acc_cls'] = float(acc_cls) - - table_columns.append([f'{miou:.4f}']) - table_columns.append([f'{acc:.4f}']) - table_columns.append([f'{acc_cls:.4f}']) - - table_data = [header] - table_rows = list(zip(*table_columns)) - table_data += table_rows - table = AsciiTable(table_data) - table.inner_footing_row_border = True - print_log('\n' + table.table, logger=logger) - - return ret_dict diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/waymo_utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/waymo_utils/__init__.py deleted file mode 100644 index 72d3a9bd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/waymo_utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .prediction_kitti_to_waymo import KITTI2Waymo - -__all__ = ['KITTI2Waymo'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/waymo_utils/prediction_kitti_to_waymo.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/waymo_utils/prediction_kitti_to_waymo.py deleted file mode 100644 index 205c24cb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/evaluation/waymo_utils/prediction_kitti_to_waymo.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -r"""Adapted from `Waymo to KITTI converter - `_. 
-""" - -try: - from waymo_open_dataset import dataset_pb2 as open_dataset -except ImportError: - raise ImportError( - 'Please run "pip install waymo-open-dataset-tf-2-1-0==1.2.0" ' - 'to install the official devkit first.') - -from glob import glob -from os.path import join - -import mmcv -import numpy as np -import tensorflow as tf -from waymo_open_dataset import label_pb2 -from waymo_open_dataset.protos import metrics_pb2 - - -class KITTI2Waymo(object): - """KITTI predictions to Waymo converter. - - This class serves as the converter to change predictions from KITTI to - Waymo format. - - Args: - kitti_result_files (list[dict]): Predictions in KITTI format. - waymo_tfrecords_dir (str): Directory to load waymo raw data. - waymo_results_save_dir (str): Directory to save converted predictions - in waymo format (.bin files). - waymo_results_final_path (str): Path to save combined - predictions in waymo format (.bin file), like 'a/b/c.bin'. - prefix (str): Prefix of filename. In general, 0 for training, 1 for - validation and 2 for testing. - workers (str): Number of parallel processes. - """ - - def __init__(self, - kitti_result_files, - waymo_tfrecords_dir, - waymo_results_save_dir, - waymo_results_final_path, - prefix, - workers=64): - - self.kitti_result_files = kitti_result_files - self.waymo_tfrecords_dir = waymo_tfrecords_dir - self.waymo_results_save_dir = waymo_results_save_dir - self.waymo_results_final_path = waymo_results_final_path - self.prefix = prefix - self.workers = int(workers) - self.name2idx = {} - for idx, result in enumerate(kitti_result_files): - if len(result['sample_idx']) > 0: - self.name2idx[str(result['sample_idx'][0])] = idx - - # turn on eager execution for older tensorflow versions - if int(tf.__version__.split('.')[0]) < 2: - tf.enable_eager_execution() - - self.k2w_cls_map = { - 'Car': label_pb2.Label.TYPE_VEHICLE, - 'Pedestrian': label_pb2.Label.TYPE_PEDESTRIAN, - 'Sign': label_pb2.Label.TYPE_SIGN, - 'Cyclist': label_pb2.Label.TYPE_CYCLIST, - } - - self.T_ref_to_front_cam = np.array([[0.0, 0.0, 1.0, 0.0], - [-1.0, 0.0, 0.0, 0.0], - [0.0, -1.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 1.0]]) - - self.get_file_names() - self.create_folder() - - def get_file_names(self): - """Get file names of waymo raw data.""" - self.waymo_tfrecord_pathnames = sorted( - glob(join(self.waymo_tfrecords_dir, '*.tfrecord'))) - print(len(self.waymo_tfrecord_pathnames), 'tfrecords found.') - - def create_folder(self): - """Create folder for data conversion.""" - mmcv.mkdir_or_exist(self.waymo_results_save_dir) - - def parse_objects(self, kitti_result, T_k2w, context_name, - frame_timestamp_micros): - """Parse one prediction with several instances in kitti format and - convert them to `Object` proto. - - Args: - kitti_result (dict): Predictions in kitti format. - - - name (np.ndarray): Class labels of predictions. - - dimensions (np.ndarray): Height, width, length of boxes. - - location (np.ndarray): Bottom center of boxes (x, y, z). - - rotation_y (np.ndarray): Orientation of boxes. - - score (np.ndarray): Scores of predictions. - T_k2w (np.ndarray): Transformation matrix from kitti to waymo. - context_name (str): Context name of the frame. - frame_timestamp_micros (int): Frame timestamp. - - Returns: - :obj:`Object`: Predictions in waymo dataset Object proto. - """ - - def parse_one_object(instance_idx): - """Parse one instance in kitti format and convert them to `Object` - proto. - - Args: - instance_idx (int): Index of the instance to be converted. 
- - Returns: - :obj:`Object`: Predicted instance in waymo dataset - Object proto. - """ - cls = kitti_result['name'][instance_idx] - length = round(kitti_result['dimensions'][instance_idx, 0], 4) - height = round(kitti_result['dimensions'][instance_idx, 1], 4) - width = round(kitti_result['dimensions'][instance_idx, 2], 4) - x = round(kitti_result['location'][instance_idx, 0], 4) - y = round(kitti_result['location'][instance_idx, 1], 4) - z = round(kitti_result['location'][instance_idx, 2], 4) - rotation_y = round(kitti_result['rotation_y'][instance_idx], 4) - score = round(kitti_result['score'][instance_idx], 4) - - # y: downwards; move box origin from bottom center (kitti) to - # true center (waymo) - y -= height / 2 - # frame transformation: kitti -> waymo - x, y, z = self.transform(T_k2w, x, y, z) - - # different conventions - heading = -(rotation_y + np.pi / 2) - while heading < -np.pi: - heading += 2 * np.pi - while heading > np.pi: - heading -= 2 * np.pi - - box = label_pb2.Label.Box() - box.center_x = x - box.center_y = y - box.center_z = z - box.length = length - box.width = width - box.height = height - box.heading = heading - - o = metrics_pb2.Object() - o.object.box.CopyFrom(box) - o.object.type = self.k2w_cls_map[cls] - o.score = score - - o.context_name = context_name - o.frame_timestamp_micros = frame_timestamp_micros - - return o - - objects = metrics_pb2.Objects() - - for instance_idx in range(len(kitti_result['name'])): - o = parse_one_object(instance_idx) - objects.objects.append(o) - - return objects - - def convert_one(self, file_idx): - """Convert action for single file. - - Args: - file_idx (int): Index of the file to be converted. - """ - file_pathname = self.waymo_tfrecord_pathnames[file_idx] - file_data = tf.data.TFRecordDataset(file_pathname, compression_type='') - - for frame_num, frame_data in enumerate(file_data): - frame = open_dataset.Frame() - frame.ParseFromString(bytearray(frame_data.numpy())) - - filename = f'{self.prefix}{file_idx:03d}{frame_num:03d}' - - for camera in frame.context.camera_calibrations: - # FRONT = 1, see dataset.proto for details - if camera.name == 1: - T_front_cam_to_vehicle = np.array( - camera.extrinsic.transform).reshape(4, 4) - - T_k2w = T_front_cam_to_vehicle @ self.T_ref_to_front_cam - - context_name = frame.context.name - frame_timestamp_micros = frame.timestamp_micros - - if filename in self.name2idx: - kitti_result = \ - self.kitti_result_files[self.name2idx[filename]] - objects = self.parse_objects(kitti_result, T_k2w, context_name, - frame_timestamp_micros) - else: - print(filename, 'not found.') - objects = metrics_pb2.Objects() - - with open( - join(self.waymo_results_save_dir, f'{filename}.bin'), - 'wb') as f: - f.write(objects.SerializeToString()) - - def convert(self): - """Convert action.""" - print('Start converting ...') - mmcv.track_parallel_progress(self.convert_one, range(len(self)), - self.workers) - print('\nFinished ...') - - # combine all files into one .bin - pathnames = sorted(glob(join(self.waymo_results_save_dir, '*.bin'))) - combined = self.combine(pathnames) - - with open(self.waymo_results_final_path, 'wb') as f: - f.write(combined.SerializeToString()) - - def __len__(self): - """Length of the filename list.""" - return len(self.waymo_tfrecord_pathnames) - - def transform(self, T, x, y, z): - """Transform the coordinates with matrix T. - - Args: - T (np.ndarray): Transformation matrix. - x(float): Coordinate in x axis. - y(float): Coordinate in y axis. - z(float): Coordinate in z axis. 
- - Returns: - list: Coordinates after transformation. - """ - pt_bef = np.array([x, y, z, 1.0]).reshape(4, 1) - pt_aft = np.matmul(T, pt_bef) - return pt_aft[:3].flatten().tolist() - - def combine(self, pathnames): - """Combine predictions in waymo format for each sample together. - - Args: - pathnames (str): Paths to save predictions. - - Returns: - :obj:`Objects`: Combined predictions in Objects proto. - """ - combined = metrics_pb2.Objects() - - for pathname in pathnames: - objects = metrics_pb2.Objects() - with open(pathname, 'rb') as f: - objects.ParseFromString(f.read()) - for o in objects.objects: - combined.objects.append(o) - - return combined diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/__init__.py deleted file mode 100644 index 73d2d833..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_points import BasePoints -from .cam_points import CameraPoints -from .depth_points import DepthPoints -from .lidar_points import LiDARPoints - -__all__ = ['BasePoints', 'CameraPoints', 'DepthPoints', 'LiDARPoints'] - - -def get_points_type(points_type): - """Get the class of points according to coordinate type. - - Args: - points_type (str): The type of points coordinate. - The valid value are "CAMERA", "LIDAR", or "DEPTH". - - Returns: - class: Points type. - """ - if points_type == 'CAMERA': - points_cls = CameraPoints - elif points_type == 'LIDAR': - points_cls = LiDARPoints - elif points_type == 'DEPTH': - points_cls = DepthPoints - else: - raise ValueError('Only "points_type" of "CAMERA", "LIDAR", or "DEPTH"' - f' are supported, got {points_type}') - - return points_cls diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/base_points.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/base_points.py deleted file mode 100644 index 929fa21e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/base_points.py +++ /dev/null @@ -1,440 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from abc import abstractmethod - -import numpy as np -import torch - -from ..bbox.structures.utils import rotation_3d_in_axis - - -class BasePoints(object): - """Base class for Points. - - Args: - tensor (torch.Tensor | np.ndarray | list): a N x points_dim matrix. - points_dim (int, optional): Number of the dimension of a point. - Each row is (x, y, z). Defaults to 3. - attribute_dims (dict, optional): Dictionary to indicate the - meaning of extra dimension. Defaults to None. - - Attributes: - tensor (torch.Tensor): Float matrix of N x points_dim. - points_dim (int): Integer indicating the dimension of a point. - Each row is (x, y, z, ...). - attribute_dims (bool): Dictionary to indicate the meaning of extra - dimension. Defaults to None. - rotation_axis (int): Default rotation axis for points rotation. 
- """ - - def __init__(self, tensor, points_dim=3, attribute_dims=None): - if isinstance(tensor, torch.Tensor): - device = tensor.device - else: - device = torch.device('cpu') - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=device) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that - # does not depend on the inputs (and consequently confuses jit) - tensor = tensor.reshape((0, points_dim)).to( - dtype=torch.float32, device=device) - assert tensor.dim() == 2 and tensor.size(-1) == \ - points_dim, tensor.size() - - self.tensor = tensor - self.points_dim = points_dim - self.attribute_dims = attribute_dims - self.rotation_axis = 0 - - @property - def coord(self): - """torch.Tensor: Coordinates of each point in shape (N, 3).""" - return self.tensor[:, :3] - - @coord.setter - def coord(self, tensor): - """Set the coordinates of each point.""" - try: - tensor = tensor.reshape(self.shape[0], 3) - except (RuntimeError, ValueError): # for torch.Tensor and np.ndarray - raise ValueError(f'got unexpected shape {tensor.shape}') - if not isinstance(tensor, torch.Tensor): - tensor = self.tensor.new_tensor(tensor) - self.tensor[:, :3] = tensor - - @property - def height(self): - """torch.Tensor: - A vector with height of each point in shape (N, 1), or None.""" - if self.attribute_dims is not None and \ - 'height' in self.attribute_dims.keys(): - return self.tensor[:, self.attribute_dims['height']] - else: - return None - - @height.setter - def height(self, tensor): - """Set the height of each point.""" - try: - tensor = tensor.reshape(self.shape[0]) - except (RuntimeError, ValueError): # for torch.Tensor and np.ndarray - raise ValueError(f'got unexpected shape {tensor.shape}') - if not isinstance(tensor, torch.Tensor): - tensor = self.tensor.new_tensor(tensor) - if self.attribute_dims is not None and \ - 'height' in self.attribute_dims.keys(): - self.tensor[:, self.attribute_dims['height']] = tensor - else: - # add height attribute - if self.attribute_dims is None: - self.attribute_dims = dict() - attr_dim = self.shape[1] - self.tensor = torch.cat([self.tensor, tensor.unsqueeze(1)], dim=1) - self.attribute_dims.update(dict(height=attr_dim)) - self.points_dim += 1 - - @property - def color(self): - """torch.Tensor: - A vector with color of each point in shape (N, 3), or None.""" - if self.attribute_dims is not None and \ - 'color' in self.attribute_dims.keys(): - return self.tensor[:, self.attribute_dims['color']] - else: - return None - - @color.setter - def color(self, tensor): - """Set the color of each point.""" - try: - tensor = tensor.reshape(self.shape[0], 3) - except (RuntimeError, ValueError): # for torch.Tensor and np.ndarray - raise ValueError(f'got unexpected shape {tensor.shape}') - if tensor.max() >= 256 or tensor.min() < 0: - warnings.warn('point got color value beyond [0, 255]') - if not isinstance(tensor, torch.Tensor): - tensor = self.tensor.new_tensor(tensor) - if self.attribute_dims is not None and \ - 'color' in self.attribute_dims.keys(): - self.tensor[:, self.attribute_dims['color']] = tensor - else: - # add color attribute - if self.attribute_dims is None: - self.attribute_dims = dict() - attr_dim = self.shape[1] - self.tensor = torch.cat([self.tensor, tensor], dim=1) - self.attribute_dims.update( - dict(color=[attr_dim, attr_dim + 1, attr_dim + 2])) - self.points_dim += 3 - - @property - def shape(self): - """torch.Shape: Shape of points.""" - return self.tensor.shape - - def shuffle(self): - """Shuffle the points. 
- - Returns: - torch.Tensor: The shuffled index. - """ - idx = torch.randperm(self.__len__(), device=self.tensor.device) - self.tensor = self.tensor[idx] - return idx - - def rotate(self, rotation, axis=None): - """Rotate points with the given rotation matrix or angle. - - Args: - rotation (float | np.ndarray | torch.Tensor): Rotation matrix - or angle. - axis (int, optional): Axis to rotate at. Defaults to None. - """ - if not isinstance(rotation, torch.Tensor): - rotation = self.tensor.new_tensor(rotation) - assert rotation.shape == torch.Size([3, 3]) or \ - rotation.numel() == 1, f'invalid rotation shape {rotation.shape}' - - if axis is None: - axis = self.rotation_axis - - if rotation.numel() == 1: - rotated_points, rot_mat_T = rotation_3d_in_axis( - self.tensor[:, :3][None], rotation, axis=axis, return_mat=True) - self.tensor[:, :3] = rotated_points.squeeze(0) - rot_mat_T = rot_mat_T.squeeze(0) - else: - # rotation.numel() == 9 - self.tensor[:, :3] = self.tensor[:, :3] @ rotation - rot_mat_T = rotation - - return rot_mat_T - - @abstractmethod - def flip(self, bev_direction='horizontal'): - """Flip the points along given BEV direction. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). - """ - pass - - def translate(self, trans_vector): - """Translate points with the given translation vector. - - Args: - trans_vector (np.ndarray, torch.Tensor): Translation - vector of size 3 or nx3. - """ - if not isinstance(trans_vector, torch.Tensor): - trans_vector = self.tensor.new_tensor(trans_vector) - trans_vector = trans_vector.squeeze(0) - if trans_vector.dim() == 1: - assert trans_vector.shape[0] == 3 - elif trans_vector.dim() == 2: - assert trans_vector.shape[0] == self.tensor.shape[0] and \ - trans_vector.shape[1] == 3 - else: - raise NotImplementedError( - f'Unsupported translation vector of shape {trans_vector.shape}' - ) - self.tensor[:, :3] += trans_vector - - def in_range_3d(self, point_range): - """Check whether the points are in the given range. - - Args: - point_range (list | torch.Tensor): The range of point - (x_min, y_min, z_min, x_max, y_max, z_max) - - Note: - In the original implementation of SECOND, checking whether - a box in the range checks whether the points are in a convex - polygon, we try to reduce the burden for simpler cases. - - Returns: - torch.Tensor: A binary vector indicating whether each point is - inside the reference range. - """ - in_range_flags = ((self.tensor[:, 0] > point_range[0]) - & (self.tensor[:, 1] > point_range[1]) - & (self.tensor[:, 2] > point_range[2]) - & (self.tensor[:, 0] < point_range[3]) - & (self.tensor[:, 1] < point_range[4]) - & (self.tensor[:, 2] < point_range[5])) - return in_range_flags - - @property - def bev(self): - """torch.Tensor: BEV of the points in shape (N, 2).""" - return self.tensor[:, [0, 1]] - - def in_range_bev(self, point_range): - """Check whether the points are in the given range. - - Args: - point_range (list | torch.Tensor): The range of point - in order of (x_min, y_min, x_max, y_max). - - Returns: - torch.Tensor: Indicating whether each point is inside - the reference range. - """ - in_range_flags = ((self.bev[:, 0] > point_range[0]) - & (self.bev[:, 1] > point_range[1]) - & (self.bev[:, 0] < point_range[2]) - & (self.bev[:, 1] < point_range[3])) - return in_range_flags - - @abstractmethod - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`CoordMode`): The target Box mode. 
- rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BasePoints`: The converted box of the same type - in the `dst` mode. - """ - pass - - def scale(self, scale_factor): - """Scale the points with horizontal and vertical scaling factors. - - Args: - scale_factors (float): Scale factors to scale the points. - """ - self.tensor[:, :3] *= scale_factor - - def __getitem__(self, item): - """ - Note: - The following usage are allowed: - 1. `new_points = points[3]`: - return a `Points` that contains only one point. - 2. `new_points = points[2:10]`: - return a slice of points. - 3. `new_points = points[vector]`: - where vector is a torch.BoolTensor with `length = len(points)`. - Nonzero elements in the vector will be selected. - 4. `new_points = points[3:11, vector]`: - return a slice of points and attribute dims. - 5. `new_points = points[4:12, 2]`: - return a slice of points with single attribute. - Note that the returned Points might share storage with this Points, - subject to Pytorch's indexing semantics. - - Returns: - :obj:`BasePoints`: A new object of - :class:`BasePoints` after indexing. - """ - original_type = type(self) - if isinstance(item, int): - return original_type( - self.tensor[item].view(1, -1), - points_dim=self.points_dim, - attribute_dims=self.attribute_dims) - elif isinstance(item, tuple) and len(item) == 2: - if isinstance(item[1], slice): - start = 0 if item[1].start is None else item[1].start - stop = self.tensor.shape[1] if \ - item[1].stop is None else item[1].stop - step = 1 if item[1].step is None else item[1].step - item = list(item) - item[1] = list(range(start, stop, step)) - item = tuple(item) - elif isinstance(item[1], int): - item = list(item) - item[1] = [item[1]] - item = tuple(item) - p = self.tensor[item[0], item[1]] - - keep_dims = list( - set(item[1]).intersection(set(range(3, self.tensor.shape[1])))) - if self.attribute_dims is not None: - attribute_dims = self.attribute_dims.copy() - for key in self.attribute_dims.keys(): - cur_attribute_dims = attribute_dims[key] - if isinstance(cur_attribute_dims, int): - cur_attribute_dims = [cur_attribute_dims] - intersect_attr = list( - set(cur_attribute_dims).intersection(set(keep_dims))) - if len(intersect_attr) == 1: - attribute_dims[key] = intersect_attr[0] - elif len(intersect_attr) > 1: - attribute_dims[key] = intersect_attr - else: - attribute_dims.pop(key) - else: - attribute_dims = None - elif isinstance(item, (slice, np.ndarray, torch.Tensor)): - p = self.tensor[item] - attribute_dims = self.attribute_dims - else: - raise NotImplementedError(f'Invalid slice {item}!') - - assert p.dim() == 2, \ - f'Indexing on Points with {item} failed to return a matrix!' - return original_type( - p, points_dim=p.shape[1], attribute_dims=attribute_dims) - - def __len__(self): - """int: Number of points in the current object.""" - return self.tensor.shape[0] - - def __repr__(self): - """str: Return a strings that describes the object.""" - return self.__class__.__name__ + '(\n ' + str(self.tensor) + ')' - - @classmethod - def cat(cls, points_list): - """Concatenate a list of Points into a single Points. - - Args: - points_list (list[:obj:`BasePoints`]): List of points. - - Returns: - :obj:`BasePoints`: The concatenated Points. 
- """ - assert isinstance(points_list, (list, tuple)) - if len(points_list) == 0: - return cls(torch.empty(0)) - assert all(isinstance(points, cls) for points in points_list) - - # use torch.cat (v.s. layers.cat) - # so the returned points never share storage with input - cat_points = cls( - torch.cat([p.tensor for p in points_list], dim=0), - points_dim=points_list[0].tensor.shape[1], - attribute_dims=points_list[0].attribute_dims) - return cat_points - - def to(self, device): - """Convert current points to a specific device. - - Args: - device (str | :obj:`torch.device`): The name of the device. - - Returns: - :obj:`BasePoints`: A new boxes object on the - specific device. - """ - original_type = type(self) - return original_type( - self.tensor.to(device), - points_dim=self.points_dim, - attribute_dims=self.attribute_dims) - - def clone(self): - """Clone the Points. - - Returns: - :obj:`BasePoints`: Box object with the same properties - as self. - """ - original_type = type(self) - return original_type( - self.tensor.clone(), - points_dim=self.points_dim, - attribute_dims=self.attribute_dims) - - @property - def device(self): - """str: The device of the points are on.""" - return self.tensor.device - - def __iter__(self): - """Yield a point as a Tensor of shape (4,) at a time. - - Returns: - torch.Tensor: A point of shape (4,). - """ - yield from self.tensor - - def new_point(self, data): - """Create a new point object with data. - - The new point and its tensor has the similar properties - as self and self.tensor, respectively. - - Args: - data (torch.Tensor | numpy.array | list): Data to be copied. - - Returns: - :obj:`BasePoints`: A new point object with ``data``, - the object's other properties are similar to ``self``. - """ - new_tensor = self.tensor.new_tensor(data) \ - if not isinstance(data, torch.Tensor) else data.to(self.device) - original_type = type(self) - return original_type( - new_tensor, - points_dim=self.points_dim, - attribute_dims=self.attribute_dims) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/cam_points.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/cam_points.py deleted file mode 100644 index a57c3db1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/cam_points.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_points import BasePoints - - -class CameraPoints(BasePoints): - """Points of instances in CAM coordinates. - - Args: - tensor (torch.Tensor | np.ndarray | list): a N x points_dim matrix. - points_dim (int, optional): Number of the dimension of a point. - Each row is (x, y, z). Defaults to 3. - attribute_dims (dict, optional): Dictionary to indicate the - meaning of extra dimension. Defaults to None. - - Attributes: - tensor (torch.Tensor): Float matrix of N x points_dim. - points_dim (int): Integer indicating the dimension of a point. - Each row is (x, y, z, ...). - attribute_dims (bool): Dictionary to indicate the meaning of extra - dimension. Defaults to None. - rotation_axis (int): Default rotation axis for points rotation. - """ - - def __init__(self, tensor, points_dim=3, attribute_dims=None): - super(CameraPoints, self).__init__( - tensor, points_dim=points_dim, attribute_dims=attribute_dims) - self.rotation_axis = 1 - - def flip(self, bev_direction='horizontal'): - """Flip the points along given BEV direction. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). 
- """ - if bev_direction == 'horizontal': - self.tensor[:, 0] = -self.tensor[:, 0] - elif bev_direction == 'vertical': - self.tensor[:, 2] = -self.tensor[:, 2] - - @property - def bev(self): - """torch.Tensor: BEV of the points in shape (N, 2).""" - return self.tensor[:, [0, 2]] - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`CoordMode`): The target Point mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BasePoints`: The converted point of the same type - in the `dst` mode. - """ - from mmdet3d.core.bbox import Coord3DMode - return Coord3DMode.convert_point( - point=self, src=Coord3DMode.CAM, dst=dst, rt_mat=rt_mat) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/depth_points.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/depth_points.py deleted file mode 100644 index 2d9221fb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/depth_points.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_points import BasePoints - - -class DepthPoints(BasePoints): - """Points of instances in DEPTH coordinates. - - Args: - tensor (torch.Tensor | np.ndarray | list): a N x points_dim matrix. - points_dim (int, optional): Number of the dimension of a point. - Each row is (x, y, z). Defaults to 3. - attribute_dims (dict, optional): Dictionary to indicate the - meaning of extra dimension. Defaults to None. - - Attributes: - tensor (torch.Tensor): Float matrix of N x points_dim. - points_dim (int): Integer indicating the dimension of a point. - Each row is (x, y, z, ...). - attribute_dims (bool): Dictionary to indicate the meaning of extra - dimension. Defaults to None. - rotation_axis (int): Default rotation axis for points rotation. - """ - - def __init__(self, tensor, points_dim=3, attribute_dims=None): - super(DepthPoints, self).__init__( - tensor, points_dim=points_dim, attribute_dims=attribute_dims) - self.rotation_axis = 2 - - def flip(self, bev_direction='horizontal'): - """Flip the points along given BEV direction. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). - """ - if bev_direction == 'horizontal': - self.tensor[:, 0] = -self.tensor[:, 0] - elif bev_direction == 'vertical': - self.tensor[:, 1] = -self.tensor[:, 1] - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`CoordMode`): The target Point mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BasePoints`: The converted point of the same type - in the `dst` mode. 
- """ - from mmdet3d.core.bbox import Coord3DMode - return Coord3DMode.convert_point( - point=self, src=Coord3DMode.DEPTH, dst=dst, rt_mat=rt_mat) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/lidar_points.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/lidar_points.py deleted file mode 100644 index ff4f57ab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/points/lidar_points.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_points import BasePoints - - -class LiDARPoints(BasePoints): - """Points of instances in LIDAR coordinates. - - Args: - tensor (torch.Tensor | np.ndarray | list): a N x points_dim matrix. - points_dim (int, optional): Number of the dimension of a point. - Each row is (x, y, z). Defaults to 3. - attribute_dims (dict, optional): Dictionary to indicate the - meaning of extra dimension. Defaults to None. - - Attributes: - tensor (torch.Tensor): Float matrix of N x points_dim. - points_dim (int): Integer indicating the dimension of a point. - Each row is (x, y, z, ...). - attribute_dims (bool): Dictionary to indicate the meaning of extra - dimension. Defaults to None. - rotation_axis (int): Default rotation axis for points rotation. - """ - - def __init__(self, tensor, points_dim=3, attribute_dims=None): - super(LiDARPoints, self).__init__( - tensor, points_dim=points_dim, attribute_dims=attribute_dims) - self.rotation_axis = 2 - - def flip(self, bev_direction='horizontal'): - """Flip the points along given BEV direction. - - Args: - bev_direction (str): Flip direction (horizontal or vertical). - """ - if bev_direction == 'horizontal': - self.tensor[:, 1] = -self.tensor[:, 1] - elif bev_direction == 'vertical': - self.tensor[:, 0] = -self.tensor[:, 0] - - def convert_to(self, dst, rt_mat=None): - """Convert self to ``dst`` mode. - - Args: - dst (:obj:`CoordMode`): The target Point mode. - rt_mat (np.ndarray | torch.Tensor, optional): The rotation and - translation matrix between different coordinates. - Defaults to None. - The conversion from `src` coordinates to `dst` coordinates - usually comes along the change of sensors, e.g., from camera - to LiDAR. This requires a transformation matrix. - - Returns: - :obj:`BasePoints`: The converted point of the same type - in the `dst` mode. - """ - from mmdet3d.core.bbox import Coord3DMode - return Coord3DMode.convert_point( - point=self, src=Coord3DMode.LIDAR, dst=dst, rt_mat=rt_mat) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/post_processing/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/post_processing/__init__.py deleted file mode 100644 index 2fb534e0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/post_processing/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.core.post_processing import (merge_aug_bboxes, merge_aug_masks, - merge_aug_proposals, merge_aug_scores, - multiclass_nms) -from .box3d_nms import (aligned_3d_nms, box3d_multiclass_nms, circle_nms, - nms_bev, nms_normal_bev) -from .merge_augs import merge_aug_bboxes_3d - -__all__ = [ - 'multiclass_nms', 'merge_aug_proposals', 'merge_aug_bboxes', - 'merge_aug_scores', 'merge_aug_masks', 'box3d_multiclass_nms', - 'aligned_3d_nms', 'merge_aug_bboxes_3d', 'circle_nms', 'nms_bev', - 'nms_normal_bev' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/post_processing/box3d_nms.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/post_processing/box3d_nms.py deleted file mode 100644 index 2d42085e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/post_processing/box3d_nms.py +++ /dev/null @@ -1,288 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numba -import numpy as np -import torch -from mmcv.ops import nms, nms_rotated - - -def box3d_multiclass_nms(mlvl_bboxes, - mlvl_bboxes_for_nms, - mlvl_scores, - score_thr, - max_num, - cfg, - mlvl_dir_scores=None, - mlvl_attr_scores=None, - mlvl_bboxes2d=None): - """Multi-class NMS for 3D boxes. The IoU used for NMS is defined as the 2D - IoU between BEV boxes. - - Args: - mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M). - M is the dimensions of boxes. - mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape - (N, 5) ([x1, y1, x2, y2, ry]). N is the number of boxes. - The coordinate system of the BEV boxes is counterclockwise. - mlvl_scores (torch.Tensor): Multi-level boxes with shape - (N, C + 1). N is the number of boxes. C is the number of classes. - score_thr (float): Score threshold to filter boxes with low - confidence. - max_num (int): Maximum number of boxes will be kept. - cfg (dict): Configuration dict of NMS. - mlvl_dir_scores (torch.Tensor, optional): Multi-level scores - of direction classifier. Defaults to None. - mlvl_attr_scores (torch.Tensor, optional): Multi-level scores - of attribute classifier. Defaults to None. - mlvl_bboxes2d (torch.Tensor, optional): Multi-level 2D bounding - boxes. Defaults to None. - - Returns: - tuple[torch.Tensor]: Return results after nms, including 3D - bounding boxes, scores, labels, direction scores, attribute - scores (optional) and 2D bounding boxes (optional). 
- """ - # do multi class nms - # the fg class id range: [0, num_classes-1] - num_classes = mlvl_scores.shape[1] - 1 - bboxes = [] - scores = [] - labels = [] - dir_scores = [] - attr_scores = [] - bboxes2d = [] - for i in range(0, num_classes): - # get bboxes and scores of this class - cls_inds = mlvl_scores[:, i] > score_thr - if not cls_inds.any(): - continue - - _scores = mlvl_scores[cls_inds, i] - _bboxes_for_nms = mlvl_bboxes_for_nms[cls_inds, :] - - if cfg.use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - - selected = nms_func(_bboxes_for_nms, _scores, cfg.nms_thr) - _mlvl_bboxes = mlvl_bboxes[cls_inds, :] - bboxes.append(_mlvl_bboxes[selected]) - scores.append(_scores[selected]) - cls_label = mlvl_bboxes.new_full((len(selected), ), - i, - dtype=torch.long) - labels.append(cls_label) - - if mlvl_dir_scores is not None: - _mlvl_dir_scores = mlvl_dir_scores[cls_inds] - dir_scores.append(_mlvl_dir_scores[selected]) - if mlvl_attr_scores is not None: - _mlvl_attr_scores = mlvl_attr_scores[cls_inds] - attr_scores.append(_mlvl_attr_scores[selected]) - if mlvl_bboxes2d is not None: - _mlvl_bboxes2d = mlvl_bboxes2d[cls_inds] - bboxes2d.append(_mlvl_bboxes2d[selected]) - - if bboxes: - bboxes = torch.cat(bboxes, dim=0) - scores = torch.cat(scores, dim=0) - labels = torch.cat(labels, dim=0) - if mlvl_dir_scores is not None: - dir_scores = torch.cat(dir_scores, dim=0) - if mlvl_attr_scores is not None: - attr_scores = torch.cat(attr_scores, dim=0) - if mlvl_bboxes2d is not None: - bboxes2d = torch.cat(bboxes2d, dim=0) - if bboxes.shape[0] > max_num: - _, inds = scores.sort(descending=True) - inds = inds[:max_num] - bboxes = bboxes[inds, :] - labels = labels[inds] - scores = scores[inds] - if mlvl_dir_scores is not None: - dir_scores = dir_scores[inds] - if mlvl_attr_scores is not None: - attr_scores = attr_scores[inds] - if mlvl_bboxes2d is not None: - bboxes2d = bboxes2d[inds] - else: - bboxes = mlvl_scores.new_zeros((0, mlvl_bboxes.size(-1))) - scores = mlvl_scores.new_zeros((0, )) - labels = mlvl_scores.new_zeros((0, ), dtype=torch.long) - if mlvl_dir_scores is not None: - dir_scores = mlvl_scores.new_zeros((0, )) - if mlvl_attr_scores is not None: - attr_scores = mlvl_scores.new_zeros((0, )) - if mlvl_bboxes2d is not None: - bboxes2d = mlvl_scores.new_zeros((0, 4)) - - results = (bboxes, scores, labels) - - if mlvl_dir_scores is not None: - results = results + (dir_scores, ) - if mlvl_attr_scores is not None: - results = results + (attr_scores, ) - if mlvl_bboxes2d is not None: - results = results + (bboxes2d, ) - - return results - - -def aligned_3d_nms(boxes, scores, classes, thresh): - """3D NMS for aligned boxes. - - Args: - boxes (torch.Tensor): Aligned box with shape [n, 6]. - scores (torch.Tensor): Scores of each box. - classes (torch.Tensor): Class of each box. - thresh (float): IoU threshold for nms. - - Returns: - torch.Tensor: Indices of selected boxes. 
- """ - x1 = boxes[:, 0] - y1 = boxes[:, 1] - z1 = boxes[:, 2] - x2 = boxes[:, 3] - y2 = boxes[:, 4] - z2 = boxes[:, 5] - area = (x2 - x1) * (y2 - y1) * (z2 - z1) - zero = boxes.new_zeros(1, ) - - score_sorted = torch.argsort(scores) - pick = [] - while (score_sorted.shape[0] != 0): - last = score_sorted.shape[0] - i = score_sorted[-1] - pick.append(i) - - xx1 = torch.max(x1[i], x1[score_sorted[:last - 1]]) - yy1 = torch.max(y1[i], y1[score_sorted[:last - 1]]) - zz1 = torch.max(z1[i], z1[score_sorted[:last - 1]]) - xx2 = torch.min(x2[i], x2[score_sorted[:last - 1]]) - yy2 = torch.min(y2[i], y2[score_sorted[:last - 1]]) - zz2 = torch.min(z2[i], z2[score_sorted[:last - 1]]) - classes1 = classes[i] - classes2 = classes[score_sorted[:last - 1]] - inter_l = torch.max(zero, xx2 - xx1) - inter_w = torch.max(zero, yy2 - yy1) - inter_h = torch.max(zero, zz2 - zz1) - - inter = inter_l * inter_w * inter_h - iou = inter / (area[i] + area[score_sorted[:last - 1]] - inter) - iou = iou * (classes1 == classes2).float() - score_sorted = score_sorted[torch.nonzero( - iou <= thresh, as_tuple=False).flatten()] - - indices = boxes.new_tensor(pick, dtype=torch.long) - return indices - - -@numba.jit(nopython=True) -def circle_nms(dets, thresh, post_max_size=83): - """Circular NMS. - - An object is only counted as positive if no other center - with a higher confidence exists within a radius r using a - bird-eye view distance metric. - - Args: - dets (torch.Tensor): Detection results with the shape of [N, 3]. - thresh (float): Value of threshold. - post_max_size (int, optional): Max number of prediction to be kept. - Defaults to 83. - - Returns: - torch.Tensor: Indexes of the detections to be kept. - """ - x1 = dets[:, 0] - y1 = dets[:, 1] - scores = dets[:, 2] - order = scores.argsort()[::-1].astype(np.int32) # highest->lowest - ndets = dets.shape[0] - suppressed = np.zeros((ndets), dtype=np.int32) - keep = [] - for _i in range(ndets): - i = order[_i] # start with highest score box - if suppressed[ - i] == 1: # if any box have enough iou with this, remove it - continue - keep.append(i) - for _j in range(_i + 1, ndets): - j = order[_j] - if suppressed[j] == 1: - continue - # calculate center distance between i and j box - dist = (x1[i] - x1[j])**2 + (y1[i] - y1[j])**2 - - # ovr = inter / areas[j] - if dist <= thresh: - suppressed[j] = 1 - - if post_max_size < len(keep): - return keep[:post_max_size] - - return keep - - -# This function duplicates functionality of mmcv.ops.iou_3d.nms_bev -# from mmcv<=1.5, but using cuda ops from mmcv.ops.nms.nms_rotated. -# Nms api will be unified in mmdetection3d one day. -def nms_bev(boxes, scores, thresh, pre_max_size=None, post_max_size=None): - """NMS function GPU implementation (for BEV boxes). The overlap of two - boxes for IoU calculation is defined as the exact overlapping area of the - two boxes. In this function, one can also set ``pre_max_size`` and - ``post_max_size``. - - Args: - boxes (torch.Tensor): Input boxes with the shape of [N, 5] - ([x1, y1, x2, y2, ry]). - scores (torch.Tensor): Scores of boxes with the shape of [N]. - thresh (float): Overlap threshold of NMS. - pre_max_size (int, optional): Max size of boxes before NMS. - Default: None. - post_max_size (int, optional): Max size of boxes after NMS. - Default: None. - - Returns: - torch.Tensor: Indexes after NMS. 
- """ - assert boxes.size(1) == 5, 'Input boxes shape should be [N, 5]' - order = scores.sort(0, descending=True)[1] - if pre_max_size is not None: - order = order[:pre_max_size] - boxes = boxes[order].contiguous() - scores = scores[order] - - # xyxyr -> back to xywhr - # note: better skip this step before nms_bev call in the future - boxes = torch.stack( - ((boxes[:, 0] + boxes[:, 2]) / 2, (boxes[:, 1] + boxes[:, 3]) / 2, - boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1], boxes[:, 4]), - dim=-1) - - keep = nms_rotated(boxes, scores, thresh)[1] - keep = order[keep] - if post_max_size is not None: - keep = keep[:post_max_size] - return keep - - -# This function duplicates functionality of mmcv.ops.iou_3d.nms_normal_bev -# from mmcv<=1.5, but using cuda ops from mmcv.ops.nms.nms. -# Nms api will be unified in mmdetection3d one day. -def nms_normal_bev(boxes, scores, thresh): - """Normal NMS function GPU implementation (for BEV boxes). The overlap of - two boxes for IoU calculation is defined as the exact overlapping area of - the two boxes WITH their yaw angle set to 0. - - Args: - boxes (torch.Tensor): Input boxes with shape (N, 5). - scores (torch.Tensor): Scores of predicted boxes with shape (N). - thresh (float): Overlap threshold of NMS. - - Returns: - torch.Tensor: Remaining indices with scores in descending order. - """ - assert boxes.shape[1] == 5, 'Input boxes shape should be [N, 5]' - return nms(boxes[:, :-1], scores, thresh)[1] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/post_processing/merge_augs.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/post_processing/merge_augs.py deleted file mode 100644 index 0e20dcd5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/post_processing/merge_augs.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.core.post_processing import nms_bev, nms_normal_bev -from ..bbox import bbox3d2result, bbox3d_mapping_back, xywhr2xyxyr - - -def merge_aug_bboxes_3d(aug_results, img_metas, test_cfg): - """Merge augmented detection 3D bboxes and scores. - - Args: - aug_results (list[dict]): The dict of detection results. - The dict contains the following keys - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox. - - scores_3d (torch.Tensor): Detection scores. - - labels_3d (torch.Tensor): Predicted box labels. - img_metas (list[dict]): Meta information of each sample. - test_cfg (dict): Test config. - - Returns: - dict: Bounding boxes results in cpu mode, containing merged results. - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Merged detection bbox. - - scores_3d (torch.Tensor): Merged detection scores. - - labels_3d (torch.Tensor): Merged predicted box labels. 
- """ - - assert len(aug_results) == len(img_metas), \ - '"aug_results" should have the same length as "img_metas", got len(' \ - f'aug_results)={len(aug_results)} and len(img_metas)={len(img_metas)}' - - recovered_bboxes = [] - recovered_scores = [] - recovered_labels = [] - - for bboxes, img_info in zip(aug_results, img_metas): - scale_factor = img_info[0]['pcd_scale_factor'] - pcd_horizontal_flip = img_info[0]['pcd_horizontal_flip'] - pcd_vertical_flip = img_info[0]['pcd_vertical_flip'] - recovered_scores.append(bboxes['scores_3d']) - recovered_labels.append(bboxes['labels_3d']) - bboxes = bbox3d_mapping_back(bboxes['boxes_3d'], scale_factor, - pcd_horizontal_flip, pcd_vertical_flip) - recovered_bboxes.append(bboxes) - - aug_bboxes = recovered_bboxes[0].cat(recovered_bboxes) - aug_bboxes_for_nms = xywhr2xyxyr(aug_bboxes.bev) - aug_scores = torch.cat(recovered_scores, dim=0) - aug_labels = torch.cat(recovered_labels, dim=0) - - # TODO: use a more elegent way to deal with nms - if test_cfg.use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - - merged_bboxes = [] - merged_scores = [] - merged_labels = [] - - # Apply multi-class nms when merge bboxes - if len(aug_labels) == 0: - return bbox3d2result(aug_bboxes, aug_scores, aug_labels) - - for class_id in range(torch.max(aug_labels).item() + 1): - class_inds = (aug_labels == class_id) - bboxes_i = aug_bboxes[class_inds] - bboxes_nms_i = aug_bboxes_for_nms[class_inds, :] - scores_i = aug_scores[class_inds] - labels_i = aug_labels[class_inds] - if len(bboxes_nms_i) == 0: - continue - selected = nms_func(bboxes_nms_i, scores_i, test_cfg.nms_thr) - - merged_bboxes.append(bboxes_i[selected, :]) - merged_scores.append(scores_i[selected]) - merged_labels.append(labels_i[selected]) - - merged_bboxes = merged_bboxes[0].cat(merged_bboxes) - merged_scores = torch.cat(merged_scores, dim=0) - merged_labels = torch.cat(merged_labels, dim=0) - - _, order = merged_scores.sort(0, descending=True) - num = min(test_cfg.max_num, len(aug_bboxes)) - order = order[:num] - - merged_bboxes = merged_bboxes[order] - merged_scores = merged_scores[order] - merged_labels = merged_labels[order] - - return bbox3d2result(merged_bboxes, merged_scores, merged_labels) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/utils/__init__.py deleted file mode 100644 index b2a8deca..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/utils/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .array_converter import ArrayConverter, array_converter -from .gaussian import (draw_heatmap_gaussian, ellip_gaussian2D, gaussian_2d, - gaussian_radius, get_ellip_gaussian_2D) - -__all__ = [ - 'gaussian_2d', 'gaussian_radius', 'draw_heatmap_gaussian', - 'ArrayConverter', 'array_converter', 'ellip_gaussian2D', - 'get_ellip_gaussian_2D' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/utils/array_converter.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/utils/array_converter.py deleted file mode 100644 index a555aa60..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/utils/array_converter.py +++ /dev/null @@ -1,324 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import functools -from inspect import getfullargspec - -import numpy as np -import torch - - -def array_converter(to_torch=True, - apply_to=tuple(), - template_arg_name_=None, - recover=True): - """Wrapper function for data-type agnostic processing. - - First converts input arrays to PyTorch tensors or NumPy ndarrays - for middle calculation, then convert output to original data-type if - `recover=True`. - - Args: - to_torch (Bool, optional): Whether convert to PyTorch tensors - for middle calculation. Defaults to True. - apply_to (tuple[str], optional): The arguments to which we apply - data-type conversion. Defaults to an empty tuple. - template_arg_name_ (str, optional): Argument serving as the template ( - return arrays should have the same dtype and device - as the template). Defaults to None. If None, we will use the - first argument in `apply_to` as the template argument. - recover (Bool, optional): Whether or not recover the wrapped function - outputs to the `template_arg_name_` type. Defaults to True. - - Raises: - ValueError: When template_arg_name_ is not among all args, or - when apply_to contains an arg which is not among all args, - a ValueError will be raised. When the template argument or - an argument to convert is a list or tuple, and cannot be - converted to a NumPy array, a ValueError will be raised. - TypeError: When the type of the template argument or - an argument to convert does not belong to the above range, - or the contents of such an list-or-tuple-type argument - do not share the same data type, a TypeError is raised. - - Returns: - (function): wrapped function. - - Example: - >>> import torch - >>> import numpy as np - >>> - >>> # Use torch addition for a + b, - >>> # and convert return values to the type of a - >>> @array_converter(apply_to=('a', 'b')) - >>> def simple_add(a, b): - >>> return a + b - >>> - >>> a = np.array([1.1]) - >>> b = np.array([2.2]) - >>> simple_add(a, b) - >>> - >>> # Use numpy addition for a + b, - >>> # and convert return values to the type of b - >>> @array_converter(to_torch=False, apply_to=('a', 'b'), - >>> template_arg_name_='b') - >>> def simple_add(a, b): - >>> return a + b - >>> - >>> simple_add() - >>> - >>> # Use torch funcs for floor(a) if flag=True else ceil(a), - >>> # and return the torch tensor - >>> @array_converter(apply_to=('a',), recover=False) - >>> def floor_or_ceil(a, flag=True): - >>> return torch.floor(a) if flag else torch.ceil(a) - >>> - >>> floor_or_ceil(a, flag=False) - """ - - def array_converter_wrapper(func): - """Outer wrapper for the function.""" - - @functools.wraps(func) - def new_func(*args, **kwargs): - """Inner wrapper for the arguments.""" - if len(apply_to) == 0: - return func(*args, **kwargs) - - func_name = func.__name__ - - arg_spec = getfullargspec(func) - - arg_names = arg_spec.args - arg_num = len(arg_names) - default_arg_values = arg_spec.defaults - if default_arg_values is None: - default_arg_values = [] - no_default_arg_num = len(arg_names) - len(default_arg_values) - - kwonly_arg_names = arg_spec.kwonlyargs - kwonly_default_arg_values = arg_spec.kwonlydefaults - if kwonly_default_arg_values is None: - kwonly_default_arg_values = {} - - all_arg_names = arg_names + kwonly_arg_names - - # in case there are args in the form of *args - if len(args) > arg_num: - named_args = args[:arg_num] - nameless_args = args[arg_num:] - else: - named_args = args - nameless_args = [] - - # template argument data type is used for all array-like arguments - if template_arg_name_ is None: - 
template_arg_name = apply_to[0] - else: - template_arg_name = template_arg_name_ - - if template_arg_name not in all_arg_names: - raise ValueError(f'{template_arg_name} is not among the ' - f'argument list of function {func_name}') - - # inspect apply_to - for arg_to_apply in apply_to: - if arg_to_apply not in all_arg_names: - raise ValueError(f'{arg_to_apply} is not ' - f'an argument of {func_name}') - - new_args = [] - new_kwargs = {} - - converter = ArrayConverter() - target_type = torch.Tensor if to_torch else np.ndarray - - # non-keyword arguments - for i, arg_value in enumerate(named_args): - if arg_names[i] in apply_to: - new_args.append( - converter.convert( - input_array=arg_value, target_type=target_type)) - else: - new_args.append(arg_value) - - if arg_names[i] == template_arg_name: - template_arg_value = arg_value - - kwonly_default_arg_values.update(kwargs) - kwargs = kwonly_default_arg_values - - # keyword arguments and non-keyword arguments using default value - for i in range(len(named_args), len(all_arg_names)): - arg_name = all_arg_names[i] - if arg_name in kwargs: - if arg_name in apply_to: - new_kwargs[arg_name] = converter.convert( - input_array=kwargs[arg_name], - target_type=target_type) - else: - new_kwargs[arg_name] = kwargs[arg_name] - else: - default_value = default_arg_values[i - no_default_arg_num] - if arg_name in apply_to: - new_kwargs[arg_name] = converter.convert( - input_array=default_value, target_type=target_type) - else: - new_kwargs[arg_name] = default_value - if arg_name == template_arg_name: - template_arg_value = kwargs[arg_name] - - # add nameless args provided by *args (if exists) - new_args += nameless_args - - return_values = func(*new_args, **new_kwargs) - converter.set_template(template_arg_value) - - def recursive_recover(input_data): - if isinstance(input_data, (tuple, list)): - new_data = [] - for item in input_data: - new_data.append(recursive_recover(item)) - return tuple(new_data) if isinstance(input_data, - tuple) else new_data - elif isinstance(input_data, dict): - new_data = {} - for k, v in input_data.items(): - new_data[k] = recursive_recover(v) - return new_data - elif isinstance(input_data, (torch.Tensor, np.ndarray)): - return converter.recover(input_data) - else: - return input_data - - if recover: - return recursive_recover(return_values) - else: - return return_values - - return new_func - - return array_converter_wrapper - - -class ArrayConverter: - - SUPPORTED_NON_ARRAY_TYPES = (int, float, np.int8, np.int16, np.int32, - np.int64, np.uint8, np.uint16, np.uint32, - np.uint64, np.float16, np.float32, np.float64) - - def __init__(self, template_array=None): - if template_array is not None: - self.set_template(template_array) - - def set_template(self, array): - """Set template array. - - Args: - array (tuple | list | int | float | np.ndarray | torch.Tensor): - Template array. - - Raises: - ValueError: If input is list or tuple and cannot be converted to - to a NumPy array, a ValueError is raised. - TypeError: If input type does not belong to the above range, - or the contents of a list or tuple do not share the - same data type, a TypeError is raised. 
- """ - self.array_type = type(array) - self.is_num = False - self.device = 'cpu' - - if isinstance(array, np.ndarray): - self.dtype = array.dtype - elif isinstance(array, torch.Tensor): - self.dtype = array.dtype - self.device = array.device - elif isinstance(array, (list, tuple)): - try: - array = np.array(array) - if array.dtype not in self.SUPPORTED_NON_ARRAY_TYPES: - raise TypeError - self.dtype = array.dtype - except (ValueError, TypeError): - print(f'The following list cannot be converted to' - f' a numpy array of supported dtype:\n{array}') - raise - elif isinstance(array, self.SUPPORTED_NON_ARRAY_TYPES): - self.array_type = np.ndarray - self.is_num = True - self.dtype = np.dtype(type(array)) - else: - raise TypeError(f'Template type {self.array_type}' - f' is not supported.') - - def convert(self, input_array, target_type=None, target_array=None): - """Convert input array to target data type. - - Args: - input_array (tuple | list | np.ndarray | - torch.Tensor | int | float ): - Input array. Defaults to None. - target_type ( | , - optional): - Type to which input array is converted. Defaults to None. - target_array (np.ndarray | torch.Tensor, optional): - Template array to which input array is converted. - Defaults to None. - - Raises: - ValueError: If input is list or tuple and cannot be converted to - to a NumPy array, a ValueError is raised. - TypeError: If input type does not belong to the above range, - or the contents of a list or tuple do not share the - same data type, a TypeError is raised. - """ - if isinstance(input_array, (list, tuple)): - try: - input_array = np.array(input_array) - if input_array.dtype not in self.SUPPORTED_NON_ARRAY_TYPES: - raise TypeError - except (ValueError, TypeError): - print(f'The input cannot be converted to' - f' a single-type numpy array:\n{input_array}') - raise - elif isinstance(input_array, self.SUPPORTED_NON_ARRAY_TYPES): - input_array = np.array(input_array) - array_type = type(input_array) - assert target_type is not None or target_array is not None, \ - 'must specify a target' - if target_type is not None: - assert target_type in (np.ndarray, torch.Tensor), \ - 'invalid target type' - if target_type == array_type: - return input_array - elif target_type == np.ndarray: - # default dtype is float32 - converted_array = input_array.cpu().numpy().astype(np.float32) - else: - # default dtype is float32, device is 'cpu' - converted_array = torch.tensor( - input_array, dtype=torch.float32) - else: - assert isinstance(target_array, (np.ndarray, torch.Tensor)), \ - 'invalid target array type' - if isinstance(target_array, array_type): - return input_array - elif isinstance(target_array, np.ndarray): - converted_array = input_array.cpu().numpy().astype( - target_array.dtype) - else: - converted_array = target_array.new_tensor(input_array) - return converted_array - - def recover(self, input_array): - assert isinstance(input_array, (np.ndarray, torch.Tensor)), \ - 'invalid input array type' - if isinstance(input_array, self.array_type): - return input_array - elif isinstance(input_array, torch.Tensor): - converted_array = input_array.cpu().numpy().astype(self.dtype) - else: - converted_array = torch.tensor( - input_array, dtype=self.dtype, device=self.device) - if self.is_num: - converted_array = converted_array.item() - return converted_array diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/utils/gaussian.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/utils/gaussian.py deleted file mode 100644 index 
66ccbd9e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/utils/gaussian.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - - -def gaussian_2d(shape, sigma=1): - """Generate gaussian map. - - Args: - shape (list[int]): Shape of the map. - sigma (float, optional): Sigma to generate gaussian map. - Defaults to 1. - - Returns: - np.ndarray: Generated gaussian map. - """ - m, n = [(ss - 1.) / 2. for ss in shape] - y, x = np.ogrid[-m:m + 1, -n:n + 1] - - h = np.exp(-(x * x + y * y) / (2 * sigma * sigma)) - h[h < np.finfo(h.dtype).eps * h.max()] = 0 - return h - - -def draw_heatmap_gaussian(heatmap, center, radius, k=1): - """Get gaussian masked heatmap. - - Args: - heatmap (torch.Tensor): Heatmap to be masked. - center (torch.Tensor): Center coord of the heatmap. - radius (int): Radius of gaussian. - K (int, optional): Multiple of masked_gaussian. Defaults to 1. - - Returns: - torch.Tensor: Masked heatmap. - """ - diameter = 2 * radius + 1 - gaussian = gaussian_2d((diameter, diameter), sigma=diameter / 6) - - x, y = int(center[0]), int(center[1]) - - height, width = heatmap.shape[0:2] - - left, right = min(x, radius), min(width - x, radius + 1) - top, bottom = min(y, radius), min(height - y, radius + 1) - - masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right] - masked_gaussian = torch.from_numpy( - gaussian[radius - top:radius + bottom, - radius - left:radius + right]).to(heatmap.device, - torch.float32) - if min(masked_gaussian.shape) > 0 and min(masked_heatmap.shape) > 0: - torch.max(masked_heatmap, masked_gaussian * k, out=masked_heatmap) - return heatmap - - -def gaussian_radius(det_size, min_overlap=0.5): - """Get radius of gaussian. - - Args: - det_size (tuple[torch.Tensor]): Size of the detection result. - min_overlap (float, optional): Gaussian_overlap. Defaults to 0.5. - - Returns: - torch.Tensor: Computed radius. - """ - height, width = det_size - - a1 = 1 - b1 = (height + width) - c1 = width * height * (1 - min_overlap) / (1 + min_overlap) - sq1 = torch.sqrt(b1**2 - 4 * a1 * c1) - r1 = (b1 + sq1) / 2 - - a2 = 4 - b2 = 2 * (height + width) - c2 = (1 - min_overlap) * width * height - sq2 = torch.sqrt(b2**2 - 4 * a2 * c2) - r2 = (b2 + sq2) / 2 - - a3 = 4 * min_overlap - b3 = -2 * min_overlap * (height + width) - c3 = (min_overlap - 1) * width * height - sq3 = torch.sqrt(b3**2 - 4 * a3 * c3) - r3 = (b3 + sq3) / 2 - return min(r1, r2, r3) - - -def get_ellip_gaussian_2D(heatmap, center, radius_x, radius_y, k=1): - """Generate 2D ellipse gaussian heatmap. - - Args: - heatmap (Tensor): Input heatmap, the gaussian kernel will cover on - it and maintain the max value. - center (list[int]): Coord of gaussian kernel's center. - radius_x (int): X-axis radius of gaussian kernel. - radius_y (int): Y-axis radius of gaussian kernel. - k (int, optional): Coefficient of gaussian kernel. Default: 1. - - Returns: - out_heatmap (Tensor): Updated heatmap covered by gaussian kernel. 
- """ - diameter_x, diameter_y = 2 * radius_x + 1, 2 * radius_y + 1 - gaussian_kernel = ellip_gaussian2D((radius_x, radius_y), - sigma_x=diameter_x / 6, - sigma_y=diameter_y / 6, - dtype=heatmap.dtype, - device=heatmap.device) - - x, y = int(center[0]), int(center[1]) - height, width = heatmap.shape[0:2] - - left, right = min(x, radius_x), min(width - x, radius_x + 1) - top, bottom = min(y, radius_y), min(height - y, radius_y + 1) - - masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right] - masked_gaussian = gaussian_kernel[radius_y - top:radius_y + bottom, - radius_x - left:radius_x + right] - out_heatmap = heatmap - torch.max( - masked_heatmap, - masked_gaussian * k, - out=out_heatmap[y - top:y + bottom, x - left:x + right]) - - return out_heatmap - - -def ellip_gaussian2D(radius, - sigma_x, - sigma_y, - dtype=torch.float32, - device='cpu'): - """Generate 2D ellipse gaussian kernel. - - Args: - radius (tuple(int)): Ellipse radius (radius_x, radius_y) of gaussian - kernel. - sigma_x (int): X-axis sigma of gaussian function. - sigma_y (int): Y-axis sigma of gaussian function. - dtype (torch.dtype, optional): Dtype of gaussian tensor. - Default: torch.float32. - device (str, optional): Device of gaussian tensor. - Default: 'cpu'. - - Returns: - h (Tensor): Gaussian kernel with a - ``(2 * radius_y + 1) * (2 * radius_x + 1)`` shape. - """ - x = torch.arange( - -radius[0], radius[0] + 1, dtype=dtype, device=device).view(1, -1) - y = torch.arange( - -radius[1], radius[1] + 1, dtype=dtype, device=device).view(-1, 1) - - h = (-(x * x) / (2 * sigma_x * sigma_x) - (y * y) / - (2 * sigma_y * sigma_y)).exp() - h[h < torch.finfo(h.dtype).eps * h.max()] = 0 - - return h diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/__init__.py deleted file mode 100644 index bbf1e60f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .show_result import (show_multi_modality_result, show_result, - show_seg_result) - -__all__ = ['show_result', 'show_seg_result', 'show_multi_modality_result'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/image_vis.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/image_vis.py deleted file mode 100644 index 7ac765c2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/image_vis.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import cv2 -import numpy as np -import torch -from matplotlib import pyplot as plt - - -def project_pts_on_img(points, - raw_img, - lidar2img_rt, - max_distance=70, - thickness=-1): - """Project the 3D points cloud on 2D image. - - Args: - points (numpy.array): 3D points cloud (x, y, z) to visualize. - raw_img (numpy.array): The numpy array of image. - lidar2img_rt (numpy.array, shape=[4, 4]): The projection matrix - according to the camera intrinsic parameters. - max_distance (float, optional): the max distance of the points cloud. - Default: 70. - thickness (int, optional): The thickness of 2D points. Default: -1. 
- """ - img = raw_img.copy() - num_points = points.shape[0] - pts_4d = np.concatenate([points[:, :3], np.ones((num_points, 1))], axis=-1) - pts_2d = pts_4d @ lidar2img_rt.T - - # cam_points is Tensor of Nx4 whose last column is 1 - # transform camera coordinate to image coordinate - pts_2d[:, 2] = np.clip(pts_2d[:, 2], a_min=1e-5, a_max=99999) - pts_2d[:, 0] /= pts_2d[:, 2] - pts_2d[:, 1] /= pts_2d[:, 2] - - fov_inds = ((pts_2d[:, 0] < img.shape[1]) - & (pts_2d[:, 0] >= 0) - & (pts_2d[:, 1] < img.shape[0]) - & (pts_2d[:, 1] >= 0)) - - imgfov_pts_2d = pts_2d[fov_inds, :3] # u, v, d - - cmap = plt.cm.get_cmap('hsv', 256) - cmap = np.array([cmap(i) for i in range(256)])[:, :3] * 255 - for i in range(imgfov_pts_2d.shape[0]): - depth = imgfov_pts_2d[i, 2] - color = cmap[np.clip(int(max_distance * 10 / depth), 0, 255), :] - cv2.circle( - img, - center=(int(np.round(imgfov_pts_2d[i, 0])), - int(np.round(imgfov_pts_2d[i, 1]))), - radius=1, - color=tuple(color), - thickness=thickness, - ) - cv2.imshow('project_pts_img', img.astype(np.uint8)) - cv2.waitKey(100) - - -def plot_rect3d_on_img(img, - num_rects, - rect_corners, - color=(0, 255, 0), - thickness=1): - """Plot the boundary lines of 3D rectangular on 2D images. - - Args: - img (numpy.array): The numpy array of image. - num_rects (int): Number of 3D rectangulars. - rect_corners (numpy.array): Coordinates of the corners of 3D - rectangulars. Should be in the shape of [num_rect, 8, 2]. - color (tuple[int], optional): The color to draw bboxes. - Default: (0, 255, 0). - thickness (int, optional): The thickness of bboxes. Default: 1. - """ - line_indices = ((0, 1), (0, 3), (0, 4), (1, 2), (1, 5), (3, 2), (3, 7), - (4, 5), (4, 7), (2, 6), (5, 6), (6, 7)) - for i in range(num_rects): - corners = rect_corners[i].astype(np.int) - for start, end in line_indices: - cv2.line(img, (corners[start, 0], corners[start, 1]), - (corners[end, 0], corners[end, 1]), color, thickness, - cv2.LINE_AA) - - return img.astype(np.uint8) - - -def draw_lidar_bbox3d_on_img(bboxes3d, - raw_img, - lidar2img_rt, - img_metas, - color=(0, 255, 0), - thickness=1): - """Project the 3D bbox on 2D plane and draw on input image. - - Args: - bboxes3d (:obj:`LiDARInstance3DBoxes`): - 3d bbox in lidar coordinate system to visualize. - raw_img (numpy.array): The numpy array of image. - lidar2img_rt (numpy.array, shape=[4, 4]): The projection matrix - according to the camera intrinsic parameters. - img_metas (dict): Useless here. - color (tuple[int], optional): The color to draw bboxes. - Default: (0, 255, 0). - thickness (int, optional): The thickness of bboxes. Default: 1. - """ - img = raw_img.copy() - corners_3d = bboxes3d.corners - num_bbox = corners_3d.shape[0] - pts_4d = np.concatenate( - [corners_3d.reshape(-1, 3), - np.ones((num_bbox * 8, 1))], axis=-1) - lidar2img_rt = copy.deepcopy(lidar2img_rt).reshape(4, 4) - if isinstance(lidar2img_rt, torch.Tensor): - lidar2img_rt = lidar2img_rt.cpu().numpy() - pts_2d = pts_4d @ lidar2img_rt.T - - pts_2d[:, 2] = np.clip(pts_2d[:, 2], a_min=1e-5, a_max=1e5) - pts_2d[:, 0] /= pts_2d[:, 2] - pts_2d[:, 1] /= pts_2d[:, 2] - imgfov_pts_2d = pts_2d[..., :2].reshape(num_bbox, 8, 2) - - return plot_rect3d_on_img(img, num_bbox, imgfov_pts_2d, color, thickness) - - -# TODO: remove third parameter in all functions here in favour of img_metas -def draw_depth_bbox3d_on_img(bboxes3d, - raw_img, - calibs, - img_metas, - color=(0, 255, 0), - thickness=1): - """Project the 3D bbox on 2D plane and draw on input image. 
- - Args: - bboxes3d (:obj:`DepthInstance3DBoxes`, shape=[M, 7]): - 3d bbox in depth coordinate system to visualize. - raw_img (numpy.array): The numpy array of image. - calibs (dict): Camera calibration information, Rt and K. - img_metas (dict): Used in coordinates transformation. - color (tuple[int], optional): The color to draw bboxes. - Default: (0, 255, 0). - thickness (int, optional): The thickness of bboxes. Default: 1. - """ - from mmdet3d.core.bbox import points_cam2img - from mmdet3d.models import apply_3d_transformation - - img = raw_img.copy() - img_metas = copy.deepcopy(img_metas) - corners_3d = bboxes3d.corners - num_bbox = corners_3d.shape[0] - points_3d = corners_3d.reshape(-1, 3) - - # first reverse the data transformations - xyz_depth = apply_3d_transformation( - points_3d, 'DEPTH', img_metas, reverse=True) - - # project to 2d to get image coords (uv) - uv_origin = points_cam2img(xyz_depth, - xyz_depth.new_tensor(img_metas['depth2img'])) - uv_origin = (uv_origin - 1).round() - imgfov_pts_2d = uv_origin[..., :2].reshape(num_bbox, 8, 2).numpy() - - return plot_rect3d_on_img(img, num_bbox, imgfov_pts_2d, color, thickness) - - -def draw_camera_bbox3d_on_img(bboxes3d, - raw_img, - cam2img, - img_metas, - color=(0, 255, 0), - thickness=1): - """Project the 3D bbox on 2D plane and draw on input image. - - Args: - bboxes3d (:obj:`CameraInstance3DBoxes`, shape=[M, 7]): - 3d bbox in camera coordinate system to visualize. - raw_img (numpy.array): The numpy array of image. - cam2img (dict): Camera intrinsic matrix, - denoted as `K` in depth bbox coordinate system. - img_metas (dict): Useless here. - color (tuple[int], optional): The color to draw bboxes. - Default: (0, 255, 0). - thickness (int, optional): The thickness of bboxes. Default: 1. - """ - from mmdet3d.core.bbox import points_cam2img - - img = raw_img.copy() - cam2img = copy.deepcopy(cam2img) - corners_3d = bboxes3d.corners - num_bbox = corners_3d.shape[0] - points_3d = corners_3d.reshape(-1, 3) - if not isinstance(cam2img, torch.Tensor): - cam2img = torch.from_numpy(np.array(cam2img)) - - assert (cam2img.shape == torch.Size([3, 3]) - or cam2img.shape == torch.Size([4, 4])) - cam2img = cam2img.float().cpu() - - # project to 2d to get image coords (uv) - uv_origin = points_cam2img(points_3d, cam2img) - uv_origin = (uv_origin - 1).round() - imgfov_pts_2d = uv_origin[..., :2].reshape(num_bbox, 8, 2).numpy() - - return plot_rect3d_on_img(img, num_bbox, imgfov_pts_2d, color, thickness) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/open3d_vis.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/open3d_vis.py deleted file mode 100644 index c63b6eca..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/open3d_vis.py +++ /dev/null @@ -1,460 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import numpy as np -import torch - -try: - import open3d as o3d - from open3d import geometry -except ImportError: - raise ImportError( - 'Please run "pip install open3d" to install open3d first.') - - -def _draw_points(points, - vis, - points_size=2, - point_color=(0.5, 0.5, 0.5), - mode='xyz'): - """Draw points on visualizer. - - Args: - points (numpy.array | torch.tensor, shape=[N, 3+C]): - points to visualize. - vis (:obj:`open3d.visualization.Visualizer`): open3d visualizer. - points_size (int, optional): the size of points to show on visualizer. - Default: 2. - point_color (tuple[float], optional): the color of points. 
- Default: (0.5, 0.5, 0.5). - mode (str, optional): indicate type of the input points, - available mode ['xyz', 'xyzrgb']. Default: 'xyz'. - - Returns: - tuple: points, color of each point. - """ - vis.get_render_option().point_size = points_size # set points size - if isinstance(points, torch.Tensor): - points = points.cpu().numpy() - - points = points.copy() - pcd = geometry.PointCloud() - if mode == 'xyz': - pcd.points = o3d.utility.Vector3dVector(points[:, :3]) - points_colors = np.tile(np.array(point_color), (points.shape[0], 1)) - elif mode == 'xyzrgb': - pcd.points = o3d.utility.Vector3dVector(points[:, :3]) - points_colors = points[:, 3:6] - # normalize to [0, 1] for open3d drawing - if not ((points_colors >= 0.0) & (points_colors <= 1.0)).all(): - points_colors /= 255.0 - else: - raise NotImplementedError - - pcd.colors = o3d.utility.Vector3dVector(points_colors) - vis.add_geometry(pcd) - - return pcd, points_colors - - -def _draw_bboxes(bbox3d, - vis, - points_colors, - pcd=None, - bbox_color=(0, 1, 0), - points_in_box_color=(1, 0, 0), - rot_axis=2, - center_mode='lidar_bottom', - mode='xyz'): - """Draw bbox on visualizer and change the color of points inside bbox3d. - - Args: - bbox3d (numpy.array | torch.tensor, shape=[M, 7]): - 3d bbox (x, y, z, x_size, y_size, z_size, yaw) to visualize. - vis (:obj:`open3d.visualization.Visualizer`): open3d visualizer. - points_colors (numpy.array): color of each points. - pcd (:obj:`open3d.geometry.PointCloud`, optional): point cloud. - Default: None. - bbox_color (tuple[float], optional): the color of bbox. - Default: (0, 1, 0). - points_in_box_color (tuple[float], optional): - the color of points inside bbox3d. Default: (1, 0, 0). - rot_axis (int, optional): rotation axis of bbox. Default: 2. - center_mode (bool, optional): indicate the center of bbox is - bottom center or gravity center. available mode - ['lidar_bottom', 'camera_bottom']. Default: 'lidar_bottom'. - mode (str, optional): indicate type of the input points, - available mode ['xyz', 'xyzrgb']. Default: 'xyz'. - """ - if isinstance(bbox3d, torch.Tensor): - bbox3d = bbox3d.cpu().numpy() - bbox3d = bbox3d.copy() - - in_box_color = np.array(points_in_box_color) - for i in range(len(bbox3d)): - center = bbox3d[i, 0:3] - dim = bbox3d[i, 3:6] - yaw = np.zeros(3) - yaw[rot_axis] = bbox3d[i, 6] - rot_mat = geometry.get_rotation_matrix_from_xyz(yaw) - - if center_mode == 'lidar_bottom': - center[rot_axis] += dim[ - rot_axis] / 2 # bottom center to gravity center - elif center_mode == 'camera_bottom': - center[rot_axis] -= dim[ - rot_axis] / 2 # bottom center to gravity center - box3d = geometry.OrientedBoundingBox(center, rot_mat, dim) - - line_set = geometry.LineSet.create_from_oriented_bounding_box(box3d) - line_set.paint_uniform_color(bbox_color) - # draw bboxes on visualizer - vis.add_geometry(line_set) - - # change the color of points which are in box - if pcd is not None and mode == 'xyz': - indices = box3d.get_point_indices_within_bounding_box(pcd.points) - points_colors[indices] = in_box_color - - # update points colors - if pcd is not None: - pcd.colors = o3d.utility.Vector3dVector(points_colors) - vis.update_geometry(pcd) - - -def show_pts_boxes(points, - bbox3d=None, - show=True, - save_path=None, - points_size=2, - point_color=(0.5, 0.5, 0.5), - bbox_color=(0, 1, 0), - points_in_box_color=(1, 0, 0), - rot_axis=2, - center_mode='lidar_bottom', - mode='xyz'): - """Draw bbox and points on visualizer. 
- - Args: - points (numpy.array | torch.tensor, shape=[N, 3+C]): - points to visualize. - bbox3d (numpy.array | torch.tensor, shape=[M, 7], optional): - 3D bbox (x, y, z, x_size, y_size, z_size, yaw) to visualize. - Defaults to None. - show (bool, optional): whether to show the visualization results. - Default: True. - save_path (str, optional): path to save visualized results. - Default: None. - points_size (int, optional): the size of points to show on visualizer. - Default: 2. - point_color (tuple[float], optional): the color of points. - Default: (0.5, 0.5, 0.5). - bbox_color (tuple[float], optional): the color of bbox. - Default: (0, 1, 0). - points_in_box_color (tuple[float], optional): - the color of points which are in bbox3d. Default: (1, 0, 0). - rot_axis (int, optional): rotation axis of bbox. Default: 2. - center_mode (bool, optional): indicate the center of bbox is bottom - center or gravity center. available mode - ['lidar_bottom', 'camera_bottom']. Default: 'lidar_bottom'. - mode (str, optional): indicate type of the input points, available - mode ['xyz', 'xyzrgb']. Default: 'xyz'. - """ - # TODO: support score and class info - assert 0 <= rot_axis <= 2 - - # init visualizer - vis = o3d.visualization.Visualizer() - vis.create_window() - mesh_frame = geometry.TriangleMesh.create_coordinate_frame( - size=1, origin=[0, 0, 0]) # create coordinate frame - vis.add_geometry(mesh_frame) - - # draw points - pcd, points_colors = _draw_points(points, vis, points_size, point_color, - mode) - - # draw boxes - if bbox3d is not None: - _draw_bboxes(bbox3d, vis, points_colors, pcd, bbox_color, - points_in_box_color, rot_axis, center_mode, mode) - - if show: - vis.run() - - if save_path is not None: - vis.capture_screen_image(save_path) - - vis.destroy_window() - - -def _draw_bboxes_ind(bbox3d, - vis, - indices, - points_colors, - pcd=None, - bbox_color=(0, 1, 0), - points_in_box_color=(1, 0, 0), - rot_axis=2, - center_mode='lidar_bottom', - mode='xyz'): - """Draw bbox on visualizer and change the color or points inside bbox3d - with indices. - - Args: - bbox3d (numpy.array | torch.tensor, shape=[M, 7]): - 3d bbox (x, y, z, x_size, y_size, z_size, yaw) to visualize. - vis (:obj:`open3d.visualization.Visualizer`): open3d visualizer. - indices (numpy.array | torch.tensor, shape=[N, M]): - indicate which bbox3d that each point lies in. - points_colors (numpy.array): color of each points. - pcd (:obj:`open3d.geometry.PointCloud`, optional): point cloud. - Default: None. - bbox_color (tuple[float], optional): the color of bbox. - Default: (0, 1, 0). - points_in_box_color (tuple[float], optional): - the color of points which are in bbox3d. Default: (1, 0, 0). - rot_axis (int, optional): rotation axis of bbox. Default: 2. - center_mode (bool, optional): indicate the center of bbox is - bottom center or gravity center. available mode - ['lidar_bottom', 'camera_bottom']. Default: 'lidar_bottom'. - mode (str, optional): indicate type of the input points, - available mode ['xyz', 'xyzrgb']. Default: 'xyz'. 
- """ - if isinstance(bbox3d, torch.Tensor): - bbox3d = bbox3d.cpu().numpy() - if isinstance(indices, torch.Tensor): - indices = indices.cpu().numpy() - bbox3d = bbox3d.copy() - - in_box_color = np.array(points_in_box_color) - for i in range(len(bbox3d)): - center = bbox3d[i, 0:3] - dim = bbox3d[i, 3:6] - yaw = np.zeros(3) - # TODO: fix problem of current coordinate system - # dim[0], dim[1] = dim[1], dim[0] # for current coordinate - # yaw[rot_axis] = -(bbox3d[i, 6] - 0.5 * np.pi) - yaw[rot_axis] = -bbox3d[i, 6] - rot_mat = geometry.get_rotation_matrix_from_xyz(yaw) - if center_mode == 'lidar_bottom': - center[rot_axis] += dim[ - rot_axis] / 2 # bottom center to gravity center - elif center_mode == 'camera_bottom': - center[rot_axis] -= dim[ - rot_axis] / 2 # bottom center to gravity center - box3d = geometry.OrientedBoundingBox(center, rot_mat, dim) - - line_set = geometry.LineSet.create_from_oriented_bounding_box(box3d) - line_set.paint_uniform_color(bbox_color) - # draw bboxes on visualizer - vis.add_geometry(line_set) - - # change the color of points which are in box - if pcd is not None and mode == 'xyz': - points_colors[indices[:, i].astype(np.bool)] = in_box_color - - # update points colors - if pcd is not None: - pcd.colors = o3d.utility.Vector3dVector(points_colors) - vis.update_geometry(pcd) - - -def show_pts_index_boxes(points, - bbox3d=None, - show=True, - indices=None, - save_path=None, - points_size=2, - point_color=(0.5, 0.5, 0.5), - bbox_color=(0, 1, 0), - points_in_box_color=(1, 0, 0), - rot_axis=2, - center_mode='lidar_bottom', - mode='xyz'): - """Draw bbox and points on visualizer with indices that indicate which - bbox3d that each point lies in. - - Args: - points (numpy.array | torch.tensor, shape=[N, 3+C]): - points to visualize. - bbox3d (numpy.array | torch.tensor, shape=[M, 7]): - 3D bbox (x, y, z, x_size, y_size, z_size, yaw) to visualize. - Defaults to None. - show (bool, optional): whether to show the visualization results. - Default: True. - indices (numpy.array | torch.tensor, shape=[N, M], optional): - indicate which bbox3d that each point lies in. Default: None. - save_path (str, optional): path to save visualized results. - Default: None. - points_size (int, optional): the size of points to show on visualizer. - Default: 2. - point_color (tuple[float], optional): the color of points. - Default: (0.5, 0.5, 0.5). - bbox_color (tuple[float], optional): the color of bbox. - Default: (0, 1, 0). - points_in_box_color (tuple[float], optional): - the color of points which are in bbox3d. Default: (1, 0, 0). - rot_axis (int, optional): rotation axis of bbox. Default: 2. - center_mode (bool, optional): indicate the center of bbox is - bottom center or gravity center. available mode - ['lidar_bottom', 'camera_bottom']. Default: 'lidar_bottom'. - mode (str, optional): indicate type of the input points, - available mode ['xyz', 'xyzrgb']. Default: 'xyz'. 
- """ - # TODO: support score and class info - assert 0 <= rot_axis <= 2 - - # init visualizer - vis = o3d.visualization.Visualizer() - vis.create_window() - mesh_frame = geometry.TriangleMesh.create_coordinate_frame( - size=1, origin=[0, 0, 0]) # create coordinate frame - vis.add_geometry(mesh_frame) - - # draw points - pcd, points_colors = _draw_points(points, vis, points_size, point_color, - mode) - - # draw boxes - if bbox3d is not None: - _draw_bboxes_ind(bbox3d, vis, indices, points_colors, pcd, bbox_color, - points_in_box_color, rot_axis, center_mode, mode) - - if show: - vis.run() - - if save_path is not None: - vis.capture_screen_image(save_path) - - vis.destroy_window() - - -class Visualizer(object): - r"""Online visualizer implemented with Open3d. - - Args: - points (numpy.array, shape=[N, 3+C]): Points to visualize. The Points - cloud is in mode of Coord3DMode.DEPTH (please refer to - core.structures.coord_3d_mode). - bbox3d (numpy.array, shape=[M, 7], optional): 3D bbox - (x, y, z, x_size, y_size, z_size, yaw) to visualize. - The 3D bbox is in mode of Box3DMode.DEPTH with - gravity_center (please refer to core.structures.box_3d_mode). - Default: None. - save_path (str, optional): path to save visualized results. - Default: None. - points_size (int, optional): the size of points to show on visualizer. - Default: 2. - point_color (tuple[float], optional): the color of points. - Default: (0.5, 0.5, 0.5). - bbox_color (tuple[float], optional): the color of bbox. - Default: (0, 1, 0). - points_in_box_color (tuple[float], optional): - the color of points which are in bbox3d. Default: (1, 0, 0). - rot_axis (int, optional): rotation axis of bbox. Default: 2. - center_mode (bool, optional): indicate the center of bbox is - bottom center or gravity center. available mode - ['lidar_bottom', 'camera_bottom']. Default: 'lidar_bottom'. - mode (str, optional): indicate type of the input points, - available mode ['xyz', 'xyzrgb']. Default: 'xyz'. - """ - - def __init__(self, - points, - bbox3d=None, - save_path=None, - points_size=2, - point_color=(0.5, 0.5, 0.5), - bbox_color=(0, 1, 0), - points_in_box_color=(1, 0, 0), - rot_axis=2, - center_mode='lidar_bottom', - mode='xyz'): - super(Visualizer, self).__init__() - assert 0 <= rot_axis <= 2 - - # init visualizer - self.o3d_visualizer = o3d.visualization.Visualizer() - self.o3d_visualizer.create_window() - mesh_frame = geometry.TriangleMesh.create_coordinate_frame( - size=1, origin=[0, 0, 0]) # create coordinate frame - self.o3d_visualizer.add_geometry(mesh_frame) - - self.points_size = points_size - self.point_color = point_color - self.bbox_color = bbox_color - self.points_in_box_color = points_in_box_color - self.rot_axis = rot_axis - self.center_mode = center_mode - self.mode = mode - self.seg_num = 0 - - # draw points - if points is not None: - self.pcd, self.points_colors = _draw_points( - points, self.o3d_visualizer, points_size, point_color, mode) - - # draw boxes - if bbox3d is not None: - _draw_bboxes(bbox3d, self.o3d_visualizer, self.points_colors, - self.pcd, bbox_color, points_in_box_color, rot_axis, - center_mode, mode) - - def add_bboxes(self, bbox3d, bbox_color=None, points_in_box_color=None): - """Add bounding box to visualizer. - - Args: - bbox3d (numpy.array, shape=[M, 7]): - 3D bbox (x, y, z, x_size, y_size, z_size, yaw) - to be visualized. The 3d bbox is in mode of - Box3DMode.DEPTH with gravity_center (please refer to - core.structures.box_3d_mode). - bbox_color (tuple[float]): the color of bbox. Default: None. 
- points_in_box_color (tuple[float]): the color of points which - are in bbox3d. Default: None. - """ - if bbox_color is None: - bbox_color = self.bbox_color - if points_in_box_color is None: - points_in_box_color = self.points_in_box_color - _draw_bboxes(bbox3d, self.o3d_visualizer, self.points_colors, self.pcd, - bbox_color, points_in_box_color, self.rot_axis, - self.center_mode, self.mode) - - def add_seg_mask(self, seg_mask_colors): - """Add segmentation mask to visualizer via per-point colorization. - - Args: - seg_mask_colors (numpy.array, shape=[N, 6]): - The segmentation mask whose first 3 dims are point coordinates - and last 3 dims are converted colors. - """ - # we can't draw the colors on existing points - # in case gt and pred mask would overlap - # instead we set a large offset along x-axis for each seg mask - self.seg_num += 1 - offset = (np.array(self.pcd.points).max(0) - - np.array(self.pcd.points).min(0))[0] * 1.2 * self.seg_num - mesh_frame = geometry.TriangleMesh.create_coordinate_frame( - size=1, origin=[offset, 0, 0]) # create coordinate frame for seg - self.o3d_visualizer.add_geometry(mesh_frame) - seg_points = copy.deepcopy(seg_mask_colors) - seg_points[:, 0] += offset - _draw_points( - seg_points, self.o3d_visualizer, self.points_size, mode='xyzrgb') - - def show(self, save_path=None): - """Visualize the points cloud. - - Args: - save_path (str, optional): path to save image. Default: None. - """ - - self.o3d_visualizer.run() - - if save_path is not None: - self.o3d_visualizer.capture_screen_image(save_path) - - self.o3d_visualizer.destroy_window() - return diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/show_result.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/show_result.py deleted file mode 100644 index aa732cf4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/visualizer/show_result.py +++ /dev/null @@ -1,291 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -import mmcv -import numpy as np -import trimesh - -from .image_vis import (draw_camera_bbox3d_on_img, draw_depth_bbox3d_on_img, - draw_lidar_bbox3d_on_img) - - -def _write_obj(points, out_filename): - """Write points into ``obj`` format for meshlab visualization. - - Args: - points (np.ndarray): Points in shape (N, dim). - out_filename (str): Filename to be saved. - """ - N = points.shape[0] - fout = open(out_filename, 'w') - for i in range(N): - if points.shape[1] == 6: - c = points[i, 3:].astype(int) - fout.write( - 'v %f %f %f %d %d %d\n' % - (points[i, 0], points[i, 1], points[i, 2], c[0], c[1], c[2])) - - else: - fout.write('v %f %f %f\n' % - (points[i, 0], points[i, 1], points[i, 2])) - fout.close() - - -def _write_oriented_bbox(scene_bbox, out_filename): - """Export oriented (around Z axis) scene bbox to meshes. - - Args: - scene_bbox(list[ndarray] or ndarray): xyz pos of center and - 3 lengths (x_size, y_size, z_size) and heading angle around Z axis. - Y forward, X right, Z upward. heading angle of positive X is 0, - heading angle of positive Y is 90 degrees. - out_filename(str): Filename. 
- """ - - def heading2rotmat(heading_angle): - rotmat = np.zeros((3, 3)) - rotmat[2, 2] = 1 - cosval = np.cos(heading_angle) - sinval = np.sin(heading_angle) - rotmat[0:2, 0:2] = np.array([[cosval, -sinval], [sinval, cosval]]) - return rotmat - - def convert_oriented_box_to_trimesh_fmt(box): - ctr = box[:3] - lengths = box[3:6] - trns = np.eye(4) - trns[0:3, 3] = ctr - trns[3, 3] = 1.0 - trns[0:3, 0:3] = heading2rotmat(box[6]) - box_trimesh_fmt = trimesh.creation.box(lengths, trns) - return box_trimesh_fmt - - if len(scene_bbox) == 0: - scene_bbox = np.zeros((1, 7)) - scene = trimesh.scene.Scene() - for box in scene_bbox: - scene.add_geometry(convert_oriented_box_to_trimesh_fmt(box)) - - mesh_list = trimesh.util.concatenate(scene.dump()) - # save to obj file - trimesh.io.export.export_mesh(mesh_list, out_filename, file_type='obj') - - return - - -def show_result(points, - gt_bboxes, - pred_bboxes, - out_dir, - filename, - show=False, - snapshot=False, - pred_labels=None): - """Convert results into format that is directly readable for meshlab. - - Args: - points (np.ndarray): Points. - gt_bboxes (np.ndarray): Ground truth boxes. - pred_bboxes (np.ndarray): Predicted boxes. - out_dir (str): Path of output directory - filename (str): Filename of the current frame. - show (bool, optional): Visualize the results online. Defaults to False. - snapshot (bool, optional): Whether to save the online results. - Defaults to False. - pred_labels (np.ndarray, optional): Predicted labels of boxes. - Defaults to None. - """ - result_path = osp.join(out_dir, filename) - mmcv.mkdir_or_exist(result_path) - - if show: - from .open3d_vis import Visualizer - - vis = Visualizer(points) - if pred_bboxes is not None: - if pred_labels is None: - vis.add_bboxes(bbox3d=pred_bboxes) - else: - palette = np.random.randint( - 0, 255, size=(pred_labels.max() + 1, 3)) / 256 - labelDict = {} - for j in range(len(pred_labels)): - i = int(pred_labels[j].numpy()) - if labelDict.get(i) is None: - labelDict[i] = [] - labelDict[i].append(pred_bboxes[j]) - for i in labelDict: - vis.add_bboxes( - bbox3d=np.array(labelDict[i]), - bbox_color=palette[i], - points_in_box_color=palette[i]) - - if gt_bboxes is not None: - vis.add_bboxes(bbox3d=gt_bboxes, bbox_color=(0, 0, 1)) - show_path = osp.join(result_path, - f'{filename}_online.png') if snapshot else None - vis.show(show_path) - - if points is not None: - _write_obj(points, osp.join(result_path, f'{filename}_points.obj')) - - if gt_bboxes is not None: - # bottom center to gravity center - gt_bboxes[..., 2] += gt_bboxes[..., 5] / 2 - - _write_oriented_bbox(gt_bboxes, - osp.join(result_path, f'{filename}_gt.obj')) - - if pred_bboxes is not None: - # bottom center to gravity center - pred_bboxes[..., 2] += pred_bboxes[..., 5] / 2 - - _write_oriented_bbox(pred_bboxes, - osp.join(result_path, f'{filename}_pred.obj')) - - -def show_seg_result(points, - gt_seg, - pred_seg, - out_dir, - filename, - palette, - ignore_index=None, - show=False, - snapshot=False): - """Convert results into format that is directly readable for meshlab. - - Args: - points (np.ndarray): Points. - gt_seg (np.ndarray): Ground truth segmentation mask. - pred_seg (np.ndarray): Predicted segmentation mask. - out_dir (str): Path of output directory - filename (str): Filename of the current frame. - palette (np.ndarray): Mapping between class labels and colors. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. Defaults to None. - show (bool, optional): Visualize the results online. 
Defaults to False. - snapshot (bool, optional): Whether to save the online results. - Defaults to False. - """ - # we need 3D coordinates to visualize segmentation mask - if gt_seg is not None or pred_seg is not None: - assert points is not None, \ - '3D coordinates are required for segmentation visualization' - - # filter out ignored points - if gt_seg is not None and ignore_index is not None: - if points is not None: - points = points[gt_seg != ignore_index] - if pred_seg is not None: - pred_seg = pred_seg[gt_seg != ignore_index] - gt_seg = gt_seg[gt_seg != ignore_index] - - if gt_seg is not None: - gt_seg_color = palette[gt_seg] - gt_seg_color = np.concatenate([points[:, :3], gt_seg_color], axis=1) - if pred_seg is not None: - pred_seg_color = palette[pred_seg] - pred_seg_color = np.concatenate([points[:, :3], pred_seg_color], - axis=1) - - result_path = osp.join(out_dir, filename) - mmcv.mkdir_or_exist(result_path) - - # online visualization of segmentation mask - # we show three masks in a row, scene_points, gt_mask, pred_mask - if show: - from .open3d_vis import Visualizer - mode = 'xyzrgb' if points.shape[1] == 6 else 'xyz' - vis = Visualizer(points, mode=mode) - if gt_seg is not None: - vis.add_seg_mask(gt_seg_color) - if pred_seg is not None: - vis.add_seg_mask(pred_seg_color) - show_path = osp.join(result_path, - f'{filename}_online.png') if snapshot else None - vis.show(show_path) - - if points is not None: - _write_obj(points, osp.join(result_path, f'{filename}_points.obj')) - - if gt_seg is not None: - _write_obj(gt_seg_color, osp.join(result_path, f'{filename}_gt.obj')) - - if pred_seg is not None: - _write_obj(pred_seg_color, osp.join(result_path, - f'{filename}_pred.obj')) - - -def show_multi_modality_result(img, - gt_bboxes, - pred_bboxes, - proj_mat, - out_dir, - filename, - box_mode='lidar', - img_metas=None, - show=False, - gt_bbox_color=(61, 102, 255), - pred_bbox_color=(241, 101, 72)): - """Convert multi-modality detection results into 2D results. - - Project the predicted 3D bbox to 2D image plane and visualize them. - - Args: - img (np.ndarray): The numpy array of image in cv2 fashion. - gt_bboxes (:obj:`BaseInstance3DBoxes`): Ground truth boxes. - pred_bboxes (:obj:`BaseInstance3DBoxes`): Predicted boxes. - proj_mat (numpy.array, shape=[4, 4]): The projection matrix - according to the camera intrinsic parameters. - out_dir (str): Path of output directory. - filename (str): Filename of the current frame. - box_mode (str, optional): Coordinate system the boxes are in. - Should be one of 'depth', 'lidar' and 'camera'. - Defaults to 'lidar'. - img_metas (dict, optional): Used in projecting depth bbox. - Defaults to None. - show (bool, optional): Visualize the results online. Defaults to False. - gt_bbox_color (str or tuple(int), optional): Color of bbox lines. - The tuple of color should be in BGR order. Default: (255, 102, 61). - pred_bbox_color (str or tuple(int), optional): Color of bbox lines. - The tuple of color should be in BGR order. Default: (72, 101, 241). 
- """ - if box_mode == 'depth': - draw_bbox = draw_depth_bbox3d_on_img - elif box_mode == 'lidar': - draw_bbox = draw_lidar_bbox3d_on_img - elif box_mode == 'camera': - draw_bbox = draw_camera_bbox3d_on_img - else: - raise NotImplementedError(f'unsupported box mode {box_mode}') - - result_path = osp.join(out_dir, filename) - mmcv.mkdir_or_exist(result_path) - - if show: - show_img = img.copy() - if gt_bboxes is not None: - show_img = draw_bbox( - gt_bboxes, show_img, proj_mat, img_metas, color=gt_bbox_color) - if pred_bboxes is not None: - show_img = draw_bbox( - pred_bboxes, - show_img, - proj_mat, - img_metas, - color=pred_bbox_color) - mmcv.imshow(show_img, win_name='project_bbox3d_img', wait_time=0) - - if img is not None: - mmcv.imwrite(img, osp.join(result_path, f'{filename}_img.png')) - - if gt_bboxes is not None: - gt_img = draw_bbox( - gt_bboxes, img, proj_mat, img_metas, color=gt_bbox_color) - mmcv.imwrite(gt_img, osp.join(result_path, f'{filename}_gt.png')) - - if pred_bboxes is not None: - pred_img = draw_bbox( - pred_bboxes, img, proj_mat, img_metas, color=pred_bbox_color) - mmcv.imwrite(pred_img, osp.join(result_path, f'{filename}_pred.png')) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/voxel/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/voxel/__init__.py deleted file mode 100644 index 8d695437..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/voxel/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import build_voxel_generator -from .voxel_generator import VoxelGenerator - -__all__ = ['build_voxel_generator', 'VoxelGenerator'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/voxel/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/voxel/builder.py deleted file mode 100644 index bc663ee4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/voxel/builder.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv - -from . import voxel_generator - - -def build_voxel_generator(cfg, **kwargs): - """Builder of voxel generator.""" - if isinstance(cfg, voxel_generator.VoxelGenerator): - return cfg - elif isinstance(cfg, dict): - return mmcv.runner.obj_from_dict( - cfg, voxel_generator, default_args=kwargs) - else: - raise TypeError('Invalid type {} for building a sampler'.format( - type(cfg))) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/voxel/voxel_generator.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/voxel/voxel_generator.py deleted file mode 100644 index 404f2cdc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/core/voxel/voxel_generator.py +++ /dev/null @@ -1,280 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numba -import numpy as np - - -class VoxelGenerator(object): - """Voxel generator in numpy implementation. - - Args: - voxel_size (list[float]): Size of a single voxel - point_cloud_range (list[float]): Range of points - max_num_points (int): Maximum number of points in a single voxel - max_voxels (int, optional): Maximum number of voxels. - Defaults to 20000. 
- """ - - def __init__(self, - voxel_size, - point_cloud_range, - max_num_points, - max_voxels=20000): - - point_cloud_range = np.array(point_cloud_range, dtype=np.float32) - # [0, -40, -3, 70.4, 40, 1] - voxel_size = np.array(voxel_size, dtype=np.float32) - grid_size = (point_cloud_range[3:] - - point_cloud_range[:3]) / voxel_size - grid_size = np.round(grid_size).astype(np.int64) - - self._voxel_size = voxel_size - self._point_cloud_range = point_cloud_range - self._max_num_points = max_num_points - self._max_voxels = max_voxels - self._grid_size = grid_size - - def generate(self, points): - """Generate voxels given points.""" - return points_to_voxel(points, self._voxel_size, - self._point_cloud_range, self._max_num_points, - True, self._max_voxels) - - @property - def voxel_size(self): - """list[float]: Size of a single voxel.""" - return self._voxel_size - - @property - def max_num_points_per_voxel(self): - """int: Maximum number of points per voxel.""" - return self._max_num_points - - @property - def point_cloud_range(self): - """list[float]: Range of point cloud.""" - return self._point_cloud_range - - @property - def grid_size(self): - """np.ndarray: The size of grids.""" - return self._grid_size - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - indent = ' ' * (len(repr_str) + 1) - repr_str += f'(voxel_size={self._voxel_size},\n' - repr_str += indent + 'point_cloud_range=' - repr_str += f'{self._point_cloud_range.tolist()},\n' - repr_str += indent + f'max_num_points={self._max_num_points},\n' - repr_str += indent + f'max_voxels={self._max_voxels},\n' - repr_str += indent + f'grid_size={self._grid_size.tolist()}' - repr_str += ')' - return repr_str - - -def points_to_voxel(points, - voxel_size, - coors_range, - max_points=35, - reverse_index=True, - max_voxels=20000): - """convert kitti points(N, >=3) to voxels. - - Args: - points (np.ndarray): [N, ndim]. points[:, :3] contain xyz points and - points[:, 3:] contain other information such as reflectivity. - voxel_size (list, tuple, np.ndarray): [3] xyz, indicate voxel size - coors_range (list[float | tuple[float] | ndarray]): Voxel range. - format: xyzxyz, minmax - max_points (int): Indicate maximum points contained in a voxel. - reverse_index (bool): Whether return reversed coordinates. - if points has xyz format and reverse_index is True, output - coordinates will be zyx format, but points in features always - xyz format. - max_voxels (int): Maximum number of voxels this function creates. - For second, 20000 is a good choice. Points should be shuffled for - randomness before this function because max_voxels drops points. - - Returns: - tuple[np.ndarray]: - voxels: [M, max_points, ndim] float tensor. only contain points. - coordinates: [M, 3] int32 tensor. - num_points_per_voxel: [M] int32 tensor. - """ - if not isinstance(voxel_size, np.ndarray): - voxel_size = np.array(voxel_size, dtype=points.dtype) - if not isinstance(coors_range, np.ndarray): - coors_range = np.array(coors_range, dtype=points.dtype) - voxelmap_shape = (coors_range[3:] - coors_range[:3]) / voxel_size - voxelmap_shape = tuple(np.round(voxelmap_shape).astype(np.int32).tolist()) - if reverse_index: - voxelmap_shape = voxelmap_shape[::-1] - # don't create large array in jit(nopython=True) code. 
- num_points_per_voxel = np.zeros(shape=(max_voxels, ), dtype=np.int32) - coor_to_voxelidx = -np.ones(shape=voxelmap_shape, dtype=np.int32) - voxels = np.zeros( - shape=(max_voxels, max_points, points.shape[-1]), dtype=points.dtype) - coors = np.zeros(shape=(max_voxels, 3), dtype=np.int32) - if reverse_index: - voxel_num = _points_to_voxel_reverse_kernel( - points, voxel_size, coors_range, num_points_per_voxel, - coor_to_voxelidx, voxels, coors, max_points, max_voxels) - - else: - voxel_num = _points_to_voxel_kernel(points, voxel_size, coors_range, - num_points_per_voxel, - coor_to_voxelidx, voxels, coors, - max_points, max_voxels) - - coors = coors[:voxel_num] - voxels = voxels[:voxel_num] - num_points_per_voxel = num_points_per_voxel[:voxel_num] - - return voxels, coors, num_points_per_voxel - - -@numba.jit(nopython=True) -def _points_to_voxel_reverse_kernel(points, - voxel_size, - coors_range, - num_points_per_voxel, - coor_to_voxelidx, - voxels, - coors, - max_points=35, - max_voxels=20000): - """convert kitti points(N, >=3) to voxels. - - Args: - points (np.ndarray): [N, ndim]. points[:, :3] contain xyz points and - points[:, 3:] contain other information such as reflectivity. - voxel_size (list, tuple, np.ndarray): [3] xyz, indicate voxel size - coors_range (list[float | tuple[float] | ndarray]): Range of voxels. - format: xyzxyz, minmax - num_points_per_voxel (int): Number of points per voxel. - coor_to_voxel_idx (np.ndarray): A voxel grid of shape (D, H, W), - which has the same shape as the complete voxel map. It indicates - the index of each corresponding voxel. - voxels (np.ndarray): Created empty voxels. - coors (np.ndarray): Created coordinates of each voxel. - max_points (int): Indicate maximum points contained in a voxel. - max_voxels (int): Maximum number of voxels this function create. - for second, 20000 is a good choice. Points should be shuffled for - randomness before this function because max_voxels drops points. - - Returns: - tuple[np.ndarray]: - voxels: Shape [M, max_points, ndim], only contain points. - coordinates: Shape [M, 3]. - num_points_per_voxel: Shape [M]. - """ - # put all computations to one loop. - # we shouldn't create large array in main jit code, otherwise - # reduce performance - N = points.shape[0] - # ndim = points.shape[1] - 1 - ndim = 3 - ndim_minus_1 = ndim - 1 - grid_size = (coors_range[3:] - coors_range[:3]) / voxel_size - # np.round(grid_size) - # grid_size = np.round(grid_size).astype(np.int64)(np.int32) - grid_size = np.round(grid_size, 0, grid_size).astype(np.int32) - coor = np.zeros(shape=(3, ), dtype=np.int32) - voxel_num = 0 - failed = False - for i in range(N): - failed = False - for j in range(ndim): - c = np.floor((points[i, j] - coors_range[j]) / voxel_size[j]) - if c < 0 or c >= grid_size[j]: - failed = True - break - coor[ndim_minus_1 - j] = c - if failed: - continue - voxelidx = coor_to_voxelidx[coor[0], coor[1], coor[2]] - if voxelidx == -1: - voxelidx = voxel_num - if voxel_num >= max_voxels: - continue - voxel_num += 1 - coor_to_voxelidx[coor[0], coor[1], coor[2]] = voxelidx - coors[voxelidx] = coor - num = num_points_per_voxel[voxelidx] - if num < max_points: - voxels[voxelidx, num] = points[i] - num_points_per_voxel[voxelidx] += 1 - return voxel_num - - -@numba.jit(nopython=True) -def _points_to_voxel_kernel(points, - voxel_size, - coors_range, - num_points_per_voxel, - coor_to_voxelidx, - voxels, - coors, - max_points=35, - max_voxels=20000): - """convert kitti points(N, >=3) to voxels. 
- - Args: - points (np.ndarray): [N, ndim]. points[:, :3] contain xyz points and - points[:, 3:] contain other information such as reflectivity. - voxel_size (list, tuple, np.ndarray): [3] xyz, indicate voxel size. - coors_range (list[float | tuple[float] | ndarray]): Range of voxels. - format: xyzxyz, minmax - num_points_per_voxel (int): Number of points per voxel. - coor_to_voxel_idx (np.ndarray): A voxel grid of shape (D, H, W), - which has the same shape as the complete voxel map. It indicates - the index of each corresponding voxel. - voxels (np.ndarray): Created empty voxels. - coors (np.ndarray): Created coordinates of each voxel. - max_points (int): Indicate maximum points contained in a voxel. - max_voxels (int): Maximum number of voxels this function create. - for second, 20000 is a good choice. Points should be shuffled for - randomness before this function because max_voxels drops points. - - Returns: - tuple[np.ndarray]: - voxels: Shape [M, max_points, ndim], only contain points. - coordinates: Shape [M, 3]. - num_points_per_voxel: Shape [M]. - """ - N = points.shape[0] - # ndim = points.shape[1] - 1 - ndim = 3 - grid_size = (coors_range[3:] - coors_range[:3]) / voxel_size - # grid_size = np.round(grid_size).astype(np.int64)(np.int32) - grid_size = np.round(grid_size, 0, grid_size).astype(np.int32) - - # lower_bound = coors_range[:3] - # upper_bound = coors_range[3:] - coor = np.zeros(shape=(3, ), dtype=np.int32) - voxel_num = 0 - failed = False - for i in range(N): - failed = False - for j in range(ndim): - c = np.floor((points[i, j] - coors_range[j]) / voxel_size[j]) - if c < 0 or c >= grid_size[j]: - failed = True - break - coor[j] = c - if failed: - continue - voxelidx = coor_to_voxelidx[coor[0], coor[1], coor[2]] - if voxelidx == -1: - voxelidx = voxel_num - if voxel_num >= max_voxels: - continue - voxel_num += 1 - coor_to_voxelidx[coor[0], coor[1], coor[2]] = voxelidx - coors[voxelidx] = coor - num = num_points_per_voxel[voxelidx] - if num < max_points: - voxels[voxelidx, num] = points[i] - num_points_per_voxel[voxelidx] += 1 - return voxel_num diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/__init__.py deleted file mode 100644 index 49cbc6b1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/__init__.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-from mmdet.datasets.builder import build_dataloader -from .builder import DATASETS, PIPELINES, build_dataset -from .custom_3d import Custom3DDataset -from .custom_3d_seg import Custom3DSegDataset -from .kitti_dataset import KittiDataset -from .kitti_mono_dataset import KittiMonoDataset -from .lyft_dataset import LyftDataset -from .nuscenes_dataset import NuScenesDataset -from .nuscenes_mono_dataset import NuScenesMonoDataset -# yapf: disable -from .pipelines import (AffineResize, BackgroundPointsFilter, GlobalAlignment, - GlobalRotScaleTrans, IndoorPatchPointSample, - IndoorPointSample, LoadAnnotations3D, - LoadPointsFromDict, LoadPointsFromFile, - LoadPointsFromMultiSweeps, MultiViewWrapper, - NormalizePointsColor, ObjectNameFilter, ObjectNoise, - ObjectRangeFilter, ObjectSample, PointSample, - PointShuffle, PointsRangeFilter, RandomDropPointsColor, - RandomFlip3D, RandomJitterPoints, RandomRotate, - RandomShiftScale, RangeLimitedRandomCrop, - VoxelBasedPointSampler) -# yapf: enable -from .s3dis_dataset import S3DISDataset, S3DISSegDataset -from .scannet_dataset import (ScanNetDataset, ScanNetInstanceSegDataset, - ScanNetSegDataset) -from .semantickitti_dataset import SemanticKITTIDataset -from .sunrgbd_dataset import SUNRGBDDataset -from .utils import get_loading_pipeline -from .waymo_dataset import WaymoDataset - -__all__ = [ - 'KittiDataset', 'KittiMonoDataset', 'build_dataloader', 'DATASETS', - 'build_dataset', 'NuScenesDataset', 'NuScenesMonoDataset', 'LyftDataset', - 'ObjectSample', 'RandomFlip3D', 'ObjectNoise', 'GlobalRotScaleTrans', - 'PointShuffle', 'ObjectRangeFilter', 'PointsRangeFilter', - 'LoadPointsFromFile', 'S3DISSegDataset', 'S3DISDataset', - 'NormalizePointsColor', 'IndoorPatchPointSample', 'IndoorPointSample', - 'PointSample', 'LoadAnnotations3D', 'GlobalAlignment', 'SUNRGBDDataset', - 'ScanNetDataset', 'ScanNetSegDataset', 'ScanNetInstanceSegDataset', - 'SemanticKITTIDataset', 'Custom3DDataset', 'Custom3DSegDataset', - 'LoadPointsFromMultiSweeps', 'WaymoDataset', 'BackgroundPointsFilter', - 'VoxelBasedPointSampler', 'get_loading_pipeline', 'RandomDropPointsColor', - 'RandomJitterPoints', 'ObjectNameFilter', 'AffineResize', - 'RandomShiftScale', 'LoadPointsFromDict', 'PIPELINES', - 'RangeLimitedRandomCrop', 'RandomRotate', 'MultiViewWrapper' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/builder.py deleted file mode 100644 index 157f6404..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/builder.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import platform - -from mmcv.utils import Registry, build_from_cfg - -from mmdet.datasets import DATASETS as MMDET_DATASETS -from mmdet.datasets.builder import _concat_dataset - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - base_soft_limit = rlimit[0] - hard_limit = rlimit[1] - soft_limit = min(max(4096, base_soft_limit), hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -OBJECTSAMPLERS = Registry('Object sampler') -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def build_dataset(cfg, default_args=None): - from mmdet3d.datasets.dataset_wrappers import CBGSDataset - from mmdet.datasets.dataset_wrappers import (ClassBalancedDataset, - ConcatDataset, RepeatDataset) - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'ConcatDataset': - dataset = ConcatDataset( - [build_dataset(c, default_args) for c in cfg['datasets']], - cfg.get('separate_eval', True)) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif cfg['type'] == 'ClassBalancedDataset': - dataset = ClassBalancedDataset( - build_dataset(cfg['dataset'], default_args), cfg['oversample_thr']) - elif cfg['type'] == 'CBGSDataset': - dataset = CBGSDataset(build_dataset(cfg['dataset'], default_args)) - elif isinstance(cfg.get('ann_file'), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - elif cfg['type'] in DATASETS._module_dict.keys(): - dataset = build_from_cfg(cfg, DATASETS, default_args) - else: - dataset = build_from_cfg(cfg, MMDET_DATASETS, default_args) - return dataset diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/custom_3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/custom_3d.py deleted file mode 100644 index 9c6e3517..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/custom_3d.py +++ /dev/null @@ -1,448 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import tempfile -import warnings -from os import path as osp - -import mmcv -import numpy as np -from torch.utils.data import Dataset - -from ..core.bbox import get_box_type -from .builder import DATASETS -from .pipelines import Compose -from .utils import extract_result_dict, get_loading_pipeline - - -@DATASETS.register_module() -class Custom3DDataset(Dataset): - """Customized 3D dataset. - - This is the base dataset of SUNRGB-D, ScanNet, nuScenes, and KITTI - dataset. - - .. code-block:: none - - [ - {'sample_idx': - 'lidar_points': {'lidar_path': velodyne_path, - .... - }, - 'annos': {'box_type_3d': (str) 'LiDAR/Camera/Depth' - 'gt_bboxes_3d': (n, 7) - 'gt_names': [list] - .... - } - 'calib': { .....} - 'images': { .....} - } - ] - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR'. 
Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - """ - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - modality=None, - box_type_3d='LiDAR', - filter_empty_gt=True, - test_mode=False, - file_client_args=dict(backend='disk')): - super().__init__() - self.data_root = data_root - self.ann_file = ann_file - self.test_mode = test_mode - self.modality = modality - self.filter_empty_gt = filter_empty_gt - self.box_type_3d, self.box_mode_3d = get_box_type(box_type_3d) - - self.CLASSES = self.get_classes(classes) - self.file_client = mmcv.FileClient(**file_client_args) - self.cat2id = {name: i for i, name in enumerate(self.CLASSES)} - - # load annotations - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(self.ann_file) as local_path: - self.data_infos = self.load_annotations(open(local_path, 'rb')) - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {self.ann_file} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - self.data_infos = self.load_annotations(self.ann_file) - - # process pipeline - if pipeline is not None: - self.pipeline = Compose(pipeline) - - # set group flag for the samplers - if not self.test_mode: - self._set_group_flag() - - def load_annotations(self, ann_file): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations. - """ - # loading data from a file-like object needs file format - return mmcv.load(ann_file, file_format='pkl') - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - file_name (str): Filename of point clouds. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - sample_idx = info['sample_idx'] - pts_filename = osp.join(self.data_root, - info['lidar_points']['lidar_path']) - - input_dict = dict( - pts_filename=pts_filename, - sample_idx=sample_idx, - file_name=pts_filename) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - if self.filter_empty_gt and ~(annos['gt_labels_3d'] != -1).any(): - return None - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: Annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): - 3D ground truth bboxes - - gt_labels_3d (np.ndarray): Labels of ground truths. - - gt_names (list[str]): Class names of ground truths. 
- """ - info = self.data_infos[index] - gt_bboxes_3d = info['annos']['gt_bboxes_3d'] - gt_names_3d = info['annos']['gt_names'] - gt_labels_3d = [] - for cat in gt_names_3d: - if cat in self.CLASSES: - gt_labels_3d.append(self.CLASSES.index(cat)) - else: - gt_labels_3d.append(-1) - gt_labels_3d = np.array(gt_labels_3d) - - # Obtain original box 3d type in info file - ori_box_type_3d = info['annos']['box_type_3d'] - ori_box_type_3d, _ = get_box_type(ori_box_type_3d) - - # turn original box type to target box type - gt_bboxes_3d = ori_box_type_3d( - gt_bboxes_3d, - box_dim=gt_bboxes_3d.shape[-1], - origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - gt_names=gt_names_3d) - return anns_results - - def pre_pipeline(self, results): - """Initialization before data preparation. - - Args: - results (dict): Dict before data preprocessing. - - - img_fields (list): Image fields. - - bbox3d_fields (list): 3D bounding boxes fields. - - pts_mask_fields (list): Mask fields of points. - - pts_seg_fields (list): Mask fields of point segments. - - bbox_fields (list): Fields of bounding boxes. - - mask_fields (list): Fields of masks. - - seg_fields (list): Segment fields. - - box_type_3d (str): 3D box type. - - box_mode_3d (str): 3D box mode. - """ - results['img_fields'] = [] - results['bbox3d_fields'] = [] - results['pts_mask_fields'] = [] - results['pts_seg_fields'] = [] - results['bbox_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - results['box_type_3d'] = self.box_type_3d - results['box_mode_3d'] = self.box_mode_3d - - def prepare_train_data(self, index): - """Training data preparation. - - Args: - index (int): Index for accessing the target data. - - Returns: - dict: Training data dict of the corresponding index. - """ - input_dict = self.get_data_info(index) - if input_dict is None: - return None - self.pre_pipeline(input_dict) - example = self.pipeline(input_dict) - if self.filter_empty_gt and \ - (example is None or - ~(example['gt_labels_3d']._data != -1).any()): - return None - return example - - def prepare_test_data(self, index): - """Prepare data for testing. - - Args: - index (int): Index for accessing the target data. - - Returns: - dict: Testing data dict of the corresponding index. - """ - input_dict = self.get_data_info(index) - self.pre_pipeline(input_dict) - example = self.pipeline(input_dict) - return example - - @classmethod - def get_classes(cls, classes=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - - Return: - list[str]: A list of class names. - """ - if classes is None: - return cls.CLASSES - - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - return class_names - - def format_results(self, - outputs, - pklfile_prefix=None, - submission_prefix=None): - """Format the results to pkl file. - - Args: - outputs (list[dict]): Testing results of the dataset. - pklfile_prefix (str): The prefix of pkl files. 
It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (outputs, tmp_dir), outputs is the detection results, - tmp_dir is the temporal directory created for saving json - files when ``jsonfile_prefix`` is not specified. - """ - if pklfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(tmp_dir.name, 'results') - out = f'{pklfile_prefix}.pkl' - mmcv.dump(outputs, out) - return outputs, tmp_dir - - def evaluate(self, - results, - metric=None, - iou_thr=(0.25, 0.5), - logger=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluate. - - Evaluation in indoor protocol. - - Args: - results (list[dict]): List of results. - metric (str | list[str], optional): Metrics to be evaluated. - Defaults to None. - iou_thr (list[float]): AP IoU thresholds. Defaults to (0.25, 0.5). - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Defaults to None. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict: Evaluation results. - """ - from mmdet3d.core.evaluation import indoor_eval - assert isinstance( - results, list), f'Expect results to be list, got {type(results)}.' - assert len(results) > 0, 'Expect length of results > 0.' - assert len(results) == len(self.data_infos) - assert isinstance( - results[0], dict - ), f'Expect elements in results to be dict, got {type(results[0])}.' - gt_annos = [info['annos'] for info in self.data_infos] - label2cat = {i: cat_id for i, cat_id in enumerate(self.CLASSES)} - ret_dict = indoor_eval( - gt_annos, - results, - iou_thr, - label2cat, - logger=logger, - box_type_3d=self.box_type_3d, - box_mode_3d=self.box_mode_3d) - if show: - self.show(results, out_dir, pipeline=pipeline) - - return ret_dict - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - raise NotImplementedError('_build_default_pipeline is not implemented ' - f'for dataset {self.__class__.__name__}') - - def _get_pipeline(self, pipeline): - """Get data loading pipeline in self.show/evaluate function. - - Args: - pipeline (list[dict]): Input pipeline. If None is given, - get from self.pipeline. - """ - if pipeline is None: - if not hasattr(self, 'pipeline') or self.pipeline is None: - warnings.warn( - 'Use default pipeline for data loading, this may cause ' - 'errors when data is on ceph') - return self._build_default_pipeline() - loading_pipeline = get_loading_pipeline(self.pipeline.transforms) - return Compose(loading_pipeline) - return Compose(pipeline) - - def _extract_data(self, index, pipeline, key, load_annos=False): - """Load data using input pipeline and extract data according to key. - - Args: - index (int): Index for accessing the target data. - pipeline (:obj:`Compose`): Composed data loading pipeline. - key (str | list[str]): One single or a list of data key. - load_annos (bool): Whether to load data annotations. - If True, need to set self.test_mode as False before loading. - - Returns: - np.ndarray | torch.Tensor | list[np.ndarray | torch.Tensor]: - A single or a list of loaded data. - """ - assert pipeline is not None, 'data loading pipeline is not provided' - # when we want to load ground-truth via pipeline (e.g. 
bbox, seg mask) - # we need to set self.test_mode as False so that we have 'annos' - if load_annos: - original_test_mode = self.test_mode - self.test_mode = False - input_dict = self.get_data_info(index) - self.pre_pipeline(input_dict) - example = pipeline(input_dict) - - # extract data items according to keys - if isinstance(key, str): - data = extract_result_dict(example, key) - else: - data = [extract_result_dict(example, k) for k in key] - if load_annos: - self.test_mode = original_test_mode - - return data - - def __len__(self): - """Return the length of data infos. - - Returns: - int: Length of data infos. - """ - return len(self.data_infos) - - def _rand_another(self, idx): - """Randomly get another item with the same flag. - - Returns: - int: Another index of item with the same flag. - """ - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) - - def __getitem__(self, idx): - """Get item from infos according to the given index. - - Returns: - dict: Data dictionary of the corresponding index. - """ - if self.test_mode: - return self.prepare_test_data(idx) - while True: - data = self.prepare_train_data(idx) - if data is None: - idx = self._rand_another(idx) - continue - return data - - def _set_group_flag(self): - """Set flag according to image aspect ratio. - - Images with aspect ratio greater than 1 will be set as group 1, - otherwise group 0. In 3D datasets, they are all the same, thus are all - zeros. - """ - self.flag = np.zeros(len(self), dtype=np.uint8) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/custom_3d_seg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/custom_3d_seg.py deleted file mode 100644 index e123611d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/custom_3d_seg.py +++ /dev/null @@ -1,465 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import tempfile -import warnings -from os import path as osp - -import mmcv -import numpy as np -from torch.utils.data import Dataset - -from mmseg.datasets import DATASETS as SEG_DATASETS -from .builder import DATASETS -from .pipelines import Compose -from .utils import extract_result_dict, get_loading_pipeline - - -@DATASETS.register_module() -@SEG_DATASETS.register_module() -class Custom3DSegDataset(Dataset): - """Customized 3D dataset for semantic segmentation task. - - This is the base dataset of ScanNet and S3DIS dataset. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - palette (list[list[int]], optional): The palette of segmentation map. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. If None is given, set to len(self.CLASSES) to - be consistent with PointSegClassMapping function in pipeline. - Defaults to None. - scene_idxs (np.ndarray | str, optional): Precomputed index to load - data. For scenes with many points, we may sample it several times. - Defaults to None. 
- """ - # names of all classes data used for the task - CLASSES = None - - # class_ids used for training - VALID_CLASS_IDS = None - - # all possible class_ids in loaded segmentation mask - ALL_CLASS_IDS = None - - # official color for visualization - PALETTE = None - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - palette=None, - modality=None, - test_mode=False, - ignore_index=None, - scene_idxs=None, - file_client_args=dict(backend='disk')): - super().__init__() - self.data_root = data_root - self.ann_file = ann_file - self.test_mode = test_mode - self.modality = modality - self.file_client = mmcv.FileClient(**file_client_args) - - # load annotations - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(self.ann_file) as local_path: - self.data_infos = self.load_annotations(open(local_path, 'rb')) - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {self.ann_file} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - self.data_infos = self.load_annotations(self.ann_file) - - if pipeline is not None: - self.pipeline = Compose(pipeline) - - self.ignore_index = len(self.CLASSES) if \ - ignore_index is None else ignore_index - - self.scene_idxs = self.get_scene_idxs(scene_idxs) - self.CLASSES, self.PALETTE = \ - self.get_classes_and_palette(classes, palette) - - # set group flag for the sampler - if not self.test_mode: - self._set_group_flag() - - def load_annotations(self, ann_file): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations. - """ - # loading data from a file-like object needs file format - return mmcv.load(ann_file, file_format='pkl') - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - file_name (str): Filename of point clouds. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - sample_idx = info['point_cloud']['lidar_idx'] - pts_filename = osp.join(self.data_root, info['pts_path']) - - input_dict = dict( - pts_filename=pts_filename, - sample_idx=sample_idx, - file_name=pts_filename) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - return input_dict - - def pre_pipeline(self, results): - """Initialization before data preparation. - - Args: - results (dict): Dict before data preprocessing. - - - img_fields (list): Image fields. - - pts_mask_fields (list): Mask fields of points. - - pts_seg_fields (list): Mask fields of point segments. - - mask_fields (list): Fields of masks. - - seg_fields (list): Segment fields. - """ - results['img_fields'] = [] - results['pts_mask_fields'] = [] - results['pts_seg_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - results['bbox3d_fields'] = [] - - def prepare_train_data(self, index): - """Training data preparation. - - Args: - index (int): Index for accessing the target data. - - Returns: - dict: Training data dict of the corresponding index. 
- """ - input_dict = self.get_data_info(index) - if input_dict is None: - return None - self.pre_pipeline(input_dict) - example = self.pipeline(input_dict) - return example - - def prepare_test_data(self, index): - """Prepare data for testing. - - Args: - index (int): Index for accessing the target data. - - Returns: - dict: Testing data dict of the corresponding index. - """ - input_dict = self.get_data_info(index) - self.pre_pipeline(input_dict) - example = self.pipeline(input_dict) - return example - - def get_classes_and_palette(self, classes=None, palette=None): - """Get class names of current dataset. - - This function is taken from MMSegmentation. - - Args: - classes (Sequence[str] | str): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - Defaults to None. - palette (Sequence[Sequence[int]]] | np.ndarray): - The palette of segmentation map. If None is given, random - palette will be generated. Defaults to None. - """ - if classes is None: - self.custom_classes = False - # map id in the loaded mask to label used for training - self.label_map = { - cls_id: self.ignore_index - for cls_id in self.ALL_CLASS_IDS - } - self.label_map.update( - {cls_id: i - for i, cls_id in enumerate(self.VALID_CLASS_IDS)}) - # map label to category name - self.label2cat = { - i: cat_name - for i, cat_name in enumerate(self.CLASSES) - } - return self.CLASSES, self.PALETTE - - self.custom_classes = True - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - if self.CLASSES: - if not set(class_names).issubset(self.CLASSES): - raise ValueError('classes is not a subset of CLASSES.') - - # update valid_class_ids - self.VALID_CLASS_IDS = [ - self.VALID_CLASS_IDS[self.CLASSES.index(cls_name)] - for cls_name in class_names - ] - - # dictionary, its keys are the old label ids and its values - # are the new label ids. - # used for changing pixel labels in load_annotations. - self.label_map = { - cls_id: self.ignore_index - for cls_id in self.ALL_CLASS_IDS - } - self.label_map.update( - {cls_id: i - for i, cls_id in enumerate(self.VALID_CLASS_IDS)}) - self.label2cat = { - i: cat_name - for i, cat_name in enumerate(class_names) - } - - # modify palette for visualization - palette = [ - self.PALETTE[self.CLASSES.index(cls_name)] - for cls_name in class_names - ] - - return class_names, palette - - def get_scene_idxs(self, scene_idxs): - """Compute scene_idxs for data sampling. - - We sample more times for scenes with more points. 
- """ - if self.test_mode: - # when testing, we load one whole scene every time - return np.arange(len(self.data_infos)).astype(np.int32) - - # we may need to re-sample different scenes according to scene_idxs - # this is necessary for indoor scene segmentation such as ScanNet - if scene_idxs is None: - scene_idxs = np.arange(len(self.data_infos)) - if isinstance(scene_idxs, str): - with self.file_client.get_local_path(scene_idxs) as local_path: - scene_idxs = np.load(local_path) - else: - scene_idxs = np.array(scene_idxs) - - return scene_idxs.astype(np.int32) - - def format_results(self, - outputs, - pklfile_prefix=None, - submission_prefix=None): - """Format the results to pkl file. - - Args: - outputs (list[dict]): Testing results of the dataset. - pklfile_prefix (str): The prefix of pkl files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (outputs, tmp_dir), outputs is the detection results, - tmp_dir is the temporal directory created for saving json - files when ``jsonfile_prefix`` is not specified. - """ - if pklfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(tmp_dir.name, 'results') - out = f'{pklfile_prefix}.pkl' - mmcv.dump(outputs, out) - return outputs, tmp_dir - - def evaluate(self, - results, - metric=None, - logger=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluate. - - Evaluation in semantic segmentation protocol. - - Args: - results (list[dict]): List of results. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Defaults to None. - show (bool, optional): Whether to visualize. - Defaults to False. - out_dir (str, optional): Path to save the visualization results. - Defaults to None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict: Evaluation results. - """ - from mmdet3d.core.evaluation import seg_eval - assert isinstance( - results, list), f'Expect results to be list, got {type(results)}.' - assert len(results) > 0, 'Expect length of results > 0.' - assert len(results) == len(self.data_infos) - assert isinstance( - results[0], dict - ), f'Expect elements in results to be dict, got {type(results[0])}.' - - load_pipeline = self._get_pipeline(pipeline) - pred_sem_masks = [result['semantic_mask'] for result in results] - gt_sem_masks = [ - self._extract_data( - i, load_pipeline, 'pts_semantic_mask', load_annos=True) - for i in range(len(self.data_infos)) - ] - ret_dict = seg_eval( - gt_sem_masks, - pred_sem_masks, - self.label2cat, - self.ignore_index, - logger=logger) - - if show: - self.show(pred_sem_masks, out_dir, pipeline=pipeline) - - return ret_dict - - def _rand_another(self, idx): - """Randomly get another item with the same flag. - - Returns: - int: Another index of item with the same flag. - """ - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - raise NotImplementedError('_build_default_pipeline is not implemented ' - f'for dataset {self.__class__.__name__}') - - def _get_pipeline(self, pipeline): - """Get data loading pipeline in self.show/evaluate function. - - Args: - pipeline (list[dict]): Input pipeline. If None is given, - get from self.pipeline. 
- """ - if pipeline is None: - if not hasattr(self, 'pipeline') or self.pipeline is None: - warnings.warn( - 'Use default pipeline for data loading, this may cause ' - 'errors when data is on ceph') - return self._build_default_pipeline() - loading_pipeline = get_loading_pipeline(self.pipeline.transforms) - return Compose(loading_pipeline) - return Compose(pipeline) - - def _extract_data(self, index, pipeline, key, load_annos=False): - """Load data using input pipeline and extract data according to key. - - Args: - index (int): Index for accessing the target data. - pipeline (:obj:`Compose`): Composed data loading pipeline. - key (str | list[str]): One single or a list of data key. - load_annos (bool): Whether to load data annotations. - If True, need to set self.test_mode as False before loading. - - Returns: - np.ndarray | torch.Tensor | list[np.ndarray | torch.Tensor]: - A single or a list of loaded data. - """ - assert pipeline is not None, 'data loading pipeline is not provided' - # when we want to load ground-truth via pipeline (e.g. bbox, seg mask) - # we need to set self.test_mode as False so that we have 'annos' - if load_annos: - original_test_mode = self.test_mode - self.test_mode = False - input_dict = self.get_data_info(index) - self.pre_pipeline(input_dict) - example = pipeline(input_dict) - - # extract data items according to keys - if isinstance(key, str): - data = extract_result_dict(example, key) - else: - data = [extract_result_dict(example, k) for k in key] - if load_annos: - self.test_mode = original_test_mode - - return data - - def __len__(self): - """Return the length of scene_idxs. - - Returns: - int: Length of data infos. - """ - return len(self.scene_idxs) - - def __getitem__(self, idx): - """Get item from infos according to the given index. - - In indoor scene segmentation task, each scene contains millions of - points. However, we only sample less than 10k points within a patch - each time. Therefore, we use `scene_idxs` to re-sample different rooms. - - Returns: - dict: Data dictionary of the corresponding index. - """ - scene_idx = self.scene_idxs[idx] # map to scene idx - if self.test_mode: - return self.prepare_test_data(scene_idx) - while True: - data = self.prepare_train_data(scene_idx) - if data is None: - idx = self._rand_another(idx) - scene_idx = self.scene_idxs[idx] # map to scene idx - continue - return data - - def _set_group_flag(self): - """Set flag according to image aspect ratio. - - Images with aspect ratio greater than 1 will be set as group 1, - otherwise group 0. In 3D datasets, they are all the same, thus are all - zeros. - """ - self.flag = np.zeros(len(self), dtype=np.uint8) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/dataset_wrappers.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/dataset_wrappers.py deleted file mode 100644 index f8a5ce0e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/dataset_wrappers.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - -from .builder import DATASETS - - -@DATASETS.register_module() -class CBGSDataset(object): - """A wrapper of class sampled dataset with ann_file path. Implementation of - paper `Class-balanced Grouping and Sampling for Point Cloud 3D Object - Detection `_. - - Balance the number of scenes under different classes. 
- - Args: - dataset (:obj:`CustomDataset`): The dataset to be class sampled. - """ - - def __init__(self, dataset): - self.dataset = dataset - self.CLASSES = dataset.CLASSES - self.cat2id = {name: i for i, name in enumerate(self.CLASSES)} - self.sample_indices = self._get_sample_indices() - # self.dataset.data_infos = self.data_infos - if hasattr(self.dataset, 'flag'): - self.flag = np.array( - [self.dataset.flag[ind] for ind in self.sample_indices], - dtype=np.uint8) - - def _get_sample_indices(self): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations after class sampling. - """ - class_sample_idxs = {cat_id: [] for cat_id in self.cat2id.values()} - for idx in range(len(self.dataset)): - sample_cat_ids = self.dataset.get_cat_ids(idx) - for cat_id in sample_cat_ids: - class_sample_idxs[cat_id].append(idx) - duplicated_samples = sum( - [len(v) for _, v in class_sample_idxs.items()]) - class_distribution = { - k: len(v) / duplicated_samples - for k, v in class_sample_idxs.items() - } - - sample_indices = [] - - frac = 1.0 / len(self.CLASSES) - ratios = [frac / v for v in class_distribution.values()] - for cls_inds, ratio in zip(list(class_sample_idxs.values()), ratios): - sample_indices += np.random.choice(cls_inds, - int(len(cls_inds) * - ratio)).tolist() - return sample_indices - - def __getitem__(self, idx): - """Get item from infos according to the given index. - - Returns: - dict: Data dictionary of the corresponding index. - """ - ori_idx = self.sample_indices[idx] - return self.dataset[ori_idx] - - def __len__(self): - """Return the length of data infos. - - Returns: - int: Length of data infos. - """ - return len(self.sample_indices) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/kitti2d_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/kitti2d_dataset.py deleted file mode 100644 index bc312e56..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/kitti2d_dataset.py +++ /dev/null @@ -1,243 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np - -from mmdet.datasets import CustomDataset -from .builder import DATASETS - - -@DATASETS.register_module() -class Kitti2DDataset(CustomDataset): - r"""KITTI 2D Dataset. - - This class serves as the API for experiments on the `KITTI Dataset - `_. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR'. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. 
- """ - - CLASSES = ('car', 'pedestrian', 'cyclist') - """ - Annotation format: - [ - { - 'image': { - 'image_idx': 0, - 'image_path': 'training/image_2/000000.png', - 'image_shape': array([ 370, 1224], dtype=int32) - }, - 'point_cloud': { - 'num_features': 4, - 'velodyne_path': 'training/velodyne/000000.bin' - }, - 'calib': { - 'P0': (4, 4), - 'P1': (4, 4), - 'P2': (4, 4), - 'P3': (4, 4), - 'R0_rect':4x4 np.array, - 'Tr_velo_to_cam': 4x4 np.array, - 'Tr_imu_to_velo': 4x4 np.array - }, - 'annos': { - 'name': (n), - 'truncated': (n), - 'occluded': (n), - 'alpha': (n), - 'bbox': (n, 4), - 'dimensions': (n, 3), - 'location': (n, 3), - 'rotation_y': (n), - 'score': (n), - 'index': array([0], dtype=int32), - 'group_ids': array([0], dtype=int32), - 'difficulty': array([0], dtype=int32), - 'num_points_in_gt': (n), - } - } - ] - """ - - def load_annotations(self, ann_file): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations. - """ - self.data_infos = mmcv.load(ann_file) - self.cat2label = { - cat_name: i - for i, cat_name in enumerate(self.CLASSES) - } - return self.data_infos - - def _filter_imgs(self, min_size=32): - """Filter images without ground truths.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if len(img_info['annos']['name']) > 0: - valid_inds.append(i) - return valid_inds - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: Annotation information consists of the following keys: - - - bboxes (np.ndarray): Ground truth bboxes. - - labels (np.ndarray): Labels of ground truths. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - annos = info['annos'] - gt_names = annos['name'] - gt_bboxes = annos['bbox'] - difficulty = annos['difficulty'] - - # remove classes that is not needed - selected = self.keep_arrays_by_name(gt_names, self.CLASSES) - gt_bboxes = gt_bboxes[selected] - gt_names = gt_names[selected] - difficulty = difficulty[selected] - gt_labels = np.array([self.cat2label[n] for n in gt_names]) - - anns_results = dict( - bboxes=gt_bboxes.astype(np.float32), - labels=gt_labels, - ) - return anns_results - - def prepare_train_img(self, idx): - """Training image preparation. - - Args: - index (int): Index for accessing the target image data. - - Returns: - dict: Training image data dict after preprocessing - corresponding to the index. - """ - img_raw_info = self.data_infos[idx]['image'] - img_info = dict(filename=img_raw_info['image_path']) - ann_info = self.get_ann_info(idx) - if len(ann_info['bboxes']) == 0: - return None - results = dict(img_info=img_info, ann_info=ann_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Prepare data for testing. - - Args: - index (int): Index for accessing the target image data. - - Returns: - dict: Testing image data dict after preprocessing - corresponding to the index. 
- """ - img_raw_info = self.data_infos[idx]['image'] - img_info = dict(filename=img_raw_info['image_path']) - results = dict(img_info=img_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def drop_arrays_by_name(self, gt_names, used_classes): - """Drop irrelevant ground truths by name. - - Args: - gt_names (list[str]): Names of ground truths. - used_classes (list[str]): Classes of interest. - - Returns: - np.ndarray: Indices of ground truths that will be dropped. - """ - inds = [i for i, x in enumerate(gt_names) if x not in used_classes] - inds = np.array(inds, dtype=np.int64) - return inds - - def keep_arrays_by_name(self, gt_names, used_classes): - """Keep useful ground truths by name. - - Args: - gt_names (list[str]): Names of ground truths. - used_classes (list[str]): Classes of interest. - - Returns: - np.ndarray: Indices of ground truths that will be keeped. - """ - inds = [i for i, x in enumerate(gt_names) if x in used_classes] - inds = np.array(inds, dtype=np.int64) - return inds - - def reformat_bbox(self, outputs, out=None): - """Reformat bounding boxes to KITTI 2D styles. - - Args: - outputs (list[np.ndarray]): List of arrays storing the inferenced - bounding boxes and scores. - out (str, optional): The prefix of output file. - Default: None. - - Returns: - list[dict]: A list of dictionaries with the kitti 2D format. - """ - from mmdet3d.core.bbox.transforms import bbox2result_kitti2d - sample_idx = [info['image']['image_idx'] for info in self.data_infos] - result_files = bbox2result_kitti2d(outputs, self.CLASSES, sample_idx, - out) - return result_files - - def evaluate(self, result_files, eval_types=None): - """Evaluation in KITTI protocol. - - Args: - result_files (str): Path of result files. - eval_types (str, optional): Types of evaluation. Default: None. - KITTI dataset only support 'bbox' evaluation type. - - Returns: - tuple (str, dict): Average precision results in str format - and average precision results in dict format. - """ - from mmdet3d.core.evaluation import kitti_eval - eval_types = ['bbox'] if not eval_types else eval_types - assert eval_types in ('bbox', ['bbox' - ]), 'KITTI data set only evaluate bbox' - gt_annos = [info['annos'] for info in self.data_infos] - ap_result_str, ap_dict = kitti_eval( - gt_annos, result_files, self.CLASSES, eval_types=['bbox']) - return ap_result_str, ap_dict diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/kitti_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/kitti_dataset.py deleted file mode 100644 index e2919ef8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/kitti_dataset.py +++ /dev/null @@ -1,775 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import os -import tempfile -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.utils import print_log - -from ..core import show_multi_modality_result, show_result -from ..core.bbox import (Box3DMode, CameraInstance3DBoxes, Coord3DMode, - LiDARInstance3DBoxes, points_cam2img) -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class KittiDataset(Custom3DDataset): - r"""KITTI Dataset. - - This class serves as the API for experiments on the `KITTI Dataset - `_. 
- - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - split (str): Split of input data. - pts_prefix (str, optional): Prefix of points files. - Defaults to 'velodyne'. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - pcd_limit_range (list, optional): The range of point cloud used to - filter invalid predicted boxes. - Default: [0, -40, -3, 70.4, 40, 0.0]. - """ - CLASSES = ('car', 'pedestrian', 'cyclist') - - def __init__(self, - data_root, - ann_file, - split, - pts_prefix='velodyne', - pipeline=None, - classes=None, - modality=None, - box_type_3d='LiDAR', - filter_empty_gt=True, - test_mode=False, - pcd_limit_range=[0, -40, -3, 70.4, 40, 0.0], - **kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - **kwargs) - - self.split = split - self.root_split = os.path.join(self.data_root, split) - assert self.modality is not None - self.pcd_limit_range = pcd_limit_range - self.pts_prefix = pts_prefix - - def _get_pts_filename(self, idx): - """Get point cloud filename according to the given index. - - Args: - index (int): Index of the point cloud file to get. - - Returns: - str: Name of the point cloud file. - """ - pts_filename = osp.join(self.root_split, self.pts_prefix, - f'{idx:06d}.bin') - return pts_filename - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - img_prefix (str): Prefix of image files. - - img_info (dict): Image info. - - lidar2img (list[np.ndarray], optional): Transformations - from lidar to different cameras. - - ann_info (dict): Annotation info. 
- """ - info = self.data_infos[index] - sample_idx = info['image']['image_idx'] - img_filename = os.path.join(self.data_root, - info['image']['image_path']) - - # TODO: consider use torch.Tensor only - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P2 = info['calib']['P2'].astype(np.float32) - lidar2img = P2 @ rect @ Trv2c - - pts_filename = self._get_pts_filename(sample_idx) - input_dict = dict( - sample_idx=sample_idx, - pts_filename=pts_filename, - img_prefix=None, - img_info=dict(filename=img_filename), - lidar2img=lidar2img) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): - 3D ground truth bboxes. - - gt_labels_3d (np.ndarray): Labels of ground truths. - - gt_bboxes (np.ndarray): 2D ground truth bboxes. - - gt_labels (np.ndarray): Labels of ground truths. - - gt_names (list[str]): Class names of ground truths. - - difficulty (int): Difficulty defined by KITTI. - 0, 1, 2 represent xxxxx respectively. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - - if 'plane' in info: - # convert ground plane to velodyne coordinates - reverse = np.linalg.inv(rect @ Trv2c) - - (plane_norm_cam, - plane_off_cam) = (info['plane'][:3], - -info['plane'][:3] * info['plane'][3]) - plane_norm_lidar = \ - (reverse[:3, :3] @ plane_norm_cam[:, None])[:, 0] - plane_off_lidar = ( - reverse[:3, :3] @ plane_off_cam[:, None][:, 0] + - reverse[:3, 3]) - plane_lidar = np.zeros_like(plane_norm_lidar, shape=(4, )) - plane_lidar[:3] = plane_norm_lidar - plane_lidar[3] = -plane_norm_lidar.T @ plane_off_lidar - else: - plane_lidar = None - - difficulty = info['annos']['difficulty'] - annos = info['annos'] - # we need other objects to avoid collision when sample - annos = self.remove_dontcare(annos) - loc = annos['location'] - dims = annos['dimensions'] - rots = annos['rotation_y'] - gt_names = annos['name'] - gt_bboxes_3d = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1).astype(np.float32) - - # convert gt_bboxes_3d to velodyne coordinates - gt_bboxes_3d = CameraInstance3DBoxes(gt_bboxes_3d).convert_to( - self.box_mode_3d, np.linalg.inv(rect @ Trv2c)) - gt_bboxes = annos['bbox'] - - selected = self.drop_arrays_by_name(gt_names, ['DontCare']) - gt_bboxes = gt_bboxes[selected].astype('float32') - gt_names = gt_names[selected] - - gt_labels = [] - for cat in gt_names: - if cat in self.CLASSES: - gt_labels.append(self.CLASSES.index(cat)) - else: - gt_labels.append(-1) - gt_labels = np.array(gt_labels).astype(np.int64) - gt_labels_3d = copy.deepcopy(gt_labels) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - bboxes=gt_bboxes, - labels=gt_labels, - gt_names=gt_names, - plane=plane_lidar, - difficulty=difficulty) - return anns_results - - def drop_arrays_by_name(self, gt_names, used_classes): - """Drop irrelevant ground truths by name. - - Args: - gt_names (list[str]): Names of ground truths. - used_classes (list[str]): Classes of interest. 
- - Returns: - np.ndarray: Indices of ground truths that will be dropped. - """ - inds = [i for i, x in enumerate(gt_names) if x not in used_classes] - inds = np.array(inds, dtype=np.int64) - return inds - - def keep_arrays_by_name(self, gt_names, used_classes): - """Keep useful ground truths by name. - - Args: - gt_names (list[str]): Names of ground truths. - used_classes (list[str]): Classes of interest. - - Returns: - np.ndarray: Indices of ground truths that will be keeped. - """ - inds = [i for i, x in enumerate(gt_names) if x in used_classes] - inds = np.array(inds, dtype=np.int64) - return inds - - def remove_dontcare(self, ann_info): - """Remove annotations that do not need to be cared. - - Args: - ann_info (dict): Dict of annotation infos. The ``'DontCare'`` - annotations will be removed according to ann_file['name']. - - Returns: - dict: Annotations after filtering. - """ - img_filtered_annotations = {} - relevant_annotation_indices = [ - i for i, x in enumerate(ann_info['name']) if x != 'DontCare' - ] - for key in ann_info.keys(): - img_filtered_annotations[key] = ( - ann_info[key][relevant_annotation_indices]) - return img_filtered_annotations - - def format_results(self, - outputs, - pklfile_prefix=None, - submission_prefix=None): - """Format the results to pkl file. - - Args: - outputs (list[dict]): Testing results of the dataset. - pklfile_prefix (str): The prefix of pkl files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str): The prefix of submitted files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". If not specified, a temp file will be created. - Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing - the json filepaths, tmp_dir is the temporal directory created - for saving json files when jsonfile_prefix is not specified. - """ - if pklfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - if not isinstance(outputs[0], dict): - result_files = self.bbox2result_kitti2d(outputs, self.CLASSES, - pklfile_prefix, - submission_prefix) - elif 'pts_bbox' in outputs[0] or 'img_bbox' in outputs[0]: - result_files = dict() - for name in outputs[0]: - results_ = [out[name] for out in outputs] - pklfile_prefix_ = pklfile_prefix + name - if submission_prefix is not None: - submission_prefix_ = submission_prefix + name - else: - submission_prefix_ = None - if 'img' in name: - result_files = self.bbox2result_kitti2d( - results_, self.CLASSES, pklfile_prefix_, - submission_prefix_) - else: - result_files_ = self.bbox2result_kitti( - results_, self.CLASSES, pklfile_prefix_, - submission_prefix_) - result_files[name] = result_files_ - else: - result_files = self.bbox2result_kitti(outputs, self.CLASSES, - pklfile_prefix, - submission_prefix) - return result_files, tmp_dir - - def evaluate(self, - results, - metric=None, - logger=None, - pklfile_prefix=None, - submission_prefix=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluation in KITTI protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str], optional): Metrics to be evaluated. - Default: None. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. 
- pklfile_prefix (str, optional): The prefix of pkl files, including - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str, optional): The prefix of submission data. - If not specified, the submission data will not be generated. - Default: None. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict[str, float]: Results of each evaluation metric. - """ - result_files, tmp_dir = self.format_results(results, pklfile_prefix) - from mmdet3d.core.evaluation import kitti_eval - gt_annos = [info['annos'] for info in self.data_infos] - - if isinstance(result_files, dict): - ap_dict = dict() - for name, result_files_ in result_files.items(): - eval_types = ['bbox', 'bev', '3d'] - if 'img' in name: - eval_types = ['bbox'] - ap_result_str, ap_dict_ = kitti_eval( - gt_annos, - result_files_, - self.CLASSES, - eval_types=eval_types) - for ap_type, ap in ap_dict_.items(): - ap_dict[f'{name}/{ap_type}'] = float('{:.4f}'.format(ap)) - - print_log( - f'Results of {name}:\n' + ap_result_str, logger=logger) - - else: - if metric == 'img_bbox': - ap_result_str, ap_dict = kitti_eval( - gt_annos, result_files, self.CLASSES, eval_types=['bbox']) - else: - ap_result_str, ap_dict = kitti_eval(gt_annos, result_files, - self.CLASSES) - print_log('\n' + ap_result_str, logger=logger) - - if tmp_dir is not None: - tmp_dir.cleanup() - if show or out_dir: - self.show(results, out_dir, show=show, pipeline=pipeline) - return ap_dict - - def bbox2result_kitti(self, - net_outputs, - class_names, - pklfile_prefix=None, - submission_prefix=None): - """Convert 3D detection results to kitti format for evaluation and test - submission. - - Args: - net_outputs (list[np.ndarray]): List of array storing the - inferenced bounding boxes and scores. - class_names (list[String]): A list of class names. - pklfile_prefix (str): The prefix of pkl file. - submission_prefix (str): The prefix of submission file. - - Returns: - list[dict]: A list of dictionaries with the kitti format. 
- """ - assert len(net_outputs) == len(self.data_infos), \ - 'invalid list length of network outputs' - if submission_prefix is not None: - mmcv.mkdir_or_exist(submission_prefix) - - det_annos = [] - print('\nConverting prediction to KITTI format') - for idx, pred_dicts in enumerate( - mmcv.track_iter_progress(net_outputs)): - annos = [] - info = self.data_infos[idx] - sample_idx = info['image']['image_idx'] - image_shape = info['image']['image_shape'][:2] - box_dict = self.convert_valid_bboxes(pred_dicts, info) - anno = { - 'name': [], - 'truncated': [], - 'occluded': [], - 'alpha': [], - 'bbox': [], - 'dimensions': [], - 'location': [], - 'rotation_y': [], - 'score': [] - } - if len(box_dict['bbox']) > 0: - box_2d_preds = box_dict['bbox'] - box_preds = box_dict['box3d_camera'] - scores = box_dict['scores'] - box_preds_lidar = box_dict['box3d_lidar'] - label_preds = box_dict['label_preds'] - - for box, box_lidar, bbox, score, label in zip( - box_preds, box_preds_lidar, box_2d_preds, scores, - label_preds): - bbox[2:] = np.minimum(bbox[2:], image_shape[::-1]) - bbox[:2] = np.maximum(bbox[:2], [0, 0]) - anno['name'].append(class_names[int(label)]) - anno['truncated'].append(0.0) - anno['occluded'].append(0) - anno['alpha'].append( - -np.arctan2(-box_lidar[1], box_lidar[0]) + box[6]) - anno['bbox'].append(bbox) - anno['dimensions'].append(box[3:6]) - anno['location'].append(box[:3]) - anno['rotation_y'].append(box[6]) - anno['score'].append(score) - - anno = {k: np.stack(v) for k, v in anno.items()} - annos.append(anno) - else: - anno = { - 'name': np.array([]), - 'truncated': np.array([]), - 'occluded': np.array([]), - 'alpha': np.array([]), - 'bbox': np.zeros([0, 4]), - 'dimensions': np.zeros([0, 3]), - 'location': np.zeros([0, 3]), - 'rotation_y': np.array([]), - 'score': np.array([]), - } - annos.append(anno) - - if submission_prefix is not None: - curr_file = f'{submission_prefix}/{sample_idx:06d}.txt' - with open(curr_file, 'w') as f: - bbox = anno['bbox'] - loc = anno['location'] - dims = anno['dimensions'] # lhw -> hwl - - for idx in range(len(bbox)): - print( - '{} -1 -1 {:.4f} {:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} {:.4f} {:.4f} {:.4f}'.format( - anno['name'][idx], anno['alpha'][idx], - bbox[idx][0], bbox[idx][1], bbox[idx][2], - bbox[idx][3], dims[idx][1], dims[idx][2], - dims[idx][0], loc[idx][0], loc[idx][1], - loc[idx][2], anno['rotation_y'][idx], - anno['score'][idx]), - file=f) - - annos[-1]['sample_idx'] = np.array( - [sample_idx] * len(annos[-1]['score']), dtype=np.int64) - - det_annos += annos - - if pklfile_prefix is not None: - if not pklfile_prefix.endswith(('.pkl', '.pickle')): - out = f'{pklfile_prefix}.pkl' - mmcv.dump(det_annos, out) - print(f'Result is saved to {out}.') - - return det_annos - - def bbox2result_kitti2d(self, - net_outputs, - class_names, - pklfile_prefix=None, - submission_prefix=None): - """Convert 2D detection results to kitti format for evaluation and test - submission. - - Args: - net_outputs (list[np.ndarray]): List of array storing the - inferenced bounding boxes and scores. - class_names (list[String]): A list of class names. - pklfile_prefix (str): The prefix of pkl file. - submission_prefix (str): The prefix of submission file. 
- - Returns: - list[dict]: A list of dictionaries have the kitti format - """ - assert len(net_outputs) == len(self.data_infos), \ - 'invalid list length of network outputs' - det_annos = [] - print('\nConverting prediction to KITTI format') - for i, bboxes_per_sample in enumerate( - mmcv.track_iter_progress(net_outputs)): - annos = [] - anno = dict( - name=[], - truncated=[], - occluded=[], - alpha=[], - bbox=[], - dimensions=[], - location=[], - rotation_y=[], - score=[]) - sample_idx = self.data_infos[i]['image']['image_idx'] - - num_example = 0 - for label in range(len(bboxes_per_sample)): - bbox = bboxes_per_sample[label] - for i in range(bbox.shape[0]): - anno['name'].append(class_names[int(label)]) - anno['truncated'].append(0.0) - anno['occluded'].append(0) - anno['alpha'].append(0.0) - anno['bbox'].append(bbox[i, :4]) - # set dimensions (height, width, length) to zero - anno['dimensions'].append( - np.zeros(shape=[3], dtype=np.float32)) - # set the 3D translation to (-1000, -1000, -1000) - anno['location'].append( - np.ones(shape=[3], dtype=np.float32) * (-1000.0)) - anno['rotation_y'].append(0.0) - anno['score'].append(bbox[i, 4]) - num_example += 1 - - if num_example == 0: - annos.append( - dict( - name=np.array([]), - truncated=np.array([]), - occluded=np.array([]), - alpha=np.array([]), - bbox=np.zeros([0, 4]), - dimensions=np.zeros([0, 3]), - location=np.zeros([0, 3]), - rotation_y=np.array([]), - score=np.array([]), - )) - else: - anno = {k: np.stack(v) for k, v in anno.items()} - annos.append(anno) - - annos[-1]['sample_idx'] = np.array( - [sample_idx] * num_example, dtype=np.int64) - det_annos += annos - - if pklfile_prefix is not None: - # save file in pkl format - pklfile_path = ( - pklfile_prefix[:-4] if pklfile_prefix.endswith( - ('.pkl', '.pickle')) else pklfile_prefix) - mmcv.dump(det_annos, pklfile_path) - - if submission_prefix is not None: - # save file in submission format - mmcv.mkdir_or_exist(submission_prefix) - print(f'Saving KITTI submission to {submission_prefix}') - for i, anno in enumerate(det_annos): - sample_idx = self.data_infos[i]['image']['image_idx'] - cur_det_file = f'{submission_prefix}/{sample_idx:06d}.txt' - with open(cur_det_file, 'w') as f: - bbox = anno['bbox'] - loc = anno['location'] - dims = anno['dimensions'][::-1] # lhw -> hwl - for idx in range(len(bbox)): - print( - '{} -1 -1 {:4f} {:4f} {:4f} {:4f} {:4f} {:4f} ' - '{:4f} {:4f} {:4f} {:4f} {:4f} {:4f} {:4f}'.format( - anno['name'][idx], - anno['alpha'][idx], - *bbox[idx], # 4 float - *dims[idx], # 3 float - *loc[idx], # 3 float - anno['rotation_y'][idx], - anno['score'][idx]), - file=f, - ) - print(f'Result is saved to {submission_prefix}') - - return det_annos - - def convert_valid_bboxes(self, box_dict, info): - """Convert the predicted boxes into valid ones. - - Args: - box_dict (dict): Box dictionaries to be converted. - - - boxes_3d (:obj:`LiDARInstance3DBoxes`): 3D bounding boxes. - - scores_3d (torch.Tensor): Scores of boxes. - - labels_3d (torch.Tensor): Class labels of boxes. - info (dict): Data info. - - Returns: - dict: Valid predicted boxes. - - - bbox (np.ndarray): 2D bounding boxes. - - box3d_camera (np.ndarray): 3D bounding boxes in - camera coordinate. - - box3d_lidar (np.ndarray): 3D bounding boxes in - LiDAR coordinate. - - scores (np.ndarray): Scores of boxes. - - label_preds (np.ndarray): Class label predictions. - - sample_idx (int): Sample index. 
- """ - # TODO: refactor this function - box_preds = box_dict['boxes_3d'] - scores = box_dict['scores_3d'] - labels = box_dict['labels_3d'] - sample_idx = info['image']['image_idx'] - box_preds.limit_yaw(offset=0.5, period=np.pi * 2) - - if len(box_preds) == 0: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - box3d_lidar=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx) - - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P2 = info['calib']['P2'].astype(np.float32) - img_shape = info['image']['image_shape'] - P2 = box_preds.tensor.new_tensor(P2) - - box_preds_camera = box_preds.convert_to(Box3DMode.CAM, rect @ Trv2c) - - box_corners = box_preds_camera.corners - box_corners_in_image = points_cam2img(box_corners, P2) - # box_corners_in_image: [N, 8, 2] - minxy = torch.min(box_corners_in_image, dim=1)[0] - maxxy = torch.max(box_corners_in_image, dim=1)[0] - box_2d_preds = torch.cat([minxy, maxxy], dim=1) - # Post-processing - # check box_preds_camera - image_shape = box_preds.tensor.new_tensor(img_shape) - valid_cam_inds = ((box_2d_preds[:, 0] < image_shape[1]) & - (box_2d_preds[:, 1] < image_shape[0]) & - (box_2d_preds[:, 2] > 0) & (box_2d_preds[:, 3] > 0)) - # check box_preds - limit_range = box_preds.tensor.new_tensor(self.pcd_limit_range) - valid_pcd_inds = ((box_preds.center > limit_range[:3]) & - (box_preds.center < limit_range[3:])) - valid_inds = valid_cam_inds & valid_pcd_inds.all(-1) - - if valid_inds.sum() > 0: - return dict( - bbox=box_2d_preds[valid_inds, :].numpy(), - box3d_camera=box_preds_camera[valid_inds].tensor.numpy(), - box3d_lidar=box_preds[valid_inds].tensor.numpy(), - scores=scores[valid_inds].numpy(), - label_preds=labels[valid_inds].numpy(), - sample_idx=sample_idx) - else: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - box3d_lidar=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx) - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=dict(backend='disk')), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - if self.modality['use_camera']: - pipeline.insert(0, dict(type='LoadImageFromFile')) - return Compose(pipeline) - - def show(self, results, out_dir, show=True, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Whether to visualize the results online. - Default: False. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' 
- pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - if 'pts_bbox' in result.keys(): - result = result['pts_bbox'] - data_info = self.data_infos[i] - pts_path = data_info['point_cloud']['velodyne_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points, img_metas, img = self._extract_data( - i, pipeline, ['points', 'img_metas', 'img']) - points = points.numpy() - # for now we convert points into depth mode - points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy() - show_gt_bboxes = Box3DMode.convert(gt_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - pred_bboxes = result['boxes_3d'].tensor.numpy() - show_pred_bboxes = Box3DMode.convert(pred_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - show_result(points, show_gt_bboxes, show_pred_bboxes, out_dir, - file_name, show) - - # multi-modality visualization - if self.modality['use_camera'] and 'lidar2img' in img_metas.keys(): - img = img.numpy() - # need to transpose channel to first dim - img = img.transpose(1, 2, 0) - show_pred_bboxes = LiDARInstance3DBoxes( - pred_bboxes, origin=(0.5, 0.5, 0)) - show_gt_bboxes = LiDARInstance3DBoxes( - gt_bboxes, origin=(0.5, 0.5, 0)) - show_multi_modality_result( - img, - show_gt_bboxes, - show_pred_bboxes, - img_metas['lidar2img'], - out_dir, - file_name, - box_mode='lidar', - show=show) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/kitti_mono_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/kitti_mono_dataset.py deleted file mode 100644 index c669b0af..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/kitti_mono_dataset.py +++ /dev/null @@ -1,569 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import tempfile -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.utils import print_log - -from ..core.bbox import Box3DMode, CameraInstance3DBoxes, points_cam2img -from .builder import DATASETS -from .nuscenes_mono_dataset import NuScenesMonoDataset - - -@DATASETS.register_module() -class KittiMonoDataset(NuScenesMonoDataset): - """Monocular 3D detection on KITTI Dataset. - - Args: - data_root (str): Path of dataset root. - info_file (str): Path of info file. - load_interval (int, optional): Interval of loading the dataset. It is - used to uniformly sample the dataset. Defaults to 1. - with_velocity (bool, optional): Whether include velocity prediction - into the experiments. Defaults to False. - eval_version (str, optional): Configuration version of evaluation. - Defaults to None. - version (str, optional): Dataset version. Defaults to None. - kwargs (dict): Other arguments are the same of NuScenesMonoDataset. - """ - - CLASSES = ('Pedestrian', 'Cyclist', 'Car') - - def __init__(self, - data_root, - info_file, - ann_file, - pipeline, - load_interval=1, - with_velocity=False, - eval_version=None, - version=None, - **kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - load_interval=load_interval, - with_velocity=with_velocity, - eval_version=eval_version, - version=version, - **kwargs) - self.anno_infos = mmcv.load(info_file) - self.bbox_code_size = 7 - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - ann_info (list[dict]): Annotation info of an image. - with_mask (bool): Whether to parse mask annotations. 
- - Returns: - dict: A dict containing the following keys: bboxes, bboxes_ignore, - labels, masks, seg_map. "masks" are raw annotations and not - decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - gt_bboxes_cam3d = [] - centers2d = [] - depths = [] - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - inter_w = max(0, min(x1 + w, img_info['width']) - max(x1, 0)) - inter_h = max(0, min(y1 + h, img_info['height']) - max(y1, 0)) - if inter_w * inter_h == 0: - continue - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann.get('segmentation', None)) - # 3D annotations in camera coordinates - bbox_cam3d = np.array(ann['bbox_cam3d']).reshape(-1, ) - gt_bboxes_cam3d.append(bbox_cam3d) - # 2.5D annotations in camera coordinates - center2d = ann['center2d'][:2] - depth = ann['center2d'][2] - centers2d.append(center2d) - depths.append(depth) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_cam3d: - gt_bboxes_cam3d = np.array(gt_bboxes_cam3d, dtype=np.float32) - centers2d = np.array(centers2d, dtype=np.float32) - depths = np.array(depths, dtype=np.float32) - else: - gt_bboxes_cam3d = np.zeros((0, self.bbox_code_size), - dtype=np.float32) - centers2d = np.zeros((0, 2), dtype=np.float32) - depths = np.zeros((0), dtype=np.float32) - - gt_bboxes_cam3d = CameraInstance3DBoxes( - gt_bboxes_cam3d, - box_dim=gt_bboxes_cam3d.shape[-1], - origin=(0.5, 0.5, 0.5)) - gt_labels_3d = copy.deepcopy(gt_labels) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - seg_map = img_info['filename'].replace('jpg', 'png') - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - gt_bboxes_3d=gt_bboxes_cam3d, - gt_labels_3d=gt_labels_3d, - centers2d=centers2d, - depths=depths, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=seg_map) - - return ann - - def format_results(self, - outputs, - pklfile_prefix=None, - submission_prefix=None): - """Format the results to pkl file. - - Args: - outputs (list[dict]): Testing results of the dataset. - pklfile_prefix (str): The prefix of pkl files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str): The prefix of submitted files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". If not specified, a temp file will be created. - Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing - the json filepaths, tmp_dir is the temporal directory created - for saving json files when jsonfile_prefix is not specified. 
- """ - if pklfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - if not isinstance(outputs[0], dict): - result_files = self.bbox2result_kitti2d(outputs, self.CLASSES, - pklfile_prefix, - submission_prefix) - elif 'pts_bbox' in outputs[0] or 'img_bbox' in outputs[0] or \ - 'img_bbox2d' in outputs[0]: - result_files = dict() - for name in outputs[0]: - results_ = [out[name] for out in outputs] - pklfile_prefix_ = pklfile_prefix + name - if submission_prefix is not None: - submission_prefix_ = submission_prefix + name - else: - submission_prefix_ = None - if '2d' in name: - result_files_ = self.bbox2result_kitti2d( - results_, self.CLASSES, pklfile_prefix_, - submission_prefix_) - else: - result_files_ = self.bbox2result_kitti( - results_, self.CLASSES, pklfile_prefix_, - submission_prefix_) - result_files[name] = result_files_ - else: - result_files = self.bbox2result_kitti(outputs, self.CLASSES, - pklfile_prefix, - submission_prefix) - return result_files, tmp_dir - - def evaluate(self, - results, - metric=None, - logger=None, - pklfile_prefix=None, - submission_prefix=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluation in KITTI protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str], optional): Metrics to be evaluated. - Defaults to None. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - pklfile_prefix (str, optional): The prefix of pkl files, including - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str, optional): The prefix of submission data. - If not specified, the submission data will not be generated. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict[str, float]: Results of each evaluation metric. - """ - result_files, tmp_dir = self.format_results(results, pklfile_prefix) - from mmdet3d.core.evaluation import kitti_eval - gt_annos = [info['annos'] for info in self.anno_infos] - - if isinstance(result_files, dict): - ap_dict = dict() - for name, result_files_ in result_files.items(): - eval_types = ['bbox', 'bev', '3d'] - if '2d' in name: - eval_types = ['bbox'] - ap_result_str, ap_dict_ = kitti_eval( - gt_annos, - result_files_, - self.CLASSES, - eval_types=eval_types) - for ap_type, ap in ap_dict_.items(): - ap_dict[f'{name}/{ap_type}'] = float('{:.4f}'.format(ap)) - - print_log( - f'Results of {name}:\n' + ap_result_str, logger=logger) - - else: - if metric == 'img_bbox2d': - ap_result_str, ap_dict = kitti_eval( - gt_annos, result_files, self.CLASSES, eval_types=['bbox']) - else: - ap_result_str, ap_dict = kitti_eval(gt_annos, result_files, - self.CLASSES) - print_log('\n' + ap_result_str, logger=logger) - - if tmp_dir is not None: - tmp_dir.cleanup() - if show or out_dir: - self.show(results, out_dir, show=show, pipeline=pipeline) - return ap_dict - - def bbox2result_kitti(self, - net_outputs, - class_names, - pklfile_prefix=None, - submission_prefix=None): - """Convert 3D detection results to kitti format for evaluation and test - submission. - - Args: - net_outputs (list[np.ndarray]): List of array storing the - inferenced bounding boxes and scores. 
- class_names (list[String]): A list of class names. - pklfile_prefix (str): The prefix of pkl file. - submission_prefix (str): The prefix of submission file. - - Returns: - list[dict]: A list of dictionaries with the kitti format. - """ - assert len(net_outputs) == len(self.anno_infos) - if submission_prefix is not None: - mmcv.mkdir_or_exist(submission_prefix) - - det_annos = [] - print('\nConverting prediction to KITTI format') - for idx, pred_dicts in enumerate( - mmcv.track_iter_progress(net_outputs)): - annos = [] - info = self.anno_infos[idx] - sample_idx = info['image']['image_idx'] - image_shape = info['image']['image_shape'][:2] - - box_dict = self.convert_valid_bboxes(pred_dicts, info) - anno = { - 'name': [], - 'truncated': [], - 'occluded': [], - 'alpha': [], - 'bbox': [], - 'dimensions': [], - 'location': [], - 'rotation_y': [], - 'score': [] - } - if len(box_dict['bbox']) > 0: - box_2d_preds = box_dict['bbox'] - box_preds = box_dict['box3d_camera'] - scores = box_dict['scores'] - box_preds_lidar = box_dict['box3d_lidar'] - label_preds = box_dict['label_preds'] - - for box, box_lidar, bbox, score, label in zip( - box_preds, box_preds_lidar, box_2d_preds, scores, - label_preds): - bbox[2:] = np.minimum(bbox[2:], image_shape[::-1]) - bbox[:2] = np.maximum(bbox[:2], [0, 0]) - anno['name'].append(class_names[int(label)]) - anno['truncated'].append(0.0) - anno['occluded'].append(0) - anno['alpha'].append(-np.arctan2(box[0], box[2]) + box[6]) - anno['bbox'].append(bbox) - anno['dimensions'].append(box[3:6]) - anno['location'].append(box[:3]) - anno['rotation_y'].append(box[6]) - anno['score'].append(score) - - anno = {k: np.stack(v) for k, v in anno.items()} - annos.append(anno) - - else: - anno = { - 'name': np.array([]), - 'truncated': np.array([]), - 'occluded': np.array([]), - 'alpha': np.array([]), - 'bbox': np.zeros([0, 4]), - 'dimensions': np.zeros([0, 3]), - 'location': np.zeros([0, 3]), - 'rotation_y': np.array([]), - 'score': np.array([]), - } - annos.append(anno) - - if submission_prefix is not None: - curr_file = f'{submission_prefix}/{sample_idx:06d}.txt' - with open(curr_file, 'w') as f: - bbox = anno['bbox'] - loc = anno['location'] - dims = anno['dimensions'] # lhw -> hwl - - for idx in range(len(bbox)): - print( - '{} -1 -1 {:.4f} {:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} {:.4f} {:.4f} {:.4f}'.format( - anno['name'][idx], anno['alpha'][idx], - bbox[idx][0], bbox[idx][1], bbox[idx][2], - bbox[idx][3], dims[idx][1], dims[idx][2], - dims[idx][0], loc[idx][0], loc[idx][1], - loc[idx][2], anno['rotation_y'][idx], - anno['score'][idx]), - file=f) - - annos[-1]['sample_idx'] = np.array( - [sample_idx] * len(annos[-1]['score']), dtype=np.int64) - - det_annos += annos - - if pklfile_prefix is not None: - if not pklfile_prefix.endswith(('.pkl', '.pickle')): - out = f'{pklfile_prefix}.pkl' - mmcv.dump(det_annos, out) - print('Result is saved to %s' % out) - - return det_annos - - def bbox2result_kitti2d(self, - net_outputs, - class_names, - pklfile_prefix=None, - submission_prefix=None): - """Convert 2D detection results to kitti format for evaluation and test - submission. - - Args: - net_outputs (list[np.ndarray]): List of array storing the - inferenced bounding boxes and scores. - class_names (list[String]): A list of class names. - pklfile_prefix (str): The prefix of pkl file. - submission_prefix (str): The prefix of submission file. 
- - Returns: - list[dict]: A list of dictionaries have the kitti format - """ - assert len(net_outputs) == len(self.anno_infos) - - det_annos = [] - print('\nConverting prediction to KITTI format') - for i, bboxes_per_sample in enumerate( - mmcv.track_iter_progress(net_outputs)): - annos = [] - anno = dict( - name=[], - truncated=[], - occluded=[], - alpha=[], - bbox=[], - dimensions=[], - location=[], - rotation_y=[], - score=[]) - sample_idx = self.anno_infos[i]['image']['image_idx'] - - num_example = 0 - for label in range(len(bboxes_per_sample)): - bbox = bboxes_per_sample[label] - for i in range(bbox.shape[0]): - anno['name'].append(class_names[int(label)]) - anno['truncated'].append(0.0) - anno['occluded'].append(0) - anno['alpha'].append(-10) - anno['bbox'].append(bbox[i, :4]) - # set dimensions (height, width, length) to zero - anno['dimensions'].append( - np.zeros(shape=[3], dtype=np.float32)) - # set the 3D translation to (-1000, -1000, -1000) - anno['location'].append( - np.ones(shape=[3], dtype=np.float32) * (-1000.0)) - anno['rotation_y'].append(0.0) - anno['score'].append(bbox[i, 4]) - num_example += 1 - - if num_example == 0: - annos.append( - dict( - name=np.array([]), - truncated=np.array([]), - occluded=np.array([]), - alpha=np.array([]), - bbox=np.zeros([0, 4]), - dimensions=np.zeros([0, 3]), - location=np.zeros([0, 3]), - rotation_y=np.array([]), - score=np.array([]), - )) - else: - anno = {k: np.stack(v) for k, v in anno.items()} - annos.append(anno) - - annos[-1]['sample_idx'] = np.array( - [sample_idx] * num_example, dtype=np.int64) - det_annos += annos - - if pklfile_prefix is not None: - if not pklfile_prefix.endswith(('.pkl', '.pickle')): - out = f'{pklfile_prefix}.pkl' - mmcv.dump(det_annos, out) - print('Result is saved to %s' % out) - - if submission_prefix is not None: - # save file in submission format - mmcv.mkdir_or_exist(submission_prefix) - print(f'Saving KITTI submission to {submission_prefix}') - for i, anno in enumerate(det_annos): - sample_idx = self.anno_infos[i]['image']['image_idx'] - cur_det_file = f'{submission_prefix}/{sample_idx:06d}.txt' - with open(cur_det_file, 'w') as f: - bbox = anno['bbox'] - loc = anno['location'] - dims = anno['dimensions'][::-1] # lhw -> hwl - for idx in range(len(bbox)): - print( - '{} -1 -1 {:4f} {:4f} {:4f} {:4f} {:4f} {:4f} ' - '{:4f} {:4f} {:4f} {:4f} {:4f} {:4f} {:4f}'.format( - anno['name'][idx], - anno['alpha'][idx], - *bbox[idx], # 4 float - *dims[idx], # 3 float - *loc[idx], # 3 float - anno['rotation_y'][idx], - anno['score'][idx]), - file=f, - ) - print(f'Result is saved to {submission_prefix}') - - return det_annos - - def convert_valid_bboxes(self, box_dict, info): - """Convert the predicted boxes into valid ones. - - Args: - box_dict (dict): Box dictionaries to be converted. - - boxes_3d (:obj:`CameraInstance3DBoxes`): 3D bounding boxes. - - scores_3d (torch.Tensor): Scores of boxes. - - labels_3d (torch.Tensor): Class labels of boxes. - info (dict): Data info. - - Returns: - dict: Valid predicted boxes. - - bbox (np.ndarray): 2D bounding boxes. - - box3d_camera (np.ndarray): 3D bounding boxes in - camera coordinate. - - scores (np.ndarray): Scores of boxes. - - label_preds (np.ndarray): Class label predictions. - - sample_idx (int): Sample index. 
- """ - box_preds = box_dict['boxes_3d'] - scores = box_dict['scores_3d'] - labels = box_dict['labels_3d'] - sample_idx = info['image']['image_idx'] - - if len(box_preds) == 0: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx) - - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P2 = info['calib']['P2'].astype(np.float32) - img_shape = info['image']['image_shape'] - P2 = box_preds.tensor.new_tensor(P2) - - box_preds_camera = box_preds - box_preds_lidar = box_preds.convert_to(Box3DMode.LIDAR, - np.linalg.inv(rect @ Trv2c)) - - box_corners = box_preds_camera.corners - box_corners_in_image = points_cam2img(box_corners, P2) - # box_corners_in_image: [N, 8, 2] - minxy = torch.min(box_corners_in_image, dim=1)[0] - maxxy = torch.max(box_corners_in_image, dim=1)[0] - box_2d_preds = torch.cat([minxy, maxxy], dim=1) - # Post-processing - # check box_preds_camera - image_shape = box_preds.tensor.new_tensor(img_shape) - valid_cam_inds = ((box_2d_preds[:, 0] < image_shape[1]) & - (box_2d_preds[:, 1] < image_shape[0]) & - (box_2d_preds[:, 2] > 0) & (box_2d_preds[:, 3] > 0)) - # check box_preds - valid_inds = valid_cam_inds - - if valid_inds.sum() > 0: - return dict( - bbox=box_2d_preds[valid_inds, :].numpy(), - box3d_camera=box_preds_camera[valid_inds].tensor.numpy(), - box3d_lidar=box_preds_lidar[valid_inds].tensor.numpy(), - scores=scores[valid_inds].numpy(), - label_preds=labels[valid_inds].numpy(), - sample_idx=sample_idx) - else: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - box3d_lidar=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/lyft_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/lyft_dataset.py deleted file mode 100644 index c213c62f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/lyft_dataset.py +++ /dev/null @@ -1,569 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import os -import tempfile -from os import path as osp - -import mmcv -import numpy as np -import pandas as pd -from lyft_dataset_sdk.lyftdataset import LyftDataset as Lyft -from lyft_dataset_sdk.utils.data_classes import Box as LyftBox -from pyquaternion import Quaternion - -from mmdet3d.core.evaluation.lyft_eval import lyft_eval -from ..core import show_result -from ..core.bbox import Box3DMode, Coord3DMode, LiDARInstance3DBoxes -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class LyftDataset(Custom3DDataset): - r"""Lyft Dataset. - - This class serves as the API for experiments on the Lyft Dataset. - - Please refer to - ``_ - for data downloading. - - Args: - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - data_root (str): Path of dataset root. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - load_interval (int, optional): Interval of loading the dataset. It is - used to uniformly sample the dataset. Defaults to 1. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. 
- box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - """ # noqa: E501 - NameMapping = { - 'bicycle': 'bicycle', - 'bus': 'bus', - 'car': 'car', - 'emergency_vehicle': 'emergency_vehicle', - 'motorcycle': 'motorcycle', - 'other_vehicle': 'other_vehicle', - 'pedestrian': 'pedestrian', - 'truck': 'truck', - 'animal': 'animal' - } - DefaultAttribute = { - 'car': 'is_stationary', - 'truck': 'is_stationary', - 'bus': 'is_stationary', - 'emergency_vehicle': 'is_stationary', - 'other_vehicle': 'is_stationary', - 'motorcycle': 'is_stationary', - 'bicycle': 'is_stationary', - 'pedestrian': 'is_stationary', - 'animal': 'is_stationary' - } - CLASSES = ('car', 'truck', 'bus', 'emergency_vehicle', 'other_vehicle', - 'motorcycle', 'bicycle', 'pedestrian', 'animal') - - def __init__(self, - ann_file, - pipeline=None, - data_root=None, - classes=None, - load_interval=1, - modality=None, - box_type_3d='LiDAR', - filter_empty_gt=True, - test_mode=False, - **kwargs): - self.load_interval = load_interval - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - **kwargs) - - if self.modality is None: - self.modality = dict( - use_camera=False, - use_lidar=True, - use_radar=False, - use_map=False, - use_external=False, - ) - - def load_annotations(self, ann_file): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations sorted by timestamps. - """ - # loading data from a file-like object needs file format - data = mmcv.load(ann_file, file_format='pkl') - data_infos = list(sorted(data['infos'], key=lambda e: e['timestamp'])) - data_infos = data_infos[::self.load_interval] - self.metadata = data['metadata'] - self.version = self.metadata['version'] - return data_infos - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. 
It includes the following keys: - - - sample_idx (str): sample index - - pts_filename (str): filename of point clouds - - sweeps (list[dict]): infos of sweeps - - timestamp (float): sample timestamp - - img_filename (str, optional): image filename - - lidar2img (list[np.ndarray], optional): transformations - from lidar to different cameras - - ann_info (dict): annotation info - """ - info = self.data_infos[index] - - # standard protocol modified from SECOND.Pytorch - input_dict = dict( - sample_idx=info['token'], - pts_filename=info['lidar_path'], - sweeps=info['sweeps'], - timestamp=info['timestamp'] / 1e6, - ) - - if self.modality['use_camera']: - image_paths = [] - lidar2img_rts = [] - for cam_type, cam_info in info['cams'].items(): - image_paths.append(cam_info['data_path']) - # obtain lidar to image transformation matrix - lidar2cam_r = np.linalg.inv(cam_info['sensor2lidar_rotation']) - lidar2cam_t = cam_info[ - 'sensor2lidar_translation'] @ lidar2cam_r.T - lidar2cam_rt = np.eye(4) - lidar2cam_rt[:3, :3] = lidar2cam_r.T - lidar2cam_rt[3, :3] = -lidar2cam_t - intrinsic = cam_info['cam_intrinsic'] - viewpad = np.eye(4) - viewpad[:intrinsic.shape[0], :intrinsic.shape[1]] = intrinsic - lidar2img_rt = (viewpad @ lidar2cam_rt.T) - lidar2img_rts.append(lidar2img_rt) - - input_dict.update( - dict( - img_filename=image_paths, - lidar2img=lidar2img_rts, - )) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: Annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): - 3D ground truth bboxes. - - gt_labels_3d (np.ndarray): Labels of ground truths. - - gt_names (list[str]): Class names of ground truths. - """ - info = self.data_infos[index] - gt_bboxes_3d = info['gt_boxes'] - gt_names_3d = info['gt_names'] - gt_labels_3d = [] - for cat in gt_names_3d: - if cat in self.CLASSES: - gt_labels_3d.append(self.CLASSES.index(cat)) - else: - gt_labels_3d.append(-1) - gt_labels_3d = np.array(gt_labels_3d) - - if 'gt_shape' in info: - gt_shape = info['gt_shape'] - gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_shape], axis=-1) - - # the lyft box center is [0.5, 0.5, 0.5], we change it to be - # the same as KITTI (0.5, 0.5, 0) - gt_bboxes_3d = LiDARInstance3DBoxes( - gt_bboxes_3d, - box_dim=gt_bboxes_3d.shape[-1], - origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - ) - return anns_results - - def _format_bbox(self, results, jsonfile_prefix=None): - """Convert the results to the standard format. - - Args: - results (list[dict]): Testing results of the dataset. - jsonfile_prefix (str): The prefix of the output jsonfile. - You can specify the output directory/filename by - modifying the jsonfile_prefix. Default: None. - - Returns: - str: Path of the output json file. 
- """ - lyft_annos = {} - mapped_class_names = self.CLASSES - - print('Start to convert detection format...') - for sample_id, det in enumerate(mmcv.track_iter_progress(results)): - annos = [] - boxes = output_to_lyft_box(det) - sample_token = self.data_infos[sample_id]['token'] - boxes = lidar_lyft_box_to_global(self.data_infos[sample_id], boxes) - for i, box in enumerate(boxes): - name = mapped_class_names[box.label] - lyft_anno = dict( - sample_token=sample_token, - translation=box.center.tolist(), - size=box.wlh.tolist(), - rotation=box.orientation.elements.tolist(), - name=name, - score=box.score) - annos.append(lyft_anno) - lyft_annos[sample_token] = annos - lyft_submissions = { - 'meta': self.modality, - 'results': lyft_annos, - } - - mmcv.mkdir_or_exist(jsonfile_prefix) - res_path = osp.join(jsonfile_prefix, 'results_lyft.json') - print('Results writes to', res_path) - mmcv.dump(lyft_submissions, res_path) - return res_path - - def _evaluate_single(self, - result_path, - logger=None, - metric='bbox', - result_name='pts_bbox'): - """Evaluation for a single model in Lyft protocol. - - Args: - result_path (str): Path of the result file. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - metric (str, optional): Metric name used for evaluation. - Default: 'bbox'. - result_name (str, optional): Result name in the metric prefix. - Default: 'pts_bbox'. - - Returns: - dict: Dictionary of evaluation details. - """ - - output_dir = osp.join(*osp.split(result_path)[:-1]) - lyft = Lyft( - data_path=osp.join(self.data_root, self.version), - json_path=osp.join(self.data_root, self.version, self.version), - verbose=True) - eval_set_map = { - 'v1.01-train': 'val', - } - metrics = lyft_eval(lyft, self.data_root, result_path, - eval_set_map[self.version], output_dir, logger) - - # record metrics - detail = dict() - metric_prefix = f'{result_name}_Lyft' - - for i, name in enumerate(metrics['class_names']): - AP = float(metrics['mAPs_cate'][i]) - detail[f'{metric_prefix}/{name}_AP'] = AP - - detail[f'{metric_prefix}/mAP'] = metrics['Final mAP'] - return detail - - def format_results(self, results, jsonfile_prefix=None, csv_savepath=None): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[dict]): Testing results of the dataset. - jsonfile_prefix (str): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - csv_savepath (str): The path for saving csv files. - It includes the file path and the csv filename, - e.g., "a/b/filename.csv". If not specified, - the result will not be converted to csv file. - - Returns: - tuple: Returns (result_files, tmp_dir), where `result_files` is a - dict containing the json filepaths, `tmp_dir` is the temporal - directory created for saving json files when - `jsonfile_prefix` is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - # currently the output prediction results could be in two formats - # 1. list of dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...) - # 2. 
list of dict('pts_bbox' or 'img_bbox': - # dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...)) - # this is a workaround to enable evaluation of both formats on Lyft - # refer to https://github.com/open-mmlab/mmdetection3d/issues/449 - if not ('pts_bbox' in results[0] or 'img_bbox' in results[0]): - result_files = self._format_bbox(results, jsonfile_prefix) - else: - # should take the inner dict out of 'pts_bbox' or 'img_bbox' dict - result_files = dict() - for name in results[0]: - print(f'\nFormating bboxes of {name}') - results_ = [out[name] for out in results] - tmp_file_ = osp.join(jsonfile_prefix, name) - result_files.update( - {name: self._format_bbox(results_, tmp_file_)}) - if csv_savepath is not None: - self.json2csv(result_files['pts_bbox'], csv_savepath) - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - csv_savepath=None, - result_names=['pts_bbox'], - show=False, - out_dir=None, - pipeline=None): - """Evaluation in Lyft protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str], optional): Metrics to be evaluated. - Default: 'bbox'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str, optional): The prefix of json files including - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - csv_savepath (str, optional): The path for saving csv files. - It includes the file path and the csv filename, - e.g., "a/b/filename.csv". If not specified, - the result will not be converted to csv file. - result_names (list[str], optional): Result names in the - metric prefix. Default: ['pts_bbox']. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict[str, float]: Evaluation results. - """ - result_files, tmp_dir = self.format_results(results, jsonfile_prefix, - csv_savepath) - - if isinstance(result_files, dict): - results_dict = dict() - for name in result_names: - print(f'Evaluating bboxes of {name}') - ret_dict = self._evaluate_single(result_files[name]) - results_dict.update(ret_dict) - elif isinstance(result_files, str): - results_dict = self._evaluate_single(result_files) - - if tmp_dir is not None: - tmp_dir.cleanup() - - if show or out_dir: - self.show(results, out_dir, show=show, pipeline=pipeline) - return results_dict - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=dict(backend='disk')), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=dict(backend='disk')), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=False, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Whether to visualize the results online. - Default: False. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. 
- """ - assert out_dir is not None, 'Expect out_dir, got none.' - pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - if 'pts_bbox' in result.keys(): - result = result['pts_bbox'] - data_info = self.data_infos[i] - pts_path = data_info['lidar_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points = self._extract_data(i, pipeline, 'points').numpy() - points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - inds = result['scores_3d'] > 0.1 - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy() - show_gt_bboxes = Box3DMode.convert(gt_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - pred_bboxes = result['boxes_3d'][inds].tensor.numpy() - show_pred_bboxes = Box3DMode.convert(pred_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - show_result(points, show_gt_bboxes, show_pred_bboxes, out_dir, - file_name, show) - - def json2csv(self, json_path, csv_savepath): - """Convert the json file to csv format for submission. - - Args: - json_path (str): Path of the result json file. - csv_savepath (str): Path to save the csv file. - """ - results = mmcv.load(json_path)['results'] - sample_list_path = osp.join(self.data_root, 'sample_submission.csv') - data = pd.read_csv(sample_list_path) - Id_list = list(data['Id']) - pred_list = list(data['PredictionString']) - cnt = 0 - print('Converting the json to csv...') - for token in results.keys(): - cnt += 1 - predictions = results[token] - prediction_str = '' - for i in range(len(predictions)): - prediction_str += \ - str(predictions[i]['score']) + ' ' + \ - str(predictions[i]['translation'][0]) + ' ' + \ - str(predictions[i]['translation'][1]) + ' ' + \ - str(predictions[i]['translation'][2]) + ' ' + \ - str(predictions[i]['size'][0]) + ' ' + \ - str(predictions[i]['size'][1]) + ' ' + \ - str(predictions[i]['size'][2]) + ' ' + \ - str(Quaternion(list(predictions[i]['rotation'])) - .yaw_pitch_roll[0]) + ' ' + \ - predictions[i]['name'] + ' ' - prediction_str = prediction_str[:-1] - idx = Id_list.index(token) - pred_list[idx] = prediction_str - df = pd.DataFrame({'Id': Id_list, 'PredictionString': pred_list}) - mmcv.mkdir_or_exist(os.path.dirname(csv_savepath)) - df.to_csv(csv_savepath, index=False) - - -def output_to_lyft_box(detection): - """Convert the output to the box class in the Lyft. - - Args: - detection (dict): Detection results. - - Returns: - list[:obj:`LyftBox`]: List of standard LyftBoxes. - """ - box3d = detection['boxes_3d'] - scores = detection['scores_3d'].numpy() - labels = detection['labels_3d'].numpy() - - box_gravity_center = box3d.gravity_center.numpy() - box_dims = box3d.dims.numpy() - box_yaw = box3d.yaw.numpy() - - # our LiDAR coordinate system -> Lyft box coordinate system - lyft_box_dims = box_dims[:, [1, 0, 2]] - - box_list = [] - for i in range(len(box3d)): - quat = Quaternion(axis=[0, 0, 1], radians=box_yaw[i]) - box = LyftBox( - box_gravity_center[i], - lyft_box_dims[i], - quat, - label=labels[i], - score=scores[i]) - box_list.append(box) - return box_list - - -def lidar_lyft_box_to_global(info, boxes): - """Convert the box from ego to global coordinate. - - Args: - info (dict): Info for a specific sample data, including the - calibration information. - boxes (list[:obj:`LyftBox`]): List of predicted LyftBoxes. - - Returns: - list: List of standard LyftBoxes in the global - coordinate. 
- """ - box_list = [] - for box in boxes: - # Move box to ego vehicle coord system - box.rotate(Quaternion(info['lidar2ego_rotation'])) - box.translate(np.array(info['lidar2ego_translation'])) - # Move box to global coord system - box.rotate(Quaternion(info['ego2global_rotation'])) - box.translate(np.array(info['ego2global_translation'])) - box_list.append(box) - return box_list diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/nuscenes_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/nuscenes_dataset.py deleted file mode 100644 index 9a8a35f4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/nuscenes_dataset.py +++ /dev/null @@ -1,656 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import tempfile -from os import path as osp - -import mmcv -import numpy as np -import pyquaternion -from nuscenes.utils.data_classes import Box as NuScenesBox - -from ..core import show_result -from ..core.bbox import Box3DMode, Coord3DMode, LiDARInstance3DBoxes -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class NuScenesDataset(Custom3DDataset): - r"""NuScenes Dataset. - - This class serves as the API for experiments on the NuScenes Dataset. - - Please refer to `NuScenes Dataset `_ - for data downloading. - - Args: - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - data_root (str): Path of dataset root. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - load_interval (int, optional): Interval of loading the dataset. It is - used to uniformly sample the dataset. Defaults to 1. - with_velocity (bool, optional): Whether include velocity prediction - into the experiments. Defaults to True. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR' in this dataset. Available options includes. - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - eval_version (bool, optional): Configuration version of evaluation. - Defaults to 'detection_cvpr_2019'. - use_valid_flag (bool, optional): Whether to use `use_valid_flag` key - in the info file as mask to filter gt_boxes and gt_names. - Defaults to False. 
- """ - NameMapping = { - 'movable_object.barrier': 'barrier', - 'vehicle.bicycle': 'bicycle', - 'vehicle.bus.bendy': 'bus', - 'vehicle.bus.rigid': 'bus', - 'vehicle.car': 'car', - 'vehicle.construction': 'construction_vehicle', - 'vehicle.motorcycle': 'motorcycle', - 'human.pedestrian.adult': 'pedestrian', - 'human.pedestrian.child': 'pedestrian', - 'human.pedestrian.construction_worker': 'pedestrian', - 'human.pedestrian.police_officer': 'pedestrian', - 'movable_object.trafficcone': 'traffic_cone', - 'vehicle.trailer': 'trailer', - 'vehicle.truck': 'truck' - } - DefaultAttribute = { - 'car': 'vehicle.parked', - 'pedestrian': 'pedestrian.moving', - 'trailer': 'vehicle.parked', - 'truck': 'vehicle.parked', - 'bus': 'vehicle.moving', - 'motorcycle': 'cycle.without_rider', - 'construction_vehicle': 'vehicle.parked', - 'bicycle': 'cycle.without_rider', - 'barrier': '', - 'traffic_cone': '', - } - AttrMapping = { - 'cycle.with_rider': 0, - 'cycle.without_rider': 1, - 'pedestrian.moving': 2, - 'pedestrian.standing': 3, - 'pedestrian.sitting_lying_down': 4, - 'vehicle.moving': 5, - 'vehicle.parked': 6, - 'vehicle.stopped': 7, - } - AttrMapping_rev = [ - 'cycle.with_rider', - 'cycle.without_rider', - 'pedestrian.moving', - 'pedestrian.standing', - 'pedestrian.sitting_lying_down', - 'vehicle.moving', - 'vehicle.parked', - 'vehicle.stopped', - ] - # https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/eval/detection/evaluate.py#L222 # noqa - ErrNameMapping = { - 'trans_err': 'mATE', - 'scale_err': 'mASE', - 'orient_err': 'mAOE', - 'vel_err': 'mAVE', - 'attr_err': 'mAAE' - } - CLASSES = ('car', 'truck', 'trailer', 'bus', 'construction_vehicle', - 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', - 'barrier') - - def __init__(self, - ann_file, - pipeline=None, - data_root=None, - classes=None, - load_interval=1, - with_velocity=True, - modality=None, - box_type_3d='LiDAR', - filter_empty_gt=True, - test_mode=False, - eval_version='detection_cvpr_2019', - use_valid_flag=False): - self.load_interval = load_interval - self.use_valid_flag = use_valid_flag - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode) - - self.with_velocity = with_velocity - self.eval_version = eval_version - from nuscenes.eval.detection.config import config_factory - self.eval_detection_configs = config_factory(self.eval_version) - if self.modality is None: - self.modality = dict( - use_camera=False, - use_lidar=True, - use_radar=False, - use_map=False, - use_external=False, - ) - - def get_cat_ids(self, idx): - """Get category distribution of single scene. - - Args: - idx (int): Index of the data_info. - - Returns: - dict[list]: for each category, if the current scene - contains such boxes, store a list containing idx, - otherwise, store empty list. - """ - info = self.data_infos[idx] - if self.use_valid_flag: - mask = info['valid_flag'] - gt_names = set(info['gt_names'][mask]) - else: - gt_names = set(info['gt_names']) - - cat_ids = [] - for name in gt_names: - if name in self.CLASSES: - cat_ids.append(self.cat2id[name]) - return cat_ids - - def load_annotations(self, ann_file): - """Load annotations from ann_file. - - Args: - ann_file (str): Path of the annotation file. - - Returns: - list[dict]: List of annotations sorted by timestamps. 
- """ - data = mmcv.load(ann_file, file_format='pkl') - data_infos = list(sorted(data['infos'], key=lambda e: e['timestamp'])) - data_infos = data_infos[::self.load_interval] - self.metadata = data['metadata'] - self.version = self.metadata['version'] - return data_infos - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - sweeps (list[dict]): Infos of sweeps. - - timestamp (float): Sample timestamp. - - img_filename (str, optional): Image filename. - - lidar2img (list[np.ndarray], optional): Transformations - from lidar to different cameras. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - # standard protocol modified from SECOND.Pytorch - input_dict = dict( - sample_idx=info['token'], - pts_filename=info['lidar_path'], - sweeps=info['sweeps'], - timestamp=info['timestamp'] / 1e6, - ) - - if self.modality['use_camera']: - image_paths = [] - lidar2img_rts = [] - for cam_type, cam_info in info['cams'].items(): - image_paths.append(cam_info['data_path']) - # obtain lidar to image transformation matrix - lidar2cam_r = np.linalg.inv(cam_info['sensor2lidar_rotation']) - lidar2cam_t = cam_info[ - 'sensor2lidar_translation'] @ lidar2cam_r.T - lidar2cam_rt = np.eye(4) - lidar2cam_rt[:3, :3] = lidar2cam_r.T - lidar2cam_rt[3, :3] = -lidar2cam_t - intrinsic = cam_info['cam_intrinsic'] - viewpad = np.eye(4) - viewpad[:intrinsic.shape[0], :intrinsic.shape[1]] = intrinsic - lidar2img_rt = (viewpad @ lidar2cam_rt.T) - lidar2img_rts.append(lidar2img_rt) - - input_dict.update( - dict( - img_filename=image_paths, - lidar2img=lidar2img_rts, - )) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: Annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): - 3D ground truth bboxes - - gt_labels_3d (np.ndarray): Labels of ground truths. - - gt_names (list[str]): Class names of ground truths. 
- """ - info = self.data_infos[index] - # filter out bbox containing no points - if self.use_valid_flag: - mask = info['valid_flag'] - else: - mask = info['num_lidar_pts'] > 0 - gt_bboxes_3d = info['gt_boxes'][mask] - gt_names_3d = info['gt_names'][mask] - gt_labels_3d = [] - for cat in gt_names_3d: - if cat in self.CLASSES: - gt_labels_3d.append(self.CLASSES.index(cat)) - else: - gt_labels_3d.append(-1) - gt_labels_3d = np.array(gt_labels_3d) - - if self.with_velocity: - gt_velocity = info['gt_velocity'][mask] - nan_mask = np.isnan(gt_velocity[:, 0]) - gt_velocity[nan_mask] = [0.0, 0.0] - gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_velocity], axis=-1) - - # the nuscenes box center is [0.5, 0.5, 0.5], we change it to be - # the same as KITTI (0.5, 0.5, 0) - gt_bboxes_3d = LiDARInstance3DBoxes( - gt_bboxes_3d, - box_dim=gt_bboxes_3d.shape[-1], - origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - gt_names=gt_names_3d) - return anns_results - - def _format_bbox(self, results, jsonfile_prefix=None): - """Convert the results to the standard format. - - Args: - results (list[dict]): Testing results of the dataset. - jsonfile_prefix (str): The prefix of the output jsonfile. - You can specify the output directory/filename by - modifying the jsonfile_prefix. Default: None. - - Returns: - str: Path of the output json file. - """ - nusc_annos = {} - mapped_class_names = self.CLASSES - - print('Start to convert detection format...') - for sample_id, det in enumerate(mmcv.track_iter_progress(results)): - annos = [] - boxes = output_to_nusc_box(det) - sample_token = self.data_infos[sample_id]['token'] - boxes = lidar_nusc_box_to_global(self.data_infos[sample_id], boxes, - mapped_class_names, - self.eval_detection_configs, - self.eval_version) - for i, box in enumerate(boxes): - name = mapped_class_names[box.label] - if np.sqrt(box.velocity[0]**2 + box.velocity[1]**2) > 0.2: - if name in [ - 'car', - 'construction_vehicle', - 'bus', - 'truck', - 'trailer', - ]: - attr = 'vehicle.moving' - elif name in ['bicycle', 'motorcycle']: - attr = 'cycle.with_rider' - else: - attr = NuScenesDataset.DefaultAttribute[name] - else: - if name in ['pedestrian']: - attr = 'pedestrian.standing' - elif name in ['bus']: - attr = 'vehicle.stopped' - else: - attr = NuScenesDataset.DefaultAttribute[name] - - nusc_anno = dict( - sample_token=sample_token, - translation=box.center.tolist(), - size=box.wlh.tolist(), - rotation=box.orientation.elements.tolist(), - velocity=box.velocity[:2].tolist(), - detection_name=name, - detection_score=box.score, - attribute_name=attr) - annos.append(nusc_anno) - nusc_annos[sample_token] = annos - nusc_submissions = { - 'meta': self.modality, - 'results': nusc_annos, - } - - mmcv.mkdir_or_exist(jsonfile_prefix) - res_path = osp.join(jsonfile_prefix, 'results_nusc.json') - print('Results writes to', res_path) - mmcv.dump(nusc_submissions, res_path) - return res_path - - def _evaluate_single(self, - result_path, - logger=None, - metric='bbox', - result_name='pts_bbox'): - """Evaluation for a single model in nuScenes protocol. - - Args: - result_path (str): Path of the result file. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - metric (str, optional): Metric name used for evaluation. - Default: 'bbox'. - result_name (str, optional): Result name in the metric prefix. - Default: 'pts_bbox'. 
- - Returns: - dict: Dictionary of evaluation details. - """ - from nuscenes import NuScenes - from nuscenes.eval.detection.evaluate import NuScenesEval - - output_dir = osp.join(*osp.split(result_path)[:-1]) - nusc = NuScenes( - version=self.version, dataroot=self.data_root, verbose=False) - eval_set_map = { - 'v1.0-mini': 'mini_val', - 'v1.0-trainval': 'val', - } - nusc_eval = NuScenesEval( - nusc, - config=self.eval_detection_configs, - result_path=result_path, - eval_set=eval_set_map[self.version], - output_dir=output_dir, - verbose=False) - nusc_eval.main(render_curves=False) - - # record metrics - metrics = mmcv.load(osp.join(output_dir, 'metrics_summary.json')) - detail = dict() - metric_prefix = f'{result_name}_NuScenes' - for name in self.CLASSES: - for k, v in metrics['label_aps'][name].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}_AP_dist_{}'.format(metric_prefix, name, k)] = val - for k, v in metrics['label_tp_errors'][name].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}_{}'.format(metric_prefix, name, k)] = val - for k, v in metrics['tp_errors'].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}'.format(metric_prefix, - self.ErrNameMapping[k])] = val - - detail['{}/NDS'.format(metric_prefix)] = metrics['nd_score'] - detail['{}/mAP'.format(metric_prefix)] = metrics['mean_ap'] - return detail - - def format_results(self, results, jsonfile_prefix=None): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[dict]): Testing results of the dataset. - jsonfile_prefix (str): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: Returns (result_files, tmp_dir), where `result_files` is a - dict containing the json filepaths, `tmp_dir` is the temporal - directory created for saving json files when - `jsonfile_prefix` is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - # currently the output prediction results could be in two formats - # 1. list of dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...) - # 2. list of dict('pts_bbox' or 'img_bbox': - # dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...)) - # this is a workaround to enable evaluation of both formats on nuScenes - # refer to https://github.com/open-mmlab/mmdetection3d/issues/449 - if not ('pts_bbox' in results[0] or 'img_bbox' in results[0]): - result_files = self._format_bbox(results, jsonfile_prefix) - else: - # should take the inner dict out of 'pts_bbox' or 'img_bbox' dict - result_files = dict() - for name in results[0]: - print(f'\nFormating bboxes of {name}') - results_ = [out[name] for out in results] - tmp_file_ = osp.join(jsonfile_prefix, name) - result_files.update( - {name: self._format_bbox(results_, tmp_file_)}) - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - result_names=['pts_bbox'], - show=False, - out_dir=None, - pipeline=None): - """Evaluation in nuScenes protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str], optional): Metrics to be evaluated. 
- Default: 'bbox'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str, optional): The prefix of json files including - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict[str, float]: Results of each evaluation metric. - """ - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - - if isinstance(result_files, dict): - results_dict = dict() - for name in result_names: - print('Evaluating bboxes of {}'.format(name)) - ret_dict = self._evaluate_single(result_files[name]) - results_dict.update(ret_dict) - elif isinstance(result_files, str): - results_dict = self._evaluate_single(result_files) - - if tmp_dir is not None: - tmp_dir.cleanup() - - if show or out_dir: - self.show(results, out_dir, show=show, pipeline=pipeline) - return results_dict - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5, - file_client_args=dict(backend='disk')), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - file_client_args=dict(backend='disk')), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=False, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Whether to visualize the results online. - Default: False. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' - pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - if 'pts_bbox' in result.keys(): - result = result['pts_bbox'] - data_info = self.data_infos[i] - pts_path = data_info['lidar_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points = self._extract_data(i, pipeline, 'points').numpy() - # for now we convert points into depth mode - points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - inds = result['scores_3d'] > 0.1 - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy() - show_gt_bboxes = Box3DMode.convert(gt_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - pred_bboxes = result['boxes_3d'][inds].tensor.numpy() - show_pred_bboxes = Box3DMode.convert(pred_bboxes, Box3DMode.LIDAR, - Box3DMode.DEPTH) - show_result(points, show_gt_bboxes, show_pred_bboxes, out_dir, - file_name, show) - - -def output_to_nusc_box(detection): - """Convert the output to the box class in the nuScenes. - - Args: - detection (dict): Detection results. - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox. - - scores_3d (torch.Tensor): Detection scores. - - labels_3d (torch.Tensor): Predicted box labels. - - Returns: - list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes. 
- """ - box3d = detection['boxes_3d'] - scores = detection['scores_3d'].numpy() - labels = detection['labels_3d'].numpy() - - box_gravity_center = box3d.gravity_center.numpy() - box_dims = box3d.dims.numpy() - box_yaw = box3d.yaw.numpy() - - # our LiDAR coordinate system -> nuScenes box coordinate system - nus_box_dims = box_dims[:, [1, 0, 2]] - - box_list = [] - for i in range(len(box3d)): - quat = pyquaternion.Quaternion(axis=[0, 0, 1], radians=box_yaw[i]) - velocity = (*box3d.tensor[i, 7:9], 0.0) - # velo_val = np.linalg.norm(box3d[i, 7:9]) - # velo_ori = box3d[i, 6] - # velocity = ( - # velo_val * np.cos(velo_ori), velo_val * np.sin(velo_ori), 0.0) - box = NuScenesBox( - box_gravity_center[i], - nus_box_dims[i], - quat, - label=labels[i], - score=scores[i], - velocity=velocity) - box_list.append(box) - return box_list - - -def lidar_nusc_box_to_global(info, - boxes, - classes, - eval_configs, - eval_version='detection_cvpr_2019'): - """Convert the box from ego to global coordinate. - - Args: - info (dict): Info for a specific sample data, including the - calibration information. - boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes. - classes (list[str]): Mapped classes in the evaluation. - eval_configs (object): Evaluation configuration object. - eval_version (str, optional): Evaluation version. - Default: 'detection_cvpr_2019' - - Returns: - list: List of standard NuScenesBoxes in the global - coordinate. - """ - box_list = [] - for box in boxes: - # Move box to ego vehicle coord system - box.rotate(pyquaternion.Quaternion(info['lidar2ego_rotation'])) - box.translate(np.array(info['lidar2ego_translation'])) - # filter det in ego. - cls_range_map = eval_configs.class_range - radius = np.linalg.norm(box.center[:2], 2) - det_range = cls_range_map[classes[box.label]] - if radius > det_range: - continue - # Move box to global coord system - box.rotate(pyquaternion.Quaternion(info['ego2global_rotation'])) - box.translate(np.array(info['ego2global_translation'])) - box_list.append(box) - return box_list diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/nuscenes_mono_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/nuscenes_mono_dataset.py deleted file mode 100644 index c3eb8f1a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/nuscenes_mono_dataset.py +++ /dev/null @@ -1,840 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import tempfile -import warnings -from os import path as osp - -import mmcv -import numpy as np -import pyquaternion -import torch -from nuscenes.utils.data_classes import Box as NuScenesBox - -from mmdet3d.core import bbox3d2result, box3d_multiclass_nms, xywhr2xyxyr -from mmdet.datasets import CocoDataset -from ..core import show_multi_modality_result -from ..core.bbox import CameraInstance3DBoxes, get_box_type -from .builder import DATASETS -from .pipelines import Compose -from .utils import extract_result_dict, get_loading_pipeline - - -@DATASETS.register_module() -class NuScenesMonoDataset(CocoDataset): - r"""Monocular 3D detection on NuScenes Dataset. - - This class serves as the API for experiments on the NuScenes Dataset. - - Please refer to `NuScenes Dataset `_ - for data downloading. - - Args: - ann_file (str): Path of annotation file. - data_root (str): Path of dataset root. - load_interval (int, optional): Interval of loading the dataset. It is - used to uniformly sample the dataset. Defaults to 1. 
- with_velocity (bool, optional): Whether include velocity prediction - into the experiments. Defaults to True. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'Camera' in this class. Available options includes. - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - eval_version (str, optional): Configuration version of evaluation. - Defaults to 'detection_cvpr_2019'. - use_valid_flag (bool, optional): Whether to use `use_valid_flag` key - in the info file as mask to filter gt_boxes and gt_names. - Defaults to False. - version (str, optional): Dataset version. Defaults to 'v1.0-trainval'. - """ - CLASSES = ('car', 'truck', 'trailer', 'bus', 'construction_vehicle', - 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', - 'barrier') - DefaultAttribute = { - 'car': 'vehicle.parked', - 'pedestrian': 'pedestrian.moving', - 'trailer': 'vehicle.parked', - 'truck': 'vehicle.parked', - 'bus': 'vehicle.moving', - 'motorcycle': 'cycle.without_rider', - 'construction_vehicle': 'vehicle.parked', - 'bicycle': 'cycle.without_rider', - 'barrier': '', - 'traffic_cone': '', - } - # https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/eval/detection/evaluate.py#L222 # noqa - ErrNameMapping = { - 'trans_err': 'mATE', - 'scale_err': 'mASE', - 'orient_err': 'mAOE', - 'vel_err': 'mAVE', - 'attr_err': 'mAAE' - } - - def __init__(self, - data_root, - ann_file, - pipeline, - load_interval=1, - with_velocity=True, - modality=None, - box_type_3d='Camera', - eval_version='detection_cvpr_2019', - use_valid_flag=False, - version='v1.0-trainval', - classes=None, - img_prefix='', - seg_prefix=None, - proposal_file=None, - test_mode=False, - filter_empty_gt=True, - file_client_args=dict(backend='disk')): - self.ann_file = ann_file - self.data_root = data_root - self.img_prefix = img_prefix - self.seg_prefix = seg_prefix - self.proposal_file = proposal_file - self.test_mode = test_mode - self.filter_empty_gt = filter_empty_gt - self.CLASSES = self.get_classes(classes) - self.file_client = mmcv.FileClient(**file_client_args) - - # load annotations (and proposals) - with self.file_client.get_local_path(self.ann_file) as local_path: - self.data_infos = self.load_annotations(local_path) - - if self.proposal_file is not None: - with self.file_client.get_local_path( - self.proposal_file) as local_path: - self.proposals = self.load_proposals(local_path) - else: - self.proposals = None - - # filter images too small and containing no annotations - if not test_mode: - valid_inds = self._filter_imgs() - self.data_infos = [self.data_infos[i] for i in valid_inds] - if self.proposals is not None: - self.proposals = [self.proposals[i] for i in valid_inds] - # set group flag for the sampler - self._set_group_flag() - - # processing pipeline - self.pipeline = Compose(pipeline) - - self.load_interval = load_interval - self.with_velocity = with_velocity - self.modality = modality - self.box_type_3d, self.box_mode_3d = get_box_type(box_type_3d) - self.eval_version = eval_version - self.use_valid_flag = use_valid_flag - self.bbox_code_size = 9 - self.version = version - if self.eval_version is not None: - from nuscenes.eval.detection.config 
import config_factory - self.eval_detection_configs = config_factory(self.eval_version) - if self.modality is None: - self.modality = dict( - use_camera=True, - use_lidar=False, - use_radar=False, - use_map=False, - use_external=False) - - def pre_pipeline(self, results): - """Initialization before data preparation. - - Args: - results (dict): Dict before data preprocessing. - - - img_fields (list): Image fields. - - bbox3d_fields (list): 3D bounding boxes fields. - - pts_mask_fields (list): Mask fields of points. - - pts_seg_fields (list): Mask fields of point segments. - - bbox_fields (list): Fields of bounding boxes. - - mask_fields (list): Fields of masks. - - seg_fields (list): Segment fields. - - box_type_3d (str): 3D box type. - - box_mode_3d (str): 3D box mode. - """ - results['img_prefix'] = self.img_prefix - results['seg_prefix'] = self.seg_prefix - results['proposal_file'] = self.proposal_file - results['img_fields'] = [] - results['bbox3d_fields'] = [] - results['pts_mask_fields'] = [] - results['pts_seg_fields'] = [] - results['bbox_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - results['box_type_3d'] = self.box_type_3d - results['box_mode_3d'] = self.box_mode_3d - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox annotation. - - Args: - img_info (list[dict]): Image info. - ann_info (list[dict]): Annotation info of an image. - - Returns: - dict: A dict containing the following keys: bboxes, labels, - gt_bboxes_3d, gt_labels_3d, attr_labels, centers2d, - depths, bboxes_ignore, masks, seg_map - """ - gt_bboxes = [] - gt_labels = [] - attr_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - gt_bboxes_cam3d = [] - centers2d = [] - depths = [] - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - inter_w = max(0, min(x1 + w, img_info['width']) - max(x1, 0)) - inter_h = max(0, min(y1 + h, img_info['height']) - max(y1, 0)) - if inter_w * inter_h == 0: - continue - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - attr_labels.append(ann['attribute_id']) - gt_masks_ann.append(ann.get('segmentation', None)) - # 3D annotations in camera coordinates - bbox_cam3d = np.array(ann['bbox_cam3d']).reshape(1, -1) - velo_cam3d = np.array(ann['velo_cam3d']).reshape(1, 2) - nan_mask = np.isnan(velo_cam3d[:, 0]) - velo_cam3d[nan_mask] = [0.0, 0.0] - bbox_cam3d = np.concatenate([bbox_cam3d, velo_cam3d], axis=-1) - gt_bboxes_cam3d.append(bbox_cam3d.squeeze()) - # 2.5D annotations in camera coordinates - center2d = ann['center2d'][:2] - depth = ann['center2d'][2] - centers2d.append(center2d) - depths.append(depth) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - attr_labels = np.array(attr_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - attr_labels = np.array([], dtype=np.int64) - - if gt_bboxes_cam3d: - gt_bboxes_cam3d = np.array(gt_bboxes_cam3d, dtype=np.float32) - centers2d = np.array(centers2d, dtype=np.float32) - depths = np.array(depths, dtype=np.float32) - else: - gt_bboxes_cam3d = np.zeros((0, self.bbox_code_size), - dtype=np.float32) - centers2d = np.zeros((0, 2), dtype=np.float32) - depths = 
np.zeros((0), dtype=np.float32) - - gt_bboxes_cam3d = CameraInstance3DBoxes( - gt_bboxes_cam3d, - box_dim=gt_bboxes_cam3d.shape[-1], - origin=(0.5, 0.5, 0.5)) - gt_labels_3d = copy.deepcopy(gt_labels) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - seg_map = img_info['filename'].replace('jpg', 'png') - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - gt_bboxes_3d=gt_bboxes_cam3d, - gt_labels_3d=gt_labels_3d, - attr_labels=attr_labels, - centers2d=centers2d, - depths=depths, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=seg_map) - - return ann - - def get_attr_name(self, attr_idx, label_name): - """Get attribute from predicted index. - - This is a workaround to predict attribute when the predicted velocity - is not reliable. We map the predicted attribute index to the one - in the attribute set. If it is consistent with the category, we will - keep it. Otherwise, we will use the default attribute. - - Args: - attr_idx (int): Attribute index. - label_name (str): Predicted category name. - - Returns: - str: Predicted attribute name. - """ - # TODO: Simplify the variable name - AttrMapping_rev2 = [ - 'cycle.with_rider', 'cycle.without_rider', 'pedestrian.moving', - 'pedestrian.standing', 'pedestrian.sitting_lying_down', - 'vehicle.moving', 'vehicle.parked', 'vehicle.stopped', 'None' - ] - if label_name == 'car' or label_name == 'bus' \ - or label_name == 'truck' or label_name == 'trailer' \ - or label_name == 'construction_vehicle': - if AttrMapping_rev2[attr_idx] == 'vehicle.moving' or \ - AttrMapping_rev2[attr_idx] == 'vehicle.parked' or \ - AttrMapping_rev2[attr_idx] == 'vehicle.stopped': - return AttrMapping_rev2[attr_idx] - else: - return NuScenesMonoDataset.DefaultAttribute[label_name] - elif label_name == 'pedestrian': - if AttrMapping_rev2[attr_idx] == 'pedestrian.moving' or \ - AttrMapping_rev2[attr_idx] == 'pedestrian.standing' or \ - AttrMapping_rev2[attr_idx] == \ - 'pedestrian.sitting_lying_down': - return AttrMapping_rev2[attr_idx] - else: - return NuScenesMonoDataset.DefaultAttribute[label_name] - elif label_name == 'bicycle' or label_name == 'motorcycle': - if AttrMapping_rev2[attr_idx] == 'cycle.with_rider' or \ - AttrMapping_rev2[attr_idx] == 'cycle.without_rider': - return AttrMapping_rev2[attr_idx] - else: - return NuScenesMonoDataset.DefaultAttribute[label_name] - else: - return NuScenesMonoDataset.DefaultAttribute[label_name] - - def _format_bbox(self, results, jsonfile_prefix=None): - """Convert the results to the standard format. - - Args: - results (list[dict]): Testing results of the dataset. - jsonfile_prefix (str): The prefix of the output jsonfile. - You can specify the output directory/filename by - modifying the jsonfile_prefix. Default: None. - - Returns: - str: Path of the output json file. 
- """ - nusc_annos = {} - mapped_class_names = self.CLASSES - - print('Start to convert detection format...') - - CAM_NUM = 6 - - for sample_id, det in enumerate(mmcv.track_iter_progress(results)): - - if sample_id % CAM_NUM == 0: - boxes_per_frame = [] - attrs_per_frame = [] - - # need to merge results from images of the same sample - annos = [] - boxes, attrs = output_to_nusc_box(det) - sample_token = self.data_infos[sample_id]['token'] - boxes, attrs = cam_nusc_box_to_global(self.data_infos[sample_id], - boxes, attrs, - mapped_class_names, - self.eval_detection_configs, - self.eval_version) - - boxes_per_frame.extend(boxes) - attrs_per_frame.extend(attrs) - # Remove redundant predictions caused by overlap of images - if (sample_id + 1) % CAM_NUM != 0: - continue - boxes = global_nusc_box_to_cam( - self.data_infos[sample_id + 1 - CAM_NUM], boxes_per_frame, - mapped_class_names, self.eval_detection_configs, - self.eval_version) - cam_boxes3d, scores, labels = nusc_box_to_cam_box3d(boxes) - # box nms 3d over 6 images in a frame - # TODO: move this global setting into config - nms_cfg = dict( - use_rotate_nms=True, - nms_across_levels=False, - nms_pre=4096, - nms_thr=0.05, - score_thr=0.01, - min_bbox_size=0, - max_per_frame=500) - from mmcv import Config - nms_cfg = Config(nms_cfg) - cam_boxes3d_for_nms = xywhr2xyxyr(cam_boxes3d.bev) - boxes3d = cam_boxes3d.tensor - # generate attr scores from attr labels - attrs = labels.new_tensor([attr for attr in attrs_per_frame]) - boxes3d, scores, labels, attrs = box3d_multiclass_nms( - boxes3d, - cam_boxes3d_for_nms, - scores, - nms_cfg.score_thr, - nms_cfg.max_per_frame, - nms_cfg, - mlvl_attr_scores=attrs) - cam_boxes3d = CameraInstance3DBoxes(boxes3d, box_dim=9) - det = bbox3d2result(cam_boxes3d, scores, labels, attrs) - boxes, attrs = output_to_nusc_box(det) - boxes, attrs = cam_nusc_box_to_global( - self.data_infos[sample_id + 1 - CAM_NUM], boxes, attrs, - mapped_class_names, self.eval_detection_configs, - self.eval_version) - - for i, box in enumerate(boxes): - name = mapped_class_names[box.label] - attr = self.get_attr_name(attrs[i], name) - nusc_anno = dict( - sample_token=sample_token, - translation=box.center.tolist(), - size=box.wlh.tolist(), - rotation=box.orientation.elements.tolist(), - velocity=box.velocity[:2].tolist(), - detection_name=name, - detection_score=box.score, - attribute_name=attr) - annos.append(nusc_anno) - # other views results of the same frame should be concatenated - if sample_token in nusc_annos: - nusc_annos[sample_token].extend(annos) - else: - nusc_annos[sample_token] = annos - - nusc_submissions = { - 'meta': self.modality, - 'results': nusc_annos, - } - - mmcv.mkdir_or_exist(jsonfile_prefix) - res_path = osp.join(jsonfile_prefix, 'results_nusc.json') - print('Results writes to', res_path) - mmcv.dump(nusc_submissions, res_path) - return res_path - - def _evaluate_single(self, - result_path, - logger=None, - metric='bbox', - result_name='img_bbox'): - """Evaluation for a single model in nuScenes protocol. - - Args: - result_path (str): Path of the result file. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - metric (str, optional): Metric name used for evaluation. - Default: 'bbox'. - result_name (str, optional): Result name in the metric prefix. - Default: 'img_bbox'. - - Returns: - dict: Dictionary of evaluation details. 
- """ - from nuscenes import NuScenes - from nuscenes.eval.detection.evaluate import NuScenesEval - - output_dir = osp.join(*osp.split(result_path)[:-1]) - nusc = NuScenes( - version=self.version, dataroot=self.data_root, verbose=False) - eval_set_map = { - 'v1.0-mini': 'mini_val', - 'v1.0-trainval': 'val', - } - nusc_eval = NuScenesEval( - nusc, - config=self.eval_detection_configs, - result_path=result_path, - eval_set=eval_set_map[self.version], - output_dir=output_dir, - verbose=False) - nusc_eval.main(render_curves=True) - - # record metrics - metrics = mmcv.load(osp.join(output_dir, 'metrics_summary.json')) - detail = dict() - metric_prefix = f'{result_name}_NuScenes' - for name in self.CLASSES: - for k, v in metrics['label_aps'][name].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}_AP_dist_{}'.format(metric_prefix, name, k)] = val - for k, v in metrics['label_tp_errors'][name].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}_{}'.format(metric_prefix, name, k)] = val - for k, v in metrics['tp_errors'].items(): - val = float('{:.4f}'.format(v)) - detail['{}/{}'.format(metric_prefix, - self.ErrNameMapping[k])] = val - - detail['{}/NDS'.format(metric_prefix)] = metrics['nd_score'] - detail['{}/mAP'.format(metric_prefix)] = metrics['mean_ap'] - return detail - - def format_results(self, results, jsonfile_prefix=None, **kwargs): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[tuple | numpy.ndarray]): Testing results of the - dataset. - jsonfile_prefix (str): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing - the json filepaths, tmp_dir is the temporal directory created - for saving json files when jsonfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - # currently the output prediction results could be in two formats - # 1. list of dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...) - # 2. list of dict('pts_bbox' or 'img_bbox': - # dict('boxes_3d': ..., 'scores_3d': ..., 'labels_3d': ...)) - # this is a workaround to enable evaluation of both formats on nuScenes - # refer to https://github.com/open-mmlab/mmdetection3d/issues/449 - if not ('pts_bbox' in results[0] or 'img_bbox' in results[0]): - result_files = self._format_bbox(results, jsonfile_prefix) - else: - # should take the inner dict out of 'pts_bbox' or 'img_bbox' dict - result_files = dict() - for name in results[0]: - # not evaluate 2D predictions on nuScenes - if '2d' in name: - continue - print(f'\nFormating bboxes of {name}') - results_ = [out[name] for out in results] - tmp_file_ = osp.join(jsonfile_prefix, name) - result_files.update( - {name: self._format_bbox(results_, tmp_file_)}) - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - result_names=['img_bbox'], - show=False, - out_dir=None, - pipeline=None): - """Evaluation in nuScenes protocol. - - Args: - results (list[dict]): Testing results of the dataset. 
- metric (str | list[str], optional): Metrics to be evaluated. - Default: 'bbox'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - result_names (list[str], optional): Result names in the - metric prefix. Default: ['img_bbox']. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict[str, float]: Results of each evaluation metric. - """ - - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - - if isinstance(result_files, dict): - results_dict = dict() - for name in result_names: - print('Evaluating bboxes of {}'.format(name)) - ret_dict = self._evaluate_single(result_files[name]) - results_dict.update(ret_dict) - elif isinstance(result_files, str): - results_dict = self._evaluate_single(result_files) - - if tmp_dir is not None: - tmp_dir.cleanup() - - if show or out_dir: - self.show(results, out_dir, pipeline=pipeline) - return results_dict - - def _extract_data(self, index, pipeline, key, load_annos=False): - """Load data using input pipeline and extract data according to key. - - Args: - index (int): Index for accessing the target data. - pipeline (:obj:`Compose`): Composed data loading pipeline. - key (str | list[str]): One single or a list of data key. - load_annos (bool): Whether to load data annotations. - If True, need to set self.test_mode as False before loading. - - Returns: - np.ndarray | torch.Tensor | list[np.ndarray | torch.Tensor]: - A single or a list of loaded data. - """ - assert pipeline is not None, 'data loading pipeline is not provided' - img_info = self.data_infos[index] - input_dict = dict(img_info=img_info) - - if load_annos: - ann_info = self.get_ann_info(index) - input_dict.update(dict(ann_info=ann_info)) - - self.pre_pipeline(input_dict) - example = pipeline(input_dict) - - # extract data items according to keys - if isinstance(key, str): - data = extract_result_dict(example, key) - else: - data = [extract_result_dict(example, k) for k in key] - - return data - - def _get_pipeline(self, pipeline): - """Get data loading pipeline in self.show/evaluate function. - - Args: - pipeline (list[dict]): Input pipeline. If None is given, - get from self.pipeline. - """ - if pipeline is None: - if not hasattr(self, 'pipeline') or self.pipeline is None: - warnings.warn( - 'Use default pipeline for data loading, this may cause ' - 'errors when data is on ceph') - return self._build_default_pipeline() - loading_pipeline = get_loading_pipeline(self.pipeline.transforms) - return Compose(loading_pipeline) - return Compose(pipeline) - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict(type='LoadImageFromFileMono3D'), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['img']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=False, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Whether to visualize the results online. 
- Default: False. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' - pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - if 'img_bbox' in result.keys(): - result = result['img_bbox'] - data_info = self.data_infos[i] - img_path = data_info['file_name'] - file_name = osp.split(img_path)[-1].split('.')[0] - img, img_metas = self._extract_data(i, pipeline, - ['img', 'img_metas']) - # need to transpose channel to first dim - img = img.numpy().transpose(1, 2, 0) - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'] - pred_bboxes = result['boxes_3d'] - show_multi_modality_result( - img, - gt_bboxes, - pred_bboxes, - img_metas['cam2img'], - out_dir, - file_name, - box_mode='camera', - show=show) - - -def output_to_nusc_box(detection): - """Convert the output to the box class in the nuScenes. - - Args: - detection (dict): Detection results. - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox. - - scores_3d (torch.Tensor): Detection scores. - - labels_3d (torch.Tensor): Predicted box labels. - - attrs_3d (torch.Tensor, optional): Predicted attributes. - - Returns: - list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes. - """ - box3d = detection['boxes_3d'] - scores = detection['scores_3d'].numpy() - labels = detection['labels_3d'].numpy() - attrs = None - if 'attrs_3d' in detection: - attrs = detection['attrs_3d'].numpy() - - box_gravity_center = box3d.gravity_center.numpy() - box_dims = box3d.dims.numpy() - box_yaw = box3d.yaw.numpy() - - # convert the dim/rot to nuscbox convention - box_dims[:, [0, 1, 2]] = box_dims[:, [2, 0, 1]] - box_yaw = -box_yaw - - box_list = [] - for i in range(len(box3d)): - q1 = pyquaternion.Quaternion(axis=[0, 0, 1], radians=box_yaw[i]) - q2 = pyquaternion.Quaternion(axis=[1, 0, 0], radians=np.pi / 2) - quat = q2 * q1 - velocity = (box3d.tensor[i, 7], 0.0, box3d.tensor[i, 8]) - box = NuScenesBox( - box_gravity_center[i], - box_dims[i], - quat, - label=labels[i], - score=scores[i], - velocity=velocity) - box_list.append(box) - return box_list, attrs - - -def cam_nusc_box_to_global(info, - boxes, - attrs, - classes, - eval_configs, - eval_version='detection_cvpr_2019'): - """Convert the box from camera to global coordinate. - - Args: - info (dict): Info for a specific sample data, including the - calibration information. - boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes. - classes (list[str]): Mapped classes in the evaluation. - eval_configs (object): Evaluation configuration object. - eval_version (str, optional): Evaluation version. - Default: 'detection_cvpr_2019' - - Returns: - list: List of standard NuScenesBoxes in the global - coordinate. - """ - box_list = [] - attr_list = [] - for (box, attr) in zip(boxes, attrs): - # Move box to ego vehicle coord system - box.rotate(pyquaternion.Quaternion(info['cam2ego_rotation'])) - box.translate(np.array(info['cam2ego_translation'])) - # filter det in ego. 
- cls_range_map = eval_configs.class_range - radius = np.linalg.norm(box.center[:2], 2) - det_range = cls_range_map[classes[box.label]] - if radius > det_range: - continue - # Move box to global coord system - box.rotate(pyquaternion.Quaternion(info['ego2global_rotation'])) - box.translate(np.array(info['ego2global_translation'])) - box_list.append(box) - attr_list.append(attr) - return box_list, attr_list - - -def global_nusc_box_to_cam(info, - boxes, - classes, - eval_configs, - eval_version='detection_cvpr_2019'): - """Convert the box from global to camera coordinate. - - Args: - info (dict): Info for a specific sample data, including the - calibration information. - boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes. - classes (list[str]): Mapped classes in the evaluation. - eval_configs (object): Evaluation configuration object. - eval_version (str, optional): Evaluation version. - Default: 'detection_cvpr_2019' - - Returns: - list: List of standard NuScenesBoxes in the global - coordinate. - """ - box_list = [] - for box in boxes: - # Move box to ego vehicle coord system - box.translate(-np.array(info['ego2global_translation'])) - box.rotate( - pyquaternion.Quaternion(info['ego2global_rotation']).inverse) - # filter det in ego. - cls_range_map = eval_configs.class_range - radius = np.linalg.norm(box.center[:2], 2) - det_range = cls_range_map[classes[box.label]] - if radius > det_range: - continue - # Move box to camera coord system - box.translate(-np.array(info['cam2ego_translation'])) - box.rotate(pyquaternion.Quaternion(info['cam2ego_rotation']).inverse) - box_list.append(box) - return box_list - - -def nusc_box_to_cam_box3d(boxes): - """Convert boxes from :obj:`NuScenesBox` to :obj:`CameraInstance3DBoxes`. - - Args: - boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes. - - Returns: - tuple (:obj:`CameraInstance3DBoxes` | torch.Tensor | torch.Tensor): - Converted 3D bounding boxes, scores and labels. - """ - locs = torch.Tensor([b.center for b in boxes]).view(-1, 3) - dims = torch.Tensor([b.wlh for b in boxes]).view(-1, 3) - rots = torch.Tensor([b.orientation.yaw_pitch_roll[0] - for b in boxes]).view(-1, 1) - velocity = torch.Tensor([b.velocity[0::2] for b in boxes]).view(-1, 2) - - # convert nusbox to cambox convention - dims[:, [0, 1, 2]] = dims[:, [1, 2, 0]] - rots = -rots - - boxes_3d = torch.cat([locs, dims, rots, velocity], dim=1).cuda() - cam_boxes3d = CameraInstance3DBoxes( - boxes_3d, box_dim=9, origin=(0.5, 0.5, 0.5)) - scores = torch.Tensor([b.score for b in boxes]).cuda() - labels = torch.LongTensor([b.label for b in boxes]).cuda() - nms_scores = scores.new_zeros(scores.shape[0], 10 + 1) - indices = labels.new_tensor(list(range(scores.shape[0]))) - nms_scores[indices, labels] = scores - return cam_boxes3d, nms_scores, labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/__init__.py deleted file mode 100644 index 7a5a71d6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-from .compose import Compose -from .dbsampler import DataBaseSampler -from .formating import Collect3D, DefaultFormatBundle, DefaultFormatBundle3D -from .loading import (LoadAnnotations3D, LoadImageFromFileMono3D, - LoadMultiViewImageFromFiles, LoadPointsFromDict, - LoadPointsFromFile, LoadPointsFromMultiSweeps, - NormalizePointsColor, PointSegClassMapping) -from .test_time_aug import MultiScaleFlipAug3D -# yapf: disable -from .transforms_3d import (AffineResize, BackgroundPointsFilter, - GlobalAlignment, GlobalRotScaleTrans, - IndoorPatchPointSample, IndoorPointSample, - MultiViewWrapper, ObjectNameFilter, ObjectNoise, - ObjectRangeFilter, ObjectSample, PointSample, - PointShuffle, PointsRangeFilter, - RandomDropPointsColor, RandomFlip3D, - RandomJitterPoints, RandomRotate, RandomShiftScale, - RangeLimitedRandomCrop, VoxelBasedPointSampler) - -__all__ = [ - 'ObjectSample', 'RandomFlip3D', 'ObjectNoise', 'GlobalRotScaleTrans', - 'PointShuffle', 'ObjectRangeFilter', 'PointsRangeFilter', 'Collect3D', - 'Compose', 'LoadMultiViewImageFromFiles', 'LoadPointsFromFile', - 'DefaultFormatBundle', 'DefaultFormatBundle3D', 'DataBaseSampler', - 'NormalizePointsColor', 'LoadAnnotations3D', 'IndoorPointSample', - 'PointSample', 'PointSegClassMapping', 'MultiScaleFlipAug3D', - 'LoadPointsFromMultiSweeps', 'BackgroundPointsFilter', - 'VoxelBasedPointSampler', 'GlobalAlignment', 'IndoorPatchPointSample', - 'LoadImageFromFileMono3D', 'ObjectNameFilter', 'RandomDropPointsColor', - 'RandomJitterPoints', 'AffineResize', 'RandomShiftScale', - 'LoadPointsFromDict', 'MultiViewWrapper', 'RandomRotate', - 'RangeLimitedRandomCrop' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/compose.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/compose.py deleted file mode 100644 index 9ab25d9e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/compose.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections - -from mmcv.utils import build_from_cfg - -from mmdet.datasets.builder import PIPELINES as MMDET_PIPELINES -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose: - """Compose multiple transforms sequentially. The pipeline registry of - mmdet3d separates with mmdet, however, sometimes we may need to use mmdet's - pipeline. So the class is rewritten to be able to use pipelines from both - mmdet3d and mmdet. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - _, key = PIPELINES.split_scope_key(transform['type']) - if key in PIPELINES._module_dict.keys(): - transform = build_from_cfg(transform, PIPELINES) - else: - transform = build_from_cfg(transform, MMDET_PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. 
- """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/data_augment_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/data_augment_utils.py deleted file mode 100644 index 21be3c06..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/data_augment_utils.py +++ /dev/null @@ -1,411 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import numba -import numpy as np -from numba.core.errors import NumbaPerformanceWarning - -from mmdet3d.core.bbox import box_np_ops - -warnings.filterwarnings('ignore', category=NumbaPerformanceWarning) - - -@numba.njit -def _rotation_box2d_jit_(corners, angle, rot_mat_T): - """Rotate 2D boxes. - - Args: - corners (np.ndarray): Corners of boxes. - angle (float): Rotation angle. - rot_mat_T (np.ndarray): Transposed rotation matrix. - """ - rot_sin = np.sin(angle) - rot_cos = np.cos(angle) - rot_mat_T[0, 0] = rot_cos - rot_mat_T[0, 1] = rot_sin - rot_mat_T[1, 0] = -rot_sin - rot_mat_T[1, 1] = rot_cos - corners[:] = corners @ rot_mat_T - - -@numba.jit(nopython=True) -def box_collision_test(boxes, qboxes, clockwise=True): - """Box collision test. - - Args: - boxes (np.ndarray): Corners of current boxes. - qboxes (np.ndarray): Boxes to be avoid colliding. - clockwise (bool, optional): Whether the corners are in - clockwise order. Default: True. - """ - N = boxes.shape[0] - K = qboxes.shape[0] - ret = np.zeros((N, K), dtype=np.bool_) - slices = np.array([1, 2, 3, 0]) - lines_boxes = np.stack((boxes, boxes[:, slices, :]), - axis=2) # [N, 4, 2(line), 2(xy)] - lines_qboxes = np.stack((qboxes, qboxes[:, slices, :]), axis=2) - # vec = np.zeros((2,), dtype=boxes.dtype) - boxes_standup = box_np_ops.corner_to_standup_nd_jit(boxes) - qboxes_standup = box_np_ops.corner_to_standup_nd_jit(qboxes) - for i in range(N): - for j in range(K): - # calculate standup first - iw = ( - min(boxes_standup[i, 2], qboxes_standup[j, 2]) - - max(boxes_standup[i, 0], qboxes_standup[j, 0])) - if iw > 0: - ih = ( - min(boxes_standup[i, 3], qboxes_standup[j, 3]) - - max(boxes_standup[i, 1], qboxes_standup[j, 1])) - if ih > 0: - for k in range(4): - for box_l in range(4): - A = lines_boxes[i, k, 0] - B = lines_boxes[i, k, 1] - C = lines_qboxes[j, box_l, 0] - D = lines_qboxes[j, box_l, 1] - acd = (D[1] - A[1]) * (C[0] - - A[0]) > (C[1] - A[1]) * ( - D[0] - A[0]) - bcd = (D[1] - B[1]) * (C[0] - - B[0]) > (C[1] - B[1]) * ( - D[0] - B[0]) - if acd != bcd: - abc = (C[1] - A[1]) * (B[0] - A[0]) > ( - B[1] - A[1]) * ( - C[0] - A[0]) - abd = (D[1] - A[1]) * (B[0] - A[0]) > ( - B[1] - A[1]) * ( - D[0] - A[0]) - if abc != abd: - ret[i, j] = True # collision. - break - if ret[i, j] is True: - break - if ret[i, j] is False: - # now check complete overlap. 
- # box overlap qbox: - box_overlap_qbox = True - for box_l in range(4): # point l in qboxes - for k in range(4): # corner k in boxes - vec = boxes[i, k] - boxes[i, (k + 1) % 4] - if clockwise: - vec = -vec - cross = vec[1] * ( - boxes[i, k, 0] - qboxes[j, box_l, 0]) - cross -= vec[0] * ( - boxes[i, k, 1] - qboxes[j, box_l, 1]) - if cross >= 0: - box_overlap_qbox = False - break - if box_overlap_qbox is False: - break - - if box_overlap_qbox is False: - qbox_overlap_box = True - for box_l in range(4): # point box_l in boxes - for k in range(4): # corner k in qboxes - vec = qboxes[j, k] - qboxes[j, (k + 1) % 4] - if clockwise: - vec = -vec - cross = vec[1] * ( - qboxes[j, k, 0] - boxes[i, box_l, 0]) - cross -= vec[0] * ( - qboxes[j, k, 1] - boxes[i, box_l, 1]) - if cross >= 0: # - qbox_overlap_box = False - break - if qbox_overlap_box is False: - break - if qbox_overlap_box: - ret[i, j] = True # collision. - else: - ret[i, j] = True # collision. - return ret - - -@numba.njit -def noise_per_box(boxes, valid_mask, loc_noises, rot_noises): - """Add noise to every box (only on the horizontal plane). - - Args: - boxes (np.ndarray): Input boxes with shape (N, 5). - valid_mask (np.ndarray): Mask to indicate which boxes are valid - with shape (N). - loc_noises (np.ndarray): Location noises with shape (N, M, 3). - rot_noises (np.ndarray): Rotation noises with shape (N, M). - - Returns: - np.ndarray: Mask to indicate whether the noise is - added successfully (pass the collision test). - """ - num_boxes = boxes.shape[0] - num_tests = loc_noises.shape[1] - box_corners = box_np_ops.box2d_to_corner_jit(boxes) - current_corners = np.zeros((4, 2), dtype=boxes.dtype) - rot_mat_T = np.zeros((2, 2), dtype=boxes.dtype) - success_mask = -np.ones((num_boxes, ), dtype=np.int64) - # print(valid_mask) - for i in range(num_boxes): - if valid_mask[i]: - for j in range(num_tests): - current_corners[:] = box_corners[i] - current_corners -= boxes[i, :2] - _rotation_box2d_jit_(current_corners, rot_noises[i, j], - rot_mat_T) - current_corners += boxes[i, :2] + loc_noises[i, j, :2] - coll_mat = box_collision_test( - current_corners.reshape(1, 4, 2), box_corners) - coll_mat[0, i] = False - # print(coll_mat) - if not coll_mat.any(): - success_mask[i] = j - box_corners[i] = current_corners - break - return success_mask - - -@numba.njit -def noise_per_box_v2_(boxes, valid_mask, loc_noises, rot_noises, - global_rot_noises): - """Add noise to every box (only on the horizontal plane). Version 2 used - when enable global rotations. - - Args: - boxes (np.ndarray): Input boxes with shape (N, 5). - valid_mask (np.ndarray): Mask to indicate which boxes are valid - with shape (N). - loc_noises (np.ndarray): Location noises with shape (N, M, 3). - rot_noises (np.ndarray): Rotation noises with shape (N, M). - - Returns: - np.ndarray: Mask to indicate whether the noise is - added successfully (pass the collision test). 
- """ - num_boxes = boxes.shape[0] - num_tests = loc_noises.shape[1] - box_corners = box_np_ops.box2d_to_corner_jit(boxes) - current_corners = np.zeros((4, 2), dtype=boxes.dtype) - current_box = np.zeros((1, 5), dtype=boxes.dtype) - rot_mat_T = np.zeros((2, 2), dtype=boxes.dtype) - dst_pos = np.zeros((2, ), dtype=boxes.dtype) - success_mask = -np.ones((num_boxes, ), dtype=np.int64) - corners_norm = np.zeros((4, 2), dtype=boxes.dtype) - corners_norm[1, 1] = 1.0 - corners_norm[2] = 1.0 - corners_norm[3, 0] = 1.0 - corners_norm -= np.array([0.5, 0.5], dtype=boxes.dtype) - corners_norm = corners_norm.reshape(4, 2) - for i in range(num_boxes): - if valid_mask[i]: - for j in range(num_tests): - current_box[0, :] = boxes[i] - current_radius = np.sqrt(boxes[i, 0]**2 + boxes[i, 1]**2) - current_grot = np.arctan2(boxes[i, 0], boxes[i, 1]) - dst_grot = current_grot + global_rot_noises[i, j] - dst_pos[0] = current_radius * np.sin(dst_grot) - dst_pos[1] = current_radius * np.cos(dst_grot) - current_box[0, :2] = dst_pos - current_box[0, -1] += (dst_grot - current_grot) - - rot_sin = np.sin(current_box[0, -1]) - rot_cos = np.cos(current_box[0, -1]) - rot_mat_T[0, 0] = rot_cos - rot_mat_T[0, 1] = rot_sin - rot_mat_T[1, 0] = -rot_sin - rot_mat_T[1, 1] = rot_cos - current_corners[:] = current_box[ - 0, 2:4] * corners_norm @ rot_mat_T + current_box[0, :2] - current_corners -= current_box[0, :2] - _rotation_box2d_jit_(current_corners, rot_noises[i, j], - rot_mat_T) - current_corners += current_box[0, :2] + loc_noises[i, j, :2] - coll_mat = box_collision_test( - current_corners.reshape(1, 4, 2), box_corners) - coll_mat[0, i] = False - if not coll_mat.any(): - success_mask[i] = j - box_corners[i] = current_corners - loc_noises[i, j, :2] += (dst_pos - boxes[i, :2]) - rot_noises[i, j] += (dst_grot - current_grot) - break - return success_mask - - -def _select_transform(transform, indices): - """Select transform. - - Args: - transform (np.ndarray): Transforms to select from. - indices (np.ndarray): Mask to indicate which transform to select. - - Returns: - np.ndarray: Selected transforms. - """ - result = np.zeros((transform.shape[0], *transform.shape[2:]), - dtype=transform.dtype) - for i in range(transform.shape[0]): - if indices[i] != -1: - result[i] = transform[i, indices[i]] - return result - - -@numba.njit -def _rotation_matrix_3d_(rot_mat_T, angle, axis): - """Get the 3D rotation matrix. - - Args: - rot_mat_T (np.ndarray): Transposed rotation matrix. - angle (float): Rotation angle. - axis (int): Rotation axis. - """ - rot_sin = np.sin(angle) - rot_cos = np.cos(angle) - rot_mat_T[:] = np.eye(3) - if axis == 1: - rot_mat_T[0, 0] = rot_cos - rot_mat_T[0, 2] = rot_sin - rot_mat_T[2, 0] = -rot_sin - rot_mat_T[2, 2] = rot_cos - elif axis == 2 or axis == -1: - rot_mat_T[0, 0] = rot_cos - rot_mat_T[0, 1] = rot_sin - rot_mat_T[1, 0] = -rot_sin - rot_mat_T[1, 1] = rot_cos - elif axis == 0: - rot_mat_T[1, 1] = rot_cos - rot_mat_T[1, 2] = rot_sin - rot_mat_T[2, 1] = -rot_sin - rot_mat_T[2, 2] = rot_cos - - -@numba.njit -def points_transform_(points, centers, point_masks, loc_transform, - rot_transform, valid_mask): - """Apply transforms to points and box centers. - - Args: - points (np.ndarray): Input points. - centers (np.ndarray): Input box centers. - point_masks (np.ndarray): Mask to indicate which points need - to be transformed. - loc_transform (np.ndarray): Location transform to be applied. - rot_transform (np.ndarray): Rotation transform to be applied. 
- valid_mask (np.ndarray): Mask to indicate which boxes are valid. - """ - num_box = centers.shape[0] - num_points = points.shape[0] - rot_mat_T = np.zeros((num_box, 3, 3), dtype=points.dtype) - for i in range(num_box): - _rotation_matrix_3d_(rot_mat_T[i], rot_transform[i], 2) - for i in range(num_points): - for j in range(num_box): - if valid_mask[j]: - if point_masks[i, j] == 1: - points[i, :3] -= centers[j, :3] - points[i:i + 1, :3] = points[i:i + 1, :3] @ rot_mat_T[j] - points[i, :3] += centers[j, :3] - points[i, :3] += loc_transform[j] - break # only apply first box's transform - - -@numba.njit -def box3d_transform_(boxes, loc_transform, rot_transform, valid_mask): - """Transform 3D boxes. - - Args: - boxes (np.ndarray): 3D boxes to be transformed. - loc_transform (np.ndarray): Location transform to be applied. - rot_transform (np.ndarray): Rotation transform to be applied. - valid_mask (np.ndarray): Mask to indicate which boxes are valid. - """ - num_box = boxes.shape[0] - for i in range(num_box): - if valid_mask[i]: - boxes[i, :3] += loc_transform[i] - boxes[i, 6] += rot_transform[i] - - -def noise_per_object_v3_(gt_boxes, - points=None, - valid_mask=None, - rotation_perturb=np.pi / 4, - center_noise_std=1.0, - global_random_rot_range=np.pi / 4, - num_try=100): - """Random rotate or remove each groundtruth independently. use kitti viewer - to test this function points_transform_ - - Args: - gt_boxes (np.ndarray): Ground truth boxes with shape (N, 7). - points (np.ndarray, optional): Input point cloud with - shape (M, 4). Default: None. - valid_mask (np.ndarray, optional): Mask to indicate which - boxes are valid. Default: None. - rotation_perturb (float, optional): Rotation perturbation. - Default: pi / 4. - center_noise_std (float, optional): Center noise standard deviation. - Default: 1.0. - global_random_rot_range (float, optional): Global random rotation - range. Default: pi/4. - num_try (int, optional): Number of try. Default: 100. - """ - num_boxes = gt_boxes.shape[0] - if not isinstance(rotation_perturb, (list, tuple, np.ndarray)): - rotation_perturb = [-rotation_perturb, rotation_perturb] - if not isinstance(global_random_rot_range, (list, tuple, np.ndarray)): - global_random_rot_range = [ - -global_random_rot_range, global_random_rot_range - ] - enable_grot = np.abs(global_random_rot_range[0] - - global_random_rot_range[1]) >= 1e-3 - - if not isinstance(center_noise_std, (list, tuple, np.ndarray)): - center_noise_std = [ - center_noise_std, center_noise_std, center_noise_std - ] - if valid_mask is None: - valid_mask = np.ones((num_boxes, ), dtype=np.bool_) - center_noise_std = np.array(center_noise_std, dtype=gt_boxes.dtype) - - loc_noises = np.random.normal( - scale=center_noise_std, size=[num_boxes, num_try, 3]) - rot_noises = np.random.uniform( - rotation_perturb[0], rotation_perturb[1], size=[num_boxes, num_try]) - gt_grots = np.arctan2(gt_boxes[:, 0], gt_boxes[:, 1]) - grot_lowers = global_random_rot_range[0] - gt_grots - grot_uppers = global_random_rot_range[1] - gt_grots - global_rot_noises = np.random.uniform( - grot_lowers[..., np.newaxis], - grot_uppers[..., np.newaxis], - size=[num_boxes, num_try]) - - origin = (0.5, 0.5, 0) - gt_box_corners = box_np_ops.center_to_corner_box3d( - gt_boxes[:, :3], - gt_boxes[:, 3:6], - gt_boxes[:, 6], - origin=origin, - axis=2) - - # TODO: rewrite this noise box function? 
- if not enable_grot: - selected_noise = noise_per_box(gt_boxes[:, [0, 1, 3, 4, 6]], - valid_mask, loc_noises, rot_noises) - else: - selected_noise = noise_per_box_v2_(gt_boxes[:, [0, 1, 3, 4, 6]], - valid_mask, loc_noises, rot_noises, - global_rot_noises) - - loc_transforms = _select_transform(loc_noises, selected_noise) - rot_transforms = _select_transform(rot_noises, selected_noise) - surfaces = box_np_ops.corner_to_surfaces_3d_jit(gt_box_corners) - if points is not None: - # TODO: replace this points_in_convex function by my tools? - point_masks = box_np_ops.points_in_convex_polygon_3d_jit( - points[:, :3], surfaces) - points_transform_(points, gt_boxes[:, :3], point_masks, loc_transforms, - rot_transforms, valid_mask) - - box3d_transform_(gt_boxes, loc_transforms, rot_transforms, valid_mask) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/dbsampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/dbsampler.py deleted file mode 100644 index ef82c88e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/dbsampler.py +++ /dev/null @@ -1,340 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import os -import warnings - -import mmcv -import numpy as np - -from mmdet3d.core.bbox import box_np_ops -from mmdet3d.datasets.pipelines import data_augment_utils -from ..builder import OBJECTSAMPLERS, PIPELINES - - -class BatchSampler: - """Class for sampling specific category of ground truths. - - Args: - sample_list (list[dict]): List of samples. - name (str, optional): The category of samples. Default: None. - epoch (int, optional): Sampling epoch. Default: None. - shuffle (bool, optional): Whether to shuffle indices. Default: False. - drop_reminder (bool, optional): Drop reminder. Default: False. - """ - - def __init__(self, - sampled_list, - name=None, - epoch=None, - shuffle=True, - drop_reminder=False): - self._sampled_list = sampled_list - self._indices = np.arange(len(sampled_list)) - if shuffle: - np.random.shuffle(self._indices) - self._idx = 0 - self._example_num = len(sampled_list) - self._name = name - self._shuffle = shuffle - self._epoch = epoch - self._epoch_counter = 0 - self._drop_reminder = drop_reminder - - def _sample(self, num): - """Sample specific number of ground truths and return indices. - - Args: - num (int): Sampled number. - - Returns: - list[int]: Indices of sampled ground truths. - """ - if self._idx + num >= self._example_num: - ret = self._indices[self._idx:].copy() - self._reset() - else: - ret = self._indices[self._idx:self._idx + num] - self._idx += num - return ret - - def _reset(self): - """Reset the index of batchsampler to zero.""" - assert self._name is not None - # print("reset", self._name) - if self._shuffle: - np.random.shuffle(self._indices) - self._idx = 0 - - def sample(self, num): - """Sample specific number of ground truths. - - Args: - num (int): Sampled number. - - Returns: - list[dict]: Sampled ground truths. - """ - indices = self._sample(num) - return [self._sampled_list[i] for i in indices] - - -@OBJECTSAMPLERS.register_module() -class DataBaseSampler(object): - """Class for sampling data from the ground truth database. - - Args: - info_path (str): Path of groundtruth database info. - data_root (str): Path of groundtruth database. - rate (float): Rate of actual sampled over maximum sampled number. - prepare (dict): Name of preparation functions and the input value. 
- sample_groups (dict): Sampled classes and numbers. - classes (list[str], optional): List of classes. Default: None. - points_loader(dict, optional): Config of points loader. Default: - dict(type='LoadPointsFromFile', load_dim=4, use_dim=[0,1,2,3]) - """ - - def __init__(self, - info_path, - data_root, - rate, - prepare, - sample_groups, - classes=None, - points_loader=dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=[0, 1, 2, 3]), - file_client_args=dict(backend='disk')): - super().__init__() - self.data_root = data_root - self.info_path = info_path - self.rate = rate - self.prepare = prepare - self.classes = classes - self.cat2label = {name: i for i, name in enumerate(classes)} - self.label2cat = {i: name for i, name in enumerate(classes)} - self.points_loader = mmcv.build_from_cfg(points_loader, PIPELINES) - self.file_client = mmcv.FileClient(**file_client_args) - - # load data base infos - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(info_path) as local_path: - # loading data from a file-like object needs file format - db_infos = mmcv.load(open(local_path, 'rb'), file_format='pkl') - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {info_path} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - db_infos = mmcv.load(info_path) - - # filter database infos - from mmdet3d.utils import get_root_logger - logger = get_root_logger() - for k, v in db_infos.items(): - logger.info(f'load {len(v)} {k} database infos') - for prep_func, val in prepare.items(): - db_infos = getattr(self, prep_func)(db_infos, val) - logger.info('After filter database:') - for k, v in db_infos.items(): - logger.info(f'load {len(v)} {k} database infos') - - self.db_infos = db_infos - - # load sample groups - # TODO: more elegant way to load sample groups - self.sample_groups = [] - for name, num in sample_groups.items(): - self.sample_groups.append({name: int(num)}) - - self.group_db_infos = self.db_infos # just use db_infos - self.sample_classes = [] - self.sample_max_nums = [] - for group_info in self.sample_groups: - self.sample_classes += list(group_info.keys()) - self.sample_max_nums += list(group_info.values()) - - self.sampler_dict = {} - for k, v in self.group_db_infos.items(): - self.sampler_dict[k] = BatchSampler(v, k, shuffle=True) - # TODO: No group_sampling currently - - @staticmethod - def filter_by_difficulty(db_infos, removed_difficulty): - """Filter ground truths by difficulties. - - Args: - db_infos (dict): Info of groundtruth database. - removed_difficulty (list): Difficulties that are not qualified. - - Returns: - dict: Info of database after filtering. - """ - new_db_infos = {} - for key, dinfos in db_infos.items(): - new_db_infos[key] = [ - info for info in dinfos - if info['difficulty'] not in removed_difficulty - ] - return new_db_infos - - @staticmethod - def filter_by_min_points(db_infos, min_gt_points_dict): - """Filter ground truths by number of points in the bbox. - - Args: - db_infos (dict): Info of groundtruth database. - min_gt_points_dict (dict): Different number of minimum points - needed for different categories of ground truths. - - Returns: - dict: Info of database after filtering. 
- """ - for name, min_num in min_gt_points_dict.items(): - min_num = int(min_num) - if min_num > 0: - filtered_infos = [] - for info in db_infos[name]: - if info['num_points_in_gt'] >= min_num: - filtered_infos.append(info) - db_infos[name] = filtered_infos - return db_infos - - def sample_all(self, gt_bboxes, gt_labels, img=None, ground_plane=None): - """Sampling all categories of bboxes. - - Args: - gt_bboxes (np.ndarray): Ground truth bounding boxes. - gt_labels (np.ndarray): Ground truth labels of boxes. - - Returns: - dict: Dict of sampled 'pseudo ground truths'. - - - gt_labels_3d (np.ndarray): ground truths labels - of sampled objects. - - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): - sampled ground truth 3D bounding boxes - - points (np.ndarray): sampled points - - group_ids (np.ndarray): ids of sampled ground truths - """ - sampled_num_dict = {} - sample_num_per_class = [] - for class_name, max_sample_num in zip(self.sample_classes, - self.sample_max_nums): - class_label = self.cat2label[class_name] - # sampled_num = int(max_sample_num - - # np.sum([n == class_name for n in gt_names])) - sampled_num = int(max_sample_num - - np.sum([n == class_label for n in gt_labels])) - sampled_num = np.round(self.rate * sampled_num).astype(np.int64) - sampled_num_dict[class_name] = sampled_num - sample_num_per_class.append(sampled_num) - - sampled = [] - sampled_gt_bboxes = [] - avoid_coll_boxes = gt_bboxes - - for class_name, sampled_num in zip(self.sample_classes, - sample_num_per_class): - if sampled_num > 0: - sampled_cls = self.sample_class_v2(class_name, sampled_num, - avoid_coll_boxes) - - sampled += sampled_cls - if len(sampled_cls) > 0: - if len(sampled_cls) == 1: - sampled_gt_box = sampled_cls[0]['box3d_lidar'][ - np.newaxis, ...] - else: - sampled_gt_box = np.stack( - [s['box3d_lidar'] for s in sampled_cls], axis=0) - - sampled_gt_bboxes += [sampled_gt_box] - avoid_coll_boxes = np.concatenate( - [avoid_coll_boxes, sampled_gt_box], axis=0) - - ret = None - if len(sampled) > 0: - sampled_gt_bboxes = np.concatenate(sampled_gt_bboxes, axis=0) - # center = sampled_gt_bboxes[:, 0:3] - - # num_sampled = len(sampled) - s_points_list = [] - count = 0 - for info in sampled: - file_path = os.path.join( - self.data_root, - info['path']) if self.data_root else info['path'] - results = dict(pts_filename=file_path) - s_points = self.points_loader(results)['points'] - s_points.translate(info['box3d_lidar'][:3]) - - count += 1 - - s_points_list.append(s_points) - - gt_labels = np.array([self.cat2label[s['name']] for s in sampled], - dtype=np.long) - - if ground_plane is not None: - xyz = sampled_gt_bboxes[:, :3] - dz = (ground_plane[:3][None, :] * - xyz).sum(-1) + ground_plane[3] - sampled_gt_bboxes[:, 2] -= dz - for i, s_points in enumerate(s_points_list): - s_points.tensor[:, 2].sub_(dz[i]) - - ret = { - 'gt_labels_3d': - gt_labels, - 'gt_bboxes_3d': - sampled_gt_bboxes, - 'points': - s_points_list[0].cat(s_points_list), - 'group_ids': - np.arange(gt_bboxes.shape[0], - gt_bboxes.shape[0] + len(sampled)) - } - - return ret - - def sample_class_v2(self, name, num, gt_bboxes): - """Sampling specific categories of bounding boxes. - - Args: - name (str): Class of objects to be sampled. - num (int): Number of sampled bboxes. - gt_bboxes (np.ndarray): Ground truth boxes. - - Returns: - list[dict]: Valid samples after collision test. 
- """ - sampled = self.sampler_dict[name].sample(num) - sampled = copy.deepcopy(sampled) - num_gt = gt_bboxes.shape[0] - num_sampled = len(sampled) - gt_bboxes_bv = box_np_ops.center_to_corner_box2d( - gt_bboxes[:, 0:2], gt_bboxes[:, 3:5], gt_bboxes[:, 6]) - - sp_boxes = np.stack([i['box3d_lidar'] for i in sampled], axis=0) - boxes = np.concatenate([gt_bboxes, sp_boxes], axis=0).copy() - - sp_boxes_new = boxes[gt_bboxes.shape[0]:] - sp_boxes_bv = box_np_ops.center_to_corner_box2d( - sp_boxes_new[:, 0:2], sp_boxes_new[:, 3:5], sp_boxes_new[:, 6]) - - total_bv = np.concatenate([gt_bboxes_bv, sp_boxes_bv], axis=0) - coll_mat = data_augment_utils.box_collision_test(total_bv, total_bv) - diag = np.arange(total_bv.shape[0]) - coll_mat[diag, diag] = False - - valid_samples = [] - for i in range(num_gt, num_gt + num_sampled): - if coll_mat[i].any(): - coll_mat[i] = False - coll_mat[:, i] = False - else: - valid_samples.append(sampled[i - num_gt]) - return valid_samples diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/formating.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/formating.py deleted file mode 100644 index 94a62e65..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/formating.py +++ /dev/null @@ -1,266 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -from mmcv.parallel import DataContainer as DC - -from mmdet3d.core.bbox import BaseInstance3DBoxes -from mmdet3d.core.points import BasePoints -from mmdet.datasets.pipelines import to_tensor -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class DefaultFormatBundle(object): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img", - "proposals", "gt_bboxes", "gt_labels", "gt_masks" and "gt_semantic_seg". - These fields are formatted as follows. - - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - proposals: (1)to tensor, (2)to DataContainer - - gt_bboxes: (1)to tensor, (2)to DataContainer - - gt_bboxes_ignore: (1)to tensor, (2)to DataContainer - - gt_labels: (1)to tensor, (2)to DataContainer - - gt_masks: (1)to tensor, (2)to DataContainer (cpu_only=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, - (3)to DataContainer (stack=True) - """ - - def __init__(self, ): - return - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with - default bundle. 
- """ - if 'img' in results: - if isinstance(results['img'], list): - # process multiple imgs in single frame - imgs = [img.transpose(2, 0, 1) for img in results['img']] - imgs = np.ascontiguousarray(np.stack(imgs, axis=0)) - results['img'] = DC(to_tensor(imgs), stack=True) - else: - img = np.ascontiguousarray(results['img'].transpose(2, 0, 1)) - results['img'] = DC(to_tensor(img), stack=True) - for key in [ - 'proposals', 'gt_bboxes', 'gt_bboxes_ignore', 'gt_labels', - 'gt_labels_3d', 'attr_labels', 'pts_instance_mask', - 'pts_semantic_mask', 'centers2d', 'depths' - ]: - if key not in results: - continue - if isinstance(results[key], list): - results[key] = DC([to_tensor(res) for res in results[key]]) - else: - results[key] = DC(to_tensor(results[key])) - if 'gt_bboxes_3d' in results: - if isinstance(results['gt_bboxes_3d'], BaseInstance3DBoxes): - results['gt_bboxes_3d'] = DC( - results['gt_bboxes_3d'], cpu_only=True) - else: - results['gt_bboxes_3d'] = DC( - to_tensor(results['gt_bboxes_3d'])) - - if 'gt_masks' in results: - results['gt_masks'] = DC(results['gt_masks'], cpu_only=True) - if 'gt_semantic_seg' in results: - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, ...]), stack=True) - - return results - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class Collect3D(object): - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "proposals", "gt_bboxes", - "gt_bboxes_ignore", "gt_labels", and/or "gt_masks". - - The "img_meta" item is always populated. The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - 'img_shape': shape of the image input to the network as a tuple - (h, w, c). Note that images may be zero padded on the - bottom/right if the batch tensor is larger than this shape. - - 'scale_factor': a float indicating the preprocessing scale - - 'flip': a boolean indicating if image flip transform was used - - 'filename': path to the image file - - 'ori_shape': original shape of the image as a tuple (h, w, c) - - 'pad_shape': image shape after padding - - 'lidar2img': transform from lidar to image - - 'depth2img': transform from depth to image - - 'cam2img': transform from camera to image - - 'pcd_horizontal_flip': a boolean indicating if point cloud is - flipped horizontally - - 'pcd_vertical_flip': a boolean indicating if point cloud is - flipped vertically - - 'box_mode_3d': 3D box mode - - 'box_type_3d': 3D box type - - 'img_norm_cfg': a dict of normalization information: - - mean: per channel mean subtraction - - std: per channel std divisor - - to_rgb: bool indicating if bgr was converted to rgb - - 'pcd_trans': point cloud transformations - - 'sample_idx': sample index - - 'pcd_scale_factor': point cloud scale factor - - 'pcd_rotation': rotation applied to point cloud - - 'pts_filename': path to point cloud file. - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. 
- Default: ('filename', 'ori_shape', 'img_shape', 'lidar2img', - 'depth2img', 'cam2img', 'pad_shape', 'scale_factor', 'flip', - 'pcd_horizontal_flip', 'pcd_vertical_flip', 'box_mode_3d', - 'box_type_3d', 'img_norm_cfg', 'pcd_trans', - 'sample_idx', 'pcd_scale_factor', 'pcd_rotation', 'pts_filename') - """ - - def __init__( - self, - keys, - meta_keys=('filename', 'ori_shape', 'img_shape', 'lidar2img', - 'depth2img', 'cam2img', 'pad_shape', 'scale_factor', 'flip', - 'pcd_horizontal_flip', 'pcd_vertical_flip', 'box_mode_3d', - 'box_type_3d', 'img_norm_cfg', 'pcd_trans', 'sample_idx', - 'pcd_scale_factor', 'pcd_rotation', 'pcd_rotation_angle', - 'pts_filename', 'transformation_3d_flow', 'trans_mat', - 'affine_aug')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to collect. - - Returns: - dict: The result dict contains the following keys - - keys in ``self.keys`` - - ``img_metas`` - """ - data = {} - img_metas = {} - for key in self.meta_keys: - if key in results: - img_metas[key] = results[key] - - data['img_metas'] = DC(img_metas, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - """str: Return a string that describes the module.""" - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' - - -@PIPELINES.register_module() -class DefaultFormatBundle3D(DefaultFormatBundle): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields for voxels, - including "proposals", "gt_bboxes", "gt_labels", "gt_masks" and - "gt_semantic_seg". - These fields are formatted as follows. - - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - proposals: (1)to tensor, (2)to DataContainer - - gt_bboxes: (1)to tensor, (2)to DataContainer - - gt_bboxes_ignore: (1)to tensor, (2)to DataContainer - - gt_labels: (1)to tensor, (2)to DataContainer - """ - - def __init__(self, class_names, with_gt=True, with_label=True): - super(DefaultFormatBundle3D, self).__init__() - self.class_names = class_names - self.with_gt = with_gt - self.with_label = with_label - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with - default bundle. 
- """ - # Format 3D data - if 'points' in results: - assert isinstance(results['points'], BasePoints) - results['points'] = DC(results['points'].tensor) - - for key in ['voxels', 'coors', 'voxel_centers', 'num_points']: - if key not in results: - continue - results[key] = DC(to_tensor(results[key]), stack=False) - - if self.with_gt: - # Clean GT bboxes in the final - if 'gt_bboxes_3d_mask' in results: - gt_bboxes_3d_mask = results['gt_bboxes_3d_mask'] - results['gt_bboxes_3d'] = results['gt_bboxes_3d'][ - gt_bboxes_3d_mask] - if 'gt_names_3d' in results: - results['gt_names_3d'] = results['gt_names_3d'][ - gt_bboxes_3d_mask] - if 'centers2d' in results: - results['centers2d'] = results['centers2d'][ - gt_bboxes_3d_mask] - if 'depths' in results: - results['depths'] = results['depths'][gt_bboxes_3d_mask] - if 'gt_bboxes_mask' in results: - gt_bboxes_mask = results['gt_bboxes_mask'] - if 'gt_bboxes' in results: - results['gt_bboxes'] = results['gt_bboxes'][gt_bboxes_mask] - results['gt_names'] = results['gt_names'][gt_bboxes_mask] - if self.with_label: - if 'gt_names' in results and len(results['gt_names']) == 0: - results['gt_labels'] = np.array([], dtype=np.int64) - results['attr_labels'] = np.array([], dtype=np.int64) - elif 'gt_names' in results and isinstance( - results['gt_names'][0], list): - # gt_labels might be a list of list in multi-view setting - results['gt_labels'] = [ - np.array([self.class_names.index(n) for n in res], - dtype=np.int64) for res in results['gt_names'] - ] - elif 'gt_names' in results: - results['gt_labels'] = np.array([ - self.class_names.index(n) for n in results['gt_names'] - ], - dtype=np.int64) - # we still assume one pipeline for one frame LiDAR - # thus, the 3D name is list[string] - if 'gt_names_3d' in results: - results['gt_labels_3d'] = np.array([ - self.class_names.index(n) - for n in results['gt_names_3d'] - ], - dtype=np.int64) - results = super(DefaultFormatBundle3D, self).__call__(results) - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(class_names={self.class_names}, ' - repr_str += f'with_gt={self.with_gt}, with_label={self.with_label})' - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/loading.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/loading.py deleted file mode 100644 index bbdcb8ed..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/loading.py +++ /dev/null @@ -1,685 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np - -from mmdet3d.core.points import BasePoints, get_points_type -from mmdet.datasets.pipelines import LoadAnnotations, LoadImageFromFile -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class LoadMultiViewImageFromFiles(object): - """Load multi channel images from a list of separate channel files. - - Expects results['img_filename'] to be a list of filenames. - - Args: - to_float32 (bool, optional): Whether to convert the img to float32. - Defaults to False. - color_type (str, optional): Color type of the file. - Defaults to 'unchanged'. - """ - - def __init__(self, to_float32=False, color_type='unchanged'): - self.to_float32 = to_float32 - self.color_type = color_type - - def __call__(self, results): - """Call function to load multi-view image from files. - - Args: - results (dict): Result dict containing multi-view image filenames. 
- - Returns: - dict: The result dict containing the multi-view image data. - Added keys and values are described below. - - - filename (str): Multi-view image filenames. - - img (np.ndarray): Multi-view image arrays. - - img_shape (tuple[int]): Shape of multi-view image arrays. - - ori_shape (tuple[int]): Shape of original image arrays. - - pad_shape (tuple[int]): Shape of padded image arrays. - - scale_factor (float): Scale factor. - - img_norm_cfg (dict): Normalization configuration of images. - """ - filename = results['img_filename'] - # img is of shape (h, w, c, num_views) - img = np.stack( - [mmcv.imread(name, self.color_type) for name in filename], axis=-1) - if self.to_float32: - img = img.astype(np.float32) - results['filename'] = filename - # unravel to list, see `DefaultFormatBundle` in formatting.py - # which will transpose each image separately and then stack into array - results['img'] = [img[..., i] for i in range(img.shape[-1])] - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(to_float32={self.to_float32}, ' - repr_str += f"color_type='{self.color_type}')" - return repr_str - - -@PIPELINES.register_module() -class LoadImageFromFileMono3D(LoadImageFromFile): - """Load an image from file in monocular 3D object detection. Compared to 2D - detection, additional camera parameters need to be loaded. - - Args: - kwargs (dict): Arguments are the same as those in - :class:`LoadImageFromFile`. - """ - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - super().__call__(results) - results['cam2img'] = results['img_info']['cam_intrinsic'] - return results - - -@PIPELINES.register_module() -class LoadPointsFromMultiSweeps(object): - """Load points from multiple sweeps. - - This is usually used for nuScenes dataset to utilize previous sweeps. - - Args: - sweeps_num (int, optional): Number of sweeps. Defaults to 10. - load_dim (int, optional): Dimension number of the loaded points. - Defaults to 5. - use_dim (list[int], optional): Which dimension to use. - Defaults to [0, 1, 2, 4]. - file_client_args (dict, optional): Config dict of file clients, - refer to - https://github.com/open-mmlab/mmcv/blob/master/mmcv/fileio/file_client.py - for more details. Defaults to dict(backend='disk'). - pad_empty_sweeps (bool, optional): Whether to repeat keyframe when - sweeps is empty. Defaults to False. - remove_close (bool, optional): Whether to remove close points. - Defaults to False. - test_mode (bool, optional): If `test_mode=True`, it will not - randomly sample sweeps but select the nearest N frames. - Defaults to False. 
- """ - - def __init__(self, - sweeps_num=10, - load_dim=5, - use_dim=[0, 1, 2, 4], - file_client_args=dict(backend='disk'), - pad_empty_sweeps=False, - remove_close=False, - test_mode=False): - self.load_dim = load_dim - self.sweeps_num = sweeps_num - self.use_dim = use_dim - self.file_client_args = file_client_args.copy() - self.file_client = None - self.pad_empty_sweeps = pad_empty_sweeps - self.remove_close = remove_close - self.test_mode = test_mode - - def _load_points(self, pts_filename): - """Private function to load point clouds data. - - Args: - pts_filename (str): Filename of point clouds data. - - Returns: - np.ndarray: An array containing point clouds data. - """ - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - try: - pts_bytes = self.file_client.get(pts_filename) - points = np.frombuffer(pts_bytes, dtype=np.float32) - except ConnectionError: - mmcv.check_file_exist(pts_filename) - if pts_filename.endswith('.npy'): - points = np.load(pts_filename) - else: - points = np.fromfile(pts_filename, dtype=np.float32) - return points - - def _remove_close(self, points, radius=1.0): - """Removes point too close within a certain radius from origin. - - Args: - points (np.ndarray | :obj:`BasePoints`): Sweep points. - radius (float, optional): Radius below which points are removed. - Defaults to 1.0. - - Returns: - np.ndarray: Points after removing. - """ - if isinstance(points, np.ndarray): - points_numpy = points - elif isinstance(points, BasePoints): - points_numpy = points.tensor.numpy() - else: - raise NotImplementedError - x_filt = np.abs(points_numpy[:, 0]) < radius - y_filt = np.abs(points_numpy[:, 1]) < radius - not_close = np.logical_not(np.logical_and(x_filt, y_filt)) - return points[not_close] - - def __call__(self, results): - """Call function to load multi-sweep point clouds from files. - - Args: - results (dict): Result dict containing multi-sweep point cloud - filenames. - - Returns: - dict: The result dict containing the multi-sweep points data. - Added key and value are described below. - - - points (np.ndarray | :obj:`BasePoints`): Multi-sweep point - cloud arrays. 
- """ - points = results['points'] - points.tensor[:, 4] = 0 - sweep_points_list = [points] - ts = results['timestamp'] - if self.pad_empty_sweeps and len(results['sweeps']) == 0: - for i in range(self.sweeps_num): - if self.remove_close: - sweep_points_list.append(self._remove_close(points)) - else: - sweep_points_list.append(points) - else: - if len(results['sweeps']) <= self.sweeps_num: - choices = np.arange(len(results['sweeps'])) - elif self.test_mode: - choices = np.arange(self.sweeps_num) - else: - choices = np.random.choice( - len(results['sweeps']), self.sweeps_num, replace=False) - for idx in choices: - sweep = results['sweeps'][idx] - points_sweep = self._load_points(sweep['data_path']) - points_sweep = np.copy(points_sweep).reshape(-1, self.load_dim) - if self.remove_close: - points_sweep = self._remove_close(points_sweep) - sweep_ts = sweep['timestamp'] / 1e6 - points_sweep[:, :3] = points_sweep[:, :3] @ sweep[ - 'sensor2lidar_rotation'].T - points_sweep[:, :3] += sweep['sensor2lidar_translation'] - points_sweep[:, 4] = ts - sweep_ts - points_sweep = points.new_point(points_sweep) - sweep_points_list.append(points_sweep) - - points = points.cat(sweep_points_list) - points = points[:, self.use_dim] - results['points'] = points - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - return f'{self.__class__.__name__}(sweeps_num={self.sweeps_num})' - - -@PIPELINES.register_module() -class PointSegClassMapping(object): - """Map original semantic class to valid category ids. - - Map valid classes as 0~len(valid_cat_ids)-1 and - others as len(valid_cat_ids). - - Args: - valid_cat_ids (tuple[int]): A tuple of valid category. - max_cat_id (int, optional): The max possible cat_id in input - segmentation mask. Defaults to 40. - """ - - def __init__(self, valid_cat_ids, max_cat_id=40): - assert max_cat_id >= np.max(valid_cat_ids), \ - 'max_cat_id should be greater than maximum id in valid_cat_ids' - - self.valid_cat_ids = valid_cat_ids - self.max_cat_id = int(max_cat_id) - - # build cat_id to class index mapping - neg_cls = len(valid_cat_ids) - self.cat_id2class = np.ones( - self.max_cat_id + 1, dtype=np.int) * neg_cls - for cls_idx, cat_id in enumerate(valid_cat_ids): - self.cat_id2class[cat_id] = cls_idx - - def __call__(self, results): - """Call function to map original semantic class to valid category ids. - - Args: - results (dict): Result dict containing point semantic masks. - - Returns: - dict: The result dict containing the mapped category ids. - Updated key and value are described below. - - - pts_semantic_mask (np.ndarray): Mapped semantic masks. - """ - assert 'pts_semantic_mask' in results - pts_semantic_mask = results['pts_semantic_mask'] - - converted_pts_sem_mask = self.cat_id2class[pts_semantic_mask] - - results['pts_semantic_mask'] = converted_pts_sem_mask - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(valid_cat_ids={self.valid_cat_ids}, ' - repr_str += f'max_cat_id={self.max_cat_id})' - return repr_str - - -@PIPELINES.register_module() -class NormalizePointsColor(object): - """Normalize color of points. - - Args: - color_mean (list[float]): Mean color of the point cloud. - """ - - def __init__(self, color_mean): - self.color_mean = color_mean - - def __call__(self, results): - """Call function to normalize color of points. - - Args: - results (dict): Result dict containing point clouds data. 
- - Returns: - dict: The result dict containing the normalized points. - Updated key and value are described below. - - - points (:obj:`BasePoints`): Points after color normalization. - """ - points = results['points'] - assert points.attribute_dims is not None and \ - 'color' in points.attribute_dims.keys(), \ - 'Expect points have color attribute' - if self.color_mean is not None: - points.color = points.color - \ - points.color.new_tensor(self.color_mean) - points.color = points.color / 255.0 - results['points'] = points - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(color_mean={self.color_mean})' - return repr_str - - -@PIPELINES.register_module() -class LoadPointsFromFile(object): - """Load Points From File. - - Load points from file. - - Args: - coord_type (str): The type of coordinates of points cloud. - Available options includes: - - 'LIDAR': Points in LiDAR coordinates. - - 'DEPTH': Points in depth coordinates, usually for indoor dataset. - - 'CAMERA': Points in camera coordinates. - load_dim (int, optional): The dimension of the loaded points. - Defaults to 6. - use_dim (list[int], optional): Which dimensions of the points to use. - Defaults to [0, 1, 2]. For KITTI dataset, set use_dim=4 - or use_dim=[0, 1, 2, 3] to use the intensity dimension. - shift_height (bool, optional): Whether to use shifted height. - Defaults to False. - use_color (bool, optional): Whether to use color features. - Defaults to False. - file_client_args (dict, optional): Config dict of file clients, - refer to - https://github.com/open-mmlab/mmcv/blob/master/mmcv/fileio/file_client.py - for more details. Defaults to dict(backend='disk'). - """ - - def __init__(self, - coord_type, - load_dim=6, - use_dim=[0, 1, 2], - shift_height=False, - use_color=False, - file_client_args=dict(backend='disk')): - self.shift_height = shift_height - self.use_color = use_color - if isinstance(use_dim, int): - use_dim = list(range(use_dim)) - assert max(use_dim) < load_dim, \ - f'Expect all used dimensions < {load_dim}, got {use_dim}' - assert coord_type in ['CAMERA', 'LIDAR', 'DEPTH'] - - self.coord_type = coord_type - self.load_dim = load_dim - self.use_dim = use_dim - self.file_client_args = file_client_args.copy() - self.file_client = None - - def _load_points(self, pts_filename): - """Private function to load point clouds data. - - Args: - pts_filename (str): Filename of point clouds data. - - Returns: - np.ndarray: An array containing point clouds data. - """ - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - try: - pts_bytes = self.file_client.get(pts_filename) - points = np.frombuffer(pts_bytes, dtype=np.float32) - except ConnectionError: - mmcv.check_file_exist(pts_filename) - if pts_filename.endswith('.npy'): - points = np.load(pts_filename) - else: - points = np.fromfile(pts_filename, dtype=np.float32) - - return points - - def __call__(self, results): - """Call function to load points data from file. - - Args: - results (dict): Result dict containing point clouds data. - - Returns: - dict: The result dict containing the point clouds data. - Added key and value are described below. - - - points (:obj:`BasePoints`): Point clouds data. 
- """ - pts_filename = results['pts_filename'] - points = self._load_points(pts_filename) - points = points.reshape(-1, self.load_dim) - points = points[:, self.use_dim] - attribute_dims = None - - if self.shift_height: - floor_height = np.percentile(points[:, 2], 0.99) - height = points[:, 2] - floor_height - points = np.concatenate( - [points[:, :3], - np.expand_dims(height, 1), points[:, 3:]], 1) - attribute_dims = dict(height=3) - - if self.use_color: - assert len(self.use_dim) >= 6 - if attribute_dims is None: - attribute_dims = dict() - attribute_dims.update( - dict(color=[ - points.shape[1] - 3, - points.shape[1] - 2, - points.shape[1] - 1, - ])) - - points_class = get_points_type(self.coord_type) - points = points_class( - points, points_dim=points.shape[-1], attribute_dims=attribute_dims) - results['points'] = points - - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ + '(' - repr_str += f'shift_height={self.shift_height}, ' - repr_str += f'use_color={self.use_color}, ' - repr_str += f'file_client_args={self.file_client_args}, ' - repr_str += f'load_dim={self.load_dim}, ' - repr_str += f'use_dim={self.use_dim})' - return repr_str - - -@PIPELINES.register_module() -class LoadPointsFromDict(LoadPointsFromFile): - """Load Points From Dict.""" - - def __call__(self, results): - assert 'points' in results - return results - - -@PIPELINES.register_module() -class LoadAnnotations3D(LoadAnnotations): - """Load Annotations3D. - - Load instance mask and semantic mask of points and - encapsulate the items into related fields. - - Args: - with_bbox_3d (bool, optional): Whether to load 3D boxes. - Defaults to True. - with_label_3d (bool, optional): Whether to load 3D labels. - Defaults to True. - with_attr_label (bool, optional): Whether to load attribute label. - Defaults to False. - with_mask_3d (bool, optional): Whether to load 3D instance masks. - for points. Defaults to False. - with_seg_3d (bool, optional): Whether to load 3D semantic masks. - for points. Defaults to False. - with_bbox (bool, optional): Whether to load 2D boxes. - Defaults to False. - with_label (bool, optional): Whether to load 2D labels. - Defaults to False. - with_mask (bool, optional): Whether to load 2D instance masks. - Defaults to False. - with_seg (bool, optional): Whether to load 2D semantic masks. - Defaults to False. - with_bbox_depth (bool, optional): Whether to load 2.5D boxes. - Defaults to False. - poly2mask (bool, optional): Whether to convert polygon annotations - to bitmasks. Defaults to True. - seg_3d_dtype (dtype, optional): Dtype of 3D semantic masks. - Defaults to int64 - file_client_args (dict): Config dict of file clients, refer to - https://github.com/open-mmlab/mmcv/blob/master/mmcv/fileio/file_client.py - for more details. 
- """ - - def __init__(self, - with_bbox_3d=True, - with_label_3d=True, - with_attr_label=False, - with_mask_3d=False, - with_seg_3d=False, - with_bbox=False, - with_label=False, - with_mask=False, - with_seg=False, - with_bbox_depth=False, - poly2mask=True, - seg_3d_dtype=np.int64, - file_client_args=dict(backend='disk')): - super().__init__( - with_bbox, - with_label, - with_mask, - with_seg, - poly2mask, - file_client_args=file_client_args) - self.with_bbox_3d = with_bbox_3d - self.with_bbox_depth = with_bbox_depth - self.with_label_3d = with_label_3d - self.with_attr_label = with_attr_label - self.with_mask_3d = with_mask_3d - self.with_seg_3d = with_seg_3d - self.seg_3d_dtype = seg_3d_dtype - - def _load_bboxes_3d(self, results): - """Private function to load 3D bounding box annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded 3D bounding box annotations. - """ - results['gt_bboxes_3d'] = results['ann_info']['gt_bboxes_3d'] - results['bbox3d_fields'].append('gt_bboxes_3d') - return results - - def _load_bboxes_depth(self, results): - """Private function to load 2.5D bounding box annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded 2.5D bounding box annotations. - """ - results['centers2d'] = results['ann_info']['centers2d'] - results['depths'] = results['ann_info']['depths'] - return results - - def _load_labels_3d(self, results): - """Private function to load label annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded label annotations. - """ - results['gt_labels_3d'] = results['ann_info']['gt_labels_3d'] - return results - - def _load_attr_labels(self, results): - """Private function to load label annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded label annotations. - """ - results['attr_labels'] = results['ann_info']['attr_labels'] - return results - - def _load_masks_3d(self, results): - """Private function to load 3D mask annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded 3D mask annotations. - """ - pts_instance_mask_path = results['ann_info']['pts_instance_mask_path'] - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - try: - mask_bytes = self.file_client.get(pts_instance_mask_path) - pts_instance_mask = np.frombuffer(mask_bytes, dtype=np.int64) - except ConnectionError: - mmcv.check_file_exist(pts_instance_mask_path) - pts_instance_mask = np.fromfile( - pts_instance_mask_path, dtype=np.int64) - - results['pts_instance_mask'] = pts_instance_mask - results['pts_mask_fields'].append('pts_instance_mask') - return results - - def _load_semantic_seg_3d(self, results): - """Private function to load 3D semantic segmentation annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing the semantic segmentation annotations. 
- """ - pts_semantic_mask_path = results['ann_info']['pts_semantic_mask_path'] - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - try: - mask_bytes = self.file_client.get(pts_semantic_mask_path) - # add .copy() to fix read-only bug - pts_semantic_mask = np.frombuffer( - mask_bytes, dtype=self.seg_3d_dtype).copy() - except ConnectionError: - mmcv.check_file_exist(pts_semantic_mask_path) - pts_semantic_mask = np.fromfile( - pts_semantic_mask_path, dtype=np.int64) - - results['pts_semantic_mask'] = pts_semantic_mask - results['pts_seg_fields'].append('pts_semantic_mask') - return results - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmdet3d.CustomDataset`. - - Returns: - dict: The dict containing loaded 3D bounding box, label, mask and - semantic segmentation annotations. - """ - results = super().__call__(results) - if self.with_bbox_3d: - results = self._load_bboxes_3d(results) - if results is None: - return None - if self.with_bbox_depth: - results = self._load_bboxes_depth(results) - if results is None: - return None - if self.with_label_3d: - results = self._load_labels_3d(results) - if self.with_attr_label: - results = self._load_attr_labels(results) - if self.with_mask_3d: - results = self._load_masks_3d(results) - if self.with_seg_3d: - results = self._load_semantic_seg_3d(results) - - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}with_bbox_3d={self.with_bbox_3d}, ' - repr_str += f'{indent_str}with_label_3d={self.with_label_3d}, ' - repr_str += f'{indent_str}with_attr_label={self.with_attr_label}, ' - repr_str += f'{indent_str}with_mask_3d={self.with_mask_3d}, ' - repr_str += f'{indent_str}with_seg_3d={self.with_seg_3d}, ' - repr_str += f'{indent_str}with_bbox={self.with_bbox}, ' - repr_str += f'{indent_str}with_label={self.with_label}, ' - repr_str += f'{indent_str}with_mask={self.with_mask}, ' - repr_str += f'{indent_str}with_seg={self.with_seg}, ' - repr_str += f'{indent_str}with_bbox_depth={self.with_bbox_depth}, ' - repr_str += f'{indent_str}poly2mask={self.poly2mask})' - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/test_time_aug.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/test_time_aug.py deleted file mode 100644 index d53f1109..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,229 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from copy import deepcopy - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug: - """Test-time augmentation with multiple scales and flipping. An example - configuration is as followed: - - .. code-block:: - img_scale=[(1333, 400), (1333, 800)], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - .. 
code-block:: - dict( - img=[...], - img_shape=[...], - scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] - flip=[False, True, False, True] - ... - ) - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple] | None): Images scales for resizing. - scale_factor (float | list[float] | None): Scale factors for resizing. - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal", "vertical" and "diagonal". If - flip_direction is a list, multiple flip augmentations will be - applied. It has no effect when flip == False. Default: - "horizontal". - """ - - def __init__(self, - transforms, - img_scale=None, - scale_factor=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - assert (img_scale is None) ^ (scale_factor is None), ( - 'Must have but only one variable can be set') - if img_scale is not None: - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.scale_key = 'scale' - assert mmcv.is_list_of(self.img_scale, tuple) - else: - self.img_scale = scale_factor if isinstance( - scale_factor, list) else [scale_factor] - self.scale_key = 'scale_factor' - - self.flip = flip - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - flip_args = [(False, None)] - if self.flip: - flip_args += [(True, direction) - for direction in self.flip_direction] - for scale in self.img_scale: - for flip, direction in flip_args: - _results = results.copy() - _results[self.scale_key] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str - - -@PIPELINES.register_module() -class MultiScaleFlipAug3D(object): - """Test-time augmentation with multiple scales and flipping. - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple]: Images scales for resizing. - pts_scale_ratio (float | list[float]): Points scale ratios for - resizing. - flip (bool, optional): Whether apply flip augmentation. - Defaults to False. - flip_direction (str | list[str], optional): Flip augmentation - directions for images, options are "horizontal" and "vertical". - If flip_direction is list, multiple flip augmentations will - be applied. It has no effect when ``flip == False``. 
- Defaults to "horizontal". - pcd_horizontal_flip (bool, optional): Whether apply horizontal - flip augmentation to point cloud. Defaults to True. - Note that it works only when 'flip' is turned on. - pcd_vertical_flip (bool, optional): Whether apply vertical flip - augmentation to point cloud. Defaults to True. - Note that it works only when 'flip' is turned on. - """ - - def __init__(self, - transforms, - img_scale, - pts_scale_ratio, - flip=False, - flip_direction='horizontal', - pcd_horizontal_flip=False, - pcd_vertical_flip=False): - self.transforms = Compose(transforms) - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.pts_scale_ratio = pts_scale_ratio \ - if isinstance(pts_scale_ratio, list) else[float(pts_scale_ratio)] - - assert mmcv.is_list_of(self.img_scale, tuple) - assert mmcv.is_list_of(self.pts_scale_ratio, float) - - self.flip = flip - self.pcd_horizontal_flip = pcd_horizontal_flip - self.pcd_vertical_flip = pcd_vertical_flip - - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip and not any([(t['type'] == 'RandomFlip3D' - or t['type'] == 'RandomFlip') - for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to augment common fields in results. - - Args: - results (dict): Result dict contains the data to augment. - - Returns: - dict: The result dict contains the data that is augmented with - different scales and flips. - """ - aug_data = [] - - # modified from `flip_aug = [False, True] if self.flip else [False]` - # to reduce unnecessary scenes when using double flip augmentation - # during test time - flip_aug = [True] if self.flip else [False] - pcd_horizontal_flip_aug = [False, True] \ - if self.flip and self.pcd_horizontal_flip else [False] - pcd_vertical_flip_aug = [False, True] \ - if self.flip and self.pcd_vertical_flip else [False] - for scale in self.img_scale: - for pts_scale_ratio in self.pts_scale_ratio: - for flip in flip_aug: - for pcd_horizontal_flip in pcd_horizontal_flip_aug: - for pcd_vertical_flip in pcd_vertical_flip_aug: - for direction in self.flip_direction: - # results.copy will cause bug - # since it is shallow copy - _results = deepcopy(results) - _results['scale'] = scale - _results['flip'] = flip - _results['pcd_scale_factor'] = \ - pts_scale_ratio - _results['flip_direction'] = direction - _results['pcd_horizontal_flip'] = \ - pcd_horizontal_flip - _results['pcd_vertical_flip'] = \ - pcd_vertical_flip - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'pts_scale_ratio={self.pts_scale_ratio}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/transforms_3d.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/transforms_3d.py deleted file mode 100644 index 46f4765c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/pipelines/transforms_3d.py +++ /dev/null @@ -1,1855 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -import random -import warnings - -import cv2 -import numpy as np -from mmcv import is_tuple_of -from mmcv.utils import build_from_cfg - -from mmdet3d.core import VoxelGenerator -from mmdet3d.core.bbox import (CameraInstance3DBoxes, DepthInstance3DBoxes, - LiDARInstance3DBoxes, box_np_ops) -from mmdet3d.datasets.pipelines.compose import Compose -from mmdet.datasets.pipelines import RandomCrop, RandomFlip, Rotate -from ..builder import OBJECTSAMPLERS, PIPELINES -from .data_augment_utils import noise_per_object_v3_ - - -@PIPELINES.register_module() -class RandomDropPointsColor(object): - r"""Randomly set the color of points to all zeros. - - Once this transform is executed, all the points' color will be dropped. - Refer to `PAConv `_ for more details. - - Args: - drop_ratio (float, optional): The probability of dropping point colors. - Defaults to 0.2. - """ - - def __init__(self, drop_ratio=0.2): - assert isinstance(drop_ratio, (int, float)) and 0 <= drop_ratio <= 1, \ - f'invalid drop_ratio value {drop_ratio}' - self.drop_ratio = drop_ratio - - def __call__(self, input_dict): - """Call function to drop point colors. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after color dropping, - 'points' key is updated in the result dict. - """ - points = input_dict['points'] - assert points.attribute_dims is not None and \ - 'color' in points.attribute_dims, \ - 'Expect points have color attribute' - - # this if-expression is a bit strange - # `RandomDropPointsColor` is used in training 3D segmentor PAConv - # we discovered in our experiments that, using - # `if np.random.rand() > 1.0 - self.drop_ratio` consistently leads to - # better results than using `if np.random.rand() < self.drop_ratio` - # so we keep this hack in our codebase - if np.random.rand() > 1.0 - self.drop_ratio: - points.color = points.color * 0.0 - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(drop_ratio={self.drop_ratio})' - return repr_str - - -@PIPELINES.register_module() -class RandomFlip3D(RandomFlip): - """Flip the points & bbox. - - If the input dict contains the key "flip", then the flag will be used, - otherwise it will be randomly decided by a ratio specified in the init - method. - - Args: - sync_2d (bool, optional): Whether to apply flip according to the 2D - images. If True, it will apply the same flip as that to 2D images. - If False, it will decide whether to flip randomly and independently - to that of 2D images. Defaults to True. - flip_ratio_bev_horizontal (float, optional): The flipping probability - in horizontal direction. Defaults to 0.0. - flip_ratio_bev_vertical (float, optional): The flipping probability - in vertical direction. Defaults to 0.0. 
- """ - - def __init__(self, - sync_2d=True, - flip_ratio_bev_horizontal=0.0, - flip_ratio_bev_vertical=0.0, - **kwargs): - super(RandomFlip3D, self).__init__( - flip_ratio=flip_ratio_bev_horizontal, **kwargs) - self.sync_2d = sync_2d - self.flip_ratio_bev_vertical = flip_ratio_bev_vertical - if flip_ratio_bev_horizontal is not None: - assert isinstance( - flip_ratio_bev_horizontal, - (int, float)) and 0 <= flip_ratio_bev_horizontal <= 1 - if flip_ratio_bev_vertical is not None: - assert isinstance( - flip_ratio_bev_vertical, - (int, float)) and 0 <= flip_ratio_bev_vertical <= 1 - - def random_flip_data_3d(self, input_dict, direction='horizontal'): - """Flip 3D data randomly. - - Args: - input_dict (dict): Result dict from loading pipeline. - direction (str, optional): Flip direction. - Default: 'horizontal'. - - Returns: - dict: Flipped results, 'points', 'bbox3d_fields' keys are - updated in the result dict. - """ - assert direction in ['horizontal', 'vertical'] - # for semantic segmentation task, only points will be flipped. - if 'bbox3d_fields' not in input_dict: - input_dict['points'].flip(direction) - return - if len(input_dict['bbox3d_fields']) == 0: # test mode - input_dict['bbox3d_fields'].append('empty_box3d') - input_dict['empty_box3d'] = input_dict['box_type_3d']( - np.array([], dtype=np.float32)) - assert len(input_dict['bbox3d_fields']) == 1 - for key in input_dict['bbox3d_fields']: - if 'points' in input_dict: - input_dict['points'] = input_dict[key].flip( - direction, points=input_dict['points']) - else: - input_dict[key].flip(direction) - if 'centers2d' in input_dict: - assert self.sync_2d is True and direction == 'horizontal', \ - 'Only support sync_2d=True and horizontal flip with images' - w = input_dict['ori_shape'][1] - input_dict['centers2d'][..., 0] = \ - w - input_dict['centers2d'][..., 0] - # need to modify the horizontal position of camera center - # along u-axis in the image (flip like centers2d) - # ['cam2img'][0][2] = c_u - # see more details and examples at - # https://github.com/open-mmlab/mmdetection3d/pull/744 - input_dict['cam2img'][0][2] = w - input_dict['cam2img'][0][2] - - def __call__(self, input_dict): - """Call function to flip points, values in the ``bbox3d_fields`` and - also flip 2D image and its annotations. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Flipped results, 'flip', 'flip_direction', - 'pcd_horizontal_flip' and 'pcd_vertical_flip' keys are added - into result dict. 
- """ - # flip 2D image and its annotations - super(RandomFlip3D, self).__call__(input_dict) - - if self.sync_2d: - input_dict['pcd_horizontal_flip'] = input_dict['flip'] - input_dict['pcd_vertical_flip'] = False - else: - if 'pcd_horizontal_flip' not in input_dict: - flip_horizontal = True if np.random.rand( - ) < self.flip_ratio else False - input_dict['pcd_horizontal_flip'] = flip_horizontal - if 'pcd_vertical_flip' not in input_dict: - flip_vertical = True if np.random.rand( - ) < self.flip_ratio_bev_vertical else False - input_dict['pcd_vertical_flip'] = flip_vertical - - if 'transformation_3d_flow' not in input_dict: - input_dict['transformation_3d_flow'] = [] - - if input_dict['pcd_horizontal_flip']: - self.random_flip_data_3d(input_dict, 'horizontal') - input_dict['transformation_3d_flow'].extend(['HF']) - if input_dict['pcd_vertical_flip']: - self.random_flip_data_3d(input_dict, 'vertical') - input_dict['transformation_3d_flow'].extend(['VF']) - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(sync_2d={self.sync_2d},' - repr_str += f' flip_ratio_bev_vertical={self.flip_ratio_bev_vertical})' - return repr_str - - -@PIPELINES.register_module() -class MultiViewWrapper(object): - """Wrap transformation from single-view into multi-view. - - The wrapper processes the images from multi-view one by one. For each - image, it constructs a pseudo dict according to the keys specified by the - 'process_fields' parameter. After the transformation is finished, desired - information can be collected by specifying the keys in the 'collected_keys' - parameter. Multi-view images share the same transformation parameters - but do not share the same magnitude when a random transformation is - conducted. - - Args: - transforms (list[dict]): A list of dict specifying the transformations - for the monocular situation. - process_fields (dict): Desired keys that the transformations should - be conducted on. Default to dict(img_fields=['img']). - collected_keys (list[str]): Collect information in transformation - like rotate angles, crop roi, and flip state. - """ - - def __init__(self, - transforms, - process_fields=dict(img_fields=['img']), - collected_keys=[]): - self.transform = Compose(transforms) - self.collected_keys = collected_keys - self.process_fields = process_fields - - def __call__(self, input_dict): - for key in self.collected_keys: - input_dict[key] = [] - for img_id in range(len(input_dict['img'])): - process_dict = self.process_fields.copy() - for field in self.process_fields: - for key in self.process_fields[field]: - process_dict[key] = input_dict[key][img_id] - process_dict = self.transform(process_dict) - for field in self.process_fields: - for key in self.process_fields[field]: - input_dict[key][img_id] = process_dict[key] - for key in self.collected_keys: - input_dict[key].append(process_dict[key]) - return input_dict - - -@PIPELINES.register_module() -class RangeLimitedRandomCrop(RandomCrop): - """Randomly crop image-view objects under a limitation of range. - - Args: - relative_x_offset_range (tuple[float]): Relative range of random crop - in x direction. (x_min, x_max) in [0, 1.0]. Default to (0.0, 1.0). - relative_y_offset_range (tuple[float]): Relative range of random crop - in y direction. (y_min, y_max) in [0, 1.0]. Default to (0.0, 1.0). 
- """ - - def __init__(self, - relative_x_offset_range=(0.0, 1.0), - relative_y_offset_range=(0.0, 1.0), - **kwargs): - super(RangeLimitedRandomCrop, self).__init__(**kwargs) - for range in [relative_x_offset_range, relative_y_offset_range]: - assert 0 <= range[0] <= range[1] <= 1 - self.relative_x_offset_range = relative_x_offset_range - self.relative_y_offset_range = relative_y_offset_range - - def _crop_data(self, results, crop_size, allow_negative_crop): - """Function to randomly crop images. - - Modified from RandomCrop in mmdet==2.25.0 - - Args: - results (dict): Result dict from loading pipeline. - crop_size (tuple): Expected absolute size after cropping, (h, w). - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - assert crop_size[0] > 0 and crop_size[1] > 0 - for key in results.get('img_fields', ['img']): - img = results[key] - margin_h = max(img.shape[0] - crop_size[0], 0) - margin_w = max(img.shape[1] - crop_size[1], 0) - offset_range_h = (margin_h * self.relative_y_offset_range[0], - margin_h * self.relative_y_offset_range[1] + 1) - offset_h = np.random.randint(*offset_range_h) - offset_range_w = (margin_w * self.relative_x_offset_range[0], - margin_w * self.relative_x_offset_range[1] + 1) - offset_w = np.random.randint(*offset_range_w) - crop_y1, crop_y2 = offset_h, offset_h + crop_size[0] - crop_x1, crop_x2 = offset_w, offset_w + crop_size[1] - - # crop the image - img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] - img_shape = img.shape - results[key] = img - results['crop'] = (crop_x1, crop_y1, crop_x2, crop_y2) - results['img_shape'] = img_shape - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - # e.g. gt_bboxes and gt_bboxes_ignore - bbox_offset = np.array([offset_w, offset_h, offset_w, offset_h], - dtype=np.float32) - bboxes = results[key] - bbox_offset - if self.bbox_clip_border: - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - valid_inds = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - # If the crop does not contain any gt-bbox area and - # allow_negative_crop is False, skip this image. - if (key == 'gt_bboxes' and not valid_inds.any() - and not allow_negative_crop): - return None - results[key] = bboxes[valid_inds, :] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - valid_inds.nonzero()[0]].crop( - np.asarray([crop_x1, crop_y1, crop_x2, crop_y2])) - if self.recompute_bbox: - results[key] = results[mask_key].get_bboxes() - - # crop semantic seg - for key in results.get('seg_fields', []): - results[key] = results[key][crop_y1:crop_y2, crop_x1:crop_x2] - - return results - - -@PIPELINES.register_module() -class RandomRotate(Rotate): - """Randomly rotate images. - - The ratation angle is selected uniformly within the interval specified by - the 'range' parameter. - - Args: - range (tuple[float]): Define the range of random rotation. - (angle_min, angle_max) in angle. 
- """ - - def __init__(self, range, **kwargs): - super(RandomRotate, self).__init__(**kwargs) - self.range = range - - def __call__(self, results): - self.angle = np.random.uniform(self.range[0], self.range[1]) - super(RandomRotate, self).__call__(results) - results['rotate'] = self.angle - return results - - -@PIPELINES.register_module() -class RandomJitterPoints(object): - """Randomly jitter point coordinates. - - Different from the global translation in ``GlobalRotScaleTrans``, here we - apply different noises to each point in a scene. - - Args: - jitter_std (list[float]): The standard deviation of jittering noise. - This applies random noise to all points in a 3D scene, which is - sampled from a gaussian distribution whose standard deviation is - set by ``jitter_std``. Defaults to [0.01, 0.01, 0.01] - clip_range (list[float]): Clip the randomly generated jitter - noise into this range. If None is given, don't perform clipping. - Defaults to [-0.05, 0.05] - - Note: - This transform should only be used in point cloud segmentation tasks - because we don't transform ground-truth bboxes accordingly. - For similar transform in detection task, please refer to `ObjectNoise`. - """ - - def __init__(self, - jitter_std=[0.01, 0.01, 0.01], - clip_range=[-0.05, 0.05]): - seq_types = (list, tuple, np.ndarray) - if not isinstance(jitter_std, seq_types): - assert isinstance(jitter_std, (int, float)), \ - f'unsupported jitter_std type {type(jitter_std)}' - jitter_std = [jitter_std, jitter_std, jitter_std] - self.jitter_std = jitter_std - - if clip_range is not None: - if not isinstance(clip_range, seq_types): - assert isinstance(clip_range, (int, float)), \ - f'unsupported clip_range type {type(clip_range)}' - clip_range = [-clip_range, clip_range] - self.clip_range = clip_range - - def __call__(self, input_dict): - """Call function to jitter all the points in the scene. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after adding noise to each point, - 'points' key is updated in the result dict. - """ - points = input_dict['points'] - jitter_std = np.array(self.jitter_std, dtype=np.float32) - jitter_noise = \ - np.random.randn(points.shape[0], 3) * jitter_std[None, :] - if self.clip_range is not None: - jitter_noise = np.clip(jitter_noise, self.clip_range[0], - self.clip_range[1]) - - points.translate(jitter_noise) - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(jitter_std={self.jitter_std},' - repr_str += f' clip_range={self.clip_range})' - return repr_str - - -@PIPELINES.register_module() -class ObjectSample(object): - """Sample GT objects to the data. - - Args: - db_sampler (dict): Config dict of the database sampler. - sample_2d (bool): Whether to also paste 2D image patch to the images - This should be true when applying multi-modality cut-and-paste. - Defaults to False. - use_ground_plane (bool): Whether to use gound plane to adjust the - 3D labels. - """ - - def __init__(self, db_sampler, sample_2d=False, use_ground_plane=False): - self.sampler_cfg = db_sampler - self.sample_2d = sample_2d - if 'type' not in db_sampler.keys(): - db_sampler['type'] = 'DataBaseSampler' - self.db_sampler = build_from_cfg(db_sampler, OBJECTSAMPLERS) - self.use_ground_plane = use_ground_plane - - @staticmethod - def remove_points_in_boxes(points, boxes): - """Remove the points in the sampled bounding boxes. 
- - Args: - points (:obj:`BasePoints`): Input point cloud array. - boxes (np.ndarray): Sampled ground truth boxes. - - Returns: - np.ndarray: Points with those in the boxes removed. - """ - masks = box_np_ops.points_in_rbbox(points.coord.numpy(), boxes) - points = points[np.logical_not(masks.any(-1))] - return points - - def __call__(self, input_dict): - """Call function to sample ground truth objects to the data. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after object sampling augmentation, - 'points', 'gt_bboxes_3d', 'gt_labels_3d' keys are updated - in the result dict. - """ - gt_bboxes_3d = input_dict['gt_bboxes_3d'] - gt_labels_3d = input_dict['gt_labels_3d'] - - if self.use_ground_plane and 'plane' in input_dict['ann_info']: - ground_plane = input_dict['ann_info']['plane'] - input_dict['plane'] = ground_plane - else: - ground_plane = None - # change to float for blending operation - points = input_dict['points'] - if self.sample_2d: - img = input_dict['img'] - gt_bboxes_2d = input_dict['gt_bboxes'] - # Assume for now 3D & 2D bboxes are the same - sampled_dict = self.db_sampler.sample_all( - gt_bboxes_3d.tensor.numpy(), - gt_labels_3d, - gt_bboxes_2d=gt_bboxes_2d, - img=img) - else: - sampled_dict = self.db_sampler.sample_all( - gt_bboxes_3d.tensor.numpy(), - gt_labels_3d, - img=None, - ground_plane=ground_plane) - - if sampled_dict is not None: - sampled_gt_bboxes_3d = sampled_dict['gt_bboxes_3d'] - sampled_points = sampled_dict['points'] - sampled_gt_labels = sampled_dict['gt_labels_3d'] - - gt_labels_3d = np.concatenate([gt_labels_3d, sampled_gt_labels], - axis=0) - gt_bboxes_3d = gt_bboxes_3d.new_box( - np.concatenate( - [gt_bboxes_3d.tensor.numpy(), sampled_gt_bboxes_3d])) - - points = self.remove_points_in_boxes(points, sampled_gt_bboxes_3d) - # check the points dimension - points = points.cat([sampled_points, points]) - - if self.sample_2d: - sampled_gt_bboxes_2d = sampled_dict['gt_bboxes_2d'] - gt_bboxes_2d = np.concatenate( - [gt_bboxes_2d, sampled_gt_bboxes_2d]).astype(np.float32) - - input_dict['gt_bboxes'] = gt_bboxes_2d - input_dict['img'] = sampled_dict['img'] - - input_dict['gt_bboxes_3d'] = gt_bboxes_3d - input_dict['gt_labels_3d'] = gt_labels_3d.astype(np.int64) - input_dict['points'] = points - - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f' sample_2d={self.sample_2d},' - repr_str += f' data_root={self.sampler_cfg.data_root},' - repr_str += f' info_path={self.sampler_cfg.info_path},' - repr_str += f' rate={self.sampler_cfg.rate},' - repr_str += f' prepare={self.sampler_cfg.prepare},' - repr_str += f' classes={self.sampler_cfg.classes},' - repr_str += f' sample_groups={self.sampler_cfg.sample_groups}' - return repr_str - - -@PIPELINES.register_module() -class ObjectNoise(object): - """Apply noise to each GT objects in the scene. - - Args: - translation_std (list[float], optional): Standard deviation of the - distribution where translation noise are sampled from. - Defaults to [0.25, 0.25, 0.25]. - global_rot_range (list[float], optional): Global rotation to the scene. - Defaults to [0.0, 0.0]. - rot_range (list[float], optional): Object rotation range. - Defaults to [-0.15707963267, 0.15707963267]. - num_try (int, optional): Number of times to try if the noise applied is - invalid. Defaults to 100. 
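`ObjectSample` above pastes database-sampled ground-truth objects into a scene. A typical pipeline entry is sketched below: the `db_sampler` keys follow the attributes echoed by the `__repr__` above (`data_root`, `info_path`, `rate`, `prepare`, `classes`, `sample_groups`), while the concrete paths, class names, numbers, and the contents of `prepare` are placeholders.

```python
# Illustrative ObjectSample entry (all paths/values are placeholders).
class_names = ['Car', 'Pedestrian', 'Cyclist']
db_sampler = dict(
    type='DataBaseSampler',
    data_root='data/kitti/',
    info_path='data/kitti/kitti_dbinfos_train.pkl',
    rate=1.0,
    prepare=dict(filter_by_min_points=dict(Car=5)),  # placeholder filtering rule
    classes=class_names,
    sample_groups=dict(Car=15))                      # paste up to 15 extra cars per scene

object_sample = dict(type='ObjectSample', db_sampler=db_sampler, sample_2d=False)
```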
- """ - - def __init__(self, - translation_std=[0.25, 0.25, 0.25], - global_rot_range=[0.0, 0.0], - rot_range=[-0.15707963267, 0.15707963267], - num_try=100): - self.translation_std = translation_std - self.global_rot_range = global_rot_range - self.rot_range = rot_range - self.num_try = num_try - - def __call__(self, input_dict): - """Call function to apply noise to each ground truth in the scene. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after adding noise to each object, - 'points', 'gt_bboxes_3d' keys are updated in the result dict. - """ - gt_bboxes_3d = input_dict['gt_bboxes_3d'] - points = input_dict['points'] - - # TODO: this is inplace operation - numpy_box = gt_bboxes_3d.tensor.numpy() - numpy_points = points.tensor.numpy() - - noise_per_object_v3_( - numpy_box, - numpy_points, - rotation_perturb=self.rot_range, - center_noise_std=self.translation_std, - global_random_rot_range=self.global_rot_range, - num_try=self.num_try) - - input_dict['gt_bboxes_3d'] = gt_bboxes_3d.new_box(numpy_box) - input_dict['points'] = points.new_point(numpy_points) - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(num_try={self.num_try},' - repr_str += f' translation_std={self.translation_std},' - repr_str += f' global_rot_range={self.global_rot_range},' - repr_str += f' rot_range={self.rot_range})' - return repr_str - - -@PIPELINES.register_module() -class GlobalAlignment(object): - """Apply global alignment to 3D scene points by rotation and translation. - - Args: - rotation_axis (int): Rotation axis for points and bboxes rotation. - - Note: - We do not record the applied rotation and translation as in - GlobalRotScaleTrans. Because usually, we do not need to reverse - the alignment step. - For example, ScanNet 3D detection task uses aligned ground-truth - bounding boxes for evaluation. - """ - - def __init__(self, rotation_axis): - self.rotation_axis = rotation_axis - - def _trans_points(self, input_dict, trans_factor): - """Private function to translate points. - - Args: - input_dict (dict): Result dict from loading pipeline. - trans_factor (np.ndarray): Translation vector to be applied. - - Returns: - dict: Results after translation, 'points' is updated in the dict. - """ - input_dict['points'].translate(trans_factor) - - def _rot_points(self, input_dict, rot_mat): - """Private function to rotate bounding boxes and points. - - Args: - input_dict (dict): Result dict from loading pipeline. - rot_mat (np.ndarray): Rotation matrix to be applied. - - Returns: - dict: Results after rotation, 'points' is updated in the dict. - """ - # input should be rot_mat_T so I transpose it here - input_dict['points'].rotate(rot_mat.T) - - def _check_rot_mat(self, rot_mat): - """Check if rotation matrix is valid for self.rotation_axis. - - Args: - rot_mat (np.ndarray): Rotation matrix to be applied. - """ - is_valid = np.allclose(np.linalg.det(rot_mat), 1.0) - valid_array = np.zeros(3) - valid_array[self.rotation_axis] = 1.0 - is_valid &= (rot_mat[self.rotation_axis, :] == valid_array).all() - is_valid &= (rot_mat[:, self.rotation_axis] == valid_array).all() - assert is_valid, f'invalid rotation matrix {rot_mat}' - - def __call__(self, input_dict): - """Call function to shuffle points. - - Args: - input_dict (dict): Result dict from loading pipeline. 
- - Returns: - dict: Results after global alignment, 'points' and keys in - input_dict['bbox3d_fields'] are updated in the result dict. - """ - assert 'axis_align_matrix' in input_dict['ann_info'].keys(), \ - 'axis_align_matrix is not provided in GlobalAlignment' - - axis_align_matrix = input_dict['ann_info']['axis_align_matrix'] - assert axis_align_matrix.shape == (4, 4), \ - f'invalid shape {axis_align_matrix.shape} for axis_align_matrix' - rot_mat = axis_align_matrix[:3, :3] - trans_vec = axis_align_matrix[:3, -1] - - self._check_rot_mat(rot_mat) - self._rot_points(input_dict, rot_mat) - self._trans_points(input_dict, trans_vec) - - return input_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(rotation_axis={self.rotation_axis})' - return repr_str - - -@PIPELINES.register_module() -class GlobalRotScaleTrans(object): - """Apply global rotation, scaling and translation to a 3D scene. - - Args: - rot_range (list[float], optional): Range of rotation angle. - Defaults to [-0.78539816, 0.78539816] (close to [-pi/4, pi/4]). - scale_ratio_range (list[float], optional): Range of scale ratio. - Defaults to [0.95, 1.05]. - translation_std (list[float], optional): The standard deviation of - translation noise applied to a scene, which - is sampled from a gaussian distribution whose standard deviation - is set by ``translation_std``. Defaults to [0, 0, 0] - shift_height (bool, optional): Whether to shift height. - (the fourth dimension of indoor points) when scaling. - Defaults to False. - """ - - def __init__(self, - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0], - shift_height=False): - seq_types = (list, tuple, np.ndarray) - if not isinstance(rot_range, seq_types): - assert isinstance(rot_range, (int, float)), \ - f'unsupported rot_range type {type(rot_range)}' - rot_range = [-rot_range, rot_range] - self.rot_range = rot_range - - assert isinstance(scale_ratio_range, seq_types), \ - f'unsupported scale_ratio_range type {type(scale_ratio_range)}' - self.scale_ratio_range = scale_ratio_range - - if not isinstance(translation_std, seq_types): - assert isinstance(translation_std, (int, float)), \ - f'unsupported translation_std type {type(translation_std)}' - translation_std = [ - translation_std, translation_std, translation_std - ] - assert all([std >= 0 for std in translation_std]), \ - 'translation_std should be positive' - self.translation_std = translation_std - self.shift_height = shift_height - - def _trans_bbox_points(self, input_dict): - """Private function to translate bounding boxes and points. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after translation, 'points', 'pcd_trans' - and keys in input_dict['bbox3d_fields'] are updated - in the result dict. - """ - translation_std = np.array(self.translation_std, dtype=np.float32) - trans_factor = np.random.normal(scale=translation_std, size=3).T - - input_dict['points'].translate(trans_factor) - input_dict['pcd_trans'] = trans_factor - for key in input_dict['bbox3d_fields']: - input_dict[key].translate(trans_factor) - - def _rot_bbox_points(self, input_dict): - """Private function to rotate bounding boxes and points. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after rotation, 'points', 'pcd_rotation' - and keys in input_dict['bbox3d_fields'] are updated - in the result dict. 
- """ - rotation = self.rot_range - noise_rotation = np.random.uniform(rotation[0], rotation[1]) - - # if no bbox in input_dict, only rotate points - if len(input_dict['bbox3d_fields']) == 0: - rot_mat_T = input_dict['points'].rotate(noise_rotation) - input_dict['pcd_rotation'] = rot_mat_T - input_dict['pcd_rotation_angle'] = noise_rotation - return - - # rotate points with bboxes - for key in input_dict['bbox3d_fields']: - if len(input_dict[key].tensor) != 0: - points, rot_mat_T = input_dict[key].rotate( - noise_rotation, input_dict['points']) - input_dict['points'] = points - input_dict['pcd_rotation'] = rot_mat_T - input_dict['pcd_rotation_angle'] = noise_rotation - - def _scale_bbox_points(self, input_dict): - """Private function to scale bounding boxes and points. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after scaling, 'points'and keys in - input_dict['bbox3d_fields'] are updated in the result dict. - """ - scale = input_dict['pcd_scale_factor'] - points = input_dict['points'] - points.scale(scale) - if self.shift_height: - assert 'height' in points.attribute_dims.keys(), \ - 'setting shift_height=True but points have no height attribute' - points.tensor[:, points.attribute_dims['height']] *= scale - input_dict['points'] = points - - for key in input_dict['bbox3d_fields']: - input_dict[key].scale(scale) - - def _random_scale(self, input_dict): - """Private function to randomly set the scale factor. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after scaling, 'pcd_scale_factor' are updated - in the result dict. - """ - scale_factor = np.random.uniform(self.scale_ratio_range[0], - self.scale_ratio_range[1]) - input_dict['pcd_scale_factor'] = scale_factor - - def __call__(self, input_dict): - """Private function to rotate, scale and translate bounding boxes and - points. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after scaling, 'points', 'pcd_rotation', - 'pcd_scale_factor', 'pcd_trans' and keys in - input_dict['bbox3d_fields'] are updated in the result dict. - """ - if 'transformation_3d_flow' not in input_dict: - input_dict['transformation_3d_flow'] = [] - - self._rot_bbox_points(input_dict) - - if 'pcd_scale_factor' not in input_dict: - self._random_scale(input_dict) - self._scale_bbox_points(input_dict) - - self._trans_bbox_points(input_dict) - - input_dict['transformation_3d_flow'].extend(['R', 'S', 'T']) - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(rot_range={self.rot_range},' - repr_str += f' scale_ratio_range={self.scale_ratio_range},' - repr_str += f' translation_std={self.translation_std},' - repr_str += f' shift_height={self.shift_height})' - return repr_str - - -@PIPELINES.register_module() -class PointShuffle(object): - """Shuffle input points.""" - - def __call__(self, input_dict): - """Call function to shuffle points. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after filtering, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. 
- """ - idx = input_dict['points'].shuffle() - idx = idx.numpy() - - pts_instance_mask = input_dict.get('pts_instance_mask', None) - pts_semantic_mask = input_dict.get('pts_semantic_mask', None) - - if pts_instance_mask is not None: - input_dict['pts_instance_mask'] = pts_instance_mask[idx] - - if pts_semantic_mask is not None: - input_dict['pts_semantic_mask'] = pts_semantic_mask[idx] - - return input_dict - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class ObjectRangeFilter(object): - """Filter objects by the range. - - Args: - point_cloud_range (list[float]): Point cloud range. - """ - - def __init__(self, point_cloud_range): - self.pcd_range = np.array(point_cloud_range, dtype=np.float32) - - def __call__(self, input_dict): - """Call function to filter objects by the range. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after filtering, 'gt_bboxes_3d', 'gt_labels_3d' - keys are updated in the result dict. - """ - # Check points instance type and initialise bev_range - if isinstance(input_dict['gt_bboxes_3d'], - (LiDARInstance3DBoxes, DepthInstance3DBoxes)): - bev_range = self.pcd_range[[0, 1, 3, 4]] - elif isinstance(input_dict['gt_bboxes_3d'], CameraInstance3DBoxes): - bev_range = self.pcd_range[[0, 2, 3, 5]] - - gt_bboxes_3d = input_dict['gt_bboxes_3d'] - gt_labels_3d = input_dict['gt_labels_3d'] - mask = gt_bboxes_3d.in_range_bev(bev_range) - gt_bboxes_3d = gt_bboxes_3d[mask] - # mask is a torch tensor but gt_labels_3d is still numpy array - # using mask to index gt_labels_3d will cause bug when - # len(gt_labels_3d) == 1, where mask=1 will be interpreted - # as gt_labels_3d[1] and cause out of index error - gt_labels_3d = gt_labels_3d[mask.numpy().astype(np.bool)] - - # limit rad to [-pi, pi] - gt_bboxes_3d.limit_yaw(offset=0.5, period=2 * np.pi) - input_dict['gt_bboxes_3d'] = gt_bboxes_3d - input_dict['gt_labels_3d'] = gt_labels_3d - - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(point_cloud_range={self.pcd_range.tolist()})' - return repr_str - - -@PIPELINES.register_module() -class PointsRangeFilter(object): - """Filter points by the range. - - Args: - point_cloud_range (list[float]): Point cloud range. - """ - - def __init__(self, point_cloud_range): - self.pcd_range = np.array(point_cloud_range, dtype=np.float32) - - def __call__(self, input_dict): - """Call function to filter points by the range. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after filtering, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. 
- """ - points = input_dict['points'] - points_mask = points.in_range_3d(self.pcd_range) - clean_points = points[points_mask] - input_dict['points'] = clean_points - points_mask = points_mask.numpy() - - pts_instance_mask = input_dict.get('pts_instance_mask', None) - pts_semantic_mask = input_dict.get('pts_semantic_mask', None) - - if pts_instance_mask is not None: - input_dict['pts_instance_mask'] = pts_instance_mask[points_mask] - - if pts_semantic_mask is not None: - input_dict['pts_semantic_mask'] = pts_semantic_mask[points_mask] - - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(point_cloud_range={self.pcd_range.tolist()})' - return repr_str - - -@PIPELINES.register_module() -class ObjectNameFilter(object): - """Filter GT objects by their names. - - Args: - classes (list[str]): List of class names to be kept for training. - """ - - def __init__(self, classes): - self.classes = classes - self.labels = list(range(len(self.classes))) - - def __call__(self, input_dict): - """Call function to filter objects by their names. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after filtering, 'gt_bboxes_3d', 'gt_labels_3d' - keys are updated in the result dict. - """ - gt_labels_3d = input_dict['gt_labels_3d'] - gt_bboxes_mask = np.array([n in self.labels for n in gt_labels_3d], - dtype=np.bool_) - input_dict['gt_bboxes_3d'] = input_dict['gt_bboxes_3d'][gt_bboxes_mask] - input_dict['gt_labels_3d'] = input_dict['gt_labels_3d'][gt_bboxes_mask] - - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(classes={self.classes})' - return repr_str - - -@PIPELINES.register_module() -class PointSample(object): - """Point sample. - - Sampling data to a certain number. - - Args: - num_points (int): Number of points to be sampled. - sample_range (float, optional): The range where to sample points. - If not None, the points with depth larger than `sample_range` are - prior to be sampled. Defaults to None. - replace (bool, optional): Whether the sampling is with or without - replacement. Defaults to False. - """ - - def __init__(self, num_points, sample_range=None, replace=False): - self.num_points = num_points - self.sample_range = sample_range - self.replace = replace - - def _points_random_sampling(self, - points, - num_samples, - sample_range=None, - replace=False, - return_choices=False): - """Points random sampling. - - Sample points to a certain number. - - Args: - points (np.ndarray | :obj:`BasePoints`): 3D Points. - num_samples (int): Number of samples to be sampled. - sample_range (float, optional): Indicating the range where the - points will be sampled. Defaults to None. - replace (bool, optional): Sampling with or without replacement. - Defaults to None. - return_choices (bool, optional): Whether return choice. - Defaults to False. - Returns: - tuple[np.ndarray] | np.ndarray: - - points (np.ndarray | :obj:`BasePoints`): 3D Points. - - choices (np.ndarray, optional): The generated random samples. 
- """ - if not replace: - replace = (points.shape[0] < num_samples) - point_range = range(len(points)) - if sample_range is not None and not replace: - # Only sampling the near points when len(points) >= num_samples - dist = np.linalg.norm(points.tensor, axis=1) - far_inds = np.where(dist >= sample_range)[0] - near_inds = np.where(dist < sample_range)[0] - # in case there are too many far points - if len(far_inds) > num_samples: - far_inds = np.random.choice( - far_inds, num_samples, replace=False) - point_range = near_inds - num_samples -= len(far_inds) - choices = np.random.choice(point_range, num_samples, replace=replace) - if sample_range is not None and not replace: - choices = np.concatenate((far_inds, choices)) - # Shuffle points after sampling - np.random.shuffle(choices) - if return_choices: - return points[choices], choices - else: - return points[choices] - - def __call__(self, results): - """Call function to sample points to in indoor scenes. - - Args: - input_dict (dict): Result dict from loading pipeline. - Returns: - dict: Results after sampling, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. - """ - points = results['points'] - points, choices = self._points_random_sampling( - points, - self.num_points, - self.sample_range, - self.replace, - return_choices=True) - results['points'] = points - - pts_instance_mask = results.get('pts_instance_mask', None) - pts_semantic_mask = results.get('pts_semantic_mask', None) - - if pts_instance_mask is not None: - pts_instance_mask = pts_instance_mask[choices] - results['pts_instance_mask'] = pts_instance_mask - - if pts_semantic_mask is not None: - pts_semantic_mask = pts_semantic_mask[choices] - results['pts_semantic_mask'] = pts_semantic_mask - - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(num_points={self.num_points},' - repr_str += f' sample_range={self.sample_range},' - repr_str += f' replace={self.replace})' - - return repr_str - - -@PIPELINES.register_module() -class IndoorPointSample(PointSample): - """Indoor point sample. - - Sampling data to a certain number. - NOTE: IndoorPointSample is deprecated in favor of PointSample - - Args: - num_points (int): Number of points to be sampled. - """ - - def __init__(self, *args, **kwargs): - warnings.warn( - 'IndoorPointSample is deprecated in favor of PointSample') - super(IndoorPointSample, self).__init__(*args, **kwargs) - - -@PIPELINES.register_module() -class IndoorPatchPointSample(object): - r"""Indoor point sample within a patch. Modified from `PointNet++ `_. - - Sampling data to a certain number for semantic segmentation. - - Args: - num_points (int): Number of points to be sampled. - block_size (float, optional): Size of a block to sample points from. - Defaults to 1.5. - sample_rate (float, optional): Stride used in sliding patch generation. - This parameter is unused in `IndoorPatchPointSample` and thus has - been deprecated. We plan to remove it in the future. - Defaults to None. - ignore_index (int, optional): Label index that won't be used for the - segmentation task. This is set in PointSegClassMapping as neg_cls. - If not None, will be used as a patch selection criterion. - Defaults to None. - use_normalized_coord (bool, optional): Whether to use normalized xyz as - additional features. Defaults to False. - num_try (int, optional): Number of times to try if the patch selected - is invalid. Defaults to 10. 
- enlarge_size (float, optional): Enlarge the sampled patch to - [-block_size / 2 - enlarge_size, block_size / 2 + enlarge_size] as - an augmentation. If None, set it as 0. Defaults to 0.2. - min_unique_num (int, optional): Minimum number of unique points - the sampled patch should contain. If None, use PointNet++'s method - to judge uniqueness. Defaults to None. - eps (float, optional): A value added to patch boundary to guarantee - points coverage. Defaults to 1e-2. - - Note: - This transform should only be used in the training process of point - cloud segmentation tasks. For the sliding patch generation and - inference process in testing, please refer to the `slide_inference` - function of `EncoderDecoder3D` class. - """ - - def __init__(self, - num_points, - block_size=1.5, - sample_rate=None, - ignore_index=None, - use_normalized_coord=False, - num_try=10, - enlarge_size=0.2, - min_unique_num=None, - eps=1e-2): - self.num_points = num_points - self.block_size = block_size - self.ignore_index = ignore_index - self.use_normalized_coord = use_normalized_coord - self.num_try = num_try - self.enlarge_size = enlarge_size if enlarge_size is not None else 0.0 - self.min_unique_num = min_unique_num - self.eps = eps - - if sample_rate is not None: - warnings.warn( - "'sample_rate' has been deprecated and will be removed in " - 'the future. Please remove them from your code.') - - def _input_generation(self, coords, patch_center, coord_max, attributes, - attribute_dims, point_type): - """Generating model input. - - Generate input by subtracting patch center and adding additional - features. Currently support colors and normalized xyz as features. - - Args: - coords (np.ndarray): Sampled 3D Points. - patch_center (np.ndarray): Center coordinate of the selected patch. - coord_max (np.ndarray): Max coordinate of all 3D Points. - attributes (np.ndarray): features of input points. - attribute_dims (dict): Dictionary to indicate the meaning of extra - dimension. - point_type (type): class of input points inherited from BasePoints. - - Returns: - :obj:`BasePoints`: The generated input data. - """ - # subtract patch center, the z dimension is not centered - centered_coords = coords.copy() - centered_coords[:, 0] -= patch_center[0] - centered_coords[:, 1] -= patch_center[1] - - if self.use_normalized_coord: - normalized_coord = coords / coord_max - attributes = np.concatenate([attributes, normalized_coord], axis=1) - if attribute_dims is None: - attribute_dims = dict() - attribute_dims.update( - dict(normalized_coord=[ - attributes.shape[1], attributes.shape[1] + - 1, attributes.shape[1] + 2 - ])) - - points = np.concatenate([centered_coords, attributes], axis=1) - points = point_type( - points, points_dim=points.shape[1], attribute_dims=attribute_dims) - - return points - - def _patch_points_sampling(self, points, sem_mask): - """Patch points sampling. - - First sample a valid patch. - Then sample points within that patch to a certain number. - - Args: - points (:obj:`BasePoints`): 3D Points. - sem_mask (np.ndarray): semantic segmentation mask for input points. - - Returns: - tuple[:obj:`BasePoints`, np.ndarray] | :obj:`BasePoints`: - - - points (:obj:`BasePoints`): 3D Points. - - choices (np.ndarray): The generated random samples. 
- """ - coords = points.coord.numpy() - attributes = points.tensor[:, 3:].numpy() - attribute_dims = points.attribute_dims - point_type = type(points) - - coord_max = np.amax(coords, axis=0) - coord_min = np.amin(coords, axis=0) - - for _ in range(self.num_try): - # random sample a point as patch center - cur_center = coords[np.random.choice(coords.shape[0])] - - # boundary of a patch, which would be enlarged by - # `self.enlarge_size` as an augmentation - cur_max = cur_center + np.array( - [self.block_size / 2.0, self.block_size / 2.0, 0.0]) - cur_min = cur_center - np.array( - [self.block_size / 2.0, self.block_size / 2.0, 0.0]) - cur_max[2] = coord_max[2] - cur_min[2] = coord_min[2] - cur_choice = np.sum( - (coords >= (cur_min - self.enlarge_size)) * - (coords <= (cur_max + self.enlarge_size)), - axis=1) == 3 - - if not cur_choice.any(): # no points in this patch - continue - - cur_coords = coords[cur_choice, :] - cur_sem_mask = sem_mask[cur_choice] - point_idxs = np.where(cur_choice)[0] - mask = np.sum( - (cur_coords >= (cur_min - self.eps)) * (cur_coords <= - (cur_max + self.eps)), - axis=1) == 3 - - # two criteria for patch sampling, adopted from PointNet++ - # 1. selected patch should contain enough unique points - if self.min_unique_num is None: - # use PointNet++'s method as default - # [31, 31, 62] are just some big values used to transform - # coords from 3d array to 1d and then check their uniqueness - # this is used in all the ScanNet code following PointNet++ - vidx = np.ceil( - (cur_coords[mask, :] - cur_min) / (cur_max - cur_min) * - np.array([31.0, 31.0, 62.0])) - vidx = np.unique(vidx[:, 0] * 31.0 * 62.0 + vidx[:, 1] * 62.0 + - vidx[:, 2]) - flag1 = len(vidx) / 31.0 / 31.0 / 62.0 >= 0.02 - else: - # if `min_unique_num` is provided, directly compare with it - flag1 = mask.sum() >= self.min_unique_num - - # 2. selected patch should contain enough annotated points - if self.ignore_index is None: - flag2 = True - else: - flag2 = np.sum(cur_sem_mask != self.ignore_index) / \ - len(cur_sem_mask) >= 0.7 - - if flag1 and flag2: - break - - # sample idx to `self.num_points` - if point_idxs.size >= self.num_points: - # no duplicate in sub-sampling - choices = np.random.choice( - point_idxs, self.num_points, replace=False) - else: - # do not use random choice here to avoid some points not counted - dup = np.random.choice(point_idxs.size, - self.num_points - point_idxs.size) - idx_dup = np.concatenate( - [np.arange(point_idxs.size), - np.array(dup)], 0) - choices = point_idxs[idx_dup] - - # construct model input - points = self._input_generation(coords[choices], cur_center, coord_max, - attributes[choices], attribute_dims, - point_type) - - return points, choices - - def __call__(self, results): - """Call function to sample points to in indoor scenes. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after sampling, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. 
- """ - points = results['points'] - - assert 'pts_semantic_mask' in results.keys(), \ - 'semantic mask should be provided in training and evaluation' - pts_semantic_mask = results['pts_semantic_mask'] - - points, choices = self._patch_points_sampling(points, - pts_semantic_mask) - - results['points'] = points - results['pts_semantic_mask'] = pts_semantic_mask[choices] - pts_instance_mask = results.get('pts_instance_mask', None) - if pts_instance_mask is not None: - results['pts_instance_mask'] = pts_instance_mask[choices] - - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(num_points={self.num_points},' - repr_str += f' block_size={self.block_size},' - repr_str += f' ignore_index={self.ignore_index},' - repr_str += f' use_normalized_coord={self.use_normalized_coord},' - repr_str += f' num_try={self.num_try},' - repr_str += f' enlarge_size={self.enlarge_size},' - repr_str += f' min_unique_num={self.min_unique_num},' - repr_str += f' eps={self.eps})' - return repr_str - - -@PIPELINES.register_module() -class BackgroundPointsFilter(object): - """Filter background points near the bounding box. - - Args: - bbox_enlarge_range (tuple[float], float): Bbox enlarge range. - """ - - def __init__(self, bbox_enlarge_range): - assert (is_tuple_of(bbox_enlarge_range, float) - and len(bbox_enlarge_range) == 3) \ - or isinstance(bbox_enlarge_range, float), \ - f'Invalid arguments bbox_enlarge_range {bbox_enlarge_range}' - - if isinstance(bbox_enlarge_range, float): - bbox_enlarge_range = [bbox_enlarge_range] * 3 - self.bbox_enlarge_range = np.array( - bbox_enlarge_range, dtype=np.float32)[np.newaxis, :] - - def __call__(self, input_dict): - """Call function to filter points by the range. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after filtering, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. - """ - points = input_dict['points'] - gt_bboxes_3d = input_dict['gt_bboxes_3d'] - - # avoid groundtruth being modified - gt_bboxes_3d_np = gt_bboxes_3d.tensor.clone().numpy() - gt_bboxes_3d_np[:, :3] = gt_bboxes_3d.gravity_center.clone().numpy() - - enlarged_gt_bboxes_3d = gt_bboxes_3d_np.copy() - enlarged_gt_bboxes_3d[:, 3:6] += self.bbox_enlarge_range - points_numpy = points.tensor.clone().numpy() - foreground_masks = box_np_ops.points_in_rbbox( - points_numpy, gt_bboxes_3d_np, origin=(0.5, 0.5, 0.5)) - enlarge_foreground_masks = box_np_ops.points_in_rbbox( - points_numpy, enlarged_gt_bboxes_3d, origin=(0.5, 0.5, 0.5)) - foreground_masks = foreground_masks.max(1) - enlarge_foreground_masks = enlarge_foreground_masks.max(1) - valid_masks = ~np.logical_and(~foreground_masks, - enlarge_foreground_masks) - - input_dict['points'] = points[valid_masks] - pts_instance_mask = input_dict.get('pts_instance_mask', None) - if pts_instance_mask is not None: - input_dict['pts_instance_mask'] = pts_instance_mask[valid_masks] - - pts_semantic_mask = input_dict.get('pts_semantic_mask', None) - if pts_semantic_mask is not None: - input_dict['pts_semantic_mask'] = pts_semantic_mask[valid_masks] - return input_dict - - def __repr__(self): - """str: Return a string that describes the module.""" - repr_str = self.__class__.__name__ - repr_str += f'(bbox_enlarge_range={self.bbox_enlarge_range.tolist()})' - return repr_str - - -@PIPELINES.register_module() -class VoxelBasedPointSampler(object): - """Voxel based point sampler. 
- - Apply voxel sampling to multiple sweep points. - - Args: - cur_sweep_cfg (dict): Config for sampling current points. - prev_sweep_cfg (dict): Config for sampling previous points. - time_dim (int): Index that indicate the time dimension - for input points. - """ - - def __init__(self, cur_sweep_cfg, prev_sweep_cfg=None, time_dim=3): - self.cur_voxel_generator = VoxelGenerator(**cur_sweep_cfg) - self.cur_voxel_num = self.cur_voxel_generator._max_voxels - self.time_dim = time_dim - if prev_sweep_cfg is not None: - assert prev_sweep_cfg['max_num_points'] == \ - cur_sweep_cfg['max_num_points'] - self.prev_voxel_generator = VoxelGenerator(**prev_sweep_cfg) - self.prev_voxel_num = self.prev_voxel_generator._max_voxels - else: - self.prev_voxel_generator = None - self.prev_voxel_num = 0 - - def _sample_points(self, points, sampler, point_dim): - """Sample points for each points subset. - - Args: - points (np.ndarray): Points subset to be sampled. - sampler (VoxelGenerator): Voxel based sampler for - each points subset. - point_dim (int): The dimension of each points - - Returns: - np.ndarray: Sampled points. - """ - voxels, coors, num_points_per_voxel = sampler.generate(points) - if voxels.shape[0] < sampler._max_voxels: - padding_points = np.zeros([ - sampler._max_voxels - voxels.shape[0], sampler._max_num_points, - point_dim - ], - dtype=points.dtype) - padding_points[:] = voxels[0] - sample_points = np.concatenate([voxels, padding_points], axis=0) - else: - sample_points = voxels - - return sample_points - - def __call__(self, results): - """Call function to sample points from multiple sweeps. - - Args: - input_dict (dict): Result dict from loading pipeline. - - Returns: - dict: Results after sampling, 'points', 'pts_instance_mask' - and 'pts_semantic_mask' keys are updated in the result dict. - """ - points = results['points'] - original_dim = points.shape[1] - - # TODO: process instance and semantic mask while _max_num_points - # is larger than 1 - # Extend points with seg and mask fields - map_fields2dim = [] - start_dim = original_dim - points_numpy = points.tensor.numpy() - extra_channel = [points_numpy] - for idx, key in enumerate(results['pts_mask_fields']): - map_fields2dim.append((key, idx + start_dim)) - extra_channel.append(results[key][..., None]) - - start_dim += len(results['pts_mask_fields']) - for idx, key in enumerate(results['pts_seg_fields']): - map_fields2dim.append((key, idx + start_dim)) - extra_channel.append(results[key][..., None]) - - points_numpy = np.concatenate(extra_channel, axis=-1) - - # Split points into two part, current sweep points and - # previous sweeps points. - # TODO: support different sampling methods for next sweeps points - # and previous sweeps points. 
- cur_points_flag = (points_numpy[:, self.time_dim] == 0) - cur_sweep_points = points_numpy[cur_points_flag] - prev_sweeps_points = points_numpy[~cur_points_flag] - if prev_sweeps_points.shape[0] == 0: - prev_sweeps_points = cur_sweep_points - - # Shuffle points before sampling - np.random.shuffle(cur_sweep_points) - np.random.shuffle(prev_sweeps_points) - - cur_sweep_points = self._sample_points(cur_sweep_points, - self.cur_voxel_generator, - points_numpy.shape[1]) - if self.prev_voxel_generator is not None: - prev_sweeps_points = self._sample_points(prev_sweeps_points, - self.prev_voxel_generator, - points_numpy.shape[1]) - - points_numpy = np.concatenate( - [cur_sweep_points, prev_sweeps_points], 0) - else: - points_numpy = cur_sweep_points - - if self.cur_voxel_generator._max_num_points == 1: - points_numpy = points_numpy.squeeze(1) - results['points'] = points.new_point(points_numpy[..., :original_dim]) - - # Restore the corresponding seg and mask fields - for key, dim_index in map_fields2dim: - results[key] = points_numpy[..., dim_index] - - return results - - def __repr__(self): - """str: Return a string that describes the module.""" - - def _auto_indent(repr_str, indent): - repr_str = repr_str.split('\n') - repr_str = [' ' * indent + t + '\n' for t in repr_str] - repr_str = ''.join(repr_str)[:-1] - return repr_str - - repr_str = self.__class__.__name__ - indent = 4 - repr_str += '(\n' - repr_str += ' ' * indent + f'num_cur_sweep={self.cur_voxel_num},\n' - repr_str += ' ' * indent + f'num_prev_sweep={self.prev_voxel_num},\n' - repr_str += ' ' * indent + f'time_dim={self.time_dim},\n' - repr_str += ' ' * indent + 'cur_voxel_generator=\n' - repr_str += f'{_auto_indent(repr(self.cur_voxel_generator), 8)},\n' - repr_str += ' ' * indent + 'prev_voxel_generator=\n' - repr_str += f'{_auto_indent(repr(self.prev_voxel_generator), 8)})' - return repr_str - - -@PIPELINES.register_module() -class AffineResize(object): - """Get the affine transform matrices to the target size. - - Different from :class:`RandomAffine` in MMDetection, this class can - calculate the affine transform matrices while resizing the input image - to a fixed size. The affine transform matrices include: 1) matrix - transforming original image to the network input image size. 2) matrix - transforming original image to the network output feature map size. - - Args: - img_scale (tuple): Images scales for resizing. - down_ratio (int): The down ratio of feature map. - Actually the arg should be >= 1. - bbox_clip_border (bool, optional): Whether clip the objects - outside the border of the image. Defaults to True. - """ - - def __init__(self, img_scale, down_ratio, bbox_clip_border=True): - - self.img_scale = img_scale - self.down_ratio = down_ratio - self.bbox_clip_border = bbox_clip_border - - def __call__(self, results): - """Call function to do affine transform to input image and labels. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Results after affine resize, 'affine_aug', 'trans_mat' - keys are added in the result dict. 
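An illustrative configuration for the `VoxelBasedPointSampler` removed above is given below. Only `max_num_points` is checked explicitly in the code above; the remaining `VoxelGenerator` keys (`point_cloud_range`, `voxel_size`, `max_voxels`) and all numeric values are assumptions for the sake of the example.

```python
# Illustrative VoxelBasedPointSampler entry (generator keys/values assumed).
voxel_point_sampler = dict(
    type='VoxelBasedPointSampler',
    cur_sweep_cfg=dict(
        max_num_points=1,
        point_cloud_range=[-50, -50, -5, 50, 50, 3],
        voxel_size=[0.1, 0.1, 0.1],
        max_voxels=65536),
    prev_sweep_cfg=dict(
        max_num_points=1,                 # must match cur_sweep_cfg per the assert above
        point_cloud_range=[-50, -50, -5, 50, 50, 3],
        voxel_size=[0.1, 0.1, 0.1],
        max_voxels=65536),
    time_dim=3)
```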
- """ - # The results have gone through RandomShiftScale before AffineResize - if 'center' not in results: - img = results['img'] - height, width = img.shape[:2] - center = np.array([width / 2, height / 2], dtype=np.float32) - size = np.array([width, height], dtype=np.float32) - results['affine_aug'] = False - else: - # The results did not go through RandomShiftScale before - # AffineResize - img = results['img'] - center = results['center'] - size = results['size'] - - trans_affine = self._get_transform_matrix(center, size, self.img_scale) - - img = cv2.warpAffine(img, trans_affine[:2, :], self.img_scale) - - if isinstance(self.down_ratio, tuple): - trans_mat = [ - self._get_transform_matrix( - center, size, - (self.img_scale[0] // ratio, self.img_scale[1] // ratio)) - for ratio in self.down_ratio - ] # (3, 3) - else: - trans_mat = self._get_transform_matrix( - center, size, (self.img_scale[0] // self.down_ratio, - self.img_scale[1] // self.down_ratio)) - - results['img'] = img - results['img_shape'] = img.shape - results['pad_shape'] = img.shape - results['trans_mat'] = trans_mat - - self._affine_bboxes(results, trans_affine) - - if 'centers2d' in results: - centers2d = self._affine_transform(results['centers2d'], - trans_affine) - valid_index = (centers2d[:, 0] > - 0) & (centers2d[:, 0] < - self.img_scale[0]) & (centers2d[:, 1] > 0) & ( - centers2d[:, 1] < self.img_scale[1]) - results['centers2d'] = centers2d[valid_index] - - for key in results.get('bbox_fields', []): - if key in ['gt_bboxes']: - results[key] = results[key][valid_index] - if 'gt_labels' in results: - results['gt_labels'] = results['gt_labels'][ - valid_index] - if 'gt_masks' in results: - raise NotImplementedError( - 'AffineResize only supports bbox.') - - for key in results.get('bbox3d_fields', []): - if key in ['gt_bboxes_3d']: - results[key].tensor = results[key].tensor[valid_index] - if 'gt_labels_3d' in results: - results['gt_labels_3d'] = results['gt_labels_3d'][ - valid_index] - - results['depths'] = results['depths'][valid_index] - - return results - - def _affine_bboxes(self, results, matrix): - """Affine transform bboxes to input image. - - Args: - results (dict): Result dict from loading pipeline. - matrix (np.ndarray): Matrix transforming original - image to the network input image size. - shape: (3, 3) - """ - - for key in results.get('bbox_fields', []): - bboxes = results[key] - bboxes[:, :2] = self._affine_transform(bboxes[:, :2], matrix) - bboxes[:, 2:] = self._affine_transform(bboxes[:, 2:], matrix) - if self.bbox_clip_border: - bboxes[:, - [0, 2]] = bboxes[:, - [0, 2]].clip(0, self.img_scale[0] - 1) - bboxes[:, - [1, 3]] = bboxes[:, - [1, 3]].clip(0, self.img_scale[1] - 1) - results[key] = bboxes - - def _affine_transform(self, points, matrix): - """Affine transform bbox points to input image. - - Args: - points (np.ndarray): Points to be transformed. - shape: (N, 2) - matrix (np.ndarray): Affine transform matrix. - shape: (3, 3) - - Returns: - np.ndarray: Transformed points. - """ - num_points = points.shape[0] - hom_points_2d = np.concatenate((points, np.ones((num_points, 1))), - axis=1) - hom_points_2d = hom_points_2d.T - affined_points = np.matmul(matrix, hom_points_2d).T - return affined_points[:, :2] - - def _get_transform_matrix(self, center, scale, output_scale): - """Get affine transform matrix. - - Args: - center (tuple): Center of current image. - scale (tuple): Scale of current image. - output_scale (tuple[float]): The transform target image scales. 
- - Returns: - np.ndarray: Affine transform matrix. - """ - # TODO: further add rot and shift here. - src_w = scale[0] - dst_w = output_scale[0] - dst_h = output_scale[1] - - src_dir = np.array([0, src_w * -0.5]) - dst_dir = np.array([0, dst_w * -0.5]) - - src = np.zeros((3, 2), dtype=np.float32) - dst = np.zeros((3, 2), dtype=np.float32) - src[0, :] = center - src[1, :] = center + src_dir - dst[0, :] = np.array([dst_w * 0.5, dst_h * 0.5]) - dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir - - src[2, :] = self._get_ref_point(src[0, :], src[1, :]) - dst[2, :] = self._get_ref_point(dst[0, :], dst[1, :]) - - get_matrix = cv2.getAffineTransform(src, dst) - - matrix = np.concatenate((get_matrix, [[0., 0., 1.]])) - - return matrix.astype(np.float32) - - def _get_ref_point(self, ref_point1, ref_point2): - """Get reference point to calculate affine transform matrix. - - While using opencv to calculate the affine matrix, we need at least - three corresponding points separately on original image and target - image. Here we use two points to get the the third reference point. - """ - d = ref_point1 - ref_point2 - ref_point3 = ref_point2 + np.array([-d[1], d[0]]) - return ref_point3 - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(img_scale={self.img_scale}, ' - repr_str += f'down_ratio={self.down_ratio}) ' - return repr_str - - -@PIPELINES.register_module() -class RandomShiftScale(object): - """Random shift scale. - - Different from the normal shift and scale function, it doesn't - directly shift or scale image. It can record the shift and scale - infos into loading pipelines. It's designed to be used with - AffineResize together. - - Args: - shift_scale (tuple[float]): Shift and scale range. - aug_prob (float): The shifting and scaling probability. - """ - - def __init__(self, shift_scale, aug_prob): - - self.shift_scale = shift_scale - self.aug_prob = aug_prob - - def __call__(self, results): - """Call function to record random shift and scale infos. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Results after random shift and scale, 'center', 'size' - and 'affine_aug' keys are added in the result dict. - """ - img = results['img'] - - height, width = img.shape[:2] - - center = np.array([width / 2, height / 2], dtype=np.float32) - size = np.array([width, height], dtype=np.float32) - - if random.random() < self.aug_prob: - shift, scale = self.shift_scale[0], self.shift_scale[1] - shift_ranges = np.arange(-shift, shift + 0.1, 0.1) - center[0] += size[0] * random.choice(shift_ranges) - center[1] += size[1] * random.choice(shift_ranges) - scale_ranges = np.arange(1 - scale, 1 + scale + 0.1, 0.1) - size *= random.choice(scale_ranges) - results['affine_aug'] = True - else: - results['affine_aug'] = False - - results['center'] = center - results['size'] = size - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(shift_scale={self.shift_scale}, ' - repr_str += f'aug_prob={self.aug_prob}) ' - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/s3dis_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/s3dis_dataset.py deleted file mode 100644 index 3df2eedd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/s3dis_dataset.py +++ /dev/null @@ -1,447 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. 
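The monocular transforms removed above (`RandomShiftScale` and `AffineResize`) are documented as a pair: the first records the shift/scale parameters, the second applies the affine warp. A minimal illustrative pipeline fragment is shown below; argument names follow the constructors above, and the numeric values are placeholders.

```python
# Illustrative monocular 3D augmentation fragment (placeholder values).
mono3d_aug = [
    dict(type='RandomShiftScale', shift_scale=(0.2, 0.4), aug_prob=0.3),
    dict(type='AffineResize', img_scale=(1280, 384), down_ratio=4),
]
```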
All rights reserved. -from os import path as osp - -import numpy as np - -from mmdet3d.core import show_seg_result -from mmdet3d.core.bbox import DepthInstance3DBoxes -from mmseg.datasets import DATASETS as SEG_DATASETS -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .custom_3d_seg import Custom3DSegDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class S3DISDataset(Custom3DDataset): - r"""S3DIS Dataset for Detection Task. - - This class is the inner dataset for S3DIS. Since S3DIS has 6 areas, we - often train on 5 of them and test on the remaining one. The one for - test is Area_5 as suggested in `GSDN `_. - To concatenate 5 areas during training - `mmdet.datasets.dataset_wrappers.ConcatDataset` should be used. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'Depth' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - """ - CLASSES = ('table', 'chair', 'sofa', 'bookcase', 'board') - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - modality=None, - box_type_3d='Depth', - filter_empty_gt=True, - test_mode=False, - *kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - *kwargs) - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`DepthInstance3DBoxes`): - 3D ground truth bboxes - - gt_labels_3d (np.ndarray): Labels of ground truths. - - pts_instance_mask_path (str): Path of instance masks. - - pts_semantic_mask_path (str): Path of semantic masks. 
- """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - if info['annos']['gt_num'] != 0: - gt_bboxes_3d = info['annos']['gt_boxes_upright_depth'].astype( - np.float32) # k, 6 - gt_labels_3d = info['annos']['class'].astype(np.int64) - else: - gt_bboxes_3d = np.zeros((0, 6), dtype=np.float32) - gt_labels_3d = np.zeros((0, ), dtype=np.int64) - - # to target box structure - gt_bboxes_3d = DepthInstance3DBoxes( - gt_bboxes_3d, - box_dim=gt_bboxes_3d.shape[-1], - with_yaw=False, - origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - pts_instance_mask_path = osp.join(self.data_root, - info['pts_instance_mask_path']) - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - pts_instance_mask_path=pts_instance_mask_path, - pts_semantic_mask_path=pts_semantic_mask_path) - return anns_results - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - pts_filename (str): Filename of point clouds. - - file_name (str): Filename of point clouds. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - pts_filename = osp.join(self.data_root, info['pts_path']) - input_dict = dict(pts_filename=pts_filename) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - if self.filter_empty_gt and ~(annos['gt_labels_3d'] != -1).any(): - return None - return input_dict - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - return Compose(pipeline) - - -class _S3DISSegDataset(Custom3DSegDataset): - r"""S3DIS Dataset for Semantic Segmentation Task. - - This class is the inner dataset for S3DIS. Since S3DIS has 6 areas, we - often train on 5 of them and test on the remaining one. - However, there is not a fixed train-test split of S3DIS. People often test - on Area_5 as suggested by `SEGCloud `_. - But many papers also report the average results of 6-fold cross validation - over the 6 areas (e.g. `DGCNN `_). - Therefore, we use an inner dataset for one area, and further use a dataset - wrapper to concat all the provided data in different areas. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - palette (list[list[int]], optional): The palette of segmentation map. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. If None is given, set to len(self.CLASSES). - Defaults to None. - scene_idxs (np.ndarray | str, optional): Precomputed index to load - data. 
For scenes with many points, we may sample it several times. - Defaults to None. - """ - CLASSES = ('ceiling', 'floor', 'wall', 'beam', 'column', 'window', 'door', - 'table', 'chair', 'sofa', 'bookcase', 'board', 'clutter') - - VALID_CLASS_IDS = tuple(range(13)) - - ALL_CLASS_IDS = tuple(range(14)) # possibly with 'stair' class - - PALETTE = [[0, 255, 0], [0, 0, 255], [0, 255, 255], [255, 255, 0], - [255, 0, 255], [100, 100, 255], [200, 200, 100], - [170, 120, 200], [255, 0, 0], [200, 100, 100], [10, 200, 100], - [200, 200, 200], [50, 50, 50]] - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - palette=None, - modality=None, - test_mode=False, - ignore_index=None, - scene_idxs=None, - **kwargs): - - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - palette=palette, - modality=modality, - test_mode=test_mode, - ignore_index=ignore_index, - scene_idxs=scene_idxs, - **kwargs) - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - pts_semantic_mask_path (str): Path of semantic masks. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - anns_results = dict(pts_semantic_mask_path=pts_semantic_mask_path) - return anns_results - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=self.VALID_CLASS_IDS, - max_cat_id=np.max(self.ALL_CLASS_IDS)), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=self.CLASSES), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=True, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Visualize the results online. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' - pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - data_info = self.data_infos[i] - pts_path = data_info['pts_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points, gt_sem_mask = self._extract_data( - i, pipeline, ['points', 'pts_semantic_mask'], load_annos=True) - points = points.numpy() - pred_sem_mask = result['semantic_mask'].numpy() - show_seg_result(points, gt_sem_mask, - pred_sem_mask, out_dir, file_name, - np.array(self.PALETTE), self.ignore_index, show) - - def get_scene_idxs(self, scene_idxs): - """Compute scene_idxs for data sampling. - - We sample more times for scenes with more points. 
- """ - # when testing, we load one whole scene every time - if not self.test_mode and scene_idxs is None: - raise NotImplementedError( - 'please provide re-sampled scene indexes for training') - - return super().get_scene_idxs(scene_idxs) - - -@DATASETS.register_module() -@SEG_DATASETS.register_module() -class S3DISSegDataset(_S3DISSegDataset): - r"""S3DIS Dataset for Semantic Segmentation Task. - - This class serves as the API for experiments on the S3DIS Dataset. - It wraps the provided datasets of different areas. - We don't use `mmdet.datasets.dataset_wrappers.ConcatDataset` because we - need to concat the `scene_idxs` of different areas. - - Please refer to the `google form `_ for - data downloading. - - Args: - data_root (str): Path of dataset root. - ann_files (list[str]): Path of several annotation files. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - palette (list[list[int]], optional): The palette of segmentation map. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. If None is given, set to len(self.CLASSES). - Defaults to None. - scene_idxs (list[np.ndarray] | list[str], optional): Precomputed index - to load data. For scenes with many points, we may sample it several - times. Defaults to None. - """ - - def __init__(self, - data_root, - ann_files, - pipeline=None, - classes=None, - palette=None, - modality=None, - test_mode=False, - ignore_index=None, - scene_idxs=None, - **kwargs): - - # make sure that ann_files and scene_idxs have same length - ann_files = self._check_ann_files(ann_files) - scene_idxs = self._check_scene_idxs(scene_idxs, len(ann_files)) - - # initialize some attributes as datasets[0] - super().__init__( - data_root=data_root, - ann_file=ann_files[0], - pipeline=pipeline, - classes=classes, - palette=palette, - modality=modality, - test_mode=test_mode, - ignore_index=ignore_index, - scene_idxs=scene_idxs[0], - **kwargs) - - datasets = [ - _S3DISSegDataset( - data_root=data_root, - ann_file=ann_files[i], - pipeline=pipeline, - classes=classes, - palette=palette, - modality=modality, - test_mode=test_mode, - ignore_index=ignore_index, - scene_idxs=scene_idxs[i], - **kwargs) for i in range(len(ann_files)) - ] - - # data_infos and scene_idxs need to be concat - self.concat_data_infos([dst.data_infos for dst in datasets]) - self.concat_scene_idxs([dst.scene_idxs for dst in datasets]) - - # set group flag for the sampler - if not self.test_mode: - self._set_group_flag() - - def concat_data_infos(self, data_infos): - """Concat data_infos from several datasets to form self.data_infos. - - Args: - data_infos (list[list[dict]]) - """ - self.data_infos = [ - info for one_data_infos in data_infos for info in one_data_infos - ] - - def concat_scene_idxs(self, scene_idxs): - """Concat scene_idxs from several datasets to form self.scene_idxs. - - Needs to manually add offset to scene_idxs[1, 2, ...]. 
- - Args: - scene_idxs (list[np.ndarray]) - """ - self.scene_idxs = np.array([], dtype=np.int32) - offset = 0 - for one_scene_idxs in scene_idxs: - self.scene_idxs = np.concatenate( - [self.scene_idxs, one_scene_idxs + offset]).astype(np.int32) - offset = np.unique(self.scene_idxs).max() + 1 - - @staticmethod - def _duplicate_to_list(x, num): - """Repeat x `num` times to form a list.""" - return [x for _ in range(num)] - - def _check_ann_files(self, ann_file): - """Make ann_files as list/tuple.""" - # ann_file could be str - if not isinstance(ann_file, (list, tuple)): - ann_file = self._duplicate_to_list(ann_file, 1) - return ann_file - - def _check_scene_idxs(self, scene_idx, num): - """Make scene_idxs as list/tuple.""" - if scene_idx is None: - return self._duplicate_to_list(scene_idx, num) - # scene_idx could be str, np.ndarray, list or tuple - if isinstance(scene_idx, str): # str - return self._duplicate_to_list(scene_idx, num) - if isinstance(scene_idx[0], str): # list of str - return scene_idx - if isinstance(scene_idx[0], (list, tuple, np.ndarray)): # list of idx - return scene_idx - # single idx - return self._duplicate_to_list(scene_idx, num) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/scannet_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/scannet_dataset.py deleted file mode 100644 index 67727d11..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/scannet_dataset.py +++ /dev/null @@ -1,616 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import tempfile -import warnings -from os import path as osp - -import numpy as np - -from mmdet3d.core import instance_seg_eval, show_result, show_seg_result -from mmdet3d.core.bbox import DepthInstance3DBoxes -from mmseg.datasets import DATASETS as SEG_DATASETS -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .custom_3d_seg import Custom3DSegDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class ScanNetDataset(Custom3DDataset): - r"""ScanNet Dataset for Detection Task. - - This class serves as the API for experiments on the ScanNet Dataset. - - Please refer to the `github repo `_ - for data downloading. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'Depth' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. 
- """ - CLASSES = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - modality=dict(use_camera=False, use_depth=True), - box_type_3d='Depth', - filter_empty_gt=True, - test_mode=False, - **kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - **kwargs) - assert 'use_camera' in self.modality and \ - 'use_depth' in self.modality - assert self.modality['use_camera'] or self.modality['use_depth'] - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - file_name (str): Filename of point clouds. - - img_prefix (str, optional): Prefix of image files. - - img_info (dict, optional): Image info. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - sample_idx = info['point_cloud']['lidar_idx'] - pts_filename = osp.join(self.data_root, info['pts_path']) - input_dict = dict(sample_idx=sample_idx) - - if self.modality['use_depth']: - input_dict['pts_filename'] = pts_filename - input_dict['file_name'] = pts_filename - - if self.modality['use_camera']: - img_info = [] - for img_path in info['img_paths']: - img_info.append( - dict(filename=osp.join(self.data_root, img_path))) - intrinsic = info['intrinsics'] - axis_align_matrix = self._get_axis_align_matrix(info) - depth2img = [] - for extrinsic in info['extrinsics']: - depth2img.append( - intrinsic @ np.linalg.inv(axis_align_matrix @ extrinsic)) - - input_dict['img_prefix'] = None - input_dict['img_info'] = img_info - input_dict['depth2img'] = depth2img - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - if self.filter_empty_gt and ~(annos['gt_labels_3d'] != -1).any(): - return None - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`DepthInstance3DBoxes`): - 3D ground truth bboxes - - gt_labels_3d (np.ndarray): Labels of ground truths. - - pts_instance_mask_path (str): Path of instance masks. - - pts_semantic_mask_path (str): Path of semantic masks. - - axis_align_matrix (np.ndarray): Transformation matrix for - global scene alignment. 
- """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - if info['annos']['gt_num'] != 0: - gt_bboxes_3d = info['annos']['gt_boxes_upright_depth'].astype( - np.float32) # k, 6 - gt_labels_3d = info['annos']['class'].astype(np.int64) - else: - gt_bboxes_3d = np.zeros((0, 6), dtype=np.float32) - gt_labels_3d = np.zeros((0, ), dtype=np.int64) - - # to target box structure - gt_bboxes_3d = DepthInstance3DBoxes( - gt_bboxes_3d, - box_dim=gt_bboxes_3d.shape[-1], - with_yaw=False, - origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - pts_instance_mask_path = osp.join(self.data_root, - info['pts_instance_mask_path']) - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - axis_align_matrix = self._get_axis_align_matrix(info) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - pts_instance_mask_path=pts_instance_mask_path, - pts_semantic_mask_path=pts_semantic_mask_path, - axis_align_matrix=axis_align_matrix) - return anns_results - - def prepare_test_data(self, index): - """Prepare data for testing. - - We should take axis_align_matrix from self.data_infos since we need - to align point clouds. - - Args: - index (int): Index for accessing the target data. - - Returns: - dict: Testing data dict of the corresponding index. - """ - input_dict = self.get_data_info(index) - # take the axis_align_matrix from data_infos - input_dict['ann_info'] = dict( - axis_align_matrix=self._get_axis_align_matrix( - self.data_infos[index])) - self.pre_pipeline(input_dict) - example = self.pipeline(input_dict) - return example - - @staticmethod - def _get_axis_align_matrix(info): - """Get axis_align_matrix from info. If not exist, return identity mat. - - Args: - info (dict): one data info term. - - Returns: - np.ndarray: 4x4 transformation matrix. - """ - if 'axis_align_matrix' in info['annos'].keys(): - return info['annos']['axis_align_matrix'].astype(np.float32) - else: - warnings.warn( - 'axis_align_matrix is not found in ScanNet data info, please ' - 'use new pre-process scripts to re-generate ScanNet data') - return np.eye(4).astype(np.float32) - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict(type='GlobalAlignment', rotation_axis=2), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=True, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Visualize the results online. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' 
- pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - data_info = self.data_infos[i] - pts_path = data_info['pts_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points = self._extract_data(i, pipeline, 'points').numpy() - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy() - pred_bboxes = result['boxes_3d'].tensor.numpy() - show_result(points, gt_bboxes, pred_bboxes, out_dir, file_name, - show) - - -@DATASETS.register_module() -@SEG_DATASETS.register_module() -class ScanNetSegDataset(Custom3DSegDataset): - r"""ScanNet Dataset for Semantic Segmentation Task. - - This class serves as the API for experiments on the ScanNet Dataset. - - Please refer to the `github repo `_ - for data downloading. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - palette (list[list[int]], optional): The palette of segmentation map. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. If None is given, set to len(self.CLASSES). - Defaults to None. - scene_idxs (np.ndarray | str, optional): Precomputed index to load - data. For scenes with many points, we may sample it several times. - Defaults to None. - """ - CLASSES = ('wall', 'floor', 'cabinet', 'bed', 'chair', 'sofa', 'table', - 'door', 'window', 'bookshelf', 'picture', 'counter', 'desk', - 'curtain', 'refrigerator', 'showercurtrain', 'toilet', 'sink', - 'bathtub', 'otherfurniture') - - VALID_CLASS_IDS = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, - 33, 34, 36, 39) - - ALL_CLASS_IDS = tuple(range(41)) - - PALETTE = [ - [174, 199, 232], - [152, 223, 138], - [31, 119, 180], - [255, 187, 120], - [188, 189, 34], - [140, 86, 75], - [255, 152, 150], - [214, 39, 40], - [197, 176, 213], - [148, 103, 189], - [196, 156, 148], - [23, 190, 207], - [247, 182, 210], - [219, 219, 141], - [255, 127, 14], - [158, 218, 229], - [44, 160, 44], - [112, 128, 144], - [227, 119, 194], - [82, 84, 163], - ] - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - palette=None, - modality=None, - test_mode=False, - ignore_index=None, - scene_idxs=None, - **kwargs): - - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - palette=palette, - modality=modality, - test_mode=test_mode, - ignore_index=ignore_index, - scene_idxs=scene_idxs, - **kwargs) - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - pts_semantic_mask_path (str): Path of semantic masks. 
- """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - anns_results = dict(pts_semantic_mask_path=pts_semantic_mask_path) - return anns_results - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=False, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=self.VALID_CLASS_IDS, - max_cat_id=np.max(self.ALL_CLASS_IDS)), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=self.CLASSES), - dict(type='Collect3D', keys=['points', 'pts_semantic_mask']) - ] - return Compose(pipeline) - - def show(self, results, out_dir, show=True, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Visualize the results online. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' - pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - data_info = self.data_infos[i] - pts_path = data_info['pts_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points, gt_sem_mask = self._extract_data( - i, pipeline, ['points', 'pts_semantic_mask'], load_annos=True) - points = points.numpy() - pred_sem_mask = result['semantic_mask'].numpy() - show_seg_result(points, gt_sem_mask, - pred_sem_mask, out_dir, file_name, - np.array(self.PALETTE), self.ignore_index, show) - - def get_scene_idxs(self, scene_idxs): - """Compute scene_idxs for data sampling. - - We sample more times for scenes with more points. - """ - # when testing, we load one whole scene every time - if not self.test_mode and scene_idxs is None: - raise NotImplementedError( - 'please provide re-sampled scene indexes for training') - - return super().get_scene_idxs(scene_idxs) - - def format_results(self, results, txtfile_prefix=None): - r"""Format the results to txt file. Refer to `ScanNet documentation - `_. - - Args: - outputs (list[dict]): Testing results of the dataset. - txtfile_prefix (str): The prefix of saved files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (outputs, tmp_dir), outputs is the detection results, - tmp_dir is the temporal directory created for saving submission - files when ``submission_prefix`` is not specified. 
- """ - import mmcv - - if txtfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - txtfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - mmcv.mkdir_or_exist(txtfile_prefix) - - # need to map network output to original label idx - pred2label = np.zeros(len(self.VALID_CLASS_IDS)).astype(np.int) - for original_label, output_idx in self.label_map.items(): - if output_idx != self.ignore_index: - pred2label[output_idx] = original_label - - outputs = [] - for i, result in enumerate(results): - info = self.data_infos[i] - sample_idx = info['point_cloud']['lidar_idx'] - pred_sem_mask = result['semantic_mask'].numpy().astype(np.int) - pred_label = pred2label[pred_sem_mask] - curr_file = f'{txtfile_prefix}/{sample_idx}.txt' - np.savetxt(curr_file, pred_label, fmt='%d') - outputs.append(dict(seg_mask=pred_label)) - - return outputs, tmp_dir - - -@DATASETS.register_module() -@SEG_DATASETS.register_module() -class ScanNetInstanceSegDataset(Custom3DSegDataset): - CLASSES = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin') - - VALID_CLASS_IDS = (3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, - 36, 39) - - ALL_CLASS_IDS = tuple(range(41)) - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - pts_semantic_mask_path (str): Path of semantic masks. - - pts_instance_mask_path (str): Path of instance masks. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - - pts_instance_mask_path = osp.join(self.data_root, - info['pts_instance_mask_path']) - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - anns_results = dict( - pts_instance_mask_path=pts_instance_mask_path, - pts_semantic_mask_path=pts_semantic_mask_path) - return anns_results - - def get_classes_and_palette(self, classes=None, palette=None): - """Get class names of current dataset. Palette is simply ignored for - instance segmentation. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - Defaults to None. - palette (Sequence[Sequence[int]]] | np.ndarray | None): - The palette of segmentation map. If None is given, random - palette will be generated. Defaults to None. 
- """ - if classes is not None: - return classes, None - return self.CLASSES, None - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - use_color=True, - load_dim=6, - use_dim=[0, 1, 2, 3, 4, 5]), - dict( - type='LoadAnnotations3D', - with_bbox_3d=False, - with_label_3d=False, - with_mask_3d=True, - with_seg_3d=True), - dict( - type='PointSegClassMapping', - valid_cat_ids=self.VALID_CLASS_IDS, - max_cat_id=40), - dict( - type='DefaultFormatBundle3D', - with_label=False, - class_names=self.CLASSES), - dict( - type='Collect3D', - keys=['points', 'pts_semantic_mask', 'pts_instance_mask']) - ] - return Compose(pipeline) - - def evaluate(self, - results, - metric=None, - options=None, - logger=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluation in instance segmentation protocol. - - Args: - results (list[dict]): List of results. - metric (str | list[str]): Metrics to be evaluated. - options (dict, optional): options for instance_seg_eval. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Defaults to None. - show (bool, optional): Whether to visualize. - Defaults to False. - out_dir (str, optional): Path to save the visualization results. - Defaults to None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict: Evaluation results. - """ - assert isinstance( - results, list), f'Expect results to be list, got {type(results)}.' - assert len(results) > 0, 'Expect length of results > 0.' - assert len(results) == len(self.data_infos) - assert isinstance( - results[0], dict - ), f'Expect elements in results to be dict, got {type(results[0])}.' - - load_pipeline = self._get_pipeline(pipeline) - pred_instance_masks = [result['instance_mask'] for result in results] - pred_instance_labels = [result['instance_label'] for result in results] - pred_instance_scores = [result['instance_score'] for result in results] - gt_semantic_masks, gt_instance_masks = zip(*[ - self._extract_data( - index=i, - pipeline=load_pipeline, - key=['pts_semantic_mask', 'pts_instance_mask'], - load_annos=True) for i in range(len(self.data_infos)) - ]) - ret_dict = instance_seg_eval( - gt_semantic_masks, - gt_instance_masks, - pred_instance_masks, - pred_instance_labels, - pred_instance_scores, - valid_class_ids=self.VALID_CLASS_IDS, - class_labels=self.CLASSES, - options=options, - logger=logger) - - if show: - raise NotImplementedError('show is not implemented for now') - - return ret_dict diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/semantickitti_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/semantickitti_dataset.py deleted file mode 100644 index fdd8423e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/semantickitti_dataset.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -from .builder import DATASETS -from .custom_3d import Custom3DDataset - - -@DATASETS.register_module() -class SemanticKITTIDataset(Custom3DDataset): - r"""SemanticKITTI Dataset. - - This class serves as the API for experiments on the SemanticKITTI Dataset - Please refer to `_ - for data downloading - - Args: - data_root (str): Path of dataset root. 
- ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): NO 3D box for this dataset. - You can choose any type - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - """ - CLASSES = ('unlabeled', 'car', 'bicycle', 'motorcycle', 'truck', 'bus', - 'person', 'bicyclist', 'motorcyclist', 'road', 'parking', - 'sidewalk', 'other-ground', 'building', 'fence', 'vegetation', - 'trunck', 'terrian', 'pole', 'traffic-sign') - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - modality=None, - box_type_3d='Lidar', - filter_empty_gt=False, - test_mode=False): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode) - - def get_data_info(self, index): - """Get data info according to the given index. - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - sample_idx (str): Sample index. - - pts_filename (str): Filename of point clouds. - - file_name (str): Filename of point clouds. - - ann_info (dict): Annotation info. - """ - info = self.data_infos[index] - sample_idx = info['point_cloud']['lidar_idx'] - pts_filename = osp.join(self.data_root, info['pts_path']) - - input_dict = dict( - pts_filename=pts_filename, - sample_idx=sample_idx, - file_name=pts_filename) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - if self.filter_empty_gt and ~(annos['gt_labels_3d'] != -1).any(): - return None - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - pts_semantic_mask_path (str): Path of semantic masks. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - - pts_semantic_mask_path = osp.join(self.data_root, - info['pts_semantic_mask_path']) - - anns_results = dict(pts_semantic_mask_path=pts_semantic_mask_path) - return anns_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/sunrgbd_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/sunrgbd_dataset.py deleted file mode 100644 index 53e054de..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/sunrgbd_dataset.py +++ /dev/null @@ -1,282 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections import OrderedDict -from os import path as osp - -import numpy as np - -from mmdet3d.core import show_multi_modality_result, show_result -from mmdet3d.core.bbox import DepthInstance3DBoxes -from mmdet.core import eval_map -from .builder import DATASETS -from .custom_3d import Custom3DDataset -from .pipelines import Compose - - -@DATASETS.register_module() -class SUNRGBDDataset(Custom3DDataset): - r"""SUNRGBD Dataset. - - This class serves as the API for experiments on the SUNRGBD Dataset. - - See the `download page `_ - for data downloading. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'Depth' in this dataset. Available options includes - - - 'LiDAR': Box in LiDAR coordinates. - - 'Depth': Box in depth coordinates, usually for indoor dataset. - - 'Camera': Box in camera coordinates. - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - """ - CLASSES = ('bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser', - 'night_stand', 'bookshelf', 'bathtub') - - def __init__(self, - data_root, - ann_file, - pipeline=None, - classes=None, - modality=dict(use_camera=True, use_lidar=True), - box_type_3d='Depth', - filter_empty_gt=True, - test_mode=False, - **kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - **kwargs) - assert 'use_camera' in self.modality and \ - 'use_lidar' in self.modality - assert self.modality['use_camera'] or self.modality['use_lidar'] - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Data information that will be passed to the data - preprocessing pipelines. It includes the following keys: - - - sample_idx (str): Sample index. - - pts_filename (str, optional): Filename of point clouds. - - file_name (str, optional): Filename of point clouds. - - img_prefix (str, optional): Prefix of image files. - - img_info (dict, optional): Image info. - - calib (dict, optional): Camera calibration info. - - ann_info (dict): Annotation info. 
- """ - info = self.data_infos[index] - sample_idx = info['point_cloud']['lidar_idx'] - assert info['point_cloud']['lidar_idx'] == info['image']['image_idx'] - input_dict = dict(sample_idx=sample_idx) - - if self.modality['use_lidar']: - pts_filename = osp.join(self.data_root, info['pts_path']) - input_dict['pts_filename'] = pts_filename - input_dict['file_name'] = pts_filename - - if self.modality['use_camera']: - img_filename = osp.join( - osp.join(self.data_root, 'sunrgbd_trainval'), - info['image']['image_path']) - input_dict['img_prefix'] = None - input_dict['img_info'] = dict(filename=img_filename) - calib = info['calib'] - rt_mat = calib['Rt'] - # follow Coord3DMode.convert_point - rt_mat = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0] - ]) @ rt_mat.transpose(1, 0) - depth2img = calib['K'] @ rt_mat - input_dict['depth2img'] = depth2img - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - if self.filter_empty_gt and len(annos['gt_bboxes_3d']) == 0: - return None - return input_dict - - def get_ann_info(self, index): - """Get annotation info according to the given index. - - Args: - index (int): Index of the annotation data to get. - - Returns: - dict: annotation information consists of the following keys: - - - gt_bboxes_3d (:obj:`DepthInstance3DBoxes`): - 3D ground truth bboxes - - gt_labels_3d (np.ndarray): Labels of ground truths. - - pts_instance_mask_path (str): Path of instance masks. - - pts_semantic_mask_path (str): Path of semantic masks. - """ - # Use index to get the annos, thus the evalhook could also use this api - info = self.data_infos[index] - if info['annos']['gt_num'] != 0: - gt_bboxes_3d = info['annos']['gt_boxes_upright_depth'].astype( - np.float32) # k, 6 - gt_labels_3d = info['annos']['class'].astype(np.int64) - else: - gt_bboxes_3d = np.zeros((0, 7), dtype=np.float32) - gt_labels_3d = np.zeros((0, ), dtype=np.int64) - - # to target box structure - gt_bboxes_3d = DepthInstance3DBoxes( - gt_bboxes_3d, origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d) - - anns_results = dict( - gt_bboxes_3d=gt_bboxes_3d, gt_labels_3d=gt_labels_3d) - - if self.modality['use_camera']: - if info['annos']['gt_num'] != 0: - gt_bboxes_2d = info['annos']['bbox'].astype(np.float32) - else: - gt_bboxes_2d = np.zeros((0, 4), dtype=np.float32) - anns_results['bboxes'] = gt_bboxes_2d - anns_results['labels'] = gt_labels_3d - - return anns_results - - def _build_default_pipeline(self): - """Build the default pipeline for this dataset.""" - pipeline = [ - dict( - type='LoadPointsFromFile', - coord_type='DEPTH', - shift_height=False, - load_dim=6, - use_dim=[0, 1, 2]), - dict( - type='DefaultFormatBundle3D', - class_names=self.CLASSES, - with_label=False), - dict(type='Collect3D', keys=['points']) - ] - if self.modality['use_camera']: - pipeline.insert(0, dict(type='LoadImageFromFile')) - return Compose(pipeline) - - def show(self, results, out_dir, show=True, pipeline=None): - """Results visualization. - - Args: - results (list[dict]): List of bounding boxes results. - out_dir (str): Output directory of visualization result. - show (bool): Visualize the results online. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - """ - assert out_dir is not None, 'Expect out_dir, got none.' 
- pipeline = self._get_pipeline(pipeline) - for i, result in enumerate(results): - data_info = self.data_infos[i] - pts_path = data_info['pts_path'] - file_name = osp.split(pts_path)[-1].split('.')[0] - points, img_metas, img = self._extract_data( - i, pipeline, ['points', 'img_metas', 'img']) - # scale colors to [0, 255] - points = points.numpy() - points[:, 3:] *= 255 - - gt_bboxes = self.get_ann_info(i)['gt_bboxes_3d'].tensor.numpy() - pred_bboxes = result['boxes_3d'].tensor.numpy() - show_result(points, gt_bboxes.copy(), pred_bboxes.copy(), out_dir, - file_name, show) - - # multi-modality visualization - if self.modality['use_camera']: - img = img.numpy() - # need to transpose channel to first dim - img = img.transpose(1, 2, 0) - pred_bboxes = DepthInstance3DBoxes( - pred_bboxes, origin=(0.5, 0.5, 0)) - gt_bboxes = DepthInstance3DBoxes( - gt_bboxes, origin=(0.5, 0.5, 0)) - show_multi_modality_result( - img, - gt_bboxes, - pred_bboxes, - None, - out_dir, - file_name, - box_mode='depth', - img_metas=img_metas, - show=show) - - def evaluate(self, - results, - metric=None, - iou_thr=(0.25, 0.5), - iou_thr_2d=(0.5, ), - logger=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluate. - - Evaluation in indoor protocol. - - Args: - results (list[dict]): List of results. - metric (str | list[str], optional): Metrics to be evaluated. - Default: None. - iou_thr (list[float], optional): AP IoU thresholds for 3D - evaluation. Default: (0.25, 0.5). - iou_thr_2d (list[float], optional): AP IoU thresholds for 2D - evaluation. Default: (0.5, ). - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. - - Returns: - dict: Evaluation results. - """ - # evaluate 3D detection performance - if isinstance(results[0], dict): - return super().evaluate(results, metric, iou_thr, logger, show, - out_dir, pipeline) - # evaluate 2D detection performance - else: - eval_results = OrderedDict() - annotations = [self.get_ann_info(i) for i in range(len(self))] - iou_thr_2d = (iou_thr_2d) if isinstance(iou_thr_2d, - float) else iou_thr_2d - for iou_thr_2d_single in iou_thr_2d: - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=None, - iou_thr=iou_thr_2d_single, - dataset=self.CLASSES, - logger=logger) - eval_results['mAP_' + str(iou_thr_2d_single)] = mean_ap - return eval_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/utils.py deleted file mode 100644 index 3cc48b51..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/utils.py +++ /dev/null @@ -1,142 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv - -# yapf: disable -from mmdet3d.datasets.pipelines import (Collect3D, DefaultFormatBundle3D, - LoadAnnotations3D, - LoadImageFromFileMono3D, - LoadMultiViewImageFromFiles, - LoadPointsFromFile, - LoadPointsFromMultiSweeps, - MultiScaleFlipAug3D, - PointSegClassMapping) -from mmdet.datasets.pipelines import LoadImageFromFile, MultiScaleFlipAug -# yapf: enable -from .builder import PIPELINES - - -def is_loading_function(transform): - """Judge whether a transform function is a loading function. 
- - Note: `MultiScaleFlipAug3D` is a wrapper for multiple pipeline functions, - so we need to search if its inner transforms contain any loading function. - - Args: - transform (dict | :obj:`Pipeline`): A transform config or a function. - - Returns: - bool: Whether it is a loading function. None means can't judge. - When transform is `MultiScaleFlipAug3D`, we return None. - """ - # TODO: use more elegant way to distinguish loading modules - loading_functions = (LoadImageFromFile, LoadPointsFromFile, - LoadAnnotations3D, LoadMultiViewImageFromFiles, - LoadPointsFromMultiSweeps, DefaultFormatBundle3D, - Collect3D, LoadImageFromFileMono3D, - PointSegClassMapping) - if isinstance(transform, dict): - obj_cls = PIPELINES.get(transform['type']) - if obj_cls is None: - return False - if obj_cls in loading_functions: - return True - if obj_cls in (MultiScaleFlipAug3D, MultiScaleFlipAug): - return None - elif callable(transform): - if isinstance(transform, loading_functions): - return True - if isinstance(transform, (MultiScaleFlipAug3D, MultiScaleFlipAug)): - return None - return False - - -def get_loading_pipeline(pipeline): - """Only keep loading image, points and annotations related configuration. - - Args: - pipeline (list[dict] | list[:obj:`Pipeline`]): - Data pipeline configs or list of pipeline functions. - - Returns: - list[dict] | list[:obj:`Pipeline`]): The new pipeline list with only - keep loading image, points and annotations related configuration. - - Examples: - >>> pipelines = [ - ... dict(type='LoadPointsFromFile', - ... coord_type='LIDAR', load_dim=4, use_dim=4), - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations3D', - ... with_bbox=True, with_label_3d=True), - ... dict(type='Resize', - ... img_scale=[(640, 192), (2560, 768)], keep_ratio=True), - ... dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), - ... dict(type='PointsRangeFilter', - ... point_cloud_range=point_cloud_range), - ... dict(type='ObjectRangeFilter', - ... point_cloud_range=point_cloud_range), - ... dict(type='PointShuffle'), - ... dict(type='Normalize', **img_norm_cfg), - ... dict(type='Pad', size_divisor=32), - ... dict(type='DefaultFormatBundle3D', class_names=class_names), - ... dict(type='Collect3D', - ... keys=['points', 'img', 'gt_bboxes_3d', 'gt_labels_3d']) - ... ] - >>> expected_pipelines = [ - ... dict(type='LoadPointsFromFile', - ... coord_type='LIDAR', load_dim=4, use_dim=4), - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations3D', - ... with_bbox=True, with_label_3d=True), - ... dict(type='DefaultFormatBundle3D', class_names=class_names), - ... dict(type='Collect3D', - ... keys=['points', 'img', 'gt_bboxes_3d', 'gt_labels_3d']) - ... ] - >>> assert expected_pipelines == \ - ... get_loading_pipeline(pipelines) - """ - loading_pipeline = [] - for transform in pipeline: - is_loading = is_loading_function(transform) - if is_loading is None: # MultiScaleFlipAug3D - # extract its inner pipeline - if isinstance(transform, dict): - inner_pipeline = transform.get('transforms', []) - else: - inner_pipeline = transform.transforms.transforms - loading_pipeline.extend(get_loading_pipeline(inner_pipeline)) - elif is_loading: - loading_pipeline.append(transform) - assert len(loading_pipeline) > 0, \ - 'The data pipeline in your config file must include ' \ - 'loading step.' - return loading_pipeline - - -def extract_result_dict(results, key): - """Extract and return the data corresponding to key in result dict. 
- - ``results`` is a dict output from `pipeline(input_dict)`, which is the - loaded data from ``Dataset`` class. - The data terms inside may be wrapped in list, tuple and DataContainer, so - this function essentially extracts data from these wrappers. - - Args: - results (dict): Data loaded using pipeline. - key (str): Key of the desired data. - - Returns: - np.ndarray | torch.Tensor: Data term. - """ - if key not in results.keys(): - return None - # results[key] may be data or list[data] or tuple[data] - # data may be wrapped inside DataContainer - data = results[key] - if isinstance(data, (list, tuple)): - data = data[0] - if isinstance(data, mmcv.parallel.DataContainer): - data = data._data - return data diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/waymo_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/waymo_dataset.py deleted file mode 100644 index 4a0f6649..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/datasets/waymo_dataset.py +++ /dev/null @@ -1,551 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import os -import tempfile -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.utils import print_log - -from ..core.bbox import Box3DMode, points_cam2img -from .builder import DATASETS -from .kitti_dataset import KittiDataset - - -@DATASETS.register_module() -class WaymoDataset(KittiDataset): - """Waymo Dataset. - - This class serves as the API for experiments on the Waymo Dataset. - - Please refer to ``_for data downloading. - It is recommended to symlink the dataset root to $MMDETECTION3D/data and - organize them as the doc shows. - - Args: - data_root (str): Path of dataset root. - ann_file (str): Path of annotation file. - split (str): Split of input data. - pts_prefix (str, optional): Prefix of points files. - Defaults to 'velodyne'. - pipeline (list[dict], optional): Pipeline used for data processing. - Defaults to None. - classes (tuple[str], optional): Classes used in the dataset. - Defaults to None. - modality (dict, optional): Modality to specify the sensor data used - as input. Defaults to None. - box_type_3d (str, optional): Type of 3D box of this dataset. - Based on the `box_type_3d`, the dataset will encapsulate the box - to its original format then converted them to `box_type_3d`. - Defaults to 'LiDAR' in this dataset. Available options includes - - - 'LiDAR': box in LiDAR coordinates - - 'Depth': box in depth coordinates, usually for indoor dataset - - 'Camera': box in camera coordinates - filter_empty_gt (bool, optional): Whether to filter empty GT. - Defaults to True. - test_mode (bool, optional): Whether the dataset is in test mode. - Defaults to False. - pcd_limit_range (list(float), optional): The range of point cloud used - to filter invalid predicted boxes. - Default: [-85, -85, -5, 85, 85, 5]. 
- """ - - CLASSES = ('Car', 'Cyclist', 'Pedestrian') - - def __init__(self, - data_root, - ann_file, - split, - pts_prefix='velodyne', - pipeline=None, - classes=None, - modality=None, - box_type_3d='LiDAR', - filter_empty_gt=True, - test_mode=False, - load_interval=1, - pcd_limit_range=[-85, -85, -5, 85, 85, 5], - **kwargs): - super().__init__( - data_root=data_root, - ann_file=ann_file, - split=split, - pts_prefix=pts_prefix, - pipeline=pipeline, - classes=classes, - modality=modality, - box_type_3d=box_type_3d, - filter_empty_gt=filter_empty_gt, - test_mode=test_mode, - pcd_limit_range=pcd_limit_range, - **kwargs) - - # to load a subset, just set the load_interval in the dataset config - self.data_infos = self.data_infos[::load_interval] - if hasattr(self, 'flag'): - self.flag = self.flag[::load_interval] - - def _get_pts_filename(self, idx): - pts_filename = osp.join(self.root_split, self.pts_prefix, - f'{idx:07d}.bin') - return pts_filename - - def get_data_info(self, index): - """Get data info according to the given index. - - Args: - index (int): Index of the sample data to get. - - Returns: - dict: Standard input_dict consists of the - data information. - - - sample_idx (str): sample index - - pts_filename (str): filename of point clouds - - img_prefix (str): prefix of image files - - img_info (dict): image info - - lidar2img (list[np.ndarray], optional): transformations from - lidar to different cameras - - ann_info (dict): annotation info - """ - info = self.data_infos[index] - sample_idx = info['image']['image_idx'] - img_filename = os.path.join(self.data_root, - info['image']['image_path']) - - # TODO: consider use torch.Tensor only - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P0 = info['calib']['P0'].astype(np.float32) - lidar2img = P0 @ rect @ Trv2c - - pts_filename = self._get_pts_filename(sample_idx) - input_dict = dict( - sample_idx=sample_idx, - pts_filename=pts_filename, - img_prefix=None, - img_info=dict(filename=img_filename), - lidar2img=lidar2img) - - if not self.test_mode: - annos = self.get_ann_info(index) - input_dict['ann_info'] = annos - - return input_dict - - def format_results(self, - outputs, - pklfile_prefix=None, - submission_prefix=None, - data_format='waymo'): - """Format the results to pkl file. - - Args: - outputs (list[dict]): Testing results of the dataset. - pklfile_prefix (str): The prefix of pkl files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str): The prefix of submitted files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". If not specified, a temp file will be created. - Default: None. - data_format (str, optional): Output data format. - Default: 'waymo'. Another supported choice is 'kitti'. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing - the json filepaths, tmp_dir is the temporal directory created - for saving json files when jsonfile_prefix is not specified. 
- """ - if pklfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - - assert ('waymo' in data_format or 'kitti' in data_format), \ - f'invalid data_format {data_format}' - - if (not isinstance(outputs[0], dict)) or 'img_bbox' in outputs[0]: - raise TypeError('Not supported type for reformat results.') - elif 'pts_bbox' in outputs[0]: - result_files = dict() - for name in outputs[0]: - results_ = [out[name] for out in outputs] - pklfile_prefix_ = pklfile_prefix + name - if submission_prefix is not None: - submission_prefix_ = f'{submission_prefix}_{name}' - else: - submission_prefix_ = None - result_files_ = self.bbox2result_kitti(results_, self.CLASSES, - pklfile_prefix_, - submission_prefix_) - result_files[name] = result_files_ - else: - result_files = self.bbox2result_kitti(outputs, self.CLASSES, - pklfile_prefix, - submission_prefix) - if 'waymo' in data_format: - from ..core.evaluation.waymo_utils.prediction_kitti_to_waymo import \ - KITTI2Waymo # noqa - waymo_root = osp.join( - self.data_root.split('kitti_format')[0], 'waymo_format') - if self.split == 'training': - waymo_tfrecords_dir = osp.join(waymo_root, 'validation') - prefix = '1' - elif self.split == 'testing': - waymo_tfrecords_dir = osp.join(waymo_root, 'testing') - prefix = '2' - else: - raise ValueError('Not supported split value.') - save_tmp_dir = tempfile.TemporaryDirectory() - waymo_results_save_dir = save_tmp_dir.name - waymo_results_final_path = f'{pklfile_prefix}.bin' - if 'pts_bbox' in result_files: - converter = KITTI2Waymo(result_files['pts_bbox'], - waymo_tfrecords_dir, - waymo_results_save_dir, - waymo_results_final_path, prefix) - else: - converter = KITTI2Waymo(result_files, waymo_tfrecords_dir, - waymo_results_save_dir, - waymo_results_final_path, prefix) - converter.convert() - save_tmp_dir.cleanup() - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='waymo', - logger=None, - pklfile_prefix=None, - submission_prefix=None, - show=False, - out_dir=None, - pipeline=None): - """Evaluation in KITTI protocol. - - Args: - results (list[dict]): Testing results of the dataset. - metric (str | list[str], optional): Metrics to be evaluated. - Default: 'waymo'. Another supported metric is 'kitti'. - logger (logging.Logger | str, optional): Logger used for printing - related information during evaluation. Default: None. - pklfile_prefix (str, optional): The prefix of pkl files including - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - submission_prefix (str, optional): The prefix of submission data. - If not specified, the submission data will not be generated. - show (bool, optional): Whether to visualize. - Default: False. - out_dir (str, optional): Path to save the visualization results. - Default: None. - pipeline (list[dict], optional): raw data loading for showing. - Default: None. 
- - Returns: - dict[str: float]: results of each evaluation metric - """ - assert ('waymo' in metric or 'kitti' in metric), \ - f'invalid metric {metric}' - if 'kitti' in metric: - result_files, tmp_dir = self.format_results( - results, - pklfile_prefix, - submission_prefix, - data_format='kitti') - from mmdet3d.core.evaluation import kitti_eval - gt_annos = [info['annos'] for info in self.data_infos] - - if isinstance(result_files, dict): - ap_dict = dict() - for name, result_files_ in result_files.items(): - eval_types = ['bev', '3d'] - ap_result_str, ap_dict_ = kitti_eval( - gt_annos, - result_files_, - self.CLASSES, - eval_types=eval_types) - for ap_type, ap in ap_dict_.items(): - ap_dict[f'{name}/{ap_type}'] = float( - '{:.4f}'.format(ap)) - - print_log( - f'Results of {name}:\n' + ap_result_str, logger=logger) - - else: - ap_result_str, ap_dict = kitti_eval( - gt_annos, - result_files, - self.CLASSES, - eval_types=['bev', '3d']) - print_log('\n' + ap_result_str, logger=logger) - if 'waymo' in metric: - waymo_root = osp.join( - self.data_root.split('kitti_format')[0], 'waymo_format') - if pklfile_prefix is None: - eval_tmp_dir = tempfile.TemporaryDirectory() - pklfile_prefix = osp.join(eval_tmp_dir.name, 'results') - else: - eval_tmp_dir = None - result_files, tmp_dir = self.format_results( - results, - pklfile_prefix, - submission_prefix, - data_format='waymo') - import subprocess - ret_bytes = subprocess.check_output( - 'mmdet3d/core/evaluation/waymo_utils/' + - f'compute_detection_metrics_main {pklfile_prefix}.bin ' + - f'{waymo_root}/gt.bin', - shell=True) - ret_texts = ret_bytes.decode('utf-8') - print_log(ret_texts) - # parse the text to get ap_dict - ap_dict = { - 'Vehicle/L1 mAP': 0, - 'Vehicle/L1 mAPH': 0, - 'Vehicle/L2 mAP': 0, - 'Vehicle/L2 mAPH': 0, - 'Pedestrian/L1 mAP': 0, - 'Pedestrian/L1 mAPH': 0, - 'Pedestrian/L2 mAP': 0, - 'Pedestrian/L2 mAPH': 0, - 'Sign/L1 mAP': 0, - 'Sign/L1 mAPH': 0, - 'Sign/L2 mAP': 0, - 'Sign/L2 mAPH': 0, - 'Cyclist/L1 mAP': 0, - 'Cyclist/L1 mAPH': 0, - 'Cyclist/L2 mAP': 0, - 'Cyclist/L2 mAPH': 0, - 'Overall/L1 mAP': 0, - 'Overall/L1 mAPH': 0, - 'Overall/L2 mAP': 0, - 'Overall/L2 mAPH': 0 - } - mAP_splits = ret_texts.split('mAP ') - mAPH_splits = ret_texts.split('mAPH ') - for idx, key in enumerate(ap_dict.keys()): - split_idx = int(idx / 2) + 1 - if idx % 2 == 0: # mAP - ap_dict[key] = float(mAP_splits[split_idx].split(']')[0]) - else: # mAPH - ap_dict[key] = float(mAPH_splits[split_idx].split(']')[0]) - ap_dict['Overall/L1 mAP'] = \ - (ap_dict['Vehicle/L1 mAP'] + ap_dict['Pedestrian/L1 mAP'] + - ap_dict['Cyclist/L1 mAP']) / 3 - ap_dict['Overall/L1 mAPH'] = \ - (ap_dict['Vehicle/L1 mAPH'] + ap_dict['Pedestrian/L1 mAPH'] + - ap_dict['Cyclist/L1 mAPH']) / 3 - ap_dict['Overall/L2 mAP'] = \ - (ap_dict['Vehicle/L2 mAP'] + ap_dict['Pedestrian/L2 mAP'] + - ap_dict['Cyclist/L2 mAP']) / 3 - ap_dict['Overall/L2 mAPH'] = \ - (ap_dict['Vehicle/L2 mAPH'] + ap_dict['Pedestrian/L2 mAPH'] + - ap_dict['Cyclist/L2 mAPH']) / 3 - if eval_tmp_dir is not None: - eval_tmp_dir.cleanup() - - if tmp_dir is not None: - tmp_dir.cleanup() - - if show or out_dir: - self.show(results, out_dir, show=show, pipeline=pipeline) - return ap_dict - - def bbox2result_kitti(self, - net_outputs, - class_names, - pklfile_prefix=None, - submission_prefix=None): - """Convert results to kitti format for evaluation and test submission. 
- - Args: - net_outputs (List[np.ndarray]): list of array storing the - bbox and score - class_nanes (List[String]): A list of class names - pklfile_prefix (str): The prefix of pkl file. - submission_prefix (str): The prefix of submission file. - - Returns: - List[dict]: A list of dict have the kitti 3d format - """ - assert len(net_outputs) == len(self.data_infos), \ - 'invalid list length of network outputs' - if submission_prefix is not None: - mmcv.mkdir_or_exist(submission_prefix) - - det_annos = [] - print('\nConverting prediction to KITTI format') - for idx, pred_dicts in enumerate( - mmcv.track_iter_progress(net_outputs)): - annos = [] - info = self.data_infos[idx] - sample_idx = info['image']['image_idx'] - image_shape = info['image']['image_shape'][:2] - - box_dict = self.convert_valid_bboxes(pred_dicts, info) - if len(box_dict['bbox']) > 0: - box_2d_preds = box_dict['bbox'] - box_preds = box_dict['box3d_camera'] - scores = box_dict['scores'] - box_preds_lidar = box_dict['box3d_lidar'] - label_preds = box_dict['label_preds'] - - anno = { - 'name': [], - 'truncated': [], - 'occluded': [], - 'alpha': [], - 'bbox': [], - 'dimensions': [], - 'location': [], - 'rotation_y': [], - 'score': [] - } - - for box, box_lidar, bbox, score, label in zip( - box_preds, box_preds_lidar, box_2d_preds, scores, - label_preds): - bbox[2:] = np.minimum(bbox[2:], image_shape[::-1]) - bbox[:2] = np.maximum(bbox[:2], [0, 0]) - anno['name'].append(class_names[int(label)]) - anno['truncated'].append(0.0) - anno['occluded'].append(0) - anno['alpha'].append( - -np.arctan2(-box_lidar[1], box_lidar[0]) + box[6]) - anno['bbox'].append(bbox) - anno['dimensions'].append(box[3:6]) - anno['location'].append(box[:3]) - anno['rotation_y'].append(box[6]) - anno['score'].append(score) - - anno = {k: np.stack(v) for k, v in anno.items()} - annos.append(anno) - - if submission_prefix is not None: - curr_file = f'{submission_prefix}/{sample_idx:07d}.txt' - with open(curr_file, 'w') as f: - bbox = anno['bbox'] - loc = anno['location'] - dims = anno['dimensions'] # lhw -> hwl - - for idx in range(len(bbox)): - print( - '{} -1 -1 {:.4f} {:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} ' - '{:.4f} {:.4f} {:.4f} {:.4f} {:.4f} {:.4f}'. - format(anno['name'][idx], anno['alpha'][idx], - bbox[idx][0], bbox[idx][1], - bbox[idx][2], bbox[idx][3], - dims[idx][1], dims[idx][2], - dims[idx][0], loc[idx][0], loc[idx][1], - loc[idx][2], anno['rotation_y'][idx], - anno['score'][idx]), - file=f) - else: - annos.append({ - 'name': np.array([]), - 'truncated': np.array([]), - 'occluded': np.array([]), - 'alpha': np.array([]), - 'bbox': np.zeros([0, 4]), - 'dimensions': np.zeros([0, 3]), - 'location': np.zeros([0, 3]), - 'rotation_y': np.array([]), - 'score': np.array([]), - }) - annos[-1]['sample_idx'] = np.array( - [sample_idx] * len(annos[-1]['score']), dtype=np.int64) - - det_annos += annos - - if pklfile_prefix is not None: - if not pklfile_prefix.endswith(('.pkl', '.pickle')): - out = f'{pklfile_prefix}.pkl' - mmcv.dump(det_annos, out) - print(f'Result is saved to {out}.') - - return det_annos - - def convert_valid_bboxes(self, box_dict, info): - """Convert the boxes into valid format. - - Args: - box_dict (dict): Bounding boxes to be converted. - - - boxes_3d (:obj:``LiDARInstance3DBoxes``): 3D bounding boxes. - - scores_3d (np.ndarray): Scores of predicted boxes. - - labels_3d (np.ndarray): Class labels of predicted boxes. - info (dict): Dataset information dictionary. - - Returns: - dict: Valid boxes after conversion. 
- - - bbox (np.ndarray): 2D bounding boxes (in camera 0). - - box3d_camera (np.ndarray): 3D boxes in camera coordinates. - - box3d_lidar (np.ndarray): 3D boxes in lidar coordinates. - - scores (np.ndarray): Scores of predicted boxes. - - label_preds (np.ndarray): Class labels of predicted boxes. - - sample_idx (np.ndarray): Sample index. - """ - # TODO: refactor this function - box_preds = box_dict['boxes_3d'] - scores = box_dict['scores_3d'] - labels = box_dict['labels_3d'] - sample_idx = info['image']['image_idx'] - box_preds.limit_yaw(offset=0.5, period=np.pi * 2) - - if len(box_preds) == 0: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - box3d_lidar=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx) - - rect = info['calib']['R0_rect'].astype(np.float32) - Trv2c = info['calib']['Tr_velo_to_cam'].astype(np.float32) - P0 = info['calib']['P0'].astype(np.float32) - P0 = box_preds.tensor.new_tensor(P0) - - box_preds_camera = box_preds.convert_to(Box3DMode.CAM, rect @ Trv2c) - - box_corners = box_preds_camera.corners - box_corners_in_image = points_cam2img(box_corners, P0) - # box_corners_in_image: [N, 8, 2] - minxy = torch.min(box_corners_in_image, dim=1)[0] - maxxy = torch.max(box_corners_in_image, dim=1)[0] - box_2d_preds = torch.cat([minxy, maxxy], dim=1) - # Post-processing - # check box_preds - limit_range = box_preds.tensor.new_tensor(self.pcd_limit_range) - valid_pcd_inds = ((box_preds.center > limit_range[:3]) & - (box_preds.center < limit_range[3:])) - valid_inds = valid_pcd_inds.all(-1) - - if valid_inds.sum() > 0: - return dict( - bbox=box_2d_preds[valid_inds, :].numpy(), - box3d_camera=box_preds_camera[valid_inds].tensor.numpy(), - box3d_lidar=box_preds[valid_inds].tensor.numpy(), - scores=scores[valid_inds].numpy(), - label_preds=labels[valid_inds].numpy(), - sample_idx=sample_idx, - ) - else: - return dict( - bbox=np.zeros([0, 4]), - box3d_camera=np.zeros([0, 7]), - box3d_lidar=np.zeros([0, 7]), - scores=np.zeros([0]), - label_preds=np.zeros([0, 4]), - sample_idx=sample_idx, - ) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/__init__.py deleted file mode 100644 index e80b7ad5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/__init__.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from .backbones import * # noqa: F401,F403 -from .builder import (BACKBONES, DETECTORS, FUSION_LAYERS, HEADS, LOSSES, - MIDDLE_ENCODERS, NECKS, ROI_EXTRACTORS, SEGMENTORS, - SHARED_HEADS, VOXEL_ENCODERS, build_backbone, - build_detector, build_fusion_layer, build_head, - build_loss, build_middle_encoder, build_model, - build_neck, build_roi_extractor, build_shared_head, - build_voxel_encoder) -from .decode_heads import * # noqa: F401,F403 -from .dense_heads import * # noqa: F401,F403 -from .detectors import * # noqa: F401,F403 -from .fusion_layers import * # noqa: F401,F403 -from .losses import * # noqa: F401,F403 -from .middle_encoders import * # noqa: F401,F403 -from .model_utils import * # noqa: F401,F403 -from .necks import * # noqa: F401,F403 -from .roi_heads import * # noqa: F401,F403 -from .segmentors import * # noqa: F401,F403 -from .voxel_encoders import * # noqa: F401,F403 - -__all__ = [ - 'BACKBONES', 'NECKS', 'ROI_EXTRACTORS', 'SHARED_HEADS', 'HEADS', 'LOSSES', - 'DETECTORS', 'SEGMENTORS', 'VOXEL_ENCODERS', 'MIDDLE_ENCODERS', - 'FUSION_LAYERS', 'build_backbone', 'build_neck', 'build_roi_extractor', - 'build_shared_head', 'build_head', 'build_loss', 'build_detector', - 'build_fusion_layer', 'build_model', 'build_middle_encoder', - 'build_voxel_encoder' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/__init__.py deleted file mode 100644 index 31d9d828..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.backbones import SSDVGG, HRNet, ResNet, ResNetV1d, ResNeXt -from .dgcnn import DGCNNBackbone -from .dla import DLANet -from .mink_resnet import MinkResNet -from .multi_backbone import MultiBackbone -from .nostem_regnet import NoStemRegNet -from .pointnet2_sa_msg import PointNet2SAMSG -from .pointnet2_sa_ssg import PointNet2SASSG -from .second import SECOND - -__all__ = [ - 'ResNet', 'ResNetV1d', 'ResNeXt', 'SSDVGG', 'HRNet', 'NoStemRegNet', - 'SECOND', 'DGCNNBackbone', 'PointNet2SASSG', 'PointNet2SAMSG', - 'MultiBackbone', 'DLANet', 'MinkResNet' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/base_pointnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/base_pointnet.py deleted file mode 100644 index c5098969..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/base_pointnet.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings -from abc import ABCMeta - -from mmcv.runner import BaseModule - - -class BasePointNet(BaseModule, metaclass=ABCMeta): - """Base class for PointNet.""" - - def __init__(self, init_cfg=None, pretrained=None): - super(BasePointNet, self).__init__(init_cfg) - self.fp16_enabled = False - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - @staticmethod - def _split_point_feats(points): - """Split coordinates and features of input points. - - Args: - points (torch.Tensor): Point coordinates with features, - with shape (B, N, 3 + input_feature_dim). - - Returns: - torch.Tensor: Coordinates of input points. - torch.Tensor: Features of input points. - """ - xyz = points[..., 0:3].contiguous() - if points.size(-1) > 3: - features = points[..., 3:].transpose(1, 2).contiguous() - else: - features = None - - return xyz, features diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/dgcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/dgcnn.py deleted file mode 100644 index f1047afb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/dgcnn.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner import BaseModule, auto_fp16 -from torch import nn as nn - -from mmdet3d.ops import DGCNNFAModule, DGCNNGFModule -from ..builder import BACKBONES - - -@BACKBONES.register_module() -class DGCNNBackbone(BaseModule): - """Backbone network for DGCNN. - - Args: - in_channels (int): Input channels of point cloud. - num_samples (tuple[int], optional): The number of samples for knn or - ball query in each graph feature (GF) module. - Defaults to (20, 20, 20). - knn_modes (tuple[str], optional): Mode of KNN of each knn module. - Defaults to ('D-KNN', 'F-KNN', 'F-KNN'). - radius (tuple[float], optional): Sampling radii of each GF module. - Defaults to (None, None, None). - gf_channels (tuple[tuple[int]], optional): Out channels of each mlp in - GF module. Defaults to ((64, 64), (64, 64), (64, )). - fa_channels (tuple[int], optional): Out channels of each mlp in FA - module. Defaults to (1024, ). - act_cfg (dict, optional): Config of activation layer. - Defaults to dict(type='ReLU'). - init_cfg (dict, optional): Initialization config. - Defaults to None. - """ - - def __init__(self, - in_channels, - num_samples=(20, 20, 20), - knn_modes=('D-KNN', 'F-KNN', 'F-KNN'), - radius=(None, None, None), - gf_channels=((64, 64), (64, 64), (64, )), - fa_channels=(1024, ), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.num_gf = len(gf_channels) - - assert len(num_samples) == len(knn_modes) == len(radius) == len( - gf_channels), 'Num_samples, knn_modes, radius and gf_channels \ - should have the same length.' 
- - self.GF_modules = nn.ModuleList() - gf_in_channel = in_channels * 2 - skip_channel_list = [gf_in_channel] # input channel list - - for gf_index in range(self.num_gf): - cur_gf_mlps = list(gf_channels[gf_index]) - cur_gf_mlps = [gf_in_channel] + cur_gf_mlps - gf_out_channel = cur_gf_mlps[-1] - - self.GF_modules.append( - DGCNNGFModule( - mlp_channels=cur_gf_mlps, - num_sample=num_samples[gf_index], - knn_mode=knn_modes[gf_index], - radius=radius[gf_index], - act_cfg=act_cfg)) - skip_channel_list.append(gf_out_channel) - gf_in_channel = gf_out_channel * 2 - - fa_in_channel = sum(skip_channel_list[1:]) - cur_fa_mlps = list(fa_channels) - cur_fa_mlps = [fa_in_channel] + cur_fa_mlps - - self.FA_module = DGCNNFAModule( - mlp_channels=cur_fa_mlps, act_cfg=act_cfg) - - @auto_fp16(apply_to=('points', )) - def forward(self, points): - """Forward pass. - - Args: - points (torch.Tensor): point coordinates with features, - with shape (B, N, in_channels). - - Returns: - dict[str, list[torch.Tensor]]: Outputs after graph feature (GF) and - feature aggregation (FA) modules. - - - gf_points (list[torch.Tensor]): Outputs after each GF module. - - fa_points (torch.Tensor): Outputs after FA module. - """ - gf_points = [points] - - for i in range(self.num_gf): - cur_points = self.GF_modules[i](gf_points[i]) - gf_points.append(cur_points) - - fa_points = self.FA_module(gf_points) - - out = dict(gf_points=gf_points, fa_points=fa_points) - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/dla.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/dla.py deleted file mode 100644 index 001def49..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/dla.py +++ /dev/null @@ -1,448 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule -from torch import nn - -from ..builder import BACKBONES - - -def dla_build_norm_layer(cfg, num_features): - """Build normalization layer specially designed for DLANet. - - Args: - cfg (dict): The norm layer config, which should contain: - - - type (str): Layer type. - - layer args: Args needed to instantiate a norm layer. - - requires_grad (bool, optional): Whether stop gradient updates. - num_features (int): Number of input channels. - - - Returns: - Function: Build normalization layer in mmcv. - """ - cfg_ = cfg.copy() - if cfg_['type'] == 'GN': - if num_features % 32 == 0: - return build_norm_layer(cfg_, num_features) - else: - assert 'num_groups' in cfg_ - cfg_['num_groups'] = cfg_['num_groups'] // 2 - return build_norm_layer(cfg_, num_features) - else: - return build_norm_layer(cfg_, num_features) - - -class BasicBlock(BaseModule): - """BasicBlock in DLANet. - - Args: - in_channels (int): Input feature channel. - out_channels (int): Output feature channel. - norm_cfg (dict): Dictionary to construct and config - norm layer. - conv_cfg (dict): Dictionary to construct and config - conv layer. - stride (int, optional): Conv stride. Default: 1. - dilation (int, optional): Conv dilation. Default: 1. - init_cfg (dict, optional): Initialization config. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - norm_cfg, - conv_cfg, - stride=1, - dilation=1, - init_cfg=None): - super(BasicBlock, self).__init__(init_cfg) - self.conv1 = build_conv_layer( - conv_cfg, - in_channels, - out_channels, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.norm1 = dla_build_norm_layer(norm_cfg, out_channels)[1] - self.relu = nn.ReLU(inplace=True) - self.conv2 = build_conv_layer( - conv_cfg, - out_channels, - out_channels, - 3, - stride=1, - padding=dilation, - dilation=dilation, - bias=False) - self.norm2 = dla_build_norm_layer(norm_cfg, out_channels)[1] - self.stride = stride - - def forward(self, x, identity=None): - """Forward function.""" - - if identity is None: - identity = x - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - out = self.conv2(out) - out = self.norm2(out) - out += identity - out = self.relu(out) - - return out - - -class Root(BaseModule): - """Root in DLANet. - - Args: - in_channels (int): Input feature channel. - out_channels (int): Output feature channel. - norm_cfg (dict): Dictionary to construct and config - norm layer. - conv_cfg (dict): Dictionary to construct and config - conv layer. - kernel_size (int): Size of convolution kernel. - add_identity (bool): Whether to add identity in root. - init_cfg (dict, optional): Initialization config. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - norm_cfg, - conv_cfg, - kernel_size, - add_identity, - init_cfg=None): - super(Root, self).__init__(init_cfg) - self.conv = build_conv_layer( - conv_cfg, - in_channels, - out_channels, - 1, - stride=1, - padding=(kernel_size - 1) // 2, - bias=False) - self.norm = dla_build_norm_layer(norm_cfg, out_channels)[1] - self.relu = nn.ReLU(inplace=True) - self.add_identity = add_identity - - def forward(self, feat_list): - """Forward function. - - Args: - feat_list (list[torch.Tensor]): Output features from - multiple layers. - """ - children = feat_list - x = self.conv(torch.cat(feat_list, 1)) - x = self.norm(x) - if self.add_identity: - x += children[0] - x = self.relu(x) - - return x - - -class Tree(BaseModule): - """Tree in DLANet. - - Args: - levels (int): The level of the tree. - block (nn.Module): The block module in tree. - in_channels: Input feature channel. - out_channels: Output feature channel. - norm_cfg (dict): Dictionary to construct and config - norm layer. - conv_cfg (dict): Dictionary to construct and config - conv layer. - stride (int, optional): Convolution stride. - Default: 1. - level_root (bool, optional): whether belongs to the - root layer. - root_dim (int, optional): Root input feature channel. - root_kernel_size (int, optional): Size of root - convolution kernel. Default: 1. - dilation (int, optional): Conv dilation. Default: 1. - add_identity (bool, optional): Whether to add - identity in root. Default: False. - init_cfg (dict, optional): Initialization config. - Default: None. 
- """ - - def __init__(self, - levels, - block, - in_channels, - out_channels, - norm_cfg, - conv_cfg, - stride=1, - level_root=False, - root_dim=None, - root_kernel_size=1, - dilation=1, - add_identity=False, - init_cfg=None): - super(Tree, self).__init__(init_cfg) - if root_dim is None: - root_dim = 2 * out_channels - if level_root: - root_dim += in_channels - if levels == 1: - self.root = Root(root_dim, out_channels, norm_cfg, conv_cfg, - root_kernel_size, add_identity) - self.tree1 = block( - in_channels, - out_channels, - norm_cfg, - conv_cfg, - stride, - dilation=dilation) - self.tree2 = block( - out_channels, - out_channels, - norm_cfg, - conv_cfg, - 1, - dilation=dilation) - else: - self.tree1 = Tree( - levels - 1, - block, - in_channels, - out_channels, - norm_cfg, - conv_cfg, - stride, - root_dim=None, - root_kernel_size=root_kernel_size, - dilation=dilation, - add_identity=add_identity) - self.tree2 = Tree( - levels - 1, - block, - out_channels, - out_channels, - norm_cfg, - conv_cfg, - root_dim=root_dim + out_channels, - root_kernel_size=root_kernel_size, - dilation=dilation, - add_identity=add_identity) - self.level_root = level_root - self.root_dim = root_dim - self.downsample = None - self.project = None - self.levels = levels - if stride > 1: - self.downsample = nn.MaxPool2d(stride, stride=stride) - if in_channels != out_channels: - self.project = nn.Sequential( - build_conv_layer( - conv_cfg, - in_channels, - out_channels, - 1, - stride=1, - bias=False), - dla_build_norm_layer(norm_cfg, out_channels)[1]) - - def forward(self, x, identity=None, children=None): - children = [] if children is None else children - bottom = self.downsample(x) if self.downsample else x - identity = self.project(bottom) if self.project else bottom - if self.level_root: - children.append(bottom) - x1 = self.tree1(x, identity) - if self.levels == 1: - x2 = self.tree2(x1) - feat_list = [x2, x1] + children - x = self.root(feat_list) - else: - children.append(x1) - x = self.tree2(x1, children=children) - return x - - -@BACKBONES.register_module() -class DLANet(BaseModule): - r"""`DLA backbone `_. - - Args: - depth (int): Depth of DLA. Default: 34. - in_channels (int, optional): Number of input image channels. - Default: 3. - norm_cfg (dict, optional): Dictionary to construct and config - norm layer. Default: None. - conv_cfg (dict, optional): Dictionary to construct and config - conv layer. Default: None. - layer_with_level_root (list[bool], optional): Whether to apply - level_root in each DLA layer, this is only used for - tree levels. Default: (False, True, True, True). - with_identity_root (bool, optional): Whether to add identity - in root layer. Default: False. - pretrained (str, optional): model pretrained path. - Default: None. - init_cfg (dict or list[dict], optional): Initialization - config dict. 
Default: None - """ - arch_settings = { - 34: (BasicBlock, (1, 1, 1, 2, 2, 1), (16, 32, 64, 128, 256, 512)), - } - - def __init__(self, - depth, - in_channels=3, - out_indices=(0, 1, 2, 3, 4, 5), - frozen_stages=-1, - norm_cfg=None, - conv_cfg=None, - layer_with_level_root=(False, True, True, True), - with_identity_root=False, - pretrained=None, - init_cfg=None): - super(DLANet, self).__init__(init_cfg) - if depth not in self.arch_settings: - raise KeyError(f'invalida depth {depth} for DLA') - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - - block, levels, channels = self.arch_settings[depth] - self.channels = channels - self.num_levels = len(levels) - self.frozen_stages = frozen_stages - self.out_indices = out_indices - assert max(out_indices) < self.num_levels - self.base_layer = nn.Sequential( - build_conv_layer( - conv_cfg, - in_channels, - channels[0], - 7, - stride=1, - padding=3, - bias=False), - dla_build_norm_layer(norm_cfg, channels[0])[1], - nn.ReLU(inplace=True)) - - # DLANet first uses two conv layers then uses several - # Tree layers - for i in range(2): - level_layer = self._make_conv_level( - channels[0], - channels[i], - levels[i], - norm_cfg, - conv_cfg, - stride=i + 1) - layer_name = f'level{i}' - self.add_module(layer_name, level_layer) - - for i in range(2, self.num_levels): - dla_layer = Tree( - levels[i], - block, - channels[i - 1], - channels[i], - norm_cfg, - conv_cfg, - 2, - level_root=layer_with_level_root[i - 2], - add_identity=with_identity_root) - layer_name = f'level{i}' - self.add_module(layer_name, dla_layer) - - self._freeze_stages() - - def _make_conv_level(self, - in_channels, - out_channels, - num_convs, - norm_cfg, - conv_cfg, - stride=1, - dilation=1): - """Conv modules. - - Args: - in_channels (int): Input feature channel. - out_channels (int): Output feature channel. - num_convs (int): Number of Conv module. - norm_cfg (dict): Dictionary to construct and config - norm layer. - conv_cfg (dict): Dictionary to construct and config - conv layer. - stride (int, optional): Conv stride. Default: 1. - dilation (int, optional): Conv dilation. Default: 1. 
- """ - modules = [] - for i in range(num_convs): - modules.extend([ - build_conv_layer( - conv_cfg, - in_channels, - out_channels, - 3, - stride=stride if i == 0 else 1, - padding=dilation, - bias=False, - dilation=dilation), - dla_build_norm_layer(norm_cfg, out_channels)[1], - nn.ReLU(inplace=True) - ]) - in_channels = out_channels - return nn.Sequential(*modules) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.base_layer.eval() - for param in self.base_layer.parameters(): - param.requires_grad = False - - for i in range(2): - m = getattr(self, f'level{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'level{i+1}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def forward(self, x): - outs = [] - x = self.base_layer(x) - for i in range(self.num_levels): - x = getattr(self, 'level{}'.format(i))(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/mink_resnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/mink_resnet.py deleted file mode 100644 index 2af9bd32..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/mink_resnet.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -# Follow https://github.com/NVIDIA/MinkowskiEngine/blob/master/examples/resnet.py # noqa -# and mmcv.cnn.ResNet -try: - import MinkowskiEngine as ME - from MinkowskiEngine.modules.resnet_block import BasicBlock, Bottleneck -except ImportError: - import warnings - warnings.warn( - 'Please follow `getting_started.md` to install MinkowskiEngine.`') - # blocks are used in the static part of MinkResNet - BasicBlock, Bottleneck = None, None - -import torch.nn as nn - -from mmdet3d.models.builder import BACKBONES - - -@BACKBONES.register_module() -class MinkResNet(nn.Module): - r"""Minkowski ResNet backbone. See `4D Spatio-Temporal ConvNets - `_ for more details. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (ont): Number of input channels, 3 for RGB. - num_stages (int, optional): Resnet stages. Default: 4. - pool (bool, optional): Add max pooling after first conv if True. - Default: True. - """ - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, depth, in_channels, num_stages=4, pool=True): - super(MinkResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - assert 4 >= num_stages >= 1 - block, stage_blocks = self.arch_settings[depth] - stage_blocks = stage_blocks[:num_stages] - self.num_stages = num_stages - self.pool = pool - - self.inplanes = 64 - self.conv1 = ME.MinkowskiConvolution( - in_channels, self.inplanes, kernel_size=3, stride=2, dimension=3) - # May be BatchNorm is better, but we follow original implementation. 
- self.norm1 = ME.MinkowskiInstanceNorm(self.inplanes) - self.relu = ME.MinkowskiReLU(inplace=True) - if self.pool: - self.maxpool = ME.MinkowskiMaxPooling( - kernel_size=2, stride=2, dimension=3) - - for i, num_blocks in enumerate(stage_blocks): - setattr( - self, f'layer{i}', - self._make_layer(block, 64 * 2**i, stage_blocks[i], stride=2)) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, ME.MinkowskiConvolution): - ME.utils.kaiming_normal_( - m.kernel, mode='fan_out', nonlinearity='relu') - - if isinstance(m, ME.MinkowskiBatchNorm): - nn.init.constant_(m.bn.weight, 1) - nn.init.constant_(m.bn.bias, 0) - - def _make_layer(self, block, planes, blocks, stride): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - ME.MinkowskiConvolution( - self.inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - dimension=3), - ME.MinkowskiBatchNorm(planes * block.expansion)) - layers = [] - layers.append( - block( - self.inplanes, - planes, - stride=stride, - downsample=downsample, - dimension=3)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, stride=1, dimension=3)) - return nn.Sequential(*layers) - - def forward(self, x): - """Forward pass of ResNet. - - Args: - x (ME.SparseTensor): Input sparse tensor. - - Returns: - list[ME.SparseTensor]: Output sparse tensors. - """ - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - if self.pool: - x = self.maxpool(x) - outs = [] - for i in range(self.num_stages): - x = getattr(self, f'layer{i}')(x) - outs.append(x) - return outs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/multi_backbone.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/multi_backbone.py deleted file mode 100644 index a962fa35..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/multi_backbone.py +++ /dev/null @@ -1,129 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import torch -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16 -from torch import nn as nn - -from ..builder import BACKBONES, build_backbone - - -@BACKBONES.register_module() -class MultiBackbone(BaseModule): - """MultiBackbone with different configs. - - Args: - num_streams (int): The number of backbones. - backbones (list or dict): A list of backbone configs. - aggregation_mlp_channels (list[int]): Specify the mlp layers - for feature aggregation. - conv_cfg (dict): Config dict of convolutional layers. - norm_cfg (dict): Config dict of normalization layers. - act_cfg (dict): Config dict of activation layers. - suffixes (list): A list of suffixes to rename the return dict - for each backbone. 
- """ - - def __init__(self, - num_streams, - backbones, - aggregation_mlp_channels=None, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-5, momentum=0.01), - act_cfg=dict(type='ReLU'), - suffixes=('net0', 'net1'), - init_cfg=None, - pretrained=None, - **kwargs): - super().__init__(init_cfg=init_cfg) - assert isinstance(backbones, dict) or isinstance(backbones, list) - if isinstance(backbones, dict): - backbones_list = [] - for ind in range(num_streams): - backbones_list.append(copy.deepcopy(backbones)) - backbones = backbones_list - - assert len(backbones) == num_streams - assert len(suffixes) == num_streams - - self.backbone_list = nn.ModuleList() - # Rename the ret_dict with different suffixs. - self.suffixes = suffixes - - out_channels = 0 - - for backbone_cfg in backbones: - out_channels += backbone_cfg['fp_channels'][-1][-1] - self.backbone_list.append(build_backbone(backbone_cfg)) - - # Feature aggregation layers - if aggregation_mlp_channels is None: - aggregation_mlp_channels = [ - out_channels, out_channels // 2, - out_channels // len(self.backbone_list) - ] - else: - aggregation_mlp_channels.insert(0, out_channels) - - self.aggregation_layers = nn.Sequential() - for i in range(len(aggregation_mlp_channels) - 1): - self.aggregation_layers.add_module( - f'layer{i}', - ConvModule( - aggregation_mlp_channels[i], - aggregation_mlp_channels[i + 1], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=True, - inplace=True)) - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - @auto_fp16() - def forward(self, points): - """Forward pass. - - Args: - points (torch.Tensor): point coordinates with features, - with shape (B, N, 3 + input_feature_dim). - - Returns: - dict[str, list[torch.Tensor]]: Outputs from multiple backbones. - - - fp_xyz[suffix] (list[torch.Tensor]): The coordinates of - each fp features. - - fp_features[suffix] (list[torch.Tensor]): The features - from each Feature Propagate Layers. - - fp_indices[suffix] (list[torch.Tensor]): Indices of the - input points. - - hd_feature (torch.Tensor): The aggregation feature - from multiple backbones. - """ - ret = {} - fp_features = [] - for ind in range(len(self.backbone_list)): - cur_ret = self.backbone_list[ind](points) - cur_suffix = self.suffixes[ind] - fp_features.append(cur_ret['fp_features'][-1]) - if cur_suffix != '': - for k in cur_ret.keys(): - cur_ret[k + '_' + cur_suffix] = cur_ret.pop(k) - ret.update(cur_ret) - - # Combine the features here - hd_feature = torch.cat(fp_features, dim=1) - hd_feature = self.aggregation_layers(hd_feature) - ret['hd_feature'] = hd_feature - return ret diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/nostem_regnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/nostem_regnet.py deleted file mode 100644 index 439a57c7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/nostem_regnet.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.models.backbones import RegNet -from ..builder import BACKBONES - - -@BACKBONES.register_module() -class NoStemRegNet(RegNet): - """RegNet backbone without Stem for 3D detection. - - More details can be found in `paper `_ . - - Args: - arch (dict): The parameter of RegNets. - - w0 (int): Initial width. - - wa (float): Slope of width. - - wm (float): Quantization parameter to quantize the width. - - depth (int): Depth of the backbone. - - group_w (int): Width of group. - - bot_mul (float): Bottleneck ratio, i.e. expansion of bottleneck. - strides (Sequence[int]): Strides of the first block of each stage. - base_channels (int): Base channels after stem layer. - in_channels (int): Number of input image channels. Normally 3. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from mmdet3d.models import NoStemRegNet - >>> import torch - >>> self = NoStemRegNet( - arch=dict( - w0=88, - wa=26.31, - wm=2.25, - group_w=48, - depth=25, - bot_mul=1.0)) - >>> self.eval() - >>> inputs = torch.rand(1, 64, 16, 16) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 96, 8, 8) - (1, 192, 4, 4) - (1, 432, 2, 2) - (1, 1008, 1, 1) - """ - - def __init__(self, arch, init_cfg=None, **kwargs): - super(NoStemRegNet, self).__init__(arch, init_cfg=init_cfg, **kwargs) - - def _make_stem_layer(self, in_channels, base_channels): - """Override the original function that do not initialize a stem layer - since 3D detector's voxel encoder works like a stem layer.""" - return - - def forward(self, x): - """Forward function of backbone. - - Args: - x (torch.Tensor): Features in shape (N, C, H, W). - - Returns: - tuple[torch.Tensor]: Multi-scale features. - """ - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/pointnet2_sa_msg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/pointnet2_sa_msg.py deleted file mode 100644 index 9f032c44..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/pointnet2_sa_msg.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from mmcv.cnn import ConvModule -from mmcv.runner import auto_fp16 -from torch import nn as nn - -from mmdet3d.ops import build_sa_module -from ..builder import BACKBONES -from .base_pointnet import BasePointNet - - -@BACKBONES.register_module() -class PointNet2SAMSG(BasePointNet): - """PointNet2 with Multi-scale grouping. - - Args: - in_channels (int): Input channels of point cloud. - num_points (tuple[int]): The number of points which each SA - module samples. - radii (tuple[float]): Sampling radii of each SA module. - num_samples (tuple[int]): The number of samples for ball - query in each SA module. - sa_channels (tuple[tuple[int]]): Out channels of each mlp in SA module. - aggregation_channels (tuple[int]): Out channels of aggregation - multi-scale grouping features. - fps_mods (tuple[int]): Mod of FPS for each SA module. - fps_sample_range_lists (tuple[tuple[int]]): The number of sampling - points which each SA module samples. - dilated_group (tuple[bool]): Whether to use dilated ball query for - out_indices (Sequence[int]): Output from which stages. - norm_cfg (dict): Config of normalization layer. - sa_cfg (dict): Config of set abstraction module, which may contain - the following keys and values: - - - pool_mod (str): Pool method ('max' or 'avg') for SA modules. - - use_xyz (bool): Whether to use xyz as a part of features. - - normalize_xyz (bool): Whether to normalize xyz with radii in - each SA module. - """ - - def __init__(self, - in_channels, - num_points=(2048, 1024, 512, 256), - radii=((0.2, 0.4, 0.8), (0.4, 0.8, 1.6), (1.6, 3.2, 4.8)), - num_samples=((32, 32, 64), (32, 32, 64), (32, 32, 32)), - sa_channels=(((16, 16, 32), (16, 16, 32), (32, 32, 64)), - ((64, 64, 128), (64, 64, 128), (64, 96, 128)), - ((128, 128, 256), (128, 192, 256), (128, 256, - 256))), - aggregation_channels=(64, 128, 256), - fps_mods=(('D-FPS'), ('FS'), ('F-FPS', 'D-FPS')), - fps_sample_range_lists=((-1), (-1), (512, -1)), - dilated_group=(True, True, True), - out_indices=(2, ), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModuleMSG', - pool_mod='max', - use_xyz=True, - normalize_xyz=False), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.num_sa = len(sa_channels) - self.out_indices = out_indices - assert max(out_indices) < self.num_sa - assert len(num_points) == len(radii) == len(num_samples) == len( - sa_channels) - if aggregation_channels is not None: - assert len(sa_channels) == len(aggregation_channels) - else: - aggregation_channels = [None] * len(sa_channels) - - self.SA_modules = nn.ModuleList() - self.aggregation_mlps = nn.ModuleList() - sa_in_channel = in_channels - 3 # number of channels without xyz - skip_channel_list = [sa_in_channel] - - for sa_index in range(self.num_sa): - cur_sa_mlps = list(sa_channels[sa_index]) - sa_out_channel = 0 - for radius_index in range(len(radii[sa_index])): - cur_sa_mlps[radius_index] = [sa_in_channel] + list( - cur_sa_mlps[radius_index]) - sa_out_channel += cur_sa_mlps[radius_index][-1] - - if isinstance(fps_mods[sa_index], tuple): - cur_fps_mod = list(fps_mods[sa_index]) - else: - cur_fps_mod = list([fps_mods[sa_index]]) - - if isinstance(fps_sample_range_lists[sa_index], tuple): - cur_fps_sample_range_list = list( - fps_sample_range_lists[sa_index]) - else: - cur_fps_sample_range_list = list( - [fps_sample_range_lists[sa_index]]) - - self.SA_modules.append( - build_sa_module( - num_point=num_points[sa_index], - radii=radii[sa_index], - sample_nums=num_samples[sa_index], - mlp_channels=cur_sa_mlps, - 
fps_mod=cur_fps_mod, - fps_sample_range_list=cur_fps_sample_range_list, - dilated_group=dilated_group[sa_index], - norm_cfg=norm_cfg, - cfg=sa_cfg, - bias=True)) - skip_channel_list.append(sa_out_channel) - - cur_aggregation_channel = aggregation_channels[sa_index] - if cur_aggregation_channel is None: - self.aggregation_mlps.append(None) - sa_in_channel = sa_out_channel - else: - self.aggregation_mlps.append( - ConvModule( - sa_out_channel, - cur_aggregation_channel, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - kernel_size=1, - bias=True)) - sa_in_channel = cur_aggregation_channel - - @auto_fp16(apply_to=('points', )) - def forward(self, points): - """Forward pass. - - Args: - points (torch.Tensor): point coordinates with features, - with shape (B, N, 3 + input_feature_dim). - - Returns: - dict[str, torch.Tensor]: Outputs of the last SA module. - - - sa_xyz (torch.Tensor): The coordinates of sa features. - - sa_features (torch.Tensor): The features from the - last Set Aggregation Layers. - - sa_indices (torch.Tensor): Indices of the - input points. - """ - xyz, features = self._split_point_feats(points) - - batch, num_points = xyz.shape[:2] - indices = xyz.new_tensor(range(num_points)).unsqueeze(0).repeat( - batch, 1).long() - - sa_xyz = [xyz] - sa_features = [features] - sa_indices = [indices] - - out_sa_xyz = [xyz] - out_sa_features = [features] - out_sa_indices = [indices] - - for i in range(self.num_sa): - cur_xyz, cur_features, cur_indices = self.SA_modules[i]( - sa_xyz[i], sa_features[i]) - if self.aggregation_mlps[i] is not None: - cur_features = self.aggregation_mlps[i](cur_features) - sa_xyz.append(cur_xyz) - sa_features.append(cur_features) - sa_indices.append( - torch.gather(sa_indices[-1], 1, cur_indices.long())) - if i in self.out_indices: - out_sa_xyz.append(sa_xyz[-1]) - out_sa_features.append(sa_features[-1]) - out_sa_indices.append(sa_indices[-1]) - - return dict( - sa_xyz=out_sa_xyz, - sa_features=out_sa_features, - sa_indices=out_sa_indices) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/pointnet2_sa_ssg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/pointnet2_sa_ssg.py deleted file mode 100644 index b894b7c0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/pointnet2_sa_ssg.py +++ /dev/null @@ -1,145 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import auto_fp16 -from torch import nn as nn - -from mmdet3d.ops import PointFPModule, build_sa_module -from ..builder import BACKBONES -from .base_pointnet import BasePointNet - - -@BACKBONES.register_module() -class PointNet2SASSG(BasePointNet): - """PointNet2 with Single-scale grouping. - - Args: - in_channels (int): Input channels of point cloud. - num_points (tuple[int]): The number of points which each SA - module samples. - radius (tuple[float]): Sampling radii of each SA module. - num_samples (tuple[int]): The number of samples for ball - query in each SA module. - sa_channels (tuple[tuple[int]]): Out channels of each mlp in SA module. - fp_channels (tuple[tuple[int]]): Out channels of each mlp in FP module. - norm_cfg (dict): Config of normalization layer. - sa_cfg (dict): Config of set abstraction module, which may contain - the following keys and values: - - - pool_mod (str): Pool method ('max' or 'avg') for SA modules. 
- - use_xyz (bool): Whether to use xyz as a part of features. - - normalize_xyz (bool): Whether to normalize xyz with radii in - each SA module. - """ - - def __init__(self, - in_channels, - num_points=(2048, 1024, 512, 256), - radius=(0.2, 0.4, 0.8, 1.2), - num_samples=(64, 32, 16, 16), - sa_channels=((64, 64, 128), (128, 128, 256), (128, 128, 256), - (128, 128, 256)), - fp_channels=((256, 256), (256, 256)), - norm_cfg=dict(type='BN2d'), - sa_cfg=dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.num_sa = len(sa_channels) - self.num_fp = len(fp_channels) - - assert len(num_points) == len(radius) == len(num_samples) == len( - sa_channels) - assert len(sa_channels) >= len(fp_channels) - - self.SA_modules = nn.ModuleList() - sa_in_channel = in_channels - 3 # number of channels without xyz - skip_channel_list = [sa_in_channel] - - for sa_index in range(self.num_sa): - cur_sa_mlps = list(sa_channels[sa_index]) - cur_sa_mlps = [sa_in_channel] + cur_sa_mlps - sa_out_channel = cur_sa_mlps[-1] - - self.SA_modules.append( - build_sa_module( - num_point=num_points[sa_index], - radius=radius[sa_index], - num_sample=num_samples[sa_index], - mlp_channels=cur_sa_mlps, - norm_cfg=norm_cfg, - cfg=sa_cfg)) - skip_channel_list.append(sa_out_channel) - sa_in_channel = sa_out_channel - - self.FP_modules = nn.ModuleList() - - fp_source_channel = skip_channel_list.pop() - fp_target_channel = skip_channel_list.pop() - for fp_index in range(len(fp_channels)): - cur_fp_mlps = list(fp_channels[fp_index]) - cur_fp_mlps = [fp_source_channel + fp_target_channel] + cur_fp_mlps - self.FP_modules.append(PointFPModule(mlp_channels=cur_fp_mlps)) - if fp_index != len(fp_channels) - 1: - fp_source_channel = cur_fp_mlps[-1] - fp_target_channel = skip_channel_list.pop() - - @auto_fp16(apply_to=('points', )) - def forward(self, points): - """Forward pass. - - Args: - points (torch.Tensor): point coordinates with features, - with shape (B, N, 3 + input_feature_dim). - - Returns: - dict[str, list[torch.Tensor]]: Outputs after SA and FP modules. - - - fp_xyz (list[torch.Tensor]): The coordinates of - each fp features. - - fp_features (list[torch.Tensor]): The features - from each Feature Propagate Layers. - - fp_indices (list[torch.Tensor]): Indices of the - input points. 
- """ - xyz, features = self._split_point_feats(points) - - batch, num_points = xyz.shape[:2] - indices = xyz.new_tensor(range(num_points)).unsqueeze(0).repeat( - batch, 1).long() - - sa_xyz = [xyz] - sa_features = [features] - sa_indices = [indices] - - for i in range(self.num_sa): - cur_xyz, cur_features, cur_indices = self.SA_modules[i]( - sa_xyz[i], sa_features[i]) - sa_xyz.append(cur_xyz) - sa_features.append(cur_features) - sa_indices.append( - torch.gather(sa_indices[-1], 1, cur_indices.long())) - - fp_xyz = [sa_xyz[-1]] - fp_features = [sa_features[-1]] - fp_indices = [sa_indices[-1]] - - for i in range(self.num_fp): - fp_features.append(self.FP_modules[i]( - sa_xyz[self.num_sa - i - 1], sa_xyz[self.num_sa - i], - sa_features[self.num_sa - i - 1], fp_features[-1])) - fp_xyz.append(sa_xyz[self.num_sa - i - 1]) - fp_indices.append(sa_indices[self.num_sa - i - 1]) - - ret = dict( - fp_xyz=fp_xyz, - fp_features=fp_features, - fp_indices=fp_indices, - sa_xyz=sa_xyz, - sa_features=sa_features, - sa_indices=sa_indices) - return ret diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/second.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/second.py deleted file mode 100644 index 27ac6cf3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/backbones/second.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule -from torch import nn as nn - -from ..builder import BACKBONES - - -@BACKBONES.register_module() -class SECOND(BaseModule): - """Backbone network for SECOND/PointPillars/PartA2/MVXNet. - - Args: - in_channels (int): Input channels. - out_channels (list[int]): Output channels for multi-scale feature maps. - layer_nums (list[int]): Number of layers in each stage. - layer_strides (list[int]): Strides of each stage. - norm_cfg (dict): Config dict of normalization layers. - conv_cfg (dict): Config dict of convolutional layers. - """ - - def __init__(self, - in_channels=128, - out_channels=[128, 128, 256], - layer_nums=[3, 5, 5], - layer_strides=[2, 2, 2], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - conv_cfg=dict(type='Conv2d', bias=False), - init_cfg=None, - pretrained=None): - super(SECOND, self).__init__(init_cfg=init_cfg) - assert len(layer_strides) == len(layer_nums) - assert len(out_channels) == len(layer_nums) - - in_filters = [in_channels, *out_channels[:-1]] - # note that when stride > 1, conv2d with same padding isn't - # equal to pad-conv2d. we should use pad-conv2d. 
- blocks = [] - for i, layer_num in enumerate(layer_nums): - block = [ - build_conv_layer( - conv_cfg, - in_filters[i], - out_channels[i], - 3, - stride=layer_strides[i], - padding=1), - build_norm_layer(norm_cfg, out_channels[i])[1], - nn.ReLU(inplace=True), - ] - for j in range(layer_num): - block.append( - build_conv_layer( - conv_cfg, - out_channels[i], - out_channels[i], - 3, - padding=1)) - block.append(build_norm_layer(norm_cfg, out_channels[i])[1]) - block.append(nn.ReLU(inplace=True)) - - block = nn.Sequential(*block) - blocks.append(block) - - self.blocks = nn.ModuleList(blocks) - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - else: - self.init_cfg = dict(type='Kaiming', layer='Conv2d') - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): Input with shape (N, C, H, W). - - Returns: - tuple[torch.Tensor]: Multi-scale features. - """ - outs = [] - for i in range(len(self.blocks)): - x = self.blocks[i](x) - outs.append(x) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/builder.py deleted file mode 100644 index fb8b8c23..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/builder.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -from mmcv.cnn import MODELS as MMCV_MODELS -from mmcv.utils import Registry - -from mmdet.models.builder import BACKBONES as MMDET_BACKBONES -from mmdet.models.builder import DETECTORS as MMDET_DETECTORS -from mmdet.models.builder import HEADS as MMDET_HEADS -from mmdet.models.builder import LOSSES as MMDET_LOSSES -from mmdet.models.builder import NECKS as MMDET_NECKS -from mmdet.models.builder import ROI_EXTRACTORS as MMDET_ROI_EXTRACTORS -from mmdet.models.builder import SHARED_HEADS as MMDET_SHARED_HEADS -from mmseg.models.builder import LOSSES as MMSEG_LOSSES - -MODELS = Registry('models', parent=MMCV_MODELS) - -BACKBONES = MODELS -NECKS = MODELS -ROI_EXTRACTORS = MODELS -SHARED_HEADS = MODELS -HEADS = MODELS -LOSSES = MODELS -DETECTORS = MODELS -VOXEL_ENCODERS = MODELS -MIDDLE_ENCODERS = MODELS -FUSION_LAYERS = MODELS -SEGMENTORS = MODELS - - -def build_backbone(cfg): - """Build backbone.""" - if cfg['type'] in BACKBONES._module_dict.keys(): - return BACKBONES.build(cfg) - else: - return MMDET_BACKBONES.build(cfg) - - -def build_neck(cfg): - """Build neck.""" - if cfg['type'] in NECKS._module_dict.keys(): - return NECKS.build(cfg) - else: - return MMDET_NECKS.build(cfg) - - -def build_roi_extractor(cfg): - """Build RoI feature extractor.""" - if cfg['type'] in ROI_EXTRACTORS._module_dict.keys(): - return ROI_EXTRACTORS.build(cfg) - else: - return MMDET_ROI_EXTRACTORS.build(cfg) - - -def build_shared_head(cfg): - """Build shared head of detector.""" - if cfg['type'] in SHARED_HEADS._module_dict.keys(): - return SHARED_HEADS.build(cfg) - else: - return MMDET_SHARED_HEADS.build(cfg) - - -def build_head(cfg): - """Build head.""" - if cfg['type'] in HEADS._module_dict.keys(): - return HEADS.build(cfg) - else: - return MMDET_HEADS.build(cfg) - - -def build_loss(cfg): - """Build loss function.""" - if cfg['type'] in LOSSES._module_dict.keys(): - return LOSSES.build(cfg) - elif 
cfg['type'] in MMDET_LOSSES._module_dict.keys(): - return MMDET_LOSSES.build(cfg) - else: - return MMSEG_LOSSES.build(cfg) - - -def build_detector(cfg, train_cfg=None, test_cfg=None): - """Build detector.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - if cfg['type'] in DETECTORS._module_dict.keys(): - return DETECTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) - else: - return MMDET_DETECTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) - - -def build_segmentor(cfg, train_cfg=None, test_cfg=None): - """Build segmentor.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return SEGMENTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) - - -def build_model(cfg, train_cfg=None, test_cfg=None): - """A function warpper for building 3D detector or segmentor according to - cfg. - - Should be deprecated in the future. - """ - if cfg.type in ['EncoderDecoder3D']: - return build_segmentor(cfg, train_cfg=train_cfg, test_cfg=test_cfg) - else: - return build_detector(cfg, train_cfg=train_cfg, test_cfg=test_cfg) - - -def build_voxel_encoder(cfg): - """Build voxel encoder.""" - return VOXEL_ENCODERS.build(cfg) - - -def build_middle_encoder(cfg): - """Build middle level encoder.""" - return MIDDLE_ENCODERS.build(cfg) - - -def build_fusion_layer(cfg): - """Build fusion layer.""" - return FUSION_LAYERS.build(cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/__init__.py deleted file mode 100644 index c3c3cfe5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from .dgcnn_head import DGCNNHead -from .paconv_head import PAConvHead -from .pointnet2_head import PointNet2Head - -__all__ = ['PointNet2Head', 'DGCNNHead', 'PAConvHead'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/decode_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/decode_head.py deleted file mode 100644 index 774bee00..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/decode_head.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from abc import ABCMeta, abstractmethod - -from mmcv.cnn import normal_init -from mmcv.runner import BaseModule, auto_fp16, force_fp32 -from torch import nn as nn - -from mmseg.models.builder import build_loss - - -class Base3DDecodeHead(BaseModule, metaclass=ABCMeta): - """Base class for BaseDecodeHead. - - Args: - channels (int): Channels after modules, before conv_seg. - num_classes (int): Number of classes. - dropout_ratio (float, optional): Ratio of dropout layer. Default: 0.5. - conv_cfg (dict, optional): Config of conv layers. - Default: dict(type='Conv1d'). - norm_cfg (dict, optional): Config of norm layers. - Default: dict(type='BN1d'). - act_cfg (dict, optional): Config of activation layers. - Default: dict(type='ReLU'). - loss_decode (dict, optional): Config of decode loss. - Default: dict(type='CrossEntropyLoss'). - ignore_index (int, optional): The label index to be ignored. - When using masked BCE loss, ignore_index should be set to None. - Default: 255. - """ - - def __init__(self, - channels, - num_classes, - dropout_ratio=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - class_weight=None, - loss_weight=1.0), - ignore_index=255, - init_cfg=None): - super(Base3DDecodeHead, self).__init__(init_cfg=init_cfg) - self.channels = channels - self.num_classes = num_classes - self.dropout_ratio = dropout_ratio - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.loss_decode = build_loss(loss_decode) - self.ignore_index = ignore_index - - self.conv_seg = nn.Conv1d(channels, num_classes, kernel_size=1) - if dropout_ratio > 0: - self.dropout = nn.Dropout(dropout_ratio) - else: - self.dropout = None - self.fp16_enabled = False - - def init_weights(self): - """Initialize weights of classification layer.""" - super().init_weights() - normal_init(self.conv_seg, mean=0, std=0.01) - - @auto_fp16() - @abstractmethod - def forward(self, inputs): - """Placeholder of forward function.""" - pass - - def forward_train(self, inputs, img_metas, pts_semantic_mask, train_cfg): - """Forward function for training. - - Args: - inputs (list[torch.Tensor]): List of multi-level point features. - img_metas (list[dict]): Meta information of each sample. - pts_semantic_mask (torch.Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - seg_logits = self.forward(inputs) - losses = self.losses(seg_logits, pts_semantic_mask) - return losses - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level point features. - img_metas (list[dict]): Meta information of each sample. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. - """ - return self.forward(inputs) - - def cls_seg(self, feat): - """Classify each points.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.conv_seg(feat) - return output - - @force_fp32(apply_to=('seg_logit', )) - def losses(self, seg_logit, seg_label): - """Compute semantic segmentation loss. - - Args: - seg_logit (torch.Tensor): Predicted per-point segmentation logits - of shape [B, num_classes, N]. - seg_label (torch.Tensor): Ground-truth segmentation label of - shape [B, N]. 
- """ - loss = dict() - loss['loss_sem_seg'] = self.loss_decode( - seg_logit, seg_label, ignore_index=self.ignore_index) - return loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/dgcnn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/dgcnn_head.py deleted file mode 100644 index 6e78df57..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/dgcnn_head.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn.bricks import ConvModule - -from mmdet3d.ops import DGCNNFPModule -from ..builder import HEADS -from .decode_head import Base3DDecodeHead - - -@HEADS.register_module() -class DGCNNHead(Base3DDecodeHead): - r"""DGCNN decoder head. - - Decoder head used in `DGCNN `_. - Refer to the - `reimplementation code `_. - - Args: - fp_channels (tuple[int], optional): Tuple of mlp channels in feature - propagation (FP) modules. Defaults to (1216, 512). - """ - - def __init__(self, fp_channels=(1216, 512), **kwargs): - super(DGCNNHead, self).__init__(**kwargs) - - self.FP_module = DGCNNFPModule( - mlp_channels=fp_channels, act_cfg=self.act_cfg) - - # https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_sem_seg.py#L40 - self.pre_seg_conv = ConvModule( - fp_channels[-1], - self.channels, - kernel_size=1, - bias=False, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: points for decoder. - """ - fa_points = feat_dict['fa_points'] - - return fa_points - - def forward(self, feat_dict): - """Forward pass. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Segmentation map of shape [B, num_classes, N]. - """ - fa_points = self._extract_input(feat_dict) - - fp_points = self.FP_module(fa_points) - fp_points = fp_points.transpose(1, 2).contiguous() - output = self.pre_seg_conv(fp_points) - output = self.cls_seg(output) - - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/paconv_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/paconv_head.py deleted file mode 100644 index 8cd98567..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/paconv_head.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn.bricks import ConvModule - -from ..builder import HEADS -from .pointnet2_head import PointNet2Head - - -@HEADS.register_module() -class PAConvHead(PointNet2Head): - r"""PAConv decoder head. - - Decoder head used in `PAConv `_. - Refer to the `official code `_. - - Args: - fp_channels (tuple[tuple[int]]): Tuple of mlp channels in FP modules. - fp_norm_cfg (dict): Config of norm layers used in FP modules. - Default: dict(type='BN2d'). 
- """ - - def __init__(self, - fp_channels=((768, 256, 256), (384, 256, 256), - (320, 256, 128), (128 + 6, 128, 128, 128)), - fp_norm_cfg=dict(type='BN2d'), - **kwargs): - super(PAConvHead, self).__init__(fp_channels, fp_norm_cfg, **kwargs) - - # https://github.com/CVMI-Lab/PAConv/blob/main/scene_seg/model/pointnet2/pointnet2_paconv_seg.py#L53 - # PointNet++'s decoder conv has bias while PAConv's doesn't have - # so we need to rebuild it here - self.pre_seg_conv = ConvModule( - fp_channels[-1][-1], - self.channels, - kernel_size=1, - bias=False, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, feat_dict): - """Forward pass. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Segmentation map of shape [B, num_classes, N]. - """ - sa_xyz, sa_features = self._extract_input(feat_dict) - - # PointNet++ doesn't use the first level of `sa_features` as input - # while PAConv inputs it through skip-connection - fp_feature = sa_features[-1] - - for i in range(self.num_fp): - # consume the points in a bottom-up manner - fp_feature = self.FP_modules[i](sa_xyz[-(i + 2)], sa_xyz[-(i + 1)], - sa_features[-(i + 2)], fp_feature) - - output = self.pre_seg_conv(fp_feature) - output = self.cls_seg(output) - - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/pointnet2_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/pointnet2_head.py deleted file mode 100644 index 9c905341..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/decode_heads/pointnet2_head.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn.bricks import ConvModule -from torch import nn as nn - -from mmdet3d.ops import PointFPModule -from ..builder import HEADS -from .decode_head import Base3DDecodeHead - - -@HEADS.register_module() -class PointNet2Head(Base3DDecodeHead): - r"""PointNet2 decoder head. - - Decoder head used in `PointNet++ `_. - Refer to the `official code `_. - - Args: - fp_channels (tuple[tuple[int]]): Tuple of mlp channels in FP modules. - fp_norm_cfg (dict): Config of norm layers used in FP modules. - Default: dict(type='BN2d'). - """ - - def __init__(self, - fp_channels=((768, 256, 256), (384, 256, 256), - (320, 256, 128), (128, 128, 128, 128)), - fp_norm_cfg=dict(type='BN2d'), - **kwargs): - super(PointNet2Head, self).__init__(**kwargs) - - self.num_fp = len(fp_channels) - self.FP_modules = nn.ModuleList() - for cur_fp_mlps in fp_channels: - self.FP_modules.append( - PointFPModule(mlp_channels=cur_fp_mlps, norm_cfg=fp_norm_cfg)) - - # https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_sem_seg.py#L40 - self.pre_seg_conv = ConvModule( - fp_channels[-1][-1], - self.channels, - kernel_size=1, - bias=True, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - list[torch.Tensor]: Coordinates of multiple levels of points. - list[torch.Tensor]: Features of multiple levels of points. - """ - sa_xyz = feat_dict['sa_xyz'] - sa_features = feat_dict['sa_features'] - assert len(sa_xyz) == len(sa_features) - - return sa_xyz, sa_features - - def forward(self, feat_dict): - """Forward pass. 
- - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Segmentation map of shape [B, num_classes, N]. - """ - sa_xyz, sa_features = self._extract_input(feat_dict) - - # https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_sem_seg.py#L24 - sa_features[0] = None - - fp_feature = sa_features[-1] - - for i in range(self.num_fp): - # consume the points in a bottom-up manner - fp_feature = self.FP_modules[i](sa_xyz[-(i + 2)], sa_xyz[-(i + 1)], - sa_features[-(i + 2)], fp_feature) - output = self.pre_seg_conv(fp_feature) - output = self.cls_seg(output) - - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/__init__.py deleted file mode 100644 index f6422896..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from .anchor3d_head import Anchor3DHead -from .anchor_free_mono3d_head import AnchorFreeMono3DHead -from .base_conv_bbox_head import BaseConvBboxHead -from .base_mono3d_dense_head import BaseMono3DDenseHead -from .centerpoint_head import CenterHead -from .fcos_mono3d_head import FCOSMono3DHead -from .free_anchor3d_head import FreeAnchor3DHead -from .groupfree3d_head import GroupFree3DHead -from .monoflex_head import MonoFlexHead -from .parta2_rpn_head import PartA2RPNHead -from .pgd_head import PGDHead -from .point_rpn_head import PointRPNHead -from .shape_aware_head import ShapeAwareHead -from .smoke_mono3d_head import SMOKEMono3DHead -from .ssd_3d_head import SSD3DHead -from .vote_head import VoteHead - -__all__ = [ - 'Anchor3DHead', 'FreeAnchor3DHead', 'PartA2RPNHead', 'VoteHead', - 'SSD3DHead', 'BaseConvBboxHead', 'CenterHead', 'ShapeAwareHead', - 'BaseMono3DDenseHead', 'AnchorFreeMono3DHead', 'FCOSMono3DHead', - 'GroupFree3DHead', 'PointRPNHead', 'SMOKEMono3DHead', 'PGDHead', - 'MonoFlexHead' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/anchor3d_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/anchor3d_head.py deleted file mode 100644 index 87550497..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/anchor3d_head.py +++ /dev/null @@ -1,518 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn - -from mmdet3d.core import (PseudoSampler, box3d_multiclass_nms, limit_period, - xywhr2xyxyr) -from mmdet.core import (build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, multi_apply) -from ..builder import HEADS, build_loss -from .train_mixins import AnchorTrainMixin - - -@HEADS.register_module() -class Anchor3DHead(BaseModule, AnchorTrainMixin): - """Anchor head for SECOND/PointPillars/MVXNet/PartA2. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - train_cfg (dict): Train configs. - test_cfg (dict): Test configs. - feat_channels (int): Number of channels of the feature map. - use_direction_classifier (bool): Whether to add a direction classifier. 
- anchor_generator(dict): Config dict of anchor generator. - assigner_per_size (bool): Whether to do assignment for each separate - anchor size. - assign_per_class (bool): Whether to do assignment for each class. - diff_rad_by_sin (bool): Whether to change the difference into sin - difference for box regression loss. - dir_offset (float | int): The offset of BEV rotation angles. - (TODO: may be moved into box coder) - dir_limit_offset (float | int): The limited range of BEV - rotation angles. (TODO: may be moved into box coder) - bbox_coder (dict): Config dict of box coders. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - loss_dir (dict): Config of direction classifier loss. - """ - - def __init__(self, - num_classes, - in_channels, - train_cfg, - test_cfg, - feat_channels=256, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - range=[0, -39.68, -1.78, 69.12, 39.68, -1.78], - strides=[2], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - custom_values=[], - reshape_out=False), - assigner_per_size=False, - assign_per_class=False, - diff_rad_by_sin=True, - dir_offset=-np.pi / 2, - dir_limit_offset=0, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict(type='CrossEntropyLoss', loss_weight=0.2), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.diff_rad_by_sin = diff_rad_by_sin - self.use_direction_classifier = use_direction_classifier - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.assigner_per_size = assigner_per_size - self.assign_per_class = assign_per_class - self.dir_offset = dir_offset - self.dir_limit_offset = dir_limit_offset - import warnings - warnings.warn( - 'dir_offset and dir_limit_offset will be depressed and be ' - 'incorporated into box coder in the future') - self.fp16_enabled = False - - # build anchor generator - self.anchor_generator = build_prior_generator(anchor_generator) - # In 3D detection, the anchor stride is connected with anchor size - self.num_anchors = self.anchor_generator.num_base_anchors - # build box coder - self.bbox_coder = build_bbox_coder(bbox_coder) - self.box_code_size = self.bbox_coder.code_size - - # build loss function - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - self.sampling = loss_cls['type'] not in ['FocalLoss', 'GHMC'] - if not self.use_sigmoid_cls: - self.num_classes += 1 - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_dir = build_loss(loss_dir) - self.fp16_enabled = False - - self._init_layers() - self._init_assigner_sampler() - - if init_cfg is None: - self.init_cfg = dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', name='conv_cls', std=0.01, bias_prob=0.01)) - - def _init_assigner_sampler(self): - """Initialize the target assigner and sampler of the head.""" - if self.train_cfg is None: - return - - if self.sampling: - self.bbox_sampler = build_sampler(self.train_cfg.sampler) - else: - self.bbox_sampler = PseudoSampler() - if isinstance(self.train_cfg.assigner, dict): - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - elif isinstance(self.train_cfg.assigner, list): - self.bbox_assigner = [ - build_assigner(res) for res in 
self.train_cfg.assigner - ] - - def _init_layers(self): - """Initialize neural network layers of the head.""" - self.cls_out_channels = self.num_anchors * self.num_classes - self.conv_cls = nn.Conv2d(self.feat_channels, self.cls_out_channels, 1) - self.conv_reg = nn.Conv2d(self.feat_channels, - self.num_anchors * self.box_code_size, 1) - if self.use_direction_classifier: - self.conv_dir_cls = nn.Conv2d(self.feat_channels, - self.num_anchors * 2, 1) - - def forward_single(self, x): - """Forward function on a single-scale feature map. - - Args: - x (torch.Tensor): Input features. - - Returns: - tuple[torch.Tensor]: Contain score of each class, bbox - regression and direction classification predictions. - """ - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - dir_cls_preds = None - if self.use_direction_classifier: - dir_cls_preds = self.conv_dir_cls(x) - return cls_score, bbox_pred, dir_cls_preds - - def forward(self, feats): - """Forward pass. - - Args: - feats (list[torch.Tensor]): Multi-level features, e.g., - features produced by FPN. - - Returns: - tuple[list[torch.Tensor]]: Multi-level class score, bbox - and direction predictions. - """ - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, input_metas, device='cuda'): - """Get anchors according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - input_metas (list[dict]): contain pcd and img's meta info. - device (str): device of current module. - - Returns: - list[list[torch.Tensor]]: Anchors of each image, valid flags - of each image. - """ - num_imgs = len(input_metas) - # since feature map sizes of all images are the same, we only compute - # anchors for one time - multi_level_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device=device) - anchor_list = [multi_level_anchors for _ in range(num_imgs)] - return anchor_list - - def loss_single(self, cls_score, bbox_pred, dir_cls_preds, labels, - label_weights, bbox_targets, bbox_weights, dir_targets, - dir_weights, num_total_samples): - """Calculate loss of Single-level results. - - Args: - cls_score (torch.Tensor): Class score in single-level. - bbox_pred (torch.Tensor): Bbox prediction in single-level. - dir_cls_preds (torch.Tensor): Predictions of direction class - in single-level. - labels (torch.Tensor): Labels of class. - label_weights (torch.Tensor): Weights of class loss. - bbox_targets (torch.Tensor): Targets of bbox predictions. - bbox_weights (torch.Tensor): Weights of bbox loss. - dir_targets (torch.Tensor): Targets of direction predictions. - dir_weights (torch.Tensor): Weights of direction loss. - num_total_samples (int): The number of valid samples. - - Returns: - tuple[torch.Tensor]: Losses of class, bbox - and direction, respectively. 
- """ - # classification loss - if num_total_samples is None: - num_total_samples = int(cls_score.shape[0]) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, 1).reshape(-1, self.num_classes) - assert labels.max().item() <= self.num_classes - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - - # regression loss - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, self.box_code_size) - bbox_targets = bbox_targets.reshape(-1, self.box_code_size) - bbox_weights = bbox_weights.reshape(-1, self.box_code_size) - - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero( - as_tuple=False).reshape(-1) - num_pos = len(pos_inds) - - pos_bbox_pred = bbox_pred[pos_inds] - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_weights = bbox_weights[pos_inds] - - # dir loss - if self.use_direction_classifier: - dir_cls_preds = dir_cls_preds.permute(0, 2, 3, 1).reshape(-1, 2) - dir_targets = dir_targets.reshape(-1) - dir_weights = dir_weights.reshape(-1) - pos_dir_cls_preds = dir_cls_preds[pos_inds] - pos_dir_targets = dir_targets[pos_inds] - pos_dir_weights = dir_weights[pos_inds] - - if num_pos > 0: - code_weight = self.train_cfg.get('code_weight', None) - if code_weight: - pos_bbox_weights = pos_bbox_weights * bbox_weights.new_tensor( - code_weight) - if self.diff_rad_by_sin: - pos_bbox_pred, pos_bbox_targets = self.add_sin_difference( - pos_bbox_pred, pos_bbox_targets) - loss_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_targets, - pos_bbox_weights, - avg_factor=num_total_samples) - - # direction classification loss - loss_dir = None - if self.use_direction_classifier: - loss_dir = self.loss_dir( - pos_dir_cls_preds, - pos_dir_targets, - pos_dir_weights, - avg_factor=num_total_samples) - else: - loss_bbox = pos_bbox_pred.sum() - if self.use_direction_classifier: - loss_dir = pos_dir_cls_preds.sum() - - return loss_cls, loss_bbox, loss_dir - - @staticmethod - def add_sin_difference(boxes1, boxes2): - """Convert the rotation difference to difference in sine function. - - Args: - boxes1 (torch.Tensor): Original Boxes in shape (NxC), where C>=7 - and the 7th dimension is rotation dimension. - boxes2 (torch.Tensor): Target boxes in shape (NxC), where C>=7 and - the 7th dimension is rotation dimension. - - Returns: - tuple[torch.Tensor]: ``boxes1`` and ``boxes2`` whose 7th - dimensions are changed. - """ - rad_pred_encoding = torch.sin(boxes1[..., 6:7]) * torch.cos( - boxes2[..., 6:7]) - rad_tg_encoding = torch.cos(boxes1[..., 6:7]) * torch.sin(boxes2[..., - 6:7]) - boxes1 = torch.cat( - [boxes1[..., :6], rad_pred_encoding, boxes1[..., 7:]], dim=-1) - boxes2 = torch.cat([boxes2[..., :6], rad_tg_encoding, boxes2[..., 7:]], - dim=-1) - return boxes1, boxes2 - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - gt_bboxes, - gt_labels, - input_metas, - gt_bboxes_ignore=None): - """Calculate losses. - - Args: - cls_scores (list[torch.Tensor]): Multi-level class scores. - bbox_preds (list[torch.Tensor]): Multi-level bbox predictions. - dir_cls_preds (list[torch.Tensor]): Multi-level direction - class predictions. - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): Gt bboxes - of each sample. - gt_labels (list[torch.Tensor]): Gt labels of each sample. - input_metas (list[dict]): Contain pcd and img's meta info. 
- gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding boxes to ignore. - - Returns: - dict[str, list[torch.Tensor]]: Classification, bbox, and - direction losses of each level. - - - loss_cls (list[torch.Tensor]): Classification losses. - - loss_bbox (list[torch.Tensor]): Box regression losses. - - loss_dir (list[torch.Tensor]): Direction classification - losses. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - device = cls_scores[0].device - anchor_list = self.get_anchors( - featmap_sizes, input_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.anchor_target_3d( - anchor_list, - gt_bboxes, - input_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - num_classes=self.num_classes, - label_channels=label_channels, - sampling=self.sampling) - - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - dir_targets_list, dir_weights_list, num_total_pos, - num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # num_total_samples = None - losses_cls, losses_bbox, losses_dir = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - dir_cls_preds, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - dir_targets_list, - dir_weights_list, - num_total_samples=num_total_samples) - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_dir=losses_dir) - - def get_bboxes(self, - cls_scores, - bbox_preds, - dir_cls_preds, - input_metas, - cfg=None, - rescale=False): - """Get bboxes of anchor head. - - Args: - cls_scores (list[torch.Tensor]): Multi-level class scores. - bbox_preds (list[torch.Tensor]): Multi-level bbox predictions. - dir_cls_preds (list[torch.Tensor]): Multi-level direction - class predictions. - input_metas (list[dict]): Contain pcd and img's meta info. - cfg (:obj:`ConfigDict`): Training or testing config. - rescale (list[torch.Tensor]): Whether th rescale bbox. - - Returns: - list[tuple]: Prediction resultes of batches. - """ - assert len(cls_scores) == len(bbox_preds) - assert len(cls_scores) == len(dir_cls_preds) - num_levels = len(cls_scores) - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - device = cls_scores[0].device - mlvl_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device=device) - mlvl_anchors = [ - anchor.reshape(-1, self.box_code_size) for anchor in mlvl_anchors - ] - - result_list = [] - for img_id in range(len(input_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - dir_cls_pred_list = [ - dir_cls_preds[i][img_id].detach() for i in range(num_levels) - ] - - input_meta = input_metas[img_id] - proposals = self.get_bboxes_single(cls_score_list, bbox_pred_list, - dir_cls_pred_list, mlvl_anchors, - input_meta, cfg, rescale) - result_list.append(proposals) - return result_list - - def get_bboxes_single(self, - cls_scores, - bbox_preds, - dir_cls_preds, - mlvl_anchors, - input_meta, - cfg=None, - rescale=False): - """Get bboxes of single branch. - - Args: - cls_scores (torch.Tensor): Class score in single batch. - bbox_preds (torch.Tensor): Bbox prediction in single batch. 
- dir_cls_preds (torch.Tensor): Predictions of direction class - in single batch. - mlvl_anchors (List[torch.Tensor]): Multi-level anchors - in single batch. - input_meta (list[dict]): Contain pcd and img's meta info. - cfg (:obj:`ConfigDict`): Training or testing config. - rescale (list[torch.Tensor]): whether th rescale bbox. - - Returns: - tuple: Contain predictions of single batch. - - - bboxes (:obj:`BaseInstance3DBoxes`): Predicted 3d bboxes. - - scores (torch.Tensor): Class score of each bbox. - - labels (torch.Tensor): Label of each bbox. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_dir_scores = [] - for cls_score, bbox_pred, dir_cls_pred, anchors in zip( - cls_scores, bbox_preds, dir_cls_preds, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - assert cls_score.size()[-2:] == dir_cls_pred.size()[-2:] - dir_cls_pred = dir_cls_pred.permute(1, 2, 0).reshape(-1, 2) - dir_cls_score = torch.max(dir_cls_pred, dim=-1)[1] - - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.num_classes) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, - 0).reshape(-1, self.box_code_size) - - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - dir_cls_score = dir_cls_score[topk_inds] - - bboxes = self.bbox_coder.decode(anchors, bbox_pred) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_dir_scores.append(dir_cls_score) - - mlvl_bboxes = torch.cat(mlvl_bboxes) - mlvl_bboxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - mlvl_bboxes, box_dim=self.box_code_size).bev) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_dir_scores = torch.cat(mlvl_dir_scores) - - if self.use_sigmoid_cls: - # Add a dummy background class to the front when using sigmoid - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - - score_thr = cfg.get('score_thr', 0) - results = box3d_multiclass_nms(mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_scores, score_thr, cfg.max_num, - cfg, mlvl_dir_scores) - bboxes, scores, labels, dir_scores = results - if bboxes.shape[0] > 0: - dir_rot = limit_period(bboxes[..., 6] - self.dir_offset, - self.dir_limit_offset, np.pi) - bboxes[..., 6] = ( - dir_rot + self.dir_offset + - np.pi * dir_scores.to(bboxes.dtype)) - bboxes = input_meta['box_type_3d'](bboxes, box_dim=self.box_code_size) - return bboxes, scores, labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/anchor_free_mono3d_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/anchor_free_mono3d_head.py deleted file mode 100644 index fbd13588..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/anchor_free_mono3d_head.py +++ /dev/null @@ -1,536 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from abc import abstractmethod - -import torch -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init -from mmcv.runner import force_fp32 -from torch import nn as nn - -from mmdet.core import multi_apply -from ..builder import HEADS, build_loss -from .base_mono3d_dense_head import BaseMono3DDenseHead - - -@HEADS.register_module() -class AnchorFreeMono3DHead(BaseMono3DDenseHead): - """Anchor-free head for monocular 3D object detection. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int, optional): Number of hidden channels. - Used in child classes. Defaults to 256. - stacked_convs (int, optional): Number of stacking convs of the head. - strides (tuple, optional): Downsample factor of each feature map. - dcn_on_last_conv (bool, optional): If true, use dcn in the last - layer of towers. Default: False. - conv_bias (bool | str, optional): If specified as `auto`, it will be - decided by the norm_cfg. Bias of conv will be set as True - if `norm_cfg` is None, otherwise False. Default: 'auto'. - background_label (int, optional): Label ID of background, - set as 0 for RPN and num_classes for other heads. - It will automatically set as `num_classes` if None is given. - use_direction_classifier (bool, optional): - Whether to add a direction classifier. - diff_rad_by_sin (bool, optional): Whether to change the difference - into sin difference for box regression loss. Defaults to True. - dir_offset (float, optional): Parameter used in direction - classification. Defaults to 0. - dir_limit_offset (float, optional): Parameter used in direction - classification. Defaults to 0. - loss_cls (dict, optional): Config of classification loss. - loss_bbox (dict, optional): Config of localization loss. - loss_dir (dict, optional): Config of direction classifier loss. - loss_attr (dict, optional): Config of attribute classifier loss, - which is only active when `pred_attrs=True`. - bbox_code_size (int, optional): Dimensions of predicted bounding boxes. - pred_attrs (bool, optional): Whether to predict attributes. - Defaults to False. - num_attrs (int, optional): The number of attributes to be predicted. - Default: 9. - pred_velo (bool, optional): Whether to predict velocity. - Defaults to False. - pred_bbox2d (bool, optional): Whether to predict 2D boxes. - Defaults to False. - group_reg_dims (tuple[int], optional): The dimension of each regression - target group. Default: (2, 1, 3, 1, 2). - cls_branch (tuple[int], optional): Channels for classification branch. - Default: (128, 64). - reg_branch (tuple[tuple], optional): Channels for regression branch. - Default: ( - (128, 64), # offset - (128, 64), # depth - (64, ), # size - (64, ), # rot - () # velo - ), - dir_branch (tuple[int], optional): Channels for direction - classification branch. Default: (64, ). - attr_branch (tuple[int], optional): Channels for classification branch. - Default: (64, ). - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - train_cfg (dict, optional): Training config of anchor head. - test_cfg (dict, optional): Testing config of anchor head. 
- """ # noqa: W605 - - _version = 1 - - def __init__( - self, - num_classes, - in_channels, - feat_channels=256, - stacked_convs=4, - strides=(4, 8, 16, 32, 64), - dcn_on_last_conv=False, - conv_bias='auto', - background_label=None, - use_direction_classifier=True, - diff_rad_by_sin=True, - dir_offset=0, - dir_limit_offset=0, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_attr=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - bbox_code_size=9, # For nuscenes - pred_attrs=False, - num_attrs=9, # For nuscenes - pred_velo=False, - pred_bbox2d=False, - group_reg_dims=(2, 1, 3, 1, 2), # offset, depth, size, rot, velo, - cls_branch=(128, 64), - reg_branch=( - (128, 64), # offset - (128, 64), # depth - (64, ), # size - (64, ), # rot - () # velo - ), - dir_branch=(64, ), - attr_branch=(64, ), - conv_cfg=None, - norm_cfg=None, - train_cfg=None, - test_cfg=None, - init_cfg=None): - super(AnchorFreeMono3DHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.cls_out_channels = num_classes - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.dcn_on_last_conv = dcn_on_last_conv - assert conv_bias == 'auto' or isinstance(conv_bias, bool) - self.conv_bias = conv_bias - self.use_direction_classifier = use_direction_classifier - self.diff_rad_by_sin = diff_rad_by_sin - self.dir_offset = dir_offset - self.dir_limit_offset = dir_limit_offset - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_dir = build_loss(loss_dir) - self.bbox_code_size = bbox_code_size - self.group_reg_dims = list(group_reg_dims) - self.cls_branch = cls_branch - self.reg_branch = reg_branch - assert len(reg_branch) == len(group_reg_dims), 'The number of '\ - 'element in reg_branch and group_reg_dims should be the same.' 
- self.pred_velo = pred_velo - self.pred_bbox2d = pred_bbox2d - self.out_channels = [] - for reg_branch_channels in reg_branch: - if len(reg_branch_channels) > 0: - self.out_channels.append(reg_branch_channels[-1]) - else: - self.out_channels.append(-1) - self.dir_branch = dir_branch - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - self.background_label = ( - num_classes if background_label is None else background_label) - # background_label should be either 0 or num_classes - assert (self.background_label == 0 - or self.background_label == num_classes) - self.pred_attrs = pred_attrs - self.attr_background_label = -1 - self.num_attrs = num_attrs - if self.pred_attrs: - self.attr_background_label = num_attrs - self.loss_attr = build_loss(loss_attr) - self.attr_branch = attr_branch - - self._init_layers() - - def _init_layers(self): - """Initialize layers of the head.""" - self._init_cls_convs() - self._init_reg_convs() - self._init_predictor() - - def _init_cls_convs(self): - """Initialize classification conv layers of the head.""" - self.cls_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_reg_convs(self): - """Initialize bbox regression conv layers of the head.""" - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_branch(self, conv_channels=(64), conv_strides=(1)): - """Initialize conv layers as a prediction branch.""" - conv_before_pred = nn.ModuleList() - if isinstance(conv_channels, int): - conv_channels = [self.feat_channels] + [conv_channels] - conv_strides = [conv_strides] - else: - conv_channels = [self.feat_channels] + list(conv_channels) - conv_strides = list(conv_strides) - for i in range(len(conv_strides)): - conv_before_pred.append( - ConvModule( - conv_channels[i], - conv_channels[i + 1], - 3, - stride=conv_strides[i], - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - return conv_before_pred - - def _init_predictor(self): - """Initialize predictor layers of the head.""" - self.conv_cls_prev = self._init_branch( - conv_channels=self.cls_branch, - conv_strides=(1, ) * len(self.cls_branch)) - self.conv_cls = nn.Conv2d(self.cls_branch[-1], self.cls_out_channels, - 1) - self.conv_reg_prevs = nn.ModuleList() - self.conv_regs = nn.ModuleList() - for i in range(len(self.group_reg_dims)): - reg_dim = self.group_reg_dims[i] - reg_branch_channels = self.reg_branch[i] - out_channel = self.out_channels[i] - if len(reg_branch_channels) > 0: - self.conv_reg_prevs.append( - self._init_branch( - conv_channels=reg_branch_channels, - conv_strides=(1, ) * len(reg_branch_channels))) - self.conv_regs.append(nn.Conv2d(out_channel, reg_dim, 1)) - else: - self.conv_reg_prevs.append(None) - 
self.conv_regs.append( - nn.Conv2d(self.feat_channels, reg_dim, 1)) - if self.use_direction_classifier: - self.conv_dir_cls_prev = self._init_branch( - conv_channels=self.dir_branch, - conv_strides=(1, ) * len(self.dir_branch)) - self.conv_dir_cls = nn.Conv2d(self.dir_branch[-1], 2, 1) - if self.pred_attrs: - self.conv_attr_prev = self._init_branch( - conv_channels=self.attr_branch, - conv_strides=(1, ) * len(self.attr_branch)) - self.conv_attr = nn.Conv2d(self.attr_branch[-1], self.num_attrs, 1) - - def init_weights(self): - """Initialize weights of the head. - - We currently still use the customized defined init_weights because the - default init of DCN triggered by the init_cfg will init - conv_offset.weight, which mistakenly affects the training stability. - """ - for modules in [self.cls_convs, self.reg_convs, self.conv_cls_prev]: - for m in modules: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - for conv_reg_prev in self.conv_reg_prevs: - if conv_reg_prev is None: - continue - for m in conv_reg_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - if self.use_direction_classifier: - for m in self.conv_dir_cls_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - if self.pred_attrs: - for m in self.conv_attr_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - bias_cls = bias_init_with_prob(0.01) - normal_init(self.conv_cls, std=0.01, bias=bias_cls) - for conv_reg in self.conv_regs: - normal_init(conv_reg, std=0.01) - if self.use_direction_classifier: - normal_init(self.conv_dir_cls, std=0.01, bias=bias_cls) - if self.pred_attrs: - normal_init(self.conv_attr, std=0.01, bias=bias_cls) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually contain classification scores, bbox predictions, - and direction class predictions. - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - attr_preds (list[Tensor]): Attribute scores for each scale - level, each is a 4D-tensor, the channel number is - num_points * num_attrs. - """ - return multi_apply(self.forward_single, feats)[:5] - - def forward_single(self, x): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - - Returns: - tuple: Scores for each class, bbox predictions, direction class, - and attributes, features after classification and regression - conv layers, some models needs these features like FCOS. 
- """ - cls_feat = x - reg_feat = x - - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - # clone the cls_feat for reusing the feature map afterwards - clone_cls_feat = cls_feat.clone() - for conv_cls_prev_layer in self.conv_cls_prev: - clone_cls_feat = conv_cls_prev_layer(clone_cls_feat) - cls_score = self.conv_cls(clone_cls_feat) - - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = [] - for i in range(len(self.group_reg_dims)): - # clone the reg_feat for reusing the feature map afterwards - clone_reg_feat = reg_feat.clone() - if len(self.reg_branch[i]) > 0: - for conv_reg_prev_layer in self.conv_reg_prevs[i]: - clone_reg_feat = conv_reg_prev_layer(clone_reg_feat) - bbox_pred.append(self.conv_regs[i](clone_reg_feat)) - bbox_pred = torch.cat(bbox_pred, dim=1) - - dir_cls_pred = None - if self.use_direction_classifier: - clone_reg_feat = reg_feat.clone() - for conv_dir_cls_prev_layer in self.conv_dir_cls_prev: - clone_reg_feat = conv_dir_cls_prev_layer(clone_reg_feat) - dir_cls_pred = self.conv_dir_cls(clone_reg_feat) - - attr_pred = None - if self.pred_attrs: - # clone the cls_feat for reusing the feature map afterwards - clone_cls_feat = cls_feat.clone() - for conv_attr_prev_layer in self.conv_attr_prev: - clone_cls_feat = conv_attr_prev_layer(clone_cls_feat) - attr_pred = self.conv_attr(clone_cls_feat) - - return cls_score, bbox_pred, dir_cls_pred, attr_pred, cls_feat, \ - reg_feat - - @abstractmethod - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - attr_preds, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - attr_preds (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_attrs. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_3d (list[Tensor]): 3D Ground truth bboxes for each - image with shape (num_gts, bbox_code_size). - gt_labels_3d (list[Tensor]): 3D class indices of each box. - centers2d (list[Tensor]): Projected 3D centers onto 2D images. - depths (list[Tensor]): Depth of projected centers on 2D images. - attr_labels (list[Tensor], optional): Attribute indices - corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - """ - - raise NotImplementedError - - @abstractmethod - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - dir_cls_preds, - attr_preds, - img_metas, - cfg=None, - rescale=None): - """Transform network output for a batch into bbox predictions. 
- - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * bbox_code_size, H, W) - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - attr_preds (list[Tensor]): Attribute scores for each scale level - Has shape (N, num_points * num_attrs, H, W) - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space - """ - - raise NotImplementedError - - @abstractmethod - def get_targets(self, points, gt_bboxes_list, gt_labels_list, - gt_bboxes_3d_list, gt_labels_3d_list, centers2d_list, - depths_list, attr_labels_list): - """Compute regression, classification and centerss targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - gt_bboxes_3d_list (list[Tensor]): 3D Ground truth bboxes of each - image, each has shape (num_gt, bbox_code_size). - gt_labels_3d_list (list[Tensor]): 3D Ground truth labels of each - box, each has shape (num_gt,). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - each has shape (num_gt, 2). - depths_list (list[Tensor]): Depth of projected 3D centers onto 2D - image, each has shape (num_gt, 1). - attr_labels_list (list[Tensor]): Attribute labels of each box, - each has shape (num_gt,). - """ - raise NotImplementedError - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points of a single scale level.""" - h, w = featmap_size - x_range = torch.arange(w, dtype=dtype, device=device) - y_range = torch.arange(h, dtype=dtype, device=device) - y, x = torch.meshgrid(y_range, x_range) - if flatten: - y = y.flatten() - x = x.flatten() - return y, x - - def get_points(self, featmap_sizes, dtype, device, flatten=False): - """Get points according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - dtype (torch.dtype): Type of points. - device (torch.device): Device of points. - - Returns: - tuple: points of each image. - """ - mlvl_points = [] - for i in range(len(featmap_sizes)): - mlvl_points.append( - self._get_points_single(featmap_sizes[i], self.strides[i], - dtype, device, flatten)) - return mlvl_points diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/base_conv_bbox_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/base_conv_bbox_head.py deleted file mode 100644 index c7a3bb3e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/base_conv_bbox_head.py +++ /dev/null @@ -1,133 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmcv.cnn import ConvModule -from mmcv.cnn.bricks import build_conv_layer -from mmcv.runner import BaseModule -from torch import nn as nn - -from ..builder import HEADS - - -@HEADS.register_module() -class BaseConvBboxHead(BaseModule): - r"""More general bbox head, with shared conv layers and two optional - separated branches. - - .. code-block:: none - - /-> cls convs -> cls_score - shared convs - \-> reg convs -> bbox_pred - """ - - def __init__(self, - in_channels=0, - shared_conv_channels=(), - cls_conv_channels=(), - num_cls_out_channels=0, - reg_conv_channels=(), - num_reg_out_channels=0, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - bias='auto', - init_cfg=None, - *args, - **kwargs): - super(BaseConvBboxHead, self).__init__( - init_cfg=init_cfg, *args, **kwargs) - assert in_channels > 0 - assert num_cls_out_channels > 0 - assert num_reg_out_channels > 0 - self.in_channels = in_channels - self.shared_conv_channels = shared_conv_channels - self.cls_conv_channels = cls_conv_channels - self.num_cls_out_channels = num_cls_out_channels - self.reg_conv_channels = reg_conv_channels - self.num_reg_out_channels = num_reg_out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.bias = bias - - # add shared convs - if len(self.shared_conv_channels) > 0: - self.shared_convs = self._add_conv_branch( - self.in_channels, self.shared_conv_channels) - out_channels = self.shared_conv_channels[-1] - else: - out_channels = self.in_channels - - # add cls specific branch - prev_channel = out_channels - if len(self.cls_conv_channels) > 0: - self.cls_convs = self._add_conv_branch(prev_channel, - self.cls_conv_channels) - prev_channel = self.cls_conv_channels[-1] - - self.conv_cls = build_conv_layer( - conv_cfg, - in_channels=prev_channel, - out_channels=num_cls_out_channels, - kernel_size=1) - # add reg specific branch - prev_channel = out_channels - if len(self.reg_conv_channels) > 0: - self.reg_convs = self._add_conv_branch(prev_channel, - self.reg_conv_channels) - prev_channel = self.reg_conv_channels[-1] - - self.conv_reg = build_conv_layer( - conv_cfg, - in_channels=prev_channel, - out_channels=num_reg_out_channels, - kernel_size=1) - - def _add_conv_branch(self, in_channels, conv_channels): - """Add shared or separable branch.""" - conv_spec = [in_channels] + list(conv_channels) - # add branch specific conv layers - conv_layers = nn.Sequential() - for i in range(len(conv_spec) - 1): - conv_layers.add_module( - f'layer{i}', - ConvModule( - conv_spec[i], - conv_spec[i + 1], - kernel_size=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=self.bias, - inplace=True)) - return conv_layers - - def forward(self, feats): - """Forward. 
- - Args: - feats (Tensor): Input features - - Returns: - Tensor: Class scores predictions - Tensor: Regression predictions - """ - # shared part - if len(self.shared_conv_channels) > 0: - x = self.shared_convs(feats) - - # separate branches - x_cls = x - x_reg = x - - if len(self.cls_conv_channels) > 0: - x_cls = self.cls_convs(x_cls) - cls_score = self.conv_cls(x_cls) - - if len(self.reg_conv_channels) > 0: - x_reg = self.reg_convs(x_reg) - bbox_pred = self.conv_reg(x_reg) - - return cls_score, bbox_pred diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/base_mono3d_dense_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/base_mono3d_dense_head.py deleted file mode 100644 index 73658ace..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/base_mono3d_dense_head.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from mmcv.runner import BaseModule - - -class BaseMono3DDenseHead(BaseModule, metaclass=ABCMeta): - """Base class for Monocular 3D DenseHeads.""" - - def __init__(self, init_cfg=None): - super(BaseMono3DDenseHead, self).__init__(init_cfg=init_cfg) - - @abstractmethod - def loss(self, **kwargs): - """Compute losses of the head.""" - pass - - @abstractmethod - def get_bboxes(self, **kwargs): - """Transform network output for a batch into bbox predictions.""" - pass - - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_3d=None, - gt_labels_3d=None, - centers2d=None, - depths=None, - attr_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (list[Tensor]): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_3d (list[Tensor]): 3D ground truth bboxes of the image, - shape (num_gts, self.bbox_code_size). - gt_labels_3d (list[Tensor]): 3D ground truth labels of each box, - shape (num_gts,). - centers2d (list[Tensor]): Projected 3D center of each box, - shape (num_gts, 2). - depths (list[Tensor]): Depth of projected 3D center of each box, - shape (num_gts,). - attr_labels (list[Tensor]): Attribute labels of each box, - shape (num_gts,). - gt_bboxes_ignore (list[Tensor]): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple: - losses: (dict[str, Tensor]): A dictionary of loss components. - proposal_list (list[Tensor]): Proposals of each image. 
- """ - outs = self(x) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, gt_bboxes_3d, centers2d, depths, - attr_labels, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels, - img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg) - return losses, proposal_list diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/centerpoint_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/centerpoint_head.py deleted file mode 100644 index d105896f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/centerpoint_head.py +++ /dev/null @@ -1,832 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import torch -from mmcv.cnn import ConvModule, build_conv_layer -from mmcv.runner import BaseModule, force_fp32 -from torch import nn - -from mmdet3d.core import (circle_nms, draw_heatmap_gaussian, gaussian_radius, - xywhr2xyxyr) -from mmdet3d.core.post_processing import nms_bev -from mmdet3d.models import builder -from mmdet3d.models.utils import clip_sigmoid -from mmdet.core import build_bbox_coder, multi_apply -from ..builder import HEADS, build_loss - - -@HEADS.register_module() -class SeparateHead(BaseModule): - """SeparateHead for CenterHead. - - Args: - in_channels (int): Input channels for conv_layer. - heads (dict): Conv information. - head_conv (int, optional): Output channels. - Default: 64. - final_kernel (int, optional): Kernel size for the last conv layer. - Default: 1. - init_bias (float, optional): Initial bias. Default: -2.19. - conv_cfg (dict, optional): Config of conv layer. - Default: dict(type='Conv2d') - norm_cfg (dict, optional): Config of norm layer. - Default: dict(type='BN2d'). - bias (str, optional): Type of bias. Default: 'auto'. - """ - - def __init__(self, - in_channels, - heads, - head_conv=64, - final_kernel=1, - init_bias=-2.19, - conv_cfg=dict(type='Conv2d'), - norm_cfg=dict(type='BN2d'), - bias='auto', - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(SeparateHead, self).__init__(init_cfg=init_cfg) - self.heads = heads - self.init_bias = init_bias - for head in self.heads: - classes, num_conv = self.heads[head] - - conv_layers = [] - c_in = in_channels - for i in range(num_conv - 1): - conv_layers.append( - ConvModule( - c_in, - head_conv, - kernel_size=final_kernel, - stride=1, - padding=final_kernel // 2, - bias=bias, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) - c_in = head_conv - - conv_layers.append( - build_conv_layer( - conv_cfg, - head_conv, - classes, - kernel_size=final_kernel, - stride=1, - padding=final_kernel // 2, - bias=True)) - conv_layers = nn.Sequential(*conv_layers) - - self.__setattr__(head, conv_layers) - - if init_cfg is None: - self.init_cfg = dict(type='Kaiming', layer='Conv2d') - - def init_weights(self): - """Initialize weights.""" - super().init_weights() - for head in self.heads: - if head == 'heatmap': - self.__getattr__(head)[-1].bias.data.fill_(self.init_bias) - - def forward(self, x): - """Forward function for SepHead. 
- - Args: - x (torch.Tensor): Input feature map with the shape of - [B, 512, 128, 128]. - - Returns: - dict[str: torch.Tensor]: contains the following keys: - - -reg (torch.Tensor): 2D regression value with the - shape of [B, 2, H, W]. - -height (torch.Tensor): Height value with the - shape of [B, 1, H, W]. - -dim (torch.Tensor): Size value with the shape - of [B, 3, H, W]. - -rot (torch.Tensor): Rotation value with the - shape of [B, 2, H, W]. - -vel (torch.Tensor): Velocity value with the - shape of [B, 2, H, W]. - -heatmap (torch.Tensor): Heatmap with the shape of - [B, N, H, W]. - """ - ret_dict = dict() - for head in self.heads: - ret_dict[head] = self.__getattr__(head)(x) - - return ret_dict - - -@HEADS.register_module() -class DCNSeparateHead(BaseModule): - r"""DCNSeparateHead for CenterHead. - - .. code-block:: none - /-----> DCN for heatmap task -----> heatmap task. - feature - \-----> DCN for regression tasks -----> regression tasks - - Args: - in_channels (int): Input channels for conv_layer. - num_cls (int): Number of classes. - heads (dict): Conv information. - dcn_config (dict): Config of dcn layer. - head_conv (int, optional): Output channels. - Default: 64. - final_kernel (int, optional): Kernel size for the last conv - layer. Default: 1. - init_bias (float, optional): Initial bias. Default: -2.19. - conv_cfg (dict, optional): Config of conv layer. - Default: dict(type='Conv2d') - norm_cfg (dict, optional): Config of norm layer. - Default: dict(type='BN2d'). - bias (str, optional): Type of bias. Default: 'auto'. - """ # noqa: W605 - - def __init__(self, - in_channels, - num_cls, - heads, - dcn_config, - head_conv=64, - final_kernel=1, - init_bias=-2.19, - conv_cfg=dict(type='Conv2d'), - norm_cfg=dict(type='BN2d'), - bias='auto', - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(DCNSeparateHead, self).__init__(init_cfg=init_cfg) - if 'heatmap' in heads: - heads.pop('heatmap') - # feature adaptation with dcn - # use separate features for classification / regression - self.feature_adapt_cls = build_conv_layer(dcn_config) - - self.feature_adapt_reg = build_conv_layer(dcn_config) - - # heatmap prediction head - cls_head = [ - ConvModule( - in_channels, - head_conv, - kernel_size=3, - padding=1, - conv_cfg=conv_cfg, - bias=bias, - norm_cfg=norm_cfg), - build_conv_layer( - conv_cfg, - head_conv, - num_cls, - kernel_size=3, - stride=1, - padding=1, - bias=bias) - ] - self.cls_head = nn.Sequential(*cls_head) - self.init_bias = init_bias - # other regression target - self.task_head = SeparateHead( - in_channels, - heads, - head_conv=head_conv, - final_kernel=final_kernel, - bias=bias) - if init_cfg is None: - self.init_cfg = dict(type='Kaiming', layer='Conv2d') - - def init_weights(self): - """Initialize weights.""" - super().init_weights() - self.cls_head[-1].bias.data.fill_(self.init_bias) - - def forward(self, x): - """Forward function for DCNSepHead. - - Args: - x (torch.Tensor): Input feature map with the shape of - [B, 512, 128, 128]. - - Returns: - dict[str: torch.Tensor]: contains the following keys: - - -reg (torch.Tensor): 2D regression value with the - shape of [B, 2, H, W]. - -height (torch.Tensor): Height value with the - shape of [B, 1, H, W]. - -dim (torch.Tensor): Size value with the shape - of [B, 3, H, W]. - -rot (torch.Tensor): Rotation value with the - shape of [B, 2, H, W]. - -vel (torch.Tensor): Velocity value with the - shape of [B, 2, H, W]. 
- -heatmap (torch.Tensor): Heatmap with the shape of - [B, N, H, W]. - """ - center_feat = self.feature_adapt_cls(x) - reg_feat = self.feature_adapt_reg(x) - - cls_score = self.cls_head(center_feat) - ret = self.task_head(reg_feat) - ret['heatmap'] = cls_score - - return ret - - -@HEADS.register_module() -class CenterHead(BaseModule): - """CenterHead for CenterPoint. - - Args: - in_channels (list[int] | int, optional): Channels of the input - feature map. Default: [128]. - tasks (list[dict], optional): Task information including class number - and class names. Default: None. - train_cfg (dict, optional): Train-time configs. Default: None. - test_cfg (dict, optional): Test-time configs. Default: None. - bbox_coder (dict, optional): Bbox coder configs. Default: None. - common_heads (dict, optional): Conv information for common heads. - Default: dict(). - loss_cls (dict, optional): Config of classification loss function. - Default: dict(type='GaussianFocalLoss', reduction='mean'). - loss_bbox (dict, optional): Config of regression loss function. - Default: dict(type='L1Loss', reduction='none'). - separate_head (dict, optional): Config of separate head. Default: dict( - type='SeparateHead', init_bias=-2.19, final_kernel=3) - share_conv_channel (int, optional): Output channels for share_conv - layer. Default: 64. - num_heatmap_convs (int, optional): Number of conv layers for heatmap - conv layer. Default: 2. - conv_cfg (dict, optional): Config of conv layer. - Default: dict(type='Conv2d') - norm_cfg (dict, optional): Config of norm layer. - Default: dict(type='BN2d'). - bias (str, optional): Type of bias. Default: 'auto'. - """ - - def __init__(self, - in_channels=[128], - tasks=None, - train_cfg=None, - test_cfg=None, - bbox_coder=None, - common_heads=dict(), - loss_cls=dict(type='GaussianFocalLoss', reduction='mean'), - loss_bbox=dict( - type='L1Loss', reduction='none', loss_weight=0.25), - separate_head=dict( - type='SeparateHead', init_bias=-2.19, final_kernel=3), - share_conv_channel=64, - num_heatmap_convs=2, - conv_cfg=dict(type='Conv2d'), - norm_cfg=dict(type='BN2d'), - bias='auto', - norm_bbox=True, - init_cfg=None): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(CenterHead, self).__init__(init_cfg=init_cfg) - - num_classes = [len(t['class_names']) for t in tasks] - self.class_names = [t['class_names'] for t in tasks] - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.in_channels = in_channels - self.num_classes = num_classes - self.norm_bbox = norm_bbox - - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.bbox_coder = build_bbox_coder(bbox_coder) - self.num_anchor_per_locs = [n for n in num_classes] - self.fp16_enabled = False - - # a shared convolution - self.shared_conv = ConvModule( - in_channels, - share_conv_channel, - kernel_size=3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=bias) - - self.task_heads = nn.ModuleList() - - for num_cls in num_classes: - heads = copy.deepcopy(common_heads) - heads.update(dict(heatmap=(num_cls, num_heatmap_convs))) - separate_head.update( - in_channels=share_conv_channel, heads=heads, num_cls=num_cls) - self.task_heads.append(builder.build_head(separate_head)) - - def forward_single(self, x): - """Forward function for CenterPoint. - - Args: - x (torch.Tensor): Input feature map with the shape of - [B, 512, 128, 128]. - - Returns: - list[dict]: Output results for tasks. 
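The `heads` argument described in the SeparateHead/DCNSeparateHead docstrings above maps a branch name to `(out_channels, num_conv)`, and `forward` returns one tensor per branch keyed by that name. A minimal, self-contained sketch of that pattern (head names and channel sizes here are illustrative, not the actual CenterPoint configuration):

```python
import torch
from torch import nn

# Each entry in `heads` maps a name to (out_channels, num_conv); forward()
# returns a dict keyed by those names, mirroring the pattern described above.
class TinySeparateHead(nn.Module):
    def __init__(self, in_channels, heads, head_conv=64):
        super().__init__()
        self.heads = heads
        for name, (out_channels, num_conv) in heads.items():
            layers, c_in = [], in_channels
            for _ in range(num_conv - 1):
                layers += [nn.Conv2d(c_in, head_conv, 3, padding=1), nn.ReLU()]
                c_in = head_conv
            layers.append(nn.Conv2d(c_in, out_channels, 1))
            setattr(self, name, nn.Sequential(*layers))

    def forward(self, x):
        return {name: getattr(self, name)(x) for name in self.heads}

# Illustrative head layout only (not the deleted module's defaults).
head = TinySeparateHead(64, dict(reg=(2, 2), height=(1, 2), heatmap=(3, 2)))
out = head(torch.randn(2, 64, 16, 16))
assert out['reg'].shape == (2, 2, 16, 16) and out['heatmap'].shape == (2, 3, 16, 16)
```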
- """ - ret_dicts = [] - - x = self.shared_conv(x) - - for task in self.task_heads: - ret_dicts.append(task(x)) - - return ret_dicts - - def forward(self, feats): - """Forward pass. - - Args: - feats (list[torch.Tensor]): Multi-level features, e.g., - features produced by FPN. - - Returns: - tuple(list[dict]): Output results for tasks. - """ - return multi_apply(self.forward_single, feats) - - def _gather_feat(self, feat, ind, mask=None): - """Gather feature map. - - Given feature map and index, return indexed feature map. - - Args: - feat (torch.tensor): Feature map with the shape of [B, H*W, 10]. - ind (torch.Tensor): Index of the ground truth boxes with the - shape of [B, max_obj]. - mask (torch.Tensor, optional): Mask of the feature map with the - shape of [B, max_obj]. Default: None. - - Returns: - torch.Tensor: Feature map after gathering with the shape - of [B, max_obj, 10]. - """ - dim = feat.size(2) - ind = ind.unsqueeze(2).expand(ind.size(0), ind.size(1), dim) - feat = feat.gather(1, ind) - if mask is not None: - mask = mask.unsqueeze(2).expand_as(feat) - feat = feat[mask] - feat = feat.view(-1, dim) - return feat - - def get_targets(self, gt_bboxes_3d, gt_labels_3d): - """Generate targets. - - How each output is transformed: - - Each nested list is transposed so that all same-index elements in - each sub-list (1, ..., N) become the new sub-lists. - [ [a0, a1, a2, ... ], [b0, b1, b2, ... ], ... ] - ==> [ [a0, b0, ... ], [a1, b1, ... ], [a2, b2, ... ] ] - - The new transposed nested list is converted into a list of N - tensors generated by concatenating tensors in the new sub-lists. - [ tensor0, tensor1, tensor2, ... ] - - Args: - gt_bboxes_3d (list[:obj:`LiDARInstance3DBoxes`]): Ground - truth gt boxes. - gt_labels_3d (list[torch.Tensor]): Labels of boxes. - - Returns: - Returns: - tuple[list[torch.Tensor]]: Tuple of target including - the following results in order. - - - list[torch.Tensor]: Heatmap scores. - - list[torch.Tensor]: Ground truth boxes. - - list[torch.Tensor]: Indexes indicating the - position of the valid boxes. - - list[torch.Tensor]: Masks indicating which - boxes are valid. - """ - heatmaps, anno_boxes, inds, masks = multi_apply( - self.get_targets_single, gt_bboxes_3d, gt_labels_3d) - # Transpose heatmaps - heatmaps = list(map(list, zip(*heatmaps))) - heatmaps = [torch.stack(hms_) for hms_ in heatmaps] - # Transpose anno_boxes - anno_boxes = list(map(list, zip(*anno_boxes))) - anno_boxes = [torch.stack(anno_boxes_) for anno_boxes_ in anno_boxes] - # Transpose inds - inds = list(map(list, zip(*inds))) - inds = [torch.stack(inds_) for inds_ in inds] - # Transpose inds - masks = list(map(list, zip(*masks))) - masks = [torch.stack(masks_) for masks_ in masks] - return heatmaps, anno_boxes, inds, masks - - def get_targets_single(self, gt_bboxes_3d, gt_labels_3d): - """Generate training targets for a single sample. - - Args: - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): Ground truth gt boxes. - gt_labels_3d (torch.Tensor): Labels of boxes. - - Returns: - tuple[list[torch.Tensor]]: Tuple of target including - the following results in order. - - - list[torch.Tensor]: Heatmap scores. - - list[torch.Tensor]: Ground truth boxes. - - list[torch.Tensor]: Indexes indicating the position - of the valid boxes. - - list[torch.Tensor]: Masks indicating which boxes - are valid. 
- """ - device = gt_labels_3d.device - gt_bboxes_3d = torch.cat( - (gt_bboxes_3d.gravity_center, gt_bboxes_3d.tensor[:, 3:]), - dim=1).to(device) - max_objs = self.train_cfg['max_objs'] * self.train_cfg['dense_reg'] - grid_size = torch.tensor(self.train_cfg['grid_size']) - pc_range = torch.tensor(self.train_cfg['point_cloud_range']) - voxel_size = torch.tensor(self.train_cfg['voxel_size']) - - feature_map_size = grid_size[:2] // self.train_cfg['out_size_factor'] - - # reorganize the gt_dict by tasks - task_masks = [] - flag = 0 - for class_name in self.class_names: - task_masks.append([ - torch.where(gt_labels_3d == class_name.index(i) + flag) - for i in class_name - ]) - flag += len(class_name) - - task_boxes = [] - task_classes = [] - flag2 = 0 - for idx, mask in enumerate(task_masks): - task_box = [] - task_class = [] - for m in mask: - task_box.append(gt_bboxes_3d[m]) - # 0 is background for each task, so we need to add 1 here. - task_class.append(gt_labels_3d[m] + 1 - flag2) - task_boxes.append(torch.cat(task_box, axis=0).to(device)) - task_classes.append(torch.cat(task_class).long().to(device)) - flag2 += len(mask) - draw_gaussian = draw_heatmap_gaussian - heatmaps, anno_boxes, inds, masks = [], [], [], [] - - for idx, task_head in enumerate(self.task_heads): - heatmap = gt_bboxes_3d.new_zeros( - (len(self.class_names[idx]), feature_map_size[1], - feature_map_size[0])) - - anno_box = gt_bboxes_3d.new_zeros((max_objs, 10), - dtype=torch.float32) - - ind = gt_labels_3d.new_zeros((max_objs), dtype=torch.int64) - mask = gt_bboxes_3d.new_zeros((max_objs), dtype=torch.uint8) - - num_objs = min(task_boxes[idx].shape[0], max_objs) - - for k in range(num_objs): - cls_id = task_classes[idx][k] - 1 - - width = task_boxes[idx][k][3] - length = task_boxes[idx][k][4] - width = width / voxel_size[0] / self.train_cfg[ - 'out_size_factor'] - length = length / voxel_size[1] / self.train_cfg[ - 'out_size_factor'] - - if width > 0 and length > 0: - radius = gaussian_radius( - (length, width), - min_overlap=self.train_cfg['gaussian_overlap']) - radius = max(self.train_cfg['min_radius'], int(radius)) - - # be really careful for the coordinate system of - # your box annotation. 
- x, y, z = task_boxes[idx][k][0], task_boxes[idx][k][ - 1], task_boxes[idx][k][2] - - coor_x = ( - x - pc_range[0] - ) / voxel_size[0] / self.train_cfg['out_size_factor'] - coor_y = ( - y - pc_range[1] - ) / voxel_size[1] / self.train_cfg['out_size_factor'] - - center = torch.tensor([coor_x, coor_y], - dtype=torch.float32, - device=device) - center_int = center.to(torch.int32) - - # throw out not in range objects to avoid out of array - # area when creating the heatmap - if not (0 <= center_int[0] < feature_map_size[0] - and 0 <= center_int[1] < feature_map_size[1]): - continue - - draw_gaussian(heatmap[cls_id], center_int, radius) - - new_idx = k - x, y = center_int[0], center_int[1] - - assert (y * feature_map_size[0] + x < - feature_map_size[0] * feature_map_size[1]) - - ind[new_idx] = y * feature_map_size[0] + x - mask[new_idx] = 1 - # TODO: support other outdoor dataset - vx, vy = task_boxes[idx][k][7:] - rot = task_boxes[idx][k][6] - box_dim = task_boxes[idx][k][3:6] - if self.norm_bbox: - box_dim = box_dim.log() - anno_box[new_idx] = torch.cat([ - center - torch.tensor([x, y], device=device), - z.unsqueeze(0), box_dim, - torch.sin(rot).unsqueeze(0), - torch.cos(rot).unsqueeze(0), - vx.unsqueeze(0), - vy.unsqueeze(0) - ]) - - heatmaps.append(heatmap) - anno_boxes.append(anno_box) - masks.append(mask) - inds.append(ind) - return heatmaps, anno_boxes, inds, masks - - @force_fp32(apply_to=('preds_dicts')) - def loss(self, gt_bboxes_3d, gt_labels_3d, preds_dicts, **kwargs): - """Loss function for CenterHead. - - Args: - gt_bboxes_3d (list[:obj:`LiDARInstance3DBoxes`]): Ground - truth gt boxes. - gt_labels_3d (list[torch.Tensor]): Labels of boxes. - preds_dicts (dict): Output of forward function. - - Returns: - dict[str:torch.Tensor]: Loss of heatmap and bbox of each task. - """ - heatmaps, anno_boxes, inds, masks = self.get_targets( - gt_bboxes_3d, gt_labels_3d) - loss_dict = dict() - for task_id, preds_dict in enumerate(preds_dicts): - # heatmap focal loss - preds_dict[0]['heatmap'] = clip_sigmoid(preds_dict[0]['heatmap']) - num_pos = heatmaps[task_id].eq(1).float().sum().item() - loss_heatmap = self.loss_cls( - preds_dict[0]['heatmap'], - heatmaps[task_id], - avg_factor=max(num_pos, 1)) - target_box = anno_boxes[task_id] - # reconstruct the anno_box from multiple reg heads - preds_dict[0]['anno_box'] = torch.cat( - (preds_dict[0]['reg'], preds_dict[0]['height'], - preds_dict[0]['dim'], preds_dict[0]['rot'], - preds_dict[0]['vel']), - dim=1) - - # Regression loss for dimension, offset, height, rotation - ind = inds[task_id] - num = masks[task_id].float().sum() - pred = preds_dict[0]['anno_box'].permute(0, 2, 3, 1).contiguous() - pred = pred.view(pred.size(0), -1, pred.size(3)) - pred = self._gather_feat(pred, ind) - mask = masks[task_id].unsqueeze(2).expand_as(target_box).float() - isnotnan = (~torch.isnan(target_box)).float() - mask *= isnotnan - - code_weights = self.train_cfg.get('code_weights', None) - bbox_weights = mask * mask.new_tensor(code_weights) - loss_bbox = self.loss_bbox( - pred, target_box, bbox_weights, avg_factor=(num + 1e-4)) - loss_dict[f'task{task_id}.loss_heatmap'] = loss_heatmap - loss_dict[f'task{task_id}.loss_bbox'] = loss_bbox - return loss_dict - - def get_bboxes(self, preds_dicts, img_metas, img=None, rescale=False): - """Generate bboxes from bbox head predictions. - - Args: - preds_dicts (tuple[list[dict]]): Prediction results. - img_metas (list[dict]): Point cloud and image's meta info. - - Returns: - list[dict]: Decoded bbox, scores and labels after nms. 
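The bbox regression loss above only evaluates predictions at ground-truth center positions, using the `_gather_feat` indexing shown earlier. A minimal sketch of that gather step, with illustrative sizes:

```python
import torch

# Flattened per-pixel predictions [B, H*W, C] are indexed at the ground-truth
# center indices `ind` (shape [B, max_obj], encoded as y * W + x), so the loss
# only sees the C regression channels at object centers.
B, H, W, C, max_obj = 2, 4, 4, 10, 3
pred = torch.randn(B, C, H, W)
pred = pred.permute(0, 2, 3, 1).reshape(B, H * W, C)   # [B, H*W, C]
ind = torch.randint(0, H * W, (B, max_obj))            # flat center indices
gathered = pred.gather(1, ind.unsqueeze(2).expand(B, max_obj, C))
assert gathered.shape == (B, max_obj, C)
```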
- """ - rets = [] - for task_id, preds_dict in enumerate(preds_dicts): - num_class_with_bg = self.num_classes[task_id] - batch_size = preds_dict[0]['heatmap'].shape[0] - batch_heatmap = preds_dict[0]['heatmap'].sigmoid() - - batch_reg = preds_dict[0]['reg'] - batch_hei = preds_dict[0]['height'] - - if self.norm_bbox: - batch_dim = torch.exp(preds_dict[0]['dim']) - else: - batch_dim = preds_dict[0]['dim'] - - batch_rots = preds_dict[0]['rot'][:, 0].unsqueeze(1) - batch_rotc = preds_dict[0]['rot'][:, 1].unsqueeze(1) - - if 'vel' in preds_dict[0]: - batch_vel = preds_dict[0]['vel'] - else: - batch_vel = None - temp = self.bbox_coder.decode( - batch_heatmap, - batch_rots, - batch_rotc, - batch_hei, - batch_dim, - batch_vel, - reg=batch_reg, - task_id=task_id) - assert self.test_cfg['nms_type'] in ['circle', 'rotate'] - batch_reg_preds = [box['bboxes'] for box in temp] - batch_cls_preds = [box['scores'] for box in temp] - batch_cls_labels = [box['labels'] for box in temp] - if self.test_cfg['nms_type'] == 'circle': - ret_task = [] - for i in range(batch_size): - boxes3d = temp[i]['bboxes'] - scores = temp[i]['scores'] - labels = temp[i]['labels'] - centers = boxes3d[:, [0, 1]] - boxes = torch.cat([centers, scores.view(-1, 1)], dim=1) - keep = torch.tensor( - circle_nms( - boxes.detach().cpu().numpy(), - self.test_cfg['min_radius'][task_id], - post_max_size=self.test_cfg['post_max_size']), - dtype=torch.long, - device=boxes.device) - - boxes3d = boxes3d[keep] - scores = scores[keep] - labels = labels[keep] - ret = dict(bboxes=boxes3d, scores=scores, labels=labels) - ret_task.append(ret) - rets.append(ret_task) - else: - rets.append( - self.get_task_detections(num_class_with_bg, - batch_cls_preds, batch_reg_preds, - batch_cls_labels, img_metas)) - - # Merge branches results - num_samples = len(rets[0]) - - ret_list = [] - for i in range(num_samples): - for k in rets[0][i].keys(): - if k == 'bboxes': - bboxes = torch.cat([ret[i][k] for ret in rets]) - bboxes[:, 2] = bboxes[:, 2] - bboxes[:, 5] * 0.5 - bboxes = img_metas[i]['box_type_3d']( - bboxes, self.bbox_coder.code_size) - elif k == 'scores': - scores = torch.cat([ret[i][k] for ret in rets]) - elif k == 'labels': - flag = 0 - for j, num_class in enumerate(self.num_classes): - rets[j][i][k] += flag - flag += num_class - labels = torch.cat([ret[i][k].int() for ret in rets]) - ret_list.append([bboxes, scores, labels]) - return ret_list - - def get_task_detections(self, num_class_with_bg, batch_cls_preds, - batch_reg_preds, batch_cls_labels, img_metas): - """Rotate nms for each task. - - Args: - num_class_with_bg (int): Number of classes for the current task. - batch_cls_preds (list[torch.Tensor]): Prediction score with the - shape of [N]. - batch_reg_preds (list[torch.Tensor]): Prediction bbox with the - shape of [N, 9]. - batch_cls_labels (list[torch.Tensor]): Prediction label with the - shape of [N]. - img_metas (list[dict]): Meta information of each sample. - - Returns: - list[dict[str: torch.Tensor]]: contains the following keys: - - -bboxes (torch.Tensor): Prediction bboxes after nms with the - shape of [N, 9]. - -scores (torch.Tensor): Prediction scores after nms with the - shape of [N]. - -labels (torch.Tensor): Prediction labels after nms with the - shape of [N]. 
- """ - predictions_dicts = [] - post_center_range = self.test_cfg['post_center_limit_range'] - if len(post_center_range) > 0: - post_center_range = torch.tensor( - post_center_range, - dtype=batch_reg_preds[0].dtype, - device=batch_reg_preds[0].device) - - for i, (box_preds, cls_preds, cls_labels) in enumerate( - zip(batch_reg_preds, batch_cls_preds, batch_cls_labels)): - - # Apply NMS in bird eye view - - # get the highest score per prediction, then apply nms - # to remove overlapped box. - if num_class_with_bg == 1: - top_scores = cls_preds.squeeze(-1) - top_labels = torch.zeros( - cls_preds.shape[0], - device=cls_preds.device, - dtype=torch.long) - - else: - top_labels = cls_labels.long() - top_scores = cls_preds.squeeze(-1) - - if self.test_cfg['score_threshold'] > 0.0: - thresh = torch.tensor( - [self.test_cfg['score_threshold']], - device=cls_preds.device).type_as(cls_preds) - top_scores_keep = top_scores >= thresh - top_scores = top_scores.masked_select(top_scores_keep) - - if top_scores.shape[0] != 0: - if self.test_cfg['score_threshold'] > 0.0: - box_preds = box_preds[top_scores_keep] - top_labels = top_labels[top_scores_keep] - - boxes_for_nms = xywhr2xyxyr(img_metas[i]['box_type_3d']( - box_preds[:, :], self.bbox_coder.code_size).bev) - # the nms in 3d detection just remove overlap boxes. - - selected = nms_bev( - boxes_for_nms, - top_scores, - thresh=self.test_cfg['nms_thr'], - pre_max_size=self.test_cfg['pre_max_size'], - post_max_size=self.test_cfg['post_max_size']) - else: - selected = [] - - # if selected is not None: - selected_boxes = box_preds[selected] - selected_labels = top_labels[selected] - selected_scores = top_scores[selected] - - # finally generate predictions. - if selected_boxes.shape[0] != 0: - box_preds = selected_boxes - scores = selected_scores - label_preds = selected_labels - final_box_preds = box_preds - final_scores = scores - final_labels = label_preds - if post_center_range is not None: - mask = (final_box_preds[:, :3] >= - post_center_range[:3]).all(1) - mask &= (final_box_preds[:, :3] <= - post_center_range[3:]).all(1) - predictions_dict = dict( - bboxes=final_box_preds[mask], - scores=final_scores[mask], - labels=final_labels[mask]) - else: - predictions_dict = dict( - bboxes=final_box_preds, - scores=final_scores, - labels=final_labels) - else: - dtype = batch_reg_preds[0].dtype - device = batch_reg_preds[0].device - predictions_dict = dict( - bboxes=torch.zeros([0, self.bbox_coder.code_size], - dtype=dtype, - device=device), - scores=torch.zeros([0], dtype=dtype, device=device), - labels=torch.zeros([0], - dtype=top_labels.dtype, - device=device)) - - predictions_dicts.append(predictions_dict) - return predictions_dicts diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/fcos_mono3d_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/fcos_mono3d_head.py deleted file mode 100644 index d1da5744..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/fcos_mono3d_head.py +++ /dev/null @@ -1,958 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from logging import warning - -import numpy as np -import torch -from mmcv.cnn import Scale, normal_init -from mmcv.runner import force_fp32 -from torch import nn as nn - -from mmdet3d.core import (box3d_multiclass_nms, limit_period, points_img2cam, - xywhr2xyxyr) -from mmdet.core import multi_apply -from mmdet.core.bbox.builder import build_bbox_coder -from ..builder import HEADS, build_loss -from .anchor_free_mono3d_head import AnchorFreeMono3DHead - -INF = 1e8 - - -@HEADS.register_module() -class FCOSMono3DHead(AnchorFreeMono3DHead): - """Anchor-free head used in FCOS3D. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - regress_ranges (tuple[tuple[int, int]], optional): Regress range of multiple - level points. - center_sampling (bool, optional): If true, use center sampling. Default: True. - center_sample_radius (float, optional): Radius of center sampling. Default: 1.5. - norm_on_bbox (bool, optional): If true, normalize the regression targets - with FPN strides. Default: True. - centerness_on_reg (bool, optional): If true, position centerness on the - regress branch. Please refer to https://github.com/tianzhi0549/FCOS/issues/89#issuecomment-516877042. - Default: True. - centerness_alpha (int, optional): Parameter used to adjust the intensity - attenuation from the center to the periphery. Default: 2.5. - loss_cls (dict, optional): Config of classification loss. - loss_bbox (dict, optional): Config of localization loss. - loss_dir (dict, optional): Config of direction classification loss. - loss_attr (dict, optional): Config of attribute classification loss. - loss_centerness (dict, optional): Config of centerness loss. - norm_cfg (dict, optional): dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, requires_grad=True). - centerness_branch (tuple[int], optional): Channels for centerness branch. - Default: (64, ). 
- """ # noqa: E501 - - def __init__(self, - regress_ranges=((-1, 48), (48, 96), (96, 192), (192, 384), - (384, INF)), - center_sampling=True, - center_sample_radius=1.5, - norm_on_bbox=True, - centerness_on_reg=True, - centerness_alpha=2.5, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_dir=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_attr=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_centerness=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - bbox_coder=dict(type='FCOS3DBBoxCoder', code_size=9), - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - centerness_branch=(64, ), - init_cfg=None, - **kwargs): - self.regress_ranges = regress_ranges - self.center_sampling = center_sampling - self.center_sample_radius = center_sample_radius - self.norm_on_bbox = norm_on_bbox - self.centerness_on_reg = centerness_on_reg - self.centerness_alpha = centerness_alpha - self.centerness_branch = centerness_branch - super().__init__( - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_dir=loss_dir, - loss_attr=loss_attr, - norm_cfg=norm_cfg, - init_cfg=init_cfg, - **kwargs) - self.loss_centerness = build_loss(loss_centerness) - bbox_coder['code_size'] = self.bbox_code_size - self.bbox_coder = build_bbox_coder(bbox_coder) - - def _init_layers(self): - """Initialize layers of the head.""" - super()._init_layers() - self.conv_centerness_prev = self._init_branch( - conv_channels=self.centerness_branch, - conv_strides=(1, ) * len(self.centerness_branch)) - self.conv_centerness = nn.Conv2d(self.centerness_branch[-1], 1, 1) - self.scale_dim = 3 # only for offset, depth and size regression - self.scales = nn.ModuleList([ - nn.ModuleList([Scale(1.0) for _ in range(self.scale_dim)]) - for _ in self.strides - ]) - - def init_weights(self): - """Initialize weights of the head. - - We currently still use the customized init_weights because the default - init of DCN triggered by the init_cfg will init conv_offset.weight, - which mistakenly affects the training stability. - """ - super().init_weights() - for m in self.conv_centerness_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - normal_init(self.conv_centerness, std=0.01) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2). - attr_preds (list[Tensor]): Attribute scores for each scale - level, each is a 4D-tensor, the channel number is - num_points * num_attrs. - centernesses (list[Tensor]): Centerness for each scale level, - each is a 4D-tensor, the channel number is num_points * 1. 
- """ - # Note: we use [:5] to filter feats and only return predictions - return multi_apply(self.forward_single, feats, self.scales, - self.strides)[:5] - - def forward_single(self, x, scale, stride): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - stride (int): The corresponding stride for feature maps, only - used to normalize the bbox prediction when self.norm_on_bbox - is True. - - Returns: - tuple: scores for each class, bbox and direction class - predictions, centerness predictions of input feature maps. - """ - cls_score, bbox_pred, dir_cls_pred, attr_pred, cls_feat, reg_feat = \ - super().forward_single(x) - - if self.centerness_on_reg: - clone_reg_feat = reg_feat.clone() - for conv_centerness_prev_layer in self.conv_centerness_prev: - clone_reg_feat = conv_centerness_prev_layer(clone_reg_feat) - centerness = self.conv_centerness(clone_reg_feat) - else: - clone_cls_feat = cls_feat.clone() - for conv_centerness_prev_layer in self.conv_centerness_prev: - clone_cls_feat = conv_centerness_prev_layer(clone_cls_feat) - centerness = self.conv_centerness(clone_cls_feat) - - bbox_pred = self.bbox_coder.decode(bbox_pred, scale, stride, - self.training, cls_score) - - return cls_score, bbox_pred, dir_cls_pred, attr_pred, centerness, \ - cls_feat, reg_feat - - @staticmethod - def add_sin_difference(boxes1, boxes2): - """Convert the rotation difference to difference in sine function. - - Args: - boxes1 (torch.Tensor): Original Boxes in shape (NxC), where C>=7 - and the 7th dimension is rotation dimension. - boxes2 (torch.Tensor): Target boxes in shape (NxC), where C>=7 and - the 7th dimension is rotation dimension. - - Returns: - tuple[torch.Tensor]: ``boxes1`` and ``boxes2`` whose 7th - dimensions are changed. - """ - rad_pred_encoding = torch.sin(boxes1[..., 6:7]) * torch.cos( - boxes2[..., 6:7]) - rad_tg_encoding = torch.cos(boxes1[..., 6:7]) * torch.sin(boxes2[..., - 6:7]) - boxes1 = torch.cat( - [boxes1[..., :6], rad_pred_encoding, boxes1[..., 7:]], dim=-1) - boxes2 = torch.cat([boxes2[..., :6], rad_tg_encoding, boxes2[..., 7:]], - dim=-1) - return boxes1, boxes2 - - @staticmethod - def get_direction_target(reg_targets, - dir_offset=0, - dir_limit_offset=0.0, - num_bins=2, - one_hot=True): - """Encode direction to 0 ~ num_bins-1. - - Args: - reg_targets (torch.Tensor): Bbox regression targets. - dir_offset (int, optional): Direction offset. Default to 0. - dir_limit_offset (float, optional): Offset to set the direction - range. Default to 0.0. - num_bins (int, optional): Number of bins to divide 2*PI. - Default to 2. - one_hot (bool, optional): Whether to encode as one hot. - Default to True. - - Returns: - torch.Tensor: Encoded direction targets. 
- """ - rot_gt = reg_targets[..., 6] - offset_rot = limit_period(rot_gt - dir_offset, dir_limit_offset, - 2 * np.pi) - dir_cls_targets = torch.floor(offset_rot / - (2 * np.pi / num_bins)).long() - dir_cls_targets = torch.clamp(dir_cls_targets, min=0, max=num_bins - 1) - if one_hot: - dir_targets = torch.zeros( - *list(dir_cls_targets.shape), - num_bins, - dtype=reg_targets.dtype, - device=dir_cls_targets.device) - dir_targets.scatter_(dir_cls_targets.unsqueeze(dim=-1).long(), 1.0) - dir_cls_targets = dir_targets - return dir_cls_targets - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds', 'attr_preds', - 'centernesses')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - attr_preds, - centernesses, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - attr_preds (list[Tensor]): Attribute scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_attrs. - centernesses (list[Tensor]): Centerness for each scale level, each - is a 4D-tensor, the channel number is num_points * 1. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_3d (list[Tensor]): 3D boxes ground truth with shape of - (num_gts, code_size). - gt_labels_3d (list[Tensor]): same as gt_labels - centers2d (list[Tensor]): 2D centers on the image with shape of - (num_gts, 2). - depths (list[Tensor]): Depth ground truth with shape of - (num_gts, ). - attr_labels (list[Tensor]): Attributes indices of each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - assert len(cls_scores) == len(bbox_preds) == len(centernesses) == len( - attr_preds) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.get_points(featmap_sizes, bbox_preds[0].dtype, - bbox_preds[0].device) - labels_3d, bbox_targets_3d, centerness_targets, attr_targets = \ - self.get_targets( - all_level_points, gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels) - - num_imgs = cls_scores[0].size(0) - # flatten cls_scores, bbox_preds, dir_cls_preds and centerness - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, sum(self.group_reg_dims)) - for bbox_pred in bbox_preds - ] - flatten_dir_cls_preds = [ - dir_cls_pred.permute(0, 2, 3, 1).reshape(-1, 2) - for dir_cls_pred in dir_cls_preds - ] - flatten_centerness = [ - centerness.permute(0, 2, 3, 1).reshape(-1) - for centerness in centernesses - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_dir_cls_preds = torch.cat(flatten_dir_cls_preds) - flatten_centerness = torch.cat(flatten_centerness) - flatten_labels_3d = torch.cat(labels_3d) - flatten_bbox_targets_3d = torch.cat(bbox_targets_3d) - flatten_centerness_targets = torch.cat(centerness_targets) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((flatten_labels_3d >= 0) - & (flatten_labels_3d < bg_class_ind)).nonzero().reshape(-1) - num_pos = len(pos_inds) - - loss_cls = self.loss_cls( - flatten_cls_scores, - flatten_labels_3d, - avg_factor=num_pos + num_imgs) # avoid num_pos is 0 - - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_dir_cls_preds = flatten_dir_cls_preds[pos_inds] - pos_centerness = flatten_centerness[pos_inds] - - if self.pred_attrs: - flatten_attr_preds = [ - attr_pred.permute(0, 2, 3, 1).reshape(-1, self.num_attrs) - for attr_pred in attr_preds - ] - flatten_attr_preds = torch.cat(flatten_attr_preds) - flatten_attr_targets = torch.cat(attr_targets) - pos_attr_preds = flatten_attr_preds[pos_inds] - - if num_pos > 0: - pos_bbox_targets_3d = flatten_bbox_targets_3d[pos_inds] - pos_centerness_targets = flatten_centerness_targets[pos_inds] - if self.pred_attrs: - pos_attr_targets = flatten_attr_targets[pos_inds] - bbox_weights = pos_centerness_targets.new_ones( - len(pos_centerness_targets), sum(self.group_reg_dims)) - equal_weights = pos_centerness_targets.new_ones( - pos_centerness_targets.shape) - - code_weight = self.train_cfg.get('code_weight', None) - if code_weight: - assert len(code_weight) == sum(self.group_reg_dims) - bbox_weights = bbox_weights * bbox_weights.new_tensor( - code_weight) - - if self.use_direction_classifier: - pos_dir_cls_targets = self.get_direction_target( - pos_bbox_targets_3d, - self.dir_offset, - self.dir_limit_offset, - one_hot=False) - - if self.diff_rad_by_sin: - pos_bbox_preds, pos_bbox_targets_3d = self.add_sin_difference( - pos_bbox_preds, pos_bbox_targets_3d) - - loss_offset = self.loss_bbox( - pos_bbox_preds[:, :2], - pos_bbox_targets_3d[:, :2], - weight=bbox_weights[:, :2], - avg_factor=equal_weights.sum()) - loss_depth = self.loss_bbox( - pos_bbox_preds[:, 2], - pos_bbox_targets_3d[:, 2], - weight=bbox_weights[:, 2], - avg_factor=equal_weights.sum()) - loss_size = self.loss_bbox( - pos_bbox_preds[:, 3:6], - pos_bbox_targets_3d[:, 3:6], - weight=bbox_weights[:, 3:6], - 
avg_factor=equal_weights.sum()) - loss_rotsin = self.loss_bbox( - pos_bbox_preds[:, 6], - pos_bbox_targets_3d[:, 6], - weight=bbox_weights[:, 6], - avg_factor=equal_weights.sum()) - loss_velo = None - if self.pred_velo: - loss_velo = self.loss_bbox( - pos_bbox_preds[:, 7:9], - pos_bbox_targets_3d[:, 7:9], - weight=bbox_weights[:, 7:9], - avg_factor=equal_weights.sum()) - - loss_centerness = self.loss_centerness(pos_centerness, - pos_centerness_targets) - - # direction classification loss - loss_dir = None - # TODO: add more check for use_direction_classifier - if self.use_direction_classifier: - loss_dir = self.loss_dir( - pos_dir_cls_preds, - pos_dir_cls_targets, - equal_weights, - avg_factor=equal_weights.sum()) - - # attribute classification loss - loss_attr = None - if self.pred_attrs: - loss_attr = self.loss_attr( - pos_attr_preds, - pos_attr_targets, - pos_centerness_targets, - avg_factor=pos_centerness_targets.sum()) - - else: - # need absolute due to possible negative delta x/y - loss_offset = pos_bbox_preds[:, :2].sum() - loss_depth = pos_bbox_preds[:, 2].sum() - loss_size = pos_bbox_preds[:, 3:6].sum() - loss_rotsin = pos_bbox_preds[:, 6].sum() - loss_velo = None - if self.pred_velo: - loss_velo = pos_bbox_preds[:, 7:9].sum() - loss_centerness = pos_centerness.sum() - loss_dir = None - if self.use_direction_classifier: - loss_dir = pos_dir_cls_preds.sum() - loss_attr = None - if self.pred_attrs: - loss_attr = pos_attr_preds.sum() - - loss_dict = dict( - loss_cls=loss_cls, - loss_offset=loss_offset, - loss_depth=loss_depth, - loss_size=loss_size, - loss_rotsin=loss_rotsin, - loss_centerness=loss_centerness) - - if loss_velo is not None: - loss_dict['loss_velo'] = loss_velo - - if loss_dir is not None: - loss_dict['loss_dir'] = loss_dir - - if loss_attr is not None: - loss_dict['loss_attr'] = loss_attr - - return loss_dict - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds', 'attr_preds', - 'centernesses')) - def get_bboxes(self, - cls_scores, - bbox_preds, - dir_cls_preds, - attr_preds, - centernesses, - img_metas, - cfg=None, - rescale=None): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W) - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - attr_preds (list[Tensor]): Attribute scores for each scale level - Has shape (N, num_points * num_attrs, H, W) - centernesses (list[Tensor]): Centerness for each scale level with - shape (N, num_points * 1, H, W) - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. The second item is a - (n,) tensor where each item is the predicted class label of - the corresponding box. 
- """ - assert len(cls_scores) == len(bbox_preds) == len(dir_cls_preds) == \ - len(centernesses) == len(attr_preds) - num_levels = len(cls_scores) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - mlvl_points = self.get_points(featmap_sizes, bbox_preds[0].dtype, - bbox_preds[0].device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - if self.use_direction_classifier: - dir_cls_pred_list = [ - dir_cls_preds[i][img_id].detach() - for i in range(num_levels) - ] - else: - dir_cls_pred_list = [ - cls_scores[i][img_id].new_full( - [2, *cls_scores[i][img_id].shape[1:]], 0).detach() - for i in range(num_levels) - ] - if self.pred_attrs: - attr_pred_list = [ - attr_preds[i][img_id].detach() for i in range(num_levels) - ] - else: - attr_pred_list = [ - cls_scores[i][img_id].new_full( - [self.num_attrs, *cls_scores[i][img_id].shape[1:]], - self.attr_background_label).detach() - for i in range(num_levels) - ] - centerness_pred_list = [ - centernesses[i][img_id].detach() for i in range(num_levels) - ] - input_meta = img_metas[img_id] - det_bboxes = self._get_bboxes_single( - cls_score_list, bbox_pred_list, dir_cls_pred_list, - attr_pred_list, centerness_pred_list, mlvl_points, input_meta, - cfg, rescale) - result_list.append(det_bboxes) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - dir_cls_preds, - attr_preds, - centernesses, - mlvl_points, - input_meta, - cfg, - rescale=False): - """Transform outputs for a single batch item into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for a single scale level - Has shape (num_points * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for a single scale - level with shape (num_points * bbox_code_size, H, W). - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on a single scale level with shape - (num_points * 2, H, W) - attr_preds (list[Tensor]): Attribute scores for each scale level - Has shape (N, num_points * num_attrs, H, W) - centernesses (list[Tensor]): Centerness for a single scale level - with shape (num_points, H, W). - mlvl_points (list[Tensor]): Box reference for a single scale level - with shape (num_total_points, 2). - input_meta (dict): Metadata of input image. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - - Returns: - tuples[Tensor]: Predicted 3D boxes, scores, labels and attributes. 
- """ - view = np.array(input_meta['cam2img']) - scale_factor = input_meta['scale_factor'] - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_points) - mlvl_centers2d = [] - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_dir_scores = [] - mlvl_attr_scores = [] - mlvl_centerness = [] - - for cls_score, bbox_pred, dir_cls_pred, attr_pred, centerness, \ - points in zip(cls_scores, bbox_preds, dir_cls_preds, - attr_preds, centernesses, mlvl_points): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - dir_cls_pred = dir_cls_pred.permute(1, 2, 0).reshape(-1, 2) - dir_cls_score = torch.max(dir_cls_pred, dim=-1)[1] - attr_pred = attr_pred.permute(1, 2, 0).reshape(-1, self.num_attrs) - attr_score = torch.max(attr_pred, dim=-1)[1] - centerness = centerness.permute(1, 2, 0).reshape(-1).sigmoid() - - bbox_pred = bbox_pred.permute(1, 2, - 0).reshape(-1, - sum(self.group_reg_dims)) - bbox_pred = bbox_pred[:, :self.bbox_code_size] - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - max_scores, _ = (scores * centerness[:, None]).max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - points = points[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - dir_cls_pred = dir_cls_pred[topk_inds, :] - centerness = centerness[topk_inds] - dir_cls_score = dir_cls_score[topk_inds] - attr_score = attr_score[topk_inds] - # change the offset to actual center predictions - bbox_pred[:, :2] = points - bbox_pred[:, :2] - if rescale: - bbox_pred[:, :2] /= bbox_pred[:, :2].new_tensor(scale_factor) - pred_center2d = bbox_pred[:, :3].clone() - bbox_pred[:, :3] = points_img2cam(bbox_pred[:, :3], view) - mlvl_centers2d.append(pred_center2d) - mlvl_bboxes.append(bbox_pred) - mlvl_scores.append(scores) - mlvl_dir_scores.append(dir_cls_score) - mlvl_attr_scores.append(attr_score) - mlvl_centerness.append(centerness) - - mlvl_centers2d = torch.cat(mlvl_centers2d) - mlvl_bboxes = torch.cat(mlvl_bboxes) - mlvl_dir_scores = torch.cat(mlvl_dir_scores) - - # change local yaw to global yaw for 3D nms - cam2img = mlvl_centers2d.new_zeros((4, 4)) - cam2img[:view.shape[0], :view.shape[1]] = \ - mlvl_centers2d.new_tensor(view) - mlvl_bboxes = self.bbox_coder.decode_yaw(mlvl_bboxes, mlvl_centers2d, - mlvl_dir_scores, - self.dir_offset, cam2img) - - mlvl_bboxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - mlvl_bboxes, box_dim=self.bbox_code_size, - origin=(0.5, 0.5, 0.5)).bev) - - mlvl_scores = torch.cat(mlvl_scores) - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - mlvl_attr_scores = torch.cat(mlvl_attr_scores) - mlvl_centerness = torch.cat(mlvl_centerness) - # no scale_factors in box3d_multiclass_nms - # Then we multiply it from outside - mlvl_nms_scores = mlvl_scores * mlvl_centerness[:, None] - results = box3d_multiclass_nms(mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_nms_scores, cfg.score_thr, - cfg.max_per_img, cfg, mlvl_dir_scores, - mlvl_attr_scores) - bboxes, scores, labels, dir_scores, attrs = results - attrs = attrs.to(labels.dtype) # change data type to int - bboxes = input_meta['box_type_3d']( - bboxes, box_dim=self.bbox_code_size, origin=(0.5, 0.5, 0.5)) - # Note that the predictions use origin (0.5, 0.5, 0.5) - # Due to the ground truth centers2d are the gravity 
center of objects - # v0.10.0 fix inplace operation to the input tensor of cam_box3d - # So here we also need to add origin=(0.5, 0.5, 0.5) - if not self.pred_attrs: - attrs = None - - return bboxes, scores, labels, attrs - - @staticmethod - def pts2Dto3D(points, view): - """ - Args: - points (torch.Tensor): points in 2D images, [N, 3], - 3 corresponds with x, y in the image and depth. - view (np.ndarray): camera intrinsic, [3, 3] - - Returns: - torch.Tensor: points in 3D space. [N, 3], - 3 corresponds with x, y, z in 3D space. - """ - warning.warn('DeprecationWarning: This static method has been moved ' - 'out of this class to mmdet3d/core. The function ' - 'pts2Dto3D will be deprecated.') - - assert view.shape[0] <= 4 - assert view.shape[1] <= 4 - assert points.shape[1] == 3 - - points2D = points[:, :2] - depths = points[:, 2].view(-1, 1) - unnorm_points2D = torch.cat([points2D * depths, depths], dim=1) - - viewpad = torch.eye(4, dtype=points2D.dtype, device=points2D.device) - viewpad[:view.shape[0], :view.shape[1]] = points2D.new_tensor(view) - inv_viewpad = torch.inverse(viewpad).transpose(0, 1) - - # Do operation in homogeneous coordinates. - nbr_points = unnorm_points2D.shape[0] - homo_points2D = torch.cat( - [unnorm_points2D, - points2D.new_ones((nbr_points, 1))], dim=1) - points3D = torch.mm(homo_points2D, inv_viewpad)[:, :3] - - return points3D - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points according to feature map sizes.""" - y, x = super()._get_points_single(featmap_size, stride, dtype, device) - points = torch.stack((x.reshape(-1) * stride, y.reshape(-1) * stride), - dim=-1) + stride // 2 - return points - - def get_targets(self, points, gt_bboxes_list, gt_labels_list, - gt_bboxes_3d_list, gt_labels_3d_list, centers2d_list, - depths_list, attr_labels_list): - """Compute regression, classification and centerss targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - gt_bboxes_3d_list (list[Tensor]): 3D Ground truth bboxes of each - image, each has shape (num_gt, bbox_code_size). - gt_labels_3d_list (list[Tensor]): 3D Ground truth labels of each - box, each has shape (num_gt,). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - each has shape (num_gt, 2). - depths_list (list[Tensor]): Depth of projected 3D centers onto 2D - image, each has shape (num_gt, 1). - attr_labels_list (list[Tensor]): Attribute labels of each box, - each has shape (num_gt,). - - Returns: - tuple: - concat_lvl_labels (list[Tensor]): Labels of each level. - concat_lvl_bbox_targets (list[Tensor]): BBox targets of each - level. 
- """ - assert len(points) == len(self.regress_ranges) - num_levels = len(points) - # expand regress ranges to align with points - expanded_regress_ranges = [ - points[i].new_tensor(self.regress_ranges[i])[None].expand_as( - points[i]) for i in range(num_levels) - ] - # concat all levels points and regress ranges - concat_regress_ranges = torch.cat(expanded_regress_ranges, dim=0) - concat_points = torch.cat(points, dim=0) - - # the number of points per img, per lvl - num_points = [center.size(0) for center in points] - - if attr_labels_list is None: - attr_labels_list = [ - gt_labels.new_full(gt_labels.shape, self.attr_background_label) - for gt_labels in gt_labels_list - ] - - # get labels and bbox_targets of each image - _, _, labels_3d_list, bbox_targets_3d_list, centerness_targets_list, \ - attr_targets_list = multi_apply( - self._get_target_single, - gt_bboxes_list, - gt_labels_list, - gt_bboxes_3d_list, - gt_labels_3d_list, - centers2d_list, - depths_list, - attr_labels_list, - points=concat_points, - regress_ranges=concat_regress_ranges, - num_points_per_lvl=num_points) - - # split to per img, per level - labels_3d_list = [ - labels_3d.split(num_points, 0) for labels_3d in labels_3d_list - ] - bbox_targets_3d_list = [ - bbox_targets_3d.split(num_points, 0) - for bbox_targets_3d in bbox_targets_3d_list - ] - centerness_targets_list = [ - centerness_targets.split(num_points, 0) - for centerness_targets in centerness_targets_list - ] - attr_targets_list = [ - attr_targets.split(num_points, 0) - for attr_targets in attr_targets_list - ] - - # concat per level image - concat_lvl_labels_3d = [] - concat_lvl_bbox_targets_3d = [] - concat_lvl_centerness_targets = [] - concat_lvl_attr_targets = [] - for i in range(num_levels): - concat_lvl_labels_3d.append( - torch.cat([labels[i] for labels in labels_3d_list])) - concat_lvl_centerness_targets.append( - torch.cat([ - centerness_targets[i] - for centerness_targets in centerness_targets_list - ])) - bbox_targets_3d = torch.cat([ - bbox_targets_3d[i] for bbox_targets_3d in bbox_targets_3d_list - ]) - concat_lvl_attr_targets.append( - torch.cat( - [attr_targets[i] for attr_targets in attr_targets_list])) - if self.norm_on_bbox: - bbox_targets_3d[:, : - 2] = bbox_targets_3d[:, :2] / self.strides[i] - concat_lvl_bbox_targets_3d.append(bbox_targets_3d) - return concat_lvl_labels_3d, concat_lvl_bbox_targets_3d, \ - concat_lvl_centerness_targets, concat_lvl_attr_targets - - def _get_target_single(self, gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels, - points, regress_ranges, num_points_per_lvl): - """Compute regression and classification targets for a single image.""" - num_points = points.size(0) - num_gts = gt_labels.size(0) - if not isinstance(gt_bboxes_3d, torch.Tensor): - gt_bboxes_3d = gt_bboxes_3d.tensor.to(gt_bboxes.device) - if num_gts == 0: - return gt_labels.new_full((num_points,), self.background_label), \ - gt_bboxes.new_zeros((num_points, 4)), \ - gt_labels_3d.new_full( - (num_points,), self.background_label), \ - gt_bboxes_3d.new_zeros((num_points, self.bbox_code_size)), \ - gt_bboxes_3d.new_zeros((num_points,)), \ - attr_labels.new_full( - (num_points,), self.attr_background_label) - - # change orientation to local yaw - gt_bboxes_3d[..., 6] = -torch.atan2( - gt_bboxes_3d[..., 0], gt_bboxes_3d[..., 2]) + gt_bboxes_3d[..., 6] - - areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1]) - areas = areas[None].repeat(num_points, 1) - regress_ranges = regress_ranges[:, None, 
:].expand( - num_points, num_gts, 2) - gt_bboxes = gt_bboxes[None].expand(num_points, num_gts, 4) - centers2d = centers2d[None].expand(num_points, num_gts, 2) - gt_bboxes_3d = gt_bboxes_3d[None].expand(num_points, num_gts, - self.bbox_code_size) - depths = depths[None, :, None].expand(num_points, num_gts, 1) - xs, ys = points[:, 0], points[:, 1] - xs = xs[:, None].expand(num_points, num_gts) - ys = ys[:, None].expand(num_points, num_gts) - - delta_xs = (xs - centers2d[..., 0])[..., None] - delta_ys = (ys - centers2d[..., 1])[..., None] - bbox_targets_3d = torch.cat( - (delta_xs, delta_ys, depths, gt_bboxes_3d[..., 3:]), dim=-1) - - left = xs - gt_bboxes[..., 0] - right = gt_bboxes[..., 2] - xs - top = ys - gt_bboxes[..., 1] - bottom = gt_bboxes[..., 3] - ys - bbox_targets = torch.stack((left, top, right, bottom), -1) - - assert self.center_sampling is True, 'Setting center_sampling to '\ - 'False has not been implemented for FCOS3D.' - # condition1: inside a `center bbox` - radius = self.center_sample_radius - center_xs = centers2d[..., 0] - center_ys = centers2d[..., 1] - center_gts = torch.zeros_like(gt_bboxes) - stride = center_xs.new_zeros(center_xs.shape) - - # project the points on current lvl back to the `original` sizes - lvl_begin = 0 - for lvl_idx, num_points_lvl in enumerate(num_points_per_lvl): - lvl_end = lvl_begin + num_points_lvl - stride[lvl_begin:lvl_end] = self.strides[lvl_idx] * radius - lvl_begin = lvl_end - - center_gts[..., 0] = center_xs - stride - center_gts[..., 1] = center_ys - stride - center_gts[..., 2] = center_xs + stride - center_gts[..., 3] = center_ys + stride - - cb_dist_left = xs - center_gts[..., 0] - cb_dist_right = center_gts[..., 2] - xs - cb_dist_top = ys - center_gts[..., 1] - cb_dist_bottom = center_gts[..., 3] - ys - center_bbox = torch.stack( - (cb_dist_left, cb_dist_top, cb_dist_right, cb_dist_bottom), -1) - inside_gt_bbox_mask = center_bbox.min(-1)[0] > 0 - - # condition2: limit the regression range for each location - max_regress_distance = bbox_targets.max(-1)[0] - inside_regress_range = ( - (max_regress_distance >= regress_ranges[..., 0]) - & (max_regress_distance <= regress_ranges[..., 1])) - - # center-based criterion to deal with ambiguity - dists = torch.sqrt(torch.sum(bbox_targets_3d[..., :2]**2, dim=-1)) - dists[inside_gt_bbox_mask == 0] = INF - dists[inside_regress_range == 0] = INF - min_dist, min_dist_inds = dists.min(dim=1) - - labels = gt_labels[min_dist_inds] - labels_3d = gt_labels_3d[min_dist_inds] - attr_labels = attr_labels[min_dist_inds] - labels[min_dist == INF] = self.background_label # set as BG - labels_3d[min_dist == INF] = self.background_label # set as BG - attr_labels[min_dist == INF] = self.attr_background_label - - bbox_targets = bbox_targets[range(num_points), min_dist_inds] - bbox_targets_3d = bbox_targets_3d[range(num_points), min_dist_inds] - relative_dists = torch.sqrt( - torch.sum(bbox_targets_3d[..., :2]**2, - dim=-1)) / (1.414 * stride[:, 0]) - # [N, 1] / [N, 1] - centerness_targets = torch.exp(-self.centerness_alpha * relative_dists) - - return labels, bbox_targets, labels_3d, bbox_targets_3d, \ - centerness_targets, attr_labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/free_anchor3d_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/free_anchor3d_head.py deleted file mode 100644 index 330c456e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/free_anchor3d_head.py +++ /dev/null @@ 
-1,287 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from mmdet3d.core.bbox import bbox_overlaps_nearest_3d -from ..builder import HEADS -from .anchor3d_head import Anchor3DHead -from .train_mixins import get_direction_target - - -@HEADS.register_module() -class FreeAnchor3DHead(Anchor3DHead): - r"""`FreeAnchor `_ head for 3D detection. - - Note: - This implementation is directly modified from the `mmdet implementation - `_. - We find it also works on 3D detection with minor modification, i.e., - different hyper-parameters and a additional direction classifier. - - Args: - pre_anchor_topk (int): Number of boxes that be token in each bag. - bbox_thr (float): The threshold of the saturated linear function. It is - usually the same with the IoU threshold used in NMS. - gamma (float): Gamma parameter in focal loss. - alpha (float): Alpha parameter in focal loss. - kwargs (dict): Other arguments are the same as those in :class:`Anchor3DHead`. - """ # noqa: E501 - - def __init__(self, - pre_anchor_topk=50, - bbox_thr=0.6, - gamma=2.0, - alpha=0.5, - init_cfg=None, - **kwargs): - super().__init__(init_cfg=init_cfg, **kwargs) - self.pre_anchor_topk = pre_anchor_topk - self.bbox_thr = bbox_thr - self.gamma = gamma - self.alpha = alpha - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - gt_bboxes, - gt_labels, - input_metas, - gt_bboxes_ignore=None): - """Calculate loss of FreeAnchor head. - - Args: - cls_scores (list[torch.Tensor]): Classification scores of - different samples. - bbox_preds (list[torch.Tensor]): Box predictions of - different samples - dir_cls_preds (list[torch.Tensor]): Direction predictions of - different samples - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): Ground truth boxes. - gt_labels (list[torch.Tensor]): Ground truth labels. - input_metas (list[dict]): List of input meta information. - gt_bboxes_ignore (list[:obj:`BaseInstance3DBoxes`], optional): - Ground truth boxes that should be ignored. Defaults to None. - - Returns: - dict[str, torch.Tensor]: Loss items. - - - positive_bag_loss (torch.Tensor): Loss of positive samples. - - negative_bag_loss (torch.Tensor): Loss of negative samples. 
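The positive and negative bag losses returned by this head follow FreeAnchor's Mean-max formulation: for each object, its top-k matched anchors form a bag whose combined probability is penalized with -alpha * log(.), while unmatched anchors receive a focal-style negative term. A standalone sketch under the default alpha=0.5, gamma=2.0 from the class docstring (all shapes and probabilities below are illustrative):

```python
import torch
import torch.nn.functional as F

alpha, gamma = 0.5, 2.0

# Positive bag loss: Mean-max over the bag of matched anchors per object.
matched_prob = torch.rand(4, 50)                 # [num_objects, pre_anchor_topk]
weight = 1.0 / (1.0 - matched_prob).clamp(min=1e-12)
weight = weight / weight.sum(dim=1, keepdim=True)
bag_prob = (weight * matched_prob).sum(dim=1).clamp(0, 1)
positive_loss = alpha * F.binary_cross_entropy(
    bag_prob, torch.ones_like(bag_prob), reduction='none')

# Negative bag loss: focal-style penalty on anchors not claimed by any object.
cls_prob = torch.rand(1, 200, 3)                 # [batch, num_anchors, num_classes]
box_prob = torch.rand(1, 200, 3)                 # P{anchor j is positive for class c}
neg_prob = (cls_prob * (1 - box_prob)).clamp(0, 1)
negative_loss = (1 - alpha) * neg_prob**gamma * F.binary_cross_entropy(
    neg_prob, torch.zeros_like(neg_prob), reduction='none')
```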
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - anchor_list = self.get_anchors(featmap_sizes, input_metas) - anchors = [torch.cat(anchor) for anchor in anchor_list] - - # concatenate each level - cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape( - cls_score.size(0), -1, self.num_classes) - for cls_score in cls_scores - ] - bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape( - bbox_pred.size(0), -1, self.box_code_size) - for bbox_pred in bbox_preds - ] - dir_cls_preds = [ - dir_cls_pred.permute(0, 2, 3, - 1).reshape(dir_cls_pred.size(0), -1, 2) - for dir_cls_pred in dir_cls_preds - ] - - cls_scores = torch.cat(cls_scores, dim=1) - bbox_preds = torch.cat(bbox_preds, dim=1) - dir_cls_preds = torch.cat(dir_cls_preds, dim=1) - - cls_prob = torch.sigmoid(cls_scores) - box_prob = [] - num_pos = 0 - positive_losses = [] - for _, (anchors_, gt_labels_, gt_bboxes_, cls_prob_, bbox_preds_, - dir_cls_preds_) in enumerate( - zip(anchors, gt_labels, gt_bboxes, cls_prob, bbox_preds, - dir_cls_preds)): - - gt_bboxes_ = gt_bboxes_.tensor.to(anchors_.device) - - with torch.no_grad(): - # box_localization: a_{j}^{loc}, shape: [j, 4] - pred_boxes = self.bbox_coder.decode(anchors_, bbox_preds_) - - # object_box_iou: IoU_{ij}^{loc}, shape: [i, j] - object_box_iou = bbox_overlaps_nearest_3d( - gt_bboxes_, pred_boxes) - - # object_box_prob: P{a_{j} -> b_{i}}, shape: [i, j] - t1 = self.bbox_thr - t2 = object_box_iou.max( - dim=1, keepdim=True).values.clamp(min=t1 + 1e-6) - object_box_prob = ((object_box_iou - t1) / (t2 - t1)).clamp( - min=0, max=1) - - # object_cls_box_prob: P{a_{j} -> b_{i}}, shape: [i, c, j] - num_obj = gt_labels_.size(0) - indices = torch.stack( - [torch.arange(num_obj).type_as(gt_labels_), gt_labels_], - dim=0) - - object_cls_box_prob = torch.sparse_coo_tensor( - indices, object_box_prob) - - # image_box_iou: P{a_{j} \in A_{+}}, shape: [c, j] - """ - from "start" to "end" implement: - image_box_iou = torch.sparse.max(object_cls_box_prob, - dim=0).t() - - """ - # start - box_cls_prob = torch.sparse.sum( - object_cls_box_prob, dim=0).to_dense() - - indices = torch.nonzero(box_cls_prob, as_tuple=False).t_() - if indices.numel() == 0: - image_box_prob = torch.zeros( - anchors_.size(0), - self.num_classes).type_as(object_box_prob) - else: - nonzero_box_prob = torch.where( - (gt_labels_.unsqueeze(dim=-1) == indices[0]), - object_box_prob[:, indices[1]], - torch.tensor( - [0]).type_as(object_box_prob)).max(dim=0).values - - # upmap to shape [j, c] - image_box_prob = torch.sparse_coo_tensor( - indices.flip([0]), - nonzero_box_prob, - size=(anchors_.size(0), self.num_classes)).to_dense() - # end - - box_prob.append(image_box_prob) - - # construct bags for objects - match_quality_matrix = bbox_overlaps_nearest_3d( - gt_bboxes_, anchors_) - _, matched = torch.topk( - match_quality_matrix, - self.pre_anchor_topk, - dim=1, - sorted=False) - del match_quality_matrix - - # matched_cls_prob: P_{ij}^{cls} - matched_cls_prob = torch.gather( - cls_prob_[matched], 2, - gt_labels_.view(-1, 1, 1).repeat(1, self.pre_anchor_topk, - 1)).squeeze(2) - - # matched_box_prob: P_{ij}^{loc} - matched_anchors = anchors_[matched] - matched_object_targets = self.bbox_coder.encode( - matched_anchors, - gt_bboxes_.unsqueeze(dim=1).expand_as(matched_anchors)) - - # direction classification loss - loss_dir = None - if self.use_direction_classifier: - # also calculate direction prob: P_{ij}^{dir} - matched_dir_targets = 
get_direction_target( - matched_anchors, - matched_object_targets, - self.dir_offset, - self.dir_limit_offset, - one_hot=False) - loss_dir = self.loss_dir( - dir_cls_preds_[matched].transpose(-2, -1), - matched_dir_targets, - reduction_override='none') - - # generate bbox weights - if self.diff_rad_by_sin: - bbox_preds_[matched], matched_object_targets = \ - self.add_sin_difference( - bbox_preds_[matched], matched_object_targets) - bbox_weights = matched_anchors.new_ones(matched_anchors.size()) - # Use pop is not right, check performance - code_weight = self.train_cfg.get('code_weight', None) - if code_weight: - bbox_weights = bbox_weights * bbox_weights.new_tensor( - code_weight) - loss_bbox = self.loss_bbox( - bbox_preds_[matched], - matched_object_targets, - bbox_weights, - reduction_override='none').sum(-1) - - if loss_dir is not None: - loss_bbox += loss_dir - matched_box_prob = torch.exp(-loss_bbox) - - # positive_losses: {-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )} - num_pos += len(gt_bboxes_) - positive_losses.append( - self.positive_bag_loss(matched_cls_prob, matched_box_prob)) - - positive_loss = torch.cat(positive_losses).sum() / max(1, num_pos) - - # box_prob: P{a_{j} \in A_{+}} - box_prob = torch.stack(box_prob, dim=0) - - # negative_loss: - # \sum_{j}{ FL((1 - P{a_{j} \in A_{+}}) * (1 - P_{j}^{bg})) } / n||B|| - negative_loss = self.negative_bag_loss(cls_prob, box_prob).sum() / max( - 1, num_pos * self.pre_anchor_topk) - - losses = { - 'positive_bag_loss': positive_loss, - 'negative_bag_loss': negative_loss - } - return losses - - def positive_bag_loss(self, matched_cls_prob, matched_box_prob): - """Generate positive bag loss. - - Args: - matched_cls_prob (torch.Tensor): Classification probability - of matched positive samples. - matched_box_prob (torch.Tensor): Bounding box probability - of matched positive samples. - - Returns: - torch.Tensor: Loss of positive samples. - """ - # bag_prob = Mean-max(matched_prob) - matched_prob = matched_cls_prob * matched_box_prob - weight = 1 / torch.clamp(1 - matched_prob, 1e-12, None) - weight /= weight.sum(dim=1).unsqueeze(dim=-1) - bag_prob = (weight * matched_prob).sum(dim=1) - # positive_bag_loss = -self.alpha * log(bag_prob) - bag_prob = bag_prob.clamp(0, 1) # to avoid bug of BCE, check - return self.alpha * F.binary_cross_entropy( - bag_prob, torch.ones_like(bag_prob), reduction='none') - - def negative_bag_loss(self, cls_prob, box_prob): - """Generate negative bag loss. - - Args: - cls_prob (torch.Tensor): Classification probability - of negative samples. - box_prob (torch.Tensor): Bounding box probability - of negative samples. - - Returns: - torch.Tensor: Loss of negative samples. - """ - prob = cls_prob * (1 - box_prob) - prob = prob.clamp(0, 1) # to avoid bug of BCE, check - negative_bag_loss = prob**self.gamma * F.binary_cross_entropy( - prob, torch.zeros_like(prob), reduction='none') - return (1 - self.alpha) * negative_bag_loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/groupfree3d_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/groupfree3d_head.py deleted file mode 100644 index 578b20e3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/groupfree3d_head.py +++ /dev/null @@ -1,996 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy - -import numpy as np -import torch -from mmcv import ConfigDict -from mmcv.cnn import ConvModule, xavier_init -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer) -from mmcv.ops import PointsSampler as Points_Sampler -from mmcv.ops import gather_points -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.core.post_processing import aligned_3d_nms -from mmdet.core import build_bbox_coder, multi_apply -from ..builder import HEADS, build_loss -from .base_conv_bbox_head import BaseConvBboxHead - -EPS = 1e-6 - - -class PointsObjClsModule(BaseModule): - """object candidate point prediction from seed point features. - - Args: - in_channel (int): number of channels of seed point features. - num_convs (int, optional): number of conv layers. - Default: 3. - conv_cfg (dict, optional): Config of convolution. - Default: dict(type='Conv1d'). - norm_cfg (dict, optional): Config of normalization. - Default: dict(type='BN1d'). - act_cfg (dict, optional): Config of activation. - Default: dict(type='ReLU'). - """ - - def __init__(self, - in_channel, - num_convs=3, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - conv_channels = [in_channel for _ in range(num_convs - 1)] - conv_channels.append(1) - - self.mlp = nn.Sequential() - prev_channels = in_channel - for i in range(num_convs): - self.mlp.add_module( - f'layer{i}', - ConvModule( - prev_channels, - conv_channels[i], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg if i < num_convs - 1 else None, - act_cfg=act_cfg if i < num_convs - 1 else None, - bias=True, - inplace=True)) - prev_channels = conv_channels[i] - - def forward(self, seed_features): - """Forward pass. - - Args: - seed_features (torch.Tensor): seed features, dims: - (batch_size, feature_dim, num_seed) - - Returns: - torch.Tensor: objectness logits, dim: - (batch_size, 1, num_seed) - """ - return self.mlp(seed_features) - - -class GeneralSamplingModule(nn.Module): - """Sampling Points. - - Sampling points with given index. - """ - - def forward(self, xyz, features, sample_inds): - """Forward pass. - - Args: - xyz: (B, N, 3) the coordinates of the features. - features (Tensor): (B, C, N) features to sample. - sample_inds (Tensor): (B, M) the given index, - where M is the number of points. - - Returns: - Tensor: (B, M, 3) coordinates of sampled features - Tensor: (B, C, M) the sampled features. - Tensor: (B, M) the given index. - """ - xyz_t = xyz.transpose(1, 2).contiguous() - new_xyz = gather_points(xyz_t, sample_inds).transpose(1, - 2).contiguous() - new_features = gather_points(features, sample_inds).contiguous() - - return new_xyz, new_features, sample_inds - - -@HEADS.register_module() -class GroupFree3DHead(BaseModule): - r"""Bbox head of `Group-Free 3D `_. - - Args: - num_classes (int): The number of class. - in_channels (int): The dims of input features from backbone. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for encoding and - decoding boxes. - num_decoder_layers (int): The number of transformer decoder layers. - transformerlayers (dict): Config for transformer decoder. - train_cfg (dict): Config for training. - test_cfg (dict): Config for testing. - num_proposal (int): The number of initial sampling candidates. - pred_layer_cfg (dict): Config of classfication and regression - prediction layers. 
- size_cls_agnostic (bool): Whether the predicted size is class-agnostic. - gt_per_seed (int): the number of candidate instance each point belongs - to. - sampling_objectness_loss (dict): Config of initial sampling - objectness loss. - objectness_loss (dict): Config of objectness loss. - center_loss (dict): Config of center loss. - dir_class_loss (dict): Config of direction classification loss. - dir_res_loss (dict): Config of direction residual regression loss. - size_class_loss (dict): Config of size classification loss. - size_res_loss (dict): Config of size residual regression loss. - size_reg_loss (dict): Config of class-agnostic size regression loss. - semantic_loss (dict): Config of point-wise semantic segmentation loss. - """ - - def __init__(self, - num_classes, - in_channels, - bbox_coder, - num_decoder_layers, - transformerlayers, - decoder_self_posembeds=dict( - type='ConvBNPositionalEncoding', - input_channel=6, - num_pos_feats=288), - decoder_cross_posembeds=dict( - type='ConvBNPositionalEncoding', - input_channel=3, - num_pos_feats=288), - train_cfg=None, - test_cfg=None, - num_proposal=128, - pred_layer_cfg=None, - size_cls_agnostic=True, - gt_per_seed=3, - sampling_objectness_loss=None, - objectness_loss=None, - center_loss=None, - dir_class_loss=None, - dir_res_loss=None, - size_class_loss=None, - size_res_loss=None, - size_reg_loss=None, - semantic_loss=None, - init_cfg=None): - super(GroupFree3DHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.num_proposal = num_proposal - self.in_channels = in_channels - self.num_decoder_layers = num_decoder_layers - self.size_cls_agnostic = size_cls_agnostic - self.gt_per_seed = gt_per_seed - - # Transformer decoder layers - if isinstance(transformerlayers, ConfigDict): - transformerlayers = [ - copy.deepcopy(transformerlayers) - for _ in range(num_decoder_layers) - ] - else: - assert isinstance(transformerlayers, list) and \ - len(transformerlayers) == num_decoder_layers - self.decoder_layers = nn.ModuleList() - for i in range(self.num_decoder_layers): - self.decoder_layers.append( - build_transformer_layer(transformerlayers[i])) - self.embed_dims = self.decoder_layers[0].embed_dims - assert self.embed_dims == decoder_self_posembeds['num_pos_feats'] - assert self.embed_dims == decoder_cross_posembeds['num_pos_feats'] - - # bbox_coder - self.bbox_coder = build_bbox_coder(bbox_coder) - self.num_sizes = self.bbox_coder.num_sizes - self.num_dir_bins = self.bbox_coder.num_dir_bins - - # Initial object candidate sampling - self.gsample_module = GeneralSamplingModule() - self.fps_module = Points_Sampler([self.num_proposal]) - self.points_obj_cls = PointsObjClsModule(self.in_channels) - - self.fp16_enabled = False - - # initial candidate prediction - self.conv_pred = BaseConvBboxHead( - **pred_layer_cfg, - num_cls_out_channels=self._get_cls_out_channels(), - num_reg_out_channels=self._get_reg_out_channels()) - - # query proj and key proj - self.decoder_query_proj = nn.Conv1d( - self.embed_dims, self.embed_dims, kernel_size=1) - self.decoder_key_proj = nn.Conv1d( - self.embed_dims, self.embed_dims, kernel_size=1) - - # query position embed - self.decoder_self_posembeds = nn.ModuleList() - for _ in range(self.num_decoder_layers): - self.decoder_self_posembeds.append( - build_positional_encoding(decoder_self_posembeds)) - # key position embed - self.decoder_cross_posembeds = nn.ModuleList() - for _ in range(self.num_decoder_layers): - 
self.decoder_cross_posembeds.append( - build_positional_encoding(decoder_cross_posembeds)) - - # Prediction Head - self.prediction_heads = nn.ModuleList() - for i in range(self.num_decoder_layers): - self.prediction_heads.append( - BaseConvBboxHead( - **pred_layer_cfg, - num_cls_out_channels=self._get_cls_out_channels(), - num_reg_out_channels=self._get_reg_out_channels())) - - self.sampling_objectness_loss = build_loss(sampling_objectness_loss) - self.objectness_loss = build_loss(objectness_loss) - self.center_loss = build_loss(center_loss) - self.dir_res_loss = build_loss(dir_res_loss) - self.dir_class_loss = build_loss(dir_class_loss) - self.semantic_loss = build_loss(semantic_loss) - if self.size_cls_agnostic: - self.size_reg_loss = build_loss(size_reg_loss) - else: - self.size_res_loss = build_loss(size_res_loss) - self.size_class_loss = build_loss(size_class_loss) - - def init_weights(self): - """Initialize weights of transformer decoder in GroupFree3DHead.""" - # initialize transformer - for m in self.decoder_layers.parameters(): - if m.dim() > 1: - xavier_init(m, distribution='uniform') - for m in self.decoder_self_posembeds.parameters(): - if m.dim() > 1: - xavier_init(m, distribution='uniform') - for m in self.decoder_cross_posembeds.parameters(): - if m.dim() > 1: - xavier_init(m, distribution='uniform') - - def _get_cls_out_channels(self): - """Return the channel number of classification outputs.""" - # Class numbers (k) + objectness (1) - return self.num_classes + 1 - - def _get_reg_out_channels(self): - """Return the channel number of regression outputs.""" - # center residual (3), - # heading class+residual (num_dir_bins*2), - # size class+residual(num_sizes*4 or 3) - if self.size_cls_agnostic: - return 6 + self.num_dir_bins * 2 - else: - return 3 + self.num_dir_bins * 2 + self.num_sizes * 4 - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Coordinates of input points. - torch.Tensor: Features of input points. - torch.Tensor: Indices of input points. - """ - - seed_points = feat_dict['fp_xyz'][-1] - seed_features = feat_dict['fp_features'][-1] - seed_indices = feat_dict['fp_indices'][-1] - - return seed_points, seed_features, seed_indices - - def forward(self, feat_dict, sample_mod): - """Forward pass. - - Note: - The forward of GroupFree3DHead is divided into 2 steps: - - 1. Initial object candidates sampling. - 2. Iterative object box prediction by transformer decoder. - - Args: - feat_dict (dict): Feature dict from backbone. - sample_mod (str): sample mode for initial candidates sampling. - - Returns: - results (dict): Predictions of GroupFree3D head. - """ - assert sample_mod in ['fps', 'kps'] - - seed_xyz, seed_features, seed_indices = self._extract_input(feat_dict) - - results = dict( - seed_points=seed_xyz, - seed_features=seed_features, - seed_indices=seed_indices) - - # 1. Initial object candidates sampling. 
- if sample_mod == 'fps': - sample_inds = self.fps_module(seed_xyz, seed_features) - elif sample_mod == 'kps': - points_obj_cls_logits = self.points_obj_cls( - seed_features) # (batch_size, 1, num_seed) - points_obj_cls_scores = points_obj_cls_logits.sigmoid().squeeze(1) - sample_inds = torch.topk(points_obj_cls_scores, - self.num_proposal)[1].int() - results['seeds_obj_cls_logits'] = points_obj_cls_logits - else: - raise NotImplementedError( - f'Sample mode {sample_mod} is not supported!') - - candidate_xyz, candidate_features, sample_inds = self.gsample_module( - seed_xyz, seed_features, sample_inds) - - results['query_points_xyz'] = candidate_xyz # (B, M, 3) - results['query_points_feature'] = candidate_features # (B, C, M) - results['query_points_sample_inds'] = sample_inds.long() # (B, M) - - prefix = 'proposal.' - cls_predictions, reg_predictions = self.conv_pred(candidate_features) - decode_res = self.bbox_coder.split_pred(cls_predictions, - reg_predictions, candidate_xyz, - prefix) - - results.update(decode_res) - bbox3d = self.bbox_coder.decode(results, prefix) - - # 2. Iterative object box prediction by transformer decoder. - base_bbox3d = bbox3d[:, :, :6].detach().clone() - - query = self.decoder_query_proj(candidate_features).permute(2, 0, 1) - key = self.decoder_key_proj(seed_features).permute(2, 0, 1) - value = key - - # transformer decoder - results['num_decoder_layers'] = 0 - for i in range(self.num_decoder_layers): - prefix = f's{i}.' - - query_pos = self.decoder_self_posembeds[i](base_bbox3d).permute( - 2, 0, 1) - key_pos = self.decoder_cross_posembeds[i](seed_xyz).permute( - 2, 0, 1) - - query = self.decoder_layers[i]( - query, key, value, query_pos=query_pos, - key_pos=key_pos).permute(1, 2, 0) - - results[f'{prefix}query'] = query - - cls_predictions, reg_predictions = self.prediction_heads[i](query) - decode_res = self.bbox_coder.split_pred(cls_predictions, - reg_predictions, - candidate_xyz, prefix) - # TODO: should save bbox3d instead of decode_res? - results.update(decode_res) - - bbox3d = self.bbox_coder.decode(results, prefix) - results[f'{prefix}bbox3d'] = bbox3d - base_bbox3d = bbox3d[:, :, :6].detach().clone() - query = query.permute(2, 0, 1) - - results['num_decoder_layers'] += 1 - - return results - - @force_fp32(apply_to=('bbox_preds', )) - def loss(self, - bbox_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - img_metas=None, - gt_bboxes_ignore=None, - ret_target=False): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of vote head. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - img_metas (list[dict]): Contain pcd and img's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - ret_target (Bool): Return targets or not. - - Returns: - dict: Losses of GroupFree3D. 
- """ - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - bbox_preds) - (sampling_targets, sampling_weights, assigned_size_targets, - size_class_targets, size_res_targets, dir_class_targets, - dir_res_targets, center_targets, assigned_center_targets, - mask_targets, valid_gt_masks, objectness_targets, objectness_weights, - box_loss_weights, valid_gt_weights) = targets - - batch_size, proposal_num = size_class_targets.shape[:2] - - losses = dict() - - # calculate objectness classification loss - sampling_obj_score = bbox_preds['seeds_obj_cls_logits'].reshape(-1, 1) - sampling_objectness_loss = self.sampling_objectness_loss( - sampling_obj_score, - 1 - sampling_targets.reshape(-1), - sampling_weights.reshape(-1), - avg_factor=batch_size) - losses['sampling_objectness_loss'] = sampling_objectness_loss - - prefixes = ['proposal.'] + [ - f's{i}.' for i in range(bbox_preds['num_decoder_layers']) - ] - num_stages = len(prefixes) - for prefix in prefixes: - - # calculate objectness loss - obj_score = bbox_preds[f'{prefix}obj_scores'].transpose(2, 1) - objectness_loss = self.objectness_loss( - obj_score.reshape(-1, 1), - 1 - objectness_targets.reshape(-1), - objectness_weights.reshape(-1), - avg_factor=batch_size) - losses[f'{prefix}objectness_loss'] = objectness_loss / num_stages - - # calculate center loss - box_loss_weights_expand = box_loss_weights.unsqueeze(-1).expand( - -1, -1, 3) - center_loss = self.center_loss( - bbox_preds[f'{prefix}center'], - assigned_center_targets, - weight=box_loss_weights_expand) - losses[f'{prefix}center_loss'] = center_loss / num_stages - - # calculate direction class loss - dir_class_loss = self.dir_class_loss( - bbox_preds[f'{prefix}dir_class'].transpose(2, 1), - dir_class_targets, - weight=box_loss_weights) - losses[f'{prefix}dir_class_loss'] = dir_class_loss / num_stages - - # calculate direction residual loss - heading_label_one_hot = size_class_targets.new_zeros( - (batch_size, proposal_num, self.num_dir_bins)) - heading_label_one_hot.scatter_(2, dir_class_targets.unsqueeze(-1), - 1) - dir_res_norm = torch.sum( - bbox_preds[f'{prefix}dir_res_norm'] * heading_label_one_hot, - -1) - dir_res_loss = self.dir_res_loss( - dir_res_norm, dir_res_targets, weight=box_loss_weights) - losses[f'{prefix}dir_res_loss'] = dir_res_loss / num_stages - - if self.size_cls_agnostic: - # calculate class-agnostic size loss - size_reg_loss = self.size_reg_loss( - bbox_preds[f'{prefix}size'], - assigned_size_targets, - weight=box_loss_weights_expand) - losses[f'{prefix}size_reg_loss'] = size_reg_loss / num_stages - - else: - # calculate size class loss - size_class_loss = self.size_class_loss( - bbox_preds[f'{prefix}size_class'].transpose(2, 1), - size_class_targets, - weight=box_loss_weights) - losses[ - f'{prefix}size_class_loss'] = size_class_loss / num_stages - - # calculate size residual loss - one_hot_size_targets = size_class_targets.new_zeros( - (batch_size, proposal_num, self.num_sizes)) - one_hot_size_targets.scatter_(2, - size_class_targets.unsqueeze(-1), - 1) - one_hot_size_targets_expand = one_hot_size_targets.unsqueeze( - -1).expand(-1, -1, -1, 3).contiguous() - size_residual_norm = torch.sum( - bbox_preds[f'{prefix}size_res_norm'] * - one_hot_size_targets_expand, 2) - box_loss_weights_expand = box_loss_weights.unsqueeze( - -1).expand(-1, -1, 3) - size_res_loss = self.size_res_loss( - size_residual_norm, - size_res_targets, - weight=box_loss_weights_expand) - losses[f'{prefix}size_res_loss'] = size_res_loss / 
num_stages - - # calculate semantic loss - semantic_loss = self.semantic_loss( - bbox_preds[f'{prefix}sem_scores'].transpose(2, 1), - mask_targets, - weight=box_loss_weights) - losses[f'{prefix}semantic_loss'] = semantic_loss / num_stages - - if ret_target: - losses['targets'] = targets - - return losses - - def get_targets(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - bbox_preds=None, - max_gt_num=64): - """Generate targets of GroupFree3D head. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - pts_semantic_mask (list[torch.Tensor]): Point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): Point-wise instance - label of each batch. - bbox_preds (torch.Tensor): Bounding box predictions of vote head. - max_gt_num (int): Max number of GTs for single batch. - - Returns: - tuple[torch.Tensor]: Targets of GroupFree3D head. - """ - # find empty example - valid_gt_masks = list() - gt_num = list() - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - valid_gt_masks.append(gt_labels_3d[index].new_zeros(1)) - gt_num.append(1) - else: - valid_gt_masks.append(gt_labels_3d[index].new_ones( - gt_labels_3d[index].shape)) - gt_num.append(gt_labels_3d[index].shape[0]) - # max_gt_num = max(gt_num) - - max_gt_nums = [max_gt_num for _ in range(len(gt_labels_3d))] - - if pts_semantic_mask is None: - pts_semantic_mask = [None for i in range(len(gt_labels_3d))] - pts_instance_mask = [None for i in range(len(gt_labels_3d))] - - seed_points = [ - bbox_preds['seed_points'][i] for i in range(len(gt_labels_3d)) - ] - - seed_indices = [ - bbox_preds['seed_indices'][i] for i in range(len(gt_labels_3d)) - ] - - candidate_indices = [ - bbox_preds['query_points_sample_inds'][i] - for i in range(len(gt_labels_3d)) - ] - - (sampling_targets, assigned_size_targets, size_class_targets, - size_res_targets, dir_class_targets, dir_res_targets, center_targets, - assigned_center_targets, mask_targets, objectness_targets, - objectness_masks) = multi_apply(self.get_targets_single, points, - gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - max_gt_nums, seed_points, - seed_indices, candidate_indices) - - # pad targets as original code of GroupFree3D. 
- for index in range(len(gt_labels_3d)): - pad_num = max_gt_num - gt_labels_3d[index].shape[0] - valid_gt_masks[index] = F.pad(valid_gt_masks[index], (0, pad_num)) - - sampling_targets = torch.stack(sampling_targets) - sampling_weights = (sampling_targets >= 0).float() - sampling_normalizer = sampling_weights.sum(dim=1, keepdim=True).float() - sampling_weights /= sampling_normalizer.clamp(min=1.0) - - assigned_size_targets = torch.stack(assigned_size_targets) - center_targets = torch.stack(center_targets) - valid_gt_masks = torch.stack(valid_gt_masks) - - assigned_center_targets = torch.stack(assigned_center_targets) - objectness_targets = torch.stack(objectness_targets) - - objectness_weights = torch.stack(objectness_masks) - cls_normalizer = objectness_weights.sum(dim=1, keepdim=True).float() - objectness_weights /= cls_normalizer.clamp(min=1.0) - - box_loss_weights = objectness_targets.float() / ( - objectness_targets.sum().float() + EPS) - - valid_gt_weights = valid_gt_masks.float() / ( - valid_gt_masks.sum().float() + EPS) - - dir_class_targets = torch.stack(dir_class_targets) - dir_res_targets = torch.stack(dir_res_targets) - size_class_targets = torch.stack(size_class_targets) - size_res_targets = torch.stack(size_res_targets) - mask_targets = torch.stack(mask_targets) - - return (sampling_targets, sampling_weights, assigned_size_targets, - size_class_targets, size_res_targets, dir_class_targets, - dir_res_targets, center_targets, assigned_center_targets, - mask_targets, valid_gt_masks, objectness_targets, - objectness_weights, box_loss_weights, valid_gt_weights) - - def get_targets_single(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - max_gt_nums=None, - seed_points=None, - seed_indices=None, - candidate_indices=None, - seed_points_obj_topk=4): - """Generate targets of GroupFree3D head for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. - gt_labels_3d (torch.Tensor): Labels of each batch. - pts_semantic_mask (torch.Tensor): Point-wise semantic - label of each batch. - pts_instance_mask (torch.Tensor): Point-wise instance - label of each batch. - max_gt_nums (int): Max number of GTs for single batch. - seed_points (torch.Tensor): Coordinates of seed points. - seed_indices (torch.Tensor): Indices of seed points. - candidate_indices (torch.Tensor): Indices of object candidates. - seed_points_obj_topk (int): k value of k-Closest Points Sampling. - - Returns: - tuple[torch.Tensor]: Targets of GroupFree3D head. 
- """ - - assert self.bbox_coder.with_rot or pts_semantic_mask is not None - - gt_bboxes_3d = gt_bboxes_3d.to(points.device) - - # generate center, dir, size target - (center_targets, size_targets, size_class_targets, size_res_targets, - dir_class_targets, - dir_res_targets) = self.bbox_coder.encode(gt_bboxes_3d, gt_labels_3d) - - # pad targets as original code of GroupFree3D - pad_num = max_gt_nums - gt_labels_3d.shape[0] - box_label_mask = points.new_zeros([max_gt_nums]) - box_label_mask[:gt_labels_3d.shape[0]] = 1 - - gt_bboxes_pad = F.pad(gt_bboxes_3d.tensor, (0, 0, 0, pad_num)) - gt_bboxes_pad[gt_labels_3d.shape[0]:, 0:3] += 1000 - gt_bboxes_3d = gt_bboxes_3d.new_box(gt_bboxes_pad) - - gt_labels_3d = F.pad(gt_labels_3d, (0, pad_num)) - - center_targets = F.pad(center_targets, (0, 0, 0, pad_num), value=1000) - size_targets = F.pad(size_targets, (0, 0, 0, pad_num)) - size_class_targets = F.pad(size_class_targets, (0, pad_num)) - size_res_targets = F.pad(size_res_targets, (0, 0, 0, pad_num)) - dir_class_targets = F.pad(dir_class_targets, (0, pad_num)) - dir_res_targets = F.pad(dir_res_targets, (0, pad_num)) - - # 0. generate pts_instance_label and pts_obj_mask - num_points = points.shape[0] - pts_obj_mask = points.new_zeros([num_points], dtype=torch.long) - pts_instance_label = points.new_zeros([num_points], - dtype=torch.long) - 1 - - if self.bbox_coder.with_rot: - vote_targets = points.new_zeros([num_points, 4 * self.gt_per_seed]) - vote_target_idx = points.new_zeros([num_points], dtype=torch.long) - box_indices_all = gt_bboxes_3d.points_in_boxes_part(points) - for i in range(gt_labels_3d.shape[0]): - box_indices = box_indices_all[:, i] - indices = torch.nonzero( - box_indices, as_tuple=False).squeeze(-1) - selected_points = points[indices] - pts_obj_mask[indices] = 1 - vote_targets_tmp = vote_targets[indices] - votes = gt_bboxes_3d.gravity_center[i].unsqueeze( - 0) - selected_points[:, :3] - - for j in range(self.gt_per_seed): - column_indices = torch.nonzero( - vote_target_idx[indices] == j, - as_tuple=False).squeeze(-1) - vote_targets_tmp[column_indices, - int(j * 3):int(j * 3 + - 3)] = votes[column_indices] - vote_targets_tmp[column_indices, - j + 3 * self.gt_per_seed] = i - if j == 0: - vote_targets_tmp[ - column_indices, :3 * - self.gt_per_seed] = votes[column_indices].repeat( - 1, self.gt_per_seed) - vote_targets_tmp[column_indices, - 3 * self.gt_per_seed:] = i - - vote_targets[indices] = vote_targets_tmp - vote_target_idx[indices] = torch.clamp( - vote_target_idx[indices] + 1, max=2) - - dist = points.new_zeros([num_points, self.gt_per_seed]) + 1000 - for j in range(self.gt_per_seed): - dist[:, j] = (vote_targets[:, 3 * j:3 * j + 3]**2).sum(-1) - - instance_indices = torch.argmin( - dist, dim=-1).unsqueeze(-1) + 3 * self.gt_per_seed - instance_lable = torch.gather(vote_targets, 1, - instance_indices).squeeze(-1) - pts_instance_label = instance_lable.long() - pts_instance_label[pts_obj_mask == 0] = -1 - - elif pts_semantic_mask is not None: - for i in torch.unique(pts_instance_mask): - indices = torch.nonzero( - pts_instance_mask == i, as_tuple=False).squeeze(-1) - - if pts_semantic_mask[indices[0]] < self.num_classes: - selected_points = points[indices, :3] - center = 0.5 * ( - selected_points.min(0)[0] + selected_points.max(0)[0]) - - delta_xyz = center - center_targets - instance_lable = torch.argmin((delta_xyz**2).sum(-1)) - pts_instance_label[indices] = instance_lable - pts_obj_mask[indices] = 1 - - else: - raise NotImplementedError - - # 1. 
generate objectness targets in sampling head - gt_num = gt_labels_3d.shape[0] - num_seed = seed_points.shape[0] - num_candidate = candidate_indices.shape[0] - - object_assignment = torch.gather(pts_instance_label, 0, seed_indices) - # set background points to the last gt bbox as original code - object_assignment[object_assignment < 0] = gt_num - 1 - object_assignment_one_hot = gt_bboxes_3d.tensor.new_zeros( - (num_seed, gt_num)) - object_assignment_one_hot.scatter_(1, object_assignment.unsqueeze(-1), - 1) # (num_seed, gt_num) - - delta_xyz = seed_points.unsqueeze( - 1) - gt_bboxes_3d.gravity_center.unsqueeze( - 0) # (num_seed, gt_num, 3) - delta_xyz = delta_xyz / (gt_bboxes_3d.dims.unsqueeze(0) + EPS) - - new_dist = torch.sum(delta_xyz**2, dim=-1) - euclidean_dist1 = torch.sqrt(new_dist + EPS) - euclidean_dist1 = euclidean_dist1 * object_assignment_one_hot + 100 * ( - 1 - object_assignment_one_hot) - # (gt_num, num_seed) - euclidean_dist1 = euclidean_dist1.permute(1, 0) - - # gt_num x topk - topk_inds = torch.topk( - euclidean_dist1, - seed_points_obj_topk, - largest=False)[1] * box_label_mask[:, None] + \ - (box_label_mask[:, None] - 1) - topk_inds = topk_inds.long() - topk_inds = topk_inds.view(-1).contiguous() - - sampling_targets = torch.zeros( - num_seed + 1, dtype=torch.long).to(points.device) - sampling_targets[topk_inds] = 1 - sampling_targets = sampling_targets[:num_seed] - # pts_instance_label - objectness_label_mask = torch.gather(pts_instance_label, 0, - seed_indices) # num_seed - sampling_targets[objectness_label_mask < 0] = 0 - - # 2. objectness target - seed_obj_gt = torch.gather(pts_obj_mask, 0, seed_indices) # num_seed - objectness_targets = torch.gather(seed_obj_gt, 0, - candidate_indices) # num_candidate - - # 3. box target - seed_instance_label = torch.gather(pts_instance_label, 0, - seed_indices) # num_seed - query_points_instance_label = torch.gather( - seed_instance_label, 0, candidate_indices) # num_candidate - - # Set assignment - # (num_candidate, ) with values in 0,1,...,gt_num-1 - assignment = query_points_instance_label - # set background points to the last gt bbox as original code - assignment[assignment < 0] = gt_num - 1 - assignment_expand = assignment.unsqueeze(1).expand(-1, 3) - - assigned_center_targets = center_targets[assignment] - assigned_size_targets = size_targets[assignment] - - dir_class_targets = dir_class_targets[assignment] - dir_res_targets = dir_res_targets[assignment] - dir_res_targets /= (np.pi / self.num_dir_bins) - - size_class_targets = size_class_targets[assignment] - size_res_targets = \ - torch.gather(size_res_targets, 0, assignment_expand) - one_hot_size_targets = gt_bboxes_3d.tensor.new_zeros( - (num_candidate, self.num_sizes)) - one_hot_size_targets.scatter_(1, size_class_targets.unsqueeze(-1), 1) - one_hot_size_targets = one_hot_size_targets.unsqueeze(-1).expand( - -1, -1, 3) # (num_candidate,num_size_cluster,3) - mean_sizes = size_res_targets.new_tensor( - self.bbox_coder.mean_sizes).unsqueeze(0) - pos_mean_sizes = torch.sum(one_hot_size_targets * mean_sizes, 1) - size_res_targets /= pos_mean_sizes - - mask_targets = gt_labels_3d[assignment].long() - - objectness_masks = points.new_ones((num_candidate)) - - return (sampling_targets, assigned_size_targets, size_class_targets, - size_res_targets, dir_class_targets, dir_res_targets, - center_targets, assigned_center_targets, mask_targets, - objectness_targets, objectness_masks) - - def get_bboxes(self, - points, - bbox_preds, - input_metas, - rescale=False, - use_nms=True): - 
"""Generate bboxes from GroupFree3D head predictions. - - Args: - points (torch.Tensor): Input points. - bbox_preds (dict): Predictions from GroupFree3D head. - input_metas (list[dict]): Point cloud and image's meta info. - rescale (bool): Whether to rescale bboxes. - use_nms (bool): Whether to apply NMS, skip nms postprocessing - while using GroupFree3D head in rpn stage. - - Returns: - list[tuple[torch.Tensor]]: Bounding boxes, scores and labels. - """ - # support multi-stage predictions - assert self.test_cfg['prediction_stages'] in \ - ['last', 'all', 'last_three'] - - prefixes = list() - if self.test_cfg['prediction_stages'] == 'last': - prefixes = [f's{self.num_decoder_layers - 1}.'] - elif self.test_cfg['prediction_stages'] == 'all': - prefixes = ['proposal.'] + \ - [f's{i}.' for i in range(self.num_decoder_layers)] - elif self.test_cfg['prediction_stages'] == 'last_three': - prefixes = [ - f's{i}.' for i in range(self.num_decoder_layers - - 3, self.num_decoder_layers) - ] - else: - raise NotImplementedError - - obj_scores = list() - sem_scores = list() - bbox3d = list() - for prefix in prefixes: - # decode boxes - obj_score = bbox_preds[f'{prefix}obj_scores'][..., -1].sigmoid() - sem_score = bbox_preds[f'{prefix}sem_scores'].softmax(-1) - bbox = self.bbox_coder.decode(bbox_preds, prefix) - obj_scores.append(obj_score) - sem_scores.append(sem_score) - bbox3d.append(bbox) - - obj_scores = torch.cat(obj_scores, dim=1) - sem_scores = torch.cat(sem_scores, dim=1) - bbox3d = torch.cat(bbox3d, dim=1) - - if use_nms: - batch_size = bbox3d.shape[0] - results = list() - for b in range(batch_size): - bbox_selected, score_selected, labels = \ - self.multiclass_nms_single(obj_scores[b], sem_scores[b], - bbox3d[b], points[b, ..., :3], - input_metas[b]) - bbox = input_metas[b]['box_type_3d']( - bbox_selected, - box_dim=bbox_selected.shape[-1], - with_yaw=self.bbox_coder.with_rot) - results.append((bbox, score_selected, labels)) - - return results - else: - return bbox3d - - def multiclass_nms_single(self, obj_scores, sem_scores, bbox, points, - input_meta): - """Multi-class nms in single batch. - - Args: - obj_scores (torch.Tensor): Objectness score of bounding boxes. - sem_scores (torch.Tensor): semantic class score of bounding boxes. - bbox (torch.Tensor): Predicted bounding boxes. - points (torch.Tensor): Input points. - input_meta (dict): Point cloud and image's meta info. - - Returns: - tuple[torch.Tensor]: Bounding boxes, scores and labels. 
- """ - bbox = input_meta['box_type_3d']( - bbox, - box_dim=bbox.shape[-1], - with_yaw=self.bbox_coder.with_rot, - origin=(0.5, 0.5, 0.5)) - box_indices = bbox.points_in_boxes_all(points) - - corner3d = bbox.corners - minmax_box3d = corner3d.new(torch.Size((corner3d.shape[0], 6))) - minmax_box3d[:, :3] = torch.min(corner3d, dim=1)[0] - minmax_box3d[:, 3:] = torch.max(corner3d, dim=1)[0] - - nonempty_box_mask = box_indices.T.sum(1) > 5 - - bbox_classes = torch.argmax(sem_scores, -1) - nms_selected = aligned_3d_nms(minmax_box3d[nonempty_box_mask], - obj_scores[nonempty_box_mask], - bbox_classes[nonempty_box_mask], - self.test_cfg.nms_thr) - - # filter empty boxes and boxes with low score - scores_mask = (obj_scores > self.test_cfg.score_thr) - nonempty_box_inds = torch.nonzero( - nonempty_box_mask, as_tuple=False).flatten() - nonempty_mask = torch.zeros_like(bbox_classes).scatter( - 0, nonempty_box_inds[nms_selected], 1) - selected = (nonempty_mask.bool() & scores_mask.bool()) - - if self.test_cfg.per_class_proposal: - bbox_selected, score_selected, labels = [], [], [] - for k in range(sem_scores.shape[-1]): - bbox_selected.append(bbox[selected].tensor) - score_selected.append(obj_scores[selected] * - sem_scores[selected][:, k]) - labels.append( - torch.zeros_like(bbox_classes[selected]).fill_(k)) - bbox_selected = torch.cat(bbox_selected, 0) - score_selected = torch.cat(score_selected, 0) - labels = torch.cat(labels, 0) - else: - bbox_selected = bbox[selected].tensor - score_selected = obj_scores[selected] - labels = bbox_classes[selected] - - return bbox_selected, score_selected, labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/monoflex_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/monoflex_head.py deleted file mode 100644 index 478d92bd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/monoflex_head.py +++ /dev/null @@ -1,773 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import xavier_init -from torch import nn as nn - -from mmdet3d.core.utils import get_ellip_gaussian_2D -from mmdet3d.models.model_utils import EdgeFusionModule -from mmdet3d.models.utils import (filter_outside_objs, get_edge_indices, - get_keypoints, handle_proj_objs) -from mmdet.core import multi_apply -from mmdet.core.bbox.builder import build_bbox_coder -from mmdet.models.utils import gaussian_radius, gen_gaussian_target -from mmdet.models.utils.gaussian_target import (get_local_maximum, - get_topk_from_heatmap, - transpose_and_gather_feat) -from ..builder import HEADS, build_loss -from .anchor_free_mono3d_head import AnchorFreeMono3DHead - - -@HEADS.register_module() -class MonoFlexHead(AnchorFreeMono3DHead): - r"""MonoFlex head used in `MonoFlex `_ - - .. 
code-block:: none - - / --> 3 x 3 conv --> 1 x 1 conv --> [edge fusion] --> cls - | - | --> 3 x 3 conv --> 1 x 1 conv --> 2d bbox - | - | --> 3 x 3 conv --> 1 x 1 conv --> [edge fusion] --> 2d offsets - | - | --> 3 x 3 conv --> 1 x 1 conv --> keypoints offsets - | - | --> 3 x 3 conv --> 1 x 1 conv --> keypoints uncertainty - feature - | --> 3 x 3 conv --> 1 x 1 conv --> keypoints uncertainty - | - | --> 3 x 3 conv --> 1 x 1 conv --> 3d dimensions - | - | |--- 1 x 1 conv --> ori cls - | --> 3 x 3 conv --| - | |--- 1 x 1 conv --> ori offsets - | - | --> 3 x 3 conv --> 1 x 1 conv --> depth - | - \ --> 3 x 3 conv --> 1 x 1 conv --> depth uncertainty - - Args: - use_edge_fusion (bool): Whether to use edge fusion module while - feature extraction. - edge_fusion_inds (list[tuple]): Indices of feature to use edge fusion. - edge_heatmap_ratio (float): Ratio of generating target heatmap. - filter_outside_objs (bool, optional): Whether to filter the - outside objects. Default: True. - loss_cls (dict, optional): Config of classification loss. - Default: loss_cls=dict(type='GaussionFocalLoss', loss_weight=1.0). - loss_bbox (dict, optional): Config of localization loss. - Default: loss_bbox=dict(type='IOULoss', loss_weight=10.0). - loss_dir (dict, optional): Config of direction classification loss. - Default: dict(type='MultibinLoss', loss_weight=0.1). - loss_keypoints (dict, optional): Config of keypoints loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_dims: (dict, optional): Config of dimensions loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_offsets2d: (dict, optional): Config of offsets2d loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_direct_depth: (dict, optional): Config of directly regression depth loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_keypoints_depth: (dict, optional): Config of keypoints decoded depth loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_combined_depth: (dict, optional): Config of combined depth loss. - Default: dict(type='L1Loss', loss_weight=0.1). - loss_attr (dict, optional): Config of attribute classification loss. - In MonoFlex, Default: None. - bbox_coder (dict, optional): Bbox coder for encoding and decoding boxes. - Default: dict(type='MonoFlexCoder', code_size=7). - norm_cfg (dict, optional): Dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, requires_grad=True). - init_cfg (dict): Initialization config dict. Default: None. 
- """ # noqa: E501 - - def __init__(self, - num_classes, - in_channels, - use_edge_fusion, - edge_fusion_inds, - edge_heatmap_ratio, - filter_outside_objs=True, - loss_cls=dict(type='GaussianFocalLoss', loss_weight=1.0), - loss_bbox=dict(type='IoULoss', loss_weight=0.1), - loss_dir=dict(type='MultiBinLoss', loss_weight=0.1), - loss_keypoints=dict(type='L1Loss', loss_weight=0.1), - loss_dims=dict(type='L1Loss', loss_weight=0.1), - loss_offsets2d=dict(type='L1Loss', loss_weight=0.1), - loss_direct_depth=dict(type='L1Loss', loss_weight=0.1), - loss_keypoints_depth=dict(type='L1Loss', loss_weight=0.1), - loss_combined_depth=dict(type='L1Loss', loss_weight=0.1), - loss_attr=None, - bbox_coder=dict(type='MonoFlexCoder', code_size=7), - norm_cfg=dict(type='BN'), - init_cfg=None, - init_bias=-2.19, - **kwargs): - self.use_edge_fusion = use_edge_fusion - self.edge_fusion_inds = edge_fusion_inds - super().__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_dir=loss_dir, - loss_attr=loss_attr, - norm_cfg=norm_cfg, - init_cfg=init_cfg, - **kwargs) - self.filter_outside_objs = filter_outside_objs - self.edge_heatmap_ratio = edge_heatmap_ratio - self.init_bias = init_bias - self.loss_dir = build_loss(loss_dir) - self.loss_keypoints = build_loss(loss_keypoints) - self.loss_dims = build_loss(loss_dims) - self.loss_offsets2d = build_loss(loss_offsets2d) - self.loss_direct_depth = build_loss(loss_direct_depth) - self.loss_keypoints_depth = build_loss(loss_keypoints_depth) - self.loss_combined_depth = build_loss(loss_combined_depth) - self.bbox_coder = build_bbox_coder(bbox_coder) - - def _init_edge_module(self): - """Initialize edge fusion module for feature extraction.""" - self.edge_fuse_cls = EdgeFusionModule(self.num_classes, 256) - for i in range(len(self.edge_fusion_inds)): - reg_inds, out_inds = self.edge_fusion_inds[i] - out_channels = self.group_reg_dims[reg_inds][out_inds] - fusion_layer = EdgeFusionModule(out_channels, 256) - layer_name = f'edge_fuse_reg_{reg_inds}_{out_inds}' - self.add_module(layer_name, fusion_layer) - - def init_weights(self): - """Initialize weights.""" - super().init_weights() - self.conv_cls.bias.data.fill_(self.init_bias) - xavier_init(self.conv_regs[4][0], gain=0.01) - xavier_init(self.conv_regs[7][0], gain=0.01) - for m in self.conv_regs.modules(): - if isinstance(m, nn.Conv2d): - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def _init_predictor(self): - """Initialize predictor layers of the head.""" - self.conv_cls_prev = self._init_branch( - conv_channels=self.cls_branch, - conv_strides=(1, ) * len(self.cls_branch)) - self.conv_cls = nn.Conv2d(self.cls_branch[-1], self.cls_out_channels, - 1) - # init regression head - self.conv_reg_prevs = nn.ModuleList() - # init output head - self.conv_regs = nn.ModuleList() - # group_reg_dims: - # ((4, ), (2, ), (20, ), (3, ), (3, ), (8, 8), (1, ), (1, )) - for i in range(len(self.group_reg_dims)): - reg_dims = self.group_reg_dims[i] - reg_branch_channels = self.reg_branch[i] - out_channel = self.out_channels[i] - reg_list = nn.ModuleList() - if len(reg_branch_channels) > 0: - self.conv_reg_prevs.append( - self._init_branch( - conv_channels=reg_branch_channels, - conv_strides=(1, ) * len(reg_branch_channels))) - for reg_dim in reg_dims: - reg_list.append(nn.Conv2d(out_channel, reg_dim, 1)) - self.conv_regs.append(reg_list) - else: - self.conv_reg_prevs.append(None) - for reg_dim in reg_dims: - reg_list.append(nn.Conv2d(self.feat_channels, reg_dim, 1)) - 
self.conv_regs.append(reg_list) - - def _init_layers(self): - """Initialize layers of the head.""" - self._init_predictor() - if self.use_edge_fusion: - self._init_edge_module() - - def forward_train(self, x, input_metas, gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels, - gt_bboxes_ignore, proposal_cfg, **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (list[Tensor]): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_3d (list[Tensor]): 3D ground truth bboxes of the image, - shape (num_gts, self.bbox_code_size). - gt_labels_3d (list[Tensor]): 3D ground truth labels of each box, - shape (num_gts,). - centers2d (list[Tensor]): Projected 3D center of each box, - shape (num_gts, 2). - depths (list[Tensor]): Depth of projected 3D center of each box, - shape (num_gts,). - attr_labels (list[Tensor]): Attribute labels of each box, - shape (num_gts,). - gt_bboxes_ignore (list[Tensor]): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - Returns: - tuple: - losses: (dict[str, Tensor]): A dictionary of loss components. - proposal_list (list[Tensor]): Proposals of each image. - """ - outs = self(x, input_metas) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, gt_bboxes_3d, centers2d, depths, - attr_labels, input_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels, - input_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes( - *outs, input_metas, cfg=proposal_cfg) - return losses, proposal_list - - def forward(self, feats, input_metas): - """Forward features from the upstream network. - - Args: - feats (list[Tensor]): Features from the upstream network, each is - a 4D-tensor. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - """ - mlvl_input_metas = [input_metas for i in range(len(feats))] - return multi_apply(self.forward_single, feats, mlvl_input_metas) - - def forward_single(self, x, input_metas): - """Forward features of a single scale level. - - Args: - x (Tensor): Feature maps from a specific FPN feature level. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple: Scores for each class, bbox predictions. 
- """ - img_h, img_w = input_metas[0]['pad_shape'][:2] - batch_size, _, feat_h, feat_w = x.shape - downsample_ratio = img_h / feat_h - - for conv_cls_prev_layer in self.conv_cls_prev: - cls_feat = conv_cls_prev_layer(x) - out_cls = self.conv_cls(cls_feat) - - if self.use_edge_fusion: - # calculate the edge indices for the batch data - edge_indices_list = get_edge_indices( - input_metas, downsample_ratio, device=x.device) - edge_lens = [ - edge_indices.shape[0] for edge_indices in edge_indices_list - ] - max_edge_len = max(edge_lens) - edge_indices = x.new_zeros((batch_size, max_edge_len, 2), - dtype=torch.long) - for i in range(batch_size): - edge_indices[i, :edge_lens[i]] = edge_indices_list[i] - # cls feature map edge fusion - out_cls = self.edge_fuse_cls(cls_feat, out_cls, edge_indices, - edge_lens, feat_h, feat_w) - - bbox_pred = [] - - for i in range(len(self.group_reg_dims)): - reg_feat = x.clone() - # feature regression head - if len(self.reg_branch[i]) > 0: - for conv_reg_prev_layer in self.conv_reg_prevs[i]: - reg_feat = conv_reg_prev_layer(reg_feat) - - for j, conv_reg in enumerate(self.conv_regs[i]): - out_reg = conv_reg(reg_feat) - # Use Edge Fusion Module - if self.use_edge_fusion and (i, j) in self.edge_fusion_inds: - # reg feature map edge fusion - out_reg = getattr(self, 'edge_fuse_reg_{}_{}'.format( - i, j))(reg_feat, out_reg, edge_indices, edge_lens, - feat_h, feat_w) - bbox_pred.append(out_reg) - - bbox_pred = torch.cat(bbox_pred, dim=1) - cls_score = out_cls.sigmoid() # turn to 0-1 - cls_score = cls_score.clamp(min=1e-4, max=1 - 1e-4) - - return cls_score, bbox_pred - - def get_bboxes(self, cls_scores, bbox_preds, input_metas): - """Generate bboxes from bbox head predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level. - bbox_preds (list[Tensor]): Box regression for each scale. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Returns: - list[tuple[:obj:`CameraInstance3DBoxes`, Tensor, Tensor, None]]: - Each item in result_list is 4-tuple. - """ - assert len(cls_scores) == len(bbox_preds) == 1 - cam2imgs = torch.stack([ - cls_scores[0].new_tensor(input_meta['cam2img']) - for input_meta in input_metas - ]) - batch_bboxes, batch_scores, batch_topk_labels = self.decode_heatmap( - cls_scores[0], - bbox_preds[0], - input_metas, - cam2imgs=cam2imgs, - topk=100, - kernel=3) - - result_list = [] - for img_id in range(len(input_metas)): - - bboxes = batch_bboxes[img_id] - scores = batch_scores[img_id] - labels = batch_topk_labels[img_id] - - keep_idx = scores > 0.25 - bboxes = bboxes[keep_idx] - scores = scores[keep_idx] - labels = labels[keep_idx] - - bboxes = input_metas[img_id]['box_type_3d']( - bboxes, box_dim=self.bbox_code_size, origin=(0.5, 0.5, 0.5)) - attrs = None - result_list.append((bboxes, scores, labels, attrs)) - - return result_list - - def decode_heatmap(self, - cls_score, - reg_pred, - input_metas, - cam2imgs, - topk=100, - kernel=3): - """Transform outputs into detections raw bbox predictions. - - Args: - class_score (Tensor): Center predict heatmap, - shape (B, num_classes, H, W). - reg_pred (Tensor): Box regression map. - shape (B, channel, H , W). - input_metas (List[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cam2imgs (Tensor): Camera intrinsic matrix. - shape (N, 4, 4) - topk (int, optional): Get top k center keypoints from heatmap. - Default 100. 
- kernel (int, optional): Max pooling kernel for extract local - maximum pixels. Default 3. - - Returns: - tuple[torch.Tensor]: Decoded output of SMOKEHead, containing - the following Tensors: - - batch_bboxes (Tensor): Coords of each 3D box. - shape (B, k, 7) - - batch_scores (Tensor): Scores of each 3D box. - shape (B, k) - - batch_topk_labels (Tensor): Categories of each 3D box. - shape (B, k) - """ - img_h, img_w = input_metas[0]['pad_shape'][:2] - batch_size, _, feat_h, feat_w = cls_score.shape - - downsample_ratio = img_h / feat_h - center_heatmap_pred = get_local_maximum(cls_score, kernel=kernel) - - *batch_dets, topk_ys, topk_xs = get_topk_from_heatmap( - center_heatmap_pred, k=topk) - batch_scores, batch_index, batch_topk_labels = batch_dets - - regression = transpose_and_gather_feat(reg_pred, batch_index) - regression = regression.view(-1, 8) - - pred_base_centers2d = torch.cat( - [topk_xs.view(-1, 1), - topk_ys.view(-1, 1).float()], dim=1) - preds = self.bbox_coder.decode(regression, batch_topk_labels, - downsample_ratio, cam2imgs) - pred_locations = self.bbox_coder.decode_location( - pred_base_centers2d, preds['offsets2d'], preds['combined_depth'], - cam2imgs, downsample_ratio) - pred_yaws = self.bbox_coder.decode_orientation( - preds['orientations']).unsqueeze(-1) - pred_dims = preds['dimensions'] - batch_bboxes = torch.cat((pred_locations, pred_dims, pred_yaws), dim=1) - batch_bboxes = batch_bboxes.view(batch_size, -1, self.bbox_code_size) - return batch_bboxes, batch_scores, batch_topk_labels - - def get_predictions(self, pred_reg, labels3d, centers2d, reg_mask, - batch_indices, input_metas, downsample_ratio): - """Prepare predictions for computing loss. - - Args: - pred_reg (Tensor): Box regression map. - shape (B, channel, H , W). - labels3d (Tensor): Labels of each 3D box. - shape (B * max_objs, ) - centers2d (Tensor): Coords of each projected 3D box - center on image. shape (N, 2) - reg_mask (Tensor): Indexes of the existence of the 3D box. - shape (B * max_objs, ) - batch_indices (Tenosr): Batch indices of the 3D box. - shape (N, 3) - input_metas (list[dict]): Meta information of each image, - e.g., image size, scaling factor, etc. - downsample_ratio (int): The stride of feature map. - - Returns: - dict: The predictions for computing loss. - """ - batch, channel = pred_reg.shape[0], pred_reg.shape[1] - w = pred_reg.shape[3] - cam2imgs = torch.stack([ - centers2d.new_tensor(input_meta['cam2img']) - for input_meta in input_metas - ]) - # (batch_size, 4, 4) -> (N, 4, 4) - cam2imgs = cam2imgs[batch_indices, :, :] - centers2d_inds = centers2d[:, 1] * w + centers2d[:, 0] - centers2d_inds = centers2d_inds.view(batch, -1) - pred_regression = transpose_and_gather_feat(pred_reg, centers2d_inds) - pred_regression_pois = pred_regression.view(-1, channel)[reg_mask] - preds = self.bbox_coder.decode(pred_regression_pois, labels3d, - downsample_ratio, cam2imgs) - - return preds - - def get_targets(self, gt_bboxes_list, gt_labels_list, gt_bboxes_3d_list, - gt_labels_3d_list, centers2d_list, depths_list, feat_shape, - img_shape, input_metas): - """Get training targets for batch images. -`` - Args: - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each - image, shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each - box, shape (num_gt,). - gt_bboxes_3d_list (list[:obj:`CameraInstance3DBoxes`]): 3D - Ground truth bboxes of each image, - shape (num_gt, bbox_code_size). - gt_labels_3d_list (list[Tensor]): 3D Ground truth labels of - each box, shape (num_gt,). 
- centers2d_list (list[Tensor]): Projected 3D centers onto 2D - image, shape (num_gt, 2). - depths_list (list[Tensor]): Depth of projected 3D centers onto 2D - image, each has shape (num_gt, 1). - feat_shape (tuple[int]): Feature map shape with value, - shape (B, _, H, W). - img_shape (tuple[int]): Image shape in [h, w] format. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple[Tensor, dict]: The Tensor value is the targets of - center heatmap, the dict has components below: - - base_centers2d_target (Tensor): Coords of each projected 3D box - center on image. shape (B * max_objs, 2), [dtype: int] - - labels3d (Tensor): Labels of each 3D box. - shape (N, ) - - reg_mask (Tensor): Mask of the existence of the 3D box. - shape (B * max_objs, ) - - batch_indices (Tensor): Batch id of the 3D box. - shape (N, ) - - depth_target (Tensor): Depth target of each 3D box. - shape (N, ) - - keypoints2d_target (Tensor): Keypoints of each projected 3D box - on image. shape (N, 10, 2) - - keypoints_mask (Tensor): Keypoints mask of each projected 3D - box on image. shape (N, 10) - - keypoints_depth_mask (Tensor): Depths decoded from keypoints - of each 3D box. shape (N, 3) - - orientations_target (Tensor): Orientation (encoded local yaw) - target of each 3D box. shape (N, ) - - offsets2d_target (Tensor): Offsets target of each projected - 3D box. shape (N, 2) - - dimensions_target (Tensor): Dimensions target of each 3D box. - shape (N, 3) - - downsample_ratio (int): The stride of feature map. - """ - - img_h, img_w = img_shape[:2] - batch_size, _, feat_h, feat_w = feat_shape - - width_ratio = float(feat_w / img_w) # 1/4 - height_ratio = float(feat_h / img_h) # 1/4 - - assert width_ratio == height_ratio - - # Whether to filter the objects which are not in FOV. - if self.filter_outside_objs: - filter_outside_objs(gt_bboxes_list, gt_labels_list, - gt_bboxes_3d_list, gt_labels_3d_list, - centers2d_list, input_metas) - - # transform centers2d to base centers2d for regression and - # heatmap generation. 
- # centers2d = int(base_centers2d) + offsets2d - base_centers2d_list, offsets2d_list, trunc_mask_list = \ - handle_proj_objs(centers2d_list, gt_bboxes_list, input_metas) - - keypoints2d_list, keypoints_mask_list, keypoints_depth_mask_list = \ - get_keypoints(gt_bboxes_3d_list, centers2d_list, input_metas) - - center_heatmap_target = gt_bboxes_list[-1].new_zeros( - [batch_size, self.num_classes, feat_h, feat_w]) - - for batch_id in range(batch_size): - # project gt_bboxes from input image to feat map - gt_bboxes = gt_bboxes_list[batch_id] * width_ratio - gt_labels = gt_labels_list[batch_id] - - # project base centers2d from input image to feat map - gt_base_centers2d = base_centers2d_list[batch_id] * width_ratio - trunc_masks = trunc_mask_list[batch_id] - - for j, base_center2d in enumerate(gt_base_centers2d): - if trunc_masks[j]: - # for outside objects, generate ellipse heatmap - base_center2d_x_int, base_center2d_y_int = \ - base_center2d.int() - scale_box_w = min(base_center2d_x_int - gt_bboxes[j][0], - gt_bboxes[j][2] - base_center2d_x_int) - scale_box_h = min(base_center2d_y_int - gt_bboxes[j][1], - gt_bboxes[j][3] - base_center2d_y_int) - radius_x = scale_box_w * self.edge_heatmap_ratio - radius_y = scale_box_h * self.edge_heatmap_ratio - radius_x, radius_y = max(0, int(radius_x)), max( - 0, int(radius_y)) - assert min(radius_x, radius_y) == 0 - ind = gt_labels[j] - get_ellip_gaussian_2D( - center_heatmap_target[batch_id, ind], - [base_center2d_x_int, base_center2d_y_int], radius_x, - radius_y) - else: - base_center2d_x_int, base_center2d_y_int = \ - base_center2d.int() - scale_box_h = (gt_bboxes[j][3] - gt_bboxes[j][1]) - scale_box_w = (gt_bboxes[j][2] - gt_bboxes[j][0]) - radius = gaussian_radius([scale_box_h, scale_box_w], - min_overlap=0.7) - radius = max(0, int(radius)) - ind = gt_labels[j] - gen_gaussian_target( - center_heatmap_target[batch_id, ind], - [base_center2d_x_int, base_center2d_y_int], radius) - - avg_factor = max(1, center_heatmap_target.eq(1).sum()) - num_ctrs = [centers2d.shape[0] for centers2d in centers2d_list] - max_objs = max(num_ctrs) - batch_indices = [ - centers2d_list[0].new_full((num_ctrs[i], ), i) - for i in range(batch_size) - ] - batch_indices = torch.cat(batch_indices, dim=0) - reg_mask = torch.zeros( - (batch_size, max_objs), - dtype=torch.bool).to(base_centers2d_list[0].device) - gt_bboxes_3d = input_metas['box_type_3d'].cat(gt_bboxes_3d_list) - gt_bboxes_3d = gt_bboxes_3d.to(base_centers2d_list[0].device) - - # encode original local yaw to multibin format - orienations_target = self.bbox_coder.encode(gt_bboxes_3d) - - batch_base_centers2d = base_centers2d_list[0].new_zeros( - (batch_size, max_objs, 2)) - - for i in range(batch_size): - reg_mask[i, :num_ctrs[i]] = 1 - batch_base_centers2d[i, :num_ctrs[i]] = base_centers2d_list[i] - - flatten_reg_mask = reg_mask.flatten() - - # transform base centers2d from input scale to output scale - batch_base_centers2d = batch_base_centers2d.view(-1, 2) * width_ratio - - dimensions_target = gt_bboxes_3d.tensor[:, 3:6] - labels_3d = torch.cat(gt_labels_3d_list) - keypoints2d_target = torch.cat(keypoints2d_list) - keypoints_mask = torch.cat(keypoints_mask_list) - keypoints_depth_mask = torch.cat(keypoints_depth_mask_list) - offsets2d_target = torch.cat(offsets2d_list) - bboxes2d = torch.cat(gt_bboxes_list) - - # transform FCOS style bbox into [x1, y1, x2, y2] format. 
- bboxes2d_target = torch.cat([bboxes2d[:, 0:2] * -1, bboxes2d[:, 2:]], - dim=-1) - depths = torch.cat(depths_list) - - target_labels = dict( - base_centers2d_target=batch_base_centers2d.int(), - labels3d=labels_3d, - reg_mask=flatten_reg_mask, - batch_indices=batch_indices, - bboxes2d_target=bboxes2d_target, - depth_target=depths, - keypoints2d_target=keypoints2d_target, - keypoints_mask=keypoints_mask, - keypoints_depth_mask=keypoints_depth_mask, - orienations_target=orienations_target, - offsets2d_target=offsets2d_target, - dimensions_target=dimensions_target, - downsample_ratio=1 / width_ratio) - - return center_heatmap_target, avg_factor, target_labels - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels, - input_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level. - shape (num_gt, 4). - bbox_preds (list[Tensor]): Box dims is a 4D-tensor, the channel - number is bbox_code_size. - shape (B, 7, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image. - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box. - shape (num_gts, ). - gt_bboxes_3d (list[:obj:`CameraInstance3DBoxes`]): 3D boxes ground - truth. it is the flipped gt_bboxes - gt_labels_3d (list[Tensor]): Same as gt_labels. - centers2d (list[Tensor]): 2D centers on the image. - shape (num_gts, 2). - depths (list[Tensor]): Depth ground truth. - shape (num_gts, ). - attr_labels (list[Tensor]): Attributes indices of each box. - In kitti it's None. - input_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) == 1 - assert attr_labels is None - assert gt_bboxes_ignore is None - center2d_heatmap = cls_scores[0] - pred_reg = bbox_preds[0] - - center2d_heatmap_target, avg_factor, target_labels = \ - self.get_targets(gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, - center2d_heatmap.shape, - input_metas[0]['pad_shape'], - input_metas) - - preds = self.get_predictions( - pred_reg=pred_reg, - labels3d=target_labels['labels3d'], - centers2d=target_labels['base_centers2d_target'], - reg_mask=target_labels['reg_mask'], - batch_indices=target_labels['batch_indices'], - input_metas=input_metas, - downsample_ratio=target_labels['downsample_ratio']) - - # heatmap loss - loss_cls = self.loss_cls( - center2d_heatmap, center2d_heatmap_target, avg_factor=avg_factor) - - # bbox2d regression loss - loss_bbox = self.loss_bbox(preds['bboxes2d'], - target_labels['bboxes2d_target']) - - # keypoints loss, the keypoints in predictions and target are all - # local coordinates. 
Check the mask dtype should be bool, not int - # or float to ensure the indexing is bool index - keypoints2d_mask = target_labels['keypoints2d_mask'] - loss_keypoints = self.loss_keypoints( - preds['keypoints2d'][keypoints2d_mask], - target_labels['keypoints2d_target'][keypoints2d_mask]) - - # orientations loss - loss_dir = self.loss_dir(preds['orientations'], - target_labels['orientations_target']) - - # dimensions loss - loss_dims = self.loss_dims(preds['dimensions'], - target_labels['dimensions_target']) - - # offsets for center heatmap - loss_offsets2d = self.loss_offsets2d(preds['offsets2d'], - target_labels['offsets2d_target']) - - # directly regressed depth loss with direct depth uncertainty loss - direct_depth_weights = torch.exp(-preds['direct_depth_uncertainty']) - loss_weight_1 = self.loss_direct_depth.loss_weight - loss_direct_depth = self.loss_direct_depth( - preds['direct_depth'], target_labels['depth_target'], - direct_depth_weights) - loss_uncertainty_1 =\ - preds['direct_depth_uncertainty'] * loss_weight_1 - loss_direct_depth = loss_direct_depth + loss_uncertainty_1.mean() - - # keypoints decoded depth loss with keypoints depth uncertainty loss - depth_mask = target_labels['keypoints_depth_mask'] - depth_target = target_labels['depth_target'].unsqueeze(-1).repeat(1, 3) - valid_keypoints_depth_uncertainty = preds[ - 'keypoints_depth_uncertainty'][depth_mask] - valid_keypoints_depth_weights = torch.exp( - -valid_keypoints_depth_uncertainty) - loss_keypoints_depth = self.loss_keypoint_depth( - preds['keypoints_depth'][depth_mask], depth_target[depth_mask], - valid_keypoints_depth_weights) - loss_weight_2 = self.loss_keypoints_depth.loss_weight - loss_uncertainty_2 =\ - valid_keypoints_depth_uncertainty * loss_weight_2 - loss_keypoints_depth = loss_keypoints_depth + loss_uncertainty_2.mean() - - # combined depth loss for optimiaze the uncertainty - loss_combined_depth = self.loss_combined_depth( - preds['combined_depth'], target_labels['depth_target']) - - loss_dict = dict( - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_keypoints=loss_keypoints, - loss_dir=loss_dir, - loss_dims=loss_dims, - loss_offsets2d=loss_offsets2d, - loss_direct_depth=loss_direct_depth, - loss_keypoints_depth=loss_keypoints_depth, - loss_combined_depth=loss_combined_depth) - - return loss_dict diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/parta2_rpn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/parta2_rpn_head.py deleted file mode 100644 index 4793d21e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/parta2_rpn_head.py +++ /dev/null @@ -1,312 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.runner import force_fp32 - -from mmdet3d.core import limit_period, xywhr2xyxyr -from mmdet3d.core.post_processing import nms_bev, nms_normal_bev -from ..builder import HEADS -from .anchor3d_head import Anchor3DHead - - -@HEADS.register_module() -class PartA2RPNHead(Anchor3DHead): - """RPN head for PartA2. - - Note: - The main difference between the PartA2 RPN head and the Anchor3DHead - lies in their output during inference. PartA2 RPN head further returns - the original classification score for the second stage since the bbox - head in RoI head does not do classification task. 
- - Different from RPN heads in 2D detectors, this RPN head does - multi-class classification task and uses FocalLoss like the SECOND and - PointPillars do. But this head uses class agnostic nms rather than - multi-class nms. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - train_cfg (dict): Train configs. - test_cfg (dict): Test configs. - feat_channels (int): Number of channels of the feature map. - use_direction_classifier (bool): Whether to add a direction classifier. - anchor_generator(dict): Config dict of anchor generator. - assigner_per_size (bool): Whether to do assignment for each separate - anchor size. - assign_per_class (bool): Whether to do assignment for each class. - diff_rad_by_sin (bool): Whether to change the difference into sin - difference for box regression loss. - dir_offset (float | int): The offset of BEV rotation angles - (TODO: may be moved into box coder) - dir_limit_offset (float | int): The limited range of BEV - rotation angles. (TODO: may be moved into box coder) - bbox_coder (dict): Config dict of box coders. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - loss_dir (dict): Config of direction classifier loss. - """ - - def __init__(self, - num_classes, - in_channels, - train_cfg, - test_cfg, - feat_channels=256, - use_direction_classifier=True, - anchor_generator=dict( - type='Anchor3DRangeGenerator', - range=[0, -39.68, -1.78, 69.12, 39.68, -1.78], - strides=[2], - sizes=[[3.9, 1.6, 1.56]], - rotations=[0, 1.57], - custom_values=[], - reshape_out=False), - assigner_per_size=False, - assign_per_class=False, - diff_rad_by_sin=True, - dir_offset=-np.pi / 2, - dir_limit_offset=0, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_dir=dict(type='CrossEntropyLoss', loss_weight=0.2), - init_cfg=None): - super().__init__(num_classes, in_channels, train_cfg, test_cfg, - feat_channels, use_direction_classifier, - anchor_generator, assigner_per_size, assign_per_class, - diff_rad_by_sin, dir_offset, dir_limit_offset, - bbox_coder, loss_cls, loss_bbox, loss_dir, init_cfg) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - gt_bboxes, - gt_labels, - input_metas, - gt_bboxes_ignore=None): - """Calculate losses. - - Args: - cls_scores (list[torch.Tensor]): Multi-level class scores. - bbox_preds (list[torch.Tensor]): Multi-level bbox predictions. - dir_cls_preds (list[torch.Tensor]): Multi-level direction - class predictions. - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): Ground truth boxes - of each sample. - gt_labels (list[torch.Tensor]): Labels of each sample. - input_metas (list[dict]): Point cloud and image's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict[str, list[torch.Tensor]]: Classification, bbox, and - direction losses of each level. - - - loss_rpn_cls (list[torch.Tensor]): Classification losses. - - loss_rpn_bbox (list[torch.Tensor]): Box regression losses. - - loss_rpn_dir (list[torch.Tensor]): Direction classification - losses. 
- """ - loss_dict = super().loss(cls_scores, bbox_preds, dir_cls_preds, - gt_bboxes, gt_labels, input_metas, - gt_bboxes_ignore) - # change the loss key names to avoid conflict - return dict( - loss_rpn_cls=loss_dict['loss_cls'], - loss_rpn_bbox=loss_dict['loss_bbox'], - loss_rpn_dir=loss_dict['loss_dir']) - - def get_bboxes_single(self, - cls_scores, - bbox_preds, - dir_cls_preds, - mlvl_anchors, - input_meta, - cfg, - rescale=False): - """Get bboxes of single branch. - - Args: - cls_scores (torch.Tensor): Class score in single batch. - bbox_preds (torch.Tensor): Bbox prediction in single batch. - dir_cls_preds (torch.Tensor): Predictions of direction class - in single batch. - mlvl_anchors (List[torch.Tensor]): Multi-level anchors - in single batch. - input_meta (list[dict]): Contain pcd and img's meta info. - cfg (:obj:`ConfigDict`): Training or testing config. - rescale (list[torch.Tensor]): whether th rescale bbox. - - Returns: - dict: Predictions of single batch containing the following keys: - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Predicted 3d bboxes. - - scores_3d (torch.Tensor): Score of each bbox. - - labels_3d (torch.Tensor): Label of each bbox. - - cls_preds (torch.Tensor): Class score of each bbox. - """ - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_max_scores = [] - mlvl_label_pred = [] - mlvl_dir_scores = [] - mlvl_cls_score = [] - for cls_score, bbox_pred, dir_cls_pred, anchors in zip( - cls_scores, bbox_preds, dir_cls_preds, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - assert cls_score.size()[-2:] == dir_cls_pred.size()[-2:] - dir_cls_pred = dir_cls_pred.permute(1, 2, 0).reshape(-1, 2) - dir_cls_score = torch.max(dir_cls_pred, dim=-1)[1] - - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.num_classes) - - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, - 0).reshape(-1, self.box_code_size) - - nms_pre = cfg.get('nms_pre', -1) - if self.use_sigmoid_cls: - max_scores, pred_labels = scores.max(dim=1) - else: - max_scores, pred_labels = scores[:, :-1].max(dim=1) - # get topk - if nms_pre > 0 and scores.shape[0] > nms_pre: - topk_scores, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - max_scores = topk_scores - cls_score = scores[topk_inds, :] - dir_cls_score = dir_cls_score[topk_inds] - pred_labels = pred_labels[topk_inds] - - bboxes = self.bbox_coder.decode(anchors, bbox_pred) - mlvl_bboxes.append(bboxes) - mlvl_max_scores.append(max_scores) - mlvl_cls_score.append(cls_score) - mlvl_label_pred.append(pred_labels) - mlvl_dir_scores.append(dir_cls_score) - - mlvl_bboxes = torch.cat(mlvl_bboxes) - mlvl_bboxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - mlvl_bboxes, box_dim=self.box_code_size).bev) - mlvl_max_scores = torch.cat(mlvl_max_scores) - mlvl_label_pred = torch.cat(mlvl_label_pred) - mlvl_dir_scores = torch.cat(mlvl_dir_scores) - # shape [k, num_class] before sigmoid - # PartA2 need to keep raw classification score - # because the bbox head in the second stage does not have - # classification branch, - # roi head need this score as classification score - mlvl_cls_score = torch.cat(mlvl_cls_score) - - score_thr = cfg.get('score_thr', 0) - result = self.class_agnostic_nms(mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_max_scores, mlvl_label_pred, - mlvl_cls_score, mlvl_dir_scores, - score_thr, cfg.nms_post, cfg, - input_meta) - - return 
result - - def class_agnostic_nms(self, mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_max_scores, mlvl_label_pred, mlvl_cls_score, - mlvl_dir_scores, score_thr, max_num, cfg, - input_meta): - """Class agnostic nms for single batch. - - Args: - mlvl_bboxes (torch.Tensor): Bboxes from Multi-level. - mlvl_bboxes_for_nms (torch.Tensor): Bboxes for nms - (bev or minmax boxes) from Multi-level. - mlvl_max_scores (torch.Tensor): Max scores of Multi-level bbox. - mlvl_label_pred (torch.Tensor): Class predictions - of Multi-level bbox. - mlvl_cls_score (torch.Tensor): Class scores of - Multi-level bbox. - mlvl_dir_scores (torch.Tensor): Direction scores of - Multi-level bbox. - score_thr (int): Score threshold. - max_num (int): Max number of bboxes after nms. - cfg (:obj:`ConfigDict`): Training or testing config. - input_meta (dict): Contain pcd and img's meta info. - - Returns: - dict: Predictions of single batch. Contain the keys: - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Predicted 3d bboxes. - - scores_3d (torch.Tensor): Score of each bbox. - - labels_3d (torch.Tensor): Label of each bbox. - - cls_preds (torch.Tensor): Class score of each bbox. - """ - bboxes = [] - scores = [] - labels = [] - dir_scores = [] - cls_scores = [] - score_thr_inds = mlvl_max_scores > score_thr - _scores = mlvl_max_scores[score_thr_inds] - _bboxes_for_nms = mlvl_bboxes_for_nms[score_thr_inds, :] - if cfg.use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - selected = nms_func(_bboxes_for_nms, _scores, cfg.nms_thr) - - _mlvl_bboxes = mlvl_bboxes[score_thr_inds, :] - _mlvl_dir_scores = mlvl_dir_scores[score_thr_inds] - _mlvl_label_pred = mlvl_label_pred[score_thr_inds] - _mlvl_cls_score = mlvl_cls_score[score_thr_inds] - - if len(selected) > 0: - bboxes.append(_mlvl_bboxes[selected]) - scores.append(_scores[selected]) - labels.append(_mlvl_label_pred[selected]) - cls_scores.append(_mlvl_cls_score[selected]) - dir_scores.append(_mlvl_dir_scores[selected]) - dir_rot = limit_period(bboxes[-1][..., 6] - self.dir_offset, - self.dir_limit_offset, np.pi) - bboxes[-1][..., 6] = ( - dir_rot + self.dir_offset + - np.pi * dir_scores[-1].to(bboxes[-1].dtype)) - - if bboxes: - bboxes = torch.cat(bboxes, dim=0) - scores = torch.cat(scores, dim=0) - cls_scores = torch.cat(cls_scores, dim=0) - labels = torch.cat(labels, dim=0) - if bboxes.shape[0] > max_num: - _, inds = scores.sort(descending=True) - inds = inds[:max_num] - bboxes = bboxes[inds, :] - labels = labels[inds] - scores = scores[inds] - cls_scores = cls_scores[inds] - bboxes = input_meta['box_type_3d']( - bboxes, box_dim=self.box_code_size) - return dict( - boxes_3d=bboxes, - scores_3d=scores, - labels_3d=labels, - cls_preds=cls_scores # raw scores [max_num, cls_num] - ) - else: - return dict( - boxes_3d=input_meta['box_type_3d']( - mlvl_bboxes.new_zeros([0, self.box_code_size]), - box_dim=self.box_code_size), - scores_3d=mlvl_bboxes.new_zeros([0]), - labels_3d=mlvl_bboxes.new_zeros([0]), - cls_preds=mlvl_bboxes.new_zeros([0, mlvl_cls_score.shape[-1]])) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/pgd_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/pgd_head.py deleted file mode 100644 index 9291b9c5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/pgd_head.py +++ /dev/null @@ -1,1231 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch -from mmcv.cnn import Scale, bias_init_with_prob, normal_init -from mmcv.runner import force_fp32 -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.core import box3d_multiclass_nms, xywhr2xyxyr -from mmdet3d.core.bbox import points_cam2img, points_img2cam -from mmdet.core import distance2bbox, multi_apply -from ..builder import HEADS, build_loss -from .fcos_mono3d_head import FCOSMono3DHead - - -@HEADS.register_module() -class PGDHead(FCOSMono3DHead): - r"""Anchor-free head used in `PGD `_. - - Args: - use_depth_classifer (bool, optional): Whether to use depth classifier. - Defaults to True. - use_only_reg_proj (bool, optional): Whether to use only direct - regressed depth in the re-projection (to make the network easier - to learn). Defaults to False. - weight_dim (int, optional): Dimension of the location-aware weight - map. Defaults to -1. - weight_branch (tuple[tuple[int]], optional): Feature map channels of - the convolutional branch for weight map. Defaults to ((256, ), ). - depth_branch (tuple[int], optional): Feature map channels of the - branch for probabilistic depth estimation. Defaults to (64, ), - depth_range (tuple[float], optional): Range of depth estimation. - Defaults to (0, 70), - depth_unit (int, optional): Unit of depth range division. Defaults to - 10. - division (str, optional): Depth division method. Options include - 'uniform', 'linear', 'log', 'loguniform'. Defaults to 'uniform'. - depth_bins (int, optional): Discrete bins of depth division. Defaults - to 8. - loss_depth (dict, optional): Depth loss. Defaults to dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0). - loss_bbox2d (dict, optional): Loss for 2D box estimation. Defaults to - dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0). - loss_consistency (dict, optional): Consistency loss. Defaults to - dict(type='GIoULoss', loss_weight=1.0), - pred_velo (bool, optional): Whether to predict velocity. Defaults to - False. - pred_bbox2d (bool, optional): Whether to predict 2D bounding boxes. - Defaults to True. - pred_keypoints (bool, optional): Whether to predict keypoints. - Defaults to False, - bbox_coder (dict, optional): Bounding box coder. Defaults to - dict(type='PGDBBoxCoder', base_depths=((28.01, 16.32), ), - base_dims=((0.8, 1.73, 0.6), (1.76, 1.73, 0.6), (3.9, 1.56, 1.6)), - code_size=7). 
- """ - - def __init__(self, - use_depth_classifier=True, - use_onlyreg_proj=False, - weight_dim=-1, - weight_branch=((256, ), ), - depth_branch=(64, ), - depth_range=(0, 70), - depth_unit=10, - division='uniform', - depth_bins=8, - loss_depth=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_bbox2d=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - loss_consistency=dict(type='GIoULoss', loss_weight=1.0), - pred_bbox2d=True, - pred_keypoints=False, - bbox_coder=dict( - type='PGDBBoxCoder', - base_depths=((28.01, 16.32), ), - base_dims=((0.8, 1.73, 0.6), (1.76, 1.73, 0.6), - (3.9, 1.56, 1.6)), - code_size=7), - **kwargs): - self.use_depth_classifier = use_depth_classifier - self.use_onlyreg_proj = use_onlyreg_proj - self.depth_branch = depth_branch - self.pred_keypoints = pred_keypoints - self.weight_dim = weight_dim - self.weight_branch = weight_branch - self.weight_out_channels = [] - for weight_branch_channels in weight_branch: - if len(weight_branch_channels) > 0: - self.weight_out_channels.append(weight_branch_channels[-1]) - else: - self.weight_out_channels.append(-1) - self.depth_range = depth_range - self.depth_unit = depth_unit - self.division = division - if self.division == 'uniform': - self.num_depth_cls = int( - (depth_range[1] - depth_range[0]) / depth_unit) + 1 - if self.num_depth_cls != depth_bins: - print('Warning: The number of bins computed from ' + - 'depth_unit is different from given parameter! ' + - 'Depth_unit will be considered with priority in ' + - 'Uniform Division.') - else: - self.num_depth_cls = depth_bins - super().__init__( - pred_bbox2d=pred_bbox2d, bbox_coder=bbox_coder, **kwargs) - self.loss_depth = build_loss(loss_depth) - if self.pred_bbox2d: - self.loss_bbox2d = build_loss(loss_bbox2d) - self.loss_consistency = build_loss(loss_consistency) - if self.pred_keypoints: - self.kpts_start = 9 if self.pred_velo else 7 - - def _init_layers(self): - """Initialize layers of the head.""" - super()._init_layers() - if self.pred_bbox2d: - self.scale_dim += 1 - if self.pred_keypoints: - self.scale_dim += 1 - self.scales = nn.ModuleList([ - nn.ModuleList([Scale(1.0) for _ in range(self.scale_dim)]) - for _ in self.strides - ]) - - def _init_predictor(self): - """Initialize predictor layers of the head.""" - super()._init_predictor() - - if self.use_depth_classifier: - self.conv_depth_cls_prev = self._init_branch( - conv_channels=self.depth_branch, - conv_strides=(1, ) * len(self.depth_branch)) - self.conv_depth_cls = nn.Conv2d(self.depth_branch[-1], - self.num_depth_cls, 1) - # Data-agnostic single param lambda for local depth fusion - self.fuse_lambda = nn.Parameter(torch.tensor(10e-5)) - - if self.weight_dim != -1: - self.conv_weight_prevs = nn.ModuleList() - self.conv_weights = nn.ModuleList() - for i in range(self.weight_dim): - weight_branch_channels = self.weight_branch[i] - weight_out_channel = self.weight_out_channels[i] - if len(weight_branch_channels) > 0: - self.conv_weight_prevs.append( - self._init_branch( - conv_channels=weight_branch_channels, - conv_strides=(1, ) * len(weight_branch_channels))) - self.conv_weights.append( - nn.Conv2d(weight_out_channel, 1, 1)) - else: - self.conv_weight_prevs.append(None) - self.conv_weights.append( - nn.Conv2d(self.feat_channels, 1, 1)) - - def init_weights(self): - """Initialize weights of the head. 
- - We currently still use the customized defined init_weights because the - default init of DCN triggered by the init_cfg will init - conv_offset.weight, which mistakenly affects the training stability. - """ - super().init_weights() - - bias_cls = bias_init_with_prob(0.01) - if self.use_depth_classifier: - for m in self.conv_depth_cls_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - normal_init(self.conv_depth_cls, std=0.01, bias=bias_cls) - - if self.weight_dim != -1: - for conv_weight_prev in self.conv_weight_prevs: - if conv_weight_prev is None: - continue - for m in conv_weight_prev: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - for conv_weight in self.conv_weights: - normal_init(conv_weight, std=0.01) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2). - weight (list[Tensor]): Location-aware weight maps on each - scale level, each is a 4D-tensor, the channel number is - num_points * 1. - depth_cls_preds (list[Tensor]): Box scores for depth class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * self.num_depth_cls. - attr_preds (list[Tensor]): Attribute scores for each scale - level, each is a 4D-tensor, the channel number is - num_points * num_attrs. - centernesses (list[Tensor]): Centerness for each scale level, - each is a 4D-tensor, the channel number is num_points * 1. - """ - return multi_apply(self.forward_single, feats, self.scales, - self.strides) - - def forward_single(self, x, scale, stride): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - stride (int): The corresponding stride for feature maps, only - used to normalize the bbox prediction when self.norm_on_bbox - is True. - - Returns: - tuple: scores for each class, bbox and direction class - predictions, depth class predictions, location-aware weights, - attribute and centerness predictions of input feature maps. 
- """ - cls_score, bbox_pred, dir_cls_pred, attr_pred, centerness, cls_feat, \ - reg_feat = super().forward_single(x, scale, stride) - - max_regress_range = stride * self.regress_ranges[0][1] / \ - self.strides[0] - bbox_pred = self.bbox_coder.decode_2d(bbox_pred, scale, stride, - max_regress_range, self.training, - self.pred_keypoints, - self.pred_bbox2d) - - depth_cls_pred = None - if self.use_depth_classifier: - clone_reg_feat = reg_feat.clone() - for conv_depth_cls_prev_layer in self.conv_depth_cls_prev: - clone_reg_feat = conv_depth_cls_prev_layer(clone_reg_feat) - depth_cls_pred = self.conv_depth_cls(clone_reg_feat) - - weight = None - if self.weight_dim != -1: - weight = [] - for i in range(self.weight_dim): - clone_reg_feat = reg_feat.clone() - if len(self.weight_branch[i]) > 0: - for conv_weight_prev_layer in self.conv_weight_prevs[i]: - clone_reg_feat = conv_weight_prev_layer(clone_reg_feat) - weight.append(self.conv_weights[i](clone_reg_feat)) - weight = torch.cat(weight, dim=1) - - return cls_score, bbox_pred, dir_cls_pred, depth_cls_pred, weight, \ - attr_pred, centerness - - def get_proj_bbox2d(self, - bbox_preds, - pos_dir_cls_preds, - labels_3d, - bbox_targets_3d, - pos_points, - pos_inds, - img_metas, - pos_depth_cls_preds=None, - pos_weights=None, - pos_cls_scores=None, - with_kpts=False): - """Decode box predictions and get projected 2D attributes. - - Args: - bbox_preds (list[Tensor]): Box predictions for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - pos_dir_cls_preds (Tensor): Box scores for direction class - predictions of positive boxes on all the scale levels in shape - (num_pos_points, 2). - labels_3d (list[Tensor]): 3D box category labels for each scale - level, each is a 4D-tensor. - bbox_targets_3d (list[Tensor]): 3D box targets for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - pos_points (Tensor): Foreground points. - pos_inds (Tensor): Index of foreground points from flattened - tensors. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - pos_depth_cls_preds (Tensor, optional): Probabilistic depth map of - positive boxes on all the scale levels in shape - (num_pos_points, self.num_depth_cls). Defaults to None. - pos_weights (Tensor, optional): Location-aware weights of positive - boxes in shape (num_pos_points, self.weight_dim). Defaults to - None. - pos_cls_scores (Tensor, optional): Classification scores of - positive boxes in shape (num_pos_points, self.num_classes). - Defaults to None. - with_kpts (bool, optional): Whether to output keypoints targets. - Defaults to False. - - Returns: - tuple[Tensor]: Exterior 2D boxes from projected 3D boxes, - predicted 2D boxes and keypoint targets (if necessary). 
- """ - views = [np.array(img_meta['cam2img']) for img_meta in img_metas] - num_imgs = len(img_metas) - img_idx = [] - for label in labels_3d: - for idx in range(num_imgs): - img_idx.append( - labels_3d[0].new_ones(int(len(label) / num_imgs)) * idx) - img_idx = torch.cat(img_idx) - pos_img_idx = img_idx[pos_inds] - - flatten_strided_bbox_preds = [] - flatten_strided_bbox2d_preds = [] - flatten_bbox_targets_3d = [] - flatten_strides = [] - - for stride_idx, bbox_pred in enumerate(bbox_preds): - flatten_bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape( - -1, sum(self.group_reg_dims)) - flatten_bbox_pred[:, :2] *= self.strides[stride_idx] - flatten_bbox_pred[:, -4:] *= self.strides[stride_idx] - flatten_strided_bbox_preds.append( - flatten_bbox_pred[:, :self.bbox_coder.bbox_code_size]) - flatten_strided_bbox2d_preds.append(flatten_bbox_pred[:, -4:]) - - bbox_target_3d = bbox_targets_3d[stride_idx].clone() - bbox_target_3d[:, :2] *= self.strides[stride_idx] - bbox_target_3d[:, -4:] *= self.strides[stride_idx] - flatten_bbox_targets_3d.append(bbox_target_3d) - - flatten_stride = flatten_bbox_pred.new_ones( - *flatten_bbox_pred.shape[:-1], 1) * self.strides[stride_idx] - flatten_strides.append(flatten_stride) - - flatten_strided_bbox_preds = torch.cat(flatten_strided_bbox_preds) - flatten_strided_bbox2d_preds = torch.cat(flatten_strided_bbox2d_preds) - flatten_bbox_targets_3d = torch.cat(flatten_bbox_targets_3d) - flatten_strides = torch.cat(flatten_strides) - pos_strided_bbox_preds = flatten_strided_bbox_preds[pos_inds] - pos_strided_bbox2d_preds = flatten_strided_bbox2d_preds[pos_inds] - pos_bbox_targets_3d = flatten_bbox_targets_3d[pos_inds] - pos_strides = flatten_strides[pos_inds] - - pos_decoded_bbox2d_preds = distance2bbox(pos_points, - pos_strided_bbox2d_preds) - - pos_strided_bbox_preds[:, :2] = \ - pos_points - pos_strided_bbox_preds[:, :2] - pos_bbox_targets_3d[:, :2] = \ - pos_points - pos_bbox_targets_3d[:, :2] - - if self.use_depth_classifier and (not self.use_onlyreg_proj): - pos_prob_depth_preds = self.bbox_coder.decode_prob_depth( - pos_depth_cls_preds, self.depth_range, self.depth_unit, - self.division, self.num_depth_cls) - sig_alpha = torch.sigmoid(self.fuse_lambda) - pos_strided_bbox_preds[:, 2] = \ - sig_alpha * pos_strided_bbox_preds.clone()[:, 2] + \ - (1 - sig_alpha) * pos_prob_depth_preds - - box_corners_in_image = pos_strided_bbox_preds.new_zeros( - (*pos_strided_bbox_preds.shape[:-1], 8, 2)) - box_corners_in_image_gt = pos_strided_bbox_preds.new_zeros( - (*pos_strided_bbox_preds.shape[:-1], 8, 2)) - - for idx in range(num_imgs): - mask = (pos_img_idx == idx) - if pos_strided_bbox_preds[mask].shape[0] == 0: - continue - cam2img = torch.eye( - 4, - dtype=pos_strided_bbox_preds.dtype, - device=pos_strided_bbox_preds.device) - view_shape = views[idx].shape - cam2img[:view_shape[0], :view_shape[1]] = \ - pos_strided_bbox_preds.new_tensor(views[idx]) - - centers2d_preds = pos_strided_bbox_preds.clone()[mask, :2] - centers2d_targets = pos_bbox_targets_3d.clone()[mask, :2] - centers3d_targets = points_img2cam(pos_bbox_targets_3d[mask, :3], - views[idx]) - - # use predicted depth to re-project the 2.5D centers - pos_strided_bbox_preds[mask, :3] = points_img2cam( - pos_strided_bbox_preds[mask, :3], views[idx]) - pos_bbox_targets_3d[mask, :3] = centers3d_targets - - # depth fixed when computing re-project 3D bboxes - pos_strided_bbox_preds[mask, 2] = \ - pos_bbox_targets_3d.clone()[mask, 2] - - # decode yaws - if self.use_direction_classifier: - pos_dir_cls_scores = torch.max( 
- pos_dir_cls_preds[mask], dim=-1)[1] - pos_strided_bbox_preds[mask] = self.bbox_coder.decode_yaw( - pos_strided_bbox_preds[mask], centers2d_preds, - pos_dir_cls_scores, self.dir_offset, cam2img) - pos_bbox_targets_3d[mask, 6] = torch.atan2( - centers2d_targets[:, 0] - cam2img[0, 2], - cam2img[0, 0]) + pos_bbox_targets_3d[mask, 6] - - corners = img_metas[0]['box_type_3d']( - pos_strided_bbox_preds[mask], - box_dim=self.bbox_coder.bbox_code_size, - origin=(0.5, 0.5, 0.5)).corners - box_corners_in_image[mask] = points_cam2img(corners, cam2img) - - corners_gt = img_metas[0]['box_type_3d']( - pos_bbox_targets_3d[mask, :self.bbox_code_size], - box_dim=self.bbox_coder.bbox_code_size, - origin=(0.5, 0.5, 0.5)).corners - box_corners_in_image_gt[mask] = points_cam2img(corners_gt, cam2img) - - minxy = torch.min(box_corners_in_image, dim=1)[0] - maxxy = torch.max(box_corners_in_image, dim=1)[0] - proj_bbox2d_preds = torch.cat([minxy, maxxy], dim=1) - - outputs = (proj_bbox2d_preds, pos_decoded_bbox2d_preds) - - if with_kpts: - norm_strides = pos_strides * self.regress_ranges[0][1] / \ - self.strides[0] - kpts_targets = box_corners_in_image_gt - pos_points[..., None, :] - kpts_targets = kpts_targets.view( - (*pos_strided_bbox_preds.shape[:-1], 16)) - kpts_targets /= norm_strides - - outputs += (kpts_targets, ) - - return outputs - - def get_pos_predictions(self, bbox_preds, dir_cls_preds, depth_cls_preds, - weights, attr_preds, centernesses, pos_inds, - img_metas): - """Flatten predictions and get positive ones. - - Args: - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - depth_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * self.num_depth_cls. - attr_preds (list[Tensor]): Attribute scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_attrs. - centernesses (list[Tensor]): Centerness for each scale level, each - is a 4D-tensor, the channel number is num_points * 1. - pos_inds (Tensor): Index of foreground points from flattened - tensors. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple[Tensor]: Box predictions, direction classes, probabilistic - depth maps, location-aware weight maps, attributes and - centerness predictions. 
- """ - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, sum(self.group_reg_dims)) - for bbox_pred in bbox_preds - ] - flatten_dir_cls_preds = [ - dir_cls_pred.permute(0, 2, 3, 1).reshape(-1, 2) - for dir_cls_pred in dir_cls_preds - ] - flatten_centerness = [ - centerness.permute(0, 2, 3, 1).reshape(-1) - for centerness in centernesses - ] - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_dir_cls_preds = torch.cat(flatten_dir_cls_preds) - flatten_centerness = torch.cat(flatten_centerness) - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_dir_cls_preds = flatten_dir_cls_preds[pos_inds] - pos_centerness = flatten_centerness[pos_inds] - - pos_depth_cls_preds = None - if self.use_depth_classifier: - flatten_depth_cls_preds = [ - depth_cls_pred.permute(0, 2, 3, - 1).reshape(-1, self.num_depth_cls) - for depth_cls_pred in depth_cls_preds - ] - flatten_depth_cls_preds = torch.cat(flatten_depth_cls_preds) - pos_depth_cls_preds = flatten_depth_cls_preds[pos_inds] - - pos_weights = None - if self.weight_dim != -1: - flatten_weights = [ - weight.permute(0, 2, 3, 1).reshape(-1, self.weight_dim) - for weight in weights - ] - flatten_weights = torch.cat(flatten_weights) - pos_weights = flatten_weights[pos_inds] - - pos_attr_preds = None - if self.pred_attrs: - flatten_attr_preds = [ - attr_pred.permute(0, 2, 3, 1).reshape(-1, self.num_attrs) - for attr_pred in attr_preds - ] - flatten_attr_preds = torch.cat(flatten_attr_preds) - pos_attr_preds = flatten_attr_preds[pos_inds] - - return pos_bbox_preds, pos_dir_cls_preds, pos_depth_cls_preds, \ - pos_weights, pos_attr_preds, pos_centerness - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds', - 'depth_cls_preds', 'weights', 'attr_preds', 'centernesses')) - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - depth_cls_preds, - weights, - attr_preds, - centernesses, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - depth_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * self.num_depth_cls. - weights (list[Tensor]): Location-aware weights for each scale - level, each is a 4D-tensor, the channel number is - num_points * self.weight_dim. - attr_preds (list[Tensor]): Attribute scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_attrs. - centernesses (list[Tensor]): Centerness for each scale level, each - is a 4D-tensor, the channel number is num_points * 1. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_3d (list[Tensor]): 3D boxes ground truth with shape of - (num_gts, code_size). - gt_labels_3d (list[Tensor]): same as gt_labels - centers2d (list[Tensor]): 2D centers on the image with shape of - (num_gts, 2). 
- depths (list[Tensor]): Depth ground truth with shape of - (num_gts, ). - attr_labels (list[Tensor]): Attributes indices of each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): specify which bounding boxes can - be ignored when computing the loss. Defaults to None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) == len(dir_cls_preds) == \ - len(depth_cls_preds) == len(weights) == len(centernesses) == \ - len(attr_preds), 'The length of cls_scores, bbox_preds, ' \ - 'dir_cls_preds, depth_cls_preds, weights, centernesses, and' \ - f'attr_preds: {len(cls_scores)}, {len(bbox_preds)}, ' \ - f'{len(dir_cls_preds)}, {len(depth_cls_preds)}, {len(weights)}' \ - f'{len(centernesses)}, {len(attr_preds)} are inconsistent.' - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.get_points(featmap_sizes, bbox_preds[0].dtype, - bbox_preds[0].device) - labels_3d, bbox_targets_3d, centerness_targets, attr_targets = \ - self.get_targets( - all_level_points, gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, attr_labels) - - num_imgs = cls_scores[0].size(0) - # flatten cls_scores and targets - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_labels_3d = torch.cat(labels_3d) - flatten_bbox_targets_3d = torch.cat(bbox_targets_3d) - flatten_centerness_targets = torch.cat(centerness_targets) - flatten_points = torch.cat( - [points.repeat(num_imgs, 1) for points in all_level_points]) - if self.pred_attrs: - flatten_attr_targets = torch.cat(attr_targets) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((flatten_labels_3d >= 0) - & (flatten_labels_3d < bg_class_ind)).nonzero().reshape(-1) - num_pos = len(pos_inds) - - loss_dict = dict() - - loss_dict['loss_cls'] = self.loss_cls( - flatten_cls_scores, - flatten_labels_3d, - avg_factor=num_pos + num_imgs) # avoid num_pos is 0 - - pos_bbox_preds, pos_dir_cls_preds, pos_depth_cls_preds, pos_weights, \ - pos_attr_preds, pos_centerness = self.get_pos_predictions( - bbox_preds, dir_cls_preds, depth_cls_preds, weights, - attr_preds, centernesses, pos_inds, img_metas) - - if num_pos > 0: - pos_bbox_targets_3d = flatten_bbox_targets_3d[pos_inds] - pos_centerness_targets = flatten_centerness_targets[pos_inds] - pos_points = flatten_points[pos_inds] - if self.pred_attrs: - pos_attr_targets = flatten_attr_targets[pos_inds] - if self.use_direction_classifier: - pos_dir_cls_targets = self.get_direction_target( - pos_bbox_targets_3d, self.dir_offset, one_hot=False) - - bbox_weights = pos_centerness_targets.new_ones( - len(pos_centerness_targets), sum(self.group_reg_dims)) - equal_weights = pos_centerness_targets.new_ones( - pos_centerness_targets.shape) - code_weight = self.train_cfg.get('code_weight', None) - if code_weight: - assert len(code_weight) == sum(self.group_reg_dims) - bbox_weights = bbox_weights * bbox_weights.new_tensor( - code_weight) - - if self.diff_rad_by_sin: - pos_bbox_preds, pos_bbox_targets_3d = self.add_sin_difference( - pos_bbox_preds, pos_bbox_targets_3d) - - loss_dict['loss_offset'] = self.loss_bbox( - pos_bbox_preds[:, :2], - pos_bbox_targets_3d[:, :2], - weight=bbox_weights[:, :2], - avg_factor=equal_weights.sum()) - loss_dict['loss_size'] = 
self.loss_bbox( - pos_bbox_preds[:, 3:6], - pos_bbox_targets_3d[:, 3:6], - weight=bbox_weights[:, 3:6], - avg_factor=equal_weights.sum()) - loss_dict['loss_rotsin'] = self.loss_bbox( - pos_bbox_preds[:, 6], - pos_bbox_targets_3d[:, 6], - weight=bbox_weights[:, 6], - avg_factor=equal_weights.sum()) - if self.pred_velo: - loss_dict['loss_velo'] = self.loss_bbox( - pos_bbox_preds[:, 7:9], - pos_bbox_targets_3d[:, 7:9], - weight=bbox_weights[:, 7:9], - avg_factor=equal_weights.sum()) - - proj_bbox2d_inputs = (bbox_preds, pos_dir_cls_preds, labels_3d, - bbox_targets_3d, pos_points, pos_inds, - img_metas) - - # direction classification loss - # TODO: add more check for use_direction_classifier - if self.use_direction_classifier: - loss_dict['loss_dir'] = self.loss_dir( - pos_dir_cls_preds, - pos_dir_cls_targets, - equal_weights, - avg_factor=equal_weights.sum()) - - # init depth loss with the one computed from direct regression - loss_dict['loss_depth'] = self.loss_bbox( - pos_bbox_preds[:, 2], - pos_bbox_targets_3d[:, 2], - weight=bbox_weights[:, 2], - avg_factor=equal_weights.sum()) - # depth classification loss - if self.use_depth_classifier: - pos_prob_depth_preds = self.bbox_coder.decode_prob_depth( - pos_depth_cls_preds, self.depth_range, self.depth_unit, - self.division, self.num_depth_cls) - sig_alpha = torch.sigmoid(self.fuse_lambda) - if self.weight_dim != -1: - loss_fuse_depth = self.loss_depth( - sig_alpha * pos_bbox_preds[:, 2] + - (1 - sig_alpha) * pos_prob_depth_preds, - pos_bbox_targets_3d[:, 2], - sigma=pos_weights[:, 0], - weight=bbox_weights[:, 2], - avg_factor=equal_weights.sum()) - else: - loss_fuse_depth = self.loss_depth( - sig_alpha * pos_bbox_preds[:, 2] + - (1 - sig_alpha) * pos_prob_depth_preds, - pos_bbox_targets_3d[:, 2], - weight=bbox_weights[:, 2], - avg_factor=equal_weights.sum()) - loss_dict['loss_depth'] = loss_fuse_depth - - proj_bbox2d_inputs += (pos_depth_cls_preds, ) - - if self.pred_keypoints: - # use smoothL1 to compute consistency loss for keypoints - # normalize the offsets with strides - proj_bbox2d_preds, pos_decoded_bbox2d_preds, kpts_targets = \ - self.get_proj_bbox2d(*proj_bbox2d_inputs, with_kpts=True) - loss_dict['loss_kpts'] = self.loss_bbox( - pos_bbox_preds[:, self.kpts_start:self.kpts_start + 16], - kpts_targets, - weight=bbox_weights[:, - self.kpts_start:self.kpts_start + 16], - avg_factor=equal_weights.sum()) - - if self.pred_bbox2d: - loss_dict['loss_bbox2d'] = self.loss_bbox2d( - pos_bbox_preds[:, -4:], - pos_bbox_targets_3d[:, -4:], - weight=bbox_weights[:, -4:], - avg_factor=equal_weights.sum()) - if not self.pred_keypoints: - proj_bbox2d_preds, pos_decoded_bbox2d_preds = \ - self.get_proj_bbox2d(*proj_bbox2d_inputs) - loss_dict['loss_consistency'] = self.loss_consistency( - proj_bbox2d_preds, - pos_decoded_bbox2d_preds, - weight=bbox_weights[:, -4:], - avg_factor=equal_weights.sum()) - - loss_dict['loss_centerness'] = self.loss_centerness( - pos_centerness, pos_centerness_targets) - - # attribute classification loss - if self.pred_attrs: - loss_dict['loss_attr'] = self.loss_attr( - pos_attr_preds, - pos_attr_targets, - pos_centerness_targets, - avg_factor=pos_centerness_targets.sum()) - - else: - # need absolute due to possible negative delta x/y - loss_dict['loss_offset'] = pos_bbox_preds[:, :2].sum() - loss_dict['loss_size'] = pos_bbox_preds[:, 3:6].sum() - loss_dict['loss_rotsin'] = pos_bbox_preds[:, 6].sum() - loss_dict['loss_depth'] = pos_bbox_preds[:, 2].sum() - if self.pred_velo: - loss_dict['loss_velo'] = pos_bbox_preds[:, 
7:9].sum() - if self.pred_keypoints: - loss_dict['loss_kpts'] = pos_bbox_preds[:, - self.kpts_start:self. - kpts_start + 16].sum() - if self.pred_bbox2d: - loss_dict['loss_bbox2d'] = pos_bbox_preds[:, -4:].sum() - loss_dict['loss_consistency'] = pos_bbox_preds[:, -4:].sum() - loss_dict['loss_centerness'] = pos_centerness.sum() - if self.use_direction_classifier: - loss_dict['loss_dir'] = pos_dir_cls_preds.sum() - if self.use_depth_classifier: - sig_alpha = torch.sigmoid(self.fuse_lambda) - loss_fuse_depth = \ - sig_alpha * pos_bbox_preds[:, 2].sum() + \ - (1 - sig_alpha) * pos_depth_cls_preds.sum() - if self.weight_dim != -1: - loss_fuse_depth *= torch.exp(-pos_weights[:, 0].sum()) - loss_dict['loss_depth'] = loss_fuse_depth - if self.pred_attrs: - loss_dict['loss_attr'] = pos_attr_preds.sum() - - return loss_dict - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'dir_cls_preds', - 'depth_cls_preds', 'weights', 'attr_preds', 'centernesses')) - def get_bboxes(self, - cls_scores, - bbox_preds, - dir_cls_preds, - depth_cls_preds, - weights, - attr_preds, - centernesses, - img_metas, - cfg=None, - rescale=None): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W) - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * 2. (bin = 2) - depth_cls_preds (list[Tensor]): Box scores for direction class - predictions on each scale level, each is a 4D-tensor, - the channel number is num_points * self.num_depth_cls. - weights (list[Tensor]): Location-aware weights for each scale - level, each is a 4D-tensor, the channel number is - num_points * self.weight_dim. - attr_preds (list[Tensor]): Attribute scores for each scale level - Has shape (N, num_points * num_attrs, H, W) - centernesses (list[Tensor]): Centerness for each scale level with - shape (N, num_points * 1, H, W) - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config, optional): Test / postprocessing configuration, - if None, test_cfg would be used. Defaults to None. - rescale (bool, optional): If True, return boxes in original image - space. Defaults to None. - - Returns: - list[tuple[Tensor]]: Each item in result_list is a tuple, which - consists of predicted 3D boxes, scores, labels, attributes and - 2D boxes (if necessary). - """ - assert len(cls_scores) == len(bbox_preds) == len(dir_cls_preds) == \ - len(depth_cls_preds) == len(weights) == len(centernesses) == \ - len(attr_preds), 'The length of cls_scores, bbox_preds, ' \ - 'dir_cls_preds, depth_cls_preds, weights, centernesses, and' \ - f'attr_preds: {len(cls_scores)}, {len(bbox_preds)}, ' \ - f'{len(dir_cls_preds)}, {len(depth_cls_preds)}, {len(weights)}' \ - f'{len(centernesses)}, {len(attr_preds)} are inconsistent.' 
- num_levels = len(cls_scores) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - mlvl_points = self.get_points(featmap_sizes, bbox_preds[0].dtype, - bbox_preds[0].device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - if self.use_direction_classifier: - dir_cls_pred_list = [ - dir_cls_preds[i][img_id].detach() - for i in range(num_levels) - ] - else: - dir_cls_pred_list = [ - cls_scores[i][img_id].new_full( - [2, *cls_scores[i][img_id].shape[1:]], 0).detach() - for i in range(num_levels) - ] - if self.use_depth_classifier: - depth_cls_pred_list = [ - depth_cls_preds[i][img_id].detach() - for i in range(num_levels) - ] - else: - depth_cls_pred_list = [ - cls_scores[i][img_id].new_full( - [self.num_depth_cls, *cls_scores[i][img_id].shape[1:]], - 0).detach() for i in range(num_levels) - ] - if self.weight_dim != -1: - weight_list = [ - weights[i][img_id].detach() for i in range(num_levels) - ] - else: - weight_list = [ - cls_scores[i][img_id].new_full( - [1, *cls_scores[i][img_id].shape[1:]], 0).detach() - for i in range(num_levels) - ] - if self.pred_attrs: - attr_pred_list = [ - attr_preds[i][img_id].detach() for i in range(num_levels) - ] - else: - attr_pred_list = [ - cls_scores[i][img_id].new_full( - [self.num_attrs, *cls_scores[i][img_id].shape[1:]], - self.attr_background_label).detach() - for i in range(num_levels) - ] - centerness_pred_list = [ - centernesses[i][img_id].detach() for i in range(num_levels) - ] - input_meta = img_metas[img_id] - det_bboxes = self._get_bboxes_single( - cls_score_list, bbox_pred_list, dir_cls_pred_list, - depth_cls_pred_list, weight_list, attr_pred_list, - centerness_pred_list, mlvl_points, input_meta, cfg, rescale) - result_list.append(det_bboxes) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - dir_cls_preds, - depth_cls_preds, - weights, - attr_preds, - centernesses, - mlvl_points, - input_meta, - cfg, - rescale=False): - """Transform outputs for a single batch item into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for a single scale level - Has shape (num_points * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for a single scale - level with shape (num_points * bbox_code_size, H, W). - dir_cls_preds (list[Tensor]): Box scores for direction class - predictions on a single scale level with shape - (num_points * 2, H, W) - depth_cls_preds (list[Tensor]): Box scores for probabilistic depth - predictions on a single scale level with shape - (num_points * self.num_depth_cls, H, W) - weights (list[Tensor]): Location-aware weight maps on a single - scale level with shape (num_points * self.weight_dim, H, W). - attr_preds (list[Tensor]): Attribute scores for each scale level - Has shape (N, num_points * num_attrs, H, W) - centernesses (list[Tensor]): Centerness for a single scale level - with shape (num_points, H, W). - mlvl_points (list[Tensor]): Box reference for a single scale level - with shape (num_total_points, 2). - input_meta (dict): Metadata of input image. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool, optional): If True, return boxes in original image - space. Defaults to False. - - Returns: - tuples[Tensor]: Predicted 3D boxes, scores, labels, attributes and - 2D boxes (if necessary). 
- """ - view = np.array(input_meta['cam2img']) - scale_factor = input_meta['scale_factor'] - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_points) - mlvl_centers2d = [] - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_dir_scores = [] - mlvl_attr_scores = [] - mlvl_centerness = [] - mlvl_depth_cls_scores = [] - mlvl_depth_uncertainty = [] - mlvl_bboxes2d = None - if self.pred_bbox2d: - mlvl_bboxes2d = [] - - for cls_score, bbox_pred, dir_cls_pred, depth_cls_pred, weight, \ - attr_pred, centerness, points in zip( - cls_scores, bbox_preds, dir_cls_preds, depth_cls_preds, - weights, attr_preds, centernesses, mlvl_points): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - dir_cls_pred = dir_cls_pred.permute(1, 2, 0).reshape(-1, 2) - dir_cls_score = torch.max(dir_cls_pred, dim=-1)[1] - depth_cls_pred = depth_cls_pred.permute(1, 2, 0).reshape( - -1, self.num_depth_cls) - depth_cls_score = F.softmax( - depth_cls_pred, dim=-1).topk( - k=2, dim=-1)[0].mean(dim=-1) - if self.weight_dim != -1: - weight = weight.permute(1, 2, 0).reshape(-1, self.weight_dim) - else: - weight = weight.permute(1, 2, 0).reshape(-1, 1) - depth_uncertainty = torch.exp(-weight[:, -1]) - attr_pred = attr_pred.permute(1, 2, 0).reshape(-1, self.num_attrs) - attr_score = torch.max(attr_pred, dim=-1)[1] - centerness = centerness.permute(1, 2, 0).reshape(-1).sigmoid() - - bbox_pred = bbox_pred.permute(1, 2, - 0).reshape(-1, - sum(self.group_reg_dims)) - bbox_pred3d = bbox_pred[:, :self.bbox_coder.bbox_code_size] - if self.pred_bbox2d: - bbox_pred2d = bbox_pred[:, -4:] - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - merged_scores = scores * centerness[:, None] - if self.use_depth_classifier: - merged_scores *= depth_cls_score[:, None] - if self.weight_dim != -1: - merged_scores *= depth_uncertainty[:, None] - max_scores, _ = merged_scores.max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - points = points[topk_inds, :] - bbox_pred3d = bbox_pred3d[topk_inds, :] - scores = scores[topk_inds, :] - dir_cls_pred = dir_cls_pred[topk_inds, :] - depth_cls_pred = depth_cls_pred[topk_inds, :] - centerness = centerness[topk_inds] - dir_cls_score = dir_cls_score[topk_inds] - depth_cls_score = depth_cls_score[topk_inds] - depth_uncertainty = depth_uncertainty[topk_inds] - attr_score = attr_score[topk_inds] - if self.pred_bbox2d: - bbox_pred2d = bbox_pred2d[topk_inds, :] - # change the offset to actual center predictions - bbox_pred3d[:, :2] = points - bbox_pred3d[:, :2] - if rescale: - bbox_pred3d[:, :2] /= bbox_pred3d[:, :2].new_tensor( - scale_factor) - if self.pred_bbox2d: - bbox_pred2d /= bbox_pred2d.new_tensor(scale_factor) - if self.use_depth_classifier: - prob_depth_pred = self.bbox_coder.decode_prob_depth( - depth_cls_pred, self.depth_range, self.depth_unit, - self.division, self.num_depth_cls) - sig_alpha = torch.sigmoid(self.fuse_lambda) - bbox_pred3d[:, 2] = sig_alpha * bbox_pred3d[:, 2] + \ - (1 - sig_alpha) * prob_depth_pred - pred_center2d = bbox_pred3d[:, :3].clone() - bbox_pred3d[:, :3] = points_img2cam(bbox_pred3d[:, :3], view) - mlvl_centers2d.append(pred_center2d) - mlvl_bboxes.append(bbox_pred3d) - mlvl_scores.append(scores) - mlvl_dir_scores.append(dir_cls_score) - mlvl_depth_cls_scores.append(depth_cls_score) - mlvl_attr_scores.append(attr_score) - mlvl_centerness.append(centerness) - mlvl_depth_uncertainty.append(depth_uncertainty) - if 
self.pred_bbox2d: - bbox_pred2d = distance2bbox( - points, bbox_pred2d, max_shape=input_meta['img_shape']) - mlvl_bboxes2d.append(bbox_pred2d) - - mlvl_centers2d = torch.cat(mlvl_centers2d) - mlvl_bboxes = torch.cat(mlvl_bboxes) - mlvl_dir_scores = torch.cat(mlvl_dir_scores) - if self.pred_bbox2d: - mlvl_bboxes2d = torch.cat(mlvl_bboxes2d) - - # change local yaw to global yaw for 3D nms - cam2img = torch.eye( - 4, dtype=mlvl_centers2d.dtype, device=mlvl_centers2d.device) - cam2img[:view.shape[0], :view.shape[1]] = \ - mlvl_centers2d.new_tensor(view) - mlvl_bboxes = self.bbox_coder.decode_yaw(mlvl_bboxes, mlvl_centers2d, - mlvl_dir_scores, - self.dir_offset, cam2img) - - mlvl_bboxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - mlvl_bboxes, - box_dim=self.bbox_coder.bbox_code_size, - origin=(0.5, 0.5, 0.5)).bev) - - mlvl_scores = torch.cat(mlvl_scores) - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - mlvl_attr_scores = torch.cat(mlvl_attr_scores) - mlvl_centerness = torch.cat(mlvl_centerness) - # no scale_factors in box3d_multiclass_nms - # Then we multiply it from outside - mlvl_nms_scores = mlvl_scores * mlvl_centerness[:, None] - if self.use_depth_classifier: # multiply the depth confidence - mlvl_depth_cls_scores = torch.cat(mlvl_depth_cls_scores) - mlvl_nms_scores *= mlvl_depth_cls_scores[:, None] - if self.weight_dim != -1: - mlvl_depth_uncertainty = torch.cat(mlvl_depth_uncertainty) - mlvl_nms_scores *= mlvl_depth_uncertainty[:, None] - results = box3d_multiclass_nms(mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_nms_scores, cfg.score_thr, - cfg.max_per_img, cfg, mlvl_dir_scores, - mlvl_attr_scores, mlvl_bboxes2d) - bboxes, scores, labels, dir_scores, attrs = results[0:5] - attrs = attrs.to(labels.dtype) # change data type to int - bboxes = input_meta['box_type_3d']( - bboxes, - box_dim=self.bbox_coder.bbox_code_size, - origin=(0.5, 0.5, 0.5)) - # Note that the predictions use origin (0.5, 0.5, 0.5) - # Due to the ground truth centers2d are the gravity center of objects - # v0.10.0 fix inplace operation to the input tensor of cam_box3d - # So here we also need to add origin=(0.5, 0.5, 0.5) - if not self.pred_attrs: - attrs = None - - outputs = (bboxes, scores, labels, attrs) - if self.pred_bbox2d: - bboxes2d = results[-1] - bboxes2d = torch.cat([bboxes2d, scores[:, None]], dim=1) - outputs = outputs + (bboxes2d, ) - - return outputs - - def get_targets(self, points, gt_bboxes_list, gt_labels_list, - gt_bboxes_3d_list, gt_labels_3d_list, centers2d_list, - depths_list, attr_labels_list): - """Compute regression, classification and centerss targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - gt_bboxes_3d_list (list[Tensor]): 3D Ground truth bboxes of each - image, each has shape (num_gt, bbox_code_size). - gt_labels_3d_list (list[Tensor]): 3D Ground truth labels of each - box, each has shape (num_gt,). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - each has shape (num_gt, 2). - depths_list (list[Tensor]): Depth of projected 3D centers onto 2D - image, each has shape (num_gt, 1). 
- attr_labels_list (list[Tensor]): Attribute labels of each box, - each has shape (num_gt,). - - Returns: - tuple: - concat_lvl_labels (list[Tensor]): Labels of each level. \ - concat_lvl_bbox_targets (list[Tensor]): BBox targets of each \ - level. - """ - assert len(points) == len(self.regress_ranges) - num_levels = len(points) - # expand regress ranges to align with points - expanded_regress_ranges = [ - points[i].new_tensor(self.regress_ranges[i])[None].expand_as( - points[i]) for i in range(num_levels) - ] - # concat all levels points and regress ranges - concat_regress_ranges = torch.cat(expanded_regress_ranges, dim=0) - concat_points = torch.cat(points, dim=0) - - # the number of points per img, per lvl - num_points = [center.size(0) for center in points] - - if attr_labels_list is None: - attr_labels_list = [ - gt_labels.new_full(gt_labels.shape, self.attr_background_label) - for gt_labels in gt_labels_list - ] - - # get labels and bbox_targets of each image - _, bbox_targets_list, labels_3d_list, bbox_targets_3d_list, \ - centerness_targets_list, attr_targets_list = multi_apply( - self._get_target_single, - gt_bboxes_list, - gt_labels_list, - gt_bboxes_3d_list, - gt_labels_3d_list, - centers2d_list, - depths_list, - attr_labels_list, - points=concat_points, - regress_ranges=concat_regress_ranges, - num_points_per_lvl=num_points) - - # split to per img, per level - bbox_targets_list = [ - bbox_targets.split(num_points, 0) - for bbox_targets in bbox_targets_list - ] - labels_3d_list = [ - labels_3d.split(num_points, 0) for labels_3d in labels_3d_list - ] - bbox_targets_3d_list = [ - bbox_targets_3d.split(num_points, 0) - for bbox_targets_3d in bbox_targets_3d_list - ] - centerness_targets_list = [ - centerness_targets.split(num_points, 0) - for centerness_targets in centerness_targets_list - ] - attr_targets_list = [ - attr_targets.split(num_points, 0) - for attr_targets in attr_targets_list - ] - - # concat per level image - concat_lvl_labels_3d = [] - concat_lvl_bbox_targets_3d = [] - concat_lvl_centerness_targets = [] - concat_lvl_attr_targets = [] - for i in range(num_levels): - concat_lvl_labels_3d.append( - torch.cat([labels[i] for labels in labels_3d_list])) - concat_lvl_centerness_targets.append( - torch.cat([ - centerness_targets[i] - for centerness_targets in centerness_targets_list - ])) - bbox_targets_3d = torch.cat([ - bbox_targets_3d[i] for bbox_targets_3d in bbox_targets_3d_list - ]) - if self.pred_bbox2d: - bbox_targets = torch.cat( - [bbox_targets[i] for bbox_targets in bbox_targets_list]) - bbox_targets_3d = torch.cat([bbox_targets_3d, bbox_targets], - dim=1) - concat_lvl_attr_targets.append( - torch.cat( - [attr_targets[i] for attr_targets in attr_targets_list])) - if self.norm_on_bbox: - bbox_targets_3d[:, :2] = \ - bbox_targets_3d[:, :2] / self.strides[i] - if self.pred_bbox2d: - bbox_targets_3d[:, -4:] = \ - bbox_targets_3d[:, -4:] / self.strides[i] - concat_lvl_bbox_targets_3d.append(bbox_targets_3d) - return concat_lvl_labels_3d, concat_lvl_bbox_targets_3d, \ - concat_lvl_centerness_targets, concat_lvl_attr_targets diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/point_rpn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/point_rpn_head.py deleted file mode 100644 index f73144f2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/point_rpn_head.py +++ /dev/null @@ -1,383 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. 
-# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn - -from mmdet3d.core import xywhr2xyxyr -from mmdet3d.core.bbox.structures import (DepthInstance3DBoxes, - LiDARInstance3DBoxes) -from mmdet3d.core.post_processing import nms_bev, nms_normal_bev -from mmdet.core import build_bbox_coder, multi_apply -from ..builder import HEADS, build_loss - - -@HEADS.register_module() -class PointRPNHead(BaseModule): - """RPN module for PointRCNN. - - Args: - num_classes (int): Number of classes. - train_cfg (dict): Train configs. - test_cfg (dict): Test configs. - pred_layer_cfg (dict, optional): Config of classification and - regression prediction layers. Defaults to None. - enlarge_width (float, optional): Enlarge bbox for each side to ignore - close points. Defaults to 0.1. - cls_loss (dict, optional): Config of direction classification loss. - Defaults to None. - bbox_loss (dict, optional): Config of localization loss. - Defaults to None. - bbox_coder (dict, optional): Config dict of box coders. - Defaults to None. - init_cfg (dict, optional): Config of initialization. Defaults to None. - """ - - def __init__(self, - num_classes, - train_cfg, - test_cfg, - pred_layer_cfg=None, - enlarge_width=0.1, - cls_loss=None, - bbox_loss=None, - bbox_coder=None, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.enlarge_width = enlarge_width - - # build loss function - self.bbox_loss = build_loss(bbox_loss) - self.cls_loss = build_loss(cls_loss) - - # build box coder - self.bbox_coder = build_bbox_coder(bbox_coder) - - # build pred conv - self.cls_layers = self._make_fc_layers( - fc_cfg=pred_layer_cfg.cls_linear_channels, - input_channels=pred_layer_cfg.in_channels, - output_channels=self._get_cls_out_channels()) - - self.reg_layers = self._make_fc_layers( - fc_cfg=pred_layer_cfg.reg_linear_channels, - input_channels=pred_layer_cfg.in_channels, - output_channels=self._get_reg_out_channels()) - - def _make_fc_layers(self, fc_cfg, input_channels, output_channels): - """Make fully connect layers. - - Args: - fc_cfg (dict): Config of fully connect. - input_channels (int): Input channels for fc_layers. - output_channels (int): Input channels for fc_layers. - - Returns: - nn.Sequential: Fully connect layers. - """ - fc_layers = [] - c_in = input_channels - for k in range(0, fc_cfg.__len__()): - fc_layers.extend([ - nn.Linear(c_in, fc_cfg[k], bias=False), - nn.BatchNorm1d(fc_cfg[k]), - nn.ReLU(), - ]) - c_in = fc_cfg[k] - fc_layers.append(nn.Linear(c_in, output_channels, bias=True)) - return nn.Sequential(*fc_layers) - - def _get_cls_out_channels(self): - """Return the channel number of classification outputs.""" - # Class numbers (k) + objectness (1) - return self.num_classes - - def _get_reg_out_channels(self): - """Return the channel number of regression outputs.""" - # Bbox classification and regression - # (center residual (3), size regression (3) - # torch.cos(yaw) (1), torch.sin(yaw) (1) - return self.bbox_coder.code_size - - def forward(self, feat_dict): - """Forward pass. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - tuple[list[torch.Tensor]]: Predicted boxes and classification - scores. 
- """ - point_features = feat_dict['fp_features'] - point_features = point_features.permute(0, 2, 1).contiguous() - batch_size = point_features.shape[0] - feat_cls = point_features.view(-1, point_features.shape[-1]) - feat_reg = point_features.view(-1, point_features.shape[-1]) - - point_cls_preds = self.cls_layers(feat_cls).reshape( - batch_size, -1, self._get_cls_out_channels()) - point_box_preds = self.reg_layers(feat_reg).reshape( - batch_size, -1, self._get_reg_out_channels()) - return point_box_preds, point_cls_preds - - @force_fp32(apply_to=('bbox_preds')) - def loss(self, - bbox_preds, - cls_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - img_metas=None): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of PointRCNN RPN_Head. - cls_preds (dict): Classification from forward of PointRCNN - RPN_Head. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - img_metas (list[dict], Optional): Contain pcd and img's meta info. - Defaults to None. - - Returns: - dict: Losses of PointRCNN RPN module. - """ - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d) - (bbox_targets, mask_targets, positive_mask, negative_mask, - box_loss_weights, point_targets) = targets - - # bbox loss - bbox_loss = self.bbox_loss(bbox_preds, bbox_targets, - box_loss_weights.unsqueeze(-1)) - # calculate semantic loss - semantic_points = cls_preds.reshape(-1, self.num_classes) - semantic_targets = mask_targets - semantic_targets[negative_mask] = self.num_classes - semantic_points_label = semantic_targets - # for ignore, but now we do not have ignored label - semantic_loss_weight = negative_mask.float() + positive_mask.float() - semantic_loss = self.cls_loss(semantic_points, - semantic_points_label.reshape(-1), - semantic_loss_weight.reshape(-1)) - semantic_loss /= positive_mask.float().sum() - losses = dict(bbox_loss=bbox_loss, semantic_loss=semantic_loss) - - return losses - - def get_targets(self, points, gt_bboxes_3d, gt_labels_3d): - """Generate targets of PointRCNN RPN head. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - - Returns: - tuple[torch.Tensor]: Targets of PointRCNN RPN head. - """ - # find empty example - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - - (bbox_targets, mask_targets, positive_mask, negative_mask, - point_targets) = multi_apply(self.get_targets_single, points, - gt_bboxes_3d, gt_labels_3d) - - bbox_targets = torch.stack(bbox_targets) - mask_targets = torch.stack(mask_targets) - positive_mask = torch.stack(positive_mask) - negative_mask = torch.stack(negative_mask) - box_loss_weights = positive_mask / (positive_mask.sum() + 1e-6) - - return (bbox_targets, mask_targets, positive_mask, negative_mask, - box_loss_weights, point_targets) - - def get_targets_single(self, points, gt_bboxes_3d, gt_labels_3d): - """Generate targets of PointRCNN RPN head for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. 
- gt_labels_3d (torch.Tensor): Labels of each batch. - - Returns: - tuple[torch.Tensor]: Targets of ssd3d head. - """ - gt_bboxes_3d = gt_bboxes_3d.to(points.device) - - valid_gt = gt_labels_3d != -1 - gt_bboxes_3d = gt_bboxes_3d[valid_gt] - gt_labels_3d = gt_labels_3d[valid_gt] - - # transform the bbox coordinate to the point cloud coordinate - gt_bboxes_3d_tensor = gt_bboxes_3d.tensor.clone() - gt_bboxes_3d_tensor[..., 2] += gt_bboxes_3d_tensor[..., 5] / 2 - - points_mask, assignment = self._assign_targets_by_points_inside( - gt_bboxes_3d, points) - gt_bboxes_3d_tensor = gt_bboxes_3d_tensor[assignment] - mask_targets = gt_labels_3d[assignment] - - bbox_targets = self.bbox_coder.encode(gt_bboxes_3d_tensor, - points[..., 0:3], mask_targets) - - positive_mask = (points_mask.max(1)[0] > 0) - # add ignore_mask - extend_gt_bboxes_3d = gt_bboxes_3d.enlarged_box(self.enlarge_width) - points_mask, _ = self._assign_targets_by_points_inside( - extend_gt_bboxes_3d, points) - negative_mask = (points_mask.max(1)[0] == 0) - - point_targets = points[..., 0:3] - return (bbox_targets, mask_targets, positive_mask, negative_mask, - point_targets) - - def get_bboxes(self, - points, - bbox_preds, - cls_preds, - input_metas, - rescale=False): - """Generate bboxes from RPN head predictions. - - Args: - points (torch.Tensor): Input points. - bbox_preds (dict): Regression predictions from PointRCNN head. - cls_preds (dict): Class scores predictions from PointRCNN head. - input_metas (list[dict]): Point cloud and image's meta info. - rescale (bool, optional): Whether to rescale bboxes. - Defaults to False. - - Returns: - list[tuple[torch.Tensor]]: Bounding boxes, scores and labels. - """ - sem_scores = cls_preds.sigmoid() - obj_scores = sem_scores.max(-1)[0] - object_class = sem_scores.argmax(dim=-1) - - batch_size = sem_scores.shape[0] - results = list() - for b in range(batch_size): - bbox3d = self.bbox_coder.decode(bbox_preds[b], points[b, ..., :3], - object_class[b]) - bbox_selected, score_selected, labels, cls_preds_selected = \ - self.class_agnostic_nms(obj_scores[b], sem_scores[b], bbox3d, - points[b, ..., :3], input_metas[b]) - bbox = input_metas[b]['box_type_3d']( - bbox_selected.clone(), - box_dim=bbox_selected.shape[-1], - with_yaw=True) - results.append((bbox, score_selected, labels, cls_preds_selected)) - return results - - def class_agnostic_nms(self, obj_scores, sem_scores, bbox, points, - input_meta): - """Class agnostic nms. - - Args: - obj_scores (torch.Tensor): Objectness score of bounding boxes. - sem_scores (torch.Tensor): Semantic class score of bounding boxes. - bbox (torch.Tensor): Predicted bounding boxes. - - Returns: - tuple[torch.Tensor]: Bounding boxes, scores and labels. 
- """ - nms_cfg = self.test_cfg.nms_cfg if not self.training \ - else self.train_cfg.nms_cfg - if nms_cfg.use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - - num_bbox = bbox.shape[0] - bbox = input_meta['box_type_3d']( - bbox.clone(), - box_dim=bbox.shape[-1], - with_yaw=True, - origin=(0.5, 0.5, 0.5)) - - if isinstance(bbox, LiDARInstance3DBoxes): - box_idx = bbox.points_in_boxes(points) - box_indices = box_idx.new_zeros([num_bbox + 1]) - box_idx[box_idx == -1] = num_bbox - box_indices.scatter_add_(0, box_idx.long(), - box_idx.new_ones(box_idx.shape)) - box_indices = box_indices[:-1] - nonempty_box_mask = box_indices >= 0 - elif isinstance(bbox, DepthInstance3DBoxes): - box_indices = bbox.points_in_boxes(points) - nonempty_box_mask = box_indices.T.sum(1) >= 0 - else: - raise NotImplementedError('Unsupported bbox type!') - - bbox = bbox[nonempty_box_mask] - - if self.test_cfg.score_thr is not None: - score_thr = self.test_cfg.score_thr - keep = (obj_scores >= score_thr) - obj_scores = obj_scores[keep] - sem_scores = sem_scores[keep] - bbox = bbox.tensor[keep] - - if obj_scores.shape[0] > 0: - topk = min(nms_cfg.nms_pre, obj_scores.shape[0]) - obj_scores_nms, indices = torch.topk(obj_scores, k=topk) - bbox_for_nms = xywhr2xyxyr(bbox[indices].bev) - sem_scores_nms = sem_scores[indices] - - keep = nms_func(bbox_for_nms, obj_scores_nms, nms_cfg.iou_thr) - keep = keep[:nms_cfg.nms_post] - - bbox_selected = bbox.tensor[indices][keep] - score_selected = obj_scores_nms[keep] - cls_preds = sem_scores_nms[keep] - labels = torch.argmax(cls_preds, -1) - else: - bbox_selected = bbox.tensor - score_selected = obj_scores.new_zeros([0]) - labels = obj_scores.new_zeros([0]) - cls_preds = obj_scores.new_zeros([0, sem_scores.shape[-1]]) - - return bbox_selected, score_selected, labels, cls_preds - - def _assign_targets_by_points_inside(self, bboxes_3d, points): - """Compute assignment by checking whether point is inside bbox. - - Args: - bboxes_3d (:obj:`BaseInstance3DBoxes`): Instance of bounding boxes. - points (torch.Tensor): Points of a batch. - - Returns: - tuple[torch.Tensor]: Flags indicating whether each point is - inside bbox and the index of box where each point are in. - """ - # TODO: align points_in_boxes function in each box_structures - num_bbox = bboxes_3d.tensor.shape[0] - if isinstance(bboxes_3d, LiDARInstance3DBoxes): - assignment = bboxes_3d.points_in_boxes(points[:, 0:3]).long() - points_mask = assignment.new_zeros( - [assignment.shape[0], num_bbox + 1]) - assignment[assignment == -1] = num_bbox - points_mask.scatter_(1, assignment.unsqueeze(1), 1) - points_mask = points_mask[:, :-1] - assignment[assignment == num_bbox] = num_bbox - 1 - elif isinstance(bboxes_3d, DepthInstance3DBoxes): - points_mask = bboxes_3d.points_in_boxes(points) - assignment = points_mask.argmax(dim=-1) - else: - raise NotImplementedError('Unsupported bbox type!') - - return points_mask, assignment diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/shape_aware_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/shape_aware_head.py deleted file mode 100644 index 758a82de..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/shape_aware_head.py +++ /dev/null @@ -1,517 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import numpy as np -import torch -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch import nn as nn - -from mmdet3d.core import box3d_multiclass_nms, limit_period, xywhr2xyxyr -from mmdet.core import multi_apply -from ..builder import HEADS, build_head -from .anchor3d_head import Anchor3DHead - - -@HEADS.register_module() -class BaseShapeHead(BaseModule): - """Base Shape-aware Head in Shape Signature Network. - - Note: - This base shape-aware grouping head uses default settings for small - objects. For large and huge objects, it is recommended to use - heavier heads, like (64, 64, 64) and (128, 128, 64, 64, 64) in - shared conv channels, (2, 1, 1) and (2, 1, 2, 1, 1) in shared - conv strides. For tiny objects, we can use smaller heads, like - (32, 32) channels and (1, 1) strides. - - Args: - num_cls (int): Number of classes. - num_base_anchors (int): Number of anchors per location. - box_code_size (int): The dimension of boxes to be encoded. - in_channels (int): Input channels for convolutional layers. - shared_conv_channels (tuple, optional): Channels for shared - convolutional layers. Default: (64, 64). - shared_conv_strides (tuple, optional): Strides for shared - convolutional layers. Default: (1, 1). - use_direction_classifier (bool, optional): Whether to use direction - classifier. Default: True. - conv_cfg (dict, optional): Config of conv layer. - Default: dict(type='Conv2d') - norm_cfg (dict, optional): Config of norm layer. - Default: dict(type='BN2d'). - bias (bool | str, optional): Type of bias. Default: False. - """ - - def __init__(self, - num_cls, - num_base_anchors, - box_code_size, - in_channels, - shared_conv_channels=(64, 64), - shared_conv_strides=(1, 1), - use_direction_classifier=True, - conv_cfg=dict(type='Conv2d'), - norm_cfg=dict(type='BN2d'), - bias=False, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.num_cls = num_cls - self.num_base_anchors = num_base_anchors - self.use_direction_classifier = use_direction_classifier - self.box_code_size = box_code_size - - assert len(shared_conv_channels) == len(shared_conv_strides), \ - 'Lengths of channels and strides list should be equal.' 
- - self.shared_conv_channels = [in_channels] + list(shared_conv_channels) - self.shared_conv_strides = list(shared_conv_strides) - - shared_conv = [] - for i in range(len(self.shared_conv_strides)): - shared_conv.append( - ConvModule( - self.shared_conv_channels[i], - self.shared_conv_channels[i + 1], - kernel_size=3, - stride=self.shared_conv_strides[i], - padding=1, - conv_cfg=conv_cfg, - bias=bias, - norm_cfg=norm_cfg)) - - self.shared_conv = nn.Sequential(*shared_conv) - - out_channels = self.shared_conv_channels[-1] - self.conv_cls = nn.Conv2d(out_channels, num_base_anchors * num_cls, 1) - self.conv_reg = nn.Conv2d(out_channels, - num_base_anchors * box_code_size, 1) - - if use_direction_classifier: - self.conv_dir_cls = nn.Conv2d(out_channels, num_base_anchors * 2, - 1) - if init_cfg is None: - if use_direction_classifier: - self.init_cfg = dict( - type='Kaiming', - layer='Conv2d', - override=[ - dict(type='Normal', name='conv_reg', std=0.01), - dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01), - dict( - type='Normal', - name='conv_dir_cls', - std=0.01, - bias_prob=0.01) - ]) - else: - self.init_cfg = dict( - type='Kaiming', - layer='Conv2d', - override=[ - dict(type='Normal', name='conv_reg', std=0.01), - dict( - type='Normal', - name='conv_cls', - std=0.01, - bias_prob=0.01) - ]) - - def forward(self, x): - """Forward function for SmallHead. - - Args: - x (torch.Tensor): Input feature map with the shape of - [B, C, H, W]. - - Returns: - dict[torch.Tensor]: Contain score of each class, bbox - regression and direction classification predictions. - Note that all the returned tensors are reshaped as - [bs*num_base_anchors*H*W, num_cls/box_code_size/dir_bins]. - It is more convenient to concat anchors for different - classes even though they have different feature map sizes. - """ - x = self.shared_conv(x) - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - featmap_size = bbox_pred.shape[-2:] - H, W = featmap_size - B = bbox_pred.shape[0] - cls_score = cls_score.view(-1, self.num_base_anchors, self.num_cls, H, - W).permute(0, 1, 3, 4, - 2).reshape(B, -1, self.num_cls) - bbox_pred = bbox_pred.view(-1, self.num_base_anchors, - self.box_code_size, H, W).permute( - 0, 1, 3, 4, - 2).reshape(B, -1, self.box_code_size) - - dir_cls_preds = None - if self.use_direction_classifier: - dir_cls_preds = self.conv_dir_cls(x) - dir_cls_preds = dir_cls_preds.view(-1, self.num_base_anchors, 2, H, - W).permute(0, 1, 3, 4, - 2).reshape(B, -1, 2) - ret = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - dir_cls_preds=dir_cls_preds, - featmap_size=featmap_size) - return ret - - -@HEADS.register_module() -class ShapeAwareHead(Anchor3DHead): - """Shape-aware grouping head for SSN. - - Args: - tasks (dict): Shape-aware groups of multi-class objects. - assign_per_class (bool, optional): Whether to do assignment for each - class. Default: True. - kwargs (dict): Other arguments are the same as those in - :class:`Anchor3DHead`. 
- """ - - def __init__(self, tasks, assign_per_class=True, init_cfg=None, **kwargs): - self.tasks = tasks - self.featmap_sizes = [] - super().__init__( - assign_per_class=assign_per_class, init_cfg=init_cfg, **kwargs) - - def init_weights(self): - if not self._is_init: - for m in self.heads: - if hasattr(m, 'init_weights'): - m.init_weights() - self._is_init = True - else: - warnings.warn(f'init_weights of {self.__class__.__name__} has ' - f'been called more than once.') - - def _init_layers(self): - """Initialize neural network layers of the head.""" - self.heads = nn.ModuleList() - cls_ptr = 0 - for task in self.tasks: - sizes = self.anchor_generator.sizes[cls_ptr:cls_ptr + - task['num_class']] - num_size = torch.tensor(sizes).reshape(-1, 3).size(0) - num_rot = len(self.anchor_generator.rotations) - num_base_anchors = num_rot * num_size - branch = dict( - type='BaseShapeHead', - num_cls=self.num_classes, - num_base_anchors=num_base_anchors, - box_code_size=self.box_code_size, - in_channels=self.in_channels, - shared_conv_channels=task['shared_conv_channels'], - shared_conv_strides=task['shared_conv_strides']) - self.heads.append(build_head(branch)) - cls_ptr += task['num_class'] - - def forward_single(self, x): - """Forward function on a single-scale feature map. - - Args: - x (torch.Tensor): Input features. - Returns: - tuple[torch.Tensor]: Contain score of each class, bbox - regression and direction classification predictions. - """ - results = [] - - for head in self.heads: - results.append(head(x)) - - cls_score = torch.cat([result['cls_score'] for result in results], - dim=1) - bbox_pred = torch.cat([result['bbox_pred'] for result in results], - dim=1) - dir_cls_preds = None - if self.use_direction_classifier: - dir_cls_preds = torch.cat( - [result['dir_cls_preds'] for result in results], dim=1) - - self.featmap_sizes = [] - for i, task in enumerate(self.tasks): - for _ in range(task['num_class']): - self.featmap_sizes.append(results[i]['featmap_size']) - assert len(self.featmap_sizes) == len(self.anchor_generator.ranges), \ - 'Length of feature map sizes must be equal to length of ' + \ - 'different ranges of anchor generator.' - - return cls_score, bbox_pred, dir_cls_preds - - def loss_single(self, cls_score, bbox_pred, dir_cls_preds, labels, - label_weights, bbox_targets, bbox_weights, dir_targets, - dir_weights, num_total_samples): - """Calculate loss of Single-level results. - - Args: - cls_score (torch.Tensor): Class score in single-level. - bbox_pred (torch.Tensor): Bbox prediction in single-level. - dir_cls_preds (torch.Tensor): Predictions of direction class - in single-level. - labels (torch.Tensor): Labels of class. - label_weights (torch.Tensor): Weights of class loss. - bbox_targets (torch.Tensor): Targets of bbox predictions. - bbox_weights (torch.Tensor): Weights of bbox loss. - dir_targets (torch.Tensor): Targets of direction predictions. - dir_weights (torch.Tensor): Weights of direction loss. - num_total_samples (int): The number of valid samples. - - Returns: - tuple[torch.Tensor]: Losses of class, bbox - and direction, respectively. 
- """ - # classification loss - if num_total_samples is None: - num_total_samples = int(cls_score.shape[0]) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.reshape(-1, self.num_classes) - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - - # regression loss - bbox_targets = bbox_targets.reshape(-1, self.box_code_size) - bbox_weights = bbox_weights.reshape(-1, self.box_code_size) - code_weight = self.train_cfg.get('code_weight', None) - - if code_weight: - bbox_weights = bbox_weights * bbox_weights.new_tensor(code_weight) - bbox_pred = bbox_pred.reshape(-1, self.box_code_size) - if self.diff_rad_by_sin: - bbox_pred, bbox_targets = self.add_sin_difference( - bbox_pred, bbox_targets) - loss_bbox = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - - # direction classification loss - loss_dir = None - if self.use_direction_classifier: - dir_cls_preds = dir_cls_preds.reshape(-1, 2) - dir_targets = dir_targets.reshape(-1) - dir_weights = dir_weights.reshape(-1) - loss_dir = self.loss_dir( - dir_cls_preds, - dir_targets, - dir_weights, - avg_factor=num_total_samples) - - return loss_cls, loss_bbox, loss_dir - - def loss(self, - cls_scores, - bbox_preds, - dir_cls_preds, - gt_bboxes, - gt_labels, - input_metas, - gt_bboxes_ignore=None): - """Calculate losses. - - Args: - cls_scores (list[torch.Tensor]): Multi-level class scores. - bbox_preds (list[torch.Tensor]): Multi-level bbox predictions. - dir_cls_preds (list[torch.Tensor]): Multi-level direction - class predictions. - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): Gt bboxes - of each sample. - gt_labels (list[torch.Tensor]): Gt labels of each sample. - input_metas (list[dict]): Contain pcd and img's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict[str, list[torch.Tensor]]: Classification, bbox, and - direction losses of each level. - - - loss_cls (list[torch.Tensor]): Classification losses. - - loss_bbox (list[torch.Tensor]): Box regression losses. - - loss_dir (list[torch.Tensor]): Direction classification - losses. - """ - device = cls_scores[0].device - anchor_list = self.get_anchors( - self.featmap_sizes, input_metas, device=device) - cls_reg_targets = self.anchor_target_3d( - anchor_list, - gt_bboxes, - input_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - num_classes=self.num_classes, - sampling=self.sampling) - - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - dir_targets_list, dir_weights_list, num_total_pos, - num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # num_total_samples = None - losses_cls, losses_bbox, losses_dir = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - dir_cls_preds, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - dir_targets_list, - dir_weights_list, - num_total_samples=num_total_samples) - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_dir=losses_dir) - - def get_bboxes(self, - cls_scores, - bbox_preds, - dir_cls_preds, - input_metas, - cfg=None, - rescale=False): - """Get bboxes of anchor head. - - Args: - cls_scores (list[torch.Tensor]): Multi-level class scores. - bbox_preds (list[torch.Tensor]): Multi-level bbox predictions. 
- dir_cls_preds (list[torch.Tensor]): Multi-level direction - class predictions. - input_metas (list[dict]): Contain pcd and img's meta info. - cfg (:obj:`ConfigDict`, optional): Training or testing config. - Default: None. - rescale (list[torch.Tensor], optional): Whether to rescale bbox. - Default: False. - - Returns: - list[tuple]: Prediction resultes of batches. - """ - assert len(cls_scores) == len(bbox_preds) - assert len(cls_scores) == len(dir_cls_preds) - num_levels = len(cls_scores) - assert num_levels == 1, 'Only support single level inference.' - device = cls_scores[0].device - mlvl_anchors = self.anchor_generator.grid_anchors( - self.featmap_sizes, device=device) - # `anchor` is a list of anchors for different classes - mlvl_anchors = [torch.cat(anchor, dim=0) for anchor in mlvl_anchors] - - result_list = [] - for img_id in range(len(input_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - dir_cls_pred_list = [ - dir_cls_preds[i][img_id].detach() for i in range(num_levels) - ] - - input_meta = input_metas[img_id] - proposals = self.get_bboxes_single(cls_score_list, bbox_pred_list, - dir_cls_pred_list, mlvl_anchors, - input_meta, cfg, rescale) - result_list.append(proposals) - return result_list - - def get_bboxes_single(self, - cls_scores, - bbox_preds, - dir_cls_preds, - mlvl_anchors, - input_meta, - cfg=None, - rescale=False): - """Get bboxes of single branch. - - Args: - cls_scores (torch.Tensor): Class score in single batch. - bbox_preds (torch.Tensor): Bbox prediction in single batch. - dir_cls_preds (torch.Tensor): Predictions of direction class - in single batch. - mlvl_anchors (List[torch.Tensor]): Multi-level anchors - in single batch. - input_meta (list[dict]): Contain pcd and img's meta info. - cfg (:obj:`ConfigDict`): Training or testing config. - rescale (list[torch.Tensor], optional): whether to rescale bbox. - Default: False. - - Returns: - tuple: Contain predictions of single batch. - - - bboxes (:obj:`BaseInstance3DBoxes`): Predicted 3d bboxes. - - scores (torch.Tensor): Class score of each bbox. - - labels (torch.Tensor): Label of each bbox. 
- """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_dir_scores = [] - for cls_score, bbox_pred, dir_cls_pred, anchors in zip( - cls_scores, bbox_preds, dir_cls_preds, mlvl_anchors): - assert cls_score.size()[-2] == bbox_pred.size()[-2] - assert cls_score.size()[-2] == dir_cls_pred.size()[-2] - dir_cls_score = torch.max(dir_cls_pred, dim=-1)[1] - - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - dir_cls_score = dir_cls_score[topk_inds] - - bboxes = self.bbox_coder.decode(anchors, bbox_pred) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_dir_scores.append(dir_cls_score) - - mlvl_bboxes = torch.cat(mlvl_bboxes) - mlvl_bboxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - mlvl_bboxes, box_dim=self.box_code_size).bev) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_dir_scores = torch.cat(mlvl_dir_scores) - - if self.use_sigmoid_cls: - # Add a dummy background class to the front when using sigmoid - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - - score_thr = cfg.get('score_thr', 0) - results = box3d_multiclass_nms(mlvl_bboxes, mlvl_bboxes_for_nms, - mlvl_scores, score_thr, cfg.max_num, - cfg, mlvl_dir_scores) - bboxes, scores, labels, dir_scores = results - if bboxes.shape[0] > 0: - dir_rot = limit_period(bboxes[..., 6] - self.dir_offset, - self.dir_limit_offset, np.pi) - bboxes[..., 6] = ( - dir_rot + self.dir_offset + - np.pi * dir_scores.to(bboxes.dtype)) - bboxes = input_meta['box_type_3d'](bboxes, box_dim=self.box_code_size) - return bboxes, scores, labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/smoke_mono3d_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/smoke_mono3d_head.py deleted file mode 100644 index 80f2b40b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/smoke_mono3d_head.py +++ /dev/null @@ -1,518 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn import functional as F - -from mmdet.core import multi_apply -from mmdet.core.bbox.builder import build_bbox_coder -from mmdet.models.utils import gaussian_radius, gen_gaussian_target -from mmdet.models.utils.gaussian_target import (get_local_maximum, - get_topk_from_heatmap, - transpose_and_gather_feat) -from ..builder import HEADS -from .anchor_free_mono3d_head import AnchorFreeMono3DHead - - -@HEADS.register_module() -class SMOKEMono3DHead(AnchorFreeMono3DHead): - r"""Anchor-free head used in `SMOKE `_ - - .. code-block:: none - - /-----> 3*3 conv -----> 1*1 conv -----> cls - feature - \-----> 3*3 conv -----> 1*1 conv -----> reg - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - dim_channel (list[int]): indices of dimension offset preds in - regression heatmap channels. 
- ori_channel (list[int]): indices of orientation offset pred in - regression heatmap channels. - bbox_coder (:obj:`CameraInstance3DBoxes`): Bbox coder - for encoding and decoding boxes. - loss_cls (dict, optional): Config of classification loss. - Default: loss_cls=dict(type='GaussionFocalLoss', loss_weight=1.0). - loss_bbox (dict, optional): Config of localization loss. - Default: loss_bbox=dict(type='L1Loss', loss_weight=10.0). - loss_dir (dict, optional): Config of direction classification loss. - In SMOKE, Default: None. - loss_attr (dict, optional): Config of attribute classification loss. - In SMOKE, Default: None. - loss_centerness (dict): Config of centerness loss. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: norm_cfg=dict(type='GN', num_groups=32, requires_grad=True). - init_cfg (dict): Initialization config dict. Default: None. - """ # noqa: E501 - - def __init__(self, - num_classes, - in_channels, - dim_channel, - ori_channel, - bbox_coder, - loss_cls=dict(type='GaussionFocalLoss', loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=0.1), - loss_dir=None, - loss_attr=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - init_cfg=None, - **kwargs): - super().__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - loss_bbox=loss_bbox, - loss_dir=loss_dir, - loss_attr=loss_attr, - norm_cfg=norm_cfg, - init_cfg=init_cfg, - **kwargs) - self.dim_channel = dim_channel - self.ori_channel = ori_channel - self.bbox_coder = build_bbox_coder(bbox_coder) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * bbox_code_size. - """ - return multi_apply(self.forward_single, feats) - - def forward_single(self, x): - """Forward features of a single scale level. - - Args: - x (Tensor): Input feature map. - - Returns: - tuple: Scores for each class, bbox of input feature maps. - """ - cls_score, bbox_pred, dir_cls_pred, attr_pred, cls_feat, reg_feat = \ - super().forward_single(x) - cls_score = cls_score.sigmoid() # turn to 0-1 - cls_score = cls_score.clamp(min=1e-4, max=1 - 1e-4) - # (N, C, H, W) - offset_dims = bbox_pred[:, self.dim_channel, ...] - bbox_pred[:, self.dim_channel, ...] = offset_dims.sigmoid() - 0.5 - # (N, C, H, W) - vector_ori = bbox_pred[:, self.ori_channel, ...] - bbox_pred[:, self.ori_channel, ...] = F.normalize(vector_ori) - return cls_score, bbox_pred - - def get_bboxes(self, cls_scores, bbox_preds, img_metas, rescale=None): - """Generate bboxes from bbox head predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level. - bbox_preds (list[Tensor]): Box regression for each scale. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - - Returns: - list[tuple[:obj:`CameraInstance3DBoxes`, Tensor, Tensor, None]]: - Each item in result_list is 4-tuple. 
- """ - assert len(cls_scores) == len(bbox_preds) == 1 - cam2imgs = torch.stack([ - cls_scores[0].new_tensor(img_meta['cam2img']) - for img_meta in img_metas - ]) - trans_mats = torch.stack([ - cls_scores[0].new_tensor(img_meta['trans_mat']) - for img_meta in img_metas - ]) - batch_bboxes, batch_scores, batch_topk_labels = self.decode_heatmap( - cls_scores[0], - bbox_preds[0], - img_metas, - cam2imgs=cam2imgs, - trans_mats=trans_mats, - topk=100, - kernel=3) - - result_list = [] - for img_id in range(len(img_metas)): - - bboxes = batch_bboxes[img_id] - scores = batch_scores[img_id] - labels = batch_topk_labels[img_id] - - keep_idx = scores > 0.25 - bboxes = bboxes[keep_idx] - scores = scores[keep_idx] - labels = labels[keep_idx] - - bboxes = img_metas[img_id]['box_type_3d']( - bboxes, box_dim=self.bbox_code_size, origin=(0.5, 0.5, 0.5)) - attrs = None - result_list.append((bboxes, scores, labels, attrs)) - - return result_list - - def decode_heatmap(self, - cls_score, - reg_pred, - img_metas, - cam2imgs, - trans_mats, - topk=100, - kernel=3): - """Transform outputs into detections raw bbox predictions. - - Args: - class_score (Tensor): Center predict heatmap, - shape (B, num_classes, H, W). - reg_pred (Tensor): Box regression map. - shape (B, channel, H , W). - img_metas (List[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cam2imgs (Tensor): Camera intrinsic matrixs. - shape (B, 4, 4) - trans_mats (Tensor): Transformation matrix from original image - to feature map. - shape: (batch, 3, 3) - topk (int): Get top k center keypoints from heatmap. Default 100. - kernel (int): Max pooling kernel for extract local maximum pixels. - Default 3. - - Returns: - tuple[torch.Tensor]: Decoded output of SMOKEHead, containing - the following Tensors: - - batch_bboxes (Tensor): Coords of each 3D box. - shape (B, k, 7) - - batch_scores (Tensor): Scores of each 3D box. - shape (B, k) - - batch_topk_labels (Tensor): Categories of each 3D box. - shape (B, k) - """ - img_h, img_w = img_metas[0]['pad_shape'][:2] - bs, _, feat_h, feat_w = cls_score.shape - - center_heatmap_pred = get_local_maximum(cls_score, kernel=kernel) - - *batch_dets, topk_ys, topk_xs = get_topk_from_heatmap( - center_heatmap_pred, k=topk) - batch_scores, batch_index, batch_topk_labels = batch_dets - - regression = transpose_and_gather_feat(reg_pred, batch_index) - regression = regression.view(-1, 8) - - points = torch.cat([topk_xs.view(-1, 1), - topk_ys.view(-1, 1).float()], - dim=1) - locations, dimensions, orientations = self.bbox_coder.decode( - regression, points, batch_topk_labels, cam2imgs, trans_mats) - - batch_bboxes = torch.cat((locations, dimensions, orientations), dim=1) - batch_bboxes = batch_bboxes.view(bs, -1, self.bbox_code_size) - return batch_bboxes, batch_scores, batch_topk_labels - - def get_predictions(self, labels3d, centers2d, gt_locations, gt_dimensions, - gt_orientations, indices, img_metas, pred_reg): - """Prepare predictions for computing loss. - - Args: - labels3d (Tensor): Labels of each 3D box. - shape (B, max_objs, ) - centers2d (Tensor): Coords of each projected 3D box - center on image. shape (B * max_objs, 2) - gt_locations (Tensor): Coords of each 3D box's location. - shape (B * max_objs, 3) - gt_dimensions (Tensor): Dimensions of each 3D box. - shape (N, 3) - gt_orientations (Tensor): Orientation(yaw) of each 3D box. - shape (N, 1) - indices (Tensor): Indices of the existence of the 3D box. 
- shape (B * max_objs, ) - img_metas (list[dict]): Meta information of each image, - e.g., image size, scaling factor, etc. - pre_reg (Tensor): Box regression map. - shape (B, channel, H , W). - - Returns: - dict: the dict has components below: - - bbox3d_yaws (:obj:`CameraInstance3DBoxes`): - bbox calculated using pred orientations. - - bbox3d_dims (:obj:`CameraInstance3DBoxes`): - bbox calculated using pred dimensions. - - bbox3d_locs (:obj:`CameraInstance3DBoxes`): - bbox calculated using pred locations. - """ - batch, channel = pred_reg.shape[0], pred_reg.shape[1] - w = pred_reg.shape[3] - cam2imgs = torch.stack([ - gt_locations.new_tensor(img_meta['cam2img']) - for img_meta in img_metas - ]) - trans_mats = torch.stack([ - gt_locations.new_tensor(img_meta['trans_mat']) - for img_meta in img_metas - ]) - centers2d_inds = centers2d[:, 1] * w + centers2d[:, 0] - centers2d_inds = centers2d_inds.view(batch, -1) - pred_regression = transpose_and_gather_feat(pred_reg, centers2d_inds) - pred_regression_pois = pred_regression.view(-1, channel) - locations, dimensions, orientations = self.bbox_coder.decode( - pred_regression_pois, centers2d, labels3d, cam2imgs, trans_mats, - gt_locations) - - locations, dimensions, orientations = locations[indices], dimensions[ - indices], orientations[indices] - - locations[:, 1] += dimensions[:, 1] / 2 - - gt_locations = gt_locations[indices] - - assert len(locations) == len(gt_locations) - assert len(dimensions) == len(gt_dimensions) - assert len(orientations) == len(gt_orientations) - bbox3d_yaws = self.bbox_coder.encode(gt_locations, gt_dimensions, - orientations, img_metas) - bbox3d_dims = self.bbox_coder.encode(gt_locations, dimensions, - gt_orientations, img_metas) - bbox3d_locs = self.bbox_coder.encode(locations, gt_dimensions, - gt_orientations, img_metas) - - pred_bboxes = dict(ori=bbox3d_yaws, dim=bbox3d_dims, loc=bbox3d_locs) - - return pred_bboxes - - def get_targets(self, gt_bboxes, gt_labels, gt_bboxes_3d, gt_labels_3d, - centers2d, feat_shape, img_shape, img_metas): - """Get training targets for batch images. - - Args: - gt_bboxes (list[Tensor]): Ground truth bboxes of each image, - shape (num_gt, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, - shape (num_gt,). - gt_bboxes_3d (list[:obj:`CameraInstance3DBoxes`]): 3D Ground - truth bboxes of each image, - shape (num_gt, bbox_code_size). - gt_labels_3d (list[Tensor]): 3D Ground truth labels of each - box, shape (num_gt,). - centers2d (list[Tensor]): Projected 3D centers onto 2D image, - shape (num_gt, 2). - feat_shape (tuple[int]): Feature map shape with value, - shape (B, _, H, W). - img_shape (tuple[int]): Image shape in [h, w] format. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple[Tensor, dict]: The Tensor value is the targets of - center heatmap, the dict has components below: - - gt_centers2d (Tensor): Coords of each projected 3D box - center on image. shape (B * max_objs, 2) - - gt_labels3d (Tensor): Labels of each 3D box. - shape (B, max_objs, ) - - indices (Tensor): Indices of the existence of the 3D box. - shape (B * max_objs, ) - - affine_indices (Tensor): Indices of the affine of the 3D box. - shape (N, ) - - gt_locs (Tensor): Coords of each 3D box's location. - shape (N, 3) - - gt_dims (Tensor): Dimensions of each 3D box. - shape (N, 3) - - gt_yaws (Tensor): Orientation(yaw) of each 3D box. - shape (N, 1) - - gt_cors (Tensor): Coords of the corners of each 3D box. 
- shape (N, 8, 3) - """ - - reg_mask = torch.stack([ - gt_bboxes[0].new_tensor( - not img_meta['affine_aug'], dtype=torch.bool) - for img_meta in img_metas - ]) - - img_h, img_w = img_shape[:2] - bs, _, feat_h, feat_w = feat_shape - - width_ratio = float(feat_w / img_w) # 1/4 - height_ratio = float(feat_h / img_h) # 1/4 - - assert width_ratio == height_ratio - - center_heatmap_target = gt_bboxes[-1].new_zeros( - [bs, self.num_classes, feat_h, feat_w]) - - gt_centers2d = centers2d.copy() - - for batch_id in range(bs): - gt_bbox = gt_bboxes[batch_id] - gt_label = gt_labels[batch_id] - # project centers2d from input image to feat map - gt_center2d = gt_centers2d[batch_id] * width_ratio - - for j, center in enumerate(gt_center2d): - center_x_int, center_y_int = center.int() - scale_box_h = (gt_bbox[j][3] - gt_bbox[j][1]) * height_ratio - scale_box_w = (gt_bbox[j][2] - gt_bbox[j][0]) * width_ratio - radius = gaussian_radius([scale_box_h, scale_box_w], - min_overlap=0.7) - radius = max(0, int(radius)) - ind = gt_label[j] - gen_gaussian_target(center_heatmap_target[batch_id, ind], - [center_x_int, center_y_int], radius) - - avg_factor = max(1, center_heatmap_target.eq(1).sum()) - num_ctrs = [center2d.shape[0] for center2d in centers2d] - max_objs = max(num_ctrs) - - reg_inds = torch.cat( - [reg_mask[i].repeat(num_ctrs[i]) for i in range(bs)]) - - inds = torch.zeros((bs, max_objs), - dtype=torch.bool).to(centers2d[0].device) - - # put gt 3d bboxes to gpu - gt_bboxes_3d = [ - gt_bbox_3d.to(centers2d[0].device) for gt_bbox_3d in gt_bboxes_3d - ] - - batch_centers2d = centers2d[0].new_zeros((bs, max_objs, 2)) - batch_labels_3d = gt_labels_3d[0].new_zeros((bs, max_objs)) - batch_gt_locations = \ - gt_bboxes_3d[0].tensor.new_zeros((bs, max_objs, 3)) - for i in range(bs): - inds[i, :num_ctrs[i]] = 1 - batch_centers2d[i, :num_ctrs[i]] = centers2d[i] - batch_labels_3d[i, :num_ctrs[i]] = gt_labels_3d[i] - batch_gt_locations[i, :num_ctrs[i]] = \ - gt_bboxes_3d[i].tensor[:, :3] - - inds = inds.flatten() - batch_centers2d = batch_centers2d.view(-1, 2) * width_ratio - batch_gt_locations = batch_gt_locations.view(-1, 3) - - # filter the empty image, without gt_bboxes_3d - gt_bboxes_3d = [ - gt_bbox_3d for gt_bbox_3d in gt_bboxes_3d - if gt_bbox_3d.tensor.shape[0] > 0 - ] - - gt_dimensions = torch.cat( - [gt_bbox_3d.tensor[:, 3:6] for gt_bbox_3d in gt_bboxes_3d]) - gt_orientations = torch.cat([ - gt_bbox_3d.tensor[:, 6].unsqueeze(-1) - for gt_bbox_3d in gt_bboxes_3d - ]) - gt_corners = torch.cat( - [gt_bbox_3d.corners for gt_bbox_3d in gt_bboxes_3d]) - - target_labels = dict( - gt_centers2d=batch_centers2d.long(), - gt_labels3d=batch_labels_3d, - indices=inds, - reg_indices=reg_inds, - gt_locs=batch_gt_locations, - gt_dims=gt_dimensions, - gt_yaws=gt_orientations, - gt_cors=gt_corners) - - return center_heatmap_target, avg_factor, target_labels - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level. - shape (num_gt, 4). - bbox_preds (list[Tensor]): Box dims is a 4D-tensor, the channel - number is bbox_code_size. - shape (B, 7, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image. - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box. - shape (num_gts, ). 
- gt_bboxes_3d (list[:obj:`CameraInstance3DBoxes`]): 3D boxes ground - truth. it is the flipped gt_bboxes - gt_labels_3d (list[Tensor]): Same as gt_labels. - centers2d (list[Tensor]): 2D centers on the image. - shape (num_gts, 2). - depths (list[Tensor]): Depth ground truth. - shape (num_gts, ). - attr_labels (list[Tensor]): Attributes indices of each box. - In kitti it's None. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - Default: None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == len(bbox_preds) == 1 - assert attr_labels is None - assert gt_bboxes_ignore is None - center2d_heatmap = cls_scores[0] - pred_reg = bbox_preds[0] - - center2d_heatmap_target, avg_factor, target_labels = \ - self.get_targets(gt_bboxes, gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, - center2d_heatmap.shape, - img_metas[0]['pad_shape'], - img_metas) - - pred_bboxes = self.get_predictions( - labels3d=target_labels['gt_labels3d'], - centers2d=target_labels['gt_centers2d'], - gt_locations=target_labels['gt_locs'], - gt_dimensions=target_labels['gt_dims'], - gt_orientations=target_labels['gt_yaws'], - indices=target_labels['indices'], - img_metas=img_metas, - pred_reg=pred_reg) - - loss_cls = self.loss_cls( - center2d_heatmap, center2d_heatmap_target, avg_factor=avg_factor) - - reg_inds = target_labels['reg_indices'] - - loss_bbox_oris = self.loss_bbox( - pred_bboxes['ori'].corners[reg_inds, ...], - target_labels['gt_cors'][reg_inds, ...]) - - loss_bbox_dims = self.loss_bbox( - pred_bboxes['dim'].corners[reg_inds, ...], - target_labels['gt_cors'][reg_inds, ...]) - - loss_bbox_locs = self.loss_bbox( - pred_bboxes['loc'].corners[reg_inds, ...], - target_labels['gt_cors'][reg_inds, ...]) - - loss_bbox = loss_bbox_dims + loss_bbox_locs + loss_bbox_oris - - loss_dict = dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - return loss_dict diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/ssd_3d_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/ssd_3d_head.py deleted file mode 100644 index 047333c9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/ssd_3d_head.py +++ /dev/null @@ -1,559 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops.nms import batched_nms -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from mmdet3d.core.bbox.structures import (DepthInstance3DBoxes, - LiDARInstance3DBoxes, - rotation_3d_in_axis) -from mmdet.core import multi_apply -from ..builder import HEADS, build_loss -from .vote_head import VoteHead - - -@HEADS.register_module() -class SSD3DHead(VoteHead): - r"""Bbox head of `3DSSD `_. - - Args: - num_classes (int): The number of class. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for encoding and - decoding boxes. - in_channels (int): The number of input feature channel. - train_cfg (dict): Config for training. - test_cfg (dict): Config for testing. - vote_module_cfg (dict): Config of VoteModule for point-wise votes. - vote_aggregation_cfg (dict): Config of vote aggregation layer. - pred_layer_cfg (dict): Config of classfication and regression - prediction layers. 
- conv_cfg (dict): Config of convolution in prediction layer. - norm_cfg (dict): Config of BN in prediction layer. - act_cfg (dict): Config of activation in prediction layer. - objectness_loss (dict): Config of objectness loss. - center_loss (dict): Config of center loss. - dir_class_loss (dict): Config of direction classification loss. - dir_res_loss (dict): Config of direction residual regression loss. - size_res_loss (dict): Config of size residual regression loss. - corner_loss (dict): Config of bbox corners regression loss. - vote_loss (dict): Config of candidate points regression loss. - """ - - def __init__(self, - num_classes, - bbox_coder, - in_channels=256, - train_cfg=None, - test_cfg=None, - vote_module_cfg=None, - vote_aggregation_cfg=None, - pred_layer_cfg=None, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - objectness_loss=None, - center_loss=None, - dir_class_loss=None, - dir_res_loss=None, - size_res_loss=None, - corner_loss=None, - vote_loss=None, - init_cfg=None): - super(SSD3DHead, self).__init__( - num_classes, - bbox_coder, - train_cfg=train_cfg, - test_cfg=test_cfg, - vote_module_cfg=vote_module_cfg, - vote_aggregation_cfg=vote_aggregation_cfg, - pred_layer_cfg=pred_layer_cfg, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - objectness_loss=objectness_loss, - center_loss=center_loss, - dir_class_loss=dir_class_loss, - dir_res_loss=dir_res_loss, - size_class_loss=None, - size_res_loss=size_res_loss, - semantic_loss=None, - init_cfg=init_cfg) - - self.corner_loss = build_loss(corner_loss) - self.vote_loss = build_loss(vote_loss) - self.num_candidates = vote_module_cfg['num_points'] - - def _get_cls_out_channels(self): - """Return the channel number of classification outputs.""" - # Class numbers (k) + objectness (1) - return self.num_classes - - def _get_reg_out_channels(self): - """Return the channel number of regression outputs.""" - # Bbox classification and regression - # (center residual (3), size regression (3) - # heading class+residual (num_dir_bins*2)), - return 3 + 3 + self.num_dir_bins * 2 - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Coordinates of input points. - torch.Tensor: Features of input points. - torch.Tensor: Indices of input points. - """ - seed_points = feat_dict['sa_xyz'][-1] - seed_features = feat_dict['sa_features'][-1] - seed_indices = feat_dict['sa_indices'][-1] - - return seed_points, seed_features, seed_indices - - @force_fp32(apply_to=('bbox_preds', )) - def loss(self, - bbox_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - img_metas=None, - gt_bboxes_ignore=None): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of SSD3DHead. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - img_metas (list[dict]): Contain pcd and img's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict: Losses of 3DSSD. 
- """ - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - bbox_preds) - (vote_targets, center_targets, size_res_targets, dir_class_targets, - dir_res_targets, mask_targets, centerness_targets, corner3d_targets, - vote_mask, positive_mask, negative_mask, centerness_weights, - box_loss_weights, heading_res_loss_weight) = targets - - # calculate centerness loss - centerness_loss = self.objectness_loss( - bbox_preds['obj_scores'].transpose(2, 1), - centerness_targets, - weight=centerness_weights) - - # calculate center loss - center_loss = self.center_loss( - bbox_preds['center_offset'], - center_targets, - weight=box_loss_weights.unsqueeze(-1)) - - # calculate direction class loss - dir_class_loss = self.dir_class_loss( - bbox_preds['dir_class'].transpose(1, 2), - dir_class_targets, - weight=box_loss_weights) - - # calculate direction residual loss - dir_res_loss = self.dir_res_loss( - bbox_preds['dir_res_norm'], - dir_res_targets.unsqueeze(-1).repeat(1, 1, self.num_dir_bins), - weight=heading_res_loss_weight) - - # calculate size residual loss - size_loss = self.size_res_loss( - bbox_preds['size'], - size_res_targets, - weight=box_loss_weights.unsqueeze(-1)) - - # calculate corner loss - one_hot_dir_class_targets = dir_class_targets.new_zeros( - bbox_preds['dir_class'].shape) - one_hot_dir_class_targets.scatter_(2, dir_class_targets.unsqueeze(-1), - 1) - pred_bbox3d = self.bbox_coder.decode( - dict( - center=bbox_preds['center'], - dir_res=bbox_preds['dir_res'], - dir_class=one_hot_dir_class_targets, - size=bbox_preds['size'])) - pred_bbox3d = pred_bbox3d.reshape(-1, pred_bbox3d.shape[-1]) - pred_bbox3d = img_metas[0]['box_type_3d']( - pred_bbox3d.clone(), - box_dim=pred_bbox3d.shape[-1], - with_yaw=self.bbox_coder.with_rot, - origin=(0.5, 0.5, 0.5)) - pred_corners3d = pred_bbox3d.corners.reshape(-1, 8, 3) - corner_loss = self.corner_loss( - pred_corners3d, - corner3d_targets.reshape(-1, 8, 3), - weight=box_loss_weights.view(-1, 1, 1)) - - # calculate vote loss - vote_loss = self.vote_loss( - bbox_preds['vote_offset'].transpose(1, 2), - vote_targets, - weight=vote_mask.unsqueeze(-1)) - - losses = dict( - centerness_loss=centerness_loss, - center_loss=center_loss, - dir_class_loss=dir_class_loss, - dir_res_loss=dir_res_loss, - size_res_loss=size_loss, - corner_loss=corner_loss, - vote_loss=vote_loss) - - return losses - - def get_targets(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - bbox_preds=None): - """Generate targets of ssd3d head. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - pts_semantic_mask (list[torch.Tensor]): Point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): Point-wise instance - label of each batch. - bbox_preds (torch.Tensor): Bounding box predictions of ssd3d head. - - Returns: - tuple[torch.Tensor]: Targets of ssd3d head. 
- """ - # find empty example - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - - if pts_semantic_mask is None: - pts_semantic_mask = [None for i in range(len(gt_labels_3d))] - pts_instance_mask = [None for i in range(len(gt_labels_3d))] - - aggregated_points = [ - bbox_preds['aggregated_points'][i] - for i in range(len(gt_labels_3d)) - ] - - seed_points = [ - bbox_preds['seed_points'][i, :self.num_candidates].detach() - for i in range(len(gt_labels_3d)) - ] - - (vote_targets, center_targets, size_res_targets, dir_class_targets, - dir_res_targets, mask_targets, centerness_targets, corner3d_targets, - vote_mask, positive_mask, negative_mask) = multi_apply( - self.get_targets_single, points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, aggregated_points, - seed_points) - - center_targets = torch.stack(center_targets) - positive_mask = torch.stack(positive_mask) - negative_mask = torch.stack(negative_mask) - dir_class_targets = torch.stack(dir_class_targets) - dir_res_targets = torch.stack(dir_res_targets) - size_res_targets = torch.stack(size_res_targets) - mask_targets = torch.stack(mask_targets) - centerness_targets = torch.stack(centerness_targets).detach() - corner3d_targets = torch.stack(corner3d_targets) - vote_targets = torch.stack(vote_targets) - vote_mask = torch.stack(vote_mask) - - center_targets -= bbox_preds['aggregated_points'] - - centerness_weights = (positive_mask + - negative_mask).unsqueeze(-1).repeat( - 1, 1, self.num_classes).float() - centerness_weights = centerness_weights / \ - (centerness_weights.sum() + 1e-6) - vote_mask = vote_mask / (vote_mask.sum() + 1e-6) - - box_loss_weights = positive_mask / (positive_mask.sum() + 1e-6) - - batch_size, proposal_num = dir_class_targets.shape[:2] - heading_label_one_hot = dir_class_targets.new_zeros( - (batch_size, proposal_num, self.num_dir_bins)) - heading_label_one_hot.scatter_(2, dir_class_targets.unsqueeze(-1), 1) - heading_res_loss_weight = heading_label_one_hot * \ - box_loss_weights.unsqueeze(-1) - - return (vote_targets, center_targets, size_res_targets, - dir_class_targets, dir_res_targets, mask_targets, - centerness_targets, corner3d_targets, vote_mask, positive_mask, - negative_mask, centerness_weights, box_loss_weights, - heading_res_loss_weight) - - def get_targets_single(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - aggregated_points=None, - seed_points=None): - """Generate targets of ssd3d head for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. - gt_labels_3d (torch.Tensor): Labels of each batch. - pts_semantic_mask (torch.Tensor): Point-wise semantic - label of each batch. - pts_instance_mask (torch.Tensor): Point-wise instance - label of each batch. - aggregated_points (torch.Tensor): Aggregated points from - candidate points layer. - seed_points (torch.Tensor): Seed points of candidate points. - - Returns: - tuple[torch.Tensor]: Targets of ssd3d head. 
- """ - assert self.bbox_coder.with_rot or pts_semantic_mask is not None - gt_bboxes_3d = gt_bboxes_3d.to(points.device) - valid_gt = gt_labels_3d != -1 - gt_bboxes_3d = gt_bboxes_3d[valid_gt] - gt_labels_3d = gt_labels_3d[valid_gt] - - # Generate fake GT for empty scene - if valid_gt.sum() == 0: - vote_targets = points.new_zeros(self.num_candidates, 3) - center_targets = points.new_zeros(self.num_candidates, 3) - size_res_targets = points.new_zeros(self.num_candidates, 3) - dir_class_targets = points.new_zeros( - self.num_candidates, dtype=torch.int64) - dir_res_targets = points.new_zeros(self.num_candidates) - mask_targets = points.new_zeros( - self.num_candidates, dtype=torch.int64) - centerness_targets = points.new_zeros(self.num_candidates, - self.num_classes) - corner3d_targets = points.new_zeros(self.num_candidates, 8, 3) - vote_mask = points.new_zeros(self.num_candidates, dtype=torch.bool) - positive_mask = points.new_zeros( - self.num_candidates, dtype=torch.bool) - negative_mask = points.new_ones( - self.num_candidates, dtype=torch.bool) - return (vote_targets, center_targets, size_res_targets, - dir_class_targets, dir_res_targets, mask_targets, - centerness_targets, corner3d_targets, vote_mask, - positive_mask, negative_mask) - - gt_corner3d = gt_bboxes_3d.corners - - (center_targets, size_targets, dir_class_targets, - dir_res_targets) = self.bbox_coder.encode(gt_bboxes_3d, gt_labels_3d) - - points_mask, assignment = self._assign_targets_by_points_inside( - gt_bboxes_3d, aggregated_points) - - center_targets = center_targets[assignment] - size_res_targets = size_targets[assignment] - mask_targets = gt_labels_3d[assignment] - dir_class_targets = dir_class_targets[assignment] - dir_res_targets = dir_res_targets[assignment] - corner3d_targets = gt_corner3d[assignment] - - top_center_targets = center_targets.clone() - top_center_targets[:, 2] += size_res_targets[:, 2] - dist = torch.norm(aggregated_points - top_center_targets, dim=1) - dist_mask = dist < self.train_cfg.pos_distance_thr - positive_mask = (points_mask.max(1)[0] > 0) * dist_mask - negative_mask = (points_mask.max(1)[0] == 0) - - # Centerness loss targets - canonical_xyz = aggregated_points - center_targets - if self.bbox_coder.with_rot: - # TODO: Align points rotation implementation of - # LiDARInstance3DBoxes and DepthInstance3DBoxes - canonical_xyz = rotation_3d_in_axis( - canonical_xyz.unsqueeze(0).transpose(0, 1), - -gt_bboxes_3d.yaw[assignment], - axis=2).squeeze(1) - distance_front = torch.clamp( - size_res_targets[:, 0] - canonical_xyz[:, 0], min=0) - distance_back = torch.clamp( - size_res_targets[:, 0] + canonical_xyz[:, 0], min=0) - distance_left = torch.clamp( - size_res_targets[:, 1] - canonical_xyz[:, 1], min=0) - distance_right = torch.clamp( - size_res_targets[:, 1] + canonical_xyz[:, 1], min=0) - distance_top = torch.clamp( - size_res_targets[:, 2] - canonical_xyz[:, 2], min=0) - distance_bottom = torch.clamp( - size_res_targets[:, 2] + canonical_xyz[:, 2], min=0) - - centerness_l = torch.min(distance_front, distance_back) / torch.max( - distance_front, distance_back) - centerness_w = torch.min(distance_left, distance_right) / torch.max( - distance_left, distance_right) - centerness_h = torch.min(distance_bottom, distance_top) / torch.max( - distance_bottom, distance_top) - centerness_targets = torch.clamp( - centerness_l * centerness_w * centerness_h, min=0) - centerness_targets = centerness_targets.pow(1 / 3.0) - centerness_targets = torch.clamp(centerness_targets, min=0, max=1) - - proposal_num = 
centerness_targets.shape[0] - one_hot_centerness_targets = centerness_targets.new_zeros( - (proposal_num, self.num_classes)) - one_hot_centerness_targets.scatter_(1, mask_targets.unsqueeze(-1), 1) - centerness_targets = centerness_targets.unsqueeze( - 1) * one_hot_centerness_targets - - # Vote loss targets - enlarged_gt_bboxes_3d = gt_bboxes_3d.enlarged_box( - self.train_cfg.expand_dims_length) - enlarged_gt_bboxes_3d.tensor[:, 2] -= self.train_cfg.expand_dims_length - vote_mask, vote_assignment = self._assign_targets_by_points_inside( - enlarged_gt_bboxes_3d, seed_points) - - vote_targets = gt_bboxes_3d.gravity_center - vote_targets = vote_targets[vote_assignment] - seed_points - vote_mask = vote_mask.max(1)[0] > 0 - - return (vote_targets, center_targets, size_res_targets, - dir_class_targets, dir_res_targets, mask_targets, - centerness_targets, corner3d_targets, vote_mask, positive_mask, - negative_mask) - - def get_bboxes(self, points, bbox_preds, input_metas, rescale=False): - """Generate bboxes from 3DSSD head predictions. - - Args: - points (torch.Tensor): Input points. - bbox_preds (dict): Predictions from sdd3d head. - input_metas (list[dict]): Point cloud and image's meta info. - rescale (bool): Whether to rescale bboxes. - - Returns: - list[tuple[torch.Tensor]]: Bounding boxes, scores and labels. - """ - # decode boxes - sem_scores = F.sigmoid(bbox_preds['obj_scores']).transpose(1, 2) - obj_scores = sem_scores.max(-1)[0] - bbox3d = self.bbox_coder.decode(bbox_preds) - - batch_size = bbox3d.shape[0] - results = list() - - for b in range(batch_size): - bbox_selected, score_selected, labels = self.multiclass_nms_single( - obj_scores[b], sem_scores[b], bbox3d[b], points[b, ..., :3], - input_metas[b]) - - bbox = input_metas[b]['box_type_3d']( - bbox_selected.clone(), - box_dim=bbox_selected.shape[-1], - with_yaw=self.bbox_coder.with_rot) - results.append((bbox, score_selected, labels)) - - return results - - def multiclass_nms_single(self, obj_scores, sem_scores, bbox, points, - input_meta): - """Multi-class nms in single batch. - - Args: - obj_scores (torch.Tensor): Objectness score of bounding boxes. - sem_scores (torch.Tensor): Semantic class score of bounding boxes. - bbox (torch.Tensor): Predicted bounding boxes. - points (torch.Tensor): Input points. - input_meta (dict): Point cloud and image's meta info. - - Returns: - tuple[torch.Tensor]: Bounding boxes, scores and labels. 
- """ - bbox = input_meta['box_type_3d']( - bbox.clone(), - box_dim=bbox.shape[-1], - with_yaw=self.bbox_coder.with_rot, - origin=(0.5, 0.5, 0.5)) - - if isinstance(bbox, (LiDARInstance3DBoxes, DepthInstance3DBoxes)): - box_indices = bbox.points_in_boxes_all(points) - nonempty_box_mask = box_indices.T.sum(1) >= 0 - else: - raise NotImplementedError('Unsupported bbox type!') - - corner3d = bbox.corners - minmax_box3d = corner3d.new(torch.Size((corner3d.shape[0], 6))) - minmax_box3d[:, :3] = torch.min(corner3d, dim=1)[0] - minmax_box3d[:, 3:] = torch.max(corner3d, dim=1)[0] - - bbox_classes = torch.argmax(sem_scores, -1) - nms_keep = batched_nms( - minmax_box3d[nonempty_box_mask][:, [0, 1, 3, 4]], - obj_scores[nonempty_box_mask], bbox_classes[nonempty_box_mask], - self.test_cfg.nms_cfg)[1] - - if nms_keep.shape[0] > self.test_cfg.max_output_num: - nms_keep = nms_keep[:self.test_cfg.max_output_num] - - # filter empty boxes and boxes with low score - scores_mask = (obj_scores >= self.test_cfg.score_thr) - nonempty_box_inds = torch.nonzero( - nonempty_box_mask, as_tuple=False).flatten() - nonempty_mask = torch.zeros_like(bbox_classes).scatter( - 0, nonempty_box_inds[nms_keep], 1) - selected = (nonempty_mask.bool() & scores_mask.bool()) - - if self.test_cfg.per_class_proposal: - bbox_selected, score_selected, labels = [], [], [] - for k in range(sem_scores.shape[-1]): - bbox_selected.append(bbox[selected].tensor) - score_selected.append(obj_scores[selected]) - labels.append( - torch.zeros_like(bbox_classes[selected]).fill_(k)) - bbox_selected = torch.cat(bbox_selected, 0) - score_selected = torch.cat(score_selected, 0) - labels = torch.cat(labels, 0) - else: - bbox_selected = bbox[selected].tensor - score_selected = obj_scores[selected] - labels = bbox_classes[selected] - - return bbox_selected, score_selected, labels - - def _assign_targets_by_points_inside(self, bboxes_3d, points): - """Compute assignment by checking whether point is inside bbox. - - Args: - bboxes_3d (BaseInstance3DBoxes): Instance of bounding boxes. - points (torch.Tensor): Points of a batch. - - Returns: - tuple[torch.Tensor]: Flags indicating whether each point is - inside bbox and the index of box where each point are in. - """ - if isinstance(bboxes_3d, (LiDARInstance3DBoxes, DepthInstance3DBoxes)): - points_mask = bboxes_3d.points_in_boxes_all(points) - assignment = points_mask.argmax(dim=-1) - else: - raise NotImplementedError('Unsupported bbox type!') - - return points_mask, assignment diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/train_mixins.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/train_mixins.py deleted file mode 100644 index 9349ed9c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/train_mixins.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from mmdet3d.core import limit_period -from mmdet.core import images_to_levels, multi_apply - - -class AnchorTrainMixin(object): - """Mixin class for target assigning of dense heads.""" - - def anchor_target_3d(self, - anchor_list, - gt_bboxes_list, - input_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - num_classes=1, - sampling=True): - """Compute regression and classification targets for anchors. 
- - Args: - anchor_list (list[list]): Multi level anchors of each image. - gt_bboxes_list (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each image. - input_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list): Ignore list of gt bboxes. - gt_labels_list (list[torch.Tensor]): Gt labels of batches. - label_channels (int): The channel of labels. - num_classes (int): The number of classes. - sampling (bool): Whether to sample anchors. - - Returns: - tuple (list, list, list, list, list, list, int, int): - Anchor targets, including labels, label weights, - bbox targets, bbox weights, direction targets, - direction weights, number of positive anchors and - number of negative anchors. - """ - num_imgs = len(input_metas) - assert len(anchor_list) == num_imgs - - if isinstance(anchor_list[0][0], list): - # sizes of anchors are different - # anchor number of a single level - num_level_anchors = [ - sum([anchor.size(0) for anchor in anchors]) - for anchors in anchor_list[0] - ] - for i in range(num_imgs): - anchor_list[i] = anchor_list[i][0] - else: - # anchor number of multi levels - num_level_anchors = [ - anchors.view(-1, self.box_code_size).size(0) - for anchors in anchor_list[0] - ] - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - anchor_list[i] = torch.cat(anchor_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - all_dir_targets, all_dir_weights, pos_inds_list, - neg_inds_list) = multi_apply( - self.anchor_target_3d_single, - anchor_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - input_metas, - label_channels=label_channels, - num_classes=num_classes, - sampling=sampling) - - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - dir_targets_list = images_to_levels(all_dir_targets, num_level_anchors) - dir_weights_list = images_to_levels(all_dir_weights, num_level_anchors) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, dir_targets_list, dir_weights_list, - num_total_pos, num_total_neg) - - def anchor_target_3d_single(self, - anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - input_meta, - label_channels=1, - num_classes=1, - sampling=True): - """Compute targets of anchors in single batch. - - Args: - anchors (torch.Tensor): Concatenated multi-level anchor. - gt_bboxes (:obj:`BaseInstance3DBoxes`): Gt bboxes. - gt_bboxes_ignore (torch.Tensor): Ignored gt bboxes. - gt_labels (torch.Tensor): Gt class labels. - input_meta (dict): Meta info of each image. - label_channels (int): The channel of labels. - num_classes (int): The number of classes. - sampling (bool): Whether to sample anchors. - - Returns: - tuple[torch.Tensor]: Anchor targets. 
- """ - if isinstance(self.bbox_assigner, - list) and (not isinstance(anchors, list)): - feat_size = anchors.size(0) * anchors.size(1) * anchors.size(2) - rot_angles = anchors.size(-2) - assert len(self.bbox_assigner) == anchors.size(-3) - (total_labels, total_label_weights, total_bbox_targets, - total_bbox_weights, total_dir_targets, total_dir_weights, - total_pos_inds, total_neg_inds) = [], [], [], [], [], [], [], [] - current_anchor_num = 0 - for i, assigner in enumerate(self.bbox_assigner): - current_anchors = anchors[..., i, :, :].reshape( - -1, self.box_code_size) - current_anchor_num += current_anchors.size(0) - if self.assign_per_class: - gt_per_cls = (gt_labels == i) - anchor_targets = self.anchor_target_single_assigner( - assigner, current_anchors, gt_bboxes[gt_per_cls, :], - gt_bboxes_ignore, gt_labels[gt_per_cls], input_meta, - num_classes, sampling) - else: - anchor_targets = self.anchor_target_single_assigner( - assigner, current_anchors, gt_bboxes, gt_bboxes_ignore, - gt_labels, input_meta, num_classes, sampling) - - (labels, label_weights, bbox_targets, bbox_weights, - dir_targets, dir_weights, pos_inds, neg_inds) = anchor_targets - total_labels.append(labels.reshape(feat_size, 1, rot_angles)) - total_label_weights.append( - label_weights.reshape(feat_size, 1, rot_angles)) - total_bbox_targets.append( - bbox_targets.reshape(feat_size, 1, rot_angles, - anchors.size(-1))) - total_bbox_weights.append( - bbox_weights.reshape(feat_size, 1, rot_angles, - anchors.size(-1))) - total_dir_targets.append( - dir_targets.reshape(feat_size, 1, rot_angles)) - total_dir_weights.append( - dir_weights.reshape(feat_size, 1, rot_angles)) - total_pos_inds.append(pos_inds) - total_neg_inds.append(neg_inds) - - total_labels = torch.cat(total_labels, dim=-2).reshape(-1) - total_label_weights = torch.cat( - total_label_weights, dim=-2).reshape(-1) - total_bbox_targets = torch.cat( - total_bbox_targets, dim=-3).reshape(-1, anchors.size(-1)) - total_bbox_weights = torch.cat( - total_bbox_weights, dim=-3).reshape(-1, anchors.size(-1)) - total_dir_targets = torch.cat( - total_dir_targets, dim=-2).reshape(-1) - total_dir_weights = torch.cat( - total_dir_weights, dim=-2).reshape(-1) - total_pos_inds = torch.cat(total_pos_inds, dim=0).reshape(-1) - total_neg_inds = torch.cat(total_neg_inds, dim=0).reshape(-1) - return (total_labels, total_label_weights, total_bbox_targets, - total_bbox_weights, total_dir_targets, total_dir_weights, - total_pos_inds, total_neg_inds) - elif isinstance(self.bbox_assigner, list) and isinstance( - anchors, list): - # class-aware anchors with different feature map sizes - assert len(self.bbox_assigner) == len(anchors), \ - 'The number of bbox assigners and anchors should be the same.' 
- (total_labels, total_label_weights, total_bbox_targets, - total_bbox_weights, total_dir_targets, total_dir_weights, - total_pos_inds, total_neg_inds) = [], [], [], [], [], [], [], [] - current_anchor_num = 0 - for i, assigner in enumerate(self.bbox_assigner): - current_anchors = anchors[i] - current_anchor_num += current_anchors.size(0) - if self.assign_per_class: - gt_per_cls = (gt_labels == i) - anchor_targets = self.anchor_target_single_assigner( - assigner, current_anchors, gt_bboxes[gt_per_cls, :], - gt_bboxes_ignore, gt_labels[gt_per_cls], input_meta, - num_classes, sampling) - else: - anchor_targets = self.anchor_target_single_assigner( - assigner, current_anchors, gt_bboxes, gt_bboxes_ignore, - gt_labels, input_meta, num_classes, sampling) - - (labels, label_weights, bbox_targets, bbox_weights, - dir_targets, dir_weights, pos_inds, neg_inds) = anchor_targets - total_labels.append(labels) - total_label_weights.append(label_weights) - total_bbox_targets.append( - bbox_targets.reshape(-1, anchors[i].size(-1))) - total_bbox_weights.append( - bbox_weights.reshape(-1, anchors[i].size(-1))) - total_dir_targets.append(dir_targets) - total_dir_weights.append(dir_weights) - total_pos_inds.append(pos_inds) - total_neg_inds.append(neg_inds) - - total_labels = torch.cat(total_labels, dim=0) - total_label_weights = torch.cat(total_label_weights, dim=0) - total_bbox_targets = torch.cat(total_bbox_targets, dim=0) - total_bbox_weights = torch.cat(total_bbox_weights, dim=0) - total_dir_targets = torch.cat(total_dir_targets, dim=0) - total_dir_weights = torch.cat(total_dir_weights, dim=0) - total_pos_inds = torch.cat(total_pos_inds, dim=0) - total_neg_inds = torch.cat(total_neg_inds, dim=0) - return (total_labels, total_label_weights, total_bbox_targets, - total_bbox_weights, total_dir_targets, total_dir_weights, - total_pos_inds, total_neg_inds) - else: - return self.anchor_target_single_assigner(self.bbox_assigner, - anchors, gt_bboxes, - gt_bboxes_ignore, - gt_labels, input_meta, - num_classes, sampling) - - def anchor_target_single_assigner(self, - bbox_assigner, - anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - input_meta, - num_classes=1, - sampling=True): - """Assign anchors and encode positive anchors. - - Args: - bbox_assigner (BaseAssigner): assign positive and negative boxes. - anchors (torch.Tensor): Concatenated multi-level anchor. - gt_bboxes (:obj:`BaseInstance3DBoxes`): Gt bboxes. - gt_bboxes_ignore (torch.Tensor): Ignored gt bboxes. - gt_labels (torch.Tensor): Gt class labels. - input_meta (dict): Meta info of each image. - num_classes (int): The number of classes. - sampling (bool): Whether to sample anchors. - - Returns: - tuple[torch.Tensor]: Anchor targets. 
- """ - anchors = anchors.reshape(-1, anchors.size(-1)) - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - dir_targets = anchors.new_zeros((anchors.shape[0]), dtype=torch.long) - dir_weights = anchors.new_zeros((anchors.shape[0]), dtype=torch.float) - labels = anchors.new_zeros(num_valid_anchors, dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - if len(gt_bboxes) > 0: - if not isinstance(gt_bboxes, torch.Tensor): - gt_bboxes = gt_bboxes.tensor.to(anchors.device) - assign_result = bbox_assigner.assign(anchors, gt_bboxes, - gt_bboxes_ignore, gt_labels) - sampling_result = self.bbox_sampler.sample(assign_result, anchors, - gt_bboxes) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - else: - pos_inds = torch.nonzero( - anchors.new_zeros((anchors.shape[0], ), dtype=torch.bool) > 0, - as_tuple=False).squeeze(-1).unique() - neg_inds = torch.nonzero( - anchors.new_zeros((anchors.shape[0], ), dtype=torch.bool) == 0, - as_tuple=False).squeeze(-1).unique() - - if gt_labels is not None: - labels += num_classes - if len(pos_inds) > 0: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - pos_dir_targets = get_direction_target( - sampling_result.pos_bboxes, - pos_bbox_targets, - self.dir_offset, - self.dir_limit_offset, - one_hot=False) - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - dir_targets[pos_inds] = pos_dir_targets - dir_weights[pos_inds] = 1.0 - - if gt_labels is None: - labels[pos_inds] = 1 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - return (labels, label_weights, bbox_targets, bbox_weights, dir_targets, - dir_weights, pos_inds, neg_inds) - - -def get_direction_target(anchors, - reg_targets, - dir_offset=0, - dir_limit_offset=0, - num_bins=2, - one_hot=True): - """Encode direction to 0 ~ num_bins-1. - - Args: - anchors (torch.Tensor): Concatenated multi-level anchor. - reg_targets (torch.Tensor): Bbox regression targets. - dir_offset (int): Direction offset. - num_bins (int): Number of bins to divide 2*PI. - one_hot (bool): Whether to encode as one hot. - - Returns: - torch.Tensor: Encoded direction targets. - """ - rot_gt = reg_targets[..., 6] + anchors[..., 6] - offset_rot = limit_period(rot_gt - dir_offset, dir_limit_offset, 2 * np.pi) - dir_cls_targets = torch.floor(offset_rot / (2 * np.pi / num_bins)).long() - dir_cls_targets = torch.clamp(dir_cls_targets, min=0, max=num_bins - 1) - if one_hot: - dir_targets = torch.zeros( - *list(dir_cls_targets.shape), - num_bins, - dtype=anchors.dtype, - device=dir_cls_targets.device) - dir_targets.scatter_(dir_cls_targets.unsqueeze(dim=-1).long(), 1.0) - dir_cls_targets = dir_targets - return dir_cls_targets diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/vote_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/vote_head.py deleted file mode 100644 index 50f8da0d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/dense_heads/vote_head.py +++ /dev/null @@ -1,665 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. 
All rights reserved. -import numpy as np -import torch -from mmcv.ops import furthest_point_sample -from mmcv.runner import BaseModule, force_fp32 -from torch.nn import functional as F - -from mmdet3d.core.post_processing import aligned_3d_nms -from mmdet3d.models.losses import chamfer_distance -from mmdet3d.models.model_utils import VoteModule -from mmdet3d.ops import build_sa_module -from mmdet.core import build_bbox_coder, multi_apply -from ..builder import HEADS, build_loss -from .base_conv_bbox_head import BaseConvBboxHead - - -@HEADS.register_module() -class VoteHead(BaseModule): - r"""Bbox head of `Votenet `_. - - Args: - num_classes (int): The number of class. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for encoding and - decoding boxes. - train_cfg (dict): Config for training. - test_cfg (dict): Config for testing. - vote_module_cfg (dict): Config of VoteModule for point-wise votes. - vote_aggregation_cfg (dict): Config of vote aggregation layer. - pred_layer_cfg (dict): Config of classfication and regression - prediction layers. - conv_cfg (dict): Config of convolution in prediction layer. - norm_cfg (dict): Config of BN in prediction layer. - objectness_loss (dict): Config of objectness loss. - center_loss (dict): Config of center loss. - dir_class_loss (dict): Config of direction classification loss. - dir_res_loss (dict): Config of direction residual regression loss. - size_class_loss (dict): Config of size classification loss. - size_res_loss (dict): Config of size residual regression loss. - semantic_loss (dict): Config of point-wise semantic segmentation loss. - """ - - def __init__(self, - num_classes, - bbox_coder, - train_cfg=None, - test_cfg=None, - vote_module_cfg=None, - vote_aggregation_cfg=None, - pred_layer_cfg=None, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=None, - center_loss=None, - dir_class_loss=None, - dir_res_loss=None, - size_class_loss=None, - size_res_loss=None, - semantic_loss=None, - iou_loss=None, - init_cfg=None): - super(VoteHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.gt_per_seed = vote_module_cfg['gt_per_seed'] - self.num_proposal = vote_aggregation_cfg['num_point'] - - self.objectness_loss = build_loss(objectness_loss) - self.center_loss = build_loss(center_loss) - self.dir_res_loss = build_loss(dir_res_loss) - self.dir_class_loss = build_loss(dir_class_loss) - self.size_res_loss = build_loss(size_res_loss) - if size_class_loss is not None: - self.size_class_loss = build_loss(size_class_loss) - if semantic_loss is not None: - self.semantic_loss = build_loss(semantic_loss) - if iou_loss is not None: - self.iou_loss = build_loss(iou_loss) - else: - self.iou_loss = None - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.num_sizes = self.bbox_coder.num_sizes - self.num_dir_bins = self.bbox_coder.num_dir_bins - - self.vote_module = VoteModule(**vote_module_cfg) - self.vote_aggregation = build_sa_module(vote_aggregation_cfg) - self.fp16_enabled = False - - # Bbox classification and regression - self.conv_pred = BaseConvBboxHead( - **pred_layer_cfg, - num_cls_out_channels=self._get_cls_out_channels(), - num_reg_out_channels=self._get_reg_out_channels()) - - def _get_cls_out_channels(self): - """Return the channel number of classification outputs.""" - # Class numbers (k) + objectness (2) - return self.num_classes + 2 - - def _get_reg_out_channels(self): - """Return the channel number of regression outputs.""" - # 
Objectness scores (2), center residual (3), - # heading class+residual (num_dir_bins*2), - # size class+residual(num_sizes*4) - return 3 + self.num_dir_bins * 2 + self.num_sizes * 4 - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - torch.Tensor: Coordinates of input points. - torch.Tensor: Features of input points. - torch.Tensor: Indices of input points. - """ - - # for imvotenet - if 'seed_points' in feat_dict and \ - 'seed_features' in feat_dict and \ - 'seed_indices' in feat_dict: - seed_points = feat_dict['seed_points'] - seed_features = feat_dict['seed_features'] - seed_indices = feat_dict['seed_indices'] - # for votenet - else: - seed_points = feat_dict['fp_xyz'][-1] - seed_features = feat_dict['fp_features'][-1] - seed_indices = feat_dict['fp_indices'][-1] - - return seed_points, seed_features, seed_indices - - def forward(self, feat_dict, sample_mod): - """Forward pass. - - Note: - The forward of VoteHead is divided into 4 steps: - - 1. Generate vote_points from seed_points. - 2. Aggregate vote_points. - 3. Predict bbox and score. - 4. Decode predictions. - - Args: - feat_dict (dict): Feature dict from backbone. - sample_mod (str): Sample mode for vote aggregation layer. - valid modes are "vote", "seed", "random" and "spec". - - Returns: - dict: Predictions of vote head. - """ - assert sample_mod in ['vote', 'seed', 'random', 'spec'] - - seed_points, seed_features, seed_indices = self._extract_input( - feat_dict) - - # 1. generate vote_points from seed_points - vote_points, vote_features, vote_offset = self.vote_module( - seed_points, seed_features) - results = dict( - seed_points=seed_points, - seed_indices=seed_indices, - vote_points=vote_points, - vote_features=vote_features, - vote_offset=vote_offset) - - # 2. aggregate vote_points - if sample_mod == 'vote': - # use fps in vote_aggregation - aggregation_inputs = dict( - points_xyz=vote_points, features=vote_features) - elif sample_mod == 'seed': - # FPS on seed and choose the votes corresponding to the seeds - sample_indices = furthest_point_sample(seed_points, - self.num_proposal) - aggregation_inputs = dict( - points_xyz=vote_points, - features=vote_features, - indices=sample_indices) - elif sample_mod == 'random': - # Random sampling from the votes - batch_size, num_seed = seed_points.shape[:2] - sample_indices = seed_points.new_tensor( - torch.randint(0, num_seed, (batch_size, self.num_proposal)), - dtype=torch.int32) - aggregation_inputs = dict( - points_xyz=vote_points, - features=vote_features, - indices=sample_indices) - elif sample_mod == 'spec': - # Specify the new center in vote_aggregation - aggregation_inputs = dict( - points_xyz=seed_points, - features=seed_features, - target_xyz=vote_points) - else: - raise NotImplementedError( - f'Sample mode {sample_mod} is not supported!') - - vote_aggregation_ret = self.vote_aggregation(**aggregation_inputs) - aggregated_points, features, aggregated_indices = vote_aggregation_ret - - results['aggregated_points'] = aggregated_points - results['aggregated_features'] = features - results['aggregated_indices'] = aggregated_indices - - # 3. predict bbox and score - cls_predictions, reg_predictions = self.conv_pred(features) - - # 4. 
decode predictions - decode_res = self.bbox_coder.split_pred(cls_predictions, - reg_predictions, - aggregated_points) - - results.update(decode_res) - - return results - - @force_fp32(apply_to=('bbox_preds', )) - def loss(self, - bbox_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - img_metas=None, - gt_bboxes_ignore=None, - ret_target=False): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of vote head. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - img_metas (list[dict]): Contain pcd and img's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - ret_target (Bool): Return targets or not. - - Returns: - dict: Losses of Votenet. - """ - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - bbox_preds) - (vote_targets, vote_target_masks, size_class_targets, size_res_targets, - dir_class_targets, dir_res_targets, center_targets, - assigned_center_targets, mask_targets, valid_gt_masks, - objectness_targets, objectness_weights, box_loss_weights, - valid_gt_weights) = targets - - # calculate vote loss - vote_loss = self.vote_module.get_loss(bbox_preds['seed_points'], - bbox_preds['vote_points'], - bbox_preds['seed_indices'], - vote_target_masks, vote_targets) - - # calculate objectness loss - objectness_loss = self.objectness_loss( - bbox_preds['obj_scores'].transpose(2, 1), - objectness_targets, - weight=objectness_weights) - - # calculate center loss - source2target_loss, target2source_loss = self.center_loss( - bbox_preds['center'], - center_targets, - src_weight=box_loss_weights, - dst_weight=valid_gt_weights) - center_loss = source2target_loss + target2source_loss - - # calculate direction class loss - dir_class_loss = self.dir_class_loss( - bbox_preds['dir_class'].transpose(2, 1), - dir_class_targets, - weight=box_loss_weights) - - # calculate direction residual loss - batch_size, proposal_num = size_class_targets.shape[:2] - heading_label_one_hot = vote_targets.new_zeros( - (batch_size, proposal_num, self.num_dir_bins)) - heading_label_one_hot.scatter_(2, dir_class_targets.unsqueeze(-1), 1) - dir_res_norm = torch.sum( - bbox_preds['dir_res_norm'] * heading_label_one_hot, -1) - dir_res_loss = self.dir_res_loss( - dir_res_norm, dir_res_targets, weight=box_loss_weights) - - # calculate size class loss - size_class_loss = self.size_class_loss( - bbox_preds['size_class'].transpose(2, 1), - size_class_targets, - weight=box_loss_weights) - - # calculate size residual loss - one_hot_size_targets = vote_targets.new_zeros( - (batch_size, proposal_num, self.num_sizes)) - one_hot_size_targets.scatter_(2, size_class_targets.unsqueeze(-1), 1) - one_hot_size_targets_expand = one_hot_size_targets.unsqueeze( - -1).repeat(1, 1, 1, 3).contiguous() - size_residual_norm = torch.sum( - bbox_preds['size_res_norm'] * one_hot_size_targets_expand, 2) - box_loss_weights_expand = box_loss_weights.unsqueeze(-1).repeat( - 1, 1, 3) - size_res_loss = self.size_res_loss( - size_residual_norm, - size_res_targets, - weight=box_loss_weights_expand) - - # calculate semantic loss - semantic_loss = self.semantic_loss( - bbox_preds['sem_scores'].transpose(2, 1), - mask_targets, - 
weight=box_loss_weights) - - losses = dict( - vote_loss=vote_loss, - objectness_loss=objectness_loss, - semantic_loss=semantic_loss, - center_loss=center_loss, - dir_class_loss=dir_class_loss, - dir_res_loss=dir_res_loss, - size_class_loss=size_class_loss, - size_res_loss=size_res_loss) - - if self.iou_loss: - corners_pred = self.bbox_coder.decode_corners( - bbox_preds['center'], size_residual_norm, - one_hot_size_targets_expand) - corners_target = self.bbox_coder.decode_corners( - assigned_center_targets, size_res_targets, - one_hot_size_targets_expand) - iou_loss = self.iou_loss( - corners_pred, corners_target, weight=box_loss_weights) - losses['iou_loss'] = iou_loss - - if ret_target: - losses['targets'] = targets - - return losses - - def get_targets(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - bbox_preds=None): - """Generate targets of vote head. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - pts_semantic_mask (list[torch.Tensor]): Point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): Point-wise instance - label of each batch. - bbox_preds (torch.Tensor): Bounding box predictions of vote head. - - Returns: - tuple[torch.Tensor]: Targets of vote head. - """ - # find empty example - valid_gt_masks = list() - gt_num = list() - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - valid_gt_masks.append(gt_labels_3d[index].new_zeros(1)) - gt_num.append(1) - else: - valid_gt_masks.append(gt_labels_3d[index].new_ones( - gt_labels_3d[index].shape)) - gt_num.append(gt_labels_3d[index].shape[0]) - max_gt_num = max(gt_num) - - if pts_semantic_mask is None: - pts_semantic_mask = [None for i in range(len(gt_labels_3d))] - pts_instance_mask = [None for i in range(len(gt_labels_3d))] - - aggregated_points = [ - bbox_preds['aggregated_points'][i] - for i in range(len(gt_labels_3d)) - ] - - (vote_targets, vote_target_masks, size_class_targets, size_res_targets, - dir_class_targets, dir_res_targets, center_targets, - assigned_center_targets, mask_targets, objectness_targets, - objectness_masks) = multi_apply(self.get_targets_single, points, - gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - aggregated_points) - - # pad targets as original code of votenet. 
- for index in range(len(gt_labels_3d)): - pad_num = max_gt_num - gt_labels_3d[index].shape[0] - center_targets[index] = F.pad(center_targets[index], - (0, 0, 0, pad_num)) - valid_gt_masks[index] = F.pad(valid_gt_masks[index], (0, pad_num)) - - vote_targets = torch.stack(vote_targets) - vote_target_masks = torch.stack(vote_target_masks) - center_targets = torch.stack(center_targets) - valid_gt_masks = torch.stack(valid_gt_masks) - - assigned_center_targets = torch.stack(assigned_center_targets) - objectness_targets = torch.stack(objectness_targets) - objectness_weights = torch.stack(objectness_masks) - objectness_weights /= (torch.sum(objectness_weights) + 1e-6) - box_loss_weights = objectness_targets.float() / ( - torch.sum(objectness_targets).float() + 1e-6) - valid_gt_weights = valid_gt_masks.float() / ( - torch.sum(valid_gt_masks.float()) + 1e-6) - dir_class_targets = torch.stack(dir_class_targets) - dir_res_targets = torch.stack(dir_res_targets) - size_class_targets = torch.stack(size_class_targets) - size_res_targets = torch.stack(size_res_targets) - mask_targets = torch.stack(mask_targets) - - return (vote_targets, vote_target_masks, size_class_targets, - size_res_targets, dir_class_targets, dir_res_targets, - center_targets, assigned_center_targets, mask_targets, - valid_gt_masks, objectness_targets, objectness_weights, - box_loss_weights, valid_gt_weights) - - def get_targets_single(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - aggregated_points=None): - """Generate targets of vote head for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. - gt_labels_3d (torch.Tensor): Labels of each batch. - pts_semantic_mask (torch.Tensor): Point-wise semantic - label of each batch. - pts_instance_mask (torch.Tensor): Point-wise instance - label of each batch. - aggregated_points (torch.Tensor): Aggregated points from - vote aggregation layer. - - Returns: - tuple[torch.Tensor]: Targets of vote head. 
- """ - assert self.bbox_coder.with_rot or pts_semantic_mask is not None - - gt_bboxes_3d = gt_bboxes_3d.to(points.device) - - # generate votes target - num_points = points.shape[0] - if self.bbox_coder.with_rot: - vote_targets = points.new_zeros([num_points, 3 * self.gt_per_seed]) - vote_target_masks = points.new_zeros([num_points], - dtype=torch.long) - vote_target_idx = points.new_zeros([num_points], dtype=torch.long) - box_indices_all = gt_bboxes_3d.points_in_boxes_all(points) - for i in range(gt_labels_3d.shape[0]): - box_indices = box_indices_all[:, i] - indices = torch.nonzero( - box_indices, as_tuple=False).squeeze(-1) - selected_points = points[indices] - vote_target_masks[indices] = 1 - vote_targets_tmp = vote_targets[indices] - votes = gt_bboxes_3d.gravity_center[i].unsqueeze( - 0) - selected_points[:, :3] - - for j in range(self.gt_per_seed): - column_indices = torch.nonzero( - vote_target_idx[indices] == j, - as_tuple=False).squeeze(-1) - vote_targets_tmp[column_indices, - int(j * 3):int(j * 3 + - 3)] = votes[column_indices] - if j == 0: - vote_targets_tmp[column_indices] = votes[ - column_indices].repeat(1, self.gt_per_seed) - - vote_targets[indices] = vote_targets_tmp - vote_target_idx[indices] = torch.clamp( - vote_target_idx[indices] + 1, max=2) - elif pts_semantic_mask is not None: - vote_targets = points.new_zeros([num_points, 3]) - vote_target_masks = points.new_zeros([num_points], - dtype=torch.long) - - for i in torch.unique(pts_instance_mask): - indices = torch.nonzero( - pts_instance_mask == i, as_tuple=False).squeeze(-1) - if pts_semantic_mask[indices[0]] < self.num_classes: - selected_points = points[indices, :3] - center = 0.5 * ( - selected_points.min(0)[0] + selected_points.max(0)[0]) - vote_targets[indices, :] = center - selected_points - vote_target_masks[indices] = 1 - vote_targets = vote_targets.repeat((1, self.gt_per_seed)) - else: - raise NotImplementedError - - (center_targets, size_class_targets, size_res_targets, - dir_class_targets, - dir_res_targets) = self.bbox_coder.encode(gt_bboxes_3d, gt_labels_3d) - - proposal_num = aggregated_points.shape[0] - distance1, _, assignment, _ = chamfer_distance( - aggregated_points.unsqueeze(0), - center_targets.unsqueeze(0), - reduction='none') - assignment = assignment.squeeze(0) - euclidean_distance1 = torch.sqrt(distance1.squeeze(0) + 1e-6) - - objectness_targets = points.new_zeros((proposal_num), dtype=torch.long) - objectness_targets[ - euclidean_distance1 < self.train_cfg['pos_distance_thr']] = 1 - - objectness_masks = points.new_zeros((proposal_num)) - objectness_masks[ - euclidean_distance1 < self.train_cfg['pos_distance_thr']] = 1.0 - objectness_masks[ - euclidean_distance1 > self.train_cfg['neg_distance_thr']] = 1.0 - - dir_class_targets = dir_class_targets[assignment] - dir_res_targets = dir_res_targets[assignment] - dir_res_targets /= (np.pi / self.num_dir_bins) - size_class_targets = size_class_targets[assignment] - size_res_targets = size_res_targets[assignment] - - one_hot_size_targets = gt_bboxes_3d.tensor.new_zeros( - (proposal_num, self.num_sizes)) - one_hot_size_targets.scatter_(1, size_class_targets.unsqueeze(-1), 1) - one_hot_size_targets = one_hot_size_targets.unsqueeze(-1).repeat( - 1, 1, 3) - mean_sizes = size_res_targets.new_tensor( - self.bbox_coder.mean_sizes).unsqueeze(0) - pos_mean_sizes = torch.sum(one_hot_size_targets * mean_sizes, 1) - size_res_targets /= pos_mean_sizes - - mask_targets = gt_labels_3d[assignment] - assigned_center_targets = center_targets[assignment] - - return 
(vote_targets, vote_target_masks, size_class_targets, - size_res_targets, dir_class_targets, - dir_res_targets, center_targets, assigned_center_targets, - mask_targets.long(), objectness_targets, objectness_masks) - - def get_bboxes(self, - points, - bbox_preds, - input_metas, - rescale=False, - use_nms=True): - """Generate bboxes from vote head predictions. - - Args: - points (torch.Tensor): Input points. - bbox_preds (dict): Predictions from vote head. - input_metas (list[dict]): Point cloud and image's meta info. - rescale (bool): Whether to rescale bboxes. - use_nms (bool): Whether to apply NMS, skip nms postprocessing - while using vote head in rpn stage. - - Returns: - list[tuple[torch.Tensor]]: Bounding boxes, scores and labels. - """ - # decode boxes - obj_scores = F.softmax(bbox_preds['obj_scores'], dim=-1)[..., -1] - sem_scores = F.softmax(bbox_preds['sem_scores'], dim=-1) - bbox3d = self.bbox_coder.decode(bbox_preds) - - if use_nms: - batch_size = bbox3d.shape[0] - results = list() - for b in range(batch_size): - bbox_selected, score_selected, labels = \ - self.multiclass_nms_single(obj_scores[b], sem_scores[b], - bbox3d[b], points[b, ..., :3], - input_metas[b]) - bbox = input_metas[b]['box_type_3d']( - bbox_selected, - box_dim=bbox_selected.shape[-1], - with_yaw=self.bbox_coder.with_rot) - results.append((bbox, score_selected, labels)) - - return results - else: - return bbox3d - - def multiclass_nms_single(self, obj_scores, sem_scores, bbox, points, - input_meta): - """Multi-class nms in single batch. - - Args: - obj_scores (torch.Tensor): Objectness score of bounding boxes. - sem_scores (torch.Tensor): semantic class score of bounding boxes. - bbox (torch.Tensor): Predicted bounding boxes. - points (torch.Tensor): Input points. - input_meta (dict): Point cloud and image's meta info. - - Returns: - tuple[torch.Tensor]: Bounding boxes, scores and labels. 
- """ - bbox = input_meta['box_type_3d']( - bbox, - box_dim=bbox.shape[-1], - with_yaw=self.bbox_coder.with_rot, - origin=(0.5, 0.5, 0.5)) - box_indices = bbox.points_in_boxes_all(points) - - corner3d = bbox.corners - minmax_box3d = corner3d.new(torch.Size((corner3d.shape[0], 6))) - minmax_box3d[:, :3] = torch.min(corner3d, dim=1)[0] - minmax_box3d[:, 3:] = torch.max(corner3d, dim=1)[0] - - nonempty_box_mask = box_indices.T.sum(1) > 5 - - bbox_classes = torch.argmax(sem_scores, -1) - nms_selected = aligned_3d_nms(minmax_box3d[nonempty_box_mask], - obj_scores[nonempty_box_mask], - bbox_classes[nonempty_box_mask], - self.test_cfg.nms_thr) - - # filter empty boxes and boxes with low score - scores_mask = (obj_scores > self.test_cfg.score_thr) - nonempty_box_inds = torch.nonzero( - nonempty_box_mask, as_tuple=False).flatten() - nonempty_mask = torch.zeros_like(bbox_classes).scatter( - 0, nonempty_box_inds[nms_selected], 1) - selected = (nonempty_mask.bool() & scores_mask.bool()) - - if self.test_cfg.per_class_proposal: - bbox_selected, score_selected, labels = [], [], [] - for k in range(sem_scores.shape[-1]): - bbox_selected.append(bbox[selected].tensor) - score_selected.append(obj_scores[selected] * - sem_scores[selected][:, k]) - labels.append( - torch.zeros_like(bbox_classes[selected]).fill_(k)) - bbox_selected = torch.cat(bbox_selected, 0) - score_selected = torch.cat(score_selected, 0) - labels = torch.cat(labels, 0) - else: - bbox_selected = bbox[selected].tensor - score_selected = obj_scores[selected] - labels = bbox_classes[selected] - - return bbox_selected, score_selected, labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/__init__.py deleted file mode 100644 index a3c45e28..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from .base import Base3DDetector -from .centerpoint import CenterPoint -from .dynamic_voxelnet import DynamicVoxelNet -from .fcos_mono3d import FCOSMono3D -from .groupfree3dnet import GroupFree3DNet -from .h3dnet import H3DNet -from .imvotenet import ImVoteNet -from .imvoxelnet import ImVoxelNet -from .mvx_faster_rcnn import DynamicMVXFasterRCNN, MVXFasterRCNN -from .mvx_two_stage import MVXTwoStageDetector -from .parta2 import PartA2 -from .point_rcnn import PointRCNN -from .sassd import SASSD -from .single_stage_mono3d import SingleStageMono3DDetector -from .smoke_mono3d import SMOKEMono3D -from .ssd3dnet import SSD3DNet -from .votenet import VoteNet -from .voxelnet import VoxelNet - -__all__ = [ - 'Base3DDetector', 'VoxelNet', 'DynamicVoxelNet', 'MVXTwoStageDetector', - 'DynamicMVXFasterRCNN', 'MVXFasterRCNN', 'PartA2', 'VoteNet', 'H3DNet', - 'CenterPoint', 'SSD3DNet', 'ImVoteNet', 'SingleStageMono3DDetector', - 'FCOSMono3D', 'ImVoxelNet', 'GroupFree3DNet', 'PointRCNN', 'SMOKEMono3D', - 'SASSD' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/base.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/base.py deleted file mode 100644 index cb1ee8a2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/base.py +++ /dev/null @@ -1,129 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. 
-# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -import mmcv -import torch -from mmcv.parallel import DataContainer as DC -from mmcv.runner import auto_fp16 - -from mmdet3d.core import Box3DMode, Coord3DMode, show_result -from mmdet.models.detectors import BaseDetector - - -class Base3DDetector(BaseDetector): - """Base class for detectors.""" - - def forward_test(self, points, img_metas, img=None, **kwargs): - """ - Args: - points (list[torch.Tensor]): the outer list indicates test-time - augmentations and inner torch.Tensor should have a shape NxC, - which contains all points in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch - img (list[torch.Tensor], optional): the outer - list indicates test-time augmentations and inner - torch.Tensor should have a shape NxCxHxW, which contains - all images in the batch. Defaults to None. - """ - for var, name in [(points, 'points'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError('{} must be a list, but got {}'.format( - name, type(var))) - - num_augs = len(points) - if num_augs != len(img_metas): - raise ValueError( - 'num of augmentations ({}) != num of image meta ({})'.format( - len(points), len(img_metas))) - - if num_augs == 1: - img = [img] if img is None else img - return self.simple_test(points[0], img_metas[0], img[0], **kwargs) - else: - return self.aug_test(points, img_metas, img, **kwargs) - - @auto_fp16(apply_to=('img', 'points')) - def forward(self, return_loss=True, **kwargs): - """Calls either forward_train or forward_test depending on whether - return_loss=True. - - Note this setting will change the expected inputs. When - `return_loss=True`, img and img_metas are single-nested (i.e. - torch.Tensor and list[dict]), and when `resturn_loss=False`, img and - img_metas should be double nested (i.e. list[torch.Tensor], - list[list[dict]]), with the outer list indicating test time - augmentations. - """ - if return_loss: - return self.forward_train(**kwargs) - else: - return self.forward_test(**kwargs) - - def show_results(self, data, result, out_dir, show=False, score_thr=None): - """Results visualization. - - Args: - data (list[dict]): Input points and the information of the sample. - result (list[dict]): Prediction results. - out_dir (str): Output directory of visualization result. - show (bool, optional): Determines whether you are - going to show result by open3d. - Defaults to False. - score_thr (float, optional): Score threshold of bounding boxes. - Default to None. 
- """ - for batch_id in range(len(result)): - if isinstance(data['points'][0], DC): - points = data['points'][0]._data[0][batch_id].numpy() - elif mmcv.is_list_of(data['points'][0], torch.Tensor): - points = data['points'][0][batch_id] - else: - ValueError(f"Unsupported data type {type(data['points'][0])} " - f'for visualization!') - if isinstance(data['img_metas'][0], DC): - pts_filename = data['img_metas'][0]._data[0][batch_id][ - 'pts_filename'] - box_mode_3d = data['img_metas'][0]._data[0][batch_id][ - 'box_mode_3d'] - elif mmcv.is_list_of(data['img_metas'][0], dict): - pts_filename = data['img_metas'][0][batch_id]['pts_filename'] - box_mode_3d = data['img_metas'][0][batch_id]['box_mode_3d'] - else: - ValueError( - f"Unsupported data type {type(data['img_metas'][0])} " - f'for visualization!') - file_name = osp.split(pts_filename)[-1].split('.')[0] - - assert out_dir is not None, 'Expect out_dir, got none.' - - pred_bboxes = result[batch_id]['boxes_3d'] - pred_labels = result[batch_id]['labels_3d'] - - if score_thr is not None: - mask = result[batch_id]['scores_3d'] > score_thr - pred_bboxes = pred_bboxes[mask] - pred_labels = pred_labels[mask] - - # for now we convert points and bbox into depth mode - if (box_mode_3d == Box3DMode.CAM) or (box_mode_3d - == Box3DMode.LIDAR): - points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - pred_bboxes = Box3DMode.convert(pred_bboxes, box_mode_3d, - Box3DMode.DEPTH) - elif box_mode_3d != Box3DMode.DEPTH: - ValueError( - f'Unsupported box_mode_3d {box_mode_3d} for conversion!') - pred_bboxes = pred_bboxes.tensor.cpu().numpy() - show_result( - points, - None, - pred_bboxes, - out_dir, - file_name, - show=show, - pred_labels=pred_labels) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/centerpoint.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/centerpoint.py deleted file mode 100644 index 0cd3b367..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/centerpoint.py +++ /dev/null @@ -1,198 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from ..builder import DETECTORS -from .mvx_two_stage import MVXTwoStageDetector - - -@DETECTORS.register_module() -class CenterPoint(MVXTwoStageDetector): - """Base class of Multi-modality VoxelNet.""" - - def __init__(self, - pts_voxel_layer=None, - pts_voxel_encoder=None, - pts_middle_encoder=None, - pts_fusion_layer=None, - img_backbone=None, - pts_backbone=None, - img_neck=None, - pts_neck=None, - pts_bbox_head=None, - img_roi_head=None, - img_rpn_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(CenterPoint, - self).__init__(pts_voxel_layer, pts_voxel_encoder, - pts_middle_encoder, pts_fusion_layer, - img_backbone, pts_backbone, img_neck, pts_neck, - pts_bbox_head, img_roi_head, img_rpn_head, - train_cfg, test_cfg, pretrained, init_cfg) - - def extract_pts_feat(self, pts, img_feats, img_metas): - """Extract features of points.""" - if not self.with_pts_bbox: - return None - voxels, num_points, coors = self.voxelize(pts) - - voxel_features = self.pts_voxel_encoder(voxels, num_points, coors) - batch_size = coors[-1, 0] + 1 - x = self.pts_middle_encoder(voxel_features, coors, batch_size) - x = self.pts_backbone(x) - if self.with_pts_neck: - x = self.pts_neck(x) - return x - - def forward_pts_train(self, - pts_feats, - gt_bboxes_3d, - gt_labels_3d, - img_metas, - gt_bboxes_ignore=None): - """Forward function for point cloud branch. - - Args: - pts_feats (list[torch.Tensor]): Features of point cloud branch - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - gt_labels_3d (list[torch.Tensor]): Ground truth labels for - boxes of each sampole - img_metas (list[dict]): Meta information of samples. - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - - Returns: - dict: Losses of each branch. - """ - outs = self.pts_bbox_head(pts_feats) - loss_inputs = [gt_bboxes_3d, gt_labels_3d, outs] - losses = self.pts_bbox_head.loss(*loss_inputs) - return losses - - def simple_test_pts(self, x, img_metas, rescale=False): - """Test function of point cloud branch.""" - outs = self.pts_bbox_head(x) - bbox_list = self.pts_bbox_head.get_bboxes( - outs, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test_pts(self, feats, img_metas, rescale=False): - """Test function of point cloud branch with augmentaiton. - - The function implementation process is as follows: - - - step 1: map features back for double-flip augmentation. - - step 2: merge all features and generate boxes. - - step 3: map boxes back for scale augmentation. - - step 4: merge results. - - Args: - feats (list[torch.Tensor]): Feature of point cloud. - img_metas (list[dict]): Meta information of samples. - rescale (bool, optional): Whether to rescale bboxes. - Default: False. - - Returns: - dict: Returned bboxes consists of the following keys: - - - boxes_3d (:obj:`LiDARInstance3DBoxes`): Predicted bboxes. - - scores_3d (torch.Tensor): Scores of predicted boxes. - - labels_3d (torch.Tensor): Labels of predicted boxes. 
- """ - # only support aug_test for one sample - outs_list = [] - for x, img_meta in zip(feats, img_metas): - outs = self.pts_bbox_head(x) - # merge augmented outputs before decoding bboxes - for task_id, out in enumerate(outs): - for key in out[0].keys(): - if img_meta[0]['pcd_horizontal_flip']: - outs[task_id][0][key] = torch.flip( - outs[task_id][0][key], dims=[2]) - if key == 'reg': - outs[task_id][0][key][:, 1, ...] = 1 - outs[ - task_id][0][key][:, 1, ...] - elif key == 'rot': - outs[task_id][0][ - key][:, 0, - ...] = -outs[task_id][0][key][:, 0, ...] - elif key == 'vel': - outs[task_id][0][ - key][:, 1, - ...] = -outs[task_id][0][key][:, 1, ...] - if img_meta[0]['pcd_vertical_flip']: - outs[task_id][0][key] = torch.flip( - outs[task_id][0][key], dims=[3]) - if key == 'reg': - outs[task_id][0][key][:, 0, ...] = 1 - outs[ - task_id][0][key][:, 0, ...] - elif key == 'rot': - outs[task_id][0][ - key][:, 1, - ...] = -outs[task_id][0][key][:, 1, ...] - elif key == 'vel': - outs[task_id][0][ - key][:, 0, - ...] = -outs[task_id][0][key][:, 0, ...] - - outs_list.append(outs) - - preds_dicts = dict() - scale_img_metas = [] - - # concat outputs sharing the same pcd_scale_factor - for i, (img_meta, outs) in enumerate(zip(img_metas, outs_list)): - pcd_scale_factor = img_meta[0]['pcd_scale_factor'] - if pcd_scale_factor not in preds_dicts.keys(): - preds_dicts[pcd_scale_factor] = outs - scale_img_metas.append(img_meta) - else: - for task_id, out in enumerate(outs): - for key in out[0].keys(): - preds_dicts[pcd_scale_factor][task_id][0][key] += out[ - 0][key] - - aug_bboxes = [] - - for pcd_scale_factor, preds_dict in preds_dicts.items(): - for task_id, pred_dict in enumerate(preds_dict): - # merge outputs with different flips before decoding bboxes - for key in pred_dict[0].keys(): - preds_dict[task_id][0][key] /= len(outs_list) / len( - preds_dicts.keys()) - bbox_list = self.pts_bbox_head.get_bboxes( - preds_dict, img_metas[0], rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - if len(preds_dicts.keys()) > 1: - # merge outputs with different scales after decoding bboxes - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, scale_img_metas, - self.pts_bbox_head.test_cfg) - return merged_bboxes - else: - for key in bbox_list[0].keys(): - bbox_list[0][key] = bbox_list[0][key].to('cpu') - return bbox_list[0] - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test function with augmentaiton.""" - img_feats, pts_feats = self.extract_feats(points, img_metas, imgs) - bbox_list = dict() - if pts_feats and self.with_pts_bbox: - pts_bbox = self.aug_test_pts(pts_feats, img_metas, rescale) - bbox_list.update(pts_bbox=pts_bbox) - return [bbox_list] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/dynamic_voxelnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/dynamic_voxelnet.py deleted file mode 100644 index 2b33c67e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/dynamic_voxelnet.py +++ /dev/null @@ -1,73 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from ..builder import DETECTORS -from .voxelnet import VoxelNet - - -@DETECTORS.register_module() -class DynamicVoxelNet(VoxelNet): - r"""VoxelNet using `dynamic voxelization `_. - """ - - def __init__(self, - voxel_layer, - voxel_encoder, - middle_encoder, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(DynamicVoxelNet, self).__init__( - voxel_layer=voxel_layer, - voxel_encoder=voxel_encoder, - middle_encoder=middle_encoder, - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def extract_feat(self, points, img_metas): - """Extract features from points.""" - voxels, coors = self.voxelize(points) - voxel_features, feature_coors = self.voxel_encoder(voxels, coors) - batch_size = coors[-1, 0].item() + 1 - x = self.middle_encoder(voxel_features, feature_coors, batch_size) - x = self.backbone(x) - if self.with_neck: - x = self.neck(x) - return x - - @torch.no_grad() - @force_fp32() - def voxelize(self, points): - """Apply dynamic voxelization to points. - - Args: - points (list[torch.Tensor]): Points of each sample. - - Returns: - tuple[torch.Tensor]: Concatenated points and coordinates. - """ - coors = [] - # dynamic voxelization only provide a coors mapping - for res in points: - res_coors = self.voxel_layer(res) - coors.append(res_coors) - points = torch.cat(points, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - return points, coors_batch diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/fcos_mono3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/fcos_mono3d.py deleted file mode 100644 index dc6be419..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/fcos_mono3d.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage_mono3d import SingleStageMono3DDetector - - -@DETECTORS.register_module() -class FCOSMono3D(SingleStageMono3DDetector): - r"""`FCOS3D `_ for monocular 3D object detection. - - Currently please refer to our entry on the - `leaderboard `_. - """ # noqa: E501 - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(FCOSMono3D, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/groupfree3dnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/groupfree3dnet.py deleted file mode 100644 index 9fe2f764..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/groupfree3dnet.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from ..builder import DETECTORS -from .single_stage import SingleStage3DDetector - - -@DETECTORS.register_module() -class GroupFree3DNet(SingleStage3DDetector): - """`Group-Free 3D `_.""" - - def __init__(self, - backbone, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(GroupFree3DNet, self).__init__( - backbone=backbone, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - gt_bboxes_ignore=None): - """Forward of training. - - Args: - points (list[torch.Tensor]): Points of each batch. - img_metas (list): Image metas. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): gt bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): gt class labels of each batch. - pts_semantic_mask (list[torch.Tensor]): point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): point-wise instance - label of each batch. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict[str: torch.Tensor]: Losses. - """ - # TODO: refactor votenet series to reduce redundant codes. - points_cat = torch.stack(points) - - x = self.extract_feat(points_cat) - bbox_preds = self.bbox_head(x, self.train_cfg.sample_mod) - loss_inputs = (points, gt_bboxes_3d, gt_labels_3d, pts_semantic_mask, - pts_instance_mask, img_metas) - losses = self.bbox_head.loss( - bbox_preds, *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Forward of testing. - - Args: - points (list[torch.Tensor]): Points of each sample. - img_metas (list): Image metas. - rescale (bool): Whether to rescale results. - Returns: - list: Predicted 3d boxes. - """ - points_cat = torch.stack(points) - - x = self.extract_feat(points_cat) - bbox_preds = self.bbox_head(x, self.test_cfg.sample_mod) - bbox_list = self.bbox_head.get_bboxes( - points_cat, bbox_preds, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test with augmentation.""" - points_cat = [torch.stack(pts) for pts in points] - feats = self.extract_feats(points_cat, img_metas) - - # only support aug_test for one sample - aug_bboxes = [] - for x, pts_cat, img_meta in zip(feats, points_cat, img_metas): - bbox_preds = self.bbox_head(x, self.test_cfg.sample_mod) - bbox_list = self.bbox_head.get_bboxes( - pts_cat, bbox_preds, img_meta, rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/h3dnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/h3dnet.py deleted file mode 100644 index dbbb312a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/h3dnet.py +++ /dev/null @@ -1,178 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. 
-# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.core import merge_aug_bboxes_3d -from ..builder import DETECTORS -from .two_stage import TwoStage3DDetector - - -@DETECTORS.register_module() -class H3DNet(TwoStage3DDetector): - r"""H3DNet model. - - Please refer to the `paper `_ - """ - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(H3DNet, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - gt_bboxes_ignore=None): - """Forward of training. - - Args: - points (list[torch.Tensor]): Points of each batch. - img_metas (list): Image metas. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): gt bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): gt class labels of each batch. - pts_semantic_mask (list[torch.Tensor]): point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): point-wise instance - label of each batch. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict: Losses. - """ - points_cat = torch.stack(points) - - feats_dict = self.extract_feat(points_cat) - feats_dict['fp_xyz'] = [feats_dict['fp_xyz_net0'][-1]] - feats_dict['fp_features'] = [feats_dict['hd_feature']] - feats_dict['fp_indices'] = [feats_dict['fp_indices_net0'][-1]] - - losses = dict() - if self.with_rpn: - rpn_outs = self.rpn_head(feats_dict, self.train_cfg.rpn.sample_mod) - feats_dict.update(rpn_outs) - - rpn_loss_inputs = (points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, img_metas) - rpn_losses = self.rpn_head.loss( - rpn_outs, - *rpn_loss_inputs, - gt_bboxes_ignore=gt_bboxes_ignore, - ret_target=True) - feats_dict['targets'] = rpn_losses.pop('targets') - losses.update(rpn_losses) - - # Generate rpn proposals - proposal_cfg = self.train_cfg.get('rpn_proposal', - self.test_cfg.rpn) - proposal_inputs = (points, rpn_outs, img_metas) - proposal_list = self.rpn_head.get_bboxes( - *proposal_inputs, use_nms=proposal_cfg.use_nms) - feats_dict['proposal_list'] = proposal_list - else: - raise NotImplementedError - - roi_losses = self.roi_head.forward_train(feats_dict, img_metas, points, - gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, - pts_instance_mask, - gt_bboxes_ignore) - losses.update(roi_losses) - - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Forward of testing. - - Args: - points (list[torch.Tensor]): Points of each sample. - img_metas (list): Image metas. - rescale (bool): Whether to rescale results. - - Returns: - list: Predicted 3d boxes. 
- """ - points_cat = torch.stack(points) - - feats_dict = self.extract_feat(points_cat) - feats_dict['fp_xyz'] = [feats_dict['fp_xyz_net0'][-1]] - feats_dict['fp_features'] = [feats_dict['hd_feature']] - feats_dict['fp_indices'] = [feats_dict['fp_indices_net0'][-1]] - - if self.with_rpn: - proposal_cfg = self.test_cfg.rpn - rpn_outs = self.rpn_head(feats_dict, proposal_cfg.sample_mod) - feats_dict.update(rpn_outs) - # Generate rpn proposals - proposal_list = self.rpn_head.get_bboxes( - points, rpn_outs, img_metas, use_nms=proposal_cfg.use_nms) - feats_dict['proposal_list'] = proposal_list - else: - raise NotImplementedError - - return self.roi_head.simple_test( - feats_dict, img_metas, points_cat, rescale=rescale) - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test with augmentation.""" - points_cat = [torch.stack(pts) for pts in points] - feats_dict = self.extract_feats(points_cat, img_metas) - for feat_dict in feats_dict: - feat_dict['fp_xyz'] = [feat_dict['fp_xyz_net0'][-1]] - feat_dict['fp_features'] = [feat_dict['hd_feature']] - feat_dict['fp_indices'] = [feat_dict['fp_indices_net0'][-1]] - - # only support aug_test for one sample - aug_bboxes = [] - for feat_dict, pts_cat, img_meta in zip(feats_dict, points_cat, - img_metas): - if self.with_rpn: - proposal_cfg = self.test_cfg.rpn - rpn_outs = self.rpn_head(feat_dict, proposal_cfg.sample_mod) - feat_dict.update(rpn_outs) - # Generate rpn proposals - proposal_list = self.rpn_head.get_bboxes( - points, rpn_outs, img_metas, use_nms=proposal_cfg.use_nms) - feat_dict['proposal_list'] = proposal_list - else: - raise NotImplementedError - - bbox_results = self.roi_head.simple_test( - feat_dict, - self.test_cfg.rcnn.sample_mod, - img_meta, - pts_cat, - rescale=rescale) - aug_bboxes.append(bbox_results) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] - - def extract_feats(self, points, img_metas): - """Extract features of multiple samples.""" - return [ - self.extract_feat(pts, img_meta) - for pts, img_meta in zip(points, img_metas) - ] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/imvotenet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/imvotenet.py deleted file mode 100644 index 0e6e9d66..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/imvotenet.py +++ /dev/null @@ -1,821 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import numpy as np -import torch - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from mmdet3d.models.utils import MLP -from .. import builder -from ..builder import DETECTORS -from .base import Base3DDetector - - -def sample_valid_seeds(mask, num_sampled_seed=1024): - r"""Randomly sample seeds from all imvotes. - - Modified from ``_ - - Args: - mask (torch.Tensor): Bool tensor in shape ( - seed_num*max_imvote_per_pixel), indicates - whether this imvote corresponds to a 2D bbox. - num_sampled_seed (int): How many to sample from all imvotes. - - Returns: - torch.Tensor: Indices with shape (num_sampled_seed). 
- """ # noqa: E501 - device = mask.device - batch_size = mask.shape[0] - sample_inds = mask.new_zeros((batch_size, num_sampled_seed), - dtype=torch.int64) - for bidx in range(batch_size): - # return index of non zero elements - valid_inds = torch.nonzero(mask[bidx, :]).squeeze(-1) - if len(valid_inds) < num_sampled_seed: - # compute set t1 - t2 - t1 = torch.arange(num_sampled_seed, device=device) - t2 = valid_inds % num_sampled_seed - combined = torch.cat((t1, t2)) - uniques, counts = combined.unique(return_counts=True) - difference = uniques[counts == 1] - - rand_inds = torch.randperm( - len(difference), - device=device)[:num_sampled_seed - len(valid_inds)] - cur_sample_inds = difference[rand_inds] - cur_sample_inds = torch.cat((valid_inds, cur_sample_inds)) - else: - rand_inds = torch.randperm( - len(valid_inds), device=device)[:num_sampled_seed] - cur_sample_inds = valid_inds[rand_inds] - sample_inds[bidx, :] = cur_sample_inds - return sample_inds - - -@DETECTORS.register_module() -class ImVoteNet(Base3DDetector): - r"""`ImVoteNet `_ for 3D detection.""" - - def __init__(self, - pts_backbone=None, - pts_bbox_heads=None, - pts_neck=None, - img_backbone=None, - img_neck=None, - img_roi_head=None, - img_rpn_head=None, - img_mlp=None, - freeze_img_branch=False, - fusion_layer=None, - num_sampled_seed=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - - super(ImVoteNet, self).__init__(init_cfg=init_cfg) - - # point branch - if pts_backbone is not None: - self.pts_backbone = builder.build_backbone(pts_backbone) - if pts_neck is not None: - self.pts_neck = builder.build_neck(pts_neck) - if pts_bbox_heads is not None: - pts_bbox_head_common = pts_bbox_heads.common - pts_bbox_head_common.update( - train_cfg=train_cfg.pts if train_cfg is not None else None) - pts_bbox_head_common.update(test_cfg=test_cfg.pts) - pts_bbox_head_joint = pts_bbox_head_common.copy() - pts_bbox_head_joint.update(pts_bbox_heads.joint) - pts_bbox_head_pts = pts_bbox_head_common.copy() - pts_bbox_head_pts.update(pts_bbox_heads.pts) - pts_bbox_head_img = pts_bbox_head_common.copy() - pts_bbox_head_img.update(pts_bbox_heads.img) - - self.pts_bbox_head_joint = builder.build_head(pts_bbox_head_joint) - self.pts_bbox_head_pts = builder.build_head(pts_bbox_head_pts) - self.pts_bbox_head_img = builder.build_head(pts_bbox_head_img) - self.pts_bbox_heads = [ - self.pts_bbox_head_joint, self.pts_bbox_head_pts, - self.pts_bbox_head_img - ] - self.loss_weights = pts_bbox_heads.loss_weights - - # image branch - if img_backbone: - self.img_backbone = builder.build_backbone(img_backbone) - if img_neck is not None: - self.img_neck = builder.build_neck(img_neck) - if img_rpn_head is not None: - rpn_train_cfg = train_cfg.img_rpn if train_cfg \ - is not None else None - img_rpn_head_ = img_rpn_head.copy() - img_rpn_head_.update( - train_cfg=rpn_train_cfg, test_cfg=test_cfg.img_rpn) - self.img_rpn_head = builder.build_head(img_rpn_head_) - if img_roi_head is not None: - rcnn_train_cfg = train_cfg.img_rcnn if train_cfg \ - is not None else None - img_roi_head.update( - train_cfg=rcnn_train_cfg, test_cfg=test_cfg.img_rcnn) - self.img_roi_head = builder.build_head(img_roi_head) - - # fusion - if fusion_layer is not None: - self.fusion_layer = builder.build_fusion_layer(fusion_layer) - self.max_imvote_per_pixel = fusion_layer.max_imvote_per_pixel - - self.freeze_img_branch = freeze_img_branch - if freeze_img_branch: - self.freeze_img_branch_params() - - if img_mlp is not None: - self.img_mlp = MLP(**img_mlp) - - 
self.num_sampled_seed = num_sampled_seed - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if pretrained is None: - img_pretrained = None - pts_pretrained = None - elif isinstance(pretrained, dict): - img_pretrained = pretrained.get('img', None) - pts_pretrained = pretrained.get('pts', None) - else: - raise ValueError( - f'pretrained should be a dict, got {type(pretrained)}') - - if self.with_img_backbone: - if img_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg.') - self.img_backbone.init_cfg = dict( - type='Pretrained', checkpoint=img_pretrained) - if self.with_img_roi_head: - if img_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg.') - self.img_roi_head.init_cfg = dict( - type='Pretrained', checkpoint=img_pretrained) - - if self.with_pts_backbone: - if img_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg.') - self.pts_backbone.init_cfg = dict( - type='Pretrained', checkpoint=pts_pretrained) - - def freeze_img_branch_params(self): - """Freeze all image branch parameters.""" - if self.with_img_bbox_head: - for param in self.img_bbox_head.parameters(): - param.requires_grad = False - if self.with_img_backbone: - for param in self.img_backbone.parameters(): - param.requires_grad = False - if self.with_img_neck: - for param in self.img_neck.parameters(): - param.requires_grad = False - if self.with_img_rpn: - for param in self.img_rpn_head.parameters(): - param.requires_grad = False - if self.with_img_roi_head: - for param in self.img_roi_head.parameters(): - param.requires_grad = False - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Overload in order to load img network ckpts into img branch.""" - module_names = ['backbone', 'neck', 'roi_head', 'rpn_head'] - for key in list(state_dict): - for module_name in module_names: - if key.startswith(module_name) and ('img_' + - key) not in state_dict: - state_dict['img_' + key] = state_dict.pop(key) - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) - - def train(self, mode=True): - """Overload in order to keep image branch modules in eval mode.""" - super(ImVoteNet, self).train(mode) - if self.freeze_img_branch: - if self.with_img_bbox_head: - self.img_bbox_head.eval() - if self.with_img_backbone: - self.img_backbone.eval() - if self.with_img_neck: - self.img_neck.eval() - if self.with_img_rpn: - self.img_rpn_head.eval() - if self.with_img_roi_head: - self.img_roi_head.eval() - - @property - def with_img_bbox(self): - """bool: Whether the detector has a 2D image box head.""" - return ((hasattr(self, 'img_roi_head') and self.img_roi_head.with_bbox) - or (hasattr(self, 'img_bbox_head') - and self.img_bbox_head is not None)) - - @property - def with_img_bbox_head(self): - """bool: Whether the detector has a 2D image box head (not roi).""" - return hasattr(self, - 'img_bbox_head') and self.img_bbox_head is not None - - @property - def with_img_backbone(self): - """bool: Whether the detector has a 2D image backbone.""" - return hasattr(self, 'img_backbone') and self.img_backbone is not None - - @property - def with_img_neck(self): - """bool: Whether the detector has a neck in image branch.""" - return hasattr(self, 'img_neck') and self.img_neck is not 
None - - @property - def with_img_rpn(self): - """bool: Whether the detector has a 2D RPN in image detector branch.""" - return hasattr(self, 'img_rpn_head') and self.img_rpn_head is not None - - @property - def with_img_roi_head(self): - """bool: Whether the detector has a RoI Head in image branch.""" - return hasattr(self, 'img_roi_head') and self.img_roi_head is not None - - @property - def with_pts_bbox(self): - """bool: Whether the detector has a 3D box head.""" - return hasattr(self, - 'pts_bbox_head') and self.pts_bbox_head is not None - - @property - def with_pts_backbone(self): - """bool: Whether the detector has a 3D backbone.""" - return hasattr(self, 'pts_backbone') and self.pts_backbone is not None - - @property - def with_pts_neck(self): - """bool: Whether the detector has a neck in 3D detector branch.""" - return hasattr(self, 'pts_neck') and self.pts_neck is not None - - def extract_feat(self, imgs): - """Just to inherit from abstract method.""" - pass - - def extract_img_feat(self, img): - """Directly extract features from the img backbone+neck.""" - x = self.img_backbone(img) - if self.with_img_neck: - x = self.img_neck(x) - return x - - def extract_img_feats(self, imgs): - """Extract features from multiple images. - - Args: - imgs (list[torch.Tensor]): A list of images. The images are - augmented from the same image but in different ways. - - Returns: - list[torch.Tensor]: Features of different images - """ - - assert isinstance(imgs, list) - return [self.extract_img_feat(img) for img in imgs] - - def extract_pts_feat(self, pts): - """Extract features of points.""" - x = self.pts_backbone(pts) - if self.with_pts_neck: - x = self.pts_neck(x) - - seed_points = x['fp_xyz'][-1] - seed_features = x['fp_features'][-1] - seed_indices = x['fp_indices'][-1] - - return (seed_points, seed_features, seed_indices) - - def extract_pts_feats(self, pts): - """Extract features of points from multiple samples.""" - assert isinstance(pts, list) - return [self.extract_pts_feat(pt) for pt in pts] - - @torch.no_grad() - def extract_bboxes_2d(self, - img, - img_metas, - train=True, - bboxes_2d=None, - **kwargs): - """Extract bounding boxes from 2d detector. - - Args: - img (torch.Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): Image meta info. - train (bool): train-time or not. - bboxes_2d (list[torch.Tensor]): provided 2d bboxes, - not supported yet. - - Return: - list[torch.Tensor]: a list of processed 2d bounding boxes. 
- """ - if bboxes_2d is None: - x = self.extract_img_feat(img) - proposal_list = self.img_rpn_head.simple_test_rpn(x, img_metas) - rets = self.img_roi_head.simple_test( - x, proposal_list, img_metas, rescale=False) - - rets_processed = [] - for ret in rets: - tmp = np.concatenate(ret, axis=0) - sem_class = img.new_zeros((len(tmp))) - start = 0 - for i, bboxes in enumerate(ret): - sem_class[start:start + len(bboxes)] = i - start += len(bboxes) - ret = img.new_tensor(tmp) - - # append class index - ret = torch.cat([ret, sem_class[:, None]], dim=-1) - inds = torch.argsort(ret[:, 4], descending=True) - ret = ret.index_select(0, inds) - - # drop half bboxes during training for better generalization - if train: - rand_drop = torch.randperm(len(ret))[:(len(ret) + 1) // 2] - rand_drop = torch.sort(rand_drop)[0] - ret = ret[rand_drop] - - rets_processed.append(ret.float()) - return rets_processed - else: - rets_processed = [] - for ret in bboxes_2d: - if len(ret) > 0 and train: - rand_drop = torch.randperm(len(ret))[:(len(ret) + 1) // 2] - rand_drop = torch.sort(rand_drop)[0] - ret = ret[rand_drop] - rets_processed.append(ret.float()) - return rets_processed - - def forward_train(self, - points=None, - img=None, - img_metas=None, - gt_bboxes=None, - gt_labels=None, - gt_bboxes_ignore=None, - gt_masks=None, - proposals=None, - bboxes_2d=None, - gt_bboxes_3d=None, - gt_labels_3d=None, - pts_semantic_mask=None, - pts_instance_mask=None, - **kwargs): - """Forwarding of train for image branch pretrain or stage 2 train. - - Args: - points (list[torch.Tensor]): Points of each batch. - img (torch.Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): list of image and point cloud meta info - dict. For example, keys include 'ori_shape', 'img_norm_cfg', - and 'transformation_3d_flow'. For details on the values of - the keys see `mmdet/datasets/pipelines/formatting.py:Collect`. - gt_bboxes (list[torch.Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[torch.Tensor]): class indices for each - 2d bounding box. - gt_bboxes_ignore (list[torch.Tensor]): specify which - 2d bounding boxes can be ignored when computing the loss. - gt_masks (torch.Tensor): true segmentation masks for each - 2d bbox, used if the architecture supports a segmentation task. - proposals: override rpn proposals (2d) with custom proposals. - Use when `with_rpn` is False. - bboxes_2d (list[torch.Tensor]): provided 2d bboxes, - not supported yet. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): 3d gt bboxes. - gt_labels_3d (list[torch.Tensor]): gt class labels for 3d bboxes. - pts_semantic_mask (list[torch.Tensor]): point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): point-wise instance - label of each batch. - - Returns: - dict[str, torch.Tensor]: a dictionary of loss components. 
- """ - if points is None: - x = self.extract_img_feat(img) - losses = dict() - - # RPN forward and loss - if self.with_img_rpn: - proposal_cfg = self.train_cfg.get('img_rpn_proposal', - self.test_cfg.img_rpn) - rpn_losses, proposal_list = self.img_rpn_head.forward_train( - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=gt_bboxes_ignore, - proposal_cfg=proposal_cfg) - losses.update(rpn_losses) - else: - proposal_list = proposals - - roi_losses = self.img_roi_head.forward_train( - x, img_metas, proposal_list, gt_bboxes, gt_labels, - gt_bboxes_ignore, gt_masks, **kwargs) - losses.update(roi_losses) - return losses - else: - bboxes_2d = self.extract_bboxes_2d( - img, img_metas, bboxes_2d=bboxes_2d, **kwargs) - - points = torch.stack(points) - seeds_3d, seed_3d_features, seed_indices = \ - self.extract_pts_feat(points) - - img_features, masks = self.fusion_layer(img, bboxes_2d, seeds_3d, - img_metas) - - inds = sample_valid_seeds(masks, self.num_sampled_seed) - batch_size, img_feat_size = img_features.shape[:2] - pts_feat_size = seed_3d_features.shape[1] - inds_img = inds.view(batch_size, 1, - -1).expand(-1, img_feat_size, -1) - img_features = img_features.gather(-1, inds_img) - inds = inds % inds.shape[1] - inds_seed_xyz = inds.view(batch_size, -1, 1).expand(-1, -1, 3) - seeds_3d = seeds_3d.gather(1, inds_seed_xyz) - inds_seed_feats = inds.view(batch_size, 1, - -1).expand(-1, pts_feat_size, -1) - seed_3d_features = seed_3d_features.gather(-1, inds_seed_feats) - seed_indices = seed_indices.gather(1, inds) - - img_features = self.img_mlp(img_features) - fused_features = torch.cat([seed_3d_features, img_features], dim=1) - - feat_dict_joint = dict( - seed_points=seeds_3d, - seed_features=fused_features, - seed_indices=seed_indices) - feat_dict_pts = dict( - seed_points=seeds_3d, - seed_features=seed_3d_features, - seed_indices=seed_indices) - feat_dict_img = dict( - seed_points=seeds_3d, - seed_features=img_features, - seed_indices=seed_indices) - - loss_inputs = (points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, img_metas) - bbox_preds_joints = self.pts_bbox_head_joint( - feat_dict_joint, self.train_cfg.pts.sample_mod) - bbox_preds_pts = self.pts_bbox_head_pts( - feat_dict_pts, self.train_cfg.pts.sample_mod) - bbox_preds_img = self.pts_bbox_head_img( - feat_dict_img, self.train_cfg.pts.sample_mod) - losses_towers = [] - losses_joint = self.pts_bbox_head_joint.loss( - bbox_preds_joints, - *loss_inputs, - gt_bboxes_ignore=gt_bboxes_ignore) - losses_pts = self.pts_bbox_head_pts.loss( - bbox_preds_pts, - *loss_inputs, - gt_bboxes_ignore=gt_bboxes_ignore) - losses_img = self.pts_bbox_head_img.loss( - bbox_preds_img, - *loss_inputs, - gt_bboxes_ignore=gt_bboxes_ignore) - losses_towers.append(losses_joint) - losses_towers.append(losses_pts) - losses_towers.append(losses_img) - combined_losses = dict() - for loss_term in losses_joint: - if 'loss' in loss_term: - combined_losses[loss_term] = 0 - for i in range(len(losses_towers)): - combined_losses[loss_term] += \ - losses_towers[i][loss_term] * \ - self.loss_weights[i] - else: - # only save the metric of the joint head - # if it is not a loss - combined_losses[loss_term] = \ - losses_towers[0][loss_term] - - return combined_losses - - def forward_test(self, - points=None, - img_metas=None, - img=None, - bboxes_2d=None, - **kwargs): - """Forwarding of test for image branch pretrain or stage 2 train. 
- - Args: - points (list[list[torch.Tensor]], optional): the outer - list indicates test-time augmentations and the inner - list contains all points in the batch, where each Tensor - should have a shape NxC. Defaults to None. - img_metas (list[list[dict]], optional): the outer list - indicates test-time augs (multiscale, flip, etc.) - and the inner list indicates images in a batch. - Defaults to None. - img (list[list[torch.Tensor]], optional): the outer - list indicates test-time augmentations and inner Tensor - should have a shape NxCxHxW, which contains all images - in the batch. Defaults to None. Defaults to None. - bboxes_2d (list[list[torch.Tensor]], optional): - Provided 2d bboxes, not supported yet. Defaults to None. - - Returns: - list[list[torch.Tensor]]|list[dict]: Predicted 2d or 3d boxes. - """ - if points is None: - for var, name in [(img, 'img'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError( - f'{name} must be a list, but got {type(var)}') - - num_augs = len(img) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(img)}) ' - f'!= num of image meta ({len(img_metas)})') - - if num_augs == 1: - # proposals (List[List[Tensor]]): the outer list indicates - # test-time augs (multiscale, flip, etc.) and the inner list - # indicates images in a batch. - # The Tensor should have a shape Px4, where P is the number of - # proposals. - if 'proposals' in kwargs: - kwargs['proposals'] = kwargs['proposals'][0] - return self.simple_test_img_only( - img=img[0], img_metas=img_metas[0], **kwargs) - else: - assert img[0].size(0) == 1, 'aug test does not support ' \ - 'inference with batch size ' \ - f'{img[0].size(0)}' - # TODO: support test augmentation for predefined proposals - assert 'proposals' not in kwargs - return self.aug_test_img_only( - img=img, img_metas=img_metas, **kwargs) - - else: - for var, name in [(points, 'points'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError('{} must be a list, but got {}'.format( - name, type(var))) - - num_augs = len(points) - if num_augs != len(img_metas): - raise ValueError( - 'num of augmentations ({}) != num of image meta ({})'. - format(len(points), len(img_metas))) - - if num_augs == 1: - return self.simple_test( - points[0], - img_metas[0], - img[0], - bboxes_2d=bboxes_2d[0] if bboxes_2d is not None else None, - **kwargs) - else: - return self.aug_test(points, img_metas, img, bboxes_2d, - **kwargs) - - def simple_test_img_only(self, - img, - img_metas, - proposals=None, - rescale=False): - r"""Test without augmentation, image network pretrain. May refer to - ``_. - - Args: - img (torch.Tensor): Should have a shape NxCxHxW, which contains - all images in the batch. - img_metas (list[dict]): - proposals (list[Tensor], optional): override rpn proposals - with custom proposals. Defaults to None. - rescale (bool, optional): Whether or not rescale bboxes to the - original shape of input image. Defaults to False. - - Returns: - list[list[torch.Tensor]]: Predicted 2d boxes. - """ # noqa: E501 - assert self.with_img_bbox, 'Img bbox head must be implemented.' - assert self.with_img_backbone, 'Img backbone must be implemented.' - assert self.with_img_rpn, 'Img rpn must be implemented.' - assert self.with_img_roi_head, 'Img roi head must be implemented.' 
- - x = self.extract_img_feat(img) - - if proposals is None: - proposal_list = self.img_rpn_head.simple_test_rpn(x, img_metas) - else: - proposal_list = proposals - - ret = self.img_roi_head.simple_test( - x, proposal_list, img_metas, rescale=rescale) - - return ret - - def simple_test(self, - points=None, - img_metas=None, - img=None, - bboxes_2d=None, - rescale=False, - **kwargs): - """Test without augmentation, stage 2. - - Args: - points (list[torch.Tensor], optional): Elements in the list - should have a shape NxC, the list indicates all point-clouds - in the batch. Defaults to None. - img_metas (list[dict], optional): List indicates - images in a batch. Defaults to None. - img (torch.Tensor, optional): Should have a shape NxCxHxW, - which contains all images in the batch. Defaults to None. - bboxes_2d (list[torch.Tensor], optional): - Provided 2d bboxes, not supported yet. Defaults to None. - rescale (bool, optional): Whether or not rescale bboxes. - Defaults to False. - - Returns: - list[dict]: Predicted 3d boxes. - """ - bboxes_2d = self.extract_bboxes_2d( - img, img_metas, train=False, bboxes_2d=bboxes_2d, **kwargs) - - points = torch.stack(points) - seeds_3d, seed_3d_features, seed_indices = \ - self.extract_pts_feat(points) - - img_features, masks = self.fusion_layer(img, bboxes_2d, seeds_3d, - img_metas) - - inds = sample_valid_seeds(masks, self.num_sampled_seed) - batch_size, img_feat_size = img_features.shape[:2] - pts_feat_size = seed_3d_features.shape[1] - inds_img = inds.view(batch_size, 1, -1).expand(-1, img_feat_size, -1) - img_features = img_features.gather(-1, inds_img) - inds = inds % inds.shape[1] - inds_seed_xyz = inds.view(batch_size, -1, 1).expand(-1, -1, 3) - seeds_3d = seeds_3d.gather(1, inds_seed_xyz) - inds_seed_feats = inds.view(batch_size, 1, - -1).expand(-1, pts_feat_size, -1) - seed_3d_features = seed_3d_features.gather(-1, inds_seed_feats) - seed_indices = seed_indices.gather(1, inds) - - img_features = self.img_mlp(img_features) - - fused_features = torch.cat([seed_3d_features, img_features], dim=1) - - feat_dict = dict( - seed_points=seeds_3d, - seed_features=fused_features, - seed_indices=seed_indices) - bbox_preds = self.pts_bbox_head_joint(feat_dict, - self.test_cfg.pts.sample_mod) - bbox_list = self.pts_bbox_head_joint.get_bboxes( - points, bbox_preds, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test_img_only(self, img, img_metas, rescale=False): - r"""Test function with augmentation, image network pretrain. May refer - to ``_. - - Args: - img (list[list[torch.Tensor]], optional): the outer - list indicates test-time augmentations and inner Tensor - should have a shape NxCxHxW, which contains all images - in the batch. Defaults to None. Defaults to None. - img_metas (list[list[dict]], optional): the outer list - indicates test-time augs (multiscale, flip, etc.) - and the inner list indicates images in a batch. - Defaults to None. - rescale (bool, optional): Whether or not rescale bboxes to the - original shape of input image. If rescale is False, then - returned bboxes and masks will fit the scale of imgs[0]. - Defaults to None. - - Returns: - list[list[torch.Tensor]]: Predicted 2d boxes. - """ # noqa: E501 - assert self.with_img_bbox, 'Img bbox head must be implemented.' - assert self.with_img_backbone, 'Img backbone must be implemented.' - assert self.with_img_rpn, 'Img rpn must be implemented.' 
- assert self.with_img_roi_head, 'Img roi head must be implemented.' - - x = self.extract_img_feats(img) - proposal_list = self.img_rpn_head.aug_test_rpn(x, img_metas) - - return self.img_roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) - - def aug_test(self, - points=None, - img_metas=None, - imgs=None, - bboxes_2d=None, - rescale=False, - **kwargs): - """Test function with augmentation, stage 2. - - Args: - points (list[list[torch.Tensor]], optional): the outer - list indicates test-time augmentations and the inner - list contains all points in the batch, where each Tensor - should have a shape NxC. Defaults to None. - img_metas (list[list[dict]], optional): the outer list - indicates test-time augs (multiscale, flip, etc.) - and the inner list indicates images in a batch. - Defaults to None. - imgs (list[list[torch.Tensor]], optional): the outer - list indicates test-time augmentations and inner Tensor - should have a shape NxCxHxW, which contains all images - in the batch. Defaults to None. Defaults to None. - bboxes_2d (list[list[torch.Tensor]], optional): - Provided 2d bboxes, not supported yet. Defaults to None. - rescale (bool, optional): Whether or not rescale bboxes. - Defaults to False. - - Returns: - list[dict]: Predicted 3d boxes. - """ - points_cat = [torch.stack(pts) for pts in points] - feats = self.extract_pts_feats(points_cat, img_metas) - - # only support aug_test for one sample - aug_bboxes = [] - for x, pts_cat, img_meta, bbox_2d, img in zip(feats, points_cat, - img_metas, bboxes_2d, - imgs): - - bbox_2d = self.extract_bboxes_2d( - img, img_metas, train=False, bboxes_2d=bbox_2d, **kwargs) - - seeds_3d, seed_3d_features, seed_indices = x - - img_features, masks = self.fusion_layer(img, bbox_2d, seeds_3d, - img_metas) - - inds = sample_valid_seeds(masks, self.num_sampled_seed) - batch_size, img_feat_size = img_features.shape[:2] - pts_feat_size = seed_3d_features.shape[1] - inds_img = inds.view(batch_size, 1, - -1).expand(-1, img_feat_size, -1) - img_features = img_features.gather(-1, inds_img) - inds = inds % inds.shape[1] - inds_seed_xyz = inds.view(batch_size, -1, 1).expand(-1, -1, 3) - seeds_3d = seeds_3d.gather(1, inds_seed_xyz) - inds_seed_feats = inds.view(batch_size, 1, - -1).expand(-1, pts_feat_size, -1) - seed_3d_features = seed_3d_features.gather(-1, inds_seed_feats) - seed_indices = seed_indices.gather(1, inds) - - img_features = self.img_mlp(img_features) - - fused_features = torch.cat([seed_3d_features, img_features], dim=1) - - feat_dict = dict( - seed_points=seeds_3d, - seed_features=fused_features, - seed_indices=seed_indices) - bbox_preds = self.pts_bbox_head_joint(feat_dict, - self.test_cfg.pts.sample_mod) - bbox_list = self.pts_bbox_head_joint.get_bboxes( - pts_cat, bbox_preds, img_metas, rescale=rescale) - - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/imvoxelnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/imvoxelnet.py deleted file mode 100644 index c1d3e5be..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/imvoxelnet.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) 2024, Shanghai 
Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.core import bbox3d2result, build_prior_generator -from mmdet3d.models.fusion_layers.point_fusion import point_sample -from mmdet.models.detectors import BaseDetector -from ..builder import DETECTORS, build_backbone, build_head, build_neck - - -@DETECTORS.register_module() -class ImVoxelNet(BaseDetector): - r"""`ImVoxelNet `_.""" - - def __init__(self, - backbone, - neck, - neck_3d, - bbox_head, - n_voxels, - anchor_generator, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.backbone = build_backbone(backbone) - self.neck = build_neck(neck) - self.neck_3d = build_neck(neck_3d) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.n_voxels = n_voxels - self.anchor_generator = build_prior_generator(anchor_generator) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img, img_metas): - """Extract 3d features from the backbone -> fpn -> 3d projection. - - Args: - img (torch.Tensor): Input images of shape (N, C_in, H, W). - img_metas (list): Image metas. - - Returns: - torch.Tensor: of shape (N, C_out, N_x, N_y, N_z) - """ - x = self.backbone(img) - x = self.neck(x)[0] - points = self.anchor_generator.grid_anchors( - [self.n_voxels[::-1]], device=img.device)[0][:, :3] - volumes = [] - for feature, img_meta in zip(x, img_metas): - img_scale_factor = ( - points.new_tensor(img_meta['scale_factor'][:2]) - if 'scale_factor' in img_meta.keys() else 1) - img_flip = img_meta['flip'] if 'flip' in img_meta.keys() else False - img_crop_offset = ( - points.new_tensor(img_meta['img_crop_offset']) - if 'img_crop_offset' in img_meta.keys() else 0) - volume = point_sample( - img_meta, - img_features=feature[None, ...], - points=points, - proj_mat=points.new_tensor(img_meta['lidar2img']), - coord_type='LIDAR', - img_scale_factor=img_scale_factor, - img_crop_offset=img_crop_offset, - img_flip=img_flip, - img_pad_shape=img.shape[-2:], - img_shape=img_meta['img_shape'][:2], - aligned=False) - volumes.append( - volume.reshape(self.n_voxels[::-1] + [-1]).permute(3, 2, 1, 0)) - x = torch.stack(volumes) - x = self.neck_3d(x) - return x - - def forward_train(self, img, img_metas, gt_bboxes_3d, gt_labels_3d, - **kwargs): - """Forward of training. - - Args: - img (torch.Tensor): Input images of shape (N, C_in, H, W). - img_metas (list): Image metas. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): gt bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): gt class labels of each batch. - - Returns: - dict[str, torch.Tensor]: A dictionary of loss components. - """ - x = self.extract_feat(img, img_metas) - x = self.bbox_head(x) - losses = self.bbox_head.loss(*x, gt_bboxes_3d, gt_labels_3d, img_metas) - return losses - - def forward_test(self, img, img_metas, **kwargs): - """Forward of testing. - - Args: - img (torch.Tensor): Input images of shape (N, C_in, H, W). - img_metas (list): Image metas. - - Returns: - list[dict]: Predicted 3d boxes. - """ - # not supporting aug_test for now - return self.simple_test(img, img_metas) - - def simple_test(self, img, img_metas): - """Test without augmentations. - - Args: - img (torch.Tensor): Input images of shape (N, C_in, H, W). - img_metas (list): Image metas. - - Returns: - list[dict]: Predicted 3d boxes. 
- """ - x = self.extract_feat(img, img_metas) - x = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes(*x, img_metas) - bbox_results = [ - bbox3d2result(det_bboxes, det_scores, det_labels) - for det_bboxes, det_scores, det_labels in bbox_list - ] - return bbox_results - - def aug_test(self, imgs, img_metas, **kwargs): - """Test with augmentations. - - Args: - imgs (list[torch.Tensor]): Input images of shape (N, C_in, H, W). - img_metas (list): Image metas. - - Returns: - list[dict]: Predicted 3d boxes. - """ - raise NotImplementedError diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/mvx_faster_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/mvx_faster_rcnn.py deleted file mode 100644 index efac009b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/mvx_faster_rcnn.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from ..builder import DETECTORS -from .mvx_two_stage import MVXTwoStageDetector - - -@DETECTORS.register_module() -class MVXFasterRCNN(MVXTwoStageDetector): - """Multi-modality VoxelNet using Faster R-CNN.""" - - def __init__(self, **kwargs): - super(MVXFasterRCNN, self).__init__(**kwargs) - - -@DETECTORS.register_module() -class DynamicMVXFasterRCNN(MVXTwoStageDetector): - """Multi-modality VoxelNet using Faster R-CNN and dynamic voxelization.""" - - def __init__(self, **kwargs): - super(DynamicMVXFasterRCNN, self).__init__(**kwargs) - - @torch.no_grad() - @force_fp32() - def voxelize(self, points): - """Apply dynamic voxelization to points. - - Args: - points (list[torch.Tensor]): Points of each sample. - - Returns: - tuple[torch.Tensor]: Concatenated points and coordinates. - """ - coors = [] - # dynamic voxelization only provide a coors mapping - for res in points: - res_coors = self.pts_voxel_layer(res) - coors.append(res_coors) - points = torch.cat(points, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - return points, coors_batch - - def extract_pts_feat(self, points, img_feats, img_metas): - """Extract point features.""" - if not self.with_pts_bbox: - return None - voxels, coors = self.voxelize(points) - voxel_features, feature_coors = self.pts_voxel_encoder( - voxels, coors, points, img_feats, img_metas) - batch_size = coors[-1, 0] + 1 - x = self.pts_middle_encoder(voxel_features, feature_coors, batch_size) - x = self.pts_backbone(x) - if self.with_pts_neck: - x = self.pts_neck(x) - return x diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/mvx_two_stage.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/mvx_two_stage.py deleted file mode 100644 index 3b7527aa..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/mvx_two_stage.py +++ /dev/null @@ -1,505 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings -from os import path as osp - -import mmcv -import torch -from mmcv.ops import Voxelization -from mmcv.parallel import DataContainer as DC -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from mmdet3d.core import (Box3DMode, Coord3DMode, bbox3d2result, - merge_aug_bboxes_3d, show_result) -from mmdet.core import multi_apply -from .. import builder -from ..builder import DETECTORS -from .base import Base3DDetector - - -@DETECTORS.register_module() -class MVXTwoStageDetector(Base3DDetector): - """Base class of Multi-modality VoxelNet.""" - - def __init__(self, - pts_voxel_layer=None, - pts_voxel_encoder=None, - pts_middle_encoder=None, - pts_fusion_layer=None, - img_backbone=None, - pts_backbone=None, - img_neck=None, - pts_neck=None, - pts_bbox_head=None, - img_roi_head=None, - img_rpn_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(MVXTwoStageDetector, self).__init__(init_cfg=init_cfg) - - if pts_voxel_layer: - self.pts_voxel_layer = Voxelization(**pts_voxel_layer) - if pts_voxel_encoder: - self.pts_voxel_encoder = builder.build_voxel_encoder( - pts_voxel_encoder) - if pts_middle_encoder: - self.pts_middle_encoder = builder.build_middle_encoder( - pts_middle_encoder) - if pts_backbone: - self.pts_backbone = builder.build_backbone(pts_backbone) - if pts_fusion_layer: - self.pts_fusion_layer = builder.build_fusion_layer( - pts_fusion_layer) - if pts_neck is not None: - self.pts_neck = builder.build_neck(pts_neck) - if pts_bbox_head: - pts_train_cfg = train_cfg.pts if train_cfg else None - pts_bbox_head.update(train_cfg=pts_train_cfg) - pts_test_cfg = test_cfg.pts if test_cfg else None - pts_bbox_head.update(test_cfg=pts_test_cfg) - self.pts_bbox_head = builder.build_head(pts_bbox_head) - - if img_backbone: - self.img_backbone = builder.build_backbone(img_backbone) - if img_neck is not None: - self.img_neck = builder.build_neck(img_neck) - if img_rpn_head is not None: - self.img_rpn_head = builder.build_head(img_rpn_head) - if img_roi_head is not None: - self.img_roi_head = builder.build_head(img_roi_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if pretrained is None: - img_pretrained = None - pts_pretrained = None - elif isinstance(pretrained, dict): - img_pretrained = pretrained.get('img', None) - pts_pretrained = pretrained.get('pts', None) - else: - raise ValueError( - f'pretrained should be a dict, got {type(pretrained)}') - - if self.with_img_backbone: - if img_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg.') - self.img_backbone.init_cfg = dict( - type='Pretrained', checkpoint=img_pretrained) - if self.with_img_roi_head: - if img_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg.') - self.img_roi_head.init_cfg = dict( - type='Pretrained', checkpoint=img_pretrained) - if self.with_pts_backbone: - if pts_pretrained is not None: - warnings.warn('DeprecationWarning: pretrained is a deprecated ' - 'key, please consider using init_cfg') - self.pts_backbone.init_cfg = dict( - type='Pretrained', checkpoint=pts_pretrained) - - @property - def with_img_shared_head(self): - """bool: Whether the detector has a shared head in image branch.""" - return hasattr(self, - 'img_shared_head') and self.img_shared_head is not None - - @property - def with_pts_bbox(self): - """bool: Whether the detector has a 3D box head.""" - return 
hasattr(self, - 'pts_bbox_head') and self.pts_bbox_head is not None - - @property - def with_img_bbox(self): - """bool: Whether the detector has a 2D image box head.""" - return hasattr(self, - 'img_bbox_head') and self.img_bbox_head is not None - - @property - def with_img_backbone(self): - """bool: Whether the detector has a 2D image backbone.""" - return hasattr(self, 'img_backbone') and self.img_backbone is not None - - @property - def with_pts_backbone(self): - """bool: Whether the detector has a 3D backbone.""" - return hasattr(self, 'pts_backbone') and self.pts_backbone is not None - - @property - def with_fusion(self): - """bool: Whether the detector has a fusion layer.""" - return hasattr(self, - 'pts_fusion_layer') and self.fusion_layer is not None - - @property - def with_img_neck(self): - """bool: Whether the detector has a neck in image branch.""" - return hasattr(self, 'img_neck') and self.img_neck is not None - - @property - def with_pts_neck(self): - """bool: Whether the detector has a neck in 3D detector branch.""" - return hasattr(self, 'pts_neck') and self.pts_neck is not None - - @property - def with_img_rpn(self): - """bool: Whether the detector has a 2D RPN in image detector branch.""" - return hasattr(self, 'img_rpn_head') and self.img_rpn_head is not None - - @property - def with_img_roi_head(self): - """bool: Whether the detector has a RoI Head in image branch.""" - return hasattr(self, 'img_roi_head') and self.img_roi_head is not None - - @property - def with_voxel_encoder(self): - """bool: Whether the detector has a voxel encoder.""" - return hasattr(self, - 'voxel_encoder') and self.voxel_encoder is not None - - @property - def with_middle_encoder(self): - """bool: Whether the detector has a middle encoder.""" - return hasattr(self, - 'middle_encoder') and self.middle_encoder is not None - - def extract_img_feat(self, img, img_metas): - """Extract features of images.""" - if self.with_img_backbone and img is not None: - input_shape = img.shape[-2:] - # update real input shape of each single img - for img_meta in img_metas: - img_meta.update(input_shape=input_shape) - - if img.dim() == 5 and img.size(0) == 1: - img.squeeze_() - elif img.dim() == 5 and img.size(0) > 1: - B, N, C, H, W = img.size() - img = img.view(B * N, C, H, W) - img_feats = self.img_backbone(img) - else: - return None - if self.with_img_neck: - img_feats = self.img_neck(img_feats) - return img_feats - - def extract_pts_feat(self, pts, img_feats, img_metas): - """Extract features of points.""" - if not self.with_pts_bbox: - return None - voxels, num_points, coors = self.voxelize(pts) - voxel_features = self.pts_voxel_encoder(voxels, num_points, coors, - img_feats, img_metas) - batch_size = coors[-1, 0] + 1 - x = self.pts_middle_encoder(voxel_features, coors, batch_size) - x = self.pts_backbone(x) - if self.with_pts_neck: - x = self.pts_neck(x) - return x - - def extract_feat(self, points, img, img_metas): - """Extract features from images and points.""" - img_feats = self.extract_img_feat(img, img_metas) - pts_feats = self.extract_pts_feat(points, img_feats, img_metas) - return (img_feats, pts_feats) - - @torch.no_grad() - @force_fp32() - def voxelize(self, points): - """Apply dynamic voxelization to points. - - Args: - points (list[torch.Tensor]): Points of each sample. - - Returns: - tuple[torch.Tensor]: Concatenated points, number of points - per voxel, and coordinates. 
- """ - voxels, coors, num_points = [], [], [] - for res in points: - res_voxels, res_coors, res_num_points = self.pts_voxel_layer(res) - voxels.append(res_voxels) - coors.append(res_coors) - num_points.append(res_num_points) - voxels = torch.cat(voxels, dim=0) - num_points = torch.cat(num_points, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - return voxels, num_points, coors_batch - - def forward_train(self, - points=None, - img_metas=None, - gt_bboxes_3d=None, - gt_labels_3d=None, - gt_labels=None, - gt_bboxes=None, - img=None, - proposals=None, - gt_bboxes_ignore=None): - """Forward training function. - - Args: - points (list[torch.Tensor], optional): Points of each sample. - Defaults to None. - img_metas (list[dict], optional): Meta information of each sample. - Defaults to None. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`], optional): - Ground truth 3D boxes. Defaults to None. - gt_labels_3d (list[torch.Tensor], optional): Ground truth labels - of 3D boxes. Defaults to None. - gt_labels (list[torch.Tensor], optional): Ground truth labels - of 2D boxes in images. Defaults to None. - gt_bboxes (list[torch.Tensor], optional): Ground truth 2D boxes in - images. Defaults to None. - img (torch.Tensor, optional): Images of each sample with shape - (N, C, H, W). Defaults to None. - proposals ([list[torch.Tensor], optional): Predicted proposals - used for training Fast RCNN. Defaults to None. - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - 2D boxes in images to be ignored. Defaults to None. - - Returns: - dict: Losses of different branches. - """ - img_feats, pts_feats = self.extract_feat( - points, img=img, img_metas=img_metas) - losses = dict() - if pts_feats: - losses_pts = self.forward_pts_train(pts_feats, gt_bboxes_3d, - gt_labels_3d, img_metas, - gt_bboxes_ignore) - losses.update(losses_pts) - if img_feats: - losses_img = self.forward_img_train( - img_feats, - img_metas=img_metas, - gt_bboxes=gt_bboxes, - gt_labels=gt_labels, - gt_bboxes_ignore=gt_bboxes_ignore, - proposals=proposals) - losses.update(losses_img) - return losses - - def forward_pts_train(self, - pts_feats, - gt_bboxes_3d, - gt_labels_3d, - img_metas, - gt_bboxes_ignore=None): - """Forward function for point cloud branch. - - Args: - pts_feats (list[torch.Tensor]): Features of point cloud branch - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - gt_labels_3d (list[torch.Tensor]): Ground truth labels for - boxes of each sampole - img_metas (list[dict]): Meta information of samples. - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - - Returns: - dict: Losses of each branch. - """ - outs = self.pts_bbox_head(pts_feats) - loss_inputs = outs + (gt_bboxes_3d, gt_labels_3d, img_metas) - losses = self.pts_bbox_head.loss( - *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - def forward_img_train(self, - x, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - proposals=None, - **kwargs): - """Forward function for image branch. - - This function works similar to the forward function of Faster R-CNN. - - Args: - x (list[torch.Tensor]): Image features of shape (B, C, H, W) - of multiple levels. - img_metas (list[dict]): Meta information of images. - gt_bboxes (list[torch.Tensor]): Ground truth boxes of each image - sample. 
- gt_labels (list[torch.Tensor]): Ground truth labels of boxes. - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - proposals (list[torch.Tensor], optional): Proposals of each sample. - Defaults to None. - - Returns: - dict: Losses of each branch. - """ - losses = dict() - # RPN forward and loss - if self.with_img_rpn: - rpn_outs = self.img_rpn_head(x) - rpn_loss_inputs = rpn_outs + (gt_bboxes, img_metas, - self.train_cfg.img_rpn) - rpn_losses = self.img_rpn_head.loss( - *rpn_loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - losses.update(rpn_losses) - - proposal_cfg = self.train_cfg.get('img_rpn_proposal', - self.test_cfg.img_rpn) - proposal_inputs = rpn_outs + (img_metas, proposal_cfg) - proposal_list = self.img_rpn_head.get_bboxes(*proposal_inputs) - else: - proposal_list = proposals - - # bbox head forward and loss - if self.with_img_bbox: - # bbox head forward and loss - img_roi_losses = self.img_roi_head.forward_train( - x, img_metas, proposal_list, gt_bboxes, gt_labels, - gt_bboxes_ignore, **kwargs) - losses.update(img_roi_losses) - - return losses - - def simple_test_img(self, x, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - if proposals is None: - proposal_list = self.simple_test_rpn(x, img_metas, - self.test_cfg.img_rpn) - else: - proposal_list = proposals - - return self.img_roi_head.simple_test( - x, proposal_list, img_metas, rescale=rescale) - - def simple_test_rpn(self, x, img_metas, rpn_test_cfg): - """RPN test function.""" - rpn_outs = self.img_rpn_head(x) - proposal_inputs = rpn_outs + (img_metas, rpn_test_cfg) - proposal_list = self.img_rpn_head.get_bboxes(*proposal_inputs) - return proposal_list - - def simple_test_pts(self, x, img_metas, rescale=False): - """Test function of point cloud branch.""" - outs = self.pts_bbox_head(x) - bbox_list = self.pts_bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def simple_test(self, points, img_metas, img=None, rescale=False): - """Test function without augmentaiton.""" - img_feats, pts_feats = self.extract_feat( - points, img=img, img_metas=img_metas) - - bbox_list = [dict() for i in range(len(img_metas))] - if pts_feats and self.with_pts_bbox: - bbox_pts = self.simple_test_pts( - pts_feats, img_metas, rescale=rescale) - for result_dict, pts_bbox in zip(bbox_list, bbox_pts): - result_dict['pts_bbox'] = pts_bbox - if img_feats and self.with_img_bbox: - bbox_img = self.simple_test_img( - img_feats, img_metas, rescale=rescale) - for result_dict, img_bbox in zip(bbox_list, bbox_img): - result_dict['img_bbox'] = img_bbox - return bbox_list - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test function with augmentaiton.""" - img_feats, pts_feats = self.extract_feats(points, img_metas, imgs) - - bbox_list = dict() - if pts_feats and self.with_pts_bbox: - bbox_pts = self.aug_test_pts(pts_feats, img_metas, rescale) - bbox_list.update(pts_bbox=bbox_pts) - return [bbox_list] - - def extract_feats(self, points, img_metas, imgs=None): - """Extract point and image features of multiple samples.""" - if imgs is None: - imgs = [None] * len(img_metas) - img_feats, pts_feats = multi_apply(self.extract_feat, points, imgs, - img_metas) - return img_feats, pts_feats - - def aug_test_pts(self, feats, img_metas, rescale=False): - """Test function of point cloud branch with augmentaiton.""" - # only 
support aug_test for one sample - aug_bboxes = [] - for x, img_meta in zip(feats, img_metas): - outs = self.pts_bbox_head(x) - bbox_list = self.pts_bbox_head.get_bboxes( - *outs, img_meta, rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.pts_bbox_head.test_cfg) - return merged_bboxes - - def show_results(self, data, result, out_dir): - """Results visualization. - - Args: - data (dict): Input points and the information of the sample. - result (dict): Prediction results. - out_dir (str): Output directory of visualization result. - """ - for batch_id in range(len(result)): - if isinstance(data['points'][0], DC): - points = data['points'][0]._data[0][batch_id].numpy() - elif mmcv.is_list_of(data['points'][0], torch.Tensor): - points = data['points'][0][batch_id] - else: - ValueError(f"Unsupported data type {type(data['points'][0])} " - f'for visualization!') - if isinstance(data['img_metas'][0], DC): - pts_filename = data['img_metas'][0]._data[0][batch_id][ - 'pts_filename'] - box_mode_3d = data['img_metas'][0]._data[0][batch_id][ - 'box_mode_3d'] - elif mmcv.is_list_of(data['img_metas'][0], dict): - pts_filename = data['img_metas'][0][batch_id]['pts_filename'] - box_mode_3d = data['img_metas'][0][batch_id]['box_mode_3d'] - else: - ValueError( - f"Unsupported data type {type(data['img_metas'][0])} " - f'for visualization!') - file_name = osp.split(pts_filename)[-1].split('.')[0] - - assert out_dir is not None, 'Expect out_dir, got none.' - inds = result[batch_id]['pts_bbox']['scores_3d'] > 0.1 - pred_bboxes = result[batch_id]['pts_bbox']['boxes_3d'][inds] - - # for now we convert points and bbox into depth mode - if (box_mode_3d == Box3DMode.CAM) or (box_mode_3d - == Box3DMode.LIDAR): - points = Coord3DMode.convert_point(points, Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - pred_bboxes = Box3DMode.convert(pred_bboxes, box_mode_3d, - Box3DMode.DEPTH) - elif box_mode_3d != Box3DMode.DEPTH: - ValueError( - f'Unsupported box_mode_3d {box_mode_3d} for conversion!') - - pred_bboxes = pred_bboxes.tensor.cpu().numpy() - show_result(points, None, pred_bboxes, out_dir, file_name) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/parta2.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/parta2.py deleted file mode 100644 index 57cd9479..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/parta2.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import Voxelization -from torch.nn import functional as F - -from .. import builder -from ..builder import DETECTORS -from .two_stage import TwoStage3DDetector - - -@DETECTORS.register_module() -class PartA2(TwoStage3DDetector): - r"""Part-A2 detector. 
- - Please refer to the `paper `_ - """ - - def __init__(self, - voxel_layer, - voxel_encoder, - middle_encoder, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(PartA2, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - self.voxel_layer = Voxelization(**voxel_layer) - self.voxel_encoder = builder.build_voxel_encoder(voxel_encoder) - self.middle_encoder = builder.build_middle_encoder(middle_encoder) - - def extract_feat(self, points, img_metas): - """Extract features from points.""" - voxel_dict = self.voxelize(points) - voxel_features = self.voxel_encoder(voxel_dict['voxels'], - voxel_dict['num_points'], - voxel_dict['coors']) - batch_size = voxel_dict['coors'][-1, 0].item() + 1 - feats_dict = self.middle_encoder(voxel_features, voxel_dict['coors'], - batch_size) - x = self.backbone(feats_dict['spatial_features']) - if self.with_neck: - neck_feats = self.neck(x) - feats_dict.update({'neck_feats': neck_feats}) - return feats_dict, voxel_dict - - @torch.no_grad() - def voxelize(self, points): - """Apply hard voxelization to points.""" - voxels, coors, num_points, voxel_centers = [], [], [], [] - for res in points: - res_voxels, res_coors, res_num_points = self.voxel_layer(res) - res_voxel_centers = ( - res_coors[:, [2, 1, 0]] + 0.5) * res_voxels.new_tensor( - self.voxel_layer.voxel_size) + res_voxels.new_tensor( - self.voxel_layer.point_cloud_range[0:3]) - voxels.append(res_voxels) - coors.append(res_coors) - num_points.append(res_num_points) - voxel_centers.append(res_voxel_centers) - - voxels = torch.cat(voxels, dim=0) - num_points = torch.cat(num_points, dim=0) - voxel_centers = torch.cat(voxel_centers, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - - voxel_dict = dict( - voxels=voxels, - num_points=num_points, - coors=coors_batch, - voxel_centers=voxel_centers) - return voxel_dict - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - gt_bboxes_ignore=None, - proposals=None): - """Training forward function. - - Args: - points (list[torch.Tensor]): Point cloud of each sample. - img_metas (list[dict]): Meta information of each sample - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - gt_labels_3d (list[torch.Tensor]): Ground truth labels for - boxes of each sampole - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - - Returns: - dict: Losses of each branch. 
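Aside (not part of the patch): how the hard-voxelization branch above recovers metric voxel centers from integer (z, y, x) voxel indices. The voxel size and point-cloud range below are made-up example numbers, not the model's configured values.

```python
import torch

voxel_size = torch.tensor([0.05, 0.05, 0.1])          # (x, y, z) in metres, example only
point_cloud_range = torch.tensor([0.0, -40.0, -3.0])  # (x_min, y_min, z_min), example only

# Hard voxelization emits coordinates as (z, y, x), hence the [2, 1, 0] reorder above.
coors_zyx = torch.tensor([[0, 0, 0], [2, 10, 20]])
centers_xyz = (coors_zyx[:, [2, 1, 0]].float() + 0.5) * voxel_size + point_cloud_range

print(centers_xyz)
# tensor([[  0.0250, -39.9750,  -2.9500],
#         [  1.0250, -39.4750,  -2.7500]])
```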
- """ - feats_dict, voxels_dict = self.extract_feat(points, img_metas) - - losses = dict() - - if self.with_rpn: - rpn_outs = self.rpn_head(feats_dict['neck_feats']) - rpn_loss_inputs = rpn_outs + (gt_bboxes_3d, gt_labels_3d, - img_metas) - rpn_losses = self.rpn_head.loss( - *rpn_loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - losses.update(rpn_losses) - - proposal_cfg = self.train_cfg.get('rpn_proposal', - self.test_cfg.rpn) - proposal_inputs = rpn_outs + (img_metas, proposal_cfg) - proposal_list = self.rpn_head.get_bboxes(*proposal_inputs) - else: - proposal_list = proposals - - roi_losses = self.roi_head.forward_train(feats_dict, voxels_dict, - img_metas, proposal_list, - gt_bboxes_3d, gt_labels_3d) - - losses.update(roi_losses) - - return losses - - def simple_test(self, points, img_metas, proposals=None, rescale=False): - """Test function without augmentaiton.""" - feats_dict, voxels_dict = self.extract_feat(points, img_metas) - - if self.with_rpn: - rpn_outs = self.rpn_head(feats_dict['neck_feats']) - proposal_cfg = self.test_cfg.rpn - bbox_inputs = rpn_outs + (img_metas, proposal_cfg) - proposal_list = self.rpn_head.get_bboxes(*bbox_inputs) - else: - proposal_list = proposals - - return self.roi_head.simple_test(feats_dict, voxels_dict, img_metas, - proposal_list) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/point_rcnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/point_rcnn.py deleted file mode 100644 index 5d5d3d97..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/point_rcnn.py +++ /dev/null @@ -1,150 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import DETECTORS -from .two_stage import TwoStage3DDetector - - -@DETECTORS.register_module() -class PointRCNN(TwoStage3DDetector): - r"""PointRCNN detector. - - Please refer to the `PointRCNN `_ - - Args: - backbone (dict): Config dict of detector's backbone. - neck (dict, optional): Config dict of neck. Defaults to None. - rpn_head (dict, optional): Config of RPN head. Defaults to None. - roi_head (dict, optional): Config of ROI head. Defaults to None. - train_cfg (dict, optional): Train configs. Defaults to None. - test_cfg (dict, optional): Test configs. Defaults to None. - pretrained (str, optional): Model pretrained path. Defaults to None. - init_cfg (dict, optional): Config of initialization. Defaults to None. - """ - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(PointRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def extract_feat(self, points): - """Directly extract features from the backbone+neck. - - Args: - points (torch.Tensor): Input points. - - Returns: - dict: Features from the backbone+neck - """ - x = self.backbone(points) - - if self.with_neck: - x = self.neck(x) - return x - - def forward_train(self, points, img_metas, gt_bboxes_3d, gt_labels_3d): - """Forward of training. - - Args: - points (list[torch.Tensor]): Points of each batch. - img_metas (list[dict]): Meta information of each sample. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): gt bboxes of each batch. 
- gt_labels_3d (list[torch.Tensor]): gt class labels of each batch. - - Returns: - dict: Losses. - """ - losses = dict() - points_cat = torch.stack(points) - x = self.extract_feat(points_cat) - - # features for rcnn - backbone_feats = x['fp_features'].clone() - backbone_xyz = x['fp_xyz'].clone() - rcnn_feats = {'features': backbone_feats, 'points': backbone_xyz} - - bbox_preds, cls_preds = self.rpn_head(x) - - rpn_loss = self.rpn_head.loss( - bbox_preds=bbox_preds, - cls_preds=cls_preds, - points=points, - gt_bboxes_3d=gt_bboxes_3d, - gt_labels_3d=gt_labels_3d, - img_metas=img_metas) - losses.update(rpn_loss) - - bbox_list = self.rpn_head.get_bboxes(points_cat, bbox_preds, cls_preds, - img_metas) - proposal_list = [ - dict( - boxes_3d=bboxes, - scores_3d=scores, - labels_3d=labels, - cls_preds=preds_cls) - for bboxes, scores, labels, preds_cls in bbox_list - ] - rcnn_feats.update({'points_cls_preds': cls_preds}) - - roi_losses = self.roi_head.forward_train(rcnn_feats, img_metas, - proposal_list, gt_bboxes_3d, - gt_labels_3d) - losses.update(roi_losses) - - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Forward of testing. - - Args: - points (list[torch.Tensor]): Points of each sample. - img_metas (list[dict]): Image metas. - imgs (list[torch.Tensor], optional): Images of each sample. - Defaults to None. - rescale (bool, optional): Whether to rescale results. - Defaults to False. - - Returns: - list: Predicted 3d boxes. - """ - points_cat = torch.stack(points) - - x = self.extract_feat(points_cat) - # features for rcnn - backbone_feats = x['fp_features'].clone() - backbone_xyz = x['fp_xyz'].clone() - rcnn_feats = {'features': backbone_feats, 'points': backbone_xyz} - bbox_preds, cls_preds = self.rpn_head(x) - rcnn_feats.update({'points_cls_preds': cls_preds}) - - bbox_list = self.rpn_head.get_bboxes( - points_cat, bbox_preds, cls_preds, img_metas, rescale=rescale) - - proposal_list = [ - dict( - boxes_3d=bboxes, - scores_3d=scores, - labels_3d=labels, - cls_preds=preds_cls) - for bboxes, scores, labels, preds_cls in bbox_list - ] - bbox_results = self.roi_head.simple_test(rcnn_feats, img_metas, - proposal_list) - - return bbox_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/sassd.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/sassd.py deleted file mode 100644 index ab4e777f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/sassd.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import Voxelization -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from mmdet.models.builder import DETECTORS -from .. 
import builder -from .single_stage import SingleStage3DDetector - - -@DETECTORS.register_module() -class SASSD(SingleStage3DDetector): - r"""`SASSD ` _ for 3D detection.""" - - def __init__(self, - voxel_layer, - voxel_encoder, - middle_encoder, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super(SASSD, self).__init__( - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg, - pretrained=pretrained) - - self.voxel_layer = Voxelization(**voxel_layer) - self.voxel_encoder = builder.build_voxel_encoder(voxel_encoder) - self.middle_encoder = builder.build_middle_encoder(middle_encoder) - - def extract_feat(self, points, img_metas=None, test_mode=False): - """Extract features from points.""" - voxels, num_points, coors = self.voxelize(points) - voxel_features = self.voxel_encoder(voxels, num_points, coors) - batch_size = coors[-1, 0].item() + 1 - x, point_misc = self.middle_encoder(voxel_features, coors, batch_size, - test_mode) - x = self.backbone(x) - if self.with_neck: - x = self.neck(x) - return x, point_misc - - @torch.no_grad() - @force_fp32() - def voxelize(self, points): - """Apply hard voxelization to points.""" - voxels, coors, num_points = [], [], [] - for res in points: - res_voxels, res_coors, res_num_points = self.voxel_layer(res) - voxels.append(res_voxels) - coors.append(res_coors) - num_points.append(res_num_points) - voxels = torch.cat(voxels, dim=0) - num_points = torch.cat(num_points, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - return voxels, num_points, coors_batch - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - gt_bboxes_ignore=None): - """Training forward function. - - Args: - points (list[torch.Tensor]): Point cloud of each sample. - img_metas (list[dict]): Meta information of each sample - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - gt_labels_3d (list[torch.Tensor]): Ground truth labels for - boxes of each sampole - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - - Returns: - dict: Losses of each branch. 
- """ - - x, point_misc = self.extract_feat(points, img_metas, test_mode=False) - aux_loss = self.middle_encoder.aux_loss(*point_misc, gt_bboxes_3d) - - outs = self.bbox_head(x) - loss_inputs = outs + (gt_bboxes_3d, gt_labels_3d, img_metas) - losses = self.bbox_head.loss( - *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - losses.update(aux_loss) - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Test function without augmentaiton.""" - x, _ = self.extract_feat(points, img_metas, test_mode=True) - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test function with augmentaiton.""" - feats = self.extract_feats(points, img_metas, test_mode=True) - - # only support aug_test for one sample - aug_bboxes = [] - for x, img_meta in zip(feats, img_metas): - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, img_meta, rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/single_stage.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/single_stage.py deleted file mode 100644 index c0722d73..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/single_stage.py +++ /dev/null @@ -1,73 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import Base3DDetector - - -@DETECTORS.register_module() -class SingleStage3DDetector(Base3DDetector): - """SingleStage3DDetector. - - This class serves as a base class for single-stage 3D detectors. - - Args: - backbone (dict): Config dict of detector's backbone. - neck (dict, optional): Config dict of neck. Defaults to None. - bbox_head (dict, optional): Config dict of box head. Defaults to None. - train_cfg (dict, optional): Config dict of training hyper-parameters. - Defaults to None. - test_cfg (dict, optional): Config dict of test hyper-parameters. - Defaults to None. - pretrained (str, optional): Path of pretrained models. - Defaults to None. - """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super(SingleStage3DDetector, self).__init__(init_cfg) - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def forward_dummy(self, points): - """Used for computing network flops. 
- - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - x = self.extract_feat(points) - try: - sample_mod = self.train_cfg.sample_mod - outs = self.bbox_head(x, sample_mod) - except AttributeError: - outs = self.bbox_head(x) - return outs - - def extract_feat(self, points, img_metas=None): - """Directly extract features from the backbone+neck. - - Args: - points (torch.Tensor): Input points. - """ - x = self.backbone(points) - if self.with_neck: - x = self.neck(x) - return x - - def extract_feats(self, points, img_metas): - """Extract features of multiple samples.""" - return [ - self.extract_feat(pts, img_meta) - for pts, img_meta in zip(points, img_metas) - ] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/single_stage_mono3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/single_stage_mono3d.py deleted file mode 100644 index 812ad277..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/single_stage_mono3d.py +++ /dev/null @@ -1,252 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC - -from mmdet3d.core import (CameraInstance3DBoxes, bbox3d2result, - show_multi_modality_result) -from mmdet.models.detectors import SingleStageDetector -from ..builder import DETECTORS, build_backbone, build_head, build_neck - - -@DETECTORS.register_module() -class SingleStageMono3DDetector(SingleStageDetector): - """Base class for monocular 3D single-stage detectors. - - Single-stage detectors directly and densely predict bounding boxes on the - output features of the backbone+neck. - """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(SingleStageDetector, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feats(self, imgs): - """Directly extract features from the backbone+neck.""" - assert isinstance(imgs, list) - return [self.extract_feat(img) for img in imgs] - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_3d, - gt_labels_3d, - centers2d, - depths, - attr_labels=None, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_3d (list[Tensor]): Each item are the 3D truth boxes for - each image in [x, y, z, x_size, y_size, z_size, yaw, vx, vy] - format. 
- gt_labels_3d (list[Tensor]): 3D class indices corresponding to - each box. - centers2d (list[Tensor]): Projected 3D centers onto 2D images. - depths (list[Tensor]): Depth of projected centers on 2D images. - attr_labels (list[Tensor], optional): Attribute indices - corresponding to each box - gt_bboxes_ignore (list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - x = self.extract_feat(img) - losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes, - gt_labels, gt_bboxes_3d, - gt_labels_3d, centers2d, depths, - attr_labels, gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - bbox_outputs = self.bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - - if self.bbox_head.pred_bbox2d: - from mmdet.core import bbox2result - bbox2d_img = [ - bbox2result(bboxes2d, labels, self.bbox_head.num_classes) - for bboxes, scores, labels, attrs, bboxes2d in bbox_outputs - ] - bbox_outputs = [bbox_outputs[0][:-1]] - - bbox_img = [ - bbox3d2result(bboxes, scores, labels, attrs) - for bboxes, scores, labels, attrs in bbox_outputs - ] - - bbox_list = [dict() for i in range(len(img_metas))] - for result_dict, img_bbox in zip(bbox_list, bbox_img): - result_dict['img_bbox'] = img_bbox - if self.bbox_head.pred_bbox2d: - for result_dict, img_bbox2d in zip(bbox_list, bbox2d_img): - result_dict['img_bbox2d'] = img_bbox2d - return bbox_list - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation.""" - feats = self.extract_feats(imgs) - - # only support aug_test for one sample - outs_list = [self.bbox_head(x) for x in feats] - for i, img_meta in enumerate(img_metas): - if img_meta[0]['pcd_horizontal_flip']: - for j in range(len(outs_list[i])): # for each prediction - if outs_list[i][j][0] is None: - continue - for k in range(len(outs_list[i][j])): - # every stride of featmap - outs_list[i][j][k] = torch.flip( - outs_list[i][j][k], dims=[3]) - reg = outs_list[i][1] - for reg_feat in reg: - # offset_x - reg_feat[:, 0, :, :] = 1 - reg_feat[:, 0, :, :] - # velo_x - if self.bbox_head.pred_velo: - reg_feat[:, 7, :, :] = -reg_feat[:, 7, :, :] - # rotation - reg_feat[:, 6, :, :] = -reg_feat[:, 6, :, :] + np.pi - - merged_outs = [] - for i in range(len(outs_list[0])): # for each prediction - merged_feats = [] - for j in range(len(outs_list[0][i])): - if outs_list[0][i][0] is None: - merged_feats.append(None) - continue - # for each stride of featmap - avg_feats = torch.mean( - torch.cat([x[i][j] for x in outs_list]), - dim=0, - keepdim=True) - if i == 1: # regression predictions - # rot/velo/2d det keeps the original - avg_feats[:, 6:, :, :] = \ - outs_list[0][i][j][:, 6:, :, :] - if i == 2: - # dir_cls keeps the original - avg_feats = outs_list[0][i][j] - merged_feats.append(avg_feats) - merged_outs.append(merged_feats) - merged_outs = tuple(merged_outs) - - bbox_outputs = self.bbox_head.get_bboxes( - *merged_outs, img_metas[0], rescale=rescale) - if 
self.bbox_head.pred_bbox2d: - from mmdet.core import bbox2result - bbox2d_img = [ - bbox2result(bboxes2d, labels, self.bbox_head.num_classes) - for bboxes, scores, labels, attrs, bboxes2d in bbox_outputs - ] - bbox_outputs = [bbox_outputs[0][:-1]] - - bbox_img = [ - bbox3d2result(bboxes, scores, labels, attrs) - for bboxes, scores, labels, attrs in bbox_outputs - ] - - bbox_list = dict() - bbox_list.update(img_bbox=bbox_img[0]) - if self.bbox_head.pred_bbox2d: - bbox_list.update(img_bbox2d=bbox2d_img[0]) - - return [bbox_list] - - def show_results(self, data, result, out_dir, show=False, score_thr=None): - """Results visualization. - - Args: - data (list[dict]): Input images and the information of the sample. - result (list[dict]): Prediction results. - out_dir (str): Output directory of visualization result. - show (bool, optional): Determines whether you are - going to show result by open3d. - Defaults to False. - TODO: implement score_thr of single_stage_mono3d. - score_thr (float, optional): Score threshold of bounding boxes. - Default to None. - Not implemented yet, but it is here for unification. - """ - for batch_id in range(len(result)): - if isinstance(data['img_metas'][0], DC): - img_filename = data['img_metas'][0]._data[0][batch_id][ - 'filename'] - cam2img = data['img_metas'][0]._data[0][batch_id]['cam2img'] - elif mmcv.is_list_of(data['img_metas'][0], dict): - img_filename = data['img_metas'][0][batch_id]['filename'] - cam2img = data['img_metas'][0][batch_id]['cam2img'] - else: - ValueError( - f"Unsupported data type {type(data['img_metas'][0])} " - f'for visualization!') - img = mmcv.imread(img_filename) - file_name = osp.split(img_filename)[-1].split('.')[0] - - assert out_dir is not None, 'Expect out_dir, got none.' - - pred_bboxes = result[batch_id]['img_bbox']['boxes_3d'] - assert isinstance(pred_bboxes, CameraInstance3DBoxes), \ - f'unsupported predicted bbox type {type(pred_bboxes)}' - - show_multi_modality_result( - img, - None, - pred_bboxes, - cam2img, - out_dir, - file_name, - 'camera', - show=show) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/smoke_mono3d.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/smoke_mono3d.py deleted file mode 100644 index b002ff23..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/smoke_mono3d.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage_mono3d import SingleStageMono3DDetector - - -@DETECTORS.register_module() -class SMOKEMono3D(SingleStageMono3DDetector): - r"""SMOKE `_ for monocular 3D object - detection. - - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(SMOKEMono3D, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/ssd3dnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/ssd3dnet.py deleted file mode 100644 index a0cdd974..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/ssd3dnet.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS
-from .votenet import VoteNet
-
-
-@DETECTORS.register_module()
-class SSD3DNet(VoteNet):
-    """3DSSDNet model.
-
-    https://arxiv.org/abs/2002.10187.pdf
-    """
-
-    def __init__(self,
-                 backbone,
-                 bbox_head=None,
-                 train_cfg=None,
-                 test_cfg=None,
-                 init_cfg=None,
-                 pretrained=None):
-        super(SSD3DNet, self).__init__(
-            backbone=backbone,
-            bbox_head=bbox_head,
-            train_cfg=train_cfg,
-            test_cfg=test_cfg,
-            init_cfg=init_cfg,
-            pretrained=pretrained)
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/two_stage.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/two_stage.py
deleted file mode 100644
index 0be085eb..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/two_stage.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-# Copyright (c) OpenMMLab. All rights reserved.
-import warnings
-
-from mmdet.models import TwoStageDetector
-from ..builder import DETECTORS, build_backbone, build_head, build_neck
-from .base import Base3DDetector
-
-
-@DETECTORS.register_module()
-class TwoStage3DDetector(Base3DDetector, TwoStageDetector):
-    """Base class of two-stage 3D detector.
-
-    It inherits original ``:class:TwoStageDetector`` and
-    ``:class:Base3DDetector``. This class could serve as a base class for all
-    two-stage 3D detectors.
-    """
-
-    def __init__(self,
-                 backbone,
-                 neck=None,
-                 rpn_head=None,
-                 roi_head=None,
-                 train_cfg=None,
-                 test_cfg=None,
-                 pretrained=None,
-                 init_cfg=None):
-        super(TwoStageDetector, self).__init__(init_cfg)
-        if pretrained:
-            warnings.warn('DeprecationWarning: pretrained is deprecated, '
-                          'please use "init_cfg" instead')
-            backbone.pretrained = pretrained
-        self.backbone = build_backbone(backbone)
-        self.train_cfg = train_cfg
-        self.test_cfg = test_cfg
-        if neck is not None:
-            self.neck = build_neck(neck)
-
-        if rpn_head is not None:
-            rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None
-            rpn_head_ = rpn_head.copy()
-            rpn_head_.update(train_cfg=rpn_train_cfg, test_cfg=test_cfg.rpn)
-            self.rpn_head = build_head(rpn_head_)
-
-        if roi_head is not None:
-            # update train and test cfg here for now
-            # TODO: refactor assigner & sampler
-            rcnn_train_cfg = train_cfg.rcnn if train_cfg is not None else None
-            roi_head.update(train_cfg=rcnn_train_cfg)
-            roi_head.update(test_cfg=test_cfg.rcnn)
-            roi_head.pretrained = pretrained
-            self.roi_head = build_head(roi_head)
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/votenet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/votenet.py
deleted file mode 100644
index f3f6c0f2..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/votenet.py
+++ /dev/null
@@ -1,109 +0,0 @@
-# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from ..builder import DETECTORS -from .single_stage import SingleStage3DDetector - - -@DETECTORS.register_module() -class VoteNet(SingleStage3DDetector): - r"""`VoteNet `_ for 3D detection.""" - - def __init__(self, - backbone, - bbox_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super(VoteNet, self).__init__( - backbone=backbone, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=None, - pretrained=pretrained) - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - gt_bboxes_ignore=None): - """Forward of training. - - Args: - points (list[torch.Tensor]): Points of each batch. - img_metas (list): Image metas. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): gt bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): gt class labels of each batch. - pts_semantic_mask (list[torch.Tensor]): point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): point-wise instance - label of each batch. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict: Losses. - """ - points_cat = torch.stack(points) - - x = self.extract_feat(points_cat) - bbox_preds = self.bbox_head(x, self.train_cfg.sample_mod) - loss_inputs = (points, gt_bboxes_3d, gt_labels_3d, pts_semantic_mask, - pts_instance_mask, img_metas) - losses = self.bbox_head.loss( - bbox_preds, *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Forward of testing. - - Args: - points (list[torch.Tensor]): Points of each sample. - img_metas (list): Image metas. - rescale (bool): Whether to rescale results. - - Returns: - list: Predicted 3d boxes. - """ - points_cat = torch.stack(points) - - x = self.extract_feat(points_cat) - bbox_preds = self.bbox_head(x, self.test_cfg.sample_mod) - bbox_list = self.bbox_head.get_bboxes( - points_cat, bbox_preds, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test with augmentation.""" - points_cat = [torch.stack(pts) for pts in points] - feats = self.extract_feats(points_cat, img_metas) - - # only support aug_test for one sample - aug_bboxes = [] - for x, pts_cat, img_meta in zip(feats, points_cat, img_metas): - bbox_preds = self.bbox_head(x, self.test_cfg.sample_mod) - bbox_list = self.bbox_head.get_bboxes( - pts_cat, bbox_preds, img_meta, rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/voxelnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/voxelnet.py deleted file mode 100644 index d6e67005..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/detectors/voxelnet.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. 
All rights reserved. -import torch -from mmcv.ops import Voxelization -from mmcv.runner import force_fp32 -from torch.nn import functional as F - -from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d -from .. import builder -from ..builder import DETECTORS -from .single_stage import SingleStage3DDetector - - -@DETECTORS.register_module() -class VoxelNet(SingleStage3DDetector): - r"""`VoxelNet `_ for 3D detection.""" - - def __init__(self, - voxel_layer, - voxel_encoder, - middle_encoder, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - pretrained=None): - super(VoxelNet, self).__init__( - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg, - pretrained=pretrained) - self.voxel_layer = Voxelization(**voxel_layer) - self.voxel_encoder = builder.build_voxel_encoder(voxel_encoder) - self.middle_encoder = builder.build_middle_encoder(middle_encoder) - - def extract_feat(self, points, img_metas=None): - """Extract features from points.""" - voxels, num_points, coors = self.voxelize(points) - voxel_features = self.voxel_encoder(voxels, num_points, coors) - batch_size = coors[-1, 0].item() + 1 - x = self.middle_encoder(voxel_features, coors, batch_size) - x = self.backbone(x) - if self.with_neck: - x = self.neck(x) - return x - - @torch.no_grad() - @force_fp32() - def voxelize(self, points): - """Apply hard voxelization to points.""" - voxels, coors, num_points = [], [], [] - for res in points: - res_voxels, res_coors, res_num_points = self.voxel_layer(res) - voxels.append(res_voxels) - coors.append(res_coors) - num_points.append(res_num_points) - voxels = torch.cat(voxels, dim=0) - num_points = torch.cat(num_points, dim=0) - coors_batch = [] - for i, coor in enumerate(coors): - coor_pad = F.pad(coor, (1, 0), mode='constant', value=i) - coors_batch.append(coor_pad) - coors_batch = torch.cat(coors_batch, dim=0) - return voxels, num_points, coors_batch - - def forward_train(self, - points, - img_metas, - gt_bboxes_3d, - gt_labels_3d, - gt_bboxes_ignore=None): - """Training forward function. - - Args: - points (list[torch.Tensor]): Point cloud of each sample. - img_metas (list[dict]): Meta information of each sample - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - gt_labels_3d (list[torch.Tensor]): Ground truth labels for - boxes of each sampole - gt_bboxes_ignore (list[torch.Tensor], optional): Ground truth - boxes to be ignored. Defaults to None. - - Returns: - dict: Losses of each branch. 
- """ - x = self.extract_feat(points, img_metas) - outs = self.bbox_head(x) - loss_inputs = outs + (gt_bboxes_3d, gt_labels_3d, img_metas) - losses = self.bbox_head.loss( - *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - def simple_test(self, points, img_metas, imgs=None, rescale=False): - """Test function without augmentaiton.""" - x = self.extract_feat(points, img_metas) - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def aug_test(self, points, img_metas, imgs=None, rescale=False): - """Test function with augmentaiton.""" - feats = self.extract_feats(points, img_metas) - - # only support aug_test for one sample - aug_bboxes = [] - for x, img_meta in zip(feats, img_metas): - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, img_meta, rescale=rescale) - bbox_list = [ - dict(boxes_3d=bboxes, scores_3d=scores, labels_3d=labels) - for bboxes, scores, labels in bbox_list - ] - aug_bboxes.append(bbox_list[0]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes = merge_aug_bboxes_3d(aug_bboxes, img_metas, - self.bbox_head.test_cfg) - - return [merged_bboxes] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/__init__.py deleted file mode 100644 index 6df4741d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .coord_transform import (apply_3d_transformation, bbox_2d_transform, - coord_2d_transform) -from .point_fusion import PointFusion -from .vote_fusion import VoteFusion - -__all__ = [ - 'PointFusion', 'VoteFusion', 'apply_3d_transformation', - 'bbox_2d_transform', 'coord_2d_transform' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/coord_transform.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/coord_transform.py deleted file mode 100644 index 9c6929b0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/coord_transform.py +++ /dev/null @@ -1,222 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -from functools import partial - -import torch - -from mmdet3d.core.points import get_points_type - - -def apply_3d_transformation(pcd, coord_type, img_meta, reverse=False): - """Apply transformation to input point cloud. - - Args: - pcd (torch.Tensor): The point cloud to be transformed. - coord_type (str): 'DEPTH' or 'CAMERA' or 'LIDAR'. - img_meta(dict): Meta info regarding data transformation. - reverse (bool): Reversed transformation or not. - - Note: - The elements in img_meta['transformation_3d_flow']: - "T" stands for translation; - "S" stands for scale; - "R" stands for rotation; - "HF" stands for horizontal flip; - "VF" stands for vertical flip. - - Returns: - torch.Tensor: The transformed point cloud. 
- """ - - dtype = pcd.dtype - device = pcd.device - - pcd_rotate_mat = ( - torch.tensor(img_meta['pcd_rotation'], dtype=dtype, device=device) - if 'pcd_rotation' in img_meta else torch.eye( - 3, dtype=dtype, device=device)) - - pcd_scale_factor = ( - img_meta['pcd_scale_factor'] if 'pcd_scale_factor' in img_meta else 1.) - - pcd_trans_factor = ( - torch.tensor(img_meta['pcd_trans'], dtype=dtype, device=device) - if 'pcd_trans' in img_meta else torch.zeros( - (3), dtype=dtype, device=device)) - - pcd_horizontal_flip = img_meta[ - 'pcd_horizontal_flip'] if 'pcd_horizontal_flip' in \ - img_meta else False - - pcd_vertical_flip = img_meta[ - 'pcd_vertical_flip'] if 'pcd_vertical_flip' in \ - img_meta else False - - flow = img_meta['transformation_3d_flow'] \ - if 'transformation_3d_flow' in img_meta else [] - - pcd = pcd.clone() # prevent inplace modification - pcd = get_points_type(coord_type)(pcd) - - horizontal_flip_func = partial(pcd.flip, bev_direction='horizontal') \ - if pcd_horizontal_flip else lambda: None - vertical_flip_func = partial(pcd.flip, bev_direction='vertical') \ - if pcd_vertical_flip else lambda: None - if reverse: - scale_func = partial(pcd.scale, scale_factor=1.0 / pcd_scale_factor) - translate_func = partial(pcd.translate, trans_vector=-pcd_trans_factor) - # pcd_rotate_mat @ pcd_rotate_mat.inverse() is not - # exactly an identity matrix - # use angle to create the inverse rot matrix neither. - # rotate_func = partial(pcd.rotate, rotation=pcd_rotate_mat.inverse()) - - device = pcd_rotate_mat.device - rotation_ = pcd_rotate_mat.cpu().inverse().to(device) - rotate_func = partial(pcd.rotate, rotation=rotation_) - - # reverse the pipeline - flow = flow[::-1] - else: - scale_func = partial(pcd.scale, scale_factor=pcd_scale_factor) - translate_func = partial(pcd.translate, trans_vector=pcd_trans_factor) - rotate_func = partial(pcd.rotate, rotation=pcd_rotate_mat) - - flow_mapping = { - 'T': translate_func, - 'S': scale_func, - 'R': rotate_func, - 'HF': horizontal_flip_func, - 'VF': vertical_flip_func - } - for op in flow: - assert op in flow_mapping, f'This 3D data '\ - f'transformation op ({op}) is not supported' - func = flow_mapping[op] - func() - - return pcd.coord - - -def extract_2d_info(img_meta, tensor): - """Extract image augmentation information from img_meta. - - Args: - img_meta(dict): Meta info regarding data transformation. - tensor(torch.Tensor): Input tensor used to create new ones. - - Returns: - (int, int, int, int, torch.Tensor, bool, torch.Tensor): - The extracted information. - """ - img_shape = img_meta['img_shape'] - ori_shape = img_meta['ori_shape'] - img_h, img_w, _ = img_shape - ori_h, ori_w, _ = ori_shape - - img_scale_factor = ( - tensor.new_tensor(img_meta['scale_factor'][:2]) - if 'scale_factor' in img_meta else tensor.new_tensor([1.0, 1.0])) - img_flip = img_meta['flip'] if 'flip' in img_meta else False - img_crop_offset = ( - tensor.new_tensor(img_meta['img_crop_offset']) - if 'img_crop_offset' in img_meta else tensor.new_tensor([0.0, 0.0])) - - return (img_h, img_w, ori_h, ori_w, img_scale_factor, img_flip, - img_crop_offset) - - -def bbox_2d_transform(img_meta, bbox_2d, ori2new): - """Transform 2d bbox according to img_meta. - - Args: - img_meta(dict): Meta info regarding data transformation. - bbox_2d (torch.Tensor): Shape (..., >4) - The input 2d bboxes to transform. - ori2new (bool): Origin img coord system to new or not. - - Returns: - torch.Tensor: The transformed 2d bboxes. 
- """ - - img_h, img_w, ori_h, ori_w, img_scale_factor, img_flip, \ - img_crop_offset = extract_2d_info(img_meta, bbox_2d) - - bbox_2d_new = bbox_2d.clone() - - if ori2new: - bbox_2d_new[:, 0] = bbox_2d_new[:, 0] * img_scale_factor[0] - bbox_2d_new[:, 2] = bbox_2d_new[:, 2] * img_scale_factor[0] - bbox_2d_new[:, 1] = bbox_2d_new[:, 1] * img_scale_factor[1] - bbox_2d_new[:, 3] = bbox_2d_new[:, 3] * img_scale_factor[1] - - bbox_2d_new[:, 0] = bbox_2d_new[:, 0] + img_crop_offset[0] - bbox_2d_new[:, 2] = bbox_2d_new[:, 2] + img_crop_offset[0] - bbox_2d_new[:, 1] = bbox_2d_new[:, 1] + img_crop_offset[1] - bbox_2d_new[:, 3] = bbox_2d_new[:, 3] + img_crop_offset[1] - - if img_flip: - bbox_2d_r = img_w - bbox_2d_new[:, 0] - bbox_2d_l = img_w - bbox_2d_new[:, 2] - bbox_2d_new[:, 0] = bbox_2d_l - bbox_2d_new[:, 2] = bbox_2d_r - else: - if img_flip: - bbox_2d_r = img_w - bbox_2d_new[:, 0] - bbox_2d_l = img_w - bbox_2d_new[:, 2] - bbox_2d_new[:, 0] = bbox_2d_l - bbox_2d_new[:, 2] = bbox_2d_r - - bbox_2d_new[:, 0] = bbox_2d_new[:, 0] - img_crop_offset[0] - bbox_2d_new[:, 2] = bbox_2d_new[:, 2] - img_crop_offset[0] - bbox_2d_new[:, 1] = bbox_2d_new[:, 1] - img_crop_offset[1] - bbox_2d_new[:, 3] = bbox_2d_new[:, 3] - img_crop_offset[1] - - bbox_2d_new[:, 0] = bbox_2d_new[:, 0] / img_scale_factor[0] - bbox_2d_new[:, 2] = bbox_2d_new[:, 2] / img_scale_factor[0] - bbox_2d_new[:, 1] = bbox_2d_new[:, 1] / img_scale_factor[1] - bbox_2d_new[:, 3] = bbox_2d_new[:, 3] / img_scale_factor[1] - - return bbox_2d_new - - -def coord_2d_transform(img_meta, coord_2d, ori2new): - """Transform 2d pixel coordinates according to img_meta. - - Args: - img_meta(dict): Meta info regarding data transformation. - coord_2d (torch.Tensor): Shape (..., 2) - The input 2d coords to transform. - ori2new (bool): Origin img coord system to new or not. - - Returns: - torch.Tensor: The transformed 2d coordinates. - """ - - img_h, img_w, ori_h, ori_w, img_scale_factor, img_flip, \ - img_crop_offset = extract_2d_info(img_meta, coord_2d) - - coord_2d_new = coord_2d.clone() - - if ori2new: - # TODO here we assume this order of transformation - coord_2d_new[..., 0] = coord_2d_new[..., 0] * img_scale_factor[0] - coord_2d_new[..., 1] = coord_2d_new[..., 1] * img_scale_factor[1] - - coord_2d_new[..., 0] += img_crop_offset[0] - coord_2d_new[..., 1] += img_crop_offset[1] - - # flip uv coordinates and bbox - if img_flip: - coord_2d_new[..., 0] = img_w - coord_2d_new[..., 0] - else: - if img_flip: - coord_2d_new[..., 0] = img_w - coord_2d_new[..., 0] - - coord_2d_new[..., 0] -= img_crop_offset[0] - coord_2d_new[..., 1] -= img_crop_offset[1] - - coord_2d_new[..., 0] = coord_2d_new[..., 0] / img_scale_factor[0] - coord_2d_new[..., 1] = coord_2d_new[..., 1] / img_scale_factor[1] - - return coord_2d_new diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/point_fusion.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/point_fusion.py deleted file mode 100644 index 97b41777..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/point_fusion.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.core.bbox.structures import (get_proj_mat_by_coord_type, - points_cam2img) -from ..builder import FUSION_LAYERS -from . 
import apply_3d_transformation - - -def point_sample(img_meta, - img_features, - points, - proj_mat, - coord_type, - img_scale_factor, - img_crop_offset, - img_flip, - img_pad_shape, - img_shape, - aligned=True, - padding_mode='zeros', - align_corners=True): - """Obtain image features using points. - - Args: - img_meta (dict): Meta info. - img_features (torch.Tensor): 1 x C x H x W image features. - points (torch.Tensor): Nx3 point cloud in LiDAR coordinates. - proj_mat (torch.Tensor): 4x4 transformation matrix. - coord_type (str): 'DEPTH' or 'CAMERA' or 'LIDAR'. - img_scale_factor (torch.Tensor): Scale factor with shape of - (w_scale, h_scale). - img_crop_offset (torch.Tensor): Crop offset used to crop - image during data augmentation with shape of (w_offset, h_offset). - img_flip (bool): Whether the image is flipped. - img_pad_shape (tuple[int]): int tuple indicates the h & w after - padding, this is necessary to obtain features in feature map. - img_shape (tuple[int]): int tuple indicates the h & w before padding - after scaling, this is necessary for flipping coordinates. - aligned (bool, optional): Whether use bilinear interpolation when - sampling image features for each point. Defaults to True. - padding_mode (str, optional): Padding mode when padding values for - features of out-of-image points. Defaults to 'zeros'. - align_corners (bool, optional): Whether to align corners when - sampling image features for each point. Defaults to True. - - Returns: - torch.Tensor: NxC image features sampled by point coordinates. - """ - - # apply transformation based on info in img_meta - points = apply_3d_transformation( - points, coord_type, img_meta, reverse=True) - - # project points to camera coordinate - pts_2d = points_cam2img(points, proj_mat) - - # img transformation: scale -> crop -> flip - # the image is resized by img_scale_factor - img_coors = pts_2d[:, 0:2] * img_scale_factor # Nx2 - img_coors -= img_crop_offset - - # grid sample, the valid grid range should be in [-1,1] - coor_x, coor_y = torch.split(img_coors, 1, dim=1) # each is Nx1 - - if img_flip: - # by default we take it as horizontal flip - # use img_shape before padding for flip - orig_h, orig_w = img_shape - coor_x = orig_w - coor_x - - h, w = img_pad_shape - coor_y = coor_y / h * 2 - 1 - coor_x = coor_x / w * 2 - 1 - grid = torch.cat([coor_x, coor_y], - dim=1).unsqueeze(0).unsqueeze(0) # Nx2 -> 1x1xNx2 - - # align_corner=True provides higher performance - mode = 'bilinear' if aligned else 'nearest' - point_features = F.grid_sample( - img_features, - grid, - mode=mode, - padding_mode=padding_mode, - align_corners=align_corners) # 1xCx1xN feats - - return point_features.squeeze().t() - - -@FUSION_LAYERS.register_module() -class PointFusion(BaseModule): - """Fuse image features from multi-scale features. - - Args: - img_channels (list[int] | int): Channels of image features. - It could be a list if the input is multi-scale image features. - pts_channels (int): Channels of point features - mid_channels (int): Channels of middle layers - out_channels (int): Channels of output fused features - img_levels (int, optional): Number of image levels. Defaults to 3. - coord_type (str): 'DEPTH' or 'CAMERA' or 'LIDAR'. - Defaults to 'LIDAR'. - conv_cfg (dict, optional): Dict config of conv layers of middle - layers. Defaults to None. - norm_cfg (dict, optional): Dict config of norm layers of middle - layers. Defaults to None. - act_cfg (dict, optional): Dict config of activatation layers. - Defaults to None. 
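Aside (not part of the patch): the core of `point_sample` above is mapping projected pixel coordinates into the [-1, 1] grid that `F.grid_sample` expects, then sampling one feature vector per point. A self-contained sketch with a toy feature map:

```python
import torch
import torch.nn.functional as F

feat = torch.arange(2 * 4 * 6, dtype=torch.float32).view(1, 2, 4, 6)  # 1 x C x H x W
pix = torch.tensor([[0.5, 0.5], [5.5, 3.5]])                          # projected (u, v) pixels
h, w = feat.shape[-2:]

# Normalise pixel coordinates to the [-1, 1] range used by grid_sample,
# following the same convention as the helper above.
coor_x = pix[:, 0:1] / w * 2 - 1
coor_y = pix[:, 1:2] / h * 2 - 1
grid = torch.cat([coor_x, coor_y], dim=1).unsqueeze(0).unsqueeze(0)   # 1 x 1 x N x 2

sampled = F.grid_sample(feat, grid, mode='bilinear',
                        padding_mode='zeros', align_corners=True)      # 1 x C x 1 x N
print(sampled.squeeze().t())  # one C-dimensional feature row per input point
```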
- activate_out (bool, optional): Whether to apply relu activation - to output features. Defaults to True. - fuse_out (bool, optional): Whether apply conv layer to the fused - features. Defaults to False. - dropout_ratio (int, float, optional): Dropout ratio of image - features to prevent overfitting. Defaults to 0. - aligned (bool, optional): Whether apply aligned feature fusion. - Defaults to True. - align_corners (bool, optional): Whether to align corner when - sampling features according to points. Defaults to True. - padding_mode (str, optional): Mode used to pad the features of - points that do not have corresponding image features. - Defaults to 'zeros'. - lateral_conv (bool, optional): Whether to apply lateral convs - to image features. Defaults to True. - """ - - def __init__(self, - img_channels, - pts_channels, - mid_channels, - out_channels, - img_levels=3, - coord_type='LIDAR', - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - init_cfg=None, - activate_out=True, - fuse_out=False, - dropout_ratio=0, - aligned=True, - align_corners=True, - padding_mode='zeros', - lateral_conv=True): - super(PointFusion, self).__init__(init_cfg=init_cfg) - if isinstance(img_levels, int): - img_levels = [img_levels] - if isinstance(img_channels, int): - img_channels = [img_channels] * len(img_levels) - assert isinstance(img_levels, list) - assert isinstance(img_channels, list) - assert len(img_channels) == len(img_levels) - - self.img_levels = img_levels - self.coord_type = coord_type - self.act_cfg = act_cfg - self.activate_out = activate_out - self.fuse_out = fuse_out - self.dropout_ratio = dropout_ratio - self.img_channels = img_channels - self.aligned = aligned - self.align_corners = align_corners - self.padding_mode = padding_mode - - self.lateral_convs = None - if lateral_conv: - self.lateral_convs = nn.ModuleList() - for i in range(len(img_channels)): - l_conv = ConvModule( - img_channels[i], - mid_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=self.act_cfg, - inplace=False) - self.lateral_convs.append(l_conv) - self.img_transform = nn.Sequential( - nn.Linear(mid_channels * len(img_channels), out_channels), - nn.BatchNorm1d(out_channels, eps=1e-3, momentum=0.01), - ) - else: - self.img_transform = nn.Sequential( - nn.Linear(sum(img_channels), out_channels), - nn.BatchNorm1d(out_channels, eps=1e-3, momentum=0.01), - ) - self.pts_transform = nn.Sequential( - nn.Linear(pts_channels, out_channels), - nn.BatchNorm1d(out_channels, eps=1e-3, momentum=0.01), - ) - - if self.fuse_out: - self.fuse_conv = nn.Sequential( - nn.Linear(mid_channels, out_channels), - # For pts the BN is initialized differently by default - # TODO: check whether this is necessary - nn.BatchNorm1d(out_channels, eps=1e-3, momentum=0.01), - nn.ReLU(inplace=False)) - - if init_cfg is None: - self.init_cfg = [ - dict(type='Xavier', layer='Conv2d', distribution='uniform'), - dict(type='Xavier', layer='Linear', distribution='uniform') - ] - - def forward(self, img_feats, pts, pts_feats, img_metas): - """Forward function. - - Args: - img_feats (list[torch.Tensor]): Image features. - pts: [list[torch.Tensor]]: A batch of points with shape N x 3. - pts_feats (torch.Tensor): A tensor consist of point features of the - total batch. - img_metas (list[dict]): Meta information of images. - - Returns: - torch.Tensor: Fused features of each point. 
- """ - img_pts = self.obtain_mlvl_feats(img_feats, pts, img_metas) - img_pre_fuse = self.img_transform(img_pts) - if self.training and self.dropout_ratio > 0: - img_pre_fuse = F.dropout(img_pre_fuse, self.dropout_ratio) - pts_pre_fuse = self.pts_transform(pts_feats) - - fuse_out = img_pre_fuse + pts_pre_fuse - if self.activate_out: - fuse_out = F.relu(fuse_out) - if self.fuse_out: - fuse_out = self.fuse_conv(fuse_out) - - return fuse_out - - def obtain_mlvl_feats(self, img_feats, pts, img_metas): - """Obtain multi-level features for each point. - - Args: - img_feats (list(torch.Tensor)): Multi-scale image features produced - by image backbone in shape (N, C, H, W). - pts (list[torch.Tensor]): Points of each sample. - img_metas (list[dict]): Meta information for each sample. - - Returns: - torch.Tensor: Corresponding image features of each point. - """ - if self.lateral_convs is not None: - img_ins = [ - lateral_conv(img_feats[i]) - for i, lateral_conv in zip(self.img_levels, self.lateral_convs) - ] - else: - img_ins = img_feats - img_feats_per_point = [] - # Sample multi-level features - for i in range(len(img_metas)): - mlvl_img_feats = [] - for level in range(len(self.img_levels)): - mlvl_img_feats.append( - self.sample_single(img_ins[level][i:i + 1], pts[i][:, :3], - img_metas[i])) - mlvl_img_feats = torch.cat(mlvl_img_feats, dim=-1) - img_feats_per_point.append(mlvl_img_feats) - - img_pts = torch.cat(img_feats_per_point, dim=0) - return img_pts - - def sample_single(self, img_feats, pts, img_meta): - """Sample features from single level image feature map. - - Args: - img_feats (torch.Tensor): Image feature map in shape - (1, C, H, W). - pts (torch.Tensor): Points of a single sample. - img_meta (dict): Meta information of the single sample. - - Returns: - torch.Tensor: Single level image features of each point. - """ - # TODO: image transformation also extracted - img_scale_factor = ( - pts.new_tensor(img_meta['scale_factor'][:2]) - if 'scale_factor' in img_meta.keys() else 1) - img_flip = img_meta['flip'] if 'flip' in img_meta.keys() else False - img_crop_offset = ( - pts.new_tensor(img_meta['img_crop_offset']) - if 'img_crop_offset' in img_meta.keys() else 0) - proj_mat = get_proj_mat_by_coord_type(img_meta, self.coord_type) - img_pts = point_sample( - img_meta=img_meta, - img_features=img_feats, - points=pts, - proj_mat=pts.new_tensor(proj_mat), - coord_type=self.coord_type, - img_scale_factor=img_scale_factor, - img_crop_offset=img_crop_offset, - img_flip=img_flip, - img_pad_shape=img_meta['input_shape'][:2], - img_shape=img_meta['img_shape'][:2], - aligned=self.aligned, - padding_mode=self.padding_mode, - align_corners=self.align_corners, - ) - return img_pts diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/vote_fusion.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/vote_fusion.py deleted file mode 100644 index 3633e4d2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/fusion_layers/vote_fusion.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn - -from mmdet3d.core.bbox import points_cam2img -from ..builder import FUSION_LAYERS -from . import apply_3d_transformation, bbox_2d_transform, coord_2d_transform - -EPS = 1e-6 - - -@FUSION_LAYERS.register_module() -class VoteFusion(nn.Module): - """Fuse 2d features from 3d seeds. - - Args: - num_classes (int): number of classes. 
- max_imvote_per_pixel (int): max number of imvotes. - """ - - def __init__(self, num_classes=10, max_imvote_per_pixel=3): - super(VoteFusion, self).__init__() - self.num_classes = num_classes - self.max_imvote_per_pixel = max_imvote_per_pixel - - def forward(self, imgs, bboxes_2d_rescaled, seeds_3d_depth, img_metas): - """Forward function. - - Args: - imgs (list[torch.Tensor]): Image features. - bboxes_2d_rescaled (list[torch.Tensor]): 2D bboxes. - seeds_3d_depth (torch.Tensor): 3D seeds. - img_metas (list[dict]): Meta information of images. - - Returns: - torch.Tensor: Concatenated cues of each point. - torch.Tensor: Validity mask of each feature. - """ - img_features = [] - masks = [] - for i, data in enumerate( - zip(imgs, bboxes_2d_rescaled, seeds_3d_depth, img_metas)): - img, bbox_2d_rescaled, seed_3d_depth, img_meta = data - bbox_num = bbox_2d_rescaled.shape[0] - seed_num = seed_3d_depth.shape[0] - - img_shape = img_meta['img_shape'] - img_h, img_w, _ = img_shape - - # first reverse the data transformations - xyz_depth = apply_3d_transformation( - seed_3d_depth, 'DEPTH', img_meta, reverse=True) - - # project points from depth to image - depth2img = xyz_depth.new_tensor(img_meta['depth2img']) - uvz_origin = points_cam2img(xyz_depth, depth2img, True) - z_cam = uvz_origin[..., 2] - uv_origin = (uvz_origin[..., :2] - 1).round() - - # rescale 2d coordinates and bboxes - uv_rescaled = coord_2d_transform(img_meta, uv_origin, True) - bbox_2d_origin = bbox_2d_transform(img_meta, bbox_2d_rescaled, - False) - - if bbox_num == 0: - imvote_num = seed_num * self.max_imvote_per_pixel - - # use zero features - two_cues = torch.zeros((15, imvote_num), - device=seed_3d_depth.device) - mask_zero = torch.zeros( - imvote_num - seed_num, device=seed_3d_depth.device).bool() - mask_one = torch.ones( - seed_num, device=seed_3d_depth.device).bool() - mask = torch.cat([mask_one, mask_zero], dim=0) - else: - # expand bboxes and seeds - bbox_expanded = bbox_2d_origin.view(1, bbox_num, -1).expand( - seed_num, -1, -1) - seed_2d_expanded = uv_origin.view(seed_num, 1, - -1).expand(-1, bbox_num, -1) - seed_2d_expanded_x, seed_2d_expanded_y = \ - seed_2d_expanded.split(1, dim=-1) - - bbox_expanded_l, bbox_expanded_t, bbox_expanded_r, \ - bbox_expanded_b, bbox_expanded_conf, bbox_expanded_cls = \ - bbox_expanded.split(1, dim=-1) - bbox_expanded_midx = (bbox_expanded_l + bbox_expanded_r) / 2 - bbox_expanded_midy = (bbox_expanded_t + bbox_expanded_b) / 2 - - seed_2d_in_bbox_x = (seed_2d_expanded_x > bbox_expanded_l) * \ - (seed_2d_expanded_x < bbox_expanded_r) - seed_2d_in_bbox_y = (seed_2d_expanded_y > bbox_expanded_t) * \ - (seed_2d_expanded_y < bbox_expanded_b) - seed_2d_in_bbox = seed_2d_in_bbox_x * seed_2d_in_bbox_y - - # semantic cues, dim=class_num - sem_cue = torch.zeros_like(bbox_expanded_conf).expand( - -1, -1, self.num_classes) - sem_cue = sem_cue.scatter(-1, bbox_expanded_cls.long(), - bbox_expanded_conf) - - # bbox center - uv - delta_u = bbox_expanded_midx - seed_2d_expanded_x - delta_v = bbox_expanded_midy - seed_2d_expanded_y - - seed_3d_expanded = seed_3d_depth.view(seed_num, 1, -1).expand( - -1, bbox_num, -1) - - z_cam = z_cam.view(seed_num, 1, 1).expand(-1, bbox_num, -1) - imvote = torch.cat( - [delta_u, delta_v, - torch.zeros_like(delta_v)], dim=-1).view(-1, 3) - imvote = imvote * z_cam.reshape(-1, 1) - imvote = imvote @ torch.inverse(depth2img.t()) - - # apply transformation to lifted imvotes - imvote = apply_3d_transformation( - imvote, 'DEPTH', img_meta, reverse=False) - - seed_3d_expanded = 
seed_3d_expanded.reshape(imvote.shape) - - # ray angle - ray_angle = seed_3d_expanded + imvote - ray_angle /= torch.sqrt(torch.sum(ray_angle**2, -1) + - EPS).unsqueeze(-1) - - # imvote lifted to 3d - xz = ray_angle[:, [0, 2]] / (ray_angle[:, [1]] + EPS) \ - * seed_3d_expanded[:, [1]] - seed_3d_expanded[:, [0, 2]] - - # geometric cues, dim=5 - geo_cue = torch.cat([xz, ray_angle], - dim=-1).view(seed_num, -1, 5) - - two_cues = torch.cat([geo_cue, sem_cue], dim=-1) - # mask to 0 if seed not in bbox - two_cues = two_cues * seed_2d_in_bbox.float() - - feature_size = two_cues.shape[-1] - # if bbox number is too small, append zeros - if bbox_num < self.max_imvote_per_pixel: - append_num = self.max_imvote_per_pixel - bbox_num - append_zeros = torch.zeros( - (seed_num, append_num, 1), - device=seed_2d_in_bbox.device).bool() - seed_2d_in_bbox = torch.cat( - [seed_2d_in_bbox, append_zeros], dim=1) - append_zeros = torch.zeros( - (seed_num, append_num, feature_size), - device=two_cues.device) - two_cues = torch.cat([two_cues, append_zeros], dim=1) - append_zeros = torch.zeros((seed_num, append_num, 1), - device=two_cues.device) - bbox_expanded_conf = torch.cat( - [bbox_expanded_conf, append_zeros], dim=1) - - # sort the valid seed-bbox pair according to confidence - pair_score = seed_2d_in_bbox.float() + bbox_expanded_conf - # and find the largests - mask, indices = pair_score.topk( - self.max_imvote_per_pixel, - dim=1, - largest=True, - sorted=True) - - indices_img = indices.expand(-1, -1, feature_size) - two_cues = two_cues.gather(dim=1, index=indices_img) - two_cues = two_cues.transpose(1, 0) - two_cues = two_cues.reshape(-1, feature_size).transpose( - 1, 0).contiguous() - - # since conf is ~ (0, 1), floor gives us validity - mask = mask.floor().int() - mask = mask.transpose(1, 0).reshape(-1).bool() - - # clear the padding - img = img[:, :img_shape[0], :img_shape[1]] - img_flatten = img.reshape(3, -1).float() - img_flatten /= 255. - - # take the normalized pixel value as texture cue - uv_rescaled[:, 0] = torch.clamp(uv_rescaled[:, 0].round(), 0, - img_shape[1] - 1) - uv_rescaled[:, 1] = torch.clamp(uv_rescaled[:, 1].round(), 0, - img_shape[0] - 1) - uv_flatten = uv_rescaled[:, 1].round() * \ - img_shape[1] + uv_rescaled[:, 0].round() - uv_expanded = uv_flatten.unsqueeze(0).expand(3, -1).long() - txt_cue = torch.gather(img_flatten, dim=-1, index=uv_expanded) - txt_cue = txt_cue.unsqueeze(1).expand(-1, - self.max_imvote_per_pixel, - -1).reshape(3, -1) - - # append texture cue - img_feature = torch.cat([two_cues, txt_cue], dim=0) - img_features.append(img_feature) - masks.append(mask) - - return torch.stack(img_features, 0), torch.stack(masks, 0) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/__init__.py deleted file mode 100644 index f6da379d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.models.losses import FocalLoss, SmoothL1Loss, binary_cross_entropy -from .axis_aligned_iou_loss import AxisAlignedIoULoss, axis_aligned_iou_loss -from .chamfer_distance import ChamferDistance, chamfer_distance -from .multibin_loss import MultiBinLoss -from .paconv_regularization_loss import PAConvRegularizationLoss -from .uncertain_smooth_l1_loss import UncertainL1Loss, UncertainSmoothL1Loss - -__all__ = [ - 'FocalLoss', 'SmoothL1Loss', 'binary_cross_entropy', 'ChamferDistance', - 'chamfer_distance', 'axis_aligned_iou_loss', 'AxisAlignedIoULoss', - 'PAConvRegularizationLoss', 'UncertainL1Loss', 'UncertainSmoothL1Loss', - 'MultiBinLoss' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/axis_aligned_iou_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/axis_aligned_iou_loss.py deleted file mode 100644 index 1f861aab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/axis_aligned_iou_loss.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn - -from mmdet.models.losses.utils import weighted_loss -from ...core.bbox import AxisAlignedBboxOverlaps3D -from ..builder import LOSSES - - -@weighted_loss -def axis_aligned_iou_loss(pred, target): - """Calculate the IoU loss (1-IoU) of two set of axis aligned bounding - boxes. Note that predictions and targets are one-to-one corresponded. - - Args: - pred (torch.Tensor): Bbox predictions with shape [..., 3]. - target (torch.Tensor): Bbox targets (gt) with shape [..., 3]. - - Returns: - torch.Tensor: IoU loss between predictions and targets. - """ - - axis_aligned_iou = AxisAlignedBboxOverlaps3D()( - pred, target, is_aligned=True) - iou_loss = 1 - axis_aligned_iou - return iou_loss - - -@LOSSES.register_module() -class AxisAlignedIoULoss(nn.Module): - """Calculate the IoU loss (1-IoU) of axis aligned bounding boxes. - - Args: - reduction (str): Method to reduce losses. - The valid reduction method are none, sum or mean. - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(AxisAlignedIoULoss, self).__init__() - assert reduction in ['none', 'sum', 'mean'] - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function of loss calculation. - - Args: - pred (torch.Tensor): Bbox predictions with shape [..., 3]. - target (torch.Tensor): Bbox targets (gt) with shape [..., 3]. - weight (torch.Tensor | float, optional): Weight of loss. - Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): Method to reduce losses. - The valid reduction method are 'none', 'sum' or 'mean'. - Defaults to None. - - Returns: - torch.Tensor: IoU loss between predictions and targets. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if (weight is not None) and (not torch.any(weight > 0)) and ( - reduction != 'none'): - return (pred * weight).sum() - return axis_aligned_iou_loss( - pred, - target, - weight=weight, - avg_factor=avg_factor, - reduction=reduction) * self.loss_weight diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/chamfer_distance.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/chamfer_distance.py deleted file mode 100644 index 67908f59..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/chamfer_distance.py +++ /dev/null @@ -1,149 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn -from torch.nn.functional import l1_loss, mse_loss, smooth_l1_loss - -from ..builder import LOSSES - - -def chamfer_distance(src, - dst, - src_weight=1.0, - dst_weight=1.0, - criterion_mode='l2', - reduction='mean'): - """Calculate Chamfer Distance of two sets. - - Args: - src (torch.Tensor): Source set with shape [B, N, C] to - calculate Chamfer Distance. - dst (torch.Tensor): Destination set with shape [B, M, C] to - calculate Chamfer Distance. - src_weight (torch.Tensor or float): Weight of source loss. - dst_weight (torch.Tensor or float): Weight of destination loss. - criterion_mode (str): Criterion mode to calculate distance. - The valid modes are smooth_l1, l1 or l2. - reduction (str): Method to reduce losses. - The valid reduction method are 'none', 'sum' or 'mean'. - - Returns: - tuple: Source and Destination loss with the corresponding indices. - - - loss_src (torch.Tensor): The min distance - from source to destination. - - loss_dst (torch.Tensor): The min distance - from destination to source. - - indices1 (torch.Tensor): Index the min distance point - for each point in source to destination. - - indices2 (torch.Tensor): Index the min distance point - for each point in destination to source. - """ - - if criterion_mode == 'smooth_l1': - criterion = smooth_l1_loss - elif criterion_mode == 'l1': - criterion = l1_loss - elif criterion_mode == 'l2': - criterion = mse_loss - else: - raise NotImplementedError - - src_expand = src.unsqueeze(2).repeat(1, 1, dst.shape[1], 1) - dst_expand = dst.unsqueeze(1).repeat(1, src.shape[1], 1, 1) - - distance = criterion(src_expand, dst_expand, reduction='none').sum(-1) - src2dst_distance, indices1 = torch.min(distance, dim=2) # (B,N) - dst2src_distance, indices2 = torch.min(distance, dim=1) # (B,M) - - loss_src = (src2dst_distance * src_weight) - loss_dst = (dst2src_distance * dst_weight) - - if reduction == 'sum': - loss_src = torch.sum(loss_src) - loss_dst = torch.sum(loss_dst) - elif reduction == 'mean': - loss_src = torch.mean(loss_src) - loss_dst = torch.mean(loss_dst) - elif reduction == 'none': - pass - else: - raise NotImplementedError - - return loss_src, loss_dst, indices1, indices2 - - -@LOSSES.register_module() -class ChamferDistance(nn.Module): - """Calculate Chamfer Distance of two sets. - - Args: - mode (str): Criterion mode to calculate distance. - The valid modes are smooth_l1, l1 or l2. - reduction (str): Method to reduce losses. - The valid reduction method are none, sum or mean. - loss_src_weight (float): Weight of loss_source. - loss_dst_weight (float): Weight of loss_target. 
- """ - - def __init__(self, - mode='l2', - reduction='mean', - loss_src_weight=1.0, - loss_dst_weight=1.0): - super(ChamferDistance, self).__init__() - - assert mode in ['smooth_l1', 'l1', 'l2'] - assert reduction in ['none', 'sum', 'mean'] - self.mode = mode - self.reduction = reduction - self.loss_src_weight = loss_src_weight - self.loss_dst_weight = loss_dst_weight - - def forward(self, - source, - target, - src_weight=1.0, - dst_weight=1.0, - reduction_override=None, - return_indices=False, - **kwargs): - """Forward function of loss calculation. - - Args: - source (torch.Tensor): Source set with shape [B, N, C] to - calculate Chamfer Distance. - target (torch.Tensor): Destination set with shape [B, M, C] to - calculate Chamfer Distance. - src_weight (torch.Tensor | float, optional): - Weight of source loss. Defaults to 1.0. - dst_weight (torch.Tensor | float, optional): - Weight of destination loss. Defaults to 1.0. - reduction_override (str, optional): Method to reduce losses. - The valid reduction method are 'none', 'sum' or 'mean'. - Defaults to None. - return_indices (bool, optional): Whether to return indices. - Defaults to False. - - Returns: - tuple[torch.Tensor]: If ``return_indices=True``, return losses of - source and target with their corresponding indices in the - order of ``(loss_source, loss_target, indices1, indices2)``. - If ``return_indices=False``, return - ``(loss_source, loss_target)``. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - - loss_source, loss_target, indices1, indices2 = chamfer_distance( - source, target, src_weight, dst_weight, self.mode, reduction) - - loss_source *= self.loss_src_weight - loss_target *= self.loss_dst_weight - - if return_indices: - return loss_source, loss_target, indices1, indices2 - else: - return loss_source, loss_target diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/multibin_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/multibin_loss.py deleted file mode 100644 index 2afab40b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/multibin_loss.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn -from torch.nn import functional as F - -from mmdet.models.losses.utils import weighted_loss -from ..builder import LOSSES - - -@weighted_loss -def multibin_loss(pred_orientations, gt_orientations, num_dir_bins=4): - """Multi-Bin Loss. - - Args: - pred_orientations(torch.Tensor): Predicted local vector - orientation in [axis_cls, head_cls, sin, cos] format. - shape (N, num_dir_bins * 4) - gt_orientations(torch.Tensor): Corresponding gt bboxes, - shape (N, num_dir_bins * 2). - num_dir_bins(int, optional): Number of bins to encode - direction angle. - Defaults: 4. - - Return: - torch.Tensor: Loss tensor. 
- """ - cls_losses = 0 - reg_losses = 0 - reg_cnt = 0 - for i in range(num_dir_bins): - # bin cls loss - cls_ce_loss = F.cross_entropy( - pred_orientations[:, (i * 2):(i * 2 + 2)], - gt_orientations[:, i].long(), - reduction='mean') - # regression loss - valid_mask_i = (gt_orientations[:, i] == 1) - cls_losses += cls_ce_loss - if valid_mask_i.sum() > 0: - start = num_dir_bins * 2 + i * 2 - end = start + 2 - pred_offset = F.normalize(pred_orientations[valid_mask_i, - start:end]) - gt_offset_sin = torch.sin(gt_orientations[valid_mask_i, - num_dir_bins + i]) - gt_offset_cos = torch.cos(gt_orientations[valid_mask_i, - num_dir_bins + i]) - reg_loss = \ - F.l1_loss(pred_offset[:, 0], gt_offset_sin, - reduction='none') + \ - F.l1_loss(pred_offset[:, 1], gt_offset_cos, - reduction='none') - - reg_losses += reg_loss.sum() - reg_cnt += valid_mask_i.sum() - - return cls_losses / num_dir_bins + reg_losses / reg_cnt - - -@LOSSES.register_module() -class MultiBinLoss(nn.Module): - """Multi-Bin Loss for orientation. - - Args: - reduction (str, optional): The method to reduce the loss. - Options are 'none', 'mean' and 'sum'. Defaults to 'none'. - loss_weight (float, optional): The weight of loss. Defaults - to 1.0. - """ - - def __init__(self, reduction='none', loss_weight=1.0): - super(MultiBinLoss, self).__init__() - assert reduction in ['none', 'sum', 'mean'] - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, pred, target, num_dir_bins, reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - num_dir_bins (int): Number of bins to encode direction angle. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss = self.loss_weight * multibin_loss( - pred, target, num_dir_bins=num_dir_bins, reduction=reduction) - return loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/paconv_regularization_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/paconv_regularization_loss.py deleted file mode 100644 index 19f4b9ff..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/paconv_regularization_loss.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn - -from mmdet3d.ops import PAConv, PAConvCUDA -from mmdet.models.losses.utils import weight_reduce_loss -from ..builder import LOSSES - - -def weight_correlation(conv): - """Calculate correlations between kernel weights in Conv's weight bank as - regularization loss. The cosine similarity is used as metrics. - - Args: - conv (nn.Module): A Conv modules to be regularized. - Currently we only support `PAConv` and `PAConvCUDA`. - - Returns: - torch.Tensor: Correlations between each kernel weights in weight bank. 
- """ - assert isinstance(conv, (PAConv, PAConvCUDA)), \ - f'unsupported module type {type(conv)}' - kernels = conv.weight_bank # [C_in, num_kernels * C_out] - in_channels = conv.in_channels - out_channels = conv.out_channels - num_kernels = conv.num_kernels - - # [num_kernels, Cin * Cout] - flatten_kernels = kernels.view(in_channels, num_kernels, out_channels).\ - permute(1, 0, 2).reshape(num_kernels, -1) - # [num_kernels, num_kernels] - inner_product = torch.matmul(flatten_kernels, flatten_kernels.T) - # [num_kernels, 1] - kernel_norms = torch.sum(flatten_kernels**2, dim=-1, keepdim=True)**0.5 - # [num_kernels, num_kernels] - kernel_norms = torch.matmul(kernel_norms, kernel_norms.T) - cosine_sims = inner_product / kernel_norms - # take upper triangular part excluding diagonal since we only compute - # correlation between different kernels once - # the square is to ensure positive loss, refer to: - # https://github.com/CVMI-Lab/PAConv/blob/main/scene_seg/tool/train.py#L208 - corr = torch.sum(torch.triu(cosine_sims, diagonal=1)**2) - - return corr - - -def paconv_regularization_loss(modules, reduction): - """Computes correlation loss of PAConv weight kernels as regularization. - - Args: - modules (List[nn.Module] | :obj:`generator`): - A list or a python generator of torch.nn.Modules. - reduction (str): Method to reduce losses among PAConv modules. - The valid reduction method are none, sum or mean. - - Returns: - torch.Tensor: Correlation loss of kernel weights. - """ - corr_loss = [] - for module in modules: - if isinstance(module, (PAConv, PAConvCUDA)): - corr_loss.append(weight_correlation(module)) - corr_loss = torch.stack(corr_loss) - - # perform reduction - corr_loss = weight_reduce_loss(corr_loss, reduction=reduction) - - return corr_loss - - -@LOSSES.register_module() -class PAConvRegularizationLoss(nn.Module): - """Calculate correlation loss of kernel weights in PAConv's weight bank. - - This is used as a regularization term in PAConv model training. - - Args: - reduction (str): Method to reduce losses. The reduction is performed - among all PAConv modules instead of prediction tensors. - The valid reduction method are none, sum or mean. - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(PAConvRegularizationLoss, self).__init__() - assert reduction in ['none', 'sum', 'mean'] - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, modules, reduction_override=None, **kwargs): - """Forward function of loss calculation. - - Args: - modules (List[nn.Module] | :obj:`generator`): - A list or a python generator of torch.nn.Modules. - reduction_override (str, optional): Method to reduce losses. - The valid reduction method are 'none', 'sum' or 'mean'. - Defaults to None. - - Returns: - torch.Tensor: Correlation loss of kernel weights. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - - return self.loss_weight * paconv_regularization_loss( - modules, reduction=reduction) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/uncertain_smooth_l1_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/uncertain_smooth_l1_loss.py deleted file mode 100644 index 76b41f5b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/losses/uncertain_smooth_l1_loss.py +++ /dev/null @@ -1,178 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn - -from mmdet.models.losses.utils import weighted_loss -from ..builder import LOSSES - - -@weighted_loss -def uncertain_smooth_l1_loss(pred, target, sigma, alpha=1.0, beta=1.0): - """Smooth L1 loss with uncertainty. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - sigma (torch.Tensor): The sigma for uncertainty. - alpha (float, optional): The coefficient of log(sigma). - Defaults to 1.0. - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - - Returns: - torch.Tensor: Calculated loss - """ - assert beta > 0 - assert target.numel() > 0 - assert pred.size() == target.size() == sigma.size(), 'The size of pred ' \ - f'{pred.size()}, target {target.size()}, and sigma {sigma.size()} ' \ - 'are inconsistent.' - diff = torch.abs(pred - target) - loss = torch.where(diff < beta, 0.5 * diff * diff / beta, - diff - 0.5 * beta) - loss = torch.exp(-sigma) * loss + alpha * sigma - - return loss - - -@weighted_loss -def uncertain_l1_loss(pred, target, sigma, alpha=1.0): - """L1 loss with uncertainty. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - sigma (torch.Tensor): The sigma for uncertainty. - alpha (float, optional): The coefficient of log(sigma). - Defaults to 1.0. - - Returns: - torch.Tensor: Calculated loss - """ - assert target.numel() > 0 - assert pred.size() == target.size() == sigma.size(), 'The size of pred ' \ - f'{pred.size()}, target {target.size()}, and sigma {sigma.size()} ' \ - 'are inconsistent.' - loss = torch.abs(pred - target) - loss = torch.exp(-sigma) * loss + alpha * sigma - return loss - - -@LOSSES.register_module() -class UncertainSmoothL1Loss(nn.Module): - r"""Smooth L1 loss with uncertainty. - - Please refer to `PGD `_ and - `Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry - and Semantics `_ for more details. - - Args: - alpha (float, optional): The coefficient of log(sigma). - Defaults to 1.0. - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - reduction (str, optional): The method to reduce the loss. - Options are 'none', 'mean' and 'sum'. Defaults to 'mean'. - loss_weight (float, optional): The weight of loss. Defaults to 1.0 - """ - - def __init__(self, alpha=1.0, beta=1.0, reduction='mean', loss_weight=1.0): - super(UncertainSmoothL1Loss, self).__init__() - assert reduction in ['none', 'sum', 'mean'] - self.alpha = alpha - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - sigma, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. 
- - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - sigma (torch.Tensor): The sigma for uncertainty. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * uncertain_smooth_l1_loss( - pred, - target, - weight, - sigma=sigma, - alpha=self.alpha, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox - - -@LOSSES.register_module() -class UncertainL1Loss(nn.Module): - """L1 loss with uncertainty. - - Args: - alpha (float, optional): The coefficient of log(sigma). - Defaults to 1.0. - reduction (str, optional): The method to reduce the loss. - Options are 'none', 'mean' and 'sum'. Defaults to 'mean'. - loss_weight (float, optional): The weight of loss. Defaults to 1.0. - """ - - def __init__(self, alpha=1.0, reduction='mean', loss_weight=1.0): - super(UncertainL1Loss, self).__init__() - assert reduction in ['none', 'sum', 'mean'] - self.alpha = alpha - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - sigma, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - sigma (torch.Tensor): The sigma for uncertainty. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * uncertain_l1_loss( - pred, - target, - weight, - sigma=sigma, - alpha=self.alpha, - reduction=reduction, - avg_factor=avg_factor) - return loss_bbox diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/__init__.py deleted file mode 100644 index 1e7bb638..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-from .pillar_scatter import PointPillarsScatter -# from .sparse_encoder import SparseEncoder, SparseEncoderSASSD -# from .sparse_unet import SparseUNet - -__all__ = [ - 'PointPillarsScatter' -] - -# __all__ = [ -# 'PointPillarsScatter', 'SparseEncoder', 'SparseEncoderSASSD', 'SparseUNet' -# ] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/pillar_scatter.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/pillar_scatter.py deleted file mode 100644 index b0098512..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/pillar_scatter.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import auto_fp16 -from torch import nn - -from ..builder import MIDDLE_ENCODERS - - -@MIDDLE_ENCODERS.register_module() -class PointPillarsScatter(nn.Module): - """Point Pillar's Scatter. - - Converts learned features from dense tensor to sparse pseudo image. - - Args: - in_channels (int): Channels of input features. - output_shape (list[int]): Required output shape of features. - """ - - def __init__(self, in_channels, output_shape): - super().__init__() - self.output_shape = output_shape - self.ny = output_shape[0] - self.nx = output_shape[1] - self.in_channels = in_channels - self.fp16_enabled = False - - @auto_fp16(apply_to=('voxel_features', )) - def forward(self, voxel_features, coors, batch_size=None): - """Foraward function to scatter features.""" - # TODO: rewrite the function in a batch manner - # no need to deal with different batch cases - if batch_size is not None: - return self.forward_batch(voxel_features, coors, batch_size) - else: - return self.forward_single(voxel_features, coors) - - def forward_single(self, voxel_features, coors): - """Scatter features of single sample. - - Args: - voxel_features (torch.Tensor): Voxel features in shape (N, M, C). - coors (torch.Tensor): Coordinates of each voxel. - The first column indicates the sample ID. - """ - # Create the canvas for this sample - canvas = torch.zeros( - self.in_channels, - self.nx * self.ny, - dtype=voxel_features.dtype, - device=voxel_features.device) - - indices = coors[:, 2] * self.nx + coors[:, 3] - indices = indices.long() - voxels = voxel_features.t() - # Now scatter the blob back to the canvas. - canvas[:, indices] = voxels - # Undo the column stacking to final 4-dim tensor - canvas = canvas.view(1, self.in_channels, self.ny, self.nx) - return canvas - - def forward_batch(self, voxel_features, coors, batch_size): - """Scatter features of single sample. - - Args: - voxel_features (torch.Tensor): Voxel features in shape (N, M, C). - coors (torch.Tensor): Coordinates of each voxel in shape (N, 4). - The first column indicates the sample ID. - batch_size (int): Number of samples in the current batch. - """ - # batch_canvas will be the final output. 
- batch_canvas = [] - for batch_itt in range(batch_size): - # Create the canvas for this sample - canvas = torch.zeros( - self.in_channels, - self.nx * self.ny, - dtype=voxel_features.dtype, - device=voxel_features.device) - - # Only include non-empty pillars - batch_mask = coors[:, 0] == batch_itt - this_coors = coors[batch_mask, :] - indices = this_coors[:, 2] * self.nx + this_coors[:, 3] - indices = indices.type(torch.long) - voxels = voxel_features[batch_mask, :] - voxels = voxels.t() - - # Now scatter the blob back to the canvas. - canvas[:, indices] = voxels - - # Append to a list for later stacking. - batch_canvas.append(canvas) - - # Stack to 3-dim tensor (batch-size, in_channels, nrows*ncols) - batch_canvas = torch.stack(batch_canvas, 0) - - # Undo the column stacking to final 4-dim tensor - batch_canvas = batch_canvas.view(batch_size, self.in_channels, self.ny, - self.nx) - - return batch_canvas diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/sparse_encoder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/sparse_encoder.py deleted file mode 100644 index 253d6069..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/sparse_encoder.py +++ /dev/null @@ -1,493 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops import points_in_boxes_all, three_interpolate, three_nn -from mmcv.runner import auto_fp16 -from torch import nn as nn - -from mmdet3d.ops import SparseBasicBlock, make_sparse_convmodule -from mmdet3d.ops.spconv import IS_SPCONV2_AVAILABLE -from mmdet.models.losses import sigmoid_focal_loss, smooth_l1_loss -from ..builder import MIDDLE_ENCODERS - -if IS_SPCONV2_AVAILABLE: - from spconv.pytorch import SparseConvTensor, SparseSequential -else: - from mmcv.ops import SparseConvTensor, SparseSequential - - -@MIDDLE_ENCODERS.register_module() -class SparseEncoder(nn.Module): - r"""Sparse encoder for SECOND and Part-A2. - - Args: - in_channels (int): The number of input channels. - sparse_shape (list[int]): The sparse shape of input tensor. - order (list[str], optional): Order of conv module. - Defaults to ('conv', 'norm', 'act'). - norm_cfg (dict, optional): Config of normalization layer. Defaults to - dict(type='BN1d', eps=1e-3, momentum=0.01). - base_channels (int, optional): Out channels for conv_input layer. - Defaults to 16. - output_channels (int, optional): Out channels for conv_out layer. - Defaults to 128. - encoder_channels (tuple[tuple[int]], optional): - Convolutional channels of each encode block. - Defaults to ((16, ), (32, 32, 32), (64, 64, 64), (64, 64, 64)). - encoder_paddings (tuple[tuple[int]], optional): - Paddings of each encode block. - Defaults to ((1, ), (1, 1, 1), (1, 1, 1), ((0, 1, 1), 1, 1)). - block_type (str, optional): Type of the block to use. - Defaults to 'conv_module'. 
- """ - - def __init__(self, - in_channels, - sparse_shape, - order=('conv', 'norm', 'act'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - base_channels=16, - output_channels=128, - encoder_channels=((16, ), (32, 32, 32), (64, 64, 64), (64, 64, - 64)), - encoder_paddings=((1, ), (1, 1, 1), (1, 1, 1), ((0, 1, 1), 1, - 1)), - block_type='conv_module'): - super().__init__() - assert block_type in ['conv_module', 'basicblock'] - self.sparse_shape = sparse_shape - self.in_channels = in_channels - self.order = order - self.base_channels = base_channels - self.output_channels = output_channels - self.encoder_channels = encoder_channels - self.encoder_paddings = encoder_paddings - self.stage_num = len(self.encoder_channels) - self.fp16_enabled = False - # Spconv init all weight on its own - - assert isinstance(order, tuple) and len(order) == 3 - assert set(order) == {'conv', 'norm', 'act'} - - if self.order[0] != 'conv': # pre activate - self.conv_input = make_sparse_convmodule( - in_channels, - self.base_channels, - 3, - norm_cfg=norm_cfg, - padding=1, - indice_key='subm1', - conv_type='SubMConv3d', - order=('conv', )) - else: # post activate - self.conv_input = make_sparse_convmodule( - in_channels, - self.base_channels, - 3, - norm_cfg=norm_cfg, - padding=1, - indice_key='subm1', - conv_type='SubMConv3d') - - encoder_out_channels = self.make_encoder_layers( - make_sparse_convmodule, - norm_cfg, - self.base_channels, - block_type=block_type) - - self.conv_out = make_sparse_convmodule( - encoder_out_channels, - self.output_channels, - kernel_size=(3, 1, 1), - stride=(2, 1, 1), - norm_cfg=norm_cfg, - padding=0, - indice_key='spconv_down2', - conv_type='SparseConv3d') - - @auto_fp16(apply_to=('voxel_features', )) - def forward(self, voxel_features, coors, batch_size): - """Forward of SparseEncoder. - - Args: - voxel_features (torch.Tensor): Voxel features in shape (N, C). - coors (torch.Tensor): Coordinates in shape (N, 4), - the columns in the order of (batch_idx, z_idx, y_idx, x_idx). - batch_size (int): Batch size. - - Returns: - dict: Backbone features. - """ - coors = coors.int() - input_sp_tensor = SparseConvTensor(voxel_features, coors, - self.sparse_shape, batch_size) - x = self.conv_input(input_sp_tensor) - - encode_features = [] - for encoder_layer in self.encoder_layers: - x = encoder_layer(x) - encode_features.append(x) - - # for detection head - # [200, 176, 5] -> [200, 176, 2] - out = self.conv_out(encode_features[-1]) - spatial_features = out.dense() - - N, C, D, H, W = spatial_features.shape - spatial_features = spatial_features.view(N, C * D, H, W) - - return spatial_features - - def make_encoder_layers(self, - make_block, - norm_cfg, - in_channels, - block_type='conv_module', - conv_cfg=dict(type='SubMConv3d')): - """make encoder layers using sparse convs. - - Args: - make_block (method): A bounded function to build blocks. - norm_cfg (dict[str]): Config of normalization layer. - in_channels (int): The number of encoder input channels. - block_type (str, optional): Type of the block to use. - Defaults to 'conv_module'. - conv_cfg (dict, optional): Config of conv layer. Defaults to - dict(type='SubMConv3d'). - - Returns: - int: The number of encoder output channels. 
- """ - assert block_type in ['conv_module', 'basicblock'] - self.encoder_layers = SparseSequential() - - for i, blocks in enumerate(self.encoder_channels): - blocks_list = [] - for j, out_channels in enumerate(tuple(blocks)): - padding = tuple(self.encoder_paddings[i])[j] - # each stage started with a spconv layer - # except the first stage - if i != 0 and j == 0 and block_type == 'conv_module': - blocks_list.append( - make_block( - in_channels, - out_channels, - 3, - norm_cfg=norm_cfg, - stride=2, - padding=padding, - indice_key=f'spconv{i + 1}', - conv_type='SparseConv3d')) - elif block_type == 'basicblock': - if j == len(blocks) - 1 and i != len( - self.encoder_channels) - 1: - blocks_list.append( - make_block( - in_channels, - out_channels, - 3, - norm_cfg=norm_cfg, - stride=2, - padding=padding, - indice_key=f'spconv{i + 1}', - conv_type='SparseConv3d')) - else: - blocks_list.append( - SparseBasicBlock( - out_channels, - out_channels, - norm_cfg=norm_cfg, - conv_cfg=conv_cfg)) - else: - blocks_list.append( - make_block( - in_channels, - out_channels, - 3, - norm_cfg=norm_cfg, - padding=padding, - indice_key=f'subm{i + 1}', - conv_type='SubMConv3d')) - in_channels = out_channels - stage_name = f'encoder_layer{i + 1}' - stage_layers = SparseSequential(*blocks_list) - self.encoder_layers.add_module(stage_name, stage_layers) - return out_channels - - -@MIDDLE_ENCODERS.register_module() -class SparseEncoderSASSD(SparseEncoder): - r"""Sparse encoder for `SASSD `_ - - Args: - in_channels (int): The number of input channels. - sparse_shape (list[int]): The sparse shape of input tensor. - order (list[str], optional): Order of conv module. - Defaults to ('conv', 'norm', 'act'). - norm_cfg (dict, optional): Config of normalization layer. Defaults to - dict(type='BN1d', eps=1e-3, momentum=0.01). - base_channels (int, optional): Out channels for conv_input layer. - Defaults to 16. - output_channels (int, optional): Out channels for conv_out layer. - Defaults to 128. - encoder_channels (tuple[tuple[int]], optional): - Convolutional channels of each encode block. - Defaults to ((16, ), (32, 32, 32), (64, 64, 64), (64, 64, 64)). - encoder_paddings (tuple[tuple[int]], optional): - Paddings of each encode block. - Defaults to ((1, ), (1, 1, 1), (1, 1, 1), ((0, 1, 1), 1, 1)). - block_type (str, optional): Type of the block to use. - Defaults to 'conv_module'. - """ - - def __init__(self, - in_channels, - sparse_shape, - order=('conv', 'norm', 'act'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - base_channels=16, - output_channels=128, - encoder_channels=((16, ), (32, 32, 32), (64, 64, 64), (64, 64, - 64)), - encoder_paddings=((1, ), (1, 1, 1), (1, 1, 1), ((0, 1, 1), 1, - 1)), - block_type='conv_module'): - super(SparseEncoderSASSD, self).__init__( - in_channels=in_channels, - sparse_shape=sparse_shape, - order=order, - norm_cfg=norm_cfg, - base_channels=base_channels, - output_channels=output_channels, - encoder_channels=encoder_channels, - encoder_paddings=encoder_paddings, - block_type=block_type) - - self.point_fc = nn.Linear(112, 64, bias=False) - self.point_cls = nn.Linear(64, 1, bias=False) - self.point_reg = nn.Linear(64, 3, bias=False) - - @auto_fp16(apply_to=('voxel_features', )) - def forward(self, voxel_features, coors, batch_size, test_mode=False): - """Forward of SparseEncoder. - - Args: - voxel_features (torch.Tensor): Voxel features in shape (N, C). - coors (torch.Tensor): Coordinates in shape (N, 4), - the columns in the order of (batch_idx, z_idx, y_idx, x_idx). 
- batch_size (int): Batch size. - test_mode (bool, optional): Whether in test mode. - Defaults to False. - - Returns: - dict: Backbone features. - tuple[torch.Tensor]: Mean feature value of the points, - Classificaion result of the points, - Regression offsets of the points. - """ - coors = coors.int() - input_sp_tensor = SparseConvTensor(voxel_features, coors, - self.sparse_shape, batch_size) - x = self.conv_input(input_sp_tensor) - - encode_features = [] - for encoder_layer in self.encoder_layers: - x = encoder_layer(x) - encode_features.append(x) - - # for detection head - # [200, 176, 5] -> [200, 176, 2] - out = self.conv_out(encode_features[-1]) - spatial_features = out.dense() - - N, C, D, H, W = spatial_features.shape - spatial_features = spatial_features.view(N, C * D, H, W) - - if test_mode: - return spatial_features, None - - points_mean = torch.zeros_like(voxel_features) - points_mean[:, 0] = coors[:, 0] - points_mean[:, 1:] = voxel_features[:, :3] - - # auxiliary network - p0 = self.make_auxiliary_points( - encode_features[0], - points_mean, - offset=(0, -40., -3.), - voxel_size=(.1, .1, .2)) - - p1 = self.make_auxiliary_points( - encode_features[1], - points_mean, - offset=(0, -40., -3.), - voxel_size=(.2, .2, .4)) - - p2 = self.make_auxiliary_points( - encode_features[2], - points_mean, - offset=(0, -40., -3.), - voxel_size=(.4, .4, .8)) - - pointwise = torch.cat([p0, p1, p2], dim=-1) - pointwise = self.point_fc(pointwise) - point_cls = self.point_cls(pointwise) - point_reg = self.point_reg(pointwise) - point_misc = (points_mean, point_cls, point_reg) - - return spatial_features, point_misc - - def get_auxiliary_targets(self, nxyz, gt_boxes3d, enlarge=1.0): - """Get auxiliary target. - - Args: - nxyz (torch.Tensor): Mean features of the points. - gt_boxes3d (torch.Tensor): Coordinates in shape (N, 4), - the columns in the order of (batch_idx, z_idx, y_idx, x_idx). - enlarge (int, optional): Enlaged scale. Defaults to 1.0. - - Returns: - tuple[torch.Tensor]: Label of the points and - center offsets of the points. - """ - center_offsets = list() - pts_labels = list() - for i in range(len(gt_boxes3d)): - boxes3d = gt_boxes3d[i].tensor.cpu() - idx = torch.nonzero(nxyz[:, 0] == i).view(-1) - new_xyz = nxyz[idx, 1:].cpu() - - boxes3d[:, 3:6] *= enlarge - - pts_in_flag, center_offset = self.calculate_pts_offsets( - new_xyz, boxes3d) - pts_label = pts_in_flag.max(0)[0].byte() - pts_labels.append(pts_label) - center_offsets.append(center_offset) - - center_offsets = torch.cat(center_offsets).cuda() - pts_labels = torch.cat(pts_labels).to(center_offsets.device) - - return pts_labels, center_offsets - - def calculate_pts_offsets(self, points, boxes): - """Find all boxes in which each point is, as well as the offsets from - the box centers. - - Args: - points (torch.Tensor): [M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - tuple[torch.Tensor]: Point indices of boxes with the shape of - (T, M). Default background = 0. - And offsets from the box centers of points, - if it belows to the box, with the shape of (M, 3). - Default background = 0. 
- """ - boxes_num = len(boxes) - pts_num = len(points) - points = points.cuda() - boxes = boxes.to(points.device) - - box_idxs_of_pts = points_in_boxes_all(points[None, ...], boxes[None, - ...]) - - pts_indices = box_idxs_of_pts.squeeze(0).transpose(0, 1) - - center_offsets = torch.zeros_like(points).to(points.device) - - for i in range(boxes_num): - for j in range(pts_num): - if pts_indices[i][j] == 1: - center_offsets[j][0] = points[j][0] - boxes[i][0] - center_offsets[j][1] = points[j][1] - boxes[i][1] - center_offsets[j][2] = ( - points[j][2] - (boxes[i][2] + boxes[i][2] / 2.0)) - return pts_indices.cpu(), center_offsets.cpu() - - def aux_loss(self, points, point_cls, point_reg, gt_bboxes): - """Calculate auxiliary loss. - - Args: - points (torch.Tensor): Mean feature value of the points. - point_cls (torch.Tensor): Classificaion result of the points. - point_reg (torch.Tensor): Regression offsets of the points. - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes for each sample. - - Returns: - dict: Backbone features. - """ - num_boxes = len(gt_bboxes) - - pts_labels, center_targets = self.get_auxiliary_targets( - points, gt_bboxes) - - rpn_cls_target = pts_labels.long() - pos = (pts_labels > 0).float() - neg = (pts_labels == 0).float() - - pos_normalizer = pos.sum().clamp(min=1.0) - - cls_weights = pos + neg - reg_weights = pos - reg_weights = reg_weights / pos_normalizer - - aux_loss_cls = sigmoid_focal_loss( - point_cls, - rpn_cls_target, - weight=cls_weights, - avg_factor=pos_normalizer) - - aux_loss_cls /= num_boxes - - weight = reg_weights[..., None] - aux_loss_reg = smooth_l1_loss(point_reg, center_targets, beta=1 / 9.) - aux_loss_reg = torch.sum(aux_loss_reg * weight)[None] - aux_loss_reg /= num_boxes - - aux_loss_cls, aux_loss_reg = [aux_loss_cls], [aux_loss_reg] - - return dict(aux_loss_cls=aux_loss_cls, aux_loss_reg=aux_loss_reg) - - def make_auxiliary_points(self, - source_tensor, - target, - offset=(0., -40., -3.), - voxel_size=(.05, .05, .1)): - """Make auxiliary points for loss computation. - - Args: - source_tensor (torch.Tensor): (M, C) features to be propigated. - target (torch.Tensor): (N, 4) bxyz positions of the - target features. - offset (tuple[float], optional): Voxelization offset. - Defaults to (0., -40., -3.) - voxel_size (tuple[float], optional): Voxelization size. - Defaults to (.05, .05, .1) - - Returns: - torch.Tensor: (N, C) tensor of the features of the target features. 
- """ - # Tansfer tensor to points - source = source_tensor.indices.float() - offset = torch.Tensor(offset).to(source.device) - voxel_size = torch.Tensor(voxel_size).to(source.device) - source[:, 1:] = ( - source[:, [3, 2, 1]] * voxel_size + offset + .5 * voxel_size) - - source_feats = source_tensor.features[None, ...].transpose(1, 2) - - # Interplate auxiliary points - dist, idx = three_nn(target[None, ...], source[None, ...]) - dist_recip = 1.0 / (dist + 1e-8) - norm = torch.sum(dist_recip, dim=2, keepdim=True) - weight = dist_recip / norm - new_features = three_interpolate(source_feats.contiguous(), idx, - weight) - - return new_features.squeeze(0).transpose(0, 1) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/sparse_unet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/sparse_unet.py deleted file mode 100644 index 91b1e723..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/middle_encoders/sparse_unet.py +++ /dev/null @@ -1,302 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.ops.spconv import IS_SPCONV2_AVAILABLE - -if IS_SPCONV2_AVAILABLE: - from spconv.pytorch import SparseConvTensor, SparseSequential -else: - from mmcv.ops import SparseConvTensor, SparseSequential - -from mmcv.runner import BaseModule, auto_fp16 - -from mmdet3d.ops import SparseBasicBlock, make_sparse_convmodule -from mmdet3d.ops.sparse_block import replace_feature -from ..builder import MIDDLE_ENCODERS - - -@MIDDLE_ENCODERS.register_module() -class SparseUNet(BaseModule): - r"""SparseUNet for PartA^2. - - See the `paper `_ for more details. - - Args: - in_channels (int): The number of input channels. - sparse_shape (list[int]): The sparse shape of input tensor. - norm_cfg (dict): Config of normalization layer. - base_channels (int): Out channels for conv_input layer. - output_channels (int): Out channels for conv_out layer. - encoder_channels (tuple[tuple[int]]): - Convolutional channels of each encode block. - encoder_paddings (tuple[tuple[int]]): Paddings of each encode block. - decoder_channels (tuple[tuple[int]]): - Convolutional channels of each decode block. - decoder_paddings (tuple[tuple[int]]): Paddings of each decode block. 
- """ - - def __init__(self, - in_channels, - sparse_shape, - order=('conv', 'norm', 'act'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - base_channels=16, - output_channels=128, - encoder_channels=((16, ), (32, 32, 32), (64, 64, 64), (64, 64, - 64)), - encoder_paddings=((1, ), (1, 1, 1), (1, 1, 1), ((0, 1, 1), 1, - 1)), - decoder_channels=((64, 64, 64), (64, 64, 32), (32, 32, 16), - (16, 16, 16)), - decoder_paddings=((1, 0), (1, 0), (0, 0), (0, 1)), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.sparse_shape = sparse_shape - self.in_channels = in_channels - self.order = order - self.base_channels = base_channels - self.output_channels = output_channels - self.encoder_channels = encoder_channels - self.encoder_paddings = encoder_paddings - self.decoder_channels = decoder_channels - self.decoder_paddings = decoder_paddings - self.stage_num = len(self.encoder_channels) - self.fp16_enabled = False - # Spconv init all weight on its own - - assert isinstance(order, tuple) and len(order) == 3 - assert set(order) == {'conv', 'norm', 'act'} - - if self.order[0] != 'conv': # pre activate - self.conv_input = make_sparse_convmodule( - in_channels, - self.base_channels, - 3, - norm_cfg=norm_cfg, - padding=1, - indice_key='subm1', - conv_type='SubMConv3d', - order=('conv', )) - else: # post activate - self.conv_input = make_sparse_convmodule( - in_channels, - self.base_channels, - 3, - norm_cfg=norm_cfg, - padding=1, - indice_key='subm1', - conv_type='SubMConv3d') - - encoder_out_channels = self.make_encoder_layers( - make_sparse_convmodule, norm_cfg, self.base_channels) - self.make_decoder_layers(make_sparse_convmodule, norm_cfg, - encoder_out_channels) - - self.conv_out = make_sparse_convmodule( - encoder_out_channels, - self.output_channels, - kernel_size=(3, 1, 1), - stride=(2, 1, 1), - norm_cfg=norm_cfg, - padding=0, - indice_key='spconv_down2', - conv_type='SparseConv3d') - - @auto_fp16(apply_to=('voxel_features', )) - def forward(self, voxel_features, coors, batch_size): - """Forward of SparseUNet. - - Args: - voxel_features (torch.float32): Voxel features in shape [N, C]. - coors (torch.int32): Coordinates in shape [N, 4], - the columns in the order of (batch_idx, z_idx, y_idx, x_idx). - batch_size (int): Batch size. - - Returns: - dict[str, torch.Tensor]: Backbone features. 
- """ - coors = coors.int() - input_sp_tensor = SparseConvTensor(voxel_features, coors, - self.sparse_shape, batch_size) - x = self.conv_input(input_sp_tensor) - - encode_features = [] - for encoder_layer in self.encoder_layers: - x = encoder_layer(x) - encode_features.append(x) - - # for detection head - # [200, 176, 5] -> [200, 176, 2] - out = self.conv_out(encode_features[-1]) - spatial_features = out.dense() - - N, C, D, H, W = spatial_features.shape - spatial_features = spatial_features.view(N, C * D, H, W) - - # for segmentation head, with output shape: - # [400, 352, 11] <- [200, 176, 5] - # [800, 704, 21] <- [400, 352, 11] - # [1600, 1408, 41] <- [800, 704, 21] - # [1600, 1408, 41] <- [1600, 1408, 41] - decode_features = [] - x = encode_features[-1] - for i in range(self.stage_num, 0, -1): - x = self.decoder_layer_forward(encode_features[i - 1], x, - getattr(self, f'lateral_layer{i}'), - getattr(self, f'merge_layer{i}'), - getattr(self, f'upsample_layer{i}')) - decode_features.append(x) - - seg_features = decode_features[-1].features - - ret = dict( - spatial_features=spatial_features, seg_features=seg_features) - - return ret - - def decoder_layer_forward(self, x_lateral, x_bottom, lateral_layer, - merge_layer, upsample_layer): - """Forward of upsample and residual block. - - Args: - x_lateral (:obj:`SparseConvTensor`): Lateral tensor. - x_bottom (:obj:`SparseConvTensor`): Feature from bottom layer. - lateral_layer (SparseBasicBlock): Convolution for lateral tensor. - merge_layer (SparseSequential): Convolution for merging features. - upsample_layer (SparseSequential): Convolution for upsampling. - - Returns: - :obj:`SparseConvTensor`: Upsampled feature. - """ - x = lateral_layer(x_lateral) - x = replace_feature(x, torch.cat((x_bottom.features, x.features), - dim=1)) - x_merge = merge_layer(x) - x = self.reduce_channel(x, x_merge.features.shape[1]) - x = replace_feature(x, x_merge.features + x.features) - x = upsample_layer(x) - return x - - @staticmethod - def reduce_channel(x, out_channels): - """reduce channel for element-wise addition. - - Args: - x (:obj:`SparseConvTensor`): Sparse tensor, ``x.features`` - are in shape (N, C1). - out_channels (int): The number of channel after reduction. - - Returns: - :obj:`SparseConvTensor`: Channel reduced feature. - """ - features = x.features - n, in_channels = features.shape - assert (in_channels % out_channels - == 0) and (in_channels >= out_channels) - x = replace_feature(x, features.view(n, out_channels, -1).sum(dim=2)) - return x - - def make_encoder_layers(self, make_block, norm_cfg, in_channels): - """make encoder layers using sparse convs. - - Args: - make_block (method): A bounded function to build blocks. - norm_cfg (dict[str]): Config of normalization layer. - in_channels (int): The number of encoder input channels. - - Returns: - int: The number of encoder output channels. 
- """ - self.encoder_layers = SparseSequential() - - for i, blocks in enumerate(self.encoder_channels): - blocks_list = [] - for j, out_channels in enumerate(tuple(blocks)): - padding = tuple(self.encoder_paddings[i])[j] - # each stage started with a spconv layer - # except the first stage - if i != 0 and j == 0: - blocks_list.append( - make_block( - in_channels, - out_channels, - 3, - norm_cfg=norm_cfg, - stride=2, - padding=padding, - indice_key=f'spconv{i + 1}', - conv_type='SparseConv3d')) - else: - blocks_list.append( - make_block( - in_channels, - out_channels, - 3, - norm_cfg=norm_cfg, - padding=padding, - indice_key=f'subm{i + 1}', - conv_type='SubMConv3d')) - in_channels = out_channels - stage_name = f'encoder_layer{i + 1}' - stage_layers = SparseSequential(*blocks_list) - self.encoder_layers.add_module(stage_name, stage_layers) - return out_channels - - def make_decoder_layers(self, make_block, norm_cfg, in_channels): - """make decoder layers using sparse convs. - - Args: - make_block (method): A bounded function to build blocks. - norm_cfg (dict[str]): Config of normalization layer. - in_channels (int): The number of encoder input channels. - - Returns: - int: The number of encoder output channels. - """ - block_num = len(self.decoder_channels) - for i, block_channels in enumerate(self.decoder_channels): - paddings = self.decoder_paddings[i] - setattr( - self, f'lateral_layer{block_num - i}', - SparseBasicBlock( - in_channels, - block_channels[0], - conv_cfg=dict( - type='SubMConv3d', indice_key=f'subm{block_num - i}'), - norm_cfg=norm_cfg)) - setattr( - self, f'merge_layer{block_num - i}', - make_block( - in_channels * 2, - block_channels[1], - 3, - norm_cfg=norm_cfg, - padding=paddings[0], - indice_key=f'subm{block_num - i}', - conv_type='SubMConv3d')) - if block_num - i != 1: - setattr( - self, f'upsample_layer{block_num - i}', - make_block( - in_channels, - block_channels[2], - 3, - norm_cfg=norm_cfg, - indice_key=f'spconv{block_num - i}', - conv_type='SparseInverseConv3d')) - else: - # use submanifold conv instead of inverse conv - # in the last block - setattr( - self, f'upsample_layer{block_num - i}', - make_block( - in_channels, - block_channels[2], - 3, - norm_cfg=norm_cfg, - padding=paddings[1], - indice_key='subm1', - conv_type='SubMConv3d')) - in_channels = block_channels[2] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/__init__.py deleted file mode 100644 index 34df79a2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .edge_fusion_module import EdgeFusionModule -from .transformer import GroupFree3DMHA -from .vote_module import VoteModule - -__all__ = ['VoteModule', 'GroupFree3DMHA', 'EdgeFusionModule'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/edge_fusion_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/edge_fusion_module.py deleted file mode 100644 index 2d9e09ee..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/edge_fusion_module.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch import nn as nn -from torch.nn import functional as F - - -class EdgeFusionModule(BaseModule): - """Edge Fusion Module for feature map. - - Args: - out_channels (int): The number of output channels. - feat_channels (int): The number of channels in feature map - during edge feature fusion. - kernel_size (int, optional): Kernel size of convolution. - Default: 3. - act_cfg (dict, optional): Config of activation. - Default: dict(type='ReLU'). - norm_cfg (dict, optional): Config of normalization. - Default: dict(type='BN1d')). - """ - - def __init__(self, - out_channels, - feat_channels, - kernel_size=3, - act_cfg=dict(type='ReLU'), - norm_cfg=dict(type='BN1d')): - super().__init__() - self.edge_convs = nn.Sequential( - ConvModule( - feat_channels, - feat_channels, - kernel_size=kernel_size, - padding=kernel_size // 2, - conv_cfg=dict(type='Conv1d'), - norm_cfg=norm_cfg, - act_cfg=act_cfg), - nn.Conv1d(feat_channels, out_channels, kernel_size=1)) - self.feat_channels = feat_channels - - def forward(self, features, fused_features, edge_indices, edge_lens, - output_h, output_w): - """Forward pass. - - Args: - features (torch.Tensor): Different representative features - for fusion. - fused_features (torch.Tensor): Different representative - features to be fused. - edge_indices (torch.Tensor): Batch image edge indices. - edge_lens (list[int]): List of edge length of each image. - output_h (int): Height of output feature map. - output_w (int): Width of output feature map. - - Returns: - torch.Tensor: Fused feature maps. - """ - batch_size = features.shape[0] - # normalize - grid_edge_indices = edge_indices.view(batch_size, -1, 1, 2).float() - grid_edge_indices[..., 0] = \ - grid_edge_indices[..., 0] / (output_w - 1) * 2 - 1 - grid_edge_indices[..., 1] = \ - grid_edge_indices[..., 1] / (output_h - 1) * 2 - 1 - - # apply edge fusion - edge_features = F.grid_sample( - features, grid_edge_indices, align_corners=True).squeeze(-1) - edge_output = self.edge_convs(edge_features) - - for k in range(batch_size): - edge_indice_k = edge_indices[k, :edge_lens[k]] - fused_features[k, :, edge_indice_k[:, 1], - edge_indice_k[:, 0]] += edge_output[ - k, :, :edge_lens[k]] - - return fused_features diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/transformer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/transformer.py deleted file mode 100644 index 4f9a833e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/transformer.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn.bricks.registry import ATTENTION -from mmcv.cnn.bricks.transformer import POSITIONAL_ENCODING, MultiheadAttention -from torch import nn as nn - - -@ATTENTION.register_module() -class GroupFree3DMHA(MultiheadAttention): - """A warpper for torch.nn.MultiheadAttention for GroupFree3D. - - This module implements MultiheadAttention with identity connection, - and positional encoding used in DETR is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. Same as - `nn.MultiheadAttention`. - attn_drop (float, optional): A Dropout layer on attn_output_weights. - Defaults to 0.0. - proj_drop (float, optional): A Dropout layer. Defaults to 0.0. - dropout_layer (obj:`ConfigDict`, optional): The dropout_layer used - when adding the shortcut. 
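The removed `EdgeFusionModule` gathers feature vectors along the image border by normalising integer edge pixel indices to `[-1, 1]` and sampling the 2D map with `F.grid_sample`, which yields a 1D edge sequence that the `Conv1d` stack can process. A standalone sketch of just that sampling step; all tensor sizes below are made up for illustration:

```python
import torch
import torch.nn.functional as F

B, C, H, W, N = 2, 16, 96, 320, 200           # made-up sizes
features = torch.randn(B, C, H, W)            # 2D feature map
edge_indices = torch.stack(                   # (B, N, 2) integer (x, y) border pixels
    [torch.randint(0, W, (B, N)), torch.randint(0, H, (B, N))], dim=-1)

grid = edge_indices.view(B, -1, 1, 2).float()
grid[..., 0] = grid[..., 0] / (W - 1) * 2 - 1  # x -> [-1, 1]
grid[..., 1] = grid[..., 1] / (H - 1) * 2 - 1  # y -> [-1, 1]

# (B, C, N, 1) -> (B, C, N): one feature vector per edge pixel, ready for Conv1d
edge_features = F.grid_sample(features, grid, align_corners=True).squeeze(-1)
print(edge_features.shape)                     # torch.Size([2, 16, 200])
```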
- init_cfg (obj:`mmcv.ConfigDict`, optional): The Config for - initialization. Default: None. - batch_first (bool, optional): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Defaults to False. - """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=dict(type='DropOut', drop_prob=0.), - init_cfg=None, - batch_first=False, - **kwargs): - super().__init__(embed_dims, num_heads, attn_drop, proj_drop, - dropout_layer, init_cfg, batch_first, **kwargs) - - def forward(self, - query, - key, - value, - identity, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `GroupFree3DMHA`. - - **kwargs allow passing a more general data flow when combining - with other operations in `transformerlayer`. - - Args: - query (Tensor): The input query with shape [num_queries, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - If None, the ``query`` will be used. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. - If None, the `key` will be used. - identity (Tensor): This tensor, with the same shape as x, - will be used for the identity link. If None, `x` will be used. - query_pos (Tensor, optional): The positional encoding for query, - with the same shape as `x`. Defaults to None. - If not None, it will be added to `x` before forward function. - key_pos (Tensor, optional): The positional encoding for `key`, - with the same shape as `key`. Defaults to None. If not None, - it will be added to `key` before forward function. If None, - and `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. Defaults to None. - attn_mask (Tensor, optional): ByteTensor mask with shape - [num_queries, num_keys]. - Same in `nn.MultiheadAttention.forward`. Defaults to None. - key_padding_mask (Tensor, optional): ByteTensor with shape - [bs, num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - - Returns: - Tensor: forwarded results with shape [num_queries, bs, embed_dims]. - """ - - if hasattr(self, 'operation_name'): - if self.operation_name == 'self_attn': - value = value + query_pos - elif self.operation_name == 'cross_attn': - value = value + key_pos - else: - raise NotImplementedError( - f'{self.__class__.name} ' - f"can't be used as {self.operation_name}") - else: - value = value + query_pos - - return super(GroupFree3DMHA, self).forward( - query=query, - key=key, - value=value, - identity=identity, - query_pos=query_pos, - key_pos=key_pos, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask, - **kwargs) - - -@POSITIONAL_ENCODING.register_module() -class ConvBNPositionalEncoding(nn.Module): - """Absolute position embedding with Conv learning. - - Args: - input_channel (int): input features dim. - num_pos_feats (int, optional): output position features dim. - Defaults to 288 to be consistent with seed features dim. - """ - - def __init__(self, input_channel, num_pos_feats=288): - super().__init__() - self.position_embedding_head = nn.Sequential( - nn.Conv1d(input_channel, num_pos_feats, kernel_size=1), - nn.BatchNorm1d(num_pos_feats), nn.ReLU(inplace=True), - nn.Conv1d(num_pos_feats, num_pos_feats, kernel_size=1)) - - def forward(self, xyz): - """Forward pass. - - Args: - xyz (Tensor): (B, N, 3) the coordinates to embed. 
- - Returns: - Tensor: (B, num_pos_feats, N) the embedded position features. - """ - xyz = xyz.permute(0, 2, 1) - position_embedding = self.position_embedding_head(xyz) - return position_embedding diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/vote_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/vote_module.py deleted file mode 100644 index 5cc52ad9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/model_utils/vote_module.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv import is_tuple_of -from mmcv.cnn import ConvModule -from torch import nn as nn - -from mmdet3d.models.builder import build_loss - - -class VoteModule(nn.Module): - """Vote module. - - Generate votes from seed point features. - - Args: - in_channels (int): Number of channels of seed point features. - vote_per_seed (int, optional): Number of votes generated from - each seed point. Default: 1. - gt_per_seed (int, optional): Number of ground truth votes generated - from each seed point. Default: 3. - num_points (int, optional): Number of points to be used for voting. - Default: 1. - conv_channels (tuple[int], optional): Out channels of vote - generating convolution. Default: (16, 16). - conv_cfg (dict, optional): Config of convolution. - Default: dict(type='Conv1d'). - norm_cfg (dict, optional): Config of normalization. - Default: dict(type='BN1d'). - norm_feats (bool, optional): Whether to normalize features. - Default: True. - with_res_feat (bool, optional): Whether to predict residual features. - Default: True. - vote_xyz_range (list[float], optional): - The range of points translation. Default: None. - vote_loss (dict, optional): Config of vote loss. Default: None. - """ - - def __init__(self, - in_channels, - vote_per_seed=1, - gt_per_seed=3, - num_points=-1, - conv_channels=(16, 16), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - norm_feats=True, - with_res_feat=True, - vote_xyz_range=None, - vote_loss=None): - super().__init__() - self.in_channels = in_channels - self.vote_per_seed = vote_per_seed - self.gt_per_seed = gt_per_seed - self.num_points = num_points - self.norm_feats = norm_feats - self.with_res_feat = with_res_feat - - assert vote_xyz_range is None or is_tuple_of(vote_xyz_range, float) - self.vote_xyz_range = vote_xyz_range - - if vote_loss is not None: - self.vote_loss = build_loss(vote_loss) - - prev_channels = in_channels - vote_conv_list = list() - for k in range(len(conv_channels)): - vote_conv_list.append( - ConvModule( - prev_channels, - conv_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=True, - inplace=True)) - prev_channels = conv_channels[k] - self.vote_conv = nn.Sequential(*vote_conv_list) - - # conv_out predicts coordinate and residual features - if with_res_feat: - out_channel = (3 + in_channels) * self.vote_per_seed - else: - out_channel = 3 * self.vote_per_seed - self.conv_out = nn.Conv1d(prev_channels, out_channel, 1) - - def forward(self, seed_points, seed_feats): - """forward. - - Args: - seed_points (torch.Tensor): Coordinate of the seed - points in shape (B, N, 3). - seed_feats (torch.Tensor): Features of the seed points in shape - (B, C, N). - - Returns: - tuple[torch.Tensor]: - - - vote_points: Voted xyz based on the seed points - with shape (B, M, 3), ``M=num_seed*vote_per_seed``. 
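Usage sketch for the removed `ConvBNPositionalEncoding`: point coordinates of shape `(B, N, 3)` are permuted to `(B, 3, N)` and run through a small `Conv1d`-`BatchNorm1d`-`ReLU`-`Conv1d` stack to produce a learned positional feature per point. The stack below is rebuilt with plain PyTorch layers; the 288 output channels follow the default stated in the deleted docstring.

```python
import torch
import torch.nn as nn

num_pos_feats = 288                            # default from the deleted docstring
pos_head = nn.Sequential(
    nn.Conv1d(3, num_pos_feats, kernel_size=1),
    nn.BatchNorm1d(num_pos_feats),
    nn.ReLU(inplace=True),
    nn.Conv1d(num_pos_feats, num_pos_feats, kernel_size=1))

xyz = torch.randn(4, 1024, 3)                  # (B, N, 3) point coordinates
pos_embedding = pos_head(xyz.permute(0, 2, 1)) # (B, num_pos_feats, N)
print(pos_embedding.shape)
```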
- - vote_features: Voted features based on the seed points with - shape (B, C, M) where ``M=num_seed*vote_per_seed``, - ``C=vote_feature_dim``. - """ - if self.num_points != -1: - assert self.num_points < seed_points.shape[1], \ - f'Number of vote points ({self.num_points}) should be '\ - f'smaller than seed points size ({seed_points.shape[1]})' - seed_points = seed_points[:, :self.num_points] - seed_feats = seed_feats[..., :self.num_points] - - batch_size, feat_channels, num_seed = seed_feats.shape - num_vote = num_seed * self.vote_per_seed - x = self.vote_conv(seed_feats) - # (batch_size, (3+out_dim)*vote_per_seed, num_seed) - votes = self.conv_out(x) - - votes = votes.transpose(2, 1).view(batch_size, num_seed, - self.vote_per_seed, -1) - - offset = votes[:, :, :, 0:3] - if self.vote_xyz_range is not None: - limited_offset_list = [] - for axis in range(len(self.vote_xyz_range)): - limited_offset_list.append(offset[..., axis].clamp( - min=-self.vote_xyz_range[axis], - max=self.vote_xyz_range[axis])) - limited_offset = torch.stack(limited_offset_list, -1) - vote_points = (seed_points.unsqueeze(2) + - limited_offset).contiguous() - else: - vote_points = (seed_points.unsqueeze(2) + offset).contiguous() - vote_points = vote_points.view(batch_size, num_vote, 3) - offset = offset.reshape(batch_size, num_vote, 3).transpose(2, 1) - - if self.with_res_feat: - res_feats = votes[:, :, :, 3:] - vote_feats = (seed_feats.transpose(2, 1).unsqueeze(2) + - res_feats).contiguous() - vote_feats = vote_feats.view(batch_size, - num_vote, feat_channels).transpose( - 2, 1).contiguous() - - if self.norm_feats: - features_norm = torch.norm(vote_feats, p=2, dim=1) - vote_feats = vote_feats.div(features_norm.unsqueeze(1)) - else: - vote_feats = seed_feats - return vote_points, vote_feats, offset - - def get_loss(self, seed_points, vote_points, seed_indices, - vote_targets_mask, vote_targets): - """Calculate loss of voting module. - - Args: - seed_points (torch.Tensor): Coordinate of the seed points. - vote_points (torch.Tensor): Coordinate of the vote points. - seed_indices (torch.Tensor): Indices of seed points in raw points. - vote_targets_mask (torch.Tensor): Mask of valid vote targets. - vote_targets (torch.Tensor): Targets of votes. - - Returns: - torch.Tensor: Weighted vote loss. - """ - batch_size, num_seed = seed_points.shape[:2] - - seed_gt_votes_mask = torch.gather(vote_targets_mask, 1, - seed_indices).float() - - seed_indices_expand = seed_indices.unsqueeze(-1).repeat( - 1, 1, 3 * self.gt_per_seed) - seed_gt_votes = torch.gather(vote_targets, 1, seed_indices_expand) - seed_gt_votes += seed_points.repeat(1, 1, self.gt_per_seed) - - weight = seed_gt_votes_mask / (torch.sum(seed_gt_votes_mask) + 1e-6) - distance = self.vote_loss( - vote_points.view(batch_size * num_seed, -1, 3), - seed_gt_votes.view(batch_size * num_seed, -1, 3), - dst_weight=weight.view(batch_size * num_seed, 1))[1] - vote_loss = torch.sum(torch.min(distance, dim=1)[0]) - - return vote_loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/__init__.py deleted file mode 100644 index 9c60faf5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
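A compact sketch of the voting step implemented by the removed `VoteModule`: every seed point predicts a 3D offset, optionally clamped per axis by `vote_xyz_range`, and the vote is the seed coordinate plus that offset. The two-layer `Conv1d` head and the range values below are placeholders standing in for the configured `vote_conv`/`conv_out` stack (with `vote_per_seed = 1`); residual feature prediction and the vote loss are omitted.

```python
import torch
import torch.nn as nn

B, N, C = 2, 1024, 256
vote_xyz_range = (3.0, 3.0, 2.0)               # placeholder per-axis offset limits

seed_xyz = torch.randn(B, N, 3)                # seed point coordinates
seed_feats = torch.randn(B, C, N)              # seed point features

# stand-in for the configured vote_conv / conv_out stack
offset_head = nn.Sequential(
    nn.Conv1d(C, 128, 1), nn.ReLU(), nn.Conv1d(128, 3, 1))

offset = offset_head(seed_feats).transpose(1, 2)                 # (B, N, 3)
offset = torch.stack(                                            # clamp each axis
    [offset[..., i].clamp(-r, r) for i, r in enumerate(vote_xyz_range)], dim=-1)
vote_xyz = seed_xyz + offset                                     # voted centres
print(vote_xyz.shape)                                            # torch.Size([2, 1024, 3])
```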
-from mmdet.models.necks.fpn import FPN -from .dla_neck import DLANeck -from .imvoxel_neck import OutdoorImVoxelNeck -from .pointnet2_fp_neck import PointNetFPNeck -from .second_fpn import SECONDFPN - -__all__ = [ - 'FPN', 'SECONDFPN', 'OutdoorImVoxelNeck', 'PointNetFPNeck', 'DLANeck' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/dla_neck.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/dla_neck.py deleted file mode 100644 index 8bf6db1e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/dla_neck.py +++ /dev/null @@ -1,235 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import numpy as np -from mmcv.cnn import ConvModule, build_conv_layer -from mmcv.runner import BaseModule -from torch import nn as nn - -from ..builder import NECKS - - -def fill_up_weights(up): - """Simulated bilinear upsampling kernel. - - Args: - up (nn.Module): ConvTranspose2d module. - """ - w = up.weight.data - f = math.ceil(w.size(2) / 2) - c = (2 * f - 1 - f % 2) / (2. * f) - for i in range(w.size(2)): - for j in range(w.size(3)): - w[0, 0, i, j] = \ - (1 - math.fabs(i / f - c)) * (1 - math.fabs(j / f - c)) - for c in range(1, w.size(0)): - w[c, 0, :, :] = w[0, 0, :, :] - - -class IDAUpsample(BaseModule): - """Iterative Deep Aggregation (IDA) Upsampling module to upsample features - of different scales to a similar scale. - - Args: - out_channels (int): Number of output channels for DeformConv. - in_channels (List[int]): List of input channels of multi-scale - feature maps. - kernel_sizes (List[int]): List of size of the convolving - kernel of different scales. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - use_dcn (bool, optional): If True, use DCNv2. Default: True. - """ - - def __init__( - self, - out_channels, - in_channels, - kernel_sizes, - norm_cfg=None, - use_dcn=True, - init_cfg=None, - ): - super(IDAUpsample, self).__init__(init_cfg) - self.use_dcn = use_dcn - self.projs = nn.ModuleList() - self.ups = nn.ModuleList() - self.nodes = nn.ModuleList() - - for i in range(1, len(in_channels)): - in_channel = in_channels[i] - up_kernel_size = int(kernel_sizes[i]) - proj = ConvModule( - in_channel, - out_channels, - 3, - padding=1, - bias=True, - conv_cfg=dict(type='DCNv2') if self.use_dcn else None, - norm_cfg=norm_cfg) - node = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - bias=True, - conv_cfg=dict(type='DCNv2') if self.use_dcn else None, - norm_cfg=norm_cfg) - up = build_conv_layer( - dict(type='deconv'), - out_channels, - out_channels, - up_kernel_size * 2, - stride=up_kernel_size, - padding=up_kernel_size // 2, - output_padding=0, - groups=out_channels, - bias=False) - - self.projs.append(proj) - self.ups.append(up) - self.nodes.append(node) - - def forward(self, mlvl_features, start_level, end_level): - """Forward function. - - Args: - mlvl_features (list[torch.Tensor]): Features from multiple layers. - start_level (int): Start layer for feature upsampling. - end_level (int): End layer for feature upsampling. 
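Usage sketch for `fill_up_weights` as defined above: applied to a depthwise `ConvTranspose2d` (kernel 4, stride 2, padding 1, `groups == channels`), it initialises the layer as a 2x bilinear upsampler, which is how the removed DLA neck sets up its deconvolution layers.

```python
import math
import torch
import torch.nn as nn


def fill_up_weights(up: nn.ConvTranspose2d) -> None:
    """Fill a depthwise deconvolution with a bilinear interpolation kernel."""
    w = up.weight.data
    f = math.ceil(w.size(2) / 2)
    c = (2 * f - 1 - f % 2) / (2.0 * f)
    for i in range(w.size(2)):
        for j in range(w.size(3)):
            w[0, 0, i, j] = (1 - abs(i / f - c)) * (1 - abs(j / f - c))
    for ch in range(1, w.size(0)):
        w[ch, 0, :, :] = w[0, 0, :, :]


channels = 64
up = nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2,
                        padding=1, groups=channels, bias=False)
fill_up_weights(up)                              # now acts as a 2x bilinear upsampler
x = torch.randn(1, channels, 32, 32)
print(up(x).shape)                               # torch.Size([1, 64, 64, 64])
```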
- """ - for i in range(start_level, end_level - 1): - upsample = self.ups[i - start_level] - project = self.projs[i - start_level] - mlvl_features[i + 1] = upsample(project(mlvl_features[i + 1])) - node = self.nodes[i - start_level] - mlvl_features[i + 1] = node(mlvl_features[i + 1] + - mlvl_features[i]) - - -class DLAUpsample(BaseModule): - """Deep Layer Aggregation (DLA) Upsampling module for different scales - feature extraction, upsampling and fusion, It consists of groups of - IDAupsample modules. - - Args: - start_level (int): The start layer. - channels (List[int]): List of input channels of multi-scale - feature maps. - scales(List[int]): List of scale of different layers' feature. - in_channels (NoneType, optional): List of input channels of - different scales. Default: None. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - use_dcn (bool, optional): Whether to use dcn in IDAup module. - Default: True. - """ - - def __init__(self, - start_level, - channels, - scales, - in_channels=None, - norm_cfg=None, - use_dcn=True, - init_cfg=None): - super(DLAUpsample, self).__init__(init_cfg) - self.start_level = start_level - if in_channels is None: - in_channels = channels - self.channels = channels - channels = list(channels) - scales = np.array(scales, dtype=int) - for i in range(len(channels) - 1): - j = -i - 2 - setattr( - self, 'ida_{}'.format(i), - IDAUpsample(channels[j], in_channels[j:], - scales[j:] // scales[j], norm_cfg, use_dcn)) - scales[j + 1:] = scales[j] - in_channels[j + 1:] = [channels[j] for _ in channels[j + 1:]] - - def forward(self, mlvl_features): - """Forward function. - - Args: - mlvl_features(list[torch.Tensor]): Features from multi-scale - layers. - - Returns: - tuple[torch.Tensor]: Up-sampled features of different layers. - """ - outs = [mlvl_features[-1]] - for i in range(len(mlvl_features) - self.start_level - 1): - ida = getattr(self, 'ida_{}'.format(i)) - ida(mlvl_features, len(mlvl_features) - i - 2, len(mlvl_features)) - outs.insert(0, mlvl_features[-1]) - return outs - - -@NECKS.register_module() -class DLANeck(BaseModule): - """DLA Neck. - - Args: - in_channels (list[int], optional): List of input channels - of multi-scale feature map. - start_level (int, optional): The scale level where upsampling - starts. Default: 2. - end_level (int, optional): The scale level where upsampling - ends. Default: 5. - norm_cfg (dict, optional): Config dict for normalization - layer. Default: None. - use_dcn (bool, optional): Whether to use dcn in IDAup module. - Default: True. 
- """ - - def __init__(self, - in_channels=[16, 32, 64, 128, 256, 512], - start_level=2, - end_level=5, - norm_cfg=None, - use_dcn=True, - init_cfg=None): - super(DLANeck, self).__init__(init_cfg) - self.start_level = start_level - self.end_level = end_level - scales = [2**i for i in range(len(in_channels[self.start_level:]))] - self.dla_up = DLAUpsample( - start_level=self.start_level, - channels=in_channels[self.start_level:], - scales=scales, - norm_cfg=norm_cfg, - use_dcn=use_dcn) - self.ida_up = IDAUpsample( - in_channels[self.start_level], - in_channels[self.start_level:self.end_level], - [2**i for i in range(self.end_level - self.start_level)], norm_cfg, - use_dcn) - - def forward(self, x): - mlvl_features = [x[i] for i in range(len(x))] - mlvl_features = self.dla_up(mlvl_features) - outs = [] - for i in range(self.end_level - self.start_level): - outs.append(mlvl_features[i].clone()) - self.ida_up(outs, 0, len(outs)) - return [outs[-1]] - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.ConvTranspose2d): - # In order to be consistent with the source code, - # reset the ConvTranspose2d initialization parameters - m.reset_parameters() - # Simulated bilinear upsampling kernel - fill_up_weights(m) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Conv2d): - # In order to be consistent with the source code, - # reset the Conv2d initialization parameters - m.reset_parameters() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/imvoxel_neck.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/imvoxel_neck.py deleted file mode 100644 index 76dff391..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/imvoxel_neck.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule -from torch import nn - -from ..builder import NECKS - - -@NECKS.register_module() -class OutdoorImVoxelNeck(nn.Module): - """Neck for ImVoxelNet outdoor scenario. - - Args: - in_channels (int): Input channels of multi-scale feature map. - out_channels (int): Output channels of multi-scale feature map. - """ - - def __init__(self, in_channels, out_channels): - super().__init__() - self.model = nn.Sequential( - ResModule(in_channels), - ConvModule( - in_channels=in_channels, - out_channels=in_channels * 2, - kernel_size=3, - stride=(1, 1, 2), - padding=1, - conv_cfg=dict(type='Conv3d'), - norm_cfg=dict(type='BN3d'), - act_cfg=dict(type='ReLU', inplace=True)), - ResModule(in_channels * 2), - ConvModule( - in_channels=in_channels * 2, - out_channels=in_channels * 4, - kernel_size=3, - stride=(1, 1, 2), - padding=1, - conv_cfg=dict(type='Conv3d'), - norm_cfg=dict(type='BN3d'), - act_cfg=dict(type='ReLU', inplace=True)), - ResModule(in_channels * 4), - ConvModule( - in_channels=in_channels * 4, - out_channels=out_channels, - kernel_size=3, - padding=(1, 1, 0), - conv_cfg=dict(type='Conv3d'), - norm_cfg=dict(type='BN3d'), - act_cfg=dict(type='ReLU', inplace=True))) - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): of shape (N, C_in, N_x, N_y, N_z). - - Returns: - list[torch.Tensor]: of shape (N, C_out, N_y, N_x). - """ - x = self.model.forward(x) - assert x.shape[-1] == 1 - # Anchor3DHead axis order is (y, x). 
- return [x[..., 0].transpose(-1, -2)] - - def init_weights(self): - """Initialize weights of neck.""" - pass - - -class ResModule(nn.Module): - """3d residual block for ImVoxelNeck. - - Args: - n_channels (int): Input channels of a feature map. - """ - - def __init__(self, n_channels): - super().__init__() - self.conv0 = ConvModule( - in_channels=n_channels, - out_channels=n_channels, - kernel_size=3, - padding=1, - conv_cfg=dict(type='Conv3d'), - norm_cfg=dict(type='BN3d'), - act_cfg=dict(type='ReLU', inplace=True)) - self.conv1 = ConvModule( - in_channels=n_channels, - out_channels=n_channels, - kernel_size=3, - padding=1, - conv_cfg=dict(type='Conv3d'), - norm_cfg=dict(type='BN3d'), - act_cfg=None) - self.activation = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): of shape (N, C, N_x, N_y, N_z). - - Returns: - torch.Tensor: 5d feature map. - """ - identity = x - x = self.conv0(x) - x = self.conv1(x) - x = identity + x - x = self.activation(x) - return x diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/pointnet2_fp_neck.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/pointnet2_fp_neck.py deleted file mode 100644 index fcf8bc4d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/pointnet2_fp_neck.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner import BaseModule -from torch import nn as nn - -from mmdet3d.ops import PointFPModule -from ..builder import NECKS - - -@NECKS.register_module() -class PointNetFPNeck(BaseModule): - r"""PointNet FP Module used in PointRCNN. - - Refer to the `official code `_. - - .. code-block:: none - - sa_n ---------------------------------------- - | - ... --------------------------------- | - | | - sa_1 ------------- | | - | | | - sa_0 -> fp_0 -> fp_module ->fp_1 -> ... -> fp_module -> fp_n - - sa_n including sa_xyz (torch.Tensor) and sa_features (torch.Tensor) - fp_n including fp_xyz (torch.Tensor) and fp_features (torch.Tensor) - - Args: - fp_channels (tuple[tuple[int]]): Tuple of mlp channels in FP modules. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, fp_channels, init_cfg=None): - super(PointNetFPNeck, self).__init__(init_cfg=init_cfg) - - self.num_fp = len(fp_channels) - self.FP_modules = nn.ModuleList() - for cur_fp_mlps in fp_channels: - self.FP_modules.append(PointFPModule(mlp_channels=cur_fp_mlps)) - - def _extract_input(self, feat_dict): - """Extract inputs from features dictionary. - - Args: - feat_dict (dict): Feature dict from backbone, which may contain - the following keys and values: - - - sa_xyz (list[torch.Tensor]): Points of each sa module - in shape (N, 3). - - sa_features (list[torch.Tensor]): Output features of - each sa module in shape (N, M). - - Returns: - list[torch.Tensor]: Coordinates of multiple levels of points. - list[torch.Tensor]: Features of multiple levels of points. - """ - sa_xyz = feat_dict['sa_xyz'] - sa_features = feat_dict['sa_features'] - assert len(sa_xyz) == len(sa_features) - - return sa_xyz, sa_features - - def forward(self, feat_dict): - """Forward pass. - - Args: - feat_dict (dict): Feature dict from backbone. - - Returns: - dict[str, torch.Tensor]: Outputs of the Neck. - - - fp_xyz (torch.Tensor): The coordinates of fp features. 
- - fp_features (torch.Tensor): The features from the last - feature propagation layers. - """ - sa_xyz, sa_features = self._extract_input(feat_dict) - - fp_feature = sa_features[-1] - fp_xyz = sa_xyz[-1] - - for i in range(self.num_fp): - # consume the points in a bottom-up manner - fp_feature = self.FP_modules[i](sa_xyz[-(i + 2)], sa_xyz[-(i + 1)], - sa_features[-(i + 2)], fp_feature) - fp_xyz = sa_xyz[-(i + 2)] - - ret = dict(fp_xyz=fp_xyz, fp_features=fp_feature) - return ret diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/second_fpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/second_fpn.py deleted file mode 100644 index 7c0c88e6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/necks/second_fpn.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.cnn import build_conv_layer, build_norm_layer, build_upsample_layer -from mmcv.runner import BaseModule, auto_fp16 -from torch import nn as nn - -from ..builder import NECKS - - -@NECKS.register_module() -class SECONDFPN(BaseModule): - """FPN used in SECOND/PointPillars/PartA2/MVXNet. - - Args: - in_channels (list[int]): Input channels of multi-scale feature maps. - out_channels (list[int]): Output channels of feature maps. - upsample_strides (list[int]): Strides used to upsample the - feature maps. - norm_cfg (dict): Config dict of normalization layers. - upsample_cfg (dict): Config dict of upsample layers. - conv_cfg (dict): Config dict of conv layers. - use_conv_for_no_stride (bool): Whether to use conv when stride is 1. - """ - - def __init__(self, - in_channels=[128, 128, 256], - out_channels=[256, 256, 256], - upsample_strides=[1, 2, 4], - norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01), - upsample_cfg=dict(type='deconv', bias=False), - conv_cfg=dict(type='Conv2d', bias=False), - use_conv_for_no_stride=False, - init_cfg=None): - # if for GroupNorm, - # cfg is dict(type='GN', num_groups=num_groups, eps=1e-3, affine=True) - super(SECONDFPN, self).__init__(init_cfg=init_cfg) - assert len(out_channels) == len(upsample_strides) == len(in_channels) - self.in_channels = in_channels - self.out_channels = out_channels - self.fp16_enabled = False - - deblocks = [] - for i, out_channel in enumerate(out_channels): - stride = upsample_strides[i] - if stride > 1 or (stride == 1 and not use_conv_for_no_stride): - upsample_layer = build_upsample_layer( - upsample_cfg, - in_channels=in_channels[i], - out_channels=out_channel, - kernel_size=upsample_strides[i], - stride=upsample_strides[i]) - else: - stride = np.round(1 / stride).astype(np.int64) - upsample_layer = build_conv_layer( - conv_cfg, - in_channels=in_channels[i], - out_channels=out_channel, - kernel_size=stride, - stride=stride) - - deblock = nn.Sequential(upsample_layer, - build_norm_layer(norm_cfg, out_channel)[1], - nn.ReLU(inplace=True)) - deblocks.append(deblock) - self.deblocks = nn.ModuleList(deblocks) - - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='ConvTranspose2d'), - dict(type='Constant', layer='NaiveSyncBatchNorm2d', val=1.0) - ] - - @auto_fp16() - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): 4D Tensor in (N, C, H, W) shape. - - Returns: - list[torch.Tensor]: Multi-level feature maps. 
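The removed `PointNetFPNeck` walks the set-abstraction levels bottom-up and relies on `PointFPModule` to propagate features from a sparse level back to a denser one. Below is a rough standalone sketch of the interpolation at the core of that propagation, using inverse-distance weighting over the three nearest source points; the skip-feature concatenation and per-point MLP of the real module are omitted, so treat it only as an illustration of the idea.

```python
import torch


def three_nn_interpolate(target_xyz, source_xyz, source_feats):
    """Spread (B, C, M) source features onto (B, N, 3) target points by
    inverse-distance weighting over the 3 nearest source points."""
    dist = torch.cdist(target_xyz, source_xyz)                    # (B, N, M)
    knn_dist, knn_idx = dist.topk(3, dim=-1, largest=False)
    w = 1.0 / (knn_dist + 1e-8)
    w = w / w.sum(dim=-1, keepdim=True)                           # (B, N, 3)
    weights = torch.zeros_like(dist).scatter_(-1, knn_idx, w)     # dense (B, N, M)
    return torch.bmm(source_feats, weights.transpose(1, 2))       # (B, C, N)


target_xyz = torch.randn(2, 4096, 3)           # denser level (earlier SA stage)
source_xyz = torch.randn(2, 1024, 3)           # sparser level
source_feats = torch.randn(2, 256, 1024)
print(three_nn_interpolate(target_xyz, source_xyz, source_feats).shape)  # (2, 256, 4096)
```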
- """ - assert len(x) == len(self.in_channels) - ups = [deblock(x[i]) for i, deblock in enumerate(self.deblocks)] - - if len(ups) > 1: - out = torch.cat(ups, dim=1) - else: - out = ups[0] - return [out] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/__init__.py deleted file mode 100644 index 1cc4dc6e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -from .base_3droi_head import Base3DRoIHead -# from .bbox_heads import PartA2BboxHead -from .h3d_roi_head import H3DRoIHead -from .mask_heads import PointwiseSemanticHead, PrimitiveHead -from .part_aggregation_roi_head import PartAggregationROIHead -from .point_rcnn_roi_head import PointRCNNRoIHead -from .roi_extractors import Single3DRoIAwareExtractor, SingleRoIExtractor - -__all__ = [ - 'Base3DRoIHead', 'PartAggregationROIHead', 'PointwiseSemanticHead', - 'Single3DRoIAwareExtractor', 'SingleRoIExtractor', - 'H3DRoIHead', 'PrimitiveHead', 'PointRCNNRoIHead' -] - -# __all__ = [ -# 'Base3DRoIHead', 'PartAggregationROIHead', 'PointwiseSemanticHead', -# 'Single3DRoIAwareExtractor', 'PartA2BboxHead', 'SingleRoIExtractor', -# 'H3DRoIHead', 'PrimitiveHead', 'PointRCNNRoIHead' -# ] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/base_3droi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/base_3droi_head.py deleted file mode 100644 index 07bcf7f3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/base_3droi_head.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from mmcv.runner import BaseModule - - -class Base3DRoIHead(BaseModule, metaclass=ABCMeta): - """Base class for 3d RoIHeads.""" - - def __init__(self, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(Base3DRoIHead, self).__init__(init_cfg=init_cfg) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if bbox_head is not None: - self.init_bbox_head(bbox_head) - - if mask_head is not None: - self.init_mask_head(mask_roi_extractor, mask_head) - - self.init_assigner_sampler() - - @property - def with_bbox(self): - """bool: whether the RoIHead has box head""" - return hasattr(self, 'bbox_head') and self.bbox_head is not None - - @property - def with_mask(self): - """bool: whether the RoIHead has mask head""" - return hasattr(self, 'mask_head') and self.mask_head is not None - - @abstractmethod - def init_bbox_head(self): - """Initialize the box head.""" - pass - - @abstractmethod - def init_mask_head(self): - """Initialize maek head.""" - pass - - @abstractmethod - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - pass - - @abstractmethod - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - **kwargs): - """Forward function during training. - - Args: - x (dict): Contains features from the first stage. - img_metas (list[dict]): Meta info of each image. 
- proposal_list (list[dict]): Proposal information from rpn. - gt_bboxes (list[:obj:`BaseInstance3DBoxes`]): - GT bboxes of each sample. The bboxes are encapsulated - by 3D box structures. - gt_labels (list[torch.LongTensor]): GT labels of each sample. - gt_bboxes_ignore (list[torch.Tensor], optional): - Ground truth boxes to be ignored. - - Returns: - dict[str, torch.Tensor]: Losses from each head. - """ - pass - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False, - **kwargs): - """Test without augmentation.""" - pass - - def aug_test(self, x, proposal_list, img_metas, rescale=False, **kwargs): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - pass diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/__init__.py deleted file mode 100644 index 5f10ad42..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.roi_heads.bbox_heads import (BBoxHead, ConvFCBBoxHead, - DoubleConvFCBBoxHead, - Shared2FCBBoxHead, - Shared4Conv1FCBBoxHead) -from .h3d_bbox_head import H3DBboxHead -from .parta2_bbox_head import PartA2BboxHead -from .point_rcnn_bbox_head import PointRCNNBboxHead - -__all__ = [ - 'BBoxHead', 'ConvFCBBoxHead', 'Shared2FCBBoxHead', - 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', 'PartA2BboxHead', - 'H3DBboxHead', 'PointRCNNBboxHead' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/h3d_bbox_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/h3d_bbox_head.py deleted file mode 100644 index 90e60561..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/h3d_bbox_head.py +++ /dev/null @@ -1,927 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.core.bbox import DepthInstance3DBoxes -from mmdet3d.core.post_processing import aligned_3d_nms -from mmdet3d.models.builder import HEADS, build_loss -from mmdet3d.models.losses import chamfer_distance -from mmdet3d.ops import build_sa_module -from mmdet.core import build_bbox_coder, multi_apply - - -@HEADS.register_module() -class H3DBboxHead(BaseModule): - r"""Bbox head of `H3DNet `_. - - Args: - num_classes (int): The number of classes. - surface_matching_cfg (dict): Config for surface primitive matching. - line_matching_cfg (dict): Config for line primitive matching. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for encoding and - decoding boxes. - train_cfg (dict): Config for training. - test_cfg (dict): Config for testing. - gt_per_seed (int): Number of ground truth votes generated - from each seed point. - num_proposal (int): Number of proposal votes generated. - feat_channels (tuple[int]): Convolution channels of - prediction layer. - primitive_feat_refine_streams (int): The number of mlps to - refine primitive feature. 
- primitive_refine_channels (tuple[int]): Convolution channels of - prediction layer. - upper_thresh (float): Threshold for line matching. - surface_thresh (float): Threshold for surface matching. - line_thresh (float): Threshold for line matching. - conv_cfg (dict): Config of convolution in prediction layer. - norm_cfg (dict): Config of BN in prediction layer. - objectness_loss (dict): Config of objectness loss. - center_loss (dict): Config of center loss. - dir_class_loss (dict): Config of direction classification loss. - dir_res_loss (dict): Config of direction residual regression loss. - size_class_loss (dict): Config of size classification loss. - size_res_loss (dict): Config of size residual regression loss. - semantic_loss (dict): Config of point-wise semantic segmentation loss. - cues_objectness_loss (dict): Config of cues objectness loss. - cues_semantic_loss (dict): Config of cues semantic loss. - proposal_objectness_loss (dict): Config of proposal objectness - loss. - primitive_center_loss (dict): Config of primitive center regression - loss. - """ - - def __init__(self, - num_classes, - suface_matching_cfg, - line_matching_cfg, - bbox_coder, - train_cfg=None, - test_cfg=None, - gt_per_seed=1, - num_proposal=256, - feat_channels=(128, 128), - primitive_feat_refine_streams=2, - primitive_refine_channels=[128, 128, 128], - upper_thresh=100.0, - surface_thresh=0.5, - line_thresh=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=None, - center_loss=None, - dir_class_loss=None, - dir_res_loss=None, - size_class_loss=None, - size_res_loss=None, - semantic_loss=None, - cues_objectness_loss=None, - cues_semantic_loss=None, - proposal_objectness_loss=None, - primitive_center_loss=None, - init_cfg=None): - super(H3DBboxHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.gt_per_seed = gt_per_seed - self.num_proposal = num_proposal - self.with_angle = bbox_coder['with_rot'] - self.upper_thresh = upper_thresh - self.surface_thresh = surface_thresh - self.line_thresh = line_thresh - - self.objectness_loss = build_loss(objectness_loss) - self.center_loss = build_loss(center_loss) - self.dir_class_loss = build_loss(dir_class_loss) - self.dir_res_loss = build_loss(dir_res_loss) - self.size_class_loss = build_loss(size_class_loss) - self.size_res_loss = build_loss(size_res_loss) - self.semantic_loss = build_loss(semantic_loss) - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.num_sizes = self.bbox_coder.num_sizes - self.num_dir_bins = self.bbox_coder.num_dir_bins - - self.cues_objectness_loss = build_loss(cues_objectness_loss) - self.cues_semantic_loss = build_loss(cues_semantic_loss) - self.proposal_objectness_loss = build_loss(proposal_objectness_loss) - self.primitive_center_loss = build_loss(primitive_center_loss) - - assert suface_matching_cfg['mlp_channels'][-1] == \ - line_matching_cfg['mlp_channels'][-1] - - # surface center matching - self.surface_center_matcher = build_sa_module(suface_matching_cfg) - # line center matching - self.line_center_matcher = build_sa_module(line_matching_cfg) - - # Compute the matching scores - matching_feat_dims = suface_matching_cfg['mlp_channels'][-1] - self.matching_conv = ConvModule( - matching_feat_dims, - matching_feat_dims, - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True) - self.matching_pred = nn.Conv1d(matching_feat_dims, 2, 1) - - # Compute the semantic matching scores - 
self.semantic_matching_conv = ConvModule( - matching_feat_dims, - matching_feat_dims, - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True) - self.semantic_matching_pred = nn.Conv1d(matching_feat_dims, 2, 1) - - # Surface feature aggregation - self.surface_feats_aggregation = list() - for k in range(primitive_feat_refine_streams): - self.surface_feats_aggregation.append( - ConvModule( - matching_feat_dims, - matching_feat_dims, - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True)) - self.surface_feats_aggregation = nn.Sequential( - *self.surface_feats_aggregation) - - # Line feature aggregation - self.line_feats_aggregation = list() - for k in range(primitive_feat_refine_streams): - self.line_feats_aggregation.append( - ConvModule( - matching_feat_dims, - matching_feat_dims, - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True)) - self.line_feats_aggregation = nn.Sequential( - *self.line_feats_aggregation) - - # surface center(6) + line center(12) - prev_channel = 18 * matching_feat_dims - self.bbox_pred = nn.ModuleList() - for k in range(len(primitive_refine_channels)): - self.bbox_pred.append( - ConvModule( - prev_channel, - primitive_refine_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=False)) - prev_channel = primitive_refine_channels[k] - - # Final object detection - # Objectness scores (2), center residual (3), - # heading class+residual (num_heading_bin*2), size class + - # residual(num_size_cluster*4) - conv_out_channel = (2 + 3 + bbox_coder['num_dir_bins'] * 2 + - bbox_coder['num_sizes'] * 4 + self.num_classes) - self.bbox_pred.append(nn.Conv1d(prev_channel, conv_out_channel, 1)) - - def forward(self, feats_dict, sample_mod): - """Forward pass. - - Args: - feats_dict (dict): Feature dict from backbone. - sample_mod (str): Sample mode for vote aggregation layer. - valid modes are "vote", "seed" and "random". - - Returns: - dict: Predictions of vote head. 
- """ - ret_dict = {} - aggregated_points = feats_dict['aggregated_points'] - original_feature = feats_dict['aggregated_features'] - batch_size = original_feature.shape[0] - object_proposal = original_feature.shape[2] - - # Extract surface center, features and semantic predictions - z_center = feats_dict['pred_z_center'] - xy_center = feats_dict['pred_xy_center'] - z_semantic = feats_dict['sem_cls_scores_z'] - xy_semantic = feats_dict['sem_cls_scores_xy'] - z_feature = feats_dict['aggregated_features_z'] - xy_feature = feats_dict['aggregated_features_xy'] - # Extract line points and features - line_center = feats_dict['pred_line_center'] - line_feature = feats_dict['aggregated_features_line'] - - surface_center_pred = torch.cat((z_center, xy_center), dim=1) - ret_dict['surface_center_pred'] = surface_center_pred - ret_dict['surface_sem_pred'] = torch.cat((z_semantic, xy_semantic), - dim=1) - - # Extract the surface and line centers of rpn proposals - rpn_proposals = feats_dict['proposal_list'] - rpn_proposals_bbox = DepthInstance3DBoxes( - rpn_proposals.reshape(-1, 7).clone(), - box_dim=rpn_proposals.shape[-1], - with_yaw=self.with_angle, - origin=(0.5, 0.5, 0.5)) - - obj_surface_center, obj_line_center = \ - rpn_proposals_bbox.get_surface_line_center() - obj_surface_center = obj_surface_center.reshape( - batch_size, -1, 6, 3).transpose(1, 2).reshape(batch_size, -1, 3) - obj_line_center = obj_line_center.reshape(batch_size, -1, 12, - 3).transpose(1, 2).reshape( - batch_size, -1, 3) - ret_dict['surface_center_object'] = obj_surface_center - ret_dict['line_center_object'] = obj_line_center - - # aggregate primitive z and xy features to rpn proposals - surface_center_feature_pred = torch.cat((z_feature, xy_feature), dim=2) - surface_center_feature_pred = torch.cat( - (surface_center_feature_pred.new_zeros( - (batch_size, 6, surface_center_feature_pred.shape[2])), - surface_center_feature_pred), - dim=1) - - surface_xyz, surface_features, _ = self.surface_center_matcher( - surface_center_pred, - surface_center_feature_pred, - target_xyz=obj_surface_center) - - # aggregate primitive line features to rpn proposals - line_feature = torch.cat((line_feature.new_zeros( - (batch_size, 12, line_feature.shape[2])), line_feature), - dim=1) - line_xyz, line_features, _ = self.line_center_matcher( - line_center, line_feature, target_xyz=obj_line_center) - - # combine the surface and line features - combine_features = torch.cat((surface_features, line_features), dim=2) - - matching_features = self.matching_conv(combine_features) - matching_score = self.matching_pred(matching_features) - ret_dict['matching_score'] = matching_score.transpose(2, 1) - - semantic_matching_features = self.semantic_matching_conv( - combine_features) - semantic_matching_score = self.semantic_matching_pred( - semantic_matching_features) - ret_dict['semantic_matching_score'] = \ - semantic_matching_score.transpose(2, 1) - - surface_features = self.surface_feats_aggregation(surface_features) - line_features = self.line_feats_aggregation(line_features) - - # Combine all surface and line features - surface_features = surface_features.view(batch_size, -1, - object_proposal) - line_features = line_features.view(batch_size, -1, object_proposal) - - combine_feature = torch.cat((surface_features, line_features), dim=1) - - # Final bbox predictions - bbox_predictions = self.bbox_pred[0](combine_feature) - bbox_predictions += original_feature - for conv_module in self.bbox_pred[1:]: - bbox_predictions = conv_module(bbox_predictions) - - 
refine_decode_res = self.bbox_coder.split_pred( - bbox_predictions[:, :self.num_classes + 2], - bbox_predictions[:, self.num_classes + 2:], aggregated_points) - for key in refine_decode_res.keys(): - ret_dict[key + '_optimized'] = refine_decode_res[key] - return ret_dict - - def loss(self, - bbox_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - img_metas=None, - rpn_targets=None, - gt_bboxes_ignore=None): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of h3d bbox head. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - img_metas (list[dict]): Contain pcd and img's meta info. - rpn_targets (Tuple) : Targets generated by rpn head. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict: Losses of H3dnet. - """ - (vote_targets, vote_target_masks, size_class_targets, size_res_targets, - dir_class_targets, dir_res_targets, center_targets, _, mask_targets, - valid_gt_masks, objectness_targets, objectness_weights, - box_loss_weights, valid_gt_weights) = rpn_targets - - losses = {} - - # calculate refined proposal loss - refined_proposal_loss = self.get_proposal_stage_loss( - bbox_preds, - size_class_targets, - size_res_targets, - dir_class_targets, - dir_res_targets, - center_targets, - mask_targets, - objectness_targets, - objectness_weights, - box_loss_weights, - valid_gt_weights, - suffix='_optimized') - for key in refined_proposal_loss.keys(): - losses[key + '_optimized'] = refined_proposal_loss[key] - - bbox3d_optimized = self.bbox_coder.decode( - bbox_preds, suffix='_optimized') - - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - bbox_preds) - - (cues_objectness_label, cues_sem_label, proposal_objectness_label, - cues_mask, cues_match_mask, proposal_objectness_mask, - cues_matching_label, obj_surface_line_center) = targets - - # match scores for each geometric primitive - objectness_scores = bbox_preds['matching_score'] - # match scores for the semantics of primitives - objectness_scores_sem = bbox_preds['semantic_matching_score'] - - primitive_objectness_loss = self.cues_objectness_loss( - objectness_scores.transpose(2, 1), - cues_objectness_label, - weight=cues_mask, - avg_factor=cues_mask.sum() + 1e-6) - - primitive_sem_loss = self.cues_semantic_loss( - objectness_scores_sem.transpose(2, 1), - cues_sem_label, - weight=cues_mask, - avg_factor=cues_mask.sum() + 1e-6) - - objectness_scores = bbox_preds['obj_scores_optimized'] - objectness_loss_refine = self.proposal_objectness_loss( - objectness_scores.transpose(2, 1), proposal_objectness_label) - primitive_matching_loss = (objectness_loss_refine * - cues_match_mask).sum() / ( - cues_match_mask.sum() + 1e-6) * 0.5 - primitive_sem_matching_loss = ( - objectness_loss_refine * proposal_objectness_mask).sum() / ( - proposal_objectness_mask.sum() + 1e-6) * 0.5 - - # Get the object surface center here - batch_size, object_proposal = bbox3d_optimized.shape[:2] - refined_bbox = DepthInstance3DBoxes( - bbox3d_optimized.reshape(-1, 7).clone(), - box_dim=bbox3d_optimized.shape[-1], - with_yaw=self.with_angle, - origin=(0.5, 0.5, 0.5)) - - pred_obj_surface_center, pred_obj_line_center = \ - 
refined_bbox.get_surface_line_center() - pred_obj_surface_center = pred_obj_surface_center.reshape( - batch_size, -1, 6, 3).transpose(1, 2).reshape(batch_size, -1, 3) - pred_obj_line_center = pred_obj_line_center.reshape( - batch_size, -1, 12, 3).transpose(1, 2).reshape(batch_size, -1, 3) - pred_surface_line_center = torch.cat( - (pred_obj_surface_center, pred_obj_line_center), 1) - - square_dist = self.primitive_center_loss(pred_surface_line_center, - obj_surface_line_center) - - match_dist = torch.sqrt(square_dist.sum(dim=-1) + 1e-6) - primitive_centroid_reg_loss = torch.sum( - match_dist * cues_matching_label) / ( - cues_matching_label.sum() + 1e-6) - - refined_loss = dict( - primitive_objectness_loss=primitive_objectness_loss, - primitive_sem_loss=primitive_sem_loss, - primitive_matching_loss=primitive_matching_loss, - primitive_sem_matching_loss=primitive_sem_matching_loss, - primitive_centroid_reg_loss=primitive_centroid_reg_loss) - - losses.update(refined_loss) - - return losses - - def get_bboxes(self, - points, - bbox_preds, - input_metas, - rescale=False, - suffix=''): - """Generate bboxes from vote head predictions. - - Args: - points (torch.Tensor): Input points. - bbox_preds (dict): Predictions from vote head. - input_metas (list[dict]): Point cloud and image's meta info. - rescale (bool): Whether to rescale bboxes. - - Returns: - list[tuple[torch.Tensor]]: Bounding boxes, scores and labels. - """ - # decode boxes - obj_scores = F.softmax( - bbox_preds['obj_scores' + suffix], dim=-1)[..., -1] - - sem_scores = F.softmax(bbox_preds['sem_scores'], dim=-1) - - prediction_collection = {} - prediction_collection['center'] = bbox_preds['center' + suffix] - prediction_collection['dir_class'] = bbox_preds['dir_class'] - prediction_collection['dir_res'] = bbox_preds['dir_res' + suffix] - prediction_collection['size_class'] = bbox_preds['size_class'] - prediction_collection['size_res'] = bbox_preds['size_res' + suffix] - - bbox3d = self.bbox_coder.decode(prediction_collection) - - batch_size = bbox3d.shape[0] - results = list() - for b in range(batch_size): - bbox_selected, score_selected, labels = self.multiclass_nms_single( - obj_scores[b], sem_scores[b], bbox3d[b], points[b, ..., :3], - input_metas[b]) - bbox = input_metas[b]['box_type_3d']( - bbox_selected, - box_dim=bbox_selected.shape[-1], - with_yaw=self.bbox_coder.with_rot) - results.append((bbox, score_selected, labels)) - - return results - - def multiclass_nms_single(self, obj_scores, sem_scores, bbox, points, - input_meta): - """Multi-class nms in single batch. - - Args: - obj_scores (torch.Tensor): Objectness score of bounding boxes. - sem_scores (torch.Tensor): semantic class score of bounding boxes. - bbox (torch.Tensor): Predicted bounding boxes. - points (torch.Tensor): Input points. - input_meta (dict): Point cloud and image's meta info. - - Returns: - tuple[torch.Tensor]: Bounding boxes, scores and labels. 
- """ - bbox = input_meta['box_type_3d']( - bbox, - box_dim=bbox.shape[-1], - with_yaw=self.bbox_coder.with_rot, - origin=(0.5, 0.5, 0.5)) - box_indices = bbox.points_in_boxes_all(points) - - corner3d = bbox.corners - minmax_box3d = corner3d.new(torch.Size((corner3d.shape[0], 6))) - minmax_box3d[:, :3] = torch.min(corner3d, dim=1)[0] - minmax_box3d[:, 3:] = torch.max(corner3d, dim=1)[0] - - nonempty_box_mask = box_indices.T.sum(1) > 5 - - bbox_classes = torch.argmax(sem_scores, -1) - nms_selected = aligned_3d_nms(minmax_box3d[nonempty_box_mask], - obj_scores[nonempty_box_mask], - bbox_classes[nonempty_box_mask], - self.test_cfg.nms_thr) - - # filter empty boxes and boxes with low score - scores_mask = (obj_scores > self.test_cfg.score_thr) - nonempty_box_inds = torch.nonzero( - nonempty_box_mask, as_tuple=False).flatten() - nonempty_mask = torch.zeros_like(bbox_classes).scatter( - 0, nonempty_box_inds[nms_selected], 1) - selected = (nonempty_mask.bool() & scores_mask.bool()) - - if self.test_cfg.per_class_proposal: - bbox_selected, score_selected, labels = [], [], [] - for k in range(sem_scores.shape[-1]): - bbox_selected.append(bbox[selected].tensor) - score_selected.append(obj_scores[selected] * - sem_scores[selected][:, k]) - labels.append( - torch.zeros_like(bbox_classes[selected]).fill_(k)) - bbox_selected = torch.cat(bbox_selected, 0) - score_selected = torch.cat(score_selected, 0) - labels = torch.cat(labels, 0) - else: - bbox_selected = bbox[selected].tensor - score_selected = obj_scores[selected] - labels = bbox_classes[selected] - - return bbox_selected, score_selected, labels - - def get_proposal_stage_loss(self, - bbox_preds, - size_class_targets, - size_res_targets, - dir_class_targets, - dir_res_targets, - center_targets, - mask_targets, - objectness_targets, - objectness_weights, - box_loss_weights, - valid_gt_weights, - suffix=''): - """Compute loss for the aggregation module. - - Args: - bbox_preds (dict): Predictions from forward of vote head. - size_class_targets (torch.Tensor): Ground truth - size class of each prediction bounding box. - size_res_targets (torch.Tensor): Ground truth - size residual of each prediction bounding box. - dir_class_targets (torch.Tensor): Ground truth - direction class of each prediction bounding box. - dir_res_targets (torch.Tensor): Ground truth - direction residual of each prediction bounding box. - center_targets (torch.Tensor): Ground truth center - of each prediction bounding box. - mask_targets (torch.Tensor): Validation of each - prediction bounding box. - objectness_targets (torch.Tensor): Ground truth - objectness label of each prediction bounding box. - objectness_weights (torch.Tensor): Weights of objectness - loss for each prediction bounding box. - box_loss_weights (torch.Tensor): Weights of regression - loss for each prediction bounding box. - valid_gt_weights (torch.Tensor): Validation of each - ground truth bounding box. - - Returns: - dict: Losses of aggregation module. 
- """ - # calculate objectness loss - objectness_loss = self.objectness_loss( - bbox_preds['obj_scores' + suffix].transpose(2, 1), - objectness_targets, - weight=objectness_weights) - - # calculate center loss - source2target_loss, target2source_loss = self.center_loss( - bbox_preds['center' + suffix], - center_targets, - src_weight=box_loss_weights, - dst_weight=valid_gt_weights) - center_loss = source2target_loss + target2source_loss - - # calculate direction class loss - dir_class_loss = self.dir_class_loss( - bbox_preds['dir_class' + suffix].transpose(2, 1), - dir_class_targets, - weight=box_loss_weights) - - # calculate direction residual loss - batch_size, proposal_num = size_class_targets.shape[:2] - heading_label_one_hot = dir_class_targets.new_zeros( - (batch_size, proposal_num, self.num_dir_bins)) - heading_label_one_hot.scatter_(2, dir_class_targets.unsqueeze(-1), 1) - dir_res_norm = (bbox_preds['dir_res_norm' + suffix] * - heading_label_one_hot).sum(dim=-1) - dir_res_loss = self.dir_res_loss( - dir_res_norm, dir_res_targets, weight=box_loss_weights) - - # calculate size class loss - size_class_loss = self.size_class_loss( - bbox_preds['size_class' + suffix].transpose(2, 1), - size_class_targets, - weight=box_loss_weights) - - # calculate size residual loss - one_hot_size_targets = box_loss_weights.new_zeros( - (batch_size, proposal_num, self.num_sizes)) - one_hot_size_targets.scatter_(2, size_class_targets.unsqueeze(-1), 1) - one_hot_size_targets_expand = one_hot_size_targets.unsqueeze( - -1).repeat(1, 1, 1, 3) - size_residual_norm = (bbox_preds['size_res_norm' + suffix] * - one_hot_size_targets_expand).sum(dim=2) - box_loss_weights_expand = box_loss_weights.unsqueeze(-1).repeat( - 1, 1, 3) - size_res_loss = self.size_res_loss( - size_residual_norm, - size_res_targets, - weight=box_loss_weights_expand) - - # calculate semantic loss - semantic_loss = self.semantic_loss( - bbox_preds['sem_scores' + suffix].transpose(2, 1), - mask_targets, - weight=box_loss_weights) - - losses = dict( - objectness_loss=objectness_loss, - semantic_loss=semantic_loss, - center_loss=center_loss, - dir_class_loss=dir_class_loss, - dir_res_loss=dir_res_loss, - size_class_loss=size_class_loss, - size_res_loss=size_res_loss) - - return losses - - def get_targets(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - bbox_preds=None): - """Generate targets of proposal module. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - pts_semantic_mask (list[torch.Tensor]): Point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): Point-wise instance - label of each batch. - bbox_preds (torch.Tensor): Bounding box predictions of vote head. - - Returns: - tuple[torch.Tensor]: Targets of proposal module. 
- """ - # find empty example - valid_gt_masks = list() - gt_num = list() - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - valid_gt_masks.append(gt_labels_3d[index].new_zeros(1)) - gt_num.append(1) - else: - valid_gt_masks.append(gt_labels_3d[index].new_ones( - gt_labels_3d[index].shape)) - gt_num.append(gt_labels_3d[index].shape[0]) - - if pts_semantic_mask is None: - pts_semantic_mask = [None for i in range(len(gt_labels_3d))] - pts_instance_mask = [None for i in range(len(gt_labels_3d))] - - aggregated_points = [ - bbox_preds['aggregated_points'][i] - for i in range(len(gt_labels_3d)) - ] - - surface_center_pred = [ - bbox_preds['surface_center_pred'][i] - for i in range(len(gt_labels_3d)) - ] - - line_center_pred = [ - bbox_preds['pred_line_center'][i] - for i in range(len(gt_labels_3d)) - ] - - surface_center_object = [ - bbox_preds['surface_center_object'][i] - for i in range(len(gt_labels_3d)) - ] - - line_center_object = [ - bbox_preds['line_center_object'][i] - for i in range(len(gt_labels_3d)) - ] - - surface_sem_pred = [ - bbox_preds['surface_sem_pred'][i] - for i in range(len(gt_labels_3d)) - ] - - line_sem_pred = [ - bbox_preds['sem_cls_scores_line'][i] - for i in range(len(gt_labels_3d)) - ] - - (cues_objectness_label, cues_sem_label, proposal_objectness_label, - cues_mask, cues_match_mask, proposal_objectness_mask, - cues_matching_label, obj_surface_line_center) = multi_apply( - self.get_targets_single, points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, aggregated_points, - surface_center_pred, line_center_pred, surface_center_object, - line_center_object, surface_sem_pred, line_sem_pred) - - cues_objectness_label = torch.stack(cues_objectness_label) - cues_sem_label = torch.stack(cues_sem_label) - proposal_objectness_label = torch.stack(proposal_objectness_label) - cues_mask = torch.stack(cues_mask) - cues_match_mask = torch.stack(cues_match_mask) - proposal_objectness_mask = torch.stack(proposal_objectness_mask) - cues_matching_label = torch.stack(cues_matching_label) - obj_surface_line_center = torch.stack(obj_surface_line_center) - - return (cues_objectness_label, cues_sem_label, - proposal_objectness_label, cues_mask, cues_match_mask, - proposal_objectness_mask, cues_matching_label, - obj_surface_line_center) - - def get_targets_single(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - aggregated_points=None, - pred_surface_center=None, - pred_line_center=None, - pred_obj_surface_center=None, - pred_obj_line_center=None, - pred_surface_sem=None, - pred_line_sem=None): - """Generate targets for primitive cues for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. - gt_labels_3d (torch.Tensor): Labels of each batch. - pts_semantic_mask (torch.Tensor): Point-wise semantic - label of each batch. - pts_instance_mask (torch.Tensor): Point-wise instance - label of each batch. - aggregated_points (torch.Tensor): Aggregated points from - vote aggregation layer. - pred_surface_center (torch.Tensor): Prediction of surface center. - pred_line_center (torch.Tensor): Prediction of line center. 
- pred_obj_surface_center (torch.Tensor): Objectness prediction - of surface center. - pred_obj_line_center (torch.Tensor): Objectness prediction of - line center. - pred_surface_sem (torch.Tensor): Semantic prediction of - surface center. - pred_line_sem (torch.Tensor): Semantic prediction of line center. - Returns: - tuple[torch.Tensor]: Targets for primitive cues. - """ - device = points.device - gt_bboxes_3d = gt_bboxes_3d.to(device) - num_proposals = aggregated_points.shape[0] - gt_center = gt_bboxes_3d.gravity_center - - dist1, dist2, ind1, _ = chamfer_distance( - aggregated_points.unsqueeze(0), - gt_center.unsqueeze(0), - reduction='none') - # Set assignment - object_assignment = ind1.squeeze(0) - - # Generate objectness label and mask - # objectness_label: 1 if pred object center is within - # self.train_cfg['near_threshold'] of any GT object - # objectness_mask: 0 if pred object center is in gray - # zone (DONOTCARE), 1 otherwise - euclidean_dist1 = torch.sqrt(dist1.squeeze(0) + 1e-6) - proposal_objectness_label = euclidean_dist1.new_zeros( - num_proposals, dtype=torch.long) - proposal_objectness_mask = euclidean_dist1.new_zeros(num_proposals) - - gt_sem = gt_labels_3d[object_assignment] - - obj_surface_center, obj_line_center = \ - gt_bboxes_3d.get_surface_line_center() - obj_surface_center = obj_surface_center.reshape(-1, 6, - 3).transpose(0, 1) - obj_line_center = obj_line_center.reshape(-1, 12, 3).transpose(0, 1) - obj_surface_center = obj_surface_center[:, object_assignment].reshape( - 1, -1, 3) - obj_line_center = obj_line_center[:, - object_assignment].reshape(1, -1, 3) - - surface_sem = torch.argmax(pred_surface_sem, dim=1).float() - line_sem = torch.argmax(pred_line_sem, dim=1).float() - - dist_surface, _, surface_ind, _ = chamfer_distance( - obj_surface_center, - pred_surface_center.unsqueeze(0), - reduction='none') - dist_line, _, line_ind, _ = chamfer_distance( - obj_line_center, pred_line_center.unsqueeze(0), reduction='none') - - surface_sel = pred_surface_center[surface_ind.squeeze(0)] - line_sel = pred_line_center[line_ind.squeeze(0)] - surface_sel_sem = surface_sem[surface_ind.squeeze(0)] - line_sel_sem = line_sem[line_ind.squeeze(0)] - - surface_sel_sem_gt = gt_sem.repeat(6).float() - line_sel_sem_gt = gt_sem.repeat(12).float() - - euclidean_dist_surface = torch.sqrt(dist_surface.squeeze(0) + 1e-6) - euclidean_dist_line = torch.sqrt(dist_line.squeeze(0) + 1e-6) - objectness_label_surface = euclidean_dist_line.new_zeros( - num_proposals * 6, dtype=torch.long) - objectness_mask_surface = euclidean_dist_line.new_zeros(num_proposals * - 6) - objectness_label_line = euclidean_dist_line.new_zeros( - num_proposals * 12, dtype=torch.long) - objectness_mask_line = euclidean_dist_line.new_zeros(num_proposals * - 12) - objectness_label_surface_sem = euclidean_dist_line.new_zeros( - num_proposals * 6, dtype=torch.long) - objectness_label_line_sem = euclidean_dist_line.new_zeros( - num_proposals * 12, dtype=torch.long) - - euclidean_dist_obj_surface = torch.sqrt(( - (pred_obj_surface_center - surface_sel)**2).sum(dim=-1) + 1e-6) - euclidean_dist_obj_line = torch.sqrt( - torch.sum((pred_obj_line_center - line_sel)**2, dim=-1) + 1e-6) - - # Objectness score just with centers - proposal_objectness_label[ - euclidean_dist1 < self.train_cfg['near_threshold']] = 1 - proposal_objectness_mask[ - euclidean_dist1 < self.train_cfg['near_threshold']] = 1 - proposal_objectness_mask[ - euclidean_dist1 > self.train_cfg['far_threshold']] = 1 - - objectness_label_surface[ - 
(euclidean_dist_obj_surface < - self.train_cfg['label_surface_threshold']) * - (euclidean_dist_surface < - self.train_cfg['mask_surface_threshold'])] = 1 - objectness_label_surface_sem[ - (euclidean_dist_obj_surface < - self.train_cfg['label_surface_threshold']) * - (euclidean_dist_surface < self.train_cfg['mask_surface_threshold']) - * (surface_sel_sem == surface_sel_sem_gt)] = 1 - - objectness_label_line[ - (euclidean_dist_obj_line < self.train_cfg['label_line_threshold']) - * - (euclidean_dist_line < self.train_cfg['mask_line_threshold'])] = 1 - objectness_label_line_sem[ - (euclidean_dist_obj_line < self.train_cfg['label_line_threshold']) - * (euclidean_dist_line < self.train_cfg['mask_line_threshold']) * - (line_sel_sem == line_sel_sem_gt)] = 1 - - objectness_label_surface_obj = proposal_objectness_label.repeat(6) - objectness_mask_surface_obj = proposal_objectness_mask.repeat(6) - objectness_label_line_obj = proposal_objectness_label.repeat(12) - objectness_mask_line_obj = proposal_objectness_mask.repeat(12) - - objectness_mask_surface = objectness_mask_surface_obj - objectness_mask_line = objectness_mask_line_obj - - cues_objectness_label = torch.cat( - (objectness_label_surface, objectness_label_line), 0) - cues_sem_label = torch.cat( - (objectness_label_surface_sem, objectness_label_line_sem), 0) - cues_mask = torch.cat((objectness_mask_surface, objectness_mask_line), - 0) - - objectness_label_surface *= objectness_label_surface_obj - objectness_label_line *= objectness_label_line_obj - cues_matching_label = torch.cat( - (objectness_label_surface, objectness_label_line), 0) - - objectness_label_surface_sem *= objectness_label_surface_obj - objectness_label_line_sem *= objectness_label_line_obj - - cues_match_mask = (torch.sum( - cues_objectness_label.view(18, num_proposals), dim=0) >= - 1).float() - - obj_surface_line_center = torch.cat( - (obj_surface_center, obj_line_center), 1).squeeze(0) - - return (cues_objectness_label, cues_sem_label, - proposal_objectness_label, cues_mask, cues_match_mask, - proposal_objectness_mask, cues_matching_label, - obj_surface_line_center) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/parta2_bbox_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/parta2_bbox_head.py deleted file mode 100644 index 0d27f768..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/parta2_bbox_head.py +++ /dev/null @@ -1,631 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.cnn import ConvModule, normal_init - -from mmdet3d.ops.spconv import IS_SPCONV2_AVAILABLE - -if IS_SPCONV2_AVAILABLE: - from spconv.pytorch import (SparseConvTensor, SparseMaxPool3d, - SparseSequential) -else: - from mmcv.ops import SparseConvTensor, SparseMaxPool3d, SparseSequential - -from mmcv.runner import BaseModule -from torch import nn as nn - -from mmdet3d.core.bbox.structures import (LiDARInstance3DBoxes, - rotation_3d_in_axis, xywhr2xyxyr) -from mmdet3d.core.post_processing import nms_bev, nms_normal_bev -from mmdet3d.models.builder import HEADS, build_loss -from mmdet3d.ops import make_sparse_convmodule -from mmdet.core import build_bbox_coder, multi_apply - - -@HEADS.register_module() -class PartA2BboxHead(BaseModule): - """PartA2 RoI head. - - Args: - num_classes (int): The number of classes to prediction. 
- seg_in_channels (int): Input channels of segmentation - convolution layer. - part_in_channels (int): Input channels of part convolution layer. - seg_conv_channels (list(int)): Out channels of each - segmentation convolution layer. - part_conv_channels (list(int)): Out channels of each - part convolution layer. - merge_conv_channels (list(int)): Out channels of each - feature merged convolution layer. - down_conv_channels (list(int)): Out channels of each - downsampled convolution layer. - shared_fc_channels (list(int)): Out channels of each shared fc layer. - cls_channels (list(int)): Out channels of each classification layer. - reg_channels (list(int)): Out channels of each regression layer. - dropout_ratio (float): Dropout ratio of classification and - regression layers. - roi_feat_size (int): The size of pooled roi features. - with_corner_loss (bool): Whether to use corner loss or not. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for box head. - conv_cfg (dict): Config dict of convolutional layers - norm_cfg (dict): Config dict of normalization layers - loss_bbox (dict): Config dict of box regression loss. - loss_cls (dict): Config dict of classifacation loss. - """ - - def __init__(self, - num_classes, - seg_in_channels, - part_in_channels, - seg_conv_channels=None, - part_conv_channels=None, - merge_conv_channels=None, - down_conv_channels=None, - shared_fc_channels=None, - cls_channels=None, - reg_channels=None, - dropout_ratio=0.1, - roi_feat_size=14, - with_corner_loss=True, - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='none', - loss_weight=1.0), - init_cfg=None): - super(PartA2BboxHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.with_corner_loss = with_corner_loss - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_bbox = build_loss(loss_bbox) - self.loss_cls = build_loss(loss_cls) - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - - assert down_conv_channels[-1] == shared_fc_channels[0] - - # init layers - part_channel_last = part_in_channels - part_conv = [] - for i, channel in enumerate(part_conv_channels): - part_conv.append( - make_sparse_convmodule( - part_channel_last, - channel, - 3, - padding=1, - norm_cfg=norm_cfg, - indice_key=f'rcnn_part{i}', - conv_type='SubMConv3d')) - part_channel_last = channel - self.part_conv = SparseSequential(*part_conv) - - seg_channel_last = seg_in_channels - seg_conv = [] - for i, channel in enumerate(seg_conv_channels): - seg_conv.append( - make_sparse_convmodule( - seg_channel_last, - channel, - 3, - padding=1, - norm_cfg=norm_cfg, - indice_key=f'rcnn_seg{i}', - conv_type='SubMConv3d')) - seg_channel_last = channel - self.seg_conv = SparseSequential(*seg_conv) - - self.conv_down = SparseSequential() - - merge_conv_channel_last = part_channel_last + seg_channel_last - merge_conv = [] - for i, channel in enumerate(merge_conv_channels): - merge_conv.append( - make_sparse_convmodule( - merge_conv_channel_last, - channel, - 3, - padding=1, - norm_cfg=norm_cfg, - indice_key='rcnn_down0')) - merge_conv_channel_last = channel - - down_conv_channel_last = merge_conv_channel_last - conv_down = [] - for i, channel in enumerate(down_conv_channels): - conv_down.append( - make_sparse_convmodule( - down_conv_channel_last, - channel, - 3, - padding=1, - 
norm_cfg=norm_cfg, - indice_key='rcnn_down1')) - down_conv_channel_last = channel - - self.conv_down.add_module('merge_conv', SparseSequential(*merge_conv)) - self.conv_down.add_module('max_pool3d', - SparseMaxPool3d(kernel_size=2, stride=2)) - self.conv_down.add_module('down_conv', SparseSequential(*conv_down)) - - shared_fc_list = [] - pool_size = roi_feat_size // 2 - pre_channel = shared_fc_channels[0] * pool_size**3 - for k in range(1, len(shared_fc_channels)): - shared_fc_list.append( - ConvModule( - pre_channel, - shared_fc_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - inplace=True)) - pre_channel = shared_fc_channels[k] - - if k != len(shared_fc_channels) - 1 and dropout_ratio > 0: - shared_fc_list.append(nn.Dropout(dropout_ratio)) - - self.shared_fc = nn.Sequential(*shared_fc_list) - - # Classification layer - channel_in = shared_fc_channels[-1] - cls_channel = 1 - cls_layers = [] - pre_channel = channel_in - for k in range(0, len(cls_channels)): - cls_layers.append( - ConvModule( - pre_channel, - cls_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - inplace=True)) - pre_channel = cls_channels[k] - cls_layers.append( - ConvModule( - pre_channel, - cls_channel, - 1, - padding=0, - conv_cfg=conv_cfg, - act_cfg=None)) - if dropout_ratio >= 0: - cls_layers.insert(1, nn.Dropout(dropout_ratio)) - - self.conv_cls = nn.Sequential(*cls_layers) - - # Regression layer - reg_layers = [] - pre_channel = channel_in - for k in range(0, len(reg_channels)): - reg_layers.append( - ConvModule( - pre_channel, - reg_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - inplace=True)) - pre_channel = reg_channels[k] - reg_layers.append( - ConvModule( - pre_channel, - self.bbox_coder.code_size, - 1, - padding=0, - conv_cfg=conv_cfg, - act_cfg=None)) - if dropout_ratio >= 0: - reg_layers.insert(1, nn.Dropout(dropout_ratio)) - - self.conv_reg = nn.Sequential(*reg_layers) - - if init_cfg is None: - self.init_cfg = dict( - type='Xavier', - layer=['Conv2d', 'Conv1d'], - distribution='uniform') - - def init_weights(self): - super().init_weights() - normal_init(self.conv_reg[-1].conv, mean=0, std=0.001) - - def forward(self, seg_feats, part_feats): - """Forward pass. - - Args: - seg_feats (torch.Tensor): Point-wise semantic features. - part_feats (torch.Tensor): Point-wise part prediction features. - - Returns: - tuple[torch.Tensor]: Score of class and bbox predictions. 
- """ - # (B * N, out_x, out_y, out_z, 4) - rcnn_batch_size = part_feats.shape[0] - - # transform to sparse tensors - sparse_shape = part_feats.shape[1:4] - # (non_empty_num, 4) ==> [bs_idx, x_idx, y_idx, z_idx] - sparse_idx = part_feats.sum(dim=-1).nonzero(as_tuple=False) - - part_features = part_feats[sparse_idx[:, 0], sparse_idx[:, 1], - sparse_idx[:, 2], sparse_idx[:, 3]] - seg_features = seg_feats[sparse_idx[:, 0], sparse_idx[:, 1], - sparse_idx[:, 2], sparse_idx[:, 3]] - coords = sparse_idx.int().contiguous() - part_features = SparseConvTensor(part_features, coords, sparse_shape, - rcnn_batch_size) - seg_features = SparseConvTensor(seg_features, coords, sparse_shape, - rcnn_batch_size) - - # forward rcnn network - x_part = self.part_conv(part_features) - x_rpn = self.seg_conv(seg_features) - - merged_feature = torch.cat((x_rpn.features, x_part.features), - dim=1) # (N, C) - shared_feature = SparseConvTensor(merged_feature, coords, sparse_shape, - rcnn_batch_size) - - x = self.conv_down(shared_feature) - - shared_feature = x.dense().view(rcnn_batch_size, -1, 1) - - shared_feature = self.shared_fc(shared_feature) - - cls_score = self.conv_cls(shared_feature).transpose( - 1, 2).contiguous().squeeze(dim=1) # (B, 1) - bbox_pred = self.conv_reg(shared_feature).transpose( - 1, 2).contiguous().squeeze(dim=1) # (B, C) - - return cls_score, bbox_pred - - def loss(self, cls_score, bbox_pred, rois, labels, bbox_targets, - pos_gt_bboxes, reg_mask, label_weights, bbox_weights): - """Computing losses. - - Args: - cls_score (torch.Tensor): Scores of each roi. - bbox_pred (torch.Tensor): Predictions of bboxes. - rois (torch.Tensor): Roi bboxes. - labels (torch.Tensor): Labels of class. - bbox_targets (torch.Tensor): Target of positive bboxes. - pos_gt_bboxes (torch.Tensor): Ground truths of positive bboxes. - reg_mask (torch.Tensor): Mask for positive bboxes. - label_weights (torch.Tensor): Weights of class loss. - bbox_weights (torch.Tensor): Weights of bbox loss. - - Returns: - dict: Computed losses. - - - loss_cls (torch.Tensor): Loss of classes. - - loss_bbox (torch.Tensor): Loss of bboxes. - - loss_corner (torch.Tensor): Loss of corners. 
- """ - losses = dict() - rcnn_batch_size = cls_score.shape[0] - - # calculate class loss - cls_flat = cls_score.view(-1) - loss_cls = self.loss_cls(cls_flat, labels, label_weights) - losses['loss_cls'] = loss_cls - - # calculate regression loss - code_size = self.bbox_coder.code_size - pos_inds = (reg_mask > 0) - if pos_inds.any() == 0: - # fake a part loss - losses['loss_bbox'] = loss_cls.new_tensor(0) - if self.with_corner_loss: - losses['loss_corner'] = loss_cls.new_tensor(0) - else: - pos_bbox_pred = bbox_pred.view(rcnn_batch_size, -1)[pos_inds] - bbox_weights_flat = bbox_weights[pos_inds].view(-1, 1).repeat( - 1, pos_bbox_pred.shape[-1]) - loss_bbox = self.loss_bbox( - pos_bbox_pred.unsqueeze(dim=0), bbox_targets.unsqueeze(dim=0), - bbox_weights_flat.unsqueeze(dim=0)) - losses['loss_bbox'] = loss_bbox - - if self.with_corner_loss: - pos_roi_boxes3d = rois[..., 1:].view(-1, code_size)[pos_inds] - pos_roi_boxes3d = pos_roi_boxes3d.view(-1, code_size) - batch_anchors = pos_roi_boxes3d.clone().detach() - pos_rois_rotation = pos_roi_boxes3d[..., 6].view(-1) - roi_xyz = pos_roi_boxes3d[..., 0:3].view(-1, 3) - batch_anchors[..., 0:3] = 0 - # decode boxes - pred_boxes3d = self.bbox_coder.decode( - batch_anchors, - pos_bbox_pred.view(-1, code_size)).view(-1, code_size) - - pred_boxes3d[..., 0:3] = rotation_3d_in_axis( - pred_boxes3d[..., 0:3].unsqueeze(1), - pos_rois_rotation, - axis=2).squeeze(1) - - pred_boxes3d[:, 0:3] += roi_xyz - - # calculate corner loss - loss_corner = self.get_corner_loss_lidar( - pred_boxes3d, pos_gt_bboxes) - losses['loss_corner'] = loss_corner - - return losses - - def get_targets(self, sampling_results, rcnn_train_cfg, concat=True): - """Generate targets. - - Args: - sampling_results (list[:obj:`SamplingResult`]): - Sampled results from rois. - rcnn_train_cfg (:obj:`ConfigDict`): Training config of rcnn. - concat (bool): Whether to concatenate targets between batches. - - Returns: - tuple[torch.Tensor]: Targets of boxes and class prediction. - """ - pos_bboxes_list = [res.pos_bboxes for res in sampling_results] - pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results] - iou_list = [res.iou for res in sampling_results] - targets = multi_apply( - self._get_target_single, - pos_bboxes_list, - pos_gt_bboxes_list, - iou_list, - cfg=rcnn_train_cfg) - - (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) = targets - - if concat: - label = torch.cat(label, 0) - bbox_targets = torch.cat(bbox_targets, 0) - pos_gt_bboxes = torch.cat(pos_gt_bboxes, 0) - reg_mask = torch.cat(reg_mask, 0) - - label_weights = torch.cat(label_weights, 0) - label_weights /= torch.clamp(label_weights.sum(), min=1.0) - - bbox_weights = torch.cat(bbox_weights, 0) - bbox_weights /= torch.clamp(bbox_weights.sum(), min=1.0) - - return (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - - def _get_target_single(self, pos_bboxes, pos_gt_bboxes, ious, cfg): - """Generate training targets for a single sample. - - Args: - pos_bboxes (torch.Tensor): Positive boxes with shape - (N, 7). - pos_gt_bboxes (torch.Tensor): Ground truth boxes with shape - (M, 7). - ious (torch.Tensor): IoU between `pos_bboxes` and `pos_gt_bboxes` - in shape (N, M). - cfg (dict): Training configs. - - Returns: - tuple[torch.Tensor]: Target for positive boxes. 
- (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - """ - cls_pos_mask = ious > cfg.cls_pos_thr - cls_neg_mask = ious < cfg.cls_neg_thr - interval_mask = (cls_pos_mask == 0) & (cls_neg_mask == 0) - - # iou regression target - label = (cls_pos_mask > 0).float() - label[interval_mask] = ious[interval_mask] * 2 - 0.5 - # label weights - label_weights = (label >= 0).float() - - # box regression target - reg_mask = pos_bboxes.new_zeros(ious.size(0)).long() - reg_mask[0:pos_gt_bboxes.size(0)] = 1 - bbox_weights = (reg_mask > 0).float() - if reg_mask.bool().any(): - pos_gt_bboxes_ct = pos_gt_bboxes.clone().detach() - roi_center = pos_bboxes[..., 0:3] - roi_ry = pos_bboxes[..., 6] % (2 * np.pi) - - # canonical transformation - pos_gt_bboxes_ct[..., 0:3] -= roi_center - pos_gt_bboxes_ct[..., 6] -= roi_ry - pos_gt_bboxes_ct[..., 0:3] = rotation_3d_in_axis( - pos_gt_bboxes_ct[..., 0:3].unsqueeze(1), -roi_ry, - axis=2).squeeze(1) - - # flip orientation if rois have opposite orientation - ry_label = pos_gt_bboxes_ct[..., 6] % (2 * np.pi) # 0 ~ 2pi - opposite_flag = (ry_label > np.pi * 0.5) & (ry_label < np.pi * 1.5) - ry_label[opposite_flag] = (ry_label[opposite_flag] + np.pi) % ( - 2 * np.pi) # (0 ~ pi/2, 3pi/2 ~ 2pi) - flag = ry_label > np.pi - ry_label[flag] = ry_label[flag] - np.pi * 2 # (-pi/2, pi/2) - ry_label = torch.clamp(ry_label, min=-np.pi / 2, max=np.pi / 2) - pos_gt_bboxes_ct[..., 6] = ry_label - - rois_anchor = pos_bboxes.clone().detach() - rois_anchor[:, 0:3] = 0 - rois_anchor[:, 6] = 0 - bbox_targets = self.bbox_coder.encode(rois_anchor, - pos_gt_bboxes_ct) - else: - # no fg bbox - bbox_targets = pos_gt_bboxes.new_empty((0, 7)) - - return (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - - def get_corner_loss_lidar(self, pred_bbox3d, gt_bbox3d, delta=1.0): - """Calculate corner loss of given boxes. - - Args: - pred_bbox3d (torch.FloatTensor): Predicted boxes in shape (N, 7). - gt_bbox3d (torch.FloatTensor): Ground truth boxes in shape (N, 7). - delta (float, optional): huber loss threshold. Defaults to 1.0 - - Returns: - torch.FloatTensor: Calculated corner loss in shape (N). - """ - assert pred_bbox3d.shape[0] == gt_bbox3d.shape[0] - - # This is a little bit hack here because we assume the box for - # Part-A2 is in LiDAR coordinates - gt_boxes_structure = LiDARInstance3DBoxes(gt_bbox3d) - pred_box_corners = LiDARInstance3DBoxes(pred_bbox3d).corners - gt_box_corners = gt_boxes_structure.corners - - # This flip only changes the heading direction of GT boxes - gt_bbox3d_flip = gt_boxes_structure.clone() - gt_bbox3d_flip.tensor[:, 6] += np.pi - gt_box_corners_flip = gt_bbox3d_flip.corners - - corner_dist = torch.min( - torch.norm(pred_box_corners - gt_box_corners, dim=2), - torch.norm(pred_box_corners - gt_box_corners_flip, - dim=2)) # (N, 8) - # huber loss - abs_error = corner_dist.abs() - quadratic = abs_error.clamp(max=delta) - linear = (abs_error - quadratic) - corner_loss = 0.5 * quadratic**2 + delta * linear - - return corner_loss.mean(dim=1) - - def get_bboxes(self, - rois, - cls_score, - bbox_pred, - class_labels, - class_pred, - img_metas, - cfg=None): - """Generate bboxes from bbox head predictions. - - Args: - rois (torch.Tensor): Roi bounding boxes. - cls_score (torch.Tensor): Scores of bounding boxes. - bbox_pred (torch.Tensor): Bounding boxes predictions - class_labels (torch.Tensor): Label of classes - class_pred (torch.Tensor): Score for nms. - img_metas (list[dict]): Point cloud and image's meta info. 
- cfg (:obj:`ConfigDict`): Testing config. - - Returns: - list[tuple]: Decoded bbox, scores and labels after nms. - """ - roi_batch_id = rois[..., 0] - roi_boxes = rois[..., 1:] # boxes without batch id - batch_size = int(roi_batch_id.max().item() + 1) - - # decode boxes - roi_ry = roi_boxes[..., 6].view(-1) - roi_xyz = roi_boxes[..., 0:3].view(-1, 3) - local_roi_boxes = roi_boxes.clone().detach() - local_roi_boxes[..., 0:3] = 0 - rcnn_boxes3d = self.bbox_coder.decode(local_roi_boxes, bbox_pred) - rcnn_boxes3d[..., 0:3] = rotation_3d_in_axis( - rcnn_boxes3d[..., 0:3].unsqueeze(1), roi_ry, axis=2).squeeze(1) - rcnn_boxes3d[:, 0:3] += roi_xyz - - # post processing - result_list = [] - for batch_id in range(batch_size): - cur_class_labels = class_labels[batch_id] - cur_cls_score = cls_score[roi_batch_id == batch_id].view(-1) - - cur_box_prob = class_pred[batch_id] - cur_rcnn_boxes3d = rcnn_boxes3d[roi_batch_id == batch_id] - keep = self.multi_class_nms(cur_box_prob, cur_rcnn_boxes3d, - cfg.score_thr, cfg.nms_thr, - img_metas[batch_id], - cfg.use_rotate_nms) - selected_bboxes = cur_rcnn_boxes3d[keep] - selected_label_preds = cur_class_labels[keep] - selected_scores = cur_cls_score[keep] - - result_list.append( - (img_metas[batch_id]['box_type_3d'](selected_bboxes, - self.bbox_coder.code_size), - selected_scores, selected_label_preds)) - return result_list - - def multi_class_nms(self, - box_probs, - box_preds, - score_thr, - nms_thr, - input_meta, - use_rotate_nms=True): - """Multi-class NMS for box head. - - Note: - This function has large overlap with the `box3d_multiclass_nms` - implemented in `mmdet3d.core.post_processing`. We are considering - merging these two functions in the future. - - Args: - box_probs (torch.Tensor): Predicted boxes probabitilies in - shape (N,). - box_preds (torch.Tensor): Predicted boxes in shape (N, 7+C). - score_thr (float): Threshold of scores. - nms_thr (float): Threshold for NMS. - input_meta (dict): Meta information of the current sample. - use_rotate_nms (bool, optional): Whether to use rotated nms. - Defaults to True. - - Returns: - torch.Tensor: Selected indices. 
- """ - if use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - - assert box_probs.shape[ - 1] == self.num_classes, f'box_probs shape: {str(box_probs.shape)}' - selected_list = [] - selected_labels = [] - boxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - box_preds, self.bbox_coder.code_size).bev) - - score_thresh = score_thr if isinstance( - score_thr, list) else [score_thr for x in range(self.num_classes)] - nms_thresh = nms_thr if isinstance( - nms_thr, list) else [nms_thr for x in range(self.num_classes)] - for k in range(0, self.num_classes): - class_scores_keep = box_probs[:, k] >= score_thresh[k] - - if class_scores_keep.int().sum() > 0: - original_idxs = class_scores_keep.nonzero( - as_tuple=False).view(-1) - cur_boxes_for_nms = boxes_for_nms[class_scores_keep] - cur_rank_scores = box_probs[class_scores_keep, k] - - cur_selected = nms_func(cur_boxes_for_nms, cur_rank_scores, - nms_thresh[k]) - - if cur_selected.shape[0] == 0: - continue - selected_list.append(original_idxs[cur_selected]) - selected_labels.append( - torch.full([cur_selected.shape[0]], - k + 1, - dtype=torch.int64, - device=box_preds.device)) - - keep = torch.cat( - selected_list, dim=0) if len(selected_list) > 0 else [] - return keep diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/point_rcnn_bbox_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/point_rcnn_bbox_head.py deleted file mode 100644 index bd77a526..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/bbox_heads/point_rcnn_bbox_head.py +++ /dev/null @@ -1,577 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.cnn import ConvModule, normal_init -from mmcv.cnn.bricks import build_conv_layer -from mmcv.runner import BaseModule -from torch import nn as nn - -from mmdet3d.core.bbox.structures import (LiDARInstance3DBoxes, - rotation_3d_in_axis, xywhr2xyxyr) -from mmdet3d.core.post_processing import nms_bev, nms_normal_bev -from mmdet3d.models.builder import HEADS, build_loss -from mmdet3d.ops import build_sa_module -from mmdet.core import build_bbox_coder, multi_apply - - -@HEADS.register_module() -class PointRCNNBboxHead(BaseModule): - """PointRCNN RoI Bbox head. - - Args: - num_classes (int): The number of classes to prediction. - in_channels (int): Input channels of point features. - mlp_channels (list[int]): the number of mlp channels - pred_layer_cfg (dict, optional): Config of classfication and - regression prediction layers. Defaults to None. - num_points (tuple, optional): The number of points which each SA - module samples. Defaults to (128, 32, -1). - radius (tuple, optional): Sampling radius of each SA module. - Defaults to (0.2, 0.4, 100). - num_samples (tuple, optional): The number of samples for ball query - in each SA module. Defaults to (64, 64, 64). - sa_channels (tuple, optional): Out channels of each mlp in SA module. - Defaults to ((128, 128, 128), (128, 128, 256), (256, 256, 512)). - bbox_coder (dict, optional): Config dict of box coders. - Defaults to dict(type='DeltaXYZWLHRBBoxCoder'). - sa_cfg (dict, optional): Config of set abstraction module, which may - contain the following keys and values: - - - pool_mod (str): Pool method ('max' or 'avg') for SA modules. - - use_xyz (bool): Whether to use xyz as a part of features. 
- - normalize_xyz (bool): Whether to normalize xyz with radii in - each SA module. - Defaults to dict(type='PointSAModule', pool_mod='max', - use_xyz=True). - conv_cfg (dict, optional): Config dict of convolutional layers. - Defaults to dict(type='Conv1d'). - norm_cfg (dict, optional): Config dict of normalization layers. - Defaults to dict(type='BN1d'). - act_cfg (dict, optional): Config dict of activation layers. - Defaults to dict(type='ReLU'). - bias (str, optional): Type of bias. Defaults to 'auto'. - loss_bbox (dict, optional): Config of regression loss function. - Defaults to dict(type='SmoothL1Loss', beta=1.0 / 9.0, - reduction='sum', loss_weight=1.0). - loss_cls (dict, optional): Config of classification loss function. - Defaults to dict(type='CrossEntropyLoss', use_sigmoid=True, - reduction='sum', loss_weight=1.0). - with_corner_loss (bool, optional): Whether using corner loss. - Defaults to True. - init_cfg (dict, optional): Config of initialization. Defaults to None. - """ - - def __init__( - self, - num_classes, - in_channels, - mlp_channels, - pred_layer_cfg=None, - num_points=(128, 32, -1), - radius=(0.2, 0.4, 100), - num_samples=(64, 64, 64), - sa_channels=((128, 128, 128), (128, 128, 256), (256, 256, 512)), - bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), - sa_cfg=dict(type='PointSAModule', pool_mod='max', use_xyz=True), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - bias='auto', - loss_bbox=dict( - type='SmoothL1Loss', - beta=1.0 / 9.0, - reduction='sum', - loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - reduction='sum', - loss_weight=1.0), - with_corner_loss=True, - init_cfg=None): - super(PointRCNNBboxHead, self).__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.num_sa = len(sa_channels) - self.with_corner_loss = with_corner_loss - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.bias = bias - - self.loss_bbox = build_loss(loss_bbox) - self.loss_cls = build_loss(loss_cls) - self.bbox_coder = build_bbox_coder(bbox_coder) - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - - self.in_channels = in_channels - mlp_channels = [self.in_channels] + mlp_channels - shared_mlps = nn.Sequential() - for i in range(len(mlp_channels) - 1): - shared_mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - inplace=False, - conv_cfg=dict(type='Conv2d'))) - self.xyz_up_layer = nn.Sequential(*shared_mlps) - - c_out = mlp_channels[-1] - self.merge_down_layer = ConvModule( - c_out * 2, - c_out, - kernel_size=(1, 1), - stride=(1, 1), - inplace=False, - conv_cfg=dict(type='Conv2d')) - - pre_channels = c_out - - self.SA_modules = nn.ModuleList() - sa_in_channel = pre_channels - - for sa_index in range(self.num_sa): - cur_sa_mlps = list(sa_channels[sa_index]) - cur_sa_mlps = [sa_in_channel] + cur_sa_mlps - sa_out_channel = cur_sa_mlps[-1] - - cur_num_points = num_points[sa_index] - if cur_num_points <= 0: - cur_num_points = None - self.SA_modules.append( - build_sa_module( - num_point=cur_num_points, - radius=radius[sa_index], - num_sample=num_samples[sa_index], - mlp_channels=cur_sa_mlps, - cfg=sa_cfg)) - sa_in_channel = sa_out_channel - self.cls_convs = self._add_conv_branch( - pred_layer_cfg.in_channels, pred_layer_cfg.cls_conv_channels) - self.reg_convs = self._add_conv_branch( - pred_layer_cfg.in_channels, pred_layer_cfg.reg_conv_channels) - - prev_channel = 
pred_layer_cfg.cls_conv_channels[-1] - self.conv_cls = build_conv_layer( - self.conv_cfg, - in_channels=prev_channel, - out_channels=self.num_classes, - kernel_size=1) - prev_channel = pred_layer_cfg.reg_conv_channels[-1] - self.conv_reg = build_conv_layer( - self.conv_cfg, - in_channels=prev_channel, - out_channels=self.bbox_coder.code_size * self.num_classes, - kernel_size=1) - - if init_cfg is None: - self.init_cfg = dict(type='Xavier', layer=['Conv2d', 'Conv1d']) - - def _add_conv_branch(self, in_channels, conv_channels): - """Add shared or separable branch. - - Args: - in_channels (int): Input feature channel. - conv_channels (tuple): Middle feature channels. - """ - conv_spec = [in_channels] + list(conv_channels) - # add branch specific conv layers - conv_layers = nn.Sequential() - for i in range(len(conv_spec) - 1): - conv_layers.add_module( - f'layer{i}', - ConvModule( - conv_spec[i], - conv_spec[i + 1], - kernel_size=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=self.bias, - inplace=True)) - return conv_layers - - def init_weights(self): - """Initialize weights of the head.""" - super().init_weights() - for m in self.modules(): - if isinstance(m, nn.Conv2d) or isinstance(m, nn.Conv1d): - if m.bias is not None: - nn.init.constant_(m.bias, 0) - normal_init(self.conv_reg.weight, mean=0, std=0.001) - - def forward(self, feats): - """Forward pass. - - Args: - feats (torch.Torch): Features from RCNN modules. - - Returns: - tuple[torch.Tensor]: Score of class and bbox predictions. - """ - input_data = feats.clone().detach() - xyz_input = input_data[..., 0:self.in_channels].transpose( - 1, 2).unsqueeze(dim=3).contiguous().clone().detach() - xyz_features = self.xyz_up_layer(xyz_input) - rpn_features = input_data[..., self.in_channels:].transpose( - 1, 2).unsqueeze(dim=3) - merged_features = torch.cat((xyz_features, rpn_features), dim=1) - merged_features = self.merge_down_layer(merged_features) - l_xyz, l_features = [input_data[..., 0:3].contiguous()], \ - [merged_features.squeeze(dim=3)] - for i in range(len(self.SA_modules)): - li_xyz, li_features, cur_indices = \ - self.SA_modules[i](l_xyz[i], l_features[i]) - l_xyz.append(li_xyz) - l_features.append(li_features) - - shared_features = l_features[-1] - x_cls = shared_features - x_reg = shared_features - x_cls = self.cls_convs(x_cls) - rcnn_cls = self.conv_cls(x_cls) - x_reg = self.reg_convs(x_reg) - rcnn_reg = self.conv_reg(x_reg) - rcnn_cls = rcnn_cls.transpose(1, 2).contiguous().squeeze(dim=1) - rcnn_reg = rcnn_reg.transpose(1, 2).contiguous().squeeze(dim=1) - return rcnn_cls, rcnn_reg - - def loss(self, cls_score, bbox_pred, rois, labels, bbox_targets, - pos_gt_bboxes, reg_mask, label_weights, bbox_weights): - """Computing losses. - - Args: - cls_score (torch.Tensor): Scores of each RoI. - bbox_pred (torch.Tensor): Predictions of bboxes. - rois (torch.Tensor): RoI bboxes. - labels (torch.Tensor): Labels of class. - bbox_targets (torch.Tensor): Target of positive bboxes. - pos_gt_bboxes (torch.Tensor): Ground truths of positive bboxes. - reg_mask (torch.Tensor): Mask for positive bboxes. - label_weights (torch.Tensor): Weights of class loss. - bbox_weights (torch.Tensor): Weights of bbox loss. - - Returns: - dict: Computed losses. - - - loss_cls (torch.Tensor): Loss of classes. - - loss_bbox (torch.Tensor): Loss of bboxes. - - loss_corner (torch.Tensor): Loss of corners. 
- """ - losses = dict() - rcnn_batch_size = cls_score.shape[0] - # calculate class loss - cls_flat = cls_score.view(-1) - loss_cls = self.loss_cls(cls_flat, labels, label_weights) - losses['loss_cls'] = loss_cls - - # calculate regression loss - code_size = self.bbox_coder.code_size - pos_inds = (reg_mask > 0) - - pos_bbox_pred = bbox_pred.view(rcnn_batch_size, -1)[pos_inds].clone() - bbox_weights_flat = bbox_weights[pos_inds].view(-1, 1).repeat( - 1, pos_bbox_pred.shape[-1]) - loss_bbox = self.loss_bbox( - pos_bbox_pred.unsqueeze(dim=0), - bbox_targets.unsqueeze(dim=0).detach(), - bbox_weights_flat.unsqueeze(dim=0)) - losses['loss_bbox'] = loss_bbox - - if pos_inds.any() != 0 and self.with_corner_loss: - rois = rois.detach() - pos_roi_boxes3d = rois[..., 1:].view(-1, code_size)[pos_inds] - pos_roi_boxes3d = pos_roi_boxes3d.view(-1, code_size) - batch_anchors = pos_roi_boxes3d.clone().detach() - pos_rois_rotation = pos_roi_boxes3d[..., 6].view(-1) - roi_xyz = pos_roi_boxes3d[..., 0:3].view(-1, 3) - batch_anchors[..., 0:3] = 0 - # decode boxes - pred_boxes3d = self.bbox_coder.decode( - batch_anchors, - pos_bbox_pred.view(-1, code_size)).view(-1, code_size) - - pred_boxes3d[..., 0:3] = rotation_3d_in_axis( - pred_boxes3d[..., 0:3].unsqueeze(1), (pos_rois_rotation), - axis=2).squeeze(1) - - pred_boxes3d[:, 0:3] += roi_xyz - - # calculate corner loss - loss_corner = self.get_corner_loss_lidar(pred_boxes3d, - pos_gt_bboxes) - - losses['loss_corner'] = loss_corner - else: - losses['loss_corner'] = loss_cls.new_tensor(0) - - return losses - - def get_corner_loss_lidar(self, pred_bbox3d, gt_bbox3d, delta=1.0): - """Calculate corner loss of given boxes. - - Args: - pred_bbox3d (torch.FloatTensor): Predicted boxes in shape (N, 7). - gt_bbox3d (torch.FloatTensor): Ground truth boxes in shape (N, 7). - delta (float, optional): huber loss threshold. Defaults to 1.0 - - Returns: - torch.FloatTensor: Calculated corner loss in shape (N). - """ - assert pred_bbox3d.shape[0] == gt_bbox3d.shape[0] - - # This is a little bit hack here because we assume the box for - # PointRCNN is in LiDAR coordinates - - gt_boxes_structure = LiDARInstance3DBoxes(gt_bbox3d) - pred_box_corners = LiDARInstance3DBoxes(pred_bbox3d).corners - gt_box_corners = gt_boxes_structure.corners - - # This flip only changes the heading direction of GT boxes - gt_bbox3d_flip = gt_boxes_structure.clone() - gt_bbox3d_flip.tensor[:, 6] += np.pi - gt_box_corners_flip = gt_bbox3d_flip.corners - - corner_dist = torch.min( - torch.norm(pred_box_corners - gt_box_corners, dim=2), - torch.norm(pred_box_corners - gt_box_corners_flip, dim=2)) - # huber loss - abs_error = corner_dist.abs() - quadratic = abs_error.clamp(max=delta) - linear = (abs_error - quadratic) - corner_loss = 0.5 * quadratic**2 + delta * linear - return corner_loss.mean(dim=1) - - def get_targets(self, sampling_results, rcnn_train_cfg, concat=True): - """Generate targets. - - Args: - sampling_results (list[:obj:`SamplingResult`]): - Sampled results from rois. - rcnn_train_cfg (:obj:`ConfigDict`): Training config of rcnn. - concat (bool, optional): Whether to concatenate targets between - batches. Defaults to True. - - Returns: - tuple[torch.Tensor]: Targets of boxes and class prediction. 
- """ - pos_bboxes_list = [res.pos_bboxes for res in sampling_results] - pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results] - iou_list = [res.iou for res in sampling_results] - targets = multi_apply( - self._get_target_single, - pos_bboxes_list, - pos_gt_bboxes_list, - iou_list, - cfg=rcnn_train_cfg) - (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) = targets - - if concat: - label = torch.cat(label, 0) - bbox_targets = torch.cat(bbox_targets, 0) - pos_gt_bboxes = torch.cat(pos_gt_bboxes, 0) - reg_mask = torch.cat(reg_mask, 0) - - label_weights = torch.cat(label_weights, 0) - label_weights /= torch.clamp(label_weights.sum(), min=1.0) - - bbox_weights = torch.cat(bbox_weights, 0) - bbox_weights /= torch.clamp(bbox_weights.sum(), min=1.0) - - return (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - - def _get_target_single(self, pos_bboxes, pos_gt_bboxes, ious, cfg): - """Generate training targets for a single sample. - - Args: - pos_bboxes (torch.Tensor): Positive boxes with shape - (N, 7). - pos_gt_bboxes (torch.Tensor): Ground truth boxes with shape - (M, 7). - ious (torch.Tensor): IoU between `pos_bboxes` and `pos_gt_bboxes` - in shape (N, M). - cfg (dict): Training configs. - - Returns: - tuple[torch.Tensor]: Target for positive boxes. - (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - """ - cls_pos_mask = ious > cfg.cls_pos_thr - cls_neg_mask = ious < cfg.cls_neg_thr - interval_mask = (cls_pos_mask == 0) & (cls_neg_mask == 0) - # iou regression target - label = (cls_pos_mask > 0).float() - label[interval_mask] = (ious[interval_mask] - cfg.cls_neg_thr) / \ - (cfg.cls_pos_thr - cfg.cls_neg_thr) - # label weights - label_weights = (label >= 0).float() - # box regression target - reg_mask = pos_bboxes.new_zeros(ious.size(0)).long() - reg_mask[0:pos_gt_bboxes.size(0)] = 1 - bbox_weights = (reg_mask > 0).float() - if reg_mask.bool().any(): - pos_gt_bboxes_ct = pos_gt_bboxes.clone().detach() - roi_center = pos_bboxes[..., 0:3] - roi_ry = pos_bboxes[..., 6] % (2 * np.pi) - - # canonical transformation - pos_gt_bboxes_ct[..., 0:3] -= roi_center - pos_gt_bboxes_ct[..., 6] -= roi_ry - pos_gt_bboxes_ct[..., 0:3] = rotation_3d_in_axis( - pos_gt_bboxes_ct[..., 0:3].unsqueeze(1), -(roi_ry), - axis=2).squeeze(1) - - # flip orientation if gt have opposite orientation - ry_label = pos_gt_bboxes_ct[..., 6] % (2 * np.pi) # 0 ~ 2pi - is_opposite = (ry_label > np.pi * 0.5) & (ry_label < np.pi * 1.5) - ry_label[is_opposite] = (ry_label[is_opposite] + np.pi) % ( - 2 * np.pi) # (0 ~ pi/2, 3pi/2 ~ 2pi) - flag = ry_label > np.pi - ry_label[flag] = ry_label[flag] - np.pi * 2 # (-pi/2, pi/2) - ry_label = torch.clamp(ry_label, min=-np.pi / 2, max=np.pi / 2) - pos_gt_bboxes_ct[..., 6] = ry_label - - rois_anchor = pos_bboxes.clone().detach() - rois_anchor[:, 0:3] = 0 - rois_anchor[:, 6] = 0 - bbox_targets = self.bbox_coder.encode(rois_anchor, - pos_gt_bboxes_ct) - else: - # no fg bbox - bbox_targets = pos_gt_bboxes.new_empty((0, 7)) - - return (label, bbox_targets, pos_gt_bboxes, reg_mask, label_weights, - bbox_weights) - - def get_bboxes(self, - rois, - cls_score, - bbox_pred, - class_labels, - img_metas, - cfg=None): - """Generate bboxes from bbox head predictions. - - Args: - rois (torch.Tensor): RoI bounding boxes. - cls_score (torch.Tensor): Scores of bounding boxes. 
- bbox_pred (torch.Tensor): Bounding boxes predictions - class_labels (torch.Tensor): Label of classes - img_metas (list[dict]): Point cloud and image's meta info. - cfg (:obj:`ConfigDict`, optional): Testing config. - Defaults to None. - - Returns: - list[tuple]: Decoded bbox, scores and labels after nms. - """ - roi_batch_id = rois[..., 0] - roi_boxes = rois[..., 1:] # boxes without batch id - batch_size = int(roi_batch_id.max().item() + 1) - - # decode boxes - roi_ry = roi_boxes[..., 6].view(-1) - roi_xyz = roi_boxes[..., 0:3].view(-1, 3) - local_roi_boxes = roi_boxes.clone().detach() - local_roi_boxes[..., 0:3] = 0 - rcnn_boxes3d = self.bbox_coder.decode(local_roi_boxes, bbox_pred) - rcnn_boxes3d[..., 0:3] = rotation_3d_in_axis( - rcnn_boxes3d[..., 0:3].unsqueeze(1), roi_ry, axis=2).squeeze(1) - rcnn_boxes3d[:, 0:3] += roi_xyz - - # post processing - result_list = [] - for batch_id in range(batch_size): - cur_class_labels = class_labels[batch_id] - cur_cls_score = cls_score[roi_batch_id == batch_id].view(-1) - - cur_box_prob = cur_cls_score.unsqueeze(1) - cur_rcnn_boxes3d = rcnn_boxes3d[roi_batch_id == batch_id] - keep = self.multi_class_nms(cur_box_prob, cur_rcnn_boxes3d, - cfg.score_thr, cfg.nms_thr, - img_metas[batch_id], - cfg.use_rotate_nms) - selected_bboxes = cur_rcnn_boxes3d[keep] - selected_label_preds = cur_class_labels[keep] - selected_scores = cur_cls_score[keep] - - result_list.append( - (img_metas[batch_id]['box_type_3d'](selected_bboxes, - self.bbox_coder.code_size), - selected_scores, selected_label_preds)) - return result_list - - def multi_class_nms(self, - box_probs, - box_preds, - score_thr, - nms_thr, - input_meta, - use_rotate_nms=True): - """Multi-class NMS for box head. - - Note: - This function has large overlap with the `box3d_multiclass_nms` - implemented in `mmdet3d.core.post_processing`. We are considering - merging these two functions in the future. - - Args: - box_probs (torch.Tensor): Predicted boxes probabilities in - shape (N,). - box_preds (torch.Tensor): Predicted boxes in shape (N, 7+C). - score_thr (float): Threshold of scores. - nms_thr (float): Threshold for NMS. - input_meta (dict): Meta information of the current sample. - use_rotate_nms (bool, optional): Whether to use rotated nms. - Defaults to True. - - Returns: - torch.Tensor: Selected indices. 
- """ - if use_rotate_nms: - nms_func = nms_bev - else: - nms_func = nms_normal_bev - - assert box_probs.shape[ - 1] == self.num_classes, f'box_probs shape: {str(box_probs.shape)}' - selected_list = [] - selected_labels = [] - boxes_for_nms = xywhr2xyxyr(input_meta['box_type_3d']( - box_preds, self.bbox_coder.code_size).bev) - - score_thresh = score_thr if isinstance( - score_thr, list) else [score_thr for x in range(self.num_classes)] - nms_thresh = nms_thr if isinstance( - nms_thr, list) else [nms_thr for x in range(self.num_classes)] - for k in range(0, self.num_classes): - class_scores_keep = box_probs[:, k] >= score_thresh[k] - - if class_scores_keep.int().sum() > 0: - original_idxs = class_scores_keep.nonzero( - as_tuple=False).view(-1) - cur_boxes_for_nms = boxes_for_nms[class_scores_keep] - cur_rank_scores = box_probs[class_scores_keep, k] - - cur_selected = nms_func(cur_boxes_for_nms, cur_rank_scores, - nms_thresh[k]) - - if cur_selected.shape[0] == 0: - continue - selected_list.append(original_idxs[cur_selected]) - selected_labels.append( - torch.full([cur_selected.shape[0]], - k + 1, - dtype=torch.int64, - device=box_preds.device)) - - keep = torch.cat( - selected_list, dim=0) if len(selected_list) > 0 else [] - return keep diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/h3d_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/h3d_roi_head.py deleted file mode 100644 index dccb1dc4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/h3d_roi_head.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet3d.core.bbox import bbox3d2result -from ..builder import HEADS, build_head -from .base_3droi_head import Base3DRoIHead - - -@HEADS.register_module() -class H3DRoIHead(Base3DRoIHead): - """H3D roi head for H3DNet. - - Args: - primitive_list (List): Configs of primitive heads. - bbox_head (ConfigDict): Config of bbox_head. - train_cfg (ConfigDict): Training config. - test_cfg (ConfigDict): Testing config. - """ - - def __init__(self, - primitive_list, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(H3DRoIHead, self).__init__( - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - # Primitive module - assert len(primitive_list) == 3 - self.primitive_z = build_head(primitive_list[0]) - self.primitive_xy = build_head(primitive_list[1]) - self.primitive_line = build_head(primitive_list[2]) - - def init_mask_head(self): - """Initialize mask head, skip since ``H3DROIHead`` does not have - one.""" - pass - - def init_bbox_head(self, bbox_head): - """Initialize box head.""" - bbox_head['train_cfg'] = self.train_cfg - bbox_head['test_cfg'] = self.test_cfg - self.bbox_head = build_head(bbox_head) - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - pass - - def forward_train(self, - feats_dict, - img_metas, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask, - pts_instance_mask, - gt_bboxes_ignore=None): - """Training forward function of PartAggregationROIHead. - - Args: - feats_dict (dict): Contains features from the first stage. - img_metas (list[dict]): Contain pcd and img's meta info. - points (list[torch.Tensor]): Input points. 
- gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding boxes to ignore. - - Returns: - dict: losses from each head. - """ - losses = dict() - - sample_mod = self.train_cfg.sample_mod - assert sample_mod in ['vote', 'seed', 'random'] - result_z = self.primitive_z(feats_dict, sample_mod) - feats_dict.update(result_z) - - result_xy = self.primitive_xy(feats_dict, sample_mod) - feats_dict.update(result_xy) - - result_line = self.primitive_line(feats_dict, sample_mod) - feats_dict.update(result_line) - - primitive_loss_inputs = (feats_dict, points, gt_bboxes_3d, - gt_labels_3d, pts_semantic_mask, - pts_instance_mask, img_metas, - gt_bboxes_ignore) - - loss_z = self.primitive_z.loss(*primitive_loss_inputs) - losses.update(loss_z) - - loss_xy = self.primitive_xy.loss(*primitive_loss_inputs) - losses.update(loss_xy) - - loss_line = self.primitive_line.loss(*primitive_loss_inputs) - losses.update(loss_line) - - targets = feats_dict.pop('targets') - - bbox_results = self.bbox_head(feats_dict, sample_mod) - - feats_dict.update(bbox_results) - bbox_loss = self.bbox_head.loss(feats_dict, points, gt_bboxes_3d, - gt_labels_3d, pts_semantic_mask, - pts_instance_mask, img_metas, targets, - gt_bboxes_ignore) - losses.update(bbox_loss) - - return losses - - def simple_test(self, feats_dict, img_metas, points, rescale=False): - """Simple testing forward function of PartAggregationROIHead. - - Note: - This function assumes that the batch size is 1 - - Args: - feats_dict (dict): Contains features from the first stage. - img_metas (list[dict]): Contain pcd and img's meta info. - points (torch.Tensor): Input points. - rescale (bool): Whether to rescale results. - - Returns: - dict: Bbox results of one frame. - """ - sample_mod = self.test_cfg.sample_mod - assert sample_mod in ['vote', 'seed', 'random'] - - result_z = self.primitive_z(feats_dict, sample_mod) - feats_dict.update(result_z) - - result_xy = self.primitive_xy(feats_dict, sample_mod) - feats_dict.update(result_xy) - - result_line = self.primitive_line(feats_dict, sample_mod) - feats_dict.update(result_line) - - bbox_preds = self.bbox_head(feats_dict, sample_mod) - feats_dict.update(bbox_preds) - bbox_list = self.bbox_head.get_bboxes( - points, - feats_dict, - img_metas, - rescale=rescale, - suffix='_optimized') - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/mask_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/mask_heads/__init__.py deleted file mode 100644 index 8b227060..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/mask_heads/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from .pointwise_semantic_head import PointwiseSemanticHead -from .primitive_head import PrimitiveHead - -__all__ = ['PointwiseSemanticHead', 'PrimitiveHead'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/mask_heads/pointwise_semantic_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/mask_heads/pointwise_semantic_head.py deleted file mode 100644 index 71dbaa13..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/mask_heads/pointwise_semantic_head.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.runner import BaseModule -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.core.bbox.structures import rotation_3d_in_axis -from mmdet3d.models.builder import HEADS, build_loss -from mmdet.core import multi_apply - - -@HEADS.register_module() -class PointwiseSemanticHead(BaseModule): - """Semantic segmentation head for point-wise segmentation. - - Predict point-wise segmentation and part regression results for PartA2. - See `paper `_ for more details. - - Args: - in_channels (int): The number of input channel. - num_classes (int): The number of class. - extra_width (float): Boxes enlarge width. - loss_seg (dict): Config of segmentation loss. - loss_part (dict): Config of part prediction loss. - """ - - def __init__(self, - in_channels, - num_classes=3, - extra_width=0.2, - seg_score_thr=0.3, - init_cfg=None, - loss_seg=dict( - type='FocalLoss', - use_sigmoid=True, - reduction='sum', - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_part=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0)): - super(PointwiseSemanticHead, self).__init__(init_cfg=init_cfg) - self.extra_width = extra_width - self.num_classes = num_classes - self.seg_score_thr = seg_score_thr - self.seg_cls_layer = nn.Linear(in_channels, 1, bias=True) - self.seg_reg_layer = nn.Linear(in_channels, 3, bias=True) - - self.loss_seg = build_loss(loss_seg) - self.loss_part = build_loss(loss_part) - - def forward(self, x): - """Forward pass. - - Args: - x (torch.Tensor): Features from the first stage. - - Returns: - dict: Part features, segmentation and part predictions. - - - seg_preds (torch.Tensor): Segment predictions. - - part_preds (torch.Tensor): Part predictions. - - part_feats (torch.Tensor): Feature predictions. - """ - seg_preds = self.seg_cls_layer(x) # (N, 1) - part_preds = self.seg_reg_layer(x) # (N, 3) - - seg_scores = torch.sigmoid(seg_preds).detach() - seg_mask = (seg_scores > self.seg_score_thr) - - part_offsets = torch.sigmoid(part_preds).clone().detach() - part_offsets[seg_mask.view(-1) == 0] = 0 - part_feats = torch.cat((part_offsets, seg_scores), - dim=-1) # shape (npoints, 4) - return dict( - seg_preds=seg_preds, part_preds=part_preds, part_feats=part_feats) - - def get_targets_single(self, voxel_centers, gt_bboxes_3d, gt_labels_3d): - """generate segmentation and part prediction targets for a single - sample. - - Args: - voxel_centers (torch.Tensor): The center of voxels in shape - (voxel_num, 3). - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth boxes in - shape (box_num, 7). - gt_labels_3d (torch.Tensor): Class labels of ground truths in - shape (box_num). 
- - Returns: - tuple[torch.Tensor]: Segmentation targets with shape [voxel_num] - part prediction targets with shape [voxel_num, 3] - """ - gt_bboxes_3d = gt_bboxes_3d.to(voxel_centers.device) - enlarged_gt_boxes = gt_bboxes_3d.enlarged_box(self.extra_width) - - part_targets = voxel_centers.new_zeros((voxel_centers.shape[0], 3), - dtype=torch.float32) - box_idx = gt_bboxes_3d.points_in_boxes_part(voxel_centers) - enlarge_box_idx = enlarged_gt_boxes.points_in_boxes_part( - voxel_centers).long() - - gt_labels_pad = F.pad( - gt_labels_3d, (1, 0), mode='constant', value=self.num_classes) - seg_targets = gt_labels_pad[(box_idx.long() + 1)] - fg_pt_flag = box_idx > -1 - ignore_flag = fg_pt_flag ^ (enlarge_box_idx > -1) - seg_targets[ignore_flag] = -1 - - for k in range(len(gt_bboxes_3d)): - k_box_flag = box_idx == k - # no point in current box (caused by velodyne reduce) - if not k_box_flag.any(): - continue - fg_voxels = voxel_centers[k_box_flag] - transformed_voxels = fg_voxels - gt_bboxes_3d.bottom_center[k] - transformed_voxels = rotation_3d_in_axis( - transformed_voxels.unsqueeze(0), - -gt_bboxes_3d.yaw[k].view(1), - axis=2) - part_targets[k_box_flag] = transformed_voxels / gt_bboxes_3d.dims[ - k] + voxel_centers.new_tensor([0.5, 0.5, 0]) - - part_targets = torch.clamp(part_targets, min=0) - return seg_targets, part_targets - - def get_targets(self, voxels_dict, gt_bboxes_3d, gt_labels_3d): - """generate segmentation and part prediction targets. - - Args: - voxel_centers (torch.Tensor): The center of voxels in shape - (voxel_num, 3). - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth boxes in - shape (box_num, 7). - gt_labels_3d (torch.Tensor): Class labels of ground truths in - shape (box_num). - - Returns: - dict: Prediction targets - - - seg_targets (torch.Tensor): Segmentation targets - with shape [voxel_num]. - - part_targets (torch.Tensor): Part prediction targets - with shape [voxel_num, 3]. - """ - batch_size = len(gt_labels_3d) - voxel_center_list = [] - for idx in range(batch_size): - coords_idx = voxels_dict['coors'][:, 0] == idx - voxel_center_list.append(voxels_dict['voxel_centers'][coords_idx]) - - seg_targets, part_targets = multi_apply(self.get_targets_single, - voxel_center_list, - gt_bboxes_3d, gt_labels_3d) - seg_targets = torch.cat(seg_targets, dim=0) - part_targets = torch.cat(part_targets, dim=0) - return dict(seg_targets=seg_targets, part_targets=part_targets) - - def loss(self, semantic_results, semantic_targets): - """Calculate point-wise segmentation and part prediction losses. - - Args: - semantic_results (dict): Results from semantic head. - - - seg_preds: Segmentation predictions. - - part_preds: Part predictions. - - semantic_targets (dict): Targets of semantic results. - - - seg_preds: Segmentation targets. - - part_preds: Part targets. - - Returns: - dict: Loss of segmentation and part prediction. - - - loss_seg (torch.Tensor): Segmentation prediction loss. - - loss_part (torch.Tensor): Part prediction loss. 
- """ - seg_preds = semantic_results['seg_preds'] - part_preds = semantic_results['part_preds'] - seg_targets = semantic_targets['seg_targets'] - part_targets = semantic_targets['part_targets'] - - pos_mask = (seg_targets > -1) & (seg_targets < self.num_classes) - binary_seg_target = pos_mask.long() - pos = pos_mask.float() - neg = (seg_targets == self.num_classes).float() - seg_weights = pos + neg - pos_normalizer = pos.sum() - seg_weights = seg_weights / torch.clamp(pos_normalizer, min=1.0) - loss_seg = self.loss_seg(seg_preds, binary_seg_target, seg_weights) - - if pos_normalizer > 0: - loss_part = self.loss_part(part_preds[pos_mask], - part_targets[pos_mask]) - else: - # fake a part loss - loss_part = loss_seg.new_tensor(0) - - return dict(loss_seg=loss_seg, loss_part=loss_part) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/mask_heads/primitive_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/mask_heads/primitive_head.py deleted file mode 100644 index ba44031f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/mask_heads/primitive_head.py +++ /dev/null @@ -1,968 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.ops import furthest_point_sample -from mmcv.runner import BaseModule -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.models.builder import HEADS, build_loss -from mmdet3d.models.model_utils import VoteModule -from mmdet3d.ops import build_sa_module -from mmdet.core import multi_apply - - -@HEADS.register_module() -class PrimitiveHead(BaseModule): - r"""Primitive head of `H3DNet `_. - - Args: - num_dims (int): The dimension of primitive semantic information. - num_classes (int): The number of class. - primitive_mode (str): The mode of primitive module, - available mode ['z', 'xy', 'line']. - bbox_coder (:obj:`BaseBBoxCoder`): Bbox coder for encoding and - decoding boxes. - train_cfg (dict): Config for training. - test_cfg (dict): Config for testing. - vote_module_cfg (dict): Config of VoteModule for point-wise votes. - vote_aggregation_cfg (dict): Config of vote aggregation layer. - feat_channels (tuple[int]): Convolution channels of - prediction layer. - upper_thresh (float): Threshold for line matching. - surface_thresh (float): Threshold for surface matching. - conv_cfg (dict): Config of convolution in prediction layer. - norm_cfg (dict): Config of BN in prediction layer. - objectness_loss (dict): Config of objectness loss. - center_loss (dict): Config of center loss. - semantic_loss (dict): Config of point-wise semantic segmentation loss. - """ - - def __init__(self, - num_dims, - num_classes, - primitive_mode, - train_cfg=None, - test_cfg=None, - vote_module_cfg=None, - vote_aggregation_cfg=None, - feat_channels=(128, 128), - upper_thresh=100.0, - surface_thresh=0.5, - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - objectness_loss=None, - center_loss=None, - semantic_reg_loss=None, - semantic_cls_loss=None, - init_cfg=None): - super(PrimitiveHead, self).__init__(init_cfg=init_cfg) - assert primitive_mode in ['z', 'xy', 'line'] - # The dimension of primitive semantic information. 
- self.num_dims = num_dims - self.num_classes = num_classes - self.primitive_mode = primitive_mode - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.gt_per_seed = vote_module_cfg['gt_per_seed'] - self.num_proposal = vote_aggregation_cfg['num_point'] - self.upper_thresh = upper_thresh - self.surface_thresh = surface_thresh - - self.objectness_loss = build_loss(objectness_loss) - self.center_loss = build_loss(center_loss) - self.semantic_reg_loss = build_loss(semantic_reg_loss) - self.semantic_cls_loss = build_loss(semantic_cls_loss) - - assert vote_aggregation_cfg['mlp_channels'][0] == vote_module_cfg[ - 'in_channels'] - - # Primitive existence flag prediction - self.flag_conv = ConvModule( - vote_module_cfg['conv_channels'][-1], - vote_module_cfg['conv_channels'][-1] // 2, - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True) - self.flag_pred = torch.nn.Conv1d( - vote_module_cfg['conv_channels'][-1] // 2, 2, 1) - - self.vote_module = VoteModule(**vote_module_cfg) - self.vote_aggregation = build_sa_module(vote_aggregation_cfg) - - prev_channel = vote_aggregation_cfg['mlp_channels'][-1] - conv_pred_list = list() - for k in range(len(feat_channels)): - conv_pred_list.append( - ConvModule( - prev_channel, - feat_channels[k], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True, - inplace=True)) - prev_channel = feat_channels[k] - self.conv_pred = nn.Sequential(*conv_pred_list) - - conv_out_channel = 3 + num_dims + num_classes - self.conv_pred.add_module('conv_out', - nn.Conv1d(prev_channel, conv_out_channel, 1)) - - def forward(self, feats_dict, sample_mod): - """Forward pass. - - Args: - feats_dict (dict): Feature dict from backbone. - sample_mod (str): Sample mode for vote aggregation layer. - valid modes are "vote", "seed" and "random". - - Returns: - dict: Predictions of primitive head. - """ - assert sample_mod in ['vote', 'seed', 'random'] - - seed_points = feats_dict['fp_xyz_net0'][-1] - seed_features = feats_dict['hd_feature'] - results = {} - - primitive_flag = self.flag_conv(seed_features) - primitive_flag = self.flag_pred(primitive_flag) - - results['pred_flag_' + self.primitive_mode] = primitive_flag - - # 1. generate vote_points from seed_points - vote_points, vote_features, _ = self.vote_module( - seed_points, seed_features) - results['vote_' + self.primitive_mode] = vote_points - results['vote_features_' + self.primitive_mode] = vote_features - - # 2. aggregate vote_points - if sample_mod == 'vote': - # use fps in vote_aggregation - sample_indices = None - elif sample_mod == 'seed': - # FPS on seed and choose the votes corresponding to the seeds - sample_indices = furthest_point_sample(seed_points, - self.num_proposal) - elif sample_mod == 'random': - # Random sampling from the votes - batch_size, num_seed = seed_points.shape[:2] - sample_indices = torch.randint( - 0, - num_seed, (batch_size, self.num_proposal), - dtype=torch.int32, - device=seed_points.device) - else: - raise NotImplementedError('Unsupported sample mod!') - - vote_aggregation_ret = self.vote_aggregation(vote_points, - vote_features, - sample_indices) - aggregated_points, features, aggregated_indices = vote_aggregation_ret - results['aggregated_points_' + self.primitive_mode] = aggregated_points - results['aggregated_features_' + self.primitive_mode] = features - results['aggregated_indices_' + - self.primitive_mode] = aggregated_indices - - # 3. predict primitive offsets and semantic information - predictions = self.conv_pred(features) - - # 4. 
decode predictions - decode_ret = self.primitive_decode_scores(predictions, - aggregated_points) - results.update(decode_ret) - - center, pred_ind = self.get_primitive_center( - primitive_flag, decode_ret['center_' + self.primitive_mode]) - - results['pred_' + self.primitive_mode + '_ind'] = pred_ind - results['pred_' + self.primitive_mode + '_center'] = center - return results - - def loss(self, - bbox_preds, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - img_metas=None, - gt_bboxes_ignore=None): - """Compute loss. - - Args: - bbox_preds (dict): Predictions from forward of primitive head. - points (list[torch.Tensor]): Input points. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each sample. - gt_labels_3d (list[torch.Tensor]): Labels of each sample. - pts_semantic_mask (list[torch.Tensor]): Point-wise - semantic mask. - pts_instance_mask (list[torch.Tensor]): Point-wise - instance mask. - img_metas (list[dict]): Contain pcd and img's meta info. - gt_bboxes_ignore (list[torch.Tensor]): Specify - which bounding. - - Returns: - dict: Losses of Primitive Head. - """ - targets = self.get_targets(points, gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask, - bbox_preds) - - (point_mask, point_offset, gt_primitive_center, gt_primitive_semantic, - gt_sem_cls_label, gt_primitive_mask) = targets - - losses = {} - # Compute the loss of primitive existence flag - pred_flag = bbox_preds['pred_flag_' + self.primitive_mode] - flag_loss = self.objectness_loss(pred_flag, gt_primitive_mask.long()) - losses['flag_loss_' + self.primitive_mode] = flag_loss - - # calculate vote loss - vote_loss = self.vote_module.get_loss( - bbox_preds['seed_points'], - bbox_preds['vote_' + self.primitive_mode], - bbox_preds['seed_indices'], point_mask, point_offset) - losses['vote_loss_' + self.primitive_mode] = vote_loss - - num_proposal = bbox_preds['aggregated_points_' + - self.primitive_mode].shape[1] - primitive_center = bbox_preds['center_' + self.primitive_mode] - if self.primitive_mode != 'line': - primitive_semantic = bbox_preds['size_residuals_' + - self.primitive_mode].contiguous() - else: - primitive_semantic = None - semancitc_scores = bbox_preds['sem_cls_scores_' + - self.primitive_mode].transpose(2, 1) - - gt_primitive_mask = gt_primitive_mask / \ - (gt_primitive_mask.sum() + 1e-6) - center_loss, size_loss, sem_cls_loss = self.compute_primitive_loss( - primitive_center, primitive_semantic, semancitc_scores, - num_proposal, gt_primitive_center, gt_primitive_semantic, - gt_sem_cls_label, gt_primitive_mask) - losses['center_loss_' + self.primitive_mode] = center_loss - losses['size_loss_' + self.primitive_mode] = size_loss - losses['sem_loss_' + self.primitive_mode] = sem_cls_loss - - return losses - - def get_targets(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None, - bbox_preds=None): - """Generate targets of primitive head. - - Args: - points (list[torch.Tensor]): Points of each batch. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - bboxes of each batch. - gt_labels_3d (list[torch.Tensor]): Labels of each batch. - pts_semantic_mask (list[torch.Tensor]): Point-wise semantic - label of each batch. - pts_instance_mask (list[torch.Tensor]): Point-wise instance - label of each batch. - bbox_preds (dict): Predictions from forward of primitive head. - - Returns: - tuple[torch.Tensor]: Targets of primitive head. 
- """ - for index in range(len(gt_labels_3d)): - if len(gt_labels_3d[index]) == 0: - fake_box = gt_bboxes_3d[index].tensor.new_zeros( - 1, gt_bboxes_3d[index].tensor.shape[-1]) - gt_bboxes_3d[index] = gt_bboxes_3d[index].new_box(fake_box) - gt_labels_3d[index] = gt_labels_3d[index].new_zeros(1) - - if pts_semantic_mask is None: - pts_semantic_mask = [None for i in range(len(gt_labels_3d))] - pts_instance_mask = [None for i in range(len(gt_labels_3d))] - - (point_mask, point_sem, - point_offset) = multi_apply(self.get_targets_single, points, - gt_bboxes_3d, gt_labels_3d, - pts_semantic_mask, pts_instance_mask) - - point_mask = torch.stack(point_mask) - point_sem = torch.stack(point_sem) - point_offset = torch.stack(point_offset) - - batch_size = point_mask.shape[0] - num_proposal = bbox_preds['aggregated_points_' + - self.primitive_mode].shape[1] - num_seed = bbox_preds['seed_points'].shape[1] - seed_inds = bbox_preds['seed_indices'].long() - seed_inds_expand = seed_inds.view(batch_size, num_seed, - 1).repeat(1, 1, 3) - seed_gt_votes = torch.gather(point_offset, 1, seed_inds_expand) - seed_gt_votes += bbox_preds['seed_points'] - gt_primitive_center = seed_gt_votes.view(batch_size * num_proposal, 1, - 3) - - seed_inds_expand_sem = seed_inds.view(batch_size, num_seed, 1).repeat( - 1, 1, 4 + self.num_dims) - seed_gt_sem = torch.gather(point_sem, 1, seed_inds_expand_sem) - gt_primitive_semantic = seed_gt_sem[:, :, 3:3 + self.num_dims].view( - batch_size * num_proposal, 1, self.num_dims).contiguous() - - gt_sem_cls_label = seed_gt_sem[:, :, -1].long() - - gt_votes_mask = torch.gather(point_mask, 1, seed_inds) - - return (point_mask, point_offset, gt_primitive_center, - gt_primitive_semantic, gt_sem_cls_label, gt_votes_mask) - - def get_targets_single(self, - points, - gt_bboxes_3d, - gt_labels_3d, - pts_semantic_mask=None, - pts_instance_mask=None): - """Generate targets of primitive head for single batch. - - Args: - points (torch.Tensor): Points of each batch. - gt_bboxes_3d (:obj:`BaseInstance3DBoxes`): Ground truth - boxes of each batch. - gt_labels_3d (torch.Tensor): Labels of each batch. - pts_semantic_mask (torch.Tensor): Point-wise semantic - label of each batch. - pts_instance_mask (torch.Tensor): Point-wise instance - label of each batch. - - Returns: - tuple[torch.Tensor]: Targets of primitive head. 
- """ - gt_bboxes_3d = gt_bboxes_3d.to(points.device) - num_points = points.shape[0] - - point_mask = points.new_zeros(num_points) - # Offset to the primitive center - point_offset = points.new_zeros([num_points, 3]) - # Semantic information of primitive center - point_sem = points.new_zeros([num_points, 3 + self.num_dims + 1]) - - # Generate pts_semantic_mask and pts_instance_mask when they are None - if pts_semantic_mask is None or pts_instance_mask is None: - points2box_mask = gt_bboxes_3d.points_in_boxes_all(points) - assignment = points2box_mask.argmax(1) - background_mask = points2box_mask.max(1)[0] == 0 - - if pts_semantic_mask is None: - pts_semantic_mask = gt_labels_3d[assignment] - pts_semantic_mask[background_mask] = self.num_classes - - if pts_instance_mask is None: - pts_instance_mask = assignment - pts_instance_mask[background_mask] = gt_labels_3d.shape[0] - - instance_flag = torch.nonzero( - pts_semantic_mask != self.num_classes, as_tuple=False).squeeze(1) - instance_labels = pts_instance_mask[instance_flag].unique() - - with_yaw = gt_bboxes_3d.with_yaw - for i, i_instance in enumerate(instance_labels): - indices = instance_flag[pts_instance_mask[instance_flag] == - i_instance] - coords = points[indices, :3] - cur_cls_label = pts_semantic_mask[indices][0] - - # Bbox Corners - cur_corners = gt_bboxes_3d.corners[i] - - plane_lower_temp = points.new_tensor( - [0, 0, 1, -cur_corners[7, -1]]) - upper_points = cur_corners[[1, 2, 5, 6]] - refined_distance = (upper_points * plane_lower_temp[:3]).sum(dim=1) - - if self.check_horizon(upper_points) and \ - plane_lower_temp[0] + plane_lower_temp[1] < \ - self.train_cfg['lower_thresh']: - plane_lower = points.new_tensor( - [0, 0, 1, plane_lower_temp[-1]]) - plane_upper = points.new_tensor( - [0, 0, 1, -torch.mean(refined_distance)]) - else: - raise NotImplementedError('Only horizontal plane is support!') - - if self.check_dist(plane_upper, upper_points) is False: - raise NotImplementedError( - 'Mean distance to plane should be lower than thresh!') - - # Get the boundary points here - point2plane_dist, selected = self.match_point2plane( - plane_lower, coords) - - # Get bottom four lines - if self.primitive_mode == 'line': - point2line_matching = self.match_point2line( - coords[selected], cur_corners, with_yaw, mode='bottom') - - point_mask, point_offset, point_sem = \ - self._assign_primitive_line_targets(point_mask, - point_offset, - point_sem, - coords[selected], - indices[selected], - cur_cls_label, - point2line_matching, - cur_corners, - [1, 1, 0, 0], - with_yaw, - mode='bottom') - - # Set the surface labels here - if self.primitive_mode == 'z' and \ - selected.sum() > self.train_cfg['num_point'] and \ - point2plane_dist[selected].var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets(point_mask, - point_offset, - point_sem, - coords[selected], - indices[selected], - cur_cls_label, - cur_corners, - with_yaw, - mode='bottom') - - # Get the boundary points here - point2plane_dist, selected = self.match_point2plane( - plane_upper, coords) - - # Get top four lines - if self.primitive_mode == 'line': - point2line_matching = self.match_point2line( - coords[selected], cur_corners, with_yaw, mode='top') - - point_mask, point_offset, point_sem = \ - self._assign_primitive_line_targets(point_mask, - point_offset, - point_sem, - coords[selected], - indices[selected], - cur_cls_label, - point2line_matching, - cur_corners, - [1, 1, 0, 0], - with_yaw, - mode='top') - - if 
self.primitive_mode == 'z' and \ - selected.sum() > self.train_cfg['num_point'] and \ - point2plane_dist[selected].var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets(point_mask, - point_offset, - point_sem, - coords[selected], - indices[selected], - cur_cls_label, - cur_corners, - with_yaw, - mode='top') - - # Get left two lines - plane_left_temp = self._get_plane_fomulation( - cur_corners[2] - cur_corners[3], - cur_corners[3] - cur_corners[0], cur_corners[0]) - - right_points = cur_corners[[4, 5, 7, 6]] - plane_left_temp /= torch.norm(plane_left_temp[:3]) - refined_distance = (right_points * plane_left_temp[:3]).sum(dim=1) - - if plane_left_temp[2] < self.train_cfg['lower_thresh']: - plane_left = plane_left_temp - plane_right = points.new_tensor([ - plane_left_temp[0], plane_left_temp[1], plane_left_temp[2], - -refined_distance.mean() - ]) - else: - raise NotImplementedError( - 'Normal vector of the plane should be horizontal!') - - # Get the boundary points here - point2plane_dist, selected = self.match_point2plane( - plane_left, coords) - - # Get left four lines - if self.primitive_mode == 'line': - point2line_matching = self.match_point2line( - coords[selected], cur_corners, with_yaw, mode='left') - point_mask, point_offset, point_sem = \ - self._assign_primitive_line_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - point2line_matching[2:], cur_corners, [2, 2], - with_yaw, mode='left') - - if self.primitive_mode == 'xy' and \ - selected.sum() > self.train_cfg['num_point'] and \ - point2plane_dist[selected].var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - cur_corners, with_yaw, mode='left') - - # Get the boundary points here - point2plane_dist, selected = self.match_point2plane( - plane_right, coords) - - # Get right four lines - if self.primitive_mode == 'line': - point2line_matching = self.match_point2line( - coords[selected], cur_corners, with_yaw, mode='right') - - point_mask, point_offset, point_sem = \ - self._assign_primitive_line_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - point2line_matching[2:], cur_corners, [2, 2], - with_yaw, mode='right') - - if self.primitive_mode == 'xy' and \ - selected.sum() > self.train_cfg['num_point'] and \ - point2plane_dist[selected].var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - cur_corners, with_yaw, mode='right') - - plane_front_temp = self._get_plane_fomulation( - cur_corners[0] - cur_corners[4], - cur_corners[4] - cur_corners[5], cur_corners[5]) - - back_points = cur_corners[[3, 2, 7, 6]] - plane_front_temp /= torch.norm(plane_front_temp[:3]) - refined_distance = (back_points * plane_front_temp[:3]).sum(dim=1) - - if plane_front_temp[2] < self.train_cfg['lower_thresh']: - plane_front = plane_front_temp - plane_back = points.new_tensor([ - plane_front_temp[0], plane_front_temp[1], - plane_front_temp[2], -torch.mean(refined_distance) - ]) - else: - raise NotImplementedError( - 'Normal vector of the plane should be horizontal!') - - # Get the boundary points here - point2plane_dist, selected = 
self.match_point2plane( - plane_front, coords) - - if self.primitive_mode == 'xy' and \ - selected.sum() > self.train_cfg['num_point'] and \ - (point2plane_dist[selected]).var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - cur_corners, with_yaw, mode='front') - - # Get the boundary points here - point2plane_dist, selected = self.match_point2plane( - plane_back, coords) - - if self.primitive_mode == 'xy' and \ - selected.sum() > self.train_cfg['num_point'] and \ - point2plane_dist[selected].var() < \ - self.train_cfg['var_thresh']: - - point_mask, point_offset, point_sem = \ - self._assign_primitive_surface_targets( - point_mask, point_offset, point_sem, - coords[selected], indices[selected], cur_cls_label, - cur_corners, with_yaw, mode='back') - - return (point_mask, point_sem, point_offset) - - def primitive_decode_scores(self, predictions, aggregated_points): - """Decode predicted parts to primitive head. - - Args: - predictions (torch.Tensor): primitive pridictions of each batch. - aggregated_points (torch.Tensor): The aggregated points - of vote stage. - - Returns: - Dict: Predictions of primitive head, including center, - semantic size and semantic scores. - """ - - ret_dict = {} - pred_transposed = predictions.transpose(2, 1) - - center = aggregated_points + pred_transposed[:, :, 0:3] - ret_dict['center_' + self.primitive_mode] = center - - if self.primitive_mode in ['z', 'xy']: - ret_dict['size_residuals_' + self.primitive_mode] = \ - pred_transposed[:, :, 3:3 + self.num_dims] - - ret_dict['sem_cls_scores_' + self.primitive_mode] = \ - pred_transposed[:, :, 3 + self.num_dims:] - - return ret_dict - - def check_horizon(self, points): - """Check whether is a horizontal plane. - - Args: - points (torch.Tensor): Points of input. - - Returns: - Bool: Flag of result. - """ - return (points[0][-1] == points[1][-1]) and \ - (points[1][-1] == points[2][-1]) and \ - (points[2][-1] == points[3][-1]) - - def check_dist(self, plane_equ, points): - """Whether the mean of points to plane distance is lower than thresh. - - Args: - plane_equ (torch.Tensor): Plane to be checked. - points (torch.Tensor): Points to be checked. - - Returns: - Tuple: Flag of result. - """ - return (points[:, 2] + - plane_equ[-1]).sum() / 4.0 < self.train_cfg['lower_thresh'] - - def point2line_dist(self, points, pts_a, pts_b): - """Calculate the distance from point to line. - - Args: - points (torch.Tensor): Points of input. - pts_a (torch.Tensor): Point on the specific line. - pts_b (torch.Tensor): Point on the specific line. - - Returns: - torch.Tensor: Distance between each point to line. - """ - line_a2b = pts_b - pts_a - line_a2pts = points - pts_a - length = (line_a2pts * line_a2b.view(1, 3)).sum(1) / \ - line_a2b.norm() - dist = (line_a2pts.norm(dim=1)**2 - length**2).sqrt() - - return dist - - def match_point2line(self, points, corners, with_yaw, mode='bottom'): - """Match points to corresponding line. - - Args: - points (torch.Tensor): Points of input. - corners (torch.Tensor): Eight corners of a bounding box. - with_yaw (Bool): Whether the boundind box is with rotation. - mode (str, optional): Specify which line should be matched, - available mode are ('bottom', 'top', 'left', 'right'). - Defaults to 'bottom'. - - Returns: - Tuple: Flag of matching correspondence. 
- """ - if with_yaw: - corners_pair = { - 'bottom': [[0, 3], [4, 7], [0, 4], [3, 7]], - 'top': [[1, 2], [5, 6], [1, 5], [2, 6]], - 'left': [[0, 1], [3, 2], [0, 1], [3, 2]], - 'right': [[4, 5], [7, 6], [4, 5], [7, 6]] - } - selected_list = [] - for pair_index in corners_pair[mode]: - selected = self.point2line_dist( - points, corners[pair_index[0]], corners[pair_index[1]]) \ - < self.train_cfg['line_thresh'] - selected_list.append(selected) - else: - xmin, ymin, _ = corners.min(0)[0] - xmax, ymax, _ = corners.max(0)[0] - sel1 = torch.abs(points[:, 0] - - xmin) < self.train_cfg['line_thresh'] - sel2 = torch.abs(points[:, 0] - - xmax) < self.train_cfg['line_thresh'] - sel3 = torch.abs(points[:, 1] - - ymin) < self.train_cfg['line_thresh'] - sel4 = torch.abs(points[:, 1] - - ymax) < self.train_cfg['line_thresh'] - selected_list = [sel1, sel2, sel3, sel4] - return selected_list - - def match_point2plane(self, plane, points): - """Match points to plane. - - Args: - plane (torch.Tensor): Equation of the plane. - points (torch.Tensor): Points of input. - - Returns: - Tuple: Distance of each point to the plane and - flag of matching correspondence. - """ - point2plane_dist = torch.abs((points * plane[:3]).sum(dim=1) + - plane[-1]) - min_dist = point2plane_dist.min() - selected = torch.abs(point2plane_dist - - min_dist) < self.train_cfg['dist_thresh'] - return point2plane_dist, selected - - def compute_primitive_loss(self, primitive_center, primitive_semantic, - semantic_scores, num_proposal, - gt_primitive_center, gt_primitive_semantic, - gt_sem_cls_label, gt_primitive_mask): - """Compute loss of primitive module. - - Args: - primitive_center (torch.Tensor): Pridictions of primitive center. - primitive_semantic (torch.Tensor): Pridictions of primitive - semantic. - semantic_scores (torch.Tensor): Pridictions of primitive - semantic scores. - num_proposal (int): The number of primitive proposal. - gt_primitive_center (torch.Tensor): Ground truth of - primitive center. - gt_votes_sem (torch.Tensor): Ground truth of primitive semantic. - gt_sem_cls_label (torch.Tensor): Ground truth of primitive - semantic class. - gt_primitive_mask (torch.Tensor): Ground truth of primitive mask. - - Returns: - Tuple: Loss of primitive module. - """ - batch_size = primitive_center.shape[0] - vote_xyz_reshape = primitive_center.view(batch_size * num_proposal, -1, - 3) - - center_loss = self.center_loss( - vote_xyz_reshape, - gt_primitive_center, - dst_weight=gt_primitive_mask.view(batch_size * num_proposal, 1))[1] - - if self.primitive_mode != 'line': - size_xyz_reshape = primitive_semantic.view( - batch_size * num_proposal, -1, self.num_dims).contiguous() - size_loss = self.semantic_reg_loss( - size_xyz_reshape, - gt_primitive_semantic, - dst_weight=gt_primitive_mask.view(batch_size * num_proposal, - 1))[1] - else: - size_loss = center_loss.new_tensor(0.0) - - # Semantic cls loss - sem_cls_loss = self.semantic_cls_loss( - semantic_scores, gt_sem_cls_label, weight=gt_primitive_mask) - - return center_loss, size_loss, sem_cls_loss - - def get_primitive_center(self, pred_flag, center): - """Generate primitive center from predictions. - - Args: - pred_flag (torch.Tensor): Scores of primitive center. - center (torch.Tensor): Pridictions of primitive center. - - Returns: - Tuple: Primitive center and the prediction indices. 
- """ - ind_normal = F.softmax(pred_flag, dim=1) - pred_indices = (ind_normal[:, 1, :] > - self.surface_thresh).detach().float() - selected = (ind_normal[:, 1, :] <= - self.surface_thresh).detach().float() - offset = torch.ones_like(center) * self.upper_thresh - center = center + offset * selected.unsqueeze(-1) - return center, pred_indices - - def _assign_primitive_line_targets(self, - point_mask, - point_offset, - point_sem, - coords, - indices, - cls_label, - point2line_matching, - corners, - center_axises, - with_yaw, - mode='bottom'): - """Generate targets of line primitive. - - Args: - point_mask (torch.Tensor): Tensor to store the ground - truth of mask. - point_offset (torch.Tensor): Tensor to store the ground - truth of offset. - point_sem (torch.Tensor): Tensor to store the ground - truth of semantic. - coords (torch.Tensor): The selected points. - indices (torch.Tensor): Indices of the selected points. - cls_label (int): Class label of the ground truth bounding box. - point2line_matching (torch.Tensor): Flag indicate that - matching line of each point. - corners (torch.Tensor): Corners of the ground truth bounding box. - center_axises (list[int]): Indicate in which axis the line center - should be refined. - with_yaw (Bool): Whether the boundind box is with rotation. - mode (str, optional): Specify which line should be matched, - available mode are ('bottom', 'top', 'left', 'right'). - Defaults to 'bottom'. - - Returns: - Tuple: Targets of the line primitive. - """ - corners_pair = { - 'bottom': [[0, 3], [4, 7], [0, 4], [3, 7]], - 'top': [[1, 2], [5, 6], [1, 5], [2, 6]], - 'left': [[0, 1], [3, 2]], - 'right': [[4, 5], [7, 6]] - } - corners_pair = corners_pair[mode] - assert len(corners_pair) == len(point2line_matching) == len( - center_axises) - for line_select, center_axis, pair_index in zip( - point2line_matching, center_axises, corners_pair): - if line_select.sum() > self.train_cfg['num_point_line']: - point_mask[indices[line_select]] = 1.0 - - if with_yaw: - line_center = (corners[pair_index[0]] + - corners[pair_index[1]]) / 2 - else: - line_center = coords[line_select].mean(dim=0) - line_center[center_axis] = corners[:, center_axis].mean() - - point_offset[indices[line_select]] = \ - line_center - coords[line_select] - point_sem[indices[line_select]] = \ - point_sem.new_tensor([line_center[0], line_center[1], - line_center[2], cls_label]) - return point_mask, point_offset, point_sem - - def _assign_primitive_surface_targets(self, - point_mask, - point_offset, - point_sem, - coords, - indices, - cls_label, - corners, - with_yaw, - mode='bottom'): - """Generate targets for primitive z and primitive xy. - - Args: - point_mask (torch.Tensor): Tensor to store the ground - truth of mask. - point_offset (torch.Tensor): Tensor to store the ground - truth of offset. - point_sem (torch.Tensor): Tensor to store the ground - truth of semantic. - coords (torch.Tensor): The selected points. - indices (torch.Tensor): Indices of the selected points. - cls_label (int): Class label of the ground truth bounding box. - corners (torch.Tensor): Corners of the ground truth bounding box. - with_yaw (Bool): Whether the boundind box is with rotation. - mode (str, optional): Specify which line should be matched, - available mode are ('bottom', 'top', 'left', 'right', - 'front', 'back'). - Defaults to 'bottom'. - - Returns: - Tuple: Targets of the center primitive. 
- """ - point_mask[indices] = 1.0 - corners_pair = { - 'bottom': [0, 7], - 'top': [1, 6], - 'left': [0, 1], - 'right': [4, 5], - 'front': [0, 1], - 'back': [3, 2] - } - pair_index = corners_pair[mode] - if self.primitive_mode == 'z': - if with_yaw: - center = (corners[pair_index[0]] + - corners[pair_index[1]]) / 2.0 - center[2] = coords[:, 2].mean() - point_sem[indices] = point_sem.new_tensor([ - center[0], center[1], - center[2], (corners[4] - corners[0]).norm(), - (corners[3] - corners[0]).norm(), cls_label - ]) - else: - center = point_mask.new_tensor([ - corners[:, 0].mean(), corners[:, 1].mean(), - coords[:, 2].mean() - ]) - point_sem[indices] = point_sem.new_tensor([ - center[0], center[1], center[2], - corners[:, 0].max() - corners[:, 0].min(), - corners[:, 1].max() - corners[:, 1].min(), cls_label - ]) - elif self.primitive_mode == 'xy': - if with_yaw: - center = coords.mean(0) - center[2] = (corners[pair_index[0], 2] + - corners[pair_index[1], 2]) / 2.0 - point_sem[indices] = point_sem.new_tensor([ - center[0], center[1], center[2], - corners[pair_index[1], 2] - corners[pair_index[0], 2], - cls_label - ]) - else: - center = point_mask.new_tensor([ - coords[:, 0].mean(), coords[:, 1].mean(), - corners[:, 2].mean() - ]) - point_sem[indices] = point_sem.new_tensor([ - center[0], center[1], center[2], - corners[:, 2].max() - corners[:, 2].min(), cls_label - ]) - point_offset[indices] = center - coords - return point_mask, point_offset, point_sem - - def _get_plane_fomulation(self, vector1, vector2, point): - """Compute the equation of the plane. - - Args: - vector1 (torch.Tensor): Parallel vector of the plane. - vector2 (torch.Tensor): Parallel vector of the plane. - point (torch.Tensor): Point on the plane. - - Returns: - torch.Tensor: Equation of the plane. - """ - surface_norm = torch.cross(vector1, vector2) - surface_dis = -torch.dot(surface_norm, point) - plane = point.new_tensor( - [surface_norm[0], surface_norm[1], surface_norm[2], surface_dis]) - return plane diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/part_aggregation_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/part_aggregation_roi_head.py deleted file mode 100644 index 777aba51..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/part_aggregation_roi_head.py +++ /dev/null @@ -1,327 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -from torch.nn import functional as F - -from mmdet3d.core import AssignResult -from mmdet3d.core.bbox import bbox3d2result, bbox3d2roi -from mmdet.core import build_assigner, build_sampler -from ..builder import HEADS, build_head, build_roi_extractor -from .base_3droi_head import Base3DRoIHead - - -@HEADS.register_module() -class PartAggregationROIHead(Base3DRoIHead): - """Part aggregation roi head for PartA2. - - Args: - semantic_head (ConfigDict): Config of semantic head. - num_classes (int): The number of classes. - seg_roi_extractor (ConfigDict): Config of seg_roi_extractor. - part_roi_extractor (ConfigDict): Config of part_roi_extractor. - bbox_head (ConfigDict): Config of bbox_head. - train_cfg (ConfigDict): Training config. - test_cfg (ConfigDict): Testing config. 
- """ - - def __init__(self, - semantic_head, - num_classes=3, - seg_roi_extractor=None, - part_roi_extractor=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(PartAggregationROIHead, self).__init__( - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg) - self.num_classes = num_classes - assert semantic_head is not None - self.semantic_head = build_head(semantic_head) - - if seg_roi_extractor is not None: - self.seg_roi_extractor = build_roi_extractor(seg_roi_extractor) - if part_roi_extractor is not None: - self.part_roi_extractor = build_roi_extractor(part_roi_extractor) - - self.init_assigner_sampler() - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - def init_mask_head(self): - """Initialize mask head, skip since ``PartAggregationROIHead`` does not - have one.""" - pass - - def init_bbox_head(self, bbox_head): - """Initialize box head.""" - self.bbox_head = build_head(bbox_head) - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - self.bbox_assigner = None - self.bbox_sampler = None - if self.train_cfg: - if isinstance(self.train_cfg.assigner, dict): - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - elif isinstance(self.train_cfg.assigner, list): - self.bbox_assigner = [ - build_assigner(res) for res in self.train_cfg.assigner - ] - self.bbox_sampler = build_sampler(self.train_cfg.sampler) - - @property - def with_semantic(self): - """bool: whether the head has semantic branch""" - return hasattr(self, - 'semantic_head') and self.semantic_head is not None - - def forward_train(self, feats_dict, voxels_dict, img_metas, proposal_list, - gt_bboxes_3d, gt_labels_3d): - """Training forward function of PartAggregationROIHead. - - Args: - feats_dict (dict): Contains features from the first stage. - voxels_dict (dict): Contains information of voxels. - img_metas (list[dict]): Meta info of each image. - proposal_list (list[dict]): Proposal information from rpn. - The dictionary should contain the following keys: - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Proposal bboxes - - labels_3d (torch.Tensor): Labels of proposals - - cls_preds (torch.Tensor): Original scores of proposals - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): - GT bboxes of each sample. The bboxes are encapsulated - by 3D box structures. - gt_labels_3d (list[LongTensor]): GT labels of each sample. - - Returns: - dict: losses from each head. - - - loss_semantic (torch.Tensor): loss of semantic head - - loss_bbox (torch.Tensor): loss of bboxes - """ - losses = dict() - if self.with_semantic: - semantic_results = self._semantic_forward_train( - feats_dict['seg_features'], voxels_dict, gt_bboxes_3d, - gt_labels_3d) - losses.update(semantic_results['loss_semantic']) - - sample_results = self._assign_and_sample(proposal_list, gt_bboxes_3d, - gt_labels_3d) - if self.with_bbox: - bbox_results = self._bbox_forward_train( - feats_dict['seg_features'], semantic_results['part_feats'], - voxels_dict, sample_results) - losses.update(bbox_results['loss_bbox']) - - return losses - - def simple_test(self, feats_dict, voxels_dict, img_metas, proposal_list, - **kwargs): - """Simple testing forward function of PartAggregationROIHead. 
- - Note: - This function assumes that the batch size is 1 - - Args: - feats_dict (dict): Contains features from the first stage. - voxels_dict (dict): Contains information of voxels. - img_metas (list[dict]): Meta info of each image. - proposal_list (list[dict]): Proposal information from rpn. - - Returns: - dict: Bbox results of one frame. - """ - assert self.with_bbox, 'Bbox head must be implemented.' - assert self.with_semantic - - semantic_results = self.semantic_head(feats_dict['seg_features']) - - rois = bbox3d2roi([res['boxes_3d'].tensor for res in proposal_list]) - labels_3d = [res['labels_3d'] for res in proposal_list] - cls_preds = [res['cls_preds'] for res in proposal_list] - bbox_results = self._bbox_forward(feats_dict['seg_features'], - semantic_results['part_feats'], - voxels_dict, rois) - - bbox_list = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - bbox_results['bbox_pred'], - labels_3d, - cls_preds, - img_metas, - cfg=self.test_cfg) - - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def _bbox_forward_train(self, seg_feats, part_feats, voxels_dict, - sampling_results): - """Forward training function of roi_extractor and bbox_head. - - Args: - seg_feats (torch.Tensor): Point-wise semantic features. - part_feats (torch.Tensor): Point-wise part prediction features. - voxels_dict (dict): Contains information of voxels. - sampling_results (:obj:`SamplingResult`): Sampled results used - for training. - - Returns: - dict: Forward results including losses and predictions. - """ - rois = bbox3d2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(seg_feats, part_feats, voxels_dict, - rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, - self.train_cfg) - loss_bbox = self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def _bbox_forward(self, seg_feats, part_feats, voxels_dict, rois): - """Forward function of roi_extractor and bbox_head used in both - training and testing. - - Args: - seg_feats (torch.Tensor): Point-wise semantic features. - part_feats (torch.Tensor): Point-wise part prediction features. - voxels_dict (dict): Contains information of voxels. - rois (Tensor): Roi boxes. - - Returns: - dict: Contains predictions of bbox_head and - features of roi_extractor. - """ - pooled_seg_feats = self.seg_roi_extractor(seg_feats, - voxels_dict['voxel_centers'], - voxels_dict['coors'][..., 0], - rois) - pooled_part_feats = self.part_roi_extractor( - part_feats, voxels_dict['voxel_centers'], - voxels_dict['coors'][..., 0], rois) - cls_score, bbox_pred = self.bbox_head(pooled_seg_feats, - pooled_part_feats) - - bbox_results = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - pooled_seg_feats=pooled_seg_feats, - pooled_part_feats=pooled_part_feats) - return bbox_results - - def _assign_and_sample(self, proposal_list, gt_bboxes_3d, gt_labels_3d): - """Assign and sample proposals for training. - - Args: - proposal_list (list[dict]): Proposals produced by RPN. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes. - gt_labels_3d (list[torch.Tensor]): Ground truth labels - - Returns: - list[:obj:`SamplingResult`]: Sampled results of each training - sample. 
- """ - sampling_results = [] - # bbox assign - for batch_idx in range(len(proposal_list)): - cur_proposal_list = proposal_list[batch_idx] - cur_boxes = cur_proposal_list['boxes_3d'] - cur_labels_3d = cur_proposal_list['labels_3d'] - cur_gt_bboxes = gt_bboxes_3d[batch_idx].to(cur_boxes.device) - cur_gt_labels = gt_labels_3d[batch_idx] - - batch_num_gts = 0 - # 0 is bg - batch_gt_indis = cur_gt_labels.new_full((len(cur_boxes), ), 0) - batch_max_overlaps = cur_boxes.tensor.new_zeros(len(cur_boxes)) - # -1 is bg - batch_gt_labels = cur_gt_labels.new_full((len(cur_boxes), ), -1) - - # each class may have its own assigner - if isinstance(self.bbox_assigner, list): - for i, assigner in enumerate(self.bbox_assigner): - gt_per_cls = (cur_gt_labels == i) - pred_per_cls = (cur_labels_3d == i) - cur_assign_res = assigner.assign( - cur_boxes.tensor[pred_per_cls], - cur_gt_bboxes.tensor[gt_per_cls], - gt_labels=cur_gt_labels[gt_per_cls]) - # gather assign_results in different class into one result - batch_num_gts += cur_assign_res.num_gts - # gt inds (1-based) - gt_inds_arange_pad = gt_per_cls.nonzero( - as_tuple=False).view(-1) + 1 - # pad 0 for indice unassigned - gt_inds_arange_pad = F.pad( - gt_inds_arange_pad, (1, 0), mode='constant', value=0) - # pad -1 for indice ignore - gt_inds_arange_pad = F.pad( - gt_inds_arange_pad, (1, 0), mode='constant', value=-1) - # convert to 0~gt_num+2 for indices - gt_inds_arange_pad += 1 - # now 0 is bg, >1 is fg in batch_gt_indis - batch_gt_indis[pred_per_cls] = gt_inds_arange_pad[ - cur_assign_res.gt_inds + 1] - 1 - batch_max_overlaps[ - pred_per_cls] = cur_assign_res.max_overlaps - batch_gt_labels[pred_per_cls] = cur_assign_res.labels - - assign_result = AssignResult(batch_num_gts, batch_gt_indis, - batch_max_overlaps, - batch_gt_labels) - else: # for single class - assign_result = self.bbox_assigner.assign( - cur_boxes.tensor, - cur_gt_bboxes.tensor, - gt_labels=cur_gt_labels) - # sample boxes - sampling_result = self.bbox_sampler.sample(assign_result, - cur_boxes.tensor, - cur_gt_bboxes.tensor, - cur_gt_labels) - sampling_results.append(sampling_result) - return sampling_results - - def _semantic_forward_train(self, x, voxels_dict, gt_bboxes_3d, - gt_labels_3d): - """Train semantic head. - - Args: - x (torch.Tensor): Point-wise semantic features for segmentation - voxels_dict (dict): Contains information of voxels. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes. - gt_labels_3d (list[torch.Tensor]): Ground truth labels - - Returns: - dict: Segmentation results including losses - """ - semantic_results = self.semantic_head(x) - semantic_targets = self.semantic_head.get_targets( - voxels_dict, gt_bboxes_3d, gt_labels_3d) - loss_semantic = self.semantic_head.loss(semantic_results, - semantic_targets) - semantic_results.update(loss_semantic=loss_semantic) - return semantic_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/point_rcnn_roi_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/point_rcnn_roi_head.py deleted file mode 100644 index 808b305d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/point_rcnn_roi_head.py +++ /dev/null @@ -1,288 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from torch.nn import functional as F - -from mmdet3d.core import AssignResult -from mmdet3d.core.bbox import bbox3d2result, bbox3d2roi -from mmdet.core import build_assigner, build_sampler -from ..builder import HEADS, build_head, build_roi_extractor -from .base_3droi_head import Base3DRoIHead - - -@HEADS.register_module() -class PointRCNNRoIHead(Base3DRoIHead): - """RoI head for PointRCNN. - - Args: - bbox_head (dict): Config of bbox_head. - point_roi_extractor (dict): Config of RoI extractor. - train_cfg (dict): Train configs. - test_cfg (dict): Test configs. - depth_normalizer (float, optional): Normalize depth feature. - Defaults to 70.0. - init_cfg (dict, optional): Config of initialization. Defaults to None. - """ - - def __init__(self, - bbox_head, - point_roi_extractor, - train_cfg, - test_cfg, - depth_normalizer=70.0, - pretrained=None, - init_cfg=None): - super(PointRCNNRoIHead, self).__init__( - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - self.depth_normalizer = depth_normalizer - - if point_roi_extractor is not None: - self.point_roi_extractor = build_roi_extractor(point_roi_extractor) - - self.init_assigner_sampler() - - def init_bbox_head(self, bbox_head): - """Initialize box head. - - Args: - bbox_head (dict): Config dict of RoI Head. - """ - self.bbox_head = build_head(bbox_head) - - def init_mask_head(self): - """Initialize maek head.""" - pass - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - self.bbox_assigner = None - self.bbox_sampler = None - if self.train_cfg: - if isinstance(self.train_cfg.assigner, dict): - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - elif isinstance(self.train_cfg.assigner, list): - self.bbox_assigner = [ - build_assigner(res) for res in self.train_cfg.assigner - ] - self.bbox_sampler = build_sampler(self.train_cfg.sampler) - - def forward_train(self, feats_dict, input_metas, proposal_list, - gt_bboxes_3d, gt_labels_3d): - """Training forward function of PointRCNNRoIHead. - - Args: - feats_dict (dict): Contains features from the first stage. - imput_metas (list[dict]): Meta info of each input. - proposal_list (list[dict]): Proposal information from rpn. - The dictionary should contain the following keys: - - - boxes_3d (:obj:`BaseInstance3DBoxes`): Proposal bboxes - - labels_3d (torch.Tensor): Labels of proposals - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): - GT bboxes of each sample. The bboxes are encapsulated - by 3D box structures. - gt_labels_3d (list[LongTensor]): GT labels of each sample. - - Returns: - dict: Losses from RoI RCNN head. 
- - loss_bbox (torch.Tensor): Loss of bboxes - """ - features = feats_dict['features'] - points = feats_dict['points'] - point_cls_preds = feats_dict['points_cls_preds'] - sem_scores = point_cls_preds.sigmoid() - point_scores = sem_scores.max(-1)[0] - - sample_results = self._assign_and_sample(proposal_list, gt_bboxes_3d, - gt_labels_3d) - - # concat the depth, semantic features and backbone features - features = features.transpose(1, 2).contiguous() - point_depths = points.norm(dim=2) / self.depth_normalizer - 0.5 - features_list = [ - point_scores.unsqueeze(2), - point_depths.unsqueeze(2), features - ] - features = torch.cat(features_list, dim=2) - - bbox_results = self._bbox_forward_train(features, points, - sample_results) - losses = dict() - losses.update(bbox_results['loss_bbox']) - - return losses - - def simple_test(self, feats_dict, img_metas, proposal_list, **kwargs): - """Simple testing forward function of PointRCNNRoIHead. - - Note: - This function assumes that the batch size is 1 - - Args: - feats_dict (dict): Contains features from the first stage. - img_metas (list[dict]): Meta info of each image. - proposal_list (list[dict]): Proposal information from rpn. - - Returns: - dict: Bbox results of one frame. - """ - rois = bbox3d2roi([res['boxes_3d'].tensor for res in proposal_list]) - labels_3d = [res['labels_3d'] for res in proposal_list] - - features = feats_dict['features'] - points = feats_dict['points'] - point_cls_preds = feats_dict['points_cls_preds'] - sem_scores = point_cls_preds.sigmoid() - point_scores = sem_scores.max(-1)[0] - - features = features.transpose(1, 2).contiguous() - point_depths = points.norm(dim=2) / self.depth_normalizer - 0.5 - features_list = [ - point_scores.unsqueeze(2), - point_depths.unsqueeze(2), features - ] - - features = torch.cat(features_list, dim=2) - batch_size = features.shape[0] - bbox_results = self._bbox_forward(features, points, batch_size, rois) - object_score = bbox_results['cls_score'].sigmoid() - bbox_list = self.bbox_head.get_bboxes( - rois, - object_score, - bbox_results['bbox_pred'], - labels_3d, - img_metas, - cfg=self.test_cfg) - - bbox_results = [ - bbox3d2result(bboxes, scores, labels) - for bboxes, scores, labels in bbox_list - ] - return bbox_results - - def _bbox_forward_train(self, features, points, sampling_results): - """Forward training function of roi_extractor and bbox_head. - - Args: - features (torch.Tensor): Backbone features with depth and \ - semantic features. - points (torch.Tensor): Pointcloud. - sampling_results (:obj:`SamplingResult`): Sampled results used - for training. - - Returns: - dict: Forward results including losses and predictions. - """ - rois = bbox3d2roi([res.bboxes for res in sampling_results]) - batch_size = features.shape[0] - bbox_results = self._bbox_forward(features, points, batch_size, rois) - bbox_targets = self.bbox_head.get_targets(sampling_results, - self.train_cfg) - - loss_bbox = self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def _bbox_forward(self, features, points, batch_size, rois): - """Forward function of roi_extractor and bbox_head used in both - training and testing. - - Args: - features (torch.Tensor): Backbone features with depth and - semantic features. - points (torch.Tensor): Pointcloud. - batch_size (int): Batch size. - rois (torch.Tensor): RoI boxes. - - Returns: - dict: Contains predictions of bbox_head and - features of roi_extractor. 
- """ - pooled_point_feats = self.point_roi_extractor(features, points, - batch_size, rois) - - cls_score, bbox_pred = self.bbox_head(pooled_point_feats) - bbox_results = dict(cls_score=cls_score, bbox_pred=bbox_pred) - return bbox_results - - def _assign_and_sample(self, proposal_list, gt_bboxes_3d, gt_labels_3d): - """Assign and sample proposals for training. - - Args: - proposal_list (list[dict]): Proposals produced by RPN. - gt_bboxes_3d (list[:obj:`BaseInstance3DBoxes`]): Ground truth - boxes. - gt_labels_3d (list[torch.Tensor]): Ground truth labels - - Returns: - list[:obj:`SamplingResult`]: Sampled results of each training - sample. - """ - sampling_results = [] - # bbox assign - for batch_idx in range(len(proposal_list)): - cur_proposal_list = proposal_list[batch_idx] - cur_boxes = cur_proposal_list['boxes_3d'] - cur_labels_3d = cur_proposal_list['labels_3d'] - cur_gt_bboxes = gt_bboxes_3d[batch_idx].to(cur_boxes.device) - cur_gt_labels = gt_labels_3d[batch_idx] - batch_num_gts = 0 - # 0 is bg - batch_gt_indis = cur_gt_labels.new_full((len(cur_boxes), ), 0) - batch_max_overlaps = cur_boxes.tensor.new_zeros(len(cur_boxes)) - # -1 is bg - batch_gt_labels = cur_gt_labels.new_full((len(cur_boxes), ), -1) - - # each class may have its own assigner - if isinstance(self.bbox_assigner, list): - for i, assigner in enumerate(self.bbox_assigner): - gt_per_cls = (cur_gt_labels == i) - pred_per_cls = (cur_labels_3d == i) - cur_assign_res = assigner.assign( - cur_boxes.tensor[pred_per_cls], - cur_gt_bboxes.tensor[gt_per_cls], - gt_labels=cur_gt_labels[gt_per_cls]) - # gather assign_results in different class into one result - batch_num_gts += cur_assign_res.num_gts - # gt inds (1-based) - gt_inds_arange_pad = gt_per_cls.nonzero( - as_tuple=False).view(-1) + 1 - # pad 0 for indice unassigned - gt_inds_arange_pad = F.pad( - gt_inds_arange_pad, (1, 0), mode='constant', value=0) - # pad -1 for indice ignore - gt_inds_arange_pad = F.pad( - gt_inds_arange_pad, (1, 0), mode='constant', value=-1) - # convert to 0~gt_num+2 for indices - gt_inds_arange_pad += 1 - # now 0 is bg, >1 is fg in batch_gt_indis - batch_gt_indis[pred_per_cls] = gt_inds_arange_pad[ - cur_assign_res.gt_inds + 1] - 1 - batch_max_overlaps[ - pred_per_cls] = cur_assign_res.max_overlaps - batch_gt_labels[pred_per_cls] = cur_assign_res.labels - - assign_result = AssignResult(batch_num_gts, batch_gt_indis, - batch_max_overlaps, - batch_gt_labels) - else: # for single class - assign_result = self.bbox_assigner.assign( - cur_boxes.tensor, - cur_gt_bboxes.tensor, - gt_labels=cur_gt_labels) - - # sample boxes - sampling_result = self.bbox_sampler.sample(assign_result, - cur_boxes.tensor, - cur_gt_bboxes.tensor, - cur_gt_labels) - sampling_results.append(sampling_result) - return sampling_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/roi_extractors/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/roi_extractors/__init__.py deleted file mode 100644 index f6a9f2a7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/roi_extractors/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.models.roi_heads.roi_extractors import SingleRoIExtractor -from .single_roiaware_extractor import Single3DRoIAwareExtractor -from .single_roipoint_extractor import Single3DRoIPointExtractor - -__all__ = [ - 'SingleRoIExtractor', 'Single3DRoIAwareExtractor', - 'Single3DRoIPointExtractor' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/roi_extractors/single_roiaware_extractor.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/roi_extractors/single_roiaware_extractor.py deleted file mode 100644 index 18977304..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/roi_extractors/single_roiaware_extractor.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv import ops -from mmcv.runner import BaseModule - -from mmdet3d.models.builder import ROI_EXTRACTORS - - -@ROI_EXTRACTORS.register_module() -class Single3DRoIAwareExtractor(BaseModule): - """Point-wise roi-aware Extractor. - - Extract Point-wise roi features. - - Args: - roi_layer (dict): The config of roi layer. - """ - - def __init__(self, roi_layer=None, init_cfg=None): - super(Single3DRoIAwareExtractor, self).__init__(init_cfg=init_cfg) - self.roi_layer = self.build_roi_layers(roi_layer) - - def build_roi_layers(self, layer_cfg): - """Build roi layers using `layer_cfg`""" - cfg = layer_cfg.copy() - layer_type = cfg.pop('type') - assert hasattr(ops, layer_type) - layer_cls = getattr(ops, layer_type) - roi_layers = layer_cls(**cfg) - return roi_layers - - def forward(self, feats, coordinate, batch_inds, rois): - """Extract point-wise roi features. - - Args: - feats (torch.FloatTensor): Point-wise features with - shape (batch, npoints, channels) for pooling. - coordinate (torch.FloatTensor): Coordinate of each point. - batch_inds (torch.LongTensor): Indicate the batch of each point. - rois (torch.FloatTensor): Roi boxes with batch indices. - - Returns: - torch.FloatTensor: Pooled features - """ - pooled_roi_feats = [] - for batch_idx in range(int(batch_inds.max()) + 1): - roi_inds = (rois[..., 0].int() == batch_idx) - coors_inds = (batch_inds.int() == batch_idx) - pooled_roi_feat = self.roi_layer(rois[..., 1:][roi_inds], - coordinate[coors_inds], - feats[coors_inds]) - pooled_roi_feats.append(pooled_roi_feat) - pooled_roi_feats = torch.cat(pooled_roi_feats, 0) - return pooled_roi_feats diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/roi_extractors/single_roipoint_extractor.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/roi_extractors/single_roipoint_extractor.py deleted file mode 100644 index 596f592a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/roi_heads/roi_extractors/single_roipoint_extractor.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv import ops -from torch import nn as nn - -from mmdet3d.core.bbox.structures import rotation_3d_in_axis -from mmdet3d.models.builder import ROI_EXTRACTORS - - -@ROI_EXTRACTORS.register_module() -class Single3DRoIPointExtractor(nn.Module): - """Point-wise roi-aware Extractor. - - Extract Point-wise roi features. - - Args: - roi_layer (dict): The config of roi layer. 
- """ - - def __init__(self, roi_layer=None): - super(Single3DRoIPointExtractor, self).__init__() - self.roi_layer = self.build_roi_layers(roi_layer) - - def build_roi_layers(self, layer_cfg): - """Build roi layers using `layer_cfg`""" - cfg = layer_cfg.copy() - layer_type = cfg.pop('type') - assert hasattr(ops, layer_type) - layer_cls = getattr(ops, layer_type) - roi_layers = layer_cls(**cfg) - return roi_layers - - def forward(self, feats, coordinate, batch_inds, rois): - """Extract point-wise roi features. - - Args: - feats (torch.FloatTensor): Point-wise features with - shape (batch, npoints, channels) for pooling. - coordinate (torch.FloatTensor): Coordinate of each point. - batch_inds (torch.LongTensor): Indicate the batch of each point. - rois (torch.FloatTensor): Roi boxes with batch indices. - - Returns: - torch.FloatTensor: Pooled features - """ - rois = rois[..., 1:] - rois = rois.view(batch_inds, -1, rois.shape[-1]) - with torch.no_grad(): - pooled_roi_feat, pooled_empty_flag = self.roi_layer( - coordinate, feats, rois) - - # canonical transformation - roi_center = rois[:, :, 0:3] - pooled_roi_feat[:, :, :, 0:3] -= roi_center.unsqueeze(dim=2) - pooled_roi_feat = pooled_roi_feat.view(-1, - pooled_roi_feat.shape[-2], - pooled_roi_feat.shape[-1]) - pooled_roi_feat[:, :, 0:3] = rotation_3d_in_axis( - pooled_roi_feat[:, :, 0:3], - -(rois.view(-1, rois.shape[-1])[:, 6]), - axis=2) - pooled_roi_feat[pooled_empty_flag.view(-1) > 0] = 0 - - return pooled_roi_feat diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/segmentors/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/segmentors/__init__.py deleted file mode 100644 index 1b8bbe6a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/segmentors/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from .base import Base3DSegmentor -from .encoder_decoder import EncoderDecoder3D - -__all__ = ['Base3DSegmentor', 'EncoderDecoder3D'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/segmentors/base.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/segmentors/base.py deleted file mode 100644 index e66eefce..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/segmentors/base.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from os import path as osp - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC -from mmcv.runner import auto_fp16 - -from mmdet3d.core import show_seg_result -from mmseg.models.segmentors import BaseSegmentor - - -class Base3DSegmentor(BaseSegmentor): - """Base class for 3D segmentors. - - The main difference with `BaseSegmentor` is that we modify the keys in - data_dict and use a 3D seg specific visualization function. - """ - - @property - def with_regularization_loss(self): - """bool: whether the segmentor has regularization loss for weight""" - return hasattr(self, 'loss_regularization') and \ - self.loss_regularization is not None - - def forward_test(self, points, img_metas, **kwargs): - """Calls either simple_test or aug_test depending on the length of - outer list of points. If len(points) == 1, call simple_test. 
Otherwise - call aug_test to aggregate the test results by e.g. voting. - - Args: - points (list[list[torch.Tensor]]): the outer list indicates - test-time augmentations and inner torch.Tensor should have a - shape BXNxC, which contains all points in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - """ - for var, name in [(points, 'points'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(points) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(points)}) != ' - f'num of image meta ({len(img_metas)})') - - if num_augs == 1: - return self.simple_test(points[0], img_metas[0], **kwargs) - else: - return self.aug_test(points, img_metas, **kwargs) - - @auto_fp16(apply_to=('points')) - def forward(self, return_loss=True, **kwargs): - """Calls either forward_train or forward_test depending on whether - return_loss=True. - - Note this setting will change the expected inputs. When - `return_loss=True`, point and img_metas are single-nested (i.e. - torch.Tensor and list[dict]), and when `resturn_loss=False`, point and - img_metas should be double nested (i.e. list[torch.Tensor], - list[list[dict]]), with the outer list indicating test time - augmentations. - """ - if return_loss: - return self.forward_train(**kwargs) - else: - return self.forward_test(**kwargs) - - def show_results(self, - data, - result, - palette=None, - out_dir=None, - ignore_index=None, - show=False, - score_thr=None): - """Results visualization. - - Args: - data (list[dict]): Input points and the information of the sample. - result (list[dict]): Prediction results. - palette (list[list[int]]] | np.ndarray): The palette of - segmentation map. If None is given, random palette will be - generated. Default: None - out_dir (str): Output directory of visualization result. - ignore_index (int, optional): The label index to be ignored, e.g. - unannotated points. If None is given, set to len(self.CLASSES). - Defaults to None. - show (bool, optional): Determines whether you are - going to show result by open3d. - Defaults to False. - TODO: implement score_thr of Base3DSegmentor. - score_thr (float, optional): Score threshold of bounding boxes. - Default to None. - Not implemented yet, but it is here for unification. - """ - assert out_dir is not None, 'Expect out_dir, got none.' 
- if palette is None: - if self.PALETTE is None: - palette = np.random.randint( - 0, 255, size=(len(self.CLASSES), 3)) - else: - palette = self.PALETTE - palette = np.array(palette) - for batch_id in range(len(result)): - if isinstance(data['points'][0], DC): - points = data['points'][0]._data[0][batch_id].numpy() - elif mmcv.is_list_of(data['points'][0], torch.Tensor): - points = data['points'][0][batch_id] - else: - ValueError(f"Unsupported data type {type(data['points'][0])} " - f'for visualization!') - if isinstance(data['img_metas'][0], DC): - pts_filename = data['img_metas'][0]._data[0][batch_id][ - 'pts_filename'] - elif mmcv.is_list_of(data['img_metas'][0], dict): - pts_filename = data['img_metas'][0][batch_id]['pts_filename'] - else: - ValueError( - f"Unsupported data type {type(data['img_metas'][0])} " - f'for visualization!') - file_name = osp.split(pts_filename)[-1].split('.')[0] - - pred_sem_mask = result[batch_id]['semantic_mask'].cpu().numpy() - - show_seg_result( - points, - None, - pred_sem_mask, - out_dir, - file_name, - palette, - ignore_index, - show=show) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/segmentors/encoder_decoder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/segmentors/encoder_decoder.py deleted file mode 100644 index 161af8e3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/segmentors/encoder_decoder.py +++ /dev/null @@ -1,456 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from torch import nn as nn -from torch.nn import functional as F - -from mmseg.core import add_prefix -from ..builder import (SEGMENTORS, build_backbone, build_head, build_loss, - build_neck) -from .base import Base3DSegmentor - - -@SEGMENTORS.register_module() -class EncoderDecoder3D(Base3DSegmentor): - """3D Encoder Decoder segmentors. - - EncoderDecoder typically consists of backbone, decode_head, auxiliary_head. - Note that auxiliary_head is only used for deep supervision during training, - which could be thrown during inference. 
- """ - - def __init__(self, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - loss_regularization=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(EncoderDecoder3D, self).__init__(init_cfg=init_cfg) - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - self._init_decode_head(decode_head) - self._init_auxiliary_head(auxiliary_head) - self._init_loss_regularization(loss_regularization) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - assert self.with_decode_head, \ - '3D EncoderDecoder Segmentor should have a decode_head' - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - self.decode_head = build_head(decode_head) - self.num_classes = self.decode_head.num_classes - - def _init_auxiliary_head(self, auxiliary_head): - """Initialize ``auxiliary_head``""" - if auxiliary_head is not None: - if isinstance(auxiliary_head, list): - self.auxiliary_head = nn.ModuleList() - for head_cfg in auxiliary_head: - self.auxiliary_head.append(build_head(head_cfg)) - else: - self.auxiliary_head = build_head(auxiliary_head) - - def _init_loss_regularization(self, loss_regularization): - """Initialize ``loss_regularization``""" - if loss_regularization is not None: - if isinstance(loss_regularization, list): - self.loss_regularization = nn.ModuleList() - for loss_cfg in loss_regularization: - self.loss_regularization.append(build_loss(loss_cfg)) - else: - self.loss_regularization = build_loss(loss_regularization) - - def extract_feat(self, points): - """Extract features from points.""" - x = self.backbone(points) - if self.with_neck: - x = self.neck(x) - return x - - def encode_decode(self, points, img_metas): - """Encode points with backbone and decode into a semantic segmentation - map of the same size as input. - - Args: - points (torch.Tensor): Input points of shape [B, N, 3+C]. - img_metas (list[dict]): Meta information of each sample. - - Returns: - torch.Tensor: Segmentation logits of shape [B, num_classes, N]. 
- """ - x = self.extract_feat(points) - out = self._decode_head_forward_test(x, img_metas) - return out - - def _decode_head_forward_train(self, x, img_metas, pts_semantic_mask): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - loss_decode = self.decode_head.forward_train(x, img_metas, - pts_semantic_mask, - self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode')) - return losses - - def _decode_head_forward_test(self, x, img_metas): - """Run forward function and calculate loss for decode head in - inference.""" - seg_logits = self.decode_head.forward_test(x, img_metas, self.test_cfg) - return seg_logits - - def _auxiliary_head_forward_train(self, x, img_metas, pts_semantic_mask): - """Run forward function and calculate loss for auxiliary head in - training.""" - losses = dict() - if isinstance(self.auxiliary_head, nn.ModuleList): - for idx, aux_head in enumerate(self.auxiliary_head): - loss_aux = aux_head.forward_train(x, img_metas, - pts_semantic_mask, - self.train_cfg) - losses.update(add_prefix(loss_aux, f'aux_{idx}')) - else: - loss_aux = self.auxiliary_head.forward_train( - x, img_metas, pts_semantic_mask, self.train_cfg) - losses.update(add_prefix(loss_aux, 'aux')) - - return losses - - def _loss_regularization_forward_train(self): - """Calculate regularization loss for model weight in training.""" - losses = dict() - if isinstance(self.loss_regularization, nn.ModuleList): - for idx, regularize_loss in enumerate(self.loss_regularization): - loss_regularize = dict( - loss_regularize=regularize_loss(self.modules())) - losses.update(add_prefix(loss_regularize, f'regularize_{idx}')) - else: - loss_regularize = dict( - loss_regularize=self.loss_regularization(self.modules())) - losses.update(add_prefix(loss_regularize, 'regularize')) - - return losses - - def forward_dummy(self, points): - """Dummy forward function.""" - seg_logit = self.encode_decode(points, None) - - return seg_logit - - def forward_train(self, points, img_metas, pts_semantic_mask): - """Forward function for training. - - Args: - points (list[torch.Tensor]): List of points of shape [N, C]. - img_metas (list): Image metas. - pts_semantic_mask (list[torch.Tensor]): List of point-wise semantic - labels of shape [N]. - - Returns: - dict[str, Tensor]: Losses. - """ - points_cat = torch.stack(points) - pts_semantic_mask_cat = torch.stack(pts_semantic_mask) - - # extract features using backbone - x = self.extract_feat(points_cat) - - losses = dict() - - loss_decode = self._decode_head_forward_train(x, img_metas, - pts_semantic_mask_cat) - losses.update(loss_decode) - - if self.with_auxiliary_head: - loss_aux = self._auxiliary_head_forward_train( - x, img_metas, pts_semantic_mask_cat) - losses.update(loss_aux) - - if self.with_regularization_loss: - loss_regularize = self._loss_regularization_forward_train() - losses.update(loss_regularize) - - return losses - - @staticmethod - def _input_generation(coords, - patch_center, - coord_max, - feats, - use_normalized_coord=False): - """Generating model input. - - Generate input by subtracting patch center and adding additional - features. Currently support colors and normalized xyz as features. - - Args: - coords (torch.Tensor): Sampled 3D point coordinate of shape [S, 3]. - patch_center (torch.Tensor): Center coordinate of the patch. - coord_max (torch.Tensor): Max coordinate of all 3D points. - feats (torch.Tensor): Features of sampled points of shape [S, C]. 
- use_normalized_coord (bool, optional): Whether to use normalized - xyz as additional features. Defaults to False. - - Returns: - torch.Tensor: The generated input data of shape [S, 3+C']. - """ - # subtract patch center, the z dimension is not centered - centered_coords = coords.clone() - centered_coords[:, 0] -= patch_center[0] - centered_coords[:, 1] -= patch_center[1] - - # normalized coordinates as extra features - if use_normalized_coord: - normalized_coord = coords / coord_max - feats = torch.cat([feats, normalized_coord], dim=1) - - points = torch.cat([centered_coords, feats], dim=1) - - return points - - def _sliding_patch_generation(self, - points, - num_points, - block_size, - sample_rate=0.5, - use_normalized_coord=False, - eps=1e-3): - """Sampling points in a sliding window fashion. - - First sample patches to cover all the input points. - Then sample points in each patch to batch points of a certain number. - - Args: - points (torch.Tensor): Input points of shape [N, 3+C]. - num_points (int): Number of points to be sampled in each patch. - block_size (float, optional): Size of a patch to sample. - sample_rate (float, optional): Stride used in sliding patch. - Defaults to 0.5. - use_normalized_coord (bool, optional): Whether to use normalized - xyz as additional features. Defaults to False. - eps (float, optional): A value added to patch boundary to guarantee - points coverage. Defaults to 1e-3. - - Returns: - np.ndarray | np.ndarray: - - - patch_points (torch.Tensor): Points of different patches of - shape [K, N, 3+C]. - - patch_idxs (torch.Tensor): Index of each point in - `patch_points`, of shape [K, N]. - """ - device = points.device - # we assume the first three dims are points' 3D coordinates - # and the rest dims are their per-point features - coords = points[:, :3] - feats = points[:, 3:] - - coord_max = coords.max(0)[0] - coord_min = coords.min(0)[0] - stride = block_size * sample_rate - num_grid_x = int( - torch.ceil((coord_max[0] - coord_min[0] - block_size) / - stride).item() + 1) - num_grid_y = int( - torch.ceil((coord_max[1] - coord_min[1] - block_size) / - stride).item() + 1) - - patch_points, patch_idxs = [], [] - for idx_y in range(num_grid_y): - s_y = coord_min[1] + idx_y * stride - e_y = torch.min(s_y + block_size, coord_max[1]) - s_y = e_y - block_size - for idx_x in range(num_grid_x): - s_x = coord_min[0] + idx_x * stride - e_x = torch.min(s_x + block_size, coord_max[0]) - s_x = e_x - block_size - - # extract points within this patch - cur_min = torch.tensor([s_x, s_y, coord_min[2]]).to(device) - cur_max = torch.tensor([e_x, e_y, coord_max[2]]).to(device) - cur_choice = ((coords >= cur_min - eps) & - (coords <= cur_max + eps)).all(dim=1) - - if not cur_choice.any(): # no points in this patch - continue - - # sample points in this patch to multiple batches - cur_center = cur_min + block_size / 2.0 - point_idxs = torch.nonzero(cur_choice, as_tuple=True)[0] - num_batch = int(np.ceil(point_idxs.shape[0] / num_points)) - point_size = int(num_batch * num_points) - replace = point_size > 2 * point_idxs.shape[0] - num_repeat = point_size - point_idxs.shape[0] - if replace: # duplicate - point_idxs_repeat = point_idxs[torch.randint( - 0, point_idxs.shape[0], - size=(num_repeat, )).to(device)] - else: - point_idxs_repeat = point_idxs[torch.randperm( - point_idxs.shape[0])[:num_repeat]] - - choices = torch.cat([point_idxs, point_idxs_repeat], dim=0) - choices = choices[torch.randperm(choices.shape[0])] - - # construct model input - point_batches = 
self._input_generation( - coords[choices], - cur_center, - coord_max, - feats[choices], - use_normalized_coord=use_normalized_coord) - - patch_points.append(point_batches) - patch_idxs.append(choices) - - patch_points = torch.cat(patch_points, dim=0) - patch_idxs = torch.cat(patch_idxs, dim=0) - - # make sure all points are sampled at least once - assert torch.unique(patch_idxs).shape[0] == points.shape[0], \ - 'some points are not sampled in sliding inference' - - return patch_points, patch_idxs - - def slide_inference(self, point, img_meta, rescale): - """Inference by sliding-window with overlap. - - Args: - point (torch.Tensor): Input points of shape [N, 3+C]. - img_meta (dict): Meta information of input sample. - rescale (bool): Whether transform to original number of points. - Will be used for voxelization based segmentors. - - Returns: - Tensor: The output segmentation map of shape [num_classes, N]. - """ - num_points = self.test_cfg.num_points - block_size = self.test_cfg.block_size - sample_rate = self.test_cfg.sample_rate - use_normalized_coord = self.test_cfg.use_normalized_coord - batch_size = self.test_cfg.batch_size * num_points - - # patch_points is of shape [K*N, 3+C], patch_idxs is of shape [K*N] - patch_points, patch_idxs = self._sliding_patch_generation( - point, num_points, block_size, sample_rate, use_normalized_coord) - feats_dim = patch_points.shape[1] - seg_logits = [] # save patch predictions - - for batch_idx in range(0, patch_points.shape[0], batch_size): - batch_points = patch_points[batch_idx:batch_idx + batch_size] - batch_points = batch_points.view(-1, num_points, feats_dim) - # batch_seg_logit is of shape [B, num_classes, N] - batch_seg_logit = self.encode_decode(batch_points, img_meta) - batch_seg_logit = batch_seg_logit.transpose(1, 2).contiguous() - seg_logits.append(batch_seg_logit.view(-1, self.num_classes)) - - # aggregate per-point logits by indexing sum and dividing count - seg_logits = torch.cat(seg_logits, dim=0) # [K*N, num_classes] - expand_patch_idxs = patch_idxs.unsqueeze(1).repeat(1, self.num_classes) - preds = point.new_zeros((point.shape[0], self.num_classes)).\ - scatter_add_(dim=0, index=expand_patch_idxs, src=seg_logits) - count_mat = torch.bincount(patch_idxs) - preds = preds / count_mat[:, None] - - # TODO: if rescale and voxelization segmentor - - return preds.transpose(0, 1) # to [num_classes, K*N] - - def whole_inference(self, points, img_metas, rescale): - """Inference with full scene (one forward pass without sliding).""" - seg_logit = self.encode_decode(points, img_metas) - # TODO: if rescale and voxelization segmentor - return seg_logit - - def inference(self, points, img_metas, rescale): - """Inference with slide/whole style. - - Args: - points (torch.Tensor): Input points of shape [B, N, 3+C]. - img_metas (list[dict]): Meta information of each sample. - rescale (bool): Whether transform to original number of points. - Will be used for voxelization based segmentors. - - Returns: - Tensor: The output segmentation map. - """ - assert self.test_cfg.mode in ['slide', 'whole'] - if self.test_cfg.mode == 'slide': - seg_logit = torch.stack([ - self.slide_inference(point, img_meta, rescale) - for point, img_meta in zip(points, img_metas) - ], 0) - else: - seg_logit = self.whole_inference(points, img_metas, rescale) - output = F.softmax(seg_logit, dim=1) - return output - - def simple_test(self, points, img_metas, rescale=True): - """Simple test with single scene. - - Args: - points (list[torch.Tensor]): List of points of shape [N, 3+C]. 
- img_metas (list[dict]): Meta information of each sample. - rescale (bool): Whether transform to original number of points. - Will be used for voxelization based segmentors. - Defaults to True. - - Returns: - list[dict]: The output prediction result with following keys: - - - semantic_mask (Tensor): Segmentation mask of shape [N]. - """ - # 3D segmentation requires per-point prediction, so it's impossible - # to use down-sampling to get a batch of scenes with same num_points - # therefore, we only support testing one scene every time - seg_pred = [] - for point, img_meta in zip(points, img_metas): - seg_prob = self.inference(point.unsqueeze(0), [img_meta], - rescale)[0] - seg_map = seg_prob.argmax(0) # [N] - # to cpu tensor for consistency with det3d - seg_map = seg_map.cpu() - seg_pred.append(seg_map) - # warp in dict - seg_pred = [dict(semantic_mask=seg_map) for seg_map in seg_pred] - return seg_pred - - def aug_test(self, points, img_metas, rescale=True): - """Test with augmentations. - - Args: - points (list[torch.Tensor]): List of points of shape [B, N, 3+C]. - img_metas (list[list[dict]]): Meta information of each sample. - Outer list are different samples while inner is different augs. - rescale (bool): Whether transform to original number of points. - Will be used for voxelization based segmentors. - Defaults to True. - - Returns: - list[dict]: The output prediction result with following keys: - - - semantic_mask (Tensor): Segmentation mask of shape [N]. - """ - # in aug_test, one scene going through different augmentations could - # have the same number of points and are stacked as a batch - # to save memory, we get augmented seg logit inplace - seg_pred = [] - for point, img_meta in zip(points, img_metas): - seg_prob = self.inference(point, img_meta, rescale) - seg_prob = seg_prob.mean(0) # [num_classes, N] - seg_map = seg_prob.argmax(0) # [N] - # to cpu tensor for consistency with det3d - seg_map = seg_map.cpu() - seg_pred.append(seg_map) - # warp in dict - seg_pred = [dict(semantic_mask=seg_map) for seg_map in seg_pred] - return seg_pred diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/__init__.py deleted file mode 100644 index b63880d5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from .clip_sigmoid import clip_sigmoid -from .edge_indices import get_edge_indices -from .gen_keypoints import get_keypoints -from .handle_objs import filter_outside_objs, handle_proj_objs -from .mlp import MLP - -__all__ = [ - 'clip_sigmoid', 'MLP', 'get_edge_indices', 'filter_outside_objs', - 'handle_proj_objs', 'get_keypoints' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/clip_sigmoid.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/clip_sigmoid.py deleted file mode 100644 index 92016b2d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/clip_sigmoid.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def clip_sigmoid(x, eps=1e-4): - """Sigmoid function for input feature. 
-
-    Args:
-        x (torch.Tensor): Input feature map with the shape of [B, N, H, W].
-        eps (float, optional): Lower bound of the range to be clamped to.
-            Defaults to 1e-4.
-
-    Returns:
-        torch.Tensor: Feature map after sigmoid.
-    """
-    y = torch.clamp(x.sigmoid_(), min=eps, max=1 - eps)
-    return y
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/edge_indices.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/edge_indices.py
deleted file mode 100644
index 25cdee61..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/edge_indices.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-# Copyright (c) OpenMMLab. All rights reserved.
-import numpy as np
-import torch
-
-
-def get_edge_indices(img_metas,
-                     downsample_ratio,
-                     step=1,
-                     pad_mode='default',
-                     dtype=np.float32,
-                     device='cpu'):
-    """Function to filter the objects label outside the image.
-    The edge_indices are generated using numpy on cpu rather
-    than on CUDA due to the latency issue. When batch size = 8,
-    this function with numpy array is ~8 times faster than that
-    with CUDA tensor (0.09s and 0.72s in 100 runs).
-
-    Args:
-        img_metas (list[dict]): Meta information of each image, e.g.,
-            image size, scaling factor, etc.
-        downsample_ratio (int): Downsample ratio of output feature,
-        step (int, optional): Step size used for generateing
-            edge indices. Default: 1.
-        pad_mode (str, optional): Padding mode during data pipeline.
-            Default: 'default'.
-        dtype (torch.dtype, optional): Dtype of edge indices tensor.
-            Default: np.float32.
-        device (str, optional): Device of edge indices tensor.
-            Default: 'cpu'.
-
-    Returns:
-        list[Tensor]: Edge indices for each image in batch data.
- """ - edge_indices_list = [] - for i in range(len(img_metas)): - img_shape = img_metas[i]['img_shape'] - pad_shape = img_metas[i]['pad_shape'] - h, w = img_shape[:2] - pad_h, pad_w = pad_shape - edge_indices = [] - - if pad_mode == 'default': - x_min = 0 - y_min = 0 - x_max = (w - 1) // downsample_ratio - y_max = (h - 1) // downsample_ratio - elif pad_mode == 'center': - x_min = np.ceil((pad_w - w) / 2 * downsample_ratio) - y_min = np.ceil((pad_h - h) / 2 * downsample_ratio) - x_max = x_min + w // downsample_ratio - y_max = y_min + h // downsample_ratio - else: - raise NotImplementedError - - # left - y = np.arange(y_min, y_max, step, dtype=dtype) - x = np.ones(len(y)) * x_min - - edge_indices_edge = np.stack((x, y), axis=1) - edge_indices.append(edge_indices_edge) - - # bottom - x = np.arange(x_min, x_max, step, dtype=dtype) - y = np.ones(len(x)) * y_max - - edge_indices_edge = np.stack((x, y), axis=1) - edge_indices.append(edge_indices_edge) - - # right - y = np.arange(y_max, y_min, -step, dtype=dtype) - x = np.ones(len(y)) * x_max - - edge_indices_edge = np.stack((x, y), axis=1) - edge_indices.append(edge_indices_edge) - - # top - x = np.arange(x_max, x_min, -step, dtype=dtype) - y = np.ones(len(x)) * y_min - - edge_indices_edge = np.stack((x, y), axis=1) - edge_indices.append(edge_indices_edge) - - edge_indices = \ - np.concatenate([index for index in edge_indices], axis=0) - edge_indices = torch.from_numpy(edge_indices).to(device).long() - edge_indices_list.append(edge_indices) - - return edge_indices_list diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/gen_keypoints.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/gen_keypoints.py deleted file mode 100644 index 2456f904..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/gen_keypoints.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet3d.core.bbox import points_cam2img - - -def get_keypoints(gt_bboxes_3d_list, - centers2d_list, - img_metas, - use_local_coords=True): - """Function to filter the objects label outside the image. - - Args: - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - shape (num_gt, 4). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - shape (num_gt, 2). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - use_local_coords (bool, optional): Wheher to use local coordinates - for keypoints. Default: True. - - Returns: - tuple[list[Tensor]]: It contains two elements, the first is the - keypoints for each projected 2D bbox in batch data. The second is - the visible mask of depth calculated by keypoints. 
- """ - - assert len(gt_bboxes_3d_list) == len(centers2d_list) - bs = len(gt_bboxes_3d_list) - keypoints2d_list = [] - keypoints_depth_mask_list = [] - - for i in range(bs): - gt_bboxes_3d = gt_bboxes_3d_list[i] - centers2d = centers2d_list[i] - img_shape = img_metas[i]['img_shape'] - cam2img = img_metas[i]['cam2img'] - h, w = img_shape[:2] - # (N, 8, 3) - corners3d = gt_bboxes_3d.corners - top_centers3d = torch.mean(corners3d[:, [0, 1, 4, 5], :], dim=1) - bot_centers3d = torch.mean(corners3d[:, [2, 3, 6, 7], :], dim=1) - # (N, 2, 3) - top_bot_centers3d = torch.stack((top_centers3d, bot_centers3d), dim=1) - keypoints3d = torch.cat((corners3d, top_bot_centers3d), dim=1) - # (N, 10, 2) - keypoints2d = points_cam2img(keypoints3d, cam2img) - - # keypoints mask: keypoints must be inside - # the image and in front of the camera - keypoints_x_visible = (keypoints2d[..., 0] >= 0) & ( - keypoints2d[..., 0] <= w - 1) - keypoints_y_visible = (keypoints2d[..., 1] >= 0) & ( - keypoints2d[..., 1] <= h - 1) - keypoints_z_visible = (keypoints3d[..., -1] > 0) - - # (N, 1O) - keypoints_visible = keypoints_x_visible & \ - keypoints_y_visible & keypoints_z_visible - # center, diag-02, diag-13 - keypoints_depth_valid = torch.stack( - (keypoints_visible[:, [8, 9]].all(dim=1), - keypoints_visible[:, [0, 3, 5, 6]].all(dim=1), - keypoints_visible[:, [1, 2, 4, 7]].all(dim=1)), - dim=1) - keypoints_visible = keypoints_visible.float() - - if use_local_coords: - keypoints2d = torch.cat((keypoints2d - centers2d.unsqueeze(1), - keypoints_visible.unsqueeze(-1)), - dim=2) - else: - keypoints2d = torch.cat( - (keypoints2d, keypoints_visible.unsqueeze(-1)), dim=2) - - keypoints2d_list.append(keypoints2d) - keypoints_depth_mask_list.append(keypoints_depth_valid) - - return (keypoints2d_list, keypoints_depth_mask_list) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/handle_objs.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/handle_objs.py deleted file mode 100644 index 87bb5840..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/handle_objs.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def filter_outside_objs(gt_bboxes_list, gt_labels_list, gt_bboxes_3d_list, - gt_labels_3d_list, centers2d_list, img_metas): - """Function to filter the objects label outside the image. - - Args: - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - gt_bboxes_3d_list (list[Tensor]): 3D Ground truth bboxes of each - image, each has shape (num_gt, bbox_code_size). - gt_labels_3d_list (list[Tensor]): 3D Ground truth labels of each - box, each has shape (num_gt,). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - each has shape (num_gt, 2). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. 
- """ - bs = len(centers2d_list) - - for i in range(bs): - centers2d = centers2d_list[i].clone() - img_shape = img_metas[i]['img_shape'] - keep_inds = (centers2d[:, 0] > 0) & \ - (centers2d[:, 0] < img_shape[1]) & \ - (centers2d[:, 1] > 0) & \ - (centers2d[:, 1] < img_shape[0]) - centers2d_list[i] = centers2d[keep_inds] - gt_labels_list[i] = gt_labels_list[i][keep_inds] - gt_bboxes_list[i] = gt_bboxes_list[i][keep_inds] - gt_bboxes_3d_list[i].tensor = gt_bboxes_3d_list[i].tensor[keep_inds] - gt_labels_3d_list[i] = gt_labels_3d_list[i][keep_inds] - - -def get_centers2d_target(centers2d, centers, img_shape): - """Function to get target centers2d. - - Args: - centers2d (Tensor): Projected 3D centers onto 2D images. - centers (Tensor): Centers of 2d gt bboxes. - img_shape (tuple): Resized image shape. - - Returns: - torch.Tensor: Projected 3D centers (centers2D) target. - """ - N = centers2d.shape[0] - h, w = img_shape[:2] - valid_intersects = centers2d.new_zeros((N, 2)) - a = (centers[:, 1] - centers2d[:, 1]) / (centers[:, 0] - centers2d[:, 0]) - b = centers[:, 1] - a * centers[:, 0] - left_y = b - right_y = (w - 1) * a + b - top_x = -b / a - bottom_x = (h - 1 - b) / a - - left_coors = torch.stack((left_y.new_zeros(N, ), left_y), dim=1) - right_coors = torch.stack((right_y.new_full((N, ), w - 1), right_y), dim=1) - top_coors = torch.stack((top_x, top_x.new_zeros(N, )), dim=1) - bottom_coors = torch.stack((bottom_x, bottom_x.new_full((N, ), h - 1)), - dim=1) - - intersects = torch.stack( - [left_coors, right_coors, top_coors, bottom_coors], dim=1) - intersects_x = intersects[:, :, 0] - intersects_y = intersects[:, :, 1] - inds = (intersects_x >= 0) & (intersects_x <= - w - 1) & (intersects_y >= 0) & ( - intersects_y <= h - 1) - valid_intersects = intersects[inds].reshape(N, 2, 2) - dist = torch.norm(valid_intersects - centers2d.unsqueeze(1), dim=2) - min_idx = torch.argmin(dist, dim=1) - - min_idx = min_idx.unsqueeze(-1).unsqueeze(-1).expand(-1, -1, 2) - centers2d_target = valid_intersects.gather(dim=1, index=min_idx).squeeze(1) - - return centers2d_target - - -def handle_proj_objs(centers2d_list, gt_bboxes_list, img_metas): - """Function to handle projected object centers2d, generate target - centers2d. - - Args: - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - shape (num_gt, 4). - centers2d_list (list[Tensor]): Projected 3D centers onto 2D image, - shape (num_gt, 2). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple[list[Tensor]]: It contains three elements. The first is the - target centers2d after handling the truncated objects. The second - is the offsets between target centers2d and round int dtype - centers2d,and the last is the truncation mask for each object in - batch data. - """ - bs = len(centers2d_list) - centers2d_target_list = [] - trunc_mask_list = [] - offsets2d_list = [] - # for now, only pad mode that img is padded by right and - # bottom side is supported. 
- for i in range(bs): - centers2d = centers2d_list[i] - gt_bbox = gt_bboxes_list[i] - img_shape = img_metas[i]['img_shape'] - centers2d_target = centers2d.clone() - inside_inds = (centers2d[:, 0] > 0) & \ - (centers2d[:, 0] < img_shape[1]) & \ - (centers2d[:, 1] > 0) & \ - (centers2d[:, 1] < img_shape[0]) - outside_inds = ~inside_inds - - # if there are outside objects - if outside_inds.any(): - centers = (gt_bbox[:, :2] + gt_bbox[:, 2:]) / 2 - outside_centers2d = centers2d[outside_inds] - match_centers = centers[outside_inds] - target_outside_centers2d = get_centers2d_target( - outside_centers2d, match_centers, img_shape) - centers2d_target[outside_inds] = target_outside_centers2d - - offsets2d = centers2d - centers2d_target.round().int() - trunc_mask = outside_inds - - centers2d_target_list.append(centers2d_target) - trunc_mask_list.append(trunc_mask) - offsets2d_list.append(offsets2d) - - return (centers2d_target_list, offsets2d_list, trunc_mask_list) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/mlp.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/mlp.py deleted file mode 100644 index 0b499bb4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/utils/mlp.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch import nn as nn - - -class MLP(BaseModule): - """A simple MLP module. - - Pass features (B, C, N) through an MLP. - - Args: - in_channels (int, optional): Number of channels of input features. - Default: 18. - conv_channels (tuple[int], optional): Out channels of the convolution. - Default: (256, 256). - conv_cfg (dict, optional): Config of convolution. - Default: dict(type='Conv1d'). - norm_cfg (dict, optional): Config of normalization. - Default: dict(type='BN1d'). - act_cfg (dict, optional): Config of activation. - Default: dict(type='ReLU'). - """ - - def __init__(self, - in_channel=18, - conv_channels=(256, 256), - conv_cfg=dict(type='Conv1d'), - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.mlp = nn.Sequential() - prev_channels = in_channel - for i, conv_channel in enumerate(conv_channels): - self.mlp.add_module( - f'layer{i}', - ConvModule( - prev_channels, - conv_channels[i], - 1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=True, - inplace=True)) - prev_channels = conv_channels[i] - - def forward(self, img_features): - return self.mlp(img_features) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/__init__.py deleted file mode 100644 index 459020a0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-from .pillar_encoder import DynamicPillarFeatureNet, PillarFeatureNet
-from .voxel_encoder import DynamicSimpleVFE, DynamicVFE, HardSimpleVFE, HardVFE
-
-__all__ = [
-    'PillarFeatureNet', 'DynamicPillarFeatureNet', 'HardVFE', 'DynamicVFE',
-    'HardSimpleVFE', 'DynamicSimpleVFE'
-]
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/pillar_encoder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/pillar_encoder.py
deleted file mode 100644
index 229bdd5a..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/pillar_encoder.py
+++ /dev/null
@@ -1,325 +0,0 @@
-# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-from mmcv.cnn import build_norm_layer
-from mmcv.ops import DynamicScatter
-from mmcv.runner import force_fp32
-from torch import nn
-
-from ..builder import VOXEL_ENCODERS
-from .utils import PFNLayer, get_paddings_indicator
-
-
-@VOXEL_ENCODERS.register_module()
-class PillarFeatureNet(nn.Module):
-    """Pillar Feature Net.
-
-    The network prepares the pillar features and performs forward pass
-    through PFNLayers.
-
-    Args:
-        in_channels (int, optional): Number of input features,
-            either x, y, z or x, y, z, r. Defaults to 4.
-        feat_channels (tuple, optional): Number of features in each of the
-            N PFNLayers. Defaults to (64, ).
-        with_distance (bool, optional): Whether to include Euclidean distance
-            to points. Defaults to False.
-        with_cluster_center (bool, optional): [description]. Defaults to True.
-        with_voxel_center (bool, optional): [description]. Defaults to True.
-        voxel_size (tuple[float], optional): Size of voxels, only utilize x
-            and y size. Defaults to (0.2, 0.2, 4).
-        point_cloud_range (tuple[float], optional): Point cloud range, only
-            utilizes x and y min. Defaults to (0, -40, -3, 70.4, 40, 1).
-        norm_cfg ([type], optional): [description].
-            Defaults to dict(type='BN1d', eps=1e-3, momentum=0.01).
-        mode (str, optional): The mode to gather point features. Options are
-            'max' or 'avg'. Defaults to 'max'.
-        legacy (bool, optional): Whether to use the new behavior or
-            the original behavior. Defaults to True.
- """ - - def __init__(self, - in_channels=4, - feat_channels=(64, ), - with_distance=False, - with_cluster_center=True, - with_voxel_center=True, - voxel_size=(0.2, 0.2, 4), - point_cloud_range=(0, -40, -3, 70.4, 40, 1), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - mode='max', - legacy=True): - super(PillarFeatureNet, self).__init__() - assert len(feat_channels) > 0 - self.legacy = legacy - if with_cluster_center: - in_channels += 3 - if with_voxel_center: - in_channels += 3 - if with_distance: - in_channels += 1 - self._with_distance = with_distance - self._with_cluster_center = with_cluster_center - self._with_voxel_center = with_voxel_center - self.fp16_enabled = False - # Create PillarFeatureNet layers - self.in_channels = in_channels - feat_channels = [in_channels] + list(feat_channels) - pfn_layers = [] - for i in range(len(feat_channels) - 1): - in_filters = feat_channels[i] - out_filters = feat_channels[i + 1] - if i < len(feat_channels) - 2: - last_layer = False - else: - last_layer = True - pfn_layers.append( - PFNLayer( - in_filters, - out_filters, - norm_cfg=norm_cfg, - last_layer=last_layer, - mode=mode)) - self.pfn_layers = nn.ModuleList(pfn_layers) - - # Need pillar (voxel) size and x/y offset in order to calculate offset - self.vx = voxel_size[0] - self.vy = voxel_size[1] - self.vz = voxel_size[2] - self.x_offset = self.vx / 2 + point_cloud_range[0] - self.y_offset = self.vy / 2 + point_cloud_range[1] - self.z_offset = self.vz / 2 + point_cloud_range[2] - self.point_cloud_range = point_cloud_range - - @force_fp32(out_fp16=True) - def forward(self, features, num_points, coors): - """Forward function. - - Args: - features (torch.Tensor): Point features or raw points in shape - (N, M, C). - num_points (torch.Tensor): Number of points in each pillar. - coors (torch.Tensor): Coordinates of each voxel. - - Returns: - torch.Tensor: Features of pillars. - """ - features_ls = [features] - # Find distance of x, y, and z from cluster center - if self._with_cluster_center: - points_mean = features[:, :, :3].sum( - dim=1, keepdim=True) / num_points.type_as(features).view( - -1, 1, 1) - f_cluster = features[:, :, :3] - points_mean - features_ls.append(f_cluster) - - # Find distance of x, y, and z from pillar center - dtype = features.dtype - if self._with_voxel_center: - if not self.legacy: - f_center = torch.zeros_like(features[:, :, :3]) - f_center[:, :, 0] = features[:, :, 0] - ( - coors[:, 3].to(dtype).unsqueeze(1) * self.vx + - self.x_offset) - f_center[:, :, 1] = features[:, :, 1] - ( - coors[:, 2].to(dtype).unsqueeze(1) * self.vy + - self.y_offset) - f_center[:, :, 2] = features[:, :, 2] - ( - coors[:, 1].to(dtype).unsqueeze(1) * self.vz + - self.z_offset) - else: - f_center = features[:, :, :3] - f_center[:, :, 0] = f_center[:, :, 0] - ( - coors[:, 3].type_as(features).unsqueeze(1) * self.vx + - self.x_offset) - f_center[:, :, 1] = f_center[:, :, 1] - ( - coors[:, 2].type_as(features).unsqueeze(1) * self.vy + - self.y_offset) - f_center[:, :, 2] = f_center[:, :, 2] - ( - coors[:, 1].type_as(features).unsqueeze(1) * self.vz + - self.z_offset) - features_ls.append(f_center) - - if self._with_distance: - points_dist = torch.norm(features[:, :, :3], 2, 2, keepdim=True) - features_ls.append(points_dist) - - # Combine together feature decorations - features = torch.cat(features_ls, dim=-1) - # The feature decorations were calculated without regard to whether - # pillar was empty. Need to ensure that - # empty pillars remain set to zeros. 
- voxel_count = features.shape[1] - mask = get_paddings_indicator(num_points, voxel_count, axis=0) - mask = torch.unsqueeze(mask, -1).type_as(features) - features *= mask - - for pfn in self.pfn_layers: - features = pfn(features, num_points) - - return features.squeeze(1) - - -@VOXEL_ENCODERS.register_module() -class DynamicPillarFeatureNet(PillarFeatureNet): - """Pillar Feature Net using dynamic voxelization. - - The network prepares the pillar features and performs forward pass - through PFNLayers. The main difference is that it is used for - dynamic voxels, which contains different number of points inside a voxel - without limits. - - Args: - in_channels (int, optional): Number of input features, - either x, y, z or x, y, z, r. Defaults to 4. - feat_channels (tuple, optional): Number of features in each of the - N PFNLayers. Defaults to (64, ). - with_distance (bool, optional): Whether to include Euclidean distance - to points. Defaults to False. - with_cluster_center (bool, optional): [description]. Defaults to True. - with_voxel_center (bool, optional): [description]. Defaults to True. - voxel_size (tuple[float], optional): Size of voxels, only utilize x - and y size. Defaults to (0.2, 0.2, 4). - point_cloud_range (tuple[float], optional): Point cloud range, only - utilizes x and y min. Defaults to (0, -40, -3, 70.4, 40, 1). - norm_cfg ([type], optional): [description]. - Defaults to dict(type='BN1d', eps=1e-3, momentum=0.01). - mode (str, optional): The mode to gather point features. Options are - 'max' or 'avg'. Defaults to 'max'. - legacy (bool, optional): Whether to use the new behavior or - the original behavior. Defaults to True. - """ - - def __init__(self, - in_channels=4, - feat_channels=(64, ), - with_distance=False, - with_cluster_center=True, - with_voxel_center=True, - voxel_size=(0.2, 0.2, 4), - point_cloud_range=(0, -40, -3, 70.4, 40, 1), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - mode='max', - legacy=True): - super(DynamicPillarFeatureNet, self).__init__( - in_channels, - feat_channels, - with_distance, - with_cluster_center=with_cluster_center, - with_voxel_center=with_voxel_center, - voxel_size=voxel_size, - point_cloud_range=point_cloud_range, - norm_cfg=norm_cfg, - mode=mode, - legacy=legacy) - self.fp16_enabled = False - feat_channels = [self.in_channels] + list(feat_channels) - pfn_layers = [] - # TODO: currently only support one PFNLayer - - for i in range(len(feat_channels) - 1): - in_filters = feat_channels[i] - out_filters = feat_channels[i + 1] - if i > 0: - in_filters *= 2 - norm_name, norm_layer = build_norm_layer(norm_cfg, out_filters) - pfn_layers.append( - nn.Sequential( - nn.Linear(in_filters, out_filters, bias=False), norm_layer, - nn.ReLU(inplace=True))) - self.num_pfn = len(pfn_layers) - self.pfn_layers = nn.ModuleList(pfn_layers) - self.pfn_scatter = DynamicScatter(voxel_size, point_cloud_range, - (mode != 'max')) - self.cluster_scatter = DynamicScatter( - voxel_size, point_cloud_range, average_points=True) - - def map_voxel_center_to_point(self, pts_coors, voxel_mean, voxel_coors): - """Map the centers of voxels to its corresponding points. - - Args: - pts_coors (torch.Tensor): The coordinates of each points, shape - (M, 3), where M is the number of points. - voxel_mean (torch.Tensor): The mean or aggregated features of a - voxel, shape (N, C), where N is the number of voxels. - voxel_coors (torch.Tensor): The coordinates of each voxel. 
- - Returns: - torch.Tensor: Corresponding voxel centers of each points, shape - (M, C), where M is the number of points. - """ - # Step 1: scatter voxel into canvas - # Calculate necessary things for canvas creation - canvas_y = int( - (self.point_cloud_range[4] - self.point_cloud_range[1]) / self.vy) - canvas_x = int( - (self.point_cloud_range[3] - self.point_cloud_range[0]) / self.vx) - canvas_channel = voxel_mean.size(1) - batch_size = pts_coors[-1, 0] + 1 - canvas_len = canvas_y * canvas_x * batch_size - # Create the canvas for this sample - canvas = voxel_mean.new_zeros(canvas_channel, canvas_len) - # Only include non-empty pillars - indices = ( - voxel_coors[:, 0] * canvas_y * canvas_x + - voxel_coors[:, 2] * canvas_x + voxel_coors[:, 3]) - # Scatter the blob back to the canvas - canvas[:, indices.long()] = voxel_mean.t() - - # Step 2: get voxel mean for each point - voxel_index = ( - pts_coors[:, 0] * canvas_y * canvas_x + - pts_coors[:, 2] * canvas_x + pts_coors[:, 3]) - center_per_point = canvas[:, voxel_index.long()].t() - return center_per_point - - @force_fp32(out_fp16=True) - def forward(self, features, coors): - """Forward function. - - Args: - features (torch.Tensor): Point features or raw points in shape - (N, M, C). - coors (torch.Tensor): Coordinates of each voxel - - Returns: - torch.Tensor: Features of pillars. - """ - features_ls = [features] - # Find distance of x, y, and z from cluster center - if self._with_cluster_center: - voxel_mean, mean_coors = self.cluster_scatter(features, coors) - points_mean = self.map_voxel_center_to_point( - coors, voxel_mean, mean_coors) - # TODO: maybe also do cluster for reflectivity - f_cluster = features[:, :3] - points_mean[:, :3] - features_ls.append(f_cluster) - - # Find distance of x, y, and z from pillar center - if self._with_voxel_center: - f_center = features.new_zeros(size=(features.size(0), 3)) - f_center[:, 0] = features[:, 0] - ( - coors[:, 3].type_as(features) * self.vx + self.x_offset) - f_center[:, 1] = features[:, 1] - ( - coors[:, 2].type_as(features) * self.vy + self.y_offset) - f_center[:, 2] = features[:, 2] - ( - coors[:, 1].type_as(features) * self.vz + self.z_offset) - features_ls.append(f_center) - - if self._with_distance: - points_dist = torch.norm(features[:, :3], 2, 1, keepdim=True) - features_ls.append(points_dist) - - # Combine together feature decorations - features = torch.cat(features_ls, dim=-1) - for i, pfn in enumerate(self.pfn_layers): - point_feats = pfn(features) - voxel_feats, voxel_coors = self.pfn_scatter(point_feats, coors) - if i != len(self.pfn_layers) - 1: - # need to concat voxel feats if it is not the last pfn - feat_per_point = self.map_voxel_center_to_point( - coors, voxel_feats, voxel_coors) - features = torch.cat([point_feats, feat_per_point], dim=1) - - return voxel_feats, voxel_coors diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/utils.py deleted file mode 100644 index be86303f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/utils.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from mmcv.cnn import build_norm_layer -from mmcv.runner import auto_fp16 -from torch import nn -from torch.nn import functional as F - - -def get_paddings_indicator(actual_num, max_num, axis=0): - """Create boolean mask by actually number of a padded tensor. - - Args: - actual_num (torch.Tensor): Actual number of points in each voxel. - max_num (int): Max number of points in each voxel - - Returns: - torch.Tensor: Mask indicates which points are valid inside a voxel. - """ - actual_num = torch.unsqueeze(actual_num, axis + 1) - # tiled_actual_num: [N, M, 1] - max_num_shape = [1] * len(actual_num.shape) - max_num_shape[axis + 1] = -1 - max_num = torch.arange( - max_num, dtype=torch.int, device=actual_num.device).view(max_num_shape) - # tiled_actual_num: [[3,3,3,3,3], [4,4,4,4,4], [2,2,2,2,2]] - # tiled_max_num: [[0,1,2,3,4], [0,1,2,3,4], [0,1,2,3,4]] - paddings_indicator = actual_num.int() > max_num - # paddings_indicator shape: [batch_size, max_num] - return paddings_indicator - - -class VFELayer(nn.Module): - """Voxel Feature Encoder layer. - - The voxel encoder is composed of a series of these layers. - This module do not support average pooling and only support to use - max pooling to gather features inside a VFE. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - norm_cfg (dict): Config dict of normalization layers - max_out (bool): Whether aggregate the features of points inside - each voxel and only return voxel features. - cat_max (bool): Whether concatenate the aggregated features - and pointwise features. - """ - - def __init__(self, - in_channels, - out_channels, - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - max_out=True, - cat_max=True): - super(VFELayer, self).__init__() - self.fp16_enabled = False - self.cat_max = cat_max - self.max_out = max_out - # self.units = int(out_channels / 2) - - self.norm = build_norm_layer(norm_cfg, out_channels)[1] - self.linear = nn.Linear(in_channels, out_channels, bias=False) - - @auto_fp16(apply_to=('inputs'), out_fp32=True) - def forward(self, inputs): - """Forward function. - - Args: - inputs (torch.Tensor): Voxels features of shape (N, M, C). - N is the number of voxels, M is the number of points in - voxels, C is the number of channels of point features. - - Returns: - torch.Tensor: Voxel features. There are three mode under which the - features have different meaning. - - `max_out=False`: Return point-wise features in - shape (N, M, C). - - `max_out=True` and `cat_max=False`: Return aggregated - voxel features in shape (N, C) - - `max_out=True` and `cat_max=True`: Return concatenated - point-wise features in shape (N, M, C). - """ - # [K, T, 7] tensordot [7, units] = [K, T, units] - voxel_count = inputs.shape[1] - - x = self.linear(inputs) - x = self.norm(x.permute(0, 2, 1).contiguous()).permute(0, 2, - 1).contiguous() - pointwise = F.relu(x) - # [K, T, units] - if self.max_out: - aggregated = torch.max(pointwise, dim=1, keepdim=True)[0] - else: - # this is for fusion layer - return pointwise - - if not self.cat_max: - return aggregated.squeeze(1) - else: - # [K, 1, units] - repeated = aggregated.repeat(1, voxel_count, 1) - concatenated = torch.cat([pointwise, repeated], dim=2) - # [K, T, 2 * units] - return concatenated - - -class PFNLayer(nn.Module): - """Pillar Feature Net Layer. - - The Pillar Feature Net is composed of a series of these layers, but the - PointPillars paper results only used a single PFNLayer. 
- - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - norm_cfg (dict, optional): Config dict of normalization layers. - Defaults to dict(type='BN1d', eps=1e-3, momentum=0.01). - last_layer (bool, optional): If last_layer, there is no - concatenation of features. Defaults to False. - mode (str, optional): Pooling model to gather features inside voxels. - Defaults to 'max'. - """ - - def __init__(self, - in_channels, - out_channels, - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - last_layer=False, - mode='max'): - - super().__init__() - self.fp16_enabled = False - self.name = 'PFNLayer' - self.last_vfe = last_layer - if not self.last_vfe: - out_channels = out_channels // 2 - self.units = out_channels - - self.norm = build_norm_layer(norm_cfg, self.units)[1] - self.linear = nn.Linear(in_channels, self.units, bias=False) - - assert mode in ['max', 'avg'] - self.mode = mode - - @auto_fp16(apply_to=('inputs'), out_fp32=True) - def forward(self, inputs, num_voxels=None, aligned_distance=None): - """Forward function. - - Args: - inputs (torch.Tensor): Pillar/Voxel inputs with shape (N, M, C). - N is the number of voxels, M is the number of points in - voxels, C is the number of channels of point features. - num_voxels (torch.Tensor, optional): Number of points in each - voxel. Defaults to None. - aligned_distance (torch.Tensor, optional): The distance of - each points to the voxel center. Defaults to None. - - Returns: - torch.Tensor: Features of Pillars. - """ - x = self.linear(inputs) - x = self.norm(x.permute(0, 2, 1).contiguous()).permute(0, 2, - 1).contiguous() - x = F.relu(x) - - if self.mode == 'max': - if aligned_distance is not None: - x = x.mul(aligned_distance.unsqueeze(-1)) - x_max = torch.max(x, dim=1, keepdim=True)[0] - elif self.mode == 'avg': - if aligned_distance is not None: - x = x.mul(aligned_distance.unsqueeze(-1)) - x_max = x.sum( - dim=1, keepdim=True) / num_voxels.type_as(inputs).view( - -1, 1, 1) - - if self.last_vfe: - return x_max - else: - x_repeat = x_max.repeat(1, inputs.shape[1], 1) - x_concatenated = torch.cat([x, x_repeat], dim=2) - return x_concatenated diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/voxel_encoder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/voxel_encoder.py deleted file mode 100644 index 2e5704a5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/models/voxel_encoders/voxel_encoder.py +++ /dev/null @@ -1,491 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import build_norm_layer -from mmcv.ops import DynamicScatter -from mmcv.runner import force_fp32 -from torch import nn - -from .. import builder -from ..builder import VOXEL_ENCODERS -from .utils import VFELayer, get_paddings_indicator - - -@VOXEL_ENCODERS.register_module() -class HardSimpleVFE(nn.Module): - """Simple voxel feature encoder used in SECOND. - - It simply averages the values of points in a voxel. - - Args: - num_features (int, optional): Number of features to use. Default: 4. - """ - - def __init__(self, num_features=4): - super(HardSimpleVFE, self).__init__() - self.num_features = num_features - self.fp16_enabled = False - - @force_fp32(out_fp16=True) - def forward(self, features, num_points, coors): - """Forward function. 
- - Args: - features (torch.Tensor): Point features in shape - (N, M, 3(4)). N is the number of voxels and M is the maximum - number of points inside a single voxel. - num_points (torch.Tensor): Number of points in each voxel, - shape (N, ). - coors (torch.Tensor): Coordinates of voxels. - - Returns: - torch.Tensor: Mean of points inside each voxel in shape (N, 3(4)) - """ - points_mean = features[:, :, :self.num_features].sum( - dim=1, keepdim=False) / num_points.type_as(features).view(-1, 1) - return points_mean.contiguous() - - -@VOXEL_ENCODERS.register_module() -class DynamicSimpleVFE(nn.Module): - """Simple dynamic voxel feature encoder used in DV-SECOND. - - It simply averages the values of points in a voxel. - But the number of points in a voxel is dynamic and varies. - - Args: - voxel_size (tupe[float]): Size of a single voxel - point_cloud_range (tuple[float]): Range of the point cloud and voxels - """ - - def __init__(self, - voxel_size=(0.2, 0.2, 4), - point_cloud_range=(0, -40, -3, 70.4, 40, 1)): - super(DynamicSimpleVFE, self).__init__() - self.scatter = DynamicScatter(voxel_size, point_cloud_range, True) - self.fp16_enabled = False - - @torch.no_grad() - @force_fp32(out_fp16=True) - def forward(self, features, coors): - """Forward function. - - Args: - features (torch.Tensor): Point features in shape - (N, 3(4)). N is the number of points. - coors (torch.Tensor): Coordinates of voxels. - - Returns: - torch.Tensor: Mean of points inside each voxel in shape (M, 3(4)). - M is the number of voxels. - """ - # This function is used from the start of the voxelnet - # num_points: [concated_num_points] - features, features_coors = self.scatter(features, coors) - return features, features_coors - - -@VOXEL_ENCODERS.register_module() -class DynamicVFE(nn.Module): - """Dynamic Voxel feature encoder used in DV-SECOND. - - It encodes features of voxels and their points. It could also fuse - image feature into voxel features in a point-wise manner. - The number of points inside the voxel varies. - - Args: - in_channels (int, optional): Input channels of VFE. Defaults to 4. - feat_channels (list(int), optional): Channels of features in VFE. - with_distance (bool, optional): Whether to use the L2 distance of - points to the origin point. Defaults to False. - with_cluster_center (bool, optional): Whether to use the distance - to cluster center of points inside a voxel. Defaults to False. - with_voxel_center (bool, optional): Whether to use the distance - to center of voxel for each points inside a voxel. - Defaults to False. - voxel_size (tuple[float], optional): Size of a single voxel. - Defaults to (0.2, 0.2, 4). - point_cloud_range (tuple[float], optional): The range of points - or voxels. Defaults to (0, -40, -3, 70.4, 40, 1). - norm_cfg (dict, optional): Config dict of normalization layers. - mode (str, optional): The mode when pooling features of points - inside a voxel. Available options include 'max' and 'avg'. - Defaults to 'max'. - fusion_layer (dict, optional): The config dict of fusion - layer used in multi-modal detectors. Defaults to None. - return_point_feats (bool, optional): Whether to return the features - of each points. Defaults to False. 
- """ - - def __init__(self, - in_channels=4, - feat_channels=[], - with_distance=False, - with_cluster_center=False, - with_voxel_center=False, - voxel_size=(0.2, 0.2, 4), - point_cloud_range=(0, -40, -3, 70.4, 40, 1), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - mode='max', - fusion_layer=None, - return_point_feats=False): - super(DynamicVFE, self).__init__() - assert mode in ['avg', 'max'] - assert len(feat_channels) > 0 - if with_cluster_center: - in_channels += 3 - if with_voxel_center: - in_channels += 3 - if with_distance: - in_channels += 1 - self.in_channels = in_channels - self._with_distance = with_distance - self._with_cluster_center = with_cluster_center - self._with_voxel_center = with_voxel_center - self.return_point_feats = return_point_feats - self.fp16_enabled = False - - # Need pillar (voxel) size and x/y offset in order to calculate offset - self.vx = voxel_size[0] - self.vy = voxel_size[1] - self.vz = voxel_size[2] - self.x_offset = self.vx / 2 + point_cloud_range[0] - self.y_offset = self.vy / 2 + point_cloud_range[1] - self.z_offset = self.vz / 2 + point_cloud_range[2] - self.point_cloud_range = point_cloud_range - self.scatter = DynamicScatter(voxel_size, point_cloud_range, True) - - feat_channels = [self.in_channels] + list(feat_channels) - vfe_layers = [] - for i in range(len(feat_channels) - 1): - in_filters = feat_channels[i] - out_filters = feat_channels[i + 1] - if i > 0: - in_filters *= 2 - norm_name, norm_layer = build_norm_layer(norm_cfg, out_filters) - vfe_layers.append( - nn.Sequential( - nn.Linear(in_filters, out_filters, bias=False), norm_layer, - nn.ReLU(inplace=True))) - self.vfe_layers = nn.ModuleList(vfe_layers) - self.num_vfe = len(vfe_layers) - self.vfe_scatter = DynamicScatter(voxel_size, point_cloud_range, - (mode != 'max')) - self.cluster_scatter = DynamicScatter( - voxel_size, point_cloud_range, average_points=True) - self.fusion_layer = None - if fusion_layer is not None: - self.fusion_layer = builder.build_fusion_layer(fusion_layer) - - def map_voxel_center_to_point(self, pts_coors, voxel_mean, voxel_coors): - """Map voxel features to its corresponding points. - - Args: - pts_coors (torch.Tensor): Voxel coordinate of each point. - voxel_mean (torch.Tensor): Voxel features to be mapped. - voxel_coors (torch.Tensor): Coordinates of valid voxels - - Returns: - torch.Tensor: Features or centers of each point. 
- """ - # Step 1: scatter voxel into canvas - # Calculate necessary things for canvas creation - canvas_z = int( - (self.point_cloud_range[5] - self.point_cloud_range[2]) / self.vz) - canvas_y = int( - (self.point_cloud_range[4] - self.point_cloud_range[1]) / self.vy) - canvas_x = int( - (self.point_cloud_range[3] - self.point_cloud_range[0]) / self.vx) - # canvas_channel = voxel_mean.size(1) - batch_size = pts_coors[-1, 0] + 1 - canvas_len = canvas_z * canvas_y * canvas_x * batch_size - # Create the canvas for this sample - canvas = voxel_mean.new_zeros(canvas_len, dtype=torch.long) - # Only include non-empty pillars - indices = ( - voxel_coors[:, 0] * canvas_z * canvas_y * canvas_x + - voxel_coors[:, 1] * canvas_y * canvas_x + - voxel_coors[:, 2] * canvas_x + voxel_coors[:, 3]) - # Scatter the blob back to the canvas - canvas[indices.long()] = torch.arange( - start=0, end=voxel_mean.size(0), device=voxel_mean.device) - - # Step 2: get voxel mean for each point - voxel_index = ( - pts_coors[:, 0] * canvas_z * canvas_y * canvas_x + - pts_coors[:, 1] * canvas_y * canvas_x + - pts_coors[:, 2] * canvas_x + pts_coors[:, 3]) - voxel_inds = canvas[voxel_index.long()] - center_per_point = voxel_mean[voxel_inds, ...] - return center_per_point - - @force_fp32(out_fp16=True) - def forward(self, - features, - coors, - points=None, - img_feats=None, - img_metas=None): - """Forward functions. - - Args: - features (torch.Tensor): Features of voxels, shape is NxC. - coors (torch.Tensor): Coordinates of voxels, shape is Nx(1+NDim). - points (list[torch.Tensor], optional): Raw points used to guide the - multi-modality fusion. Defaults to None. - img_feats (list[torch.Tensor], optional): Image features used for - multi-modality fusion. Defaults to None. - img_metas (dict, optional): [description]. Defaults to None. - - Returns: - tuple: If `return_point_feats` is False, returns voxel features and - its coordinates. If `return_point_feats` is True, returns - feature of each points inside voxels. 
- """ - features_ls = [features] - # Find distance of x, y, and z from cluster center - if self._with_cluster_center: - voxel_mean, mean_coors = self.cluster_scatter(features, coors) - points_mean = self.map_voxel_center_to_point( - coors, voxel_mean, mean_coors) - # TODO: maybe also do cluster for reflectivity - f_cluster = features[:, :3] - points_mean[:, :3] - features_ls.append(f_cluster) - - # Find distance of x, y, and z from pillar center - if self._with_voxel_center: - f_center = features.new_zeros(size=(features.size(0), 3)) - f_center[:, 0] = features[:, 0] - ( - coors[:, 3].type_as(features) * self.vx + self.x_offset) - f_center[:, 1] = features[:, 1] - ( - coors[:, 2].type_as(features) * self.vy + self.y_offset) - f_center[:, 2] = features[:, 2] - ( - coors[:, 1].type_as(features) * self.vz + self.z_offset) - features_ls.append(f_center) - - if self._with_distance: - points_dist = torch.norm(features[:, :3], 2, 1, keepdim=True) - features_ls.append(points_dist) - - # Combine together feature decorations - features = torch.cat(features_ls, dim=-1) - for i, vfe in enumerate(self.vfe_layers): - point_feats = vfe(features) - if (i == len(self.vfe_layers) - 1 and self.fusion_layer is not None - and img_feats is not None): - point_feats = self.fusion_layer(img_feats, points, point_feats, - img_metas) - voxel_feats, voxel_coors = self.vfe_scatter(point_feats, coors) - if i != len(self.vfe_layers) - 1: - # need to concat voxel feats if it is not the last vfe - feat_per_point = self.map_voxel_center_to_point( - coors, voxel_feats, voxel_coors) - features = torch.cat([point_feats, feat_per_point], dim=1) - - if self.return_point_feats: - return point_feats - return voxel_feats, voxel_coors - - -@VOXEL_ENCODERS.register_module() -class HardVFE(nn.Module): - """Voxel feature encoder used in DV-SECOND. - - It encodes features of voxels and their points. It could also fuse - image feature into voxel features in a point-wise manner. - - Args: - in_channels (int, optional): Input channels of VFE. Defaults to 4. - feat_channels (list(int), optional): Channels of features in VFE. - with_distance (bool, optional): Whether to use the L2 distance - of points to the origin point. Defaults to False. - with_cluster_center (bool, optional): Whether to use the distance - to cluster center of points inside a voxel. Defaults to False. - with_voxel_center (bool, optional): Whether to use the distance to - center of voxel for each points inside a voxel. Defaults to False. - voxel_size (tuple[float], optional): Size of a single voxel. - Defaults to (0.2, 0.2, 4). - point_cloud_range (tuple[float], optional): The range of points - or voxels. Defaults to (0, -40, -3, 70.4, 40, 1). - norm_cfg (dict, optional): Config dict of normalization layers. - mode (str, optional): The mode when pooling features of points inside a - voxel. Available options include 'max' and 'avg'. - Defaults to 'max'. - fusion_layer (dict, optional): The config dict of fusion layer - used in multi-modal detectors. Defaults to None. - return_point_feats (bool, optional): Whether to return the - features of each points. Defaults to False. 
- """ - - def __init__(self, - in_channels=4, - feat_channels=[], - with_distance=False, - with_cluster_center=False, - with_voxel_center=False, - voxel_size=(0.2, 0.2, 4), - point_cloud_range=(0, -40, -3, 70.4, 40, 1), - norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01), - mode='max', - fusion_layer=None, - return_point_feats=False): - super(HardVFE, self).__init__() - assert len(feat_channels) > 0 - if with_cluster_center: - in_channels += 3 - if with_voxel_center: - in_channels += 3 - if with_distance: - in_channels += 1 - self.in_channels = in_channels - self._with_distance = with_distance - self._with_cluster_center = with_cluster_center - self._with_voxel_center = with_voxel_center - self.return_point_feats = return_point_feats - self.fp16_enabled = False - - # Need pillar (voxel) size and x/y offset to calculate pillar offset - self.vx = voxel_size[0] - self.vy = voxel_size[1] - self.vz = voxel_size[2] - self.x_offset = self.vx / 2 + point_cloud_range[0] - self.y_offset = self.vy / 2 + point_cloud_range[1] - self.z_offset = self.vz / 2 + point_cloud_range[2] - self.point_cloud_range = point_cloud_range - self.scatter = DynamicScatter(voxel_size, point_cloud_range, True) - - feat_channels = [self.in_channels] + list(feat_channels) - vfe_layers = [] - for i in range(len(feat_channels) - 1): - in_filters = feat_channels[i] - out_filters = feat_channels[i + 1] - if i > 0: - in_filters *= 2 - # TODO: pass norm_cfg to VFE - # norm_name, norm_layer = build_norm_layer(norm_cfg, out_filters) - if i == (len(feat_channels) - 2): - cat_max = False - max_out = True - if fusion_layer: - max_out = False - else: - max_out = True - cat_max = True - vfe_layers.append( - VFELayer( - in_filters, - out_filters, - norm_cfg=norm_cfg, - max_out=max_out, - cat_max=cat_max)) - self.vfe_layers = nn.ModuleList(vfe_layers) - self.num_vfe = len(vfe_layers) - - self.fusion_layer = None - if fusion_layer is not None: - self.fusion_layer = builder.build_fusion_layer(fusion_layer) - - @force_fp32(out_fp16=True) - def forward(self, - features, - num_points, - coors, - img_feats=None, - img_metas=None): - """Forward functions. - - Args: - features (torch.Tensor): Features of voxels, shape is MxNxC. - num_points (torch.Tensor): Number of points in each voxel. - coors (torch.Tensor): Coordinates of voxels, shape is Mx(1+NDim). - img_feats (list[torch.Tensor], optional): Image features used for - multi-modality fusion. Defaults to None. - img_metas (dict, optional): [description]. Defaults to None. - - Returns: - tuple: If `return_point_feats` is False, returns voxel features and - its coordinates. If `return_point_feats` is True, returns - feature of each points inside voxels. 
- """ - features_ls = [features] - # Find distance of x, y, and z from cluster center - if self._with_cluster_center: - points_mean = ( - features[:, :, :3].sum(dim=1, keepdim=True) / - num_points.type_as(features).view(-1, 1, 1)) - # TODO: maybe also do cluster for reflectivity - f_cluster = features[:, :, :3] - points_mean - features_ls.append(f_cluster) - - # Find distance of x, y, and z from pillar center - if self._with_voxel_center: - f_center = features.new_zeros( - size=(features.size(0), features.size(1), 3)) - f_center[:, :, 0] = features[:, :, 0] - ( - coors[:, 3].type_as(features).unsqueeze(1) * self.vx + - self.x_offset) - f_center[:, :, 1] = features[:, :, 1] - ( - coors[:, 2].type_as(features).unsqueeze(1) * self.vy + - self.y_offset) - f_center[:, :, 2] = features[:, :, 2] - ( - coors[:, 1].type_as(features).unsqueeze(1) * self.vz + - self.z_offset) - features_ls.append(f_center) - - if self._with_distance: - points_dist = torch.norm(features[:, :, :3], 2, 2, keepdim=True) - features_ls.append(points_dist) - - # Combine together feature decorations - voxel_feats = torch.cat(features_ls, dim=-1) - # The feature decorations were calculated without regard to whether - # pillar was empty. - # Need to ensure that empty voxels remain set to zeros. - voxel_count = voxel_feats.shape[1] - mask = get_paddings_indicator(num_points, voxel_count, axis=0) - voxel_feats *= mask.unsqueeze(-1).type_as(voxel_feats) - - for i, vfe in enumerate(self.vfe_layers): - voxel_feats = vfe(voxel_feats) - - if (self.fusion_layer is not None and img_feats is not None): - voxel_feats = self.fusion_with_mask(features, mask, voxel_feats, - coors, img_feats, img_metas) - - return voxel_feats - - def fusion_with_mask(self, features, mask, voxel_feats, coors, img_feats, - img_metas): - """Fuse image and point features with mask. - - Args: - features (torch.Tensor): Features of voxel, usually it is the - values of points in voxels. - mask (torch.Tensor): Mask indicates valid features in each voxel. - voxel_feats (torch.Tensor): Features of voxels. - coors (torch.Tensor): Coordinates of each single voxel. - img_feats (list[torch.Tensor]): Multi-scale feature maps of image. - img_metas (list(dict)): Meta information of image and points. - - Returns: - torch.Tensor: Fused features of each voxel. - """ - # the features is consist of a batch of points - batch_size = coors[-1, 0] + 1 - points = [] - for i in range(batch_size): - single_mask = (coors[:, 0] == i) - points.append(features[single_mask][mask[single_mask]]) - - point_feats = voxel_feats[mask] - point_feats = self.fusion_layer(img_feats, points, point_feats, - img_metas) - - voxel_canvas = voxel_feats.new_zeros( - size=(voxel_feats.size(0), voxel_feats.size(1), - point_feats.size(-1))) - voxel_canvas[mask] = point_feats - out = torch.max(voxel_canvas, dim=1)[0] - - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/__init__.py deleted file mode 100644 index c96e954a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/__init__.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-from mmcv.ops import (RoIAlign, SigmoidFocalLoss, get_compiler_version, - get_compiling_cuda_version, nms, roi_align, - sigmoid_focal_loss) -from mmcv.ops.assign_score_withk import assign_score_withk -from mmcv.ops.ball_query import ball_query -from mmcv.ops.furthest_point_sample import (furthest_point_sample, - furthest_point_sample_with_dist) -from mmcv.ops.gather_points import gather_points -from mmcv.ops.group_points import GroupAll, QueryAndGroup, grouping_operation -from mmcv.ops.knn import knn -from mmcv.ops.points_in_boxes import (points_in_boxes_all, points_in_boxes_cpu, - points_in_boxes_part) -from mmcv.ops.points_sampler import PointsSampler as Points_Sampler -from mmcv.ops.roiaware_pool3d import RoIAwarePool3d -from mmcv.ops.roipoint_pool3d import RoIPointPool3d -from mmcv.ops.scatter_points import DynamicScatter, dynamic_scatter -from mmcv.ops.three_interpolate import three_interpolate -from mmcv.ops.three_nn import three_nn -from mmcv.ops.voxelize import Voxelization, voxelization - -from .dgcnn_modules import DGCNNFAModule, DGCNNFPModule, DGCNNGFModule -from .norm import NaiveSyncBatchNorm1d, NaiveSyncBatchNorm2d -from .paconv import PAConv, PAConvCUDA -from .pointnet_modules import (PAConvCUDASAModule, PAConvCUDASAModuleMSG, - PAConvSAModule, PAConvSAModuleMSG, - PointFPModule, PointSAModule, PointSAModuleMSG, - build_sa_module) -# from .sparse_block import (SparseBasicBlock, SparseBottleneck, -# make_sparse_convmodule) - -__all__ = [ - 'nms', 'soft_nms', 'RoIAlign', 'roi_align', 'get_compiler_version', - 'get_compiling_cuda_version', 'NaiveSyncBatchNorm1d', - 'NaiveSyncBatchNorm2d', 'batched_nms', 'Voxelization', 'voxelization', - 'dynamic_scatter', 'DynamicScatter', 'sigmoid_focal_loss', - 'SigmoidFocalLoss', 'SparseBasicBlock', 'SparseBottleneck', - 'RoIAwarePool3d', 'points_in_boxes_part', 'points_in_boxes_cpu', - 'make_sparse_convmodule', 'ball_query', 'knn', 'furthest_point_sample', - 'furthest_point_sample_with_dist', 'three_interpolate', 'three_nn', - 'gather_points', 'grouping_operation', 'GroupAll', 'QueryAndGroup', - 'PointSAModule', 'PointSAModuleMSG', 'PointFPModule', 'DGCNNFPModule', - 'DGCNNGFModule', 'DGCNNFAModule', 'points_in_boxes_all', - 'get_compiler_version', 'assign_score_withk', 'get_compiling_cuda_version', - 'Points_Sampler', 'build_sa_module', 'PAConv', 'PAConvCUDA', - 'PAConvSAModuleMSG', 'PAConvSAModule', 'PAConvCUDASAModule', - 'PAConvCUDASAModuleMSG', 'RoIPointPool3d' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/__init__.py deleted file mode 100644 index 67beb090..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .dgcnn_fa_module import DGCNNFAModule -from .dgcnn_fp_module import DGCNNFPModule -from .dgcnn_gf_module import DGCNNGFModule - -__all__ = ['DGCNNFAModule', 'DGCNNFPModule', 'DGCNNGFModule'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_fa_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_fa_module.py deleted file mode 100644 index b0975e69..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_fa_module.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn - - -class DGCNNFAModule(BaseModule): - """Point feature aggregation module used in DGCNN. - - Aggregate all the features of points. - - Args: - mlp_channels (list[int]): List of mlp channels. - norm_cfg (dict, optional): Type of normalization method. - Defaults to dict(type='BN1d'). - act_cfg (dict, optional): Type of activation method. - Defaults to dict(type='ReLU'). - init_cfg (dict, optional): Initialization config. Defaults to None. - """ - - def __init__(self, - mlp_channels, - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.fp16_enabled = False - self.mlps = nn.Sequential() - for i in range(len(mlp_channels) - 1): - self.mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, ), - stride=(1, ), - conv_cfg=dict(type='Conv1d'), - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - @force_fp32() - def forward(self, points): - """forward. - - Args: - points (List[Tensor]): tensor of the features to be aggregated. - - Returns: - Tensor: (B, N, M) M = mlp[-1], tensor of the output points. - """ - - if len(points) > 1: - new_points = torch.cat(points[1:], dim=-1) - new_points = new_points.transpose(1, 2).contiguous() # (B, C, N) - new_points_copy = new_points - - new_points = self.mlps(new_points) - - new_fa_points = new_points.max(dim=-1, keepdim=True)[0] - new_fa_points = new_fa_points.repeat(1, 1, new_points.shape[-1]) - - new_points = torch.cat([new_fa_points, new_points_copy], dim=1) - new_points = new_points.transpose(1, 2).contiguous() - else: - new_points = points - - return new_points diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_fp_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_fp_module.py deleted file mode 100644 index c871721b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_fp_module.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn - - -class DGCNNFPModule(BaseModule): - """Point feature propagation module used in DGCNN. - - Propagate the features from one set to another. - - Args: - mlp_channels (list[int]): List of mlp channels. - norm_cfg (dict, optional): Type of activation method. - Defaults to dict(type='BN1d'). - act_cfg (dict, optional): Type of activation method. - Defaults to dict(type='ReLU'). - init_cfg (dict, optional): Initialization config. Defaults to None. - """ - - def __init__(self, - mlp_channels, - norm_cfg=dict(type='BN1d'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.fp16_enabled = False - self.mlps = nn.Sequential() - for i in range(len(mlp_channels) - 1): - self.mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, ), - stride=(1, ), - conv_cfg=dict(type='Conv1d'), - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - @force_fp32() - def forward(self, points): - """forward. - - Args: - points (Tensor): (B, N, C) tensor of the input points. - - Returns: - Tensor: (B, N, M) M = mlp[-1], tensor of the new points. 
- """ - - if points is not None: - new_points = points.transpose(1, 2).contiguous() # (B, C, N) - new_points = self.mlps(new_points) - new_points = new_points.transpose(1, 2).contiguous() - else: - new_points = points - - return new_points diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_gf_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_gf_module.py deleted file mode 100644 index 96785e7e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/dgcnn_modules/dgcnn_gf_module.py +++ /dev/null @@ -1,221 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.ops.group_points import GroupAll, QueryAndGroup, grouping_operation -from torch import nn as nn -from torch.nn import functional as F - - -class BaseDGCNNGFModule(nn.Module): - """Base module for point graph feature module used in DGCNN. - - Args: - radii (list[float]): List of radius in each knn or ball query. - sample_nums (list[int]): Number of samples in each knn or ball query. - mlp_channels (list[list[int]]): Specify of the dgcnn before - the global pooling for each graph feature module. - knn_modes (list[str], optional): Type of KNN method, valid mode - ['F-KNN', 'D-KNN'], Defaults to ['F-KNN']. - dilated_group (bool, optional): Whether to use dilated ball query. - Defaults to False. - use_xyz (bool, optional): Whether to use xyz as point features. - Defaults to True. - pool_mode (str, optional): Type of pooling method. Defaults to 'max'. - normalize_xyz (bool, optional): If ball query, whether to normalize - local XYZ with radius. Defaults to False. - grouper_return_grouped_xyz (bool, optional): Whether to return grouped - xyz in `QueryAndGroup`. Defaults to False. - grouper_return_grouped_idx (bool, optional): Whether to return grouped - idx in `QueryAndGroup`. Defaults to False. - """ - - def __init__(self, - radii, - sample_nums, - mlp_channels, - knn_modes=['F-KNN'], - dilated_group=False, - use_xyz=True, - pool_mode='max', - normalize_xyz=False, - grouper_return_grouped_xyz=False, - grouper_return_grouped_idx=False): - super(BaseDGCNNGFModule, self).__init__() - - assert len(sample_nums) == len( - mlp_channels - ), 'Num_samples and mlp_channels should have the same length.' - assert pool_mode in ['max', 'avg' - ], "Pool_mode should be one of ['max', 'avg']." - assert isinstance(knn_modes, list) or isinstance( - knn_modes, tuple), 'The type of knn_modes should be list or tuple.' - - if isinstance(mlp_channels, tuple): - mlp_channels = list(map(list, mlp_channels)) - self.mlp_channels = mlp_channels - - self.pool_mode = pool_mode - self.groupers = nn.ModuleList() - self.mlps = nn.ModuleList() - self.knn_modes = knn_modes - - for i in range(len(sample_nums)): - sample_num = sample_nums[i] - if sample_num is not None: - if self.knn_modes[i] == 'D-KNN': - grouper = QueryAndGroup( - radii[i], - sample_num, - use_xyz=use_xyz, - normalize_xyz=normalize_xyz, - return_grouped_xyz=grouper_return_grouped_xyz, - return_grouped_idx=True) - else: - grouper = QueryAndGroup( - radii[i], - sample_num, - use_xyz=use_xyz, - normalize_xyz=normalize_xyz, - return_grouped_xyz=grouper_return_grouped_xyz, - return_grouped_idx=grouper_return_grouped_idx) - else: - grouper = GroupAll(use_xyz) - self.groupers.append(grouper) - - def _pool_features(self, features): - """Perform feature aggregation using pooling operation. 
- - Args: - features (torch.Tensor): (B, C, N, K) - Features of locally grouped points before pooling. - - Returns: - torch.Tensor: (B, C, N) - Pooled features aggregating local information. - """ - if self.pool_mode == 'max': - # (B, C, N, 1) - new_features = F.max_pool2d( - features, kernel_size=[1, features.size(3)]) - elif self.pool_mode == 'avg': - # (B, C, N, 1) - new_features = F.avg_pool2d( - features, kernel_size=[1, features.size(3)]) - else: - raise NotImplementedError - - return new_features.squeeze(-1).contiguous() - - def forward(self, points): - """forward. - - Args: - points (Tensor): (B, N, C) input points. - - Returns: - List[Tensor]: (B, N, C1) new points generated from each graph - feature module. - """ - new_points_list = [points] - - for i in range(len(self.groupers)): - - new_points = new_points_list[i] - new_points_trans = new_points.transpose( - 1, 2).contiguous() # (B, C, N) - - if self.knn_modes[i] == 'D-KNN': - # (B, N, C) -> (B, N, K) - idx = self.groupers[i](new_points[..., -3:].contiguous(), - new_points[..., -3:].contiguous())[-1] - - grouped_results = grouping_operation( - new_points_trans, idx) # (B, C, N) -> (B, C, N, K) - grouped_results -= new_points_trans.unsqueeze(-1) - else: - grouped_results = self.groupers[i]( - new_points, new_points) # (B, N, C) -> (B, C, N, K) - - new_points = new_points_trans.unsqueeze(-1).repeat( - 1, 1, 1, grouped_results.shape[-1]) - new_points = torch.cat([grouped_results, new_points], dim=1) - - # (B, mlp[-1], N, K) - new_points = self.mlps[i](new_points) - - # (B, mlp[-1], N) - new_points = self._pool_features(new_points) - new_points = new_points.transpose(1, 2).contiguous() - new_points_list.append(new_points) - - return new_points - - -class DGCNNGFModule(BaseDGCNNGFModule): - """Point graph feature module used in DGCNN. - - Args: - mlp_channels (list[int]): Specify of the dgcnn before - the global pooling for each graph feature module. - num_sample (int, optional): Number of samples in each knn or ball - query. Defaults to None. - knn_mode (str, optional): Type of KNN method, valid mode - ['F-KNN', 'D-KNN']. Defaults to 'F-KNN'. - radius (float, optional): Radius to group with. - Defaults to None. - dilated_group (bool, optional): Whether to use dilated ball query. - Defaults to False. - norm_cfg (dict, optional): Type of normalization method. - Defaults to dict(type='BN2d'). - act_cfg (dict, optional): Type of activation method. - Defaults to dict(type='ReLU'). - use_xyz (bool, optional): Whether to use xyz as point features. - Defaults to True. - pool_mode (str, optional): Type of pooling method. - Defaults to 'max'. - normalize_xyz (bool, optional): If ball query, whether to normalize - local XYZ with radius. Defaults to False. - bias (bool | str, optional): If specified as `auto`, it will be decided - by the norm_cfg. Bias will be set as True if `norm_cfg` is None, - otherwise False. Defaults to 'auto'. 
- """ - - def __init__(self, - mlp_channels, - num_sample=None, - knn_mode='F-KNN', - radius=None, - dilated_group=False, - norm_cfg=dict(type='BN2d'), - act_cfg=dict(type='ReLU'), - use_xyz=True, - pool_mode='max', - normalize_xyz=False, - bias='auto'): - super(DGCNNGFModule, self).__init__( - mlp_channels=[mlp_channels], - sample_nums=[num_sample], - knn_modes=[knn_mode], - radii=[radius], - use_xyz=use_xyz, - pool_mode=pool_mode, - normalize_xyz=normalize_xyz, - dilated_group=dilated_group) - - for i in range(len(self.mlp_channels)): - mlp_channel = self.mlp_channels[i] - - mlp = nn.Sequential() - for i in range(len(mlp_channel) - 1): - mlp.add_module( - f'layer{i}', - ConvModule( - mlp_channel[i], - mlp_channel[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - conv_cfg=dict(type='Conv2d'), - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=bias)) - self.mlps.append(mlp) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/norm.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/norm.py deleted file mode 100644 index 98ec7f11..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/norm.py +++ /dev/null @@ -1,163 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import NORM_LAYERS -from mmcv.runner import force_fp32 -from torch import distributed as dist -from torch import nn as nn -from torch.autograd.function import Function - - -class AllReduce(Function): - - @staticmethod - def forward(ctx, input): - input_list = [ - torch.zeros_like(input) for k in range(dist.get_world_size()) - ] - # Use allgather instead of allreduce in-place operations is unreliable - dist.all_gather(input_list, input, async_op=False) - inputs = torch.stack(input_list, dim=0) - return torch.sum(inputs, dim=0) - - @staticmethod - def backward(ctx, grad_output): - dist.all_reduce(grad_output, async_op=False) - return grad_output - - -@NORM_LAYERS.register_module('naiveSyncBN1d') -class NaiveSyncBatchNorm1d(nn.BatchNorm1d): - """Synchronized Batch Normalization for 3D Tensors. - - Note: - This implementation is modified from - https://github.com/facebookresearch/detectron2/ - - `torch.nn.SyncBatchNorm` has known unknown bugs. - It produces significantly worse AP (and sometimes goes NaN) - when the batch size on each worker is quite different - (e.g., when scale augmentation is used). - In 3D detection, different workers has points of different shapes, - which also cause instability. - - Use this implementation before `nn.SyncBatchNorm` is fixed. - It is slower than `nn.SyncBatchNorm`. - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.fp16_enabled = False - - # customized normalization layer still needs this decorator - # to force the input to be fp32 and the output to be fp16 - # TODO: make mmcv fp16 utils handle customized norm layers - @force_fp32(out_fp16=True) - def forward(self, input): - """ - Args: - input (tensor): Has shape (N, C) or (N, C, L), where N is - the batch size, C is the number of features or - channels, and L is the sequence length - - Returns: - tensor: Has shape (N, C) or (N, C, L), has same shape - as input. 
- """ - assert input.dtype == torch.float32, \ - f'input should be in float32 type, got {input.dtype}' - using_dist = dist.is_available() and dist.is_initialized() - if (not using_dist) or dist.get_world_size() == 1 \ - or not self.training: - return super().forward(input) - assert input.shape[0] > 0, 'SyncBN does not support empty inputs' - is_two_dim = input.dim() == 2 - if is_two_dim: - input = input.unsqueeze(2) - - C = input.shape[1] - mean = torch.mean(input, dim=[0, 2]) - meansqr = torch.mean(input * input, dim=[0, 2]) - - vec = torch.cat([mean, meansqr], dim=0) - vec = AllReduce.apply(vec) * (1.0 / dist.get_world_size()) - - mean, meansqr = torch.split(vec, C) - var = meansqr - mean * mean - self.running_mean += self.momentum * ( - mean.detach() - self.running_mean) - self.running_var += self.momentum * (var.detach() - self.running_var) - - invstd = torch.rsqrt(var + self.eps) - scale = self.weight * invstd - bias = self.bias - mean * scale - scale = scale.reshape(1, -1, 1) - bias = bias.reshape(1, -1, 1) - output = input * scale + bias - if is_two_dim: - output = output.squeeze(2) - return output - - -@NORM_LAYERS.register_module('naiveSyncBN2d') -class NaiveSyncBatchNorm2d(nn.BatchNorm2d): - """Synchronized Batch Normalization for 4D Tensors. - - Note: - This implementation is modified from - https://github.com/facebookresearch/detectron2/ - - `torch.nn.SyncBatchNorm` has known unknown bugs. - It produces significantly worse AP (and sometimes goes NaN) - when the batch size on each worker is quite different - (e.g., when scale augmentation is used). - This phenomenon also occurs when the multi-modality feature fusion - modules of multi-modality detectors use SyncBN. - - Use this implementation before `nn.SyncBatchNorm` is fixed. - It is slower than `nn.SyncBatchNorm`. - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.fp16_enabled = False - - # customized normalization layer still needs this decorator - # to force the input to be fp32 and the output to be fp16 - # TODO: make mmcv fp16 utils handle customized norm layers - @force_fp32(out_fp16=True) - def forward(self, input): - """ - Args: - Input (tensor): Feature has shape (N, C, H, W). - - Returns: - tensor: Has shape (N, C, H, W), same shape as input. 
- """ - assert input.dtype == torch.float32, \ - f'input should be in float32 type, got {input.dtype}' - using_dist = dist.is_available() and dist.is_initialized() - if (not using_dist) or \ - dist.get_world_size() == 1 or \ - not self.training: - return super().forward(input) - - assert input.shape[0] > 0, 'SyncBN does not support empty inputs' - C = input.shape[1] - mean = torch.mean(input, dim=[0, 2, 3]) - meansqr = torch.mean(input * input, dim=[0, 2, 3]) - - vec = torch.cat([mean, meansqr], dim=0) - vec = AllReduce.apply(vec) * (1.0 / dist.get_world_size()) - - mean, meansqr = torch.split(vec, C) - var = meansqr - mean * mean - self.running_mean += self.momentum * ( - mean.detach() - self.running_mean) - self.running_var += self.momentum * (var.detach() - self.running_var) - - invstd = torch.rsqrt(var + self.eps) - scale = self.weight * invstd - bias = self.bias - mean * scale - scale = scale.reshape(1, -1, 1, 1) - bias = bias.reshape(1, -1, 1, 1) - return input * scale + bias diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/__init__.py deleted file mode 100644 index d71c7660..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .paconv import PAConv, PAConvCUDA - -__all__ = ['PAConv', 'PAConvCUDA'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/paconv.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/paconv.py deleted file mode 100644 index bda8bfe3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/paconv.py +++ /dev/null @@ -1,392 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import torch -from mmcv.cnn import (ConvModule, build_activation_layer, build_norm_layer, - constant_init) -from mmcv.ops import assign_score_withk as assign_score_cuda -from torch import nn as nn -from torch.nn import functional as F - -from .utils import assign_kernel_withoutk, assign_score, calc_euclidian_dist - - -class ScoreNet(nn.Module): - r"""ScoreNet that outputs coefficient scores to assemble kernel weights in - the weight bank according to the relative position of point pairs. - - Args: - mlp_channels (List[int]): Hidden unit sizes of SharedMLP layers. - last_bn (bool, optional): Whether to use BN on the last output of mlps. - Defaults to False. - score_norm (str, optional): Normalization function of output scores. - Can be 'softmax', 'sigmoid' or 'identity'. Defaults to 'softmax'. - temp_factor (float, optional): Temperature factor to scale the output - scores before softmax. Defaults to 1.0. - norm_cfg (dict, optional): Type of normalization method. - Defaults to dict(type='BN2d'). - bias (bool | str, optional): If specified as `auto`, it will be decided - by the norm_cfg. Bias will be set as True if `norm_cfg` is None, - otherwise False. Defaults to 'auto'. - - Note: - The official code applies xavier_init to all Conv layers in ScoreNet, - see `PAConv `_. However in our experiments, we - did not find much difference in applying such xavier initialization - or not. So we neglect this initialization in our implementation. 
- """ - - def __init__(self, - mlp_channels, - last_bn=False, - score_norm='softmax', - temp_factor=1.0, - norm_cfg=dict(type='BN2d'), - bias='auto'): - super(ScoreNet, self).__init__() - - assert score_norm in ['softmax', 'sigmoid', 'identity'], \ - f'unsupported score_norm function {score_norm}' - - self.score_norm = score_norm - self.temp_factor = temp_factor - - self.mlps = nn.Sequential() - for i in range(len(mlp_channels) - 2): - self.mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - conv_cfg=dict(type='Conv2d'), - norm_cfg=norm_cfg, - bias=bias)) - - # for the last mlp that outputs scores, no relu and possibly no bn - i = len(mlp_channels) - 2 - self.mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - conv_cfg=dict(type='Conv2d'), - norm_cfg=norm_cfg if last_bn else None, - act_cfg=None, - bias=bias)) - - def forward(self, xyz_features): - """Forward. - - Args: - xyz_features (torch.Tensor): (B, C, N, K), features constructed - from xyz coordinates of point pairs. May contain relative - positions, Euclidean distance, etc. - - Returns: - torch.Tensor: (B, N, K, M), predicted scores for `M` kernels. - """ - scores = self.mlps(xyz_features) # (B, M, N, K) - - # perform score normalization - if self.score_norm == 'softmax': - scores = F.softmax(scores / self.temp_factor, dim=1) - elif self.score_norm == 'sigmoid': - scores = torch.sigmoid(scores / self.temp_factor) - else: # 'identity' - scores = scores - - scores = scores.permute(0, 2, 3, 1) # (B, N, K, M) - - return scores - - -class PAConv(nn.Module): - """Non-CUDA version of PAConv. - - PAConv stores a trainable weight bank containing several kernel weights. - Given input points and features, it computes coefficient scores to assemble - those kernels to form conv kernels, and then runs convolution on the input. - - Args: - in_channels (int): Input channels of point features. - out_channels (int): Output channels of point features. - num_kernels (int): Number of kernel weights in the weight bank. - norm_cfg (dict, optional): Type of normalization method. - Defaults to dict(type='BN2d', momentum=0.1). - act_cfg (dict, optional): Type of activation method. - Defaults to dict(type='ReLU', inplace=True). - scorenet_input (str, optional): Type of input to ScoreNet. - Can be 'identity', 'w_neighbor' or 'w_neighbor_dist'. - Defaults to 'w_neighbor_dist'. - weight_bank_init (str, optional): Init method of weight bank kernels. - Can be 'kaiming' or 'xavier'. Defaults to 'kaiming'. - kernel_input (str, optional): Input features to be multiplied with - kernel weights. Can be 'identity' or 'w_neighbor'. - Defaults to 'w_neighbor'. - scorenet_cfg (dict, optional): Config of the ScoreNet module, which - may contain the following keys and values: - - - mlp_channels (List[int]): Hidden units of MLPs. - - score_norm (str): Normalization function of output scores. - Can be 'softmax', 'sigmoid' or 'identity'. - - temp_factor (float): Temperature factor to scale the output - scores before softmax. - - last_bn (bool): Whether to use BN on the last output of mlps. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_kernels, - norm_cfg=dict(type='BN2d', momentum=0.1), - act_cfg=dict(type='ReLU', inplace=True), - scorenet_input='w_neighbor_dist', - weight_bank_init='kaiming', - kernel_input='w_neighbor', - scorenet_cfg=dict( - mlp_channels=[16, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConv, self).__init__() - - # determine weight kernel size according to used features - if kernel_input == 'identity': - # only use grouped_features - kernel_mul = 1 - elif kernel_input == 'w_neighbor': - # concat of (grouped_features - center_features, grouped_features) - kernel_mul = 2 - else: - raise NotImplementedError( - f'unsupported kernel_input {kernel_input}') - self.kernel_input = kernel_input - in_channels = kernel_mul * in_channels - - # determine mlp channels in ScoreNet according to used xyz features - if scorenet_input == 'identity': - # only use relative position (grouped_xyz - center_xyz) - self.scorenet_in_channels = 3 - elif scorenet_input == 'w_neighbor': - # (grouped_xyz - center_xyz, grouped_xyz) - self.scorenet_in_channels = 6 - elif scorenet_input == 'w_neighbor_dist': - # (center_xyz, grouped_xyz - center_xyz, Euclidean distance) - self.scorenet_in_channels = 7 - else: - raise NotImplementedError( - f'unsupported scorenet_input {scorenet_input}') - self.scorenet_input = scorenet_input - - # construct kernel weights in weight bank - # self.weight_bank is of shape [C, num_kernels * out_c] - # where C can be in_c or (2 * in_c) - if weight_bank_init == 'kaiming': - weight_init = nn.init.kaiming_normal_ - elif weight_bank_init == 'xavier': - weight_init = nn.init.xavier_normal_ - else: - raise NotImplementedError( - f'unsupported weight bank init method {weight_bank_init}') - - self.num_kernels = num_kernels # the parameter `m` in the paper - weight_bank = weight_init( - torch.empty(self.num_kernels, in_channels, out_channels)) - weight_bank = weight_bank.permute(1, 0, 2).reshape( - in_channels, self.num_kernels * out_channels).contiguous() - self.weight_bank = nn.Parameter(weight_bank, requires_grad=True) - - # construct ScoreNet - scorenet_cfg_ = copy.deepcopy(scorenet_cfg) - scorenet_cfg_['mlp_channels'].insert(0, self.scorenet_in_channels) - scorenet_cfg_['mlp_channels'].append(self.num_kernels) - self.scorenet = ScoreNet(**scorenet_cfg_) - - self.bn = build_norm_layer(norm_cfg, out_channels)[1] if \ - norm_cfg is not None else None - self.activate = build_activation_layer(act_cfg) if \ - act_cfg is not None else None - - # set some basic attributes of Conv layers - self.in_channels = in_channels - self.out_channels = out_channels - - self.init_weights() - - def init_weights(self): - """Initialize weights of shared MLP layers and BN layers.""" - if self.bn is not None: - constant_init(self.bn, val=1, bias=0) - - def _prepare_scorenet_input(self, points_xyz): - """Prepare input point pairs features for self.ScoreNet. - - Args: - points_xyz (torch.Tensor): (B, 3, npoint, K) - Coordinates of the grouped points. - - Returns: - torch.Tensor: (B, C, npoint, K) - The generated features per point pair. 
- """ - B, _, npoint, K = points_xyz.size() - center_xyz = points_xyz[..., :1].repeat(1, 1, 1, K) - xyz_diff = points_xyz - center_xyz # [B, 3, npoint, K] - if self.scorenet_input == 'identity': - xyz_features = xyz_diff - elif self.scorenet_input == 'w_neighbor': - xyz_features = torch.cat((xyz_diff, points_xyz), dim=1) - else: # w_neighbor_dist - euclidian_dist = calc_euclidian_dist( - center_xyz.permute(0, 2, 3, 1).reshape(B * npoint * K, 3), - points_xyz.permute(0, 2, 3, 1).reshape(B * npoint * K, 3)).\ - reshape(B, 1, npoint, K) - xyz_features = torch.cat((center_xyz, xyz_diff, euclidian_dist), - dim=1) - return xyz_features - - def forward(self, inputs): - """Forward. - - Args: - inputs (tuple(torch.Tensor)): - - - features (torch.Tensor): (B, in_c, npoint, K) - Features of the queried points. - - points_xyz (torch.Tensor): (B, 3, npoint, K) - Coordinates of the grouped points. - - Returns: - Tuple[torch.Tensor]: - - - new_features: (B, out_c, npoint, K), features after PAConv. - - points_xyz: same as input. - """ - features, points_xyz = inputs - B, _, npoint, K = features.size() - - if self.kernel_input == 'w_neighbor': - center_features = features[..., :1].repeat(1, 1, 1, K) - features_diff = features - center_features - # to (B, 2 * in_c, npoint, K) - features = torch.cat((features_diff, features), dim=1) - - # prepare features for between each point and its grouping center - xyz_features = self._prepare_scorenet_input(points_xyz) - - # scores to assemble kernel weights - scores = self.scorenet(xyz_features) # [B, npoint, K, m] - - # first compute out features over all kernels - # features is [B, C, npoint, K], weight_bank is [C, m * out_c] - new_features = torch.matmul( - features.permute(0, 2, 3, 1), - self.weight_bank).view(B, npoint, K, self.num_kernels, - -1) # [B, npoint, K, m, out_c] - - # then aggregate using scores - new_features = assign_score(scores, new_features) - # to [B, out_c, npoint, K] - new_features = new_features.permute(0, 3, 1, 2).contiguous() - - if self.bn is not None: - new_features = self.bn(new_features) - if self.activate is not None: - new_features = self.activate(new_features) - - # in order to keep input output consistency - # so that we can wrap PAConv in Sequential - return (new_features, points_xyz) - - -class PAConvCUDA(PAConv): - """CUDA version of PAConv that implements a cuda op to efficiently perform - kernel assembling. - - Different from vanilla PAConv, the input features of this function is not - grouped by centers. Instead, they will be queried on-the-fly by the - additional input `points_idx`. This avoids the large intermediate matrix. - See the `paper `_ appendix Sec. D for - more detailed descriptions. - """ - - def __init__(self, - in_channels, - out_channels, - num_kernels, - norm_cfg=dict(type='BN2d', momentum=0.1), - act_cfg=dict(type='ReLU', inplace=True), - scorenet_input='w_neighbor_dist', - weight_bank_init='kaiming', - kernel_input='w_neighbor', - scorenet_cfg=dict( - mlp_channels=[8, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConvCUDA, self).__init__( - in_channels=in_channels, - out_channels=out_channels, - num_kernels=num_kernels, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - scorenet_input=scorenet_input, - weight_bank_init=weight_bank_init, - kernel_input=kernel_input, - scorenet_cfg=scorenet_cfg) - - assert self.kernel_input == 'w_neighbor', \ - 'CUDA implemented PAConv only supports w_neighbor kernel_input' - - def forward(self, inputs): - """Forward. 
- - Args: - inputs (tuple(torch.Tensor)): - - - features (torch.Tensor): (B, in_c, N) - Features of all points in the current point cloud. - Different from non-CUDA version PAConv, here the features - are not grouped by each center to form a K dim. - - points_xyz (torch.Tensor): (B, 3, npoint, K) - Coordinates of the grouped points. - - points_idx (torch.Tensor): (B, npoint, K) - Index of the grouped points. - - Returns: - Tuple[torch.Tensor]: - - - new_features: (B, out_c, npoint, K), features after PAConv. - - points_xyz: same as input. - - points_idx: same as input. - """ - features, points_xyz, points_idx = inputs - - # prepare features for between each point and its grouping center - xyz_features = self._prepare_scorenet_input(points_xyz) - - # scores to assemble kernel weights - scores = self.scorenet(xyz_features) # [B, npoint, K, m] - - # pre-compute features for points and centers separately - # features is [B, in_c, N], weight_bank is [C, m * out_dim] - point_feat, center_feat = assign_kernel_withoutk( - features, self.weight_bank, self.num_kernels) - - # aggregate features using custom cuda op - new_features = assign_score_cuda( - scores, point_feat, center_feat, points_idx, - 'sum').contiguous() # [B, out_c, npoint, K] - - if self.bn is not None: - new_features = self.bn(new_features) - if self.activate is not None: - new_features = self.activate(new_features) - - # in order to keep input output consistency - return (new_features, points_xyz, points_idx) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/utils.py deleted file mode 100644 index 68e71d51..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/paconv/utils.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def calc_euclidian_dist(xyz1, xyz2): - """Calculate the Euclidean distance between two sets of points. - - Args: - xyz1 (torch.Tensor): (N, 3), the first set of points. - xyz2 (torch.Tensor): (N, 3), the second set of points. - - Returns: - torch.Tensor: (N, ), the Euclidean distance between each point pair. - """ - assert xyz1.shape[0] == xyz2.shape[0], 'number of points are not the same' - assert xyz1.shape[1] == xyz2.shape[1] == 3, \ - 'points coordinates dimension is not 3' - return torch.norm(xyz1 - xyz2, dim=-1) - - -def assign_score(scores, point_features): - """Perform weighted sum to aggregate output features according to scores. - This function is used in non-CUDA version of PAConv. - - Compared to the cuda op assigh_score_withk, this pytorch implementation - pre-computes output features for the neighbors of all centers, and then - performs aggregation. It consumes more GPU memories. - - Args: - scores (torch.Tensor): (B, npoint, K, M), predicted scores to - aggregate weight matrices in the weight bank. - `npoint` is the number of sampled centers. - `K` is the number of queried neighbors. - `M` is the number of weight matrices in the weight bank. - point_features (torch.Tensor): (B, npoint, K, M, out_dim) - Pre-computed point features to be aggregated. - - Returns: - torch.Tensor: (B, npoint, K, out_dim), the aggregated features. - """ - B, npoint, K, M = scores.size() - scores = scores.view(B, npoint, K, 1, M) - output = torch.matmul(scores, point_features).view(B, npoint, K, -1) - return output - - -def assign_kernel_withoutk(features, kernels, M): - """Pre-compute features with weight matrices in weight bank. 
This function - is used before cuda op assign_score_withk in CUDA version PAConv. - - Args: - features (torch.Tensor): (B, in_dim, N), input features of all points. - `N` is the number of points in current point cloud. - kernels (torch.Tensor): (2 * in_dim, M * out_dim), weight matrices in - the weight bank, transformed from (M, 2 * in_dim, out_dim). - `2 * in_dim` is because the input features are concatenation of - (point_features - center_features, point_features). - M (int): Number of weight matrices in the weight bank. - - Returns: - Tuple[torch.Tensor]: both of shape (B, N, M, out_dim): - - - point_features: Pre-computed features for points. - - center_features: Pre-computed features for centers. - """ - B, in_dim, N = features.size() - feat_trans = features.permute(0, 2, 1) # [B, N, in_dim] - out_feat_half1 = torch.matmul(feat_trans, kernels[:in_dim]).view( - B, N, M, -1) # [B, N, M, out_dim] - out_feat_half2 = torch.matmul(feat_trans, kernels[in_dim:]).view( - B, N, M, -1) # [B, N, M, out_dim] - - # TODO: why this hard-coded if condition? - # when the network input is only xyz without additional features - # xyz will be used as features, so that features.size(1) == 3 % 2 != 0 - # we need to compensate center_features because otherwise - # `point_features - center_features` will result in all zeros? - if features.size(1) % 2 != 0: - out_feat_half_coord = torch.matmul( - feat_trans[:, :, :3], # [B, N, 3] - kernels[in_dim:in_dim + 3]).view(B, N, M, -1) # [B, N, M, out_dim] - else: - out_feat_half_coord = torch.zeros_like(out_feat_half2) - - point_features = out_feat_half1 + out_feat_half2 - center_features = out_feat_half1 + out_feat_half_coord - return point_features, center_features diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/__init__.py deleted file mode 100644 index 99b08eb8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import build_sa_module -from .paconv_sa_module import (PAConvCUDASAModule, PAConvCUDASAModuleMSG, - PAConvSAModule, PAConvSAModuleMSG) -from .point_fp_module import PointFPModule -from .point_sa_module import PointSAModule, PointSAModuleMSG - -__all__ = [ - 'build_sa_module', 'PointSAModuleMSG', 'PointSAModule', 'PointFPModule', - 'PAConvSAModule', 'PAConvSAModuleMSG', 'PAConvCUDASAModule', - 'PAConvCUDASAModuleMSG' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/builder.py deleted file mode 100644 index 6631cb42..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/builder.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry - -SA_MODULES = Registry('point_sa_module') - - -def build_sa_module(cfg, *args, **kwargs): - """Build PointNet2 set abstraction (SA) module. - - Args: - cfg (None or dict): The SA module config, which should contain: - - type (str): Module type. - - module args: Args needed to instantiate an SA module. - args (argument list): Arguments passed to the `__init__` - method of the corresponding module. - kwargs (keyword arguments): Keyword arguments passed to the `__init__` - method of the corresponding SA module . 
- - Returns: - nn.Module: Created SA module. - """ - if cfg is None: - cfg_ = dict(type='PointSAModule') - else: - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - module_type = cfg_.pop('type') - if module_type not in SA_MODULES: - raise KeyError(f'Unrecognized module type {module_type}') - else: - sa_module = SA_MODULES.get(module_type) - - module = sa_module(*args, **kwargs, **cfg_) - - return module diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/paconv_sa_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/paconv_sa_module.py deleted file mode 100644 index 361ecbb2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/paconv_sa_module.py +++ /dev/null @@ -1,342 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn - -from mmdet3d.ops import PAConv, PAConvCUDA -from .builder import SA_MODULES -from .point_sa_module import BasePointSAModule - - -@SA_MODULES.register_module() -class PAConvSAModuleMSG(BasePointSAModule): - r"""Point set abstraction module with multi-scale grouping (MSG) used in - PAConv networks. - - Replace the MLPs in `PointSAModuleMSG` with PAConv layers. - See the `paper `_ for more details. - - Args: - paconv_num_kernels (list[list[int]]): Number of kernel weights in the - weight banks of each layer's PAConv. - paconv_kernel_input (str, optional): Input features to be multiplied - with kernel weights. Can be 'identity' or 'w_neighbor'. - Defaults to 'w_neighbor'. - scorenet_input (str, optional): Type of the input to ScoreNet. - Defaults to 'w_neighbor_dist'. Can be the following values: - - - 'identity': Use xyz coordinates as input. - - 'w_neighbor': Use xyz coordinates and the difference with center - points as input. - - 'w_neighbor_dist': Use xyz coordinates, the difference with - center points and the Euclidean distance as input. - - scorenet_cfg (dict, optional): Config of the ScoreNet module, which - may contain the following keys and values: - - - mlp_channels (List[int]): Hidden units of MLPs. - - score_norm (str): Normalization function of output scores. - Can be 'softmax', 'sigmoid' or 'identity'. - - temp_factor (float): Temperature factor to scale the output - scores before softmax. - - last_bn (bool): Whether to use BN on the last output of mlps. 
- """ - - def __init__(self, - num_point, - radii, - sample_nums, - mlp_channels, - paconv_num_kernels, - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - dilated_group=False, - norm_cfg=dict(type='BN2d', momentum=0.1), - use_xyz=True, - pool_mod='max', - normalize_xyz=False, - bias='auto', - paconv_kernel_input='w_neighbor', - scorenet_input='w_neighbor_dist', - scorenet_cfg=dict( - mlp_channels=[16, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConvSAModuleMSG, self).__init__( - num_point=num_point, - radii=radii, - sample_nums=sample_nums, - mlp_channels=mlp_channels, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - dilated_group=dilated_group, - use_xyz=use_xyz, - pool_mod=pool_mod, - normalize_xyz=normalize_xyz, - grouper_return_grouped_xyz=True) - - assert len(paconv_num_kernels) == len(mlp_channels) - for i in range(len(mlp_channels)): - assert len(paconv_num_kernels[i]) == len(mlp_channels[i]) - 1, \ - 'PAConv number of kernel weights wrong' - - # in PAConv, bias only exists in ScoreNet - scorenet_cfg['bias'] = bias - - for i in range(len(self.mlp_channels)): - mlp_channel = self.mlp_channels[i] - if use_xyz: - mlp_channel[0] += 3 - - num_kernels = paconv_num_kernels[i] - - mlp = nn.Sequential() - for i in range(len(mlp_channel) - 1): - mlp.add_module( - f'layer{i}', - PAConv( - mlp_channel[i], - mlp_channel[i + 1], - num_kernels[i], - norm_cfg=norm_cfg, - kernel_input=paconv_kernel_input, - scorenet_input=scorenet_input, - scorenet_cfg=scorenet_cfg)) - self.mlps.append(mlp) - - -@SA_MODULES.register_module() -class PAConvSAModule(PAConvSAModuleMSG): - r"""Point set abstraction module with single-scale grouping (SSG) used in - PAConv networks. - - Replace the MLPs in `PointSAModule` with PAConv layers. See the `paper - `_ for more details. - """ - - def __init__(self, - mlp_channels, - paconv_num_kernels, - num_point=None, - radius=None, - num_sample=None, - norm_cfg=dict(type='BN2d', momentum=0.1), - use_xyz=True, - pool_mod='max', - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - normalize_xyz=False, - paconv_kernel_input='w_neighbor', - scorenet_input='w_neighbor_dist', - scorenet_cfg=dict( - mlp_channels=[16, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConvSAModule, self).__init__( - mlp_channels=[mlp_channels], - paconv_num_kernels=[paconv_num_kernels], - num_point=num_point, - radii=[radius], - sample_nums=[num_sample], - norm_cfg=norm_cfg, - use_xyz=use_xyz, - pool_mod=pool_mod, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - normalize_xyz=normalize_xyz, - paconv_kernel_input=paconv_kernel_input, - scorenet_input=scorenet_input, - scorenet_cfg=scorenet_cfg) - - -@SA_MODULES.register_module() -class PAConvCUDASAModuleMSG(BasePointSAModule): - r"""Point set abstraction module with multi-scale grouping (MSG) used in - PAConv networks. - - Replace the non CUDA version PAConv with CUDA implemented PAConv for - efficient computation. See the `paper `_ - for more details. 
- """ - - def __init__(self, - num_point, - radii, - sample_nums, - mlp_channels, - paconv_num_kernels, - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - dilated_group=False, - norm_cfg=dict(type='BN2d', momentum=0.1), - use_xyz=True, - pool_mod='max', - normalize_xyz=False, - bias='auto', - paconv_kernel_input='w_neighbor', - scorenet_input='w_neighbor_dist', - scorenet_cfg=dict( - mlp_channels=[8, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConvCUDASAModuleMSG, self).__init__( - num_point=num_point, - radii=radii, - sample_nums=sample_nums, - mlp_channels=mlp_channels, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - dilated_group=dilated_group, - use_xyz=use_xyz, - pool_mod=pool_mod, - normalize_xyz=normalize_xyz, - grouper_return_grouped_xyz=True, - grouper_return_grouped_idx=True) - - assert len(paconv_num_kernels) == len(mlp_channels) - for i in range(len(mlp_channels)): - assert len(paconv_num_kernels[i]) == len(mlp_channels[i]) - 1, \ - 'PAConv number of kernel weights wrong' - - # in PAConv, bias only exists in ScoreNet - scorenet_cfg['bias'] = bias - - # we need to manually concat xyz for CUDA implemented PAConv - self.use_xyz = use_xyz - - for i in range(len(self.mlp_channels)): - mlp_channel = self.mlp_channels[i] - if use_xyz: - mlp_channel[0] += 3 - - num_kernels = paconv_num_kernels[i] - - # can't use `nn.Sequential` for PAConvCUDA because its input and - # output have different shapes - mlp = nn.ModuleList() - for i in range(len(mlp_channel) - 1): - mlp.append( - PAConvCUDA( - mlp_channel[i], - mlp_channel[i + 1], - num_kernels[i], - norm_cfg=norm_cfg, - kernel_input=paconv_kernel_input, - scorenet_input=scorenet_input, - scorenet_cfg=scorenet_cfg)) - self.mlps.append(mlp) - - def forward( - self, - points_xyz, - features=None, - indices=None, - target_xyz=None, - ): - """forward. - - Args: - points_xyz (Tensor): (B, N, 3) xyz coordinates of the features. - features (Tensor, optional): (B, C, N) features of each point. - Default: None. - indices (Tensor, optional): (B, num_point) Index of the features. - Default: None. - target_xyz (Tensor, optional): (B, M, 3) new coords of the outputs. - Default: None. - - Returns: - Tensor: (B, M, 3) where M is the number of points. - New features xyz. - Tensor: (B, M, sum_k(mlps[k][-1])) where M is the number - of points. New feature descriptors. - Tensor: (B, M) where M is the number of points. - Index of the features. 
- """ - new_features_list = [] - - # sample points, (B, num_point, 3), (B, num_point) - new_xyz, indices = self._sample_points(points_xyz, features, indices, - target_xyz) - - for i in range(len(self.groupers)): - xyz = points_xyz - new_features = features - for j in range(len(self.mlps[i])): - # we don't use grouped_features here to avoid large GPU memory - # _, (B, 3, num_point, nsample), (B, num_point, nsample) - _, grouped_xyz, grouped_idx = self.groupers[i](xyz, new_xyz, - new_features) - - # concat xyz as additional features - if self.use_xyz and j == 0: - # (B, C+3, N) - new_features = torch.cat( - (points_xyz.permute(0, 2, 1), new_features), dim=1) - - # (B, out_c, num_point, nsample) - grouped_new_features = self.mlps[i][j]( - (new_features, grouped_xyz, grouped_idx.long()))[0] - - # different from PointNet++ and non CUDA version of PAConv - # CUDA version of PAConv needs to aggregate local features - # every time after it passes through a Conv layer - # in order to transform to valid input shape - # (B, out_c, num_point) - new_features = self._pool_features(grouped_new_features) - - # constrain the points to be grouped for next PAConv layer - # because new_features only contains sampled centers now - # (B, num_point, 3) - xyz = new_xyz - - new_features_list.append(new_features) - - return new_xyz, torch.cat(new_features_list, dim=1), indices - - -@SA_MODULES.register_module() -class PAConvCUDASAModule(PAConvCUDASAModuleMSG): - r"""Point set abstraction module with single-scale grouping (SSG) used in - PAConv networks. - - Replace the non CUDA version PAConv with CUDA implemented PAConv for - efficient computation. See the `paper `_ - for more details. - """ - - def __init__(self, - mlp_channels, - paconv_num_kernels, - num_point=None, - radius=None, - num_sample=None, - norm_cfg=dict(type='BN2d', momentum=0.1), - use_xyz=True, - pool_mod='max', - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - normalize_xyz=False, - paconv_kernel_input='w_neighbor', - scorenet_input='w_neighbor_dist', - scorenet_cfg=dict( - mlp_channels=[8, 16, 16], - score_norm='softmax', - temp_factor=1.0, - last_bn=False)): - super(PAConvCUDASAModule, self).__init__( - mlp_channels=[mlp_channels], - paconv_num_kernels=[paconv_num_kernels], - num_point=num_point, - radii=[radius], - sample_nums=[num_sample], - norm_cfg=norm_cfg, - use_xyz=use_xyz, - pool_mod=pool_mod, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - normalize_xyz=normalize_xyz, - paconv_kernel_input=paconv_kernel_input, - scorenet_input=scorenet_input, - scorenet_cfg=scorenet_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/point_fp_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/point_fp_module.py deleted file mode 100644 index 1bc833e0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/point_fp_module.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List - -import torch -from mmcv.cnn import ConvModule -from mmcv.ops import three_interpolate, three_nn -from mmcv.runner import BaseModule, force_fp32 -from torch import nn as nn - - -class PointFPModule(BaseModule): - """Point feature propagation module used in PointNets. - - Propagate the features from one set to another. - - Args: - mlp_channels (list[int]): List of mlp channels. - norm_cfg (dict, optional): Type of normalization method. - Default: dict(type='BN2d'). 
- """ - - def __init__(self, - mlp_channels: List[int], - norm_cfg: dict = dict(type='BN2d'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.fp16_enabled = False - self.mlps = nn.Sequential() - for i in range(len(mlp_channels) - 1): - self.mlps.add_module( - f'layer{i}', - ConvModule( - mlp_channels[i], - mlp_channels[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - conv_cfg=dict(type='Conv2d'), - norm_cfg=norm_cfg)) - - @force_fp32() - def forward(self, target: torch.Tensor, source: torch.Tensor, - target_feats: torch.Tensor, - source_feats: torch.Tensor) -> torch.Tensor: - """forward. - - Args: - target (Tensor): (B, n, 3) tensor of the xyz positions of - the target features. - source (Tensor): (B, m, 3) tensor of the xyz positions of - the source features. - target_feats (Tensor): (B, C1, n) tensor of the features to be - propagated to. - source_feats (Tensor): (B, C2, m) tensor of features - to be propagated. - - Return: - Tensor: (B, M, N) M = mlp[-1], tensor of the target features. - """ - if source is not None: - dist, idx = three_nn(target, source) - dist_reciprocal = 1.0 / (dist + 1e-8) - norm = torch.sum(dist_reciprocal, dim=2, keepdim=True) - weight = dist_reciprocal / norm - - interpolated_feats = three_interpolate(source_feats, idx, weight) - else: - interpolated_feats = source_feats.expand(*source_feats.size()[0:2], - target.size(1)) - - if target_feats is not None: - new_features = torch.cat([interpolated_feats, target_feats], - dim=1) # (B, C2 + C1, n) - else: - new_features = interpolated_feats - - new_features = new_features.unsqueeze(-1) - new_features = self.mlps(new_features) - - return new_features.squeeze(-1) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/point_sa_module.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/point_sa_module.py deleted file mode 100644 index e33377fc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/pointnet_modules/point_sa_module.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from mmcv.ops import GroupAll -from mmcv.ops import PointsSampler as Points_Sampler -from mmcv.ops import QueryAndGroup, gather_points -from torch import nn as nn -from torch.nn import functional as F - -from mmdet3d.ops import PAConv -from .builder import SA_MODULES - - -class BasePointSAModule(nn.Module): - """Base module for point set abstraction module used in PointNets. - - Args: - num_point (int): Number of points. - radii (list[float]): List of radius in each ball query. - sample_nums (list[int]): Number of samples in each ball query. - mlp_channels (list[list[int]]): Specify of the pointnet before - the global pooling for each scale. - fps_mod (list[str], optional): Type of FPS method, valid mod - ['F-FPS', 'D-FPS', 'FS'], Default: ['D-FPS']. - F-FPS: using feature distances for FPS. - D-FPS: using Euclidean distances of points for FPS. - FS: using F-FPS and D-FPS simultaneously. - fps_sample_range_list (list[int], optional): - Range of points to apply FPS. Default: [-1]. - dilated_group (bool, optional): Whether to use dilated ball query. - Default: False. - use_xyz (bool, optional): Whether to use xyz. - Default: True. - pool_mod (str, optional): Type of pooling method. - Default: 'max_pool'. - normalize_xyz (bool, optional): Whether to normalize local XYZ - with radius. Default: False. 
- grouper_return_grouped_xyz (bool, optional): Whether to return - grouped xyz in `QueryAndGroup`. Defaults to False. - grouper_return_grouped_idx (bool, optional): Whether to return - grouped idx in `QueryAndGroup`. Defaults to False. - """ - - def __init__(self, - num_point, - radii, - sample_nums, - mlp_channels, - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - dilated_group=False, - use_xyz=True, - pool_mod='max', - normalize_xyz=False, - grouper_return_grouped_xyz=False, - grouper_return_grouped_idx=False): - super(BasePointSAModule, self).__init__() - - assert len(radii) == len(sample_nums) == len(mlp_channels) - assert pool_mod in ['max', 'avg'] - assert isinstance(fps_mod, list) or isinstance(fps_mod, tuple) - assert isinstance(fps_sample_range_list, list) or isinstance( - fps_sample_range_list, tuple) - assert len(fps_mod) == len(fps_sample_range_list) - - if isinstance(mlp_channels, tuple): - mlp_channels = list(map(list, mlp_channels)) - self.mlp_channels = mlp_channels - - if isinstance(num_point, int): - self.num_point = [num_point] - elif isinstance(num_point, list) or isinstance(num_point, tuple): - self.num_point = num_point - elif num_point is None: - self.num_point = None - else: - raise NotImplementedError('Error type of num_point!') - - self.pool_mod = pool_mod - self.groupers = nn.ModuleList() - self.mlps = nn.ModuleList() - self.fps_mod_list = fps_mod - self.fps_sample_range_list = fps_sample_range_list - - if self.num_point is not None: - self.points_sampler = Points_Sampler(self.num_point, - self.fps_mod_list, - self.fps_sample_range_list) - else: - self.points_sampler = None - - for i in range(len(radii)): - radius = radii[i] - sample_num = sample_nums[i] - if num_point is not None: - if dilated_group and i != 0: - min_radius = radii[i - 1] - else: - min_radius = 0 - grouper = QueryAndGroup( - radius, - sample_num, - min_radius=min_radius, - use_xyz=use_xyz, - normalize_xyz=normalize_xyz, - return_grouped_xyz=grouper_return_grouped_xyz, - return_grouped_idx=grouper_return_grouped_idx) - else: - grouper = GroupAll(use_xyz) - self.groupers.append(grouper) - - def _sample_points(self, points_xyz, features, indices, target_xyz): - """Perform point sampling based on inputs. - - If `indices` is specified, directly sample corresponding points. - Else if `target_xyz` is specified, use is as sampled points. - Otherwise sample points using `self.points_sampler`. - - Args: - points_xyz (Tensor): (B, N, 3) xyz coordinates of the features. - features (Tensor): (B, C, N) features of each point. - indices (Tensor): (B, num_point) Index of the features. - target_xyz (Tensor): (B, M, 3) new_xyz coordinates of the outputs. - - Returns: - Tensor: (B, num_point, 3) sampled xyz coordinates of points. - Tensor: (B, num_point) sampled points' index. - """ - xyz_flipped = points_xyz.transpose(1, 2).contiguous() - if indices is not None: - assert (indices.shape[1] == self.num_point[0]) - new_xyz = gather_points(xyz_flipped, indices).transpose( - 1, 2).contiguous() if self.num_point is not None else None - elif target_xyz is not None: - new_xyz = target_xyz.contiguous() - else: - if self.num_point is not None: - indices = self.points_sampler(points_xyz, features) - new_xyz = gather_points(xyz_flipped, - indices).transpose(1, 2).contiguous() - else: - new_xyz = None - - return new_xyz, indices - - def _pool_features(self, features): - """Perform feature aggregation using pooling operation. 
- - Args: - features (torch.Tensor): (B, C, N, K) - Features of locally grouped points before pooling. - - Returns: - torch.Tensor: (B, C, N) - Pooled features aggregating local information. - """ - if self.pool_mod == 'max': - # (B, C, N, 1) - new_features = F.max_pool2d( - features, kernel_size=[1, features.size(3)]) - elif self.pool_mod == 'avg': - # (B, C, N, 1) - new_features = F.avg_pool2d( - features, kernel_size=[1, features.size(3)]) - else: - raise NotImplementedError - - return new_features.squeeze(-1).contiguous() - - def forward( - self, - points_xyz, - features=None, - indices=None, - target_xyz=None, - ): - """forward. - - Args: - points_xyz (Tensor): (B, N, 3) xyz coordinates of the features. - features (Tensor, optional): (B, C, N) features of each point. - Default: None. - indices (Tensor, optional): (B, num_point) Index of the features. - Default: None. - target_xyz (Tensor, optional): (B, M, 3) new coords of the outputs. - Default: None. - - Returns: - Tensor: (B, M, 3) where M is the number of points. - New features xyz. - Tensor: (B, M, sum_k(mlps[k][-1])) where M is the number - of points. New feature descriptors. - Tensor: (B, M) where M is the number of points. - Index of the features. - """ - new_features_list = [] - - # sample points, (B, num_point, 3), (B, num_point) - new_xyz, indices = self._sample_points(points_xyz, features, indices, - target_xyz) - - for i in range(len(self.groupers)): - # grouped_results may contain: - # - grouped_features: (B, C, num_point, nsample) - # - grouped_xyz: (B, 3, num_point, nsample) - # - grouped_idx: (B, num_point, nsample) - grouped_results = self.groupers[i](points_xyz, new_xyz, features) - - # (B, mlp[-1], num_point, nsample) - new_features = self.mlps[i](grouped_results) - - # this is a bit hack because PAConv outputs two values - # we take the first one as feature - if isinstance(self.mlps[i][0], PAConv): - assert isinstance(new_features, tuple) - new_features = new_features[0] - - # (B, mlp[-1], num_point) - new_features = self._pool_features(new_features) - new_features_list.append(new_features) - - return new_xyz, torch.cat(new_features_list, dim=1), indices - - -@SA_MODULES.register_module() -class PointSAModuleMSG(BasePointSAModule): - """Point set abstraction module with multi-scale grouping (MSG) used in - PointNets. - - Args: - num_point (int): Number of points. - radii (list[float]): List of radius in each ball query. - sample_nums (list[int]): Number of samples in each ball query. - mlp_channels (list[list[int]]): Specify of the pointnet before - the global pooling for each scale. - fps_mod (list[str], optional): Type of FPS method, valid mod - ['F-FPS', 'D-FPS', 'FS'], Default: ['D-FPS']. - F-FPS: using feature distances for FPS. - D-FPS: using Euclidean distances of points for FPS. - FS: using F-FPS and D-FPS simultaneously. - fps_sample_range_list (list[int], optional): Range of points to - apply FPS. Default: [-1]. - dilated_group (bool, optional): Whether to use dilated ball query. - Default: False. - norm_cfg (dict, optional): Type of normalization method. - Default: dict(type='BN2d'). - use_xyz (bool, optional): Whether to use xyz. - Default: True. - pool_mod (str, optional): Type of pooling method. - Default: 'max_pool'. - normalize_xyz (bool, optional): Whether to normalize local XYZ - with radius. Default: False. - bias (bool | str, optional): If specified as `auto`, it will be - decided by `norm_cfg`. `bias` will be set as True if - `norm_cfg` is None, otherwise False. Default: 'auto'. 
- """ - - def __init__(self, - num_point, - radii, - sample_nums, - mlp_channels, - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - dilated_group=False, - norm_cfg=dict(type='BN2d'), - use_xyz=True, - pool_mod='max', - normalize_xyz=False, - bias='auto'): - super(PointSAModuleMSG, self).__init__( - num_point=num_point, - radii=radii, - sample_nums=sample_nums, - mlp_channels=mlp_channels, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - dilated_group=dilated_group, - use_xyz=use_xyz, - pool_mod=pool_mod, - normalize_xyz=normalize_xyz) - - for i in range(len(self.mlp_channels)): - mlp_channel = self.mlp_channels[i] - if use_xyz: - mlp_channel[0] += 3 - - mlp = nn.Sequential() - for i in range(len(mlp_channel) - 1): - mlp.add_module( - f'layer{i}', - ConvModule( - mlp_channel[i], - mlp_channel[i + 1], - kernel_size=(1, 1), - stride=(1, 1), - conv_cfg=dict(type='Conv2d'), - norm_cfg=norm_cfg, - bias=bias)) - self.mlps.append(mlp) - - -@SA_MODULES.register_module() -class PointSAModule(PointSAModuleMSG): - """Point set abstraction module with single-scale grouping (SSG) used in - PointNets. - - Args: - mlp_channels (list[int]): Specify of the pointnet before - the global pooling for each scale. - num_point (int, optional): Number of points. - Default: None. - radius (float, optional): Radius to group with. - Default: None. - num_sample (int, optional): Number of samples in each ball query. - Default: None. - norm_cfg (dict, optional): Type of normalization method. - Default: dict(type='BN2d'). - use_xyz (bool, optional): Whether to use xyz. - Default: True. - pool_mod (str, optional): Type of pooling method. - Default: 'max_pool'. - fps_mod (list[str], optional): Type of FPS method, valid mod - ['F-FPS', 'D-FPS', 'FS'], Default: ['D-FPS']. - fps_sample_range_list (list[int], optional): Range of points - to apply FPS. Default: [-1]. - normalize_xyz (bool, optional): Whether to normalize local XYZ - with radius. Default: False. - """ - - def __init__(self, - mlp_channels, - num_point=None, - radius=None, - num_sample=None, - norm_cfg=dict(type='BN2d'), - use_xyz=True, - pool_mod='max', - fps_mod=['D-FPS'], - fps_sample_range_list=[-1], - normalize_xyz=False): - super(PointSAModule, self).__init__( - mlp_channels=[mlp_channels], - num_point=num_point, - radii=[radius], - sample_nums=[num_sample], - norm_cfg=norm_cfg, - use_xyz=use_xyz, - pool_mod=pool_mod, - fps_mod=fps_mod, - fps_sample_range_list=fps_sample_range_list, - normalize_xyz=normalize_xyz) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/sparse_block.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/sparse_block.py deleted file mode 100644 index 03b18e2e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/sparse_block.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import build_conv_layer, build_norm_layer -from torch import nn - -from mmdet.models.backbones.resnet import BasicBlock, Bottleneck -from .spconv import IS_SPCONV2_AVAILABLE - -if IS_SPCONV2_AVAILABLE: - from spconv.pytorch import SparseModule, SparseSequential -else: - from mmcv.ops import SparseModule, SparseSequential - - -def replace_feature(out, new_features): - if 'replace_feature' in out.__dir__(): - # spconv 2.x behaviour - return out.replace_feature(new_features) - else: - out.features = new_features - return out - - -class SparseBottleneck(Bottleneck, SparseModule): - """Sparse bottleneck block for PartA^2. 
- - Bottleneck block implemented with submanifold sparse convolution. - - Args: - inplanes (int): inplanes of block. - planes (int): planes of block. - stride (int, optional): stride of the first block. Default: 1. - downsample (Module, optional): down sample module for block. - conv_cfg (dict, optional): dictionary to construct and config conv - layer. Default: None. - norm_cfg (dict, optional): dictionary to construct and config norm - layer. Default: dict(type='BN'). - """ - - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - downsample=None, - conv_cfg=None, - norm_cfg=None): - - SparseModule.__init__(self) - Bottleneck.__init__( - self, - inplanes, - planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg) - - def forward(self, x): - identity = x.features - - out = self.conv1(x) - out = replace_feature(out, self.bn1(out.features)) - out = replace_feature(out, self.relu(out.features)) - - out = self.conv2(out) - out = replace_feature(out, self.bn2(out.features)) - out = replace_feature(out, self.relu(out.features)) - - out = self.conv3(out) - out = replace_feature(out, self.bn3(out.features)) - - if self.downsample is not None: - identity = self.downsample(x) - - out = replace_feature(out, out.features + identity) - out = replace_feature(out, self.relu(out.features)) - - return out - - -class SparseBasicBlock(BasicBlock, SparseModule): - """Sparse basic block for PartA^2. - - Sparse basic block implemented with submanifold sparse convolution. - - Args: - inplanes (int): inplanes of block. - planes (int): planes of block. - stride (int, optional): stride of the first block. Default: 1. - downsample (Module, optional): down sample module for block. - conv_cfg (dict, optional): dictionary to construct and config conv - layer. Default: None. - norm_cfg (dict, optional): dictionary to construct and config norm - layer. Default: dict(type='BN'). - """ - - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - downsample=None, - conv_cfg=None, - norm_cfg=None): - SparseModule.__init__(self) - BasicBlock.__init__( - self, - inplanes, - planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg) - - def forward(self, x): - identity = x.features - - assert x.features.dim() == 2, f'x.features.dim()={x.features.dim()}' - out = self.conv1(x) - out = replace_feature(out, self.norm1(out.features)) - out = replace_feature(out, self.relu(out.features)) - - out = self.conv2(out) - out = replace_feature(out, self.norm2(out.features)) - - if self.downsample is not None: - identity = self.downsample(x) - - out = replace_feature(out, out.features + identity) - out = replace_feature(out, self.relu(out.features)) - - return out - - -def make_sparse_convmodule(in_channels, - out_channels, - kernel_size, - indice_key, - stride=1, - padding=0, - conv_type='SubMConv3d', - norm_cfg=None, - order=('conv', 'norm', 'act')): - """Make sparse convolution module. - - Args: - in_channels (int): the number of input channels - out_channels (int): the number of out channels - kernel_size (int|tuple(int)): kernel size of convolution - indice_key (str): the indice key used for sparse tensor - stride (int|tuple(int)): the stride of convolution - padding (int or list[int]): the padding number of input - conv_type (str): sparse conv type in spconv - norm_cfg (dict[str]): config of normalization layer - order (tuple[str]): The order of conv/norm/activation layers. It is a - sequence of "conv", "norm" and "act". 
Common examples are - ("conv", "norm", "act") and ("act", "conv", "norm"). - - Returns: - spconv.SparseSequential: sparse convolution module. - """ - assert isinstance(order, tuple) and len(order) <= 3 - assert set(order) | {'conv', 'norm', 'act'} == {'conv', 'norm', 'act'} - - conv_cfg = dict(type=conv_type, indice_key=indice_key) - - layers = list() - for layer in order: - if layer == 'conv': - if conv_type not in [ - 'SparseInverseConv3d', 'SparseInverseConv2d', - 'SparseInverseConv1d' - ]: - layers.append( - build_conv_layer( - conv_cfg, - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - bias=False)) - else: - layers.append( - build_conv_layer( - conv_cfg, - in_channels, - out_channels, - kernel_size, - bias=False)) - elif layer == 'norm': - layers.append(build_norm_layer(norm_cfg, out_channels)[1]) - elif layer == 'act': - layers.append(nn.ReLU(inplace=True)) - - layers = SparseSequential(*layers) - return layers diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/__init__.py deleted file mode 100644 index 561e5024..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .overwrite_spconv.write_spconv2 import register_spconv2 - -try: - import spconv -except ImportError: - IS_SPCONV2_AVAILABLE = False -else: - if hasattr(spconv, '__version__') and spconv.__version__ >= '2.0.0': - IS_SPCONV2_AVAILABLE = register_spconv2() - else: - IS_SPCONV2_AVAILABLE = False - -__all__ = ['IS_SPCONV2_AVAILABLE'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/overwrite_spconv/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/overwrite_spconv/__init__.py deleted file mode 100644 index 2e93d9ca..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/overwrite_spconv/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .write_spconv2 import register_spconv2 - -__all__ = ['register_spconv2'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/overwrite_spconv/write_spconv2.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/overwrite_spconv/write_spconv2.py deleted file mode 100644 index 237051eb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/ops/spconv/overwrite_spconv/write_spconv2.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import itertools - -from mmcv.cnn.bricks.registry import CONV_LAYERS -from torch.nn.parameter import Parameter - - -def register_spconv2(): - """This func registers spconv2.0 spconv ops to overwrite the default mmcv - spconv ops.""" - try: - from spconv.pytorch import (SparseConv2d, SparseConv3d, SparseConv4d, - SparseConvTranspose2d, - SparseConvTranspose3d, SparseInverseConv2d, - SparseInverseConv3d, SparseModule, - SubMConv2d, SubMConv3d, SubMConv4d) - except ImportError: - return False - else: - CONV_LAYERS._register_module(SparseConv2d, 'SparseConv2d', force=True) - CONV_LAYERS._register_module(SparseConv3d, 'SparseConv3d', force=True) - CONV_LAYERS._register_module(SparseConv4d, 'SparseConv4d', force=True) - - CONV_LAYERS._register_module( - SparseConvTranspose2d, 'SparseConvTranspose2d', force=True) - CONV_LAYERS._register_module( - SparseConvTranspose3d, 'SparseConvTranspose3d', force=True) - - CONV_LAYERS._register_module( - SparseInverseConv2d, 'SparseInverseConv2d', force=True) - CONV_LAYERS._register_module( - SparseInverseConv3d, 'SparseInverseConv3d', force=True) - - CONV_LAYERS._register_module(SubMConv2d, 'SubMConv2d', force=True) - CONV_LAYERS._register_module(SubMConv3d, 'SubMConv3d', force=True) - CONV_LAYERS._register_module(SubMConv4d, 'SubMConv4d', force=True) - SparseModule._load_from_state_dict = _load_from_state_dict - SparseModule._save_to_state_dict = _save_to_state_dict - return True - - -def _save_to_state_dict(self, destination, prefix, keep_vars): - """Rewrite this func to compat the convolutional kernel weights between - spconv 1.x in MMCV and 2.x in spconv2.x. - - Kernel weights in MMCV spconv has shape in (D,H,W,in_channel,out_channel) , - while those in spcon2.x is in (out_channel,D,H,W,in_channel). - """ - for name, param in self._parameters.items(): - if param is not None: - param = param if keep_vars else param.detach() - if name == 'weight': - dims = list(range(1, len(param.shape))) + [0] - param = param.permute(*dims) - destination[prefix + name] = param - for name, buf in self._buffers.items(): - if buf is not None and name not in self._non_persistent_buffers_set: - destination[prefix + name] = buf if keep_vars else buf.detach() - - -def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Rewrite this func to compat the convolutional kernel weights between - spconv 1.x in MMCV and 2.x in spconv2.x. - - Kernel weights in MMCV spconv has shape in (D,H,W,in_channel,out_channel) , - while those in spcon2.x is in (out_channel,D,H,W,in_channel). 
- """ - for hook in self._load_state_dict_pre_hooks.values(): - hook(state_dict, prefix, local_metadata, strict, missing_keys, - unexpected_keys, error_msgs) - - local_name_params = itertools.chain(self._parameters.items(), - self._buffers.items()) - local_state = {k: v.data for k, v in local_name_params if v is not None} - - for name, param in local_state.items(): - key = prefix + name - if key in state_dict: - input_param = state_dict[key] - - # Backward compatibility: loading 1-dim tensor from - # 0.3.* to version 0.4+ - if len(param.shape) == 0 and len(input_param.shape) == 1: - input_param = input_param[0] - dims = [len(input_param.shape) - 1] + list( - range(len(input_param.shape) - 1)) - input_param = input_param.permute(*dims) - if input_param.shape != param.shape: - # local shape should match the one in checkpoint - error_msgs.append( - f'size mismatch for {key}: copying a param with ' - f'shape {key, input_param.shape} from checkpoint,' - f'the shape in current model is {param.shape}.') - continue - - if isinstance(input_param, Parameter): - # backwards compatibility for serialized parameters - input_param = input_param.data - try: - param.copy_(input_param) - except Exception: - error_msgs.append( - f'While copying the parameter named "{key}", whose ' - f'dimensions in the model are {param.size()} and whose ' - f'dimensions in the checkpoint are {input_param.size()}.') - elif strict: - missing_keys.append(key) - - if strict: - for key, input_param in state_dict.items(): - if key.startswith(prefix): - input_name = key[len(prefix):] - input_name = input_name.split( - '.', 1)[0] # get the name of param/buffer/child - if input_name not in self._modules \ - and input_name not in local_state: - unexpected_keys.append(key) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/__init__.py deleted file mode 100644 index 35df6aa2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry, build_from_cfg, print_log - -from .collect_env import collect_env -from .compat_cfg import compat_cfg -from .logger import get_root_logger -from .misc import find_latest_checkpoint -from .setup_env import setup_multi_processes - -__all__ = [ - 'Registry', 'build_from_cfg', 'get_root_logger', 'collect_env', - 'print_log', 'setup_multi_processes', 'find_latest_checkpoint', - 'compat_cfg' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/collect_env.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/collect_env.py deleted file mode 100644 index c10d01a0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/collect_env.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-from mmcv.utils import collect_env as collect_base_env -from mmcv.utils import get_git_hash - -import mmdet -import mmdet3d -import mmseg -# from mmdet3d.ops.spconv import IS_SPCONV2_AVAILABLE - - -def collect_env(): - """Collect the information of the running environments.""" - env_info = collect_base_env() - env_info['MMDetection'] = mmdet.__version__ - env_info['MMSegmentation'] = mmseg.__version__ - env_info['MMDetection3D'] = mmdet3d.__version__ + '+' + get_git_hash()[:7] - # env_info['spconv2.0'] = IS_SPCONV2_AVAILABLE - return env_info - - -if __name__ == '__main__': - for name, val in collect_env().items(): - print(f'{name}: {val}') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/compat_cfg.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/compat_cfg.py deleted file mode 100644 index 1dddf694..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/compat_cfg.py +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -from mmcv import ConfigDict - - -def compat_cfg(cfg): - """This function would modify some filed to keep the compatibility of - config. - - For example, it will move some args which will be deprecated to the correct - fields. - """ - cfg = copy.deepcopy(cfg) - cfg = compat_imgs_per_gpu(cfg) - cfg = compat_loader_args(cfg) - cfg = compat_runner_args(cfg) - return cfg - - -def compat_runner_args(cfg): - if 'runner' not in cfg: - cfg.runner = ConfigDict({ - 'type': 'EpochBasedRunner', - 'max_epochs': cfg.total_epochs - }) - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - else: - if 'total_epochs' in cfg: - assert cfg.total_epochs == cfg.runner.max_epochs - return cfg - - -def compat_imgs_per_gpu(cfg): - cfg = copy.deepcopy(cfg) - if 'imgs_per_gpu' in cfg.data: - warnings.warn('"imgs_per_gpu" is deprecated in MMDet V2.0. ' - 'Please use "samples_per_gpu" instead') - if 'samples_per_gpu' in cfg.data: - warnings.warn( - f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and ' - f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"' - f'={cfg.data.imgs_per_gpu} is used in this experiments') - else: - warnings.warn('Automatically set "samples_per_gpu"="imgs_per_gpu"=' - f'{cfg.data.imgs_per_gpu} in this experiments') - cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu - return cfg - - -def compat_loader_args(cfg): - """Deprecated sample_per_gpu in cfg.data.""" - - cfg = copy.deepcopy(cfg) - if 'train_dataloader' not in cfg.data: - cfg.data['train_dataloader'] = ConfigDict() - if 'val_dataloader' not in cfg.data: - cfg.data['val_dataloader'] = ConfigDict() - if 'test_dataloader' not in cfg.data: - cfg.data['test_dataloader'] = ConfigDict() - - # special process for train_dataloader - if 'samples_per_gpu' in cfg.data: - - samples_per_gpu = cfg.data.pop('samples_per_gpu') - assert 'samples_per_gpu' not in \ - cfg.data.train_dataloader, ('`samples_per_gpu` are set ' - 'in `data` field and ` ' - 'data.train_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.train_dataloader`. 
') - cfg.data.train_dataloader['samples_per_gpu'] = samples_per_gpu - - if 'persistent_workers' in cfg.data: - - persistent_workers = cfg.data.pop('persistent_workers') - assert 'persistent_workers' not in \ - cfg.data.train_dataloader, ('`persistent_workers` are set ' - 'in `data` field and ` ' - 'data.train_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.train_dataloader`. ') - cfg.data.train_dataloader['persistent_workers'] = persistent_workers - - if 'workers_per_gpu' in cfg.data: - - workers_per_gpu = cfg.data.pop('workers_per_gpu') - cfg.data.train_dataloader['workers_per_gpu'] = workers_per_gpu - cfg.data.val_dataloader['workers_per_gpu'] = workers_per_gpu - cfg.data.test_dataloader['workers_per_gpu'] = workers_per_gpu - - # special process for val_dataloader - if 'samples_per_gpu' in cfg.data.val: - # keep default value of `sample_per_gpu` is 1 - assert 'samples_per_gpu' not in \ - cfg.data.val_dataloader, ('`samples_per_gpu` are set ' - 'in `data.val` field and ` ' - 'data.val_dataloader` at ' - 'the same time. ' - 'Please only set it in ' - '`data.val_dataloader`. ') - cfg.data.val_dataloader['samples_per_gpu'] = \ - cfg.data.val.pop('samples_per_gpu') - # special process for val_dataloader - - # in case the test dataset is concatenated - if isinstance(cfg.data.test, dict): - if 'samples_per_gpu' in cfg.data.test: - assert 'samples_per_gpu' not in \ - cfg.data.test_dataloader, ('`samples_per_gpu` are set ' - 'in `data.test` field and ` ' - 'data.test_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.test_dataloader`. ') - - cfg.data.test_dataloader['samples_per_gpu'] = \ - cfg.data.test.pop('samples_per_gpu') - - elif isinstance(cfg.data.test, list): - for ds_cfg in cfg.data.test: - if 'samples_per_gpu' in ds_cfg: - assert 'samples_per_gpu' not in \ - cfg.data.test_dataloader, ('`samples_per_gpu` are set ' - 'in `data.test` field and ` ' - 'data.test_dataloader` at' - ' the same time. ' - 'Please only set it in ' - '`data.test_dataloader`. ') - samples_per_gpu = max( - [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test]) - cfg.data.test_dataloader['samples_per_gpu'] = samples_per_gpu - - return cfg diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/logger.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/logger.py deleted file mode 100644 index 14295d1a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/logger.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -from mmcv.utils import get_logger - - -def get_root_logger(log_file=None, log_level=logging.INFO, name='mmdet3d'): - """Get root logger and add a keyword filter to it. - - The logger will be initialized if it has not been initialized. By default a - StreamHandler will be added. If `log_file` is specified, a FileHandler will - also be added. The name of the root logger is the top-level package name, - e.g., "mmdet3d". - - Args: - log_file (str, optional): File path of log. Defaults to None. - log_level (int, optional): The level of logger. - Defaults to logging.INFO. - name (str, optional): The name of the root logger, also used as a - filter keyword. Defaults to 'mmdet3d'. 
- - Returns: - :obj:`logging.Logger`: The obtained logger - """ - logger = get_logger(name=name, log_file=log_file, log_level=log_level) - - # add a logging filter - logging_filter = logging.Filter(name) - logging_filter.filter = lambda record: record.find(name) != -1 - - return logger diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/misc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/misc.py deleted file mode 100644 index 18dc19fe..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/misc.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import glob -import os.path as osp -import warnings - - -def find_latest_checkpoint(path, suffix='pth'): - """Find the latest checkpoint from the working directory. This function is - copied from mmdetection. - - Args: - path(str): The path to find checkpoints. - suffix(str): File extension. - Defaults to pth. - - Returns: - latest_path(str | None): File path of the latest checkpoint. - References: - .. [1] https://github.com/microsoft/SoftTeacher - /blob/main/ssod/utils/patch.py - """ - if not osp.exists(path): - warnings.warn('The path of checkpoints does not exist.') - return None - if osp.exists(osp.join(path, f'latest.{suffix}')): - return osp.join(path, f'latest.{suffix}') - - checkpoints = glob.glob(osp.join(path, f'*.{suffix}')) - if len(checkpoints) == 0: - warnings.warn('There are no checkpoints in the path.') - return None - latest = -1 - latest_path = None - for checkpoint in checkpoints: - count = int(osp.basename(checkpoint).split('_')[-1].split('.')[0]) - if count > latest: - latest = count - latest_path = checkpoint - return latest_path diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/setup_env.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/setup_env.py deleted file mode 100644 index 72fb1a29..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/utils/setup_env.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import os -import platform -import warnings - -import cv2 -from torch import multiprocessing as mp - - -def setup_multi_processes(cfg): - """Setup multi-processing environment variables.""" - # set multi-process start method as `fork` to speed up the training - if platform.system() != 'Windows': - mp_start_method = cfg.get('mp_start_method', 'fork') - current_method = mp.get_start_method(allow_none=True) - if current_method is not None and current_method != mp_start_method: - warnings.warn( - f'Multi-processing start method `{mp_start_method}` is ' - f'different from the previous setting `{current_method}`.' - f'It will be force set to `{mp_start_method}`. 
You can change ' - f'this behavior by changing `mp_start_method` in your config.') - mp.set_start_method(mp_start_method, force=True) - - # disable opencv multithreading to avoid system being overloaded - opencv_num_threads = cfg.get('opencv_num_threads', 0) - cv2.setNumThreads(opencv_num_threads) - - # setup OMP threads - # This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py # noqa - workers_per_gpu = cfg.data.get('workers_per_gpu', 1) - if 'train_dataloader' in cfg.data: - workers_per_gpu = \ - max(cfg.data.train_dataloader.get('workers_per_gpu', 1), - workers_per_gpu) - - if 'OMP_NUM_THREADS' not in os.environ and workers_per_gpu > 1: - omp_num_threads = 1 - warnings.warn( - f'Setting OMP_NUM_THREADS environment variable for each process ' - f'to be {omp_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['OMP_NUM_THREADS'] = str(omp_num_threads) - - # setup MKL threads - if 'MKL_NUM_THREADS' not in os.environ and workers_per_gpu > 1: - mkl_num_threads = 1 - warnings.warn( - f'Setting MKL_NUM_THREADS environment variable for each process ' - f'to be {mkl_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['MKL_NUM_THREADS'] = str(mkl_num_threads) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/version.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/version.py deleted file mode 100644 index a975e963..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmdet3d/version.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) Open-MMLab. All rights reserved. - -__version__ = '1.0.0rc3' -short_version = __version__ - - -def parse_version_info(version_str): - version_info = [] - for x in version_str.split('.'): - if x.isdigit(): - version_info.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - version_info.append(int(patch_version[0])) - version_info.append(f'rc{patch_version[1]}') - return tuple(version_info) - - -version_info = parse_version_info(__version__) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/__init__.py deleted file mode 100644 index 360abfc8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/__init__.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -from packaging.version import parse - -from .version import __version__, version_info - -MMCV_MIN = '1.3.13' -MMCV_MAX = '1.6.0' - - -def digit_version(version_str: str, length: int = 4): - """Convert a version string into a tuple of integers. - - This method is usually used for comparing two versions. For pre-release - versions: alpha < beta < rc. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. - - Returns: - tuple[int]: The version info in digits (integers). 
- """ - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - mapping = {'a': -3, 'b': -2, 'rc': -1} - val = -4 - # version.pre can be None - if version.pre: - if version.pre[0] not in mapping: - warnings.warn(f'unknown prerelease version {version.pre[0]}, ' - 'version checking may go wrong') - else: - val = mapping[version.pre[0]] - release.extend([val, version.pre[-1]]) - else: - release.extend([val, 0]) - - elif version.is_postrelease: - release.extend([1, version.post]) - else: - release.extend([0, 0]) - return tuple(release) - - -mmcv_min_version = digit_version(MMCV_MIN) -mmcv_max_version = digit_version(MMCV_MAX) -mmcv_version = digit_version(mmcv.__version__) - - -assert (mmcv_min_version <= mmcv_version <= mmcv_max_version), \ - f'MMCV=={mmcv.__version__} is used but incompatible. ' \ - f'Please install mmcv>={mmcv_min_version}, <={mmcv_max_version}.' - -__all__ = ['__version__', 'version_info', 'digit_version'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/__init__.py deleted file mode 100644 index c6881805..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .inference import inference_segmentor, init_segmentor, show_result_pyplot -from .test import multi_gpu_test, single_gpu_test -from .train import (get_root_logger, init_random_seed, set_random_seed, - train_segmentor) - -__all__ = [ - 'get_root_logger', 'set_random_seed', 'train_segmentor', 'init_segmentor', - 'inference_segmentor', 'multi_gpu_test', 'single_gpu_test', - 'show_result_pyplot', 'init_random_seed' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/inference.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/inference.py deleted file mode 100644 index 90694380..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/inference.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import matplotlib.pyplot as plt -import mmcv -import torch -from mmcv.parallel import collate, scatter -from mmcv.runner import load_checkpoint - -from mmseg.datasets.pipelines import Compose -from mmseg.models import build_segmentor - - -def init_segmentor(config, checkpoint=None, device='cuda:0'): - """Initialize a segmentor from config file. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str, optional) CPU/CUDA device option. Default 'cuda:0'. - Use 'cpu' for loading model on CPU. - Returns: - nn.Module: The constructed segmentor. 
- """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - 'but got {}'.format(type(config))) - config.model.pretrained = None - config.model.train_cfg = None - model = build_segmentor(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - model.CLASSES = checkpoint['meta']['CLASSES'] - model.PALETTE = checkpoint['meta']['PALETTE'] - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -class LoadImage: - """A simple pipeline to load image.""" - - def __call__(self, results): - """Call function to load images into results. - - Args: - results (dict): A result dict contains the file name - of the image to be read. - - Returns: - dict: ``results`` will be returned containing loaded image. - """ - - if isinstance(results['img'], str): - results['filename'] = results['img'] - results['ori_filename'] = results['img'] - else: - results['filename'] = None - results['ori_filename'] = None - img = mmcv.imread(results['img']) - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - return results - - -def inference_segmentor(model, img): - """Inference image(s) with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - imgs (str/ndarray or list[str/ndarray]): Either image files or loaded - images. - - Returns: - (list[Tensor]): The segmentation result. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = [LoadImage()] + cfg.data.test.pipeline[1:] - test_pipeline = Compose(test_pipeline) - # prepare data - data = dict(img=img) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - data['img_metas'] = [i.data[0] for i in data['img_metas']] - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result - - -def show_result_pyplot(model, - img, - result, - palette=None, - fig_size=(15, 10), - opacity=0.5, - title='', - block=True): - """Visualize the segmentation results on the image. - - Args: - model (nn.Module): The loaded segmentor. - img (str or np.ndarray): Image filename or loaded image. - result (list): The segmentation result. - palette (list[list[int]]] | None): The palette of segmentation - map. If None is given, random palette will be generated. - Default: None - fig_size (tuple): Figure size of the pyplot figure. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - title (str): The title of pyplot figure. - Default is ''. - block (bool): Whether to block the pyplot figure. - Default is True. 
- """ - if hasattr(model, 'module'): - model = model.module - img = model.show_result( - img, result, palette=palette, show=False, opacity=opacity) - plt.figure(figsize=fig_size) - plt.imshow(mmcv.bgr2rgb(img)) - plt.title(title) - plt.tight_layout() - plt.show(block=block) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/test.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/test.py deleted file mode 100644 index cc4fcc97..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/test.py +++ /dev/null @@ -1,233 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import tempfile -import warnings - -import mmcv -import numpy as np -import torch -from mmcv.engine import collect_results_cpu, collect_results_gpu -from mmcv.image import tensor2imgs -from mmcv.runner import get_dist_info - - -def np2tmp(array, temp_file_name=None, tmpdir=None): - """Save ndarray to local numpy file. - - Args: - array (ndarray): Ndarray to save. - temp_file_name (str): Numpy file name. If 'temp_file_name=None', this - function will generate a file name with tempfile.NamedTemporaryFile - to save ndarray. Default: None. - tmpdir (str): Temporary directory to save Ndarray files. Default: None. - Returns: - str: The numpy file name. - """ - - if temp_file_name is None: - temp_file_name = tempfile.NamedTemporaryFile( - suffix='.npy', delete=False, dir=tmpdir).name - np.save(temp_file_name, array) - return temp_file_name - - -def single_gpu_test(model, - data_loader, - show=False, - out_dir=None, - efficient_test=False, - opacity=0.5, - pre_eval=False, - format_only=False, - format_args={}): - """Test with single GPU by progressive mode. - - Args: - model (nn.Module): Model to be tested. - data_loader (utils.data.Dataloader): Pytorch data loader. - show (bool): Whether show results during inference. Default: False. - out_dir (str, optional): If specified, the results will be dumped into - the directory to save output results. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Mutually exclusive with - pre_eval and format_results. Default: False. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - pre_eval (bool): Use dataset.pre_eval() function to generate - pre_results for metric evaluation. Mutually exclusive with - efficient_test and format_results. Default: False. - format_only (bool): Only format result for results commit. - Mutually exclusive with pre_eval and efficient_test. - Default: False. - format_args (dict): The args for format_results. Default: {}. - Returns: - list: list of evaluation pre-results or list of save file names. - """ - if efficient_test: - warnings.warn( - 'DeprecationWarning: ``efficient_test`` will be deprecated, the ' - 'evaluation is CPU memory friendly with pre_eval=True') - mmcv.mkdir_or_exist('.efficient_test') - # when none of them is set true, return segmentation results as - # a list of np.array. - assert [efficient_test, pre_eval, format_only].count(True) <= 1, \ - '``efficient_test``, ``pre_eval`` and ``format_only`` are mutually ' \ - 'exclusive, only one of them could be true .' - - model.eval() - results = [] - dataset = data_loader.dataset - prog_bar = mmcv.ProgressBar(len(dataset)) - # The pipeline about how the data_loader retrieval samples from dataset: - # sampler -> batch_sampler -> indices - # The indices are passed to dataset_fetcher to get data from dataset. 
- # data_fetcher -> collate_fn(dataset[index]) -> data_sample - # we use batch_sampler to get correct data idx - loader_indices = data_loader.batch_sampler - - for batch_indices, data in zip(loader_indices, data_loader): - with torch.no_grad(): - result = model(return_loss=False, **data) - - if show or out_dir: - img_tensor = data['img'][0] - img_metas = data['img_metas'][0].data[0] - imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg']) - assert len(imgs) == len(img_metas) - - for img, img_meta in zip(imgs, img_metas): - h, w, _ = img_meta['img_shape'] - img_show = img[:h, :w, :] - - ori_h, ori_w = img_meta['ori_shape'][:-1] - img_show = mmcv.imresize(img_show, (ori_w, ori_h)) - - if out_dir: - out_file = osp.join(out_dir, img_meta['ori_filename']) - else: - out_file = None - - model.module.show_result( - img_show, - result, - palette=dataset.PALETTE, - show=show, - out_file=out_file, - opacity=opacity) - - if efficient_test: - result = [np2tmp(_, tmpdir='.efficient_test') for _ in result] - - if format_only: - result = dataset.format_results( - result, indices=batch_indices, **format_args) - if pre_eval: - # TODO: adapt samples_per_gpu > 1. - # only samples_per_gpu=1 valid now - result = dataset.pre_eval(result, indices=batch_indices) - results.extend(result) - else: - results.extend(result) - - batch_size = len(result) - for _ in range(batch_size): - prog_bar.update() - - return results - - -def multi_gpu_test(model, - data_loader, - tmpdir=None, - gpu_collect=False, - efficient_test=False, - pre_eval=False, - format_only=False, - format_args={}): - """Test model with multiple gpus by progressive mode. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting 'gpu_collect=True' - it encodes results to gpu tensors and use gpu communication for results - collection. On cpu mode it saves the results on different gpus to 'tmpdir' - and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. - data_loader (utils.data.Dataloader): Pytorch data loader. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. The same path is used for efficient - test. Default: None. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - Default: False. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Mutually exclusive with - pre_eval and format_results. Default: False. - pre_eval (bool): Use dataset.pre_eval() function to generate - pre_results for metric evaluation. Mutually exclusive with - efficient_test and format_results. Default: False. - format_only (bool): Only format result for results commit. - Mutually exclusive with pre_eval and efficient_test. - Default: False. - format_args (dict): The args for format_results. Default: {}. - - Returns: - list: list of evaluation pre-results or list of save file names. - """ - if efficient_test: - warnings.warn( - 'DeprecationWarning: ``efficient_test`` will be deprecated, the ' - 'evaluation is CPU memory friendly with pre_eval=True') - mmcv.mkdir_or_exist('.efficient_test') - # when none of them is set true, return segmentation results as - # a list of np.array. - assert [efficient_test, pre_eval, format_only].count(True) <= 1, \ - '``efficient_test``, ``pre_eval`` and ``format_only`` are mutually ' \ - 'exclusive, only one of them could be true .' 
- - model.eval() - results = [] - dataset = data_loader.dataset - # The pipeline about how the data_loader retrieval samples from dataset: - # sampler -> batch_sampler -> indices - # The indices are passed to dataset_fetcher to get data from dataset. - # data_fetcher -> collate_fn(dataset[index]) -> data_sample - # we use batch_sampler to get correct data idx - - # batch_sampler based on DistributedSampler, the indices only point to data - # samples of related machine. - loader_indices = data_loader.batch_sampler - - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - - for batch_indices, data in zip(loader_indices, data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - - if efficient_test: - result = [np2tmp(_, tmpdir='.efficient_test') for _ in result] - - if format_only: - result = dataset.format_results( - result, indices=batch_indices, **format_args) - if pre_eval: - # TODO: adapt samples_per_gpu > 1. - # only samples_per_gpu=1 valid now - result = dataset.pre_eval(result, indices=batch_indices) - - results.extend(result) - - if rank == 0: - batch_size = len(result) * world_size - for _ in range(batch_size): - prog_bar.update() - - # collect results from all ranks - if gpu_collect: - results = collect_results_gpu(results, len(dataset)) - else: - results = collect_results_cpu(results, len(dataset), tmpdir) - return results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/train.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/train.py deleted file mode 100644 index 7e1096bc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/apis/train.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import random -import warnings - -import numpy as np -import torch -import torch.distributed as dist -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import HOOKS, build_optimizer, build_runner, get_dist_info -from mmcv.utils import build_from_cfg - -from mmseg.core import DistEvalHook, EvalHook -from mmseg.datasets import build_dataloader, build_dataset -from mmseg.utils import get_root_logger - - -def init_random_seed(seed=None, device='cuda'): - """Initialize random seed. - - If the seed is not set, the seed will be automatically randomized, - and then broadcast to all processes to prevent some potential bugs. - Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - Returns: - int: Seed to be used. - """ - if seed is not None: - return seed - - # Make sure all ranks share the same random seed to prevent - # some potential bugs. Please refer to - # https://github.com/open-mmlab/mmdetection/issues/6339 - rank, world_size = get_dist_info() - seed = np.random.randint(2**31) - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. 
- """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def train_segmentor(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - """Launch segmentor training.""" - logger = get_root_logger(cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - data_loaders = [ - build_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # cfg.gpus will be ignored if distributed - len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - drop_last=True) for ds in dataset - ] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - # build runner - optimizer = build_optimizer(model, cfg.optimizer) - - if cfg.get('runner') is None: - cfg.runner = {'type': 'IterBasedRunner', 'max_iters': cfg.total_iters} - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - batch_processor=None, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # register hooks - runner.register_training_hooks(cfg.lr_config, cfg.optimizer_config, - cfg.checkpoint_config, cfg.log_config, - cfg.get('momentum_config', None)) - - # an ugly walkaround to make the .log and .log.json filenames the same - runner.timestamp = timestamp - - # register eval hooks - if validate: - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_dataloader( - val_dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' - eval_hook = DistEvalHook if distributed else EvalHook - # In this PR (https://github.com/open-mmlab/mmcv/pull/1193), the - # priority of IterTimerHook has been modified from 'NORMAL' to 'LOW'. 
- runner.register_hook( - eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - # user-defined hooks - if cfg.get('custom_hooks', None): - custom_hooks = cfg.custom_hooks - assert isinstance(custom_hooks, list), \ - f'custom_hooks expect list type, but got {type(custom_hooks)}' - for hook_cfg in cfg.custom_hooks: - assert isinstance(hook_cfg, dict), \ - 'Each item in custom_hooks expects dict type, but got ' \ - f'{type(hook_cfg)}' - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = build_from_cfg(hook_cfg, HOOKS) - runner.register_hook(hook, priority=priority) - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/__init__.py deleted file mode 100644 index 40227861..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .evaluation import * # noqa: F401, F403 -from .seg import * # noqa: F401, F403 -from .utils import * # noqa: F401, F403 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/__init__.py deleted file mode 100644 index 3d16d17e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .class_names import get_classes, get_palette -from .eval_hooks import DistEvalHook, EvalHook -from .metrics import (eval_metrics, intersect_and_union, mean_dice, - mean_fscore, mean_iou, pre_eval_to_metrics) - -__all__ = [ - 'EvalHook', 'DistEvalHook', 'mean_dice', 'mean_iou', 'mean_fscore', - 'eval_metrics', 'get_classes', 'get_palette', 'pre_eval_to_metrics', - 'intersect_and_union' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/class_names.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/class_names.py deleted file mode 100644 index 4527fbaf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/class_names.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import mmcv - - -def cityscapes_classes(): - """Cityscapes class names for external use.""" - return [ - 'road', 'sidewalk', 'building', 'wall', 'fence', 'pole', - 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle' - ] - - -def ade_classes(): - """ADE20K class names for external use.""" - return [ - 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', - 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', - 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', - 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', - 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', - 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', - 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', - 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', - 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', - 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', - 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', - 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', - 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', - 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', - 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', - 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', - 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', - 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', - 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', - 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', - 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', - 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', - 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', - 'clock', 'flag' - ] - - -def voc_classes(): - """Pascal VOC class names for external use.""" - return [ - 'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', - 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', - 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', - 'tvmonitor' - ] - - -def cityscapes_palette(): - """Cityscapes palette for external use.""" - return [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], - [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], - [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], - [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], [0, 80, 100], - [0, 0, 230], [119, 11, 32]] - - -def ade_palette(): - """ADE20K palette for external use.""" - return [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 
255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], - [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], - [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - -def voc_palette(): - """Pascal VOC palette for external use.""" - return [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128], - [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], - [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128], - [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0], - [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]] - - -dataset_aliases = { - 'cityscapes': ['cityscapes'], - 'ade': ['ade', 'ade20k'], - 'voc': ['voc', 'pascal_voc', 'voc12', 'voc12aug'] -} - - -def get_classes(dataset): - """Get class names of a dataset.""" - alias2name = {} - for name, aliases in dataset_aliases.items(): - for alias in aliases: - alias2name[alias] = name - - if mmcv.is_str(dataset): - if dataset in alias2name: - labels = eval(alias2name[dataset] + '_classes()') - else: - raise ValueError(f'Unrecognized dataset: {dataset}') - else: - raise TypeError(f'dataset must a str, but got {type(dataset)}') - return labels - - -def get_palette(dataset): - """Get class palette (RGB) of a dataset.""" - alias2name = {} - for name, aliases in dataset_aliases.items(): - for alias in aliases: - alias2name[alias] = name - - if mmcv.is_str(dataset): - if dataset in alias2name: - labels = eval(alias2name[dataset] + '_palette()') - else: - raise ValueError(f'Unrecognized dataset: {dataset}') - else: - raise TypeError(f'dataset must a str, but got {type(dataset)}') - return labels diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/eval_hooks.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/eval_hooks.py deleted file mode 100644 index 952db3b0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/eval_hooks.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -import warnings - -import torch.distributed as dist -from mmcv.runner import DistEvalHook as _DistEvalHook -from mmcv.runner import EvalHook as _EvalHook -from torch.nn.modules.batchnorm import _BatchNorm - - -class EvalHook(_EvalHook): - """Single GPU EvalHook, with efficient test support. - - Args: - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: False. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - pre_eval (bool): Whether to use progressive mode to evaluate model. - Default: False. - Returns: - list: The prediction results. - """ - - greater_keys = ['mIoU', 'mAcc', 'aAcc'] - - def __init__(self, - *args, - by_epoch=False, - efficient_test=False, - pre_eval=False, - **kwargs): - super().__init__(*args, by_epoch=by_epoch, **kwargs) - self.pre_eval = pre_eval - if efficient_test: - warnings.warn( - 'DeprecationWarning: ``efficient_test`` for evaluation hook ' - 'is deprecated, the evaluation hook is CPU memory friendly ' - 'with ``pre_eval=True`` as argument for ``single_gpu_test()`` ' - 'function') - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - if not self._should_evaluate(runner): - return - - from mmseg.apis import single_gpu_test - results = single_gpu_test( - runner.model, self.dataloader, show=False, pre_eval=self.pre_eval) - runner.log_buffer.clear() - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - if self.save_best: - self._save_ckpt(runner, key_score) - - -class DistEvalHook(_DistEvalHook): - """Distributed EvalHook, with efficient test support. - - Args: - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: False. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - pre_eval (bool): Whether to use progressive mode to evaluate model. - Default: False. - Returns: - list: The prediction results. - """ - - greater_keys = ['mIoU', 'mAcc', 'aAcc'] - - def __init__(self, - *args, - by_epoch=False, - efficient_test=False, - pre_eval=False, - **kwargs): - super().__init__(*args, by_epoch=by_epoch, **kwargs) - self.pre_eval = pre_eval - if efficient_test: - warnings.warn( - 'DeprecationWarning: ``efficient_test`` for evaluation hook ' - 'is deprecated, the evaluation hook is CPU memory friendly ' - 'with ``pre_eval=True`` as argument for ``multi_gpu_test()`` ' - 'function') - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - # Synchronization of BatchNorm's buffer (running_mean - # and running_var) is not supported in the DDP of pytorch, - # which may cause the inconsistent performance of models in - # different ranks, so we broadcast BatchNorm's buffers - # of rank 0 to other ranks to avoid this. 
- if self.broadcast_bn_buffer: - model = runner.model - for name, module in model.named_modules(): - if isinstance(module, - _BatchNorm) and module.track_running_stats: - dist.broadcast(module.running_var, 0) - dist.broadcast(module.running_mean, 0) - - if not self._should_evaluate(runner): - return - - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - - from mmseg.apis import multi_gpu_test - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect, - pre_eval=self.pre_eval) - - runner.log_buffer.clear() - - if runner.rank == 0: - print('\n') - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - - if self.save_best: - self._save_ckpt(runner, key_score) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/metrics.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/metrics.py deleted file mode 100644 index a1c0908e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/evaluation/metrics.py +++ /dev/null @@ -1,395 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections import OrderedDict - -import mmcv -import numpy as np -import torch - - -def f_score(precision, recall, beta=1): - """calculate the f-score value. - - Args: - precision (float | torch.Tensor): The precision value. - recall (float | torch.Tensor): The recall value. - beta (int): Determines the weight of recall in the combined score. - Default: False. - - Returns: - [torch.tensor]: The f-score value. - """ - score = (1 + beta**2) * (precision * recall) / ( - (beta**2 * precision) + recall) - return score - - -def intersect_and_union(pred_label, - label, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate intersection and Union. - - Args: - pred_label (ndarray | str): Prediction segmentation map - or predict result filename. - label (ndarray | str): Ground truth segmentation map - or label filename. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. The parameter will - work only when label is str. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. The parameter will - work only when label is str. Default: False. - - Returns: - torch.Tensor: The intersection of prediction and ground truth - histogram on all classes. - torch.Tensor: The union of prediction and ground truth histogram on - all classes. - torch.Tensor: The prediction histogram on all classes. - torch.Tensor: The ground truth histogram on all classes. 
- """ - - if isinstance(pred_label, str): - pred_label = torch.from_numpy(np.load(pred_label)) - else: - pred_label = torch.from_numpy((pred_label)) - - if isinstance(label, str): - label = torch.from_numpy( - mmcv.imread(label, flag='unchanged', backend='pillow')) - else: - label = torch.from_numpy(label) - - if label_map is not None: - for old_id, new_id in label_map.items(): - label[label == old_id] = new_id - if reduce_zero_label: - label[label == 0] = 255 - label = label - 1 - label[label == 254] = 255 - - mask = (label != ignore_index) - pred_label = pred_label[mask] - label = label[mask] - - intersect = pred_label[pred_label == label] - area_intersect = torch.histc( - intersect.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_pred_label = torch.histc( - pred_label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_label = torch.histc( - label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_union = area_pred_label + area_label - area_intersect - return area_intersect, area_union, area_pred_label, area_label - - -def total_intersect_and_union(results, - gt_seg_maps, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate Total Intersection and Union. - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str] | Iterables): list of ground - truth segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - - Returns: - ndarray: The intersection of prediction and ground truth histogram - on all classes. - ndarray: The union of prediction and ground truth histogram on all - classes. - ndarray: The prediction histogram on all classes. - ndarray: The ground truth histogram on all classes. - """ - total_area_intersect = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_union = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_pred_label = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_label = torch.zeros((num_classes, ), dtype=torch.float64) - for result, gt_seg_map in zip(results, gt_seg_maps): - area_intersect, area_union, area_pred_label, area_label = \ - intersect_and_union( - result, gt_seg_map, num_classes, ignore_index, - label_map, reduce_zero_label) - total_area_intersect += area_intersect - total_area_union += area_union - total_area_pred_label += area_pred_label - total_area_label += area_label - return total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label - - -def mean_iou(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). 
- reduce_zero_label (bool): Whether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category IoU, shape (num_classes, ). - """ - iou_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mIoU'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return iou_result - - -def mean_dice(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Dice (mDice) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: Default metrics. - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category dice, shape (num_classes, ). - """ - - dice_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mDice'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return dice_result - - -def mean_fscore(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - beta (int): Determines the weight of recall in the combined score. - Default: False. - - - Returns: - dict[str, float | ndarray]: Default metrics. - float: Overall accuracy on all images. - ndarray: Per category recall, shape (num_classes, ). - ndarray: Per category precision, shape (num_classes, ). - ndarray: Per category f-score, shape (num_classes, ). 
- """ - fscore_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mFscore'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label, - beta=beta) - return fscore_result - - -def eval_metrics(results, - gt_seg_maps, - num_classes, - ignore_index, - metrics=['mIoU'], - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate evaluation metrics - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str] | Iterables): list of ground - truth segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - Returns: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category evaluation metrics, shape (num_classes, ). - """ - - total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label = total_intersect_and_union( - results, gt_seg_maps, num_classes, ignore_index, label_map, - reduce_zero_label) - ret_metrics = total_area_to_metrics(total_area_intersect, total_area_union, - total_area_pred_label, - total_area_label, metrics, nan_to_num, - beta) - - return ret_metrics - - -def pre_eval_to_metrics(pre_eval_results, - metrics=['mIoU'], - nan_to_num=None, - beta=1): - """Convert pre-eval results to metrics. - - Args: - pre_eval_results (list[tuple[torch.Tensor]]): per image eval results - for computing evaluation metric - metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - Returns: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category evaluation metrics, shape (num_classes, ). - """ - - # convert list of tuples to tuple of lists, e.g. - # [(A_1, B_1, C_1, D_1), ..., (A_n, B_n, C_n, D_n)] to - # ([A_1, ..., A_n], ..., [D_1, ..., D_n]) - pre_eval_results = tuple(zip(*pre_eval_results)) - assert len(pre_eval_results) == 4 - - total_area_intersect = sum(pre_eval_results[0]) - total_area_union = sum(pre_eval_results[1]) - total_area_pred_label = sum(pre_eval_results[2]) - total_area_label = sum(pre_eval_results[3]) - - ret_metrics = total_area_to_metrics(total_area_intersect, total_area_union, - total_area_pred_label, - total_area_label, metrics, nan_to_num, - beta) - - return ret_metrics - - -def total_area_to_metrics(total_area_intersect, - total_area_union, - total_area_pred_label, - total_area_label, - metrics=['mIoU'], - nan_to_num=None, - beta=1): - """Calculate evaluation metrics - Args: - total_area_intersect (ndarray): The intersection of prediction and - ground truth histogram on all classes. - total_area_union (ndarray): The union of prediction and ground truth - histogram on all classes. - total_area_pred_label (ndarray): The prediction histogram on all - classes. 
- total_area_label (ndarray): The ground truth histogram on all classes. - metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - Returns: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category evaluation metrics, shape (num_classes, ). - """ - if isinstance(metrics, str): - metrics = [metrics] - allowed_metrics = ['mIoU', 'mDice', 'mFscore'] - if not set(metrics).issubset(set(allowed_metrics)): - raise KeyError('metrics {} is not supported'.format(metrics)) - - all_acc = total_area_intersect.sum() / total_area_label.sum() - ret_metrics = OrderedDict({'aAcc': all_acc}) - for metric in metrics: - if metric == 'mIoU': - iou = total_area_intersect / total_area_union - acc = total_area_intersect / total_area_label - ret_metrics['IoU'] = iou - ret_metrics['Acc'] = acc - elif metric == 'mDice': - dice = 2 * total_area_intersect / ( - total_area_pred_label + total_area_label) - acc = total_area_intersect / total_area_label - ret_metrics['Dice'] = dice - ret_metrics['Acc'] = acc - elif metric == 'mFscore': - precision = total_area_intersect / total_area_pred_label - recall = total_area_intersect / total_area_label - f_value = torch.tensor( - [f_score(x[0], x[1], beta) for x in zip(precision, recall)]) - ret_metrics['Fscore'] = f_value - ret_metrics['Precision'] = precision - ret_metrics['Recall'] = recall - - ret_metrics = { - metric: value.numpy() - for metric, value in ret_metrics.items() - } - if nan_to_num is not None: - ret_metrics = OrderedDict({ - metric: np.nan_to_num(metric_value, nan=nan_to_num) - for metric, metric_value in ret_metrics.items() - }) - return ret_metrics diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/__init__.py deleted file mode 100644 index 5206b96b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import build_pixel_sampler -from .sampler import BasePixelSampler, OHEMPixelSampler - -__all__ = ['build_pixel_sampler', 'BasePixelSampler', 'OHEMPixelSampler'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/builder.py deleted file mode 100644 index 1cecd347..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/builder.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry, build_from_cfg - -PIXEL_SAMPLERS = Registry('pixel sampler') - - -def build_pixel_sampler(cfg, **default_args): - """Build pixel sampler for segmentation map.""" - return build_from_cfg(cfg, PIXEL_SAMPLERS, default_args) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/__init__.py deleted file mode 100644 index 5a764856..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base_pixel_sampler import BasePixelSampler -from .ohem_pixel_sampler import OHEMPixelSampler - -__all__ = ['BasePixelSampler', 'OHEMPixelSampler'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/base_pixel_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/base_pixel_sampler.py deleted file mode 100644 index 03672cd4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/base_pixel_sampler.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BasePixelSampler(metaclass=ABCMeta): - """Base class of pixel sampler.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def sample(self, seg_logit, seg_label): - """Placeholder for sample function.""" diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/ohem_pixel_sampler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/ohem_pixel_sampler.py deleted file mode 100644 index 833a2876..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/seg/sampler/ohem_pixel_sampler.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import PIXEL_SAMPLERS -from .base_pixel_sampler import BasePixelSampler - - -@PIXEL_SAMPLERS.register_module() -class OHEMPixelSampler(BasePixelSampler): - """Online Hard Example Mining Sampler for segmentation. - - Args: - context (nn.Module): The context of sampler, subclass of - :obj:`BaseDecodeHead`. - thresh (float, optional): The threshold for hard example selection. - Below which, are prediction with low confidence. If not - specified, the hard examples will be pixels of top ``min_kept`` - loss. Default: None. - min_kept (int, optional): The minimum number of predictions to keep. - Default: 100000. - """ - - def __init__(self, context, thresh=None, min_kept=100000): - super(OHEMPixelSampler, self).__init__() - self.context = context - assert min_kept > 1 - self.thresh = thresh - self.min_kept = min_kept - - def sample(self, seg_logit, seg_label): - """Sample pixels that have high loss or with low prediction confidence. - - Args: - seg_logit (torch.Tensor): segmentation logits, shape (N, C, H, W) - seg_label (torch.Tensor): segmentation label, shape (N, 1, H, W) - - Returns: - torch.Tensor: segmentation weight, shape (N, H, W) - """ - with torch.no_grad(): - assert seg_logit.shape[2:] == seg_label.shape[2:] - assert seg_label.shape[1] == 1 - seg_label = seg_label.squeeze(1).long() - batch_kept = self.min_kept * seg_label.size(0) - valid_mask = seg_label != self.context.ignore_index - seg_weight = seg_logit.new_zeros(size=seg_label.size()) - valid_seg_weight = seg_weight[valid_mask] - if self.thresh is not None: - seg_prob = F.softmax(seg_logit, dim=1) - - tmp_seg_label = seg_label.clone().unsqueeze(1) - tmp_seg_label[tmp_seg_label == self.context.ignore_index] = 0 - seg_prob = seg_prob.gather(1, tmp_seg_label).squeeze(1) - sort_prob, sort_indices = seg_prob[valid_mask].sort() - - if sort_prob.numel() > 0: - min_threshold = sort_prob[min(batch_kept, - sort_prob.numel() - 1)] - else: - min_threshold = 0.0 - threshold = max(min_threshold, self.thresh) - valid_seg_weight[seg_prob[valid_mask] < threshold] = 1. 
- else: - if not isinstance(self.context.loss_decode, nn.ModuleList): - losses_decode = [self.context.loss_decode] - else: - losses_decode = self.context.loss_decode - losses = 0.0 - for loss_module in losses_decode: - losses += loss_module( - seg_logit, - seg_label, - weight=None, - ignore_index=self.context.ignore_index, - reduction_override='none') - - # faster than topk according to https://github.com/pytorch/pytorch/issues/22812 # noqa - _, sort_indices = losses[valid_mask].sort(descending=True) - valid_seg_weight[sort_indices[:batch_kept]] = 1. - - seg_weight[valid_mask] = valid_seg_weight - - return seg_weight diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/utils/__init__.py deleted file mode 100644 index be9de558..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .misc import add_prefix - -__all__ = ['add_prefix'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/utils/misc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/utils/misc.py deleted file mode 100644 index 282bb8d9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/core/utils/misc.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -def add_prefix(inputs, prefix): - """Add prefix for dict. - - Args: - inputs (dict): The input dict with str keys. - prefix (str): The prefix to add. - - Returns: - - dict: The dict with keys updated with ``prefix``. - """ - - outputs = dict() - for name, value in inputs.items(): - outputs[f'{prefix}.{name}'] = value - - return outputs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/__init__.py deleted file mode 100644 index c115ab79..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .ade import ADE20KDataset -from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset -from .chase_db1 import ChaseDB1Dataset -from .cityscapes import CityscapesDataset -from .coco_stuff import COCOStuffDataset -from .custom import CustomDataset -from .dark_zurich import DarkZurichDataset -from .dataset_wrappers import ConcatDataset, RepeatDataset -from .drive import DRIVEDataset -from .hrf import HRFDataset -from .loveda import LoveDADataset -from .night_driving import NightDrivingDataset -from .pascal_context import PascalContextDataset, PascalContextDataset59 -from .stare import STAREDataset -from .voc import PascalVOCDataset - -__all__ = [ - 'CustomDataset', 'build_dataloader', 'ConcatDataset', 'RepeatDataset', - 'DATASETS', 'build_dataset', 'PIPELINES', 'CityscapesDataset', - 'PascalVOCDataset', 'ADE20KDataset', 'PascalContextDataset', - 'PascalContextDataset59', 'ChaseDB1Dataset', 'DRIVEDataset', 'HRFDataset', - 'STAREDataset', 'DarkZurichDataset', 'NightDrivingDataset', - 'COCOStuffDataset', 'LoveDADataset' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/ade.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/ade.py deleted file mode 100644 index db94cebd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/ade.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import os.path as osp - -import mmcv -import numpy as np -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class ADE20KDataset(CustomDataset): - """ADE20K dataset. - - In segmentation map annotation for ADE20K, 0 stands for background, which - is not included in 150 categories. ``reduce_zero_label`` is fixed to True. - The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is fixed to - '.png'. - """ - CLASSES = ( - 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', - 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', - 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', - 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', - 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', - 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', - 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', - 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', - 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', - 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', - 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', - 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', - 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', - 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', - 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', - 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', - 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', - 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', - 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', - 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', - 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', - 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', - 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', - 'clock', 'flag') - - PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 
255], - [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], - [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - def __init__(self, **kwargs): - super(ADE20KDataset, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - reduce_zero_label=True, - **kwargs) - - def results2img(self, results, imgfile_prefix, to_label_id, indices=None): - """Write the segmentation results to images. - - Args: - results (list[ndarray]): Testing results of the - dataset. - imgfile_prefix (str): The filename prefix of the png files. - If the prefix is "somepath/xxx", - the png files will be named "somepath/xxx.png". - to_label_id (bool): whether convert output to label_id for - submission. - indices (list[int], optional): Indices of input results, if not - set, all the indices of the dataset will be used. - Default: None. - - Returns: - list[str: str]: result txt files which contains corresponding - semantic segmentation images. - """ - if indices is None: - indices = list(range(len(self))) - - mmcv.mkdir_or_exist(imgfile_prefix) - result_files = [] - for result, idx in zip(results, indices): - - filename = self.img_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - - png_filename = osp.join(imgfile_prefix, f'{basename}.png') - - # The index range of official requirement is from 0 to 150. - # But the index range of output is from 0 to 149. - # That is because we set reduce_zero_label=True. - result = result + 1 - - output = Image.fromarray(result.astype(np.uint8)) - output.save(png_filename) - result_files.append(png_filename) - - return result_files - - def format_results(self, - results, - imgfile_prefix, - to_label_id=True, - indices=None): - """Format the results into dir (standard format for ade20k evaluation). - - Args: - results (list): Testing results of the dataset. - imgfile_prefix (str | None): The prefix of images files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". - to_label_id (bool): whether convert output to label_id for - submission. Default: False - indices (list[int], optional): Indices of input results, if not - set, all the indices of the dataset will be used. - Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a list containing - the image paths, tmp_dir is the temporal directory created - for saving json/png files when img_prefix is not specified. - """ - - if indices is None: - indices = list(range(len(self))) - - assert isinstance(results, list), 'results must be a list.' - assert isinstance(indices, list), 'indices must be a list.' 
- - result_files = self.results2img(results, imgfile_prefix, to_label_id, - indices) - return result_files diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/builder.py deleted file mode 100644 index 7ab64595..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/builder.py +++ /dev/null @@ -1,182 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import platform -import random -from functools import partial - -import numpy as np -import torch -from mmcv.parallel import collate -from mmcv.runner import get_dist_info -from mmcv.utils import Registry, build_from_cfg, digit_version -from torch.utils.data import DataLoader, DistributedSampler - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - base_soft_limit = rlimit[0] - hard_limit = rlimit[1] - soft_limit = min(max(4096, base_soft_limit), hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def _concat_dataset(cfg, default_args=None): - """Build :obj:`ConcatDataset by.""" - from .dataset_wrappers import ConcatDataset - img_dir = cfg['img_dir'] - ann_dir = cfg.get('ann_dir', None) - split = cfg.get('split', None) - # pop 'separate_eval' since it is not a valid key for common datasets. - separate_eval = cfg.pop('separate_eval', True) - num_img_dir = len(img_dir) if isinstance(img_dir, (list, tuple)) else 1 - if ann_dir is not None: - num_ann_dir = len(ann_dir) if isinstance(ann_dir, (list, tuple)) else 1 - else: - num_ann_dir = 0 - if split is not None: - num_split = len(split) if isinstance(split, (list, tuple)) else 1 - else: - num_split = 0 - if num_img_dir > 1: - assert num_img_dir == num_ann_dir or num_ann_dir == 0 - assert num_img_dir == num_split or num_split == 0 - else: - assert num_split == num_ann_dir or num_ann_dir <= 1 - num_dset = max(num_split, num_img_dir) - - datasets = [] - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - if isinstance(img_dir, (list, tuple)): - data_cfg['img_dir'] = img_dir[i] - if isinstance(ann_dir, (list, tuple)): - data_cfg['ann_dir'] = ann_dir[i] - if isinstance(split, (list, tuple)): - data_cfg['split'] = split[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets, separate_eval) - - -def build_dataset(cfg, default_args=None): - """Build datasets.""" - from .dataset_wrappers import ConcatDataset, RepeatDataset - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif isinstance(cfg.get('img_dir'), (list, tuple)) or isinstance( - cfg.get('split', None), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - drop_last=False, - pin_memory=True, - persistent_workers=True, - **kwargs): - """Build PyTorch DataLoader. - - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (Dataset): A PyTorch dataset. 
- samples_per_gpu (int): Number of training samples on each GPU, i.e., - batch size of each GPU. - workers_per_gpu (int): How many subprocesses to use for data loading - for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed training. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - seed (int | None): Seed to be used. Default: None. - drop_last (bool): Whether to drop the last incomplete batch in epoch. - Default: False - pin_memory (bool): Whether to use pin_memory in DataLoader. - Default: True - persistent_workers (bool): If True, the data loader will not shutdown - the worker processes after a dataset has been consumed once. - This allows to maintain the workers Dataset instances alive. - The argument also has effect in PyTorch>=1.7.0. - Default: True - kwargs: any keyword argument to be used to initialize DataLoader - - Returns: - DataLoader: A PyTorch dataloader. - """ - rank, world_size = get_dist_info() - if dist: - sampler = DistributedSampler( - dataset, world_size, rank, shuffle=shuffle) - shuffle = False - batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - sampler = None - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - if digit_version(torch.__version__) >= digit_version('1.8.0'): - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=pin_memory, - shuffle=shuffle, - worker_init_fn=init_fn, - drop_last=drop_last, - persistent_workers=persistent_workers, - **kwargs) - else: - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=pin_memory, - shuffle=shuffle, - worker_init_fn=init_fn, - drop_last=drop_last, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - """Worker init func for dataloader. - - The seed of each worker equals to num_worker * rank + worker_id + user_seed - - Args: - worker_id (int): Worker id. - num_workers (int): Number of workers. - rank (int): The rank of current process. - seed (int): The random seed to use. - """ - - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/chase_db1.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/chase_db1.py deleted file mode 100644 index 7f14b2da..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/chase_db1.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class ChaseDB1Dataset(CustomDataset): - """Chase_db1 dataset. - - In segmentation map annotation for Chase_db1, 0 stands for background, - which is included in 2 categories. ``reduce_zero_label`` is fixed to False. - The ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '_1stHO.png'. 
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(ChaseDB1Dataset, self).__init__( - img_suffix='.png', - seg_map_suffix='_1stHO.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/cityscapes.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/cityscapes.py deleted file mode 100644 index ed633d00..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/cityscapes.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -import mmcv -import numpy as np -from mmcv.utils import print_log -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class CityscapesDataset(CustomDataset): - """Cityscapes dataset. - - The ``img_suffix`` is fixed to '_leftImg8bit.png' and ``seg_map_suffix`` is - fixed to '_gtFine_labelTrainIds.png' for Cityscapes dataset. - """ - - CLASSES = ('road', 'sidewalk', 'building', 'wall', 'fence', 'pole', - 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle') - - PALETTE = [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], - [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], - [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], - [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], - [0, 80, 100], [0, 0, 230], [119, 11, 32]] - - def __init__(self, - img_suffix='_leftImg8bit.png', - seg_map_suffix='_gtFine_labelTrainIds.png', - **kwargs): - super(CityscapesDataset, self).__init__( - img_suffix=img_suffix, seg_map_suffix=seg_map_suffix, **kwargs) - - @staticmethod - def _convert_to_label_id(result): - """Convert trainId to id for cityscapes.""" - if isinstance(result, str): - result = np.load(result) - import cityscapesscripts.helpers.labels as CSLabels - result_copy = result.copy() - for trainId, label in CSLabels.trainId2label.items(): - result_copy[result == trainId] = label.id - - return result_copy - - def results2img(self, results, imgfile_prefix, to_label_id, indices=None): - """Write the segmentation results to images. - - Args: - results (list[ndarray]): Testing results of the - dataset. - imgfile_prefix (str): The filename prefix of the png files. - If the prefix is "somepath/xxx", - the png files will be named "somepath/xxx.png". - to_label_id (bool): whether convert output to label_id for - submission. - indices (list[int], optional): Indices of input results, - if not set, all the indices of the dataset will be used. - Default: None. - - Returns: - list[str: str]: result txt files which contains corresponding - semantic segmentation images. 
- """ - if indices is None: - indices = list(range(len(self))) - - mmcv.mkdir_or_exist(imgfile_prefix) - result_files = [] - for result, idx in zip(results, indices): - if to_label_id: - result = self._convert_to_label_id(result) - filename = self.img_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - - png_filename = osp.join(imgfile_prefix, f'{basename}.png') - - output = Image.fromarray(result.astype(np.uint8)).convert('P') - import cityscapesscripts.helpers.labels as CSLabels - palette = np.zeros((len(CSLabels.id2label), 3), dtype=np.uint8) - for label_id, label in CSLabels.id2label.items(): - palette[label_id] = label.color - - output.putpalette(palette) - output.save(png_filename) - result_files.append(png_filename) - - return result_files - - def format_results(self, - results, - imgfile_prefix, - to_label_id=True, - indices=None): - """Format the results into dir (standard format for Cityscapes - evaluation). - - Args: - results (list): Testing results of the dataset. - imgfile_prefix (str): The prefix of images files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". - to_label_id (bool): whether convert output to label_id for - submission. Default: False - indices (list[int], optional): Indices of input results, - if not set, all the indices of the dataset will be used. - Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a list containing - the image paths, tmp_dir is the temporal directory created - for saving json/png files when img_prefix is not specified. - """ - if indices is None: - indices = list(range(len(self))) - - assert isinstance(results, list), 'results must be a list.' - assert isinstance(indices, list), 'indices must be a list.' - - result_files = self.results2img(results, imgfile_prefix, to_label_id, - indices) - - return result_files - - def evaluate(self, - results, - metric='mIoU', - logger=None, - imgfile_prefix=None): - """Evaluation in Cityscapes/default protocol. - - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - imgfile_prefix (str | None): The prefix of output image file, - for cityscapes evaluation only. It includes the file path and - the prefix of filename, e.g., "a/b/prefix". - If results are evaluated with cityscapes protocol, it would be - the prefix of output png files. The output files would be - png images under folder "a/b/prefix/xxx.png", where "xxx" is - the image name of cityscapes. If not specified, a temp file - will be created for evaluation. - Default: None. - - Returns: - dict[str, float]: Cityscapes/default metrics. - """ - - eval_results = dict() - metrics = metric.copy() if isinstance(metric, list) else [metric] - if 'cityscapes' in metrics: - eval_results.update( - self._evaluate_cityscapes(results, logger, imgfile_prefix)) - metrics.remove('cityscapes') - if len(metrics) > 0: - eval_results.update( - super(CityscapesDataset, - self).evaluate(results, metrics, logger)) - - return eval_results - - def _evaluate_cityscapes(self, results, logger, imgfile_prefix): - """Evaluation in Cityscapes protocol. - - Args: - results (list): Testing results of the dataset. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. 
- imgfile_prefix (str | None): The prefix of output image file - - Returns: - dict[str: float]: Cityscapes evaluation results. - """ - try: - import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as CSEval # noqa - except ImportError: - raise ImportError('Please run "pip install cityscapesscripts" to ' - 'install cityscapesscripts first.') - msg = 'Evaluating in Cityscapes style' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - result_dir = imgfile_prefix - - eval_results = dict() - print_log(f'Evaluating results under {result_dir} ...', logger=logger) - - CSEval.args.evalInstLevelScore = True - CSEval.args.predictionPath = osp.abspath(result_dir) - CSEval.args.evalPixelAccuracy = True - CSEval.args.JSONOutput = False - - seg_map_list = [] - pred_list = [] - - # when evaluating with official cityscapesscripts, - # **_gtFine_labelIds.png is used - for seg_map in mmcv.scandir( - self.ann_dir, 'gtFine_labelIds.png', recursive=True): - seg_map_list.append(osp.join(self.ann_dir, seg_map)) - pred_list.append(CSEval.getPrediction(CSEval.args, seg_map)) - - eval_results.update( - CSEval.evaluateImgLists(pred_list, seg_map_list, CSEval.args)) - - return eval_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/coco_stuff.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/coco_stuff.py deleted file mode 100644 index 546a0142..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/coco_stuff.py +++ /dev/null @@ -1,93 +0,0 @@ -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class COCOStuffDataset(CustomDataset): - """COCO-Stuff dataset. - - In segmentation map annotation for COCO-Stuff, Train-IDs of the 10k version - are from 1 to 171, where 0 is the ignore index, and Train-ID of COCO Stuff - 164k is from 0 to 170, where 255 is the ignore index. So, they are all 171 - semantic categories. ``reduce_zero_label`` is set to True and False for the - 10k and 164k versions, respectively. The ``img_suffix`` is fixed to '.jpg', - and ``seg_map_suffix`` is fixed to '.png'. 
- """ - CLASSES = ( - 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', - 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', - 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', - 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', - 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', - 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', - 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', - 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', - 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', - 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', - 'scissors', 'teddy bear', 'hair drier', 'toothbrush', 'banner', - 'blanket', 'branch', 'bridge', 'building-other', 'bush', 'cabinet', - 'cage', 'cardboard', 'carpet', 'ceiling-other', 'ceiling-tile', - 'cloth', 'clothes', 'clouds', 'counter', 'cupboard', 'curtain', - 'desk-stuff', 'dirt', 'door-stuff', 'fence', 'floor-marble', - 'floor-other', 'floor-stone', 'floor-tile', 'floor-wood', - 'flower', 'fog', 'food-other', 'fruit', 'furniture-other', 'grass', - 'gravel', 'ground-other', 'hill', 'house', 'leaves', 'light', 'mat', - 'metal', 'mirror-stuff', 'moss', 'mountain', 'mud', 'napkin', 'net', - 'paper', 'pavement', 'pillow', 'plant-other', 'plastic', 'platform', - 'playingfield', 'railing', 'railroad', 'river', 'road', 'rock', 'roof', - 'rug', 'salad', 'sand', 'sea', 'shelf', 'sky-other', 'skyscraper', - 'snow', 'solid-other', 'stairs', 'stone', 'straw', 'structural-other', - 'table', 'tent', 'textile-other', 'towel', 'tree', 'vegetable', - 'wall-brick', 'wall-concrete', 'wall-other', 'wall-panel', - 'wall-stone', 'wall-tile', 'wall-wood', 'water-other', 'waterdrops', - 'window-blind', 'window-other', 'wood') - - PALETTE = [[0, 192, 64], [0, 192, 64], [0, 64, 96], [128, 192, 192], - [0, 64, 64], [0, 192, 224], [0, 192, 192], [128, 192, 64], - [0, 192, 96], [128, 192, 64], [128, 32, 192], [0, 0, 224], - [0, 0, 64], [0, 160, 192], [128, 0, 96], [128, 0, 192], - [0, 32, 192], [128, 128, 224], [0, 0, 192], [128, 160, 192], - [128, 128, 0], [128, 0, 32], [128, 32, 0], [128, 0, 128], - [64, 128, 32], [0, 160, 0], [0, 0, 0], [192, 128, 160], - [0, 32, 0], [0, 128, 128], [64, 128, 160], [128, 160, 0], - [0, 128, 0], [192, 128, 32], [128, 96, 128], [0, 0, 128], - [64, 0, 32], [0, 224, 128], [128, 0, 0], [192, 0, 160], - [0, 96, 128], [128, 128, 128], [64, 0, 160], [128, 224, 128], - [128, 128, 64], [192, 0, 32], [128, 96, 0], [128, 0, 192], - [0, 128, 32], [64, 224, 0], [0, 0, 64], [128, 128, 160], - [64, 96, 0], [0, 128, 192], [0, 128, 160], [192, 224, 0], - [0, 128, 64], [128, 128, 32], [192, 32, 128], [0, 64, 192], - [0, 0, 32], [64, 160, 128], [128, 64, 64], [128, 0, 160], - [64, 32, 128], [128, 192, 192], [0, 0, 160], [192, 160, 128], - [128, 192, 0], [128, 0, 96], [192, 32, 0], [128, 64, 128], - [64, 128, 96], [64, 160, 0], [0, 64, 0], [192, 128, 224], - [64, 32, 0], [0, 192, 128], [64, 128, 224], [192, 160, 0], - [0, 192, 0], [192, 128, 96], [192, 96, 128], [0, 64, 128], - [64, 0, 96], [64, 224, 128], [128, 64, 0], [192, 0, 224], - [64, 96, 128], [128, 192, 128], [64, 0, 224], [192, 224, 128], - [128, 192, 64], [192, 0, 96], [192, 96, 0], [128, 64, 192], - [0, 128, 96], [0, 224, 0], [64, 64, 64], [128, 128, 224], - [0, 96, 0], [64, 192, 192], [0, 128, 
224], [128, 224, 0], - [64, 192, 64], [128, 128, 96], [128, 32, 128], [64, 0, 192], - [0, 64, 96], [0, 160, 128], [192, 0, 64], [128, 64, 224], - [0, 32, 128], [192, 128, 192], [0, 64, 224], [128, 160, 128], - [192, 128, 0], [128, 64, 32], [128, 32, 64], [192, 0, 128], - [64, 192, 32], [0, 160, 64], [64, 0, 0], [192, 192, 160], - [0, 32, 64], [64, 128, 128], [64, 192, 160], [128, 160, 64], - [64, 128, 0], [192, 192, 32], [128, 96, 192], [64, 0, 128], - [64, 64, 32], [0, 224, 192], [192, 0, 0], [192, 64, 160], - [0, 96, 192], [192, 128, 128], [64, 64, 160], [128, 224, 192], - [192, 128, 64], [192, 64, 32], [128, 96, 64], [192, 0, 192], - [0, 192, 32], [64, 224, 64], [64, 0, 64], [128, 192, 160], - [64, 96, 64], [64, 128, 192], [0, 192, 160], [192, 224, 64], - [64, 128, 64], [128, 192, 32], [192, 32, 192], [64, 64, 192], - [0, 64, 32], [64, 160, 192], [192, 64, 64], [128, 64, 160], - [64, 32, 192], [192, 192, 192], [0, 64, 160], [192, 160, 192], - [192, 192, 0], [128, 64, 96], [192, 32, 64], [192, 64, 128], - [64, 192, 96], [64, 160, 64], [64, 64, 0]] - - def __init__(self, **kwargs): - super(COCOStuffDataset, self).__init__( - img_suffix='.jpg', seg_map_suffix='_labelTrainIds.png', **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/custom.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/custom.py deleted file mode 100644 index 872b2b84..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/custom.py +++ /dev/null @@ -1,457 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings -from collections import OrderedDict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from prettytable import PrettyTable -from torch.utils.data import Dataset - -from mmseg.core import eval_metrics, intersect_and_union, pre_eval_to_metrics -from mmseg.utils import get_root_logger -from .builder import DATASETS -from .pipelines import Compose, LoadAnnotations - - -@DATASETS.register_module() -class CustomDataset(Dataset): - """Custom dataset for semantic segmentation. An example of file structure - is as followed. - - .. code-block:: none - - ├── data - │ ├── my_dataset - │ │ ├── img_dir - │ │ │ ├── train - │ │ │ │ ├── xxx{img_suffix} - │ │ │ │ ├── yyy{img_suffix} - │ │ │ │ ├── zzz{img_suffix} - │ │ │ ├── val - │ │ ├── ann_dir - │ │ │ ├── train - │ │ │ │ ├── xxx{seg_map_suffix} - │ │ │ │ ├── yyy{seg_map_suffix} - │ │ │ │ ├── zzz{seg_map_suffix} - │ │ │ ├── val - - The img/gt_semantic_seg pair of CustomDataset should be of the same - except suffix. A valid img/gt_semantic_seg filename pair should be like - ``xxx{img_suffix}`` and ``xxx{seg_map_suffix}`` (extension is also included - in the suffix). If split is given, then ``xxx`` is specified in txt file. - Otherwise, all files in ``img_dir/``and ``ann_dir`` will be loaded. - Please refer to ``docs/tutorials/new_dataset.md`` for more details. - - - Args: - pipeline (list[dict]): Processing pipeline - img_dir (str): Path to image directory - img_suffix (str): Suffix of images. Default: '.jpg' - ann_dir (str, optional): Path to annotation directory. Default: None - seg_map_suffix (str): Suffix of segmentation maps. Default: '.png' - split (str, optional): Split txt file. If split is specified, only - file with suffix in the splits will be loaded. Otherwise, all - images in img_dir/ann_dir will be loaded. Default: None - data_root (str, optional): Data root for img_dir/ann_dir. Default: - None. 
- test_mode (bool): If test_mode=True, gt wouldn't be loaded. - ignore_index (int): The label index to be ignored. Default: 255 - reduce_zero_label (bool): Whether to mark label zero as ignored. - Default: False - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. Default: None. - palette (Sequence[Sequence[int]]] | np.ndarray | None): - The palette of segmentation map. If None is given, and - self.PALETTE is None, random palette will be generated. - Default: None - gt_seg_map_loader_cfg (dict, optional): build LoadAnnotations to - load gt for evaluation, load from disk by default. Default: None. - """ - - CLASSES = None - - PALETTE = None - - def __init__(self, - pipeline, - img_dir, - img_suffix='.jpg', - ann_dir=None, - seg_map_suffix='.png', - split=None, - data_root=None, - test_mode=False, - ignore_index=255, - reduce_zero_label=False, - classes=None, - palette=None, - gt_seg_map_loader_cfg=None): - self.pipeline = Compose(pipeline) - self.img_dir = img_dir - self.img_suffix = img_suffix - self.ann_dir = ann_dir - self.seg_map_suffix = seg_map_suffix - self.split = split - self.data_root = data_root - self.test_mode = test_mode - self.ignore_index = ignore_index - self.reduce_zero_label = reduce_zero_label - self.label_map = None - self.CLASSES, self.PALETTE = self.get_classes_and_palette( - classes, palette) - self.gt_seg_map_loader = LoadAnnotations( - ) if gt_seg_map_loader_cfg is None else LoadAnnotations( - **gt_seg_map_loader_cfg) - - if test_mode: - assert self.CLASSES is not None, \ - '`cls.CLASSES` or `classes` should be specified when testing' - - # join paths if data_root is specified - if self.data_root is not None: - if not osp.isabs(self.img_dir): - self.img_dir = osp.join(self.data_root, self.img_dir) - if not (self.ann_dir is None or osp.isabs(self.ann_dir)): - self.ann_dir = osp.join(self.data_root, self.ann_dir) - if not (self.split is None or osp.isabs(self.split)): - self.split = osp.join(self.data_root, self.split) - - # load annotations - self.img_infos = self.load_annotations(self.img_dir, self.img_suffix, - self.ann_dir, - self.seg_map_suffix, self.split) - - def __len__(self): - """Total number of samples of data.""" - return len(self.img_infos) - - def load_annotations(self, img_dir, img_suffix, ann_dir, seg_map_suffix, - split): - """Load annotation from directory. - - Args: - img_dir (str): Path to image directory - img_suffix (str): Suffix of images. - ann_dir (str|None): Path to annotation directory. - seg_map_suffix (str|None): Suffix of segmentation maps. - split (str|None): Split txt file. If split is specified, only file - with suffix in the splits will be loaded. Otherwise, all images - in img_dir/ann_dir will be loaded. Default: None - - Returns: - list[dict]: All image info of dataset. 
- """ - - img_infos = [] - if split is not None: - with open(split) as f: - for line in f: - img_name = line.strip() - img_info = dict(filename=img_name + img_suffix) - if ann_dir is not None: - seg_map = img_name + seg_map_suffix - img_info['ann'] = dict(seg_map=seg_map) - img_infos.append(img_info) - else: - for img in mmcv.scandir(img_dir, img_suffix, recursive=True): - img_info = dict(filename=img) - if ann_dir is not None: - seg_map = img.replace(img_suffix, seg_map_suffix) - img_info['ann'] = dict(seg_map=seg_map) - img_infos.append(img_info) - img_infos = sorted(img_infos, key=lambda x: x['filename']) - - print_log(f'Loaded {len(img_infos)} images', logger=get_root_logger()) - return img_infos - - def get_ann_info(self, idx): - """Get annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.img_infos[idx]['ann'] - - def pre_pipeline(self, results): - """Prepare results dict for pipeline.""" - results['seg_fields'] = [] - results['img_prefix'] = self.img_dir - results['seg_prefix'] = self.ann_dir - if self.custom_classes: - results['label_map'] = self.label_map - - def __getitem__(self, idx): - """Get training/test data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training/test data (with annotation if `test_mode` is set - False). - """ - - if self.test_mode: - return self.prepare_test_img(idx) - else: - return self.prepare_train_img(idx) - - def prepare_train_img(self, idx): - """Get training data and annotations after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training data and annotation after pipeline with new keys - introduced by pipeline. - """ - - img_info = self.img_infos[idx] - ann_info = self.get_ann_info(idx) - results = dict(img_info=img_info, ann_info=ann_info) - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Get testing data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Testing data after pipeline with new keys introduced by - pipeline. - """ - - img_info = self.img_infos[idx] - results = dict(img_info=img_info) - self.pre_pipeline(results) - return self.pipeline(results) - - def format_results(self, results, imgfile_prefix, indices=None, **kwargs): - """Place holder to format result to dataset specific output.""" - raise NotImplementedError - - def get_gt_seg_map_by_idx(self, index): - """Get one ground truth segmentation map for evaluation.""" - ann_info = self.get_ann_info(index) - results = dict(ann_info=ann_info) - self.pre_pipeline(results) - self.gt_seg_map_loader(results) - return results['gt_semantic_seg'] - - def get_gt_seg_maps(self, efficient_test=None): - """Get ground truth segmentation maps for evaluation.""" - if efficient_test is not None: - warnings.warn( - 'DeprecationWarning: ``efficient_test`` has been deprecated ' - 'since MMSeg v0.16, the ``get_gt_seg_maps()`` is CPU memory ' - 'friendly by default. ') - - for idx in range(len(self)): - ann_info = self.get_ann_info(idx) - results = dict(ann_info=ann_info) - self.pre_pipeline(results) - self.gt_seg_map_loader(results) - yield results['gt_semantic_seg'] - - def pre_eval(self, preds, indices): - """Collect eval result from each iteration. - - Args: - preds (list[torch.Tensor] | torch.Tensor): the segmentation logit - after argmax, shape (N, H, W). - indices (list[int] | int): the prediction related ground truth - indices. 
- - Returns: - list[torch.Tensor]: (area_intersect, area_union, area_prediction, - area_ground_truth). - """ - # In order to compat with batch inference - if not isinstance(indices, list): - indices = [indices] - if not isinstance(preds, list): - preds = [preds] - - pre_eval_results = [] - - for pred, index in zip(preds, indices): - seg_map = self.get_gt_seg_map_by_idx(index) - pre_eval_results.append( - intersect_and_union(pred, seg_map, len(self.CLASSES), - self.ignore_index, self.label_map, - self.reduce_zero_label)) - - return pre_eval_results - - def get_classes_and_palette(self, classes=None, palette=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - palette (Sequence[Sequence[int]]] | np.ndarray | None): - The palette of segmentation map. If None is given, random - palette will be generated. Default: None - """ - if classes is None: - self.custom_classes = False - return self.CLASSES, self.PALETTE - - self.custom_classes = True - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - if self.CLASSES: - if not set(class_names).issubset(self.CLASSES): - raise ValueError('classes is not a subset of CLASSES.') - - # dictionary, its keys are the old label ids and its values - # are the new label ids. - # used for changing pixel labels in load_annotations. - self.label_map = {} - for i, c in enumerate(self.CLASSES): - if c not in class_names: - self.label_map[i] = -1 - else: - self.label_map[i] = class_names.index(c) - - palette = self.get_palette_for_custom_classes(class_names, palette) - - return class_names, palette - - def get_palette_for_custom_classes(self, class_names, palette=None): - - if self.label_map is not None: - # return subset of palette - palette = [] - for old_id, new_id in sorted( - self.label_map.items(), key=lambda x: x[1]): - if new_id != -1: - palette.append(self.PALETTE[old_id]) - palette = type(self.PALETTE)(palette) - - elif palette is None: - if self.PALETTE is None: - palette = np.random.randint(0, 255, size=(len(class_names), 3)) - else: - palette = self.PALETTE - - return palette - - def evaluate(self, - results, - metric='mIoU', - logger=None, - gt_seg_maps=None, - **kwargs): - """Evaluate the dataset. - - Args: - results (list[tuple[torch.Tensor]] | list[str]): per image pre_eval - results or predict segmentation map for computing evaluation - metric. - metric (str | list[str]): Metrics to be evaluated. 'mIoU', - 'mDice' and 'mFscore' are supported. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - gt_seg_maps (generator[ndarray]): Custom gt seg maps as input, - used in ConcatDataset - - Returns: - dict[str, float]: Default metrics. 
- """ - if isinstance(metric, str): - metric = [metric] - allowed_metrics = ['mIoU', 'mDice', 'mFscore'] - if not set(metric).issubset(set(allowed_metrics)): - raise KeyError('metric {} is not supported'.format(metric)) - - eval_results = {} - # test a list of files - if mmcv.is_list_of(results, np.ndarray) or mmcv.is_list_of( - results, str): - if gt_seg_maps is None: - gt_seg_maps = self.get_gt_seg_maps() - num_classes = len(self.CLASSES) - ret_metrics = eval_metrics( - results, - gt_seg_maps, - num_classes, - self.ignore_index, - metric, - label_map=self.label_map, - reduce_zero_label=self.reduce_zero_label) - # test a list of pre_eval_results - else: - ret_metrics = pre_eval_to_metrics(results, metric) - - # Because dataset.CLASSES is required for per-eval. - if self.CLASSES is None: - class_names = tuple(range(num_classes)) - else: - class_names = self.CLASSES - - # summary table - ret_metrics_summary = OrderedDict({ - ret_metric: np.round(np.nanmean(ret_metric_value) * 100, 2) - for ret_metric, ret_metric_value in ret_metrics.items() - }) - - # each class table - ret_metrics.pop('aAcc', None) - ret_metrics_class = OrderedDict({ - ret_metric: np.round(ret_metric_value * 100, 2) - for ret_metric, ret_metric_value in ret_metrics.items() - }) - ret_metrics_class.update({'Class': class_names}) - ret_metrics_class.move_to_end('Class', last=False) - - # for logger - class_table_data = PrettyTable() - for key, val in ret_metrics_class.items(): - class_table_data.add_column(key, val) - - summary_table_data = PrettyTable() - for key, val in ret_metrics_summary.items(): - if key == 'aAcc': - summary_table_data.add_column(key, [val]) - else: - summary_table_data.add_column('m' + key, [val]) - - print_log('per class results:', logger) - print_log('\n' + class_table_data.get_string(), logger=logger) - print_log('Summary:', logger) - print_log('\n' + summary_table_data.get_string(), logger=logger) - - # each metric dict - for key, value in ret_metrics_summary.items(): - if key == 'aAcc': - eval_results[key] = value / 100.0 - else: - eval_results['m' + key] = value / 100.0 - - ret_metrics_class.pop('Class', None) - for key, value in ret_metrics_class.items(): - eval_results.update({ - key + '.' + str(name): value[idx] / 100.0 - for idx, name in enumerate(class_names) - }) - - return eval_results diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/dark_zurich.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/dark_zurich.py deleted file mode 100644 index efc088f3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/dark_zurich.py +++ /dev/null @@ -1,13 +0,0 @@ -from .builder import DATASETS -from .cityscapes import CityscapesDataset - - -@DATASETS.register_module() -class DarkZurichDataset(CityscapesDataset): - """DarkZurichDataset dataset.""" - - def __init__(self, **kwargs): - super().__init__( - img_suffix='_rgb_anon.png', - seg_map_suffix='_gt_labelTrainIds.png', - **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/dataset_wrappers.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/dataset_wrappers.py deleted file mode 100644 index 0349332e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/dataset_wrappers.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import bisect -from itertools import chain - -import mmcv -import numpy as np -from mmcv.utils import print_log -from torch.utils.data.dataset import ConcatDataset as _ConcatDataset - -from .builder import DATASETS -from .cityscapes import CityscapesDataset - - -@DATASETS.register_module() -class ConcatDataset(_ConcatDataset): - """A wrapper of concatenated dataset. - - Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but - support evaluation and formatting results - - Args: - datasets (list[:obj:`Dataset`]): A list of datasets. - separate_eval (bool): Whether to evaluate the concatenated - dataset results separately, Defaults to True. - """ - - def __init__(self, datasets, separate_eval=True): - super(ConcatDataset, self).__init__(datasets) - self.CLASSES = datasets[0].CLASSES - self.PALETTE = datasets[0].PALETTE - self.separate_eval = separate_eval - assert separate_eval in [True, False], \ - f'separate_eval can only be True or False,' \ - f'but get {separate_eval}' - if any([isinstance(ds, CityscapesDataset) for ds in datasets]): - raise NotImplementedError( - 'Evaluating ConcatDataset containing CityscapesDataset' - 'is not supported!') - - def evaluate(self, results, logger=None, **kwargs): - """Evaluate the results. - - Args: - results (list[tuple[torch.Tensor]] | list[str]]): per image - pre_eval results or predict segmentation map for - computing evaluation metric. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: evaluate results of the total dataset - or each separate - dataset if `self.separate_eval=True`. - """ - assert len(results) == self.cumulative_sizes[-1], \ - ('Dataset and results have different sizes: ' - f'{self.cumulative_sizes[-1]} v.s. {len(results)}') - - # Check whether all the datasets support evaluation - for dataset in self.datasets: - assert hasattr(dataset, 'evaluate'), \ - f'{type(dataset)} does not implement evaluate function' - - if self.separate_eval: - dataset_idx = -1 - total_eval_results = dict() - for size, dataset in zip(self.cumulative_sizes, self.datasets): - start_idx = 0 if dataset_idx == -1 else \ - self.cumulative_sizes[dataset_idx] - end_idx = self.cumulative_sizes[dataset_idx + 1] - - results_per_dataset = results[start_idx:end_idx] - print_log( - f'\nEvaluateing {dataset.img_dir} with ' - f'{len(results_per_dataset)} images now', - logger=logger) - - eval_results_per_dataset = dataset.evaluate( - results_per_dataset, logger=logger, **kwargs) - dataset_idx += 1 - for k, v in eval_results_per_dataset.items(): - total_eval_results.update({f'{dataset_idx}_{k}': v}) - - return total_eval_results - - if len(set([type(ds) for ds in self.datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types when ' - 'self.separate_eval=False') - else: - if mmcv.is_list_of(results, np.ndarray) or mmcv.is_list_of( - results, str): - # merge the generators of gt_seg_maps - gt_seg_maps = chain( - *[dataset.get_gt_seg_maps() for dataset in self.datasets]) - else: - # if the results are `pre_eval` results, - # we do not need gt_seg_maps to evaluate - gt_seg_maps = None - eval_results = self.datasets[0].evaluate( - results, gt_seg_maps=gt_seg_maps, logger=logger, **kwargs) - return eval_results - - def get_dataset_idx_and_sample_idx(self, indice): - """Return dataset and sample index when given an indice of - ConcatDataset. 
- - Args: - indice (int): indice of sample in ConcatDataset - - Returns: - int: the index of sub dataset the sample belong to - int: the index of sample in its corresponding subset - """ - if indice < 0: - if -indice > len(self): - raise ValueError( - 'absolute value of index should not exceed dataset length') - indice = len(self) + indice - dataset_idx = bisect.bisect_right(self.cumulative_sizes, indice) - if dataset_idx == 0: - sample_idx = indice - else: - sample_idx = indice - self.cumulative_sizes[dataset_idx - 1] - return dataset_idx, sample_idx - - def format_results(self, results, imgfile_prefix, indices=None, **kwargs): - """format result for every sample of ConcatDataset.""" - if indices is None: - indices = list(range(len(self))) - - assert isinstance(results, list), 'results must be a list.' - assert isinstance(indices, list), 'indices must be a list.' - - ret_res = [] - for i, indice in enumerate(indices): - dataset_idx, sample_idx = self.get_dataset_idx_and_sample_idx( - indice) - res = self.datasets[dataset_idx].format_results( - [results[i]], - imgfile_prefix + f'/{dataset_idx}', - indices=[sample_idx], - **kwargs) - ret_res.append(res) - return sum(ret_res, []) - - def pre_eval(self, preds, indices): - """do pre eval for every sample of ConcatDataset.""" - # In order to compat with batch inference - if not isinstance(indices, list): - indices = [indices] - if not isinstance(preds, list): - preds = [preds] - ret_res = [] - for i, indice in enumerate(indices): - dataset_idx, sample_idx = self.get_dataset_idx_and_sample_idx( - indice) - res = self.datasets[dataset_idx].pre_eval(preds[i], sample_idx) - ret_res.append(res) - return sum(ret_res, []) - - -@DATASETS.register_module() -class RepeatDataset(object): - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. - """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - self.CLASSES = dataset.CLASSES - self.PALETTE = dataset.PALETTE - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - """Get item from original dataset.""" - return self.dataset[idx % self._ori_len] - - def __len__(self): - """The length is multiplied by ``times``""" - return self.times * self._ori_len diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/drive.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/drive.py deleted file mode 100644 index 65099114..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/drive.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class DRIVEDataset(CustomDataset): - """DRIVE dataset. - - In segmentation map annotation for DRIVE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '_manual1.png'. 
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(DRIVEDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='_manual1.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/hrf.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/hrf.py deleted file mode 100644 index e4e10aea..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/hrf.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class HRFDataset(CustomDataset): - """HRF dataset. - - In segmentation map annotation for HRF, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(HRFDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/loveda.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/loveda.py deleted file mode 100644 index 90d654f6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/loveda.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -import mmcv -import numpy as np -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class LoveDADataset(CustomDataset): - """LoveDA dataset. - - In segmentation map annotation for LoveDA, 0 is the ignore index. - ``reduce_zero_label`` should be set to True. The ``img_suffix`` and - ``seg_map_suffix`` are both fixed to '.png'. - """ - CLASSES = ('background', 'building', 'road', 'water', 'barren', 'forest', - 'agricultural') - - PALETTE = [[255, 255, 255], [255, 0, 0], [255, 255, 0], [0, 0, 255], - [159, 129, 183], [0, 255, 0], [255, 195, 128]] - - def __init__(self, **kwargs): - super(LoveDADataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.png', - reduce_zero_label=True, - **kwargs) - - def results2img(self, results, imgfile_prefix, indices=None): - """Write the segmentation results to images. - - Args: - results (list[ndarray]): Testing results of the - dataset. - imgfile_prefix (str): The filename prefix of the png files. - If the prefix is "somepath/xxx", - the png files will be named "somepath/xxx.png". - indices (list[int], optional): Indices of input results, if not - set, all the indices of the dataset will be used. - Default: None. - - Returns: - list[str: str]: result txt files which contains corresponding - semantic segmentation images. - """ - - mmcv.mkdir_or_exist(imgfile_prefix) - result_files = [] - for result, idx in zip(results, indices): - - filename = self.img_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - - png_filename = osp.join(imgfile_prefix, f'{basename}.png') - - # The index range of official requirement is from 0 to 6. 
- output = Image.fromarray(result.astype(np.uint8)) - output.save(png_filename) - result_files.append(png_filename) - - return result_files - - def format_results(self, results, imgfile_prefix, indices=None): - """Format the results into dir (standard format for LoveDA evaluation). - - Args: - results (list): Testing results of the dataset. - imgfile_prefix (str): The prefix of images files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". - indices (list[int], optional): Indices of input results, - if not set, all the indices of the dataset will be used. - Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a list containing - the image paths, tmp_dir is the temporal directory created - for saving json/png files when img_prefix is not specified. - """ - if indices is None: - indices = list(range(len(self))) - - assert isinstance(results, list), 'results must be a list.' - assert isinstance(indices, list), 'indices must be a list.' - - result_files = self.results2img(results, imgfile_prefix, indices) - - return result_files diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/night_driving.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/night_driving.py deleted file mode 100644 index a9289a27..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/night_driving.py +++ /dev/null @@ -1,13 +0,0 @@ -from .builder import DATASETS -from .cityscapes import CityscapesDataset - - -@DATASETS.register_module() -class NightDrivingDataset(CityscapesDataset): - """NightDrivingDataset dataset.""" - - def __init__(self, **kwargs): - super().__init__( - img_suffix='_leftImg8bit.png', - seg_map_suffix='_gtCoarse_labelTrainIds.png', - **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pascal_context.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pascal_context.py deleted file mode 100644 index 1e7a09d7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pascal_context.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class PascalContextDataset(CustomDataset): - """PascalContext dataset. - - In segmentation map annotation for PascalContext, 0 stands for background, - which is included in 60 categories. ``reduce_zero_label`` is fixed to - False. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is - fixed to '.png'. - - Args: - split (str): Split txt file for PascalContext. 
- """ - - CLASSES = ('background', 'aeroplane', 'bag', 'bed', 'bedclothes', 'bench', - 'bicycle', 'bird', 'boat', 'book', 'bottle', 'building', 'bus', - 'cabinet', 'car', 'cat', 'ceiling', 'chair', 'cloth', - 'computer', 'cow', 'cup', 'curtain', 'dog', 'door', 'fence', - 'floor', 'flower', 'food', 'grass', 'ground', 'horse', - 'keyboard', 'light', 'motorbike', 'mountain', 'mouse', 'person', - 'plate', 'platform', 'pottedplant', 'road', 'rock', 'sheep', - 'shelves', 'sidewalk', 'sign', 'sky', 'snow', 'sofa', 'table', - 'track', 'train', 'tree', 'truck', 'tvmonitor', 'wall', 'water', - 'window', 'wood') - - PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255]] - - def __init__(self, split, **kwargs): - super(PascalContextDataset, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - split=split, - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) and self.split is not None - - -@DATASETS.register_module() -class PascalContextDataset59(CustomDataset): - """PascalContext dataset. - - In segmentation map annotation for PascalContext, 0 stands for background, - which is included in 60 categories. ``reduce_zero_label`` is fixed to - False. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is - fixed to '.png'. - - Args: - split (str): Split txt file for PascalContext. 
- """ - - CLASSES = ('aeroplane', 'bag', 'bed', 'bedclothes', 'bench', 'bicycle', - 'bird', 'boat', 'book', 'bottle', 'building', 'bus', 'cabinet', - 'car', 'cat', 'ceiling', 'chair', 'cloth', 'computer', 'cow', - 'cup', 'curtain', 'dog', 'door', 'fence', 'floor', 'flower', - 'food', 'grass', 'ground', 'horse', 'keyboard', 'light', - 'motorbike', 'mountain', 'mouse', 'person', 'plate', 'platform', - 'pottedplant', 'road', 'rock', 'sheep', 'shelves', 'sidewalk', - 'sign', 'sky', 'snow', 'sofa', 'table', 'track', 'train', - 'tree', 'truck', 'tvmonitor', 'wall', 'water', 'window', 'wood') - - PALETTE = [[180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], - [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230], - [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], - [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], - [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200], - [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], - [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92], - [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6], - [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], - [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8], - [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], - [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140], - [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], - [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], - [0, 235, 255], [0, 173, 255], [31, 0, 255]] - - def __init__(self, split, **kwargs): - super(PascalContextDataset59, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - split=split, - reduce_zero_label=True, - **kwargs) - assert osp.exists(self.img_dir) and self.split is not None diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/__init__.py deleted file mode 100644 index 91d9e474..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .compose import Compose -from .formatting import (Collect, ImageToTensor, ToDataContainer, ToTensor, - Transpose, to_tensor) -from .loading import LoadAnnotations, LoadImageFromFile -from .test_time_aug import MultiScaleFlipAug -from .transforms import (CLAHE, AdjustGamma, Normalize, Pad, - PhotoMetricDistortion, RandomCrop, RandomCutOut, - RandomFlip, RandomRotate, Rerange, Resize, RGB2Gray, - SegRescale) - -__all__ = [ - 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer', - 'Transpose', 'Collect', 'LoadAnnotations', 'LoadImageFromFile', - 'MultiScaleFlipAug', 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', - 'Normalize', 'SegRescale', 'PhotoMetricDistortion', 'RandomRotate', - 'AdjustGamma', 'CLAHE', 'Rerange', 'RGB2Gray', 'RandomCutOut' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/compose.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/compose.py deleted file mode 100644 index 30280c13..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/compose.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections - -from mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose(object): - """Compose multiple transforms sequentially. 
- - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. - """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/formating.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/formating.py deleted file mode 100644 index f6e53bfe..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/formating.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# flake8: noqa -import warnings - -from .formatting import * - -warnings.warn('DeprecationWarning: mmseg.datasets.pipelines.formating will be ' - 'deprecated in 2021, please replace it with ' - 'mmseg.datasets.pipelines.formatting.') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/formatting.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/formatting.py deleted file mode 100644 index 4e057c1b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/formatting.py +++ /dev/null @@ -1,289 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Sequence - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC - -from ..builder import PIPELINES - - -def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. - - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, - :class:`Sequence`, :class:`int` and :class:`float`. - - Args: - data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to - be converted. - """ - - if isinstance(data, torch.Tensor): - return data - elif isinstance(data, np.ndarray): - return torch.from_numpy(data) - elif isinstance(data, Sequence) and not mmcv.is_str(data): - return torch.tensor(data) - elif isinstance(data, int): - return torch.LongTensor([data]) - elif isinstance(data, float): - return torch.FloatTensor([data]) - else: - raise TypeError(f'type {type(data)} cannot be converted to tensor.') - - -@PIPELINES.register_module() -class ToTensor(object): - """Convert some results to :obj:`torch.Tensor` by given keys. - - Args: - keys (Sequence[str]): Keys that need to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert data in results to :obj:`torch.Tensor`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted - to :obj:`torch.Tensor`. 
- """ - - for key in self.keys: - results[key] = to_tensor(results[key]) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class ImageToTensor(object): - """Convert image to :obj:`torch.Tensor` by given keys. - - The dimension order of input image is (H, W, C). The pipeline will convert - it to (C, H, W). If only 2 dimension (H, W) is given, the output would be - (1, H, W). - - Args: - keys (Sequence[str]): Key of images to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - - for key in self.keys: - img = results[key] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - results[key] = to_tensor(img.transpose(2, 0, 1)) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class Transpose(object): - """Transpose some results by given keys. - - Args: - keys (Sequence[str]): Keys of results to be transposed. - order (Sequence[int]): Order of transpose. - """ - - def __init__(self, keys, order): - self.keys = keys - self.order = order - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - - for key in self.keys: - results[key] = results[key].transpose(self.order) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, order={self.order})' - - -@PIPELINES.register_module() -class ToDataContainer(object): - """Convert results to :obj:`mmcv.DataContainer` by given fields. - - Args: - fields (Sequence[dict]): Each field is a dict like - ``dict(key='xxx', **kwargs)``. The ``key`` in result will - be converted to :obj:`mmcv.DataContainer` with ``**kwargs``. - Default: ``(dict(key='img', stack=True), - dict(key='gt_semantic_seg'))``. - """ - - def __init__(self, - fields=(dict(key='img', - stack=True), dict(key='gt_semantic_seg'))): - self.fields = fields - - def __call__(self, results): - """Call function to convert data in results to - :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted to - :obj:`mmcv.DataContainer`. - """ - - for field in self.fields: - field = field.copy() - key = field.pop('key') - results[key] = DC(results[key], **field) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(fields={self.fields})' - - -@PIPELINES.register_module() -class DefaultFormatBundle(object): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img" - and "gt_semantic_seg". These fields are formatted as follows. 
- - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, - (3)to DataContainer (stack=True) - """ - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with - default bundle. - """ - - if 'img' in results: - img = results['img'] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - results['img'] = DC(to_tensor(img), stack=True) - if 'gt_semantic_seg' in results: - # convert to long - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, - ...].astype(np.int64)), - stack=True) - return results - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class Collect(object): - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "gt_semantic_seg". - - The "img_meta" item is always populated. The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - "img_shape": shape of the image input to the network as a tuple - (h, w, c). Note that images may be zero padded on the bottom/right - if the batch tensor is larger than this shape. - - - "scale_factor": a float indicating the preprocessing scale - - - "flip": a boolean indicating if image flip transform was used - - - "filename": path to the image file - - - "ori_shape": original shape of the image as a tuple (h, w, c) - - - "pad_shape": image shape after padding - - - "img_norm_cfg": a dict of normalization information: - - mean - per channel mean subtraction - - std - per channel std divisor - - to_rgb - bool indicating if bgr was converted to rgb - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. - Default: (``filename``, ``ori_filename``, ``ori_shape``, - ``img_shape``, ``pad_shape``, ``scale_factor``, ``flip``, - ``flip_direction``, ``img_norm_cfg``) - """ - - def __init__(self, - keys, - meta_keys=('filename', 'ori_filename', 'ori_shape', - 'img_shape', 'pad_shape', 'scale_factor', 'flip', - 'flip_direction', 'img_norm_cfg')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:mmcv.DataContainer. - - Args: - results (dict): Result dict contains the data to collect. 
- - Returns: - dict: The result dict contains the following keys - - keys in``self.keys`` - - ``img_metas`` - """ - - data = {} - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data['img_metas'] = DC(img_meta, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/loading.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/loading.py deleted file mode 100644 index e1c82bd3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/loading.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -import mmcv -import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class LoadImageFromFile(object): - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'cv2' - """ - - def __init__(self, - to_float32=False, - color_type='color', - file_client_args=dict(backend='disk'), - imdecode_backend='cv2'): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('img_prefix') is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, backend=self.imdecode_backend) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(to_float32={self.to_float32},' - repr_str += f"color_type='{self.color_type}'," - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations(object): - """Load annotations for semantic segmentation. - - Args: - reduce_zero_label (bool): Whether reduce all label value by 1. - Usually used for datasets where 0 is background label. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'pillow' - """ - - def __init__(self, - reduce_zero_label=False, - file_client_args=dict(backend='disk'), - imdecode_backend='pillow'): - self.reduce_zero_label = reduce_zero_label - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('seg_prefix', None) is not None: - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - else: - filename = results['ann_info']['seg_map'] - img_bytes = self.file_client.get(filename) - gt_semantic_seg = mmcv.imfrombytes( - img_bytes, flag='unchanged', - backend=self.imdecode_backend).squeeze().astype(np.uint8) - # modify if custom classes - if results.get('label_map', None) is not None: - for old_id, new_id in results['label_map'].items(): - gt_semantic_seg[gt_semantic_seg == old_id] = new_id - # reduce zero_label - if self.reduce_zero_label: - # avoid using underflow conversion - gt_semantic_seg[gt_semantic_seg == 0] = 255 - gt_semantic_seg = gt_semantic_seg - 1 - gt_semantic_seg[gt_semantic_seg == 254] = 255 - results['gt_semantic_seg'] = gt_semantic_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(reduce_zero_label={self.reduce_zero_label},' - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/test_time_aug.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/test_time_aug.py deleted file mode 100644 index 5c17cbbb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug(object): - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. code-block:: - - img_scale=(2048, 1024), - img_ratios=[0.5, 1.0], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1024, 512), (1024, 512), (2048, 1024), (2048, 1024)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (None | tuple | list[tuple]): Images scales for resizing. - img_ratios (float | list[float]): Image ratios for resizing - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal" and "vertical". If flip_direction is list, - multiple flip augmentations will be applied. - It has no effect when flip == False. Default: "horizontal". 
- """ - - def __init__(self, - transforms, - img_scale, - img_ratios=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - if img_ratios is not None: - img_ratios = img_ratios if isinstance(img_ratios, - list) else [img_ratios] - assert mmcv.is_list_of(img_ratios, float) - if img_scale is None: - # mode 1: given img_scale=None and a range of image ratio - self.img_scale = None - assert mmcv.is_list_of(img_ratios, float) - elif isinstance(img_scale, tuple) and mmcv.is_list_of( - img_ratios, float): - assert len(img_scale) == 2 - # mode 2: given a scale and a range of image ratio - self.img_scale = [(int(img_scale[0] * ratio), - int(img_scale[1] * ratio)) - for ratio in img_ratios] - else: - # mode 3: given multiple scales - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) or self.img_scale is None - self.flip = flip - self.img_ratios = img_ratios - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - if self.img_scale is None and mmcv.is_list_of(self.img_ratios, float): - h, w = results['img'].shape[:2] - img_scale = [(int(w * ratio), int(h * ratio)) - for ratio in self.img_ratios] - else: - img_scale = self.img_scale - flip_aug = [False, True] if self.flip else [False] - for scale in img_scale: - for flip in flip_aug: - for direction in self.flip_direction: - _results = results.copy() - _results['scale'] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip})' - repr_str += f'flip_direction={self.flip_direction}' - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/transforms.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/transforms.py deleted file mode 100644 index 567c960a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/pipelines/transforms.py +++ /dev/null @@ -1,1042 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np -from mmcv.utils import deprecated_api_warning, is_tuple_of -from numpy import random - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class ResizeToMultiple(object): - """Resize images & seg to multiple of divisor. - - Args: - size_divisor (int): images and gt seg maps need to resize to multiple - of size_divisor. Default: 32. 
- interpolation (str, optional): The interpolation mode of image resize. - Default: None - """ - - def __init__(self, size_divisor=32, interpolation=None): - self.size_divisor = size_divisor - self.interpolation = interpolation - - def __call__(self, results): - """Call function to resize images, semantic segmentation map to - multiple of size divisor. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape' keys are updated. - """ - # Align image to multiple of size divisor. - img = results['img'] - img = mmcv.imresize_to_multiple( - img, - self.size_divisor, - scale_factor=1, - interpolation=self.interpolation - if self.interpolation else 'bilinear') - - results['img'] = img - results['img_shape'] = img.shape - results['pad_shape'] = img.shape - - # Align segmentation map to multiple of size divisor. - for key in results.get('seg_fields', []): - gt_seg = results[key] - gt_seg = mmcv.imresize_to_multiple( - gt_seg, - self.size_divisor, - scale_factor=1, - interpolation='nearest') - results[key] = gt_seg - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(size_divisor={self.size_divisor}, ' - f'interpolation={self.interpolation})') - return repr_str - - -@PIPELINES.register_module() -class Resize(object): - """Resize images & seg. - - This transform resizes the input image to some scale. If the input dict - contains the key "scale", then the scale in the input dict is used, - otherwise the specified scale in the init method is used. - - ``img_scale`` can be None, a tuple (single-scale) or a list of tuple - (multi-scale). There are 4 multiscale modes: - - - ``ratio_range is not None``: - 1. When img_scale is None, img_scale is the shape of image in results - (img_scale = results['img'].shape[:2]) and the image is resized based - on the original size. (mode 1) - 2. When img_scale is a tuple (single-scale), randomly sample a ratio from - the ratio range and multiply it with the image scale. (mode 2) - - - ``ratio_range is None and multiscale_mode == "range"``: randomly sample a - scale from the a range. (mode 3) - - - ``ratio_range is None and multiscale_mode == "value"``: randomly sample a - scale from multiple scales. (mode 4) - - Args: - img_scale (tuple or list[tuple]): Images scales for resizing. - Default:None. - multiscale_mode (str): Either "range" or "value". - Default: 'range' - ratio_range (tuple[float]): (min_ratio, max_ratio). - Default: None - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. Default: True - """ - - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=True): - if img_scale is None: - self.img_scale = None - else: - if isinstance(img_scale, list): - self.img_scale = img_scale - else: - self.img_scale = [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) - - if ratio_range is not None: - # mode 1: given img_scale=None and a range of image ratio - # mode 2: given a scale and a range of image ratio - assert self.img_scale is None or len(self.img_scale) == 1 - else: - # mode 3 and 4: given multiple scales or a range of scales - assert multiscale_mode in ['value', 'range'] - - self.multiscale_mode = multiscale_mode - self.ratio_range = ratio_range - self.keep_ratio = keep_ratio - - @staticmethod - def random_select(img_scales): - """Randomly select an img_scale from given candidates. - - Args: - img_scales (list[tuple]): Images scales for selection. 
- - Returns: - (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, - where ``img_scale`` is the selected image scale and - ``scale_idx`` is the selected index in the given candidates. - """ - - assert mmcv.is_list_of(img_scales, tuple) - scale_idx = np.random.randint(len(img_scales)) - img_scale = img_scales[scale_idx] - return img_scale, scale_idx - - @staticmethod - def random_sample(img_scales): - """Randomly sample an img_scale when ``multiscale_mode=='range'``. - - Args: - img_scales (list[tuple]): Images scale range for sampling. - There must be two tuples in img_scales, which specify the lower - and upper bound of image scales. - - Returns: - (tuple, None): Returns a tuple ``(img_scale, None)``, where - ``img_scale`` is sampled scale and None is just a placeholder - to be consistent with :func:`random_select`. - """ - - assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 - img_scale_long = [max(s) for s in img_scales] - img_scale_short = [min(s) for s in img_scales] - long_edge = np.random.randint( - min(img_scale_long), - max(img_scale_long) + 1) - short_edge = np.random.randint( - min(img_scale_short), - max(img_scale_short) + 1) - img_scale = (long_edge, short_edge) - return img_scale, None - - @staticmethod - def random_sample_ratio(img_scale, ratio_range): - """Randomly sample an img_scale when ``ratio_range`` is specified. - - A ratio will be randomly sampled from the range specified by - ``ratio_range``. Then it would be multiplied with ``img_scale`` to - generate sampled scale. - - Args: - img_scale (tuple): Images scale base to multiply with ratio. - ratio_range (tuple[float]): The minimum and maximum ratio to scale - the ``img_scale``. - - Returns: - (tuple, None): Returns a tuple ``(scale, None)``, where - ``scale`` is sampled ratio multiplied with ``img_scale`` and - None is just a placeholder to be consistent with - :func:`random_select`. - """ - - assert isinstance(img_scale, tuple) and len(img_scale) == 2 - min_ratio, max_ratio = ratio_range - assert min_ratio <= max_ratio - ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio - scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) - return scale, None - - def _random_scale(self, results): - """Randomly sample an img_scale according to ``ratio_range`` and - ``multiscale_mode``. - - If ``ratio_range`` is specified, a ratio will be sampled and be - multiplied with ``img_scale``. - If multiple scales are specified by ``img_scale``, a scale will be - sampled according to ``multiscale_mode``. - Otherwise, single scale will be used. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: Two new keys 'scale` and 'scale_idx` are added into - ``results``, which would be used by subsequent pipelines. 
- """ - - if self.ratio_range is not None: - if self.img_scale is None: - h, w = results['img'].shape[:2] - scale, scale_idx = self.random_sample_ratio((w, h), - self.ratio_range) - else: - scale, scale_idx = self.random_sample_ratio( - self.img_scale[0], self.ratio_range) - elif len(self.img_scale) == 1: - scale, scale_idx = self.img_scale[0], 0 - elif self.multiscale_mode == 'range': - scale, scale_idx = self.random_sample(self.img_scale) - elif self.multiscale_mode == 'value': - scale, scale_idx = self.random_select(self.img_scale) - else: - raise NotImplementedError - - results['scale'] = scale - results['scale_idx'] = scale_idx - - def _resize_img(self, results): - """Resize images with ``results['scale']``.""" - if self.keep_ratio: - img, scale_factor = mmcv.imrescale( - results['img'], results['scale'], return_scale=True) - # the w_scale and h_scale has minor difference - # a real fix should be done in the mmcv.imrescale in the future - new_h, new_w = img.shape[:2] - h, w = results['img'].shape[:2] - w_scale = new_w / w - h_scale = new_h / h - else: - img, w_scale, h_scale = mmcv.imresize( - results['img'], results['scale'], return_scale=True) - scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], - dtype=np.float32) - results['img'] = img - results['img_shape'] = img.shape - results['pad_shape'] = img.shape # in case that there is no padding - results['scale_factor'] = scale_factor - results['keep_ratio'] = self.keep_ratio - - def _resize_seg(self, results): - """Resize semantic segmentation map with ``results['scale']``.""" - for key in results.get('seg_fields', []): - if self.keep_ratio: - gt_seg = mmcv.imrescale( - results[key], results['scale'], interpolation='nearest') - else: - gt_seg = mmcv.imresize( - results[key], results['scale'], interpolation='nearest') - results[key] = gt_seg - - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', - 'keep_ratio' keys are added into result dict. - """ - - if 'scale' not in results: - self._random_scale(results) - self._resize_img(results) - self._resize_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(img_scale={self.img_scale}, ' - f'multiscale_mode={self.multiscale_mode}, ' - f'ratio_range={self.ratio_range}, ' - f'keep_ratio={self.keep_ratio})') - return repr_str - - -@PIPELINES.register_module() -class RandomFlip(object): - """Flip the image & seg. - - If the input dict contains the key "flip", then the flag will be used, - otherwise it will be randomly decided by a ratio specified in the init - method. - - Args: - prob (float, optional): The flipping probability. Default: None. - direction(str, optional): The flipping direction. Options are - 'horizontal' and 'vertical'. Default: 'horizontal'. - """ - - @deprecated_api_warning({'flip_ratio': 'prob'}, cls_name='RandomFlip') - def __init__(self, prob=None, direction='horizontal'): - self.prob = prob - self.direction = direction - if prob is not None: - assert prob >= 0 and prob <= 1 - assert direction in ['horizontal', 'vertical'] - - def __call__(self, results): - """Call function to flip bounding boxes, masks, semantic segmentation - maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Flipped results, 'flip', 'flip_direction' keys are added into - result dict. 
- """ - - if 'flip' not in results: - flip = True if np.random.rand() < self.prob else False - results['flip'] = flip - if 'flip_direction' not in results: - results['flip_direction'] = self.direction - if results['flip']: - # flip image - results['img'] = mmcv.imflip( - results['img'], direction=results['flip_direction']) - - # flip segs - for key in results.get('seg_fields', []): - # use copy() to make numpy stride positive - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']).copy() - return results - - def __repr__(self): - return self.__class__.__name__ + f'(prob={self.prob})' - - -@PIPELINES.register_module() -class Pad(object): - """Pad the image & mask. - - There are two padding modes: (1) pad to a fixed size and (2) pad to the - minimum size that is divisible by some number. - Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor", - - Args: - size (tuple, optional): Fixed padding size. - size_divisor (int, optional): The divisor of padded size. - pad_val (float, optional): Padding value. Default: 0. - seg_pad_val (float, optional): Padding value of segmentation map. - Default: 255. - """ - - def __init__(self, - size=None, - size_divisor=None, - pad_val=0, - seg_pad_val=255): - self.size = size - self.size_divisor = size_divisor - self.pad_val = pad_val - self.seg_pad_val = seg_pad_val - # only one of size and size_divisor should be valid - assert size is not None or size_divisor is not None - assert size is None or size_divisor is None - - def _pad_img(self, results): - """Pad images according to ``self.size``.""" - if self.size is not None: - padded_img = mmcv.impad( - results['img'], shape=self.size, pad_val=self.pad_val) - elif self.size_divisor is not None: - padded_img = mmcv.impad_to_multiple( - results['img'], self.size_divisor, pad_val=self.pad_val) - results['img'] = padded_img - results['pad_shape'] = padded_img.shape - results['pad_fixed_size'] = self.size - results['pad_size_divisor'] = self.size_divisor - - def _pad_seg(self, results): - """Pad masks according to ``results['pad_shape']``.""" - for key in results.get('seg_fields', []): - results[key] = mmcv.impad( - results[key], - shape=results['pad_shape'][:2], - pad_val=self.seg_pad_val) - - def __call__(self, results): - """Call function to pad images, masks, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Updated result dict. - """ - - self._pad_img(results) - self._pad_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(size={self.size}, size_divisor={self.size_divisor}, ' \ - f'pad_val={self.pad_val})' - return repr_str - - -@PIPELINES.register_module() -class Normalize(object): - """Normalize the image. - - Added key is "img_norm_cfg". - - Args: - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB, - default is true. - """ - - def __init__(self, mean, std, to_rgb=True): - self.mean = np.array(mean, dtype=np.float32) - self.std = np.array(std, dtype=np.float32) - self.to_rgb = to_rgb - - def __call__(self, results): - """Call function to normalize images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Normalized results, 'img_norm_cfg' key is added into - result dict. 
- """ - - results['img'] = mmcv.imnormalize(results['img'], self.mean, self.std, - self.to_rgb) - results['img_norm_cfg'] = dict( - mean=self.mean, std=self.std, to_rgb=self.to_rgb) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, std={self.std}, to_rgb=' \ - f'{self.to_rgb})' - return repr_str - - -@PIPELINES.register_module() -class Rerange(object): - """Rerange the image pixel value. - - Args: - min_value (float or int): Minimum value of the reranged image. - Default: 0. - max_value (float or int): Maximum value of the reranged image. - Default: 255. - """ - - def __init__(self, min_value=0, max_value=255): - assert isinstance(min_value, float) or isinstance(min_value, int) - assert isinstance(max_value, float) or isinstance(max_value, int) - assert min_value < max_value - self.min_value = min_value - self.max_value = max_value - - def __call__(self, results): - """Call function to rerange images. - - Args: - results (dict): Result dict from loading pipeline. - Returns: - dict: Reranged results. - """ - - img = results['img'] - img_min_value = np.min(img) - img_max_value = np.max(img) - - assert img_min_value < img_max_value - # rerange to [0, 1] - img = (img - img_min_value) / (img_max_value - img_min_value) - # rerange to [min_value, max_value] - img = img * (self.max_value - self.min_value) + self.min_value - results['img'] = img - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(min_value={self.min_value}, max_value={self.max_value})' - return repr_str - - -@PIPELINES.register_module() -class CLAHE(object): - """Use CLAHE method to process the image. - - See `ZUIDERVELD,K. Contrast Limited Adaptive Histogram Equalization[J]. - Graphics Gems, 1994:474-485.` for more information. - - Args: - clip_limit (float): Threshold for contrast limiting. Default: 40.0. - tile_grid_size (tuple[int]): Size of grid for histogram equalization. - Input image will be divided into equally sized rectangular tiles. - It defines the number of tiles in row and column. Default: (8, 8). - """ - - def __init__(self, clip_limit=40.0, tile_grid_size=(8, 8)): - assert isinstance(clip_limit, (float, int)) - self.clip_limit = clip_limit - assert is_tuple_of(tile_grid_size, int) - assert len(tile_grid_size) == 2 - self.tile_grid_size = tile_grid_size - - def __call__(self, results): - """Call function to Use CLAHE method process images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Processed results. - """ - - for i in range(results['img'].shape[2]): - results['img'][:, :, i] = mmcv.clahe( - np.array(results['img'][:, :, i], dtype=np.uint8), - self.clip_limit, self.tile_grid_size) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(clip_limit={self.clip_limit}, '\ - f'tile_grid_size={self.tile_grid_size})' - return repr_str - - -@PIPELINES.register_module() -class RandomCrop(object): - """Random crop the image & seg. - - Args: - crop_size (tuple): Expected size after cropping, (h, w). - cat_max_ratio (float): The maximum ratio that single category could - occupy. 
- """ - - def __init__(self, crop_size, cat_max_ratio=1., ignore_index=255): - assert crop_size[0] > 0 and crop_size[1] > 0 - self.crop_size = crop_size - self.cat_max_ratio = cat_max_ratio - self.ignore_index = ignore_index - - def get_crop_bbox(self, img): - """Randomly get a crop bounding box.""" - margin_h = max(img.shape[0] - self.crop_size[0], 0) - margin_w = max(img.shape[1] - self.crop_size[1], 0) - offset_h = np.random.randint(0, margin_h + 1) - offset_w = np.random.randint(0, margin_w + 1) - crop_y1, crop_y2 = offset_h, offset_h + self.crop_size[0] - crop_x1, crop_x2 = offset_w, offset_w + self.crop_size[1] - - return crop_y1, crop_y2, crop_x1, crop_x2 - - def crop(self, img, crop_bbox): - """Crop from ``img``""" - crop_y1, crop_y2, crop_x1, crop_x2 = crop_bbox - img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] - return img - - def __call__(self, results): - """Call function to randomly crop images, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - - img = results['img'] - crop_bbox = self.get_crop_bbox(img) - if self.cat_max_ratio < 1.: - # Repeat 10 times - for _ in range(10): - seg_temp = self.crop(results['gt_semantic_seg'], crop_bbox) - labels, cnt = np.unique(seg_temp, return_counts=True) - cnt = cnt[labels != self.ignore_index] - if len(cnt) > 1 and np.max(cnt) / np.sum( - cnt) < self.cat_max_ratio: - break - crop_bbox = self.get_crop_bbox(img) - - # crop the image - img = self.crop(img, crop_bbox) - img_shape = img.shape - results['img'] = img - results['img_shape'] = img_shape - - # crop semantic seg - for key in results.get('seg_fields', []): - results[key] = self.crop(results[key], crop_bbox) - - return results - - def __repr__(self): - return self.__class__.__name__ + f'(crop_size={self.crop_size})' - - -@PIPELINES.register_module() -class RandomRotate(object): - """Rotate the image & seg. - - Args: - prob (float): The rotation probability. - degree (float, tuple[float]): Range of degrees to select from. If - degree is a number instead of tuple like (min, max), - the range of degree will be (``-degree``, ``+degree``) - pad_val (float, optional): Padding value of image. Default: 0. - seg_pad_val (float, optional): Padding value of segmentation map. - Default: 255. - center (tuple[float], optional): Center point (w, h) of the rotation in - the source image. If not specified, the center of the image will be - used. Default: None. - auto_bound (bool): Whether to adjust the image size to cover the whole - rotated image. Default: False - """ - - def __init__(self, - prob, - degree, - pad_val=0, - seg_pad_val=255, - center=None, - auto_bound=False): - self.prob = prob - assert prob >= 0 and prob <= 1 - if isinstance(degree, (float, int)): - assert degree > 0, f'degree {degree} should be positive' - self.degree = (-degree, degree) - else: - self.degree = degree - assert len(self.degree) == 2, f'degree {self.degree} should be a ' \ - f'tuple of (min, max)' - self.pal_val = pad_val - self.seg_pad_val = seg_pad_val - self.center = center - self.auto_bound = auto_bound - - def __call__(self, results): - """Call function to rotate image, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Rotated results. 
- """ - - rotate = True if np.random.rand() < self.prob else False - degree = np.random.uniform(min(*self.degree), max(*self.degree)) - if rotate: - # rotate image - results['img'] = mmcv.imrotate( - results['img'], - angle=degree, - border_value=self.pal_val, - center=self.center, - auto_bound=self.auto_bound) - - # rotate segs - for key in results.get('seg_fields', []): - results[key] = mmcv.imrotate( - results[key], - angle=degree, - border_value=self.seg_pad_val, - center=self.center, - auto_bound=self.auto_bound, - interpolation='nearest') - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(prob={self.prob}, ' \ - f'degree={self.degree}, ' \ - f'pad_val={self.pal_val}, ' \ - f'seg_pad_val={self.seg_pad_val}, ' \ - f'center={self.center}, ' \ - f'auto_bound={self.auto_bound})' - return repr_str - - -@PIPELINES.register_module() -class RGB2Gray(object): - """Convert RGB image to grayscale image. - - This transform calculate the weighted mean of input image channels with - ``weights`` and then expand the channels to ``out_channels``. When - ``out_channels`` is None, the number of output channels is the same as - input channels. - - Args: - out_channels (int): Expected number of output channels after - transforming. Default: None. - weights (tuple[float]): The weights to calculate the weighted mean. - Default: (0.299, 0.587, 0.114). - """ - - def __init__(self, out_channels=None, weights=(0.299, 0.587, 0.114)): - assert out_channels is None or out_channels > 0 - self.out_channels = out_channels - assert isinstance(weights, tuple) - for item in weights: - assert isinstance(item, (float, int)) - self.weights = weights - - def __call__(self, results): - """Call function to convert RGB image to grayscale image. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with grayscale image. - """ - img = results['img'] - assert len(img.shape) == 3 - assert img.shape[2] == len(self.weights) - weights = np.array(self.weights).reshape((1, 1, -1)) - img = (img * weights).sum(2, keepdims=True) - if self.out_channels is None: - img = img.repeat(weights.shape[2], axis=2) - else: - img = img.repeat(self.out_channels, axis=2) - - results['img'] = img - results['img_shape'] = img.shape - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(out_channels={self.out_channels}, ' \ - f'weights={self.weights})' - return repr_str - - -@PIPELINES.register_module() -class AdjustGamma(object): - """Using gamma correction to process the image. - - Args: - gamma (float or int): Gamma value used in gamma correction. - Default: 1.0. - """ - - def __init__(self, gamma=1.0): - assert isinstance(gamma, float) or isinstance(gamma, int) - assert gamma > 0 - self.gamma = gamma - inv_gamma = 1.0 / gamma - self.table = np.array([(i / 255.0)**inv_gamma * 255 - for i in np.arange(256)]).astype('uint8') - - def __call__(self, results): - """Call function to process the image with gamma correction. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Processed results. - """ - - results['img'] = mmcv.lut_transform( - np.array(results['img'], dtype=np.uint8), self.table) - - return results - - def __repr__(self): - return self.__class__.__name__ + f'(gamma={self.gamma})' - - -@PIPELINES.register_module() -class SegRescale(object): - """Rescale semantic segmentation maps. - - Args: - scale_factor (float): The scale factor of the final output. 
- """ - - def __init__(self, scale_factor=1): - self.scale_factor = scale_factor - - def __call__(self, results): - """Call function to scale the semantic segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with semantic segmentation map scaled. - """ - for key in results.get('seg_fields', []): - if self.scale_factor != 1: - results[key] = mmcv.imrescale( - results[key], self.scale_factor, interpolation='nearest') - return results - - def __repr__(self): - return self.__class__.__name__ + f'(scale_factor={self.scale_factor})' - - -@PIPELINES.register_module() -class PhotoMetricDistortion(object): - """Apply photometric distortion to image sequentially, every transformation - is applied with a probability of 0.5. The position of random contrast is in - second or second to last. - - 1. random brightness - 2. random contrast (mode 0) - 3. convert color from BGR to HSV - 4. random saturation - 5. random hue - 6. convert color from HSV to BGR - 7. random contrast (mode 1) - - Args: - brightness_delta (int): delta of brightness. - contrast_range (tuple): range of contrast. - saturation_range (tuple): range of saturation. - hue_delta (int): delta of hue. - """ - - def __init__(self, - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18): - self.brightness_delta = brightness_delta - self.contrast_lower, self.contrast_upper = contrast_range - self.saturation_lower, self.saturation_upper = saturation_range - self.hue_delta = hue_delta - - def convert(self, img, alpha=1, beta=0): - """Multiple with alpha and add beat with clip.""" - img = img.astype(np.float32) * alpha + beta - img = np.clip(img, 0, 255) - return img.astype(np.uint8) - - def brightness(self, img): - """Brightness distortion.""" - if random.randint(2): - return self.convert( - img, - beta=random.uniform(-self.brightness_delta, - self.brightness_delta)) - return img - - def contrast(self, img): - """Contrast distortion.""" - if random.randint(2): - return self.convert( - img, - alpha=random.uniform(self.contrast_lower, self.contrast_upper)) - return img - - def saturation(self, img): - """Saturation distortion.""" - if random.randint(2): - img = mmcv.bgr2hsv(img) - img[:, :, 1] = self.convert( - img[:, :, 1], - alpha=random.uniform(self.saturation_lower, - self.saturation_upper)) - img = mmcv.hsv2bgr(img) - return img - - def hue(self, img): - """Hue distortion.""" - if random.randint(2): - img = mmcv.bgr2hsv(img) - img[:, :, - 0] = (img[:, :, 0].astype(int) + - random.randint(-self.hue_delta, self.hue_delta)) % 180 - img = mmcv.hsv2bgr(img) - return img - - def __call__(self, results): - """Call function to perform photometric distortion on images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images distorted. 
- """ - - img = results['img'] - # random brightness - img = self.brightness(img) - - # mode == 0 --> do random contrast first - # mode == 1 --> do random contrast last - mode = random.randint(2) - if mode == 1: - img = self.contrast(img) - - # random saturation - img = self.saturation(img) - - # random hue - img = self.hue(img) - - # random contrast - if mode == 0: - img = self.contrast(img) - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(brightness_delta={self.brightness_delta}, ' - f'contrast_range=({self.contrast_lower}, ' - f'{self.contrast_upper}), ' - f'saturation_range=({self.saturation_lower}, ' - f'{self.saturation_upper}), ' - f'hue_delta={self.hue_delta})') - return repr_str - - -@PIPELINES.register_module() -class RandomCutOut(object): - """CutOut operation. - - Randomly drop some regions of image used in - `Cutout `_. - Args: - prob (float): cutout probability. - n_holes (int | tuple[int, int]): Number of regions to be dropped. - If it is given as a list, number of holes will be randomly - selected from the closed interval [`n_holes[0]`, `n_holes[1]`]. - cutout_shape (tuple[int, int] | list[tuple[int, int]]): The candidate - shape of dropped regions. It can be `tuple[int, int]` to use a - fixed cutout shape, or `list[tuple[int, int]]` to randomly choose - shape from the list. - cutout_ratio (tuple[float, float] | list[tuple[float, float]]): The - candidate ratio of dropped regions. It can be `tuple[float, float]` - to use a fixed ratio or `list[tuple[float, float]]` to randomly - choose ratio from the list. Please note that `cutout_shape` - and `cutout_ratio` cannot be both given at the same time. - fill_in (tuple[float, float, float] | tuple[int, int, int]): The value - of pixel to fill in the dropped regions. Default: (0, 0, 0). - seg_fill_in (int): The labels of pixel to fill in the dropped regions. - If seg_fill_in is None, skip. Default: None. - """ - - def __init__(self, - prob, - n_holes, - cutout_shape=None, - cutout_ratio=None, - fill_in=(0, 0, 0), - seg_fill_in=None): - - assert 0 <= prob and prob <= 1 - assert (cutout_shape is None) ^ (cutout_ratio is None), \ - 'Either cutout_shape or cutout_ratio should be specified.' 
- assert (isinstance(cutout_shape, (list, tuple)) - or isinstance(cutout_ratio, (list, tuple))) - if isinstance(n_holes, tuple): - assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1] - else: - n_holes = (n_holes, n_holes) - if seg_fill_in is not None: - assert (isinstance(seg_fill_in, int) and 0 <= seg_fill_in - and seg_fill_in <= 255) - self.prob = prob - self.n_holes = n_holes - self.fill_in = fill_in - self.seg_fill_in = seg_fill_in - self.with_ratio = cutout_ratio is not None - self.candidates = cutout_ratio if self.with_ratio else cutout_shape - if not isinstance(self.candidates, list): - self.candidates = [self.candidates] - - def __call__(self, results): - """Call function to drop some regions of image.""" - cutout = True if np.random.rand() < self.prob else False - if cutout: - h, w, c = results['img'].shape - n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1) - for _ in range(n_holes): - x1 = np.random.randint(0, w) - y1 = np.random.randint(0, h) - index = np.random.randint(0, len(self.candidates)) - if not self.with_ratio: - cutout_w, cutout_h = self.candidates[index] - else: - cutout_w = int(self.candidates[index][0] * w) - cutout_h = int(self.candidates[index][1] * h) - - x2 = np.clip(x1 + cutout_w, 0, w) - y2 = np.clip(y1 + cutout_h, 0, h) - results['img'][y1:y2, x1:x2, :] = self.fill_in - - if self.seg_fill_in is not None: - for key in results.get('seg_fields', []): - results[key][y1:y2, x1:x2] = self.seg_fill_in - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(prob={self.prob}, ' - repr_str += f'n_holes={self.n_holes}, ' - repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio - else f'cutout_shape={self.candidates}, ') - repr_str += f'fill_in={self.fill_in}, ' - repr_str += f'seg_fill_in={self.seg_fill_in})' - return repr_str diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/stare.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/stare.py deleted file mode 100644 index a24d1d95..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/stare.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class STAREDataset(CustomDataset): - """STARE dataset. - - In segmentation map annotation for STARE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.ah.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(STAREDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.ah.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/voc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/voc.py deleted file mode 100644 index 3cec9e35..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/datasets/voc.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class PascalVOCDataset(CustomDataset): - """Pascal VOC dataset. - - Args: - split (str): Split txt file for Pascal VOC. 
- """ - - CLASSES = ('background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', - 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', - 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', - 'train', 'tvmonitor') - - PALETTE = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128], - [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], - [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128], - [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0], - [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]] - - def __init__(self, split, **kwargs): - super(PascalVOCDataset, self).__init__( - img_suffix='.jpg', seg_map_suffix='.png', split=split, **kwargs) - assert osp.exists(self.img_dir) and self.split is not None diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/__init__.py deleted file mode 100644 index 87d8108e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .backbones import * # noqa: F401,F403 -from .builder import (BACKBONES, HEADS, LOSSES, SEGMENTORS, build_backbone, - build_head, build_loss, build_segmentor) -from .decode_heads import * # noqa: F401,F403 -from .losses import * # noqa: F401,F403 -from .necks import * # noqa: F401,F403 -from .segmentors import * # noqa: F401,F403 - -__all__ = [ - 'BACKBONES', 'HEADS', 'LOSSES', 'SEGMENTORS', 'build_backbone', - 'build_head', 'build_loss', 'build_segmentor' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/__init__.py deleted file mode 100644 index 434378e9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .bisenetv1 import BiSeNetV1 -from .bisenetv2 import BiSeNetV2 -from .cgnet import CGNet -from .erfnet import ERFNet -from .fast_scnn import FastSCNN -from .hrnet import HRNet -from .icnet import ICNet -from .mit import MixVisionTransformer -from .mobilenet_v2 import MobileNetV2 -from .mobilenet_v3 import MobileNetV3 -from .resnest import ResNeSt -from .resnet import ResNet, ResNetV1c, ResNetV1d -from .resnext import ResNeXt -from .stdc import STDCContextPathNet, STDCNet -from .swin import SwinTransformer -from .timm_backbone import TIMMBackbone -from .twins import PCPVT, SVT -from .unet import UNet -from .vit import VisionTransformer - -__all__ = [ - 'ResNet', 'ResNetV1c', 'ResNetV1d', 'ResNeXt', 'HRNet', 'FastSCNN', - 'ResNeSt', 'MobileNetV2', 'UNet', 'CGNet', 'MobileNetV3', - 'VisionTransformer', 'SwinTransformer', 'MixVisionTransformer', - 'BiSeNetV1', 'BiSeNetV2', 'ICNet', 'TIMMBackbone', 'ERFNet', 'PCPVT', - 'SVT', 'STDCNet', 'STDCContextPathNet' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/bisenetv1.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/bisenetv1.py deleted file mode 100644 index 4beb7b39..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/bisenetv1.py +++ /dev/null @@ -1,332 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import BACKBONES, build_backbone - - -class SpatialPath(BaseModule): - """Spatial Path to preserve the spatial size of the original input image - and encode affluent spatial information. - - Args: - in_channels(int): The number of channels of input - image. Default: 3. - num_channels (Tuple[int]): The number of channels of - each layers in Spatial Path. - Default: (64, 64, 64, 128). - Returns: - x (torch.Tensor): Feature map for Feature Fusion Module. - """ - - def __init__(self, - in_channels=3, - num_channels=(64, 64, 64, 128), - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(SpatialPath, self).__init__(init_cfg=init_cfg) - assert len(num_channels) == 4, 'Length of input channels \ - of Spatial Path must be 4!' - - self.layers = [] - for i in range(len(num_channels)): - layer_name = f'layer{i + 1}' - self.layers.append(layer_name) - if i == 0: - self.add_module( - layer_name, - ConvModule( - in_channels=in_channels, - out_channels=num_channels[i], - kernel_size=7, - stride=2, - padding=3, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - elif i == len(num_channels) - 1: - self.add_module( - layer_name, - ConvModule( - in_channels=num_channels[i - 1], - out_channels=num_channels[i], - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - else: - self.add_module( - layer_name, - ConvModule( - in_channels=num_channels[i - 1], - out_channels=num_channels[i], - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, x): - for i, layer_name in enumerate(self.layers): - layer_stage = getattr(self, layer_name) - x = layer_stage(x) - return x - - -class AttentionRefinementModule(BaseModule): - """Attention Refinement Module (ARM) to refine the features of each stage. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - Returns: - x_out (torch.Tensor): Feature map of Attention Refinement Module. - """ - - def __init__(self, - in_channels, - out_channel, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(AttentionRefinementModule, self).__init__(init_cfg=init_cfg) - self.conv_layer = ConvModule( - in_channels=in_channels, - out_channels=out_channel, - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.atten_conv_layer = nn.Sequential( - nn.AdaptiveAvgPool2d((1, 1)), - ConvModule( - in_channels=out_channel, - out_channels=out_channel, - kernel_size=1, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None), nn.Sigmoid()) - - def forward(self, x): - x = self.conv_layer(x) - x_atten = self.atten_conv_layer(x) - x_out = x * x_atten - return x_out - - -class ContextPath(BaseModule): - """Context Path to provide sufficient receptive field. - - Args: - backbone_cfg:(dict): Config of backbone of - Context Path. - context_channels (Tuple[int]): The number of channel numbers - of various modules in Context Path. - Default: (128, 256, 512). - align_corners (bool, optional): The align_corners argument of - resize operation. Default: False. - Returns: - x_16_up, x_32_up (torch.Tensor, torch.Tensor): Two feature maps - undergoing upsampling from 1/16 and 1/32 downsampling - feature maps. 
These two feature maps are used for Feature - Fusion Module and Auxiliary Head. - """ - - def __init__(self, - backbone_cfg, - context_channels=(128, 256, 512), - align_corners=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(ContextPath, self).__init__(init_cfg=init_cfg) - assert len(context_channels) == 3, 'Length of input channels \ - of Context Path must be 3!' - - self.backbone = build_backbone(backbone_cfg) - - self.align_corners = align_corners - self.arm16 = AttentionRefinementModule(context_channels[1], - context_channels[0]) - self.arm32 = AttentionRefinementModule(context_channels[2], - context_channels[0]) - self.conv_head32 = ConvModule( - in_channels=context_channels[0], - out_channels=context_channels[0], - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.conv_head16 = ConvModule( - in_channels=context_channels[0], - out_channels=context_channels[0], - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.gap_conv = nn.Sequential( - nn.AdaptiveAvgPool2d((1, 1)), - ConvModule( - in_channels=context_channels[2], - out_channels=context_channels[0], - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, x): - x_4, x_8, x_16, x_32 = self.backbone(x) - x_gap = self.gap_conv(x_32) - - x_32_arm = self.arm32(x_32) - x_32_sum = x_32_arm + x_gap - x_32_up = resize(input=x_32_sum, size=x_16.shape[2:], mode='nearest') - x_32_up = self.conv_head32(x_32_up) - - x_16_arm = self.arm16(x_16) - x_16_sum = x_16_arm + x_32_up - x_16_up = resize(input=x_16_sum, size=x_8.shape[2:], mode='nearest') - x_16_up = self.conv_head16(x_16_up) - - return x_16_up, x_32_up - - -class FeatureFusionModule(BaseModule): - """Feature Fusion Module to fuse low level output feature of Spatial Path - and high level output feature of Context Path. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - Returns: - x_out (torch.Tensor): Feature map of Feature Fusion Module. - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(FeatureFusionModule, self).__init__(init_cfg=init_cfg) - self.conv1 = ConvModule( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.gap = nn.AdaptiveAvgPool2d((1, 1)) - self.conv_atten = nn.Sequential( - ConvModule( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), nn.Sigmoid()) - - def forward(self, x_sp, x_cp): - x_concat = torch.cat([x_sp, x_cp], dim=1) - x_fuse = self.conv1(x_concat) - x_atten = self.gap(x_fuse) - # Note: No BN and more 1x1 conv in paper. - x_atten = self.conv_atten(x_atten) - x_atten = x_fuse * x_atten - x_out = x_atten + x_fuse - return x_out - - -@BACKBONES.register_module() -class BiSeNetV1(BaseModule): - """BiSeNetV1 backbone. - - This backbone is the implementation of `BiSeNet: Bilateral - Segmentation Network for Real-time Semantic - Segmentation `_. - - Args: - backbone_cfg:(dict): Config of backbone of - Context Path. - in_channels (int): The number of channels of input - image. Default: 3. 
- spatial_channels (Tuple[int]): Size of channel numbers of - various layers in Spatial Path. - Default: (64, 64, 64, 128). - context_channels (Tuple[int]): Size of channel numbers of - various modules in Context Path. - Default: (128, 256, 512). - out_indices (Tuple[int] | int, optional): Output from which stages. - Default: (0, 1, 2). - align_corners (bool, optional): The align_corners argument of - resize operation in Bilateral Guided Aggregation Layer. - Default: False. - out_channels(int): The number of channels of output. - It must be the same with `in_channels` of decode_head. - Default: 256. - """ - - def __init__(self, - backbone_cfg, - in_channels=3, - spatial_channels=(64, 64, 64, 128), - context_channels=(128, 256, 512), - out_indices=(0, 1, 2), - align_corners=False, - out_channels=256, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='ReLU'), - init_cfg=None): - - super(BiSeNetV1, self).__init__(init_cfg=init_cfg) - assert len(spatial_channels) == 4, 'Length of input channels \ - of Spatial Path must be 4!' - - assert len(context_channels) == 3, 'Length of input channels \ - of Context Path must be 3!' - - self.out_indices = out_indices - self.align_corners = align_corners - self.context_path = ContextPath(backbone_cfg, context_channels, - self.align_corners) - self.spatial_path = SpatialPath(in_channels, spatial_channels) - self.ffm = FeatureFusionModule(context_channels[1], out_channels) - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - def forward(self, x): - # stole refactoring code from Coin Cheung, thanks - x_context8, x_context16 = self.context_path(x) - x_spatial = self.spatial_path(x) - x_fuse = self.ffm(x_spatial, x_context8) - - outs = [x_fuse, x_context8, x_context16] - outs = [outs[i] for i in self.out_indices] - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/bisenetv2.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/bisenetv2.py deleted file mode 100644 index d908b321..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/bisenetv2.py +++ /dev/null @@ -1,622 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import (ConvModule, DepthwiseSeparableConvModule, - build_activation_layer, build_norm_layer) -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import BACKBONES - - -class DetailBranch(BaseModule): - """Detail Branch with wide channels and shallow layers to capture low-level - details and generate high-resolution feature representation. - - Args: - detail_channels (Tuple[int]): Size of channel numbers of each stage - in Detail Branch, in paper it has 3 stages. - Default: (64, 64, 128). - in_channels (int): Number of channels of input image. Default: 3. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - x (torch.Tensor): Feature map of Detail Branch. 
- """ - - def __init__(self, - detail_channels=(64, 64, 128), - in_channels=3, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(DetailBranch, self).__init__(init_cfg=init_cfg) - detail_branch = [] - for i in range(len(detail_channels)): - if i == 0: - detail_branch.append( - nn.Sequential( - ConvModule( - in_channels=in_channels, - out_channels=detail_channels[i], - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ConvModule( - in_channels=detail_channels[i], - out_channels=detail_channels[i], - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg))) - else: - detail_branch.append( - nn.Sequential( - ConvModule( - in_channels=detail_channels[i - 1], - out_channels=detail_channels[i], - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ConvModule( - in_channels=detail_channels[i], - out_channels=detail_channels[i], - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ConvModule( - in_channels=detail_channels[i], - out_channels=detail_channels[i], - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg))) - self.detail_branch = nn.ModuleList(detail_branch) - - def forward(self, x): - for stage in self.detail_branch: - x = stage(x) - return x - - -class StemBlock(BaseModule): - """Stem Block at the beginning of Semantic Branch. - - Args: - in_channels (int): Number of input channels. - Default: 3. - out_channels (int): Number of output channels. - Default: 16. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - x (torch.Tensor): First feature map in Semantic Branch. - """ - - def __init__(self, - in_channels=3, - out_channels=16, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(StemBlock, self).__init__(init_cfg=init_cfg) - - self.conv_first = ConvModule( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.convs = nn.Sequential( - ConvModule( - in_channels=out_channels, - out_channels=out_channels // 2, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ConvModule( - in_channels=out_channels // 2, - out_channels=out_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.pool = nn.MaxPool2d( - kernel_size=3, stride=2, padding=1, ceil_mode=False) - self.fuse_last = ConvModule( - in_channels=out_channels * 2, - out_channels=out_channels, - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - x = self.conv_first(x) - x_left = self.convs(x) - x_right = self.pool(x) - x = self.fuse_last(torch.cat([x_left, x_right], dim=1)) - return x - - -class GELayer(BaseModule): - """Gather-and-Expansion Layer. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - exp_ratio (int): Expansion ratio for middle channels. - Default: 6. 
- stride (int): Stride of GELayer. Default: 1 - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - x (torch.Tensor): Intermediate feature map in - Semantic Branch. - """ - - def __init__(self, - in_channels, - out_channels, - exp_ratio=6, - stride=1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(GELayer, self).__init__(init_cfg=init_cfg) - mid_channel = in_channels * exp_ratio - self.conv1 = ConvModule( - in_channels=in_channels, - out_channels=in_channels, - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - if stride == 1: - self.dwconv = nn.Sequential( - # ReLU in ConvModule not shown in paper - ConvModule( - in_channels=in_channels, - out_channels=mid_channel, - kernel_size=3, - stride=stride, - padding=1, - groups=in_channels, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.shortcut = None - else: - self.dwconv = nn.Sequential( - ConvModule( - in_channels=in_channels, - out_channels=mid_channel, - kernel_size=3, - stride=stride, - padding=1, - groups=in_channels, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None), - # ReLU in ConvModule not shown in paper - ConvModule( - in_channels=mid_channel, - out_channels=mid_channel, - kernel_size=3, - stride=1, - padding=1, - groups=mid_channel, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ) - self.shortcut = nn.Sequential( - DepthwiseSeparableConvModule( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - stride=stride, - padding=1, - dw_norm_cfg=norm_cfg, - dw_act_cfg=None, - pw_norm_cfg=norm_cfg, - pw_act_cfg=None, - )) - - self.conv2 = nn.Sequential( - ConvModule( - in_channels=mid_channel, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None, - )) - - self.act = build_activation_layer(act_cfg) - - def forward(self, x): - identity = x - x = self.conv1(x) - x = self.dwconv(x) - x = self.conv2(x) - if self.shortcut is not None: - shortcut = self.shortcut(identity) - x = x + shortcut - else: - x = x + identity - x = self.act(x) - return x - - -class CEBlock(BaseModule): - """Context Embedding Block for large receptive filed in Semantic Branch. - - Args: - in_channels (int): Number of input channels. - Default: 3. - out_channels (int): Number of output channels. - Default: 16. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - x (torch.Tensor): Last feature map in Semantic Branch. 
- """ - - def __init__(self, - in_channels=3, - out_channels=16, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(CEBlock, self).__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - self.gap = nn.Sequential( - nn.AdaptiveAvgPool2d((1, 1)), - build_norm_layer(norm_cfg, self.in_channels)[1]) - self.conv_gap = ConvModule( - in_channels=self.in_channels, - out_channels=self.out_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - # Note: in paper here is naive conv2d, no bn-relu - self.conv_last = ConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - identity = x - x = self.gap(x) - x = self.conv_gap(x) - x = identity + x - x = self.conv_last(x) - return x - - -class SemanticBranch(BaseModule): - """Semantic Branch which is lightweight with narrow channels and deep - layers to obtain high-level semantic context. - - Args: - semantic_channels(Tuple[int]): Size of channel numbers of - various stages in Semantic Branch. - Default: (16, 32, 64, 128). - in_channels (int): Number of channels of input image. Default: 3. - exp_ratio (int): Expansion ratio for middle channels. - Default: 6. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - semantic_outs (List[torch.Tensor]): List of several feature maps - for auxiliary heads (Booster) and Bilateral - Guided Aggregation Layer. - """ - - def __init__(self, - semantic_channels=(16, 32, 64, 128), - in_channels=3, - exp_ratio=6, - init_cfg=None): - super(SemanticBranch, self).__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.semantic_channels = semantic_channels - self.semantic_stages = [] - for i in range(len(semantic_channels)): - stage_name = f'stage{i + 1}' - self.semantic_stages.append(stage_name) - if i == 0: - self.add_module( - stage_name, - StemBlock(self.in_channels, semantic_channels[i])) - elif i == (len(semantic_channels) - 1): - self.add_module( - stage_name, - nn.Sequential( - GELayer(semantic_channels[i - 1], semantic_channels[i], - exp_ratio, 2), - GELayer(semantic_channels[i], semantic_channels[i], - exp_ratio, 1), - GELayer(semantic_channels[i], semantic_channels[i], - exp_ratio, 1), - GELayer(semantic_channels[i], semantic_channels[i], - exp_ratio, 1))) - else: - self.add_module( - stage_name, - nn.Sequential( - GELayer(semantic_channels[i - 1], semantic_channels[i], - exp_ratio, 2), - GELayer(semantic_channels[i], semantic_channels[i], - exp_ratio, 1))) - - self.add_module(f'stage{len(semantic_channels)}_CEBlock', - CEBlock(semantic_channels[-1], semantic_channels[-1])) - self.semantic_stages.append(f'stage{len(semantic_channels)}_CEBlock') - - def forward(self, x): - semantic_outs = [] - for stage_name in self.semantic_stages: - semantic_stage = getattr(self, stage_name) - x = semantic_stage(x) - semantic_outs.append(x) - return semantic_outs - - -class BGALayer(BaseModule): - """Bilateral Guided Aggregation Layer to fuse the complementary information - from both Detail Branch and Semantic Branch. - - Args: - out_channels (int): Number of output channels. - Default: 128. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - conv_cfg (dict | None): Config of conv layers. - Default: None. 
- norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - Returns: - output (torch.Tensor): Output feature map for Segment heads. - """ - - def __init__(self, - out_channels=128, - align_corners=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(BGALayer, self).__init__(init_cfg=init_cfg) - self.out_channels = out_channels - self.align_corners = align_corners - self.detail_dwconv = nn.Sequential( - DepthwiseSeparableConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - padding=1, - dw_norm_cfg=norm_cfg, - dw_act_cfg=None, - pw_norm_cfg=None, - pw_act_cfg=None, - )) - self.detail_down = nn.Sequential( - ConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None), - nn.AvgPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=False)) - self.semantic_conv = nn.Sequential( - ConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None)) - self.semantic_dwconv = nn.Sequential( - DepthwiseSeparableConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - padding=1, - dw_norm_cfg=norm_cfg, - dw_act_cfg=None, - pw_norm_cfg=None, - pw_act_cfg=None, - )) - self.conv = ConvModule( - in_channels=self.out_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - padding=1, - inplace=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - ) - - def forward(self, x_d, x_s): - detail_dwconv = self.detail_dwconv(x_d) - detail_down = self.detail_down(x_d) - semantic_conv = self.semantic_conv(x_s) - semantic_dwconv = self.semantic_dwconv(x_s) - semantic_conv = resize( - input=semantic_conv, - size=detail_dwconv.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - fuse_1 = detail_dwconv * torch.sigmoid(semantic_conv) - fuse_2 = detail_down * torch.sigmoid(semantic_dwconv) - fuse_2 = resize( - input=fuse_2, - size=fuse_1.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - output = self.conv(fuse_1 + fuse_2) - return output - - -@BACKBONES.register_module() -class BiSeNetV2(BaseModule): - """BiSeNetV2: Bilateral Network with Guided Aggregation for - Real-time Semantic Segmentation. - - This backbone is the implementation of - `BiSeNetV2 `_. - - Args: - in_channels (int): Number of channel of input image. Default: 3. - detail_channels (Tuple[int], optional): Channels of each stage - in Detail Branch. Default: (64, 64, 128). - semantic_channels (Tuple[int], optional): Channels of each stage - in Semantic Branch. Default: (16, 32, 64, 128). - See Table 1 and Figure 3 of paper for more details. - semantic_expansion_ratio (int, optional): The expansion factor - expanding channel number of middle channels in Semantic Branch. - Default: 6. - bga_channels (int, optional): Number of middle channels in - Bilateral Guided Aggregation Layer. Default: 128. - out_indices (Tuple[int] | int, optional): Output from which stages. - Default: (0, 1, 2, 3, 4). 
- align_corners (bool, optional): The align_corners argument of - resize operation in Bilateral Guided Aggregation Layer. - Default: False. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels=3, - detail_channels=(64, 64, 128), - semantic_channels=(16, 32, 64, 128), - semantic_expansion_ratio=6, - bga_channels=128, - out_indices=(0, 1, 2, 3, 4), - align_corners=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - if init_cfg is None: - init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) - ] - super(BiSeNetV2, self).__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_indices = out_indices - self.detail_channels = detail_channels - self.semantic_channels = semantic_channels - self.semantic_expansion_ratio = semantic_expansion_ratio - self.bga_channels = bga_channels - self.align_corners = align_corners - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.detail = DetailBranch(self.detail_channels, self.in_channels) - self.semantic = SemanticBranch(self.semantic_channels, - self.in_channels, - self.semantic_expansion_ratio) - self.bga = BGALayer(self.bga_channels, self.align_corners) - - def forward(self, x): - # stole refactoring code from Coin Cheung, thanks - x_detail = self.detail(x) - x_semantic_lst = self.semantic(x) - x_head = self.bga(x_detail, x_semantic_lst[-1]) - outs = [x_head] + x_semantic_lst[:-1] - outs = [outs[i] for i in self.out_indices] - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/cgnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/cgnet.py deleted file mode 100644 index 168194c1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/cgnet.py +++ /dev/null @@ -1,372 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import ConvModule, build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule -from mmcv.utils.parrots_wrapper import _BatchNorm - -from ..builder import BACKBONES - - -class GlobalContextExtractor(nn.Module): - """Global Context Extractor for CGNet. - - This class is employed to refine the joint feature of both local feature - and surrounding context. - - Args: - channel (int): Number of input feature channels. - reduction (int): Reductions for global context extractor. Default: 16. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. 
- """ - - def __init__(self, channel, reduction=16, with_cp=False): - super(GlobalContextExtractor, self).__init__() - self.channel = channel - self.reduction = reduction - assert reduction >= 1 and channel >= reduction - self.with_cp = with_cp - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Sequential( - nn.Linear(channel, channel // reduction), nn.ReLU(inplace=True), - nn.Linear(channel // reduction, channel), nn.Sigmoid()) - - def forward(self, x): - - def _inner_forward(x): - num_batch, num_channel = x.size()[:2] - y = self.avg_pool(x).view(num_batch, num_channel) - y = self.fc(y).view(num_batch, num_channel, 1, 1) - return x * y - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class ContextGuidedBlock(nn.Module): - """Context Guided Block for CGNet. - - This class consists of four components: local feature extractor, - surrounding feature extractor, joint feature extractor and global - context extractor. - - Args: - in_channels (int): Number of input feature channels. - out_channels (int): Number of output feature channels. - dilation (int): Dilation rate for surrounding context extractor. - Default: 2. - reduction (int): Reduction for global context extractor. Default: 16. - skip_connect (bool): Add input to output or not. Default: True. - downsample (bool): Downsample the input to 1/2 or not. Default: False. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN', requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='PReLU'). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. 
- """ - - def __init__(self, - in_channels, - out_channels, - dilation=2, - reduction=16, - skip_connect=True, - downsample=False, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='PReLU'), - with_cp=False): - super(ContextGuidedBlock, self).__init__() - self.with_cp = with_cp - self.downsample = downsample - - channels = out_channels if downsample else out_channels // 2 - if 'type' in act_cfg and act_cfg['type'] == 'PReLU': - act_cfg['num_parameters'] = channels - kernel_size = 3 if downsample else 1 - stride = 2 if downsample else 1 - padding = (kernel_size - 1) // 2 - - self.conv1x1 = ConvModule( - in_channels, - channels, - kernel_size, - stride, - padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - self.f_loc = build_conv_layer( - conv_cfg, - channels, - channels, - kernel_size=3, - padding=1, - groups=channels, - bias=False) - self.f_sur = build_conv_layer( - conv_cfg, - channels, - channels, - kernel_size=3, - padding=dilation, - groups=channels, - dilation=dilation, - bias=False) - - self.bn = build_norm_layer(norm_cfg, 2 * channels)[1] - self.activate = nn.PReLU(2 * channels) - - if downsample: - self.bottleneck = build_conv_layer( - conv_cfg, - 2 * channels, - out_channels, - kernel_size=1, - bias=False) - - self.skip_connect = skip_connect and not downsample - self.f_glo = GlobalContextExtractor(out_channels, reduction, with_cp) - - def forward(self, x): - - def _inner_forward(x): - out = self.conv1x1(x) - loc = self.f_loc(out) - sur = self.f_sur(out) - - joi_feat = torch.cat([loc, sur], 1) # the joint feature - joi_feat = self.bn(joi_feat) - joi_feat = self.activate(joi_feat) - if self.downsample: - joi_feat = self.bottleneck(joi_feat) # channel = out_channels - # f_glo is employed to refine the joint feature - out = self.f_glo(joi_feat) - - if self.skip_connect: - return x + out - else: - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class InputInjection(nn.Module): - """Downsampling module for CGNet.""" - - def __init__(self, num_downsampling): - super(InputInjection, self).__init__() - self.pool = nn.ModuleList() - for i in range(num_downsampling): - self.pool.append(nn.AvgPool2d(3, stride=2, padding=1)) - - def forward(self, x): - for pool in self.pool: - x = pool(x) - return x - - -@BACKBONES.register_module() -class CGNet(BaseModule): - """CGNet backbone. - - This backbone is the implementation of `A Light-weight Context Guided - Network for Semantic Segmentation `_. - - Args: - in_channels (int): Number of input image channels. Normally 3. - num_channels (tuple[int]): Numbers of feature channels at each stages. - Default: (32, 64, 128). - num_blocks (tuple[int]): Numbers of CG blocks at stage 1 and stage 2. - Default: (3, 21). - dilations (tuple[int]): Dilation rate for surrounding context - extractors at stage 1 and stage 2. Default: (2, 4). - reductions (tuple[int]): Reductions for global context extractors at - stage 1 and stage 2. Default: (8, 16). - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN', requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='PReLU'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. 
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels=3, - num_channels=(32, 64, 128), - num_blocks=(3, 21), - dilations=(2, 4), - reductions=(8, 16), - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='PReLU'), - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - - super(CGNet, self).__init__(init_cfg) - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer=['Conv2d', 'Linear']), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']), - dict(type='Constant', val=0, layer='PReLU') - ] - else: - raise TypeError('pretrained must be a str or None') - - self.in_channels = in_channels - self.num_channels = num_channels - assert isinstance(self.num_channels, tuple) and len( - self.num_channels) == 3 - self.num_blocks = num_blocks - assert isinstance(self.num_blocks, tuple) and len(self.num_blocks) == 2 - self.dilations = dilations - assert isinstance(self.dilations, tuple) and len(self.dilations) == 2 - self.reductions = reductions - assert isinstance(self.reductions, tuple) and len(self.reductions) == 2 - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - if 'type' in self.act_cfg and self.act_cfg['type'] == 'PReLU': - self.act_cfg['num_parameters'] = num_channels[0] - self.norm_eval = norm_eval - self.with_cp = with_cp - - cur_channels = in_channels - self.stem = nn.ModuleList() - for i in range(3): - self.stem.append( - ConvModule( - cur_channels, - num_channels[0], - 3, - 2 if i == 0 else 1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - cur_channels = num_channels[0] - - self.inject_2x = InputInjection(1) # down-sample for Input, factor=2 - self.inject_4x = InputInjection(2) # down-sample for Input, factor=4 - - cur_channels += in_channels - self.norm_prelu_0 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - # stage 1 - self.level1 = nn.ModuleList() - for i in range(num_blocks[0]): - self.level1.append( - ContextGuidedBlock( - cur_channels if i == 0 else num_channels[1], - num_channels[1], - dilations[0], - reductions[0], - downsample=(i == 0), - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - with_cp=with_cp)) # CG block - - cur_channels = 2 * num_channels[1] + in_channels - self.norm_prelu_1 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - # stage 2 - self.level2 = nn.ModuleList() - for i in range(num_blocks[1]): - self.level2.append( - ContextGuidedBlock( - cur_channels if i == 0 else num_channels[2], - num_channels[2], - dilations[1], - reductions[1], - downsample=(i == 0), - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - with_cp=with_cp)) # CG block - - cur_channels = 2 * num_channels[2] - self.norm_prelu_2 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - def forward(self, x): - 
output = [] - - # stage 0 - inp_2x = self.inject_2x(x) - inp_4x = self.inject_4x(x) - for layer in self.stem: - x = layer(x) - x = self.norm_prelu_0(torch.cat([x, inp_2x], 1)) - output.append(x) - - # stage 1 - for i, layer in enumerate(self.level1): - x = layer(x) - if i == 0: - down1 = x - x = self.norm_prelu_1(torch.cat([x, down1, inp_4x], 1)) - output.append(x) - - # stage 2 - for i, layer in enumerate(self.level2): - x = layer(x) - if i == 0: - down2 = x - x = self.norm_prelu_2(torch.cat([down2, x], 1)) - output.append(x) - - return output - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(CGNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/erfnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/erfnet.py deleted file mode 100644 index 8921c18f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/erfnet.py +++ /dev/null @@ -1,329 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import build_activation_layer, build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import BACKBONES - - -class DownsamplerBlock(BaseModule): - """Downsampler block of ERFNet. - - This module is a little different from basical ConvModule. - The features from Conv and MaxPool layers are - concatenated before BatchNorm. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', eps=1e-3), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(DownsamplerBlock, self).__init__(init_cfg=init_cfg) - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.conv = build_conv_layer( - self.conv_cfg, - in_channels, - out_channels - in_channels, - kernel_size=3, - stride=2, - padding=1) - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - self.bn = build_norm_layer(self.norm_cfg, out_channels)[1] - self.act = build_activation_layer(self.act_cfg) - - def forward(self, input): - conv_out = self.conv(input) - pool_out = self.pool(input) - pool_out = resize( - input=pool_out, - size=conv_out.size()[2:], - mode='bilinear', - align_corners=False) - output = torch.cat([conv_out, pool_out], 1) - output = self.bn(output) - output = self.act(output) - return output - - -class NonBottleneck1d(BaseModule): - """Non-bottleneck block of ERFNet. - - Args: - channels (int): Number of channels in Non-bottleneck block. - drop_rate (float): Probability of an element to be zeroed. - Default 0. - dilation (int): Dilation rate for last two conv layers. - Default 1. - num_conv_layer (int): Number of 3x1 and 1x3 convolution layers. - Default 2. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). 
- act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - channels, - drop_rate=0, - dilation=1, - num_conv_layer=2, - conv_cfg=None, - norm_cfg=dict(type='BN', eps=1e-3), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(NonBottleneck1d, self).__init__(init_cfg=init_cfg) - - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.act = build_activation_layer(self.act_cfg) - - self.convs_layers = nn.ModuleList() - for conv_layer in range(num_conv_layer): - first_conv_padding = (1, 0) if conv_layer == 0 else (dilation, 0) - first_conv_dilation = 1 if conv_layer == 0 else (dilation, 1) - second_conv_padding = (0, 1) if conv_layer == 0 else (0, dilation) - second_conv_dilation = 1 if conv_layer == 0 else (1, dilation) - - self.convs_layers.append( - build_conv_layer( - self.conv_cfg, - channels, - channels, - kernel_size=(3, 1), - stride=1, - padding=first_conv_padding, - bias=True, - dilation=first_conv_dilation)) - self.convs_layers.append(self.act) - self.convs_layers.append( - build_conv_layer( - self.conv_cfg, - channels, - channels, - kernel_size=(1, 3), - stride=1, - padding=second_conv_padding, - bias=True, - dilation=second_conv_dilation)) - self.convs_layers.append( - build_norm_layer(self.norm_cfg, channels)[1]) - if conv_layer == 0: - self.convs_layers.append(self.act) - else: - self.convs_layers.append(nn.Dropout(p=drop_rate)) - - def forward(self, input): - output = input - for conv in self.convs_layers: - output = conv(output) - output = self.act(output + input) - return output - - -class UpsamplerBlock(BaseModule): - """Upsampler block of ERFNet. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', eps=1e-3), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(UpsamplerBlock, self).__init__(init_cfg=init_cfg) - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.conv = nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - stride=2, - padding=1, - output_padding=1, - bias=True) - self.bn = build_norm_layer(self.norm_cfg, out_channels)[1] - self.act = build_activation_layer(self.act_cfg) - - def forward(self, input): - output = self.conv(input) - output = self.bn(output) - output = self.act(output) - return output - - -@BACKBONES.register_module() -class ERFNet(BaseModule): - """ERFNet backbone. - - This backbone is the implementation of `ERFNet: Efficient Residual - Factorized ConvNet for Real-time SemanticSegmentation - `_. - - Args: - in_channels (int): The number of channels of input - image. Default: 3. - enc_downsample_channels (Tuple[int]): Size of channel - numbers of various Downsampler block in encoder. - Default: (16, 64, 128). - enc_stage_non_bottlenecks (Tuple[int]): Number of stages of - Non-bottleneck block in encoder. - Default: (5, 8). - enc_non_bottleneck_dilations (Tuple[int]): Dilation rate of each - stage of Non-bottleneck block of encoder. 
- Default: (2, 4, 8, 16). - enc_non_bottleneck_channels (Tuple[int]): Size of channel - numbers of various Non-bottleneck block in encoder. - Default: (64, 128). - dec_upsample_channels (Tuple[int]): Size of channel numbers of - various Deconvolution block in decoder. - Default: (64, 16). - dec_stages_non_bottleneck (Tuple[int]): Number of stages of - Non-bottleneck block in decoder. - Default: (2, 2). - dec_non_bottleneck_channels (Tuple[int]): Size of channel - numbers of various Non-bottleneck block in decoder. - Default: (64, 16). - drop_rate (float): Probability of an element to be zeroed. - Default 0.1. - """ - - def __init__(self, - in_channels=3, - enc_downsample_channels=(16, 64, 128), - enc_stage_non_bottlenecks=(5, 8), - enc_non_bottleneck_dilations=(2, 4, 8, 16), - enc_non_bottleneck_channels=(64, 128), - dec_upsample_channels=(64, 16), - dec_stages_non_bottleneck=(2, 2), - dec_non_bottleneck_channels=(64, 16), - dropout_ratio=0.1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='ReLU'), - init_cfg=None): - - super(ERFNet, self).__init__(init_cfg=init_cfg) - assert len(enc_downsample_channels) \ - == len(dec_upsample_channels)+1, 'Number of downsample\ - block of encoder does not \ - match number of upsample block of decoder!' - assert len(enc_downsample_channels) \ - == len(enc_stage_non_bottlenecks)+1, 'Number of \ - downsample block of encoder does not match \ - number of Non-bottleneck block of encoder!' - assert len(enc_downsample_channels) \ - == len(enc_non_bottleneck_channels)+1, 'Number of \ - downsample block of encoder does not match \ - number of channels of Non-bottleneck block of encoder!' - assert enc_stage_non_bottlenecks[-1] \ - % len(enc_non_bottleneck_dilations) == 0, 'Number of \ - Non-bottleneck block of encoder does not match \ - number of Non-bottleneck block of encoder!' - assert len(dec_upsample_channels) \ - == len(dec_stages_non_bottleneck), 'Number of \ - upsample block of decoder does not match \ - number of Non-bottleneck block of decoder!' - assert len(dec_stages_non_bottleneck) \ - == len(dec_non_bottleneck_channels), 'Number of \ - Non-bottleneck block of decoder does not match \ - number of channels of Non-bottleneck block of decoder!' - - self.in_channels = in_channels - self.enc_downsample_channels = enc_downsample_channels - self.enc_stage_non_bottlenecks = enc_stage_non_bottlenecks - self.enc_non_bottleneck_dilations = enc_non_bottleneck_dilations - self.enc_non_bottleneck_channels = enc_non_bottleneck_channels - self.dec_upsample_channels = dec_upsample_channels - self.dec_stages_non_bottleneck = dec_stages_non_bottleneck - self.dec_non_bottleneck_channels = dec_non_bottleneck_channels - self.dropout_ratio = dropout_ratio - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - - self.encoder.append( - DownsamplerBlock(self.in_channels, enc_downsample_channels[0])) - - for i in range(len(enc_downsample_channels) - 1): - self.encoder.append( - DownsamplerBlock(enc_downsample_channels[i], - enc_downsample_channels[i + 1])) - # Last part of encoder is some dilated NonBottleneck1d blocks. 
- if i == len(enc_downsample_channels) - 2: - iteration_times = int(enc_stage_non_bottlenecks[-1] / - len(enc_non_bottleneck_dilations)) - for j in range(iteration_times): - for k in range(len(enc_non_bottleneck_dilations)): - self.encoder.append( - NonBottleneck1d(enc_downsample_channels[-1], - self.dropout_ratio, - enc_non_bottleneck_dilations[k])) - else: - for j in range(enc_stage_non_bottlenecks[i]): - self.encoder.append( - NonBottleneck1d(enc_downsample_channels[i + 1], - self.dropout_ratio)) - - for i in range(len(dec_upsample_channels)): - if i == 0: - self.decoder.append( - UpsamplerBlock(enc_downsample_channels[-1], - dec_non_bottleneck_channels[i])) - else: - self.decoder.append( - UpsamplerBlock(dec_non_bottleneck_channels[i - 1], - dec_non_bottleneck_channels[i])) - for j in range(dec_stages_non_bottleneck[i]): - self.decoder.append( - NonBottleneck1d(dec_non_bottleneck_channels[i])) - - def forward(self, x): - for enc in self.encoder: - x = enc(x) - for dec in self.decoder: - x = dec(x) - return [x] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/fast_scnn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/fast_scnn.py deleted file mode 100644 index cbfbcaf4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/fast_scnn.py +++ /dev/null @@ -1,409 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule - -from mmseg.models.decode_heads.psp_head import PPM -from mmseg.ops import resize -from ..builder import BACKBONES -from ..utils import InvertedResidual - - -class LearningToDownsample(nn.Module): - """Learning to downsample module. - - Args: - in_channels (int): Number of input channels. - dw_channels (tuple[int]): Number of output channels of the first and - the second depthwise conv (dwconv) layers. - out_channels (int): Number of output channels of the whole - 'learning to downsample' module. - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - act_cfg (dict): Config of activation layers. Default: - dict(type='ReLU') - dw_act_cfg (dict): In DepthwiseSeparableConvModule, activation config - of depthwise ConvModule. If it is 'default', it will be the same - as `act_cfg`. Default: None. 
- """ - - def __init__(self, - in_channels, - dw_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - dw_act_cfg=None): - super(LearningToDownsample, self).__init__() - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.dw_act_cfg = dw_act_cfg - dw_channels1 = dw_channels[0] - dw_channels2 = dw_channels[1] - - self.conv = ConvModule( - in_channels, - dw_channels1, - 3, - stride=2, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.dsconv1 = DepthwiseSeparableConvModule( - dw_channels1, - dw_channels2, - kernel_size=3, - stride=2, - padding=1, - norm_cfg=self.norm_cfg, - dw_act_cfg=self.dw_act_cfg) - - self.dsconv2 = DepthwiseSeparableConvModule( - dw_channels2, - out_channels, - kernel_size=3, - stride=2, - padding=1, - norm_cfg=self.norm_cfg, - dw_act_cfg=self.dw_act_cfg) - - def forward(self, x): - x = self.conv(x) - x = self.dsconv1(x) - x = self.dsconv2(x) - return x - - -class GlobalFeatureExtractor(nn.Module): - """Global feature extractor module. - - Args: - in_channels (int): Number of input channels of the GFE module. - Default: 64 - block_channels (tuple[int]): Tuple of ints. Each int specifies the - number of output channels of each Inverted Residual module. - Default: (64, 96, 128) - out_channels(int): Number of output channels of the GFE module. - Default: 128 - expand_ratio (int): Adjusts number of channels of the hidden layer - in InvertedResidual by this amount. - Default: 6 - num_blocks (tuple[int]): Tuple of ints. Each int specifies the - number of times each Inverted Residual module is repeated. - The repeated Inverted Residual modules are called a 'group'. - Default: (3, 3, 3) - strides (tuple[int]): Tuple of ints. Each int specifies - the downsampling factor of each 'group'. - Default: (2, 2, 1) - pool_scales (tuple[int]): Tuple of ints. Each int specifies - the parameter required in 'global average pooling' within PPM. - Default: (1, 2, 3, 6) - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - act_cfg (dict): Config of activation layers. Default: - dict(type='ReLU') - align_corners (bool): align_corners argument of F.interpolate. 
- Default: False - """ - - def __init__(self, - in_channels=64, - block_channels=(64, 96, 128), - out_channels=128, - expand_ratio=6, - num_blocks=(3, 3, 3), - strides=(2, 2, 1), - pool_scales=(1, 2, 3, 6), - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False): - super(GlobalFeatureExtractor, self).__init__() - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - assert len(block_channels) == len(num_blocks) == 3 - self.bottleneck1 = self._make_layer(in_channels, block_channels[0], - num_blocks[0], strides[0], - expand_ratio) - self.bottleneck2 = self._make_layer(block_channels[0], - block_channels[1], num_blocks[1], - strides[1], expand_ratio) - self.bottleneck3 = self._make_layer(block_channels[1], - block_channels[2], num_blocks[2], - strides[2], expand_ratio) - self.ppm = PPM( - pool_scales, - block_channels[2], - block_channels[2] // 4, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=align_corners) - - self.out = ConvModule( - block_channels[2] * 2, - out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def _make_layer(self, - in_channels, - out_channels, - blocks, - stride=1, - expand_ratio=6): - layers = [ - InvertedResidual( - in_channels, - out_channels, - stride, - expand_ratio, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - ] - for i in range(1, blocks): - layers.append( - InvertedResidual( - out_channels, - out_channels, - 1, - expand_ratio, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - return nn.Sequential(*layers) - - def forward(self, x): - x = self.bottleneck1(x) - x = self.bottleneck2(x) - x = self.bottleneck3(x) - x = torch.cat([x, *self.ppm(x)], dim=1) - x = self.out(x) - return x - - -class FeatureFusionModule(nn.Module): - """Feature fusion module. - - Args: - higher_in_channels (int): Number of input channels of the - higher-resolution branch. - lower_in_channels (int): Number of input channels of the - lower-resolution branch. - out_channels (int): Number of output channels. - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - dwconv_act_cfg (dict): Config of activation layers in 3x3 conv. - Default: dict(type='ReLU'). - conv_act_cfg (dict): Config of activation layers in the two 1x1 conv. - Default: None. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. 
- """ - - def __init__(self, - higher_in_channels, - lower_in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dwconv_act_cfg=dict(type='ReLU'), - conv_act_cfg=None, - align_corners=False): - super(FeatureFusionModule, self).__init__() - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dwconv_act_cfg = dwconv_act_cfg - self.conv_act_cfg = conv_act_cfg - self.align_corners = align_corners - self.dwconv = ConvModule( - lower_in_channels, - out_channels, - 3, - padding=1, - groups=out_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.dwconv_act_cfg) - self.conv_lower_res = ConvModule( - out_channels, - out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.conv_act_cfg) - - self.conv_higher_res = ConvModule( - higher_in_channels, - out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.conv_act_cfg) - - self.relu = nn.ReLU(True) - - def forward(self, higher_res_feature, lower_res_feature): - lower_res_feature = resize( - lower_res_feature, - size=higher_res_feature.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - lower_res_feature = self.dwconv(lower_res_feature) - lower_res_feature = self.conv_lower_res(lower_res_feature) - - higher_res_feature = self.conv_higher_res(higher_res_feature) - out = higher_res_feature + lower_res_feature - return self.relu(out) - - -@BACKBONES.register_module() -class FastSCNN(BaseModule): - """Fast-SCNN Backbone. - - This backbone is the implementation of `Fast-SCNN: Fast Semantic - Segmentation Network `_. - - Args: - in_channels (int): Number of input image channels. Default: 3. - downsample_dw_channels (tuple[int]): Number of output channels after - the first conv layer & the second conv layer in - Learning-To-Downsample (LTD) module. - Default: (32, 48). - global_in_channels (int): Number of input channels of - Global Feature Extractor(GFE). - Equal to number of output channels of LTD. - Default: 64. - global_block_channels (tuple[int]): Tuple of integers that describe - the output channels for each of the MobileNet-v2 bottleneck - residual blocks in GFE. - Default: (64, 96, 128). - global_block_strides (tuple[int]): Tuple of integers - that describe the strides (downsampling factors) for each of the - MobileNet-v2 bottleneck residual blocks in GFE. - Default: (2, 2, 1). - global_out_channels (int): Number of output channels of GFE. - Default: 128. - higher_in_channels (int): Number of input channels of the higher - resolution branch in FFM. - Equal to global_in_channels. - Default: 64. - lower_in_channels (int): Number of input channels of the lower - resolution branch in FFM. - Equal to global_out_channels. - Default: 128. - fusion_out_channels (int): Number of output channels of FFM. - Default: 128. - out_indices (tuple): Tuple of indices of list - [higher_res_features, lower_res_features, fusion_output]. - Often set to (0,1,2) to enable aux. heads. - Default: (0, 1, 2). - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - act_cfg (dict): Config of activation layers. Default: - dict(type='ReLU') - align_corners (bool): align_corners argument of F.interpolate. - Default: False - dw_act_cfg (dict): In DepthwiseSeparableConvModule, activation config - of depthwise ConvModule. If it is 'default', it will be the same - as `act_cfg`. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Default: None - """ - - def __init__(self, - in_channels=3, - downsample_dw_channels=(32, 48), - global_in_channels=64, - global_block_channels=(64, 96, 128), - global_block_strides=(2, 2, 1), - global_out_channels=128, - higher_in_channels=64, - lower_in_channels=128, - fusion_out_channels=128, - out_indices=(0, 1, 2), - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False, - dw_act_cfg=None, - init_cfg=None): - - super(FastSCNN, self).__init__(init_cfg) - - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) - ] - - if global_in_channels != higher_in_channels: - raise AssertionError('Global Input Channels must be the same \ - with Higher Input Channels!') - elif global_out_channels != lower_in_channels: - raise AssertionError('Global Output Channels must be the same \ - with Lower Input Channels!') - - self.in_channels = in_channels - self.downsample_dw_channels1 = downsample_dw_channels[0] - self.downsample_dw_channels2 = downsample_dw_channels[1] - self.global_in_channels = global_in_channels - self.global_block_channels = global_block_channels - self.global_block_strides = global_block_strides - self.global_out_channels = global_out_channels - self.higher_in_channels = higher_in_channels - self.lower_in_channels = lower_in_channels - self.fusion_out_channels = fusion_out_channels - self.out_indices = out_indices - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.align_corners = align_corners - self.learning_to_downsample = LearningToDownsample( - in_channels, - downsample_dw_channels, - global_in_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - dw_act_cfg=dw_act_cfg) - self.global_feature_extractor = GlobalFeatureExtractor( - global_in_channels, - global_block_channels, - global_out_channels, - strides=self.global_block_strides, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.feature_fusion = FeatureFusionModule( - higher_in_channels, - lower_in_channels, - fusion_out_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dwconv_act_cfg=self.act_cfg, - align_corners=self.align_corners) - - def forward(self, x): - higher_res_features = self.learning_to_downsample(x) - lower_res_features = self.global_feature_extractor(higher_res_features) - fusion_output = self.feature_fusion(higher_res_features, - lower_res_features) - - outs = [higher_res_features, lower_res_features, fusion_output] - outs = [outs[i] for i in self.out_indices] - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/hrnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/hrnet.py deleted file mode 100644 index 90feadcf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/hrnet.py +++ /dev/null @@ -1,642 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule, ModuleList, Sequential -from mmcv.utils.parrots_wrapper import _BatchNorm - -from mmseg.ops import Upsample, resize -from ..builder import BACKBONES -from .resnet import BasicBlock, Bottleneck - - -class HRModule(BaseModule): - """High-Resolution Module for HRNet. - - In this module, every branch has 4 BasicBlocks/Bottlenecks. 
Fusion/Exchange - is in this module. - """ - - def __init__(self, - num_branches, - blocks, - num_blocks, - in_channels, - num_channels, - multiscale_output=True, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - block_init_cfg=None, - init_cfg=None): - super(HRModule, self).__init__(init_cfg) - self.block_init_cfg = block_init_cfg - self._check_branches(num_branches, num_blocks, in_channels, - num_channels) - - self.in_channels = in_channels - self.num_branches = num_branches - - self.multiscale_output = multiscale_output - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - self.with_cp = with_cp - self.branches = self._make_branches(num_branches, blocks, num_blocks, - num_channels) - self.fuse_layers = self._make_fuse_layers() - self.relu = nn.ReLU(inplace=False) - - def _check_branches(self, num_branches, num_blocks, in_channels, - num_channels): - """Check branches configuration.""" - if num_branches != len(num_blocks): - error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_BLOCKS(' \ - f'{len(num_blocks)})' - raise ValueError(error_msg) - - if num_branches != len(num_channels): - error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_CHANNELS(' \ - f'{len(num_channels)})' - raise ValueError(error_msg) - - if num_branches != len(in_channels): - error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_INCHANNELS(' \ - f'{len(in_channels)})' - raise ValueError(error_msg) - - def _make_one_branch(self, - branch_index, - block, - num_blocks, - num_channels, - stride=1): - """Build one branch.""" - downsample = None - if stride != 1 or \ - self.in_channels[branch_index] != \ - num_channels[branch_index] * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - self.in_channels[branch_index], - num_channels[branch_index] * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, num_channels[branch_index] * - block.expansion)[1]) - - layers = [] - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=self.block_init_cfg)) - self.in_channels[branch_index] = \ - num_channels[branch_index] * block.expansion - for i in range(1, num_blocks[branch_index]): - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=self.block_init_cfg)) - - return Sequential(*layers) - - def _make_branches(self, num_branches, block, num_blocks, num_channels): - """Build multiple branch.""" - branches = [] - - for i in range(num_branches): - branches.append( - self._make_one_branch(i, block, num_blocks, num_channels)) - - return ModuleList(branches) - - def _make_fuse_layers(self): - """Build fuse layer.""" - if self.num_branches == 1: - return None - - num_branches = self.num_branches - in_channels = self.in_channels - fuse_layers = [] - num_out_branches = num_branches if self.multiscale_output else 1 - for i in range(num_out_branches): - fuse_layer = [] - for j in range(num_branches): - if j > i: - fuse_layer.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=1, - stride=1, - padding=0, - bias=False), - build_norm_layer(self.norm_cfg, in_channels[i])[1], - # we set align_corners=False for HRNet - Upsample( - scale_factor=2**(j - i), - mode='bilinear', - align_corners=False))) - elif j == i: - 
fuse_layer.append(None) - else: - conv_downsamples = [] - for k in range(i - j): - if k == i - j - 1: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[i])[1])) - else: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[j], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[j])[1], - nn.ReLU(inplace=False))) - fuse_layer.append(nn.Sequential(*conv_downsamples)) - fuse_layers.append(nn.ModuleList(fuse_layer)) - - return nn.ModuleList(fuse_layers) - - def forward(self, x): - """Forward function.""" - if self.num_branches == 1: - return [self.branches[0](x[0])] - - for i in range(self.num_branches): - x[i] = self.branches[i](x[i]) - - x_fuse = [] - for i in range(len(self.fuse_layers)): - y = 0 - for j in range(self.num_branches): - if i == j: - y += x[j] - elif j > i: - y = y + resize( - self.fuse_layers[i][j](x[j]), - size=x[i].shape[2:], - mode='bilinear', - align_corners=False) - else: - y += self.fuse_layers[i][j](x[j]) - x_fuse.append(self.relu(y)) - return x_fuse - - -@BACKBONES.register_module() -class HRNet(BaseModule): - """HRNet backbone. - - This backbone is the implementation of `High-Resolution Representations - for Labeling Pixels and Regions `_. - - Args: - extra (dict): Detailed configuration for each stage of HRNet. - There must be 4 stages, the configuration for each stage must have - 5 keys: - - - num_modules (int): The number of HRModule in this stage. - - num_branches (int): The number of branches in the HRModule. - - block (str): The type of convolution block. - - num_blocks (tuple): The number of blocks in each branch. - The length must be equal to num_branches. - - num_channels (tuple): The number of channels in each branch. - The length must be equal to num_branches. - in_channels (int): Number of input image channels. Normally 3. - conv_cfg (dict): Dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Use `BN` by default. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. Default: -1. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. Default: False. - multiscale_output (bool): Whether to output multi-level features - produced by multiple branches. If False, only the first level - feature will be output. Default: True. - pretrained (str, optional): Model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- - Example: - >>> from mmseg.models import HRNet - >>> import torch - >>> extra = dict( - >>> stage1=dict( - >>> num_modules=1, - >>> num_branches=1, - >>> block='BOTTLENECK', - >>> num_blocks=(4, ), - >>> num_channels=(64, )), - >>> stage2=dict( - >>> num_modules=1, - >>> num_branches=2, - >>> block='BASIC', - >>> num_blocks=(4, 4), - >>> num_channels=(32, 64)), - >>> stage3=dict( - >>> num_modules=4, - >>> num_branches=3, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4), - >>> num_channels=(32, 64, 128)), - >>> stage4=dict( - >>> num_modules=3, - >>> num_branches=4, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4, 4), - >>> num_channels=(32, 64, 128, 256))) - >>> self = HRNet(extra, in_channels=1) - >>> self.eval() - >>> inputs = torch.rand(1, 1, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 32, 8, 8) - (1, 64, 4, 4) - (1, 128, 2, 2) - (1, 256, 1, 1) - """ - - blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck} - - def __init__(self, - extra, - in_channels=3, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=False, - with_cp=False, - frozen_stages=-1, - zero_init_residual=False, - multiscale_output=True, - pretrained=None, - init_cfg=None): - super(HRNet, self).__init__(init_cfg) - - self.pretrained = pretrained - self.zero_init_residual = zero_init_residual - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - # Assert configurations of 4 stages are in extra - assert 'stage1' in extra and 'stage2' in extra \ - and 'stage3' in extra and 'stage4' in extra - # Assert whether the length of `num_blocks` and `num_channels` are - # equal to `num_branches` - for i in range(4): - cfg = extra[f'stage{i + 1}'] - assert len(cfg['num_blocks']) == cfg['num_branches'] and \ - len(cfg['num_channels']) == cfg['num_branches'] - - self.extra = extra - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - self.frozen_stages = frozen_stages - - # stem net - self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1) - self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2) - - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - self.conv_cfg, - 64, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.relu = nn.ReLU(inplace=True) - - # stage 1 - self.stage1_cfg = self.extra['stage1'] - num_channels = self.stage1_cfg['num_channels'][0] - block_type = self.stage1_cfg['block'] - num_blocks = self.stage1_cfg['num_blocks'][0] - - block = self.blocks_dict[block_type] - stage1_out_channels = num_channels * block.expansion - self.layer1 = self._make_layer(block, 64, num_channels, num_blocks) - - # stage 2 - self.stage2_cfg = self.extra['stage2'] - num_channels = self.stage2_cfg['num_channels'] - block_type = 
self.stage2_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition1 = self._make_transition_layer([stage1_out_channels], - num_channels) - self.stage2, pre_stage_channels = self._make_stage( - self.stage2_cfg, num_channels) - - # stage 3 - self.stage3_cfg = self.extra['stage3'] - num_channels = self.stage3_cfg['num_channels'] - block_type = self.stage3_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition2 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage3, pre_stage_channels = self._make_stage( - self.stage3_cfg, num_channels) - - # stage 4 - self.stage4_cfg = self.extra['stage4'] - num_channels = self.stage4_cfg['num_channels'] - block_type = self.stage4_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition3 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage4, pre_stage_channels = self._make_stage( - self.stage4_cfg, num_channels, multiscale_output=multiscale_output) - - self._freeze_stages() - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: the normalization layer named "norm2" """ - return getattr(self, self.norm2_name) - - def _make_transition_layer(self, num_channels_pre_layer, - num_channels_cur_layer): - """Make transition layer.""" - num_branches_cur = len(num_channels_cur_layer) - num_branches_pre = len(num_channels_pre_layer) - - transition_layers = [] - for i in range(num_branches_cur): - if i < num_branches_pre: - if num_channels_cur_layer[i] != num_channels_pre_layer[i]: - transition_layers.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - num_channels_pre_layer[i], - num_channels_cur_layer[i], - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - num_channels_cur_layer[i])[1], - nn.ReLU(inplace=True))) - else: - transition_layers.append(None) - else: - conv_downsamples = [] - for j in range(i + 1 - num_branches_pre): - in_channels = num_channels_pre_layer[-1] - out_channels = num_channels_cur_layer[i] \ - if j == i - num_branches_pre else in_channels - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - out_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, out_channels)[1], - nn.ReLU(inplace=True))) - transition_layers.append(nn.Sequential(*conv_downsamples)) - - return nn.ModuleList(transition_layers) - - def _make_layer(self, block, inplanes, planes, blocks, stride=1): - """Make each layer.""" - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, planes * block.expansion)[1]) - - layers = [] - block_init_cfg = None - if self.pretrained is None and not hasattr( - self, 'init_cfg') and self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm3')) - - layers.append( - block( - inplanes, - planes, 
- stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=block_init_cfg)) - inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append( - block( - inplanes, - planes, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - init_cfg=block_init_cfg)) - - return Sequential(*layers) - - def _make_stage(self, layer_config, in_channels, multiscale_output=True): - """Make each stage.""" - num_modules = layer_config['num_modules'] - num_branches = layer_config['num_branches'] - num_blocks = layer_config['num_blocks'] - num_channels = layer_config['num_channels'] - block = self.blocks_dict[layer_config['block']] - - hr_modules = [] - block_init_cfg = None - if self.pretrained is None and not hasattr( - self, 'init_cfg') and self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', val=0, override=dict(name='norm3')) - - for i in range(num_modules): - # multi_scale_output is only used for the last module - if not multiscale_output and i == num_modules - 1: - reset_multiscale_output = False - else: - reset_multiscale_output = True - - hr_modules.append( - HRModule( - num_branches, - block, - num_blocks, - in_channels, - num_channels, - reset_multiscale_output, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg, - block_init_cfg=block_init_cfg)) - - return Sequential(*hr_modules), in_channels - - def _freeze_stages(self): - """Freeze stages param and norm stats.""" - if self.frozen_stages >= 0: - - self.norm1.eval() - self.norm2.eval() - for m in [self.conv1, self.norm1, self.conv2, self.norm2]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - if i == 1: - m = getattr(self, f'layer{i}') - t = getattr(self, f'transition{i}') - elif i == 4: - m = getattr(self, f'stage{i}') - else: - m = getattr(self, f'stage{i}') - t = getattr(self, f'transition{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - t.eval() - for param in t.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.conv2(x) - x = self.norm2(x) - x = self.relu(x) - x = self.layer1(x) - - x_list = [] - for i in range(self.stage2_cfg['num_branches']): - if self.transition1[i] is not None: - x_list.append(self.transition1[i](x)) - else: - x_list.append(x) - y_list = self.stage2(x_list) - - x_list = [] - for i in range(self.stage3_cfg['num_branches']): - if self.transition2[i] is not None: - x_list.append(self.transition2[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage3(x_list) - - x_list = [] - for i in range(self.stage4_cfg['num_branches']): - if self.transition3[i] is not None: - x_list.append(self.transition3[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage4(x_list) - - return y_list - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(HRNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/icnet.py 
b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/icnet.py deleted file mode 100644 index 10e54278..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/icnet.py +++ /dev/null @@ -1,165 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import BACKBONES, build_backbone -from ..decode_heads.psp_head import PPM - - -@BACKBONES.register_module() -class ICNet(BaseModule): - """ICNet for Real-Time Semantic Segmentation on High-Resolution Images. - - This backbone is the implementation of - `ICNet `_. - - Args: - backbone_cfg (dict): Config dict to build backbone. Usually it is - ResNet but it can also be other backbones. - in_channels (int): The number of input image channels. Default: 3. - layer_channels (Sequence[int]): The numbers of feature channels at - layer 2 and layer 4 in ResNet. It can also be other backbones. - Default: (512, 2048). - light_branch_middle_channels (int): The number of channels of the - middle layer in light branch. Default: 32. - psp_out_channels (int): The number of channels of the output of PSP - module. Default: 512. - out_channels (Sequence[int]): The numbers of output feature channels - at each branches. Default: (64, 256, 256). - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. Default: (1, 2, 3, 6). - conv_cfg (dict): Dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN'). - act_cfg (dict): Dictionary to construct and config act layer. - Default: dict(type='ReLU'). - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - backbone_cfg, - in_channels=3, - layer_channels=(512, 2048), - light_branch_middle_channels=32, - psp_out_channels=512, - out_channels=(64, 256, 256), - pool_scales=(1, 2, 3, 6), - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='ReLU'), - align_corners=False, - init_cfg=None): - if backbone_cfg is None: - raise TypeError('backbone_cfg must be passed from config file!') - if init_cfg is None: - init_cfg = [ - dict(type='Kaiming', mode='fan_out', layer='Conv2d'), - dict(type='Constant', val=1, layer='_BatchNorm'), - dict(type='Normal', mean=0.01, layer='Linear') - ] - super(ICNet, self).__init__(init_cfg=init_cfg) - self.align_corners = align_corners - self.backbone = build_backbone(backbone_cfg) - - # Note: Default `ceil_mode` is false in nn.MaxPool2d, set - # `ceil_mode=True` to keep information in the corner of feature map. 
- self.backbone.maxpool = nn.MaxPool2d( - kernel_size=3, stride=2, padding=1, ceil_mode=True) - - self.psp_modules = PPM( - pool_scales=pool_scales, - in_channels=layer_channels[1], - channels=psp_out_channels, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - align_corners=align_corners) - - self.psp_bottleneck = ConvModule( - layer_channels[1] + len(pool_scales) * psp_out_channels, - psp_out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - self.conv_sub1 = nn.Sequential( - ConvModule( - in_channels=in_channels, - out_channels=light_branch_middle_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg), - ConvModule( - in_channels=light_branch_middle_channels, - out_channels=light_branch_middle_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg), - ConvModule( - in_channels=light_branch_middle_channels, - out_channels=out_channels[0], - kernel_size=3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) - - self.conv_sub2 = ConvModule( - layer_channels[0], - out_channels[1], - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg) - - self.conv_sub4 = ConvModule( - psp_out_channels, - out_channels[2], - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg) - - def forward(self, x): - output = [] - - # sub 1 - output.append(self.conv_sub1(x)) - - # sub 2 - x = resize( - x, - scale_factor=0.5, - mode='bilinear', - align_corners=self.align_corners) - x = self.backbone.stem(x) - x = self.backbone.maxpool(x) - x = self.backbone.layer1(x) - x = self.backbone.layer2(x) - output.append(self.conv_sub2(x)) - - # sub 4 - x = resize( - x, - scale_factor=0.5, - mode='bilinear', - align_corners=self.align_corners) - x = self.backbone.layer3(x) - x = self.backbone.layer4(x) - psp_outs = self.psp_modules(x) + [x] - psp_outs = torch.cat(psp_outs, dim=1) - x = self.psp_bottleneck(psp_outs) - - output.append(self.conv_sub4(x)) - - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mit.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mit.py deleted file mode 100644 index c97213a4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mit.py +++ /dev/null @@ -1,431 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -import warnings - -import torch -import torch.nn as nn -from mmcv.cnn import Conv2d, build_activation_layer, build_norm_layer -from mmcv.cnn.bricks.drop import build_dropout -from mmcv.cnn.bricks.transformer import MultiheadAttention -from mmcv.cnn.utils.weight_init import (constant_init, normal_init, - trunc_normal_init) -from mmcv.runner import BaseModule, ModuleList, Sequential - -from ..builder import BACKBONES -from ..utils import PatchEmbed, nchw_to_nlc, nlc_to_nchw - - -class MixFFN(BaseModule): - """An implementation of MixFFN of Segformer. - - The differences between MixFFN & FFN: - 1. Use 1X1 Conv to replace Linear layer. - 2. Introduce 3X3 Conv to encode positional information. - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. Defaults: 256. - feedforward_channels (int): The hidden dimension of FFNs. - Defaults: 1024. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='ReLU') - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. 
- init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - feedforward_channels, - act_cfg=dict(type='GELU'), - ffn_drop=0., - dropout_layer=None, - init_cfg=None): - super(MixFFN, self).__init__(init_cfg) - - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.act_cfg = act_cfg - self.activate = build_activation_layer(act_cfg) - - in_channels = embed_dims - fc1 = Conv2d( - in_channels=in_channels, - out_channels=feedforward_channels, - kernel_size=1, - stride=1, - bias=True) - # 3x3 depth wise conv to provide positional encode information - pe_conv = Conv2d( - in_channels=feedforward_channels, - out_channels=feedforward_channels, - kernel_size=3, - stride=1, - padding=(3 - 1) // 2, - bias=True, - groups=feedforward_channels) - fc2 = Conv2d( - in_channels=feedforward_channels, - out_channels=in_channels, - kernel_size=1, - stride=1, - bias=True) - drop = nn.Dropout(ffn_drop) - layers = [fc1, pe_conv, self.activate, drop, fc2, drop] - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - - def forward(self, x, hw_shape, identity=None): - out = nlc_to_nchw(x, hw_shape) - out = self.layers(out) - out = nchw_to_nlc(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -class EfficientMultiheadAttention(MultiheadAttention): - """An implementation of Efficient Multi-head Attention of Segformer. - - This module is modified from MultiheadAttention which is a module from - mmcv.cnn.bricks.transformer. - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Default: False. - qkv_bias (bool): enable bias for qkv if True. Default True. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - sr_ratio (int): The ratio of spatial reduction of Efficient Multi-head - Attention of Segformer. Default: 1. - """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=None, - init_cfg=None, - batch_first=True, - qkv_bias=False, - norm_cfg=dict(type='LN'), - sr_ratio=1): - super().__init__( - embed_dims, - num_heads, - attn_drop, - proj_drop, - dropout_layer=dropout_layer, - init_cfg=init_cfg, - batch_first=batch_first, - bias=qkv_bias) - - self.sr_ratio = sr_ratio - if sr_ratio > 1: - self.sr = Conv2d( - in_channels=embed_dims, - out_channels=embed_dims, - kernel_size=sr_ratio, - stride=sr_ratio) - # The ret[0] of build_norm_layer is norm name. - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - - # handle the BC-breaking from https://github.com/open-mmlab/mmcv/pull/1418 # noqa - from mmseg import digit_version, mmcv_version - if mmcv_version < digit_version('1.3.17'): - warnings.warn('The legacy version of forward function in' - 'EfficientMultiheadAttention is deprecated in' - 'mmcv>=1.3.17 and will no longer support in the' - 'future. 
Please upgrade your mmcv.') - self.forward = self.legacy_forward - - def forward(self, x, hw_shape, identity=None): - - x_q = x - if self.sr_ratio > 1: - x_kv = nlc_to_nchw(x, hw_shape) - x_kv = self.sr(x_kv) - x_kv = nchw_to_nlc(x_kv) - x_kv = self.norm(x_kv) - else: - x_kv = x - - if identity is None: - identity = x_q - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - x_q = x_q.transpose(0, 1) - x_kv = x_kv.transpose(0, 1) - - out = self.attn(query=x_q, key=x_kv, value=x_kv)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - def legacy_forward(self, x, hw_shape, identity=None): - """multi head attention forward in mmcv version < 1.3.17.""" - - x_q = x - if self.sr_ratio > 1: - x_kv = nlc_to_nchw(x, hw_shape) - x_kv = self.sr(x_kv) - x_kv = nchw_to_nlc(x_kv) - x_kv = self.norm(x_kv) - else: - x_kv = x - - if identity is None: - identity = x_q - - # `need_weights=True` will let nn.MultiHeadAttention - # `return attn_output, attn_output_weights.sum(dim=1) / num_heads` - # The `attn_output_weights.sum(dim=1)` may cause cuda error. So, we set - # `need_weights=False` to ignore `attn_output_weights.sum(dim=1)`. - # This issue - `https://github.com/pytorch/pytorch/issues/37583` report - # the error that large scale tensor sum operation may cause cuda error. - out = self.attn(query=x_q, key=x_kv, value=x_kv, need_weights=False)[0] - - return identity + self.dropout_layer(self.proj_drop(out)) - - -class TransformerEncoderLayer(BaseModule): - """Implements one encoder layer in Segformer. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - drop_rate (float): Probability of an element to be zeroed. - after the feed forward layer. Default 0.0. - attn_drop_rate (float): The drop out rate for attention layer. - Default 0.0. - drop_path_rate (float): stochastic depth rate. Default 0.0. - qkv_bias (bool): enable bias for qkv if True. - Default: True. - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Default: False. - init_cfg (dict, optional): Initialization config dict. - Default:None. - sr_ratio (int): The ratio of spatial reduction of Efficient Multi-head - Attention of Segformer. Default: 1. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - qkv_bias=True, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - batch_first=True, - sr_ratio=1): - super(TransformerEncoderLayer, self).__init__() - - # The ret[0] of build_norm_layer is norm name. 
- self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] - - self.attn = EfficientMultiheadAttention( - embed_dims=embed_dims, - num_heads=num_heads, - attn_drop=attn_drop_rate, - proj_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - batch_first=batch_first, - qkv_bias=qkv_bias, - norm_cfg=norm_cfg, - sr_ratio=sr_ratio) - - # The ret[0] of build_norm_layer is norm name. - self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] - - self.ffn = MixFFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg) - - def forward(self, x, hw_shape): - x = self.attn(self.norm1(x), hw_shape, identity=x) - x = self.ffn(self.norm2(x), hw_shape, identity=x) - return x - - -@BACKBONES.register_module() -class MixVisionTransformer(BaseModule): - """The backbone of Segformer. - - This backbone is the implementation of `SegFormer: Simple and - Efficient Design for Semantic Segmentation with - Transformers `_. - Args: - in_channels (int): Number of input channels. Default: 3. - embed_dims (int): Embedding dimension. Default: 768. - num_stags (int): The num of stages. Default: 4. - num_layers (Sequence[int]): The layer number of each transformer encode - layer. Default: [3, 4, 6, 3]. - num_heads (Sequence[int]): The attention heads of each transformer - encode layer. Default: [1, 2, 4, 8]. - patch_sizes (Sequence[int]): The patch_size of each overlapped patch - embedding. Default: [7, 3, 3, 3]. - strides (Sequence[int]): The stride of each overlapped patch embedding. - Default: [4, 2, 2, 2]. - sr_ratios (Sequence[int]): The spatial reduction rate of each - transformer encode layer. Default: [8, 4, 2, 1]. - out_indices (Sequence[int] | int): Output from which stages. - Default: (0, 1, 2, 3). - mlp_ratio (int): ratio of mlp hidden dim to embedding dim. - Default: 4. - qkv_bias (bool): Enable bias for qkv if True. Default: True. - drop_rate (float): Probability of an element to be zeroed. - Default 0.0 - attn_drop_rate (float): The drop out rate for attention layer. - Default 0.0 - drop_path_rate (float): stochastic depth rate. Default 0.0 - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN') - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - pretrained (str, optional): model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - in_channels=3, - embed_dims=64, - num_stages=4, - num_layers=[3, 4, 6, 3], - num_heads=[1, 2, 4, 8], - patch_sizes=[7, 3, 3, 3], - strides=[4, 2, 2, 2], - sr_ratios=[8, 4, 2, 1], - out_indices=(0, 1, 2, 3), - mlp_ratio=4, - qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN', eps=1e-6), - pretrained=None, - init_cfg=None): - super(MixVisionTransformer, self).__init__(init_cfg=init_cfg) - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be set at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is not None: - raise TypeError('pretrained must be a str or None') - - self.embed_dims = embed_dims - self.num_stages = num_stages - self.num_layers = num_layers - self.num_heads = num_heads - self.patch_sizes = patch_sizes - self.strides = strides - self.sr_ratios = sr_ratios - assert num_stages == len(num_layers) == len(num_heads) \ - == len(patch_sizes) == len(strides) == len(sr_ratios) - - self.out_indices = out_indices - assert max(out_indices) < self.num_stages - - # transformer encoder - dpr = [ - x.item() - for x in torch.linspace(0, drop_path_rate, sum(num_layers)) - ] # stochastic num_layer decay rule - - cur = 0 - self.layers = ModuleList() - for i, num_layer in enumerate(num_layers): - embed_dims_i = embed_dims * num_heads[i] - patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims_i, - kernel_size=patch_sizes[i], - stride=strides[i], - padding=patch_sizes[i] // 2, - norm_cfg=norm_cfg) - layer = ModuleList([ - TransformerEncoderLayer( - embed_dims=embed_dims_i, - num_heads=num_heads[i], - feedforward_channels=mlp_ratio * embed_dims_i, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[cur + idx], - qkv_bias=qkv_bias, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - sr_ratio=sr_ratios[i]) for idx in range(num_layer) - ]) - in_channels = embed_dims_i - # The ret[0] of build_norm_layer is norm name. - norm = build_norm_layer(norm_cfg, embed_dims_i)[1] - self.layers.append(ModuleList([patch_embed, layer, norm])) - cur += num_layer - - def init_weights(self): - if self.init_cfg is None: - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) - elif isinstance(m, nn.LayerNorm): - constant_init(m, val=1.0, bias=0.) - elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[ - 1] * m.out_channels - fan_out //= m.groups - normal_init( - m, mean=0, std=math.sqrt(2.0 / fan_out), bias=0) - else: - super(MixVisionTransformer, self).init_weights() - - def forward(self, x): - outs = [] - - for i, layer in enumerate(self.layers): - x, hw_shape = layer[0](x) - for block in layer[1]: - x = block(x, hw_shape) - x = layer[2](x) - x = nlc_to_nchw(x, hw_shape) - if i in self.out_indices: - outs.append(x) - - return outs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mobilenet_v2.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mobilenet_v2.py deleted file mode 100644 index cbb9c6cd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mobilenet_v2.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidual, make_divisible - - -@BACKBONES.register_module() -class MobileNetV2(BaseModule): - """MobileNetV2 backbone. - - This backbone is the implementation of - `MobileNetV2: Inverted Residuals and Linear Bottlenecks - `_. - - Args: - widen_factor (float): Width multiplier, multiply number of - channels in each layer by this amount. Default: 1.0. - strides (Sequence[int], optional): Strides of the first block of each - layer. If not specified, default config in ``arch_setting`` will - be used. - dilations (Sequence[int]): Dilation of each layer. - out_indices (None or Sequence[int]): Output from which stages. - Default: (7, ). - frozen_stages (int): Stages to be frozen (all param fixed). - Default: -1, which means not freezing any parameters. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU6'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - # Parameters to build layers. 3 parameters are needed to construct a - # layer, from left to right: expand_ratio, channel, num_blocks. - arch_settings = [[1, 16, 1], [6, 24, 2], [6, 32, 3], [6, 64, 4], - [6, 96, 3], [6, 160, 3], [6, 320, 1]] - - def __init__(self, - widen_factor=1., - strides=(1, 2, 2, 2, 1, 2, 1), - dilations=(1, 1, 1, 1, 1, 1, 1), - out_indices=(1, 2, 4, 6), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU6'), - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - super(MobileNetV2, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - self.widen_factor = widen_factor - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == len(self.arch_settings) - self.out_indices = out_indices - for index in out_indices: - if index not in range(0, 7): - raise ValueError('the item in out_indices must in ' - f'range(0, 7). But received {index}') - - if frozen_stages not in range(-1, 7): - raise ValueError('frozen_stages must be in range(-1, 7). 
' - f'But received {frozen_stages}') - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - - self.in_channels = make_divisible(32 * widen_factor, 8) - - self.conv1 = ConvModule( - in_channels=3, - out_channels=self.in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.layers = [] - - for i, layer_cfg in enumerate(self.arch_settings): - expand_ratio, channel, num_blocks = layer_cfg - stride = self.strides[i] - dilation = self.dilations[i] - out_channels = make_divisible(channel * widen_factor, 8) - inverted_res_layer = self.make_layer( - out_channels=out_channels, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - expand_ratio=expand_ratio) - layer_name = f'layer{i + 1}' - self.add_module(layer_name, inverted_res_layer) - self.layers.append(layer_name) - - def make_layer(self, out_channels, num_blocks, stride, dilation, - expand_ratio): - """Stack InvertedResidual blocks to build a layer for MobileNetV2. - - Args: - out_channels (int): out_channels of block. - num_blocks (int): Number of blocks. - stride (int): Stride of the first block. - dilation (int): Dilation of the first block. - expand_ratio (int): Expand the number of channels of the - hidden layer in InvertedResidual by this ratio. - """ - layers = [] - for i in range(num_blocks): - layers.append( - InvertedResidual( - self.in_channels, - out_channels, - stride if i == 0 else 1, - expand_ratio=expand_ratio, - dilation=dilation if i == 0 else 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - with_cp=self.with_cp)) - self.in_channels = out_channels - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for i in range(1, self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(MobileNetV2, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mobilenet_v3.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mobilenet_v3.py deleted file mode 100644 index dd3d6eb1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/mobilenet_v3.py +++ /dev/null @@ -1,267 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -from mmcv.cnn import ConvModule -from mmcv.cnn.bricks import Conv2dAdaptivePadding -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidualV3 as InvertedResidual - - -@BACKBONES.register_module() -class MobileNetV3(BaseModule): - """MobileNetV3 backbone. - - This backbone is the improved implementation of `Searching for MobileNetV3 - `_. 
- - Args: - arch (str): Architecture of mobilnetv3, from {'small', 'large'}. - Default: 'small'. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - out_indices (tuple[int]): Output from which layer. - Default: (0, 1, 12). - frozen_stages (int): Stages to be frozen (all param fixed). - Default: -1, which means not freezing any parameters. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save - some memory while slowing down the training speed. - Default: False. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - # Parameters to build each block: - # [kernel size, mid channels, out channels, with_se, act type, stride] - arch_settings = { - 'small': [[3, 16, 16, True, 'ReLU', 2], # block0 layer1 os=4 - [3, 72, 24, False, 'ReLU', 2], # block1 layer2 os=8 - [3, 88, 24, False, 'ReLU', 1], - [5, 96, 40, True, 'HSwish', 2], # block2 layer4 os=16 - [5, 240, 40, True, 'HSwish', 1], - [5, 240, 40, True, 'HSwish', 1], - [5, 120, 48, True, 'HSwish', 1], # block3 layer7 os=16 - [5, 144, 48, True, 'HSwish', 1], - [5, 288, 96, True, 'HSwish', 2], # block4 layer9 os=32 - [5, 576, 96, True, 'HSwish', 1], - [5, 576, 96, True, 'HSwish', 1]], - 'large': [[3, 16, 16, False, 'ReLU', 1], # block0 layer1 os=2 - [3, 64, 24, False, 'ReLU', 2], # block1 layer2 os=4 - [3, 72, 24, False, 'ReLU', 1], - [5, 72, 40, True, 'ReLU', 2], # block2 layer4 os=8 - [5, 120, 40, True, 'ReLU', 1], - [5, 120, 40, True, 'ReLU', 1], - [3, 240, 80, False, 'HSwish', 2], # block3 layer7 os=16 - [3, 200, 80, False, 'HSwish', 1], - [3, 184, 80, False, 'HSwish', 1], - [3, 184, 80, False, 'HSwish', 1], - [3, 480, 112, True, 'HSwish', 1], # block4 layer11 os=16 - [3, 672, 112, True, 'HSwish', 1], - [5, 672, 160, True, 'HSwish', 2], # block5 layer13 os=32 - [5, 960, 160, True, 'HSwish', 1], - [5, 960, 160, True, 'HSwish', 1]] - } # yapf: disable - - def __init__(self, - arch='small', - conv_cfg=None, - norm_cfg=dict(type='BN'), - out_indices=(0, 1, 12), - frozen_stages=-1, - reduction_factor=1, - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - super(MobileNetV3, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - assert arch in self.arch_settings - assert isinstance(reduction_factor, int) and reduction_factor > 0 - assert mmcv.is_tuple_of(out_indices, int) - for index in out_indices: - if index not in range(0, len(self.arch_settings[arch]) + 2): - raise ValueError( - 'the item in out_indices must in ' - f'range(0, {len(self.arch_settings[arch])+2}). 
' - f'But received {index}') - - if frozen_stages not in range(-1, len(self.arch_settings[arch]) + 2): - raise ValueError('frozen_stages must be in range(-1, ' - f'{len(self.arch_settings[arch])+2}). ' - f'But received {frozen_stages}') - self.arch = arch - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.reduction_factor = reduction_factor - self.norm_eval = norm_eval - self.with_cp = with_cp - self.layers = self._make_layer() - - def _make_layer(self): - layers = [] - - # build the first layer (layer0) - in_channels = 16 - layer = ConvModule( - in_channels=3, - out_channels=in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=dict(type='Conv2dAdaptivePadding'), - norm_cfg=self.norm_cfg, - act_cfg=dict(type='HSwish')) - self.add_module('layer0', layer) - layers.append('layer0') - - layer_setting = self.arch_settings[self.arch] - for i, params in enumerate(layer_setting): - (kernel_size, mid_channels, out_channels, with_se, act, - stride) = params - - if self.arch == 'large' and i >= 12 or self.arch == 'small' and \ - i >= 8: - mid_channels = mid_channels // self.reduction_factor - out_channels = out_channels // self.reduction_factor - - if with_se: - se_cfg = dict( - channels=mid_channels, - ratio=4, - act_cfg=(dict(type='ReLU'), - dict(type='HSigmoid', bias=3.0, divisor=6.0))) - else: - se_cfg = None - - layer = InvertedResidual( - in_channels=in_channels, - out_channels=out_channels, - mid_channels=mid_channels, - kernel_size=kernel_size, - stride=stride, - se_cfg=se_cfg, - with_expand_conv=(in_channels != mid_channels), - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=dict(type=act), - with_cp=self.with_cp) - in_channels = out_channels - layer_name = 'layer{}'.format(i + 1) - self.add_module(layer_name, layer) - layers.append(layer_name) - - # build the last layer - # block5 layer12 os=32 for small model - # block6 layer16 os=32 for large model - layer = ConvModule( - in_channels=in_channels, - out_channels=576 if self.arch == 'small' else 960, - kernel_size=1, - stride=1, - dilation=4, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=dict(type='HSwish')) - layer_name = 'layer{}'.format(len(layer_setting) + 1) - self.add_module(layer_name, layer) - layers.append(layer_name) - - # next, convert backbone MobileNetV3 to a semantic segmentation version - if self.arch == 'small': - self.layer4.depthwise_conv.conv.stride = (1, 1) - self.layer9.depthwise_conv.conv.stride = (1, 1) - for i in range(4, len(layers)): - layer = getattr(self, layers[i]) - if isinstance(layer, InvertedResidual): - modified_module = layer.depthwise_conv.conv - else: - modified_module = layer.conv - - if i < 9: - modified_module.dilation = (2, 2) - pad = 2 - else: - modified_module.dilation = (4, 4) - pad = 4 - - if not isinstance(modified_module, Conv2dAdaptivePadding): - # Adjust padding - pad *= (modified_module.kernel_size[0] - 1) // 2 - modified_module.padding = (pad, pad) - else: - self.layer7.depthwise_conv.conv.stride = (1, 1) - self.layer13.depthwise_conv.conv.stride = (1, 1) - for i in range(7, len(layers)): - layer = getattr(self, layers[i]) - if isinstance(layer, InvertedResidual): - modified_module = layer.depthwise_conv.conv - else: - modified_module = layer.conv - - if i < 13: - modified_module.dilation = (2, 2) - pad = 2 - else: - modified_module.dilation = (4, 4) - pad = 4 - - if not isinstance(modified_module, Conv2dAdaptivePadding): - # Adjust padding - pad *= 
(modified_module.kernel_size[0] - 1) // 2 - modified_module.padding = (pad, pad) - - return layers - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - return outs - - def _freeze_stages(self): - for i in range(self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(MobileNetV3, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnest.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnest.py deleted file mode 100644 index 91952c2c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnest.py +++ /dev/null @@ -1,318 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNetV1d - - -class RSoftmax(nn.Module): - """Radix Softmax module in ``SplitAttentionConv2d``. - - Args: - radix (int): Radix of input. - groups (int): Groups of input. - """ - - def __init__(self, radix, groups): - super().__init__() - self.radix = radix - self.groups = groups - - def forward(self, x): - batch = x.size(0) - if self.radix > 1: - x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2) - x = F.softmax(x, dim=1) - x = x.reshape(batch, -1) - else: - x = torch.sigmoid(x) - return x - - -class SplitAttentionConv2d(nn.Module): - """Split-Attention Conv2d in ResNeSt. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int | tuple[int]): Same as nn.Conv2d. - stride (int | tuple[int]): Same as nn.Conv2d. - padding (int | tuple[int]): Same as nn.Conv2d. - dilation (int | tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels. Default: 4. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - dcn (dict): Config dict for DCN. Default: None. 
- """ - - def __init__(self, - in_channels, - channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - radix=2, - reduction_factor=4, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None): - super(SplitAttentionConv2d, self).__init__() - inter_channels = max(in_channels * radix // reduction_factor, 32) - self.radix = radix - self.groups = groups - self.channels = channels - self.with_dcn = dcn is not None - self.dcn = dcn - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_dcn and not fallback_on_stride: - assert conv_cfg is None, 'conv_cfg must be None for DCN' - conv_cfg = dcn - self.conv = build_conv_layer( - conv_cfg, - in_channels, - channels * radix, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups * radix, - bias=False) - self.norm0_name, norm0 = build_norm_layer( - norm_cfg, channels * radix, postfix=0) - self.add_module(self.norm0_name, norm0) - self.relu = nn.ReLU(inplace=True) - self.fc1 = build_conv_layer( - None, channels, inter_channels, 1, groups=self.groups) - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, inter_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.fc2 = build_conv_layer( - None, inter_channels, channels * radix, 1, groups=self.groups) - self.rsoftmax = RSoftmax(radix, groups) - - @property - def norm0(self): - """nn.Module: the normalization layer named "norm0" """ - return getattr(self, self.norm0_name) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def forward(self, x): - x = self.conv(x) - x = self.norm0(x) - x = self.relu(x) - - batch, rchannel = x.shape[:2] - batch = x.size(0) - if self.radix > 1: - splits = x.view(batch, self.radix, -1, *x.shape[2:]) - gap = splits.sum(dim=1) - else: - gap = x - gap = F.adaptive_avg_pool2d(gap, 1) - gap = self.fc1(gap) - - gap = self.norm1(gap) - gap = self.relu(gap) - - atten = self.fc2(gap) - atten = self.rsoftmax(atten).view(batch, -1, 1, 1) - - if self.radix > 1: - attens = atten.view(batch, self.radix, -1, *atten.shape[2:]) - out = torch.sum(attens * splits, dim=1) - else: - out = atten * x - return out.contiguous() - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeSt. - - Args: - inplane (int): Input planes of this block. - planes (int): Middle planes of this block. - groups (int): Groups of conv2. - width_per_group (int): Width per group of conv2. 64x4d indicates - ``groups=64, width_per_group=4`` and 32x8d indicates - ``groups=32, width_per_group=8``. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Key word arguments for base class. 
- """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - """Bottleneck block for ResNeSt.""" - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.avg_down_stride = avg_down_stride and self.conv2_stride > 1 - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - self.with_modulated_dcn = False - self.conv2 = SplitAttentionConv2d( - width, - width, - kernel_size=3, - stride=1 if self.avg_down_stride else self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - radix=radix, - reduction_factor=reduction_factor, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=self.dcn) - delattr(self, self.norm2_name) - - if self.avg_down_stride: - self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - def forward(self, x): - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - - if self.avg_down_stride: - out = self.avd_layer(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNeSt(ResNetV1d): - """ResNeSt backbone. - - This backbone is the implementation of `ResNeSt: - Split-Attention Networks `_. - - Args: - groups (int): Number of groups of Bottleneck. Default: 1 - base_width (int): Base width of Bottleneck. Default: 4 - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Keyword arguments for ResNet. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)), - 200: (Bottleneck, (3, 24, 36, 3)) - } - - def __init__(self, - groups=1, - base_width=4, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - self.groups = groups - self.base_width = base_width - self.radix = radix - self.reduction_factor = reduction_factor - self.avg_down_stride = avg_down_stride - super(ResNeSt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - radix=self.radix, - reduction_factor=self.reduction_factor, - avg_down_stride=self.avg_down_stride, - **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnet.py deleted file mode 100644 index e8b961d5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnet.py +++ /dev/null @@ -1,714 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer, build_plugin_layer -from mmcv.runner import BaseModule -from mmcv.utils.parrots_wrapper import _BatchNorm - -from ..builder import BACKBONES -from ..utils import ResLayer - - -class BasicBlock(BaseModule): - """Basic block for ResNet.""" - - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - super(BasicBlock, self).__init__(init_cfg) - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=False) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Bottleneck(BaseModule): - """Bottleneck block for ResNet. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is - "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - super(Bottleneck, self).__init__(init_cfg) - assert style in ['pytorch', 'caffe'] - assert dcn is None or isinstance(dcn, dict) - assert plugins is None or isinstance(plugins, list) - if plugins is not None: - allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] - assert all(p['position'] in allowed_position for p in plugins) - - self.inplanes = inplanes - self.planes = planes - self.stride = stride - self.dilation = dilation - self.style = style - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dcn = dcn - self.with_dcn = dcn is not None - self.plugins = plugins - self.with_plugins = plugins is not None - - if self.with_plugins: - # collect plugins for conv1/conv2/conv3 - self.after_conv1_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_conv3_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv3' - ] - - if self.style == 'pytorch': - self.conv1_stride = 1 - self.conv2_stride = stride - else: - self.conv1_stride = stride - self.conv2_stride = 1 - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - norm_cfg, planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - conv_cfg, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - dcn, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - conv_cfg, - planes, - planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - - if self.with_plugins: - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - planes * self.expansion, self.after_conv3_plugins) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. 
- """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - """Forward function for plugins.""" - out = x - for name in plugin_names: - out = getattr(self, name)(x) - return out - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - @property - def norm3(self): - """nn.Module: normalization layer after the third convolution layer""" - return getattr(self, self.norm3_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNet(BaseModule): - """ResNet backbone. - - This backbone is the improved implementation of `Deep Residual Learning - for Image Recognition `_. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Default: 3. - stem_channels (int): Number of stem channels. Default: 64. - base_channels (int): Number of base channels of res layer. Default: 64. - num_stages (int): Resnet stages, normally 4. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - Default: (1, 2, 2, 2). - dilations (Sequence[int]): Dilation of each stage. - Default: (1, 1, 1, 1). - out_indices (Sequence[int]): Output from which stages. - Default: (0, 1, 2, 3). - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. Default: 'pytorch'. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv. - Default: False. - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. Default: -1. - conv_cfg (dict | None): Dictionary to construct and config conv layer. - When conv_cfg is None, cfg will be set to dict(type='Conv2d'). - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. 
- dcn (dict | None): Dictionary to construct and config DCN conv layer. - When dcn is not None, conv_cfg must be None. Default: None. - stage_with_dcn (Sequence[bool]): Whether to set DCN conv for each - stage. The length of stage_with_dcn is equal to num_stages. - Default: (False, False, False, False). - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - - position (str, required): Position inside block to insert plugin, - options: 'after_conv1', 'after_conv2', 'after_conv3'. - - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - Default: None. - multi_grid (Sequence[int]|None): Multi grid dilation rates of last - stage. Default: None. - contract_dilation (bool): Whether contract first dilation of each layer - Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. Default: True. - pretrained (str, optional): model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - - Example: - >>> from mmseg.models import ResNet - >>> import torch - >>> self = ResNet(depth=18) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 64, 8, 8) - (1, 128, 4, 4) - (1, 256, 2, 2) - (1, 512, 1, 1) - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - in_channels=3, - stem_channels=64, - base_channels=64, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=False, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - multi_grid=None, - contract_dilation=False, - with_cp=False, - zero_init_residual=True, - pretrained=None, - init_cfg=None): - super(ResNet, self).__init__(init_cfg) - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - - self.pretrained = pretrained - self.zero_init_residual = zero_init_residual - block_init_cfg = None - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - block = self.arch_settings[depth][0] - if self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm3')) - else: - raise TypeError('pretrained must be a str or None') - - self.depth = depth - self.stem_channels = stem_channels - self.base_channels = 
base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.multi_grid = multi_grid - self.contract_dilation = contract_dilation - self.block, stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - self.inplanes = stem_channels - - self._make_stem_layer(in_channels, stem_channels) - - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = strides[i] - dilation = dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - if plugins is not None: - stage_plugins = self.make_stage_plugins(plugins, i) - else: - stage_plugins = None - # multi grid is applied to last layer only - stage_multi_grid = multi_grid if i == len( - self.stage_blocks) - 1 else None - planes = base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=stage_plugins, - multi_grid=stage_multi_grid, - contract_dilation=contract_dilation, - init_cfg=block_init_cfg) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i+1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = self.block.expansion * base_channels * 2**( - len(self.stage_blocks) - 1) - - def make_stage_plugins(self, plugins, stage_idx): - """make plugins for ResNet 'stage_idx'th stage . - - Currently we support to insert 'context_block', - 'empirical_attention_block', 'nonlocal_block' into the backbone like - ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of - Bottleneck. - - An example of plugins format could be : - >>> plugins=[ - ... dict(cfg=dict(type='xxx', arg1='xxx'), - ... stages=(False, True, True, True), - ... position='after_conv2'), - ... dict(cfg=dict(type='yyy'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='1'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='2'), - ... stages=(True, True, True, True), - ... position='after_conv3') - ... ] - >>> self = ResNet(depth=18) - >>> stage_plugins = self.make_stage_plugins(plugins, 0) - >>> assert len(stage_plugins) == 3 - - Suppose 'stage_idx=0', the structure of blocks in the stage would be: - conv1-> conv2->conv3->yyy->zzz1->zzz2 - Suppose 'stage_idx=1', the structure of blocks in the stage would be: - conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 - - If stages is missing, the plugin would be applied to all stages. - - Args: - plugins (list[dict]): List of plugins cfg to build. The postfix is - required if multiple same type plugins are inserted. 
- stage_idx (int): Index of stage to build - - Returns: - list[dict]: Plugins for current stage - """ - stage_plugins = [] - for plugin in plugins: - plugin = plugin.copy() - stages = plugin.pop('stages', None) - assert stages is None or len(stages) == self.num_stages - # whether to insert plugin into current stage - if stages is None or stages[stage_idx]: - stage_plugins.append(plugin) - - return stage_plugins - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer(**kwargs) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def _make_stem_layer(self, in_channels, stem_channels): - """Make stem layer for ResNet.""" - if self.deep_stem: - self.stem = nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels)[1], - nn.ReLU(inplace=True)) - else: - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, stem_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - def _freeze_stages(self): - """Freeze stages param and norm stats.""" - if self.frozen_stages >= 0: - if self.deep_stem: - self.stem.eval() - for param in self.stem.parameters(): - param.requires_grad = False - else: - self.norm1.eval() - for m in [self.conv1, self.norm1]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'layer{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(ResNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - -@BACKBONES.register_module() -class ResNetV1c(ResNet): - """ResNetV1c variant described in [1]_. - - Compared with default ResNet(ResNetV1b), ResNetV1c replaces the 7x7 conv in - the input stem with three 3x3 convs. For more details please refer to `Bag - of Tricks for Image Classification with Convolutional Neural Networks - `_. 
- """ - - def __init__(self, **kwargs): - super(ResNetV1c, self).__init__( - deep_stem=True, avg_down=False, **kwargs) - - -@BACKBONES.register_module() -class ResNetV1d(ResNet): - """ResNetV1d variant described in [1]_. - - Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in - the input stem with three 3x3 convs. And in the downsampling block, a 2x2 - avg_pool with stride 2 is added before conv, whose stride is changed to 1. - """ - - def __init__(self, **kwargs): - super(ResNetV1d, self).__init__( - deep_stem=True, avg_down=True, **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnext.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnext.py deleted file mode 100644 index 805c27bf..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/resnext.py +++ /dev/null @@ -1,150 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is - "caffe", the stride-two layer is the first 1x1 conv layer. - """ - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@BACKBONES.register_module() -class ResNeXt(ResNet): - """ResNeXt backbone. - - This backbone is the implementation of `Aggregated - Residual Transformations for Deep Neural - Networks `_. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Normally 3. - num_stages (int): Resnet stages, normally 4. - groups (int): Group of resnext. - base_width (int): Base width of resnext. - strides (Sequence[int]): Strides of the first block of each stage. 
- dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from mmseg.models import ResNeXt - >>> import torch - >>> self = ResNeXt(depth=50) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 256, 8, 8) - (1, 512, 4, 4) - (1, 1024, 2, 2) - (1, 2048, 1, 1) - """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/stdc.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/stdc.py deleted file mode 100644 index 04f2f7a2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/stdc.py +++ /dev/null @@ -1,422 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Modified from https://github.com/MichaelFan01/STDC-Seg.""" -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner.base_module import BaseModule, ModuleList, Sequential - -from mmseg.ops import resize -from ..builder import BACKBONES, build_backbone -from .bisenetv1 import AttentionRefinementModule - - -class STDCModule(BaseModule): - """STDCModule. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels before scaling. - stride (int): The number of stride for the first conv layer. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): The activation config for conv layers. - num_convs (int): Numbers of conv layers. - fusion_type (str): Type of fusion operation. Default: 'add'. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - stride, - norm_cfg=None, - act_cfg=None, - num_convs=4, - fusion_type='add', - init_cfg=None): - super(STDCModule, self).__init__(init_cfg=init_cfg) - assert num_convs > 1 - assert fusion_type in ['add', 'cat'] - self.stride = stride - self.with_downsample = True if self.stride == 2 else False - self.fusion_type = fusion_type - - self.layers = ModuleList() - conv_0 = ConvModule( - in_channels, out_channels // 2, kernel_size=1, norm_cfg=norm_cfg) - - if self.with_downsample: - self.downsample = ConvModule( - out_channels // 2, - out_channels // 2, - kernel_size=3, - stride=2, - padding=1, - groups=out_channels // 2, - norm_cfg=norm_cfg, - act_cfg=None) - - if self.fusion_type == 'add': - self.layers.append(nn.Sequential(conv_0, self.downsample)) - self.skip = Sequential( - ConvModule( - in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=1, - groups=in_channels, - norm_cfg=norm_cfg, - act_cfg=None), - ConvModule( - in_channels, - out_channels, - 1, - norm_cfg=norm_cfg, - act_cfg=None)) - else: - self.layers.append(conv_0) - self.skip = nn.AvgPool2d(kernel_size=3, stride=2, padding=1) - else: - self.layers.append(conv_0) - - for i in range(1, num_convs): - out_factor = 2**(i + 1) if i != num_convs - 1 else 2**i - self.layers.append( - ConvModule( - out_channels // 2**i, - out_channels // out_factor, - kernel_size=3, - stride=1, - padding=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, inputs): - if self.fusion_type == 'add': - out = self.forward_add(inputs) - else: - out = self.forward_cat(inputs) - return out - - def forward_add(self, inputs): - layer_outputs = [] - x = inputs.clone() - for layer in self.layers: - x = layer(x) - layer_outputs.append(x) - if self.with_downsample: - inputs = self.skip(inputs) - - return torch.cat(layer_outputs, dim=1) + inputs - - def forward_cat(self, inputs): - x0 = self.layers[0](inputs) - layer_outputs = [x0] - for i, layer in enumerate(self.layers[1:]): - if i == 0: - if self.with_downsample: - x = layer(self.downsample(x0)) - else: - x = layer(x0) - else: - x = layer(x) - layer_outputs.append(x) - if self.with_downsample: - layer_outputs[0] = self.skip(x0) - return torch.cat(layer_outputs, dim=1) - - -class FeatureFusionModule(BaseModule): - """Feature Fusion Module. This module is different from FeatureFusionModule - in BiSeNetV1. It uses two ConvModules in `self.attention` whose inter - channel number is calculated by given `scale_factor`, while - FeatureFusionModule in BiSeNetV1 only uses one ConvModule in - `self.conv_atten`. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - scale_factor (int): The number of channel scale factor. - Default: 4. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): The activation config for conv layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - scale_factor=4, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(FeatureFusionModule, self).__init__(init_cfg=init_cfg) - channels = out_channels // scale_factor - self.conv0 = ConvModule( - in_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=act_cfg) - self.attention = nn.Sequential( - nn.AdaptiveAvgPool2d((1, 1)), - ConvModule( - out_channels, - channels, - 1, - norm_cfg=None, - bias=False, - act_cfg=act_cfg), - ConvModule( - channels, - out_channels, - 1, - norm_cfg=None, - bias=False, - act_cfg=None), nn.Sigmoid()) - - def forward(self, spatial_inputs, context_inputs): - inputs = torch.cat([spatial_inputs, context_inputs], dim=1) - x = self.conv0(inputs) - attn = self.attention(x) - x_attn = x * attn - return x_attn + x - - -@BACKBONES.register_module() -class STDCNet(BaseModule): - """This backbone is the implementation of `Rethinking BiSeNet For Real-time - Semantic Segmentation `_. - - Args: - stdc_type (int): The type of backbone structure, - `STDCNet1` and`STDCNet2` denotes two main backbones in paper, - whose FLOPs is 813M and 1446M, respectively. - in_channels (int): The num of input_channels. - channels (tuple[int]): The output channels for each stage. - bottleneck_type (str): The type of STDC Module type, the value must - be 'add' or 'cat'. - norm_cfg (dict): Config dict for normalization layer. - act_cfg (dict): The activation config for conv layers. - num_convs (int): Numbers of conv layer at each STDC Module. - Default: 4. - with_final_conv (bool): Whether add a conv layer at the Module output. - Default: True. - pretrained (str, optional): Model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - - Example: - >>> import torch - >>> stdc_type = 'STDCNet1' - >>> in_channels = 3 - >>> channels = (32, 64, 256, 512, 1024) - >>> bottleneck_type = 'cat' - >>> inputs = torch.rand(1, 3, 1024, 2048) - >>> self = STDCNet(stdc_type, in_channels, - ... channels, bottleneck_type).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 256, 128, 256]) - outputs[1].shape = torch.Size([1, 512, 64, 128]) - outputs[2].shape = torch.Size([1, 1024, 32, 64]) - """ - - arch_settings = { - 'STDCNet1': [(2, 1), (2, 1), (2, 1)], - 'STDCNet2': [(2, 1, 1, 1), (2, 1, 1, 1, 1), (2, 1, 1)] - } - - def __init__(self, - stdc_type, - in_channels, - channels, - bottleneck_type, - norm_cfg, - act_cfg, - num_convs=4, - with_final_conv=False, - pretrained=None, - init_cfg=None): - super(STDCNet, self).__init__(init_cfg=init_cfg) - assert stdc_type in self.arch_settings, \ - f'invalid structure {stdc_type} for STDCNet.' - assert bottleneck_type in ['add', 'cat'],\ - f'bottleneck_type must be `add` or `cat`, got {bottleneck_type}' - - assert len(channels) == 5,\ - f'invalid channels length {len(channels)} for STDCNet.' 
- - self.in_channels = in_channels - self.channels = channels - self.stage_strides = self.arch_settings[stdc_type] - self.prtrained = pretrained - self.num_convs = num_convs - self.with_final_conv = with_final_conv - - self.stages = ModuleList([ - ConvModule( - self.in_channels, - self.channels[0], - kernel_size=3, - stride=2, - padding=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg), - ConvModule( - self.channels[0], - self.channels[1], - kernel_size=3, - stride=2, - padding=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - ]) - # `self.num_shallow_features` is the number of shallow modules in - # `STDCNet`, which is noted as `Stage1` and `Stage2` in original paper. - # They are both not used for following modules like Attention - # Refinement Module and Feature Fusion Module. - # Thus they would be cut from `outs`. Please refer to Figure 4 - # of original paper for more details. - self.num_shallow_features = len(self.stages) - - for strides in self.stage_strides: - idx = len(self.stages) - 1 - self.stages.append( - self._make_stage(self.channels[idx], self.channels[idx + 1], - strides, norm_cfg, act_cfg, bottleneck_type)) - # After appending, `self.stages` is a ModuleList including several - # shallow modules and STDCModules. - # (len(self.stages) == - # self.num_shallow_features + len(self.stage_strides)) - if self.with_final_conv: - self.final_conv = ConvModule( - self.channels[-1], - max(1024, self.channels[-1]), - 1, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def _make_stage(self, in_channels, out_channels, strides, norm_cfg, - act_cfg, bottleneck_type): - layers = [] - for i, stride in enumerate(strides): - layers.append( - STDCModule( - in_channels if i == 0 else out_channels, - out_channels, - stride, - norm_cfg, - act_cfg, - num_convs=self.num_convs, - fusion_type=bottleneck_type)) - return Sequential(*layers) - - def forward(self, x): - outs = [] - for stage in self.stages: - x = stage(x) - outs.append(x) - if self.with_final_conv: - outs[-1] = self.final_conv(outs[-1]) - outs = outs[self.num_shallow_features:] - return tuple(outs) - - -@BACKBONES.register_module() -class STDCContextPathNet(BaseModule): - """STDCNet with Context Path. The `outs` below is a list of three feature - maps from deep to shallow, whose height and width is from small to big, - respectively. The biggest feature map of `outs` is outputted for - `STDCHead`, where Detail Loss would be calculated by Detail Ground-truth. - The other two feature maps are used for Attention Refinement Module, - respectively. Besides, the biggest feature map of `outs` and the last - output of Attention Refinement Module are concatenated for Feature Fusion - Module. Then, this fusion feature map `feat_fuse` would be outputted for - `decode_head`. More details please refer to Figure 4 of original paper. - - Args: - backbone_cfg (dict): Config dict for stdc backbone. - last_in_channels (tuple(int)), The number of channels of last - two feature maps from stdc backbone. Default: (1024, 512). - out_channels (int): The channels of output feature maps. - Default: 128. - ffm_cfg (dict): Config dict for Feature Fusion Module. Default: - `dict(in_channels=512, out_channels=256, scale_factor=4)`. - upsample_mode (str): Algorithm used for upsampling: - ``'nearest'`` | ``'linear'`` | ``'bilinear'`` | ``'bicubic'`` | - ``'trilinear'``. Default: ``'nearest'``. - align_corners (str): align_corners argument of F.interpolate. It - must be `None` if upsample_mode is ``'nearest'``. Default: None. - norm_cfg (dict): Config dict for normalization layer. 
- Default: dict(type='BN'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - - Return: - outputs (tuple): The tuple of list of output feature map for - auxiliary heads and decoder head. - """ - - def __init__(self, - backbone_cfg, - last_in_channels=(1024, 512), - out_channels=128, - ffm_cfg=dict( - in_channels=512, out_channels=256, scale_factor=4), - upsample_mode='nearest', - align_corners=None, - norm_cfg=dict(type='BN'), - init_cfg=None): - super(STDCContextPathNet, self).__init__(init_cfg=init_cfg) - self.backbone = build_backbone(backbone_cfg) - self.arms = ModuleList() - self.convs = ModuleList() - for channels in last_in_channels: - self.arms.append(AttentionRefinementModule(channels, out_channels)) - self.convs.append( - ConvModule( - out_channels, - out_channels, - 3, - padding=1, - norm_cfg=norm_cfg)) - self.conv_avg = ConvModule( - last_in_channels[0], out_channels, 1, norm_cfg=norm_cfg) - - self.ffm = FeatureFusionModule(**ffm_cfg) - - self.upsample_mode = upsample_mode - self.align_corners = align_corners - - def forward(self, x): - outs = list(self.backbone(x)) - avg = F.adaptive_avg_pool2d(outs[-1], 1) - avg_feat = self.conv_avg(avg) - - feature_up = resize( - avg_feat, - size=outs[-1].shape[2:], - mode=self.upsample_mode, - align_corners=self.align_corners) - arms_out = [] - for i in range(len(self.arms)): - x_arm = self.arms[i](outs[len(outs) - 1 - i]) + feature_up - feature_up = resize( - x_arm, - size=outs[len(outs) - 1 - i - 1].shape[2:], - mode=self.upsample_mode, - align_corners=self.align_corners) - feature_up = self.convs[i](feature_up) - arms_out.append(feature_up) - - feat_fuse = self.ffm(outs[0], arms_out[1]) - - # The `outputs` has four feature maps. - # `outs[0]` is outputted for `STDCHead` auxiliary head. - # Two feature maps of `arms_out` are outputted for auxiliary head. - # `feat_fuse` is outputted for decoder head. - outputs = [outs[0]] + list(arms_out) + [feat_fuse] - return tuple(outputs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/swin.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/swin.py deleted file mode 100644 index a360ab01..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/swin.py +++ /dev/null @@ -1,755 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from collections import OrderedDict -from copy import deepcopy - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_norm_layer -from mmcv.cnn.bricks.transformer import FFN, build_dropout -from mmcv.cnn.utils.weight_init import (constant_init, trunc_normal_, - trunc_normal_init) -from mmcv.runner import BaseModule, ModuleList, _load_checkpoint -from mmcv.utils import to_2tuple - -from ...utils import get_root_logger -from ..builder import BACKBONES -from ..utils.embed import PatchEmbed, PatchMerging - - -class WindowMSA(BaseModule): - """Window based multi-head self-attention (W-MSA) module with relative - position bias. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (tuple[int]): The height and width of the window. - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. 
- Default: 0.0 - proj_drop_rate (float, optional): Dropout ratio of output. Default: 0. - init_cfg (dict | None, optional): The Config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - window_size, - qkv_bias=True, - qk_scale=None, - attn_drop_rate=0., - proj_drop_rate=0., - init_cfg=None): - - super().__init__(init_cfg=init_cfg) - self.embed_dims = embed_dims - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_embed_dims = embed_dims // num_heads - self.scale = qk_scale or head_embed_dims**-0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), - num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # About 2x faster than original impl - Wh, Ww = self.window_size - rel_index_coords = self.double_step_seq(2 * Ww - 1, Wh, 1, Ww) - rel_position_index = rel_index_coords + rel_index_coords.T - rel_position_index = rel_position_index.flip(1).contiguous() - self.register_buffer('relative_position_index', rel_position_index) - - self.qkv = nn.Linear(embed_dims, embed_dims * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop_rate) - self.proj = nn.Linear(embed_dims, embed_dims) - self.proj_drop = nn.Dropout(proj_drop_rate) - - self.softmax = nn.Softmax(dim=-1) - - def init_weights(self): - trunc_normal_(self.relative_position_bias_table, std=0.02) - - def forward(self, x, mask=None): - """ - Args: - - x (tensor): input features with shape of (num_windows*B, N, C) - mask (tensor | None, Optional): mask with shape of (num_windows, - Wh*Ww, Wh*Ww), value should be between (-inf, 0]. - """ - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, - C // self.num_heads).permute(2, 0, 3, 1, 4) - # make torchscript happy (cannot use tensor as tuple) - q, k, v = qkv[0], qkv[1], qkv[2] - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], - self.window_size[0] * self.window_size[1], - -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B // nW, nW, self.num_heads, N, - N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - @staticmethod - def double_step_seq(step1, len1, step2, len2): - seq1 = torch.arange(0, step1 * len1, step1) - seq2 = torch.arange(0, step2 * len2, step2) - return (seq1[:, None] + seq2[None, :]).reshape(1, -1) - - -class ShiftWindowMSA(BaseModule): - """Shifted Window Multihead Self-Attention Module. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): The height and width of the window. - shift_size (int, optional): The shift step of each window towards - right-bottom. If zero, act as regular window-msa. Defaults to 0. - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Defaults: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. 
- Defaults: 0. - proj_drop_rate (float, optional): Dropout ratio of output. - Defaults: 0. - dropout_layer (dict, optional): The dropout_layer used before output. - Defaults: dict(type='DropPath', drop_prob=0.). - init_cfg (dict, optional): The extra config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - window_size, - shift_size=0, - qkv_bias=True, - qk_scale=None, - attn_drop_rate=0, - proj_drop_rate=0, - dropout_layer=dict(type='DropPath', drop_prob=0.), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - - self.window_size = window_size - self.shift_size = shift_size - assert 0 <= self.shift_size < self.window_size - - self.w_msa = WindowMSA( - embed_dims=embed_dims, - num_heads=num_heads, - window_size=to_2tuple(window_size), - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop_rate=attn_drop_rate, - proj_drop_rate=proj_drop_rate, - init_cfg=None) - - self.drop = build_dropout(dropout_layer) - - def forward(self, query, hw_shape): - B, L, C = query.shape - H, W = hw_shape - assert L == H * W, 'input feature has wrong size' - query = query.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - query = F.pad(query, (0, 0, 0, pad_r, 0, pad_b)) - H_pad, W_pad = query.shape[1], query.shape[2] - - # cyclic shift - if self.shift_size > 0: - shifted_query = torch.roll( - query, - shifts=(-self.shift_size, -self.shift_size), - dims=(1, 2)) - - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H_pad, W_pad, 1), device=query.device) - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, - -self.shift_size), slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, - -self.shift_size), slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - # nW, window_size, window_size, 1 - mask_windows = self.window_partition(img_mask) - mask_windows = mask_windows.view( - -1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, - float(-100.0)).masked_fill( - attn_mask == 0, float(0.0)) - else: - shifted_query = query - attn_mask = None - - # nW*B, window_size, window_size, C - query_windows = self.window_partition(shifted_query) - # nW*B, window_size*window_size, C - query_windows = query_windows.view(-1, self.window_size**2, C) - - # W-MSA/SW-MSA (nW*B, window_size*window_size, C) - attn_windows = self.w_msa(query_windows, mask=attn_mask) - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, - self.window_size, C) - - # B H' W' C - shifted_x = self.window_reverse(attn_windows, H_pad, W_pad) - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll( - shifted_x, - shifts=(self.shift_size, self.shift_size), - dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - x = self.drop(x) - return x - - def window_reverse(self, windows, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - window_size = self.window_size - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, 
window_size, - window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - def window_partition(self, x): - """ - Args: - x: (B, H, W, C) - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - window_size = self.window_size - x = x.view(B, H // window_size, window_size, W // window_size, - window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous() - windows = windows.view(-1, window_size, window_size, C) - return windows - - -class SwinBlock(BaseModule): - """" - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - window_size (int, optional): The local window scale. Default: 7. - shift (bool, optional): whether to shift window or not. Default False. - qkv_bias (bool, optional): enable bias for qkv if True. Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - drop_rate (float, optional): Dropout rate. Default: 0. - attn_drop_rate (float, optional): Attention dropout rate. Default: 0. - drop_path_rate (float, optional): Stochastic depth rate. Default: 0. - act_cfg (dict, optional): The config dict of activation function. - Default: dict(type='GELU'). - norm_cfg (dict, optional): The config dict of normalization. - Default: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - init_cfg (dict | list | None, optional): The init config. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - window_size=7, - shift=False, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - init_cfg=None): - - super(SwinBlock, self).__init__(init_cfg=init_cfg) - - self.with_cp = with_cp - - self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] - self.attn = ShiftWindowMSA( - embed_dims=embed_dims, - num_heads=num_heads, - window_size=window_size, - shift_size=window_size // 2 if shift else 0, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop_rate=attn_drop_rate, - proj_drop_rate=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - init_cfg=None) - - self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=2, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg, - add_identity=True, - init_cfg=None) - - def forward(self, x, hw_shape): - - def _inner_forward(x): - identity = x - x = self.norm1(x) - x = self.attn(x, hw_shape) - - x = x + identity - - identity = x - x = self.norm2(x) - x = self.ffn(x, identity=identity) - - return x - - if self.with_cp and x.requires_grad: - x = cp.checkpoint(_inner_forward, x) - else: - x = _inner_forward(x) - - return x - - -class SwinBlockSequence(BaseModule): - """Implements one stage in Swin Transformer. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - depth (int): The number of blocks in this stage. - window_size (int, optional): The local window scale. Default: 7. - qkv_bias (bool, optional): enable bias for qkv if True. Default: True. 
- qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - drop_rate (float, optional): Dropout rate. Default: 0. - attn_drop_rate (float, optional): Attention dropout rate. Default: 0. - drop_path_rate (float | list[float], optional): Stochastic depth - rate. Default: 0. - downsample (BaseModule | None, optional): The downsample operation - module. Default: None. - act_cfg (dict, optional): The config dict of activation function. - Default: dict(type='GELU'). - norm_cfg (dict, optional): The config dict of normalization. - Default: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - init_cfg (dict | list | None, optional): The init config. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - depth, - window_size=7, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - downsample=None, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - - if isinstance(drop_path_rate, list): - drop_path_rates = drop_path_rate - assert len(drop_path_rates) == depth - else: - drop_path_rates = [deepcopy(drop_path_rate) for _ in range(depth)] - - self.blocks = ModuleList() - for i in range(depth): - block = SwinBlock( - embed_dims=embed_dims, - num_heads=num_heads, - feedforward_channels=feedforward_channels, - window_size=window_size, - shift=False if i % 2 == 0 else True, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=drop_path_rates[i], - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp, - init_cfg=None) - self.blocks.append(block) - - self.downsample = downsample - - def forward(self, x, hw_shape): - for block in self.blocks: - x = block(x, hw_shape) - - if self.downsample: - x_down, down_hw_shape = self.downsample(x, hw_shape) - return x_down, down_hw_shape, x, hw_shape - else: - return x, hw_shape, x, hw_shape - - -@BACKBONES.register_module() -class SwinTransformer(BaseModule): - """Swin Transformer backbone. - - This backbone is the implementation of `Swin Transformer: - Hierarchical Vision Transformer using Shifted - Windows `_. - Inspiration from https://github.com/microsoft/Swin-Transformer. - - Args: - pretrain_img_size (int | tuple[int]): The size of input image when - pretrain. Defaults: 224. - in_channels (int): The num of input channels. - Defaults: 3. - embed_dims (int): The feature dimension. Default: 96. - patch_size (int | tuple[int]): Patch size. Default: 4. - window_size (int): Window size. Default: 7. - mlp_ratio (int): Ratio of mlp hidden dim to embedding dim. - Default: 4. - depths (tuple[int]): Depths of each Swin Transformer stage. - Default: (2, 2, 6, 2). - num_heads (tuple[int]): Parallel attention heads of each Swin - Transformer stage. Default: (3, 6, 12, 24). - strides (tuple[int]): The patch merging or patch embedding stride of - each Swin Transformer stage. (In swin, we set kernel size equal to - stride.) Default: (4, 2, 2, 2). - out_indices (tuple[int]): Output from which stages. - Default: (0, 1, 2, 3). - qkv_bias (bool, optional): If True, add a learnable bias to query, key, - value. Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. 
- patch_norm (bool): If add a norm layer for patch embed and patch - merging. Default: True. - drop_rate (float): Dropout rate. Defaults: 0. - attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Defaults: 0.1. - use_abs_pos_embed (bool): If True, add absolute position embedding to - the patch embedding. Defaults: False. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LN'). - norm_cfg (dict): Config dict for normalization layer at - output of backone. Defaults: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - pretrained (str, optional): model pretrained path. Default: None. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - pretrain_img_size=224, - in_channels=3, - embed_dims=96, - patch_size=4, - window_size=7, - mlp_ratio=4, - depths=(2, 2, 6, 2), - num_heads=(3, 6, 12, 24), - strides=(4, 2, 2, 2), - out_indices=(0, 1, 2, 3), - qkv_bias=True, - qk_scale=None, - patch_norm=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1, - use_abs_pos_embed=False, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - pretrained=None, - frozen_stages=-1, - init_cfg=None): - self.frozen_stages = frozen_stages - - if isinstance(pretrain_img_size, int): - pretrain_img_size = to_2tuple(pretrain_img_size) - elif isinstance(pretrain_img_size, tuple): - if len(pretrain_img_size) == 1: - pretrain_img_size = to_2tuple(pretrain_img_size[0]) - assert len(pretrain_img_size) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(pretrain_img_size)}' - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - init_cfg = init_cfg - else: - raise TypeError('pretrained must be a str or None') - - super(SwinTransformer, self).__init__(init_cfg=init_cfg) - - num_layers = len(depths) - self.out_indices = out_indices - self.use_abs_pos_embed = use_abs_pos_embed - - assert strides[0] == patch_size, 'Use non-overlapping patch embed.' 
- - self.patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims, - conv_type='Conv2d', - kernel_size=patch_size, - stride=strides[0], - padding='corner', - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None) - - if self.use_abs_pos_embed: - patch_row = pretrain_img_size[0] // patch_size - patch_col = pretrain_img_size[1] // patch_size - num_patches = patch_row * patch_col - self.absolute_pos_embed = nn.Parameter( - torch.zeros((1, num_patches, embed_dims))) - - self.drop_after_pos = nn.Dropout(p=drop_rate) - - # set stochastic depth decay rule - total_depth = sum(depths) - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, total_depth) - ] - - self.stages = ModuleList() - in_channels = embed_dims - for i in range(num_layers): - if i < num_layers - 1: - downsample = PatchMerging( - in_channels=in_channels, - out_channels=2 * in_channels, - stride=strides[i + 1], - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None) - else: - downsample = None - - stage = SwinBlockSequence( - embed_dims=in_channels, - num_heads=num_heads[i], - feedforward_channels=mlp_ratio * in_channels, - depth=depths[i], - window_size=window_size, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[sum(depths[:i]):sum(depths[:i + 1])], - downsample=downsample, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp, - init_cfg=None) - self.stages.append(stage) - if downsample: - in_channels = downsample.out_channels - - self.num_features = [int(embed_dims * 2**i) for i in range(num_layers)] - # Add a norm layer for each output - for i in out_indices: - layer = build_norm_layer(norm_cfg, self.num_features[i])[1] - layer_name = f'norm{i}' - self.add_module(layer_name, layer) - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - if self.use_abs_pos_embed: - self.absolute_pos_embed.requires_grad = False - self.drop_after_pos.eval() - - for i in range(1, self.frozen_stages + 1): - - if (i - 1) in self.out_indices: - norm_layer = getattr(self, f'norm{i-1}') - norm_layer.eval() - for param in norm_layer.parameters(): - param.requires_grad = False - - m = self.stages[i - 1] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self): - logger = get_root_logger() - if self.init_cfg is None: - logger.warn(f'No pre-trained weights for ' - f'{self.__class__.__name__}, ' - f'training start from scratch') - if self.use_abs_pos_embed: - trunc_normal_(self.absolute_pos_embed, std=0.02) - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) - elif isinstance(m, nn.LayerNorm): - constant_init(m, val=1.0, bias=0.) 
- else: - assert 'checkpoint' in self.init_cfg, f'Only support ' \ - f'specify `Pretrained` in ' \ - f'`init_cfg` in ' \ - f'{self.__class__.__name__} ' - ckpt = _load_checkpoint( - self.init_cfg['checkpoint'], logger=logger, map_location='cpu') - if 'state_dict' in ckpt: - _state_dict = ckpt['state_dict'] - elif 'model' in ckpt: - _state_dict = ckpt['model'] - else: - _state_dict = ckpt - - state_dict = OrderedDict() - for k, v in _state_dict.items(): - if k.startswith('backbone.'): - state_dict[k[9:]] = v - else: - state_dict[k] = v - - # strip prefix of state_dict - if list(state_dict.keys())[0].startswith('module.'): - state_dict = {k[7:]: v for k, v in state_dict.items()} - - # reshape absolute position embedding - if state_dict.get('absolute_pos_embed') is not None: - absolute_pos_embed = state_dict['absolute_pos_embed'] - N1, L, C1 = absolute_pos_embed.size() - N2, C2, H, W = self.absolute_pos_embed.size() - if N1 != N2 or C1 != C2 or L != H * W: - logger.warning('Error in loading absolute_pos_embed, pass') - else: - state_dict['absolute_pos_embed'] = absolute_pos_embed.view( - N2, H, W, C2).permute(0, 3, 1, 2).contiguous() - - # interpolate position bias table if needed - relative_position_bias_table_keys = [ - k for k in state_dict.keys() - if 'relative_position_bias_table' in k - ] - for table_key in relative_position_bias_table_keys: - table_pretrained = state_dict[table_key] - table_current = self.state_dict()[table_key] - L1, nH1 = table_pretrained.size() - L2, nH2 = table_current.size() - if nH1 != nH2: - logger.warning(f'Error in loading {table_key}, pass') - elif L1 != L2: - S1 = int(L1**0.5) - S2 = int(L2**0.5) - table_pretrained_resized = F.interpolate( - table_pretrained.permute(1, 0).reshape(1, nH1, S1, S1), - size=(S2, S2), - mode='bicubic') - state_dict[table_key] = table_pretrained_resized.view( - nH2, L2).permute(1, 0).contiguous() - - # load state_dict - self.load_state_dict(state_dict, False) - - def forward(self, x): - x, hw_shape = self.patch_embed(x) - - if self.use_abs_pos_embed: - x = x + self.absolute_pos_embed - x = self.drop_after_pos(x) - - outs = [] - for i, stage in enumerate(self.stages): - x, hw_shape, out, out_hw_shape = stage(x, hw_shape) - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - out = norm_layer(out) - out = out.view(-1, *out_hw_shape, - self.num_features[i]).permute(0, 3, 1, - 2).contiguous() - outs.append(out) - - return outs diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/timm_backbone.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/timm_backbone.py deleted file mode 100644 index 01b29fc5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/timm_backbone.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -try: - import timm -except ImportError: - timm = None - -from mmcv.cnn.bricks.registry import NORM_LAYERS -from mmcv.runner import BaseModule - -from ..builder import BACKBONES - - -@BACKBONES.register_module() -class TIMMBackbone(BaseModule): - """Wrapper to use backbones from timm library. More details can be found in - `timm `_ . - - Args: - model_name (str): Name of timm model to instantiate. - pretrained (bool): Load pretrained weights if True. - checkpoint_path (str): Path of checkpoint to load after - model is initialized. - in_channels (int): Number of input image channels. Default: 3. 
- init_cfg (dict, optional): Initialization config dict - **kwargs: Other timm & model specific arguments. - """ - - def __init__( - self, - model_name, - features_only=True, - pretrained=True, - checkpoint_path='', - in_channels=3, - init_cfg=None, - **kwargs, - ): - if timm is None: - raise RuntimeError('timm is not installed') - super(TIMMBackbone, self).__init__(init_cfg) - if 'norm_layer' in kwargs: - kwargs['norm_layer'] = NORM_LAYERS.get(kwargs['norm_layer']) - self.timm_model = timm.create_model( - model_name=model_name, - features_only=features_only, - pretrained=pretrained, - in_chans=in_channels, - checkpoint_path=checkpoint_path, - **kwargs, - ) - - # Make unused parameters None - self.timm_model.global_pool = None - self.timm_model.fc = None - self.timm_model.classifier = None - - # Hack to use pretrained weights from timm - if pretrained or checkpoint_path: - self._is_init = True - - def forward(self, x): - features = self.timm_model(x) - return features diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/twins.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/twins.py deleted file mode 100644 index b41325b8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/twins.py +++ /dev/null @@ -1,587 +0,0 @@ -import math -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import build_norm_layer -from mmcv.cnn.bricks.drop import build_dropout -from mmcv.cnn.bricks.transformer import FFN -from mmcv.cnn.utils.weight_init import (constant_init, normal_init, - trunc_normal_init) -from mmcv.runner import BaseModule, ModuleList -from torch.nn.modules.batchnorm import _BatchNorm - -from mmseg.models.backbones.mit import EfficientMultiheadAttention -from mmseg.models.builder import BACKBONES -from ..utils.embed import PatchEmbed - - -class GlobalSubsampledAttention(EfficientMultiheadAttention): - """Global Sub-sampled Attention (Spatial Reduction Attention) - - This module is modified from EfficientMultiheadAttention, - which is a module from mmseg.models.backbones.mit.py. - Specifically, there is no difference between - `GlobalSubsampledAttention` and `EfficientMultiheadAttention`, - `GlobalSubsampledAttention` is built as a brand new class - because it is renamed as `Global sub-sampled attention (GSA)` - in paper. - - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. Default: None. - batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dims) - or (n, batch, embed_dims). Default: False. - qkv_bias (bool): enable bias for qkv if True. Default: True. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - sr_ratio (int): The ratio of spatial reduction of GSA of PCPVT. - Default: 1. - init_cfg (dict, optional): The Config for initialization. - Defaults to None. 
- """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=None, - batch_first=True, - qkv_bias=True, - norm_cfg=dict(type='LN'), - sr_ratio=1, - init_cfg=None): - super(GlobalSubsampledAttention, self).__init__( - embed_dims, - num_heads, - attn_drop=attn_drop, - proj_drop=proj_drop, - dropout_layer=dropout_layer, - batch_first=batch_first, - qkv_bias=qkv_bias, - norm_cfg=norm_cfg, - sr_ratio=sr_ratio, - init_cfg=init_cfg) - - -class GSAEncoderLayer(BaseModule): - """Implements one encoder layer with GSA. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - drop_rate (float): Probability of an element to be zeroed - after the feed forward layer. Default: 0.0. - attn_drop_rate (float): The drop out rate for attention layer. - Default: 0.0. - drop_path_rate (float): Stochastic depth rate. Default 0.0. - num_fcs (int): The number of fully-connected layers for FFNs. - Default: 2. - qkv_bias (bool): Enable bias for qkv if True. Default: True - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - sr_ratio (float): Kernel_size of conv in Attention modules. Default: 1. - init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - num_fcs=2, - qkv_bias=True, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - sr_ratio=1., - init_cfg=None): - super(GSAEncoderLayer, self).__init__(init_cfg=init_cfg) - - self.norm1 = build_norm_layer(norm_cfg, embed_dims, postfix=1)[1] - self.attn = GlobalSubsampledAttention( - embed_dims=embed_dims, - num_heads=num_heads, - attn_drop=attn_drop_rate, - proj_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - qkv_bias=qkv_bias, - norm_cfg=norm_cfg, - sr_ratio=sr_ratio) - - self.norm2 = build_norm_layer(norm_cfg, embed_dims, postfix=2)[1] - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=num_fcs, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg, - add_identity=False) - - self.drop_path = build_dropout( - dict(type='DropPath', drop_prob=drop_path_rate) - ) if drop_path_rate > 0. else nn.Identity() - - def forward(self, x, hw_shape): - x = x + self.drop_path(self.attn(self.norm1(x), hw_shape, identity=0.)) - x = x + self.drop_path(self.ffn(self.norm2(x))) - return x - - -class LocallyGroupedSelfAttention(BaseModule): - """Locally-grouped Self Attention (LSA) module. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. Default: 8 - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: False. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. - Default: 0.0 - proj_drop_rate (float, optional): Dropout ratio of output. Default: 0. - window_size(int): Window size of LSA. Default: 1. - init_cfg (dict, optional): The Config for initialization. - Defaults to None. 
- """ - - def __init__(self, - embed_dims, - num_heads=8, - qkv_bias=False, - qk_scale=None, - attn_drop_rate=0., - proj_drop_rate=0., - window_size=1, - init_cfg=None): - super(LocallyGroupedSelfAttention, self).__init__(init_cfg=init_cfg) - - assert embed_dims % num_heads == 0, f'dim {embed_dims} should be ' \ - f'divided by num_heads ' \ - f'{num_heads}.' - self.embed_dims = embed_dims - self.num_heads = num_heads - head_dim = embed_dims // num_heads - self.scale = qk_scale or head_dim**-0.5 - - self.qkv = nn.Linear(embed_dims, embed_dims * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop_rate) - self.proj = nn.Linear(embed_dims, embed_dims) - self.proj_drop = nn.Dropout(proj_drop_rate) - self.window_size = window_size - - def forward(self, x, hw_shape): - b, n, c = x.shape - h, w = hw_shape - x = x.view(b, h, w, c) - - # pad feature maps to multiples of Local-groups - pad_l = pad_t = 0 - pad_r = (self.window_size - w % self.window_size) % self.window_size - pad_b = (self.window_size - h % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - - # calculate attention mask for LSA - Hp, Wp = x.shape[1:-1] - _h, _w = Hp // self.window_size, Wp // self.window_size - mask = torch.zeros((1, Hp, Wp), device=x.device) - mask[:, -pad_b:, :].fill_(1) - mask[:, :, -pad_r:].fill_(1) - - # [B, _h, _w, window_size, window_size, C] - x = x.reshape(b, _h, self.window_size, _w, self.window_size, - c).transpose(2, 3) - mask = mask.reshape(1, _h, self.window_size, _w, - self.window_size).transpose(2, 3).reshape( - 1, _h * _w, - self.window_size * self.window_size) - # [1, _h*_w, window_size*window_size, window_size*window_size] - attn_mask = mask.unsqueeze(2) - mask.unsqueeze(3) - attn_mask = attn_mask.masked_fill(attn_mask != 0, - float(-1000.0)).masked_fill( - attn_mask == 0, float(0.0)) - - # [3, B, _w*_h, nhead, window_size*window_size, dim] - qkv = self.qkv(x).reshape(b, _h * _w, - self.window_size * self.window_size, 3, - self.num_heads, c // self.num_heads).permute( - 3, 0, 1, 4, 2, 5) - q, k, v = qkv[0], qkv[1], qkv[2] - # [B, _h*_w, n_head, window_size*window_size, window_size*window_size] - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn + attn_mask.unsqueeze(2) - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - attn = (attn @ v).transpose(2, 3).reshape(b, _h, _w, self.window_size, - self.window_size, c) - x = attn.transpose(2, 3).reshape(b, _h * self.window_size, - _w * self.window_size, c) - if pad_r > 0 or pad_b > 0: - x = x[:, :h, :w, :].contiguous() - - x = x.reshape(b, n, c) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class LSAEncoderLayer(BaseModule): - """Implements one encoder layer in Twins-SVT. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - drop_rate (float): Probability of an element to be zeroed - after the feed forward layer. Default: 0.0. - attn_drop_rate (float, optional): Dropout ratio of attention weight. - Default: 0.0 - drop_path_rate (float): Stochastic depth rate. Default 0.0. - num_fcs (int): The number of fully-connected layers for FFNs. - Default: 2. - qkv_bias (bool): Enable bias for qkv if True. Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). 
- window_size (int): Window size of LSA. Default: 1. - init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - num_fcs=2, - qkv_bias=True, - qk_scale=None, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - window_size=1, - init_cfg=None): - - super(LSAEncoderLayer, self).__init__(init_cfg=init_cfg) - - self.norm1 = build_norm_layer(norm_cfg, embed_dims, postfix=1)[1] - self.attn = LocallyGroupedSelfAttention(embed_dims, num_heads, - qkv_bias, qk_scale, - attn_drop_rate, drop_rate, - window_size) - - self.norm2 = build_norm_layer(norm_cfg, embed_dims, postfix=2)[1] - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=num_fcs, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg, - add_identity=False) - - self.drop_path = build_dropout( - dict(type='DropPath', drop_prob=drop_path_rate) - ) if drop_path_rate > 0. else nn.Identity() - - def forward(self, x, hw_shape): - x = x + self.drop_path(self.attn(self.norm1(x), hw_shape)) - x = x + self.drop_path(self.ffn(self.norm2(x))) - return x - - -class ConditionalPositionEncoding(BaseModule): - """The Conditional Position Encoding (CPE) module. - - The CPE is the implementation of 'Conditional Positional Encodings - for Vision Transformers '_. - - Args: - in_channels (int): Number of input channels. - embed_dims (int): The feature dimension. Default: 768. - stride (int): Stride of conv layer. Default: 1. - """ - - def __init__(self, in_channels, embed_dims=768, stride=1, init_cfg=None): - super(ConditionalPositionEncoding, self).__init__(init_cfg=init_cfg) - self.proj = nn.Conv2d( - in_channels, - embed_dims, - kernel_size=3, - stride=stride, - padding=1, - bias=True, - groups=embed_dims) - self.stride = stride - - def forward(self, x, hw_shape): - b, n, c = x.shape - h, w = hw_shape - feat_token = x - cnn_feat = feat_token.transpose(1, 2).view(b, c, h, w) - if self.stride == 1: - x = self.proj(cnn_feat) + cnn_feat - else: - x = self.proj(cnn_feat) - x = x.flatten(2).transpose(1, 2) - return x - - -@BACKBONES.register_module() -class PCPVT(BaseModule): - """The backbone of Twins-PCPVT. - - This backbone is the implementation of `Twins: Revisiting the Design - of Spatial Attention in Vision Transformers - `_. - - Args: - in_channels (int): Number of input channels. Default: 3. - embed_dims (list): Embedding dimension. Default: [64, 128, 256, 512]. - patch_sizes (list): The patch sizes. Default: [4, 2, 2, 2]. - strides (list): The strides. Default: [4, 2, 2, 2]. - num_heads (int): Number of attention heads. Default: [1, 2, 4, 8]. - mlp_ratios (int): Ratio of mlp hidden dim to embedding dim. - Default: [4, 4, 4, 4]. - out_indices (tuple[int]): Output from which stages. - Default: (0, 1, 2, 3). - qkv_bias (bool): Enable bias for qkv if True. Default: False. - drop_rate (float): Probability of an element to be zeroed. - Default 0. - attn_drop_rate (float): The drop out rate for attention layer. - Default 0.0 - drop_path_rate (float): Stochastic depth rate. Default 0.0 - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN') - depths (list): Depths of each stage. Default [3, 4, 6, 3] - sr_ratios (list): Kernel_size of conv in each Attn module in - Transformer encoder layer. Default: [8, 4, 2, 1]. - norm_after_stage(bool): Add extra norm. Default False. 
- init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - in_channels=3, - embed_dims=[64, 128, 256, 512], - patch_sizes=[4, 2, 2, 2], - strides=[4, 2, 2, 2], - num_heads=[1, 2, 4, 8], - mlp_ratios=[4, 4, 4, 4], - out_indices=(0, 1, 2, 3), - qkv_bias=False, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - norm_cfg=dict(type='LN'), - depths=[3, 4, 6, 3], - sr_ratios=[8, 4, 2, 1], - norm_after_stage=False, - pretrained=None, - init_cfg=None): - super(PCPVT, self).__init__(init_cfg=init_cfg) - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be set at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is not None: - raise TypeError('pretrained must be a str or None') - self.depths = depths - - # patch_embed - self.patch_embeds = ModuleList() - self.position_encoding_drops = ModuleList() - self.layers = ModuleList() - - for i in range(len(depths)): - self.patch_embeds.append( - PatchEmbed( - in_channels=in_channels if i == 0 else embed_dims[i - 1], - embed_dims=embed_dims[i], - conv_type='Conv2d', - kernel_size=patch_sizes[i], - stride=strides[i], - padding='corner', - norm_cfg=norm_cfg)) - - self.position_encoding_drops.append(nn.Dropout(p=drop_rate)) - - self.position_encodings = ModuleList([ - ConditionalPositionEncoding(embed_dim, embed_dim) - for embed_dim in embed_dims - ]) - - # transformer encoder - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) - ] # stochastic depth decay rule - cur = 0 - - for k in range(len(depths)): - _block = ModuleList([ - GSAEncoderLayer( - embed_dims=embed_dims[k], - num_heads=num_heads[k], - feedforward_channels=mlp_ratios[k] * embed_dims[k], - attn_drop_rate=attn_drop_rate, - drop_rate=drop_rate, - drop_path_rate=dpr[cur + i], - num_fcs=2, - qkv_bias=qkv_bias, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - sr_ratio=sr_ratios[k]) for i in range(depths[k]) - ]) - self.layers.append(_block) - cur += depths[k] - - self.norm_name, norm = build_norm_layer( - norm_cfg, embed_dims[-1], postfix=1) - - self.out_indices = out_indices - self.norm_after_stage = norm_after_stage - if self.norm_after_stage: - self.norm_list = ModuleList() - for dim in embed_dims: - self.norm_list.append(build_norm_layer(norm_cfg, dim)[1]) - - def init_weights(self): - if self.init_cfg is not None: - super(PCPVT, self).init_weights() - else: - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) - elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): - constant_init(m, val=1.0, bias=0.) 
- elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[ - 1] * m.out_channels - fan_out //= m.groups - normal_init( - m, mean=0, std=math.sqrt(2.0 / fan_out), bias=0) - - def forward(self, x): - outputs = list() - - b = x.shape[0] - - for i in range(len(self.depths)): - x, hw_shape = self.patch_embeds[i](x) - h, w = hw_shape - x = self.position_encoding_drops[i](x) - for j, blk in enumerate(self.layers[i]): - x = blk(x, hw_shape) - if j == 0: - x = self.position_encodings[i](x, hw_shape) - if self.norm_after_stage: - x = self.norm_list[i](x) - x = x.reshape(b, h, w, -1).permute(0, 3, 1, 2).contiguous() - - if i in self.out_indices: - outputs.append(x) - - return tuple(outputs) - - -@BACKBONES.register_module() -class SVT(PCPVT): - """The backbone of Twins-SVT. - - This backbone is the implementation of `Twins: Revisiting the Design - of Spatial Attention in Vision Transformers - `_. - - Args: - in_channels (int): Number of input channels. Default: 3. - embed_dims (list): Embedding dimension. Default: [64, 128, 256, 512]. - patch_sizes (list): The patch sizes. Default: [4, 2, 2, 2]. - strides (list): The strides. Default: [4, 2, 2, 2]. - num_heads (int): Number of attention heads. Default: [1, 2, 4]. - mlp_ratios (int): Ratio of mlp hidden dim to embedding dim. - Default: [4, 4, 4]. - out_indices (tuple[int]): Output from which stages. - Default: (0, 1, 2, 3). - qkv_bias (bool): Enable bias for qkv if True. Default: False. - drop_rate (float): Dropout rate. Default 0. - attn_drop_rate (float): Dropout ratio of attention weight. - Default 0.0 - drop_path_rate (float): Stochastic depth rate. Default 0.2. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN') - depths (list): Depths of each stage. Default [4, 4, 4]. - sr_ratios (list): Kernel_size of conv in each Attn module in - Transformer encoder layer. Default: [4, 2, 1]. - windiow_sizes (list): Window size of LSA. Default: [7, 7, 7], - input_features_slice(bool): Input features need slice. Default: False. - norm_after_stage(bool): Add extra norm. Default False. - strides (list): Strides in patch-Embedding modules. Default: (2, 2, 2) - init_cfg (dict, optional): The Config for initialization. - Defaults to None. 
- """ - - def __init__(self, - in_channels=3, - embed_dims=[64, 128, 256], - patch_sizes=[4, 2, 2, 2], - strides=[4, 2, 2, 2], - num_heads=[1, 2, 4], - mlp_ratios=[4, 4, 4], - out_indices=(0, 1, 2, 3), - qkv_bias=False, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - norm_cfg=dict(type='LN'), - depths=[4, 4, 4], - sr_ratios=[4, 2, 1], - windiow_sizes=[7, 7, 7], - norm_after_stage=True, - pretrained=None, - init_cfg=None): - super(SVT, self).__init__(in_channels, embed_dims, patch_sizes, - strides, num_heads, mlp_ratios, out_indices, - qkv_bias, drop_rate, attn_drop_rate, - drop_path_rate, norm_cfg, depths, sr_ratios, - norm_after_stage, pretrained, init_cfg) - # transformer encoder - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) - ] # stochastic depth decay rule - - for k in range(len(depths)): - for i in range(depths[k]): - if i % 2 == 0: - self.layers[k][i] = \ - LSAEncoderLayer( - embed_dims=embed_dims[k], - num_heads=num_heads[k], - feedforward_channels=mlp_ratios[k] * embed_dims[k], - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[sum(depths[:k])+i], - qkv_bias=qkv_bias, - window_size=windiow_sizes[k]) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/unet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/unet.py deleted file mode 100644 index c2d33667..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/unet.py +++ /dev/null @@ -1,438 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import (UPSAMPLE_LAYERS, ConvModule, build_activation_layer, - build_norm_layer) -from mmcv.runner import BaseModule -from mmcv.utils.parrots_wrapper import _BatchNorm - -from mmseg.ops import Upsample -from ..builder import BACKBONES -from ..utils import UpConvBlock - - -class BasicConvBlock(nn.Module): - """Basic convolutional block for UNet. - - This module consists of several plain convolutional layers. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers. Default: 2. - stride (int): Whether use stride convolution to downsample - the input feature map. If stride=2, it only uses stride convolution - in the first convolutional layer to downsample the input feature - map. Options are 1 or 2. Default: 1. - dilation (int): Whether use dilated convolution to expand the - receptive field. Set dilation rate of each convolutional layer and - the dilation rate of the first convolutional layer is always 1. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - dcn=None, - plugins=None): - super(BasicConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.with_cp = with_cp - convs = [] - for i in range(num_convs): - convs.append( - ConvModule( - in_channels=in_channels if i == 0 else out_channels, - out_channels=out_channels, - kernel_size=3, - stride=stride if i == 0 else 1, - dilation=1 if i == 0 else dilation, - padding=1 if i == 0 else dilation, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - self.convs = nn.Sequential(*convs) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.convs, x) - else: - out = self.convs(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class DeconvModule(nn.Module): - """Deconvolution upsample module in decoder for UNet (2X upsample). - - This module uses deconvolution to upsample feature map in the decoder - of UNet. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - kernel_size (int): Kernel size of the convolutional layer. Default: 4. - """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - kernel_size=4, - scale_factor=2): - super(DeconvModule, self).__init__() - - assert (kernel_size - scale_factor >= 0) and\ - (kernel_size - scale_factor) % 2 == 0,\ - f'kernel_size should be greater than or equal to scale_factor '\ - f'and (kernel_size - scale_factor) should be even numbers, '\ - f'while the kernel size is {kernel_size} and scale_factor is '\ - f'{scale_factor}.' - - stride = scale_factor - padding = (kernel_size - scale_factor) // 2 - self.with_cp = with_cp - deconv = nn.ConvTranspose2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding) - - norm_name, norm = build_norm_layer(norm_cfg, out_channels) - activate = build_activation_layer(act_cfg) - self.deconv_upsamping = nn.Sequential(deconv, norm, activate) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.deconv_upsamping, x) - else: - out = self.deconv_upsamping(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class InterpConv(nn.Module): - """Interpolation upsample module in decoder for UNet. - - This module uses interpolation to upsample feature map in the decoder - of UNet. It consists of one interpolation upsample layer and one - convolutional layer. It can be one interpolation upsample layer followed - by one convolutional layer (conv_first=False) or one convolutional layer - followed by one interpolation upsample layer (conv_first=True). - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. 
- norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - conv_first (bool): Whether convolutional layer or interpolation - upsample layer first. Default: False. It means interpolation - upsample layer followed by one convolutional layer. - kernel_size (int): Kernel size of the convolutional layer. Default: 1. - stride (int): Stride of the convolutional layer. Default: 1. - padding (int): Padding of the convolutional layer. Default: 1. - upsample_cfg (dict): Interpolation config of the upsample layer. - Default: dict( - scale_factor=2, mode='bilinear', align_corners=False). - """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - conv_cfg=None, - conv_first=False, - kernel_size=1, - stride=1, - padding=0, - upsample_cfg=dict( - scale_factor=2, mode='bilinear', align_corners=False)): - super(InterpConv, self).__init__() - - self.with_cp = with_cp - conv = ConvModule( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - upsample = Upsample(**upsample_cfg) - if conv_first: - self.interp_upsample = nn.Sequential(conv, upsample) - else: - self.interp_upsample = nn.Sequential(upsample, conv) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.interp_upsample, x) - else: - out = self.interp_upsample(x) - return out - - -@BACKBONES.register_module() -class UNet(BaseModule): - """UNet backbone. - - This backbone is the implementation of `U-Net: Convolutional Networks - for Biomedical Image Segmentation `_. - - Args: - in_channels (int): Number of input image channels. Default" 3. - base_channels (int): Number of base channels of each stage. - The output channels of the first stage. Default: 64. - num_stages (int): Number of stages in encoder, normally 5. Default: 5. - strides (Sequence[int 1 | 2]): Strides of each stage in encoder. - len(strides) is equal to num_stages. Normally the stride of the - first stage in encoder is 1. If strides[i]=2, it uses stride - convolution to downsample in the correspondence encoder stage. - Default: (1, 1, 1, 1, 1). - enc_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence encoder stage. - Default: (2, 2, 2, 2, 2). - dec_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence decoder stage. - Default: (2, 2, 2, 2). - downsamples (Sequence[int]): Whether use MaxPool to downsample the - feature map after the first stage of encoder - (stages: [1, num_stages)). If the correspondence encoder stage use - stride convolution (strides[i]=2), it will never use MaxPool to - downsample, even downsamples[i-1]=True. - Default: (True, True, True, True). - enc_dilations (Sequence[int]): Dilation rate of each stage in encoder. - Default: (1, 1, 1, 1, 1). - dec_dilations (Sequence[int]): Dilation rate of each stage in decoder. - Default: (1, 1, 1, 1). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. 
- norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Notice: - The input image size should be divisible by the whole downsample rate - of the encoder. More detail of the whole downsample rate can be found - in UNet._check_input_divisible. - """ - - def __init__(self, - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False, - dcn=None, - plugins=None, - pretrained=None, - init_cfg=None): - super(UNet, self).__init__(init_cfg) - - self.pretrained = pretrained - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert len(strides) == num_stages, \ - 'The length of strides should be equal to num_stages, '\ - f'while the strides is {strides}, the length of '\ - f'strides is {len(strides)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_num_convs) == num_stages, \ - 'The length of enc_num_convs should be equal to num_stages, '\ - f'while the enc_num_convs is {enc_num_convs}, the length of '\ - f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_num_convs) == (num_stages-1), \ - 'The length of dec_num_convs should be equal to (num_stages-1), '\ - f'while the dec_num_convs is {dec_num_convs}, the length of '\ - f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(downsamples) == (num_stages-1), \ - 'The length of downsamples should be equal to (num_stages-1), '\ - f'while the downsamples is {downsamples}, the length of '\ - f'downsamples is {len(downsamples)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_dilations) == num_stages, \ - 'The length of enc_dilations should be equal to num_stages, '\ - f'while the enc_dilations is {enc_dilations}, the length of '\ - f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\ - f'{num_stages}.' 
- assert len(dec_dilations) == (num_stages-1), \ - 'The length of dec_dilations should be equal to (num_stages-1), '\ - f'while the dec_dilations is {dec_dilations}, the length of '\ - f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\ - f'{num_stages}.' - self.num_stages = num_stages - self.strides = strides - self.downsamples = downsamples - self.norm_eval = norm_eval - self.base_channels = base_channels - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - - for i in range(num_stages): - enc_conv_block = [] - if i != 0: - if strides[i] == 1 and downsamples[i - 1]: - enc_conv_block.append(nn.MaxPool2d(kernel_size=2)) - upsample = (strides[i] != 1 or downsamples[i - 1]) - self.decoder.append( - UpConvBlock( - conv_block=BasicConvBlock, - in_channels=base_channels * 2**i, - skip_channels=base_channels * 2**(i - 1), - out_channels=base_channels * 2**(i - 1), - num_convs=dec_num_convs[i - 1], - stride=1, - dilation=dec_dilations[i - 1], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - upsample_cfg=upsample_cfg if upsample else None, - dcn=None, - plugins=None)) - - enc_conv_block.append( - BasicConvBlock( - in_channels=in_channels, - out_channels=base_channels * 2**i, - num_convs=enc_num_convs[i], - stride=strides[i], - dilation=enc_dilations[i], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None)) - self.encoder.append((nn.Sequential(*enc_conv_block))) - in_channels = base_channels * 2**i - - def forward(self, x): - self._check_input_divisible(x) - enc_outs = [] - for enc in self.encoder: - x = enc(x) - enc_outs.append(x) - dec_outs = [x] - for i in reversed(range(len(self.decoder))): - x = self.decoder[i](enc_outs[i], x) - dec_outs.append(x) - - return dec_outs - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(UNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - def _check_input_divisible(self, x): - h, w = x.shape[-2:] - whole_downsample_rate = 1 - for i in range(1, self.num_stages): - if self.strides[i] == 2 or self.downsamples[i - 1]: - whole_downsample_rate *= 2 - assert (h % whole_downsample_rate == 0) \ - and (w % whole_downsample_rate == 0),\ - f'The input image size {(h, w)} should be divisible by the whole '\ - f'downsample rate {whole_downsample_rate}, when num_stages is '\ - f'{self.num_stages}, strides is {self.strides}, and downsamples '\ - f'is {self.downsamples}.' diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/vit.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/vit.py deleted file mode 100644 index 96565250..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/backbones/vit.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math -import warnings - -import torch -import torch.nn as nn -from mmcv.cnn import build_norm_layer -from mmcv.cnn.bricks.transformer import FFN, MultiheadAttention -from mmcv.cnn.utils.weight_init import (constant_init, kaiming_init, - trunc_normal_) -from mmcv.runner import BaseModule, ModuleList, _load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm -from torch.nn.modules.utils import _pair as to_2tuple - -from mmseg.ops import resize -from mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import PatchEmbed - - -class TransformerEncoderLayer(BaseModule): - """Implements one encoder layer in Vision Transformer. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - drop_rate (float): Probability of an element to be zeroed - after the feed forward layer. Default: 0.0. - attn_drop_rate (float): The drop out rate for attention layer. - Default: 0.0. - drop_path_rate (float): stochastic depth rate. Default 0.0. - num_fcs (int): The number of fully-connected layers for FFNs. - Default: 2. - qkv_bias (bool): enable bias for qkv if True. Default: True - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Default: True. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - num_fcs=2, - qkv_bias=True, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - batch_first=True): - super(TransformerEncoderLayer, self).__init__() - - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, embed_dims, postfix=1) - self.add_module(self.norm1_name, norm1) - - self.attn = MultiheadAttention( - embed_dims=embed_dims, - num_heads=num_heads, - attn_drop=attn_drop_rate, - proj_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - batch_first=batch_first, - bias=qkv_bias) - - self.norm2_name, norm2 = build_norm_layer( - norm_cfg, embed_dims, postfix=2) - self.add_module(self.norm2_name, norm2) - - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=num_fcs, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg) - - @property - def norm1(self): - return getattr(self, self.norm1_name) - - @property - def norm2(self): - return getattr(self, self.norm2_name) - - def forward(self, x): - x = self.attn(self.norm1(x), identity=x) - x = self.ffn(self.norm2(x), identity=x) - return x - - -@BACKBONES.register_module() -class VisionTransformer(BaseModule): - """Vision Transformer. - - This backbone is the implementation of `An Image is Worth 16x16 Words: - Transformers for Image Recognition at - Scale `_. - - Args: - img_size (int | tuple): Input image size. Default: 224. - patch_size (int): The patch size. Default: 16. - in_channels (int): Number of input channels. Default: 3. - embed_dims (int): embedding dimension. Default: 768. - num_layers (int): depth of transformer. Default: 12. - num_heads (int): number of attention heads. Default: 12. - mlp_ratio (int): ratio of mlp hidden dim to embedding dim. - Default: 4. - out_indices (list | tuple | int): Output from which stages. - Default: -1. - qkv_bias (bool): enable bias for qkv if True. 
Default: True. - drop_rate (float): Probability of an element to be zeroed. - Default 0.0 - attn_drop_rate (float): The drop out rate for attention layer. - Default 0.0 - drop_path_rate (float): stochastic depth rate. Default 0.0 - with_cls_token (bool): Whether concatenating class token into image - tokens as transformer input. Default: True. - output_cls_token (bool): Whether output the cls_token. If set True, - `with_cls_token` must be True. Default: False. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN') - act_cfg (dict): The activation config for FFNs. - Default: dict(type='GELU'). - patch_norm (bool): Whether to add a norm in PatchEmbed Block. - Default: False. - final_norm (bool): Whether to add a additional layer to normalize - final feature map. Default: False. - interpolate_mode (str): Select the interpolate mode for position - embeding vector resize. Default: bicubic. - num_fcs (int): The number of fully-connected layers for FFNs. - Default: 2. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save - some memory while slowing down the training speed. Default: False. - pretrained (str, optional): model pretrained path. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - img_size=224, - patch_size=16, - in_channels=3, - embed_dims=768, - num_layers=12, - num_heads=12, - mlp_ratio=4, - out_indices=-1, - qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - with_cls_token=True, - output_cls_token=False, - norm_cfg=dict(type='LN'), - act_cfg=dict(type='GELU'), - patch_norm=False, - final_norm=False, - interpolate_mode='bicubic', - num_fcs=2, - norm_eval=False, - with_cp=False, - pretrained=None, - init_cfg=None): - super(VisionTransformer, self).__init__(init_cfg=init_cfg) - - if isinstance(img_size, int): - img_size = to_2tuple(img_size) - elif isinstance(img_size, tuple): - if len(img_size) == 1: - img_size = to_2tuple(img_size[0]) - assert len(img_size) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(img_size)}' - - if output_cls_token: - assert with_cls_token is True, f'with_cls_token must be True if' \ - f'set output_cls_token to True, but got {with_cls_token}' - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be set at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is not None: - raise TypeError('pretrained must be a str or None') - - self.img_size = img_size - self.patch_size = patch_size - self.interpolate_mode = interpolate_mode - self.norm_eval = norm_eval - self.with_cp = with_cp - self.pretrained = pretrained - - self.patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims, - conv_type='Conv2d', - kernel_size=patch_size, - stride=patch_size, - padding='corner', - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None, - ) - - num_patches = (img_size[0] // patch_size) * \ - (img_size[1] // patch_size) - - self.with_cls_token = with_cls_token - self.output_cls_token = output_cls_token - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dims)) - self.pos_embed = nn.Parameter( - 
torch.zeros(1, num_patches + 1, embed_dims)) - self.drop_after_pos = nn.Dropout(p=drop_rate) - - if isinstance(out_indices, int): - if out_indices == -1: - out_indices = num_layers - 1 - self.out_indices = [out_indices] - elif isinstance(out_indices, list) or isinstance(out_indices, tuple): - self.out_indices = out_indices - else: - raise TypeError('out_indices must be type of int, list or tuple') - - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, num_layers) - ] # stochastic depth decay rule - - self.layers = ModuleList() - for i in range(num_layers): - self.layers.append( - TransformerEncoderLayer( - embed_dims=embed_dims, - num_heads=num_heads, - feedforward_channels=mlp_ratio * embed_dims, - attn_drop_rate=attn_drop_rate, - drop_rate=drop_rate, - drop_path_rate=dpr[i], - num_fcs=num_fcs, - qkv_bias=qkv_bias, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - batch_first=True)) - - self.final_norm = final_norm - if final_norm: - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, embed_dims, postfix=1) - self.add_module(self.norm1_name, norm1) - - @property - def norm1(self): - return getattr(self, self.norm1_name) - - def init_weights(self): - if (isinstance(self.init_cfg, dict) - and self.init_cfg.get('type') == 'Pretrained'): - logger = get_root_logger() - checkpoint = _load_checkpoint( - self.init_cfg['checkpoint'], logger=logger, map_location='cpu') - - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - if 'pos_embed' in state_dict.keys(): - if self.pos_embed.shape != state_dict['pos_embed'].shape: - logger.info(msg=f'Resize the pos_embed shape from ' - f'{state_dict["pos_embed"].shape} to ' - f'{self.pos_embed.shape}') - h, w = self.img_size - pos_size = int( - math.sqrt(state_dict['pos_embed'].shape[1] - 1)) - state_dict['pos_embed'] = self.resize_pos_embed( - state_dict['pos_embed'], - (h // self.patch_size, w // self.patch_size), - (pos_size, pos_size), self.interpolate_mode) - - self.load_state_dict(state_dict, False) - elif self.init_cfg is not None: - super(VisionTransformer, self).init_weights() - else: - # We only implement the 'jax_impl' initialization implemented at - # https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py#L353 # noqa: E501 - trunc_normal_(self.pos_embed, std=.02) - trunc_normal_(self.cls_token, std=.02) - for n, m in self.named_modules(): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if m.bias is not None: - if 'ffn' in n: - nn.init.normal_(m.bias, mean=0., std=1e-6) - else: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Conv2d): - kaiming_init(m, mode='fan_in', bias=0.) - elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): - constant_init(m, val=1.0, bias=0.) - - def _pos_embeding(self, patched_img, hw_shape, pos_embed): - """Positiong embeding method. - - Resize the pos_embed, if the input image size doesn't match - the training size. - Args: - patched_img (torch.Tensor): The patched image, it should be - shape of [B, L1, C]. - hw_shape (tuple): The downsampled image resolution. - pos_embed (torch.Tensor): The pos_embed weighs, it should be - shape of [B, L2, c]. - Return: - torch.Tensor: The pos encoded image feature. 
- """ - assert patched_img.ndim == 3 and pos_embed.ndim == 3, \ - 'the shapes of patched_img and pos_embed must be [B, L, C]' - x_len, pos_len = patched_img.shape[1], pos_embed.shape[1] - if x_len != pos_len: - if pos_len == (self.img_size[0] // self.patch_size) * ( - self.img_size[1] // self.patch_size) + 1: - pos_h = self.img_size[0] // self.patch_size - pos_w = self.img_size[1] // self.patch_size - else: - raise ValueError( - 'Unexpected shape of pos_embed, got {}.'.format( - pos_embed.shape)) - pos_embed = self.resize_pos_embed(pos_embed, hw_shape, - (pos_h, pos_w), - self.interpolate_mode) - return self.drop_after_pos(patched_img + pos_embed) - - @staticmethod - def resize_pos_embed(pos_embed, input_shpae, pos_shape, mode): - """Resize pos_embed weights. - - Resize pos_embed using bicubic interpolate method. - Args: - pos_embed (torch.Tensor): Position embedding weights. - input_shpae (tuple): Tuple for (downsampled input image height, - downsampled input image width). - pos_shape (tuple): The resolution of downsampled origin training - image. - mode (str): Algorithm used for upsampling: - ``'nearest'`` | ``'linear'`` | ``'bilinear'`` | ``'bicubic'`` | - ``'trilinear'``. Default: ``'nearest'`` - Return: - torch.Tensor: The resized pos_embed of shape [B, L_new, C] - """ - assert pos_embed.ndim == 3, 'shape of pos_embed must be [B, L, C]' - pos_h, pos_w = pos_shape - cls_token_weight = pos_embed[:, 0] - pos_embed_weight = pos_embed[:, (-1 * pos_h * pos_w):] - pos_embed_weight = pos_embed_weight.reshape( - 1, pos_h, pos_w, pos_embed.shape[2]).permute(0, 3, 1, 2) - pos_embed_weight = resize( - pos_embed_weight, size=input_shpae, align_corners=False, mode=mode) - cls_token_weight = cls_token_weight.unsqueeze(1) - pos_embed_weight = torch.flatten(pos_embed_weight, 2).transpose(1, 2) - pos_embed = torch.cat((cls_token_weight, pos_embed_weight), dim=1) - return pos_embed - - def forward(self, inputs): - B = inputs.shape[0] - - x, hw_shape = self.patch_embed(inputs) - - # stole cls_tokens impl from Phil Wang, thanks - cls_tokens = self.cls_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, x), dim=1) - x = self._pos_embeding(x, hw_shape, self.pos_embed) - - if not self.with_cls_token: - # Remove class token for transformer encoder input - x = x[:, 1:] - - outs = [] - for i, layer in enumerate(self.layers): - x = layer(x) - if i == len(self.layers) - 1: - if self.final_norm: - x = self.norm1(x) - if i in self.out_indices: - if self.with_cls_token: - # Remove class token and reshape token for decoder head - out = x[:, 1:] - else: - out = x - B, _, C = out.shape - out = out.reshape(B, hw_shape[0], hw_shape[1], - C).permute(0, 3, 1, 2).contiguous() - if self.output_cls_token: - out = [out, x[:, 0]] - outs.append(out) - - return tuple(outs) - - def train(self, mode=True): - super(VisionTransformer, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.LayerNorm): - m.eval() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/builder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/builder.py deleted file mode 100644 index 5e18e4e6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/builder.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -from mmcv.cnn import MODELS as MMCV_MODELS -from mmcv.cnn.bricks.registry import ATTENTION as MMCV_ATTENTION -from mmcv.utils import Registry - -MODELS = Registry('models', parent=MMCV_MODELS) -ATTENTION = Registry('attention', parent=MMCV_ATTENTION) - -BACKBONES = MODELS -NECKS = MODELS -HEADS = MODELS -LOSSES = MODELS -SEGMENTORS = MODELS - - -def build_backbone(cfg): - """Build backbone.""" - return BACKBONES.build(cfg) - - -def build_neck(cfg): - """Build neck.""" - return NECKS.build(cfg) - - -def build_head(cfg): - """Build head.""" - return HEADS.build(cfg) - - -def build_loss(cfg): - """Build loss.""" - return LOSSES.build(cfg) - - -def build_segmentor(cfg, train_cfg=None, test_cfg=None): - """Build segmentor.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return SEGMENTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/__init__.py deleted file mode 100644 index b5375a1f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/__init__.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .ann_head import ANNHead -from .apc_head import APCHead -from .aspp_head import ASPPHead -from .cc_head import CCHead -from .da_head import DAHead -from .dm_head import DMHead -from .dnl_head import DNLHead -from .dpt_head import DPTHead -from .ema_head import EMAHead -from .enc_head import EncHead -from .fcn_head import FCNHead -from .fpn_head import FPNHead -from .gc_head import GCHead -from .isa_head import ISAHead -from .lraspp_head import LRASPPHead -from .nl_head import NLHead -from .ocr_head import OCRHead -from .point_head import PointHead -from .psa_head import PSAHead -from .psp_head import PSPHead -from .segformer_head import SegformerHead -from .sep_aspp_head import DepthwiseSeparableASPPHead -from .sep_fcn_head import DepthwiseSeparableFCNHead -from .setr_mla_head import SETRMLAHead -from .setr_up_head import SETRUPHead -from .stdc_head import STDCHead -from .uper_head import UPerHead - -__all__ = [ - 'FCNHead', 'PSPHead', 'ASPPHead', 'PSAHead', 'NLHead', 'GCHead', 'CCHead', - 'UPerHead', 'DepthwiseSeparableASPPHead', 'ANNHead', 'DAHead', 'OCRHead', - 'EncHead', 'DepthwiseSeparableFCNHead', 'FPNHead', 'EMAHead', 'DNLHead', - 'PointHead', 'APCHead', 'DMHead', 'LRASPPHead', 'SETRUPHead', - 'SETRMLAHead', 'DPTHead', 'SETRMLAHead', 'SegformerHead', 'ISAHead', - 'STDCHead' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ann_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ann_head.py deleted file mode 100644 index c8d882e3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ann_head.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class PPMConcat(nn.ModuleList): - """Pyramid Pooling Module that only concat the features of each layer. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. - """ - - def __init__(self, pool_scales=(1, 3, 6, 8)): - super(PPMConcat, self).__init__( - [nn.AdaptiveAvgPool2d(pool_scale) for pool_scale in pool_scales]) - - def forward(self, feats): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(feats) - ppm_outs.append(ppm_out.view(*feats.shape[:2], -1)) - concat_outs = torch.cat(ppm_outs, dim=2) - return concat_outs - - -class SelfAttentionBlock(_SelfAttentionBlock): - """Make a ANN used SelfAttentionBlock. - - Args: - low_in_channels (int): Input channels of lower level feature, - which is the key feature for self-attention. - high_in_channels (int): Input channels of higher level feature, - which is the query feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - share_key_query (bool): Whether share projection weight between key - and query projection. - query_scale (int): The scale of query feature map. - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. - """ - - def __init__(self, low_in_channels, high_in_channels, channels, - out_channels, share_key_query, query_scale, key_pool_scales, - conv_cfg, norm_cfg, act_cfg): - key_psp = PPMConcat(key_pool_scales) - if query_scale > 1: - query_downsample = nn.MaxPool2d(kernel_size=query_scale) - else: - query_downsample = None - super(SelfAttentionBlock, self).__init__( - key_in_channels=low_in_channels, - query_in_channels=high_in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=share_key_query, - query_downsample=query_downsample, - key_downsample=key_psp, - key_query_num_convs=1, - key_query_norm=True, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=True, - with_out=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - -class AFNB(nn.Module): - """Asymmetric Fusion Non-local Block(AFNB) - - Args: - low_in_channels (int): Input channels of lower level feature, - which is the key feature for self-attention. - high_in_channels (int): Input channels of higher level feature, - which is the query feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - and query projection. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. 
- """ - - def __init__(self, low_in_channels, high_in_channels, channels, - out_channels, query_scales, key_pool_scales, conv_cfg, - norm_cfg, act_cfg): - super(AFNB, self).__init__() - self.stages = nn.ModuleList() - for query_scale in query_scales: - self.stages.append( - SelfAttentionBlock( - low_in_channels=low_in_channels, - high_in_channels=high_in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=False, - query_scale=query_scale, - key_pool_scales=key_pool_scales, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottleneck = ConvModule( - out_channels + high_in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, low_feats, high_feats): - """Forward function.""" - priors = [stage(high_feats, low_feats) for stage in self.stages] - context = torch.stack(priors, dim=0).sum(dim=0) - output = self.bottleneck(torch.cat([context, high_feats], 1)) - return output - - -class APNB(nn.Module): - """Asymmetric Pyramid Non-local Block (APNB) - - Args: - in_channels (int): Input channels of key/query feature, - which is the key feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. - """ - - def __init__(self, in_channels, channels, out_channels, query_scales, - key_pool_scales, conv_cfg, norm_cfg, act_cfg): - super(APNB, self).__init__() - self.stages = nn.ModuleList() - for query_scale in query_scales: - self.stages.append( - SelfAttentionBlock( - low_in_channels=in_channels, - high_in_channels=in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=True, - query_scale=query_scale, - key_pool_scales=key_pool_scales, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottleneck = ConvModule( - 2 * in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, feats): - """Forward function.""" - priors = [stage(feats, feats) for stage in self.stages] - context = torch.stack(priors, dim=0).sum(dim=0) - output = self.bottleneck(torch.cat([context, feats], 1)) - return output - - -@HEADS.register_module() -class ANNHead(BaseDecodeHead): - """Asymmetric Non-local Neural Networks for Semantic Segmentation. - - This head is the implementation of `ANNNet - `_. - - Args: - project_channels (int): Projection channels for Nonlocal. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): The pooling scales of key feature map. - Default: (1, 3, 6, 8). 
- """ - - def __init__(self, - project_channels, - query_scales=(1, ), - key_pool_scales=(1, 3, 6, 8), - **kwargs): - super(ANNHead, self).__init__( - input_transform='multiple_select', **kwargs) - assert len(self.in_channels) == 2 - low_in_channels, high_in_channels = self.in_channels - self.project_channels = project_channels - self.fusion = AFNB( - low_in_channels=low_in_channels, - high_in_channels=high_in_channels, - out_channels=high_in_channels, - channels=project_channels, - query_scales=query_scales, - key_pool_scales=key_pool_scales, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - high_in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.context = APNB( - in_channels=self.channels, - out_channels=self.channels, - channels=project_channels, - query_scales=query_scales, - key_pool_scales=key_pool_scales, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - low_feats, high_feats = self._transform_inputs(inputs) - output = self.fusion(low_feats, high_feats) - output = self.dropout(output) - output = self.bottleneck(output) - output = self.context(output) - output = self.cls_seg(output) - - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/apc_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/apc_head.py deleted file mode 100644 index 3198fd18..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/apc_head.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class ACM(nn.Module): - """Adaptive Context Module used in APCNet. - - Args: - pool_scale (int): Pooling scale used in Adaptive Context - Module to extract region features. - fusion (bool): Add one conv to fuse residual feature. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict | None): Config of conv layers. - norm_cfg (dict | None): Config of norm layers. - act_cfg (dict): Config of activation layers. 
- """ - - def __init__(self, pool_scale, fusion, in_channels, channels, conv_cfg, - norm_cfg, act_cfg): - super(ACM, self).__init__() - self.pool_scale = pool_scale - self.fusion = fusion - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.pooled_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.input_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.global_info = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.gla = nn.Conv2d(self.channels, self.pool_scale**2, 1, 1, 0) - - self.residual_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - if self.fusion: - self.fusion_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, x): - """Forward function.""" - pooled_x = F.adaptive_avg_pool2d(x, self.pool_scale) - # [batch_size, channels, h, w] - x = self.input_redu_conv(x) - # [batch_size, channels, pool_scale, pool_scale] - pooled_x = self.pooled_redu_conv(pooled_x) - batch_size = x.size(0) - # [batch_size, pool_scale * pool_scale, channels] - pooled_x = pooled_x.view(batch_size, self.channels, - -1).permute(0, 2, 1).contiguous() - # [batch_size, h * w, pool_scale * pool_scale] - affinity_matrix = self.gla(x + resize( - self.global_info(F.adaptive_avg_pool2d(x, 1)), size=x.shape[2:]) - ).permute(0, 2, 3, 1).reshape( - batch_size, -1, self.pool_scale**2) - affinity_matrix = F.sigmoid(affinity_matrix) - # [batch_size, h * w, channels] - z_out = torch.matmul(affinity_matrix, pooled_x) - # [batch_size, channels, h * w] - z_out = z_out.permute(0, 2, 1).contiguous() - # [batch_size, channels, h, w] - z_out = z_out.view(batch_size, self.channels, x.size(2), x.size(3)) - z_out = self.residual_conv(z_out) - z_out = F.relu(z_out + x) - if self.fusion: - z_out = self.fusion_conv(z_out) - - return z_out - - -@HEADS.register_module() -class APCHead(BaseDecodeHead): - """Adaptive Pyramid Context Network for Semantic Segmentation. - - This head is the implementation of - `APCNet `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Adaptive Context - Module. Default: (1, 2, 3, 6). - fusion (bool): Add one conv to fuse residual feature. 
- """ - - def __init__(self, pool_scales=(1, 2, 3, 6), fusion=True, **kwargs): - super(APCHead, self).__init__(**kwargs) - assert isinstance(pool_scales, (list, tuple)) - self.pool_scales = pool_scales - self.fusion = fusion - acm_modules = [] - for pool_scale in self.pool_scales: - acm_modules.append( - ACM(pool_scale, - self.fusion, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.acm_modules = nn.ModuleList(acm_modules) - self.bottleneck = ConvModule( - self.in_channels + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - acm_outs = [x] - for acm_module in self.acm_modules: - acm_outs.append(acm_module(x)) - acm_outs = torch.cat(acm_outs, dim=1) - output = self.bottleneck(acm_outs) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/aspp_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/aspp_head.py deleted file mode 100644 index 1fbd1bc8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/aspp_head.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class ASPPModule(nn.ModuleList): - """Atrous Spatial Pyramid Pooling (ASPP) Module. - - Args: - dilations (tuple[int]): Dilation rate of each layer. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - """ - - def __init__(self, dilations, in_channels, channels, conv_cfg, norm_cfg, - act_cfg): - super(ASPPModule, self).__init__() - self.dilations = dilations - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - for dilation in dilations: - self.append( - ConvModule( - self.in_channels, - self.channels, - 1 if dilation == 1 else 3, - dilation=dilation, - padding=0 if dilation == 1 else dilation, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - def forward(self, x): - """Forward function.""" - aspp_outs = [] - for aspp_module in self: - aspp_outs.append(aspp_module(x)) - - return aspp_outs - - -@HEADS.register_module() -class ASPPHead(BaseDecodeHead): - """Rethinking Atrous Convolution for Semantic Image Segmentation. - - This head is the implementation of `DeepLabV3 - `_. - - Args: - dilations (tuple[int]): Dilation rates for ASPP module. - Default: (1, 6, 12, 18). 
- """ - - def __init__(self, dilations=(1, 6, 12, 18), **kwargs): - super(ASPPHead, self).__init__(**kwargs) - assert isinstance(dilations, (list, tuple)) - self.dilations = dilations - self.image_pool = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.aspp_modules = ASPPModule( - dilations, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - (len(dilations) + 1) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - aspp_outs = [ - resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ] - aspp_outs.extend(self.aspp_modules(x)) - aspp_outs = torch.cat(aspp_outs, dim=1) - output = self.bottleneck(aspp_outs) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/cascade_decode_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/cascade_decode_head.py deleted file mode 100644 index f7c3da0d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/cascade_decode_head.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -from .decode_head import BaseDecodeHead - - -class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta): - """Base class for cascade decode head used in - :class:`CascadeEncoderDecoder.""" - - def __init__(self, *args, **kwargs): - super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs) - - @abstractmethod - def forward(self, inputs, prev_output): - """Placeholder of forward function.""" - pass - - def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg, - train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - seg_logits = self.forward(inputs, prev_output) - losses = self.losses(seg_logits, gt_semantic_seg) - - return losses - - def forward_test(self, inputs, prev_output, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. 
- """ - return self.forward(inputs, prev_output) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/cc_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/cc_head.py deleted file mode 100644 index ed19eb46..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/cc_head.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import HEADS -from .fcn_head import FCNHead - -try: - from mmcv.ops import CrissCrossAttention -except ModuleNotFoundError: - CrissCrossAttention = None - - -@HEADS.register_module() -class CCHead(FCNHead): - """CCNet: Criss-Cross Attention for Semantic Segmentation. - - This head is the implementation of `CCNet - `_. - - Args: - recurrence (int): Number of recurrence of Criss Cross Attention - module. Default: 2. - """ - - def __init__(self, recurrence=2, **kwargs): - if CrissCrossAttention is None: - raise RuntimeError('Please install mmcv-full for ' - 'CrissCrossAttention ops') - super(CCHead, self).__init__(num_convs=2, **kwargs) - self.recurrence = recurrence - self.cca = CrissCrossAttention(self.channels) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - for _ in range(self.recurrence): - output = self.cca(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/da_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/da_head.py deleted file mode 100644 index 77fd6639..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/da_head.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F -from mmcv.cnn import ConvModule, Scale -from torch import nn - -from mmseg.core import add_prefix -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class PAM(_SelfAttentionBlock): - """Position Attention Module (PAM) - - Args: - in_channels (int): Input channels of key/query feature. - channels (int): Output channels of key/query transform. 
- """ - - def __init__(self, in_channels, channels): - super(PAM, self).__init__( - key_in_channels=in_channels, - query_in_channels=in_channels, - channels=channels, - out_channels=in_channels, - share_key_query=False, - query_downsample=None, - key_downsample=None, - key_query_num_convs=1, - key_query_norm=False, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=False, - with_out=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None) - - self.gamma = Scale(0) - - def forward(self, x): - """Forward function.""" - out = super(PAM, self).forward(x, x) - - out = self.gamma(out) + x - return out - - -class CAM(nn.Module): - """Channel Attention Module (CAM)""" - - def __init__(self): - super(CAM, self).__init__() - self.gamma = Scale(0) - - def forward(self, x): - """Forward function.""" - batch_size, channels, height, width = x.size() - proj_query = x.view(batch_size, channels, -1) - proj_key = x.view(batch_size, channels, -1).permute(0, 2, 1) - energy = torch.bmm(proj_query, proj_key) - energy_new = torch.max( - energy, -1, keepdim=True)[0].expand_as(energy) - energy - attention = F.softmax(energy_new, dim=-1) - proj_value = x.view(batch_size, channels, -1) - - out = torch.bmm(attention, proj_value) - out = out.view(batch_size, channels, height, width) - - out = self.gamma(out) + x - return out - - -@HEADS.register_module() -class DAHead(BaseDecodeHead): - """Dual Attention Network for Scene Segmentation. - - This head is the implementation of `DANet - `_. - - Args: - pam_channels (int): The channels of Position Attention Module(PAM). - """ - - def __init__(self, pam_channels, **kwargs): - super(DAHead, self).__init__(**kwargs) - self.pam_channels = pam_channels - self.pam_in_conv = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.pam = PAM(self.channels, pam_channels) - self.pam_out_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.pam_conv_seg = nn.Conv2d( - self.channels, self.num_classes, kernel_size=1) - - self.cam_in_conv = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.cam = CAM() - self.cam_out_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.cam_conv_seg = nn.Conv2d( - self.channels, self.num_classes, kernel_size=1) - - def pam_cls_seg(self, feat): - """PAM feature classification.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.pam_conv_seg(feat) - return output - - def cam_cls_seg(self, feat): - """CAM feature classification.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.cam_conv_seg(feat) - return output - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - pam_feat = self.pam_in_conv(x) - pam_feat = self.pam(pam_feat) - pam_feat = self.pam_out_conv(pam_feat) - pam_out = self.pam_cls_seg(pam_feat) - - cam_feat = self.cam_in_conv(x) - cam_feat = self.cam(cam_feat) - cam_feat = self.cam_out_conv(cam_feat) - cam_out = self.cam_cls_seg(cam_feat) - - feat_sum = pam_feat + cam_feat - pam_cam_out = self.cls_seg(feat_sum) - - return pam_cam_out, pam_out, cam_out - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing, only 
``pam_cam`` is used.""" - return self.forward(inputs)[0] - - def losses(self, seg_logit, seg_label): - """Compute ``pam_cam``, ``pam``, ``cam`` loss.""" - pam_cam_seg_logit, pam_seg_logit, cam_seg_logit = seg_logit - loss = dict() - loss.update( - add_prefix( - super(DAHead, self).losses(pam_cam_seg_logit, seg_label), - 'pam_cam')) - loss.update( - add_prefix( - super(DAHead, self).losses(pam_seg_logit, seg_label), 'pam')) - loss.update( - add_prefix( - super(DAHead, self).losses(cam_seg_logit, seg_label), 'cam')) - return loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/decode_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/decode_head.py deleted file mode 100644 index 1443a81d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/decode_head.py +++ /dev/null @@ -1,265 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -import torch -import torch.nn as nn -from mmcv.runner import BaseModule, auto_fp16, force_fp32 - -from mmseg.core import build_pixel_sampler -from mmseg.ops import resize -from ..builder import build_loss -from ..losses import accuracy - - -class BaseDecodeHead(BaseModule, metaclass=ABCMeta): - """Base class for BaseDecodeHead. - - Args: - in_channels (int|Sequence[int]): Input channels. - channels (int): Channels after modules, before conv_seg. - num_classes (int): Number of classes. - dropout_ratio (float): Ratio of dropout layer. Default: 0.1. - conv_cfg (dict|None): Config of conv layers. Default: None. - norm_cfg (dict|None): Config of norm layers. Default: None. - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU') - in_index (int|Sequence[int]): Input feature index. Default: -1 - input_transform (str|None): Transformation type of input features. - Options: 'resize_concat', 'multiple_select', None. - 'resize_concat': Multiple feature maps will be resize to the - same size as first one and than concat together. - Usually used in FCN head of HRNet. - 'multiple_select': Multiple feature maps will be bundle into - a list and passed into decode head. - None: Only one select feature map is allowed. - Default: None. - loss_decode (dict | Sequence[dict]): Config of decode loss. - The `loss_name` is property of corresponding loss function which - could be shown in training log. If you want this loss - item to be included into the backward graph, `loss_` must be the - prefix of the name. Defaults to 'loss_ce'. - e.g. dict(type='CrossEntropyLoss'), - [dict(type='CrossEntropyLoss', loss_name='loss_ce'), - dict(type='DiceLoss', loss_name='loss_dice')] - Default: dict(type='CrossEntropyLoss'). - ignore_index (int | None): The label index to be ignored. When using - masked BCE loss, ignore_index should be set to None. Default: 255. - sampler (dict|None): The config of segmentation map sampler. - Default: None. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__(self, - in_channels, - channels, - *, - num_classes, - dropout_ratio=0.1, - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - in_index=-1, - input_transform=None, - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - ignore_index=255, - sampler=None, - align_corners=False, - init_cfg=dict( - type='Normal', std=0.01, override=dict(name='conv_seg'))): - super(BaseDecodeHead, self).__init__(init_cfg) - self._init_inputs(in_channels, in_index, input_transform) - self.channels = channels - self.num_classes = num_classes - self.dropout_ratio = dropout_ratio - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.in_index = in_index - - self.ignore_index = ignore_index - self.align_corners = align_corners - - if isinstance(loss_decode, dict): - self.loss_decode = build_loss(loss_decode) - elif isinstance(loss_decode, (list, tuple)): - self.loss_decode = nn.ModuleList() - for loss in loss_decode: - self.loss_decode.append(build_loss(loss)) - else: - raise TypeError(f'loss_decode must be a dict or sequence of dict,\ - but got {type(loss_decode)}') - - if sampler is not None: - self.sampler = build_pixel_sampler(sampler, context=self) - else: - self.sampler = None - - self.conv_seg = nn.Conv2d(channels, num_classes, kernel_size=1) - if dropout_ratio > 0: - self.dropout = nn.Dropout2d(dropout_ratio) - else: - self.dropout = None - self.fp16_enabled = False - - def extra_repr(self): - """Extra repr.""" - s = f'input_transform={self.input_transform}, ' \ - f'ignore_index={self.ignore_index}, ' \ - f'align_corners={self.align_corners}' - return s - - def _init_inputs(self, in_channels, in_index, input_transform): - """Check and initialize input transforms. - - The in_channels, in_index and input_transform must match. - Specifically, when input_transform is None, only single feature map - will be selected. So in_channels and in_index must be of type int. - When input_transform - - Args: - in_channels (int|Sequence[int]): Input channels. - in_index (int|Sequence[int]): Input feature index. - input_transform (str|None): Transformation type of input features. - Options: 'resize_concat', 'multiple_select', None. - 'resize_concat': Multiple feature maps will be resize to the - same size as first one and than concat together. - Usually used in FCN head of HRNet. - 'multiple_select': Multiple feature maps will be bundle into - a list and passed into decode head. - None: Only one select feature map is allowed. - """ - - if input_transform is not None: - assert input_transform in ['resize_concat', 'multiple_select'] - self.input_transform = input_transform - self.in_index = in_index - if input_transform is not None: - assert isinstance(in_channels, (list, tuple)) - assert isinstance(in_index, (list, tuple)) - assert len(in_channels) == len(in_index) - if input_transform == 'resize_concat': - self.in_channels = sum(in_channels) - else: - self.in_channels = in_channels - else: - assert isinstance(in_channels, int) - assert isinstance(in_index, int) - self.in_channels = in_channels - - def _transform_inputs(self, inputs): - """Transform inputs for decoder. - - Args: - inputs (list[Tensor]): List of multi-level img features. 
- - Returns: - Tensor: The transformed inputs - """ - - if self.input_transform == 'resize_concat': - inputs = [inputs[i] for i in self.in_index] - upsampled_inputs = [ - resize( - input=x, - size=inputs[0].shape[2:], - mode='bilinear', - align_corners=self.align_corners) for x in inputs - ] - inputs = torch.cat(upsampled_inputs, dim=1) - elif self.input_transform == 'multiple_select': - inputs = [inputs[i] for i in self.in_index] - else: - inputs = inputs[self.in_index] - - return inputs - - @auto_fp16() - @abstractmethod - def forward(self, inputs): - """Placeholder of forward function.""" - pass - - def forward_train(self, inputs, img_metas, gt_semantic_seg, train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - seg_logits = self.forward(inputs) - losses = self.losses(seg_logits, gt_semantic_seg) - return losses - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. - """ - return self.forward(inputs) - - def cls_seg(self, feat): - """Classify each pixel.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.conv_seg(feat) - return output - - @force_fp32(apply_to=('seg_logit', )) - def losses(self, seg_logit, seg_label): - """Compute segmentation loss.""" - loss = dict() - seg_logit = resize( - input=seg_logit, - size=seg_label.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - if self.sampler is not None: - seg_weight = self.sampler.sample(seg_logit, seg_label) - else: - seg_weight = None - seg_label = seg_label.squeeze(1) - - if not isinstance(self.loss_decode, nn.ModuleList): - losses_decode = [self.loss_decode] - else: - losses_decode = self.loss_decode - for loss_decode in losses_decode: - if loss_decode.loss_name not in loss: - loss[loss_decode.loss_name] = loss_decode( - seg_logit, - seg_label, - weight=seg_weight, - ignore_index=self.ignore_index) - else: - loss[loss_decode.loss_name] += loss_decode( - seg_logit, - seg_label, - weight=seg_weight, - ignore_index=self.ignore_index) - - loss['acc_seg'] = accuracy(seg_logit, seg_label) - return loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dm_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dm_head.py deleted file mode 100644 index ffaa870a..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dm_head.py +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, build_activation_layer, build_norm_layer - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class DCM(nn.Module): - """Dynamic Convolutional Module used in DMNet. - - Args: - filter_size (int): The filter size of generated convolution kernel - used in Dynamic Convolutional Module. - fusion (bool): Add one conv to fuse DCM output feature. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict | None): Config of conv layers. - norm_cfg (dict | None): Config of norm layers. - act_cfg (dict): Config of activation layers. - """ - - def __init__(self, filter_size, fusion, in_channels, channels, conv_cfg, - norm_cfg, act_cfg): - super(DCM, self).__init__() - self.filter_size = filter_size - self.fusion = fusion - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.filter_gen_conv = nn.Conv2d(self.in_channels, self.channels, 1, 1, - 0) - - self.input_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - if self.norm_cfg is not None: - self.norm = build_norm_layer(self.norm_cfg, self.channels)[1] - else: - self.norm = None - self.activate = build_activation_layer(self.act_cfg) - - if self.fusion: - self.fusion_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, x): - """Forward function.""" - generated_filter = self.filter_gen_conv( - F.adaptive_avg_pool2d(x, self.filter_size)) - x = self.input_redu_conv(x) - b, c, h, w = x.shape - # [1, b * c, h, w], c = self.channels - x = x.view(1, b * c, h, w) - # [b * c, 1, filter_size, filter_size] - generated_filter = generated_filter.view(b * c, 1, self.filter_size, - self.filter_size) - pad = (self.filter_size - 1) // 2 - if (self.filter_size - 1) % 2 == 0: - p2d = (pad, pad, pad, pad) - else: - p2d = (pad + 1, pad, pad + 1, pad) - x = F.pad(input=x, pad=p2d, mode='constant', value=0) - # [1, b * c, h, w] - output = F.conv2d(input=x, weight=generated_filter, groups=b * c) - # [b, c, h, w] - output = output.view(b, c, h, w) - if self.norm is not None: - output = self.norm(output) - output = self.activate(output) - - if self.fusion: - output = self.fusion_conv(output) - - return output - - -@HEADS.register_module() -class DMHead(BaseDecodeHead): - """Dynamic Multi-scale Filters for Semantic Segmentation. - - This head is the implementation of - `DMNet `_. - - Args: - filter_sizes (tuple[int]): The size of generated convolutional filters - used in Dynamic Convolutional Module. Default: (1, 3, 5, 7). - fusion (bool): Add one conv to fuse DCM output feature. 
- """ - - def __init__(self, filter_sizes=(1, 3, 5, 7), fusion=False, **kwargs): - super(DMHead, self).__init__(**kwargs) - assert isinstance(filter_sizes, (list, tuple)) - self.filter_sizes = filter_sizes - self.fusion = fusion - dcm_modules = [] - for filter_size in self.filter_sizes: - dcm_modules.append( - DCM(filter_size, - self.fusion, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.dcm_modules = nn.ModuleList(dcm_modules) - self.bottleneck = ConvModule( - self.in_channels + len(filter_sizes) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - dcm_outs = [x] - for dcm_module in self.dcm_modules: - dcm_outs.append(dcm_module(x)) - dcm_outs = torch.cat(dcm_outs, dim=1) - output = self.bottleneck(dcm_outs) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dnl_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dnl_head.py deleted file mode 100644 index ab53d9a2..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dnl_head.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import NonLocal2d -from torch import nn - -from ..builder import HEADS -from .fcn_head import FCNHead - - -class DisentangledNonLocal2d(NonLocal2d): - """Disentangled Non-Local Blocks. - - Args: - temperature (float): Temperature to adjust attention. Default: 0.05 - """ - - def __init__(self, *arg, temperature, **kwargs): - super().__init__(*arg, **kwargs) - self.temperature = temperature - self.conv_mask = nn.Conv2d(self.in_channels, 1, kernel_size=1) - - def embedded_gaussian(self, theta_x, phi_x): - """Embedded gaussian with temperature.""" - - # NonLocal2d pairwise_weight: [N, HxW, HxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - if self.use_scale: - # theta_x.shape[-1] is `self.inter_channels` - pairwise_weight /= theta_x.shape[-1]**0.5 - pairwise_weight /= self.temperature - pairwise_weight = pairwise_weight.softmax(dim=-1) - return pairwise_weight - - def forward(self, x): - # x: [N, C, H, W] - n = x.size(0) - - # g_x: [N, HxW, C] - g_x = self.g(x).view(n, self.inter_channels, -1) - g_x = g_x.permute(0, 2, 1) - - # theta_x: [N, HxW, C], phi_x: [N, C, HxW] - if self.mode == 'gaussian': - theta_x = x.view(n, self.in_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - if self.sub_sample: - phi_x = self.phi(x).view(n, self.in_channels, -1) - else: - phi_x = x.view(n, self.in_channels, -1) - elif self.mode == 'concatenation': - theta_x = self.theta(x).view(n, self.inter_channels, -1, 1) - phi_x = self.phi(x).view(n, self.inter_channels, 1, -1) - else: - theta_x = self.theta(x).view(n, self.inter_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - phi_x = self.phi(x).view(n, self.inter_channels, -1) - - # subtract mean - theta_x -= theta_x.mean(dim=-2, keepdim=True) - phi_x -= phi_x.mean(dim=-1, keepdim=True) - - pairwise_func = getattr(self, self.mode) - # pairwise_weight: [N, HxW, HxW] - pairwise_weight = pairwise_func(theta_x, phi_x) - - # y: [N, HxW, C] - y = torch.matmul(pairwise_weight, g_x) - # y: [N, C, H, W] - y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels, - *x.size()[2:]) - - # unary_mask: [N, 1, HxW] - unary_mask = 
self.conv_mask(x) - unary_mask = unary_mask.view(n, 1, -1) - unary_mask = unary_mask.softmax(dim=-1) - # unary_x: [N, 1, C] - unary_x = torch.matmul(unary_mask, g_x) - # unary_x: [N, C, 1, 1] - unary_x = unary_x.permute(0, 2, 1).contiguous().reshape( - n, self.inter_channels, 1, 1) - - output = x + self.conv_out(y + unary_x) - - return output - - -@HEADS.register_module() -class DNLHead(FCNHead): - """Disentangled Non-Local Neural Networks. - - This head is the implementation of `DNLNet - `_. - - Args: - reduction (int): Reduction factor of projection transform. Default: 2. - use_scale (bool): Whether to scale pairwise_weight by - sqrt(1/inter_channels). Default: False. - mode (str): The nonlocal mode. Options are 'embedded_gaussian', - 'dot_product'. Default: 'embedded_gaussian.'. - temperature (float): Temperature to adjust attention. Default: 0.05 - """ - - def __init__(self, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - temperature=0.05, - **kwargs): - super(DNLHead, self).__init__(num_convs=2, **kwargs) - self.reduction = reduction - self.use_scale = use_scale - self.mode = mode - self.temperature = temperature - self.dnl_block = DisentangledNonLocal2d( - in_channels=self.channels, - reduction=self.reduction, - use_scale=self.use_scale, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - mode=self.mode, - temperature=self.temperature) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - output = self.dnl_block(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dpt_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dpt_head.py deleted file mode 100644 index a63f9d29..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/dpt_head.py +++ /dev/null @@ -1,293 +0,0 @@ -import math - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, Linear, build_activation_layer -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class ReassembleBlocks(BaseModule): - """ViTPostProcessBlock, process cls_token in ViT backbone output and - rearrange the feature vector to feature map. - - Args: - in_channels (int): ViT feature channels. Default: 768. - out_channels (List): output channels of each stage. - Default: [96, 192, 384, 768]. - readout_type (str): Type of readout operation. Default: 'ignore'. - patch_size (int): The patch size. Default: 16. - init_cfg (dict, optional): Initialization config dict. Default: None. 
- """ - - def __init__(self, - in_channels=768, - out_channels=[96, 192, 384, 768], - readout_type='ignore', - patch_size=16, - init_cfg=None): - super(ReassembleBlocks, self).__init__(init_cfg) - - assert readout_type in ['ignore', 'add', 'project'] - self.readout_type = readout_type - self.patch_size = patch_size - - self.projects = nn.ModuleList([ - ConvModule( - in_channels=in_channels, - out_channels=out_channel, - kernel_size=1, - act_cfg=None, - ) for out_channel in out_channels - ]) - - self.resize_layers = nn.ModuleList([ - nn.ConvTranspose2d( - in_channels=out_channels[0], - out_channels=out_channels[0], - kernel_size=4, - stride=4, - padding=0), - nn.ConvTranspose2d( - in_channels=out_channels[1], - out_channels=out_channels[1], - kernel_size=2, - stride=2, - padding=0), - nn.Identity(), - nn.Conv2d( - in_channels=out_channels[3], - out_channels=out_channels[3], - kernel_size=3, - stride=2, - padding=1) - ]) - if self.readout_type == 'project': - self.readout_projects = nn.ModuleList() - for _ in range(len(self.projects)): - self.readout_projects.append( - nn.Sequential( - Linear(2 * in_channels, in_channels), - build_activation_layer(dict(type='GELU')))) - - def forward(self, inputs): - assert isinstance(inputs, list) - out = [] - for i, x in enumerate(inputs): - assert len(x) == 2 - x, cls_token = x[0], x[1] - feature_shape = x.shape - if self.readout_type == 'project': - x = x.flatten(2).permute((0, 2, 1)) - readout = cls_token.unsqueeze(1).expand_as(x) - x = self.readout_projects[i](torch.cat((x, readout), -1)) - x = x.permute(0, 2, 1).reshape(feature_shape) - elif self.readout_type == 'add': - x = x.flatten(2) + cls_token.unsqueeze(-1) - x = x.reshape(feature_shape) - else: - pass - x = self.projects[i](x) - x = self.resize_layers[i](x) - out.append(x) - return out - - -class PreActResidualConvUnit(BaseModule): - """ResidualConvUnit, pre-activate residual unit. - - Args: - in_channels (int): number of channels in the input feature map. - act_cfg (dict): dictionary to construct and config activation layer. - norm_cfg (dict): dictionary to construct and config norm layer. - stride (int): stride of the first block. Default: 1 - dilation (int): dilation rate for convs layers. Default: 1. - init_cfg (dict, optional): Initialization config dict. Default: None. - """ - - def __init__(self, - in_channels, - act_cfg, - norm_cfg, - stride=1, - dilation=1, - init_cfg=None): - super(PreActResidualConvUnit, self).__init__(init_cfg) - - self.conv1 = ConvModule( - in_channels, - in_channels, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=False, - order=('act', 'conv', 'norm')) - - self.conv2 = ConvModule( - in_channels, - in_channels, - 3, - padding=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - bias=False, - order=('act', 'conv', 'norm')) - - def forward(self, inputs): - inputs_ = inputs.clone() - x = self.conv1(inputs) - x = self.conv2(x) - return x + inputs_ - - -class FeatureFusionBlock(BaseModule): - """FeatureFusionBlock, merge feature map from different stages. - - Args: - in_channels (int): Input channels. - act_cfg (dict): The activation config for ResidualConvUnit. - norm_cfg (dict): Config dict for normalization layer. - expand (bool): Whether expand the channels in post process block. - Default: False. - align_corners (bool): align_corner setting for bilinear upsample. - Default: True. - init_cfg (dict, optional): Initialization config dict. Default: None. 
- """ - - def __init__(self, - in_channels, - act_cfg, - norm_cfg, - expand=False, - align_corners=True, - init_cfg=None): - super(FeatureFusionBlock, self).__init__(init_cfg) - - self.in_channels = in_channels - self.expand = expand - self.align_corners = align_corners - - self.out_channels = in_channels - if self.expand: - self.out_channels = in_channels // 2 - - self.project = ConvModule( - self.in_channels, - self.out_channels, - kernel_size=1, - act_cfg=None, - bias=True) - - self.res_conv_unit1 = PreActResidualConvUnit( - in_channels=self.in_channels, act_cfg=act_cfg, norm_cfg=norm_cfg) - self.res_conv_unit2 = PreActResidualConvUnit( - in_channels=self.in_channels, act_cfg=act_cfg, norm_cfg=norm_cfg) - - def forward(self, *inputs): - x = inputs[0] - if len(inputs) == 2: - if x.shape != inputs[1].shape: - res = resize( - inputs[1], - size=(x.shape[2], x.shape[3]), - mode='bilinear', - align_corners=False) - else: - res = inputs[1] - x = x + self.res_conv_unit1(res) - x = self.res_conv_unit2(x) - x = resize( - x, - scale_factor=2, - mode='bilinear', - align_corners=self.align_corners) - x = self.project(x) - return x - - -@HEADS.register_module() -class DPTHead(BaseDecodeHead): - """Vision Transformers for Dense Prediction. - - This head is implemented of `DPT `_. - - Args: - embed_dims (int): The embed dimension of the ViT backbone. - Default: 768. - post_process_channels (List): Out channels of post process conv - layers. Default: [96, 192, 384, 768]. - readout_type (str): Type of readout operation. Default: 'ignore'. - patch_size (int): The patch size. Default: 16. - expand_channels (bool): Whether expand the channels in post process - block. Default: False. - act_cfg (dict): The activation config for residual conv unit. - Default dict(type='ReLU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). 
- """ - - def __init__(self, - embed_dims=768, - post_process_channels=[96, 192, 384, 768], - readout_type='ignore', - patch_size=16, - expand_channels=False, - act_cfg=dict(type='ReLU'), - norm_cfg=dict(type='BN'), - **kwargs): - super(DPTHead, self).__init__(**kwargs) - - self.in_channels = self.in_channels - self.expand_channels = expand_channels - self.reassemble_blocks = ReassembleBlocks(embed_dims, - post_process_channels, - readout_type, patch_size) - - self.post_process_channels = [ - channel * math.pow(2, i) if expand_channels else channel - for i, channel in enumerate(post_process_channels) - ] - self.convs = nn.ModuleList() - for channel in self.post_process_channels: - self.convs.append( - ConvModule( - channel, - self.channels, - kernel_size=3, - padding=1, - act_cfg=None, - bias=False)) - self.fusion_blocks = nn.ModuleList() - for _ in range(len(self.convs)): - self.fusion_blocks.append( - FeatureFusionBlock(self.channels, act_cfg, norm_cfg)) - self.fusion_blocks[0].res_conv_unit1 = None - self.project = ConvModule( - self.channels, - self.channels, - kernel_size=3, - padding=1, - norm_cfg=norm_cfg) - self.num_fusion_blocks = len(self.fusion_blocks) - self.num_reassemble_blocks = len(self.reassemble_blocks.resize_layers) - self.num_post_process_channels = len(self.post_process_channels) - assert self.num_fusion_blocks == self.num_reassemble_blocks - assert self.num_reassemble_blocks == self.num_post_process_channels - - def forward(self, inputs): - assert len(inputs) == self.num_reassemble_blocks - x = self._transform_inputs(inputs) - x = self.reassemble_blocks(x) - x = [self.convs[i](feature) for i, feature in enumerate(x)] - out = self.fusion_blocks[0](x[-1]) - for i in range(1, len(self.fusion_blocks)): - out = self.fusion_blocks[i](out, x[-(i + 1)]) - out = self.project(out) - out = self.cls_seg(out) - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ema_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ema_head.py deleted file mode 100644 index f6de1671..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ema_head.py +++ /dev/null @@ -1,169 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -def reduce_mean(tensor): - """Reduce mean when distributed training.""" - if not (dist.is_available() and dist.is_initialized()): - return tensor - tensor = tensor.clone() - dist.all_reduce(tensor.div_(dist.get_world_size()), op=dist.ReduceOp.SUM) - return tensor - - -class EMAModule(nn.Module): - """Expectation Maximization Attention Module used in EMANet. - - Args: - channels (int): Channels of the whole module. - num_bases (int): Number of bases. - num_stages (int): Number of the EM iterations. - """ - - def __init__(self, channels, num_bases, num_stages, momentum): - super(EMAModule, self).__init__() - assert num_stages >= 1, 'num_stages must be at least 1!' - self.num_bases = num_bases - self.num_stages = num_stages - self.momentum = momentum - - bases = torch.zeros(1, channels, self.num_bases) - bases.normal_(0, math.sqrt(2. 
/ self.num_bases)) - # [1, channels, num_bases] - bases = F.normalize(bases, dim=1, p=2) - self.register_buffer('bases', bases) - - def forward(self, feats): - """Forward function.""" - batch_size, channels, height, width = feats.size() - # [batch_size, channels, height*width] - feats = feats.view(batch_size, channels, height * width) - # [batch_size, channels, num_bases] - bases = self.bases.repeat(batch_size, 1, 1) - - with torch.no_grad(): - for i in range(self.num_stages): - # [batch_size, height*width, num_bases] - attention = torch.einsum('bcn,bck->bnk', feats, bases) - attention = F.softmax(attention, dim=2) - # l1 norm - attention_normed = F.normalize(attention, dim=1, p=1) - # [batch_size, channels, num_bases] - bases = torch.einsum('bcn,bnk->bck', feats, attention_normed) - # l2 norm - bases = F.normalize(bases, dim=1, p=2) - - feats_recon = torch.einsum('bck,bnk->bcn', bases, attention) - feats_recon = feats_recon.view(batch_size, channels, height, width) - - if self.training: - bases = bases.mean(dim=0, keepdim=True) - bases = reduce_mean(bases) - # l2 norm - bases = F.normalize(bases, dim=1, p=2) - self.bases = (1 - - self.momentum) * self.bases + self.momentum * bases - - return feats_recon - - -@HEADS.register_module() -class EMAHead(BaseDecodeHead): - """Expectation Maximization Attention Networks for Semantic Segmentation. - - This head is the implementation of `EMANet - `_. - - Args: - ema_channels (int): EMA module channels - num_bases (int): Number of bases. - num_stages (int): Number of the EM iterations. - concat_input (bool): Whether concat the input and output of convs - before classification layer. Default: True - momentum (float): Momentum to update the base. Default: 0.1. - """ - - def __init__(self, - ema_channels, - num_bases, - num_stages, - concat_input=True, - momentum=0.1, - **kwargs): - super(EMAHead, self).__init__(**kwargs) - self.ema_channels = ema_channels - self.num_bases = num_bases - self.num_stages = num_stages - self.concat_input = concat_input - self.momentum = momentum - self.ema_module = EMAModule(self.ema_channels, self.num_bases, - self.num_stages, self.momentum) - - self.ema_in_conv = ConvModule( - self.in_channels, - self.ema_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - # project (0, inf) -> (-inf, inf) - self.ema_mid_conv = ConvModule( - self.ema_channels, - self.ema_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=None, - act_cfg=None) - for param in self.ema_mid_conv.parameters(): - param.requires_grad = False - - self.ema_out_conv = ConvModule( - self.ema_channels, - self.ema_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=None) - self.bottleneck = ConvModule( - self.ema_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if self.concat_input: - self.conv_cat = ConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - feats = self.ema_in_conv(x) - identity = feats - feats = self.ema_mid_conv(feats) - recon = self.ema_module(feats) - recon = F.relu(recon, inplace=True) - recon = self.ema_out_conv(recon) - output = F.relu(identity + recon, inplace=True) - output = self.bottleneck(output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - 
output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/enc_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/enc_head.py deleted file mode 100644 index 648c8906..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/enc_head.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, build_norm_layer - -from mmseg.ops import Encoding, resize -from ..builder import HEADS, build_loss -from .decode_head import BaseDecodeHead - - -class EncModule(nn.Module): - """Encoding Module used in EncNet. - - Args: - in_channels (int): Input channels. - num_codes (int): Number of code words. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - """ - - def __init__(self, in_channels, num_codes, conv_cfg, norm_cfg, act_cfg): - super(EncModule, self).__init__() - self.encoding_project = ConvModule( - in_channels, - in_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - # TODO: resolve this hack - # change to 1d - if norm_cfg is not None: - encoding_norm_cfg = norm_cfg.copy() - if encoding_norm_cfg['type'] in ['BN', 'IN']: - encoding_norm_cfg['type'] += '1d' - else: - encoding_norm_cfg['type'] = encoding_norm_cfg['type'].replace( - '2d', '1d') - else: - # fallback to BN1d - encoding_norm_cfg = dict(type='BN1d') - self.encoding = nn.Sequential( - Encoding(channels=in_channels, num_codes=num_codes), - build_norm_layer(encoding_norm_cfg, num_codes)[1], - nn.ReLU(inplace=True)) - self.fc = nn.Sequential( - nn.Linear(in_channels, in_channels), nn.Sigmoid()) - - def forward(self, x): - """Forward function.""" - encoding_projection = self.encoding_project(x) - encoding_feat = self.encoding(encoding_projection).mean(dim=1) - batch_size, channels, _, _ = x.size() - gamma = self.fc(encoding_feat) - y = gamma.view(batch_size, channels, 1, 1) - output = F.relu_(x + x * y) - return encoding_feat, output - - -@HEADS.register_module() -class EncHead(BaseDecodeHead): - """Context Encoding for Semantic Segmentation. - - This head is the implementation of `EncNet - `_. - - Args: - num_codes (int): Number of code words. Default: 32. - use_se_loss (bool): Whether use Semantic Encoding Loss (SE-loss) to - regularize the training. Default: True. - add_lateral (bool): Whether use lateral connection to fuse features. - Default: False. - loss_se_decode (dict): Config of decode loss. - Default: dict(type='CrossEntropyLoss', use_sigmoid=True). 
- """ - - def __init__(self, - num_codes=32, - use_se_loss=True, - add_lateral=False, - loss_se_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=0.2), - **kwargs): - super(EncHead, self).__init__( - input_transform='multiple_select', **kwargs) - self.use_se_loss = use_se_loss - self.add_lateral = add_lateral - self.num_codes = num_codes - self.bottleneck = ConvModule( - self.in_channels[-1], - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if add_lateral: - self.lateral_convs = nn.ModuleList() - for in_channels in self.in_channels[:-1]: # skip the last one - self.lateral_convs.append( - ConvModule( - in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.fusion = ConvModule( - len(self.in_channels) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.enc_module = EncModule( - self.channels, - num_codes=num_codes, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if self.use_se_loss: - self.loss_se_decode = build_loss(loss_se_decode) - self.se_layer = nn.Linear(self.channels, self.num_classes) - - def forward(self, inputs): - """Forward function.""" - inputs = self._transform_inputs(inputs) - feat = self.bottleneck(inputs[-1]) - if self.add_lateral: - laterals = [ - resize( - lateral_conv(inputs[i]), - size=feat.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - feat = self.fusion(torch.cat([feat, *laterals], 1)) - encode_feat, output = self.enc_module(feat) - output = self.cls_seg(output) - if self.use_se_loss: - se_output = self.se_layer(encode_feat) - return output, se_output - else: - return output - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing, ignore se_loss.""" - if self.use_se_loss: - return self.forward(inputs)[0] - else: - return self.forward(inputs) - - @staticmethod - def _convert_to_onehot_labels(seg_label, num_classes): - """Convert segmentation label to onehot. - - Args: - seg_label (Tensor): Segmentation label of shape (N, H, W). - num_classes (int): Number of classes. - - Returns: - Tensor: Onehot labels of shape (N, num_classes). - """ - - batch_size = seg_label.size(0) - onehot_labels = seg_label.new_zeros((batch_size, num_classes)) - for i in range(batch_size): - hist = seg_label[i].float().histc( - bins=num_classes, min=0, max=num_classes - 1) - onehot_labels[i] = hist > 0 - return onehot_labels - - def losses(self, seg_logit, seg_label): - """Compute segmentation and semantic encoding loss.""" - seg_logit, se_seg_logit = seg_logit - loss = dict() - loss.update(super(EncHead, self).losses(seg_logit, seg_label)) - se_loss = self.loss_se_decode( - se_seg_logit, - self._convert_to_onehot_labels(seg_label, self.num_classes)) - loss['loss_se'] = se_loss - return loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/fcn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/fcn_head.py deleted file mode 100644 index 3c8de51f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/fcn_head.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class FCNHead(BaseDecodeHead): - """Fully Convolution Networks for Semantic Segmentation. - - This head is implemented of `FCNNet `_. - - Args: - num_convs (int): Number of convs in the head. Default: 2. - kernel_size (int): The kernel size for convs in the head. Default: 3. - concat_input (bool): Whether concat the input and output of convs - before classification layer. - dilation (int): The dilation rate for convs in the head. Default: 1. - """ - - def __init__(self, - num_convs=2, - kernel_size=3, - concat_input=True, - dilation=1, - **kwargs): - assert num_convs >= 0 and dilation > 0 and isinstance(dilation, int) - self.num_convs = num_convs - self.concat_input = concat_input - self.kernel_size = kernel_size - super(FCNHead, self).__init__(**kwargs) - if num_convs == 0: - assert self.in_channels == self.channels - - conv_padding = (kernel_size // 2) * dilation - convs = [] - convs.append( - ConvModule( - self.in_channels, - self.channels, - kernel_size=kernel_size, - padding=conv_padding, - dilation=dilation, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - for i in range(num_convs - 1): - convs.append( - ConvModule( - self.channels, - self.channels, - kernel_size=kernel_size, - padding=conv_padding, - dilation=dilation, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - if num_convs == 0: - self.convs = nn.Identity() - else: - self.convs = nn.Sequential(*convs) - if self.concat_input: - self.conv_cat = ConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=kernel_size, - padding=kernel_size // 2, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs(x) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/fpn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/fpn_head.py deleted file mode 100644 index e41f324c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/fpn_head.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.ops import Upsample, resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class FPNHead(BaseDecodeHead): - """Panoptic Feature Pyramid Networks. - - This head is the implementation of `Semantic FPN - `_. - - Args: - feature_strides (tuple[int]): The strides for input feature maps. - stack_lateral. All strides suppose to be power of 2. The first - one is of largest resolution. 
- """ - - def __init__(self, feature_strides, **kwargs): - super(FPNHead, self).__init__( - input_transform='multiple_select', **kwargs) - assert len(feature_strides) == len(self.in_channels) - assert min(feature_strides) == feature_strides[0] - self.feature_strides = feature_strides - - self.scale_heads = nn.ModuleList() - for i in range(len(feature_strides)): - head_length = max( - 1, - int(np.log2(feature_strides[i]) - np.log2(feature_strides[0]))) - scale_head = [] - for k in range(head_length): - scale_head.append( - ConvModule( - self.in_channels[i] if k == 0 else self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - if feature_strides[i] != feature_strides[0]: - scale_head.append( - Upsample( - scale_factor=2, - mode='bilinear', - align_corners=self.align_corners)) - self.scale_heads.append(nn.Sequential(*scale_head)) - - def forward(self, inputs): - - x = self._transform_inputs(inputs) - - output = self.scale_heads[0](x[0]) - for i in range(1, len(self.feature_strides)): - # non inplace - output = output + resize( - self.scale_heads[i](x[i]), - size=output.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/gc_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/gc_head.py deleted file mode 100644 index eed50742..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/gc_head.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ContextBlock - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class GCHead(FCNHead): - """GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond. - - This head is the implementation of `GCNet - `_. - - Args: - ratio (float): Multiplier of channels ratio. Default: 1/4. - pooling_type (str): The pooling type of context aggregation. - Options are 'att', 'avg'. Default: 'avg'. - fusion_types (tuple[str]): The fusion type for feature fusion. - Options are 'channel_add', 'channel_mul'. 
Default: ('channel_add',) - """ - - def __init__(self, - ratio=1 / 4., - pooling_type='att', - fusion_types=('channel_add', ), - **kwargs): - super(GCHead, self).__init__(num_convs=2, **kwargs) - self.ratio = ratio - self.pooling_type = pooling_type - self.fusion_types = fusion_types - self.gc_block = ContextBlock( - in_channels=self.channels, - ratio=self.ratio, - pooling_type=self.pooling_type, - fusion_types=self.fusion_types) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - output = self.gc_block(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/isa_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/isa_head.py deleted file mode 100644 index c9224b61..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/isa_head.py +++ /dev/null @@ -1,142 +0,0 @@ -import math - -import torch -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class SelfAttentionBlock(_SelfAttentionBlock): - """Self-Attention Module. - - Args: - in_channels (int): Input channels of key/query feature. - channels (int): Output channels of key/query transform. - conv_cfg (dict | None): Config of conv layers. - norm_cfg (dict | None): Config of norm layers. - act_cfg (dict | None): Config of activation layers. - """ - - def __init__(self, in_channels, channels, conv_cfg, norm_cfg, act_cfg): - super(SelfAttentionBlock, self).__init__( - key_in_channels=in_channels, - query_in_channels=in_channels, - channels=channels, - out_channels=in_channels, - share_key_query=False, - query_downsample=None, - key_downsample=None, - key_query_num_convs=2, - key_query_norm=True, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=True, - with_out=False, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - self.output_project = self.build_project( - in_channels, - in_channels, - num_convs=1, - use_conv_module=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x): - """Forward function.""" - context = super(SelfAttentionBlock, self).forward(x, x) - return self.output_project(context) - - -@HEADS.register_module() -class ISAHead(BaseDecodeHead): - """Interlaced Sparse Self-Attention for Semantic Segmentation. - - This head is the implementation of `ISA - `_. - - Args: - isa_channels (int): The channels of ISA Module. - down_factor (tuple[int]): The local group size of ISA. 
- """ - - def __init__(self, isa_channels, down_factor=(8, 8), **kwargs): - super(ISAHead, self).__init__(**kwargs) - self.down_factor = down_factor - - self.in_conv = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.global_relation = SelfAttentionBlock( - self.channels, - isa_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.local_relation = SelfAttentionBlock( - self.channels, - isa_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.out_conv = ConvModule( - self.channels * 2, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x_ = self._transform_inputs(inputs) - x = self.in_conv(x_) - residual = x - - n, c, h, w = x.size() - loc_h, loc_w = self.down_factor # size of local group in H- and W-axes - glb_h, glb_w = math.ceil(h / loc_h), math.ceil(w / loc_w) - pad_h, pad_w = glb_h * loc_h - h, glb_w * loc_w - w - if pad_h > 0 or pad_w > 0: # pad if the size is not divisible - padding = (pad_w // 2, pad_w - pad_w // 2, pad_h // 2, - pad_h - pad_h // 2) - x = F.pad(x, padding) - - # global relation - x = x.view(n, c, glb_h, loc_h, glb_w, loc_w) - # do permutation to gather global group - x = x.permute(0, 3, 5, 1, 2, 4) # (n, loc_h, loc_w, c, glb_h, glb_w) - x = x.reshape(-1, c, glb_h, glb_w) - # apply attention within each global group - x = self.global_relation(x) # (n * loc_h * loc_w, c, glb_h, glb_w) - - # local relation - x = x.view(n, loc_h, loc_w, c, glb_h, glb_w) - # do permutation to gather local group - x = x.permute(0, 4, 5, 3, 1, 2) # (n, glb_h, glb_w, c, loc_h, loc_w) - x = x.reshape(-1, c, loc_h, loc_w) - # apply attention within each local group - x = self.local_relation(x) # (n * glb_h * glb_w, c, loc_h, loc_w) - - # permute each pixel back to its original position - x = x.view(n, glb_h, glb_w, c, loc_h, loc_w) - x = x.permute(0, 3, 1, 4, 2, 5) # (n, c, glb_h, loc_h, glb_w, loc_w) - x = x.reshape(n, c, glb_h * loc_h, glb_w * loc_w) - if pad_h > 0 or pad_w > 0: # remove padding - x = x[:, :, pad_h // 2:pad_h // 2 + h, pad_w // 2:pad_w // 2 + w] - - x = self.out_conv(torch.cat([x, residual], dim=1)) - out = self.cls_seg(x) - - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/lraspp_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/lraspp_head.py deleted file mode 100644 index c10ff0d8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/lraspp_head.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv import is_tuple_of -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class LRASPPHead(BaseDecodeHead): - """Lite R-ASPP (LRASPP) head is proposed in Searching for MobileNetV3. - - This head is the improved implementation of `Searching for MobileNetV3 - `_. - - Args: - branch_channels (tuple[int]): The number of output channels in every - each branch. Default: (32, 64). 
- """ - - def __init__(self, branch_channels=(32, 64), **kwargs): - super(LRASPPHead, self).__init__(**kwargs) - if self.input_transform != 'multiple_select': - raise ValueError('in Lite R-ASPP (LRASPP) head, input_transform ' - f'must be \'multiple_select\'. But received ' - f'\'{self.input_transform}\'') - assert is_tuple_of(branch_channels, int) - assert len(branch_channels) == len(self.in_channels) - 1 - self.branch_channels = branch_channels - - self.convs = nn.Sequential() - self.conv_ups = nn.Sequential() - for i in range(len(branch_channels)): - self.convs.add_module( - f'conv{i}', - nn.Conv2d( - self.in_channels[i], branch_channels[i], 1, bias=False)) - self.conv_ups.add_module( - f'conv_up{i}', - ConvModule( - self.channels + branch_channels[i], - self.channels, - 1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=False)) - - self.conv_up_input = nn.Conv2d(self.channels, self.channels, 1) - - self.aspp_conv = ConvModule( - self.in_channels[-1], - self.channels, - 1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=False) - self.image_pool = nn.Sequential( - nn.AvgPool2d(kernel_size=49, stride=(16, 20)), - ConvModule( - self.in_channels[2], - self.channels, - 1, - act_cfg=dict(type='Sigmoid'), - bias=False)) - - def forward(self, inputs): - """Forward function.""" - inputs = self._transform_inputs(inputs) - - x = inputs[-1] - - x = self.aspp_conv(x) * resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - x = self.conv_up_input(x) - - for i in range(len(self.branch_channels) - 1, -1, -1): - x = resize( - x, - size=inputs[i].size()[2:], - mode='bilinear', - align_corners=self.align_corners) - x = torch.cat([x, self.convs[i](inputs[i])], 1) - x = self.conv_ups[i](x) - - return self.cls_seg(x) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/nl_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/nl_head.py deleted file mode 100644 index 637517e7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/nl_head.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import NonLocal2d - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class NLHead(FCNHead): - """Non-local Neural Networks. - - This head is the implementation of `NLNet - `_. - - Args: - reduction (int): Reduction factor of projection transform. Default: 2. - use_scale (bool): Whether to scale pairwise_weight by - sqrt(1/inter_channels). Default: True. - mode (str): The nonlocal mode. Options are 'embedded_gaussian', - 'dot_product'. Default: 'embedded_gaussian.'. 
- """ - - def __init__(self, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - **kwargs): - super(NLHead, self).__init__(num_convs=2, **kwargs) - self.reduction = reduction - self.use_scale = use_scale - self.mode = mode - self.nl_block = NonLocal2d( - in_channels=self.channels, - reduction=self.reduction, - use_scale=self.use_scale, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - mode=self.mode) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - output = self.nl_block(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ocr_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ocr_head.py deleted file mode 100644 index 09eadfb1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/ocr_head.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .cascade_decode_head import BaseCascadeDecodeHead - - -class SpatialGatherModule(nn.Module): - """Aggregate the context features according to the initial predicted - probability distribution. - - Employ the soft-weighted method to aggregate the context. - """ - - def __init__(self, scale): - super(SpatialGatherModule, self).__init__() - self.scale = scale - - def forward(self, feats, probs): - """Forward function.""" - batch_size, num_classes, height, width = probs.size() - channels = feats.size(1) - probs = probs.view(batch_size, num_classes, -1) - feats = feats.view(batch_size, channels, -1) - # [batch_size, height*width, num_classes] - feats = feats.permute(0, 2, 1) - # [batch_size, channels, height*width] - probs = F.softmax(self.scale * probs, dim=2) - # [batch_size, channels, num_classes] - ocr_context = torch.matmul(probs, feats) - ocr_context = ocr_context.permute(0, 2, 1).contiguous().unsqueeze(3) - return ocr_context - - -class ObjectAttentionBlock(_SelfAttentionBlock): - """Make a OCR used SelfAttentionBlock.""" - - def __init__(self, in_channels, channels, scale, conv_cfg, norm_cfg, - act_cfg): - if scale > 1: - query_downsample = nn.MaxPool2d(kernel_size=scale) - else: - query_downsample = None - super(ObjectAttentionBlock, self).__init__( - key_in_channels=in_channels, - query_in_channels=in_channels, - channels=channels, - out_channels=in_channels, - share_key_query=False, - query_downsample=query_downsample, - key_downsample=None, - key_query_num_convs=2, - key_query_norm=True, - value_out_num_convs=1, - value_out_norm=True, - matmul_norm=True, - with_out=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.bottleneck = ConvModule( - in_channels * 2, - in_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, query_feats, key_feats): - """Forward function.""" - context = super(ObjectAttentionBlock, - self).forward(query_feats, key_feats) - output = self.bottleneck(torch.cat([context, query_feats], dim=1)) - if self.query_downsample is not None: - output = resize(query_feats) - - return output - - -@HEADS.register_module() -class 
OCRHead(BaseCascadeDecodeHead): - """Object-Contextual Representations for Semantic Segmentation. - - This head is the implementation of `OCRNet - `_. - - Args: - ocr_channels (int): The intermediate channels of OCR block. - scale (int): The scale of probability map in SpatialGatherModule in - Default: 1. - """ - - def __init__(self, ocr_channels, scale=1, **kwargs): - super(OCRHead, self).__init__(**kwargs) - self.ocr_channels = ocr_channels - self.scale = scale - self.object_context_block = ObjectAttentionBlock( - self.channels, - self.ocr_channels, - self.scale, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.spatial_gather_module = SpatialGatherModule(self.scale) - - self.bottleneck = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs, prev_output): - """Forward function.""" - x = self._transform_inputs(inputs) - feats = self.bottleneck(x) - context = self.spatial_gather_module(feats, prev_output) - object_context = self.object_context_block(feats, context) - output = self.cls_seg(object_context) - - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/point_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/point_head.py deleted file mode 100644 index 72762180..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/point_head.py +++ /dev/null @@ -1,356 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend/point_head/point_head.py # noqa - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import point_sample - -from mmseg.models.builder import HEADS -from mmseg.ops import resize -from ..losses import accuracy -from .cascade_decode_head import BaseCascadeDecodeHead - - -def calculate_uncertainty(seg_logits): - """Estimate uncertainty based on seg logits. - - For each location of the prediction ``seg_logits`` we estimate - uncertainty as the difference between top first and top second - predicted logits. - - Args: - seg_logits (Tensor): Semantic segmentation logits, - shape (batch_size, num_classes, height, width). - - Returns: - scores (Tensor): T uncertainty scores with the most uncertain - locations having the highest uncertainty score, shape ( - batch_size, 1, height, width) - """ - top2_scores = torch.topk(seg_logits, k=2, dim=1)[0] - return (top2_scores[:, 1] - top2_scores[:, 0]).unsqueeze(1) - - -@HEADS.register_module() -class PointHead(BaseCascadeDecodeHead): - """A mask point head use in PointRend. - - This head is implemented of `PointRend: Image Segmentation as - Rendering `_. - ``PointHead`` use shared multi-layer perceptron (equivalent to - nn.Conv1d) to predict the logit of input points. The fine-grained feature - and coarse feature will be concatenate together for predication. - - Args: - num_fcs (int): Number of fc layers in the head. Default: 3. - in_channels (int): Number of input channels. Default: 256. - fc_channels (int): Number of fc channels. Default: 256. - num_classes (int): Number of classes for logits. Default: 80. - class_agnostic (bool): Whether use class agnostic classification. - If so, the output channels of logits will be 1. Default: False. - coarse_pred_each_layer (bool): Whether concatenate coarse feature with - the output of each fc layer. Default: True. 
- conv_cfg (dict|None): Dictionary to construct and config conv layer. - Default: dict(type='Conv1d')) - norm_cfg (dict|None): Dictionary to construct and config norm layer. - Default: None. - loss_point (dict): Dictionary to construct and config loss layer of - point head. Default: dict(type='CrossEntropyLoss', use_mask=True, - loss_weight=1.0). - """ - - def __init__(self, - num_fcs=3, - coarse_pred_each_layer=True, - conv_cfg=dict(type='Conv1d'), - norm_cfg=None, - act_cfg=dict(type='ReLU', inplace=False), - **kwargs): - super(PointHead, self).__init__( - input_transform='multiple_select', - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - init_cfg=dict( - type='Normal', std=0.01, override=dict(name='fc_seg')), - **kwargs) - - self.num_fcs = num_fcs - self.coarse_pred_each_layer = coarse_pred_each_layer - - fc_in_channels = sum(self.in_channels) + self.num_classes - fc_channels = self.channels - self.fcs = nn.ModuleList() - for k in range(num_fcs): - fc = ConvModule( - fc_in_channels, - fc_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.fcs.append(fc) - fc_in_channels = fc_channels - fc_in_channels += self.num_classes if self.coarse_pred_each_layer \ - else 0 - self.fc_seg = nn.Conv1d( - fc_in_channels, - self.num_classes, - kernel_size=1, - stride=1, - padding=0) - if self.dropout_ratio > 0: - self.dropout = nn.Dropout(self.dropout_ratio) - delattr(self, 'conv_seg') - - def cls_seg(self, feat): - """Classify each pixel with fc.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.fc_seg(feat) - return output - - def forward(self, fine_grained_point_feats, coarse_point_feats): - x = torch.cat([fine_grained_point_feats, coarse_point_feats], dim=1) - for fc in self.fcs: - x = fc(x) - if self.coarse_pred_each_layer: - x = torch.cat((x, coarse_point_feats), dim=1) - return self.cls_seg(x) - - def _get_fine_grained_point_feats(self, x, points): - """Sample from fine grained features. - - Args: - x (list[Tensor]): Feature pyramid from by neck or backbone. - points (Tensor): Point coordinates, shape (batch_size, - num_points, 2). - - Returns: - fine_grained_feats (Tensor): Sampled fine grained feature, - shape (batch_size, sum(channels of x), num_points). - """ - - fine_grained_feats_list = [ - point_sample(_, points, align_corners=self.align_corners) - for _ in x - ] - if len(fine_grained_feats_list) > 1: - fine_grained_feats = torch.cat(fine_grained_feats_list, dim=1) - else: - fine_grained_feats = fine_grained_feats_list[0] - - return fine_grained_feats - - def _get_coarse_point_feats(self, prev_output, points): - """Sample from fine grained features. - - Args: - prev_output (list[Tensor]): Prediction of previous decode head. - points (Tensor): Point coordinates, shape (batch_size, - num_points, 2). - - Returns: - coarse_feats (Tensor): Sampled coarse feature, shape (batch_size, - num_classes, num_points). - """ - - coarse_feats = point_sample( - prev_output, points, align_corners=self.align_corners) - - return coarse_feats - - def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg, - train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. 
- For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - x = self._transform_inputs(inputs) - with torch.no_grad(): - points = self.get_points_train( - prev_output, calculate_uncertainty, cfg=train_cfg) - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, points) - coarse_point_feats = self._get_coarse_point_feats(prev_output, points) - point_logits = self.forward(fine_grained_point_feats, - coarse_point_feats) - point_label = point_sample( - gt_semantic_seg.float(), - points, - mode='nearest', - align_corners=self.align_corners) - point_label = point_label.squeeze(1).long() - - losses = self.losses(point_logits, point_label) - - return losses - - def forward_test(self, inputs, prev_output, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. - """ - - x = self._transform_inputs(inputs) - refined_seg_logits = prev_output.clone() - for _ in range(test_cfg.subdivision_steps): - refined_seg_logits = resize( - refined_seg_logits, - scale_factor=test_cfg.scale_factor, - mode='bilinear', - align_corners=self.align_corners) - batch_size, channels, height, width = refined_seg_logits.shape - point_indices, points = self.get_points_test( - refined_seg_logits, calculate_uncertainty, cfg=test_cfg) - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, points) - coarse_point_feats = self._get_coarse_point_feats( - prev_output, points) - point_logits = self.forward(fine_grained_point_feats, - coarse_point_feats) - - point_indices = point_indices.unsqueeze(1).expand(-1, channels, -1) - refined_seg_logits = refined_seg_logits.reshape( - batch_size, channels, height * width) - refined_seg_logits = refined_seg_logits.scatter_( - 2, point_indices, point_logits) - refined_seg_logits = refined_seg_logits.view( - batch_size, channels, height, width) - - return refined_seg_logits - - def losses(self, point_logits, point_label): - """Compute segmentation loss.""" - loss = dict() - if not isinstance(self.loss_decode, nn.ModuleList): - losses_decode = [self.loss_decode] - else: - losses_decode = self.loss_decode - for loss_module in losses_decode: - loss['point' + loss_module.loss_name] = loss_module( - point_logits, point_label, ignore_index=self.ignore_index) - - loss['acc_point'] = accuracy(point_logits, point_label) - return loss - - def get_points_train(self, seg_logits, uncertainty_func, cfg): - """Sample points for training. - - Sample points in [0, 1] x [0, 1] coordinate space based on their - uncertainty. The uncertainties are calculated for each point using - 'uncertainty_func' function that takes point's logit prediction as - input. - - Args: - seg_logits (Tensor): Semantic segmentation logits, shape ( - batch_size, num_classes, height, width). 
- uncertainty_func (func): uncertainty calculation function. - cfg (dict): Training config of point head. - - Returns: - point_coords (Tensor): A tensor of shape (batch_size, num_points, - 2) that contains the coordinates of ``num_points`` sampled - points. - """ - num_points = cfg.num_points - oversample_ratio = cfg.oversample_ratio - importance_sample_ratio = cfg.importance_sample_ratio - assert oversample_ratio >= 1 - assert 0 <= importance_sample_ratio <= 1 - batch_size = seg_logits.shape[0] - num_sampled = int(num_points * oversample_ratio) - point_coords = torch.rand( - batch_size, num_sampled, 2, device=seg_logits.device) - point_logits = point_sample(seg_logits, point_coords) - # It is crucial to calculate uncertainty based on the sampled - # prediction value for the points. Calculating uncertainties of the - # coarse predictions first and sampling them for points leads to - # incorrect results. To illustrate this: assume uncertainty func( - # logits)=-abs(logits), a sampled point between two coarse - # predictions with -1 and 1 logits has 0 logits, and therefore 0 - # uncertainty value. However, if we calculate uncertainties for the - # coarse predictions first, both will have -1 uncertainty, - # and sampled point will get -1 uncertainty. - point_uncertainties = uncertainty_func(point_logits) - num_uncertain_points = int(importance_sample_ratio * num_points) - num_random_points = num_points - num_uncertain_points - idx = torch.topk( - point_uncertainties[:, 0, :], k=num_uncertain_points, dim=1)[1] - shift = num_sampled * torch.arange( - batch_size, dtype=torch.long, device=seg_logits.device) - idx += shift[:, None] - point_coords = point_coords.view(-1, 2)[idx.view(-1), :].view( - batch_size, num_uncertain_points, 2) - if num_random_points > 0: - rand_point_coords = torch.rand( - batch_size, num_random_points, 2, device=seg_logits.device) - point_coords = torch.cat((point_coords, rand_point_coords), dim=1) - return point_coords - - def get_points_test(self, seg_logits, uncertainty_func, cfg): - """Sample points for testing. - - Find ``num_points`` most uncertain points from ``uncertainty_map``. - - Args: - seg_logits (Tensor): A tensor of shape (batch_size, num_classes, - height, width) for class-specific or class-agnostic prediction. - uncertainty_func (func): uncertainty calculation function. - cfg (dict): Testing config of point head. - - Returns: - point_indices (Tensor): A tensor of shape (batch_size, num_points) - that contains indices from [0, height x width) of the most - uncertain points. - point_coords (Tensor): A tensor of shape (batch_size, num_points, - 2) that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the ``height x width`` grid . 
- """ - - num_points = cfg.subdivision_num_points - uncertainty_map = uncertainty_func(seg_logits) - batch_size, _, height, width = uncertainty_map.shape - h_step = 1.0 / height - w_step = 1.0 / width - - uncertainty_map = uncertainty_map.view(batch_size, height * width) - num_points = min(height * width, num_points) - point_indices = uncertainty_map.topk(num_points, dim=1)[1] - point_coords = torch.zeros( - batch_size, - num_points, - 2, - dtype=torch.float, - device=seg_logits.device) - point_coords[:, :, 0] = w_step / 2.0 + (point_indices % - width).float() * w_step - point_coords[:, :, 1] = h_step / 2.0 + (point_indices // - width).float() * h_step - return point_indices, point_coords diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/psa_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/psa_head.py deleted file mode 100644 index df7593cb..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/psa_head.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - -try: - from mmcv.ops import PSAMask -except ModuleNotFoundError: - PSAMask = None - - -@HEADS.register_module() -class PSAHead(BaseDecodeHead): - """Point-wise Spatial Attention Network for Scene Parsing. - - This head is the implementation of `PSANet - `_. - - Args: - mask_size (tuple[int]): The PSA mask size. It usually equals input - size. - psa_type (str): The type of psa module. Options are 'collect', - 'distribute', 'bi-direction'. Default: 'bi-direction' - compact (bool): Whether use compact map for 'collect' mode. - Default: True. - shrink_factor (int): The downsample factors of psa mask. Default: 2. - normalization_factor (float): The normalize factor of attention. - psa_softmax (bool): Whether use softmax for attention. 
- """ - - def __init__(self, - mask_size, - psa_type='bi-direction', - compact=False, - shrink_factor=2, - normalization_factor=1.0, - psa_softmax=True, - **kwargs): - if PSAMask is None: - raise RuntimeError('Please install mmcv-full for PSAMask ops') - super(PSAHead, self).__init__(**kwargs) - assert psa_type in ['collect', 'distribute', 'bi-direction'] - self.psa_type = psa_type - self.compact = compact - self.shrink_factor = shrink_factor - self.mask_size = mask_size - mask_h, mask_w = mask_size - self.psa_softmax = psa_softmax - if normalization_factor is None: - normalization_factor = mask_h * mask_w - self.normalization_factor = normalization_factor - - self.reduce = ConvModule( - self.in_channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.attention = nn.Sequential( - ConvModule( - self.channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - nn.Conv2d( - self.channels, mask_h * mask_w, kernel_size=1, bias=False)) - if psa_type == 'bi-direction': - self.reduce_p = ConvModule( - self.in_channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.attention_p = nn.Sequential( - ConvModule( - self.channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - nn.Conv2d( - self.channels, mask_h * mask_w, kernel_size=1, bias=False)) - self.psamask_collect = PSAMask('collect', mask_size) - self.psamask_distribute = PSAMask('distribute', mask_size) - else: - self.psamask = PSAMask(psa_type, mask_size) - self.proj = ConvModule( - self.channels * (2 if psa_type == 'bi-direction' else 1), - self.in_channels, - kernel_size=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - self.in_channels * 2, - self.channels, - kernel_size=3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - identity = x - align_corners = self.align_corners - if self.psa_type in ['collect', 'distribute']: - out = self.reduce(x) - n, c, h, w = out.size() - if self.shrink_factor != 1: - if h % self.shrink_factor and w % self.shrink_factor: - h = (h - 1) // self.shrink_factor + 1 - w = (w - 1) // self.shrink_factor + 1 - align_corners = True - else: - h = h // self.shrink_factor - w = w // self.shrink_factor - align_corners = False - out = resize( - out, - size=(h, w), - mode='bilinear', - align_corners=align_corners) - y = self.attention(out) - if self.compact: - if self.psa_type == 'collect': - y = y.view(n, h * w, - h * w).transpose(1, 2).view(n, h * w, h, w) - else: - y = self.psamask(y) - if self.psa_softmax: - y = F.softmax(y, dim=1) - out = torch.bmm( - out.view(n, c, h * w), y.view(n, h * w, h * w)).view( - n, c, h, w) * (1.0 / self.normalization_factor) - else: - x_col = self.reduce(x) - x_dis = self.reduce_p(x) - n, c, h, w = x_col.size() - if self.shrink_factor != 1: - if h % self.shrink_factor and w % self.shrink_factor: - h = (h - 1) // self.shrink_factor + 1 - w = (w - 1) // self.shrink_factor + 1 - align_corners = True - else: - h = h // self.shrink_factor - w = w // self.shrink_factor - align_corners = False - x_col = resize( - x_col, - size=(h, w), - mode='bilinear', - align_corners=align_corners) - x_dis = resize( - x_dis, - size=(h, w), - 
mode='bilinear', - align_corners=align_corners) - y_col = self.attention(x_col) - y_dis = self.attention_p(x_dis) - if self.compact: - y_dis = y_dis.view(n, h * w, - h * w).transpose(1, 2).view(n, h * w, h, w) - else: - y_col = self.psamask_collect(y_col) - y_dis = self.psamask_distribute(y_dis) - if self.psa_softmax: - y_col = F.softmax(y_col, dim=1) - y_dis = F.softmax(y_dis, dim=1) - x_col = torch.bmm( - x_col.view(n, c, h * w), y_col.view(n, h * w, h * w)).view( - n, c, h, w) * (1.0 / self.normalization_factor) - x_dis = torch.bmm( - x_dis.view(n, c, h * w), y_dis.view(n, h * w, h * w)).view( - n, c, h, w) * (1.0 / self.normalization_factor) - out = torch.cat([x_col, x_dis], 1) - out = self.proj(out) - out = resize( - out, - size=identity.shape[2:], - mode='bilinear', - align_corners=align_corners) - out = self.bottleneck(torch.cat((identity, out), dim=1)) - out = self.cls_seg(out) - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/psp_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/psp_head.py deleted file mode 100644 index a27ae4bd..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/psp_head.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class PPM(nn.ModuleList): - """Pooling Pyramid Module used in PSPNet. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - align_corners (bool): align_corners argument of F.interpolate. - """ - - def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg, - act_cfg, align_corners, **kwargs): - super(PPM, self).__init__() - self.pool_scales = pool_scales - self.align_corners = align_corners - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - for pool_scale in pool_scales: - self.append( - nn.Sequential( - nn.AdaptiveAvgPool2d(pool_scale), - ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - **kwargs))) - - def forward(self, x): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(x) - upsampled_ppm_out = resize( - ppm_out, - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ppm_outs.append(upsampled_ppm_out) - return ppm_outs - - -@HEADS.register_module() -class PSPHead(BaseDecodeHead): - """Pyramid Scene Parsing Network. - - This head is the implementation of - `PSPNet `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. Default: (1, 2, 3, 6). 
- """ - - def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): - super(PSPHead, self).__init__(**kwargs) - assert isinstance(pool_scales, (list, tuple)) - self.pool_scales = pool_scales - self.psp_modules = PPM( - self.pool_scales, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.bottleneck = ConvModule( - self.in_channels + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - psp_outs = [x] - psp_outs.extend(self.psp_modules(x)) - psp_outs = torch.cat(psp_outs, dim=1) - output = self.bottleneck(psp_outs) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/segformer_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/segformer_head.py deleted file mode 100644 index 2e75d506..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/segformer_head.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.models.builder import HEADS -from mmseg.models.decode_heads.decode_head import BaseDecodeHead -from mmseg.ops import resize - - -@HEADS.register_module() -class SegformerHead(BaseDecodeHead): - """The all mlp Head of segformer. - - This head is the implementation of - `Segformer ` _. - - Args: - interpolate_mode: The interpolate mode of MLP head upsample operation. - Default: 'bilinear'. - """ - - def __init__(self, interpolate_mode='bilinear', **kwargs): - super().__init__(input_transform='multiple_select', **kwargs) - - self.interpolate_mode = interpolate_mode - num_inputs = len(self.in_channels) - - assert num_inputs == len(self.in_index) - - self.convs = nn.ModuleList() - for i in range(num_inputs): - self.convs.append( - ConvModule( - in_channels=self.in_channels[i], - out_channels=self.channels, - kernel_size=1, - stride=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - self.fusion_conv = ConvModule( - in_channels=self.channels * num_inputs, - out_channels=self.channels, - kernel_size=1, - norm_cfg=self.norm_cfg) - - def forward(self, inputs): - # Receive 4 stage backbone feature map: 1/4, 1/8, 1/16, 1/32 - inputs = self._transform_inputs(inputs) - outs = [] - for idx in range(len(inputs)): - x = inputs[idx] - conv = self.convs[idx] - outs.append( - resize( - input=conv(x), - size=inputs[0].shape[2:], - mode=self.interpolate_mode, - align_corners=self.align_corners)) - - out = self.fusion_conv(torch.cat(outs, dim=1)) - - out = self.cls_seg(out) - - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/sep_aspp_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/sep_aspp_head.py deleted file mode 100644 index 4e894e28..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/sep_aspp_head.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .aspp_head import ASPPHead, ASPPModule - - -class DepthwiseSeparableASPPModule(ASPPModule): - """Atrous Spatial Pyramid Pooling (ASPP) Module with depthwise separable - conv.""" - - def __init__(self, **kwargs): - super(DepthwiseSeparableASPPModule, self).__init__(**kwargs) - for i, dilation in enumerate(self.dilations): - if dilation > 1: - self[i] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - 3, - dilation=dilation, - padding=dilation, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - -@HEADS.register_module() -class DepthwiseSeparableASPPHead(ASPPHead): - """Encoder-Decoder with Atrous Separable Convolution for Semantic Image - Segmentation. - - This head is the implementation of `DeepLabV3+ - `_. - - Args: - c1_in_channels (int): The input channels of c1 decoder. If is 0, - the no decoder will be used. - c1_channels (int): The intermediate channels of c1 decoder. - """ - - def __init__(self, c1_in_channels, c1_channels, **kwargs): - super(DepthwiseSeparableASPPHead, self).__init__(**kwargs) - assert c1_in_channels >= 0 - self.aspp_modules = DepthwiseSeparableASPPModule( - dilations=self.dilations, - in_channels=self.in_channels, - channels=self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if c1_in_channels > 0: - self.c1_bottleneck = ConvModule( - c1_in_channels, - c1_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - else: - self.c1_bottleneck = None - self.sep_bottleneck = nn.Sequential( - DepthwiseSeparableConvModule( - self.channels + c1_channels, - self.channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - DepthwiseSeparableConvModule( - self.channels, - self.channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - aspp_outs = [ - resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ] - aspp_outs.extend(self.aspp_modules(x)) - aspp_outs = torch.cat(aspp_outs, dim=1) - output = self.bottleneck(aspp_outs) - if self.c1_bottleneck is not None: - c1_output = self.c1_bottleneck(inputs[0]) - output = resize( - input=output, - size=c1_output.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - output = torch.cat([output, c1_output], dim=1) - output = self.sep_bottleneck(output) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/sep_fcn_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/sep_fcn_head.py deleted file mode 100644 index 7f9658e0..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/sep_fcn_head.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import DepthwiseSeparableConvModule - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class DepthwiseSeparableFCNHead(FCNHead): - """Depthwise-Separable Fully Convolutional Network for Semantic - Segmentation. - - This head is implemented according to `Fast-SCNN: Fast Semantic - Segmentation Network `_. - - Args: - in_channels(int): Number of output channels of FFM. 
- channels(int): Number of middle-stage channels in the decode head. - concat_input(bool): Whether to concatenate original decode input into - the result of several consecutive convolution layers. - Default: True. - num_classes(int): Used to determine the dimension of - final prediction tensor. - in_index(int): Correspond with 'out_indices' in FastSCNN backbone. - norm_cfg (dict | None): Config of norm layers. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - loss_decode(dict): Config of loss type and some - relevant additional options. - dw_act_cfg (dict):Activation config of depthwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. Default: None. - """ - - def __init__(self, dw_act_cfg=None, **kwargs): - super(DepthwiseSeparableFCNHead, self).__init__(**kwargs) - self.convs[0] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg, - dw_act_cfg=dw_act_cfg) - - for i in range(1, self.num_convs): - self.convs[i] = DepthwiseSeparableConvModule( - self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg, - dw_act_cfg=dw_act_cfg) - - if self.concat_input: - self.conv_cat = DepthwiseSeparableConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg, - dw_act_cfg=dw_act_cfg) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/setr_mla_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/setr_mla_head.py deleted file mode 100644 index 6bb94ae3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/setr_mla_head.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.ops import Upsample -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class SETRMLAHead(BaseDecodeHead): - """Multi level feature aggretation head of SETR. - - MLA head of `SETR `_. - - Args: - mlahead_channels (int): Channels of conv-conv-4x of multi-level feature - aggregation. Default: 128. - up_scale (int): The scale factor of interpolate. Default:4. 
- """ - - def __init__(self, mla_channels=128, up_scale=4, **kwargs): - super(SETRMLAHead, self).__init__( - input_transform='multiple_select', **kwargs) - self.mla_channels = mla_channels - - num_inputs = len(self.in_channels) - - # Refer to self.cls_seg settings of BaseDecodeHead - assert self.channels == num_inputs * mla_channels - - self.up_convs = nn.ModuleList() - for i in range(num_inputs): - self.up_convs.append( - nn.Sequential( - ConvModule( - in_channels=self.in_channels[i], - out_channels=mla_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - ConvModule( - in_channels=mla_channels, - out_channels=mla_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - Upsample( - scale_factor=up_scale, - mode='bilinear', - align_corners=self.align_corners))) - - def forward(self, inputs): - inputs = self._transform_inputs(inputs) - outs = [] - for x, up_conv in zip(inputs, self.up_convs): - outs.append(up_conv(x)) - out = torch.cat(outs, dim=1) - out = self.cls_seg(out) - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/setr_up_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/setr_up_head.py deleted file mode 100644 index 87e7ea7f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/setr_up_head.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, build_norm_layer - -from mmseg.ops import Upsample -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class SETRUPHead(BaseDecodeHead): - """Naive upsampling head and Progressive upsampling head of SETR. - - Naive or PUP head of `SETR `_. - - Args: - norm_layer (dict): Config dict for input normalization. - Default: norm_layer=dict(type='LN', eps=1e-6, requires_grad=True). - num_convs (int): Number of decoder convolutions. Default: 1. - up_scale (int): The scale factor of interpolate. Default:4. - kernel_size (int): The kernel size of convolution when decoding - feature information from backbone. Default: 3. - init_cfg (dict | list[dict] | None): Initialization config dict. - Default: dict( - type='Constant', val=1.0, bias=0, layer='LayerNorm'). - """ - - def __init__(self, - norm_layer=dict(type='LN', eps=1e-6, requires_grad=True), - num_convs=1, - up_scale=4, - kernel_size=3, - init_cfg=[ - dict(type='Constant', val=1.0, bias=0, layer='LayerNorm'), - dict( - type='Normal', - std=0.01, - override=dict(name='conv_seg')) - ], - **kwargs): - - assert kernel_size in [1, 3], 'kernel_size must be 1 or 3.' 
- - super(SETRUPHead, self).__init__(init_cfg=init_cfg, **kwargs) - - assert isinstance(self.in_channels, int) - - _, self.norm = build_norm_layer(norm_layer, self.in_channels) - - self.up_convs = nn.ModuleList() - in_channels = self.in_channels - out_channels = self.channels - for _ in range(num_convs): - self.up_convs.append( - nn.Sequential( - ConvModule( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=1, - padding=int(kernel_size - 1) // 2, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - Upsample( - scale_factor=up_scale, - mode='bilinear', - align_corners=self.align_corners))) - in_channels = out_channels - - def forward(self, x): - x = self._transform_inputs(x) - - n, c, h, w = x.shape - x = x.reshape(n, c, h * w).transpose(2, 1).contiguous() - x = self.norm(x) - x = x.transpose(1, 2).reshape(n, c, h, w).contiguous() - - for up_conv in self.up_convs: - x = up_conv(x) - out = self.cls_seg(x) - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/stdc_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/stdc_head.py deleted file mode 100644 index 71600163..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/stdc_head.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class STDCHead(FCNHead): - """This head is the implementation of `Rethinking BiSeNet For Real-time - Semantic Segmentation `_. - - Args: - boundary_threshold (float): The threshold of calculating boundary. - Default: 0.1. - """ - - def __init__(self, boundary_threshold=0.1, **kwargs): - super(STDCHead, self).__init__(**kwargs) - self.boundary_threshold = boundary_threshold - # Using register buffer to make laplacian kernel on the same - # device of `seg_label`. - self.register_buffer( - 'laplacian_kernel', - torch.tensor([-1, -1, -1, -1, 8, -1, -1, -1, -1], - dtype=torch.float32, - requires_grad=False).reshape((1, 1, 3, 3))) - self.fusion_kernel = torch.nn.Parameter( - torch.tensor([[6. / 10], [3. / 10], [1. / 10]], - dtype=torch.float32).reshape(1, 3, 1, 1), - requires_grad=False) - - def losses(self, seg_logit, seg_label): - """Compute Detail Aggregation Loss.""" - # Note: The paper claims `fusion_kernel` is a trainable 1x1 conv - # parameters. However, it is a constant in original repo and other - # codebase because it would not be added into computation graph - # after threshold operation. 
- seg_label = seg_label.float() - boundary_targets = F.conv2d( - seg_label, self.laplacian_kernel, padding=1) - boundary_targets = boundary_targets.clamp(min=0) - boundary_targets[boundary_targets > self.boundary_threshold] = 1 - boundary_targets[boundary_targets <= self.boundary_threshold] = 0 - - boundary_targets_x2 = F.conv2d( - seg_label, self.laplacian_kernel, stride=2, padding=1) - boundary_targets_x2 = boundary_targets_x2.clamp(min=0) - - boundary_targets_x4 = F.conv2d( - seg_label, self.laplacian_kernel, stride=4, padding=1) - boundary_targets_x4 = boundary_targets_x4.clamp(min=0) - - boundary_targets_x4_up = F.interpolate( - boundary_targets_x4, boundary_targets.shape[2:], mode='nearest') - boundary_targets_x2_up = F.interpolate( - boundary_targets_x2, boundary_targets.shape[2:], mode='nearest') - - boundary_targets_x2_up[ - boundary_targets_x2_up > self.boundary_threshold] = 1 - boundary_targets_x2_up[ - boundary_targets_x2_up <= self.boundary_threshold] = 0 - - boundary_targets_x4_up[ - boundary_targets_x4_up > self.boundary_threshold] = 1 - boundary_targets_x4_up[ - boundary_targets_x4_up <= self.boundary_threshold] = 0 - - boudary_targets_pyramids = torch.stack( - (boundary_targets, boundary_targets_x2_up, boundary_targets_x4_up), - dim=1) - - boudary_targets_pyramids = boudary_targets_pyramids.squeeze(2) - boudary_targets_pyramid = F.conv2d(boudary_targets_pyramids, - self.fusion_kernel) - - boudary_targets_pyramid[ - boudary_targets_pyramid > self.boundary_threshold] = 1 - boudary_targets_pyramid[ - boudary_targets_pyramid <= self.boundary_threshold] = 0 - - seg_logit = F.interpolate( - seg_logit, - boundary_targets.shape[2:], - mode='bilinear', - align_corners=True) - loss = super(STDCHead, self).losses(seg_logit, - boudary_targets_pyramid.long()) - return loss diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/uper_head.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/uper_head.py deleted file mode 100644 index 57d80be1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/decode_heads/uper_head.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead -from .psp_head import PPM - - -@HEADS.register_module() -class UPerHead(BaseDecodeHead): - """Unified Perceptual Parsing for Scene Understanding. - - This head is the implementation of `UPerNet - `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module applied on the last feature. Default: (1, 2, 3, 6). 
- """ - - def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): - super(UPerHead, self).__init__( - input_transform='multiple_select', **kwargs) - # PSP Module - self.psp_modules = PPM( - pool_scales, - self.in_channels[-1], - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.bottleneck = ConvModule( - self.in_channels[-1] + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - # FPN Module - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - for in_channels in self.in_channels[:-1]: # skip the top layer - l_conv = ConvModule( - in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - inplace=False) - fpn_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - inplace=False) - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - self.fpn_bottleneck = ConvModule( - len(self.in_channels) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def psp_forward(self, inputs): - """Forward function of PSP module.""" - x = inputs[-1] - psp_outs = [x] - psp_outs.extend(self.psp_modules(x)) - psp_outs = torch.cat(psp_outs, dim=1) - output = self.bottleneck(psp_outs) - - return output - - def forward(self, inputs): - """Forward function.""" - - inputs = self._transform_inputs(inputs) - - # build laterals - laterals = [ - lateral_conv(inputs[i]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - laterals.append(self.psp_forward(inputs)) - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] = laterals[i - 1] + resize( - laterals[i], - size=prev_shape, - mode='bilinear', - align_corners=self.align_corners) - - # build outputs - fpn_outs = [ - self.fpn_convs[i](laterals[i]) - for i in range(used_backbone_levels - 1) - ] - # append psp feature - fpn_outs.append(laterals[-1]) - - for i in range(used_backbone_levels - 1, 0, -1): - fpn_outs[i] = resize( - fpn_outs[i], - size=fpn_outs[0].shape[2:], - mode='bilinear', - align_corners=self.align_corners) - fpn_outs = torch.cat(fpn_outs, dim=1) - output = self.fpn_bottleneck(fpn_outs) - output = self.cls_seg(output) - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/__init__.py deleted file mode 100644 index fbc5b2d1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .accuracy import Accuracy, accuracy -from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy, - cross_entropy, mask_cross_entropy) -from .dice_loss import DiceLoss -from .focal_loss import FocalLoss -from .lovasz_loss import LovaszLoss -from .utils import reduce_loss, weight_reduce_loss, weighted_loss - -__all__ = [ - 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy', - 'mask_cross_entropy', 'CrossEntropyLoss', 'reduce_loss', - 'weight_reduce_loss', 'weighted_loss', 'LovaszLoss', 'DiceLoss', - 'FocalLoss' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/accuracy.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/accuracy.py deleted file mode 100644 index f2cd16b7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/accuracy.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - - -def accuracy(pred, target, topk=1, thresh=None): - """Calculate accuracy according to the prediction and target. - - Args: - pred (torch.Tensor): The model prediction, shape (N, num_class, ...) - target (torch.Tensor): The target of each prediction, shape (N, , ...) - topk (int | tuple[int], optional): If the predictions in ``topk`` - matches the target, the predictions will be regarded as - correct ones. Defaults to 1. - thresh (float, optional): If not None, predictions with scores under - this threshold are considered incorrect. Default to None. - - Returns: - float | tuple[float]: If the input ``topk`` is a single integer, - the function will return a single float as accuracy. If - ``topk`` is a tuple containing multiple integers, the - function will return a tuple containing accuracies of - each ``topk`` number. - """ - assert isinstance(topk, (int, tuple)) - if isinstance(topk, int): - topk = (topk, ) - return_single = True - else: - return_single = False - - maxk = max(topk) - if pred.size(0) == 0: - accu = [pred.new_tensor(0.) for i in range(len(topk))] - return accu[0] if return_single else accu - assert pred.ndim == target.ndim + 1 - assert pred.size(0) == target.size(0) - assert maxk <= pred.size(1), \ - f'maxk {maxk} exceeds pred dimension {pred.size(1)}' - pred_value, pred_label = pred.topk(maxk, dim=1) - # transpose to shape (maxk, N, ...) - pred_label = pred_label.transpose(0, 1) - correct = pred_label.eq(target.unsqueeze(0).expand_as(pred_label)) - if thresh is not None: - # Only prediction values larger than thresh are counted as correct - correct = correct & (pred_value > thresh).t() - res = [] - for k in topk: - correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True) - res.append(correct_k.mul_(100.0 / target.numel())) - return res[0] if return_single else res - - -class Accuracy(nn.Module): - """Accuracy calculation module.""" - - def __init__(self, topk=(1, ), thresh=None): - """Module to calculate the accuracy. - - Args: - topk (tuple, optional): The criterion used to calculate the - accuracy. Defaults to (1,). - thresh (float, optional): If not None, predictions with scores - under this threshold are considered incorrect. Default to None. - """ - super().__init__() - self.topk = topk - self.thresh = thresh - - def forward(self, pred, target): - """Forward function to calculate accuracy. - - Args: - pred (torch.Tensor): Prediction of models. - target (torch.Tensor): Target for each prediction. - - Returns: - tuple[float]: The accuracies under different topk criterions. 
- """ - return accuracy(pred, target, self.topk, self.thresh) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/cross_entropy_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/cross_entropy_loss.py deleted file mode 100644 index ee489a88..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/cross_entropy_loss.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import get_class_weight, weight_reduce_loss - - -def cross_entropy(pred, - label, - weight=None, - class_weight=None, - reduction='mean', - avg_factor=None, - ignore_index=-100): - """The wrapper function for :func:`F.cross_entropy`""" - # class_weight is a manual rescaling weight given to each class. - # If given, has to be a Tensor of size C element-wise losses - loss = F.cross_entropy( - pred, - label, - weight=class_weight, - reduction='none', - ignore_index=ignore_index) - - # apply weights and do the reduction - if weight is not None: - weight = weight.float() - loss = weight_reduce_loss( - loss, weight=weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def _expand_onehot_labels(labels, label_weights, target_shape, ignore_index): - """Expand onehot labels to match the size of prediction.""" - bin_labels = labels.new_zeros(target_shape) - valid_mask = (labels >= 0) & (labels != ignore_index) - inds = torch.nonzero(valid_mask, as_tuple=True) - - if inds[0].numel() > 0: - if labels.dim() == 3: - bin_labels[inds[0], labels[valid_mask], inds[1], inds[2]] = 1 - else: - bin_labels[inds[0], labels[valid_mask]] = 1 - - valid_mask = valid_mask.unsqueeze(1).expand(target_shape).float() - if label_weights is None: - bin_label_weights = valid_mask - else: - bin_label_weights = label_weights.unsqueeze(1).expand(target_shape) - bin_label_weights *= valid_mask - - return bin_labels, bin_label_weights - - -def binary_cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=255): - """Calculate the binary CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 1). - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (int | None): The label index to be ignored. 
Default: 255 - - Returns: - torch.Tensor: The calculated loss - """ - if pred.dim() != label.dim(): - assert (pred.dim() == 2 and label.dim() == 1) or ( - pred.dim() == 4 and label.dim() == 3), \ - 'Only pred shape [N, C], label shape [N] or pred shape [N, C, ' \ - 'H, W], label shape [N, H, W] are supported' - label, weight = _expand_onehot_labels(label, weight, pred.shape, - ignore_index) - - # weighted element-wise losses - if weight is not None: - weight = weight.float() - loss = F.binary_cross_entropy_with_logits( - pred, label.float(), pos_weight=class_weight, reduction='none') - # do the reduction for the weighted loss - loss = weight_reduce_loss( - loss, weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def mask_cross_entropy(pred, - target, - label, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=None): - """Calculate the CrossEntropy loss for masks. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - target (torch.Tensor): The learning label of the prediction. - label (torch.Tensor): ``label`` indicates the class label of the mask' - corresponding object. This will be used to select the mask in the - of the class which the object belongs to when the mask prediction - if not class-agnostic. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (None): Placeholder, to be consistent with other loss. - Default: None. - - Returns: - torch.Tensor: The calculated loss - """ - assert ignore_index is None, 'BCE loss does not support ignore_index' - # TODO: handle these two reserved arguments - assert reduction == 'mean' and avg_factor is None - num_rois = pred.size()[0] - inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device) - pred_slice = pred[inds, label].squeeze(1) - return F.binary_cross_entropy_with_logits( - pred_slice, target, weight=class_weight, reduction='mean')[None] - - -@LOSSES.register_module() -class CrossEntropyLoss(nn.Module): - """CrossEntropyLoss. - - Args: - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Defaults to False. - use_mask (bool, optional): Whether to use mask cross entropy loss. - Defaults to False. - reduction (str, optional): . Defaults to 'mean'. - Options are "none", "mean" and "sum". - class_weight (list[float] | str, optional): Weight of each class. If in - str format, read them from a file. Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. - loss_name (str, optional): Name of the loss item. If you want this loss - item to be included into the backward graph, `loss_` must be the - prefix of the name. Defaults to 'loss_ce'. 
- """ - - def __init__(self, - use_sigmoid=False, - use_mask=False, - reduction='mean', - class_weight=None, - loss_weight=1.0, - loss_name='loss_ce'): - super(CrossEntropyLoss, self).__init__() - assert (use_sigmoid is False) or (use_mask is False) - self.use_sigmoid = use_sigmoid - self.use_mask = use_mask - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = get_class_weight(class_weight) - - if self.use_sigmoid: - self.cls_criterion = binary_cross_entropy - elif self.use_mask: - self.cls_criterion = mask_cross_entropy - else: - self.cls_criterion = cross_entropy - self._loss_name = loss_name - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function.""" - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = cls_score.new_tensor(self.class_weight) - else: - class_weight = None - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - weight, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_cls - - @property - def loss_name(self): - """Loss Name. - - This function must be implemented and will return the name of this - loss function. This name will be used to combine different loss items - by simple sum operation. In addition, if you want this loss item to be - included into the backward graph, `loss_` must be the prefix of the - name. - Returns: - str: The name of this loss item. - """ - return self._loss_name diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/dice_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/dice_loss.py deleted file mode 100644 index 79a3abfc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/dice_loss.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Modified from https://github.com/LikeLy-Journey/SegmenTron/blob/master/ -segmentron/solver/loss.py (Apache-2.0 License)""" -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import get_class_weight, weighted_loss - - -@weighted_loss -def dice_loss(pred, - target, - valid_mask, - smooth=1, - exponent=2, - class_weight=None, - ignore_index=255): - assert pred.shape[0] == target.shape[0] - total_loss = 0 - num_classes = pred.shape[1] - for i in range(num_classes): - if i != ignore_index: - dice_loss = binary_dice_loss( - pred[:, i], - target[..., i], - valid_mask=valid_mask, - smooth=smooth, - exponent=exponent) - if class_weight is not None: - dice_loss *= class_weight[i] - total_loss += dice_loss - return total_loss / num_classes - - -@weighted_loss -def binary_dice_loss(pred, target, valid_mask, smooth=1, exponent=2, **kwards): - assert pred.shape[0] == target.shape[0] - pred = pred.reshape(pred.shape[0], -1) - target = target.reshape(target.shape[0], -1) - valid_mask = valid_mask.reshape(valid_mask.shape[0], -1) - - num = torch.sum(torch.mul(pred, target) * valid_mask, dim=1) * 2 + smooth - den = torch.sum(pred.pow(exponent) + target.pow(exponent), dim=1) + smooth - - return 1 - num / den - - -@LOSSES.register_module() -class DiceLoss(nn.Module): - """DiceLoss. - - This loss is proposed in `V-Net: Fully Convolutional Neural Networks for - Volumetric Medical Image Segmentation `_. 
- - Args: - smooth (float): A float number to smooth loss, and avoid NaN error. - Default: 1 - exponent (float): An float number to calculate denominator - value: \\sum{x^exponent} + \\sum{y^exponent}. Default: 2. - reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - class_weight (list[float] | str, optional): Weight of each class. If in - str format, read them from a file. Defaults to None. - loss_weight (float, optional): Weight of the loss. Default to 1.0. - ignore_index (int | None): The label index to be ignored. Default: 255. - loss_name (str, optional): Name of the loss item. If you want this loss - item to be included into the backward graph, `loss_` must be the - prefix of the name. Defaults to 'loss_dice'. - """ - - def __init__(self, - smooth=1, - exponent=2, - reduction='mean', - class_weight=None, - loss_weight=1.0, - ignore_index=255, - loss_name='loss_dice', - **kwards): - super(DiceLoss, self).__init__() - self.smooth = smooth - self.exponent = exponent - self.reduction = reduction - self.class_weight = get_class_weight(class_weight) - self.loss_weight = loss_weight - self.ignore_index = ignore_index - self._loss_name = loss_name - - def forward(self, - pred, - target, - avg_factor=None, - reduction_override=None, - **kwards): - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = pred.new_tensor(self.class_weight) - else: - class_weight = None - - pred = F.softmax(pred, dim=1) - num_classes = pred.shape[1] - one_hot_target = F.one_hot( - torch.clamp(target.long(), 0, num_classes - 1), - num_classes=num_classes) - valid_mask = (target != self.ignore_index).long() - - loss = self.loss_weight * dice_loss( - pred, - one_hot_target, - valid_mask=valid_mask, - reduction=reduction, - avg_factor=avg_factor, - smooth=self.smooth, - exponent=self.exponent, - class_weight=class_weight, - ignore_index=self.ignore_index) - return loss - - @property - def loss_name(self): - """Loss Name. - - This function must be implemented and will return the name of this - loss function. This name will be used to combine different loss items - by simple sum operation. In addition, if you want this loss item to be - included into the backward graph, `loss_` must be the prefix of the - name. - Returns: - str: The name of this loss item. - """ - return self._loss_name diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/focal_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/focal_loss.py deleted file mode 100644 index af1c711d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/focal_loss.py +++ /dev/null @@ -1,327 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Modified from https://github.com/open-mmlab/mmdetection -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.ops import sigmoid_focal_loss as _sigmoid_focal_loss - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -# This method is used when cuda is not available -def py_sigmoid_focal_loss(pred, - target, - one_hot_target=None, - weight=None, - gamma=2.0, - alpha=0.5, - class_weight=None, - valid_mask=None, - reduction='mean', - avg_factor=None): - """PyTorch version of `Focal Loss `_. 
- - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the - number of classes - target (torch.Tensor): The learning label of the prediction with - shape (N, C) - one_hot_target (None): Placeholder. It should be None. - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float | list[float], optional): A balanced form for Focal Loss. - Defaults to 0.5. - class_weight (list[float], optional): Weight of each class. - Defaults to None. - valid_mask (torch.Tensor, optional): A mask uses 1 to mark the valid - samples and uses 0 to mark the ignored samples. Default: None. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - if isinstance(alpha, list): - alpha = pred.new_tensor(alpha) - pred_sigmoid = pred.sigmoid() - target = target.type_as(pred) - one_minus_pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target) - focal_weight = (alpha * target + (1 - alpha) * - (1 - target)) * one_minus_pt.pow(gamma) - - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') * focal_weight - final_weight = torch.ones(1, pred.size(1)).type_as(loss) - if weight is not None: - if weight.shape != loss.shape and weight.size(0) == loss.size(0): - # For most cases, weight is of shape (N, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - assert weight.dim() == loss.dim() - final_weight = final_weight * weight - if class_weight is not None: - final_weight = final_weight * pred.new_tensor(class_weight) - if valid_mask is not None: - final_weight = final_weight * valid_mask - loss = weight_reduce_loss(loss, final_weight, reduction, avg_factor) - return loss - - -def sigmoid_focal_loss(pred, - target, - one_hot_target, - weight=None, - gamma=2.0, - alpha=0.5, - class_weight=None, - valid_mask=None, - reduction='mean', - avg_factor=None): - r"""A warpper of cuda version `Focal Loss - `_. - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - target (torch.Tensor): The learning label of the prediction. It's shape - should be (N, ) - one_hot_target (torch.Tensor): The learning label with shape (N, C) - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float | list[float], optional): A balanced form for Focal Loss. - Defaults to 0.5. - class_weight (list[float], optional): Weight of each class. - Defaults to None. - valid_mask (torch.Tensor, optional): A mask uses 1 to mark the valid - samples and uses 0 to mark the ignored samples. Default: None. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - # Function.apply does not accept keyword arguments, so the decorator - # "weighted_loss" is not applicable - final_weight = torch.ones(1, pred.size(1)).type_as(pred) - if isinstance(alpha, list): - # _sigmoid_focal_loss doesn't accept alpha of list type. Therefore, if - # a list is given, we set the input alpha as 0.5. This means setting - # equal weight for foreground class and background class. 
By - # multiplying the loss by 2, the effect of setting alpha as 0.5 is - # undone. The alpha of type list is used to regulate the loss in the - # post-processing process. - loss = _sigmoid_focal_loss(pred.contiguous(), target.contiguous(), - gamma, 0.5, None, 'none') * 2 - alpha = pred.new_tensor(alpha) - final_weight = final_weight * ( - alpha * one_hot_target + (1 - alpha) * (1 - one_hot_target)) - else: - loss = _sigmoid_focal_loss(pred.contiguous(), target.contiguous(), - gamma, alpha, None, 'none') - if weight is not None: - if weight.shape != loss.shape and weight.size(0) == loss.size(0): - # For most cases, weight is of shape (N, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - assert weight.dim() == loss.dim() - final_weight = final_weight * weight - if class_weight is not None: - final_weight = final_weight * pred.new_tensor(class_weight) - if valid_mask is not None: - final_weight = final_weight * valid_mask - loss = weight_reduce_loss(loss, final_weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class FocalLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - gamma=2.0, - alpha=0.5, - reduction='mean', - class_weight=None, - loss_weight=1.0, - loss_name='loss_focal'): - """`Focal Loss `_ - Args: - use_sigmoid (bool, optional): Whether to the prediction is - used for sigmoid or softmax. Defaults to True. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float | list[float], optional): A balanced form for Focal - Loss. Defaults to 0.5. When a list is provided, the length - of the list should be equal to the number of classes. - Please be careful that this parameter is not the - class-wise weight but the weight of a binary classification - problem. This binary classification problem regards the - pixels which belong to one class as the foreground - and the other pixels as the background, each element in - the list is the weight of the corresponding foreground class. - The value of alpha or each element of alpha should be a float - in the interval [0, 1]. If you want to specify the class-wise - weight, please use `class_weight` parameter. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - class_weight (list[float], optional): Weight of each class. - Defaults to None. - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - loss_name (str, optional): Name of the loss item. If you want this - loss item to be included into the backward graph, `loss_` must - be the prefix of the name. Defaults to 'loss_focal'. - """ - super(FocalLoss, self).__init__() - assert use_sigmoid is True, \ - 'AssertionError: Only sigmoid focal loss supported now.' 
- assert reduction in ('none', 'mean', 'sum'), \ - "AssertionError: reduction should be 'none', 'mean' or " \ - "'sum'" - assert isinstance(alpha, (float, list)), \ - 'AssertionError: alpha should be of type float' - assert isinstance(gamma, float), \ - 'AssertionError: gamma should be of type float' - assert isinstance(loss_weight, float), \ - 'AssertionError: loss_weight should be of type float' - assert isinstance(loss_name, str), \ - 'AssertionError: loss_name should be of type str' - assert isinstance(class_weight, list) or class_weight is None, \ - 'AssertionError: class_weight must be None or of type list' - self.use_sigmoid = use_sigmoid - self.gamma = gamma - self.alpha = alpha - self.reduction = reduction - self.class_weight = class_weight - self.loss_weight = loss_weight - self._loss_name = loss_name - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - ignore_index=255, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction with shape - (N, C) where C = number of classes, or - (N, C, d_1, d_2, ..., d_K) with K≥1 in the - case of K-dimensional loss. - target (torch.Tensor): The ground truth. If containing class - indices, shape (N) where each value is 0≤targets[i]≤C−1, - or (N, d_1, d_2, ..., d_K) with K≥1 in the case of - K-dimensional loss. If containing class probabilities, - same shape as the input. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to - average the loss. Defaults to None. - reduction_override (str, optional): The reduction method used - to override the original reduction method of the loss. - Options are "none", "mean" and "sum". - ignore_index (int, optional): The label index to be ignored. - Default: 255 - Returns: - torch.Tensor: The calculated loss - """ - assert isinstance(ignore_index, int), \ - 'ignore_index must be of type int' - assert reduction_override in (None, 'none', 'mean', 'sum'), \ - "AssertionError: reduction should be 'none', 'mean' or " \ - "'sum'" - assert pred.shape == target.shape or \ - (pred.size(0) == target.size(0) and - pred.shape[2:] == target.shape[1:]), \ - "The shape of pred doesn't match the shape of target" - - original_shape = pred.shape - - # [B, C, d_1, d_2, ..., d_k] -> [C, B, d_1, d_2, ..., d_k] - pred = pred.transpose(0, 1) - # [C, B, d_1, d_2, ..., d_k] -> [C, N] - pred = pred.reshape(pred.size(0), -1) - # [C, N] -> [N, C] - pred = pred.transpose(0, 1).contiguous() - - if original_shape == target.shape: - # target with shape [B, C, d_1, d_2, ...] - # transform it's shape into [N, C] - # [B, C, d_1, d_2, ...] -> [C, B, d_1, d_2, ..., d_k] - target = target.transpose(0, 1) - # [C, B, d_1, d_2, ..., d_k] -> [C, N] - target = target.reshape(target.size(0), -1) - # [C, N] -> [N, C] - target = target.transpose(0, 1).contiguous() - else: - # target with shape [B, d_1, d_2, ...] 
- # transform it's shape into [N, ] - target = target.view(-1).contiguous() - valid_mask = (target != ignore_index).view(-1, 1) - # avoid raising error when using F.one_hot() - target = torch.where(target == ignore_index, target.new_tensor(0), - target) - - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - num_classes = pred.size(1) - if torch.cuda.is_available() and pred.is_cuda: - if target.dim() == 1: - one_hot_target = F.one_hot(target, num_classes=num_classes) - else: - one_hot_target = target - target = target.argmax(dim=1) - valid_mask = (target != ignore_index).view(-1, 1) - calculate_loss_func = sigmoid_focal_loss - else: - one_hot_target = None - if target.dim() == 1: - target = F.one_hot(target, num_classes=num_classes) - else: - valid_mask = (target.argmax(dim=1) != ignore_index).view( - -1, 1) - calculate_loss_func = py_sigmoid_focal_loss - - loss_cls = self.loss_weight * calculate_loss_func( - pred, - target, - one_hot_target, - weight, - gamma=self.gamma, - alpha=self.alpha, - class_weight=self.class_weight, - valid_mask=valid_mask, - reduction=reduction, - avg_factor=avg_factor) - - if reduction == 'none': - # [N, C] -> [C, N] - loss_cls = loss_cls.transpose(0, 1) - # [C, N] -> [C, B, d1, d2, ...] - # original_shape: [B, C, d1, d2, ...] - loss_cls = loss_cls.reshape(original_shape[1], - original_shape[0], - *original_shape[2:]) - # [C, B, d1, d2, ...] -> [B, C, d1, d2, ...] - loss_cls = loss_cls.transpose(0, 1).contiguous() - else: - raise NotImplementedError - return loss_cls - - @property - def loss_name(self): - """Loss Name. - - This function must be implemented and will return the name of this - loss function. This name will be used to combine different loss items - by simple sum operation. In addition, if you want this loss item to be - included into the backward graph, `loss_` must be the prefix of the - name. - Returns: - str: The name of this loss item. - """ - return self._loss_name diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/lovasz_loss.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/lovasz_loss.py deleted file mode 100644 index 2bb0fad3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/lovasz_loss.py +++ /dev/null @@ -1,323 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Modified from https://github.com/bermanmaxim/LovaszSoftmax/blob/master/pytor -ch/lovasz_losses.py Lovasz-Softmax and Jaccard hinge loss in PyTorch Maxim -Berman 2018 ESAT-PSI KU Leuven (MIT License)""" - -import mmcv -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import get_class_weight, weight_reduce_loss - - -def lovasz_grad(gt_sorted): - """Computes gradient of the Lovasz extension w.r.t sorted errors. - - See Alg. 1 in paper. - """ - p = len(gt_sorted) - gts = gt_sorted.sum() - intersection = gts - gt_sorted.float().cumsum(0) - union = gts + (1 - gt_sorted).float().cumsum(0) - jaccard = 1. 
- intersection / union - if p > 1: # cover 1-pixel case - jaccard[1:p] = jaccard[1:p] - jaccard[0:-1] - return jaccard - - -def flatten_binary_logits(logits, labels, ignore_index=None): - """Flattens predictions in the batch (binary case) Remove labels equal to - 'ignore_index'.""" - logits = logits.view(-1) - labels = labels.view(-1) - if ignore_index is None: - return logits, labels - valid = (labels != ignore_index) - vlogits = logits[valid] - vlabels = labels[valid] - return vlogits, vlabels - - -def flatten_probs(probs, labels, ignore_index=None): - """Flattens predictions in the batch.""" - if probs.dim() == 3: - # assumes output of a sigmoid layer - B, H, W = probs.size() - probs = probs.view(B, 1, H, W) - B, C, H, W = probs.size() - probs = probs.permute(0, 2, 3, 1).contiguous().view(-1, C) # B*H*W, C=P,C - labels = labels.view(-1) - if ignore_index is None: - return probs, labels - valid = (labels != ignore_index) - vprobs = probs[valid.nonzero().squeeze()] - vlabels = labels[valid] - return vprobs, vlabels - - -def lovasz_hinge_flat(logits, labels): - """Binary Lovasz hinge loss. - - Args: - logits (torch.Tensor): [P], logits at each prediction - (between -infty and +infty). - labels (torch.Tensor): [P], binary ground truth labels (0 or 1). - - Returns: - torch.Tensor: The calculated loss. - """ - if len(labels) == 0: - # only void pixels, the gradients should be 0 - return logits.sum() * 0. - signs = 2. * labels.float() - 1. - errors = (1. - logits * signs) - errors_sorted, perm = torch.sort(errors, dim=0, descending=True) - perm = perm.data - gt_sorted = labels[perm] - grad = lovasz_grad(gt_sorted) - loss = torch.dot(F.relu(errors_sorted), grad) - return loss - - -def lovasz_hinge(logits, - labels, - classes='present', - per_image=False, - class_weight=None, - reduction='mean', - avg_factor=None, - ignore_index=255): - """Binary Lovasz hinge loss. - - Args: - logits (torch.Tensor): [B, H, W], logits at each pixel - (between -infty and +infty). - labels (torch.Tensor): [B, H, W], binary ground truth masks (0 or 1). - classes (str | list[int], optional): Placeholder, to be consistent with - other loss. Default: None. - per_image (bool, optional): If per_image is True, compute the loss per - image instead of per batch. Default: False. - class_weight (list[float], optional): Placeholder, to be consistent - with other loss. Default: None. - reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. This parameter only works when per_image is True. - Default: None. - ignore_index (int | None): The label index to be ignored. Default: 255. - - Returns: - torch.Tensor: The calculated loss. - """ - if per_image: - loss = [ - lovasz_hinge_flat(*flatten_binary_logits( - logit.unsqueeze(0), label.unsqueeze(0), ignore_index)) - for logit, label in zip(logits, labels) - ] - loss = weight_reduce_loss( - torch.stack(loss), None, reduction, avg_factor) - else: - loss = lovasz_hinge_flat( - *flatten_binary_logits(logits, labels, ignore_index)) - return loss - - -def lovasz_softmax_flat(probs, labels, classes='present', class_weight=None): - """Multi-class Lovasz-Softmax loss. - - Args: - probs (torch.Tensor): [P, C], class probabilities at each prediction - (between 0 and 1). - labels (torch.Tensor): [P], ground truth labels (between 0 and C - 1). 
- classes (str | list[int], optional): Classes chosen to calculate loss. - 'all' for all classes, 'present' for classes present in labels, or - a list of classes to average. Default: 'present'. - class_weight (list[float], optional): The weight for each class. - Default: None. - - Returns: - torch.Tensor: The calculated loss. - """ - if probs.numel() == 0: - # only void pixels, the gradients should be 0 - return probs * 0. - C = probs.size(1) - losses = [] - class_to_sum = list(range(C)) if classes in ['all', 'present'] else classes - for c in class_to_sum: - fg = (labels == c).float() # foreground for class c - if (classes == 'present' and fg.sum() == 0): - continue - if C == 1: - if len(classes) > 1: - raise ValueError('Sigmoid output possible only with 1 class') - class_pred = probs[:, 0] - else: - class_pred = probs[:, c] - errors = (fg - class_pred).abs() - errors_sorted, perm = torch.sort(errors, 0, descending=True) - perm = perm.data - fg_sorted = fg[perm] - loss = torch.dot(errors_sorted, lovasz_grad(fg_sorted)) - if class_weight is not None: - loss *= class_weight[c] - losses.append(loss) - return torch.stack(losses).mean() - - -def lovasz_softmax(probs, - labels, - classes='present', - per_image=False, - class_weight=None, - reduction='mean', - avg_factor=None, - ignore_index=255): - """Multi-class Lovasz-Softmax loss. - - Args: - probs (torch.Tensor): [B, C, H, W], class probabilities at each - prediction (between 0 and 1). - labels (torch.Tensor): [B, H, W], ground truth labels (between 0 and - C - 1). - classes (str | list[int], optional): Classes chosen to calculate loss. - 'all' for all classes, 'present' for classes present in labels, or - a list of classes to average. Default: 'present'. - per_image (bool, optional): If per_image is True, compute the loss per - image instead of per batch. Default: False. - class_weight (list[float], optional): The weight for each class. - Default: None. - reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. This parameter only works when per_image is True. - Default: None. - ignore_index (int | None): The label index to be ignored. Default: 255. - - Returns: - torch.Tensor: The calculated loss. - """ - - if per_image: - loss = [ - lovasz_softmax_flat( - *flatten_probs( - prob.unsqueeze(0), label.unsqueeze(0), ignore_index), - classes=classes, - class_weight=class_weight) - for prob, label in zip(probs, labels) - ] - loss = weight_reduce_loss( - torch.stack(loss), None, reduction, avg_factor) - else: - loss = lovasz_softmax_flat( - *flatten_probs(probs, labels, ignore_index), - classes=classes, - class_weight=class_weight) - return loss - - -@LOSSES.register_module() -class LovaszLoss(nn.Module): - """LovaszLoss. - - This loss is proposed in `The Lovasz-Softmax loss: A tractable surrogate - for the optimization of the intersection-over-union measure in neural - networks `_. - - Args: - loss_type (str, optional): Binary or multi-class loss. - Default: 'multi_class'. Options are "binary" and "multi_class". - classes (str | list[int], optional): Classes chosen to calculate loss. - 'all' for all classes, 'present' for classes present in labels, or - a list of classes to average. Default: 'present'. - per_image (bool, optional): If per_image is True, compute the loss per - image instead of per batch. Default: False. 
- reduction (str, optional): The method used to reduce the loss. Options - are "none", "mean" and "sum". This parameter only works when - per_image is True. Default: 'mean'. - class_weight (list[float] | str, optional): Weight of each class. If in - str format, read them from a file. Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. - loss_name (str, optional): Name of the loss item. If you want this loss - item to be included into the backward graph, `loss_` must be the - prefix of the name. Defaults to 'loss_lovasz'. - """ - - def __init__(self, - loss_type='multi_class', - classes='present', - per_image=False, - reduction='mean', - class_weight=None, - loss_weight=1.0, - loss_name='loss_lovasz'): - super(LovaszLoss, self).__init__() - assert loss_type in ('binary', 'multi_class'), "loss_type should be \ - 'binary' or 'multi_class'." - - if loss_type == 'binary': - self.cls_criterion = lovasz_hinge - else: - self.cls_criterion = lovasz_softmax - assert classes in ('all', 'present') or mmcv.is_list_of(classes, int) - if not per_image: - assert reduction == 'none', "reduction should be 'none' when \ - per_image is False." - - self.classes = classes - self.per_image = per_image - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = get_class_weight(class_weight) - self._loss_name = loss_name - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function.""" - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = cls_score.new_tensor(self.class_weight) - else: - class_weight = None - - # if multi-class loss, transform logits to probs - if self.cls_criterion == lovasz_softmax: - cls_score = F.softmax(cls_score, dim=1) - - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - self.classes, - self.per_image, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_cls - - @property - def loss_name(self): - """Loss Name. - - This function must be implemented and will return the name of this - loss function. This name will be used to combine different loss items - by simple sum operation. In addition, if you want this loss item to be - included into the backward graph, `loss_` must be the prefix of the - name. - Returns: - str: The name of this loss item. - """ - return self._loss_name diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/utils.py deleted file mode 100644 index c37875fa..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/losses/utils.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import mmcv -import numpy as np -import torch.nn.functional as F - - -def get_class_weight(class_weight): - """Get class weight for loss function. - - Args: - class_weight (list[float] | str | None): If class_weight is a str, - take it as a file name and read from it. - """ - if isinstance(class_weight, str): - # take it as a file path - if class_weight.endswith('.npy'): - class_weight = np.load(class_weight) - else: - # pkl, json or yaml - class_weight = mmcv.load(class_weight) - - return class_weight - - -def reduce_loss(loss, reduction): - """Reduce loss as specified. 
- - Args: - loss (Tensor): Elementwise loss tensor. - reduction (str): Options are "none", "mean" and "sum". - - Return: - Tensor: Reduced loss tensor. - """ - reduction_enum = F._Reduction.get_enum(reduction) - # none: 0, elementwise_mean:1, sum: 2 - if reduction_enum == 0: - return loss - elif reduction_enum == 1: - return loss.mean() - elif reduction_enum == 2: - return loss.sum() - - -def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None): - """Apply element-wise weight and reduce loss. - - Args: - loss (Tensor): Element-wise loss. - weight (Tensor): Element-wise weights. - reduction (str): Same as built-in losses of PyTorch. - avg_factor (float): Average factor when computing the mean of losses. - - Returns: - Tensor: Processed loss values. - """ - # if weight is specified, apply element-wise weight - if weight is not None: - assert weight.dim() == loss.dim() - if weight.dim() > 1: - assert weight.size(1) == 1 or weight.size(1) == loss.size(1) - loss = loss * weight - - # if avg_factor is not specified, just reduce the loss - if avg_factor is None: - loss = reduce_loss(loss, reduction) - else: - # if reduction is mean, then average the loss by avg_factor - if reduction == 'mean': - loss = loss.sum() / avg_factor - # if reduction is 'none', then do nothing, otherwise raise an error - elif reduction != 'none': - raise ValueError('avg_factor can not be used with reduction="sum"') - return loss - - -def weighted_loss(loss_func): - """Create a weighted version of a given loss function. - - To use this decorator, the loss function must have the signature like - `loss_func(pred, target, **kwargs)`. The function only needs to compute - element-wise loss without any reduction. This decorator will add weight - and reduction arguments to the function. The decorated function will have - the signature like `loss_func(pred, target, weight=None, reduction='mean', - avg_factor=None, **kwargs)`. - - :Example: - - >>> import torch - >>> @weighted_loss - >>> def l1_loss(pred, target): - >>> return (pred - target).abs() - - >>> pred = torch.Tensor([0, 2, 3]) - >>> target = torch.Tensor([1, 1, 1]) - >>> weight = torch.Tensor([1, 0, 1]) - - >>> l1_loss(pred, target) - tensor(1.3333) - >>> l1_loss(pred, target, weight) - tensor(1.) - >>> l1_loss(pred, target, reduction='none') - tensor([1., 1., 2.]) - >>> l1_loss(pred, target, weight, avg_factor=2) - tensor(1.5000) - """ - - @functools.wraps(loss_func) - def wrapper(pred, - target, - weight=None, - reduction='mean', - avg_factor=None, - **kwargs): - # get element-wise loss - loss = loss_func(pred, target, **kwargs) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - return wrapper diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/__init__.py deleted file mode 100644 index aba73f16..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .fpn import FPN -from .ic_neck import ICNeck -from .jpu import JPU -from .mla_neck import MLANeck -from .multilevel_neck import MultiLevelNeck - -__all__ = ['FPN', 'MultiLevelNeck', 'MLANeck', 'ICNeck', 'JPU'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/fpn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/fpn.py deleted file mode 100644 index 975a48e8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/fpn.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule, auto_fp16 - -from mmseg.ops import resize -from ..builder import NECKS - - -@NECKS.register_module() -class FPN(BaseModule): - """Feature Pyramid Network. - - This neck is the implementation of `Feature Pyramid Networks for Object - Detection `_. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool | str): If bool, it decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - If str, it specifies the source feature map of the extra convs. - Only the following options are allowed - - - 'on_input': Last feat map of neck inputs (i.e. backbone feature). - - 'on_lateral': Last feature map after lateral convs. - - 'on_output': The last output feature map after fpn convs. - extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs - on the original feature from the backbone. If True, - it is equivalent to `add_extra_convs='on_input'`. If False, it is - equivalent to set `add_extra_convs='on_output'`. Default to True. - relu_before_extra_convs (bool): Whether to apply relu before the extra - conv. Default: False. - no_norm_on_lateral (bool): Whether to apply norm on lateral. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (str): Config dict for activation layer in ConvModule. - Default: None. - upsample_cfg (dict): Config dict for interpolate layer. - Default: `dict(mode='nearest')` - init_cfg (dict or list[dict], optional): Initialization config dict. - - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... for c, s in zip(in_channels, scales)] - >>> self = FPN(in_channels, 11, len(in_channels)).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... 
print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - add_extra_convs=False, - extra_convs_on_inputs=False, - relu_before_extra_convs=False, - no_norm_on_lateral=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - upsample_cfg=dict(mode='nearest'), - init_cfg=dict( - type='Xavier', layer='Conv2d', distribution='uniform')): - super(FPN, self).__init__(init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.relu_before_extra_convs = relu_before_extra_convs - self.no_norm_on_lateral = no_norm_on_lateral - self.fp16_enabled = False - self.upsample_cfg = upsample_cfg.copy() - - if end_level == -1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level < inputs, no extra level is allowed - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - assert num_outs == end_level - start_level - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - assert isinstance(add_extra_convs, (str, bool)) - if isinstance(add_extra_convs, str): - # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output' - assert add_extra_convs in ('on_input', 'on_lateral', 'on_output') - elif add_extra_convs: # True - if extra_convs_on_inputs: - # For compatibility with previous release - # TODO: deprecate `extra_convs_on_inputs` - self.add_extra_convs = 'on_input' - else: - self.add_extra_convs = 'on_output' - - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg if not self.no_norm_on_lateral else None, - act_cfg=act_cfg, - inplace=False) - fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - # add extra conv layers (e.g., RetinaNet) - extra_levels = num_outs - self.backbone_end_level + self.start_level - if self.add_extra_convs and extra_levels >= 1: - for i in range(extra_levels): - if i == 0 and self.add_extra_convs == 'on_input': - in_channels = self.in_channels[self.backbone_end_level - 1] - else: - in_channels = out_channels - extra_fpn_conv = ConvModule( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - self.fpn_convs.append(extra_fpn_conv) - - @auto_fp16() - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - # In some cases, fixing `scale factor` (e.g. 2) is preferred, but - # it cannot co-exist with `size` in `F.interpolate`. 
- if 'scale_factor' in self.upsample_cfg: - laterals[i - 1] = laterals[i - 1] + resize( - laterals[i], **self.upsample_cfg) - else: - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] = laterals[i - 1] + resize( - laterals[i], size=prev_shape, **self.upsample_cfg) - - # build outputs - # part 1: from original levels - outs = [ - self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels) - ] - # part 2: add extra levels - if self.num_outs > len(outs): - # use max pool to get more levels on top of outputs - # (e.g., Faster R-CNN, Mask R-CNN) - if not self.add_extra_convs: - for i in range(self.num_outs - used_backbone_levels): - outs.append(F.max_pool2d(outs[-1], 1, stride=2)) - # add conv layers on top of original feature maps (RetinaNet) - else: - if self.add_extra_convs == 'on_input': - extra_source = inputs[self.backbone_end_level - 1] - elif self.add_extra_convs == 'on_lateral': - extra_source = laterals[-1] - elif self.add_extra_convs == 'on_output': - extra_source = outs[-1] - else: - raise NotImplementedError - outs.append(self.fpn_convs[used_backbone_levels](extra_source)) - for i in range(used_backbone_levels + 1, self.num_outs): - if self.relu_before_extra_convs: - outs.append(self.fpn_convs[i](F.relu(outs[-1]))) - else: - outs.append(self.fpn_convs[i](outs[-1])) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/ic_neck.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/ic_neck.py deleted file mode 100644 index d836a6b9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/ic_neck.py +++ /dev/null @@ -1,147 +0,0 @@ -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import NECKS - - -class CascadeFeatureFusion(BaseModule): - """Cascade Feature Fusion Unit in ICNet. - - Args: - low_channels (int): The number of input channels for - low resolution feature map. - high_channels (int): The number of input channels for - high resolution feature map. - out_channels (int): The number of output channels. - conv_cfg (dict): Dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN'). - act_cfg (dict): Dictionary to construct and config act layer. - Default: dict(type='ReLU'). - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - - Returns: - x (Tensor): The output tensor of shape (N, out_channels, H, W). - x_low (Tensor): The output tensor of shape (N, out_channels, H, W) - for Cascade Label Guidance in auxiliary heads. 
- """ - - def __init__(self, - low_channels, - high_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False, - init_cfg=None): - super(CascadeFeatureFusion, self).__init__(init_cfg=init_cfg) - self.align_corners = align_corners - self.conv_low = ConvModule( - low_channels, - out_channels, - 3, - padding=2, - dilation=2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.conv_high = ConvModule( - high_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, x_low, x_high): - x_low = resize( - x_low, - size=x_high.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - # Note: Different from original paper, `x_low` is underwent - # `self.conv_low` rather than another 1x1 conv classifier - # before being used for auxiliary head. - x_low = self.conv_low(x_low) - x_high = self.conv_high(x_high) - x = x_low + x_high - x = F.relu(x, inplace=True) - return x, x_low - - -@NECKS.register_module() -class ICNeck(BaseModule): - """ICNet for Real-Time Semantic Segmentation on High-Resolution Images. - - This head is the implementation of `ICHead - `_. - - Args: - in_channels (int): The number of input image channels. Default: 3. - out_channels (int): The numbers of output feature channels. - Default: 128. - conv_cfg (dict): Dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN'). - act_cfg (dict): Dictionary to construct and config act layer. - Default: dict(type='ReLU'). - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels=(64, 256, 256), - out_channels=128, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False, - init_cfg=None): - super(ICNeck, self).__init__(init_cfg=init_cfg) - assert len(in_channels) == 3, 'Length of input channels \ - must be 3!' - - self.in_channels = in_channels - self.out_channels = out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.align_corners = align_corners - self.cff_24 = CascadeFeatureFusion( - self.in_channels[2], - self.in_channels[1], - self.out_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - - self.cff_12 = CascadeFeatureFusion( - self.out_channels, - self.in_channels[0], - self.out_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - - def forward(self, inputs): - assert len(inputs) == 3, 'Length of input feature \ - maps must be 3!' - - x_sub1, x_sub2, x_sub4 = inputs - x_cff_24, x_24 = self.cff_24(x_sub4, x_sub2) - x_cff_12, x_12 = self.cff_12(x_cff_24, x_sub1) - # Note: `x_cff_12` is used for decode_head, - # `x_24` and `x_12` are used for auxiliary head. - return x_24, x_12, x_cff_12 diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/jpu.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/jpu.py deleted file mode 100644 index 3cc6b9f4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/jpu.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule -from mmcv.runner import BaseModule - -from mmseg.ops import resize -from ..builder import NECKS - - -@NECKS.register_module() -class JPU(BaseModule): - """FastFCN: Rethinking Dilated Convolution in the Backbone - for Semantic Segmentation. - - This Joint Pyramid Upsampling (JPU) neck is the implementation of - `FastFCN `_. - - Args: - in_channels (Tuple[int], optional): The number of input channels - for each convolution operations before upsampling. - Default: (512, 1024, 2048). - mid_channels (int): The number of output channels of JPU. - Default: 512. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - dilations (tuple[int]): Dilation rate of each Depthwise - Separable ConvModule. Default: (1, 2, 4, 8). - align_corners (bool, optional): The align_corners argument of - resize operation. Default: False. - conv_cfg (dict | None): Config of conv layers. - Default: None. - norm_cfg (dict | None): Config of norm layers. - Default: dict(type='BN'). - act_cfg (dict): Config of activation layers. - Default: dict(type='ReLU'). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None. - """ - - def __init__(self, - in_channels=(512, 1024, 2048), - mid_channels=512, - start_level=0, - end_level=-1, - dilations=(1, 2, 4, 8), - align_corners=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super(JPU, self).__init__(init_cfg=init_cfg) - assert isinstance(in_channels, tuple) - assert isinstance(dilations, tuple) - self.in_channels = in_channels - self.mid_channels = mid_channels - self.start_level = start_level - self.num_ins = len(in_channels) - if end_level == -1: - self.backbone_end_level = self.num_ins - else: - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - - self.dilations = dilations - self.align_corners = align_corners - - self.conv_layers = nn.ModuleList() - self.dilation_layers = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - conv_layer = nn.Sequential( - ConvModule( - self.in_channels[i], - self.mid_channels, - kernel_size=3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.conv_layers.append(conv_layer) - for i in range(len(dilations)): - dilation_layer = nn.Sequential( - DepthwiseSeparableConvModule( - in_channels=(self.backbone_end_level - self.start_level) * - self.mid_channels, - out_channels=self.mid_channels, - kernel_size=3, - stride=1, - padding=dilations[i], - dilation=dilations[i], - dw_norm_cfg=norm_cfg, - dw_act_cfg=None, - pw_norm_cfg=norm_cfg, - pw_act_cfg=act_cfg)) - self.dilation_layers.append(dilation_layer) - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.in_channels), 'Length of inputs must \ - be the same with self.in_channels!' 
- - feats = [ - self.conv_layers[i - self.start_level](inputs[i]) - for i in range(self.start_level, self.backbone_end_level) - ] - - h, w = feats[0].shape[2:] - for i in range(1, len(feats)): - feats[i] = resize( - feats[i], - size=(h, w), - mode='bilinear', - align_corners=self.align_corners) - - feat = torch.cat(feats, dim=1) - concat_feat = torch.cat([ - self.dilation_layers[i](feat) for i in range(len(self.dilations)) - ], - dim=1) - - outs = [] - - # Default: outs[2] is the output of JPU for decoder head, outs[1] is - # the feature map from backbone for auxiliary head. Additionally, - # outs[0] can also be used for auxiliary head. - for i in range(self.start_level, self.backbone_end_level - 1): - outs.append(inputs[i]) - outs.append(concat_feat) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/mla_neck.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/mla_neck.py deleted file mode 100644 index 1513e296..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/mla_neck.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, build_norm_layer - -from ..builder import NECKS - - -class MLAModule(nn.Module): - - def __init__(self, - in_channels=[1024, 1024, 1024, 1024], - out_channels=256, - norm_cfg=None, - act_cfg=None): - super(MLAModule, self).__init__() - self.channel_proj = nn.ModuleList() - for i in range(len(in_channels)): - self.channel_proj.append( - ConvModule( - in_channels=in_channels[i], - out_channels=out_channels, - kernel_size=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.feat_extract = nn.ModuleList() - for i in range(len(in_channels)): - self.feat_extract.append( - ConvModule( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=3, - padding=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, inputs): - - # feat_list -> [p2, p3, p4, p5] - feat_list = [] - for x, conv in zip(inputs, self.channel_proj): - feat_list.append(conv(x)) - - # feat_list -> [p5, p4, p3, p2] - # mid_list -> [m5, m4, m3, m2] - feat_list = feat_list[::-1] - mid_list = [] - for feat in feat_list: - if len(mid_list) == 0: - mid_list.append(feat) - else: - mid_list.append(mid_list[-1] + feat) - - # mid_list -> [m5, m4, m3, m2] - # out_list -> [o2, o3, o4, o5] - out_list = [] - for mid, conv in zip(mid_list, self.feat_extract): - out_list.append(conv(mid)) - - return tuple(out_list) - - -@NECKS.register_module() -class MLANeck(nn.Module): - """Multi-level Feature Aggregation. - - This neck is `The Multi-level Feature Aggregation construction of - SETR `_. - - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - norm_layer (dict): Config dict for input normalization. - Default: norm_layer=dict(type='LN', eps=1e-6, requires_grad=True). - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer in ConvModule. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - norm_layer=dict(type='LN', eps=1e-6, requires_grad=True), - norm_cfg=None, - act_cfg=None): - super(MLANeck, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - - # In order to build general vision transformer backbone, we have to - # move MLA to neck. 
- self.norm = nn.ModuleList([ - build_norm_layer(norm_layer, in_channels[i])[1] - for i in range(len(in_channels)) - ]) - - self.mla = MLAModule( - in_channels=in_channels, - out_channels=out_channels, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # Convert from nchw to nlc - outs = [] - for i in range(len(inputs)): - x = inputs[i] - n, c, h, w = x.shape - x = x.reshape(n, c, h * w).transpose(2, 1).contiguous() - x = self.norm[i](x) - x = x.transpose(1, 2).reshape(n, c, h, w).contiguous() - outs.append(x) - - outs = self.mla(outs) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/multilevel_neck.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/multilevel_neck.py deleted file mode 100644 index 5151f876..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/necks/multilevel_neck.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule, xavier_init - -from mmseg.ops import resize -from ..builder import NECKS - - -@NECKS.register_module() -class MultiLevelNeck(nn.Module): - """MultiLevelNeck. - - A neck structure connect vit backbone and decoder_heads. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - scales (List[float]): Scale factors for each input feature map. - Default: [0.5, 1, 2, 4] - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer in ConvModule. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - scales=[0.5, 1, 2, 4], - norm_cfg=None, - act_cfg=None): - super(MultiLevelNeck, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.scales = scales - self.num_outs = len(scales) - self.lateral_convs = nn.ModuleList() - self.convs = nn.ModuleList() - for in_channel in in_channels: - self.lateral_convs.append( - ConvModule( - in_channel, - out_channels, - kernel_size=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - for _ in range(self.num_outs): - self.convs.append( - ConvModule( - out_channels, - out_channels, - kernel_size=3, - padding=1, - stride=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - # default init_weights for conv(msra) and norm in ConvModule - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - inputs = [ - lateral_conv(inputs[i]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - # for len(inputs) not equal to self.num_outs - if len(inputs) == 1: - inputs = [inputs[0] for _ in range(self.num_outs)] - outs = [] - for i in range(self.num_outs): - x_resize = resize( - inputs[i], scale_factor=self.scales[i], mode='bilinear') - outs.append(self.convs[i](x_resize)) - return tuple(outs) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/__init__.py deleted file mode 100644 index 387c858b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base import BaseSegmentor -from .cascade_encoder_decoder import CascadeEncoderDecoder -from .encoder_decoder import EncoderDecoder - -__all__ = ['BaseSegmentor', 'EncoderDecoder', 'CascadeEncoderDecoder'] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/base.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/base.py deleted file mode 100644 index f0f320ff..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/base.py +++ /dev/null @@ -1,277 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import mmcv -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import BaseModule, auto_fp16 - - -class BaseSegmentor(BaseModule, metaclass=ABCMeta): - """Base class for segmentors.""" - - def __init__(self, init_cfg=None): - super(BaseSegmentor, self).__init__(init_cfg) - self.fp16_enabled = False - - @property - def with_neck(self): - """bool: whether the segmentor has neck""" - return hasattr(self, 'neck') and self.neck is not None - - @property - def with_auxiliary_head(self): - """bool: whether the segmentor has auxiliary head""" - return hasattr(self, - 'auxiliary_head') and self.auxiliary_head is not None - - @property - def with_decode_head(self): - """bool: whether the segmentor has decode head""" - return hasattr(self, 'decode_head') and self.decode_head is not None - - @abstractmethod - def extract_feat(self, imgs): - """Placeholder for extract features from images.""" - pass - - @abstractmethod - def encode_decode(self, img, img_metas): - """Placeholder for encode images with backbone and decode into a - semantic segmentation map of the same size as input.""" - pass - - @abstractmethod - def forward_train(self, imgs, img_metas, **kwargs): - """Placeholder for Forward function for training.""" - pass - - @abstractmethod - def simple_test(self, img, img_meta, **kwargs): - """Placeholder for single image test.""" - pass - - @abstractmethod - def aug_test(self, imgs, img_metas, **kwargs): - """Placeholder for augmentation test.""" - pass - - def forward_test(self, imgs, img_metas, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. 
- """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got ' - f'{type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) != ' - f'num of image meta ({len(img_metas)})') - # all images in the same aug batch all of the same ori_shape and pad - # shape - for img_meta in img_metas: - ori_shapes = [_['ori_shape'] for _ in img_meta] - assert all(shape == ori_shapes[0] for shape in ori_shapes) - img_shapes = [_['img_shape'] for _ in img_meta] - assert all(shape == img_shapes[0] for shape in img_shapes) - pad_shapes = [_['pad_shape'] for _ in img_meta] - assert all(shape == pad_shapes[0] for shape in pad_shapes) - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], **kwargs) - else: - return self.aug_test(imgs, img_metas, **kwargs) - - @auto_fp16(apply_to=('img', )) - def forward(self, img, img_metas, return_loss=True, **kwargs): - """Calls either :func:`forward_train` or :func:`forward_test` depending - on whether ``return_loss`` is ``True``. - - Note this setting will change the expected inputs. When - ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor - and List[dict]), and when ``resturn_loss=False``, img and img_meta - should be double nested (i.e. List[Tensor], List[List[dict]]), with - the outer list indicating test time augmentations. - """ - if return_loss: - return self.forward_train(img, img_metas, **kwargs) - else: - return self.forward_test(img, img_metas, **kwargs) - - def train_step(self, data_batch, optimizer, **kwargs): - """The iteration step during training. - - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating is also defined in - this method, such as GAN. - - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. - - Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, - ``num_samples``. - ``loss`` is a tensor for back propagation, which can be a - weighted sum of multiple losses. - ``log_vars`` contains all the variables to be sent to the - logger. - ``num_samples`` indicates the batch size (when the model is - DDP, it means the batch size on each GPU), which is used for - averaging the logs. - """ - losses = self(**data_batch) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, - log_vars=log_vars, - num_samples=len(data_batch['img_metas'])) - - return outputs - - def val_step(self, data_batch, optimizer=None, **kwargs): - """The iteration step during validation. - - This method shares the same signature as :func:`train_step`, but used - during val epochs. Note that the evaluation after training epochs is - not implemented with this method, but an evaluation hook. - """ - losses = self(**data_batch) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, - log_vars=log_vars, - num_samples=len(data_batch['img_metas'])) - - return outputs - - @staticmethod - def _parse_losses(losses): - """Parse the raw outputs (losses) of the network. - - Args: - losses (dict): Raw output of the network, which usually contain - losses and other necessary information. 
- - Returns: - tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor - which may be a weighted sum of all losses, log_vars contains - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - # If the loss_vars has different length, raise assertion error - # to prevent GPUs from infinite waiting. - if dist.is_available() and dist.is_initialized(): - log_var_length = torch.tensor(len(log_vars), device=loss.device) - dist.all_reduce(log_var_length) - message = (f'rank {dist.get_rank()}' + - f' len(log_vars): {len(log_vars)}' + ' keys: ' + - ','.join(log_vars.keys()) + '\n') - assert log_var_length == len(log_vars) * dist.get_world_size(), \ - 'loss log variables are different across GPUs!\n' + message - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def show_result(self, - img, - result, - palette=None, - win_name='', - show=False, - wait_time=0, - out_file=None, - opacity=0.5): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (Tensor): The semantic segmentation results to draw over - `img`. - palette (list[list[int]]] | np.ndarray | None): The palette of - segmentation map. If None is given, random palette will be - generated. Default: None - win_name (str): The window name. - wait_time (int): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. 
- Returns: - img (Tensor): Only if not `show` or `out_file` - """ - img = mmcv.imread(img) - img = img.copy() - seg = result[0] - if palette is None: - if self.PALETTE is None: - palette = np.random.randint( - 0, 255, size=(len(self.CLASSES), 3)) - else: - palette = self.PALETTE - palette = np.array(palette) - assert palette.shape[0] == len(self.CLASSES) - assert palette.shape[1] == 3 - assert len(palette.shape) == 2 - assert 0 < opacity <= 1.0 - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - # convert to BGR - color_seg = color_seg[..., ::-1] - - img = img * (1 - opacity) + color_seg * opacity - img = img.astype(np.uint8) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - - if show: - mmcv.imshow(img, win_name, wait_time) - if out_file is not None: - mmcv.imwrite(img, out_file) - - if not (show or out_file): - warnings.warn('show==False and out_file is not specified, only ' - 'result image will be returned') - return img diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/cascade_encoder_decoder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/cascade_encoder_decoder.py deleted file mode 100644 index 7f9f9006..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/cascade_encoder_decoder.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from torch import nn - -from mmseg.core import add_prefix -from mmseg.ops import resize -from .. import builder -from ..builder import SEGMENTORS -from .encoder_decoder import EncoderDecoder - - -@SEGMENTORS.register_module() -class CascadeEncoderDecoder(EncoderDecoder): - """Cascade Encoder Decoder segmentors. - - CascadeEncoderDecoder almost the same as EncoderDecoder, while decoders of - CascadeEncoderDecoder are cascaded. The output of previous decoder_head - will be the input of next decoder_head. 
- """ - - def __init__(self, - num_stages, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - self.num_stages = num_stages - super(CascadeEncoderDecoder, self).__init__( - backbone=backbone, - decode_head=decode_head, - neck=neck, - auxiliary_head=auxiliary_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - assert isinstance(decode_head, list) - assert len(decode_head) == self.num_stages - self.decode_head = nn.ModuleList() - for i in range(self.num_stages): - self.decode_head.append(builder.build_head(decode_head[i])) - self.align_corners = self.decode_head[-1].align_corners - self.num_classes = self.decode_head[-1].num_classes - - def encode_decode(self, img, img_metas): - """Encode images with backbone and decode into a semantic segmentation - map of the same size as input.""" - x = self.extract_feat(img) - out = self.decode_head[0].forward_test(x, img_metas, self.test_cfg) - for i in range(1, self.num_stages): - out = self.decode_head[i].forward_test(x, out, img_metas, - self.test_cfg) - out = resize( - input=out, - size=img.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - return out - - def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - - loss_decode = self.decode_head[0].forward_train( - x, img_metas, gt_semantic_seg, self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode_0')) - - for i in range(1, self.num_stages): - # forward test again, maybe unnecessary for most methods. - prev_outputs = self.decode_head[i - 1].forward_test( - x, img_metas, self.test_cfg) - loss_decode = self.decode_head[i].forward_train( - x, prev_outputs, img_metas, gt_semantic_seg, self.train_cfg) - losses.update(add_prefix(loss_decode, f'decode_{i}')) - - return losses diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/encoder_decoder.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/encoder_decoder.py deleted file mode 100644 index 72467b46..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/segmentors/encoder_decoder.py +++ /dev/null @@ -1,284 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from mmseg.core import add_prefix -from mmseg.ops import resize -from .. import builder -from ..builder import SEGMENTORS -from .base import BaseSegmentor - - -@SEGMENTORS.register_module() -class EncoderDecoder(BaseSegmentor): - """Encoder Decoder segmentors. - - EncoderDecoder typically consists of backbone, decode_head, auxiliary_head. - Note that auxiliary_head is only used for deep supervision during training, - which could be dumped during inference. 
- """ - - def __init__(self, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(EncoderDecoder, self).__init__(init_cfg) - if pretrained is not None: - assert backbone.get('pretrained') is None, \ - 'both backbone and segmentor set pretrained weight' - backbone.pretrained = pretrained - self.backbone = builder.build_backbone(backbone) - if neck is not None: - self.neck = builder.build_neck(neck) - self._init_decode_head(decode_head) - self._init_auxiliary_head(auxiliary_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - assert self.with_decode_head - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - self.decode_head = builder.build_head(decode_head) - self.align_corners = self.decode_head.align_corners - self.num_classes = self.decode_head.num_classes - - def _init_auxiliary_head(self, auxiliary_head): - """Initialize ``auxiliary_head``""" - if auxiliary_head is not None: - if isinstance(auxiliary_head, list): - self.auxiliary_head = nn.ModuleList() - for head_cfg in auxiliary_head: - self.auxiliary_head.append(builder.build_head(head_cfg)) - else: - self.auxiliary_head = builder.build_head(auxiliary_head) - - def extract_feat(self, img): - """Extract features from images.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def encode_decode(self, img, img_metas): - """Encode images with backbone and decode into a semantic segmentation - map of the same size as input.""" - x = self.extract_feat(img) - out = self._decode_head_forward_test(x, img_metas) - out = resize( - input=out, - size=img.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - return out - - def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - loss_decode = self.decode_head.forward_train(x, img_metas, - gt_semantic_seg, - self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode')) - return losses - - def _decode_head_forward_test(self, x, img_metas): - """Run forward function and calculate loss for decode head in - inference.""" - seg_logits = self.decode_head.forward_test(x, img_metas, self.test_cfg) - return seg_logits - - def _auxiliary_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for auxiliary head in - training.""" - losses = dict() - if isinstance(self.auxiliary_head, nn.ModuleList): - for idx, aux_head in enumerate(self.auxiliary_head): - loss_aux = aux_head.forward_train(x, img_metas, - gt_semantic_seg, - self.train_cfg) - losses.update(add_prefix(loss_aux, f'aux_{idx}')) - else: - loss_aux = self.auxiliary_head.forward_train( - x, img_metas, gt_semantic_seg, self.train_cfg) - losses.update(add_prefix(loss_aux, 'aux')) - - return losses - - def forward_dummy(self, img): - """Dummy forward function.""" - seg_logit = self.encode_decode(img, None) - - return seg_logit - - def forward_train(self, img, img_metas, gt_semantic_seg): - """Forward function for training. - - Args: - img (Tensor): Input images. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. 
- gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - - x = self.extract_feat(img) - - losses = dict() - - loss_decode = self._decode_head_forward_train(x, img_metas, - gt_semantic_seg) - losses.update(loss_decode) - - if self.with_auxiliary_head: - loss_aux = self._auxiliary_head_forward_train( - x, img_metas, gt_semantic_seg) - losses.update(loss_aux) - - return losses - - # TODO refactor - def slide_inference(self, img, img_meta, rescale): - """Inference by sliding-window with overlap. - - If h_crop > h_img or w_crop > w_img, the small patch will be used to - decode without padding. - """ - - h_stride, w_stride = self.test_cfg.stride - h_crop, w_crop = self.test_cfg.crop_size - batch_size, _, h_img, w_img = img.size() - num_classes = self.num_classes - h_grids = max(h_img - h_crop + h_stride - 1, 0) // h_stride + 1 - w_grids = max(w_img - w_crop + w_stride - 1, 0) // w_stride + 1 - preds = img.new_zeros((batch_size, num_classes, h_img, w_img)) - count_mat = img.new_zeros((batch_size, 1, h_img, w_img)) - for h_idx in range(h_grids): - for w_idx in range(w_grids): - y1 = h_idx * h_stride - x1 = w_idx * w_stride - y2 = min(y1 + h_crop, h_img) - x2 = min(x1 + w_crop, w_img) - y1 = max(y2 - h_crop, 0) - x1 = max(x2 - w_crop, 0) - crop_img = img[:, :, y1:y2, x1:x2] - crop_seg_logit = self.encode_decode(crop_img, img_meta) - preds += F.pad(crop_seg_logit, - (int(x1), int(preds.shape[3] - x2), int(y1), - int(preds.shape[2] - y2))) - - count_mat[:, :, y1:y2, x1:x2] += 1 - assert (count_mat == 0).sum() == 0 - if torch.onnx.is_in_onnx_export(): - # cast count_mat to constant while exporting to ONNX - count_mat = torch.from_numpy( - count_mat.cpu().detach().numpy()).to(device=img.device) - preds = preds / count_mat - if rescale: - preds = resize( - preds, - size=img_meta[0]['ori_shape'][:2], - mode='bilinear', - align_corners=self.align_corners, - warning=False) - return preds - - def whole_inference(self, img, img_meta, rescale): - """Inference with full image.""" - - seg_logit = self.encode_decode(img, img_meta) - if rescale: - # support dynamic shape for onnx - if torch.onnx.is_in_onnx_export(): - size = img.shape[2:] - else: - size = img_meta[0]['ori_shape'][:2] - seg_logit = resize( - seg_logit, - size=size, - mode='bilinear', - align_corners=self.align_corners, - warning=False) - - return seg_logit - - def inference(self, img, img_meta, rescale): - """Inference with slide/whole style. - - Args: - img (Tensor): The input image of shape (N, 3, H, W). - img_meta (dict): Image info dict where each dict has: 'img_shape', - 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - rescale (bool): Whether rescale back to original shape. - - Returns: - Tensor: The output segmentation map. 
- """ - - assert self.test_cfg.mode in ['slide', 'whole'] - ori_shape = img_meta[0]['ori_shape'] - assert all(_['ori_shape'] == ori_shape for _ in img_meta) - if self.test_cfg.mode == 'slide': - seg_logit = self.slide_inference(img, img_meta, rescale) - else: - seg_logit = self.whole_inference(img, img_meta, rescale) - output = F.softmax(seg_logit, dim=1) - flip = img_meta[0]['flip'] - if flip: - flip_direction = img_meta[0]['flip_direction'] - assert flip_direction in ['horizontal', 'vertical'] - if flip_direction == 'horizontal': - output = output.flip(dims=(3, )) - elif flip_direction == 'vertical': - output = output.flip(dims=(2, )) - - return output - - def simple_test(self, img, img_meta, rescale=True): - """Simple test with single image.""" - seg_logit = self.inference(img, img_meta, rescale) - seg_pred = seg_logit.argmax(dim=1) - if torch.onnx.is_in_onnx_export(): - # our inference backend only support 4D output - seg_pred = seg_pred.unsqueeze(0) - return seg_pred - seg_pred = seg_pred.cpu().numpy() - # unravel batch dim - seg_pred = list(seg_pred) - return seg_pred - - def aug_test(self, imgs, img_metas, rescale=True): - """Test with augmentations. - - Only rescale=True is supported. - """ - # aug_test rescale all imgs back to ori_shape for now - assert rescale - # to save memory, we get augmented seg logit inplace - seg_logit = self.inference(imgs[0], img_metas[0], rescale) - for i in range(1, len(imgs)): - cur_seg_logit = self.inference(imgs[i], img_metas[i], rescale) - seg_logit += cur_seg_logit - seg_logit /= len(imgs) - seg_pred = seg_logit.argmax(dim=1) - seg_pred = seg_pred.cpu().numpy() - # unravel batch dim - seg_pred = list(seg_pred) - return seg_pred diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/__init__.py deleted file mode 100644 index 2417c518..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -from .embed import PatchEmbed -from .inverted_residual import InvertedResidual, InvertedResidualV3 -from .make_divisible import make_divisible -from .res_layer import ResLayer -from .se_layer import SELayer -from .self_attention_block import SelfAttentionBlock -from .shape_convert import nchw_to_nlc, nlc_to_nchw -from .up_conv_block import UpConvBlock - -__all__ = [ - 'ResLayer', 'SelfAttentionBlock', 'make_divisible', 'InvertedResidual', - 'UpConvBlock', 'InvertedResidualV3', 'SELayer', 'PatchEmbed', - 'nchw_to_nlc', 'nlc_to_nchw' -] diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/embed.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/embed.py deleted file mode 100644 index 1515675e..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/embed.py +++ /dev/null @@ -1,330 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -from typing import Sequence - -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner.base_module import BaseModule -from mmcv.utils import to_2tuple - - -class AdaptivePadding(nn.Module): - """Applies padding to input (if needed) so that input can get fully covered - by filter you specified. It support two modes "same" and "corner". The - "same" mode is same with "SAME" padding mode in TensorFlow, pad zero around - input. The "corner" mode would pad zero to bottom right. 
- - Args: - kernel_size (int | tuple): Size of the kernel: - stride (int | tuple): Stride of the filter. Default: 1: - dilation (int | tuple): Spacing between kernel elements. - Default: 1. - padding (str): Support "same" and "corner", "corner" mode - would pad zero to bottom right, and "same" mode would - pad zero around input. Default: "corner". - Example: - >>> kernel_size = 16 - >>> stride = 16 - >>> dilation = 1 - >>> input = torch.rand(1, 1, 15, 17) - >>> adap_pad = AdaptivePadding( - >>> kernel_size=kernel_size, - >>> stride=stride, - >>> dilation=dilation, - >>> padding="corner") - >>> out = adap_pad(input) - >>> assert (out.shape[2], out.shape[3]) == (16, 32) - >>> input = torch.rand(1, 1, 16, 17) - >>> out = adap_pad(input) - >>> assert (out.shape[2], out.shape[3]) == (16, 32) - """ - - def __init__(self, kernel_size=1, stride=1, dilation=1, padding='corner'): - - super(AdaptivePadding, self).__init__() - - assert padding in ('same', 'corner') - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - self.padding = padding - self.kernel_size = kernel_size - self.stride = stride - self.dilation = dilation - - def get_pad_shape(self, input_shape): - input_h, input_w = input_shape - kernel_h, kernel_w = self.kernel_size - stride_h, stride_w = self.stride - output_h = math.ceil(input_h / stride_h) - output_w = math.ceil(input_w / stride_w) - pad_h = max((output_h - 1) * stride_h + - (kernel_h - 1) * self.dilation[0] + 1 - input_h, 0) - pad_w = max((output_w - 1) * stride_w + - (kernel_w - 1) * self.dilation[1] + 1 - input_w, 0) - return pad_h, pad_w - - def forward(self, x): - pad_h, pad_w = self.get_pad_shape(x.size()[-2:]) - if pad_h > 0 or pad_w > 0: - if self.padding == 'corner': - x = F.pad(x, [0, pad_w, 0, pad_h]) - elif self.padding == 'same': - x = F.pad(x, [ - pad_w // 2, pad_w - pad_w // 2, pad_h // 2, - pad_h - pad_h // 2 - ]) - return x - - -class PatchEmbed(BaseModule): - """Image to Patch Embedding. - - We use a conv layer to implement PatchEmbed. - - Args: - in_channels (int): The num of input channels. Default: 3 - embed_dims (int): The dimensions of embedding. Default: 768 - conv_type (str): The config dict for embedding - conv layer type selection. Default: "Conv2d". - kernel_size (int): The kernel_size of embedding conv. Default: 16. - stride (int, optional): The slide stride of embedding conv. - Default: None (Would be set as `kernel_size`). - padding (int | tuple | string ): The padding length of - embedding conv. When it is a string, it means the mode - of adaptive padding, support "same" and "corner" now. - Default: "corner". - dilation (int): The dilation rate of embedding conv. Default: 1. - bias (bool): Bias of embed conv. Default: True. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - input_size (int | tuple | None): The size of input, which will be - used to calculate the out size. Only work when `dynamic_size` - is False. Default: None. - init_cfg (`mmcv.ConfigDict`, optional): The Config for initialization. - Default: None. 
- """ - - def __init__(self, - in_channels=3, - embed_dims=768, - conv_type='Conv2d', - kernel_size=16, - stride=None, - padding='corner', - dilation=1, - bias=True, - norm_cfg=None, - input_size=None, - init_cfg=None): - super(PatchEmbed, self).__init__(init_cfg=init_cfg) - - self.embed_dims = embed_dims - if stride is None: - stride = kernel_size - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - if isinstance(padding, str): - self.adap_padding = AdaptivePadding( - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=padding) - # disable the padding of conv - padding = 0 - else: - self.adap_padding = None - padding = to_2tuple(padding) - - self.projection = build_conv_layer( - dict(type=conv_type), - in_channels=in_channels, - out_channels=embed_dims, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - bias=bias) - - if norm_cfg is not None: - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - else: - self.norm = None - - if input_size: - input_size = to_2tuple(input_size) - # `init_out_size` would be used outside to - # calculate the num_patches - # when `use_abs_pos_embed` outside - self.init_input_size = input_size - if self.adap_padding: - pad_h, pad_w = self.adap_padding.get_pad_shape(input_size) - input_h, input_w = input_size - input_h = input_h + pad_h - input_w = input_w + pad_w - input_size = (input_h, input_w) - - # https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html - h_out = (input_size[0] + 2 * padding[0] - dilation[0] * - (kernel_size[0] - 1) - 1) // stride[0] + 1 - w_out = (input_size[1] + 2 * padding[1] - dilation[1] * - (kernel_size[1] - 1) - 1) // stride[1] + 1 - self.init_out_size = (h_out, w_out) - else: - self.init_input_size = None - self.init_out_size = None - - def forward(self, x): - """ - Args: - x (Tensor): Has shape (B, C, H, W). In most case, C is 3. - - Returns: - tuple: Contains merged results and its spatial shape. - - - x (Tensor): Has shape (B, out_h * out_w, embed_dims) - - out_size (tuple[int]): Spatial shape of x, arrange as - (out_h, out_w). - """ - - if self.adap_padding: - x = self.adap_padding(x) - - x = self.projection(x) - out_size = (x.shape[2], x.shape[3]) - x = x.flatten(2).transpose(1, 2) - if self.norm is not None: - x = self.norm(x) - return x, out_size - - -class PatchMerging(BaseModule): - """Merge patch feature map. - - This layer groups feature map by kernel_size, and applies norm and linear - layers to the grouped feature map. Our implementation uses `nn.Unfold` to - merge patch, which is about 25% faster than original implementation. - Instead, we need to modify pretrained models for compatibility. - - Args: - in_channels (int): The num of input channels. - out_channels (int): The num of output channels. - kernel_size (int | tuple, optional): the kernel size in the unfold - layer. Defaults to 2. - stride (int | tuple, optional): the stride of the sliding blocks in the - unfold layer. Default: None. (Would be set as `kernel_size`) - padding (int | tuple | string ): The padding length of - embedding conv. When it is a string, it means the mode - of adaptive padding, support "same" and "corner" now. - Default: "corner". - dilation (int | tuple, optional): dilation parameter in the unfold - layer. Default: 1. - bias (bool, optional): Whether to add bias in linear layer or not. - Defaults: False. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: dict(type='LN'). 
- init_cfg (dict, optional): The extra config for initialization. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=2, - stride=None, - padding='corner', - dilation=1, - bias=False, - norm_cfg=dict(type='LN'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - if stride: - stride = stride - else: - stride = kernel_size - - kernel_size = to_2tuple(kernel_size) - stride = to_2tuple(stride) - dilation = to_2tuple(dilation) - - if isinstance(padding, str): - self.adap_padding = AdaptivePadding( - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=padding) - # disable the padding of unfold - padding = 0 - else: - self.adap_padding = None - - padding = to_2tuple(padding) - self.sampler = nn.Unfold( - kernel_size=kernel_size, - dilation=dilation, - padding=padding, - stride=stride) - - sample_dim = kernel_size[0] * kernel_size[1] * in_channels - - if norm_cfg is not None: - self.norm = build_norm_layer(norm_cfg, sample_dim)[1] - else: - self.norm = None - - self.reduction = nn.Linear(sample_dim, out_channels, bias=bias) - - def forward(self, x, input_size): - """ - Args: - x (Tensor): Has shape (B, H*W, C_in). - input_size (tuple[int]): The spatial shape of x, arrange as (H, W). - Default: None. - - Returns: - tuple: Contains merged results and its spatial shape. - - - x (Tensor): Has shape (B, Merged_H * Merged_W, C_out) - - out_size (tuple[int]): Spatial shape of x, arrange as - (Merged_H, Merged_W). - """ - B, L, C = x.shape - assert isinstance(input_size, Sequence), f'Expect ' \ - f'input_size is ' \ - f'`Sequence` ' \ - f'but get {input_size}' - - H, W = input_size - assert L == H * W, 'input feature has wrong size' - - x = x.view(B, H, W, C).permute([0, 3, 1, 2]) # B, C, H, W - # Use nn.Unfold to merge patch. About 25% faster than original method, - # but need to modify pretrained model for compatibility - - if self.adap_padding: - x = self.adap_padding(x) - H, W = x.shape[-2:] - - x = self.sampler(x) - # if kernel_size=2 and stride=2, x should has shape (B, 4*C, H/2*W/2) - - out_h = (H + 2 * self.sampler.padding[0] - self.sampler.dilation[0] * - (self.sampler.kernel_size[0] - 1) - - 1) // self.sampler.stride[0] + 1 - out_w = (W + 2 * self.sampler.padding[1] - self.sampler.dilation[1] * - (self.sampler.kernel_size[1] - 1) - - 1) // self.sampler.stride[1] + 1 - - output_size = (out_h, out_w) - x = x.transpose(1, 2) # B, H/2*W/2, 4*C - x = self.norm(x) if self.norm else x - x = self.reduction(x) - return x, output_size diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/inverted_residual.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/inverted_residual.py deleted file mode 100644 index c9cda768..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/inverted_residual.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule -from torch import nn -from torch.utils import checkpoint as cp - -from .se_layer import SELayer - - -class InvertedResidual(nn.Module): - """InvertedResidual block for MobileNetV2. - - Args: - in_channels (int): The input channels of the InvertedResidual block. - out_channels (int): The output channels of the InvertedResidual block. - stride (int): Stride of the middle (first) 3x3 convolution. - expand_ratio (int): Adjusts number of channels of the hidden layer - in InvertedResidual by this amount. 
- dilation (int): Dilation rate of depthwise conv. Default: 1 - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU6'). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, - in_channels, - out_channels, - stride, - expand_ratio, - dilation=1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU6'), - with_cp=False, - **kwargs): - super(InvertedResidual, self).__init__() - self.stride = stride - assert stride in [1, 2], f'stride must in [1, 2]. ' \ - f'But received {stride}.' - self.with_cp = with_cp - self.use_res_connect = self.stride == 1 and in_channels == out_channels - hidden_dim = int(round(in_channels * expand_ratio)) - - layers = [] - if expand_ratio != 1: - layers.append( - ConvModule( - in_channels=in_channels, - out_channels=hidden_dim, - kernel_size=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - **kwargs)) - layers.extend([ - ConvModule( - in_channels=hidden_dim, - out_channels=hidden_dim, - kernel_size=3, - stride=stride, - padding=dilation, - dilation=dilation, - groups=hidden_dim, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - **kwargs), - ConvModule( - in_channels=hidden_dim, - out_channels=out_channels, - kernel_size=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None, - **kwargs) - ]) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - - def _inner_forward(x): - if self.use_res_connect: - return x + self.conv(x) - else: - return self.conv(x) - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class InvertedResidualV3(nn.Module): - """Inverted Residual Block for MobileNetV3. - - Args: - in_channels (int): The input channels of this Module. - out_channels (int): The output channels of this Module. - mid_channels (int): The input channels of the depthwise convolution. - kernel_size (int): The kernel size of the depthwise convolution. - Default: 3. - stride (int): The stride of the depthwise convolution. Default: 1. - se_cfg (dict): Config dict for se layer. Default: None, which means no - se layer. - with_expand_conv (bool): Use expand conv or not. If set False, - mid_channels must be the same with in_channels. Default: True. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - - Returns: - Tensor: The output tensor. 
- """ - - def __init__(self, - in_channels, - out_channels, - mid_channels, - kernel_size=3, - stride=1, - se_cfg=None, - with_expand_conv=True, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - with_cp=False): - super(InvertedResidualV3, self).__init__() - self.with_res_shortcut = (stride == 1 and in_channels == out_channels) - assert stride in [1, 2] - self.with_cp = with_cp - self.with_se = se_cfg is not None - self.with_expand_conv = with_expand_conv - - if self.with_se: - assert isinstance(se_cfg, dict) - if not self.with_expand_conv: - assert mid_channels == in_channels - - if self.with_expand_conv: - self.expand_conv = ConvModule( - in_channels=in_channels, - out_channels=mid_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.depthwise_conv = ConvModule( - in_channels=mid_channels, - out_channels=mid_channels, - kernel_size=kernel_size, - stride=stride, - padding=kernel_size // 2, - groups=mid_channels, - conv_cfg=dict( - type='Conv2dAdaptivePadding') if stride == 2 else conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - if self.with_se: - self.se = SELayer(**se_cfg) - - self.linear_conv = ConvModule( - in_channels=mid_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, x): - - def _inner_forward(x): - out = x - - if self.with_expand_conv: - out = self.expand_conv(out) - - out = self.depthwise_conv(out) - - if self.with_se: - out = self.se(out) - - out = self.linear_conv(out) - - if self.with_res_shortcut: - return x + out - else: - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/make_divisible.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/make_divisible.py deleted file mode 100644 index ed42c2ee..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/make_divisible.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -def make_divisible(value, divisor, min_value=None, min_ratio=0.9): - """Make divisible function. - - This function rounds the channel number to the nearest value that can be - divisible by the divisor. It is taken from the original tf repo. It ensures - that all layers have a channel number that is divisible by divisor. It can - be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py # noqa - - Args: - value (int): The original channel number. - divisor (int): The divisor to fully divide the channel number. - min_value (int): The minimum value of the output channel. - Default: None, means that the minimum value equal to the divisor. - min_ratio (float): The minimum ratio of the rounded channel number to - the original channel number. Default: 0.9. - - Returns: - int: The modified output channel number. - """ - - if min_value is None: - min_value = divisor - new_value = max(min_value, int(value + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than (1-min_ratio). 
- if new_value < min_ratio * value: - new_value += divisor - return new_value diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/res_layer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/res_layer.py deleted file mode 100644 index 190a0c5d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/res_layer.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import Sequential -from torch import nn as nn - - -class ResLayer(Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - multi_grid (int | None): Multi grid dilation rates of last - stage. Default: None - contract_dilation (bool): Whether contract first dilation of each layer - Default: False - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - dilation=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - multi_grid=None, - contract_dilation=False, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if multi_grid is None: - if dilation > 1 and contract_dilation: - first_dilation = dilation // 2 - else: - first_dilation = dilation - else: - first_dilation = multi_grid[0] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - dilation=first_dilation, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for i in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - dilation=dilation if multi_grid is None else multi_grid[i], - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/se_layer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/se_layer.py deleted file mode 100644 index 16f52aa5..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/se_layer.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch.nn as nn -from mmcv.cnn import ConvModule - -from .make_divisible import make_divisible - - -class SELayer(nn.Module): - """Squeeze-and-Excitation Module. - - Args: - channels (int): The input (and output) channels of the SE layer. 
- ratio (int): Squeeze ratio in SELayer, the intermediate channel will be - ``int(channels/ratio)``. Default: 16. - conv_cfg (None or dict): Config dict for convolution layer. - Default: None, which means using conv2d. - act_cfg (dict or Sequence[dict]): Config dict for activation layer. - If act_cfg is a dict, two activation layers will be configured - by this dict. If act_cfg is a sequence of dicts, the first - activation layer will be configured by the first dict and the - second activation layer will be configured by the second dict. - Default: (dict(type='ReLU'), dict(type='HSigmoid', bias=3.0, - divisor=6.0)). - """ - - def __init__(self, - channels, - ratio=16, - conv_cfg=None, - act_cfg=(dict(type='ReLU'), - dict(type='HSigmoid', bias=3.0, divisor=6.0))): - super(SELayer, self).__init__() - if isinstance(act_cfg, dict): - act_cfg = (act_cfg, act_cfg) - assert len(act_cfg) == 2 - assert mmcv.is_tuple_of(act_cfg, dict) - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.conv1 = ConvModule( - in_channels=channels, - out_channels=make_divisible(channels // ratio, 8), - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[0]) - self.conv2 = ConvModule( - in_channels=make_divisible(channels // ratio, 8), - out_channels=channels, - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[1]) - - def forward(self, x): - out = self.global_avgpool(x) - out = self.conv1(out) - out = self.conv2(out) - return x * out diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/self_attention_block.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/self_attention_block.py deleted file mode 100644 index c945fa71..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/self_attention_block.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule, constant_init -from torch import nn as nn -from torch.nn import functional as F - - -class SelfAttentionBlock(nn.Module): - """General self-attention block/non-local block. - - Please refer to https://arxiv.org/abs/1706.03762 for details about key, - query and value. - - Args: - key_in_channels (int): Input channels of key feature. - query_in_channels (int): Input channels of query feature. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - share_key_query (bool): Whether share projection weight between key - and query projection. - query_downsample (nn.Module): Query downsample module. - key_downsample (nn.Module): Key downsample module. - key_query_num_convs (int): Number of convs for key/query projection. - value_num_convs (int): Number of convs for value projection. - matmul_norm (bool): Whether normalize attention map with sqrt of - channels - with_out (bool): Whether use out projection. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. 
- """ - - def __init__(self, key_in_channels, query_in_channels, channels, - out_channels, share_key_query, query_downsample, - key_downsample, key_query_num_convs, value_out_num_convs, - key_query_norm, value_out_norm, matmul_norm, with_out, - conv_cfg, norm_cfg, act_cfg): - super(SelfAttentionBlock, self).__init__() - if share_key_query: - assert key_in_channels == query_in_channels - self.key_in_channels = key_in_channels - self.query_in_channels = query_in_channels - self.out_channels = out_channels - self.channels = channels - self.share_key_query = share_key_query - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.key_project = self.build_project( - key_in_channels, - channels, - num_convs=key_query_num_convs, - use_conv_module=key_query_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - if share_key_query: - self.query_project = self.key_project - else: - self.query_project = self.build_project( - query_in_channels, - channels, - num_convs=key_query_num_convs, - use_conv_module=key_query_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.value_project = self.build_project( - key_in_channels, - channels if with_out else out_channels, - num_convs=value_out_num_convs, - use_conv_module=value_out_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - if with_out: - self.out_project = self.build_project( - channels, - out_channels, - num_convs=value_out_num_convs, - use_conv_module=value_out_norm, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - else: - self.out_project = None - - self.query_downsample = query_downsample - self.key_downsample = key_downsample - self.matmul_norm = matmul_norm - - self.init_weights() - - def init_weights(self): - """Initialize weight of later layer.""" - if self.out_project is not None: - if not isinstance(self.out_project, ConvModule): - constant_init(self.out_project, 0) - - def build_project(self, in_channels, channels, num_convs, use_conv_module, - conv_cfg, norm_cfg, act_cfg): - """Build projection layer for key/query/value/out.""" - if use_conv_module: - convs = [ - ConvModule( - in_channels, - channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - ] - for _ in range(num_convs - 1): - convs.append( - ConvModule( - channels, - channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - else: - convs = [nn.Conv2d(in_channels, channels, 1)] - for _ in range(num_convs - 1): - convs.append(nn.Conv2d(channels, channels, 1)) - if len(convs) > 1: - convs = nn.Sequential(*convs) - else: - convs = convs[0] - return convs - - def forward(self, query_feats, key_feats): - """Forward function.""" - batch_size = query_feats.size(0) - query = self.query_project(query_feats) - if self.query_downsample is not None: - query = self.query_downsample(query) - query = query.reshape(*query.shape[:2], -1) - query = query.permute(0, 2, 1).contiguous() - - key = self.key_project(key_feats) - value = self.value_project(key_feats) - if self.key_downsample is not None: - key = self.key_downsample(key) - value = self.key_downsample(value) - key = key.reshape(*key.shape[:2], -1) - value = value.reshape(*value.shape[:2], -1) - value = value.permute(0, 2, 1).contiguous() - - sim_map = torch.matmul(query, key) - if self.matmul_norm: - sim_map = (self.channels**-.5) * sim_map - sim_map = F.softmax(sim_map, dim=-1) - - context = torch.matmul(sim_map, value) - context = context.permute(0, 2, 1).contiguous() - context = 
context.reshape(batch_size, -1, *query_feats.shape[2:]) - if self.out_project is not None: - context = self.out_project(context) - return context diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/shape_convert.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/shape_convert.py deleted file mode 100644 index 0677348c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/shape_convert.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -def nlc_to_nchw(x, hw_shape): - """Convert [N, L, C] shape tensor to [N, C, H, W] shape tensor. - - Args: - x (Tensor): The input tensor of shape [N, L, C] before conversion. - hw_shape (Sequence[int]): The height and width of output feature map. - - Returns: - Tensor: The output tensor of shape [N, C, H, W] after conversion. - """ - H, W = hw_shape - assert len(x.shape) == 3 - B, L, C = x.shape - assert L == H * W, 'The seq_len doesn\'t match H, W' - return x.transpose(1, 2).reshape(B, C, H, W) - - -def nchw_to_nlc(x): - """Flatten [N, C, H, W] shape tensor to [N, L, C] shape tensor. - - Args: - x (Tensor): The input tensor of shape [N, C, H, W] before conversion. - - Returns: - Tensor: The output tensor of shape [N, L, C] after conversion. - """ - assert len(x.shape) == 4 - return x.flatten(2).transpose(1, 2).contiguous() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/up_conv_block.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/up_conv_block.py deleted file mode 100644 index d8396d9c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/models/utils/up_conv_block.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, build_upsample_layer - - -class UpConvBlock(nn.Module): - """Upsample convolution block in decoder for UNet. - - This upsample convolution block consists of one upsample module - followed by one convolution block. The upsample module expands the - high-level low-resolution feature map and the convolution block fuses - the upsampled high-level low-resolution feature map and the low-level - high-resolution feature map from encoder. - - Args: - conv_block (nn.Sequential): Sequential of convolutional layers. - in_channels (int): Number of input channels of the high-level - skip_channels (int): Number of input channels of the low-level - high-resolution feature map from encoder. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers in the conv_block. - Default: 2. - stride (int): Stride of convolutional layer in conv_block. Default: 1. - dilation (int): Dilation rate of convolutional layer in conv_block. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). 
If the size of
-            high-level feature map is the same as that of skip feature map
-            (low-level feature map from encoder), it does not need upsample the
-            high-level feature map and the upsample_cfg is None.
-        dcn (bool): Use deformable convolution in convolutional layer or not.
-            Default: None.
-        plugins (dict): plugins for convolutional layers. Default: None.
-    """
-
-    def __init__(self,
-                 conv_block,
-                 in_channels,
-                 skip_channels,
-                 out_channels,
-                 num_convs=2,
-                 stride=1,
-                 dilation=1,
-                 with_cp=False,
-                 conv_cfg=None,
-                 norm_cfg=dict(type='BN'),
-                 act_cfg=dict(type='ReLU'),
-                 upsample_cfg=dict(type='InterpConv'),
-                 dcn=None,
-                 plugins=None):
-        super(UpConvBlock, self).__init__()
-        assert dcn is None, 'Not implemented yet.'
-        assert plugins is None, 'Not implemented yet.'
-
-        self.conv_block = conv_block(
-            in_channels=2 * skip_channels,
-            out_channels=out_channels,
-            num_convs=num_convs,
-            stride=stride,
-            dilation=dilation,
-            with_cp=with_cp,
-            conv_cfg=conv_cfg,
-            norm_cfg=norm_cfg,
-            act_cfg=act_cfg,
-            dcn=None,
-            plugins=None)
-        if upsample_cfg is not None:
-            self.upsample = build_upsample_layer(
-                cfg=upsample_cfg,
-                in_channels=in_channels,
-                out_channels=skip_channels,
-                with_cp=with_cp,
-                norm_cfg=norm_cfg,
-                act_cfg=act_cfg)
-        else:
-            self.upsample = ConvModule(
-                in_channels,
-                skip_channels,
-                kernel_size=1,
-                stride=1,
-                padding=0,
-                conv_cfg=conv_cfg,
-                norm_cfg=norm_cfg,
-                act_cfg=act_cfg)
-
-    def forward(self, skip, x):
-        """Forward function."""
-
-        x = self.upsample(x)
-        out = torch.cat([skip, x], dim=1)
-        out = self.conv_block(out)
-
-        return out
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/__init__.py
deleted file mode 100644
index bc075cd4..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .encoding import Encoding
-from .wrappers import Upsample, resize
-
-__all__ = ['Upsample', 'resize', 'Encoding']
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/encoding.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/encoding.py
deleted file mode 100644
index f397cc54..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/encoding.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-class Encoding(nn.Module):
-    """Encoding Layer: a learnable residual encoder.
-
-    Input is of shape (batch_size, channels, height, width).
-    Output is of shape (batch_size, num_codes, channels).
-
-    Args:
-        channels: dimension of the features or feature channels
-        num_codes: number of code words
-    """
-
-    def __init__(self, channels, num_codes):
-        super(Encoding, self).__init__()
-        # init codewords and smoothing factor
-        self.channels, self.num_codes = channels, num_codes
-        std = 1. / ((num_codes * channels)**0.5)
-        # [num_codes, channels]
-        self.codewords = nn.Parameter(
-            torch.empty(num_codes, channels,
-                        dtype=torch.float).uniform_(-std, std),
-            requires_grad=True)
-        # [num_codes]
-        self.scale = nn.Parameter(
-            torch.empty(num_codes, dtype=torch.float).uniform_(-1, 0),
-            requires_grad=True)
-
-    @staticmethod
-    def scaled_l2(x, codewords, scale):
-        num_codes, channels = codewords.size()
-        batch_size = x.size(0)
-        reshaped_scale = scale.view((1, 1, num_codes))
-        expanded_x = x.unsqueeze(2).expand(
-            (batch_size, x.size(1), num_codes, channels))
-        reshaped_codewords = codewords.view((1, 1, num_codes, channels))
-
-        scaled_l2_norm = reshaped_scale * (
-            expanded_x - reshaped_codewords).pow(2).sum(dim=3)
-        return scaled_l2_norm
-
-    @staticmethod
-    def aggregate(assignment_weights, x, codewords):
-        num_codes, channels = codewords.size()
-        reshaped_codewords = codewords.view((1, 1, num_codes, channels))
-        batch_size = x.size(0)
-
-        expanded_x = x.unsqueeze(2).expand(
-            (batch_size, x.size(1), num_codes, channels))
-        encoded_feat = (assignment_weights.unsqueeze(3) *
-                        (expanded_x - reshaped_codewords)).sum(dim=1)
-        return encoded_feat
-
-    def forward(self, x):
-        assert x.dim() == 4 and x.size(1) == self.channels
-        # [batch_size, channels, height, width]
-        batch_size = x.size(0)
-        # [batch_size, height x width, channels]
-        x = x.view(batch_size, self.channels, -1).transpose(1, 2).contiguous()
-        # assignment_weights: [batch_size, channels, num_codes]
-        assignment_weights = F.softmax(
-            self.scaled_l2(x, self.codewords, self.scale), dim=2)
-        # aggregate
-        encoded_feat = self.aggregate(assignment_weights, x, self.codewords)
-        return encoded_feat
-
-    def __repr__(self):
-        repr_str = self.__class__.__name__
-        repr_str += f'(Nx{self.channels}xHxW =>Nx{self.num_codes}' \
-                    f'x{self.channels})'
-        return repr_str
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/wrappers.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/wrappers.py
deleted file mode 100644
index ce67e4be..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/ops/wrappers.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import warnings
-
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def resize(input,
-           size=None,
-           scale_factor=None,
-           mode='nearest',
-           align_corners=None,
-           warning=True):
-    if warning:
-        if size is not None and align_corners:
-            input_h, input_w = tuple(int(x) for x in input.shape[2:])
-            output_h, output_w = tuple(int(x) for x in size)
-            if output_h > input_h or output_w > output_h:
-                if ((output_h > 1 and output_w > 1 and input_h > 1
-                     and input_w > 1) and (output_h - 1) % (input_h - 1)
-                        and (output_w - 1) % (input_w - 1)):
-                    warnings.warn(
-                        f'When align_corners={align_corners}, '
-                        'the output would more aligned if '
-                        f'input size {(input_h, input_w)} is `x+1` and '
-                        f'out size {(output_h, output_w)} is `nx+1`')
-    return F.interpolate(input, size, scale_factor, mode, align_corners)
-
-
-class Upsample(nn.Module):
-
-    def __init__(self,
-                 size=None,
-                 scale_factor=None,
-                 mode='nearest',
-                 align_corners=None):
-        super(Upsample, self).__init__()
-        self.size = size
-        if isinstance(scale_factor, tuple):
-            self.scale_factor = tuple(float(factor) for factor in scale_factor)
-        else:
-            self.scale_factor = float(scale_factor) if scale_factor else None
-        self.mode = mode
-        self.align_corners = align_corners
-
-    def forward(self, x):
-        if not self.size:
-            size = [int(t * self.scale_factor) for t in x.shape[-2:]]
-        else:
-            size = self.size
-        return resize(x, size, None, self.mode, self.align_corners)
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/__init__.py
deleted file mode 100644
index 3f155805..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .collect_env import collect_env
-from .logger import get_root_logger
-
-__all__ = ['get_root_logger', 'collect_env']
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/collect_env.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/collect_env.py
deleted file mode 100644
index 3379ecb0..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/collect_env.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmcv.utils import collect_env as collect_base_env
-from mmcv.utils import get_git_hash
-
-import mmseg
-
-
-def collect_env():
-    """Collect the information of the running environments."""
-    env_info = collect_base_env()
-    env_info['MMSegmentation'] = f'{mmseg.__version__}+{get_git_hash()[:7]}'
-
-    return env_info
-
-
-if __name__ == '__main__':
-    for name, val in collect_env().items():
-        print('{}: {}'.format(name, val))
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/logger.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/logger.py
deleted file mode 100644
index 0cb3c78d..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/utils/logger.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-
-from mmcv.utils import get_logger
-
-
-def get_root_logger(log_file=None, log_level=logging.INFO):
-    """Get the root logger.
-
-    The logger will be initialized if it has not been initialized. By default a
-    StreamHandler will be added. If `log_file` is specified, a FileHandler will
-    also be added. The name of the root logger is the top-level package name,
-    e.g., "mmseg".
-
-    Args:
-        log_file (str | None): The log filename. If specified, a FileHandler
-            will be added to the root logger.
-        log_level (int): The root logger level. Note that only the process of
-            rank 0 is affected, while other processes will set the level to
-            "Error" and be silent most of the time.
-
-    Returns:
-        logging.Logger: The root logger.
-    """
-
-    logger = get_logger(name='mmseg', log_file=log_file, log_level=log_level)
-
-    return logger
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/version.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/version.py
deleted file mode 100644
index ffa55d38..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/mmseg/version.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Open-MMLab. All rights reserved.
-
-__version__ = '0.20.0'
-
-
-def parse_version_info(version_str):
-    version_info = []
-    for x in version_str.split('.'):
-        if x.isdigit():
-            version_info.append(int(x))
-        elif x.find('rc') != -1:
-            patch_version = x.split('rc')
-            version_info.append(int(patch_version[0]))
-            version_info.append(f'rc{patch_version[1]}')
-    return tuple(version_info)
-
-
-version_info = parse_version_info(__version__)
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements.txt b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements.txt
deleted file mode 100644
index 6981bd72..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements.txt
+++ /dev/null
@@ -1,4 +0,0 @@
--r requirements/build.txt
--r requirements/optional.txt
--r requirements/runtime.txt
--r requirements/tests.txt
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/build.txt b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/build.txt
deleted file mode 100644
index e69de29b..00000000
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/docs.txt b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/docs.txt
deleted file mode 100644
index a31b7716..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/docs.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-docutils==0.16.0
-m2r
-mistune==0.8.4
-myst-parser
--e git+https://github.com/open-mmlab/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme
-sphinx==4.0.2
-sphinx-copybutton
-sphinx_markdown_tables
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/mminstall.txt b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/mminstall.txt
deleted file mode 100644
index 16a8d8b7..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/mminstall.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-mmcv-full>=1.4.8,<=1.6.0
-mmdet>=2.24.0,<=3.0.0
-mmsegmentation>=0.20.0,<=1.0.0
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/optional.txt b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/optional.txt
deleted file mode 100644
index 84cbfa89..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/optional.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-open3d
-spconv
-waymo-open-dataset-tf-2-1-0==1.2.0
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/readthedocs.txt b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/readthedocs.txt
deleted file mode 100644
index 3ffe9e47..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/readthedocs.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-mmcv>=1.4.8
-mmdet>=2.24.0
-mmsegmentation>=0.20.1
-torch
-torchvision
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/runtime.txt b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/runtime.txt
deleted file mode 100644
index 9613fd74..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/runtime.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-lyft_dataset_sdk
-networkx>=2.2,<2.3
-numba==0.53.0
-numpy
-nuscenes-devkit
-plyfile
-scikit-image
-# by default we also use tensorboard to log results
-tensorboard
-trimesh>=2.35.39,<2.35.40
-addict
-yapf
-terminaltables
-prettytable
-opencv-python
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/tests.txt b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/tests.txt
deleted file mode 100644
index 303cc37d..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/requirements/tests.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-asynctest
-codecov
-flake8
-interrogate
-isort
-# Note: used for kwarray.group_items, this may be ported to mmcv in the future.
-kwarray
-pytest
-pytest-cov
-pytest-runner
-ubelt
-xdoctest >= 0.10.0
-yapf
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/setup.cfg b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/setup.cfg
deleted file mode 100644
index f6173432..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/setup.cfg
+++ /dev/null
@@ -1,16 +0,0 @@
-[yapf]
-BASED_ON_STYLE = pep8
-BLANK_LINE_BEFORE_NESTED_CLASS_OR_DEF = true
-SPLIT_BEFORE_EXPRESSION_AFTER_OPENING_PAREN = true
-
-[isort]
-line_length = 79
-multi_line_output = 0
-extra_standard_library = setuptools
-known_first_party = mmdet,mmseg,mmdet3d
-known_third_party = cv2,imageio,indoor3d_util,load_scannet_data,lyft_dataset_sdk,m2r,matplotlib,mmcv,nuimages,numba,numpy,nuscenes,pandas,plyfile,pycocotools,pyquaternion,pytest,pytorch_sphinx_theme,recommonmark,requests,scannet_utils,scipy,seaborn,shapely,skimage,sphinx,tensorflow,terminaltables,torch,trimesh,ts,waymo_open_dataset
-no_lines_before = STDLIB,LOCALFOLDER
-default_section = THIRDPARTY
-
-[codespell]
-ignore-words-list = ans,refridgerator,crate,hist,formating,dout,wan,nd,fo,avod,AVOD
diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/setup.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/setup.py
deleted file mode 100755
index 28af491b..00000000
--- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/setup.py
+++ /dev/null
@@ -1,429 +0,0 @@
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-import glob -import os -import platform -import re -import warnings -from pkg_resources import DistributionNotFound, get_distribution -from setuptools import find_packages, setup - -EXT_TYPE = '' -try: - import torch - if torch.__version__ == 'parrots': - from parrots.utils.build_extension import BuildExtension - EXT_TYPE = 'parrots' - elif (hasattr(torch, 'is_mlu_available') and torch.is_mlu_available()) or \ - os.getenv('FORCE_MLU', '0') == '1': - from torch_mlu.utils.cpp_extension import BuildExtension - EXT_TYPE = 'pytorch' - else: - from torch.utils.cpp_extension import BuildExtension - EXT_TYPE = 'pytorch' - cmd_class = {'build_ext': BuildExtension} -except ModuleNotFoundError: - cmd_class = {} - print('Skip building ext ops due to the absence of torch.') - - -def choose_requirement(primary, secondary): - """If some version of primary requirement installed, return primary, else - return secondary.""" - try: - name = re.split(r'[!<>=]', primary)[0] - get_distribution(name) - except DistributionNotFound: - return secondary - - return str(primary) - - -def get_version(): - version_file = 'mmcv/version.py' - with open(version_file, 'r', encoding='utf-8') as f: - exec(compile(f.read(), version_file, 'exec')) - version = locals()['__version__'] - local_version_identifier = os.environ.get('MMCV_LOCAL_VERSION_IDENTIFIER', '') - if local_version_identifier != '': - version += '+' + local_version_identifier - return version - - -def parse_requirements(fname='requirements/runtime.txt', with_version=True): - """Parse the package dependencies listed in a requirements file but strips - specific versioning information. - - Args: - fname (str): path to requirements file - with_version (bool, default=False): if True include version specs - - Returns: - List[str]: list of requirements items - - CommandLine: - python -c "import setup; print(setup.parse_requirements())" - """ - import sys - from os.path import exists - require_fpath = fname - - def parse_line(line): - """Parse information from a line in a requirements text file.""" - if line.startswith('-r '): - # Allow specifying requirements in other files - target = line.split(' ')[1] - for info in parse_require_file(target): - yield info - else: - info = {'line': line} - if line.startswith('-e '): - info['package'] = line.split('#egg=')[1] - else: - # Remove versioning from the package - pat = '(' + '|'.join(['>=', '==', '>']) + ')' - parts = re.split(pat, line, maxsplit=1) - parts = [p.strip() for p in parts] - - info['package'] = parts[0] - if len(parts) > 1: - op, rest = parts[1:] - if ';' in rest: - # Handle platform specific dependencies - # http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-platform-specific-dependencies - version, platform_deps = map(str.strip, - rest.split(';')) - info['platform_deps'] = platform_deps - else: - version = rest # NOQA - info['version'] = (op, version) - yield info - - def parse_require_file(fpath): - with open(fpath) as f: - for line in f.readlines(): - line = line.strip() - if line and not line.startswith('#'): - yield from parse_line(line) - - def gen_packages_items(): - if exists(require_fpath): - for info in parse_require_file(require_fpath): - parts = [info['package']] - if with_version and 'version' in info: - parts.extend(info['version']) - if not sys.version.startswith('3.4'): - # apparently package_deps are broken in 3.4 - platform_deps = info.get('platform_deps') - if platform_deps is not None: - parts.append(';' + platform_deps) - item = ''.join(parts) - yield item - - packages = 
list(gen_packages_items()) - return packages - - -install_requires = parse_requirements() - -try: - # OpenCV installed via conda. - import cv2 # NOQA: F401 - major, minor, *rest = cv2.__version__.split('.') - if int(major) < 3: - raise RuntimeError( - f'OpenCV >=3 is required but {cv2.__version__} is installed') -except ImportError: - # If first not installed install second package - CHOOSE_INSTALL_REQUIRES = [('opencv-python-headless>=3', - 'opencv-python>=3')] - for main, secondary in CHOOSE_INSTALL_REQUIRES: - install_requires.append(choose_requirement(main, secondary)) - - -def get_extensions(): - extensions = [] - - if os.getenv('MMCV_WITH_TRT', '0') != '0': - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: ' + \ - 'Custom TensorRT Ops will be deprecated in future. ' - msg += blue_text + \ - 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - - ext_name = 'mmcv._ext_trt' - from torch.utils.cpp_extension import include_paths, library_paths - library_dirs = [] - libraries = [] - include_dirs = [] - tensorrt_path = os.getenv('TENSORRT_DIR', '0') - tensorrt_lib_path = glob.glob( - os.path.join(tensorrt_path, 'targets', '*', 'lib'))[0] - library_dirs += [tensorrt_lib_path] - libraries += ['nvinfer', 'nvparsers', 'nvinfer_plugin'] - libraries += ['cudart'] - define_macros = [] - extra_compile_args = {'cxx': []} - - include_path = os.path.abspath('./mmcv/ops/csrc/common/cuda') - include_trt_path = os.path.abspath('./mmcv/ops/csrc/tensorrt') - include_dirs.append(include_path) - include_dirs.append(include_trt_path) - include_dirs.append(os.path.join(tensorrt_path, 'include')) - include_dirs += include_paths(cuda=True) - - op_files = glob.glob('./mmcv/ops/csrc/tensorrt/plugins/*') - define_macros += [('MMCV_WITH_CUDA', None)] - define_macros += [('MMCV_WITH_TRT', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - # prevent cub/thrust conflict with other python library - # More context See issues #1454 - extra_compile_args['nvcc'] += ['-Xcompiler=-fno-gnu-unique'] - library_dirs += library_paths(cuda=True) - - from setuptools import Extension - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - language='c++', - library_dirs=library_dirs, - libraries=libraries) - extensions.append(ext_ops) - - if os.getenv('MMCV_WITH_OPS', '0') == '0': - return extensions - - if EXT_TYPE == 'parrots': - ext_name = 'mmcv._ext' - from parrots.utils.build_extension import Extension - - # new parrots op impl do not use MMCV_USE_PARROTS - # define_macros = [('MMCV_USE_PARROTS', None)] - define_macros = [] - include_dirs = [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/cuda/*.cu') +\ - glob.glob('./mmcv/ops/csrc/pytorch/cpu/*.cpp') +\ - glob.glob('./mmcv/ops/csrc/parrots/*.cpp') - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/cuda')) - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args = { - 'nvcc': [cuda_args, '-std=c++14'] if cuda_args else ['-std=c++14'], - 'cxx': ['-std=c++14'], - } - if torch.cuda.is_available() or 
os.getenv('FORCE_CUDA', '0') == '1': - define_macros += [('MMCV_WITH_CUDA', None)] - extra_compile_args['nvcc'] += [ - '-D__CUDA_NO_HALF_OPERATORS__', - '-D__CUDA_NO_HALF_CONVERSIONS__', - '-D__CUDA_NO_HALF2_OPERATORS__', - ] - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - cuda=True, - pytorch=True) - extensions.append(ext_ops) - elif EXT_TYPE == 'pytorch': - ext_name = 'mmcv._ext' - from torch.utils.cpp_extension import CppExtension, CUDAExtension - - # prevent ninja from using too many resources - try: - import psutil - num_cpu = len(psutil.Process().cpu_affinity()) - cpu_use = max(4, num_cpu - 1) - except (ModuleNotFoundError, AttributeError): - cpu_use = 4 - - os.environ.setdefault('MAX_JOBS', str(cpu_use)) - define_macros = [] - - # Before PyTorch1.8.0, when compiling CUDA code, `cxx` is a - # required key passed to PyTorch. Even if there is no flag passed - # to cxx, users also need to pass an empty list to PyTorch. - # Since PyTorch1.8.0, it has a default value so users do not need - # to pass an empty list anymore. - # More details at https://github.com/pytorch/pytorch/pull/45956 - extra_compile_args = {'cxx': []} - - # Since the PR (https://github.com/open-mmlab/mmcv/pull/1463) uses - # c++14 features, the argument ['std=c++14'] must be added here. - # However, in the windows environment, some standard libraries - # will depend on c++17 or higher. In fact, for the windows - # environment, the compiler will choose the appropriate compiler - # to compile those cpp files, so there is no need to add the - # argument - if platform.system() != 'Windows': - extra_compile_args['cxx'] = ['-std=c++14'] - - include_dirs = [] - - is_rocm_pytorch = False - try: - from torch.utils.cpp_extension import ROCM_HOME - is_rocm_pytorch = True if ((torch.version.hip is not None) and - (ROCM_HOME is not None)) else False - except ImportError: - pass - - if is_rocm_pytorch or torch.cuda.is_available() or os.getenv( - 'FORCE_CUDA', '0') == '1': - if is_rocm_pytorch: - define_macros += [('HIP_DIFF', None)] - define_macros += [('MMCV_WITH_CUDA', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cpu/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cuda/*.cu') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cuda/*.cpp') - extension = CUDAExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/cuda')) - elif (hasattr(torch, 'is_mlu_available') and - torch.is_mlu_available()) or \ - os.getenv('FORCE_MLU', '0') == '1': - from torch_mlu.utils.cpp_extension import MLUExtension - define_macros += [('MMCV_WITH_MLU', None)] - mlu_args = os.getenv('MMCV_MLU_ARGS') - extra_compile_args['cncc'] = [mlu_args] if mlu_args else [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cpu/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/mlu/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/common/mlu/*.mlu') - extension = MLUExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/mlu')) - else: - print(f'Compiling {ext_name} only with CPU') - op_files = glob.glob('./mmcv/ops/csrc/pytorch/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cpu/*.cpp') - extension = 
CppExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - - # Since the PR (https://github.com/open-mmlab/mmcv/pull/1463) uses - # c++14 features, the argument ['std=c++14'] must be added here. - # However, in the windows environment, some standard libraries - # will depend on c++17 or higher. In fact, for the windows - # environment, the compiler will choose the appropriate compiler - # to compile those cpp files, so there is no need to add the - # argument - if 'nvcc' in extra_compile_args and platform.system() != 'Windows': - extra_compile_args['nvcc'] += ['-std=c++14'] - - ext_ops = extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args) - extensions.append(ext_ops) - - if EXT_TYPE == 'pytorch' and os.getenv('MMCV_WITH_ORT', '0') != '0': - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: ' + \ - 'Custom ONNXRuntime Ops will be deprecated in future. ' - msg += blue_text + \ - 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) - ext_name = 'mmcv._ext_ort' - import onnxruntime - from torch.utils.cpp_extension import include_paths, library_paths - library_dirs = [] - libraries = [] - include_dirs = [] - ort_path = os.getenv('ONNXRUNTIME_DIR', '0') - library_dirs += [os.path.join(ort_path, 'lib')] - libraries.append('onnxruntime') - define_macros = [] - extra_compile_args = {'cxx': []} - - include_path = os.path.abspath('./mmcv/ops/csrc/onnxruntime') - include_dirs.append(include_path) - include_dirs.append(os.path.join(ort_path, 'include')) - - op_files = glob.glob('./mmcv/ops/csrc/onnxruntime/cpu/*') - if onnxruntime.get_device() == 'GPU' or os.getenv('FORCE_CUDA', - '0') == '1': - define_macros += [('MMCV_WITH_CUDA', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - op_files += glob.glob('./mmcv/ops/csrc/onnxruntime/gpu/*') - include_dirs += include_paths(cuda=True) - library_dirs += library_paths(cuda=True) - else: - include_dirs += include_paths(cuda=False) - library_dirs += library_paths(cuda=False) - - from setuptools import Extension - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - language='c++', - library_dirs=library_dirs, - libraries=libraries) - extensions.append(ext_ops) - - return extensions - - -setup( - name='mmcv' if os.getenv('MMCV_WITH_OPS', '0') == '0' else 'mmcv-full', - version=get_version(), - description='OpenMMLab Computer Vision Foundation', - keywords='computer vision', - packages=find_packages(), - include_package_data=True, - classifiers=[ - 'Development Status :: 4 - Beta', - 'License :: OSI Approved :: Apache Software License', - 'Operating System :: OS Independent', - 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.6', - 'Programming Language :: Python :: 3.7', - 'Programming Language :: Python :: 3.8', - 'Programming Language :: Python :: 3.9', - 'Programming Language :: Python :: 3.10', - 'Topic :: Utilities', - ], - url='https://github.com/open-mmlab/mmcv', - author='MMCV Contributors', - 
author_email='openmmlab@gmail.com', - install_requires=install_requires, - extras_require={ - 'all': parse_requirements('requirements.txt'), - 'tests': parse_requirements('requirements/test.txt'), - 'build': parse_requirements('requirements/build.txt'), - 'optional': parse_requirements('requirements/optional.txt'), - }, - ext_modules=get_extensions(), - cmdclass=cmd_class, - zip_safe=False) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/analyze_logs.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/analyze_logs.py deleted file mode 100644 index 6794534c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/analyze_logs.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import json -from collections import defaultdict - -import numpy as np -import seaborn as sns -from matplotlib import pyplot as plt - - -def cal_train_time(log_dicts, args): - for i, log_dict in enumerate(log_dicts): - print(f'{"-" * 5}Analyze train time of {args.json_logs[i]}{"-" * 5}') - all_times = [] - for epoch in log_dict.keys(): - if args.include_outliers: - all_times.append(log_dict[epoch]['time']) - else: - all_times.append(log_dict[epoch]['time'][1:]) - all_times = np.array(all_times) - epoch_ave_time = all_times.mean(-1) - slowest_epoch = epoch_ave_time.argmax() - fastest_epoch = epoch_ave_time.argmin() - std_over_epoch = epoch_ave_time.std() - print(f'slowest epoch {slowest_epoch + 1}, ' - f'average time is {epoch_ave_time[slowest_epoch]:.4f}') - print(f'fastest epoch {fastest_epoch + 1}, ' - f'average time is {epoch_ave_time[fastest_epoch]:.4f}') - print(f'time std over epochs is {std_over_epoch:.4f}') - print(f'average iter time: {np.mean(all_times):.4f} s/iter') - print() - - -def plot_curve(log_dicts, args): - if args.backend is not None: - plt.switch_backend(args.backend) - sns.set_style(args.style) - # if legend is None, use {filename}_{key} as legend - legend = args.legend - if legend is None: - legend = [] - for json_log in args.json_logs: - for metric in args.keys: - legend.append(f'{json_log}_{metric}') - assert len(legend) == (len(args.json_logs) * len(args.keys)) - metrics = args.keys - - num_metrics = len(metrics) - for i, log_dict in enumerate(log_dicts): - epochs = list(log_dict.keys()) - for j, metric in enumerate(metrics): - print(f'plot curve of {args.json_logs[i]}, metric is {metric}') - if metric not in log_dict[epochs[args.interval - 1]]: - raise KeyError( - f'{args.json_logs[i]} does not contain metric {metric}') - - if args.mode == 'eval': - if min(epochs) == args.interval: - x0 = args.interval - else: - # if current training is resumed from previous checkpoint - # we lost information in early epochs - # `xs` should start according to `min(epochs)` - if min(epochs) % args.interval == 0: - x0 = min(epochs) - else: - # find the first epoch that do eval - x0 = min(epochs) + args.interval - \ - min(epochs) % args.interval - xs = np.arange(x0, max(epochs) + 1, args.interval) - ys = [] - for epoch in epochs[args.interval - 1::args.interval]: - ys += log_dict[epoch][metric] - - # if training is aborted before eval of the last epoch - # `xs` and `ys` will have different length and cause an error - # check if `ys[-1]` is empty here - if not log_dict[epoch][metric]: - xs = xs[:-1] - - ax = plt.gca() - ax.set_xticks(xs) - plt.xlabel('epoch') - plt.plot(xs, ys, 
label=legend[i * num_metrics + j], marker='o') - else: - xs = [] - ys = [] - num_iters_per_epoch = \ - log_dict[epochs[args.interval-1]]['iter'][-1] - for epoch in epochs[args.interval - 1::args.interval]: - iters = log_dict[epoch]['iter'] - if log_dict[epoch]['mode'][-1] == 'val': - iters = iters[:-1] - xs.append( - np.array(iters) + (epoch - 1) * num_iters_per_epoch) - ys.append(np.array(log_dict[epoch][metric][:len(iters)])) - xs = np.concatenate(xs) - ys = np.concatenate(ys) - plt.xlabel('iter') - plt.plot( - xs, ys, label=legend[i * num_metrics + j], linewidth=0.5) - plt.legend() - if args.title is not None: - plt.title(args.title) - if args.out is None: - plt.show() - else: - print(f'save curve to: {args.out}') - plt.savefig(args.out) - plt.cla() - - -def add_plot_parser(subparsers): - parser_plt = subparsers.add_parser( - 'plot_curve', help='parser for plotting curves') - parser_plt.add_argument( - 'json_logs', - type=str, - nargs='+', - help='path of train log in json format') - parser_plt.add_argument( - '--keys', - type=str, - nargs='+', - default=['mAP_0.25'], - help='the metric that you want to plot') - parser_plt.add_argument('--title', type=str, help='title of figure') - parser_plt.add_argument( - '--legend', - type=str, - nargs='+', - default=None, - help='legend of each plot') - parser_plt.add_argument( - '--backend', type=str, default=None, help='backend of plt') - parser_plt.add_argument( - '--style', type=str, default='dark', help='style of plt') - parser_plt.add_argument('--out', type=str, default=None) - parser_plt.add_argument('--mode', type=str, default='train') - parser_plt.add_argument('--interval', type=int, default=1) - - -def add_time_parser(subparsers): - parser_time = subparsers.add_parser( - 'cal_train_time', - help='parser for computing the average time per training iteration') - parser_time.add_argument( - 'json_logs', - type=str, - nargs='+', - help='path of train log in json format') - parser_time.add_argument( - '--include-outliers', - action='store_true', - help='include the first value of every epoch when computing ' - 'the average time') - - -def parse_args(): - parser = argparse.ArgumentParser(description='Analyze Json Log') - # currently only support plot curve and calculate average train time - subparsers = parser.add_subparsers(dest='task', help='task parser') - add_plot_parser(subparsers) - add_time_parser(subparsers) - args = parser.parse_args() - return args - - -def load_json_logs(json_logs): - # load and convert json_logs to log_dict, key is epoch, value is a sub dict - # keys of sub dict is different metrics, e.g. 
memory, bbox_mAP - # value of sub dict is a list of corresponding values of all iterations - log_dicts = [dict() for _ in json_logs] - for json_log, log_dict in zip(json_logs, log_dicts): - with open(json_log, 'r') as log_file: - for line in log_file: - log = json.loads(line.strip()) - # skip lines without `epoch` field - if 'epoch' not in log: - continue - epoch = log.pop('epoch') - if epoch not in log_dict: - log_dict[epoch] = defaultdict(list) - for k, v in log.items(): - log_dict[epoch][k].append(v) - return log_dicts - - -def main(): - args = parse_args() - - json_logs = args.json_logs - for json_log in json_logs: - assert json_log.endswith('.json') - - log_dicts = load_json_logs(json_logs) - - eval(args.task)(log_dicts, args) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/benchmark.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/benchmark.py deleted file mode 100644 index 006b301d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/benchmark.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import time - -import torch -from mmcv import Config -from mmcv.parallel import MMDataParallel -from mmcv.runner import load_checkpoint, wrap_fp16_model - -from mmdet3d.datasets import build_dataloader, build_dataset -from mmdet3d.models import build_detector -from tools.misc.fuse_conv_bn import fuse_module - - -def parse_args(): - parser = argparse.ArgumentParser(description='MMDet benchmark a model') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--samples', default=2000, help='samples to benchmark') - parser.add_argument( - '--log-interval', default=50, help='interval of logging') - parser.add_argument( - '--fuse-conv-bn', - action='store_true', - help='Whether to fuse conv and bn, this will slightly increase' - 'the inference speed') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - cfg.model.pretrained = None - cfg.data.test.test_mode = True - - # build the dataloader - # TODO: support multiple images per gpu (only minor changes are needed) - dataset = build_dataset(cfg.data.test) - data_loader = build_dataloader( - dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=False, - shuffle=False) - - # build the model and load checkpoint - cfg.model.train_cfg = None - model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg')) - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - wrap_fp16_model(model) - load_checkpoint(model, args.checkpoint, map_location='cpu') - if args.fuse_conv_bn: - model = fuse_module(model) - - model = MMDataParallel(model, device_ids=[0]) - - model.eval() - - # the first several iterations may be very slow so skip them - num_warmup = 5 - pure_inf_time = 0 - - # benchmark with several samples and take the average - for i, data in enumerate(data_loader): - - torch.cuda.synchronize() - start_time = time.perf_counter() - - with torch.no_grad(): - model(return_loss=False, rescale=True, **data) - - torch.cuda.synchronize() - elapsed = time.perf_counter() - start_time - - if i >= num_warmup: 
- pure_inf_time += elapsed - if (i + 1) % args.log_interval == 0: - fps = (i + 1 - num_warmup) / pure_inf_time - print(f'Done image [{i + 1:<3}/ {args.samples}], ' - f'fps: {fps:.1f} img / s') - - if (i + 1) == args.samples: - pure_inf_time += elapsed - fps = (i + 1 - num_warmup) / pure_inf_time - print(f'Overall fps: {fps:.1f} img / s') - break - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/get_flops.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/get_flops.py deleted file mode 100644 index c4ae5d9b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/analysis_tools/get_flops.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import argparse - -import torch -from mmcv import Config, DictAction - -from mmdet3d.models import build_model - -try: - from mmcv.cnn import get_model_complexity_info -except ImportError: - raise ImportError('Please upgrade mmcv to >0.6.2') - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a detector') - parser.add_argument('config', help='train config file path') - parser.add_argument( - '--shape', - type=int, - nargs='+', - default=[40000, 4], - help='input point cloud size') - parser.add_argument( - '--modality', - type=str, - default='point', - choices=['point', 'image', 'multi'], - help='input data modality') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - args = parser.parse_args() - return args - - -def main(): - - args = parse_args() - - if args.modality == 'point': - assert len(args.shape) == 2, 'invalid input shape' - input_shape = tuple(args.shape) - elif args.modality == 'image': - if len(args.shape) == 1: - input_shape = (3, args.shape[0], args.shape[0]) - elif len(args.shape) == 2: - input_shape = (3, ) + tuple(args.shape) - else: - raise ValueError('invalid input shape') - elif args.modality == 'multi': - raise NotImplementedError( - 'FLOPs counter is currently not supported for models with ' - 'multi-modality input') - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - model = build_model( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - if torch.cuda.is_available(): - model.cuda() - model.eval() - - if hasattr(model, 'forward_dummy'): - model.forward = model.forward_dummy - else: - raise NotImplementedError( - 'FLOPs counter is currently not supported for {}'.format( - model.__class__.__name__)) - - flops, params = get_model_complexity_info(model, input_shape) - split_line = '=' * 30 - print(f'{split_line}\nInput shape: {input_shape}\n' - f'Flops: {flops}\nParams: {params}\n{split_line}') - print('!!!Please be cautious if you use the results in papers. 
' - 'You may need to check if all ops are supported and verify that the ' - 'flops computation is correct.') - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/create_data.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/create_data.py deleted file mode 100644 index e2bd6c38..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/create_data.py +++ /dev/null @@ -1,305 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -from os import path as osp - -from tools.data_converter import indoor_converter as indoor -from tools.data_converter import kitti_converter as kitti -from tools.data_converter import lyft_converter as lyft_converter -from tools.data_converter import nuscenes_converter as nuscenes_converter -from tools.data_converter.create_gt_database import ( - GTDatabaseCreater, create_groundtruth_database) - - -def kitti_data_prep(root_path, - info_prefix, - version, - out_dir, - with_plane=False): - """Prepare data related to Kitti dataset. - - Related data consists of '.pkl' files recording basic infos, - 2D annotations and groundtruth database. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - version (str): Dataset version. - out_dir (str): Output directory of the groundtruth database info. - with_plane (bool, optional): Whether to use plane information. - Default: False. - """ - kitti.create_kitti_info_file(root_path, info_prefix, with_plane) - kitti.create_reduced_point_cloud(root_path, info_prefix) - - info_train_path = osp.join(root_path, f'{info_prefix}_infos_train.pkl') - info_val_path = osp.join(root_path, f'{info_prefix}_infos_val.pkl') - info_trainval_path = osp.join(root_path, - f'{info_prefix}_infos_trainval.pkl') - info_test_path = osp.join(root_path, f'{info_prefix}_infos_test.pkl') - kitti.export_2d_annotation(root_path, info_train_path) - kitti.export_2d_annotation(root_path, info_val_path) - kitti.export_2d_annotation(root_path, info_trainval_path) - kitti.export_2d_annotation(root_path, info_test_path) - - create_groundtruth_database( - 'KittiDataset', - root_path, - info_prefix, - f'{out_dir}/{info_prefix}_infos_train.pkl', - relative_path=False, - mask_anno_path='instances_train.json', - with_mask=(version == 'mask')) - - -def nuscenes_data_prep(root_path, - info_prefix, - version, - dataset_name, - out_dir, - max_sweeps=10): - """Prepare data related to nuScenes dataset. - - Related data consists of '.pkl' files recording basic infos, - 2D annotations and groundtruth database. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - version (str): Dataset version. - dataset_name (str): The dataset class name. - out_dir (str): Output directory of the groundtruth database info. - max_sweeps (int, optional): Number of input consecutive frames. 
- Default: 10 - """ - nuscenes_converter.create_nuscenes_infos( - root_path, info_prefix, version=version, max_sweeps=max_sweeps) - - if version == 'v1.0-test': - info_test_path = osp.join(root_path, f'{info_prefix}_infos_test.pkl') - nuscenes_converter.export_2d_annotation( - root_path, info_test_path, version=version) - return - - info_train_path = osp.join(root_path, f'{info_prefix}_infos_train.pkl') - info_val_path = osp.join(root_path, f'{info_prefix}_infos_val.pkl') - nuscenes_converter.export_2d_annotation( - root_path, info_train_path, version=version) - nuscenes_converter.export_2d_annotation( - root_path, info_val_path, version=version) - create_groundtruth_database(dataset_name, root_path, info_prefix, - f'{out_dir}/{info_prefix}_infos_train.pkl') - - -def lyft_data_prep(root_path, info_prefix, version, max_sweeps=10): - """Prepare data related to Lyft dataset. - - Related data consists of '.pkl' files recording basic infos. - Although the ground truth database and 2D annotations are not used in - Lyft, it can also be generated like nuScenes. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - version (str): Dataset version. - max_sweeps (int, optional): Number of input consecutive frames. - Defaults to 10. - """ - lyft_converter.create_lyft_infos( - root_path, info_prefix, version=version, max_sweeps=max_sweeps) - - -def scannet_data_prep(root_path, info_prefix, out_dir, workers): - """Prepare the info file for scannet dataset. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - out_dir (str): Output directory of the generated info file. - workers (int): Number of threads to be used. - """ - indoor.create_indoor_info_file( - root_path, info_prefix, out_dir, workers=workers) - - -def s3dis_data_prep(root_path, info_prefix, out_dir, workers): - """Prepare the info file for s3dis dataset. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - out_dir (str): Output directory of the generated info file. - workers (int): Number of threads to be used. - """ - indoor.create_indoor_info_file( - root_path, info_prefix, out_dir, workers=workers) - - -def sunrgbd_data_prep(root_path, info_prefix, out_dir, workers): - """Prepare the info file for sunrgbd dataset. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - out_dir (str): Output directory of the generated info file. - workers (int): Number of threads to be used. - """ - indoor.create_indoor_info_file( - root_path, info_prefix, out_dir, workers=workers) - - -def waymo_data_prep(root_path, - info_prefix, - version, - out_dir, - workers, - max_sweeps=5): - """Prepare the info file for waymo dataset. - - Args: - root_path (str): Path of dataset root. - info_prefix (str): The prefix of info filenames. - out_dir (str): Output directory of the generated info file. - workers (int): Number of threads to be used. - max_sweeps (int, optional): Number of input consecutive frames. - Default: 5. Here we store pose information of these frames - for later use. 
- """ - from tools.data_converter import waymo_converter as waymo - - splits = ['training', 'validation', 'testing'] - for i, split in enumerate(splits): - load_dir = osp.join(root_path, 'waymo_format', split) - if split == 'validation': - save_dir = osp.join(out_dir, 'kitti_format', 'training') - else: - save_dir = osp.join(out_dir, 'kitti_format', split) - converter = waymo.Waymo2KITTI( - load_dir, - save_dir, - prefix=str(i), - workers=workers, - test_mode=(split == 'testing')) - converter.convert() - # Generate waymo infos - out_dir = osp.join(out_dir, 'kitti_format') - kitti.create_waymo_info_file( - out_dir, info_prefix, max_sweeps=max_sweeps, workers=workers) - GTDatabaseCreater( - 'WaymoDataset', - out_dir, - info_prefix, - f'{out_dir}/{info_prefix}_infos_train.pkl', - relative_path=False, - with_mask=False, - num_worker=workers).create() - - -parser = argparse.ArgumentParser(description='Data converter arg parser') -parser.add_argument('dataset', metavar='kitti', help='name of the dataset') -parser.add_argument( - '--root-path', - type=str, - default='./data/kitti', - help='specify the root path of dataset') -parser.add_argument( - '--version', - type=str, - default='v1.0', - required=False, - help='specify the dataset version, no need for kitti') -parser.add_argument( - '--max-sweeps', - type=int, - default=10, - required=False, - help='specify sweeps of lidar per example') -parser.add_argument( - '--with-plane', - action='store_true', - help='Whether to use plane information for kitti.') -parser.add_argument( - '--out-dir', - type=str, - default='./data/kitti', - required=False, - help='name of info pkl') -parser.add_argument('--extra-tag', type=str, default='kitti') -parser.add_argument( - '--workers', type=int, default=4, help='number of threads to be used') -args = parser.parse_args() - -if __name__ == '__main__': - if args.dataset == 'kitti': - kitti_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=args.version, - out_dir=args.out_dir, - with_plane=args.with_plane) - elif args.dataset == 'nuscenes' and args.version != 'v1.0-mini': - train_version = f'{args.version}-trainval' - nuscenes_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=train_version, - dataset_name='NuScenesDataset', - out_dir=args.out_dir, - max_sweeps=args.max_sweeps) - test_version = f'{args.version}-test' - nuscenes_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=test_version, - dataset_name='NuScenesDataset', - out_dir=args.out_dir, - max_sweeps=args.max_sweeps) - elif args.dataset == 'nuscenes' and args.version == 'v1.0-mini': - train_version = f'{args.version}' - nuscenes_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=train_version, - dataset_name='NuScenesDataset', - out_dir=args.out_dir, - max_sweeps=args.max_sweeps) - elif args.dataset == 'lyft': - train_version = f'{args.version}-train' - lyft_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=train_version, - max_sweeps=args.max_sweeps) - test_version = f'{args.version}-test' - lyft_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=test_version, - max_sweeps=args.max_sweeps) - elif args.dataset == 'waymo': - waymo_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - version=args.version, - out_dir=args.out_dir, - workers=args.workers, - max_sweeps=args.max_sweeps) - elif args.dataset == 'scannet': - scannet_data_prep( - root_path=args.root_path, - 
info_prefix=args.extra_tag, - out_dir=args.out_dir, - workers=args.workers) - elif args.dataset == 's3dis': - s3dis_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - out_dir=args.out_dir, - workers=args.workers) - elif args.dataset == 'sunrgbd': - sunrgbd_data_prep( - root_path=args.root_path, - info_prefix=args.extra_tag, - out_dir=args.out_dir, - workers=args.workers) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/create_data.sh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/create_data.sh deleted file mode 100755 index 9a57852f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/create_data.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash - -set -x -export PYTHONPATH=`pwd`:$PYTHONPATH - -PARTITION=$1 -JOB_NAME=$2 -DATASET=$3 -GPUS=${GPUS:-1} -GPUS_PER_NODE=${GPUS_PER_NODE:-1} -SRUN_ARGS=${SRUN_ARGS:-""} -JOB_NAME=create_data - -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --kill-on-bad-exit=1 \ - ${SRUN_ARGS} \ - python -u tools/create_data.py ${DATASET} \ - --root-path ./data/${DATASET} \ - --out-dir ./data/${DATASET} \ - --extra-tag ${DATASET} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/__init__.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/__init__.py deleted file mode 100644 index ef101fec..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/create_gt_database.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/create_gt_database.py deleted file mode 100644 index 210f0e88..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/create_gt_database.py +++ /dev/null @@ -1,624 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import pickle -from os import path as osp - -import mmcv -import numpy as np -from mmcv import track_iter_progress -from mmcv.ops import roi_align -from pycocotools import mask as maskUtils -from pycocotools.coco import COCO - -from mmdet3d.core.bbox import box_np_ops as box_np_ops -from mmdet3d.datasets import build_dataset -from mmdet.core.evaluation.bbox_overlaps import bbox_overlaps - - -def _poly2mask(mask_ann, img_h, img_w): - if isinstance(mask_ann, list): - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = maskUtils.frPyObjects(mask_ann, img_h, img_w) - rle = maskUtils.merge(rles) - elif isinstance(mask_ann['counts'], list): - # uncompressed RLE - rle = maskUtils.frPyObjects(mask_ann, img_h, img_w) - else: - # rle - rle = mask_ann - mask = maskUtils.decode(rle) - return mask - - -def _parse_coco_ann_info(ann_info): - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - if ann['area'] <= 0: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_masks_ann.append(ann['segmentation']) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - ann = dict( - bboxes=gt_bboxes, bboxes_ignore=gt_bboxes_ignore, masks=gt_masks_ann) - - return ann - - -def crop_image_patch_v2(pos_proposals, pos_assigned_gt_inds, gt_masks): - import torch - from torch.nn.modules.utils import _pair - device = pos_proposals.device - num_pos = pos_proposals.size(0) - fake_inds = ( - torch.arange(num_pos, - device=device).to(dtype=pos_proposals.dtype)[:, None]) - rois = torch.cat([fake_inds, pos_proposals], dim=1) # Nx5 - mask_size = _pair(28) - rois = rois.to(device=device) - gt_masks_th = ( - torch.from_numpy(gt_masks).to(device).index_select( - 0, pos_assigned_gt_inds).to(dtype=rois.dtype)) - # Use RoIAlign could apparently accelerate the training (~0.1s/iter) - targets = ( - roi_align(gt_masks_th, rois, mask_size[::-1], 1.0, 0, True).squeeze(1)) - return targets - - -def crop_image_patch(pos_proposals, gt_masks, pos_assigned_gt_inds, org_img): - num_pos = pos_proposals.shape[0] - masks = [] - img_patches = [] - for i in range(num_pos): - gt_mask = gt_masks[pos_assigned_gt_inds[i]] - bbox = pos_proposals[i, :].astype(np.int32) - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1 + 1, 1) - h = np.maximum(y2 - y1 + 1, 1) - - mask_patch = gt_mask[y1:y1 + h, x1:x1 + w] - masked_img = gt_mask[..., None] * org_img - img_patch = masked_img[y1:y1 + h, x1:x1 + w] - - img_patches.append(img_patch) - masks.append(mask_patch) - return img_patches, masks - - -def create_groundtruth_database(dataset_class_name, - data_path, - info_prefix, - info_path=None, - mask_anno_path=None, - used_classes=None, - database_save_path=None, - db_info_save_path=None, - relative_path=True, - add_rgb=False, - lidar_only=False, - bev_only=False, - coors_range=None, - with_mask=False): - """Given the raw data, generate the ground truth database. - - Args: - dataset_class_name (str): Name of the input dataset. - data_path (str): Path of the data. 
- info_prefix (str): Prefix of the info file. - info_path (str, optional): Path of the info file. - Default: None. - mask_anno_path (str, optional): Path of the mask_anno. - Default: None. - used_classes (list[str], optional): Classes have been used. - Default: None. - database_save_path (str, optional): Path to save database. - Default: None. - db_info_save_path (str, optional): Path to save db_info. - Default: None. - relative_path (bool, optional): Whether to use relative path. - Default: True. - with_mask (bool, optional): Whether to use mask. - Default: False. - """ - print(f'Create GT Database of {dataset_class_name}') - dataset_cfg = dict( - type=dataset_class_name, data_root=data_path, ann_file=info_path) - if dataset_class_name == 'KittiDataset': - file_client_args = dict(backend='disk') - dataset_cfg.update( - test_mode=False, - split='training', - modality=dict( - use_lidar=True, - use_depth=False, - use_lidar_intensity=True, - use_camera=with_mask, - ), - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args) - ]) - - elif dataset_class_name == 'NuScenesDataset': - dataset_cfg.update( - use_valid_flag=True, - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - use_dim=[0, 1, 2, 3, 4], - pad_empty_sweeps=True, - remove_close=True), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True) - ]) - - elif dataset_class_name == 'WaymoDataset': - file_client_args = dict(backend='disk') - dataset_cfg.update( - test_mode=False, - split='training', - modality=dict( - use_lidar=True, - use_depth=False, - use_lidar_intensity=True, - use_camera=False, - ), - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=6, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args) - ]) - - dataset = build_dataset(dataset_cfg) - - if database_save_path is None: - database_save_path = osp.join(data_path, f'{info_prefix}_gt_database') - if db_info_save_path is None: - db_info_save_path = osp.join(data_path, - f'{info_prefix}_dbinfos_train.pkl') - mmcv.mkdir_or_exist(database_save_path) - all_db_infos = dict() - if with_mask: - coco = COCO(osp.join(data_path, mask_anno_path)) - imgIds = coco.getImgIds() - file2id = dict() - for i in imgIds: - info = coco.loadImgs([i])[0] - file2id.update({info['file_name']: i}) - - group_counter = 0 - for j in track_iter_progress(list(range(len(dataset)))): - input_dict = dataset.get_data_info(j) - dataset.pre_pipeline(input_dict) - example = dataset.pipeline(input_dict) - annos = example['ann_info'] - image_idx = example['sample_idx'] - points = example['points'].tensor.numpy() - gt_boxes_3d = annos['gt_bboxes_3d'].tensor.numpy() - names = annos['gt_names'] - group_dict = dict() - if 'group_ids' in annos: - group_ids = annos['group_ids'] - else: - group_ids = np.arange(gt_boxes_3d.shape[0], dtype=np.int64) - difficulty = np.zeros(gt_boxes_3d.shape[0], dtype=np.int32) - if 'difficulty' in annos: - difficulty = annos['difficulty'] - - num_obj = gt_boxes_3d.shape[0] - point_indices = box_np_ops.points_in_rbbox(points, gt_boxes_3d) - - if with_mask: - # prepare masks - gt_boxes = annos['gt_bboxes'] - img_path = 
osp.split(example['img_info']['filename'])[-1] - if img_path not in file2id.keys(): - print(f'skip image {img_path} for empty mask') - continue - img_id = file2id[img_path] - kins_annIds = coco.getAnnIds(imgIds=img_id) - kins_raw_info = coco.loadAnns(kins_annIds) - kins_ann_info = _parse_coco_ann_info(kins_raw_info) - h, w = annos['img_shape'][:2] - gt_masks = [ - _poly2mask(mask, h, w) for mask in kins_ann_info['masks'] - ] - # get mask inds based on iou mapping - bbox_iou = bbox_overlaps(kins_ann_info['bboxes'], gt_boxes) - mask_inds = bbox_iou.argmax(axis=0) - valid_inds = (bbox_iou.max(axis=0) > 0.5) - - # mask the image - # use more precise crop when it is ready - # object_img_patches = np.ascontiguousarray( - # np.stack(object_img_patches, axis=0).transpose(0, 3, 1, 2)) - # crop image patches using roi_align - # object_img_patches = crop_image_patch_v2( - # torch.Tensor(gt_boxes), - # torch.Tensor(mask_inds).long(), object_img_patches) - object_img_patches, object_masks = crop_image_patch( - gt_boxes, gt_masks, mask_inds, annos['img']) - - for i in range(num_obj): - filename = f'{image_idx}_{names[i]}_{i}.bin' - abs_filepath = osp.join(database_save_path, filename) - rel_filepath = osp.join(f'{info_prefix}_gt_database', filename) - - # save point clouds and image patches for each object - gt_points = points[point_indices[:, i]] - gt_points[:, :3] -= gt_boxes_3d[i, :3] - - if with_mask: - if object_masks[i].sum() == 0 or not valid_inds[i]: - # Skip object for empty or invalid mask - continue - img_patch_path = abs_filepath + '.png' - mask_patch_path = abs_filepath + '.mask.png' - mmcv.imwrite(object_img_patches[i], img_patch_path) - mmcv.imwrite(object_masks[i], mask_patch_path) - - with open(abs_filepath, 'w') as f: - gt_points.tofile(f) - - if (used_classes is None) or names[i] in used_classes: - db_info = { - 'name': names[i], - 'path': rel_filepath, - 'image_idx': image_idx, - 'gt_idx': i, - 'box3d_lidar': gt_boxes_3d[i], - 'num_points_in_gt': gt_points.shape[0], - 'difficulty': difficulty[i], - } - local_group_id = group_ids[i] - # if local_group_id >= 0: - if local_group_id not in group_dict: - group_dict[local_group_id] = group_counter - group_counter += 1 - db_info['group_id'] = group_dict[local_group_id] - if 'score' in annos: - db_info['score'] = annos['score'][i] - if with_mask: - db_info.update({'box2d_camera': gt_boxes[i]}) - if names[i] in all_db_infos: - all_db_infos[names[i]].append(db_info) - else: - all_db_infos[names[i]] = [db_info] - - for k, v in all_db_infos.items(): - print(f'load {len(v)} {k} database infos') - - with open(db_info_save_path, 'wb') as f: - pickle.dump(all_db_infos, f) - - -class GTDatabaseCreater: - """Given the raw data, generate the ground truth database. This is the - parallel version. For serialized version, please refer to - `create_groundtruth_database` - - Args: - dataset_class_name (str): Name of the input dataset. - data_path (str): Path of the data. - info_prefix (str): Prefix of the info file. - info_path (str, optional): Path of the info file. - Default: None. - mask_anno_path (str, optional): Path of the mask_anno. - Default: None. - used_classes (list[str], optional): Classes have been used. - Default: None. - database_save_path (str, optional): Path to save database. - Default: None. - db_info_save_path (str, optional): Path to save db_info. - Default: None. - relative_path (bool, optional): Whether to use relative path. - Default: True. - with_mask (bool, optional): Whether to use mask. - Default: False. 
- num_worker (int, optional): the number of parallel workers to use. - Default: 8. - """ - - def __init__(self, - dataset_class_name, - data_path, - info_prefix, - info_path=None, - mask_anno_path=None, - used_classes=None, - database_save_path=None, - db_info_save_path=None, - relative_path=True, - add_rgb=False, - lidar_only=False, - bev_only=False, - coors_range=None, - with_mask=False, - num_worker=8) -> None: - self.dataset_class_name = dataset_class_name - self.data_path = data_path - self.info_prefix = info_prefix - self.info_path = info_path - self.mask_anno_path = mask_anno_path - self.used_classes = used_classes - self.database_save_path = database_save_path - self.db_info_save_path = db_info_save_path - self.relative_path = relative_path - self.add_rgb = add_rgb - self.lidar_only = lidar_only - self.bev_only = bev_only - self.coors_range = coors_range - self.with_mask = with_mask - self.num_worker = num_worker - self.pipeline = None - - def create_single(self, input_dict): - group_counter = 0 - single_db_infos = dict() - example = self.pipeline(input_dict) - annos = example['ann_info'] - image_idx = example['sample_idx'] - points = example['points'].tensor.numpy() - gt_boxes_3d = annos['gt_bboxes_3d'].tensor.numpy() - names = annos['gt_names'] - group_dict = dict() - if 'group_ids' in annos: - group_ids = annos['group_ids'] - else: - group_ids = np.arange(gt_boxes_3d.shape[0], dtype=np.int64) - difficulty = np.zeros(gt_boxes_3d.shape[0], dtype=np.int32) - if 'difficulty' in annos: - difficulty = annos['difficulty'] - - num_obj = gt_boxes_3d.shape[0] - point_indices = box_np_ops.points_in_rbbox(points, gt_boxes_3d) - - if self.with_mask: - # prepare masks - gt_boxes = annos['gt_bboxes'] - img_path = osp.split(example['img_info']['filename'])[-1] - if img_path not in self.file2id.keys(): - print(f'skip image {img_path} for empty mask') - return single_db_infos - img_id = self.file2id[img_path] - kins_annIds = self.coco.getAnnIds(imgIds=img_id) - kins_raw_info = self.coco.loadAnns(kins_annIds) - kins_ann_info = _parse_coco_ann_info(kins_raw_info) - h, w = annos['img_shape'][:2] - gt_masks = [ - _poly2mask(mask, h, w) for mask in kins_ann_info['masks'] - ] - # get mask inds based on iou mapping - bbox_iou = bbox_overlaps(kins_ann_info['bboxes'], gt_boxes) - mask_inds = bbox_iou.argmax(axis=0) - valid_inds = (bbox_iou.max(axis=0) > 0.5) - - # mask the image - # use more precise crop when it is ready - # object_img_patches = np.ascontiguousarray( - # np.stack(object_img_patches, axis=0).transpose(0, 3, 1, 2)) - # crop image patches using roi_align - # object_img_patches = crop_image_patch_v2( - # torch.Tensor(gt_boxes), - # torch.Tensor(mask_inds).long(), object_img_patches) - object_img_patches, object_masks = crop_image_patch( - gt_boxes, gt_masks, mask_inds, annos['img']) - - for i in range(num_obj): - filename = f'{image_idx}_{names[i]}_{i}.bin' - abs_filepath = osp.join(self.database_save_path, filename) - rel_filepath = osp.join(f'{self.info_prefix}_gt_database', - filename) - - # save point clouds and image patches for each object - gt_points = points[point_indices[:, i]] - gt_points[:, :3] -= gt_boxes_3d[i, :3] - - if self.with_mask: - if object_masks[i].sum() == 0 or not valid_inds[i]: - # Skip object for empty or invalid mask - continue - img_patch_path = abs_filepath + '.png' - mask_patch_path = abs_filepath + '.mask.png' - mmcv.imwrite(object_img_patches[i], img_patch_path) - mmcv.imwrite(object_masks[i], mask_patch_path) - - with open(abs_filepath, 'w') as f: - 
gt_points.tofile(f) - - if (self.used_classes is None) or names[i] in self.used_classes: - db_info = { - 'name': names[i], - 'path': rel_filepath, - 'image_idx': image_idx, - 'gt_idx': i, - 'box3d_lidar': gt_boxes_3d[i], - 'num_points_in_gt': gt_points.shape[0], - 'difficulty': difficulty[i], - } - local_group_id = group_ids[i] - # if local_group_id >= 0: - if local_group_id not in group_dict: - group_dict[local_group_id] = group_counter - group_counter += 1 - db_info['group_id'] = group_dict[local_group_id] - if 'score' in annos: - db_info['score'] = annos['score'][i] - if self.with_mask: - db_info.update({'box2d_camera': gt_boxes[i]}) - if names[i] in single_db_infos: - single_db_infos[names[i]].append(db_info) - else: - single_db_infos[names[i]] = [db_info] - - return single_db_infos - - def create(self): - print(f'Create GT Database of {self.dataset_class_name}') - dataset_cfg = dict( - type=self.dataset_class_name, - data_root=self.data_path, - ann_file=self.info_path) - if self.dataset_class_name == 'KittiDataset': - file_client_args = dict(backend='disk') - dataset_cfg.update( - test_mode=False, - split='training', - modality=dict( - use_lidar=True, - use_depth=False, - use_lidar_intensity=True, - use_camera=self.with_mask, - ), - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=4, - use_dim=4, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args) - ]) - - elif self.dataset_class_name == 'NuScenesDataset': - dataset_cfg.update( - use_valid_flag=True, - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=5, - use_dim=5), - dict( - type='LoadPointsFromMultiSweeps', - sweeps_num=10, - use_dim=[0, 1, 2, 3, 4], - pad_empty_sweeps=True, - remove_close=True), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True) - ]) - - elif self.dataset_class_name == 'WaymoDataset': - file_client_args = dict(backend='disk') - dataset_cfg.update( - test_mode=False, - split='training', - modality=dict( - use_lidar=True, - use_depth=False, - use_lidar_intensity=True, - use_camera=False, - ), - pipeline=[ - dict( - type='LoadPointsFromFile', - coord_type='LIDAR', - load_dim=6, - use_dim=6, - file_client_args=file_client_args), - dict( - type='LoadAnnotations3D', - with_bbox_3d=True, - with_label_3d=True, - file_client_args=file_client_args) - ]) - - dataset = build_dataset(dataset_cfg) - self.pipeline = dataset.pipeline - if self.database_save_path is None: - self.database_save_path = osp.join( - self.data_path, f'{self.info_prefix}_gt_database') - if self.db_info_save_path is None: - self.db_info_save_path = osp.join( - self.data_path, f'{self.info_prefix}_dbinfos_train.pkl') - mmcv.mkdir_or_exist(self.database_save_path) - if self.with_mask: - self.coco = COCO(osp.join(self.data_path, self.mask_anno_path)) - imgIds = self.coco.getImgIds() - self.file2id = dict() - for i in imgIds: - info = self.coco.loadImgs([i])[0] - self.file2id.update({info['file_name']: i}) - - def loop_dataset(i): - input_dict = dataset.get_data_info(i) - dataset.pre_pipeline(input_dict) - return input_dict - - multi_db_infos = mmcv.track_parallel_progress( - self.create_single, ((loop_dataset(i) - for i in range(len(dataset))), len(dataset)), - self.num_worker) - print('Make global unique group id') - group_counter_offset = 0 - all_db_infos = dict() - for single_db_infos in track_iter_progress(multi_db_infos): - group_id = -1 - for name, name_db_infos 
in single_db_infos.items(): - for db_info in name_db_infos: - group_id = max(group_id, db_info['group_id']) - db_info['group_id'] += group_counter_offset - if name not in all_db_infos: - all_db_infos[name] = [] - all_db_infos[name].extend(name_db_infos) - group_counter_offset += (group_id + 1) - - for k, v in all_db_infos.items(): - print(f'load {len(v)} {k} database infos') - - with open(self.db_info_save_path, 'wb') as f: - pickle.dump(all_db_infos, f) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/indoor_converter.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/indoor_converter.py deleted file mode 100644 index d3be3676..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/indoor_converter.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os - -import mmcv -import numpy as np - -from tools.data_converter.s3dis_data_utils import S3DISData, S3DISSegData -from tools.data_converter.scannet_data_utils import ScanNetData, ScanNetSegData -from tools.data_converter.sunrgbd_data_utils import SUNRGBDData - - -def create_indoor_info_file(data_path, - pkl_prefix='sunrgbd', - save_path=None, - use_v1=False, - workers=4): - """Create indoor information file. - - Get information of the raw data and save it to the pkl file. - - Args: - data_path (str): Path of the data. - pkl_prefix (str, optional): Prefix of the pkl to be saved. - Default: 'sunrgbd'. - save_path (str, optional): Path of the pkl to be saved. Default: None. - use_v1 (bool, optional): Whether to use v1. Default: False. - workers (int, optional): Number of threads to be used. Default: 4. - """ - assert os.path.exists(data_path) - assert pkl_prefix in ['sunrgbd', 'scannet', 's3dis'], \ - f'unsupported indoor dataset {pkl_prefix}' - save_path = data_path if save_path is None else save_path - assert os.path.exists(save_path) - - # generate infos for both detection and segmentation task - if pkl_prefix in ['sunrgbd', 'scannet']: - train_filename = os.path.join(save_path, - f'{pkl_prefix}_infos_train.pkl') - val_filename = os.path.join(save_path, f'{pkl_prefix}_infos_val.pkl') - if pkl_prefix == 'sunrgbd': - # SUN RGB-D has a train-val split - train_dataset = SUNRGBDData( - root_path=data_path, split='train', use_v1=use_v1) - val_dataset = SUNRGBDData( - root_path=data_path, split='val', use_v1=use_v1) - else: - # ScanNet has a train-val-test split - train_dataset = ScanNetData(root_path=data_path, split='train') - val_dataset = ScanNetData(root_path=data_path, split='val') - test_dataset = ScanNetData(root_path=data_path, split='test') - test_filename = os.path.join(save_path, - f'{pkl_prefix}_infos_test.pkl') - - infos_train = train_dataset.get_infos( - num_workers=workers, has_label=True) - mmcv.dump(infos_train, train_filename, 'pkl') - print(f'{pkl_prefix} info train file is saved to {train_filename}') - - infos_val = val_dataset.get_infos(num_workers=workers, has_label=True) - mmcv.dump(infos_val, val_filename, 'pkl') - print(f'{pkl_prefix} info val file is saved to {val_filename}') - - if pkl_prefix == 'scannet': - infos_test = test_dataset.get_infos( - num_workers=workers, has_label=False) - mmcv.dump(infos_test, test_filename, 'pkl') - print(f'{pkl_prefix} info test file is saved to {test_filename}') - - # generate infos for the semantic segmentation task - # e.g. 
re-sampled scene indexes and label weights - # scene indexes are used to re-sample rooms with different number of points - # label weights are used to balance classes with different number of points - if pkl_prefix == 'scannet': - # label weight computation function is adopted from - # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24 - train_dataset = ScanNetSegData( - data_root=data_path, - ann_file=train_filename, - split='train', - num_points=8192, - label_weight_func=lambda x: 1.0 / np.log(1.2 + x)) - # TODO: do we need to generate on val set? - val_dataset = ScanNetSegData( - data_root=data_path, - ann_file=val_filename, - split='val', - num_points=8192, - label_weight_func=lambda x: 1.0 / np.log(1.2 + x)) - # no need to generate for test set - train_dataset.get_seg_infos() - val_dataset.get_seg_infos() - elif pkl_prefix == 's3dis': - # S3DIS doesn't have a fixed train-val split - # it has 6 areas instead, so we generate info file for each of them - # in training, we will use dataset to wrap different areas - splits = [f'Area_{i}' for i in [1, 2, 3, 4, 5, 6]] - for split in splits: - dataset = S3DISData(root_path=data_path, split=split) - info = dataset.get_infos(num_workers=workers, has_label=True) - filename = os.path.join(save_path, - f'{pkl_prefix}_infos_{split}.pkl') - mmcv.dump(info, filename, 'pkl') - print(f'{pkl_prefix} info {split} file is saved to {filename}') - seg_dataset = S3DISSegData( - data_root=data_path, - ann_file=filename, - split=split, - num_points=4096, - label_weight_func=lambda x: 1.0 / np.log(1.2 + x)) - seg_dataset.get_seg_infos() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/kitti_converter.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/kitti_converter.py deleted file mode 100644 index 2db461d4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/kitti_converter.py +++ /dev/null @@ -1,624 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections import OrderedDict -from pathlib import Path - -import mmcv -import numpy as np -from nuscenes.utils.geometry_utils import view_points - -from mmdet3d.core.bbox import box_np_ops, points_cam2img -from .kitti_data_utils import WaymoInfoGatherer, get_kitti_image_info -from .nuscenes_converter import post_process_coords - -kitti_categories = ('Pedestrian', 'Cyclist', 'Car') - - -def convert_to_kitti_info_version2(info): - """convert kitti info v1 to v2 if possible. - - Args: - info (dict): Info of the input kitti data. - - image (dict): image info - - calib (dict): calibration info - - point_cloud (dict): point cloud info - """ - if 'image' not in info or 'calib' not in info or 'point_cloud' not in info: - info['image'] = { - 'image_shape': info['img_shape'], - 'image_idx': info['image_idx'], - 'image_path': info['img_path'], - } - info['calib'] = { - 'R0_rect': info['calib/R0_rect'], - 'Tr_velo_to_cam': info['calib/Tr_velo_to_cam'], - 'P2': info['calib/P2'], - } - info['point_cloud'] = { - 'velodyne_path': info['velodyne_path'], - } - - -def _read_imageset_file(path): - with open(path, 'r') as f: - lines = f.readlines() - return [int(line) for line in lines] - - -class _NumPointsInGTCalculater: - """Calculate the number of points inside the ground truth box. This is the - parallel version. For the serialized version, please refer to - `_calculate_num_points_in_gt`. - - Args: - data_path (str): Path of the data. - relative_path (bool): Whether to use relative path. 
- remove_outside (bool, optional): Whether to remove points which are - outside of image. Default: True. - num_features (int, optional): Number of features per point. - Default: False. - num_worker (int, optional): the number of parallel workers to use. - Default: 8. - """ - - def __init__(self, - data_path, - relative_path, - remove_outside=True, - num_features=4, - num_worker=8) -> None: - self.data_path = data_path - self.relative_path = relative_path - self.remove_outside = remove_outside - self.num_features = num_features - self.num_worker = num_worker - - def calculate_single(self, info): - pc_info = info['point_cloud'] - image_info = info['image'] - calib = info['calib'] - if self.relative_path: - v_path = str(Path(self.data_path) / pc_info['velodyne_path']) - else: - v_path = pc_info['velodyne_path'] - points_v = np.fromfile( - v_path, dtype=np.float32, - count=-1).reshape([-1, self.num_features]) - rect = calib['R0_rect'] - Trv2c = calib['Tr_velo_to_cam'] - P2 = calib['P2'] - if self.remove_outside: - points_v = box_np_ops.remove_outside_points( - points_v, rect, Trv2c, P2, image_info['image_shape']) - annos = info['annos'] - num_obj = len([n for n in annos['name'] if n != 'DontCare']) - dims = annos['dimensions'][:num_obj] - loc = annos['location'][:num_obj] - rots = annos['rotation_y'][:num_obj] - gt_boxes_camera = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - gt_boxes_lidar = box_np_ops.box_camera_to_lidar( - gt_boxes_camera, rect, Trv2c) - indices = box_np_ops.points_in_rbbox(points_v[:, :3], gt_boxes_lidar) - num_points_in_gt = indices.sum(0) - num_ignored = len(annos['dimensions']) - num_obj - num_points_in_gt = np.concatenate( - [num_points_in_gt, -np.ones([num_ignored])]) - annos['num_points_in_gt'] = num_points_in_gt.astype(np.int32) - return info - - def calculate(self, infos): - ret_infos = mmcv.track_parallel_progress(self.calculate_single, infos, - self.num_worker) - for i, ret_info in enumerate(ret_infos): - infos[i] = ret_info - - -def _calculate_num_points_in_gt(data_path, - infos, - relative_path, - remove_outside=True, - num_features=4): - for info in mmcv.track_iter_progress(infos): - pc_info = info['point_cloud'] - image_info = info['image'] - calib = info['calib'] - if relative_path: - v_path = str(Path(data_path) / pc_info['velodyne_path']) - else: - v_path = pc_info['velodyne_path'] - points_v = np.fromfile( - v_path, dtype=np.float32, count=-1).reshape([-1, num_features]) - rect = calib['R0_rect'] - Trv2c = calib['Tr_velo_to_cam'] - P2 = calib['P2'] - if remove_outside: - points_v = box_np_ops.remove_outside_points( - points_v, rect, Trv2c, P2, image_info['image_shape']) - - # points_v = points_v[points_v[:, 0] > 0] - annos = info['annos'] - num_obj = len([n for n in annos['name'] if n != 'DontCare']) - # annos = kitti.filter_kitti_anno(annos, ['DontCare']) - dims = annos['dimensions'][:num_obj] - loc = annos['location'][:num_obj] - rots = annos['rotation_y'][:num_obj] - gt_boxes_camera = np.concatenate([loc, dims, rots[..., np.newaxis]], - axis=1) - gt_boxes_lidar = box_np_ops.box_camera_to_lidar( - gt_boxes_camera, rect, Trv2c) - indices = box_np_ops.points_in_rbbox(points_v[:, :3], gt_boxes_lidar) - num_points_in_gt = indices.sum(0) - num_ignored = len(annos['dimensions']) - num_obj - num_points_in_gt = np.concatenate( - [num_points_in_gt, -np.ones([num_ignored])]) - annos['num_points_in_gt'] = num_points_in_gt.astype(np.int32) - - -def create_kitti_info_file(data_path, - pkl_prefix='kitti', - with_plane=False, - save_path=None, - 
relative_path=True): - """Create info file of KITTI dataset. - - Given the raw data, generate its related info file in pkl format. - - Args: - data_path (str): Path of the data root. - pkl_prefix (str, optional): Prefix of the info file to be generated. - Default: 'kitti'. - with_plane (bool, optional): Whether to use plane information. - Default: False. - save_path (str, optional): Path to save the info file. - Default: None. - relative_path (bool, optional): Whether to use relative path. - Default: True. - """ - imageset_folder = Path(data_path) / 'ImageSets' - train_img_ids = _read_imageset_file(str(imageset_folder / 'train.txt')) - val_img_ids = _read_imageset_file(str(imageset_folder / 'val.txt')) - test_img_ids = _read_imageset_file(str(imageset_folder / 'test.txt')) - - print('Generate info. this may take several minutes.') - if save_path is None: - save_path = Path(data_path) - else: - save_path = Path(save_path) - kitti_infos_train = get_kitti_image_info( - data_path, - training=True, - velodyne=True, - calib=True, - with_plane=with_plane, - image_ids=train_img_ids, - relative_path=relative_path) - _calculate_num_points_in_gt(data_path, kitti_infos_train, relative_path) - filename = save_path / f'{pkl_prefix}_infos_train.pkl' - print(f'Kitti info train file is saved to {filename}') - mmcv.dump(kitti_infos_train, filename) - kitti_infos_val = get_kitti_image_info( - data_path, - training=True, - velodyne=True, - calib=True, - with_plane=with_plane, - image_ids=val_img_ids, - relative_path=relative_path) - _calculate_num_points_in_gt(data_path, kitti_infos_val, relative_path) - filename = save_path / f'{pkl_prefix}_infos_val.pkl' - print(f'Kitti info val file is saved to {filename}') - mmcv.dump(kitti_infos_val, filename) - filename = save_path / f'{pkl_prefix}_infos_trainval.pkl' - print(f'Kitti info trainval file is saved to {filename}') - mmcv.dump(kitti_infos_train + kitti_infos_val, filename) - - kitti_infos_test = get_kitti_image_info( - data_path, - training=False, - label_info=False, - velodyne=True, - calib=True, - with_plane=False, - image_ids=test_img_ids, - relative_path=relative_path) - filename = save_path / f'{pkl_prefix}_infos_test.pkl' - print(f'Kitti info test file is saved to {filename}') - mmcv.dump(kitti_infos_test, filename) - - -def create_waymo_info_file(data_path, - pkl_prefix='waymo', - save_path=None, - relative_path=True, - max_sweeps=5, - workers=8): - """Create info file of waymo dataset. - - Given the raw data, generate its related info file in pkl format. - - Args: - data_path (str): Path of the data root. - pkl_prefix (str, optional): Prefix of the info file to be generated. - Default: 'waymo'. - save_path (str, optional): Path to save the info file. - Default: None. - relative_path (bool, optional): Whether to use relative path. - Default: True. - max_sweeps (int, optional): Max sweeps before the detection frame - to be used. Default: 5. - """ - imageset_folder = Path(data_path) / 'ImageSets' - train_img_ids = _read_imageset_file(str(imageset_folder / 'train.txt')) - val_img_ids = _read_imageset_file(str(imageset_folder / 'val.txt')) - test_img_ids = _read_imageset_file(str(imageset_folder / 'test.txt')) - - print('Generate info. 
this may take several minutes.') - if save_path is None: - save_path = Path(data_path) - else: - save_path = Path(save_path) - waymo_infos_gatherer_trainval = WaymoInfoGatherer( - data_path, - training=True, - velodyne=True, - calib=True, - pose=True, - relative_path=relative_path, - max_sweeps=max_sweeps, - num_worker=workers) - waymo_infos_gatherer_test = WaymoInfoGatherer( - data_path, - training=False, - label_info=False, - velodyne=True, - calib=True, - pose=True, - relative_path=relative_path, - max_sweeps=max_sweeps, - num_worker=workers) - num_points_in_gt_calculater = _NumPointsInGTCalculater( - data_path, - relative_path, - num_features=6, - remove_outside=False, - num_worker=workers) - - waymo_infos_train = waymo_infos_gatherer_trainval.gather(train_img_ids) - num_points_in_gt_calculater.calculate(waymo_infos_train) - filename = save_path / f'{pkl_prefix}_infos_train.pkl' - print(f'Waymo info train file is saved to {filename}') - mmcv.dump(waymo_infos_train, filename) - waymo_infos_val = waymo_infos_gatherer_trainval.gather(val_img_ids) - num_points_in_gt_calculater.calculate(waymo_infos_val) - filename = save_path / f'{pkl_prefix}_infos_val.pkl' - print(f'Waymo info val file is saved to {filename}') - mmcv.dump(waymo_infos_val, filename) - filename = save_path / f'{pkl_prefix}_infos_trainval.pkl' - print(f'Waymo info trainval file is saved to {filename}') - mmcv.dump(waymo_infos_train + waymo_infos_val, filename) - waymo_infos_test = waymo_infos_gatherer_test.gather(test_img_ids) - filename = save_path / f'{pkl_prefix}_infos_test.pkl' - print(f'Waymo info test file is saved to {filename}') - mmcv.dump(waymo_infos_test, filename) - - -def _create_reduced_point_cloud(data_path, - info_path, - save_path=None, - back=False, - num_features=4, - front_camera_id=2): - """Create reduced point clouds for given info. - - Args: - data_path (str): Path of original data. - info_path (str): Path of data info. - save_path (str, optional): Path to save reduced point cloud - data. Default: None. - back (bool, optional): Whether to flip the points to back. - Default: False. - num_features (int, optional): Number of point features. Default: 4. - front_camera_id (int, optional): The referenced/front camera ID. - Default: 2. - """ - kitti_infos = mmcv.load(info_path) - - for info in mmcv.track_iter_progress(kitti_infos): - pc_info = info['point_cloud'] - image_info = info['image'] - calib = info['calib'] - - v_path = pc_info['velodyne_path'] - v_path = Path(data_path) / v_path - points_v = np.fromfile( - str(v_path), dtype=np.float32, - count=-1).reshape([-1, num_features]) - rect = calib['R0_rect'] - if front_camera_id == 2: - P2 = calib['P2'] - else: - P2 = calib[f'P{str(front_camera_id)}'] - Trv2c = calib['Tr_velo_to_cam'] - # first remove z < 0 points - # keep = points_v[:, -1] > 0 - # points_v = points_v[keep] - # then remove outside. 
- if back: - points_v[:, 0] = -points_v[:, 0] - points_v = box_np_ops.remove_outside_points(points_v, rect, Trv2c, P2, - image_info['image_shape']) - if save_path is None: - save_dir = v_path.parent.parent / (v_path.parent.stem + '_reduced') - if not save_dir.exists(): - save_dir.mkdir() - save_filename = save_dir / v_path.name - # save_filename = str(v_path) + '_reduced' - if back: - save_filename += '_back' - else: - save_filename = str(Path(save_path) / v_path.name) - if back: - save_filename += '_back' - with open(save_filename, 'w') as f: - points_v.tofile(f) - - -def create_reduced_point_cloud(data_path, - pkl_prefix, - train_info_path=None, - val_info_path=None, - test_info_path=None, - save_path=None, - with_back=False): - """Create reduced point clouds for training/validation/testing. - - Args: - data_path (str): Path of original data. - pkl_prefix (str): Prefix of info files. - train_info_path (str, optional): Path of training set info. - Default: None. - val_info_path (str, optional): Path of validation set info. - Default: None. - test_info_path (str, optional): Path of test set info. - Default: None. - save_path (str, optional): Path to save reduced point cloud data. - Default: None. - with_back (bool, optional): Whether to flip the points to back. - Default: False. - """ - if train_info_path is None: - train_info_path = Path(data_path) / f'{pkl_prefix}_infos_train.pkl' - if val_info_path is None: - val_info_path = Path(data_path) / f'{pkl_prefix}_infos_val.pkl' - if test_info_path is None: - test_info_path = Path(data_path) / f'{pkl_prefix}_infos_test.pkl' - - print('create reduced point cloud for training set') - _create_reduced_point_cloud(data_path, train_info_path, save_path) - print('create reduced point cloud for validation set') - _create_reduced_point_cloud(data_path, val_info_path, save_path) - print('create reduced point cloud for testing set') - _create_reduced_point_cloud(data_path, test_info_path, save_path) - if with_back: - _create_reduced_point_cloud( - data_path, train_info_path, save_path, back=True) - _create_reduced_point_cloud( - data_path, val_info_path, save_path, back=True) - _create_reduced_point_cloud( - data_path, test_info_path, save_path, back=True) - - -def export_2d_annotation(root_path, info_path, mono3d=True): - """Export 2d annotation from the info file and raw data. - - Args: - root_path (str): Root path of the raw data. - info_path (str): Path of the info file. - mono3d (bool, optional): Whether to export mono3d annotation. - Default: True. 
- """ - # get bbox annotations for camera - kitti_infos = mmcv.load(info_path) - cat2Ids = [ - dict(id=kitti_categories.index(cat_name), name=cat_name) - for cat_name in kitti_categories - ] - coco_ann_id = 0 - coco_2d_dict = dict(annotations=[], images=[], categories=cat2Ids) - from os import path as osp - for info in mmcv.track_iter_progress(kitti_infos): - coco_infos = get_2d_boxes(info, occluded=[0, 1, 2, 3], mono3d=mono3d) - (height, width, - _) = mmcv.imread(osp.join(root_path, - info['image']['image_path'])).shape - coco_2d_dict['images'].append( - dict( - file_name=info['image']['image_path'], - id=info['image']['image_idx'], - Tri2v=info['calib']['Tr_imu_to_velo'], - Trv2c=info['calib']['Tr_velo_to_cam'], - rect=info['calib']['R0_rect'], - cam_intrinsic=info['calib']['P2'], - width=width, - height=height)) - for coco_info in coco_infos: - if coco_info is None: - continue - # add an empty key for coco format - coco_info['segmentation'] = [] - coco_info['id'] = coco_ann_id - coco_2d_dict['annotations'].append(coco_info) - coco_ann_id += 1 - if mono3d: - json_prefix = f'{info_path[:-4]}_mono3d' - else: - json_prefix = f'{info_path[:-4]}' - mmcv.dump(coco_2d_dict, f'{json_prefix}.coco.json') - - -def get_2d_boxes(info, occluded, mono3d=True): - """Get the 2D annotation records for a given info. - - Args: - info: Information of the given sample data. - occluded: Integer (0, 1, 2, 3) indicating occlusion state: - 0 = fully visible, 1 = partly occluded, 2 = largely occluded, - 3 = unknown, -1 = DontCare - mono3d (bool): Whether to get boxes with mono3d annotation. - - Return: - list[dict]: List of 2D annotation record that belongs to the input - `sample_data_token`. - """ - # Get calibration information - P2 = info['calib']['P2'] - - repro_recs = [] - # if no annotations in info (test dataset), then return - if 'annos' not in info: - return repro_recs - - # Get all the annotation with the specified visibilties. - ann_dicts = info['annos'] - mask = [(ocld in occluded) for ocld in ann_dicts['occluded']] - for k in ann_dicts.keys(): - ann_dicts[k] = ann_dicts[k][mask] - - # convert dict of list to list of dict - ann_recs = [] - for i in range(len(ann_dicts['occluded'])): - ann_rec = {} - for k in ann_dicts.keys(): - ann_rec[k] = ann_dicts[k][i] - ann_recs.append(ann_rec) - - for ann_idx, ann_rec in enumerate(ann_recs): - # Augment sample_annotation with token information. - ann_rec['sample_annotation_token'] = \ - f"{info['image']['image_idx']}.{ann_idx}" - ann_rec['sample_data_token'] = info['image']['image_idx'] - sample_data_token = info['image']['image_idx'] - - loc = ann_rec['location'][np.newaxis, :] - dim = ann_rec['dimensions'][np.newaxis, :] - rot = ann_rec['rotation_y'][np.newaxis, np.newaxis] - # transform the center from [0.5, 1.0, 0.5] to [0.5, 0.5, 0.5] - dst = np.array([0.5, 0.5, 0.5]) - src = np.array([0.5, 1.0, 0.5]) - loc = loc + dim * (dst - src) - offset = (info['calib']['P2'][0, 3] - info['calib']['P0'][0, 3]) \ - / info['calib']['P2'][0, 0] - loc_3d = np.copy(loc) - loc_3d[0, 0] += offset - gt_bbox_3d = np.concatenate([loc, dim, rot], axis=1).astype(np.float32) - - # Filter out the corners that are not in front of the calibrated - # sensor. - corners_3d = box_np_ops.center_to_corner_box3d( - gt_bbox_3d[:, :3], - gt_bbox_3d[:, 3:6], - gt_bbox_3d[:, 6], [0.5, 0.5, 0.5], - axis=1) - corners_3d = corners_3d[0].T # (1, 8, 3) -> (3, 8) - in_front = np.argwhere(corners_3d[2, :] > 0).flatten() - corners_3d = corners_3d[:, in_front] - - # Project 3d box to 2d. 
- camera_intrinsic = P2 - corner_coords = view_points(corners_3d, camera_intrinsic, - True).T[:, :2].tolist() - - # Keep only corners that fall within the image. - final_coords = post_process_coords(corner_coords) - - # Skip if the convex hull of the re-projected corners - # does not intersect the image canvas. - if final_coords is None: - continue - else: - min_x, min_y, max_x, max_y = final_coords - - # Generate dictionary record to be included in the .json file. - repro_rec = generate_record(ann_rec, min_x, min_y, max_x, max_y, - sample_data_token, - info['image']['image_path']) - - # If mono3d=True, add 3D annotations in camera coordinates - if mono3d and (repro_rec is not None): - repro_rec['bbox_cam3d'] = np.concatenate( - [loc_3d, dim, rot], - axis=1).astype(np.float32).squeeze().tolist() - repro_rec['velo_cam3d'] = -1 # no velocity in KITTI - - center3d = np.array(loc).reshape([1, 3]) - center2d = points_cam2img( - center3d, camera_intrinsic, with_depth=True) - repro_rec['center2d'] = center2d.squeeze().tolist() - # normalized center2D + depth - # samples with depth < 0 will be removed - if repro_rec['center2d'][2] <= 0: - continue - - repro_rec['attribute_name'] = -1 # no attribute in KITTI - repro_rec['attribute_id'] = -1 - - repro_recs.append(repro_rec) - - return repro_recs - - -def generate_record(ann_rec, x1, y1, x2, y2, sample_data_token, filename): - """Generate one 2D annotation record given various information on top of - the 2D bounding box coordinates. - - Args: - ann_rec (dict): Original 3d annotation record. - x1 (float): Minimum value of the x coordinate. - y1 (float): Minimum value of the y coordinate. - x2 (float): Maximum value of the x coordinate. - y2 (float): Maximum value of the y coordinate. - sample_data_token (str): Sample data token. - filename (str):The corresponding image file where the annotation - is present. - - Returns: - dict: A sample 2D annotation record. - - file_name (str): file name - - image_id (str): sample data token - - area (float): 2d box area - - category_name (str): category name - - category_id (int): category id - - bbox (list[float]): left x, top y, x_size, y_size of 2d box - - iscrowd (int): whether the area is crowd - """ - repro_rec = OrderedDict() - repro_rec['sample_data_token'] = sample_data_token - coco_rec = dict() - - key_mapping = { - 'name': 'category_name', - 'num_points_in_gt': 'num_lidar_pts', - 'sample_annotation_token': 'sample_annotation_token', - 'sample_data_token': 'sample_data_token', - } - - for key, value in ann_rec.items(): - if key in key_mapping.keys(): - repro_rec[key_mapping[key]] = value - - repro_rec['bbox_corners'] = [x1, y1, x2, y2] - repro_rec['filename'] = filename - - coco_rec['file_name'] = filename - coco_rec['image_id'] = sample_data_token - coco_rec['area'] = (y2 - y1) * (x2 - x1) - - if repro_rec['category_name'] not in kitti_categories: - return None - cat_name = repro_rec['category_name'] - coco_rec['category_name'] = cat_name - coco_rec['category_id'] = kitti_categories.index(cat_name) - coco_rec['bbox'] = [x1, y1, x2 - x1, y2 - y1] - coco_rec['iscrowd'] = 0 - - return coco_rec diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/kitti_data_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/kitti_data_utils.py deleted file mode 100644 index cae84cc6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/kitti_data_utils.py +++ /dev/null @@ -1,619 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections import OrderedDict -from concurrent import futures as futures -from os import path as osp -from pathlib import Path - -import mmcv -import numpy as np -from PIL import Image -from skimage import io - - -def get_image_index_str(img_idx, use_prefix_id=False): - if use_prefix_id: - return '{:07d}'.format(img_idx) - else: - return '{:06d}'.format(img_idx) - - -def get_kitti_info_path(idx, - prefix, - info_type='image_2', - file_tail='.png', - training=True, - relative_path=True, - exist_check=True, - use_prefix_id=False): - img_idx_str = get_image_index_str(idx, use_prefix_id) - img_idx_str += file_tail - prefix = Path(prefix) - if training: - file_path = Path('training') / info_type / img_idx_str - else: - file_path = Path('testing') / info_type / img_idx_str - if exist_check and not (prefix / file_path).exists(): - raise ValueError('file not exist: {}'.format(file_path)) - if relative_path: - return str(file_path) - else: - return str(prefix / file_path) - - -def get_image_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - info_type='image_2', - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, info_type, '.png', training, - relative_path, exist_check, use_prefix_id) - - -def get_label_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - info_type='label_2', - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, info_type, '.txt', training, - relative_path, exist_check, use_prefix_id) - - -def get_plane_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - info_type='planes', - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, info_type, '.txt', training, - relative_path, exist_check, use_prefix_id) - - -def get_velodyne_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, 'velodyne', '.bin', training, - relative_path, exist_check, use_prefix_id) - - -def get_calib_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, 'calib', '.txt', training, - relative_path, exist_check, use_prefix_id) - - -def get_pose_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, 'pose', '.txt', training, - relative_path, exist_check, use_prefix_id) - - -def get_timestamp_path(idx, - prefix, - training=True, - relative_path=True, - exist_check=True, - use_prefix_id=False): - return get_kitti_info_path(idx, prefix, 'timestamp', '.txt', training, - relative_path, exist_check, use_prefix_id) - - -def get_label_anno(label_path): - annotations = {} - annotations.update({ - 'name': [], - 'truncated': [], - 'occluded': [], - 'alpha': [], - 'bbox': [], - 'dimensions': [], - 'location': [], - 'rotation_y': [] - }) - with open(label_path, 'r') as f: - lines = f.readlines() - # if len(lines) == 0 or len(lines[0]) < 15: - # content = [] - # else: - content = [line.strip().split(' ') for line in lines] - num_objects = len([x[0] for x in content if x[0] != 'DontCare']) - annotations['name'] = np.array([x[0] for x in content]) - num_gt = len(annotations['name']) - annotations['truncated'] = np.array([float(x[1]) for x in content]) - annotations['occluded'] = np.array([int(x[2]) for x in content]) - annotations['alpha'] = np.array([float(x[3]) for x in content]) - annotations['bbox'] = np.array([[float(info) for info in 
x[4:8]] - for x in content]).reshape(-1, 4) - # dimensions will convert hwl format to standard lhw(camera) format. - annotations['dimensions'] = np.array([[float(info) for info in x[8:11]] - for x in content - ]).reshape(-1, 3)[:, [2, 0, 1]] - annotations['location'] = np.array([[float(info) for info in x[11:14]] - for x in content]).reshape(-1, 3) - annotations['rotation_y'] = np.array([float(x[14]) - for x in content]).reshape(-1) - if len(content) != 0 and len(content[0]) == 16: # have score - annotations['score'] = np.array([float(x[15]) for x in content]) - else: - annotations['score'] = np.zeros((annotations['bbox'].shape[0], )) - index = list(range(num_objects)) + [-1] * (num_gt - num_objects) - annotations['index'] = np.array(index, dtype=np.int32) - annotations['group_ids'] = np.arange(num_gt, dtype=np.int32) - return annotations - - -def _extend_matrix(mat): - mat = np.concatenate([mat, np.array([[0., 0., 0., 1.]])], axis=0) - return mat - - -def get_kitti_image_info(path, - training=True, - label_info=True, - velodyne=False, - calib=False, - with_plane=False, - image_ids=7481, - extend_matrix=True, - num_worker=8, - relative_path=True, - with_imageshape=True): - """ - KITTI annotation format version 2: - { - [optional]points: [N, 3+] point cloud - [optional, for kitti]image: { - image_idx: ... - image_path: ... - image_shape: ... - } - point_cloud: { - num_features: 4 - velodyne_path: ... - } - [optional, for kitti]calib: { - R0_rect: ... - Tr_velo_to_cam: ... - P2: ... - } - annos: { - location: [num_gt, 3] array - dimensions: [num_gt, 3] array - rotation_y: [num_gt] angle array - name: [num_gt] ground truth name array - [optional]difficulty: kitti difficulty - [optional]group_ids: used for multi-part object - } - } - """ - root_path = Path(path) - if not isinstance(image_ids, list): - image_ids = list(range(image_ids)) - - def map_func(idx): - info = {} - pc_info = {'num_features': 4} - calib_info = {} - - image_info = {'image_idx': idx} - annotations = None - if velodyne: - pc_info['velodyne_path'] = get_velodyne_path( - idx, path, training, relative_path) - image_info['image_path'] = get_image_path(idx, path, training, - relative_path) - if with_imageshape: - img_path = image_info['image_path'] - if relative_path: - img_path = str(root_path / img_path) - image_info['image_shape'] = np.array( - io.imread(img_path).shape[:2], dtype=np.int32) - if label_info: - label_path = get_label_path(idx, path, training, relative_path) - if relative_path: - label_path = str(root_path / label_path) - annotations = get_label_anno(label_path) - info['image'] = image_info - info['point_cloud'] = pc_info - if calib: - calib_path = get_calib_path( - idx, path, training, relative_path=False) - with open(calib_path, 'r') as f: - lines = f.readlines() - P0 = np.array([float(info) for info in lines[0].split(' ')[1:13] - ]).reshape([3, 4]) - P1 = np.array([float(info) for info in lines[1].split(' ')[1:13] - ]).reshape([3, 4]) - P2 = np.array([float(info) for info in lines[2].split(' ')[1:13] - ]).reshape([3, 4]) - P3 = np.array([float(info) for info in lines[3].split(' ')[1:13] - ]).reshape([3, 4]) - if extend_matrix: - P0 = _extend_matrix(P0) - P1 = _extend_matrix(P1) - P2 = _extend_matrix(P2) - P3 = _extend_matrix(P3) - R0_rect = np.array([ - float(info) for info in lines[4].split(' ')[1:10] - ]).reshape([3, 3]) - if extend_matrix: - rect_4x4 = np.zeros([4, 4], dtype=R0_rect.dtype) - rect_4x4[3, 3] = 1. 
- rect_4x4[:3, :3] = R0_rect - else: - rect_4x4 = R0_rect - - Tr_velo_to_cam = np.array([ - float(info) for info in lines[5].split(' ')[1:13] - ]).reshape([3, 4]) - Tr_imu_to_velo = np.array([ - float(info) for info in lines[6].split(' ')[1:13] - ]).reshape([3, 4]) - if extend_matrix: - Tr_velo_to_cam = _extend_matrix(Tr_velo_to_cam) - Tr_imu_to_velo = _extend_matrix(Tr_imu_to_velo) - calib_info['P0'] = P0 - calib_info['P1'] = P1 - calib_info['P2'] = P2 - calib_info['P3'] = P3 - calib_info['R0_rect'] = rect_4x4 - calib_info['Tr_velo_to_cam'] = Tr_velo_to_cam - calib_info['Tr_imu_to_velo'] = Tr_imu_to_velo - info['calib'] = calib_info - - if with_plane: - plane_path = get_plane_path(idx, path, training, relative_path) - if relative_path: - plane_path = str(root_path / plane_path) - lines = mmcv.list_from_file(plane_path) - info['plane'] = np.array([float(i) for i in lines[3].split()]) - - if annotations is not None: - info['annos'] = annotations - add_difficulty_to_annos(info) - return info - - with futures.ThreadPoolExecutor(num_worker) as executor: - image_infos = executor.map(map_func, image_ids) - - return list(image_infos) - - -class WaymoInfoGatherer: - """ - Parallel version of waymo dataset information gathering. - Waymo annotation format version like KITTI: - { - [optional]points: [N, 3+] point cloud - [optional, for kitti]image: { - image_idx: ... - image_path: ... - image_shape: ... - } - point_cloud: { - num_features: 6 - velodyne_path: ... - } - [optional, for kitti]calib: { - R0_rect: ... - Tr_velo_to_cam0: ... - P0: ... - } - annos: { - location: [num_gt, 3] array - dimensions: [num_gt, 3] array - rotation_y: [num_gt] angle array - name: [num_gt] ground truth name array - [optional]difficulty: kitti difficulty - [optional]group_ids: used for multi-part object - } - } - """ - - def __init__(self, - path, - training=True, - label_info=True, - velodyne=False, - calib=False, - pose=False, - extend_matrix=True, - num_worker=8, - relative_path=True, - with_imageshape=True, - max_sweeps=5) -> None: - self.path = path - self.training = training - self.label_info = label_info - self.velodyne = velodyne - self.calib = calib - self.pose = pose - self.extend_matrix = extend_matrix - self.num_worker = num_worker - self.relative_path = relative_path - self.with_imageshape = with_imageshape - self.max_sweeps = max_sweeps - - def gather_single(self, idx): - root_path = Path(self.path) - info = {} - pc_info = {'num_features': 6} - calib_info = {} - - image_info = {'image_idx': idx} - annotations = None - if self.velodyne: - pc_info['velodyne_path'] = get_velodyne_path( - idx, - self.path, - self.training, - self.relative_path, - use_prefix_id=True) - with open( - get_timestamp_path( - idx, - self.path, - self.training, - relative_path=False, - use_prefix_id=True)) as f: - info['timestamp'] = np.int64(f.read()) - image_info['image_path'] = get_image_path( - idx, - self.path, - self.training, - self.relative_path, - info_type='image_0', - use_prefix_id=True) - if self.with_imageshape: - img_path = image_info['image_path'] - if self.relative_path: - img_path = str(root_path / img_path) - # io using PIL is significantly faster than skimage - w, h = Image.open(img_path).size - image_info['image_shape'] = np.array((h, w), dtype=np.int32) - if self.label_info: - label_path = get_label_path( - idx, - self.path, - self.training, - self.relative_path, - info_type='label_all', - use_prefix_id=True) - if self.relative_path: - label_path = str(root_path / label_path) - annotations = 
get_label_anno(label_path) - info['image'] = image_info - info['point_cloud'] = pc_info - if self.calib: - calib_path = get_calib_path( - idx, - self.path, - self.training, - relative_path=False, - use_prefix_id=True) - with open(calib_path, 'r') as f: - lines = f.readlines() - P0 = np.array([float(info) for info in lines[0].split(' ')[1:13] - ]).reshape([3, 4]) - P1 = np.array([float(info) for info in lines[1].split(' ')[1:13] - ]).reshape([3, 4]) - P2 = np.array([float(info) for info in lines[2].split(' ')[1:13] - ]).reshape([3, 4]) - P3 = np.array([float(info) for info in lines[3].split(' ')[1:13] - ]).reshape([3, 4]) - P4 = np.array([float(info) for info in lines[4].split(' ')[1:13] - ]).reshape([3, 4]) - if self.extend_matrix: - P0 = _extend_matrix(P0) - P1 = _extend_matrix(P1) - P2 = _extend_matrix(P2) - P3 = _extend_matrix(P3) - P4 = _extend_matrix(P4) - R0_rect = np.array([ - float(info) for info in lines[5].split(' ')[1:10] - ]).reshape([3, 3]) - if self.extend_matrix: - rect_4x4 = np.zeros([4, 4], dtype=R0_rect.dtype) - rect_4x4[3, 3] = 1. - rect_4x4[:3, :3] = R0_rect - else: - rect_4x4 = R0_rect - - Tr_velo_to_cam = np.array([ - float(info) for info in lines[6].split(' ')[1:13] - ]).reshape([3, 4]) - if self.extend_matrix: - Tr_velo_to_cam = _extend_matrix(Tr_velo_to_cam) - calib_info['P0'] = P0 - calib_info['P1'] = P1 - calib_info['P2'] = P2 - calib_info['P3'] = P3 - calib_info['P4'] = P4 - calib_info['R0_rect'] = rect_4x4 - calib_info['Tr_velo_to_cam'] = Tr_velo_to_cam - info['calib'] = calib_info - if self.pose: - pose_path = get_pose_path( - idx, - self.path, - self.training, - relative_path=False, - use_prefix_id=True) - info['pose'] = np.loadtxt(pose_path) - - if annotations is not None: - info['annos'] = annotations - info['annos']['camera_id'] = info['annos'].pop('score') - add_difficulty_to_annos(info) - - sweeps = [] - prev_idx = idx - while len(sweeps) < self.max_sweeps: - prev_info = {} - prev_idx -= 1 - prev_info['velodyne_path'] = get_velodyne_path( - prev_idx, - self.path, - self.training, - self.relative_path, - exist_check=False, - use_prefix_id=True) - if_prev_exists = osp.exists( - Path(self.path) / prev_info['velodyne_path']) - if if_prev_exists: - with open( - get_timestamp_path( - prev_idx, - self.path, - self.training, - relative_path=False, - use_prefix_id=True)) as f: - prev_info['timestamp'] = np.int64(f.read()) - prev_pose_path = get_pose_path( - prev_idx, - self.path, - self.training, - relative_path=False, - use_prefix_id=True) - prev_info['pose'] = np.loadtxt(prev_pose_path) - sweeps.append(prev_info) - else: - break - info['sweeps'] = sweeps - - return info - - def gather(self, image_ids): - if not isinstance(image_ids, list): - image_ids = list(range(image_ids)) - image_infos = mmcv.track_parallel_progress(self.gather_single, - image_ids, self.num_worker) - return list(image_infos) - - -def kitti_anno_to_label_file(annos, folder): - folder = Path(folder) - for anno in annos: - image_idx = anno['metadata']['image_idx'] - label_lines = [] - for j in range(anno['bbox'].shape[0]): - label_dict = { - 'name': anno['name'][j], - 'alpha': anno['alpha'][j], - 'bbox': anno['bbox'][j], - 'location': anno['location'][j], - 'dimensions': anno['dimensions'][j], - 'rotation_y': anno['rotation_y'][j], - 'score': anno['score'][j], - } - label_line = kitti_result_line(label_dict) - label_lines.append(label_line) - label_file = folder / f'{get_image_index_str(image_idx)}.txt' - label_str = '\n'.join(label_lines) - with open(label_file, 'w') as f: - 
f.write(label_str) - - -def add_difficulty_to_annos(info): - min_height = [40, 25, - 25] # minimum height for evaluated groundtruth/detections - max_occlusion = [ - 0, 1, 2 - ] # maximum occlusion level of the groundtruth used for evaluation - max_trunc = [ - 0.15, 0.3, 0.5 - ] # maximum truncation level of the groundtruth used for evaluation - annos = info['annos'] - dims = annos['dimensions'] # lhw format - bbox = annos['bbox'] - height = bbox[:, 3] - bbox[:, 1] - occlusion = annos['occluded'] - truncation = annos['truncated'] - diff = [] - easy_mask = np.ones((len(dims), ), dtype=np.bool) - moderate_mask = np.ones((len(dims), ), dtype=np.bool) - hard_mask = np.ones((len(dims), ), dtype=np.bool) - i = 0 - for h, o, t in zip(height, occlusion, truncation): - if o > max_occlusion[0] or h <= min_height[0] or t > max_trunc[0]: - easy_mask[i] = False - if o > max_occlusion[1] or h <= min_height[1] or t > max_trunc[1]: - moderate_mask[i] = False - if o > max_occlusion[2] or h <= min_height[2] or t > max_trunc[2]: - hard_mask[i] = False - i += 1 - is_easy = easy_mask - is_moderate = np.logical_xor(easy_mask, moderate_mask) - is_hard = np.logical_xor(hard_mask, moderate_mask) - - for i in range(len(dims)): - if is_easy[i]: - diff.append(0) - elif is_moderate[i]: - diff.append(1) - elif is_hard[i]: - diff.append(2) - else: - diff.append(-1) - annos['difficulty'] = np.array(diff, np.int32) - return diff - - -def kitti_result_line(result_dict, precision=4): - prec_float = '{' + ':.{}f'.format(precision) + '}' - res_line = [] - all_field_default = OrderedDict([ - ('name', None), - ('truncated', -1), - ('occluded', -1), - ('alpha', -10), - ('bbox', None), - ('dimensions', [-1, -1, -1]), - ('location', [-1000, -1000, -1000]), - ('rotation_y', -10), - ('score', 0.0), - ]) - res_dict = [(key, None) for key, val in all_field_default.items()] - res_dict = OrderedDict(res_dict) - for key, val in result_dict.items(): - if all_field_default[key] is None and val is None: - raise ValueError('you must specify a value for {}'.format(key)) - res_dict[key] = val - - for key, val in res_dict.items(): - if key == 'name': - res_line.append(val) - elif key in ['truncated', 'alpha', 'rotation_y', 'score']: - if val is None: - res_line.append(str(all_field_default[key])) - else: - res_line.append(prec_float.format(val)) - elif key == 'occluded': - if val is None: - res_line.append(str(all_field_default[key])) - else: - res_line.append('{}'.format(val)) - elif key in ['bbox', 'dimensions', 'location']: - if val is None: - res_line += [str(v) for v in all_field_default[key]] - else: - res_line += [prec_float.format(v) for v in val] - else: - raise ValueError('unknown key. supported key:{}'.format( - res_dict.keys())) - return ' '.join(res_line) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/lyft_converter.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/lyft_converter.py deleted file mode 100644 index c6a89d0d..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/lyft_converter.py +++ /dev/null @@ -1,271 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os -from logging import warning -from os import path as osp - -import mmcv -import numpy as np -from lyft_dataset_sdk.lyftdataset import LyftDataset as Lyft -from pyquaternion import Quaternion - -from mmdet3d.datasets import LyftDataset -from .nuscenes_converter import (get_2d_boxes, get_available_scenes, - obtain_sensor2top) - -lyft_categories = ('car', 'truck', 'bus', 'emergency_vehicle', 'other_vehicle', - 'motorcycle', 'bicycle', 'pedestrian', 'animal') - - -def create_lyft_infos(root_path, - info_prefix, - version='v1.01-train', - max_sweeps=10): - """Create info file of lyft dataset. - - Given the raw data, generate its related info file in pkl format. - - Args: - root_path (str): Path of the data root. - info_prefix (str): Prefix of the info file to be generated. - version (str, optional): Version of the data. - Default: 'v1.01-train'. - max_sweeps (int, optional): Max number of sweeps. - Default: 10. - """ - lyft = Lyft( - data_path=osp.join(root_path, version), - json_path=osp.join(root_path, version, version), - verbose=True) - available_vers = ['v1.01-train', 'v1.01-test'] - assert version in available_vers - if version == 'v1.01-train': - train_scenes = mmcv.list_from_file('data/lyft/train.txt') - val_scenes = mmcv.list_from_file('data/lyft/val.txt') - elif version == 'v1.01-test': - train_scenes = mmcv.list_from_file('data/lyft/test.txt') - val_scenes = [] - else: - raise ValueError('unknown') - - # filter existing scenes. - available_scenes = get_available_scenes(lyft) - available_scene_names = [s['name'] for s in available_scenes] - train_scenes = list( - filter(lambda x: x in available_scene_names, train_scenes)) - val_scenes = list(filter(lambda x: x in available_scene_names, val_scenes)) - train_scenes = set([ - available_scenes[available_scene_names.index(s)]['token'] - for s in train_scenes - ]) - val_scenes = set([ - available_scenes[available_scene_names.index(s)]['token'] - for s in val_scenes - ]) - - test = 'test' in version - if test: - print(f'test scene: {len(train_scenes)}') - else: - print(f'train scene: {len(train_scenes)}, \ - val scene: {len(val_scenes)}') - train_lyft_infos, val_lyft_infos = _fill_trainval_infos( - lyft, train_scenes, val_scenes, test, max_sweeps=max_sweeps) - - metadata = dict(version=version) - if test: - print(f'test sample: {len(train_lyft_infos)}') - data = dict(infos=train_lyft_infos, metadata=metadata) - info_name = f'{info_prefix}_infos_test' - info_path = osp.join(root_path, f'{info_name}.pkl') - mmcv.dump(data, info_path) - else: - print(f'train sample: {len(train_lyft_infos)}, \ - val sample: {len(val_lyft_infos)}') - data = dict(infos=train_lyft_infos, metadata=metadata) - train_info_name = f'{info_prefix}_infos_train' - info_path = osp.join(root_path, f'{train_info_name}.pkl') - mmcv.dump(data, info_path) - data['infos'] = val_lyft_infos - val_info_name = f'{info_prefix}_infos_val' - info_val_path = osp.join(root_path, f'{val_info_name}.pkl') - mmcv.dump(data, info_val_path) - - -def _fill_trainval_infos(lyft, - train_scenes, - val_scenes, - test=False, - max_sweeps=10): - """Generate the train/val infos from the raw data. - - Args: - lyft (:obj:`LyftDataset`): Dataset class in the Lyft dataset. - train_scenes (list[str]): Basic information of training scenes. - val_scenes (list[str]): Basic information of validation scenes. - test (bool, optional): Whether use the test mode. In the test mode, no - annotations can be accessed. Default: False. - max_sweeps (int, optional): Max number of sweeps. Default: 10. 
- - Returns: - tuple[list[dict]]: Information of training set and - validation set that will be saved to the info file. - """ - train_lyft_infos = [] - val_lyft_infos = [] - - for sample in mmcv.track_iter_progress(lyft.sample): - lidar_token = sample['data']['LIDAR_TOP'] - sd_rec = lyft.get('sample_data', sample['data']['LIDAR_TOP']) - cs_record = lyft.get('calibrated_sensor', - sd_rec['calibrated_sensor_token']) - pose_record = lyft.get('ego_pose', sd_rec['ego_pose_token']) - abs_lidar_path, boxes, _ = lyft.get_sample_data(lidar_token) - # nuScenes devkit returns more convenient relative paths while - # lyft devkit returns absolute paths - abs_lidar_path = str(abs_lidar_path) # absolute path - lidar_path = abs_lidar_path.split(f'{os.getcwd()}/')[-1] - # relative path - - mmcv.check_file_exist(lidar_path) - - info = { - 'lidar_path': lidar_path, - 'token': sample['token'], - 'sweeps': [], - 'cams': dict(), - 'lidar2ego_translation': cs_record['translation'], - 'lidar2ego_rotation': cs_record['rotation'], - 'ego2global_translation': pose_record['translation'], - 'ego2global_rotation': pose_record['rotation'], - 'timestamp': sample['timestamp'], - } - - l2e_r = info['lidar2ego_rotation'] - l2e_t = info['lidar2ego_translation'] - e2g_r = info['ego2global_rotation'] - e2g_t = info['ego2global_translation'] - l2e_r_mat = Quaternion(l2e_r).rotation_matrix - e2g_r_mat = Quaternion(e2g_r).rotation_matrix - - # obtain 6 image's information per frame - camera_types = [ - 'CAM_FRONT', - 'CAM_FRONT_RIGHT', - 'CAM_FRONT_LEFT', - 'CAM_BACK', - 'CAM_BACK_LEFT', - 'CAM_BACK_RIGHT', - ] - for cam in camera_types: - cam_token = sample['data'][cam] - cam_path, _, cam_intrinsic = lyft.get_sample_data(cam_token) - cam_info = obtain_sensor2top(lyft, cam_token, l2e_t, l2e_r_mat, - e2g_t, e2g_r_mat, cam) - cam_info.update(cam_intrinsic=cam_intrinsic) - info['cams'].update({cam: cam_info}) - - # obtain sweeps for a single key-frame - sd_rec = lyft.get('sample_data', sample['data']['LIDAR_TOP']) - sweeps = [] - while len(sweeps) < max_sweeps: - if not sd_rec['prev'] == '': - sweep = obtain_sensor2top(lyft, sd_rec['prev'], l2e_t, - l2e_r_mat, e2g_t, e2g_r_mat, 'lidar') - sweeps.append(sweep) - sd_rec = lyft.get('sample_data', sd_rec['prev']) - else: - break - info['sweeps'] = sweeps - # obtain annotation - if not test: - annotations = [ - lyft.get('sample_annotation', token) - for token in sample['anns'] - ] - locs = np.array([b.center for b in boxes]).reshape(-1, 3) - dims = np.array([b.wlh for b in boxes]).reshape(-1, 3) - rots = np.array([b.orientation.yaw_pitch_roll[0] - for b in boxes]).reshape(-1, 1) - - names = [b.name for b in boxes] - for i in range(len(names)): - if names[i] in LyftDataset.NameMapping: - names[i] = LyftDataset.NameMapping[names[i]] - names = np.array(names) - - # we need to convert box size to - # the format of our lidar coordinate system - # which is x_size, y_size, z_size (corresponding to l, w, h) - gt_boxes = np.concatenate([locs, dims[:, [1, 0, 2]], rots], axis=1) - assert len(gt_boxes) == len( - annotations), f'{len(gt_boxes)}, {len(annotations)}' - info['gt_boxes'] = gt_boxes - info['gt_names'] = names - info['num_lidar_pts'] = np.array( - [a['num_lidar_pts'] for a in annotations]) - info['num_radar_pts'] = np.array( - [a['num_radar_pts'] for a in annotations]) - - if sample['scene_token'] in train_scenes: - train_lyft_infos.append(info) - else: - val_lyft_infos.append(info) - - return train_lyft_infos, val_lyft_infos - - -def export_2d_annotation(root_path, info_path, version): - 
"""Export 2d annotation from the info file and raw data. - - Args: - root_path (str): Root path of the raw data. - info_path (str): Path of the info file. - version (str): Dataset version. - """ - warning.warn('DeprecationWarning: 2D annotations are not used on the ' - 'Lyft dataset. The function export_2d_annotation will be ' - 'deprecated.') - # get bbox annotations for camera - camera_types = [ - 'CAM_FRONT', - 'CAM_FRONT_RIGHT', - 'CAM_FRONT_LEFT', - 'CAM_BACK', - 'CAM_BACK_LEFT', - 'CAM_BACK_RIGHT', - ] - lyft_infos = mmcv.load(info_path)['infos'] - lyft = Lyft( - data_path=osp.join(root_path, version), - json_path=osp.join(root_path, version, version), - verbose=True) - # info_2d_list = [] - cat2Ids = [ - dict(id=lyft_categories.index(cat_name), name=cat_name) - for cat_name in lyft_categories - ] - coco_ann_id = 0 - coco_2d_dict = dict(annotations=[], images=[], categories=cat2Ids) - for info in mmcv.track_iter_progress(lyft_infos): - for cam in camera_types: - cam_info = info['cams'][cam] - coco_infos = get_2d_boxes( - lyft, - cam_info['sample_data_token'], - visibilities=['', '1', '2', '3', '4']) - (height, width, _) = mmcv.imread(cam_info['data_path']).shape - coco_2d_dict['images'].append( - dict( - file_name=cam_info['data_path'], - id=cam_info['sample_data_token'], - width=width, - height=height)) - for coco_info in coco_infos: - if coco_info is None: - continue - # add an empty key for coco format - coco_info['segmentation'] = [] - coco_info['id'] = coco_ann_id - coco_2d_dict['annotations'].append(coco_info) - coco_ann_id += 1 - mmcv.dump(coco_2d_dict, f'{info_path[:-4]}.coco.json') diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/lyft_data_fixer.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/lyft_data_fixer.py deleted file mode 100644 index 55103515..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/lyft_data_fixer.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os - -import numpy as np - - -def fix_lyft(root_folder='./data/lyft', version='v1.01'): - # refer to https://www.kaggle.com/c/3d-object-detection-for-autonomous-vehicles/discussion/110000 # noqa - lidar_path = 'lidar/host-a011_lidar1_1233090652702363606.bin' - root_folder = os.path.join(root_folder, f'{version}-train') - lidar_path = os.path.join(root_folder, lidar_path) - assert os.path.isfile(lidar_path), f'Please download the complete Lyft ' \ - f'dataset and make sure {lidar_path} is present.' 
- points = np.fromfile(lidar_path, dtype=np.float32, count=-1) - try: - points.reshape([-1, 5]) - print(f'This fix is not required for version {version}.') - except ValueError: - new_points = np.array(list(points) + [100.0, 1.0], dtype='float32') - new_points.tofile(lidar_path) - print(f'Appended 100.0 and 1.0 to the end of {lidar_path}.') - - -parser = argparse.ArgumentParser(description='Lyft dataset fixer arg parser') -parser.add_argument( - '--root-folder', - type=str, - default='./data/lyft', - help='specify the root path of Lyft dataset') -parser.add_argument( - '--version', - type=str, - default='v1.01', - help='specify Lyft dataset version') -args = parser.parse_args() - -if __name__ == '__main__': - fix_lyft(root_folder=args.root_folder, version=args.version) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/nuimage_converter.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/nuimage_converter.py deleted file mode 100644 index a46015a1..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/nuimage_converter.py +++ /dev/null @@ -1,226 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import base64 -from os import path as osp - -import mmcv -import numpy as np -from nuimages import NuImages -from nuimages.utils.utils import mask_decode, name_to_index_mapping - -nus_categories = ('car', 'truck', 'trailer', 'bus', 'construction_vehicle', - 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', - 'barrier') - -NAME_MAPPING = { - 'movable_object.barrier': 'barrier', - 'vehicle.bicycle': 'bicycle', - 'vehicle.bus.bendy': 'bus', - 'vehicle.bus.rigid': 'bus', - 'vehicle.car': 'car', - 'vehicle.construction': 'construction_vehicle', - 'vehicle.motorcycle': 'motorcycle', - 'human.pedestrian.adult': 'pedestrian', - 'human.pedestrian.child': 'pedestrian', - 'human.pedestrian.construction_worker': 'pedestrian', - 'human.pedestrian.police_officer': 'pedestrian', - 'movable_object.trafficcone': 'traffic_cone', - 'vehicle.trailer': 'trailer', - 'vehicle.truck': 'truck', -} - - -def parse_args(): - parser = argparse.ArgumentParser(description='Data converter arg parser') - parser.add_argument( - '--data-root', - type=str, - default='./data/nuimages', - help='specify the root path of dataset') - parser.add_argument( - '--version', - type=str, - nargs='+', - default=['v1.0-mini'], - required=False, - help='specify the dataset version') - parser.add_argument( - '--out-dir', - type=str, - default='./data/nuimages/annotations/', - required=False, - help='path to save the exported json') - parser.add_argument( - '--nproc', - type=int, - default=4, - required=False, - help='workers to process semantic masks') - parser.add_argument('--extra-tag', type=str, default='nuimages') - args = parser.parse_args() - return args - - -def get_img_annos(nuim, img_info, cat2id, out_dir, data_root, seg_root): - """Get semantic segmentation map for an image. - - Args: - nuim (obj:`NuImages`): NuImages dataset object - img_info (dict): Meta information of img - - Returns: - np.ndarray: Semantic segmentation map of the image - """ - sd_token = img_info['token'] - image_id = img_info['id'] - name_to_index = name_to_index_mapping(nuim.category) - - # Get image data. - width, height = img_info['width'], img_info['height'] - semseg_mask = np.zeros((height, width)).astype('uint8') - - # Load stuff / surface regions. 
- surface_anns = [ - o for o in nuim.surface_ann if o['sample_data_token'] == sd_token - ] - - # Draw stuff / surface regions. - for ann in surface_anns: - # Get color and mask. - category_token = ann['category_token'] - category_name = nuim.get('category', category_token)['name'] - if ann['mask'] is None: - continue - mask = mask_decode(ann['mask']) - - # Draw mask for semantic segmentation. - semseg_mask[mask == 1] = name_to_index[category_name] - - # Load object instances. - object_anns = [ - o for o in nuim.object_ann if o['sample_data_token'] == sd_token - ] - - # Sort by token to ensure that objects always appear in the - # instance mask in the same order. - object_anns = sorted(object_anns, key=lambda k: k['token']) - - # Draw object instances. - # The 0 index is reserved for background; thus, the instances - # should start from index 1. - annotations = [] - for i, ann in enumerate(object_anns, start=1): - # Get color, box, mask and name. - category_token = ann['category_token'] - category_name = nuim.get('category', category_token)['name'] - if ann['mask'] is None: - continue - mask = mask_decode(ann['mask']) - - # Draw masks for semantic segmentation and instance segmentation. - semseg_mask[mask == 1] = name_to_index[category_name] - - if category_name in NAME_MAPPING: - cat_name = NAME_MAPPING[category_name] - cat_id = cat2id[cat_name] - - x_min, y_min, x_max, y_max = ann['bbox'] - # encode calibrated instance mask - mask_anno = dict() - mask_anno['counts'] = base64.b64decode( - ann['mask']['counts']).decode() - mask_anno['size'] = ann['mask']['size'] - - data_anno = dict( - image_id=image_id, - category_id=cat_id, - bbox=[x_min, y_min, x_max - x_min, y_max - y_min], - area=(x_max - x_min) * (y_max - y_min), - segmentation=mask_anno, - iscrowd=0) - annotations.append(data_anno) - - # after process, save semantic masks - img_filename = img_info['file_name'] - seg_filename = img_filename.replace('jpg', 'png') - seg_filename = osp.join(seg_root, seg_filename) - mmcv.imwrite(semseg_mask, seg_filename) - return annotations, np.max(semseg_mask) - - -def export_nuim_to_coco(nuim, data_root, out_dir, extra_tag, version, nproc): - print('Process category information') - categories = [] - categories = [ - dict(id=nus_categories.index(cat_name), name=cat_name) - for cat_name in nus_categories - ] - cat2id = {k_v['name']: k_v['id'] for k_v in categories} - - images = [] - print('Process image meta information...') - for sample_info in mmcv.track_iter_progress(nuim.sample_data): - if sample_info['is_key_frame']: - img_idx = len(images) - images.append( - dict( - id=img_idx, - token=sample_info['token'], - file_name=sample_info['filename'], - width=sample_info['width'], - height=sample_info['height'])) - - seg_root = f'{out_dir}semantic_masks' - mmcv.mkdir_or_exist(seg_root) - mmcv.mkdir_or_exist(osp.join(data_root, 'calibrated')) - - global process_img_anno - - def process_img_anno(img_info): - single_img_annos, max_cls_id = get_img_annos(nuim, img_info, cat2id, - out_dir, data_root, - seg_root) - return single_img_annos, max_cls_id - - print('Process img annotations...') - if nproc > 1: - outputs = mmcv.track_parallel_progress( - process_img_anno, images, nproc=nproc) - else: - outputs = [] - for img_info in mmcv.track_iter_progress(images): - outputs.append(process_img_anno(img_info)) - - # Determine the index of object annotation - print('Process annotation information...') - annotations = [] - max_cls_ids = [] - for single_img_annos, max_cls_id in outputs: - max_cls_ids.append(max_cls_id) 
- for img_anno in single_img_annos: - img_anno.update(id=len(annotations)) - annotations.append(img_anno) - - max_cls_id = max(max_cls_ids) - print(f'Max ID of class in the semantic map: {max_cls_id}') - - coco_format_json = dict( - images=images, annotations=annotations, categories=categories) - - mmcv.mkdir_or_exist(out_dir) - out_file = osp.join(out_dir, f'{extra_tag}_{version}.json') - print(f'Annotation dumped to {out_file}') - mmcv.dump(coco_format_json, out_file) - - -def main(): - args = parse_args() - for version in args.version: - nuim = NuImages( - dataroot=args.data_root, version=version, verbose=True, lazy=True) - export_nuim_to_coco(nuim, args.data_root, args.out_dir, args.extra_tag, - version, args.nproc) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/nuscenes_converter.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/nuscenes_converter.py deleted file mode 100644 index c6140fcc..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/nuscenes_converter.py +++ /dev/null @@ -1,628 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -from collections import OrderedDict -from os import path as osp -from typing import List, Tuple, Union - -import mmcv -import numpy as np -from nuscenes.nuscenes import NuScenes -from nuscenes.utils.geometry_utils import view_points -from pyquaternion import Quaternion -from shapely.geometry import MultiPoint, box - -from mmdet3d.core.bbox import points_cam2img -from mmdet3d.datasets import NuScenesDataset - -nus_categories = ('car', 'truck', 'trailer', 'bus', 'construction_vehicle', - 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', - 'barrier') - -nus_attributes = ('cycle.with_rider', 'cycle.without_rider', - 'pedestrian.moving', 'pedestrian.standing', - 'pedestrian.sitting_lying_down', 'vehicle.moving', - 'vehicle.parked', 'vehicle.stopped', 'None') - - -def create_nuscenes_infos(root_path, - info_prefix, - version='v1.0-trainval', - max_sweeps=10): - """Create info file of nuscene dataset. - - Given the raw data, generate its related info file in pkl format. - - Args: - root_path (str): Path of the data root. - info_prefix (str): Prefix of the info file to be generated. - version (str, optional): Version of the data. - Default: 'v1.0-trainval'. - max_sweeps (int, optional): Max number of sweeps. - Default: 10. - """ - from nuscenes.nuscenes import NuScenes - nusc = NuScenes(version=version, dataroot=root_path, verbose=True) - from nuscenes.utils import splits - available_vers = ['v1.0-trainval', 'v1.0-test', 'v1.0-mini'] - assert version in available_vers - if version == 'v1.0-trainval': - train_scenes = splits.train - val_scenes = splits.val - elif version == 'v1.0-test': - train_scenes = splits.test - val_scenes = [] - elif version == 'v1.0-mini': - train_scenes = splits.mini_train - val_scenes = splits.mini_val - else: - raise ValueError('unknown') - - # filter existing scenes. 
- available_scenes = get_available_scenes(nusc) - available_scene_names = [s['name'] for s in available_scenes] - train_scenes = list( - filter(lambda x: x in available_scene_names, train_scenes)) - val_scenes = list(filter(lambda x: x in available_scene_names, val_scenes)) - train_scenes = set([ - available_scenes[available_scene_names.index(s)]['token'] - for s in train_scenes - ]) - val_scenes = set([ - available_scenes[available_scene_names.index(s)]['token'] - for s in val_scenes - ]) - - test = 'test' in version - if test: - print('test scene: {}'.format(len(train_scenes))) - else: - print('train scene: {}, val scene: {}'.format( - len(train_scenes), len(val_scenes))) - train_nusc_infos, val_nusc_infos = _fill_trainval_infos( - nusc, train_scenes, val_scenes, test, max_sweeps=max_sweeps) - - metadata = dict(version=version) - if test: - print('test sample: {}'.format(len(train_nusc_infos))) - data = dict(infos=train_nusc_infos, metadata=metadata) - info_path = osp.join(root_path, - '{}_infos_test.pkl'.format(info_prefix)) - mmcv.dump(data, info_path) - else: - print('train sample: {}, val sample: {}'.format( - len(train_nusc_infos), len(val_nusc_infos))) - data = dict(infos=train_nusc_infos, metadata=metadata) - info_path = osp.join(root_path, - '{}_infos_train.pkl'.format(info_prefix)) - mmcv.dump(data, info_path) - data['infos'] = val_nusc_infos - info_val_path = osp.join(root_path, - '{}_infos_val.pkl'.format(info_prefix)) - mmcv.dump(data, info_val_path) - - -def get_available_scenes(nusc): - """Get available scenes from the input nuscenes class. - - Given the raw data, get the information of available scenes for - further info generation. - - Args: - nusc (class): Dataset class in the nuScenes dataset. - - Returns: - available_scenes (list[dict]): List of basic information for the - available scenes. - """ - available_scenes = [] - print('total scene num: {}'.format(len(nusc.scene))) - for scene in nusc.scene: - scene_token = scene['token'] - scene_rec = nusc.get('scene', scene_token) - sample_rec = nusc.get('sample', scene_rec['first_sample_token']) - sd_rec = nusc.get('sample_data', sample_rec['data']['LIDAR_TOP']) - has_more_frames = True - scene_not_exist = False - while has_more_frames: - lidar_path, boxes, _ = nusc.get_sample_data(sd_rec['token']) - lidar_path = str(lidar_path) - if os.getcwd() in lidar_path: - # path from lyftdataset is absolute path - lidar_path = lidar_path.split(f'{os.getcwd()}/')[-1] - # relative path - if not mmcv.is_filepath(lidar_path): - scene_not_exist = True - break - else: - break - if scene_not_exist: - continue - available_scenes.append(scene) - print('exist scene num: {}'.format(len(available_scenes))) - return available_scenes - - -def _fill_trainval_infos(nusc, - train_scenes, - val_scenes, - test=False, - max_sweeps=10): - """Generate the train/val infos from the raw data. - - Args: - nusc (:obj:`NuScenes`): Dataset class in the nuScenes dataset. - train_scenes (list[str]): Basic information of training scenes. - val_scenes (list[str]): Basic information of validation scenes. - test (bool, optional): Whether use the test mode. In test mode, no - annotations can be accessed. Default: False. - max_sweeps (int, optional): Max number of sweeps. Default: 10. - - Returns: - tuple[list[dict]]: Information of training set and validation set - that will be saved to the info file. 
- """ - train_nusc_infos = [] - val_nusc_infos = [] - - for sample in mmcv.track_iter_progress(nusc.sample): - lidar_token = sample['data']['LIDAR_TOP'] - sd_rec = nusc.get('sample_data', sample['data']['LIDAR_TOP']) - cs_record = nusc.get('calibrated_sensor', - sd_rec['calibrated_sensor_token']) - pose_record = nusc.get('ego_pose', sd_rec['ego_pose_token']) - lidar_path, boxes, _ = nusc.get_sample_data(lidar_token) - - mmcv.check_file_exist(lidar_path) - - info = { - 'lidar_path': lidar_path, - 'token': sample['token'], - 'sweeps': [], - 'cams': dict(), - 'lidar2ego_translation': cs_record['translation'], - 'lidar2ego_rotation': cs_record['rotation'], - 'ego2global_translation': pose_record['translation'], - 'ego2global_rotation': pose_record['rotation'], - 'timestamp': sample['timestamp'], - } - - l2e_r = info['lidar2ego_rotation'] - l2e_t = info['lidar2ego_translation'] - e2g_r = info['ego2global_rotation'] - e2g_t = info['ego2global_translation'] - l2e_r_mat = Quaternion(l2e_r).rotation_matrix - e2g_r_mat = Quaternion(e2g_r).rotation_matrix - - # obtain 6 image's information per frame - camera_types = [ - 'CAM_FRONT', - 'CAM_FRONT_RIGHT', - 'CAM_FRONT_LEFT', - 'CAM_BACK', - 'CAM_BACK_LEFT', - 'CAM_BACK_RIGHT', - ] - for cam in camera_types: - cam_token = sample['data'][cam] - cam_path, _, cam_intrinsic = nusc.get_sample_data(cam_token) - cam_info = obtain_sensor2top(nusc, cam_token, l2e_t, l2e_r_mat, - e2g_t, e2g_r_mat, cam) - cam_info.update(cam_intrinsic=cam_intrinsic) - info['cams'].update({cam: cam_info}) - - # obtain sweeps for a single key-frame - sd_rec = nusc.get('sample_data', sample['data']['LIDAR_TOP']) - sweeps = [] - while len(sweeps) < max_sweeps: - if not sd_rec['prev'] == '': - sweep = obtain_sensor2top(nusc, sd_rec['prev'], l2e_t, - l2e_r_mat, e2g_t, e2g_r_mat, 'lidar') - sweeps.append(sweep) - sd_rec = nusc.get('sample_data', sd_rec['prev']) - else: - break - info['sweeps'] = sweeps - # obtain annotation - if not test: - annotations = [ - nusc.get('sample_annotation', token) - for token in sample['anns'] - ] - locs = np.array([b.center for b in boxes]).reshape(-1, 3) - dims = np.array([b.wlh for b in boxes]).reshape(-1, 3) - rots = np.array([b.orientation.yaw_pitch_roll[0] - for b in boxes]).reshape(-1, 1) - velocity = np.array( - [nusc.box_velocity(token)[:2] for token in sample['anns']]) - valid_flag = np.array( - [(anno['num_lidar_pts'] + anno['num_radar_pts']) > 0 - for anno in annotations], - dtype=bool).reshape(-1) - # convert velo from global to lidar - for i in range(len(boxes)): - velo = np.array([*velocity[i], 0.0]) - velo = velo @ np.linalg.inv(e2g_r_mat).T @ np.linalg.inv( - l2e_r_mat).T - velocity[i] = velo[:2] - - names = [b.name for b in boxes] - for i in range(len(names)): - if names[i] in NuScenesDataset.NameMapping: - names[i] = NuScenesDataset.NameMapping[names[i]] - names = np.array(names) - # we need to convert box size to - # the format of our lidar coordinate system - # which is x_size, y_size, z_size (corresponding to l, w, h) - gt_boxes = np.concatenate([locs, dims[:, [1, 0, 2]], rots], axis=1) - assert len(gt_boxes) == len( - annotations), f'{len(gt_boxes)}, {len(annotations)}' - info['gt_boxes'] = gt_boxes - info['gt_names'] = names - info['gt_velocity'] = velocity.reshape(-1, 2) - info['num_lidar_pts'] = np.array( - [a['num_lidar_pts'] for a in annotations]) - info['num_radar_pts'] = np.array( - [a['num_radar_pts'] for a in annotations]) - info['valid_flag'] = valid_flag - - if sample['scene_token'] in train_scenes: - 
train_nusc_infos.append(info) - else: - val_nusc_infos.append(info) - - return train_nusc_infos, val_nusc_infos - - -def obtain_sensor2top(nusc, - sensor_token, - l2e_t, - l2e_r_mat, - e2g_t, - e2g_r_mat, - sensor_type='lidar'): - """Obtain the info with RT matric from general sensor to Top LiDAR. - - Args: - nusc (class): Dataset class in the nuScenes dataset. - sensor_token (str): Sample data token corresponding to the - specific sensor type. - l2e_t (np.ndarray): Translation from lidar to ego in shape (1, 3). - l2e_r_mat (np.ndarray): Rotation matrix from lidar to ego - in shape (3, 3). - e2g_t (np.ndarray): Translation from ego to global in shape (1, 3). - e2g_r_mat (np.ndarray): Rotation matrix from ego to global - in shape (3, 3). - sensor_type (str, optional): Sensor to calibrate. Default: 'lidar'. - - Returns: - sweep (dict): Sweep information after transformation. - """ - sd_rec = nusc.get('sample_data', sensor_token) - cs_record = nusc.get('calibrated_sensor', - sd_rec['calibrated_sensor_token']) - pose_record = nusc.get('ego_pose', sd_rec['ego_pose_token']) - data_path = str(nusc.get_sample_data_path(sd_rec['token'])) - if os.getcwd() in data_path: # path from lyftdataset is absolute path - data_path = data_path.split(f'{os.getcwd()}/')[-1] # relative path - sweep = { - 'data_path': data_path, - 'type': sensor_type, - 'sample_data_token': sd_rec['token'], - 'sensor2ego_translation': cs_record['translation'], - 'sensor2ego_rotation': cs_record['rotation'], - 'ego2global_translation': pose_record['translation'], - 'ego2global_rotation': pose_record['rotation'], - 'timestamp': sd_rec['timestamp'] - } - l2e_r_s = sweep['sensor2ego_rotation'] - l2e_t_s = sweep['sensor2ego_translation'] - e2g_r_s = sweep['ego2global_rotation'] - e2g_t_s = sweep['ego2global_translation'] - - # obtain the RT from sensor to Top LiDAR - # sweep->ego->global->ego'->lidar - l2e_r_s_mat = Quaternion(l2e_r_s).rotation_matrix - e2g_r_s_mat = Quaternion(e2g_r_s).rotation_matrix - R = (l2e_r_s_mat.T @ e2g_r_s_mat.T) @ ( - np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T) - T = (l2e_t_s @ e2g_r_s_mat.T + e2g_t_s) @ ( - np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T) - T -= e2g_t @ (np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T - ) + l2e_t @ np.linalg.inv(l2e_r_mat).T - sweep['sensor2lidar_rotation'] = R.T # points @ R.T + T - sweep['sensor2lidar_translation'] = T - return sweep - - -def export_2d_annotation(root_path, info_path, version, mono3d=True): - """Export 2d annotation from the info file and raw data. - - Args: - root_path (str): Root path of the raw data. - info_path (str): Path of the info file. - version (str): Dataset version. - mono3d (bool, optional): Whether to export mono3d annotation. - Default: True. 
- """ - # get bbox annotations for camera - camera_types = [ - 'CAM_FRONT', - 'CAM_FRONT_RIGHT', - 'CAM_FRONT_LEFT', - 'CAM_BACK', - 'CAM_BACK_LEFT', - 'CAM_BACK_RIGHT', - ] - nusc_infos = mmcv.load(info_path)['infos'] - nusc = NuScenes(version=version, dataroot=root_path, verbose=True) - # info_2d_list = [] - cat2Ids = [ - dict(id=nus_categories.index(cat_name), name=cat_name) - for cat_name in nus_categories - ] - coco_ann_id = 0 - coco_2d_dict = dict(annotations=[], images=[], categories=cat2Ids) - for info in mmcv.track_iter_progress(nusc_infos): - for cam in camera_types: - cam_info = info['cams'][cam] - coco_infos = get_2d_boxes( - nusc, - cam_info['sample_data_token'], - visibilities=['', '1', '2', '3', '4'], - mono3d=mono3d) - (height, width, _) = mmcv.imread(cam_info['data_path']).shape - coco_2d_dict['images'].append( - dict( - file_name=cam_info['data_path'].split('data/nuscenes/') - [-1], - id=cam_info['sample_data_token'], - token=info['token'], - cam2ego_rotation=cam_info['sensor2ego_rotation'], - cam2ego_translation=cam_info['sensor2ego_translation'], - ego2global_rotation=info['ego2global_rotation'], - ego2global_translation=info['ego2global_translation'], - cam_intrinsic=cam_info['cam_intrinsic'], - width=width, - height=height)) - for coco_info in coco_infos: - if coco_info is None: - continue - # add an empty key for coco format - coco_info['segmentation'] = [] - coco_info['id'] = coco_ann_id - coco_2d_dict['annotations'].append(coco_info) - coco_ann_id += 1 - if mono3d: - json_prefix = f'{info_path[:-4]}_mono3d' - else: - json_prefix = f'{info_path[:-4]}' - mmcv.dump(coco_2d_dict, f'{json_prefix}.coco.json') - - -def get_2d_boxes(nusc, - sample_data_token: str, - visibilities: List[str], - mono3d=True): - """Get the 2D annotation records for a given `sample_data_token`. - - Args: - sample_data_token (str): Sample data token belonging to a camera - keyframe. - visibilities (list[str]): Visibility filter. - mono3d (bool): Whether to get boxes with mono3d annotation. - - Return: - list[dict]: List of 2D annotation record that belongs to the input - `sample_data_token`. - """ - - # Get the sample data and the sample corresponding to that sample data. - sd_rec = nusc.get('sample_data', sample_data_token) - - assert sd_rec[ - 'sensor_modality'] == 'camera', 'Error: get_2d_boxes only works' \ - ' for camera sample_data!' - if not sd_rec['is_key_frame']: - raise ValueError( - 'The 2D re-projections are available only for keyframes.') - - s_rec = nusc.get('sample', sd_rec['sample_token']) - - # Get the calibrated sensor and ego pose - # record to get the transformation matrices. - cs_rec = nusc.get('calibrated_sensor', sd_rec['calibrated_sensor_token']) - pose_rec = nusc.get('ego_pose', sd_rec['ego_pose_token']) - camera_intrinsic = np.array(cs_rec['camera_intrinsic']) - - # Get all the annotation with the specified visibilties. - ann_recs = [ - nusc.get('sample_annotation', token) for token in s_rec['anns'] - ] - ann_recs = [ - ann_rec for ann_rec in ann_recs - if (ann_rec['visibility_token'] in visibilities) - ] - - repro_recs = [] - - for ann_rec in ann_recs: - # Augment sample_annotation with token information. - ann_rec['sample_annotation_token'] = ann_rec['token'] - ann_rec['sample_data_token'] = sample_data_token - - # Get the box in global coordinates. - box = nusc.get_box(ann_rec['token']) - - # Move them to the ego-pose frame. 
- box.translate(-np.array(pose_rec['translation'])) - box.rotate(Quaternion(pose_rec['rotation']).inverse) - - # Move them to the calibrated sensor frame. - box.translate(-np.array(cs_rec['translation'])) - box.rotate(Quaternion(cs_rec['rotation']).inverse) - - # Filter out the corners that are not in front of the calibrated - # sensor. - corners_3d = box.corners() - in_front = np.argwhere(corners_3d[2, :] > 0).flatten() - corners_3d = corners_3d[:, in_front] - - # Project 3d box to 2d. - corner_coords = view_points(corners_3d, camera_intrinsic, - True).T[:, :2].tolist() - - # Keep only corners that fall within the image. - final_coords = post_process_coords(corner_coords) - - # Skip if the convex hull of the re-projected corners - # does not intersect the image canvas. - if final_coords is None: - continue - else: - min_x, min_y, max_x, max_y = final_coords - - # Generate dictionary record to be included in the .json file. - repro_rec = generate_record(ann_rec, min_x, min_y, max_x, max_y, - sample_data_token, sd_rec['filename']) - - # If mono3d=True, add 3D annotations in camera coordinates - if mono3d and (repro_rec is not None): - loc = box.center.tolist() - - dim = box.wlh - dim[[0, 1, 2]] = dim[[1, 2, 0]] # convert wlh to our lhw - dim = dim.tolist() - - rot = box.orientation.yaw_pitch_roll[0] - rot = [-rot] # convert the rot to our cam coordinate - - global_velo2d = nusc.box_velocity(box.token)[:2] - global_velo3d = np.array([*global_velo2d, 0.0]) - e2g_r_mat = Quaternion(pose_rec['rotation']).rotation_matrix - c2e_r_mat = Quaternion(cs_rec['rotation']).rotation_matrix - cam_velo3d = global_velo3d @ np.linalg.inv( - e2g_r_mat).T @ np.linalg.inv(c2e_r_mat).T - velo = cam_velo3d[0::2].tolist() - - repro_rec['bbox_cam3d'] = loc + dim + rot - repro_rec['velo_cam3d'] = velo - - center3d = np.array(loc).reshape([1, 3]) - center2d = points_cam2img( - center3d, camera_intrinsic, with_depth=True) - repro_rec['center2d'] = center2d.squeeze().tolist() - # normalized center2D + depth - # if samples with depth < 0 will be removed - if repro_rec['center2d'][2] <= 0: - continue - - ann_token = nusc.get('sample_annotation', - box.token)['attribute_tokens'] - if len(ann_token) == 0: - attr_name = 'None' - else: - attr_name = nusc.get('attribute', ann_token[0])['name'] - attr_id = nus_attributes.index(attr_name) - repro_rec['attribute_name'] = attr_name - repro_rec['attribute_id'] = attr_id - - repro_recs.append(repro_rec) - - return repro_recs - - -def post_process_coords( - corner_coords: List, imsize: Tuple[int, int] = (1600, 900) -) -> Union[Tuple[float, float, float, float], None]: - """Get the intersection of the convex hull of the reprojected bbox corners - and the image canvas, return None if no intersection. - - Args: - corner_coords (list[int]): Corner coordinates of reprojected - bounding box. - imsize (tuple[int]): Size of the image canvas. - - Return: - tuple [float]: Intersection of the convex hull of the 2D box - corners and the image canvas. 
- """ - polygon_from_2d_box = MultiPoint(corner_coords).convex_hull - img_canvas = box(0, 0, imsize[0], imsize[1]) - - if polygon_from_2d_box.intersects(img_canvas): - img_intersection = polygon_from_2d_box.intersection(img_canvas) - intersection_coords = np.array( - [coord for coord in img_intersection.exterior.coords]) - - min_x = min(intersection_coords[:, 0]) - min_y = min(intersection_coords[:, 1]) - max_x = max(intersection_coords[:, 0]) - max_y = max(intersection_coords[:, 1]) - - return min_x, min_y, max_x, max_y - else: - return None - - -def generate_record(ann_rec: dict, x1: float, y1: float, x2: float, y2: float, - sample_data_token: str, filename: str) -> OrderedDict: - """Generate one 2D annotation record given various information on top of - the 2D bounding box coordinates. - - Args: - ann_rec (dict): Original 3d annotation record. - x1 (float): Minimum value of the x coordinate. - y1 (float): Minimum value of the y coordinate. - x2 (float): Maximum value of the x coordinate. - y2 (float): Maximum value of the y coordinate. - sample_data_token (str): Sample data token. - filename (str):The corresponding image file where the annotation - is present. - - Returns: - dict: A sample 2D annotation record. - - file_name (str): file name - - image_id (str): sample data token - - area (float): 2d box area - - category_name (str): category name - - category_id (int): category id - - bbox (list[float]): left x, top y, dx, dy of 2d box - - iscrowd (int): whether the area is crowd - """ - repro_rec = OrderedDict() - repro_rec['sample_data_token'] = sample_data_token - coco_rec = dict() - - relevant_keys = [ - 'attribute_tokens', - 'category_name', - 'instance_token', - 'next', - 'num_lidar_pts', - 'num_radar_pts', - 'prev', - 'sample_annotation_token', - 'sample_data_token', - 'visibility_token', - ] - - for key, value in ann_rec.items(): - if key in relevant_keys: - repro_rec[key] = value - - repro_rec['bbox_corners'] = [x1, y1, x2, y2] - repro_rec['filename'] = filename - - coco_rec['file_name'] = filename - coco_rec['image_id'] = sample_data_token - coco_rec['area'] = (y2 - y1) * (x2 - x1) - - if repro_rec['category_name'] not in NuScenesDataset.NameMapping: - return None - cat_name = NuScenesDataset.NameMapping[repro_rec['category_name']] - coco_rec['category_name'] = cat_name - coco_rec['category_id'] = nus_categories.index(cat_name) - coco_rec['bbox'] = [x1, y1, x2 - x1, y2 - y1] - coco_rec['iscrowd'] = 0 - - return coco_rec diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/s3dis_data_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/s3dis_data_utils.py deleted file mode 100644 index 751688f7..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/s3dis_data_utils.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -from concurrent import futures as futures -from os import path as osp - -import mmcv -import numpy as np - - -class S3DISData(object): - """S3DIS data. - - Generate s3dis infos for s3dis_converter. - - Args: - root_path (str): Root path of the raw data. - split (str, optional): Set split type of the data. Default: 'Area_1'. - """ - - def __init__(self, root_path, split='Area_1'): - self.root_dir = root_path - self.split = split - self.data_dir = osp.join(root_path, - 'Stanford3dDataset_v1.2_Aligned_Version') - - # Following `GSDN `_, use 5 furniture - # classes for detection: table, chair, sofa, bookcase, board. 
- self.cat_ids = np.array([7, 8, 9, 10, 11]) - self.cat_ids2class = { - cat_id: i - for i, cat_id in enumerate(list(self.cat_ids)) - } - - assert split in [ - 'Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_5', 'Area_6' - ] - self.sample_id_list = os.listdir(osp.join(self.data_dir, - split)) # conferenceRoom_1 - for sample_id in self.sample_id_list: - if os.path.isfile(osp.join(self.data_dir, split, sample_id)): - self.sample_id_list.remove(sample_id) - - def __len__(self): - return len(self.sample_id_list) - - def get_infos(self, num_workers=4, has_label=True, sample_id_list=None): - """Get data infos. - - This method gets information from the raw data. - - Args: - num_workers (int, optional): Number of threads to be used. - Default: 4. - has_label (bool, optional): Whether the data has label. - Default: True. - sample_id_list (list[int], optional): Index list of the sample. - Default: None. - - Returns: - infos (list[dict]): Information of the raw data. - """ - - def process_single_scene(sample_idx): - print(f'{self.split} sample_idx: {sample_idx}') - info = dict() - pc_info = { - 'num_features': 6, - 'lidar_idx': f'{self.split}_{sample_idx}' - } - info['point_cloud'] = pc_info - pts_filename = osp.join(self.root_dir, 's3dis_data', - f'{self.split}_{sample_idx}_point.npy') - pts_instance_mask_path = osp.join( - self.root_dir, 's3dis_data', - f'{self.split}_{sample_idx}_ins_label.npy') - pts_semantic_mask_path = osp.join( - self.root_dir, 's3dis_data', - f'{self.split}_{sample_idx}_sem_label.npy') - - points = np.load(pts_filename).astype(np.float32) - pts_instance_mask = np.load(pts_instance_mask_path).astype(np.int) - pts_semantic_mask = np.load(pts_semantic_mask_path).astype(np.int) - - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'points')) - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'instance_mask')) - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'semantic_mask')) - - points.tofile( - osp.join(self.root_dir, 'points', - f'{self.split}_{sample_idx}.bin')) - pts_instance_mask.tofile( - osp.join(self.root_dir, 'instance_mask', - f'{self.split}_{sample_idx}.bin')) - pts_semantic_mask.tofile( - osp.join(self.root_dir, 'semantic_mask', - f'{self.split}_{sample_idx}.bin')) - - info['pts_path'] = osp.join('points', - f'{self.split}_{sample_idx}.bin') - info['pts_instance_mask_path'] = osp.join( - 'instance_mask', f'{self.split}_{sample_idx}.bin') - info['pts_semantic_mask_path'] = osp.join( - 'semantic_mask', f'{self.split}_{sample_idx}.bin') - info['annos'] = self.get_bboxes(points, pts_instance_mask, - pts_semantic_mask) - - return info - - sample_id_list = sample_id_list if sample_id_list is not None \ - else self.sample_id_list - with futures.ThreadPoolExecutor(num_workers) as executor: - infos = executor.map(process_single_scene, sample_id_list) - return list(infos) - - def get_bboxes(self, points, pts_instance_mask, pts_semantic_mask): - """Convert instance masks to axis-aligned bounding boxes. - - Args: - points (np.array): Scene points of shape (n, 6). - pts_instance_mask (np.ndarray): Instance labels of shape (n,). - pts_semantic_mask (np.ndarray): Semantic labels of shape (n,). - - Returns: - dict: A dict containing detection infos with following keys: - - - gt_boxes_upright_depth (np.ndarray): Bounding boxes - of shape (n, 6) - - class (np.ndarray): Box labels of shape (n,) - - gt_num (int): Number of boxes. 
- """ - bboxes, labels = [], [] - for i in range(1, pts_instance_mask.max()): - ids = pts_instance_mask == i - # check if all instance points have same semantic label - assert pts_semantic_mask[ids].min() == pts_semantic_mask[ids].max() - label = pts_semantic_mask[ids][0] - # keep only furniture objects - if label in self.cat_ids2class: - labels.append(self.cat_ids2class[pts_semantic_mask[ids][0]]) - pts = points[:, :3][ids] - min_pts = pts.min(axis=0) - max_pts = pts.max(axis=0) - locations = (min_pts + max_pts) / 2 - dimensions = max_pts - min_pts - bboxes.append(np.concatenate((locations, dimensions))) - annotation = dict() - # follow ScanNet and SUN RGB-D keys - annotation['gt_boxes_upright_depth'] = np.array(bboxes) - annotation['class'] = np.array(labels) - annotation['gt_num'] = len(labels) - return annotation - - -class S3DISSegData(object): - """S3DIS dataset used to generate infos for semantic segmentation task. - - Args: - data_root (str): Root path of the raw data. - ann_file (str): The generated scannet infos. - split (str, optional): Set split type of the data. Default: 'train'. - num_points (int, optional): Number of points in each data input. - Default: 8192. - label_weight_func (function, optional): Function to compute the - label weight. Default: None. - """ - - def __init__(self, - data_root, - ann_file, - split='Area_1', - num_points=4096, - label_weight_func=None): - self.data_root = data_root - self.data_infos = mmcv.load(ann_file) - self.split = split - self.num_points = num_points - - self.all_ids = np.arange(13) # all possible ids - self.cat_ids = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, - 12]) # used for seg task - self.ignore_index = len(self.cat_ids) - - self.cat_id2class = np.ones((self.all_ids.shape[0],), dtype=np.int) * \ - self.ignore_index - for i, cat_id in enumerate(self.cat_ids): - self.cat_id2class[cat_id] = i - - # label weighting function is taken from - # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24 - self.label_weight_func = (lambda x: 1.0 / np.log(1.2 + x)) if \ - label_weight_func is None else label_weight_func - - def get_seg_infos(self): - scene_idxs, label_weight = self.get_scene_idxs_and_label_weight() - save_folder = osp.join(self.data_root, 'seg_info') - mmcv.mkdir_or_exist(save_folder) - np.save( - osp.join(save_folder, f'{self.split}_resampled_scene_idxs.npy'), - scene_idxs) - np.save( - osp.join(save_folder, f'{self.split}_label_weight.npy'), - label_weight) - print(f'{self.split} resampled scene index and label weight saved') - - def _convert_to_label(self, mask): - """Convert class_id in loaded segmentation mask to label.""" - if isinstance(mask, str): - if mask.endswith('npy'): - mask = np.load(mask) - else: - mask = np.fromfile(mask, dtype=np.int64) - label = self.cat_id2class[mask] - return label - - def get_scene_idxs_and_label_weight(self): - """Compute scene_idxs for data sampling and label weight for loss - calculation. - - We sample more times for scenes with more points. Label_weight is - inversely proportional to number of class points. 
- """ - num_classes = len(self.cat_ids) - num_point_all = [] - label_weight = np.zeros((num_classes + 1, )) # ignore_index - for data_info in self.data_infos: - label = self._convert_to_label( - osp.join(self.data_root, data_info['pts_semantic_mask_path'])) - num_point_all.append(label.shape[0]) - class_count, _ = np.histogram(label, range(num_classes + 2)) - label_weight += class_count - - # repeat scene_idx for num_scene_point // num_sample_point times - sample_prob = np.array(num_point_all) / float(np.sum(num_point_all)) - num_iter = int(np.sum(num_point_all) / float(self.num_points)) - scene_idxs = [] - for idx in range(len(self.data_infos)): - scene_idxs.extend([idx] * int(round(sample_prob[idx] * num_iter))) - scene_idxs = np.array(scene_idxs).astype(np.int32) - - # calculate label weight, adopted from PointNet++ - label_weight = label_weight[:-1].astype(np.float32) - label_weight = label_weight / label_weight.sum() - label_weight = self.label_weight_func(label_weight).astype(np.float32) - - return scene_idxs, label_weight diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/scannet_data_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/scannet_data_utils.py deleted file mode 100644 index 085d401c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/scannet_data_utils.py +++ /dev/null @@ -1,297 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -from concurrent import futures as futures -from os import path as osp - -import mmcv -import numpy as np - - -class ScanNetData(object): - """ScanNet data. - - Generate scannet infos for scannet_converter. - - Args: - root_path (str): Root path of the raw data. - split (str, optional): Set split type of the data. Default: 'train'. 
- """ - - def __init__(self, root_path, split='train'): - self.root_dir = root_path - self.split = split - self.split_dir = osp.join(root_path) - self.classes = [ - 'cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window', - 'bookshelf', 'picture', 'counter', 'desk', 'curtain', - 'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub', - 'garbagebin' - ] - self.cat2label = {cat: self.classes.index(cat) for cat in self.classes} - self.label2cat = {self.cat2label[t]: t for t in self.cat2label} - self.cat_ids = np.array( - [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, 36, 39]) - self.cat_ids2class = { - nyu40id: i - for i, nyu40id in enumerate(list(self.cat_ids)) - } - assert split in ['train', 'val', 'test'] - split_file = osp.join(self.root_dir, 'meta_data', - f'scannetv2_{split}.txt') - mmcv.check_file_exist(split_file) - self.sample_id_list = mmcv.list_from_file(split_file) - self.test_mode = (split == 'test') - - def __len__(self): - return len(self.sample_id_list) - - def get_aligned_box_label(self, idx): - box_file = osp.join(self.root_dir, 'scannet_instance_data', - f'{idx}_aligned_bbox.npy') - mmcv.check_file_exist(box_file) - return np.load(box_file) - - def get_unaligned_box_label(self, idx): - box_file = osp.join(self.root_dir, 'scannet_instance_data', - f'{idx}_unaligned_bbox.npy') - mmcv.check_file_exist(box_file) - return np.load(box_file) - - def get_axis_align_matrix(self, idx): - matrix_file = osp.join(self.root_dir, 'scannet_instance_data', - f'{idx}_axis_align_matrix.npy') - mmcv.check_file_exist(matrix_file) - return np.load(matrix_file) - - def get_images(self, idx): - paths = [] - path = osp.join(self.root_dir, 'posed_images', idx) - for file in sorted(os.listdir(path)): - if file.endswith('.jpg'): - paths.append(osp.join('posed_images', idx, file)) - return paths - - def get_extrinsics(self, idx): - extrinsics = [] - path = osp.join(self.root_dir, 'posed_images', idx) - for file in sorted(os.listdir(path)): - if file.endswith('.txt') and not file == 'intrinsic.txt': - extrinsics.append(np.loadtxt(osp.join(path, file))) - return extrinsics - - def get_intrinsics(self, idx): - matrix_file = osp.join(self.root_dir, 'posed_images', idx, - 'intrinsic.txt') - mmcv.check_file_exist(matrix_file) - return np.loadtxt(matrix_file) - - def get_infos(self, num_workers=4, has_label=True, sample_id_list=None): - """Get data infos. - - This method gets information from the raw data. - - Args: - num_workers (int, optional): Number of threads to be used. - Default: 4. - has_label (bool, optional): Whether the data has label. - Default: True. - sample_id_list (list[int], optional): Index list of the sample. - Default: None. - - Returns: - infos (list[dict]): Information of the raw data. 
- """ - - def process_single_scene(sample_idx): - print(f'{self.split} sample_idx: {sample_idx}') - info = dict() - pc_info = {'num_features': 6, 'lidar_idx': sample_idx} - info['point_cloud'] = pc_info - pts_filename = osp.join(self.root_dir, 'scannet_instance_data', - f'{sample_idx}_vert.npy') - points = np.load(pts_filename) - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'points')) - points.tofile( - osp.join(self.root_dir, 'points', f'{sample_idx}.bin')) - info['pts_path'] = osp.join('points', f'{sample_idx}.bin') - - # update with RGB image paths if exist - if os.path.exists(osp.join(self.root_dir, 'posed_images')): - info['intrinsics'] = self.get_intrinsics(sample_idx) - all_extrinsics = self.get_extrinsics(sample_idx) - all_img_paths = self.get_images(sample_idx) - # some poses in ScanNet are invalid - extrinsics, img_paths = [], [] - for extrinsic, img_path in zip(all_extrinsics, all_img_paths): - if np.all(np.isfinite(extrinsic)): - img_paths.append(img_path) - extrinsics.append(extrinsic) - info['extrinsics'] = extrinsics - info['img_paths'] = img_paths - - if not self.test_mode: - pts_instance_mask_path = osp.join( - self.root_dir, 'scannet_instance_data', - f'{sample_idx}_ins_label.npy') - pts_semantic_mask_path = osp.join( - self.root_dir, 'scannet_instance_data', - f'{sample_idx}_sem_label.npy') - - pts_instance_mask = np.load(pts_instance_mask_path).astype( - np.int64) - pts_semantic_mask = np.load(pts_semantic_mask_path).astype( - np.int64) - - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'instance_mask')) - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'semantic_mask')) - - pts_instance_mask.tofile( - osp.join(self.root_dir, 'instance_mask', - f'{sample_idx}.bin')) - pts_semantic_mask.tofile( - osp.join(self.root_dir, 'semantic_mask', - f'{sample_idx}.bin')) - - info['pts_instance_mask_path'] = osp.join( - 'instance_mask', f'{sample_idx}.bin') - info['pts_semantic_mask_path'] = osp.join( - 'semantic_mask', f'{sample_idx}.bin') - - if has_label: - annotations = {} - # box is of shape [k, 6 + class] - aligned_box_label = self.get_aligned_box_label(sample_idx) - unaligned_box_label = self.get_unaligned_box_label(sample_idx) - annotations['gt_num'] = aligned_box_label.shape[0] - if annotations['gt_num'] != 0: - aligned_box = aligned_box_label[:, :-1] # k, 6 - unaligned_box = unaligned_box_label[:, :-1] - classes = aligned_box_label[:, -1] # k - annotations['name'] = np.array([ - self.label2cat[self.cat_ids2class[classes[i]]] - for i in range(annotations['gt_num']) - ]) - # default names are given to aligned bbox for compatibility - # we also save unaligned bbox info with marked names - annotations['location'] = aligned_box[:, :3] - annotations['dimensions'] = aligned_box[:, 3:6] - annotations['gt_boxes_upright_depth'] = aligned_box - annotations['unaligned_location'] = unaligned_box[:, :3] - annotations['unaligned_dimensions'] = unaligned_box[:, 3:6] - annotations[ - 'unaligned_gt_boxes_upright_depth'] = unaligned_box - annotations['index'] = np.arange( - annotations['gt_num'], dtype=np.int32) - annotations['class'] = np.array([ - self.cat_ids2class[classes[i]] - for i in range(annotations['gt_num']) - ]) - axis_align_matrix = self.get_axis_align_matrix(sample_idx) - annotations['axis_align_matrix'] = axis_align_matrix # 4x4 - info['annos'] = annotations - return info - - sample_id_list = sample_id_list if sample_id_list is not None \ - else self.sample_id_list - with futures.ThreadPoolExecutor(num_workers) as executor: - infos = executor.map(process_single_scene, sample_id_list) 
- return list(infos) - - -class ScanNetSegData(object): - """ScanNet dataset used to generate infos for semantic segmentation task. - - Args: - data_root (str): Root path of the raw data. - ann_file (str): The generated scannet infos. - split (str, optional): Set split type of the data. Default: 'train'. - num_points (int, optional): Number of points in each data input. - Default: 8192. - label_weight_func (function, optional): Function to compute the - label weight. Default: None. - """ - - def __init__(self, - data_root, - ann_file, - split='train', - num_points=8192, - label_weight_func=None): - self.data_root = data_root - self.data_infos = mmcv.load(ann_file) - self.split = split - assert split in ['train', 'val', 'test'] - self.num_points = num_points - - self.all_ids = np.arange(41) # all possible ids - self.cat_ids = np.array([ - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 24, 28, 33, 34, 36, - 39 - ]) # used for seg task - self.ignore_index = len(self.cat_ids) - - self.cat_id2class = np.ones((self.all_ids.shape[0],), dtype=np.int) * \ - self.ignore_index - for i, cat_id in enumerate(self.cat_ids): - self.cat_id2class[cat_id] = i - - # label weighting function is taken from - # https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24 - self.label_weight_func = (lambda x: 1.0 / np.log(1.2 + x)) if \ - label_weight_func is None else label_weight_func - - def get_seg_infos(self): - if self.split == 'test': - return - scene_idxs, label_weight = self.get_scene_idxs_and_label_weight() - save_folder = osp.join(self.data_root, 'seg_info') - mmcv.mkdir_or_exist(save_folder) - np.save( - osp.join(save_folder, f'{self.split}_resampled_scene_idxs.npy'), - scene_idxs) - np.save( - osp.join(save_folder, f'{self.split}_label_weight.npy'), - label_weight) - print(f'{self.split} resampled scene index and label weight saved') - - def _convert_to_label(self, mask): - """Convert class_id in loaded segmentation mask to label.""" - if isinstance(mask, str): - if mask.endswith('npy'): - mask = np.load(mask) - else: - mask = np.fromfile(mask, dtype=np.int64) - label = self.cat_id2class[mask] - return label - - def get_scene_idxs_and_label_weight(self): - """Compute scene_idxs for data sampling and label weight for loss - calculation. - - We sample more times for scenes with more points. Label_weight is - inversely proportional to number of class points. 
- """ - num_classes = len(self.cat_ids) - num_point_all = [] - label_weight = np.zeros((num_classes + 1, )) # ignore_index - for data_info in self.data_infos: - label = self._convert_to_label( - osp.join(self.data_root, data_info['pts_semantic_mask_path'])) - num_point_all.append(label.shape[0]) - class_count, _ = np.histogram(label, range(num_classes + 2)) - label_weight += class_count - - # repeat scene_idx for num_scene_point // num_sample_point times - sample_prob = np.array(num_point_all) / float(np.sum(num_point_all)) - num_iter = int(np.sum(num_point_all) / float(self.num_points)) - scene_idxs = [] - for idx in range(len(self.data_infos)): - scene_idxs.extend([idx] * int(round(sample_prob[idx] * num_iter))) - scene_idxs = np.array(scene_idxs).astype(np.int32) - - # calculate label weight, adopted from PointNet++ - label_weight = label_weight[:-1].astype(np.float32) - label_weight = label_weight / label_weight.sum() - label_weight = self.label_weight_func(label_weight).astype(np.float32) - - return scene_idxs, label_weight diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/sunrgbd_data_utils.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/sunrgbd_data_utils.py deleted file mode 100644 index 152ea42f..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/sunrgbd_data_utils.py +++ /dev/null @@ -1,226 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from concurrent import futures as futures -from os import path as osp - -import mmcv -import numpy as np -from scipy import io as sio - - -def random_sampling(points, num_points, replace=None, return_choices=False): - """Random sampling. - - Sampling point cloud to a certain number of points. - - Args: - points (ndarray): Point cloud. - num_points (int): The number of samples. - replace (bool): Whether the sample is with or without replacement. - return_choices (bool): Whether to return choices. - - Returns: - points (ndarray): Point cloud after sampling. - """ - - if replace is None: - replace = (points.shape[0] < num_points) - choices = np.random.choice(points.shape[0], num_points, replace=replace) - if return_choices: - return points[choices], choices - else: - return points[choices] - - -class SUNRGBDInstance(object): - - def __init__(self, line): - data = line.split(' ') - data[1:] = [float(x) for x in data[1:]] - self.classname = data[0] - self.xmin = data[1] - self.ymin = data[2] - self.xmax = data[1] + data[3] - self.ymax = data[2] + data[4] - self.box2d = np.array([self.xmin, self.ymin, self.xmax, self.ymax]) - self.centroid = np.array([data[5], data[6], data[7]]) - self.width = data[8] - self.length = data[9] - self.height = data[10] - # data[9] is x_size (length), data[8] is y_size (width), data[10] is - # z_size (height) in our depth coordinate system, - # l corresponds to the size along the x axis - self.size = np.array([data[9], data[8], data[10]]) * 2 - self.orientation = np.zeros((3, )) - self.orientation[0] = data[11] - self.orientation[1] = data[12] - self.heading_angle = np.arctan2(self.orientation[1], - self.orientation[0]) - self.box3d = np.concatenate( - [self.centroid, self.size, self.heading_angle[None]]) - - -class SUNRGBDData(object): - """SUNRGBD data. - - Generate scannet infos for sunrgbd_converter. - - Args: - root_path (str): Root path of the raw data. - split (str, optional): Set split type of the data. Default: 'train'. - use_v1 (bool, optional): Whether to use v1. Default: False. 
- """ - - def __init__(self, root_path, split='train', use_v1=False): - self.root_dir = root_path - self.split = split - self.split_dir = osp.join(root_path, 'sunrgbd_trainval') - self.classes = [ - 'bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser', - 'night_stand', 'bookshelf', 'bathtub' - ] - self.cat2label = {cat: self.classes.index(cat) for cat in self.classes} - self.label2cat = { - label: self.classes[label] - for label in range(len(self.classes)) - } - assert split in ['train', 'val', 'test'] - split_file = osp.join(self.split_dir, f'{split}_data_idx.txt') - mmcv.check_file_exist(split_file) - self.sample_id_list = map(int, mmcv.list_from_file(split_file)) - self.image_dir = osp.join(self.split_dir, 'image') - self.calib_dir = osp.join(self.split_dir, 'calib') - self.depth_dir = osp.join(self.split_dir, 'depth') - if use_v1: - self.label_dir = osp.join(self.split_dir, 'label_v1') - else: - self.label_dir = osp.join(self.split_dir, 'label') - - def __len__(self): - return len(self.sample_id_list) - - def get_image(self, idx): - img_filename = osp.join(self.image_dir, f'{idx:06d}.jpg') - return mmcv.imread(img_filename) - - def get_image_shape(self, idx): - image = self.get_image(idx) - return np.array(image.shape[:2], dtype=np.int32) - - def get_depth(self, idx): - depth_filename = osp.join(self.depth_dir, f'{idx:06d}.mat') - depth = sio.loadmat(depth_filename)['instance'] - return depth - - def get_calibration(self, idx): - calib_filepath = osp.join(self.calib_dir, f'{idx:06d}.txt') - lines = [line.rstrip() for line in open(calib_filepath)] - Rt = np.array([float(x) for x in lines[0].split(' ')]) - Rt = np.reshape(Rt, (3, 3), order='F').astype(np.float32) - K = np.array([float(x) for x in lines[1].split(' ')]) - K = np.reshape(K, (3, 3), order='F').astype(np.float32) - return K, Rt - - def get_label_objects(self, idx): - label_filename = osp.join(self.label_dir, f'{idx:06d}.txt') - lines = [line.rstrip() for line in open(label_filename)] - objects = [SUNRGBDInstance(line) for line in lines] - return objects - - def get_infos(self, num_workers=4, has_label=True, sample_id_list=None): - """Get data infos. - - This method gets information from the raw data. - - Args: - num_workers (int, optional): Number of threads to be used. - Default: 4. - has_label (bool, optional): Whether the data has label. - Default: True. - sample_id_list (list[int], optional): Index list of the sample. - Default: None. - - Returns: - infos (list[dict]): Information of the raw data. - """ - - def process_single_scene(sample_idx): - print(f'{self.split} sample_idx: {sample_idx}') - # convert depth to points - SAMPLE_NUM = 50000 - # TODO: Check whether can move the point - # sampling process during training. 
- pc_upright_depth = self.get_depth(sample_idx) - pc_upright_depth_subsampled = random_sampling( - pc_upright_depth, SAMPLE_NUM) - - info = dict() - pc_info = {'num_features': 6, 'lidar_idx': sample_idx} - info['point_cloud'] = pc_info - - mmcv.mkdir_or_exist(osp.join(self.root_dir, 'points')) - pc_upright_depth_subsampled.tofile( - osp.join(self.root_dir, 'points', f'{sample_idx:06d}.bin')) - - info['pts_path'] = osp.join('points', f'{sample_idx:06d}.bin') - img_path = osp.join('image', f'{sample_idx:06d}.jpg') - image_info = { - 'image_idx': sample_idx, - 'image_shape': self.get_image_shape(sample_idx), - 'image_path': img_path - } - info['image'] = image_info - - K, Rt = self.get_calibration(sample_idx) - calib_info = {'K': K, 'Rt': Rt} - info['calib'] = calib_info - - if has_label: - obj_list = self.get_label_objects(sample_idx) - annotations = {} - annotations['gt_num'] = len([ - obj.classname for obj in obj_list - if obj.classname in self.cat2label.keys() - ]) - if annotations['gt_num'] != 0: - annotations['name'] = np.array([ - obj.classname for obj in obj_list - if obj.classname in self.cat2label.keys() - ]) - annotations['bbox'] = np.concatenate([ - obj.box2d.reshape(1, 4) for obj in obj_list - if obj.classname in self.cat2label.keys() - ], - axis=0) - annotations['location'] = np.concatenate([ - obj.centroid.reshape(1, 3) for obj in obj_list - if obj.classname in self.cat2label.keys() - ], - axis=0) - annotations['dimensions'] = 2 * np.array([ - [obj.length, obj.width, obj.height] for obj in obj_list - if obj.classname in self.cat2label.keys() - ]) # lwh (depth) format - annotations['rotation_y'] = np.array([ - obj.heading_angle for obj in obj_list - if obj.classname in self.cat2label.keys() - ]) - annotations['index'] = np.arange( - len(obj_list), dtype=np.int32) - annotations['class'] = np.array([ - self.cat2label[obj.classname] for obj in obj_list - if obj.classname in self.cat2label.keys() - ]) - annotations['gt_boxes_upright_depth'] = np.stack( - [ - obj.box3d for obj in obj_list - if obj.classname in self.cat2label.keys() - ], - axis=0) # (K,8) - info['annos'] = annotations - return info - - sample_id_list = sample_id_list if \ - sample_id_list is not None else self.sample_id_list - with futures.ThreadPoolExecutor(num_workers) as executor: - infos = executor.map(process_single_scene, sample_id_list) - return list(infos) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/waymo_converter.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/waymo_converter.py deleted file mode 100644 index f991514b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/data_converter/waymo_converter.py +++ /dev/null @@ -1,556 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -r"""Adapted from `Waymo to KITTI converter - `_. -""" - -try: - from waymo_open_dataset import dataset_pb2 -except ImportError: - raise ImportError( - 'Please run "pip install waymo-open-dataset-tf-2-1-0==1.2.0" ' - 'to install the official devkit first.') - -from glob import glob -from os.path import join - -import mmcv -import numpy as np -import tensorflow as tf -from waymo_open_dataset.utils import range_image_utils, transform_utils -from waymo_open_dataset.utils.frame_utils import \ - parse_range_image_and_camera_projection - - -class Waymo2KITTI(object): - """Waymo to KITTI converter. - - This class serves as the converter to change the waymo raw data to KITTI - format. - - Args: - load_dir (str): Directory to load waymo raw data. 
- save_dir (str): Directory to save data in KITTI format. - prefix (str): Prefix of filename. In general, 0 for training, 1 for - validation and 2 for testing. - workers (int, optional): Number of workers for the parallel process. - test_mode (bool, optional): Whether in the test_mode. Default: False. - """ - - def __init__(self, - load_dir, - save_dir, - prefix, - workers=64, - test_mode=False): - self.filter_empty_3dboxes = True - self.filter_no_label_zone_points = True - - self.selected_waymo_classes = ['VEHICLE', 'PEDESTRIAN', 'CYCLIST'] - - # Only data collected in specific locations will be converted - # If set None, this filter is disabled - # Available options: location_sf (main dataset) - self.selected_waymo_locations = None - self.save_track_id = False - - # turn on eager execution for older tensorflow versions - if int(tf.__version__.split('.')[0]) < 2: - tf.enable_eager_execution() - - self.lidar_list = [ - '_FRONT', '_FRONT_RIGHT', '_FRONT_LEFT', '_SIDE_RIGHT', - '_SIDE_LEFT' - ] - self.type_list = [ - 'UNKNOWN', 'VEHICLE', 'PEDESTRIAN', 'SIGN', 'CYCLIST' - ] - self.waymo_to_kitti_class_map = { - 'UNKNOWN': 'DontCare', - 'PEDESTRIAN': 'Pedestrian', - 'VEHICLE': 'Car', - 'CYCLIST': 'Cyclist', - 'SIGN': 'Sign' # not in kitti - } - - self.load_dir = load_dir - self.save_dir = save_dir - self.prefix = prefix - self.workers = int(workers) - self.test_mode = test_mode - - self.tfrecord_pathnames = sorted( - glob(join(self.load_dir, '*.tfrecord'))) - - self.label_save_dir = f'{self.save_dir}/label_' - self.label_all_save_dir = f'{self.save_dir}/label_all' - self.image_save_dir = f'{self.save_dir}/image_' - self.calib_save_dir = f'{self.save_dir}/calib' - self.point_cloud_save_dir = f'{self.save_dir}/velodyne' - self.pose_save_dir = f'{self.save_dir}/pose' - self.timestamp_save_dir = f'{self.save_dir}/timestamp' - - self.create_folder() - - def convert(self): - """Convert action.""" - print('Start converting ...') - mmcv.track_parallel_progress(self.convert_one, range(len(self)), - self.workers) - print('\nFinished ...') - - def convert_one(self, file_idx): - """Convert action for single file. - - Args: - file_idx (int): Index of the file to be converted. - """ - pathname = self.tfrecord_pathnames[file_idx] - dataset = tf.data.TFRecordDataset(pathname, compression_type='') - - for frame_idx, data in enumerate(dataset): - - frame = dataset_pb2.Frame() - frame.ParseFromString(bytearray(data.numpy())) - if (self.selected_waymo_locations is not None - and frame.context.stats.location - not in self.selected_waymo_locations): - continue - - self.save_image(frame, file_idx, frame_idx) - self.save_calib(frame, file_idx, frame_idx) - self.save_lidar(frame, file_idx, frame_idx) - self.save_pose(frame, file_idx, frame_idx) - self.save_timestamp(frame, file_idx, frame_idx) - - if not self.test_mode: - self.save_label(frame, file_idx, frame_idx) - - def __len__(self): - """Length of the filename list.""" - return len(self.tfrecord_pathnames) - - def save_image(self, frame, file_idx, frame_idx): - """Parse and save the images in png format. - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. 
- """ - for img in frame.images: - img_path = f'{self.image_save_dir}{str(img.name - 1)}/' + \ - f'{self.prefix}{str(file_idx).zfill(3)}' + \ - f'{str(frame_idx).zfill(3)}.png' - img = mmcv.imfrombytes(img.image) - mmcv.imwrite(img, img_path) - - def save_calib(self, frame, file_idx, frame_idx): - """Parse and save the calibration data. - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. - """ - # waymo front camera to kitti reference camera - T_front_cam_to_ref = np.array([[0.0, -1.0, 0.0], [0.0, 0.0, -1.0], - [1.0, 0.0, 0.0]]) - camera_calibs = [] - R0_rect = [f'{i:e}' for i in np.eye(3).flatten()] - Tr_velo_to_cams = [] - calib_context = '' - - for camera in frame.context.camera_calibrations: - # extrinsic parameters - T_cam_to_vehicle = np.array(camera.extrinsic.transform).reshape( - 4, 4) - T_vehicle_to_cam = np.linalg.inv(T_cam_to_vehicle) - Tr_velo_to_cam = \ - self.cart_to_homo(T_front_cam_to_ref) @ T_vehicle_to_cam - if camera.name == 1: # FRONT = 1, see dataset.proto for details - self.T_velo_to_front_cam = Tr_velo_to_cam.copy() - Tr_velo_to_cam = Tr_velo_to_cam[:3, :].reshape((12, )) - Tr_velo_to_cams.append([f'{i:e}' for i in Tr_velo_to_cam]) - - # intrinsic parameters - camera_calib = np.zeros((3, 4)) - camera_calib[0, 0] = camera.intrinsic[0] - camera_calib[1, 1] = camera.intrinsic[1] - camera_calib[0, 2] = camera.intrinsic[2] - camera_calib[1, 2] = camera.intrinsic[3] - camera_calib[2, 2] = 1 - camera_calib = list(camera_calib.reshape(12)) - camera_calib = [f'{i:e}' for i in camera_calib] - camera_calibs.append(camera_calib) - - # all camera ids are saved as id-1 in the result because - # camera 0 is unknown in the proto - for i in range(5): - calib_context += 'P' + str(i) + ': ' + \ - ' '.join(camera_calibs[i]) + '\n' - calib_context += 'R0_rect' + ': ' + ' '.join(R0_rect) + '\n' - for i in range(5): - calib_context += 'Tr_velo_to_cam_' + str(i) + ': ' + \ - ' '.join(Tr_velo_to_cams[i]) + '\n' - - with open( - f'{self.calib_save_dir}/{self.prefix}' + - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.txt', - 'w+') as fp_calib: - fp_calib.write(calib_context) - fp_calib.close() - - def save_lidar(self, frame, file_idx, frame_idx): - """Parse and save the lidar data in psd format. - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. 
- """ - range_images, camera_projections, range_image_top_pose = \ - parse_range_image_and_camera_projection(frame) - - # First return - points_0, cp_points_0, intensity_0, elongation_0, mask_indices_0 = \ - self.convert_range_image_to_point_cloud( - frame, - range_images, - camera_projections, - range_image_top_pose, - ri_index=0 - ) - points_0 = np.concatenate(points_0, axis=0) - intensity_0 = np.concatenate(intensity_0, axis=0) - elongation_0 = np.concatenate(elongation_0, axis=0) - mask_indices_0 = np.concatenate(mask_indices_0, axis=0) - - # Second return - points_1, cp_points_1, intensity_1, elongation_1, mask_indices_1 = \ - self.convert_range_image_to_point_cloud( - frame, - range_images, - camera_projections, - range_image_top_pose, - ri_index=1 - ) - points_1 = np.concatenate(points_1, axis=0) - intensity_1 = np.concatenate(intensity_1, axis=0) - elongation_1 = np.concatenate(elongation_1, axis=0) - mask_indices_1 = np.concatenate(mask_indices_1, axis=0) - - points = np.concatenate([points_0, points_1], axis=0) - intensity = np.concatenate([intensity_0, intensity_1], axis=0) - elongation = np.concatenate([elongation_0, elongation_1], axis=0) - mask_indices = np.concatenate([mask_indices_0, mask_indices_1], axis=0) - - # timestamp = frame.timestamp_micros * np.ones_like(intensity) - - # concatenate x,y,z, intensity, elongation, timestamp (6-dim) - point_cloud = np.column_stack( - (points, intensity, elongation, mask_indices)) - - pc_path = f'{self.point_cloud_save_dir}/{self.prefix}' + \ - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.bin' - point_cloud.astype(np.float32).tofile(pc_path) - - def save_label(self, frame, file_idx, frame_idx): - """Parse and save the label data in txt format. - The relation between waymo and kitti coordinates is noteworthy: - 1. x, y, z correspond to l, w, h (waymo) -> l, h, w (kitti) - 2. x-y-z: front-left-up (waymo) -> right-down-front(kitti) - 3. bbox origin at volumetric center (waymo) -> bottom center (kitti) - 4. rotation: +x around y-axis (kitti) -> +x around z-axis (waymo) - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. 
- """ - fp_label_all = open( - f'{self.label_all_save_dir}/{self.prefix}' + - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.txt', 'w+') - id_to_bbox = dict() - id_to_name = dict() - for labels in frame.projected_lidar_labels: - name = labels.name - for label in labels.labels: - # TODO: need a workaround as bbox may not belong to front cam - bbox = [ - label.box.center_x - label.box.length / 2, - label.box.center_y - label.box.width / 2, - label.box.center_x + label.box.length / 2, - label.box.center_y + label.box.width / 2 - ] - id_to_bbox[label.id] = bbox - id_to_name[label.id] = name - 1 - - for obj in frame.laser_labels: - bounding_box = None - name = None - id = obj.id - for lidar in self.lidar_list: - if id + lidar in id_to_bbox: - bounding_box = id_to_bbox.get(id + lidar) - name = str(id_to_name.get(id + lidar)) - break - - if bounding_box is None or name is None: - name = '0' - bounding_box = (0, 0, 0, 0) - - my_type = self.type_list[obj.type] - - if my_type not in self.selected_waymo_classes: - continue - - if self.filter_empty_3dboxes and obj.num_lidar_points_in_box < 1: - continue - - my_type = self.waymo_to_kitti_class_map[my_type] - - height = obj.box.height - width = obj.box.width - length = obj.box.length - - x = obj.box.center_x - y = obj.box.center_y - z = obj.box.center_z - height / 2 - - # project bounding box to the virtual reference frame - pt_ref = self.T_velo_to_front_cam @ \ - np.array([x, y, z, 1]).reshape((4, 1)) - x, y, z, _ = pt_ref.flatten().tolist() - - rotation_y = -obj.box.heading - np.pi / 2 - track_id = obj.id - - # not available - truncated = 0 - occluded = 0 - alpha = -10 - - line = my_type + \ - ' {} {} {} {} {} {} {} {} {} {} {} {} {} {}\n'.format( - round(truncated, 2), occluded, round(alpha, 2), - round(bounding_box[0], 2), round(bounding_box[1], 2), - round(bounding_box[2], 2), round(bounding_box[3], 2), - round(height, 2), round(width, 2), round(length, 2), - round(x, 2), round(y, 2), round(z, 2), - round(rotation_y, 2)) - - if self.save_track_id: - line_all = line[:-1] + ' ' + name + ' ' + track_id + '\n' - else: - line_all = line[:-1] + ' ' + name + '\n' - - fp_label = open( - f'{self.label_save_dir}{name}/{self.prefix}' + - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.txt', 'a') - fp_label.write(line) - fp_label.close() - - fp_label_all.write(line_all) - - fp_label_all.close() - - def save_pose(self, frame, file_idx, frame_idx): - """Parse and save the pose data. - - Note that SDC's own pose is not included in the regular training - of KITTI dataset. KITTI raw dataset contains ego motion files - but are not often used. Pose is important for algorithms that - take advantage of the temporal information. - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. - """ - pose = np.array(frame.pose.transform).reshape(4, 4) - np.savetxt( - join(f'{self.pose_save_dir}/{self.prefix}' + - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.txt'), - pose) - - def save_timestamp(self, frame, file_idx, frame_idx): - """Save the timestamp data in a separate file instead of the - pointcloud. - - Note that SDC's own pose is not included in the regular training - of KITTI dataset. KITTI raw dataset contains ego motion files - but are not often used. Pose is important for algorithms that - take advantage of the temporal information. - - Args: - frame (:obj:`Frame`): Open dataset frame proto. - file_idx (int): Current file index. - frame_idx (int): Current frame index. 
- """ - with open( - join(f'{self.timestamp_save_dir}/{self.prefix}' + - f'{str(file_idx).zfill(3)}{str(frame_idx).zfill(3)}.txt'), - 'w') as f: - f.write(str(frame.timestamp_micros)) - - def create_folder(self): - """Create folder for data preprocessing.""" - if not self.test_mode: - dir_list1 = [ - self.label_all_save_dir, self.calib_save_dir, - self.point_cloud_save_dir, self.pose_save_dir, - self.timestamp_save_dir - ] - dir_list2 = [self.label_save_dir, self.image_save_dir] - else: - dir_list1 = [ - self.calib_save_dir, self.point_cloud_save_dir, - self.pose_save_dir, self.timestamp_save_dir - ] - dir_list2 = [self.image_save_dir] - for d in dir_list1: - mmcv.mkdir_or_exist(d) - for d in dir_list2: - for i in range(5): - mmcv.mkdir_or_exist(f'{d}{str(i)}') - - def convert_range_image_to_point_cloud(self, - frame, - range_images, - camera_projections, - range_image_top_pose, - ri_index=0): - """Convert range images to point cloud. - - Args: - frame (:obj:`Frame`): Open dataset frame. - range_images (dict): Mapping from laser_name to list of two - range images corresponding with two returns. - camera_projections (dict): Mapping from laser_name to list of two - camera projections corresponding with two returns. - range_image_top_pose (:obj:`Transform`): Range image pixel pose for - top lidar. - ri_index (int, optional): 0 for the first return, - 1 for the second return. Default: 0. - - Returns: - tuple[list[np.ndarray]]: (List of points with shape [N, 3], - camera projections of points with shape [N, 6], intensity - with shape [N, 1], elongation with shape [N, 1], points' - position in the depth map (element offset if points come from - the main lidar otherwise -1) with shape[N, 1]). All the - lists have the length of lidar numbers (5). - """ - calibrations = sorted( - frame.context.laser_calibrations, key=lambda c: c.name) - points = [] - cp_points = [] - intensity = [] - elongation = [] - mask_indices = [] - - frame_pose = tf.convert_to_tensor( - value=np.reshape(np.array(frame.pose.transform), [4, 4])) - # [H, W, 6] - range_image_top_pose_tensor = tf.reshape( - tf.convert_to_tensor(value=range_image_top_pose.data), - range_image_top_pose.shape.dims) - # [H, W, 3, 3] - range_image_top_pose_tensor_rotation = \ - transform_utils.get_rotation_matrix( - range_image_top_pose_tensor[..., 0], - range_image_top_pose_tensor[..., 1], - range_image_top_pose_tensor[..., 2]) - range_image_top_pose_tensor_translation = \ - range_image_top_pose_tensor[..., 3:] - range_image_top_pose_tensor = transform_utils.get_transform( - range_image_top_pose_tensor_rotation, - range_image_top_pose_tensor_translation) - for c in calibrations: - range_image = range_images[c.name][ri_index] - if len(c.beam_inclinations) == 0: - beam_inclinations = range_image_utils.compute_inclination( - tf.constant( - [c.beam_inclination_min, c.beam_inclination_max]), - height=range_image.shape.dims[0]) - else: - beam_inclinations = tf.constant(c.beam_inclinations) - - beam_inclinations = tf.reverse(beam_inclinations, axis=[-1]) - extrinsic = np.reshape(np.array(c.extrinsic.transform), [4, 4]) - - range_image_tensor = tf.reshape( - tf.convert_to_tensor(value=range_image.data), - range_image.shape.dims) - pixel_pose_local = None - frame_pose_local = None - if c.name == dataset_pb2.LaserName.TOP: - pixel_pose_local = range_image_top_pose_tensor - pixel_pose_local = tf.expand_dims(pixel_pose_local, axis=0) - frame_pose_local = tf.expand_dims(frame_pose, axis=0) - range_image_mask = range_image_tensor[..., 0] > 0 - - if 
self.filter_no_label_zone_points: - nlz_mask = range_image_tensor[..., 3] != 1.0 # 1.0: in NLZ - range_image_mask = range_image_mask & nlz_mask - - range_image_cartesian = \ - range_image_utils.extract_point_cloud_from_range_image( - tf.expand_dims(range_image_tensor[..., 0], axis=0), - tf.expand_dims(extrinsic, axis=0), - tf.expand_dims(tf.convert_to_tensor( - value=beam_inclinations), axis=0), - pixel_pose=pixel_pose_local, - frame_pose=frame_pose_local) - - mask_index = tf.where(range_image_mask) - - range_image_cartesian = tf.squeeze(range_image_cartesian, axis=0) - points_tensor = tf.gather_nd(range_image_cartesian, mask_index) - - cp = camera_projections[c.name][ri_index] - cp_tensor = tf.reshape( - tf.convert_to_tensor(value=cp.data), cp.shape.dims) - cp_points_tensor = tf.gather_nd(cp_tensor, mask_index) - points.append(points_tensor.numpy()) - cp_points.append(cp_points_tensor.numpy()) - - intensity_tensor = tf.gather_nd(range_image_tensor[..., 1], - mask_index) - intensity.append(intensity_tensor.numpy()) - - elongation_tensor = tf.gather_nd(range_image_tensor[..., 2], - mask_index) - elongation.append(elongation_tensor.numpy()) - if c.name == 1: - mask_index = (ri_index * range_image_mask.shape[0] + - mask_index[:, 0] - ) * range_image_mask.shape[1] + mask_index[:, 1] - mask_index = mask_index.numpy().astype(elongation[-1].dtype) - else: - mask_index = np.full_like(elongation[-1], -1) - - mask_indices.append(mask_index) - - return points, cp_points, intensity, elongation, mask_indices - - def cart_to_homo(self, mat): - """Convert transformation matrix in Cartesian coordinates to - homogeneous format. - - Args: - mat (np.ndarray): Transformation matrix in Cartesian. - The input matrix shape is 3x3 or 3x4. - - Returns: - np.ndarray: Transformation matrix in homogeneous format. - The matrix shape is 4x4. - """ - ret = np.eye(4) - if mat.shape == (3, 3): - ret[:3, :3] = mat - elif mat.shape == (3, 4): - ret[:3, :] = mat - else: - raise ValueError(mat.shape) - return ret diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/mmdet3d2torchserve.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/mmdet3d2torchserve.py deleted file mode 100644 index 17d7f255..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/mmdet3d2torchserve.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -from argparse import ArgumentParser, Namespace -from pathlib import Path -from tempfile import TemporaryDirectory - -import mmcv - -try: - from model_archiver.model_packaging import package_model - from model_archiver.model_packaging_utils import ModelExportUtils -except ImportError: - package_model = None - - -def mmdet3d2torchserve( - config_file: str, - checkpoint_file: str, - output_folder: str, - model_name: str, - model_version: str = '1.0', - force: bool = False, -): - """Converts MMDetection3D model (config + checkpoint) to TorchServe `.mar`. - - Args: - config_file (str): - In MMDetection3D config format. - The contents vary for each task repository. - checkpoint_file (str): - In MMDetection3D checkpoint format. - The contents vary for each task repository. - output_folder (str): - Folder where `{model_name}.mar` will be created. - The file created will be in TorchServe archive format. - model_name (str): - If not None, used for naming the `{model_name}.mar` file - that will be created under `output_folder`. 
- If None, `{Path(checkpoint_file).stem}` will be used. - model_version (str, optional): - Model's version. Default: '1.0'. - force (bool, optional): - If True, if there is an existing `{model_name}.mar` - file under `output_folder` it will be overwritten. - Default: False. - """ - mmcv.mkdir_or_exist(output_folder) - - config = mmcv.Config.fromfile(config_file) - - with TemporaryDirectory() as tmpdir: - config.dump(f'{tmpdir}/config.py') - - args = Namespace( - **{ - 'model_file': f'{tmpdir}/config.py', - 'serialized_file': checkpoint_file, - 'handler': f'{Path(__file__).parent}/mmdet3d_handler.py', - 'model_name': model_name or Path(checkpoint_file).stem, - 'version': model_version, - 'export_path': output_folder, - 'force': force, - 'requirements_file': None, - 'extra_files': None, - 'runtime': 'python', - 'archive_format': 'default' - }) - manifest = ModelExportUtils.generate_manifest_json(args) - package_model(args, manifest) - - -def parse_args(): - parser = ArgumentParser( - description='Convert MMDetection models to TorchServe `.mar` format.') - parser.add_argument('config', type=str, help='config file path') - parser.add_argument('checkpoint', type=str, help='checkpoint file path') - parser.add_argument( - '--output-folder', - type=str, - required=True, - help='Folder where `{model_name}.mar` will be created.') - parser.add_argument( - '--model-name', - type=str, - default=None, - help='If not None, used for naming the `{model_name}.mar`' - 'file that will be created under `output_folder`.' - 'If None, `{Path(checkpoint_file).stem}` will be used.') - parser.add_argument( - '--model-version', - type=str, - default='1.0', - help='Number used for versioning.') - parser.add_argument( - '-f', - '--force', - action='store_true', - help='overwrite the existing `{model_name}.mar`') - args = parser.parse_args() - - return args - - -if __name__ == '__main__': - args = parse_args() - - if package_model is None: - raise ImportError('`torch-model-archiver` is required.' - 'Try: pip install torch-model-archiver') - - mmdet3d2torchserve(args.config, args.checkpoint, args.output_folder, - args.model_name, args.model_version, args.force) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/mmdet3d_handler.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/mmdet3d_handler.py deleted file mode 100644 index 9231f916..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/mmdet3d_handler.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import base64 -import os - -import numpy as np -import torch -from ts.torch_handler.base_handler import BaseHandler - -from mmdet3d.apis import inference_detector, init_model -from mmdet3d.core.points import get_points_type - - -class MMdet3dHandler(BaseHandler): - """MMDetection3D Handler used in TorchServe. - - Handler to load models in MMDetection3D, and it will process data to get - predicted results. For now, it only supports SECOND. - """ - threshold = 0.5 - load_dim = 4 - use_dim = [0, 1, 2, 3] - coord_type = 'LIDAR' - attribute_dims = None - - def initialize(self, context): - """Initialize function loads the model in MMDetection3D. - - Args: - context (context): It is a JSON Object containing information - pertaining to the model artifacts parameters. 
- """ - properties = context.system_properties - self.map_location = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = torch.device(self.map_location + ':' + - str(properties.get('gpu_id')) if torch.cuda. - is_available() else self.map_location) - self.manifest = context.manifest - - model_dir = properties.get('model_dir') - serialized_file = self.manifest['model']['serializedFile'] - checkpoint = os.path.join(model_dir, serialized_file) - self.config_file = os.path.join(model_dir, 'config.py') - self.model = init_model(self.config_file, checkpoint, self.device) - self.initialized = True - - def preprocess(self, data): - """Preprocess function converts data into LiDARPoints class. - - Args: - data (List): Input data from the request. - - Returns: - `LiDARPoints` : The preprocess function returns the input - point cloud data as LiDARPoints class. - """ - for row in data: - # Compat layer: normally the envelope should just return the data - # directly, but older versions of Torchserve didn't have envelope. - pts = row.get('data') or row.get('body') - if isinstance(pts, str): - pts = base64.b64decode(pts) - - points = np.frombuffer(pts, dtype=np.float32) - points = points.reshape(-1, self.load_dim) - points = points[:, self.use_dim] - points_class = get_points_type(self.coord_type) - points = points_class( - points, - points_dim=points.shape[-1], - attribute_dims=self.attribute_dims) - - return points - - def inference(self, data): - """Inference Function. - - This function is used to make a prediction call on the - given input request. - - Args: - data (`LiDARPoints`): LiDARPoints class passed to make - the inference request. - - Returns: - List(dict) : The predicted result is returned in this function. - """ - results, _ = inference_detector(self.model, data) - return results - - def postprocess(self, data): - """Postprocess function. - - This function makes use of the output from the inference and - converts it into a torchserve supported response output. - - Args: - data (List[dict]): The data received from the prediction - output of the model. - - Returns: - List: The post process function returns a list of the predicted - output. - """ - output = [] - for pts_index, result in enumerate(data): - output.append([]) - if 'pts_bbox' in result.keys(): - pred_bboxes = result['pts_bbox']['boxes_3d'].tensor.numpy() - pred_scores = result['pts_bbox']['scores_3d'].numpy() - else: - pred_bboxes = result['boxes_3d'].tensor.numpy() - pred_scores = result['scores_3d'].numpy() - - index = pred_scores > self.threshold - bbox_coords = pred_bboxes[index].tolist() - score = pred_scores[index].tolist() - - output[pts_index].append({'3dbbox': bbox_coords, 'score': score}) - - return output diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/test_torchserver.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/test_torchserver.py deleted file mode 100644 index d7e6f641..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/deployment/test_torchserver.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-from argparse import ArgumentParser - -import numpy as np -import requests - -from mmdet3d.apis import inference_detector, init_model - - -def parse_args(): - parser = ArgumentParser() - parser.add_argument('pcd', help='Point cloud file') - parser.add_argument('config', help='Config file') - parser.add_argument('checkpoint', help='Checkpoint file') - parser.add_argument('model_name', help='The model name in the server') - parser.add_argument( - '--inference-addr', - default='127.0.0.1:8080', - help='Address and port of the inference server') - parser.add_argument( - '--device', default='cuda:0', help='Device used for inference') - parser.add_argument( - '--score-thr', type=float, default=0.5, help='3d bbox score threshold') - args = parser.parse_args() - return args - - -def parse_result(input): - bbox = input[0]['3dbbox'] - result = np.array(bbox) - return result - - -def main(args): - # build the model from a config file and a checkpoint file - model = init_model(args.config, args.checkpoint, device=args.device) - # test a single point cloud file - model_result, _ = inference_detector(model, args.pcd) - # filter the 3d bboxes whose scores > 0.5 - if 'pts_bbox' in model_result[0].keys(): - pred_bboxes = model_result[0]['pts_bbox']['boxes_3d'].tensor.numpy() - pred_scores = model_result[0]['pts_bbox']['scores_3d'].numpy() - else: - pred_bboxes = model_result[0]['boxes_3d'].tensor.numpy() - pred_scores = model_result[0]['scores_3d'].numpy() - model_result = pred_bboxes[pred_scores > 0.5] - - url = 'http://' + args.inference_addr + '/predictions/' + args.model_name - with open(args.pcd, 'rb') as points: - response = requests.post(url, points) - server_result = parse_result(response.json()) - assert np.allclose(model_result, server_result) - - -if __name__ == '__main__': - args = parse_args() - main(args) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/dist_test.sh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/dist_test.sh deleted file mode 100755 index dea131b4..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/dist_test.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env bash - -CONFIG=$1 -CHECKPOINT=$2 -GPUS=$3 -NNODES=${NNODES:-1} -NODE_RANK=${NODE_RANK:-0} -PORT=${PORT:-29500} -MASTER_ADDR=${MASTER_ADDR:-"127.0.0.1"} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -python -m torch.distributed.launch \ - --nnodes=$NNODES \ - --node_rank=$NODE_RANK \ - --master_addr=$MASTER_ADDR \ - --nproc_per_node=$GPUS \ - --master_port=$PORT \ - $(dirname "$0")/test.py \ - $CONFIG \ - $CHECKPOINT \ - --launcher pytorch \ - ${@:4} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/browse_dataset.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/browse_dataset.py deleted file mode 100644 index 3cc4737c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/browse_dataset.py +++ /dev/null @@ -1,234 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import warnings -from os import path as osp -from pathlib import Path - -import mmcv -import numpy as np -from mmcv import Config, DictAction, mkdir_or_exist - -from mmdet3d.core.bbox import (Box3DMode, CameraInstance3DBoxes, Coord3DMode, - DepthInstance3DBoxes, LiDARInstance3DBoxes) -from mmdet3d.core.visualizer import (show_multi_modality_result, show_result, - show_seg_result) -from mmdet3d.datasets import build_dataset - - -def parse_args(): - parser = argparse.ArgumentParser(description='Browse a dataset') - parser.add_argument('config', help='train config file path') - parser.add_argument( - '--skip-type', - type=str, - nargs='+', - default=['Normalize'], - help='skip some useless pipeline') - parser.add_argument( - '--output-dir', - default=None, - type=str, - help='If there is no display interface, you can save it') - parser.add_argument( - '--task', - type=str, - choices=['det', 'seg', 'multi_modality-det', 'mono-det'], - help='Determine the visualization method depending on the task.') - parser.add_argument( - '--aug', - action='store_true', - help='Whether to visualize augmented datasets or original dataset.') - parser.add_argument( - '--online', - action='store_true', - help='Whether to perform online visualization. Note that you often ' - 'need a monitor to do so.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - args = parser.parse_args() - return args - - -def build_data_cfg(config_path, skip_type, aug, cfg_options): - """Build data config for loading visualization data.""" - - cfg = Config.fromfile(config_path) - if cfg_options is not None: - cfg.merge_from_dict(cfg_options) - # extract inner dataset of `RepeatDataset` as `cfg.data.train` - # so we don't need to worry about it later - if cfg.data.train['type'] == 'RepeatDataset': - cfg.data.train = cfg.data.train.dataset - # use only first dataset for `ConcatDataset` - if cfg.data.train['type'] == 'ConcatDataset': - cfg.data.train = cfg.data.train.datasets[0] - train_data_cfg = cfg.data.train - - if aug: - show_pipeline = cfg.train_pipeline - else: - show_pipeline = cfg.eval_pipeline - for i in range(len(cfg.train_pipeline)): - if cfg.train_pipeline[i]['type'] == 'LoadAnnotations3D': - show_pipeline.insert(i, cfg.train_pipeline[i]) - # Collect points as well as labels - if cfg.train_pipeline[i]['type'] == 'Collect3D': - if show_pipeline[-1]['type'] == 'Collect3D': - show_pipeline[-1] = cfg.train_pipeline[i] - else: - show_pipeline.append(cfg.train_pipeline[i]) - - train_data_cfg['pipeline'] = [ - x for x in show_pipeline if x['type'] not in skip_type - ] - - return cfg - - -def to_depth_mode(points, bboxes): - """Convert points and bboxes to Depth Coord and Depth Box mode.""" - if points is not None: - points = Coord3DMode.convert_point(points.copy(), Coord3DMode.LIDAR, - Coord3DMode.DEPTH) - if bboxes is not None: - bboxes = Box3DMode.convert(bboxes.clone(), Box3DMode.LIDAR, - Box3DMode.DEPTH) - return points, bboxes - - -def show_det_data(input, out_dir, show=False): - """Visualize 3D point cloud and 3D bboxes.""" - img_metas = input['img_metas']._data - points = input['points']._data.numpy() - gt_bboxes 
= input['gt_bboxes_3d']._data.tensor - if img_metas['box_mode_3d'] != Box3DMode.DEPTH: - points, gt_bboxes = to_depth_mode(points, gt_bboxes) - filename = osp.splitext(osp.basename(img_metas['pts_filename']))[0] - show_result( - points, - gt_bboxes.clone(), - None, - out_dir, - filename, - show=show, - snapshot=True) - - -def show_seg_data(input, out_dir, show=False): - """Visualize 3D point cloud and segmentation mask.""" - img_metas = input['img_metas']._data - points = input['points']._data.numpy() - gt_seg = input['pts_semantic_mask']._data.numpy() - filename = osp.splitext(osp.basename(img_metas['pts_filename']))[0] - show_seg_result( - points, - gt_seg.copy(), - None, - out_dir, - filename, - np.array(img_metas['PALETTE']), - img_metas['ignore_index'], - show=show, - snapshot=True) - - -def show_proj_bbox_img(input, out_dir, show=False, is_nus_mono=False): - """Visualize 3D bboxes on 2D image by projection.""" - gt_bboxes = input['gt_bboxes_3d']._data - img_metas = input['img_metas']._data - img = input['img']._data.numpy() - # need to transpose channel to first dim - img = img.transpose(1, 2, 0) - # no 3D gt bboxes, just show img - if gt_bboxes.tensor.shape[0] == 0: - gt_bboxes = None - filename = Path(img_metas['filename']).name - if isinstance(gt_bboxes, DepthInstance3DBoxes): - show_multi_modality_result( - img, - gt_bboxes, - None, - None, - out_dir, - filename, - box_mode='depth', - img_metas=img_metas, - show=show) - elif isinstance(gt_bboxes, LiDARInstance3DBoxes): - show_multi_modality_result( - img, - gt_bboxes, - None, - img_metas['lidar2img'], - out_dir, - filename, - box_mode='lidar', - img_metas=img_metas, - show=show) - elif isinstance(gt_bboxes, CameraInstance3DBoxes): - show_multi_modality_result( - img, - gt_bboxes, - None, - img_metas['cam2img'], - out_dir, - filename, - box_mode='camera', - img_metas=img_metas, - show=show) - else: - # can't project, just show img - warnings.warn( - f'unrecognized gt box type {type(gt_bboxes)}, only show image') - show_multi_modality_result( - img, None, None, None, out_dir, filename, show=show) - - -def main(): - args = parse_args() - - if args.output_dir is not None: - mkdir_or_exist(args.output_dir) - - cfg = build_data_cfg(args.config, args.skip_type, args.aug, - args.cfg_options) - try: - dataset = build_dataset( - cfg.data.train, default_args=dict(filter_empty_gt=False)) - except TypeError: # seg dataset doesn't have `filter_empty_gt` key - dataset = build_dataset(cfg.data.train) - - dataset_type = cfg.dataset_type - # configure visualization mode - vis_task = args.task # 'det', 'seg', 'multi_modality-det', 'mono-det' - progress_bar = mmcv.ProgressBar(len(dataset)) - - for input in dataset: - if vis_task in ['det', 'multi_modality-det']: - # show 3D bboxes on 3D point clouds - show_det_data(input, args.output_dir, show=args.online) - if vis_task in ['multi_modality-det', 'mono-det']: - # project 3D bboxes to 2D image - show_proj_bbox_img( - input, - args.output_dir, - show=args.online, - is_nus_mono=(dataset_type == 'NuScenesMonoDataset')) - elif vis_task in ['seg']: - # show 3D segmentation mask on 3D point clouds - show_seg_data(input, args.output_dir, show=args.online) - progress_bar.update() - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/fuse_conv_bn.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/fuse_conv_bn.py deleted file mode 100644 index 85c0897d..00000000 --- 
a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/fuse_conv_bn.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import argparse - -import torch -from mmcv.runner import save_checkpoint -from torch import nn as nn - -from mmdet3d.apis import init_model - - -def fuse_conv_bn(conv, bn): - """During inference, the functionary of batch norm layers is turned off but - only the mean and var alone channels are used, which exposes the chance to - fuse it with the preceding conv layers to save computations and simplify - network structures.""" - conv_w = conv.weight - conv_b = conv.bias if conv.bias is not None else torch.zeros_like( - bn.running_mean) - - factor = bn.weight / torch.sqrt(bn.running_var + bn.eps) - conv.weight = nn.Parameter(conv_w * - factor.reshape([conv.out_channels, 1, 1, 1])) - conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias) - return conv - - -def fuse_module(m): - last_conv = None - last_conv_name = None - - for name, child in m.named_children(): - if isinstance(child, (nn.BatchNorm2d, nn.SyncBatchNorm)): - if last_conv is None: # only fuse BN that is after Conv - continue - fused_conv = fuse_conv_bn(last_conv, child) - m._modules[last_conv_name] = fused_conv - # To reduce changes, set BN as Identity instead of deleting it. - m._modules[name] = nn.Identity() - last_conv = None - elif isinstance(child, nn.Conv2d): - last_conv = child - last_conv_name = name - else: - fuse_module(child) - return m - - -def parse_args(): - parser = argparse.ArgumentParser( - description='fuse Conv and BN layers in a model') - parser.add_argument('config', help='config file path') - parser.add_argument('checkpoint', help='checkpoint file path') - parser.add_argument('out', help='output path of the converted model') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - # build the model from a config file and a checkpoint file - model = init_model(args.config, args.checkpoint) - # fuse conv and bn layers of the model - fused_model = fuse_module(model) - save_checkpoint(fused_model, args.out) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/print_config.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/print_config.py deleted file mode 100644 index 3e685ad8..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/print_config.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse - -from mmcv import Config, DictAction - - -def parse_args(): - parser = argparse.ArgumentParser(description='Print the whole config') - parser.add_argument('config', help='config file path') - parser.add_argument( - '--options', nargs='+', action=DictAction, help='arguments in dict') - args = parser.parse_args() - - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - if args.options is not None: - cfg.merge_from_dict(args.options) - print(f'Config:\n{cfg.pretty_text}') - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/visualize_results.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/visualize_results.py deleted file mode 100644 index f8ea05ab..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/misc/visualize_results.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import argparse - -import mmcv -from mmcv import Config - -from mmdet3d.datasets import build_dataset - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet3D visualize the results') - parser.add_argument('config', help='test config file path') - parser.add_argument('--result', help='results file in pickle format') - parser.add_argument( - '--show-dir', help='directory where visualize results will be saved') - args = parser.parse_args() - - return args - - -def main(): - args = parse_args() - - if args.result is not None and \ - not args.result.endswith(('.pkl', '.pickle')): - raise ValueError('The results file must be a pkl file.') - - cfg = Config.fromfile(args.config) - cfg.data.test.test_mode = True - - # build the dataset - dataset = build_dataset(cfg.data.test) - results = mmcv.load(args.result) - - if getattr(dataset, 'show', None) is not None: - # data loading pipeline for showing - eval_pipeline = cfg.get('eval_pipeline', {}) - if eval_pipeline: - dataset.show(results, args.show_dir, pipeline=eval_pipeline) - else: - dataset.show(results, args.show_dir) # use default pipeline - else: - raise NotImplementedError( - 'Show is not implemented for dataset {}!'.format( - type(dataset).__name__)) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/convert_h3dnet_checkpoints.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/convert_h3dnet_checkpoints.py deleted file mode 100644 index 5484d90c..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/convert_h3dnet_checkpoints.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import tempfile - -import torch -from mmcv import Config -from mmcv.runner import load_state_dict - -from mmdet3d.models import build_detector - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet3D upgrade model version(before v0.6.0) of H3DNet') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--out', help='path of the output checkpoint file') - args = parser.parse_args() - return args - - -def parse_config(config_strings): - """Parse config from strings. - - Args: - config_strings (string): strings of model config. 
- - Returns: - Config: model config - """ - temp_file = tempfile.NamedTemporaryFile() - config_path = f'{temp_file.name}.py' - with open(config_path, 'w') as f: - f.write(config_strings) - - config = Config.fromfile(config_path) - - # Update backbone config - if 'pool_mod' in config.model.backbone.backbones: - config.model.backbone.backbones.pop('pool_mod') - - if 'sa_cfg' not in config.model.backbone: - config.model.backbone['sa_cfg'] = dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True) - - if 'type' not in config.model.rpn_head.vote_aggregation_cfg: - config.model.rpn_head.vote_aggregation_cfg['type'] = 'PointSAModule' - - # Update rpn_head config - if 'pred_layer_cfg' not in config.model.rpn_head: - config.model.rpn_head['pred_layer_cfg'] = dict( - in_channels=128, shared_conv_channels=(128, 128), bias=True) - - if 'feat_channels' in config.model.rpn_head: - config.model.rpn_head.pop('feat_channels') - - if 'vote_moudule_cfg' in config.model.rpn_head: - config.model.rpn_head['vote_module_cfg'] = config.model.rpn_head.pop( - 'vote_moudule_cfg') - - if config.model.rpn_head.vote_aggregation_cfg.use_xyz: - config.model.rpn_head.vote_aggregation_cfg.mlp_channels[0] -= 3 - - for cfg in config.model.roi_head.primitive_list: - cfg['vote_module_cfg'] = cfg.pop('vote_moudule_cfg') - cfg.vote_aggregation_cfg.mlp_channels[0] -= 3 - if 'type' not in cfg.vote_aggregation_cfg: - cfg.vote_aggregation_cfg['type'] = 'PointSAModule' - - if 'type' not in config.model.roi_head.bbox_head.suface_matching_cfg: - config.model.roi_head.bbox_head.suface_matching_cfg[ - 'type'] = 'PointSAModule' - - if config.model.roi_head.bbox_head.suface_matching_cfg.use_xyz: - config.model.roi_head.bbox_head.suface_matching_cfg.mlp_channels[ - 0] -= 3 - - if 'type' not in config.model.roi_head.bbox_head.line_matching_cfg: - config.model.roi_head.bbox_head.line_matching_cfg[ - 'type'] = 'PointSAModule' - - if config.model.roi_head.bbox_head.line_matching_cfg.use_xyz: - config.model.roi_head.bbox_head.line_matching_cfg.mlp_channels[0] -= 3 - - if 'proposal_module_cfg' in config.model.roi_head.bbox_head: - config.model.roi_head.bbox_head.pop('proposal_module_cfg') - - temp_file.close() - - return config - - -def main(): - """Convert keys in checkpoints for VoteNet. - - There can be some breaking changes during the development of mmdetection3d, - and this tool is used for upgrading checkpoints trained with old versions - (before v0.6.0) to the latest one. 
- """ - args = parse_args() - checkpoint = torch.load(args.checkpoint) - cfg = parse_config(checkpoint['meta']['config']) - # Build the model and load checkpoint - model = build_detector( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - orig_ckpt = checkpoint['state_dict'] - converted_ckpt = orig_ckpt.copy() - - if cfg['dataset_type'] == 'ScanNetDataset': - NUM_CLASSES = 18 - elif cfg['dataset_type'] == 'SUNRGBDDataset': - NUM_CLASSES = 10 - else: - raise NotImplementedError - - RENAME_PREFIX = { - 'rpn_head.conv_pred.0': 'rpn_head.conv_pred.shared_convs.layer0', - 'rpn_head.conv_pred.1': 'rpn_head.conv_pred.shared_convs.layer1' - } - - DEL_KEYS = [ - 'rpn_head.conv_pred.0.bn.num_batches_tracked', - 'rpn_head.conv_pred.1.bn.num_batches_tracked' - ] - - EXTRACT_KEYS = { - 'rpn_head.conv_pred.conv_cls.weight': - ('rpn_head.conv_pred.conv_out.weight', [(0, 2), (-NUM_CLASSES, -1)]), - 'rpn_head.conv_pred.conv_cls.bias': - ('rpn_head.conv_pred.conv_out.bias', [(0, 2), (-NUM_CLASSES, -1)]), - 'rpn_head.conv_pred.conv_reg.weight': - ('rpn_head.conv_pred.conv_out.weight', [(2, -NUM_CLASSES)]), - 'rpn_head.conv_pred.conv_reg.bias': - ('rpn_head.conv_pred.conv_out.bias', [(2, -NUM_CLASSES)]) - } - - # Delete some useless keys - for key in DEL_KEYS: - converted_ckpt.pop(key) - - # Rename keys with specific prefix - RENAME_KEYS = dict() - for old_key in converted_ckpt.keys(): - for rename_prefix in RENAME_PREFIX.keys(): - if rename_prefix in old_key: - new_key = old_key.replace(rename_prefix, - RENAME_PREFIX[rename_prefix]) - RENAME_KEYS[new_key] = old_key - for new_key, old_key in RENAME_KEYS.items(): - converted_ckpt[new_key] = converted_ckpt.pop(old_key) - - # Extract weights and rename the keys - for new_key, (old_key, indices) in EXTRACT_KEYS.items(): - cur_layers = orig_ckpt[old_key] - converted_layers = [] - for (start, end) in indices: - if end != -1: - converted_layers.append(cur_layers[start:end]) - else: - converted_layers.append(cur_layers[start:]) - converted_layers = torch.cat(converted_layers, 0) - converted_ckpt[new_key] = converted_layers - if old_key in converted_ckpt.keys(): - converted_ckpt.pop(old_key) - - # Check the converted checkpoint by loading to the model - load_state_dict(model, converted_ckpt, strict=True) - checkpoint['state_dict'] = converted_ckpt - torch.save(checkpoint, args.out) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/convert_votenet_checkpoints.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/convert_votenet_checkpoints.py deleted file mode 100644 index 78399171..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/convert_votenet_checkpoints.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import tempfile - -import torch -from mmcv import Config -from mmcv.runner import load_state_dict - -from mmdet3d.models import build_detector - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet3D upgrade model version(before v0.6.0) of VoteNet') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--out', help='path of the output checkpoint file') - args = parser.parse_args() - return args - - -def parse_config(config_strings): - """Parse config from strings. 
- - Args: - config_strings (string): strings of model config. - - Returns: - Config: model config - """ - temp_file = tempfile.NamedTemporaryFile() - config_path = f'{temp_file.name}.py' - with open(config_path, 'w') as f: - f.write(config_strings) - - config = Config.fromfile(config_path) - - # Update backbone config - if 'pool_mod' in config.model.backbone: - config.model.backbone.pop('pool_mod') - - if 'sa_cfg' not in config.model.backbone: - config.model.backbone['sa_cfg'] = dict( - type='PointSAModule', - pool_mod='max', - use_xyz=True, - normalize_xyz=True) - - if 'type' not in config.model.bbox_head.vote_aggregation_cfg: - config.model.bbox_head.vote_aggregation_cfg['type'] = 'PointSAModule' - - # Update bbox_head config - if 'pred_layer_cfg' not in config.model.bbox_head: - config.model.bbox_head['pred_layer_cfg'] = dict( - in_channels=128, shared_conv_channels=(128, 128), bias=True) - - if 'feat_channels' in config.model.bbox_head: - config.model.bbox_head.pop('feat_channels') - - if 'vote_moudule_cfg' in config.model.bbox_head: - config.model.bbox_head['vote_module_cfg'] = config.model.bbox_head.pop( - 'vote_moudule_cfg') - - if config.model.bbox_head.vote_aggregation_cfg.use_xyz: - config.model.bbox_head.vote_aggregation_cfg.mlp_channels[0] -= 3 - - temp_file.close() - - return config - - -def main(): - """Convert keys in checkpoints for VoteNet. - - There can be some breaking changes during the development of mmdetection3d, - and this tool is used for upgrading checkpoints trained with old versions - (before v0.6.0) to the latest one. - """ - args = parse_args() - checkpoint = torch.load(args.checkpoint) - cfg = parse_config(checkpoint['meta']['config']) - # Build the model and load checkpoint - model = build_detector( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - orig_ckpt = checkpoint['state_dict'] - converted_ckpt = orig_ckpt.copy() - - if cfg['dataset_type'] == 'ScanNetDataset': - NUM_CLASSES = 18 - elif cfg['dataset_type'] == 'SUNRGBDDataset': - NUM_CLASSES = 10 - else: - raise NotImplementedError - - RENAME_PREFIX = { - 'bbox_head.conv_pred.0': 'bbox_head.conv_pred.shared_convs.layer0', - 'bbox_head.conv_pred.1': 'bbox_head.conv_pred.shared_convs.layer1' - } - - DEL_KEYS = [ - 'bbox_head.conv_pred.0.bn.num_batches_tracked', - 'bbox_head.conv_pred.1.bn.num_batches_tracked' - ] - - EXTRACT_KEYS = { - 'bbox_head.conv_pred.conv_cls.weight': - ('bbox_head.conv_pred.conv_out.weight', [(0, 2), (-NUM_CLASSES, -1)]), - 'bbox_head.conv_pred.conv_cls.bias': - ('bbox_head.conv_pred.conv_out.bias', [(0, 2), (-NUM_CLASSES, -1)]), - 'bbox_head.conv_pred.conv_reg.weight': - ('bbox_head.conv_pred.conv_out.weight', [(2, -NUM_CLASSES)]), - 'bbox_head.conv_pred.conv_reg.bias': - ('bbox_head.conv_pred.conv_out.bias', [(2, -NUM_CLASSES)]) - } - - # Delete some useless keys - for key in DEL_KEYS: - converted_ckpt.pop(key) - - # Rename keys with specific prefix - RENAME_KEYS = dict() - for old_key in converted_ckpt.keys(): - for rename_prefix in RENAME_PREFIX.keys(): - if rename_prefix in old_key: - new_key = old_key.replace(rename_prefix, - RENAME_PREFIX[rename_prefix]) - RENAME_KEYS[new_key] = old_key - for new_key, old_key in RENAME_KEYS.items(): - converted_ckpt[new_key] = converted_ckpt.pop(old_key) - - # Extract weights and rename the keys - for new_key, (old_key, indices) in EXTRACT_KEYS.items(): - cur_layers = orig_ckpt[old_key] - converted_layers = [] - for (start, end) in indices: - if end != -1: - converted_layers.append(cur_layers[start:end]) - 
else: - converted_layers.append(cur_layers[start:]) - converted_layers = torch.cat(converted_layers, 0) - converted_ckpt[new_key] = converted_layers - if old_key in converted_ckpt.keys(): - converted_ckpt.pop(old_key) - - # Check the converted checkpoint by loading to the model - load_state_dict(model, converted_ckpt, strict=True) - checkpoint['state_dict'] = converted_ckpt - torch.save(checkpoint, args.out) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/publish_model.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/publish_model.py deleted file mode 100644 index e2660578..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/publish_model.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import subprocess - -import torch - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Process a checkpoint to be published') - parser.add_argument('in_file', help='input checkpoint filename') - parser.add_argument('out_file', help='output checkpoint filename') - args = parser.parse_args() - return args - - -def process_checkpoint(in_file, out_file): - checkpoint = torch.load(in_file, map_location='cpu') - # remove optimizer for smaller file size - if 'optimizer' in checkpoint: - del checkpoint['optimizer'] - # if it is necessary to remove some sensitive data in checkpoint['meta'], - # add the code here. - torch.save(checkpoint, out_file) - sha = subprocess.check_output(['sha256sum', out_file]).decode() - final_file = out_file.rstrip('.pth') + '-{}.pth'.format(sha[:8]) - subprocess.Popen(['mv', out_file, final_file]) - - -def main(): - args = parse_args() - process_checkpoint(args.in_file, args.out_file) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/regnet2mmdet.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/regnet2mmdet.py deleted file mode 100644 index fbf8c8f3..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/model_converters/regnet2mmdet.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -from collections import OrderedDict - -import torch - - -def convert_stem(model_key, model_weight, state_dict, converted_names): - new_key = model_key.replace('stem.conv', 'conv1') - new_key = new_key.replace('stem.bn', 'bn1') - state_dict[new_key] = model_weight - converted_names.add(model_key) - print(f'Convert {model_key} to {new_key}') - - -def convert_head(model_key, model_weight, state_dict, converted_names): - new_key = model_key.replace('head.fc', 'fc') - state_dict[new_key] = model_weight - converted_names.add(model_key) - print(f'Convert {model_key} to {new_key}') - - -def convert_reslayer(model_key, model_weight, state_dict, converted_names): - split_keys = model_key.split('.') - layer, block, module = split_keys[:3] - block_id = int(block[1:]) - layer_name = f'layer{int(layer[1:])}' - block_name = f'{block_id - 1}' - - if block_id == 1 and module == 'bn': - new_key = f'{layer_name}.{block_name}.downsample.1.{split_keys[-1]}' - elif block_id == 1 and module == 'proj': - new_key = f'{layer_name}.{block_name}.downsample.0.{split_keys[-1]}' - elif module == 'f': - if split_keys[3] == 'a_bn': - module_name = 'bn1' - elif split_keys[3] == 'b_bn': - module_name = 'bn2' - elif split_keys[3] == 'c_bn': - module_name = 'bn3' - elif split_keys[3] == 'a': - module_name = 'conv1' - elif split_keys[3] == 'b': - module_name = 'conv2' - elif split_keys[3] == 'c': - module_name = 'conv3' - new_key = f'{layer_name}.{block_name}.{module_name}.{split_keys[-1]}' - else: - raise ValueError(f'Unsupported conversion of key {model_key}') - print(f'Convert {model_key} to {new_key}') - state_dict[new_key] = model_weight - converted_names.add(model_key) - - -def convert(src, dst): - """Convert keys in pycls pretrained RegNet models to mmdet style.""" - # load caffe model - regnet_model = torch.load(src) - blobs = regnet_model['model_state'] - # convert to pytorch style - state_dict = OrderedDict() - converted_names = set() - for key, weight in blobs.items(): - if 'stem' in key: - convert_stem(key, weight, state_dict, converted_names) - elif 'head' in key: - convert_head(key, weight, state_dict, converted_names) - elif key.startswith('s'): - convert_reslayer(key, weight, state_dict, converted_names) - - # check if all layers are converted - for key in blobs: - if key not in converted_names: - print(f'not converted: {key}') - # save checkpoint - checkpoint = dict() - checkpoint['state_dict'] = state_dict - torch.save(checkpoint, dst) - - -def main(): - parser = argparse.ArgumentParser(description='Convert model keys') - parser.add_argument('src', help='src detectron model path') - parser.add_argument('dst', help='save path') - args = parser.parse_args() - convert(args.src, args.dst) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/slurm_test.sh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/slurm_test.sh deleted file mode 100755 index 6dd67e57..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/slurm_test.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash - -set -x - -PARTITION=$1 -JOB_NAME=$2 -CONFIG=$3 -CHECKPOINT=$4 -GPUS=${GPUS:-8} -GPUS_PER_NODE=${GPUS_PER_NODE:-8} -CPUS_PER_TASK=${CPUS_PER_TASK:-5} -PY_ARGS=${@:5} -SRUN_ARGS=${SRUN_ARGS:-""} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --cpus-per-task=${CPUS_PER_TASK} \ - --kill-on-bad-exit=1 \ - 
${SRUN_ARGS} \ - python -u tools/test.py ${CONFIG} ${CHECKPOINT} --launcher="slurm" ${PY_ARGS} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/slurm_train.sh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/slurm_train.sh deleted file mode 100755 index b3feb3d9..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/slurm_train.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash - -set -x - -PARTITION=$1 -JOB_NAME=$2 -CONFIG=$3 -WORK_DIR=$4 -GPUS=${GPUS:-8} -GPUS_PER_NODE=${GPUS_PER_NODE:-8} -CPUS_PER_TASK=${CPUS_PER_TASK:-5} -SRUN_ARGS=${SRUN_ARGS:-""} -PY_ARGS=${@:5} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --cpus-per-task=${CPUS_PER_TASK} \ - --kill-on-bad-exit=1 \ - ${SRUN_ARGS} \ - python -u tools/train.py ${CONFIG} --work-dir=${WORK_DIR} --launcher="slurm" ${PY_ARGS} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/test.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/test.py deleted file mode 100644 index f94e5fe6..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/test.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os -import warnings - -import mmcv -import torch -from mmcv import Config, DictAction -from mmcv.cnn import fuse_conv_bn -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (get_dist_info, init_dist, load_checkpoint, - wrap_fp16_model) - -import mmdet -from mmdet3d.apis import single_gpu_test -from mmdet3d.datasets import build_dataloader, build_dataset -from mmdet3d.models import build_model -from mmdet.apis import multi_gpu_test, set_random_seed -from mmdet.datasets import replace_ImageToTensor - -if mmdet.__version__ > '2.23.0': - # If mmdet version > 2.23.0, setup_multi_processes would be imported and - # used from mmdet instead of mmdet3d. - from mmdet.utils import setup_multi_processes -else: - from mmdet3d.utils import setup_multi_processes - -try: - # If mmdet version > 2.23.0, compat_cfg would be imported and - # used from mmdet instead of mmdet3d. - from mmdet.utils import compat_cfg -except ImportError: - from mmdet3d.utils import compat_cfg - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet test (and eval) a model') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--out', help='output result file in pickle format') - parser.add_argument( - '--fuse-conv-bn', - action='store_true', - help='Whether to fuse conv and bn, this will slightly increase' - 'the inference speed') - parser.add_argument( - '--gpu-ids', - type=int, - nargs='+', - help='(Deprecated, please use --gpu-id) ids of gpus to use ' - '(only applicable to non-distributed training)') - parser.add_argument( - '--gpu-id', - type=int, - default=0, - help='id of gpu to use ' - '(only applicable to non-distributed testing)') - parser.add_argument( - '--format-only', - action='store_true', - help='Format the output results without perform evaluation. 
It is' - 'useful when you want to format the result to a specific format and ' - 'submit it to the test server') - parser.add_argument( - '--eval', - type=str, - nargs='+', - help='evaluation metrics, which depends on the dataset, e.g., "bbox",' - ' "segm", "proposal" for COCO, and "mAP", "recall" for PASCAL VOC') - parser.add_argument('--show', action='store_true', help='show results') - parser.add_argument( - '--show-dir', help='directory where results will be saved') - parser.add_argument( - '--gpu-collect', - action='store_true', - help='whether to use gpu to collect results.') - parser.add_argument( - '--tmpdir', - help='tmp directory used for collecting results from multiple ' - 'workers, available when gpu-collect is not specified') - parser.add_argument('--seed', type=int, default=0, help='random seed') - parser.add_argument( - '--deterministic', - action='store_true', - help='whether to set deterministic options for CUDNN backend.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--options', - nargs='+', - action=DictAction, - help='custom options for evaluation, the key-value pair in xxx=yyy ' - 'format will be kwargs for dataset.evaluate() function (deprecate), ' - 'change to --eval-options instead.') - parser.add_argument( - '--eval-options', - nargs='+', - action=DictAction, - help='custom options for evaluation, the key-value pair in xxx=yyy ' - 'format will be kwargs for dataset.evaluate() function') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local_rank', type=int, default=0) - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - if args.options and args.eval_options: - raise ValueError( - '--options and --eval-options cannot be both specified, ' - '--options is deprecated in favor of --eval-options') - if args.options: - warnings.warn('--options is deprecated in favor of --eval-options') - args.eval_options = args.options - return args - - -def main(): - args = parse_args() - - assert args.out or args.eval or args.format_only or args.show \ - or args.show_dir, \ - ('Please specify at least one operation (save/eval/format/show the ' - 'results / save the results) with the argument "--out", "--eval"' - ', "--format-only", "--show" or "--show-dir"') - - if args.eval and args.format_only: - raise ValueError('--eval and --format_only cannot be both specified') - - if args.out is not None and not args.out.endswith(('.pkl', '.pickle')): - raise ValueError('The output file must be a pkl file.') - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - cfg = compat_cfg(cfg) - - # set multi-process settings - setup_multi_processes(cfg) - - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - - cfg.model.pretrained = None - - if args.gpu_ids is not None: - cfg.gpu_ids = args.gpu_ids[0:1] - warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. 
' - 'Because we only support single GPU mode in ' - 'non-distributed testing. Use the first GPU ' - 'in `gpu_ids` now.') - else: - cfg.gpu_ids = [args.gpu_id] - - # init distributed env first, since logger depends on the dist info. - if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - - test_dataloader_default_args = dict( - samples_per_gpu=1, workers_per_gpu=2, dist=distributed, shuffle=False) - - # in case the test dataset is concatenated - if isinstance(cfg.data.test, dict): - cfg.data.test.test_mode = True - if cfg.data.test_dataloader.get('samples_per_gpu', 1) > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.test.pipeline = replace_ImageToTensor( - cfg.data.test.pipeline) - elif isinstance(cfg.data.test, list): - for ds_cfg in cfg.data.test: - ds_cfg.test_mode = True - if cfg.data.test_dataloader.get('samples_per_gpu', 1) > 1: - for ds_cfg in cfg.data.test: - ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline) - - test_loader_cfg = { - **test_dataloader_default_args, - **cfg.data.get('test_dataloader', {}) - } - - # set random seeds - if args.seed is not None: - set_random_seed(args.seed, deterministic=args.deterministic) - - # build the dataloader - dataset = build_dataset(cfg.data.test) - data_loader = build_dataloader(dataset, **test_loader_cfg) - - # build the model and load checkpoint - cfg.model.train_cfg = None - model = build_model(cfg.model, test_cfg=cfg.get('test_cfg')) - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - wrap_fp16_model(model) - checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu') - if args.fuse_conv_bn: - model = fuse_conv_bn(model) - # old versions did not save class info in checkpoints, this walkaround is - # for backward compatibility - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - model.CLASSES = dataset.CLASSES - # palette for visualization in segmentation tasks - if 'PALETTE' in checkpoint.get('meta', {}): - model.PALETTE = checkpoint['meta']['PALETTE'] - elif hasattr(dataset, 'PALETTE'): - # segmentation dataset has `PALETTE` attribute - model.PALETTE = dataset.PALETTE - - if not distributed: - model = MMDataParallel(model, device_ids=cfg.gpu_ids) - outputs = single_gpu_test(model, data_loader, args.show, args.show_dir) - else: - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False) - outputs = multi_gpu_test(model, data_loader, args.tmpdir, - args.gpu_collect) - - rank, _ = get_dist_info() - if rank == 0: - if args.out: - print(f'\nwriting results to {args.out}') - mmcv.dump(outputs, args.out) - kwargs = {} if args.eval_options is None else args.eval_options - if args.format_only: - dataset.format_results(outputs, **kwargs) - if args.eval: - eval_kwargs = cfg.get('evaluation', {}).copy() - # hard-code way to remove EvalHook args - for key in [ - 'interval', 'tmpdir', 'start', 'gpu_collect', 'save_best', - 'rule' - ]: - eval_kwargs.pop(key, None) - eval_kwargs.update(dict(metric=args.eval, **kwargs)) - print(dataset.evaluate(outputs, **eval_kwargs)) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/update_data_coords.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/update_data_coords.py deleted file mode 100644 index e2ad41df..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/update_data_coords.py +++ /dev/null @@ 
-1,170 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -import argparse -import time -from os import path as osp - -import mmcv -import numpy as np - -from mmdet3d.core.bbox import limit_period - - -def update_sunrgbd_infos(root_dir, out_dir, pkl_files): - print(f'{pkl_files} will be modified because ' - f'of the refactor of the Depth coordinate system.') - if root_dir == out_dir: - print(f'Warning, you are overwriting ' - f'the original data under {root_dir}.') - time.sleep(3) - for pkl_file in pkl_files: - in_path = osp.join(root_dir, pkl_file) - print(f'Reading from input file: {in_path}.') - a = mmcv.load(in_path) - print('Start updating:') - for item in mmcv.track_iter_progress(a): - if 'rotation_y' in item['annos']: - item['annos']['rotation_y'] = -item['annos']['rotation_y'] - item['annos']['gt_boxes_upright_depth'][:, -1:] = \ - -item['annos']['gt_boxes_upright_depth'][:, -1:] - - out_path = osp.join(out_dir, pkl_file) - print(f'Writing to output file: {out_path}.') - mmcv.dump(a, out_path, 'pkl') - - -def update_outdoor_dbinfos(root_dir, out_dir, pkl_files): - print(f'{pkl_files} will be modified because ' - f'of the refactor of the LIDAR coordinate system.') - if root_dir == out_dir: - print(f'Warning, you are overwriting ' - f'the original data under {root_dir}.') - time.sleep(3) - for pkl_file in pkl_files: - in_path = osp.join(root_dir, pkl_file) - print(f'Reading from input file: {in_path}.') - a = mmcv.load(in_path) - print('Start updating:') - for k in a.keys(): - print(f'Updating samples of class {k}:') - for item in mmcv.track_iter_progress(a[k]): - boxes = item['box3d_lidar'].copy() - # swap l, w (or dx, dy) - item['box3d_lidar'][3] = boxes[4] - item['box3d_lidar'][4] = boxes[3] - # change yaw - item['box3d_lidar'][6] = -boxes[6] - np.pi / 2 - item['box3d_lidar'][6] = limit_period( - item['box3d_lidar'][6], period=np.pi * 2) - - out_path = osp.join(out_dir, pkl_file) - print(f'Writing to output file: {out_path}.') - mmcv.dump(a, out_path, 'pkl') - - -def update_nuscenes_or_lyft_infos(root_dir, out_dir, pkl_files): - - print(f'{pkl_files} will be modified because ' - f'of the refactor of the LIDAR coordinate system.') - if root_dir == out_dir: - print(f'Warning, you are overwriting ' - f'the original data under {root_dir}.') - time.sleep(3) - for pkl_file in pkl_files: - in_path = osp.join(root_dir, pkl_file) - print(f'Reading from input file: {in_path}.') - a = mmcv.load(in_path) - print('Start updating:') - for item in mmcv.track_iter_progress(a['infos']): - boxes = item['gt_boxes'].copy() - # swap l, w (or dx, dy) - item['gt_boxes'][:, 3] = boxes[:, 4] - item['gt_boxes'][:, 4] = boxes[:, 3] - # change yaw - item['gt_boxes'][:, 6] = -boxes[:, 6] - np.pi / 2 - item['gt_boxes'][:, 6] = limit_period( - item['gt_boxes'][:, 6], period=np.pi * 2) - - out_path = osp.join(out_dir, pkl_file) - print(f'Writing to output file: {out_path}.') - mmcv.dump(a, out_path, 'pkl') - - -parser = argparse.ArgumentParser(description='Arg parser for data coords ' - 'update due to coords sys refactor.') -parser.add_argument('dataset', metavar='kitti', help='name of the dataset') -parser.add_argument( - '--root-dir', - type=str, - default='./data/kitti', - help='specify the root dir of dataset') -parser.add_argument( - '--version', - type=str, - default='v1.0', - required=False, - help='specify the dataset version, no need for kitti') -parser.add_argument( - '--out-dir', - type=str, - default=None, - required=False, - help='name of info 
pkl') -args = parser.parse_args() - -if __name__ == '__main__': - if args.out_dir is None: - args.out_dir = args.root_dir - if args.dataset == 'kitti': - # KITTI infos is in CAM coord sys (unchanged) - # KITTI dbinfos is in LIDAR coord sys (changed) - # so we only update dbinfos - pkl_files = ['kitti_dbinfos_train.pkl'] - update_outdoor_dbinfos( - root_dir=args.root_dir, out_dir=args.out_dir, pkl_files=pkl_files) - elif args.dataset == 'nuscenes': - # nuScenes infos is in LIDAR coord sys (changed) - # nuScenes dbinfos is in LIDAR coord sys (changed) - # so we update both infos and dbinfos - pkl_files = ['nuscenes_infos_val.pkl'] - if args.version != 'v1.0-mini': - pkl_files.append('nuscenes_infos_train.pkl') - else: - pkl_files.append('nuscenes_infos_train_tiny.pkl') - update_nuscenes_or_lyft_infos( - root_dir=args.root_dir, out_dir=args.out_dir, pkl_files=pkl_files) - if args.version != 'v1.0-mini': - pkl_files = ['nuscenes_dbinfos_train.pkl'] - update_outdoor_dbinfos( - root_dir=args.root_dir, - out_dir=args.out_dir, - pkl_files=pkl_files) - elif args.dataset == 'lyft': - # Lyft infos is in LIDAR coord sys (changed) - # Lyft has no dbinfos - # so we update infos - pkl_files = ['lyft_infos_train.pkl', 'lyft_infos_val.pkl'] - update_nuscenes_or_lyft_infos( - root_dir=args.root_dir, out_dir=args.out_dir, pkl_files=pkl_files) - elif args.dataset == 'waymo': - # Waymo infos is in CAM coord sys (unchanged) - # Waymo dbinfos is in LIDAR coord sys (changed) - # so we only update dbinfos - pkl_files = ['waymo_dbinfos_train.pkl'] - update_outdoor_dbinfos( - root_dir=args.root_dir, out_dir=args.out_dir, pkl_files=pkl_files) - elif args.dataset == 'scannet': - # ScanNet infos is in DEPTH coord sys (changed) - # but bbox is without yaw - # so ScanNet is unaffected - pass - elif args.dataset == 's3dis': - # Segmentation datasets are not affected - pass - elif args.dataset == 'sunrgbd': - # SUNRGBD infos is in DEPTH coord sys (changed) - # and bbox is with yaw - # so we update infos - pkl_files = ['sunrgbd_infos_train.pkl', 'sunrgbd_infos_val.pkl'] - update_sunrgbd_infos( - root_dir=args.root_dir, out_dir=args.out_dir, pkl_files=pkl_files) diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/update_data_coords.sh b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/update_data_coords.sh deleted file mode 100644 index bd8db628..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/tools/update_data_coords.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env bash - -set -x -export PYTHONPATH=`pwd`:$PYTHONPATH - -PARTITION=$1 -DATASET=$2 -GPUS=${GPUS:-1} -GPUS_PER_NODE=${GPUS_PER_NODE:-1} -SRUN_ARGS=${SRUN_ARGS:-""} -JOB_NAME=update_data_coords - -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --kill-on-bad-exit=1 \ - ${SRUN_ARGS} \ - python -u tools/update_data_coords.py ${DATASET} \ - --root-dir ./data/${DATASET} \ - --out-dir ./data/${DATASET} diff --git a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/train.py b/cv/3d_detection/pointnet2/pytorch/mmdetection3d/train.py deleted file mode 100644 index ed9c2a6b..00000000 --- a/cv/3d_detection/pointnet2/pytorch/mmdetection3d/train.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from __future__ import division -import argparse -import copy -import os -import time -import warnings -from os import path as osp - -import mmcv -import torch -import torch.distributed as dist -from mmcv import Config, DictAction -from mmcv.runner import get_dist_info, init_dist - -from mmdet import __version__ as mmdet_version -from mmdet3d import __version__ as mmdet3d_version -from mmdet3d.apis import init_random_seed, train_model -from mmdet3d.datasets import build_dataset -from mmdet3d.models import build_model -from mmdet3d.utils import collect_env, get_root_logger -from mmdet.apis import set_random_seed -from mmseg import __version__ as mmseg_version - -try: - # If mmdet version > 2.20.0, setup_multi_processes would be imported and - # used from mmdet instead of mmdet3d. - from mmdet.utils import setup_multi_processes -except ImportError: - from mmdet3d.utils import setup_multi_processes - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a detector') - parser.add_argument('config', help='train config file path') - parser.add_argument('--work-dir', help='the dir to save logs and models') - parser.add_argument( - '--resume-from', help='the checkpoint file to resume from') - parser.add_argument( - '--auto-resume', - action='store_true', - help='resume from the latest checkpoint automatically') - parser.add_argument( - '--no-validate', - action='store_true', - help='whether not to evaluate the checkpoint during training') - group_gpus = parser.add_mutually_exclusive_group() - group_gpus.add_argument( - '--gpus', - type=int, - help='(Deprecated, please use --gpu-id) number of gpus to use ' - '(only applicable to non-distributed training)') - group_gpus.add_argument( - '--gpu-ids', - type=int, - nargs='+', - help='(Deprecated, please use --gpu-id) ids of gpus to use ' - '(only applicable to non-distributed training)') - group_gpus.add_argument( - '--gpu-id', - type=int, - default=0, - help='number of gpus to use ' - '(only applicable to non-distributed training)') - parser.add_argument('--seed', type=int, default=0, help='random seed') - parser.add_argument( - '--diff-seed', - action='store_true', - help='Whether or not set different seeds for different ranks') - parser.add_argument( - '--deterministic', - action='store_true', - help='whether to set deterministic options for CUDNN backend.') - parser.add_argument( - '--options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file (deprecate), ' - 'change to --cfg-options instead.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. 
key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local_rank', type=int, default=0) - parser.add_argument( - '--autoscale-lr', - action='store_true', - help='automatically scale lr with the number of gpus') - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - if args.options and args.cfg_options: - raise ValueError( - '--options and --cfg-options cannot be both specified, ' - '--options is deprecated in favor of --cfg-options') - if args.options: - warnings.warn('--options is deprecated in favor of --cfg-options') - args.cfg_options = args.options - - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - # set multi-process settings - setup_multi_processes(cfg) - - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - - # work_dir is determined in this priority: CLI > segment in file > filename - if args.work_dir is not None: - # update configs according to CLI args if args.work_dir is not None - cfg.work_dir = args.work_dir - elif cfg.get('work_dir', None) is None: - # use config filename as default work_dir if cfg.work_dir is None - cfg.work_dir = osp.join('./work_dirs', - osp.splitext(osp.basename(args.config))[0]) - if args.resume_from is not None: - cfg.resume_from = args.resume_from - - if args.auto_resume: - cfg.auto_resume = args.auto_resume - warnings.warn('`--auto-resume` is only supported when mmdet' - 'version >= 2.20.0 for 3D detection model or' - 'mmsegmentation verision >= 0.21.0 for 3D' - 'segmentation model') - - if args.gpus is not None: - cfg.gpu_ids = range(1) - warnings.warn('`--gpus` is deprecated because we only support ' - 'single GPU mode in non-distributed training. ' - 'Use `gpus=1` now.') - if args.gpu_ids is not None: - cfg.gpu_ids = args.gpu_ids[0:1] - warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. ' - 'Because we only support single GPU mode in ' - 'non-distributed training. Use the first GPU ' - 'in `gpu_ids` now.') - if args.gpus is None and args.gpu_ids is None: - cfg.gpu_ids = [args.gpu_id] - - if args.autoscale_lr: - # apply the linear scaling rule (https://arxiv.org/abs/1706.02677) - cfg.optimizer['lr'] = cfg.optimizer['lr'] * len(cfg.gpu_ids) / 8 - - # init distributed env first, since logger depends on the dist info. 
- if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - # re-set gpu_ids with distributed training mode - _, world_size = get_dist_info() - cfg.gpu_ids = range(world_size) - - # create work_dir - mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir)) - # dump config - cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config))) - # init the logger before other steps - timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime()) - log_file = osp.join(cfg.work_dir, f'{timestamp}.log') - # specify logger name, if we still use 'mmdet', the output info will be - # filtered and won't be saved in the log_file - # TODO: ugly workaround to judge whether we are training det or seg model - if cfg.model.type in ['EncoderDecoder3D']: - logger_name = 'mmseg' - else: - logger_name = 'mmdet' - logger = get_root_logger( - log_file=log_file, log_level=cfg.log_level, name=logger_name) - - # init the meta dict to record some important information such as - # environment info and seed, which will be logged - meta = dict() - # log env info - env_info_dict = collect_env() - env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()]) - dash_line = '-' * 60 + '\n' - logger.info('Environment info:\n' + dash_line + env_info + '\n' + - dash_line) - meta['env_info'] = env_info - meta['config'] = cfg.pretty_text - - # log some basic info - logger.info(f'Distributed training: {distributed}') - logger.info(f'Config:\n{cfg.pretty_text}') - - # set random seeds - seed = init_random_seed(args.seed) - seed = seed + dist.get_rank() if args.diff_seed else seed - logger.info(f'Set random seed to {seed}, ' - f'deterministic: {args.deterministic}') - set_random_seed(seed, deterministic=args.deterministic) - cfg.seed = seed - meta['seed'] = seed - meta['exp_name'] = osp.basename(args.config) - - model = build_model( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - model.init_weights() - - logger.info(f'Model:\n{model}') - datasets = [build_dataset(cfg.data.train)] - if len(cfg.workflow) == 2: - val_dataset = copy.deepcopy(cfg.data.val) - # in case we use a dataset wrapper - if 'dataset' in cfg.data.train: - val_dataset.pipeline = cfg.data.train.dataset.pipeline - else: - val_dataset.pipeline = cfg.data.train.pipeline - # set test_mode=False here in deep copied config - # which do not affect AP/AR calculation later - # refer to https://mmdetection3d.readthedocs.io/en/latest/tutorials/customize_runtime.html#customize-workflow # noqa - val_dataset.test_mode = False - datasets.append(build_dataset(val_dataset)) - if cfg.checkpoint_config is not None: - # save mmdet version, config file content and class names in - # checkpoints as meta data - cfg.checkpoint_config.meta = dict( - mmdet_version=mmdet_version, - mmseg_version=mmseg_version, - mmdet3d_version=mmdet3d_version, - config=cfg.pretty_text, - CLASSES=datasets[0].CLASSES, - PALETTE=datasets[0].PALETTE # for segmentors - if hasattr(datasets[0], 'PALETTE') else None) - # add an attribute for visualization convenience - model.CLASSES = datasets[0].CLASSES - train_model( - model, - datasets, - cfg, - distributed=distributed, - validate=(not args.no_validate), - timestamp=timestamp, - meta=meta) - - -if __name__ == '__main__': - main() diff --git a/cv/3d_detection/pointpillars/pytorch/README.md b/cv/3d_detection/pointpillars/pytorch/README.md index 5b58cec9..897b2506 100755 --- a/cv/3d_detection/pointpillars/pytorch/README.md +++ b/cv/3d_detection/pointpillars/pytorch/README.md 
@@ -121,3 +121,6 @@ python3 test.py --ckpt pretrained/your_weights.pth --pc_path data/kitti/training ## Acknowledements Fast Encoders for Object Detection from Point Clouds](https://arxiv.org/abs/1812.05784) + +## Reference +[PointPillars](https://github.com/zhulf0804/PointPillars/tree/b9948e73505c8d6bfa631ffdf76c7148e82c5942) \ No newline at end of file diff --git a/cv/classification/fasternet/pytorch/README.md b/cv/classification/fasternet/pytorch/README.md index 21c57776..b6fe516a 100644 --- a/cv/classification/fasternet/pytorch/README.md +++ b/cv/classification/fasternet/pytorch/README.md @@ -71,6 +71,4 @@ To train other FasterNet variants, `--cfg` need to be changed. You may also want ## Reference -- [timm](https://github.com/rwightman/pytorch-image-models) -- [ConvNeXt](https://github.com/facebookresearch/ConvNeXt) -- [mmdetection](https://github.com/open-mmlab/mmdetection) +[FasterNet](https://github.com/JierunChen/FasterNet/tree/e8fba4465ae912359c9f661a72b14e39347e4954) diff --git a/cv/classification/repvit/pytorch/README.md b/cv/classification/repvit/pytorch/README.md index 92ee1cb8..d6c36226 100644 --- a/cv/classification/repvit/pytorch/README.md +++ b/cv/classification/repvit/pytorch/README.md @@ -60,4 +60,4 @@ wandb: (3) Don't visualize my results |BI-V100 x8|1.5984 s / it| Acc@1 78.53% | ## Reference -- [RepViT](https://github.com/THU-MIG/RepViT) +- [RepViT](https://github.com/THU-MIG/RepViT/tree/4df6086a198bc1c7278ed1124b6f6409ff42148c) diff --git a/cv/detection/yolof/pytorch/README.md b/cv/detection/yolof/pytorch/README.md index 3ca449f3..8f6ab063 100755 --- a/cv/detection/yolof/pytorch/README.md +++ b/cv/detection/yolof/pytorch/README.md @@ -63,4 +63,4 @@ bash train_dist.sh configs/yolof/yolof_r50_c5_8x8_1x_coco.py 8 ## Reference -- [mmdetection](https://github.com/open-mmlab/mmdetection) \ No newline at end of file +- [mmdetection](https://github.com/WXinlong/SOLO/tree/f4cd03b9404e3bd84ca0be45966fb61d20d2efe6) \ No newline at end of file -- Gitee From 717864158c9494c98a4afd9e93f40fa380eef7c9 Mon Sep 17 00:00:00 2001 From: "hongliang.yuan" Date: Wed, 5 Mar 2025 14:18:44 +0800 Subject: [PATCH 4/7] update RepViT and delete useless code --- cv/classification/fasternet/pytorch/README.md | 2 +- cv/classification/repvit/pytorch/.gitignore | 9 - cv/classification/repvit/pytorch/README.md | 7 +- .../repvit/pytorch/data/__init__.py | 0 .../repvit/pytorch/data/datasets.py | 140 -- .../repvit/pytorch/data/samplers.py | 64 - .../repvit/pytorch/data/threeaugment.py | 121 -- .../repvit/pytorch/detection/.gitignore | 4 - .../repvit/pytorch/detection/README.md | 58 - .../repvit/pytorch/detection/checkpoint.py | 255 --- .../_base_/datasets/cityscapes_detection.py | 56 - .../_base_/datasets/cityscapes_instance.py | 56 - .../configs/_base_/datasets/coco_detection.py | 49 - .../configs/_base_/datasets/coco_instance.py | 49 - .../_base_/datasets/coco_instance_semantic.py | 54 - .../configs/_base_/datasets/deepfashion.py | 53 - .../_base_/datasets/lvis_v0.5_instance.py | 24 - .../_base_/datasets/lvis_v1_instance.py | 24 - .../configs/_base_/datasets/voc0712.py | 55 - .../configs/_base_/datasets/wider_face.py | 63 - .../configs/_base_/default_runtime.py | 16 - .../models/cascade_mask_rcnn_pvtv2_b2_fpn.py | 193 -- .../models/cascade_mask_rcnn_r50_fpn.py | 196 -- .../_base_/models/cascade_rcnn_r50_fpn.py | 179 -- .../_base_/models/fast_rcnn_r50_fpn.py | 62 - .../_base_/models/faster_rcnn_r50_caffe_c4.py | 112 -- .../models/faster_rcnn_r50_caffe_dc5.py | 103 - 
.../_base_/models/faster_rcnn_r50_fpn.py | 108 - .../_base_/models/mask_rcnn_r50_caffe_c4.py | 123 -- .../_base_/models/mask_rcnn_r50_fpn.py | 120 -- .../_base_/models/retinanet_r50_fpn.py | 60 - .../configs/_base_/models/rpn_r50_caffe_c4.py | 56 - .../configs/_base_/models/rpn_r50_fpn.py | 58 - .../detection/configs/_base_/models/ssd300.py | 51 - .../configs/_base_/schedules/schedule_1x.py | 11 - .../configs/_base_/schedules/schedule_20e.py | 11 - .../configs/_base_/schedules/schedule_2x.py | 11 - .../mask_rcnn_repvit_m1_1_fpn_1x_coco.py | 24 - .../mask_rcnn_repvit_m1_5_fpn_1x_coco.py | 24 - .../mask_rcnn_repvit_m2_3_fpn_1x_coco.py | 24 - .../repvit/pytorch/detection/dist_test.sh | 23 - .../repvit/pytorch/detection/dist_train.sh | 21 - .../repvit/pytorch/detection/eval.sh | 1 - .../detection/logs/repvit_m1_1_coco.json | 1765 ----------------- .../detection/logs/repvit_m1_5_coco.json | 1765 ----------------- .../detection/logs/repvit_m2_3_coco.json | 1765 ----------------- .../mmcv_custom/runner/checkpoint.py | 79 - .../mmcv_custom/runner/epoch_based_runner.py | 98 - .../detection/mmcv_custom/runner/optimizer.py | 29 - .../detection/mmdet_custom/apis/train.py | 180 -- .../repvit/pytorch/detection/repvit.py | 429 ---- .../repvit/pytorch/detection/slurm_train.sh | 28 - .../repvit/pytorch/detection/test.py | 241 --- .../repvit/pytorch/detection/train.py | 245 --- .../repvit/pytorch/detection/train.sh | 1 - cv/classification/repvit/pytorch/engine.py | 106 - cv/classification/repvit/pytorch/eval.sh | 1 - .../repvit/pytorch/export_coreml.py | 43 - .../repvit/pytorch/figures/latency.png | Bin 630219 -> 0 bytes .../pytorch/figures/repvit_m0_9_latency.png | Bin 120998 -> 0 bytes cv/classification/repvit/pytorch/flops.py | 22 - .../pytorch/logs/repvit_m0_9_distill_300e.txt | 300 --- .../pytorch/logs/repvit_m0_9_distill_450e.txt | 450 ----- .../pytorch/logs/repvit_m1_0_distill_300e.txt | 300 --- .../pytorch/logs/repvit_m1_0_distill_450e.txt | 450 ----- .../pytorch/logs/repvit_m1_1_distill_300e.txt | 300 --- .../pytorch/logs/repvit_m1_1_distill_450e.txt | 450 ----- .../pytorch/logs/repvit_m1_5_distill_300e.txt | 300 --- .../pytorch/logs/repvit_m1_5_distill_450e.txt | 450 ----- .../pytorch/logs/repvit_m2_3_distill_300e.txt | 300 --- .../pytorch/logs/repvit_m2_3_distill_450e.txt | 450 ----- cv/classification/repvit/pytorch/losses.py | 64 - cv/classification/repvit/pytorch/main.py | 486 ----- .../repvit/pytorch/model/__init__.py | 1 - .../repvit/pytorch/model/repvit.py | 479 ----- .../repvit/pytorch/requirements.txt | 5 - .../repvit/pytorch/segmentation/.gitignore | 4 - .../repvit/pytorch/segmentation/README.md | 64 - .../pytorch/segmentation/align_resize.py | 230 --- .../configs/_base_/datasets/ade20k.py | 57 - .../configs/_base_/default_runtime.py | 14 - .../configs/_base_/models/fpn_r50.py | 36 - .../configs/_base_/schedules/schedule_160k.py | 9 - .../configs/_base_/schedules/schedule_20k.py | 9 - .../configs/_base_/schedules/schedule_40k.py | 9 - .../configs/_base_/schedules/schedule_80k.py | 9 - .../sem_fpn/fpn_repvit_m1_1_ade20k_40k.py | 30 - .../sem_fpn/fpn_repvit_m1_5_ade20k_40k.py | 30 - .../sem_fpn/fpn_repvit_m2_3_ade20k_40k.py | 30 - .../repvit/pytorch/segmentation/eval.sh | 1 - .../segmentation/logs/repvit_m1_1_ade20k.json | 811 -------- .../segmentation/logs/repvit_m1_5_ade20k.json | 811 -------- .../segmentation/logs/repvit_m2_3_ade20k.json | 811 -------- .../repvit/pytorch/segmentation/repvit.py | 429 ---- .../segmentation/tools/analyze_logs.py | 130 -- 
.../pytorch/segmentation/tools/benchmark.py | 86 - .../segmentation/tools/browse_dataset.py | 167 -- .../tools/convert_datasets/chase_db1.py | 88 - .../tools/convert_datasets/cityscapes.py | 56 - .../tools/convert_datasets/coco_stuff10k.py | 306 --- .../tools/convert_datasets/coco_stuff164k.py | 263 --- .../tools/convert_datasets/drive.py | 113 -- .../tools/convert_datasets/hrf.py | 111 -- .../tools/convert_datasets/pascal_context.py | 87 - .../tools/convert_datasets/stare.py | 166 -- .../tools/convert_datasets/voc_aug.py | 92 - .../pytorch/segmentation/tools/deploy_test.py | 296 --- .../pytorch/segmentation/tools/dist_test.sh | 10 - .../pytorch/segmentation/tools/dist_train.sh | 20 - .../pytorch/segmentation/tools/get_flops.py | 62 - .../tools/model_converters/mit2mmseg.py | 82 - .../tools/model_converters/swin2mmseg.py | 87 - .../tools/model_converters/vit2mmseg.py | 70 - .../segmentation/tools/onnx2tensorrt.py | 276 --- .../segmentation/tools/print_config.py | 39 - .../segmentation/tools/publish_model.py | 36 - .../segmentation/tools/pytorch2onnx.py | 391 ---- .../segmentation/tools/pytorch2torchscript.py | 185 -- .../pytorch/segmentation/tools/slurm_test.sh | 24 - .../pytorch/segmentation/tools/slurm_train.sh | 27 - .../repvit/pytorch/segmentation/tools/test.py | 232 --- .../tools/torchserve/mmseg2torchserve.py | 111 -- .../tools/torchserve/mmseg_handler.py | 56 - .../tools/torchserve/test_torchserve.py | 57 - .../pytorch/segmentation/tools/train.py | 184 -- .../repvit/pytorch/segmentation/train.sh | 1 - cv/classification/repvit/pytorch/speed_gpu.py | 51 - cv/classification/repvit/pytorch/train.sh | 1 - cv/classification/repvit/pytorch/utils.py | 236 --- cv/detection/yolof/pytorch/README.md | 2 +- 130 files changed, 6 insertions(+), 21901 deletions(-) delete mode 100644 cv/classification/repvit/pytorch/.gitignore delete mode 100644 cv/classification/repvit/pytorch/data/__init__.py delete mode 100644 cv/classification/repvit/pytorch/data/datasets.py delete mode 100644 cv/classification/repvit/pytorch/data/samplers.py delete mode 100644 cv/classification/repvit/pytorch/data/threeaugment.py delete mode 100644 cv/classification/repvit/pytorch/detection/.gitignore delete mode 100644 cv/classification/repvit/pytorch/detection/README.md delete mode 100644 cv/classification/repvit/pytorch/detection/checkpoint.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/datasets/cityscapes_detection.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/datasets/cityscapes_instance.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_detection.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_instance.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_instance_semantic.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/datasets/deepfashion.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/datasets/lvis_v0.5_instance.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/datasets/lvis_v1_instance.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/datasets/voc0712.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/datasets/wider_face.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/default_runtime.py delete mode 100644 
cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_mask_rcnn_pvtv2_b2_fpn.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_rcnn_r50_fpn.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/fast_rcnn_r50_fpn.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_caffe_dc5.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_fpn.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/mask_rcnn_r50_caffe_c4.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/mask_rcnn_r50_fpn.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/retinanet_r50_fpn.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/rpn_r50_caffe_c4.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/rpn_r50_fpn.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/models/ssd300.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_1x.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_20e.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_2x.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m1_5_fpn_1x_coco.py delete mode 100644 cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m2_3_fpn_1x_coco.py delete mode 100644 cv/classification/repvit/pytorch/detection/dist_test.sh delete mode 100644 cv/classification/repvit/pytorch/detection/dist_train.sh delete mode 100644 cv/classification/repvit/pytorch/detection/eval.sh delete mode 100644 cv/classification/repvit/pytorch/detection/logs/repvit_m1_1_coco.json delete mode 100644 cv/classification/repvit/pytorch/detection/logs/repvit_m1_5_coco.json delete mode 100644 cv/classification/repvit/pytorch/detection/logs/repvit_m2_3_coco.json delete mode 100644 cv/classification/repvit/pytorch/detection/mmcv_custom/runner/checkpoint.py delete mode 100644 cv/classification/repvit/pytorch/detection/mmcv_custom/runner/epoch_based_runner.py delete mode 100644 cv/classification/repvit/pytorch/detection/mmcv_custom/runner/optimizer.py delete mode 100644 cv/classification/repvit/pytorch/detection/mmdet_custom/apis/train.py delete mode 100644 cv/classification/repvit/pytorch/detection/repvit.py delete mode 100644 cv/classification/repvit/pytorch/detection/slurm_train.sh delete mode 100644 cv/classification/repvit/pytorch/detection/test.py delete mode 100644 cv/classification/repvit/pytorch/detection/train.py delete mode 100644 cv/classification/repvit/pytorch/detection/train.sh delete mode 100644 cv/classification/repvit/pytorch/engine.py delete mode 100644 cv/classification/repvit/pytorch/eval.sh delete mode 100644 cv/classification/repvit/pytorch/export_coreml.py delete mode 100644 cv/classification/repvit/pytorch/figures/latency.png delete mode 100644 
cv/classification/repvit/pytorch/figures/repvit_m0_9_latency.png delete mode 100644 cv/classification/repvit/pytorch/flops.py delete mode 100644 cv/classification/repvit/pytorch/logs/repvit_m0_9_distill_300e.txt delete mode 100644 cv/classification/repvit/pytorch/logs/repvit_m0_9_distill_450e.txt delete mode 100644 cv/classification/repvit/pytorch/logs/repvit_m1_0_distill_300e.txt delete mode 100644 cv/classification/repvit/pytorch/logs/repvit_m1_0_distill_450e.txt delete mode 100644 cv/classification/repvit/pytorch/logs/repvit_m1_1_distill_300e.txt delete mode 100644 cv/classification/repvit/pytorch/logs/repvit_m1_1_distill_450e.txt delete mode 100644 cv/classification/repvit/pytorch/logs/repvit_m1_5_distill_300e.txt delete mode 100644 cv/classification/repvit/pytorch/logs/repvit_m1_5_distill_450e.txt delete mode 100644 cv/classification/repvit/pytorch/logs/repvit_m2_3_distill_300e.txt delete mode 100644 cv/classification/repvit/pytorch/logs/repvit_m2_3_distill_450e.txt delete mode 100644 cv/classification/repvit/pytorch/losses.py delete mode 100644 cv/classification/repvit/pytorch/main.py delete mode 100644 cv/classification/repvit/pytorch/model/__init__.py delete mode 100644 cv/classification/repvit/pytorch/model/repvit.py delete mode 100644 cv/classification/repvit/pytorch/requirements.txt delete mode 100644 cv/classification/repvit/pytorch/segmentation/.gitignore delete mode 100644 cv/classification/repvit/pytorch/segmentation/README.md delete mode 100644 cv/classification/repvit/pytorch/segmentation/align_resize.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/configs/_base_/datasets/ade20k.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/configs/_base_/default_runtime.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/configs/_base_/models/fpn_r50.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_160k.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_20k.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_40k.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_80k.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m1_5_ade20k_40k.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m2_3_ade20k_40k.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/eval.sh delete mode 100644 cv/classification/repvit/pytorch/segmentation/logs/repvit_m1_1_ade20k.json delete mode 100644 cv/classification/repvit/pytorch/segmentation/logs/repvit_m1_5_ade20k.json delete mode 100644 cv/classification/repvit/pytorch/segmentation/logs/repvit_m2_3_ade20k.json delete mode 100644 cv/classification/repvit/pytorch/segmentation/repvit.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/analyze_logs.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/benchmark.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/browse_dataset.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/chase_db1.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/cityscapes.py delete mode 100644 
cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/coco_stuff10k.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/coco_stuff164k.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/drive.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/hrf.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/pascal_context.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/stare.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/voc_aug.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/deploy_test.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/dist_test.sh delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/dist_train.sh delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/get_flops.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/model_converters/mit2mmseg.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/model_converters/swin2mmseg.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/model_converters/vit2mmseg.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/onnx2tensorrt.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/print_config.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/publish_model.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/pytorch2onnx.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/pytorch2torchscript.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/slurm_test.sh delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/slurm_train.sh delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/test.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/torchserve/mmseg2torchserve.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/torchserve/mmseg_handler.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/torchserve/test_torchserve.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/tools/train.py delete mode 100644 cv/classification/repvit/pytorch/segmentation/train.sh delete mode 100644 cv/classification/repvit/pytorch/speed_gpu.py delete mode 100644 cv/classification/repvit/pytorch/train.sh delete mode 100644 cv/classification/repvit/pytorch/utils.py diff --git a/cv/classification/fasternet/pytorch/README.md b/cv/classification/fasternet/pytorch/README.md index b6fe516a..1232ebd8 100644 --- a/cv/classification/fasternet/pytorch/README.md +++ b/cv/classification/fasternet/pytorch/README.md @@ -55,7 +55,7 @@ FasterNet-T0 training on ImageNet-1K with a 1-GPU node: ```bash # You can change the dataset path '--data_dir' according to your own dataset path !!! 
python3 train_test.py -g 0 --num_nodes 1 -n 4 -b 512 -e 2000 \ - --data_dir /path/to/imagenet \ + --data_dir ./imagenet \ --pin_memory --wandb_project_name fasternet \ --model_ckpt_dir ./model_ckpt/$(date +'%Y%m%d_%H%M%S') \ --cfg cfg/fasternet_t0.yaml diff --git a/cv/classification/repvit/pytorch/.gitignore b/cv/classification/repvit/pytorch/.gitignore deleted file mode 100644 index d8946f10..00000000 --- a/cv/classification/repvit/pytorch/.gitignore +++ /dev/null @@ -1,9 +0,0 @@ -wandb -coreml -pretrain -**/__pycache__ -pretrain -ignore -*.zip -checkpoints -trt \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/README.md b/cv/classification/repvit/pytorch/README.md index d6c36226..1650ab34 100644 --- a/cv/classification/repvit/pytorch/README.md +++ b/cv/classification/repvit/pytorch/README.md @@ -10,9 +10,10 @@ Recently, lightweight Vision Transformers (ViTs) demonstrate superior performanc ## Step 1: Installation ```bash - +git clone https://github.com/THU-MIG/RepViT.git +cd RepViT +git checkout 298f42075eda5d2e6102559fad260c970769d34e pip3 install -r requirements.txt - ``` ## Step 2: Preparing datasets @@ -60,4 +61,4 @@ wandb: (3) Don't visualize my results |BI-V100 x8|1.5984 s / it| Acc@1 78.53% | ## Reference -- [RepViT](https://github.com/THU-MIG/RepViT/tree/4df6086a198bc1c7278ed1124b6f6409ff42148c) +[RepViT](https://github.com/THU-MIG/RepViT/tree/298f42075eda5d2e6102559fad260c970769d34e) diff --git a/cv/classification/repvit/pytorch/data/__init__.py b/cv/classification/repvit/pytorch/data/__init__.py deleted file mode 100644 index e69de29b..00000000 diff --git a/cv/classification/repvit/pytorch/data/datasets.py b/cv/classification/repvit/pytorch/data/datasets.py deleted file mode 100644 index d30fad74..00000000 --- a/cv/classification/repvit/pytorch/data/datasets.py +++ /dev/null @@ -1,140 +0,0 @@ -''' -Build trainining/testing datasets -''' -import os -import json - -from torchvision import datasets, transforms -from torchvision.datasets.folder import ImageFolder, default_loader -import torch - -from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD -from timm.data import create_transform - -try: - from timm.data import TimmDatasetTar -except ImportError: - # for higher version of timm - from timm.data import ImageDataset as TimmDatasetTar - -class INatDataset(ImageFolder): - def __init__(self, root, train=True, year=2018, transform=None, target_transform=None, - category='name', loader=default_loader): - self.transform = transform - self.loader = loader - self.target_transform = target_transform - self.year = year - # assert category in ['kingdom','phylum','class','order','supercategory','family','genus','name'] - path_json = os.path.join( - root, f'{"train" if train else "val"}{year}.json') - with open(path_json) as json_file: - data = json.load(json_file) - - with open(os.path.join(root, 'categories.json')) as json_file: - data_catg = json.load(json_file) - - path_json_for_targeter = os.path.join(root, f"train{year}.json") - - with open(path_json_for_targeter) as json_file: - data_for_targeter = json.load(json_file) - - targeter = {} - indexer = 0 - for elem in data_for_targeter['annotations']: - king = [] - king.append(data_catg[int(elem['category_id'])][category]) - if king[0] not in targeter.keys(): - targeter[king[0]] = indexer - indexer += 1 - self.nb_classes = len(targeter) - - self.samples = [] - for elem in data['images']: - cut = elem['file_name'].split('/') - target_current = int(cut[2]) - path_current = os.path.join(root, 
cut[0], cut[2], cut[3]) - - categors = data_catg[target_current] - target_current_true = targeter[categors[category]] - self.samples.append((path_current, target_current_true)) - - # __getitem__ and __len__ inherited from ImageFolder - - -def build_dataset(is_train, args): - transform = build_transform(is_train, args) - - if args.data_set == 'CIFAR': - dataset = datasets.CIFAR100( - args.data_path, train=is_train, transform=transform) - nb_classes = 100 - elif args.data_set == 'IMNET': - prefix = 'train' if is_train else 'val' - data_dir = os.path.join(args.data_path, f'{prefix}.tar') - if os.path.exists(data_dir): - dataset = TimmDatasetTar(data_dir, transform=transform) - else: - root = os.path.join(args.data_path, 'train' if is_train else 'val') - dataset = datasets.ImageFolder(root, transform=transform) - nb_classes = 1000 - elif args.data_set == 'IMNETEE': - root = os.path.join(args.data_path, 'train' if is_train else 'val') - dataset = datasets.ImageFolder(root, transform=transform) - nb_classes = 10 - elif args.data_set == 'FLOWERS': - root = os.path.join(args.data_path, 'train' if is_train else 'test') - dataset = datasets.ImageFolder(root, transform=transform) - if is_train: - dataset = torch.utils.data.ConcatDataset( - [dataset for _ in range(100)]) - nb_classes = 102 - elif args.data_set == 'INAT': - dataset = INatDataset(args.data_path, train=is_train, year=2018, - category=args.inat_category, transform=transform) - nb_classes = dataset.nb_classes - elif args.data_set == 'INAT19': - dataset = INatDataset(args.data_path, train=is_train, year=2019, - category=args.inat_category, transform=transform) - nb_classes = dataset.nb_classes - return dataset, nb_classes - - -def build_transform(is_train, args): - resize_im = args.input_size > 32 - if is_train: - # this should always dispatch to transforms_imagenet_train - transform = create_transform( - input_size=args.input_size, - is_training=True, - color_jitter=args.color_jitter, - auto_augment=args.aa, - interpolation=args.train_interpolation, - re_prob=args.reprob, - re_mode=args.remode, - re_count=args.recount, - ) - if not resize_im: - # replace RandomResizedCropAndInterpolation with - # RandomCrop - transform.transforms[0] = transforms.RandomCrop( - args.input_size, padding=4) - return transform - - t = [] - if args.finetune: - t.append( - transforms.Resize((args.input_size, args.input_size), - interpolation=3) - ) - else: - if resize_im: - size = int((256 / 224) * args.input_size) - t.append( - # to maintain same ratio w.r.t. 224 images - transforms.Resize(size, interpolation=3), - ) - t.append(transforms.CenterCrop(args.input_size)) - - t.append(transforms.ToTensor()) - t.append(transforms.Normalize(IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD)) - return transforms.Compose(t) diff --git a/cv/classification/repvit/pytorch/data/samplers.py b/cv/classification/repvit/pytorch/data/samplers.py deleted file mode 100644 index 5d06cd58..00000000 --- a/cv/classification/repvit/pytorch/data/samplers.py +++ /dev/null @@ -1,64 +0,0 @@ -''' -Build samplers for data loading -''' -import torch -import torch.distributed as dist -import math - - -class RASampler(torch.utils.data.Sampler): - """Sampler that restricts data loading to a subset of the dataset for distributed, - with repeated augmentation. 
- It ensures that different each augmented version of a sample will be visible to a - different process (GPU) - Heavily based on torch.utils.data.DistributedSampler - """ - - def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True): - if num_replicas is None: - if not dist.is_available(): - raise RuntimeError( - "Requires distributed package to be available") - num_replicas = dist.get_world_size() - if rank is None: - if not dist.is_available(): - raise RuntimeError( - "Requires distributed package to be available") - rank = dist.get_rank() - self.dataset = dataset - self.num_replicas = num_replicas - self.rank = rank - self.epoch = 0 - self.num_samples = int( - math.ceil(len(self.dataset) * 3.0 / self.num_replicas)) - self.total_size = self.num_samples * self.num_replicas - # self.num_selected_samples = int(math.ceil(len(self.dataset) / self.num_replicas)) - self.num_selected_samples = int(math.floor( - len(self.dataset) // 256 * 256 / self.num_replicas)) - self.shuffle = shuffle - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - if self.shuffle: - indices = torch.randperm(len(self.dataset), generator=g).tolist() - else: - indices = list(range(len(self.dataset))) - - # add extra samples to make it evenly divisible - indices = [ele for ele in indices for i in range(3)] - indices += indices[:(self.total_size - len(indices))] - assert len(indices) == self.total_size - - # subsample - indices = indices[self.rank:self.total_size:self.num_replicas] - assert len(indices) == self.num_samples - - return iter(indices[:self.num_selected_samples]) - - def __len__(self): - return self.num_selected_samples - - def set_epoch(self, epoch): - self.epoch = epoch diff --git a/cv/classification/repvit/pytorch/data/threeaugment.py b/cv/classification/repvit/pytorch/data/threeaugment.py deleted file mode 100644 index dd2b26a4..00000000 --- a/cv/classification/repvit/pytorch/data/threeaugment.py +++ /dev/null @@ -1,121 +0,0 @@ -""" -3Augment implementation from (https://github.com/facebookresearch/deit/blob/main/augment.py) -Data-augmentation (DA) based on dino DA (https://github.com/facebookresearch/dino) -and timm DA(https://github.com/rwightman/pytorch-image-models) -Can be called by adding "--ThreeAugment" to the command line -""" -import torch -from torchvision import transforms - -from timm.data.transforms import str_to_pil_interp, RandomResizedCropAndInterpolation, ToNumpy, ToTensor - -import numpy as np -from torchvision import datasets, transforms -import random - - - -from PIL import ImageFilter, ImageOps -import torchvision.transforms.functional as TF - - -class GaussianBlur(object): - """ - Apply Gaussian Blur to the PIL image. - """ - def __init__(self, p=0.1, radius_min=0.1, radius_max=2.): - self.prob = p - self.radius_min = radius_min - self.radius_max = radius_max - - def __call__(self, img): - do_it = random.random() <= self.prob - if not do_it: - return img - - img = img.filter( - ImageFilter.GaussianBlur( - radius=random.uniform(self.radius_min, self.radius_max) - ) - ) - return img - -class Solarization(object): - """ - Apply Solarization to the PIL image. - """ - def __init__(self, p=0.2): - self.p = p - - def __call__(self, img): - if random.random() < self.p: - return ImageOps.solarize(img) - else: - return img - -class gray_scale(object): - """ - Apply Solarization to the PIL image. 
- """ - def __init__(self, p=0.2): - self.p = p - self.transf = transforms.Grayscale(3) - - def __call__(self, img): - if random.random() < self.p: - return self.transf(img) - else: - return img - - - -class horizontal_flip(object): - """ - Apply Solarization to the PIL image. - """ - def __init__(self, p=0.2,activate_pred=False): - self.p = p - self.transf = transforms.RandomHorizontalFlip(p=1.0) - - def __call__(self, img): - if random.random() < self.p: - return self.transf(img) - else: - return img - - - -def new_data_aug_generator(args = None): - img_size = args.input_size - remove_random_resized_crop = False - mean, std = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225] - primary_tfl = [] - scale=(0.08, 1.0) - interpolation='bicubic' - if remove_random_resized_crop: - primary_tfl = [ - transforms.Resize(img_size, interpolation=3), - transforms.RandomCrop(img_size, padding=4,padding_mode='reflect'), - transforms.RandomHorizontalFlip() - ] - else: - primary_tfl = [ - RandomResizedCropAndInterpolation( - img_size, scale=scale, interpolation=interpolation), - transforms.RandomHorizontalFlip() - ] - - - secondary_tfl = [transforms.RandomChoice([gray_scale(p=1.0), - Solarization(p=1.0), - GaussianBlur(p=1.0)])] - - if args.color_jitter is not None and not args.color_jitter==0: - secondary_tfl.append(transforms.ColorJitter(args.color_jitter, args.color_jitter, args.color_jitter)) - final_tfl = [ - transforms.ToTensor(), - transforms.Normalize( - mean=torch.tensor(mean), - std=torch.tensor(std)) - ] - return transforms.Compose(primary_tfl+secondary_tfl+final_tfl) diff --git a/cv/classification/repvit/pytorch/detection/.gitignore b/cv/classification/repvit/pytorch/detection/.gitignore deleted file mode 100644 index 84f1c0b3..00000000 --- a/cv/classification/repvit/pytorch/detection/.gitignore +++ /dev/null @@ -1,4 +0,0 @@ -work_dirs -pretrain -data -det_pretrain \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/detection/README.md b/cv/classification/repvit/pytorch/detection/README.md deleted file mode 100644 index e950fd01..00000000 --- a/cv/classification/repvit/pytorch/detection/README.md +++ /dev/null @@ -1,58 +0,0 @@ -# Object Detection and Instance Segmentation - -Detection and instance segmentation on MS COCO 2017 is implemented based on [MMDetection](https://github.com/open-mmlab/mmdetection). - -## Models -| Model | $AP^b$ | $AP_{50}^b$ | $AP_{75}^b$ | $AP^m$ | $AP_{50}^m$ | $AP_{75}^m$ | Latency | Ckpt | Log | -|:---------------|:----:|:---:|:--:|:--:|:--:|:--:|:--:|:--:|:--:| -| RepViT-M1.1 | 39.8 | 61.9 | 43.5 | 37.2 | 58.8 | 40.1 | 4.9ms | [M1.1](https://github.com/THU-MIG/RepViT/releases/download/v1.0/repvit_m1_1_coco.pth) | [M1.1](./logs/repvit_m1_1_coco.json) | -| RepViT-M1.5 | 41.6 | 63.2 | 45.3 | 38.6 | 60.5 | 41.5 | 6.4ms | [M1.5](https://github.com/THU-MIG/RepViT/releases/download/v1.0/repvit_m1_5_coco.pth) | [M1.5](./logs/repvit_m1_5_coco.json) | -| RepViT-M2.3 | 44.6 | 66.1 | 48.8 | 40.8 | 63.6 | 43.9 | 9.9ms | [M2.3](https://github.com/THU-MIG/RepViT/releases/download/v1.0/repvit_m2_3_coco.pth) | [M2.3](./logs/repvit_m2_3_coco.json) | - -## Installation - -Install [mmcv-full](https://github.com/open-mmlab/mmcv) and [MMDetection v2.28.2](https://github.com/open-mmlab/mmdetection/tree/v2.28.2), -Later versions should work as well. 
-The easiest way is to install via [MIM](https://github.com/open-mmlab/mim) -``` -pip install -U openmim -mim install mmcv-full==1.7.1 -mim install mmdet==2.28.2 -``` - -## Data preparation - -Prepare COCO 2017 dataset according to the [instructions in MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/1_exist_data_model.md#test-existing-models-on-standard-datasets). -The dataset should be organized as -``` -detection -├── data -│ ├── coco -│ │ ├── annotations -│ │ ├── train2017 -│ │ ├── val2017 -│ │ ├── test2017 -``` - -## Testing - -We provide a multi-GPU testing script, specify config file, checkpoint, and number of GPUs to use: -``` -./dist_test.sh config_file path/to/checkpoint #GPUs --eval bbox segm -``` - -For example, to test RepViT-M1.1 on COCO 2017 on an 8-GPU machine, - -``` -./dist_test.sh configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py path/to/repvit_m1_1_coco.pth 8 --eval bbox segm -``` - -## Training -Download ImageNet-1K pretrained weights into `./pretrain` - -We provide PyTorch distributed data parallel (DDP) training script `dist_train.sh`, for example, to train RepViT-M1.1 on an 8-GPU machine: -``` -./dist_train.sh configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py 8 -``` -Tips: specify configs and #GPUs! - diff --git a/cv/classification/repvit/pytorch/detection/checkpoint.py b/cv/classification/repvit/pytorch/detection/checkpoint.py deleted file mode 100644 index 86d7a328..00000000 --- a/cv/classification/repvit/pytorch/detection/checkpoint.py +++ /dev/null @@ -1,255 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os -import os.path as osp -import pkgutil -import re -import time -import warnings -from collections import OrderedDict -from importlib import import_module -from tempfile import TemporaryDirectory - -import torch -import torchvision -from torch.optim import Optimizer -from torch.utils import model_zoo - -import mmcv -from mmcv.parallel import is_module_wrapper -from mmcv.runner.dist_utils import get_dist_info - -ENV_MMCV_HOME = 'MMCV_HOME' -ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME' -DEFAULT_CACHE_DIR = '~/.cache' - - -def load_state_dict(module, state_dict, strict=False, logger=None): - """Load state_dict to a module. - - This method is modified from :meth:`torch.nn.Module.load_state_dict`. - Default value for ``strict`` is set to ``False`` and the message for - param mismatch will be shown even if strict is False. - - Args: - module (Module): Module that receives the state_dict. - state_dict (OrderedDict): Weights. - strict (bool): whether to strictly enforce that the keys - in :attr:`state_dict` match the keys returned by this module's - :meth:`~torch.nn.Module.state_dict` function. Default: ``False``. - logger (:obj:`logging.Logger`, optional): Logger to log the error - message. If not specified, print function will be used. 
- """ - unexpected_keys = [] - all_missing_keys = [] - err_msg = [] - - metadata = getattr(state_dict, '_metadata', None) - state_dict = state_dict.copy() - if metadata is not None: - state_dict._metadata = metadata - - # use _load_from_state_dict to enable checkpoint version control - def load(module, prefix=''): - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - local_metadata = {} if metadata is None else metadata.get( - prefix[:-1], {}) - module._load_from_state_dict(state_dict, prefix, local_metadata, True, - all_missing_keys, unexpected_keys, - err_msg) - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + '.') - - load(module) - load = None # break load->load reference cycle - - # ignore "num_batches_tracked" of BN layers - missing_keys = [ - key for key in all_missing_keys if 'num_batches_tracked' not in key - ] - - if unexpected_keys: - err_msg.append('unexpected key in source ' - f'state_dict: {", ".join(unexpected_keys)}\n') - if missing_keys: - err_msg.append( - f'missing keys in source state_dict: {", ".join(missing_keys)}\n') - - rank, _ = get_dist_info() - if len(err_msg) > 0 and rank == 0: - err_msg.insert( - 0, 'The model and loaded state dict do not match exactly\n') - err_msg = '\n'.join(err_msg) - if strict: - raise RuntimeError(err_msg) - elif logger is not None: - logger.warning(err_msg) - else: - print(err_msg) - - - - -class CheckpointLoader: - """A general checkpoint loader to manage all schemes.""" - - _schemes = {} - - @classmethod - def _register_scheme(cls, prefixes, loader, force=False): - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if (prefix not in cls._schemes) or force: - cls._schemes[prefix] = loader - else: - raise KeyError( - f'{prefix} is already registered as a loader backend, ' - 'add "force=True" if you want to override it') - # sort, longer prefixes take priority - cls._schemes = OrderedDict( - sorted(cls._schemes.items(), key=lambda t: t[0], reverse=True)) - - @classmethod - def register_scheme(cls, prefixes, loader=None, force=False): - """Register a loader to CheckpointLoader. - - This method can be used as a normal class method or a decorator. - - Args: - prefixes (str or list[str] or tuple[str]): - The prefix of the registered loader. - loader (function, optional): The loader function to be registered. - When this method is used as a decorator, loader is None. - Defaults to None. - force (bool, optional): Whether to override the loader - if the prefix has already been registered. Defaults to False. - """ - - if loader is not None: - cls._register_scheme(prefixes, loader, force=force) - return - - def _register(loader_cls): - cls._register_scheme(prefixes, loader_cls, force=force) - return loader_cls - - return _register - - @classmethod - def _get_checkpoint_loader(cls, path): - """Finds a loader that supports the given path. Falls back to the local - loader if no other loader is found. - - Args: - path (str): checkpoint path - - Returns: - loader (function): checkpoint loader - """ - - for p in cls._schemes: - if path.startswith(p): - return cls._schemes[p] - - @classmethod - def load_checkpoint(cls, filename, map_location=None, logger=None): - """load checkpoint through URL scheme path. 
- - Args: - filename (str): checkpoint file name with given prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - logger (:mod:`logging.Logger`, optional): The logger for message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - checkpoint_loader = cls._get_checkpoint_loader(filename) - class_name = checkpoint_loader.__name__ - mmcv.print_log( - f'load checkpoint from {class_name[10:]} path: {filename}', logger) - return checkpoint_loader(filename, map_location) - - - -def _load_checkpoint(filename, map_location=None, logger=None): - """Load checkpoint from somewhere (modelzoo, file, url). - - Args: - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str, optional): Same as :func:`torch.load`. - Default: None. - logger (:mod:`logging.Logger`, optional): The logger for error message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. It can be either an - OrderedDict storing model weights or a dict containing other - information, which depends on the checkpoint. - """ - return CheckpointLoader.load_checkpoint(filename, map_location, logger) - - -def load_checkpoint(model, - filename, - map_location=None, - strict=False, - logger=None, - revise_keys=[(r'^module\.', '')]): - """Load checkpoint from a file or URI. - - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - revise_keys (list): A list of customized keywords to modify the - state_dict in checkpoint. Each item is a (pattern, replacement) - pair of the regular expression operations. Default: strip - the prefix 'module.' by [(r'^module\\.', '')]. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - checkpoint = _load_checkpoint(filename, map_location, logger) - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - import pdb; pdb.set_trace() - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - elif 'model' in checkpoint: - state_dict = checkpoint['model'] - else: - state_dict = checkpoint - - # strip prefix of state_dict - metadata = getattr(state_dict, '_metadata', OrderedDict()) - for p, r in revise_keys: - state_dict = OrderedDict( - {re.sub(p, r, k): v - for k, v in state_dict.items()}) - # Keep metadata in state_dict - state_dict._metadata = metadata - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/cityscapes_detection.py b/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/cityscapes_detection.py deleted file mode 100644 index e341b59d..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/cityscapes_detection.py +++ /dev/null @@ -1,56 +0,0 @@ -# dataset settings -dataset_type = 'CityscapesDataset' -data_root = 'data/cityscapes/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', img_scale=[(2048, 800), (2048, 1024)], keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 1024), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=1, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=8, - dataset=dict( - type=dataset_type, - ann_file=data_root + - 'annotations/instancesonly_filtered_gtFine_train.json', - img_prefix=data_root + 'leftImg8bit/train/', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - ann_file=data_root + - 'annotations/instancesonly_filtered_gtFine_val.json', - img_prefix=data_root + 'leftImg8bit/val/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + - 'annotations/instancesonly_filtered_gtFine_test.json', - img_prefix=data_root + 'leftImg8bit/test/', - pipeline=test_pipeline)) -evaluation = dict(interval=1, metric='bbox') diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/cityscapes_instance.py b/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/cityscapes_instance.py deleted file mode 100644 index 4e3c34e2..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/cityscapes_instance.py +++ /dev/null @@ -1,56 +0,0 @@ -# dataset settings -dataset_type = 'CityscapesDataset' -data_root = 'data/cityscapes/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', 
with_bbox=True, with_mask=True), - dict( - type='Resize', img_scale=[(2048, 800), (2048, 1024)], keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 1024), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=1, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=8, - dataset=dict( - type=dataset_type, - ann_file=data_root + - 'annotations/instancesonly_filtered_gtFine_train.json', - img_prefix=data_root + 'leftImg8bit/train/', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - ann_file=data_root + - 'annotations/instancesonly_filtered_gtFine_val.json', - img_prefix=data_root + 'leftImg8bit/val/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + - 'annotations/instancesonly_filtered_gtFine_test.json', - img_prefix=data_root + 'leftImg8bit/test/', - pipeline=test_pipeline)) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_detection.py b/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_detection.py deleted file mode 100644 index 149f590b..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_detection.py +++ /dev/null @@ -1,49 +0,0 @@ -# dataset settings -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline)) -evaluation = dict(interval=1, metric='bbox') diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_instance.py b/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_instance.py deleted file mode 100644 index 
9901a858..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_instance.py +++ /dev/null @@ -1,49 +0,0 @@ -# dataset settings -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline)) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_instance_semantic.py b/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_instance_semantic.py deleted file mode 100644 index 6c8bf07b..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/coco_instance_semantic.py +++ /dev/null @@ -1,54 +0,0 @@ -# dataset settings -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadAnnotations', with_bbox=True, with_mask=True, with_seg=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='SegRescale', scale_factor=1 / 8), - dict(type='DefaultFormatBundle'), - dict( - type='Collect', - keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - seg_prefix=data_root + 'stuffthingmaps/train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 
'val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline)) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/deepfashion.py b/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/deepfashion.py deleted file mode 100644 index 308b4b2a..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/deepfashion.py +++ /dev/null @@ -1,53 +0,0 @@ -# dataset settings -dataset_type = 'DeepFashionDataset' -data_root = 'data/DeepFashion/In-shop/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(750, 1101), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(750, 1101), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - imgs_per_gpu=2, - workers_per_gpu=1, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/DeepFashion_segmentation_query.json', - img_prefix=data_root + 'Img/', - pipeline=train_pipeline, - data_root=data_root), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/DeepFashion_segmentation_query.json', - img_prefix=data_root + 'Img/', - pipeline=test_pipeline, - data_root=data_root), - test=dict( - type=dataset_type, - ann_file=data_root + - 'annotations/DeepFashion_segmentation_gallery.json', - img_prefix=data_root + 'Img/', - pipeline=test_pipeline, - data_root=data_root)) -evaluation = dict(interval=5, metric=['bbox', 'segm']) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/lvis_v0.5_instance.py b/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/lvis_v0.5_instance.py deleted file mode 100644 index 207e0053..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/lvis_v0.5_instance.py +++ /dev/null @@ -1,24 +0,0 @@ -# dataset settings -_base_ = 'coco_instance.py' -dataset_type = 'LVISV05Dataset' -data_root = 'data/lvis_v0.5/' -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - _delete_=True, - type='ClassBalancedDataset', - oversample_thr=1e-3, - dataset=dict( - type=dataset_type, - ann_file=data_root + 'annotations/lvis_v0.5_train.json', - img_prefix=data_root + 'train2017/')), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/lvis_v0.5_val.json', - img_prefix=data_root + 'val2017/'), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/lvis_v0.5_val.json', - img_prefix=data_root + 'val2017/')) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/lvis_v1_instance.py b/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/lvis_v1_instance.py deleted file mode 100644 index 
be791edd..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/lvis_v1_instance.py +++ /dev/null @@ -1,24 +0,0 @@ -# dataset settings -_base_ = 'coco_instance.py' -dataset_type = 'LVISV1Dataset' -data_root = 'data/lvis_v1/' -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - _delete_=True, - type='ClassBalancedDataset', - oversample_thr=1e-3, - dataset=dict( - type=dataset_type, - ann_file=data_root + 'annotations/lvis_v1_train.json', - img_prefix=data_root)), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/lvis_v1_val.json', - img_prefix=data_root), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/lvis_v1_val.json', - img_prefix=data_root)) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/voc0712.py b/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/voc0712.py deleted file mode 100644 index ae09acdd..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/voc0712.py +++ /dev/null @@ -1,55 +0,0 @@ -# dataset settings -dataset_type = 'VOCDataset' -data_root = 'data/VOCdevkit/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1000, 600), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1000, 600), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=3, - dataset=dict( - type=dataset_type, - ann_file=[ - data_root + 'VOC2007/ImageSets/Main/trainval.txt', - data_root + 'VOC2012/ImageSets/Main/trainval.txt' - ], - img_prefix=[data_root + 'VOC2007/', data_root + 'VOC2012/'], - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt', - img_prefix=data_root + 'VOC2007/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt', - img_prefix=data_root + 'VOC2007/', - pipeline=test_pipeline)) -evaluation = dict(interval=1, metric='mAP') diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/wider_face.py b/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/wider_face.py deleted file mode 100644 index d1d649be..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/datasets/wider_face.py +++ /dev/null @@ -1,63 +0,0 @@ -# dataset settings -dataset_type = 'WIDERFaceDataset' -data_root = 'data/WIDERFace/' -img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='PhotoMetricDistortion', - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18), - 
dict( - type='Expand', - mean=img_norm_cfg['mean'], - to_rgb=img_norm_cfg['to_rgb'], - ratio_range=(1, 4)), - dict( - type='MinIoURandomCrop', - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3), - dict(type='Resize', img_scale=(300, 300), keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(300, 300), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=60, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - ann_file=data_root + 'train.txt', - img_prefix=data_root + 'WIDER_train/', - min_size=17, - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - ann_file=data_root + 'val.txt', - img_prefix=data_root + 'WIDER_val/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'val.txt', - img_prefix=data_root + 'WIDER_val/', - pipeline=test_pipeline)) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/default_runtime.py b/cv/classification/repvit/pytorch/detection/configs/_base_/default_runtime.py deleted file mode 100644 index 55097c5b..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/default_runtime.py +++ /dev/null @@ -1,16 +0,0 @@ -checkpoint_config = dict(interval=1) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - # dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -custom_hooks = [dict(type='NumClassCheckHook')] - -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_mask_rcnn_pvtv2_b2_fpn.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_mask_rcnn_pvtv2_b2_fpn.py deleted file mode 100644 index 3c20bf91..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_mask_rcnn_pvtv2_b2_fpn.py +++ /dev/null @@ -1,193 +0,0 @@ -# model settings -model = dict( - type='CascadeRCNN', - backbone=dict( - type='pvt_v2_b2', - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[64, 128, 320, 512], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)), - roi_head=dict( - type='CascadeRoIHead', - num_stages=3, - stage_loss_weights=[1, 0.5, 0.25], - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 
0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_across_levels=False, - nms_pre=2000, - nms_post=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=[ - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.6, - neg_iou_thr=0.6, - min_pos_iou=0.6, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.7, - min_pos_iou=0.7, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False) - ]), - test_cfg=dict( - rpn=dict( - nms_across_levels=False, - nms_pre=1000, - nms_post=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py deleted file mode 100644 index 9ef6673c..00000000 --- 
a/cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py +++ /dev/null @@ -1,196 +0,0 @@ -# model settings -model = dict( - type='CascadeRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)), - roi_head=dict( - type='CascadeRoIHead', - num_stages=3, - stage_loss_weights=[1, 0.5, 0.25], - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=[ - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - 
match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.6, - neg_iou_thr=0.6, - min_pos_iou=0.6, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.7, - min_pos_iou=0.7, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False) - ]), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_rcnn_r50_fpn.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_rcnn_r50_fpn.py deleted file mode 100644 index cde2a96c..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/cascade_rcnn_r50_fpn.py +++ /dev/null @@ -1,179 +0,0 @@ -# model settings -model = dict( - type='CascadeRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)), - roi_head=dict( - type='CascadeRoIHead', - num_stages=3, - stage_loss_weights=[1, 0.5, 0.25], - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - 
roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ]), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=[ - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.6, - neg_iou_thr=0.6, - min_pos_iou=0.6, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.7, - min_pos_iou=0.7, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False) - ]), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100))) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/fast_rcnn_r50_fpn.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/fast_rcnn_r50_fpn.py deleted file mode 100644 index 1099165b..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/fast_rcnn_r50_fpn.py +++ /dev/null @@ -1,62 +0,0 @@ -# model settings -model = dict( - type='FastRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, 
- match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100))) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py deleted file mode 100644 index 6e18f71b..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py +++ /dev/null @@ -1,112 +0,0 @@ -# model settings -norm_cfg = dict(type='BN', requires_grad=False) -model = dict( - type='FasterRCNN', - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='ResNet', - depth=50, - num_stages=3, - strides=(1, 2, 2), - dilations=(1, 1, 1), - out_indices=(2, ), - frozen_stages=1, - norm_cfg=norm_cfg, - norm_eval=True, - style='caffe'), - rpn_head=dict( - type='RPNHead', - in_channels=1024, - feat_channels=1024, - anchor_generator=dict( - type='AnchorGenerator', - scales=[2, 4, 8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[16]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - shared_head=dict( - type='ResLayer', - depth=50, - stage=3, - stride=2, - dilation=1, - style='caffe', - norm_cfg=norm_cfg, - norm_eval=True), - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=1024, - featmap_strides=[16]), - bbox_head=dict( - type='BBoxHead', - with_avg_pool=True, - roi_feat_size=7, - in_channels=2048, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=12000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=6000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100))) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_caffe_dc5.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_caffe_dc5.py deleted file mode 100644 index 
5089f0e3..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_caffe_dc5.py +++ /dev/null @@ -1,103 +0,0 @@ -# model settings -norm_cfg = dict(type='BN', requires_grad=False) -model = dict( - type='FasterRCNN', - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - strides=(1, 2, 2, 1), - dilations=(1, 1, 1, 2), - out_indices=(3, ), - frozen_stages=1, - norm_cfg=norm_cfg, - norm_eval=True, - style='caffe'), - rpn_head=dict( - type='RPNHead', - in_channels=2048, - feat_channels=2048, - anchor_generator=dict( - type='AnchorGenerator', - scales=[2, 4, 8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[16]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=2048, - featmap_strides=[16]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=2048, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=12000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms=dict(type='nms', iou_threshold=0.7), - nms_pre=6000, - max_per_img=1000, - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100))) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_fpn.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_fpn.py deleted file mode 100644 index c67137e9..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/faster_rcnn_r50_fpn.py +++ /dev/null @@ -1,108 +0,0 @@ -# model settings -model = dict( - type='FasterRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - 
strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100) - # soft-nms is also supported for rcnn testing - # e.g., nms=dict(type='soft_nms', iou_threshold=0.5, min_score=0.05) - )) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/mask_rcnn_r50_caffe_c4.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/mask_rcnn_r50_caffe_c4.py deleted file mode 100644 index eaae1342..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/mask_rcnn_r50_caffe_c4.py +++ /dev/null @@ -1,123 +0,0 @@ -# model settings -norm_cfg = dict(type='BN', requires_grad=False) -model = dict( - type='MaskRCNN', - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='ResNet', - depth=50, - num_stages=3, - strides=(1, 2, 2), - dilations=(1, 1, 1), - out_indices=(2, ), - frozen_stages=1, - norm_cfg=norm_cfg, - norm_eval=True, - style='caffe'), - rpn_head=dict( - type='RPNHead', - in_channels=1024, - feat_channels=1024, - anchor_generator=dict( - type='AnchorGenerator', - scales=[2, 4, 8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[16]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - shared_head=dict( - type='ResLayer', - depth=50, - stage=3, - stride=2, - dilation=1, - style='caffe', - norm_cfg=norm_cfg, - norm_eval=True), - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, 
sampling_ratio=0), - out_channels=1024, - featmap_strides=[16]), - bbox_head=dict( - type='BBoxHead', - with_avg_pool=True, - roi_feat_size=7, - in_channels=2048, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - mask_roi_extractor=None, - mask_head=dict( - type='FCNMaskHead', - num_convs=0, - in_channels=2048, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=12000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=14, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=6000, - nms=dict(type='nms', iou_threshold=0.7), - max_per_img=1000, - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/mask_rcnn_r50_fpn.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/mask_rcnn_r50_fpn.py deleted file mode 100644 index 6fc79082..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/mask_rcnn_r50_fpn.py +++ /dev/null @@ -1,120 +0,0 @@ -# model settings -model = dict( - type='MaskRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - 
loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/retinanet_r50_fpn.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/retinanet_r50_fpn.py deleted file mode 100644 index f3b97b30..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/retinanet_r50_fpn.py +++ /dev/null @@ -1,60 +0,0 @@ -# model settings -model = dict( - type='RetinaNet', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs='on_input', - num_outs=5), - bbox_head=dict( - type='RetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0, - ignore_iof_thr=-1), - allowed_border=-1, - pos_weight=-1, - debug=False), - test_cfg=dict( - nms_pre=1000, - min_bbox_size=0, - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100)) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/rpn_r50_caffe_c4.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/rpn_r50_caffe_c4.py deleted file mode 100644 index 9c32a55d..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/rpn_r50_caffe_c4.py +++ /dev/null @@ 
-1,56 +0,0 @@ -# model settings -model = dict( - type='RPN', - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='ResNet', - depth=50, - num_stages=3, - strides=(1, 2, 2), - dilations=(1, 1, 1), - out_indices=(2, ), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe'), - neck=None, - rpn_head=dict( - type='RPNHead', - in_channels=1024, - feat_channels=1024, - anchor_generator=dict( - type='AnchorGenerator', - scales=[2, 4, 8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[16]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=12000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0))) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/rpn_r50_fpn.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/rpn_r50_fpn.py deleted file mode 100644 index b9b76183..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/rpn_r50_fpn.py +++ /dev/null @@ -1,58 +0,0 @@ -# model settings -model = dict( - type='RPN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0))) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/models/ssd300.py b/cv/classification/repvit/pytorch/detection/configs/_base_/models/ssd300.py deleted file mode 100644 index ef5cd727..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/models/ssd300.py +++ /dev/null @@ -1,51 +0,0 @@ -# model settings -input_size = 300 -model = dict( - type='SingleStageDetector', - pretrained='open-mmlab://vgg16_caffe', - backbone=dict( - type='SSDVGG', - input_size=input_size, - depth=16, - with_last_pool=False, - ceil_mode=True, - out_indices=(3, 4), - 
out_feature_indices=(22, 34), - l2_norm_scale=20), - neck=None, - bbox_head=dict( - type='SSDHead', - in_channels=(512, 1024, 512, 256, 256, 256), - num_classes=80, - anchor_generator=dict( - type='SSDAnchorGenerator', - scale_major=False, - input_size=input_size, - basesize_ratio_range=(0.15, 0.9), - strides=[8, 16, 32, 64, 100, 300], - ratios=[[2], [2, 3], [2, 3], [2, 3], [2], [2]]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[0.1, 0.1, 0.2, 0.2])), - # model training and testing settings - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0., - ignore_iof_thr=-1, - gt_max_assign_all=False), - smoothl1_beta=1., - allowed_border=-1, - pos_weight=-1, - neg_pos_ratio=3, - debug=False), - test_cfg=dict( - nms_pre=1000, - nms=dict(type='nms', iou_threshold=0.45), - min_bbox_size=0, - score_thr=0.02, - max_per_img=200)) -cudnn_benchmark = True diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_1x.py b/cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_1x.py deleted file mode 100644 index e42940bc..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_1x.py +++ /dev/null @@ -1,11 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=1e-6, # 0.001 - step=[8, 11]) -runner = dict(type='EpochBasedRunner', max_epochs=12) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_20e.py b/cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_20e.py deleted file mode 100644 index 00e85902..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_20e.py +++ /dev/null @@ -1,11 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[16, 19]) -runner = dict(type='EpochBasedRunner', max_epochs=20) diff --git a/cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_2x.py b/cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_2x.py deleted file mode 100644 index 69dc9ee8..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/_base_/schedules/schedule_2x.py +++ /dev/null @@ -1,11 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py b/cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py deleted file mode 100644 index 24f0ac37..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '_base_/models/mask_rcnn_r50_fpn.py', - '_base_/datasets/coco_instance.py', - '_base_/schedules/schedule_1x.py', - '_base_/default_runtime.py' -] -# optimizer -model = dict( - backbone=dict( - 
type='repvit_m1_1', - init_cfg=dict( - type='Pretrained', - checkpoint='pretrain/repvit_m1_1_distill_300e.pth', - ), - out_indices = [2,6,20,24] - ), - neck=dict( - type='FPN', - in_channels=[64, 128, 256, 512], - out_channels=256, - num_outs=5)) -# optimizer -optimizer = dict(_delete_=True, type='AdamW', lr=0.0002, weight_decay=0.05) # 0.0001 -optimizer_config = dict(grad_clip=None) diff --git a/cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m1_5_fpn_1x_coco.py b/cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m1_5_fpn_1x_coco.py deleted file mode 100644 index 03b8b21b..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m1_5_fpn_1x_coco.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '_base_/models/mask_rcnn_r50_fpn.py', - '_base_/datasets/coco_instance.py', - '_base_/schedules/schedule_1x.py', - '_base_/default_runtime.py' -] -# optimizer -model = dict( - backbone=dict( - type='repvit_m1_5', - init_cfg=dict( - type='Pretrained', - checkpoint='pretrain/repvit_m1_5_distill_300e.pth', - ), - out_indices=[4, 10, 36, 42] - ), - neck=dict( - type='FPN', - in_channels=[64, 128, 256, 512], - out_channels=256, - num_outs=5)) -# optimizer -optimizer = dict(_delete_=True, type='AdamW', lr=0.0002, weight_decay=0.05) # 0.0001 -optimizer_config = dict(grad_clip=None) diff --git a/cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m2_3_fpn_1x_coco.py b/cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m2_3_fpn_1x_coco.py deleted file mode 100644 index e82d2ed5..00000000 --- a/cv/classification/repvit/pytorch/detection/configs/mask_rcnn_repvit_m2_3_fpn_1x_coco.py +++ /dev/null @@ -1,24 +0,0 @@ -_base_ = [ - '_base_/models/mask_rcnn_r50_fpn.py', - '_base_/datasets/coco_instance.py', - '_base_/schedules/schedule_1x.py', - '_base_/default_runtime.py' -] -# optimizer -model = dict( - backbone=dict( - type='repvit_m2_3', - init_cfg=dict( - type='Pretrained', - checkpoint='pretrain/repvit_m2_3_distill_450e.pth', - ), - out_indices=[6, 14, 50, 54] - ), - neck=dict( - type='FPN', - in_channels=[80, 160, 320, 640], - out_channels=256, - num_outs=5)) -# optimizer -optimizer = dict(_delete_=True, type='AdamW', lr=0.0002, weight_decay=0.05) # 0.0001 -optimizer_config = dict(grad_clip=None) diff --git a/cv/classification/repvit/pytorch/detection/dist_test.sh b/cv/classification/repvit/pytorch/detection/dist_test.sh deleted file mode 100644 index f04be211..00000000 --- a/cv/classification/repvit/pytorch/detection/dist_test.sh +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env bash - -CONFIG=$1 -CHECKPOINT=$2 -GPUS=$3 -NNODES=${NNODES:-1} -NODE_RANK=${NODE_RANK:-0} -PORT=${PORT:-29500} -MASTER_ADDR=${MASTER_ADDR:-"127.0.0.1"} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -NCCL_P2P_DISABLE=1 \ -python -m torch.distributed.launch \ - --nnodes=$NNODES \ - --node_rank=$NODE_RANK \ - --master_addr=$MASTER_ADDR \ - --nproc_per_node=$GPUS \ - --master_port=$PORT \ - $(dirname "$0")/test.py \ - $CONFIG \ - $CHECKPOINT \ - --launcher pytorch \ - ${@:4} diff --git a/cv/classification/repvit/pytorch/detection/dist_train.sh b/cv/classification/repvit/pytorch/detection/dist_train.sh deleted file mode 100644 index f6c8026b..00000000 --- a/cv/classification/repvit/pytorch/detection/dist_train.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/usr/bin/env bash - -CONFIG=$1 -GPUS=$2 -NNODES=${NNODES:-1} -NODE_RANK=${NODE_RANK:-0} -PORT=${PORT:-29500} -MASTER_ADDR=${MASTER_ADDR:-"127.0.0.1"} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ 
-NCCL_P2P_DISABLE=1 \ -python -m torch.distributed.launch \ - --nnodes=$NNODES \ - --node_rank=$NODE_RANK \ - --master_addr=$MASTER_ADDR \ - --nproc_per_node=$GPUS \ - --master_port=$PORT \ - $(dirname "$0")/train.py \ - $CONFIG \ - --seed 0 \ - --launcher pytorch ${@:3} diff --git a/cv/classification/repvit/pytorch/detection/eval.sh b/cv/classification/repvit/pytorch/detection/eval.sh deleted file mode 100644 index 7ba1b2bb..00000000 --- a/cv/classification/repvit/pytorch/detection/eval.sh +++ /dev/null @@ -1 +0,0 @@ -PORT=12345 ./dist_test.sh configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py det_pretrain/repvit_m1_1_coco.pth 8 --eval bbox segm \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/detection/logs/repvit_m1_1_coco.json b/cv/classification/repvit/pytorch/detection/logs/repvit_m1_1_coco.json deleted file mode 100644 index 8c16feea..00000000 --- a/cv/classification/repvit/pytorch/detection/logs/repvit_m1_1_coco.json +++ /dev/null @@ -1,1765 +0,0 @@ -{"env_info": "sys.platform: linux\nPython: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]\nCUDA available: True\nGPU 0,1,2,3,4,5,6,7: NVIDIA GeForce RTX 4090\nCUDA_HOME: /home/inspur/cuda-11.7\nNVCC: Cuda compilation tools, release 11.7, V11.7.99\nGCC: gcc (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0\nPyTorch: 2.0.1+cu117\nPyTorch compiling details: PyTorch built with:\n - GCC 9.3\n - C++ Version: 201703\n - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\n - LAPACK is enabled (usually provided by MKL)\n - NNPACK is enabled\n - CPU capability usage: AVX2\n - CUDA Runtime 11.7\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\n - CuDNN 8.9.2 (built against CUDA 12.1)\n - Built with CuDNN 8.5\n - Magma 2.6.1\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n\nTorchVision: 0.15.2+cu117\nOpenCV: 4.7.0\nMMCV: 1.7.1\nMMCV 
Compiler: GCC 11.3\nMMCV CUDA Compiler: 11.7\nMMDetection: 2.28.2+25971f7", "config": "model = dict(\n type='MaskRCNN',\n pretrained='torchvision://resnet50',\n backbone=dict(\n type='repvit_m1_1',\n depth=50,\n num_stages=4,\n out_indices=[2, 6, 20, 24],\n frozen_stages=1,\n norm_cfg=dict(type='BN', requires_grad=True),\n norm_eval=True,\n style='pytorch',\n init_cfg=dict(\n type='Pretrained',\n checkpoint='pretrain/repvit_m1_1_distill_300e.pth')),\n neck=dict(\n type='FPN',\n in_channels=[64, 128, 256, 512],\n out_channels=256,\n num_outs=5),\n rpn_head=dict(\n type='RPNHead',\n in_channels=256,\n feat_channels=256,\n anchor_generator=dict(\n type='AnchorGenerator',\n scales=[8],\n ratios=[0.5, 1.0, 2.0],\n strides=[4, 8, 16, 32, 64]),\n bbox_coder=dict(\n type='DeltaXYWHBBoxCoder',\n target_means=[0.0, 0.0, 0.0, 0.0],\n target_stds=[1.0, 1.0, 1.0, 1.0]),\n loss_cls=dict(\n type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),\n loss_bbox=dict(type='L1Loss', loss_weight=1.0)),\n roi_head=dict(\n type='StandardRoIHead',\n bbox_roi_extractor=dict(\n type='SingleRoIExtractor',\n roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),\n out_channels=256,\n featmap_strides=[4, 8, 16, 32]),\n bbox_head=dict(\n type='Shared2FCBBoxHead',\n in_channels=256,\n fc_out_channels=1024,\n roi_feat_size=7,\n num_classes=80,\n bbox_coder=dict(\n type='DeltaXYWHBBoxCoder',\n target_means=[0.0, 0.0, 0.0, 0.0],\n target_stds=[0.1, 0.1, 0.2, 0.2]),\n reg_class_agnostic=False,\n loss_cls=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),\n loss_bbox=dict(type='L1Loss', loss_weight=1.0)),\n mask_roi_extractor=dict(\n type='SingleRoIExtractor',\n roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),\n out_channels=256,\n featmap_strides=[4, 8, 16, 32]),\n mask_head=dict(\n type='FCNMaskHead',\n num_convs=4,\n in_channels=256,\n conv_out_channels=256,\n num_classes=80,\n loss_mask=dict(\n type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),\n train_cfg=dict(\n rpn=dict(\n assigner=dict(\n type='MaxIoUAssigner',\n pos_iou_thr=0.7,\n neg_iou_thr=0.3,\n min_pos_iou=0.3,\n match_low_quality=True,\n ignore_iof_thr=-1),\n sampler=dict(\n type='RandomSampler',\n num=256,\n pos_fraction=0.5,\n neg_pos_ub=-1,\n add_gt_as_proposals=False),\n allowed_border=-1,\n pos_weight=-1,\n debug=False),\n rpn_proposal=dict(\n nms_pre=2000,\n max_per_img=1000,\n nms=dict(type='nms', iou_threshold=0.7),\n min_bbox_size=0),\n rcnn=dict(\n assigner=dict(\n type='MaxIoUAssigner',\n pos_iou_thr=0.5,\n neg_iou_thr=0.5,\n min_pos_iou=0.5,\n match_low_quality=True,\n ignore_iof_thr=-1),\n sampler=dict(\n type='RandomSampler',\n num=512,\n pos_fraction=0.25,\n neg_pos_ub=-1,\n add_gt_as_proposals=True),\n mask_size=28,\n pos_weight=-1,\n debug=False)),\n test_cfg=dict(\n rpn=dict(\n nms_pre=1000,\n max_per_img=1000,\n nms=dict(type='nms', iou_threshold=0.7),\n min_bbox_size=0),\n rcnn=dict(\n score_thr=0.05,\n nms=dict(type='nms', iou_threshold=0.5),\n max_per_img=100,\n mask_thr_binary=0.5)))\ndataset_type = 'CocoDataset'\ndata_root = 'data/coco/'\nimg_norm_cfg = dict(\n mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True, with_mask=True),\n dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n 
dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n]\ndata = dict(\n samples_per_gpu=2,\n workers_per_gpu=2,\n train=dict(\n type='CocoDataset',\n ann_file='data/coco/annotations/instances_train2017.json',\n img_prefix='data/coco/train2017/',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True, with_mask=True),\n dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(\n type='Collect',\n keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])\n ]),\n val=dict(\n type='CocoDataset',\n ann_file='data/coco/annotations/instances_val2017.json',\n img_prefix='data/coco/val2017/',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]),\n test=dict(\n type='CocoDataset',\n ann_file='data/coco/annotations/instances_val2017.json',\n img_prefix='data/coco/val2017/',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]))\nevaluation = dict(metric=['bbox', 'segm'])\noptimizer = dict(type='AdamW', lr=0.0002, weight_decay=0.05)\noptimizer_config = dict(grad_clip=None)\nlr_config = dict(\n policy='step',\n warmup='linear',\n warmup_iters=500,\n warmup_ratio=1e-06,\n step=[8, 11])\nrunner = dict(type='EpochBasedRunner', max_epochs=12)\ncheckpoint_config = dict(interval=1)\nlog_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])\ncustom_hooks = [dict(type='NumClassCheckHook')]\ndist_params = dict(backend='nccl')\nlog_level = 'INFO'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\nwork_dir = './work_dirs/mask_rcnn_repvit_m1_1_fpn_1x_coco'\nauto_resume = False\ngpu_ids = range(0, 8)\n", "seed": 0, "exp_name": "mask_rcnn_repvit_m1_1_fpn_1x_coco.py"} -{"mode": "train", "epoch": 1, "iter": 50, "lr": 2e-05, "memory": 5484, "data_time": 0.11493, "loss_rpn_cls": 0.53125, "loss_rpn_bbox": 0.17528, "loss_cls": 2.15711, "acc": 67.11523, "loss_bbox": 0.07881, "loss_mask": 1.82539, "loss": 4.76784, "time": 0.45501} -{"mode": "train", "epoch": 1, "iter": 100, "lr": 4e-05, "memory": 5685, "data_time": 0.03757, "loss_rpn_cls": 0.2529, 
"loss_rpn_bbox": 0.1119, "loss_cls": 0.42318, "acc": 94.50195, "loss_bbox": 0.18693, "loss_mask": 0.78322, "loss": 1.75813, "time": 0.25931} -{"mode": "train", "epoch": 1, "iter": 150, "lr": 6e-05, "memory": 5685, "data_time": 0.04388, "loss_rpn_cls": 0.22443, "loss_rpn_bbox": 0.10909, "loss_cls": 0.3672, "acc": 94.34106, "loss_bbox": 0.19103, "loss_mask": 0.70588, "loss": 1.59763, "time": 0.2502} -{"mode": "train", "epoch": 1, "iter": 200, "lr": 8e-05, "memory": 5685, "data_time": 0.0412, "loss_rpn_cls": 0.18149, "loss_rpn_bbox": 0.11001, "loss_cls": 0.41935, "acc": 93.19458, "loss_bbox": 0.23987, "loss_mask": 0.68086, "loss": 1.63158, "time": 0.25473} -{"mode": "train", "epoch": 1, "iter": 250, "lr": 0.0001, "memory": 5685, "data_time": 0.03801, "loss_rpn_cls": 0.13581, "loss_rpn_bbox": 0.10632, "loss_cls": 0.44377, "acc": 92.29712, "loss_bbox": 0.28241, "loss_mask": 0.64083, "loss": 1.60914, "time": 0.22581} -{"mode": "train", "epoch": 1, "iter": 300, "lr": 0.00012, "memory": 5766, "data_time": 0.04811, "loss_rpn_cls": 0.10375, "loss_rpn_bbox": 0.09649, "loss_cls": 0.48607, "acc": 90.94165, "loss_bbox": 0.34173, "loss_mask": 0.60045, "loss": 1.62849, "time": 0.23706} -{"mode": "train", "epoch": 1, "iter": 350, "lr": 0.00014, "memory": 5766, "data_time": 0.03541, "loss_rpn_cls": 0.09444, "loss_rpn_bbox": 0.0934, "loss_cls": 0.46625, "acc": 90.97437, "loss_bbox": 0.33575, "loss_mask": 0.55465, "loss": 1.54449, "time": 0.23098} -{"mode": "train", "epoch": 1, "iter": 400, "lr": 0.00016, "memory": 5872, "data_time": 0.04341, "loss_rpn_cls": 0.09834, "loss_rpn_bbox": 0.09409, "loss_cls": 0.47988, "acc": 90.21802, "loss_bbox": 0.35705, "loss_mask": 0.52511, "loss": 1.55448, "time": 0.23998} -{"mode": "train", "epoch": 1, "iter": 450, "lr": 0.00018, "memory": 5872, "data_time": 0.04044, "loss_rpn_cls": 0.08892, "loss_rpn_bbox": 0.08911, "loss_cls": 0.45759, "acc": 90.15674, "loss_bbox": 0.36044, "loss_mask": 0.48406, "loss": 1.4801, "time": 0.22401} -{"mode": "train", "epoch": 1, "iter": 500, "lr": 0.0002, "memory": 5872, "data_time": 0.03287, "loss_rpn_cls": 0.08567, "loss_rpn_bbox": 0.08671, "loss_cls": 0.45837, "acc": 90.03564, "loss_bbox": 0.36601, "loss_mask": 0.47189, "loss": 1.46865, "time": 0.22216} -{"mode": "train", "epoch": 1, "iter": 550, "lr": 0.0002, "memory": 5914, "data_time": 0.03586, "loss_rpn_cls": 0.08768, "loss_rpn_bbox": 0.08303, "loss_cls": 0.44763, "acc": 90.08301, "loss_bbox": 0.36176, "loss_mask": 0.44976, "loss": 1.42987, "time": 0.22477} -{"mode": "train", "epoch": 1, "iter": 600, "lr": 0.0002, "memory": 5914, "data_time": 0.0354, "loss_rpn_cls": 0.0858, "loss_rpn_bbox": 0.08417, "loss_cls": 0.42795, "acc": 89.92334, "loss_bbox": 0.3623, "loss_mask": 0.43589, "loss": 1.39611, "time": 0.22921} -{"mode": "train", "epoch": 1, "iter": 650, "lr": 0.0002, "memory": 5914, "data_time": 0.04643, "loss_rpn_cls": 0.08152, "loss_rpn_bbox": 0.08739, "loss_cls": 0.43011, "acc": 89.7854, "loss_bbox": 0.36421, "loss_mask": 0.42014, "loss": 1.38336, "time": 0.23036} -{"mode": "train", "epoch": 1, "iter": 700, "lr": 0.0002, "memory": 5932, "data_time": 0.03909, "loss_rpn_cls": 0.08153, "loss_rpn_bbox": 0.08408, "loss_cls": 0.41312, "acc": 89.56372, "loss_bbox": 0.37063, "loss_mask": 0.4186, "loss": 1.36797, "time": 0.22349} -{"mode": "train", "epoch": 1, "iter": 750, "lr": 0.0002, "memory": 5932, "data_time": 0.03728, "loss_rpn_cls": 0.07783, "loss_rpn_bbox": 0.08302, "loss_cls": 0.40903, "acc": 89.63501, "loss_bbox": 0.36463, "loss_mask": 0.40655, "loss": 1.34108, "time": 0.22942} 
-{"mode": "train", "epoch": 1, "iter": 800, "lr": 0.0002, "memory": 5932, "data_time": 0.04123, "loss_rpn_cls": 0.07502, "loss_rpn_bbox": 0.08324, "loss_cls": 0.41193, "acc": 89.5249, "loss_bbox": 0.36948, "loss_mask": 0.40897, "loss": 1.34865, "time": 0.21881} -{"mode": "train", "epoch": 1, "iter": 850, "lr": 0.0002, "memory": 5932, "data_time": 0.04313, "loss_rpn_cls": 0.07307, "loss_rpn_bbox": 0.07937, "loss_cls": 0.38904, "acc": 89.71875, "loss_bbox": 0.3573, "loss_mask": 0.39409, "loss": 1.29287, "time": 0.22877} -{"mode": "train", "epoch": 1, "iter": 900, "lr": 0.0002, "memory": 5932, "data_time": 0.03936, "loss_rpn_cls": 0.07364, "loss_rpn_bbox": 0.07946, "loss_cls": 0.39475, "acc": 89.65527, "loss_bbox": 0.36034, "loss_mask": 0.39728, "loss": 1.30546, "time": 0.2158} -{"mode": "train", "epoch": 1, "iter": 950, "lr": 0.0002, "memory": 5932, "data_time": 0.03842, "loss_rpn_cls": 0.07313, "loss_rpn_bbox": 0.07734, "loss_cls": 0.39265, "acc": 89.66455, "loss_bbox": 0.35397, "loss_mask": 0.38329, "loss": 1.28039, "time": 0.21532} -{"mode": "train", "epoch": 1, "iter": 1000, "lr": 0.0002, "memory": 5936, "data_time": 0.04323, "loss_rpn_cls": 0.07288, "loss_rpn_bbox": 0.07643, "loss_cls": 0.37378, "acc": 89.92383, "loss_bbox": 0.34456, "loss_mask": 0.37956, "loss": 1.24721, "time": 0.22676} -{"mode": "train", "epoch": 1, "iter": 1050, "lr": 0.0002, "memory": 5936, "data_time": 0.04121, "loss_rpn_cls": 0.07134, "loss_rpn_bbox": 0.07546, "loss_cls": 0.3729, "acc": 89.88843, "loss_bbox": 0.34647, "loss_mask": 0.37744, "loss": 1.24361, "time": 0.22407} -{"mode": "train", "epoch": 1, "iter": 1100, "lr": 0.0002, "memory": 5936, "data_time": 0.04542, "loss_rpn_cls": 0.06559, "loss_rpn_bbox": 0.07308, "loss_cls": 0.36851, "acc": 89.98267, "loss_bbox": 0.34278, "loss_mask": 0.37697, "loss": 1.22692, "time": 0.21262} -{"mode": "train", "epoch": 1, "iter": 1150, "lr": 0.0002, "memory": 5936, "data_time": 0.04357, "loss_rpn_cls": 0.0712, "loss_rpn_bbox": 0.0808, "loss_cls": 0.38398, "acc": 89.51953, "loss_bbox": 0.35933, "loss_mask": 0.37571, "loss": 1.27102, "time": 0.21876} -{"mode": "train", "epoch": 1, "iter": 1200, "lr": 0.0002, "memory": 5936, "data_time": 0.04852, "loss_rpn_cls": 0.0722, "loss_rpn_bbox": 0.08073, "loss_cls": 0.37505, "acc": 89.47705, "loss_bbox": 0.36045, "loss_mask": 0.37496, "loss": 1.26338, "time": 0.22206} -{"mode": "train", "epoch": 1, "iter": 1250, "lr": 0.0002, "memory": 5936, "data_time": 0.04288, "loss_rpn_cls": 0.06499, "loss_rpn_bbox": 0.07143, "loss_cls": 0.36126, "acc": 90.20068, "loss_bbox": 0.33195, "loss_mask": 0.36521, "loss": 1.19484, "time": 0.21172} -{"mode": "train", "epoch": 1, "iter": 1300, "lr": 0.0002, "memory": 5936, "data_time": 0.04545, "loss_rpn_cls": 0.06526, "loss_rpn_bbox": 0.0733, "loss_cls": 0.36175, "acc": 90.00073, "loss_bbox": 0.33686, "loss_mask": 0.36024, "loss": 1.19741, "time": 0.21082} -{"mode": "train", "epoch": 1, "iter": 1350, "lr": 0.0002, "memory": 5936, "data_time": 0.04031, "loss_rpn_cls": 0.07245, "loss_rpn_bbox": 0.0794, "loss_cls": 0.36601, "acc": 89.76782, "loss_bbox": 0.34344, "loss_mask": 0.35565, "loss": 1.21695, "time": 0.21426} -{"mode": "train", "epoch": 1, "iter": 1400, "lr": 0.0002, "memory": 5936, "data_time": 0.0399, "loss_rpn_cls": 0.06673, "loss_rpn_bbox": 0.07326, "loss_cls": 0.34296, "acc": 90.51123, "loss_bbox": 0.32437, "loss_mask": 0.35109, "loss": 1.15842, "time": 0.20436} -{"mode": "train", "epoch": 1, "iter": 1450, "lr": 0.0002, "memory": 5936, "data_time": 0.03708, "loss_rpn_cls": 0.06306, 
"loss_rpn_bbox": 0.07435, "loss_cls": 0.36285, "acc": 89.646, "loss_bbox": 0.35021, "loss_mask": 0.34748, "loss": 1.19795, "time": 0.20476} -{"mode": "train", "epoch": 1, "iter": 1500, "lr": 0.0002, "memory": 5936, "data_time": 0.04808, "loss_rpn_cls": 0.07364, "loss_rpn_bbox": 0.08142, "loss_cls": 0.34656, "acc": 90.08008, "loss_bbox": 0.33642, "loss_mask": 0.35705, "loss": 1.1951, "time": 0.21209} -{"mode": "train", "epoch": 1, "iter": 1550, "lr": 0.0002, "memory": 5936, "data_time": 0.04729, "loss_rpn_cls": 0.06236, "loss_rpn_bbox": 0.07402, "loss_cls": 0.34814, "acc": 90.0354, "loss_bbox": 0.33496, "loss_mask": 0.35414, "loss": 1.17362, "time": 0.21016} -{"mode": "train", "epoch": 1, "iter": 1600, "lr": 0.0002, "memory": 5936, "data_time": 0.03868, "loss_rpn_cls": 0.06889, "loss_rpn_bbox": 0.07543, "loss_cls": 0.34815, "acc": 90.13452, "loss_bbox": 0.33355, "loss_mask": 0.3542, "loss": 1.18022, "time": 0.21761} -{"mode": "train", "epoch": 1, "iter": 1650, "lr": 0.0002, "memory": 5936, "data_time": 0.0436, "loss_rpn_cls": 0.06396, "loss_rpn_bbox": 0.07562, "loss_cls": 0.34146, "acc": 90.38062, "loss_bbox": 0.32968, "loss_mask": 0.35034, "loss": 1.16107, "time": 0.20726} -{"mode": "train", "epoch": 1, "iter": 1700, "lr": 0.0002, "memory": 5936, "data_time": 0.03782, "loss_rpn_cls": 0.06469, "loss_rpn_bbox": 0.06994, "loss_cls": 0.35525, "acc": 89.9126, "loss_bbox": 0.33811, "loss_mask": 0.3485, "loss": 1.17648, "time": 0.20975} -{"mode": "train", "epoch": 1, "iter": 1750, "lr": 0.0002, "memory": 5936, "data_time": 0.03372, "loss_rpn_cls": 0.06417, "loss_rpn_bbox": 0.07431, "loss_cls": 0.35192, "acc": 89.80371, "loss_bbox": 0.33962, "loss_mask": 0.34302, "loss": 1.17305, "time": 0.21364} -{"mode": "train", "epoch": 1, "iter": 1800, "lr": 0.0002, "memory": 5936, "data_time": 0.0392, "loss_rpn_cls": 0.0651, "loss_rpn_bbox": 0.07461, "loss_cls": 0.34899, "acc": 90.15967, "loss_bbox": 0.33181, "loss_mask": 0.35398, "loss": 1.17448, "time": 0.20481} -{"mode": "train", "epoch": 1, "iter": 1850, "lr": 0.0002, "memory": 5936, "data_time": 0.04213, "loss_rpn_cls": 0.06276, "loss_rpn_bbox": 0.07499, "loss_cls": 0.35278, "acc": 89.82812, "loss_bbox": 0.33725, "loss_mask": 0.34463, "loss": 1.1724, "time": 0.21241} -{"mode": "train", "epoch": 1, "iter": 1900, "lr": 0.0002, "memory": 5940, "data_time": 0.03908, "loss_rpn_cls": 0.06093, "loss_rpn_bbox": 0.07284, "loss_cls": 0.33534, "acc": 90.30811, "loss_bbox": 0.32299, "loss_mask": 0.34519, "loss": 1.1373, "time": 0.2133} -{"mode": "train", "epoch": 1, "iter": 1950, "lr": 0.0002, "memory": 5940, "data_time": 0.04395, "loss_rpn_cls": 0.06154, "loss_rpn_bbox": 0.07539, "loss_cls": 0.32198, "acc": 90.50488, "loss_bbox": 0.32155, "loss_mask": 0.33265, "loss": 1.1131, "time": 0.21363} -{"mode": "train", "epoch": 1, "iter": 2000, "lr": 0.0002, "memory": 5940, "data_time": 0.04315, "loss_rpn_cls": 0.05978, "loss_rpn_bbox": 0.07069, "loss_cls": 0.34653, "acc": 90.13794, "loss_bbox": 0.33169, "loss_mask": 0.34035, "loss": 1.14903, "time": 0.21138} -{"mode": "train", "epoch": 1, "iter": 2050, "lr": 0.0002, "memory": 5940, "data_time": 0.03428, "loss_rpn_cls": 0.0614, "loss_rpn_bbox": 0.07249, "loss_cls": 0.34002, "acc": 90.15649, "loss_bbox": 0.33462, "loss_mask": 0.34268, "loss": 1.1512, "time": 0.21608} -{"mode": "train", "epoch": 1, "iter": 2100, "lr": 0.0002, "memory": 5940, "data_time": 0.03469, "loss_rpn_cls": 0.0583, "loss_rpn_bbox": 0.06864, "loss_cls": 0.32293, "acc": 90.4978, "loss_bbox": 0.32046, "loss_mask": 0.33263, "loss": 1.10296, "time": 
0.20065} -{"mode": "train", "epoch": 1, "iter": 2150, "lr": 0.0002, "memory": 5940, "data_time": 0.03993, "loss_rpn_cls": 0.0619, "loss_rpn_bbox": 0.06996, "loss_cls": 0.32454, "acc": 90.46338, "loss_bbox": 0.31911, "loss_mask": 0.33597, "loss": 1.11148, "time": 0.20466} -{"mode": "train", "epoch": 1, "iter": 2200, "lr": 0.0002, "memory": 5940, "data_time": 0.03879, "loss_rpn_cls": 0.0677, "loss_rpn_bbox": 0.07497, "loss_cls": 0.33319, "acc": 90.33008, "loss_bbox": 0.32133, "loss_mask": 0.33115, "loss": 1.12833, "time": 0.21653} -{"mode": "train", "epoch": 1, "iter": 2250, "lr": 0.0002, "memory": 5940, "data_time": 0.04457, "loss_rpn_cls": 0.05922, "loss_rpn_bbox": 0.07199, "loss_cls": 0.32498, "acc": 90.54834, "loss_bbox": 0.31504, "loss_mask": 0.33003, "loss": 1.10126, "time": 0.21394} -{"mode": "train", "epoch": 1, "iter": 2300, "lr": 0.0002, "memory": 5940, "data_time": 0.03635, "loss_rpn_cls": 0.06202, "loss_rpn_bbox": 0.0738, "loss_cls": 0.33655, "acc": 90.12671, "loss_bbox": 0.33673, "loss_mask": 0.33696, "loss": 1.14605, "time": 0.21137} -{"mode": "train", "epoch": 1, "iter": 2350, "lr": 0.0002, "memory": 5940, "data_time": 0.03343, "loss_rpn_cls": 0.06046, "loss_rpn_bbox": 0.07278, "loss_cls": 0.32973, "acc": 90.3855, "loss_bbox": 0.32003, "loss_mask": 0.33177, "loss": 1.11478, "time": 0.20354} -{"mode": "train", "epoch": 1, "iter": 2400, "lr": 0.0002, "memory": 5940, "data_time": 0.03682, "loss_rpn_cls": 0.06104, "loss_rpn_bbox": 0.07331, "loss_cls": 0.33177, "acc": 90.25098, "loss_bbox": 0.32145, "loss_mask": 0.3378, "loss": 1.12537, "time": 0.20823} -{"mode": "train", "epoch": 1, "iter": 2450, "lr": 0.0002, "memory": 5940, "data_time": 0.04194, "loss_rpn_cls": 0.06362, "loss_rpn_bbox": 0.07028, "loss_cls": 0.3322, "acc": 90.2356, "loss_bbox": 0.32811, "loss_mask": 0.32985, "loss": 1.12407, "time": 0.20663} -{"mode": "train", "epoch": 1, "iter": 2500, "lr": 0.0002, "memory": 5940, "data_time": 0.02875, "loss_rpn_cls": 0.05606, "loss_rpn_bbox": 0.06888, "loss_cls": 0.32335, "acc": 90.52002, "loss_bbox": 0.31912, "loss_mask": 0.3227, "loss": 1.09011, "time": 0.21005} -{"mode": "train", "epoch": 1, "iter": 2550, "lr": 0.0002, "memory": 5940, "data_time": 0.03338, "loss_rpn_cls": 0.05794, "loss_rpn_bbox": 0.07074, "loss_cls": 0.32379, "acc": 90.36792, "loss_bbox": 0.32861, "loss_mask": 0.33935, "loss": 1.12044, "time": 0.20311} -{"mode": "train", "epoch": 1, "iter": 2600, "lr": 0.0002, "memory": 5940, "data_time": 0.03879, "loss_rpn_cls": 0.05788, "loss_rpn_bbox": 0.06876, "loss_cls": 0.33299, "acc": 90.16187, "loss_bbox": 0.32759, "loss_mask": 0.3195, "loss": 1.10672, "time": 0.20974} -{"mode": "train", "epoch": 1, "iter": 2650, "lr": 0.0002, "memory": 5940, "data_time": 0.03442, "loss_rpn_cls": 0.05855, "loss_rpn_bbox": 0.07112, "loss_cls": 0.33083, "acc": 90.11914, "loss_bbox": 0.33409, "loss_mask": 0.32399, "loss": 1.11858, "time": 0.2106} -{"mode": "train", "epoch": 1, "iter": 2700, "lr": 0.0002, "memory": 5940, "data_time": 0.03865, "loss_rpn_cls": 0.05897, "loss_rpn_bbox": 0.07238, "loss_cls": 0.3305, "acc": 90.13794, "loss_bbox": 0.33208, "loss_mask": 0.33369, "loss": 1.12762, "time": 0.21307} -{"mode": "train", "epoch": 1, "iter": 2750, "lr": 0.0002, "memory": 5940, "data_time": 0.04032, "loss_rpn_cls": 0.06088, "loss_rpn_bbox": 0.06827, "loss_cls": 0.30481, "acc": 90.93872, "loss_bbox": 0.30856, "loss_mask": 0.31643, "loss": 1.05894, "time": 0.20967} -{"mode": "train", "epoch": 1, "iter": 2800, "lr": 0.0002, "memory": 5940, "data_time": 0.03897, "loss_rpn_cls": 0.059, 
"loss_rpn_bbox": 0.07015, "loss_cls": 0.31761, "acc": 90.71021, "loss_bbox": 0.31019, "loss_mask": 0.32351, "loss": 1.08047, "time": 0.20687} -{"mode": "train", "epoch": 1, "iter": 2850, "lr": 0.0002, "memory": 5940, "data_time": 0.03818, "loss_rpn_cls": 0.05921, "loss_rpn_bbox": 0.07122, "loss_cls": 0.31683, "acc": 90.39697, "loss_bbox": 0.32331, "loss_mask": 0.32316, "loss": 1.09372, "time": 0.21888} -{"mode": "train", "epoch": 1, "iter": 2900, "lr": 0.0002, "memory": 5940, "data_time": 0.04429, "loss_rpn_cls": 0.05254, "loss_rpn_bbox": 0.06779, "loss_cls": 0.32038, "acc": 90.55884, "loss_bbox": 0.32004, "loss_mask": 0.32701, "loss": 1.08776, "time": 0.21218} -{"mode": "train", "epoch": 1, "iter": 2950, "lr": 0.0002, "memory": 5940, "data_time": 0.02881, "loss_rpn_cls": 0.0562, "loss_rpn_bbox": 0.07095, "loss_cls": 0.32086, "acc": 90.57031, "loss_bbox": 0.31538, "loss_mask": 0.31793, "loss": 1.08131, "time": 0.21738} -{"mode": "train", "epoch": 1, "iter": 3000, "lr": 0.0002, "memory": 5940, "data_time": 0.03159, "loss_rpn_cls": 0.06271, "loss_rpn_bbox": 0.07593, "loss_cls": 0.31621, "acc": 90.74316, "loss_bbox": 0.31663, "loss_mask": 0.32685, "loss": 1.09834, "time": 0.21617} -{"mode": "train", "epoch": 1, "iter": 3050, "lr": 0.0002, "memory": 5940, "data_time": 0.04544, "loss_rpn_cls": 0.06091, "loss_rpn_bbox": 0.06968, "loss_cls": 0.31932, "acc": 90.38794, "loss_bbox": 0.32548, "loss_mask": 0.33366, "loss": 1.10905, "time": 0.21227} -{"mode": "train", "epoch": 1, "iter": 3100, "lr": 0.0002, "memory": 5940, "data_time": 0.03675, "loss_rpn_cls": 0.05465, "loss_rpn_bbox": 0.06797, "loss_cls": 0.31449, "acc": 90.46924, "loss_bbox": 0.31933, "loss_mask": 0.32247, "loss": 1.07892, "time": 0.20284} -{"mode": "train", "epoch": 1, "iter": 3150, "lr": 0.0002, "memory": 5940, "data_time": 0.03148, "loss_rpn_cls": 0.05265, "loss_rpn_bbox": 0.07035, "loss_cls": 0.31223, "acc": 90.70581, "loss_bbox": 0.3142, "loss_mask": 0.32559, "loss": 1.07501, "time": 0.21101} -{"mode": "train", "epoch": 1, "iter": 3200, "lr": 0.0002, "memory": 5940, "data_time": 0.03162, "loss_rpn_cls": 0.05883, "loss_rpn_bbox": 0.06925, "loss_cls": 0.31779, "acc": 90.64111, "loss_bbox": 0.31517, "loss_mask": 0.31544, "loss": 1.07647, "time": 0.21123} -{"mode": "train", "epoch": 1, "iter": 3250, "lr": 0.0002, "memory": 5940, "data_time": 0.03705, "loss_rpn_cls": 0.05276, "loss_rpn_bbox": 0.06835, "loss_cls": 0.32408, "acc": 90.09619, "loss_bbox": 0.32772, "loss_mask": 0.31716, "loss": 1.09007, "time": 0.21114} -{"mode": "train", "epoch": 1, "iter": 3300, "lr": 0.0002, "memory": 5940, "data_time": 0.03814, "loss_rpn_cls": 0.05875, "loss_rpn_bbox": 0.07174, "loss_cls": 0.32008, "acc": 90.54663, "loss_bbox": 0.32258, "loss_mask": 0.32142, "loss": 1.09458, "time": 0.21975} -{"mode": "train", "epoch": 1, "iter": 3350, "lr": 0.0002, "memory": 5940, "data_time": 0.03667, "loss_rpn_cls": 0.06172, "loss_rpn_bbox": 0.06551, "loss_cls": 0.31506, "acc": 90.62451, "loss_bbox": 0.30858, "loss_mask": 0.32153, "loss": 1.0724, "time": 0.20247} -{"mode": "train", "epoch": 1, "iter": 3400, "lr": 0.0002, "memory": 5940, "data_time": 0.04459, "loss_rpn_cls": 0.05777, "loss_rpn_bbox": 0.07325, "loss_cls": 0.31253, "acc": 90.5625, "loss_bbox": 0.31921, "loss_mask": 0.32034, "loss": 1.08311, "time": 0.21293} -{"mode": "train", "epoch": 1, "iter": 3450, "lr": 0.0002, "memory": 5940, "data_time": 0.02992, "loss_rpn_cls": 0.05757, "loss_rpn_bbox": 0.07286, "loss_cls": 0.32745, "acc": 90.51074, "loss_bbox": 0.31968, "loss_mask": 0.3226, "loss": 1.10016, 
"time": 0.19903} -{"mode": "train", "epoch": 1, "iter": 3500, "lr": 0.0002, "memory": 5940, "data_time": 0.0442, "loss_rpn_cls": 0.05651, "loss_rpn_bbox": 0.06872, "loss_cls": 0.31046, "acc": 90.76221, "loss_bbox": 0.31279, "loss_mask": 0.31917, "loss": 1.06764, "time": 0.21846} -{"mode": "train", "epoch": 1, "iter": 3550, "lr": 0.0002, "memory": 5940, "data_time": 0.03713, "loss_rpn_cls": 0.05352, "loss_rpn_bbox": 0.06824, "loss_cls": 0.3183, "acc": 90.50195, "loss_bbox": 0.3201, "loss_mask": 0.32021, "loss": 1.08036, "time": 0.20102} -{"mode": "train", "epoch": 1, "iter": 3600, "lr": 0.0002, "memory": 5940, "data_time": 0.0374, "loss_rpn_cls": 0.05344, "loss_rpn_bbox": 0.06607, "loss_cls": 0.31425, "acc": 90.47046, "loss_bbox": 0.32063, "loss_mask": 0.31897, "loss": 1.07335, "time": 0.21277} -{"mode": "train", "epoch": 1, "iter": 3650, "lr": 0.0002, "memory": 5940, "data_time": 0.03922, "loss_rpn_cls": 0.05532, "loss_rpn_bbox": 0.07105, "loss_cls": 0.31893, "acc": 90.43799, "loss_bbox": 0.32188, "loss_mask": 0.31303, "loss": 1.08021, "time": 0.21105} -{"mode": "train", "epoch": 1, "iter": 3700, "lr": 0.0002, "memory": 5940, "data_time": 0.03395, "loss_rpn_cls": 0.05427, "loss_rpn_bbox": 0.06924, "loss_cls": 0.30811, "acc": 90.59814, "loss_bbox": 0.32056, "loss_mask": 0.30955, "loss": 1.06173, "time": 0.21704} -{"mode": "train", "epoch": 1, "iter": 3750, "lr": 0.0002, "memory": 5940, "data_time": 0.03616, "loss_rpn_cls": 0.05817, "loss_rpn_bbox": 0.07328, "loss_cls": 0.33356, "acc": 89.86035, "loss_bbox": 0.33735, "loss_mask": 0.32294, "loss": 1.1253, "time": 0.21321} -{"mode": "train", "epoch": 1, "iter": 3800, "lr": 0.0002, "memory": 5940, "data_time": 0.03605, "loss_rpn_cls": 0.05265, "loss_rpn_bbox": 0.06788, "loss_cls": 0.31928, "acc": 90.5061, "loss_bbox": 0.31823, "loss_mask": 0.31216, "loss": 1.07019, "time": 0.20017} -{"mode": "train", "epoch": 1, "iter": 3850, "lr": 0.0002, "memory": 5940, "data_time": 0.04648, "loss_rpn_cls": 0.05769, "loss_rpn_bbox": 0.07073, "loss_cls": 0.3168, "acc": 90.51733, "loss_bbox": 0.31756, "loss_mask": 0.32174, "loss": 1.08453, "time": 0.21708} -{"mode": "train", "epoch": 1, "iter": 3900, "lr": 0.0002, "memory": 5940, "data_time": 0.03282, "loss_rpn_cls": 0.0491, "loss_rpn_bbox": 0.0661, "loss_cls": 0.32579, "acc": 90.04736, "loss_bbox": 0.33001, "loss_mask": 0.31512, "loss": 1.08612, "time": 0.20181} -{"mode": "train", "epoch": 1, "iter": 3950, "lr": 0.0002, "memory": 5940, "data_time": 0.03316, "loss_rpn_cls": 0.05166, "loss_rpn_bbox": 0.06538, "loss_cls": 0.31043, "acc": 90.77588, "loss_bbox": 0.315, "loss_mask": 0.31063, "loss": 1.0531, "time": 0.20195} -{"mode": "train", "epoch": 1, "iter": 4000, "lr": 0.0002, "memory": 5940, "data_time": 0.03276, "loss_rpn_cls": 0.05288, "loss_rpn_bbox": 0.06815, "loss_cls": 0.30725, "acc": 90.79565, "loss_bbox": 0.31056, "loss_mask": 0.31103, "loss": 1.04987, "time": 0.21098} -{"mode": "train", "epoch": 1, "iter": 4050, "lr": 0.0002, "memory": 5940, "data_time": 0.03737, "loss_rpn_cls": 0.05086, "loss_rpn_bbox": 0.06726, "loss_cls": 0.30457, "acc": 90.78564, "loss_bbox": 0.30768, "loss_mask": 0.31345, "loss": 1.04383, "time": 0.20815} -{"mode": "train", "epoch": 1, "iter": 4100, "lr": 0.0002, "memory": 5940, "data_time": 0.0356, "loss_rpn_cls": 0.05622, "loss_rpn_bbox": 0.06865, "loss_cls": 0.30043, "acc": 90.88892, "loss_bbox": 0.3017, "loss_mask": 0.31185, "loss": 1.03885, "time": 0.20833} -{"mode": "train", "epoch": 1, "iter": 4150, "lr": 0.0002, "memory": 5940, "data_time": 0.04353, "loss_rpn_cls": 
0.05437, "loss_rpn_bbox": 0.06836, "loss_cls": 0.30985, "acc": 90.41797, "loss_bbox": 0.31907, "loss_mask": 0.31704, "loss": 1.06869, "time": 0.21385} -{"mode": "train", "epoch": 1, "iter": 4200, "lr": 0.0002, "memory": 5940, "data_time": 0.03298, "loss_rpn_cls": 0.05482, "loss_rpn_bbox": 0.06639, "loss_cls": 0.30944, "acc": 90.70532, "loss_bbox": 0.31141, "loss_mask": 0.30415, "loss": 1.04621, "time": 0.19735} -{"mode": "train", "epoch": 1, "iter": 4250, "lr": 0.0002, "memory": 5940, "data_time": 0.03451, "loss_rpn_cls": 0.05311, "loss_rpn_bbox": 0.06796, "loss_cls": 0.30875, "acc": 90.66382, "loss_bbox": 0.31063, "loss_mask": 0.30518, "loss": 1.04563, "time": 0.20871} -{"mode": "train", "epoch": 1, "iter": 4300, "lr": 0.0002, "memory": 5940, "data_time": 0.03556, "loss_rpn_cls": 0.05313, "loss_rpn_bbox": 0.06651, "loss_cls": 0.30525, "acc": 90.64307, "loss_bbox": 0.31909, "loss_mask": 0.30894, "loss": 1.05292, "time": 0.20731} -{"mode": "train", "epoch": 1, "iter": 4350, "lr": 0.0002, "memory": 5940, "data_time": 0.03741, "loss_rpn_cls": 0.05074, "loss_rpn_bbox": 0.06314, "loss_cls": 0.29312, "acc": 91.06006, "loss_bbox": 0.3024, "loss_mask": 0.30595, "loss": 1.01536, "time": 0.2044} -{"mode": "train", "epoch": 1, "iter": 4400, "lr": 0.0002, "memory": 5940, "data_time": 0.0374, "loss_rpn_cls": 0.05475, "loss_rpn_bbox": 0.06771, "loss_cls": 0.30363, "acc": 90.91162, "loss_bbox": 0.30604, "loss_mask": 0.31773, "loss": 1.04986, "time": 0.20727} -{"mode": "train", "epoch": 1, "iter": 4450, "lr": 0.0002, "memory": 5940, "data_time": 0.04051, "loss_rpn_cls": 0.05358, "loss_rpn_bbox": 0.06415, "loss_cls": 0.30433, "acc": 90.74268, "loss_bbox": 0.30912, "loss_mask": 0.30328, "loss": 1.03447, "time": 0.21326} -{"mode": "train", "epoch": 1, "iter": 4500, "lr": 0.0002, "memory": 5940, "data_time": 0.03927, "loss_rpn_cls": 0.05028, "loss_rpn_bbox": 0.06564, "loss_cls": 0.29874, "acc": 90.73169, "loss_bbox": 0.31326, "loss_mask": 0.2995, "loss": 1.02742, "time": 0.20442} -{"mode": "train", "epoch": 1, "iter": 4550, "lr": 0.0002, "memory": 5940, "data_time": 0.03177, "loss_rpn_cls": 0.05188, "loss_rpn_bbox": 0.06389, "loss_cls": 0.29137, "acc": 91.12988, "loss_bbox": 0.29294, "loss_mask": 0.30717, "loss": 1.00725, "time": 0.21908} -{"mode": "train", "epoch": 1, "iter": 4600, "lr": 0.0002, "memory": 5940, "data_time": 0.02946, "loss_rpn_cls": 0.04886, "loss_rpn_bbox": 0.06387, "loss_cls": 0.29406, "acc": 91.11768, "loss_bbox": 0.29447, "loss_mask": 0.30319, "loss": 1.00445, "time": 0.24821} -{"mode": "train", "epoch": 1, "iter": 4650, "lr": 0.0002, "memory": 5940, "data_time": 0.03098, "loss_rpn_cls": 0.05595, "loss_rpn_bbox": 0.068, "loss_cls": 0.30838, "acc": 90.65234, "loss_bbox": 0.31501, "loss_mask": 0.31342, "loss": 1.06076, "time": 0.24334} -{"mode": "train", "epoch": 1, "iter": 4700, "lr": 0.0002, "memory": 5940, "data_time": 0.03351, "loss_rpn_cls": 0.05118, "loss_rpn_bbox": 0.06234, "loss_cls": 0.3068, "acc": 90.76489, "loss_bbox": 0.30645, "loss_mask": 0.30433, "loss": 1.0311, "time": 0.20663} -{"mode": "train", "epoch": 1, "iter": 4750, "lr": 0.0002, "memory": 5940, "data_time": 0.03374, "loss_rpn_cls": 0.05748, "loss_rpn_bbox": 0.06969, "loss_cls": 0.31054, "acc": 90.6084, "loss_bbox": 0.31233, "loss_mask": 0.30722, "loss": 1.05726, "time": 0.21333} -{"mode": "train", "epoch": 1, "iter": 4800, "lr": 0.0002, "memory": 5940, "data_time": 0.03514, "loss_rpn_cls": 0.05345, "loss_rpn_bbox": 0.06828, "loss_cls": 0.29361, "acc": 91.1106, "loss_bbox": 0.29816, "loss_mask": 0.30635, "loss": 
1.01985, "time": 0.20356} -{"mode": "train", "epoch": 1, "iter": 4850, "lr": 0.0002, "memory": 5940, "data_time": 0.03825, "loss_rpn_cls": 0.05362, "loss_rpn_bbox": 0.06575, "loss_cls": 0.29968, "acc": 90.78101, "loss_bbox": 0.30621, "loss_mask": 0.30267, "loss": 1.02795, "time": 0.21574} -{"mode": "train", "epoch": 1, "iter": 4900, "lr": 0.0002, "memory": 5940, "data_time": 0.03961, "loss_rpn_cls": 0.05734, "loss_rpn_bbox": 0.07117, "loss_cls": 0.29631, "acc": 90.91919, "loss_bbox": 0.30203, "loss_mask": 0.31023, "loss": 1.03708, "time": 0.20642} -{"mode": "train", "epoch": 1, "iter": 4950, "lr": 0.0002, "memory": 5940, "data_time": 0.03296, "loss_rpn_cls": 0.0545, "loss_rpn_bbox": 0.06964, "loss_cls": 0.29492, "acc": 90.86255, "loss_bbox": 0.30549, "loss_mask": 0.30436, "loss": 1.02891, "time": 0.2145} -{"mode": "train", "epoch": 1, "iter": 5000, "lr": 0.0002, "memory": 5940, "data_time": 0.03167, "loss_rpn_cls": 0.05021, "loss_rpn_bbox": 0.06137, "loss_cls": 0.29241, "acc": 91.22803, "loss_bbox": 0.29173, "loss_mask": 0.29832, "loss": 0.99404, "time": 0.2051} -{"mode": "train", "epoch": 1, "iter": 5050, "lr": 0.0002, "memory": 5940, "data_time": 0.04226, "loss_rpn_cls": 0.05094, "loss_rpn_bbox": 0.06607, "loss_cls": 0.30875, "acc": 90.55786, "loss_bbox": 0.31355, "loss_mask": 0.30163, "loss": 1.04093, "time": 0.20884} -{"mode": "train", "epoch": 1, "iter": 5100, "lr": 0.0002, "memory": 5940, "data_time": 0.0331, "loss_rpn_cls": 0.05041, "loss_rpn_bbox": 0.06199, "loss_cls": 0.31663, "acc": 90.72241, "loss_bbox": 0.30421, "loss_mask": 0.30745, "loss": 1.04069, "time": 0.19198} -{"mode": "train", "epoch": 1, "iter": 5150, "lr": 0.0002, "memory": 5940, "data_time": 0.03447, "loss_rpn_cls": 0.05253, "loss_rpn_bbox": 0.06646, "loss_cls": 0.29724, "acc": 90.78271, "loss_bbox": 0.31134, "loss_mask": 0.30885, "loss": 1.03642, "time": 0.20821} -{"mode": "train", "epoch": 1, "iter": 5200, "lr": 0.0002, "memory": 5940, "data_time": 0.04564, "loss_rpn_cls": 0.05606, "loss_rpn_bbox": 0.06812, "loss_cls": 0.30315, "acc": 90.7229, "loss_bbox": 0.30993, "loss_mask": 0.30602, "loss": 1.04328, "time": 0.20666} -{"mode": "train", "epoch": 1, "iter": 5250, "lr": 0.0002, "memory": 5940, "data_time": 0.03611, "loss_rpn_cls": 0.05409, "loss_rpn_bbox": 0.06758, "loss_cls": 0.30944, "acc": 90.56689, "loss_bbox": 0.31358, "loss_mask": 0.31342, "loss": 1.05811, "time": 0.20976} -{"mode": "train", "epoch": 1, "iter": 5300, "lr": 0.0002, "memory": 5940, "data_time": 0.04339, "loss_rpn_cls": 0.05178, "loss_rpn_bbox": 0.06613, "loss_cls": 0.30166, "acc": 90.66919, "loss_bbox": 0.31051, "loss_mask": 0.31237, "loss": 1.04246, "time": 0.21362} -{"mode": "train", "epoch": 1, "iter": 5350, "lr": 0.0002, "memory": 5940, "data_time": 0.04081, "loss_rpn_cls": 0.05017, "loss_rpn_bbox": 0.06581, "loss_cls": 0.30639, "acc": 90.63794, "loss_bbox": 0.31479, "loss_mask": 0.30637, "loss": 1.04354, "time": 0.20534} -{"mode": "train", "epoch": 1, "iter": 5400, "lr": 0.0002, "memory": 5940, "data_time": 0.02831, "loss_rpn_cls": 0.05252, "loss_rpn_bbox": 0.06385, "loss_cls": 0.29438, "acc": 91.03613, "loss_bbox": 0.2994, "loss_mask": 0.30183, "loss": 1.01197, "time": 0.20185} -{"mode": "train", "epoch": 1, "iter": 5450, "lr": 0.0002, "memory": 5940, "data_time": 0.03034, "loss_rpn_cls": 0.05178, "loss_rpn_bbox": 0.06958, "loss_cls": 0.29316, "acc": 91.08276, "loss_bbox": 0.30299, "loss_mask": 0.29832, "loss": 1.01583, "time": 0.21286} -{"mode": "train", "epoch": 1, "iter": 5500, "lr": 0.0002, "memory": 5940, "data_time": 0.03337, 
"loss_rpn_cls": 0.05074, "loss_rpn_bbox": 0.06731, "loss_cls": 0.28623, "acc": 91.16455, "loss_bbox": 0.30252, "loss_mask": 0.30056, "loss": 1.00736, "time": 0.19938} -{"mode": "train", "epoch": 1, "iter": 5550, "lr": 0.0002, "memory": 5940, "data_time": 0.03577, "loss_rpn_cls": 0.04933, "loss_rpn_bbox": 0.06677, "loss_cls": 0.29435, "acc": 90.87354, "loss_bbox": 0.31077, "loss_mask": 0.3025, "loss": 1.02373, "time": 0.2069} -{"mode": "train", "epoch": 1, "iter": 5600, "lr": 0.0002, "memory": 5940, "data_time": 0.03317, "loss_rpn_cls": 0.05238, "loss_rpn_bbox": 0.0636, "loss_cls": 0.29108, "acc": 90.98315, "loss_bbox": 0.29887, "loss_mask": 0.30114, "loss": 1.00707, "time": 0.20105} -{"mode": "train", "epoch": 1, "iter": 5650, "lr": 0.0002, "memory": 5940, "data_time": 0.0354, "loss_rpn_cls": 0.05115, "loss_rpn_bbox": 0.07017, "loss_cls": 0.3139, "acc": 90.47949, "loss_bbox": 0.31073, "loss_mask": 0.30317, "loss": 1.04911, "time": 0.21366} -{"mode": "train", "epoch": 1, "iter": 5700, "lr": 0.0002, "memory": 5940, "data_time": 0.03345, "loss_rpn_cls": 0.0488, "loss_rpn_bbox": 0.06475, "loss_cls": 0.29919, "acc": 90.65674, "loss_bbox": 0.30991, "loss_mask": 0.29454, "loss": 1.0172, "time": 0.20499} -{"mode": "train", "epoch": 1, "iter": 5750, "lr": 0.0002, "memory": 5940, "data_time": 0.03706, "loss_rpn_cls": 0.05184, "loss_rpn_bbox": 0.06714, "loss_cls": 0.30298, "acc": 90.86475, "loss_bbox": 0.30769, "loss_mask": 0.30804, "loss": 1.03769, "time": 0.20907} -{"mode": "train", "epoch": 1, "iter": 5800, "lr": 0.0002, "memory": 5940, "data_time": 0.03848, "loss_rpn_cls": 0.05157, "loss_rpn_bbox": 0.06974, "loss_cls": 0.31384, "acc": 90.39355, "loss_bbox": 0.31666, "loss_mask": 0.30204, "loss": 1.05386, "time": 0.21333} -{"mode": "train", "epoch": 1, "iter": 5850, "lr": 0.0002, "memory": 5940, "data_time": 0.03611, "loss_rpn_cls": 0.05041, "loss_rpn_bbox": 0.06264, "loss_cls": 0.28266, "acc": 91.11279, "loss_bbox": 0.2957, "loss_mask": 0.29231, "loss": 0.98372, "time": 0.20864} -{"mode": "train", "epoch": 1, "iter": 5900, "lr": 0.0002, "memory": 5940, "data_time": 0.04428, "loss_rpn_cls": 0.0519, "loss_rpn_bbox": 0.06998, "loss_cls": 0.31373, "acc": 90.44507, "loss_bbox": 0.31789, "loss_mask": 0.30366, "loss": 1.05717, "time": 0.20901} -{"mode": "train", "epoch": 1, "iter": 5950, "lr": 0.0002, "memory": 5940, "data_time": 0.03574, "loss_rpn_cls": 0.04772, "loss_rpn_bbox": 0.06189, "loss_cls": 0.28734, "acc": 91.07495, "loss_bbox": 0.3005, "loss_mask": 0.29855, "loss": 0.99601, "time": 0.20396} -{"mode": "train", "epoch": 1, "iter": 6000, "lr": 0.0002, "memory": 5940, "data_time": 0.03477, "loss_rpn_cls": 0.05016, "loss_rpn_bbox": 0.06409, "loss_cls": 0.29599, "acc": 91.073, "loss_bbox": 0.2996, "loss_mask": 0.29722, "loss": 1.00706, "time": 0.20665} -{"mode": "train", "epoch": 1, "iter": 6050, "lr": 0.0002, "memory": 5940, "data_time": 0.03383, "loss_rpn_cls": 0.05222, "loss_rpn_bbox": 0.06528, "loss_cls": 0.29124, "acc": 91.04272, "loss_bbox": 0.29781, "loss_mask": 0.29797, "loss": 1.00453, "time": 0.2157} -{"mode": "train", "epoch": 1, "iter": 6100, "lr": 0.0002, "memory": 5940, "data_time": 0.0389, "loss_rpn_cls": 0.05183, "loss_rpn_bbox": 0.06505, "loss_cls": 0.28369, "acc": 91.36084, "loss_bbox": 0.28747, "loss_mask": 0.28934, "loss": 0.97738, "time": 0.20942} -{"mode": "train", "epoch": 1, "iter": 6150, "lr": 0.0002, "memory": 5940, "data_time": 0.0388, "loss_rpn_cls": 0.04835, "loss_rpn_bbox": 0.06087, "loss_cls": 0.28353, "acc": 91.30908, "loss_bbox": 0.29338, "loss_mask": 0.29802, 
"loss": 0.98415, "time": 0.20248} -{"mode": "train", "epoch": 1, "iter": 6200, "lr": 0.0002, "memory": 5940, "data_time": 0.03869, "loss_rpn_cls": 0.05059, "loss_rpn_bbox": 0.07042, "loss_cls": 0.29934, "acc": 90.89185, "loss_bbox": 0.30597, "loss_mask": 0.30466, "loss": 1.03098, "time": 0.20665} -{"mode": "train", "epoch": 1, "iter": 6250, "lr": 0.0002, "memory": 5940, "data_time": 0.04248, "loss_rpn_cls": 0.04662, "loss_rpn_bbox": 0.06414, "loss_cls": 0.29185, "acc": 90.96387, "loss_bbox": 0.29876, "loss_mask": 0.2944, "loss": 0.99576, "time": 0.2096} -{"mode": "train", "epoch": 1, "iter": 6300, "lr": 0.0002, "memory": 5940, "data_time": 0.04061, "loss_rpn_cls": 0.0498, "loss_rpn_bbox": 0.06394, "loss_cls": 0.29066, "acc": 91.10425, "loss_bbox": 0.29291, "loss_mask": 0.29792, "loss": 0.99522, "time": 0.20644} -{"mode": "train", "epoch": 1, "iter": 6350, "lr": 0.0002, "memory": 5940, "data_time": 0.03457, "loss_rpn_cls": 0.04991, "loss_rpn_bbox": 0.06521, "loss_cls": 0.29147, "acc": 91.13647, "loss_bbox": 0.28967, "loss_mask": 0.29612, "loss": 0.99238, "time": 0.19782} -{"mode": "train", "epoch": 1, "iter": 6400, "lr": 0.0002, "memory": 5940, "data_time": 0.02961, "loss_rpn_cls": 0.04817, "loss_rpn_bbox": 0.06166, "loss_cls": 0.27764, "acc": 91.47119, "loss_bbox": 0.28508, "loss_mask": 0.2999, "loss": 0.97245, "time": 0.19269} -{"mode": "train", "epoch": 1, "iter": 6450, "lr": 0.0002, "memory": 5940, "data_time": 0.03619, "loss_rpn_cls": 0.04933, "loss_rpn_bbox": 0.06055, "loss_cls": 0.28193, "acc": 91.28931, "loss_bbox": 0.29278, "loss_mask": 0.29745, "loss": 0.98204, "time": 0.21185} -{"mode": "train", "epoch": 1, "iter": 6500, "lr": 0.0002, "memory": 5940, "data_time": 0.04237, "loss_rpn_cls": 0.046, "loss_rpn_bbox": 0.06421, "loss_cls": 0.29624, "acc": 90.84985, "loss_bbox": 0.3043, "loss_mask": 0.29598, "loss": 1.00673, "time": 0.21575} -{"mode": "train", "epoch": 1, "iter": 6550, "lr": 0.0002, "memory": 5940, "data_time": 0.03115, "loss_rpn_cls": 0.04577, "loss_rpn_bbox": 0.06339, "loss_cls": 0.28627, "acc": 91.39502, "loss_bbox": 0.29002, "loss_mask": 0.30414, "loss": 0.98959, "time": 0.1993} -{"mode": "train", "epoch": 1, "iter": 6600, "lr": 0.0002, "memory": 5940, "data_time": 0.03741, "loss_rpn_cls": 0.04692, "loss_rpn_bbox": 0.06365, "loss_cls": 0.29537, "acc": 90.92944, "loss_bbox": 0.30317, "loss_mask": 0.29212, "loss": 1.00123, "time": 0.20865} -{"mode": "train", "epoch": 1, "iter": 6650, "lr": 0.0002, "memory": 5940, "data_time": 0.03571, "loss_rpn_cls": 0.04629, "loss_rpn_bbox": 0.06433, "loss_cls": 0.28436, "acc": 91.14331, "loss_bbox": 0.29862, "loss_mask": 0.29812, "loss": 0.99171, "time": 0.20425} -{"mode": "train", "epoch": 1, "iter": 6700, "lr": 0.0002, "memory": 5940, "data_time": 0.02637, "loss_rpn_cls": 0.04807, "loss_rpn_bbox": 0.06631, "loss_cls": 0.28733, "acc": 91.07666, "loss_bbox": 0.29719, "loss_mask": 0.29349, "loss": 0.9924, "time": 0.20496} -{"mode": "train", "epoch": 1, "iter": 6750, "lr": 0.0002, "memory": 5940, "data_time": 0.03674, "loss_rpn_cls": 0.05015, "loss_rpn_bbox": 0.06421, "loss_cls": 0.30094, "acc": 90.68042, "loss_bbox": 0.3084, "loss_mask": 0.2964, "loss": 1.0201, "time": 0.21743} -{"mode": "train", "epoch": 1, "iter": 6800, "lr": 0.0002, "memory": 5940, "data_time": 0.04006, "loss_rpn_cls": 0.05109, "loss_rpn_bbox": 0.06875, "loss_cls": 0.3074, "acc": 90.79053, "loss_bbox": 0.30795, "loss_mask": 0.30028, "loss": 1.03546, "time": 0.21603} -{"mode": "train", "epoch": 1, "iter": 6850, "lr": 0.0002, "memory": 5940, "data_time": 0.03566, 
"loss_rpn_cls": 0.05453, "loss_rpn_bbox": 0.06504, "loss_cls": 0.29611, "acc": 90.6687, "loss_bbox": 0.30922, "loss_mask": 0.29267, "loss": 1.01756, "time": 0.21704} -{"mode": "train", "epoch": 1, "iter": 6900, "lr": 0.0002, "memory": 5940, "data_time": 0.03441, "loss_rpn_cls": 0.04776, "loss_rpn_bbox": 0.06601, "loss_cls": 0.27789, "acc": 91.43896, "loss_bbox": 0.28342, "loss_mask": 0.29116, "loss": 0.96624, "time": 0.20089} -{"mode": "train", "epoch": 1, "iter": 6950, "lr": 0.0002, "memory": 5940, "data_time": 0.03457, "loss_rpn_cls": 0.0455, "loss_rpn_bbox": 0.06133, "loss_cls": 0.28948, "acc": 91.06323, "loss_bbox": 0.29904, "loss_mask": 0.29395, "loss": 0.98931, "time": 0.20313} -{"mode": "train", "epoch": 1, "iter": 7000, "lr": 0.0002, "memory": 5940, "data_time": 0.02898, "loss_rpn_cls": 0.04916, "loss_rpn_bbox": 0.06857, "loss_cls": 0.2889, "acc": 91.12109, "loss_bbox": 0.29682, "loss_mask": 0.29522, "loss": 0.99867, "time": 0.20137} -{"mode": "train", "epoch": 1, "iter": 7050, "lr": 0.0002, "memory": 5940, "data_time": 0.04271, "loss_rpn_cls": 0.04941, "loss_rpn_bbox": 0.06574, "loss_cls": 0.29749, "acc": 90.85791, "loss_bbox": 0.30085, "loss_mask": 0.29327, "loss": 1.00676, "time": 0.21002} -{"mode": "train", "epoch": 1, "iter": 7100, "lr": 0.0002, "memory": 5940, "data_time": 0.03582, "loss_rpn_cls": 0.04611, "loss_rpn_bbox": 0.06095, "loss_cls": 0.29249, "acc": 91.03027, "loss_bbox": 0.30109, "loss_mask": 0.28795, "loss": 0.98859, "time": 0.20638} -{"mode": "train", "epoch": 1, "iter": 7150, "lr": 0.0002, "memory": 5940, "data_time": 0.03366, "loss_rpn_cls": 0.04797, "loss_rpn_bbox": 0.0655, "loss_cls": 0.28943, "acc": 90.87524, "loss_bbox": 0.30759, "loss_mask": 0.30247, "loss": 1.01297, "time": 0.20671} -{"mode": "train", "epoch": 1, "iter": 7200, "lr": 0.0002, "memory": 5940, "data_time": 0.03386, "loss_rpn_cls": 0.04767, "loss_rpn_bbox": 0.06297, "loss_cls": 0.28824, "acc": 91.17627, "loss_bbox": 0.29289, "loss_mask": 0.28596, "loss": 0.97773, "time": 0.2059} -{"mode": "train", "epoch": 1, "iter": 7250, "lr": 0.0002, "memory": 5940, "data_time": 0.03704, "loss_rpn_cls": 0.04982, "loss_rpn_bbox": 0.06145, "loss_cls": 0.2795, "acc": 91.36011, "loss_bbox": 0.28385, "loss_mask": 0.29294, "loss": 0.96756, "time": 0.20877} -{"mode": "train", "epoch": 1, "iter": 7300, "lr": 0.0002, "memory": 5940, "data_time": 0.03079, "loss_rpn_cls": 0.04789, "loss_rpn_bbox": 0.06289, "loss_cls": 0.29548, "acc": 90.83057, "loss_bbox": 0.29975, "loss_mask": 0.28896, "loss": 0.99497, "time": 0.2082} -{"mode": "val", "epoch": 1, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.2231, "bbox_mAP_50": 0.4322, "bbox_mAP_75": 0.209, "bbox_mAP_s": 0.1231, "bbox_mAP_m": 0.2555, "bbox_mAP_l": 0.2923, "bbox_mAP_copypaste": "0.2231 0.4322 0.2090 0.1231 0.2555 0.2923", "segm_mAP": 0.2353, "segm_mAP_50": 0.4123, "segm_mAP_75": 0.2414, "segm_mAP_s": 0.0959, "segm_mAP_m": 0.2577, "segm_mAP_l": 0.3535, "segm_mAP_copypaste": "0.2353 0.4123 0.2414 0.0959 0.2577 0.3535"} -{"mode": "train", "epoch": 2, "iter": 50, "lr": 0.0002, "memory": 5940, "data_time": 0.10237, "loss_rpn_cls": 0.04693, "loss_rpn_bbox": 0.06372, "loss_cls": 0.29305, "acc": 90.7627, "loss_bbox": 0.30411, "loss_mask": 0.29088, "loss": 0.9987, "time": 0.29555} -{"mode": "train", "epoch": 2, "iter": 100, "lr": 0.0002, "memory": 5940, "data_time": 0.03611, "loss_rpn_cls": 0.04457, "loss_rpn_bbox": 0.06475, "loss_cls": 0.28528, "acc": 91.00537, "loss_bbox": 0.29956, "loss_mask": 0.28646, "loss": 0.98063, "time": 0.21656} -{"mode": "train", "epoch": 2, "iter": 
150, "lr": 0.0002, "memory": 5940, "data_time": 0.04147, "loss_rpn_cls": 0.04477, "loss_rpn_bbox": 0.06375, "loss_cls": 0.28686, "acc": 90.89087, "loss_bbox": 0.30958, "loss_mask": 0.29286, "loss": 0.99782, "time": 0.22457} -{"mode": "train", "epoch": 2, "iter": 200, "lr": 0.0002, "memory": 5940, "data_time": 0.03952, "loss_rpn_cls": 0.0423, "loss_rpn_bbox": 0.06496, "loss_cls": 0.29083, "acc": 90.85425, "loss_bbox": 0.30394, "loss_mask": 0.29012, "loss": 0.99215, "time": 0.22093} -{"mode": "train", "epoch": 2, "iter": 250, "lr": 0.0002, "memory": 5940, "data_time": 0.04067, "loss_rpn_cls": 0.04393, "loss_rpn_bbox": 0.06613, "loss_cls": 0.28161, "acc": 91.18262, "loss_bbox": 0.29082, "loss_mask": 0.29351, "loss": 0.976, "time": 0.21284} -{"mode": "train", "epoch": 2, "iter": 300, "lr": 0.0002, "memory": 5940, "data_time": 0.04064, "loss_rpn_cls": 0.04741, "loss_rpn_bbox": 0.06431, "loss_cls": 0.28255, "acc": 91.20532, "loss_bbox": 0.29084, "loss_mask": 0.2907, "loss": 0.97581, "time": 0.21818} -{"mode": "train", "epoch": 2, "iter": 350, "lr": 0.0002, "memory": 5940, "data_time": 0.0384, "loss_rpn_cls": 0.04142, "loss_rpn_bbox": 0.0609, "loss_cls": 0.27257, "acc": 91.28809, "loss_bbox": 0.29507, "loss_mask": 0.28839, "loss": 0.95835, "time": 0.21141} -{"mode": "train", "epoch": 2, "iter": 400, "lr": 0.0002, "memory": 5940, "data_time": 0.04234, "loss_rpn_cls": 0.04373, "loss_rpn_bbox": 0.0605, "loss_cls": 0.27599, "acc": 91.48657, "loss_bbox": 0.2882, "loss_mask": 0.29161, "loss": 0.96003, "time": 0.20249} -{"mode": "train", "epoch": 2, "iter": 450, "lr": 0.0002, "memory": 5940, "data_time": 0.03688, "loss_rpn_cls": 0.04634, "loss_rpn_bbox": 0.06301, "loss_cls": 0.28553, "acc": 91.01074, "loss_bbox": 0.30178, "loss_mask": 0.29443, "loss": 0.9911, "time": 0.21665} -{"mode": "train", "epoch": 2, "iter": 500, "lr": 0.0002, "memory": 5940, "data_time": 0.04061, "loss_rpn_cls": 0.04685, "loss_rpn_bbox": 0.06121, "loss_cls": 0.28119, "acc": 91.271, "loss_bbox": 0.2868, "loss_mask": 0.29277, "loss": 0.96881, "time": 0.21201} -{"mode": "train", "epoch": 2, "iter": 550, "lr": 0.0002, "memory": 5940, "data_time": 0.04001, "loss_rpn_cls": 0.04566, "loss_rpn_bbox": 0.06292, "loss_cls": 0.2741, "acc": 91.26099, "loss_bbox": 0.28901, "loss_mask": 0.288, "loss": 0.9597, "time": 0.21346} -{"mode": "train", "epoch": 2, "iter": 600, "lr": 0.0002, "memory": 5940, "data_time": 0.04784, "loss_rpn_cls": 0.04777, "loss_rpn_bbox": 0.06743, "loss_cls": 0.28688, "acc": 90.89941, "loss_bbox": 0.29903, "loss_mask": 0.28507, "loss": 0.98617, "time": 0.2123} -{"mode": "train", "epoch": 2, "iter": 650, "lr": 0.0002, "memory": 5940, "data_time": 0.03967, "loss_rpn_cls": 0.04512, "loss_rpn_bbox": 0.06123, "loss_cls": 0.2808, "acc": 91.0686, "loss_bbox": 0.29574, "loss_mask": 0.29188, "loss": 0.97477, "time": 0.21879} -{"mode": "train", "epoch": 2, "iter": 700, "lr": 0.0002, "memory": 5940, "data_time": 0.03191, "loss_rpn_cls": 0.04284, "loss_rpn_bbox": 0.06007, "loss_cls": 0.27568, "acc": 91.42578, "loss_bbox": 0.29001, "loss_mask": 0.28765, "loss": 0.95626, "time": 0.19693} -{"mode": "train", "epoch": 2, "iter": 750, "lr": 0.0002, "memory": 5940, "data_time": 0.03955, "loss_rpn_cls": 0.04073, "loss_rpn_bbox": 0.05941, "loss_cls": 0.27194, "acc": 91.36768, "loss_bbox": 0.29167, "loss_mask": 0.28593, "loss": 0.94967, "time": 0.21076} -{"mode": "train", "epoch": 2, "iter": 800, "lr": 0.0002, "memory": 5940, "data_time": 0.03818, "loss_rpn_cls": 0.04675, "loss_rpn_bbox": 0.0613, "loss_cls": 0.28664, "acc": 90.96484, 
"loss_bbox": 0.29957, "loss_mask": 0.28224, "loss": 0.9765, "time": 0.21756} -{"mode": "train", "epoch": 2, "iter": 850, "lr": 0.0002, "memory": 5940, "data_time": 0.04508, "loss_rpn_cls": 0.04604, "loss_rpn_bbox": 0.06286, "loss_cls": 0.2841, "acc": 91.00562, "loss_bbox": 0.29566, "loss_mask": 0.28709, "loss": 0.97575, "time": 0.2229} -{"mode": "train", "epoch": 2, "iter": 900, "lr": 0.0002, "memory": 5940, "data_time": 0.04138, "loss_rpn_cls": 0.04071, "loss_rpn_bbox": 0.06097, "loss_cls": 0.27438, "acc": 91.37402, "loss_bbox": 0.29337, "loss_mask": 0.28554, "loss": 0.95497, "time": 0.20426} -{"mode": "train", "epoch": 2, "iter": 950, "lr": 0.0002, "memory": 5940, "data_time": 0.03205, "loss_rpn_cls": 0.04458, "loss_rpn_bbox": 0.06077, "loss_cls": 0.28105, "acc": 91.23413, "loss_bbox": 0.29461, "loss_mask": 0.28158, "loss": 0.96259, "time": 0.20274} -{"mode": "train", "epoch": 2, "iter": 1000, "lr": 0.0002, "memory": 5940, "data_time": 0.03794, "loss_rpn_cls": 0.04196, "loss_rpn_bbox": 0.05972, "loss_cls": 0.27563, "acc": 91.3457, "loss_bbox": 0.28953, "loss_mask": 0.28774, "loss": 0.95457, "time": 0.21169} -{"mode": "train", "epoch": 2, "iter": 1050, "lr": 0.0002, "memory": 5940, "data_time": 0.03881, "loss_rpn_cls": 0.04646, "loss_rpn_bbox": 0.0618, "loss_cls": 0.2845, "acc": 90.89893, "loss_bbox": 0.30793, "loss_mask": 0.28819, "loss": 0.98888, "time": 0.2165} -{"mode": "train", "epoch": 2, "iter": 1100, "lr": 0.0002, "memory": 5940, "data_time": 0.03482, "loss_rpn_cls": 0.04128, "loss_rpn_bbox": 0.06185, "loss_cls": 0.2833, "acc": 91.16772, "loss_bbox": 0.2957, "loss_mask": 0.28777, "loss": 0.9699, "time": 0.2134} -{"mode": "train", "epoch": 2, "iter": 1150, "lr": 0.0002, "memory": 5940, "data_time": 0.04064, "loss_rpn_cls": 0.0426, "loss_rpn_bbox": 0.05964, "loss_cls": 0.27325, "acc": 91.48901, "loss_bbox": 0.28215, "loss_mask": 0.29305, "loss": 0.95068, "time": 0.21746} -{"mode": "train", "epoch": 2, "iter": 1200, "lr": 0.0002, "memory": 5940, "data_time": 0.03644, "loss_rpn_cls": 0.04658, "loss_rpn_bbox": 0.05997, "loss_cls": 0.27155, "acc": 91.29321, "loss_bbox": 0.29082, "loss_mask": 0.28463, "loss": 0.95355, "time": 0.21449} -{"mode": "train", "epoch": 2, "iter": 1250, "lr": 0.0002, "memory": 5940, "data_time": 0.04064, "loss_rpn_cls": 0.04815, "loss_rpn_bbox": 0.06639, "loss_cls": 0.29503, "acc": 90.79736, "loss_bbox": 0.30591, "loss_mask": 0.29408, "loss": 1.00956, "time": 0.21593} -{"mode": "train", "epoch": 2, "iter": 1300, "lr": 0.0002, "memory": 5940, "data_time": 0.03428, "loss_rpn_cls": 0.0459, "loss_rpn_bbox": 0.06025, "loss_cls": 0.28391, "acc": 91.18799, "loss_bbox": 0.2925, "loss_mask": 0.2844, "loss": 0.96695, "time": 0.21219} -{"mode": "train", "epoch": 2, "iter": 1350, "lr": 0.0002, "memory": 5940, "data_time": 0.04135, "loss_rpn_cls": 0.04656, "loss_rpn_bbox": 0.06369, "loss_cls": 0.27528, "acc": 91.43286, "loss_bbox": 0.28927, "loss_mask": 0.28702, "loss": 0.96182, "time": 0.20139} -{"mode": "train", "epoch": 2, "iter": 1400, "lr": 0.0002, "memory": 5940, "data_time": 0.03343, "loss_rpn_cls": 0.04329, "loss_rpn_bbox": 0.06198, "loss_cls": 0.27148, "acc": 91.51929, "loss_bbox": 0.28205, "loss_mask": 0.27953, "loss": 0.93832, "time": 0.20746} -{"mode": "train", "epoch": 2, "iter": 1450, "lr": 0.0002, "memory": 5940, "data_time": 0.0387, "loss_rpn_cls": 0.04932, "loss_rpn_bbox": 0.06515, "loss_cls": 0.28066, "acc": 91.07495, "loss_bbox": 0.29724, "loss_mask": 0.28191, "loss": 0.97429, "time": 0.20674} -{"mode": "train", "epoch": 2, "iter": 1500, "lr": 0.0002, 
"memory": 5940, "data_time": 0.0365, "loss_rpn_cls": 0.04628, "loss_rpn_bbox": 0.06735, "loss_cls": 0.28515, "acc": 90.78101, "loss_bbox": 0.30367, "loss_mask": 0.28241, "loss": 0.98485, "time": 0.21754} -{"mode": "train", "epoch": 2, "iter": 1550, "lr": 0.0002, "memory": 5940, "data_time": 0.04, "loss_rpn_cls": 0.04414, "loss_rpn_bbox": 0.05916, "loss_cls": 0.26743, "acc": 91.55933, "loss_bbox": 0.28271, "loss_mask": 0.28615, "loss": 0.93959, "time": 0.21546} -{"mode": "train", "epoch": 2, "iter": 1600, "lr": 0.0002, "memory": 5940, "data_time": 0.04178, "loss_rpn_cls": 0.04233, "loss_rpn_bbox": 0.06289, "loss_cls": 0.27622, "acc": 91.10132, "loss_bbox": 0.29297, "loss_mask": 0.28489, "loss": 0.9593, "time": 0.21396} -{"mode": "train", "epoch": 2, "iter": 1650, "lr": 0.0002, "memory": 5940, "data_time": 0.04113, "loss_rpn_cls": 0.04075, "loss_rpn_bbox": 0.06096, "loss_cls": 0.269, "acc": 91.53003, "loss_bbox": 0.28819, "loss_mask": 0.27829, "loss": 0.9372, "time": 0.20902} -{"mode": "train", "epoch": 2, "iter": 1700, "lr": 0.0002, "memory": 5940, "data_time": 0.03657, "loss_rpn_cls": 0.04658, "loss_rpn_bbox": 0.06238, "loss_cls": 0.2756, "acc": 91.47583, "loss_bbox": 0.28513, "loss_mask": 0.28517, "loss": 0.95487, "time": 0.21993} -{"mode": "train", "epoch": 2, "iter": 1750, "lr": 0.0002, "memory": 5940, "data_time": 0.04123, "loss_rpn_cls": 0.04678, "loss_rpn_bbox": 0.06293, "loss_cls": 0.27976, "acc": 91.18652, "loss_bbox": 0.29692, "loss_mask": 0.28635, "loss": 0.97272, "time": 0.21013} -{"mode": "train", "epoch": 2, "iter": 1800, "lr": 0.0002, "memory": 5940, "data_time": 0.04061, "loss_rpn_cls": 0.04486, "loss_rpn_bbox": 0.0653, "loss_cls": 0.28662, "acc": 91.0481, "loss_bbox": 0.2933, "loss_mask": 0.29436, "loss": 0.98443, "time": 0.21461} -{"mode": "train", "epoch": 2, "iter": 1850, "lr": 0.0002, "memory": 5940, "data_time": 0.04293, "loss_rpn_cls": 0.04579, "loss_rpn_bbox": 0.06016, "loss_cls": 0.27717, "acc": 91.60083, "loss_bbox": 0.28003, "loss_mask": 0.2848, "loss": 0.94795, "time": 0.20487} -{"mode": "train", "epoch": 2, "iter": 1900, "lr": 0.0002, "memory": 5940, "data_time": 0.04158, "loss_rpn_cls": 0.04776, "loss_rpn_bbox": 0.0638, "loss_cls": 0.27175, "acc": 91.40942, "loss_bbox": 0.2806, "loss_mask": 0.28595, "loss": 0.94985, "time": 0.21897} -{"mode": "train", "epoch": 2, "iter": 1950, "lr": 0.0002, "memory": 5940, "data_time": 0.04652, "loss_rpn_cls": 0.04739, "loss_rpn_bbox": 0.06359, "loss_cls": 0.27467, "acc": 91.42847, "loss_bbox": 0.28193, "loss_mask": 0.28826, "loss": 0.95585, "time": 0.21903} -{"mode": "train", "epoch": 2, "iter": 2000, "lr": 0.0002, "memory": 5940, "data_time": 0.03575, "loss_rpn_cls": 0.04584, "loss_rpn_bbox": 0.06404, "loss_cls": 0.2806, "acc": 91.09717, "loss_bbox": 0.29725, "loss_mask": 0.28729, "loss": 0.97503, "time": 0.22519} -{"mode": "train", "epoch": 2, "iter": 2050, "lr": 0.0002, "memory": 5940, "data_time": 0.03396, "loss_rpn_cls": 0.04, "loss_rpn_bbox": 0.06143, "loss_cls": 0.27964, "acc": 91.2439, "loss_bbox": 0.28676, "loss_mask": 0.28401, "loss": 0.95184, "time": 0.20886} -{"mode": "train", "epoch": 2, "iter": 2100, "lr": 0.0002, "memory": 5940, "data_time": 0.03558, "loss_rpn_cls": 0.04358, "loss_rpn_bbox": 0.06385, "loss_cls": 0.27584, "acc": 91.36157, "loss_bbox": 0.2872, "loss_mask": 0.28558, "loss": 0.95606, "time": 0.21501} -{"mode": "train", "epoch": 2, "iter": 2150, "lr": 0.0002, "memory": 5940, "data_time": 0.03369, "loss_rpn_cls": 0.04282, "loss_rpn_bbox": 0.06467, "loss_cls": 0.27597, "acc": 91.24414, "loss_bbox": 
0.29255, "loss_mask": 0.28368, "loss": 0.95969, "time": 0.21112} -{"mode": "train", "epoch": 2, "iter": 2200, "lr": 0.0002, "memory": 5940, "data_time": 0.03457, "loss_rpn_cls": 0.04096, "loss_rpn_bbox": 0.0604, "loss_cls": 0.25911, "acc": 91.7644, "loss_bbox": 0.2799, "loss_mask": 0.28495, "loss": 0.92532, "time": 0.20925} -{"mode": "train", "epoch": 2, "iter": 2250, "lr": 0.0002, "memory": 5940, "data_time": 0.03458, "loss_rpn_cls": 0.04387, "loss_rpn_bbox": 0.06328, "loss_cls": 0.27375, "acc": 91.32153, "loss_bbox": 0.28801, "loss_mask": 0.28693, "loss": 0.95584, "time": 0.21694} -{"mode": "train", "epoch": 2, "iter": 2300, "lr": 0.0002, "memory": 5940, "data_time": 0.04092, "loss_rpn_cls": 0.04513, "loss_rpn_bbox": 0.0614, "loss_cls": 0.28722, "acc": 90.84375, "loss_bbox": 0.30417, "loss_mask": 0.28826, "loss": 0.98618, "time": 0.21562} -{"mode": "train", "epoch": 2, "iter": 2350, "lr": 0.0002, "memory": 5940, "data_time": 0.03877, "loss_rpn_cls": 0.04299, "loss_rpn_bbox": 0.06138, "loss_cls": 0.27501, "acc": 91.30322, "loss_bbox": 0.28615, "loss_mask": 0.28351, "loss": 0.94903, "time": 0.2106} -{"mode": "train", "epoch": 2, "iter": 2400, "lr": 0.0002, "memory": 5940, "data_time": 0.03293, "loss_rpn_cls": 0.04204, "loss_rpn_bbox": 0.05939, "loss_cls": 0.27067, "acc": 91.62964, "loss_bbox": 0.28, "loss_mask": 0.28306, "loss": 0.93517, "time": 0.20478} -{"mode": "train", "epoch": 2, "iter": 2450, "lr": 0.0002, "memory": 5940, "data_time": 0.03597, "loss_rpn_cls": 0.04142, "loss_rpn_bbox": 0.06091, "loss_cls": 0.27729, "acc": 91.34985, "loss_bbox": 0.28991, "loss_mask": 0.28466, "loss": 0.95419, "time": 0.21222} -{"mode": "train", "epoch": 2, "iter": 2500, "lr": 0.0002, "memory": 5940, "data_time": 0.03542, "loss_rpn_cls": 0.04236, "loss_rpn_bbox": 0.0602, "loss_cls": 0.27811, "acc": 91.35718, "loss_bbox": 0.28689, "loss_mask": 0.28662, "loss": 0.95419, "time": 0.20725} -{"mode": "train", "epoch": 2, "iter": 2550, "lr": 0.0002, "memory": 5940, "data_time": 0.0353, "loss_rpn_cls": 0.0444, "loss_rpn_bbox": 0.0595, "loss_cls": 0.26754, "acc": 91.77197, "loss_bbox": 0.28002, "loss_mask": 0.28114, "loss": 0.9326, "time": 0.21414} -{"mode": "train", "epoch": 2, "iter": 2600, "lr": 0.0002, "memory": 5940, "data_time": 0.04331, "loss_rpn_cls": 0.04772, "loss_rpn_bbox": 0.06311, "loss_cls": 0.28011, "acc": 91.25732, "loss_bbox": 0.2839, "loss_mask": 0.28378, "loss": 0.95862, "time": 0.21526} -{"mode": "train", "epoch": 2, "iter": 2650, "lr": 0.0002, "memory": 5940, "data_time": 0.03488, "loss_rpn_cls": 0.04664, "loss_rpn_bbox": 0.06386, "loss_cls": 0.27275, "acc": 91.36865, "loss_bbox": 0.28281, "loss_mask": 0.28326, "loss": 0.9493, "time": 0.21272} -{"mode": "train", "epoch": 2, "iter": 2700, "lr": 0.0002, "memory": 5940, "data_time": 0.04063, "loss_rpn_cls": 0.04405, "loss_rpn_bbox": 0.06287, "loss_cls": 0.27472, "acc": 91.22925, "loss_bbox": 0.29337, "loss_mask": 0.28352, "loss": 0.95852, "time": 0.21523} -{"mode": "train", "epoch": 2, "iter": 2750, "lr": 0.0002, "memory": 5940, "data_time": 0.03726, "loss_rpn_cls": 0.03968, "loss_rpn_bbox": 0.06197, "loss_cls": 0.27419, "acc": 91.23804, "loss_bbox": 0.28768, "loss_mask": 0.28592, "loss": 0.94945, "time": 0.20687} -{"mode": "train", "epoch": 2, "iter": 2800, "lr": 0.0002, "memory": 5940, "data_time": 0.04197, "loss_rpn_cls": 0.04465, "loss_rpn_bbox": 0.06054, "loss_cls": 0.28063, "acc": 91.17432, "loss_bbox": 0.2861, "loss_mask": 0.28274, "loss": 0.95465, "time": 0.20847} -{"mode": "train", "epoch": 2, "iter": 2850, "lr": 0.0002, "memory": 
5940, "data_time": 0.03867, "loss_rpn_cls": 0.04075, "loss_rpn_bbox": 0.06229, "loss_cls": 0.2766, "acc": 91.20215, "loss_bbox": 0.28753, "loss_mask": 0.28023, "loss": 0.9474, "time": 0.20563} -{"mode": "train", "epoch": 2, "iter": 2900, "lr": 0.0002, "memory": 5940, "data_time": 0.03664, "loss_rpn_cls": 0.04338, "loss_rpn_bbox": 0.06212, "loss_cls": 0.27095, "acc": 91.40161, "loss_bbox": 0.28497, "loss_mask": 0.28324, "loss": 0.94465, "time": 0.21239} -{"mode": "train", "epoch": 2, "iter": 2950, "lr": 0.0002, "memory": 5940, "data_time": 0.04129, "loss_rpn_cls": 0.04537, "loss_rpn_bbox": 0.06403, "loss_cls": 0.28673, "acc": 91.08862, "loss_bbox": 0.29254, "loss_mask": 0.28573, "loss": 0.97441, "time": 0.21436} -{"mode": "train", "epoch": 2, "iter": 3000, "lr": 0.0002, "memory": 5940, "data_time": 0.03577, "loss_rpn_cls": 0.04601, "loss_rpn_bbox": 0.06233, "loss_cls": 0.2737, "acc": 91.39258, "loss_bbox": 0.28768, "loss_mask": 0.28582, "loss": 0.95555, "time": 0.21365} -{"mode": "train", "epoch": 2, "iter": 3050, "lr": 0.0002, "memory": 5940, "data_time": 0.04091, "loss_rpn_cls": 0.04239, "loss_rpn_bbox": 0.06269, "loss_cls": 0.27703, "acc": 91.20947, "loss_bbox": 0.289, "loss_mask": 0.28109, "loss": 0.95218, "time": 0.22601} -{"mode": "train", "epoch": 2, "iter": 3100, "lr": 0.0002, "memory": 5940, "data_time": 0.03596, "loss_rpn_cls": 0.04365, "loss_rpn_bbox": 0.06108, "loss_cls": 0.27228, "acc": 91.44507, "loss_bbox": 0.28591, "loss_mask": 0.28129, "loss": 0.94421, "time": 0.21006} -{"mode": "train", "epoch": 2, "iter": 3150, "lr": 0.0002, "memory": 5940, "data_time": 0.0329, "loss_rpn_cls": 0.04201, "loss_rpn_bbox": 0.05932, "loss_cls": 0.26887, "acc": 91.53906, "loss_bbox": 0.28578, "loss_mask": 0.27763, "loss": 0.93361, "time": 0.21286} -{"mode": "train", "epoch": 2, "iter": 3200, "lr": 0.0002, "memory": 5940, "data_time": 0.03655, "loss_rpn_cls": 0.03941, "loss_rpn_bbox": 0.0591, "loss_cls": 0.26906, "acc": 91.61108, "loss_bbox": 0.27766, "loss_mask": 0.28166, "loss": 0.92688, "time": 0.20728} -{"mode": "train", "epoch": 2, "iter": 3250, "lr": 0.0002, "memory": 5940, "data_time": 0.03877, "loss_rpn_cls": 0.04448, "loss_rpn_bbox": 0.06089, "loss_cls": 0.27026, "acc": 91.3877, "loss_bbox": 0.28449, "loss_mask": 0.28282, "loss": 0.94293, "time": 0.20854} -{"mode": "train", "epoch": 2, "iter": 3300, "lr": 0.0002, "memory": 5940, "data_time": 0.03808, "loss_rpn_cls": 0.04453, "loss_rpn_bbox": 0.05902, "loss_cls": 0.26257, "acc": 91.71411, "loss_bbox": 0.27564, "loss_mask": 0.27487, "loss": 0.91662, "time": 0.20474} -{"mode": "train", "epoch": 2, "iter": 3350, "lr": 0.0002, "memory": 5940, "data_time": 0.0358, "loss_rpn_cls": 0.04211, "loss_rpn_bbox": 0.06217, "loss_cls": 0.28294, "acc": 91.31201, "loss_bbox": 0.28632, "loss_mask": 0.27877, "loss": 0.9523, "time": 0.20795} -{"mode": "train", "epoch": 2, "iter": 3400, "lr": 0.0002, "memory": 5940, "data_time": 0.03665, "loss_rpn_cls": 0.0439, "loss_rpn_bbox": 0.06165, "loss_cls": 0.27018, "acc": 91.625, "loss_bbox": 0.27613, "loss_mask": 0.2866, "loss": 0.93845, "time": 0.20464} -{"mode": "train", "epoch": 2, "iter": 3450, "lr": 0.0002, "memory": 5940, "data_time": 0.02853, "loss_rpn_cls": 0.0409, "loss_rpn_bbox": 0.05666, "loss_cls": 0.25916, "acc": 91.89526, "loss_bbox": 0.26768, "loss_mask": 0.27733, "loss": 0.90173, "time": 0.20201} -{"mode": "train", "epoch": 2, "iter": 3500, "lr": 0.0002, "memory": 5940, "data_time": 0.04097, "loss_rpn_cls": 0.0468, "loss_rpn_bbox": 0.06458, "loss_cls": 0.28377, "acc": 90.94751, "loss_bbox": 0.29916, 
"loss_mask": 0.28738, "loss": 0.98169, "time": 0.22315} -{"mode": "train", "epoch": 2, "iter": 3550, "lr": 0.0002, "memory": 5940, "data_time": 0.03341, "loss_rpn_cls": 0.04434, "loss_rpn_bbox": 0.06466, "loss_cls": 0.27976, "acc": 91.00293, "loss_bbox": 0.29396, "loss_mask": 0.28788, "loss": 0.9706, "time": 0.21645} -{"mode": "train", "epoch": 2, "iter": 3600, "lr": 0.0002, "memory": 5940, "data_time": 0.03452, "loss_rpn_cls": 0.03947, "loss_rpn_bbox": 0.05918, "loss_cls": 0.26359, "acc": 91.77734, "loss_bbox": 0.27241, "loss_mask": 0.27482, "loss": 0.90946, "time": 0.21366} -{"mode": "train", "epoch": 2, "iter": 3650, "lr": 0.0002, "memory": 5940, "data_time": 0.038, "loss_rpn_cls": 0.04734, "loss_rpn_bbox": 0.06343, "loss_cls": 0.26839, "acc": 91.46338, "loss_bbox": 0.28212, "loss_mask": 0.28512, "loss": 0.94641, "time": 0.21264} -{"mode": "train", "epoch": 2, "iter": 3700, "lr": 0.0002, "memory": 5940, "data_time": 0.04419, "loss_rpn_cls": 0.04208, "loss_rpn_bbox": 0.05985, "loss_cls": 0.27142, "acc": 91.38257, "loss_bbox": 0.28121, "loss_mask": 0.27453, "loss": 0.92909, "time": 0.20622} -{"mode": "train", "epoch": 2, "iter": 3750, "lr": 0.0002, "memory": 5940, "data_time": 0.03694, "loss_rpn_cls": 0.04212, "loss_rpn_bbox": 0.06173, "loss_cls": 0.2714, "acc": 91.40796, "loss_bbox": 0.28313, "loss_mask": 0.2761, "loss": 0.93448, "time": 0.21056} -{"mode": "train", "epoch": 2, "iter": 3800, "lr": 0.0002, "memory": 5940, "data_time": 0.03973, "loss_rpn_cls": 0.04409, "loss_rpn_bbox": 0.06534, "loss_cls": 0.28049, "acc": 91.16235, "loss_bbox": 0.28925, "loss_mask": 0.27987, "loss": 0.95905, "time": 0.21902} -{"mode": "train", "epoch": 2, "iter": 3850, "lr": 0.0002, "memory": 5940, "data_time": 0.03709, "loss_rpn_cls": 0.04119, "loss_rpn_bbox": 0.06163, "loss_cls": 0.27354, "acc": 91.32666, "loss_bbox": 0.28477, "loss_mask": 0.27317, "loss": 0.93429, "time": 0.20358} -{"mode": "train", "epoch": 2, "iter": 3900, "lr": 0.0002, "memory": 5940, "data_time": 0.0358, "loss_rpn_cls": 0.04316, "loss_rpn_bbox": 0.06224, "loss_cls": 0.28246, "acc": 91.09937, "loss_bbox": 0.2918, "loss_mask": 0.28071, "loss": 0.96038, "time": 0.21747} -{"mode": "train", "epoch": 2, "iter": 3950, "lr": 0.0002, "memory": 5940, "data_time": 0.04376, "loss_rpn_cls": 0.04675, "loss_rpn_bbox": 0.06353, "loss_cls": 0.28417, "acc": 90.94507, "loss_bbox": 0.30086, "loss_mask": 0.28356, "loss": 0.97887, "time": 0.22106} -{"mode": "train", "epoch": 2, "iter": 4000, "lr": 0.0002, "memory": 5941, "data_time": 0.03485, "loss_rpn_cls": 0.04081, "loss_rpn_bbox": 0.06101, "loss_cls": 0.2684, "acc": 91.50537, "loss_bbox": 0.28349, "loss_mask": 0.27333, "loss": 0.92704, "time": 0.20596} -{"mode": "train", "epoch": 2, "iter": 4050, "lr": 0.0002, "memory": 5941, "data_time": 0.03794, "loss_rpn_cls": 0.04203, "loss_rpn_bbox": 0.06015, "loss_cls": 0.26939, "acc": 91.35474, "loss_bbox": 0.28264, "loss_mask": 0.27651, "loss": 0.93072, "time": 0.21388} -{"mode": "train", "epoch": 2, "iter": 4100, "lr": 0.0002, "memory": 5941, "data_time": 0.0382, "loss_rpn_cls": 0.04514, "loss_rpn_bbox": 0.06044, "loss_cls": 0.2654, "acc": 91.4729, "loss_bbox": 0.27753, "loss_mask": 0.27869, "loss": 0.92719, "time": 0.21243} -{"mode": "train", "epoch": 2, "iter": 4150, "lr": 0.0002, "memory": 5941, "data_time": 0.03534, "loss_rpn_cls": 0.04135, "loss_rpn_bbox": 0.05897, "loss_cls": 0.26188, "acc": 91.90088, "loss_bbox": 0.26944, "loss_mask": 0.2762, "loss": 0.90784, "time": 0.20892} -{"mode": "train", "epoch": 2, "iter": 4200, "lr": 0.0002, "memory": 5941, 
"data_time": 0.03712, "loss_rpn_cls": 0.04365, "loss_rpn_bbox": 0.06559, "loss_cls": 0.26956, "acc": 91.39062, "loss_bbox": 0.2889, "loss_mask": 0.28436, "loss": 0.95206, "time": 0.21348} -{"mode": "train", "epoch": 2, "iter": 4250, "lr": 0.0002, "memory": 5941, "data_time": 0.03563, "loss_rpn_cls": 0.04426, "loss_rpn_bbox": 0.0587, "loss_cls": 0.26097, "acc": 91.76294, "loss_bbox": 0.27353, "loss_mask": 0.28027, "loss": 0.91774, "time": 0.2561} -{"mode": "train", "epoch": 2, "iter": 4300, "lr": 0.0002, "memory": 5941, "data_time": 0.03599, "loss_rpn_cls": 0.04486, "loss_rpn_bbox": 0.06563, "loss_cls": 0.28566, "acc": 90.82812, "loss_bbox": 0.29948, "loss_mask": 0.28116, "loss": 0.97679, "time": 0.21988} -{"mode": "train", "epoch": 2, "iter": 4350, "lr": 0.0002, "memory": 5941, "data_time": 0.04014, "loss_rpn_cls": 0.04305, "loss_rpn_bbox": 0.06021, "loss_cls": 0.26863, "acc": 91.33154, "loss_bbox": 0.28436, "loss_mask": 0.27423, "loss": 0.93048, "time": 0.22404} -{"mode": "train", "epoch": 2, "iter": 4400, "lr": 0.0002, "memory": 5941, "data_time": 0.03696, "loss_rpn_cls": 0.04216, "loss_rpn_bbox": 0.06083, "loss_cls": 0.27748, "acc": 91.39917, "loss_bbox": 0.28658, "loss_mask": 0.27967, "loss": 0.94672, "time": 0.21316} -{"mode": "train", "epoch": 2, "iter": 4450, "lr": 0.0002, "memory": 5941, "data_time": 0.0415, "loss_rpn_cls": 0.04204, "loss_rpn_bbox": 0.06031, "loss_cls": 0.2771, "acc": 91.08057, "loss_bbox": 0.29299, "loss_mask": 0.27895, "loss": 0.95139, "time": 0.2198} -{"mode": "train", "epoch": 2, "iter": 4500, "lr": 0.0002, "memory": 5941, "data_time": 0.03728, "loss_rpn_cls": 0.04082, "loss_rpn_bbox": 0.05928, "loss_cls": 0.25699, "acc": 91.83862, "loss_bbox": 0.27396, "loss_mask": 0.27407, "loss": 0.90512, "time": 0.20951} -{"mode": "train", "epoch": 2, "iter": 4550, "lr": 0.0002, "memory": 5941, "data_time": 0.03927, "loss_rpn_cls": 0.0445, "loss_rpn_bbox": 0.06392, "loss_cls": 0.28618, "acc": 91.13062, "loss_bbox": 0.2937, "loss_mask": 0.28495, "loss": 0.97325, "time": 0.21282} -{"mode": "train", "epoch": 2, "iter": 4600, "lr": 0.0002, "memory": 5941, "data_time": 0.03358, "loss_rpn_cls": 0.04452, "loss_rpn_bbox": 0.06123, "loss_cls": 0.28454, "acc": 91.19946, "loss_bbox": 0.29089, "loss_mask": 0.28493, "loss": 0.96611, "time": 0.21247} -{"mode": "train", "epoch": 2, "iter": 4650, "lr": 0.0002, "memory": 5941, "data_time": 0.0382, "loss_rpn_cls": 0.04544, "loss_rpn_bbox": 0.06077, "loss_cls": 0.2733, "acc": 91.31836, "loss_bbox": 0.28894, "loss_mask": 0.2845, "loss": 0.95295, "time": 0.2135} -{"mode": "train", "epoch": 2, "iter": 4700, "lr": 0.0002, "memory": 5941, "data_time": 0.03564, "loss_rpn_cls": 0.04219, "loss_rpn_bbox": 0.05944, "loss_cls": 0.26228, "acc": 91.68115, "loss_bbox": 0.27556, "loss_mask": 0.27887, "loss": 0.91835, "time": 0.20922} -{"mode": "train", "epoch": 2, "iter": 4750, "lr": 0.0002, "memory": 5941, "data_time": 0.031, "loss_rpn_cls": 0.04268, "loss_rpn_bbox": 0.05838, "loss_cls": 0.26584, "acc": 91.53613, "loss_bbox": 0.28452, "loss_mask": 0.27136, "loss": 0.92278, "time": 0.20814} -{"mode": "train", "epoch": 2, "iter": 4800, "lr": 0.0002, "memory": 5941, "data_time": 0.0367, "loss_rpn_cls": 0.0408, "loss_rpn_bbox": 0.06027, "loss_cls": 0.26223, "acc": 91.79077, "loss_bbox": 0.27278, "loss_mask": 0.27469, "loss": 0.91076, "time": 0.21051} -{"mode": "train", "epoch": 2, "iter": 4850, "lr": 0.0002, "memory": 5941, "data_time": 0.03076, "loss_rpn_cls": 0.0402, "loss_rpn_bbox": 0.05937, "loss_cls": 0.26721, "acc": 91.62451, "loss_bbox": 0.27194, 
"loss_mask": 0.26838, "loss": 0.9071, "time": 0.19738} -{"mode": "train", "epoch": 2, "iter": 4900, "lr": 0.0002, "memory": 5941, "data_time": 0.03575, "loss_rpn_cls": 0.04252, "loss_rpn_bbox": 0.05883, "loss_cls": 0.27115, "acc": 91.33325, "loss_bbox": 0.28749, "loss_mask": 0.27093, "loss": 0.93093, "time": 0.21355} -{"mode": "train", "epoch": 2, "iter": 4950, "lr": 0.0002, "memory": 5941, "data_time": 0.03833, "loss_rpn_cls": 0.04107, "loss_rpn_bbox": 0.05896, "loss_cls": 0.27324, "acc": 91.23682, "loss_bbox": 0.28495, "loss_mask": 0.27215, "loss": 0.93037, "time": 0.21048} -{"mode": "train", "epoch": 2, "iter": 5000, "lr": 0.0002, "memory": 5941, "data_time": 0.0387, "loss_rpn_cls": 0.04305, "loss_rpn_bbox": 0.05847, "loss_cls": 0.27104, "acc": 91.23853, "loss_bbox": 0.28917, "loss_mask": 0.2728, "loss": 0.93453, "time": 0.20842} -{"mode": "train", "epoch": 2, "iter": 5050, "lr": 0.0002, "memory": 5941, "data_time": 0.03398, "loss_rpn_cls": 0.04138, "loss_rpn_bbox": 0.06135, "loss_cls": 0.26594, "acc": 91.57764, "loss_bbox": 0.27807, "loss_mask": 0.27589, "loss": 0.92263, "time": 0.21864} -{"mode": "train", "epoch": 2, "iter": 5100, "lr": 0.0002, "memory": 5941, "data_time": 0.03131, "loss_rpn_cls": 0.04393, "loss_rpn_bbox": 0.05802, "loss_cls": 0.26074, "acc": 91.7644, "loss_bbox": 0.27947, "loss_mask": 0.27718, "loss": 0.91933, "time": 0.21426} -{"mode": "train", "epoch": 2, "iter": 5150, "lr": 0.0002, "memory": 5941, "data_time": 0.03735, "loss_rpn_cls": 0.04228, "loss_rpn_bbox": 0.06019, "loss_cls": 0.27861, "acc": 91.07397, "loss_bbox": 0.28998, "loss_mask": 0.28523, "loss": 0.9563, "time": 0.20905} -{"mode": "train", "epoch": 2, "iter": 5200, "lr": 0.0002, "memory": 5941, "data_time": 0.02967, "loss_rpn_cls": 0.04261, "loss_rpn_bbox": 0.06133, "loss_cls": 0.27643, "acc": 91.20679, "loss_bbox": 0.28595, "loss_mask": 0.28213, "loss": 0.94846, "time": 0.21324} -{"mode": "train", "epoch": 2, "iter": 5250, "lr": 0.0002, "memory": 5941, "data_time": 0.0349, "loss_rpn_cls": 0.04635, "loss_rpn_bbox": 0.06459, "loss_cls": 0.27496, "acc": 91.24268, "loss_bbox": 0.28821, "loss_mask": 0.27478, "loss": 0.94888, "time": 0.22099} -{"mode": "train", "epoch": 2, "iter": 5300, "lr": 0.0002, "memory": 5941, "data_time": 0.03759, "loss_rpn_cls": 0.04314, "loss_rpn_bbox": 0.0618, "loss_cls": 0.26588, "acc": 91.46265, "loss_bbox": 0.28353, "loss_mask": 0.27575, "loss": 0.93011, "time": 0.21907} -{"mode": "train", "epoch": 2, "iter": 5350, "lr": 0.0002, "memory": 5941, "data_time": 0.03474, "loss_rpn_cls": 0.0438, "loss_rpn_bbox": 0.06258, "loss_cls": 0.26944, "acc": 91.41089, "loss_bbox": 0.28194, "loss_mask": 0.27719, "loss": 0.93496, "time": 0.21697} -{"mode": "train", "epoch": 2, "iter": 5400, "lr": 0.0002, "memory": 5941, "data_time": 0.03433, "loss_rpn_cls": 0.03823, "loss_rpn_bbox": 0.05846, "loss_cls": 0.26588, "acc": 91.59497, "loss_bbox": 0.27908, "loss_mask": 0.27446, "loss": 0.91611, "time": 0.2033} -{"mode": "train", "epoch": 2, "iter": 5450, "lr": 0.0002, "memory": 5941, "data_time": 0.04032, "loss_rpn_cls": 0.04431, "loss_rpn_bbox": 0.06287, "loss_cls": 0.27553, "acc": 91.41675, "loss_bbox": 0.27944, "loss_mask": 0.27973, "loss": 0.94189, "time": 0.21551} -{"mode": "train", "epoch": 2, "iter": 5500, "lr": 0.0002, "memory": 5941, "data_time": 0.03229, "loss_rpn_cls": 0.0427, "loss_rpn_bbox": 0.06167, "loss_cls": 0.27233, "acc": 91.36255, "loss_bbox": 0.28561, "loss_mask": 0.27607, "loss": 0.93837, "time": 0.20736} -{"mode": "train", "epoch": 2, "iter": 5550, "lr": 0.0002, "memory": 5941, 
"data_time": 0.04415, "loss_rpn_cls": 0.04336, "loss_rpn_bbox": 0.06214, "loss_cls": 0.26873, "acc": 91.49194, "loss_bbox": 0.28879, "loss_mask": 0.27339, "loss": 0.93642, "time": 0.20654} -{"mode": "train", "epoch": 2, "iter": 5600, "lr": 0.0002, "memory": 5941, "data_time": 0.0376, "loss_rpn_cls": 0.04286, "loss_rpn_bbox": 0.06444, "loss_cls": 0.27584, "acc": 91.30566, "loss_bbox": 0.28756, "loss_mask": 0.27502, "loss": 0.94572, "time": 0.21158} -{"mode": "train", "epoch": 2, "iter": 5650, "lr": 0.0002, "memory": 5941, "data_time": 0.03215, "loss_rpn_cls": 0.04091, "loss_rpn_bbox": 0.05779, "loss_cls": 0.25712, "acc": 91.84644, "loss_bbox": 0.2729, "loss_mask": 0.2773, "loss": 0.90602, "time": 0.20441} -{"mode": "train", "epoch": 2, "iter": 5700, "lr": 0.0002, "memory": 5941, "data_time": 0.03852, "loss_rpn_cls": 0.04252, "loss_rpn_bbox": 0.06141, "loss_cls": 0.27339, "acc": 91.42505, "loss_bbox": 0.28475, "loss_mask": 0.27728, "loss": 0.93935, "time": 0.21119} -{"mode": "train", "epoch": 2, "iter": 5750, "lr": 0.0002, "memory": 5941, "data_time": 0.0344, "loss_rpn_cls": 0.04283, "loss_rpn_bbox": 0.0606, "loss_cls": 0.27028, "acc": 91.43774, "loss_bbox": 0.27912, "loss_mask": 0.27882, "loss": 0.93166, "time": 0.20614} -{"mode": "train", "epoch": 2, "iter": 5800, "lr": 0.0002, "memory": 5941, "data_time": 0.03122, "loss_rpn_cls": 0.04304, "loss_rpn_bbox": 0.06399, "loss_cls": 0.2776, "acc": 91.19287, "loss_bbox": 0.28615, "loss_mask": 0.27716, "loss": 0.94794, "time": 0.22062} -{"mode": "train", "epoch": 2, "iter": 5850, "lr": 0.0002, "memory": 5941, "data_time": 0.0366, "loss_rpn_cls": 0.04365, "loss_rpn_bbox": 0.06197, "loss_cls": 0.26247, "acc": 91.72656, "loss_bbox": 0.27196, "loss_mask": 0.27291, "loss": 0.91297, "time": 0.21115} -{"mode": "train", "epoch": 2, "iter": 5900, "lr": 0.0002, "memory": 5941, "data_time": 0.02994, "loss_rpn_cls": 0.04154, "loss_rpn_bbox": 0.05887, "loss_cls": 0.27051, "acc": 91.41309, "loss_bbox": 0.28408, "loss_mask": 0.27555, "loss": 0.93055, "time": 0.20651} -{"mode": "train", "epoch": 2, "iter": 5950, "lr": 0.0002, "memory": 5941, "data_time": 0.03007, "loss_rpn_cls": 0.03928, "loss_rpn_bbox": 0.05934, "loss_cls": 0.26309, "acc": 91.59473, "loss_bbox": 0.27782, "loss_mask": 0.27512, "loss": 0.91465, "time": 0.21008} -{"mode": "train", "epoch": 2, "iter": 6000, "lr": 0.0002, "memory": 5941, "data_time": 0.03401, "loss_rpn_cls": 0.04256, "loss_rpn_bbox": 0.06097, "loss_cls": 0.27366, "acc": 91.32715, "loss_bbox": 0.27871, "loss_mask": 0.27514, "loss": 0.93104, "time": 0.21519} -{"mode": "train", "epoch": 2, "iter": 6050, "lr": 0.0002, "memory": 5941, "data_time": 0.03806, "loss_rpn_cls": 0.04116, "loss_rpn_bbox": 0.06113, "loss_cls": 0.26161, "acc": 91.58008, "loss_bbox": 0.27343, "loss_mask": 0.27718, "loss": 0.91451, "time": 0.21095} -{"mode": "train", "epoch": 2, "iter": 6100, "lr": 0.0002, "memory": 5941, "data_time": 0.04198, "loss_rpn_cls": 0.03953, "loss_rpn_bbox": 0.05705, "loss_cls": 0.26812, "acc": 91.35181, "loss_bbox": 0.2804, "loss_mask": 0.27349, "loss": 0.91858, "time": 0.21561} -{"mode": "train", "epoch": 2, "iter": 6150, "lr": 0.0002, "memory": 5941, "data_time": 0.0363, "loss_rpn_cls": 0.04314, "loss_rpn_bbox": 0.06281, "loss_cls": 0.26314, "acc": 91.51343, "loss_bbox": 0.28026, "loss_mask": 0.2795, "loss": 0.92885, "time": 0.20901} -{"mode": "train", "epoch": 2, "iter": 6200, "lr": 0.0002, "memory": 5941, "data_time": 0.03221, "loss_rpn_cls": 0.04513, "loss_rpn_bbox": 0.06048, "loss_cls": 0.27227, "acc": 91.27124, "loss_bbox": 0.28708, 
"loss_mask": 0.27774, "loss": 0.94269, "time": 0.2144} -{"mode": "train", "epoch": 2, "iter": 6250, "lr": 0.0002, "memory": 5941, "data_time": 0.03727, "loss_rpn_cls": 0.04543, "loss_rpn_bbox": 0.06332, "loss_cls": 0.26636, "acc": 91.49805, "loss_bbox": 0.28423, "loss_mask": 0.2768, "loss": 0.93614, "time": 0.21841} -{"mode": "train", "epoch": 2, "iter": 6300, "lr": 0.0002, "memory": 5941, "data_time": 0.03616, "loss_rpn_cls": 0.0442, "loss_rpn_bbox": 0.06293, "loss_cls": 0.26063, "acc": 91.72803, "loss_bbox": 0.27317, "loss_mask": 0.27801, "loss": 0.91894, "time": 0.21605} -{"mode": "train", "epoch": 2, "iter": 6350, "lr": 0.0002, "memory": 5941, "data_time": 0.0311, "loss_rpn_cls": 0.04197, "loss_rpn_bbox": 0.05861, "loss_cls": 0.27191, "acc": 91.47729, "loss_bbox": 0.27713, "loss_mask": 0.27364, "loss": 0.92327, "time": 0.21193} -{"mode": "train", "epoch": 2, "iter": 6400, "lr": 0.0002, "memory": 5941, "data_time": 0.03535, "loss_rpn_cls": 0.04119, "loss_rpn_bbox": 0.06483, "loss_cls": 0.27813, "acc": 91.10474, "loss_bbox": 0.29367, "loss_mask": 0.28367, "loss": 0.96148, "time": 0.21473} -{"mode": "train", "epoch": 2, "iter": 6450, "lr": 0.0002, "memory": 5941, "data_time": 0.03249, "loss_rpn_cls": 0.03643, "loss_rpn_bbox": 0.05815, "loss_cls": 0.24743, "acc": 92.08667, "loss_bbox": 0.26334, "loss_mask": 0.27317, "loss": 0.87851, "time": 0.2035} -{"mode": "train", "epoch": 2, "iter": 6500, "lr": 0.0002, "memory": 5941, "data_time": 0.03585, "loss_rpn_cls": 0.04425, "loss_rpn_bbox": 0.06372, "loss_cls": 0.26651, "acc": 91.36523, "loss_bbox": 0.27962, "loss_mask": 0.27796, "loss": 0.93206, "time": 0.20795} -{"mode": "train", "epoch": 2, "iter": 6550, "lr": 0.0002, "memory": 5941, "data_time": 0.03672, "loss_rpn_cls": 0.04067, "loss_rpn_bbox": 0.05827, "loss_cls": 0.2646, "acc": 91.67603, "loss_bbox": 0.2696, "loss_mask": 0.27807, "loss": 0.91122, "time": 0.20847} -{"mode": "train", "epoch": 2, "iter": 6600, "lr": 0.0002, "memory": 5941, "data_time": 0.03362, "loss_rpn_cls": 0.04362, "loss_rpn_bbox": 0.05812, "loss_cls": 0.26942, "acc": 91.50073, "loss_bbox": 0.2807, "loss_mask": 0.27038, "loss": 0.92224, "time": 0.21045} -{"mode": "train", "epoch": 2, "iter": 6650, "lr": 0.0002, "memory": 5941, "data_time": 0.0393, "loss_rpn_cls": 0.0409, "loss_rpn_bbox": 0.06225, "loss_cls": 0.27613, "acc": 91.1084, "loss_bbox": 0.29419, "loss_mask": 0.27747, "loss": 0.95095, "time": 0.218} -{"mode": "train", "epoch": 2, "iter": 6700, "lr": 0.0002, "memory": 5941, "data_time": 0.0324, "loss_rpn_cls": 0.04533, "loss_rpn_bbox": 0.06001, "loss_cls": 0.2779, "acc": 91.12964, "loss_bbox": 0.29116, "loss_mask": 0.27644, "loss": 0.95084, "time": 0.21851} -{"mode": "train", "epoch": 2, "iter": 6750, "lr": 0.0002, "memory": 5941, "data_time": 0.03226, "loss_rpn_cls": 0.04221, "loss_rpn_bbox": 0.06211, "loss_cls": 0.25782, "acc": 91.74121, "loss_bbox": 0.27335, "loss_mask": 0.28402, "loss": 0.91951, "time": 0.20618} -{"mode": "train", "epoch": 2, "iter": 6800, "lr": 0.0002, "memory": 5941, "data_time": 0.03747, "loss_rpn_cls": 0.03964, "loss_rpn_bbox": 0.05796, "loss_cls": 0.26004, "acc": 91.65112, "loss_bbox": 0.27135, "loss_mask": 0.27416, "loss": 0.90315, "time": 0.20424} -{"mode": "train", "epoch": 2, "iter": 6850, "lr": 0.0002, "memory": 5941, "data_time": 0.03578, "loss_rpn_cls": 0.04211, "loss_rpn_bbox": 0.05834, "loss_cls": 0.25601, "acc": 91.8479, "loss_bbox": 0.26922, "loss_mask": 0.2734, "loss": 0.89908, "time": 0.20948} -{"mode": "train", "epoch": 2, "iter": 6900, "lr": 0.0002, "memory": 5941, 
"data_time": 0.03371, "loss_rpn_cls": 0.04229, "loss_rpn_bbox": 0.06032, "loss_cls": 0.27123, "acc": 91.39014, "loss_bbox": 0.27813, "loss_mask": 0.27538, "loss": 0.92735, "time": 0.20342} -{"mode": "train", "epoch": 2, "iter": 6950, "lr": 0.0002, "memory": 5941, "data_time": 0.03486, "loss_rpn_cls": 0.04453, "loss_rpn_bbox": 0.06008, "loss_cls": 0.25744, "acc": 91.97485, "loss_bbox": 0.26792, "loss_mask": 0.27014, "loss": 0.9001, "time": 0.20732} -{"mode": "train", "epoch": 2, "iter": 7000, "lr": 0.0002, "memory": 5941, "data_time": 0.03754, "loss_rpn_cls": 0.04266, "loss_rpn_bbox": 0.06123, "loss_cls": 0.25931, "acc": 91.57837, "loss_bbox": 0.27745, "loss_mask": 0.27293, "loss": 0.91358, "time": 0.21507} -{"mode": "train", "epoch": 2, "iter": 7050, "lr": 0.0002, "memory": 5941, "data_time": 0.03494, "loss_rpn_cls": 0.03734, "loss_rpn_bbox": 0.0591, "loss_cls": 0.26233, "acc": 91.68701, "loss_bbox": 0.27911, "loss_mask": 0.27695, "loss": 0.91483, "time": 0.20203} -{"mode": "train", "epoch": 2, "iter": 7100, "lr": 0.0002, "memory": 5941, "data_time": 0.03491, "loss_rpn_cls": 0.04222, "loss_rpn_bbox": 0.06026, "loss_cls": 0.26809, "acc": 91.43848, "loss_bbox": 0.28259, "loss_mask": 0.27531, "loss": 0.92846, "time": 0.20496} -{"mode": "train", "epoch": 2, "iter": 7150, "lr": 0.0002, "memory": 5941, "data_time": 0.04194, "loss_rpn_cls": 0.04126, "loss_rpn_bbox": 0.06091, "loss_cls": 0.2626, "acc": 91.65747, "loss_bbox": 0.27169, "loss_mask": 0.27134, "loss": 0.9078, "time": 0.20847} -{"mode": "train", "epoch": 2, "iter": 7200, "lr": 0.0002, "memory": 5941, "data_time": 0.03632, "loss_rpn_cls": 0.038, "loss_rpn_bbox": 0.05586, "loss_cls": 0.2526, "acc": 91.94141, "loss_bbox": 0.26798, "loss_mask": 0.27377, "loss": 0.88821, "time": 0.20987} -{"mode": "train", "epoch": 2, "iter": 7250, "lr": 0.0002, "memory": 5941, "data_time": 0.03521, "loss_rpn_cls": 0.04137, "loss_rpn_bbox": 0.05693, "loss_cls": 0.25567, "acc": 91.92603, "loss_bbox": 0.26919, "loss_mask": 0.26775, "loss": 0.89091, "time": 0.20443} -{"mode": "train", "epoch": 2, "iter": 7300, "lr": 0.0002, "memory": 5941, "data_time": 0.03571, "loss_rpn_cls": 0.04391, "loss_rpn_bbox": 0.05897, "loss_cls": 0.26497, "acc": 91.66577, "loss_bbox": 0.27831, "loss_mask": 0.26777, "loss": 0.91393, "time": 0.20655} -{"mode": "val", "epoch": 2, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.2796, "bbox_mAP_50": 0.4942, "bbox_mAP_75": 0.2883, "bbox_mAP_s": 0.1636, "bbox_mAP_m": 0.309, "bbox_mAP_l": 0.3679, "bbox_mAP_copypaste": "0.2796 0.4942 0.2883 0.1636 0.3090 0.3679", "segm_mAP": 0.28, "segm_mAP_50": 0.4677, "segm_mAP_75": 0.294, "segm_mAP_s": 0.1266, "segm_mAP_m": 0.304, "segm_mAP_l": 0.4202, "segm_mAP_copypaste": "0.2800 0.4677 0.2940 0.1266 0.3040 0.4202"} -{"mode": "train", "epoch": 3, "iter": 50, "lr": 0.0002, "memory": 5941, "data_time": 0.14727, "loss_rpn_cls": 0.03615, "loss_rpn_bbox": 0.05984, "loss_cls": 0.25136, "acc": 91.70703, "loss_bbox": 0.27313, "loss_mask": 0.27483, "loss": 0.89531, "time": 0.34485} -{"mode": "train", "epoch": 3, "iter": 100, "lr": 0.0002, "memory": 5941, "data_time": 0.03538, "loss_rpn_cls": 0.0382, "loss_rpn_bbox": 0.0597, "loss_cls": 0.25472, "acc": 91.7627, "loss_bbox": 0.27434, "loss_mask": 0.26671, "loss": 0.89366, "time": 0.21945} -{"mode": "train", "epoch": 3, "iter": 150, "lr": 0.0002, "memory": 5941, "data_time": 0.03321, "loss_rpn_cls": 0.0355, "loss_rpn_bbox": 0.05852, "loss_cls": 0.25014, "acc": 91.78394, "loss_bbox": 0.26833, "loss_mask": 0.26153, "loss": 0.87402, "time": 0.20706} -{"mode": "train", "epoch": 
3, "iter": 200, "lr": 0.0002, "memory": 5941, "data_time": 0.04062, "loss_rpn_cls": 0.03835, "loss_rpn_bbox": 0.06022, "loss_cls": 0.2552, "acc": 91.57642, "loss_bbox": 0.27747, "loss_mask": 0.26748, "loss": 0.89872, "time": 0.22006} -{"mode": "train", "epoch": 3, "iter": 250, "lr": 0.0002, "memory": 5941, "data_time": 0.04829, "loss_rpn_cls": 0.04064, "loss_rpn_bbox": 0.06245, "loss_cls": 0.26369, "acc": 91.30664, "loss_bbox": 0.28568, "loss_mask": 0.27688, "loss": 0.92933, "time": 0.22486} -{"mode": "train", "epoch": 3, "iter": 300, "lr": 0.0002, "memory": 5941, "data_time": 0.03999, "loss_rpn_cls": 0.03567, "loss_rpn_bbox": 0.05502, "loss_cls": 0.24428, "acc": 91.92261, "loss_bbox": 0.26864, "loss_mask": 0.27265, "loss": 0.87627, "time": 0.2073} -{"mode": "train", "epoch": 3, "iter": 350, "lr": 0.0002, "memory": 5941, "data_time": 0.04428, "loss_rpn_cls": 0.04017, "loss_rpn_bbox": 0.06242, "loss_cls": 0.25874, "acc": 91.49463, "loss_bbox": 0.28371, "loss_mask": 0.27045, "loss": 0.91548, "time": 0.22198} -{"mode": "train", "epoch": 3, "iter": 400, "lr": 0.0002, "memory": 5941, "data_time": 0.03168, "loss_rpn_cls": 0.03683, "loss_rpn_bbox": 0.05504, "loss_cls": 0.24821, "acc": 91.79248, "loss_bbox": 0.2748, "loss_mask": 0.26839, "loss": 0.88328, "time": 0.21187} -{"mode": "train", "epoch": 3, "iter": 450, "lr": 0.0002, "memory": 5941, "data_time": 0.04382, "loss_rpn_cls": 0.04034, "loss_rpn_bbox": 0.06078, "loss_cls": 0.2515, "acc": 91.61816, "loss_bbox": 0.27724, "loss_mask": 0.27205, "loss": 0.9019, "time": 0.2156} -{"mode": "train", "epoch": 3, "iter": 500, "lr": 0.0002, "memory": 5941, "data_time": 0.04428, "loss_rpn_cls": 0.0346, "loss_rpn_bbox": 0.05788, "loss_cls": 0.25272, "acc": 91.75781, "loss_bbox": 0.27442, "loss_mask": 0.26786, "loss": 0.88748, "time": 0.21886} -{"mode": "train", "epoch": 3, "iter": 550, "lr": 0.0002, "memory": 5941, "data_time": 0.044, "loss_rpn_cls": 0.03715, "loss_rpn_bbox": 0.05526, "loss_cls": 0.24897, "acc": 91.98193, "loss_bbox": 0.26799, "loss_mask": 0.26512, "loss": 0.87449, "time": 0.20678} -{"mode": "train", "epoch": 3, "iter": 600, "lr": 0.0002, "memory": 5941, "data_time": 0.03722, "loss_rpn_cls": 0.03522, "loss_rpn_bbox": 0.05682, "loss_cls": 0.25216, "acc": 91.82275, "loss_bbox": 0.26689, "loss_mask": 0.26664, "loss": 0.87773, "time": 0.2142} -{"mode": "train", "epoch": 3, "iter": 650, "lr": 0.0002, "memory": 5941, "data_time": 0.03918, "loss_rpn_cls": 0.03821, "loss_rpn_bbox": 0.05756, "loss_cls": 0.25839, "acc": 91.57178, "loss_bbox": 0.28307, "loss_mask": 0.27645, "loss": 0.91368, "time": 0.21555} -{"mode": "train", "epoch": 3, "iter": 700, "lr": 0.0002, "memory": 5941, "data_time": 0.0352, "loss_rpn_cls": 0.03825, "loss_rpn_bbox": 0.05666, "loss_cls": 0.24671, "acc": 91.90454, "loss_bbox": 0.26721, "loss_mask": 0.26611, "loss": 0.87494, "time": 0.21338} -{"mode": "train", "epoch": 3, "iter": 750, "lr": 0.0002, "memory": 5941, "data_time": 0.03886, "loss_rpn_cls": 0.03865, "loss_rpn_bbox": 0.06029, "loss_cls": 0.25852, "acc": 91.6333, "loss_bbox": 0.27852, "loss_mask": 0.26817, "loss": 0.90415, "time": 0.21807} -{"mode": "train", "epoch": 3, "iter": 800, "lr": 0.0002, "memory": 5941, "data_time": 0.03641, "loss_rpn_cls": 0.03675, "loss_rpn_bbox": 0.05829, "loss_cls": 0.25451, "acc": 91.6875, "loss_bbox": 0.27354, "loss_mask": 0.26672, "loss": 0.88981, "time": 0.22393} -{"mode": "train", "epoch": 3, "iter": 850, "lr": 0.0002, "memory": 5941, "data_time": 0.04201, "loss_rpn_cls": 0.0405, "loss_rpn_bbox": 0.06076, "loss_cls": 0.25775, "acc": 
91.54248, "loss_bbox": 0.27777, "loss_mask": 0.27129, "loss": 0.90806, "time": 0.21774} -{"mode": "train", "epoch": 3, "iter": 900, "lr": 0.0002, "memory": 5941, "data_time": 0.04166, "loss_rpn_cls": 0.03721, "loss_rpn_bbox": 0.05931, "loss_cls": 0.2513, "acc": 91.74072, "loss_bbox": 0.274, "loss_mask": 0.26958, "loss": 0.89141, "time": 0.2207} -{"mode": "train", "epoch": 3, "iter": 950, "lr": 0.0002, "memory": 5941, "data_time": 0.03768, "loss_rpn_cls": 0.03877, "loss_rpn_bbox": 0.05576, "loss_cls": 0.24772, "acc": 92.01318, "loss_bbox": 0.26261, "loss_mask": 0.26717, "loss": 0.87204, "time": 0.2204} -{"mode": "train", "epoch": 3, "iter": 1000, "lr": 0.0002, "memory": 5941, "data_time": 0.03969, "loss_rpn_cls": 0.03718, "loss_rpn_bbox": 0.05787, "loss_cls": 0.25683, "acc": 91.68823, "loss_bbox": 0.27772, "loss_mask": 0.26342, "loss": 0.89301, "time": 0.21191} -{"mode": "train", "epoch": 3, "iter": 1050, "lr": 0.0002, "memory": 5941, "data_time": 0.03738, "loss_rpn_cls": 0.0378, "loss_rpn_bbox": 0.05762, "loss_cls": 0.24715, "acc": 91.87671, "loss_bbox": 0.26789, "loss_mask": 0.26328, "loss": 0.87375, "time": 0.20933} -{"mode": "train", "epoch": 3, "iter": 1100, "lr": 0.0002, "memory": 5941, "data_time": 0.04112, "loss_rpn_cls": 0.03777, "loss_rpn_bbox": 0.05876, "loss_cls": 0.25158, "acc": 91.8313, "loss_bbox": 0.26812, "loss_mask": 0.25973, "loss": 0.87596, "time": 0.21188} -{"mode": "train", "epoch": 3, "iter": 1150, "lr": 0.0002, "memory": 5941, "data_time": 0.04185, "loss_rpn_cls": 0.03688, "loss_rpn_bbox": 0.05731, "loss_cls": 0.25369, "acc": 91.66333, "loss_bbox": 0.27395, "loss_mask": 0.27353, "loss": 0.89536, "time": 0.22747} -{"mode": "train", "epoch": 3, "iter": 1200, "lr": 0.0002, "memory": 5941, "data_time": 0.04427, "loss_rpn_cls": 0.03849, "loss_rpn_bbox": 0.058, "loss_cls": 0.25737, "acc": 91.46533, "loss_bbox": 0.28725, "loss_mask": 0.27496, "loss": 0.91607, "time": 0.21674} -{"mode": "train", "epoch": 3, "iter": 1250, "lr": 0.0002, "memory": 5941, "data_time": 0.03727, "loss_rpn_cls": 0.03905, "loss_rpn_bbox": 0.05894, "loss_cls": 0.25726, "acc": 91.66602, "loss_bbox": 0.27771, "loss_mask": 0.27493, "loss": 0.90789, "time": 0.20923} -{"mode": "train", "epoch": 3, "iter": 1300, "lr": 0.0002, "memory": 5941, "data_time": 0.04076, "loss_rpn_cls": 0.03893, "loss_rpn_bbox": 0.05792, "loss_cls": 0.24793, "acc": 91.94287, "loss_bbox": 0.26614, "loss_mask": 0.27162, "loss": 0.88253, "time": 0.21064} -{"mode": "train", "epoch": 3, "iter": 1350, "lr": 0.0002, "memory": 5941, "data_time": 0.0358, "loss_rpn_cls": 0.04163, "loss_rpn_bbox": 0.06309, "loss_cls": 0.25952, "acc": 91.69141, "loss_bbox": 0.27477, "loss_mask": 0.2683, "loss": 0.90731, "time": 0.21223} -{"mode": "train", "epoch": 3, "iter": 1400, "lr": 0.0002, "memory": 5941, "data_time": 0.03984, "loss_rpn_cls": 0.0402, "loss_rpn_bbox": 0.05707, "loss_cls": 0.25107, "acc": 91.87134, "loss_bbox": 0.27349, "loss_mask": 0.26992, "loss": 0.89175, "time": 0.20691} -{"mode": "train", "epoch": 3, "iter": 1450, "lr": 0.0002, "memory": 5941, "data_time": 0.04631, "loss_rpn_cls": 0.04357, "loss_rpn_bbox": 0.06373, "loss_cls": 0.26047, "acc": 91.56226, "loss_bbox": 0.27861, "loss_mask": 0.26726, "loss": 0.91363, "time": 0.2275} -{"mode": "train", "epoch": 3, "iter": 1500, "lr": 0.0002, "memory": 5941, "data_time": 0.03553, "loss_rpn_cls": 0.03744, "loss_rpn_bbox": 0.057, "loss_cls": 0.25514, "acc": 91.82007, "loss_bbox": 0.26793, "loss_mask": 0.27022, "loss": 0.88774, "time": 0.2113} -{"mode": "train", "epoch": 3, "iter": 1550, "lr": 
0.0002, "memory": 5941, "data_time": 0.03573, "loss_rpn_cls": 0.03883, "loss_rpn_bbox": 0.05938, "loss_cls": 0.25525, "acc": 91.74536, "loss_bbox": 0.2702, "loss_mask": 0.26913, "loss": 0.89279, "time": 0.21277} -{"mode": "train", "epoch": 3, "iter": 1600, "lr": 0.0002, "memory": 5941, "data_time": 0.03231, "loss_rpn_cls": 0.04325, "loss_rpn_bbox": 0.06353, "loss_cls": 0.27372, "acc": 91.19702, "loss_bbox": 0.29093, "loss_mask": 0.27118, "loss": 0.94261, "time": 0.21907} -{"mode": "train", "epoch": 3, "iter": 1650, "lr": 0.0002, "memory": 5941, "data_time": 0.04269, "loss_rpn_cls": 0.03687, "loss_rpn_bbox": 0.05794, "loss_cls": 0.25866, "acc": 91.58008, "loss_bbox": 0.27814, "loss_mask": 0.2722, "loss": 0.90382, "time": 0.21134} -{"mode": "train", "epoch": 3, "iter": 1700, "lr": 0.0002, "memory": 5941, "data_time": 0.0312, "loss_rpn_cls": 0.04142, "loss_rpn_bbox": 0.05976, "loss_cls": 0.25976, "acc": 91.65405, "loss_bbox": 0.27442, "loss_mask": 0.26599, "loss": 0.90134, "time": 0.2138} -{"mode": "train", "epoch": 3, "iter": 1750, "lr": 0.0002, "memory": 5941, "data_time": 0.03526, "loss_rpn_cls": 0.03951, "loss_rpn_bbox": 0.06102, "loss_cls": 0.25228, "acc": 91.93213, "loss_bbox": 0.26904, "loss_mask": 0.27247, "loss": 0.89432, "time": 0.20396} -{"mode": "train", "epoch": 3, "iter": 1800, "lr": 0.0002, "memory": 5941, "data_time": 0.03525, "loss_rpn_cls": 0.0404, "loss_rpn_bbox": 0.06025, "loss_cls": 0.25001, "acc": 91.80347, "loss_bbox": 0.27159, "loss_mask": 0.26831, "loss": 0.89056, "time": 0.206} -{"mode": "train", "epoch": 3, "iter": 1850, "lr": 0.0002, "memory": 5941, "data_time": 0.03277, "loss_rpn_cls": 0.03873, "loss_rpn_bbox": 0.05912, "loss_cls": 0.26233, "acc": 91.37012, "loss_bbox": 0.28306, "loss_mask": 0.27624, "loss": 0.91948, "time": 0.21308} -{"mode": "train", "epoch": 3, "iter": 1900, "lr": 0.0002, "memory": 5941, "data_time": 0.03146, "loss_rpn_cls": 0.03587, "loss_rpn_bbox": 0.05906, "loss_cls": 0.25555, "acc": 91.64868, "loss_bbox": 0.27167, "loss_mask": 0.26552, "loss": 0.88767, "time": 0.21349} -{"mode": "train", "epoch": 3, "iter": 1950, "lr": 0.0002, "memory": 5941, "data_time": 0.03652, "loss_rpn_cls": 0.03676, "loss_rpn_bbox": 0.05681, "loss_cls": 0.25324, "acc": 91.78394, "loss_bbox": 0.26749, "loss_mask": 0.26493, "loss": 0.87922, "time": 0.20548} -{"mode": "train", "epoch": 3, "iter": 2000, "lr": 0.0002, "memory": 5941, "data_time": 0.03295, "loss_rpn_cls": 0.04095, "loss_rpn_bbox": 0.05976, "loss_cls": 0.25721, "acc": 91.77051, "loss_bbox": 0.26916, "loss_mask": 0.27061, "loss": 0.8977, "time": 0.20374} -{"mode": "train", "epoch": 3, "iter": 2050, "lr": 0.0002, "memory": 5941, "data_time": 0.04001, "loss_rpn_cls": 0.03905, "loss_rpn_bbox": 0.06145, "loss_cls": 0.26175, "acc": 91.51318, "loss_bbox": 0.28334, "loss_mask": 0.27242, "loss": 0.91802, "time": 0.21301} -{"mode": "train", "epoch": 3, "iter": 2100, "lr": 0.0002, "memory": 5941, "data_time": 0.04093, "loss_rpn_cls": 0.0424, "loss_rpn_bbox": 0.06356, "loss_cls": 0.25776, "acc": 91.67017, "loss_bbox": 0.2753, "loss_mask": 0.26756, "loss": 0.90659, "time": 0.22236} -{"mode": "train", "epoch": 3, "iter": 2150, "lr": 0.0002, "memory": 5941, "data_time": 0.03003, "loss_rpn_cls": 0.03594, "loss_rpn_bbox": 0.05772, "loss_cls": 0.2494, "acc": 91.80859, "loss_bbox": 0.27165, "loss_mask": 0.26829, "loss": 0.88302, "time": 0.22115} -{"mode": "train", "epoch": 3, "iter": 2200, "lr": 0.0002, "memory": 5941, "data_time": 0.04172, "loss_rpn_cls": 0.0368, "loss_rpn_bbox": 0.06055, "loss_cls": 0.25648, "acc": 91.66309, 
"loss_bbox": 0.27287, "loss_mask": 0.27371, "loss": 0.90041, "time": 0.21758} -{"mode": "train", "epoch": 3, "iter": 2250, "lr": 0.0002, "memory": 5941, "data_time": 0.0379, "loss_rpn_cls": 0.03745, "loss_rpn_bbox": 0.06096, "loss_cls": 0.24415, "acc": 91.95996, "loss_bbox": 0.26999, "loss_mask": 0.26541, "loss": 0.87796, "time": 0.20923} -{"mode": "train", "epoch": 3, "iter": 2300, "lr": 0.0002, "memory": 5941, "data_time": 0.03825, "loss_rpn_cls": 0.04101, "loss_rpn_bbox": 0.05685, "loss_cls": 0.25613, "acc": 91.65479, "loss_bbox": 0.27247, "loss_mask": 0.26481, "loss": 0.89128, "time": 0.19809} -{"mode": "train", "epoch": 3, "iter": 2350, "lr": 0.0002, "memory": 5941, "data_time": 0.03482, "loss_rpn_cls": 0.04034, "loss_rpn_bbox": 0.05955, "loss_cls": 0.27019, "acc": 91.33179, "loss_bbox": 0.28601, "loss_mask": 0.26913, "loss": 0.92521, "time": 0.20711} -{"mode": "train", "epoch": 3, "iter": 2400, "lr": 0.0002, "memory": 5941, "data_time": 0.04028, "loss_rpn_cls": 0.03891, "loss_rpn_bbox": 0.05704, "loss_cls": 0.25197, "acc": 91.88086, "loss_bbox": 0.27058, "loss_mask": 0.26443, "loss": 0.88293, "time": 0.21205} -{"mode": "train", "epoch": 3, "iter": 2450, "lr": 0.0002, "memory": 5941, "data_time": 0.0338, "loss_rpn_cls": 0.04214, "loss_rpn_bbox": 0.06161, "loss_cls": 0.26047, "acc": 91.70215, "loss_bbox": 0.27498, "loss_mask": 0.2696, "loss": 0.9088, "time": 0.21645} -{"mode": "train", "epoch": 3, "iter": 2500, "lr": 0.0002, "memory": 5941, "data_time": 0.03579, "loss_rpn_cls": 0.03722, "loss_rpn_bbox": 0.05706, "loss_cls": 0.25833, "acc": 91.57715, "loss_bbox": 0.27129, "loss_mask": 0.2664, "loss": 0.89031, "time": 0.21066} -{"mode": "train", "epoch": 3, "iter": 2550, "lr": 0.0002, "memory": 5941, "data_time": 0.03489, "loss_rpn_cls": 0.03923, "loss_rpn_bbox": 0.05688, "loss_cls": 0.24617, "acc": 92.06372, "loss_bbox": 0.26408, "loss_mask": 0.27057, "loss": 0.87693, "time": 0.20059} -{"mode": "train", "epoch": 3, "iter": 2600, "lr": 0.0002, "memory": 5941, "data_time": 0.03373, "loss_rpn_cls": 0.0404, "loss_rpn_bbox": 0.06053, "loss_cls": 0.26263, "acc": 91.31958, "loss_bbox": 0.28321, "loss_mask": 0.27511, "loss": 0.92189, "time": 0.20863} -{"mode": "train", "epoch": 3, "iter": 2650, "lr": 0.0002, "memory": 5941, "data_time": 0.03779, "loss_rpn_cls": 0.03777, "loss_rpn_bbox": 0.05869, "loss_cls": 0.25719, "acc": 91.63013, "loss_bbox": 0.2682, "loss_mask": 0.27286, "loss": 0.8947, "time": 0.20696} -{"mode": "train", "epoch": 3, "iter": 2700, "lr": 0.0002, "memory": 5941, "data_time": 0.02968, "loss_rpn_cls": 0.03977, "loss_rpn_bbox": 0.06165, "loss_cls": 0.2613, "acc": 91.37573, "loss_bbox": 0.28007, "loss_mask": 0.26802, "loss": 0.91081, "time": 0.20857} -{"mode": "train", "epoch": 3, "iter": 2750, "lr": 0.0002, "memory": 5941, "data_time": 0.03247, "loss_rpn_cls": 0.03749, "loss_rpn_bbox": 0.05949, "loss_cls": 0.26235, "acc": 91.42676, "loss_bbox": 0.28249, "loss_mask": 0.27015, "loss": 0.91197, "time": 0.21014} -{"mode": "train", "epoch": 3, "iter": 2800, "lr": 0.0002, "memory": 5941, "data_time": 0.02999, "loss_rpn_cls": 0.03564, "loss_rpn_bbox": 0.05792, "loss_cls": 0.2532, "acc": 91.84448, "loss_bbox": 0.26514, "loss_mask": 0.2627, "loss": 0.8746, "time": 0.20632} -{"mode": "train", "epoch": 3, "iter": 2850, "lr": 0.0002, "memory": 5941, "data_time": 0.03248, "loss_rpn_cls": 0.03707, "loss_rpn_bbox": 0.06037, "loss_cls": 0.25348, "acc": 91.76001, "loss_bbox": 0.27508, "loss_mask": 0.27106, "loss": 0.89705, "time": 0.21111} -{"mode": "train", "epoch": 3, "iter": 2900, "lr": 
0.0002, "memory": 5941, "data_time": 0.03742, "loss_rpn_cls": 0.03785, "loss_rpn_bbox": 0.05827, "loss_cls": 0.25967, "acc": 91.46387, "loss_bbox": 0.28031, "loss_mask": 0.26656, "loss": 0.90266, "time": 0.2044} -{"mode": "train", "epoch": 3, "iter": 2950, "lr": 0.0002, "memory": 5941, "data_time": 0.03088, "loss_rpn_cls": 0.03587, "loss_rpn_bbox": 0.05753, "loss_cls": 0.2441, "acc": 91.94678, "loss_bbox": 0.26415, "loss_mask": 0.26991, "loss": 0.87156, "time": 0.20231} -{"mode": "train", "epoch": 3, "iter": 3000, "lr": 0.0002, "memory": 5941, "data_time": 0.03521, "loss_rpn_cls": 0.03884, "loss_rpn_bbox": 0.05763, "loss_cls": 0.2632, "acc": 91.47412, "loss_bbox": 0.28021, "loss_mask": 0.276, "loss": 0.91588, "time": 0.21347} -{"mode": "train", "epoch": 3, "iter": 3050, "lr": 0.0002, "memory": 5941, "data_time": 0.03834, "loss_rpn_cls": 0.03743, "loss_rpn_bbox": 0.05965, "loss_cls": 0.25378, "acc": 91.68091, "loss_bbox": 0.27855, "loss_mask": 0.27071, "loss": 0.90011, "time": 0.20676} -{"mode": "train", "epoch": 3, "iter": 3100, "lr": 0.0002, "memory": 5941, "data_time": 0.0369, "loss_rpn_cls": 0.03594, "loss_rpn_bbox": 0.05729, "loss_cls": 0.24819, "acc": 92.08423, "loss_bbox": 0.26551, "loss_mask": 0.26979, "loss": 0.87671, "time": 0.20659} -{"mode": "train", "epoch": 3, "iter": 3150, "lr": 0.0002, "memory": 5941, "data_time": 0.03707, "loss_rpn_cls": 0.03639, "loss_rpn_bbox": 0.0571, "loss_cls": 0.25237, "acc": 91.7229, "loss_bbox": 0.27539, "loss_mask": 0.26449, "loss": 0.88574, "time": 0.2033} -{"mode": "train", "epoch": 3, "iter": 3200, "lr": 0.0002, "memory": 5941, "data_time": 0.03193, "loss_rpn_cls": 0.04059, "loss_rpn_bbox": 0.06205, "loss_cls": 0.25887, "acc": 91.6438, "loss_bbox": 0.27462, "loss_mask": 0.27066, "loss": 0.90679, "time": 0.20385} -{"mode": "train", "epoch": 3, "iter": 3250, "lr": 0.0002, "memory": 5941, "data_time": 0.04086, "loss_rpn_cls": 0.0384, "loss_rpn_bbox": 0.05792, "loss_cls": 0.25497, "acc": 91.58203, "loss_bbox": 0.27433, "loss_mask": 0.26905, "loss": 0.89467, "time": 0.21456} -{"mode": "train", "epoch": 3, "iter": 3300, "lr": 0.0002, "memory": 5941, "data_time": 0.0353, "loss_rpn_cls": 0.03583, "loss_rpn_bbox": 0.05584, "loss_cls": 0.25194, "acc": 91.82642, "loss_bbox": 0.26482, "loss_mask": 0.2639, "loss": 0.87233, "time": 0.1978} -{"mode": "train", "epoch": 3, "iter": 3350, "lr": 0.0002, "memory": 5941, "data_time": 0.03475, "loss_rpn_cls": 0.03924, "loss_rpn_bbox": 0.06209, "loss_cls": 0.25899, "acc": 91.41211, "loss_bbox": 0.2836, "loss_mask": 0.27208, "loss": 0.916, "time": 0.2058} -{"mode": "train", "epoch": 3, "iter": 3400, "lr": 0.0002, "memory": 5941, "data_time": 0.03997, "loss_rpn_cls": 0.03522, "loss_rpn_bbox": 0.05941, "loss_cls": 0.25221, "acc": 91.875, "loss_bbox": 0.26635, "loss_mask": 0.26146, "loss": 0.87465, "time": 0.20359} -{"mode": "train", "epoch": 3, "iter": 3450, "lr": 0.0002, "memory": 5941, "data_time": 0.02888, "loss_rpn_cls": 0.04076, "loss_rpn_bbox": 0.05904, "loss_cls": 0.25772, "acc": 91.64746, "loss_bbox": 0.27119, "loss_mask": 0.26499, "loss": 0.8937, "time": 0.20879} -{"mode": "train", "epoch": 3, "iter": 3500, "lr": 0.0002, "memory": 5941, "data_time": 0.03444, "loss_rpn_cls": 0.0404, "loss_rpn_bbox": 0.06165, "loss_cls": 0.25456, "acc": 91.68481, "loss_bbox": 0.27895, "loss_mask": 0.27524, "loss": 0.91079, "time": 0.20688} -{"mode": "train", "epoch": 3, "iter": 3550, "lr": 0.0002, "memory": 5941, "data_time": 0.03263, "loss_rpn_cls": 0.03954, "loss_rpn_bbox": 0.05791, "loss_cls": 0.25765, "acc": 91.73242, 
"loss_bbox": 0.27105, "loss_mask": 0.26241, "loss": 0.88856, "time": 0.21529} -{"mode": "train", "epoch": 3, "iter": 3600, "lr": 0.0002, "memory": 5941, "data_time": 0.04451, "loss_rpn_cls": 0.03864, "loss_rpn_bbox": 0.05837, "loss_cls": 0.2624, "acc": 91.4082, "loss_bbox": 0.28132, "loss_mask": 0.26947, "loss": 0.9102, "time": 0.22119} -{"mode": "train", "epoch": 3, "iter": 3650, "lr": 0.0002, "memory": 5941, "data_time": 0.0401, "loss_rpn_cls": 0.03703, "loss_rpn_bbox": 0.05562, "loss_cls": 0.25734, "acc": 91.74268, "loss_bbox": 0.26869, "loss_mask": 0.26506, "loss": 0.88374, "time": 0.20368} -{"mode": "train", "epoch": 3, "iter": 3700, "lr": 0.0002, "memory": 5941, "data_time": 0.03548, "loss_rpn_cls": 0.04144, "loss_rpn_bbox": 0.05629, "loss_cls": 0.2549, "acc": 91.86694, "loss_bbox": 0.26217, "loss_mask": 0.2667, "loss": 0.88149, "time": 0.20796} -{"mode": "train", "epoch": 3, "iter": 3750, "lr": 0.0002, "memory": 5941, "data_time": 0.04165, "loss_rpn_cls": 0.04154, "loss_rpn_bbox": 0.06078, "loss_cls": 0.26095, "acc": 91.49365, "loss_bbox": 0.27854, "loss_mask": 0.2673, "loss": 0.90912, "time": 0.21073} -{"mode": "train", "epoch": 3, "iter": 3800, "lr": 0.0002, "memory": 5941, "data_time": 0.04172, "loss_rpn_cls": 0.03874, "loss_rpn_bbox": 0.0599, "loss_cls": 0.24684, "acc": 91.94116, "loss_bbox": 0.26489, "loss_mask": 0.26425, "loss": 0.87461, "time": 0.20799} -{"mode": "train", "epoch": 3, "iter": 3850, "lr": 0.0002, "memory": 5941, "data_time": 0.04933, "loss_rpn_cls": 0.04017, "loss_rpn_bbox": 0.05937, "loss_cls": 0.25544, "acc": 91.74927, "loss_bbox": 0.26985, "loss_mask": 0.26769, "loss": 0.89252, "time": 0.22007} -{"mode": "train", "epoch": 3, "iter": 3900, "lr": 0.0002, "memory": 5941, "data_time": 0.0456, "loss_rpn_cls": 0.0405, "loss_rpn_bbox": 0.06439, "loss_cls": 0.25765, "acc": 91.63135, "loss_bbox": 0.27659, "loss_mask": 0.27475, "loss": 0.91388, "time": 0.21202} -{"mode": "train", "epoch": 3, "iter": 3950, "lr": 0.0002, "memory": 5941, "data_time": 0.04244, "loss_rpn_cls": 0.03672, "loss_rpn_bbox": 0.05592, "loss_cls": 0.24208, "acc": 92.10596, "loss_bbox": 0.26317, "loss_mask": 0.26221, "loss": 0.8601, "time": 0.2108} -{"mode": "train", "epoch": 3, "iter": 4000, "lr": 0.0002, "memory": 5941, "data_time": 0.03848, "loss_rpn_cls": 0.0383, "loss_rpn_bbox": 0.05983, "loss_cls": 0.26234, "acc": 91.53931, "loss_bbox": 0.27812, "loss_mask": 0.26949, "loss": 0.90807, "time": 0.21882} -{"mode": "train", "epoch": 3, "iter": 4050, "lr": 0.0002, "memory": 5941, "data_time": 0.03645, "loss_rpn_cls": 0.03926, "loss_rpn_bbox": 0.05908, "loss_cls": 0.24986, "acc": 91.96094, "loss_bbox": 0.26234, "loss_mask": 0.26445, "loss": 0.87499, "time": 0.20927} -{"mode": "train", "epoch": 3, "iter": 4100, "lr": 0.0002, "memory": 5941, "data_time": 0.03676, "loss_rpn_cls": 0.04149, "loss_rpn_bbox": 0.0608, "loss_cls": 0.26318, "acc": 91.66113, "loss_bbox": 0.27447, "loss_mask": 0.27051, "loss": 0.91045, "time": 0.21355} -{"mode": "train", "epoch": 3, "iter": 4150, "lr": 0.0002, "memory": 5941, "data_time": 0.02976, "loss_rpn_cls": 0.03902, "loss_rpn_bbox": 0.06214, "loss_cls": 0.25752, "acc": 91.74902, "loss_bbox": 0.26952, "loss_mask": 0.2657, "loss": 0.89389, "time": 0.20285} -{"mode": "train", "epoch": 3, "iter": 4200, "lr": 0.0002, "memory": 5941, "data_time": 0.03937, "loss_rpn_cls": 0.03659, "loss_rpn_bbox": 0.05773, "loss_cls": 0.25178, "acc": 91.7876, "loss_bbox": 0.26798, "loss_mask": 0.26522, "loss": 0.8793, "time": 0.20501} -{"mode": "train", "epoch": 3, "iter": 4250, "lr": 0.0002, 
"memory": 5941, "data_time": 0.04041, "loss_rpn_cls": 0.03831, "loss_rpn_bbox": 0.05804, "loss_cls": 0.25771, "acc": 91.48438, "loss_bbox": 0.28238, "loss_mask": 0.26675, "loss": 0.90319, "time": 0.2091} -{"mode": "train", "epoch": 3, "iter": 4300, "lr": 0.0002, "memory": 5941, "data_time": 0.03547, "loss_rpn_cls": 0.03657, "loss_rpn_bbox": 0.05719, "loss_cls": 0.25449, "acc": 91.71387, "loss_bbox": 0.27356, "loss_mask": 0.26323, "loss": 0.88505, "time": 0.21541} -{"mode": "train", "epoch": 3, "iter": 4350, "lr": 0.0002, "memory": 5941, "data_time": 0.03484, "loss_rpn_cls": 0.03709, "loss_rpn_bbox": 0.05637, "loss_cls": 0.24947, "acc": 92.03247, "loss_bbox": 0.26462, "loss_mask": 0.2613, "loss": 0.86885, "time": 0.20399} -{"mode": "train", "epoch": 3, "iter": 4400, "lr": 0.0002, "memory": 5941, "data_time": 0.03438, "loss_rpn_cls": 0.03842, "loss_rpn_bbox": 0.06038, "loss_cls": 0.25049, "acc": 91.78687, "loss_bbox": 0.26675, "loss_mask": 0.26345, "loss": 0.87949, "time": 0.19954} -{"mode": "train", "epoch": 3, "iter": 4450, "lr": 0.0002, "memory": 5941, "data_time": 0.03905, "loss_rpn_cls": 0.03569, "loss_rpn_bbox": 0.0546, "loss_cls": 0.24561, "acc": 91.94751, "loss_bbox": 0.26741, "loss_mask": 0.26548, "loss": 0.86879, "time": 0.19975} -{"mode": "train", "epoch": 3, "iter": 4500, "lr": 0.0002, "memory": 5941, "data_time": 0.03808, "loss_rpn_cls": 0.03514, "loss_rpn_bbox": 0.05425, "loss_cls": 0.25038, "acc": 91.9978, "loss_bbox": 0.2631, "loss_mask": 0.26582, "loss": 0.86869, "time": 0.19805} -{"mode": "train", "epoch": 3, "iter": 4550, "lr": 0.0002, "memory": 5941, "data_time": 0.03827, "loss_rpn_cls": 0.03481, "loss_rpn_bbox": 0.05367, "loss_cls": 0.25232, "acc": 91.84814, "loss_bbox": 0.2726, "loss_mask": 0.26469, "loss": 0.87809, "time": 0.20346} -{"mode": "train", "epoch": 3, "iter": 4600, "lr": 0.0002, "memory": 5941, "data_time": 0.04147, "loss_rpn_cls": 0.03701, "loss_rpn_bbox": 0.05933, "loss_cls": 0.24836, "acc": 91.75952, "loss_bbox": 0.27371, "loss_mask": 0.26944, "loss": 0.88785, "time": 0.20627} -{"mode": "train", "epoch": 3, "iter": 4650, "lr": 0.0002, "memory": 5941, "data_time": 0.03916, "loss_rpn_cls": 0.03917, "loss_rpn_bbox": 0.05679, "loss_cls": 0.25879, "acc": 91.72729, "loss_bbox": 0.26671, "loss_mask": 0.26838, "loss": 0.88985, "time": 0.20207} -{"mode": "train", "epoch": 3, "iter": 4700, "lr": 0.0002, "memory": 5941, "data_time": 0.03947, "loss_rpn_cls": 0.03774, "loss_rpn_bbox": 0.0595, "loss_cls": 0.24879, "acc": 91.80713, "loss_bbox": 0.26945, "loss_mask": 0.26582, "loss": 0.88129, "time": 0.21127} -{"mode": "train", "epoch": 3, "iter": 4750, "lr": 0.0002, "memory": 5941, "data_time": 0.04844, "loss_rpn_cls": 0.0404, "loss_rpn_bbox": 0.05884, "loss_cls": 0.25548, "acc": 91.70361, "loss_bbox": 0.27199, "loss_mask": 0.26387, "loss": 0.89058, "time": 0.21227} -{"mode": "train", "epoch": 3, "iter": 4800, "lr": 0.0002, "memory": 5941, "data_time": 0.04455, "loss_rpn_cls": 0.03649, "loss_rpn_bbox": 0.05509, "loss_cls": 0.25122, "acc": 91.75879, "loss_bbox": 0.26875, "loss_mask": 0.26818, "loss": 0.87973, "time": 0.20544} -{"mode": "train", "epoch": 3, "iter": 4850, "lr": 0.0002, "memory": 5941, "data_time": 0.03413, "loss_rpn_cls": 0.03592, "loss_rpn_bbox": 0.05674, "loss_cls": 0.24283, "acc": 92.1438, "loss_bbox": 0.25895, "loss_mask": 0.26025, "loss": 0.85468, "time": 0.20072} -{"mode": "train", "epoch": 3, "iter": 4900, "lr": 0.0002, "memory": 5941, "data_time": 0.04846, "loss_rpn_cls": 0.03728, "loss_rpn_bbox": 0.05835, "loss_cls": 0.25857, "acc": 91.64868, 
"loss_bbox": 0.26613, "loss_mask": 0.26343, "loss": 0.88376, "time": 0.21347} -{"mode": "train", "epoch": 3, "iter": 4950, "lr": 0.0002, "memory": 5941, "data_time": 0.0465, "loss_rpn_cls": 0.03749, "loss_rpn_bbox": 0.05718, "loss_cls": 0.25868, "acc": 91.67065, "loss_bbox": 0.27393, "loss_mask": 0.27045, "loss": 0.89774, "time": 0.20921} -{"mode": "train", "epoch": 3, "iter": 5000, "lr": 0.0002, "memory": 5941, "data_time": 0.03467, "loss_rpn_cls": 0.03833, "loss_rpn_bbox": 0.05843, "loss_cls": 0.2582, "acc": 91.82178, "loss_bbox": 0.27013, "loss_mask": 0.2673, "loss": 0.89239, "time": 0.20051} -{"mode": "train", "epoch": 3, "iter": 5050, "lr": 0.0002, "memory": 5941, "data_time": 0.04166, "loss_rpn_cls": 0.03947, "loss_rpn_bbox": 0.05817, "loss_cls": 0.25986, "acc": 91.73975, "loss_bbox": 0.26907, "loss_mask": 0.26467, "loss": 0.89125, "time": 0.21035} -{"mode": "train", "epoch": 3, "iter": 5100, "lr": 0.0002, "memory": 5941, "data_time": 0.03375, "loss_rpn_cls": 0.03781, "loss_rpn_bbox": 0.05487, "loss_cls": 0.25854, "acc": 91.67432, "loss_bbox": 0.2712, "loss_mask": 0.27132, "loss": 0.89373, "time": 0.19831} -{"mode": "train", "epoch": 3, "iter": 5150, "lr": 0.0002, "memory": 5941, "data_time": 0.032, "loss_rpn_cls": 0.0368, "loss_rpn_bbox": 0.0572, "loss_cls": 0.252, "acc": 91.78125, "loss_bbox": 0.26884, "loss_mask": 0.26328, "loss": 0.87813, "time": 0.20632} -{"mode": "train", "epoch": 3, "iter": 5200, "lr": 0.0002, "memory": 5941, "data_time": 0.03674, "loss_rpn_cls": 0.03534, "loss_rpn_bbox": 0.05603, "loss_cls": 0.24536, "acc": 92.00513, "loss_bbox": 0.2627, "loss_mask": 0.26364, "loss": 0.86307, "time": 0.20946} -{"mode": "train", "epoch": 3, "iter": 5250, "lr": 0.0002, "memory": 5941, "data_time": 0.03663, "loss_rpn_cls": 0.0363, "loss_rpn_bbox": 0.05582, "loss_cls": 0.252, "acc": 91.82593, "loss_bbox": 0.26293, "loss_mask": 0.26994, "loss": 0.87699, "time": 0.19586} -{"mode": "train", "epoch": 3, "iter": 5300, "lr": 0.0002, "memory": 5941, "data_time": 0.04167, "loss_rpn_cls": 0.03938, "loss_rpn_bbox": 0.05962, "loss_cls": 0.25834, "acc": 91.67773, "loss_bbox": 0.27001, "loss_mask": 0.26506, "loss": 0.89241, "time": 0.21159} -{"mode": "train", "epoch": 3, "iter": 5350, "lr": 0.0002, "memory": 5941, "data_time": 0.04206, "loss_rpn_cls": 0.04063, "loss_rpn_bbox": 0.05998, "loss_cls": 0.27052, "acc": 91.32666, "loss_bbox": 0.27938, "loss_mask": 0.27258, "loss": 0.92309, "time": 0.213} -{"mode": "train", "epoch": 3, "iter": 5400, "lr": 0.0002, "memory": 5941, "data_time": 0.03684, "loss_rpn_cls": 0.03832, "loss_rpn_bbox": 0.05535, "loss_cls": 0.24495, "acc": 92.00195, "loss_bbox": 0.26877, "loss_mask": 0.26391, "loss": 0.8713, "time": 0.21031} -{"mode": "train", "epoch": 3, "iter": 5450, "lr": 0.0002, "memory": 5941, "data_time": 0.0385, "loss_rpn_cls": 0.03354, "loss_rpn_bbox": 0.0545, "loss_cls": 0.23571, "acc": 92.44287, "loss_bbox": 0.25237, "loss_mask": 0.25993, "loss": 0.83605, "time": 0.1994} -{"mode": "train", "epoch": 3, "iter": 5500, "lr": 0.0002, "memory": 5941, "data_time": 0.03503, "loss_rpn_cls": 0.03513, "loss_rpn_bbox": 0.05612, "loss_cls": 0.24715, "acc": 92.11182, "loss_bbox": 0.25979, "loss_mask": 0.26735, "loss": 0.86555, "time": 0.19715} -{"mode": "train", "epoch": 3, "iter": 5550, "lr": 0.0002, "memory": 5941, "data_time": 0.03934, "loss_rpn_cls": 0.03991, "loss_rpn_bbox": 0.05568, "loss_cls": 0.2454, "acc": 91.91626, "loss_bbox": 0.27155, "loss_mask": 0.26629, "loss": 0.87883, "time": 0.20589} -{"mode": "train", "epoch": 3, "iter": 5600, "lr": 0.0002, 
"memory": 5941, "data_time": 0.0448, "loss_rpn_cls": 0.04066, "loss_rpn_bbox": 0.05861, "loss_cls": 0.25767, "acc": 91.71069, "loss_bbox": 0.27259, "loss_mask": 0.26515, "loss": 0.89468, "time": 0.21021} -{"mode": "train", "epoch": 3, "iter": 5650, "lr": 0.0002, "memory": 5941, "data_time": 0.04549, "loss_rpn_cls": 0.03951, "loss_rpn_bbox": 0.05883, "loss_cls": 0.2597, "acc": 91.67358, "loss_bbox": 0.26701, "loss_mask": 0.25995, "loss": 0.885, "time": 0.21046} -{"mode": "train", "epoch": 3, "iter": 5700, "lr": 0.0002, "memory": 5941, "data_time": 0.03303, "loss_rpn_cls": 0.03812, "loss_rpn_bbox": 0.0554, "loss_cls": 0.24623, "acc": 92.03979, "loss_bbox": 0.26165, "loss_mask": 0.2622, "loss": 0.8636, "time": 0.20645} -{"mode": "train", "epoch": 3, "iter": 5750, "lr": 0.0002, "memory": 5941, "data_time": 0.03726, "loss_rpn_cls": 0.03751, "loss_rpn_bbox": 0.05706, "loss_cls": 0.25028, "acc": 91.79688, "loss_bbox": 0.26836, "loss_mask": 0.26594, "loss": 0.87916, "time": 0.20468} -{"mode": "train", "epoch": 3, "iter": 5800, "lr": 0.0002, "memory": 5941, "data_time": 0.04343, "loss_rpn_cls": 0.04363, "loss_rpn_bbox": 0.05962, "loss_cls": 0.26968, "acc": 91.35254, "loss_bbox": 0.28032, "loss_mask": 0.26319, "loss": 0.91643, "time": 0.21354} -{"mode": "train", "epoch": 3, "iter": 5850, "lr": 0.0002, "memory": 5941, "data_time": 0.04613, "loss_rpn_cls": 0.04222, "loss_rpn_bbox": 0.05976, "loss_cls": 0.25133, "acc": 91.74902, "loss_bbox": 0.27146, "loss_mask": 0.2622, "loss": 0.88697, "time": 0.20589} -{"mode": "train", "epoch": 3, "iter": 5900, "lr": 0.0002, "memory": 5941, "data_time": 0.03714, "loss_rpn_cls": 0.0368, "loss_rpn_bbox": 0.05721, "loss_cls": 0.25841, "acc": 91.53418, "loss_bbox": 0.27598, "loss_mask": 0.26521, "loss": 0.89362, "time": 0.20332} -{"mode": "train", "epoch": 3, "iter": 5950, "lr": 0.0002, "memory": 5941, "data_time": 0.0453, "loss_rpn_cls": 0.03555, "loss_rpn_bbox": 0.05745, "loss_cls": 0.24772, "acc": 92.03857, "loss_bbox": 0.25987, "loss_mask": 0.26052, "loss": 0.86112, "time": 0.21339} -{"mode": "train", "epoch": 3, "iter": 6000, "lr": 0.0002, "memory": 5941, "data_time": 0.03586, "loss_rpn_cls": 0.03378, "loss_rpn_bbox": 0.05431, "loss_cls": 0.23967, "acc": 92.36621, "loss_bbox": 0.25279, "loss_mask": 0.2581, "loss": 0.83864, "time": 0.19251} -{"mode": "train", "epoch": 3, "iter": 6050, "lr": 0.0002, "memory": 5941, "data_time": 0.04153, "loss_rpn_cls": 0.03715, "loss_rpn_bbox": 0.05728, "loss_cls": 0.25029, "acc": 91.8125, "loss_bbox": 0.26796, "loss_mask": 0.26506, "loss": 0.87775, "time": 0.21673} -{"mode": "train", "epoch": 3, "iter": 6100, "lr": 0.0002, "memory": 5941, "data_time": 0.03853, "loss_rpn_cls": 0.03679, "loss_rpn_bbox": 0.05929, "loss_cls": 0.25324, "acc": 91.69653, "loss_bbox": 0.27047, "loss_mask": 0.26085, "loss": 0.88065, "time": 0.20007} -{"mode": "train", "epoch": 3, "iter": 6150, "lr": 0.0002, "memory": 5941, "data_time": 0.03737, "loss_rpn_cls": 0.03722, "loss_rpn_bbox": 0.0575, "loss_cls": 0.25006, "acc": 92.0166, "loss_bbox": 0.26419, "loss_mask": 0.26281, "loss": 0.87178, "time": 0.20846} -{"mode": "train", "epoch": 3, "iter": 6200, "lr": 0.0002, "memory": 5941, "data_time": 0.03774, "loss_rpn_cls": 0.03738, "loss_rpn_bbox": 0.0564, "loss_cls": 0.25271, "acc": 91.81738, "loss_bbox": 0.27186, "loss_mask": 0.25992, "loss": 0.87827, "time": 0.20206} -{"mode": "train", "epoch": 3, "iter": 6250, "lr": 0.0002, "memory": 5941, "data_time": 0.04747, "loss_rpn_cls": 0.04087, "loss_rpn_bbox": 0.06164, "loss_cls": 0.2632, "acc": 91.51489, 
"loss_bbox": 0.28008, "loss_mask": 0.26604, "loss": 0.91182, "time": 0.22669} -{"mode": "train", "epoch": 3, "iter": 6300, "lr": 0.0002, "memory": 5941, "data_time": 0.04072, "loss_rpn_cls": 0.03877, "loss_rpn_bbox": 0.0599, "loss_cls": 0.24804, "acc": 91.79688, "loss_bbox": 0.26832, "loss_mask": 0.26918, "loss": 0.8842, "time": 0.20906} -{"mode": "train", "epoch": 3, "iter": 6350, "lr": 0.0002, "memory": 5941, "data_time": 0.03401, "loss_rpn_cls": 0.03578, "loss_rpn_bbox": 0.05324, "loss_cls": 0.24075, "acc": 92.28589, "loss_bbox": 0.25525, "loss_mask": 0.26301, "loss": 0.84803, "time": 0.19633} -{"mode": "train", "epoch": 3, "iter": 6400, "lr": 0.0002, "memory": 5941, "data_time": 0.04555, "loss_rpn_cls": 0.03581, "loss_rpn_bbox": 0.05815, "loss_cls": 0.24394, "acc": 92.02417, "loss_bbox": 0.26859, "loss_mask": 0.26042, "loss": 0.8669, "time": 0.21081} -{"mode": "train", "epoch": 3, "iter": 6450, "lr": 0.0002, "memory": 5941, "data_time": 0.03624, "loss_rpn_cls": 0.03582, "loss_rpn_bbox": 0.05471, "loss_cls": 0.25574, "acc": 91.79053, "loss_bbox": 0.26629, "loss_mask": 0.2634, "loss": 0.87595, "time": 0.20126} -{"mode": "train", "epoch": 3, "iter": 6500, "lr": 0.0002, "memory": 5941, "data_time": 0.04147, "loss_rpn_cls": 0.03571, "loss_rpn_bbox": 0.05612, "loss_cls": 0.26297, "acc": 91.58643, "loss_bbox": 0.26456, "loss_mask": 0.25778, "loss": 0.87715, "time": 0.21316} -{"mode": "train", "epoch": 3, "iter": 6550, "lr": 0.0002, "memory": 5941, "data_time": 0.03962, "loss_rpn_cls": 0.03577, "loss_rpn_bbox": 0.0553, "loss_cls": 0.25724, "acc": 91.76343, "loss_bbox": 0.26675, "loss_mask": 0.26444, "loss": 0.87951, "time": 0.20761} -{"mode": "train", "epoch": 3, "iter": 6600, "lr": 0.0002, "memory": 5941, "data_time": 0.03419, "loss_rpn_cls": 0.03862, "loss_rpn_bbox": 0.05703, "loss_cls": 0.2524, "acc": 91.93481, "loss_bbox": 0.26372, "loss_mask": 0.26548, "loss": 0.87724, "time": 0.19395} -{"mode": "train", "epoch": 3, "iter": 6650, "lr": 0.0002, "memory": 5941, "data_time": 0.04843, "loss_rpn_cls": 0.03908, "loss_rpn_bbox": 0.06053, "loss_cls": 0.24348, "acc": 92.00732, "loss_bbox": 0.26325, "loss_mask": 0.26415, "loss": 0.8705, "time": 0.21549} -{"mode": "train", "epoch": 3, "iter": 6700, "lr": 0.0002, "memory": 5941, "data_time": 0.0479, "loss_rpn_cls": 0.03663, "loss_rpn_bbox": 0.05948, "loss_cls": 0.2526, "acc": 91.71045, "loss_bbox": 0.27325, "loss_mask": 0.25751, "loss": 0.87948, "time": 0.20492} -{"mode": "train", "epoch": 3, "iter": 6750, "lr": 0.0002, "memory": 5941, "data_time": 0.03725, "loss_rpn_cls": 0.03551, "loss_rpn_bbox": 0.05564, "loss_cls": 0.25522, "acc": 91.61475, "loss_bbox": 0.27687, "loss_mask": 0.26638, "loss": 0.88962, "time": 0.20851} -{"mode": "train", "epoch": 3, "iter": 6800, "lr": 0.0002, "memory": 5941, "data_time": 0.04574, "loss_rpn_cls": 0.04164, "loss_rpn_bbox": 0.06046, "loss_cls": 0.26591, "acc": 91.4436, "loss_bbox": 0.28033, "loss_mask": 0.26879, "loss": 0.91714, "time": 0.21643} -{"mode": "train", "epoch": 3, "iter": 6850, "lr": 0.0002, "memory": 5941, "data_time": 0.03889, "loss_rpn_cls": 0.03894, "loss_rpn_bbox": 0.05688, "loss_cls": 0.24602, "acc": 92.16455, "loss_bbox": 0.25648, "loss_mask": 0.25833, "loss": 0.85665, "time": 0.20056} -{"mode": "train", "epoch": 3, "iter": 6900, "lr": 0.0002, "memory": 5941, "data_time": 0.04805, "loss_rpn_cls": 0.03967, "loss_rpn_bbox": 0.05794, "loss_cls": 0.25558, "acc": 91.71289, "loss_bbox": 0.26788, "loss_mask": 0.26503, "loss": 0.8861, "time": 0.20823} -{"mode": "train", "epoch": 3, "iter": 6950, "lr": 
0.0002, "memory": 5941, "data_time": 0.0441, "loss_rpn_cls": 0.03478, "loss_rpn_bbox": 0.05626, "loss_cls": 0.25572, "acc": 91.68091, "loss_bbox": 0.27037, "loss_mask": 0.26389, "loss": 0.88103, "time": 0.21331} -{"mode": "train", "epoch": 3, "iter": 7000, "lr": 0.0002, "memory": 5941, "data_time": 0.0452, "loss_rpn_cls": 0.038, "loss_rpn_bbox": 0.05609, "loss_cls": 0.25874, "acc": 91.70654, "loss_bbox": 0.27365, "loss_mask": 0.26593, "loss": 0.89241, "time": 0.20094} -{"mode": "train", "epoch": 3, "iter": 7050, "lr": 0.0002, "memory": 5941, "data_time": 0.04237, "loss_rpn_cls": 0.0366, "loss_rpn_bbox": 0.05823, "loss_cls": 0.2592, "acc": 91.66968, "loss_bbox": 0.27004, "loss_mask": 0.26908, "loss": 0.89317, "time": 0.21536} -{"mode": "train", "epoch": 3, "iter": 7100, "lr": 0.0002, "memory": 5941, "data_time": 0.04281, "loss_rpn_cls": 0.04052, "loss_rpn_bbox": 0.05802, "loss_cls": 0.25395, "acc": 91.80664, "loss_bbox": 0.26536, "loss_mask": 0.26421, "loss": 0.88208, "time": 0.20237} -{"mode": "train", "epoch": 3, "iter": 7150, "lr": 0.0002, "memory": 5941, "data_time": 0.03675, "loss_rpn_cls": 0.04074, "loss_rpn_bbox": 0.05937, "loss_cls": 0.24152, "acc": 92.08398, "loss_bbox": 0.26428, "loss_mask": 0.26526, "loss": 0.87117, "time": 0.21288} -{"mode": "train", "epoch": 3, "iter": 7200, "lr": 0.0002, "memory": 5941, "data_time": 0.03419, "loss_rpn_cls": 0.03956, "loss_rpn_bbox": 0.05868, "loss_cls": 0.25206, "acc": 91.81689, "loss_bbox": 0.2689, "loss_mask": 0.26188, "loss": 0.88108, "time": 0.20413} -{"mode": "train", "epoch": 3, "iter": 7250, "lr": 0.0002, "memory": 5941, "data_time": 0.03304, "loss_rpn_cls": 0.03692, "loss_rpn_bbox": 0.05673, "loss_cls": 0.2431, "acc": 92.11157, "loss_bbox": 0.26158, "loss_mask": 0.25985, "loss": 0.85818, "time": 0.20581} -{"mode": "train", "epoch": 3, "iter": 7300, "lr": 0.0002, "memory": 5941, "data_time": 0.03517, "loss_rpn_cls": 0.03598, "loss_rpn_bbox": 0.05408, "loss_cls": 0.24509, "acc": 91.96729, "loss_bbox": 0.26482, "loss_mask": 0.26071, "loss": 0.86068, "time": 0.20971} -{"mode": "val", "epoch": 3, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3007, "bbox_mAP_50": 0.5193, "bbox_mAP_75": 0.3144, "bbox_mAP_s": 0.1697, "bbox_mAP_m": 0.3385, "bbox_mAP_l": 0.3871, "bbox_mAP_copypaste": "0.3007 0.5193 0.3144 0.1697 0.3385 0.3871", "segm_mAP": 0.3006, "segm_mAP_50": 0.4957, "segm_mAP_75": 0.3194, "segm_mAP_s": 0.1334, "segm_mAP_m": 0.3317, "segm_mAP_l": 0.4363, "segm_mAP_copypaste": "0.3006 0.4957 0.3194 0.1334 0.3317 0.4363"} -{"mode": "train", "epoch": 4, "iter": 50, "lr": 0.0002, "memory": 5941, "data_time": 0.10817, "loss_rpn_cls": 0.03581, "loss_rpn_bbox": 0.0596, "loss_cls": 0.24832, "acc": 91.53442, "loss_bbox": 0.27432, "loss_mask": 0.25678, "loss": 0.87482, "time": 0.3051} -{"mode": "train", "epoch": 4, "iter": 100, "lr": 0.0002, "memory": 5941, "data_time": 0.03264, "loss_rpn_cls": 0.03352, "loss_rpn_bbox": 0.05438, "loss_cls": 0.24253, "acc": 91.79346, "loss_bbox": 0.2732, "loss_mask": 0.26456, "loss": 0.8682, "time": 0.22124} -{"mode": "train", "epoch": 4, "iter": 150, "lr": 0.0002, "memory": 5941, "data_time": 0.03872, "loss_rpn_cls": 0.0322, "loss_rpn_bbox": 0.05619, "loss_cls": 0.24372, "acc": 92.03564, "loss_bbox": 0.25844, "loss_mask": 0.25819, "loss": 0.84874, "time": 0.21976} -{"mode": "train", "epoch": 4, "iter": 200, "lr": 0.0002, "memory": 5941, "data_time": 0.04222, "loss_rpn_cls": 0.03502, "loss_rpn_bbox": 0.05718, "loss_cls": 0.25181, "acc": 91.62402, "loss_bbox": 0.2748, "loss_mask": 0.26578, "loss": 0.88459, "time": 0.22935} 
-{"mode": "train", "epoch": 4, "iter": 250, "lr": 0.0002, "memory": 5941, "data_time": 0.03658, "loss_rpn_cls": 0.03447, "loss_rpn_bbox": 0.05489, "loss_cls": 0.24114, "acc": 91.88843, "loss_bbox": 0.26722, "loss_mask": 0.25485, "loss": 0.85257, "time": 0.22165} -{"mode": "train", "epoch": 4, "iter": 300, "lr": 0.0002, "memory": 5941, "data_time": 0.0385, "loss_rpn_cls": 0.03483, "loss_rpn_bbox": 0.0562, "loss_cls": 0.23798, "acc": 92.03882, "loss_bbox": 0.26505, "loss_mask": 0.25363, "loss": 0.84769, "time": 0.2195} -{"mode": "train", "epoch": 4, "iter": 350, "lr": 0.0002, "memory": 5941, "data_time": 0.03826, "loss_rpn_cls": 0.03468, "loss_rpn_bbox": 0.05381, "loss_cls": 0.24155, "acc": 91.99854, "loss_bbox": 0.26832, "loss_mask": 0.25887, "loss": 0.85723, "time": 0.22207} -{"mode": "train", "epoch": 4, "iter": 400, "lr": 0.0002, "memory": 5941, "data_time": 0.04124, "loss_rpn_cls": 0.03539, "loss_rpn_bbox": 0.05662, "loss_cls": 0.26209, "acc": 91.57373, "loss_bbox": 0.27122, "loss_mask": 0.26201, "loss": 0.88733, "time": 0.22843} -{"mode": "train", "epoch": 4, "iter": 450, "lr": 0.0002, "memory": 5941, "data_time": 0.04428, "loss_rpn_cls": 0.0367, "loss_rpn_bbox": 0.05881, "loss_cls": 0.24663, "acc": 91.65259, "loss_bbox": 0.27658, "loss_mask": 0.26602, "loss": 0.88475, "time": 0.22871} -{"mode": "train", "epoch": 4, "iter": 500, "lr": 0.0002, "memory": 5941, "data_time": 0.04055, "loss_rpn_cls": 0.03561, "loss_rpn_bbox": 0.05555, "loss_cls": 0.25028, "acc": 91.70288, "loss_bbox": 0.27209, "loss_mask": 0.25663, "loss": 0.87016, "time": 0.23392} -{"mode": "train", "epoch": 4, "iter": 550, "lr": 0.0002, "memory": 5941, "data_time": 0.03824, "loss_rpn_cls": 0.03505, "loss_rpn_bbox": 0.05724, "loss_cls": 0.2393, "acc": 92.05688, "loss_bbox": 0.26297, "loss_mask": 0.26628, "loss": 0.86083, "time": 0.2161} -{"mode": "train", "epoch": 4, "iter": 600, "lr": 0.0002, "memory": 5941, "data_time": 0.04122, "loss_rpn_cls": 0.03479, "loss_rpn_bbox": 0.05626, "loss_cls": 0.24301, "acc": 92.0188, "loss_bbox": 0.26513, "loss_mask": 0.26363, "loss": 0.86282, "time": 0.23016} -{"mode": "train", "epoch": 4, "iter": 650, "lr": 0.0002, "memory": 5941, "data_time": 0.03789, "loss_rpn_cls": 0.03374, "loss_rpn_bbox": 0.05329, "loss_cls": 0.23437, "acc": 92.25391, "loss_bbox": 0.25788, "loss_mask": 0.25593, "loss": 0.83522, "time": 0.21079} -{"mode": "train", "epoch": 4, "iter": 700, "lr": 0.0002, "memory": 5941, "data_time": 0.04197, "loss_rpn_cls": 0.03481, "loss_rpn_bbox": 0.0566, "loss_cls": 0.23458, "acc": 92.14722, "loss_bbox": 0.26259, "loss_mask": 0.25956, "loss": 0.84815, "time": 0.22097} -{"mode": "train", "epoch": 4, "iter": 750, "lr": 0.0002, "memory": 5941, "data_time": 0.03777, "loss_rpn_cls": 0.03237, "loss_rpn_bbox": 0.05472, "loss_cls": 0.23561, "acc": 92.1311, "loss_bbox": 0.25832, "loss_mask": 0.25611, "loss": 0.83713, "time": 0.21913} -{"mode": "train", "epoch": 4, "iter": 800, "lr": 0.0002, "memory": 5941, "data_time": 0.03466, "loss_rpn_cls": 0.0354, "loss_rpn_bbox": 0.05811, "loss_cls": 0.24434, "acc": 91.94897, "loss_bbox": 0.26362, "loss_mask": 0.25932, "loss": 0.86079, "time": 0.22296} -{"mode": "train", "epoch": 4, "iter": 850, "lr": 0.0002, "memory": 5941, "data_time": 0.03518, "loss_rpn_cls": 0.03495, "loss_rpn_bbox": 0.05693, "loss_cls": 0.23845, "acc": 92.08203, "loss_bbox": 0.26279, "loss_mask": 0.25757, "loss": 0.85069, "time": 0.21047} -{"mode": "train", "epoch": 4, "iter": 900, "lr": 0.0002, "memory": 5941, "data_time": 0.04131, "loss_rpn_cls": 0.03437, "loss_rpn_bbox": 
0.05902, "loss_cls": 0.24354, "acc": 91.96338, "loss_bbox": 0.26532, "loss_mask": 0.26273, "loss": 0.86497, "time": 0.21501} -{"mode": "train", "epoch": 4, "iter": 950, "lr": 0.0002, "memory": 5941, "data_time": 0.03673, "loss_rpn_cls": 0.03278, "loss_rpn_bbox": 0.05236, "loss_cls": 0.23202, "acc": 92.34619, "loss_bbox": 0.25001, "loss_mask": 0.26487, "loss": 0.83203, "time": 0.20303} -{"mode": "train", "epoch": 4, "iter": 1000, "lr": 0.0002, "memory": 5941, "data_time": 0.0328, "loss_rpn_cls": 0.03408, "loss_rpn_bbox": 0.05722, "loss_cls": 0.23199, "acc": 92.16089, "loss_bbox": 0.26192, "loss_mask": 0.26041, "loss": 0.84561, "time": 0.21787} -{"mode": "train", "epoch": 4, "iter": 1050, "lr": 0.0002, "memory": 5941, "data_time": 0.04728, "loss_rpn_cls": 0.03385, "loss_rpn_bbox": 0.05728, "loss_cls": 0.24579, "acc": 91.78516, "loss_bbox": 0.27011, "loss_mask": 0.25642, "loss": 0.86344, "time": 0.21977} -{"mode": "train", "epoch": 4, "iter": 1100, "lr": 0.0002, "memory": 5941, "data_time": 0.04034, "loss_rpn_cls": 0.03339, "loss_rpn_bbox": 0.06157, "loss_cls": 0.24572, "acc": 91.82666, "loss_bbox": 0.26628, "loss_mask": 0.26486, "loss": 0.87183, "time": 0.22605} -{"mode": "train", "epoch": 4, "iter": 1150, "lr": 0.0002, "memory": 5941, "data_time": 0.03633, "loss_rpn_cls": 0.03571, "loss_rpn_bbox": 0.05726, "loss_cls": 0.25104, "acc": 91.75464, "loss_bbox": 0.26973, "loss_mask": 0.26494, "loss": 0.87867, "time": 0.21834} -{"mode": "train", "epoch": 4, "iter": 1200, "lr": 0.0002, "memory": 5941, "data_time": 0.03971, "loss_rpn_cls": 0.03803, "loss_rpn_bbox": 0.05864, "loss_cls": 0.23731, "acc": 92.06055, "loss_bbox": 0.26456, "loss_mask": 0.25289, "loss": 0.85143, "time": 0.22461} -{"mode": "train", "epoch": 4, "iter": 1250, "lr": 0.0002, "memory": 5941, "data_time": 0.04097, "loss_rpn_cls": 0.03734, "loss_rpn_bbox": 0.05724, "loss_cls": 0.24085, "acc": 92.05054, "loss_bbox": 0.26658, "loss_mask": 0.25969, "loss": 0.86171, "time": 0.22075} -{"mode": "train", "epoch": 4, "iter": 1300, "lr": 0.0002, "memory": 5941, "data_time": 0.04467, "loss_rpn_cls": 0.03719, "loss_rpn_bbox": 0.0577, "loss_cls": 0.23912, "acc": 92.09131, "loss_bbox": 0.25634, "loss_mask": 0.2626, "loss": 0.85296, "time": 0.20881} -{"mode": "train", "epoch": 4, "iter": 1350, "lr": 0.0002, "memory": 5941, "data_time": 0.04415, "loss_rpn_cls": 0.03847, "loss_rpn_bbox": 0.05977, "loss_cls": 0.24296, "acc": 91.85425, "loss_bbox": 0.26804, "loss_mask": 0.26254, "loss": 0.87177, "time": 0.21693} -{"mode": "train", "epoch": 4, "iter": 1400, "lr": 0.0002, "memory": 5941, "data_time": 0.04512, "loss_rpn_cls": 0.03478, "loss_rpn_bbox": 0.05939, "loss_cls": 0.24488, "acc": 91.75879, "loss_bbox": 0.27308, "loss_mask": 0.25874, "loss": 0.87086, "time": 0.21128} -{"mode": "train", "epoch": 4, "iter": 1450, "lr": 0.0002, "memory": 5941, "data_time": 0.03957, "loss_rpn_cls": 0.03603, "loss_rpn_bbox": 0.05852, "loss_cls": 0.2459, "acc": 91.77319, "loss_bbox": 0.26872, "loss_mask": 0.26422, "loss": 0.87338, "time": 0.21911} -{"mode": "train", "epoch": 4, "iter": 1500, "lr": 0.0002, "memory": 5941, "data_time": 0.0369, "loss_rpn_cls": 0.03815, "loss_rpn_bbox": 0.05886, "loss_cls": 0.24839, "acc": 91.77588, "loss_bbox": 0.26602, "loss_mask": 0.2584, "loss": 0.86982, "time": 0.22082} -{"mode": "train", "epoch": 4, "iter": 1550, "lr": 0.0002, "memory": 5941, "data_time": 0.03766, "loss_rpn_cls": 0.03343, "loss_rpn_bbox": 0.05341, "loss_cls": 0.23702, "acc": 92.37769, "loss_bbox": 0.2524, "loss_mask": 0.25578, "loss": 0.83204, "time": 0.20691} 
-{"mode": "train", "epoch": 4, "iter": 1600, "lr": 0.0002, "memory": 5941, "data_time": 0.03738, "loss_rpn_cls": 0.03462, "loss_rpn_bbox": 0.05612, "loss_cls": 0.24396, "acc": 92.10815, "loss_bbox": 0.25603, "loss_mask": 0.25589, "loss": 0.84662, "time": 0.21234} -{"mode": "train", "epoch": 4, "iter": 1650, "lr": 0.0002, "memory": 5941, "data_time": 0.03608, "loss_rpn_cls": 0.03475, "loss_rpn_bbox": 0.05421, "loss_cls": 0.2409, "acc": 91.98535, "loss_bbox": 0.25978, "loss_mask": 0.25899, "loss": 0.84863, "time": 0.22333} -{"mode": "train", "epoch": 4, "iter": 1700, "lr": 0.0002, "memory": 5941, "data_time": 0.0447, "loss_rpn_cls": 0.0362, "loss_rpn_bbox": 0.05855, "loss_cls": 0.2522, "acc": 91.70898, "loss_bbox": 0.26805, "loss_mask": 0.26192, "loss": 0.87693, "time": 0.21829} -{"mode": "train", "epoch": 4, "iter": 1750, "lr": 0.0002, "memory": 5941, "data_time": 0.04492, "loss_rpn_cls": 0.03521, "loss_rpn_bbox": 0.05658, "loss_cls": 0.24546, "acc": 91.87964, "loss_bbox": 0.26411, "loss_mask": 0.25811, "loss": 0.85948, "time": 0.20593} -{"mode": "train", "epoch": 4, "iter": 1800, "lr": 0.0002, "memory": 5941, "data_time": 0.03662, "loss_rpn_cls": 0.0358, "loss_rpn_bbox": 0.05396, "loss_cls": 0.22884, "acc": 92.36646, "loss_bbox": 0.25413, "loss_mask": 0.25646, "loss": 0.8292, "time": 0.21787} -{"mode": "train", "epoch": 4, "iter": 1850, "lr": 0.0002, "memory": 5941, "data_time": 0.04047, "loss_rpn_cls": 0.03571, "loss_rpn_bbox": 0.05989, "loss_cls": 0.24685, "acc": 91.89282, "loss_bbox": 0.26828, "loss_mask": 0.25907, "loss": 0.86981, "time": 0.21926} -{"mode": "train", "epoch": 4, "iter": 1900, "lr": 0.0002, "memory": 5941, "data_time": 0.03466, "loss_rpn_cls": 0.03329, "loss_rpn_bbox": 0.05597, "loss_cls": 0.24589, "acc": 92.00146, "loss_bbox": 0.25798, "loss_mask": 0.26229, "loss": 0.85542, "time": 0.21131} -{"mode": "train", "epoch": 4, "iter": 1950, "lr": 0.0002, "memory": 5941, "data_time": 0.04044, "loss_rpn_cls": 0.03544, "loss_rpn_bbox": 0.05875, "loss_cls": 0.23496, "acc": 92.29492, "loss_bbox": 0.25513, "loss_mask": 0.26135, "loss": 0.84564, "time": 0.20884} -{"mode": "train", "epoch": 4, "iter": 2000, "lr": 0.0002, "memory": 5941, "data_time": 0.03917, "loss_rpn_cls": 0.03747, "loss_rpn_bbox": 0.0586, "loss_cls": 0.2322, "acc": 92.28613, "loss_bbox": 0.25994, "loss_mask": 0.26656, "loss": 0.85478, "time": 0.21427} -{"mode": "train", "epoch": 4, "iter": 2050, "lr": 0.0002, "memory": 5941, "data_time": 0.03358, "loss_rpn_cls": 0.0351, "loss_rpn_bbox": 0.05559, "loss_cls": 0.23383, "acc": 92.26196, "loss_bbox": 0.25627, "loss_mask": 0.25535, "loss": 0.83614, "time": 0.20692} -{"mode": "train", "epoch": 4, "iter": 2100, "lr": 0.0002, "memory": 5941, "data_time": 0.03328, "loss_rpn_cls": 0.03324, "loss_rpn_bbox": 0.05478, "loss_cls": 0.23752, "acc": 92.2146, "loss_bbox": 0.25779, "loss_mask": 0.26039, "loss": 0.84372, "time": 0.21359} -{"mode": "train", "epoch": 4, "iter": 2150, "lr": 0.0002, "memory": 5941, "data_time": 0.03554, "loss_rpn_cls": 0.03461, "loss_rpn_bbox": 0.0557, "loss_cls": 0.23864, "acc": 91.8811, "loss_bbox": 0.26555, "loss_mask": 0.26104, "loss": 0.85553, "time": 0.21858} -{"mode": "train", "epoch": 4, "iter": 2200, "lr": 0.0002, "memory": 5941, "data_time": 0.0362, "loss_rpn_cls": 0.03689, "loss_rpn_bbox": 0.05877, "loss_cls": 0.24479, "acc": 91.8042, "loss_bbox": 0.26413, "loss_mask": 0.25769, "loss": 0.86227, "time": 0.21835} -{"mode": "train", "epoch": 4, "iter": 2250, "lr": 0.0002, "memory": 5941, "data_time": 0.04107, "loss_rpn_cls": 0.03692, 
"loss_rpn_bbox": 0.05757, "loss_cls": 0.24737, "acc": 91.87915, "loss_bbox": 0.26231, "loss_mask": 0.26383, "loss": 0.86799, "time": 0.22095} -{"mode": "train", "epoch": 4, "iter": 2300, "lr": 0.0002, "memory": 5941, "data_time": 0.045, "loss_rpn_cls": 0.03792, "loss_rpn_bbox": 0.05613, "loss_cls": 0.24629, "acc": 91.79248, "loss_bbox": 0.265, "loss_mask": 0.26168, "loss": 0.86702, "time": 0.21243} -{"mode": "train", "epoch": 4, "iter": 2350, "lr": 0.0002, "memory": 5941, "data_time": 0.04683, "loss_rpn_cls": 0.03472, "loss_rpn_bbox": 0.05442, "loss_cls": 0.23928, "acc": 92.06445, "loss_bbox": 0.26204, "loss_mask": 0.2537, "loss": 0.84415, "time": 0.20944} -{"mode": "train", "epoch": 4, "iter": 2400, "lr": 0.0002, "memory": 5941, "data_time": 0.043, "loss_rpn_cls": 0.03612, "loss_rpn_bbox": 0.05692, "loss_cls": 0.24803, "acc": 91.6792, "loss_bbox": 0.27197, "loss_mask": 0.2628, "loss": 0.87584, "time": 0.21891} -{"mode": "train", "epoch": 4, "iter": 2450, "lr": 0.0002, "memory": 5941, "data_time": 0.04483, "loss_rpn_cls": 0.03342, "loss_rpn_bbox": 0.05405, "loss_cls": 0.23563, "acc": 92.25537, "loss_bbox": 0.25189, "loss_mask": 0.25482, "loss": 0.82981, "time": 0.22139} -{"mode": "train", "epoch": 4, "iter": 2500, "lr": 0.0002, "memory": 5941, "data_time": 0.03976, "loss_rpn_cls": 0.03728, "loss_rpn_bbox": 0.06211, "loss_cls": 0.25609, "acc": 91.58032, "loss_bbox": 0.2814, "loss_mask": 0.26768, "loss": 0.90456, "time": 0.21554} -{"mode": "train", "epoch": 4, "iter": 2550, "lr": 0.0002, "memory": 5941, "data_time": 0.03902, "loss_rpn_cls": 0.03343, "loss_rpn_bbox": 0.05503, "loss_cls": 0.24201, "acc": 91.95337, "loss_bbox": 0.26874, "loss_mask": 0.25905, "loss": 0.85826, "time": 0.21173} -{"mode": "train", "epoch": 4, "iter": 2600, "lr": 0.0002, "memory": 5941, "data_time": 0.03794, "loss_rpn_cls": 0.03415, "loss_rpn_bbox": 0.05494, "loss_cls": 0.23535, "acc": 92.19385, "loss_bbox": 0.2614, "loss_mask": 0.25576, "loss": 0.8416, "time": 0.2147} -{"mode": "train", "epoch": 4, "iter": 2650, "lr": 0.0002, "memory": 5941, "data_time": 0.03899, "loss_rpn_cls": 0.03357, "loss_rpn_bbox": 0.05475, "loss_cls": 0.23791, "acc": 92.02539, "loss_bbox": 0.26281, "loss_mask": 0.25908, "loss": 0.84812, "time": 0.21644} -{"mode": "train", "epoch": 4, "iter": 2700, "lr": 0.0002, "memory": 5941, "data_time": 0.0455, "loss_rpn_cls": 0.0397, "loss_rpn_bbox": 0.05802, "loss_cls": 0.2531, "acc": 91.7915, "loss_bbox": 0.26938, "loss_mask": 0.2639, "loss": 0.8841, "time": 0.23353} -{"mode": "train", "epoch": 4, "iter": 2750, "lr": 0.0002, "memory": 5941, "data_time": 0.04435, "loss_rpn_cls": 0.03569, "loss_rpn_bbox": 0.0595, "loss_cls": 0.25458, "acc": 91.58325, "loss_bbox": 0.27639, "loss_mask": 0.26628, "loss": 0.89244, "time": 0.21948} -{"mode": "train", "epoch": 4, "iter": 2800, "lr": 0.0002, "memory": 5941, "data_time": 0.05063, "loss_rpn_cls": 0.03526, "loss_rpn_bbox": 0.05711, "loss_cls": 0.24447, "acc": 91.98096, "loss_bbox": 0.26356, "loss_mask": 0.261, "loss": 0.8614, "time": 0.21665} -{"mode": "train", "epoch": 4, "iter": 2850, "lr": 0.0002, "memory": 5941, "data_time": 0.03633, "loss_rpn_cls": 0.03343, "loss_rpn_bbox": 0.05464, "loss_cls": 0.23058, "acc": 92.24878, "loss_bbox": 0.2562, "loss_mask": 0.26421, "loss": 0.83906, "time": 0.23681} -{"mode": "train", "epoch": 4, "iter": 2900, "lr": 0.0002, "memory": 5941, "data_time": 0.03233, "loss_rpn_cls": 0.03928, "loss_rpn_bbox": 0.05784, "loss_cls": 0.24135, "acc": 92.13135, "loss_bbox": 0.25977, "loss_mask": 0.25613, "loss": 0.85438, "time": 0.27646} 
-{"mode": "train", "epoch": 4, "iter": 2950, "lr": 0.0002, "memory": 5941, "data_time": 0.04001, "loss_rpn_cls": 0.03317, "loss_rpn_bbox": 0.05337, "loss_cls": 0.24331, "acc": 91.95557, "loss_bbox": 0.26414, "loss_mask": 0.25955, "loss": 0.85355, "time": 0.21155} -{"mode": "train", "epoch": 4, "iter": 3000, "lr": 0.0002, "memory": 5941, "data_time": 0.04395, "loss_rpn_cls": 0.03622, "loss_rpn_bbox": 0.05506, "loss_cls": 0.24661, "acc": 91.98828, "loss_bbox": 0.26017, "loss_mask": 0.25487, "loss": 0.85292, "time": 0.21669} -{"mode": "train", "epoch": 4, "iter": 3050, "lr": 0.0002, "memory": 5941, "data_time": 0.0492, "loss_rpn_cls": 0.03593, "loss_rpn_bbox": 0.05682, "loss_cls": 0.2498, "acc": 91.73755, "loss_bbox": 0.26763, "loss_mask": 0.25898, "loss": 0.86916, "time": 0.21199} -{"mode": "train", "epoch": 4, "iter": 3100, "lr": 0.0002, "memory": 5941, "data_time": 0.04612, "loss_rpn_cls": 0.03708, "loss_rpn_bbox": 0.0566, "loss_cls": 0.24874, "acc": 91.78149, "loss_bbox": 0.26585, "loss_mask": 0.25495, "loss": 0.86324, "time": 0.21985} -{"mode": "train", "epoch": 4, "iter": 3150, "lr": 0.0002, "memory": 5941, "data_time": 0.04154, "loss_rpn_cls": 0.03532, "loss_rpn_bbox": 0.05726, "loss_cls": 0.2352, "acc": 92.26855, "loss_bbox": 0.25723, "loss_mask": 0.25982, "loss": 0.84483, "time": 0.20354} -{"mode": "train", "epoch": 4, "iter": 3200, "lr": 0.0002, "memory": 5941, "data_time": 0.03928, "loss_rpn_cls": 0.0342, "loss_rpn_bbox": 0.05503, "loss_cls": 0.23124, "acc": 92.26416, "loss_bbox": 0.2539, "loss_mask": 0.25453, "loss": 0.8289, "time": 0.20553} -{"mode": "train", "epoch": 4, "iter": 3250, "lr": 0.0002, "memory": 5941, "data_time": 0.04819, "loss_rpn_cls": 0.03737, "loss_rpn_bbox": 0.05773, "loss_cls": 0.24331, "acc": 92.02393, "loss_bbox": 0.26429, "loss_mask": 0.25749, "loss": 0.86019, "time": 0.21862} -{"mode": "train", "epoch": 4, "iter": 3300, "lr": 0.0002, "memory": 5941, "data_time": 0.0386, "loss_rpn_cls": 0.03685, "loss_rpn_bbox": 0.05703, "loss_cls": 0.2508, "acc": 91.74707, "loss_bbox": 0.26887, "loss_mask": 0.2718, "loss": 0.88535, "time": 0.20095} -{"mode": "train", "epoch": 4, "iter": 3350, "lr": 0.0002, "memory": 5941, "data_time": 0.04145, "loss_rpn_cls": 0.03527, "loss_rpn_bbox": 0.05763, "loss_cls": 0.24565, "acc": 91.90161, "loss_bbox": 0.26419, "loss_mask": 0.27103, "loss": 0.87376, "time": 0.2888} -{"mode": "train", "epoch": 4, "iter": 3400, "lr": 0.0002, "memory": 5941, "data_time": 0.04629, "loss_rpn_cls": 0.0317, "loss_rpn_bbox": 0.05476, "loss_cls": 0.24078, "acc": 91.98145, "loss_bbox": 0.26259, "loss_mask": 0.26569, "loss": 0.85552, "time": 0.21118} -{"mode": "train", "epoch": 4, "iter": 3450, "lr": 0.0002, "memory": 5941, "data_time": 0.04359, "loss_rpn_cls": 0.03476, "loss_rpn_bbox": 0.05595, "loss_cls": 0.24042, "acc": 92.21191, "loss_bbox": 0.25408, "loss_mask": 0.25831, "loss": 0.84352, "time": 0.21091} -{"mode": "train", "epoch": 4, "iter": 3500, "lr": 0.0002, "memory": 5941, "data_time": 0.04088, "loss_rpn_cls": 0.03465, "loss_rpn_bbox": 0.05506, "loss_cls": 0.23672, "acc": 92.18994, "loss_bbox": 0.25415, "loss_mask": 0.25868, "loss": 0.83926, "time": 0.25438} -{"mode": "train", "epoch": 4, "iter": 3550, "lr": 0.0002, "memory": 5941, "data_time": 0.0393, "loss_rpn_cls": 0.03877, "loss_rpn_bbox": 0.0554, "loss_cls": 0.24104, "acc": 91.875, "loss_bbox": 0.26962, "loss_mask": 0.26079, "loss": 0.86562, "time": 0.21426} -{"mode": "train", "epoch": 4, "iter": 3600, "lr": 0.0002, "memory": 5941, "data_time": 0.03964, "loss_rpn_cls": 0.03317, "loss_rpn_bbox": 
0.05431, "loss_cls": 0.23529, "acc": 92.18677, "loss_bbox": 0.25475, "loss_mask": 0.25638, "loss": 0.8339, "time": 0.20714} -{"mode": "train", "epoch": 4, "iter": 3650, "lr": 0.0002, "memory": 5941, "data_time": 0.03624, "loss_rpn_cls": 0.03478, "loss_rpn_bbox": 0.0554, "loss_cls": 0.23772, "acc": 92.31226, "loss_bbox": 0.2556, "loss_mask": 0.2601, "loss": 0.84359, "time": 0.20993} -{"mode": "train", "epoch": 4, "iter": 3700, "lr": 0.0002, "memory": 5941, "data_time": 0.04164, "loss_rpn_cls": 0.0376, "loss_rpn_bbox": 0.05843, "loss_cls": 0.24939, "acc": 91.83838, "loss_bbox": 0.26911, "loss_mask": 0.26095, "loss": 0.87549, "time": 0.25866} -{"mode": "train", "epoch": 4, "iter": 3750, "lr": 0.0002, "memory": 5941, "data_time": 0.03412, "loss_rpn_cls": 0.03329, "loss_rpn_bbox": 0.05232, "loss_cls": 0.23546, "acc": 92.29102, "loss_bbox": 0.25334, "loss_mask": 0.2503, "loss": 0.8247, "time": 0.20339} -{"mode": "train", "epoch": 4, "iter": 3800, "lr": 0.0002, "memory": 5941, "data_time": 0.03586, "loss_rpn_cls": 0.03684, "loss_rpn_bbox": 0.05819, "loss_cls": 0.24378, "acc": 91.96191, "loss_bbox": 0.26731, "loss_mask": 0.26308, "loss": 0.86919, "time": 0.21446} -{"mode": "train", "epoch": 4, "iter": 3850, "lr": 0.0002, "memory": 5941, "data_time": 0.0376, "loss_rpn_cls": 0.03379, "loss_rpn_bbox": 0.05369, "loss_cls": 0.23534, "acc": 92.18872, "loss_bbox": 0.25287, "loss_mask": 0.26493, "loss": 0.84062, "time": 0.20586} -{"mode": "train", "epoch": 4, "iter": 3900, "lr": 0.0002, "memory": 5941, "data_time": 0.0361, "loss_rpn_cls": 0.03982, "loss_rpn_bbox": 0.05884, "loss_cls": 0.24577, "acc": 91.98242, "loss_bbox": 0.26529, "loss_mask": 0.26079, "loss": 0.87051, "time": 0.21077} -{"mode": "train", "epoch": 4, "iter": 3950, "lr": 0.0002, "memory": 5941, "data_time": 0.04748, "loss_rpn_cls": 0.0398, "loss_rpn_bbox": 0.05845, "loss_cls": 0.24973, "acc": 91.75, "loss_bbox": 0.26525, "loss_mask": 0.2563, "loss": 0.86952, "time": 0.21334} -{"mode": "train", "epoch": 4, "iter": 4000, "lr": 0.0002, "memory": 5941, "data_time": 0.03976, "loss_rpn_cls": 0.03391, "loss_rpn_bbox": 0.05633, "loss_cls": 0.25057, "acc": 91.83496, "loss_bbox": 0.26291, "loss_mask": 0.25341, "loss": 0.85713, "time": 0.21662} -{"mode": "train", "epoch": 4, "iter": 4050, "lr": 0.0002, "memory": 5941, "data_time": 0.03893, "loss_rpn_cls": 0.03535, "loss_rpn_bbox": 0.05315, "loss_cls": 0.24875, "acc": 91.9082, "loss_bbox": 0.26299, "loss_mask": 0.25469, "loss": 0.85494, "time": 0.2121} -{"mode": "train", "epoch": 4, "iter": 4100, "lr": 0.0002, "memory": 5941, "data_time": 0.04008, "loss_rpn_cls": 0.03776, "loss_rpn_bbox": 0.05668, "loss_cls": 0.25963, "acc": 91.67456, "loss_bbox": 0.27591, "loss_mask": 0.25678, "loss": 0.88677, "time": 0.21161} -{"mode": "train", "epoch": 4, "iter": 4150, "lr": 0.0002, "memory": 5941, "data_time": 0.03676, "loss_rpn_cls": 0.03554, "loss_rpn_bbox": 0.05456, "loss_cls": 0.24277, "acc": 91.99658, "loss_bbox": 0.26367, "loss_mask": 0.25302, "loss": 0.84955, "time": 0.20303} -{"mode": "train", "epoch": 4, "iter": 4200, "lr": 0.0002, "memory": 5941, "data_time": 0.04003, "loss_rpn_cls": 0.0364, "loss_rpn_bbox": 0.05571, "loss_cls": 0.24174, "acc": 92.14331, "loss_bbox": 0.26018, "loss_mask": 0.25612, "loss": 0.85016, "time": 0.20878} -{"mode": "train", "epoch": 4, "iter": 4250, "lr": 0.0002, "memory": 5941, "data_time": 0.04222, "loss_rpn_cls": 0.03768, "loss_rpn_bbox": 0.06004, "loss_cls": 0.24593, "acc": 92.1311, "loss_bbox": 0.25407, "loss_mask": 0.25813, "loss": 0.85585, "time": 0.21724} -{"mode": 
"train", "epoch": 4, "iter": 4300, "lr": 0.0002, "memory": 5941, "data_time": 0.0354, "loss_rpn_cls": 0.03359, "loss_rpn_bbox": 0.05474, "loss_cls": 0.23776, "acc": 92.06787, "loss_bbox": 0.25829, "loss_mask": 0.26471, "loss": 0.84908, "time": 0.19939} -{"mode": "train", "epoch": 4, "iter": 4350, "lr": 0.0002, "memory": 5941, "data_time": 0.03985, "loss_rpn_cls": 0.03888, "loss_rpn_bbox": 0.05748, "loss_cls": 0.23429, "acc": 92.16357, "loss_bbox": 0.25986, "loss_mask": 0.26322, "loss": 0.85373, "time": 0.21112} -{"mode": "train", "epoch": 4, "iter": 4400, "lr": 0.0002, "memory": 5941, "data_time": 0.04331, "loss_rpn_cls": 0.03494, "loss_rpn_bbox": 0.05409, "loss_cls": 0.23676, "acc": 92.12158, "loss_bbox": 0.26773, "loss_mask": 0.25429, "loss": 0.84781, "time": 0.20657} -{"mode": "train", "epoch": 4, "iter": 4450, "lr": 0.0002, "memory": 5941, "data_time": 0.04142, "loss_rpn_cls": 0.037, "loss_rpn_bbox": 0.05772, "loss_cls": 0.23806, "acc": 92.20068, "loss_bbox": 0.25769, "loss_mask": 0.25976, "loss": 0.85023, "time": 0.207} -{"mode": "train", "epoch": 4, "iter": 4500, "lr": 0.0002, "memory": 5941, "data_time": 0.03695, "loss_rpn_cls": 0.03481, "loss_rpn_bbox": 0.05494, "loss_cls": 0.24284, "acc": 91.99683, "loss_bbox": 0.26352, "loss_mask": 0.2549, "loss": 0.85101, "time": 0.20526} -{"mode": "train", "epoch": 4, "iter": 4550, "lr": 0.0002, "memory": 5941, "data_time": 0.03456, "loss_rpn_cls": 0.03329, "loss_rpn_bbox": 0.05288, "loss_cls": 0.22983, "acc": 92.3501, "loss_bbox": 0.25605, "loss_mask": 0.25643, "loss": 0.82848, "time": 0.20314} -{"mode": "train", "epoch": 4, "iter": 4600, "lr": 0.0002, "memory": 5941, "data_time": 0.04226, "loss_rpn_cls": 0.03515, "loss_rpn_bbox": 0.05552, "loss_cls": 0.23981, "acc": 92.13721, "loss_bbox": 0.26119, "loss_mask": 0.25464, "loss": 0.84632, "time": 0.21263} -{"mode": "train", "epoch": 4, "iter": 4650, "lr": 0.0002, "memory": 5941, "data_time": 0.03913, "loss_rpn_cls": 0.03409, "loss_rpn_bbox": 0.05621, "loss_cls": 0.24305, "acc": 92.05078, "loss_bbox": 0.25676, "loss_mask": 0.26157, "loss": 0.85169, "time": 0.20525} -{"mode": "train", "epoch": 4, "iter": 4700, "lr": 0.0002, "memory": 5941, "data_time": 0.03742, "loss_rpn_cls": 0.03745, "loss_rpn_bbox": 0.05587, "loss_cls": 0.24592, "acc": 91.9104, "loss_bbox": 0.25825, "loss_mask": 0.2538, "loss": 0.85129, "time": 0.20594} -{"mode": "train", "epoch": 4, "iter": 4750, "lr": 0.0002, "memory": 5941, "data_time": 0.0408, "loss_rpn_cls": 0.03675, "loss_rpn_bbox": 0.05438, "loss_cls": 0.24275, "acc": 92.01538, "loss_bbox": 0.2588, "loss_mask": 0.26383, "loss": 0.85652, "time": 0.20619} -{"mode": "train", "epoch": 4, "iter": 4800, "lr": 0.0002, "memory": 5941, "data_time": 0.03973, "loss_rpn_cls": 0.03599, "loss_rpn_bbox": 0.06026, "loss_cls": 0.2561, "acc": 91.57764, "loss_bbox": 0.27307, "loss_mask": 0.25768, "loss": 0.8831, "time": 0.21103} -{"mode": "train", "epoch": 4, "iter": 4850, "lr": 0.0002, "memory": 5941, "data_time": 0.04228, "loss_rpn_cls": 0.03534, "loss_rpn_bbox": 0.05442, "loss_cls": 0.24052, "acc": 92.2124, "loss_bbox": 0.25377, "loss_mask": 0.25703, "loss": 0.84108, "time": 0.20558} -{"mode": "train", "epoch": 4, "iter": 4900, "lr": 0.0002, "memory": 5941, "data_time": 0.03936, "loss_rpn_cls": 0.03869, "loss_rpn_bbox": 0.06043, "loss_cls": 0.2523, "acc": 91.61523, "loss_bbox": 0.27187, "loss_mask": 0.26273, "loss": 0.88601, "time": 0.21862} -{"mode": "train", "epoch": 4, "iter": 4950, "lr": 0.0002, "memory": 5941, "data_time": 0.03496, "loss_rpn_cls": 0.03507, "loss_rpn_bbox": 0.0554, 
"loss_cls": 0.24589, "acc": 91.9812, "loss_bbox": 0.26352, "loss_mask": 0.25643, "loss": 0.85631, "time": 0.2093} -{"mode": "train", "epoch": 4, "iter": 5000, "lr": 0.0002, "memory": 5941, "data_time": 0.03997, "loss_rpn_cls": 0.03359, "loss_rpn_bbox": 0.05572, "loss_cls": 0.23645, "acc": 92.19019, "loss_bbox": 0.25471, "loss_mask": 0.25708, "loss": 0.83755, "time": 0.20098} -{"mode": "train", "epoch": 4, "iter": 5050, "lr": 0.0002, "memory": 5941, "data_time": 0.03939, "loss_rpn_cls": 0.03687, "loss_rpn_bbox": 0.05944, "loss_cls": 0.24984, "acc": 91.87158, "loss_bbox": 0.26449, "loss_mask": 0.26813, "loss": 0.87879, "time": 0.21422} -{"mode": "train", "epoch": 4, "iter": 5100, "lr": 0.0002, "memory": 5941, "data_time": 0.04219, "loss_rpn_cls": 0.03993, "loss_rpn_bbox": 0.06047, "loss_cls": 0.24469, "acc": 91.94116, "loss_bbox": 0.25951, "loss_mask": 0.2619, "loss": 0.8665, "time": 0.21389} -{"mode": "train", "epoch": 4, "iter": 5150, "lr": 0.0002, "memory": 5941, "data_time": 0.04067, "loss_rpn_cls": 0.03393, "loss_rpn_bbox": 0.05671, "loss_cls": 0.23426, "acc": 92.18994, "loss_bbox": 0.25808, "loss_mask": 0.25128, "loss": 0.83425, "time": 0.2132} -{"mode": "train", "epoch": 4, "iter": 5200, "lr": 0.0002, "memory": 5941, "data_time": 0.04221, "loss_rpn_cls": 0.03288, "loss_rpn_bbox": 0.05774, "loss_cls": 0.24352, "acc": 92.03223, "loss_bbox": 0.26173, "loss_mask": 0.25349, "loss": 0.84936, "time": 0.21541} -{"mode": "train", "epoch": 4, "iter": 5250, "lr": 0.0002, "memory": 5941, "data_time": 0.0377, "loss_rpn_cls": 0.03501, "loss_rpn_bbox": 0.05664, "loss_cls": 0.24413, "acc": 92.04443, "loss_bbox": 0.2619, "loss_mask": 0.26244, "loss": 0.86013, "time": 0.20874} -{"mode": "train", "epoch": 4, "iter": 5300, "lr": 0.0002, "memory": 5941, "data_time": 0.04945, "loss_rpn_cls": 0.03343, "loss_rpn_bbox": 0.05737, "loss_cls": 0.2557, "acc": 91.65796, "loss_bbox": 0.2693, "loss_mask": 0.25712, "loss": 0.87292, "time": 0.21806} -{"mode": "train", "epoch": 4, "iter": 5350, "lr": 0.0002, "memory": 5941, "data_time": 0.03866, "loss_rpn_cls": 0.0332, "loss_rpn_bbox": 0.05735, "loss_cls": 0.25222, "acc": 91.8374, "loss_bbox": 0.26634, "loss_mask": 0.25937, "loss": 0.86848, "time": 0.20841} -{"mode": "train", "epoch": 4, "iter": 5400, "lr": 0.0002, "memory": 5941, "data_time": 0.04079, "loss_rpn_cls": 0.0359, "loss_rpn_bbox": 0.05748, "loss_cls": 0.24519, "acc": 91.95386, "loss_bbox": 0.26292, "loss_mask": 0.2561, "loss": 0.8576, "time": 0.22128} -{"mode": "train", "epoch": 4, "iter": 5450, "lr": 0.0002, "memory": 5941, "data_time": 0.03604, "loss_rpn_cls": 0.03463, "loss_rpn_bbox": 0.05361, "loss_cls": 0.24052, "acc": 92.30591, "loss_bbox": 0.25663, "loss_mask": 0.25548, "loss": 0.84088, "time": 0.20903} -{"mode": "train", "epoch": 4, "iter": 5500, "lr": 0.0002, "memory": 5941, "data_time": 0.03221, "loss_rpn_cls": 0.03712, "loss_rpn_bbox": 0.05723, "loss_cls": 0.25296, "acc": 91.65454, "loss_bbox": 0.26828, "loss_mask": 0.25808, "loss": 0.87367, "time": 0.21954} -{"mode": "train", "epoch": 4, "iter": 5550, "lr": 0.0002, "memory": 5941, "data_time": 0.03103, "loss_rpn_cls": 0.0368, "loss_rpn_bbox": 0.05822, "loss_cls": 0.23971, "acc": 92.00049, "loss_bbox": 0.25824, "loss_mask": 0.25567, "loss": 0.84864, "time": 0.21964} -{"mode": "train", "epoch": 4, "iter": 5600, "lr": 0.0002, "memory": 5941, "data_time": 0.0379, "loss_rpn_cls": 0.03631, "loss_rpn_bbox": 0.05588, "loss_cls": 0.24035, "acc": 92.09253, "loss_bbox": 0.26004, "loss_mask": 0.26643, "loss": 0.85901, "time": 0.20735} -{"mode": "train", 
"epoch": 4, "iter": 5650, "lr": 0.0002, "memory": 5941, "data_time": 0.03344, "loss_rpn_cls": 0.03365, "loss_rpn_bbox": 0.05269, "loss_cls": 0.24456, "acc": 92.12695, "loss_bbox": 0.25882, "loss_mask": 0.25654, "loss": 0.84626, "time": 0.20319} -{"mode": "train", "epoch": 4, "iter": 5700, "lr": 0.0002, "memory": 5941, "data_time": 0.04128, "loss_rpn_cls": 0.03451, "loss_rpn_bbox": 0.0553, "loss_cls": 0.23614, "acc": 92.20312, "loss_bbox": 0.25776, "loss_mask": 0.25953, "loss": 0.84325, "time": 0.21189} -{"mode": "train", "epoch": 4, "iter": 5750, "lr": 0.0002, "memory": 5941, "data_time": 0.04369, "loss_rpn_cls": 0.03981, "loss_rpn_bbox": 0.05963, "loss_cls": 0.2544, "acc": 91.58203, "loss_bbox": 0.27546, "loss_mask": 0.2629, "loss": 0.89218, "time": 0.21642} -{"mode": "train", "epoch": 4, "iter": 5800, "lr": 0.0002, "memory": 5941, "data_time": 0.04028, "loss_rpn_cls": 0.03444, "loss_rpn_bbox": 0.05819, "loss_cls": 0.24425, "acc": 91.8103, "loss_bbox": 0.26859, "loss_mask": 0.25899, "loss": 0.86445, "time": 0.21125} -{"mode": "train", "epoch": 4, "iter": 5850, "lr": 0.0002, "memory": 5941, "data_time": 0.0359, "loss_rpn_cls": 0.03339, "loss_rpn_bbox": 0.05617, "loss_cls": 0.24084, "acc": 92.01929, "loss_bbox": 0.26164, "loss_mask": 0.25504, "loss": 0.84708, "time": 0.20737} -{"mode": "train", "epoch": 4, "iter": 5900, "lr": 0.0002, "memory": 5941, "data_time": 0.03767, "loss_rpn_cls": 0.03625, "loss_rpn_bbox": 0.05793, "loss_cls": 0.25042, "acc": 91.89014, "loss_bbox": 0.26138, "loss_mask": 0.25858, "loss": 0.86456, "time": 0.21515} -{"mode": "train", "epoch": 4, "iter": 5950, "lr": 0.0002, "memory": 5941, "data_time": 0.04, "loss_rpn_cls": 0.03338, "loss_rpn_bbox": 0.0539, "loss_cls": 0.24132, "acc": 92.02417, "loss_bbox": 0.26506, "loss_mask": 0.25528, "loss": 0.84894, "time": 0.20702} -{"mode": "train", "epoch": 4, "iter": 6000, "lr": 0.0002, "memory": 5941, "data_time": 0.04042, "loss_rpn_cls": 0.03629, "loss_rpn_bbox": 0.05875, "loss_cls": 0.24434, "acc": 91.85181, "loss_bbox": 0.27084, "loss_mask": 0.25752, "loss": 0.86773, "time": 0.21254} -{"mode": "train", "epoch": 4, "iter": 6050, "lr": 0.0002, "memory": 5941, "data_time": 0.04252, "loss_rpn_cls": 0.03782, "loss_rpn_bbox": 0.05674, "loss_cls": 0.24686, "acc": 91.86963, "loss_bbox": 0.27017, "loss_mask": 0.26323, "loss": 0.87482, "time": 0.21375} -{"mode": "train", "epoch": 4, "iter": 6100, "lr": 0.0002, "memory": 5941, "data_time": 0.0356, "loss_rpn_cls": 0.03566, "loss_rpn_bbox": 0.05751, "loss_cls": 0.25002, "acc": 91.6936, "loss_bbox": 0.26492, "loss_mask": 0.26536, "loss": 0.87347, "time": 0.21922} -{"mode": "train", "epoch": 4, "iter": 6150, "lr": 0.0002, "memory": 5941, "data_time": 0.03704, "loss_rpn_cls": 0.03318, "loss_rpn_bbox": 0.05424, "loss_cls": 0.23044, "acc": 92.44702, "loss_bbox": 0.24899, "loss_mask": 0.25249, "loss": 0.81934, "time": 0.20242} -{"mode": "train", "epoch": 4, "iter": 6200, "lr": 0.0002, "memory": 5941, "data_time": 0.04147, "loss_rpn_cls": 0.03669, "loss_rpn_bbox": 0.05759, "loss_cls": 0.23902, "acc": 92.00659, "loss_bbox": 0.26052, "loss_mask": 0.26432, "loss": 0.85813, "time": 0.21988} -{"mode": "train", "epoch": 4, "iter": 6250, "lr": 0.0002, "memory": 5941, "data_time": 0.04717, "loss_rpn_cls": 0.03325, "loss_rpn_bbox": 0.05378, "loss_cls": 0.23786, "acc": 92.14136, "loss_bbox": 0.2611, "loss_mask": 0.26271, "loss": 0.84871, "time": 0.21155} -{"mode": "train", "epoch": 4, "iter": 6300, "lr": 0.0002, "memory": 5941, "data_time": 0.04509, "loss_rpn_cls": 0.03469, "loss_rpn_bbox": 0.05329, 
"loss_cls": 0.24061, "acc": 92.17578, "loss_bbox": 0.25839, "loss_mask": 0.25219, "loss": 0.83917, "time": 0.20777} -{"mode": "train", "epoch": 4, "iter": 6350, "lr": 0.0002, "memory": 5941, "data_time": 0.03189, "loss_rpn_cls": 0.03306, "loss_rpn_bbox": 0.05114, "loss_cls": 0.23892, "acc": 92.16919, "loss_bbox": 0.25614, "loss_mask": 0.25067, "loss": 0.82993, "time": 0.2072} -{"mode": "train", "epoch": 4, "iter": 6400, "lr": 0.0002, "memory": 5941, "data_time": 0.0412, "loss_rpn_cls": 0.0381, "loss_rpn_bbox": 0.05669, "loss_cls": 0.24728, "acc": 91.95044, "loss_bbox": 0.27097, "loss_mask": 0.26012, "loss": 0.87316, "time": 0.21061} -{"mode": "train", "epoch": 4, "iter": 6450, "lr": 0.0002, "memory": 5941, "data_time": 0.03562, "loss_rpn_cls": 0.04003, "loss_rpn_bbox": 0.05724, "loss_cls": 0.24951, "acc": 91.9043, "loss_bbox": 0.26452, "loss_mask": 0.26371, "loss": 0.87501, "time": 0.21057} -{"mode": "train", "epoch": 4, "iter": 6500, "lr": 0.0002, "memory": 5941, "data_time": 0.04353, "loss_rpn_cls": 0.03401, "loss_rpn_bbox": 0.05804, "loss_cls": 0.24117, "acc": 92.00684, "loss_bbox": 0.26462, "loss_mask": 0.26211, "loss": 0.85996, "time": 0.21165} -{"mode": "train", "epoch": 4, "iter": 6550, "lr": 0.0002, "memory": 5941, "data_time": 0.0449, "loss_rpn_cls": 0.03705, "loss_rpn_bbox": 0.05675, "loss_cls": 0.2418, "acc": 92.14062, "loss_bbox": 0.25777, "loss_mask": 0.25997, "loss": 0.85334, "time": 0.21814} -{"mode": "train", "epoch": 4, "iter": 6600, "lr": 0.0002, "memory": 5941, "data_time": 0.03622, "loss_rpn_cls": 0.03721, "loss_rpn_bbox": 0.05669, "loss_cls": 0.24126, "acc": 92.10522, "loss_bbox": 0.25792, "loss_mask": 0.26322, "loss": 0.8563, "time": 0.20443} -{"mode": "train", "epoch": 4, "iter": 6650, "lr": 0.0002, "memory": 5941, "data_time": 0.04132, "loss_rpn_cls": 0.03726, "loss_rpn_bbox": 0.05795, "loss_cls": 0.24896, "acc": 91.81738, "loss_bbox": 0.26815, "loss_mask": 0.26169, "loss": 0.87401, "time": 0.2124} -{"mode": "train", "epoch": 4, "iter": 6700, "lr": 0.0002, "memory": 5941, "data_time": 0.03632, "loss_rpn_cls": 0.03645, "loss_rpn_bbox": 0.05742, "loss_cls": 0.25633, "acc": 91.65283, "loss_bbox": 0.26642, "loss_mask": 0.26674, "loss": 0.88336, "time": 0.21011} -{"mode": "train", "epoch": 4, "iter": 6750, "lr": 0.0002, "memory": 5941, "data_time": 0.04507, "loss_rpn_cls": 0.03406, "loss_rpn_bbox": 0.05426, "loss_cls": 0.24804, "acc": 91.85205, "loss_bbox": 0.26754, "loss_mask": 0.25633, "loss": 0.86024, "time": 0.21026} -{"mode": "train", "epoch": 4, "iter": 6800, "lr": 0.0002, "memory": 5941, "data_time": 0.05137, "loss_rpn_cls": 0.03593, "loss_rpn_bbox": 0.05626, "loss_cls": 0.24794, "acc": 91.97314, "loss_bbox": 0.25886, "loss_mask": 0.25307, "loss": 0.85206, "time": 0.21486} -{"mode": "train", "epoch": 4, "iter": 6850, "lr": 0.0002, "memory": 5941, "data_time": 0.04102, "loss_rpn_cls": 0.0364, "loss_rpn_bbox": 0.05518, "loss_cls": 0.24099, "acc": 92.16113, "loss_bbox": 0.25472, "loss_mask": 0.26522, "loss": 0.85251, "time": 0.20385} -{"mode": "train", "epoch": 4, "iter": 6900, "lr": 0.0002, "memory": 5941, "data_time": 0.04548, "loss_rpn_cls": 0.03721, "loss_rpn_bbox": 0.05703, "loss_cls": 0.23808, "acc": 92.07568, "loss_bbox": 0.2599, "loss_mask": 0.26146, "loss": 0.85369, "time": 0.21094} -{"mode": "train", "epoch": 4, "iter": 6950, "lr": 0.0002, "memory": 5941, "data_time": 0.04161, "loss_rpn_cls": 0.03566, "loss_rpn_bbox": 0.05598, "loss_cls": 0.24738, "acc": 91.81445, "loss_bbox": 0.26602, "loss_mask": 0.25344, "loss": 0.85848, "time": 0.20935} -{"mode": 
"train", "epoch": 4, "iter": 7000, "lr": 0.0002, "memory": 5941, "data_time": 0.04447, "loss_rpn_cls": 0.0357, "loss_rpn_bbox": 0.0563, "loss_cls": 0.24029, "acc": 92.19189, "loss_bbox": 0.25366, "loss_mask": 0.25535, "loss": 0.8413, "time": 0.20625} -{"mode": "train", "epoch": 4, "iter": 7050, "lr": 0.0002, "memory": 5941, "data_time": 0.03119, "loss_rpn_cls": 0.03499, "loss_rpn_bbox": 0.05472, "loss_cls": 0.23353, "acc": 92.20215, "loss_bbox": 0.25332, "loss_mask": 0.25121, "loss": 0.82777, "time": 0.20192} -{"mode": "train", "epoch": 4, "iter": 7100, "lr": 0.0002, "memory": 5941, "data_time": 0.04311, "loss_rpn_cls": 0.0352, "loss_rpn_bbox": 0.05667, "loss_cls": 0.23848, "acc": 92.02393, "loss_bbox": 0.26025, "loss_mask": 0.26015, "loss": 0.85076, "time": 0.21881} -{"mode": "train", "epoch": 4, "iter": 7150, "lr": 0.0002, "memory": 5941, "data_time": 0.03802, "loss_rpn_cls": 0.03431, "loss_rpn_bbox": 0.05491, "loss_cls": 0.24018, "acc": 92.11475, "loss_bbox": 0.2638, "loss_mask": 0.26047, "loss": 0.85366, "time": 0.21182} -{"mode": "train", "epoch": 4, "iter": 7200, "lr": 0.0002, "memory": 5941, "data_time": 0.04268, "loss_rpn_cls": 0.03746, "loss_rpn_bbox": 0.06108, "loss_cls": 0.25375, "acc": 91.68237, "loss_bbox": 0.27166, "loss_mask": 0.2594, "loss": 0.88336, "time": 0.21404} -{"mode": "train", "epoch": 4, "iter": 7250, "lr": 0.0002, "memory": 5941, "data_time": 0.03352, "loss_rpn_cls": 0.03643, "loss_rpn_bbox": 0.05544, "loss_cls": 0.24152, "acc": 92.20068, "loss_bbox": 0.25153, "loss_mask": 0.25873, "loss": 0.84366, "time": 0.20252} -{"mode": "train", "epoch": 4, "iter": 7300, "lr": 0.0002, "memory": 5941, "data_time": 0.03809, "loss_rpn_cls": 0.03349, "loss_rpn_bbox": 0.05507, "loss_cls": 0.25169, "acc": 91.82764, "loss_bbox": 0.26383, "loss_mask": 0.25847, "loss": 0.86255, "time": 0.21646} -{"mode": "val", "epoch": 4, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3136, "bbox_mAP_50": 0.5354, "bbox_mAP_75": 0.3305, "bbox_mAP_s": 0.1958, "bbox_mAP_m": 0.3525, "bbox_mAP_l": 0.4008, "bbox_mAP_copypaste": "0.3136 0.5354 0.3305 0.1958 0.3525 0.4008", "segm_mAP": 0.308, "segm_mAP_50": 0.5062, "segm_mAP_75": 0.3295, "segm_mAP_s": 0.1486, "segm_mAP_m": 0.3379, "segm_mAP_l": 0.449, "segm_mAP_copypaste": "0.3080 0.5062 0.3295 0.1486 0.3379 0.4490"} -{"mode": "train", "epoch": 5, "iter": 50, "lr": 0.0002, "memory": 5941, "data_time": 0.10152, "loss_rpn_cls": 0.03327, "loss_rpn_bbox": 0.05464, "loss_cls": 0.23916, "acc": 91.97192, "loss_bbox": 0.26426, "loss_mask": 0.25897, "loss": 0.85031, "time": 0.29032} -{"mode": "train", "epoch": 5, "iter": 100, "lr": 0.0002, "memory": 5941, "data_time": 0.04074, "loss_rpn_cls": 0.0319, "loss_rpn_bbox": 0.05719, "loss_cls": 0.23627, "acc": 91.92896, "loss_bbox": 0.26904, "loss_mask": 0.25663, "loss": 0.85102, "time": 0.23567} -{"mode": "train", "epoch": 5, "iter": 150, "lr": 0.0002, "memory": 5941, "data_time": 0.0345, "loss_rpn_cls": 0.03192, "loss_rpn_bbox": 0.05542, "loss_cls": 0.23812, "acc": 92.05225, "loss_bbox": 0.26242, "loss_mask": 0.25659, "loss": 0.84447, "time": 0.22472} -{"mode": "train", "epoch": 5, "iter": 200, "lr": 0.0002, "memory": 5941, "data_time": 0.03755, "loss_rpn_cls": 0.03206, "loss_rpn_bbox": 0.05696, "loss_cls": 0.22759, "acc": 92.29858, "loss_bbox": 0.25627, "loss_mask": 0.25315, "loss": 0.82603, "time": 0.21935} -{"mode": "train", "epoch": 5, "iter": 250, "lr": 0.0002, "memory": 5941, "data_time": 0.0399, "loss_rpn_cls": 0.03348, "loss_rpn_bbox": 0.05699, "loss_cls": 0.23085, "acc": 92.3501, "loss_bbox": 0.25328, "loss_mask": 
0.25389, "loss": 0.82848, "time": 0.21891} -{"mode": "train", "epoch": 5, "iter": 300, "lr": 0.0002, "memory": 5941, "data_time": 0.03107, "loss_rpn_cls": 0.03243, "loss_rpn_bbox": 0.05433, "loss_cls": 0.22604, "acc": 92.3291, "loss_bbox": 0.25333, "loss_mask": 0.25545, "loss": 0.82158, "time": 0.22875} -{"mode": "train", "epoch": 5, "iter": 350, "lr": 0.0002, "memory": 5941, "data_time": 0.04683, "loss_rpn_cls": 0.03344, "loss_rpn_bbox": 0.05512, "loss_cls": 0.23002, "acc": 92.12378, "loss_bbox": 0.26099, "loss_mask": 0.25588, "loss": 0.83545, "time": 0.22129} -{"mode": "train", "epoch": 5, "iter": 400, "lr": 0.0002, "memory": 5941, "data_time": 0.03055, "loss_rpn_cls": 0.03129, "loss_rpn_bbox": 0.05255, "loss_cls": 0.23324, "acc": 92.19604, "loss_bbox": 0.2593, "loss_mask": 0.25006, "loss": 0.82645, "time": 0.22686} -{"mode": "train", "epoch": 5, "iter": 450, "lr": 0.0002, "memory": 5941, "data_time": 0.04052, "loss_rpn_cls": 0.03442, "loss_rpn_bbox": 0.05548, "loss_cls": 0.23008, "acc": 92.23291, "loss_bbox": 0.26079, "loss_mask": 0.25122, "loss": 0.832, "time": 0.21843} -{"mode": "train", "epoch": 5, "iter": 500, "lr": 0.0002, "memory": 5941, "data_time": 0.03736, "loss_rpn_cls": 0.0339, "loss_rpn_bbox": 0.05689, "loss_cls": 0.23018, "acc": 92.34595, "loss_bbox": 0.25236, "loss_mask": 0.25292, "loss": 0.82626, "time": 0.20778} -{"mode": "train", "epoch": 5, "iter": 550, "lr": 0.0002, "memory": 5941, "data_time": 0.03169, "loss_rpn_cls": 0.03197, "loss_rpn_bbox": 0.05379, "loss_cls": 0.22742, "acc": 92.45044, "loss_bbox": 0.24687, "loss_mask": 0.24945, "loss": 0.8095, "time": 0.20507} -{"mode": "train", "epoch": 5, "iter": 600, "lr": 0.0002, "memory": 5941, "data_time": 0.03913, "loss_rpn_cls": 0.03517, "loss_rpn_bbox": 0.0567, "loss_cls": 0.24107, "acc": 91.84253, "loss_bbox": 0.26826, "loss_mask": 0.24932, "loss": 0.85051, "time": 0.20935} -{"mode": "train", "epoch": 5, "iter": 650, "lr": 0.0002, "memory": 5941, "data_time": 0.03739, "loss_rpn_cls": 0.03311, "loss_rpn_bbox": 0.05407, "loss_cls": 0.23481, "acc": 92.00317, "loss_bbox": 0.26045, "loss_mask": 0.25122, "loss": 0.83367, "time": 0.21793} -{"mode": "train", "epoch": 5, "iter": 700, "lr": 0.0002, "memory": 5941, "data_time": 0.04308, "loss_rpn_cls": 0.03199, "loss_rpn_bbox": 0.05658, "loss_cls": 0.23183, "acc": 92.26709, "loss_bbox": 0.25876, "loss_mask": 0.26177, "loss": 0.84093, "time": 0.21289} -{"mode": "train", "epoch": 5, "iter": 750, "lr": 0.0002, "memory": 5941, "data_time": 0.03595, "loss_rpn_cls": 0.03105, "loss_rpn_bbox": 0.05214, "loss_cls": 0.22445, "acc": 92.5022, "loss_bbox": 0.249, "loss_mask": 0.24679, "loss": 0.80342, "time": 0.20993} -{"mode": "train", "epoch": 5, "iter": 800, "lr": 0.0002, "memory": 5941, "data_time": 0.04439, "loss_rpn_cls": 0.03578, "loss_rpn_bbox": 0.05739, "loss_cls": 0.2299, "acc": 92.18018, "loss_bbox": 0.26218, "loss_mask": 0.25826, "loss": 0.84351, "time": 0.2129} -{"mode": "train", "epoch": 5, "iter": 850, "lr": 0.0002, "memory": 5941, "data_time": 0.03926, "loss_rpn_cls": 0.03386, "loss_rpn_bbox": 0.05694, "loss_cls": 0.23454, "acc": 92.07861, "loss_bbox": 0.26407, "loss_mask": 0.2571, "loss": 0.84651, "time": 0.2074} -{"mode": "train", "epoch": 5, "iter": 900, "lr": 0.0002, "memory": 5941, "data_time": 0.04714, "loss_rpn_cls": 0.03366, "loss_rpn_bbox": 0.058, "loss_cls": 0.24323, "acc": 91.97827, "loss_bbox": 0.2632, "loss_mask": 0.25706, "loss": 0.85515, "time": 0.2184} -{"mode": "train", "epoch": 5, "iter": 950, "lr": 0.0002, "memory": 5941, "data_time": 0.03571, 
"loss_rpn_cls": 0.03436, "loss_rpn_bbox": 0.0543, "loss_cls": 0.22476, "acc": 92.49536, "loss_bbox": 0.24879, "loss_mask": 0.25282, "loss": 0.81504, "time": 0.21025} -{"mode": "train", "epoch": 5, "iter": 1000, "lr": 0.0002, "memory": 5941, "data_time": 0.03976, "loss_rpn_cls": 0.03581, "loss_rpn_bbox": 0.05869, "loss_cls": 0.23794, "acc": 92.08936, "loss_bbox": 0.25985, "loss_mask": 0.26063, "loss": 0.85292, "time": 0.21525} -{"mode": "train", "epoch": 5, "iter": 1050, "lr": 0.0002, "memory": 5941, "data_time": 0.042, "loss_rpn_cls": 0.03475, "loss_rpn_bbox": 0.05633, "loss_cls": 0.23962, "acc": 91.9856, "loss_bbox": 0.27024, "loss_mask": 0.25862, "loss": 0.85956, "time": 0.21958} -{"mode": "train", "epoch": 5, "iter": 1100, "lr": 0.0002, "memory": 5941, "data_time": 0.03701, "loss_rpn_cls": 0.03121, "loss_rpn_bbox": 0.05264, "loss_cls": 0.2226, "acc": 92.51172, "loss_bbox": 0.2476, "loss_mask": 0.25679, "loss": 0.81084, "time": 0.21044} -{"mode": "train", "epoch": 5, "iter": 1150, "lr": 0.0002, "memory": 5941, "data_time": 0.03502, "loss_rpn_cls": 0.03402, "loss_rpn_bbox": 0.05475, "loss_cls": 0.23989, "acc": 91.93945, "loss_bbox": 0.26545, "loss_mask": 0.25318, "loss": 0.84729, "time": 0.20844} -{"mode": "train", "epoch": 5, "iter": 1200, "lr": 0.0002, "memory": 5941, "data_time": 0.04188, "loss_rpn_cls": 0.03181, "loss_rpn_bbox": 0.05249, "loss_cls": 0.23215, "acc": 92.3252, "loss_bbox": 0.2523, "loss_mask": 0.249, "loss": 0.81775, "time": 0.21088} -{"mode": "train", "epoch": 5, "iter": 1250, "lr": 0.0002, "memory": 5941, "data_time": 0.03553, "loss_rpn_cls": 0.03332, "loss_rpn_bbox": 0.05702, "loss_cls": 0.24071, "acc": 91.98438, "loss_bbox": 0.25818, "loss_mask": 0.24694, "loss": 0.83617, "time": 0.20719} -{"mode": "train", "epoch": 5, "iter": 1300, "lr": 0.0002, "memory": 5941, "data_time": 0.03561, "loss_rpn_cls": 0.02946, "loss_rpn_bbox": 0.05195, "loss_cls": 0.21329, "acc": 92.91992, "loss_bbox": 0.2392, "loss_mask": 0.24941, "loss": 0.78331, "time": 0.20981} -{"mode": "train", "epoch": 5, "iter": 1350, "lr": 0.0002, "memory": 5941, "data_time": 0.03921, "loss_rpn_cls": 0.03427, "loss_rpn_bbox": 0.0547, "loss_cls": 0.23109, "acc": 92.24219, "loss_bbox": 0.2612, "loss_mask": 0.25307, "loss": 0.83432, "time": 0.21245} -{"mode": "train", "epoch": 5, "iter": 1400, "lr": 0.0002, "memory": 5941, "data_time": 0.03793, "loss_rpn_cls": 0.03077, "loss_rpn_bbox": 0.05934, "loss_cls": 0.23903, "acc": 92.03003, "loss_bbox": 0.26588, "loss_mask": 0.26022, "loss": 0.85524, "time": 0.21858} -{"mode": "train", "epoch": 5, "iter": 1450, "lr": 0.0002, "memory": 5941, "data_time": 0.0324, "loss_rpn_cls": 0.03281, "loss_rpn_bbox": 0.05321, "loss_cls": 0.23766, "acc": 92.24634, "loss_bbox": 0.25763, "loss_mask": 0.25873, "loss": 0.84005, "time": 0.2084} -{"mode": "train", "epoch": 5, "iter": 1500, "lr": 0.0002, "memory": 5941, "data_time": 0.03029, "loss_rpn_cls": 0.03267, "loss_rpn_bbox": 0.05329, "loss_cls": 0.23527, "acc": 92.26123, "loss_bbox": 0.25712, "loss_mask": 0.25695, "loss": 0.8353, "time": 0.20601} -{"mode": "train", "epoch": 5, "iter": 1550, "lr": 0.0002, "memory": 5941, "data_time": 0.03638, "loss_rpn_cls": 0.03062, "loss_rpn_bbox": 0.05551, "loss_cls": 0.23408, "acc": 92.03979, "loss_bbox": 0.26407, "loss_mask": 0.25225, "loss": 0.83652, "time": 0.20996} -{"mode": "train", "epoch": 5, "iter": 1600, "lr": 0.0002, "memory": 5941, "data_time": 0.03603, "loss_rpn_cls": 0.03581, "loss_rpn_bbox": 0.0553, "loss_cls": 0.23543, "acc": 92.04224, "loss_bbox": 0.26553, "loss_mask": 0.25917, 
"loss": 0.85124, "time": 0.20759} -{"mode": "train", "epoch": 5, "iter": 1650, "lr": 0.0002, "memory": 5941, "data_time": 0.035, "loss_rpn_cls": 0.03137, "loss_rpn_bbox": 0.05232, "loss_cls": 0.22689, "acc": 92.37866, "loss_bbox": 0.24732, "loss_mask": 0.24791, "loss": 0.80581, "time": 0.20952} -{"mode": "train", "epoch": 5, "iter": 1700, "lr": 0.0002, "memory": 5941, "data_time": 0.04688, "loss_rpn_cls": 0.03297, "loss_rpn_bbox": 0.05523, "loss_cls": 0.23899, "acc": 92.16748, "loss_bbox": 0.25357, "loss_mask": 0.25205, "loss": 0.8328, "time": 0.21529} -{"mode": "train", "epoch": 5, "iter": 1750, "lr": 0.0002, "memory": 5941, "data_time": 0.04779, "loss_rpn_cls": 0.03296, "loss_rpn_bbox": 0.05678, "loss_cls": 0.24297, "acc": 92.0249, "loss_bbox": 0.26498, "loss_mask": 0.2588, "loss": 0.8565, "time": 0.2117} -{"mode": "train", "epoch": 5, "iter": 1800, "lr": 0.0002, "memory": 5941, "data_time": 0.04319, "loss_rpn_cls": 0.036, "loss_rpn_bbox": 0.05542, "loss_cls": 0.24531, "acc": 91.94092, "loss_bbox": 0.26355, "loss_mask": 0.25626, "loss": 0.85654, "time": 0.21158} -{"mode": "train", "epoch": 5, "iter": 1850, "lr": 0.0002, "memory": 5941, "data_time": 0.04411, "loss_rpn_cls": 0.0322, "loss_rpn_bbox": 0.0571, "loss_cls": 0.23974, "acc": 92.16431, "loss_bbox": 0.25631, "loss_mask": 0.2529, "loss": 0.83825, "time": 0.21218} -{"mode": "train", "epoch": 5, "iter": 1900, "lr": 0.0002, "memory": 5941, "data_time": 0.05211, "loss_rpn_cls": 0.03246, "loss_rpn_bbox": 0.05307, "loss_cls": 0.23529, "acc": 92.15625, "loss_bbox": 0.25276, "loss_mask": 0.25169, "loss": 0.82526, "time": 0.20686} -{"mode": "train", "epoch": 5, "iter": 1950, "lr": 0.0002, "memory": 5941, "data_time": 0.03618, "loss_rpn_cls": 0.03094, "loss_rpn_bbox": 0.05266, "loss_cls": 0.2421, "acc": 91.89868, "loss_bbox": 0.26406, "loss_mask": 0.25793, "loss": 0.84769, "time": 0.21248} -{"mode": "train", "epoch": 5, "iter": 2000, "lr": 0.0002, "memory": 5941, "data_time": 0.04789, "loss_rpn_cls": 0.03388, "loss_rpn_bbox": 0.05232, "loss_cls": 0.22489, "acc": 92.46973, "loss_bbox": 0.24773, "loss_mask": 0.24651, "loss": 0.80533, "time": 0.21406} -{"mode": "train", "epoch": 5, "iter": 2050, "lr": 0.0002, "memory": 5941, "data_time": 0.04026, "loss_rpn_cls": 0.03291, "loss_rpn_bbox": 0.0543, "loss_cls": 0.24552, "acc": 91.91406, "loss_bbox": 0.26345, "loss_mask": 0.25896, "loss": 0.85513, "time": 0.20852} -{"mode": "train", "epoch": 5, "iter": 2100, "lr": 0.0002, "memory": 5941, "data_time": 0.04023, "loss_rpn_cls": 0.03423, "loss_rpn_bbox": 0.05585, "loss_cls": 0.24659, "acc": 91.68018, "loss_bbox": 0.26724, "loss_mask": 0.25783, "loss": 0.86175, "time": 0.2059} -{"mode": "train", "epoch": 5, "iter": 2150, "lr": 0.0002, "memory": 5941, "data_time": 0.03446, "loss_rpn_cls": 0.03361, "loss_rpn_bbox": 0.0524, "loss_cls": 0.2339, "acc": 92.13843, "loss_bbox": 0.26039, "loss_mask": 0.25445, "loss": 0.83474, "time": 0.20819} -{"mode": "train", "epoch": 5, "iter": 2200, "lr": 0.0002, "memory": 5941, "data_time": 0.03841, "loss_rpn_cls": 0.03464, "loss_rpn_bbox": 0.05447, "loss_cls": 0.23596, "acc": 92.10352, "loss_bbox": 0.26094, "loss_mask": 0.25366, "loss": 0.83967, "time": 0.20798} -{"mode": "train", "epoch": 5, "iter": 2250, "lr": 0.0002, "memory": 5941, "data_time": 0.04086, "loss_rpn_cls": 0.03281, "loss_rpn_bbox": 0.053, "loss_cls": 0.24128, "acc": 92.20898, "loss_bbox": 0.25341, "loss_mask": 0.24989, "loss": 0.8304, "time": 0.20333} -{"mode": "train", "epoch": 5, "iter": 2300, "lr": 0.0002, "memory": 5941, "data_time": 0.03573, 
"loss_rpn_cls": 0.03443, "loss_rpn_bbox": 0.0524, "loss_cls": 0.2406, "acc": 91.97363, "loss_bbox": 0.26313, "loss_mask": 0.25382, "loss": 0.84437, "time": 0.21665} -{"mode": "train", "epoch": 5, "iter": 2350, "lr": 0.0002, "memory": 5941, "data_time": 0.03755, "loss_rpn_cls": 0.03332, "loss_rpn_bbox": 0.05626, "loss_cls": 0.23605, "acc": 92.09399, "loss_bbox": 0.26032, "loss_mask": 0.24995, "loss": 0.8359, "time": 0.21286} -{"mode": "train", "epoch": 5, "iter": 2400, "lr": 0.0002, "memory": 5941, "data_time": 0.03619, "loss_rpn_cls": 0.03527, "loss_rpn_bbox": 0.05457, "loss_cls": 0.2298, "acc": 92.28101, "loss_bbox": 0.25186, "loss_mask": 0.25279, "loss": 0.82428, "time": 0.20493} -{"mode": "train", "epoch": 5, "iter": 2450, "lr": 0.0002, "memory": 5941, "data_time": 0.03644, "loss_rpn_cls": 0.0338, "loss_rpn_bbox": 0.05587, "loss_cls": 0.24149, "acc": 92.01416, "loss_bbox": 0.25983, "loss_mask": 0.25385, "loss": 0.84485, "time": 0.20817} -{"mode": "train", "epoch": 5, "iter": 2500, "lr": 0.0002, "memory": 5941, "data_time": 0.05191, "loss_rpn_cls": 0.03492, "loss_rpn_bbox": 0.05689, "loss_cls": 0.2442, "acc": 91.79028, "loss_bbox": 0.26748, "loss_mask": 0.25858, "loss": 0.86207, "time": 0.21227} -{"mode": "train", "epoch": 5, "iter": 2550, "lr": 0.0002, "memory": 5941, "data_time": 0.04362, "loss_rpn_cls": 0.03235, "loss_rpn_bbox": 0.05501, "loss_cls": 0.22342, "acc": 92.46094, "loss_bbox": 0.25096, "loss_mask": 0.24829, "loss": 0.81003, "time": 0.20449} -{"mode": "train", "epoch": 5, "iter": 2600, "lr": 0.0002, "memory": 5941, "data_time": 0.03417, "loss_rpn_cls": 0.03092, "loss_rpn_bbox": 0.05096, "loss_cls": 0.22333, "acc": 92.6228, "loss_bbox": 0.2482, "loss_mask": 0.24528, "loss": 0.79868, "time": 0.19165} -{"mode": "train", "epoch": 5, "iter": 2650, "lr": 0.0002, "memory": 5941, "data_time": 0.04116, "loss_rpn_cls": 0.03449, "loss_rpn_bbox": 0.05621, "loss_cls": 0.24861, "acc": 91.86719, "loss_bbox": 0.26614, "loss_mask": 0.25975, "loss": 0.86521, "time": 0.21514} -{"mode": "train", "epoch": 5, "iter": 2700, "lr": 0.0002, "memory": 5941, "data_time": 0.03854, "loss_rpn_cls": 0.03521, "loss_rpn_bbox": 0.05623, "loss_cls": 0.24578, "acc": 91.86963, "loss_bbox": 0.2656, "loss_mask": 0.25571, "loss": 0.85853, "time": 0.21608} -{"mode": "train", "epoch": 5, "iter": 2750, "lr": 0.0002, "memory": 5941, "data_time": 0.04417, "loss_rpn_cls": 0.03166, "loss_rpn_bbox": 0.05591, "loss_cls": 0.23469, "acc": 92.12817, "loss_bbox": 0.25585, "loss_mask": 0.25197, "loss": 0.83008, "time": 0.21582} -{"mode": "train", "epoch": 5, "iter": 2800, "lr": 0.0002, "memory": 5941, "data_time": 0.03468, "loss_rpn_cls": 0.03177, "loss_rpn_bbox": 0.05382, "loss_cls": 0.22894, "acc": 92.41406, "loss_bbox": 0.24782, "loss_mask": 0.25191, "loss": 0.81425, "time": 0.20671} -{"mode": "train", "epoch": 5, "iter": 2850, "lr": 0.0002, "memory": 5941, "data_time": 0.03604, "loss_rpn_cls": 0.03374, "loss_rpn_bbox": 0.05508, "loss_cls": 0.24384, "acc": 91.90308, "loss_bbox": 0.26106, "loss_mask": 0.25274, "loss": 0.84646, "time": 0.21656} -{"mode": "train", "epoch": 5, "iter": 2900, "lr": 0.0002, "memory": 5941, "data_time": 0.03305, "loss_rpn_cls": 0.03171, "loss_rpn_bbox": 0.0556, "loss_cls": 0.2391, "acc": 92.01758, "loss_bbox": 0.2627, "loss_mask": 0.25223, "loss": 0.84133, "time": 0.21069} -{"mode": "train", "epoch": 5, "iter": 2950, "lr": 0.0002, "memory": 5941, "data_time": 0.04406, "loss_rpn_cls": 0.03615, "loss_rpn_bbox": 0.05701, "loss_cls": 0.24476, "acc": 91.86963, "loss_bbox": 0.26025, "loss_mask": 0.25335, 
"loss": 0.85152, "time": 0.21134} -{"mode": "train", "epoch": 5, "iter": 3000, "lr": 0.0002, "memory": 5941, "data_time": 0.03762, "loss_rpn_cls": 0.03613, "loss_rpn_bbox": 0.05679, "loss_cls": 0.23791, "acc": 92.04468, "loss_bbox": 0.25715, "loss_mask": 0.25028, "loss": 0.83826, "time": 0.20633} -{"mode": "train", "epoch": 5, "iter": 3050, "lr": 0.0002, "memory": 5941, "data_time": 0.03322, "loss_rpn_cls": 0.03301, "loss_rpn_bbox": 0.05648, "loss_cls": 0.25782, "acc": 91.71582, "loss_bbox": 0.26408, "loss_mask": 0.25825, "loss": 0.86966, "time": 0.21129} -{"mode": "train", "epoch": 5, "iter": 3100, "lr": 0.0002, "memory": 5941, "data_time": 0.02578, "loss_rpn_cls": 0.03186, "loss_rpn_bbox": 0.05432, "loss_cls": 0.22824, "acc": 92.30469, "loss_bbox": 0.24654, "loss_mask": 0.2544, "loss": 0.81536, "time": 0.20884} -{"mode": "train", "epoch": 5, "iter": 3150, "lr": 0.0002, "memory": 5941, "data_time": 0.03126, "loss_rpn_cls": 0.0333, "loss_rpn_bbox": 0.05528, "loss_cls": 0.24326, "acc": 91.8916, "loss_bbox": 0.26816, "loss_mask": 0.25767, "loss": 0.85767, "time": 0.21824} -{"mode": "train", "epoch": 5, "iter": 3200, "lr": 0.0002, "memory": 5941, "data_time": 0.04115, "loss_rpn_cls": 0.03643, "loss_rpn_bbox": 0.05636, "loss_cls": 0.25015, "acc": 91.70728, "loss_bbox": 0.26881, "loss_mask": 0.2623, "loss": 0.87405, "time": 0.21134} -{"mode": "train", "epoch": 5, "iter": 3250, "lr": 0.0002, "memory": 5941, "data_time": 0.03869, "loss_rpn_cls": 0.03435, "loss_rpn_bbox": 0.05869, "loss_cls": 0.24501, "acc": 91.90576, "loss_bbox": 0.26615, "loss_mask": 0.26245, "loss": 0.86665, "time": 0.21157} -{"mode": "train", "epoch": 5, "iter": 3300, "lr": 0.0002, "memory": 5941, "data_time": 0.03269, "loss_rpn_cls": 0.03466, "loss_rpn_bbox": 0.05237, "loss_cls": 0.23683, "acc": 92.16846, "loss_bbox": 0.25078, "loss_mask": 0.25439, "loss": 0.82902, "time": 0.21038} -{"mode": "train", "epoch": 5, "iter": 3350, "lr": 0.0002, "memory": 5941, "data_time": 0.03806, "loss_rpn_cls": 0.0331, "loss_rpn_bbox": 0.05611, "loss_cls": 0.23272, "acc": 92.18311, "loss_bbox": 0.25999, "loss_mask": 0.2528, "loss": 0.83471, "time": 0.21083} -{"mode": "train", "epoch": 5, "iter": 3400, "lr": 0.0002, "memory": 5941, "data_time": 0.03748, "loss_rpn_cls": 0.03589, "loss_rpn_bbox": 0.05659, "loss_cls": 0.23444, "acc": 92.24927, "loss_bbox": 0.25337, "loss_mask": 0.25501, "loss": 0.83531, "time": 0.22073} -{"mode": "train", "epoch": 5, "iter": 3450, "lr": 0.0002, "memory": 5941, "data_time": 0.03343, "loss_rpn_cls": 0.03379, "loss_rpn_bbox": 0.05517, "loss_cls": 0.23892, "acc": 92.00171, "loss_bbox": 0.26485, "loss_mask": 0.25712, "loss": 0.84984, "time": 0.20587} -{"mode": "train", "epoch": 5, "iter": 3500, "lr": 0.0002, "memory": 5941, "data_time": 0.03468, "loss_rpn_cls": 0.03879, "loss_rpn_bbox": 0.05769, "loss_cls": 0.24312, "acc": 91.90454, "loss_bbox": 0.26282, "loss_mask": 0.25392, "loss": 0.85636, "time": 0.21247} -{"mode": "train", "epoch": 5, "iter": 3550, "lr": 0.0002, "memory": 5941, "data_time": 0.04086, "loss_rpn_cls": 0.03598, "loss_rpn_bbox": 0.0583, "loss_cls": 0.2454, "acc": 91.88159, "loss_bbox": 0.26542, "loss_mask": 0.26179, "loss": 0.86689, "time": 0.20857} -{"mode": "train", "epoch": 5, "iter": 3600, "lr": 0.0002, "memory": 5941, "data_time": 0.041, "loss_rpn_cls": 0.03659, "loss_rpn_bbox": 0.05969, "loss_cls": 0.24217, "acc": 91.90161, "loss_bbox": 0.26478, "loss_mask": 0.25654, "loss": 0.85977, "time": 0.2204} -{"mode": "train", "epoch": 5, "iter": 3650, "lr": 0.0002, "memory": 5941, "data_time": 0.04198, 
"loss_rpn_cls": 0.03541, "loss_rpn_bbox": 0.05828, "loss_cls": 0.23328, "acc": 92.23413, "loss_bbox": 0.25902, "loss_mask": 0.24839, "loss": 0.83438, "time": 0.21259} -{"mode": "train", "epoch": 5, "iter": 3700, "lr": 0.0002, "memory": 5941, "data_time": 0.03488, "loss_rpn_cls": 0.03331, "loss_rpn_bbox": 0.05543, "loss_cls": 0.23502, "acc": 92.25684, "loss_bbox": 0.25577, "loss_mask": 0.24944, "loss": 0.82898, "time": 0.20707} -{"mode": "train", "epoch": 5, "iter": 3750, "lr": 0.0002, "memory": 5941, "data_time": 0.04133, "loss_rpn_cls": 0.03393, "loss_rpn_bbox": 0.055, "loss_cls": 0.23854, "acc": 92.03149, "loss_bbox": 0.25595, "loss_mask": 0.24983, "loss": 0.83324, "time": 0.21069} -{"mode": "train", "epoch": 5, "iter": 3800, "lr": 0.0002, "memory": 5941, "data_time": 0.03796, "loss_rpn_cls": 0.03076, "loss_rpn_bbox": 0.05392, "loss_cls": 0.24529, "acc": 91.98779, "loss_bbox": 0.25761, "loss_mask": 0.25486, "loss": 0.84244, "time": 0.20645} -{"mode": "train", "epoch": 5, "iter": 3850, "lr": 0.0002, "memory": 5941, "data_time": 0.03341, "loss_rpn_cls": 0.03206, "loss_rpn_bbox": 0.05338, "loss_cls": 0.2289, "acc": 92.45679, "loss_bbox": 0.24921, "loss_mask": 0.24617, "loss": 0.80973, "time": 0.20237} -{"mode": "train", "epoch": 5, "iter": 3900, "lr": 0.0002, "memory": 5941, "data_time": 0.04191, "loss_rpn_cls": 0.03211, "loss_rpn_bbox": 0.05313, "loss_cls": 0.23198, "acc": 92.19165, "loss_bbox": 0.25655, "loss_mask": 0.25833, "loss": 0.83211, "time": 0.21221} -{"mode": "train", "epoch": 5, "iter": 3950, "lr": 0.0002, "memory": 5941, "data_time": 0.03983, "loss_rpn_cls": 0.03641, "loss_rpn_bbox": 0.05327, "loss_cls": 0.23574, "acc": 92.23804, "loss_bbox": 0.25526, "loss_mask": 0.25927, "loss": 0.83995, "time": 0.20715} -{"mode": "train", "epoch": 5, "iter": 4000, "lr": 0.0002, "memory": 5941, "data_time": 0.04002, "loss_rpn_cls": 0.0311, "loss_rpn_bbox": 0.05426, "loss_cls": 0.2322, "acc": 92.18994, "loss_bbox": 0.25372, "loss_mask": 0.2463, "loss": 0.81758, "time": 0.2067} -{"mode": "train", "epoch": 5, "iter": 4050, "lr": 0.0002, "memory": 5941, "data_time": 0.03685, "loss_rpn_cls": 0.03543, "loss_rpn_bbox": 0.05882, "loss_cls": 0.24153, "acc": 92.02197, "loss_bbox": 0.25985, "loss_mask": 0.24941, "loss": 0.84503, "time": 0.21838} -{"mode": "train", "epoch": 5, "iter": 4100, "lr": 0.0002, "memory": 5941, "data_time": 0.03959, "loss_rpn_cls": 0.03635, "loss_rpn_bbox": 0.05789, "loss_cls": 0.24728, "acc": 91.77271, "loss_bbox": 0.27347, "loss_mask": 0.25827, "loss": 0.87325, "time": 0.21251} -{"mode": "train", "epoch": 5, "iter": 4150, "lr": 0.0002, "memory": 5941, "data_time": 0.0371, "loss_rpn_cls": 0.033, "loss_rpn_bbox": 0.05278, "loss_cls": 0.23813, "acc": 92.09961, "loss_bbox": 0.2558, "loss_mask": 0.24958, "loss": 0.82928, "time": 0.20901} -{"mode": "train", "epoch": 5, "iter": 4200, "lr": 0.0002, "memory": 5941, "data_time": 0.03365, "loss_rpn_cls": 0.03678, "loss_rpn_bbox": 0.05815, "loss_cls": 0.23803, "acc": 92.15576, "loss_bbox": 0.25691, "loss_mask": 0.26011, "loss": 0.84998, "time": 0.20564} -{"mode": "train", "epoch": 5, "iter": 4250, "lr": 0.0002, "memory": 5941, "data_time": 0.03483, "loss_rpn_cls": 0.03202, "loss_rpn_bbox": 0.05514, "loss_cls": 0.23395, "acc": 92.25122, "loss_bbox": 0.25647, "loss_mask": 0.25114, "loss": 0.82871, "time": 0.21853} -{"mode": "train", "epoch": 5, "iter": 4300, "lr": 0.0002, "memory": 5941, "data_time": 0.03368, "loss_rpn_cls": 0.03009, "loss_rpn_bbox": 0.05264, "loss_cls": 0.23176, "acc": 92.48999, "loss_bbox": 0.24771, "loss_mask": 0.25868, 
"loss": 0.82088, "time": 0.20185} -{"mode": "train", "epoch": 5, "iter": 4350, "lr": 0.0002, "memory": 5941, "data_time": 0.04602, "loss_rpn_cls": 0.03466, "loss_rpn_bbox": 0.05634, "loss_cls": 0.23813, "acc": 91.94409, "loss_bbox": 0.26455, "loss_mask": 0.25302, "loss": 0.8467, "time": 0.21998} -{"mode": "train", "epoch": 5, "iter": 4400, "lr": 0.0002, "memory": 5941, "data_time": 0.03313, "loss_rpn_cls": 0.03411, "loss_rpn_bbox": 0.05907, "loss_cls": 0.25261, "acc": 91.61719, "loss_bbox": 0.27417, "loss_mask": 0.25448, "loss": 0.87444, "time": 0.2187} -{"mode": "train", "epoch": 5, "iter": 4450, "lr": 0.0002, "memory": 5941, "data_time": 0.04025, "loss_rpn_cls": 0.03302, "loss_rpn_bbox": 0.05304, "loss_cls": 0.23152, "acc": 92.2644, "loss_bbox": 0.24903, "loss_mask": 0.24111, "loss": 0.80772, "time": 0.20524} -{"mode": "train", "epoch": 5, "iter": 4500, "lr": 0.0002, "memory": 5941, "data_time": 0.0375, "loss_rpn_cls": 0.03209, "loss_rpn_bbox": 0.0491, "loss_cls": 0.22836, "acc": 92.36646, "loss_bbox": 0.25033, "loss_mask": 0.24683, "loss": 0.80671, "time": 0.20227} -{"mode": "train", "epoch": 5, "iter": 4550, "lr": 0.0002, "memory": 5941, "data_time": 0.03997, "loss_rpn_cls": 0.03247, "loss_rpn_bbox": 0.05385, "loss_cls": 0.23618, "acc": 92.02979, "loss_bbox": 0.26085, "loss_mask": 0.25809, "loss": 0.84144, "time": 0.2098} -{"mode": "train", "epoch": 5, "iter": 4600, "lr": 0.0002, "memory": 5941, "data_time": 0.03909, "loss_rpn_cls": 0.03356, "loss_rpn_bbox": 0.05271, "loss_cls": 0.23848, "acc": 92.26416, "loss_bbox": 0.25529, "loss_mask": 0.24885, "loss": 0.82889, "time": 0.20239} -{"mode": "train", "epoch": 5, "iter": 4650, "lr": 0.0002, "memory": 5941, "data_time": 0.04098, "loss_rpn_cls": 0.03442, "loss_rpn_bbox": 0.05432, "loss_cls": 0.24313, "acc": 92.06494, "loss_bbox": 0.25598, "loss_mask": 0.2571, "loss": 0.84496, "time": 0.20068} -{"mode": "train", "epoch": 5, "iter": 4700, "lr": 0.0002, "memory": 5941, "data_time": 0.04067, "loss_rpn_cls": 0.03666, "loss_rpn_bbox": 0.05673, "loss_cls": 0.25051, "acc": 91.84937, "loss_bbox": 0.26376, "loss_mask": 0.2572, "loss": 0.86486, "time": 0.21351} -{"mode": "train", "epoch": 5, "iter": 4750, "lr": 0.0002, "memory": 5941, "data_time": 0.03848, "loss_rpn_cls": 0.03592, "loss_rpn_bbox": 0.05743, "loss_cls": 0.25023, "acc": 91.71704, "loss_bbox": 0.26814, "loss_mask": 0.26412, "loss": 0.87585, "time": 0.21619} -{"mode": "train", "epoch": 5, "iter": 4800, "lr": 0.0002, "memory": 5941, "data_time": 0.03245, "loss_rpn_cls": 0.03429, "loss_rpn_bbox": 0.0515, "loss_cls": 0.24598, "acc": 91.94995, "loss_bbox": 0.26126, "loss_mask": 0.25057, "loss": 0.8436, "time": 0.21234} -{"mode": "train", "epoch": 5, "iter": 4850, "lr": 0.0002, "memory": 5941, "data_time": 0.0381, "loss_rpn_cls": 0.03747, "loss_rpn_bbox": 0.05936, "loss_cls": 0.24706, "acc": 91.79053, "loss_bbox": 0.27015, "loss_mask": 0.26206, "loss": 0.8761, "time": 0.21786} -{"mode": "train", "epoch": 5, "iter": 4900, "lr": 0.0002, "memory": 5941, "data_time": 0.03596, "loss_rpn_cls": 0.03288, "loss_rpn_bbox": 0.05719, "loss_cls": 0.23348, "acc": 92.36694, "loss_bbox": 0.2496, "loss_mask": 0.25015, "loss": 0.8233, "time": 0.20217} -{"mode": "train", "epoch": 5, "iter": 4950, "lr": 0.0002, "memory": 5941, "data_time": 0.03465, "loss_rpn_cls": 0.0333, "loss_rpn_bbox": 0.05726, "loss_cls": 0.24368, "acc": 91.94238, "loss_bbox": 0.26355, "loss_mask": 0.26144, "loss": 0.85924, "time": 0.21517} -{"mode": "train", "epoch": 5, "iter": 5000, "lr": 0.0002, "memory": 5941, "data_time": 0.03481, 
"loss_rpn_cls": 0.03048, "loss_rpn_bbox": 0.05322, "loss_cls": 0.23205, "acc": 92.32666, "loss_bbox": 0.24734, "loss_mask": 0.25501, "loss": 0.81809, "time": 0.20365} -{"mode": "train", "epoch": 5, "iter": 5050, "lr": 0.0002, "memory": 5941, "data_time": 0.03529, "loss_rpn_cls": 0.03179, "loss_rpn_bbox": 0.05593, "loss_cls": 0.23667, "acc": 92.2395, "loss_bbox": 0.24868, "loss_mask": 0.24749, "loss": 0.82057, "time": 0.20477} -{"mode": "train", "epoch": 5, "iter": 5100, "lr": 0.0002, "memory": 5941, "data_time": 0.03019, "loss_rpn_cls": 0.03678, "loss_rpn_bbox": 0.05609, "loss_cls": 0.23422, "acc": 92.24316, "loss_bbox": 0.25691, "loss_mask": 0.25855, "loss": 0.84255, "time": 0.2043} -{"mode": "train", "epoch": 5, "iter": 5150, "lr": 0.0002, "memory": 5941, "data_time": 0.0431, "loss_rpn_cls": 0.03434, "loss_rpn_bbox": 0.05409, "loss_cls": 0.23024, "acc": 92.28076, "loss_bbox": 0.25938, "loss_mask": 0.2521, "loss": 0.83015, "time": 0.21789} -{"mode": "train", "epoch": 5, "iter": 5200, "lr": 0.0002, "memory": 5941, "data_time": 0.04186, "loss_rpn_cls": 0.03356, "loss_rpn_bbox": 0.05457, "loss_cls": 0.23462, "acc": 92.18433, "loss_bbox": 0.25594, "loss_mask": 0.25021, "loss": 0.82891, "time": 0.21461} -{"mode": "train", "epoch": 5, "iter": 5250, "lr": 0.0002, "memory": 5941, "data_time": 0.0353, "loss_rpn_cls": 0.03057, "loss_rpn_bbox": 0.05212, "loss_cls": 0.23151, "acc": 92.32642, "loss_bbox": 0.25117, "loss_mask": 0.25458, "loss": 0.81995, "time": 0.20075} -{"mode": "train", "epoch": 5, "iter": 5300, "lr": 0.0002, "memory": 5941, "data_time": 0.0353, "loss_rpn_cls": 0.0333, "loss_rpn_bbox": 0.05551, "loss_cls": 0.23631, "acc": 92.02319, "loss_bbox": 0.25691, "loss_mask": 0.25381, "loss": 0.83585, "time": 0.20989} -{"mode": "train", "epoch": 5, "iter": 5350, "lr": 0.0002, "memory": 5941, "data_time": 0.03356, "loss_rpn_cls": 0.03148, "loss_rpn_bbox": 0.05312, "loss_cls": 0.22754, "acc": 92.31958, "loss_bbox": 0.25155, "loss_mask": 0.25078, "loss": 0.81448, "time": 0.21746} -{"mode": "train", "epoch": 5, "iter": 5400, "lr": 0.0002, "memory": 5941, "data_time": 0.03802, "loss_rpn_cls": 0.03417, "loss_rpn_bbox": 0.05516, "loss_cls": 0.24195, "acc": 92.01294, "loss_bbox": 0.26276, "loss_mask": 0.25846, "loss": 0.8525, "time": 0.20353} -{"mode": "train", "epoch": 5, "iter": 5450, "lr": 0.0002, "memory": 5941, "data_time": 0.03369, "loss_rpn_cls": 0.03706, "loss_rpn_bbox": 0.05765, "loss_cls": 0.23433, "acc": 92.14429, "loss_bbox": 0.25875, "loss_mask": 0.2545, "loss": 0.8423, "time": 0.19913} -{"mode": "train", "epoch": 5, "iter": 5500, "lr": 0.0002, "memory": 5941, "data_time": 0.04333, "loss_rpn_cls": 0.03649, "loss_rpn_bbox": 0.0565, "loss_cls": 0.24173, "acc": 91.98877, "loss_bbox": 0.26166, "loss_mask": 0.25392, "loss": 0.8503, "time": 0.21471} -{"mode": "train", "epoch": 5, "iter": 5550, "lr": 0.0002, "memory": 5941, "data_time": 0.04119, "loss_rpn_cls": 0.03401, "loss_rpn_bbox": 0.05796, "loss_cls": 0.24112, "acc": 92.03589, "loss_bbox": 0.26468, "loss_mask": 0.25271, "loss": 0.85048, "time": 0.20896} -{"mode": "train", "epoch": 5, "iter": 5600, "lr": 0.0002, "memory": 5941, "data_time": 0.03599, "loss_rpn_cls": 0.03511, "loss_rpn_bbox": 0.05613, "loss_cls": 0.23384, "acc": 92.10498, "loss_bbox": 0.262, "loss_mask": 0.2538, "loss": 0.84088, "time": 0.2086} -{"mode": "train", "epoch": 5, "iter": 5650, "lr": 0.0002, "memory": 5941, "data_time": 0.03833, "loss_rpn_cls": 0.03552, "loss_rpn_bbox": 0.05873, "loss_cls": 0.24602, "acc": 91.84546, "loss_bbox": 0.26288, "loss_mask": 0.25427, 
"loss": 0.85742, "time": 0.20343} -{"mode": "train", "epoch": 5, "iter": 5700, "lr": 0.0002, "memory": 5941, "data_time": 0.03743, "loss_rpn_cls": 0.0326, "loss_rpn_bbox": 0.05322, "loss_cls": 0.22985, "acc": 92.21143, "loss_bbox": 0.25755, "loss_mask": 0.2526, "loss": 0.82582, "time": 0.21068} -{"mode": "train", "epoch": 5, "iter": 5750, "lr": 0.0002, "memory": 5941, "data_time": 0.03944, "loss_rpn_cls": 0.03263, "loss_rpn_bbox": 0.05267, "loss_cls": 0.24054, "acc": 91.98389, "loss_bbox": 0.26415, "loss_mask": 0.25099, "loss": 0.84098, "time": 0.21307} -{"mode": "train", "epoch": 5, "iter": 5800, "lr": 0.0002, "memory": 5941, "data_time": 0.03565, "loss_rpn_cls": 0.03492, "loss_rpn_bbox": 0.05649, "loss_cls": 0.22816, "acc": 92.44019, "loss_bbox": 0.24697, "loss_mask": 0.25351, "loss": 0.82005, "time": 0.20864} -{"mode": "train", "epoch": 5, "iter": 5850, "lr": 0.0002, "memory": 5941, "data_time": 0.04468, "loss_rpn_cls": 0.03386, "loss_rpn_bbox": 0.05592, "loss_cls": 0.2284, "acc": 92.28198, "loss_bbox": 0.2609, "loss_mask": 0.25596, "loss": 0.83505, "time": 0.21256} -{"mode": "train", "epoch": 5, "iter": 5900, "lr": 0.0002, "memory": 5941, "data_time": 0.03768, "loss_rpn_cls": 0.03718, "loss_rpn_bbox": 0.05989, "loss_cls": 0.23886, "acc": 91.99561, "loss_bbox": 0.26798, "loss_mask": 0.26141, "loss": 0.86531, "time": 0.21055} -{"mode": "train", "epoch": 5, "iter": 5950, "lr": 0.0002, "memory": 5941, "data_time": 0.03247, "loss_rpn_cls": 0.03229, "loss_rpn_bbox": 0.05433, "loss_cls": 0.24116, "acc": 92.07568, "loss_bbox": 0.25559, "loss_mask": 0.25565, "loss": 0.83902, "time": 0.19981} -{"mode": "train", "epoch": 5, "iter": 6000, "lr": 0.0002, "memory": 5941, "data_time": 0.03891, "loss_rpn_cls": 0.03489, "loss_rpn_bbox": 0.05484, "loss_cls": 0.23645, "acc": 92.14478, "loss_bbox": 0.25492, "loss_mask": 0.25466, "loss": 0.83577, "time": 0.20415} -{"mode": "train", "epoch": 5, "iter": 6050, "lr": 0.0002, "memory": 5941, "data_time": 0.04091, "loss_rpn_cls": 0.03575, "loss_rpn_bbox": 0.05406, "loss_cls": 0.22596, "acc": 92.39233, "loss_bbox": 0.24926, "loss_mask": 0.25286, "loss": 0.81788, "time": 0.2047} -{"mode": "train", "epoch": 5, "iter": 6100, "lr": 0.0002, "memory": 5941, "data_time": 0.03617, "loss_rpn_cls": 0.03391, "loss_rpn_bbox": 0.05604, "loss_cls": 0.23532, "acc": 92.08032, "loss_bbox": 0.25709, "loss_mask": 0.24928, "loss": 0.83164, "time": 0.20636} -{"mode": "train", "epoch": 5, "iter": 6150, "lr": 0.0002, "memory": 5941, "data_time": 0.04055, "loss_rpn_cls": 0.03553, "loss_rpn_bbox": 0.05674, "loss_cls": 0.23111, "acc": 92.17017, "loss_bbox": 0.25657, "loss_mask": 0.2572, "loss": 0.83715, "time": 0.20711} -{"mode": "train", "epoch": 5, "iter": 6200, "lr": 0.0002, "memory": 5941, "data_time": 0.03161, "loss_rpn_cls": 0.03446, "loss_rpn_bbox": 0.05541, "loss_cls": 0.23403, "acc": 92.19019, "loss_bbox": 0.25371, "loss_mask": 0.25626, "loss": 0.83388, "time": 0.20687} -{"mode": "train", "epoch": 5, "iter": 6250, "lr": 0.0002, "memory": 5941, "data_time": 0.034, "loss_rpn_cls": 0.03437, "loss_rpn_bbox": 0.05684, "loss_cls": 0.23706, "acc": 92.05908, "loss_bbox": 0.25502, "loss_mask": 0.24931, "loss": 0.8326, "time": 0.20398} -{"mode": "train", "epoch": 5, "iter": 6300, "lr": 0.0002, "memory": 5941, "data_time": 0.03664, "loss_rpn_cls": 0.03228, "loss_rpn_bbox": 0.04999, "loss_cls": 0.22914, "acc": 92.44165, "loss_bbox": 0.24924, "loss_mask": 0.25012, "loss": 0.81077, "time": 0.20189} -{"mode": "train", "epoch": 5, "iter": 6350, "lr": 0.0002, "memory": 5941, "data_time": 0.04073, 
"loss_rpn_cls": 0.03634, "loss_rpn_bbox": 0.05605, "loss_cls": 0.24662, "acc": 91.77222, "loss_bbox": 0.26605, "loss_mask": 0.25251, "loss": 0.85756, "time": 0.21485} -{"mode": "train", "epoch": 5, "iter": 6400, "lr": 0.0002, "memory": 5941, "data_time": 0.03479, "loss_rpn_cls": 0.03687, "loss_rpn_bbox": 0.05611, "loss_cls": 0.25846, "acc": 91.59717, "loss_bbox": 0.2716, "loss_mask": 0.25675, "loss": 0.8798, "time": 0.20968} -{"mode": "train", "epoch": 5, "iter": 6450, "lr": 0.0002, "memory": 5941, "data_time": 0.03452, "loss_rpn_cls": 0.03363, "loss_rpn_bbox": 0.05537, "loss_cls": 0.23812, "acc": 92.2251, "loss_bbox": 0.25353, "loss_mask": 0.24988, "loss": 0.83055, "time": 0.20772} -{"mode": "train", "epoch": 5, "iter": 6500, "lr": 0.0002, "memory": 5941, "data_time": 0.03837, "loss_rpn_cls": 0.03362, "loss_rpn_bbox": 0.05351, "loss_cls": 0.23782, "acc": 92.11133, "loss_bbox": 0.25564, "loss_mask": 0.25389, "loss": 0.83449, "time": 0.21062} -{"mode": "train", "epoch": 5, "iter": 6550, "lr": 0.0002, "memory": 5941, "data_time": 0.034, "loss_rpn_cls": 0.0316, "loss_rpn_bbox": 0.05294, "loss_cls": 0.2327, "acc": 92.31348, "loss_bbox": 0.25356, "loss_mask": 0.25378, "loss": 0.82458, "time": 0.2054} -{"mode": "train", "epoch": 5, "iter": 6600, "lr": 0.0002, "memory": 5941, "data_time": 0.04261, "loss_rpn_cls": 0.03407, "loss_rpn_bbox": 0.05318, "loss_cls": 0.24766, "acc": 91.83325, "loss_bbox": 0.26742, "loss_mask": 0.25561, "loss": 0.85794, "time": 0.24019} -{"mode": "train", "epoch": 5, "iter": 6650, "lr": 0.0002, "memory": 5941, "data_time": 0.03553, "loss_rpn_cls": 0.03424, "loss_rpn_bbox": 0.05399, "loss_cls": 0.24065, "acc": 92.03931, "loss_bbox": 0.26425, "loss_mask": 0.25598, "loss": 0.84912, "time": 0.19901} -{"mode": "train", "epoch": 5, "iter": 6700, "lr": 0.0002, "memory": 5941, "data_time": 0.03993, "loss_rpn_cls": 0.03442, "loss_rpn_bbox": 0.05411, "loss_cls": 0.23493, "acc": 92.21558, "loss_bbox": 0.25381, "loss_mask": 0.2584, "loss": 0.83567, "time": 0.20665} -{"mode": "train", "epoch": 5, "iter": 6750, "lr": 0.0002, "memory": 5941, "data_time": 0.04051, "loss_rpn_cls": 0.03641, "loss_rpn_bbox": 0.05815, "loss_cls": 0.24347, "acc": 91.97681, "loss_bbox": 0.26042, "loss_mask": 0.25049, "loss": 0.84893, "time": 0.21213} -{"mode": "train", "epoch": 5, "iter": 6800, "lr": 0.0002, "memory": 5941, "data_time": 0.03386, "loss_rpn_cls": 0.03062, "loss_rpn_bbox": 0.0515, "loss_cls": 0.22389, "acc": 92.46655, "loss_bbox": 0.24765, "loss_mask": 0.24681, "loss": 0.80047, "time": 0.20482} -{"mode": "train", "epoch": 5, "iter": 6850, "lr": 0.0002, "memory": 5941, "data_time": 0.04841, "loss_rpn_cls": 0.03414, "loss_rpn_bbox": 0.05628, "loss_cls": 0.24394, "acc": 91.86938, "loss_bbox": 0.2648, "loss_mask": 0.26044, "loss": 0.8596, "time": 0.23988} -{"mode": "train", "epoch": 5, "iter": 6900, "lr": 0.0002, "memory": 5941, "data_time": 0.03222, "loss_rpn_cls": 0.03301, "loss_rpn_bbox": 0.05333, "loss_cls": 0.23963, "acc": 92.19751, "loss_bbox": 0.25323, "loss_mask": 0.25338, "loss": 0.83258, "time": 0.20556} -{"mode": "train", "epoch": 5, "iter": 6950, "lr": 0.0002, "memory": 5941, "data_time": 0.03297, "loss_rpn_cls": 0.03554, "loss_rpn_bbox": 0.05558, "loss_cls": 0.24807, "acc": 91.84961, "loss_bbox": 0.26817, "loss_mask": 0.25648, "loss": 0.86384, "time": 0.19652} -{"mode": "train", "epoch": 5, "iter": 7000, "lr": 0.0002, "memory": 5941, "data_time": 0.03435, "loss_rpn_cls": 0.03422, "loss_rpn_bbox": 0.05691, "loss_cls": 0.2339, "acc": 92.18579, "loss_bbox": 0.25496, "loss_mask": 0.25417, 
"loss": 0.83416, "time": 0.24243} -{"mode": "train", "epoch": 5, "iter": 7050, "lr": 0.0002, "memory": 5941, "data_time": 0.03642, "loss_rpn_cls": 0.03078, "loss_rpn_bbox": 0.05263, "loss_cls": 0.23885, "acc": 92.21069, "loss_bbox": 0.25139, "loss_mask": 0.25588, "loss": 0.82952, "time": 0.19434} -{"mode": "train", "epoch": 5, "iter": 7100, "lr": 0.0002, "memory": 5941, "data_time": 0.04369, "loss_rpn_cls": 0.03456, "loss_rpn_bbox": 0.05453, "loss_cls": 0.24601, "acc": 91.92871, "loss_bbox": 0.26176, "loss_mask": 0.25198, "loss": 0.84884, "time": 0.20669} -{"mode": "train", "epoch": 5, "iter": 7150, "lr": 0.0002, "memory": 5941, "data_time": 0.03218, "loss_rpn_cls": 0.0348, "loss_rpn_bbox": 0.05407, "loss_cls": 0.233, "acc": 92.32666, "loss_bbox": 0.2525, "loss_mask": 0.25296, "loss": 0.82734, "time": 0.20708} -{"mode": "train", "epoch": 5, "iter": 7200, "lr": 0.0002, "memory": 5941, "data_time": 0.03218, "loss_rpn_cls": 0.03174, "loss_rpn_bbox": 0.04914, "loss_cls": 0.22777, "acc": 92.70239, "loss_bbox": 0.23416, "loss_mask": 0.24945, "loss": 0.79226, "time": 0.27562} -{"mode": "train", "epoch": 5, "iter": 7250, "lr": 0.0002, "memory": 5941, "data_time": 0.0417, "loss_rpn_cls": 0.0328, "loss_rpn_bbox": 0.05393, "loss_cls": 0.24001, "acc": 92.0105, "loss_bbox": 0.26091, "loss_mask": 0.25541, "loss": 0.84307, "time": 0.20296} -{"mode": "train", "epoch": 5, "iter": 7300, "lr": 0.0002, "memory": 5941, "data_time": 0.03494, "loss_rpn_cls": 0.03238, "loss_rpn_bbox": 0.05407, "loss_cls": 0.23161, "acc": 92.34741, "loss_bbox": 0.25149, "loss_mask": 0.25406, "loss": 0.82361, "time": 0.19804} -{"mode": "val", "epoch": 5, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3218, "bbox_mAP_50": 0.5426, "bbox_mAP_75": 0.3427, "bbox_mAP_s": 0.1928, "bbox_mAP_m": 0.3552, "bbox_mAP_l": 0.4114, "bbox_mAP_copypaste": "0.3218 0.5426 0.3427 0.1928 0.3552 0.4114", "segm_mAP": 0.3148, "segm_mAP_50": 0.5144, "segm_mAP_75": 0.3359, "segm_mAP_s": 0.1465, "segm_mAP_m": 0.3402, "segm_mAP_l": 0.4574, "segm_mAP_copypaste": "0.3148 0.5144 0.3359 0.1465 0.3402 0.4574"} -{"mode": "train", "epoch": 6, "iter": 50, "lr": 0.0002, "memory": 5941, "data_time": 0.10846, "loss_rpn_cls": 0.03322, "loss_rpn_bbox": 0.05465, "loss_cls": 0.2323, "acc": 92.12036, "loss_bbox": 0.26346, "loss_mask": 0.25501, "loss": 0.83864, "time": 0.29328} -{"mode": "train", "epoch": 6, "iter": 100, "lr": 0.0002, "memory": 5941, "data_time": 0.04459, "loss_rpn_cls": 0.03022, "loss_rpn_bbox": 0.054, "loss_cls": 0.23231, "acc": 92.05029, "loss_bbox": 0.272, "loss_mask": 0.25169, "loss": 0.84021, "time": 0.21945} -{"mode": "train", "epoch": 6, "iter": 150, "lr": 0.0002, "memory": 5941, "data_time": 0.03938, "loss_rpn_cls": 0.02942, "loss_rpn_bbox": 0.05327, "loss_cls": 0.23202, "acc": 92.21655, "loss_bbox": 0.25498, "loss_mask": 0.24727, "loss": 0.81695, "time": 0.21667} -{"mode": "train", "epoch": 6, "iter": 200, "lr": 0.0002, "memory": 5941, "data_time": 0.04352, "loss_rpn_cls": 0.03146, "loss_rpn_bbox": 0.05483, "loss_cls": 0.22022, "acc": 92.39136, "loss_bbox": 0.25206, "loss_mask": 0.2479, "loss": 0.80646, "time": 0.2185} -{"mode": "train", "epoch": 6, "iter": 250, "lr": 0.0002, "memory": 5941, "data_time": 0.04188, "loss_rpn_cls": 0.03249, "loss_rpn_bbox": 0.05423, "loss_cls": 0.22076, "acc": 92.57544, "loss_bbox": 0.247, "loss_mask": 0.2459, "loss": 0.80039, "time": 0.22299} -{"mode": "train", "epoch": 6, "iter": 300, "lr": 0.0002, "memory": 5941, "data_time": 0.03872, "loss_rpn_cls": 0.03101, "loss_rpn_bbox": 0.05553, "loss_cls": 0.22435, "acc": 92.22119, 
"loss_bbox": 0.25773, "loss_mask": 0.24576, "loss": 0.81438, "time": 0.21879} -{"mode": "train", "epoch": 6, "iter": 350, "lr": 0.0002, "memory": 5941, "data_time": 0.04434, "loss_rpn_cls": 0.03177, "loss_rpn_bbox": 0.05379, "loss_cls": 0.22254, "acc": 92.29297, "loss_bbox": 0.25699, "loss_mask": 0.25011, "loss": 0.81521, "time": 0.21657} -{"mode": "train", "epoch": 6, "iter": 400, "lr": 0.0002, "memory": 5941, "data_time": 0.05144, "loss_rpn_cls": 0.03412, "loss_rpn_bbox": 0.05581, "loss_cls": 0.23602, "acc": 91.95435, "loss_bbox": 0.26271, "loss_mask": 0.24589, "loss": 0.83454, "time": 0.21938} -{"mode": "train", "epoch": 6, "iter": 450, "lr": 0.0002, "memory": 5941, "data_time": 0.04046, "loss_rpn_cls": 0.03373, "loss_rpn_bbox": 0.05385, "loss_cls": 0.23019, "acc": 92.31641, "loss_bbox": 0.25031, "loss_mask": 0.23885, "loss": 0.80694, "time": 0.21416} -{"mode": "train", "epoch": 6, "iter": 500, "lr": 0.0002, "memory": 5941, "data_time": 0.04229, "loss_rpn_cls": 0.03142, "loss_rpn_bbox": 0.0554, "loss_cls": 0.22425, "acc": 92.44775, "loss_bbox": 0.24857, "loss_mask": 0.25022, "loss": 0.80986, "time": 0.21608} -{"mode": "train", "epoch": 6, "iter": 550, "lr": 0.0002, "memory": 5941, "data_time": 0.03599, "loss_rpn_cls": 0.0309, "loss_rpn_bbox": 0.04998, "loss_cls": 0.21939, "acc": 92.36841, "loss_bbox": 0.25261, "loss_mask": 0.24795, "loss": 0.80083, "time": 0.20753} -{"mode": "train", "epoch": 6, "iter": 600, "lr": 0.0002, "memory": 5941, "data_time": 0.0337, "loss_rpn_cls": 0.0338, "loss_rpn_bbox": 0.05329, "loss_cls": 0.23139, "acc": 92.19312, "loss_bbox": 0.25679, "loss_mask": 0.25215, "loss": 0.82742, "time": 0.21551} -{"mode": "train", "epoch": 6, "iter": 650, "lr": 0.0002, "memory": 5941, "data_time": 0.05248, "loss_rpn_cls": 0.02998, "loss_rpn_bbox": 0.05301, "loss_cls": 0.22746, "acc": 92.22705, "loss_bbox": 0.25713, "loss_mask": 0.24695, "loss": 0.81454, "time": 0.2141} -{"mode": "train", "epoch": 6, "iter": 700, "lr": 0.0002, "memory": 5941, "data_time": 0.03607, "loss_rpn_cls": 0.03292, "loss_rpn_bbox": 0.05601, "loss_cls": 0.24151, "acc": 91.86279, "loss_bbox": 0.26184, "loss_mask": 0.25387, "loss": 0.84615, "time": 0.21241} -{"mode": "train", "epoch": 6, "iter": 750, "lr": 0.0002, "memory": 5941, "data_time": 0.04378, "loss_rpn_cls": 0.03222, "loss_rpn_bbox": 0.05292, "loss_cls": 0.23386, "acc": 92.19604, "loss_bbox": 0.25615, "loss_mask": 0.24909, "loss": 0.82423, "time": 0.19972} -{"mode": "train", "epoch": 6, "iter": 800, "lr": 0.0002, "memory": 5941, "data_time": 0.03857, "loss_rpn_cls": 0.02989, "loss_rpn_bbox": 0.05471, "loss_cls": 0.22945, "acc": 92.16602, "loss_bbox": 0.25749, "loss_mask": 0.24937, "loss": 0.82091, "time": 0.21871} -{"mode": "train", "epoch": 6, "iter": 850, "lr": 0.0002, "memory": 5941, "data_time": 0.04089, "loss_rpn_cls": 0.03144, "loss_rpn_bbox": 0.05396, "loss_cls": 0.22719, "acc": 92.27173, "loss_bbox": 0.25619, "loss_mask": 0.24887, "loss": 0.81766, "time": 0.20757} -{"mode": "train", "epoch": 6, "iter": 900, "lr": 0.0002, "memory": 5941, "data_time": 0.03967, "loss_rpn_cls": 0.03186, "loss_rpn_bbox": 0.05589, "loss_cls": 0.23052, "acc": 92.22046, "loss_bbox": 0.25387, "loss_mask": 0.24553, "loss": 0.81766, "time": 0.21273} -{"mode": "train", "epoch": 6, "iter": 950, "lr": 0.0002, "memory": 5941, "data_time": 0.04274, "loss_rpn_cls": 0.03179, "loss_rpn_bbox": 0.0533, "loss_cls": 0.22302, "acc": 92.48145, "loss_bbox": 0.24429, "loss_mask": 0.24582, "loss": 0.79822, "time": 0.21478} -{"mode": "train", "epoch": 6, "iter": 1000, "lr": 0.0002, 
"memory": 5941, "data_time": 0.03863, "loss_rpn_cls": 0.03285, "loss_rpn_bbox": 0.05948, "loss_cls": 0.2421, "acc": 91.87695, "loss_bbox": 0.26362, "loss_mask": 0.25657, "loss": 0.85462, "time": 0.22471} -{"mode": "train", "epoch": 6, "iter": 1050, "lr": 0.0002, "memory": 5941, "data_time": 0.04098, "loss_rpn_cls": 0.03009, "loss_rpn_bbox": 0.05317, "loss_cls": 0.22554, "acc": 92.37964, "loss_bbox": 0.24689, "loss_mask": 0.24221, "loss": 0.7979, "time": 0.2105} -{"mode": "train", "epoch": 6, "iter": 1100, "lr": 0.0002, "memory": 5941, "data_time": 0.03857, "loss_rpn_cls": 0.03246, "loss_rpn_bbox": 0.05318, "loss_cls": 0.23403, "acc": 92.24561, "loss_bbox": 0.25199, "loss_mask": 0.24784, "loss": 0.81949, "time": 0.21339} -{"mode": "train", "epoch": 6, "iter": 1150, "lr": 0.0002, "memory": 5941, "data_time": 0.03473, "loss_rpn_cls": 0.0329, "loss_rpn_bbox": 0.05544, "loss_cls": 0.23381, "acc": 92.21948, "loss_bbox": 0.26209, "loss_mask": 0.25546, "loss": 0.8397, "time": 0.21363} -{"mode": "train", "epoch": 6, "iter": 1200, "lr": 0.0002, "memory": 5941, "data_time": 0.03139, "loss_rpn_cls": 0.03157, "loss_rpn_bbox": 0.05613, "loss_cls": 0.22687, "acc": 92.37695, "loss_bbox": 0.25177, "loss_mask": 0.24965, "loss": 0.81598, "time": 0.20629} -{"mode": "train", "epoch": 6, "iter": 1250, "lr": 0.0002, "memory": 5941, "data_time": 0.03497, "loss_rpn_cls": 0.02851, "loss_rpn_bbox": 0.05121, "loss_cls": 0.22502, "acc": 92.40137, "loss_bbox": 0.24926, "loss_mask": 0.24772, "loss": 0.80172, "time": 0.20218} -{"mode": "train", "epoch": 6, "iter": 1300, "lr": 0.0002, "memory": 5941, "data_time": 0.04184, "loss_rpn_cls": 0.03279, "loss_rpn_bbox": 0.05642, "loss_cls": 0.2356, "acc": 92.07129, "loss_bbox": 0.26292, "loss_mask": 0.25346, "loss": 0.84118, "time": 0.21756} -{"mode": "train", "epoch": 6, "iter": 1350, "lr": 0.0002, "memory": 5941, "data_time": 0.03676, "loss_rpn_cls": 0.03187, "loss_rpn_bbox": 0.0557, "loss_cls": 0.22713, "acc": 92.32129, "loss_bbox": 0.25707, "loss_mask": 0.25027, "loss": 0.82204, "time": 0.21278} -{"mode": "train", "epoch": 6, "iter": 1400, "lr": 0.0002, "memory": 5941, "data_time": 0.03829, "loss_rpn_cls": 0.03332, "loss_rpn_bbox": 0.05389, "loss_cls": 0.22792, "acc": 92.42725, "loss_bbox": 0.25337, "loss_mask": 0.25099, "loss": 0.8195, "time": 0.20891} -{"mode": "train", "epoch": 6, "iter": 1450, "lr": 0.0002, "memory": 5941, "data_time": 0.03995, "loss_rpn_cls": 0.03178, "loss_rpn_bbox": 0.05564, "loss_cls": 0.22887, "acc": 92.28857, "loss_bbox": 0.25667, "loss_mask": 0.24717, "loss": 0.82014, "time": 0.21476} -{"mode": "train", "epoch": 6, "iter": 1500, "lr": 0.0002, "memory": 5941, "data_time": 0.04153, "loss_rpn_cls": 0.0308, "loss_rpn_bbox": 0.05368, "loss_cls": 0.23697, "acc": 92.12695, "loss_bbox": 0.2558, "loss_mask": 0.25654, "loss": 0.83379, "time": 0.21709} -{"mode": "train", "epoch": 6, "iter": 1550, "lr": 0.0002, "memory": 5941, "data_time": 0.03904, "loss_rpn_cls": 0.03074, "loss_rpn_bbox": 0.05041, "loss_cls": 0.22937, "acc": 92.25513, "loss_bbox": 0.25188, "loss_mask": 0.25159, "loss": 0.81399, "time": 0.20153} -{"mode": "train", "epoch": 6, "iter": 1600, "lr": 0.0002, "memory": 5941, "data_time": 0.04517, "loss_rpn_cls": 0.03297, "loss_rpn_bbox": 0.05655, "loss_cls": 0.24303, "acc": 91.82031, "loss_bbox": 0.26719, "loss_mask": 0.25502, "loss": 0.85477, "time": 0.21459} -{"mode": "train", "epoch": 6, "iter": 1650, "lr": 0.0002, "memory": 5941, "data_time": 0.03643, "loss_rpn_cls": 0.03265, "loss_rpn_bbox": 0.05559, "loss_cls": 0.23006, "acc": 92.14722, 
"loss_bbox": 0.25832, "loss_mask": 0.25305, "loss": 0.82967, "time": 0.21763} -{"mode": "train", "epoch": 6, "iter": 1700, "lr": 0.0002, "memory": 5941, "data_time": 0.04121, "loss_rpn_cls": 0.02963, "loss_rpn_bbox": 0.05098, "loss_cls": 0.21279, "acc": 92.7666, "loss_bbox": 0.24265, "loss_mask": 0.24429, "loss": 0.78033, "time": 0.20727} -{"mode": "train", "epoch": 6, "iter": 1750, "lr": 0.0002, "memory": 5941, "data_time": 0.03892, "loss_rpn_cls": 0.02831, "loss_rpn_bbox": 0.0532, "loss_cls": 0.22577, "acc": 92.34229, "loss_bbox": 0.2537, "loss_mask": 0.2487, "loss": 0.80969, "time": 0.20936} -{"mode": "train", "epoch": 6, "iter": 1800, "lr": 0.0002, "memory": 5941, "data_time": 0.03966, "loss_rpn_cls": 0.03084, "loss_rpn_bbox": 0.05451, "loss_cls": 0.217, "acc": 92.68604, "loss_bbox": 0.24088, "loss_mask": 0.244, "loss": 0.78723, "time": 0.21515} -{"mode": "train", "epoch": 6, "iter": 1850, "lr": 0.0002, "memory": 5941, "data_time": 0.04118, "loss_rpn_cls": 0.03199, "loss_rpn_bbox": 0.05593, "loss_cls": 0.21669, "acc": 92.6377, "loss_bbox": 0.242, "loss_mask": 0.24613, "loss": 0.79275, "time": 0.2121} -{"mode": "train", "epoch": 6, "iter": 1900, "lr": 0.0002, "memory": 5941, "data_time": 0.04815, "loss_rpn_cls": 0.03368, "loss_rpn_bbox": 0.05317, "loss_cls": 0.23989, "acc": 91.95312, "loss_bbox": 0.26144, "loss_mask": 0.25142, "loss": 0.8396, "time": 0.21212} -{"mode": "train", "epoch": 6, "iter": 1950, "lr": 0.0002, "memory": 5941, "data_time": 0.03919, "loss_rpn_cls": 0.03303, "loss_rpn_bbox": 0.05555, "loss_cls": 0.22933, "acc": 92.41943, "loss_bbox": 0.25041, "loss_mask": 0.24763, "loss": 0.81596, "time": 0.21915} -{"mode": "train", "epoch": 6, "iter": 2000, "lr": 0.0002, "memory": 5941, "data_time": 0.03462, "loss_rpn_cls": 0.03437, "loss_rpn_bbox": 0.05363, "loss_cls": 0.23095, "acc": 92.39258, "loss_bbox": 0.25752, "loss_mask": 0.25208, "loss": 0.82855, "time": 0.20805} -{"mode": "train", "epoch": 6, "iter": 2050, "lr": 0.0002, "memory": 5941, "data_time": 0.04579, "loss_rpn_cls": 0.03301, "loss_rpn_bbox": 0.0565, "loss_cls": 0.24047, "acc": 92.12646, "loss_bbox": 0.25184, "loss_mask": 0.25311, "loss": 0.83492, "time": 0.20885} -{"mode": "train", "epoch": 6, "iter": 2100, "lr": 0.0002, "memory": 5941, "data_time": 0.03716, "loss_rpn_cls": 0.03212, "loss_rpn_bbox": 0.05595, "loss_cls": 0.23039, "acc": 92.29492, "loss_bbox": 0.2482, "loss_mask": 0.24933, "loss": 0.81598, "time": 0.20518} -{"mode": "train", "epoch": 6, "iter": 2150, "lr": 0.0002, "memory": 5941, "data_time": 0.03959, "loss_rpn_cls": 0.03142, "loss_rpn_bbox": 0.05343, "loss_cls": 0.23878, "acc": 92.12036, "loss_bbox": 0.25463, "loss_mask": 0.25296, "loss": 0.83122, "time": 0.21072} -{"mode": "train", "epoch": 6, "iter": 2200, "lr": 0.0002, "memory": 5941, "data_time": 0.04391, "loss_rpn_cls": 0.031, "loss_rpn_bbox": 0.05458, "loss_cls": 0.22288, "acc": 92.41602, "loss_bbox": 0.25282, "loss_mask": 0.2529, "loss": 0.81418, "time": 0.21968} -{"mode": "train", "epoch": 6, "iter": 2250, "lr": 0.0002, "memory": 5941, "data_time": 0.04555, "loss_rpn_cls": 0.03476, "loss_rpn_bbox": 0.05634, "loss_cls": 0.23888, "acc": 91.91577, "loss_bbox": 0.26495, "loss_mask": 0.25829, "loss": 0.85322, "time": 0.21678} -{"mode": "train", "epoch": 6, "iter": 2300, "lr": 0.0002, "memory": 5941, "data_time": 0.03954, "loss_rpn_cls": 0.03189, "loss_rpn_bbox": 0.05323, "loss_cls": 0.22185, "acc": 92.52441, "loss_bbox": 0.25235, "loss_mask": 0.25186, "loss": 0.81117, "time": 0.20911} -{"mode": "train", "epoch": 6, "iter": 2350, "lr": 0.0002, 
"memory": 5941, "data_time": 0.03799, "loss_rpn_cls": 0.03452, "loss_rpn_bbox": 0.05198, "loss_cls": 0.22479, "acc": 92.36255, "loss_bbox": 0.25174, "loss_mask": 0.24517, "loss": 0.8082, "time": 0.20577} -{"mode": "train", "epoch": 6, "iter": 2400, "lr": 0.0002, "memory": 5941, "data_time": 0.03743, "loss_rpn_cls": 0.03316, "loss_rpn_bbox": 0.05435, "loss_cls": 0.22097, "acc": 92.48633, "loss_bbox": 0.24863, "loss_mask": 0.24595, "loss": 0.80307, "time": 0.21915} -{"mode": "train", "epoch": 6, "iter": 2450, "lr": 0.0002, "memory": 5941, "data_time": 0.03637, "loss_rpn_cls": 0.0311, "loss_rpn_bbox": 0.05305, "loss_cls": 0.23325, "acc": 92.25464, "loss_bbox": 0.25841, "loss_mask": 0.24943, "loss": 0.82524, "time": 0.20675} -{"mode": "train", "epoch": 6, "iter": 2500, "lr": 0.0002, "memory": 5941, "data_time": 0.03657, "loss_rpn_cls": 0.03166, "loss_rpn_bbox": 0.05256, "loss_cls": 0.23993, "acc": 91.83472, "loss_bbox": 0.26652, "loss_mask": 0.25427, "loss": 0.84494, "time": 0.21467} -{"mode": "train", "epoch": 6, "iter": 2550, "lr": 0.0002, "memory": 5941, "data_time": 0.03562, "loss_rpn_cls": 0.03192, "loss_rpn_bbox": 0.05349, "loss_cls": 0.22764, "acc": 92.37427, "loss_bbox": 0.25211, "loss_mask": 0.24684, "loss": 0.81201, "time": 0.21648} -{"mode": "train", "epoch": 6, "iter": 2600, "lr": 0.0002, "memory": 5941, "data_time": 0.03826, "loss_rpn_cls": 0.03392, "loss_rpn_bbox": 0.05632, "loss_cls": 0.22994, "acc": 92.29712, "loss_bbox": 0.25274, "loss_mask": 0.25248, "loss": 0.8254, "time": 0.20702} -{"mode": "train", "epoch": 6, "iter": 2650, "lr": 0.0002, "memory": 5941, "data_time": 0.03398, "loss_rpn_cls": 0.02963, "loss_rpn_bbox": 0.05161, "loss_cls": 0.23217, "acc": 92.30322, "loss_bbox": 0.25396, "loss_mask": 0.25188, "loss": 0.81924, "time": 0.20524} -{"mode": "train", "epoch": 6, "iter": 2700, "lr": 0.0002, "memory": 5941, "data_time": 0.04335, "loss_rpn_cls": 0.03333, "loss_rpn_bbox": 0.05835, "loss_cls": 0.23782, "acc": 91.84717, "loss_bbox": 0.2667, "loss_mask": 0.2512, "loss": 0.84741, "time": 0.21939} -{"mode": "train", "epoch": 6, "iter": 2750, "lr": 0.0002, "memory": 5941, "data_time": 0.03902, "loss_rpn_cls": 0.02926, "loss_rpn_bbox": 0.05119, "loss_cls": 0.21901, "acc": 92.54761, "loss_bbox": 0.24649, "loss_mask": 0.24618, "loss": 0.79212, "time": 0.20611} -{"mode": "train", "epoch": 6, "iter": 2800, "lr": 0.0002, "memory": 5941, "data_time": 0.03383, "loss_rpn_cls": 0.02807, "loss_rpn_bbox": 0.05061, "loss_cls": 0.22568, "acc": 92.43945, "loss_bbox": 0.24457, "loss_mask": 0.24079, "loss": 0.78973, "time": 0.20765} -{"mode": "train", "epoch": 6, "iter": 2850, "lr": 0.0002, "memory": 5941, "data_time": 0.0352, "loss_rpn_cls": 0.0327, "loss_rpn_bbox": 0.05301, "loss_cls": 0.23613, "acc": 92.05664, "loss_bbox": 0.25806, "loss_mask": 0.25141, "loss": 0.83131, "time": 0.21866} -{"mode": "train", "epoch": 6, "iter": 2900, "lr": 0.0002, "memory": 5941, "data_time": 0.04522, "loss_rpn_cls": 0.03395, "loss_rpn_bbox": 0.05658, "loss_cls": 0.23459, "acc": 92.08496, "loss_bbox": 0.26016, "loss_mask": 0.25406, "loss": 0.83934, "time": 0.21792} -{"mode": "train", "epoch": 6, "iter": 2950, "lr": 0.0002, "memory": 5941, "data_time": 0.03911, "loss_rpn_cls": 0.03262, "loss_rpn_bbox": 0.05428, "loss_cls": 0.23356, "acc": 92.0896, "loss_bbox": 0.25911, "loss_mask": 0.24921, "loss": 0.82879, "time": 0.21982} -{"mode": "train", "epoch": 6, "iter": 3000, "lr": 0.0002, "memory": 5941, "data_time": 0.04328, "loss_rpn_cls": 0.0311, "loss_rpn_bbox": 0.05051, "loss_cls": 0.23094, "acc": 92.31836, 
"loss_bbox": 0.25072, "loss_mask": 0.25021, "loss": 0.81348, "time": 0.20747} -{"mode": "train", "epoch": 6, "iter": 3050, "lr": 0.0002, "memory": 5941, "data_time": 0.03893, "loss_rpn_cls": 0.03453, "loss_rpn_bbox": 0.05483, "loss_cls": 0.23732, "acc": 92.22705, "loss_bbox": 0.25104, "loss_mask": 0.25048, "loss": 0.82819, "time": 0.20998} -{"mode": "train", "epoch": 6, "iter": 3100, "lr": 0.0002, "memory": 5941, "data_time": 0.03441, "loss_rpn_cls": 0.03132, "loss_rpn_bbox": 0.05256, "loss_cls": 0.23637, "acc": 92.18286, "loss_bbox": 0.26332, "loss_mask": 0.24847, "loss": 0.83204, "time": 0.21201} -{"mode": "train", "epoch": 6, "iter": 3150, "lr": 0.0002, "memory": 5941, "data_time": 0.03336, "loss_rpn_cls": 0.03212, "loss_rpn_bbox": 0.05658, "loss_cls": 0.24735, "acc": 91.78564, "loss_bbox": 0.26675, "loss_mask": 0.2605, "loss": 0.86329, "time": 0.20798} -{"mode": "train", "epoch": 6, "iter": 3200, "lr": 0.0002, "memory": 5941, "data_time": 0.04441, "loss_rpn_cls": 0.03164, "loss_rpn_bbox": 0.05165, "loss_cls": 0.2339, "acc": 92.2981, "loss_bbox": 0.25152, "loss_mask": 0.24716, "loss": 0.81587, "time": 0.21366} -{"mode": "train", "epoch": 6, "iter": 3250, "lr": 0.0002, "memory": 5941, "data_time": 0.03597, "loss_rpn_cls": 0.03064, "loss_rpn_bbox": 0.05162, "loss_cls": 0.23237, "acc": 92.1875, "loss_bbox": 0.25507, "loss_mask": 0.24312, "loss": 0.81282, "time": 0.21168} -{"mode": "train", "epoch": 6, "iter": 3300, "lr": 0.0002, "memory": 5941, "data_time": 0.03692, "loss_rpn_cls": 0.03354, "loss_rpn_bbox": 0.05504, "loss_cls": 0.24413, "acc": 92.0127, "loss_bbox": 0.26106, "loss_mask": 0.25704, "loss": 0.8508, "time": 0.21853} -{"mode": "train", "epoch": 6, "iter": 3350, "lr": 0.0002, "memory": 5941, "data_time": 0.03925, "loss_rpn_cls": 0.03044, "loss_rpn_bbox": 0.05155, "loss_cls": 0.23336, "acc": 92.19556, "loss_bbox": 0.25383, "loss_mask": 0.24928, "loss": 0.81845, "time": 0.20901} -{"mode": "train", "epoch": 6, "iter": 3400, "lr": 0.0002, "memory": 5941, "data_time": 0.03772, "loss_rpn_cls": 0.03088, "loss_rpn_bbox": 0.05377, "loss_cls": 0.24031, "acc": 91.84741, "loss_bbox": 0.26249, "loss_mask": 0.24641, "loss": 0.83387, "time": 0.20897} -{"mode": "train", "epoch": 6, "iter": 3450, "lr": 0.0002, "memory": 5941, "data_time": 0.04704, "loss_rpn_cls": 0.03089, "loss_rpn_bbox": 0.05226, "loss_cls": 0.23379, "acc": 92.27954, "loss_bbox": 0.25088, "loss_mask": 0.25369, "loss": 0.82152, "time": 0.21321} -{"mode": "train", "epoch": 6, "iter": 3500, "lr": 0.0002, "memory": 5941, "data_time": 0.04033, "loss_rpn_cls": 0.0326, "loss_rpn_bbox": 0.05449, "loss_cls": 0.23506, "acc": 92.104, "loss_bbox": 0.25648, "loss_mask": 0.25459, "loss": 0.83322, "time": 0.21691} -{"mode": "train", "epoch": 6, "iter": 3550, "lr": 0.0002, "memory": 5941, "data_time": 0.03453, "loss_rpn_cls": 0.03211, "loss_rpn_bbox": 0.05342, "loss_cls": 0.22733, "acc": 92.51465, "loss_bbox": 0.24414, "loss_mask": 0.24778, "loss": 0.80478, "time": 0.20156} -{"mode": "train", "epoch": 6, "iter": 3600, "lr": 0.0002, "memory": 5941, "data_time": 0.04133, "loss_rpn_cls": 0.03687, "loss_rpn_bbox": 0.0591, "loss_cls": 0.23327, "acc": 92.14526, "loss_bbox": 0.25931, "loss_mask": 0.25861, "loss": 0.84716, "time": 0.22981} -{"mode": "train", "epoch": 6, "iter": 3650, "lr": 0.0002, "memory": 5941, "data_time": 0.04018, "loss_rpn_cls": 0.03057, "loss_rpn_bbox": 0.0521, "loss_cls": 0.21899, "acc": 92.54321, "loss_bbox": 0.24634, "loss_mask": 0.24691, "loss": 0.79491, "time": 0.2225} -{"mode": "train", "epoch": 6, "iter": 3700, "lr": 
0.0002, "memory": 5941, "data_time": 0.04051, "loss_rpn_cls": 0.03335, "loss_rpn_bbox": 0.05503, "loss_cls": 0.23091, "acc": 92.25586, "loss_bbox": 0.25509, "loss_mask": 0.2481, "loss": 0.82248, "time": 0.21719} -{"mode": "train", "epoch": 6, "iter": 3750, "lr": 0.0002, "memory": 5941, "data_time": 0.03395, "loss_rpn_cls": 0.03379, "loss_rpn_bbox": 0.05333, "loss_cls": 0.23786, "acc": 92.03223, "loss_bbox": 0.26376, "loss_mask": 0.25463, "loss": 0.84337, "time": 0.2103} -{"mode": "train", "epoch": 6, "iter": 3800, "lr": 0.0002, "memory": 5941, "data_time": 0.03484, "loss_rpn_cls": 0.03286, "loss_rpn_bbox": 0.05401, "loss_cls": 0.23139, "acc": 92.23779, "loss_bbox": 0.25771, "loss_mask": 0.25126, "loss": 0.82722, "time": 0.20301} -{"mode": "train", "epoch": 6, "iter": 3850, "lr": 0.0002, "memory": 5941, "data_time": 0.03795, "loss_rpn_cls": 0.03206, "loss_rpn_bbox": 0.0535, "loss_cls": 0.22819, "acc": 92.35547, "loss_bbox": 0.25379, "loss_mask": 0.25673, "loss": 0.82428, "time": 0.21202} -{"mode": "train", "epoch": 6, "iter": 3900, "lr": 0.0002, "memory": 5941, "data_time": 0.04537, "loss_rpn_cls": 0.03115, "loss_rpn_bbox": 0.05633, "loss_cls": 0.22598, "acc": 92.32373, "loss_bbox": 0.25397, "loss_mask": 0.25281, "loss": 0.82025, "time": 0.21464} -{"mode": "train", "epoch": 6, "iter": 3950, "lr": 0.0002, "memory": 5941, "data_time": 0.0345, "loss_rpn_cls": 0.03206, "loss_rpn_bbox": 0.05162, "loss_cls": 0.2402, "acc": 92.07983, "loss_bbox": 0.25672, "loss_mask": 0.2476, "loss": 0.8282, "time": 0.21366} -{"mode": "train", "epoch": 6, "iter": 4000, "lr": 0.0002, "memory": 5941, "data_time": 0.03652, "loss_rpn_cls": 0.03513, "loss_rpn_bbox": 0.0576, "loss_cls": 0.24572, "acc": 91.78467, "loss_bbox": 0.26783, "loss_mask": 0.26022, "loss": 0.8665, "time": 0.22944} -{"mode": "train", "epoch": 6, "iter": 4050, "lr": 0.0002, "memory": 5941, "data_time": 0.03932, "loss_rpn_cls": 0.03153, "loss_rpn_bbox": 0.05453, "loss_cls": 0.23115, "acc": 92.18018, "loss_bbox": 0.25261, "loss_mask": 0.25495, "loss": 0.82476, "time": 0.20946} -{"mode": "train", "epoch": 6, "iter": 4100, "lr": 0.0002, "memory": 5941, "data_time": 0.03938, "loss_rpn_cls": 0.0323, "loss_rpn_bbox": 0.05549, "loss_cls": 0.23098, "acc": 92.27808, "loss_bbox": 0.25781, "loss_mask": 0.2545, "loss": 0.83108, "time": 0.21607} -{"mode": "train", "epoch": 6, "iter": 4150, "lr": 0.0002, "memory": 5941, "data_time": 0.03887, "loss_rpn_cls": 0.03372, "loss_rpn_bbox": 0.05208, "loss_cls": 0.22142, "acc": 92.58154, "loss_bbox": 0.24469, "loss_mask": 0.24752, "loss": 0.79942, "time": 0.20842} -{"mode": "train", "epoch": 6, "iter": 4200, "lr": 0.0002, "memory": 5941, "data_time": 0.03642, "loss_rpn_cls": 0.03073, "loss_rpn_bbox": 0.05003, "loss_cls": 0.2225, "acc": 92.4729, "loss_bbox": 0.24763, "loss_mask": 0.24801, "loss": 0.7989, "time": 0.20892} -{"mode": "train", "epoch": 6, "iter": 4250, "lr": 0.0002, "memory": 5941, "data_time": 0.03519, "loss_rpn_cls": 0.03336, "loss_rpn_bbox": 0.05278, "loss_cls": 0.23461, "acc": 92.09937, "loss_bbox": 0.25511, "loss_mask": 0.2502, "loss": 0.82606, "time": 0.20708} -{"mode": "train", "epoch": 6, "iter": 4300, "lr": 0.0002, "memory": 5941, "data_time": 0.04015, "loss_rpn_cls": 0.03456, "loss_rpn_bbox": 0.05291, "loss_cls": 0.24758, "acc": 91.86816, "loss_bbox": 0.25839, "loss_mask": 0.25216, "loss": 0.84561, "time": 0.21432} -{"mode": "train", "epoch": 6, "iter": 4350, "lr": 0.0002, "memory": 5941, "data_time": 0.04012, "loss_rpn_cls": 0.03117, "loss_rpn_bbox": 0.05459, "loss_cls": 0.23211, "acc": 92.21094, 
"loss_bbox": 0.25946, "loss_mask": 0.25422, "loss": 0.83155, "time": 0.20667} -{"mode": "train", "epoch": 6, "iter": 4400, "lr": 0.0002, "memory": 5941, "data_time": 0.03914, "loss_rpn_cls": 0.02912, "loss_rpn_bbox": 0.05207, "loss_cls": 0.23006, "acc": 92.43408, "loss_bbox": 0.24928, "loss_mask": 0.25114, "loss": 0.81167, "time": 0.1993} -{"mode": "train", "epoch": 6, "iter": 4450, "lr": 0.0002, "memory": 5941, "data_time": 0.04322, "loss_rpn_cls": 0.037, "loss_rpn_bbox": 0.05847, "loss_cls": 0.23855, "acc": 91.96167, "loss_bbox": 0.26406, "loss_mask": 0.25988, "loss": 0.85796, "time": 0.21159} -{"mode": "train", "epoch": 6, "iter": 4500, "lr": 0.0002, "memory": 5941, "data_time": 0.03942, "loss_rpn_cls": 0.02995, "loss_rpn_bbox": 0.05246, "loss_cls": 0.23443, "acc": 92.25098, "loss_bbox": 0.25358, "loss_mask": 0.25288, "loss": 0.8233, "time": 0.20357} -{"mode": "train", "epoch": 6, "iter": 4550, "lr": 0.0002, "memory": 5941, "data_time": 0.03925, "loss_rpn_cls": 0.03022, "loss_rpn_bbox": 0.05105, "loss_cls": 0.22579, "acc": 92.47534, "loss_bbox": 0.24664, "loss_mask": 0.24737, "loss": 0.80107, "time": 0.20203} -{"mode": "train", "epoch": 6, "iter": 4600, "lr": 0.0002, "memory": 5941, "data_time": 0.04332, "loss_rpn_cls": 0.03103, "loss_rpn_bbox": 0.05249, "loss_cls": 0.23622, "acc": 92.13013, "loss_bbox": 0.25459, "loss_mask": 0.25209, "loss": 0.82641, "time": 0.2164} -{"mode": "train", "epoch": 6, "iter": 4650, "lr": 0.0002, "memory": 5941, "data_time": 0.03309, "loss_rpn_cls": 0.03367, "loss_rpn_bbox": 0.0587, "loss_cls": 0.23126, "acc": 92.2334, "loss_bbox": 0.2542, "loss_mask": 0.25152, "loss": 0.82935, "time": 0.21313} -{"mode": "train", "epoch": 6, "iter": 4700, "lr": 0.0002, "memory": 5941, "data_time": 0.04349, "loss_rpn_cls": 0.03182, "loss_rpn_bbox": 0.05007, "loss_cls": 0.22258, "acc": 92.69702, "loss_bbox": 0.24416, "loss_mask": 0.24747, "loss": 0.7961, "time": 0.205} -{"mode": "train", "epoch": 6, "iter": 4750, "lr": 0.0002, "memory": 5941, "data_time": 0.03676, "loss_rpn_cls": 0.03253, "loss_rpn_bbox": 0.05445, "loss_cls": 0.23034, "acc": 92.30688, "loss_bbox": 0.25486, "loss_mask": 0.25167, "loss": 0.82385, "time": 0.19957} -{"mode": "train", "epoch": 6, "iter": 4800, "lr": 0.0002, "memory": 5941, "data_time": 0.03867, "loss_rpn_cls": 0.03614, "loss_rpn_bbox": 0.05406, "loss_cls": 0.23349, "acc": 92.20215, "loss_bbox": 0.25591, "loss_mask": 0.24727, "loss": 0.82687, "time": 0.21547} -{"mode": "train", "epoch": 6, "iter": 4850, "lr": 0.0002, "memory": 5941, "data_time": 0.04083, "loss_rpn_cls": 0.03171, "loss_rpn_bbox": 0.05566, "loss_cls": 0.24311, "acc": 92.13525, "loss_bbox": 0.25245, "loss_mask": 0.24949, "loss": 0.83241, "time": 0.21659} -{"mode": "train", "epoch": 6, "iter": 4900, "lr": 0.0002, "memory": 5941, "data_time": 0.04299, "loss_rpn_cls": 0.03735, "loss_rpn_bbox": 0.05743, "loss_cls": 0.24484, "acc": 91.93066, "loss_bbox": 0.26422, "loss_mask": 0.25045, "loss": 0.85429, "time": 0.21703} -{"mode": "train", "epoch": 6, "iter": 4950, "lr": 0.0002, "memory": 5941, "data_time": 0.03481, "loss_rpn_cls": 0.03315, "loss_rpn_bbox": 0.05432, "loss_cls": 0.22825, "acc": 92.4021, "loss_bbox": 0.24619, "loss_mask": 0.25112, "loss": 0.81304, "time": 0.20198} -{"mode": "train", "epoch": 6, "iter": 5000, "lr": 0.0002, "memory": 5941, "data_time": 0.03443, "loss_rpn_cls": 0.03087, "loss_rpn_bbox": 0.05289, "loss_cls": 0.23077, "acc": 92.24536, "loss_bbox": 0.25333, "loss_mask": 0.25465, "loss": 0.82251, "time": 0.20516} -{"mode": "train", "epoch": 6, "iter": 5050, "lr": 
0.0002, "memory": 5941, "data_time": 0.038, "loss_rpn_cls": 0.03313, "loss_rpn_bbox": 0.05708, "loss_cls": 0.22991, "acc": 92.26611, "loss_bbox": 0.25425, "loss_mask": 0.24959, "loss": 0.82395, "time": 0.20806} -{"mode": "train", "epoch": 6, "iter": 5100, "lr": 0.0002, "memory": 5941, "data_time": 0.03958, "loss_rpn_cls": 0.03373, "loss_rpn_bbox": 0.05729, "loss_cls": 0.23245, "acc": 92.20142, "loss_bbox": 0.25257, "loss_mask": 0.25675, "loss": 0.83279, "time": 0.22098} -{"mode": "train", "epoch": 6, "iter": 5150, "lr": 0.0002, "memory": 5941, "data_time": 0.03247, "loss_rpn_cls": 0.03273, "loss_rpn_bbox": 0.05204, "loss_cls": 0.23477, "acc": 92.24194, "loss_bbox": 0.25598, "loss_mask": 0.25166, "loss": 0.82718, "time": 0.20837} -{"mode": "train", "epoch": 6, "iter": 5200, "lr": 0.0002, "memory": 5941, "data_time": 0.03404, "loss_rpn_cls": 0.03512, "loss_rpn_bbox": 0.05535, "loss_cls": 0.23696, "acc": 92.12915, "loss_bbox": 0.25638, "loss_mask": 0.24885, "loss": 0.83266, "time": 0.21359} -{"mode": "train", "epoch": 6, "iter": 5250, "lr": 0.0002, "memory": 5941, "data_time": 0.04258, "loss_rpn_cls": 0.03266, "loss_rpn_bbox": 0.05731, "loss_cls": 0.22841, "acc": 92.25464, "loss_bbox": 0.24782, "loss_mask": 0.25153, "loss": 0.81773, "time": 0.21826} -{"mode": "train", "epoch": 6, "iter": 5300, "lr": 0.0002, "memory": 5941, "data_time": 0.03833, "loss_rpn_cls": 0.03721, "loss_rpn_bbox": 0.05958, "loss_cls": 0.24127, "acc": 92.02588, "loss_bbox": 0.26098, "loss_mask": 0.24989, "loss": 0.84894, "time": 0.21689} -{"mode": "train", "epoch": 6, "iter": 5350, "lr": 0.0002, "memory": 5941, "data_time": 0.04769, "loss_rpn_cls": 0.03229, "loss_rpn_bbox": 0.05497, "loss_cls": 0.2386, "acc": 92.06787, "loss_bbox": 0.26076, "loss_mask": 0.25174, "loss": 0.83837, "time": 0.2132} -{"mode": "train", "epoch": 6, "iter": 5400, "lr": 0.0002, "memory": 5941, "data_time": 0.03258, "loss_rpn_cls": 0.03367, "loss_rpn_bbox": 0.05193, "loss_cls": 0.2261, "acc": 92.49048, "loss_bbox": 0.24487, "loss_mask": 0.25037, "loss": 0.80695, "time": 0.20773} -{"mode": "train", "epoch": 6, "iter": 5450, "lr": 0.0002, "memory": 5941, "data_time": 0.04208, "loss_rpn_cls": 0.03576, "loss_rpn_bbox": 0.06005, "loss_cls": 0.24896, "acc": 91.71436, "loss_bbox": 0.26612, "loss_mask": 0.25472, "loss": 0.86561, "time": 0.21704} -{"mode": "train", "epoch": 6, "iter": 5500, "lr": 0.0002, "memory": 5941, "data_time": 0.03727, "loss_rpn_cls": 0.03324, "loss_rpn_bbox": 0.05324, "loss_cls": 0.2277, "acc": 92.37793, "loss_bbox": 0.24596, "loss_mask": 0.24341, "loss": 0.80354, "time": 0.21973} -{"mode": "train", "epoch": 6, "iter": 5550, "lr": 0.0002, "memory": 5941, "data_time": 0.04524, "loss_rpn_cls": 0.03187, "loss_rpn_bbox": 0.05756, "loss_cls": 0.24124, "acc": 92.01318, "loss_bbox": 0.25412, "loss_mask": 0.25656, "loss": 0.84134, "time": 0.21594} -{"mode": "train", "epoch": 6, "iter": 5600, "lr": 0.0002, "memory": 5941, "data_time": 0.0426, "loss_rpn_cls": 0.03359, "loss_rpn_bbox": 0.05471, "loss_cls": 0.22628, "acc": 92.39648, "loss_bbox": 0.24692, "loss_mask": 0.25174, "loss": 0.81324, "time": 0.20493} -{"mode": "train", "epoch": 6, "iter": 5650, "lr": 0.0002, "memory": 5941, "data_time": 0.04127, "loss_rpn_cls": 0.0329, "loss_rpn_bbox": 0.05452, "loss_cls": 0.22282, "acc": 92.40405, "loss_bbox": 0.25695, "loss_mask": 0.25103, "loss": 0.81823, "time": 0.21638} -{"mode": "train", "epoch": 6, "iter": 5700, "lr": 0.0002, "memory": 5941, "data_time": 0.03732, "loss_rpn_cls": 0.03088, "loss_rpn_bbox": 0.05411, "loss_cls": 0.22803, "acc": 
92.2627, "loss_bbox": 0.24769, "loss_mask": 0.24559, "loss": 0.8063, "time": 0.21417} -{"mode": "train", "epoch": 6, "iter": 5750, "lr": 0.0002, "memory": 5941, "data_time": 0.03874, "loss_rpn_cls": 0.0303, "loss_rpn_bbox": 0.05199, "loss_cls": 0.22786, "acc": 92.32007, "loss_bbox": 0.25148, "loss_mask": 0.24794, "loss": 0.80958, "time": 0.20223} -{"mode": "train", "epoch": 6, "iter": 5800, "lr": 0.0002, "memory": 5941, "data_time": 0.03549, "loss_rpn_cls": 0.03517, "loss_rpn_bbox": 0.05558, "loss_cls": 0.2292, "acc": 92.38892, "loss_bbox": 0.24681, "loss_mask": 0.25007, "loss": 0.81683, "time": 0.20685} -{"mode": "train", "epoch": 6, "iter": 5850, "lr": 0.0002, "memory": 5941, "data_time": 0.04034, "loss_rpn_cls": 0.03206, "loss_rpn_bbox": 0.05163, "loss_cls": 0.22589, "acc": 92.43896, "loss_bbox": 0.24579, "loss_mask": 0.25068, "loss": 0.80605, "time": 0.20513} -{"mode": "train", "epoch": 6, "iter": 5900, "lr": 0.0002, "memory": 5941, "data_time": 0.03962, "loss_rpn_cls": 0.03372, "loss_rpn_bbox": 0.05331, "loss_cls": 0.22973, "acc": 92.41064, "loss_bbox": 0.24748, "loss_mask": 0.24653, "loss": 0.81077, "time": 0.20509} -{"mode": "train", "epoch": 6, "iter": 5950, "lr": 0.0002, "memory": 5941, "data_time": 0.04368, "loss_rpn_cls": 0.03214, "loss_rpn_bbox": 0.05524, "loss_cls": 0.23942, "acc": 92.03271, "loss_bbox": 0.25616, "loss_mask": 0.26129, "loss": 0.84426, "time": 0.20897} -{"mode": "train", "epoch": 6, "iter": 6000, "lr": 0.0002, "memory": 5941, "data_time": 0.04055, "loss_rpn_cls": 0.0343, "loss_rpn_bbox": 0.05213, "loss_cls": 0.23381, "acc": 92.30859, "loss_bbox": 0.25237, "loss_mask": 0.24309, "loss": 0.8157, "time": 0.2135} -{"mode": "train", "epoch": 6, "iter": 6050, "lr": 0.0002, "memory": 5941, "data_time": 0.03647, "loss_rpn_cls": 0.035, "loss_rpn_bbox": 0.05886, "loss_cls": 0.23491, "acc": 92.25903, "loss_bbox": 0.25686, "loss_mask": 0.25356, "loss": 0.8392, "time": 0.2174} -{"mode": "train", "epoch": 6, "iter": 6100, "lr": 0.0002, "memory": 5941, "data_time": 0.04659, "loss_rpn_cls": 0.03502, "loss_rpn_bbox": 0.05619, "loss_cls": 0.23603, "acc": 92.19995, "loss_bbox": 0.25698, "loss_mask": 0.25535, "loss": 0.83956, "time": 0.21195} -{"mode": "train", "epoch": 6, "iter": 6150, "lr": 0.0002, "memory": 5941, "data_time": 0.0366, "loss_rpn_cls": 0.03097, "loss_rpn_bbox": 0.05398, "loss_cls": 0.2305, "acc": 92.27026, "loss_bbox": 0.25248, "loss_mask": 0.24927, "loss": 0.81721, "time": 0.20714} -{"mode": "train", "epoch": 6, "iter": 6200, "lr": 0.0002, "memory": 5941, "data_time": 0.0409, "loss_rpn_cls": 0.03277, "loss_rpn_bbox": 0.05048, "loss_cls": 0.22587, "acc": 92.51733, "loss_bbox": 0.24362, "loss_mask": 0.24739, "loss": 0.80014, "time": 0.20406} -{"mode": "train", "epoch": 6, "iter": 6250, "lr": 0.0002, "memory": 5941, "data_time": 0.05072, "loss_rpn_cls": 0.02978, "loss_rpn_bbox": 0.05359, "loss_cls": 0.23768, "acc": 92.04224, "loss_bbox": 0.25942, "loss_mask": 0.24987, "loss": 0.83033, "time": 0.20994} -{"mode": "train", "epoch": 6, "iter": 6300, "lr": 0.0002, "memory": 5941, "data_time": 0.04692, "loss_rpn_cls": 0.03484, "loss_rpn_bbox": 0.0548, "loss_cls": 0.23163, "acc": 92.30249, "loss_bbox": 0.24635, "loss_mask": 0.25827, "loss": 0.82588, "time": 0.20731} -{"mode": "train", "epoch": 6, "iter": 6350, "lr": 0.0002, "memory": 5941, "data_time": 0.03726, "loss_rpn_cls": 0.03413, "loss_rpn_bbox": 0.05425, "loss_cls": 0.22514, "acc": 92.51074, "loss_bbox": 0.24446, "loss_mask": 0.25385, "loss": 0.81184, "time": 0.20612} -{"mode": "train", "epoch": 6, "iter": 6400, 
"lr": 0.0002, "memory": 5941, "data_time": 0.05254, "loss_rpn_cls": 0.03462, "loss_rpn_bbox": 0.05717, "loss_cls": 0.23727, "acc": 92.08545, "loss_bbox": 0.2611, "loss_mask": 0.25826, "loss": 0.84842, "time": 0.21281} -{"mode": "train", "epoch": 6, "iter": 6450, "lr": 0.0002, "memory": 5941, "data_time": 0.03648, "loss_rpn_cls": 0.03402, "loss_rpn_bbox": 0.05477, "loss_cls": 0.23543, "acc": 92.09082, "loss_bbox": 0.25454, "loss_mask": 0.24989, "loss": 0.82865, "time": 0.21078} -{"mode": "train", "epoch": 6, "iter": 6500, "lr": 0.0002, "memory": 5941, "data_time": 0.04623, "loss_rpn_cls": 0.03284, "loss_rpn_bbox": 0.05463, "loss_cls": 0.23234, "acc": 92.19897, "loss_bbox": 0.25852, "loss_mask": 0.25171, "loss": 0.83004, "time": 0.21283} -{"mode": "train", "epoch": 6, "iter": 6550, "lr": 0.0002, "memory": 5941, "data_time": 0.0385, "loss_rpn_cls": 0.03303, "loss_rpn_bbox": 0.05221, "loss_cls": 0.23445, "acc": 92.29248, "loss_bbox": 0.25314, "loss_mask": 0.24424, "loss": 0.81708, "time": 0.20387} -{"mode": "train", "epoch": 6, "iter": 6600, "lr": 0.0002, "memory": 5941, "data_time": 0.04253, "loss_rpn_cls": 0.03208, "loss_rpn_bbox": 0.0536, "loss_cls": 0.23778, "acc": 92.01099, "loss_bbox": 0.25692, "loss_mask": 0.24672, "loss": 0.8271, "time": 0.21031} -{"mode": "train", "epoch": 6, "iter": 6650, "lr": 0.0002, "memory": 5941, "data_time": 0.04225, "loss_rpn_cls": 0.03371, "loss_rpn_bbox": 0.0569, "loss_cls": 0.2379, "acc": 92.09839, "loss_bbox": 0.25977, "loss_mask": 0.25197, "loss": 0.84026, "time": 0.21815} -{"mode": "train", "epoch": 6, "iter": 6700, "lr": 0.0002, "memory": 5941, "data_time": 0.03933, "loss_rpn_cls": 0.0355, "loss_rpn_bbox": 0.056, "loss_cls": 0.23581, "acc": 92.14722, "loss_bbox": 0.26041, "loss_mask": 0.25269, "loss": 0.84041, "time": 0.21408} -{"mode": "train", "epoch": 6, "iter": 6750, "lr": 0.0002, "memory": 5941, "data_time": 0.04119, "loss_rpn_cls": 0.03113, "loss_rpn_bbox": 0.05098, "loss_cls": 0.22772, "acc": 92.30127, "loss_bbox": 0.25311, "loss_mask": 0.24863, "loss": 0.81157, "time": 0.20661} -{"mode": "train", "epoch": 6, "iter": 6800, "lr": 0.0002, "memory": 5941, "data_time": 0.04392, "loss_rpn_cls": 0.03215, "loss_rpn_bbox": 0.05403, "loss_cls": 0.23482, "acc": 92.11133, "loss_bbox": 0.25629, "loss_mask": 0.25837, "loss": 0.83566, "time": 0.21019} -{"mode": "train", "epoch": 6, "iter": 6850, "lr": 0.0002, "memory": 5941, "data_time": 0.03851, "loss_rpn_cls": 0.03303, "loss_rpn_bbox": 0.05423, "loss_cls": 0.23189, "acc": 92.32568, "loss_bbox": 0.25623, "loss_mask": 0.24534, "loss": 0.82071, "time": 0.21078} -{"mode": "train", "epoch": 6, "iter": 6900, "lr": 0.0002, "memory": 5941, "data_time": 0.03415, "loss_rpn_cls": 0.03086, "loss_rpn_bbox": 0.05338, "loss_cls": 0.23751, "acc": 92.02734, "loss_bbox": 0.26154, "loss_mask": 0.24719, "loss": 0.83048, "time": 0.21225} -{"mode": "train", "epoch": 6, "iter": 6950, "lr": 0.0002, "memory": 5941, "data_time": 0.03648, "loss_rpn_cls": 0.02992, "loss_rpn_bbox": 0.05305, "loss_cls": 0.2191, "acc": 92.56885, "loss_bbox": 0.24122, "loss_mask": 0.24469, "loss": 0.78799, "time": 0.21798} -{"mode": "train", "epoch": 6, "iter": 7000, "lr": 0.0002, "memory": 5941, "data_time": 0.04518, "loss_rpn_cls": 0.03348, "loss_rpn_bbox": 0.05605, "loss_cls": 0.23442, "acc": 92.37329, "loss_bbox": 0.24776, "loss_mask": 0.25023, "loss": 0.82195, "time": 0.20824} -{"mode": "train", "epoch": 6, "iter": 7050, "lr": 0.0002, "memory": 5941, "data_time": 0.04035, "loss_rpn_cls": 0.03603, "loss_rpn_bbox": 0.05317, "loss_cls": 0.22618, "acc": 
92.46802, "loss_bbox": 0.24624, "loss_mask": 0.24688, "loss": 0.80851, "time": 0.20891} -{"mode": "train", "epoch": 6, "iter": 7100, "lr": 0.0002, "memory": 5941, "data_time": 0.03421, "loss_rpn_cls": 0.03246, "loss_rpn_bbox": 0.05239, "loss_cls": 0.22902, "acc": 92.38794, "loss_bbox": 0.24813, "loss_mask": 0.2529, "loss": 0.8149, "time": 0.21081} -{"mode": "train", "epoch": 6, "iter": 7150, "lr": 0.0002, "memory": 5941, "data_time": 0.04251, "loss_rpn_cls": 0.03399, "loss_rpn_bbox": 0.05596, "loss_cls": 0.23829, "acc": 91.96118, "loss_bbox": 0.26878, "loss_mask": 0.25394, "loss": 0.85096, "time": 0.21924} -{"mode": "train", "epoch": 6, "iter": 7200, "lr": 0.0002, "memory": 5941, "data_time": 0.03963, "loss_rpn_cls": 0.03214, "loss_rpn_bbox": 0.05452, "loss_cls": 0.23552, "acc": 92.09204, "loss_bbox": 0.25614, "loss_mask": 0.24758, "loss": 0.8259, "time": 0.21464} -{"mode": "train", "epoch": 6, "iter": 7250, "lr": 0.0002, "memory": 5941, "data_time": 0.05099, "loss_rpn_cls": 0.03387, "loss_rpn_bbox": 0.05843, "loss_cls": 0.23338, "acc": 92.09106, "loss_bbox": 0.25479, "loss_mask": 0.24676, "loss": 0.82723, "time": 0.22073} -{"mode": "train", "epoch": 6, "iter": 7300, "lr": 0.0002, "memory": 5941, "data_time": 0.04074, "loss_rpn_cls": 0.03338, "loss_rpn_bbox": 0.05266, "loss_cls": 0.23685, "acc": 92.14282, "loss_bbox": 0.25883, "loss_mask": 0.25035, "loss": 0.83208, "time": 0.20998} -{"mode": "val", "epoch": 6, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3314, "bbox_mAP_50": 0.5504, "bbox_mAP_75": 0.3533, "bbox_mAP_s": 0.1911, "bbox_mAP_m": 0.3682, "bbox_mAP_l": 0.427, "bbox_mAP_copypaste": "0.3314 0.5504 0.3533 0.1911 0.3682 0.4270", "segm_mAP": 0.3237, "segm_mAP_50": 0.5241, "segm_mAP_75": 0.3461, "segm_mAP_s": 0.1494, "segm_mAP_m": 0.3521, "segm_mAP_l": 0.473, "segm_mAP_copypaste": "0.3237 0.5241 0.3461 0.1494 0.3521 0.4730"} -{"mode": "train", "epoch": 7, "iter": 50, "lr": 0.0002, "memory": 5941, "data_time": 0.11302, "loss_rpn_cls": 0.03174, "loss_rpn_bbox": 0.05538, "loss_cls": 0.22855, "acc": 92.33472, "loss_bbox": 0.25511, "loss_mask": 0.24693, "loss": 0.81771, "time": 0.29429} -{"mode": "train", "epoch": 7, "iter": 100, "lr": 0.0002, "memory": 5941, "data_time": 0.03709, "loss_rpn_cls": 0.02744, "loss_rpn_bbox": 0.05183, "loss_cls": 0.21835, "acc": 92.52417, "loss_bbox": 0.25232, "loss_mask": 0.24988, "loss": 0.79981, "time": 0.23335} -{"mode": "train", "epoch": 7, "iter": 150, "lr": 0.0002, "memory": 5941, "data_time": 0.04246, "loss_rpn_cls": 0.0293, "loss_rpn_bbox": 0.0561, "loss_cls": 0.22887, "acc": 92.04126, "loss_bbox": 0.26112, "loss_mask": 0.25019, "loss": 0.82559, "time": 0.23359} -{"mode": "train", "epoch": 7, "iter": 200, "lr": 0.0002, "memory": 5941, "data_time": 0.03699, "loss_rpn_cls": 0.03313, "loss_rpn_bbox": 0.05758, "loss_cls": 0.23445, "acc": 91.86255, "loss_bbox": 0.26504, "loss_mask": 0.25433, "loss": 0.84453, "time": 0.22249} -{"mode": "train", "epoch": 7, "iter": 250, "lr": 0.0002, "memory": 5941, "data_time": 0.03676, "loss_rpn_cls": 0.02983, "loss_rpn_bbox": 0.05153, "loss_cls": 0.22029, "acc": 92.61279, "loss_bbox": 0.24408, "loss_mask": 0.24836, "loss": 0.7941, "time": 0.20616} -{"mode": "train", "epoch": 7, "iter": 300, "lr": 0.0002, "memory": 5941, "data_time": 0.04185, "loss_rpn_cls": 0.03111, "loss_rpn_bbox": 0.05189, "loss_cls": 0.22197, "acc": 92.42554, "loss_bbox": 0.25465, "loss_mask": 0.24927, "loss": 0.80889, "time": 0.21471} -{"mode": "train", "epoch": 7, "iter": 350, "lr": 0.0002, "memory": 5941, "data_time": 0.03931, "loss_rpn_cls": 0.0302, 
"loss_rpn_bbox": 0.05232, "loss_cls": 0.22858, "acc": 92.25635, "loss_bbox": 0.25264, "loss_mask": 0.24604, "loss": 0.80978, "time": 0.20744} -{"mode": "train", "epoch": 7, "iter": 400, "lr": 0.0002, "memory": 5941, "data_time": 0.03439, "loss_rpn_cls": 0.03121, "loss_rpn_bbox": 0.05473, "loss_cls": 0.22888, "acc": 92.22046, "loss_bbox": 0.25587, "loss_mask": 0.24735, "loss": 0.81803, "time": 0.21542} -{"mode": "train", "epoch": 7, "iter": 450, "lr": 0.0002, "memory": 5941, "data_time": 0.03895, "loss_rpn_cls": 0.0278, "loss_rpn_bbox": 0.0488, "loss_cls": 0.20837, "acc": 92.8562, "loss_bbox": 0.23996, "loss_mask": 0.24029, "loss": 0.76522, "time": 0.20219} -{"mode": "train", "epoch": 7, "iter": 500, "lr": 0.0002, "memory": 5941, "data_time": 0.03956, "loss_rpn_cls": 0.03148, "loss_rpn_bbox": 0.05344, "loss_cls": 0.22781, "acc": 92.38623, "loss_bbox": 0.25379, "loss_mask": 0.24421, "loss": 0.81073, "time": 0.21288} -{"mode": "train", "epoch": 7, "iter": 550, "lr": 0.0002, "memory": 5941, "data_time": 0.04484, "loss_rpn_cls": 0.02977, "loss_rpn_bbox": 0.05263, "loss_cls": 0.21868, "acc": 92.5022, "loss_bbox": 0.24916, "loss_mask": 0.24628, "loss": 0.79652, "time": 0.21856} -{"mode": "train", "epoch": 7, "iter": 600, "lr": 0.0002, "memory": 5941, "data_time": 0.04146, "loss_rpn_cls": 0.029, "loss_rpn_bbox": 0.05539, "loss_cls": 0.22796, "acc": 92.22217, "loss_bbox": 0.25717, "loss_mask": 0.25074, "loss": 0.82027, "time": 0.20999} -{"mode": "train", "epoch": 7, "iter": 650, "lr": 0.0002, "memory": 5941, "data_time": 0.0417, "loss_rpn_cls": 0.0308, "loss_rpn_bbox": 0.05408, "loss_cls": 0.22521, "acc": 92.30347, "loss_bbox": 0.2556, "loss_mask": 0.25227, "loss": 0.81795, "time": 0.22442} -{"mode": "train", "epoch": 7, "iter": 700, "lr": 0.0002, "memory": 5941, "data_time": 0.043, "loss_rpn_cls": 0.03063, "loss_rpn_bbox": 0.05519, "loss_cls": 0.21635, "acc": 92.59937, "loss_bbox": 0.24963, "loss_mask": 0.24432, "loss": 0.79612, "time": 0.21506} -{"mode": "train", "epoch": 7, "iter": 750, "lr": 0.0002, "memory": 5941, "data_time": 0.04141, "loss_rpn_cls": 0.03133, "loss_rpn_bbox": 0.05468, "loss_cls": 0.23623, "acc": 92.09106, "loss_bbox": 0.25463, "loss_mask": 0.24577, "loss": 0.82263, "time": 0.21718} -{"mode": "train", "epoch": 7, "iter": 800, "lr": 0.0002, "memory": 5941, "data_time": 0.03463, "loss_rpn_cls": 0.03312, "loss_rpn_bbox": 0.05874, "loss_cls": 0.23719, "acc": 92.02515, "loss_bbox": 0.25856, "loss_mask": 0.25613, "loss": 0.84374, "time": 0.22202} -{"mode": "train", "epoch": 7, "iter": 850, "lr": 0.0002, "memory": 5941, "data_time": 0.03804, "loss_rpn_cls": 0.02988, "loss_rpn_bbox": 0.05524, "loss_cls": 0.22654, "acc": 92.32104, "loss_bbox": 0.24535, "loss_mask": 0.24891, "loss": 0.80592, "time": 0.22082} -{"mode": "train", "epoch": 7, "iter": 900, "lr": 0.0002, "memory": 5941, "data_time": 0.04177, "loss_rpn_cls": 0.02876, "loss_rpn_bbox": 0.05214, "loss_cls": 0.21206, "acc": 92.74683, "loss_bbox": 0.24384, "loss_mask": 0.24195, "loss": 0.77875, "time": 0.20951} -{"mode": "train", "epoch": 7, "iter": 950, "lr": 0.0002, "memory": 5941, "data_time": 0.0368, "loss_rpn_cls": 0.03084, "loss_rpn_bbox": 0.05556, "loss_cls": 0.22817, "acc": 92.20312, "loss_bbox": 0.25704, "loss_mask": 0.25055, "loss": 0.82216, "time": 0.21274} -{"mode": "train", "epoch": 7, "iter": 1000, "lr": 0.0002, "memory": 5941, "data_time": 0.0392, "loss_rpn_cls": 0.03132, "loss_rpn_bbox": 0.05671, "loss_cls": 0.22538, "acc": 92.20386, "loss_bbox": 0.25788, "loss_mask": 0.24783, "loss": 0.81912, "time": 0.22043} 
-{"mode": "train", "epoch": 7, "iter": 1050, "lr": 0.0002, "memory": 5941, "data_time": 0.04141, "loss_rpn_cls": 0.02951, "loss_rpn_bbox": 0.05183, "loss_cls": 0.21942, "acc": 92.44458, "loss_bbox": 0.25106, "loss_mask": 0.24959, "loss": 0.80141, "time": 0.2102} -{"mode": "train", "epoch": 7, "iter": 1100, "lr": 0.0002, "memory": 5941, "data_time": 0.03919, "loss_rpn_cls": 0.03074, "loss_rpn_bbox": 0.05267, "loss_cls": 0.21282, "acc": 92.63232, "loss_bbox": 0.24387, "loss_mask": 0.24529, "loss": 0.78539, "time": 0.20766} -{"mode": "train", "epoch": 7, "iter": 1150, "lr": 0.0002, "memory": 5941, "data_time": 0.0466, "loss_rpn_cls": 0.03102, "loss_rpn_bbox": 0.05593, "loss_cls": 0.23005, "acc": 92.25635, "loss_bbox": 0.25818, "loss_mask": 0.24879, "loss": 0.82397, "time": 0.21014} -{"mode": "train", "epoch": 7, "iter": 1200, "lr": 0.0002, "memory": 5941, "data_time": 0.03258, "loss_rpn_cls": 0.03102, "loss_rpn_bbox": 0.0525, "loss_cls": 0.21803, "acc": 92.63037, "loss_bbox": 0.23694, "loss_mask": 0.24088, "loss": 0.77937, "time": 0.20808} -{"mode": "train", "epoch": 7, "iter": 1250, "lr": 0.0002, "memory": 5941, "data_time": 0.03664, "loss_rpn_cls": 0.03123, "loss_rpn_bbox": 0.05372, "loss_cls": 0.22546, "acc": 92.36084, "loss_bbox": 0.24585, "loss_mask": 0.24597, "loss": 0.80223, "time": 0.23857} -{"mode": "train", "epoch": 7, "iter": 1300, "lr": 0.0002, "memory": 5941, "data_time": 0.03812, "loss_rpn_cls": 0.03383, "loss_rpn_bbox": 0.05412, "loss_cls": 0.22106, "acc": 92.51465, "loss_bbox": 0.24952, "loss_mask": 0.2439, "loss": 0.80243, "time": 0.20834} -{"mode": "train", "epoch": 7, "iter": 1350, "lr": 0.0002, "memory": 5941, "data_time": 0.04138, "loss_rpn_cls": 0.03177, "loss_rpn_bbox": 0.05421, "loss_cls": 0.23526, "acc": 92.16748, "loss_bbox": 0.25842, "loss_mask": 0.2527, "loss": 0.83236, "time": 0.21928} -{"mode": "train", "epoch": 7, "iter": 1400, "lr": 0.0002, "memory": 5941, "data_time": 0.04366, "loss_rpn_cls": 0.03123, "loss_rpn_bbox": 0.05289, "loss_cls": 0.22466, "acc": 92.44873, "loss_bbox": 0.24365, "loss_mask": 0.24578, "loss": 0.7982, "time": 0.21489} -{"mode": "train", "epoch": 7, "iter": 1450, "lr": 0.0002, "memory": 5941, "data_time": 0.04394, "loss_rpn_cls": 0.02938, "loss_rpn_bbox": 0.05148, "loss_cls": 0.22402, "acc": 92.39746, "loss_bbox": 0.24699, "loss_mask": 0.24108, "loss": 0.79295, "time": 0.21331} -{"mode": "train", "epoch": 7, "iter": 1500, "lr": 0.0002, "memory": 5941, "data_time": 0.03631, "loss_rpn_cls": 0.03095, "loss_rpn_bbox": 0.05357, "loss_cls": 0.23179, "acc": 92.12695, "loss_bbox": 0.25343, "loss_mask": 0.24827, "loss": 0.81801, "time": 0.24645} -{"mode": "train", "epoch": 7, "iter": 1550, "lr": 0.0002, "memory": 5941, "data_time": 0.0282, "loss_rpn_cls": 0.02882, "loss_rpn_bbox": 0.05078, "loss_cls": 0.21814, "acc": 92.70923, "loss_bbox": 0.238, "loss_mask": 0.2472, "loss": 0.78294, "time": 0.23213} -{"mode": "train", "epoch": 7, "iter": 1600, "lr": 0.0002, "memory": 5941, "data_time": 0.03784, "loss_rpn_cls": 0.03031, "loss_rpn_bbox": 0.05276, "loss_cls": 0.21472, "acc": 92.73462, "loss_bbox": 0.24461, "loss_mask": 0.2443, "loss": 0.7867, "time": 0.2495} -{"mode": "train", "epoch": 7, "iter": 1650, "lr": 0.0002, "memory": 5941, "data_time": 0.03909, "loss_rpn_cls": 0.03384, "loss_rpn_bbox": 0.05592, "loss_cls": 0.23293, "acc": 92.04907, "loss_bbox": 0.25987, "loss_mask": 0.24907, "loss": 0.83163, "time": 0.21403} -{"mode": "train", "epoch": 7, "iter": 1700, "lr": 0.0002, "memory": 5941, "data_time": 0.03633, "loss_rpn_cls": 0.02925, 
"loss_rpn_bbox": 0.0531, "loss_cls": 0.22444, "acc": 92.34253, "loss_bbox": 0.25031, "loss_mask": 0.24457, "loss": 0.80167, "time": 0.20352} -{"mode": "train", "epoch": 7, "iter": 1750, "lr": 0.0002, "memory": 5941, "data_time": 0.04425, "loss_rpn_cls": 0.02959, "loss_rpn_bbox": 0.0533, "loss_cls": 0.22672, "acc": 92.43115, "loss_bbox": 0.24639, "loss_mask": 0.2459, "loss": 0.80189, "time": 0.20738} -{"mode": "train", "epoch": 7, "iter": 1800, "lr": 0.0002, "memory": 5941, "data_time": 0.03791, "loss_rpn_cls": 0.03161, "loss_rpn_bbox": 0.0556, "loss_cls": 0.22827, "acc": 92.24341, "loss_bbox": 0.2561, "loss_mask": 0.24718, "loss": 0.81877, "time": 0.22019} -{"mode": "train", "epoch": 7, "iter": 1850, "lr": 0.0002, "memory": 5941, "data_time": 0.04223, "loss_rpn_cls": 0.03237, "loss_rpn_bbox": 0.05305, "loss_cls": 0.22975, "acc": 92.39648, "loss_bbox": 0.24443, "loss_mask": 0.24829, "loss": 0.8079, "time": 0.25255} -{"mode": "train", "epoch": 7, "iter": 1900, "lr": 0.0002, "memory": 5941, "data_time": 0.03867, "loss_rpn_cls": 0.02834, "loss_rpn_bbox": 0.04908, "loss_cls": 0.22382, "acc": 92.45386, "loss_bbox": 0.24817, "loss_mask": 0.24558, "loss": 0.79499, "time": 0.21004} -{"mode": "train", "epoch": 7, "iter": 1950, "lr": 0.0002, "memory": 5941, "data_time": 0.03951, "loss_rpn_cls": 0.02877, "loss_rpn_bbox": 0.04915, "loss_cls": 0.2135, "acc": 92.8894, "loss_bbox": 0.23826, "loss_mask": 0.24245, "loss": 0.77214, "time": 0.20895} -{"mode": "train", "epoch": 7, "iter": 2000, "lr": 0.0002, "memory": 5941, "data_time": 0.03789, "loss_rpn_cls": 0.02786, "loss_rpn_bbox": 0.04852, "loss_cls": 0.21606, "acc": 92.77368, "loss_bbox": 0.23585, "loss_mask": 0.24412, "loss": 0.77241, "time": 0.20719} -{"mode": "train", "epoch": 7, "iter": 2050, "lr": 0.0002, "memory": 5941, "data_time": 0.04389, "loss_rpn_cls": 0.03106, "loss_rpn_bbox": 0.0531, "loss_cls": 0.22307, "acc": 92.479, "loss_bbox": 0.24497, "loss_mask": 0.24671, "loss": 0.7989, "time": 0.2169} -{"mode": "train", "epoch": 7, "iter": 2100, "lr": 0.0002, "memory": 5941, "data_time": 0.03913, "loss_rpn_cls": 0.03378, "loss_rpn_bbox": 0.05552, "loss_cls": 0.22512, "acc": 92.47607, "loss_bbox": 0.2464, "loss_mask": 0.24646, "loss": 0.80728, "time": 0.21512} -{"mode": "train", "epoch": 7, "iter": 2150, "lr": 0.0002, "memory": 5941, "data_time": 0.04293, "loss_rpn_cls": 0.03243, "loss_rpn_bbox": 0.05494, "loss_cls": 0.22295, "acc": 92.52588, "loss_bbox": 0.24959, "loss_mask": 0.25156, "loss": 0.81148, "time": 0.21465} -{"mode": "train", "epoch": 7, "iter": 2200, "lr": 0.0002, "memory": 5941, "data_time": 0.0426, "loss_rpn_cls": 0.03074, "loss_rpn_bbox": 0.05436, "loss_cls": 0.23052, "acc": 92.177, "loss_bbox": 0.25321, "loss_mask": 0.2471, "loss": 0.81593, "time": 0.21422} -{"mode": "train", "epoch": 7, "iter": 2250, "lr": 0.0002, "memory": 5941, "data_time": 0.04256, "loss_rpn_cls": 0.03272, "loss_rpn_bbox": 0.05114, "loss_cls": 0.22204, "acc": 92.41553, "loss_bbox": 0.25007, "loss_mask": 0.24491, "loss": 0.80089, "time": 0.21074} -{"mode": "train", "epoch": 7, "iter": 2300, "lr": 0.0002, "memory": 5941, "data_time": 0.04235, "loss_rpn_cls": 0.03369, "loss_rpn_bbox": 0.0586, "loss_cls": 0.23273, "acc": 91.8833, "loss_bbox": 0.2658, "loss_mask": 0.25218, "loss": 0.84299, "time": 0.21384} -{"mode": "train", "epoch": 7, "iter": 2350, "lr": 0.0002, "memory": 5941, "data_time": 0.03815, "loss_rpn_cls": 0.03294, "loss_rpn_bbox": 0.05555, "loss_cls": 0.23246, "acc": 92.4502, "loss_bbox": 0.25024, "loss_mask": 0.25139, "loss": 0.82258, "time": 0.20703} 
-{"mode": "train", "epoch": 7, "iter": 2400, "lr": 0.0002, "memory": 5941, "data_time": 0.04196, "loss_rpn_cls": 0.03217, "loss_rpn_bbox": 0.05823, "loss_cls": 0.23559, "acc": 91.98218, "loss_bbox": 0.2657, "loss_mask": 0.25968, "loss": 0.85138, "time": 0.24816} -{"mode": "train", "epoch": 7, "iter": 2450, "lr": 0.0002, "memory": 5941, "data_time": 0.0398, "loss_rpn_cls": 0.02938, "loss_rpn_bbox": 0.05391, "loss_cls": 0.2222, "acc": 92.36401, "loss_bbox": 0.25262, "loss_mask": 0.25185, "loss": 0.80998, "time": 0.21123} -{"mode": "train", "epoch": 7, "iter": 2500, "lr": 0.0002, "memory": 5941, "data_time": 0.04287, "loss_rpn_cls": 0.03093, "loss_rpn_bbox": 0.05075, "loss_cls": 0.22436, "acc": 92.47217, "loss_bbox": 0.25, "loss_mask": 0.24619, "loss": 0.80223, "time": 0.20692} -{"mode": "train", "epoch": 7, "iter": 2550, "lr": 0.0002, "memory": 5941, "data_time": 0.0351, "loss_rpn_cls": 0.02816, "loss_rpn_bbox": 0.05153, "loss_cls": 0.21544, "acc": 92.64502, "loss_bbox": 0.24912, "loss_mask": 0.24469, "loss": 0.78894, "time": 0.20619} -{"mode": "train", "epoch": 7, "iter": 2600, "lr": 0.0002, "memory": 5941, "data_time": 0.04671, "loss_rpn_cls": 0.03313, "loss_rpn_bbox": 0.0537, "loss_cls": 0.23398, "acc": 91.9707, "loss_bbox": 0.26311, "loss_mask": 0.24844, "loss": 0.83235, "time": 0.21725} -{"mode": "train", "epoch": 7, "iter": 2650, "lr": 0.0002, "memory": 5941, "data_time": 0.03313, "loss_rpn_cls": 0.03042, "loss_rpn_bbox": 0.05331, "loss_cls": 0.21933, "acc": 92.59351, "loss_bbox": 0.24656, "loss_mask": 0.24868, "loss": 0.7983, "time": 0.21138} -{"mode": "train", "epoch": 7, "iter": 2700, "lr": 0.0002, "memory": 5941, "data_time": 0.03983, "loss_rpn_cls": 0.03295, "loss_rpn_bbox": 0.05588, "loss_cls": 0.22123, "acc": 92.51953, "loss_bbox": 0.25184, "loss_mask": 0.25367, "loss": 0.81557, "time": 0.25259} -{"mode": "train", "epoch": 7, "iter": 2750, "lr": 0.0002, "memory": 5941, "data_time": 0.03513, "loss_rpn_cls": 0.03043, "loss_rpn_bbox": 0.05203, "loss_cls": 0.23321, "acc": 92.17188, "loss_bbox": 0.25606, "loss_mask": 0.24893, "loss": 0.82066, "time": 0.20778} -{"mode": "train", "epoch": 7, "iter": 2800, "lr": 0.0002, "memory": 5941, "data_time": 0.04488, "loss_rpn_cls": 0.03323, "loss_rpn_bbox": 0.05447, "loss_cls": 0.23998, "acc": 91.97168, "loss_bbox": 0.25869, "loss_mask": 0.25474, "loss": 0.8411, "time": 0.222} -{"mode": "train", "epoch": 7, "iter": 2850, "lr": 0.0002, "memory": 5941, "data_time": 0.03455, "loss_rpn_cls": 0.02839, "loss_rpn_bbox": 0.05173, "loss_cls": 0.22475, "acc": 92.60352, "loss_bbox": 0.24073, "loss_mask": 0.24259, "loss": 0.78819, "time": 0.20928} -{"mode": "train", "epoch": 7, "iter": 2900, "lr": 0.0002, "memory": 5941, "data_time": 0.03663, "loss_rpn_cls": 0.0354, "loss_rpn_bbox": 0.05775, "loss_cls": 0.23753, "acc": 91.97607, "loss_bbox": 0.26277, "loss_mask": 0.24761, "loss": 0.84107, "time": 0.21629} -{"mode": "train", "epoch": 7, "iter": 2950, "lr": 0.0002, "memory": 5941, "data_time": 0.04108, "loss_rpn_cls": 0.03279, "loss_rpn_bbox": 0.05336, "loss_cls": 0.22991, "acc": 92.35596, "loss_bbox": 0.25017, "loss_mask": 0.24658, "loss": 0.8128, "time": 0.21092} -{"mode": "train", "epoch": 7, "iter": 3000, "lr": 0.0002, "memory": 5941, "data_time": 0.04574, "loss_rpn_cls": 0.03164, "loss_rpn_bbox": 0.05757, "loss_cls": 0.23748, "acc": 92.0647, "loss_bbox": 0.2552, "loss_mask": 0.24645, "loss": 0.82835, "time": 0.21653} -{"mode": "train", "epoch": 7, "iter": 3050, "lr": 0.0002, "memory": 5941, "data_time": 0.03647, "loss_rpn_cls": 0.032, "loss_rpn_bbox": 
0.05417, "loss_cls": 0.22195, "acc": 92.30737, "loss_bbox": 0.25389, "loss_mask": 0.24568, "loss": 0.8077, "time": 0.21739} -{"mode": "train", "epoch": 7, "iter": 3100, "lr": 0.0002, "memory": 5941, "data_time": 0.04015, "loss_rpn_cls": 0.03135, "loss_rpn_bbox": 0.05203, "loss_cls": 0.21909, "acc": 92.66382, "loss_bbox": 0.24672, "loss_mask": 0.24544, "loss": 0.79462, "time": 0.20985} -{"mode": "train", "epoch": 7, "iter": 3150, "lr": 0.0002, "memory": 5941, "data_time": 0.04699, "loss_rpn_cls": 0.03524, "loss_rpn_bbox": 0.05756, "loss_cls": 0.23397, "acc": 92.11743, "loss_bbox": 0.26273, "loss_mask": 0.2482, "loss": 0.8377, "time": 0.22988} -{"mode": "train", "epoch": 7, "iter": 3200, "lr": 0.0002, "memory": 5941, "data_time": 0.03748, "loss_rpn_cls": 0.0326, "loss_rpn_bbox": 0.05317, "loss_cls": 0.22872, "acc": 92.27515, "loss_bbox": 0.25555, "loss_mask": 0.24875, "loss": 0.81879, "time": 0.21427} -{"mode": "train", "epoch": 7, "iter": 3250, "lr": 0.0002, "memory": 5941, "data_time": 0.03721, "loss_rpn_cls": 0.03043, "loss_rpn_bbox": 0.0525, "loss_cls": 0.22596, "acc": 92.48218, "loss_bbox": 0.246, "loss_mask": 0.24271, "loss": 0.7976, "time": 0.21998} -{"mode": "train", "epoch": 7, "iter": 3300, "lr": 0.0002, "memory": 5941, "data_time": 0.03759, "loss_rpn_cls": 0.029, "loss_rpn_bbox": 0.05051, "loss_cls": 0.2224, "acc": 92.43726, "loss_bbox": 0.24697, "loss_mask": 0.24672, "loss": 0.79559, "time": 0.20213} -{"mode": "train", "epoch": 7, "iter": 3350, "lr": 0.0002, "memory": 5941, "data_time": 0.03563, "loss_rpn_cls": 0.02844, "loss_rpn_bbox": 0.04911, "loss_cls": 0.21914, "acc": 92.6416, "loss_bbox": 0.24313, "loss_mask": 0.24364, "loss": 0.78346, "time": 0.20646} -{"mode": "train", "epoch": 7, "iter": 3400, "lr": 0.0002, "memory": 5941, "data_time": 0.03928, "loss_rpn_cls": 0.03247, "loss_rpn_bbox": 0.0521, "loss_cls": 0.22593, "acc": 92.3606, "loss_bbox": 0.25322, "loss_mask": 0.2482, "loss": 0.81192, "time": 0.20872} -{"mode": "train", "epoch": 7, "iter": 3450, "lr": 0.0002, "memory": 5941, "data_time": 0.03606, "loss_rpn_cls": 0.03011, "loss_rpn_bbox": 0.05509, "loss_cls": 0.22344, "acc": 92.27563, "loss_bbox": 0.25239, "loss_mask": 0.2456, "loss": 0.80663, "time": 0.21676} -{"mode": "train", "epoch": 7, "iter": 3500, "lr": 0.0002, "memory": 5941, "data_time": 0.0351, "loss_rpn_cls": 0.02831, "loss_rpn_bbox": 0.04956, "loss_cls": 0.22364, "acc": 92.50928, "loss_bbox": 0.24282, "loss_mask": 0.24433, "loss": 0.78866, "time": 0.20729} -{"mode": "train", "epoch": 7, "iter": 3550, "lr": 0.0002, "memory": 5941, "data_time": 0.03756, "loss_rpn_cls": 0.02936, "loss_rpn_bbox": 0.05099, "loss_cls": 0.22831, "acc": 92.43945, "loss_bbox": 0.24784, "loss_mask": 0.2465, "loss": 0.803, "time": 0.20934} -{"mode": "train", "epoch": 7, "iter": 3600, "lr": 0.0002, "memory": 5941, "data_time": 0.03584, "loss_rpn_cls": 0.03185, "loss_rpn_bbox": 0.05361, "loss_cls": 0.23131, "acc": 92.20239, "loss_bbox": 0.25371, "loss_mask": 0.24178, "loss": 0.81225, "time": 0.22161} -{"mode": "train", "epoch": 7, "iter": 3650, "lr": 0.0002, "memory": 5941, "data_time": 0.03121, "loss_rpn_cls": 0.03034, "loss_rpn_bbox": 0.05352, "loss_cls": 0.23276, "acc": 92.17041, "loss_bbox": 0.25105, "loss_mask": 0.25413, "loss": 0.8218, "time": 0.21692} -{"mode": "train", "epoch": 7, "iter": 3700, "lr": 0.0002, "memory": 5941, "data_time": 0.0361, "loss_rpn_cls": 0.03018, "loss_rpn_bbox": 0.05165, "loss_cls": 0.21926, "acc": 92.5647, "loss_bbox": 0.24381, "loss_mask": 0.2426, "loss": 0.78749, "time": 0.2105} -{"mode": "train", 
"epoch": 7, "iter": 3750, "lr": 0.0002, "memory": 5941, "data_time": 0.0355, "loss_rpn_cls": 0.03206, "loss_rpn_bbox": 0.05437, "loss_cls": 0.23457, "acc": 92.20459, "loss_bbox": 0.25288, "loss_mask": 0.24851, "loss": 0.82239, "time": 0.21174} -{"mode": "train", "epoch": 7, "iter": 3800, "lr": 0.0002, "memory": 5941, "data_time": 0.03507, "loss_rpn_cls": 0.03533, "loss_rpn_bbox": 0.05704, "loss_cls": 0.23937, "acc": 91.93945, "loss_bbox": 0.26263, "loss_mask": 0.24858, "loss": 0.84295, "time": 0.22045} -{"mode": "train", "epoch": 7, "iter": 3850, "lr": 0.0002, "memory": 5941, "data_time": 0.03218, "loss_rpn_cls": 0.02837, "loss_rpn_bbox": 0.05052, "loss_cls": 0.23057, "acc": 92.24121, "loss_bbox": 0.24914, "loss_mask": 0.25139, "loss": 0.80999, "time": 0.21656} -{"mode": "train", "epoch": 7, "iter": 3900, "lr": 0.0002, "memory": 5941, "data_time": 0.03327, "loss_rpn_cls": 0.03045, "loss_rpn_bbox": 0.04998, "loss_cls": 0.22766, "acc": 92.12183, "loss_bbox": 0.25721, "loss_mask": 0.24758, "loss": 0.81287, "time": 0.20665} -{"mode": "train", "epoch": 7, "iter": 3950, "lr": 0.0002, "memory": 5941, "data_time": 0.03457, "loss_rpn_cls": 0.03092, "loss_rpn_bbox": 0.05221, "loss_cls": 0.23326, "acc": 92.29883, "loss_bbox": 0.25368, "loss_mask": 0.25135, "loss": 0.82142, "time": 0.20763} -{"mode": "train", "epoch": 7, "iter": 4000, "lr": 0.0002, "memory": 5941, "data_time": 0.0375, "loss_rpn_cls": 0.03073, "loss_rpn_bbox": 0.0511, "loss_cls": 0.21947, "acc": 92.67969, "loss_bbox": 0.24643, "loss_mask": 0.25181, "loss": 0.79954, "time": 0.20863} -{"mode": "train", "epoch": 7, "iter": 4050, "lr": 0.0002, "memory": 5941, "data_time": 0.04142, "loss_rpn_cls": 0.03273, "loss_rpn_bbox": 0.05578, "loss_cls": 0.2323, "acc": 92.07812, "loss_bbox": 0.25569, "loss_mask": 0.24415, "loss": 0.82064, "time": 0.21136} -{"mode": "train", "epoch": 7, "iter": 4100, "lr": 0.0002, "memory": 5941, "data_time": 0.04853, "loss_rpn_cls": 0.02991, "loss_rpn_bbox": 0.05227, "loss_cls": 0.23074, "acc": 92.37109, "loss_bbox": 0.24894, "loss_mask": 0.2495, "loss": 0.81135, "time": 0.21348} -{"mode": "train", "epoch": 7, "iter": 4150, "lr": 0.0002, "memory": 5941, "data_time": 0.03615, "loss_rpn_cls": 0.03165, "loss_rpn_bbox": 0.05489, "loss_cls": 0.23314, "acc": 92.11182, "loss_bbox": 0.25559, "loss_mask": 0.25321, "loss": 0.82848, "time": 0.21537} -{"mode": "train", "epoch": 7, "iter": 4200, "lr": 0.0002, "memory": 5941, "data_time": 0.03299, "loss_rpn_cls": 0.03136, "loss_rpn_bbox": 0.05411, "loss_cls": 0.22516, "acc": 92.45581, "loss_bbox": 0.25279, "loss_mask": 0.25247, "loss": 0.81589, "time": 0.21217} -{"mode": "train", "epoch": 7, "iter": 4250, "lr": 0.0002, "memory": 5941, "data_time": 0.03264, "loss_rpn_cls": 0.03101, "loss_rpn_bbox": 0.05042, "loss_cls": 0.22119, "acc": 92.58643, "loss_bbox": 0.24722, "loss_mask": 0.24819, "loss": 0.79803, "time": 0.21356} -{"mode": "train", "epoch": 7, "iter": 4300, "lr": 0.0002, "memory": 5941, "data_time": 0.04701, "loss_rpn_cls": 0.03348, "loss_rpn_bbox": 0.05347, "loss_cls": 0.22958, "acc": 92.37402, "loss_bbox": 0.25018, "loss_mask": 0.24663, "loss": 0.81333, "time": 0.22212} -{"mode": "train", "epoch": 7, "iter": 4350, "lr": 0.0002, "memory": 5941, "data_time": 0.03722, "loss_rpn_cls": 0.03046, "loss_rpn_bbox": 0.05201, "loss_cls": 0.21924, "acc": 92.55469, "loss_bbox": 0.24321, "loss_mask": 0.24203, "loss": 0.78694, "time": 0.21579} -{"mode": "train", "epoch": 7, "iter": 4400, "lr": 0.0002, "memory": 5941, "data_time": 0.03819, "loss_rpn_cls": 0.0362, "loss_rpn_bbox": 0.05781, 
"loss_cls": 0.23226, "acc": 92.12085, "loss_bbox": 0.2603, "loss_mask": 0.24577, "loss": 0.83234, "time": 0.21103} -{"mode": "train", "epoch": 7, "iter": 4450, "lr": 0.0002, "memory": 5941, "data_time": 0.03856, "loss_rpn_cls": 0.03097, "loss_rpn_bbox": 0.05429, "loss_cls": 0.22733, "acc": 92.25952, "loss_bbox": 0.25176, "loss_mask": 0.24983, "loss": 0.81418, "time": 0.20775} -{"mode": "train", "epoch": 7, "iter": 4500, "lr": 0.0002, "memory": 5941, "data_time": 0.03729, "loss_rpn_cls": 0.03291, "loss_rpn_bbox": 0.05651, "loss_cls": 0.24205, "acc": 91.87671, "loss_bbox": 0.26045, "loss_mask": 0.25321, "loss": 0.84514, "time": 0.21865} -{"mode": "train", "epoch": 7, "iter": 4550, "lr": 0.0002, "memory": 5941, "data_time": 0.03639, "loss_rpn_cls": 0.02991, "loss_rpn_bbox": 0.05146, "loss_cls": 0.21982, "acc": 92.65503, "loss_bbox": 0.2434, "loss_mask": 0.23992, "loss": 0.78452, "time": 0.20821} -{"mode": "train", "epoch": 7, "iter": 4600, "lr": 0.0002, "memory": 5941, "data_time": 0.04003, "loss_rpn_cls": 0.03075, "loss_rpn_bbox": 0.05199, "loss_cls": 0.23, "acc": 92.25977, "loss_bbox": 0.24887, "loss_mask": 0.24402, "loss": 0.80563, "time": 0.21533} -{"mode": "train", "epoch": 7, "iter": 4650, "lr": 0.0002, "memory": 5941, "data_time": 0.04194, "loss_rpn_cls": 0.03463, "loss_rpn_bbox": 0.05341, "loss_cls": 0.23589, "acc": 92.25659, "loss_bbox": 0.25141, "loss_mask": 0.25562, "loss": 0.83097, "time": 0.21652} -{"mode": "train", "epoch": 7, "iter": 4700, "lr": 0.0002, "memory": 5941, "data_time": 0.03117, "loss_rpn_cls": 0.03238, "loss_rpn_bbox": 0.05437, "loss_cls": 0.23136, "acc": 92.24243, "loss_bbox": 0.25617, "loss_mask": 0.25168, "loss": 0.82597, "time": 0.20898} -{"mode": "train", "epoch": 7, "iter": 4750, "lr": 0.0002, "memory": 5941, "data_time": 0.0404, "loss_rpn_cls": 0.0314, "loss_rpn_bbox": 0.05524, "loss_cls": 0.2317, "acc": 92.09937, "loss_bbox": 0.25568, "loss_mask": 0.24479, "loss": 0.81881, "time": 0.21925} -{"mode": "train", "epoch": 7, "iter": 4800, "lr": 0.0002, "memory": 5941, "data_time": 0.03993, "loss_rpn_cls": 0.02997, "loss_rpn_bbox": 0.05143, "loss_cls": 0.22892, "acc": 92.47778, "loss_bbox": 0.24771, "loss_mask": 0.24622, "loss": 0.80423, "time": 0.2026} -{"mode": "train", "epoch": 7, "iter": 4850, "lr": 0.0002, "memory": 5941, "data_time": 0.03504, "loss_rpn_cls": 0.03056, "loss_rpn_bbox": 0.05035, "loss_cls": 0.21659, "acc": 92.7915, "loss_bbox": 0.23648, "loss_mask": 0.24274, "loss": 0.77671, "time": 0.19621} -{"mode": "train", "epoch": 7, "iter": 4900, "lr": 0.0002, "memory": 5941, "data_time": 0.03617, "loss_rpn_cls": 0.03483, "loss_rpn_bbox": 0.05519, "loss_cls": 0.22867, "acc": 92.29785, "loss_bbox": 0.25214, "loss_mask": 0.24809, "loss": 0.81891, "time": 0.21055} -{"mode": "train", "epoch": 7, "iter": 4950, "lr": 0.0002, "memory": 5941, "data_time": 0.04319, "loss_rpn_cls": 0.03493, "loss_rpn_bbox": 0.05445, "loss_cls": 0.23852, "acc": 92.06348, "loss_bbox": 0.25833, "loss_mask": 0.24565, "loss": 0.83188, "time": 0.21154} -{"mode": "train", "epoch": 7, "iter": 5000, "lr": 0.0002, "memory": 5941, "data_time": 0.0365, "loss_rpn_cls": 0.03075, "loss_rpn_bbox": 0.05344, "loss_cls": 0.22854, "acc": 92.31274, "loss_bbox": 0.25515, "loss_mask": 0.25035, "loss": 0.81824, "time": 0.2066} -{"mode": "train", "epoch": 7, "iter": 5050, "lr": 0.0002, "memory": 5941, "data_time": 0.03466, "loss_rpn_cls": 0.0342, "loss_rpn_bbox": 0.05231, "loss_cls": 0.22734, "acc": 92.23242, "loss_bbox": 0.2523, "loss_mask": 0.24671, "loss": 0.81287, "time": 0.20729} -{"mode": "train", 
"epoch": 7, "iter": 5100, "lr": 0.0002, "memory": 5941, "data_time": 0.04454, "loss_rpn_cls": 0.03293, "loss_rpn_bbox": 0.0554, "loss_cls": 0.2306, "acc": 92.19434, "loss_bbox": 0.25262, "loss_mask": 0.2504, "loss": 0.82195, "time": 0.21345} -{"mode": "train", "epoch": 7, "iter": 5150, "lr": 0.0002, "memory": 5941, "data_time": 0.03552, "loss_rpn_cls": 0.03195, "loss_rpn_bbox": 0.05165, "loss_cls": 0.22904, "acc": 92.32812, "loss_bbox": 0.25098, "loss_mask": 0.24348, "loss": 0.80709, "time": 0.20463} -{"mode": "train", "epoch": 7, "iter": 5200, "lr": 0.0002, "memory": 5941, "data_time": 0.03473, "loss_rpn_cls": 0.03043, "loss_rpn_bbox": 0.05098, "loss_cls": 0.22665, "acc": 92.33594, "loss_bbox": 0.25089, "loss_mask": 0.24199, "loss": 0.80095, "time": 0.2161} -{"mode": "train", "epoch": 7, "iter": 5250, "lr": 0.0002, "memory": 5941, "data_time": 0.03441, "loss_rpn_cls": 0.0295, "loss_rpn_bbox": 0.05087, "loss_cls": 0.22364, "acc": 92.41821, "loss_bbox": 0.24301, "loss_mask": 0.24273, "loss": 0.78974, "time": 0.20399} -{"mode": "train", "epoch": 7, "iter": 5300, "lr": 0.0002, "memory": 5941, "data_time": 0.04302, "loss_rpn_cls": 0.02951, "loss_rpn_bbox": 0.05081, "loss_cls": 0.22258, "acc": 92.52124, "loss_bbox": 0.2383, "loss_mask": 0.24555, "loss": 0.78675, "time": 0.20819} -{"mode": "train", "epoch": 7, "iter": 5350, "lr": 0.0002, "memory": 5941, "data_time": 0.0348, "loss_rpn_cls": 0.03315, "loss_rpn_bbox": 0.05566, "loss_cls": 0.2301, "acc": 92.28247, "loss_bbox": 0.25149, "loss_mask": 0.25395, "loss": 0.82435, "time": 0.20753} -{"mode": "train", "epoch": 7, "iter": 5400, "lr": 0.0002, "memory": 5941, "data_time": 0.0372, "loss_rpn_cls": 0.03471, "loss_rpn_bbox": 0.0552, "loss_cls": 0.23322, "acc": 92.19434, "loss_bbox": 0.25435, "loss_mask": 0.25386, "loss": 0.83133, "time": 0.21693} -{"mode": "train", "epoch": 7, "iter": 5450, "lr": 0.0002, "memory": 5941, "data_time": 0.03503, "loss_rpn_cls": 0.03114, "loss_rpn_bbox": 0.05416, "loss_cls": 0.23453, "acc": 92.17896, "loss_bbox": 0.25388, "loss_mask": 0.24668, "loss": 0.82039, "time": 0.20136} -{"mode": "train", "epoch": 7, "iter": 5500, "lr": 0.0002, "memory": 5941, "data_time": 0.03596, "loss_rpn_cls": 0.03231, "loss_rpn_bbox": 0.05557, "loss_cls": 0.23232, "acc": 92.17944, "loss_bbox": 0.26144, "loss_mask": 0.24933, "loss": 0.83098, "time": 0.21392} -{"mode": "train", "epoch": 7, "iter": 5550, "lr": 0.0002, "memory": 5941, "data_time": 0.03701, "loss_rpn_cls": 0.0295, "loss_rpn_bbox": 0.05299, "loss_cls": 0.22559, "acc": 92.30615, "loss_bbox": 0.25658, "loss_mask": 0.25186, "loss": 0.81651, "time": 0.20498} -{"mode": "train", "epoch": 7, "iter": 5600, "lr": 0.0002, "memory": 5941, "data_time": 0.03515, "loss_rpn_cls": 0.02924, "loss_rpn_bbox": 0.05172, "loss_cls": 0.23224, "acc": 92.11719, "loss_bbox": 0.25457, "loss_mask": 0.24885, "loss": 0.81662, "time": 0.20771} -{"mode": "train", "epoch": 7, "iter": 5650, "lr": 0.0002, "memory": 5941, "data_time": 0.0348, "loss_rpn_cls": 0.03295, "loss_rpn_bbox": 0.05312, "loss_cls": 0.23457, "acc": 92.21484, "loss_bbox": 0.2485, "loss_mask": 0.24742, "loss": 0.81656, "time": 0.20951} -{"mode": "train", "epoch": 7, "iter": 5700, "lr": 0.0002, "memory": 5941, "data_time": 0.04405, "loss_rpn_cls": 0.03093, "loss_rpn_bbox": 0.05385, "loss_cls": 0.23412, "acc": 92.1875, "loss_bbox": 0.25234, "loss_mask": 0.24871, "loss": 0.81993, "time": 0.21191} -{"mode": "train", "epoch": 7, "iter": 5750, "lr": 0.0002, "memory": 5941, "data_time": 0.04423, "loss_rpn_cls": 0.03143, "loss_rpn_bbox": 0.05111, 
"loss_cls": 0.2258, "acc": 92.39966, "loss_bbox": 0.24937, "loss_mask": 0.24158, "loss": 0.79929, "time": 0.21307} -{"mode": "train", "epoch": 7, "iter": 5800, "lr": 0.0002, "memory": 5941, "data_time": 0.03379, "loss_rpn_cls": 0.02945, "loss_rpn_bbox": 0.05002, "loss_cls": 0.22275, "acc": 92.46216, "loss_bbox": 0.24829, "loss_mask": 0.24525, "loss": 0.79578, "time": 0.20543} -{"mode": "train", "epoch": 7, "iter": 5850, "lr": 0.0002, "memory": 5941, "data_time": 0.03366, "loss_rpn_cls": 0.03093, "loss_rpn_bbox": 0.05595, "loss_cls": 0.22829, "acc": 92.19556, "loss_bbox": 0.25693, "loss_mask": 0.24777, "loss": 0.81988, "time": 0.21183} -{"mode": "train", "epoch": 7, "iter": 5900, "lr": 0.0002, "memory": 5941, "data_time": 0.03488, "loss_rpn_cls": 0.03065, "loss_rpn_bbox": 0.05306, "loss_cls": 0.23018, "acc": 92.15137, "loss_bbox": 0.25446, "loss_mask": 0.24575, "loss": 0.81409, "time": 0.21583} -{"mode": "train", "epoch": 7, "iter": 5950, "lr": 0.0002, "memory": 5941, "data_time": 0.03089, "loss_rpn_cls": 0.03355, "loss_rpn_bbox": 0.05374, "loss_cls": 0.2383, "acc": 92.15356, "loss_bbox": 0.25652, "loss_mask": 0.2513, "loss": 0.83341, "time": 0.20068} -{"mode": "train", "epoch": 7, "iter": 6000, "lr": 0.0002, "memory": 5941, "data_time": 0.04229, "loss_rpn_cls": 0.03282, "loss_rpn_bbox": 0.05377, "loss_cls": 0.2262, "acc": 92.36963, "loss_bbox": 0.24869, "loss_mask": 0.24521, "loss": 0.8067, "time": 0.20043} -{"mode": "train", "epoch": 7, "iter": 6050, "lr": 0.0002, "memory": 5941, "data_time": 0.04528, "loss_rpn_cls": 0.03211, "loss_rpn_bbox": 0.05641, "loss_cls": 0.23524, "acc": 92.11938, "loss_bbox": 0.26017, "loss_mask": 0.2524, "loss": 0.83633, "time": 0.2179} -{"mode": "train", "epoch": 7, "iter": 6100, "lr": 0.0002, "memory": 5941, "data_time": 0.03404, "loss_rpn_cls": 0.03142, "loss_rpn_bbox": 0.05374, "loss_cls": 0.22891, "acc": 92.43652, "loss_bbox": 0.24503, "loss_mask": 0.25001, "loss": 0.80911, "time": 0.20833} -{"mode": "train", "epoch": 7, "iter": 6150, "lr": 0.0002, "memory": 5941, "data_time": 0.03908, "loss_rpn_cls": 0.03305, "loss_rpn_bbox": 0.05724, "loss_cls": 0.23698, "acc": 91.92676, "loss_bbox": 0.2661, "loss_mask": 0.2496, "loss": 0.84297, "time": 0.21392} -{"mode": "train", "epoch": 7, "iter": 6200, "lr": 0.0002, "memory": 5941, "data_time": 0.04019, "loss_rpn_cls": 0.02982, "loss_rpn_bbox": 0.05231, "loss_cls": 0.23029, "acc": 92.22632, "loss_bbox": 0.25253, "loss_mask": 0.24295, "loss": 0.8079, "time": 0.19829} -{"mode": "train", "epoch": 7, "iter": 6250, "lr": 0.0002, "memory": 5941, "data_time": 0.03809, "loss_rpn_cls": 0.03342, "loss_rpn_bbox": 0.05512, "loss_cls": 0.23597, "acc": 92.08667, "loss_bbox": 0.25866, "loss_mask": 0.25586, "loss": 0.83903, "time": 0.20761} -{"mode": "train", "epoch": 7, "iter": 6300, "lr": 0.0002, "memory": 5941, "data_time": 0.03128, "loss_rpn_cls": 0.03076, "loss_rpn_bbox": 0.05199, "loss_cls": 0.22147, "acc": 92.54883, "loss_bbox": 0.24426, "loss_mask": 0.24252, "loss": 0.791, "time": 0.2129} -{"mode": "train", "epoch": 7, "iter": 6350, "lr": 0.0002, "memory": 5941, "data_time": 0.03446, "loss_rpn_cls": 0.0347, "loss_rpn_bbox": 0.05551, "loss_cls": 0.24112, "acc": 91.90186, "loss_bbox": 0.26204, "loss_mask": 0.25163, "loss": 0.845, "time": 0.22525} -{"mode": "train", "epoch": 7, "iter": 6400, "lr": 0.0002, "memory": 5941, "data_time": 0.03474, "loss_rpn_cls": 0.03228, "loss_rpn_bbox": 0.05186, "loss_cls": 0.2311, "acc": 92.17236, "loss_bbox": 0.25711, "loss_mask": 0.24935, "loss": 0.8217, "time": 0.2111} -{"mode": "train", 
"epoch": 7, "iter": 6450, "lr": 0.0002, "memory": 5941, "data_time": 0.03639, "loss_rpn_cls": 0.03271, "loss_rpn_bbox": 0.05468, "loss_cls": 0.24113, "acc": 92.08228, "loss_bbox": 0.25273, "loss_mask": 0.24481, "loss": 0.82607, "time": 0.21629} -{"mode": "train", "epoch": 7, "iter": 6500, "lr": 0.0002, "memory": 5941, "data_time": 0.03696, "loss_rpn_cls": 0.03286, "loss_rpn_bbox": 0.05351, "loss_cls": 0.23229, "acc": 92.1084, "loss_bbox": 0.25738, "loss_mask": 0.25299, "loss": 0.82904, "time": 0.20522} -{"mode": "train", "epoch": 7, "iter": 6550, "lr": 0.0002, "memory": 5941, "data_time": 0.04351, "loss_rpn_cls": 0.03419, "loss_rpn_bbox": 0.0557, "loss_cls": 0.2321, "acc": 92.12695, "loss_bbox": 0.25414, "loss_mask": 0.24476, "loss": 0.82089, "time": 0.20677} -{"mode": "train", "epoch": 7, "iter": 6600, "lr": 0.0002, "memory": 5941, "data_time": 0.03522, "loss_rpn_cls": 0.03402, "loss_rpn_bbox": 0.05689, "loss_cls": 0.2543, "acc": 91.59521, "loss_bbox": 0.26889, "loss_mask": 0.25763, "loss": 0.87173, "time": 0.21236} -{"mode": "train", "epoch": 7, "iter": 6650, "lr": 0.0002, "memory": 5941, "data_time": 0.03521, "loss_rpn_cls": 0.03537, "loss_rpn_bbox": 0.05407, "loss_cls": 0.23071, "acc": 92.18359, "loss_bbox": 0.25543, "loss_mask": 0.25078, "loss": 0.82636, "time": 0.21975} -{"mode": "train", "epoch": 7, "iter": 6700, "lr": 0.0002, "memory": 5941, "data_time": 0.03353, "loss_rpn_cls": 0.03289, "loss_rpn_bbox": 0.05601, "loss_cls": 0.23217, "acc": 92.07104, "loss_bbox": 0.25809, "loss_mask": 0.24757, "loss": 0.82672, "time": 0.20991} -{"mode": "train", "epoch": 7, "iter": 6750, "lr": 0.0002, "memory": 5941, "data_time": 0.0459, "loss_rpn_cls": 0.03315, "loss_rpn_bbox": 0.05619, "loss_cls": 0.23162, "acc": 92.31372, "loss_bbox": 0.25113, "loss_mask": 0.25081, "loss": 0.8229, "time": 0.20681} -{"mode": "train", "epoch": 7, "iter": 6800, "lr": 0.0002, "memory": 5941, "data_time": 0.04494, "loss_rpn_cls": 0.03605, "loss_rpn_bbox": 0.05794, "loss_cls": 0.24444, "acc": 91.66821, "loss_bbox": 0.27399, "loss_mask": 0.25919, "loss": 0.87161, "time": 0.2159} -{"mode": "train", "epoch": 7, "iter": 6850, "lr": 0.0002, "memory": 5941, "data_time": 0.03121, "loss_rpn_cls": 0.03117, "loss_rpn_bbox": 0.05181, "loss_cls": 0.22998, "acc": 92.41284, "loss_bbox": 0.24288, "loss_mask": 0.24482, "loss": 0.80067, "time": 0.20263} -{"mode": "train", "epoch": 7, "iter": 6900, "lr": 0.0002, "memory": 5941, "data_time": 0.03401, "loss_rpn_cls": 0.02901, "loss_rpn_bbox": 0.04976, "loss_cls": 0.21747, "acc": 92.63965, "loss_bbox": 0.24528, "loss_mask": 0.24418, "loss": 0.78569, "time": 0.20667} -{"mode": "train", "epoch": 7, "iter": 6950, "lr": 0.0002, "memory": 5941, "data_time": 0.03586, "loss_rpn_cls": 0.02992, "loss_rpn_bbox": 0.04804, "loss_cls": 0.21267, "acc": 92.85132, "loss_bbox": 0.23818, "loss_mask": 0.2467, "loss": 0.7755, "time": 0.20634} -{"mode": "train", "epoch": 7, "iter": 7000, "lr": 0.0002, "memory": 5941, "data_time": 0.0335, "loss_rpn_cls": 0.03058, "loss_rpn_bbox": 0.05152, "loss_cls": 0.22356, "acc": 92.55688, "loss_bbox": 0.24175, "loss_mask": 0.24967, "loss": 0.79709, "time": 0.20355} -{"mode": "train", "epoch": 7, "iter": 7050, "lr": 0.0002, "memory": 5941, "data_time": 0.03914, "loss_rpn_cls": 0.03149, "loss_rpn_bbox": 0.05471, "loss_cls": 0.23429, "acc": 92.20972, "loss_bbox": 0.25132, "loss_mask": 0.24542, "loss": 0.81724, "time": 0.20869} -{"mode": "train", "epoch": 7, "iter": 7100, "lr": 0.0002, "memory": 5941, "data_time": 0.03604, "loss_rpn_cls": 0.03392, "loss_rpn_bbox": 0.05521, 
"loss_cls": 0.2267, "acc": 92.49048, "loss_bbox": 0.24394, "loss_mask": 0.24926, "loss": 0.80902, "time": 0.21012} -{"mode": "train", "epoch": 7, "iter": 7150, "lr": 0.0002, "memory": 5941, "data_time": 0.03447, "loss_rpn_cls": 0.03341, "loss_rpn_bbox": 0.05383, "loss_cls": 0.23927, "acc": 92.12939, "loss_bbox": 0.25519, "loss_mask": 0.24881, "loss": 0.83051, "time": 0.20398} -{"mode": "train", "epoch": 7, "iter": 7200, "lr": 0.0002, "memory": 5941, "data_time": 0.04261, "loss_rpn_cls": 0.03271, "loss_rpn_bbox": 0.05829, "loss_cls": 0.24104, "acc": 91.98633, "loss_bbox": 0.26632, "loss_mask": 0.2544, "loss": 0.85276, "time": 0.21212} -{"mode": "train", "epoch": 7, "iter": 7250, "lr": 0.0002, "memory": 5941, "data_time": 0.03826, "loss_rpn_cls": 0.03547, "loss_rpn_bbox": 0.05369, "loss_cls": 0.2268, "acc": 92.41553, "loss_bbox": 0.25366, "loss_mask": 0.24749, "loss": 0.81711, "time": 0.2077} -{"mode": "train", "epoch": 7, "iter": 7300, "lr": 0.0002, "memory": 5941, "data_time": 0.0323, "loss_rpn_cls": 0.03099, "loss_rpn_bbox": 0.05117, "loss_cls": 0.23105, "acc": 92.29688, "loss_bbox": 0.25291, "loss_mask": 0.24707, "loss": 0.81319, "time": 0.20617} -{"mode": "val", "epoch": 7, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3374, "bbox_mAP_50": 0.5579, "bbox_mAP_75": 0.3591, "bbox_mAP_s": 0.1942, "bbox_mAP_m": 0.3664, "bbox_mAP_l": 0.4369, "bbox_mAP_copypaste": "0.3374 0.5579 0.3591 0.1942 0.3664 0.4369", "segm_mAP": 0.327, "segm_mAP_50": 0.5317, "segm_mAP_75": 0.3475, "segm_mAP_s": 0.1524, "segm_mAP_m": 0.3559, "segm_mAP_l": 0.4739, "segm_mAP_copypaste": "0.3270 0.5317 0.3475 0.1524 0.3559 0.4739"} -{"mode": "train", "epoch": 8, "iter": 50, "lr": 0.0002, "memory": 5941, "data_time": 0.10338, "loss_rpn_cls": 0.02559, "loss_rpn_bbox": 0.0488, "loss_cls": 0.20515, "acc": 92.96948, "loss_bbox": 0.23149, "loss_mask": 0.23661, "loss": 0.74764, "time": 0.29225} -{"mode": "train", "epoch": 8, "iter": 100, "lr": 0.0002, "memory": 5941, "data_time": 0.04011, "loss_rpn_cls": 0.02952, "loss_rpn_bbox": 0.05173, "loss_cls": 0.22028, "acc": 92.52368, "loss_bbox": 0.24767, "loss_mask": 0.23808, "loss": 0.78727, "time": 0.2346} -{"mode": "train", "epoch": 8, "iter": 150, "lr": 0.0002, "memory": 5941, "data_time": 0.03732, "loss_rpn_cls": 0.02553, "loss_rpn_bbox": 0.0485, "loss_cls": 0.20852, "acc": 92.89209, "loss_bbox": 0.23179, "loss_mask": 0.23466, "loss": 0.74901, "time": 0.21541} -{"mode": "train", "epoch": 8, "iter": 200, "lr": 0.0002, "memory": 5941, "data_time": 0.04287, "loss_rpn_cls": 0.03088, "loss_rpn_bbox": 0.05426, "loss_cls": 0.2069, "acc": 92.84985, "loss_bbox": 0.23794, "loss_mask": 0.24631, "loss": 0.7763, "time": 0.23704} -{"mode": "train", "epoch": 8, "iter": 250, "lr": 0.0002, "memory": 5941, "data_time": 0.03919, "loss_rpn_cls": 0.03149, "loss_rpn_bbox": 0.05426, "loss_cls": 0.2182, "acc": 92.5, "loss_bbox": 0.2478, "loss_mask": 0.24327, "loss": 0.79501, "time": 0.22578} -{"mode": "train", "epoch": 8, "iter": 300, "lr": 0.0002, "memory": 5941, "data_time": 0.04812, "loss_rpn_cls": 0.03163, "loss_rpn_bbox": 0.05632, "loss_cls": 0.22174, "acc": 92.30688, "loss_bbox": 0.25328, "loss_mask": 0.24044, "loss": 0.80341, "time": 0.23597} -{"mode": "train", "epoch": 8, "iter": 350, "lr": 0.0002, "memory": 5941, "data_time": 0.03248, "loss_rpn_cls": 0.02826, "loss_rpn_bbox": 0.05119, "loss_cls": 0.20807, "acc": 93.0, "loss_bbox": 0.2341, "loss_mask": 0.24266, "loss": 0.76427, "time": 0.22021} -{"mode": "train", "epoch": 8, "iter": 400, "lr": 0.0002, "memory": 5941, "data_time": 0.03823, 
"loss_rpn_cls": 0.03073, "loss_rpn_bbox": 0.05376, "loss_cls": 0.21373, "acc": 92.69824, "loss_bbox": 0.23799, "loss_mask": 0.24073, "loss": 0.77695, "time": 0.22649} -{"mode": "train", "epoch": 8, "iter": 450, "lr": 0.0002, "memory": 5941, "data_time": 0.04009, "loss_rpn_cls": 0.02637, "loss_rpn_bbox": 0.04755, "loss_cls": 0.21325, "acc": 92.72876, "loss_bbox": 0.23816, "loss_mask": 0.24523, "loss": 0.77055, "time": 0.21656} -{"mode": "train", "epoch": 8, "iter": 500, "lr": 0.0002, "memory": 5941, "data_time": 0.03514, "loss_rpn_cls": 0.02805, "loss_rpn_bbox": 0.05066, "loss_cls": 0.22504, "acc": 92.1333, "loss_bbox": 0.26138, "loss_mask": 0.24111, "loss": 0.80624, "time": 0.22279} -{"mode": "train", "epoch": 8, "iter": 550, "lr": 0.0002, "memory": 5941, "data_time": 0.04243, "loss_rpn_cls": 0.03099, "loss_rpn_bbox": 0.05156, "loss_cls": 0.2115, "acc": 92.80493, "loss_bbox": 0.23596, "loss_mask": 0.23733, "loss": 0.76734, "time": 0.21534} -{"mode": "train", "epoch": 8, "iter": 600, "lr": 0.0002, "memory": 5941, "data_time": 0.03346, "loss_rpn_cls": 0.02982, "loss_rpn_bbox": 0.04809, "loss_cls": 0.21314, "acc": 92.71362, "loss_bbox": 0.24118, "loss_mask": 0.24563, "loss": 0.77786, "time": 0.21184} -{"mode": "train", "epoch": 8, "iter": 650, "lr": 0.0002, "memory": 5941, "data_time": 0.03767, "loss_rpn_cls": 0.02915, "loss_rpn_bbox": 0.05415, "loss_cls": 0.22364, "acc": 92.4314, "loss_bbox": 0.25149, "loss_mask": 0.24615, "loss": 0.80457, "time": 0.21379} -{"mode": "train", "epoch": 8, "iter": 700, "lr": 0.0002, "memory": 5941, "data_time": 0.03526, "loss_rpn_cls": 0.02776, "loss_rpn_bbox": 0.04959, "loss_cls": 0.21957, "acc": 92.4895, "loss_bbox": 0.24591, "loss_mask": 0.24004, "loss": 0.78287, "time": 0.21097} -{"mode": "train", "epoch": 8, "iter": 750, "lr": 0.0002, "memory": 5941, "data_time": 0.03814, "loss_rpn_cls": 0.03001, "loss_rpn_bbox": 0.05328, "loss_cls": 0.22129, "acc": 92.35132, "loss_bbox": 0.25312, "loss_mask": 0.23868, "loss": 0.79638, "time": 0.21701} -{"mode": "train", "epoch": 8, "iter": 800, "lr": 0.0002, "memory": 5941, "data_time": 0.04269, "loss_rpn_cls": 0.02706, "loss_rpn_bbox": 0.05023, "loss_cls": 0.21032, "acc": 92.80054, "loss_bbox": 0.23668, "loss_mask": 0.23765, "loss": 0.76194, "time": 0.2114} -{"mode": "train", "epoch": 8, "iter": 850, "lr": 0.0002, "memory": 5941, "data_time": 0.03558, "loss_rpn_cls": 0.02768, "loss_rpn_bbox": 0.05098, "loss_cls": 0.21693, "acc": 92.57471, "loss_bbox": 0.23997, "loss_mask": 0.24478, "loss": 0.78034, "time": 0.21921} -{"mode": "train", "epoch": 8, "iter": 900, "lr": 0.0002, "memory": 5941, "data_time": 0.03871, "loss_rpn_cls": 0.03173, "loss_rpn_bbox": 0.05441, "loss_cls": 0.22946, "acc": 92.23022, "loss_bbox": 0.25154, "loss_mask": 0.24741, "loss": 0.81456, "time": 0.21576} -{"mode": "train", "epoch": 8, "iter": 950, "lr": 0.0002, "memory": 5941, "data_time": 0.03896, "loss_rpn_cls": 0.03202, "loss_rpn_bbox": 0.05367, "loss_cls": 0.22072, "acc": 92.36841, "loss_bbox": 0.2533, "loss_mask": 0.24752, "loss": 0.80723, "time": 0.21793} -{"mode": "train", "epoch": 8, "iter": 1000, "lr": 0.0002, "memory": 5941, "data_time": 0.04335, "loss_rpn_cls": 0.03144, "loss_rpn_bbox": 0.05104, "loss_cls": 0.22144, "acc": 92.51807, "loss_bbox": 0.24752, "loss_mask": 0.24372, "loss": 0.79516, "time": 0.22134} -{"mode": "train", "epoch": 8, "iter": 1050, "lr": 0.0002, "memory": 5941, "data_time": 0.03115, "loss_rpn_cls": 0.02925, "loss_rpn_bbox": 0.05149, "loss_cls": 0.21914, "acc": 92.55713, "loss_bbox": 0.24536, "loss_mask": 0.24473, 
"loss": 0.78997, "time": 0.21193} -{"mode": "train", "epoch": 8, "iter": 1100, "lr": 0.0002, "memory": 5941, "data_time": 0.03704, "loss_rpn_cls": 0.02888, "loss_rpn_bbox": 0.05007, "loss_cls": 0.2168, "acc": 92.54419, "loss_bbox": 0.24613, "loss_mask": 0.24684, "loss": 0.78872, "time": 0.21622} -{"mode": "train", "epoch": 8, "iter": 1150, "lr": 0.0002, "memory": 5941, "data_time": 0.04049, "loss_rpn_cls": 0.03025, "loss_rpn_bbox": 0.05343, "loss_cls": 0.21467, "acc": 92.67236, "loss_bbox": 0.24091, "loss_mask": 0.24691, "loss": 0.78617, "time": 0.22022} -{"mode": "train", "epoch": 8, "iter": 1200, "lr": 0.0002, "memory": 5941, "data_time": 0.04353, "loss_rpn_cls": 0.02926, "loss_rpn_bbox": 0.05285, "loss_cls": 0.21211, "acc": 92.68018, "loss_bbox": 0.24453, "loss_mask": 0.24081, "loss": 0.77956, "time": 0.20047} -{"mode": "train", "epoch": 8, "iter": 1250, "lr": 0.0002, "memory": 5941, "data_time": 0.04092, "loss_rpn_cls": 0.02812, "loss_rpn_bbox": 0.05232, "loss_cls": 0.21996, "acc": 92.53223, "loss_bbox": 0.24975, "loss_mask": 0.2367, "loss": 0.78685, "time": 0.21046} -{"mode": "train", "epoch": 8, "iter": 1300, "lr": 0.0002, "memory": 5941, "data_time": 0.04672, "loss_rpn_cls": 0.03384, "loss_rpn_bbox": 0.05645, "loss_cls": 0.23735, "acc": 91.90015, "loss_bbox": 0.26106, "loss_mask": 0.25439, "loss": 0.84308, "time": 0.21831} -{"mode": "train", "epoch": 8, "iter": 1350, "lr": 0.0002, "memory": 5941, "data_time": 0.03735, "loss_rpn_cls": 0.02799, "loss_rpn_bbox": 0.04995, "loss_cls": 0.21368, "acc": 92.59375, "loss_bbox": 0.24208, "loss_mask": 0.24015, "loss": 0.77385, "time": 0.2053} -{"mode": "train", "epoch": 8, "iter": 1400, "lr": 0.0002, "memory": 5941, "data_time": 0.04117, "loss_rpn_cls": 0.0305, "loss_rpn_bbox": 0.05374, "loss_cls": 0.22881, "acc": 92.25269, "loss_bbox": 0.25142, "loss_mask": 0.2474, "loss": 0.81188, "time": 0.20328} -{"mode": "train", "epoch": 8, "iter": 1450, "lr": 0.0002, "memory": 5941, "data_time": 0.04215, "loss_rpn_cls": 0.02871, "loss_rpn_bbox": 0.05568, "loss_cls": 0.21552, "acc": 92.59985, "loss_bbox": 0.25012, "loss_mask": 0.24294, "loss": 0.79297, "time": 0.20412} -{"mode": "train", "epoch": 8, "iter": 1500, "lr": 0.0002, "memory": 5941, "data_time": 0.03902, "loss_rpn_cls": 0.03197, "loss_rpn_bbox": 0.05518, "loss_cls": 0.22723, "acc": 92.31958, "loss_bbox": 0.25505, "loss_mask": 0.24496, "loss": 0.81439, "time": 0.21575} -{"mode": "train", "epoch": 8, "iter": 1550, "lr": 0.0002, "memory": 5941, "data_time": 0.04754, "loss_rpn_cls": 0.03053, "loss_rpn_bbox": 0.05216, "loss_cls": 0.21844, "acc": 92.56079, "loss_bbox": 0.24518, "loss_mask": 0.24659, "loss": 0.79291, "time": 0.20685} -{"mode": "train", "epoch": 8, "iter": 1600, "lr": 0.0002, "memory": 5941, "data_time": 0.04735, "loss_rpn_cls": 0.03188, "loss_rpn_bbox": 0.05651, "loss_cls": 0.22018, "acc": 92.42383, "loss_bbox": 0.25295, "loss_mask": 0.2479, "loss": 0.80941, "time": 0.21266} -{"mode": "train", "epoch": 8, "iter": 1650, "lr": 0.0002, "memory": 5941, "data_time": 0.04232, "loss_rpn_cls": 0.02921, "loss_rpn_bbox": 0.05238, "loss_cls": 0.22757, "acc": 92.47656, "loss_bbox": 0.24761, "loss_mask": 0.24493, "loss": 0.80169, "time": 0.21702} -{"mode": "train", "epoch": 8, "iter": 1700, "lr": 0.0002, "memory": 5941, "data_time": 0.03951, "loss_rpn_cls": 0.03009, "loss_rpn_bbox": 0.05197, "loss_cls": 0.22455, "acc": 92.37329, "loss_bbox": 0.24989, "loss_mask": 0.24709, "loss": 0.80357, "time": 0.21003} -{"mode": "train", "epoch": 8, "iter": 1750, "lr": 0.0002, "memory": 5941, "data_time": 
0.0392, "loss_rpn_cls": 0.03065, "loss_rpn_bbox": 0.05002, "loss_cls": 0.21825, "acc": 92.64258, "loss_bbox": 0.23913, "loss_mask": 0.23878, "loss": 0.77682, "time": 0.20889} -{"mode": "train", "epoch": 8, "iter": 1800, "lr": 0.0002, "memory": 5941, "data_time": 0.03919, "loss_rpn_cls": 0.03082, "loss_rpn_bbox": 0.05098, "loss_cls": 0.22517, "acc": 92.39722, "loss_bbox": 0.24921, "loss_mask": 0.24622, "loss": 0.80239, "time": 0.20663} -{"mode": "train", "epoch": 8, "iter": 1850, "lr": 0.0002, "memory": 5941, "data_time": 0.0414, "loss_rpn_cls": 0.03162, "loss_rpn_bbox": 0.05485, "loss_cls": 0.23754, "acc": 92.01758, "loss_bbox": 0.25733, "loss_mask": 0.24822, "loss": 0.82956, "time": 0.2124} -{"mode": "train", "epoch": 8, "iter": 1900, "lr": 0.0002, "memory": 5941, "data_time": 0.04231, "loss_rpn_cls": 0.03096, "loss_rpn_bbox": 0.05142, "loss_cls": 0.22522, "acc": 92.32666, "loss_bbox": 0.25379, "loss_mask": 0.24376, "loss": 0.80515, "time": 0.20444} -{"mode": "train", "epoch": 8, "iter": 1950, "lr": 0.0002, "memory": 5941, "data_time": 0.03799, "loss_rpn_cls": 0.02779, "loss_rpn_bbox": 0.0505, "loss_cls": 0.21409, "acc": 92.81543, "loss_bbox": 0.2412, "loss_mask": 0.2463, "loss": 0.77988, "time": 0.20271} -{"mode": "train", "epoch": 8, "iter": 2000, "lr": 0.0002, "memory": 5941, "data_time": 0.0407, "loss_rpn_cls": 0.03153, "loss_rpn_bbox": 0.05448, "loss_cls": 0.23513, "acc": 92.0918, "loss_bbox": 0.25794, "loss_mask": 0.24903, "loss": 0.82812, "time": 0.21668} -{"mode": "train", "epoch": 8, "iter": 2050, "lr": 0.0002, "memory": 5941, "data_time": 0.04346, "loss_rpn_cls": 0.0285, "loss_rpn_bbox": 0.05129, "loss_cls": 0.2261, "acc": 92.50903, "loss_bbox": 0.24408, "loss_mask": 0.24285, "loss": 0.79282, "time": 0.21085} -{"mode": "train", "epoch": 8, "iter": 2100, "lr": 0.0002, "memory": 5941, "data_time": 0.03866, "loss_rpn_cls": 0.03225, "loss_rpn_bbox": 0.05258, "loss_cls": 0.22538, "acc": 92.36963, "loss_bbox": 0.25051, "loss_mask": 0.24237, "loss": 0.8031, "time": 0.20876} -{"mode": "train", "epoch": 8, "iter": 2150, "lr": 0.0002, "memory": 5941, "data_time": 0.0396, "loss_rpn_cls": 0.03145, "loss_rpn_bbox": 0.05335, "loss_cls": 0.21967, "acc": 92.47192, "loss_bbox": 0.25055, "loss_mask": 0.24101, "loss": 0.79603, "time": 0.21485} -{"mode": "train", "epoch": 8, "iter": 2200, "lr": 0.0002, "memory": 5941, "data_time": 0.03922, "loss_rpn_cls": 0.02822, "loss_rpn_bbox": 0.05224, "loss_cls": 0.21357, "acc": 92.74536, "loss_bbox": 0.24227, "loss_mask": 0.24202, "loss": 0.77831, "time": 0.2112} -{"mode": "train", "epoch": 8, "iter": 2250, "lr": 0.0002, "memory": 5941, "data_time": 0.02929, "loss_rpn_cls": 0.03204, "loss_rpn_bbox": 0.05459, "loss_cls": 0.22339, "acc": 92.41138, "loss_bbox": 0.24925, "loss_mask": 0.24337, "loss": 0.80263, "time": 0.20553} -{"mode": "train", "epoch": 8, "iter": 2300, "lr": 0.0002, "memory": 5941, "data_time": 0.03999, "loss_rpn_cls": 0.0303, "loss_rpn_bbox": 0.05304, "loss_cls": 0.22139, "acc": 92.42944, "loss_bbox": 0.25295, "loss_mask": 0.24213, "loss": 0.79981, "time": 0.2134} -{"mode": "train", "epoch": 8, "iter": 2350, "lr": 0.0002, "memory": 5941, "data_time": 0.03755, "loss_rpn_cls": 0.02833, "loss_rpn_bbox": 0.05277, "loss_cls": 0.22081, "acc": 92.46313, "loss_bbox": 0.24915, "loss_mask": 0.24453, "loss": 0.7956, "time": 0.21042} -{"mode": "train", "epoch": 8, "iter": 2400, "lr": 0.0002, "memory": 5941, "data_time": 0.03905, "loss_rpn_cls": 0.03374, "loss_rpn_bbox": 0.05542, "loss_cls": 0.22911, "acc": 92.177, "loss_bbox": 0.255, "loss_mask": 0.24831, 
"loss": 0.82158, "time": 0.22045} -{"mode": "train", "epoch": 8, "iter": 2450, "lr": 0.0002, "memory": 5941, "data_time": 0.03439, "loss_rpn_cls": 0.02782, "loss_rpn_bbox": 0.05229, "loss_cls": 0.22643, "acc": 92.49609, "loss_bbox": 0.24057, "loss_mask": 0.24223, "loss": 0.78933, "time": 0.20462} -{"mode": "train", "epoch": 8, "iter": 2500, "lr": 0.0002, "memory": 5941, "data_time": 0.04725, "loss_rpn_cls": 0.03001, "loss_rpn_bbox": 0.05516, "loss_cls": 0.22283, "acc": 92.46729, "loss_bbox": 0.24708, "loss_mask": 0.24428, "loss": 0.79937, "time": 0.21392} -{"mode": "train", "epoch": 8, "iter": 2550, "lr": 0.0002, "memory": 5941, "data_time": 0.03853, "loss_rpn_cls": 0.02925, "loss_rpn_bbox": 0.05103, "loss_cls": 0.2199, "acc": 92.53052, "loss_bbox": 0.24489, "loss_mask": 0.2377, "loss": 0.78277, "time": 0.2167} -{"mode": "train", "epoch": 8, "iter": 2600, "lr": 0.0002, "memory": 5941, "data_time": 0.04283, "loss_rpn_cls": 0.02964, "loss_rpn_bbox": 0.05233, "loss_cls": 0.22859, "acc": 92.34814, "loss_bbox": 0.25074, "loss_mask": 0.24543, "loss": 0.80672, "time": 0.22244} -{"mode": "train", "epoch": 8, "iter": 2650, "lr": 0.0002, "memory": 5941, "data_time": 0.03757, "loss_rpn_cls": 0.03182, "loss_rpn_bbox": 0.05353, "loss_cls": 0.22092, "acc": 92.59546, "loss_bbox": 0.24546, "loss_mask": 0.24897, "loss": 0.8007, "time": 0.21896} -{"mode": "train", "epoch": 8, "iter": 2700, "lr": 0.0002, "memory": 5941, "data_time": 0.04251, "loss_rpn_cls": 0.02949, "loss_rpn_bbox": 0.0536, "loss_cls": 0.22042, "acc": 92.50049, "loss_bbox": 0.25457, "loss_mask": 0.25116, "loss": 0.80924, "time": 0.21205} -{"mode": "train", "epoch": 8, "iter": 2750, "lr": 0.0002, "memory": 5941, "data_time": 0.03615, "loss_rpn_cls": 0.02961, "loss_rpn_bbox": 0.05517, "loss_cls": 0.21973, "acc": 92.43335, "loss_bbox": 0.24996, "loss_mask": 0.24906, "loss": 0.80352, "time": 0.20768} -{"mode": "train", "epoch": 8, "iter": 2800, "lr": 0.0002, "memory": 5941, "data_time": 0.04197, "loss_rpn_cls": 0.03047, "loss_rpn_bbox": 0.05353, "loss_cls": 0.23823, "acc": 91.74146, "loss_bbox": 0.2627, "loss_mask": 0.24834, "loss": 0.83326, "time": 0.20764} -{"mode": "train", "epoch": 8, "iter": 2850, "lr": 0.0002, "memory": 5941, "data_time": 0.04809, "loss_rpn_cls": 0.03273, "loss_rpn_bbox": 0.05256, "loss_cls": 0.21844, "acc": 92.66772, "loss_bbox": 0.24407, "loss_mask": 0.24422, "loss": 0.79202, "time": 0.21046} -{"mode": "train", "epoch": 8, "iter": 2900, "lr": 0.0002, "memory": 5941, "data_time": 0.04004, "loss_rpn_cls": 0.03028, "loss_rpn_bbox": 0.05345, "loss_cls": 0.22515, "acc": 92.37646, "loss_bbox": 0.25101, "loss_mask": 0.24631, "loss": 0.80621, "time": 0.20368} -{"mode": "train", "epoch": 8, "iter": 2950, "lr": 0.0002, "memory": 5941, "data_time": 0.0352, "loss_rpn_cls": 0.02858, "loss_rpn_bbox": 0.0516, "loss_cls": 0.22136, "acc": 92.48853, "loss_bbox": 0.24558, "loss_mask": 0.24452, "loss": 0.79164, "time": 0.20236} -{"mode": "train", "epoch": 8, "iter": 3000, "lr": 0.0002, "memory": 5941, "data_time": 0.04518, "loss_rpn_cls": 0.03164, "loss_rpn_bbox": 0.05207, "loss_cls": 0.22521, "acc": 92.35547, "loss_bbox": 0.25368, "loss_mask": 0.25088, "loss": 0.81348, "time": 0.20878} -{"mode": "train", "epoch": 8, "iter": 3050, "lr": 0.0002, "memory": 5941, "data_time": 0.04039, "loss_rpn_cls": 0.03128, "loss_rpn_bbox": 0.05257, "loss_cls": 0.22222, "acc": 92.46484, "loss_bbox": 0.24576, "loss_mask": 0.24992, "loss": 0.80175, "time": 0.21102} -{"mode": "train", "epoch": 8, "iter": 3100, "lr": 0.0002, "memory": 5941, "data_time": 0.04033, 
"loss_rpn_cls": 0.03092, "loss_rpn_bbox": 0.0575, "loss_cls": 0.22231, "acc": 92.41235, "loss_bbox": 0.24946, "loss_mask": 0.24969, "loss": 0.80988, "time": 0.20673} -{"mode": "train", "epoch": 8, "iter": 3150, "lr": 0.0002, "memory": 5941, "data_time": 0.03677, "loss_rpn_cls": 0.02668, "loss_rpn_bbox": 0.05023, "loss_cls": 0.22001, "acc": 92.57178, "loss_bbox": 0.24292, "loss_mask": 0.24189, "loss": 0.78173, "time": 0.2034} -{"mode": "train", "epoch": 8, "iter": 3200, "lr": 0.0002, "memory": 5941, "data_time": 0.04586, "loss_rpn_cls": 0.0329, "loss_rpn_bbox": 0.05695, "loss_cls": 0.23042, "acc": 92.22729, "loss_bbox": 0.25779, "loss_mask": 0.2504, "loss": 0.82846, "time": 0.21502} -{"mode": "train", "epoch": 8, "iter": 3250, "lr": 0.0002, "memory": 5941, "data_time": 0.03024, "loss_rpn_cls": 0.03115, "loss_rpn_bbox": 0.05159, "loss_cls": 0.22567, "acc": 92.50977, "loss_bbox": 0.24233, "loss_mask": 0.24314, "loss": 0.79388, "time": 0.21137} -{"mode": "train", "epoch": 8, "iter": 3300, "lr": 0.0002, "memory": 5941, "data_time": 0.03345, "loss_rpn_cls": 0.03296, "loss_rpn_bbox": 0.05281, "loss_cls": 0.22469, "acc": 92.48535, "loss_bbox": 0.24435, "loss_mask": 0.24614, "loss": 0.80094, "time": 0.20801} -{"mode": "train", "epoch": 8, "iter": 3350, "lr": 0.0002, "memory": 5941, "data_time": 0.0379, "loss_rpn_cls": 0.03149, "loss_rpn_bbox": 0.05003, "loss_cls": 0.21934, "acc": 92.48828, "loss_bbox": 0.24344, "loss_mask": 0.24113, "loss": 0.78543, "time": 0.2076} -{"mode": "train", "epoch": 8, "iter": 3400, "lr": 0.0002, "memory": 5941, "data_time": 0.04072, "loss_rpn_cls": 0.02973, "loss_rpn_bbox": 0.05304, "loss_cls": 0.22604, "acc": 92.52417, "loss_bbox": 0.24601, "loss_mask": 0.24689, "loss": 0.8017, "time": 0.20841} -{"mode": "train", "epoch": 8, "iter": 3450, "lr": 0.0002, "memory": 5941, "data_time": 0.04184, "loss_rpn_cls": 0.03072, "loss_rpn_bbox": 0.05547, "loss_cls": 0.22677, "acc": 92.29565, "loss_bbox": 0.25252, "loss_mask": 0.24559, "loss": 0.81108, "time": 0.21327} -{"mode": "train", "epoch": 8, "iter": 3500, "lr": 0.0002, "memory": 5941, "data_time": 0.04404, "loss_rpn_cls": 0.03112, "loss_rpn_bbox": 0.05269, "loss_cls": 0.22755, "acc": 92.4082, "loss_bbox": 0.2492, "loss_mask": 0.24687, "loss": 0.80743, "time": 0.20914} -{"mode": "train", "epoch": 8, "iter": 3550, "lr": 0.0002, "memory": 5941, "data_time": 0.0487, "loss_rpn_cls": 0.03106, "loss_rpn_bbox": 0.05365, "loss_cls": 0.2388, "acc": 91.99487, "loss_bbox": 0.26032, "loss_mask": 0.25441, "loss": 0.83825, "time": 0.20923} -{"mode": "train", "epoch": 8, "iter": 3600, "lr": 0.0002, "memory": 5941, "data_time": 0.0378, "loss_rpn_cls": 0.0307, "loss_rpn_bbox": 0.05243, "loss_cls": 0.22502, "acc": 92.33301, "loss_bbox": 0.25425, "loss_mask": 0.24583, "loss": 0.80823, "time": 0.21665} -{"mode": "train", "epoch": 8, "iter": 3650, "lr": 0.0002, "memory": 5941, "data_time": 0.04138, "loss_rpn_cls": 0.03119, "loss_rpn_bbox": 0.05244, "loss_cls": 0.2339, "acc": 92.14746, "loss_bbox": 0.25429, "loss_mask": 0.24519, "loss": 0.81701, "time": 0.21318} -{"mode": "train", "epoch": 8, "iter": 3700, "lr": 0.0002, "memory": 5941, "data_time": 0.03875, "loss_rpn_cls": 0.03045, "loss_rpn_bbox": 0.0527, "loss_cls": 0.2298, "acc": 92.20874, "loss_bbox": 0.25098, "loss_mask": 0.24657, "loss": 0.8105, "time": 0.20417} -{"mode": "train", "epoch": 8, "iter": 3750, "lr": 0.0002, "memory": 5941, "data_time": 0.03909, "loss_rpn_cls": 0.03131, "loss_rpn_bbox": 0.05241, "loss_cls": 0.2309, "acc": 92.28223, "loss_bbox": 0.25211, "loss_mask": 0.25379, "loss": 
0.82052, "time": 0.2142} -{"mode": "train", "epoch": 8, "iter": 3800, "lr": 0.0002, "memory": 5941, "data_time": 0.04104, "loss_rpn_cls": 0.02971, "loss_rpn_bbox": 0.05198, "loss_cls": 0.22311, "acc": 92.38794, "loss_bbox": 0.24629, "loss_mask": 0.24454, "loss": 0.79562, "time": 0.22314} -{"mode": "train", "epoch": 8, "iter": 3850, "lr": 0.0002, "memory": 5941, "data_time": 0.04444, "loss_rpn_cls": 0.03223, "loss_rpn_bbox": 0.05234, "loss_cls": 0.23355, "acc": 92.38989, "loss_bbox": 0.24796, "loss_mask": 0.2456, "loss": 0.81168, "time": 0.21302} -{"mode": "train", "epoch": 8, "iter": 3900, "lr": 0.0002, "memory": 5941, "data_time": 0.04144, "loss_rpn_cls": 0.03236, "loss_rpn_bbox": 0.05058, "loss_cls": 0.22153, "acc": 92.67529, "loss_bbox": 0.24391, "loss_mask": 0.245, "loss": 0.79338, "time": 0.20729} -{"mode": "train", "epoch": 8, "iter": 3950, "lr": 0.0002, "memory": 5941, "data_time": 0.04031, "loss_rpn_cls": 0.03064, "loss_rpn_bbox": 0.05183, "loss_cls": 0.22314, "acc": 92.46899, "loss_bbox": 0.24558, "loss_mask": 0.24294, "loss": 0.79413, "time": 0.20737} -{"mode": "train", "epoch": 8, "iter": 4000, "lr": 0.0002, "memory": 5941, "data_time": 0.03782, "loss_rpn_cls": 0.02917, "loss_rpn_bbox": 0.05256, "loss_cls": 0.2252, "acc": 92.33081, "loss_bbox": 0.25698, "loss_mask": 0.24757, "loss": 0.81148, "time": 0.20829} -{"mode": "train", "epoch": 8, "iter": 4050, "lr": 0.0002, "memory": 5941, "data_time": 0.04175, "loss_rpn_cls": 0.02889, "loss_rpn_bbox": 0.05054, "loss_cls": 0.22577, "acc": 92.45068, "loss_bbox": 0.25023, "loss_mask": 0.24223, "loss": 0.79766, "time": 0.20965} -{"mode": "train", "epoch": 8, "iter": 4100, "lr": 0.0002, "memory": 5941, "data_time": 0.03832, "loss_rpn_cls": 0.03121, "loss_rpn_bbox": 0.05215, "loss_cls": 0.22845, "acc": 92.3811, "loss_bbox": 0.24842, "loss_mask": 0.25089, "loss": 0.81112, "time": 0.2068} -{"mode": "train", "epoch": 8, "iter": 4150, "lr": 0.0002, "memory": 5941, "data_time": 0.03576, "loss_rpn_cls": 0.03037, "loss_rpn_bbox": 0.05137, "loss_cls": 0.2238, "acc": 92.31104, "loss_bbox": 0.25218, "loss_mask": 0.24817, "loss": 0.8059, "time": 0.20291} -{"mode": "train", "epoch": 8, "iter": 4200, "lr": 0.0002, "memory": 5941, "data_time": 0.04414, "loss_rpn_cls": 0.03141, "loss_rpn_bbox": 0.05231, "loss_cls": 0.21868, "acc": 92.64941, "loss_bbox": 0.23891, "loss_mask": 0.24153, "loss": 0.78284, "time": 0.21136} -{"mode": "train", "epoch": 8, "iter": 4250, "lr": 0.0002, "memory": 5941, "data_time": 0.0381, "loss_rpn_cls": 0.02983, "loss_rpn_bbox": 0.05499, "loss_cls": 0.22178, "acc": 92.35596, "loss_bbox": 0.25047, "loss_mask": 0.24284, "loss": 0.79992, "time": 0.20187} -{"mode": "train", "epoch": 8, "iter": 4300, "lr": 0.0002, "memory": 5941, "data_time": 0.03909, "loss_rpn_cls": 0.03335, "loss_rpn_bbox": 0.05555, "loss_cls": 0.22495, "acc": 92.40234, "loss_bbox": 0.24615, "loss_mask": 0.24707, "loss": 0.80708, "time": 0.21364} -{"mode": "train", "epoch": 8, "iter": 4350, "lr": 0.0002, "memory": 5941, "data_time": 0.03425, "loss_rpn_cls": 0.03011, "loss_rpn_bbox": 0.05243, "loss_cls": 0.22746, "acc": 92.31714, "loss_bbox": 0.24907, "loss_mask": 0.24696, "loss": 0.80602, "time": 0.20741} -{"mode": "train", "epoch": 8, "iter": 4400, "lr": 0.0002, "memory": 5941, "data_time": 0.03592, "loss_rpn_cls": 0.02927, "loss_rpn_bbox": 0.0524, "loss_cls": 0.22468, "acc": 92.3501, "loss_bbox": 0.24927, "loss_mask": 0.24766, "loss": 0.80328, "time": 0.20235} -{"mode": "train", "epoch": 8, "iter": 4450, "lr": 0.0002, "memory": 5941, "data_time": 0.0378, 
"loss_rpn_cls": 0.03154, "loss_rpn_bbox": 0.05496, "loss_cls": 0.23084, "acc": 92.31787, "loss_bbox": 0.25384, "loss_mask": 0.24738, "loss": 0.81857, "time": 0.20589} -{"mode": "train", "epoch": 8, "iter": 4500, "lr": 0.0002, "memory": 5941, "data_time": 0.04107, "loss_rpn_cls": 0.03266, "loss_rpn_bbox": 0.05351, "loss_cls": 0.2301, "acc": 92.17676, "loss_bbox": 0.25409, "loss_mask": 0.24413, "loss": 0.81448, "time": 0.21363} -{"mode": "train", "epoch": 8, "iter": 4550, "lr": 0.0002, "memory": 5941, "data_time": 0.03653, "loss_rpn_cls": 0.02917, "loss_rpn_bbox": 0.05353, "loss_cls": 0.22712, "acc": 92.43726, "loss_bbox": 0.24591, "loss_mask": 0.24524, "loss": 0.80097, "time": 0.19893} -{"mode": "train", "epoch": 8, "iter": 4600, "lr": 0.0002, "memory": 5941, "data_time": 0.05015, "loss_rpn_cls": 0.03271, "loss_rpn_bbox": 0.0528, "loss_cls": 0.22658, "acc": 92.27808, "loss_bbox": 0.2543, "loss_mask": 0.24989, "loss": 0.81628, "time": 0.20968} -{"mode": "train", "epoch": 8, "iter": 4650, "lr": 0.0002, "memory": 5941, "data_time": 0.03805, "loss_rpn_cls": 0.03087, "loss_rpn_bbox": 0.0505, "loss_cls": 0.22779, "acc": 92.31714, "loss_bbox": 0.25428, "loss_mask": 0.24495, "loss": 0.8084, "time": 0.20502} -{"mode": "train", "epoch": 8, "iter": 4700, "lr": 0.0002, "memory": 5941, "data_time": 0.04022, "loss_rpn_cls": 0.03179, "loss_rpn_bbox": 0.05271, "loss_cls": 0.23431, "acc": 92.1499, "loss_bbox": 0.25273, "loss_mask": 0.24131, "loss": 0.81285, "time": 0.20799} -{"mode": "train", "epoch": 8, "iter": 4750, "lr": 0.0002, "memory": 5941, "data_time": 0.03821, "loss_rpn_cls": 0.0293, "loss_rpn_bbox": 0.05307, "loss_cls": 0.22079, "acc": 92.51123, "loss_bbox": 0.24296, "loss_mask": 0.24812, "loss": 0.79423, "time": 0.20499} -{"mode": "train", "epoch": 8, "iter": 4800, "lr": 0.0002, "memory": 5941, "data_time": 0.03417, "loss_rpn_cls": 0.03296, "loss_rpn_bbox": 0.05398, "loss_cls": 0.23509, "acc": 92.06128, "loss_bbox": 0.25393, "loss_mask": 0.24943, "loss": 0.82539, "time": 0.21801} -{"mode": "train", "epoch": 8, "iter": 4850, "lr": 0.0002, "memory": 5941, "data_time": 0.03634, "loss_rpn_cls": 0.03278, "loss_rpn_bbox": 0.05252, "loss_cls": 0.23481, "acc": 91.98486, "loss_bbox": 0.26187, "loss_mask": 0.25177, "loss": 0.83375, "time": 0.20734} -{"mode": "train", "epoch": 8, "iter": 4900, "lr": 0.0002, "memory": 5941, "data_time": 0.03643, "loss_rpn_cls": 0.0292, "loss_rpn_bbox": 0.05193, "loss_cls": 0.22308, "acc": 92.4209, "loss_bbox": 0.24882, "loss_mask": 0.24484, "loss": 0.79786, "time": 0.20174} -{"mode": "train", "epoch": 8, "iter": 4950, "lr": 0.0002, "memory": 5941, "data_time": 0.04227, "loss_rpn_cls": 0.03191, "loss_rpn_bbox": 0.05294, "loss_cls": 0.22644, "acc": 92.30591, "loss_bbox": 0.25065, "loss_mask": 0.24567, "loss": 0.80762, "time": 0.20908} -{"mode": "train", "epoch": 8, "iter": 5000, "lr": 0.0002, "memory": 5941, "data_time": 0.03394, "loss_rpn_cls": 0.02996, "loss_rpn_bbox": 0.05229, "loss_cls": 0.23053, "acc": 92.22803, "loss_bbox": 0.25372, "loss_mask": 0.24768, "loss": 0.81418, "time": 0.21195} -{"mode": "train", "epoch": 8, "iter": 5050, "lr": 0.0002, "memory": 5941, "data_time": 0.03148, "loss_rpn_cls": 0.028, "loss_rpn_bbox": 0.04785, "loss_cls": 0.21854, "acc": 92.62988, "loss_bbox": 0.24274, "loss_mask": 0.24301, "loss": 0.78013, "time": 0.20431} -{"mode": "train", "epoch": 8, "iter": 5100, "lr": 0.0002, "memory": 5941, "data_time": 0.03793, "loss_rpn_cls": 0.03102, "loss_rpn_bbox": 0.05421, "loss_cls": 0.23892, "acc": 92.00464, "loss_bbox": 0.26121, "loss_mask": 0.25309, 
"loss": 0.83845, "time": 0.21087} -{"mode": "train", "epoch": 8, "iter": 5150, "lr": 0.0002, "memory": 5941, "data_time": 0.04398, "loss_rpn_cls": 0.03006, "loss_rpn_bbox": 0.05356, "loss_cls": 0.22074, "acc": 92.51709, "loss_bbox": 0.24804, "loss_mask": 0.24367, "loss": 0.79607, "time": 0.21558} -{"mode": "train", "epoch": 8, "iter": 5200, "lr": 0.0002, "memory": 5941, "data_time": 0.03194, "loss_rpn_cls": 0.02956, "loss_rpn_bbox": 0.04866, "loss_cls": 0.22327, "acc": 92.44922, "loss_bbox": 0.24449, "loss_mask": 0.24036, "loss": 0.78634, "time": 0.2092} -{"mode": "train", "epoch": 8, "iter": 5250, "lr": 0.0002, "memory": 5941, "data_time": 0.03116, "loss_rpn_cls": 0.03349, "loss_rpn_bbox": 0.0572, "loss_cls": 0.2302, "acc": 92.1543, "loss_bbox": 0.25867, "loss_mask": 0.25441, "loss": 0.83396, "time": 0.21299} -{"mode": "train", "epoch": 8, "iter": 5300, "lr": 0.0002, "memory": 5941, "data_time": 0.03665, "loss_rpn_cls": 0.03197, "loss_rpn_bbox": 0.0502, "loss_cls": 0.23136, "acc": 92.1814, "loss_bbox": 0.2506, "loss_mask": 0.24286, "loss": 0.80699, "time": 0.21744} -{"mode": "train", "epoch": 8, "iter": 5350, "lr": 0.0002, "memory": 5941, "data_time": 0.03687, "loss_rpn_cls": 0.02915, "loss_rpn_bbox": 0.04936, "loss_cls": 0.22073, "acc": 92.60596, "loss_bbox": 0.24764, "loss_mask": 0.24081, "loss": 0.78769, "time": 0.20103} -{"mode": "train", "epoch": 8, "iter": 5400, "lr": 0.0002, "memory": 5941, "data_time": 0.0362, "loss_rpn_cls": 0.0342, "loss_rpn_bbox": 0.05583, "loss_cls": 0.23924, "acc": 92.00415, "loss_bbox": 0.25847, "loss_mask": 0.2524, "loss": 0.84014, "time": 0.21498} -{"mode": "train", "epoch": 8, "iter": 5450, "lr": 0.0002, "memory": 5941, "data_time": 0.03674, "loss_rpn_cls": 0.03062, "loss_rpn_bbox": 0.05231, "loss_cls": 0.23621, "acc": 92.0874, "loss_bbox": 0.25443, "loss_mask": 0.24947, "loss": 0.82304, "time": 0.28223} -{"mode": "train", "epoch": 8, "iter": 5500, "lr": 0.0002, "memory": 5941, "data_time": 0.0388, "loss_rpn_cls": 0.02992, "loss_rpn_bbox": 0.05495, "loss_cls": 0.23473, "acc": 92.09961, "loss_bbox": 0.25371, "loss_mask": 0.2537, "loss": 0.827, "time": 0.21654} -{"mode": "train", "epoch": 8, "iter": 5550, "lr": 0.0002, "memory": 5941, "data_time": 0.03656, "loss_rpn_cls": 0.03181, "loss_rpn_bbox": 0.05406, "loss_cls": 0.22715, "acc": 92.39917, "loss_bbox": 0.25, "loss_mask": 0.24846, "loss": 0.81147, "time": 0.21386} -{"mode": "train", "epoch": 8, "iter": 5600, "lr": 0.0002, "memory": 5941, "data_time": 0.03823, "loss_rpn_cls": 0.03214, "loss_rpn_bbox": 0.05416, "loss_cls": 0.2262, "acc": 92.32642, "loss_bbox": 0.25351, "loss_mask": 0.24712, "loss": 0.81314, "time": 0.21162} -{"mode": "train", "epoch": 8, "iter": 5650, "lr": 0.0002, "memory": 5941, "data_time": 0.03535, "loss_rpn_cls": 0.03036, "loss_rpn_bbox": 0.05281, "loss_cls": 0.22998, "acc": 92.24878, "loss_bbox": 0.26185, "loss_mask": 0.2483, "loss": 0.82331, "time": 0.23831} -{"mode": "train", "epoch": 8, "iter": 5700, "lr": 0.0002, "memory": 5941, "data_time": 0.03517, "loss_rpn_cls": 0.03337, "loss_rpn_bbox": 0.0509, "loss_cls": 0.23207, "acc": 92.19751, "loss_bbox": 0.25409, "loss_mask": 0.2468, "loss": 0.81723, "time": 0.2398} -{"mode": "train", "epoch": 8, "iter": 5750, "lr": 0.0002, "memory": 5941, "data_time": 0.03307, "loss_rpn_cls": 0.03194, "loss_rpn_bbox": 0.0522, "loss_cls": 0.2288, "acc": 92.35278, "loss_bbox": 0.2469, "loss_mask": 0.24994, "loss": 0.80979, "time": 0.23634} -{"mode": "train", "epoch": 8, "iter": 5800, "lr": 0.0002, "memory": 5941, "data_time": 0.0323, "loss_rpn_cls": 
0.02762, "loss_rpn_bbox": 0.0524, "loss_cls": 0.22053, "acc": 92.60815, "loss_bbox": 0.24369, "loss_mask": 0.24593, "loss": 0.79018, "time": 0.1985} -{"mode": "train", "epoch": 8, "iter": 5850, "lr": 0.0002, "memory": 5941, "data_time": 0.03813, "loss_rpn_cls": 0.03216, "loss_rpn_bbox": 0.05221, "loss_cls": 0.22785, "acc": 92.3042, "loss_bbox": 0.25143, "loss_mask": 0.25022, "loss": 0.81387, "time": 0.20388} -{"mode": "train", "epoch": 8, "iter": 5900, "lr": 0.0002, "memory": 5941, "data_time": 0.03654, "loss_rpn_cls": 0.03024, "loss_rpn_bbox": 0.05651, "loss_cls": 0.23648, "acc": 92.01587, "loss_bbox": 0.26232, "loss_mask": 0.24984, "loss": 0.83538, "time": 0.20524} -{"mode": "train", "epoch": 8, "iter": 5950, "lr": 0.0002, "memory": 5941, "data_time": 0.04045, "loss_rpn_cls": 0.02858, "loss_rpn_bbox": 0.05016, "loss_cls": 0.22569, "acc": 92.50732, "loss_bbox": 0.2487, "loss_mask": 0.24583, "loss": 0.79896, "time": 0.19632} -{"mode": "train", "epoch": 8, "iter": 6000, "lr": 0.0002, "memory": 5941, "data_time": 0.04155, "loss_rpn_cls": 0.03028, "loss_rpn_bbox": 0.05193, "loss_cls": 0.22384, "acc": 92.3208, "loss_bbox": 0.24855, "loss_mask": 0.24309, "loss": 0.79769, "time": 0.20287} -{"mode": "train", "epoch": 8, "iter": 6050, "lr": 0.0002, "memory": 5941, "data_time": 0.0392, "loss_rpn_cls": 0.03254, "loss_rpn_bbox": 0.05575, "loss_cls": 0.22756, "acc": 92.36719, "loss_bbox": 0.25088, "loss_mask": 0.24285, "loss": 0.80959, "time": 0.21001} -{"mode": "train", "epoch": 8, "iter": 6100, "lr": 0.0002, "memory": 5941, "data_time": 0.04207, "loss_rpn_cls": 0.03289, "loss_rpn_bbox": 0.05433, "loss_cls": 0.23112, "acc": 92.14136, "loss_bbox": 0.26116, "loss_mask": 0.24975, "loss": 0.82926, "time": 0.212} -{"mode": "train", "epoch": 8, "iter": 6150, "lr": 0.0002, "memory": 5941, "data_time": 0.04567, "loss_rpn_cls": 0.03174, "loss_rpn_bbox": 0.05484, "loss_cls": 0.22342, "acc": 92.36304, "loss_bbox": 0.24647, "loss_mask": 0.24888, "loss": 0.80536, "time": 0.20681} -{"mode": "train", "epoch": 8, "iter": 6200, "lr": 0.0002, "memory": 5941, "data_time": 0.03697, "loss_rpn_cls": 0.0313, "loss_rpn_bbox": 0.05409, "loss_cls": 0.23138, "acc": 92.11548, "loss_bbox": 0.25667, "loss_mask": 0.24398, "loss": 0.81742, "time": 0.21261} -{"mode": "train", "epoch": 8, "iter": 6250, "lr": 0.0002, "memory": 5941, "data_time": 0.03983, "loss_rpn_cls": 0.03374, "loss_rpn_bbox": 0.05848, "loss_cls": 0.24199, "acc": 91.86523, "loss_bbox": 0.26581, "loss_mask": 0.25153, "loss": 0.85155, "time": 0.2134} -{"mode": "train", "epoch": 8, "iter": 6300, "lr": 0.0002, "memory": 5941, "data_time": 0.0427, "loss_rpn_cls": 0.03019, "loss_rpn_bbox": 0.0524, "loss_cls": 0.21802, "acc": 92.69556, "loss_bbox": 0.24083, "loss_mask": 0.24591, "loss": 0.78735, "time": 0.20102} -{"mode": "train", "epoch": 8, "iter": 6350, "lr": 0.0002, "memory": 5941, "data_time": 0.04113, "loss_rpn_cls": 0.03083, "loss_rpn_bbox": 0.05393, "loss_cls": 0.22566, "acc": 92.37036, "loss_bbox": 0.25353, "loss_mask": 0.24565, "loss": 0.8096, "time": 0.21252} -{"mode": "train", "epoch": 8, "iter": 6400, "lr": 0.0002, "memory": 5941, "data_time": 0.03699, "loss_rpn_cls": 0.03081, "loss_rpn_bbox": 0.05202, "loss_cls": 0.22392, "acc": 92.44653, "loss_bbox": 0.24581, "loss_mask": 0.24584, "loss": 0.79842, "time": 0.20233} -{"mode": "train", "epoch": 8, "iter": 6450, "lr": 0.0002, "memory": 5941, "data_time": 0.04047, "loss_rpn_cls": 0.03186, "loss_rpn_bbox": 0.05678, "loss_cls": 0.2314, "acc": 92.15527, "loss_bbox": 0.2551, "loss_mask": 0.25268, "loss": 0.82781, 
"time": 0.21006} -{"mode": "train", "epoch": 8, "iter": 6500, "lr": 0.0002, "memory": 5941, "data_time": 0.03418, "loss_rpn_cls": 0.03315, "loss_rpn_bbox": 0.05231, "loss_cls": 0.22452, "acc": 92.31421, "loss_bbox": 0.24928, "loss_mask": 0.24549, "loss": 0.80475, "time": 0.21873} -{"mode": "train", "epoch": 8, "iter": 6550, "lr": 0.0002, "memory": 5941, "data_time": 0.03737, "loss_rpn_cls": 0.03288, "loss_rpn_bbox": 0.05685, "loss_cls": 0.23812, "acc": 91.96313, "loss_bbox": 0.25426, "loss_mask": 0.25162, "loss": 0.83373, "time": 0.26429} -{"mode": "train", "epoch": 8, "iter": 6600, "lr": 0.0002, "memory": 5941, "data_time": 0.0393, "loss_rpn_cls": 0.03012, "loss_rpn_bbox": 0.05337, "loss_cls": 0.23572, "acc": 92.16553, "loss_bbox": 0.25272, "loss_mask": 0.24704, "loss": 0.81896, "time": 0.21364} -{"mode": "train", "epoch": 8, "iter": 6650, "lr": 0.0002, "memory": 5941, "data_time": 0.03549, "loss_rpn_cls": 0.03022, "loss_rpn_bbox": 0.05392, "loss_cls": 0.23348, "acc": 91.95972, "loss_bbox": 0.26194, "loss_mask": 0.24911, "loss": 0.82868, "time": 0.21182} -{"mode": "train", "epoch": 8, "iter": 6700, "lr": 0.0002, "memory": 5941, "data_time": 0.04198, "loss_rpn_cls": 0.03201, "loss_rpn_bbox": 0.05132, "loss_cls": 0.22567, "acc": 92.30542, "loss_bbox": 0.25076, "loss_mask": 0.24391, "loss": 0.80366, "time": 0.20421} -{"mode": "train", "epoch": 8, "iter": 6750, "lr": 0.0002, "memory": 5941, "data_time": 0.03493, "loss_rpn_cls": 0.03138, "loss_rpn_bbox": 0.05196, "loss_cls": 0.2235, "acc": 92.44531, "loss_bbox": 0.24975, "loss_mask": 0.24806, "loss": 0.80465, "time": 0.24207} -{"mode": "train", "epoch": 8, "iter": 6800, "lr": 0.0002, "memory": 5941, "data_time": 0.03561, "loss_rpn_cls": 0.03021, "loss_rpn_bbox": 0.05397, "loss_cls": 0.22474, "acc": 92.36987, "loss_bbox": 0.25372, "loss_mask": 0.24915, "loss": 0.8118, "time": 0.20555} -{"mode": "train", "epoch": 8, "iter": 6850, "lr": 0.0002, "memory": 5941, "data_time": 0.04012, "loss_rpn_cls": 0.03033, "loss_rpn_bbox": 0.05393, "loss_cls": 0.22477, "acc": 92.44263, "loss_bbox": 0.2456, "loss_mask": 0.24708, "loss": 0.80171, "time": 0.20551} -{"mode": "train", "epoch": 8, "iter": 6900, "lr": 0.0002, "memory": 5941, "data_time": 0.03559, "loss_rpn_cls": 0.03135, "loss_rpn_bbox": 0.05567, "loss_cls": 0.23378, "acc": 92.27734, "loss_bbox": 0.25204, "loss_mask": 0.24784, "loss": 0.82067, "time": 0.21898} -{"mode": "train", "epoch": 8, "iter": 6950, "lr": 0.0002, "memory": 5941, "data_time": 0.04177, "loss_rpn_cls": 0.03174, "loss_rpn_bbox": 0.055, "loss_cls": 0.23443, "acc": 92.21436, "loss_bbox": 0.25336, "loss_mask": 0.2474, "loss": 0.82194, "time": 0.2183} -{"mode": "train", "epoch": 8, "iter": 7000, "lr": 0.0002, "memory": 5941, "data_time": 0.0446, "loss_rpn_cls": 0.03019, "loss_rpn_bbox": 0.05059, "loss_cls": 0.23219, "acc": 92.1731, "loss_bbox": 0.25429, "loss_mask": 0.25275, "loss": 0.82, "time": 0.20362} -{"mode": "train", "epoch": 8, "iter": 7050, "lr": 0.0002, "memory": 5941, "data_time": 0.03065, "loss_rpn_cls": 0.03099, "loss_rpn_bbox": 0.05512, "loss_cls": 0.22242, "acc": 92.53979, "loss_bbox": 0.24679, "loss_mask": 0.24947, "loss": 0.80478, "time": 0.2105} -{"mode": "train", "epoch": 8, "iter": 7100, "lr": 0.0002, "memory": 5941, "data_time": 0.03843, "loss_rpn_cls": 0.03357, "loss_rpn_bbox": 0.05775, "loss_cls": 0.22804, "acc": 92.33691, "loss_bbox": 0.24629, "loss_mask": 0.24707, "loss": 0.81272, "time": 0.2144} -{"mode": "train", "epoch": 8, "iter": 7150, "lr": 0.0002, "memory": 5941, "data_time": 0.03562, "loss_rpn_cls": 
0.03174, "loss_rpn_bbox": 0.05271, "loss_cls": 0.23101, "acc": 92.29224, "loss_bbox": 0.24827, "loss_mask": 0.24058, "loss": 0.80431, "time": 0.20166} -{"mode": "train", "epoch": 8, "iter": 7200, "lr": 0.0002, "memory": 5941, "data_time": 0.04414, "loss_rpn_cls": 0.03362, "loss_rpn_bbox": 0.05509, "loss_cls": 0.22957, "acc": 92.46069, "loss_bbox": 0.24476, "loss_mask": 0.24631, "loss": 0.80935, "time": 0.21116} -{"mode": "train", "epoch": 8, "iter": 7250, "lr": 0.0002, "memory": 5941, "data_time": 0.03585, "loss_rpn_cls": 0.03106, "loss_rpn_bbox": 0.05371, "loss_cls": 0.2254, "acc": 92.39893, "loss_bbox": 0.24856, "loss_mask": 0.24912, "loss": 0.80786, "time": 0.20925} -{"mode": "train", "epoch": 8, "iter": 7300, "lr": 0.0002, "memory": 5941, "data_time": 0.04334, "loss_rpn_cls": 0.03406, "loss_rpn_bbox": 0.05564, "loss_cls": 0.23235, "acc": 92.15454, "loss_bbox": 0.26003, "loss_mask": 0.25299, "loss": 0.83508, "time": 0.21129} -{"mode": "val", "epoch": 8, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3353, "bbox_mAP_50": 0.5541, "bbox_mAP_75": 0.3597, "bbox_mAP_s": 0.2028, "bbox_mAP_m": 0.3679, "bbox_mAP_l": 0.4288, "bbox_mAP_copypaste": "0.3353 0.5541 0.3597 0.2028 0.3679 0.4288", "segm_mAP": 0.3247, "segm_mAP_50": 0.5297, "segm_mAP_75": 0.3459, "segm_mAP_s": 0.1571, "segm_mAP_m": 0.3507, "segm_mAP_l": 0.4754, "segm_mAP_copypaste": "0.3247 0.5297 0.3459 0.1571 0.3507 0.4754"} -{"mode": "train", "epoch": 9, "iter": 50, "lr": 2e-05, "memory": 5941, "data_time": 0.10815, "loss_rpn_cls": 0.02712, "loss_rpn_bbox": 0.05324, "loss_cls": 0.21815, "acc": 92.42456, "loss_bbox": 0.24103, "loss_mask": 0.24108, "loss": 0.78062, "time": 0.28195} -{"mode": "train", "epoch": 9, "iter": 100, "lr": 2e-05, "memory": 5941, "data_time": 0.0359, "loss_rpn_cls": 0.02957, "loss_rpn_bbox": 0.05424, "loss_cls": 0.21489, "acc": 92.4458, "loss_bbox": 0.24918, "loss_mask": 0.23604, "loss": 0.78391, "time": 0.23239} -{"mode": "train", "epoch": 9, "iter": 150, "lr": 2e-05, "memory": 5941, "data_time": 0.04386, "loss_rpn_cls": 0.0279, "loss_rpn_bbox": 0.05202, "loss_cls": 0.21257, "acc": 92.51611, "loss_bbox": 0.24573, "loss_mask": 0.23686, "loss": 0.77508, "time": 0.22354} -{"mode": "train", "epoch": 9, "iter": 200, "lr": 2e-05, "memory": 5941, "data_time": 0.04638, "loss_rpn_cls": 0.02633, "loss_rpn_bbox": 0.04832, "loss_cls": 0.20848, "acc": 92.71191, "loss_bbox": 0.24059, "loss_mask": 0.2429, "loss": 0.76663, "time": 0.22093} -{"mode": "train", "epoch": 9, "iter": 250, "lr": 2e-05, "memory": 5941, "data_time": 0.03783, "loss_rpn_cls": 0.02661, "loss_rpn_bbox": 0.04681, "loss_cls": 0.19536, "acc": 93.20044, "loss_bbox": 0.22628, "loss_mask": 0.23513, "loss": 0.73019, "time": 0.21595} -{"mode": "train", "epoch": 9, "iter": 300, "lr": 2e-05, "memory": 5941, "data_time": 0.03806, "loss_rpn_cls": 0.02641, "loss_rpn_bbox": 0.04805, "loss_cls": 0.2022, "acc": 92.93188, "loss_bbox": 0.23731, "loss_mask": 0.236, "loss": 0.74997, "time": 0.2139} -{"mode": "train", "epoch": 9, "iter": 350, "lr": 2e-05, "memory": 5941, "data_time": 0.04121, "loss_rpn_cls": 0.02704, "loss_rpn_bbox": 0.05103, "loss_cls": 0.20776, "acc": 92.68726, "loss_bbox": 0.24652, "loss_mask": 0.2426, "loss": 0.77495, "time": 0.2218} -{"mode": "train", "epoch": 9, "iter": 400, "lr": 2e-05, "memory": 5941, "data_time": 0.03751, "loss_rpn_cls": 0.02411, "loss_rpn_bbox": 0.04693, "loss_cls": 0.19193, "acc": 93.15332, "loss_bbox": 0.22789, "loss_mask": 0.23392, "loss": 0.72477, "time": 0.21373} -{"mode": "train", "epoch": 9, "iter": 450, "lr": 2e-05, "memory": 5941, 
"data_time": 0.03818, "loss_rpn_cls": 0.02537, "loss_rpn_bbox": 0.04834, "loss_cls": 0.19477, "acc": 93.07349, "loss_bbox": 0.23196, "loss_mask": 0.23294, "loss": 0.73336, "time": 0.22841} -{"mode": "train", "epoch": 9, "iter": 500, "lr": 2e-05, "memory": 5941, "data_time": 0.03483, "loss_rpn_cls": 0.02322, "loss_rpn_bbox": 0.04794, "loss_cls": 0.19589, "acc": 93.14868, "loss_bbox": 0.22896, "loss_mask": 0.23271, "loss": 0.72872, "time": 0.21185} -{"mode": "train", "epoch": 9, "iter": 550, "lr": 2e-05, "memory": 5941, "data_time": 0.03257, "loss_rpn_cls": 0.023, "loss_rpn_bbox": 0.04676, "loss_cls": 0.19772, "acc": 92.93457, "loss_bbox": 0.23769, "loss_mask": 0.23569, "loss": 0.74087, "time": 0.21019} -{"mode": "train", "epoch": 9, "iter": 600, "lr": 2e-05, "memory": 5941, "data_time": 0.03858, "loss_rpn_cls": 0.0268, "loss_rpn_bbox": 0.04986, "loss_cls": 0.20643, "acc": 92.70557, "loss_bbox": 0.24105, "loss_mask": 0.23724, "loss": 0.76138, "time": 0.21989} -{"mode": "train", "epoch": 9, "iter": 650, "lr": 2e-05, "memory": 5941, "data_time": 0.04774, "loss_rpn_cls": 0.02736, "loss_rpn_bbox": 0.05309, "loss_cls": 0.21302, "acc": 92.21313, "loss_bbox": 0.25341, "loss_mask": 0.24073, "loss": 0.78761, "time": 0.2288} -{"mode": "train", "epoch": 9, "iter": 700, "lr": 2e-05, "memory": 5941, "data_time": 0.04184, "loss_rpn_cls": 0.02498, "loss_rpn_bbox": 0.04838, "loss_cls": 0.20253, "acc": 92.73193, "loss_bbox": 0.243, "loss_mask": 0.23383, "loss": 0.75273, "time": 0.21686} -{"mode": "train", "epoch": 9, "iter": 750, "lr": 2e-05, "memory": 5941, "data_time": 0.04547, "loss_rpn_cls": 0.02368, "loss_rpn_bbox": 0.04763, "loss_cls": 0.19317, "acc": 93.13086, "loss_bbox": 0.23703, "loss_mask": 0.23271, "loss": 0.73423, "time": 0.21681} -{"mode": "train", "epoch": 9, "iter": 800, "lr": 2e-05, "memory": 5941, "data_time": 0.03961, "loss_rpn_cls": 0.02801, "loss_rpn_bbox": 0.0491, "loss_cls": 0.19924, "acc": 92.84814, "loss_bbox": 0.2398, "loss_mask": 0.22905, "loss": 0.7452, "time": 0.21676} -{"mode": "train", "epoch": 9, "iter": 850, "lr": 2e-05, "memory": 5941, "data_time": 0.04143, "loss_rpn_cls": 0.02299, "loss_rpn_bbox": 0.04808, "loss_cls": 0.18819, "acc": 93.32324, "loss_bbox": 0.22614, "loss_mask": 0.22857, "loss": 0.71397, "time": 0.21493} -{"mode": "train", "epoch": 9, "iter": 900, "lr": 2e-05, "memory": 5941, "data_time": 0.04143, "loss_rpn_cls": 0.02661, "loss_rpn_bbox": 0.04971, "loss_cls": 0.19409, "acc": 93.04346, "loss_bbox": 0.2349, "loss_mask": 0.23979, "loss": 0.7451, "time": 0.21361} -{"mode": "train", "epoch": 9, "iter": 950, "lr": 2e-05, "memory": 5941, "data_time": 0.03855, "loss_rpn_cls": 0.02449, "loss_rpn_bbox": 0.04898, "loss_cls": 0.19793, "acc": 92.86938, "loss_bbox": 0.23893, "loss_mask": 0.23602, "loss": 0.74635, "time": 0.21154} -{"mode": "train", "epoch": 9, "iter": 1000, "lr": 2e-05, "memory": 5941, "data_time": 0.0331, "loss_rpn_cls": 0.02836, "loss_rpn_bbox": 0.05172, "loss_cls": 0.20929, "acc": 92.49951, "loss_bbox": 0.2503, "loss_mask": 0.23726, "loss": 0.77693, "time": 0.22664} -{"mode": "train", "epoch": 9, "iter": 1050, "lr": 2e-05, "memory": 5941, "data_time": 0.03352, "loss_rpn_cls": 0.0257, "loss_rpn_bbox": 0.0517, "loss_cls": 0.20577, "acc": 92.75537, "loss_bbox": 0.24554, "loss_mask": 0.23621, "loss": 0.76491, "time": 0.21718} -{"mode": "train", "epoch": 9, "iter": 1100, "lr": 2e-05, "memory": 5941, "data_time": 0.03503, "loss_rpn_cls": 0.02594, "loss_rpn_bbox": 0.05345, "loss_cls": 0.21241, "acc": 92.59961, "loss_bbox": 0.24514, "loss_mask": 0.23404, 
"loss": 0.77099, "time": 0.22726} -{"mode": "train", "epoch": 9, "iter": 1150, "lr": 2e-05, "memory": 5941, "data_time": 0.03782, "loss_rpn_cls": 0.02538, "loss_rpn_bbox": 0.04878, "loss_cls": 0.19817, "acc": 92.90161, "loss_bbox": 0.23764, "loss_mask": 0.23331, "loss": 0.74328, "time": 0.20831} -{"mode": "train", "epoch": 9, "iter": 1200, "lr": 2e-05, "memory": 5941, "data_time": 0.03919, "loss_rpn_cls": 0.0244, "loss_rpn_bbox": 0.05009, "loss_cls": 0.19634, "acc": 93.00732, "loss_bbox": 0.22966, "loss_mask": 0.23058, "loss": 0.73107, "time": 0.22177} -{"mode": "train", "epoch": 9, "iter": 1250, "lr": 2e-05, "memory": 5941, "data_time": 0.03162, "loss_rpn_cls": 0.02543, "loss_rpn_bbox": 0.04711, "loss_cls": 0.19507, "acc": 93.06421, "loss_bbox": 0.22876, "loss_mask": 0.23039, "loss": 0.72675, "time": 0.2059} -{"mode": "train", "epoch": 9, "iter": 1300, "lr": 2e-05, "memory": 5941, "data_time": 0.03359, "loss_rpn_cls": 0.02202, "loss_rpn_bbox": 0.04614, "loss_cls": 0.18738, "acc": 93.35083, "loss_bbox": 0.2229, "loss_mask": 0.23032, "loss": 0.70877, "time": 0.21246} -{"mode": "train", "epoch": 9, "iter": 1350, "lr": 2e-05, "memory": 5941, "data_time": 0.04468, "loss_rpn_cls": 0.02491, "loss_rpn_bbox": 0.0495, "loss_cls": 0.19642, "acc": 93.00537, "loss_bbox": 0.23181, "loss_mask": 0.2336, "loss": 0.73623, "time": 0.20927} -{"mode": "train", "epoch": 9, "iter": 1400, "lr": 2e-05, "memory": 5941, "data_time": 0.03121, "loss_rpn_cls": 0.02443, "loss_rpn_bbox": 0.0456, "loss_cls": 0.18509, "acc": 93.37231, "loss_bbox": 0.22267, "loss_mask": 0.22691, "loss": 0.7047, "time": 0.20381} -{"mode": "train", "epoch": 9, "iter": 1450, "lr": 2e-05, "memory": 5941, "data_time": 0.03163, "loss_rpn_cls": 0.02711, "loss_rpn_bbox": 0.04813, "loss_cls": 0.19442, "acc": 93.05151, "loss_bbox": 0.23077, "loss_mask": 0.23356, "loss": 0.73398, "time": 0.211} -{"mode": "train", "epoch": 9, "iter": 1500, "lr": 2e-05, "memory": 5941, "data_time": 0.0372, "loss_rpn_cls": 0.02453, "loss_rpn_bbox": 0.04742, "loss_cls": 0.19209, "acc": 93.13428, "loss_bbox": 0.23705, "loss_mask": 0.23632, "loss": 0.73741, "time": 0.19981} -{"mode": "train", "epoch": 9, "iter": 1550, "lr": 2e-05, "memory": 5941, "data_time": 0.04175, "loss_rpn_cls": 0.025, "loss_rpn_bbox": 0.04784, "loss_cls": 0.19315, "acc": 93.12646, "loss_bbox": 0.2339, "loss_mask": 0.23427, "loss": 0.73417, "time": 0.20606} -{"mode": "train", "epoch": 9, "iter": 1600, "lr": 2e-05, "memory": 5941, "data_time": 0.04089, "loss_rpn_cls": 0.02394, "loss_rpn_bbox": 0.04829, "loss_cls": 0.19213, "acc": 93.0625, "loss_bbox": 0.23073, "loss_mask": 0.23559, "loss": 0.73068, "time": 0.21087} -{"mode": "train", "epoch": 9, "iter": 1650, "lr": 2e-05, "memory": 5941, "data_time": 0.03934, "loss_rpn_cls": 0.02624, "loss_rpn_bbox": 0.05063, "loss_cls": 0.19325, "acc": 93.02319, "loss_bbox": 0.23574, "loss_mask": 0.23107, "loss": 0.73693, "time": 0.21848} -{"mode": "train", "epoch": 9, "iter": 1700, "lr": 2e-05, "memory": 5941, "data_time": 0.04516, "loss_rpn_cls": 0.02403, "loss_rpn_bbox": 0.04962, "loss_cls": 0.20203, "acc": 92.76123, "loss_bbox": 0.24137, "loss_mask": 0.2346, "loss": 0.75165, "time": 0.21956} -{"mode": "train", "epoch": 9, "iter": 1750, "lr": 2e-05, "memory": 5941, "data_time": 0.03868, "loss_rpn_cls": 0.02458, "loss_rpn_bbox": 0.04743, "loss_cls": 0.19375, "acc": 93.0686, "loss_bbox": 0.23173, "loss_mask": 0.22623, "loss": 0.72371, "time": 0.20732} -{"mode": "train", "epoch": 9, "iter": 1800, "lr": 2e-05, "memory": 5941, "data_time": 0.02837, "loss_rpn_cls": 
0.02333, "loss_rpn_bbox": 0.04709, "loss_cls": 0.18283, "acc": 93.50562, "loss_bbox": 0.21258, "loss_mask": 0.23074, "loss": 0.69657, "time": 0.20208} -{"mode": "train", "epoch": 9, "iter": 1850, "lr": 2e-05, "memory": 5941, "data_time": 0.04207, "loss_rpn_cls": 0.02215, "loss_rpn_bbox": 0.04841, "loss_cls": 0.19449, "acc": 93.01318, "loss_bbox": 0.23646, "loss_mask": 0.23111, "loss": 0.73262, "time": 0.21406} -{"mode": "train", "epoch": 9, "iter": 1900, "lr": 2e-05, "memory": 5941, "data_time": 0.04122, "loss_rpn_cls": 0.02681, "loss_rpn_bbox": 0.04945, "loss_cls": 0.19499, "acc": 93.06494, "loss_bbox": 0.23418, "loss_mask": 0.22906, "loss": 0.73448, "time": 0.21914} -{"mode": "train", "epoch": 9, "iter": 1950, "lr": 2e-05, "memory": 5941, "data_time": 0.03709, "loss_rpn_cls": 0.02458, "loss_rpn_bbox": 0.04827, "loss_cls": 0.18812, "acc": 93.34961, "loss_bbox": 0.22457, "loss_mask": 0.22806, "loss": 0.7136, "time": 0.20479} -{"mode": "train", "epoch": 9, "iter": 2000, "lr": 2e-05, "memory": 5941, "data_time": 0.03613, "loss_rpn_cls": 0.02379, "loss_rpn_bbox": 0.048, "loss_cls": 0.18701, "acc": 93.3064, "loss_bbox": 0.22945, "loss_mask": 0.22942, "loss": 0.71767, "time": 0.21154} -{"mode": "train", "epoch": 9, "iter": 2050, "lr": 2e-05, "memory": 5941, "data_time": 0.03834, "loss_rpn_cls": 0.02295, "loss_rpn_bbox": 0.04639, "loss_cls": 0.18708, "acc": 93.2627, "loss_bbox": 0.22443, "loss_mask": 0.22678, "loss": 0.70763, "time": 0.20992} -{"mode": "train", "epoch": 9, "iter": 2100, "lr": 2e-05, "memory": 5941, "data_time": 0.03881, "loss_rpn_cls": 0.02282, "loss_rpn_bbox": 0.0456, "loss_cls": 0.18959, "acc": 93.16162, "loss_bbox": 0.22802, "loss_mask": 0.22805, "loss": 0.71407, "time": 0.21623} -{"mode": "train", "epoch": 9, "iter": 2150, "lr": 2e-05, "memory": 5941, "data_time": 0.03577, "loss_rpn_cls": 0.02598, "loss_rpn_bbox": 0.04705, "loss_cls": 0.19242, "acc": 93.06445, "loss_bbox": 0.23337, "loss_mask": 0.23111, "loss": 0.72992, "time": 0.21465} -{"mode": "train", "epoch": 9, "iter": 2200, "lr": 2e-05, "memory": 5941, "data_time": 0.03599, "loss_rpn_cls": 0.02606, "loss_rpn_bbox": 0.04884, "loss_cls": 0.19311, "acc": 93.0625, "loss_bbox": 0.22873, "loss_mask": 0.2329, "loss": 0.72965, "time": 0.20814} -{"mode": "train", "epoch": 9, "iter": 2250, "lr": 2e-05, "memory": 5941, "data_time": 0.03699, "loss_rpn_cls": 0.02256, "loss_rpn_bbox": 0.04678, "loss_cls": 0.18901, "acc": 93.21802, "loss_bbox": 0.22814, "loss_mask": 0.23185, "loss": 0.71833, "time": 0.21929} -{"mode": "train", "epoch": 9, "iter": 2300, "lr": 2e-05, "memory": 5941, "data_time": 0.0334, "loss_rpn_cls": 0.02507, "loss_rpn_bbox": 0.04674, "loss_cls": 0.18625, "acc": 93.30762, "loss_bbox": 0.22578, "loss_mask": 0.23347, "loss": 0.71731, "time": 0.20386} -{"mode": "train", "epoch": 9, "iter": 2350, "lr": 2e-05, "memory": 5941, "data_time": 0.03909, "loss_rpn_cls": 0.02498, "loss_rpn_bbox": 0.05048, "loss_cls": 0.20186, "acc": 92.66187, "loss_bbox": 0.24791, "loss_mask": 0.2382, "loss": 0.76344, "time": 0.22195} -{"mode": "train", "epoch": 9, "iter": 2400, "lr": 2e-05, "memory": 5941, "data_time": 0.04347, "loss_rpn_cls": 0.02257, "loss_rpn_bbox": 0.04912, "loss_cls": 0.18659, "acc": 93.28638, "loss_bbox": 0.22938, "loss_mask": 0.23179, "loss": 0.71946, "time": 0.20392} -{"mode": "train", "epoch": 9, "iter": 2450, "lr": 2e-05, "memory": 5941, "data_time": 0.04263, "loss_rpn_cls": 0.02249, "loss_rpn_bbox": 0.04716, "loss_cls": 0.1925, "acc": 93.08838, "loss_bbox": 0.23607, "loss_mask": 0.23423, "loss": 0.73244, "time": 
0.20571} -{"mode": "train", "epoch": 9, "iter": 2500, "lr": 2e-05, "memory": 5941, "data_time": 0.03783, "loss_rpn_cls": 0.02287, "loss_rpn_bbox": 0.0456, "loss_cls": 0.19506, "acc": 93.11353, "loss_bbox": 0.23113, "loss_mask": 0.22909, "loss": 0.72375, "time": 0.2095} -{"mode": "train", "epoch": 9, "iter": 2550, "lr": 2e-05, "memory": 5941, "data_time": 0.03928, "loss_rpn_cls": 0.02444, "loss_rpn_bbox": 0.04663, "loss_cls": 0.18884, "acc": 93.18384, "loss_bbox": 0.22713, "loss_mask": 0.2266, "loss": 0.71364, "time": 0.20943} -{"mode": "train", "epoch": 9, "iter": 2600, "lr": 2e-05, "memory": 5941, "data_time": 0.0498, "loss_rpn_cls": 0.02423, "loss_rpn_bbox": 0.04703, "loss_cls": 0.18624, "acc": 93.31006, "loss_bbox": 0.22805, "loss_mask": 0.23084, "loss": 0.71638, "time": 0.21428} -{"mode": "train", "epoch": 9, "iter": 2650, "lr": 2e-05, "memory": 5941, "data_time": 0.03261, "loss_rpn_cls": 0.0223, "loss_rpn_bbox": 0.04469, "loss_cls": 0.18253, "acc": 93.39014, "loss_bbox": 0.21869, "loss_mask": 0.22592, "loss": 0.69414, "time": 0.19845} -{"mode": "train", "epoch": 9, "iter": 2700, "lr": 2e-05, "memory": 5941, "data_time": 0.03816, "loss_rpn_cls": 0.02603, "loss_rpn_bbox": 0.0462, "loss_cls": 0.19135, "acc": 93.2019, "loss_bbox": 0.22947, "loss_mask": 0.2277, "loss": 0.72075, "time": 0.21125} -{"mode": "train", "epoch": 9, "iter": 2750, "lr": 2e-05, "memory": 5941, "data_time": 0.0392, "loss_rpn_cls": 0.02379, "loss_rpn_bbox": 0.04399, "loss_cls": 0.19337, "acc": 93.18994, "loss_bbox": 0.22962, "loss_mask": 0.22784, "loss": 0.71862, "time": 0.20812} -{"mode": "train", "epoch": 9, "iter": 2800, "lr": 2e-05, "memory": 5941, "data_time": 0.0464, "loss_rpn_cls": 0.02398, "loss_rpn_bbox": 0.04652, "loss_cls": 0.19097, "acc": 93.19629, "loss_bbox": 0.23104, "loss_mask": 0.22576, "loss": 0.71828, "time": 0.21035} -{"mode": "train", "epoch": 9, "iter": 2850, "lr": 2e-05, "memory": 5941, "data_time": 0.04409, "loss_rpn_cls": 0.02427, "loss_rpn_bbox": 0.04663, "loss_cls": 0.19468, "acc": 92.98804, "loss_bbox": 0.23543, "loss_mask": 0.22528, "loss": 0.72629, "time": 0.22236} -{"mode": "train", "epoch": 9, "iter": 2900, "lr": 2e-05, "memory": 5941, "data_time": 0.03748, "loss_rpn_cls": 0.02371, "loss_rpn_bbox": 0.04997, "loss_cls": 0.19768, "acc": 92.81152, "loss_bbox": 0.2397, "loss_mask": 0.23188, "loss": 0.74294, "time": 0.21128} -{"mode": "train", "epoch": 9, "iter": 2950, "lr": 2e-05, "memory": 5941, "data_time": 0.04028, "loss_rpn_cls": 0.026, "loss_rpn_bbox": 0.05123, "loss_cls": 0.20077, "acc": 92.66699, "loss_bbox": 0.24507, "loss_mask": 0.22902, "loss": 0.75209, "time": 0.22615} -{"mode": "train", "epoch": 9, "iter": 3000, "lr": 2e-05, "memory": 5941, "data_time": 0.04063, "loss_rpn_cls": 0.02513, "loss_rpn_bbox": 0.04883, "loss_cls": 0.19481, "acc": 93.01855, "loss_bbox": 0.23529, "loss_mask": 0.23283, "loss": 0.7369, "time": 0.21202} -{"mode": "train", "epoch": 9, "iter": 3050, "lr": 2e-05, "memory": 5941, "data_time": 0.04345, "loss_rpn_cls": 0.02316, "loss_rpn_bbox": 0.0452, "loss_cls": 0.18711, "acc": 93.31934, "loss_bbox": 0.2277, "loss_mask": 0.2285, "loss": 0.71168, "time": 0.2124} -{"mode": "train", "epoch": 9, "iter": 3100, "lr": 2e-05, "memory": 5941, "data_time": 0.04326, "loss_rpn_cls": 0.02447, "loss_rpn_bbox": 0.05074, "loss_cls": 0.19943, "acc": 92.76147, "loss_bbox": 0.24318, "loss_mask": 0.23602, "loss": 0.75384, "time": 0.22652} -{"mode": "train", "epoch": 9, "iter": 3150, "lr": 2e-05, "memory": 5941, "data_time": 0.03661, "loss_rpn_cls": 0.02527, "loss_rpn_bbox": 
0.04784, "loss_cls": 0.1925, "acc": 93.13647, "loss_bbox": 0.22948, "loss_mask": 0.23044, "loss": 0.72553, "time": 0.20522} -{"mode": "train", "epoch": 9, "iter": 3200, "lr": 2e-05, "memory": 5941, "data_time": 0.03564, "loss_rpn_cls": 0.02307, "loss_rpn_bbox": 0.04507, "loss_cls": 0.17814, "acc": 93.57764, "loss_bbox": 0.21715, "loss_mask": 0.22625, "loss": 0.68968, "time": 0.20514} -{"mode": "train", "epoch": 9, "iter": 3250, "lr": 2e-05, "memory": 5941, "data_time": 0.02907, "loss_rpn_cls": 0.02407, "loss_rpn_bbox": 0.0464, "loss_cls": 0.1852, "acc": 93.30713, "loss_bbox": 0.22953, "loss_mask": 0.23493, "loss": 0.72013, "time": 0.20905} -{"mode": "train", "epoch": 9, "iter": 3300, "lr": 2e-05, "memory": 5941, "data_time": 0.0391, "loss_rpn_cls": 0.02591, "loss_rpn_bbox": 0.04909, "loss_cls": 0.18453, "acc": 93.47949, "loss_bbox": 0.2191, "loss_mask": 0.22668, "loss": 0.70531, "time": 0.20544} -{"mode": "train", "epoch": 9, "iter": 3350, "lr": 2e-05, "memory": 5941, "data_time": 0.03515, "loss_rpn_cls": 0.02466, "loss_rpn_bbox": 0.04861, "loss_cls": 0.19206, "acc": 92.87573, "loss_bbox": 0.2373, "loss_mask": 0.23665, "loss": 0.73928, "time": 0.21769} -{"mode": "train", "epoch": 9, "iter": 3400, "lr": 2e-05, "memory": 5941, "data_time": 0.03376, "loss_rpn_cls": 0.02272, "loss_rpn_bbox": 0.04729, "loss_cls": 0.18486, "acc": 93.323, "loss_bbox": 0.22661, "loss_mask": 0.23148, "loss": 0.71296, "time": 0.21047} -{"mode": "train", "epoch": 9, "iter": 3450, "lr": 2e-05, "memory": 5941, "data_time": 0.04471, "loss_rpn_cls": 0.02341, "loss_rpn_bbox": 0.04752, "loss_cls": 0.18571, "acc": 93.31128, "loss_bbox": 0.22394, "loss_mask": 0.22697, "loss": 0.70755, "time": 0.21278} -{"mode": "train", "epoch": 9, "iter": 3500, "lr": 2e-05, "memory": 5941, "data_time": 0.03992, "loss_rpn_cls": 0.02164, "loss_rpn_bbox": 0.04562, "loss_cls": 0.18154, "acc": 93.47852, "loss_bbox": 0.22302, "loss_mask": 0.22592, "loss": 0.69774, "time": 0.20141} -{"mode": "train", "epoch": 9, "iter": 3550, "lr": 2e-05, "memory": 5941, "data_time": 0.03671, "loss_rpn_cls": 0.02054, "loss_rpn_bbox": 0.04384, "loss_cls": 0.17947, "acc": 93.52148, "loss_bbox": 0.21599, "loss_mask": 0.22685, "loss": 0.68669, "time": 0.20463} -{"mode": "train", "epoch": 9, "iter": 3600, "lr": 2e-05, "memory": 5941, "data_time": 0.03642, "loss_rpn_cls": 0.02328, "loss_rpn_bbox": 0.0501, "loss_cls": 0.18821, "acc": 93.26123, "loss_bbox": 0.23041, "loss_mask": 0.22657, "loss": 0.71856, "time": 0.22171} -{"mode": "train", "epoch": 9, "iter": 3650, "lr": 2e-05, "memory": 5941, "data_time": 0.03406, "loss_rpn_cls": 0.02266, "loss_rpn_bbox": 0.04767, "loss_cls": 0.19017, "acc": 93.18579, "loss_bbox": 0.22748, "loss_mask": 0.23094, "loss": 0.71892, "time": 0.20858} -{"mode": "train", "epoch": 9, "iter": 3700, "lr": 2e-05, "memory": 5941, "data_time": 0.03987, "loss_rpn_cls": 0.02344, "loss_rpn_bbox": 0.04715, "loss_cls": 0.19952, "acc": 92.86133, "loss_bbox": 0.23603, "loss_mask": 0.23061, "loss": 0.73676, "time": 0.21085} -{"mode": "train", "epoch": 9, "iter": 3750, "lr": 2e-05, "memory": 5941, "data_time": 0.03338, "loss_rpn_cls": 0.02383, "loss_rpn_bbox": 0.04448, "loss_cls": 0.18639, "acc": 93.18481, "loss_bbox": 0.22577, "loss_mask": 0.22581, "loss": 0.70629, "time": 0.20659} -{"mode": "train", "epoch": 9, "iter": 3800, "lr": 2e-05, "memory": 5941, "data_time": 0.04033, "loss_rpn_cls": 0.02369, "loss_rpn_bbox": 0.04776, "loss_cls": 0.18746, "acc": 93.24365, "loss_bbox": 0.23139, "loss_mask": 0.22754, "loss": 0.71784, "time": 0.2181} -{"mode": "train", 
"epoch": 9, "iter": 3850, "lr": 2e-05, "memory": 5941, "data_time": 0.03952, "loss_rpn_cls": 0.0239, "loss_rpn_bbox": 0.04807, "loss_cls": 0.19432, "acc": 92.95947, "loss_bbox": 0.24038, "loss_mask": 0.23232, "loss": 0.73899, "time": 0.21591} -{"mode": "train", "epoch": 9, "iter": 3900, "lr": 2e-05, "memory": 5941, "data_time": 0.03521, "loss_rpn_cls": 0.0232, "loss_rpn_bbox": 0.04657, "loss_cls": 0.18365, "acc": 93.43677, "loss_bbox": 0.22325, "loss_mask": 0.22405, "loss": 0.70072, "time": 0.20843} -{"mode": "train", "epoch": 9, "iter": 3950, "lr": 2e-05, "memory": 5941, "data_time": 0.03936, "loss_rpn_cls": 0.02323, "loss_rpn_bbox": 0.04851, "loss_cls": 0.18639, "acc": 93.15747, "loss_bbox": 0.22525, "loss_mask": 0.22805, "loss": 0.71143, "time": 0.21839} -{"mode": "train", "epoch": 9, "iter": 4000, "lr": 2e-05, "memory": 5941, "data_time": 0.04123, "loss_rpn_cls": 0.02508, "loss_rpn_bbox": 0.05048, "loss_cls": 0.19455, "acc": 92.93701, "loss_bbox": 0.23486, "loss_mask": 0.22596, "loss": 0.73094, "time": 0.21036} -{"mode": "train", "epoch": 9, "iter": 4050, "lr": 2e-05, "memory": 5941, "data_time": 0.03701, "loss_rpn_cls": 0.02266, "loss_rpn_bbox": 0.04557, "loss_cls": 0.18467, "acc": 93.38257, "loss_bbox": 0.22602, "loss_mask": 0.22375, "loss": 0.70267, "time": 0.21129} -{"mode": "train", "epoch": 9, "iter": 4100, "lr": 2e-05, "memory": 5941, "data_time": 0.03945, "loss_rpn_cls": 0.02449, "loss_rpn_bbox": 0.04826, "loss_cls": 0.20565, "acc": 92.72949, "loss_bbox": 0.2437, "loss_mask": 0.23356, "loss": 0.75566, "time": 0.21289} -{"mode": "train", "epoch": 9, "iter": 4150, "lr": 2e-05, "memory": 5941, "data_time": 0.03139, "loss_rpn_cls": 0.02314, "loss_rpn_bbox": 0.04688, "loss_cls": 0.18656, "acc": 93.26074, "loss_bbox": 0.23233, "loss_mask": 0.22602, "loss": 0.71493, "time": 0.2131} -{"mode": "train", "epoch": 9, "iter": 4200, "lr": 2e-05, "memory": 5941, "data_time": 0.03325, "loss_rpn_cls": 0.02351, "loss_rpn_bbox": 0.04493, "loss_cls": 0.19506, "acc": 93.16797, "loss_bbox": 0.22617, "loss_mask": 0.22467, "loss": 0.71434, "time": 0.21001} -{"mode": "train", "epoch": 9, "iter": 4250, "lr": 2e-05, "memory": 5941, "data_time": 0.04443, "loss_rpn_cls": 0.02503, "loss_rpn_bbox": 0.05064, "loss_cls": 0.19227, "acc": 92.96411, "loss_bbox": 0.24047, "loss_mask": 0.23199, "loss": 0.74041, "time": 0.2111} -{"mode": "train", "epoch": 9, "iter": 4300, "lr": 2e-05, "memory": 5941, "data_time": 0.03901, "loss_rpn_cls": 0.02428, "loss_rpn_bbox": 0.04843, "loss_cls": 0.19109, "acc": 93.12939, "loss_bbox": 0.2309, "loss_mask": 0.23016, "loss": 0.72486, "time": 0.21218} -{"mode": "train", "epoch": 9, "iter": 4350, "lr": 2e-05, "memory": 5941, "data_time": 0.03908, "loss_rpn_cls": 0.02397, "loss_rpn_bbox": 0.04661, "loss_cls": 0.18886, "acc": 93.24658, "loss_bbox": 0.22863, "loss_mask": 0.22714, "loss": 0.71522, "time": 0.20612} -{"mode": "train", "epoch": 9, "iter": 4400, "lr": 2e-05, "memory": 5941, "data_time": 0.04219, "loss_rpn_cls": 0.02206, "loss_rpn_bbox": 0.0457, "loss_cls": 0.18789, "acc": 93.17261, "loss_bbox": 0.22748, "loss_mask": 0.23068, "loss": 0.71381, "time": 0.20892} -{"mode": "train", "epoch": 9, "iter": 4450, "lr": 2e-05, "memory": 5941, "data_time": 0.03968, "loss_rpn_cls": 0.02409, "loss_rpn_bbox": 0.0483, "loss_cls": 0.18799, "acc": 93.25342, "loss_bbox": 0.22384, "loss_mask": 0.22602, "loss": 0.71023, "time": 0.20801} -{"mode": "train", "epoch": 9, "iter": 4500, "lr": 2e-05, "memory": 5941, "data_time": 0.03336, "loss_rpn_cls": 0.02452, "loss_rpn_bbox": 0.04742, "loss_cls": 
0.20045, "acc": 92.78955, "loss_bbox": 0.24061, "loss_mask": 0.23761, "loss": 0.75061, "time": 0.21354} -{"mode": "train", "epoch": 9, "iter": 4550, "lr": 2e-05, "memory": 5941, "data_time": 0.0378, "loss_rpn_cls": 0.02536, "loss_rpn_bbox": 0.05, "loss_cls": 0.19693, "acc": 92.86255, "loss_bbox": 0.23831, "loss_mask": 0.22947, "loss": 0.74007, "time": 0.21191} -{"mode": "train", "epoch": 9, "iter": 4600, "lr": 2e-05, "memory": 5941, "data_time": 0.03485, "loss_rpn_cls": 0.02338, "loss_rpn_bbox": 0.04909, "loss_cls": 0.19016, "acc": 93.1228, "loss_bbox": 0.23269, "loss_mask": 0.23499, "loss": 0.73031, "time": 0.20998} -{"mode": "train", "epoch": 9, "iter": 4650, "lr": 2e-05, "memory": 5941, "data_time": 0.02702, "loss_rpn_cls": 0.02237, "loss_rpn_bbox": 0.04541, "loss_cls": 0.18169, "acc": 93.49438, "loss_bbox": 0.21836, "loss_mask": 0.22357, "loss": 0.69141, "time": 0.19832} -{"mode": "train", "epoch": 9, "iter": 4700, "lr": 2e-05, "memory": 5941, "data_time": 0.03901, "loss_rpn_cls": 0.02309, "loss_rpn_bbox": 0.045, "loss_cls": 0.1895, "acc": 93.16138, "loss_bbox": 0.22626, "loss_mask": 0.22975, "loss": 0.7136, "time": 0.20612} -{"mode": "train", "epoch": 9, "iter": 4750, "lr": 2e-05, "memory": 5941, "data_time": 0.0359, "loss_rpn_cls": 0.02524, "loss_rpn_bbox": 0.04816, "loss_cls": 0.19592, "acc": 92.8938, "loss_bbox": 0.24133, "loss_mask": 0.23102, "loss": 0.74167, "time": 0.2173} -{"mode": "train", "epoch": 9, "iter": 4800, "lr": 2e-05, "memory": 5941, "data_time": 0.04245, "loss_rpn_cls": 0.02318, "loss_rpn_bbox": 0.04695, "loss_cls": 0.19406, "acc": 93.12598, "loss_bbox": 0.22999, "loss_mask": 0.22955, "loss": 0.72374, "time": 0.2096} -{"mode": "train", "epoch": 9, "iter": 4850, "lr": 2e-05, "memory": 5941, "data_time": 0.0339, "loss_rpn_cls": 0.02303, "loss_rpn_bbox": 0.04575, "loss_cls": 0.18585, "acc": 93.3623, "loss_bbox": 0.22646, "loss_mask": 0.22835, "loss": 0.70945, "time": 0.20244} -{"mode": "train", "epoch": 9, "iter": 4900, "lr": 2e-05, "memory": 5941, "data_time": 0.04385, "loss_rpn_cls": 0.02433, "loss_rpn_bbox": 0.04751, "loss_cls": 0.19384, "acc": 92.87744, "loss_bbox": 0.23723, "loss_mask": 0.23074, "loss": 0.73365, "time": 0.21282} -{"mode": "train", "epoch": 9, "iter": 4950, "lr": 2e-05, "memory": 5941, "data_time": 0.03477, "loss_rpn_cls": 0.02253, "loss_rpn_bbox": 0.04424, "loss_cls": 0.17988, "acc": 93.56152, "loss_bbox": 0.22088, "loss_mask": 0.23208, "loss": 0.69961, "time": 0.2043} -{"mode": "train", "epoch": 9, "iter": 5000, "lr": 2e-05, "memory": 5941, "data_time": 0.03779, "loss_rpn_cls": 0.02373, "loss_rpn_bbox": 0.04867, "loss_cls": 0.1924, "acc": 92.96436, "loss_bbox": 0.2322, "loss_mask": 0.23174, "loss": 0.72873, "time": 0.20876} -{"mode": "train", "epoch": 9, "iter": 5050, "lr": 2e-05, "memory": 5941, "data_time": 0.04566, "loss_rpn_cls": 0.02467, "loss_rpn_bbox": 0.05006, "loss_cls": 0.18908, "acc": 93.23315, "loss_bbox": 0.22823, "loss_mask": 0.23203, "loss": 0.72407, "time": 0.21549} -{"mode": "train", "epoch": 9, "iter": 5100, "lr": 2e-05, "memory": 5941, "data_time": 0.03599, "loss_rpn_cls": 0.023, "loss_rpn_bbox": 0.0496, "loss_cls": 0.18843, "acc": 93.17676, "loss_bbox": 0.23221, "loss_mask": 0.23081, "loss": 0.72403, "time": 0.21265} -{"mode": "train", "epoch": 9, "iter": 5150, "lr": 2e-05, "memory": 5941, "data_time": 0.04155, "loss_rpn_cls": 0.02371, "loss_rpn_bbox": 0.04618, "loss_cls": 0.18368, "acc": 93.3374, "loss_bbox": 0.22958, "loss_mask": 0.23474, "loss": 0.71789, "time": 0.20515} -{"mode": "train", "epoch": 9, "iter": 5200, "lr": 
2e-05, "memory": 5941, "data_time": 0.03528, "loss_rpn_cls": 0.02572, "loss_rpn_bbox": 0.04437, "loss_cls": 0.18397, "acc": 93.45898, "loss_bbox": 0.22214, "loss_mask": 0.22957, "loss": 0.70577, "time": 0.20647} -{"mode": "train", "epoch": 9, "iter": 5250, "lr": 2e-05, "memory": 5941, "data_time": 0.03329, "loss_rpn_cls": 0.02283, "loss_rpn_bbox": 0.04665, "loss_cls": 0.1861, "acc": 93.30884, "loss_bbox": 0.22576, "loss_mask": 0.23013, "loss": 0.71148, "time": 0.20266} -{"mode": "train", "epoch": 9, "iter": 5300, "lr": 2e-05, "memory": 5941, "data_time": 0.03298, "loss_rpn_cls": 0.02529, "loss_rpn_bbox": 0.04814, "loss_cls": 0.18886, "acc": 93.2356, "loss_bbox": 0.22399, "loss_mask": 0.22669, "loss": 0.71297, "time": 0.20792} -{"mode": "train", "epoch": 9, "iter": 5350, "lr": 2e-05, "memory": 5941, "data_time": 0.03605, "loss_rpn_cls": 0.02538, "loss_rpn_bbox": 0.04703, "loss_cls": 0.18947, "acc": 93.20337, "loss_bbox": 0.23379, "loss_mask": 0.23384, "loss": 0.72951, "time": 0.20728} -{"mode": "train", "epoch": 9, "iter": 5400, "lr": 2e-05, "memory": 5941, "data_time": 0.03281, "loss_rpn_cls": 0.02408, "loss_rpn_bbox": 0.04656, "loss_cls": 0.18728, "acc": 93.23926, "loss_bbox": 0.22791, "loss_mask": 0.23121, "loss": 0.71705, "time": 0.21084} -{"mode": "train", "epoch": 9, "iter": 5450, "lr": 2e-05, "memory": 5941, "data_time": 0.03345, "loss_rpn_cls": 0.02421, "loss_rpn_bbox": 0.04581, "loss_cls": 0.18456, "acc": 93.34155, "loss_bbox": 0.22706, "loss_mask": 0.22382, "loss": 0.70546, "time": 0.20297} -{"mode": "train", "epoch": 9, "iter": 5500, "lr": 2e-05, "memory": 5941, "data_time": 0.03209, "loss_rpn_cls": 0.02306, "loss_rpn_bbox": 0.04863, "loss_cls": 0.19416, "acc": 93.05005, "loss_bbox": 0.23737, "loss_mask": 0.23239, "loss": 0.73561, "time": 0.21159} -{"mode": "train", "epoch": 9, "iter": 5550, "lr": 2e-05, "memory": 5941, "data_time": 0.04582, "loss_rpn_cls": 0.02366, "loss_rpn_bbox": 0.04759, "loss_cls": 0.19182, "acc": 93.06641, "loss_bbox": 0.23309, "loss_mask": 0.22916, "loss": 0.72532, "time": 0.21006} -{"mode": "train", "epoch": 9, "iter": 5600, "lr": 2e-05, "memory": 5941, "data_time": 0.03357, "loss_rpn_cls": 0.02493, "loss_rpn_bbox": 0.04616, "loss_cls": 0.18292, "acc": 93.43359, "loss_bbox": 0.21884, "loss_mask": 0.22424, "loss": 0.69709, "time": 0.20726} -{"mode": "train", "epoch": 9, "iter": 5650, "lr": 2e-05, "memory": 5941, "data_time": 0.0415, "loss_rpn_cls": 0.02456, "loss_rpn_bbox": 0.04827, "loss_cls": 0.19294, "acc": 93.0144, "loss_bbox": 0.23878, "loss_mask": 0.23257, "loss": 0.73713, "time": 0.21192} -{"mode": "train", "epoch": 9, "iter": 5700, "lr": 2e-05, "memory": 5941, "data_time": 0.03842, "loss_rpn_cls": 0.02417, "loss_rpn_bbox": 0.04803, "loss_cls": 0.19326, "acc": 93.02417, "loss_bbox": 0.23629, "loss_mask": 0.23321, "loss": 0.73497, "time": 0.20949} -{"mode": "train", "epoch": 9, "iter": 5750, "lr": 2e-05, "memory": 5941, "data_time": 0.03735, "loss_rpn_cls": 0.02257, "loss_rpn_bbox": 0.04502, "loss_cls": 0.18447, "acc": 93.3728, "loss_bbox": 0.22243, "loss_mask": 0.22783, "loss": 0.70232, "time": 0.20912} -{"mode": "train", "epoch": 9, "iter": 5800, "lr": 2e-05, "memory": 5941, "data_time": 0.03826, "loss_rpn_cls": 0.0231, "loss_rpn_bbox": 0.04549, "loss_cls": 0.19008, "acc": 93.19702, "loss_bbox": 0.22778, "loss_mask": 0.22829, "loss": 0.71473, "time": 0.20069} -{"mode": "train", "epoch": 9, "iter": 5850, "lr": 2e-05, "memory": 5941, "data_time": 0.03434, "loss_rpn_cls": 0.02322, "loss_rpn_bbox": 0.04627, "loss_cls": 0.19077, "acc": 93.12476, 
"loss_bbox": 0.231, "loss_mask": 0.23073, "loss": 0.72199, "time": 0.20522} -{"mode": "train", "epoch": 9, "iter": 5900, "lr": 2e-05, "memory": 5941, "data_time": 0.04129, "loss_rpn_cls": 0.02429, "loss_rpn_bbox": 0.04843, "loss_cls": 0.18635, "acc": 93.40503, "loss_bbox": 0.22338, "loss_mask": 0.23107, "loss": 0.71352, "time": 0.21299} -{"mode": "train", "epoch": 9, "iter": 5950, "lr": 2e-05, "memory": 5941, "data_time": 0.03513, "loss_rpn_cls": 0.0238, "loss_rpn_bbox": 0.0473, "loss_cls": 0.18757, "acc": 93.21289, "loss_bbox": 0.2294, "loss_mask": 0.23088, "loss": 0.71895, "time": 0.20893} -{"mode": "train", "epoch": 9, "iter": 6000, "lr": 2e-05, "memory": 5941, "data_time": 0.03406, "loss_rpn_cls": 0.0233, "loss_rpn_bbox": 0.04721, "loss_cls": 0.19678, "acc": 92.89966, "loss_bbox": 0.2371, "loss_mask": 0.2278, "loss": 0.73219, "time": 0.20653} -{"mode": "train", "epoch": 9, "iter": 6050, "lr": 2e-05, "memory": 5941, "data_time": 0.03416, "loss_rpn_cls": 0.02273, "loss_rpn_bbox": 0.04865, "loss_cls": 0.18585, "acc": 93.25146, "loss_bbox": 0.23132, "loss_mask": 0.23117, "loss": 0.71972, "time": 0.20916} -{"mode": "train", "epoch": 9, "iter": 6100, "lr": 2e-05, "memory": 5941, "data_time": 0.03985, "loss_rpn_cls": 0.02542, "loss_rpn_bbox": 0.04898, "loss_cls": 0.19498, "acc": 92.92163, "loss_bbox": 0.23772, "loss_mask": 0.2333, "loss": 0.7404, "time": 0.214} -{"mode": "train", "epoch": 9, "iter": 6150, "lr": 2e-05, "memory": 5941, "data_time": 0.03807, "loss_rpn_cls": 0.02364, "loss_rpn_bbox": 0.04777, "loss_cls": 0.18622, "acc": 93.3125, "loss_bbox": 0.22085, "loss_mask": 0.22512, "loss": 0.7036, "time": 0.21196} -{"mode": "train", "epoch": 9, "iter": 6200, "lr": 2e-05, "memory": 5941, "data_time": 0.03129, "loss_rpn_cls": 0.02369, "loss_rpn_bbox": 0.04631, "loss_cls": 0.18686, "acc": 93.3623, "loss_bbox": 0.22289, "loss_mask": 0.22554, "loss": 0.7053, "time": 0.21009} -{"mode": "train", "epoch": 9, "iter": 6250, "lr": 2e-05, "memory": 5941, "data_time": 0.03223, "loss_rpn_cls": 0.02312, "loss_rpn_bbox": 0.04442, "loss_cls": 0.18288, "acc": 93.42627, "loss_bbox": 0.22216, "loss_mask": 0.22328, "loss": 0.69587, "time": 0.20939} -{"mode": "train", "epoch": 9, "iter": 6300, "lr": 2e-05, "memory": 5941, "data_time": 0.03278, "loss_rpn_cls": 0.02526, "loss_rpn_bbox": 0.05039, "loss_cls": 0.18924, "acc": 93.19507, "loss_bbox": 0.22589, "loss_mask": 0.22757, "loss": 0.71835, "time": 0.21632} -{"mode": "train", "epoch": 9, "iter": 6350, "lr": 2e-05, "memory": 5941, "data_time": 0.03566, "loss_rpn_cls": 0.02555, "loss_rpn_bbox": 0.04829, "loss_cls": 0.19383, "acc": 93.05127, "loss_bbox": 0.23225, "loss_mask": 0.23165, "loss": 0.73157, "time": 0.21808} -{"mode": "train", "epoch": 9, "iter": 6400, "lr": 2e-05, "memory": 5941, "data_time": 0.03362, "loss_rpn_cls": 0.02203, "loss_rpn_bbox": 0.04437, "loss_cls": 0.1825, "acc": 93.37012, "loss_bbox": 0.21448, "loss_mask": 0.22582, "loss": 0.6892, "time": 0.2078} -{"mode": "train", "epoch": 9, "iter": 6450, "lr": 2e-05, "memory": 5941, "data_time": 0.03727, "loss_rpn_cls": 0.0244, "loss_rpn_bbox": 0.04773, "loss_cls": 0.2004, "acc": 92.86865, "loss_bbox": 0.24357, "loss_mask": 0.23435, "loss": 0.75045, "time": 0.21318} -{"mode": "train", "epoch": 9, "iter": 6500, "lr": 2e-05, "memory": 5941, "data_time": 0.03542, "loss_rpn_cls": 0.02367, "loss_rpn_bbox": 0.04916, "loss_cls": 0.19712, "acc": 92.83862, "loss_bbox": 0.24171, "loss_mask": 0.23266, "loss": 0.74432, "time": 0.21154} -{"mode": "train", "epoch": 9, "iter": 6550, "lr": 2e-05, "memory": 5941, 
"data_time": 0.04075, "loss_rpn_cls": 0.02369, "loss_rpn_bbox": 0.04906, "loss_cls": 0.19207, "acc": 93.08325, "loss_bbox": 0.23184, "loss_mask": 0.23286, "loss": 0.72952, "time": 0.21235} -{"mode": "train", "epoch": 9, "iter": 6600, "lr": 2e-05, "memory": 5941, "data_time": 0.03305, "loss_rpn_cls": 0.02186, "loss_rpn_bbox": 0.04648, "loss_cls": 0.18779, "acc": 93.24292, "loss_bbox": 0.2288, "loss_mask": 0.22614, "loss": 0.71106, "time": 0.21764} -{"mode": "train", "epoch": 9, "iter": 6650, "lr": 2e-05, "memory": 5941, "data_time": 0.03193, "loss_rpn_cls": 0.02313, "loss_rpn_bbox": 0.04557, "loss_cls": 0.17754, "acc": 93.65747, "loss_bbox": 0.22052, "loss_mask": 0.22738, "loss": 0.69414, "time": 0.21813} -{"mode": "train", "epoch": 9, "iter": 6700, "lr": 2e-05, "memory": 5941, "data_time": 0.03417, "loss_rpn_cls": 0.0238, "loss_rpn_bbox": 0.04841, "loss_cls": 0.19123, "acc": 93.0769, "loss_bbox": 0.23271, "loss_mask": 0.22838, "loss": 0.72453, "time": 0.21989} -{"mode": "train", "epoch": 9, "iter": 6750, "lr": 2e-05, "memory": 5941, "data_time": 0.03582, "loss_rpn_cls": 0.0255, "loss_rpn_bbox": 0.04583, "loss_cls": 0.19286, "acc": 93.11108, "loss_bbox": 0.23385, "loss_mask": 0.23561, "loss": 0.73365, "time": 0.20794} -{"mode": "train", "epoch": 9, "iter": 6800, "lr": 2e-05, "memory": 5941, "data_time": 0.04033, "loss_rpn_cls": 0.02454, "loss_rpn_bbox": 0.04801, "loss_cls": 0.18449, "acc": 93.27905, "loss_bbox": 0.22345, "loss_mask": 0.22716, "loss": 0.70764, "time": 0.2097} -{"mode": "train", "epoch": 9, "iter": 6850, "lr": 2e-05, "memory": 5941, "data_time": 0.03833, "loss_rpn_cls": 0.02313, "loss_rpn_bbox": 0.04764, "loss_cls": 0.18559, "acc": 93.33667, "loss_bbox": 0.22943, "loss_mask": 0.22389, "loss": 0.70968, "time": 0.20239} -{"mode": "train", "epoch": 9, "iter": 6900, "lr": 2e-05, "memory": 5941, "data_time": 0.04046, "loss_rpn_cls": 0.02277, "loss_rpn_bbox": 0.04704, "loss_cls": 0.18761, "acc": 93.28589, "loss_bbox": 0.22782, "loss_mask": 0.22991, "loss": 0.71516, "time": 0.2122} -{"mode": "train", "epoch": 9, "iter": 6950, "lr": 2e-05, "memory": 5941, "data_time": 0.03509, "loss_rpn_cls": 0.02284, "loss_rpn_bbox": 0.04549, "loss_cls": 0.17995, "acc": 93.46118, "loss_bbox": 0.22315, "loss_mask": 0.22468, "loss": 0.69611, "time": 0.2083} -{"mode": "train", "epoch": 9, "iter": 7000, "lr": 2e-05, "memory": 5941, "data_time": 0.04129, "loss_rpn_cls": 0.02575, "loss_rpn_bbox": 0.04881, "loss_cls": 0.19999, "acc": 92.85156, "loss_bbox": 0.24014, "loss_mask": 0.23125, "loss": 0.74595, "time": 0.21683} -{"mode": "train", "epoch": 9, "iter": 7050, "lr": 2e-05, "memory": 5941, "data_time": 0.0306, "loss_rpn_cls": 0.02318, "loss_rpn_bbox": 0.04755, "loss_cls": 0.1947, "acc": 92.95508, "loss_bbox": 0.23603, "loss_mask": 0.23392, "loss": 0.73538, "time": 0.20679} -{"mode": "train", "epoch": 9, "iter": 7100, "lr": 2e-05, "memory": 5941, "data_time": 0.03985, "loss_rpn_cls": 0.02483, "loss_rpn_bbox": 0.0455, "loss_cls": 0.1865, "acc": 93.20898, "loss_bbox": 0.22993, "loss_mask": 0.22415, "loss": 0.71092, "time": 0.2103} -{"mode": "train", "epoch": 9, "iter": 7150, "lr": 2e-05, "memory": 5941, "data_time": 0.04695, "loss_rpn_cls": 0.02236, "loss_rpn_bbox": 0.04463, "loss_cls": 0.18174, "acc": 93.37061, "loss_bbox": 0.22463, "loss_mask": 0.22591, "loss": 0.69927, "time": 0.21439} -{"mode": "train", "epoch": 9, "iter": 7200, "lr": 2e-05, "memory": 5941, "data_time": 0.03746, "loss_rpn_cls": 0.0232, "loss_rpn_bbox": 0.04678, "loss_cls": 0.18605, "acc": 93.26685, "loss_bbox": 0.22607, "loss_mask": 
0.22935, "loss": 0.71145, "time": 0.20344} -{"mode": "train", "epoch": 9, "iter": 7250, "lr": 2e-05, "memory": 5941, "data_time": 0.03415, "loss_rpn_cls": 0.02449, "loss_rpn_bbox": 0.04755, "loss_cls": 0.19497, "acc": 92.96753, "loss_bbox": 0.23214, "loss_mask": 0.23104, "loss": 0.73018, "time": 0.20837} -{"mode": "train", "epoch": 9, "iter": 7300, "lr": 2e-05, "memory": 5941, "data_time": 0.03533, "loss_rpn_cls": 0.0221, "loss_rpn_bbox": 0.04633, "loss_cls": 0.1814, "acc": 93.34033, "loss_bbox": 0.22181, "loss_mask": 0.22391, "loss": 0.69555, "time": 0.21625} -{"mode": "val", "epoch": 9, "iter": 625, "lr": 2e-05, "bbox_mAP": 0.3885, "bbox_mAP_50": 0.6093, "bbox_mAP_75": 0.4215, "bbox_mAP_s": 0.2353, "bbox_mAP_m": 0.4207, "bbox_mAP_l": 0.4992, "bbox_mAP_copypaste": "0.3885 0.6093 0.4215 0.2353 0.4207 0.4992", "segm_mAP": 0.3651, "segm_mAP_50": 0.5807, "segm_mAP_75": 0.3912, "segm_mAP_s": 0.1821, "segm_mAP_m": 0.3916, "segm_mAP_l": 0.53, "segm_mAP_copypaste": "0.3651 0.5807 0.3912 0.1821 0.3916 0.5300"} -{"mode": "train", "epoch": 10, "iter": 50, "lr": 2e-05, "memory": 5941, "data_time": 0.10846, "loss_rpn_cls": 0.02303, "loss_rpn_bbox": 0.04634, "loss_cls": 0.18865, "acc": 93.11743, "loss_bbox": 0.2346, "loss_mask": 0.22774, "loss": 0.72036, "time": 0.31313} -{"mode": "train", "epoch": 10, "iter": 100, "lr": 2e-05, "memory": 5941, "data_time": 0.03423, "loss_rpn_cls": 0.02135, "loss_rpn_bbox": 0.04558, "loss_cls": 0.18033, "acc": 93.50171, "loss_bbox": 0.21654, "loss_mask": 0.22281, "loss": 0.68661, "time": 0.25996} -{"mode": "train", "epoch": 10, "iter": 150, "lr": 2e-05, "memory": 5941, "data_time": 0.04743, "loss_rpn_cls": 0.02351, "loss_rpn_bbox": 0.04535, "loss_cls": 0.18029, "acc": 93.4248, "loss_bbox": 0.22367, "loss_mask": 0.22937, "loss": 0.7022, "time": 0.27631} -{"mode": "train", "epoch": 10, "iter": 200, "lr": 2e-05, "memory": 5941, "data_time": 0.03879, "loss_rpn_cls": 0.02461, "loss_rpn_bbox": 0.0494, "loss_cls": 0.19139, "acc": 93.0791, "loss_bbox": 0.23246, "loss_mask": 0.23275, "loss": 0.7306, "time": 0.23782} -{"mode": "train", "epoch": 10, "iter": 250, "lr": 2e-05, "memory": 5941, "data_time": 0.03876, "loss_rpn_cls": 0.0247, "loss_rpn_bbox": 0.05105, "loss_cls": 0.19821, "acc": 92.85229, "loss_bbox": 0.24315, "loss_mask": 0.22986, "loss": 0.74697, "time": 0.27843} -{"mode": "train", "epoch": 10, "iter": 300, "lr": 2e-05, "memory": 5941, "data_time": 0.03736, "loss_rpn_cls": 0.02157, "loss_rpn_bbox": 0.04352, "loss_cls": 0.17643, "acc": 93.60449, "loss_bbox": 0.21849, "loss_mask": 0.22468, "loss": 0.6847, "time": 0.22445} -{"mode": "train", "epoch": 10, "iter": 350, "lr": 2e-05, "memory": 5941, "data_time": 0.04466, "loss_rpn_cls": 0.02327, "loss_rpn_bbox": 0.04734, "loss_cls": 0.18583, "acc": 93.21021, "loss_bbox": 0.22673, "loss_mask": 0.22622, "loss": 0.70938, "time": 0.27315} -{"mode": "train", "epoch": 10, "iter": 400, "lr": 2e-05, "memory": 5941, "data_time": 0.04393, "loss_rpn_cls": 0.02281, "loss_rpn_bbox": 0.04792, "loss_cls": 0.18999, "acc": 93.08008, "loss_bbox": 0.23208, "loss_mask": 0.22939, "loss": 0.72218, "time": 0.22729} -{"mode": "train", "epoch": 10, "iter": 450, "lr": 2e-05, "memory": 5941, "data_time": 0.04032, "loss_rpn_cls": 0.02204, "loss_rpn_bbox": 0.04646, "loss_cls": 0.17971, "acc": 93.42334, "loss_bbox": 0.21939, "loss_mask": 0.22745, "loss": 0.69504, "time": 0.2189} -{"mode": "train", "epoch": 10, "iter": 500, "lr": 2e-05, "memory": 5941, "data_time": 0.03932, "loss_rpn_cls": 0.02272, "loss_rpn_bbox": 0.04895, "loss_cls": 0.18534, "acc": 
93.302, "loss_bbox": 0.2278, "loss_mask": 0.22438, "loss": 0.70919, "time": 0.21048} -{"mode": "train", "epoch": 10, "iter": 550, "lr": 2e-05, "memory": 5941, "data_time": 0.04446, "loss_rpn_cls": 0.02207, "loss_rpn_bbox": 0.04591, "loss_cls": 0.18446, "acc": 93.36646, "loss_bbox": 0.22721, "loss_mask": 0.22341, "loss": 0.70306, "time": 0.21717} -{"mode": "train", "epoch": 10, "iter": 600, "lr": 2e-05, "memory": 5941, "data_time": 0.04255, "loss_rpn_cls": 0.02181, "loss_rpn_bbox": 0.048, "loss_cls": 0.18776, "acc": 93.17993, "loss_bbox": 0.23419, "loss_mask": 0.22828, "loss": 0.72004, "time": 0.22675} -{"mode": "train", "epoch": 10, "iter": 650, "lr": 2e-05, "memory": 5941, "data_time": 0.04318, "loss_rpn_cls": 0.02254, "loss_rpn_bbox": 0.04841, "loss_cls": 0.18681, "acc": 93.17529, "loss_bbox": 0.22824, "loss_mask": 0.22975, "loss": 0.71574, "time": 0.22226} -{"mode": "train", "epoch": 10, "iter": 700, "lr": 2e-05, "memory": 5941, "data_time": 0.03223, "loss_rpn_cls": 0.02172, "loss_rpn_bbox": 0.04482, "loss_cls": 0.18019, "acc": 93.38916, "loss_bbox": 0.21963, "loss_mask": 0.22554, "loss": 0.69189, "time": 0.25008} -{"mode": "train", "epoch": 10, "iter": 750, "lr": 2e-05, "memory": 5941, "data_time": 0.03169, "loss_rpn_cls": 0.02316, "loss_rpn_bbox": 0.04691, "loss_cls": 0.18723, "acc": 93.17944, "loss_bbox": 0.22618, "loss_mask": 0.22369, "loss": 0.70717, "time": 0.21548} -{"mode": "train", "epoch": 10, "iter": 800, "lr": 2e-05, "memory": 5941, "data_time": 0.04453, "loss_rpn_cls": 0.02436, "loss_rpn_bbox": 0.04896, "loss_cls": 0.18199, "acc": 93.39917, "loss_bbox": 0.2253, "loss_mask": 0.22466, "loss": 0.70527, "time": 0.21371} -{"mode": "train", "epoch": 10, "iter": 850, "lr": 2e-05, "memory": 5941, "data_time": 0.04366, "loss_rpn_cls": 0.02315, "loss_rpn_bbox": 0.0477, "loss_cls": 0.1809, "acc": 93.39966, "loss_bbox": 0.23022, "loss_mask": 0.22989, "loss": 0.71186, "time": 0.21745} -{"mode": "train", "epoch": 10, "iter": 900, "lr": 2e-05, "memory": 5941, "data_time": 0.03452, "loss_rpn_cls": 0.02288, "loss_rpn_bbox": 0.04591, "loss_cls": 0.1804, "acc": 93.52124, "loss_bbox": 0.21829, "loss_mask": 0.22213, "loss": 0.68961, "time": 0.20283} -{"mode": "train", "epoch": 10, "iter": 950, "lr": 2e-05, "memory": 5941, "data_time": 0.03528, "loss_rpn_cls": 0.02516, "loss_rpn_bbox": 0.04772, "loss_cls": 0.1861, "acc": 93.21411, "loss_bbox": 0.22885, "loss_mask": 0.22785, "loss": 0.71568, "time": 0.21999} -{"mode": "train", "epoch": 10, "iter": 1000, "lr": 2e-05, "memory": 5941, "data_time": 0.03138, "loss_rpn_cls": 0.02188, "loss_rpn_bbox": 0.04636, "loss_cls": 0.17779, "acc": 93.51294, "loss_bbox": 0.22404, "loss_mask": 0.22983, "loss": 0.69991, "time": 0.20647} -{"mode": "train", "epoch": 10, "iter": 1050, "lr": 2e-05, "memory": 5941, "data_time": 0.03551, "loss_rpn_cls": 0.02355, "loss_rpn_bbox": 0.04573, "loss_cls": 0.18273, "acc": 93.29443, "loss_bbox": 0.22964, "loss_mask": 0.22825, "loss": 0.7099, "time": 0.21237} -{"mode": "train", "epoch": 10, "iter": 1100, "lr": 2e-05, "memory": 5941, "data_time": 0.03966, "loss_rpn_cls": 0.02483, "loss_rpn_bbox": 0.05028, "loss_cls": 0.18755, "acc": 93.05835, "loss_bbox": 0.23817, "loss_mask": 0.23004, "loss": 0.73086, "time": 0.22302} -{"mode": "train", "epoch": 10, "iter": 1150, "lr": 2e-05, "memory": 5941, "data_time": 0.0427, "loss_rpn_cls": 0.02367, "loss_rpn_bbox": 0.04511, "loss_cls": 0.18268, "acc": 93.37012, "loss_bbox": 0.22747, "loss_mask": 0.23052, "loss": 0.70944, "time": 0.2198} -{"mode": "train", "epoch": 10, "iter": 1200, "lr": 
2e-05, "memory": 5941, "data_time": 0.03351, "loss_rpn_cls": 0.02336, "loss_rpn_bbox": 0.04841, "loss_cls": 0.19488, "acc": 92.95166, "loss_bbox": 0.23885, "loss_mask": 0.23004, "loss": 0.73554, "time": 0.22071} -{"mode": "train", "epoch": 10, "iter": 1250, "lr": 2e-05, "memory": 5941, "data_time": 0.03815, "loss_rpn_cls": 0.02248, "loss_rpn_bbox": 0.04876, "loss_cls": 0.1773, "acc": 93.52148, "loss_bbox": 0.21852, "loss_mask": 0.22163, "loss": 0.68868, "time": 0.21393} -{"mode": "train", "epoch": 10, "iter": 1300, "lr": 2e-05, "memory": 5941, "data_time": 0.03687, "loss_rpn_cls": 0.02118, "loss_rpn_bbox": 0.04563, "loss_cls": 0.17602, "acc": 93.55615, "loss_bbox": 0.22251, "loss_mask": 0.21983, "loss": 0.68517, "time": 0.25597} -{"mode": "train", "epoch": 10, "iter": 1350, "lr": 2e-05, "memory": 5941, "data_time": 0.03882, "loss_rpn_cls": 0.02171, "loss_rpn_bbox": 0.04811, "loss_cls": 0.18753, "acc": 93.17896, "loss_bbox": 0.23736, "loss_mask": 0.23464, "loss": 0.72936, "time": 0.21294} -{"mode": "train", "epoch": 10, "iter": 1400, "lr": 2e-05, "memory": 5941, "data_time": 0.03648, "loss_rpn_cls": 0.0248, "loss_rpn_bbox": 0.04718, "loss_cls": 0.19264, "acc": 93.07251, "loss_bbox": 0.23167, "loss_mask": 0.22953, "loss": 0.72582, "time": 0.22749} -{"mode": "train", "epoch": 10, "iter": 1450, "lr": 2e-05, "memory": 5941, "data_time": 0.03752, "loss_rpn_cls": 0.02444, "loss_rpn_bbox": 0.04541, "loss_cls": 0.18043, "acc": 93.42383, "loss_bbox": 0.22422, "loss_mask": 0.22768, "loss": 0.7022, "time": 0.21808} -{"mode": "train", "epoch": 10, "iter": 1500, "lr": 2e-05, "memory": 5941, "data_time": 0.03661, "loss_rpn_cls": 0.02299, "loss_rpn_bbox": 0.04806, "loss_cls": 0.18248, "acc": 93.38159, "loss_bbox": 0.22758, "loss_mask": 0.22591, "loss": 0.70703, "time": 0.20722} -{"mode": "train", "epoch": 10, "iter": 1550, "lr": 2e-05, "memory": 5941, "data_time": 0.03184, "loss_rpn_cls": 0.02393, "loss_rpn_bbox": 0.04884, "loss_cls": 0.18564, "acc": 93.21802, "loss_bbox": 0.23233, "loss_mask": 0.23077, "loss": 0.72151, "time": 0.20997} -{"mode": "train", "epoch": 10, "iter": 1600, "lr": 2e-05, "memory": 5941, "data_time": 0.03393, "loss_rpn_cls": 0.02298, "loss_rpn_bbox": 0.04701, "loss_cls": 0.18874, "acc": 93.22021, "loss_bbox": 0.2329, "loss_mask": 0.2306, "loss": 0.72223, "time": 0.20193} -{"mode": "train", "epoch": 10, "iter": 1650, "lr": 2e-05, "memory": 5941, "data_time": 0.03339, "loss_rpn_cls": 0.02237, "loss_rpn_bbox": 0.04855, "loss_cls": 0.18543, "acc": 93.21338, "loss_bbox": 0.23271, "loss_mask": 0.23258, "loss": 0.72165, "time": 0.22141} -{"mode": "train", "epoch": 10, "iter": 1700, "lr": 2e-05, "memory": 5941, "data_time": 0.04137, "loss_rpn_cls": 0.02235, "loss_rpn_bbox": 0.04682, "loss_cls": 0.18312, "acc": 93.28833, "loss_bbox": 0.23196, "loss_mask": 0.22907, "loss": 0.71333, "time": 0.21192} -{"mode": "train", "epoch": 10, "iter": 1750, "lr": 2e-05, "memory": 5941, "data_time": 0.03186, "loss_rpn_cls": 0.02068, "loss_rpn_bbox": 0.04443, "loss_cls": 0.17533, "acc": 93.62769, "loss_bbox": 0.21553, "loss_mask": 0.22661, "loss": 0.68258, "time": 0.24082} -{"mode": "train", "epoch": 10, "iter": 1800, "lr": 2e-05, "memory": 5941, "data_time": 0.03339, "loss_rpn_cls": 0.02071, "loss_rpn_bbox": 0.04546, "loss_cls": 0.18247, "acc": 93.44287, "loss_bbox": 0.22092, "loss_mask": 0.22435, "loss": 0.6939, "time": 0.21899} -{"mode": "train", "epoch": 10, "iter": 1850, "lr": 2e-05, "memory": 5941, "data_time": 0.03385, "loss_rpn_cls": 0.02287, "loss_rpn_bbox": 0.04589, "loss_cls": 0.18021, "acc": 
93.39136, "loss_bbox": 0.22315, "loss_mask": 0.22428, "loss": 0.69641, "time": 0.20731} -{"mode": "train", "epoch": 10, "iter": 1900, "lr": 2e-05, "memory": 5941, "data_time": 0.03658, "loss_rpn_cls": 0.02244, "loss_rpn_bbox": 0.04547, "loss_cls": 0.17845, "acc": 93.46875, "loss_bbox": 0.22305, "loss_mask": 0.22, "loss": 0.68941, "time": 0.20512} -{"mode": "train", "epoch": 10, "iter": 1950, "lr": 2e-05, "memory": 5941, "data_time": 0.03121, "loss_rpn_cls": 0.02184, "loss_rpn_bbox": 0.04611, "loss_cls": 0.18546, "acc": 93.17676, "loss_bbox": 0.23249, "loss_mask": 0.22826, "loss": 0.71416, "time": 0.21932} -{"mode": "train", "epoch": 10, "iter": 2000, "lr": 2e-05, "memory": 5941, "data_time": 0.03482, "loss_rpn_cls": 0.0211, "loss_rpn_bbox": 0.04569, "loss_cls": 0.18762, "acc": 93.36865, "loss_bbox": 0.22585, "loss_mask": 0.22629, "loss": 0.70656, "time": 0.20705} -{"mode": "train", "epoch": 10, "iter": 2050, "lr": 2e-05, "memory": 5941, "data_time": 0.03872, "loss_rpn_cls": 0.02197, "loss_rpn_bbox": 0.04472, "loss_cls": 0.17845, "acc": 93.51489, "loss_bbox": 0.22429, "loss_mask": 0.22263, "loss": 0.69206, "time": 0.20632} -{"mode": "train", "epoch": 10, "iter": 2100, "lr": 2e-05, "memory": 5941, "data_time": 0.03575, "loss_rpn_cls": 0.02254, "loss_rpn_bbox": 0.05081, "loss_cls": 0.18547, "acc": 93.19312, "loss_bbox": 0.22826, "loss_mask": 0.22439, "loss": 0.71147, "time": 0.21186} -{"mode": "train", "epoch": 10, "iter": 2150, "lr": 2e-05, "memory": 5941, "data_time": 0.03573, "loss_rpn_cls": 0.02202, "loss_rpn_bbox": 0.04717, "loss_cls": 0.18183, "acc": 93.35791, "loss_bbox": 0.22367, "loss_mask": 0.22376, "loss": 0.69845, "time": 0.21744} -{"mode": "train", "epoch": 10, "iter": 2200, "lr": 2e-05, "memory": 5941, "data_time": 0.03487, "loss_rpn_cls": 0.02407, "loss_rpn_bbox": 0.05056, "loss_cls": 0.18948, "acc": 93.06128, "loss_bbox": 0.23455, "loss_mask": 0.22647, "loss": 0.72514, "time": 0.22978} -{"mode": "train", "epoch": 10, "iter": 2250, "lr": 2e-05, "memory": 5941, "data_time": 0.03988, "loss_rpn_cls": 0.02465, "loss_rpn_bbox": 0.04844, "loss_cls": 0.18602, "acc": 93.28857, "loss_bbox": 0.22778, "loss_mask": 0.22533, "loss": 0.71223, "time": 0.21399} -{"mode": "train", "epoch": 10, "iter": 2300, "lr": 2e-05, "memory": 5941, "data_time": 0.03734, "loss_rpn_cls": 0.02212, "loss_rpn_bbox": 0.04539, "loss_cls": 0.18813, "acc": 93.15942, "loss_bbox": 0.23221, "loss_mask": 0.22803, "loss": 0.71588, "time": 0.201} -{"mode": "train", "epoch": 10, "iter": 2350, "lr": 2e-05, "memory": 5941, "data_time": 0.04366, "loss_rpn_cls": 0.02314, "loss_rpn_bbox": 0.04865, "loss_cls": 0.18684, "acc": 93.18848, "loss_bbox": 0.23062, "loss_mask": 0.23243, "loss": 0.72169, "time": 0.21291} -{"mode": "train", "epoch": 10, "iter": 2400, "lr": 2e-05, "memory": 5941, "data_time": 0.03195, "loss_rpn_cls": 0.02275, "loss_rpn_bbox": 0.04506, "loss_cls": 0.18198, "acc": 93.31519, "loss_bbox": 0.22539, "loss_mask": 0.22864, "loss": 0.70382, "time": 0.21499} -{"mode": "train", "epoch": 10, "iter": 2450, "lr": 2e-05, "memory": 5941, "data_time": 0.04515, "loss_rpn_cls": 0.02439, "loss_rpn_bbox": 0.04734, "loss_cls": 0.18846, "acc": 93.20459, "loss_bbox": 0.23366, "loss_mask": 0.23248, "loss": 0.72632, "time": 0.22409} -{"mode": "train", "epoch": 10, "iter": 2500, "lr": 2e-05, "memory": 5941, "data_time": 0.03541, "loss_rpn_cls": 0.02177, "loss_rpn_bbox": 0.04924, "loss_cls": 0.18235, "acc": 93.36304, "loss_bbox": 0.22701, "loss_mask": 0.22906, "loss": 0.70943, "time": 0.22322} -{"mode": "train", "epoch": 10, "iter": 
2550, "lr": 2e-05, "memory": 5941, "data_time": 0.03389, "loss_rpn_cls": 0.02424, "loss_rpn_bbox": 0.04819, "loss_cls": 0.18272, "acc": 93.37085, "loss_bbox": 0.22811, "loss_mask": 0.22794, "loss": 0.7112, "time": 0.21016} -{"mode": "train", "epoch": 10, "iter": 2600, "lr": 2e-05, "memory": 5941, "data_time": 0.03259, "loss_rpn_cls": 0.02121, "loss_rpn_bbox": 0.04514, "loss_cls": 0.18617, "acc": 93.2439, "loss_bbox": 0.22796, "loss_mask": 0.22621, "loss": 0.70669, "time": 0.20818} -{"mode": "train", "epoch": 10, "iter": 2650, "lr": 2e-05, "memory": 5941, "data_time": 0.0365, "loss_rpn_cls": 0.02201, "loss_rpn_bbox": 0.04732, "loss_cls": 0.18802, "acc": 93.2063, "loss_bbox": 0.22579, "loss_mask": 0.22563, "loss": 0.70877, "time": 0.22381} -{"mode": "train", "epoch": 10, "iter": 2700, "lr": 2e-05, "memory": 5941, "data_time": 0.04338, "loss_rpn_cls": 0.02278, "loss_rpn_bbox": 0.04671, "loss_cls": 0.17756, "acc": 93.52905, "loss_bbox": 0.22173, "loss_mask": 0.22644, "loss": 0.69522, "time": 0.22012} -{"mode": "train", "epoch": 10, "iter": 2750, "lr": 2e-05, "memory": 5941, "data_time": 0.04292, "loss_rpn_cls": 0.02363, "loss_rpn_bbox": 0.04831, "loss_cls": 0.18534, "acc": 93.26343, "loss_bbox": 0.22899, "loss_mask": 0.23387, "loss": 0.72015, "time": 0.21946} -{"mode": "train", "epoch": 10, "iter": 2800, "lr": 2e-05, "memory": 5941, "data_time": 0.03931, "loss_rpn_cls": 0.02213, "loss_rpn_bbox": 0.04736, "loss_cls": 0.18819, "acc": 93.20142, "loss_bbox": 0.22824, "loss_mask": 0.22606, "loss": 0.71199, "time": 0.20661} -{"mode": "train", "epoch": 10, "iter": 2850, "lr": 2e-05, "memory": 5941, "data_time": 0.0383, "loss_rpn_cls": 0.023, "loss_rpn_bbox": 0.04756, "loss_cls": 0.18746, "acc": 93.17944, "loss_bbox": 0.22966, "loss_mask": 0.22683, "loss": 0.71451, "time": 0.21646} -{"mode": "train", "epoch": 10, "iter": 2900, "lr": 2e-05, "memory": 5941, "data_time": 0.03527, "loss_rpn_cls": 0.02255, "loss_rpn_bbox": 0.04497, "loss_cls": 0.17988, "acc": 93.48584, "loss_bbox": 0.22097, "loss_mask": 0.22623, "loss": 0.69459, "time": 0.20506} -{"mode": "train", "epoch": 10, "iter": 2950, "lr": 2e-05, "memory": 5941, "data_time": 0.03641, "loss_rpn_cls": 0.02291, "loss_rpn_bbox": 0.04884, "loss_cls": 0.18604, "acc": 93.22974, "loss_bbox": 0.22599, "loss_mask": 0.22885, "loss": 0.71263, "time": 0.21474} -{"mode": "train", "epoch": 10, "iter": 3000, "lr": 2e-05, "memory": 5941, "data_time": 0.02821, "loss_rpn_cls": 0.02085, "loss_rpn_bbox": 0.04488, "loss_cls": 0.17697, "acc": 93.59302, "loss_bbox": 0.21951, "loss_mask": 0.22284, "loss": 0.68505, "time": 0.20803} -{"mode": "train", "epoch": 10, "iter": 3050, "lr": 2e-05, "memory": 5941, "data_time": 0.03957, "loss_rpn_cls": 0.02544, "loss_rpn_bbox": 0.04955, "loss_cls": 0.19585, "acc": 92.8335, "loss_bbox": 0.23955, "loss_mask": 0.23286, "loss": 0.74325, "time": 0.21802} -{"mode": "train", "epoch": 10, "iter": 3100, "lr": 2e-05, "memory": 5941, "data_time": 0.04044, "loss_rpn_cls": 0.02382, "loss_rpn_bbox": 0.04605, "loss_cls": 0.18987, "acc": 93.11157, "loss_bbox": 0.23371, "loss_mask": 0.22693, "loss": 0.72037, "time": 0.22589} -{"mode": "train", "epoch": 10, "iter": 3150, "lr": 2e-05, "memory": 5941, "data_time": 0.03988, "loss_rpn_cls": 0.023, "loss_rpn_bbox": 0.04485, "loss_cls": 0.18252, "acc": 93.42725, "loss_bbox": 0.2234, "loss_mask": 0.22629, "loss": 0.70006, "time": 0.21881} -{"mode": "train", "epoch": 10, "iter": 3200, "lr": 2e-05, "memory": 5941, "data_time": 0.04549, "loss_rpn_cls": 0.02574, "loss_rpn_bbox": 0.04831, "loss_cls": 0.19128, 
"acc": 93.0061, "loss_bbox": 0.23415, "loss_mask": 0.23247, "loss": 0.73194, "time": 0.21616} -{"mode": "train", "epoch": 10, "iter": 3250, "lr": 2e-05, "memory": 5941, "data_time": 0.04371, "loss_rpn_cls": 0.0236, "loss_rpn_bbox": 0.04916, "loss_cls": 0.18864, "acc": 93.14258, "loss_bbox": 0.23613, "loss_mask": 0.23115, "loss": 0.72868, "time": 0.21454} -{"mode": "train", "epoch": 10, "iter": 3300, "lr": 2e-05, "memory": 5941, "data_time": 0.03556, "loss_rpn_cls": 0.0215, "loss_rpn_bbox": 0.04233, "loss_cls": 0.17873, "acc": 93.54297, "loss_bbox": 0.2145, "loss_mask": 0.21795, "loss": 0.67501, "time": 0.20751} -{"mode": "train", "epoch": 10, "iter": 3350, "lr": 2e-05, "memory": 5941, "data_time": 0.03848, "loss_rpn_cls": 0.02476, "loss_rpn_bbox": 0.0446, "loss_cls": 0.18052, "acc": 93.47559, "loss_bbox": 0.22258, "loss_mask": 0.22204, "loss": 0.6945, "time": 0.22043} -{"mode": "train", "epoch": 10, "iter": 3400, "lr": 2e-05, "memory": 5941, "data_time": 0.0474, "loss_rpn_cls": 0.02225, "loss_rpn_bbox": 0.04537, "loss_cls": 0.17778, "acc": 93.48193, "loss_bbox": 0.22348, "loss_mask": 0.2278, "loss": 0.69668, "time": 0.21867} -{"mode": "train", "epoch": 10, "iter": 3450, "lr": 2e-05, "memory": 5941, "data_time": 0.05054, "loss_rpn_cls": 0.02389, "loss_rpn_bbox": 0.04779, "loss_cls": 0.19068, "acc": 93.11694, "loss_bbox": 0.23391, "loss_mask": 0.2312, "loss": 0.72746, "time": 0.22553} -{"mode": "train", "epoch": 10, "iter": 3500, "lr": 2e-05, "memory": 5941, "data_time": 0.04529, "loss_rpn_cls": 0.02276, "loss_rpn_bbox": 0.04565, "loss_cls": 0.18304, "acc": 93.21265, "loss_bbox": 0.22947, "loss_mask": 0.22674, "loss": 0.70765, "time": 0.21982} -{"mode": "train", "epoch": 10, "iter": 3550, "lr": 2e-05, "memory": 5941, "data_time": 0.03621, "loss_rpn_cls": 0.02265, "loss_rpn_bbox": 0.04661, "loss_cls": 0.17522, "acc": 93.64893, "loss_bbox": 0.21582, "loss_mask": 0.22243, "loss": 0.68272, "time": 0.20783} -{"mode": "train", "epoch": 10, "iter": 3600, "lr": 2e-05, "memory": 5941, "data_time": 0.03915, "loss_rpn_cls": 0.0223, "loss_rpn_bbox": 0.04704, "loss_cls": 0.18282, "acc": 93.41919, "loss_bbox": 0.22131, "loss_mask": 0.22245, "loss": 0.69591, "time": 0.2142} -{"mode": "train", "epoch": 10, "iter": 3650, "lr": 2e-05, "memory": 5941, "data_time": 0.03615, "loss_rpn_cls": 0.02207, "loss_rpn_bbox": 0.04614, "loss_cls": 0.18444, "acc": 93.3125, "loss_bbox": 0.22542, "loss_mask": 0.22551, "loss": 0.70357, "time": 0.21515} -{"mode": "train", "epoch": 10, "iter": 3700, "lr": 2e-05, "memory": 5941, "data_time": 0.04166, "loss_rpn_cls": 0.02491, "loss_rpn_bbox": 0.04767, "loss_cls": 0.18748, "acc": 93.03638, "loss_bbox": 0.23744, "loss_mask": 0.23302, "loss": 0.73052, "time": 0.2128} -{"mode": "train", "epoch": 10, "iter": 3750, "lr": 2e-05, "memory": 5941, "data_time": 0.03697, "loss_rpn_cls": 0.02531, "loss_rpn_bbox": 0.04858, "loss_cls": 0.19071, "acc": 93.04126, "loss_bbox": 0.23051, "loss_mask": 0.22457, "loss": 0.71969, "time": 0.21724} -{"mode": "train", "epoch": 10, "iter": 3800, "lr": 2e-05, "memory": 5941, "data_time": 0.04037, "loss_rpn_cls": 0.02228, "loss_rpn_bbox": 0.04718, "loss_cls": 0.18591, "acc": 93.27026, "loss_bbox": 0.2275, "loss_mask": 0.23062, "loss": 0.71348, "time": 0.20956} -{"mode": "train", "epoch": 10, "iter": 3850, "lr": 2e-05, "memory": 5941, "data_time": 0.03789, "loss_rpn_cls": 0.02456, "loss_rpn_bbox": 0.04758, "loss_cls": 0.18703, "acc": 93.23804, "loss_bbox": 0.2301, "loss_mask": 0.2257, "loss": 0.71497, "time": 0.21941} -{"mode": "train", "epoch": 10, "iter": 
3900, "lr": 2e-05, "memory": 5941, "data_time": 0.04239, "loss_rpn_cls": 0.02104, "loss_rpn_bbox": 0.04271, "loss_cls": 0.1805, "acc": 93.47827, "loss_bbox": 0.22177, "loss_mask": 0.22315, "loss": 0.68917, "time": 0.20479} -{"mode": "train", "epoch": 10, "iter": 3950, "lr": 2e-05, "memory": 5941, "data_time": 0.037, "loss_rpn_cls": 0.02274, "loss_rpn_bbox": 0.04294, "loss_cls": 0.17788, "acc": 93.5979, "loss_bbox": 0.21622, "loss_mask": 0.22565, "loss": 0.68543, "time": 0.20334} -{"mode": "train", "epoch": 10, "iter": 4000, "lr": 2e-05, "memory": 5941, "data_time": 0.0362, "loss_rpn_cls": 0.02114, "loss_rpn_bbox": 0.04607, "loss_cls": 0.18201, "acc": 93.46118, "loss_bbox": 0.22392, "loss_mask": 0.22538, "loss": 0.69851, "time": 0.21202} -{"mode": "train", "epoch": 10, "iter": 4050, "lr": 2e-05, "memory": 5941, "data_time": 0.03796, "loss_rpn_cls": 0.02176, "loss_rpn_bbox": 0.04631, "loss_cls": 0.18803, "acc": 93.14526, "loss_bbox": 0.22691, "loss_mask": 0.22678, "loss": 0.70979, "time": 0.2135} -{"mode": "train", "epoch": 10, "iter": 4100, "lr": 2e-05, "memory": 5941, "data_time": 0.0386, "loss_rpn_cls": 0.02114, "loss_rpn_bbox": 0.04348, "loss_cls": 0.1737, "acc": 93.74243, "loss_bbox": 0.21069, "loss_mask": 0.21645, "loss": 0.66547, "time": 0.20333} -{"mode": "train", "epoch": 10, "iter": 4150, "lr": 2e-05, "memory": 5941, "data_time": 0.03389, "loss_rpn_cls": 0.023, "loss_rpn_bbox": 0.04451, "loss_cls": 0.18172, "acc": 93.39941, "loss_bbox": 0.2208, "loss_mask": 0.22311, "loss": 0.69314, "time": 0.21485} -{"mode": "train", "epoch": 10, "iter": 4200, "lr": 2e-05, "memory": 5941, "data_time": 0.03727, "loss_rpn_cls": 0.02345, "loss_rpn_bbox": 0.04935, "loss_cls": 0.17871, "acc": 93.47925, "loss_bbox": 0.22341, "loss_mask": 0.22452, "loss": 0.69944, "time": 0.21435} -{"mode": "train", "epoch": 10, "iter": 4250, "lr": 2e-05, "memory": 5941, "data_time": 0.03918, "loss_rpn_cls": 0.02387, "loss_rpn_bbox": 0.04892, "loss_cls": 0.19129, "acc": 93.03613, "loss_bbox": 0.23743, "loss_mask": 0.23162, "loss": 0.73312, "time": 0.21339} -{"mode": "train", "epoch": 10, "iter": 4300, "lr": 2e-05, "memory": 5941, "data_time": 0.03822, "loss_rpn_cls": 0.02195, "loss_rpn_bbox": 0.04423, "loss_cls": 0.17924, "acc": 93.43115, "loss_bbox": 0.22091, "loss_mask": 0.226, "loss": 0.69232, "time": 0.20969} -{"mode": "train", "epoch": 10, "iter": 4350, "lr": 2e-05, "memory": 5941, "data_time": 0.03276, "loss_rpn_cls": 0.02412, "loss_rpn_bbox": 0.04928, "loss_cls": 0.18863, "acc": 93.12769, "loss_bbox": 0.23035, "loss_mask": 0.22953, "loss": 0.72191, "time": 0.21103} -{"mode": "train", "epoch": 10, "iter": 4400, "lr": 2e-05, "memory": 5941, "data_time": 0.04144, "loss_rpn_cls": 0.02212, "loss_rpn_bbox": 0.04652, "loss_cls": 0.17939, "acc": 93.5022, "loss_bbox": 0.22359, "loss_mask": 0.22829, "loss": 0.69991, "time": 0.20748} -{"mode": "train", "epoch": 10, "iter": 4450, "lr": 2e-05, "memory": 5941, "data_time": 0.03944, "loss_rpn_cls": 0.0212, "loss_rpn_bbox": 0.04517, "loss_cls": 0.17845, "acc": 93.48755, "loss_bbox": 0.21817, "loss_mask": 0.22027, "loss": 0.68325, "time": 0.20654} -{"mode": "train", "epoch": 10, "iter": 4500, "lr": 2e-05, "memory": 5941, "data_time": 0.03258, "loss_rpn_cls": 0.02259, "loss_rpn_bbox": 0.04567, "loss_cls": 0.18078, "acc": 93.57349, "loss_bbox": 0.2197, "loss_mask": 0.22435, "loss": 0.69308, "time": 0.21727} -{"mode": "train", "epoch": 10, "iter": 4550, "lr": 2e-05, "memory": 5941, "data_time": 0.03588, "loss_rpn_cls": 0.02371, "loss_rpn_bbox": 0.04798, "loss_cls": 0.18676, "acc": 
93.13062, "loss_bbox": 0.23111, "loss_mask": 0.22804, "loss": 0.7176, "time": 0.21156} -{"mode": "train", "epoch": 10, "iter": 4600, "lr": 2e-05, "memory": 5941, "data_time": 0.04347, "loss_rpn_cls": 0.02427, "loss_rpn_bbox": 0.05093, "loss_cls": 0.19245, "acc": 92.93506, "loss_bbox": 0.23479, "loss_mask": 0.22835, "loss": 0.73079, "time": 0.2184} -{"mode": "train", "epoch": 10, "iter": 4650, "lr": 2e-05, "memory": 5941, "data_time": 0.03642, "loss_rpn_cls": 0.02204, "loss_rpn_bbox": 0.04794, "loss_cls": 0.18359, "acc": 93.29517, "loss_bbox": 0.22513, "loss_mask": 0.22871, "loss": 0.7074, "time": 0.21585} -{"mode": "train", "epoch": 10, "iter": 4700, "lr": 2e-05, "memory": 5941, "data_time": 0.03928, "loss_rpn_cls": 0.02307, "loss_rpn_bbox": 0.04658, "loss_cls": 0.1933, "acc": 92.98584, "loss_bbox": 0.23299, "loss_mask": 0.22888, "loss": 0.72483, "time": 0.22102} -{"mode": "train", "epoch": 10, "iter": 4750, "lr": 2e-05, "memory": 5941, "data_time": 0.03802, "loss_rpn_cls": 0.02256, "loss_rpn_bbox": 0.04614, "loss_cls": 0.18314, "acc": 93.20874, "loss_bbox": 0.22797, "loss_mask": 0.22675, "loss": 0.70656, "time": 0.21495} -{"mode": "train", "epoch": 10, "iter": 4800, "lr": 2e-05, "memory": 5941, "data_time": 0.0347, "loss_rpn_cls": 0.02376, "loss_rpn_bbox": 0.04817, "loss_cls": 0.19184, "acc": 92.98901, "loss_bbox": 0.23087, "loss_mask": 0.22739, "loss": 0.72202, "time": 0.21612} -{"mode": "train", "epoch": 10, "iter": 4850, "lr": 2e-05, "memory": 5941, "data_time": 0.03656, "loss_rpn_cls": 0.02178, "loss_rpn_bbox": 0.04622, "loss_cls": 0.18616, "acc": 93.1582, "loss_bbox": 0.23265, "loss_mask": 0.22707, "loss": 0.71388, "time": 0.21557} -{"mode": "train", "epoch": 10, "iter": 4900, "lr": 2e-05, "memory": 5941, "data_time": 0.03555, "loss_rpn_cls": 0.02468, "loss_rpn_bbox": 0.04751, "loss_cls": 0.18711, "acc": 93.10767, "loss_bbox": 0.23107, "loss_mask": 0.22706, "loss": 0.71743, "time": 0.21589} -{"mode": "train", "epoch": 10, "iter": 4950, "lr": 2e-05, "memory": 5941, "data_time": 0.03488, "loss_rpn_cls": 0.02337, "loss_rpn_bbox": 0.04759, "loss_cls": 0.18618, "acc": 93.21997, "loss_bbox": 0.23077, "loss_mask": 0.22328, "loss": 0.71119, "time": 0.2081} -{"mode": "train", "epoch": 10, "iter": 5000, "lr": 2e-05, "memory": 5941, "data_time": 0.03326, "loss_rpn_cls": 0.02335, "loss_rpn_bbox": 0.04806, "loss_cls": 0.18877, "acc": 93.21094, "loss_bbox": 0.23008, "loss_mask": 0.2313, "loss": 0.72156, "time": 0.2123} -{"mode": "train", "epoch": 10, "iter": 5050, "lr": 2e-05, "memory": 5941, "data_time": 0.03731, "loss_rpn_cls": 0.02353, "loss_rpn_bbox": 0.04536, "loss_cls": 0.18885, "acc": 93.16064, "loss_bbox": 0.22748, "loss_mask": 0.22545, "loss": 0.71066, "time": 0.21488} -{"mode": "train", "epoch": 10, "iter": 5100, "lr": 2e-05, "memory": 5941, "data_time": 0.03892, "loss_rpn_cls": 0.0233, "loss_rpn_bbox": 0.04774, "loss_cls": 0.18598, "acc": 93.19019, "loss_bbox": 0.22716, "loss_mask": 0.23007, "loss": 0.71424, "time": 0.21585} -{"mode": "train", "epoch": 10, "iter": 5150, "lr": 2e-05, "memory": 5941, "data_time": 0.03412, "loss_rpn_cls": 0.02321, "loss_rpn_bbox": 0.04752, "loss_cls": 0.18238, "acc": 93.42432, "loss_bbox": 0.22326, "loss_mask": 0.22355, "loss": 0.69992, "time": 0.21271} -{"mode": "train", "epoch": 10, "iter": 5200, "lr": 2e-05, "memory": 5941, "data_time": 0.03472, "loss_rpn_cls": 0.02181, "loss_rpn_bbox": 0.04487, "loss_cls": 0.18419, "acc": 93.29956, "loss_bbox": 0.22637, "loss_mask": 0.22437, "loss": 0.70161, "time": 0.20648} -{"mode": "train", "epoch": 10, "iter": 
5250, "lr": 2e-05, "memory": 5941, "data_time": 0.03417, "loss_rpn_cls": 0.02418, "loss_rpn_bbox": 0.04809, "loss_cls": 0.18631, "acc": 93.16504, "loss_bbox": 0.23378, "loss_mask": 0.22758, "loss": 0.71994, "time": 0.21174} -{"mode": "train", "epoch": 10, "iter": 5300, "lr": 2e-05, "memory": 5941, "data_time": 0.03192, "loss_rpn_cls": 0.02362, "loss_rpn_bbox": 0.04451, "loss_cls": 0.18556, "acc": 93.20483, "loss_bbox": 0.23207, "loss_mask": 0.22985, "loss": 0.71561, "time": 0.21124} -{"mode": "train", "epoch": 10, "iter": 5350, "lr": 2e-05, "memory": 5941, "data_time": 0.03104, "loss_rpn_cls": 0.02131, "loss_rpn_bbox": 0.04539, "loss_cls": 0.18546, "acc": 93.24731, "loss_bbox": 0.22738, "loss_mask": 0.2266, "loss": 0.70614, "time": 0.20561} -{"mode": "train", "epoch": 10, "iter": 5400, "lr": 2e-05, "memory": 5941, "data_time": 0.04124, "loss_rpn_cls": 0.02375, "loss_rpn_bbox": 0.04943, "loss_cls": 0.18809, "acc": 93.06519, "loss_bbox": 0.23232, "loss_mask": 0.22766, "loss": 0.72124, "time": 0.21685} -{"mode": "train", "epoch": 10, "iter": 5450, "lr": 2e-05, "memory": 5941, "data_time": 0.032, "loss_rpn_cls": 0.02451, "loss_rpn_bbox": 0.04872, "loss_cls": 0.18817, "acc": 93.09546, "loss_bbox": 0.2362, "loss_mask": 0.23021, "loss": 0.7278, "time": 0.21047} -{"mode": "train", "epoch": 10, "iter": 5500, "lr": 2e-05, "memory": 5941, "data_time": 0.03553, "loss_rpn_cls": 0.02471, "loss_rpn_bbox": 0.04786, "loss_cls": 0.19187, "acc": 93.04126, "loss_bbox": 0.23306, "loss_mask": 0.2346, "loss": 0.73209, "time": 0.21998} -{"mode": "train", "epoch": 10, "iter": 5550, "lr": 2e-05, "memory": 5941, "data_time": 0.03532, "loss_rpn_cls": 0.02159, "loss_rpn_bbox": 0.0425, "loss_cls": 0.17883, "acc": 93.56763, "loss_bbox": 0.21567, "loss_mask": 0.22444, "loss": 0.68303, "time": 0.20813} -{"mode": "train", "epoch": 10, "iter": 5600, "lr": 2e-05, "memory": 5941, "data_time": 0.03557, "loss_rpn_cls": 0.0221, "loss_rpn_bbox": 0.04475, "loss_cls": 0.17486, "acc": 93.59473, "loss_bbox": 0.22045, "loss_mask": 0.22843, "loss": 0.69059, "time": 0.21297} -{"mode": "train", "epoch": 10, "iter": 5650, "lr": 2e-05, "memory": 5941, "data_time": 0.03734, "loss_rpn_cls": 0.0207, "loss_rpn_bbox": 0.04473, "loss_cls": 0.17741, "acc": 93.43604, "loss_bbox": 0.22174, "loss_mask": 0.22493, "loss": 0.68951, "time": 0.20083} -{"mode": "train", "epoch": 10, "iter": 5700, "lr": 2e-05, "memory": 5941, "data_time": 0.03323, "loss_rpn_cls": 0.02502, "loss_rpn_bbox": 0.04871, "loss_cls": 0.1915, "acc": 93.15283, "loss_bbox": 0.2316, "loss_mask": 0.23088, "loss": 0.72771, "time": 0.22429} -{"mode": "train", "epoch": 10, "iter": 5750, "lr": 2e-05, "memory": 5941, "data_time": 0.03909, "loss_rpn_cls": 0.0203, "loss_rpn_bbox": 0.04605, "loss_cls": 0.18075, "acc": 93.42139, "loss_bbox": 0.22129, "loss_mask": 0.22219, "loss": 0.69058, "time": 0.2114} -{"mode": "train", "epoch": 10, "iter": 5800, "lr": 2e-05, "memory": 5941, "data_time": 0.03635, "loss_rpn_cls": 0.0216, "loss_rpn_bbox": 0.04513, "loss_cls": 0.18072, "acc": 93.44824, "loss_bbox": 0.22156, "loss_mask": 0.21976, "loss": 0.68878, "time": 0.21726} -{"mode": "train", "epoch": 10, "iter": 5850, "lr": 2e-05, "memory": 5941, "data_time": 0.0379, "loss_rpn_cls": 0.02317, "loss_rpn_bbox": 0.04882, "loss_cls": 0.18171, "acc": 93.31592, "loss_bbox": 0.22819, "loss_mask": 0.22555, "loss": 0.70743, "time": 0.20734} -{"mode": "train", "epoch": 10, "iter": 5900, "lr": 2e-05, "memory": 5941, "data_time": 0.03732, "loss_rpn_cls": 0.02388, "loss_rpn_bbox": 0.04843, "loss_cls": 0.18934, "acc": 
93.19531, "loss_bbox": 0.23063, "loss_mask": 0.22971, "loss": 0.722, "time": 0.22023} -{"mode": "train", "epoch": 10, "iter": 5950, "lr": 2e-05, "memory": 5941, "data_time": 0.03633, "loss_rpn_cls": 0.02622, "loss_rpn_bbox": 0.04921, "loss_cls": 0.19147, "acc": 93.02539, "loss_bbox": 0.2359, "loss_mask": 0.22821, "loss": 0.731, "time": 0.22251} -{"mode": "train", "epoch": 10, "iter": 6000, "lr": 2e-05, "memory": 5941, "data_time": 0.03502, "loss_rpn_cls": 0.02255, "loss_rpn_bbox": 0.0484, "loss_cls": 0.18387, "acc": 93.33203, "loss_bbox": 0.22459, "loss_mask": 0.22415, "loss": 0.70358, "time": 0.21853} -{"mode": "train", "epoch": 10, "iter": 6050, "lr": 2e-05, "memory": 5941, "data_time": 0.03587, "loss_rpn_cls": 0.02202, "loss_rpn_bbox": 0.04712, "loss_cls": 0.17856, "acc": 93.45679, "loss_bbox": 0.22564, "loss_mask": 0.22603, "loss": 0.69937, "time": 0.21353} -{"mode": "train", "epoch": 10, "iter": 6100, "lr": 2e-05, "memory": 5941, "data_time": 0.03403, "loss_rpn_cls": 0.02218, "loss_rpn_bbox": 0.04531, "loss_cls": 0.18248, "acc": 93.33545, "loss_bbox": 0.2264, "loss_mask": 0.22988, "loss": 0.70625, "time": 0.20041} -{"mode": "train", "epoch": 10, "iter": 6150, "lr": 2e-05, "memory": 5941, "data_time": 0.03846, "loss_rpn_cls": 0.02392, "loss_rpn_bbox": 0.04621, "loss_cls": 0.18498, "acc": 93.23926, "loss_bbox": 0.2278, "loss_mask": 0.22997, "loss": 0.71288, "time": 0.21822} -{"mode": "train", "epoch": 10, "iter": 6200, "lr": 2e-05, "memory": 5941, "data_time": 0.03488, "loss_rpn_cls": 0.02175, "loss_rpn_bbox": 0.04615, "loss_cls": 0.18553, "acc": 93.22461, "loss_bbox": 0.22725, "loss_mask": 0.22313, "loss": 0.70382, "time": 0.21369} -{"mode": "train", "epoch": 10, "iter": 6250, "lr": 2e-05, "memory": 5941, "data_time": 0.03125, "loss_rpn_cls": 0.02173, "loss_rpn_bbox": 0.04236, "loss_cls": 0.17177, "acc": 93.64478, "loss_bbox": 0.21793, "loss_mask": 0.22156, "loss": 0.67534, "time": 0.20801} -{"mode": "train", "epoch": 10, "iter": 6300, "lr": 2e-05, "memory": 5941, "data_time": 0.03686, "loss_rpn_cls": 0.02305, "loss_rpn_bbox": 0.04818, "loss_cls": 0.18501, "acc": 93.27563, "loss_bbox": 0.22841, "loss_mask": 0.2293, "loss": 0.71395, "time": 0.21432} -{"mode": "train", "epoch": 10, "iter": 6350, "lr": 2e-05, "memory": 5941, "data_time": 0.0381, "loss_rpn_cls": 0.02399, "loss_rpn_bbox": 0.04816, "loss_cls": 0.1847, "acc": 93.2583, "loss_bbox": 0.22557, "loss_mask": 0.22478, "loss": 0.7072, "time": 0.21035} -{"mode": "train", "epoch": 10, "iter": 6400, "lr": 2e-05, "memory": 5941, "data_time": 0.04301, "loss_rpn_cls": 0.02294, "loss_rpn_bbox": 0.0487, "loss_cls": 0.18665, "acc": 93.17114, "loss_bbox": 0.23237, "loss_mask": 0.22925, "loss": 0.71991, "time": 0.22231} -{"mode": "train", "epoch": 10, "iter": 6450, "lr": 2e-05, "memory": 5941, "data_time": 0.03472, "loss_rpn_cls": 0.02328, "loss_rpn_bbox": 0.04763, "loss_cls": 0.18161, "acc": 93.37646, "loss_bbox": 0.2228, "loss_mask": 0.22759, "loss": 0.7029, "time": 0.21717} -{"mode": "train", "epoch": 10, "iter": 6500, "lr": 2e-05, "memory": 5941, "data_time": 0.04627, "loss_rpn_cls": 0.02377, "loss_rpn_bbox": 0.04649, "loss_cls": 0.18957, "acc": 93.13354, "loss_bbox": 0.22817, "loss_mask": 0.22638, "loss": 0.71437, "time": 0.20388} -{"mode": "train", "epoch": 10, "iter": 6550, "lr": 2e-05, "memory": 5941, "data_time": 0.03402, "loss_rpn_cls": 0.02122, "loss_rpn_bbox": 0.04596, "loss_cls": 0.17933, "acc": 93.49438, "loss_bbox": 0.22325, "loss_mask": 0.22601, "loss": 0.69576, "time": 0.21119} -{"mode": "train", "epoch": 10, "iter": 6600, 
"lr": 2e-05, "memory": 5941, "data_time": 0.03681, "loss_rpn_cls": 0.02289, "loss_rpn_bbox": 0.04648, "loss_cls": 0.18385, "acc": 93.27759, "loss_bbox": 0.22788, "loss_mask": 0.22668, "loss": 0.70778, "time": 0.21358} -{"mode": "train", "epoch": 10, "iter": 6650, "lr": 2e-05, "memory": 5941, "data_time": 0.03908, "loss_rpn_cls": 0.0237, "loss_rpn_bbox": 0.04976, "loss_cls": 0.18754, "acc": 93.15039, "loss_bbox": 0.23084, "loss_mask": 0.23086, "loss": 0.7227, "time": 0.21477} -{"mode": "train", "epoch": 10, "iter": 6700, "lr": 2e-05, "memory": 5941, "data_time": 0.03507, "loss_rpn_cls": 0.0211, "loss_rpn_bbox": 0.04568, "loss_cls": 0.18316, "acc": 93.33228, "loss_bbox": 0.22171, "loss_mask": 0.22097, "loss": 0.69261, "time": 0.21085} -{"mode": "train", "epoch": 10, "iter": 6750, "lr": 2e-05, "memory": 5941, "data_time": 0.03448, "loss_rpn_cls": 0.02159, "loss_rpn_bbox": 0.04726, "loss_cls": 0.17953, "acc": 93.49048, "loss_bbox": 0.22008, "loss_mask": 0.22536, "loss": 0.69383, "time": 0.20634} -{"mode": "train", "epoch": 10, "iter": 6800, "lr": 2e-05, "memory": 5941, "data_time": 0.03475, "loss_rpn_cls": 0.02107, "loss_rpn_bbox": 0.04223, "loss_cls": 0.18118, "acc": 93.44873, "loss_bbox": 0.21447, "loss_mask": 0.22047, "loss": 0.67942, "time": 0.19617} -{"mode": "train", "epoch": 10, "iter": 6850, "lr": 2e-05, "memory": 5941, "data_time": 0.03609, "loss_rpn_cls": 0.02381, "loss_rpn_bbox": 0.04834, "loss_cls": 0.1858, "acc": 93.23828, "loss_bbox": 0.2311, "loss_mask": 0.22944, "loss": 0.71848, "time": 0.21851} -{"mode": "train", "epoch": 10, "iter": 6900, "lr": 2e-05, "memory": 5941, "data_time": 0.03556, "loss_rpn_cls": 0.02523, "loss_rpn_bbox": 0.04667, "loss_cls": 0.19035, "acc": 92.96899, "loss_bbox": 0.23481, "loss_mask": 0.22741, "loss": 0.72447, "time": 0.21536} -{"mode": "train", "epoch": 10, "iter": 6950, "lr": 2e-05, "memory": 5941, "data_time": 0.03768, "loss_rpn_cls": 0.0234, "loss_rpn_bbox": 0.04646, "loss_cls": 0.18793, "acc": 93.11523, "loss_bbox": 0.23165, "loss_mask": 0.22886, "loss": 0.71831, "time": 0.20932} -{"mode": "train", "epoch": 10, "iter": 7000, "lr": 2e-05, "memory": 5941, "data_time": 0.03706, "loss_rpn_cls": 0.02291, "loss_rpn_bbox": 0.04744, "loss_cls": 0.18402, "acc": 93.23828, "loss_bbox": 0.22645, "loss_mask": 0.22113, "loss": 0.70197, "time": 0.20155} -{"mode": "train", "epoch": 10, "iter": 7050, "lr": 2e-05, "memory": 5941, "data_time": 0.0375, "loss_rpn_cls": 0.02151, "loss_rpn_bbox": 0.04415, "loss_cls": 0.18152, "acc": 93.40186, "loss_bbox": 0.22321, "loss_mask": 0.22325, "loss": 0.69364, "time": 0.20301} -{"mode": "train", "epoch": 10, "iter": 7100, "lr": 2e-05, "memory": 5941, "data_time": 0.039, "loss_rpn_cls": 0.02044, "loss_rpn_bbox": 0.04549, "loss_cls": 0.18106, "acc": 93.44043, "loss_bbox": 0.22632, "loss_mask": 0.2263, "loss": 0.69961, "time": 0.20247} -{"mode": "train", "epoch": 10, "iter": 7150, "lr": 2e-05, "memory": 5941, "data_time": 0.03489, "loss_rpn_cls": 0.02166, "loss_rpn_bbox": 0.04618, "loss_cls": 0.17997, "acc": 93.44556, "loss_bbox": 0.21787, "loss_mask": 0.222, "loss": 0.68768, "time": 0.24412} -{"mode": "train", "epoch": 10, "iter": 7200, "lr": 2e-05, "memory": 5941, "data_time": 0.03735, "loss_rpn_cls": 0.02224, "loss_rpn_bbox": 0.04512, "loss_cls": 0.1792, "acc": 93.53809, "loss_bbox": 0.22212, "loss_mask": 0.21732, "loss": 0.68599, "time": 0.20445} -{"mode": "train", "epoch": 10, "iter": 7250, "lr": 2e-05, "memory": 5941, "data_time": 0.03243, "loss_rpn_cls": 0.02593, "loss_rpn_bbox": 0.04776, "loss_cls": 0.19187, "acc": 
93.18262, "loss_bbox": 0.22862, "loss_mask": 0.22695, "loss": 0.72113, "time": 0.21367} -{"mode": "train", "epoch": 10, "iter": 7300, "lr": 2e-05, "memory": 5941, "data_time": 0.03991, "loss_rpn_cls": 0.02447, "loss_rpn_bbox": 0.04685, "loss_cls": 0.18257, "acc": 93.33936, "loss_bbox": 0.22436, "loss_mask": 0.22841, "loss": 0.70667, "time": 0.20452} -{"mode": "val", "epoch": 10, "iter": 625, "lr": 2e-05, "bbox_mAP": 0.3935, "bbox_mAP_50": 0.6136, "bbox_mAP_75": 0.429, "bbox_mAP_s": 0.2431, "bbox_mAP_m": 0.4271, "bbox_mAP_l": 0.5103, "bbox_mAP_copypaste": "0.3935 0.6136 0.4290 0.2431 0.4271 0.5103", "segm_mAP": 0.369, "segm_mAP_50": 0.5856, "segm_mAP_75": 0.3933, "segm_mAP_s": 0.1878, "segm_mAP_m": 0.3971, "segm_mAP_l": 0.5375, "segm_mAP_copypaste": "0.3690 0.5856 0.3933 0.1878 0.3971 0.5375"} -{"mode": "train", "epoch": 11, "iter": 50, "lr": 2e-05, "memory": 5941, "data_time": 0.10942, "loss_rpn_cls": 0.02162, "loss_rpn_bbox": 0.0458, "loss_cls": 0.18144, "acc": 93.43213, "loss_bbox": 0.22526, "loss_mask": 0.23114, "loss": 0.70525, "time": 0.28378} -{"mode": "train", "epoch": 11, "iter": 100, "lr": 2e-05, "memory": 5941, "data_time": 0.04756, "loss_rpn_cls": 0.02512, "loss_rpn_bbox": 0.04895, "loss_cls": 0.18381, "acc": 93.19995, "loss_bbox": 0.23287, "loss_mask": 0.22506, "loss": 0.71582, "time": 0.24328} -{"mode": "train", "epoch": 11, "iter": 150, "lr": 2e-05, "memory": 5941, "data_time": 0.0433, "loss_rpn_cls": 0.02253, "loss_rpn_bbox": 0.0476, "loss_cls": 0.17932, "acc": 93.32349, "loss_bbox": 0.2244, "loss_mask": 0.23444, "loss": 0.70829, "time": 0.22447} -{"mode": "train", "epoch": 11, "iter": 200, "lr": 2e-05, "memory": 5941, "data_time": 0.04116, "loss_rpn_cls": 0.02193, "loss_rpn_bbox": 0.04681, "loss_cls": 0.17952, "acc": 93.33643, "loss_bbox": 0.22416, "loss_mask": 0.22898, "loss": 0.7014, "time": 0.21909} -{"mode": "train", "epoch": 11, "iter": 250, "lr": 2e-05, "memory": 5941, "data_time": 0.03761, "loss_rpn_cls": 0.02413, "loss_rpn_bbox": 0.04955, "loss_cls": 0.18335, "acc": 93.25781, "loss_bbox": 0.23117, "loss_mask": 0.23367, "loss": 0.72186, "time": 0.22492} -{"mode": "train", "epoch": 11, "iter": 300, "lr": 2e-05, "memory": 5941, "data_time": 0.04188, "loss_rpn_cls": 0.02254, "loss_rpn_bbox": 0.04715, "loss_cls": 0.17465, "acc": 93.49341, "loss_bbox": 0.22323, "loss_mask": 0.22192, "loss": 0.68949, "time": 0.22476} -{"mode": "train", "epoch": 11, "iter": 350, "lr": 2e-05, "memory": 5941, "data_time": 0.04336, "loss_rpn_cls": 0.02164, "loss_rpn_bbox": 0.04894, "loss_cls": 0.18111, "acc": 93.3501, "loss_bbox": 0.22645, "loss_mask": 0.22118, "loss": 0.69933, "time": 0.22137} -{"mode": "train", "epoch": 11, "iter": 400, "lr": 2e-05, "memory": 5941, "data_time": 0.04092, "loss_rpn_cls": 0.02378, "loss_rpn_bbox": 0.04555, "loss_cls": 0.18022, "acc": 93.39185, "loss_bbox": 0.22478, "loss_mask": 0.2222, "loss": 0.69653, "time": 0.23067} -{"mode": "train", "epoch": 11, "iter": 450, "lr": 2e-05, "memory": 5941, "data_time": 0.0461, "loss_rpn_cls": 0.02451, "loss_rpn_bbox": 0.04516, "loss_cls": 0.17797, "acc": 93.53369, "loss_bbox": 0.21894, "loss_mask": 0.22486, "loss": 0.69146, "time": 0.21226} -{"mode": "train", "epoch": 11, "iter": 500, "lr": 2e-05, "memory": 5941, "data_time": 0.03608, "loss_rpn_cls": 0.02173, "loss_rpn_bbox": 0.04992, "loss_cls": 0.1819, "acc": 93.32886, "loss_bbox": 0.22972, "loss_mask": 0.22942, "loss": 0.71269, "time": 0.21571} -{"mode": "train", "epoch": 11, "iter": 550, "lr": 2e-05, "memory": 5941, "data_time": 0.04175, "loss_rpn_cls": 0.02059, 
"loss_rpn_bbox": 0.04389, "loss_cls": 0.17716, "acc": 93.47095, "loss_bbox": 0.2209, "loss_mask": 0.22089, "loss": 0.68343, "time": 0.21429} -{"mode": "train", "epoch": 11, "iter": 600, "lr": 2e-05, "memory": 5941, "data_time": 0.05319, "loss_rpn_cls": 0.02356, "loss_rpn_bbox": 0.04722, "loss_cls": 0.18522, "acc": 93.25903, "loss_bbox": 0.22938, "loss_mask": 0.22636, "loss": 0.71173, "time": 0.22158} -{"mode": "train", "epoch": 11, "iter": 650, "lr": 2e-05, "memory": 5941, "data_time": 0.03777, "loss_rpn_cls": 0.02328, "loss_rpn_bbox": 0.04775, "loss_cls": 0.18425, "acc": 93.29468, "loss_bbox": 0.22532, "loss_mask": 0.2289, "loss": 0.70951, "time": 0.21903} -{"mode": "train", "epoch": 11, "iter": 700, "lr": 2e-05, "memory": 5941, "data_time": 0.03843, "loss_rpn_cls": 0.02161, "loss_rpn_bbox": 0.04603, "loss_cls": 0.17211, "acc": 93.72852, "loss_bbox": 0.21783, "loss_mask": 0.22138, "loss": 0.67896, "time": 0.21517} -{"mode": "train", "epoch": 11, "iter": 750, "lr": 2e-05, "memory": 5941, "data_time": 0.04682, "loss_rpn_cls": 0.02313, "loss_rpn_bbox": 0.04717, "loss_cls": 0.18378, "acc": 93.20557, "loss_bbox": 0.22715, "loss_mask": 0.22714, "loss": 0.70836, "time": 0.21234} -{"mode": "train", "epoch": 11, "iter": 800, "lr": 2e-05, "memory": 5941, "data_time": 0.04389, "loss_rpn_cls": 0.02354, "loss_rpn_bbox": 0.04696, "loss_cls": 0.18588, "acc": 93.14307, "loss_bbox": 0.2311, "loss_mask": 0.22891, "loss": 0.7164, "time": 0.22031} -{"mode": "train", "epoch": 11, "iter": 850, "lr": 2e-05, "memory": 5941, "data_time": 0.03842, "loss_rpn_cls": 0.02188, "loss_rpn_bbox": 0.04573, "loss_cls": 0.17622, "acc": 93.46802, "loss_bbox": 0.2228, "loss_mask": 0.22273, "loss": 0.68937, "time": 0.20465} -{"mode": "train", "epoch": 11, "iter": 900, "lr": 2e-05, "memory": 5941, "data_time": 0.03902, "loss_rpn_cls": 0.02365, "loss_rpn_bbox": 0.04703, "loss_cls": 0.17934, "acc": 93.51001, "loss_bbox": 0.22016, "loss_mask": 0.22362, "loss": 0.6938, "time": 0.21569} -{"mode": "train", "epoch": 11, "iter": 950, "lr": 2e-05, "memory": 5941, "data_time": 0.03918, "loss_rpn_cls": 0.02297, "loss_rpn_bbox": 0.04632, "loss_cls": 0.18289, "acc": 93.34595, "loss_bbox": 0.228, "loss_mask": 0.23275, "loss": 0.71293, "time": 0.21735} -{"mode": "train", "epoch": 11, "iter": 1000, "lr": 2e-05, "memory": 5941, "data_time": 0.03554, "loss_rpn_cls": 0.02128, "loss_rpn_bbox": 0.04687, "loss_cls": 0.17216, "acc": 93.72241, "loss_bbox": 0.21357, "loss_mask": 0.2192, "loss": 0.67308, "time": 0.22311} -{"mode": "train", "epoch": 11, "iter": 1050, "lr": 2e-05, "memory": 5941, "data_time": 0.03971, "loss_rpn_cls": 0.02008, "loss_rpn_bbox": 0.04343, "loss_cls": 0.17619, "acc": 93.56738, "loss_bbox": 0.22341, "loss_mask": 0.22235, "loss": 0.68546, "time": 0.21594} -{"mode": "train", "epoch": 11, "iter": 1100, "lr": 2e-05, "memory": 5941, "data_time": 0.04697, "loss_rpn_cls": 0.02091, "loss_rpn_bbox": 0.04422, "loss_cls": 0.17491, "acc": 93.62305, "loss_bbox": 0.21421, "loss_mask": 0.2192, "loss": 0.67345, "time": 0.20994} -{"mode": "train", "epoch": 11, "iter": 1150, "lr": 2e-05, "memory": 5941, "data_time": 0.04619, "loss_rpn_cls": 0.02345, "loss_rpn_bbox": 0.04592, "loss_cls": 0.18337, "acc": 93.32837, "loss_bbox": 0.22597, "loss_mask": 0.22674, "loss": 0.70545, "time": 0.21384} -{"mode": "train", "epoch": 11, "iter": 1200, "lr": 2e-05, "memory": 5941, "data_time": 0.04784, "loss_rpn_cls": 0.02238, "loss_rpn_bbox": 0.04664, "loss_cls": 0.17397, "acc": 93.5105, "loss_bbox": 0.2193, "loss_mask": 0.22432, "loss": 0.6866, "time": 0.22509} 
-{"mode": "train", "epoch": 11, "iter": 1250, "lr": 2e-05, "memory": 5941, "data_time": 0.03976, "loss_rpn_cls": 0.02545, "loss_rpn_bbox": 0.0492, "loss_cls": 0.19411, "acc": 92.91675, "loss_bbox": 0.23833, "loss_mask": 0.23242, "loss": 0.73952, "time": 0.21837} -{"mode": "train", "epoch": 11, "iter": 1300, "lr": 2e-05, "memory": 5941, "data_time": 0.03814, "loss_rpn_cls": 0.02135, "loss_rpn_bbox": 0.0458, "loss_cls": 0.18333, "acc": 93.30396, "loss_bbox": 0.22452, "loss_mask": 0.224, "loss": 0.699, "time": 0.20476} -{"mode": "train", "epoch": 11, "iter": 1350, "lr": 2e-05, "memory": 5941, "data_time": 0.05175, "loss_rpn_cls": 0.02362, "loss_rpn_bbox": 0.04912, "loss_cls": 0.17939, "acc": 93.42065, "loss_bbox": 0.22618, "loss_mask": 0.22581, "loss": 0.70412, "time": 0.22157} -{"mode": "train", "epoch": 11, "iter": 1400, "lr": 2e-05, "memory": 5941, "data_time": 0.0378, "loss_rpn_cls": 0.02108, "loss_rpn_bbox": 0.04419, "loss_cls": 0.16951, "acc": 93.72607, "loss_bbox": 0.22062, "loss_mask": 0.22254, "loss": 0.67793, "time": 0.22339} -{"mode": "train", "epoch": 11, "iter": 1450, "lr": 2e-05, "memory": 5941, "data_time": 0.0457, "loss_rpn_cls": 0.02146, "loss_rpn_bbox": 0.04413, "loss_cls": 0.18274, "acc": 93.3208, "loss_bbox": 0.22338, "loss_mask": 0.22263, "loss": 0.69433, "time": 0.20733} -{"mode": "train", "epoch": 11, "iter": 1500, "lr": 2e-05, "memory": 5941, "data_time": 0.04814, "loss_rpn_cls": 0.023, "loss_rpn_bbox": 0.04655, "loss_cls": 0.18084, "acc": 93.38867, "loss_bbox": 0.23465, "loss_mask": 0.22609, "loss": 0.71113, "time": 0.21678} -{"mode": "train", "epoch": 11, "iter": 1550, "lr": 2e-05, "memory": 5941, "data_time": 0.04073, "loss_rpn_cls": 0.02182, "loss_rpn_bbox": 0.0466, "loss_cls": 0.1778, "acc": 93.56152, "loss_bbox": 0.22097, "loss_mask": 0.21906, "loss": 0.68625, "time": 0.21288} -{"mode": "train", "epoch": 11, "iter": 1600, "lr": 2e-05, "memory": 5941, "data_time": 0.0479, "loss_rpn_cls": 0.02342, "loss_rpn_bbox": 0.04665, "loss_cls": 0.18124, "acc": 93.44507, "loss_bbox": 0.22289, "loss_mask": 0.22378, "loss": 0.69798, "time": 0.21933} -{"mode": "train", "epoch": 11, "iter": 1650, "lr": 2e-05, "memory": 5941, "data_time": 0.04728, "loss_rpn_cls": 0.02245, "loss_rpn_bbox": 0.04416, "loss_cls": 0.17741, "acc": 93.59058, "loss_bbox": 0.22037, "loss_mask": 0.22473, "loss": 0.68911, "time": 0.21827} -{"mode": "train", "epoch": 11, "iter": 1700, "lr": 2e-05, "memory": 5941, "data_time": 0.04839, "loss_rpn_cls": 0.0211, "loss_rpn_bbox": 0.04467, "loss_cls": 0.17223, "acc": 93.72607, "loss_bbox": 0.21311, "loss_mask": 0.21844, "loss": 0.66956, "time": 0.21782} -{"mode": "train", "epoch": 11, "iter": 1750, "lr": 2e-05, "memory": 5941, "data_time": 0.04342, "loss_rpn_cls": 0.02321, "loss_rpn_bbox": 0.04787, "loss_cls": 0.18366, "acc": 93.25488, "loss_bbox": 0.23055, "loss_mask": 0.22765, "loss": 0.71294, "time": 0.22165} -{"mode": "train", "epoch": 11, "iter": 1800, "lr": 2e-05, "memory": 5941, "data_time": 0.03555, "loss_rpn_cls": 0.02342, "loss_rpn_bbox": 0.04618, "loss_cls": 0.18615, "acc": 93.15356, "loss_bbox": 0.22937, "loss_mask": 0.2258, "loss": 0.71092, "time": 0.20981} -{"mode": "train", "epoch": 11, "iter": 1850, "lr": 2e-05, "memory": 5941, "data_time": 0.04165, "loss_rpn_cls": 0.02126, "loss_rpn_bbox": 0.04391, "loss_cls": 0.179, "acc": 93.41479, "loss_bbox": 0.22544, "loss_mask": 0.22493, "loss": 0.69456, "time": 0.20991} -{"mode": "train", "epoch": 11, "iter": 1900, "lr": 2e-05, "memory": 5941, "data_time": 0.04275, "loss_rpn_cls": 0.02211, "loss_rpn_bbox": 
0.04464, "loss_cls": 0.17471, "acc": 93.59521, "loss_bbox": 0.2184, "loss_mask": 0.21985, "loss": 0.67971, "time": 0.20629} -{"mode": "train", "epoch": 11, "iter": 1950, "lr": 2e-05, "memory": 5941, "data_time": 0.04239, "loss_rpn_cls": 0.02268, "loss_rpn_bbox": 0.04787, "loss_cls": 0.17956, "acc": 93.41748, "loss_bbox": 0.22365, "loss_mask": 0.22514, "loss": 0.69891, "time": 0.22214} -{"mode": "train", "epoch": 11, "iter": 2000, "lr": 2e-05, "memory": 5941, "data_time": 0.03461, "loss_rpn_cls": 0.01993, "loss_rpn_bbox": 0.04487, "loss_cls": 0.1733, "acc": 93.67065, "loss_bbox": 0.21946, "loss_mask": 0.2258, "loss": 0.68337, "time": 0.21006} -{"mode": "train", "epoch": 11, "iter": 2050, "lr": 2e-05, "memory": 5941, "data_time": 0.04166, "loss_rpn_cls": 0.02302, "loss_rpn_bbox": 0.04762, "loss_cls": 0.18273, "acc": 93.34961, "loss_bbox": 0.22377, "loss_mask": 0.22502, "loss": 0.70215, "time": 0.21573} -{"mode": "train", "epoch": 11, "iter": 2100, "lr": 2e-05, "memory": 5941, "data_time": 0.05214, "loss_rpn_cls": 0.02451, "loss_rpn_bbox": 0.04859, "loss_cls": 0.18508, "acc": 93.3186, "loss_bbox": 0.23181, "loss_mask": 0.2256, "loss": 0.7156, "time": 0.2192} -{"mode": "train", "epoch": 11, "iter": 2150, "lr": 2e-05, "memory": 5941, "data_time": 0.0448, "loss_rpn_cls": 0.02324, "loss_rpn_bbox": 0.04737, "loss_cls": 0.18628, "acc": 93.00024, "loss_bbox": 0.23328, "loss_mask": 0.22731, "loss": 0.71749, "time": 0.21991} -{"mode": "train", "epoch": 11, "iter": 2200, "lr": 2e-05, "memory": 5941, "data_time": 0.03994, "loss_rpn_cls": 0.02273, "loss_rpn_bbox": 0.04595, "loss_cls": 0.17952, "acc": 93.44971, "loss_bbox": 0.22302, "loss_mask": 0.22708, "loss": 0.6983, "time": 0.21181} -{"mode": "train", "epoch": 11, "iter": 2250, "lr": 2e-05, "memory": 5941, "data_time": 0.03647, "loss_rpn_cls": 0.02137, "loss_rpn_bbox": 0.04374, "loss_cls": 0.17462, "acc": 93.55542, "loss_bbox": 0.22041, "loss_mask": 0.21979, "loss": 0.67993, "time": 0.21028} -{"mode": "train", "epoch": 11, "iter": 2300, "lr": 2e-05, "memory": 5941, "data_time": 0.05049, "loss_rpn_cls": 0.02193, "loss_rpn_bbox": 0.04744, "loss_cls": 0.18511, "acc": 93.10791, "loss_bbox": 0.2324, "loss_mask": 0.22185, "loss": 0.70873, "time": 0.21614} -{"mode": "train", "epoch": 11, "iter": 2350, "lr": 2e-05, "memory": 5941, "data_time": 0.04007, "loss_rpn_cls": 0.0201, "loss_rpn_bbox": 0.04506, "loss_cls": 0.1744, "acc": 93.58789, "loss_bbox": 0.21552, "loss_mask": 0.22456, "loss": 0.67964, "time": 0.21121} -{"mode": "train", "epoch": 11, "iter": 2400, "lr": 2e-05, "memory": 5941, "data_time": 0.05264, "loss_rpn_cls": 0.02555, "loss_rpn_bbox": 0.04928, "loss_cls": 0.18636, "acc": 93.21997, "loss_bbox": 0.23054, "loss_mask": 0.22677, "loss": 0.7185, "time": 0.21872} -{"mode": "train", "epoch": 11, "iter": 2450, "lr": 2e-05, "memory": 5941, "data_time": 0.03925, "loss_rpn_cls": 0.02025, "loss_rpn_bbox": 0.04233, "loss_cls": 0.17551, "acc": 93.60815, "loss_bbox": 0.21871, "loss_mask": 0.22059, "loss": 0.67739, "time": 0.20716} -{"mode": "train", "epoch": 11, "iter": 2500, "lr": 2e-05, "memory": 5941, "data_time": 0.03483, "loss_rpn_cls": 0.02314, "loss_rpn_bbox": 0.04791, "loss_cls": 0.18335, "acc": 93.25366, "loss_bbox": 0.23206, "loss_mask": 0.2299, "loss": 0.71635, "time": 0.21135} -{"mode": "train", "epoch": 11, "iter": 2550, "lr": 2e-05, "memory": 5941, "data_time": 0.04547, "loss_rpn_cls": 0.02136, "loss_rpn_bbox": 0.04519, "loss_cls": 0.17806, "acc": 93.50562, "loss_bbox": 0.21872, "loss_mask": 0.22359, "loss": 0.68693, "time": 0.22125} -{"mode": 
"train", "epoch": 11, "iter": 2600, "lr": 2e-05, "memory": 5941, "data_time": 0.04298, "loss_rpn_cls": 0.02207, "loss_rpn_bbox": 0.04699, "loss_cls": 0.18412, "acc": 93.28735, "loss_bbox": 0.22414, "loss_mask": 0.22142, "loss": 0.69874, "time": 0.21344} -{"mode": "train", "epoch": 11, "iter": 2650, "lr": 2e-05, "memory": 5941, "data_time": 0.03628, "loss_rpn_cls": 0.02151, "loss_rpn_bbox": 0.04436, "loss_cls": 0.1794, "acc": 93.51392, "loss_bbox": 0.22143, "loss_mask": 0.22306, "loss": 0.68977, "time": 0.21032} -{"mode": "train", "epoch": 11, "iter": 2700, "lr": 2e-05, "memory": 5941, "data_time": 0.04105, "loss_rpn_cls": 0.02184, "loss_rpn_bbox": 0.04848, "loss_cls": 0.18347, "acc": 93.35205, "loss_bbox": 0.2276, "loss_mask": 0.22168, "loss": 0.70308, "time": 0.21853} -{"mode": "train", "epoch": 11, "iter": 2750, "lr": 2e-05, "memory": 5941, "data_time": 0.0397, "loss_rpn_cls": 0.0231, "loss_rpn_bbox": 0.04812, "loss_cls": 0.18054, "acc": 93.27856, "loss_bbox": 0.22595, "loss_mask": 0.22811, "loss": 0.70582, "time": 0.22092} -{"mode": "train", "epoch": 11, "iter": 2800, "lr": 2e-05, "memory": 5941, "data_time": 0.04227, "loss_rpn_cls": 0.02185, "loss_rpn_bbox": 0.04703, "loss_cls": 0.18248, "acc": 93.27466, "loss_bbox": 0.23214, "loss_mask": 0.22254, "loss": 0.70604, "time": 0.21558} -{"mode": "train", "epoch": 11, "iter": 2850, "lr": 2e-05, "memory": 5941, "data_time": 0.03665, "loss_rpn_cls": 0.02173, "loss_rpn_bbox": 0.04721, "loss_cls": 0.1803, "acc": 93.45947, "loss_bbox": 0.21944, "loss_mask": 0.22388, "loss": 0.69256, "time": 0.21366} -{"mode": "train", "epoch": 11, "iter": 2900, "lr": 2e-05, "memory": 5941, "data_time": 0.03449, "loss_rpn_cls": 0.02228, "loss_rpn_bbox": 0.04667, "loss_cls": 0.18159, "acc": 93.33105, "loss_bbox": 0.22557, "loss_mask": 0.22612, "loss": 0.70223, "time": 0.20953} -{"mode": "train", "epoch": 11, "iter": 2950, "lr": 2e-05, "memory": 5941, "data_time": 0.04542, "loss_rpn_cls": 0.02419, "loss_rpn_bbox": 0.04783, "loss_cls": 0.18942, "acc": 93.04126, "loss_bbox": 0.23649, "loss_mask": 0.23236, "loss": 0.7303, "time": 0.21912} -{"mode": "train", "epoch": 11, "iter": 3000, "lr": 2e-05, "memory": 5941, "data_time": 0.04404, "loss_rpn_cls": 0.02363, "loss_rpn_bbox": 0.04543, "loss_cls": 0.18282, "acc": 93.30347, "loss_bbox": 0.23144, "loss_mask": 0.22631, "loss": 0.70963, "time": 0.20685} -{"mode": "train", "epoch": 11, "iter": 3050, "lr": 2e-05, "memory": 5941, "data_time": 0.03329, "loss_rpn_cls": 0.02366, "loss_rpn_bbox": 0.0499, "loss_cls": 0.1851, "acc": 93.10083, "loss_bbox": 0.23246, "loss_mask": 0.2312, "loss": 0.72232, "time": 0.2209} -{"mode": "train", "epoch": 11, "iter": 3100, "lr": 2e-05, "memory": 5941, "data_time": 0.03711, "loss_rpn_cls": 0.02056, "loss_rpn_bbox": 0.04651, "loss_cls": 0.17724, "acc": 93.55347, "loss_bbox": 0.22134, "loss_mask": 0.2194, "loss": 0.68505, "time": 0.21026} -{"mode": "train", "epoch": 11, "iter": 3150, "lr": 2e-05, "memory": 5941, "data_time": 0.037, "loss_rpn_cls": 0.0244, "loss_rpn_bbox": 0.04866, "loss_cls": 0.18785, "acc": 93.14087, "loss_bbox": 0.23008, "loss_mask": 0.2254, "loss": 0.7164, "time": 0.21566} -{"mode": "train", "epoch": 11, "iter": 3200, "lr": 2e-05, "memory": 5941, "data_time": 0.03803, "loss_rpn_cls": 0.02242, "loss_rpn_bbox": 0.04851, "loss_cls": 0.18793, "acc": 93.19019, "loss_bbox": 0.22842, "loss_mask": 0.22597, "loss": 0.71325, "time": 0.2125} -{"mode": "train", "epoch": 11, "iter": 3250, "lr": 2e-05, "memory": 5941, "data_time": 0.02974, "loss_rpn_cls": 0.02292, "loss_rpn_bbox": 0.04835, 
"loss_cls": 0.18232, "acc": 93.2749, "loss_bbox": 0.22548, "loss_mask": 0.22852, "loss": 0.70759, "time": 0.22367} -{"mode": "train", "epoch": 11, "iter": 3300, "lr": 2e-05, "memory": 5941, "data_time": 0.03009, "loss_rpn_cls": 0.02109, "loss_rpn_bbox": 0.04355, "loss_cls": 0.16745, "acc": 93.88403, "loss_bbox": 0.21107, "loss_mask": 0.22148, "loss": 0.66464, "time": 0.21378} -{"mode": "train", "epoch": 11, "iter": 3350, "lr": 2e-05, "memory": 5941, "data_time": 0.03668, "loss_rpn_cls": 0.02197, "loss_rpn_bbox": 0.04751, "loss_cls": 0.1836, "acc": 93.28442, "loss_bbox": 0.22883, "loss_mask": 0.23186, "loss": 0.71377, "time": 0.21438} -{"mode": "train", "epoch": 11, "iter": 3400, "lr": 2e-05, "memory": 5941, "data_time": 0.03316, "loss_rpn_cls": 0.02135, "loss_rpn_bbox": 0.04722, "loss_cls": 0.17812, "acc": 93.48145, "loss_bbox": 0.22638, "loss_mask": 0.22594, "loss": 0.699, "time": 0.20455} -{"mode": "train", "epoch": 11, "iter": 3450, "lr": 2e-05, "memory": 5941, "data_time": 0.03511, "loss_rpn_cls": 0.02225, "loss_rpn_bbox": 0.04532, "loss_cls": 0.18165, "acc": 93.29688, "loss_bbox": 0.22683, "loss_mask": 0.22533, "loss": 0.70138, "time": 0.21335} -{"mode": "train", "epoch": 11, "iter": 3500, "lr": 2e-05, "memory": 5941, "data_time": 0.03474, "loss_rpn_cls": 0.02123, "loss_rpn_bbox": 0.04525, "loss_cls": 0.18414, "acc": 93.3606, "loss_bbox": 0.22247, "loss_mask": 0.22152, "loss": 0.69461, "time": 0.21016} -{"mode": "train", "epoch": 11, "iter": 3550, "lr": 2e-05, "memory": 5941, "data_time": 0.03178, "loss_rpn_cls": 0.02342, "loss_rpn_bbox": 0.04576, "loss_cls": 0.18639, "acc": 93.03394, "loss_bbox": 0.22945, "loss_mask": 0.22073, "loss": 0.70576, "time": 0.21448} -{"mode": "train", "epoch": 11, "iter": 3600, "lr": 2e-05, "memory": 5941, "data_time": 0.03673, "loss_rpn_cls": 0.02155, "loss_rpn_bbox": 0.04422, "loss_cls": 0.1758, "acc": 93.57983, "loss_bbox": 0.21677, "loss_mask": 0.21893, "loss": 0.67727, "time": 0.20913} -{"mode": "train", "epoch": 11, "iter": 3650, "lr": 2e-05, "memory": 5941, "data_time": 0.04021, "loss_rpn_cls": 0.02202, "loss_rpn_bbox": 0.04598, "loss_cls": 0.18149, "acc": 93.39868, "loss_bbox": 0.22627, "loss_mask": 0.2292, "loss": 0.70497, "time": 0.21563} -{"mode": "train", "epoch": 11, "iter": 3700, "lr": 2e-05, "memory": 5941, "data_time": 0.03678, "loss_rpn_cls": 0.02188, "loss_rpn_bbox": 0.04405, "loss_cls": 0.17843, "acc": 93.41895, "loss_bbox": 0.22327, "loss_mask": 0.22323, "loss": 0.69086, "time": 0.21529} -{"mode": "train", "epoch": 11, "iter": 3750, "lr": 2e-05, "memory": 5941, "data_time": 0.03944, "loss_rpn_cls": 0.01956, "loss_rpn_bbox": 0.0463, "loss_cls": 0.17683, "acc": 93.51489, "loss_bbox": 0.22142, "loss_mask": 0.22392, "loss": 0.68802, "time": 0.20524} -{"mode": "train", "epoch": 11, "iter": 3800, "lr": 2e-05, "memory": 5941, "data_time": 0.0344, "loss_rpn_cls": 0.02064, "loss_rpn_bbox": 0.04371, "loss_cls": 0.17376, "acc": 93.67383, "loss_bbox": 0.21239, "loss_mask": 0.22175, "loss": 0.67225, "time": 0.20678} -{"mode": "train", "epoch": 11, "iter": 3850, "lr": 2e-05, "memory": 5941, "data_time": 0.04244, "loss_rpn_cls": 0.02315, "loss_rpn_bbox": 0.04911, "loss_cls": 0.18713, "acc": 93.00854, "loss_bbox": 0.23551, "loss_mask": 0.22907, "loss": 0.72396, "time": 0.22633} -{"mode": "train", "epoch": 11, "iter": 3900, "lr": 2e-05, "memory": 5941, "data_time": 0.04464, "loss_rpn_cls": 0.02361, "loss_rpn_bbox": 0.0462, "loss_cls": 0.18168, "acc": 93.39136, "loss_bbox": 0.22117, "loss_mask": 0.22209, "loss": 0.69475, "time": 0.21313} -{"mode": 
"train", "epoch": 11, "iter": 3950, "lr": 2e-05, "memory": 5941, "data_time": 0.03695, "loss_rpn_cls": 0.02389, "loss_rpn_bbox": 0.04838, "loss_cls": 0.18718, "acc": 93.11694, "loss_bbox": 0.2293, "loss_mask": 0.22829, "loss": 0.71704, "time": 0.21037} -{"mode": "train", "epoch": 11, "iter": 4000, "lr": 2e-05, "memory": 5941, "data_time": 0.03424, "loss_rpn_cls": 0.02379, "loss_rpn_bbox": 0.04681, "loss_cls": 0.17974, "acc": 93.46753, "loss_bbox": 0.22599, "loss_mask": 0.22263, "loss": 0.69897, "time": 0.21743} -{"mode": "train", "epoch": 11, "iter": 4050, "lr": 2e-05, "memory": 5941, "data_time": 0.04542, "loss_rpn_cls": 0.0224, "loss_rpn_bbox": 0.04531, "loss_cls": 0.18367, "acc": 93.33008, "loss_bbox": 0.22737, "loss_mask": 0.22496, "loss": 0.70372, "time": 0.2034} -{"mode": "train", "epoch": 11, "iter": 4100, "lr": 2e-05, "memory": 5941, "data_time": 0.03305, "loss_rpn_cls": 0.02498, "loss_rpn_bbox": 0.04655, "loss_cls": 0.18636, "acc": 93.20557, "loss_bbox": 0.22903, "loss_mask": 0.22843, "loss": 0.71535, "time": 0.21884} -{"mode": "train", "epoch": 11, "iter": 4150, "lr": 2e-05, "memory": 5941, "data_time": 0.03543, "loss_rpn_cls": 0.02158, "loss_rpn_bbox": 0.04442, "loss_cls": 0.17984, "acc": 93.3457, "loss_bbox": 0.22253, "loss_mask": 0.22691, "loss": 0.69527, "time": 0.20564} -{"mode": "train", "epoch": 11, "iter": 4200, "lr": 2e-05, "memory": 5942, "data_time": 0.03968, "loss_rpn_cls": 0.02223, "loss_rpn_bbox": 0.04769, "loss_cls": 0.18097, "acc": 93.33862, "loss_bbox": 0.22439, "loss_mask": 0.22149, "loss": 0.69677, "time": 0.26171} -{"mode": "train", "epoch": 11, "iter": 4250, "lr": 2e-05, "memory": 5942, "data_time": 0.03356, "loss_rpn_cls": 0.02294, "loss_rpn_bbox": 0.04457, "loss_cls": 0.17821, "acc": 93.55737, "loss_bbox": 0.21804, "loss_mask": 0.22523, "loss": 0.68899, "time": 0.20714} -{"mode": "train", "epoch": 11, "iter": 4300, "lr": 2e-05, "memory": 5942, "data_time": 0.03922, "loss_rpn_cls": 0.02235, "loss_rpn_bbox": 0.04932, "loss_cls": 0.18132, "acc": 93.41602, "loss_bbox": 0.23293, "loss_mask": 0.22886, "loss": 0.71479, "time": 0.24768} -{"mode": "train", "epoch": 11, "iter": 4350, "lr": 2e-05, "memory": 5942, "data_time": 0.03987, "loss_rpn_cls": 0.02295, "loss_rpn_bbox": 0.04858, "loss_cls": 0.18712, "acc": 93.19849, "loss_bbox": 0.23053, "loss_mask": 0.22997, "loss": 0.71914, "time": 0.24407} -{"mode": "train", "epoch": 11, "iter": 4400, "lr": 2e-05, "memory": 5942, "data_time": 0.04318, "loss_rpn_cls": 0.02246, "loss_rpn_bbox": 0.04584, "loss_cls": 0.17868, "acc": 93.39941, "loss_bbox": 0.22499, "loss_mask": 0.22852, "loss": 0.70049, "time": 0.21579} -{"mode": "train", "epoch": 11, "iter": 4450, "lr": 2e-05, "memory": 5942, "data_time": 0.03742, "loss_rpn_cls": 0.02093, "loss_rpn_bbox": 0.04667, "loss_cls": 0.18284, "acc": 93.29126, "loss_bbox": 0.22628, "loss_mask": 0.22741, "loss": 0.70412, "time": 0.2128} -{"mode": "train", "epoch": 11, "iter": 4500, "lr": 2e-05, "memory": 5942, "data_time": 0.04757, "loss_rpn_cls": 0.02217, "loss_rpn_bbox": 0.04689, "loss_cls": 0.1817, "acc": 93.34521, "loss_bbox": 0.22311, "loss_mask": 0.22396, "loss": 0.69783, "time": 0.24665} -{"mode": "train", "epoch": 11, "iter": 4550, "lr": 2e-05, "memory": 5942, "data_time": 0.04045, "loss_rpn_cls": 0.02184, "loss_rpn_bbox": 0.04682, "loss_cls": 0.17476, "acc": 93.55078, "loss_bbox": 0.21837, "loss_mask": 0.21996, "loss": 0.68174, "time": 0.20612} -{"mode": "train", "epoch": 11, "iter": 4600, "lr": 2e-05, "memory": 5942, "data_time": 0.03983, "loss_rpn_cls": 0.02171, "loss_rpn_bbox": 
0.0454, "loss_cls": 0.18153, "acc": 93.4353, "loss_bbox": 0.21982, "loss_mask": 0.22485, "loss": 0.6933, "time": 0.21001} -{"mode": "train", "epoch": 11, "iter": 4650, "lr": 2e-05, "memory": 5942, "data_time": 0.04051, "loss_rpn_cls": 0.02349, "loss_rpn_bbox": 0.04808, "loss_cls": 0.1897, "acc": 93.06201, "loss_bbox": 0.23255, "loss_mask": 0.23706, "loss": 0.73088, "time": 0.21401} -{"mode": "train", "epoch": 11, "iter": 4700, "lr": 2e-05, "memory": 5942, "data_time": 0.03366, "loss_rpn_cls": 0.02176, "loss_rpn_bbox": 0.04514, "loss_cls": 0.18049, "acc": 93.37573, "loss_bbox": 0.22336, "loss_mask": 0.22422, "loss": 0.69497, "time": 0.20934} -{"mode": "train", "epoch": 11, "iter": 4750, "lr": 2e-05, "memory": 5942, "data_time": 0.03569, "loss_rpn_cls": 0.02299, "loss_rpn_bbox": 0.04519, "loss_cls": 0.17828, "acc": 93.48926, "loss_bbox": 0.22354, "loss_mask": 0.22549, "loss": 0.69549, "time": 0.20653} -{"mode": "train", "epoch": 11, "iter": 4800, "lr": 2e-05, "memory": 5942, "data_time": 0.04218, "loss_rpn_cls": 0.02452, "loss_rpn_bbox": 0.04978, "loss_cls": 0.19239, "acc": 93.06543, "loss_bbox": 0.23353, "loss_mask": 0.22757, "loss": 0.7278, "time": 0.22087} -{"mode": "train", "epoch": 11, "iter": 4850, "lr": 2e-05, "memory": 5942, "data_time": 0.03793, "loss_rpn_cls": 0.02072, "loss_rpn_bbox": 0.04562, "loss_cls": 0.18107, "acc": 93.5061, "loss_bbox": 0.22292, "loss_mask": 0.22363, "loss": 0.69396, "time": 0.20786} -{"mode": "train", "epoch": 11, "iter": 4900, "lr": 2e-05, "memory": 5942, "data_time": 0.0445, "loss_rpn_cls": 0.02371, "loss_rpn_bbox": 0.04653, "loss_cls": 0.18369, "acc": 93.21704, "loss_bbox": 0.23108, "loss_mask": 0.23269, "loss": 0.71771, "time": 0.21222} -{"mode": "train", "epoch": 11, "iter": 4950, "lr": 2e-05, "memory": 5942, "data_time": 0.03404, "loss_rpn_cls": 0.02147, "loss_rpn_bbox": 0.04344, "loss_cls": 0.17593, "acc": 93.58105, "loss_bbox": 0.22051, "loss_mask": 0.23203, "loss": 0.69338, "time": 0.216} -{"mode": "train", "epoch": 11, "iter": 5000, "lr": 2e-05, "memory": 5942, "data_time": 0.03604, "loss_rpn_cls": 0.02122, "loss_rpn_bbox": 0.04474, "loss_cls": 0.17787, "acc": 93.42358, "loss_bbox": 0.22251, "loss_mask": 0.21987, "loss": 0.68621, "time": 0.20794} -{"mode": "train", "epoch": 11, "iter": 5050, "lr": 2e-05, "memory": 5942, "data_time": 0.03641, "loss_rpn_cls": 0.02029, "loss_rpn_bbox": 0.04621, "loss_cls": 0.18066, "acc": 93.43335, "loss_bbox": 0.22097, "loss_mask": 0.22362, "loss": 0.69174, "time": 0.2144} -{"mode": "train", "epoch": 11, "iter": 5100, "lr": 2e-05, "memory": 5942, "data_time": 0.03906, "loss_rpn_cls": 0.02237, "loss_rpn_bbox": 0.04655, "loss_cls": 0.18291, "acc": 93.19263, "loss_bbox": 0.22688, "loss_mask": 0.22557, "loss": 0.70427, "time": 0.21073} -{"mode": "train", "epoch": 11, "iter": 5150, "lr": 2e-05, "memory": 5942, "data_time": 0.03806, "loss_rpn_cls": 0.0222, "loss_rpn_bbox": 0.04472, "loss_cls": 0.1784, "acc": 93.51831, "loss_bbox": 0.21424, "loss_mask": 0.22173, "loss": 0.6813, "time": 0.24807} -{"mode": "train", "epoch": 11, "iter": 5200, "lr": 2e-05, "memory": 5942, "data_time": 0.03778, "loss_rpn_cls": 0.02209, "loss_rpn_bbox": 0.04569, "loss_cls": 0.17327, "acc": 93.52051, "loss_bbox": 0.22009, "loss_mask": 0.2152, "loss": 0.67634, "time": 0.20605} -{"mode": "train", "epoch": 11, "iter": 5250, "lr": 2e-05, "memory": 5942, "data_time": 0.03977, "loss_rpn_cls": 0.02058, "loss_rpn_bbox": 0.04558, "loss_cls": 0.17831, "acc": 93.44702, "loss_bbox": 0.21786, "loss_mask": 0.2211, "loss": 0.68343, "time": 0.21502} -{"mode": 
"train", "epoch": 11, "iter": 5300, "lr": 2e-05, "memory": 5942, "data_time": 0.03592, "loss_rpn_cls": 0.02408, "loss_rpn_bbox": 0.04761, "loss_cls": 0.18388, "acc": 93.33911, "loss_bbox": 0.22933, "loss_mask": 0.22779, "loss": 0.7127, "time": 0.21724} -{"mode": "train", "epoch": 11, "iter": 5350, "lr": 2e-05, "memory": 5942, "data_time": 0.04836, "loss_rpn_cls": 0.02166, "loss_rpn_bbox": 0.04768, "loss_cls": 0.18393, "acc": 93.32544, "loss_bbox": 0.22975, "loss_mask": 0.22566, "loss": 0.70868, "time": 0.21309} -{"mode": "train", "epoch": 11, "iter": 5400, "lr": 2e-05, "memory": 5942, "data_time": 0.03795, "loss_rpn_cls": 0.02254, "loss_rpn_bbox": 0.04586, "loss_cls": 0.17813, "acc": 93.46899, "loss_bbox": 0.22177, "loss_mask": 0.22378, "loss": 0.69208, "time": 0.24105} -{"mode": "train", "epoch": 11, "iter": 5450, "lr": 2e-05, "memory": 5942, "data_time": 0.04094, "loss_rpn_cls": 0.02139, "loss_rpn_bbox": 0.04524, "loss_cls": 0.17887, "acc": 93.53613, "loss_bbox": 0.21599, "loss_mask": 0.21825, "loss": 0.67974, "time": 0.21521} -{"mode": "train", "epoch": 11, "iter": 5500, "lr": 2e-05, "memory": 5942, "data_time": 0.03276, "loss_rpn_cls": 0.02303, "loss_rpn_bbox": 0.04527, "loss_cls": 0.18115, "acc": 93.47998, "loss_bbox": 0.22117, "loss_mask": 0.22423, "loss": 0.69484, "time": 0.20433} -{"mode": "train", "epoch": 11, "iter": 5550, "lr": 2e-05, "memory": 5942, "data_time": 0.04386, "loss_rpn_cls": 0.02299, "loss_rpn_bbox": 0.04746, "loss_cls": 0.18684, "acc": 93.16309, "loss_bbox": 0.23082, "loss_mask": 0.22726, "loss": 0.71538, "time": 0.21466} -{"mode": "train", "epoch": 11, "iter": 5600, "lr": 2e-05, "memory": 5942, "data_time": 0.03048, "loss_rpn_cls": 0.02161, "loss_rpn_bbox": 0.04537, "loss_cls": 0.17199, "acc": 93.70117, "loss_bbox": 0.21222, "loss_mask": 0.22131, "loss": 0.6725, "time": 0.24322} -{"mode": "train", "epoch": 11, "iter": 5650, "lr": 2e-05, "memory": 5942, "data_time": 0.04234, "loss_rpn_cls": 0.02202, "loss_rpn_bbox": 0.04645, "loss_cls": 0.17718, "acc": 93.4563, "loss_bbox": 0.22104, "loss_mask": 0.22395, "loss": 0.69065, "time": 0.21806} -{"mode": "train", "epoch": 11, "iter": 5700, "lr": 2e-05, "memory": 5942, "data_time": 0.03862, "loss_rpn_cls": 0.02239, "loss_rpn_bbox": 0.04764, "loss_cls": 0.18161, "acc": 93.40601, "loss_bbox": 0.22829, "loss_mask": 0.22751, "loss": 0.70743, "time": 0.21935} -{"mode": "train", "epoch": 11, "iter": 5750, "lr": 2e-05, "memory": 5942, "data_time": 0.0436, "loss_rpn_cls": 0.02237, "loss_rpn_bbox": 0.04707, "loss_cls": 0.1769, "acc": 93.52905, "loss_bbox": 0.22287, "loss_mask": 0.22566, "loss": 0.69487, "time": 0.21649} -{"mode": "train", "epoch": 11, "iter": 5800, "lr": 2e-05, "memory": 5942, "data_time": 0.0372, "loss_rpn_cls": 0.02129, "loss_rpn_bbox": 0.04728, "loss_cls": 0.17732, "acc": 93.53467, "loss_bbox": 0.21678, "loss_mask": 0.2204, "loss": 0.68306, "time": 0.20868} -{"mode": "train", "epoch": 11, "iter": 5850, "lr": 2e-05, "memory": 5942, "data_time": 0.03209, "loss_rpn_cls": 0.02259, "loss_rpn_bbox": 0.04684, "loss_cls": 0.17855, "acc": 93.43628, "loss_bbox": 0.22472, "loss_mask": 0.22552, "loss": 0.69822, "time": 0.21084} -{"mode": "train", "epoch": 11, "iter": 5900, "lr": 2e-05, "memory": 5942, "data_time": 0.0378, "loss_rpn_cls": 0.02056, "loss_rpn_bbox": 0.04356, "loss_cls": 0.17628, "acc": 93.63696, "loss_bbox": 0.21445, "loss_mask": 0.22149, "loss": 0.67634, "time": 0.20484} -{"mode": "train", "epoch": 11, "iter": 5950, "lr": 2e-05, "memory": 5942, "data_time": 0.03072, "loss_rpn_cls": 0.02222, "loss_rpn_bbox": 
0.04533, "loss_cls": 0.17926, "acc": 93.43823, "loss_bbox": 0.21741, "loss_mask": 0.21869, "loss": 0.6829, "time": 0.20591} -{"mode": "train", "epoch": 11, "iter": 6000, "lr": 2e-05, "memory": 5942, "data_time": 0.03825, "loss_rpn_cls": 0.02367, "loss_rpn_bbox": 0.04766, "loss_cls": 0.1804, "acc": 93.34985, "loss_bbox": 0.22441, "loss_mask": 0.22337, "loss": 0.69952, "time": 0.21321} -{"mode": "train", "epoch": 11, "iter": 6050, "lr": 2e-05, "memory": 5942, "data_time": 0.03841, "loss_rpn_cls": 0.02348, "loss_rpn_bbox": 0.04758, "loss_cls": 0.17822, "acc": 93.57446, "loss_bbox": 0.2188, "loss_mask": 0.2215, "loss": 0.68957, "time": 0.20745} -{"mode": "train", "epoch": 11, "iter": 6100, "lr": 2e-05, "memory": 5942, "data_time": 0.03462, "loss_rpn_cls": 0.02254, "loss_rpn_bbox": 0.04722, "loss_cls": 0.18287, "acc": 93.3147, "loss_bbox": 0.22843, "loss_mask": 0.22545, "loss": 0.70651, "time": 0.20437} -{"mode": "train", "epoch": 11, "iter": 6150, "lr": 2e-05, "memory": 5942, "data_time": 0.03734, "loss_rpn_cls": 0.02115, "loss_rpn_bbox": 0.0433, "loss_cls": 0.17498, "acc": 93.52661, "loss_bbox": 0.22416, "loss_mask": 0.22544, "loss": 0.68902, "time": 0.20644} -{"mode": "train", "epoch": 11, "iter": 6200, "lr": 2e-05, "memory": 5942, "data_time": 0.03785, "loss_rpn_cls": 0.02371, "loss_rpn_bbox": 0.04801, "loss_cls": 0.18283, "acc": 93.33862, "loss_bbox": 0.22615, "loss_mask": 0.22447, "loss": 0.70518, "time": 0.20541} -{"mode": "train", "epoch": 11, "iter": 6250, "lr": 2e-05, "memory": 5942, "data_time": 0.03911, "loss_rpn_cls": 0.02163, "loss_rpn_bbox": 0.04474, "loss_cls": 0.17712, "acc": 93.5564, "loss_bbox": 0.22137, "loss_mask": 0.22403, "loss": 0.68888, "time": 0.20628} -{"mode": "train", "epoch": 11, "iter": 6300, "lr": 2e-05, "memory": 5942, "data_time": 0.03416, "loss_rpn_cls": 0.02315, "loss_rpn_bbox": 0.0469, "loss_cls": 0.18762, "acc": 93.16602, "loss_bbox": 0.23238, "loss_mask": 0.23385, "loss": 0.7239, "time": 0.21132} -{"mode": "train", "epoch": 11, "iter": 6350, "lr": 2e-05, "memory": 5942, "data_time": 0.03658, "loss_rpn_cls": 0.02138, "loss_rpn_bbox": 0.04602, "loss_cls": 0.17767, "acc": 93.40894, "loss_bbox": 0.22492, "loss_mask": 0.22736, "loss": 0.69735, "time": 0.20289} -{"mode": "train", "epoch": 11, "iter": 6400, "lr": 2e-05, "memory": 5942, "data_time": 0.03136, "loss_rpn_cls": 0.02247, "loss_rpn_bbox": 0.04461, "loss_cls": 0.18346, "acc": 93.18335, "loss_bbox": 0.22558, "loss_mask": 0.22611, "loss": 0.70223, "time": 0.21262} -{"mode": "train", "epoch": 11, "iter": 6450, "lr": 2e-05, "memory": 5942, "data_time": 0.04283, "loss_rpn_cls": 0.02187, "loss_rpn_bbox": 0.04624, "loss_cls": 0.18698, "acc": 93.21118, "loss_bbox": 0.23463, "loss_mask": 0.22825, "loss": 0.71798, "time": 0.21787} -{"mode": "train", "epoch": 11, "iter": 6500, "lr": 2e-05, "memory": 5942, "data_time": 0.03807, "loss_rpn_cls": 0.02251, "loss_rpn_bbox": 0.04493, "loss_cls": 0.17314, "acc": 93.5459, "loss_bbox": 0.22219, "loss_mask": 0.2281, "loss": 0.69087, "time": 0.21043} -{"mode": "train", "epoch": 11, "iter": 6550, "lr": 2e-05, "memory": 5942, "data_time": 0.03059, "loss_rpn_cls": 0.02349, "loss_rpn_bbox": 0.04738, "loss_cls": 0.18738, "acc": 93.14233, "loss_bbox": 0.23394, "loss_mask": 0.22869, "loss": 0.72088, "time": 0.21812} -{"mode": "train", "epoch": 11, "iter": 6600, "lr": 2e-05, "memory": 5942, "data_time": 0.03365, "loss_rpn_cls": 0.02268, "loss_rpn_bbox": 0.04694, "loss_cls": 0.18374, "acc": 93.25269, "loss_bbox": 0.2268, "loss_mask": 0.22672, "loss": 0.70687, "time": 0.21141} -{"mode": 
"train", "epoch": 11, "iter": 6650, "lr": 2e-05, "memory": 5942, "data_time": 0.03757, "loss_rpn_cls": 0.02282, "loss_rpn_bbox": 0.04729, "loss_cls": 0.18265, "acc": 93.27979, "loss_bbox": 0.22817, "loss_mask": 0.22944, "loss": 0.71037, "time": 0.21563} -{"mode": "train", "epoch": 11, "iter": 6700, "lr": 2e-05, "memory": 5942, "data_time": 0.04063, "loss_rpn_cls": 0.02161, "loss_rpn_bbox": 0.04692, "loss_cls": 0.17993, "acc": 93.51782, "loss_bbox": 0.22517, "loss_mask": 0.22487, "loss": 0.6985, "time": 0.21175} -{"mode": "train", "epoch": 11, "iter": 6750, "lr": 2e-05, "memory": 5942, "data_time": 0.04188, "loss_rpn_cls": 0.02351, "loss_rpn_bbox": 0.04905, "loss_cls": 0.19098, "acc": 92.94873, "loss_bbox": 0.23936, "loss_mask": 0.22955, "loss": 0.73245, "time": 0.21525} -{"mode": "train", "epoch": 11, "iter": 6800, "lr": 2e-05, "memory": 5942, "data_time": 0.03639, "loss_rpn_cls": 0.02088, "loss_rpn_bbox": 0.04664, "loss_cls": 0.18316, "acc": 93.3042, "loss_bbox": 0.22916, "loss_mask": 0.22595, "loss": 0.70579, "time": 0.20654} -{"mode": "train", "epoch": 11, "iter": 6850, "lr": 2e-05, "memory": 5942, "data_time": 0.04209, "loss_rpn_cls": 0.02254, "loss_rpn_bbox": 0.04625, "loss_cls": 0.18225, "acc": 93.35498, "loss_bbox": 0.22315, "loss_mask": 0.22228, "loss": 0.69648, "time": 0.22006} -{"mode": "train", "epoch": 11, "iter": 6900, "lr": 2e-05, "memory": 5942, "data_time": 0.03644, "loss_rpn_cls": 0.02182, "loss_rpn_bbox": 0.0468, "loss_cls": 0.1798, "acc": 93.49463, "loss_bbox": 0.22146, "loss_mask": 0.22446, "loss": 0.69434, "time": 0.21051} -{"mode": "train", "epoch": 11, "iter": 6950, "lr": 2e-05, "memory": 5942, "data_time": 0.05276, "loss_rpn_cls": 0.02354, "loss_rpn_bbox": 0.0457, "loss_cls": 0.18667, "acc": 93.24756, "loss_bbox": 0.23451, "loss_mask": 0.23056, "loss": 0.72097, "time": 0.20963} -{"mode": "train", "epoch": 11, "iter": 7000, "lr": 2e-05, "memory": 5942, "data_time": 0.03697, "loss_rpn_cls": 0.02095, "loss_rpn_bbox": 0.0468, "loss_cls": 0.18088, "acc": 93.41602, "loss_bbox": 0.22496, "loss_mask": 0.22545, "loss": 0.69904, "time": 0.2128} -{"mode": "train", "epoch": 11, "iter": 7050, "lr": 2e-05, "memory": 5942, "data_time": 0.04317, "loss_rpn_cls": 0.02288, "loss_rpn_bbox": 0.04819, "loss_cls": 0.18263, "acc": 93.29492, "loss_bbox": 0.23075, "loss_mask": 0.22986, "loss": 0.71431, "time": 0.2135} -{"mode": "train", "epoch": 11, "iter": 7100, "lr": 2e-05, "memory": 5942, "data_time": 0.04221, "loss_rpn_cls": 0.02198, "loss_rpn_bbox": 0.04663, "loss_cls": 0.18244, "acc": 93.2998, "loss_bbox": 0.22885, "loss_mask": 0.22878, "loss": 0.70868, "time": 0.21495} -{"mode": "train", "epoch": 11, "iter": 7150, "lr": 2e-05, "memory": 5942, "data_time": 0.03755, "loss_rpn_cls": 0.02217, "loss_rpn_bbox": 0.046, "loss_cls": 0.17896, "acc": 93.48901, "loss_bbox": 0.22289, "loss_mask": 0.2236, "loss": 0.69363, "time": 0.20613} -{"mode": "train", "epoch": 11, "iter": 7200, "lr": 2e-05, "memory": 5942, "data_time": 0.03423, "loss_rpn_cls": 0.02284, "loss_rpn_bbox": 0.04766, "loss_cls": 0.18997, "acc": 93.08374, "loss_bbox": 0.23384, "loss_mask": 0.22232, "loss": 0.71663, "time": 0.21429} -{"mode": "train", "epoch": 11, "iter": 7250, "lr": 2e-05, "memory": 5942, "data_time": 0.04547, "loss_rpn_cls": 0.022, "loss_rpn_bbox": 0.04675, "loss_cls": 0.18308, "acc": 93.28174, "loss_bbox": 0.22278, "loss_mask": 0.22031, "loss": 0.69493, "time": 0.21089} -{"mode": "train", "epoch": 11, "iter": 7300, "lr": 2e-05, "memory": 5942, "data_time": 0.03485, "loss_rpn_cls": 0.02149, "loss_rpn_bbox": 0.0461, 
"loss_cls": 0.18272, "acc": 93.34351, "loss_bbox": 0.2296, "loss_mask": 0.22793, "loss": 0.70783, "time": 0.20961} -{"mode": "val", "epoch": 11, "iter": 625, "lr": 2e-05, "bbox_mAP": 0.3949, "bbox_mAP_50": 0.6155, "bbox_mAP_75": 0.4298, "bbox_mAP_s": 0.2383, "bbox_mAP_m": 0.4287, "bbox_mAP_l": 0.5117, "bbox_mAP_copypaste": "0.3949 0.6155 0.4298 0.2383 0.4287 0.5117", "segm_mAP": 0.3713, "segm_mAP_50": 0.587, "segm_mAP_75": 0.3982, "segm_mAP_s": 0.1842, "segm_mAP_m": 0.3994, "segm_mAP_l": 0.5371, "segm_mAP_copypaste": "0.3713 0.5870 0.3982 0.1842 0.3994 0.5371"} -{"mode": "train", "epoch": 12, "iter": 50, "lr": 0.0, "memory": 5942, "data_time": 0.11042, "loss_rpn_cls": 0.0237, "loss_rpn_bbox": 0.04757, "loss_cls": 0.1799, "acc": 93.31714, "loss_bbox": 0.23434, "loss_mask": 0.22618, "loss": 0.71169, "time": 0.30695} -{"mode": "train", "epoch": 12, "iter": 100, "lr": 0.0, "memory": 5942, "data_time": 0.03942, "loss_rpn_cls": 0.02291, "loss_rpn_bbox": 0.04593, "loss_cls": 0.18004, "acc": 93.37207, "loss_bbox": 0.22877, "loss_mask": 0.22607, "loss": 0.70372, "time": 0.22841} -{"mode": "train", "epoch": 12, "iter": 150, "lr": 0.0, "memory": 5942, "data_time": 0.04558, "loss_rpn_cls": 0.02103, "loss_rpn_bbox": 0.04535, "loss_cls": 0.17862, "acc": 93.40405, "loss_bbox": 0.22778, "loss_mask": 0.22595, "loss": 0.69873, "time": 0.24298} -{"mode": "train", "epoch": 12, "iter": 200, "lr": 0.0, "memory": 5942, "data_time": 0.04411, "loss_rpn_cls": 0.02001, "loss_rpn_bbox": 0.04537, "loss_cls": 0.17209, "acc": 93.62134, "loss_bbox": 0.21809, "loss_mask": 0.21799, "loss": 0.67355, "time": 0.22873} -{"mode": "train", "epoch": 12, "iter": 250, "lr": 0.0, "memory": 5942, "data_time": 0.03916, "loss_rpn_cls": 0.02207, "loss_rpn_bbox": 0.04715, "loss_cls": 0.18253, "acc": 93.15747, "loss_bbox": 0.23187, "loss_mask": 0.22748, "loss": 0.7111, "time": 0.23324} -{"mode": "train", "epoch": 12, "iter": 300, "lr": 0.0, "memory": 5942, "data_time": 0.04561, "loss_rpn_cls": 0.01962, "loss_rpn_bbox": 0.04367, "loss_cls": 0.17388, "acc": 93.47852, "loss_bbox": 0.21868, "loss_mask": 0.21699, "loss": 0.67284, "time": 0.22322} -{"mode": "train", "epoch": 12, "iter": 350, "lr": 0.0, "memory": 5942, "data_time": 0.04695, "loss_rpn_cls": 0.02221, "loss_rpn_bbox": 0.04707, "loss_cls": 0.18243, "acc": 93.24414, "loss_bbox": 0.23074, "loss_mask": 0.22507, "loss": 0.70752, "time": 0.23115} -{"mode": "train", "epoch": 12, "iter": 400, "lr": 0.0, "memory": 5942, "data_time": 0.04827, "loss_rpn_cls": 0.02215, "loss_rpn_bbox": 0.04673, "loss_cls": 0.18133, "acc": 93.32666, "loss_bbox": 0.22532, "loss_mask": 0.22541, "loss": 0.70093, "time": 0.22407} -{"mode": "train", "epoch": 12, "iter": 450, "lr": 0.0, "memory": 5942, "data_time": 0.03813, "loss_rpn_cls": 0.02226, "loss_rpn_bbox": 0.05008, "loss_cls": 0.17995, "acc": 93.36255, "loss_bbox": 0.2228, "loss_mask": 0.22524, "loss": 0.70033, "time": 0.22578} -{"mode": "train", "epoch": 12, "iter": 500, "lr": 0.0, "memory": 5942, "data_time": 0.03341, "loss_rpn_cls": 0.02145, "loss_rpn_bbox": 0.04526, "loss_cls": 0.17501, "acc": 93.4834, "loss_bbox": 0.22477, "loss_mask": 0.22527, "loss": 0.69176, "time": 0.21799} -{"mode": "train", "epoch": 12, "iter": 550, "lr": 0.0, "memory": 5942, "data_time": 0.03281, "loss_rpn_cls": 0.0207, "loss_rpn_bbox": 0.0466, "loss_cls": 0.17454, "acc": 93.57544, "loss_bbox": 0.21681, "loss_mask": 0.21736, "loss": 0.67602, "time": 0.22379} -{"mode": "train", "epoch": 12, "iter": 600, "lr": 0.0, "memory": 5942, "data_time": 0.0345, "loss_rpn_cls": 0.02093, 
"loss_rpn_bbox": 0.04463, "loss_cls": 0.17597, "acc": 93.48535, "loss_bbox": 0.22387, "loss_mask": 0.22369, "loss": 0.68909, "time": 0.21803} -{"mode": "train", "epoch": 12, "iter": 650, "lr": 0.0, "memory": 5942, "data_time": 0.03405, "loss_rpn_cls": 0.02093, "loss_rpn_bbox": 0.04685, "loss_cls": 0.17439, "acc": 93.53149, "loss_bbox": 0.21955, "loss_mask": 0.22195, "loss": 0.68368, "time": 0.22205} -{"mode": "train", "epoch": 12, "iter": 700, "lr": 0.0, "memory": 5942, "data_time": 0.03724, "loss_rpn_cls": 0.0219, "loss_rpn_bbox": 0.04806, "loss_cls": 0.17589, "acc": 93.49487, "loss_bbox": 0.21899, "loss_mask": 0.22027, "loss": 0.68511, "time": 0.23203} -{"mode": "train", "epoch": 12, "iter": 750, "lr": 0.0, "memory": 5942, "data_time": 0.03709, "loss_rpn_cls": 0.02322, "loss_rpn_bbox": 0.04818, "loss_cls": 0.17955, "acc": 93.33618, "loss_bbox": 0.22317, "loss_mask": 0.22571, "loss": 0.69984, "time": 0.22603} -{"mode": "train", "epoch": 12, "iter": 800, "lr": 0.0, "memory": 5942, "data_time": 0.03231, "loss_rpn_cls": 0.02143, "loss_rpn_bbox": 0.04611, "loss_cls": 0.17511, "acc": 93.58301, "loss_bbox": 0.22044, "loss_mask": 0.22215, "loss": 0.68525, "time": 0.22006} -{"mode": "train", "epoch": 12, "iter": 850, "lr": 0.0, "memory": 5942, "data_time": 0.03366, "loss_rpn_cls": 0.02167, "loss_rpn_bbox": 0.04251, "loss_cls": 0.16784, "acc": 93.82886, "loss_bbox": 0.21701, "loss_mask": 0.22058, "loss": 0.66961, "time": 0.21163} -{"mode": "train", "epoch": 12, "iter": 900, "lr": 0.0, "memory": 5942, "data_time": 0.03603, "loss_rpn_cls": 0.02074, "loss_rpn_bbox": 0.04496, "loss_cls": 0.17487, "acc": 93.52319, "loss_bbox": 0.21767, "loss_mask": 0.21836, "loss": 0.67661, "time": 0.22032} -{"mode": "train", "epoch": 12, "iter": 950, "lr": 0.0, "memory": 5942, "data_time": 0.03723, "loss_rpn_cls": 0.02011, "loss_rpn_bbox": 0.04463, "loss_cls": 0.17651, "acc": 93.53467, "loss_bbox": 0.22213, "loss_mask": 0.22372, "loss": 0.6871, "time": 0.21808} -{"mode": "train", "epoch": 12, "iter": 1000, "lr": 0.0, "memory": 5942, "data_time": 0.03292, "loss_rpn_cls": 0.02157, "loss_rpn_bbox": 0.04483, "loss_cls": 0.17364, "acc": 93.58691, "loss_bbox": 0.22143, "loss_mask": 0.21971, "loss": 0.68118, "time": 0.21319} -{"mode": "train", "epoch": 12, "iter": 1050, "lr": 0.0, "memory": 5942, "data_time": 0.03582, "loss_rpn_cls": 0.0212, "loss_rpn_bbox": 0.04507, "loss_cls": 0.17586, "acc": 93.57324, "loss_bbox": 0.21969, "loss_mask": 0.22605, "loss": 0.68787, "time": 0.21082} -{"mode": "train", "epoch": 12, "iter": 1100, "lr": 0.0, "memory": 5942, "data_time": 0.02654, "loss_rpn_cls": 0.02089, "loss_rpn_bbox": 0.04469, "loss_cls": 0.17186, "acc": 93.75195, "loss_bbox": 0.21101, "loss_mask": 0.21987, "loss": 0.66833, "time": 0.21328} -{"mode": "train", "epoch": 12, "iter": 1150, "lr": 0.0, "memory": 5942, "data_time": 0.03815, "loss_rpn_cls": 0.02031, "loss_rpn_bbox": 0.04462, "loss_cls": 0.17299, "acc": 93.48706, "loss_bbox": 0.22119, "loss_mask": 0.22647, "loss": 0.68558, "time": 0.22396} -{"mode": "train", "epoch": 12, "iter": 1200, "lr": 0.0, "memory": 5942, "data_time": 0.02886, "loss_rpn_cls": 0.02051, "loss_rpn_bbox": 0.04405, "loss_cls": 0.1695, "acc": 93.67627, "loss_bbox": 0.21645, "loss_mask": 0.21885, "loss": 0.66936, "time": 0.20859} -{"mode": "train", "epoch": 12, "iter": 1250, "lr": 0.0, "memory": 5942, "data_time": 0.03345, "loss_rpn_cls": 0.02085, "loss_rpn_bbox": 0.04301, "loss_cls": 0.17283, "acc": 93.67871, "loss_bbox": 0.21039, "loss_mask": 0.21712, "loss": 0.6642, "time": 0.2198} -{"mode": "train", 
"epoch": 12, "iter": 1300, "lr": 0.0, "memory": 5942, "data_time": 0.03777, "loss_rpn_cls": 0.02222, "loss_rpn_bbox": 0.04634, "loss_cls": 0.17643, "acc": 93.49487, "loss_bbox": 0.22545, "loss_mask": 0.22109, "loss": 0.69153, "time": 0.21719} -{"mode": "train", "epoch": 12, "iter": 1350, "lr": 0.0, "memory": 5942, "data_time": 0.03933, "loss_rpn_cls": 0.02094, "loss_rpn_bbox": 0.04591, "loss_cls": 0.17715, "acc": 93.44141, "loss_bbox": 0.22123, "loss_mask": 0.21913, "loss": 0.68437, "time": 0.2149} -{"mode": "train", "epoch": 12, "iter": 1400, "lr": 0.0, "memory": 5942, "data_time": 0.03341, "loss_rpn_cls": 0.0201, "loss_rpn_bbox": 0.04356, "loss_cls": 0.17225, "acc": 93.70264, "loss_bbox": 0.21965, "loss_mask": 0.22288, "loss": 0.67845, "time": 0.20657} -{"mode": "train", "epoch": 12, "iter": 1450, "lr": 0.0, "memory": 5942, "data_time": 0.03234, "loss_rpn_cls": 0.02166, "loss_rpn_bbox": 0.0471, "loss_cls": 0.17503, "acc": 93.48657, "loss_bbox": 0.2196, "loss_mask": 0.2183, "loss": 0.68169, "time": 0.21586} -{"mode": "train", "epoch": 12, "iter": 1500, "lr": 0.0, "memory": 5942, "data_time": 0.02999, "loss_rpn_cls": 0.02017, "loss_rpn_bbox": 0.04367, "loss_cls": 0.17303, "acc": 93.5979, "loss_bbox": 0.21474, "loss_mask": 0.22311, "loss": 0.67472, "time": 0.20919} -{"mode": "train", "epoch": 12, "iter": 1550, "lr": 0.0, "memory": 5942, "data_time": 0.03911, "loss_rpn_cls": 0.01969, "loss_rpn_bbox": 0.04521, "loss_cls": 0.1676, "acc": 93.76562, "loss_bbox": 0.21208, "loss_mask": 0.21705, "loss": 0.66162, "time": 0.21656} -{"mode": "train", "epoch": 12, "iter": 1600, "lr": 0.0, "memory": 5942, "data_time": 0.04329, "loss_rpn_cls": 0.01964, "loss_rpn_bbox": 0.04363, "loss_cls": 0.16817, "acc": 93.79199, "loss_bbox": 0.20981, "loss_mask": 0.21755, "loss": 0.65882, "time": 0.21562} -{"mode": "train", "epoch": 12, "iter": 1650, "lr": 0.0, "memory": 5942, "data_time": 0.03908, "loss_rpn_cls": 0.02263, "loss_rpn_bbox": 0.0437, "loss_cls": 0.18216, "acc": 93.2605, "loss_bbox": 0.22396, "loss_mask": 0.22469, "loss": 0.69715, "time": 0.20455} -{"mode": "train", "epoch": 12, "iter": 1700, "lr": 0.0, "memory": 5942, "data_time": 0.03869, "loss_rpn_cls": 0.02029, "loss_rpn_bbox": 0.04403, "loss_cls": 0.17242, "acc": 93.64233, "loss_bbox": 0.21846, "loss_mask": 0.22011, "loss": 0.67531, "time": 0.21146} -{"mode": "train", "epoch": 12, "iter": 1750, "lr": 0.0, "memory": 5942, "data_time": 0.03429, "loss_rpn_cls": 0.02141, "loss_rpn_bbox": 0.04731, "loss_cls": 0.17921, "acc": 93.39893, "loss_bbox": 0.22524, "loss_mask": 0.224, "loss": 0.69717, "time": 0.20972} -{"mode": "train", "epoch": 12, "iter": 1800, "lr": 0.0, "memory": 5942, "data_time": 0.03221, "loss_rpn_cls": 0.02162, "loss_rpn_bbox": 0.0449, "loss_cls": 0.17558, "acc": 93.53564, "loss_bbox": 0.21675, "loss_mask": 0.22202, "loss": 0.68087, "time": 0.20079} -{"mode": "train", "epoch": 12, "iter": 1850, "lr": 0.0, "memory": 5942, "data_time": 0.03975, "loss_rpn_cls": 0.01941, "loss_rpn_bbox": 0.04479, "loss_cls": 0.17267, "acc": 93.56201, "loss_bbox": 0.22106, "loss_mask": 0.2193, "loss": 0.67724, "time": 0.20711} -{"mode": "train", "epoch": 12, "iter": 1900, "lr": 0.0, "memory": 5942, "data_time": 0.03804, "loss_rpn_cls": 0.02249, "loss_rpn_bbox": 0.04671, "loss_cls": 0.18296, "acc": 93.27051, "loss_bbox": 0.22405, "loss_mask": 0.22316, "loss": 0.69936, "time": 0.21313} -{"mode": "train", "epoch": 12, "iter": 1950, "lr": 0.0, "memory": 5942, "data_time": 0.03284, "loss_rpn_cls": 0.02366, "loss_rpn_bbox": 0.04829, "loss_cls": 0.1813, "acc": 
93.2771, "loss_bbox": 0.22342, "loss_mask": 0.22634, "loss": 0.70301, "time": 0.22291} -{"mode": "train", "epoch": 12, "iter": 2000, "lr": 0.0, "memory": 5942, "data_time": 0.03517, "loss_rpn_cls": 0.02192, "loss_rpn_bbox": 0.04517, "loss_cls": 0.17199, "acc": 93.69116, "loss_bbox": 0.21479, "loss_mask": 0.21965, "loss": 0.67351, "time": 0.20497} -{"mode": "train", "epoch": 12, "iter": 2050, "lr": 0.0, "memory": 5942, "data_time": 0.03458, "loss_rpn_cls": 0.02094, "loss_rpn_bbox": 0.04744, "loss_cls": 0.17969, "acc": 93.38428, "loss_bbox": 0.22553, "loss_mask": 0.22271, "loss": 0.69631, "time": 0.21226} -{"mode": "train", "epoch": 12, "iter": 2100, "lr": 0.0, "memory": 5942, "data_time": 0.03285, "loss_rpn_cls": 0.02303, "loss_rpn_bbox": 0.04628, "loss_cls": 0.18006, "acc": 93.33374, "loss_bbox": 0.22835, "loss_mask": 0.22596, "loss": 0.70367, "time": 0.21273} -{"mode": "train", "epoch": 12, "iter": 2150, "lr": 0.0, "memory": 5942, "data_time": 0.03299, "loss_rpn_cls": 0.02257, "loss_rpn_bbox": 0.04541, "loss_cls": 0.17057, "acc": 93.64307, "loss_bbox": 0.22162, "loss_mask": 0.21869, "loss": 0.67886, "time": 0.20541} -{"mode": "train", "epoch": 12, "iter": 2200, "lr": 0.0, "memory": 5942, "data_time": 0.03803, "loss_rpn_cls": 0.02246, "loss_rpn_bbox": 0.04609, "loss_cls": 0.17025, "acc": 93.67896, "loss_bbox": 0.22094, "loss_mask": 0.21986, "loss": 0.6796, "time": 0.2134} -{"mode": "train", "epoch": 12, "iter": 2250, "lr": 0.0, "memory": 5942, "data_time": 0.0415, "loss_rpn_cls": 0.02189, "loss_rpn_bbox": 0.04669, "loss_cls": 0.17736, "acc": 93.4231, "loss_bbox": 0.22529, "loss_mask": 0.22471, "loss": 0.69595, "time": 0.20985} -{"mode": "train", "epoch": 12, "iter": 2300, "lr": 0.0, "memory": 5942, "data_time": 0.03265, "loss_rpn_cls": 0.02078, "loss_rpn_bbox": 0.04537, "loss_cls": 0.17098, "acc": 93.67163, "loss_bbox": 0.21471, "loss_mask": 0.21891, "loss": 0.67075, "time": 0.20248} -{"mode": "train", "epoch": 12, "iter": 2350, "lr": 0.0, "memory": 5942, "data_time": 0.03345, "loss_rpn_cls": 0.01985, "loss_rpn_bbox": 0.04565, "loss_cls": 0.16783, "acc": 93.76831, "loss_bbox": 0.21394, "loss_mask": 0.21789, "loss": 0.66516, "time": 0.19912} -{"mode": "train", "epoch": 12, "iter": 2400, "lr": 0.0, "memory": 5942, "data_time": 0.03297, "loss_rpn_cls": 0.02023, "loss_rpn_bbox": 0.04152, "loss_cls": 0.17215, "acc": 93.65381, "loss_bbox": 0.21932, "loss_mask": 0.22386, "loss": 0.67709, "time": 0.19809} -{"mode": "train", "epoch": 12, "iter": 2450, "lr": 0.0, "memory": 5942, "data_time": 0.03189, "loss_rpn_cls": 0.02295, "loss_rpn_bbox": 0.04912, "loss_cls": 0.18859, "acc": 92.94482, "loss_bbox": 0.23923, "loss_mask": 0.23225, "loss": 0.73214, "time": 0.21772} -{"mode": "train", "epoch": 12, "iter": 2500, "lr": 0.0, "memory": 5942, "data_time": 0.03527, "loss_rpn_cls": 0.02236, "loss_rpn_bbox": 0.04777, "loss_cls": 0.17764, "acc": 93.44629, "loss_bbox": 0.22007, "loss_mask": 0.22945, "loss": 0.69728, "time": 0.20655} -{"mode": "train", "epoch": 12, "iter": 2550, "lr": 0.0, "memory": 5942, "data_time": 0.04491, "loss_rpn_cls": 0.02111, "loss_rpn_bbox": 0.04616, "loss_cls": 0.17426, "acc": 93.56934, "loss_bbox": 0.22189, "loss_mask": 0.22873, "loss": 0.69215, "time": 0.20522} -{"mode": "train", "epoch": 12, "iter": 2600, "lr": 0.0, "memory": 5942, "data_time": 0.0364, "loss_rpn_cls": 0.0216, "loss_rpn_bbox": 0.0472, "loss_cls": 0.17498, "acc": 93.55347, "loss_bbox": 0.2188, "loss_mask": 0.22099, "loss": 0.68357, "time": 0.2057} -{"mode": "train", "epoch": 12, "iter": 2650, "lr": 0.0, "memory": 
5942, "data_time": 0.03547, "loss_rpn_cls": 0.02074, "loss_rpn_bbox": 0.04785, "loss_cls": 0.18375, "acc": 93.16187, "loss_bbox": 0.23507, "loss_mask": 0.2249, "loss": 0.71231, "time": 0.20529} -{"mode": "train", "epoch": 12, "iter": 2700, "lr": 0.0, "memory": 5942, "data_time": 0.0336, "loss_rpn_cls": 0.02014, "loss_rpn_bbox": 0.04502, "loss_cls": 0.17406, "acc": 93.46411, "loss_bbox": 0.2219, "loss_mask": 0.22075, "loss": 0.68187, "time": 0.20884} -{"mode": "train", "epoch": 12, "iter": 2750, "lr": 0.0, "memory": 5942, "data_time": 0.04984, "loss_rpn_cls": 0.01983, "loss_rpn_bbox": 0.04539, "loss_cls": 0.17502, "acc": 93.51416, "loss_bbox": 0.21965, "loss_mask": 0.21771, "loss": 0.67758, "time": 0.21463} -{"mode": "train", "epoch": 12, "iter": 2800, "lr": 0.0, "memory": 5942, "data_time": 0.04803, "loss_rpn_cls": 0.02036, "loss_rpn_bbox": 0.04777, "loss_cls": 0.17848, "acc": 93.44946, "loss_bbox": 0.222, "loss_mask": 0.22857, "loss": 0.69718, "time": 0.21496} -{"mode": "train", "epoch": 12, "iter": 2850, "lr": 0.0, "memory": 5942, "data_time": 0.04139, "loss_rpn_cls": 0.02315, "loss_rpn_bbox": 0.04915, "loss_cls": 0.18079, "acc": 93.2688, "loss_bbox": 0.22843, "loss_mask": 0.22699, "loss": 0.70851, "time": 0.21438} -{"mode": "train", "epoch": 12, "iter": 2900, "lr": 0.0, "memory": 5942, "data_time": 0.03713, "loss_rpn_cls": 0.02148, "loss_rpn_bbox": 0.04843, "loss_cls": 0.17539, "acc": 93.47974, "loss_bbox": 0.22292, "loss_mask": 0.22446, "loss": 0.69269, "time": 0.21145} -{"mode": "train", "epoch": 12, "iter": 2950, "lr": 0.0, "memory": 5942, "data_time": 0.04266, "loss_rpn_cls": 0.02094, "loss_rpn_bbox": 0.04469, "loss_cls": 0.17315, "acc": 93.66064, "loss_bbox": 0.21748, "loss_mask": 0.22248, "loss": 0.67875, "time": 0.21062} -{"mode": "train", "epoch": 12, "iter": 3000, "lr": 0.0, "memory": 5942, "data_time": 0.04647, "loss_rpn_cls": 0.02158, "loss_rpn_bbox": 0.04563, "loss_cls": 0.17013, "acc": 93.72119, "loss_bbox": 0.21301, "loss_mask": 0.21873, "loss": 0.66908, "time": 0.20778} -{"mode": "train", "epoch": 12, "iter": 3050, "lr": 0.0, "memory": 5942, "data_time": 0.04467, "loss_rpn_cls": 0.02205, "loss_rpn_bbox": 0.04822, "loss_cls": 0.18389, "acc": 93.24243, "loss_bbox": 0.23002, "loss_mask": 0.22787, "loss": 0.71205, "time": 0.21467} -{"mode": "train", "epoch": 12, "iter": 3100, "lr": 0.0, "memory": 5942, "data_time": 0.04203, "loss_rpn_cls": 0.02158, "loss_rpn_bbox": 0.0446, "loss_cls": 0.1776, "acc": 93.47607, "loss_bbox": 0.22534, "loss_mask": 0.22739, "loss": 0.69651, "time": 0.21166} -{"mode": "train", "epoch": 12, "iter": 3150, "lr": 0.0, "memory": 5942, "data_time": 0.04375, "loss_rpn_cls": 0.02423, "loss_rpn_bbox": 0.04992, "loss_cls": 0.18429, "acc": 93.2478, "loss_bbox": 0.22872, "loss_mask": 0.22646, "loss": 0.71361, "time": 0.21396} -{"mode": "train", "epoch": 12, "iter": 3200, "lr": 0.0, "memory": 5942, "data_time": 0.04023, "loss_rpn_cls": 0.02139, "loss_rpn_bbox": 0.04636, "loss_cls": 0.17361, "acc": 93.57788, "loss_bbox": 0.22104, "loss_mask": 0.22732, "loss": 0.68973, "time": 0.21045} -{"mode": "train", "epoch": 12, "iter": 3250, "lr": 0.0, "memory": 5942, "data_time": 0.05509, "loss_rpn_cls": 0.02291, "loss_rpn_bbox": 0.04705, "loss_cls": 0.18171, "acc": 93.27539, "loss_bbox": 0.226, "loss_mask": 0.22683, "loss": 0.7045, "time": 0.22703} -{"mode": "train", "epoch": 12, "iter": 3300, "lr": 0.0, "memory": 5942, "data_time": 0.04609, "loss_rpn_cls": 0.02309, "loss_rpn_bbox": 0.04816, "loss_cls": 0.18741, "acc": 93.06396, "loss_bbox": 0.23501, "loss_mask": 0.23314, 
"loss": 0.72682, "time": 0.21816} -{"mode": "train", "epoch": 12, "iter": 3350, "lr": 0.0, "memory": 5942, "data_time": 0.0376, "loss_rpn_cls": 0.02051, "loss_rpn_bbox": 0.04613, "loss_cls": 0.16946, "acc": 93.72485, "loss_bbox": 0.21507, "loss_mask": 0.22208, "loss": 0.67326, "time": 0.2114} -{"mode": "train", "epoch": 12, "iter": 3400, "lr": 0.0, "memory": 5942, "data_time": 0.04503, "loss_rpn_cls": 0.02059, "loss_rpn_bbox": 0.04612, "loss_cls": 0.17563, "acc": 93.45166, "loss_bbox": 0.22859, "loss_mask": 0.22678, "loss": 0.69771, "time": 0.21296} -{"mode": "train", "epoch": 12, "iter": 3450, "lr": 0.0, "memory": 5942, "data_time": 0.03858, "loss_rpn_cls": 0.02007, "loss_rpn_bbox": 0.04463, "loss_cls": 0.17194, "acc": 93.66064, "loss_bbox": 0.21517, "loss_mask": 0.2184, "loss": 0.67022, "time": 0.21029} -{"mode": "train", "epoch": 12, "iter": 3500, "lr": 0.0, "memory": 5942, "data_time": 0.03765, "loss_rpn_cls": 0.02173, "loss_rpn_bbox": 0.04651, "loss_cls": 0.17609, "acc": 93.51782, "loss_bbox": 0.21969, "loss_mask": 0.21889, "loss": 0.68291, "time": 0.21405} -{"mode": "train", "epoch": 12, "iter": 3550, "lr": 0.0, "memory": 5942, "data_time": 0.03826, "loss_rpn_cls": 0.02127, "loss_rpn_bbox": 0.04625, "loss_cls": 0.17445, "acc": 93.45776, "loss_bbox": 0.22443, "loss_mask": 0.22343, "loss": 0.68984, "time": 0.21263} -{"mode": "train", "epoch": 12, "iter": 3600, "lr": 0.0, "memory": 5942, "data_time": 0.03674, "loss_rpn_cls": 0.02109, "loss_rpn_bbox": 0.04722, "loss_cls": 0.16968, "acc": 93.67017, "loss_bbox": 0.21739, "loss_mask": 0.22098, "loss": 0.67635, "time": 0.20394} -{"mode": "train", "epoch": 12, "iter": 3650, "lr": 0.0, "memory": 5942, "data_time": 0.03232, "loss_rpn_cls": 0.01977, "loss_rpn_bbox": 0.04356, "loss_cls": 0.16657, "acc": 93.82642, "loss_bbox": 0.21232, "loss_mask": 0.22389, "loss": 0.6661, "time": 0.20476} -{"mode": "train", "epoch": 12, "iter": 3700, "lr": 0.0, "memory": 5942, "data_time": 0.04607, "loss_rpn_cls": 0.02099, "loss_rpn_bbox": 0.04563, "loss_cls": 0.17961, "acc": 93.38477, "loss_bbox": 0.2265, "loss_mask": 0.22688, "loss": 0.69961, "time": 0.21482} -{"mode": "train", "epoch": 12, "iter": 3750, "lr": 0.0, "memory": 5942, "data_time": 0.03642, "loss_rpn_cls": 0.02077, "loss_rpn_bbox": 0.04599, "loss_cls": 0.17167, "acc": 93.6228, "loss_bbox": 0.21384, "loss_mask": 0.21958, "loss": 0.67185, "time": 0.20871} -{"mode": "train", "epoch": 12, "iter": 3800, "lr": 0.0, "memory": 5942, "data_time": 0.0397, "loss_rpn_cls": 0.02149, "loss_rpn_bbox": 0.0461, "loss_cls": 0.1781, "acc": 93.37573, "loss_bbox": 0.22636, "loss_mask": 0.22612, "loss": 0.69816, "time": 0.21866} -{"mode": "train", "epoch": 12, "iter": 3850, "lr": 0.0, "memory": 5942, "data_time": 0.05203, "loss_rpn_cls": 0.02146, "loss_rpn_bbox": 0.04588, "loss_cls": 0.17335, "acc": 93.65112, "loss_bbox": 0.22014, "loss_mask": 0.22149, "loss": 0.68232, "time": 0.21563} -{"mode": "train", "epoch": 12, "iter": 3900, "lr": 0.0, "memory": 5942, "data_time": 0.04023, "loss_rpn_cls": 0.02495, "loss_rpn_bbox": 0.04747, "loss_cls": 0.17857, "acc": 93.4585, "loss_bbox": 0.22403, "loss_mask": 0.22336, "loss": 0.69838, "time": 0.20829} -{"mode": "train", "epoch": 12, "iter": 3950, "lr": 0.0, "memory": 5942, "data_time": 0.04174, "loss_rpn_cls": 0.01837, "loss_rpn_bbox": 0.04081, "loss_cls": 0.16673, "acc": 93.79785, "loss_bbox": 0.21214, "loss_mask": 0.21765, "loss": 0.6557, "time": 0.19983} -{"mode": "train", "epoch": 12, "iter": 4000, "lr": 0.0, "memory": 5942, "data_time": 0.04283, "loss_rpn_cls": 0.02181, 
"loss_rpn_bbox": 0.04554, "loss_cls": 0.1774, "acc": 93.44507, "loss_bbox": 0.22739, "loss_mask": 0.2271, "loss": 0.69923, "time": 0.21905} -{"mode": "train", "epoch": 12, "iter": 4050, "lr": 0.0, "memory": 5942, "data_time": 0.04475, "loss_rpn_cls": 0.02061, "loss_rpn_bbox": 0.04769, "loss_cls": 0.18291, "acc": 93.22998, "loss_bbox": 0.22948, "loss_mask": 0.22835, "loss": 0.70903, "time": 0.21023} -{"mode": "train", "epoch": 12, "iter": 4100, "lr": 0.0, "memory": 5942, "data_time": 0.04157, "loss_rpn_cls": 0.02163, "loss_rpn_bbox": 0.04654, "loss_cls": 0.17216, "acc": 93.60376, "loss_bbox": 0.21931, "loss_mask": 0.22186, "loss": 0.6815, "time": 0.21621} -{"mode": "train", "epoch": 12, "iter": 4150, "lr": 0.0, "memory": 5942, "data_time": 0.0381, "loss_rpn_cls": 0.02086, "loss_rpn_bbox": 0.04426, "loss_cls": 0.16739, "acc": 93.79126, "loss_bbox": 0.21821, "loss_mask": 0.22319, "loss": 0.67391, "time": 0.20843} -{"mode": "train", "epoch": 12, "iter": 4200, "lr": 0.0, "memory": 5942, "data_time": 0.0413, "loss_rpn_cls": 0.02106, "loss_rpn_bbox": 0.04696, "loss_cls": 0.18216, "acc": 93.41187, "loss_bbox": 0.22448, "loss_mask": 0.21889, "loss": 0.69355, "time": 0.20475} -{"mode": "train", "epoch": 12, "iter": 4250, "lr": 0.0, "memory": 5942, "data_time": 0.04894, "loss_rpn_cls": 0.02218, "loss_rpn_bbox": 0.04571, "loss_cls": 0.17963, "acc": 93.35303, "loss_bbox": 0.22536, "loss_mask": 0.22263, "loss": 0.69552, "time": 0.21168} -{"mode": "train", "epoch": 12, "iter": 4300, "lr": 0.0, "memory": 5942, "data_time": 0.05081, "loss_rpn_cls": 0.0204, "loss_rpn_bbox": 0.04434, "loss_cls": 0.17752, "acc": 93.47339, "loss_bbox": 0.22337, "loss_mask": 0.22092, "loss": 0.68654, "time": 0.21642} -{"mode": "train", "epoch": 12, "iter": 4350, "lr": 0.0, "memory": 5942, "data_time": 0.04148, "loss_rpn_cls": 0.02225, "loss_rpn_bbox": 0.04556, "loss_cls": 0.1777, "acc": 93.45386, "loss_bbox": 0.2235, "loss_mask": 0.22199, "loss": 0.69099, "time": 0.21234} -{"mode": "train", "epoch": 12, "iter": 4400, "lr": 0.0, "memory": 5942, "data_time": 0.03543, "loss_rpn_cls": 0.0222, "loss_rpn_bbox": 0.04611, "loss_cls": 0.17441, "acc": 93.5293, "loss_bbox": 0.21941, "loss_mask": 0.22347, "loss": 0.6856, "time": 0.21746} -{"mode": "train", "epoch": 12, "iter": 4450, "lr": 0.0, "memory": 5942, "data_time": 0.03713, "loss_rpn_cls": 0.02071, "loss_rpn_bbox": 0.04405, "loss_cls": 0.18382, "acc": 93.19531, "loss_bbox": 0.22991, "loss_mask": 0.22455, "loss": 0.70303, "time": 0.20476} -{"mode": "train", "epoch": 12, "iter": 4500, "lr": 0.0, "memory": 5942, "data_time": 0.041, "loss_rpn_cls": 0.02426, "loss_rpn_bbox": 0.04501, "loss_cls": 0.17966, "acc": 93.31519, "loss_bbox": 0.22972, "loss_mask": 0.22538, "loss": 0.70402, "time": 0.20837} -{"mode": "train", "epoch": 12, "iter": 4550, "lr": 0.0, "memory": 5942, "data_time": 0.03881, "loss_rpn_cls": 0.02198, "loss_rpn_bbox": 0.04293, "loss_cls": 0.17287, "acc": 93.68896, "loss_bbox": 0.21548, "loss_mask": 0.22197, "loss": 0.67524, "time": 0.20986} -{"mode": "train", "epoch": 12, "iter": 4600, "lr": 0.0, "memory": 5942, "data_time": 0.04591, "loss_rpn_cls": 0.02163, "loss_rpn_bbox": 0.0457, "loss_cls": 0.1808, "acc": 93.23999, "loss_bbox": 0.22535, "loss_mask": 0.22235, "loss": 0.69582, "time": 0.2188} -{"mode": "train", "epoch": 12, "iter": 4650, "lr": 0.0, "memory": 5942, "data_time": 0.04576, "loss_rpn_cls": 0.02087, "loss_rpn_bbox": 0.04413, "loss_cls": 0.16951, "acc": 93.68188, "loss_bbox": 0.21319, "loss_mask": 0.21621, "loss": 0.66391, "time": 0.21567} -{"mode": "train", 
"epoch": 12, "iter": 4700, "lr": 0.0, "memory": 5942, "data_time": 0.04077, "loss_rpn_cls": 0.02126, "loss_rpn_bbox": 0.04327, "loss_cls": 0.17298, "acc": 93.56641, "loss_bbox": 0.2189, "loss_mask": 0.21755, "loss": 0.67397, "time": 0.20641} -{"mode": "train", "epoch": 12, "iter": 4750, "lr": 0.0, "memory": 5942, "data_time": 0.03736, "loss_rpn_cls": 0.02254, "loss_rpn_bbox": 0.04649, "loss_cls": 0.1786, "acc": 93.44312, "loss_bbox": 0.21938, "loss_mask": 0.22189, "loss": 0.6889, "time": 0.20993} -{"mode": "train", "epoch": 12, "iter": 4800, "lr": 0.0, "memory": 5942, "data_time": 0.0311, "loss_rpn_cls": 0.02319, "loss_rpn_bbox": 0.04452, "loss_cls": 0.17799, "acc": 93.48584, "loss_bbox": 0.2236, "loss_mask": 0.22243, "loss": 0.69172, "time": 0.20589} -{"mode": "train", "epoch": 12, "iter": 4850, "lr": 0.0, "memory": 5942, "data_time": 0.0467, "loss_rpn_cls": 0.02165, "loss_rpn_bbox": 0.04832, "loss_cls": 0.18048, "acc": 93.31006, "loss_bbox": 0.2288, "loss_mask": 0.22448, "loss": 0.70373, "time": 0.21259} -{"mode": "train", "epoch": 12, "iter": 4900, "lr": 0.0, "memory": 5942, "data_time": 0.0329, "loss_rpn_cls": 0.02155, "loss_rpn_bbox": 0.04619, "loss_cls": 0.17986, "acc": 93.39551, "loss_bbox": 0.22624, "loss_mask": 0.22201, "loss": 0.69586, "time": 0.2107} -{"mode": "train", "epoch": 12, "iter": 4950, "lr": 0.0, "memory": 5942, "data_time": 0.0473, "loss_rpn_cls": 0.02236, "loss_rpn_bbox": 0.04515, "loss_cls": 0.17452, "acc": 93.55273, "loss_bbox": 0.21895, "loss_mask": 0.22102, "loss": 0.68201, "time": 0.21787} -{"mode": "train", "epoch": 12, "iter": 5000, "lr": 0.0, "memory": 5942, "data_time": 0.04003, "loss_rpn_cls": 0.0229, "loss_rpn_bbox": 0.04574, "loss_cls": 0.17707, "acc": 93.45142, "loss_bbox": 0.22598, "loss_mask": 0.22264, "loss": 0.69433, "time": 0.21155} -{"mode": "train", "epoch": 12, "iter": 5050, "lr": 0.0, "memory": 5942, "data_time": 0.04603, "loss_rpn_cls": 0.0233, "loss_rpn_bbox": 0.04619, "loss_cls": 0.18298, "acc": 93.25854, "loss_bbox": 0.22888, "loss_mask": 0.22614, "loss": 0.70749, "time": 0.21093} -{"mode": "train", "epoch": 12, "iter": 5100, "lr": 0.0, "memory": 5942, "data_time": 0.03785, "loss_rpn_cls": 0.0197, "loss_rpn_bbox": 0.04338, "loss_cls": 0.17412, "acc": 93.49316, "loss_bbox": 0.22249, "loss_mask": 0.2187, "loss": 0.6784, "time": 0.20742} -{"mode": "train", "epoch": 12, "iter": 5150, "lr": 0.0, "memory": 5942, "data_time": 0.03806, "loss_rpn_cls": 0.02261, "loss_rpn_bbox": 0.04463, "loss_cls": 0.17371, "acc": 93.55103, "loss_bbox": 0.22182, "loss_mask": 0.22312, "loss": 0.68589, "time": 0.20673} -{"mode": "train", "epoch": 12, "iter": 5200, "lr": 0.0, "memory": 5942, "data_time": 0.0368, "loss_rpn_cls": 0.02161, "loss_rpn_bbox": 0.04658, "loss_cls": 0.17554, "acc": 93.43628, "loss_bbox": 0.21976, "loss_mask": 0.22218, "loss": 0.68568, "time": 0.21262} -{"mode": "train", "epoch": 12, "iter": 5250, "lr": 0.0, "memory": 5942, "data_time": 0.03179, "loss_rpn_cls": 0.02187, "loss_rpn_bbox": 0.04584, "loss_cls": 0.1826, "acc": 93.29834, "loss_bbox": 0.22605, "loss_mask": 0.22289, "loss": 0.69924, "time": 0.20651} -{"mode": "train", "epoch": 12, "iter": 5300, "lr": 0.0, "memory": 5942, "data_time": 0.04103, "loss_rpn_cls": 0.02009, "loss_rpn_bbox": 0.0478, "loss_cls": 0.18019, "acc": 93.36377, "loss_bbox": 0.22926, "loss_mask": 0.22272, "loss": 0.70006, "time": 0.21136} -{"mode": "train", "epoch": 12, "iter": 5350, "lr": 0.0, "memory": 5942, "data_time": 0.03729, "loss_rpn_cls": 0.02219, "loss_rpn_bbox": 0.04637, "loss_cls": 0.18174, "acc": 93.2478, 
"loss_bbox": 0.22668, "loss_mask": 0.22462, "loss": 0.7016, "time": 0.21522} -{"mode": "train", "epoch": 12, "iter": 5400, "lr": 0.0, "memory": 5942, "data_time": 0.03952, "loss_rpn_cls": 0.02267, "loss_rpn_bbox": 0.04806, "loss_cls": 0.18139, "acc": 93.23999, "loss_bbox": 0.23269, "loss_mask": 0.22821, "loss": 0.71302, "time": 0.20909} -{"mode": "train", "epoch": 12, "iter": 5450, "lr": 0.0, "memory": 5942, "data_time": 0.03132, "loss_rpn_cls": 0.01975, "loss_rpn_bbox": 0.04479, "loss_cls": 0.17517, "acc": 93.43628, "loss_bbox": 0.22828, "loss_mask": 0.2227, "loss": 0.69069, "time": 0.20435} -{"mode": "train", "epoch": 12, "iter": 5500, "lr": 0.0, "memory": 5942, "data_time": 0.03616, "loss_rpn_cls": 0.02113, "loss_rpn_bbox": 0.04357, "loss_cls": 0.17692, "acc": 93.44629, "loss_bbox": 0.222, "loss_mask": 0.22388, "loss": 0.6875, "time": 0.2139} -{"mode": "train", "epoch": 12, "iter": 5550, "lr": 0.0, "memory": 5942, "data_time": 0.03658, "loss_rpn_cls": 0.02284, "loss_rpn_bbox": 0.04906, "loss_cls": 0.18263, "acc": 93.25732, "loss_bbox": 0.2259, "loss_mask": 0.2225, "loss": 0.70292, "time": 0.2167} -{"mode": "train", "epoch": 12, "iter": 5600, "lr": 0.0, "memory": 5942, "data_time": 0.03943, "loss_rpn_cls": 0.0219, "loss_rpn_bbox": 0.0461, "loss_cls": 0.17032, "acc": 93.67871, "loss_bbox": 0.21662, "loss_mask": 0.21931, "loss": 0.67424, "time": 0.21202} -{"mode": "train", "epoch": 12, "iter": 5650, "lr": 0.0, "memory": 5942, "data_time": 0.03617, "loss_rpn_cls": 0.02211, "loss_rpn_bbox": 0.04613, "loss_cls": 0.17428, "acc": 93.64087, "loss_bbox": 0.22063, "loss_mask": 0.21987, "loss": 0.68302, "time": 0.2197} -{"mode": "train", "epoch": 12, "iter": 5700, "lr": 0.0, "memory": 5942, "data_time": 0.03295, "loss_rpn_cls": 0.02039, "loss_rpn_bbox": 0.04336, "loss_cls": 0.17376, "acc": 93.51416, "loss_bbox": 0.21698, "loss_mask": 0.22213, "loss": 0.67663, "time": 0.20411} -{"mode": "train", "epoch": 12, "iter": 5750, "lr": 0.0, "memory": 5942, "data_time": 0.03625, "loss_rpn_cls": 0.02133, "loss_rpn_bbox": 0.04429, "loss_cls": 0.17472, "acc": 93.51392, "loss_bbox": 0.2186, "loss_mask": 0.22593, "loss": 0.68487, "time": 0.20426} -{"mode": "train", "epoch": 12, "iter": 5800, "lr": 0.0, "memory": 5942, "data_time": 0.02763, "loss_rpn_cls": 0.02026, "loss_rpn_bbox": 0.04515, "loss_cls": 0.17794, "acc": 93.36865, "loss_bbox": 0.22379, "loss_mask": 0.22199, "loss": 0.68912, "time": 0.21339} -{"mode": "train", "epoch": 12, "iter": 5850, "lr": 0.0, "memory": 5942, "data_time": 0.03754, "loss_rpn_cls": 0.02083, "loss_rpn_bbox": 0.04711, "loss_cls": 0.17593, "acc": 93.47168, "loss_bbox": 0.22078, "loss_mask": 0.22277, "loss": 0.68742, "time": 0.20683} -{"mode": "train", "epoch": 12, "iter": 5900, "lr": 0.0, "memory": 5942, "data_time": 0.03673, "loss_rpn_cls": 0.02111, "loss_rpn_bbox": 0.04548, "loss_cls": 0.17784, "acc": 93.48657, "loss_bbox": 0.22376, "loss_mask": 0.22178, "loss": 0.68997, "time": 0.2001} -{"mode": "train", "epoch": 12, "iter": 5950, "lr": 0.0, "memory": 5942, "data_time": 0.03556, "loss_rpn_cls": 0.02075, "loss_rpn_bbox": 0.0428, "loss_cls": 0.17047, "acc": 93.67188, "loss_bbox": 0.21551, "loss_mask": 0.22018, "loss": 0.66971, "time": 0.20314} -{"mode": "train", "epoch": 12, "iter": 6000, "lr": 0.0, "memory": 5942, "data_time": 0.03018, "loss_rpn_cls": 0.02028, "loss_rpn_bbox": 0.04289, "loss_cls": 0.1695, "acc": 93.76831, "loss_bbox": 0.21272, "loss_mask": 0.21757, "loss": 0.66297, "time": 0.21163} -{"mode": "train", "epoch": 12, "iter": 6050, "lr": 0.0, "memory": 5942, "data_time": 
0.03562, "loss_rpn_cls": 0.02015, "loss_rpn_bbox": 0.04459, "loss_cls": 0.17487, "acc": 93.50977, "loss_bbox": 0.22025, "loss_mask": 0.22238, "loss": 0.68224, "time": 0.20012} -{"mode": "train", "epoch": 12, "iter": 6100, "lr": 0.0, "memory": 5942, "data_time": 0.04056, "loss_rpn_cls": 0.02228, "loss_rpn_bbox": 0.04726, "loss_cls": 0.17944, "acc": 93.36816, "loss_bbox": 0.22718, "loss_mask": 0.22242, "loss": 0.69858, "time": 0.21293} -{"mode": "train", "epoch": 12, "iter": 6150, "lr": 0.0, "memory": 5942, "data_time": 0.03672, "loss_rpn_cls": 0.02108, "loss_rpn_bbox": 0.04446, "loss_cls": 0.168, "acc": 93.79785, "loss_bbox": 0.21336, "loss_mask": 0.21803, "loss": 0.66493, "time": 0.20528} -{"mode": "train", "epoch": 12, "iter": 6200, "lr": 0.0, "memory": 5942, "data_time": 0.03993, "loss_rpn_cls": 0.02046, "loss_rpn_bbox": 0.04648, "loss_cls": 0.17148, "acc": 93.59253, "loss_bbox": 0.21706, "loss_mask": 0.22062, "loss": 0.6761, "time": 0.20276} -{"mode": "train", "epoch": 12, "iter": 6250, "lr": 0.0, "memory": 5942, "data_time": 0.04308, "loss_rpn_cls": 0.02293, "loss_rpn_bbox": 0.04584, "loss_cls": 0.17961, "acc": 93.40454, "loss_bbox": 0.22459, "loss_mask": 0.2218, "loss": 0.69477, "time": 0.21289} -{"mode": "train", "epoch": 12, "iter": 6300, "lr": 0.0, "memory": 5942, "data_time": 0.04189, "loss_rpn_cls": 0.02143, "loss_rpn_bbox": 0.04808, "loss_cls": 0.177, "acc": 93.33228, "loss_bbox": 0.22568, "loss_mask": 0.22471, "loss": 0.6969, "time": 0.2108} -{"mode": "train", "epoch": 12, "iter": 6350, "lr": 0.0, "memory": 5942, "data_time": 0.04208, "loss_rpn_cls": 0.0217, "loss_rpn_bbox": 0.04563, "loss_cls": 0.17462, "acc": 93.64038, "loss_bbox": 0.22072, "loss_mask": 0.22217, "loss": 0.68483, "time": 0.21274} -{"mode": "train", "epoch": 12, "iter": 6400, "lr": 0.0, "memory": 5942, "data_time": 0.05425, "loss_rpn_cls": 0.02166, "loss_rpn_bbox": 0.04916, "loss_cls": 0.1802, "acc": 93.29565, "loss_bbox": 0.2256, "loss_mask": 0.21829, "loss": 0.69491, "time": 0.22305} -{"mode": "train", "epoch": 12, "iter": 6450, "lr": 0.0, "memory": 5942, "data_time": 0.0437, "loss_rpn_cls": 0.02184, "loss_rpn_bbox": 0.04545, "loss_cls": 0.17676, "acc": 93.40552, "loss_bbox": 0.22763, "loss_mask": 0.22442, "loss": 0.69611, "time": 0.21167} -{"mode": "train", "epoch": 12, "iter": 6500, "lr": 0.0, "memory": 5942, "data_time": 0.03565, "loss_rpn_cls": 0.02125, "loss_rpn_bbox": 0.04656, "loss_cls": 0.17555, "acc": 93.51147, "loss_bbox": 0.21916, "loss_mask": 0.22166, "loss": 0.68418, "time": 0.2182} -{"mode": "train", "epoch": 12, "iter": 6550, "lr": 0.0, "memory": 5942, "data_time": 0.03638, "loss_rpn_cls": 0.0208, "loss_rpn_bbox": 0.04359, "loss_cls": 0.17786, "acc": 93.44873, "loss_bbox": 0.21814, "loss_mask": 0.21688, "loss": 0.67726, "time": 0.20578} -{"mode": "train", "epoch": 12, "iter": 6600, "lr": 0.0, "memory": 5942, "data_time": 0.03726, "loss_rpn_cls": 0.02166, "loss_rpn_bbox": 0.04526, "loss_cls": 0.17851, "acc": 93.38574, "loss_bbox": 0.22679, "loss_mask": 0.22602, "loss": 0.69824, "time": 0.20249} -{"mode": "train", "epoch": 12, "iter": 6650, "lr": 0.0, "memory": 5942, "data_time": 0.03466, "loss_rpn_cls": 0.0209, "loss_rpn_bbox": 0.04549, "loss_cls": 0.17624, "acc": 93.45825, "loss_bbox": 0.22366, "loss_mask": 0.22447, "loss": 0.69076, "time": 0.20497} -{"mode": "train", "epoch": 12, "iter": 6700, "lr": 0.0, "memory": 5942, "data_time": 0.0342, "loss_rpn_cls": 0.02243, "loss_rpn_bbox": 0.04671, "loss_cls": 0.18271, "acc": 93.35938, "loss_bbox": 0.22853, "loss_mask": 0.22855, "loss": 0.70894, 
"time": 0.2054} -{"mode": "train", "epoch": 12, "iter": 6750, "lr": 0.0, "memory": 5942, "data_time": 0.04426, "loss_rpn_cls": 0.02098, "loss_rpn_bbox": 0.04655, "loss_cls": 0.1706, "acc": 93.75562, "loss_bbox": 0.21566, "loss_mask": 0.22274, "loss": 0.67654, "time": 0.21056} -{"mode": "train", "epoch": 12, "iter": 6800, "lr": 0.0, "memory": 5942, "data_time": 0.03997, "loss_rpn_cls": 0.01931, "loss_rpn_bbox": 0.04228, "loss_cls": 0.17219, "acc": 93.64917, "loss_bbox": 0.21667, "loss_mask": 0.21894, "loss": 0.66938, "time": 0.21057} -{"mode": "train", "epoch": 12, "iter": 6850, "lr": 0.0, "memory": 5942, "data_time": 0.03906, "loss_rpn_cls": 0.02099, "loss_rpn_bbox": 0.04526, "loss_cls": 0.1733, "acc": 93.56226, "loss_bbox": 0.21831, "loss_mask": 0.21786, "loss": 0.67571, "time": 0.21343} -{"mode": "train", "epoch": 12, "iter": 6900, "lr": 0.0, "memory": 5942, "data_time": 0.0338, "loss_rpn_cls": 0.02042, "loss_rpn_bbox": 0.04441, "loss_cls": 0.17587, "acc": 93.47046, "loss_bbox": 0.22107, "loss_mask": 0.2191, "loss": 0.68086, "time": 0.20675} -{"mode": "train", "epoch": 12, "iter": 6950, "lr": 0.0, "memory": 5942, "data_time": 0.03681, "loss_rpn_cls": 0.02195, "loss_rpn_bbox": 0.04717, "loss_cls": 0.178, "acc": 93.38232, "loss_bbox": 0.22639, "loss_mask": 0.22098, "loss": 0.69448, "time": 0.21715} -{"mode": "train", "epoch": 12, "iter": 7000, "lr": 0.0, "memory": 5942, "data_time": 0.03585, "loss_rpn_cls": 0.02117, "loss_rpn_bbox": 0.04699, "loss_cls": 0.16954, "acc": 93.71509, "loss_bbox": 0.21815, "loss_mask": 0.22596, "loss": 0.68182, "time": 0.20282} -{"mode": "train", "epoch": 12, "iter": 7050, "lr": 0.0, "memory": 5942, "data_time": 0.03484, "loss_rpn_cls": 0.02104, "loss_rpn_bbox": 0.04422, "loss_cls": 0.17112, "acc": 93.75317, "loss_bbox": 0.20989, "loss_mask": 0.21399, "loss": 0.66026, "time": 0.20327} -{"mode": "train", "epoch": 12, "iter": 7100, "lr": 0.0, "memory": 5942, "data_time": 0.04466, "loss_rpn_cls": 0.02076, "loss_rpn_bbox": 0.0466, "loss_cls": 0.17725, "acc": 93.50586, "loss_bbox": 0.22086, "loss_mask": 0.22302, "loss": 0.6885, "time": 0.20446} -{"mode": "train", "epoch": 12, "iter": 7150, "lr": 0.0, "memory": 5942, "data_time": 0.04354, "loss_rpn_cls": 0.02074, "loss_rpn_bbox": 0.04485, "loss_cls": 0.17408, "acc": 93.54565, "loss_bbox": 0.22136, "loss_mask": 0.22304, "loss": 0.68407, "time": 0.20646} -{"mode": "train", "epoch": 12, "iter": 7200, "lr": 0.0, "memory": 5942, "data_time": 0.03038, "loss_rpn_cls": 0.01923, "loss_rpn_bbox": 0.04099, "loss_cls": 0.16659, "acc": 93.95068, "loss_bbox": 0.21117, "loss_mask": 0.21298, "loss": 0.65096, "time": 0.19177} -{"mode": "train", "epoch": 12, "iter": 7250, "lr": 0.0, "memory": 5942, "data_time": 0.03472, "loss_rpn_cls": 0.02292, "loss_rpn_bbox": 0.04719, "loss_cls": 0.17666, "acc": 93.44238, "loss_bbox": 0.23084, "loss_mask": 0.22752, "loss": 0.70513, "time": 0.21538} -{"mode": "train", "epoch": 12, "iter": 7300, "lr": 0.0, "memory": 5942, "data_time": 0.03617, "loss_rpn_cls": 0.02089, "loss_rpn_bbox": 0.04744, "loss_cls": 0.18286, "acc": 93.22803, "loss_bbox": 0.23171, "loss_mask": 0.22662, "loss": 0.70952, "time": 0.20657} -{"mode": "val", "epoch": 12, "iter": 625, "lr": 0.0, "bbox_mAP": 0.3978, "bbox_mAP_50": 0.619, "bbox_mAP_75": 0.435, "bbox_mAP_s": 0.246, "bbox_mAP_m": 0.4326, "bbox_mAP_l": 0.5143, "bbox_mAP_copypaste": "0.3978 0.6190 0.4350 0.2460 0.4326 0.5143", "segm_mAP": 0.3722, "segm_mAP_50": 0.5882, "segm_mAP_75": 0.401, "segm_mAP_s": 0.1903, "segm_mAP_m": 0.399, "segm_mAP_l": 0.5398, "segm_mAP_copypaste": 
"0.3722 0.5882 0.4010 0.1903 0.3990 0.5398"} diff --git a/cv/classification/repvit/pytorch/detection/logs/repvit_m1_5_coco.json b/cv/classification/repvit/pytorch/detection/logs/repvit_m1_5_coco.json deleted file mode 100644 index 1779ea79..00000000 --- a/cv/classification/repvit/pytorch/detection/logs/repvit_m1_5_coco.json +++ /dev/null @@ -1,1765 +0,0 @@ -{"env_info": "sys.platform: linux\nPython: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]\nCUDA available: True\nGPU 0,1,2,3,4,5,6,7: GeForce RTX 3090\nCUDA_HOME: /home/wangao/cuda-11.2\nNVCC: Cuda compilation tools, release 11.2, V11.2.152\nGCC: gcc (GCC) 5.4.0\nPyTorch: 2.0.1+cu117\nPyTorch compiling details: PyTorch built with:\n - GCC 9.3\n - C++ Version: 201703\n - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\n - LAPACK is enabled (usually provided by MKL)\n - NNPACK is enabled\n - CPU capability usage: AVX2\n - CUDA Runtime 11.7\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\n - CuDNN 8.5\n - Magma 2.6.1\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n\nTorchVision: 0.15.2+cu117\nOpenCV: 4.8.0\nMMCV: 1.7.1\nMMCV Compiler: GCC 5.4\nMMCV CUDA Compiler: 11.2\nMMDetection: 2.28.2+c54117a", "config": "model = dict(\n type='MaskRCNN',\n pretrained='torchvision://resnet50',\n backbone=dict(\n type='repvit_m4',\n depth=50,\n num_stages=4,\n out_indices=[4, 10, 36, 42],\n frozen_stages=1,\n norm_cfg=dict(type='BN', requires_grad=True),\n norm_eval=True,\n style='pytorch',\n init_cfg=dict(\n type='Pretrained',\n checkpoint='pretrain/repvit_m4_distill_300e.pth')),\n neck=dict(\n type='FPN',\n in_channels=[64, 128, 256, 512],\n out_channels=256,\n num_outs=5),\n rpn_head=dict(\n type='RPNHead',\n in_channels=256,\n feat_channels=256,\n anchor_generator=dict(\n type='AnchorGenerator',\n scales=[8],\n ratios=[0.5, 1.0, 2.0],\n 
strides=[4, 8, 16, 32, 64]),\n bbox_coder=dict(\n type='DeltaXYWHBBoxCoder',\n target_means=[0.0, 0.0, 0.0, 0.0],\n target_stds=[1.0, 1.0, 1.0, 1.0]),\n loss_cls=dict(\n type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),\n loss_bbox=dict(type='L1Loss', loss_weight=1.0)),\n roi_head=dict(\n type='StandardRoIHead',\n bbox_roi_extractor=dict(\n type='SingleRoIExtractor',\n roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),\n out_channels=256,\n featmap_strides=[4, 8, 16, 32]),\n bbox_head=dict(\n type='Shared2FCBBoxHead',\n in_channels=256,\n fc_out_channels=1024,\n roi_feat_size=7,\n num_classes=80,\n bbox_coder=dict(\n type='DeltaXYWHBBoxCoder',\n target_means=[0.0, 0.0, 0.0, 0.0],\n target_stds=[0.1, 0.1, 0.2, 0.2]),\n reg_class_agnostic=False,\n loss_cls=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),\n loss_bbox=dict(type='L1Loss', loss_weight=1.0)),\n mask_roi_extractor=dict(\n type='SingleRoIExtractor',\n roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),\n out_channels=256,\n featmap_strides=[4, 8, 16, 32]),\n mask_head=dict(\n type='FCNMaskHead',\n num_convs=4,\n in_channels=256,\n conv_out_channels=256,\n num_classes=80,\n loss_mask=dict(\n type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),\n train_cfg=dict(\n rpn=dict(\n assigner=dict(\n type='MaxIoUAssigner',\n pos_iou_thr=0.7,\n neg_iou_thr=0.3,\n min_pos_iou=0.3,\n match_low_quality=True,\n ignore_iof_thr=-1),\n sampler=dict(\n type='RandomSampler',\n num=256,\n pos_fraction=0.5,\n neg_pos_ub=-1,\n add_gt_as_proposals=False),\n allowed_border=-1,\n pos_weight=-1,\n debug=False),\n rpn_proposal=dict(\n nms_pre=2000,\n max_per_img=1000,\n nms=dict(type='nms', iou_threshold=0.7),\n min_bbox_size=0),\n rcnn=dict(\n assigner=dict(\n type='MaxIoUAssigner',\n pos_iou_thr=0.5,\n neg_iou_thr=0.5,\n min_pos_iou=0.5,\n match_low_quality=True,\n ignore_iof_thr=-1),\n sampler=dict(\n type='RandomSampler',\n num=512,\n pos_fraction=0.25,\n neg_pos_ub=-1,\n add_gt_as_proposals=True),\n mask_size=28,\n pos_weight=-1,\n debug=False)),\n test_cfg=dict(\n rpn=dict(\n nms_pre=1000,\n max_per_img=1000,\n nms=dict(type='nms', iou_threshold=0.7),\n min_bbox_size=0),\n rcnn=dict(\n score_thr=0.05,\n nms=dict(type='nms', iou_threshold=0.5),\n max_per_img=100,\n mask_thr_binary=0.5)))\ndataset_type = 'CocoDataset'\ndata_root = 'data/coco/'\nimg_norm_cfg = dict(\n mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True, with_mask=True),\n dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n]\ndata = dict(\n samples_per_gpu=2,\n workers_per_gpu=2,\n train=dict(\n type='CocoDataset',\n 
ann_file='data/coco/annotations/instances_train2017.json',\n img_prefix='data/coco/train2017/',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True, with_mask=True),\n dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(\n type='Collect',\n keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])\n ]),\n val=dict(\n type='CocoDataset',\n ann_file='data/coco/annotations/instances_val2017.json',\n img_prefix='data/coco/val2017/',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]),\n test=dict(\n type='CocoDataset',\n ann_file='data/coco/annotations/instances_val2017.json',\n img_prefix='data/coco/val2017/',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]))\nevaluation = dict(metric=['bbox', 'segm'])\noptimizer = dict(type='AdamW', lr=0.0002, weight_decay=0.05)\noptimizer_config = dict(grad_clip=None)\nlr_config = dict(\n policy='step',\n warmup='linear',\n warmup_iters=500,\n warmup_ratio=1e-06,\n step=[8, 11])\nrunner = dict(type='EpochBasedRunner', max_epochs=12)\ncheckpoint_config = dict(interval=1)\nlog_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])\ncustom_hooks = [dict(type='NumClassCheckHook')]\ndist_params = dict(backend='nccl')\nlog_level = 'INFO'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\nwork_dir = './work_dirs/mask_rcnn_repvit_m4_fpn_1x_coco'\nauto_resume = False\ngpu_ids = range(0, 8)\n", "seed": 0, "exp_name": "mask_rcnn_repvit_m4_fpn_1x_coco.py"} -{"mode": "train", "epoch": 1, "iter": 50, "lr": 2e-05, "memory": 7980, "data_time": 0.14548, "loss_rpn_cls": 0.53314, "loss_rpn_bbox": 0.14748, "loss_cls": 1.91489, "acc": 68.13208, "loss_bbox": 0.072, "loss_mask": 1.59976, "loss": 4.26727, "time": 0.68673} -{"mode": "train", "epoch": 1, "iter": 100, "lr": 4e-05, "memory": 8162, "data_time": 0.06063, "loss_rpn_cls": 0.25145, "loss_rpn_bbox": 0.11155, "loss_cls": 0.43818, "acc": 94.75, "loss_bbox": 0.17652, "loss_mask": 0.77439, "loss": 1.75208, "time": 0.47045} -{"mode": "train", "epoch": 1, "iter": 150, "lr": 6e-05, "memory": 8162, "data_time": 0.06035, "loss_rpn_cls": 0.20735, "loss_rpn_bbox": 0.10837, "loss_cls": 0.39381, "acc": 93.92725, "loss_bbox": 0.20736, "loss_mask": 0.70913, "loss": 1.62602, "time": 0.46687} -{"mode": "train", "epoch": 1, "iter": 200, "lr": 8e-05, "memory": 8162, "data_time": 0.05478, "loss_rpn_cls": 0.15562, "loss_rpn_bbox": 0.10826, "loss_cls": 0.43233, "acc": 92.84741, "loss_bbox": 0.25263, "loss_mask": 0.67926, "loss": 1.62809, "time": 0.45768} -{"mode": "train", "epoch": 1, 
"iter": 250, "lr": 0.0001, "memory": 8162, "data_time": 0.0551, "loss_rpn_cls": 0.12255, "loss_rpn_bbox": 0.10449, "loss_cls": 0.4498, "acc": 91.91943, "loss_bbox": 0.29686, "loss_mask": 0.63444, "loss": 1.60814, "time": 0.43649} -{"mode": "train", "epoch": 1, "iter": 300, "lr": 0.00012, "memory": 8212, "data_time": 0.07644, "loss_rpn_cls": 0.09739, "loss_rpn_bbox": 0.09526, "loss_cls": 0.49012, "acc": 90.57593, "loss_bbox": 0.35082, "loss_mask": 0.58461, "loss": 1.6182, "time": 0.45779} -{"mode": "train", "epoch": 1, "iter": 350, "lr": 0.00014, "memory": 8212, "data_time": 0.05278, "loss_rpn_cls": 0.0899, "loss_rpn_bbox": 0.09086, "loss_cls": 0.46009, "acc": 90.81299, "loss_bbox": 0.3403, "loss_mask": 0.52724, "loss": 1.50839, "time": 0.43938} -{"mode": "train", "epoch": 1, "iter": 400, "lr": 0.00016, "memory": 8351, "data_time": 0.05907, "loss_rpn_cls": 0.09468, "loss_rpn_bbox": 0.0926, "loss_cls": 0.4741, "acc": 90.06348, "loss_bbox": 0.35981, "loss_mask": 0.4983, "loss": 1.51949, "time": 0.4306} -{"mode": "train", "epoch": 1, "iter": 450, "lr": 0.00018, "memory": 8351, "data_time": 0.05434, "loss_rpn_cls": 0.08586, "loss_rpn_bbox": 0.08794, "loss_cls": 0.45477, "acc": 89.84888, "loss_bbox": 0.36565, "loss_mask": 0.45791, "loss": 1.45213, "time": 0.42242} -{"mode": "train", "epoch": 1, "iter": 500, "lr": 0.0002, "memory": 8351, "data_time": 0.04756, "loss_rpn_cls": 0.07821, "loss_rpn_bbox": 0.08602, "loss_cls": 0.45179, "acc": 89.7334, "loss_bbox": 0.37553, "loss_mask": 0.451, "loss": 1.44255, "time": 0.41952} -{"mode": "train", "epoch": 1, "iter": 550, "lr": 0.0002, "memory": 8398, "data_time": 0.05398, "loss_rpn_cls": 0.08005, "loss_rpn_bbox": 0.08015, "loss_cls": 0.44438, "acc": 89.64673, "loss_bbox": 0.37342, "loss_mask": 0.41911, "loss": 1.39711, "time": 0.43381} -{"mode": "train", "epoch": 1, "iter": 600, "lr": 0.0002, "memory": 8398, "data_time": 0.04911, "loss_rpn_cls": 0.07676, "loss_rpn_bbox": 0.08142, "loss_cls": 0.42188, "acc": 89.59058, "loss_bbox": 0.36926, "loss_mask": 0.40923, "loss": 1.35854, "time": 0.43607} -{"mode": "train", "epoch": 1, "iter": 650, "lr": 0.0002, "memory": 8398, "data_time": 0.07102, "loss_rpn_cls": 0.07519, "loss_rpn_bbox": 0.08478, "loss_cls": 0.41821, "acc": 89.52515, "loss_bbox": 0.37333, "loss_mask": 0.39809, "loss": 1.34961, "time": 0.43863} -{"mode": "train", "epoch": 1, "iter": 700, "lr": 0.0002, "memory": 8398, "data_time": 0.05426, "loss_rpn_cls": 0.07631, "loss_rpn_bbox": 0.08253, "loss_cls": 0.40514, "acc": 89.38013, "loss_bbox": 0.37422, "loss_mask": 0.39287, "loss": 1.33107, "time": 0.43034} -{"mode": "train", "epoch": 1, "iter": 750, "lr": 0.0002, "memory": 8398, "data_time": 0.04916, "loss_rpn_cls": 0.07243, "loss_rpn_bbox": 0.08025, "loss_cls": 0.3955, "acc": 89.45923, "loss_bbox": 0.36461, "loss_mask": 0.38045, "loss": 1.29324, "time": 0.43622} -{"mode": "train", "epoch": 1, "iter": 800, "lr": 0.0002, "memory": 8398, "data_time": 0.05913, "loss_rpn_cls": 0.06914, "loss_rpn_bbox": 0.08124, "loss_cls": 0.40151, "acc": 89.427, "loss_bbox": 0.36257, "loss_mask": 0.37968, "loss": 1.29413, "time": 0.54043} -{"mode": "train", "epoch": 1, "iter": 850, "lr": 0.0002, "memory": 8398, "data_time": 0.06153, "loss_rpn_cls": 0.06676, "loss_rpn_bbox": 0.07747, "loss_cls": 0.3726, "acc": 89.71411, "loss_bbox": 0.35481, "loss_mask": 0.36996, "loss": 1.2416, "time": 0.4233} -{"mode": "train", "epoch": 1, "iter": 900, "lr": 0.0002, "memory": 8404, "data_time": 0.05656, "loss_rpn_cls": 0.06777, "loss_rpn_bbox": 0.07761, "loss_cls": 0.38445, "acc": 
89.5686, "loss_bbox": 0.35571, "loss_mask": 0.37542, "loss": 1.26096, "time": 0.41979} -{"mode": "train", "epoch": 1, "iter": 950, "lr": 0.0002, "memory": 8415, "data_time": 0.05396, "loss_rpn_cls": 0.06608, "loss_rpn_bbox": 0.0761, "loss_cls": 0.3778, "acc": 89.68164, "loss_bbox": 0.34412, "loss_mask": 0.36283, "loss": 1.22693, "time": 0.41275} -{"mode": "train", "epoch": 1, "iter": 1000, "lr": 0.0002, "memory": 8415, "data_time": 0.06083, "loss_rpn_cls": 0.06699, "loss_rpn_bbox": 0.0749, "loss_cls": 0.36141, "acc": 89.95605, "loss_bbox": 0.34437, "loss_mask": 0.35494, "loss": 1.20261, "time": 0.42294} -{"mode": "train", "epoch": 1, "iter": 1050, "lr": 0.0002, "memory": 8415, "data_time": 0.04988, "loss_rpn_cls": 0.06339, "loss_rpn_bbox": 0.07299, "loss_cls": 0.36277, "acc": 89.83081, "loss_bbox": 0.34577, "loss_mask": 0.35495, "loss": 1.19986, "time": 0.42698} -{"mode": "train", "epoch": 1, "iter": 1100, "lr": 0.0002, "memory": 8415, "data_time": 0.05954, "loss_rpn_cls": 0.05847, "loss_rpn_bbox": 0.07187, "loss_cls": 0.36026, "acc": 89.68848, "loss_bbox": 0.349, "loss_mask": 0.35461, "loss": 1.19421, "time": 0.41342} -{"mode": "train", "epoch": 1, "iter": 1150, "lr": 0.0002, "memory": 8415, "data_time": 0.05858, "loss_rpn_cls": 0.06511, "loss_rpn_bbox": 0.07882, "loss_cls": 0.36549, "acc": 89.56396, "loss_bbox": 0.35267, "loss_mask": 0.35172, "loss": 1.21381, "time": 0.41625} -{"mode": "train", "epoch": 1, "iter": 1200, "lr": 0.0002, "memory": 8415, "data_time": 0.06615, "loss_rpn_cls": 0.06678, "loss_rpn_bbox": 0.07934, "loss_cls": 0.36381, "acc": 89.5354, "loss_bbox": 0.35422, "loss_mask": 0.35489, "loss": 1.21904, "time": 0.42158} -{"mode": "train", "epoch": 1, "iter": 1250, "lr": 0.0002, "memory": 8415, "data_time": 0.05749, "loss_rpn_cls": 0.05694, "loss_rpn_bbox": 0.0691, "loss_cls": 0.35693, "acc": 89.9856, "loss_bbox": 0.33243, "loss_mask": 0.34468, "loss": 1.16008, "time": 0.40236} -{"mode": "train", "epoch": 1, "iter": 1300, "lr": 0.0002, "memory": 8415, "data_time": 0.05652, "loss_rpn_cls": 0.05882, "loss_rpn_bbox": 0.07072, "loss_cls": 0.3518, "acc": 89.92432, "loss_bbox": 0.3382, "loss_mask": 0.34099, "loss": 1.16052, "time": 0.47858} -{"mode": "train", "epoch": 1, "iter": 1350, "lr": 0.0002, "memory": 8415, "data_time": 0.0554, "loss_rpn_cls": 0.06632, "loss_rpn_bbox": 0.0772, "loss_cls": 0.35289, "acc": 89.81982, "loss_bbox": 0.33598, "loss_mask": 0.33655, "loss": 1.16894, "time": 0.41736} -{"mode": "train", "epoch": 1, "iter": 1400, "lr": 0.0002, "memory": 8415, "data_time": 0.05635, "loss_rpn_cls": 0.06142, "loss_rpn_bbox": 0.07092, "loss_cls": 0.32754, "acc": 90.54077, "loss_bbox": 0.32229, "loss_mask": 0.32667, "loss": 1.10883, "time": 0.41364} -{"mode": "train", "epoch": 1, "iter": 1450, "lr": 0.0002, "memory": 8415, "data_time": 0.04895, "loss_rpn_cls": 0.05758, "loss_rpn_bbox": 0.07208, "loss_cls": 0.34706, "acc": 89.74316, "loss_bbox": 0.34157, "loss_mask": 0.32836, "loss": 1.14665, "time": 0.46464} -{"mode": "train", "epoch": 1, "iter": 1500, "lr": 0.0002, "memory": 8415, "data_time": 0.06182, "loss_rpn_cls": 0.0651, "loss_rpn_bbox": 0.079, "loss_cls": 0.32956, "acc": 90.37305, "loss_bbox": 0.32879, "loss_mask": 0.33533, "loss": 1.13778, "time": 0.42048} -{"mode": "train", "epoch": 1, "iter": 1550, "lr": 0.0002, "memory": 8415, "data_time": 0.06247, "loss_rpn_cls": 0.05696, "loss_rpn_bbox": 0.07231, "loss_cls": 0.33033, "acc": 90.24268, "loss_bbox": 0.32858, "loss_mask": 0.3375, "loss": 1.12568, "time": 0.41424} -{"mode": "train", "epoch": 1, "iter": 1600, "lr": 
0.0002, "memory": 8415, "data_time": 0.05158, "loss_rpn_cls": 0.05898, "loss_rpn_bbox": 0.0733, "loss_cls": 0.33616, "acc": 90.17676, "loss_bbox": 0.32939, "loss_mask": 0.33634, "loss": 1.13416, "time": 0.40736} -{"mode": "train", "epoch": 1, "iter": 1650, "lr": 0.0002, "memory": 8415, "data_time": 0.06168, "loss_rpn_cls": 0.05807, "loss_rpn_bbox": 0.07231, "loss_cls": 0.32441, "acc": 90.44385, "loss_bbox": 0.32279, "loss_mask": 0.32701, "loss": 1.10459, "time": 0.40502} -{"mode": "train", "epoch": 1, "iter": 1700, "lr": 0.0002, "memory": 8415, "data_time": 0.05439, "loss_rpn_cls": 0.06002, "loss_rpn_bbox": 0.06796, "loss_cls": 0.33398, "acc": 90.16528, "loss_bbox": 0.33135, "loss_mask": 0.32746, "loss": 1.12077, "time": 0.42232} -{"mode": "train", "epoch": 1, "iter": 1750, "lr": 0.0002, "memory": 8415, "data_time": 0.04958, "loss_rpn_cls": 0.05742, "loss_rpn_bbox": 0.07281, "loss_cls": 0.33623, "acc": 89.95874, "loss_bbox": 0.33596, "loss_mask": 0.32664, "loss": 1.12907, "time": 0.42305} -{"mode": "train", "epoch": 1, "iter": 1800, "lr": 0.0002, "memory": 8415, "data_time": 0.06257, "loss_rpn_cls": 0.05794, "loss_rpn_bbox": 0.07209, "loss_cls": 0.32644, "acc": 90.3938, "loss_bbox": 0.32472, "loss_mask": 0.33483, "loss": 1.11602, "time": 0.40895} -{"mode": "train", "epoch": 1, "iter": 1850, "lr": 0.0002, "memory": 8415, "data_time": 0.06022, "loss_rpn_cls": 0.05802, "loss_rpn_bbox": 0.07227, "loss_cls": 0.33569, "acc": 90.07837, "loss_bbox": 0.33061, "loss_mask": 0.32818, "loss": 1.12476, "time": 0.40249} -{"mode": "train", "epoch": 1, "iter": 1900, "lr": 0.0002, "memory": 8423, "data_time": 0.05395, "loss_rpn_cls": 0.0558, "loss_rpn_bbox": 0.07001, "loss_cls": 0.32029, "acc": 90.45044, "loss_bbox": 0.32026, "loss_mask": 0.32796, "loss": 1.09431, "time": 0.41267} -{"mode": "train", "epoch": 1, "iter": 1950, "lr": 0.0002, "memory": 8423, "data_time": 0.059, "loss_rpn_cls": 0.05567, "loss_rpn_bbox": 0.0722, "loss_cls": 0.30893, "acc": 90.62866, "loss_bbox": 0.31654, "loss_mask": 0.31699, "loss": 1.07031, "time": 0.41269} -{"mode": "train", "epoch": 1, "iter": 2000, "lr": 0.0002, "memory": 8423, "data_time": 0.06282, "loss_rpn_cls": 0.05445, "loss_rpn_bbox": 0.06809, "loss_cls": 0.32541, "acc": 90.42383, "loss_bbox": 0.32201, "loss_mask": 0.32235, "loss": 1.09231, "time": 0.40692} -{"mode": "train", "epoch": 1, "iter": 2050, "lr": 0.0002, "memory": 8423, "data_time": 0.05528, "loss_rpn_cls": 0.05437, "loss_rpn_bbox": 0.07047, "loss_cls": 0.32547, "acc": 90.26758, "loss_bbox": 0.3266, "loss_mask": 0.31934, "loss": 1.09626, "time": 0.42819} -{"mode": "train", "epoch": 1, "iter": 2100, "lr": 0.0002, "memory": 8423, "data_time": 0.05003, "loss_rpn_cls": 0.05406, "loss_rpn_bbox": 0.06614, "loss_cls": 0.31329, "acc": 90.59033, "loss_bbox": 0.31957, "loss_mask": 0.31193, "loss": 1.06499, "time": 0.4112} -{"mode": "train", "epoch": 1, "iter": 2150, "lr": 0.0002, "memory": 8423, "data_time": 0.05253, "loss_rpn_cls": 0.05467, "loss_rpn_bbox": 0.0672, "loss_cls": 0.30893, "acc": 90.62939, "loss_bbox": 0.3159, "loss_mask": 0.31678, "loss": 1.06348, "time": 0.40682} -{"mode": "train", "epoch": 1, "iter": 2200, "lr": 0.0002, "memory": 8423, "data_time": 0.05683, "loss_rpn_cls": 0.05941, "loss_rpn_bbox": 0.0725, "loss_cls": 0.3192, "acc": 90.43213, "loss_bbox": 0.31784, "loss_mask": 0.31386, "loss": 1.08282, "time": 0.42347} -{"mode": "train", "epoch": 1, "iter": 2250, "lr": 0.0002, "memory": 8423, "data_time": 0.06164, "loss_rpn_cls": 0.05326, "loss_rpn_bbox": 0.0692, "loss_cls": 0.31364, "acc": 90.62769, 
"loss_bbox": 0.31673, "loss_mask": 0.31418, "loss": 1.06701, "time": 0.40844} -{"mode": "train", "epoch": 1, "iter": 2300, "lr": 0.0002, "memory": 8423, "data_time": 0.05329, "loss_rpn_cls": 0.05655, "loss_rpn_bbox": 0.07043, "loss_cls": 0.3231, "acc": 90.23755, "loss_bbox": 0.33334, "loss_mask": 0.3228, "loss": 1.10622, "time": 0.41754} -{"mode": "train", "epoch": 1, "iter": 2350, "lr": 0.0002, "memory": 8423, "data_time": 0.04266, "loss_rpn_cls": 0.05604, "loss_rpn_bbox": 0.06953, "loss_cls": 0.3129, "acc": 90.56226, "loss_bbox": 0.31466, "loss_mask": 0.31157, "loss": 1.06469, "time": 0.40516} -{"mode": "train", "epoch": 1, "iter": 2400, "lr": 0.0002, "memory": 8423, "data_time": 0.04837, "loss_rpn_cls": 0.05416, "loss_rpn_bbox": 0.06985, "loss_cls": 0.31561, "acc": 90.38257, "loss_bbox": 0.31873, "loss_mask": 0.31818, "loss": 1.07653, "time": 0.40271} -{"mode": "train", "epoch": 1, "iter": 2450, "lr": 0.0002, "memory": 8423, "data_time": 0.05515, "loss_rpn_cls": 0.05708, "loss_rpn_bbox": 0.06704, "loss_cls": 0.31885, "acc": 90.3269, "loss_bbox": 0.32401, "loss_mask": 0.31204, "loss": 1.07902, "time": 0.40969} -{"mode": "train", "epoch": 1, "iter": 2500, "lr": 0.0002, "memory": 8423, "data_time": 0.04334, "loss_rpn_cls": 0.05033, "loss_rpn_bbox": 0.06539, "loss_cls": 0.31713, "acc": 90.40845, "loss_bbox": 0.31854, "loss_mask": 0.30704, "loss": 1.05844, "time": 0.40655} -{"mode": "train", "epoch": 1, "iter": 2550, "lr": 0.0002, "memory": 8423, "data_time": 0.04822, "loss_rpn_cls": 0.05332, "loss_rpn_bbox": 0.06702, "loss_cls": 0.31587, "acc": 90.32642, "loss_bbox": 0.32649, "loss_mask": 0.32057, "loss": 1.08328, "time": 0.41204} -{"mode": "train", "epoch": 1, "iter": 2600, "lr": 0.0002, "memory": 8423, "data_time": 0.05445, "loss_rpn_cls": 0.05243, "loss_rpn_bbox": 0.06594, "loss_cls": 0.31692, "acc": 90.29761, "loss_bbox": 0.32347, "loss_mask": 0.30194, "loss": 1.0607, "time": 0.41183} -{"mode": "train", "epoch": 1, "iter": 2650, "lr": 0.0002, "memory": 8423, "data_time": 0.05255, "loss_rpn_cls": 0.05327, "loss_rpn_bbox": 0.06834, "loss_cls": 0.3154, "acc": 90.33276, "loss_bbox": 0.33185, "loss_mask": 0.30855, "loss": 1.07739, "time": 0.40617} -{"mode": "train", "epoch": 1, "iter": 2700, "lr": 0.0002, "memory": 8423, "data_time": 0.05287, "loss_rpn_cls": 0.05419, "loss_rpn_bbox": 0.06981, "loss_cls": 0.31446, "acc": 90.44629, "loss_bbox": 0.3217, "loss_mask": 0.31459, "loss": 1.07474, "time": 0.411} -{"mode": "train", "epoch": 1, "iter": 2750, "lr": 0.0002, "memory": 8423, "data_time": 0.06098, "loss_rpn_cls": 0.05362, "loss_rpn_bbox": 0.066, "loss_cls": 0.29831, "acc": 90.98047, "loss_bbox": 0.30756, "loss_mask": 0.30019, "loss": 1.02567, "time": 0.40646} -{"mode": "train", "epoch": 1, "iter": 2800, "lr": 0.0002, "memory": 8423, "data_time": 0.05531, "loss_rpn_cls": 0.05181, "loss_rpn_bbox": 0.06587, "loss_cls": 0.30202, "acc": 90.79834, "loss_bbox": 0.30676, "loss_mask": 0.30432, "loss": 1.03078, "time": 0.40064} -{"mode": "train", "epoch": 1, "iter": 2850, "lr": 0.0002, "memory": 8423, "data_time": 0.05615, "loss_rpn_cls": 0.05129, "loss_rpn_bbox": 0.06762, "loss_cls": 0.30608, "acc": 90.56934, "loss_bbox": 0.32054, "loss_mask": 0.30872, "loss": 1.05425, "time": 0.41019} -{"mode": "train", "epoch": 1, "iter": 2900, "lr": 0.0002, "memory": 8423, "data_time": 0.06452, "loss_rpn_cls": 0.04802, "loss_rpn_bbox": 0.06504, "loss_cls": 0.29533, "acc": 90.8894, "loss_bbox": 0.31073, "loss_mask": 0.31029, "loss": 1.02941, "time": 0.40401} -{"mode": "train", "epoch": 1, "iter": 2950, "lr": 
0.0002, "memory": 8423, "data_time": 0.04303, "loss_rpn_cls": 0.05138, "loss_rpn_bbox": 0.06795, "loss_cls": 0.29618, "acc": 90.88257, "loss_bbox": 0.31027, "loss_mask": 0.30303, "loss": 1.02881, "time": 0.41547} -{"mode": "train", "epoch": 1, "iter": 3000, "lr": 0.0002, "memory": 8423, "data_time": 0.04348, "loss_rpn_cls": 0.05633, "loss_rpn_bbox": 0.07265, "loss_cls": 0.30395, "acc": 90.61523, "loss_bbox": 0.31367, "loss_mask": 0.31062, "loss": 1.05722, "time": 0.42313} -{"mode": "train", "epoch": 1, "iter": 3050, "lr": 0.0002, "memory": 8423, "data_time": 0.06832, "loss_rpn_cls": 0.05521, "loss_rpn_bbox": 0.06656, "loss_cls": 0.30792, "acc": 90.45044, "loss_bbox": 0.32147, "loss_mask": 0.31698, "loss": 1.06814, "time": 0.42061} -{"mode": "train", "epoch": 1, "iter": 3100, "lr": 0.0002, "memory": 8423, "data_time": 0.05531, "loss_rpn_cls": 0.04859, "loss_rpn_bbox": 0.06476, "loss_cls": 0.30471, "acc": 90.52686, "loss_bbox": 0.31607, "loss_mask": 0.30542, "loss": 1.03954, "time": 0.40413} -{"mode": "train", "epoch": 1, "iter": 3150, "lr": 0.0002, "memory": 8423, "data_time": 0.04525, "loss_rpn_cls": 0.04607, "loss_rpn_bbox": 0.06712, "loss_cls": 0.29864, "acc": 90.81958, "loss_bbox": 0.31042, "loss_mask": 0.3062, "loss": 1.02844, "time": 0.40739} -{"mode": "train", "epoch": 1, "iter": 3200, "lr": 0.0002, "memory": 8423, "data_time": 0.05021, "loss_rpn_cls": 0.0546, "loss_rpn_bbox": 0.06607, "loss_cls": 0.30625, "acc": 90.73022, "loss_bbox": 0.31103, "loss_mask": 0.2986, "loss": 1.03655, "time": 0.42005} -{"mode": "train", "epoch": 1, "iter": 3250, "lr": 0.0002, "memory": 8423, "data_time": 0.05226, "loss_rpn_cls": 0.04826, "loss_rpn_bbox": 0.06449, "loss_cls": 0.3109, "acc": 90.25195, "loss_bbox": 0.32329, "loss_mask": 0.30474, "loss": 1.05168, "time": 0.4108} -{"mode": "train", "epoch": 1, "iter": 3300, "lr": 0.0002, "memory": 8423, "data_time": 0.05283, "loss_rpn_cls": 0.05165, "loss_rpn_bbox": 0.06837, "loss_cls": 0.31063, "acc": 90.5293, "loss_bbox": 0.31953, "loss_mask": 0.30548, "loss": 1.05565, "time": 0.41643} -{"mode": "train", "epoch": 1, "iter": 3350, "lr": 0.0002, "memory": 8423, "data_time": 0.0525, "loss_rpn_cls": 0.05221, "loss_rpn_bbox": 0.06296, "loss_cls": 0.31093, "acc": 90.62012, "loss_bbox": 0.30818, "loss_mask": 0.30328, "loss": 1.03756, "time": 0.40696} -{"mode": "train", "epoch": 1, "iter": 3400, "lr": 0.0002, "memory": 8423, "data_time": 0.06386, "loss_rpn_cls": 0.05133, "loss_rpn_bbox": 0.06981, "loss_cls": 0.29879, "acc": 90.67529, "loss_bbox": 0.31379, "loss_mask": 0.30476, "loss": 1.03849, "time": 0.41225} -{"mode": "train", "epoch": 1, "iter": 3450, "lr": 0.0002, "memory": 8423, "data_time": 0.04765, "loss_rpn_cls": 0.05342, "loss_rpn_bbox": 0.0702, "loss_cls": 0.31065, "acc": 90.64136, "loss_bbox": 0.31539, "loss_mask": 0.30415, "loss": 1.0538, "time": 0.39819} -{"mode": "train", "epoch": 1, "iter": 3500, "lr": 0.0002, "memory": 8423, "data_time": 0.0629, "loss_rpn_cls": 0.05343, "loss_rpn_bbox": 0.06604, "loss_cls": 0.29689, "acc": 90.86255, "loss_bbox": 0.3102, "loss_mask": 0.30488, "loss": 1.03145, "time": 0.42065} -{"mode": "train", "epoch": 1, "iter": 3550, "lr": 0.0002, "memory": 8423, "data_time": 0.05443, "loss_rpn_cls": 0.0471, "loss_rpn_bbox": 0.06487, "loss_cls": 0.30614, "acc": 90.67139, "loss_bbox": 0.31227, "loss_mask": 0.30406, "loss": 1.03444, "time": 0.39565} -{"mode": "train", "epoch": 1, "iter": 3600, "lr": 0.0002, "memory": 8423, "data_time": 0.05362, "loss_rpn_cls": 0.0475, "loss_rpn_bbox": 0.06299, "loss_cls": 0.29969, "acc": 90.60767, 
"loss_bbox": 0.31229, "loss_mask": 0.30072, "loss": 1.02318, "time": 0.40784} -{"mode": "train", "epoch": 1, "iter": 3650, "lr": 0.0002, "memory": 8423, "data_time": 0.0604, "loss_rpn_cls": 0.05153, "loss_rpn_bbox": 0.0682, "loss_cls": 0.30947, "acc": 90.50269, "loss_bbox": 0.31624, "loss_mask": 0.29752, "loss": 1.04296, "time": 0.41951} -{"mode": "train", "epoch": 1, "iter": 3700, "lr": 0.0002, "memory": 8423, "data_time": 0.05243, "loss_rpn_cls": 0.05008, "loss_rpn_bbox": 0.06538, "loss_cls": 0.29554, "acc": 90.83984, "loss_bbox": 0.31091, "loss_mask": 0.29545, "loss": 1.01736, "time": 0.4183} -{"mode": "train", "epoch": 1, "iter": 3750, "lr": 0.0002, "memory": 8423, "data_time": 0.0447, "loss_rpn_cls": 0.05087, "loss_rpn_bbox": 0.06923, "loss_cls": 0.31964, "acc": 90.08057, "loss_bbox": 0.33484, "loss_mask": 0.30487, "loss": 1.07945, "time": 0.41641} -{"mode": "train", "epoch": 1, "iter": 3800, "lr": 0.0002, "memory": 8423, "data_time": 0.05408, "loss_rpn_cls": 0.04778, "loss_rpn_bbox": 0.06515, "loss_cls": 0.30464, "acc": 90.79663, "loss_bbox": 0.30843, "loss_mask": 0.29821, "loss": 1.02421, "time": 0.40672} -{"mode": "train", "epoch": 1, "iter": 3850, "lr": 0.0002, "memory": 8423, "data_time": 0.06516, "loss_rpn_cls": 0.05197, "loss_rpn_bbox": 0.06772, "loss_cls": 0.30603, "acc": 90.6189, "loss_bbox": 0.31291, "loss_mask": 0.30336, "loss": 1.04199, "time": 0.41548} -{"mode": "train", "epoch": 1, "iter": 3900, "lr": 0.0002, "memory": 8423, "data_time": 0.04979, "loss_rpn_cls": 0.04576, "loss_rpn_bbox": 0.06259, "loss_cls": 0.31373, "acc": 90.3064, "loss_bbox": 0.32103, "loss_mask": 0.30026, "loss": 1.04338, "time": 0.3956} -{"mode": "train", "epoch": 1, "iter": 3950, "lr": 0.0002, "memory": 8423, "data_time": 0.04348, "loss_rpn_cls": 0.04752, "loss_rpn_bbox": 0.06246, "loss_cls": 0.29112, "acc": 91.1123, "loss_bbox": 0.30421, "loss_mask": 0.29571, "loss": 1.00101, "time": 0.40668} -{"mode": "train", "epoch": 1, "iter": 4000, "lr": 0.0002, "memory": 8423, "data_time": 0.04454, "loss_rpn_cls": 0.04727, "loss_rpn_bbox": 0.06524, "loss_cls": 0.2879, "acc": 91.04565, "loss_bbox": 0.30225, "loss_mask": 0.29332, "loss": 0.99597, "time": 0.40453} -{"mode": "train", "epoch": 1, "iter": 4050, "lr": 0.0002, "memory": 8423, "data_time": 0.05332, "loss_rpn_cls": 0.04521, "loss_rpn_bbox": 0.06434, "loss_cls": 0.29267, "acc": 90.89526, "loss_bbox": 0.30738, "loss_mask": 0.29989, "loss": 1.00949, "time": 0.41636} -{"mode": "train", "epoch": 1, "iter": 4100, "lr": 0.0002, "memory": 8423, "data_time": 0.0494, "loss_rpn_cls": 0.05014, "loss_rpn_bbox": 0.06516, "loss_cls": 0.29089, "acc": 90.99512, "loss_bbox": 0.30081, "loss_mask": 0.2949, "loss": 1.00189, "time": 0.41618} -{"mode": "train", "epoch": 1, "iter": 4150, "lr": 0.0002, "memory": 8423, "data_time": 0.05991, "loss_rpn_cls": 0.04939, "loss_rpn_bbox": 0.06528, "loss_cls": 0.2964, "acc": 90.68188, "loss_bbox": 0.31142, "loss_mask": 0.30031, "loss": 1.0228, "time": 0.41045} -{"mode": "train", "epoch": 1, "iter": 4200, "lr": 0.0002, "memory": 8423, "data_time": 0.04343, "loss_rpn_cls": 0.04892, "loss_rpn_bbox": 0.06273, "loss_cls": 0.29628, "acc": 90.97363, "loss_bbox": 0.30863, "loss_mask": 0.28558, "loss": 1.00214, "time": 0.39564} -{"mode": "train", "epoch": 1, "iter": 4250, "lr": 0.0002, "memory": 8423, "data_time": 0.04736, "loss_rpn_cls": 0.04657, "loss_rpn_bbox": 0.0647, "loss_cls": 0.29262, "acc": 90.80151, "loss_bbox": 0.30854, "loss_mask": 0.28936, "loss": 1.0018, "time": 0.41553} -{"mode": "train", "epoch": 1, "iter": 4300, "lr": 0.0002, 
"memory": 8423, "data_time": 0.04866, "loss_rpn_cls": 0.0483, "loss_rpn_bbox": 0.06349, "loss_cls": 0.28999, "acc": 90.85645, "loss_bbox": 0.31113, "loss_mask": 0.29684, "loss": 1.00975, "time": 0.40303} -{"mode": "train", "epoch": 1, "iter": 4350, "lr": 0.0002, "memory": 8423, "data_time": 0.04996, "loss_rpn_cls": 0.04686, "loss_rpn_bbox": 0.06015, "loss_cls": 0.28176, "acc": 91.12646, "loss_bbox": 0.29925, "loss_mask": 0.28873, "loss": 0.97675, "time": 0.40207} -{"mode": "train", "epoch": 1, "iter": 4400, "lr": 0.0002, "memory": 8423, "data_time": 0.05383, "loss_rpn_cls": 0.05127, "loss_rpn_bbox": 0.06505, "loss_cls": 0.28751, "acc": 91.09106, "loss_bbox": 0.29861, "loss_mask": 0.3013, "loss": 1.00374, "time": 0.41334} -{"mode": "train", "epoch": 1, "iter": 4450, "lr": 0.0002, "memory": 8423, "data_time": 0.05691, "loss_rpn_cls": 0.04668, "loss_rpn_bbox": 0.06142, "loss_cls": 0.29292, "acc": 90.88647, "loss_bbox": 0.30632, "loss_mask": 0.28718, "loss": 0.99451, "time": 0.42349} -{"mode": "train", "epoch": 1, "iter": 4500, "lr": 0.0002, "memory": 8423, "data_time": 0.0606, "loss_rpn_cls": 0.04338, "loss_rpn_bbox": 0.06206, "loss_cls": 0.28828, "acc": 90.91724, "loss_bbox": 0.30646, "loss_mask": 0.2868, "loss": 0.98697, "time": 0.40583} -{"mode": "train", "epoch": 1, "iter": 4550, "lr": 0.0002, "memory": 8423, "data_time": 0.05157, "loss_rpn_cls": 0.04593, "loss_rpn_bbox": 0.06024, "loss_cls": 0.28139, "acc": 91.1543, "loss_bbox": 0.29044, "loss_mask": 0.28912, "loss": 0.96712, "time": 0.42645} -{"mode": "train", "epoch": 1, "iter": 4600, "lr": 0.0002, "memory": 8423, "data_time": 0.04427, "loss_rpn_cls": 0.04264, "loss_rpn_bbox": 0.06061, "loss_cls": 0.27925, "acc": 91.29395, "loss_bbox": 0.28669, "loss_mask": 0.29133, "loss": 0.96052, "time": 0.40008} -{"mode": "train", "epoch": 1, "iter": 4650, "lr": 0.0002, "memory": 8423, "data_time": 0.04809, "loss_rpn_cls": 0.04787, "loss_rpn_bbox": 0.0642, "loss_cls": 0.29439, "acc": 90.83716, "loss_bbox": 0.30543, "loss_mask": 0.29698, "loss": 1.00888, "time": 0.42336} -{"mode": "train", "epoch": 1, "iter": 4700, "lr": 0.0002, "memory": 8423, "data_time": 0.04684, "loss_rpn_cls": 0.04622, "loss_rpn_bbox": 0.05869, "loss_cls": 0.29157, "acc": 90.90479, "loss_bbox": 0.29818, "loss_mask": 0.2899, "loss": 0.98457, "time": 0.40841} -{"mode": "train", "epoch": 1, "iter": 4750, "lr": 0.0002, "memory": 8423, "data_time": 0.05542, "loss_rpn_cls": 0.05033, "loss_rpn_bbox": 0.06646, "loss_cls": 0.29529, "acc": 90.80884, "loss_bbox": 0.30696, "loss_mask": 0.29353, "loss": 1.01257, "time": 0.41456} -{"mode": "train", "epoch": 1, "iter": 4800, "lr": 0.0002, "memory": 8423, "data_time": 0.05055, "loss_rpn_cls": 0.05013, "loss_rpn_bbox": 0.06505, "loss_cls": 0.28064, "acc": 91.30493, "loss_bbox": 0.29159, "loss_mask": 0.29383, "loss": 0.98126, "time": 0.51731} -{"mode": "train", "epoch": 1, "iter": 4850, "lr": 0.0002, "memory": 8423, "data_time": 0.05347, "loss_rpn_cls": 0.0471, "loss_rpn_bbox": 0.0625, "loss_cls": 0.28863, "acc": 90.9834, "loss_bbox": 0.29899, "loss_mask": 0.28862, "loss": 0.98585, "time": 0.43011} -{"mode": "train", "epoch": 1, "iter": 4900, "lr": 0.0002, "memory": 8423, "data_time": 0.06149, "loss_rpn_cls": 0.05076, "loss_rpn_bbox": 0.06831, "loss_cls": 0.27974, "acc": 91.22485, "loss_bbox": 0.29284, "loss_mask": 0.29474, "loss": 0.9864, "time": 0.40293} -{"mode": "train", "epoch": 1, "iter": 4950, "lr": 0.0002, "memory": 8423, "data_time": 0.05801, "loss_rpn_cls": 0.04778, "loss_rpn_bbox": 0.06515, "loss_cls": 0.28331, "acc": 90.99048, 
"loss_bbox": 0.3017, "loss_mask": 0.28866, "loss": 0.9866, "time": 0.42585} -{"mode": "train", "epoch": 1, "iter": 5000, "lr": 0.0002, "memory": 8423, "data_time": 0.0457, "loss_rpn_cls": 0.04745, "loss_rpn_bbox": 0.05839, "loss_cls": 0.27768, "acc": 91.51123, "loss_bbox": 0.28263, "loss_mask": 0.28545, "loss": 0.9516, "time": 0.40645} -{"mode": "train", "epoch": 1, "iter": 5050, "lr": 0.0002, "memory": 8423, "data_time": 0.06595, "loss_rpn_cls": 0.04694, "loss_rpn_bbox": 0.06376, "loss_cls": 0.2897, "acc": 90.86304, "loss_bbox": 0.30576, "loss_mask": 0.28466, "loss": 0.99081, "time": 0.4119} -{"mode": "train", "epoch": 1, "iter": 5100, "lr": 0.0002, "memory": 8423, "data_time": 0.05258, "loss_rpn_cls": 0.04427, "loss_rpn_bbox": 0.05838, "loss_cls": 0.29314, "acc": 91.05786, "loss_bbox": 0.29546, "loss_mask": 0.28843, "loss": 0.97968, "time": 0.38843} -{"mode": "train", "epoch": 1, "iter": 5150, "lr": 0.0002, "memory": 8423, "data_time": 0.04818, "loss_rpn_cls": 0.04667, "loss_rpn_bbox": 0.06324, "loss_cls": 0.28792, "acc": 90.96973, "loss_bbox": 0.31122, "loss_mask": 0.2961, "loss": 1.00516, "time": 0.39795} -{"mode": "train", "epoch": 1, "iter": 5200, "lr": 0.0002, "memory": 8423, "data_time": 0.06517, "loss_rpn_cls": 0.05083, "loss_rpn_bbox": 0.0648, "loss_cls": 0.28421, "acc": 90.98486, "loss_bbox": 0.30511, "loss_mask": 0.29203, "loss": 0.99699, "time": 0.4077} -{"mode": "train", "epoch": 1, "iter": 5250, "lr": 0.0002, "memory": 8423, "data_time": 0.04866, "loss_rpn_cls": 0.04888, "loss_rpn_bbox": 0.06374, "loss_cls": 0.30035, "acc": 90.58643, "loss_bbox": 0.30946, "loss_mask": 0.29542, "loss": 1.01785, "time": 0.41198} -{"mode": "train", "epoch": 1, "iter": 5300, "lr": 0.0002, "memory": 8423, "data_time": 0.05849, "loss_rpn_cls": 0.04617, "loss_rpn_bbox": 0.06312, "loss_cls": 0.29243, "acc": 90.69604, "loss_bbox": 0.3101, "loss_mask": 0.29862, "loss": 1.01044, "time": 0.43076} -{"mode": "train", "epoch": 1, "iter": 5350, "lr": 0.0002, "memory": 8423, "data_time": 0.06086, "loss_rpn_cls": 0.04597, "loss_rpn_bbox": 0.06273, "loss_cls": 0.29736, "acc": 90.68091, "loss_bbox": 0.30865, "loss_mask": 0.29244, "loss": 1.00715, "time": 0.41198} -{"mode": "train", "epoch": 1, "iter": 5400, "lr": 0.0002, "memory": 8423, "data_time": 0.04353, "loss_rpn_cls": 0.04694, "loss_rpn_bbox": 0.06054, "loss_cls": 0.28565, "acc": 91.1355, "loss_bbox": 0.29743, "loss_mask": 0.28925, "loss": 0.97981, "time": 0.41998} -{"mode": "train", "epoch": 1, "iter": 5450, "lr": 0.0002, "memory": 8423, "data_time": 0.04916, "loss_rpn_cls": 0.04733, "loss_rpn_bbox": 0.06635, "loss_cls": 0.27533, "acc": 91.4187, "loss_bbox": 0.29102, "loss_mask": 0.28354, "loss": 0.96356, "time": 0.42089} -{"mode": "train", "epoch": 1, "iter": 5500, "lr": 0.0002, "memory": 8423, "data_time": 0.04911, "loss_rpn_cls": 0.04515, "loss_rpn_bbox": 0.06387, "loss_cls": 0.27202, "acc": 91.36719, "loss_bbox": 0.29314, "loss_mask": 0.28587, "loss": 0.96006, "time": 0.40419} -{"mode": "train", "epoch": 1, "iter": 5550, "lr": 0.0002, "memory": 8423, "data_time": 0.05242, "loss_rpn_cls": 0.04378, "loss_rpn_bbox": 0.06403, "loss_cls": 0.28692, "acc": 90.86035, "loss_bbox": 0.30439, "loss_mask": 0.28546, "loss": 0.98458, "time": 0.41326} -{"mode": "train", "epoch": 1, "iter": 5600, "lr": 0.0002, "memory": 8423, "data_time": 0.04272, "loss_rpn_cls": 0.04686, "loss_rpn_bbox": 0.05976, "loss_cls": 0.27931, "acc": 91.16382, "loss_bbox": 0.29486, "loss_mask": 0.28379, "loss": 0.96458, "time": 0.39897} -{"mode": "train", "epoch": 1, "iter": 5650, "lr": 
0.0002, "memory": 8423, "data_time": 0.0561, "loss_rpn_cls": 0.04715, "loss_rpn_bbox": 0.06672, "loss_cls": 0.29501, "acc": 90.72876, "loss_bbox": 0.30435, "loss_mask": 0.28806, "loss": 1.00129, "time": 0.41738} -{"mode": "train", "epoch": 1, "iter": 5700, "lr": 0.0002, "memory": 8423, "data_time": 0.05686, "loss_rpn_cls": 0.04754, "loss_rpn_bbox": 0.06075, "loss_cls": 0.2845, "acc": 90.97559, "loss_bbox": 0.2997, "loss_mask": 0.28057, "loss": 0.97305, "time": 0.41531} -{"mode": "train", "epoch": 1, "iter": 5750, "lr": 0.0002, "memory": 8423, "data_time": 0.05406, "loss_rpn_cls": 0.04771, "loss_rpn_bbox": 0.06377, "loss_cls": 0.28899, "acc": 90.98877, "loss_bbox": 0.29641, "loss_mask": 0.29464, "loss": 0.99151, "time": 0.40983} -{"mode": "train", "epoch": 1, "iter": 5800, "lr": 0.0002, "memory": 8423, "data_time": 0.05686, "loss_rpn_cls": 0.04523, "loss_rpn_bbox": 0.06597, "loss_cls": 0.29357, "acc": 90.76221, "loss_bbox": 0.30554, "loss_mask": 0.28718, "loss": 0.99748, "time": 0.4066} -{"mode": "train", "epoch": 1, "iter": 5850, "lr": 0.0002, "memory": 8423, "data_time": 0.0546, "loss_rpn_cls": 0.04439, "loss_rpn_bbox": 0.05949, "loss_cls": 0.26925, "acc": 91.35229, "loss_bbox": 0.28726, "loss_mask": 0.27899, "loss": 0.93939, "time": 0.42325} -{"mode": "train", "epoch": 1, "iter": 5900, "lr": 0.0002, "memory": 8423, "data_time": 0.05473, "loss_rpn_cls": 0.04656, "loss_rpn_bbox": 0.06663, "loss_cls": 0.29633, "acc": 90.74121, "loss_bbox": 0.30675, "loss_mask": 0.29088, "loss": 1.00714, "time": 0.4212} -{"mode": "train", "epoch": 1, "iter": 5950, "lr": 0.0002, "memory": 8423, "data_time": 0.04942, "loss_rpn_cls": 0.04374, "loss_rpn_bbox": 0.05839, "loss_cls": 0.27411, "acc": 91.26221, "loss_bbox": 0.28798, "loss_mask": 0.28382, "loss": 0.94804, "time": 0.40698} -{"mode": "train", "epoch": 1, "iter": 6000, "lr": 0.0002, "memory": 8423, "data_time": 0.04724, "loss_rpn_cls": 0.04363, "loss_rpn_bbox": 0.06099, "loss_cls": 0.27986, "acc": 91.34399, "loss_bbox": 0.28861, "loss_mask": 0.28066, "loss": 0.95375, "time": 0.41354} -{"mode": "train", "epoch": 1, "iter": 6050, "lr": 0.0002, "memory": 8423, "data_time": 0.05233, "loss_rpn_cls": 0.04781, "loss_rpn_bbox": 0.06174, "loss_cls": 0.27734, "acc": 91.28662, "loss_bbox": 0.29037, "loss_mask": 0.28532, "loss": 0.96258, "time": 0.4139} -{"mode": "train", "epoch": 1, "iter": 6100, "lr": 0.0002, "memory": 8423, "data_time": 0.05318, "loss_rpn_cls": 0.04661, "loss_rpn_bbox": 0.06202, "loss_cls": 0.27489, "acc": 91.46729, "loss_bbox": 0.27988, "loss_mask": 0.27371, "loss": 0.93711, "time": 0.40307} -{"mode": "train", "epoch": 1, "iter": 6150, "lr": 0.0002, "memory": 8423, "data_time": 0.05571, "loss_rpn_cls": 0.04351, "loss_rpn_bbox": 0.05793, "loss_cls": 0.2642, "acc": 91.62256, "loss_bbox": 0.28412, "loss_mask": 0.28841, "loss": 0.93816, "time": 0.40065} -{"mode": "train", "epoch": 1, "iter": 6200, "lr": 0.0002, "memory": 8423, "data_time": 0.04744, "loss_rpn_cls": 0.04573, "loss_rpn_bbox": 0.06693, "loss_cls": 0.28352, "acc": 91.0896, "loss_bbox": 0.29699, "loss_mask": 0.29085, "loss": 0.98402, "time": 0.40088} -{"mode": "train", "epoch": 1, "iter": 6250, "lr": 0.0002, "memory": 8423, "data_time": 0.05604, "loss_rpn_cls": 0.04004, "loss_rpn_bbox": 0.06107, "loss_cls": 0.27455, "acc": 91.26636, "loss_bbox": 0.29098, "loss_mask": 0.28251, "loss": 0.94915, "time": 0.40376} -{"mode": "train", "epoch": 1, "iter": 6300, "lr": 0.0002, "memory": 8423, "data_time": 0.0622, "loss_rpn_cls": 0.04507, "loss_rpn_bbox": 0.06016, "loss_cls": 0.27533, "acc": 
91.30957, "loss_bbox": 0.28625, "loss_mask": 0.28631, "loss": 0.95312, "time": 0.41322} -{"mode": "train", "epoch": 1, "iter": 6350, "lr": 0.0002, "memory": 8423, "data_time": 0.05016, "loss_rpn_cls": 0.04561, "loss_rpn_bbox": 0.06158, "loss_cls": 0.28076, "acc": 91.25806, "loss_bbox": 0.28584, "loss_mask": 0.27959, "loss": 0.95338, "time": 0.40196} -{"mode": "train", "epoch": 1, "iter": 6400, "lr": 0.0002, "memory": 8423, "data_time": 0.04128, "loss_rpn_cls": 0.04356, "loss_rpn_bbox": 0.05887, "loss_cls": 0.26944, "acc": 91.56299, "loss_bbox": 0.28471, "loss_mask": 0.28346, "loss": 0.94003, "time": 0.39553} -{"mode": "train", "epoch": 1, "iter": 6450, "lr": 0.0002, "memory": 8423, "data_time": 0.04878, "loss_rpn_cls": 0.04282, "loss_rpn_bbox": 0.05746, "loss_cls": 0.26374, "acc": 91.604, "loss_bbox": 0.28563, "loss_mask": 0.28055, "loss": 0.93021, "time": 0.42865} -{"mode": "train", "epoch": 1, "iter": 6500, "lr": 0.0002, "memory": 8423, "data_time": 0.06643, "loss_rpn_cls": 0.04141, "loss_rpn_bbox": 0.06065, "loss_cls": 0.27477, "acc": 91.32129, "loss_bbox": 0.2886, "loss_mask": 0.28008, "loss": 0.94551, "time": 0.41087} -{"mode": "train", "epoch": 1, "iter": 6550, "lr": 0.0002, "memory": 8423, "data_time": 0.04869, "loss_rpn_cls": 0.04135, "loss_rpn_bbox": 0.05998, "loss_cls": 0.26716, "acc": 91.60254, "loss_bbox": 0.28419, "loss_mask": 0.2911, "loss": 0.94377, "time": 0.40578} -{"mode": "train", "epoch": 1, "iter": 6600, "lr": 0.0002, "memory": 8423, "data_time": 0.05412, "loss_rpn_cls": 0.04213, "loss_rpn_bbox": 0.06041, "loss_cls": 0.27583, "acc": 91.34204, "loss_bbox": 0.28923, "loss_mask": 0.2772, "loss": 0.94481, "time": 0.41773} -{"mode": "train", "epoch": 1, "iter": 6650, "lr": 0.0002, "memory": 8423, "data_time": 0.05596, "loss_rpn_cls": 0.04102, "loss_rpn_bbox": 0.06061, "loss_cls": 0.27191, "acc": 91.37866, "loss_bbox": 0.2911, "loss_mask": 0.28622, "loss": 0.95087, "time": 0.41427} -{"mode": "train", "epoch": 1, "iter": 6700, "lr": 0.0002, "memory": 8423, "data_time": 0.03944, "loss_rpn_cls": 0.04503, "loss_rpn_bbox": 0.06359, "loss_cls": 0.27852, "acc": 91.25586, "loss_bbox": 0.29053, "loss_mask": 0.28173, "loss": 0.9594, "time": 0.39865} -{"mode": "train", "epoch": 1, "iter": 6750, "lr": 0.0002, "memory": 8423, "data_time": 0.05721, "loss_rpn_cls": 0.04615, "loss_rpn_bbox": 0.06056, "loss_cls": 0.289, "acc": 90.95142, "loss_bbox": 0.29896, "loss_mask": 0.2779, "loss": 0.97257, "time": 0.40626} -{"mode": "train", "epoch": 1, "iter": 6800, "lr": 0.0002, "memory": 8423, "data_time": 0.05954, "loss_rpn_cls": 0.04475, "loss_rpn_bbox": 0.06493, "loss_cls": 0.29127, "acc": 90.9209, "loss_bbox": 0.30144, "loss_mask": 0.2829, "loss": 0.98529, "time": 0.42101} -{"mode": "train", "epoch": 1, "iter": 6850, "lr": 0.0002, "memory": 8423, "data_time": 0.05804, "loss_rpn_cls": 0.05019, "loss_rpn_bbox": 0.06211, "loss_cls": 0.28418, "acc": 90.90527, "loss_bbox": 0.29944, "loss_mask": 0.27832, "loss": 0.97425, "time": 0.41521} -{"mode": "train", "epoch": 1, "iter": 6900, "lr": 0.0002, "memory": 8423, "data_time": 0.05968, "loss_rpn_cls": 0.04269, "loss_rpn_bbox": 0.0629, "loss_cls": 0.27081, "acc": 91.52588, "loss_bbox": 0.2786, "loss_mask": 0.27566, "loss": 0.93066, "time": 0.4135} -{"mode": "train", "epoch": 1, "iter": 6950, "lr": 0.0002, "memory": 8423, "data_time": 0.05114, "loss_rpn_cls": 0.04103, "loss_rpn_bbox": 0.05843, "loss_cls": 0.27404, "acc": 91.27539, "loss_bbox": 0.29132, "loss_mask": 0.28051, "loss": 0.94533, "time": 0.4176} -{"mode": "train", "epoch": 1, "iter": 7000, "lr": 
0.0002, "memory": 8423, "data_time": 0.03904, "loss_rpn_cls": 0.04407, "loss_rpn_bbox": 0.06446, "loss_cls": 0.2726, "acc": 91.32788, "loss_bbox": 0.29062, "loss_mask": 0.28208, "loss": 0.95384, "time": 0.396} -{"mode": "train", "epoch": 1, "iter": 7050, "lr": 0.0002, "memory": 8423, "data_time": 0.0598, "loss_rpn_cls": 0.04391, "loss_rpn_bbox": 0.06219, "loss_cls": 0.27848, "acc": 91.14917, "loss_bbox": 0.29169, "loss_mask": 0.2801, "loss": 0.95637, "time": 0.41665} -{"mode": "train", "epoch": 1, "iter": 7100, "lr": 0.0002, "memory": 8423, "data_time": 0.05356, "loss_rpn_cls": 0.04358, "loss_rpn_bbox": 0.05781, "loss_cls": 0.27476, "acc": 91.31128, "loss_bbox": 0.29066, "loss_mask": 0.27811, "loss": 0.94492, "time": 0.4029} -{"mode": "train", "epoch": 1, "iter": 7150, "lr": 0.0002, "memory": 8423, "data_time": 0.04679, "loss_rpn_cls": 0.04213, "loss_rpn_bbox": 0.0615, "loss_cls": 0.27604, "acc": 91.09424, "loss_bbox": 0.29611, "loss_mask": 0.28824, "loss": 0.96402, "time": 0.39725} -{"mode": "train", "epoch": 1, "iter": 7200, "lr": 0.0002, "memory": 8423, "data_time": 0.05121, "loss_rpn_cls": 0.04314, "loss_rpn_bbox": 0.0593, "loss_cls": 0.26849, "acc": 91.60962, "loss_bbox": 0.28202, "loss_mask": 0.27344, "loss": 0.92639, "time": 0.40034} -{"mode": "train", "epoch": 1, "iter": 7250, "lr": 0.0002, "memory": 8423, "data_time": 0.05261, "loss_rpn_cls": 0.04585, "loss_rpn_bbox": 0.05834, "loss_cls": 0.25894, "acc": 91.70947, "loss_bbox": 0.2772, "loss_mask": 0.28014, "loss": 0.92047, "time": 0.40574} -{"mode": "train", "epoch": 1, "iter": 7300, "lr": 0.0002, "memory": 8423, "data_time": 0.05134, "loss_rpn_cls": 0.04353, "loss_rpn_bbox": 0.06006, "loss_cls": 0.28413, "acc": 91.04077, "loss_bbox": 0.28885, "loss_mask": 0.27861, "loss": 0.95519, "time": 0.41608} -{"mode": "val", "epoch": 1, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.2663, "bbox_mAP_50": 0.4885, "bbox_mAP_75": 0.2705, "bbox_mAP_s": 0.1543, "bbox_mAP_m": 0.3003, "bbox_mAP_l": 0.3458, "bbox_mAP_copypaste": "0.2663 0.4885 0.2705 0.1543 0.3003 0.3458", "segm_mAP": 0.2741, "segm_mAP_50": 0.4656, "segm_mAP_75": 0.288, "segm_mAP_s": 0.1239, "segm_mAP_m": 0.302, "segm_mAP_l": 0.4035, "segm_mAP_copypaste": "0.2741 0.4656 0.2880 0.1239 0.3020 0.4035"} -{"mode": "train", "epoch": 2, "iter": 50, "lr": 0.0002, "memory": 8423, "data_time": 0.12429, "loss_rpn_cls": 0.04069, "loss_rpn_bbox": 0.05997, "loss_cls": 0.27572, "acc": 91.09692, "loss_bbox": 0.29541, "loss_mask": 0.27691, "loss": 0.94871, "time": 0.54211} -{"mode": "train", "epoch": 2, "iter": 100, "lr": 0.0002, "memory": 8423, "data_time": 0.04825, "loss_rpn_cls": 0.04079, "loss_rpn_bbox": 0.06188, "loss_cls": 0.26475, "acc": 91.43311, "loss_bbox": 0.28833, "loss_mask": 0.27075, "loss": 0.92649, "time": 0.43462} -{"mode": "train", "epoch": 2, "iter": 150, "lr": 0.0002, "memory": 8423, "data_time": 0.05428, "loss_rpn_cls": 0.03971, "loss_rpn_bbox": 0.0601, "loss_cls": 0.27224, "acc": 91.24097, "loss_bbox": 0.30106, "loss_mask": 0.27885, "loss": 0.95198, "time": 0.42506} -{"mode": "train", "epoch": 2, "iter": 200, "lr": 0.0002, "memory": 8423, "data_time": 0.04796, "loss_rpn_cls": 0.04022, "loss_rpn_bbox": 0.0618, "loss_cls": 0.27653, "acc": 91.07983, "loss_bbox": 0.29257, "loss_mask": 0.2744, "loss": 0.94551, "time": 0.42628} -{"mode": "train", "epoch": 2, "iter": 250, "lr": 0.0002, "memory": 8423, "data_time": 0.04982, "loss_rpn_cls": 0.03892, "loss_rpn_bbox": 0.06231, "loss_cls": 0.26395, "acc": 91.41455, "loss_bbox": 0.28569, "loss_mask": 0.27776, "loss": 0.92862, "time": 0.41647} 
-{"mode": "train", "epoch": 2, "iter": 300, "lr": 0.0002, "memory": 8423, "data_time": 0.04876, "loss_rpn_cls": 0.04262, "loss_rpn_bbox": 0.06021, "loss_cls": 0.26304, "acc": 91.63428, "loss_bbox": 0.28106, "loss_mask": 0.27596, "loss": 0.92289, "time": 0.40613} -{"mode": "train", "epoch": 2, "iter": 350, "lr": 0.0002, "memory": 8423, "data_time": 0.05005, "loss_rpn_cls": 0.03756, "loss_rpn_bbox": 0.05805, "loss_cls": 0.25923, "acc": 91.56152, "loss_bbox": 0.28745, "loss_mask": 0.26951, "loss": 0.9118, "time": 0.40484} -{"mode": "train", "epoch": 2, "iter": 400, "lr": 0.0002, "memory": 8423, "data_time": 0.05649, "loss_rpn_cls": 0.03824, "loss_rpn_bbox": 0.05719, "loss_cls": 0.2613, "acc": 91.63647, "loss_bbox": 0.28138, "loss_mask": 0.27378, "loss": 0.91189, "time": 0.41379} -{"mode": "train", "epoch": 2, "iter": 450, "lr": 0.0002, "memory": 8423, "data_time": 0.04752, "loss_rpn_cls": 0.04089, "loss_rpn_bbox": 0.05929, "loss_cls": 0.2714, "acc": 91.17407, "loss_bbox": 0.29155, "loss_mask": 0.28092, "loss": 0.94405, "time": 0.40656} -{"mode": "train", "epoch": 2, "iter": 500, "lr": 0.0002, "memory": 8423, "data_time": 0.05078, "loss_rpn_cls": 0.04234, "loss_rpn_bbox": 0.05806, "loss_cls": 0.26531, "acc": 91.54126, "loss_bbox": 0.28012, "loss_mask": 0.2802, "loss": 0.92603, "time": 0.41688} -{"mode": "train", "epoch": 2, "iter": 550, "lr": 0.0002, "memory": 8423, "data_time": 0.04618, "loss_rpn_cls": 0.04069, "loss_rpn_bbox": 0.05969, "loss_cls": 0.2568, "acc": 91.67261, "loss_bbox": 0.28027, "loss_mask": 0.27409, "loss": 0.91154, "time": 0.41218} -{"mode": "train", "epoch": 2, "iter": 600, "lr": 0.0002, "memory": 8423, "data_time": 0.059, "loss_rpn_cls": 0.04027, "loss_rpn_bbox": 0.06292, "loss_cls": 0.27386, "acc": 91.03174, "loss_bbox": 0.29375, "loss_mask": 0.27427, "loss": 0.94507, "time": 0.41972} -{"mode": "train", "epoch": 2, "iter": 650, "lr": 0.0002, "memory": 8423, "data_time": 0.04694, "loss_rpn_cls": 0.04105, "loss_rpn_bbox": 0.05857, "loss_cls": 0.26816, "acc": 91.31372, "loss_bbox": 0.28658, "loss_mask": 0.27867, "loss": 0.93304, "time": 0.41003} -{"mode": "train", "epoch": 2, "iter": 700, "lr": 0.0002, "memory": 8423, "data_time": 0.03636, "loss_rpn_cls": 0.03834, "loss_rpn_bbox": 0.05669, "loss_cls": 0.25643, "acc": 91.79736, "loss_bbox": 0.27868, "loss_mask": 0.27413, "loss": 0.90426, "time": 0.38777} -{"mode": "train", "epoch": 2, "iter": 750, "lr": 0.0002, "memory": 8423, "data_time": 0.04425, "loss_rpn_cls": 0.03602, "loss_rpn_bbox": 0.05566, "loss_cls": 0.26147, "acc": 91.49048, "loss_bbox": 0.28633, "loss_mask": 0.27292, "loss": 0.9124, "time": 0.39361} -{"mode": "train", "epoch": 2, "iter": 800, "lr": 0.0002, "memory": 8423, "data_time": 0.04283, "loss_rpn_cls": 0.04129, "loss_rpn_bbox": 0.05782, "loss_cls": 0.26825, "acc": 91.29028, "loss_bbox": 0.28695, "loss_mask": 0.26989, "loss": 0.9242, "time": 0.46112} -{"mode": "train", "epoch": 2, "iter": 850, "lr": 0.0002, "memory": 8423, "data_time": 0.05174, "loss_rpn_cls": 0.04032, "loss_rpn_bbox": 0.05979, "loss_cls": 0.26271, "acc": 91.37085, "loss_bbox": 0.28423, "loss_mask": 0.2754, "loss": 0.92244, "time": 0.41627} -{"mode": "train", "epoch": 2, "iter": 900, "lr": 0.0002, "memory": 8423, "data_time": 0.05109, "loss_rpn_cls": 0.0357, "loss_rpn_bbox": 0.05704, "loss_cls": 0.26398, "acc": 91.54102, "loss_bbox": 0.2856, "loss_mask": 0.27222, "loss": 0.91454, "time": 0.40141} -{"mode": "train", "epoch": 2, "iter": 950, "lr": 0.0002, "memory": 8423, "data_time": 0.03786, "loss_rpn_cls": 0.03931, "loss_rpn_bbox": 0.0574, 
"loss_cls": 0.26761, "acc": 91.53906, "loss_bbox": 0.28398, "loss_mask": 0.27024, "loss": 0.91854, "time": 0.38899} -{"mode": "train", "epoch": 2, "iter": 1000, "lr": 0.0002, "memory": 8423, "data_time": 0.04606, "loss_rpn_cls": 0.03623, "loss_rpn_bbox": 0.05601, "loss_cls": 0.25862, "acc": 91.60205, "loss_bbox": 0.28224, "loss_mask": 0.27492, "loss": 0.90803, "time": 0.40889} -{"mode": "train", "epoch": 2, "iter": 1050, "lr": 0.0002, "memory": 8423, "data_time": 0.04812, "loss_rpn_cls": 0.04292, "loss_rpn_bbox": 0.05826, "loss_cls": 0.27092, "acc": 91.03589, "loss_bbox": 0.30031, "loss_mask": 0.274, "loss": 0.94641, "time": 0.41431} -{"mode": "train", "epoch": 2, "iter": 1100, "lr": 0.0002, "memory": 8423, "data_time": 0.03972, "loss_rpn_cls": 0.03672, "loss_rpn_bbox": 0.05831, "loss_cls": 0.26701, "acc": 91.37476, "loss_bbox": 0.28548, "loss_mask": 0.27343, "loss": 0.92096, "time": 0.40903} -{"mode": "train", "epoch": 2, "iter": 1150, "lr": 0.0002, "memory": 8423, "data_time": 0.04474, "loss_rpn_cls": 0.03683, "loss_rpn_bbox": 0.05613, "loss_cls": 0.26034, "acc": 91.67261, "loss_bbox": 0.2758, "loss_mask": 0.27782, "loss": 0.90692, "time": 0.40628} -{"mode": "train", "epoch": 2, "iter": 1200, "lr": 0.0002, "memory": 8423, "data_time": 0.03769, "loss_rpn_cls": 0.03907, "loss_rpn_bbox": 0.05648, "loss_cls": 0.25634, "acc": 91.71777, "loss_bbox": 0.28253, "loss_mask": 0.26996, "loss": 0.90438, "time": 0.41002} -{"mode": "train", "epoch": 2, "iter": 1250, "lr": 0.0002, "memory": 8423, "data_time": 0.04773, "loss_rpn_cls": 0.04202, "loss_rpn_bbox": 0.06256, "loss_cls": 0.27419, "acc": 91.12231, "loss_bbox": 0.29874, "loss_mask": 0.27972, "loss": 0.95722, "time": 0.40965} -{"mode": "train", "epoch": 2, "iter": 1300, "lr": 0.0002, "memory": 8423, "data_time": 0.04076, "loss_rpn_cls": 0.04075, "loss_rpn_bbox": 0.05666, "loss_cls": 0.26525, "acc": 91.54761, "loss_bbox": 0.28122, "loss_mask": 0.27115, "loss": 0.91503, "time": 0.4093} -{"mode": "train", "epoch": 2, "iter": 1350, "lr": 0.0002, "memory": 8423, "data_time": 0.052, "loss_rpn_cls": 0.04144, "loss_rpn_bbox": 0.05933, "loss_cls": 0.26439, "acc": 91.57153, "loss_bbox": 0.28396, "loss_mask": 0.27142, "loss": 0.92054, "time": 0.4077} -{"mode": "train", "epoch": 2, "iter": 1400, "lr": 0.0002, "memory": 8423, "data_time": 0.03876, "loss_rpn_cls": 0.04031, "loss_rpn_bbox": 0.05862, "loss_cls": 0.2583, "acc": 91.75586, "loss_bbox": 0.27531, "loss_mask": 0.26379, "loss": 0.89632, "time": 0.45373} -{"mode": "train", "epoch": 2, "iter": 1450, "lr": 0.0002, "memory": 8423, "data_time": 0.04785, "loss_rpn_cls": 0.04357, "loss_rpn_bbox": 0.06142, "loss_cls": 0.26119, "acc": 91.37939, "loss_bbox": 0.28525, "loss_mask": 0.26884, "loss": 0.92027, "time": 0.41271} -{"mode": "train", "epoch": 2, "iter": 1500, "lr": 0.0002, "memory": 8423, "data_time": 0.04549, "loss_rpn_cls": 0.04121, "loss_rpn_bbox": 0.06264, "loss_cls": 0.26345, "acc": 91.27759, "loss_bbox": 0.29138, "loss_mask": 0.27105, "loss": 0.92974, "time": 0.41458} -{"mode": "train", "epoch": 2, "iter": 1550, "lr": 0.0002, "memory": 8423, "data_time": 0.04801, "loss_rpn_cls": 0.03898, "loss_rpn_bbox": 0.05566, "loss_cls": 0.25229, "acc": 91.82251, "loss_bbox": 0.27369, "loss_mask": 0.27102, "loss": 0.89165, "time": 0.4057} -{"mode": "train", "epoch": 2, "iter": 1600, "lr": 0.0002, "memory": 8423, "data_time": 0.04795, "loss_rpn_cls": 0.03748, "loss_rpn_bbox": 0.05928, "loss_cls": 0.25845, "acc": 91.55151, "loss_bbox": 0.27975, "loss_mask": 0.26955, "loss": 0.90451, "time": 0.40377} -{"mode": 
"train", "epoch": 2, "iter": 1650, "lr": 0.0002, "memory": 8423, "data_time": 0.05305, "loss_rpn_cls": 0.03637, "loss_rpn_bbox": 0.05758, "loss_cls": 0.25071, "acc": 91.87817, "loss_bbox": 0.27429, "loss_mask": 0.26836, "loss": 0.88731, "time": 0.39967} -{"mode": "train", "epoch": 2, "iter": 1700, "lr": 0.0002, "memory": 8423, "data_time": 0.03857, "loss_rpn_cls": 0.04185, "loss_rpn_bbox": 0.05819, "loss_cls": 0.25888, "acc": 91.80591, "loss_bbox": 0.27279, "loss_mask": 0.27465, "loss": 0.90635, "time": 0.40297} -{"mode": "train", "epoch": 2, "iter": 1750, "lr": 0.0002, "memory": 8423, "data_time": 0.04647, "loss_rpn_cls": 0.0399, "loss_rpn_bbox": 0.05913, "loss_cls": 0.26668, "acc": 91.47729, "loss_bbox": 0.28635, "loss_mask": 0.2727, "loss": 0.92475, "time": 0.40508} -{"mode": "train", "epoch": 2, "iter": 1800, "lr": 0.0002, "memory": 8423, "data_time": 0.04766, "loss_rpn_cls": 0.03965, "loss_rpn_bbox": 0.06187, "loss_cls": 0.26961, "acc": 91.35083, "loss_bbox": 0.28503, "loss_mask": 0.27992, "loss": 0.93608, "time": 0.42231} -{"mode": "train", "epoch": 2, "iter": 1850, "lr": 0.0002, "memory": 8423, "data_time": 0.0533, "loss_rpn_cls": 0.04045, "loss_rpn_bbox": 0.05694, "loss_cls": 0.25484, "acc": 91.94385, "loss_bbox": 0.26755, "loss_mask": 0.27292, "loss": 0.8927, "time": 0.40787} -{"mode": "train", "epoch": 2, "iter": 1900, "lr": 0.0002, "memory": 8423, "data_time": 0.04927, "loss_rpn_cls": 0.04232, "loss_rpn_bbox": 0.06023, "loss_cls": 0.2551, "acc": 91.70117, "loss_bbox": 0.2766, "loss_mask": 0.27561, "loss": 0.90986, "time": 0.42786} -{"mode": "train", "epoch": 2, "iter": 1950, "lr": 0.0002, "memory": 8423, "data_time": 0.05334, "loss_rpn_cls": 0.04205, "loss_rpn_bbox": 0.05979, "loss_cls": 0.2575, "acc": 91.74902, "loss_bbox": 0.27032, "loss_mask": 0.27436, "loss": 0.90402, "time": 0.52176} -{"mode": "train", "epoch": 2, "iter": 2000, "lr": 0.0002, "memory": 8423, "data_time": 0.04322, "loss_rpn_cls": 0.03898, "loss_rpn_bbox": 0.0606, "loss_cls": 0.26498, "acc": 91.46411, "loss_bbox": 0.28773, "loss_mask": 0.27471, "loss": 0.92699, "time": 0.42437} -{"mode": "train", "epoch": 2, "iter": 2050, "lr": 0.0002, "memory": 8423, "data_time": 0.04175, "loss_rpn_cls": 0.03626, "loss_rpn_bbox": 0.05805, "loss_cls": 0.26419, "acc": 91.60962, "loss_bbox": 0.27632, "loss_mask": 0.27217, "loss": 0.90698, "time": 0.41608} -{"mode": "train", "epoch": 2, "iter": 2100, "lr": 0.0002, "memory": 8423, "data_time": 0.04133, "loss_rpn_cls": 0.03991, "loss_rpn_bbox": 0.06007, "loss_cls": 0.26216, "acc": 91.68848, "loss_bbox": 0.27601, "loss_mask": 0.276, "loss": 0.91416, "time": 0.40598} -{"mode": "train", "epoch": 2, "iter": 2150, "lr": 0.0002, "memory": 8423, "data_time": 0.04421, "loss_rpn_cls": 0.0391, "loss_rpn_bbox": 0.06065, "loss_cls": 0.26264, "acc": 91.47729, "loss_bbox": 0.2824, "loss_mask": 0.27221, "loss": 0.917, "time": 0.411} -{"mode": "train", "epoch": 2, "iter": 2200, "lr": 0.0002, "memory": 8423, "data_time": 0.044, "loss_rpn_cls": 0.03654, "loss_rpn_bbox": 0.05711, "loss_cls": 0.24336, "acc": 92.10596, "loss_bbox": 0.2706, "loss_mask": 0.26771, "loss": 0.87532, "time": 0.40823} -{"mode": "train", "epoch": 2, "iter": 2250, "lr": 0.0002, "memory": 8423, "data_time": 0.04367, "loss_rpn_cls": 0.03892, "loss_rpn_bbox": 0.05888, "loss_cls": 0.26108, "acc": 91.4668, "loss_bbox": 0.28039, "loss_mask": 0.27052, "loss": 0.90979, "time": 0.41178} -{"mode": "train", "epoch": 2, "iter": 2300, "lr": 0.0002, "memory": 8423, "data_time": 0.04883, "loss_rpn_cls": 0.03922, "loss_rpn_bbox": 0.05761, 
"loss_cls": 0.27064, "acc": 91.12329, "loss_bbox": 0.29303, "loss_mask": 0.27763, "loss": 0.93813, "time": 0.41182} -{"mode": "train", "epoch": 2, "iter": 2350, "lr": 0.0002, "memory": 8423, "data_time": 0.04298, "loss_rpn_cls": 0.03858, "loss_rpn_bbox": 0.05851, "loss_cls": 0.2608, "acc": 91.62891, "loss_bbox": 0.27832, "loss_mask": 0.2752, "loss": 0.91141, "time": 0.40095} -{"mode": "train", "epoch": 2, "iter": 2400, "lr": 0.0002, "memory": 8423, "data_time": 0.04043, "loss_rpn_cls": 0.03654, "loss_rpn_bbox": 0.05605, "loss_cls": 0.25324, "acc": 91.94165, "loss_bbox": 0.26631, "loss_mask": 0.26807, "loss": 0.88021, "time": 0.40545} -{"mode": "train", "epoch": 2, "iter": 2450, "lr": 0.0002, "memory": 8423, "data_time": 0.04049, "loss_rpn_cls": 0.03703, "loss_rpn_bbox": 0.05812, "loss_cls": 0.2614, "acc": 91.71387, "loss_bbox": 0.27769, "loss_mask": 0.27416, "loss": 0.9084, "time": 0.42104} -{"mode": "train", "epoch": 2, "iter": 2500, "lr": 0.0002, "memory": 8423, "data_time": 0.04494, "loss_rpn_cls": 0.03707, "loss_rpn_bbox": 0.05691, "loss_cls": 0.25119, "acc": 91.89136, "loss_bbox": 0.27235, "loss_mask": 0.27151, "loss": 0.88903, "time": 0.40986} -{"mode": "train", "epoch": 2, "iter": 2550, "lr": 0.0002, "memory": 8423, "data_time": 0.04537, "loss_rpn_cls": 0.03942, "loss_rpn_bbox": 0.05653, "loss_cls": 0.24826, "acc": 92.08887, "loss_bbox": 0.26982, "loss_mask": 0.26574, "loss": 0.87977, "time": 0.4208} -{"mode": "train", "epoch": 2, "iter": 2600, "lr": 0.0002, "memory": 8423, "data_time": 0.0476, "loss_rpn_cls": 0.04403, "loss_rpn_bbox": 0.05946, "loss_cls": 0.26633, "acc": 91.53613, "loss_bbox": 0.27316, "loss_mask": 0.27305, "loss": 0.91603, "time": 0.41261} -{"mode": "train", "epoch": 2, "iter": 2650, "lr": 0.0002, "memory": 8423, "data_time": 0.04467, "loss_rpn_cls": 0.04117, "loss_rpn_bbox": 0.05951, "loss_cls": 0.26268, "acc": 91.52856, "loss_bbox": 0.27525, "loss_mask": 0.27229, "loss": 0.91092, "time": 0.40626} -{"mode": "train", "epoch": 2, "iter": 2700, "lr": 0.0002, "memory": 8423, "data_time": 0.05038, "loss_rpn_cls": 0.04081, "loss_rpn_bbox": 0.05955, "loss_cls": 0.26317, "acc": 91.37671, "loss_bbox": 0.28226, "loss_mask": 0.27173, "loss": 0.91752, "time": 0.41884} -{"mode": "train", "epoch": 2, "iter": 2750, "lr": 0.0002, "memory": 8423, "data_time": 0.04549, "loss_rpn_cls": 0.03523, "loss_rpn_bbox": 0.05828, "loss_cls": 0.25729, "acc": 91.58423, "loss_bbox": 0.27437, "loss_mask": 0.27325, "loss": 0.89842, "time": 0.4038} -{"mode": "train", "epoch": 2, "iter": 2800, "lr": 0.0002, "memory": 8423, "data_time": 0.05533, "loss_rpn_cls": 0.04013, "loss_rpn_bbox": 0.05765, "loss_cls": 0.2573, "acc": 91.77515, "loss_bbox": 0.27431, "loss_mask": 0.27221, "loss": 0.90161, "time": 0.41226} -{"mode": "train", "epoch": 2, "iter": 2850, "lr": 0.0002, "memory": 8423, "data_time": 0.04997, "loss_rpn_cls": 0.0373, "loss_rpn_bbox": 0.05882, "loss_cls": 0.2617, "acc": 91.46826, "loss_bbox": 0.27557, "loss_mask": 0.26801, "loss": 0.9014, "time": 0.39613} -{"mode": "train", "epoch": 2, "iter": 2900, "lr": 0.0002, "memory": 8423, "data_time": 0.04963, "loss_rpn_cls": 0.03911, "loss_rpn_bbox": 0.0587, "loss_cls": 0.25557, "acc": 91.64868, "loss_bbox": 0.27733, "loss_mask": 0.26998, "loss": 0.90067, "time": 0.40853} -{"mode": "train", "epoch": 2, "iter": 2950, "lr": 0.0002, "memory": 8423, "data_time": 0.04607, "loss_rpn_cls": 0.04186, "loss_rpn_bbox": 0.06091, "loss_cls": 0.26531, "acc": 91.51245, "loss_bbox": 0.28033, "loss_mask": 0.27086, "loss": 0.91927, "time": 0.40779} -{"mode": "train", 
"epoch": 2, "iter": 3000, "lr": 0.0002, "memory": 8423, "data_time": 0.04563, "loss_rpn_cls": 0.03933, "loss_rpn_bbox": 0.05872, "loss_cls": 0.2595, "acc": 91.64551, "loss_bbox": 0.28049, "loss_mask": 0.26979, "loss": 0.90782, "time": 0.40147} -{"mode": "train", "epoch": 2, "iter": 3050, "lr": 0.0002, "memory": 8423, "data_time": 0.05599, "loss_rpn_cls": 0.03796, "loss_rpn_bbox": 0.05902, "loss_cls": 0.26424, "acc": 91.45361, "loss_bbox": 0.27623, "loss_mask": 0.27182, "loss": 0.90928, "time": 0.41861} -{"mode": "train", "epoch": 2, "iter": 3100, "lr": 0.0002, "memory": 8423, "data_time": 0.05045, "loss_rpn_cls": 0.0385, "loss_rpn_bbox": 0.05805, "loss_cls": 0.25805, "acc": 91.79858, "loss_bbox": 0.27294, "loss_mask": 0.26863, "loss": 0.89617, "time": 0.39293} -{"mode": "train", "epoch": 2, "iter": 3150, "lr": 0.0002, "memory": 8423, "data_time": 0.04733, "loss_rpn_cls": 0.03813, "loss_rpn_bbox": 0.05625, "loss_cls": 0.25186, "acc": 91.90308, "loss_bbox": 0.27326, "loss_mask": 0.26676, "loss": 0.88625, "time": 0.41057} -{"mode": "train", "epoch": 2, "iter": 3200, "lr": 0.0002, "memory": 8423, "data_time": 0.04543, "loss_rpn_cls": 0.03544, "loss_rpn_bbox": 0.05577, "loss_cls": 0.25378, "acc": 92.00659, "loss_bbox": 0.26575, "loss_mask": 0.26705, "loss": 0.87779, "time": 0.40438} -{"mode": "train", "epoch": 2, "iter": 3250, "lr": 0.0002, "memory": 8423, "data_time": 0.0427, "loss_rpn_cls": 0.0387, "loss_rpn_bbox": 0.05728, "loss_cls": 0.25397, "acc": 91.66626, "loss_bbox": 0.27588, "loss_mask": 0.27135, "loss": 0.89718, "time": 0.39104} -{"mode": "train", "epoch": 2, "iter": 3300, "lr": 0.0002, "memory": 8423, "data_time": 0.05134, "loss_rpn_cls": 0.03938, "loss_rpn_bbox": 0.05565, "loss_cls": 0.24787, "acc": 91.98926, "loss_bbox": 0.26609, "loss_mask": 0.26306, "loss": 0.87206, "time": 0.40371} -{"mode": "train", "epoch": 2, "iter": 3350, "lr": 0.0002, "memory": 8423, "data_time": 0.04919, "loss_rpn_cls": 0.0389, "loss_rpn_bbox": 0.05819, "loss_cls": 0.25827, "acc": 91.70825, "loss_bbox": 0.27218, "loss_mask": 0.26684, "loss": 0.89438, "time": 0.41022} -{"mode": "train", "epoch": 2, "iter": 3400, "lr": 0.0002, "memory": 8423, "data_time": 0.04653, "loss_rpn_cls": 0.03974, "loss_rpn_bbox": 0.05802, "loss_cls": 0.2544, "acc": 91.88306, "loss_bbox": 0.26885, "loss_mask": 0.27131, "loss": 0.89232, "time": 0.39909} -{"mode": "train", "epoch": 2, "iter": 3450, "lr": 0.0002, "memory": 8423, "data_time": 0.03596, "loss_rpn_cls": 0.03676, "loss_rpn_bbox": 0.05321, "loss_cls": 0.24367, "acc": 92.1814, "loss_bbox": 0.25637, "loss_mask": 0.26374, "loss": 0.85375, "time": 0.43304} -{"mode": "train", "epoch": 2, "iter": 3500, "lr": 0.0002, "memory": 8423, "data_time": 0.04819, "loss_rpn_cls": 0.04251, "loss_rpn_bbox": 0.06094, "loss_cls": 0.26787, "acc": 91.25903, "loss_bbox": 0.28658, "loss_mask": 0.2762, "loss": 0.9341, "time": 0.48335} -{"mode": "train", "epoch": 2, "iter": 3550, "lr": 0.0002, "memory": 8423, "data_time": 0.04267, "loss_rpn_cls": 0.03983, "loss_rpn_bbox": 0.0603, "loss_cls": 0.26529, "acc": 91.43408, "loss_bbox": 0.28317, "loss_mask": 0.27313, "loss": 0.92172, "time": 0.41298} -{"mode": "train", "epoch": 2, "iter": 3600, "lr": 0.0002, "memory": 8423, "data_time": 0.04494, "loss_rpn_cls": 0.03609, "loss_rpn_bbox": 0.05629, "loss_cls": 0.2511, "acc": 91.99438, "loss_bbox": 0.26548, "loss_mask": 0.26279, "loss": 0.87175, "time": 0.40307} -{"mode": "train", "epoch": 2, "iter": 3650, "lr": 0.0002, "memory": 8423, "data_time": 0.04934, "loss_rpn_cls": 0.04221, "loss_rpn_bbox": 0.06011, 
"loss_cls": 0.2527, "acc": 91.74219, "loss_bbox": 0.27163, "loss_mask": 0.26875, "loss": 0.8954, "time": 0.39276} -{"mode": "train", "epoch": 2, "iter": 3700, "lr": 0.0002, "memory": 8423, "data_time": 0.05971, "loss_rpn_cls": 0.03747, "loss_rpn_bbox": 0.05663, "loss_cls": 0.2535, "acc": 91.78564, "loss_bbox": 0.26641, "loss_mask": 0.2648, "loss": 0.87881, "time": 0.39561} -{"mode": "train", "epoch": 2, "iter": 3750, "lr": 0.0002, "memory": 8423, "data_time": 0.05188, "loss_rpn_cls": 0.03684, "loss_rpn_bbox": 0.0579, "loss_cls": 0.25471, "acc": 91.67163, "loss_bbox": 0.27104, "loss_mask": 0.26466, "loss": 0.88514, "time": 0.39478} -{"mode": "train", "epoch": 2, "iter": 3800, "lr": 0.0002, "memory": 8423, "data_time": 0.05407, "loss_rpn_cls": 0.03864, "loss_rpn_bbox": 0.06149, "loss_cls": 0.26374, "acc": 91.48511, "loss_bbox": 0.27971, "loss_mask": 0.26977, "loss": 0.91333, "time": 0.40755} -{"mode": "train", "epoch": 2, "iter": 3850, "lr": 0.0002, "memory": 8423, "data_time": 0.05126, "loss_rpn_cls": 0.0375, "loss_rpn_bbox": 0.05785, "loss_cls": 0.25292, "acc": 91.80859, "loss_bbox": 0.27397, "loss_mask": 0.26318, "loss": 0.88541, "time": 0.40608} -{"mode": "train", "epoch": 2, "iter": 3900, "lr": 0.0002, "memory": 8423, "data_time": 0.04281, "loss_rpn_cls": 0.03914, "loss_rpn_bbox": 0.05922, "loss_cls": 0.26703, "acc": 91.46411, "loss_bbox": 0.27754, "loss_mask": 0.26763, "loss": 0.91056, "time": 0.40702} -{"mode": "train", "epoch": 2, "iter": 3950, "lr": 0.0002, "memory": 8423, "data_time": 0.05831, "loss_rpn_cls": 0.04109, "loss_rpn_bbox": 0.06, "loss_cls": 0.26845, "acc": 91.34814, "loss_bbox": 0.28801, "loss_mask": 0.27373, "loss": 0.93128, "time": 0.42477} -{"mode": "train", "epoch": 2, "iter": 4000, "lr": 0.0002, "memory": 8423, "data_time": 0.04211, "loss_rpn_cls": 0.03641, "loss_rpn_bbox": 0.05752, "loss_cls": 0.25282, "acc": 91.86523, "loss_bbox": 0.27059, "loss_mask": 0.2613, "loss": 0.87864, "time": 0.39792} -{"mode": "train", "epoch": 2, "iter": 4050, "lr": 0.0002, "memory": 8423, "data_time": 0.0472, "loss_rpn_cls": 0.03804, "loss_rpn_bbox": 0.05658, "loss_cls": 0.25065, "acc": 91.85278, "loss_bbox": 0.27067, "loss_mask": 0.26106, "loss": 0.877, "time": 0.45642} -{"mode": "train", "epoch": 2, "iter": 4100, "lr": 0.0002, "memory": 8423, "data_time": 0.05527, "loss_rpn_cls": 0.04011, "loss_rpn_bbox": 0.05719, "loss_cls": 0.24656, "acc": 92.00293, "loss_bbox": 0.26601, "loss_mask": 0.26466, "loss": 0.87453, "time": 0.4089} -{"mode": "train", "epoch": 2, "iter": 4150, "lr": 0.0002, "memory": 8423, "data_time": 0.04366, "loss_rpn_cls": 0.03604, "loss_rpn_bbox": 0.05531, "loss_cls": 0.24306, "acc": 92.22192, "loss_bbox": 0.26023, "loss_mask": 0.263, "loss": 0.85764, "time": 0.40615} -{"mode": "train", "epoch": 2, "iter": 4200, "lr": 0.0002, "memory": 8423, "data_time": 0.04578, "loss_rpn_cls": 0.03925, "loss_rpn_bbox": 0.06187, "loss_cls": 0.25892, "acc": 91.58105, "loss_bbox": 0.27929, "loss_mask": 0.27299, "loss": 0.91232, "time": 0.41566} -{"mode": "train", "epoch": 2, "iter": 4250, "lr": 0.0002, "memory": 8423, "data_time": 0.04655, "loss_rpn_cls": 0.04087, "loss_rpn_bbox": 0.05499, "loss_cls": 0.24946, "acc": 92.00317, "loss_bbox": 0.2648, "loss_mask": 0.26712, "loss": 0.87724, "time": 0.40729} -{"mode": "train", "epoch": 2, "iter": 4300, "lr": 0.0002, "memory": 8423, "data_time": 0.04896, "loss_rpn_cls": 0.04113, "loss_rpn_bbox": 0.06237, "loss_cls": 0.26834, "acc": 91.1814, "loss_bbox": 0.28907, "loss_mask": 0.2691, "loss": 0.93, "time": 0.42676} -{"mode": "train", "epoch": 
2, "iter": 4350, "lr": 0.0002, "memory": 8423, "data_time": 0.05227, "loss_rpn_cls": 0.03794, "loss_rpn_bbox": 0.05655, "loss_cls": 0.25205, "acc": 91.70386, "loss_bbox": 0.27601, "loss_mask": 0.26061, "loss": 0.88316, "time": 0.41627} -{"mode": "train", "epoch": 2, "iter": 4400, "lr": 0.0002, "memory": 8423, "data_time": 0.04503, "loss_rpn_cls": 0.03711, "loss_rpn_bbox": 0.05727, "loss_cls": 0.26258, "acc": 91.7063, "loss_bbox": 0.27624, "loss_mask": 0.26757, "loss": 0.90076, "time": 0.40676} -{"mode": "train", "epoch": 2, "iter": 4450, "lr": 0.0002, "memory": 8423, "data_time": 0.05104, "loss_rpn_cls": 0.03739, "loss_rpn_bbox": 0.05722, "loss_cls": 0.26061, "acc": 91.36938, "loss_bbox": 0.28367, "loss_mask": 0.2677, "loss": 0.90659, "time": 0.41671} -{"mode": "train", "epoch": 2, "iter": 4500, "lr": 0.0002, "memory": 8423, "data_time": 0.04834, "loss_rpn_cls": 0.03615, "loss_rpn_bbox": 0.05574, "loss_cls": 0.24009, "acc": 92.17383, "loss_bbox": 0.26244, "loss_mask": 0.26374, "loss": 0.85816, "time": 0.41023} -{"mode": "train", "epoch": 2, "iter": 4550, "lr": 0.0002, "memory": 8423, "data_time": 0.05328, "loss_rpn_cls": 0.04156, "loss_rpn_bbox": 0.06091, "loss_cls": 0.26651, "acc": 91.55835, "loss_bbox": 0.28039, "loss_mask": 0.273, "loss": 0.92238, "time": 0.41461} -{"mode": "train", "epoch": 2, "iter": 4600, "lr": 0.0002, "memory": 8423, "data_time": 0.04459, "loss_rpn_cls": 0.03947, "loss_rpn_bbox": 0.05782, "loss_cls": 0.26692, "acc": 91.47876, "loss_bbox": 0.27749, "loss_mask": 0.26979, "loss": 0.91149, "time": 0.40823} -{"mode": "train", "epoch": 2, "iter": 4650, "lr": 0.0002, "memory": 8423, "data_time": 0.0536, "loss_rpn_cls": 0.04156, "loss_rpn_bbox": 0.05708, "loss_cls": 0.2575, "acc": 91.51733, "loss_bbox": 0.27948, "loss_mask": 0.26958, "loss": 0.90519, "time": 0.40752} -{"mode": "train", "epoch": 2, "iter": 4700, "lr": 0.0002, "memory": 8423, "data_time": 0.04879, "loss_rpn_cls": 0.03735, "loss_rpn_bbox": 0.05599, "loss_cls": 0.25027, "acc": 91.94702, "loss_bbox": 0.26628, "loss_mask": 0.26515, "loss": 0.87504, "time": 0.40307} -{"mode": "train", "epoch": 2, "iter": 4750, "lr": 0.0002, "memory": 8423, "data_time": 0.04345, "loss_rpn_cls": 0.03794, "loss_rpn_bbox": 0.05528, "loss_cls": 0.25456, "acc": 91.72632, "loss_bbox": 0.27333, "loss_mask": 0.25855, "loss": 0.87967, "time": 0.40032} -{"mode": "train", "epoch": 2, "iter": 4800, "lr": 0.0002, "memory": 8423, "data_time": 0.05209, "loss_rpn_cls": 0.0371, "loss_rpn_bbox": 0.05677, "loss_cls": 0.24559, "acc": 92.06543, "loss_bbox": 0.26277, "loss_mask": 0.2621, "loss": 0.86434, "time": 0.39746} -{"mode": "train", "epoch": 2, "iter": 4850, "lr": 0.0002, "memory": 8423, "data_time": 0.04244, "loss_rpn_cls": 0.03547, "loss_rpn_bbox": 0.05643, "loss_cls": 0.25168, "acc": 91.92114, "loss_bbox": 0.26161, "loss_mask": 0.25563, "loss": 0.86081, "time": 0.39982} -{"mode": "train", "epoch": 2, "iter": 4900, "lr": 0.0002, "memory": 8423, "data_time": 0.04015, "loss_rpn_cls": 0.03719, "loss_rpn_bbox": 0.0558, "loss_cls": 0.25373, "acc": 91.64722, "loss_bbox": 0.27668, "loss_mask": 0.25975, "loss": 0.88315, "time": 0.41352} -{"mode": "train", "epoch": 2, "iter": 4950, "lr": 0.0002, "memory": 8423, "data_time": 0.04816, "loss_rpn_cls": 0.03725, "loss_rpn_bbox": 0.05453, "loss_cls": 0.2541, "acc": 91.60278, "loss_bbox": 0.2737, "loss_mask": 0.26114, "loss": 0.88073, "time": 0.40401} -{"mode": "train", "epoch": 2, "iter": 5000, "lr": 0.0002, "memory": 8423, "data_time": 0.05318, "loss_rpn_cls": 0.03605, "loss_rpn_bbox": 0.05534, "loss_cls": 
0.25494, "acc": 91.54297, "loss_bbox": 0.28395, "loss_mask": 0.25903, "loss": 0.88931, "time": 0.40878} -{"mode": "train", "epoch": 2, "iter": 5050, "lr": 0.0002, "memory": 8423, "data_time": 0.04705, "loss_rpn_cls": 0.03661, "loss_rpn_bbox": 0.05758, "loss_cls": 0.25269, "acc": 91.82959, "loss_bbox": 0.27374, "loss_mask": 0.26392, "loss": 0.88455, "time": 0.41938} -{"mode": "train", "epoch": 2, "iter": 5100, "lr": 0.0002, "memory": 8423, "data_time": 0.04678, "loss_rpn_cls": 0.04144, "loss_rpn_bbox": 0.05507, "loss_cls": 0.24898, "acc": 92.0144, "loss_bbox": 0.27104, "loss_mask": 0.26612, "loss": 0.88266, "time": 0.42078} -{"mode": "train", "epoch": 2, "iter": 5150, "lr": 0.0002, "memory": 8423, "data_time": 0.04947, "loss_rpn_cls": 0.03795, "loss_rpn_bbox": 0.05699, "loss_cls": 0.26614, "acc": 91.35278, "loss_bbox": 0.28056, "loss_mask": 0.27263, "loss": 0.91427, "time": 0.4177} -{"mode": "train", "epoch": 2, "iter": 5200, "lr": 0.0002, "memory": 8423, "data_time": 0.04457, "loss_rpn_cls": 0.03876, "loss_rpn_bbox": 0.05861, "loss_cls": 0.26083, "acc": 91.43579, "loss_bbox": 0.27406, "loss_mask": 0.26873, "loss": 0.90099, "time": 0.41172} -{"mode": "train", "epoch": 2, "iter": 5250, "lr": 0.0002, "memory": 8423, "data_time": 0.0455, "loss_rpn_cls": 0.04113, "loss_rpn_bbox": 0.06116, "loss_cls": 0.25737, "acc": 91.63135, "loss_bbox": 0.27729, "loss_mask": 0.2645, "loss": 0.90145, "time": 0.41658} -{"mode": "train", "epoch": 2, "iter": 5300, "lr": 0.0002, "memory": 8423, "data_time": 0.04346, "loss_rpn_cls": 0.0396, "loss_rpn_bbox": 0.05766, "loss_cls": 0.25002, "acc": 91.76562, "loss_bbox": 0.2731, "loss_mask": 0.2626, "loss": 0.88298, "time": 0.41176} -{"mode": "train", "epoch": 2, "iter": 5350, "lr": 0.0002, "memory": 8423, "data_time": 0.04804, "loss_rpn_cls": 0.03884, "loss_rpn_bbox": 0.05969, "loss_cls": 0.25601, "acc": 91.58496, "loss_bbox": 0.27552, "loss_mask": 0.26752, "loss": 0.8976, "time": 0.41988} -{"mode": "train", "epoch": 2, "iter": 5400, "lr": 0.0002, "memory": 8423, "data_time": 0.04609, "loss_rpn_cls": 0.03478, "loss_rpn_bbox": 0.05477, "loss_cls": 0.24997, "acc": 91.86914, "loss_bbox": 0.27008, "loss_mask": 0.26469, "loss": 0.87429, "time": 0.44917} -{"mode": "train", "epoch": 2, "iter": 5450, "lr": 0.0002, "memory": 8423, "data_time": 0.04806, "loss_rpn_cls": 0.03968, "loss_rpn_bbox": 0.05908, "loss_cls": 0.25803, "acc": 91.64819, "loss_bbox": 0.2683, "loss_mask": 0.26614, "loss": 0.89123, "time": 0.40956} -{"mode": "train", "epoch": 2, "iter": 5500, "lr": 0.0002, "memory": 8423, "data_time": 0.04189, "loss_rpn_cls": 0.03821, "loss_rpn_bbox": 0.05806, "loss_cls": 0.25932, "acc": 91.65332, "loss_bbox": 0.27662, "loss_mask": 0.26377, "loss": 0.89598, "time": 0.41417} -{"mode": "train", "epoch": 2, "iter": 5550, "lr": 0.0002, "memory": 8423, "data_time": 0.0497, "loss_rpn_cls": 0.0385, "loss_rpn_bbox": 0.05832, "loss_cls": 0.2547, "acc": 91.69824, "loss_bbox": 0.27683, "loss_mask": 0.25942, "loss": 0.88777, "time": 0.40418} -{"mode": "train", "epoch": 2, "iter": 5600, "lr": 0.0002, "memory": 8423, "data_time": 0.04915, "loss_rpn_cls": 0.0404, "loss_rpn_bbox": 0.06122, "loss_cls": 0.25969, "acc": 91.66968, "loss_bbox": 0.27318, "loss_mask": 0.26395, "loss": 0.89845, "time": 0.40228} -{"mode": "train", "epoch": 2, "iter": 5650, "lr": 0.0002, "memory": 8423, "data_time": 0.03759, "loss_rpn_cls": 0.03649, "loss_rpn_bbox": 0.05406, "loss_cls": 0.24338, "acc": 92.1311, "loss_bbox": 0.26154, "loss_mask": 0.26402, "loss": 0.85948, "time": 0.39967} -{"mode": "train", "epoch": 2, 
"iter": 5700, "lr": 0.0002, "memory": 8423, "data_time": 0.05235, "loss_rpn_cls": 0.03923, "loss_rpn_bbox": 0.05782, "loss_cls": 0.25431, "acc": 91.79175, "loss_bbox": 0.26823, "loss_mask": 0.26803, "loss": 0.88763, "time": 0.41842} -{"mode": "train", "epoch": 2, "iter": 5750, "lr": 0.0002, "memory": 8423, "data_time": 0.04213, "loss_rpn_cls": 0.03891, "loss_rpn_bbox": 0.05705, "loss_cls": 0.25519, "acc": 91.78174, "loss_bbox": 0.26846, "loss_mask": 0.2678, "loss": 0.8874, "time": 0.40473} -{"mode": "train", "epoch": 2, "iter": 5800, "lr": 0.0002, "memory": 8423, "data_time": 0.04094, "loss_rpn_cls": 0.03718, "loss_rpn_bbox": 0.05969, "loss_cls": 0.25423, "acc": 91.76465, "loss_bbox": 0.27352, "loss_mask": 0.26771, "loss": 0.89232, "time": 0.41073} -{"mode": "train", "epoch": 2, "iter": 5850, "lr": 0.0002, "memory": 8423, "data_time": 0.04826, "loss_rpn_cls": 0.03947, "loss_rpn_bbox": 0.05821, "loss_cls": 0.2496, "acc": 91.93042, "loss_bbox": 0.26413, "loss_mask": 0.26113, "loss": 0.87253, "time": 0.40764} -{"mode": "train", "epoch": 2, "iter": 5900, "lr": 0.0002, "memory": 8423, "data_time": 0.03771, "loss_rpn_cls": 0.03602, "loss_rpn_bbox": 0.05524, "loss_cls": 0.25868, "acc": 91.65479, "loss_bbox": 0.27508, "loss_mask": 0.26446, "loss": 0.88948, "time": 0.44282} -{"mode": "train", "epoch": 2, "iter": 5950, "lr": 0.0002, "memory": 8423, "data_time": 0.04393, "loss_rpn_cls": 0.037, "loss_rpn_bbox": 0.05561, "loss_cls": 0.24818, "acc": 91.97241, "loss_bbox": 0.2672, "loss_mask": 0.26593, "loss": 0.87392, "time": 0.4516} -{"mode": "train", "epoch": 2, "iter": 6000, "lr": 0.0002, "memory": 8423, "data_time": 0.04249, "loss_rpn_cls": 0.03843, "loss_rpn_bbox": 0.05747, "loss_cls": 0.25653, "acc": 91.74194, "loss_bbox": 0.26696, "loss_mask": 0.26621, "loss": 0.88559, "time": 0.42194} -{"mode": "train", "epoch": 2, "iter": 6050, "lr": 0.0002, "memory": 8423, "data_time": 0.04595, "loss_rpn_cls": 0.03525, "loss_rpn_bbox": 0.05744, "loss_cls": 0.24323, "acc": 91.97974, "loss_bbox": 0.26243, "loss_mask": 0.26717, "loss": 0.86552, "time": 0.42007} -{"mode": "train", "epoch": 2, "iter": 6100, "lr": 0.0002, "memory": 8423, "data_time": 0.04628, "loss_rpn_cls": 0.03492, "loss_rpn_bbox": 0.0534, "loss_cls": 0.25203, "acc": 91.72559, "loss_bbox": 0.26952, "loss_mask": 0.26328, "loss": 0.87315, "time": 0.40068} -{"mode": "train", "epoch": 2, "iter": 6150, "lr": 0.0002, "memory": 8423, "data_time": 0.04528, "loss_rpn_cls": 0.038, "loss_rpn_bbox": 0.059, "loss_cls": 0.2492, "acc": 91.80127, "loss_bbox": 0.27131, "loss_mask": 0.2681, "loss": 0.88561, "time": 0.39829} -{"mode": "train", "epoch": 2, "iter": 6200, "lr": 0.0002, "memory": 8423, "data_time": 0.04441, "loss_rpn_cls": 0.03991, "loss_rpn_bbox": 0.05677, "loss_cls": 0.25852, "acc": 91.52832, "loss_bbox": 0.27653, "loss_mask": 0.26363, "loss": 0.89536, "time": 0.40415} -{"mode": "train", "epoch": 2, "iter": 6250, "lr": 0.0002, "memory": 8423, "data_time": 0.05087, "loss_rpn_cls": 0.04051, "loss_rpn_bbox": 0.05943, "loss_cls": 0.25406, "acc": 91.7666, "loss_bbox": 0.27505, "loss_mask": 0.26452, "loss": 0.89356, "time": 0.42189} -{"mode": "train", "epoch": 2, "iter": 6300, "lr": 0.0002, "memory": 8423, "data_time": 0.04951, "loss_rpn_cls": 0.04019, "loss_rpn_bbox": 0.05962, "loss_cls": 0.24747, "acc": 92.05811, "loss_bbox": 0.26404, "loss_mask": 0.26527, "loss": 0.87658, "time": 0.421} -{"mode": "train", "epoch": 2, "iter": 6350, "lr": 0.0002, "memory": 8423, "data_time": 0.03819, "loss_rpn_cls": 0.03754, "loss_rpn_bbox": 0.05575, "loss_cls": 0.26097, 
"acc": 91.71704, "loss_bbox": 0.26512, "loss_mask": 0.25949, "loss": 0.87886, "time": 0.4006} -{"mode": "train", "epoch": 2, "iter": 6400, "lr": 0.0002, "memory": 8423, "data_time": 0.04438, "loss_rpn_cls": 0.03803, "loss_rpn_bbox": 0.06126, "loss_cls": 0.27057, "acc": 91.32739, "loss_bbox": 0.28165, "loss_mask": 0.27022, "loss": 0.92173, "time": 0.41307} -{"mode": "train", "epoch": 2, "iter": 6450, "lr": 0.0002, "memory": 8423, "data_time": 0.03842, "loss_rpn_cls": 0.03409, "loss_rpn_bbox": 0.05512, "loss_cls": 0.23725, "acc": 92.34009, "loss_bbox": 0.25331, "loss_mask": 0.25984, "loss": 0.83959, "time": 0.39309} -{"mode": "train", "epoch": 2, "iter": 6500, "lr": 0.0002, "memory": 8423, "data_time": 0.04193, "loss_rpn_cls": 0.03953, "loss_rpn_bbox": 0.05995, "loss_cls": 0.24905, "acc": 91.75342, "loss_bbox": 0.27152, "loss_mask": 0.26648, "loss": 0.88653, "time": 0.4046} -{"mode": "train", "epoch": 2, "iter": 6550, "lr": 0.0002, "memory": 8423, "data_time": 0.04636, "loss_rpn_cls": 0.03821, "loss_rpn_bbox": 0.0545, "loss_cls": 0.24835, "acc": 91.97266, "loss_bbox": 0.25939, "loss_mask": 0.2653, "loss": 0.86575, "time": 0.40178} -{"mode": "train", "epoch": 2, "iter": 6600, "lr": 0.0002, "memory": 8423, "data_time": 0.03831, "loss_rpn_cls": 0.03909, "loss_rpn_bbox": 0.05396, "loss_cls": 0.25096, "acc": 91.82007, "loss_bbox": 0.26822, "loss_mask": 0.2594, "loss": 0.87163, "time": 0.40786} -{"mode": "train", "epoch": 2, "iter": 6650, "lr": 0.0002, "memory": 8423, "data_time": 0.0529, "loss_rpn_cls": 0.03568, "loss_rpn_bbox": 0.05889, "loss_cls": 0.25877, "acc": 91.49683, "loss_bbox": 0.2795, "loss_mask": 0.26404, "loss": 0.89688, "time": 0.40905} -{"mode": "train", "epoch": 2, "iter": 6700, "lr": 0.0002, "memory": 8423, "data_time": 0.03884, "loss_rpn_cls": 0.03844, "loss_rpn_bbox": 0.05654, "loss_cls": 0.25635, "acc": 91.67505, "loss_bbox": 0.27707, "loss_mask": 0.26497, "loss": 0.89338, "time": 0.40691} -{"mode": "train", "epoch": 2, "iter": 6750, "lr": 0.0002, "memory": 8423, "data_time": 0.04191, "loss_rpn_cls": 0.03746, "loss_rpn_bbox": 0.05907, "loss_cls": 0.24533, "acc": 92.01978, "loss_bbox": 0.25873, "loss_mask": 0.27377, "loss": 0.87436, "time": 0.39989} -{"mode": "train", "epoch": 2, "iter": 6800, "lr": 0.0002, "memory": 8423, "data_time": 0.04213, "loss_rpn_cls": 0.03399, "loss_rpn_bbox": 0.05461, "loss_cls": 0.24808, "acc": 91.89209, "loss_bbox": 0.26238, "loss_mask": 0.26357, "loss": 0.86263, "time": 0.39806} -{"mode": "train", "epoch": 2, "iter": 6850, "lr": 0.0002, "memory": 8423, "data_time": 0.04455, "loss_rpn_cls": 0.03779, "loss_rpn_bbox": 0.05525, "loss_cls": 0.24095, "acc": 92.09961, "loss_bbox": 0.26055, "loss_mask": 0.26364, "loss": 0.85817, "time": 0.40987} -{"mode": "train", "epoch": 2, "iter": 6900, "lr": 0.0002, "memory": 8423, "data_time": 0.0423, "loss_rpn_cls": 0.03731, "loss_rpn_bbox": 0.05646, "loss_cls": 0.25689, "acc": 91.68115, "loss_bbox": 0.27275, "loss_mask": 0.26268, "loss": 0.88608, "time": 0.40227} -{"mode": "train", "epoch": 2, "iter": 6950, "lr": 0.0002, "memory": 8423, "data_time": 0.04737, "loss_rpn_cls": 0.03879, "loss_rpn_bbox": 0.05656, "loss_cls": 0.24309, "acc": 92.25391, "loss_bbox": 0.25733, "loss_mask": 0.25882, "loss": 0.8546, "time": 0.39939} -{"mode": "train", "epoch": 2, "iter": 7000, "lr": 0.0002, "memory": 8423, "data_time": 0.05021, "loss_rpn_cls": 0.03801, "loss_rpn_bbox": 0.05713, "loss_cls": 0.24753, "acc": 91.83569, "loss_bbox": 0.26827, "loss_mask": 0.262, "loss": 0.87293, "time": 0.40638} -{"mode": "train", "epoch": 2, "iter": 
7050, "lr": 0.0002, "memory": 8423, "data_time": 0.04233, "loss_rpn_cls": 0.03444, "loss_rpn_bbox": 0.05544, "loss_cls": 0.2469, "acc": 91.93604, "loss_bbox": 0.2664, "loss_mask": 0.26716, "loss": 0.87034, "time": 0.40158} -{"mode": "train", "epoch": 2, "iter": 7100, "lr": 0.0002, "memory": 8423, "data_time": 0.04558, "loss_rpn_cls": 0.03826, "loss_rpn_bbox": 0.05683, "loss_cls": 0.25428, "acc": 91.6394, "loss_bbox": 0.27485, "loss_mask": 0.26602, "loss": 0.89025, "time": 0.41139} -{"mode": "train", "epoch": 2, "iter": 7150, "lr": 0.0002, "memory": 8423, "data_time": 0.04929, "loss_rpn_cls": 0.03725, "loss_rpn_bbox": 0.0572, "loss_cls": 0.24567, "acc": 91.98047, "loss_bbox": 0.26443, "loss_mask": 0.26096, "loss": 0.86551, "time": 0.40322} -{"mode": "train", "epoch": 2, "iter": 7200, "lr": 0.0002, "memory": 8423, "data_time": 0.04793, "loss_rpn_cls": 0.03509, "loss_rpn_bbox": 0.05259, "loss_cls": 0.24419, "acc": 92.14893, "loss_bbox": 0.25895, "loss_mask": 0.26463, "loss": 0.85545, "time": 0.40431} -{"mode": "train", "epoch": 2, "iter": 7250, "lr": 0.0002, "memory": 8423, "data_time": 0.05109, "loss_rpn_cls": 0.03681, "loss_rpn_bbox": 0.05315, "loss_cls": 0.241, "acc": 92.16235, "loss_bbox": 0.26152, "loss_mask": 0.25788, "loss": 0.85036, "time": 0.39832} -{"mode": "train", "epoch": 2, "iter": 7300, "lr": 0.0002, "memory": 8423, "data_time": 0.04993, "loss_rpn_cls": 0.03938, "loss_rpn_bbox": 0.05613, "loss_cls": 0.24856, "acc": 92.15186, "loss_bbox": 0.26731, "loss_mask": 0.26132, "loss": 0.8727, "time": 0.40532} -{"mode": "val", "epoch": 2, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3174, "bbox_mAP_50": 0.5368, "bbox_mAP_75": 0.3375, "bbox_mAP_s": 0.1801, "bbox_mAP_m": 0.3552, "bbox_mAP_l": 0.4239, "bbox_mAP_copypaste": "0.3174 0.5368 0.3375 0.1801 0.3552 0.4239", "segm_mAP": 0.3133, "segm_mAP_50": 0.5122, "segm_mAP_75": 0.3317, "segm_mAP_s": 0.1416, "segm_mAP_m": 0.3471, "segm_mAP_l": 0.4662, "segm_mAP_copypaste": "0.3133 0.5122 0.3317 0.1416 0.3471 0.4662"} -{"mode": "train", "epoch": 3, "iter": 50, "lr": 0.0002, "memory": 8423, "data_time": 0.11975, "loss_rpn_cls": 0.03304, "loss_rpn_bbox": 0.05615, "loss_cls": 0.23349, "acc": 92.08594, "loss_bbox": 0.26356, "loss_mask": 0.26218, "loss": 0.84842, "time": 0.55696} -{"mode": "train", "epoch": 3, "iter": 100, "lr": 0.0002, "memory": 8423, "data_time": 0.04308, "loss_rpn_cls": 0.03458, "loss_rpn_bbox": 0.05595, "loss_cls": 0.23981, "acc": 92.09863, "loss_bbox": 0.26326, "loss_mask": 0.25538, "loss": 0.84897, "time": 0.42028} -{"mode": "train", "epoch": 3, "iter": 150, "lr": 0.0002, "memory": 8423, "data_time": 0.04163, "loss_rpn_cls": 0.03105, "loss_rpn_bbox": 0.05463, "loss_cls": 0.23249, "acc": 92.1897, "loss_bbox": 0.25542, "loss_mask": 0.24985, "loss": 0.82344, "time": 0.40728} -{"mode": "train", "epoch": 3, "iter": 200, "lr": 0.0002, "memory": 8423, "data_time": 0.04347, "loss_rpn_cls": 0.03431, "loss_rpn_bbox": 0.05616, "loss_cls": 0.24177, "acc": 91.80664, "loss_bbox": 0.26646, "loss_mask": 0.25647, "loss": 0.85516, "time": 0.41661} -{"mode": "train", "epoch": 3, "iter": 250, "lr": 0.0002, "memory": 8423, "data_time": 0.05256, "loss_rpn_cls": 0.03581, "loss_rpn_bbox": 0.05842, "loss_cls": 0.25639, "acc": 91.48096, "loss_bbox": 0.27677, "loss_mask": 0.2645, "loss": 0.89188, "time": 0.42071} -{"mode": "train", "epoch": 3, "iter": 300, "lr": 0.0002, "memory": 8423, "data_time": 0.0462, "loss_rpn_cls": 0.03133, "loss_rpn_bbox": 0.05168, "loss_cls": 0.2289, "acc": 92.34155, "loss_bbox": 0.25754, "loss_mask": 0.26217, "loss": 0.83162, 
"time": 0.40682} -{"mode": "train", "epoch": 3, "iter": 350, "lr": 0.0002, "memory": 8423, "data_time": 0.05211, "loss_rpn_cls": 0.03508, "loss_rpn_bbox": 0.05794, "loss_cls": 0.23989, "acc": 91.88452, "loss_bbox": 0.2734, "loss_mask": 0.26006, "loss": 0.86636, "time": 0.42283} -{"mode": "train", "epoch": 3, "iter": 400, "lr": 0.0002, "memory": 8423, "data_time": 0.03264, "loss_rpn_cls": 0.03174, "loss_rpn_bbox": 0.05189, "loss_cls": 0.22983, "acc": 92.22852, "loss_bbox": 0.26305, "loss_mask": 0.25753, "loss": 0.83405, "time": 0.40574} -{"mode": "train", "epoch": 3, "iter": 450, "lr": 0.0002, "memory": 8423, "data_time": 0.04736, "loss_rpn_cls": 0.03433, "loss_rpn_bbox": 0.05619, "loss_cls": 0.23946, "acc": 91.9021, "loss_bbox": 0.2696, "loss_mask": 0.25782, "loss": 0.8574, "time": 0.41597} -{"mode": "train", "epoch": 3, "iter": 500, "lr": 0.0002, "memory": 8423, "data_time": 0.04884, "loss_rpn_cls": 0.03125, "loss_rpn_bbox": 0.05421, "loss_cls": 0.23908, "acc": 91.96045, "loss_bbox": 0.26533, "loss_mask": 0.25527, "loss": 0.84515, "time": 0.42671} -{"mode": "train", "epoch": 3, "iter": 550, "lr": 0.0002, "memory": 8423, "data_time": 0.04454, "loss_rpn_cls": 0.03119, "loss_rpn_bbox": 0.05156, "loss_cls": 0.23676, "acc": 92.16748, "loss_bbox": 0.26065, "loss_mask": 0.25616, "loss": 0.83632, "time": 0.40061} -{"mode": "train", "epoch": 3, "iter": 600, "lr": 0.0002, "memory": 8423, "data_time": 0.04228, "loss_rpn_cls": 0.03058, "loss_rpn_bbox": 0.05324, "loss_cls": 0.23953, "acc": 92.08911, "loss_bbox": 0.25921, "loss_mask": 0.25234, "loss": 0.8349, "time": 0.41286} -{"mode": "train", "epoch": 3, "iter": 650, "lr": 0.0002, "memory": 8423, "data_time": 0.04734, "loss_rpn_cls": 0.03471, "loss_rpn_bbox": 0.05385, "loss_cls": 0.24311, "acc": 91.82642, "loss_bbox": 0.27344, "loss_mask": 0.26327, "loss": 0.86837, "time": 0.40607} -{"mode": "train", "epoch": 3, "iter": 700, "lr": 0.0002, "memory": 8423, "data_time": 0.04191, "loss_rpn_cls": 0.03188, "loss_rpn_bbox": 0.05381, "loss_cls": 0.23666, "acc": 92.12988, "loss_bbox": 0.25546, "loss_mask": 0.25366, "loss": 0.83147, "time": 0.40459} -{"mode": "train", "epoch": 3, "iter": 750, "lr": 0.0002, "memory": 8423, "data_time": 0.04465, "loss_rpn_cls": 0.03315, "loss_rpn_bbox": 0.05662, "loss_cls": 0.24519, "acc": 91.77856, "loss_bbox": 0.26877, "loss_mask": 0.25496, "loss": 0.8587, "time": 0.41733} -{"mode": "train", "epoch": 3, "iter": 800, "lr": 0.0002, "memory": 8423, "data_time": 0.04741, "loss_rpn_cls": 0.03304, "loss_rpn_bbox": 0.05482, "loss_cls": 0.23752, "acc": 92.09888, "loss_bbox": 0.26066, "loss_mask": 0.25523, "loss": 0.84126, "time": 0.40538} -{"mode": "train", "epoch": 3, "iter": 850, "lr": 0.0002, "memory": 8423, "data_time": 0.04963, "loss_rpn_cls": 0.03613, "loss_rpn_bbox": 0.05714, "loss_cls": 0.24149, "acc": 91.927, "loss_bbox": 0.26504, "loss_mask": 0.26299, "loss": 0.86279, "time": 0.41502} -{"mode": "train", "epoch": 3, "iter": 900, "lr": 0.0002, "memory": 8423, "data_time": 0.04559, "loss_rpn_cls": 0.03327, "loss_rpn_bbox": 0.05583, "loss_cls": 0.23835, "acc": 91.96606, "loss_bbox": 0.26521, "loss_mask": 0.2558, "loss": 0.84846, "time": 0.41338} -{"mode": "train", "epoch": 3, "iter": 950, "lr": 0.0002, "memory": 8423, "data_time": 0.04333, "loss_rpn_cls": 0.03431, "loss_rpn_bbox": 0.05165, "loss_cls": 0.23435, "acc": 92.19727, "loss_bbox": 0.25803, "loss_mask": 0.25713, "loss": 0.83547, "time": 0.41684} -{"mode": "train", "epoch": 3, "iter": 1000, "lr": 0.0002, "memory": 8423, "data_time": 0.04125, "loss_rpn_cls": 0.03444, 
"loss_rpn_bbox": 0.05378, "loss_cls": 0.23995, "acc": 91.94531, "loss_bbox": 0.26592, "loss_mask": 0.25327, "loss": 0.84736, "time": 0.40555} -{"mode": "train", "epoch": 3, "iter": 1050, "lr": 0.0002, "memory": 8423, "data_time": 0.0413, "loss_rpn_cls": 0.03509, "loss_rpn_bbox": 0.05452, "loss_cls": 0.22978, "acc": 92.35132, "loss_bbox": 0.25633, "loss_mask": 0.25298, "loss": 0.8287, "time": 0.40047} -{"mode": "train", "epoch": 3, "iter": 1100, "lr": 0.0002, "memory": 8423, "data_time": 0.04877, "loss_rpn_cls": 0.03418, "loss_rpn_bbox": 0.055, "loss_cls": 0.24134, "acc": 92.03857, "loss_bbox": 0.26123, "loss_mask": 0.24909, "loss": 0.84084, "time": 0.40409} -{"mode": "train", "epoch": 3, "iter": 1150, "lr": 0.0002, "memory": 8423, "data_time": 0.04749, "loss_rpn_cls": 0.03287, "loss_rpn_bbox": 0.05348, "loss_cls": 0.24134, "acc": 91.9978, "loss_bbox": 0.26241, "loss_mask": 0.26432, "loss": 0.85442, "time": 0.43164} -{"mode": "train", "epoch": 3, "iter": 1200, "lr": 0.0002, "memory": 8423, "data_time": 0.0495, "loss_rpn_cls": 0.03475, "loss_rpn_bbox": 0.05508, "loss_cls": 0.24285, "acc": 91.79565, "loss_bbox": 0.27643, "loss_mask": 0.26432, "loss": 0.87342, "time": 0.41162} -{"mode": "train", "epoch": 3, "iter": 1250, "lr": 0.0002, "memory": 8423, "data_time": 0.03779, "loss_rpn_cls": 0.03496, "loss_rpn_bbox": 0.05541, "loss_cls": 0.24242, "acc": 91.98608, "loss_bbox": 0.26772, "loss_mask": 0.26241, "loss": 0.86292, "time": 0.45348} -{"mode": "train", "epoch": 3, "iter": 1300, "lr": 0.0002, "memory": 8423, "data_time": 0.04453, "loss_rpn_cls": 0.03355, "loss_rpn_bbox": 0.05421, "loss_cls": 0.23375, "acc": 92.2002, "loss_bbox": 0.25851, "loss_mask": 0.25925, "loss": 0.83928, "time": 0.39562} -{"mode": "train", "epoch": 3, "iter": 1350, "lr": 0.0002, "memory": 8423, "data_time": 0.04121, "loss_rpn_cls": 0.03581, "loss_rpn_bbox": 0.05957, "loss_cls": 0.23932, "acc": 92.15137, "loss_bbox": 0.26202, "loss_mask": 0.25598, "loss": 0.85271, "time": 0.3954} -{"mode": "train", "epoch": 3, "iter": 1400, "lr": 0.0002, "memory": 8423, "data_time": 0.04632, "loss_rpn_cls": 0.03683, "loss_rpn_bbox": 0.05429, "loss_cls": 0.24103, "acc": 92.07104, "loss_bbox": 0.26244, "loss_mask": 0.2606, "loss": 0.8552, "time": 0.40116} -{"mode": "train", "epoch": 3, "iter": 1450, "lr": 0.0002, "memory": 8423, "data_time": 0.05163, "loss_rpn_cls": 0.0379, "loss_rpn_bbox": 0.05984, "loss_cls": 0.24331, "acc": 91.8833, "loss_bbox": 0.26842, "loss_mask": 0.2563, "loss": 0.86578, "time": 0.48074} -{"mode": "train", "epoch": 3, "iter": 1500, "lr": 0.0002, "memory": 8423, "data_time": 0.04446, "loss_rpn_cls": 0.03388, "loss_rpn_bbox": 0.05365, "loss_cls": 0.24319, "acc": 92.10083, "loss_bbox": 0.25905, "loss_mask": 0.25874, "loss": 0.84851, "time": 0.39873} -{"mode": "train", "epoch": 3, "iter": 1550, "lr": 0.0002, "memory": 8423, "data_time": 0.04289, "loss_rpn_cls": 0.03379, "loss_rpn_bbox": 0.05578, "loss_cls": 0.24412, "acc": 91.97949, "loss_bbox": 0.26038, "loss_mask": 0.25711, "loss": 0.85117, "time": 0.40793} -{"mode": "train", "epoch": 3, "iter": 1600, "lr": 0.0002, "memory": 8423, "data_time": 0.04731, "loss_rpn_cls": 0.03744, "loss_rpn_bbox": 0.05919, "loss_cls": 0.25311, "acc": 91.64966, "loss_bbox": 0.27654, "loss_mask": 0.25882, "loss": 0.88509, "time": 0.41025} -{"mode": "train", "epoch": 3, "iter": 1650, "lr": 0.0002, "memory": 8423, "data_time": 0.05473, "loss_rpn_cls": 0.03379, "loss_rpn_bbox": 0.05412, "loss_cls": 0.24034, "acc": 91.99634, "loss_bbox": 0.26558, "loss_mask": 0.25903, "loss": 0.85287, "time": 
0.40497} -{"mode": "train", "epoch": 3, "iter": 1700, "lr": 0.0002, "memory": 8423, "data_time": 0.04149, "loss_rpn_cls": 0.03695, "loss_rpn_bbox": 0.05566, "loss_cls": 0.24856, "acc": 91.90112, "loss_bbox": 0.26654, "loss_mask": 0.25575, "loss": 0.86347, "time": 0.41432} -{"mode": "train", "epoch": 3, "iter": 1750, "lr": 0.0002, "memory": 8423, "data_time": 0.04434, "loss_rpn_cls": 0.03455, "loss_rpn_bbox": 0.05728, "loss_cls": 0.23838, "acc": 92.19092, "loss_bbox": 0.26042, "loss_mask": 0.25914, "loss": 0.84976, "time": 0.39925} -{"mode": "train", "epoch": 3, "iter": 1800, "lr": 0.0002, "memory": 8423, "data_time": 0.04793, "loss_rpn_cls": 0.03666, "loss_rpn_bbox": 0.05622, "loss_cls": 0.23738, "acc": 92.18774, "loss_bbox": 0.25899, "loss_mask": 0.25569, "loss": 0.84495, "time": 0.41385} -{"mode": "train", "epoch": 3, "iter": 1850, "lr": 0.0002, "memory": 8423, "data_time": 0.04112, "loss_rpn_cls": 0.03351, "loss_rpn_bbox": 0.05518, "loss_cls": 0.24276, "acc": 91.76343, "loss_bbox": 0.27123, "loss_mask": 0.26539, "loss": 0.86807, "time": 0.40409} -{"mode": "train", "epoch": 3, "iter": 1900, "lr": 0.0002, "memory": 8423, "data_time": 0.04251, "loss_rpn_cls": 0.03226, "loss_rpn_bbox": 0.05576, "loss_cls": 0.23783, "acc": 92.06323, "loss_bbox": 0.26187, "loss_mask": 0.25567, "loss": 0.8434, "time": 0.39872} -{"mode": "train", "epoch": 3, "iter": 1950, "lr": 0.0002, "memory": 8423, "data_time": 0.05119, "loss_rpn_cls": 0.03249, "loss_rpn_bbox": 0.05298, "loss_cls": 0.23768, "acc": 92.22144, "loss_bbox": 0.25714, "loss_mask": 0.25377, "loss": 0.83406, "time": 0.39055} -{"mode": "train", "epoch": 3, "iter": 2000, "lr": 0.0002, "memory": 8423, "data_time": 0.04175, "loss_rpn_cls": 0.03616, "loss_rpn_bbox": 0.05604, "loss_cls": 0.23865, "acc": 92.16406, "loss_bbox": 0.25934, "loss_mask": 0.25968, "loss": 0.84986, "time": 0.49374} -{"mode": "train", "epoch": 3, "iter": 2050, "lr": 0.0002, "memory": 8423, "data_time": 0.04973, "loss_rpn_cls": 0.03392, "loss_rpn_bbox": 0.05765, "loss_cls": 0.24462, "acc": 91.82104, "loss_bbox": 0.27254, "loss_mask": 0.26002, "loss": 0.86874, "time": 0.40753} -{"mode": "train", "epoch": 3, "iter": 2100, "lr": 0.0002, "memory": 8423, "data_time": 0.04395, "loss_rpn_cls": 0.03629, "loss_rpn_bbox": 0.05954, "loss_cls": 0.23162, "acc": 92.16479, "loss_bbox": 0.26465, "loss_mask": 0.25392, "loss": 0.84602, "time": 0.41913} -{"mode": "train", "epoch": 3, "iter": 2150, "lr": 0.0002, "memory": 8423, "data_time": 0.0392, "loss_rpn_cls": 0.03314, "loss_rpn_bbox": 0.05444, "loss_cls": 0.23162, "acc": 92.19507, "loss_bbox": 0.26091, "loss_mask": 0.25911, "loss": 0.83922, "time": 0.46879} -{"mode": "train", "epoch": 3, "iter": 2200, "lr": 0.0002, "memory": 8423, "data_time": 0.04871, "loss_rpn_cls": 0.0335, "loss_rpn_bbox": 0.05626, "loss_cls": 0.24609, "acc": 91.86963, "loss_bbox": 0.26448, "loss_mask": 0.26403, "loss": 0.86437, "time": 0.40142} -{"mode": "train", "epoch": 3, "iter": 2250, "lr": 0.0002, "memory": 8423, "data_time": 0.05513, "loss_rpn_cls": 0.03434, "loss_rpn_bbox": 0.05767, "loss_cls": 0.2321, "acc": 92.23999, "loss_bbox": 0.25759, "loss_mask": 0.25451, "loss": 0.8362, "time": 0.41261} -{"mode": "train", "epoch": 3, "iter": 2300, "lr": 0.0002, "memory": 8423, "data_time": 0.05041, "loss_rpn_cls": 0.03633, "loss_rpn_bbox": 0.05313, "loss_cls": 0.23929, "acc": 92.0105, "loss_bbox": 0.26116, "loss_mask": 0.25494, "loss": 0.84484, "time": 0.39529} -{"mode": "train", "epoch": 3, "iter": 2350, "lr": 0.0002, "memory": 8423, "data_time": 0.04019, "loss_rpn_cls": 
0.03567, "loss_rpn_bbox": 0.05574, "loss_cls": 0.25095, "acc": 91.71631, "loss_bbox": 0.27653, "loss_mask": 0.26027, "loss": 0.87916, "time": 0.41092} -{"mode": "train", "epoch": 3, "iter": 2400, "lr": 0.0002, "memory": 8423, "data_time": 0.04769, "loss_rpn_cls": 0.03501, "loss_rpn_bbox": 0.05293, "loss_cls": 0.23747, "acc": 92.12451, "loss_bbox": 0.25825, "loss_mask": 0.25243, "loss": 0.8361, "time": 0.41533} -{"mode": "train", "epoch": 3, "iter": 2450, "lr": 0.0002, "memory": 8423, "data_time": 0.04066, "loss_rpn_cls": 0.03675, "loss_rpn_bbox": 0.05755, "loss_cls": 0.24316, "acc": 92.07202, "loss_bbox": 0.2647, "loss_mask": 0.25898, "loss": 0.86113, "time": 0.41704} -{"mode": "train", "epoch": 3, "iter": 2500, "lr": 0.0002, "memory": 8423, "data_time": 0.04454, "loss_rpn_cls": 0.03335, "loss_rpn_bbox": 0.05349, "loss_cls": 0.2476, "acc": 91.82739, "loss_bbox": 0.26318, "loss_mask": 0.25552, "loss": 0.85314, "time": 0.40799} -{"mode": "train", "epoch": 3, "iter": 2550, "lr": 0.0002, "memory": 8423, "data_time": 0.04255, "loss_rpn_cls": 0.03375, "loss_rpn_bbox": 0.05335, "loss_cls": 0.23065, "acc": 92.31812, "loss_bbox": 0.25642, "loss_mask": 0.25796, "loss": 0.83212, "time": 0.3962} -{"mode": "train", "epoch": 3, "iter": 2600, "lr": 0.0002, "memory": 8423, "data_time": 0.04452, "loss_rpn_cls": 0.0366, "loss_rpn_bbox": 0.05745, "loss_cls": 0.24692, "acc": 91.75342, "loss_bbox": 0.27193, "loss_mask": 0.26741, "loss": 0.88031, "time": 0.40255} -{"mode": "train", "epoch": 3, "iter": 2650, "lr": 0.0002, "memory": 8423, "data_time": 0.05105, "loss_rpn_cls": 0.03317, "loss_rpn_bbox": 0.05543, "loss_cls": 0.24343, "acc": 91.95215, "loss_bbox": 0.25877, "loss_mask": 0.2599, "loss": 0.85071, "time": 0.4059} -{"mode": "train", "epoch": 3, "iter": 2700, "lr": 0.0002, "memory": 8423, "data_time": 0.04078, "loss_rpn_cls": 0.03744, "loss_rpn_bbox": 0.05775, "loss_cls": 0.24507, "acc": 91.84937, "loss_bbox": 0.26662, "loss_mask": 0.25929, "loss": 0.86618, "time": 0.40573} -{"mode": "train", "epoch": 3, "iter": 2750, "lr": 0.0002, "memory": 8423, "data_time": 0.04512, "loss_rpn_cls": 0.03377, "loss_rpn_bbox": 0.05593, "loss_cls": 0.24732, "acc": 91.8396, "loss_bbox": 0.27367, "loss_mask": 0.25971, "loss": 0.87039, "time": 0.41166} -{"mode": "train", "epoch": 3, "iter": 2800, "lr": 0.0002, "memory": 8423, "data_time": 0.0426, "loss_rpn_cls": 0.03129, "loss_rpn_bbox": 0.05376, "loss_cls": 0.23608, "acc": 92.19214, "loss_bbox": 0.25407, "loss_mask": 0.25135, "loss": 0.82656, "time": 0.40684} -{"mode": "train", "epoch": 3, "iter": 2850, "lr": 0.0002, "memory": 8423, "data_time": 0.04497, "loss_rpn_cls": 0.03331, "loss_rpn_bbox": 0.05651, "loss_cls": 0.23713, "acc": 92.10229, "loss_bbox": 0.26088, "loss_mask": 0.26075, "loss": 0.84858, "time": 0.4092} -{"mode": "train", "epoch": 3, "iter": 2900, "lr": 0.0002, "memory": 8423, "data_time": 0.0495, "loss_rpn_cls": 0.03471, "loss_rpn_bbox": 0.05488, "loss_cls": 0.24416, "acc": 91.7981, "loss_bbox": 0.27071, "loss_mask": 0.25452, "loss": 0.85899, "time": 0.41182} -{"mode": "train", "epoch": 3, "iter": 2950, "lr": 0.0002, "memory": 8423, "data_time": 0.04439, "loss_rpn_cls": 0.03282, "loss_rpn_bbox": 0.05391, "loss_cls": 0.23242, "acc": 92.25024, "loss_bbox": 0.2548, "loss_mask": 0.25915, "loss": 0.8331, "time": 0.40045} -{"mode": "train", "epoch": 3, "iter": 3000, "lr": 0.0002, "memory": 8423, "data_time": 0.03598, "loss_rpn_cls": 0.03451, "loss_rpn_bbox": 0.05387, "loss_cls": 0.24785, "acc": 91.94629, "loss_bbox": 0.26537, "loss_mask": 0.26584, "loss": 0.86745, 
"time": 0.40736} -{"mode": "train", "epoch": 3, "iter": 3050, "lr": 0.0002, "memory": 8423, "data_time": 0.04332, "loss_rpn_cls": 0.03331, "loss_rpn_bbox": 0.05632, "loss_cls": 0.23761, "acc": 92.09644, "loss_bbox": 0.26802, "loss_mask": 0.25971, "loss": 0.85497, "time": 0.40607} -{"mode": "train", "epoch": 3, "iter": 3100, "lr": 0.0002, "memory": 8423, "data_time": 0.05229, "loss_rpn_cls": 0.03192, "loss_rpn_bbox": 0.05377, "loss_cls": 0.23096, "acc": 92.45898, "loss_bbox": 0.2534, "loss_mask": 0.2554, "loss": 0.82544, "time": 0.42067} -{"mode": "train", "epoch": 3, "iter": 3150, "lr": 0.0002, "memory": 8423, "data_time": 0.04969, "loss_rpn_cls": 0.03326, "loss_rpn_bbox": 0.05376, "loss_cls": 0.23082, "acc": 92.23315, "loss_bbox": 0.26153, "loss_mask": 0.25179, "loss": 0.83116, "time": 0.40571} -{"mode": "train", "epoch": 3, "iter": 3200, "lr": 0.0002, "memory": 8423, "data_time": 0.04571, "loss_rpn_cls": 0.03608, "loss_rpn_bbox": 0.05836, "loss_cls": 0.24417, "acc": 91.875, "loss_bbox": 0.26784, "loss_mask": 0.25961, "loss": 0.86607, "time": 0.4054} -{"mode": "train", "epoch": 3, "iter": 3250, "lr": 0.0002, "memory": 8423, "data_time": 0.04905, "loss_rpn_cls": 0.0349, "loss_rpn_bbox": 0.05463, "loss_cls": 0.23802, "acc": 92.1001, "loss_bbox": 0.25893, "loss_mask": 0.25768, "loss": 0.84415, "time": 0.40912} -{"mode": "train", "epoch": 3, "iter": 3300, "lr": 0.0002, "memory": 8423, "data_time": 0.05156, "loss_rpn_cls": 0.03266, "loss_rpn_bbox": 0.05296, "loss_cls": 0.23296, "acc": 92.31348, "loss_bbox": 0.24867, "loss_mask": 0.25445, "loss": 0.82171, "time": 0.40769} -{"mode": "train", "epoch": 3, "iter": 3350, "lr": 0.0002, "memory": 8423, "data_time": 0.05138, "loss_rpn_cls": 0.03469, "loss_rpn_bbox": 0.05797, "loss_cls": 0.24062, "acc": 91.76294, "loss_bbox": 0.26965, "loss_mask": 0.26099, "loss": 0.86394, "time": 0.40949} -{"mode": "train", "epoch": 3, "iter": 3400, "lr": 0.0002, "memory": 8423, "data_time": 0.04563, "loss_rpn_cls": 0.03134, "loss_rpn_bbox": 0.05519, "loss_cls": 0.23721, "acc": 92.1958, "loss_bbox": 0.25676, "loss_mask": 0.24879, "loss": 0.82929, "time": 0.40209} -{"mode": "train", "epoch": 3, "iter": 3450, "lr": 0.0002, "memory": 8423, "data_time": 0.0383, "loss_rpn_cls": 0.03605, "loss_rpn_bbox": 0.05574, "loss_cls": 0.24025, "acc": 91.96606, "loss_bbox": 0.25929, "loss_mask": 0.25476, "loss": 0.84608, "time": 0.40051} -{"mode": "train", "epoch": 3, "iter": 3500, "lr": 0.0002, "memory": 8423, "data_time": 0.04736, "loss_rpn_cls": 0.03707, "loss_rpn_bbox": 0.05878, "loss_cls": 0.24173, "acc": 91.97485, "loss_bbox": 0.26624, "loss_mask": 0.26385, "loss": 0.86767, "time": 0.39923} -{"mode": "train", "epoch": 3, "iter": 3550, "lr": 0.0002, "memory": 8423, "data_time": 0.04137, "loss_rpn_cls": 0.0347, "loss_rpn_bbox": 0.05467, "loss_cls": 0.23704, "acc": 92.19385, "loss_bbox": 0.25746, "loss_mask": 0.25354, "loss": 0.8374, "time": 0.40305} -{"mode": "train", "epoch": 3, "iter": 3600, "lr": 0.0002, "memory": 8423, "data_time": 0.05893, "loss_rpn_cls": 0.03287, "loss_rpn_bbox": 0.05463, "loss_cls": 0.24859, "acc": 91.73511, "loss_bbox": 0.26894, "loss_mask": 0.25374, "loss": 0.85876, "time": 0.42777} -{"mode": "train", "epoch": 3, "iter": 3650, "lr": 0.0002, "memory": 8423, "data_time": 0.04569, "loss_rpn_cls": 0.03414, "loss_rpn_bbox": 0.05217, "loss_cls": 0.24102, "acc": 92.12598, "loss_bbox": 0.25667, "loss_mask": 0.25591, "loss": 0.83991, "time": 0.40054} -{"mode": "train", "epoch": 3, "iter": 3700, "lr": 0.0002, "memory": 8423, "data_time": 0.04394, "loss_rpn_cls": 
0.03662, "loss_rpn_bbox": 0.05303, "loss_cls": 0.23706, "acc": 92.34058, "loss_bbox": 0.25233, "loss_mask": 0.25576, "loss": 0.8348, "time": 0.40324} -{"mode": "train", "epoch": 3, "iter": 3750, "lr": 0.0002, "memory": 8423, "data_time": 0.04814, "loss_rpn_cls": 0.0374, "loss_rpn_bbox": 0.0573, "loss_cls": 0.24043, "acc": 91.95752, "loss_bbox": 0.26283, "loss_mask": 0.25983, "loss": 0.8578, "time": 0.41702} -{"mode": "train", "epoch": 3, "iter": 3800, "lr": 0.0002, "memory": 8423, "data_time": 0.04813, "loss_rpn_cls": 0.03388, "loss_rpn_bbox": 0.05622, "loss_cls": 0.23266, "acc": 92.20312, "loss_bbox": 0.25711, "loss_mask": 0.25562, "loss": 0.83548, "time": 0.40153} -{"mode": "train", "epoch": 3, "iter": 3850, "lr": 0.0002, "memory": 8423, "data_time": 0.05918, "loss_rpn_cls": 0.03633, "loss_rpn_bbox": 0.05545, "loss_cls": 0.24204, "acc": 91.98511, "loss_bbox": 0.26077, "loss_mask": 0.2572, "loss": 0.85178, "time": 0.42494} -{"mode": "train", "epoch": 3, "iter": 3900, "lr": 0.0002, "memory": 8423, "data_time": 0.04665, "loss_rpn_cls": 0.03699, "loss_rpn_bbox": 0.06063, "loss_cls": 0.24515, "acc": 91.9314, "loss_bbox": 0.26732, "loss_mask": 0.26464, "loss": 0.87474, "time": 0.40395} -{"mode": "train", "epoch": 3, "iter": 3950, "lr": 0.0002, "memory": 8423, "data_time": 0.05027, "loss_rpn_cls": 0.03326, "loss_rpn_bbox": 0.05276, "loss_cls": 0.2219, "acc": 92.50708, "loss_bbox": 0.24956, "loss_mask": 0.25254, "loss": 0.81003, "time": 0.40493} -{"mode": "train", "epoch": 3, "iter": 4000, "lr": 0.0002, "memory": 8423, "data_time": 0.04673, "loss_rpn_cls": 0.03423, "loss_rpn_bbox": 0.05588, "loss_cls": 0.24538, "acc": 91.88672, "loss_bbox": 0.2677, "loss_mask": 0.26006, "loss": 0.86326, "time": 0.42353} -{"mode": "train", "epoch": 3, "iter": 4050, "lr": 0.0002, "memory": 8423, "data_time": 0.04903, "loss_rpn_cls": 0.03472, "loss_rpn_bbox": 0.05624, "loss_cls": 0.23297, "acc": 92.29346, "loss_bbox": 0.25142, "loss_mask": 0.2532, "loss": 0.82855, "time": 0.49073} -{"mode": "train", "epoch": 3, "iter": 4100, "lr": 0.0002, "memory": 8423, "data_time": 0.04912, "loss_rpn_cls": 0.03723, "loss_rpn_bbox": 0.05728, "loss_cls": 0.24345, "acc": 92.01855, "loss_bbox": 0.26789, "loss_mask": 0.26121, "loss": 0.86707, "time": 0.41086} -{"mode": "train", "epoch": 3, "iter": 4150, "lr": 0.0002, "memory": 8423, "data_time": 0.03946, "loss_rpn_cls": 0.03633, "loss_rpn_bbox": 0.05816, "loss_cls": 0.24204, "acc": 91.9729, "loss_bbox": 0.26079, "loss_mask": 0.25641, "loss": 0.85374, "time": 0.40253} -{"mode": "train", "epoch": 3, "iter": 4200, "lr": 0.0002, "memory": 8423, "data_time": 0.04893, "loss_rpn_cls": 0.03458, "loss_rpn_bbox": 0.05379, "loss_cls": 0.23241, "acc": 92.21802, "loss_bbox": 0.25453, "loss_mask": 0.25515, "loss": 0.83047, "time": 0.40822} -{"mode": "train", "epoch": 3, "iter": 4250, "lr": 0.0002, "memory": 8423, "data_time": 0.04609, "loss_rpn_cls": 0.03517, "loss_rpn_bbox": 0.05469, "loss_cls": 0.24027, "acc": 91.91797, "loss_bbox": 0.26981, "loss_mask": 0.259, "loss": 0.85895, "time": 0.40318} -{"mode": "train", "epoch": 3, "iter": 4300, "lr": 0.0002, "memory": 8423, "data_time": 0.0446, "loss_rpn_cls": 0.0335, "loss_rpn_bbox": 0.05291, "loss_cls": 0.23512, "acc": 92.21387, "loss_bbox": 0.25786, "loss_mask": 0.25294, "loss": 0.83232, "time": 0.41775} -{"mode": "train", "epoch": 3, "iter": 4350, "lr": 0.0002, "memory": 8423, "data_time": 0.04099, "loss_rpn_cls": 0.03216, "loss_rpn_bbox": 0.05203, "loss_cls": 0.2299, "acc": 92.41479, "loss_bbox": 0.24903, "loss_mask": 0.24981, "loss": 0.81293, 
"time": 0.40726} -{"mode": "train", "epoch": 3, "iter": 4400, "lr": 0.0002, "memory": 8423, "data_time": 0.04151, "loss_rpn_cls": 0.03382, "loss_rpn_bbox": 0.05636, "loss_cls": 0.23781, "acc": 92.06738, "loss_bbox": 0.25359, "loss_mask": 0.25271, "loss": 0.83429, "time": 0.39964} -{"mode": "train", "epoch": 3, "iter": 4450, "lr": 0.0002, "memory": 8423, "data_time": 0.04481, "loss_rpn_cls": 0.03316, "loss_rpn_bbox": 0.05118, "loss_cls": 0.22776, "acc": 92.39648, "loss_bbox": 0.25473, "loss_mask": 0.25424, "loss": 0.82107, "time": 0.40071} -{"mode": "train", "epoch": 3, "iter": 4500, "lr": 0.0002, "memory": 8423, "data_time": 0.04469, "loss_rpn_cls": 0.03299, "loss_rpn_bbox": 0.05103, "loss_cls": 0.23561, "acc": 92.30322, "loss_bbox": 0.25119, "loss_mask": 0.25743, "loss": 0.82826, "time": 0.40051} -{"mode": "train", "epoch": 3, "iter": 4550, "lr": 0.0002, "memory": 8423, "data_time": 0.04349, "loss_rpn_cls": 0.03108, "loss_rpn_bbox": 0.05082, "loss_cls": 0.23705, "acc": 92.15088, "loss_bbox": 0.26172, "loss_mask": 0.2524, "loss": 0.83307, "time": 0.40547} -{"mode": "train", "epoch": 3, "iter": 4600, "lr": 0.0002, "memory": 8423, "data_time": 0.04578, "loss_rpn_cls": 0.03296, "loss_rpn_bbox": 0.05571, "loss_cls": 0.23616, "acc": 92.07104, "loss_bbox": 0.26444, "loss_mask": 0.25901, "loss": 0.84828, "time": 0.41109} -{"mode": "train", "epoch": 3, "iter": 4650, "lr": 0.0002, "memory": 8423, "data_time": 0.04294, "loss_rpn_cls": 0.03568, "loss_rpn_bbox": 0.05356, "loss_cls": 0.23904, "acc": 92.16626, "loss_bbox": 0.25472, "loss_mask": 0.25692, "loss": 0.83991, "time": 0.40168} -{"mode": "train", "epoch": 3, "iter": 4700, "lr": 0.0002, "memory": 8423, "data_time": 0.04155, "loss_rpn_cls": 0.03396, "loss_rpn_bbox": 0.05641, "loss_cls": 0.24016, "acc": 92.04053, "loss_bbox": 0.26113, "loss_mask": 0.25599, "loss": 0.84766, "time": 0.40358} -{"mode": "train", "epoch": 3, "iter": 4750, "lr": 0.0002, "memory": 8423, "data_time": 0.05143, "loss_rpn_cls": 0.03528, "loss_rpn_bbox": 0.05506, "loss_cls": 0.24003, "acc": 92.04883, "loss_bbox": 0.26378, "loss_mask": 0.25386, "loss": 0.84801, "time": 0.41637} -{"mode": "train", "epoch": 3, "iter": 4800, "lr": 0.0002, "memory": 8423, "data_time": 0.04875, "loss_rpn_cls": 0.0325, "loss_rpn_bbox": 0.05103, "loss_cls": 0.23403, "acc": 92.14233, "loss_bbox": 0.25967, "loss_mask": 0.25727, "loss": 0.83451, "time": 0.41522} -{"mode": "train", "epoch": 3, "iter": 4850, "lr": 0.0002, "memory": 8423, "data_time": 0.03837, "loss_rpn_cls": 0.03225, "loss_rpn_bbox": 0.05273, "loss_cls": 0.22777, "acc": 92.51416, "loss_bbox": 0.24846, "loss_mask": 0.24923, "loss": 0.81044, "time": 0.40235} -{"mode": "train", "epoch": 3, "iter": 4900, "lr": 0.0002, "memory": 8423, "data_time": 0.05269, "loss_rpn_cls": 0.0331, "loss_rpn_bbox": 0.05439, "loss_cls": 0.23685, "acc": 92.21655, "loss_bbox": 0.25527, "loss_mask": 0.25435, "loss": 0.83396, "time": 0.41514} -{"mode": "train", "epoch": 3, "iter": 4950, "lr": 0.0002, "memory": 8423, "data_time": 0.04914, "loss_rpn_cls": 0.03377, "loss_rpn_bbox": 0.05362, "loss_cls": 0.24216, "acc": 92.09839, "loss_bbox": 0.26288, "loss_mask": 0.26323, "loss": 0.85566, "time": 0.40298} -{"mode": "train", "epoch": 3, "iter": 5000, "lr": 0.0002, "memory": 8423, "data_time": 0.03939, "loss_rpn_cls": 0.03415, "loss_rpn_bbox": 0.05438, "loss_cls": 0.24219, "acc": 92.12744, "loss_bbox": 0.25845, "loss_mask": 0.25697, "loss": 0.84614, "time": 0.39893} -{"mode": "train", "epoch": 3, "iter": 5050, "lr": 0.0002, "memory": 8423, "data_time": 0.04487, 
"loss_rpn_cls": 0.03566, "loss_rpn_bbox": 0.05503, "loss_cls": 0.23932, "acc": 92.1875, "loss_bbox": 0.25721, "loss_mask": 0.25274, "loss": 0.83995, "time": 0.42369} -{"mode": "train", "epoch": 3, "iter": 5100, "lr": 0.0002, "memory": 8423, "data_time": 0.03644, "loss_rpn_cls": 0.03384, "loss_rpn_bbox": 0.05166, "loss_cls": 0.2414, "acc": 92.11694, "loss_bbox": 0.25949, "loss_mask": 0.25924, "loss": 0.84563, "time": 0.39552} -{"mode": "train", "epoch": 3, "iter": 5150, "lr": 0.0002, "memory": 8423, "data_time": 0.03731, "loss_rpn_cls": 0.03286, "loss_rpn_bbox": 0.05387, "loss_cls": 0.23779, "acc": 92.03613, "loss_bbox": 0.25849, "loss_mask": 0.25449, "loss": 0.8375, "time": 0.39972} -{"mode": "train", "epoch": 3, "iter": 5200, "lr": 0.0002, "memory": 8423, "data_time": 0.03982, "loss_rpn_cls": 0.03121, "loss_rpn_bbox": 0.05312, "loss_cls": 0.23234, "acc": 92.2771, "loss_bbox": 0.25515, "loss_mask": 0.25627, "loss": 0.82809, "time": 0.40518} -{"mode": "train", "epoch": 3, "iter": 5250, "lr": 0.0002, "memory": 8423, "data_time": 0.03912, "loss_rpn_cls": 0.0312, "loss_rpn_bbox": 0.05261, "loss_cls": 0.23636, "acc": 92.22485, "loss_bbox": 0.25243, "loss_mask": 0.25736, "loss": 0.82996, "time": 0.39887} -{"mode": "train", "epoch": 3, "iter": 5300, "lr": 0.0002, "memory": 8423, "data_time": 0.04227, "loss_rpn_cls": 0.03503, "loss_rpn_bbox": 0.05648, "loss_cls": 0.23884, "acc": 92.17407, "loss_bbox": 0.25611, "loss_mask": 0.25287, "loss": 0.83933, "time": 0.44787} -{"mode": "train", "epoch": 3, "iter": 5350, "lr": 0.0002, "memory": 8423, "data_time": 0.04737, "loss_rpn_cls": 0.03578, "loss_rpn_bbox": 0.05557, "loss_cls": 0.24757, "acc": 91.87622, "loss_bbox": 0.26532, "loss_mask": 0.26084, "loss": 0.86509, "time": 0.39607} -{"mode": "train", "epoch": 3, "iter": 5400, "lr": 0.0002, "memory": 8423, "data_time": 0.04264, "loss_rpn_cls": 0.03494, "loss_rpn_bbox": 0.05193, "loss_cls": 0.23303, "acc": 92.30518, "loss_bbox": 0.25664, "loss_mask": 0.25277, "loss": 0.82932, "time": 0.4052} -{"mode": "train", "epoch": 3, "iter": 5450, "lr": 0.0002, "memory": 8423, "data_time": 0.03998, "loss_rpn_cls": 0.03039, "loss_rpn_bbox": 0.05086, "loss_cls": 0.22356, "acc": 92.72095, "loss_bbox": 0.24231, "loss_mask": 0.2513, "loss": 0.79843, "time": 0.39789} -{"mode": "train", "epoch": 3, "iter": 5500, "lr": 0.0002, "memory": 8423, "data_time": 0.04084, "loss_rpn_cls": 0.03172, "loss_rpn_bbox": 0.05243, "loss_cls": 0.23223, "acc": 92.46118, "loss_bbox": 0.24976, "loss_mask": 0.25614, "loss": 0.82227, "time": 0.39686} -{"mode": "train", "epoch": 3, "iter": 5550, "lr": 0.0002, "memory": 8423, "data_time": 0.04612, "loss_rpn_cls": 0.03427, "loss_rpn_bbox": 0.05203, "loss_cls": 0.23549, "acc": 92.17578, "loss_bbox": 0.26069, "loss_mask": 0.25909, "loss": 0.84157, "time": 0.40903} -{"mode": "train", "epoch": 3, "iter": 5600, "lr": 0.0002, "memory": 8423, "data_time": 0.04587, "loss_rpn_cls": 0.03638, "loss_rpn_bbox": 0.05513, "loss_cls": 0.24487, "acc": 92.08789, "loss_bbox": 0.25967, "loss_mask": 0.25282, "loss": 0.84888, "time": 0.40791} -{"mode": "train", "epoch": 3, "iter": 5650, "lr": 0.0002, "memory": 8423, "data_time": 0.04767, "loss_rpn_cls": 0.03743, "loss_rpn_bbox": 0.05532, "loss_cls": 0.24098, "acc": 92.05713, "loss_bbox": 0.25682, "loss_mask": 0.25058, "loss": 0.84114, "time": 0.40987} -{"mode": "train", "epoch": 3, "iter": 5700, "lr": 0.0002, "memory": 8423, "data_time": 0.03785, "loss_rpn_cls": 0.03452, "loss_rpn_bbox": 0.05203, "loss_cls": 0.22931, "acc": 92.38647, "loss_bbox": 0.25013, "loss_mask": 
0.25117, "loss": 0.81715, "time": 0.40832} -{"mode": "train", "epoch": 3, "iter": 5750, "lr": 0.0002, "memory": 8423, "data_time": 0.04181, "loss_rpn_cls": 0.03343, "loss_rpn_bbox": 0.05365, "loss_cls": 0.2359, "acc": 92.17261, "loss_bbox": 0.25691, "loss_mask": 0.25561, "loss": 0.8355, "time": 0.40728} -{"mode": "train", "epoch": 3, "iter": 5800, "lr": 0.0002, "memory": 8423, "data_time": 0.04845, "loss_rpn_cls": 0.04031, "loss_rpn_bbox": 0.05547, "loss_cls": 0.25512, "acc": 91.79468, "loss_bbox": 0.26751, "loss_mask": 0.25711, "loss": 0.87552, "time": 0.40857} -{"mode": "train", "epoch": 3, "iter": 5850, "lr": 0.0002, "memory": 8423, "data_time": 0.04781, "loss_rpn_cls": 0.0382, "loss_rpn_bbox": 0.05581, "loss_cls": 0.23512, "acc": 92.09106, "loss_bbox": 0.25943, "loss_mask": 0.25265, "loss": 0.84121, "time": 0.40664} -{"mode": "train", "epoch": 3, "iter": 5900, "lr": 0.0002, "memory": 8423, "data_time": 0.03971, "loss_rpn_cls": 0.03401, "loss_rpn_bbox": 0.05372, "loss_cls": 0.24411, "acc": 91.87402, "loss_bbox": 0.26466, "loss_mask": 0.25428, "loss": 0.85079, "time": 0.41234} -{"mode": "train", "epoch": 3, "iter": 5950, "lr": 0.0002, "memory": 8423, "data_time": 0.04903, "loss_rpn_cls": 0.03182, "loss_rpn_bbox": 0.05378, "loss_cls": 0.23336, "acc": 92.39355, "loss_bbox": 0.25105, "loss_mask": 0.25083, "loss": 0.82084, "time": 0.41466} -{"mode": "train", "epoch": 3, "iter": 6000, "lr": 0.0002, "memory": 8423, "data_time": 0.03984, "loss_rpn_cls": 0.03023, "loss_rpn_bbox": 0.05065, "loss_cls": 0.23284, "acc": 92.45679, "loss_bbox": 0.24623, "loss_mask": 0.25028, "loss": 0.81023, "time": 0.50143} -{"mode": "train", "epoch": 3, "iter": 6050, "lr": 0.0002, "memory": 8423, "data_time": 0.04396, "loss_rpn_cls": 0.03386, "loss_rpn_bbox": 0.05366, "loss_cls": 0.23313, "acc": 92.28223, "loss_bbox": 0.25643, "loss_mask": 0.25477, "loss": 0.83185, "time": 0.42013} -{"mode": "train", "epoch": 3, "iter": 6100, "lr": 0.0002, "memory": 8423, "data_time": 0.04267, "loss_rpn_cls": 0.03407, "loss_rpn_bbox": 0.05568, "loss_cls": 0.23502, "acc": 92.17822, "loss_bbox": 0.25735, "loss_mask": 0.24976, "loss": 0.83188, "time": 0.40661} -{"mode": "train", "epoch": 3, "iter": 6150, "lr": 0.0002, "memory": 8423, "data_time": 0.04052, "loss_rpn_cls": 0.03321, "loss_rpn_bbox": 0.05361, "loss_cls": 0.23416, "acc": 92.22241, "loss_bbox": 0.25242, "loss_mask": 0.25396, "loss": 0.82736, "time": 0.41038} -{"mode": "train", "epoch": 3, "iter": 6200, "lr": 0.0002, "memory": 8423, "data_time": 0.04652, "loss_rpn_cls": 0.03452, "loss_rpn_bbox": 0.05238, "loss_cls": 0.23939, "acc": 92.1145, "loss_bbox": 0.26169, "loss_mask": 0.24929, "loss": 0.83726, "time": 0.44371} -{"mode": "train", "epoch": 3, "iter": 6250, "lr": 0.0002, "memory": 8423, "data_time": 0.05188, "loss_rpn_cls": 0.03476, "loss_rpn_bbox": 0.05771, "loss_cls": 0.24882, "acc": 91.8147, "loss_bbox": 0.26914, "loss_mask": 0.25666, "loss": 0.86708, "time": 0.43054} -{"mode": "train", "epoch": 3, "iter": 6300, "lr": 0.0002, "memory": 8423, "data_time": 0.04668, "loss_rpn_cls": 0.03425, "loss_rpn_bbox": 0.05649, "loss_cls": 0.23751, "acc": 92.02759, "loss_bbox": 0.26093, "loss_mask": 0.26078, "loss": 0.84995, "time": 0.40977} -{"mode": "train", "epoch": 3, "iter": 6350, "lr": 0.0002, "memory": 8423, "data_time": 0.04154, "loss_rpn_cls": 0.03209, "loss_rpn_bbox": 0.04992, "loss_cls": 0.22858, "acc": 92.62573, "loss_bbox": 0.24387, "loss_mask": 0.254, "loss": 0.80845, "time": 0.39534} -{"mode": "train", "epoch": 3, "iter": 6400, "lr": 0.0002, "memory": 8423, 
"data_time": 0.04795, "loss_rpn_cls": 0.03462, "loss_rpn_bbox": 0.05466, "loss_cls": 0.22474, "acc": 92.52222, "loss_bbox": 0.25365, "loss_mask": 0.25156, "loss": 0.81922, "time": 0.41055} -{"mode": "train", "epoch": 3, "iter": 6450, "lr": 0.0002, "memory": 8423, "data_time": 0.04178, "loss_rpn_cls": 0.03238, "loss_rpn_bbox": 0.0512, "loss_cls": 0.23232, "acc": 92.34277, "loss_bbox": 0.25374, "loss_mask": 0.25345, "loss": 0.82309, "time": 0.40999} -{"mode": "train", "epoch": 3, "iter": 6500, "lr": 0.0002, "memory": 8423, "data_time": 0.05111, "loss_rpn_cls": 0.03345, "loss_rpn_bbox": 0.05264, "loss_cls": 0.24669, "acc": 91.94385, "loss_bbox": 0.25438, "loss_mask": 0.24928, "loss": 0.83644, "time": 0.43023} -{"mode": "train", "epoch": 3, "iter": 6550, "lr": 0.0002, "memory": 8423, "data_time": 0.04471, "loss_rpn_cls": 0.03206, "loss_rpn_bbox": 0.0522, "loss_cls": 0.2361, "acc": 92.18213, "loss_bbox": 0.25453, "loss_mask": 0.25506, "loss": 0.82995, "time": 0.4117} -{"mode": "train", "epoch": 3, "iter": 6600, "lr": 0.0002, "memory": 8423, "data_time": 0.03908, "loss_rpn_cls": 0.03562, "loss_rpn_bbox": 0.05356, "loss_cls": 0.23964, "acc": 92.27734, "loss_bbox": 0.25353, "loss_mask": 0.25721, "loss": 0.83955, "time": 0.40193} -{"mode": "train", "epoch": 3, "iter": 6650, "lr": 0.0002, "memory": 8423, "data_time": 0.05856, "loss_rpn_cls": 0.03436, "loss_rpn_bbox": 0.05665, "loss_cls": 0.23435, "acc": 92.29663, "loss_bbox": 0.25484, "loss_mask": 0.25499, "loss": 0.83519, "time": 0.4223} -{"mode": "train", "epoch": 3, "iter": 6700, "lr": 0.0002, "memory": 8423, "data_time": 0.05023, "loss_rpn_cls": 0.03327, "loss_rpn_bbox": 0.05538, "loss_cls": 0.23785, "acc": 92.07568, "loss_bbox": 0.26028, "loss_mask": 0.25006, "loss": 0.83685, "time": 0.40156} -{"mode": "train", "epoch": 3, "iter": 6750, "lr": 0.0002, "memory": 8423, "data_time": 0.0441, "loss_rpn_cls": 0.03229, "loss_rpn_bbox": 0.05232, "loss_cls": 0.24317, "acc": 91.93774, "loss_bbox": 0.26355, "loss_mask": 0.2544, "loss": 0.84573, "time": 0.4141} -{"mode": "train", "epoch": 3, "iter": 6800, "lr": 0.0002, "memory": 8423, "data_time": 0.04216, "loss_rpn_cls": 0.03648, "loss_rpn_bbox": 0.05725, "loss_cls": 0.24776, "acc": 91.79321, "loss_bbox": 0.26596, "loss_mask": 0.2593, "loss": 0.86675, "time": 0.40693} -{"mode": "train", "epoch": 3, "iter": 6850, "lr": 0.0002, "memory": 8423, "data_time": 0.0438, "loss_rpn_cls": 0.03506, "loss_rpn_bbox": 0.05321, "loss_cls": 0.22779, "acc": 92.51318, "loss_bbox": 0.24583, "loss_mask": 0.25095, "loss": 0.81284, "time": 0.40382} -{"mode": "train", "epoch": 3, "iter": 6900, "lr": 0.0002, "memory": 8423, "data_time": 0.05253, "loss_rpn_cls": 0.0353, "loss_rpn_bbox": 0.05419, "loss_cls": 0.24134, "acc": 92.09888, "loss_bbox": 0.25804, "loss_mask": 0.25727, "loss": 0.84613, "time": 0.40926} -{"mode": "train", "epoch": 3, "iter": 6950, "lr": 0.0002, "memory": 8423, "data_time": 0.05137, "loss_rpn_cls": 0.03142, "loss_rpn_bbox": 0.05299, "loss_cls": 0.24008, "acc": 92.08325, "loss_bbox": 0.25819, "loss_mask": 0.25531, "loss": 0.838, "time": 0.45274} -{"mode": "train", "epoch": 3, "iter": 7000, "lr": 0.0002, "memory": 8423, "data_time": 0.04723, "loss_rpn_cls": 0.03327, "loss_rpn_bbox": 0.0527, "loss_cls": 0.24579, "acc": 91.97729, "loss_bbox": 0.2624, "loss_mask": 0.25562, "loss": 0.84979, "time": 0.40335} -{"mode": "train", "epoch": 3, "iter": 7050, "lr": 0.0002, "memory": 8423, "data_time": 0.04495, "loss_rpn_cls": 0.03239, "loss_rpn_bbox": 0.05432, "loss_cls": 0.24573, "acc": 91.90723, "loss_bbox": 0.25911, 
"loss_mask": 0.25721, "loss": 0.84876, "time": 0.41136} -{"mode": "train", "epoch": 3, "iter": 7100, "lr": 0.0002, "memory": 8423, "data_time": 0.05154, "loss_rpn_cls": 0.03593, "loss_rpn_bbox": 0.05426, "loss_cls": 0.23516, "acc": 92.16772, "loss_bbox": 0.25353, "loss_mask": 0.25417, "loss": 0.83307, "time": 0.40328} -{"mode": "train", "epoch": 3, "iter": 7150, "lr": 0.0002, "memory": 8423, "data_time": 0.04214, "loss_rpn_cls": 0.03565, "loss_rpn_bbox": 0.05618, "loss_cls": 0.2321, "acc": 92.29688, "loss_bbox": 0.25466, "loss_mask": 0.25573, "loss": 0.83432, "time": 0.41617} -{"mode": "train", "epoch": 3, "iter": 7200, "lr": 0.0002, "memory": 8423, "data_time": 0.03992, "loss_rpn_cls": 0.03553, "loss_rpn_bbox": 0.05428, "loss_cls": 0.23922, "acc": 92.07031, "loss_bbox": 0.25945, "loss_mask": 0.25107, "loss": 0.83954, "time": 0.41277} -{"mode": "train", "epoch": 3, "iter": 7250, "lr": 0.0002, "memory": 8423, "data_time": 0.03997, "loss_rpn_cls": 0.0345, "loss_rpn_bbox": 0.05282, "loss_cls": 0.23095, "acc": 92.43457, "loss_bbox": 0.25199, "loss_mask": 0.25099, "loss": 0.82124, "time": 0.39983} -{"mode": "train", "epoch": 3, "iter": 7300, "lr": 0.0002, "memory": 8423, "data_time": 0.04508, "loss_rpn_cls": 0.03277, "loss_rpn_bbox": 0.05021, "loss_cls": 0.23282, "acc": 92.29712, "loss_bbox": 0.25661, "loss_mask": 0.25537, "loss": 0.82779, "time": 0.4075} -{"mode": "val", "epoch": 3, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3358, "bbox_mAP_50": 0.5515, "bbox_mAP_75": 0.3646, "bbox_mAP_s": 0.1978, "bbox_mAP_m": 0.3767, "bbox_mAP_l": 0.4318, "bbox_mAP_copypaste": "0.3358 0.5515 0.3646 0.1978 0.3767 0.4318", "segm_mAP": 0.3251, "segm_mAP_50": 0.5259, "segm_mAP_75": 0.3473, "segm_mAP_s": 0.153, "segm_mAP_m": 0.3561, "segm_mAP_l": 0.4772, "segm_mAP_copypaste": "0.3251 0.5259 0.3473 0.1530 0.3561 0.4772"} -{"mode": "train", "epoch": 4, "iter": 50, "lr": 0.0002, "memory": 8423, "data_time": 0.12779, "loss_rpn_cls": 0.0324, "loss_rpn_bbox": 0.05602, "loss_cls": 0.2363, "acc": 91.88037, "loss_bbox": 0.26499, "loss_mask": 0.24924, "loss": 0.83895, "time": 0.53655} -{"mode": "train", "epoch": 4, "iter": 100, "lr": 0.0002, "memory": 8423, "data_time": 0.04563, "loss_rpn_cls": 0.02941, "loss_rpn_bbox": 0.05139, "loss_cls": 0.22724, "acc": 92.1792, "loss_bbox": 0.26058, "loss_mask": 0.25355, "loss": 0.82217, "time": 0.49772} -{"mode": "train", "epoch": 4, "iter": 150, "lr": 0.0002, "memory": 8423, "data_time": 0.05425, "loss_rpn_cls": 0.02857, "loss_rpn_bbox": 0.05268, "loss_cls": 0.22445, "acc": 92.39453, "loss_bbox": 0.24786, "loss_mask": 0.24747, "loss": 0.80102, "time": 0.48019} -{"mode": "train", "epoch": 4, "iter": 200, "lr": 0.0002, "memory": 8423, "data_time": 0.05749, "loss_rpn_cls": 0.03068, "loss_rpn_bbox": 0.05314, "loss_cls": 0.23659, "acc": 91.98584, "loss_bbox": 0.26178, "loss_mask": 0.25475, "loss": 0.83695, "time": 0.42429} -{"mode": "train", "epoch": 4, "iter": 250, "lr": 0.0002, "memory": 8423, "data_time": 0.04229, "loss_rpn_cls": 0.03026, "loss_rpn_bbox": 0.05176, "loss_cls": 0.22299, "acc": 92.3208, "loss_bbox": 0.25291, "loss_mask": 0.24438, "loss": 0.8023, "time": 0.42269} -{"mode": "train", "epoch": 4, "iter": 300, "lr": 0.0002, "memory": 8423, "data_time": 0.04761, "loss_rpn_cls": 0.0319, "loss_rpn_bbox": 0.0523, "loss_cls": 0.22524, "acc": 92.42212, "loss_bbox": 0.25478, "loss_mask": 0.24445, "loss": 0.80867, "time": 0.42065} -{"mode": "train", "epoch": 4, "iter": 350, "lr": 0.0002, "memory": 8423, "data_time": 0.04289, "loss_rpn_cls": 0.02946, "loss_rpn_bbox": 0.05057, "loss_cls": 
0.23102, "acc": 92.26514, "loss_bbox": 0.25706, "loss_mask": 0.24856, "loss": 0.81666, "time": 0.41974} -{"mode": "train", "epoch": 4, "iter": 400, "lr": 0.0002, "memory": 8423, "data_time": 0.04658, "loss_rpn_cls": 0.03095, "loss_rpn_bbox": 0.05291, "loss_cls": 0.24029, "acc": 91.90918, "loss_bbox": 0.26132, "loss_mask": 0.2513, "loss": 0.83676, "time": 0.42701} -{"mode": "train", "epoch": 4, "iter": 450, "lr": 0.0002, "memory": 8423, "data_time": 0.04742, "loss_rpn_cls": 0.03213, "loss_rpn_bbox": 0.05517, "loss_cls": 0.23562, "acc": 91.95044, "loss_bbox": 0.26958, "loss_mask": 0.25748, "loss": 0.84998, "time": 0.42857} -{"mode": "train", "epoch": 4, "iter": 500, "lr": 0.0002, "memory": 8423, "data_time": 0.04833, "loss_rpn_cls": 0.0315, "loss_rpn_bbox": 0.05183, "loss_cls": 0.236, "acc": 92.10718, "loss_bbox": 0.25934, "loss_mask": 0.24842, "loss": 0.8271, "time": 0.42348} -{"mode": "train", "epoch": 4, "iter": 550, "lr": 0.0002, "memory": 8423, "data_time": 0.04658, "loss_rpn_cls": 0.03122, "loss_rpn_bbox": 0.054, "loss_cls": 0.22466, "acc": 92.25562, "loss_bbox": 0.25397, "loss_mask": 0.25651, "loss": 0.82037, "time": 0.4053} -{"mode": "train", "epoch": 4, "iter": 600, "lr": 0.0002, "memory": 8423, "data_time": 0.05083, "loss_rpn_cls": 0.03151, "loss_rpn_bbox": 0.05266, "loss_cls": 0.2256, "acc": 92.47998, "loss_bbox": 0.25359, "loss_mask": 0.25486, "loss": 0.81822, "time": 0.40313} -{"mode": "train", "epoch": 4, "iter": 650, "lr": 0.0002, "memory": 8423, "data_time": 0.04503, "loss_rpn_cls": 0.03022, "loss_rpn_bbox": 0.04998, "loss_cls": 0.22034, "acc": 92.67041, "loss_bbox": 0.24905, "loss_mask": 0.24423, "loss": 0.79382, "time": 0.39867} -{"mode": "train", "epoch": 4, "iter": 700, "lr": 0.0002, "memory": 8423, "data_time": 0.04144, "loss_rpn_cls": 0.03107, "loss_rpn_bbox": 0.05228, "loss_cls": 0.21553, "acc": 92.58911, "loss_bbox": 0.25143, "loss_mask": 0.25013, "loss": 0.80044, "time": 0.40403} -{"mode": "train", "epoch": 4, "iter": 750, "lr": 0.0002, "memory": 8423, "data_time": 0.04013, "loss_rpn_cls": 0.02718, "loss_rpn_bbox": 0.0507, "loss_cls": 0.22182, "acc": 92.39917, "loss_bbox": 0.24947, "loss_mask": 0.24511, "loss": 0.79428, "time": 0.40644} -{"mode": "train", "epoch": 4, "iter": 800, "lr": 0.0002, "memory": 8423, "data_time": 0.04153, "loss_rpn_cls": 0.03176, "loss_rpn_bbox": 0.05505, "loss_cls": 0.22884, "acc": 92.26733, "loss_bbox": 0.25307, "loss_mask": 0.24799, "loss": 0.81671, "time": 0.40701} -{"mode": "train", "epoch": 4, "iter": 850, "lr": 0.0002, "memory": 8423, "data_time": 0.04095, "loss_rpn_cls": 0.03005, "loss_rpn_bbox": 0.05315, "loss_cls": 0.22729, "acc": 92.31909, "loss_bbox": 0.25249, "loss_mask": 0.2456, "loss": 0.80857, "time": 0.39267} -{"mode": "train", "epoch": 4, "iter": 900, "lr": 0.0002, "memory": 8423, "data_time": 0.04607, "loss_rpn_cls": 0.03006, "loss_rpn_bbox": 0.05508, "loss_cls": 0.23091, "acc": 92.2063, "loss_bbox": 0.25711, "loss_mask": 0.25343, "loss": 0.82658, "time": 0.41831} -{"mode": "train", "epoch": 4, "iter": 950, "lr": 0.0002, "memory": 8423, "data_time": 0.0402, "loss_rpn_cls": 0.02957, "loss_rpn_bbox": 0.04907, "loss_cls": 0.22268, "acc": 92.56128, "loss_bbox": 0.24073, "loss_mask": 0.25529, "loss": 0.79733, "time": 0.38897} -{"mode": "train", "epoch": 4, "iter": 1000, "lr": 0.0002, "memory": 8423, "data_time": 0.04162, "loss_rpn_cls": 0.03177, "loss_rpn_bbox": 0.05369, "loss_cls": 0.21932, "acc": 92.50586, "loss_bbox": 0.25305, "loss_mask": 0.25158, "loss": 0.80942, "time": 0.40341} -{"mode": "train", "epoch": 4, "iter": 
1050, "lr": 0.0002, "memory": 8423, "data_time": 0.0528, "loss_rpn_cls": 0.03141, "loss_rpn_bbox": 0.05368, "loss_cls": 0.22877, "acc": 92.30762, "loss_bbox": 0.25718, "loss_mask": 0.24683, "loss": 0.81787, "time": 0.4085} -{"mode": "train", "epoch": 4, "iter": 1100, "lr": 0.0002, "memory": 8423, "data_time": 0.04368, "loss_rpn_cls": 0.03046, "loss_rpn_bbox": 0.0575, "loss_cls": 0.23086, "acc": 92.23291, "loss_bbox": 0.25391, "loss_mask": 0.25348, "loss": 0.82621, "time": 0.41444} -{"mode": "train", "epoch": 4, "iter": 1150, "lr": 0.0002, "memory": 8423, "data_time": 0.04178, "loss_rpn_cls": 0.0321, "loss_rpn_bbox": 0.05387, "loss_cls": 0.23579, "acc": 92.11035, "loss_bbox": 0.26198, "loss_mask": 0.25461, "loss": 0.83834, "time": 0.41033} -{"mode": "train", "epoch": 4, "iter": 1200, "lr": 0.0002, "memory": 8423, "data_time": 0.04689, "loss_rpn_cls": 0.03448, "loss_rpn_bbox": 0.05499, "loss_cls": 0.22202, "acc": 92.51123, "loss_bbox": 0.25365, "loss_mask": 0.24586, "loss": 0.811, "time": 0.42198} -{"mode": "train", "epoch": 4, "iter": 1250, "lr": 0.0002, "memory": 8423, "data_time": 0.04743, "loss_rpn_cls": 0.03261, "loss_rpn_bbox": 0.05366, "loss_cls": 0.22598, "acc": 92.33862, "loss_bbox": 0.25608, "loss_mask": 0.24801, "loss": 0.81633, "time": 0.41302} -{"mode": "train", "epoch": 4, "iter": 1300, "lr": 0.0002, "memory": 8423, "data_time": 0.04639, "loss_rpn_cls": 0.03297, "loss_rpn_bbox": 0.05323, "loss_cls": 0.22208, "acc": 92.59033, "loss_bbox": 0.24452, "loss_mask": 0.25259, "loss": 0.80538, "time": 0.44836} -{"mode": "train", "epoch": 4, "iter": 1350, "lr": 0.0002, "memory": 8423, "data_time": 0.04541, "loss_rpn_cls": 0.03574, "loss_rpn_bbox": 0.05595, "loss_cls": 0.23319, "acc": 92.20239, "loss_bbox": 0.25846, "loss_mask": 0.25466, "loss": 0.838, "time": 0.40151} -{"mode": "train", "epoch": 4, "iter": 1400, "lr": 0.0002, "memory": 8423, "data_time": 0.04935, "loss_rpn_cls": 0.03476, "loss_rpn_bbox": 0.05633, "loss_cls": 0.233, "acc": 92.09839, "loss_bbox": 0.25881, "loss_mask": 0.25035, "loss": 0.83325, "time": 0.40464} -{"mode": "train", "epoch": 4, "iter": 1450, "lr": 0.0002, "memory": 8423, "data_time": 0.04077, "loss_rpn_cls": 0.03382, "loss_rpn_bbox": 0.05522, "loss_cls": 0.2383, "acc": 91.99561, "loss_bbox": 0.26034, "loss_mask": 0.2553, "loss": 0.84299, "time": 0.41679} -{"mode": "train", "epoch": 4, "iter": 1500, "lr": 0.0002, "memory": 8423, "data_time": 0.04089, "loss_rpn_cls": 0.03511, "loss_rpn_bbox": 0.05532, "loss_cls": 0.23465, "acc": 92.13159, "loss_bbox": 0.25733, "loss_mask": 0.24837, "loss": 0.83078, "time": 0.40354} -{"mode": "train", "epoch": 4, "iter": 1550, "lr": 0.0002, "memory": 8423, "data_time": 0.03816, "loss_rpn_cls": 0.02956, "loss_rpn_bbox": 0.04966, "loss_cls": 0.22077, "acc": 92.75171, "loss_bbox": 0.24028, "loss_mask": 0.24854, "loss": 0.78882, "time": 0.39288} -{"mode": "train", "epoch": 4, "iter": 1600, "lr": 0.0002, "memory": 8423, "data_time": 0.03912, "loss_rpn_cls": 0.03159, "loss_rpn_bbox": 0.05243, "loss_cls": 0.2285, "acc": 92.51147, "loss_bbox": 0.24533, "loss_mask": 0.24673, "loss": 0.80457, "time": 0.40325} -{"mode": "train", "epoch": 4, "iter": 1650, "lr": 0.0002, "memory": 8423, "data_time": 0.03945, "loss_rpn_cls": 0.03025, "loss_rpn_bbox": 0.05073, "loss_cls": 0.22991, "acc": 92.22437, "loss_bbox": 0.25207, "loss_mask": 0.25029, "loss": 0.81324, "time": 0.41103} -{"mode": "train", "epoch": 4, "iter": 1700, "lr": 0.0002, "memory": 8423, "data_time": 0.04471, "loss_rpn_cls": 0.03263, "loss_rpn_bbox": 0.05487, "loss_cls": 0.23662, 
"acc": 92.05688, "loss_bbox": 0.25938, "loss_mask": 0.25095, "loss": 0.83445, "time": 0.40602} -{"mode": "train", "epoch": 4, "iter": 1750, "lr": 0.0002, "memory": 8423, "data_time": 0.04932, "loss_rpn_cls": 0.03176, "loss_rpn_bbox": 0.05272, "loss_cls": 0.22985, "acc": 92.27368, "loss_bbox": 0.2525, "loss_mask": 0.24931, "loss": 0.81614, "time": 0.3916} -{"mode": "train", "epoch": 4, "iter": 1800, "lr": 0.0002, "memory": 8423, "data_time": 0.04085, "loss_rpn_cls": 0.03164, "loss_rpn_bbox": 0.05044, "loss_cls": 0.22073, "acc": 92.6333, "loss_bbox": 0.24425, "loss_mask": 0.24654, "loss": 0.7936, "time": 0.41428} -{"mode": "train", "epoch": 4, "iter": 1850, "lr": 0.0002, "memory": 8423, "data_time": 0.04694, "loss_rpn_cls": 0.03309, "loss_rpn_bbox": 0.05575, "loss_cls": 0.23387, "acc": 92.18359, "loss_bbox": 0.25713, "loss_mask": 0.2499, "loss": 0.82974, "time": 0.41618} -{"mode": "train", "epoch": 4, "iter": 1900, "lr": 0.0002, "memory": 8423, "data_time": 0.04033, "loss_rpn_cls": 0.03006, "loss_rpn_bbox": 0.05166, "loss_cls": 0.23487, "acc": 92.18945, "loss_bbox": 0.25026, "loss_mask": 0.25171, "loss": 0.81857, "time": 0.40658} -{"mode": "train", "epoch": 4, "iter": 1950, "lr": 0.0002, "memory": 8423, "data_time": 0.04949, "loss_rpn_cls": 0.03053, "loss_rpn_bbox": 0.05456, "loss_cls": 0.22113, "acc": 92.60107, "loss_bbox": 0.24465, "loss_mask": 0.25029, "loss": 0.80116, "time": 0.39986} -{"mode": "train", "epoch": 4, "iter": 2000, "lr": 0.0002, "memory": 8423, "data_time": 0.04577, "loss_rpn_cls": 0.03187, "loss_rpn_bbox": 0.05468, "loss_cls": 0.22228, "acc": 92.48267, "loss_bbox": 0.25343, "loss_mask": 0.2558, "loss": 0.81806, "time": 0.40621} -{"mode": "train", "epoch": 4, "iter": 2050, "lr": 0.0002, "memory": 8423, "data_time": 0.03953, "loss_rpn_cls": 0.03305, "loss_rpn_bbox": 0.05219, "loss_cls": 0.22056, "acc": 92.56958, "loss_bbox": 0.24513, "loss_mask": 0.24752, "loss": 0.79845, "time": 0.46079} -{"mode": "train", "epoch": 4, "iter": 2100, "lr": 0.0002, "memory": 8423, "data_time": 0.03798, "loss_rpn_cls": 0.02982, "loss_rpn_bbox": 0.05118, "loss_cls": 0.22239, "acc": 92.53394, "loss_bbox": 0.24661, "loss_mask": 0.25029, "loss": 0.8003, "time": 0.3959} -{"mode": "train", "epoch": 4, "iter": 2150, "lr": 0.0002, "memory": 8423, "data_time": 0.03882, "loss_rpn_cls": 0.02977, "loss_rpn_bbox": 0.05149, "loss_cls": 0.22125, "acc": 92.35254, "loss_bbox": 0.25595, "loss_mask": 0.25011, "loss": 0.80856, "time": 0.40812} -{"mode": "train", "epoch": 4, "iter": 2200, "lr": 0.0002, "memory": 8423, "data_time": 0.04088, "loss_rpn_cls": 0.03262, "loss_rpn_bbox": 0.05471, "loss_cls": 0.22817, "acc": 92.18188, "loss_bbox": 0.25288, "loss_mask": 0.24774, "loss": 0.81612, "time": 0.41147} -{"mode": "train", "epoch": 4, "iter": 2250, "lr": 0.0002, "memory": 8423, "data_time": 0.04247, "loss_rpn_cls": 0.03235, "loss_rpn_bbox": 0.05316, "loss_cls": 0.22976, "acc": 92.15381, "loss_bbox": 0.25379, "loss_mask": 0.25415, "loss": 0.82322, "time": 0.40687} -{"mode": "train", "epoch": 4, "iter": 2300, "lr": 0.0002, "memory": 8423, "data_time": 0.04565, "loss_rpn_cls": 0.0342, "loss_rpn_bbox": 0.052, "loss_cls": 0.23335, "acc": 92.06689, "loss_bbox": 0.25273, "loss_mask": 0.25183, "loss": 0.82412, "time": 0.40578} -{"mode": "train", "epoch": 4, "iter": 2350, "lr": 0.0002, "memory": 8423, "data_time": 0.04499, "loss_rpn_cls": 0.03034, "loss_rpn_bbox": 0.05029, "loss_cls": 0.22482, "acc": 92.42578, "loss_bbox": 0.25028, "loss_mask": 0.24638, "loss": 0.8021, "time": 0.40354} -{"mode": "train", "epoch": 4, "iter": 
2400, "lr": 0.0002, "memory": 8423, "data_time": 0.04749, "loss_rpn_cls": 0.03204, "loss_rpn_bbox": 0.05266, "loss_cls": 0.23412, "acc": 92.10156, "loss_bbox": 0.25999, "loss_mask": 0.2537, "loss": 0.8325, "time": 0.42191} -{"mode": "train", "epoch": 4, "iter": 2450, "lr": 0.0002, "memory": 8423, "data_time": 0.05318, "loss_rpn_cls": 0.02941, "loss_rpn_bbox": 0.05038, "loss_cls": 0.21973, "acc": 92.7356, "loss_bbox": 0.2408, "loss_mask": 0.24577, "loss": 0.78609, "time": 0.41588} -{"mode": "train", "epoch": 4, "iter": 2500, "lr": 0.0002, "memory": 8423, "data_time": 0.04473, "loss_rpn_cls": 0.03426, "loss_rpn_bbox": 0.05839, "loss_cls": 0.24534, "acc": 91.85645, "loss_bbox": 0.27176, "loss_mask": 0.25618, "loss": 0.86593, "time": 0.40869} -{"mode": "train", "epoch": 4, "iter": 2550, "lr": 0.0002, "memory": 8423, "data_time": 0.04698, "loss_rpn_cls": 0.03087, "loss_rpn_bbox": 0.051, "loss_cls": 0.23128, "acc": 92.198, "loss_bbox": 0.26053, "loss_mask": 0.25133, "loss": 0.82502, "time": 0.4171} -{"mode": "train", "epoch": 4, "iter": 2600, "lr": 0.0002, "memory": 8423, "data_time": 0.04492, "loss_rpn_cls": 0.03184, "loss_rpn_bbox": 0.05148, "loss_cls": 0.2253, "acc": 92.41699, "loss_bbox": 0.24949, "loss_mask": 0.24618, "loss": 0.80429, "time": 0.39338} -{"mode": "train", "epoch": 4, "iter": 2650, "lr": 0.0002, "memory": 8423, "data_time": 0.04252, "loss_rpn_cls": 0.02871, "loss_rpn_bbox": 0.05168, "loss_cls": 0.22084, "acc": 92.44702, "loss_bbox": 0.25004, "loss_mask": 0.25158, "loss": 0.80285, "time": 0.41377} -{"mode": "train", "epoch": 4, "iter": 2700, "lr": 0.0002, "memory": 8423, "data_time": 0.05311, "loss_rpn_cls": 0.03621, "loss_rpn_bbox": 0.05456, "loss_cls": 0.23771, "acc": 92.08911, "loss_bbox": 0.25837, "loss_mask": 0.25375, "loss": 0.84059, "time": 0.43247} -{"mode": "train", "epoch": 4, "iter": 2750, "lr": 0.0002, "memory": 8423, "data_time": 0.04328, "loss_rpn_cls": 0.03305, "loss_rpn_bbox": 0.05648, "loss_cls": 0.23604, "acc": 92.01782, "loss_bbox": 0.26291, "loss_mask": 0.25546, "loss": 0.84393, "time": 0.41794} -{"mode": "train", "epoch": 4, "iter": 2800, "lr": 0.0002, "memory": 8423, "data_time": 0.04734, "loss_rpn_cls": 0.03169, "loss_rpn_bbox": 0.05372, "loss_cls": 0.22997, "acc": 92.37427, "loss_bbox": 0.25294, "loss_mask": 0.25067, "loss": 0.81901, "time": 0.40609} -{"mode": "train", "epoch": 4, "iter": 2850, "lr": 0.0002, "memory": 8423, "data_time": 0.04001, "loss_rpn_cls": 0.02941, "loss_rpn_bbox": 0.05065, "loss_cls": 0.21491, "acc": 92.69531, "loss_bbox": 0.24234, "loss_mask": 0.25511, "loss": 0.79242, "time": 0.40343} -{"mode": "train", "epoch": 4, "iter": 2900, "lr": 0.0002, "memory": 8423, "data_time": 0.03651, "loss_rpn_cls": 0.0319, "loss_rpn_bbox": 0.05381, "loss_cls": 0.2373, "acc": 92.16797, "loss_bbox": 0.25277, "loss_mask": 0.2456, "loss": 0.82139, "time": 0.40542} -{"mode": "train", "epoch": 4, "iter": 2950, "lr": 0.0002, "memory": 8423, "data_time": 0.04237, "loss_rpn_cls": 0.02939, "loss_rpn_bbox": 0.05005, "loss_cls": 0.22997, "acc": 92.3313, "loss_bbox": 0.25579, "loss_mask": 0.24906, "loss": 0.81427, "time": 0.40159} -{"mode": "train", "epoch": 4, "iter": 3000, "lr": 0.0002, "memory": 8423, "data_time": 0.05031, "loss_rpn_cls": 0.03346, "loss_rpn_bbox": 0.05137, "loss_cls": 0.23235, "acc": 92.26538, "loss_bbox": 0.25012, "loss_mask": 0.24737, "loss": 0.81467, "time": 0.40867} -{"mode": "train", "epoch": 4, "iter": 3050, "lr": 0.0002, "memory": 8423, "data_time": 0.04665, "loss_rpn_cls": 0.0325, "loss_rpn_bbox": 0.05334, "loss_cls": 0.22793, "acc": 
92.19336, "loss_bbox": 0.25424, "loss_mask": 0.24814, "loss": 0.81614, "time": 0.40007} -{"mode": "train", "epoch": 4, "iter": 3100, "lr": 0.0002, "memory": 8423, "data_time": 0.04774, "loss_rpn_cls": 0.03295, "loss_rpn_bbox": 0.05267, "loss_cls": 0.22934, "acc": 92.28369, "loss_bbox": 0.24996, "loss_mask": 0.24779, "loss": 0.81271, "time": 0.41731} -{"mode": "train", "epoch": 4, "iter": 3150, "lr": 0.0002, "memory": 8423, "data_time": 0.04087, "loss_rpn_cls": 0.03137, "loss_rpn_bbox": 0.05378, "loss_cls": 0.22179, "acc": 92.59204, "loss_bbox": 0.24705, "loss_mask": 0.2507, "loss": 0.80469, "time": 0.39254} -{"mode": "train", "epoch": 4, "iter": 3200, "lr": 0.0002, "memory": 8423, "data_time": 0.04019, "loss_rpn_cls": 0.0306, "loss_rpn_bbox": 0.0516, "loss_cls": 0.22149, "acc": 92.48047, "loss_bbox": 0.24458, "loss_mask": 0.24644, "loss": 0.79471, "time": 0.39667} -{"mode": "train", "epoch": 4, "iter": 3250, "lr": 0.0002, "memory": 8423, "data_time": 0.05421, "loss_rpn_cls": 0.03296, "loss_rpn_bbox": 0.05363, "loss_cls": 0.23343, "acc": 92.25391, "loss_bbox": 0.25444, "loss_mask": 0.24817, "loss": 0.82263, "time": 0.41725} -{"mode": "train", "epoch": 4, "iter": 3300, "lr": 0.0002, "memory": 8423, "data_time": 0.04247, "loss_rpn_cls": 0.03436, "loss_rpn_bbox": 0.05311, "loss_cls": 0.23712, "acc": 91.9895, "loss_bbox": 0.26022, "loss_mask": 0.26169, "loss": 0.84651, "time": 0.40061} -{"mode": "train", "epoch": 4, "iter": 3350, "lr": 0.0002, "memory": 8423, "data_time": 0.04711, "loss_rpn_cls": 0.03087, "loss_rpn_bbox": 0.05382, "loss_cls": 0.23219, "acc": 92.21802, "loss_bbox": 0.25412, "loss_mask": 0.25841, "loss": 0.82941, "time": 0.40501} -{"mode": "train", "epoch": 4, "iter": 3400, "lr": 0.0002, "memory": 8423, "data_time": 0.05172, "loss_rpn_cls": 0.02802, "loss_rpn_bbox": 0.05078, "loss_cls": 0.22527, "acc": 92.38574, "loss_bbox": 0.24917, "loss_mask": 0.25414, "loss": 0.80737, "time": 0.40183} -{"mode": "train", "epoch": 4, "iter": 3450, "lr": 0.0002, "memory": 8423, "data_time": 0.04756, "loss_rpn_cls": 0.03151, "loss_rpn_bbox": 0.05152, "loss_cls": 0.22551, "acc": 92.54419, "loss_bbox": 0.24507, "loss_mask": 0.24861, "loss": 0.80222, "time": 0.41093} -{"mode": "train", "epoch": 4, "iter": 3500, "lr": 0.0002, "memory": 8423, "data_time": 0.046, "loss_rpn_cls": 0.03038, "loss_rpn_bbox": 0.05042, "loss_cls": 0.2208, "acc": 92.59692, "loss_bbox": 0.24581, "loss_mask": 0.24999, "loss": 0.79738, "time": 0.40039} -{"mode": "train", "epoch": 4, "iter": 3550, "lr": 0.0002, "memory": 8423, "data_time": 0.04748, "loss_rpn_cls": 0.03599, "loss_rpn_bbox": 0.05199, "loss_cls": 0.22665, "acc": 92.2793, "loss_bbox": 0.25708, "loss_mask": 0.25173, "loss": 0.82345, "time": 0.41571} -{"mode": "train", "epoch": 4, "iter": 3600, "lr": 0.0002, "memory": 8423, "data_time": 0.04416, "loss_rpn_cls": 0.02924, "loss_rpn_bbox": 0.05045, "loss_cls": 0.22906, "acc": 92.32007, "loss_bbox": 0.24801, "loss_mask": 0.24798, "loss": 0.80474, "time": 0.40752} -{"mode": "train", "epoch": 4, "iter": 3650, "lr": 0.0002, "memory": 8423, "data_time": 0.04195, "loss_rpn_cls": 0.03096, "loss_rpn_bbox": 0.05191, "loss_cls": 0.23107, "acc": 92.3916, "loss_bbox": 0.2516, "loss_mask": 0.24712, "loss": 0.81266, "time": 0.47223} -{"mode": "train", "epoch": 4, "iter": 3700, "lr": 0.0002, "memory": 8423, "data_time": 0.05044, "loss_rpn_cls": 0.03201, "loss_rpn_bbox": 0.05448, "loss_cls": 0.24252, "acc": 91.927, "loss_bbox": 0.26287, "loss_mask": 0.25322, "loss": 0.84511, "time": 0.43197} -{"mode": "train", "epoch": 4, "iter": 3750, 
"lr": 0.0002, "memory": 8423, "data_time": 0.04096, "loss_rpn_cls": 0.03006, "loss_rpn_bbox": 0.04878, "loss_cls": 0.21943, "acc": 92.59155, "loss_bbox": 0.24121, "loss_mask": 0.24286, "loss": 0.78235, "time": 0.40083} -{"mode": "train", "epoch": 4, "iter": 3800, "lr": 0.0002, "memory": 8423, "data_time": 0.04339, "loss_rpn_cls": 0.03381, "loss_rpn_bbox": 0.05429, "loss_cls": 0.22847, "acc": 92.24268, "loss_bbox": 0.25531, "loss_mask": 0.25398, "loss": 0.82586, "time": 0.40819} -{"mode": "train", "epoch": 4, "iter": 3850, "lr": 0.0002, "memory": 8423, "data_time": 0.04365, "loss_rpn_cls": 0.03023, "loss_rpn_bbox": 0.04989, "loss_cls": 0.22256, "acc": 92.52734, "loss_bbox": 0.24378, "loss_mask": 0.25453, "loss": 0.80099, "time": 0.401} -{"mode": "train", "epoch": 4, "iter": 3900, "lr": 0.0002, "memory": 8423, "data_time": 0.03877, "loss_rpn_cls": 0.03312, "loss_rpn_bbox": 0.05383, "loss_cls": 0.23257, "acc": 92.31274, "loss_bbox": 0.2587, "loss_mask": 0.25248, "loss": 0.8307, "time": 0.40874} -{"mode": "train", "epoch": 4, "iter": 3950, "lr": 0.0002, "memory": 8423, "data_time": 0.04713, "loss_rpn_cls": 0.03516, "loss_rpn_bbox": 0.05491, "loss_cls": 0.23557, "acc": 92.0979, "loss_bbox": 0.2538, "loss_mask": 0.24633, "loss": 0.82576, "time": 0.40788} -{"mode": "train", "epoch": 4, "iter": 4000, "lr": 0.0002, "memory": 8423, "data_time": 0.04336, "loss_rpn_cls": 0.02997, "loss_rpn_bbox": 0.05221, "loss_cls": 0.23325, "acc": 92.35571, "loss_bbox": 0.25164, "loss_mask": 0.24414, "loss": 0.8112, "time": 0.46574} -{"mode": "train", "epoch": 4, "iter": 4050, "lr": 0.0002, "memory": 8423, "data_time": 0.04225, "loss_rpn_cls": 0.03074, "loss_rpn_bbox": 0.04932, "loss_cls": 0.23234, "acc": 92.32788, "loss_bbox": 0.25259, "loss_mask": 0.24453, "loss": 0.80952, "time": 0.47754} -{"mode": "train", "epoch": 4, "iter": 4100, "lr": 0.0002, "memory": 8423, "data_time": 0.04779, "loss_rpn_cls": 0.03249, "loss_rpn_bbox": 0.05286, "loss_cls": 0.24311, "acc": 91.99023, "loss_bbox": 0.26451, "loss_mask": 0.2462, "loss": 0.83918, "time": 0.45185} -{"mode": "train", "epoch": 4, "iter": 4150, "lr": 0.0002, "memory": 8423, "data_time": 0.03386, "loss_rpn_cls": 0.03161, "loss_rpn_bbox": 0.05087, "loss_cls": 0.23208, "acc": 92.20923, "loss_bbox": 0.25288, "loss_mask": 0.24525, "loss": 0.81269, "time": 0.39826} -{"mode": "train", "epoch": 4, "iter": 4200, "lr": 0.0002, "memory": 8423, "data_time": 0.04024, "loss_rpn_cls": 0.03255, "loss_rpn_bbox": 0.05278, "loss_cls": 0.22706, "acc": 92.54321, "loss_bbox": 0.24881, "loss_mask": 0.24806, "loss": 0.80927, "time": 0.41145} -{"mode": "train", "epoch": 4, "iter": 4250, "lr": 0.0002, "memory": 8423, "data_time": 0.0455, "loss_rpn_cls": 0.03325, "loss_rpn_bbox": 0.05584, "loss_cls": 0.23066, "acc": 92.45947, "loss_bbox": 0.24448, "loss_mask": 0.24815, "loss": 0.81238, "time": 0.40609} -{"mode": "train", "epoch": 4, "iter": 4300, "lr": 0.0002, "memory": 8423, "data_time": 0.04126, "loss_rpn_cls": 0.03125, "loss_rpn_bbox": 0.0512, "loss_cls": 0.2242, "acc": 92.41284, "loss_bbox": 0.24453, "loss_mask": 0.25613, "loss": 0.80732, "time": 0.38393} -{"mode": "train", "epoch": 4, "iter": 4350, "lr": 0.0002, "memory": 8423, "data_time": 0.04801, "loss_rpn_cls": 0.03501, "loss_rpn_bbox": 0.05358, "loss_cls": 0.22491, "acc": 92.36011, "loss_bbox": 0.25238, "loss_mask": 0.25422, "loss": 0.8201, "time": 0.41233} -{"mode": "train", "epoch": 4, "iter": 4400, "lr": 0.0002, "memory": 8423, "data_time": 0.04967, "loss_rpn_cls": 0.03314, "loss_rpn_bbox": 0.0503, "loss_cls": 0.22698, "acc": 
92.47876, "loss_bbox": 0.25639, "loss_mask": 0.24439, "loss": 0.8112, "time": 0.40463} -{"mode": "train", "epoch": 4, "iter": 4450, "lr": 0.0002, "memory": 8423, "data_time": 0.04357, "loss_rpn_cls": 0.03297, "loss_rpn_bbox": 0.05413, "loss_cls": 0.22699, "acc": 92.44385, "loss_bbox": 0.24795, "loss_mask": 0.25066, "loss": 0.81271, "time": 0.39952} -{"mode": "train", "epoch": 4, "iter": 4500, "lr": 0.0002, "memory": 8423, "data_time": 0.03713, "loss_rpn_cls": 0.03348, "loss_rpn_bbox": 0.05224, "loss_cls": 0.23024, "acc": 92.47607, "loss_bbox": 0.25165, "loss_mask": 0.2463, "loss": 0.81391, "time": 0.40207} -{"mode": "train", "epoch": 4, "iter": 4550, "lr": 0.0002, "memory": 8423, "data_time": 0.03971, "loss_rpn_cls": 0.03146, "loss_rpn_bbox": 0.05032, "loss_cls": 0.22665, "acc": 92.56665, "loss_bbox": 0.24429, "loss_mask": 0.24734, "loss": 0.80006, "time": 0.39713} -{"mode": "train", "epoch": 4, "iter": 4600, "lr": 0.0002, "memory": 8423, "data_time": 0.04773, "loss_rpn_cls": 0.03114, "loss_rpn_bbox": 0.0522, "loss_cls": 0.22804, "acc": 92.36499, "loss_bbox": 0.25147, "loss_mask": 0.24418, "loss": 0.80703, "time": 0.41001} -{"mode": "train", "epoch": 4, "iter": 4650, "lr": 0.0002, "memory": 8423, "data_time": 0.04319, "loss_rpn_cls": 0.03055, "loss_rpn_bbox": 0.05245, "loss_cls": 0.23329, "acc": 92.24902, "loss_bbox": 0.25136, "loss_mask": 0.25388, "loss": 0.82154, "time": 0.39278} -{"mode": "train", "epoch": 4, "iter": 4700, "lr": 0.0002, "memory": 8423, "data_time": 0.04468, "loss_rpn_cls": 0.03498, "loss_rpn_bbox": 0.05161, "loss_cls": 0.22686, "acc": 92.45874, "loss_bbox": 0.24649, "loss_mask": 0.24439, "loss": 0.80432, "time": 0.40619} -{"mode": "train", "epoch": 4, "iter": 4750, "lr": 0.0002, "memory": 8423, "data_time": 0.05176, "loss_rpn_cls": 0.03321, "loss_rpn_bbox": 0.05073, "loss_cls": 0.22903, "acc": 92.36133, "loss_bbox": 0.24963, "loss_mask": 0.25309, "loss": 0.81569, "time": 0.4052} -{"mode": "train", "epoch": 4, "iter": 4800, "lr": 0.0002, "memory": 8423, "data_time": 0.04569, "loss_rpn_cls": 0.03286, "loss_rpn_bbox": 0.05656, "loss_cls": 0.24417, "acc": 91.92041, "loss_bbox": 0.25895, "loss_mask": 0.24788, "loss": 0.84041, "time": 0.40175} -{"mode": "train", "epoch": 4, "iter": 4850, "lr": 0.0002, "memory": 8423, "data_time": 0.04689, "loss_rpn_cls": 0.03208, "loss_rpn_bbox": 0.05073, "loss_cls": 0.22408, "acc": 92.61133, "loss_bbox": 0.24313, "loss_mask": 0.24922, "loss": 0.79924, "time": 0.39553} -{"mode": "train", "epoch": 4, "iter": 4900, "lr": 0.0002, "memory": 8423, "data_time": 0.04348, "loss_rpn_cls": 0.0344, "loss_rpn_bbox": 0.05661, "loss_cls": 0.24078, "acc": 91.89087, "loss_bbox": 0.26298, "loss_mask": 0.25737, "loss": 0.85215, "time": 0.42724} -{"mode": "train", "epoch": 4, "iter": 4950, "lr": 0.0002, "memory": 8423, "data_time": 0.03975, "loss_rpn_cls": 0.03096, "loss_rpn_bbox": 0.05185, "loss_cls": 0.22858, "acc": 92.33374, "loss_bbox": 0.2549, "loss_mask": 0.24781, "loss": 0.8141, "time": 0.40581} -{"mode": "train", "epoch": 4, "iter": 5000, "lr": 0.0002, "memory": 8423, "data_time": 0.04154, "loss_rpn_cls": 0.03115, "loss_rpn_bbox": 0.0517, "loss_cls": 0.22713, "acc": 92.43115, "loss_bbox": 0.24872, "loss_mask": 0.24781, "loss": 0.80652, "time": 0.38845} -{"mode": "train", "epoch": 4, "iter": 5050, "lr": 0.0002, "memory": 8423, "data_time": 0.0451, "loss_rpn_cls": 0.03333, "loss_rpn_bbox": 0.05601, "loss_cls": 0.2333, "acc": 92.1958, "loss_bbox": 0.25522, "loss_mask": 0.25671, "loss": 0.83458, "time": 0.40588} -{"mode": "train", "epoch": 4, "iter": 5100, 
"lr": 0.0002, "memory": 8423, "data_time": 0.04687, "loss_rpn_cls": 0.03787, "loss_rpn_bbox": 0.05722, "loss_cls": 0.23379, "acc": 92.2832, "loss_bbox": 0.24724, "loss_mask": 0.25333, "loss": 0.82945, "time": 0.41266} -{"mode": "train", "epoch": 4, "iter": 5150, "lr": 0.0002, "memory": 8423, "data_time": 0.05184, "loss_rpn_cls": 0.03123, "loss_rpn_bbox": 0.05301, "loss_cls": 0.2243, "acc": 92.41846, "loss_bbox": 0.2464, "loss_mask": 0.24365, "loss": 0.79861, "time": 0.41307} -{"mode": "train", "epoch": 4, "iter": 5200, "lr": 0.0002, "memory": 8423, "data_time": 0.05464, "loss_rpn_cls": 0.03141, "loss_rpn_bbox": 0.05382, "loss_cls": 0.23279, "acc": 92.26489, "loss_bbox": 0.25009, "loss_mask": 0.24473, "loss": 0.81283, "time": 0.41616} -{"mode": "train", "epoch": 4, "iter": 5250, "lr": 0.0002, "memory": 8423, "data_time": 0.03756, "loss_rpn_cls": 0.03205, "loss_rpn_bbox": 0.05279, "loss_cls": 0.23402, "acc": 92.2002, "loss_bbox": 0.25314, "loss_mask": 0.2507, "loss": 0.82269, "time": 0.39648} -{"mode": "train", "epoch": 4, "iter": 5300, "lr": 0.0002, "memory": 8423, "data_time": 0.05091, "loss_rpn_cls": 0.03124, "loss_rpn_bbox": 0.05364, "loss_cls": 0.23749, "acc": 92.16333, "loss_bbox": 0.25716, "loss_mask": 0.24793, "loss": 0.82745, "time": 0.40944} -{"mode": "train", "epoch": 4, "iter": 5350, "lr": 0.0002, "memory": 8423, "data_time": 0.04445, "loss_rpn_cls": 0.02986, "loss_rpn_bbox": 0.05347, "loss_cls": 0.234, "acc": 92.30957, "loss_bbox": 0.2538, "loss_mask": 0.25134, "loss": 0.82248, "time": 0.40951} -{"mode": "train", "epoch": 4, "iter": 5400, "lr": 0.0002, "memory": 8423, "data_time": 0.0489, "loss_rpn_cls": 0.03302, "loss_rpn_bbox": 0.05449, "loss_cls": 0.23032, "acc": 92.31934, "loss_bbox": 0.25278, "loss_mask": 0.2486, "loss": 0.8192, "time": 0.41979} -{"mode": "train", "epoch": 4, "iter": 5450, "lr": 0.0002, "memory": 8423, "data_time": 0.03992, "loss_rpn_cls": 0.03187, "loss_rpn_bbox": 0.05054, "loss_cls": 0.22734, "acc": 92.56812, "loss_bbox": 0.24481, "loss_mask": 0.24628, "loss": 0.80085, "time": 0.4808} -{"mode": "train", "epoch": 4, "iter": 5500, "lr": 0.0002, "memory": 8423, "data_time": 0.04018, "loss_rpn_cls": 0.03159, "loss_rpn_bbox": 0.05323, "loss_cls": 0.24198, "acc": 91.99683, "loss_bbox": 0.25795, "loss_mask": 0.2488, "loss": 0.83357, "time": 0.42152} -{"mode": "train", "epoch": 4, "iter": 5550, "lr": 0.0002, "memory": 8423, "data_time": 0.04238, "loss_rpn_cls": 0.03326, "loss_rpn_bbox": 0.05419, "loss_cls": 0.22543, "acc": 92.44312, "loss_bbox": 0.24798, "loss_mask": 0.24676, "loss": 0.80763, "time": 0.42032} -{"mode": "train", "epoch": 4, "iter": 5600, "lr": 0.0002, "memory": 8423, "data_time": 0.0561, "loss_rpn_cls": 0.03329, "loss_rpn_bbox": 0.05279, "loss_cls": 0.22539, "acc": 92.53638, "loss_bbox": 0.25024, "loss_mask": 0.25684, "loss": 0.81853, "time": 0.39828} -{"mode": "train", "epoch": 4, "iter": 5650, "lr": 0.0002, "memory": 8423, "data_time": 0.03778, "loss_rpn_cls": 0.02973, "loss_rpn_bbox": 0.04959, "loss_cls": 0.23192, "acc": 92.38696, "loss_bbox": 0.24784, "loss_mask": 0.24474, "loss": 0.80383, "time": 0.38882} -{"mode": "train", "epoch": 4, "iter": 5700, "lr": 0.0002, "memory": 8423, "data_time": 0.04777, "loss_rpn_cls": 0.03054, "loss_rpn_bbox": 0.05196, "loss_cls": 0.22576, "acc": 92.45361, "loss_bbox": 0.2467, "loss_mask": 0.254, "loss": 0.80896, "time": 0.45673} -{"mode": "train", "epoch": 4, "iter": 5750, "lr": 0.0002, "memory": 8423, "data_time": 0.05165, "loss_rpn_cls": 0.03709, "loss_rpn_bbox": 0.05601, "loss_cls": 0.24278, "acc": 
91.93604, "loss_bbox": 0.26449, "loss_mask": 0.25328, "loss": 0.85364, "time": 0.40879} -{"mode": "train", "epoch": 4, "iter": 5800, "lr": 0.0002, "memory": 8423, "data_time": 0.04999, "loss_rpn_cls": 0.03103, "loss_rpn_bbox": 0.05484, "loss_cls": 0.22584, "acc": 92.32446, "loss_bbox": 0.25584, "loss_mask": 0.25017, "loss": 0.81772, "time": 0.40515} -{"mode": "train", "epoch": 4, "iter": 5850, "lr": 0.0002, "memory": 8423, "data_time": 0.04062, "loss_rpn_cls": 0.03038, "loss_rpn_bbox": 0.05293, "loss_cls": 0.22394, "acc": 92.44775, "loss_bbox": 0.24758, "loss_mask": 0.24844, "loss": 0.80327, "time": 0.40115} -{"mode": "train", "epoch": 4, "iter": 5900, "lr": 0.0002, "memory": 8423, "data_time": 0.04382, "loss_rpn_cls": 0.03238, "loss_rpn_bbox": 0.05386, "loss_cls": 0.22453, "acc": 92.50806, "loss_bbox": 0.24974, "loss_mask": 0.24739, "loss": 0.80789, "time": 0.41216} -{"mode": "train", "epoch": 4, "iter": 5950, "lr": 0.0002, "memory": 8423, "data_time": 0.04634, "loss_rpn_cls": 0.0292, "loss_rpn_bbox": 0.04985, "loss_cls": 0.22372, "acc": 92.4668, "loss_bbox": 0.25173, "loss_mask": 0.24541, "loss": 0.7999, "time": 0.38806} -{"mode": "train", "epoch": 4, "iter": 6000, "lr": 0.0002, "memory": 8423, "data_time": 0.04563, "loss_rpn_cls": 0.0322, "loss_rpn_bbox": 0.05484, "loss_cls": 0.23724, "acc": 92.05713, "loss_bbox": 0.25943, "loss_mask": 0.25048, "loss": 0.83419, "time": 0.41073} -{"mode": "train", "epoch": 4, "iter": 6050, "lr": 0.0002, "memory": 8423, "data_time": 0.09769, "loss_rpn_cls": 0.03334, "loss_rpn_bbox": 0.05281, "loss_cls": 0.23094, "acc": 92.34448, "loss_bbox": 0.25393, "loss_mask": 0.25376, "loss": 0.82478, "time": 0.44945} -{"mode": "train", "epoch": 4, "iter": 6100, "lr": 0.0002, "memory": 8423, "data_time": 0.04176, "loss_rpn_cls": 0.03221, "loss_rpn_bbox": 0.05286, "loss_cls": 0.22764, "acc": 92.17017, "loss_bbox": 0.25323, "loss_mask": 0.25248, "loss": 0.81843, "time": 0.42025} -{"mode": "train", "epoch": 4, "iter": 6150, "lr": 0.0002, "memory": 8423, "data_time": 0.0402, "loss_rpn_cls": 0.03086, "loss_rpn_bbox": 0.05029, "loss_cls": 0.21587, "acc": 92.71704, "loss_bbox": 0.24212, "loss_mask": 0.24329, "loss": 0.78243, "time": 0.40359} -{"mode": "train", "epoch": 4, "iter": 6200, "lr": 0.0002, "memory": 8423, "data_time": 0.04637, "loss_rpn_cls": 0.0341, "loss_rpn_bbox": 0.05365, "loss_cls": 0.22203, "acc": 92.39526, "loss_bbox": 0.24735, "loss_mask": 0.25265, "loss": 0.80978, "time": 0.4176} -{"mode": "train", "epoch": 4, "iter": 6250, "lr": 0.0002, "memory": 8423, "data_time": 0.05305, "loss_rpn_cls": 0.0289, "loss_rpn_bbox": 0.05094, "loss_cls": 0.23088, "acc": 92.32642, "loss_bbox": 0.2533, "loss_mask": 0.25681, "loss": 0.82083, "time": 0.41613} -{"mode": "train", "epoch": 4, "iter": 6300, "lr": 0.0002, "memory": 8423, "data_time": 0.04544, "loss_rpn_cls": 0.03171, "loss_rpn_bbox": 0.05, "loss_cls": 0.23181, "acc": 92.36304, "loss_bbox": 0.24976, "loss_mask": 0.24402, "loss": 0.8073, "time": 0.40364} -{"mode": "train", "epoch": 4, "iter": 6350, "lr": 0.0002, "memory": 8423, "data_time": 0.03628, "loss_rpn_cls": 0.03148, "loss_rpn_bbox": 0.04813, "loss_cls": 0.22909, "acc": 92.42993, "loss_bbox": 0.24627, "loss_mask": 0.2428, "loss": 0.79777, "time": 0.39643} -{"mode": "train", "epoch": 4, "iter": 6400, "lr": 0.0002, "memory": 8423, "data_time": 0.04323, "loss_rpn_cls": 0.03396, "loss_rpn_bbox": 0.05347, "loss_cls": 0.23881, "acc": 92.17163, "loss_bbox": 0.25656, "loss_mask": 0.25071, "loss": 0.8335, "time": 0.40909} -{"mode": "train", "epoch": 4, "iter": 6450, 
"lr": 0.0002, "memory": 8423, "data_time": 0.03733, "loss_rpn_cls": 0.03581, "loss_rpn_bbox": 0.05332, "loss_cls": 0.23126, "acc": 92.28784, "loss_bbox": 0.24876, "loss_mask": 0.25287, "loss": 0.82202, "time": 0.41736} -{"mode": "train", "epoch": 4, "iter": 6500, "lr": 0.0002, "memory": 8423, "data_time": 0.04911, "loss_rpn_cls": 0.03081, "loss_rpn_bbox": 0.05421, "loss_cls": 0.2258, "acc": 92.38989, "loss_bbox": 0.25407, "loss_mask": 0.25063, "loss": 0.81553, "time": 0.40957} -{"mode": "train", "epoch": 4, "iter": 6550, "lr": 0.0002, "memory": 8423, "data_time": 0.05102, "loss_rpn_cls": 0.03462, "loss_rpn_bbox": 0.05258, "loss_cls": 0.22467, "acc": 92.54541, "loss_bbox": 0.24627, "loss_mask": 0.25002, "loss": 0.80816, "time": 0.4193} -{"mode": "train", "epoch": 4, "iter": 6600, "lr": 0.0002, "memory": 8424, "data_time": 0.03979, "loss_rpn_cls": 0.03268, "loss_rpn_bbox": 0.05322, "loss_cls": 0.22829, "acc": 92.37524, "loss_bbox": 0.24609, "loss_mask": 0.25077, "loss": 0.81105, "time": 0.4098} -{"mode": "train", "epoch": 4, "iter": 6650, "lr": 0.0002, "memory": 8424, "data_time": 0.04457, "loss_rpn_cls": 0.03446, "loss_rpn_bbox": 0.05412, "loss_cls": 0.23796, "acc": 92.10571, "loss_bbox": 0.25776, "loss_mask": 0.25287, "loss": 0.83717, "time": 0.41913} -{"mode": "train", "epoch": 4, "iter": 6700, "lr": 0.0002, "memory": 8424, "data_time": 0.0374, "loss_rpn_cls": 0.03348, "loss_rpn_bbox": 0.05369, "loss_cls": 0.23583, "acc": 92.12012, "loss_bbox": 0.25691, "loss_mask": 0.25506, "loss": 0.83497, "time": 0.40643} -{"mode": "train", "epoch": 4, "iter": 6750, "lr": 0.0002, "memory": 8424, "data_time": 0.04936, "loss_rpn_cls": 0.03091, "loss_rpn_bbox": 0.05035, "loss_cls": 0.2332, "acc": 92.17822, "loss_bbox": 0.25516, "loss_mask": 0.24757, "loss": 0.81719, "time": 0.40618} -{"mode": "train", "epoch": 4, "iter": 6800, "lr": 0.0002, "memory": 8424, "data_time": 0.05241, "loss_rpn_cls": 0.03375, "loss_rpn_bbox": 0.05305, "loss_cls": 0.23804, "acc": 92.25195, "loss_bbox": 0.24788, "loss_mask": 0.24485, "loss": 0.81757, "time": 0.41038} -{"mode": "train", "epoch": 4, "iter": 6850, "lr": 0.0002, "memory": 8424, "data_time": 0.04949, "loss_rpn_cls": 0.03298, "loss_rpn_bbox": 0.05212, "loss_cls": 0.22934, "acc": 92.56836, "loss_bbox": 0.24069, "loss_mask": 0.25387, "loss": 0.809, "time": 0.39743} -{"mode": "train", "epoch": 4, "iter": 6900, "lr": 0.0002, "memory": 8424, "data_time": 0.0457, "loss_rpn_cls": 0.03203, "loss_rpn_bbox": 0.05334, "loss_cls": 0.23074, "acc": 92.36157, "loss_bbox": 0.24952, "loss_mask": 0.25414, "loss": 0.81977, "time": 0.41021} -{"mode": "train", "epoch": 4, "iter": 6950, "lr": 0.0002, "memory": 8424, "data_time": 0.0452, "loss_rpn_cls": 0.03333, "loss_rpn_bbox": 0.05278, "loss_cls": 0.22932, "acc": 92.22974, "loss_bbox": 0.25245, "loss_mask": 0.24505, "loss": 0.81293, "time": 0.40326} -{"mode": "train", "epoch": 4, "iter": 7000, "lr": 0.0002, "memory": 8424, "data_time": 0.04427, "loss_rpn_cls": 0.03343, "loss_rpn_bbox": 0.05278, "loss_cls": 0.22304, "acc": 92.52026, "loss_bbox": 0.24048, "loss_mask": 0.24413, "loss": 0.79386, "time": 0.39533} -{"mode": "train", "epoch": 4, "iter": 7050, "lr": 0.0002, "memory": 8424, "data_time": 0.03639, "loss_rpn_cls": 0.03089, "loss_rpn_bbox": 0.05052, "loss_cls": 0.21976, "acc": 92.5813, "loss_bbox": 0.24336, "loss_mask": 0.24319, "loss": 0.78774, "time": 0.39767} -{"mode": "train", "epoch": 4, "iter": 7100, "lr": 0.0002, "memory": 8424, "data_time": 0.05382, "loss_rpn_cls": 0.03402, "loss_rpn_bbox": 0.05277, "loss_cls": 0.22517, "acc": 
92.36987, "loss_bbox": 0.24725, "loss_mask": 0.25022, "loss": 0.80944, "time": 0.41753} -{"mode": "train", "epoch": 4, "iter": 7150, "lr": 0.0002, "memory": 8424, "data_time": 0.0441, "loss_rpn_cls": 0.03177, "loss_rpn_bbox": 0.05136, "loss_cls": 0.23202, "acc": 92.24487, "loss_bbox": 0.25583, "loss_mask": 0.25187, "loss": 0.82286, "time": 0.41211} -{"mode": "train", "epoch": 4, "iter": 7200, "lr": 0.0002, "memory": 8424, "data_time": 0.04695, "loss_rpn_cls": 0.03443, "loss_rpn_bbox": 0.05686, "loss_cls": 0.23572, "acc": 92.08203, "loss_bbox": 0.25589, "loss_mask": 0.25256, "loss": 0.83546, "time": 0.40915} -{"mode": "train", "epoch": 4, "iter": 7250, "lr": 0.0002, "memory": 8424, "data_time": 0.04115, "loss_rpn_cls": 0.03331, "loss_rpn_bbox": 0.05181, "loss_cls": 0.22449, "acc": 92.55957, "loss_bbox": 0.24196, "loss_mask": 0.25115, "loss": 0.80272, "time": 0.39257} -{"mode": "train", "epoch": 4, "iter": 7300, "lr": 0.0002, "memory": 8424, "data_time": 0.04756, "loss_rpn_cls": 0.02954, "loss_rpn_bbox": 0.05139, "loss_cls": 0.23322, "acc": 92.2644, "loss_bbox": 0.25247, "loss_mask": 0.24773, "loss": 0.81435, "time": 0.40759} -{"mode": "val", "epoch": 4, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3447, "bbox_mAP_50": 0.5675, "bbox_mAP_75": 0.3729, "bbox_mAP_s": 0.2029, "bbox_mAP_m": 0.3894, "bbox_mAP_l": 0.4493, "bbox_mAP_copypaste": "0.3447 0.5675 0.3729 0.2029 0.3894 0.4493", "segm_mAP": 0.3297, "segm_mAP_50": 0.5381, "segm_mAP_75": 0.3506, "segm_mAP_s": 0.1521, "segm_mAP_m": 0.3641, "segm_mAP_l": 0.4858, "segm_mAP_copypaste": "0.3297 0.5381 0.3506 0.1521 0.3641 0.4858"} -{"mode": "train", "epoch": 5, "iter": 50, "lr": 0.0002, "memory": 8424, "data_time": 0.11908, "loss_rpn_cls": 0.02948, "loss_rpn_bbox": 0.05135, "loss_cls": 0.22686, "acc": 92.21338, "loss_bbox": 0.25589, "loss_mask": 0.24889, "loss": 0.81247, "time": 0.49927} -{"mode": "train", "epoch": 5, "iter": 100, "lr": 0.0002, "memory": 8424, "data_time": 0.05153, "loss_rpn_cls": 0.0295, "loss_rpn_bbox": 0.05384, "loss_cls": 0.22056, "acc": 92.31909, "loss_bbox": 0.25771, "loss_mask": 0.24879, "loss": 0.81039, "time": 0.49028} -{"mode": "train", "epoch": 5, "iter": 150, "lr": 0.0002, "memory": 8424, "data_time": 0.04335, "loss_rpn_cls": 0.02981, "loss_rpn_bbox": 0.05189, "loss_cls": 0.22057, "acc": 92.4353, "loss_bbox": 0.24812, "loss_mask": 0.24996, "loss": 0.80035, "time": 0.42387} -{"mode": "train", "epoch": 5, "iter": 200, "lr": 0.0002, "memory": 8424, "data_time": 0.04607, "loss_rpn_cls": 0.02983, "loss_rpn_bbox": 0.05252, "loss_cls": 0.21112, "acc": 92.72876, "loss_bbox": 0.24601, "loss_mask": 0.24478, "loss": 0.78426, "time": 0.41431} -{"mode": "train", "epoch": 5, "iter": 250, "lr": 0.0002, "memory": 8424, "data_time": 0.04726, "loss_rpn_cls": 0.03053, "loss_rpn_bbox": 0.0539, "loss_cls": 0.21455, "acc": 92.68311, "loss_bbox": 0.24281, "loss_mask": 0.24544, "loss": 0.78723, "time": 0.41212} -{"mode": "train", "epoch": 5, "iter": 300, "lr": 0.0002, "memory": 8424, "data_time": 0.04111, "loss_rpn_cls": 0.02881, "loss_rpn_bbox": 0.05046, "loss_cls": 0.21477, "acc": 92.52515, "loss_bbox": 0.24236, "loss_mask": 0.24399, "loss": 0.78039, "time": 0.40981} -{"mode": "train", "epoch": 5, "iter": 350, "lr": 0.0002, "memory": 8424, "data_time": 0.05616, "loss_rpn_cls": 0.02971, "loss_rpn_bbox": 0.0509, "loss_cls": 0.21596, "acc": 92.39307, "loss_bbox": 0.2534, "loss_mask": 0.24789, "loss": 0.79787, "time": 0.42216} -{"mode": "train", "epoch": 5, "iter": 400, "lr": 0.0002, "memory": 8424, "data_time": 0.03751, "loss_rpn_cls": 0.02903, 
"loss_rpn_bbox": 0.04874, "loss_cls": 0.22401, "acc": 92.40112, "loss_bbox": 0.25076, "loss_mask": 0.24252, "loss": 0.79505, "time": 0.41115} -{"mode": "train", "epoch": 5, "iter": 450, "lr": 0.0002, "memory": 8424, "data_time": 0.05099, "loss_rpn_cls": 0.03199, "loss_rpn_bbox": 0.05176, "loss_cls": 0.21761, "acc": 92.55566, "loss_bbox": 0.24745, "loss_mask": 0.24321, "loss": 0.79202, "time": 0.41433} -{"mode": "train", "epoch": 5, "iter": 500, "lr": 0.0002, "memory": 8424, "data_time": 0.04778, "loss_rpn_cls": 0.03025, "loss_rpn_bbox": 0.05283, "loss_cls": 0.2202, "acc": 92.54077, "loss_bbox": 0.24522, "loss_mask": 0.2443, "loss": 0.7928, "time": 0.40607} -{"mode": "train", "epoch": 5, "iter": 550, "lr": 0.0002, "memory": 8424, "data_time": 0.03563, "loss_rpn_cls": 0.0291, "loss_rpn_bbox": 0.05017, "loss_cls": 0.21165, "acc": 92.86816, "loss_bbox": 0.23569, "loss_mask": 0.23775, "loss": 0.76436, "time": 0.40182} -{"mode": "train", "epoch": 5, "iter": 600, "lr": 0.0002, "memory": 8424, "data_time": 0.04398, "loss_rpn_cls": 0.03089, "loss_rpn_bbox": 0.0529, "loss_cls": 0.23113, "acc": 92.11108, "loss_bbox": 0.25993, "loss_mask": 0.24075, "loss": 0.8156, "time": 0.41498} -{"mode": "train", "epoch": 5, "iter": 650, "lr": 0.0002, "memory": 8424, "data_time": 0.04015, "loss_rpn_cls": 0.02942, "loss_rpn_bbox": 0.05022, "loss_cls": 0.22291, "acc": 92.30127, "loss_bbox": 0.25344, "loss_mask": 0.24447, "loss": 0.80046, "time": 0.41678} -{"mode": "train", "epoch": 5, "iter": 700, "lr": 0.0002, "memory": 8424, "data_time": 0.04983, "loss_rpn_cls": 0.02915, "loss_rpn_bbox": 0.05318, "loss_cls": 0.22115, "acc": 92.49048, "loss_bbox": 0.24778, "loss_mask": 0.25109, "loss": 0.80236, "time": 0.41174} -{"mode": "train", "epoch": 5, "iter": 750, "lr": 0.0002, "memory": 8424, "data_time": 0.0463, "loss_rpn_cls": 0.02766, "loss_rpn_bbox": 0.04852, "loss_cls": 0.21063, "acc": 92.85669, "loss_bbox": 0.23785, "loss_mask": 0.23862, "loss": 0.76328, "time": 0.40948} -{"mode": "train", "epoch": 5, "iter": 800, "lr": 0.0002, "memory": 8424, "data_time": 0.05219, "loss_rpn_cls": 0.03095, "loss_rpn_bbox": 0.05305, "loss_cls": 0.21866, "acc": 92.48218, "loss_bbox": 0.24986, "loss_mask": 0.24723, "loss": 0.79974, "time": 0.42402} -{"mode": "train", "epoch": 5, "iter": 850, "lr": 0.0002, "memory": 8424, "data_time": 0.04835, "loss_rpn_cls": 0.0308, "loss_rpn_bbox": 0.0532, "loss_cls": 0.22272, "acc": 92.25635, "loss_bbox": 0.2544, "loss_mask": 0.24917, "loss": 0.81029, "time": 0.41595} -{"mode": "train", "epoch": 5, "iter": 900, "lr": 0.0002, "memory": 8424, "data_time": 0.04701, "loss_rpn_cls": 0.03035, "loss_rpn_bbox": 0.05364, "loss_cls": 0.23233, "acc": 92.12402, "loss_bbox": 0.25585, "loss_mask": 0.24761, "loss": 0.81978, "time": 0.41539} -{"mode": "train", "epoch": 5, "iter": 950, "lr": 0.0002, "memory": 8424, "data_time": 0.04448, "loss_rpn_cls": 0.03066, "loss_rpn_bbox": 0.05063, "loss_cls": 0.21654, "acc": 92.64233, "loss_bbox": 0.23928, "loss_mask": 0.24509, "loss": 0.7822, "time": 0.41221} -{"mode": "train", "epoch": 5, "iter": 1000, "lr": 0.0002, "memory": 8424, "data_time": 0.04855, "loss_rpn_cls": 0.03194, "loss_rpn_bbox": 0.05463, "loss_cls": 0.22208, "acc": 92.52539, "loss_bbox": 0.25172, "loss_mask": 0.25175, "loss": 0.81213, "time": 0.41883} -{"mode": "train", "epoch": 5, "iter": 1050, "lr": 0.0002, "memory": 8424, "data_time": 0.04664, "loss_rpn_cls": 0.0319, "loss_rpn_bbox": 0.05252, "loss_cls": 0.23062, "acc": 92.2085, "loss_bbox": 0.26027, "loss_mask": 0.24975, "loss": 0.82507, "time": 0.41103} 
-{"mode": "train", "epoch": 5, "iter": 1100, "lr": 0.0002, "memory": 8424, "data_time": 0.04771, "loss_rpn_cls": 0.02807, "loss_rpn_bbox": 0.04904, "loss_cls": 0.21626, "acc": 92.63794, "loss_bbox": 0.24021, "loss_mask": 0.24708, "loss": 0.78066, "time": 0.40677} -{"mode": "train", "epoch": 5, "iter": 1150, "lr": 0.0002, "memory": 8424, "data_time": 0.0433, "loss_rpn_cls": 0.02996, "loss_rpn_bbox": 0.05176, "loss_cls": 0.2277, "acc": 92.2002, "loss_bbox": 0.25299, "loss_mask": 0.24608, "loss": 0.80849, "time": 0.39701} -{"mode": "train", "epoch": 5, "iter": 1200, "lr": 0.0002, "memory": 8424, "data_time": 0.05397, "loss_rpn_cls": 0.02868, "loss_rpn_bbox": 0.04926, "loss_cls": 0.21912, "acc": 92.6377, "loss_bbox": 0.24389, "loss_mask": 0.24001, "loss": 0.78097, "time": 0.4094} -{"mode": "train", "epoch": 5, "iter": 1250, "lr": 0.0002, "memory": 8424, "data_time": 0.04713, "loss_rpn_cls": 0.03011, "loss_rpn_bbox": 0.05276, "loss_cls": 0.23278, "acc": 92.2771, "loss_bbox": 0.24801, "loss_mask": 0.23882, "loss": 0.80247, "time": 0.40305} -{"mode": "train", "epoch": 5, "iter": 1300, "lr": 0.0002, "memory": 8424, "data_time": 0.04326, "loss_rpn_cls": 0.02704, "loss_rpn_bbox": 0.04818, "loss_cls": 0.20562, "acc": 93.07959, "loss_bbox": 0.22984, "loss_mask": 0.24279, "loss": 0.75347, "time": 0.40481} -{"mode": "train", "epoch": 5, "iter": 1350, "lr": 0.0002, "memory": 8424, "data_time": 0.0465, "loss_rpn_cls": 0.031, "loss_rpn_bbox": 0.05075, "loss_cls": 0.21971, "acc": 92.5249, "loss_bbox": 0.2507, "loss_mask": 0.24347, "loss": 0.79563, "time": 0.40782} -{"mode": "train", "epoch": 5, "iter": 1400, "lr": 0.0002, "memory": 8424, "data_time": 0.04206, "loss_rpn_cls": 0.02802, "loss_rpn_bbox": 0.05476, "loss_cls": 0.22713, "acc": 92.24292, "loss_bbox": 0.2552, "loss_mask": 0.24907, "loss": 0.81418, "time": 0.40877} -{"mode": "train", "epoch": 5, "iter": 1450, "lr": 0.0002, "memory": 8424, "data_time": 0.04053, "loss_rpn_cls": 0.02957, "loss_rpn_bbox": 0.04953, "loss_cls": 0.21909, "acc": 92.49243, "loss_bbox": 0.25177, "loss_mask": 0.25108, "loss": 0.80103, "time": 0.41063} -{"mode": "train", "epoch": 5, "iter": 1500, "lr": 0.0002, "memory": 8424, "data_time": 0.0398, "loss_rpn_cls": 0.03051, "loss_rpn_bbox": 0.0496, "loss_cls": 0.2203, "acc": 92.6394, "loss_bbox": 0.2428, "loss_mask": 0.2472, "loss": 0.79041, "time": 0.39702} -{"mode": "train", "epoch": 5, "iter": 1550, "lr": 0.0002, "memory": 8424, "data_time": 0.046, "loss_rpn_cls": 0.02842, "loss_rpn_bbox": 0.052, "loss_cls": 0.22131, "acc": 92.35449, "loss_bbox": 0.25374, "loss_mask": 0.24304, "loss": 0.7985, "time": 0.42288} -{"mode": "train", "epoch": 5, "iter": 1600, "lr": 0.0002, "memory": 8424, "data_time": 0.04825, "loss_rpn_cls": 0.03198, "loss_rpn_bbox": 0.05154, "loss_cls": 0.22497, "acc": 92.28613, "loss_bbox": 0.25525, "loss_mask": 0.25126, "loss": 0.81498, "time": 0.41063} -{"mode": "train", "epoch": 5, "iter": 1650, "lr": 0.0002, "memory": 8424, "data_time": 0.04719, "loss_rpn_cls": 0.02802, "loss_rpn_bbox": 0.04892, "loss_cls": 0.21635, "acc": 92.56396, "loss_bbox": 0.24, "loss_mask": 0.23983, "loss": 0.77312, "time": 0.41142} -{"mode": "train", "epoch": 5, "iter": 1700, "lr": 0.0002, "memory": 8424, "data_time": 0.05595, "loss_rpn_cls": 0.03083, "loss_rpn_bbox": 0.05173, "loss_cls": 0.2198, "acc": 92.60645, "loss_bbox": 0.24066, "loss_mask": 0.24478, "loss": 0.78781, "time": 0.41785} -{"mode": "train", "epoch": 5, "iter": 1750, "lr": 0.0002, "memory": 8424, "data_time": 0.04978, "loss_rpn_cls": 0.02972, "loss_rpn_bbox": 0.05327, 
"loss_cls": 0.22844, "acc": 92.30713, "loss_bbox": 0.25383, "loss_mask": 0.24794, "loss": 0.8132, "time": 0.39028} -{"mode": "train", "epoch": 5, "iter": 1800, "lr": 0.0002, "memory": 8424, "data_time": 0.05025, "loss_rpn_cls": 0.03228, "loss_rpn_bbox": 0.05181, "loss_cls": 0.22852, "acc": 92.28052, "loss_bbox": 0.25079, "loss_mask": 0.24661, "loss": 0.81, "time": 0.40628} -{"mode": "train", "epoch": 5, "iter": 1850, "lr": 0.0002, "memory": 8424, "data_time": 0.05062, "loss_rpn_cls": 0.02924, "loss_rpn_bbox": 0.05304, "loss_cls": 0.21647, "acc": 92.67871, "loss_bbox": 0.24145, "loss_mask": 0.24589, "loss": 0.7861, "time": 0.40638} -{"mode": "train", "epoch": 5, "iter": 1900, "lr": 0.0002, "memory": 8424, "data_time": 0.05978, "loss_rpn_cls": 0.02816, "loss_rpn_bbox": 0.04932, "loss_cls": 0.22131, "acc": 92.48096, "loss_bbox": 0.243, "loss_mask": 0.24368, "loss": 0.78547, "time": 0.40774} -{"mode": "train", "epoch": 5, "iter": 1950, "lr": 0.0002, "memory": 8424, "data_time": 0.04396, "loss_rpn_cls": 0.02741, "loss_rpn_bbox": 0.04872, "loss_cls": 0.22744, "acc": 92.32422, "loss_bbox": 0.25365, "loss_mask": 0.24892, "loss": 0.80613, "time": 0.41669} -{"mode": "train", "epoch": 5, "iter": 2000, "lr": 0.0002, "memory": 8424, "data_time": 0.05899, "loss_rpn_cls": 0.03103, "loss_rpn_bbox": 0.049, "loss_cls": 0.21153, "acc": 92.81274, "loss_bbox": 0.23832, "loss_mask": 0.24059, "loss": 0.77047, "time": 0.4132} -{"mode": "train", "epoch": 5, "iter": 2050, "lr": 0.0002, "memory": 8424, "data_time": 0.0484, "loss_rpn_cls": 0.0286, "loss_rpn_bbox": 0.0509, "loss_cls": 0.23101, "acc": 92.38672, "loss_bbox": 0.24891, "loss_mask": 0.24889, "loss": 0.8083, "time": 0.41191} -{"mode": "train", "epoch": 5, "iter": 2100, "lr": 0.0002, "memory": 8424, "data_time": 0.04452, "loss_rpn_cls": 0.03063, "loss_rpn_bbox": 0.05184, "loss_cls": 0.23236, "acc": 92.01099, "loss_bbox": 0.25781, "loss_mask": 0.24832, "loss": 0.82096, "time": 0.40502} -{"mode": "train", "epoch": 5, "iter": 2150, "lr": 0.0002, "memory": 8424, "data_time": 0.04499, "loss_rpn_cls": 0.02954, "loss_rpn_bbox": 0.04905, "loss_cls": 0.21872, "acc": 92.56519, "loss_bbox": 0.24671, "loss_mask": 0.24471, "loss": 0.78872, "time": 0.40724} -{"mode": "train", "epoch": 5, "iter": 2200, "lr": 0.0002, "memory": 8424, "data_time": 0.04531, "loss_rpn_cls": 0.0302, "loss_rpn_bbox": 0.05112, "loss_cls": 0.2151, "acc": 92.53564, "loss_bbox": 0.24795, "loss_mask": 0.24673, "loss": 0.7911, "time": 0.40451} -{"mode": "train", "epoch": 5, "iter": 2250, "lr": 0.0002, "memory": 8424, "data_time": 0.04676, "loss_rpn_cls": 0.02941, "loss_rpn_bbox": 0.0492, "loss_cls": 0.2221, "acc": 92.63403, "loss_bbox": 0.24125, "loss_mask": 0.2407, "loss": 0.78266, "time": 0.39926} -{"mode": "train", "epoch": 5, "iter": 2300, "lr": 0.0002, "memory": 8424, "data_time": 0.04212, "loss_rpn_cls": 0.03246, "loss_rpn_bbox": 0.04889, "loss_cls": 0.2235, "acc": 92.38647, "loss_bbox": 0.24949, "loss_mask": 0.24777, "loss": 0.80211, "time": 0.451} -{"mode": "train", "epoch": 5, "iter": 2350, "lr": 0.0002, "memory": 8424, "data_time": 0.04292, "loss_rpn_cls": 0.02976, "loss_rpn_bbox": 0.0528, "loss_cls": 0.22165, "acc": 92.49976, "loss_bbox": 0.2475, "loss_mask": 0.24162, "loss": 0.79333, "time": 0.4321} -{"mode": "train", "epoch": 5, "iter": 2400, "lr": 0.0002, "memory": 8424, "data_time": 0.04271, "loss_rpn_cls": 0.03239, "loss_rpn_bbox": 0.05092, "loss_cls": 0.22029, "acc": 92.56494, "loss_bbox": 0.24278, "loss_mask": 0.24423, "loss": 0.79061, "time": 0.43748} -{"mode": "train", "epoch": 5, 
"iter": 2450, "lr": 0.0002, "memory": 8424, "data_time": 0.04239, "loss_rpn_cls": 0.02922, "loss_rpn_bbox": 0.05232, "loss_cls": 0.22833, "acc": 92.4292, "loss_bbox": 0.24969, "loss_mask": 0.24493, "loss": 0.80449, "time": 0.43429} -{"mode": "train", "epoch": 5, "iter": 2500, "lr": 0.0002, "memory": 8424, "data_time": 0.06526, "loss_rpn_cls": 0.03045, "loss_rpn_bbox": 0.05284, "loss_cls": 0.23067, "acc": 92.07861, "loss_bbox": 0.25713, "loss_mask": 0.24703, "loss": 0.81812, "time": 0.45858} -{"mode": "train", "epoch": 5, "iter": 2550, "lr": 0.0002, "memory": 8424, "data_time": 0.05036, "loss_rpn_cls": 0.03004, "loss_rpn_bbox": 0.05133, "loss_cls": 0.20948, "acc": 92.90649, "loss_bbox": 0.23812, "loss_mask": 0.24167, "loss": 0.77064, "time": 0.42732} -{"mode": "train", "epoch": 5, "iter": 2600, "lr": 0.0002, "memory": 8424, "data_time": 0.03885, "loss_rpn_cls": 0.02902, "loss_rpn_bbox": 0.04782, "loss_cls": 0.20394, "acc": 93.12061, "loss_bbox": 0.23326, "loss_mask": 0.23694, "loss": 0.75098, "time": 0.42109} -{"mode": "train", "epoch": 5, "iter": 2650, "lr": 0.0002, "memory": 8424, "data_time": 0.04926, "loss_rpn_cls": 0.03108, "loss_rpn_bbox": 0.05284, "loss_cls": 0.23261, "acc": 92.22998, "loss_bbox": 0.25512, "loss_mask": 0.25252, "loss": 0.82418, "time": 0.49894} -{"mode": "train", "epoch": 5, "iter": 2700, "lr": 0.0002, "memory": 8424, "data_time": 0.04505, "loss_rpn_cls": 0.03134, "loss_rpn_bbox": 0.05254, "loss_cls": 0.23187, "acc": 92.27441, "loss_bbox": 0.25102, "loss_mask": 0.24534, "loss": 0.81211, "time": 0.44696} -{"mode": "train", "epoch": 5, "iter": 2750, "lr": 0.0002, "memory": 8424, "data_time": 0.0535, "loss_rpn_cls": 0.02823, "loss_rpn_bbox": 0.05207, "loss_cls": 0.21891, "acc": 92.48804, "loss_bbox": 0.24732, "loss_mask": 0.24447, "loss": 0.791, "time": 0.49059} -{"mode": "train", "epoch": 5, "iter": 2800, "lr": 0.0002, "memory": 8424, "data_time": 0.04303, "loss_rpn_cls": 0.0279, "loss_rpn_bbox": 0.05006, "loss_cls": 0.21209, "acc": 92.81812, "loss_bbox": 0.239, "loss_mask": 0.24192, "loss": 0.77098, "time": 0.4439} -{"mode": "train", "epoch": 5, "iter": 2850, "lr": 0.0002, "memory": 8424, "data_time": 0.04307, "loss_rpn_cls": 0.03043, "loss_rpn_bbox": 0.05123, "loss_cls": 0.22527, "acc": 92.2998, "loss_bbox": 0.25044, "loss_mask": 0.24417, "loss": 0.80153, "time": 0.45916} -{"mode": "train", "epoch": 5, "iter": 2900, "lr": 0.0002, "memory": 8424, "data_time": 0.04245, "loss_rpn_cls": 0.02951, "loss_rpn_bbox": 0.0523, "loss_cls": 0.2208, "acc": 92.40088, "loss_bbox": 0.24913, "loss_mask": 0.24465, "loss": 0.79639, "time": 0.42267} -{"mode": "train", "epoch": 5, "iter": 2950, "lr": 0.0002, "memory": 8424, "data_time": 0.05169, "loss_rpn_cls": 0.03156, "loss_rpn_bbox": 0.05333, "loss_cls": 0.22384, "acc": 92.28857, "loss_bbox": 0.25334, "loss_mask": 0.24446, "loss": 0.80654, "time": 0.40543} -{"mode": "train", "epoch": 5, "iter": 3000, "lr": 0.0002, "memory": 8424, "data_time": 0.04542, "loss_rpn_cls": 0.03178, "loss_rpn_bbox": 0.05257, "loss_cls": 0.21906, "acc": 92.54565, "loss_bbox": 0.24498, "loss_mask": 0.24077, "loss": 0.78916, "time": 0.41044} -{"mode": "train", "epoch": 5, "iter": 3050, "lr": 0.0002, "memory": 8424, "data_time": 0.04075, "loss_rpn_cls": 0.02986, "loss_rpn_bbox": 0.05233, "loss_cls": 0.23339, "acc": 92.16382, "loss_bbox": 0.25206, "loss_mask": 0.25056, "loss": 0.81819, "time": 0.40859} -{"mode": "train", "epoch": 5, "iter": 3100, "lr": 0.0002, "memory": 8424, "data_time": 0.03448, "loss_rpn_cls": 0.02987, "loss_rpn_bbox": 0.05072, "loss_cls": 
0.2139, "acc": 92.63306, "loss_bbox": 0.23695, "loss_mask": 0.24418, "loss": 0.77563, "time": 0.41187} -{"mode": "train", "epoch": 5, "iter": 3150, "lr": 0.0002, "memory": 8424, "data_time": 0.04109, "loss_rpn_cls": 0.02926, "loss_rpn_bbox": 0.05129, "loss_cls": 0.22994, "acc": 92.2627, "loss_bbox": 0.25648, "loss_mask": 0.24917, "loss": 0.81614, "time": 0.40997} -{"mode": "train", "epoch": 5, "iter": 3200, "lr": 0.0002, "memory": 8424, "data_time": 0.04916, "loss_rpn_cls": 0.03176, "loss_rpn_bbox": 0.05309, "loss_cls": 0.2368, "acc": 92.01562, "loss_bbox": 0.25978, "loss_mask": 0.25486, "loss": 0.83629, "time": 0.40881} -{"mode": "train", "epoch": 5, "iter": 3250, "lr": 0.0002, "memory": 8424, "data_time": 0.05341, "loss_rpn_cls": 0.03099, "loss_rpn_bbox": 0.0549, "loss_cls": 0.23084, "acc": 92.31079, "loss_bbox": 0.2536, "loss_mask": 0.25206, "loss": 0.82238, "time": 0.4088} -{"mode": "train", "epoch": 5, "iter": 3300, "lr": 0.0002, "memory": 8424, "data_time": 0.0448, "loss_rpn_cls": 0.0309, "loss_rpn_bbox": 0.04881, "loss_cls": 0.22115, "acc": 92.5813, "loss_bbox": 0.24238, "loss_mask": 0.24588, "loss": 0.78912, "time": 0.4163} -{"mode": "train", "epoch": 5, "iter": 3350, "lr": 0.0002, "memory": 8424, "data_time": 0.03979, "loss_rpn_cls": 0.02994, "loss_rpn_bbox": 0.05225, "loss_cls": 0.21885, "acc": 92.54126, "loss_bbox": 0.25029, "loss_mask": 0.24385, "loss": 0.79518, "time": 0.40998} -{"mode": "train", "epoch": 5, "iter": 3400, "lr": 0.0002, "memory": 8424, "data_time": 0.05215, "loss_rpn_cls": 0.03288, "loss_rpn_bbox": 0.05339, "loss_cls": 0.22525, "acc": 92.47705, "loss_bbox": 0.24461, "loss_mask": 0.24615, "loss": 0.80227, "time": 0.43062} -{"mode": "train", "epoch": 5, "iter": 3450, "lr": 0.0002, "memory": 8424, "data_time": 0.0427, "loss_rpn_cls": 0.03105, "loss_rpn_bbox": 0.05194, "loss_cls": 0.22915, "acc": 92.34912, "loss_bbox": 0.25523, "loss_mask": 0.24963, "loss": 0.817, "time": 0.41095} -{"mode": "train", "epoch": 5, "iter": 3500, "lr": 0.0002, "memory": 8424, "data_time": 0.04177, "loss_rpn_cls": 0.03432, "loss_rpn_bbox": 0.05325, "loss_cls": 0.23316, "acc": 92.20459, "loss_bbox": 0.25546, "loss_mask": 0.24844, "loss": 0.82463, "time": 0.40967} -{"mode": "train", "epoch": 5, "iter": 3550, "lr": 0.0002, "memory": 8424, "data_time": 0.05645, "loss_rpn_cls": 0.0322, "loss_rpn_bbox": 0.05422, "loss_cls": 0.23662, "acc": 92.16528, "loss_bbox": 0.25432, "loss_mask": 0.25345, "loss": 0.8308, "time": 0.408} -{"mode": "train", "epoch": 5, "iter": 3600, "lr": 0.0002, "memory": 8424, "data_time": 0.04682, "loss_rpn_cls": 0.03436, "loss_rpn_bbox": 0.05578, "loss_cls": 0.22741, "acc": 92.22021, "loss_bbox": 0.25401, "loss_mask": 0.25009, "loss": 0.82165, "time": 0.41371} -{"mode": "train", "epoch": 5, "iter": 3650, "lr": 0.0002, "memory": 8424, "data_time": 0.04632, "loss_rpn_cls": 0.0328, "loss_rpn_bbox": 0.05429, "loss_cls": 0.21836, "acc": 92.59595, "loss_bbox": 0.24945, "loss_mask": 0.2401, "loss": 0.79501, "time": 0.40316} -{"mode": "train", "epoch": 5, "iter": 3700, "lr": 0.0002, "memory": 8424, "data_time": 0.03958, "loss_rpn_cls": 0.0305, "loss_rpn_bbox": 0.05119, "loss_cls": 0.22486, "acc": 92.4646, "loss_bbox": 0.24694, "loss_mask": 0.24495, "loss": 0.79844, "time": 0.46756} -{"mode": "train", "epoch": 5, "iter": 3750, "lr": 0.0002, "memory": 8424, "data_time": 0.0529, "loss_rpn_cls": 0.03199, "loss_rpn_bbox": 0.05154, "loss_cls": 0.22796, "acc": 92.24097, "loss_bbox": 0.24294, "loss_mask": 0.24134, "loss": 0.79577, "time": 0.41227} -{"mode": "train", "epoch": 5, "iter": 
3800, "lr": 0.0002, "memory": 8424, "data_time": 0.04832, "loss_rpn_cls": 0.02815, "loss_rpn_bbox": 0.05056, "loss_cls": 0.23348, "acc": 92.24707, "loss_bbox": 0.24766, "loss_mask": 0.24585, "loss": 0.80569, "time": 0.39514} -{"mode": "train", "epoch": 5, "iter": 3850, "lr": 0.0002, "memory": 8424, "data_time": 0.04363, "loss_rpn_cls": 0.02867, "loss_rpn_bbox": 0.04921, "loss_cls": 0.2229, "acc": 92.55078, "loss_bbox": 0.24231, "loss_mask": 0.23895, "loss": 0.78205, "time": 0.40104} -{"mode": "train", "epoch": 5, "iter": 3900, "lr": 0.0002, "memory": 8424, "data_time": 0.05196, "loss_rpn_cls": 0.03119, "loss_rpn_bbox": 0.05023, "loss_cls": 0.2214, "acc": 92.41626, "loss_bbox": 0.2453, "loss_mask": 0.24817, "loss": 0.7963, "time": 0.40801} -{"mode": "train", "epoch": 5, "iter": 3950, "lr": 0.0002, "memory": 8424, "data_time": 0.04972, "loss_rpn_cls": 0.03133, "loss_rpn_bbox": 0.04917, "loss_cls": 0.22204, "acc": 92.56201, "loss_bbox": 0.24697, "loss_mask": 0.24999, "loss": 0.79949, "time": 0.40757} -{"mode": "train", "epoch": 5, "iter": 4000, "lr": 0.0002, "memory": 8424, "data_time": 0.05105, "loss_rpn_cls": 0.02865, "loss_rpn_bbox": 0.05091, "loss_cls": 0.22024, "acc": 92.45166, "loss_bbox": 0.24613, "loss_mask": 0.24046, "loss": 0.78639, "time": 0.4056} -{"mode": "train", "epoch": 5, "iter": 4050, "lr": 0.0002, "memory": 8424, "data_time": 0.04382, "loss_rpn_cls": 0.03325, "loss_rpn_bbox": 0.05475, "loss_cls": 0.2237, "acc": 92.45386, "loss_bbox": 0.24773, "loss_mask": 0.23911, "loss": 0.79853, "time": 0.42023} -{"mode": "train", "epoch": 5, "iter": 4100, "lr": 0.0002, "memory": 8424, "data_time": 0.05279, "loss_rpn_cls": 0.03191, "loss_rpn_bbox": 0.05408, "loss_cls": 0.23299, "acc": 92.17188, "loss_bbox": 0.261, "loss_mask": 0.25029, "loss": 0.83027, "time": 0.47342} -{"mode": "train", "epoch": 5, "iter": 4150, "lr": 0.0002, "memory": 8424, "data_time": 0.04295, "loss_rpn_cls": 0.03144, "loss_rpn_bbox": 0.04943, "loss_cls": 0.22878, "acc": 92.47681, "loss_bbox": 0.24475, "loss_mask": 0.2418, "loss": 0.79621, "time": 0.39584} -{"mode": "train", "epoch": 5, "iter": 4200, "lr": 0.0002, "memory": 8424, "data_time": 0.04394, "loss_rpn_cls": 0.0348, "loss_rpn_bbox": 0.05412, "loss_cls": 0.22533, "acc": 92.5564, "loss_bbox": 0.24539, "loss_mask": 0.25089, "loss": 0.81054, "time": 0.41637} -{"mode": "train", "epoch": 5, "iter": 4250, "lr": 0.0002, "memory": 8424, "data_time": 0.04702, "loss_rpn_cls": 0.0282, "loss_rpn_bbox": 0.05107, "loss_cls": 0.22406, "acc": 92.48828, "loss_bbox": 0.24814, "loss_mask": 0.24385, "loss": 0.79531, "time": 0.41867} -{"mode": "train", "epoch": 5, "iter": 4300, "lr": 0.0002, "memory": 8424, "data_time": 0.04137, "loss_rpn_cls": 0.02746, "loss_rpn_bbox": 0.04885, "loss_cls": 0.21882, "acc": 92.71118, "loss_bbox": 0.23813, "loss_mask": 0.2481, "loss": 0.78136, "time": 0.3981} -{"mode": "train", "epoch": 5, "iter": 4350, "lr": 0.0002, "memory": 8424, "data_time": 0.05638, "loss_rpn_cls": 0.03103, "loss_rpn_bbox": 0.05245, "loss_cls": 0.22277, "acc": 92.36938, "loss_bbox": 0.25133, "loss_mask": 0.24424, "loss": 0.80182, "time": 0.41287} -{"mode": "train", "epoch": 5, "iter": 4400, "lr": 0.0002, "memory": 8424, "data_time": 0.04455, "loss_rpn_cls": 0.03194, "loss_rpn_bbox": 0.05467, "loss_cls": 0.2374, "acc": 91.87524, "loss_bbox": 0.26364, "loss_mask": 0.24556, "loss": 0.83322, "time": 0.418} -{"mode": "train", "epoch": 5, "iter": 4450, "lr": 0.0002, "memory": 8424, "data_time": 0.05021, "loss_rpn_cls": 0.02871, "loss_rpn_bbox": 0.04938, "loss_cls": 0.21333, "acc": 
92.66968, "loss_bbox": 0.23674, "loss_mask": 0.23187, "loss": 0.76003, "time": 0.45763} -{"mode": "train", "epoch": 5, "iter": 4500, "lr": 0.0002, "memory": 8424, "data_time": 0.04819, "loss_rpn_cls": 0.02921, "loss_rpn_bbox": 0.04594, "loss_cls": 0.2158, "acc": 92.61157, "loss_bbox": 0.23948, "loss_mask": 0.23739, "loss": 0.76781, "time": 0.40234} -{"mode": "train", "epoch": 5, "iter": 4550, "lr": 0.0002, "memory": 8424, "data_time": 0.04909, "loss_rpn_cls": 0.02964, "loss_rpn_bbox": 0.04977, "loss_cls": 0.22644, "acc": 92.27344, "loss_bbox": 0.25396, "loss_mask": 0.24989, "loss": 0.8097, "time": 0.40958} -{"mode": "train", "epoch": 5, "iter": 4600, "lr": 0.0002, "memory": 8424, "data_time": 0.04861, "loss_rpn_cls": 0.03, "loss_rpn_bbox": 0.04903, "loss_cls": 0.21752, "acc": 92.74658, "loss_bbox": 0.24452, "loss_mask": 0.23946, "loss": 0.78053, "time": 0.40909} -{"mode": "train", "epoch": 5, "iter": 4650, "lr": 0.0002, "memory": 8424, "data_time": 0.04966, "loss_rpn_cls": 0.03114, "loss_rpn_bbox": 0.05055, "loss_cls": 0.22639, "acc": 92.4978, "loss_bbox": 0.24252, "loss_mask": 0.24971, "loss": 0.80031, "time": 0.40234} -{"mode": "train", "epoch": 5, "iter": 4700, "lr": 0.0002, "memory": 8424, "data_time": 0.04549, "loss_rpn_cls": 0.03393, "loss_rpn_bbox": 0.05327, "loss_cls": 0.23066, "acc": 92.2124, "loss_bbox": 0.24987, "loss_mask": 0.24798, "loss": 0.8157, "time": 0.41288} -{"mode": "train", "epoch": 5, "iter": 4750, "lr": 0.0002, "memory": 8424, "data_time": 0.048, "loss_rpn_cls": 0.03256, "loss_rpn_bbox": 0.05382, "loss_cls": 0.23104, "acc": 92.20776, "loss_bbox": 0.25148, "loss_mask": 0.25351, "loss": 0.82241, "time": 0.41516} -{"mode": "train", "epoch": 5, "iter": 4800, "lr": 0.0002, "memory": 8424, "data_time": 0.04185, "loss_rpn_cls": 0.03132, "loss_rpn_bbox": 0.04752, "loss_cls": 0.23332, "acc": 92.32495, "loss_bbox": 0.25042, "loss_mask": 0.24108, "loss": 0.80367, "time": 0.40466} -{"mode": "train", "epoch": 5, "iter": 4850, "lr": 0.0002, "memory": 8424, "data_time": 0.04379, "loss_rpn_cls": 0.03293, "loss_rpn_bbox": 0.05599, "loss_cls": 0.2381, "acc": 92.00317, "loss_bbox": 0.26123, "loss_mask": 0.25539, "loss": 0.84364, "time": 0.41324} -{"mode": "train", "epoch": 5, "iter": 4900, "lr": 0.0002, "memory": 8424, "data_time": 0.04535, "loss_rpn_cls": 0.03028, "loss_rpn_bbox": 0.05323, "loss_cls": 0.21784, "acc": 92.71143, "loss_bbox": 0.23848, "loss_mask": 0.23781, "loss": 0.77764, "time": 0.40097} -{"mode": "train", "epoch": 5, "iter": 4950, "lr": 0.0002, "memory": 8424, "data_time": 0.03818, "loss_rpn_cls": 0.03039, "loss_rpn_bbox": 0.05381, "loss_cls": 0.23206, "acc": 92.19897, "loss_bbox": 0.25125, "loss_mask": 0.24971, "loss": 0.81722, "time": 0.47059} -{"mode": "train", "epoch": 5, "iter": 5000, "lr": 0.0002, "memory": 8424, "data_time": 0.04067, "loss_rpn_cls": 0.02864, "loss_rpn_bbox": 0.04995, "loss_cls": 0.22004, "acc": 92.55273, "loss_bbox": 0.23776, "loss_mask": 0.24577, "loss": 0.78215, "time": 0.39791} -{"mode": "train", "epoch": 5, "iter": 5050, "lr": 0.0002, "memory": 8424, "data_time": 0.03945, "loss_rpn_cls": 0.02848, "loss_rpn_bbox": 0.05262, "loss_cls": 0.22411, "acc": 92.521, "loss_bbox": 0.23914, "loss_mask": 0.23818, "loss": 0.78254, "time": 0.40806} -{"mode": "train", "epoch": 5, "iter": 5100, "lr": 0.0002, "memory": 8424, "data_time": 0.03883, "loss_rpn_cls": 0.03312, "loss_rpn_bbox": 0.05237, "loss_cls": 0.22087, "acc": 92.5979, "loss_bbox": 0.24357, "loss_mask": 0.25009, "loss": 0.80002, "time": 0.40298} -{"mode": "train", "epoch": 5, "iter": 5150, 
"lr": 0.0002, "memory": 8424, "data_time": 0.05761, "loss_rpn_cls": 0.02975, "loss_rpn_bbox": 0.05063, "loss_cls": 0.21693, "acc": 92.60303, "loss_bbox": 0.24772, "loss_mask": 0.24368, "loss": 0.7887, "time": 0.42673} -{"mode": "train", "epoch": 5, "iter": 5200, "lr": 0.0002, "memory": 8424, "data_time": 0.05788, "loss_rpn_cls": 0.02908, "loss_rpn_bbox": 0.05094, "loss_cls": 0.2221, "acc": 92.48657, "loss_bbox": 0.24444, "loss_mask": 0.24469, "loss": 0.79125, "time": 0.41943} -{"mode": "train", "epoch": 5, "iter": 5250, "lr": 0.0002, "memory": 8424, "data_time": 0.04139, "loss_rpn_cls": 0.02825, "loss_rpn_bbox": 0.04885, "loss_cls": 0.21593, "acc": 92.69946, "loss_bbox": 0.23905, "loss_mask": 0.24651, "loss": 0.77859, "time": 0.40141} -{"mode": "train", "epoch": 5, "iter": 5300, "lr": 0.0002, "memory": 8424, "data_time": 0.04309, "loss_rpn_cls": 0.03198, "loss_rpn_bbox": 0.05256, "loss_cls": 0.22495, "acc": 92.39844, "loss_bbox": 0.24616, "loss_mask": 0.24718, "loss": 0.80283, "time": 0.40411} -{"mode": "train", "epoch": 5, "iter": 5350, "lr": 0.0002, "memory": 8424, "data_time": 0.04367, "loss_rpn_cls": 0.02772, "loss_rpn_bbox": 0.04959, "loss_cls": 0.21425, "acc": 92.72974, "loss_bbox": 0.23944, "loss_mask": 0.24087, "loss": 0.77187, "time": 0.409} -{"mode": "train", "epoch": 5, "iter": 5400, "lr": 0.0002, "memory": 8424, "data_time": 0.04899, "loss_rpn_cls": 0.03118, "loss_rpn_bbox": 0.05161, "loss_cls": 0.23115, "acc": 92.26343, "loss_bbox": 0.25082, "loss_mask": 0.24868, "loss": 0.81345, "time": 0.40604} -{"mode": "train", "epoch": 5, "iter": 5450, "lr": 0.0002, "memory": 8424, "data_time": 0.04263, "loss_rpn_cls": 0.03356, "loss_rpn_bbox": 0.05353, "loss_cls": 0.2186, "acc": 92.50342, "loss_bbox": 0.24847, "loss_mask": 0.24558, "loss": 0.79974, "time": 0.40241} -{"mode": "train", "epoch": 5, "iter": 5500, "lr": 0.0002, "memory": 8424, "data_time": 0.05979, "loss_rpn_cls": 0.03477, "loss_rpn_bbox": 0.0528, "loss_cls": 0.22808, "acc": 92.34082, "loss_bbox": 0.24797, "loss_mask": 0.24506, "loss": 0.80869, "time": 0.41694} -{"mode": "train", "epoch": 5, "iter": 5550, "lr": 0.0002, "memory": 8424, "data_time": 0.05005, "loss_rpn_cls": 0.03129, "loss_rpn_bbox": 0.0537, "loss_cls": 0.22813, "acc": 92.32153, "loss_bbox": 0.25361, "loss_mask": 0.24336, "loss": 0.81008, "time": 0.41427} -{"mode": "train", "epoch": 5, "iter": 5600, "lr": 0.0002, "memory": 8424, "data_time": 0.04417, "loss_rpn_cls": 0.03087, "loss_rpn_bbox": 0.05274, "loss_cls": 0.22205, "acc": 92.46265, "loss_bbox": 0.25115, "loss_mask": 0.24826, "loss": 0.80508, "time": 0.41594} -{"mode": "train", "epoch": 5, "iter": 5650, "lr": 0.0002, "memory": 8424, "data_time": 0.05171, "loss_rpn_cls": 0.03154, "loss_rpn_bbox": 0.05509, "loss_cls": 0.23451, "acc": 92.13696, "loss_bbox": 0.25347, "loss_mask": 0.24692, "loss": 0.82155, "time": 0.40358} -{"mode": "train", "epoch": 5, "iter": 5700, "lr": 0.0002, "memory": 8424, "data_time": 0.05008, "loss_rpn_cls": 0.03064, "loss_rpn_bbox": 0.05041, "loss_cls": 0.21572, "acc": 92.60474, "loss_bbox": 0.24589, "loss_mask": 0.24549, "loss": 0.78815, "time": 0.4065} -{"mode": "train", "epoch": 5, "iter": 5750, "lr": 0.0002, "memory": 8424, "data_time": 0.05023, "loss_rpn_cls": 0.03138, "loss_rpn_bbox": 0.04926, "loss_cls": 0.22451, "acc": 92.34863, "loss_bbox": 0.24861, "loss_mask": 0.24198, "loss": 0.79574, "time": 0.41262} -{"mode": "train", "epoch": 5, "iter": 5800, "lr": 0.0002, "memory": 8424, "data_time": 0.04856, "loss_rpn_cls": 0.0319, "loss_rpn_bbox": 0.05289, "loss_cls": 0.2186, "acc": 
92.79443, "loss_bbox": 0.23842, "loss_mask": 0.24667, "loss": 0.78849, "time": 0.41649} -{"mode": "train", "epoch": 5, "iter": 5850, "lr": 0.0002, "memory": 8424, "data_time": 0.05902, "loss_rpn_cls": 0.02975, "loss_rpn_bbox": 0.05213, "loss_cls": 0.21635, "acc": 92.60718, "loss_bbox": 0.24896, "loss_mask": 0.24676, "loss": 0.79394, "time": 0.41175} -{"mode": "train", "epoch": 5, "iter": 5900, "lr": 0.0002, "memory": 8424, "data_time": 0.04836, "loss_rpn_cls": 0.0349, "loss_rpn_bbox": 0.05553, "loss_cls": 0.22897, "acc": 92.20605, "loss_bbox": 0.2554, "loss_mask": 0.25581, "loss": 0.83061, "time": 0.3994} -{"mode": "train", "epoch": 5, "iter": 5950, "lr": 0.0002, "memory": 8424, "data_time": 0.04094, "loss_rpn_cls": 0.02973, "loss_rpn_bbox": 0.0505, "loss_cls": 0.2299, "acc": 92.30225, "loss_bbox": 0.24454, "loss_mask": 0.24588, "loss": 0.80055, "time": 0.38482} -{"mode": "train", "epoch": 5, "iter": 6000, "lr": 0.0002, "memory": 8424, "data_time": 0.03901, "loss_rpn_cls": 0.03387, "loss_rpn_bbox": 0.05165, "loss_cls": 0.22449, "acc": 92.40283, "loss_bbox": 0.24245, "loss_mask": 0.24615, "loss": 0.7986, "time": 0.39566} -{"mode": "train", "epoch": 5, "iter": 6050, "lr": 0.0002, "memory": 8424, "data_time": 0.04757, "loss_rpn_cls": 0.03113, "loss_rpn_bbox": 0.05085, "loss_cls": 0.21998, "acc": 92.45386, "loss_bbox": 0.24132, "loss_mask": 0.24699, "loss": 0.79028, "time": 0.39547} -{"mode": "train", "epoch": 5, "iter": 6100, "lr": 0.0002, "memory": 8424, "data_time": 0.04559, "loss_rpn_cls": 0.03093, "loss_rpn_bbox": 0.05231, "loss_cls": 0.22152, "acc": 92.45752, "loss_bbox": 0.24431, "loss_mask": 0.24225, "loss": 0.79132, "time": 0.39655} -{"mode": "train", "epoch": 5, "iter": 6150, "lr": 0.0002, "memory": 8424, "data_time": 0.04977, "loss_rpn_cls": 0.03132, "loss_rpn_bbox": 0.05261, "loss_cls": 0.2261, "acc": 92.33643, "loss_bbox": 0.24637, "loss_mask": 0.24783, "loss": 0.80422, "time": 0.41471} -{"mode": "train", "epoch": 5, "iter": 6200, "lr": 0.0002, "memory": 8424, "data_time": 0.03905, "loss_rpn_cls": 0.03052, "loss_rpn_bbox": 0.05148, "loss_cls": 0.22712, "acc": 92.40796, "loss_bbox": 0.24776, "loss_mask": 0.25038, "loss": 0.80727, "time": 0.40257} -{"mode": "train", "epoch": 5, "iter": 6250, "lr": 0.0002, "memory": 8424, "data_time": 0.04162, "loss_rpn_cls": 0.03022, "loss_rpn_bbox": 0.05226, "loss_cls": 0.22268, "acc": 92.38306, "loss_bbox": 0.24714, "loss_mask": 0.24199, "loss": 0.79429, "time": 0.411} -{"mode": "train", "epoch": 5, "iter": 6300, "lr": 0.0002, "memory": 8424, "data_time": 0.04429, "loss_rpn_cls": 0.03021, "loss_rpn_bbox": 0.04705, "loss_cls": 0.22363, "acc": 92.55811, "loss_bbox": 0.2419, "loss_mask": 0.24259, "loss": 0.78538, "time": 0.4067} -{"mode": "train", "epoch": 5, "iter": 6350, "lr": 0.0002, "memory": 8424, "data_time": 0.04938, "loss_rpn_cls": 0.03279, "loss_rpn_bbox": 0.05239, "loss_cls": 0.22788, "acc": 92.1958, "loss_bbox": 0.25552, "loss_mask": 0.24353, "loss": 0.81211, "time": 0.41989} -{"mode": "train", "epoch": 5, "iter": 6400, "lr": 0.0002, "memory": 8424, "data_time": 0.04251, "loss_rpn_cls": 0.03383, "loss_rpn_bbox": 0.05235, "loss_cls": 0.23853, "acc": 92.03662, "loss_bbox": 0.25717, "loss_mask": 0.24688, "loss": 0.82874, "time": 0.41403} -{"mode": "train", "epoch": 5, "iter": 6450, "lr": 0.0002, "memory": 8424, "data_time": 0.04244, "loss_rpn_cls": 0.02978, "loss_rpn_bbox": 0.05108, "loss_cls": 0.22892, "acc": 92.37061, "loss_bbox": 0.24396, "loss_mask": 0.2403, "loss": 0.79404, "time": 0.42466} -{"mode": "train", "epoch": 5, "iter": 6500, 
"lr": 0.0002, "memory": 8424, "data_time": 0.04719, "loss_rpn_cls": 0.03034, "loss_rpn_bbox": 0.0499, "loss_cls": 0.22418, "acc": 92.4646, "loss_bbox": 0.24492, "loss_mask": 0.24639, "loss": 0.79573, "time": 0.41351} -{"mode": "train", "epoch": 5, "iter": 6550, "lr": 0.0002, "memory": 8424, "data_time": 0.03881, "loss_rpn_cls": 0.0287, "loss_rpn_bbox": 0.04909, "loss_cls": 0.21763, "acc": 92.71338, "loss_bbox": 0.24006, "loss_mask": 0.24652, "loss": 0.78201, "time": 0.40532} -{"mode": "train", "epoch": 5, "iter": 6600, "lr": 0.0002, "memory": 8424, "data_time": 0.05617, "loss_rpn_cls": 0.03074, "loss_rpn_bbox": 0.04972, "loss_cls": 0.23117, "acc": 92.18262, "loss_bbox": 0.25809, "loss_mask": 0.24749, "loss": 0.81722, "time": 0.41629} -{"mode": "train", "epoch": 5, "iter": 6650, "lr": 0.0002, "memory": 8424, "data_time": 0.04536, "loss_rpn_cls": 0.03055, "loss_rpn_bbox": 0.0508, "loss_cls": 0.22832, "acc": 92.2749, "loss_bbox": 0.25359, "loss_mask": 0.24678, "loss": 0.81004, "time": 0.412} -{"mode": "train", "epoch": 5, "iter": 6700, "lr": 0.0002, "memory": 8424, "data_time": 0.04736, "loss_rpn_cls": 0.03177, "loss_rpn_bbox": 0.05002, "loss_cls": 0.21743, "acc": 92.66626, "loss_bbox": 0.23972, "loss_mask": 0.24676, "loss": 0.7857, "time": 0.40097} -{"mode": "train", "epoch": 5, "iter": 6750, "lr": 0.0002, "memory": 8424, "data_time": 0.04866, "loss_rpn_cls": 0.03239, "loss_rpn_bbox": 0.05422, "loss_cls": 0.22993, "acc": 92.27417, "loss_bbox": 0.24832, "loss_mask": 0.24261, "loss": 0.80747, "time": 0.45695} -{"mode": "train", "epoch": 5, "iter": 6800, "lr": 0.0002, "memory": 8424, "data_time": 0.04096, "loss_rpn_cls": 0.02801, "loss_rpn_bbox": 0.04812, "loss_cls": 0.21199, "acc": 92.77271, "loss_bbox": 0.2373, "loss_mask": 0.24027, "loss": 0.76569, "time": 0.41488} -{"mode": "train", "epoch": 5, "iter": 6850, "lr": 0.0002, "memory": 8424, "data_time": 0.05291, "loss_rpn_cls": 0.03062, "loss_rpn_bbox": 0.05169, "loss_cls": 0.22998, "acc": 92.27368, "loss_bbox": 0.25243, "loss_mask": 0.25056, "loss": 0.81528, "time": 0.46837} -{"mode": "train", "epoch": 5, "iter": 6900, "lr": 0.0002, "memory": 8424, "data_time": 0.03853, "loss_rpn_cls": 0.03016, "loss_rpn_bbox": 0.04923, "loss_cls": 0.22185, "acc": 92.58398, "loss_bbox": 0.23866, "loss_mask": 0.24301, "loss": 0.7829, "time": 0.41165} -{"mode": "train", "epoch": 5, "iter": 6950, "lr": 0.0002, "memory": 8424, "data_time": 0.0402, "loss_rpn_cls": 0.03063, "loss_rpn_bbox": 0.05177, "loss_cls": 0.23321, "acc": 92.11523, "loss_bbox": 0.25648, "loss_mask": 0.24713, "loss": 0.81922, "time": 0.39893} -{"mode": "train", "epoch": 5, "iter": 7000, "lr": 0.0002, "memory": 8424, "data_time": 0.04472, "loss_rpn_cls": 0.02963, "loss_rpn_bbox": 0.05318, "loss_cls": 0.21877, "acc": 92.5752, "loss_bbox": 0.24493, "loss_mask": 0.24718, "loss": 0.79368, "time": 0.40396} -{"mode": "train", "epoch": 5, "iter": 7050, "lr": 0.0002, "memory": 8424, "data_time": 0.04642, "loss_rpn_cls": 0.02903, "loss_rpn_bbox": 0.04812, "loss_cls": 0.21791, "acc": 92.70923, "loss_bbox": 0.23805, "loss_mask": 0.24524, "loss": 0.77836, "time": 0.3886} -{"mode": "train", "epoch": 5, "iter": 7100, "lr": 0.0002, "memory": 8424, "data_time": 0.05441, "loss_rpn_cls": 0.03184, "loss_rpn_bbox": 0.05078, "loss_cls": 0.22853, "acc": 92.33276, "loss_bbox": 0.24729, "loss_mask": 0.24256, "loss": 0.80099, "time": 0.4069} -{"mode": "train", "epoch": 5, "iter": 7150, "lr": 0.0002, "memory": 8424, "data_time": 0.04078, "loss_rpn_cls": 0.03218, "loss_rpn_bbox": 0.05008, "loss_cls": 0.21628, "acc": 
92.604, "loss_bbox": 0.24193, "loss_mask": 0.24247, "loss": 0.78294, "time": 0.40083} -{"mode": "train", "epoch": 5, "iter": 7200, "lr": 0.0002, "memory": 8424, "data_time": 0.04302, "loss_rpn_cls": 0.0288, "loss_rpn_bbox": 0.04549, "loss_cls": 0.21381, "acc": 92.92749, "loss_bbox": 0.22758, "loss_mask": 0.24016, "loss": 0.75585, "time": 0.38574} -{"mode": "train", "epoch": 5, "iter": 7250, "lr": 0.0002, "memory": 8424, "data_time": 0.05618, "loss_rpn_cls": 0.02916, "loss_rpn_bbox": 0.05057, "loss_cls": 0.22838, "acc": 92.35254, "loss_bbox": 0.25133, "loss_mask": 0.24548, "loss": 0.80491, "time": 0.40638} -{"mode": "train", "epoch": 5, "iter": 7300, "lr": 0.0002, "memory": 8424, "data_time": 0.04231, "loss_rpn_cls": 0.02962, "loss_rpn_bbox": 0.05013, "loss_cls": 0.22127, "acc": 92.59302, "loss_bbox": 0.24328, "loss_mask": 0.24454, "loss": 0.78884, "time": 0.44691} -{"mode": "val", "epoch": 5, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3534, "bbox_mAP_50": 0.5715, "bbox_mAP_75": 0.3836, "bbox_mAP_s": 0.2015, "bbox_mAP_m": 0.3909, "bbox_mAP_l": 0.4623, "bbox_mAP_copypaste": "0.3534 0.5715 0.3836 0.2015 0.3909 0.4623", "segm_mAP": 0.3375, "segm_mAP_50": 0.5466, "segm_mAP_75": 0.3631, "segm_mAP_s": 0.1511, "segm_mAP_m": 0.3686, "segm_mAP_l": 0.5012, "segm_mAP_copypaste": "0.3375 0.5466 0.3631 0.1511 0.3686 0.5012"} -{"mode": "train", "epoch": 6, "iter": 50, "lr": 0.0002, "memory": 8424, "data_time": 0.12505, "loss_rpn_cls": 0.0301, "loss_rpn_bbox": 0.05133, "loss_cls": 0.21715, "acc": 92.52881, "loss_bbox": 0.24957, "loss_mask": 0.24565, "loss": 0.79379, "time": 0.50484} -{"mode": "train", "epoch": 6, "iter": 100, "lr": 0.0002, "memory": 8424, "data_time": 0.06022, "loss_rpn_cls": 0.02657, "loss_rpn_bbox": 0.05011, "loss_cls": 0.22189, "acc": 92.29492, "loss_bbox": 0.25802, "loss_mask": 0.24333, "loss": 0.79992, "time": 0.42096} -{"mode": "train", "epoch": 6, "iter": 150, "lr": 0.0002, "memory": 8424, "data_time": 0.04554, "loss_rpn_cls": 0.0272, "loss_rpn_bbox": 0.04953, "loss_cls": 0.21391, "acc": 92.62891, "loss_bbox": 0.2423, "loss_mask": 0.2388, "loss": 0.77173, "time": 0.42626} -{"mode": "train", "epoch": 6, "iter": 200, "lr": 0.0002, "memory": 8424, "data_time": 0.04962, "loss_rpn_cls": 0.02665, "loss_rpn_bbox": 0.05067, "loss_cls": 0.20517, "acc": 92.7666, "loss_bbox": 0.23845, "loss_mask": 0.23955, "loss": 0.76049, "time": 0.41573} -{"mode": "train", "epoch": 6, "iter": 250, "lr": 0.0002, "memory": 8424, "data_time": 0.04964, "loss_rpn_cls": 0.0302, "loss_rpn_bbox": 0.04997, "loss_cls": 0.20672, "acc": 92.93481, "loss_bbox": 0.23663, "loss_mask": 0.23877, "loss": 0.76228, "time": 0.47343} -{"mode": "train", "epoch": 6, "iter": 300, "lr": 0.0002, "memory": 8424, "data_time": 0.04188, "loss_rpn_cls": 0.02774, "loss_rpn_bbox": 0.05161, "loss_cls": 0.21225, "acc": 92.5249, "loss_bbox": 0.24706, "loss_mask": 0.23984, "loss": 0.77849, "time": 0.41917} -{"mode": "train", "epoch": 6, "iter": 350, "lr": 0.0002, "memory": 8424, "data_time": 0.03868, "loss_rpn_cls": 0.02741, "loss_rpn_bbox": 0.05029, "loss_cls": 0.2065, "acc": 92.78784, "loss_bbox": 0.24424, "loss_mask": 0.23904, "loss": 0.76748, "time": 0.42272} -{"mode": "train", "epoch": 6, "iter": 400, "lr": 0.0002, "memory": 8424, "data_time": 0.04561, "loss_rpn_cls": 0.03086, "loss_rpn_bbox": 0.05261, "loss_cls": 0.21996, "acc": 92.41162, "loss_bbox": 0.25096, "loss_mask": 0.23691, "loss": 0.79131, "time": 0.41601} -{"mode": "train", "epoch": 6, "iter": 450, "lr": 0.0002, "memory": 8424, "data_time": 0.04255, "loss_rpn_cls": 0.02956, 
"loss_rpn_bbox": 0.05086, "loss_cls": 0.21792, "acc": 92.57812, "loss_bbox": 0.24133, "loss_mask": 0.23092, "loss": 0.7706, "time": 0.41776} -{"mode": "train", "epoch": 6, "iter": 500, "lr": 0.0002, "memory": 8424, "data_time": 0.04929, "loss_rpn_cls": 0.02945, "loss_rpn_bbox": 0.0513, "loss_cls": 0.21014, "acc": 92.74414, "loss_bbox": 0.23809, "loss_mask": 0.2405, "loss": 0.76947, "time": 0.4196} -{"mode": "train", "epoch": 6, "iter": 550, "lr": 0.0002, "memory": 8424, "data_time": 0.03926, "loss_rpn_cls": 0.02676, "loss_rpn_bbox": 0.04638, "loss_cls": 0.20784, "acc": 92.63452, "loss_bbox": 0.24301, "loss_mask": 0.23812, "loss": 0.76209, "time": 0.40374} -{"mode": "train", "epoch": 6, "iter": 600, "lr": 0.0002, "memory": 8424, "data_time": 0.04066, "loss_rpn_cls": 0.03115, "loss_rpn_bbox": 0.0499, "loss_cls": 0.21371, "acc": 92.69678, "loss_bbox": 0.24471, "loss_mask": 0.24499, "loss": 0.78446, "time": 0.41146} -{"mode": "train", "epoch": 6, "iter": 650, "lr": 0.0002, "memory": 8424, "data_time": 0.06125, "loss_rpn_cls": 0.02776, "loss_rpn_bbox": 0.04922, "loss_cls": 0.20967, "acc": 92.66602, "loss_bbox": 0.24328, "loss_mask": 0.23628, "loss": 0.76621, "time": 0.41231} -{"mode": "train", "epoch": 6, "iter": 700, "lr": 0.0002, "memory": 8424, "data_time": 0.04276, "loss_rpn_cls": 0.02957, "loss_rpn_bbox": 0.0526, "loss_cls": 0.22673, "acc": 92.24829, "loss_bbox": 0.24997, "loss_mask": 0.24577, "loss": 0.80463, "time": 0.41165} -{"mode": "train", "epoch": 6, "iter": 750, "lr": 0.0002, "memory": 8424, "data_time": 0.0519, "loss_rpn_cls": 0.02863, "loss_rpn_bbox": 0.04945, "loss_cls": 0.22076, "acc": 92.48218, "loss_bbox": 0.24339, "loss_mask": 0.24065, "loss": 0.78289, "time": 0.39745} -{"mode": "train", "epoch": 6, "iter": 800, "lr": 0.0002, "memory": 8424, "data_time": 0.04819, "loss_rpn_cls": 0.02758, "loss_rpn_bbox": 0.05065, "loss_cls": 0.21428, "acc": 92.71582, "loss_bbox": 0.24339, "loss_mask": 0.2411, "loss": 0.777, "time": 0.4762} -{"mode": "train", "epoch": 6, "iter": 850, "lr": 0.0002, "memory": 8424, "data_time": 0.04887, "loss_rpn_cls": 0.02789, "loss_rpn_bbox": 0.04975, "loss_cls": 0.21322, "acc": 92.62866, "loss_bbox": 0.24396, "loss_mask": 0.24143, "loss": 0.77625, "time": 0.41257} -{"mode": "train", "epoch": 6, "iter": 900, "lr": 0.0002, "memory": 8424, "data_time": 0.05085, "loss_rpn_cls": 0.02844, "loss_rpn_bbox": 0.05206, "loss_cls": 0.21392, "acc": 92.67505, "loss_bbox": 0.23946, "loss_mask": 0.23701, "loss": 0.7709, "time": 0.40842} -{"mode": "train", "epoch": 6, "iter": 950, "lr": 0.0002, "memory": 8424, "data_time": 0.04243, "loss_rpn_cls": 0.0294, "loss_rpn_bbox": 0.04986, "loss_cls": 0.21299, "acc": 92.94067, "loss_bbox": 0.23111, "loss_mask": 0.23624, "loss": 0.75961, "time": 0.3999} -{"mode": "train", "epoch": 6, "iter": 1000, "lr": 0.0002, "memory": 8424, "data_time": 0.04327, "loss_rpn_cls": 0.03233, "loss_rpn_bbox": 0.05536, "loss_cls": 0.23001, "acc": 92.29346, "loss_bbox": 0.25116, "loss_mask": 0.24597, "loss": 0.81483, "time": 0.42752} -{"mode": "train", "epoch": 6, "iter": 1050, "lr": 0.0002, "memory": 8424, "data_time": 0.04733, "loss_rpn_cls": 0.02743, "loss_rpn_bbox": 0.04979, "loss_cls": 0.21274, "acc": 92.76367, "loss_bbox": 0.23672, "loss_mask": 0.23419, "loss": 0.76087, "time": 0.41187} -{"mode": "train", "epoch": 6, "iter": 1100, "lr": 0.0002, "memory": 8424, "data_time": 0.04724, "loss_rpn_cls": 0.02882, "loss_rpn_bbox": 0.04962, "loss_cls": 0.22026, "acc": 92.56641, "loss_bbox": 0.24281, "loss_mask": 0.24034, "loss": 0.78185, "time": 0.41305} 
-{"mode": "train", "epoch": 6, "iter": 1150, "lr": 0.0002, "memory": 8424, "data_time": 0.04093, "loss_rpn_cls": 0.02885, "loss_rpn_bbox": 0.05177, "loss_cls": 0.22438, "acc": 92.46411, "loss_bbox": 0.25037, "loss_mask": 0.24802, "loss": 0.80338, "time": 0.40659} -{"mode": "train", "epoch": 6, "iter": 1200, "lr": 0.0002, "memory": 8424, "data_time": 0.03923, "loss_rpn_cls": 0.02953, "loss_rpn_bbox": 0.05245, "loss_cls": 0.22251, "acc": 92.55347, "loss_bbox": 0.24014, "loss_mask": 0.24268, "loss": 0.78732, "time": 0.40616} -{"mode": "train", "epoch": 6, "iter": 1250, "lr": 0.0002, "memory": 8424, "data_time": 0.0446, "loss_rpn_cls": 0.02493, "loss_rpn_bbox": 0.04778, "loss_cls": 0.21508, "acc": 92.64844, "loss_bbox": 0.24106, "loss_mask": 0.24138, "loss": 0.77023, "time": 0.40383} -{"mode": "train", "epoch": 6, "iter": 1300, "lr": 0.0002, "memory": 8424, "data_time": 0.04849, "loss_rpn_cls": 0.0291, "loss_rpn_bbox": 0.0529, "loss_cls": 0.22204, "acc": 92.37988, "loss_bbox": 0.25029, "loss_mask": 0.24389, "loss": 0.79822, "time": 0.41478} -{"mode": "train", "epoch": 6, "iter": 1350, "lr": 0.0002, "memory": 8424, "data_time": 0.04087, "loss_rpn_cls": 0.02854, "loss_rpn_bbox": 0.05143, "loss_cls": 0.21571, "acc": 92.65869, "loss_bbox": 0.24502, "loss_mask": 0.2415, "loss": 0.78219, "time": 0.47818} -{"mode": "train", "epoch": 6, "iter": 1400, "lr": 0.0002, "memory": 8424, "data_time": 0.04711, "loss_rpn_cls": 0.02991, "loss_rpn_bbox": 0.04968, "loss_cls": 0.21701, "acc": 92.76855, "loss_bbox": 0.24178, "loss_mask": 0.24406, "loss": 0.78243, "time": 0.41686} -{"mode": "train", "epoch": 6, "iter": 1450, "lr": 0.0002, "memory": 8424, "data_time": 0.04093, "loss_rpn_cls": 0.02809, "loss_rpn_bbox": 0.05138, "loss_cls": 0.2136, "acc": 92.67529, "loss_bbox": 0.24448, "loss_mask": 0.24053, "loss": 0.77807, "time": 0.41539} -{"mode": "train", "epoch": 6, "iter": 1500, "lr": 0.0002, "memory": 8424, "data_time": 0.04865, "loss_rpn_cls": 0.02867, "loss_rpn_bbox": 0.04984, "loss_cls": 0.21377, "acc": 92.7168, "loss_bbox": 0.24066, "loss_mask": 0.24631, "loss": 0.77925, "time": 0.41186} -{"mode": "train", "epoch": 6, "iter": 1550, "lr": 0.0002, "memory": 8424, "data_time": 0.04111, "loss_rpn_cls": 0.02853, "loss_rpn_bbox": 0.04656, "loss_cls": 0.21922, "acc": 92.64844, "loss_bbox": 0.24139, "loss_mask": 0.24167, "loss": 0.77737, "time": 0.39542} -{"mode": "train", "epoch": 6, "iter": 1600, "lr": 0.0002, "memory": 8424, "data_time": 0.0562, "loss_rpn_cls": 0.03024, "loss_rpn_bbox": 0.05179, "loss_cls": 0.22304, "acc": 92.31982, "loss_bbox": 0.25119, "loss_mask": 0.24698, "loss": 0.80324, "time": 0.42487} -{"mode": "train", "epoch": 6, "iter": 1650, "lr": 0.0002, "memory": 8424, "data_time": 0.04408, "loss_rpn_cls": 0.0309, "loss_rpn_bbox": 0.05182, "loss_cls": 0.21684, "acc": 92.5415, "loss_bbox": 0.24656, "loss_mask": 0.24757, "loss": 0.79369, "time": 0.40962} -{"mode": "train", "epoch": 6, "iter": 1700, "lr": 0.0002, "memory": 8424, "data_time": 0.0477, "loss_rpn_cls": 0.02682, "loss_rpn_bbox": 0.04728, "loss_cls": 0.20476, "acc": 93.00488, "loss_bbox": 0.23342, "loss_mask": 0.23601, "loss": 0.74829, "time": 0.41468} -{"mode": "train", "epoch": 6, "iter": 1750, "lr": 0.0002, "memory": 8424, "data_time": 0.04567, "loss_rpn_cls": 0.02675, "loss_rpn_bbox": 0.04974, "loss_cls": 0.21262, "acc": 92.69873, "loss_bbox": 0.2389, "loss_mask": 0.24154, "loss": 0.76955, "time": 0.40403} -{"mode": "train", "epoch": 6, "iter": 1800, "lr": 0.0002, "memory": 8424, "data_time": 0.04395, "loss_rpn_cls": 0.02721, 
"loss_rpn_bbox": 0.05033, "loss_cls": 0.20757, "acc": 92.9126, "loss_bbox": 0.23057, "loss_mask": 0.23612, "loss": 0.7518, "time": 0.46862} -{"mode": "train", "epoch": 6, "iter": 1850, "lr": 0.0002, "memory": 8424, "data_time": 0.04177, "loss_rpn_cls": 0.0283, "loss_rpn_bbox": 0.05163, "loss_cls": 0.20757, "acc": 92.91797, "loss_bbox": 0.23247, "loss_mask": 0.23871, "loss": 0.75867, "time": 0.40099} -{"mode": "train", "epoch": 6, "iter": 1900, "lr": 0.0002, "memory": 8424, "data_time": 0.05274, "loss_rpn_cls": 0.03072, "loss_rpn_bbox": 0.04959, "loss_cls": 0.22959, "acc": 92.27173, "loss_bbox": 0.25378, "loss_mask": 0.24356, "loss": 0.80723, "time": 0.40912} -{"mode": "train", "epoch": 6, "iter": 1950, "lr": 0.0002, "memory": 8424, "data_time": 0.0454, "loss_rpn_cls": 0.029, "loss_rpn_bbox": 0.05114, "loss_cls": 0.21888, "acc": 92.64526, "loss_bbox": 0.24067, "loss_mask": 0.2388, "loss": 0.77848, "time": 0.41707} -{"mode": "train", "epoch": 6, "iter": 2000, "lr": 0.0002, "memory": 8424, "data_time": 0.04146, "loss_rpn_cls": 0.02993, "loss_rpn_bbox": 0.04988, "loss_cls": 0.21892, "acc": 92.59644, "loss_bbox": 0.24832, "loss_mask": 0.24231, "loss": 0.78936, "time": 0.40016} -{"mode": "train", "epoch": 6, "iter": 2050, "lr": 0.0002, "memory": 8424, "data_time": 0.05087, "loss_rpn_cls": 0.02871, "loss_rpn_bbox": 0.05314, "loss_cls": 0.2168, "acc": 92.65625, "loss_bbox": 0.23887, "loss_mask": 0.244, "loss": 0.78152, "time": 0.40054} -{"mode": "train", "epoch": 6, "iter": 2100, "lr": 0.0002, "memory": 8424, "data_time": 0.04178, "loss_rpn_cls": 0.03048, "loss_rpn_bbox": 0.05152, "loss_cls": 0.21592, "acc": 92.79688, "loss_bbox": 0.23803, "loss_mask": 0.23973, "loss": 0.77568, "time": 0.40084} -{"mode": "train", "epoch": 6, "iter": 2150, "lr": 0.0002, "memory": 8424, "data_time": 0.04465, "loss_rpn_cls": 0.02837, "loss_rpn_bbox": 0.04964, "loss_cls": 0.22251, "acc": 92.48828, "loss_bbox": 0.24087, "loss_mask": 0.24529, "loss": 0.78668, "time": 0.40423} -{"mode": "train", "epoch": 6, "iter": 2200, "lr": 0.0002, "memory": 8424, "data_time": 0.0454, "loss_rpn_cls": 0.02769, "loss_rpn_bbox": 0.05061, "loss_cls": 0.20926, "acc": 92.80054, "loss_bbox": 0.24012, "loss_mask": 0.24199, "loss": 0.76968, "time": 0.40752} -{"mode": "train", "epoch": 6, "iter": 2250, "lr": 0.0002, "memory": 8424, "data_time": 0.04861, "loss_rpn_cls": 0.03195, "loss_rpn_bbox": 0.05241, "loss_cls": 0.22329, "acc": 92.29199, "loss_bbox": 0.25553, "loss_mask": 0.24872, "loss": 0.8119, "time": 0.41158} -{"mode": "train", "epoch": 6, "iter": 2300, "lr": 0.0002, "memory": 8424, "data_time": 0.04565, "loss_rpn_cls": 0.02872, "loss_rpn_bbox": 0.04947, "loss_cls": 0.21102, "acc": 92.82422, "loss_bbox": 0.24188, "loss_mask": 0.2424, "loss": 0.77349, "time": 0.40578} -{"mode": "train", "epoch": 6, "iter": 2350, "lr": 0.0002, "memory": 8424, "data_time": 0.04436, "loss_rpn_cls": 0.03191, "loss_rpn_bbox": 0.04869, "loss_cls": 0.21814, "acc": 92.64966, "loss_bbox": 0.24246, "loss_mask": 0.23896, "loss": 0.78015, "time": 0.40746} -{"mode": "train", "epoch": 6, "iter": 2400, "lr": 0.0002, "memory": 8424, "data_time": 0.04243, "loss_rpn_cls": 0.02994, "loss_rpn_bbox": 0.05102, "loss_cls": 0.21086, "acc": 92.80176, "loss_bbox": 0.23765, "loss_mask": 0.24142, "loss": 0.77089, "time": 0.42271} -{"mode": "train", "epoch": 6, "iter": 2450, "lr": 0.0002, "memory": 8424, "data_time": 0.036, "loss_rpn_cls": 0.02887, "loss_rpn_bbox": 0.049, "loss_cls": 0.21646, "acc": 92.75293, "loss_bbox": 0.24314, "loss_mask": 0.24001, "loss": 0.77748, "time": 
0.40112} -{"mode": "train", "epoch": 6, "iter": 2500, "lr": 0.0002, "memory": 8424, "data_time": 0.04182, "loss_rpn_cls": 0.02934, "loss_rpn_bbox": 0.04873, "loss_cls": 0.22348, "acc": 92.31567, "loss_bbox": 0.25343, "loss_mask": 0.24706, "loss": 0.80204, "time": 0.43013} -{"mode": "train", "epoch": 6, "iter": 2550, "lr": 0.0002, "memory": 8424, "data_time": 0.04335, "loss_rpn_cls": 0.02705, "loss_rpn_bbox": 0.04986, "loss_cls": 0.2142, "acc": 92.75732, "loss_bbox": 0.24073, "loss_mask": 0.2381, "loss": 0.76994, "time": 0.41376} -{"mode": "train", "epoch": 6, "iter": 2600, "lr": 0.0002, "memory": 8424, "data_time": 0.04336, "loss_rpn_cls": 0.03039, "loss_rpn_bbox": 0.05233, "loss_cls": 0.21848, "acc": 92.6228, "loss_bbox": 0.24069, "loss_mask": 0.2477, "loss": 0.78959, "time": 0.40766} -{"mode": "train", "epoch": 6, "iter": 2650, "lr": 0.0002, "memory": 8424, "data_time": 0.03764, "loss_rpn_cls": 0.02634, "loss_rpn_bbox": 0.04847, "loss_cls": 0.21924, "acc": 92.56714, "loss_bbox": 0.24273, "loss_mask": 0.24422, "loss": 0.78099, "time": 0.40328} -{"mode": "train", "epoch": 6, "iter": 2700, "lr": 0.0002, "memory": 8424, "data_time": 0.04724, "loss_rpn_cls": 0.02918, "loss_rpn_bbox": 0.0542, "loss_cls": 0.22342, "acc": 92.2019, "loss_bbox": 0.25478, "loss_mask": 0.24366, "loss": 0.80525, "time": 0.41811} -{"mode": "train", "epoch": 6, "iter": 2750, "lr": 0.0002, "memory": 8424, "data_time": 0.04542, "loss_rpn_cls": 0.02773, "loss_rpn_bbox": 0.04818, "loss_cls": 0.20658, "acc": 92.8291, "loss_bbox": 0.23802, "loss_mask": 0.23988, "loss": 0.76038, "time": 0.40471} -{"mode": "train", "epoch": 6, "iter": 2800, "lr": 0.0002, "memory": 8424, "data_time": 0.03849, "loss_rpn_cls": 0.02591, "loss_rpn_bbox": 0.04683, "loss_cls": 0.21093, "acc": 92.73096, "loss_bbox": 0.23302, "loss_mask": 0.23099, "loss": 0.74767, "time": 0.39876} -{"mode": "train", "epoch": 6, "iter": 2850, "lr": 0.0002, "memory": 8424, "data_time": 0.04036, "loss_rpn_cls": 0.03086, "loss_rpn_bbox": 0.04927, "loss_cls": 0.21966, "acc": 92.4834, "loss_bbox": 0.24525, "loss_mask": 0.2448, "loss": 0.78984, "time": 0.41807} -{"mode": "train", "epoch": 6, "iter": 2900, "lr": 0.0002, "memory": 8424, "data_time": 0.04896, "loss_rpn_cls": 0.03109, "loss_rpn_bbox": 0.05288, "loss_cls": 0.2209, "acc": 92.43628, "loss_bbox": 0.24768, "loss_mask": 0.24589, "loss": 0.79845, "time": 0.41269} -{"mode": "train", "epoch": 6, "iter": 2950, "lr": 0.0002, "memory": 8424, "data_time": 0.04096, "loss_rpn_cls": 0.02818, "loss_rpn_bbox": 0.05037, "loss_cls": 0.22637, "acc": 92.37451, "loss_bbox": 0.25039, "loss_mask": 0.24187, "loss": 0.79717, "time": 0.41504} -{"mode": "train", "epoch": 6, "iter": 3000, "lr": 0.0002, "memory": 8424, "data_time": 0.0463, "loss_rpn_cls": 0.02769, "loss_rpn_bbox": 0.04691, "loss_cls": 0.21602, "acc": 92.72827, "loss_bbox": 0.23856, "loss_mask": 0.24159, "loss": 0.77077, "time": 0.40227} -{"mode": "train", "epoch": 6, "iter": 3050, "lr": 0.0002, "memory": 8424, "data_time": 0.04591, "loss_rpn_cls": 0.03002, "loss_rpn_bbox": 0.0514, "loss_cls": 0.22267, "acc": 92.55493, "loss_bbox": 0.24136, "loss_mask": 0.24564, "loss": 0.7911, "time": 0.40816} -{"mode": "train", "epoch": 6, "iter": 3100, "lr": 0.0002, "memory": 8424, "data_time": 0.03835, "loss_rpn_cls": 0.02886, "loss_rpn_bbox": 0.04896, "loss_cls": 0.22744, "acc": 92.4165, "loss_bbox": 0.2521, "loss_mask": 0.24475, "loss": 0.80211, "time": 0.40948} -{"mode": "train", "epoch": 6, "iter": 3150, "lr": 0.0002, "memory": 8424, "data_time": 0.03863, "loss_rpn_cls": 0.02908, 
"loss_rpn_bbox": 0.0524, "loss_cls": 0.22979, "acc": 92.27393, "loss_bbox": 0.25462, "loss_mask": 0.25463, "loss": 0.82052, "time": 0.41682} -{"mode": "train", "epoch": 6, "iter": 3200, "lr": 0.0002, "memory": 8424, "data_time": 0.04862, "loss_rpn_cls": 0.02886, "loss_rpn_bbox": 0.04785, "loss_cls": 0.21874, "acc": 92.61035, "loss_bbox": 0.23953, "loss_mask": 0.23909, "loss": 0.77407, "time": 0.40819} -{"mode": "train", "epoch": 6, "iter": 3250, "lr": 0.0002, "memory": 8424, "data_time": 0.04527, "loss_rpn_cls": 0.02683, "loss_rpn_bbox": 0.04776, "loss_cls": 0.21256, "acc": 92.64575, "loss_bbox": 0.24251, "loss_mask": 0.23769, "loss": 0.76734, "time": 0.4212} -{"mode": "train", "epoch": 6, "iter": 3300, "lr": 0.0002, "memory": 8424, "data_time": 0.04866, "loss_rpn_cls": 0.03063, "loss_rpn_bbox": 0.05101, "loss_cls": 0.22955, "acc": 92.34058, "loss_bbox": 0.25035, "loss_mask": 0.24779, "loss": 0.80933, "time": 0.4208} -{"mode": "train", "epoch": 6, "iter": 3350, "lr": 0.0002, "memory": 8424, "data_time": 0.04641, "loss_rpn_cls": 0.02862, "loss_rpn_bbox": 0.04754, "loss_cls": 0.2186, "acc": 92.59717, "loss_bbox": 0.24349, "loss_mask": 0.2422, "loss": 0.78045, "time": 0.40059} -{"mode": "train", "epoch": 6, "iter": 3400, "lr": 0.0002, "memory": 8424, "data_time": 0.0487, "loss_rpn_cls": 0.0294, "loss_rpn_bbox": 0.05013, "loss_cls": 0.23232, "acc": 92.0459, "loss_bbox": 0.25535, "loss_mask": 0.23979, "loss": 0.80699, "time": 0.40102} -{"mode": "train", "epoch": 6, "iter": 3450, "lr": 0.0002, "memory": 8424, "data_time": 0.05716, "loss_rpn_cls": 0.02749, "loss_rpn_bbox": 0.04908, "loss_cls": 0.21713, "acc": 92.68213, "loss_bbox": 0.23896, "loss_mask": 0.24432, "loss": 0.77697, "time": 0.40997} -{"mode": "train", "epoch": 6, "iter": 3500, "lr": 0.0002, "memory": 8424, "data_time": 0.04924, "loss_rpn_cls": 0.02969, "loss_rpn_bbox": 0.05031, "loss_cls": 0.22119, "acc": 92.52344, "loss_bbox": 0.24362, "loss_mask": 0.24635, "loss": 0.79115, "time": 0.41353} -{"mode": "train", "epoch": 6, "iter": 3550, "lr": 0.0002, "memory": 8424, "data_time": 0.04386, "loss_rpn_cls": 0.02982, "loss_rpn_bbox": 0.04955, "loss_cls": 0.2157, "acc": 92.81445, "loss_bbox": 0.2329, "loss_mask": 0.24206, "loss": 0.77003, "time": 0.39737} -{"mode": "train", "epoch": 6, "iter": 3600, "lr": 0.0002, "memory": 8424, "data_time": 0.04715, "loss_rpn_cls": 0.03302, "loss_rpn_bbox": 0.05533, "loss_cls": 0.22528, "acc": 92.42896, "loss_bbox": 0.25407, "loss_mask": 0.25171, "loss": 0.81941, "time": 0.4241} -{"mode": "train", "epoch": 6, "iter": 3650, "lr": 0.0002, "memory": 8424, "data_time": 0.03939, "loss_rpn_cls": 0.02947, "loss_rpn_bbox": 0.04897, "loss_cls": 0.2082, "acc": 92.7373, "loss_bbox": 0.23709, "loss_mask": 0.23923, "loss": 0.76297, "time": 0.39955} -{"mode": "train", "epoch": 6, "iter": 3700, "lr": 0.0002, "memory": 8424, "data_time": 0.04668, "loss_rpn_cls": 0.02952, "loss_rpn_bbox": 0.0508, "loss_cls": 0.21215, "acc": 92.75415, "loss_bbox": 0.24386, "loss_mask": 0.23963, "loss": 0.77597, "time": 0.41472} -{"mode": "train", "epoch": 6, "iter": 3750, "lr": 0.0002, "memory": 8424, "data_time": 0.04113, "loss_rpn_cls": 0.03139, "loss_rpn_bbox": 0.0496, "loss_cls": 0.22826, "acc": 92.30078, "loss_bbox": 0.25271, "loss_mask": 0.24688, "loss": 0.80885, "time": 0.4147} -{"mode": "train", "epoch": 6, "iter": 3800, "lr": 0.0002, "memory": 8424, "data_time": 0.04179, "loss_rpn_cls": 0.02877, "loss_rpn_bbox": 0.04983, "loss_cls": 0.22255, "acc": 92.40869, "loss_bbox": 0.24727, "loss_mask": 0.24144, "loss": 0.78985, "time": 
0.40576} -{"mode": "train", "epoch": 6, "iter": 3850, "lr": 0.0002, "memory": 8424, "data_time": 0.0478, "loss_rpn_cls": 0.02914, "loss_rpn_bbox": 0.04922, "loss_cls": 0.21597, "acc": 92.5061, "loss_bbox": 0.24375, "loss_mask": 0.24796, "loss": 0.78604, "time": 0.41495} -{"mode": "train", "epoch": 6, "iter": 3900, "lr": 0.0002, "memory": 8424, "data_time": 0.05536, "loss_rpn_cls": 0.02934, "loss_rpn_bbox": 0.05184, "loss_cls": 0.21259, "acc": 92.72827, "loss_bbox": 0.24176, "loss_mask": 0.24326, "loss": 0.77879, "time": 0.41839} -{"mode": "train", "epoch": 6, "iter": 3950, "lr": 0.0002, "memory": 8424, "data_time": 0.03383, "loss_rpn_cls": 0.02946, "loss_rpn_bbox": 0.04798, "loss_cls": 0.22934, "acc": 92.35962, "loss_bbox": 0.24569, "loss_mask": 0.24176, "loss": 0.79422, "time": 0.40879} -{"mode": "train", "epoch": 6, "iter": 4000, "lr": 0.0002, "memory": 8424, "data_time": 0.04815, "loss_rpn_cls": 0.03178, "loss_rpn_bbox": 0.0542, "loss_cls": 0.23012, "acc": 92.13208, "loss_bbox": 0.25734, "loss_mask": 0.25212, "loss": 0.82556, "time": 0.43872} -{"mode": "train", "epoch": 6, "iter": 4050, "lr": 0.0002, "memory": 8424, "data_time": 0.047, "loss_rpn_cls": 0.02878, "loss_rpn_bbox": 0.05086, "loss_cls": 0.21912, "acc": 92.44873, "loss_bbox": 0.24613, "loss_mask": 0.24573, "loss": 0.79061, "time": 0.4071} -{"mode": "train", "epoch": 6, "iter": 4100, "lr": 0.0002, "memory": 8424, "data_time": 0.04554, "loss_rpn_cls": 0.02946, "loss_rpn_bbox": 0.05191, "loss_cls": 0.22411, "acc": 92.43701, "loss_bbox": 0.25022, "loss_mask": 0.24859, "loss": 0.80428, "time": 0.42012} -{"mode": "train", "epoch": 6, "iter": 4150, "lr": 0.0002, "memory": 8424, "data_time": 0.04741, "loss_rpn_cls": 0.02889, "loss_rpn_bbox": 0.04895, "loss_cls": 0.2091, "acc": 92.9563, "loss_bbox": 0.23597, "loss_mask": 0.24177, "loss": 0.76467, "time": 0.40562} -{"mode": "train", "epoch": 6, "iter": 4200, "lr": 0.0002, "memory": 8424, "data_time": 0.04586, "loss_rpn_cls": 0.02834, "loss_rpn_bbox": 0.04632, "loss_cls": 0.21063, "acc": 92.87427, "loss_bbox": 0.23598, "loss_mask": 0.23951, "loss": 0.76079, "time": 0.40545} -{"mode": "train", "epoch": 6, "iter": 4250, "lr": 0.0002, "memory": 8424, "data_time": 0.04063, "loss_rpn_cls": 0.03135, "loss_rpn_bbox": 0.04949, "loss_cls": 0.22052, "acc": 92.47656, "loss_bbox": 0.24621, "loss_mask": 0.24351, "loss": 0.79108, "time": 0.40381} -{"mode": "train", "epoch": 6, "iter": 4300, "lr": 0.0002, "memory": 8424, "data_time": 0.04674, "loss_rpn_cls": 0.0312, "loss_rpn_bbox": 0.04966, "loss_cls": 0.23232, "acc": 92.2041, "loss_bbox": 0.24792, "loss_mask": 0.24481, "loss": 0.8059, "time": 0.41289} -{"mode": "train", "epoch": 6, "iter": 4350, "lr": 0.0002, "memory": 8424, "data_time": 0.04285, "loss_rpn_cls": 0.02976, "loss_rpn_bbox": 0.05086, "loss_cls": 0.22081, "acc": 92.43384, "loss_bbox": 0.25036, "loss_mask": 0.24381, "loss": 0.7956, "time": 0.40247} -{"mode": "train", "epoch": 6, "iter": 4400, "lr": 0.0002, "memory": 8424, "data_time": 0.04463, "loss_rpn_cls": 0.02658, "loss_rpn_bbox": 0.04878, "loss_cls": 0.21968, "acc": 92.63184, "loss_bbox": 0.23908, "loss_mask": 0.24392, "loss": 0.77803, "time": 0.39299} -{"mode": "train", "epoch": 6, "iter": 4450, "lr": 0.0002, "memory": 8424, "data_time": 0.04365, "loss_rpn_cls": 0.03409, "loss_rpn_bbox": 0.0545, "loss_cls": 0.22965, "acc": 92.13647, "loss_bbox": 0.25403, "loss_mask": 0.25136, "loss": 0.82364, "time": 0.41838} -{"mode": "train", "epoch": 6, "iter": 4500, "lr": 0.0002, "memory": 8424, "data_time": 0.04443, "loss_rpn_cls": 0.02717, 
"loss_rpn_bbox": 0.04837, "loss_cls": 0.22569, "acc": 92.37695, "loss_bbox": 0.24223, "loss_mask": 0.24697, "loss": 0.79042, "time": 0.408} -{"mode": "train", "epoch": 6, "iter": 4550, "lr": 0.0002, "memory": 8424, "data_time": 0.04242, "loss_rpn_cls": 0.02703, "loss_rpn_bbox": 0.04771, "loss_cls": 0.21794, "acc": 92.67163, "loss_bbox": 0.23568, "loss_mask": 0.24126, "loss": 0.76963, "time": 0.40379} -{"mode": "train", "epoch": 6, "iter": 4600, "lr": 0.0002, "memory": 8424, "data_time": 0.04664, "loss_rpn_cls": 0.02915, "loss_rpn_bbox": 0.04934, "loss_cls": 0.22609, "acc": 92.35522, "loss_bbox": 0.24637, "loss_mask": 0.24639, "loss": 0.79733, "time": 0.41441} -{"mode": "train", "epoch": 6, "iter": 4650, "lr": 0.0002, "memory": 8424, "data_time": 0.03863, "loss_rpn_cls": 0.03092, "loss_rpn_bbox": 0.05469, "loss_cls": 0.22126, "acc": 92.44043, "loss_bbox": 0.24576, "loss_mask": 0.24485, "loss": 0.79747, "time": 0.40405} -{"mode": "train", "epoch": 6, "iter": 4700, "lr": 0.0002, "memory": 8424, "data_time": 0.04706, "loss_rpn_cls": 0.02873, "loss_rpn_bbox": 0.04686, "loss_cls": 0.21094, "acc": 92.92139, "loss_bbox": 0.23692, "loss_mask": 0.24119, "loss": 0.76464, "time": 0.40366} -{"mode": "train", "epoch": 6, "iter": 4750, "lr": 0.0002, "memory": 8424, "data_time": 0.04026, "loss_rpn_cls": 0.0293, "loss_rpn_bbox": 0.0503, "loss_cls": 0.22005, "acc": 92.61133, "loss_bbox": 0.24162, "loss_mask": 0.24288, "loss": 0.78414, "time": 0.39438} -{"mode": "train", "epoch": 6, "iter": 4800, "lr": 0.0002, "memory": 8424, "data_time": 0.04029, "loss_rpn_cls": 0.03492, "loss_rpn_bbox": 0.05063, "loss_cls": 0.22051, "acc": 92.50903, "loss_bbox": 0.24389, "loss_mask": 0.2383, "loss": 0.78825, "time": 0.47421} -{"mode": "train", "epoch": 6, "iter": 4850, "lr": 0.0002, "memory": 8424, "data_time": 0.0408, "loss_rpn_cls": 0.02924, "loss_rpn_bbox": 0.05163, "loss_cls": 0.23288, "acc": 92.35522, "loss_bbox": 0.24167, "loss_mask": 0.24172, "loss": 0.79715, "time": 0.41104} -{"mode": "train", "epoch": 6, "iter": 4900, "lr": 0.0002, "memory": 8424, "data_time": 0.04428, "loss_rpn_cls": 0.03499, "loss_rpn_bbox": 0.05409, "loss_cls": 0.22619, "acc": 92.40967, "loss_bbox": 0.25084, "loss_mask": 0.24398, "loss": 0.81009, "time": 0.41574} -{"mode": "train", "epoch": 6, "iter": 4950, "lr": 0.0002, "memory": 8424, "data_time": 0.03642, "loss_rpn_cls": 0.02971, "loss_rpn_bbox": 0.0501, "loss_cls": 0.21079, "acc": 92.84741, "loss_bbox": 0.23338, "loss_mask": 0.24153, "loss": 0.76551, "time": 0.38528} -{"mode": "train", "epoch": 6, "iter": 5000, "lr": 0.0002, "memory": 8424, "data_time": 0.03666, "loss_rpn_cls": 0.02851, "loss_rpn_bbox": 0.04957, "loss_cls": 0.21766, "acc": 92.64014, "loss_bbox": 0.24365, "loss_mask": 0.24721, "loss": 0.78661, "time": 0.40005} -{"mode": "train", "epoch": 6, "iter": 5050, "lr": 0.0002, "memory": 8424, "data_time": 0.04204, "loss_rpn_cls": 0.02903, "loss_rpn_bbox": 0.05343, "loss_cls": 0.22109, "acc": 92.46753, "loss_bbox": 0.24741, "loss_mask": 0.24525, "loss": 0.79622, "time": 0.53987} -{"mode": "train", "epoch": 6, "iter": 5100, "lr": 0.0002, "memory": 8424, "data_time": 0.05257, "loss_rpn_cls": 0.03042, "loss_rpn_bbox": 0.05363, "loss_cls": 0.22539, "acc": 92.40039, "loss_bbox": 0.24311, "loss_mask": 0.25087, "loss": 0.80342, "time": 0.56755} -{"mode": "train", "epoch": 6, "iter": 5150, "lr": 0.0002, "memory": 8424, "data_time": 0.04323, "loss_rpn_cls": 0.02938, "loss_rpn_bbox": 0.04848, "loss_cls": 0.22421, "acc": 92.37061, "loss_bbox": 0.2482, "loss_mask": 0.24339, "loss": 0.79366, 
"time": 0.56196} -{"mode": "train", "epoch": 6, "iter": 5200, "lr": 0.0002, "memory": 8424, "data_time": 0.0456, "loss_rpn_cls": 0.03181, "loss_rpn_bbox": 0.05188, "loss_cls": 0.22232, "acc": 92.479, "loss_bbox": 0.2463, "loss_mask": 0.24059, "loss": 0.7929, "time": 0.4917} -{"mode": "train", "epoch": 6, "iter": 5250, "lr": 0.0002, "memory": 8424, "data_time": 0.0534, "loss_rpn_cls": 0.02958, "loss_rpn_bbox": 0.05342, "loss_cls": 0.21943, "acc": 92.51343, "loss_bbox": 0.24099, "loss_mask": 0.24543, "loss": 0.78884, "time": 0.41698} -{"mode": "train", "epoch": 6, "iter": 5300, "lr": 0.0002, "memory": 8424, "data_time": 0.04176, "loss_rpn_cls": 0.03315, "loss_rpn_bbox": 0.05512, "loss_cls": 0.22749, "acc": 92.37915, "loss_bbox": 0.2531, "loss_mask": 0.24419, "loss": 0.81305, "time": 0.42229} -{"mode": "train", "epoch": 6, "iter": 5350, "lr": 0.0002, "memory": 8424, "data_time": 0.0529, "loss_rpn_cls": 0.02954, "loss_rpn_bbox": 0.0511, "loss_cls": 0.22506, "acc": 92.39282, "loss_bbox": 0.25008, "loss_mask": 0.24418, "loss": 0.79997, "time": 0.40693} -{"mode": "train", "epoch": 6, "iter": 5400, "lr": 0.0002, "memory": 8424, "data_time": 0.03585, "loss_rpn_cls": 0.03221, "loss_rpn_bbox": 0.04852, "loss_cls": 0.21792, "acc": 92.73755, "loss_bbox": 0.23526, "loss_mask": 0.24314, "loss": 0.77706, "time": 0.39107} -{"mode": "train", "epoch": 6, "iter": 5450, "lr": 0.0002, "memory": 8424, "data_time": 0.04783, "loss_rpn_cls": 0.0329, "loss_rpn_bbox": 0.05604, "loss_cls": 0.23857, "acc": 91.95752, "loss_bbox": 0.25433, "loss_mask": 0.24892, "loss": 0.83077, "time": 0.42389} -{"mode": "train", "epoch": 6, "iter": 5500, "lr": 0.0002, "memory": 8424, "data_time": 0.04332, "loss_rpn_cls": 0.02941, "loss_rpn_bbox": 0.04934, "loss_cls": 0.2164, "acc": 92.68213, "loss_bbox": 0.2375, "loss_mask": 0.23722, "loss": 0.76987, "time": 0.46425} -{"mode": "train", "epoch": 6, "iter": 5550, "lr": 0.0002, "memory": 8424, "data_time": 0.04821, "loss_rpn_cls": 0.02892, "loss_rpn_bbox": 0.05347, "loss_cls": 0.23291, "acc": 92.26221, "loss_bbox": 0.2454, "loss_mask": 0.24887, "loss": 0.80956, "time": 0.40595} -{"mode": "train", "epoch": 6, "iter": 5600, "lr": 0.0002, "memory": 8424, "data_time": 0.04533, "loss_rpn_cls": 0.03093, "loss_rpn_bbox": 0.0511, "loss_cls": 0.22008, "acc": 92.64819, "loss_bbox": 0.238, "loss_mask": 0.24471, "loss": 0.78482, "time": 0.4651} -{"mode": "train", "epoch": 6, "iter": 5650, "lr": 0.0002, "memory": 8424, "data_time": 0.0484, "loss_rpn_cls": 0.02955, "loss_rpn_bbox": 0.05084, "loss_cls": 0.21394, "acc": 92.64868, "loss_bbox": 0.2451, "loss_mask": 0.24039, "loss": 0.77981, "time": 0.40866} -{"mode": "train", "epoch": 6, "iter": 5700, "lr": 0.0002, "memory": 8424, "data_time": 0.04501, "loss_rpn_cls": 0.02961, "loss_rpn_bbox": 0.05033, "loss_cls": 0.21506, "acc": 92.73975, "loss_bbox": 0.23494, "loss_mask": 0.23868, "loss": 0.76862, "time": 0.41633} -{"mode": "train", "epoch": 6, "iter": 5750, "lr": 0.0002, "memory": 8424, "data_time": 0.04653, "loss_rpn_cls": 0.02854, "loss_rpn_bbox": 0.04822, "loss_cls": 0.21282, "acc": 92.73853, "loss_bbox": 0.23909, "loss_mask": 0.23819, "loss": 0.76685, "time": 0.40084} -{"mode": "train", "epoch": 6, "iter": 5800, "lr": 0.0002, "memory": 8424, "data_time": 0.04352, "loss_rpn_cls": 0.03163, "loss_rpn_bbox": 0.05128, "loss_cls": 0.21262, "acc": 92.78394, "loss_bbox": 0.23785, "loss_mask": 0.2408, "loss": 0.77418, "time": 0.44723} -{"mode": "train", "epoch": 6, "iter": 5850, "lr": 0.0002, "memory": 8424, "data_time": 0.04417, "loss_rpn_cls": 0.02965, 
"loss_rpn_bbox": 0.04801, "loss_cls": 0.21317, "acc": 92.76416, "loss_bbox": 0.23601, "loss_mask": 0.24402, "loss": 0.77086, "time": 0.40414} -{"mode": "train", "epoch": 6, "iter": 5900, "lr": 0.0002, "memory": 8424, "data_time": 0.05022, "loss_rpn_cls": 0.03237, "loss_rpn_bbox": 0.04972, "loss_cls": 0.21439, "acc": 92.80981, "loss_bbox": 0.23674, "loss_mask": 0.24298, "loss": 0.77619, "time": 0.40044} -{"mode": "train", "epoch": 6, "iter": 5950, "lr": 0.0002, "memory": 8424, "data_time": 0.05112, "loss_rpn_cls": 0.03009, "loss_rpn_bbox": 0.05156, "loss_cls": 0.2257, "acc": 92.39307, "loss_bbox": 0.24524, "loss_mask": 0.25129, "loss": 0.80388, "time": 0.40544} -{"mode": "train", "epoch": 6, "iter": 6000, "lr": 0.0002, "memory": 8424, "data_time": 0.04546, "loss_rpn_cls": 0.03198, "loss_rpn_bbox": 0.04881, "loss_cls": 0.22338, "acc": 92.6106, "loss_bbox": 0.24061, "loss_mask": 0.23719, "loss": 0.78196, "time": 0.4058} -{"mode": "train", "epoch": 6, "iter": 6050, "lr": 0.0002, "memory": 8424, "data_time": 0.04357, "loss_rpn_cls": 0.03072, "loss_rpn_bbox": 0.05454, "loss_cls": 0.2269, "acc": 92.52295, "loss_bbox": 0.24889, "loss_mask": 0.24628, "loss": 0.80733, "time": 0.42053} -{"mode": "train", "epoch": 6, "iter": 6100, "lr": 0.0002, "memory": 8424, "data_time": 0.05267, "loss_rpn_cls": 0.03037, "loss_rpn_bbox": 0.05246, "loss_cls": 0.22765, "acc": 92.34009, "loss_bbox": 0.24976, "loss_mask": 0.24735, "loss": 0.80759, "time": 0.47061} -{"mode": "train", "epoch": 6, "iter": 6150, "lr": 0.0002, "memory": 8424, "data_time": 0.04372, "loss_rpn_cls": 0.02821, "loss_rpn_bbox": 0.05007, "loss_cls": 0.21919, "acc": 92.51929, "loss_bbox": 0.24276, "loss_mask": 0.2418, "loss": 0.78202, "time": 0.46899} -{"mode": "train", "epoch": 6, "iter": 6200, "lr": 0.0002, "memory": 8424, "data_time": 0.04442, "loss_rpn_cls": 0.03018, "loss_rpn_bbox": 0.04679, "loss_cls": 0.21935, "acc": 92.729, "loss_bbox": 0.2358, "loss_mask": 0.24066, "loss": 0.77278, "time": 0.4167} -{"mode": "train", "epoch": 6, "iter": 6250, "lr": 0.0002, "memory": 8424, "data_time": 0.05683, "loss_rpn_cls": 0.02757, "loss_rpn_bbox": 0.05033, "loss_cls": 0.22707, "acc": 92.4519, "loss_bbox": 0.24778, "loss_mask": 0.242, "loss": 0.79475, "time": 0.40139} -{"mode": "train", "epoch": 6, "iter": 6300, "lr": 0.0002, "memory": 8424, "data_time": 0.05098, "loss_rpn_cls": 0.03253, "loss_rpn_bbox": 0.05145, "loss_cls": 0.22745, "acc": 92.55005, "loss_bbox": 0.23862, "loss_mask": 0.25073, "loss": 0.80077, "time": 0.40488} -{"mode": "train", "epoch": 6, "iter": 6350, "lr": 0.0002, "memory": 8424, "data_time": 0.03913, "loss_rpn_cls": 0.03104, "loss_rpn_bbox": 0.04999, "loss_cls": 0.21685, "acc": 92.76587, "loss_bbox": 0.2377, "loss_mask": 0.24656, "loss": 0.78213, "time": 0.40062} -{"mode": "train", "epoch": 6, "iter": 6400, "lr": 0.0002, "memory": 8424, "data_time": 0.05657, "loss_rpn_cls": 0.03097, "loss_rpn_bbox": 0.05328, "loss_cls": 0.22183, "acc": 92.42212, "loss_bbox": 0.25191, "loss_mask": 0.24897, "loss": 0.80696, "time": 0.40756} -{"mode": "train", "epoch": 6, "iter": 6450, "lr": 0.0002, "memory": 8424, "data_time": 0.04194, "loss_rpn_cls": 0.03039, "loss_rpn_bbox": 0.05071, "loss_cls": 0.22119, "acc": 92.46997, "loss_bbox": 0.24341, "loss_mask": 0.24059, "loss": 0.78629, "time": 0.4084} -{"mode": "train", "epoch": 6, "iter": 6500, "lr": 0.0002, "memory": 8424, "data_time": 0.05184, "loss_rpn_cls": 0.0299, "loss_rpn_bbox": 0.05097, "loss_cls": 0.21872, "acc": 92.50879, "loss_bbox": 0.24813, "loss_mask": 0.24452, "loss": 0.79223, "time": 
0.40694} -{"mode": "train", "epoch": 6, "iter": 6550, "lr": 0.0002, "memory": 8424, "data_time": 0.04633, "loss_rpn_cls": 0.03034, "loss_rpn_bbox": 0.04909, "loss_cls": 0.21609, "acc": 92.75391, "loss_bbox": 0.23893, "loss_mask": 0.23698, "loss": 0.77144, "time": 0.40015} -{"mode": "train", "epoch": 6, "iter": 6600, "lr": 0.0002, "memory": 8424, "data_time": 0.05229, "loss_rpn_cls": 0.02798, "loss_rpn_bbox": 0.04966, "loss_cls": 0.21958, "acc": 92.50049, "loss_bbox": 0.24447, "loss_mask": 0.2396, "loss": 0.7813, "time": 0.41175} -{"mode": "train", "epoch": 6, "iter": 6650, "lr": 0.0002, "memory": 8424, "data_time": 0.04875, "loss_rpn_cls": 0.03108, "loss_rpn_bbox": 0.05335, "loss_cls": 0.22192, "acc": 92.521, "loss_bbox": 0.24415, "loss_mask": 0.24502, "loss": 0.79552, "time": 0.41463} -{"mode": "train", "epoch": 6, "iter": 6700, "lr": 0.0002, "memory": 8424, "data_time": 0.04563, "loss_rpn_cls": 0.0329, "loss_rpn_bbox": 0.05187, "loss_cls": 0.22214, "acc": 92.57959, "loss_bbox": 0.24556, "loss_mask": 0.24362, "loss": 0.7961, "time": 0.41865} -{"mode": "train", "epoch": 6, "iter": 6750, "lr": 0.0002, "memory": 8424, "data_time": 0.04661, "loss_rpn_cls": 0.02879, "loss_rpn_bbox": 0.04735, "loss_cls": 0.20987, "acc": 92.87598, "loss_bbox": 0.23905, "loss_mask": 0.24201, "loss": 0.76707, "time": 0.40335} -{"mode": "train", "epoch": 6, "iter": 6800, "lr": 0.0002, "memory": 8424, "data_time": 0.05372, "loss_rpn_cls": 0.02917, "loss_rpn_bbox": 0.05023, "loss_cls": 0.21935, "acc": 92.54761, "loss_bbox": 0.24564, "loss_mask": 0.25157, "loss": 0.79596, "time": 0.40395} -{"mode": "train", "epoch": 6, "iter": 6850, "lr": 0.0002, "memory": 8424, "data_time": 0.04427, "loss_rpn_cls": 0.03095, "loss_rpn_bbox": 0.05101, "loss_cls": 0.21382, "acc": 92.78711, "loss_bbox": 0.24259, "loss_mask": 0.23663, "loss": 0.775, "time": 0.40288} -{"mode": "train", "epoch": 6, "iter": 6900, "lr": 0.0002, "memory": 8424, "data_time": 0.0426, "loss_rpn_cls": 0.02952, "loss_rpn_bbox": 0.05007, "loss_cls": 0.22306, "acc": 92.35425, "loss_bbox": 0.24784, "loss_mask": 0.23891, "loss": 0.7894, "time": 0.40981} -{"mode": "train", "epoch": 6, "iter": 6950, "lr": 0.0002, "memory": 8424, "data_time": 0.04177, "loss_rpn_cls": 0.02855, "loss_rpn_bbox": 0.04918, "loss_cls": 0.20699, "acc": 92.90869, "loss_bbox": 0.23247, "loss_mask": 0.23669, "loss": 0.75388, "time": 0.41467} -{"mode": "train", "epoch": 6, "iter": 7000, "lr": 0.0002, "memory": 8424, "data_time": 0.05137, "loss_rpn_cls": 0.02923, "loss_rpn_bbox": 0.05166, "loss_cls": 0.22695, "acc": 92.50635, "loss_bbox": 0.24001, "loss_mask": 0.24494, "loss": 0.79278, "time": 0.40372} -{"mode": "train", "epoch": 6, "iter": 7050, "lr": 0.0002, "memory": 8424, "data_time": 0.04398, "loss_rpn_cls": 0.03462, "loss_rpn_bbox": 0.04987, "loss_cls": 0.21588, "acc": 92.8252, "loss_bbox": 0.23393, "loss_mask": 0.24193, "loss": 0.77622, "time": 0.40983} -{"mode": "train", "epoch": 6, "iter": 7100, "lr": 0.0002, "memory": 8424, "data_time": 0.04017, "loss_rpn_cls": 0.02954, "loss_rpn_bbox": 0.04936, "loss_cls": 0.2193, "acc": 92.56934, "loss_bbox": 0.24198, "loss_mask": 0.24386, "loss": 0.78405, "time": 0.40139} -{"mode": "train", "epoch": 6, "iter": 7150, "lr": 0.0002, "memory": 8424, "data_time": 0.05029, "loss_rpn_cls": 0.02957, "loss_rpn_bbox": 0.05237, "loss_cls": 0.22787, "acc": 92.30396, "loss_bbox": 0.25736, "loss_mask": 0.2492, "loss": 0.81638, "time": 0.41931} -{"mode": "train", "epoch": 6, "iter": 7200, "lr": 0.0002, "memory": 8424, "data_time": 0.0524, "loss_rpn_cls": 0.02901, 
"loss_rpn_bbox": 0.05063, "loss_cls": 0.2203, "acc": 92.44775, "loss_bbox": 0.24531, "loss_mask": 0.24004, "loss": 0.78529, "time": 0.41098} -{"mode": "train", "epoch": 6, "iter": 7250, "lr": 0.0002, "memory": 8424, "data_time": 0.05835, "loss_rpn_cls": 0.03123, "loss_rpn_bbox": 0.05459, "loss_cls": 0.22341, "acc": 92.43359, "loss_bbox": 0.24442, "loss_mask": 0.24075, "loss": 0.79439, "time": 0.41724} -{"mode": "train", "epoch": 6, "iter": 7300, "lr": 0.0002, "memory": 8424, "data_time": 0.04865, "loss_rpn_cls": 0.03084, "loss_rpn_bbox": 0.04897, "loss_cls": 0.22664, "acc": 92.37671, "loss_bbox": 0.24516, "loss_mask": 0.24262, "loss": 0.79423, "time": 0.40727} -{"mode": "val", "epoch": 6, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3575, "bbox_mAP_50": 0.5754, "bbox_mAP_75": 0.387, "bbox_mAP_s": 0.2017, "bbox_mAP_m": 0.4, "bbox_mAP_l": 0.4726, "bbox_mAP_copypaste": "0.3575 0.5754 0.3870 0.2017 0.4000 0.4726", "segm_mAP": 0.3423, "segm_mAP_50": 0.5485, "segm_mAP_75": 0.3664, "segm_mAP_s": 0.1517, "segm_mAP_m": 0.3747, "segm_mAP_l": 0.5066, "segm_mAP_copypaste": "0.3423 0.5485 0.3664 0.1517 0.3747 0.5066"} -{"mode": "train", "epoch": 7, "iter": 50, "lr": 0.0002, "memory": 8424, "data_time": 0.12778, "loss_rpn_cls": 0.02872, "loss_rpn_bbox": 0.05114, "loss_cls": 0.2155, "acc": 92.62646, "loss_bbox": 0.24492, "loss_mask": 0.23775, "loss": 0.77804, "time": 0.51542} -{"mode": "train", "epoch": 7, "iter": 100, "lr": 0.0002, "memory": 8424, "data_time": 0.0474, "loss_rpn_cls": 0.02621, "loss_rpn_bbox": 0.04847, "loss_cls": 0.20447, "acc": 92.87085, "loss_bbox": 0.24071, "loss_mask": 0.23878, "loss": 0.75864, "time": 0.45432} -{"mode": "train", "epoch": 7, "iter": 150, "lr": 0.0002, "memory": 8424, "data_time": 0.05113, "loss_rpn_cls": 0.02658, "loss_rpn_bbox": 0.05193, "loss_cls": 0.21787, "acc": 92.38159, "loss_bbox": 0.2495, "loss_mask": 0.24194, "loss": 0.78782, "time": 0.51064} -{"mode": "train", "epoch": 7, "iter": 200, "lr": 0.0002, "memory": 8424, "data_time": 0.04435, "loss_rpn_cls": 0.03004, "loss_rpn_bbox": 0.05347, "loss_cls": 0.22154, "acc": 92.28809, "loss_bbox": 0.25108, "loss_mask": 0.24529, "loss": 0.80142, "time": 0.44318} -{"mode": "train", "epoch": 7, "iter": 250, "lr": 0.0002, "memory": 8424, "data_time": 0.04307, "loss_rpn_cls": 0.02814, "loss_rpn_bbox": 0.04771, "loss_cls": 0.20723, "acc": 92.97559, "loss_bbox": 0.23068, "loss_mask": 0.23977, "loss": 0.75353, "time": 0.41884} -{"mode": "train", "epoch": 7, "iter": 300, "lr": 0.0002, "memory": 8424, "data_time": 0.04974, "loss_rpn_cls": 0.02703, "loss_rpn_bbox": 0.04759, "loss_cls": 0.20849, "acc": 92.91284, "loss_bbox": 0.24439, "loss_mask": 0.24064, "loss": 0.76815, "time": 0.43006} -{"mode": "train", "epoch": 7, "iter": 350, "lr": 0.0002, "memory": 8424, "data_time": 0.05205, "loss_rpn_cls": 0.0278, "loss_rpn_bbox": 0.04898, "loss_cls": 0.21072, "acc": 92.84155, "loss_bbox": 0.2406, "loss_mask": 0.23953, "loss": 0.76763, "time": 0.42122} -{"mode": "train", "epoch": 7, "iter": 400, "lr": 0.0002, "memory": 8424, "data_time": 0.03948, "loss_rpn_cls": 0.02707, "loss_rpn_bbox": 0.0511, "loss_cls": 0.21514, "acc": 92.62744, "loss_bbox": 0.24167, "loss_mask": 0.23897, "loss": 0.77395, "time": 0.41786} -{"mode": "train", "epoch": 7, "iter": 450, "lr": 0.0002, "memory": 8424, "data_time": 0.04674, "loss_rpn_cls": 0.02601, "loss_rpn_bbox": 0.04547, "loss_cls": 0.19898, "acc": 93.10742, "loss_bbox": 0.23074, "loss_mask": 0.23237, "loss": 0.73358, "time": 0.40893} -{"mode": "train", "epoch": 7, "iter": 500, "lr": 0.0002, "memory": 8424, 
"data_time": 0.04745, "loss_rpn_cls": 0.02814, "loss_rpn_bbox": 0.04954, "loss_cls": 0.21496, "acc": 92.64429, "loss_bbox": 0.24358, "loss_mask": 0.23839, "loss": 0.77462, "time": 0.43006} -{"mode": "train", "epoch": 7, "iter": 550, "lr": 0.0002, "memory": 8424, "data_time": 0.04925, "loss_rpn_cls": 0.02784, "loss_rpn_bbox": 0.04917, "loss_cls": 0.20837, "acc": 92.80615, "loss_bbox": 0.23921, "loss_mask": 0.23829, "loss": 0.76289, "time": 0.44224} -{"mode": "train", "epoch": 7, "iter": 600, "lr": 0.0002, "memory": 8424, "data_time": 0.05214, "loss_rpn_cls": 0.02514, "loss_rpn_bbox": 0.05164, "loss_cls": 0.21554, "acc": 92.47241, "loss_bbox": 0.24801, "loss_mask": 0.24309, "loss": 0.78341, "time": 0.42205} -{"mode": "train", "epoch": 7, "iter": 650, "lr": 0.0002, "memory": 8424, "data_time": 0.04351, "loss_rpn_cls": 0.02823, "loss_rpn_bbox": 0.05002, "loss_cls": 0.21021, "acc": 92.6582, "loss_bbox": 0.2428, "loss_mask": 0.23955, "loss": 0.77081, "time": 0.4269} -{"mode": "train", "epoch": 7, "iter": 700, "lr": 0.0002, "memory": 8424, "data_time": 0.04708, "loss_rpn_cls": 0.02648, "loss_rpn_bbox": 0.05118, "loss_cls": 0.20509, "acc": 92.92236, "loss_bbox": 0.23446, "loss_mask": 0.23699, "loss": 0.7542, "time": 0.47612} -{"mode": "train", "epoch": 7, "iter": 750, "lr": 0.0002, "memory": 8424, "data_time": 0.04762, "loss_rpn_cls": 0.02831, "loss_rpn_bbox": 0.05142, "loss_cls": 0.21828, "acc": 92.4834, "loss_bbox": 0.24503, "loss_mask": 0.24233, "loss": 0.78538, "time": 0.41465} -{"mode": "train", "epoch": 7, "iter": 800, "lr": 0.0002, "memory": 8424, "data_time": 0.03511, "loss_rpn_cls": 0.02998, "loss_rpn_bbox": 0.05455, "loss_cls": 0.22126, "acc": 92.42798, "loss_bbox": 0.24764, "loss_mask": 0.25046, "loss": 0.80389, "time": 0.42847} -{"mode": "train", "epoch": 7, "iter": 850, "lr": 0.0002, "memory": 8424, "data_time": 0.04591, "loss_rpn_cls": 0.02938, "loss_rpn_bbox": 0.05139, "loss_cls": 0.20989, "acc": 92.73535, "loss_bbox": 0.2336, "loss_mask": 0.24316, "loss": 0.76742, "time": 0.42325} -{"mode": "train", "epoch": 7, "iter": 900, "lr": 0.0002, "memory": 8424, "data_time": 0.05203, "loss_rpn_cls": 0.02585, "loss_rpn_bbox": 0.0483, "loss_cls": 0.19895, "acc": 93.13184, "loss_bbox": 0.23025, "loss_mask": 0.2327, "loss": 0.73605, "time": 0.40614} -{"mode": "train", "epoch": 7, "iter": 950, "lr": 0.0002, "memory": 8424, "data_time": 0.04134, "loss_rpn_cls": 0.02774, "loss_rpn_bbox": 0.05158, "loss_cls": 0.21776, "acc": 92.41187, "loss_bbox": 0.24893, "loss_mask": 0.24272, "loss": 0.78874, "time": 0.42212} -{"mode": "train", "epoch": 7, "iter": 1000, "lr": 0.0002, "memory": 8424, "data_time": 0.04767, "loss_rpn_cls": 0.02669, "loss_rpn_bbox": 0.0522, "loss_cls": 0.21682, "acc": 92.50391, "loss_bbox": 0.24831, "loss_mask": 0.24022, "loss": 0.78425, "time": 0.42278} -{"mode": "train", "epoch": 7, "iter": 1050, "lr": 0.0002, "memory": 8424, "data_time": 0.04318, "loss_rpn_cls": 0.02675, "loss_rpn_bbox": 0.04799, "loss_cls": 0.20492, "acc": 92.81055, "loss_bbox": 0.2396, "loss_mask": 0.24124, "loss": 0.7605, "time": 0.41301} -{"mode": "train", "epoch": 7, "iter": 1100, "lr": 0.0002, "memory": 8424, "data_time": 0.04819, "loss_rpn_cls": 0.02768, "loss_rpn_bbox": 0.0489, "loss_cls": 0.20273, "acc": 92.88379, "loss_bbox": 0.23569, "loss_mask": 0.23724, "loss": 0.75225, "time": 0.41397} -{"mode": "train", "epoch": 7, "iter": 1150, "lr": 0.0002, "memory": 8424, "data_time": 0.05272, "loss_rpn_cls": 0.02767, "loss_rpn_bbox": 0.05137, "loss_cls": 0.21402, "acc": 92.60449, "loss_bbox": 0.24429, 
"loss_mask": 0.23926, "loss": 0.77661, "time": 0.42421} -{"mode": "train", "epoch": 7, "iter": 1200, "lr": 0.0002, "memory": 8424, "data_time": 0.03915, "loss_rpn_cls": 0.0269, "loss_rpn_bbox": 0.04834, "loss_cls": 0.20423, "acc": 92.92456, "loss_bbox": 0.23052, "loss_mask": 0.23423, "loss": 0.74422, "time": 0.40354} -{"mode": "train", "epoch": 7, "iter": 1250, "lr": 0.0002, "memory": 8424, "data_time": 0.04142, "loss_rpn_cls": 0.0294, "loss_rpn_bbox": 0.05014, "loss_cls": 0.20986, "acc": 92.84033, "loss_bbox": 0.23306, "loss_mask": 0.23955, "loss": 0.76201, "time": 0.40567} -{"mode": "train", "epoch": 7, "iter": 1300, "lr": 0.0002, "memory": 8424, "data_time": 0.04351, "loss_rpn_cls": 0.0293, "loss_rpn_bbox": 0.05094, "loss_cls": 0.2072, "acc": 92.89795, "loss_bbox": 0.23551, "loss_mask": 0.23842, "loss": 0.76137, "time": 0.41097} -{"mode": "train", "epoch": 7, "iter": 1350, "lr": 0.0002, "memory": 8424, "data_time": 0.04248, "loss_rpn_cls": 0.02877, "loss_rpn_bbox": 0.05072, "loss_cls": 0.21748, "acc": 92.53931, "loss_bbox": 0.24577, "loss_mask": 0.24725, "loss": 0.78998, "time": 0.41699} -{"mode": "train", "epoch": 7, "iter": 1400, "lr": 0.0002, "memory": 8424, "data_time": 0.04647, "loss_rpn_cls": 0.02814, "loss_rpn_bbox": 0.04883, "loss_cls": 0.20775, "acc": 92.89404, "loss_bbox": 0.23323, "loss_mask": 0.23959, "loss": 0.75754, "time": 0.41988} -{"mode": "train", "epoch": 7, "iter": 1450, "lr": 0.0002, "memory": 8424, "data_time": 0.05409, "loss_rpn_cls": 0.02741, "loss_rpn_bbox": 0.04768, "loss_cls": 0.20994, "acc": 92.7561, "loss_bbox": 0.23873, "loss_mask": 0.23371, "loss": 0.75746, "time": 0.40577} -{"mode": "train", "epoch": 7, "iter": 1500, "lr": 0.0002, "memory": 8424, "data_time": 0.04837, "loss_rpn_cls": 0.02776, "loss_rpn_bbox": 0.05004, "loss_cls": 0.22146, "acc": 92.39233, "loss_bbox": 0.24648, "loss_mask": 0.23975, "loss": 0.78548, "time": 0.42675} -{"mode": "train", "epoch": 7, "iter": 1550, "lr": 0.0002, "memory": 8424, "data_time": 0.04075, "loss_rpn_cls": 0.02788, "loss_rpn_bbox": 0.04708, "loss_cls": 0.20293, "acc": 93.14111, "loss_bbox": 0.22372, "loss_mask": 0.23995, "loss": 0.74156, "time": 0.43688} -{"mode": "train", "epoch": 7, "iter": 1600, "lr": 0.0002, "memory": 8424, "data_time": 0.04893, "loss_rpn_cls": 0.0271, "loss_rpn_bbox": 0.04925, "loss_cls": 0.20433, "acc": 92.87183, "loss_bbox": 0.23563, "loss_mask": 0.23577, "loss": 0.75207, "time": 0.45975} -{"mode": "train", "epoch": 7, "iter": 1650, "lr": 0.0002, "memory": 8424, "data_time": 0.04419, "loss_rpn_cls": 0.02995, "loss_rpn_bbox": 0.0516, "loss_cls": 0.21828, "acc": 92.38403, "loss_bbox": 0.2468, "loss_mask": 0.24427, "loss": 0.79091, "time": 0.48198} -{"mode": "train", "epoch": 7, "iter": 1700, "lr": 0.0002, "memory": 8424, "data_time": 0.04126, "loss_rpn_cls": 0.02602, "loss_rpn_bbox": 0.04959, "loss_cls": 0.21264, "acc": 92.71631, "loss_bbox": 0.23814, "loss_mask": 0.2381, "loss": 0.7645, "time": 0.47743} -{"mode": "train", "epoch": 7, "iter": 1750, "lr": 0.0002, "memory": 8424, "data_time": 0.05355, "loss_rpn_cls": 0.02722, "loss_rpn_bbox": 0.04911, "loss_cls": 0.21088, "acc": 92.85889, "loss_bbox": 0.23566, "loss_mask": 0.23811, "loss": 0.76098, "time": 0.4852} -{"mode": "train", "epoch": 7, "iter": 1800, "lr": 0.0002, "memory": 8424, "data_time": 0.05025, "loss_rpn_cls": 0.02854, "loss_rpn_bbox": 0.05133, "loss_cls": 0.21204, "acc": 92.69556, "loss_bbox": 0.24636, "loss_mask": 0.23916, "loss": 0.77743, "time": 0.49036} -{"mode": "train", "epoch": 7, "iter": 1850, "lr": 0.0002, "memory": 8424, 
"data_time": 0.05225, "loss_rpn_cls": 0.02836, "loss_rpn_bbox": 0.04965, "loss_cls": 0.21082, "acc": 92.78125, "loss_bbox": 0.23516, "loss_mask": 0.23908, "loss": 0.76306, "time": 0.56687} -{"mode": "train", "epoch": 7, "iter": 1900, "lr": 0.0002, "memory": 8424, "data_time": 0.04398, "loss_rpn_cls": 0.02582, "loss_rpn_bbox": 0.04591, "loss_cls": 0.21097, "acc": 92.71948, "loss_bbox": 0.23851, "loss_mask": 0.23628, "loss": 0.75749, "time": 0.49364} -{"mode": "train", "epoch": 7, "iter": 1950, "lr": 0.0002, "memory": 8424, "data_time": 0.04143, "loss_rpn_cls": 0.02668, "loss_rpn_bbox": 0.04576, "loss_cls": 0.19777, "acc": 93.25098, "loss_bbox": 0.22649, "loss_mask": 0.23643, "loss": 0.73313, "time": 0.47543} -{"mode": "train", "epoch": 7, "iter": 2000, "lr": 0.0002, "memory": 8424, "data_time": 0.04191, "loss_rpn_cls": 0.02536, "loss_rpn_bbox": 0.04489, "loss_cls": 0.20126, "acc": 93.08154, "loss_bbox": 0.22805, "loss_mask": 0.23666, "loss": 0.73621, "time": 0.40019} -{"mode": "train", "epoch": 7, "iter": 2050, "lr": 0.0002, "memory": 8424, "data_time": 0.05408, "loss_rpn_cls": 0.02735, "loss_rpn_bbox": 0.0495, "loss_cls": 0.20914, "acc": 92.88965, "loss_bbox": 0.23333, "loss_mask": 0.23878, "loss": 0.75811, "time": 0.41557} -{"mode": "train", "epoch": 7, "iter": 2100, "lr": 0.0002, "memory": 8424, "data_time": 0.04348, "loss_rpn_cls": 0.02976, "loss_rpn_bbox": 0.05166, "loss_cls": 0.21558, "acc": 92.73242, "loss_bbox": 0.23815, "loss_mask": 0.23975, "loss": 0.77492, "time": 0.41066} -{"mode": "train", "epoch": 7, "iter": 2150, "lr": 0.0002, "memory": 8424, "data_time": 0.04231, "loss_rpn_cls": 0.03116, "loss_rpn_bbox": 0.05183, "loss_cls": 0.20987, "acc": 92.91553, "loss_bbox": 0.23626, "loss_mask": 0.24531, "loss": 0.77443, "time": 0.397} -{"mode": "train", "epoch": 7, "iter": 2200, "lr": 0.0002, "memory": 8424, "data_time": 0.04624, "loss_rpn_cls": 0.02797, "loss_rpn_bbox": 0.05016, "loss_cls": 0.21298, "acc": 92.68701, "loss_bbox": 0.23996, "loss_mask": 0.24147, "loss": 0.77254, "time": 0.46399} -{"mode": "train", "epoch": 7, "iter": 2250, "lr": 0.0002, "memory": 8424, "data_time": 0.0457, "loss_rpn_cls": 0.02998, "loss_rpn_bbox": 0.04752, "loss_cls": 0.21536, "acc": 92.63745, "loss_bbox": 0.23969, "loss_mask": 0.23993, "loss": 0.77248, "time": 0.40989} -{"mode": "train", "epoch": 7, "iter": 2300, "lr": 0.0002, "memory": 8424, "data_time": 0.05584, "loss_rpn_cls": 0.0317, "loss_rpn_bbox": 0.05459, "loss_cls": 0.22506, "acc": 92.10669, "loss_bbox": 0.25201, "loss_mask": 0.24482, "loss": 0.80818, "time": 0.41265} -{"mode": "train", "epoch": 7, "iter": 2350, "lr": 0.0002, "memory": 8424, "data_time": 0.04781, "loss_rpn_cls": 0.03004, "loss_rpn_bbox": 0.05201, "loss_cls": 0.21954, "acc": 92.72095, "loss_bbox": 0.23782, "loss_mask": 0.24487, "loss": 0.78427, "time": 0.41183} -{"mode": "train", "epoch": 7, "iter": 2400, "lr": 0.0002, "memory": 8424, "data_time": 0.04818, "loss_rpn_cls": 0.02911, "loss_rpn_bbox": 0.05389, "loss_cls": 0.22494, "acc": 92.18213, "loss_bbox": 0.25426, "loss_mask": 0.2521, "loss": 0.81429, "time": 0.45658} -{"mode": "train", "epoch": 7, "iter": 2450, "lr": 0.0002, "memory": 8424, "data_time": 0.04551, "loss_rpn_cls": 0.02747, "loss_rpn_bbox": 0.05058, "loss_cls": 0.21009, "acc": 92.74902, "loss_bbox": 0.24025, "loss_mask": 0.24351, "loss": 0.7719, "time": 0.4678} -{"mode": "train", "epoch": 7, "iter": 2500, "lr": 0.0002, "memory": 8424, "data_time": 0.05195, "loss_rpn_cls": 0.02845, "loss_rpn_bbox": 0.04724, "loss_cls": 0.21183, "acc": 92.79175, "loss_bbox": 
0.23953, "loss_mask": 0.23915, "loss": 0.7662, "time": 0.4049} -{"mode": "train", "epoch": 7, "iter": 2550, "lr": 0.0002, "memory": 8424, "data_time": 0.04102, "loss_rpn_cls": 0.02515, "loss_rpn_bbox": 0.04773, "loss_cls": 0.20579, "acc": 92.88599, "loss_bbox": 0.23919, "loss_mask": 0.23914, "loss": 0.75701, "time": 0.40035} -{"mode": "train", "epoch": 7, "iter": 2600, "lr": 0.0002, "memory": 8424, "data_time": 0.0504, "loss_rpn_cls": 0.0293, "loss_rpn_bbox": 0.05004, "loss_cls": 0.22305, "acc": 92.16138, "loss_bbox": 0.25219, "loss_mask": 0.24268, "loss": 0.79726, "time": 0.40964} -{"mode": "train", "epoch": 7, "iter": 2650, "lr": 0.0002, "memory": 8424, "data_time": 0.04104, "loss_rpn_cls": 0.02877, "loss_rpn_bbox": 0.0496, "loss_cls": 0.21223, "acc": 92.7207, "loss_bbox": 0.23782, "loss_mask": 0.24291, "loss": 0.77133, "time": 0.41107} -{"mode": "train", "epoch": 7, "iter": 2700, "lr": 0.0002, "memory": 8424, "data_time": 0.04888, "loss_rpn_cls": 0.03034, "loss_rpn_bbox": 0.05215, "loss_cls": 0.21249, "acc": 92.8269, "loss_bbox": 0.24005, "loss_mask": 0.24733, "loss": 0.78235, "time": 0.41143} -{"mode": "train", "epoch": 7, "iter": 2750, "lr": 0.0002, "memory": 8424, "data_time": 0.04285, "loss_rpn_cls": 0.02751, "loss_rpn_bbox": 0.0483, "loss_cls": 0.22019, "acc": 92.5061, "loss_bbox": 0.24302, "loss_mask": 0.24301, "loss": 0.78203, "time": 0.39929} -{"mode": "train", "epoch": 7, "iter": 2800, "lr": 0.0002, "memory": 8424, "data_time": 0.04787, "loss_rpn_cls": 0.02959, "loss_rpn_bbox": 0.05024, "loss_cls": 0.22924, "acc": 92.34033, "loss_bbox": 0.24539, "loss_mask": 0.24835, "loss": 0.80281, "time": 0.41239} -{"mode": "train", "epoch": 7, "iter": 2850, "lr": 0.0002, "memory": 8424, "data_time": 0.04151, "loss_rpn_cls": 0.02672, "loss_rpn_bbox": 0.04823, "loss_cls": 0.21021, "acc": 92.90283, "loss_bbox": 0.23199, "loss_mask": 0.23454, "loss": 0.75169, "time": 0.40238} -{"mode": "train", "epoch": 7, "iter": 2900, "lr": 0.0002, "memory": 8424, "data_time": 0.04835, "loss_rpn_cls": 0.03228, "loss_rpn_bbox": 0.05434, "loss_cls": 0.22507, "acc": 92.29517, "loss_bbox": 0.24995, "loss_mask": 0.24054, "loss": 0.80217, "time": 0.41996} -{"mode": "train", "epoch": 7, "iter": 2950, "lr": 0.0002, "memory": 8424, "data_time": 0.05054, "loss_rpn_cls": 0.03092, "loss_rpn_bbox": 0.04958, "loss_cls": 0.21915, "acc": 92.55981, "loss_bbox": 0.23926, "loss_mask": 0.23819, "loss": 0.77711, "time": 0.40957} -{"mode": "train", "epoch": 7, "iter": 3000, "lr": 0.0002, "memory": 8424, "data_time": 0.04916, "loss_rpn_cls": 0.02975, "loss_rpn_bbox": 0.05339, "loss_cls": 0.22306, "acc": 92.45923, "loss_bbox": 0.24224, "loss_mask": 0.23913, "loss": 0.78756, "time": 0.3957} -{"mode": "train", "epoch": 7, "iter": 3050, "lr": 0.0002, "memory": 8424, "data_time": 0.04381, "loss_rpn_cls": 0.02914, "loss_rpn_bbox": 0.05066, "loss_cls": 0.21687, "acc": 92.63916, "loss_bbox": 0.23795, "loss_mask": 0.24173, "loss": 0.77635, "time": 0.39558} -{"mode": "train", "epoch": 7, "iter": 3100, "lr": 0.0002, "memory": 8424, "data_time": 0.04971, "loss_rpn_cls": 0.02717, "loss_rpn_bbox": 0.04864, "loss_cls": 0.20694, "acc": 92.98706, "loss_bbox": 0.23448, "loss_mask": 0.24067, "loss": 0.75791, "time": 0.41311} -{"mode": "train", "epoch": 7, "iter": 3150, "lr": 0.0002, "memory": 8424, "data_time": 0.05169, "loss_rpn_cls": 0.03254, "loss_rpn_bbox": 0.0533, "loss_cls": 0.21845, "acc": 92.40405, "loss_bbox": 0.25161, "loss_mask": 0.24327, "loss": 0.79917, "time": 0.42402} -{"mode": "train", "epoch": 7, "iter": 3200, "lr": 0.0002, "memory": 
8424, "data_time": 0.04257, "loss_rpn_cls": 0.03041, "loss_rpn_bbox": 0.04971, "loss_cls": 0.2197, "acc": 92.55493, "loss_bbox": 0.2462, "loss_mask": 0.24363, "loss": 0.78965, "time": 0.40524} -{"mode": "train", "epoch": 7, "iter": 3250, "lr": 0.0002, "memory": 8424, "data_time": 0.04077, "loss_rpn_cls": 0.02793, "loss_rpn_bbox": 0.04841, "loss_cls": 0.21364, "acc": 92.77368, "loss_bbox": 0.23661, "loss_mask": 0.23321, "loss": 0.75979, "time": 0.40856} -{"mode": "train", "epoch": 7, "iter": 3300, "lr": 0.0002, "memory": 8424, "data_time": 0.04488, "loss_rpn_cls": 0.02668, "loss_rpn_bbox": 0.04652, "loss_cls": 0.21313, "acc": 92.72192, "loss_bbox": 0.24026, "loss_mask": 0.2396, "loss": 0.7662, "time": 0.3987} -{"mode": "train", "epoch": 7, "iter": 3350, "lr": 0.0002, "memory": 8424, "data_time": 0.04336, "loss_rpn_cls": 0.02619, "loss_rpn_bbox": 0.04586, "loss_cls": 0.2071, "acc": 92.9104, "loss_bbox": 0.2328, "loss_mask": 0.23533, "loss": 0.74729, "time": 0.39276} -{"mode": "train", "epoch": 7, "iter": 3400, "lr": 0.0002, "memory": 8424, "data_time": 0.04976, "loss_rpn_cls": 0.02738, "loss_rpn_bbox": 0.04847, "loss_cls": 0.21869, "acc": 92.49414, "loss_bbox": 0.24551, "loss_mask": 0.24161, "loss": 0.78166, "time": 0.40042} -{"mode": "train", "epoch": 7, "iter": 3450, "lr": 0.0002, "memory": 8424, "data_time": 0.04329, "loss_rpn_cls": 0.02744, "loss_rpn_bbox": 0.05142, "loss_cls": 0.21359, "acc": 92.59473, "loss_bbox": 0.24066, "loss_mask": 0.23824, "loss": 0.77134, "time": 0.46843} -{"mode": "train", "epoch": 7, "iter": 3500, "lr": 0.0002, "memory": 8424, "data_time": 0.04177, "loss_rpn_cls": 0.02661, "loss_rpn_bbox": 0.04638, "loss_cls": 0.20557, "acc": 93.02783, "loss_bbox": 0.2294, "loss_mask": 0.23522, "loss": 0.74319, "time": 0.39978} -{"mode": "train", "epoch": 7, "iter": 3550, "lr": 0.0002, "memory": 8424, "data_time": 0.04795, "loss_rpn_cls": 0.0267, "loss_rpn_bbox": 0.04751, "loss_cls": 0.21637, "acc": 92.76636, "loss_bbox": 0.23846, "loss_mask": 0.24144, "loss": 0.77048, "time": 0.45036} -{"mode": "train", "epoch": 7, "iter": 3600, "lr": 0.0002, "memory": 8424, "data_time": 0.0441, "loss_rpn_cls": 0.02974, "loss_rpn_bbox": 0.05019, "loss_cls": 0.21738, "acc": 92.54053, "loss_bbox": 0.24187, "loss_mask": 0.23942, "loss": 0.77861, "time": 0.45149} -{"mode": "train", "epoch": 7, "iter": 3650, "lr": 0.0002, "memory": 8424, "data_time": 0.04701, "loss_rpn_cls": 0.0286, "loss_rpn_bbox": 0.04967, "loss_cls": 0.22032, "acc": 92.58813, "loss_bbox": 0.23671, "loss_mask": 0.24941, "loss": 0.78473, "time": 0.50228} -{"mode": "train", "epoch": 7, "iter": 3700, "lr": 0.0002, "memory": 8424, "data_time": 0.04498, "loss_rpn_cls": 0.0293, "loss_rpn_bbox": 0.04769, "loss_cls": 0.20664, "acc": 92.96899, "loss_bbox": 0.23162, "loss_mask": 0.23573, "loss": 0.75097, "time": 0.39368} -{"mode": "train", "epoch": 7, "iter": 3750, "lr": 0.0002, "memory": 8424, "data_time": 0.04896, "loss_rpn_cls": 0.03004, "loss_rpn_bbox": 0.05076, "loss_cls": 0.21879, "acc": 92.69995, "loss_bbox": 0.24029, "loss_mask": 0.24369, "loss": 0.78357, "time": 0.40924} -{"mode": "train", "epoch": 7, "iter": 3800, "lr": 0.0002, "memory": 8424, "data_time": 0.04776, "loss_rpn_cls": 0.0327, "loss_rpn_bbox": 0.05348, "loss_cls": 0.22696, "acc": 92.26172, "loss_bbox": 0.24879, "loss_mask": 0.24151, "loss": 0.80343, "time": 0.42972} -{"mode": "train", "epoch": 7, "iter": 3850, "lr": 0.0002, "memory": 8424, "data_time": 0.04116, "loss_rpn_cls": 0.02475, "loss_rpn_bbox": 0.04749, "loss_cls": 0.21981, "acc": 92.55737, "loss_bbox": 
0.23887, "loss_mask": 0.24508, "loss": 0.776, "time": 0.40322} -{"mode": "train", "epoch": 7, "iter": 3900, "lr": 0.0002, "memory": 8424, "data_time": 0.044, "loss_rpn_cls": 0.02699, "loss_rpn_bbox": 0.04638, "loss_cls": 0.22093, "acc": 92.40332, "loss_bbox": 0.24687, "loss_mask": 0.24276, "loss": 0.78393, "time": 0.41596} -{"mode": "train", "epoch": 7, "iter": 3950, "lr": 0.0002, "memory": 8424, "data_time": 0.04007, "loss_rpn_cls": 0.02806, "loss_rpn_bbox": 0.04869, "loss_cls": 0.21858, "acc": 92.62939, "loss_bbox": 0.24027, "loss_mask": 0.24194, "loss": 0.77754, "time": 0.40937} -{"mode": "train", "epoch": 7, "iter": 4000, "lr": 0.0002, "memory": 8424, "data_time": 0.04407, "loss_rpn_cls": 0.02676, "loss_rpn_bbox": 0.04735, "loss_cls": 0.20636, "acc": 93.00073, "loss_bbox": 0.23393, "loss_mask": 0.24303, "loss": 0.75743, "time": 0.39283} -{"mode": "train", "epoch": 7, "iter": 4050, "lr": 0.0002, "memory": 8424, "data_time": 0.05102, "loss_rpn_cls": 0.03024, "loss_rpn_bbox": 0.05142, "loss_cls": 0.22529, "acc": 92.35376, "loss_bbox": 0.24502, "loss_mask": 0.23723, "loss": 0.78921, "time": 0.40553} -{"mode": "train", "epoch": 7, "iter": 4100, "lr": 0.0002, "memory": 8424, "data_time": 0.05841, "loss_rpn_cls": 0.02819, "loss_rpn_bbox": 0.04851, "loss_cls": 0.21482, "acc": 92.69604, "loss_bbox": 0.23725, "loss_mask": 0.24221, "loss": 0.77097, "time": 0.40187} -{"mode": "train", "epoch": 7, "iter": 4150, "lr": 0.0002, "memory": 8424, "data_time": 0.04523, "loss_rpn_cls": 0.03052, "loss_rpn_bbox": 0.05062, "loss_cls": 0.21831, "acc": 92.49121, "loss_bbox": 0.23895, "loss_mask": 0.24625, "loss": 0.78464, "time": 0.4081} -{"mode": "train", "epoch": 7, "iter": 4200, "lr": 0.0002, "memory": 8424, "data_time": 0.04687, "loss_rpn_cls": 0.02845, "loss_rpn_bbox": 0.05069, "loss_cls": 0.21439, "acc": 92.66382, "loss_bbox": 0.24187, "loss_mask": 0.24708, "loss": 0.78248, "time": 0.40785} -{"mode": "train", "epoch": 7, "iter": 4250, "lr": 0.0002, "memory": 8424, "data_time": 0.04509, "loss_rpn_cls": 0.02702, "loss_rpn_bbox": 0.04685, "loss_cls": 0.21246, "acc": 92.80005, "loss_bbox": 0.23594, "loss_mask": 0.24005, "loss": 0.76231, "time": 0.39708} -{"mode": "train", "epoch": 7, "iter": 4300, "lr": 0.0002, "memory": 8424, "data_time": 0.05665, "loss_rpn_cls": 0.03014, "loss_rpn_bbox": 0.05014, "loss_cls": 0.21455, "acc": 92.66919, "loss_bbox": 0.23875, "loss_mask": 0.237, "loss": 0.77059, "time": 0.41281} -{"mode": "train", "epoch": 7, "iter": 4350, "lr": 0.0002, "memory": 8424, "data_time": 0.04592, "loss_rpn_cls": 0.02862, "loss_rpn_bbox": 0.04839, "loss_cls": 0.20796, "acc": 92.91772, "loss_bbox": 0.23312, "loss_mask": 0.23719, "loss": 0.75527, "time": 0.40858} -{"mode": "train", "epoch": 7, "iter": 4400, "lr": 0.0002, "memory": 8424, "data_time": 0.05213, "loss_rpn_cls": 0.03167, "loss_rpn_bbox": 0.05413, "loss_cls": 0.21927, "acc": 92.4021, "loss_bbox": 0.24903, "loss_mask": 0.241, "loss": 0.7951, "time": 0.39231} -{"mode": "train", "epoch": 7, "iter": 4450, "lr": 0.0002, "memory": 8424, "data_time": 0.05089, "loss_rpn_cls": 0.02695, "loss_rpn_bbox": 0.05051, "loss_cls": 0.2134, "acc": 92.71753, "loss_bbox": 0.24163, "loss_mask": 0.24447, "loss": 0.77696, "time": 0.40287} -{"mode": "train", "epoch": 7, "iter": 4500, "lr": 0.0002, "memory": 8424, "data_time": 0.04992, "loss_rpn_cls": 0.0296, "loss_rpn_bbox": 0.05319, "loss_cls": 0.22827, "acc": 92.30884, "loss_bbox": 0.25, "loss_mask": 0.24665, "loss": 0.80771, "time": 0.41074} -{"mode": "train", "epoch": 7, "iter": 4550, "lr": 0.0002, "memory": 
8424, "data_time": 0.04335, "loss_rpn_cls": 0.02716, "loss_rpn_bbox": 0.04786, "loss_cls": 0.21269, "acc": 92.84863, "loss_bbox": 0.23266, "loss_mask": 0.23345, "loss": 0.75382, "time": 0.40099} -{"mode": "train", "epoch": 7, "iter": 4600, "lr": 0.0002, "memory": 8424, "data_time": 0.04578, "loss_rpn_cls": 0.03088, "loss_rpn_bbox": 0.04846, "loss_cls": 0.22119, "acc": 92.50537, "loss_bbox": 0.24148, "loss_mask": 0.23871, "loss": 0.78071, "time": 0.41185} -{"mode": "train", "epoch": 7, "iter": 4650, "lr": 0.0002, "memory": 8424, "data_time": 0.05218, "loss_rpn_cls": 0.03177, "loss_rpn_bbox": 0.04977, "loss_cls": 0.22457, "acc": 92.67505, "loss_bbox": 0.24208, "loss_mask": 0.25199, "loss": 0.80018, "time": 0.5229} -{"mode": "train", "epoch": 7, "iter": 4700, "lr": 0.0002, "memory": 8424, "data_time": 0.04255, "loss_rpn_cls": 0.02861, "loss_rpn_bbox": 0.05067, "loss_cls": 0.2211, "acc": 92.54272, "loss_bbox": 0.24684, "loss_mask": 0.24625, "loss": 0.79347, "time": 0.45175} -{"mode": "train", "epoch": 7, "iter": 4750, "lr": 0.0002, "memory": 8424, "data_time": 0.05404, "loss_rpn_cls": 0.0281, "loss_rpn_bbox": 0.05204, "loss_cls": 0.22043, "acc": 92.41553, "loss_bbox": 0.24776, "loss_mask": 0.24083, "loss": 0.78916, "time": 0.41138} -{"mode": "train", "epoch": 7, "iter": 4800, "lr": 0.0002, "memory": 8424, "data_time": 0.0468, "loss_rpn_cls": 0.02777, "loss_rpn_bbox": 0.04789, "loss_cls": 0.2206, "acc": 92.66479, "loss_bbox": 0.2388, "loss_mask": 0.2389, "loss": 0.77397, "time": 0.39628} -{"mode": "train", "epoch": 7, "iter": 4850, "lr": 0.0002, "memory": 8424, "data_time": 0.04006, "loss_rpn_cls": 0.02671, "loss_rpn_bbox": 0.04634, "loss_cls": 0.21156, "acc": 92.9585, "loss_bbox": 0.22816, "loss_mask": 0.23572, "loss": 0.74848, "time": 0.38815} -{"mode": "train", "epoch": 7, "iter": 4900, "lr": 0.0002, "memory": 8424, "data_time": 0.04087, "loss_rpn_cls": 0.0319, "loss_rpn_bbox": 0.05146, "loss_cls": 0.21907, "acc": 92.57593, "loss_bbox": 0.23839, "loss_mask": 0.23982, "loss": 0.78064, "time": 0.40387} -{"mode": "train", "epoch": 7, "iter": 4950, "lr": 0.0002, "memory": 8424, "data_time": 0.0506, "loss_rpn_cls": 0.03005, "loss_rpn_bbox": 0.05068, "loss_cls": 0.22345, "acc": 92.47632, "loss_bbox": 0.25018, "loss_mask": 0.23812, "loss": 0.79248, "time": 0.41473} -{"mode": "train", "epoch": 7, "iter": 5000, "lr": 0.0002, "memory": 8424, "data_time": 0.04381, "loss_rpn_cls": 0.02866, "loss_rpn_bbox": 0.05009, "loss_cls": 0.21517, "acc": 92.6272, "loss_bbox": 0.24436, "loss_mask": 0.24635, "loss": 0.78463, "time": 0.40096} -{"mode": "train", "epoch": 7, "iter": 5050, "lr": 0.0002, "memory": 8424, "data_time": 0.04166, "loss_rpn_cls": 0.03249, "loss_rpn_bbox": 0.04926, "loss_cls": 0.21754, "acc": 92.58618, "loss_bbox": 0.24159, "loss_mask": 0.2409, "loss": 0.78178, "time": 0.39901} -{"mode": "train", "epoch": 7, "iter": 5100, "lr": 0.0002, "memory": 8424, "data_time": 0.05445, "loss_rpn_cls": 0.02985, "loss_rpn_bbox": 0.05183, "loss_cls": 0.22316, "acc": 92.37402, "loss_bbox": 0.24438, "loss_mask": 0.24507, "loss": 0.79429, "time": 0.4204} -{"mode": "train", "epoch": 7, "iter": 5150, "lr": 0.0002, "memory": 8424, "data_time": 0.04156, "loss_rpn_cls": 0.03019, "loss_rpn_bbox": 0.04859, "loss_cls": 0.21846, "acc": 92.64233, "loss_bbox": 0.23935, "loss_mask": 0.23565, "loss": 0.77224, "time": 0.39366} -{"mode": "train", "epoch": 7, "iter": 5200, "lr": 0.0002, "memory": 8424, "data_time": 0.04153, "loss_rpn_cls": 0.02791, "loss_rpn_bbox": 0.04741, "loss_cls": 0.2114, "acc": 92.69922, "loss_bbox": 
0.23665, "loss_mask": 0.23544, "loss": 0.75881, "time": 0.40704} -{"mode": "train", "epoch": 7, "iter": 5250, "lr": 0.0002, "memory": 8424, "data_time": 0.04396, "loss_rpn_cls": 0.02771, "loss_rpn_bbox": 0.04705, "loss_cls": 0.20316, "acc": 92.98682, "loss_bbox": 0.22886, "loss_mask": 0.23896, "loss": 0.74574, "time": 0.40211} -{"mode": "train", "epoch": 7, "iter": 5300, "lr": 0.0002, "memory": 8424, "data_time": 0.05625, "loss_rpn_cls": 0.02678, "loss_rpn_bbox": 0.04724, "loss_cls": 0.21081, "acc": 92.87769, "loss_bbox": 0.2299, "loss_mask": 0.23809, "loss": 0.75283, "time": 0.40896} -{"mode": "train", "epoch": 7, "iter": 5350, "lr": 0.0002, "memory": 8424, "data_time": 0.04466, "loss_rpn_cls": 0.031, "loss_rpn_bbox": 0.05148, "loss_cls": 0.21718, "acc": 92.59351, "loss_bbox": 0.23955, "loss_mask": 0.24882, "loss": 0.78804, "time": 0.40924} -{"mode": "train", "epoch": 7, "iter": 5400, "lr": 0.0002, "memory": 8424, "data_time": 0.0461, "loss_rpn_cls": 0.03158, "loss_rpn_bbox": 0.05129, "loss_cls": 0.22283, "acc": 92.43799, "loss_bbox": 0.24498, "loss_mask": 0.24404, "loss": 0.79472, "time": 0.41723} -{"mode": "train", "epoch": 7, "iter": 5450, "lr": 0.0002, "memory": 8424, "data_time": 0.04161, "loss_rpn_cls": 0.02835, "loss_rpn_bbox": 0.05028, "loss_cls": 0.22295, "acc": 92.49927, "loss_bbox": 0.24258, "loss_mask": 0.24098, "loss": 0.78513, "time": 0.44865} -{"mode": "train", "epoch": 7, "iter": 5500, "lr": 0.0002, "memory": 8424, "data_time": 0.04882, "loss_rpn_cls": 0.03028, "loss_rpn_bbox": 0.05172, "loss_cls": 0.22387, "acc": 92.34644, "loss_bbox": 0.24999, "loss_mask": 0.24185, "loss": 0.79772, "time": 0.42128} -{"mode": "train", "epoch": 7, "iter": 5550, "lr": 0.0002, "memory": 8424, "data_time": 0.04566, "loss_rpn_cls": 0.02746, "loss_rpn_bbox": 0.04922, "loss_cls": 0.21959, "acc": 92.51807, "loss_bbox": 0.247, "loss_mask": 0.24361, "loss": 0.78688, "time": 0.40267} -{"mode": "train", "epoch": 7, "iter": 5600, "lr": 0.0002, "memory": 8424, "data_time": 0.04525, "loss_rpn_cls": 0.02615, "loss_rpn_bbox": 0.04858, "loss_cls": 0.21807, "acc": 92.50464, "loss_bbox": 0.24376, "loss_mask": 0.24281, "loss": 0.77936, "time": 0.41014} -{"mode": "train", "epoch": 7, "iter": 5650, "lr": 0.0002, "memory": 8424, "data_time": 0.04123, "loss_rpn_cls": 0.0284, "loss_rpn_bbox": 0.04944, "loss_cls": 0.21739, "acc": 92.604, "loss_bbox": 0.23873, "loss_mask": 0.24016, "loss": 0.77412, "time": 0.4023} -{"mode": "train", "epoch": 7, "iter": 5700, "lr": 0.0002, "memory": 8424, "data_time": 0.0428, "loss_rpn_cls": 0.02823, "loss_rpn_bbox": 0.05018, "loss_cls": 0.22028, "acc": 92.52637, "loss_bbox": 0.24193, "loss_mask": 0.23999, "loss": 0.7806, "time": 0.41251} -{"mode": "train", "epoch": 7, "iter": 5750, "lr": 0.0002, "memory": 8424, "data_time": 0.04703, "loss_rpn_cls": 0.02997, "loss_rpn_bbox": 0.04776, "loss_cls": 0.21407, "acc": 92.70605, "loss_bbox": 0.23942, "loss_mask": 0.23621, "loss": 0.76744, "time": 0.41306} -{"mode": "train", "epoch": 7, "iter": 5800, "lr": 0.0002, "memory": 8424, "data_time": 0.04262, "loss_rpn_cls": 0.0276, "loss_rpn_bbox": 0.04696, "loss_cls": 0.21555, "acc": 92.5791, "loss_bbox": 0.24169, "loss_mask": 0.23907, "loss": 0.77088, "time": 0.40036} -{"mode": "train", "epoch": 7, "iter": 5850, "lr": 0.0002, "memory": 8424, "data_time": 0.03979, "loss_rpn_cls": 0.02896, "loss_rpn_bbox": 0.05208, "loss_cls": 0.21543, "acc": 92.55591, "loss_bbox": 0.24338, "loss_mask": 0.23952, "loss": 0.77938, "time": 0.48292} -{"mode": "train", "epoch": 7, "iter": 5900, "lr": 0.0002, "memory": 
8424, "data_time": 0.04521, "loss_rpn_cls": 0.02852, "loss_rpn_bbox": 0.04949, "loss_cls": 0.21656, "acc": 92.51611, "loss_bbox": 0.24636, "loss_mask": 0.2371, "loss": 0.77803, "time": 0.4163} -{"mode": "train", "epoch": 7, "iter": 5950, "lr": 0.0002, "memory": 8424, "data_time": 0.03942, "loss_rpn_cls": 0.03, "loss_rpn_bbox": 0.0497, "loss_cls": 0.22176, "acc": 92.53711, "loss_bbox": 0.24339, "loss_mask": 0.24164, "loss": 0.78649, "time": 0.39695} -{"mode": "train", "epoch": 7, "iter": 6000, "lr": 0.0002, "memory": 8424, "data_time": 0.04749, "loss_rpn_cls": 0.02838, "loss_rpn_bbox": 0.04953, "loss_cls": 0.21216, "acc": 92.72583, "loss_bbox": 0.23491, "loss_mask": 0.23647, "loss": 0.76145, "time": 0.3969} -{"mode": "train", "epoch": 7, "iter": 6050, "lr": 0.0002, "memory": 8424, "data_time": 0.04333, "loss_rpn_cls": 0.02919, "loss_rpn_bbox": 0.05228, "loss_cls": 0.22363, "acc": 92.40259, "loss_bbox": 0.24959, "loss_mask": 0.24519, "loss": 0.79989, "time": 0.41135} -{"mode": "train", "epoch": 7, "iter": 6100, "lr": 0.0002, "memory": 8424, "data_time": 0.04363, "loss_rpn_cls": 0.0291, "loss_rpn_bbox": 0.05019, "loss_cls": 0.21881, "acc": 92.66406, "loss_bbox": 0.24194, "loss_mask": 0.24802, "loss": 0.78806, "time": 0.41512} -{"mode": "train", "epoch": 7, "iter": 6150, "lr": 0.0002, "memory": 8424, "data_time": 0.04744, "loss_rpn_cls": 0.03189, "loss_rpn_bbox": 0.05351, "loss_cls": 0.2224, "acc": 92.31641, "loss_bbox": 0.25302, "loss_mask": 0.24432, "loss": 0.80514, "time": 0.4767} -{"mode": "train", "epoch": 7, "iter": 6200, "lr": 0.0002, "memory": 8424, "data_time": 0.04753, "loss_rpn_cls": 0.02782, "loss_rpn_bbox": 0.04866, "loss_cls": 0.22873, "acc": 92.34302, "loss_bbox": 0.24204, "loss_mask": 0.23812, "loss": 0.78537, "time": 0.40097} -{"mode": "train", "epoch": 7, "iter": 6250, "lr": 0.0002, "memory": 8424, "data_time": 0.04823, "loss_rpn_cls": 0.03158, "loss_rpn_bbox": 0.05195, "loss_cls": 0.22295, "acc": 92.44946, "loss_bbox": 0.25099, "loss_mask": 0.25002, "loss": 0.80749, "time": 0.40502} -{"mode": "train", "epoch": 7, "iter": 6300, "lr": 0.0002, "memory": 8424, "data_time": 0.04326, "loss_rpn_cls": 0.02744, "loss_rpn_bbox": 0.04907, "loss_cls": 0.21025, "acc": 92.81128, "loss_bbox": 0.23257, "loss_mask": 0.23686, "loss": 0.75619, "time": 0.41229} -{"mode": "train", "epoch": 7, "iter": 6350, "lr": 0.0002, "memory": 8424, "data_time": 0.04409, "loss_rpn_cls": 0.0313, "loss_rpn_bbox": 0.05171, "loss_cls": 0.22887, "acc": 92.28979, "loss_bbox": 0.25023, "loss_mask": 0.24501, "loss": 0.80712, "time": 0.43034} -{"mode": "train", "epoch": 7, "iter": 6400, "lr": 0.0002, "memory": 8424, "data_time": 0.04189, "loss_rpn_cls": 0.02892, "loss_rpn_bbox": 0.04863, "loss_cls": 0.22415, "acc": 92.44482, "loss_bbox": 0.24542, "loss_mask": 0.23901, "loss": 0.78613, "time": 0.5053} -{"mode": "train", "epoch": 7, "iter": 6450, "lr": 0.0002, "memory": 8424, "data_time": 0.04372, "loss_rpn_cls": 0.03068, "loss_rpn_bbox": 0.05083, "loss_cls": 0.22465, "acc": 92.42554, "loss_bbox": 0.24085, "loss_mask": 0.23764, "loss": 0.78465, "time": 0.40798} -{"mode": "train", "epoch": 7, "iter": 6500, "lr": 0.0002, "memory": 8424, "data_time": 0.04665, "loss_rpn_cls": 0.02801, "loss_rpn_bbox": 0.04936, "loss_cls": 0.22135, "acc": 92.34839, "loss_bbox": 0.24952, "loss_mask": 0.24325, "loss": 0.79149, "time": 0.41295} -{"mode": "train", "epoch": 7, "iter": 6550, "lr": 0.0002, "memory": 8424, "data_time": 0.05189, "loss_rpn_cls": 0.03268, "loss_rpn_bbox": 0.05207, "loss_cls": 0.21935, "acc": 92.43774, "loss_bbox": 
0.24315, "loss_mask": 0.23912, "loss": 0.78638, "time": 0.40482} -{"mode": "train", "epoch": 7, "iter": 6600, "lr": 0.0002, "memory": 8424, "data_time": 0.04852, "loss_rpn_cls": 0.03054, "loss_rpn_bbox": 0.05267, "loss_cls": 0.23629, "acc": 92.00049, "loss_bbox": 0.25432, "loss_mask": 0.24794, "loss": 0.82177, "time": 0.40882} -{"mode": "train", "epoch": 7, "iter": 6650, "lr": 0.0002, "memory": 8424, "data_time": 0.04889, "loss_rpn_cls": 0.03148, "loss_rpn_bbox": 0.05025, "loss_cls": 0.21798, "acc": 92.50049, "loss_bbox": 0.24385, "loss_mask": 0.24369, "loss": 0.78725, "time": 0.42823} -{"mode": "train", "epoch": 7, "iter": 6700, "lr": 0.0002, "memory": 8424, "data_time": 0.04581, "loss_rpn_cls": 0.02937, "loss_rpn_bbox": 0.05244, "loss_cls": 0.21591, "acc": 92.61646, "loss_bbox": 0.24363, "loss_mask": 0.23972, "loss": 0.78108, "time": 0.4108} -{"mode": "train", "epoch": 7, "iter": 6750, "lr": 0.0002, "memory": 8424, "data_time": 0.06324, "loss_rpn_cls": 0.02948, "loss_rpn_bbox": 0.05197, "loss_cls": 0.21952, "acc": 92.59082, "loss_bbox": 0.24084, "loss_mask": 0.24173, "loss": 0.78354, "time": 0.40712} -{"mode": "train", "epoch": 7, "iter": 6800, "lr": 0.0002, "memory": 8424, "data_time": 0.06056, "loss_rpn_cls": 0.03184, "loss_rpn_bbox": 0.0538, "loss_cls": 0.23233, "acc": 92.07446, "loss_bbox": 0.26209, "loss_mask": 0.2499, "loss": 0.82996, "time": 0.42472} -{"mode": "train", "epoch": 7, "iter": 6850, "lr": 0.0002, "memory": 8424, "data_time": 0.03633, "loss_rpn_cls": 0.02821, "loss_rpn_bbox": 0.04748, "loss_cls": 0.21641, "acc": 92.65405, "loss_bbox": 0.234, "loss_mask": 0.23799, "loss": 0.76409, "time": 0.40301} -{"mode": "train", "epoch": 7, "iter": 6900, "lr": 0.0002, "memory": 8424, "data_time": 0.04261, "loss_rpn_cls": 0.02796, "loss_rpn_bbox": 0.04651, "loss_cls": 0.20859, "acc": 92.91113, "loss_bbox": 0.23463, "loss_mask": 0.24221, "loss": 0.75989, "time": 0.40209} -{"mode": "train", "epoch": 7, "iter": 6950, "lr": 0.0002, "memory": 8424, "data_time": 0.04479, "loss_rpn_cls": 0.02732, "loss_rpn_bbox": 0.04465, "loss_cls": 0.20528, "acc": 93.16968, "loss_bbox": 0.22955, "loss_mask": 0.2401, "loss": 0.7469, "time": 0.40594} -{"mode": "train", "epoch": 7, "iter": 7000, "lr": 0.0002, "memory": 8424, "data_time": 0.04251, "loss_rpn_cls": 0.02684, "loss_rpn_bbox": 0.04801, "loss_cls": 0.21228, "acc": 92.92285, "loss_bbox": 0.23491, "loss_mask": 0.24529, "loss": 0.76732, "time": 0.40305} -{"mode": "train", "epoch": 7, "iter": 7050, "lr": 0.0002, "memory": 8424, "data_time": 0.05196, "loss_rpn_cls": 0.02921, "loss_rpn_bbox": 0.0515, "loss_cls": 0.21944, "acc": 92.604, "loss_bbox": 0.23974, "loss_mask": 0.2403, "loss": 0.78018, "time": 0.41086} -{"mode": "train", "epoch": 7, "iter": 7100, "lr": 0.0002, "memory": 8424, "data_time": 0.05281, "loss_rpn_cls": 0.03137, "loss_rpn_bbox": 0.052, "loss_cls": 0.21433, "acc": 92.83667, "loss_bbox": 0.23243, "loss_mask": 0.24346, "loss": 0.77359, "time": 0.41035} -{"mode": "train", "epoch": 7, "iter": 7150, "lr": 0.0002, "memory": 8424, "data_time": 0.04361, "loss_rpn_cls": 0.03057, "loss_rpn_bbox": 0.05002, "loss_cls": 0.22805, "acc": 92.44751, "loss_bbox": 0.24215, "loss_mask": 0.24291, "loss": 0.7937, "time": 0.40474} -{"mode": "train", "epoch": 7, "iter": 7200, "lr": 0.0002, "memory": 8424, "data_time": 0.05686, "loss_rpn_cls": 0.03148, "loss_rpn_bbox": 0.05458, "loss_cls": 0.23167, "acc": 92.33374, "loss_bbox": 0.25663, "loss_mask": 0.24872, "loss": 0.82308, "time": 0.42219} -{"mode": "train", "epoch": 7, "iter": 7250, "lr": 0.0002, "memory": 
8424, "data_time": 0.0536, "loss_rpn_cls": 0.03214, "loss_rpn_bbox": 0.05019, "loss_cls": 0.21918, "acc": 92.66602, "loss_bbox": 0.24363, "loss_mask": 0.24186, "loss": 0.78699, "time": 0.41115} -{"mode": "train", "epoch": 7, "iter": 7300, "lr": 0.0002, "memory": 8424, "data_time": 0.04161, "loss_rpn_cls": 0.0287, "loss_rpn_bbox": 0.04742, "loss_cls": 0.21971, "acc": 92.50146, "loss_bbox": 0.24476, "loss_mask": 0.23846, "loss": 0.77905, "time": 0.45613} -{"mode": "val", "epoch": 7, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3603, "bbox_mAP_50": 0.5773, "bbox_mAP_75": 0.391, "bbox_mAP_s": 0.1998, "bbox_mAP_m": 0.3938, "bbox_mAP_l": 0.471, "bbox_mAP_copypaste": "0.3603 0.5773 0.3910 0.1998 0.3938 0.4710", "segm_mAP": 0.3421, "segm_mAP_50": 0.5511, "segm_mAP_75": 0.3658, "segm_mAP_s": 0.1536, "segm_mAP_m": 0.3692, "segm_mAP_l": 0.5001, "segm_mAP_copypaste": "0.3421 0.5511 0.3658 0.1536 0.3692 0.5001"} -{"mode": "train", "epoch": 8, "iter": 50, "lr": 0.0002, "memory": 8424, "data_time": 0.12046, "loss_rpn_cls": 0.0237, "loss_rpn_bbox": 0.04494, "loss_cls": 0.19789, "acc": 93.25122, "loss_bbox": 0.22126, "loss_mask": 0.23234, "loss": 0.72014, "time": 0.50485} -{"mode": "train", "epoch": 8, "iter": 100, "lr": 0.0002, "memory": 8424, "data_time": 0.05537, "loss_rpn_cls": 0.02653, "loss_rpn_bbox": 0.04786, "loss_cls": 0.2083, "acc": 92.9082, "loss_bbox": 0.23433, "loss_mask": 0.23189, "loss": 0.74891, "time": 0.43056} -{"mode": "train", "epoch": 8, "iter": 150, "lr": 0.0002, "memory": 8424, "data_time": 0.04783, "loss_rpn_cls": 0.02423, "loss_rpn_bbox": 0.0442, "loss_cls": 0.19489, "acc": 93.29321, "loss_bbox": 0.21811, "loss_mask": 0.22835, "loss": 0.70978, "time": 0.40606} -{"mode": "train", "epoch": 8, "iter": 200, "lr": 0.0002, "memory": 8424, "data_time": 0.05034, "loss_rpn_cls": 0.02877, "loss_rpn_bbox": 0.05075, "loss_cls": 0.19464, "acc": 93.23755, "loss_bbox": 0.22899, "loss_mask": 0.23791, "loss": 0.74106, "time": 0.41614} -{"mode": "train", "epoch": 8, "iter": 250, "lr": 0.0002, "memory": 8424, "data_time": 0.04985, "loss_rpn_cls": 0.02791, "loss_rpn_bbox": 0.05027, "loss_cls": 0.20959, "acc": 92.72388, "loss_bbox": 0.23587, "loss_mask": 0.23476, "loss": 0.7584, "time": 0.41289} -{"mode": "train", "epoch": 8, "iter": 300, "lr": 0.0002, "memory": 8424, "data_time": 0.05489, "loss_rpn_cls": 0.02755, "loss_rpn_bbox": 0.053, "loss_cls": 0.21147, "acc": 92.54175, "loss_bbox": 0.24133, "loss_mask": 0.23415, "loss": 0.76751, "time": 0.43018} -{"mode": "train", "epoch": 8, "iter": 350, "lr": 0.0002, "memory": 8424, "data_time": 0.04092, "loss_rpn_cls": 0.02507, "loss_rpn_bbox": 0.04744, "loss_cls": 0.20033, "acc": 93.15576, "loss_bbox": 0.2252, "loss_mask": 0.23576, "loss": 0.73381, "time": 0.41117} -{"mode": "train", "epoch": 8, "iter": 400, "lr": 0.0002, "memory": 8424, "data_time": 0.04275, "loss_rpn_cls": 0.02834, "loss_rpn_bbox": 0.0496, "loss_cls": 0.20337, "acc": 93.01538, "loss_bbox": 0.22826, "loss_mask": 0.23342, "loss": 0.74299, "time": 0.40774} -{"mode": "train", "epoch": 8, "iter": 450, "lr": 0.0002, "memory": 8424, "data_time": 0.04945, "loss_rpn_cls": 0.02353, "loss_rpn_bbox": 0.04391, "loss_cls": 0.20041, "acc": 93.04956, "loss_bbox": 0.22698, "loss_mask": 0.23626, "loss": 0.73109, "time": 0.39995} -{"mode": "train", "epoch": 8, "iter": 500, "lr": 0.0002, "memory": 8424, "data_time": 0.04051, "loss_rpn_cls": 0.0257, "loss_rpn_bbox": 0.04736, "loss_cls": 0.21013, "acc": 92.57642, "loss_bbox": 0.24922, "loss_mask": 0.23433, "loss": 0.76675, "time": 0.40183} -{"mode": "train", 
"epoch": 8, "iter": 550, "lr": 0.0002, "memory": 8424, "data_time": 0.04987, "loss_rpn_cls": 0.02924, "loss_rpn_bbox": 0.04795, "loss_cls": 0.199, "acc": 93.09253, "loss_bbox": 0.22757, "loss_mask": 0.23061, "loss": 0.73438, "time": 0.41368} -{"mode": "train", "epoch": 8, "iter": 600, "lr": 0.0002, "memory": 8424, "data_time": 0.04348, "loss_rpn_cls": 0.02753, "loss_rpn_bbox": 0.04446, "loss_cls": 0.20686, "acc": 92.92773, "loss_bbox": 0.22933, "loss_mask": 0.23746, "loss": 0.74563, "time": 0.40486} -{"mode": "train", "epoch": 8, "iter": 650, "lr": 0.0002, "memory": 8424, "data_time": 0.0492, "loss_rpn_cls": 0.02734, "loss_rpn_bbox": 0.05052, "loss_cls": 0.21515, "acc": 92.66406, "loss_bbox": 0.24295, "loss_mask": 0.23978, "loss": 0.77574, "time": 0.39677} -{"mode": "train", "epoch": 8, "iter": 700, "lr": 0.0002, "memory": 8424, "data_time": 0.04337, "loss_rpn_cls": 0.02673, "loss_rpn_bbox": 0.04663, "loss_cls": 0.21248, "acc": 92.83496, "loss_bbox": 0.23173, "loss_mask": 0.23293, "loss": 0.7505, "time": 0.40373} -{"mode": "train", "epoch": 8, "iter": 750, "lr": 0.0002, "memory": 8424, "data_time": 0.0453, "loss_rpn_cls": 0.02944, "loss_rpn_bbox": 0.04994, "loss_cls": 0.20832, "acc": 92.7583, "loss_bbox": 0.24276, "loss_mask": 0.23439, "loss": 0.76486, "time": 0.40487} -{"mode": "train", "epoch": 8, "iter": 800, "lr": 0.0002, "memory": 8424, "data_time": 0.05068, "loss_rpn_cls": 0.02406, "loss_rpn_bbox": 0.04685, "loss_cls": 0.19957, "acc": 93.14917, "loss_bbox": 0.22652, "loss_mask": 0.23479, "loss": 0.73178, "time": 0.40918} -{"mode": "train", "epoch": 8, "iter": 850, "lr": 0.0002, "memory": 8424, "data_time": 0.04316, "loss_rpn_cls": 0.02494, "loss_rpn_bbox": 0.04718, "loss_cls": 0.20279, "acc": 93.01685, "loss_bbox": 0.22901, "loss_mask": 0.23552, "loss": 0.73944, "time": 0.41919} -{"mode": "train", "epoch": 8, "iter": 900, "lr": 0.0002, "memory": 8424, "data_time": 0.04664, "loss_rpn_cls": 0.02771, "loss_rpn_bbox": 0.05059, "loss_cls": 0.22039, "acc": 92.46411, "loss_bbox": 0.24096, "loss_mask": 0.24278, "loss": 0.78243, "time": 0.39562} -{"mode": "train", "epoch": 8, "iter": 950, "lr": 0.0002, "memory": 8424, "data_time": 0.04994, "loss_rpn_cls": 0.02913, "loss_rpn_bbox": 0.04982, "loss_cls": 0.21411, "acc": 92.5415, "loss_bbox": 0.24438, "loss_mask": 0.23983, "loss": 0.77727, "time": 0.40862} -{"mode": "train", "epoch": 8, "iter": 1000, "lr": 0.0002, "memory": 8424, "data_time": 0.04848, "loss_rpn_cls": 0.02849, "loss_rpn_bbox": 0.04733, "loss_cls": 0.20979, "acc": 92.77417, "loss_bbox": 0.23642, "loss_mask": 0.23829, "loss": 0.76032, "time": 0.40665} -{"mode": "train", "epoch": 8, "iter": 1050, "lr": 0.0002, "memory": 8424, "data_time": 0.03603, "loss_rpn_cls": 0.02703, "loss_rpn_bbox": 0.04814, "loss_cls": 0.20529, "acc": 92.96338, "loss_bbox": 0.23327, "loss_mask": 0.23739, "loss": 0.75112, "time": 0.39305} -{"mode": "train", "epoch": 8, "iter": 1100, "lr": 0.0002, "memory": 8424, "data_time": 0.04185, "loss_rpn_cls": 0.02658, "loss_rpn_bbox": 0.04634, "loss_cls": 0.20297, "acc": 92.92285, "loss_bbox": 0.23389, "loss_mask": 0.23971, "loss": 0.74948, "time": 0.40604} -{"mode": "train", "epoch": 8, "iter": 1150, "lr": 0.0002, "memory": 8424, "data_time": 0.04666, "loss_rpn_cls": 0.02737, "loss_rpn_bbox": 0.04955, "loss_cls": 0.20358, "acc": 92.97754, "loss_bbox": 0.23006, "loss_mask": 0.2407, "loss": 0.75126, "time": 0.41242} -{"mode": "train", "epoch": 8, "iter": 1200, "lr": 0.0002, "memory": 8424, "data_time": 0.04995, "loss_rpn_cls": 0.02726, "loss_rpn_bbox": 0.04881, 
"loss_cls": 0.19789, "acc": 93.09033, "loss_bbox": 0.23176, "loss_mask": 0.23415, "loss": 0.73986, "time": 0.39286} -{"mode": "train", "epoch": 8, "iter": 1250, "lr": 0.0002, "memory": 8424, "data_time": 0.04441, "loss_rpn_cls": 0.02588, "loss_rpn_bbox": 0.04803, "loss_cls": 0.20527, "acc": 92.92456, "loss_bbox": 0.23847, "loss_mask": 0.23248, "loss": 0.75013, "time": 0.40754} -{"mode": "train", "epoch": 8, "iter": 1300, "lr": 0.0002, "memory": 8424, "data_time": 0.04813, "loss_rpn_cls": 0.03135, "loss_rpn_bbox": 0.05272, "loss_cls": 0.22255, "acc": 92.25952, "loss_bbox": 0.25044, "loss_mask": 0.24586, "loss": 0.80292, "time": 0.41263} -{"mode": "train", "epoch": 8, "iter": 1350, "lr": 0.0002, "memory": 8424, "data_time": 0.04808, "loss_rpn_cls": 0.0274, "loss_rpn_bbox": 0.04647, "loss_cls": 0.20173, "acc": 93.02808, "loss_bbox": 0.22802, "loss_mask": 0.23297, "loss": 0.7366, "time": 0.40181} -{"mode": "train", "epoch": 8, "iter": 1400, "lr": 0.0002, "memory": 8424, "data_time": 0.04674, "loss_rpn_cls": 0.02784, "loss_rpn_bbox": 0.04964, "loss_cls": 0.21766, "acc": 92.57764, "loss_bbox": 0.24253, "loss_mask": 0.23912, "loss": 0.77679, "time": 0.39577} -{"mode": "train", "epoch": 8, "iter": 1450, "lr": 0.0002, "memory": 8424, "data_time": 0.04755, "loss_rpn_cls": 0.02629, "loss_rpn_bbox": 0.05212, "loss_cls": 0.20519, "acc": 92.84961, "loss_bbox": 0.24069, "loss_mask": 0.23682, "loss": 0.76111, "time": 0.39229} -{"mode": "train", "epoch": 8, "iter": 1500, "lr": 0.0002, "memory": 8424, "data_time": 0.04312, "loss_rpn_cls": 0.02997, "loss_rpn_bbox": 0.05155, "loss_cls": 0.21439, "acc": 92.70337, "loss_bbox": 0.2445, "loss_mask": 0.24057, "loss": 0.78098, "time": 0.41517} -{"mode": "train", "epoch": 8, "iter": 1550, "lr": 0.0002, "memory": 8424, "data_time": 0.04888, "loss_rpn_cls": 0.02756, "loss_rpn_bbox": 0.0484, "loss_cls": 0.20436, "acc": 92.98193, "loss_bbox": 0.23477, "loss_mask": 0.23771, "loss": 0.7528, "time": 0.39694} -{"mode": "train", "epoch": 8, "iter": 1600, "lr": 0.0002, "memory": 8424, "data_time": 0.05156, "loss_rpn_cls": 0.02814, "loss_rpn_bbox": 0.05209, "loss_cls": 0.2074, "acc": 92.82104, "loss_bbox": 0.24242, "loss_mask": 0.23925, "loss": 0.7693, "time": 0.41055} -{"mode": "train", "epoch": 8, "iter": 1650, "lr": 0.0002, "memory": 8424, "data_time": 0.04668, "loss_rpn_cls": 0.02744, "loss_rpn_bbox": 0.04853, "loss_cls": 0.21022, "acc": 92.81421, "loss_bbox": 0.23517, "loss_mask": 0.23914, "loss": 0.76052, "time": 0.42303} -{"mode": "train", "epoch": 8, "iter": 1700, "lr": 0.0002, "memory": 8424, "data_time": 0.04011, "loss_rpn_cls": 0.02776, "loss_rpn_bbox": 0.04845, "loss_cls": 0.21013, "acc": 92.7561, "loss_bbox": 0.23887, "loss_mask": 0.24154, "loss": 0.76674, "time": 0.39347} -{"mode": "train", "epoch": 8, "iter": 1750, "lr": 0.0002, "memory": 8424, "data_time": 0.04078, "loss_rpn_cls": 0.02645, "loss_rpn_bbox": 0.04571, "loss_cls": 0.20144, "acc": 93.0083, "loss_bbox": 0.22648, "loss_mask": 0.23247, "loss": 0.73255, "time": 0.40551} -{"mode": "train", "epoch": 8, "iter": 1800, "lr": 0.0002, "memory": 8424, "data_time": 0.04971, "loss_rpn_cls": 0.02845, "loss_rpn_bbox": 0.04747, "loss_cls": 0.21166, "acc": 92.77734, "loss_bbox": 0.23442, "loss_mask": 0.23861, "loss": 0.7606, "time": 0.41232} -{"mode": "train", "epoch": 8, "iter": 1850, "lr": 0.0002, "memory": 8424, "data_time": 0.0451, "loss_rpn_cls": 0.02834, "loss_rpn_bbox": 0.05063, "loss_cls": 0.22667, "acc": 92.3147, "loss_bbox": 0.24656, "loss_mask": 0.24458, "loss": 0.79679, "time": 0.40687} -{"mode": "train", 
"epoch": 8, "iter": 1900, "lr": 0.0002, "memory": 8424, "data_time": 0.04918, "loss_rpn_cls": 0.0278, "loss_rpn_bbox": 0.04748, "loss_cls": 0.21382, "acc": 92.58154, "loss_bbox": 0.24327, "loss_mask": 0.23848, "loss": 0.77084, "time": 0.46647} -{"mode": "train", "epoch": 8, "iter": 1950, "lr": 0.0002, "memory": 8424, "data_time": 0.04232, "loss_rpn_cls": 0.02523, "loss_rpn_bbox": 0.04682, "loss_cls": 0.20088, "acc": 93.1189, "loss_bbox": 0.23124, "loss_mask": 0.23942, "loss": 0.74358, "time": 0.39678} -{"mode": "train", "epoch": 8, "iter": 2000, "lr": 0.0002, "memory": 8424, "data_time": 0.04878, "loss_rpn_cls": 0.0302, "loss_rpn_bbox": 0.05015, "loss_cls": 0.22198, "acc": 92.55835, "loss_bbox": 0.24357, "loss_mask": 0.24097, "loss": 0.78687, "time": 0.43177} -{"mode": "train", "epoch": 8, "iter": 2050, "lr": 0.0002, "memory": 8424, "data_time": 0.05712, "loss_rpn_cls": 0.02771, "loss_rpn_bbox": 0.04741, "loss_cls": 0.21069, "acc": 92.81323, "loss_bbox": 0.23426, "loss_mask": 0.23843, "loss": 0.7585, "time": 0.39998} -{"mode": "train", "epoch": 8, "iter": 2100, "lr": 0.0002, "memory": 8424, "data_time": 0.03955, "loss_rpn_cls": 0.02969, "loss_rpn_bbox": 0.04878, "loss_cls": 0.21362, "acc": 92.69312, "loss_bbox": 0.2407, "loss_mask": 0.23914, "loss": 0.77193, "time": 0.40595} -{"mode": "train", "epoch": 8, "iter": 2150, "lr": 0.0002, "memory": 8424, "data_time": 0.04643, "loss_rpn_cls": 0.02885, "loss_rpn_bbox": 0.04922, "loss_cls": 0.20962, "acc": 92.73975, "loss_bbox": 0.23796, "loss_mask": 0.23504, "loss": 0.76069, "time": 0.41581} -{"mode": "train", "epoch": 8, "iter": 2200, "lr": 0.0002, "memory": 8424, "data_time": 0.04768, "loss_rpn_cls": 0.02635, "loss_rpn_bbox": 0.04847, "loss_cls": 0.20509, "acc": 92.95972, "loss_bbox": 0.23315, "loss_mask": 0.23917, "loss": 0.75223, "time": 0.40279} -{"mode": "train", "epoch": 8, "iter": 2250, "lr": 0.0002, "memory": 8424, "data_time": 0.03794, "loss_rpn_cls": 0.02873, "loss_rpn_bbox": 0.05068, "loss_cls": 0.21465, "acc": 92.65674, "loss_bbox": 0.2404, "loss_mask": 0.23512, "loss": 0.76959, "time": 0.40458} -{"mode": "train", "epoch": 8, "iter": 2300, "lr": 0.0002, "memory": 8424, "data_time": 0.0445, "loss_rpn_cls": 0.0258, "loss_rpn_bbox": 0.04985, "loss_cls": 0.21007, "acc": 92.64355, "loss_bbox": 0.24407, "loss_mask": 0.23786, "loss": 0.76765, "time": 0.46167} -{"mode": "train", "epoch": 8, "iter": 2350, "lr": 0.0002, "memory": 8424, "data_time": 0.04373, "loss_rpn_cls": 0.02663, "loss_rpn_bbox": 0.04875, "loss_cls": 0.21114, "acc": 92.71313, "loss_bbox": 0.23649, "loss_mask": 0.23832, "loss": 0.76132, "time": 0.4061} -{"mode": "train", "epoch": 8, "iter": 2400, "lr": 0.0002, "memory": 8424, "data_time": 0.04306, "loss_rpn_cls": 0.03018, "loss_rpn_bbox": 0.05141, "loss_cls": 0.2171, "acc": 92.59106, "loss_bbox": 0.24459, "loss_mask": 0.24204, "loss": 0.78533, "time": 0.41027} -{"mode": "train", "epoch": 8, "iter": 2450, "lr": 0.0002, "memory": 8424, "data_time": 0.04252, "loss_rpn_cls": 0.02771, "loss_rpn_bbox": 0.048, "loss_cls": 0.21725, "acc": 92.62573, "loss_bbox": 0.23089, "loss_mask": 0.23673, "loss": 0.76058, "time": 0.39404} -{"mode": "train", "epoch": 8, "iter": 2500, "lr": 0.0002, "memory": 8424, "data_time": 0.05214, "loss_rpn_cls": 0.02815, "loss_rpn_bbox": 0.05136, "loss_cls": 0.21154, "acc": 92.79199, "loss_bbox": 0.2383, "loss_mask": 0.23705, "loss": 0.76639, "time": 0.41045} -{"mode": "train", "epoch": 8, "iter": 2550, "lr": 0.0002, "memory": 8424, "data_time": 0.03921, "loss_rpn_cls": 0.02592, "loss_rpn_bbox": 0.04742, 
"loss_cls": 0.20478, "acc": 92.99829, "loss_bbox": 0.23429, "loss_mask": 0.2305, "loss": 0.7429, "time": 0.41651} -{"mode": "train", "epoch": 8, "iter": 2600, "lr": 0.0002, "memory": 8424, "data_time": 0.05024, "loss_rpn_cls": 0.02655, "loss_rpn_bbox": 0.04909, "loss_cls": 0.21902, "acc": 92.52319, "loss_bbox": 0.24097, "loss_mask": 0.24019, "loss": 0.77582, "time": 0.41999} -{"mode": "train", "epoch": 8, "iter": 2650, "lr": 0.0002, "memory": 8424, "data_time": 0.04422, "loss_rpn_cls": 0.02943, "loss_rpn_bbox": 0.05042, "loss_cls": 0.20519, "acc": 92.98291, "loss_bbox": 0.2327, "loss_mask": 0.24331, "loss": 0.76105, "time": 0.41586} -{"mode": "train", "epoch": 8, "iter": 2700, "lr": 0.0002, "memory": 8424, "data_time": 0.04818, "loss_rpn_cls": 0.02855, "loss_rpn_bbox": 0.0499, "loss_cls": 0.21096, "acc": 92.78101, "loss_bbox": 0.24513, "loss_mask": 0.24403, "loss": 0.77856, "time": 0.408} -{"mode": "train", "epoch": 8, "iter": 2750, "lr": 0.0002, "memory": 8424, "data_time": 0.04281, "loss_rpn_cls": 0.02643, "loss_rpn_bbox": 0.05099, "loss_cls": 0.20857, "acc": 92.77417, "loss_bbox": 0.2391, "loss_mask": 0.24346, "loss": 0.76855, "time": 0.39298} -{"mode": "train", "epoch": 8, "iter": 2800, "lr": 0.0002, "memory": 8424, "data_time": 0.04738, "loss_rpn_cls": 0.02719, "loss_rpn_bbox": 0.04912, "loss_cls": 0.22153, "acc": 92.25488, "loss_bbox": 0.24843, "loss_mask": 0.24219, "loss": 0.78846, "time": 0.40674} -{"mode": "train", "epoch": 8, "iter": 2850, "lr": 0.0002, "memory": 8424, "data_time": 0.05421, "loss_rpn_cls": 0.03038, "loss_rpn_bbox": 0.04895, "loss_cls": 0.20987, "acc": 92.92993, "loss_bbox": 0.23263, "loss_mask": 0.23903, "loss": 0.76087, "time": 0.4042} -{"mode": "train", "epoch": 8, "iter": 2900, "lr": 0.0002, "memory": 8424, "data_time": 0.04578, "loss_rpn_cls": 0.02774, "loss_rpn_bbox": 0.04897, "loss_cls": 0.20924, "acc": 92.84839, "loss_bbox": 0.23548, "loss_mask": 0.23973, "loss": 0.76115, "time": 0.40435} -{"mode": "train", "epoch": 8, "iter": 2950, "lr": 0.0002, "memory": 8424, "data_time": 0.04108, "loss_rpn_cls": 0.02767, "loss_rpn_bbox": 0.04766, "loss_cls": 0.21223, "acc": 92.87866, "loss_bbox": 0.23397, "loss_mask": 0.23907, "loss": 0.76061, "time": 0.39894} -{"mode": "train", "epoch": 8, "iter": 3000, "lr": 0.0002, "memory": 8424, "data_time": 0.04895, "loss_rpn_cls": 0.02857, "loss_rpn_bbox": 0.04852, "loss_cls": 0.21296, "acc": 92.62964, "loss_bbox": 0.24296, "loss_mask": 0.24386, "loss": 0.77686, "time": 0.40377} -{"mode": "train", "epoch": 8, "iter": 3050, "lr": 0.0002, "memory": 8424, "data_time": 0.04315, "loss_rpn_cls": 0.02797, "loss_rpn_bbox": 0.04871, "loss_cls": 0.2036, "acc": 92.94629, "loss_bbox": 0.23355, "loss_mask": 0.24401, "loss": 0.75784, "time": 0.41554} -{"mode": "train", "epoch": 8, "iter": 3100, "lr": 0.0002, "memory": 8424, "data_time": 0.05051, "loss_rpn_cls": 0.02943, "loss_rpn_bbox": 0.05324, "loss_cls": 0.20829, "acc": 92.81763, "loss_bbox": 0.23906, "loss_mask": 0.24345, "loss": 0.77348, "time": 0.4157} -{"mode": "train", "epoch": 8, "iter": 3150, "lr": 0.0002, "memory": 8424, "data_time": 0.04387, "loss_rpn_cls": 0.02488, "loss_rpn_bbox": 0.04647, "loss_cls": 0.2047, "acc": 93.01758, "loss_bbox": 0.22822, "loss_mask": 0.23461, "loss": 0.73888, "time": 0.40456} -{"mode": "train", "epoch": 8, "iter": 3200, "lr": 0.0002, "memory": 8424, "data_time": 0.05419, "loss_rpn_cls": 0.03012, "loss_rpn_bbox": 0.05269, "loss_cls": 0.22057, "acc": 92.46558, "loss_bbox": 0.24547, "loss_mask": 0.24433, "loss": 0.79319, "time": 0.42034} -{"mode": 
"train", "epoch": 8, "iter": 3250, "lr": 0.0002, "memory": 8424, "data_time": 0.04242, "loss_rpn_cls": 0.02767, "loss_rpn_bbox": 0.04786, "loss_cls": 0.21012, "acc": 92.93872, "loss_bbox": 0.23102, "loss_mask": 0.23776, "loss": 0.75444, "time": 0.40736} -{"mode": "train", "epoch": 8, "iter": 3300, "lr": 0.0002, "memory": 8424, "data_time": 0.0385, "loss_rpn_cls": 0.02935, "loss_rpn_bbox": 0.04933, "loss_cls": 0.21567, "acc": 92.70972, "loss_bbox": 0.23105, "loss_mask": 0.23869, "loss": 0.76409, "time": 0.41411} -{"mode": "train", "epoch": 8, "iter": 3350, "lr": 0.0002, "memory": 8424, "data_time": 0.04267, "loss_rpn_cls": 0.02923, "loss_rpn_bbox": 0.04644, "loss_cls": 0.20744, "acc": 92.84155, "loss_bbox": 0.23123, "loss_mask": 0.23623, "loss": 0.75057, "time": 0.45228} -{"mode": "train", "epoch": 8, "iter": 3400, "lr": 0.0002, "memory": 8424, "data_time": 0.05142, "loss_rpn_cls": 0.02674, "loss_rpn_bbox": 0.04936, "loss_cls": 0.21504, "acc": 92.77026, "loss_bbox": 0.23818, "loss_mask": 0.24223, "loss": 0.77154, "time": 0.40933} -{"mode": "train", "epoch": 8, "iter": 3450, "lr": 0.0002, "memory": 8424, "data_time": 0.04721, "loss_rpn_cls": 0.02858, "loss_rpn_bbox": 0.05176, "loss_cls": 0.21743, "acc": 92.58374, "loss_bbox": 0.23916, "loss_mask": 0.23938, "loss": 0.77631, "time": 0.4716} -{"mode": "train", "epoch": 8, "iter": 3500, "lr": 0.0002, "memory": 8424, "data_time": 0.05597, "loss_rpn_cls": 0.02902, "loss_rpn_bbox": 0.04895, "loss_cls": 0.21451, "acc": 92.73511, "loss_bbox": 0.23564, "loss_mask": 0.23802, "loss": 0.76615, "time": 0.41096} -{"mode": "train", "epoch": 8, "iter": 3550, "lr": 0.0002, "memory": 8424, "data_time": 0.06611, "loss_rpn_cls": 0.02894, "loss_rpn_bbox": 0.05023, "loss_cls": 0.22161, "acc": 92.53345, "loss_bbox": 0.24632, "loss_mask": 0.24551, "loss": 0.79262, "time": 0.42696} -{"mode": "train", "epoch": 8, "iter": 3600, "lr": 0.0002, "memory": 8424, "data_time": 0.04593, "loss_rpn_cls": 0.02923, "loss_rpn_bbox": 0.04914, "loss_cls": 0.21128, "acc": 92.67334, "loss_bbox": 0.24161, "loss_mask": 0.23951, "loss": 0.77077, "time": 0.41682} -{"mode": "train", "epoch": 8, "iter": 3650, "lr": 0.0002, "memory": 8424, "data_time": 0.05335, "loss_rpn_cls": 0.0283, "loss_rpn_bbox": 0.04883, "loss_cls": 0.22272, "acc": 92.45044, "loss_bbox": 0.24544, "loss_mask": 0.23708, "loss": 0.78237, "time": 0.41636} -{"mode": "train", "epoch": 8, "iter": 3700, "lr": 0.0002, "memory": 8424, "data_time": 0.04684, "loss_rpn_cls": 0.02734, "loss_rpn_bbox": 0.04865, "loss_cls": 0.21773, "acc": 92.57861, "loss_bbox": 0.23722, "loss_mask": 0.24317, "loss": 0.77412, "time": 0.40123} -{"mode": "train", "epoch": 8, "iter": 3750, "lr": 0.0002, "memory": 8424, "data_time": 0.04758, "loss_rpn_cls": 0.02876, "loss_rpn_bbox": 0.04907, "loss_cls": 0.22038, "acc": 92.53052, "loss_bbox": 0.2421, "loss_mask": 0.24705, "loss": 0.78736, "time": 0.41134} -{"mode": "train", "epoch": 8, "iter": 3800, "lr": 0.0002, "memory": 8424, "data_time": 0.05319, "loss_rpn_cls": 0.02695, "loss_rpn_bbox": 0.04834, "loss_cls": 0.20696, "acc": 92.82861, "loss_bbox": 0.23426, "loss_mask": 0.23942, "loss": 0.75593, "time": 0.42467} -{"mode": "train", "epoch": 8, "iter": 3850, "lr": 0.0002, "memory": 8424, "data_time": 0.05072, "loss_rpn_cls": 0.02926, "loss_rpn_bbox": 0.04854, "loss_cls": 0.21488, "acc": 92.71753, "loss_bbox": 0.23569, "loss_mask": 0.23858, "loss": 0.76695, "time": 0.41031} -{"mode": "train", "epoch": 8, "iter": 3900, "lr": 0.0002, "memory": 8424, "data_time": 0.04147, "loss_rpn_cls": 0.02829, 
"loss_rpn_bbox": 0.04725, "loss_cls": 0.20861, "acc": 92.92456, "loss_bbox": 0.23479, "loss_mask": 0.23797, "loss": 0.7569, "time": 0.40074} -{"mode": "train", "epoch": 8, "iter": 3950, "lr": 0.0002, "memory": 8424, "data_time": 0.10925, "loss_rpn_cls": 0.02915, "loss_rpn_bbox": 0.04791, "loss_cls": 0.20995, "acc": 92.89185, "loss_bbox": 0.23366, "loss_mask": 0.23586, "loss": 0.75654, "time": 0.46744} -{"mode": "train", "epoch": 8, "iter": 4000, "lr": 0.0002, "memory": 8424, "data_time": 0.04574, "loss_rpn_cls": 0.02801, "loss_rpn_bbox": 0.04885, "loss_cls": 0.20962, "acc": 92.7041, "loss_bbox": 0.24522, "loss_mask": 0.24161, "loss": 0.77331, "time": 0.41158} -{"mode": "train", "epoch": 8, "iter": 4050, "lr": 0.0002, "memory": 8424, "data_time": 0.04606, "loss_rpn_cls": 0.02709, "loss_rpn_bbox": 0.04692, "loss_cls": 0.2101, "acc": 92.77197, "loss_bbox": 0.23643, "loss_mask": 0.2331, "loss": 0.75364, "time": 0.41486} -{"mode": "train", "epoch": 8, "iter": 4100, "lr": 0.0002, "memory": 8424, "data_time": 0.04936, "loss_rpn_cls": 0.02912, "loss_rpn_bbox": 0.04837, "loss_cls": 0.21527, "acc": 92.60767, "loss_bbox": 0.23823, "loss_mask": 0.24376, "loss": 0.77475, "time": 0.40933} -{"mode": "train", "epoch": 8, "iter": 4150, "lr": 0.0002, "memory": 8424, "data_time": 0.04523, "loss_rpn_cls": 0.02847, "loss_rpn_bbox": 0.04772, "loss_cls": 0.21676, "acc": 92.53882, "loss_bbox": 0.24168, "loss_mask": 0.24276, "loss": 0.77739, "time": 0.40489} -{"mode": "train", "epoch": 8, "iter": 4200, "lr": 0.0002, "memory": 8424, "data_time": 0.05149, "loss_rpn_cls": 0.02981, "loss_rpn_bbox": 0.04831, "loss_cls": 0.20437, "acc": 92.9895, "loss_bbox": 0.22784, "loss_mask": 0.23645, "loss": 0.74678, "time": 0.40368} -{"mode": "train", "epoch": 8, "iter": 4250, "lr": 0.0002, "memory": 8424, "data_time": 0.04538, "loss_rpn_cls": 0.02781, "loss_rpn_bbox": 0.05131, "loss_cls": 0.21085, "acc": 92.71802, "loss_bbox": 0.24158, "loss_mask": 0.23945, "loss": 0.77101, "time": 0.40064} -{"mode": "train", "epoch": 8, "iter": 4300, "lr": 0.0002, "memory": 8424, "data_time": 0.04725, "loss_rpn_cls": 0.03, "loss_rpn_bbox": 0.05201, "loss_cls": 0.21386, "acc": 92.76782, "loss_bbox": 0.23505, "loss_mask": 0.24408, "loss": 0.775, "time": 0.41028} -{"mode": "train", "epoch": 8, "iter": 4350, "lr": 0.0002, "memory": 8424, "data_time": 0.04032, "loss_rpn_cls": 0.02767, "loss_rpn_bbox": 0.04887, "loss_cls": 0.21748, "acc": 92.59668, "loss_bbox": 0.24116, "loss_mask": 0.23815, "loss": 0.77333, "time": 0.47427} -{"mode": "train", "epoch": 8, "iter": 4400, "lr": 0.0002, "memory": 8424, "data_time": 0.04157, "loss_rpn_cls": 0.02775, "loss_rpn_bbox": 0.04863, "loss_cls": 0.21196, "acc": 92.70483, "loss_bbox": 0.23665, "loss_mask": 0.24185, "loss": 0.76683, "time": 0.4084} -{"mode": "train", "epoch": 8, "iter": 4450, "lr": 0.0002, "memory": 8424, "data_time": 0.04395, "loss_rpn_cls": 0.03104, "loss_rpn_bbox": 0.05086, "loss_cls": 0.22108, "acc": 92.48633, "loss_bbox": 0.24171, "loss_mask": 0.24145, "loss": 0.78613, "time": 0.46108} -{"mode": "train", "epoch": 8, "iter": 4500, "lr": 0.0002, "memory": 8424, "data_time": 0.04815, "loss_rpn_cls": 0.03117, "loss_rpn_bbox": 0.05031, "loss_cls": 0.22007, "acc": 92.44556, "loss_bbox": 0.24405, "loss_mask": 0.23684, "loss": 0.78243, "time": 0.42086} -{"mode": "train", "epoch": 8, "iter": 4550, "lr": 0.0002, "memory": 8424, "data_time": 0.04546, "loss_rpn_cls": 0.02772, "loss_rpn_bbox": 0.04996, "loss_cls": 0.21456, "acc": 92.71216, "loss_bbox": 0.23515, "loss_mask": 0.23842, "loss": 0.76581, "time": 
0.40073} -{"mode": "train", "epoch": 8, "iter": 4600, "lr": 0.0002, "memory": 8424, "data_time": 0.05699, "loss_rpn_cls": 0.02939, "loss_rpn_bbox": 0.04937, "loss_cls": 0.22098, "acc": 92.43066, "loss_bbox": 0.24516, "loss_mask": 0.24584, "loss": 0.79075, "time": 0.40407} -{"mode": "train", "epoch": 8, "iter": 4650, "lr": 0.0002, "memory": 8424, "data_time": 0.04286, "loss_rpn_cls": 0.0275, "loss_rpn_bbox": 0.04698, "loss_cls": 0.21912, "acc": 92.56079, "loss_bbox": 0.24469, "loss_mask": 0.23827, "loss": 0.77656, "time": 0.40109} -{"mode": "train", "epoch": 8, "iter": 4700, "lr": 0.0002, "memory": 8424, "data_time": 0.0445, "loss_rpn_cls": 0.02925, "loss_rpn_bbox": 0.0488, "loss_cls": 0.21956, "acc": 92.49316, "loss_bbox": 0.24101, "loss_mask": 0.23432, "loss": 0.77294, "time": 0.41321} -{"mode": "train", "epoch": 8, "iter": 4750, "lr": 0.0002, "memory": 8424, "data_time": 0.04138, "loss_rpn_cls": 0.02728, "loss_rpn_bbox": 0.04918, "loss_cls": 0.21332, "acc": 92.68262, "loss_bbox": 0.23517, "loss_mask": 0.24035, "loss": 0.7653, "time": 0.39528} -{"mode": "train", "epoch": 8, "iter": 4800, "lr": 0.0002, "memory": 8424, "data_time": 0.04105, "loss_rpn_cls": 0.03087, "loss_rpn_bbox": 0.0502, "loss_cls": 0.22197, "acc": 92.47241, "loss_bbox": 0.24228, "loss_mask": 0.24204, "loss": 0.78736, "time": 0.40922} -{"mode": "train", "epoch": 8, "iter": 4850, "lr": 0.0002, "memory": 8424, "data_time": 0.04104, "loss_rpn_cls": 0.03151, "loss_rpn_bbox": 0.04976, "loss_cls": 0.22355, "acc": 92.39722, "loss_bbox": 0.24993, "loss_mask": 0.24338, "loss": 0.79814, "time": 0.40844} -{"mode": "train", "epoch": 8, "iter": 4900, "lr": 0.0002, "memory": 8424, "data_time": 0.0387, "loss_rpn_cls": 0.02811, "loss_rpn_bbox": 0.048, "loss_cls": 0.2131, "acc": 92.7688, "loss_bbox": 0.2331, "loss_mask": 0.23958, "loss": 0.76189, "time": 0.39292} -{"mode": "train", "epoch": 8, "iter": 4950, "lr": 0.0002, "memory": 8424, "data_time": 0.04708, "loss_rpn_cls": 0.02989, "loss_rpn_bbox": 0.04926, "loss_cls": 0.21736, "acc": 92.51782, "loss_bbox": 0.23961, "loss_mask": 0.24147, "loss": 0.77757, "time": 0.40298} -{"mode": "train", "epoch": 8, "iter": 5000, "lr": 0.0002, "memory": 8424, "data_time": 0.04012, "loss_rpn_cls": 0.02846, "loss_rpn_bbox": 0.04829, "loss_cls": 0.21893, "acc": 92.5354, "loss_bbox": 0.23957, "loss_mask": 0.23939, "loss": 0.77463, "time": 0.40801} -{"mode": "train", "epoch": 8, "iter": 5050, "lr": 0.0002, "memory": 8424, "data_time": 0.03983, "loss_rpn_cls": 0.02611, "loss_rpn_bbox": 0.04378, "loss_cls": 0.20492, "acc": 92.92798, "loss_bbox": 0.2312, "loss_mask": 0.23495, "loss": 0.74096, "time": 0.39602} -{"mode": "train", "epoch": 8, "iter": 5100, "lr": 0.0002, "memory": 8424, "data_time": 0.04662, "loss_rpn_cls": 0.02908, "loss_rpn_bbox": 0.0504, "loss_cls": 0.22251, "acc": 92.43774, "loss_bbox": 0.24888, "loss_mask": 0.24572, "loss": 0.79659, "time": 0.38889} -{"mode": "train", "epoch": 8, "iter": 5150, "lr": 0.0002, "memory": 8424, "data_time": 0.05573, "loss_rpn_cls": 0.02685, "loss_rpn_bbox": 0.04973, "loss_cls": 0.20622, "acc": 92.94995, "loss_bbox": 0.23598, "loss_mask": 0.23835, "loss": 0.75713, "time": 0.4657} -{"mode": "train", "epoch": 8, "iter": 5200, "lr": 0.0002, "memory": 8424, "data_time": 0.04414, "loss_rpn_cls": 0.02736, "loss_rpn_bbox": 0.04519, "loss_cls": 0.21365, "acc": 92.73608, "loss_bbox": 0.23433, "loss_mask": 0.23489, "loss": 0.75541, "time": 0.40833} -{"mode": "train", "epoch": 8, "iter": 5250, "lr": 0.0002, "memory": 8424, "data_time": 0.03957, "loss_rpn_cls": 0.03008, 
"loss_rpn_bbox": 0.05355, "loss_cls": 0.22556, "acc": 92.41895, "loss_bbox": 0.24849, "loss_mask": 0.24768, "loss": 0.80537, "time": 0.41067} -{"mode": "train", "epoch": 8, "iter": 5300, "lr": 0.0002, "memory": 8424, "data_time": 0.04287, "loss_rpn_cls": 0.02938, "loss_rpn_bbox": 0.04684, "loss_cls": 0.21798, "acc": 92.5603, "loss_bbox": 0.23815, "loss_mask": 0.23356, "loss": 0.76591, "time": 0.4288} -{"mode": "train", "epoch": 8, "iter": 5350, "lr": 0.0002, "memory": 8424, "data_time": 0.04426, "loss_rpn_cls": 0.02737, "loss_rpn_bbox": 0.046, "loss_cls": 0.20801, "acc": 93.01025, "loss_bbox": 0.23255, "loss_mask": 0.23575, "loss": 0.74968, "time": 0.40184} -{"mode": "train", "epoch": 8, "iter": 5400, "lr": 0.0002, "memory": 8424, "data_time": 0.03788, "loss_rpn_cls": 0.03207, "loss_rpn_bbox": 0.05178, "loss_cls": 0.2246, "acc": 92.40137, "loss_bbox": 0.2479, "loss_mask": 0.24831, "loss": 0.80467, "time": 0.41573} -{"mode": "train", "epoch": 8, "iter": 5450, "lr": 0.0002, "memory": 8424, "data_time": 0.03904, "loss_rpn_cls": 0.02881, "loss_rpn_bbox": 0.04863, "loss_cls": 0.22411, "acc": 92.41895, "loss_bbox": 0.24606, "loss_mask": 0.24318, "loss": 0.79079, "time": 0.40281} -{"mode": "train", "epoch": 8, "iter": 5500, "lr": 0.0002, "memory": 8424, "data_time": 0.05175, "loss_rpn_cls": 0.02925, "loss_rpn_bbox": 0.05107, "loss_cls": 0.22033, "acc": 92.5874, "loss_bbox": 0.24329, "loss_mask": 0.24475, "loss": 0.78868, "time": 0.40708} -{"mode": "train", "epoch": 8, "iter": 5550, "lr": 0.0002, "memory": 8424, "data_time": 0.04449, "loss_rpn_cls": 0.02976, "loss_rpn_bbox": 0.04985, "loss_cls": 0.21218, "acc": 92.76343, "loss_bbox": 0.23905, "loss_mask": 0.24131, "loss": 0.77215, "time": 0.41018} -{"mode": "train", "epoch": 8, "iter": 5600, "lr": 0.0002, "memory": 8424, "data_time": 0.04481, "loss_rpn_cls": 0.03095, "loss_rpn_bbox": 0.05064, "loss_cls": 0.21571, "acc": 92.66431, "loss_bbox": 0.24258, "loss_mask": 0.23935, "loss": 0.77924, "time": 0.41136} -{"mode": "train", "epoch": 8, "iter": 5650, "lr": 0.0002, "memory": 8424, "data_time": 0.04373, "loss_rpn_cls": 0.02945, "loss_rpn_bbox": 0.04956, "loss_cls": 0.2196, "acc": 92.4856, "loss_bbox": 0.24653, "loss_mask": 0.24163, "loss": 0.78676, "time": 0.39689} -{"mode": "train", "epoch": 8, "iter": 5700, "lr": 0.0002, "memory": 8424, "data_time": 0.04483, "loss_rpn_cls": 0.03004, "loss_rpn_bbox": 0.04736, "loss_cls": 0.22198, "acc": 92.36987, "loss_bbox": 0.24494, "loss_mask": 0.2384, "loss": 0.78272, "time": 0.39965} -{"mode": "train", "epoch": 8, "iter": 5750, "lr": 0.0002, "memory": 8424, "data_time": 0.04236, "loss_rpn_cls": 0.03018, "loss_rpn_bbox": 0.04816, "loss_cls": 0.21849, "acc": 92.61816, "loss_bbox": 0.23965, "loss_mask": 0.24129, "loss": 0.77778, "time": 0.38813} -{"mode": "train", "epoch": 8, "iter": 5800, "lr": 0.0002, "memory": 8424, "data_time": 0.03824, "loss_rpn_cls": 0.02601, "loss_rpn_bbox": 0.0483, "loss_cls": 0.21131, "acc": 92.87329, "loss_bbox": 0.23037, "loss_mask": 0.23911, "loss": 0.75509, "time": 0.38814} -{"mode": "train", "epoch": 8, "iter": 5850, "lr": 0.0002, "memory": 8424, "data_time": 0.0445, "loss_rpn_cls": 0.02983, "loss_rpn_bbox": 0.04842, "loss_cls": 0.21798, "acc": 92.59448, "loss_bbox": 0.24155, "loss_mask": 0.24331, "loss": 0.78109, "time": 0.39502} -{"mode": "train", "epoch": 8, "iter": 5900, "lr": 0.0002, "memory": 8424, "data_time": 0.04339, "loss_rpn_cls": 0.02868, "loss_rpn_bbox": 0.0527, "loss_cls": 0.22495, "acc": 92.36353, "loss_bbox": 0.25006, "loss_mask": 0.24439, "loss": 0.80078, "time": 
0.44971} -{"mode": "train", "epoch": 8, "iter": 5950, "lr": 0.0002, "memory": 8424, "data_time": 0.04808, "loss_rpn_cls": 0.02558, "loss_rpn_bbox": 0.04642, "loss_cls": 0.21417, "acc": 92.77393, "loss_bbox": 0.23775, "loss_mask": 0.23897, "loss": 0.76289, "time": 0.39034} -{"mode": "train", "epoch": 8, "iter": 6000, "lr": 0.0002, "memory": 8424, "data_time": 0.04698, "loss_rpn_cls": 0.02713, "loss_rpn_bbox": 0.04811, "loss_cls": 0.21315, "acc": 92.79541, "loss_bbox": 0.23857, "loss_mask": 0.23757, "loss": 0.76454, "time": 0.39279} -{"mode": "train", "epoch": 8, "iter": 6050, "lr": 0.0002, "memory": 8424, "data_time": 0.04478, "loss_rpn_cls": 0.0292, "loss_rpn_bbox": 0.05203, "loss_cls": 0.21715, "acc": 92.66577, "loss_bbox": 0.23945, "loss_mask": 0.23691, "loss": 0.77474, "time": 0.40167} -{"mode": "train", "epoch": 8, "iter": 6100, "lr": 0.0002, "memory": 8424, "data_time": 0.04299, "loss_rpn_cls": 0.03019, "loss_rpn_bbox": 0.0511, "loss_cls": 0.22229, "acc": 92.31836, "loss_bbox": 0.2523, "loss_mask": 0.24292, "loss": 0.79881, "time": 0.41336} -{"mode": "train", "epoch": 8, "iter": 6150, "lr": 0.0002, "memory": 8424, "data_time": 0.04847, "loss_rpn_cls": 0.02957, "loss_rpn_bbox": 0.05161, "loss_cls": 0.21322, "acc": 92.64209, "loss_bbox": 0.23523, "loss_mask": 0.24233, "loss": 0.77196, "time": 0.40227} -{"mode": "train", "epoch": 8, "iter": 6200, "lr": 0.0002, "memory": 8424, "data_time": 0.04195, "loss_rpn_cls": 0.02914, "loss_rpn_bbox": 0.05046, "loss_cls": 0.21956, "acc": 92.53711, "loss_bbox": 0.24299, "loss_mask": 0.23513, "loss": 0.77727, "time": 0.40019} -{"mode": "train", "epoch": 8, "iter": 6250, "lr": 0.0002, "memory": 8424, "data_time": 0.0456, "loss_rpn_cls": 0.03078, "loss_rpn_bbox": 0.05424, "loss_cls": 0.23377, "acc": 92.07617, "loss_bbox": 0.25584, "loss_mask": 0.24707, "loss": 0.8217, "time": 0.41431} -{"mode": "train", "epoch": 8, "iter": 6300, "lr": 0.0002, "memory": 8424, "data_time": 0.04619, "loss_rpn_cls": 0.02824, "loss_rpn_bbox": 0.04901, "loss_cls": 0.20622, "acc": 92.9624, "loss_bbox": 0.23023, "loss_mask": 0.23707, "loss": 0.75078, "time": 0.38565} -{"mode": "train", "epoch": 8, "iter": 6350, "lr": 0.0002, "memory": 8424, "data_time": 0.04966, "loss_rpn_cls": 0.02752, "loss_rpn_bbox": 0.05045, "loss_cls": 0.21202, "acc": 92.72876, "loss_bbox": 0.24215, "loss_mask": 0.24135, "loss": 0.77349, "time": 0.45994} -{"mode": "train", "epoch": 8, "iter": 6400, "lr": 0.0002, "memory": 8424, "data_time": 0.04574, "loss_rpn_cls": 0.02837, "loss_rpn_bbox": 0.04838, "loss_cls": 0.20894, "acc": 92.8479, "loss_bbox": 0.23449, "loss_mask": 0.23909, "loss": 0.75926, "time": 0.39938} -{"mode": "train", "epoch": 8, "iter": 6450, "lr": 0.0002, "memory": 8424, "data_time": 0.04763, "loss_rpn_cls": 0.03002, "loss_rpn_bbox": 0.05266, "loss_cls": 0.2163, "acc": 92.53613, "loss_bbox": 0.2421, "loss_mask": 0.24735, "loss": 0.78843, "time": 0.4032} -{"mode": "train", "epoch": 8, "iter": 6500, "lr": 0.0002, "memory": 8424, "data_time": 0.04746, "loss_rpn_cls": 0.03051, "loss_rpn_bbox": 0.04925, "loss_cls": 0.21591, "acc": 92.58838, "loss_bbox": 0.23744, "loss_mask": 0.24011, "loss": 0.77322, "time": 0.40401} -{"mode": "train", "epoch": 8, "iter": 6550, "lr": 0.0002, "memory": 8424, "data_time": 0.04817, "loss_rpn_cls": 0.02973, "loss_rpn_bbox": 0.05305, "loss_cls": 0.22523, "acc": 92.32837, "loss_bbox": 0.24551, "loss_mask": 0.24253, "loss": 0.79605, "time": 0.41393} -{"mode": "train", "epoch": 8, "iter": 6600, "lr": 0.0002, "memory": 8424, "data_time": 0.05241, "loss_rpn_cls": 0.02781, 
"loss_rpn_bbox": 0.04984, "loss_cls": 0.21868, "acc": 92.61279, "loss_bbox": 0.24251, "loss_mask": 0.24206, "loss": 0.78089, "time": 0.41736} -{"mode": "train", "epoch": 8, "iter": 6650, "lr": 0.0002, "memory": 8424, "data_time": 0.04958, "loss_rpn_cls": 0.02752, "loss_rpn_bbox": 0.05033, "loss_cls": 0.2209, "acc": 92.42969, "loss_bbox": 0.25091, "loss_mask": 0.24328, "loss": 0.79294, "time": 0.41945} -{"mode": "train", "epoch": 8, "iter": 6700, "lr": 0.0002, "memory": 8424, "data_time": 0.04482, "loss_rpn_cls": 0.02746, "loss_rpn_bbox": 0.04796, "loss_cls": 0.21791, "acc": 92.58691, "loss_bbox": 0.24428, "loss_mask": 0.24179, "loss": 0.77939, "time": 0.39905} -{"mode": "train", "epoch": 8, "iter": 6750, "lr": 0.0002, "memory": 8424, "data_time": 0.04036, "loss_rpn_cls": 0.02881, "loss_rpn_bbox": 0.04826, "loss_cls": 0.2121, "acc": 92.64941, "loss_bbox": 0.24177, "loss_mask": 0.24361, "loss": 0.77455, "time": 0.40903} -{"mode": "train", "epoch": 8, "iter": 6800, "lr": 0.0002, "memory": 8424, "data_time": 0.0402, "loss_rpn_cls": 0.0271, "loss_rpn_bbox": 0.04992, "loss_cls": 0.21868, "acc": 92.60352, "loss_bbox": 0.24459, "loss_mask": 0.2414, "loss": 0.78168, "time": 0.41051} -{"mode": "train", "epoch": 8, "iter": 6850, "lr": 0.0002, "memory": 8424, "data_time": 0.04485, "loss_rpn_cls": 0.02769, "loss_rpn_bbox": 0.04963, "loss_cls": 0.2157, "acc": 92.69897, "loss_bbox": 0.23639, "loss_mask": 0.24167, "loss": 0.77107, "time": 0.40899} -{"mode": "train", "epoch": 8, "iter": 6900, "lr": 0.0002, "memory": 8424, "data_time": 0.04788, "loss_rpn_cls": 0.02868, "loss_rpn_bbox": 0.05172, "loss_cls": 0.22085, "acc": 92.51685, "loss_bbox": 0.24424, "loss_mask": 0.2416, "loss": 0.78709, "time": 0.42353} -{"mode": "train", "epoch": 8, "iter": 6950, "lr": 0.0002, "memory": 8424, "data_time": 0.04949, "loss_rpn_cls": 0.02815, "loss_rpn_bbox": 0.05134, "loss_cls": 0.22305, "acc": 92.51318, "loss_bbox": 0.24398, "loss_mask": 0.24155, "loss": 0.78807, "time": 0.42157} -{"mode": "train", "epoch": 8, "iter": 7000, "lr": 0.0002, "memory": 8424, "data_time": 0.05229, "loss_rpn_cls": 0.02773, "loss_rpn_bbox": 0.04713, "loss_cls": 0.221, "acc": 92.33789, "loss_bbox": 0.24564, "loss_mask": 0.24462, "loss": 0.78612, "time": 0.40278} -{"mode": "train", "epoch": 8, "iter": 7050, "lr": 0.0002, "memory": 8424, "data_time": 0.04021, "loss_rpn_cls": 0.02839, "loss_rpn_bbox": 0.05109, "loss_cls": 0.21311, "acc": 92.80933, "loss_bbox": 0.23534, "loss_mask": 0.24244, "loss": 0.77036, "time": 0.40816} -{"mode": "train", "epoch": 8, "iter": 7100, "lr": 0.0002, "memory": 8424, "data_time": 0.04783, "loss_rpn_cls": 0.03022, "loss_rpn_bbox": 0.05308, "loss_cls": 0.2097, "acc": 92.74487, "loss_bbox": 0.23533, "loss_mask": 0.24043, "loss": 0.76876, "time": 0.42703} -{"mode": "train", "epoch": 8, "iter": 7150, "lr": 0.0002, "memory": 8424, "data_time": 0.04305, "loss_rpn_cls": 0.02923, "loss_rpn_bbox": 0.04869, "loss_cls": 0.21205, "acc": 92.68018, "loss_bbox": 0.23613, "loss_mask": 0.23514, "loss": 0.76124, "time": 0.39626} -{"mode": "train", "epoch": 8, "iter": 7200, "lr": 0.0002, "memory": 8424, "data_time": 0.05612, "loss_rpn_cls": 0.03115, "loss_rpn_bbox": 0.05044, "loss_cls": 0.21468, "acc": 92.84741, "loss_bbox": 0.23276, "loss_mask": 0.24121, "loss": 0.77025, "time": 0.40653} -{"mode": "train", "epoch": 8, "iter": 7250, "lr": 0.0002, "memory": 8424, "data_time": 0.0422, "loss_rpn_cls": 0.0282, "loss_rpn_bbox": 0.04997, "loss_cls": 0.21454, "acc": 92.71606, "loss_bbox": 0.23648, "loss_mask": 0.23987, "loss": 0.76906, "time": 
0.40594} -{"mode": "train", "epoch": 8, "iter": 7300, "lr": 0.0002, "memory": 8424, "data_time": 0.05215, "loss_rpn_cls": 0.03025, "loss_rpn_bbox": 0.05205, "loss_cls": 0.22126, "acc": 92.43555, "loss_bbox": 0.24714, "loss_mask": 0.24659, "loss": 0.79729, "time": 0.41218} -{"mode": "val", "epoch": 8, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3571, "bbox_mAP_50": 0.5745, "bbox_mAP_75": 0.3903, "bbox_mAP_s": 0.2076, "bbox_mAP_m": 0.3954, "bbox_mAP_l": 0.4584, "bbox_mAP_copypaste": "0.3571 0.5745 0.3903 0.2076 0.3954 0.4584", "segm_mAP": 0.3383, "segm_mAP_50": 0.5478, "segm_mAP_75": 0.3622, "segm_mAP_s": 0.1529, "segm_mAP_m": 0.3689, "segm_mAP_l": 0.4966, "segm_mAP_copypaste": "0.3383 0.5478 0.3622 0.1529 0.3689 0.4966"} -{"mode": "train", "epoch": 9, "iter": 50, "lr": 2e-05, "memory": 8424, "data_time": 0.13545, "loss_rpn_cls": 0.02609, "loss_rpn_bbox": 0.04888, "loss_cls": 0.20602, "acc": 92.81836, "loss_bbox": 0.23098, "loss_mask": 0.2339, "loss": 0.74587, "time": 0.49916} -{"mode": "train", "epoch": 9, "iter": 100, "lr": 2e-05, "memory": 8424, "data_time": 0.05353, "loss_rpn_cls": 0.02638, "loss_rpn_bbox": 0.04995, "loss_cls": 0.20426, "acc": 92.73926, "loss_bbox": 0.23914, "loss_mask": 0.23003, "loss": 0.74976, "time": 0.43824} -{"mode": "train", "epoch": 9, "iter": 150, "lr": 2e-05, "memory": 8424, "data_time": 0.05829, "loss_rpn_cls": 0.02521, "loss_rpn_bbox": 0.04829, "loss_cls": 0.20153, "acc": 92.78174, "loss_bbox": 0.23637, "loss_mask": 0.23084, "loss": 0.74225, "time": 0.42705} -{"mode": "train", "epoch": 9, "iter": 200, "lr": 2e-05, "memory": 8424, "data_time": 0.06353, "loss_rpn_cls": 0.02381, "loss_rpn_bbox": 0.04464, "loss_cls": 0.19886, "acc": 92.97827, "loss_bbox": 0.22963, "loss_mask": 0.23545, "loss": 0.73239, "time": 0.41814} -{"mode": "train", "epoch": 9, "iter": 250, "lr": 2e-05, "memory": 8424, "data_time": 0.04632, "loss_rpn_cls": 0.02354, "loss_rpn_bbox": 0.04345, "loss_cls": 0.18529, "acc": 93.45605, "loss_bbox": 0.21753, "loss_mask": 0.22886, "loss": 0.69868, "time": 0.40522} -{"mode": "train", "epoch": 9, "iter": 300, "lr": 2e-05, "memory": 8424, "data_time": 0.04552, "loss_rpn_cls": 0.02399, "loss_rpn_bbox": 0.04431, "loss_cls": 0.19043, "acc": 93.23853, "loss_bbox": 0.22786, "loss_mask": 0.22861, "loss": 0.71519, "time": 0.41112} -{"mode": "train", "epoch": 9, "iter": 350, "lr": 2e-05, "memory": 8424, "data_time": 0.05002, "loss_rpn_cls": 0.02452, "loss_rpn_bbox": 0.04709, "loss_cls": 0.19618, "acc": 92.98413, "loss_bbox": 0.23392, "loss_mask": 0.23524, "loss": 0.73697, "time": 0.40784} -{"mode": "train", "epoch": 9, "iter": 400, "lr": 2e-05, "memory": 8424, "data_time": 0.04679, "loss_rpn_cls": 0.02178, "loss_rpn_bbox": 0.04287, "loss_cls": 0.17883, "acc": 93.56689, "loss_bbox": 0.21664, "loss_mask": 0.22728, "loss": 0.6874, "time": 0.40489} -{"mode": "train", "epoch": 9, "iter": 450, "lr": 2e-05, "memory": 8424, "data_time": 0.04843, "loss_rpn_cls": 0.02288, "loss_rpn_bbox": 0.04463, "loss_cls": 0.18152, "acc": 93.51392, "loss_bbox": 0.2206, "loss_mask": 0.22425, "loss": 0.69388, "time": 0.42253} -{"mode": "train", "epoch": 9, "iter": 500, "lr": 2e-05, "memory": 8424, "data_time": 0.04578, "loss_rpn_cls": 0.02063, "loss_rpn_bbox": 0.0444, "loss_cls": 0.18369, "acc": 93.45068, "loss_bbox": 0.21941, "loss_mask": 0.22594, "loss": 0.69408, "time": 0.41483} -{"mode": "train", "epoch": 9, "iter": 550, "lr": 2e-05, "memory": 8424, "data_time": 0.04328, "loss_rpn_cls": 0.02194, "loss_rpn_bbox": 0.04317, "loss_cls": 0.18502, "acc": 93.24658, "loss_bbox": 0.22718, 
"loss_mask": 0.22588, "loss": 0.70319, "time": 0.41292} -{"mode": "train", "epoch": 9, "iter": 600, "lr": 2e-05, "memory": 8424, "data_time": 0.04767, "loss_rpn_cls": 0.02448, "loss_rpn_bbox": 0.04598, "loss_cls": 0.19591, "acc": 92.94775, "loss_bbox": 0.23175, "loss_mask": 0.23028, "loss": 0.72839, "time": 0.41582} -{"mode": "train", "epoch": 9, "iter": 650, "lr": 2e-05, "memory": 8424, "data_time": 0.05585, "loss_rpn_cls": 0.02478, "loss_rpn_bbox": 0.04923, "loss_cls": 0.20065, "acc": 92.72852, "loss_bbox": 0.241, "loss_mask": 0.23328, "loss": 0.74894, "time": 0.42564} -{"mode": "train", "epoch": 9, "iter": 700, "lr": 2e-05, "memory": 8424, "data_time": 0.05518, "loss_rpn_cls": 0.022, "loss_rpn_bbox": 0.04458, "loss_cls": 0.18781, "acc": 93.18848, "loss_bbox": 0.22962, "loss_mask": 0.22582, "loss": 0.70983, "time": 0.41631} -{"mode": "train", "epoch": 9, "iter": 750, "lr": 2e-05, "memory": 8424, "data_time": 0.05819, "loss_rpn_cls": 0.02085, "loss_rpn_bbox": 0.04409, "loss_cls": 0.18201, "acc": 93.31348, "loss_bbox": 0.22697, "loss_mask": 0.22735, "loss": 0.70126, "time": 0.43538} -{"mode": "train", "epoch": 9, "iter": 800, "lr": 2e-05, "memory": 8424, "data_time": 0.04764, "loss_rpn_cls": 0.02452, "loss_rpn_bbox": 0.04517, "loss_cls": 0.1865, "acc": 93.24023, "loss_bbox": 0.22644, "loss_mask": 0.22203, "loss": 0.70467, "time": 0.41912} -{"mode": "train", "epoch": 9, "iter": 850, "lr": 2e-05, "memory": 8424, "data_time": 0.0475, "loss_rpn_cls": 0.02004, "loss_rpn_bbox": 0.04436, "loss_cls": 0.17819, "acc": 93.6272, "loss_bbox": 0.216, "loss_mask": 0.22279, "loss": 0.68138, "time": 0.41126} -{"mode": "train", "epoch": 9, "iter": 900, "lr": 2e-05, "memory": 8424, "data_time": 0.04838, "loss_rpn_cls": 0.0232, "loss_rpn_bbox": 0.046, "loss_cls": 0.18356, "acc": 93.40283, "loss_bbox": 0.22298, "loss_mask": 0.23406, "loss": 0.70979, "time": 0.42041} -{"mode": "train", "epoch": 9, "iter": 950, "lr": 2e-05, "memory": 8424, "data_time": 0.05113, "loss_rpn_cls": 0.02221, "loss_rpn_bbox": 0.04525, "loss_cls": 0.18601, "acc": 93.26294, "loss_bbox": 0.22763, "loss_mask": 0.22884, "loss": 0.70994, "time": 0.41504} -{"mode": "train", "epoch": 9, "iter": 1000, "lr": 2e-05, "memory": 8424, "data_time": 0.04652, "loss_rpn_cls": 0.02483, "loss_rpn_bbox": 0.0477, "loss_cls": 0.19622, "acc": 92.82422, "loss_bbox": 0.23856, "loss_mask": 0.23171, "loss": 0.73901, "time": 0.42345} -{"mode": "train", "epoch": 9, "iter": 1050, "lr": 2e-05, "memory": 8424, "data_time": 0.04166, "loss_rpn_cls": 0.02344, "loss_rpn_bbox": 0.04757, "loss_cls": 0.19177, "acc": 93.1499, "loss_bbox": 0.23397, "loss_mask": 0.22971, "loss": 0.72648, "time": 0.40849} -{"mode": "train", "epoch": 9, "iter": 1100, "lr": 2e-05, "memory": 8424, "data_time": 0.04456, "loss_rpn_cls": 0.02297, "loss_rpn_bbox": 0.0496, "loss_cls": 0.19861, "acc": 92.89478, "loss_bbox": 0.23624, "loss_mask": 0.22915, "loss": 0.73658, "time": 0.42918} -{"mode": "train", "epoch": 9, "iter": 1150, "lr": 2e-05, "memory": 8424, "data_time": 0.05235, "loss_rpn_cls": 0.02188, "loss_rpn_bbox": 0.04479, "loss_cls": 0.1846, "acc": 93.34326, "loss_bbox": 0.22472, "loss_mask": 0.22339, "loss": 0.69936, "time": 0.40465} -{"mode": "train", "epoch": 9, "iter": 1200, "lr": 2e-05, "memory": 8424, "data_time": 0.05579, "loss_rpn_cls": 0.0217, "loss_rpn_bbox": 0.04621, "loss_cls": 0.18234, "acc": 93.41406, "loss_bbox": 0.21899, "loss_mask": 0.22316, "loss": 0.69239, "time": 0.4289} -{"mode": "train", "epoch": 9, "iter": 1250, "lr": 2e-05, "memory": 8424, "data_time": 0.04189, 
"loss_rpn_cls": 0.02333, "loss_rpn_bbox": 0.04354, "loss_cls": 0.18353, "acc": 93.41675, "loss_bbox": 0.21815, "loss_mask": 0.22468, "loss": 0.69324, "time": 0.40075} -{"mode": "train", "epoch": 9, "iter": 1300, "lr": 2e-05, "memory": 8424, "data_time": 0.04736, "loss_rpn_cls": 0.02003, "loss_rpn_bbox": 0.0423, "loss_cls": 0.1761, "acc": 93.67578, "loss_bbox": 0.2116, "loss_mask": 0.22259, "loss": 0.67262, "time": 0.41485} -{"mode": "train", "epoch": 9, "iter": 1350, "lr": 2e-05, "memory": 8424, "data_time": 0.06461, "loss_rpn_cls": 0.02238, "loss_rpn_bbox": 0.0457, "loss_cls": 0.18303, "acc": 93.38818, "loss_bbox": 0.22053, "loss_mask": 0.22762, "loss": 0.69926, "time": 0.4146} -{"mode": "train", "epoch": 9, "iter": 1400, "lr": 2e-05, "memory": 8424, "data_time": 0.04354, "loss_rpn_cls": 0.02215, "loss_rpn_bbox": 0.04169, "loss_cls": 0.17206, "acc": 93.79321, "loss_bbox": 0.21233, "loss_mask": 0.22, "loss": 0.66823, "time": 0.40146} -{"mode": "train", "epoch": 9, "iter": 1450, "lr": 2e-05, "memory": 8424, "data_time": 0.04517, "loss_rpn_cls": 0.02323, "loss_rpn_bbox": 0.04476, "loss_cls": 0.18338, "acc": 93.40332, "loss_bbox": 0.22113, "loss_mask": 0.22749, "loss": 0.7, "time": 0.42208} -{"mode": "train", "epoch": 9, "iter": 1500, "lr": 2e-05, "memory": 8424, "data_time": 0.04727, "loss_rpn_cls": 0.0213, "loss_rpn_bbox": 0.04395, "loss_cls": 0.18333, "acc": 93.32104, "loss_bbox": 0.22609, "loss_mask": 0.22912, "loss": 0.70379, "time": 0.47825} -{"mode": "train", "epoch": 9, "iter": 1550, "lr": 2e-05, "memory": 8424, "data_time": 0.05425, "loss_rpn_cls": 0.02323, "loss_rpn_bbox": 0.04427, "loss_cls": 0.18424, "acc": 93.44702, "loss_bbox": 0.22352, "loss_mask": 0.22717, "loss": 0.70243, "time": 0.40989} -{"mode": "train", "epoch": 9, "iter": 1600, "lr": 2e-05, "memory": 8424, "data_time": 0.05196, "loss_rpn_cls": 0.02145, "loss_rpn_bbox": 0.04445, "loss_cls": 0.18071, "acc": 93.38062, "loss_bbox": 0.22185, "loss_mask": 0.22701, "loss": 0.69547, "time": 0.41522} -{"mode": "train", "epoch": 9, "iter": 1650, "lr": 2e-05, "memory": 8424, "data_time": 0.05104, "loss_rpn_cls": 0.02264, "loss_rpn_bbox": 0.0462, "loss_cls": 0.18244, "acc": 93.39722, "loss_bbox": 0.22543, "loss_mask": 0.2225, "loss": 0.69921, "time": 0.42074} -{"mode": "train", "epoch": 9, "iter": 1700, "lr": 2e-05, "memory": 8424, "data_time": 0.05847, "loss_rpn_cls": 0.0228, "loss_rpn_bbox": 0.04565, "loss_cls": 0.18991, "acc": 93.18945, "loss_bbox": 0.22959, "loss_mask": 0.22706, "loss": 0.71502, "time": 0.42034} -{"mode": "train", "epoch": 9, "iter": 1750, "lr": 2e-05, "memory": 8424, "data_time": 0.04623, "loss_rpn_cls": 0.02245, "loss_rpn_bbox": 0.04389, "loss_cls": 0.18121, "acc": 93.41699, "loss_bbox": 0.22236, "loss_mask": 0.22023, "loss": 0.69014, "time": 0.40883} -{"mode": "train", "epoch": 9, "iter": 1800, "lr": 2e-05, "memory": 8424, "data_time": 0.03641, "loss_rpn_cls": 0.02159, "loss_rpn_bbox": 0.04312, "loss_cls": 0.17168, "acc": 93.89429, "loss_bbox": 0.20467, "loss_mask": 0.22469, "loss": 0.66575, "time": 0.40978} -{"mode": "train", "epoch": 9, "iter": 1850, "lr": 2e-05, "memory": 8424, "data_time": 0.05036, "loss_rpn_cls": 0.0194, "loss_rpn_bbox": 0.0448, "loss_cls": 0.18149, "acc": 93.36426, "loss_bbox": 0.22576, "loss_mask": 0.22416, "loss": 0.69561, "time": 0.41865} -{"mode": "train", "epoch": 9, "iter": 1900, "lr": 2e-05, "memory": 8424, "data_time": 0.05002, "loss_rpn_cls": 0.02447, "loss_rpn_bbox": 0.04562, "loss_cls": 0.18276, "acc": 93.37354, "loss_bbox": 0.22501, "loss_mask": 0.22254, "loss": 0.7004, 
"time": 0.43922} -{"mode": "train", "epoch": 9, "iter": 1950, "lr": 2e-05, "memory": 8424, "data_time": 0.04497, "loss_rpn_cls": 0.02085, "loss_rpn_bbox": 0.04406, "loss_cls": 0.17676, "acc": 93.69214, "loss_bbox": 0.21408, "loss_mask": 0.22221, "loss": 0.67795, "time": 0.41793} -{"mode": "train", "epoch": 9, "iter": 2000, "lr": 2e-05, "memory": 8424, "data_time": 0.04313, "loss_rpn_cls": 0.02164, "loss_rpn_bbox": 0.04399, "loss_cls": 0.17542, "acc": 93.64917, "loss_bbox": 0.21898, "loss_mask": 0.22464, "loss": 0.68469, "time": 0.40947} -{"mode": "train", "epoch": 9, "iter": 2050, "lr": 2e-05, "memory": 8424, "data_time": 0.04438, "loss_rpn_cls": 0.02028, "loss_rpn_bbox": 0.04246, "loss_cls": 0.17584, "acc": 93.4707, "loss_bbox": 0.21502, "loss_mask": 0.22202, "loss": 0.67562, "time": 0.41475} -{"mode": "train", "epoch": 9, "iter": 2100, "lr": 2e-05, "memory": 8424, "data_time": 0.05082, "loss_rpn_cls": 0.02087, "loss_rpn_bbox": 0.04185, "loss_cls": 0.1763, "acc": 93.61353, "loss_bbox": 0.21738, "loss_mask": 0.22017, "loss": 0.67657, "time": 0.41506} -{"mode": "train", "epoch": 9, "iter": 2150, "lr": 2e-05, "memory": 8424, "data_time": 0.04452, "loss_rpn_cls": 0.02215, "loss_rpn_bbox": 0.04351, "loss_cls": 0.18203, "acc": 93.38208, "loss_bbox": 0.22426, "loss_mask": 0.22663, "loss": 0.69857, "time": 0.41392} -{"mode": "train", "epoch": 9, "iter": 2200, "lr": 2e-05, "memory": 8424, "data_time": 0.04465, "loss_rpn_cls": 0.02395, "loss_rpn_bbox": 0.04535, "loss_cls": 0.183, "acc": 93.30103, "loss_bbox": 0.22038, "loss_mask": 0.22607, "loss": 0.69875, "time": 0.45398} -{"mode": "train", "epoch": 9, "iter": 2250, "lr": 2e-05, "memory": 8424, "data_time": 0.04653, "loss_rpn_cls": 0.02182, "loss_rpn_bbox": 0.04271, "loss_cls": 0.17948, "acc": 93.47412, "loss_bbox": 0.21756, "loss_mask": 0.2254, "loss": 0.68698, "time": 0.41918} -{"mode": "train", "epoch": 9, "iter": 2300, "lr": 2e-05, "memory": 8424, "data_time": 0.04081, "loss_rpn_cls": 0.02181, "loss_rpn_bbox": 0.04283, "loss_cls": 0.17579, "acc": 93.63721, "loss_bbox": 0.21663, "loss_mask": 0.22576, "loss": 0.68282, "time": 0.41374} -{"mode": "train", "epoch": 9, "iter": 2350, "lr": 2e-05, "memory": 8424, "data_time": 0.04883, "loss_rpn_cls": 0.02284, "loss_rpn_bbox": 0.04671, "loss_cls": 0.187, "acc": 93.12109, "loss_bbox": 0.23445, "loss_mask": 0.22967, "loss": 0.72068, "time": 0.42286} -{"mode": "train", "epoch": 9, "iter": 2400, "lr": 2e-05, "memory": 8424, "data_time": 0.05282, "loss_rpn_cls": 0.01984, "loss_rpn_bbox": 0.04544, "loss_cls": 0.17476, "acc": 93.6604, "loss_bbox": 0.21996, "loss_mask": 0.22382, "loss": 0.68382, "time": 0.40976} -{"mode": "train", "epoch": 9, "iter": 2450, "lr": 2e-05, "memory": 8424, "data_time": 0.05388, "loss_rpn_cls": 0.02029, "loss_rpn_bbox": 0.04329, "loss_cls": 0.18002, "acc": 93.43652, "loss_bbox": 0.22606, "loss_mask": 0.22666, "loss": 0.69631, "time": 0.4106} -{"mode": "train", "epoch": 9, "iter": 2500, "lr": 2e-05, "memory": 8424, "data_time": 0.04449, "loss_rpn_cls": 0.02058, "loss_rpn_bbox": 0.04183, "loss_cls": 0.18238, "acc": 93.46777, "loss_bbox": 0.22361, "loss_mask": 0.22219, "loss": 0.69059, "time": 0.40972} -{"mode": "train", "epoch": 9, "iter": 2550, "lr": 2e-05, "memory": 8424, "data_time": 0.04881, "loss_rpn_cls": 0.02216, "loss_rpn_bbox": 0.04289, "loss_cls": 0.17873, "acc": 93.45215, "loss_bbox": 0.21836, "loss_mask": 0.22094, "loss": 0.68307, "time": 0.41441} -{"mode": "train", "epoch": 9, "iter": 2600, "lr": 2e-05, "memory": 8424, "data_time": 0.05618, "loss_rpn_cls": 0.02127, 
"loss_rpn_bbox": 0.04336, "loss_cls": 0.17752, "acc": 93.56006, "loss_bbox": 0.21971, "loss_mask": 0.2241, "loss": 0.68596, "time": 0.41796} -{"mode": "train", "epoch": 9, "iter": 2650, "lr": 2e-05, "memory": 8424, "data_time": 0.03888, "loss_rpn_cls": 0.01955, "loss_rpn_bbox": 0.04128, "loss_cls": 0.16956, "acc": 93.83911, "loss_bbox": 0.20835, "loss_mask": 0.21918, "loss": 0.65792, "time": 0.5004} -{"mode": "train", "epoch": 9, "iter": 2700, "lr": 2e-05, "memory": 8424, "data_time": 0.04575, "loss_rpn_cls": 0.02385, "loss_rpn_bbox": 0.04244, "loss_cls": 0.17853, "acc": 93.52173, "loss_bbox": 0.2176, "loss_mask": 0.22141, "loss": 0.68384, "time": 0.41665} -{"mode": "train", "epoch": 9, "iter": 2750, "lr": 2e-05, "memory": 8424, "data_time": 0.04903, "loss_rpn_cls": 0.02254, "loss_rpn_bbox": 0.04052, "loss_cls": 0.181, "acc": 93.49487, "loss_bbox": 0.21836, "loss_mask": 0.22026, "loss": 0.68268, "time": 0.40908} -{"mode": "train", "epoch": 9, "iter": 2800, "lr": 2e-05, "memory": 8424, "data_time": 0.0516, "loss_rpn_cls": 0.0209, "loss_rpn_bbox": 0.04269, "loss_cls": 0.17763, "acc": 93.57324, "loss_bbox": 0.21999, "loss_mask": 0.21887, "loss": 0.68007, "time": 0.39433} -{"mode": "train", "epoch": 9, "iter": 2850, "lr": 2e-05, "memory": 8424, "data_time": 0.05401, "loss_rpn_cls": 0.02237, "loss_rpn_bbox": 0.04327, "loss_cls": 0.18267, "acc": 93.33228, "loss_bbox": 0.22525, "loss_mask": 0.21951, "loss": 0.69307, "time": 0.43365} -{"mode": "train", "epoch": 9, "iter": 2900, "lr": 2e-05, "memory": 8424, "data_time": 0.04887, "loss_rpn_cls": 0.02223, "loss_rpn_bbox": 0.04584, "loss_cls": 0.18693, "acc": 93.09839, "loss_bbox": 0.22968, "loss_mask": 0.22443, "loss": 0.70911, "time": 0.42029} -{"mode": "train", "epoch": 9, "iter": 2950, "lr": 2e-05, "memory": 8424, "data_time": 0.04794, "loss_rpn_cls": 0.02299, "loss_rpn_bbox": 0.04718, "loss_cls": 0.18786, "acc": 93.125, "loss_bbox": 0.23367, "loss_mask": 0.22142, "loss": 0.71312, "time": 0.43058} -{"mode": "train", "epoch": 9, "iter": 3000, "lr": 2e-05, "memory": 8424, "data_time": 0.04467, "loss_rpn_cls": 0.02263, "loss_rpn_bbox": 0.04514, "loss_cls": 0.18397, "acc": 93.36475, "loss_bbox": 0.22473, "loss_mask": 0.22669, "loss": 0.70317, "time": 0.41129} -{"mode": "train", "epoch": 9, "iter": 3050, "lr": 2e-05, "memory": 8424, "data_time": 0.05221, "loss_rpn_cls": 0.02101, "loss_rpn_bbox": 0.04188, "loss_cls": 0.17399, "acc": 93.65234, "loss_bbox": 0.21569, "loss_mask": 0.22092, "loss": 0.67348, "time": 0.41553} -{"mode": "train", "epoch": 9, "iter": 3100, "lr": 2e-05, "memory": 8424, "data_time": 0.05399, "loss_rpn_cls": 0.02206, "loss_rpn_bbox": 0.04665, "loss_cls": 0.18722, "acc": 93.1438, "loss_bbox": 0.23065, "loss_mask": 0.22894, "loss": 0.71552, "time": 0.42996} -{"mode": "train", "epoch": 9, "iter": 3150, "lr": 2e-05, "memory": 8424, "data_time": 0.04801, "loss_rpn_cls": 0.02152, "loss_rpn_bbox": 0.0439, "loss_cls": 0.18098, "acc": 93.47583, "loss_bbox": 0.22126, "loss_mask": 0.22529, "loss": 0.69295, "time": 0.46525} -{"mode": "train", "epoch": 9, "iter": 3200, "lr": 2e-05, "memory": 8424, "data_time": 0.04086, "loss_rpn_cls": 0.02081, "loss_rpn_bbox": 0.04134, "loss_cls": 0.16613, "acc": 93.87231, "loss_bbox": 0.20686, "loss_mask": 0.21751, "loss": 0.65265, "time": 0.39716} -{"mode": "train", "epoch": 9, "iter": 3250, "lr": 2e-05, "memory": 8424, "data_time": 0.04039, "loss_rpn_cls": 0.02177, "loss_rpn_bbox": 0.04258, "loss_cls": 0.17459, "acc": 93.57153, "loss_bbox": 0.22094, "loss_mask": 0.22714, "loss": 0.68702, "time": 0.4674} 
-{"mode": "train", "epoch": 9, "iter": 3300, "lr": 2e-05, "memory": 8424, "data_time": 0.05318, "loss_rpn_cls": 0.0232, "loss_rpn_bbox": 0.04502, "loss_cls": 0.17411, "acc": 93.73218, "loss_bbox": 0.21027, "loss_mask": 0.21991, "loss": 0.67251, "time": 0.41152} -{"mode": "train", "epoch": 9, "iter": 3350, "lr": 2e-05, "memory": 8424, "data_time": 0.04272, "loss_rpn_cls": 0.02195, "loss_rpn_bbox": 0.04489, "loss_cls": 0.17732, "acc": 93.40405, "loss_bbox": 0.22703, "loss_mask": 0.22947, "loss": 0.70066, "time": 0.41837} -{"mode": "train", "epoch": 9, "iter": 3400, "lr": 2e-05, "memory": 8424, "data_time": 0.03938, "loss_rpn_cls": 0.02067, "loss_rpn_bbox": 0.04386, "loss_cls": 0.1761, "acc": 93.58813, "loss_bbox": 0.21896, "loss_mask": 0.22414, "loss": 0.68375, "time": 0.40644} -{"mode": "train", "epoch": 9, "iter": 3450, "lr": 2e-05, "memory": 8424, "data_time": 0.04868, "loss_rpn_cls": 0.02168, "loss_rpn_bbox": 0.04368, "loss_cls": 0.17411, "acc": 93.64478, "loss_bbox": 0.21437, "loss_mask": 0.21929, "loss": 0.67312, "time": 0.41275} -{"mode": "train", "epoch": 9, "iter": 3500, "lr": 2e-05, "memory": 8424, "data_time": 0.04571, "loss_rpn_cls": 0.01946, "loss_rpn_bbox": 0.04212, "loss_cls": 0.17087, "acc": 93.83569, "loss_bbox": 0.21154, "loss_mask": 0.22026, "loss": 0.66425, "time": 0.44854} -{"mode": "train", "epoch": 9, "iter": 3550, "lr": 2e-05, "memory": 8424, "data_time": 0.04152, "loss_rpn_cls": 0.0189, "loss_rpn_bbox": 0.04046, "loss_cls": 0.17093, "acc": 93.8269, "loss_bbox": 0.20571, "loss_mask": 0.22101, "loss": 0.65701, "time": 0.39726} -{"mode": "train", "epoch": 9, "iter": 3600, "lr": 2e-05, "memory": 8424, "data_time": 0.0425, "loss_rpn_cls": 0.02101, "loss_rpn_bbox": 0.04593, "loss_cls": 0.17638, "acc": 93.58325, "loss_bbox": 0.21699, "loss_mask": 0.21971, "loss": 0.68002, "time": 0.42567} -{"mode": "train", "epoch": 9, "iter": 3650, "lr": 2e-05, "memory": 8424, "data_time": 0.04686, "loss_rpn_cls": 0.02139, "loss_rpn_bbox": 0.04384, "loss_cls": 0.17963, "acc": 93.50049, "loss_bbox": 0.2182, "loss_mask": 0.22425, "loss": 0.6873, "time": 0.40497} -{"mode": "train", "epoch": 9, "iter": 3700, "lr": 2e-05, "memory": 8424, "data_time": 0.05417, "loss_rpn_cls": 0.02191, "loss_rpn_bbox": 0.0432, "loss_cls": 0.18664, "acc": 93.2251, "loss_bbox": 0.22484, "loss_mask": 0.22298, "loss": 0.69957, "time": 0.41339} -{"mode": "train", "epoch": 9, "iter": 3750, "lr": 2e-05, "memory": 8424, "data_time": 0.04416, "loss_rpn_cls": 0.02049, "loss_rpn_bbox": 0.0408, "loss_cls": 0.1772, "acc": 93.48462, "loss_bbox": 0.21453, "loss_mask": 0.21823, "loss": 0.67124, "time": 0.40796} -{"mode": "train", "epoch": 9, "iter": 3800, "lr": 2e-05, "memory": 8424, "data_time": 0.04933, "loss_rpn_cls": 0.02112, "loss_rpn_bbox": 0.04398, "loss_cls": 0.17581, "acc": 93.55957, "loss_bbox": 0.22118, "loss_mask": 0.22037, "loss": 0.68245, "time": 0.41178} -{"mode": "train", "epoch": 9, "iter": 3850, "lr": 2e-05, "memory": 8424, "data_time": 0.04863, "loss_rpn_cls": 0.02123, "loss_rpn_bbox": 0.04429, "loss_cls": 0.17951, "acc": 93.40601, "loss_bbox": 0.22686, "loss_mask": 0.22498, "loss": 0.69686, "time": 0.41019} -{"mode": "train", "epoch": 9, "iter": 3900, "lr": 2e-05, "memory": 8424, "data_time": 0.0465, "loss_rpn_cls": 0.02048, "loss_rpn_bbox": 0.04291, "loss_cls": 0.17207, "acc": 93.82812, "loss_bbox": 0.21052, "loss_mask": 0.2169, "loss": 0.66288, "time": 0.41396} -{"mode": "train", "epoch": 9, "iter": 3950, "lr": 2e-05, "memory": 8424, "data_time": 0.04537, "loss_rpn_cls": 0.0213, "loss_rpn_bbox": 0.04454, 
"loss_cls": 0.17183, "acc": 93.59326, "loss_bbox": 0.2138, "loss_mask": 0.22249, "loss": 0.67396, "time": 0.41088} -{"mode": "train", "epoch": 9, "iter": 4000, "lr": 2e-05, "memory": 8424, "data_time": 0.05062, "loss_rpn_cls": 0.02292, "loss_rpn_bbox": 0.04619, "loss_cls": 0.18228, "acc": 93.36865, "loss_bbox": 0.22432, "loss_mask": 0.22149, "loss": 0.69721, "time": 0.40218} -{"mode": "train", "epoch": 9, "iter": 4050, "lr": 2e-05, "memory": 8424, "data_time": 0.04726, "loss_rpn_cls": 0.02082, "loss_rpn_bbox": 0.04174, "loss_cls": 0.1731, "acc": 93.67822, "loss_bbox": 0.21566, "loss_mask": 0.21863, "loss": 0.66995, "time": 0.41301} -{"mode": "train", "epoch": 9, "iter": 4100, "lr": 2e-05, "memory": 8424, "data_time": 0.04802, "loss_rpn_cls": 0.02237, "loss_rpn_bbox": 0.04454, "loss_cls": 0.19259, "acc": 93.0105, "loss_bbox": 0.23327, "loss_mask": 0.22732, "loss": 0.7201, "time": 0.4242} -{"mode": "train", "epoch": 9, "iter": 4150, "lr": 2e-05, "memory": 8424, "data_time": 0.04328, "loss_rpn_cls": 0.02159, "loss_rpn_bbox": 0.04338, "loss_cls": 0.17504, "acc": 93.63721, "loss_bbox": 0.22238, "loss_mask": 0.21967, "loss": 0.68205, "time": 0.41136} -{"mode": "train", "epoch": 9, "iter": 4200, "lr": 2e-05, "memory": 8424, "data_time": 0.04303, "loss_rpn_cls": 0.02146, "loss_rpn_bbox": 0.04131, "loss_cls": 0.18006, "acc": 93.55737, "loss_bbox": 0.21332, "loss_mask": 0.21516, "loss": 0.6713, "time": 0.40344} -{"mode": "train", "epoch": 9, "iter": 4250, "lr": 2e-05, "memory": 8424, "data_time": 0.05554, "loss_rpn_cls": 0.0223, "loss_rpn_bbox": 0.0465, "loss_cls": 0.17892, "acc": 93.40942, "loss_bbox": 0.22699, "loss_mask": 0.22459, "loss": 0.69931, "time": 0.40726} -{"mode": "train", "epoch": 9, "iter": 4300, "lr": 2e-05, "memory": 8424, "data_time": 0.05092, "loss_rpn_cls": 0.02269, "loss_rpn_bbox": 0.04467, "loss_cls": 0.17773, "acc": 93.58667, "loss_bbox": 0.21697, "loss_mask": 0.22405, "loss": 0.68611, "time": 0.41542} -{"mode": "train", "epoch": 9, "iter": 4350, "lr": 2e-05, "memory": 8424, "data_time": 0.0444, "loss_rpn_cls": 0.02091, "loss_rpn_bbox": 0.043, "loss_cls": 0.17478, "acc": 93.61621, "loss_bbox": 0.2169, "loss_mask": 0.21998, "loss": 0.67557, "time": 0.47305} -{"mode": "train", "epoch": 9, "iter": 4400, "lr": 2e-05, "memory": 8424, "data_time": 0.05175, "loss_rpn_cls": 0.02035, "loss_rpn_bbox": 0.04211, "loss_cls": 0.17707, "acc": 93.52417, "loss_bbox": 0.21685, "loss_mask": 0.22506, "loss": 0.68144, "time": 0.40577} -{"mode": "train", "epoch": 9, "iter": 4450, "lr": 2e-05, "memory": 8424, "data_time": 0.0471, "loss_rpn_cls": 0.0216, "loss_rpn_bbox": 0.04405, "loss_cls": 0.1732, "acc": 93.64258, "loss_bbox": 0.21587, "loss_mask": 0.22039, "loss": 0.67511, "time": 0.41156} -{"mode": "train", "epoch": 9, "iter": 4500, "lr": 2e-05, "memory": 8424, "data_time": 0.03751, "loss_rpn_cls": 0.02184, "loss_rpn_bbox": 0.04393, "loss_cls": 0.18557, "acc": 93.29224, "loss_bbox": 0.22913, "loss_mask": 0.22961, "loss": 0.71008, "time": 0.40822} -{"mode": "train", "epoch": 9, "iter": 4550, "lr": 2e-05, "memory": 8424, "data_time": 0.04871, "loss_rpn_cls": 0.02179, "loss_rpn_bbox": 0.04601, "loss_cls": 0.18379, "acc": 93.2915, "loss_bbox": 0.22719, "loss_mask": 0.22171, "loss": 0.70048, "time": 0.40778} -{"mode": "train", "epoch": 9, "iter": 4600, "lr": 2e-05, "memory": 8424, "data_time": 0.05039, "loss_rpn_cls": 0.0219, "loss_rpn_bbox": 0.04519, "loss_cls": 0.17987, "acc": 93.45557, "loss_bbox": 0.22111, "loss_mask": 0.22948, "loss": 0.69755, "time": 0.41752} -{"mode": "train", "epoch": 9, 
"iter": 4650, "lr": 2e-05, "memory": 8424, "data_time": 0.0372, "loss_rpn_cls": 0.01994, "loss_rpn_bbox": 0.04148, "loss_cls": 0.16962, "acc": 93.85327, "loss_bbox": 0.20846, "loss_mask": 0.21858, "loss": 0.65809, "time": 0.39458} -{"mode": "train", "epoch": 9, "iter": 4700, "lr": 2e-05, "memory": 8424, "data_time": 0.04634, "loss_rpn_cls": 0.01981, "loss_rpn_bbox": 0.04134, "loss_cls": 0.17291, "acc": 93.6665, "loss_bbox": 0.21245, "loss_mask": 0.223, "loss": 0.66951, "time": 0.40522} -{"mode": "train", "epoch": 9, "iter": 4750, "lr": 2e-05, "memory": 8424, "data_time": 0.04078, "loss_rpn_cls": 0.02245, "loss_rpn_bbox": 0.04435, "loss_cls": 0.18475, "acc": 93.19678, "loss_bbox": 0.22957, "loss_mask": 0.22457, "loss": 0.70569, "time": 0.40788} -{"mode": "train", "epoch": 9, "iter": 4800, "lr": 2e-05, "memory": 8424, "data_time": 0.05272, "loss_rpn_cls": 0.02223, "loss_rpn_bbox": 0.04313, "loss_cls": 0.18329, "acc": 93.41577, "loss_bbox": 0.21913, "loss_mask": 0.22285, "loss": 0.69064, "time": 0.4103} -{"mode": "train", "epoch": 9, "iter": 4850, "lr": 2e-05, "memory": 8424, "data_time": 0.03756, "loss_rpn_cls": 0.01992, "loss_rpn_bbox": 0.04224, "loss_cls": 0.17296, "acc": 93.69629, "loss_bbox": 0.21569, "loss_mask": 0.22188, "loss": 0.67269, "time": 0.39351} -{"mode": "train", "epoch": 9, "iter": 4900, "lr": 2e-05, "memory": 8424, "data_time": 0.05068, "loss_rpn_cls": 0.02215, "loss_rpn_bbox": 0.04374, "loss_cls": 0.18288, "acc": 93.21265, "loss_bbox": 0.22618, "loss_mask": 0.22713, "loss": 0.70208, "time": 0.4123} -{"mode": "train", "epoch": 9, "iter": 4950, "lr": 2e-05, "memory": 8424, "data_time": 0.04662, "loss_rpn_cls": 0.02018, "loss_rpn_bbox": 0.04038, "loss_cls": 0.16922, "acc": 93.8396, "loss_bbox": 0.21315, "loss_mask": 0.22429, "loss": 0.66722, "time": 0.4034} -{"mode": "train", "epoch": 9, "iter": 5000, "lr": 2e-05, "memory": 8424, "data_time": 0.0426, "loss_rpn_cls": 0.02209, "loss_rpn_bbox": 0.04465, "loss_cls": 0.17921, "acc": 93.45508, "loss_bbox": 0.21972, "loss_mask": 0.22306, "loss": 0.68872, "time": 0.40059} -{"mode": "train", "epoch": 9, "iter": 5050, "lr": 2e-05, "memory": 8424, "data_time": 0.05471, "loss_rpn_cls": 0.02296, "loss_rpn_bbox": 0.04599, "loss_cls": 0.17792, "acc": 93.52173, "loss_bbox": 0.21704, "loss_mask": 0.22567, "loss": 0.68959, "time": 0.40949} -{"mode": "train", "epoch": 9, "iter": 5100, "lr": 2e-05, "memory": 8424, "data_time": 0.04871, "loss_rpn_cls": 0.02137, "loss_rpn_bbox": 0.04593, "loss_cls": 0.17995, "acc": 93.50293, "loss_bbox": 0.2213, "loss_mask": 0.2243, "loss": 0.69285, "time": 0.41312} -{"mode": "train", "epoch": 9, "iter": 5150, "lr": 2e-05, "memory": 8424, "data_time": 0.04989, "loss_rpn_cls": 0.02155, "loss_rpn_bbox": 0.04216, "loss_cls": 0.17485, "acc": 93.61255, "loss_bbox": 0.21898, "loss_mask": 0.22779, "loss": 0.68534, "time": 0.39414} -{"mode": "train", "epoch": 9, "iter": 5200, "lr": 2e-05, "memory": 8424, "data_time": 0.04323, "loss_rpn_cls": 0.02218, "loss_rpn_bbox": 0.04057, "loss_cls": 0.17348, "acc": 93.79102, "loss_bbox": 0.21128, "loss_mask": 0.22266, "loss": 0.67017, "time": 0.40565} -{"mode": "train", "epoch": 9, "iter": 5250, "lr": 2e-05, "memory": 8424, "data_time": 0.04033, "loss_rpn_cls": 0.0203, "loss_rpn_bbox": 0.04311, "loss_cls": 0.17499, "acc": 93.63721, "loss_bbox": 0.21672, "loss_mask": 0.22376, "loss": 0.67888, "time": 0.40897} -{"mode": "train", "epoch": 9, "iter": 5300, "lr": 2e-05, "memory": 8424, "data_time": 0.03742, "loss_rpn_cls": 0.02171, "loss_rpn_bbox": 0.04425, "loss_cls": 0.17705, "acc": 
93.57446, "loss_bbox": 0.21467, "loss_mask": 0.22216, "loss": 0.67984, "time": 0.40847} -{"mode": "train", "epoch": 9, "iter": 5350, "lr": 2e-05, "memory": 8424, "data_time": 0.04986, "loss_rpn_cls": 0.02217, "loss_rpn_bbox": 0.04309, "loss_cls": 0.17888, "acc": 93.48999, "loss_bbox": 0.22369, "loss_mask": 0.22599, "loss": 0.69383, "time": 0.40747} -{"mode": "train", "epoch": 9, "iter": 5400, "lr": 2e-05, "memory": 8424, "data_time": 0.04292, "loss_rpn_cls": 0.0217, "loss_rpn_bbox": 0.04278, "loss_cls": 0.17738, "acc": 93.50439, "loss_bbox": 0.21875, "loss_mask": 0.22491, "loss": 0.68552, "time": 0.41393} -{"mode": "train", "epoch": 9, "iter": 5450, "lr": 2e-05, "memory": 8424, "data_time": 0.04001, "loss_rpn_cls": 0.02182, "loss_rpn_bbox": 0.04197, "loss_cls": 0.17378, "acc": 93.65698, "loss_bbox": 0.21603, "loss_mask": 0.21732, "loss": 0.67092, "time": 0.40679} -{"mode": "train", "epoch": 9, "iter": 5500, "lr": 2e-05, "memory": 8424, "data_time": 0.03943, "loss_rpn_cls": 0.02159, "loss_rpn_bbox": 0.04499, "loss_cls": 0.18141, "acc": 93.37451, "loss_bbox": 0.22696, "loss_mask": 0.22634, "loss": 0.70129, "time": 0.4887} -{"mode": "train", "epoch": 9, "iter": 5550, "lr": 2e-05, "memory": 8424, "data_time": 0.05552, "loss_rpn_cls": 0.02145, "loss_rpn_bbox": 0.04332, "loss_cls": 0.18061, "acc": 93.32275, "loss_bbox": 0.22218, "loss_mask": 0.22168, "loss": 0.68925, "time": 0.41089} -{"mode": "train", "epoch": 9, "iter": 5600, "lr": 2e-05, "memory": 8424, "data_time": 0.03971, "loss_rpn_cls": 0.02285, "loss_rpn_bbox": 0.04236, "loss_cls": 0.17055, "acc": 93.78955, "loss_bbox": 0.20681, "loss_mask": 0.21789, "loss": 0.66046, "time": 0.41093} -{"mode": "train", "epoch": 9, "iter": 5650, "lr": 2e-05, "memory": 8424, "data_time": 0.05803, "loss_rpn_cls": 0.02192, "loss_rpn_bbox": 0.04483, "loss_cls": 0.18211, "acc": 93.30688, "loss_bbox": 0.22938, "loss_mask": 0.22395, "loss": 0.70219, "time": 0.41941} -{"mode": "train", "epoch": 9, "iter": 5700, "lr": 2e-05, "memory": 8424, "data_time": 0.05169, "loss_rpn_cls": 0.02166, "loss_rpn_bbox": 0.04406, "loss_cls": 0.18058, "acc": 93.4292, "loss_bbox": 0.22513, "loss_mask": 0.22488, "loss": 0.69631, "time": 0.4053} -{"mode": "train", "epoch": 9, "iter": 5750, "lr": 2e-05, "memory": 8424, "data_time": 0.04479, "loss_rpn_cls": 0.02031, "loss_rpn_bbox": 0.0413, "loss_cls": 0.17229, "acc": 93.73706, "loss_bbox": 0.21384, "loss_mask": 0.21979, "loss": 0.66754, "time": 0.40468} -{"mode": "train", "epoch": 9, "iter": 5800, "lr": 2e-05, "memory": 8424, "data_time": 0.04547, "loss_rpn_cls": 0.02132, "loss_rpn_bbox": 0.04196, "loss_cls": 0.17968, "acc": 93.42285, "loss_bbox": 0.21934, "loss_mask": 0.22457, "loss": 0.68688, "time": 0.39967} -{"mode": "train", "epoch": 9, "iter": 5850, "lr": 2e-05, "memory": 8424, "data_time": 0.04439, "loss_rpn_cls": 0.0207, "loss_rpn_bbox": 0.0423, "loss_cls": 0.1785, "acc": 93.50732, "loss_bbox": 0.22021, "loss_mask": 0.22225, "loss": 0.68397, "time": 0.39573} -{"mode": "train", "epoch": 9, "iter": 5900, "lr": 2e-05, "memory": 8424, "data_time": 0.05079, "loss_rpn_cls": 0.02247, "loss_rpn_bbox": 0.04452, "loss_cls": 0.17454, "acc": 93.73633, "loss_bbox": 0.21141, "loss_mask": 0.22504, "loss": 0.67797, "time": 0.39555} -{"mode": "train", "epoch": 9, "iter": 5950, "lr": 2e-05, "memory": 8424, "data_time": 0.04295, "loss_rpn_cls": 0.02089, "loss_rpn_bbox": 0.04399, "loss_cls": 0.17641, "acc": 93.61597, "loss_bbox": 0.21665, "loss_mask": 0.22335, "loss": 0.68129, "time": 0.40045} -{"mode": "train", "epoch": 9, "iter": 6000, "lr": 
2e-05, "memory": 8424, "data_time": 0.04278, "loss_rpn_cls": 0.02054, "loss_rpn_bbox": 0.04397, "loss_cls": 0.18551, "acc": 93.24658, "loss_bbox": 0.22517, "loss_mask": 0.22126, "loss": 0.69646, "time": 0.40088} -{"mode": "train", "epoch": 9, "iter": 6050, "lr": 2e-05, "memory": 8424, "data_time": 0.04493, "loss_rpn_cls": 0.02076, "loss_rpn_bbox": 0.0448, "loss_cls": 0.17442, "acc": 93.56836, "loss_bbox": 0.21857, "loss_mask": 0.22291, "loss": 0.68146, "time": 0.39521} -{"mode": "train", "epoch": 9, "iter": 6100, "lr": 2e-05, "memory": 8424, "data_time": 0.04633, "loss_rpn_cls": 0.02356, "loss_rpn_bbox": 0.04527, "loss_cls": 0.18048, "acc": 93.38916, "loss_bbox": 0.22474, "loss_mask": 0.22614, "loss": 0.7002, "time": 0.39822} -{"mode": "train", "epoch": 9, "iter": 6150, "lr": 2e-05, "memory": 8424, "data_time": 0.05189, "loss_rpn_cls": 0.02004, "loss_rpn_bbox": 0.04374, "loss_cls": 0.17416, "acc": 93.70459, "loss_bbox": 0.21177, "loss_mask": 0.2183, "loss": 0.66801, "time": 0.41751} -{"mode": "train", "epoch": 9, "iter": 6200, "lr": 2e-05, "memory": 8424, "data_time": 0.04127, "loss_rpn_cls": 0.02129, "loss_rpn_bbox": 0.04265, "loss_cls": 0.17411, "acc": 93.69727, "loss_bbox": 0.2102, "loss_mask": 0.22011, "loss": 0.66836, "time": 0.40068} -{"mode": "train", "epoch": 9, "iter": 6250, "lr": 2e-05, "memory": 8424, "data_time": 0.04246, "loss_rpn_cls": 0.02109, "loss_rpn_bbox": 0.04059, "loss_cls": 0.17112, "acc": 93.83716, "loss_bbox": 0.2099, "loss_mask": 0.21559, "loss": 0.6583, "time": 0.41096} -{"mode": "train", "epoch": 9, "iter": 6300, "lr": 2e-05, "memory": 8424, "data_time": 0.04268, "loss_rpn_cls": 0.02299, "loss_rpn_bbox": 0.04631, "loss_cls": 0.17852, "acc": 93.51636, "loss_bbox": 0.21541, "loss_mask": 0.22091, "loss": 0.68415, "time": 0.42082} -{"mode": "train", "epoch": 9, "iter": 6350, "lr": 2e-05, "memory": 8424, "data_time": 0.04836, "loss_rpn_cls": 0.0226, "loss_rpn_bbox": 0.04428, "loss_cls": 0.18377, "acc": 93.30518, "loss_bbox": 0.22357, "loss_mask": 0.22445, "loss": 0.69868, "time": 0.42261} -{"mode": "train", "epoch": 9, "iter": 6400, "lr": 2e-05, "memory": 8424, "data_time": 0.04164, "loss_rpn_cls": 0.01972, "loss_rpn_bbox": 0.0407, "loss_cls": 0.17063, "acc": 93.75342, "loss_bbox": 0.20477, "loss_mask": 0.21945, "loss": 0.65526, "time": 0.41015} -{"mode": "train", "epoch": 9, "iter": 6450, "lr": 2e-05, "memory": 8424, "data_time": 0.0496, "loss_rpn_cls": 0.0219, "loss_rpn_bbox": 0.04381, "loss_cls": 0.18716, "acc": 93.21704, "loss_bbox": 0.23037, "loss_mask": 0.22715, "loss": 0.7104, "time": 0.40854} -{"mode": "train", "epoch": 9, "iter": 6500, "lr": 2e-05, "memory": 8424, "data_time": 0.04304, "loss_rpn_cls": 0.02207, "loss_rpn_bbox": 0.04533, "loss_cls": 0.18779, "acc": 93.07983, "loss_bbox": 0.2324, "loss_mask": 0.22602, "loss": 0.71362, "time": 0.46766} -{"mode": "train", "epoch": 9, "iter": 6550, "lr": 2e-05, "memory": 8424, "data_time": 0.04832, "loss_rpn_cls": 0.02125, "loss_rpn_bbox": 0.04513, "loss_cls": 0.18173, "acc": 93.38745, "loss_bbox": 0.22281, "loss_mask": 0.22608, "loss": 0.697, "time": 0.417} -{"mode": "train", "epoch": 9, "iter": 6600, "lr": 2e-05, "memory": 8424, "data_time": 0.04315, "loss_rpn_cls": 0.01894, "loss_rpn_bbox": 0.04258, "loss_cls": 0.17398, "acc": 93.62012, "loss_bbox": 0.21564, "loss_mask": 0.21841, "loss": 0.66956, "time": 0.42036} -{"mode": "train", "epoch": 9, "iter": 6650, "lr": 2e-05, "memory": 8424, "data_time": 0.04469, "loss_rpn_cls": 0.02022, "loss_rpn_bbox": 0.04215, "loss_cls": 0.17028, "acc": 93.82935, "loss_bbox": 
0.21228, "loss_mask": 0.22192, "loss": 0.66686, "time": 0.48673} -{"mode": "train", "epoch": 9, "iter": 6700, "lr": 2e-05, "memory": 8424, "data_time": 0.04718, "loss_rpn_cls": 0.02182, "loss_rpn_bbox": 0.04463, "loss_cls": 0.18016, "acc": 93.35181, "loss_bbox": 0.22178, "loss_mask": 0.22052, "loss": 0.6889, "time": 0.43262} -{"mode": "train", "epoch": 9, "iter": 6750, "lr": 2e-05, "memory": 8424, "data_time": 0.04685, "loss_rpn_cls": 0.02302, "loss_rpn_bbox": 0.04255, "loss_cls": 0.18111, "acc": 93.45923, "loss_bbox": 0.2218, "loss_mask": 0.22792, "loss": 0.69641, "time": 0.40817} -{"mode": "train", "epoch": 9, "iter": 6800, "lr": 2e-05, "memory": 8424, "data_time": 0.05125, "loss_rpn_cls": 0.0209, "loss_rpn_bbox": 0.04417, "loss_cls": 0.17114, "acc": 93.7417, "loss_bbox": 0.2135, "loss_mask": 0.22103, "loss": 0.67074, "time": 0.41012} -{"mode": "train", "epoch": 9, "iter": 6850, "lr": 2e-05, "memory": 8424, "data_time": 0.05025, "loss_rpn_cls": 0.02046, "loss_rpn_bbox": 0.04392, "loss_cls": 0.17735, "acc": 93.58618, "loss_bbox": 0.21807, "loss_mask": 0.21723, "loss": 0.67702, "time": 0.39705} -{"mode": "train", "epoch": 9, "iter": 6900, "lr": 2e-05, "memory": 8424, "data_time": 0.04527, "loss_rpn_cls": 0.02138, "loss_rpn_bbox": 0.04372, "loss_cls": 0.17898, "acc": 93.55811, "loss_bbox": 0.21801, "loss_mask": 0.22263, "loss": 0.68472, "time": 0.39768} -{"mode": "train", "epoch": 9, "iter": 6950, "lr": 2e-05, "memory": 8424, "data_time": 0.04402, "loss_rpn_cls": 0.02073, "loss_rpn_bbox": 0.04208, "loss_cls": 0.17026, "acc": 93.77441, "loss_bbox": 0.21414, "loss_mask": 0.21823, "loss": 0.66543, "time": 0.41164} -{"mode": "train", "epoch": 9, "iter": 7000, "lr": 2e-05, "memory": 8424, "data_time": 0.05302, "loss_rpn_cls": 0.02229, "loss_rpn_bbox": 0.04493, "loss_cls": 0.18716, "acc": 93.22827, "loss_bbox": 0.22707, "loss_mask": 0.22526, "loss": 0.70671, "time": 0.41146} -{"mode": "train", "epoch": 9, "iter": 7050, "lr": 2e-05, "memory": 8424, "data_time": 0.03712, "loss_rpn_cls": 0.02138, "loss_rpn_bbox": 0.04331, "loss_cls": 0.18163, "acc": 93.34497, "loss_bbox": 0.22314, "loss_mask": 0.22607, "loss": 0.69552, "time": 0.40768} -{"mode": "train", "epoch": 9, "iter": 7100, "lr": 2e-05, "memory": 8424, "data_time": 0.04854, "loss_rpn_cls": 0.02167, "loss_rpn_bbox": 0.04175, "loss_cls": 0.17799, "acc": 93.521, "loss_bbox": 0.22, "loss_mask": 0.21812, "loss": 0.67953, "time": 0.40871} -{"mode": "train", "epoch": 9, "iter": 7150, "lr": 2e-05, "memory": 8424, "data_time": 0.0548, "loss_rpn_cls": 0.02029, "loss_rpn_bbox": 0.04088, "loss_cls": 0.16901, "acc": 93.77417, "loss_bbox": 0.21469, "loss_mask": 0.21981, "loss": 0.66467, "time": 0.41488} -{"mode": "train", "epoch": 9, "iter": 7200, "lr": 2e-05, "memory": 8424, "data_time": 0.04692, "loss_rpn_cls": 0.02143, "loss_rpn_bbox": 0.04307, "loss_cls": 0.17388, "acc": 93.69531, "loss_bbox": 0.21602, "loss_mask": 0.22447, "loss": 0.67886, "time": 0.40566} -{"mode": "train", "epoch": 9, "iter": 7250, "lr": 2e-05, "memory": 8424, "data_time": 0.04105, "loss_rpn_cls": 0.02242, "loss_rpn_bbox": 0.04423, "loss_cls": 0.18308, "acc": 93.39844, "loss_bbox": 0.22131, "loss_mask": 0.22355, "loss": 0.69458, "time": 0.40638} -{"mode": "train", "epoch": 9, "iter": 7300, "lr": 2e-05, "memory": 8424, "data_time": 0.04698, "loss_rpn_cls": 0.01954, "loss_rpn_bbox": 0.04273, "loss_cls": 0.17124, "acc": 93.66211, "loss_bbox": 0.2121, "loss_mask": 0.21783, "loss": 0.66343, "time": 0.46976} -{"mode": "val", "epoch": 9, "iter": 625, "lr": 2e-05, "bbox_mAP": 0.4093, 
"bbox_mAP_50": 0.6274, "bbox_mAP_75": 0.4466, "bbox_mAP_s": 0.2375, "bbox_mAP_m": 0.4457, "bbox_mAP_l": 0.5393, "bbox_mAP_copypaste": "0.4093 0.6274 0.4466 0.2375 0.4457 0.5393", "segm_mAP": 0.3802, "segm_mAP_50": 0.5985, "segm_mAP_75": 0.4074, "segm_mAP_s": 0.1801, "segm_mAP_m": 0.4096, "segm_mAP_l": 0.5602, "segm_mAP_copypaste": "0.3802 0.5985 0.4074 0.1801 0.4096 0.5602"} -{"mode": "train", "epoch": 10, "iter": 50, "lr": 2e-05, "memory": 8424, "data_time": 0.13068, "loss_rpn_cls": 0.02015, "loss_rpn_bbox": 0.04249, "loss_cls": 0.17353, "acc": 93.61963, "loss_bbox": 0.222, "loss_mask": 0.22054, "loss": 0.67872, "time": 0.52171} -{"mode": "train", "epoch": 10, "iter": 100, "lr": 2e-05, "memory": 8424, "data_time": 0.0432, "loss_rpn_cls": 0.01951, "loss_rpn_bbox": 0.04181, "loss_cls": 0.16648, "acc": 93.86816, "loss_bbox": 0.20648, "loss_mask": 0.21621, "loss": 0.65049, "time": 0.42507} -{"mode": "train", "epoch": 10, "iter": 150, "lr": 2e-05, "memory": 8424, "data_time": 0.05192, "loss_rpn_cls": 0.02045, "loss_rpn_bbox": 0.04177, "loss_cls": 0.1721, "acc": 93.70508, "loss_bbox": 0.21303, "loss_mask": 0.22193, "loss": 0.66927, "time": 0.42071} -{"mode": "train", "epoch": 10, "iter": 200, "lr": 2e-05, "memory": 8424, "data_time": 0.0467, "loss_rpn_cls": 0.02189, "loss_rpn_bbox": 0.04577, "loss_cls": 0.17879, "acc": 93.51587, "loss_bbox": 0.22081, "loss_mask": 0.22545, "loss": 0.6927, "time": 0.42604} -{"mode": "train", "epoch": 10, "iter": 250, "lr": 2e-05, "memory": 8424, "data_time": 0.04614, "loss_rpn_cls": 0.02189, "loss_rpn_bbox": 0.04698, "loss_cls": 0.18434, "acc": 93.26587, "loss_bbox": 0.23095, "loss_mask": 0.22151, "loss": 0.70568, "time": 0.42914} -{"mode": "train", "epoch": 10, "iter": 300, "lr": 2e-05, "memory": 8424, "data_time": 0.04175, "loss_rpn_cls": 0.01989, "loss_rpn_bbox": 0.03998, "loss_cls": 0.16422, "acc": 94.00732, "loss_bbox": 0.20776, "loss_mask": 0.21614, "loss": 0.64798, "time": 0.41649} -{"mode": "train", "epoch": 10, "iter": 350, "lr": 2e-05, "memory": 8424, "data_time": 0.05311, "loss_rpn_cls": 0.02043, "loss_rpn_bbox": 0.04361, "loss_cls": 0.17788, "acc": 93.48828, "loss_bbox": 0.21838, "loss_mask": 0.21973, "loss": 0.68004, "time": 0.43239} -{"mode": "train", "epoch": 10, "iter": 400, "lr": 2e-05, "memory": 8424, "data_time": 0.05291, "loss_rpn_cls": 0.02044, "loss_rpn_bbox": 0.04437, "loss_cls": 0.17841, "acc": 93.40381, "loss_bbox": 0.22056, "loss_mask": 0.22198, "loss": 0.68577, "time": 0.42308} -{"mode": "train", "epoch": 10, "iter": 450, "lr": 2e-05, "memory": 8424, "data_time": 0.04383, "loss_rpn_cls": 0.01961, "loss_rpn_bbox": 0.04259, "loss_cls": 0.16769, "acc": 93.80615, "loss_bbox": 0.20892, "loss_mask": 0.22181, "loss": 0.66062, "time": 0.41141} -{"mode": "train", "epoch": 10, "iter": 500, "lr": 2e-05, "memory": 8424, "data_time": 0.04868, "loss_rpn_cls": 0.01974, "loss_rpn_bbox": 0.04502, "loss_cls": 0.1732, "acc": 93.62134, "loss_bbox": 0.21638, "loss_mask": 0.21876, "loss": 0.6731, "time": 0.40554} -{"mode": "train", "epoch": 10, "iter": 550, "lr": 2e-05, "memory": 8424, "data_time": 0.04929, "loss_rpn_cls": 0.01957, "loss_rpn_bbox": 0.0423, "loss_cls": 0.17183, "acc": 93.75464, "loss_bbox": 0.21605, "loss_mask": 0.21677, "loss": 0.66652, "time": 0.40506} -{"mode": "train", "epoch": 10, "iter": 600, "lr": 2e-05, "memory": 8424, "data_time": 0.05172, "loss_rpn_cls": 0.01998, "loss_rpn_bbox": 0.04394, "loss_cls": 0.17662, "acc": 93.4353, "loss_bbox": 0.22453, "loss_mask": 0.22213, "loss": 0.68721, "time": 0.42529} -{"mode": "train", "epoch": 10, 
"iter": 650, "lr": 2e-05, "memory": 8424, "data_time": 0.05161, "loss_rpn_cls": 0.02071, "loss_rpn_bbox": 0.04449, "loss_cls": 0.17303, "acc": 93.65942, "loss_bbox": 0.21711, "loss_mask": 0.22243, "loss": 0.67778, "time": 0.41772} -{"mode": "train", "epoch": 10, "iter": 700, "lr": 2e-05, "memory": 8424, "data_time": 0.03633, "loss_rpn_cls": 0.01814, "loss_rpn_bbox": 0.04123, "loss_cls": 0.17017, "acc": 93.75684, "loss_bbox": 0.21055, "loss_mask": 0.21866, "loss": 0.65876, "time": 0.40888} -{"mode": "train", "epoch": 10, "iter": 750, "lr": 2e-05, "memory": 8424, "data_time": 0.03439, "loss_rpn_cls": 0.02035, "loss_rpn_bbox": 0.043, "loss_cls": 0.17578, "acc": 93.52051, "loss_bbox": 0.21557, "loss_mask": 0.21901, "loss": 0.67371, "time": 0.406} -{"mode": "train", "epoch": 10, "iter": 800, "lr": 2e-05, "memory": 8424, "data_time": 0.05345, "loss_rpn_cls": 0.0226, "loss_rpn_bbox": 0.04543, "loss_cls": 0.17062, "acc": 93.67969, "loss_bbox": 0.21365, "loss_mask": 0.21726, "loss": 0.66956, "time": 0.41816} -{"mode": "train", "epoch": 10, "iter": 850, "lr": 2e-05, "memory": 8424, "data_time": 0.05186, "loss_rpn_cls": 0.02091, "loss_rpn_bbox": 0.04374, "loss_cls": 0.17046, "acc": 93.62451, "loss_bbox": 0.21902, "loss_mask": 0.22199, "loss": 0.67612, "time": 0.40323} -{"mode": "train", "epoch": 10, "iter": 900, "lr": 2e-05, "memory": 8424, "data_time": 0.04018, "loss_rpn_cls": 0.02004, "loss_rpn_bbox": 0.04223, "loss_cls": 0.16822, "acc": 93.87793, "loss_bbox": 0.20687, "loss_mask": 0.21476, "loss": 0.65211, "time": 0.4} -{"mode": "train", "epoch": 10, "iter": 950, "lr": 2e-05, "memory": 8424, "data_time": 0.04275, "loss_rpn_cls": 0.02264, "loss_rpn_bbox": 0.0438, "loss_cls": 0.17568, "acc": 93.54492, "loss_bbox": 0.22069, "loss_mask": 0.22066, "loss": 0.68347, "time": 0.42411} -{"mode": "train", "epoch": 10, "iter": 1000, "lr": 2e-05, "memory": 8424, "data_time": 0.0369, "loss_rpn_cls": 0.02015, "loss_rpn_bbox": 0.04227, "loss_cls": 0.16453, "acc": 93.86792, "loss_bbox": 0.21394, "loss_mask": 0.2226, "loss": 0.66349, "time": 0.40261} -{"mode": "train", "epoch": 10, "iter": 1050, "lr": 2e-05, "memory": 8424, "data_time": 0.04156, "loss_rpn_cls": 0.01979, "loss_rpn_bbox": 0.04235, "loss_cls": 0.17399, "acc": 93.49414, "loss_bbox": 0.2205, "loss_mask": 0.2212, "loss": 0.67784, "time": 0.40134} -{"mode": "train", "epoch": 10, "iter": 1100, "lr": 2e-05, "memory": 8424, "data_time": 0.05125, "loss_rpn_cls": 0.02215, "loss_rpn_bbox": 0.04666, "loss_cls": 0.17761, "acc": 93.3479, "loss_bbox": 0.23091, "loss_mask": 0.22259, "loss": 0.69991, "time": 0.47917} -{"mode": "train", "epoch": 10, "iter": 1150, "lr": 2e-05, "memory": 8424, "data_time": 0.05438, "loss_rpn_cls": 0.02088, "loss_rpn_bbox": 0.04145, "loss_cls": 0.17129, "acc": 93.65112, "loss_bbox": 0.21706, "loss_mask": 0.22565, "loss": 0.67633, "time": 0.46829} -{"mode": "train", "epoch": 10, "iter": 1200, "lr": 2e-05, "memory": 8424, "data_time": 0.03975, "loss_rpn_cls": 0.021, "loss_rpn_bbox": 0.04478, "loss_cls": 0.18263, "acc": 93.2998, "loss_bbox": 0.22651, "loss_mask": 0.22358, "loss": 0.6985, "time": 0.419} -{"mode": "train", "epoch": 10, "iter": 1250, "lr": 2e-05, "memory": 8424, "data_time": 0.04687, "loss_rpn_cls": 0.01993, "loss_rpn_bbox": 0.04463, "loss_cls": 0.16645, "acc": 93.76831, "loss_bbox": 0.20738, "loss_mask": 0.21574, "loss": 0.65412, "time": 0.41269} -{"mode": "train", "epoch": 10, "iter": 1300, "lr": 2e-05, "memory": 8424, "data_time": 0.04634, "loss_rpn_cls": 0.01863, "loss_rpn_bbox": 0.04154, "loss_cls": 0.16571, "acc": 
93.89478, "loss_bbox": 0.21182, "loss_mask": 0.21343, "loss": 0.65113, "time": 0.40081} -{"mode": "train", "epoch": 10, "iter": 1350, "lr": 2e-05, "memory": 8424, "data_time": 0.04367, "loss_rpn_cls": 0.01998, "loss_rpn_bbox": 0.04404, "loss_cls": 0.17648, "acc": 93.53101, "loss_bbox": 0.22552, "loss_mask": 0.22669, "loss": 0.69271, "time": 0.41723} -{"mode": "train", "epoch": 10, "iter": 1400, "lr": 2e-05, "memory": 8424, "data_time": 0.0421, "loss_rpn_cls": 0.02167, "loss_rpn_bbox": 0.04315, "loss_cls": 0.1835, "acc": 93.26074, "loss_bbox": 0.22095, "loss_mask": 0.22251, "loss": 0.69178, "time": 0.41804} -{"mode": "train", "epoch": 10, "iter": 1450, "lr": 2e-05, "memory": 8424, "data_time": 0.04641, "loss_rpn_cls": 0.02035, "loss_rpn_bbox": 0.04169, "loss_cls": 0.17102, "acc": 93.72217, "loss_bbox": 0.21501, "loss_mask": 0.22131, "loss": 0.66938, "time": 0.42133} -{"mode": "train", "epoch": 10, "iter": 1500, "lr": 2e-05, "memory": 8424, "data_time": 0.05201, "loss_rpn_cls": 0.02041, "loss_rpn_bbox": 0.04429, "loss_cls": 0.1718, "acc": 93.67822, "loss_bbox": 0.21782, "loss_mask": 0.22099, "loss": 0.67532, "time": 0.40285} -{"mode": "train", "epoch": 10, "iter": 1550, "lr": 2e-05, "memory": 8424, "data_time": 0.04441, "loss_rpn_cls": 0.02108, "loss_rpn_bbox": 0.04549, "loss_cls": 0.17316, "acc": 93.57788, "loss_bbox": 0.22192, "loss_mask": 0.22416, "loss": 0.68581, "time": 0.4198} -{"mode": "train", "epoch": 10, "iter": 1600, "lr": 2e-05, "memory": 8424, "data_time": 0.03539, "loss_rpn_cls": 0.02121, "loss_rpn_bbox": 0.04329, "loss_cls": 0.17519, "acc": 93.64209, "loss_bbox": 0.22117, "loss_mask": 0.22418, "loss": 0.68504, "time": 0.4512} -{"mode": "train", "epoch": 10, "iter": 1650, "lr": 2e-05, "memory": 8424, "data_time": 0.04175, "loss_rpn_cls": 0.02035, "loss_rpn_bbox": 0.04487, "loss_cls": 0.17658, "acc": 93.47681, "loss_bbox": 0.22019, "loss_mask": 0.22682, "loss": 0.6888, "time": 0.41325} -{"mode": "train", "epoch": 10, "iter": 1700, "lr": 2e-05, "memory": 8424, "data_time": 0.04883, "loss_rpn_cls": 0.02065, "loss_rpn_bbox": 0.04258, "loss_cls": 0.17165, "acc": 93.65454, "loss_bbox": 0.21934, "loss_mask": 0.22235, "loss": 0.67657, "time": 0.41872} -{"mode": "train", "epoch": 10, "iter": 1750, "lr": 2e-05, "memory": 8424, "data_time": 0.03799, "loss_rpn_cls": 0.01802, "loss_rpn_bbox": 0.04088, "loss_cls": 0.16316, "acc": 93.95874, "loss_bbox": 0.20461, "loss_mask": 0.21902, "loss": 0.64568, "time": 0.40398} -{"mode": "train", "epoch": 10, "iter": 1800, "lr": 2e-05, "memory": 8424, "data_time": 0.04214, "loss_rpn_cls": 0.01935, "loss_rpn_bbox": 0.04149, "loss_cls": 0.16937, "acc": 93.87036, "loss_bbox": 0.20969, "loss_mask": 0.21713, "loss": 0.65703, "time": 0.41999} -{"mode": "train", "epoch": 10, "iter": 1850, "lr": 2e-05, "memory": 8424, "data_time": 0.04466, "loss_rpn_cls": 0.02083, "loss_rpn_bbox": 0.0424, "loss_cls": 0.16972, "acc": 93.75366, "loss_bbox": 0.21074, "loss_mask": 0.21903, "loss": 0.66273, "time": 0.41564} -{"mode": "train", "epoch": 10, "iter": 1900, "lr": 2e-05, "memory": 8424, "data_time": 0.05481, "loss_rpn_cls": 0.01899, "loss_rpn_bbox": 0.04172, "loss_cls": 0.16682, "acc": 93.82446, "loss_bbox": 0.21188, "loss_mask": 0.21406, "loss": 0.65347, "time": 0.40579} -{"mode": "train", "epoch": 10, "iter": 1950, "lr": 2e-05, "memory": 8424, "data_time": 0.04364, "loss_rpn_cls": 0.02017, "loss_rpn_bbox": 0.04239, "loss_cls": 0.17491, "acc": 93.54883, "loss_bbox": 0.21858, "loss_mask": 0.22042, "loss": 0.67647, "time": 0.41048} -{"mode": "train", "epoch": 10, "iter": 
2000, "lr": 2e-05, "memory": 8424, "data_time": 0.05001, "loss_rpn_cls": 0.01894, "loss_rpn_bbox": 0.0419, "loss_cls": 0.174, "acc": 93.75635, "loss_bbox": 0.21439, "loss_mask": 0.22034, "loss": 0.66957, "time": 0.39181} -{"mode": "train", "epoch": 10, "iter": 2050, "lr": 2e-05, "memory": 8424, "data_time": 0.05355, "loss_rpn_cls": 0.01917, "loss_rpn_bbox": 0.04128, "loss_cls": 0.16906, "acc": 93.83228, "loss_bbox": 0.21396, "loss_mask": 0.2172, "loss": 0.66068, "time": 0.4105} -{"mode": "train", "epoch": 10, "iter": 2100, "lr": 2e-05, "memory": 8424, "data_time": 0.04549, "loss_rpn_cls": 0.01978, "loss_rpn_bbox": 0.04643, "loss_cls": 0.17588, "acc": 93.51343, "loss_bbox": 0.21952, "loss_mask": 0.21953, "loss": 0.68114, "time": 0.41664} -{"mode": "train", "epoch": 10, "iter": 2150, "lr": 2e-05, "memory": 8424, "data_time": 0.04484, "loss_rpn_cls": 0.01993, "loss_rpn_bbox": 0.04359, "loss_cls": 0.16925, "acc": 93.80396, "loss_bbox": 0.21222, "loss_mask": 0.21686, "loss": 0.66185, "time": 0.41914} -{"mode": "train", "epoch": 10, "iter": 2200, "lr": 2e-05, "memory": 8424, "data_time": 0.04396, "loss_rpn_cls": 0.02165, "loss_rpn_bbox": 0.04637, "loss_cls": 0.18012, "acc": 93.34399, "loss_bbox": 0.22303, "loss_mask": 0.21937, "loss": 0.69054, "time": 0.43672} -{"mode": "train", "epoch": 10, "iter": 2250, "lr": 2e-05, "memory": 8424, "data_time": 0.04845, "loss_rpn_cls": 0.02098, "loss_rpn_bbox": 0.04449, "loss_cls": 0.17669, "acc": 93.57715, "loss_bbox": 0.21653, "loss_mask": 0.21877, "loss": 0.67745, "time": 0.408} -{"mode": "train", "epoch": 10, "iter": 2300, "lr": 2e-05, "memory": 8424, "data_time": 0.04432, "loss_rpn_cls": 0.01951, "loss_rpn_bbox": 0.04169, "loss_cls": 0.17722, "acc": 93.50635, "loss_bbox": 0.22174, "loss_mask": 0.22191, "loss": 0.68207, "time": 0.3981} -{"mode": "train", "epoch": 10, "iter": 2350, "lr": 2e-05, "memory": 8424, "data_time": 0.05267, "loss_rpn_cls": 0.02052, "loss_rpn_bbox": 0.04481, "loss_cls": 0.17598, "acc": 93.5105, "loss_bbox": 0.21962, "loss_mask": 0.22635, "loss": 0.68728, "time": 0.41266} -{"mode": "train", "epoch": 10, "iter": 2400, "lr": 2e-05, "memory": 8424, "data_time": 0.03857, "loss_rpn_cls": 0.02009, "loss_rpn_bbox": 0.04163, "loss_cls": 0.16972, "acc": 93.7373, "loss_bbox": 0.21393, "loss_mask": 0.22236, "loss": 0.66772, "time": 0.40535} -{"mode": "train", "epoch": 10, "iter": 2450, "lr": 2e-05, "memory": 8424, "data_time": 0.05231, "loss_rpn_cls": 0.02206, "loss_rpn_bbox": 0.04347, "loss_cls": 0.17558, "acc": 93.55835, "loss_bbox": 0.22134, "loss_mask": 0.22453, "loss": 0.68698, "time": 0.41558} -{"mode": "train", "epoch": 10, "iter": 2500, "lr": 2e-05, "memory": 8424, "data_time": 0.04595, "loss_rpn_cls": 0.01928, "loss_rpn_bbox": 0.04509, "loss_cls": 0.17028, "acc": 93.7168, "loss_bbox": 0.21635, "loss_mask": 0.22182, "loss": 0.67284, "time": 0.43081} -{"mode": "train", "epoch": 10, "iter": 2550, "lr": 2e-05, "memory": 8424, "data_time": 0.03912, "loss_rpn_cls": 0.02172, "loss_rpn_bbox": 0.04428, "loss_cls": 0.17488, "acc": 93.58252, "loss_bbox": 0.21856, "loss_mask": 0.22335, "loss": 0.6828, "time": 0.40667} -{"mode": "train", "epoch": 10, "iter": 2600, "lr": 2e-05, "memory": 8424, "data_time": 0.03985, "loss_rpn_cls": 0.01953, "loss_rpn_bbox": 0.04143, "loss_cls": 0.1744, "acc": 93.54126, "loss_bbox": 0.21866, "loss_mask": 0.21972, "loss": 0.67374, "time": 0.39322} -{"mode": "train", "epoch": 10, "iter": 2650, "lr": 2e-05, "memory": 8424, "data_time": 0.04463, "loss_rpn_cls": 0.0199, "loss_rpn_bbox": 0.04348, "loss_cls": 0.17624, "acc": 
93.55737, "loss_bbox": 0.21596, "loss_mask": 0.2177, "loss": 0.67329, "time": 0.40984} -{"mode": "train", "epoch": 10, "iter": 2700, "lr": 2e-05, "memory": 8424, "data_time": 0.05208, "loss_rpn_cls": 0.02049, "loss_rpn_bbox": 0.04281, "loss_cls": 0.16798, "acc": 93.78857, "loss_bbox": 0.21241, "loss_mask": 0.21958, "loss": 0.66328, "time": 0.41475} -{"mode": "train", "epoch": 10, "iter": 2750, "lr": 2e-05, "memory": 8424, "data_time": 0.05671, "loss_rpn_cls": 0.02078, "loss_rpn_bbox": 0.04435, "loss_cls": 0.17147, "acc": 93.63159, "loss_bbox": 0.21868, "loss_mask": 0.22798, "loss": 0.68326, "time": 0.4807} -{"mode": "train", "epoch": 10, "iter": 2800, "lr": 2e-05, "memory": 8424, "data_time": 0.05241, "loss_rpn_cls": 0.02022, "loss_rpn_bbox": 0.04314, "loss_cls": 0.17372, "acc": 93.58545, "loss_bbox": 0.21786, "loss_mask": 0.22142, "loss": 0.67636, "time": 0.41211} -{"mode": "train", "epoch": 10, "iter": 2850, "lr": 2e-05, "memory": 8424, "data_time": 0.05117, "loss_rpn_cls": 0.02118, "loss_rpn_bbox": 0.04353, "loss_cls": 0.17561, "acc": 93.51538, "loss_bbox": 0.22067, "loss_mask": 0.22176, "loss": 0.68275, "time": 0.41717} -{"mode": "train", "epoch": 10, "iter": 2900, "lr": 2e-05, "memory": 8424, "data_time": 0.04747, "loss_rpn_cls": 0.02008, "loss_rpn_bbox": 0.04098, "loss_cls": 0.16936, "acc": 93.75708, "loss_bbox": 0.20953, "loss_mask": 0.21923, "loss": 0.65918, "time": 0.40625} -{"mode": "train", "epoch": 10, "iter": 2950, "lr": 2e-05, "memory": 8424, "data_time": 0.0467, "loss_rpn_cls": 0.0207, "loss_rpn_bbox": 0.04526, "loss_cls": 0.17598, "acc": 93.54248, "loss_bbox": 0.21652, "loss_mask": 0.22127, "loss": 0.67972, "time": 0.40698} -{"mode": "train", "epoch": 10, "iter": 3000, "lr": 2e-05, "memory": 8424, "data_time": 0.03601, "loss_rpn_cls": 0.01988, "loss_rpn_bbox": 0.0413, "loss_cls": 0.16836, "acc": 93.82446, "loss_bbox": 0.21054, "loss_mask": 0.21726, "loss": 0.65734, "time": 0.40228} -{"mode": "train", "epoch": 10, "iter": 3050, "lr": 2e-05, "memory": 8424, "data_time": 0.05323, "loss_rpn_cls": 0.02312, "loss_rpn_bbox": 0.04558, "loss_cls": 0.18515, "acc": 93.18164, "loss_bbox": 0.23058, "loss_mask": 0.22758, "loss": 0.712, "time": 0.42134} -{"mode": "train", "epoch": 10, "iter": 3100, "lr": 2e-05, "memory": 8424, "data_time": 0.0447, "loss_rpn_cls": 0.02105, "loss_rpn_bbox": 0.04233, "loss_cls": 0.17775, "acc": 93.44971, "loss_bbox": 0.22064, "loss_mask": 0.21919, "loss": 0.68097, "time": 0.41681} -{"mode": "train", "epoch": 10, "iter": 3150, "lr": 2e-05, "memory": 8424, "data_time": 0.04652, "loss_rpn_cls": 0.02005, "loss_rpn_bbox": 0.04126, "loss_cls": 0.17186, "acc": 93.71509, "loss_bbox": 0.21373, "loss_mask": 0.22026, "loss": 0.66716, "time": 0.46454} -{"mode": "train", "epoch": 10, "iter": 3200, "lr": 2e-05, "memory": 8424, "data_time": 0.05367, "loss_rpn_cls": 0.02318, "loss_rpn_bbox": 0.04483, "loss_cls": 0.17961, "acc": 93.40283, "loss_bbox": 0.22292, "loss_mask": 0.22629, "loss": 0.69682, "time": 0.41339} -{"mode": "train", "epoch": 10, "iter": 3250, "lr": 2e-05, "memory": 8424, "data_time": 0.05811, "loss_rpn_cls": 0.02235, "loss_rpn_bbox": 0.04503, "loss_cls": 0.1786, "acc": 93.46509, "loss_bbox": 0.22405, "loss_mask": 0.22378, "loss": 0.69381, "time": 0.42496} -{"mode": "train", "epoch": 10, "iter": 3300, "lr": 2e-05, "memory": 8424, "data_time": 0.04499, "loss_rpn_cls": 0.01962, "loss_rpn_bbox": 0.03874, "loss_cls": 0.16743, "acc": 93.8291, "loss_bbox": 0.20348, "loss_mask": 0.21203, "loss": 0.6413, "time": 0.40374} -{"mode": "train", "epoch": 10, "iter": 3350, 
"lr": 2e-05, "memory": 8424, "data_time": 0.04922, "loss_rpn_cls": 0.0219, "loss_rpn_bbox": 0.04103, "loss_cls": 0.16856, "acc": 93.86182, "loss_bbox": 0.2115, "loss_mask": 0.21776, "loss": 0.66076, "time": 0.47045} -{"mode": "train", "epoch": 10, "iter": 3400, "lr": 2e-05, "memory": 8424, "data_time": 0.05919, "loss_rpn_cls": 0.02105, "loss_rpn_bbox": 0.0417, "loss_cls": 0.16849, "acc": 93.76831, "loss_bbox": 0.21253, "loss_mask": 0.22206, "loss": 0.66584, "time": 0.40903} -{"mode": "train", "epoch": 10, "iter": 3450, "lr": 2e-05, "memory": 8424, "data_time": 0.06033, "loss_rpn_cls": 0.02225, "loss_rpn_bbox": 0.04433, "loss_cls": 0.17975, "acc": 93.45728, "loss_bbox": 0.22015, "loss_mask": 0.22455, "loss": 0.69103, "time": 0.4154} -{"mode": "train", "epoch": 10, "iter": 3500, "lr": 2e-05, "memory": 8424, "data_time": 0.05487, "loss_rpn_cls": 0.02023, "loss_rpn_bbox": 0.04218, "loss_cls": 0.17223, "acc": 93.625, "loss_bbox": 0.21615, "loss_mask": 0.22016, "loss": 0.67095, "time": 0.41158} -{"mode": "train", "epoch": 10, "iter": 3550, "lr": 2e-05, "memory": 8424, "data_time": 0.04546, "loss_rpn_cls": 0.02055, "loss_rpn_bbox": 0.04346, "loss_cls": 0.16319, "acc": 94.03394, "loss_bbox": 0.20319, "loss_mask": 0.21618, "loss": 0.64658, "time": 0.40554} -{"mode": "train", "epoch": 10, "iter": 3600, "lr": 2e-05, "memory": 8424, "data_time": 0.0487, "loss_rpn_cls": 0.01943, "loss_rpn_bbox": 0.04268, "loss_cls": 0.16942, "acc": 93.85498, "loss_bbox": 0.2085, "loss_mask": 0.21398, "loss": 0.65401, "time": 0.40964} -{"mode": "train", "epoch": 10, "iter": 3650, "lr": 2e-05, "memory": 8424, "data_time": 0.04387, "loss_rpn_cls": 0.01958, "loss_rpn_bbox": 0.04254, "loss_cls": 0.16846, "acc": 93.83203, "loss_bbox": 0.21401, "loss_mask": 0.21975, "loss": 0.66435, "time": 0.41216} -{"mode": "train", "epoch": 10, "iter": 3700, "lr": 2e-05, "memory": 8424, "data_time": 0.04829, "loss_rpn_cls": 0.0208, "loss_rpn_bbox": 0.04364, "loss_cls": 0.17564, "acc": 93.46387, "loss_bbox": 0.22482, "loss_mask": 0.22652, "loss": 0.69142, "time": 0.40799} -{"mode": "train", "epoch": 10, "iter": 3750, "lr": 2e-05, "memory": 8424, "data_time": 0.04628, "loss_rpn_cls": 0.02253, "loss_rpn_bbox": 0.04446, "loss_cls": 0.17795, "acc": 93.42456, "loss_bbox": 0.21825, "loss_mask": 0.22009, "loss": 0.68328, "time": 0.421} -{"mode": "train", "epoch": 10, "iter": 3800, "lr": 2e-05, "memory": 8424, "data_time": 0.04989, "loss_rpn_cls": 0.01976, "loss_rpn_bbox": 0.04363, "loss_cls": 0.17323, "acc": 93.65845, "loss_bbox": 0.21693, "loss_mask": 0.22497, "loss": 0.67853, "time": 0.41054} -{"mode": "train", "epoch": 10, "iter": 3850, "lr": 2e-05, "memory": 8424, "data_time": 0.04578, "loss_rpn_cls": 0.02186, "loss_rpn_bbox": 0.04438, "loss_cls": 0.17506, "acc": 93.57397, "loss_bbox": 0.21954, "loss_mask": 0.21971, "loss": 0.68055, "time": 0.42812} -{"mode": "train", "epoch": 10, "iter": 3900, "lr": 2e-05, "memory": 8424, "data_time": 0.04863, "loss_rpn_cls": 0.019, "loss_rpn_bbox": 0.03912, "loss_cls": 0.16855, "acc": 93.8313, "loss_bbox": 0.21216, "loss_mask": 0.21724, "loss": 0.65606, "time": 0.39383} -{"mode": "train", "epoch": 10, "iter": 3950, "lr": 2e-05, "memory": 8424, "data_time": 0.0454, "loss_rpn_cls": 0.02051, "loss_rpn_bbox": 0.03902, "loss_cls": 0.16574, "acc": 94.06592, "loss_bbox": 0.20493, "loss_mask": 0.21891, "loss": 0.6491, "time": 0.39757} -{"mode": "train", "epoch": 10, "iter": 4000, "lr": 2e-05, "memory": 8424, "data_time": 0.04441, "loss_rpn_cls": 0.0192, "loss_rpn_bbox": 0.04236, "loss_cls": 0.16833, "acc": 93.82642, 
"loss_bbox": 0.21092, "loss_mask": 0.21686, "loss": 0.65767, "time": 0.40637} -{"mode": "train", "epoch": 10, "iter": 4050, "lr": 2e-05, "memory": 8424, "data_time": 0.04603, "loss_rpn_cls": 0.01979, "loss_rpn_bbox": 0.04201, "loss_cls": 0.17661, "acc": 93.53491, "loss_bbox": 0.2151, "loss_mask": 0.21871, "loss": 0.67223, "time": 0.41089} -{"mode": "train", "epoch": 10, "iter": 4100, "lr": 2e-05, "memory": 8424, "data_time": 0.0476, "loss_rpn_cls": 0.01862, "loss_rpn_bbox": 0.04036, "loss_cls": 0.16062, "acc": 94.14087, "loss_bbox": 0.20076, "loss_mask": 0.21098, "loss": 0.63135, "time": 0.4129} -{"mode": "train", "epoch": 10, "iter": 4150, "lr": 2e-05, "memory": 8424, "data_time": 0.04022, "loss_rpn_cls": 0.02032, "loss_rpn_bbox": 0.04074, "loss_cls": 0.17169, "acc": 93.72266, "loss_bbox": 0.21257, "loss_mask": 0.21804, "loss": 0.66336, "time": 0.40154} -{"mode": "train", "epoch": 10, "iter": 4200, "lr": 2e-05, "memory": 8424, "data_time": 0.04819, "loss_rpn_cls": 0.02088, "loss_rpn_bbox": 0.04501, "loss_cls": 0.16941, "acc": 93.78418, "loss_bbox": 0.2133, "loss_mask": 0.21934, "loss": 0.66794, "time": 0.41077} -{"mode": "train", "epoch": 10, "iter": 4250, "lr": 2e-05, "memory": 8424, "data_time": 0.04689, "loss_rpn_cls": 0.02193, "loss_rpn_bbox": 0.04515, "loss_cls": 0.17911, "acc": 93.47339, "loss_bbox": 0.22591, "loss_mask": 0.22528, "loss": 0.69736, "time": 0.41589} -{"mode": "train", "epoch": 10, "iter": 4300, "lr": 2e-05, "memory": 8424, "data_time": 0.04675, "loss_rpn_cls": 0.01992, "loss_rpn_bbox": 0.04064, "loss_cls": 0.16622, "acc": 93.85132, "loss_bbox": 0.21071, "loss_mask": 0.21979, "loss": 0.65729, "time": 0.398} -{"mode": "train", "epoch": 10, "iter": 4350, "lr": 2e-05, "memory": 8424, "data_time": 0.03934, "loss_rpn_cls": 0.02019, "loss_rpn_bbox": 0.04549, "loss_cls": 0.17632, "acc": 93.43335, "loss_bbox": 0.221, "loss_mask": 0.22347, "loss": 0.68646, "time": 0.40641} -{"mode": "train", "epoch": 10, "iter": 4400, "lr": 2e-05, "memory": 8424, "data_time": 0.04861, "loss_rpn_cls": 0.02017, "loss_rpn_bbox": 0.0429, "loss_cls": 0.16967, "acc": 93.8147, "loss_bbox": 0.21477, "loss_mask": 0.223, "loss": 0.6705, "time": 0.40271} -{"mode": "train", "epoch": 10, "iter": 4450, "lr": 2e-05, "memory": 8424, "data_time": 0.04935, "loss_rpn_cls": 0.01928, "loss_rpn_bbox": 0.04157, "loss_cls": 0.16766, "acc": 93.78711, "loss_bbox": 0.20617, "loss_mask": 0.21481, "loss": 0.64948, "time": 0.38686} -{"mode": "train", "epoch": 10, "iter": 4500, "lr": 2e-05, "memory": 8424, "data_time": 0.03528, "loss_rpn_cls": 0.02034, "loss_rpn_bbox": 0.0419, "loss_cls": 0.16827, "acc": 93.8418, "loss_bbox": 0.21103, "loss_mask": 0.21798, "loss": 0.65952, "time": 0.39863} -{"mode": "train", "epoch": 10, "iter": 4550, "lr": 2e-05, "memory": 8424, "data_time": 0.04163, "loss_rpn_cls": 0.02094, "loss_rpn_bbox": 0.04399, "loss_cls": 0.17276, "acc": 93.59668, "loss_bbox": 0.22255, "loss_mask": 0.22089, "loss": 0.68113, "time": 0.40287} -{"mode": "train", "epoch": 10, "iter": 4600, "lr": 2e-05, "memory": 8424, "data_time": 0.04638, "loss_rpn_cls": 0.02256, "loss_rpn_bbox": 0.04639, "loss_cls": 0.18189, "acc": 93.26099, "loss_bbox": 0.22288, "loss_mask": 0.2226, "loss": 0.69632, "time": 0.40961} -{"mode": "train", "epoch": 10, "iter": 4650, "lr": 2e-05, "memory": 8424, "data_time": 0.03969, "loss_rpn_cls": 0.01983, "loss_rpn_bbox": 0.044, "loss_cls": 0.17087, "acc": 93.67261, "loss_bbox": 0.21552, "loss_mask": 0.22197, "loss": 0.67219, "time": 0.41343} -{"mode": "train", "epoch": 10, "iter": 4700, "lr": 2e-05, 
"memory": 8424, "data_time": 0.05453, "loss_rpn_cls": 0.02118, "loss_rpn_bbox": 0.04269, "loss_cls": 0.1794, "acc": 93.34961, "loss_bbox": 0.22223, "loss_mask": 0.22288, "loss": 0.68838, "time": 0.41994} -{"mode": "train", "epoch": 10, "iter": 4750, "lr": 2e-05, "memory": 8424, "data_time": 0.04904, "loss_rpn_cls": 0.01964, "loss_rpn_bbox": 0.04241, "loss_cls": 0.1724, "acc": 93.60693, "loss_bbox": 0.21774, "loss_mask": 0.21989, "loss": 0.67208, "time": 0.41145} -{"mode": "train", "epoch": 10, "iter": 4800, "lr": 2e-05, "memory": 8424, "data_time": 0.04734, "loss_rpn_cls": 0.02069, "loss_rpn_bbox": 0.04418, "loss_cls": 0.1823, "acc": 93.37061, "loss_bbox": 0.22067, "loss_mask": 0.22231, "loss": 0.69015, "time": 0.41621} -{"mode": "train", "epoch": 10, "iter": 4850, "lr": 2e-05, "memory": 8424, "data_time": 0.04724, "loss_rpn_cls": 0.01965, "loss_rpn_bbox": 0.04275, "loss_cls": 0.17513, "acc": 93.49634, "loss_bbox": 0.22173, "loss_mask": 0.22045, "loss": 0.67972, "time": 0.41067} -{"mode": "train", "epoch": 10, "iter": 4900, "lr": 2e-05, "memory": 8424, "data_time": 0.04287, "loss_rpn_cls": 0.02215, "loss_rpn_bbox": 0.04379, "loss_cls": 0.17817, "acc": 93.46362, "loss_bbox": 0.22074, "loss_mask": 0.22187, "loss": 0.68673, "time": 0.41511} -{"mode": "train", "epoch": 10, "iter": 4950, "lr": 2e-05, "memory": 8424, "data_time": 0.04889, "loss_rpn_cls": 0.02095, "loss_rpn_bbox": 0.04382, "loss_cls": 0.17658, "acc": 93.56201, "loss_bbox": 0.21761, "loss_mask": 0.21869, "loss": 0.67764, "time": 0.41125} -{"mode": "train", "epoch": 10, "iter": 5000, "lr": 2e-05, "memory": 8424, "data_time": 0.042, "loss_rpn_cls": 0.02026, "loss_rpn_bbox": 0.04429, "loss_cls": 0.17675, "acc": 93.59351, "loss_bbox": 0.21893, "loss_mask": 0.22369, "loss": 0.68392, "time": 0.40899} -{"mode": "train", "epoch": 10, "iter": 5050, "lr": 2e-05, "memory": 8424, "data_time": 0.04991, "loss_rpn_cls": 0.0216, "loss_rpn_bbox": 0.0416, "loss_cls": 0.17739, "acc": 93.54565, "loss_bbox": 0.21745, "loss_mask": 0.21849, "loss": 0.67653, "time": 0.41756} -{"mode": "train", "epoch": 10, "iter": 5100, "lr": 2e-05, "memory": 8424, "data_time": 0.04958, "loss_rpn_cls": 0.02037, "loss_rpn_bbox": 0.04442, "loss_cls": 0.17364, "acc": 93.64258, "loss_bbox": 0.21889, "loss_mask": 0.22295, "loss": 0.68026, "time": 0.41624} -{"mode": "train", "epoch": 10, "iter": 5150, "lr": 2e-05, "memory": 8424, "data_time": 0.04635, "loss_rpn_cls": 0.02231, "loss_rpn_bbox": 0.0438, "loss_cls": 0.17367, "acc": 93.72778, "loss_bbox": 0.2133, "loss_mask": 0.21802, "loss": 0.6711, "time": 0.46249} -{"mode": "train", "epoch": 10, "iter": 5200, "lr": 2e-05, "memory": 8424, "data_time": 0.04827, "loss_rpn_cls": 0.01987, "loss_rpn_bbox": 0.04132, "loss_cls": 0.17419, "acc": 93.64722, "loss_bbox": 0.21684, "loss_mask": 0.21857, "loss": 0.67078, "time": 0.40564} -{"mode": "train", "epoch": 10, "iter": 5250, "lr": 2e-05, "memory": 8424, "data_time": 0.0513, "loss_rpn_cls": 0.02105, "loss_rpn_bbox": 0.04431, "loss_cls": 0.17143, "acc": 93.67773, "loss_bbox": 0.22158, "loss_mask": 0.21993, "loss": 0.6783, "time": 0.41231} -{"mode": "train", "epoch": 10, "iter": 5300, "lr": 2e-05, "memory": 8424, "data_time": 0.0431, "loss_rpn_cls": 0.02026, "loss_rpn_bbox": 0.0408, "loss_cls": 0.17439, "acc": 93.58325, "loss_bbox": 0.221, "loss_mask": 0.22354, "loss": 0.68, "time": 0.40879} -{"mode": "train", "epoch": 10, "iter": 5350, "lr": 2e-05, "memory": 8424, "data_time": 0.03968, "loss_rpn_cls": 0.01864, "loss_rpn_bbox": 0.04181, "loss_cls": 0.17289, "acc": 93.66138, "loss_bbox": 
0.21583, "loss_mask": 0.21999, "loss": 0.66915, "time": 0.4091} -{"mode": "train", "epoch": 10, "iter": 5400, "lr": 2e-05, "memory": 8424, "data_time": 0.05288, "loss_rpn_cls": 0.02063, "loss_rpn_bbox": 0.04538, "loss_cls": 0.17653, "acc": 93.39575, "loss_bbox": 0.22188, "loss_mask": 0.22073, "loss": 0.68515, "time": 0.41118} -{"mode": "train", "epoch": 10, "iter": 5450, "lr": 2e-05, "memory": 8424, "data_time": 0.04719, "loss_rpn_cls": 0.02267, "loss_rpn_bbox": 0.04491, "loss_cls": 0.17965, "acc": 93.31958, "loss_bbox": 0.22668, "loss_mask": 0.22317, "loss": 0.69707, "time": 0.40987} -{"mode": "train", "epoch": 10, "iter": 5500, "lr": 2e-05, "memory": 8424, "data_time": 0.0466, "loss_rpn_cls": 0.02151, "loss_rpn_bbox": 0.04399, "loss_cls": 0.17776, "acc": 93.45166, "loss_bbox": 0.22081, "loss_mask": 0.22621, "loss": 0.69029, "time": 0.41591} -{"mode": "train", "epoch": 10, "iter": 5550, "lr": 2e-05, "memory": 8424, "data_time": 0.0503, "loss_rpn_cls": 0.01963, "loss_rpn_bbox": 0.03933, "loss_cls": 0.16605, "acc": 93.9563, "loss_bbox": 0.20476, "loss_mask": 0.21581, "loss": 0.64557, "time": 0.40438} -{"mode": "train", "epoch": 10, "iter": 5600, "lr": 2e-05, "memory": 8424, "data_time": 0.05174, "loss_rpn_cls": 0.01985, "loss_rpn_bbox": 0.04126, "loss_cls": 0.16311, "acc": 93.93433, "loss_bbox": 0.21077, "loss_mask": 0.2217, "loss": 0.65669, "time": 0.46169} -{"mode": "train", "epoch": 10, "iter": 5650, "lr": 2e-05, "memory": 8424, "data_time": 0.04846, "loss_rpn_cls": 0.01916, "loss_rpn_bbox": 0.04121, "loss_cls": 0.16422, "acc": 93.87256, "loss_bbox": 0.21049, "loss_mask": 0.21757, "loss": 0.65265, "time": 0.39414} -{"mode": "train", "epoch": 10, "iter": 5700, "lr": 2e-05, "memory": 8424, "data_time": 0.04504, "loss_rpn_cls": 0.02231, "loss_rpn_bbox": 0.04511, "loss_cls": 0.17773, "acc": 93.48047, "loss_bbox": 0.22132, "loss_mask": 0.22396, "loss": 0.69042, "time": 0.42897} -{"mode": "train", "epoch": 10, "iter": 5750, "lr": 2e-05, "memory": 8424, "data_time": 0.05192, "loss_rpn_cls": 0.01881, "loss_rpn_bbox": 0.04207, "loss_cls": 0.16817, "acc": 93.80029, "loss_bbox": 0.21106, "loss_mask": 0.21613, "loss": 0.65624, "time": 0.40885} -{"mode": "train", "epoch": 10, "iter": 5800, "lr": 2e-05, "memory": 8424, "data_time": 0.04399, "loss_rpn_cls": 0.01961, "loss_rpn_bbox": 0.04195, "loss_cls": 0.16987, "acc": 93.83789, "loss_bbox": 0.20876, "loss_mask": 0.21454, "loss": 0.65472, "time": 0.40402} -{"mode": "train", "epoch": 10, "iter": 5850, "lr": 2e-05, "memory": 8424, "data_time": 0.04822, "loss_rpn_cls": 0.02151, "loss_rpn_bbox": 0.04447, "loss_cls": 0.1705, "acc": 93.67358, "loss_bbox": 0.21539, "loss_mask": 0.2199, "loss": 0.67177, "time": 0.42123} -{"mode": "train", "epoch": 10, "iter": 5900, "lr": 2e-05, "memory": 8424, "data_time": 0.04982, "loss_rpn_cls": 0.02074, "loss_rpn_bbox": 0.04453, "loss_cls": 0.17717, "acc": 93.56738, "loss_bbox": 0.22073, "loss_mask": 0.22175, "loss": 0.68492, "time": 0.40984} -{"mode": "train", "epoch": 10, "iter": 5950, "lr": 2e-05, "memory": 8424, "data_time": 0.04597, "loss_rpn_cls": 0.02313, "loss_rpn_bbox": 0.0454, "loss_cls": 0.17748, "acc": 93.50269, "loss_bbox": 0.22367, "loss_mask": 0.22097, "loss": 0.69065, "time": 0.41024} -{"mode": "train", "epoch": 10, "iter": 6000, "lr": 2e-05, "memory": 8424, "data_time": 0.04358, "loss_rpn_cls": 0.01922, "loss_rpn_bbox": 0.04431, "loss_cls": 0.17207, "acc": 93.74414, "loss_bbox": 0.21301, "loss_mask": 0.21725, "loss": 0.66586, "time": 0.41957} -{"mode": "train", "epoch": 10, "iter": 6050, "lr": 2e-05, 
"memory": 8424, "data_time": 0.04443, "loss_rpn_cls": 0.02, "loss_rpn_bbox": 0.04291, "loss_cls": 0.17058, "acc": 93.78516, "loss_bbox": 0.2173, "loss_mask": 0.21976, "loss": 0.67055, "time": 0.40481} -{"mode": "train", "epoch": 10, "iter": 6100, "lr": 2e-05, "memory": 8424, "data_time": 0.0399, "loss_rpn_cls": 0.01956, "loss_rpn_bbox": 0.04167, "loss_cls": 0.17085, "acc": 93.65942, "loss_bbox": 0.21757, "loss_mask": 0.22412, "loss": 0.67377, "time": 0.39885} -{"mode": "train", "epoch": 10, "iter": 6150, "lr": 2e-05, "memory": 8424, "data_time": 0.04797, "loss_rpn_cls": 0.02124, "loss_rpn_bbox": 0.04236, "loss_cls": 0.17329, "acc": 93.60107, "loss_bbox": 0.21729, "loss_mask": 0.22342, "loss": 0.6776, "time": 0.41542} -{"mode": "train", "epoch": 10, "iter": 6200, "lr": 2e-05, "memory": 8424, "data_time": 0.04233, "loss_rpn_cls": 0.01885, "loss_rpn_bbox": 0.04243, "loss_cls": 0.17213, "acc": 93.69336, "loss_bbox": 0.21666, "loss_mask": 0.21834, "loss": 0.66841, "time": 0.39804} -{"mode": "train", "epoch": 10, "iter": 6250, "lr": 2e-05, "memory": 8424, "data_time": 0.03859, "loss_rpn_cls": 0.01951, "loss_rpn_bbox": 0.03873, "loss_cls": 0.16068, "acc": 93.98486, "loss_bbox": 0.20803, "loss_mask": 0.21567, "loss": 0.64262, "time": 0.40365} -{"mode": "train", "epoch": 10, "iter": 6300, "lr": 2e-05, "memory": 8424, "data_time": 0.03571, "loss_rpn_cls": 0.02246, "loss_rpn_bbox": 0.04412, "loss_cls": 0.17393, "acc": 93.60059, "loss_bbox": 0.22014, "loss_mask": 0.22331, "loss": 0.68396, "time": 0.40611} -{"mode": "train", "epoch": 10, "iter": 6350, "lr": 2e-05, "memory": 8424, "data_time": 0.04371, "loss_rpn_cls": 0.02113, "loss_rpn_bbox": 0.044, "loss_cls": 0.1715, "acc": 93.72095, "loss_bbox": 0.21408, "loss_mask": 0.21833, "loss": 0.66903, "time": 0.4056} -{"mode": "train", "epoch": 10, "iter": 6400, "lr": 2e-05, "memory": 8424, "data_time": 0.05924, "loss_rpn_cls": 0.021, "loss_rpn_bbox": 0.04482, "loss_cls": 0.17591, "acc": 93.51562, "loss_bbox": 0.22257, "loss_mask": 0.22572, "loss": 0.69002, "time": 0.47415} -{"mode": "train", "epoch": 10, "iter": 6450, "lr": 2e-05, "memory": 8424, "data_time": 0.04443, "loss_rpn_cls": 0.02098, "loss_rpn_bbox": 0.04365, "loss_cls": 0.16756, "acc": 93.81958, "loss_bbox": 0.21092, "loss_mask": 0.22119, "loss": 0.6643, "time": 0.41661} -{"mode": "train", "epoch": 10, "iter": 6500, "lr": 2e-05, "memory": 8424, "data_time": 0.06009, "loss_rpn_cls": 0.02101, "loss_rpn_bbox": 0.04281, "loss_cls": 0.17693, "acc": 93.50171, "loss_bbox": 0.2182, "loss_mask": 0.21958, "loss": 0.67854, "time": 0.40101} -{"mode": "train", "epoch": 10, "iter": 6550, "lr": 2e-05, "memory": 8424, "data_time": 0.04118, "loss_rpn_cls": 0.01886, "loss_rpn_bbox": 0.04227, "loss_cls": 0.1669, "acc": 93.87036, "loss_bbox": 0.21317, "loss_mask": 0.22055, "loss": 0.66175, "time": 0.4089} -{"mode": "train", "epoch": 10, "iter": 6600, "lr": 2e-05, "memory": 8424, "data_time": 0.0425, "loss_rpn_cls": 0.0216, "loss_rpn_bbox": 0.04309, "loss_cls": 0.17184, "acc": 93.62476, "loss_bbox": 0.21646, "loss_mask": 0.21912, "loss": 0.67211, "time": 0.40153} -{"mode": "train", "epoch": 10, "iter": 6650, "lr": 2e-05, "memory": 8424, "data_time": 0.04549, "loss_rpn_cls": 0.02116, "loss_rpn_bbox": 0.04574, "loss_cls": 0.17763, "acc": 93.45142, "loss_bbox": 0.22042, "loss_mask": 0.22522, "loss": 0.69017, "time": 0.40888} -{"mode": "train", "epoch": 10, "iter": 6700, "lr": 2e-05, "memory": 8424, "data_time": 0.03922, "loss_rpn_cls": 0.0192, "loss_rpn_bbox": 0.0419, "loss_cls": 0.17019, "acc": 93.70898, "loss_bbox": 
0.21431, "loss_mask": 0.21472, "loss": 0.66032, "time": 0.40458} -{"mode": "train", "epoch": 10, "iter": 6750, "lr": 2e-05, "memory": 8424, "data_time": 0.04406, "loss_rpn_cls": 0.02049, "loss_rpn_bbox": 0.04329, "loss_cls": 0.16818, "acc": 93.81592, "loss_bbox": 0.21032, "loss_mask": 0.21805, "loss": 0.66034, "time": 0.49154} -{"mode": "train", "epoch": 10, "iter": 6800, "lr": 2e-05, "memory": 8424, "data_time": 0.04587, "loss_rpn_cls": 0.01833, "loss_rpn_bbox": 0.03874, "loss_cls": 0.16811, "acc": 93.78784, "loss_bbox": 0.2043, "loss_mask": 0.21337, "loss": 0.64284, "time": 0.39742} -{"mode": "train", "epoch": 10, "iter": 6850, "lr": 2e-05, "memory": 8424, "data_time": 0.03924, "loss_rpn_cls": 0.02176, "loss_rpn_bbox": 0.04485, "loss_cls": 0.17285, "acc": 93.57544, "loss_bbox": 0.21884, "loss_mask": 0.22109, "loss": 0.67939, "time": 0.42655} -{"mode": "train", "epoch": 10, "iter": 6900, "lr": 2e-05, "memory": 8424, "data_time": 0.04861, "loss_rpn_cls": 0.02214, "loss_rpn_bbox": 0.04287, "loss_cls": 0.1785, "acc": 93.32812, "loss_bbox": 0.22528, "loss_mask": 0.21975, "loss": 0.68854, "time": 0.41807} -{"mode": "train", "epoch": 10, "iter": 6950, "lr": 2e-05, "memory": 8424, "data_time": 0.05235, "loss_rpn_cls": 0.02042, "loss_rpn_bbox": 0.04299, "loss_cls": 0.17808, "acc": 93.44336, "loss_bbox": 0.22165, "loss_mask": 0.22212, "loss": 0.68526, "time": 0.41217} -{"mode": "train", "epoch": 10, "iter": 7000, "lr": 2e-05, "memory": 8424, "data_time": 0.053, "loss_rpn_cls": 0.0198, "loss_rpn_bbox": 0.04332, "loss_cls": 0.17144, "acc": 93.63281, "loss_bbox": 0.21607, "loss_mask": 0.21652, "loss": 0.66716, "time": 0.40559} -{"mode": "train", "epoch": 10, "iter": 7050, "lr": 2e-05, "memory": 8424, "data_time": 0.04597, "loss_rpn_cls": 0.01839, "loss_rpn_bbox": 0.04075, "loss_cls": 0.17102, "acc": 93.70312, "loss_bbox": 0.21266, "loss_mask": 0.21505, "loss": 0.65787, "time": 0.40534} -{"mode": "train", "epoch": 10, "iter": 7100, "lr": 2e-05, "memory": 8424, "data_time": 0.04593, "loss_rpn_cls": 0.01824, "loss_rpn_bbox": 0.04136, "loss_cls": 0.17035, "acc": 93.7793, "loss_bbox": 0.21634, "loss_mask": 0.22022, "loss": 0.66651, "time": 0.39543} -{"mode": "train", "epoch": 10, "iter": 7150, "lr": 2e-05, "memory": 8424, "data_time": 0.04561, "loss_rpn_cls": 0.01867, "loss_rpn_bbox": 0.04211, "loss_cls": 0.16857, "acc": 93.8374, "loss_bbox": 0.20766, "loss_mask": 0.21632, "loss": 0.65332, "time": 0.40722} -{"mode": "train", "epoch": 10, "iter": 7200, "lr": 2e-05, "memory": 8424, "data_time": 0.05501, "loss_rpn_cls": 0.02029, "loss_rpn_bbox": 0.0414, "loss_cls": 0.16853, "acc": 93.86646, "loss_bbox": 0.20952, "loss_mask": 0.21175, "loss": 0.65149, "time": 0.40952} -{"mode": "train", "epoch": 10, "iter": 7250, "lr": 2e-05, "memory": 8424, "data_time": 0.04277, "loss_rpn_cls": 0.0224, "loss_rpn_bbox": 0.04398, "loss_cls": 0.1793, "acc": 93.48462, "loss_bbox": 0.21955, "loss_mask": 0.21811, "loss": 0.68334, "time": 0.4105} -{"mode": "train", "epoch": 10, "iter": 7300, "lr": 2e-05, "memory": 8424, "data_time": 0.05909, "loss_rpn_cls": 0.02059, "loss_rpn_bbox": 0.04306, "loss_cls": 0.17261, "acc": 93.64038, "loss_bbox": 0.21633, "loss_mask": 0.22152, "loss": 0.67412, "time": 0.40032} -{"mode": "val", "epoch": 10, "iter": 625, "lr": 2e-05, "bbox_mAP": 0.413, "bbox_mAP_50": 0.6305, "bbox_mAP_75": 0.4529, "bbox_mAP_s": 0.25, "bbox_mAP_m": 0.4484, "bbox_mAP_l": 0.5428, "bbox_mAP_copypaste": "0.4130 0.6305 0.4529 0.2500 0.4484 0.5428", "segm_mAP": 0.3835, "segm_mAP_50": 0.6018, "segm_mAP_75": 0.4107, "segm_mAP_s": 
0.1925, "segm_mAP_m": 0.4124, "segm_mAP_l": 0.5616, "segm_mAP_copypaste": "0.3835 0.6018 0.4107 0.1925 0.4124 0.5616"} -{"mode": "train", "epoch": 11, "iter": 50, "lr": 2e-05, "memory": 8424, "data_time": 0.13041, "loss_rpn_cls": 0.01901, "loss_rpn_bbox": 0.04188, "loss_cls": 0.16924, "acc": 93.77051, "loss_bbox": 0.21401, "loss_mask": 0.22329, "loss": 0.66744, "time": 0.49867} -{"mode": "train", "epoch": 11, "iter": 100, "lr": 2e-05, "memory": 8424, "data_time": 0.05965, "loss_rpn_cls": 0.02112, "loss_rpn_bbox": 0.0452, "loss_cls": 0.17305, "acc": 93.54248, "loss_bbox": 0.22362, "loss_mask": 0.21951, "loss": 0.6825, "time": 0.45288} -{"mode": "train", "epoch": 11, "iter": 150, "lr": 2e-05, "memory": 8424, "data_time": 0.04718, "loss_rpn_cls": 0.02118, "loss_rpn_bbox": 0.04374, "loss_cls": 0.16594, "acc": 93.85815, "loss_bbox": 0.21167, "loss_mask": 0.22705, "loss": 0.66958, "time": 0.41322} -{"mode": "train", "epoch": 11, "iter": 200, "lr": 2e-05, "memory": 8424, "data_time": 0.04806, "loss_rpn_cls": 0.01931, "loss_rpn_bbox": 0.0428, "loss_cls": 0.16646, "acc": 93.72241, "loss_bbox": 0.21313, "loss_mask": 0.22141, "loss": 0.66311, "time": 0.41745} -{"mode": "train", "epoch": 11, "iter": 250, "lr": 2e-05, "memory": 8424, "data_time": 0.04442, "loss_rpn_cls": 0.02031, "loss_rpn_bbox": 0.0455, "loss_cls": 0.17189, "acc": 93.56616, "loss_bbox": 0.22109, "loss_mask": 0.22567, "loss": 0.68446, "time": 0.41724} -{"mode": "train", "epoch": 11, "iter": 300, "lr": 2e-05, "memory": 8424, "data_time": 0.04565, "loss_rpn_cls": 0.02024, "loss_rpn_bbox": 0.04339, "loss_cls": 0.16483, "acc": 93.80786, "loss_bbox": 0.21522, "loss_mask": 0.21546, "loss": 0.65914, "time": 0.41823} -{"mode": "train", "epoch": 11, "iter": 350, "lr": 2e-05, "memory": 8424, "data_time": 0.04655, "loss_rpn_cls": 0.01955, "loss_rpn_bbox": 0.04498, "loss_cls": 0.17058, "acc": 93.60791, "loss_bbox": 0.21475, "loss_mask": 0.21508, "loss": 0.66494, "time": 0.42447} -{"mode": "train", "epoch": 11, "iter": 400, "lr": 2e-05, "memory": 8424, "data_time": 0.0395, "loss_rpn_cls": 0.02086, "loss_rpn_bbox": 0.042, "loss_cls": 0.16952, "acc": 93.74878, "loss_bbox": 0.2154, "loss_mask": 0.21758, "loss": 0.66537, "time": 0.43144} -{"mode": "train", "epoch": 11, "iter": 450, "lr": 2e-05, "memory": 8424, "data_time": 0.04968, "loss_rpn_cls": 0.02125, "loss_rpn_bbox": 0.04158, "loss_cls": 0.16696, "acc": 93.82642, "loss_bbox": 0.21053, "loss_mask": 0.21864, "loss": 0.65896, "time": 0.39975} -{"mode": "train", "epoch": 11, "iter": 500, "lr": 2e-05, "memory": 8424, "data_time": 0.04363, "loss_rpn_cls": 0.01956, "loss_rpn_bbox": 0.04612, "loss_cls": 0.17105, "acc": 93.72559, "loss_bbox": 0.22273, "loss_mask": 0.22522, "loss": 0.68468, "time": 0.41723} -{"mode": "train", "epoch": 11, "iter": 550, "lr": 2e-05, "memory": 8424, "data_time": 0.04675, "loss_rpn_cls": 0.01835, "loss_rpn_bbox": 0.04019, "loss_cls": 0.1672, "acc": 93.78271, "loss_bbox": 0.21186, "loss_mask": 0.21361, "loss": 0.65121, "time": 0.41467} -{"mode": "train", "epoch": 11, "iter": 600, "lr": 2e-05, "memory": 8424, "data_time": 0.0561, "loss_rpn_cls": 0.01989, "loss_rpn_bbox": 0.04326, "loss_cls": 0.17083, "acc": 93.73193, "loss_bbox": 0.21761, "loss_mask": 0.21824, "loss": 0.66983, "time": 0.40756} -{"mode": "train", "epoch": 11, "iter": 650, "lr": 2e-05, "memory": 8424, "data_time": 0.04313, "loss_rpn_cls": 0.02037, "loss_rpn_bbox": 0.04433, "loss_cls": 0.17051, "acc": 93.68921, "loss_bbox": 0.21514, "loss_mask": 0.22238, "loss": 0.67273, "time": 0.41666} -{"mode": "train", "epoch": 
11, "iter": 700, "lr": 2e-05, "memory": 8424, "data_time": 0.04591, "loss_rpn_cls": 0.0198, "loss_rpn_bbox": 0.04249, "loss_cls": 0.16355, "acc": 93.98608, "loss_bbox": 0.20869, "loss_mask": 0.21618, "loss": 0.65072, "time": 0.41086} -{"mode": "train", "epoch": 11, "iter": 750, "lr": 2e-05, "memory": 8424, "data_time": 0.05706, "loss_rpn_cls": 0.02111, "loss_rpn_bbox": 0.04363, "loss_cls": 0.17336, "acc": 93.58496, "loss_bbox": 0.21452, "loss_mask": 0.22038, "loss": 0.67301, "time": 0.4087} -{"mode": "train", "epoch": 11, "iter": 800, "lr": 2e-05, "memory": 8424, "data_time": 0.05214, "loss_rpn_cls": 0.02079, "loss_rpn_bbox": 0.04327, "loss_cls": 0.17546, "acc": 93.51562, "loss_bbox": 0.2221, "loss_mask": 0.22403, "loss": 0.68565, "time": 0.41852} -{"mode": "train", "epoch": 11, "iter": 850, "lr": 2e-05, "memory": 8424, "data_time": 0.0472, "loss_rpn_cls": 0.01856, "loss_rpn_bbox": 0.04196, "loss_cls": 0.16359, "acc": 93.80835, "loss_bbox": 0.21157, "loss_mask": 0.21677, "loss": 0.65245, "time": 0.40324} -{"mode": "train", "epoch": 11, "iter": 900, "lr": 2e-05, "memory": 8424, "data_time": 0.04334, "loss_rpn_cls": 0.02047, "loss_rpn_bbox": 0.04302, "loss_cls": 0.1684, "acc": 93.88159, "loss_bbox": 0.20867, "loss_mask": 0.21669, "loss": 0.65725, "time": 0.40816} -{"mode": "train", "epoch": 11, "iter": 950, "lr": 2e-05, "memory": 8424, "data_time": 0.04197, "loss_rpn_cls": 0.02041, "loss_rpn_bbox": 0.04214, "loss_cls": 0.17232, "acc": 93.65063, "loss_bbox": 0.2166, "loss_mask": 0.22352, "loss": 0.67499, "time": 0.41111} -{"mode": "train", "epoch": 11, "iter": 1000, "lr": 2e-05, "memory": 8424, "data_time": 0.04269, "loss_rpn_cls": 0.01845, "loss_rpn_bbox": 0.04308, "loss_cls": 0.16068, "acc": 94.05225, "loss_bbox": 0.20311, "loss_mask": 0.21302, "loss": 0.63834, "time": 0.42165} -{"mode": "train", "epoch": 11, "iter": 1050, "lr": 2e-05, "memory": 8424, "data_time": 0.04462, "loss_rpn_cls": 0.01722, "loss_rpn_bbox": 0.03972, "loss_cls": 0.16324, "acc": 93.94141, "loss_bbox": 0.21208, "loss_mask": 0.21494, "loss": 0.6472, "time": 0.41375} -{"mode": "train", "epoch": 11, "iter": 1100, "lr": 2e-05, "memory": 8424, "data_time": 0.04778, "loss_rpn_cls": 0.01864, "loss_rpn_bbox": 0.04041, "loss_cls": 0.16134, "acc": 93.98926, "loss_bbox": 0.20428, "loss_mask": 0.21137, "loss": 0.63605, "time": 0.40026} -{"mode": "train", "epoch": 11, "iter": 1150, "lr": 2e-05, "memory": 8424, "data_time": 0.04936, "loss_rpn_cls": 0.0211, "loss_rpn_bbox": 0.04249, "loss_cls": 0.17216, "acc": 93.62866, "loss_bbox": 0.2181, "loss_mask": 0.22277, "loss": 0.67661, "time": 0.45525} -{"mode": "train", "epoch": 11, "iter": 1200, "lr": 2e-05, "memory": 8424, "data_time": 0.04988, "loss_rpn_cls": 0.01979, "loss_rpn_bbox": 0.0423, "loss_cls": 0.16258, "acc": 93.91968, "loss_bbox": 0.20937, "loss_mask": 0.21697, "loss": 0.65102, "time": 0.42462} -{"mode": "train", "epoch": 11, "iter": 1250, "lr": 2e-05, "memory": 8424, "data_time": 0.04954, "loss_rpn_cls": 0.02314, "loss_rpn_bbox": 0.0454, "loss_cls": 0.18157, "acc": 93.34741, "loss_bbox": 0.22809, "loss_mask": 0.22621, "loss": 0.70441, "time": 0.4154} -{"mode": "train", "epoch": 11, "iter": 1300, "lr": 2e-05, "memory": 8424, "data_time": 0.0472, "loss_rpn_cls": 0.01861, "loss_rpn_bbox": 0.04242, "loss_cls": 0.17286, "acc": 93.58081, "loss_bbox": 0.21417, "loss_mask": 0.21874, "loss": 0.66679, "time": 0.40504} -{"mode": "train", "epoch": 11, "iter": 1350, "lr": 2e-05, "memory": 8424, "data_time": 0.06256, "loss_rpn_cls": 0.02188, "loss_rpn_bbox": 0.04529, "loss_cls": 0.16916, 
"acc": 93.74829, "loss_bbox": 0.21455, "loss_mask": 0.21939, "loss": 0.67028, "time": 0.42241} -{"mode": "train", "epoch": 11, "iter": 1400, "lr": 2e-05, "memory": 8424, "data_time": 0.04556, "loss_rpn_cls": 0.01909, "loss_rpn_bbox": 0.04035, "loss_cls": 0.16168, "acc": 94.02954, "loss_bbox": 0.21157, "loss_mask": 0.21659, "loss": 0.64929, "time": 0.41615} -{"mode": "train", "epoch": 11, "iter": 1450, "lr": 2e-05, "memory": 8424, "data_time": 0.05163, "loss_rpn_cls": 0.01906, "loss_rpn_bbox": 0.0405, "loss_cls": 0.16972, "acc": 93.67725, "loss_bbox": 0.21195, "loss_mask": 0.21633, "loss": 0.65756, "time": 0.404} -{"mode": "train", "epoch": 11, "iter": 1500, "lr": 2e-05, "memory": 8424, "data_time": 0.04965, "loss_rpn_cls": 0.02111, "loss_rpn_bbox": 0.04254, "loss_cls": 0.16989, "acc": 93.65186, "loss_bbox": 0.22438, "loss_mask": 0.21916, "loss": 0.67707, "time": 0.4031} -{"mode": "train", "epoch": 11, "iter": 1550, "lr": 2e-05, "memory": 8424, "data_time": 0.04389, "loss_rpn_cls": 0.01976, "loss_rpn_bbox": 0.04262, "loss_cls": 0.1653, "acc": 93.94043, "loss_bbox": 0.20991, "loss_mask": 0.21284, "loss": 0.65043, "time": 0.4064} -{"mode": "train", "epoch": 11, "iter": 1600, "lr": 2e-05, "memory": 8424, "data_time": 0.04902, "loss_rpn_cls": 0.02055, "loss_rpn_bbox": 0.04286, "loss_cls": 0.17032, "acc": 93.7146, "loss_bbox": 0.21501, "loss_mask": 0.21938, "loss": 0.66811, "time": 0.40097} -{"mode": "train", "epoch": 11, "iter": 1650, "lr": 2e-05, "memory": 8424, "data_time": 0.04969, "loss_rpn_cls": 0.01986, "loss_rpn_bbox": 0.04045, "loss_cls": 0.16612, "acc": 93.94067, "loss_bbox": 0.20974, "loss_mask": 0.21636, "loss": 0.65254, "time": 0.40678} -{"mode": "train", "epoch": 11, "iter": 1700, "lr": 2e-05, "memory": 8424, "data_time": 0.05468, "loss_rpn_cls": 0.01899, "loss_rpn_bbox": 0.04061, "loss_cls": 0.16325, "acc": 93.99512, "loss_bbox": 0.20435, "loss_mask": 0.21428, "loss": 0.64148, "time": 0.40309} -{"mode": "train", "epoch": 11, "iter": 1750, "lr": 2e-05, "memory": 8424, "data_time": 0.05135, "loss_rpn_cls": 0.01952, "loss_rpn_bbox": 0.04395, "loss_cls": 0.17092, "acc": 93.63818, "loss_bbox": 0.2187, "loss_mask": 0.22146, "loss": 0.67456, "time": 0.41284} -{"mode": "train", "epoch": 11, "iter": 1800, "lr": 2e-05, "memory": 8424, "data_time": 0.04676, "loss_rpn_cls": 0.02073, "loss_rpn_bbox": 0.04229, "loss_cls": 0.17572, "acc": 93.49268, "loss_bbox": 0.2199, "loss_mask": 0.21976, "loss": 0.6784, "time": 0.40556} -{"mode": "train", "epoch": 11, "iter": 1850, "lr": 2e-05, "memory": 8424, "data_time": 0.05106, "loss_rpn_cls": 0.01842, "loss_rpn_bbox": 0.04078, "loss_cls": 0.16328, "acc": 93.88623, "loss_bbox": 0.21466, "loss_mask": 0.21664, "loss": 0.65379, "time": 0.40257} -{"mode": "train", "epoch": 11, "iter": 1900, "lr": 2e-05, "memory": 8424, "data_time": 0.04972, "loss_rpn_cls": 0.01894, "loss_rpn_bbox": 0.04087, "loss_cls": 0.16405, "acc": 93.91748, "loss_bbox": 0.20715, "loss_mask": 0.2131, "loss": 0.64411, "time": 0.39293} -{"mode": "train", "epoch": 11, "iter": 1950, "lr": 2e-05, "memory": 8424, "data_time": 0.04938, "loss_rpn_cls": 0.01934, "loss_rpn_bbox": 0.04374, "loss_cls": 0.16915, "acc": 93.74121, "loss_bbox": 0.21234, "loss_mask": 0.21912, "loss": 0.66369, "time": 0.40487} -{"mode": "train", "epoch": 11, "iter": 2000, "lr": 2e-05, "memory": 8424, "data_time": 0.04148, "loss_rpn_cls": 0.01849, "loss_rpn_bbox": 0.04128, "loss_cls": 0.16205, "acc": 94.01709, "loss_bbox": 0.20895, "loss_mask": 0.21959, "loss": 0.65036, "time": 0.40283} -{"mode": "train", "epoch": 11, 
"iter": 2050, "lr": 2e-05, "memory": 8424, "data_time": 0.04827, "loss_rpn_cls": 0.02184, "loss_rpn_bbox": 0.04408, "loss_cls": 0.16946, "acc": 93.70874, "loss_bbox": 0.21143, "loss_mask": 0.21985, "loss": 0.66665, "time": 0.40478} -{"mode": "train", "epoch": 11, "iter": 2100, "lr": 2e-05, "memory": 8424, "data_time": 0.05447, "loss_rpn_cls": 0.02208, "loss_rpn_bbox": 0.04493, "loss_cls": 0.17294, "acc": 93.67041, "loss_bbox": 0.22077, "loss_mask": 0.21915, "loss": 0.67988, "time": 0.40446} -{"mode": "train", "epoch": 11, "iter": 2150, "lr": 2e-05, "memory": 8424, "data_time": 0.05389, "loss_rpn_cls": 0.02063, "loss_rpn_bbox": 0.04316, "loss_cls": 0.17696, "acc": 93.38843, "loss_bbox": 0.22461, "loss_mask": 0.22114, "loss": 0.68649, "time": 0.46649} -{"mode": "train", "epoch": 11, "iter": 2200, "lr": 2e-05, "memory": 8424, "data_time": 0.03943, "loss_rpn_cls": 0.01998, "loss_rpn_bbox": 0.04177, "loss_cls": 0.16699, "acc": 93.80884, "loss_bbox": 0.21248, "loss_mask": 0.22398, "loss": 0.66519, "time": 0.39482} -{"mode": "train", "epoch": 11, "iter": 2250, "lr": 2e-05, "memory": 8424, "data_time": 0.03897, "loss_rpn_cls": 0.01876, "loss_rpn_bbox": 0.04003, "loss_cls": 0.16151, "acc": 93.98608, "loss_bbox": 0.20838, "loss_mask": 0.21283, "loss": 0.64151, "time": 0.39156} -{"mode": "train", "epoch": 11, "iter": 2300, "lr": 2e-05, "memory": 8424, "data_time": 0.05359, "loss_rpn_cls": 0.02038, "loss_rpn_bbox": 0.04351, "loss_cls": 0.17346, "acc": 93.51929, "loss_bbox": 0.22126, "loss_mask": 0.2155, "loss": 0.67411, "time": 0.39976} -{"mode": "train", "epoch": 11, "iter": 2350, "lr": 2e-05, "memory": 8424, "data_time": 0.04722, "loss_rpn_cls": 0.01881, "loss_rpn_bbox": 0.04112, "loss_cls": 0.16124, "acc": 94.06982, "loss_bbox": 0.20484, "loss_mask": 0.21963, "loss": 0.64564, "time": 0.39684} -{"mode": "train", "epoch": 11, "iter": 2400, "lr": 2e-05, "memory": 8424, "data_time": 0.05906, "loss_rpn_cls": 0.02284, "loss_rpn_bbox": 0.04583, "loss_cls": 0.17244, "acc": 93.55786, "loss_bbox": 0.22041, "loss_mask": 0.22073, "loss": 0.68225, "time": 0.41793} -{"mode": "train", "epoch": 11, "iter": 2450, "lr": 2e-05, "memory": 8424, "data_time": 0.04863, "loss_rpn_cls": 0.01818, "loss_rpn_bbox": 0.03915, "loss_cls": 0.16378, "acc": 93.94287, "loss_bbox": 0.20961, "loss_mask": 0.21456, "loss": 0.64528, "time": 0.40719} -{"mode": "train", "epoch": 11, "iter": 2500, "lr": 2e-05, "memory": 8424, "data_time": 0.04313, "loss_rpn_cls": 0.02053, "loss_rpn_bbox": 0.04366, "loss_cls": 0.17094, "acc": 93.66431, "loss_bbox": 0.21782, "loss_mask": 0.22278, "loss": 0.67573, "time": 0.40605} -{"mode": "train", "epoch": 11, "iter": 2550, "lr": 2e-05, "memory": 8424, "data_time": 0.05271, "loss_rpn_cls": 0.01996, "loss_rpn_bbox": 0.04175, "loss_cls": 0.16551, "acc": 93.89746, "loss_bbox": 0.20756, "loss_mask": 0.2175, "loss": 0.65228, "time": 0.40979} -{"mode": "train", "epoch": 11, "iter": 2600, "lr": 2e-05, "memory": 8424, "data_time": 0.05141, "loss_rpn_cls": 0.02031, "loss_rpn_bbox": 0.0433, "loss_cls": 0.17036, "acc": 93.74121, "loss_bbox": 0.21188, "loss_mask": 0.21603, "loss": 0.66188, "time": 0.411} -{"mode": "train", "epoch": 11, "iter": 2650, "lr": 2e-05, "memory": 8424, "data_time": 0.04518, "loss_rpn_cls": 0.01887, "loss_rpn_bbox": 0.04059, "loss_cls": 0.1684, "acc": 93.83105, "loss_bbox": 0.21062, "loss_mask": 0.21652, "loss": 0.65499, "time": 0.40247} -{"mode": "train", "epoch": 11, "iter": 2700, "lr": 2e-05, "memory": 8424, "data_time": 0.04716, "loss_rpn_cls": 0.01938, "loss_rpn_bbox": 0.04468, "loss_cls": 
0.17122, "acc": 93.66699, "loss_bbox": 0.21786, "loss_mask": 0.21534, "loss": 0.66848, "time": 0.41016} -{"mode": "train", "epoch": 11, "iter": 2750, "lr": 2e-05, "memory": 8424, "data_time": 0.04407, "loss_rpn_cls": 0.02069, "loss_rpn_bbox": 0.04434, "loss_cls": 0.16888, "acc": 93.66455, "loss_bbox": 0.21626, "loss_mask": 0.22134, "loss": 0.67152, "time": 0.40902} -{"mode": "train", "epoch": 11, "iter": 2800, "lr": 2e-05, "memory": 8424, "data_time": 0.04548, "loss_rpn_cls": 0.02019, "loss_rpn_bbox": 0.04332, "loss_cls": 0.17204, "acc": 93.6228, "loss_bbox": 0.21959, "loss_mask": 0.21583, "loss": 0.67097, "time": 0.41596} -{"mode": "train", "epoch": 11, "iter": 2850, "lr": 2e-05, "memory": 8424, "data_time": 0.04202, "loss_rpn_cls": 0.01914, "loss_rpn_bbox": 0.04319, "loss_cls": 0.16687, "acc": 93.87134, "loss_bbox": 0.20959, "loss_mask": 0.21937, "loss": 0.65815, "time": 0.46786} -{"mode": "train", "epoch": 11, "iter": 2900, "lr": 2e-05, "memory": 8424, "data_time": 0.04306, "loss_rpn_cls": 0.0205, "loss_rpn_bbox": 0.04313, "loss_cls": 0.17002, "acc": 93.72778, "loss_bbox": 0.21645, "loss_mask": 0.22091, "loss": 0.67101, "time": 0.46238} -{"mode": "train", "epoch": 11, "iter": 2950, "lr": 2e-05, "memory": 8424, "data_time": 0.05304, "loss_rpn_cls": 0.02149, "loss_rpn_bbox": 0.04433, "loss_cls": 0.17554, "acc": 93.46094, "loss_bbox": 0.22619, "loss_mask": 0.22386, "loss": 0.69141, "time": 0.41313} -{"mode": "train", "epoch": 11, "iter": 3000, "lr": 2e-05, "memory": 8424, "data_time": 0.05122, "loss_rpn_cls": 0.02129, "loss_rpn_bbox": 0.04234, "loss_cls": 0.17087, "acc": 93.67139, "loss_bbox": 0.22063, "loss_mask": 0.22008, "loss": 0.6752, "time": 0.4058} -{"mode": "train", "epoch": 11, "iter": 3050, "lr": 2e-05, "memory": 8424, "data_time": 0.04479, "loss_rpn_cls": 0.02068, "loss_rpn_bbox": 0.04547, "loss_cls": 0.17447, "acc": 93.53101, "loss_bbox": 0.22135, "loss_mask": 0.22492, "loss": 0.68688, "time": 0.48317} -{"mode": "train", "epoch": 11, "iter": 3100, "lr": 2e-05, "memory": 8424, "data_time": 0.04828, "loss_rpn_cls": 0.01928, "loss_rpn_bbox": 0.04243, "loss_cls": 0.16355, "acc": 93.96826, "loss_bbox": 0.20881, "loss_mask": 0.21312, "loss": 0.64719, "time": 0.40863} -{"mode": "train", "epoch": 11, "iter": 3150, "lr": 2e-05, "memory": 8424, "data_time": 0.04941, "loss_rpn_cls": 0.02148, "loss_rpn_bbox": 0.0447, "loss_cls": 0.17461, "acc": 93.54492, "loss_bbox": 0.21654, "loss_mask": 0.21824, "loss": 0.67558, "time": 0.41116} -{"mode": "train", "epoch": 11, "iter": 3200, "lr": 2e-05, "memory": 8424, "data_time": 0.04994, "loss_rpn_cls": 0.02046, "loss_rpn_bbox": 0.04431, "loss_cls": 0.17401, "acc": 93.60034, "loss_bbox": 0.21778, "loss_mask": 0.21938, "loss": 0.67594, "time": 0.40509} -{"mode": "train", "epoch": 11, "iter": 3250, "lr": 2e-05, "memory": 8424, "data_time": 0.04146, "loss_rpn_cls": 0.02023, "loss_rpn_bbox": 0.04487, "loss_cls": 0.17091, "acc": 93.60718, "loss_bbox": 0.21506, "loss_mask": 0.22213, "loss": 0.67321, "time": 0.42509} -{"mode": "train", "epoch": 11, "iter": 3300, "lr": 2e-05, "memory": 8424, "data_time": 0.03918, "loss_rpn_cls": 0.01882, "loss_rpn_bbox": 0.04001, "loss_cls": 0.15559, "acc": 94.27808, "loss_bbox": 0.20304, "loss_mask": 0.21529, "loss": 0.63274, "time": 0.41454} -{"mode": "train", "epoch": 11, "iter": 3350, "lr": 2e-05, "memory": 8424, "data_time": 0.04318, "loss_rpn_cls": 0.0199, "loss_rpn_bbox": 0.04345, "loss_cls": 0.17039, "acc": 93.6665, "loss_bbox": 0.21812, "loss_mask": 0.22504, "loss": 0.6769, "time": 0.40168} -{"mode": "train", 
"epoch": 11, "iter": 3400, "lr": 2e-05, "memory": 8424, "data_time": 0.03439, "loss_rpn_cls": 0.01912, "loss_rpn_bbox": 0.04357, "loss_cls": 0.16859, "acc": 93.78198, "loss_bbox": 0.21445, "loss_mask": 0.21731, "loss": 0.66304, "time": 0.39363} -{"mode": "train", "epoch": 11, "iter": 3450, "lr": 2e-05, "memory": 8424, "data_time": 0.04055, "loss_rpn_cls": 0.0206, "loss_rpn_bbox": 0.04148, "loss_cls": 0.16849, "acc": 93.70288, "loss_bbox": 0.21419, "loss_mask": 0.21904, "loss": 0.6638, "time": 0.45534} -{"mode": "train", "epoch": 11, "iter": 3500, "lr": 2e-05, "memory": 8424, "data_time": 0.04498, "loss_rpn_cls": 0.0194, "loss_rpn_bbox": 0.04094, "loss_cls": 0.17162, "acc": 93.71362, "loss_bbox": 0.21234, "loss_mask": 0.21524, "loss": 0.65955, "time": 0.41219} -{"mode": "train", "epoch": 11, "iter": 3550, "lr": 2e-05, "memory": 8424, "data_time": 0.04508, "loss_rpn_cls": 0.02011, "loss_rpn_bbox": 0.0418, "loss_cls": 0.17405, "acc": 93.47119, "loss_bbox": 0.21799, "loss_mask": 0.21604, "loss": 0.66999, "time": 0.4095} -{"mode": "train", "epoch": 11, "iter": 3600, "lr": 2e-05, "memory": 8424, "data_time": 0.04804, "loss_rpn_cls": 0.02008, "loss_rpn_bbox": 0.04094, "loss_cls": 0.16219, "acc": 94.03735, "loss_bbox": 0.20835, "loss_mask": 0.21502, "loss": 0.64659, "time": 0.40156} -{"mode": "train", "epoch": 11, "iter": 3650, "lr": 2e-05, "memory": 8424, "data_time": 0.05121, "loss_rpn_cls": 0.01962, "loss_rpn_bbox": 0.04272, "loss_cls": 0.17068, "acc": 93.68677, "loss_bbox": 0.21603, "loss_mask": 0.22398, "loss": 0.67303, "time": 0.40432} -{"mode": "train", "epoch": 11, "iter": 3700, "lr": 2e-05, "memory": 8424, "data_time": 0.04177, "loss_rpn_cls": 0.01963, "loss_rpn_bbox": 0.04056, "loss_cls": 0.16894, "acc": 93.7373, "loss_bbox": 0.21275, "loss_mask": 0.21601, "loss": 0.65789, "time": 0.41254} -{"mode": "train", "epoch": 11, "iter": 3750, "lr": 2e-05, "memory": 8424, "data_time": 0.04816, "loss_rpn_cls": 0.01789, "loss_rpn_bbox": 0.04258, "loss_cls": 0.16668, "acc": 93.84717, "loss_bbox": 0.21097, "loss_mask": 0.21667, "loss": 0.65479, "time": 0.39126} -{"mode": "train", "epoch": 11, "iter": 3800, "lr": 2e-05, "memory": 8424, "data_time": 0.04918, "loss_rpn_cls": 0.01851, "loss_rpn_bbox": 0.03986, "loss_cls": 0.15988, "acc": 94.09814, "loss_bbox": 0.20164, "loss_mask": 0.21586, "loss": 0.63575, "time": 0.39871} -{"mode": "train", "epoch": 11, "iter": 3850, "lr": 2e-05, "memory": 8424, "data_time": 0.04838, "loss_rpn_cls": 0.02098, "loss_rpn_bbox": 0.04553, "loss_cls": 0.17547, "acc": 93.37183, "loss_bbox": 0.22509, "loss_mask": 0.22398, "loss": 0.69107, "time": 0.41575} -{"mode": "train", "epoch": 11, "iter": 3900, "lr": 2e-05, "memory": 8424, "data_time": 0.05019, "loss_rpn_cls": 0.02041, "loss_rpn_bbox": 0.04265, "loss_cls": 0.16671, "acc": 93.97339, "loss_bbox": 0.21095, "loss_mask": 0.21621, "loss": 0.65693, "time": 0.40647} -{"mode": "train", "epoch": 11, "iter": 3950, "lr": 2e-05, "memory": 8424, "data_time": 0.04785, "loss_rpn_cls": 0.02169, "loss_rpn_bbox": 0.04453, "loss_cls": 0.17609, "acc": 93.44092, "loss_bbox": 0.21767, "loss_mask": 0.22198, "loss": 0.68196, "time": 0.44709} -{"mode": "train", "epoch": 11, "iter": 4000, "lr": 2e-05, "memory": 8424, "data_time": 0.04627, "loss_rpn_cls": 0.0209, "loss_rpn_bbox": 0.04265, "loss_cls": 0.16906, "acc": 93.77588, "loss_bbox": 0.21539, "loss_mask": 0.21567, "loss": 0.66365, "time": 0.41987} -{"mode": "train", "epoch": 11, "iter": 4050, "lr": 2e-05, "memory": 8424, "data_time": 0.05293, "loss_rpn_cls": 0.01913, "loss_rpn_bbox": 0.04134, 
"loss_cls": 0.1715, "acc": 93.73999, "loss_bbox": 0.21658, "loss_mask": 0.21899, "loss": 0.66753, "time": 0.40361} -{"mode": "train", "epoch": 11, "iter": 4100, "lr": 2e-05, "memory": 8424, "data_time": 0.04106, "loss_rpn_cls": 0.02259, "loss_rpn_bbox": 0.0431, "loss_cls": 0.17826, "acc": 93.47095, "loss_bbox": 0.21923, "loss_mask": 0.22362, "loss": 0.68681, "time": 0.41993} -{"mode": "train", "epoch": 11, "iter": 4150, "lr": 2e-05, "memory": 8424, "data_time": 0.04434, "loss_rpn_cls": 0.01944, "loss_rpn_bbox": 0.04042, "loss_cls": 0.16756, "acc": 93.78076, "loss_bbox": 0.21055, "loss_mask": 0.21984, "loss": 0.65781, "time": 0.39716} -{"mode": "train", "epoch": 11, "iter": 4200, "lr": 2e-05, "memory": 8424, "data_time": 0.04883, "loss_rpn_cls": 0.01885, "loss_rpn_bbox": 0.044, "loss_cls": 0.16905, "acc": 93.71851, "loss_bbox": 0.21593, "loss_mask": 0.21536, "loss": 0.66319, "time": 0.42842} -{"mode": "train", "epoch": 11, "iter": 4250, "lr": 2e-05, "memory": 8424, "data_time": 0.04289, "loss_rpn_cls": 0.02074, "loss_rpn_bbox": 0.04127, "loss_cls": 0.16584, "acc": 93.85181, "loss_bbox": 0.20826, "loss_mask": 0.21884, "loss": 0.65495, "time": 0.40817} -{"mode": "train", "epoch": 11, "iter": 4300, "lr": 2e-05, "memory": 8424, "data_time": 0.04535, "loss_rpn_cls": 0.0208, "loss_rpn_bbox": 0.04517, "loss_cls": 0.17132, "acc": 93.72119, "loss_bbox": 0.21948, "loss_mask": 0.22191, "loss": 0.67869, "time": 0.41324} -{"mode": "train", "epoch": 11, "iter": 4350, "lr": 2e-05, "memory": 8424, "data_time": 0.04814, "loss_rpn_cls": 0.02112, "loss_rpn_bbox": 0.0448, "loss_cls": 0.17503, "acc": 93.56934, "loss_bbox": 0.21827, "loss_mask": 0.22355, "loss": 0.68277, "time": 0.41617} -{"mode": "train", "epoch": 11, "iter": 4400, "lr": 2e-05, "memory": 8424, "data_time": 0.05194, "loss_rpn_cls": 0.02013, "loss_rpn_bbox": 0.04247, "loss_cls": 0.16791, "acc": 93.71484, "loss_bbox": 0.21326, "loss_mask": 0.22049, "loss": 0.66425, "time": 0.40854} -{"mode": "train", "epoch": 11, "iter": 4450, "lr": 2e-05, "memory": 8424, "data_time": 0.04484, "loss_rpn_cls": 0.01859, "loss_rpn_bbox": 0.04288, "loss_cls": 0.17002, "acc": 93.70752, "loss_bbox": 0.21596, "loss_mask": 0.22103, "loss": 0.66848, "time": 0.40785} -{"mode": "train", "epoch": 11, "iter": 4500, "lr": 2e-05, "memory": 8424, "data_time": 0.05848, "loss_rpn_cls": 0.01946, "loss_rpn_bbox": 0.04327, "loss_cls": 0.17041, "acc": 93.79272, "loss_bbox": 0.21312, "loss_mask": 0.21807, "loss": 0.66435, "time": 0.40255} -{"mode": "train", "epoch": 11, "iter": 4550, "lr": 2e-05, "memory": 8424, "data_time": 0.04866, "loss_rpn_cls": 0.01989, "loss_rpn_bbox": 0.04271, "loss_cls": 0.16613, "acc": 93.90356, "loss_bbox": 0.20961, "loss_mask": 0.2144, "loss": 0.65273, "time": 0.40376} -{"mode": "train", "epoch": 11, "iter": 4600, "lr": 2e-05, "memory": 8424, "data_time": 0.04763, "loss_rpn_cls": 0.02004, "loss_rpn_bbox": 0.04182, "loss_cls": 0.16816, "acc": 93.78003, "loss_bbox": 0.21053, "loss_mask": 0.21748, "loss": 0.65803, "time": 0.41376} -{"mode": "train", "epoch": 11, "iter": 4650, "lr": 2e-05, "memory": 8424, "data_time": 0.04764, "loss_rpn_cls": 0.02011, "loss_rpn_bbox": 0.04402, "loss_cls": 0.17801, "acc": 93.44849, "loss_bbox": 0.22111, "loss_mask": 0.23142, "loss": 0.69467, "time": 0.40933} -{"mode": "train", "epoch": 11, "iter": 4700, "lr": 2e-05, "memory": 8424, "data_time": 0.04955, "loss_rpn_cls": 0.01991, "loss_rpn_bbox": 0.04162, "loss_cls": 0.17227, "acc": 93.61182, "loss_bbox": 0.21286, "loss_mask": 0.21872, "loss": 0.66538, "time": 0.39969} -{"mode": 
"train", "epoch": 11, "iter": 4750, "lr": 2e-05, "memory": 8424, "data_time": 0.03677, "loss_rpn_cls": 0.01982, "loss_rpn_bbox": 0.04176, "loss_cls": 0.16931, "acc": 93.73364, "loss_bbox": 0.21352, "loss_mask": 0.21755, "loss": 0.66195, "time": 0.39852} -{"mode": "train", "epoch": 11, "iter": 4800, "lr": 2e-05, "memory": 8424, "data_time": 0.04931, "loss_rpn_cls": 0.02221, "loss_rpn_bbox": 0.04595, "loss_cls": 0.18023, "acc": 93.44702, "loss_bbox": 0.22354, "loss_mask": 0.22152, "loss": 0.69345, "time": 0.4223} -{"mode": "train", "epoch": 11, "iter": 4850, "lr": 2e-05, "memory": 8424, "data_time": 0.04395, "loss_rpn_cls": 0.01938, "loss_rpn_bbox": 0.04209, "loss_cls": 0.16914, "acc": 93.80493, "loss_bbox": 0.21093, "loss_mask": 0.21504, "loss": 0.65658, "time": 0.40254} -{"mode": "train", "epoch": 11, "iter": 4900, "lr": 2e-05, "memory": 8424, "data_time": 0.05283, "loss_rpn_cls": 0.01976, "loss_rpn_bbox": 0.04255, "loss_cls": 0.17218, "acc": 93.56494, "loss_bbox": 0.22058, "loss_mask": 0.22694, "loss": 0.68201, "time": 0.41147} -{"mode": "train", "epoch": 11, "iter": 4950, "lr": 2e-05, "memory": 8424, "data_time": 0.04085, "loss_rpn_cls": 0.01958, "loss_rpn_bbox": 0.03973, "loss_cls": 0.16626, "acc": 93.9231, "loss_bbox": 0.21076, "loss_mask": 0.22281, "loss": 0.65914, "time": 0.40158} -{"mode": "train", "epoch": 11, "iter": 5000, "lr": 2e-05, "memory": 8424, "data_time": 0.04587, "loss_rpn_cls": 0.01857, "loss_rpn_bbox": 0.04094, "loss_cls": 0.16501, "acc": 93.84619, "loss_bbox": 0.21016, "loss_mask": 0.2146, "loss": 0.64927, "time": 0.40406} -{"mode": "train", "epoch": 11, "iter": 5050, "lr": 2e-05, "memory": 8424, "data_time": 0.04913, "loss_rpn_cls": 0.01858, "loss_rpn_bbox": 0.04231, "loss_cls": 0.16548, "acc": 93.91577, "loss_bbox": 0.21114, "loss_mask": 0.21696, "loss": 0.65448, "time": 0.41241} -{"mode": "train", "epoch": 11, "iter": 5100, "lr": 2e-05, "memory": 8424, "data_time": 0.04595, "loss_rpn_cls": 0.02051, "loss_rpn_bbox": 0.04256, "loss_cls": 0.17174, "acc": 93.57446, "loss_bbox": 0.21654, "loss_mask": 0.21831, "loss": 0.66967, "time": 0.40296} -{"mode": "train", "epoch": 11, "iter": 5150, "lr": 2e-05, "memory": 8424, "data_time": 0.04729, "loss_rpn_cls": 0.02015, "loss_rpn_bbox": 0.04124, "loss_cls": 0.16501, "acc": 93.97339, "loss_bbox": 0.20368, "loss_mask": 0.21645, "loss": 0.64652, "time": 0.46362} -{"mode": "train", "epoch": 11, "iter": 5200, "lr": 2e-05, "memory": 8424, "data_time": 0.04743, "loss_rpn_cls": 0.01915, "loss_rpn_bbox": 0.04172, "loss_cls": 0.161, "acc": 93.97974, "loss_bbox": 0.20765, "loss_mask": 0.21053, "loss": 0.64005, "time": 0.39462} -{"mode": "train", "epoch": 11, "iter": 5250, "lr": 2e-05, "memory": 8424, "data_time": 0.04381, "loss_rpn_cls": 0.01823, "loss_rpn_bbox": 0.04197, "loss_cls": 0.16843, "acc": 93.72046, "loss_bbox": 0.20775, "loss_mask": 0.21391, "loss": 0.65029, "time": 0.40767} -{"mode": "train", "epoch": 11, "iter": 5300, "lr": 2e-05, "memory": 8424, "data_time": 0.04363, "loss_rpn_cls": 0.0214, "loss_rpn_bbox": 0.04355, "loss_cls": 0.1718, "acc": 93.62915, "loss_bbox": 0.21895, "loss_mask": 0.22172, "loss": 0.67743, "time": 0.40113} -{"mode": "train", "epoch": 11, "iter": 5350, "lr": 2e-05, "memory": 8424, "data_time": 0.05451, "loss_rpn_cls": 0.02073, "loss_rpn_bbox": 0.04415, "loss_cls": 0.17229, "acc": 93.5957, "loss_bbox": 0.21912, "loss_mask": 0.22004, "loss": 0.67633, "time": 0.40589} -{"mode": "train", "epoch": 11, "iter": 5400, "lr": 2e-05, "memory": 8424, "data_time": 0.0433, "loss_rpn_cls": 0.0195, "loss_rpn_bbox": 
0.04225, "loss_cls": 0.16535, "acc": 93.9314, "loss_bbox": 0.21044, "loss_mask": 0.21852, "loss": 0.65607, "time": 0.40463} -{"mode": "train", "epoch": 11, "iter": 5450, "lr": 2e-05, "memory": 8424, "data_time": 0.0484, "loss_rpn_cls": 0.01922, "loss_rpn_bbox": 0.04136, "loss_cls": 0.16703, "acc": 93.87671, "loss_bbox": 0.20586, "loss_mask": 0.2128, "loss": 0.64626, "time": 0.41037} -{"mode": "train", "epoch": 11, "iter": 5500, "lr": 2e-05, "memory": 8424, "data_time": 0.04033, "loss_rpn_cls": 0.02086, "loss_rpn_bbox": 0.04152, "loss_cls": 0.17165, "acc": 93.8064, "loss_bbox": 0.21089, "loss_mask": 0.21878, "loss": 0.6637, "time": 0.39322} -{"mode": "train", "epoch": 11, "iter": 5550, "lr": 2e-05, "memory": 8424, "data_time": 0.05365, "loss_rpn_cls": 0.02005, "loss_rpn_bbox": 0.04376, "loss_cls": 0.17359, "acc": 93.59106, "loss_bbox": 0.21874, "loss_mask": 0.22198, "loss": 0.67812, "time": 0.40702} -{"mode": "train", "epoch": 11, "iter": 5600, "lr": 2e-05, "memory": 8424, "data_time": 0.03853, "loss_rpn_cls": 0.01968, "loss_rpn_bbox": 0.04178, "loss_cls": 0.16057, "acc": 94.10352, "loss_bbox": 0.20445, "loss_mask": 0.21311, "loss": 0.63958, "time": 0.40968} -{"mode": "train", "epoch": 11, "iter": 5650, "lr": 2e-05, "memory": 8424, "data_time": 0.04693, "loss_rpn_cls": 0.01976, "loss_rpn_bbox": 0.04265, "loss_cls": 0.16777, "acc": 93.76123, "loss_bbox": 0.21199, "loss_mask": 0.21682, "loss": 0.659, "time": 0.42307} -{"mode": "train", "epoch": 11, "iter": 5700, "lr": 2e-05, "memory": 8424, "data_time": 0.04672, "loss_rpn_cls": 0.02037, "loss_rpn_bbox": 0.04388, "loss_cls": 0.17115, "acc": 93.75073, "loss_bbox": 0.21816, "loss_mask": 0.22277, "loss": 0.67633, "time": 0.41463} -{"mode": "train", "epoch": 11, "iter": 5750, "lr": 2e-05, "memory": 8424, "data_time": 0.0519, "loss_rpn_cls": 0.02093, "loss_rpn_bbox": 0.04319, "loss_cls": 0.1684, "acc": 93.82617, "loss_bbox": 0.21309, "loss_mask": 0.22049, "loss": 0.6661, "time": 0.40122} -{"mode": "train", "epoch": 11, "iter": 5800, "lr": 2e-05, "memory": 8424, "data_time": 0.04433, "loss_rpn_cls": 0.01969, "loss_rpn_bbox": 0.04335, "loss_cls": 0.16488, "acc": 93.90796, "loss_bbox": 0.20668, "loss_mask": 0.21391, "loss": 0.64851, "time": 0.41265} -{"mode": "train", "epoch": 11, "iter": 5850, "lr": 2e-05, "memory": 8424, "data_time": 0.03839, "loss_rpn_cls": 0.02047, "loss_rpn_bbox": 0.04276, "loss_cls": 0.16602, "acc": 93.87207, "loss_bbox": 0.2143, "loss_mask": 0.21912, "loss": 0.66267, "time": 0.41548} -{"mode": "train", "epoch": 11, "iter": 5900, "lr": 2e-05, "memory": 8424, "data_time": 0.04821, "loss_rpn_cls": 0.01821, "loss_rpn_bbox": 0.03996, "loss_cls": 0.16073, "acc": 94.07422, "loss_bbox": 0.20355, "loss_mask": 0.21599, "loss": 0.63844, "time": 0.40476} -{"mode": "train", "epoch": 11, "iter": 5950, "lr": 2e-05, "memory": 8424, "data_time": 0.04331, "loss_rpn_cls": 0.02003, "loss_rpn_bbox": 0.04119, "loss_cls": 0.16665, "acc": 93.83594, "loss_bbox": 0.20703, "loss_mask": 0.21299, "loss": 0.64789, "time": 0.40546} -{"mode": "train", "epoch": 11, "iter": 6000, "lr": 2e-05, "memory": 8424, "data_time": 0.04938, "loss_rpn_cls": 0.02103, "loss_rpn_bbox": 0.04397, "loss_cls": 0.1697, "acc": 93.6853, "loss_bbox": 0.21318, "loss_mask": 0.21748, "loss": 0.66536, "time": 0.41002} -{"mode": "train", "epoch": 11, "iter": 6050, "lr": 2e-05, "memory": 8424, "data_time": 0.05079, "loss_rpn_cls": 0.02149, "loss_rpn_bbox": 0.04389, "loss_cls": 0.16728, "acc": 93.86646, "loss_bbox": 0.20973, "loss_mask": 0.21699, "loss": 0.65938, "time": 0.4045} -{"mode": 
"train", "epoch": 11, "iter": 6100, "lr": 2e-05, "memory": 8424, "data_time": 0.04879, "loss_rpn_cls": 0.02061, "loss_rpn_bbox": 0.04316, "loss_cls": 0.17297, "acc": 93.6145, "loss_bbox": 0.21762, "loss_mask": 0.21959, "loss": 0.67394, "time": 0.39977} -{"mode": "train", "epoch": 11, "iter": 6150, "lr": 2e-05, "memory": 8424, "data_time": 0.04676, "loss_rpn_cls": 0.01837, "loss_rpn_bbox": 0.03935, "loss_cls": 0.16529, "acc": 93.84473, "loss_bbox": 0.21299, "loss_mask": 0.2192, "loss": 0.65521, "time": 0.39458} -{"mode": "train", "epoch": 11, "iter": 6200, "lr": 2e-05, "memory": 8424, "data_time": 0.04603, "loss_rpn_cls": 0.02127, "loss_rpn_bbox": 0.044, "loss_cls": 0.17227, "acc": 93.65039, "loss_bbox": 0.21601, "loss_mask": 0.21698, "loss": 0.67053, "time": 0.47151} -{"mode": "train", "epoch": 11, "iter": 6250, "lr": 2e-05, "memory": 8424, "data_time": 0.05173, "loss_rpn_cls": 0.01869, "loss_rpn_bbox": 0.04073, "loss_cls": 0.16713, "acc": 93.86206, "loss_bbox": 0.21063, "loss_mask": 0.21583, "loss": 0.65301, "time": 0.40735} -{"mode": "train", "epoch": 11, "iter": 6300, "lr": 2e-05, "memory": 8424, "data_time": 0.04285, "loss_rpn_cls": 0.02176, "loss_rpn_bbox": 0.04326, "loss_cls": 0.17609, "acc": 93.55347, "loss_bbox": 0.22178, "loss_mask": 0.22654, "loss": 0.68942, "time": 0.41571} -{"mode": "train", "epoch": 11, "iter": 6350, "lr": 2e-05, "memory": 8424, "data_time": 0.04601, "loss_rpn_cls": 0.01999, "loss_rpn_bbox": 0.04211, "loss_cls": 0.16734, "acc": 93.74048, "loss_bbox": 0.21352, "loss_mask": 0.22094, "loss": 0.66391, "time": 0.40253} -{"mode": "train", "epoch": 11, "iter": 6400, "lr": 2e-05, "memory": 8424, "data_time": 0.04134, "loss_rpn_cls": 0.01989, "loss_rpn_bbox": 0.04093, "loss_cls": 0.17213, "acc": 93.59033, "loss_bbox": 0.21661, "loss_mask": 0.21949, "loss": 0.66905, "time": 0.41373} -{"mode": "train", "epoch": 11, "iter": 6450, "lr": 2e-05, "memory": 8424, "data_time": 0.0522, "loss_rpn_cls": 0.01947, "loss_rpn_bbox": 0.04263, "loss_cls": 0.17756, "acc": 93.45679, "loss_bbox": 0.2246, "loss_mask": 0.22203, "loss": 0.68628, "time": 0.41265} -{"mode": "train", "epoch": 11, "iter": 6500, "lr": 2e-05, "memory": 8424, "data_time": 0.05022, "loss_rpn_cls": 0.01981, "loss_rpn_bbox": 0.04186, "loss_cls": 0.16364, "acc": 93.86597, "loss_bbox": 0.21193, "loss_mask": 0.22109, "loss": 0.65834, "time": 0.4044} -{"mode": "train", "epoch": 11, "iter": 6550, "lr": 2e-05, "memory": 8424, "data_time": 0.04204, "loss_rpn_cls": 0.02144, "loss_rpn_bbox": 0.04361, "loss_cls": 0.17643, "acc": 93.41284, "loss_bbox": 0.22299, "loss_mask": 0.22259, "loss": 0.68706, "time": 0.41925} -{"mode": "train", "epoch": 11, "iter": 6600, "lr": 2e-05, "memory": 8424, "data_time": 0.04521, "loss_rpn_cls": 0.01995, "loss_rpn_bbox": 0.04275, "loss_cls": 0.17386, "acc": 93.55762, "loss_bbox": 0.21602, "loss_mask": 0.22033, "loss": 0.67291, "time": 0.40543} -{"mode": "train", "epoch": 11, "iter": 6650, "lr": 2e-05, "memory": 8424, "data_time": 0.05063, "loss_rpn_cls": 0.02074, "loss_rpn_bbox": 0.04347, "loss_cls": 0.17347, "acc": 93.60229, "loss_bbox": 0.21681, "loss_mask": 0.22386, "loss": 0.67836, "time": 0.40356} -{"mode": "train", "epoch": 11, "iter": 6700, "lr": 2e-05, "memory": 8424, "data_time": 0.05018, "loss_rpn_cls": 0.01903, "loss_rpn_bbox": 0.04299, "loss_cls": 0.1664, "acc": 93.86328, "loss_bbox": 0.21288, "loss_mask": 0.22019, "loss": 0.66148, "time": 0.40902} -{"mode": "train", "epoch": 11, "iter": 6750, "lr": 2e-05, "memory": 8424, "data_time": 0.05697, "loss_rpn_cls": 0.02031, "loss_rpn_bbox": 
0.04542, "loss_cls": 0.18104, "acc": 93.26636, "loss_bbox": 0.22874, "loss_mask": 0.22322, "loss": 0.69873, "time": 0.41646} -{"mode": "train", "epoch": 11, "iter": 6800, "lr": 2e-05, "memory": 8424, "data_time": 0.05014, "loss_rpn_cls": 0.01941, "loss_rpn_bbox": 0.04284, "loss_cls": 0.17289, "acc": 93.66724, "loss_bbox": 0.21816, "loss_mask": 0.22092, "loss": 0.67422, "time": 0.40861} -{"mode": "train", "epoch": 11, "iter": 6850, "lr": 2e-05, "memory": 8424, "data_time": 0.0529, "loss_rpn_cls": 0.02071, "loss_rpn_bbox": 0.04223, "loss_cls": 0.17157, "acc": 93.75391, "loss_bbox": 0.21221, "loss_mask": 0.21509, "loss": 0.66181, "time": 0.41407} -{"mode": "train", "epoch": 11, "iter": 6900, "lr": 2e-05, "memory": 8424, "data_time": 0.0473, "loss_rpn_cls": 0.01969, "loss_rpn_bbox": 0.04314, "loss_cls": 0.16717, "acc": 93.92676, "loss_bbox": 0.20944, "loss_mask": 0.21559, "loss": 0.65503, "time": 0.50892} -{"mode": "train", "epoch": 11, "iter": 6950, "lr": 2e-05, "memory": 8424, "data_time": 0.06402, "loss_rpn_cls": 0.02182, "loss_rpn_bbox": 0.04225, "loss_cls": 0.17569, "acc": 93.53442, "loss_bbox": 0.22239, "loss_mask": 0.22377, "loss": 0.68592, "time": 0.41333} -{"mode": "train", "epoch": 11, "iter": 7000, "lr": 2e-05, "memory": 8424, "data_time": 0.0451, "loss_rpn_cls": 0.01903, "loss_rpn_bbox": 0.04253, "loss_cls": 0.16803, "acc": 93.76245, "loss_bbox": 0.21312, "loss_mask": 0.21849, "loss": 0.66119, "time": 0.40719} -{"mode": "train", "epoch": 11, "iter": 7050, "lr": 2e-05, "memory": 8424, "data_time": 0.05406, "loss_rpn_cls": 0.0211, "loss_rpn_bbox": 0.04422, "loss_cls": 0.17253, "acc": 93.64697, "loss_bbox": 0.21999, "loss_mask": 0.22373, "loss": 0.68157, "time": 0.4213} -{"mode": "train", "epoch": 11, "iter": 7100, "lr": 2e-05, "memory": 8424, "data_time": 0.04855, "loss_rpn_cls": 0.02009, "loss_rpn_bbox": 0.0427, "loss_cls": 0.16964, "acc": 93.60181, "loss_bbox": 0.21612, "loss_mask": 0.22126, "loss": 0.6698, "time": 0.41503} -{"mode": "train", "epoch": 11, "iter": 7150, "lr": 2e-05, "memory": 8424, "data_time": 0.04497, "loss_rpn_cls": 0.01992, "loss_rpn_bbox": 0.04223, "loss_cls": 0.16911, "acc": 93.84058, "loss_bbox": 0.21367, "loss_mask": 0.21636, "loss": 0.66129, "time": 0.40383} -{"mode": "train", "epoch": 11, "iter": 7200, "lr": 2e-05, "memory": 8424, "data_time": 0.04439, "loss_rpn_cls": 0.02028, "loss_rpn_bbox": 0.04403, "loss_cls": 0.17477, "acc": 93.52661, "loss_bbox": 0.22164, "loss_mask": 0.21769, "loss": 0.67841, "time": 0.4044} -{"mode": "train", "epoch": 11, "iter": 7250, "lr": 2e-05, "memory": 8424, "data_time": 0.05157, "loss_rpn_cls": 0.01973, "loss_rpn_bbox": 0.04274, "loss_cls": 0.17127, "acc": 93.66528, "loss_bbox": 0.21461, "loss_mask": 0.22, "loss": 0.66836, "time": 0.39973} -{"mode": "train", "epoch": 11, "iter": 7300, "lr": 2e-05, "memory": 8424, "data_time": 0.04803, "loss_rpn_cls": 0.0191, "loss_rpn_bbox": 0.04235, "loss_cls": 0.17185, "acc": 93.67993, "loss_bbox": 0.21617, "loss_mask": 0.21909, "loss": 0.66856, "time": 0.40442} -{"mode": "val", "epoch": 11, "iter": 625, "lr": 2e-05, "bbox_mAP": 0.4137, "bbox_mAP_50": 0.631, "bbox_mAP_75": 0.4509, "bbox_mAP_s": 0.2387, "bbox_mAP_m": 0.4491, "bbox_mAP_l": 0.5421, "bbox_mAP_copypaste": "0.4137 0.6310 0.4509 0.2387 0.4491 0.5421", "segm_mAP": 0.3845, "segm_mAP_50": 0.6035, "segm_mAP_75": 0.4119, "segm_mAP_s": 0.1829, "segm_mAP_m": 0.4144, "segm_mAP_l": 0.562, "segm_mAP_copypaste": "0.3845 0.6035 0.4119 0.1829 0.4144 0.5620"} -{"mode": "train", "epoch": 12, "iter": 50, "lr": 0.0, "memory": 8424, "data_time": 
0.1281, "loss_rpn_cls": 0.02139, "loss_rpn_bbox": 0.04402, "loss_cls": 0.16982, "acc": 93.60376, "loss_bbox": 0.22102, "loss_mask": 0.22073, "loss": 0.67698, "time": 0.58248} -{"mode": "train", "epoch": 12, "iter": 100, "lr": 0.0, "memory": 8424, "data_time": 0.04082, "loss_rpn_cls": 0.01987, "loss_rpn_bbox": 0.04203, "loss_cls": 0.16788, "acc": 93.80518, "loss_bbox": 0.21655, "loss_mask": 0.21934, "loss": 0.66567, "time": 0.41675} -{"mode": "train", "epoch": 12, "iter": 150, "lr": 0.0, "memory": 8424, "data_time": 0.05038, "loss_rpn_cls": 0.01828, "loss_rpn_bbox": 0.04192, "loss_cls": 0.16819, "acc": 93.80371, "loss_bbox": 0.21872, "loss_mask": 0.22132, "loss": 0.66843, "time": 0.42167} -{"mode": "train", "epoch": 12, "iter": 200, "lr": 0.0, "memory": 8424, "data_time": 0.04923, "loss_rpn_cls": 0.01807, "loss_rpn_bbox": 0.04148, "loss_cls": 0.15837, "acc": 94.06226, "loss_bbox": 0.20634, "loss_mask": 0.21301, "loss": 0.63727, "time": 0.4089} -{"mode": "train", "epoch": 12, "iter": 250, "lr": 0.0, "memory": 8424, "data_time": 0.05251, "loss_rpn_cls": 0.01995, "loss_rpn_bbox": 0.04347, "loss_cls": 0.17109, "acc": 93.57715, "loss_bbox": 0.22228, "loss_mask": 0.22035, "loss": 0.67714, "time": 0.40478} -{"mode": "train", "epoch": 12, "iter": 300, "lr": 0.0, "memory": 8424, "data_time": 0.0528, "loss_rpn_cls": 0.01728, "loss_rpn_bbox": 0.03998, "loss_cls": 0.16103, "acc": 93.91064, "loss_bbox": 0.20646, "loss_mask": 0.21067, "loss": 0.63543, "time": 0.4077} -{"mode": "train", "epoch": 12, "iter": 350, "lr": 0.0, "memory": 8424, "data_time": 0.05703, "loss_rpn_cls": 0.01984, "loss_rpn_bbox": 0.04326, "loss_cls": 0.17042, "acc": 93.66284, "loss_bbox": 0.22023, "loss_mask": 0.21824, "loss": 0.67198, "time": 0.42274} -{"mode": "train", "epoch": 12, "iter": 400, "lr": 0.0, "memory": 8424, "data_time": 0.05354, "loss_rpn_cls": 0.01992, "loss_rpn_bbox": 0.04347, "loss_cls": 0.16735, "acc": 93.80322, "loss_bbox": 0.21395, "loss_mask": 0.22063, "loss": 0.66531, "time": 0.40924} -{"mode": "train", "epoch": 12, "iter": 450, "lr": 0.0, "memory": 8424, "data_time": 0.0404, "loss_rpn_cls": 0.02009, "loss_rpn_bbox": 0.04566, "loss_cls": 0.16577, "acc": 93.77808, "loss_bbox": 0.21147, "loss_mask": 0.21912, "loss": 0.66211, "time": 0.41789} -{"mode": "train", "epoch": 12, "iter": 500, "lr": 0.0, "memory": 8424, "data_time": 0.04409, "loss_rpn_cls": 0.01844, "loss_rpn_bbox": 0.04137, "loss_cls": 0.16336, "acc": 93.90161, "loss_bbox": 0.21263, "loss_mask": 0.21929, "loss": 0.65509, "time": 0.41102} -{"mode": "train", "epoch": 12, "iter": 550, "lr": 0.0, "memory": 8424, "data_time": 0.04167, "loss_rpn_cls": 0.01884, "loss_rpn_bbox": 0.04301, "loss_cls": 0.16367, "acc": 93.92114, "loss_bbox": 0.20759, "loss_mask": 0.21179, "loss": 0.64488, "time": 0.40657} -{"mode": "train", "epoch": 12, "iter": 600, "lr": 0.0, "memory": 8424, "data_time": 0.04371, "loss_rpn_cls": 0.01831, "loss_rpn_bbox": 0.04086, "loss_cls": 0.16508, "acc": 93.85669, "loss_bbox": 0.21433, "loss_mask": 0.21732, "loss": 0.65591, "time": 0.40589} -{"mode": "train", "epoch": 12, "iter": 650, "lr": 0.0, "memory": 8424, "data_time": 0.04181, "loss_rpn_cls": 0.01941, "loss_rpn_bbox": 0.04284, "loss_cls": 0.16072, "acc": 94.00806, "loss_bbox": 0.21147, "loss_mask": 0.21538, "loss": 0.64982, "time": 0.41743} -{"mode": "train", "epoch": 12, "iter": 700, "lr": 0.0, "memory": 8424, "data_time": 0.04766, "loss_rpn_cls": 0.01947, "loss_rpn_bbox": 0.04343, "loss_cls": 0.16352, "acc": 93.96899, "loss_bbox": 0.20855, "loss_mask": 0.21423, "loss": 0.64919, "time": 
0.42638} -{"mode": "train", "epoch": 12, "iter": 750, "lr": 0.0, "memory": 8424, "data_time": 0.04534, "loss_rpn_cls": 0.02004, "loss_rpn_bbox": 0.04412, "loss_cls": 0.16859, "acc": 93.72192, "loss_bbox": 0.21415, "loss_mask": 0.22023, "loss": 0.66714, "time": 0.4055} -{"mode": "train", "epoch": 12, "iter": 800, "lr": 0.0, "memory": 8424, "data_time": 0.04556, "loss_rpn_cls": 0.01958, "loss_rpn_bbox": 0.04181, "loss_cls": 0.16242, "acc": 93.90918, "loss_bbox": 0.20948, "loss_mask": 0.21587, "loss": 0.64916, "time": 0.40794} -{"mode": "train", "epoch": 12, "iter": 850, "lr": 0.0, "memory": 8424, "data_time": 0.04722, "loss_rpn_cls": 0.01842, "loss_rpn_bbox": 0.03912, "loss_cls": 0.15618, "acc": 94.14868, "loss_bbox": 0.20566, "loss_mask": 0.21363, "loss": 0.63301, "time": 0.40955} -{"mode": "train", "epoch": 12, "iter": 900, "lr": 0.0, "memory": 8424, "data_time": 0.04966, "loss_rpn_cls": 0.01799, "loss_rpn_bbox": 0.04144, "loss_cls": 0.16193, "acc": 93.91895, "loss_bbox": 0.20614, "loss_mask": 0.21252, "loss": 0.64002, "time": 0.41506} -{"mode": "train", "epoch": 12, "iter": 950, "lr": 0.0, "memory": 8424, "data_time": 0.04874, "loss_rpn_cls": 0.01821, "loss_rpn_bbox": 0.04081, "loss_cls": 0.16546, "acc": 93.85156, "loss_bbox": 0.21422, "loss_mask": 0.21809, "loss": 0.6568, "time": 0.40549} -{"mode": "train", "epoch": 12, "iter": 1000, "lr": 0.0, "memory": 8424, "data_time": 0.04491, "loss_rpn_cls": 0.01955, "loss_rpn_bbox": 0.0408, "loss_cls": 0.16082, "acc": 93.99414, "loss_bbox": 0.21047, "loss_mask": 0.21367, "loss": 0.64531, "time": 0.4639} -{"mode": "train", "epoch": 12, "iter": 1050, "lr": 0.0, "memory": 8424, "data_time": 0.04462, "loss_rpn_cls": 0.01896, "loss_rpn_bbox": 0.04143, "loss_cls": 0.16174, "acc": 93.99243, "loss_bbox": 0.20808, "loss_mask": 0.21898, "loss": 0.64919, "time": 0.39298} -{"mode": "train", "epoch": 12, "iter": 1100, "lr": 0.0, "memory": 8424, "data_time": 0.03891, "loss_rpn_cls": 0.01835, "loss_rpn_bbox": 0.04135, "loss_cls": 0.16034, "acc": 94.19775, "loss_bbox": 0.20172, "loss_mask": 0.21408, "loss": 0.63584, "time": 0.39954} -{"mode": "train", "epoch": 12, "iter": 1150, "lr": 0.0, "memory": 8424, "data_time": 0.04766, "loss_rpn_cls": 0.01809, "loss_rpn_bbox": 0.04114, "loss_cls": 0.16306, "acc": 93.85083, "loss_bbox": 0.21223, "loss_mask": 0.21925, "loss": 0.65376, "time": 0.40912} -{"mode": "train", "epoch": 12, "iter": 1200, "lr": 0.0, "memory": 8424, "data_time": 0.03787, "loss_rpn_cls": 0.01827, "loss_rpn_bbox": 0.04041, "loss_cls": 0.16004, "acc": 94.02075, "loss_bbox": 0.2073, "loss_mask": 0.21484, "loss": 0.64087, "time": 0.41025} -{"mode": "train", "epoch": 12, "iter": 1250, "lr": 0.0, "memory": 8424, "data_time": 0.04552, "loss_rpn_cls": 0.01848, "loss_rpn_bbox": 0.0394, "loss_cls": 0.15803, "acc": 94.18164, "loss_bbox": 0.19984, "loss_mask": 0.21031, "loss": 0.62606, "time": 0.40341} -{"mode": "train", "epoch": 12, "iter": 1300, "lr": 0.0, "memory": 8424, "data_time": 0.05347, "loss_rpn_cls": 0.01889, "loss_rpn_bbox": 0.04231, "loss_cls": 0.16422, "acc": 93.83008, "loss_bbox": 0.21373, "loss_mask": 0.21587, "loss": 0.65502, "time": 0.41108} -{"mode": "train", "epoch": 12, "iter": 1350, "lr": 0.0, "memory": 8424, "data_time": 0.05283, "loss_rpn_cls": 0.01978, "loss_rpn_bbox": 0.04179, "loss_cls": 0.1662, "acc": 93.78271, "loss_bbox": 0.2103, "loss_mask": 0.21416, "loss": 0.65224, "time": 0.41929} -{"mode": "train", "epoch": 12, "iter": 1400, "lr": 0.0, "memory": 8424, "data_time": 0.04383, "loss_rpn_cls": 0.01739, "loss_rpn_bbox": 0.03968, 
"loss_cls": 0.16286, "acc": 93.97437, "loss_bbox": 0.21003, "loss_mask": 0.21641, "loss": 0.64636, "time": 0.39693} -{"mode": "train", "epoch": 12, "iter": 1450, "lr": 0.0, "memory": 8424, "data_time": 0.04212, "loss_rpn_cls": 0.01919, "loss_rpn_bbox": 0.04329, "loss_cls": 0.16368, "acc": 93.88672, "loss_bbox": 0.20832, "loss_mask": 0.21126, "loss": 0.64574, "time": 0.40923} -{"mode": "train", "epoch": 12, "iter": 1500, "lr": 0.0, "memory": 8424, "data_time": 0.0417, "loss_rpn_cls": 0.01787, "loss_rpn_bbox": 0.03974, "loss_cls": 0.16045, "acc": 94.03857, "loss_bbox": 0.20271, "loss_mask": 0.21752, "loss": 0.6383, "time": 0.40547} -{"mode": "train", "epoch": 12, "iter": 1550, "lr": 0.0, "memory": 8424, "data_time": 0.04437, "loss_rpn_cls": 0.01843, "loss_rpn_bbox": 0.04171, "loss_cls": 0.15664, "acc": 94.13623, "loss_bbox": 0.20014, "loss_mask": 0.20981, "loss": 0.62672, "time": 0.40568} -{"mode": "train", "epoch": 12, "iter": 1600, "lr": 0.0, "memory": 8424, "data_time": 0.0571, "loss_rpn_cls": 0.01773, "loss_rpn_bbox": 0.03983, "loss_cls": 0.15296, "acc": 94.29224, "loss_bbox": 0.19942, "loss_mask": 0.21127, "loss": 0.62122, "time": 0.41225} -{"mode": "train", "epoch": 12, "iter": 1650, "lr": 0.0, "memory": 8424, "data_time": 0.05299, "loss_rpn_cls": 0.01992, "loss_rpn_bbox": 0.04035, "loss_cls": 0.16882, "acc": 93.6792, "loss_bbox": 0.21243, "loss_mask": 0.2172, "loss": 0.65872, "time": 0.39725} -{"mode": "train", "epoch": 12, "iter": 1700, "lr": 0.0, "memory": 8424, "data_time": 0.04668, "loss_rpn_cls": 0.01764, "loss_rpn_bbox": 0.03998, "loss_cls": 0.15903, "acc": 94.02686, "loss_bbox": 0.20657, "loss_mask": 0.21157, "loss": 0.63479, "time": 0.3955} -{"mode": "train", "epoch": 12, "iter": 1750, "lr": 0.0, "memory": 8424, "data_time": 0.04093, "loss_rpn_cls": 0.01861, "loss_rpn_bbox": 0.04334, "loss_cls": 0.16618, "acc": 93.72583, "loss_bbox": 0.21523, "loss_mask": 0.21743, "loss": 0.66078, "time": 0.40778} -{"mode": "train", "epoch": 12, "iter": 1800, "lr": 0.0, "memory": 8424, "data_time": 0.0416, "loss_rpn_cls": 0.01879, "loss_rpn_bbox": 0.04123, "loss_cls": 0.1651, "acc": 93.86792, "loss_bbox": 0.2079, "loss_mask": 0.21768, "loss": 0.65071, "time": 0.39375} -{"mode": "train", "epoch": 12, "iter": 1850, "lr": 0.0, "memory": 8424, "data_time": 0.05164, "loss_rpn_cls": 0.01678, "loss_rpn_bbox": 0.04102, "loss_cls": 0.16281, "acc": 93.94092, "loss_bbox": 0.21022, "loss_mask": 0.21281, "loss": 0.64364, "time": 0.46885} -{"mode": "train", "epoch": 12, "iter": 1900, "lr": 0.0, "memory": 8424, "data_time": 0.04662, "loss_rpn_cls": 0.01917, "loss_rpn_bbox": 0.04294, "loss_cls": 0.17078, "acc": 93.63257, "loss_bbox": 0.21439, "loss_mask": 0.21685, "loss": 0.66412, "time": 0.40826} -{"mode": "train", "epoch": 12, "iter": 1950, "lr": 0.0, "memory": 8424, "data_time": 0.04213, "loss_rpn_cls": 0.02094, "loss_rpn_bbox": 0.04435, "loss_cls": 0.1693, "acc": 93.66602, "loss_bbox": 0.21445, "loss_mask": 0.2211, "loss": 0.67013, "time": 0.42472} -{"mode": "train", "epoch": 12, "iter": 2000, "lr": 0.0, "memory": 8424, "data_time": 0.0468, "loss_rpn_cls": 0.01852, "loss_rpn_bbox": 0.04132, "loss_cls": 0.16062, "acc": 93.99243, "loss_bbox": 0.20529, "loss_mask": 0.2151, "loss": 0.64084, "time": 0.40986} -{"mode": "train", "epoch": 12, "iter": 2050, "lr": 0.0, "memory": 8424, "data_time": 0.04815, "loss_rpn_cls": 0.01863, "loss_rpn_bbox": 0.04373, "loss_cls": 0.16739, "acc": 93.84521, "loss_bbox": 0.21373, "loss_mask": 0.21756, "loss": 0.66104, "time": 0.40952} -{"mode": "train", "epoch": 12, "iter": 2100, 
"lr": 0.0, "memory": 8424, "data_time": 0.04949, "loss_rpn_cls": 0.02024, "loss_rpn_bbox": 0.04213, "loss_cls": 0.16682, "acc": 93.77295, "loss_bbox": 0.21643, "loss_mask": 0.21897, "loss": 0.66459, "time": 0.41658} -{"mode": "train", "epoch": 12, "iter": 2150, "lr": 0.0, "memory": 8424, "data_time": 0.04438, "loss_rpn_cls": 0.01931, "loss_rpn_bbox": 0.04161, "loss_cls": 0.15899, "acc": 94.07031, "loss_bbox": 0.20955, "loss_mask": 0.21236, "loss": 0.64183, "time": 0.4029} -{"mode": "train", "epoch": 12, "iter": 2200, "lr": 0.0, "memory": 8424, "data_time": 0.0501, "loss_rpn_cls": 0.0188, "loss_rpn_bbox": 0.04267, "loss_cls": 0.15773, "acc": 94.04395, "loss_bbox": 0.20785, "loss_mask": 0.21378, "loss": 0.64082, "time": 0.40756} -{"mode": "train", "epoch": 12, "iter": 2250, "lr": 0.0, "memory": 8424, "data_time": 0.0543, "loss_rpn_cls": 0.01926, "loss_rpn_bbox": 0.04266, "loss_cls": 0.16618, "acc": 93.82739, "loss_bbox": 0.21364, "loss_mask": 0.21759, "loss": 0.65934, "time": 0.4036} -{"mode": "train", "epoch": 12, "iter": 2300, "lr": 0.0, "memory": 8424, "data_time": 0.04604, "loss_rpn_cls": 0.01914, "loss_rpn_bbox": 0.04167, "loss_cls": 0.1596, "acc": 94.07422, "loss_bbox": 0.20247, "loss_mask": 0.21166, "loss": 0.63453, "time": 0.45667} -{"mode": "train", "epoch": 12, "iter": 2350, "lr": 0.0, "memory": 8424, "data_time": 0.04945, "loss_rpn_cls": 0.01801, "loss_rpn_bbox": 0.04174, "loss_cls": 0.15537, "acc": 94.18506, "loss_bbox": 0.20257, "loss_mask": 0.21272, "loss": 0.63041, "time": 0.40233} -{"mode": "train", "epoch": 12, "iter": 2400, "lr": 0.0, "memory": 8424, "data_time": 0.04695, "loss_rpn_cls": 0.0183, "loss_rpn_bbox": 0.03806, "loss_cls": 0.1624, "acc": 93.92334, "loss_bbox": 0.20739, "loss_mask": 0.21659, "loss": 0.64274, "time": 0.39727} -{"mode": "train", "epoch": 12, "iter": 2450, "lr": 0.0, "memory": 8424, "data_time": 0.04027, "loss_rpn_cls": 0.01912, "loss_rpn_bbox": 0.04518, "loss_cls": 0.17738, "acc": 93.38135, "loss_bbox": 0.22708, "loss_mask": 0.22404, "loss": 0.69278, "time": 0.4182} -{"mode": "train", "epoch": 12, "iter": 2500, "lr": 0.0, "memory": 8424, "data_time": 0.04596, "loss_rpn_cls": 0.02012, "loss_rpn_bbox": 0.04386, "loss_cls": 0.16446, "acc": 93.81079, "loss_bbox": 0.20818, "loss_mask": 0.22269, "loss": 0.65931, "time": 0.40338} -{"mode": "train", "epoch": 12, "iter": 2550, "lr": 0.0, "memory": 8424, "data_time": 0.05411, "loss_rpn_cls": 0.01849, "loss_rpn_bbox": 0.04246, "loss_cls": 0.16618, "acc": 93.78784, "loss_bbox": 0.21217, "loss_mask": 0.22218, "loss": 0.66149, "time": 0.39601} -{"mode": "train", "epoch": 12, "iter": 2600, "lr": 0.0, "memory": 8424, "data_time": 0.0429, "loss_rpn_cls": 0.02013, "loss_rpn_bbox": 0.04323, "loss_cls": 0.16291, "acc": 93.94873, "loss_bbox": 0.20851, "loss_mask": 0.21602, "loss": 0.6508, "time": 0.40165} -{"mode": "train", "epoch": 12, "iter": 2650, "lr": 0.0, "memory": 8424, "data_time": 0.04675, "loss_rpn_cls": 0.02026, "loss_rpn_bbox": 0.04386, "loss_cls": 0.17245, "acc": 93.53223, "loss_bbox": 0.22505, "loss_mask": 0.21913, "loss": 0.68075, "time": 0.40152} -{"mode": "train", "epoch": 12, "iter": 2700, "lr": 0.0, "memory": 8424, "data_time": 0.04496, "loss_rpn_cls": 0.01763, "loss_rpn_bbox": 0.04125, "loss_cls": 0.16229, "acc": 93.95874, "loss_bbox": 0.20933, "loss_mask": 0.21187, "loss": 0.64237, "time": 0.40732} -{"mode": "train", "epoch": 12, "iter": 2750, "lr": 0.0, "memory": 8424, "data_time": 0.06525, "loss_rpn_cls": 0.0183, "loss_rpn_bbox": 0.04181, "loss_cls": 0.16125, "acc": 93.99341, "loss_bbox": 0.20769, 
"loss_mask": 0.21212, "loss": 0.64117, "time": 0.41915} -{"mode": "train", "epoch": 12, "iter": 2800, "lr": 0.0, "memory": 8424, "data_time": 0.05792, "loss_rpn_cls": 0.01928, "loss_rpn_bbox": 0.04375, "loss_cls": 0.16637, "acc": 93.87646, "loss_bbox": 0.21142, "loss_mask": 0.2209, "loss": 0.66171, "time": 0.40912} -{"mode": "train", "epoch": 12, "iter": 2850, "lr": 0.0, "memory": 8424, "data_time": 0.04982, "loss_rpn_cls": 0.02004, "loss_rpn_bbox": 0.04511, "loss_cls": 0.16975, "acc": 93.69507, "loss_bbox": 0.21632, "loss_mask": 0.22152, "loss": 0.67274, "time": 0.414} -{"mode": "train", "epoch": 12, "iter": 2900, "lr": 0.0, "memory": 8424, "data_time": 0.03952, "loss_rpn_cls": 0.01912, "loss_rpn_bbox": 0.04408, "loss_cls": 0.16267, "acc": 93.85986, "loss_bbox": 0.21074, "loss_mask": 0.21846, "loss": 0.65507, "time": 0.41969} -{"mode": "train", "epoch": 12, "iter": 2950, "lr": 0.0, "memory": 8424, "data_time": 0.04859, "loss_rpn_cls": 0.01844, "loss_rpn_bbox": 0.04116, "loss_cls": 0.16138, "acc": 94.04053, "loss_bbox": 0.20718, "loss_mask": 0.21698, "loss": 0.64515, "time": 0.41405} -{"mode": "train", "epoch": 12, "iter": 3000, "lr": 0.0, "memory": 8424, "data_time": 0.05114, "loss_rpn_cls": 0.01952, "loss_rpn_bbox": 0.04163, "loss_cls": 0.15757, "acc": 94.14746, "loss_bbox": 0.20209, "loss_mask": 0.21397, "loss": 0.63479, "time": 0.40947} -{"mode": "train", "epoch": 12, "iter": 3050, "lr": 0.0, "memory": 8424, "data_time": 0.05004, "loss_rpn_cls": 0.01946, "loss_rpn_bbox": 0.04486, "loss_cls": 0.17093, "acc": 93.66772, "loss_bbox": 0.21745, "loss_mask": 0.21964, "loss": 0.67235, "time": 0.40598} -{"mode": "train", "epoch": 12, "iter": 3100, "lr": 0.0, "memory": 8424, "data_time": 0.04741, "loss_rpn_cls": 0.01936, "loss_rpn_bbox": 0.04108, "loss_cls": 0.1649, "acc": 93.88916, "loss_bbox": 0.21417, "loss_mask": 0.22017, "loss": 0.65968, "time": 0.40462} -{"mode": "train", "epoch": 12, "iter": 3150, "lr": 0.0, "memory": 8424, "data_time": 0.0483, "loss_rpn_cls": 0.0206, "loss_rpn_bbox": 0.04583, "loss_cls": 0.17116, "acc": 93.61841, "loss_bbox": 0.21865, "loss_mask": 0.21991, "loss": 0.67616, "time": 0.40306} -{"mode": "train", "epoch": 12, "iter": 3200, "lr": 0.0, "memory": 8424, "data_time": 0.04177, "loss_rpn_cls": 0.01905, "loss_rpn_bbox": 0.04278, "loss_cls": 0.16264, "acc": 93.94092, "loss_bbox": 0.21195, "loss_mask": 0.22286, "loss": 0.65928, "time": 0.40142} -{"mode": "train", "epoch": 12, "iter": 3250, "lr": 0.0, "memory": 8424, "data_time": 0.06432, "loss_rpn_cls": 0.02003, "loss_rpn_bbox": 0.04343, "loss_cls": 0.1695, "acc": 93.7063, "loss_bbox": 0.21353, "loss_mask": 0.22091, "loss": 0.66739, "time": 0.42453} -{"mode": "train", "epoch": 12, "iter": 3300, "lr": 0.0, "memory": 8424, "data_time": 0.04715, "loss_rpn_cls": 0.02051, "loss_rpn_bbox": 0.04439, "loss_cls": 0.17275, "acc": 93.48389, "loss_bbox": 0.22134, "loss_mask": 0.22494, "loss": 0.68394, "time": 0.40625} -{"mode": "train", "epoch": 12, "iter": 3350, "lr": 0.0, "memory": 8424, "data_time": 0.04103, "loss_rpn_cls": 0.01805, "loss_rpn_bbox": 0.04187, "loss_cls": 0.15833, "acc": 94.06055, "loss_bbox": 0.20448, "loss_mask": 0.2167, "loss": 0.63944, "time": 0.41139} -{"mode": "train", "epoch": 12, "iter": 3400, "lr": 0.0, "memory": 8424, "data_time": 0.0492, "loss_rpn_cls": 0.0194, "loss_rpn_bbox": 0.04259, "loss_cls": 0.16616, "acc": 93.79492, "loss_bbox": 0.21726, "loss_mask": 0.21992, "loss": 0.66532, "time": 0.41318} -{"mode": "train", "epoch": 12, "iter": 3450, "lr": 0.0, "memory": 8424, "data_time": 0.04393, 
"loss_rpn_cls": 0.01736, "loss_rpn_bbox": 0.04081, "loss_cls": 0.15998, "acc": 94.01318, "loss_bbox": 0.20453, "loss_mask": 0.21239, "loss": 0.63507, "time": 0.40473} -{"mode": "train", "epoch": 12, "iter": 3500, "lr": 0.0, "memory": 8424, "data_time": 0.04478, "loss_rpn_cls": 0.01866, "loss_rpn_bbox": 0.04257, "loss_cls": 0.16539, "acc": 93.86304, "loss_bbox": 0.21034, "loss_mask": 0.21275, "loss": 0.64971, "time": 0.40856} -{"mode": "train", "epoch": 12, "iter": 3550, "lr": 0.0, "memory": 8424, "data_time": 0.04677, "loss_rpn_cls": 0.01893, "loss_rpn_bbox": 0.04281, "loss_cls": 0.1626, "acc": 93.89795, "loss_bbox": 0.21299, "loss_mask": 0.21646, "loss": 0.65379, "time": 0.41765} -{"mode": "train", "epoch": 12, "iter": 3600, "lr": 0.0, "memory": 8424, "data_time": 0.04482, "loss_rpn_cls": 0.01897, "loss_rpn_bbox": 0.04294, "loss_cls": 0.15806, "acc": 94.01367, "loss_bbox": 0.20543, "loss_mask": 0.21464, "loss": 0.64003, "time": 0.4706} -{"mode": "train", "epoch": 12, "iter": 3650, "lr": 0.0, "memory": 8424, "data_time": 0.03918, "loss_rpn_cls": 0.01746, "loss_rpn_bbox": 0.04003, "loss_cls": 0.15515, "acc": 94.17944, "loss_bbox": 0.20112, "loss_mask": 0.21764, "loss": 0.6314, "time": 0.3917} -{"mode": "train", "epoch": 12, "iter": 3700, "lr": 0.0, "memory": 8424, "data_time": 0.05136, "loss_rpn_cls": 0.01837, "loss_rpn_bbox": 0.04237, "loss_cls": 0.16606, "acc": 93.76929, "loss_bbox": 0.21664, "loss_mask": 0.21906, "loss": 0.66249, "time": 0.40694} -{"mode": "train", "epoch": 12, "iter": 3750, "lr": 0.0, "memory": 8424, "data_time": 0.04237, "loss_rpn_cls": 0.01901, "loss_rpn_bbox": 0.04181, "loss_cls": 0.15959, "acc": 94.08496, "loss_bbox": 0.20124, "loss_mask": 0.21202, "loss": 0.63367, "time": 0.40134} -{"mode": "train", "epoch": 12, "iter": 3800, "lr": 0.0, "memory": 8424, "data_time": 0.05012, "loss_rpn_cls": 0.01958, "loss_rpn_bbox": 0.04246, "loss_cls": 0.16625, "acc": 93.77197, "loss_bbox": 0.21464, "loss_mask": 0.21943, "loss": 0.66236, "time": 0.41311} -{"mode": "train", "epoch": 12, "iter": 3850, "lr": 0.0, "memory": 8424, "data_time": 0.05774, "loss_rpn_cls": 0.01883, "loss_rpn_bbox": 0.04247, "loss_cls": 0.16044, "acc": 94.02637, "loss_bbox": 0.20818, "loss_mask": 0.21482, "loss": 0.64473, "time": 0.41364} -{"mode": "train", "epoch": 12, "iter": 3900, "lr": 0.0, "memory": 8424, "data_time": 0.04633, "loss_rpn_cls": 0.02177, "loss_rpn_bbox": 0.04358, "loss_cls": 0.1661, "acc": 93.85669, "loss_bbox": 0.21415, "loss_mask": 0.21733, "loss": 0.66293, "time": 0.40506} -{"mode": "train", "epoch": 12, "iter": 3950, "lr": 0.0, "memory": 8424, "data_time": 0.04759, "loss_rpn_cls": 0.01693, "loss_rpn_bbox": 0.03789, "loss_cls": 0.1544, "acc": 94.2439, "loss_bbox": 0.20137, "loss_mask": 0.21096, "loss": 0.62155, "time": 0.39514} -{"mode": "train", "epoch": 12, "iter": 4000, "lr": 0.0, "memory": 8424, "data_time": 0.05142, "loss_rpn_cls": 0.02064, "loss_rpn_bbox": 0.04263, "loss_cls": 0.1647, "acc": 93.88184, "loss_bbox": 0.21414, "loss_mask": 0.21793, "loss": 0.66005, "time": 0.41338} -{"mode": "train", "epoch": 12, "iter": 4050, "lr": 0.0, "memory": 8424, "data_time": 0.05262, "loss_rpn_cls": 0.01927, "loss_rpn_bbox": 0.04382, "loss_cls": 0.17216, "acc": 93.55664, "loss_bbox": 0.21927, "loss_mask": 0.22228, "loss": 0.6768, "time": 0.41589} -{"mode": "train", "epoch": 12, "iter": 4100, "lr": 0.0, "memory": 8424, "data_time": 0.04783, "loss_rpn_cls": 0.01977, "loss_rpn_bbox": 0.04267, "loss_cls": 0.16215, "acc": 93.95947, "loss_bbox": 0.21003, "loss_mask": 0.21436, "loss": 0.64897, "time": 
0.41106} -{"mode": "train", "epoch": 12, "iter": 4150, "lr": 0.0, "memory": 8424, "data_time": 0.04461, "loss_rpn_cls": 0.01873, "loss_rpn_bbox": 0.0404, "loss_cls": 0.15645, "acc": 94.21606, "loss_bbox": 0.20597, "loss_mask": 0.2169, "loss": 0.63844, "time": 0.39958} -{"mode": "train", "epoch": 12, "iter": 4200, "lr": 0.0, "memory": 8424, "data_time": 0.04446, "loss_rpn_cls": 0.01873, "loss_rpn_bbox": 0.0431, "loss_cls": 0.16848, "acc": 93.81616, "loss_bbox": 0.2132, "loss_mask": 0.21204, "loss": 0.65554, "time": 0.40131} -{"mode": "train", "epoch": 12, "iter": 4250, "lr": 0.0, "memory": 8424, "data_time": 0.05483, "loss_rpn_cls": 0.02021, "loss_rpn_bbox": 0.04228, "loss_cls": 0.16761, "acc": 93.71606, "loss_bbox": 0.2155, "loss_mask": 0.21694, "loss": 0.66255, "time": 0.41723} -{"mode": "train", "epoch": 12, "iter": 4300, "lr": 0.0, "memory": 8424, "data_time": 0.05307, "loss_rpn_cls": 0.01855, "loss_rpn_bbox": 0.0406, "loss_cls": 0.16549, "acc": 93.83936, "loss_bbox": 0.2123, "loss_mask": 0.2138, "loss": 0.65073, "time": 0.40413} -{"mode": "train", "epoch": 12, "iter": 4350, "lr": 0.0, "memory": 8424, "data_time": 0.04827, "loss_rpn_cls": 0.02083, "loss_rpn_bbox": 0.04169, "loss_cls": 0.16558, "acc": 93.89331, "loss_bbox": 0.21148, "loss_mask": 0.21591, "loss": 0.65549, "time": 0.41028} -{"mode": "train", "epoch": 12, "iter": 4400, "lr": 0.0, "memory": 8424, "data_time": 0.04044, "loss_rpn_cls": 0.01961, "loss_rpn_bbox": 0.04176, "loss_cls": 0.16406, "acc": 93.82568, "loss_bbox": 0.21056, "loss_mask": 0.21831, "loss": 0.65431, "time": 0.42685} -{"mode": "train", "epoch": 12, "iter": 4450, "lr": 0.0, "memory": 8424, "data_time": 0.04742, "loss_rpn_cls": 0.01805, "loss_rpn_bbox": 0.04035, "loss_cls": 0.169, "acc": 93.71826, "loss_bbox": 0.21794, "loss_mask": 0.2188, "loss": 0.66413, "time": 0.3953} -{"mode": "train", "epoch": 12, "iter": 4500, "lr": 0.0, "memory": 8424, "data_time": 0.0498, "loss_rpn_cls": 0.02055, "loss_rpn_bbox": 0.04128, "loss_cls": 0.16878, "acc": 93.69116, "loss_bbox": 0.22007, "loss_mask": 0.21995, "loss": 0.67062, "time": 0.4081} -{"mode": "train", "epoch": 12, "iter": 4550, "lr": 0.0, "memory": 8424, "data_time": 0.04134, "loss_rpn_cls": 0.01968, "loss_rpn_bbox": 0.03949, "loss_cls": 0.16236, "acc": 93.98486, "loss_bbox": 0.20655, "loss_mask": 0.21613, "loss": 0.6442, "time": 0.40346} -{"mode": "train", "epoch": 12, "iter": 4600, "lr": 0.0, "memory": 8424, "data_time": 0.05265, "loss_rpn_cls": 0.01951, "loss_rpn_bbox": 0.0421, "loss_cls": 0.16959, "acc": 93.65771, "loss_bbox": 0.21435, "loss_mask": 0.2167, "loss": 0.66225, "time": 0.40822} -{"mode": "train", "epoch": 12, "iter": 4650, "lr": 0.0, "memory": 8424, "data_time": 0.05427, "loss_rpn_cls": 0.01864, "loss_rpn_bbox": 0.04044, "loss_cls": 0.1585, "acc": 94.0896, "loss_bbox": 0.2045, "loss_mask": 0.20901, "loss": 0.6311, "time": 0.40012} -{"mode": "train", "epoch": 12, "iter": 4700, "lr": 0.0, "memory": 8424, "data_time": 0.04612, "loss_rpn_cls": 0.01882, "loss_rpn_bbox": 0.03939, "loss_cls": 0.16321, "acc": 93.97339, "loss_bbox": 0.20863, "loss_mask": 0.21253, "loss": 0.64258, "time": 0.39555} -{"mode": "train", "epoch": 12, "iter": 4750, "lr": 0.0, "memory": 8424, "data_time": 0.04635, "loss_rpn_cls": 0.02023, "loss_rpn_bbox": 0.04278, "loss_cls": 0.16578, "acc": 93.83643, "loss_bbox": 0.21014, "loss_mask": 0.21735, "loss": 0.65627, "time": 0.41034} -{"mode": "train", "epoch": 12, "iter": 4800, "lr": 0.0, "memory": 8424, "data_time": 0.04098, "loss_rpn_cls": 0.02019, "loss_rpn_bbox": 0.04118, "loss_cls": 
0.16714, "acc": 93.78931, "loss_bbox": 0.2132, "loss_mask": 0.21648, "loss": 0.65819, "time": 0.41036} -{"mode": "train", "epoch": 12, "iter": 4850, "lr": 0.0, "memory": 8424, "data_time": 0.05394, "loss_rpn_cls": 0.01885, "loss_rpn_bbox": 0.04422, "loss_cls": 0.16806, "acc": 93.65039, "loss_bbox": 0.21836, "loss_mask": 0.2193, "loss": 0.6688, "time": 0.4244} -{"mode": "train", "epoch": 12, "iter": 4900, "lr": 0.0, "memory": 8424, "data_time": 0.04348, "loss_rpn_cls": 0.01916, "loss_rpn_bbox": 0.04232, "loss_cls": 0.16959, "acc": 93.78149, "loss_bbox": 0.21547, "loss_mask": 0.21564, "loss": 0.66219, "time": 0.4162} -{"mode": "train", "epoch": 12, "iter": 4950, "lr": 0.0, "memory": 8424, "data_time": 0.0564, "loss_rpn_cls": 0.01958, "loss_rpn_bbox": 0.0418, "loss_cls": 0.16359, "acc": 93.90015, "loss_bbox": 0.20801, "loss_mask": 0.21432, "loss": 0.6473, "time": 0.41573} -{"mode": "train", "epoch": 12, "iter": 5000, "lr": 0.0, "memory": 8424, "data_time": 0.04755, "loss_rpn_cls": 0.01962, "loss_rpn_bbox": 0.04151, "loss_cls": 0.16683, "acc": 93.80664, "loss_bbox": 0.21353, "loss_mask": 0.21557, "loss": 0.65705, "time": 0.40942} -{"mode": "train", "epoch": 12, "iter": 5050, "lr": 0.0, "memory": 8424, "data_time": 0.06112, "loss_rpn_cls": 0.02044, "loss_rpn_bbox": 0.04231, "loss_cls": 0.17167, "acc": 93.63989, "loss_bbox": 0.21921, "loss_mask": 0.22047, "loss": 0.67409, "time": 0.40751} -{"mode": "train", "epoch": 12, "iter": 5100, "lr": 0.0, "memory": 8424, "data_time": 0.05054, "loss_rpn_cls": 0.01728, "loss_rpn_bbox": 0.03982, "loss_cls": 0.16004, "acc": 93.99438, "loss_bbox": 0.20941, "loss_mask": 0.21221, "loss": 0.63876, "time": 0.40888} -{"mode": "train", "epoch": 12, "iter": 5150, "lr": 0.0, "memory": 8424, "data_time": 0.04606, "loss_rpn_cls": 0.0196, "loss_rpn_bbox": 0.04085, "loss_cls": 0.16078, "acc": 94.0166, "loss_bbox": 0.21011, "loss_mask": 0.21652, "loss": 0.64786, "time": 0.39922} -{"mode": "train", "epoch": 12, "iter": 5200, "lr": 0.0, "memory": 8424, "data_time": 0.0437, "loss_rpn_cls": 0.019, "loss_rpn_bbox": 0.04264, "loss_cls": 0.16194, "acc": 93.87891, "loss_bbox": 0.20659, "loss_mask": 0.21464, "loss": 0.6448, "time": 0.41767} -{"mode": "train", "epoch": 12, "iter": 5250, "lr": 0.0, "memory": 8424, "data_time": 0.03481, "loss_rpn_cls": 0.01893, "loss_rpn_bbox": 0.04198, "loss_cls": 0.1716, "acc": 93.65479, "loss_bbox": 0.21547, "loss_mask": 0.218, "loss": 0.66597, "time": 0.41135} -{"mode": "train", "epoch": 12, "iter": 5300, "lr": 0.0, "memory": 8424, "data_time": 0.04733, "loss_rpn_cls": 0.01846, "loss_rpn_bbox": 0.04376, "loss_cls": 0.16514, "acc": 93.802, "loss_bbox": 0.21759, "loss_mask": 0.216, "loss": 0.66095, "time": 0.45932} -{"mode": "train", "epoch": 12, "iter": 5350, "lr": 0.0, "memory": 8424, "data_time": 0.04204, "loss_rpn_cls": 0.02031, "loss_rpn_bbox": 0.04256, "loss_cls": 0.17028, "acc": 93.62231, "loss_bbox": 0.21859, "loss_mask": 0.22042, "loss": 0.67216, "time": 0.41729} -{"mode": "train", "epoch": 12, "iter": 5400, "lr": 0.0, "memory": 8424, "data_time": 0.04731, "loss_rpn_cls": 0.02024, "loss_rpn_bbox": 0.04414, "loss_cls": 0.1696, "acc": 93.65088, "loss_bbox": 0.22234, "loss_mask": 0.22173, "loss": 0.67805, "time": 0.41006} -{"mode": "train", "epoch": 12, "iter": 5450, "lr": 0.0, "memory": 8424, "data_time": 0.03563, "loss_rpn_cls": 0.01811, "loss_rpn_bbox": 0.04148, "loss_cls": 0.16389, "acc": 93.79272, "loss_bbox": 0.21458, "loss_mask": 0.21657, "loss": 0.65464, "time": 0.39714} -{"mode": "train", "epoch": 12, "iter": 5500, "lr": 0.0, "memory": 
8424, "data_time": 0.04134, "loss_rpn_cls": 0.01914, "loss_rpn_bbox": 0.04013, "loss_cls": 0.16657, "acc": 93.7478, "loss_bbox": 0.21203, "loss_mask": 0.21815, "loss": 0.65602, "time": 0.40246} -{"mode": "train", "epoch": 12, "iter": 5550, "lr": 0.0, "memory": 8424, "data_time": 0.04392, "loss_rpn_cls": 0.0205, "loss_rpn_bbox": 0.04476, "loss_cls": 0.17109, "acc": 93.65552, "loss_bbox": 0.21446, "loss_mask": 0.21764, "loss": 0.66846, "time": 0.41484} -{"mode": "train", "epoch": 12, "iter": 5600, "lr": 0.0, "memory": 8424, "data_time": 0.04634, "loss_rpn_cls": 0.01931, "loss_rpn_bbox": 0.04237, "loss_cls": 0.15975, "acc": 93.99146, "loss_bbox": 0.2069, "loss_mask": 0.21416, "loss": 0.64249, "time": 0.41804} -{"mode": "train", "epoch": 12, "iter": 5650, "lr": 0.0, "memory": 8424, "data_time": 0.04636, "loss_rpn_cls": 0.01892, "loss_rpn_bbox": 0.04215, "loss_cls": 0.16227, "acc": 94.02246, "loss_bbox": 0.21012, "loss_mask": 0.21419, "loss": 0.64766, "time": 0.41721} -{"mode": "train", "epoch": 12, "iter": 5700, "lr": 0.0, "memory": 8424, "data_time": 0.0394, "loss_rpn_cls": 0.01768, "loss_rpn_bbox": 0.03948, "loss_cls": 0.16138, "acc": 93.95459, "loss_bbox": 0.20737, "loss_mask": 0.21857, "loss": 0.64448, "time": 0.40085} -{"mode": "train", "epoch": 12, "iter": 5750, "lr": 0.0, "memory": 8424, "data_time": 0.04479, "loss_rpn_cls": 0.01835, "loss_rpn_bbox": 0.04055, "loss_cls": 0.16026, "acc": 94.0, "loss_bbox": 0.20614, "loss_mask": 0.21905, "loss": 0.64436, "time": 0.46968} -{"mode": "train", "epoch": 12, "iter": 5800, "lr": 0.0, "memory": 8424, "data_time": 0.03816, "loss_rpn_cls": 0.01807, "loss_rpn_bbox": 0.04122, "loss_cls": 0.16654, "acc": 93.71948, "loss_bbox": 0.21537, "loss_mask": 0.21698, "loss": 0.65819, "time": 0.40497} -{"mode": "train", "epoch": 12, "iter": 5850, "lr": 0.0, "memory": 8424, "data_time": 0.0454, "loss_rpn_cls": 0.01885, "loss_rpn_bbox": 0.04358, "loss_cls": 0.16484, "acc": 93.90723, "loss_bbox": 0.208, "loss_mask": 0.21457, "loss": 0.64982, "time": 0.39912} -{"mode": "train", "epoch": 12, "iter": 5900, "lr": 0.0, "memory": 8424, "data_time": 0.04712, "loss_rpn_cls": 0.01849, "loss_rpn_bbox": 0.04202, "loss_cls": 0.16551, "acc": 93.91089, "loss_bbox": 0.21235, "loss_mask": 0.21506, "loss": 0.65343, "time": 0.39692} -{"mode": "train", "epoch": 12, "iter": 5950, "lr": 0.0, "memory": 8424, "data_time": 0.04555, "loss_rpn_cls": 0.01821, "loss_rpn_bbox": 0.03941, "loss_cls": 0.16012, "acc": 93.97852, "loss_bbox": 0.20676, "loss_mask": 0.21444, "loss": 0.63894, "time": 0.40204} -{"mode": "train", "epoch": 12, "iter": 6000, "lr": 0.0, "memory": 8424, "data_time": 0.0339, "loss_rpn_cls": 0.01931, "loss_rpn_bbox": 0.0398, "loss_cls": 0.15963, "acc": 94.14478, "loss_bbox": 0.20361, "loss_mask": 0.21102, "loss": 0.63337, "time": 0.41321} -{"mode": "train", "epoch": 12, "iter": 6050, "lr": 0.0, "memory": 8424, "data_time": 0.04698, "loss_rpn_cls": 0.01755, "loss_rpn_bbox": 0.04106, "loss_cls": 0.16165, "acc": 93.91455, "loss_bbox": 0.20775, "loss_mask": 0.21475, "loss": 0.64275, "time": 0.41031} -{"mode": "train", "epoch": 12, "iter": 6100, "lr": 0.0, "memory": 8424, "data_time": 0.04992, "loss_rpn_cls": 0.01942, "loss_rpn_bbox": 0.04312, "loss_cls": 0.16465, "acc": 93.80518, "loss_bbox": 0.21679, "loss_mask": 0.21721, "loss": 0.6612, "time": 0.491} -{"mode": "train", "epoch": 12, "iter": 6150, "lr": 0.0, "memory": 8424, "data_time": 0.04088, "loss_rpn_cls": 0.01728, "loss_rpn_bbox": 0.04069, "loss_cls": 0.15581, "acc": 94.1543, "loss_bbox": 0.20203, "loss_mask": 0.21097, 
"loss": 0.62678, "time": 0.40801} -{"mode": "train", "epoch": 12, "iter": 6200, "lr": 0.0, "memory": 8424, "data_time": 0.04695, "loss_rpn_cls": 0.0181, "loss_rpn_bbox": 0.04256, "loss_cls": 0.16072, "acc": 93.95068, "loss_bbox": 0.20387, "loss_mask": 0.21358, "loss": 0.63883, "time": 0.41571} -{"mode": "train", "epoch": 12, "iter": 6250, "lr": 0.0, "memory": 8424, "data_time": 0.04872, "loss_rpn_cls": 0.01944, "loss_rpn_bbox": 0.04204, "loss_cls": 0.16677, "acc": 93.76831, "loss_bbox": 0.21151, "loss_mask": 0.21604, "loss": 0.65581, "time": 0.41604} -{"mode": "train", "epoch": 12, "iter": 6300, "lr": 0.0, "memory": 8424, "data_time": 0.05101, "loss_rpn_cls": 0.01836, "loss_rpn_bbox": 0.04396, "loss_cls": 0.16746, "acc": 93.71533, "loss_bbox": 0.2168, "loss_mask": 0.21833, "loss": 0.66492, "time": 0.41058} -{"mode": "train", "epoch": 12, "iter": 6350, "lr": 0.0, "memory": 8424, "data_time": 0.05128, "loss_rpn_cls": 0.01832, "loss_rpn_bbox": 0.04201, "loss_cls": 0.16229, "acc": 93.92603, "loss_bbox": 0.20932, "loss_mask": 0.21534, "loss": 0.64728, "time": 0.4182} -{"mode": "train", "epoch": 12, "iter": 6400, "lr": 0.0, "memory": 8424, "data_time": 0.06436, "loss_rpn_cls": 0.01816, "loss_rpn_bbox": 0.04496, "loss_cls": 0.16734, "acc": 93.7561, "loss_bbox": 0.21587, "loss_mask": 0.2132, "loss": 0.65953, "time": 0.41967} -{"mode": "train", "epoch": 12, "iter": 6450, "lr": 0.0, "memory": 8424, "data_time": 0.05428, "loss_rpn_cls": 0.02008, "loss_rpn_bbox": 0.04196, "loss_cls": 0.16784, "acc": 93.71045, "loss_bbox": 0.21775, "loss_mask": 0.21788, "loss": 0.66551, "time": 0.40323} -{"mode": "train", "epoch": 12, "iter": 6500, "lr": 0.0, "memory": 8424, "data_time": 0.04536, "loss_rpn_cls": 0.01833, "loss_rpn_bbox": 0.043, "loss_cls": 0.16404, "acc": 93.86768, "loss_bbox": 0.2094, "loss_mask": 0.21536, "loss": 0.65012, "time": 0.41946} -{"mode": "train", "epoch": 12, "iter": 6550, "lr": 0.0, "memory": 8424, "data_time": 0.05274, "loss_rpn_cls": 0.01794, "loss_rpn_bbox": 0.0405, "loss_cls": 0.16433, "acc": 93.83936, "loss_bbox": 0.20783, "loss_mask": 0.21066, "loss": 0.64127, "time": 0.41103} -{"mode": "train", "epoch": 12, "iter": 6600, "lr": 0.0, "memory": 8424, "data_time": 0.05206, "loss_rpn_cls": 0.01849, "loss_rpn_bbox": 0.0419, "loss_cls": 0.16643, "acc": 93.80542, "loss_bbox": 0.21669, "loss_mask": 0.22146, "loss": 0.66497, "time": 0.40477} -{"mode": "train", "epoch": 12, "iter": 6650, "lr": 0.0, "memory": 8424, "data_time": 0.04252, "loss_rpn_cls": 0.01858, "loss_rpn_bbox": 0.04156, "loss_cls": 0.16667, "acc": 93.81665, "loss_bbox": 0.21447, "loss_mask": 0.21825, "loss": 0.65955, "time": 0.39383} -{"mode": "train", "epoch": 12, "iter": 6700, "lr": 0.0, "memory": 8424, "data_time": 0.04409, "loss_rpn_cls": 0.02007, "loss_rpn_bbox": 0.04313, "loss_cls": 0.16996, "acc": 93.73511, "loss_bbox": 0.21635, "loss_mask": 0.22091, "loss": 0.67042, "time": 0.39887} -{"mode": "train", "epoch": 12, "iter": 6750, "lr": 0.0, "memory": 8424, "data_time": 0.05394, "loss_rpn_cls": 0.01922, "loss_rpn_bbox": 0.04279, "loss_cls": 0.15959, "acc": 94.03149, "loss_bbox": 0.20473, "loss_mask": 0.21819, "loss": 0.64453, "time": 0.4094} -{"mode": "train", "epoch": 12, "iter": 6800, "lr": 0.0, "memory": 8424, "data_time": 0.05122, "loss_rpn_cls": 0.0178, "loss_rpn_bbox": 0.03872, "loss_cls": 0.16093, "acc": 94.03442, "loss_bbox": 0.20509, "loss_mask": 0.21127, "loss": 0.63382, "time": 0.41902} -{"mode": "train", "epoch": 12, "iter": 6850, "lr": 0.0, "memory": 8424, "data_time": 0.05211, "loss_rpn_cls": 0.01809, 
"loss_rpn_bbox": 0.04162, "loss_cls": 0.16184, "acc": 93.95703, "loss_bbox": 0.20663, "loss_mask": 0.21209, "loss": 0.64026, "time": 0.40754} -{"mode": "train", "epoch": 12, "iter": 6900, "lr": 0.0, "memory": 8424, "data_time": 0.03964, "loss_rpn_cls": 0.01814, "loss_rpn_bbox": 0.04053, "loss_cls": 0.16623, "acc": 93.78467, "loss_bbox": 0.21073, "loss_mask": 0.21409, "loss": 0.64971, "time": 0.39998} -{"mode": "train", "epoch": 12, "iter": 6950, "lr": 0.0, "memory": 8424, "data_time": 0.04413, "loss_rpn_cls": 0.01988, "loss_rpn_bbox": 0.04286, "loss_cls": 0.16657, "acc": 93.73022, "loss_bbox": 0.2145, "loss_mask": 0.2151, "loss": 0.65891, "time": 0.41274} -{"mode": "train", "epoch": 12, "iter": 7000, "lr": 0.0, "memory": 8424, "data_time": 0.04871, "loss_rpn_cls": 0.01881, "loss_rpn_bbox": 0.04276, "loss_cls": 0.16037, "acc": 93.99292, "loss_bbox": 0.20678, "loss_mask": 0.22, "loss": 0.64872, "time": 0.40684} -{"mode": "train", "epoch": 12, "iter": 7050, "lr": 0.0, "memory": 8424, "data_time": 0.04371, "loss_rpn_cls": 0.0186, "loss_rpn_bbox": 0.04057, "loss_cls": 0.15792, "acc": 94.13794, "loss_bbox": 0.20039, "loss_mask": 0.2088, "loss": 0.62628, "time": 0.40826} -{"mode": "train", "epoch": 12, "iter": 7100, "lr": 0.0, "memory": 8424, "data_time": 0.05516, "loss_rpn_cls": 0.01804, "loss_rpn_bbox": 0.04232, "loss_cls": 0.16392, "acc": 93.93677, "loss_bbox": 0.20797, "loss_mask": 0.21711, "loss": 0.64935, "time": 0.40131} -{"mode": "train", "epoch": 12, "iter": 7150, "lr": 0.0, "memory": 8424, "data_time": 0.05134, "loss_rpn_cls": 0.01849, "loss_rpn_bbox": 0.04143, "loss_cls": 0.16247, "acc": 93.93066, "loss_bbox": 0.21037, "loss_mask": 0.2182, "loss": 0.65096, "time": 0.39192} -{"mode": "train", "epoch": 12, "iter": 7200, "lr": 0.0, "memory": 8424, "data_time": 0.03782, "loss_rpn_cls": 0.01809, "loss_rpn_bbox": 0.03723, "loss_cls": 0.15374, "acc": 94.323, "loss_bbox": 0.20083, "loss_mask": 0.20747, "loss": 0.61736, "time": 0.39124} -{"mode": "train", "epoch": 12, "iter": 7250, "lr": 0.0, "memory": 8424, "data_time": 0.04148, "loss_rpn_cls": 0.01936, "loss_rpn_bbox": 0.04313, "loss_cls": 0.16463, "acc": 93.84668, "loss_bbox": 0.21942, "loss_mask": 0.22223, "loss": 0.66876, "time": 0.41775} -{"mode": "train", "epoch": 12, "iter": 7300, "lr": 0.0, "memory": 8424, "data_time": 0.0426, "loss_rpn_cls": 0.0183, "loss_rpn_bbox": 0.04353, "loss_cls": 0.17257, "acc": 93.57227, "loss_bbox": 0.22043, "loss_mask": 0.21888, "loss": 0.67371, "time": 0.47587} -{"mode": "val", "epoch": 12, "iter": 625, "lr": 0.0, "bbox_mAP": 0.416, "bbox_mAP_50": 0.6325, "bbox_mAP_75": 0.4534, "bbox_mAP_s": 0.2413, "bbox_mAP_m": 0.4509, "bbox_mAP_l": 0.5483, "bbox_mAP_copypaste": "0.4160 0.6325 0.4534 0.2413 0.4509 0.5483", "segm_mAP": 0.3859, "segm_mAP_50": 0.6053, "segm_mAP_75": 0.4145, "segm_mAP_s": 0.1851, "segm_mAP_m": 0.4149, "segm_mAP_l": 0.5631, "segm_mAP_copypaste": "0.3859 0.6053 0.4145 0.1851 0.4149 0.5631"} diff --git a/cv/classification/repvit/pytorch/detection/logs/repvit_m2_3_coco.json b/cv/classification/repvit/pytorch/detection/logs/repvit_m2_3_coco.json deleted file mode 100644 index 8b1dea53..00000000 --- a/cv/classification/repvit/pytorch/detection/logs/repvit_m2_3_coco.json +++ /dev/null @@ -1,1765 +0,0 @@ -{"env_info": "sys.platform: linux\nPython: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]\nCUDA available: True\nGPU 0,1,2,3,4,5,6,7: GeForce RTX 3090\nCUDA_HOME: /home/wangao/cuda-11.2\nNVCC: Cuda compilation tools, release 11.2, V11.2.152\nGCC: gcc (GCC) 5.4.0\nPyTorch: 2.0.1+cu117\nPyTorch 
compiling details: PyTorch built with:\n - GCC 9.3\n - C++ Version: 201703\n - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\n - LAPACK is enabled (usually provided by MKL)\n - NNPACK is enabled\n - CPU capability usage: AVX2\n - CUDA Runtime 11.7\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\n - CuDNN 8.5\n - Magma 2.6.1\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n\nTorchVision: 0.15.2+cu117\nOpenCV: 4.8.0\nMMCV: 1.7.1\nMMCV Compiler: GCC 5.4\nMMCV CUDA Compiler: 11.2\nMMDetection: 2.28.2+ab02657", "config": "model = dict(\n type='MaskRCNN',\n pretrained='torchvision://resnet50',\n backbone=dict(\n type='repvit_m6',\n depth=50,\n num_stages=4,\n out_indices=[6, 14, 50, 54],\n frozen_stages=1,\n norm_cfg=dict(type='BN', requires_grad=True),\n norm_eval=True,\n style='pytorch',\n init_cfg=dict(\n type='Pretrained',\n checkpoint='pretrain/repvit_m6_distill_450e.pth')),\n neck=dict(\n type='FPN',\n in_channels=[80, 160, 320, 640],\n out_channels=256,\n num_outs=5),\n rpn_head=dict(\n type='RPNHead',\n in_channels=256,\n feat_channels=256,\n anchor_generator=dict(\n type='AnchorGenerator',\n scales=[8],\n ratios=[0.5, 1.0, 2.0],\n strides=[4, 8, 16, 32, 64]),\n bbox_coder=dict(\n type='DeltaXYWHBBoxCoder',\n target_means=[0.0, 0.0, 0.0, 0.0],\n target_stds=[1.0, 1.0, 1.0, 1.0]),\n loss_cls=dict(\n type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),\n loss_bbox=dict(type='L1Loss', loss_weight=1.0)),\n roi_head=dict(\n type='StandardRoIHead',\n bbox_roi_extractor=dict(\n type='SingleRoIExtractor',\n roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),\n out_channels=256,\n featmap_strides=[4, 8, 16, 32]),\n bbox_head=dict(\n type='Shared2FCBBoxHead',\n in_channels=256,\n fc_out_channels=1024,\n roi_feat_size=7,\n num_classes=80,\n bbox_coder=dict(\n 
type='DeltaXYWHBBoxCoder',\n target_means=[0.0, 0.0, 0.0, 0.0],\n target_stds=[0.1, 0.1, 0.2, 0.2]),\n reg_class_agnostic=False,\n loss_cls=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),\n loss_bbox=dict(type='L1Loss', loss_weight=1.0)),\n mask_roi_extractor=dict(\n type='SingleRoIExtractor',\n roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),\n out_channels=256,\n featmap_strides=[4, 8, 16, 32]),\n mask_head=dict(\n type='FCNMaskHead',\n num_convs=4,\n in_channels=256,\n conv_out_channels=256,\n num_classes=80,\n loss_mask=dict(\n type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),\n train_cfg=dict(\n rpn=dict(\n assigner=dict(\n type='MaxIoUAssigner',\n pos_iou_thr=0.7,\n neg_iou_thr=0.3,\n min_pos_iou=0.3,\n match_low_quality=True,\n ignore_iof_thr=-1),\n sampler=dict(\n type='RandomSampler',\n num=256,\n pos_fraction=0.5,\n neg_pos_ub=-1,\n add_gt_as_proposals=False),\n allowed_border=-1,\n pos_weight=-1,\n debug=False),\n rpn_proposal=dict(\n nms_pre=2000,\n max_per_img=1000,\n nms=dict(type='nms', iou_threshold=0.7),\n min_bbox_size=0),\n rcnn=dict(\n assigner=dict(\n type='MaxIoUAssigner',\n pos_iou_thr=0.5,\n neg_iou_thr=0.5,\n min_pos_iou=0.5,\n match_low_quality=True,\n ignore_iof_thr=-1),\n sampler=dict(\n type='RandomSampler',\n num=512,\n pos_fraction=0.25,\n neg_pos_ub=-1,\n add_gt_as_proposals=True),\n mask_size=28,\n pos_weight=-1,\n debug=False)),\n test_cfg=dict(\n rpn=dict(\n nms_pre=1000,\n max_per_img=1000,\n nms=dict(type='nms', iou_threshold=0.7),\n min_bbox_size=0),\n rcnn=dict(\n score_thr=0.05,\n nms=dict(type='nms', iou_threshold=0.5),\n max_per_img=100,\n mask_thr_binary=0.5)))\ndataset_type = 'CocoDataset'\ndata_root = 'data/coco/'\nimg_norm_cfg = dict(\n mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True, with_mask=True),\n dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n]\ndata = dict(\n samples_per_gpu=2,\n workers_per_gpu=2,\n train=dict(\n type='CocoDataset',\n ann_file='data/coco/annotations/instances_train2017.json',\n img_prefix='data/coco/train2017/',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True, with_mask=True),\n dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(\n type='Collect',\n keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])\n ]),\n val=dict(\n type='CocoDataset',\n ann_file='data/coco/annotations/instances_val2017.json',\n 
img_prefix='data/coco/val2017/',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]),\n test=dict(\n type='CocoDataset',\n ann_file='data/coco/annotations/instances_val2017.json',\n img_prefix='data/coco/val2017/',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]))\nevaluation = dict(metric=['bbox', 'segm'])\noptimizer = dict(type='AdamW', lr=0.0002, weight_decay=0.05)\noptimizer_config = dict(grad_clip=None)\nlr_config = dict(\n policy='step',\n warmup='linear',\n warmup_iters=500,\n warmup_ratio=1e-06,\n step=[8, 11])\nrunner = dict(type='EpochBasedRunner', max_epochs=12)\ncheckpoint_config = dict(interval=1)\nlog_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])\ncustom_hooks = [dict(type='NumClassCheckHook')]\ndist_params = dict(backend='nccl')\nlog_level = 'INFO'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\nwork_dir = './work_dirs/mask_rcnn_repvit_m6_fpn_1x_coco'\nauto_resume = False\ngpu_ids = range(0, 8)\n", "seed": 0, "exp_name": "mask_rcnn_repvit_m6_fpn_1x_coco.py"} -{"mode": "train", "epoch": 1, "iter": 50, "lr": 2e-05, "memory": 12069, "data_time": 0.15192, "loss_rpn_cls": 0.53878, "loss_rpn_bbox": 0.1359, "loss_cls": 1.98109, "acc": 67.09521, "loss_bbox": 0.05794, "loss_mask": 1.75779, "loss": 4.4715, "time": 0.80169} -{"mode": "train", "epoch": 1, "iter": 100, "lr": 4e-05, "memory": 12284, "data_time": 0.05602, "loss_rpn_cls": 0.25366, "loss_rpn_bbox": 0.10515, "loss_cls": 0.40303, "acc": 94.82104, "loss_bbox": 0.17368, "loss_mask": 0.76105, "loss": 1.69657, "time": 0.56041} -{"mode": "train", "epoch": 1, "iter": 150, "lr": 6e-05, "memory": 12284, "data_time": 0.05561, "loss_rpn_cls": 0.20733, "loss_rpn_bbox": 0.10429, "loss_cls": 0.38342, "acc": 94.09229, "loss_bbox": 0.20155, "loss_mask": 0.70071, "loss": 1.5973, "time": 0.5492} -{"mode": "train", "epoch": 1, "iter": 200, "lr": 8e-05, "memory": 12284, "data_time": 0.04855, "loss_rpn_cls": 0.15439, "loss_rpn_bbox": 0.10531, "loss_cls": 0.43235, "acc": 92.91333, "loss_bbox": 0.24937, "loss_mask": 0.66155, "loss": 1.60297, "time": 0.54933} -{"mode": "train", "epoch": 1, "iter": 250, "lr": 0.0001, "memory": 12284, "data_time": 0.05652, "loss_rpn_cls": 0.12144, "loss_rpn_bbox": 0.10281, "loss_cls": 0.46298, "acc": 91.89087, "loss_bbox": 0.29727, "loss_mask": 0.61037, "loss": 1.59487, "time": 0.53272} -{"mode": "train", "epoch": 1, "iter": 300, "lr": 0.00012, "memory": 12370, "data_time": 0.07537, "loss_rpn_cls": 0.09661, "loss_rpn_bbox": 0.09357, "loss_cls": 0.49623, "acc": 90.67896, "loss_bbox": 0.34825, "loss_mask": 0.56895, "loss": 1.60361, "time": 0.53117} -{"mode": "train", "epoch": 1, "iter": 350, "lr": 0.00014, "memory": 12370, "data_time": 0.05168, "loss_rpn_cls": 0.0893, "loss_rpn_bbox": 0.09051, "loss_cls": 0.47839, 
"acc": 90.70996, "loss_bbox": 0.34057, "loss_mask": 0.52023, "loss": 1.519, "time": 0.52195} -{"mode": "train", "epoch": 1, "iter": 400, "lr": 0.00016, "memory": 12439, "data_time": 0.05805, "loss_rpn_cls": 0.08896, "loss_rpn_bbox": 0.09085, "loss_cls": 0.50361, "acc": 89.75708, "loss_bbox": 0.36831, "loss_mask": 0.49023, "loss": 1.54196, "time": 0.58059} -{"mode": "train", "epoch": 1, "iter": 450, "lr": 0.00018, "memory": 12504, "data_time": 0.05437, "loss_rpn_cls": 0.08192, "loss_rpn_bbox": 0.08782, "loss_cls": 0.47399, "acc": 89.76367, "loss_bbox": 0.36414, "loss_mask": 0.45511, "loss": 1.46298, "time": 0.51593} -{"mode": "train", "epoch": 1, "iter": 500, "lr": 0.0002, "memory": 12504, "data_time": 0.0408, "loss_rpn_cls": 0.07744, "loss_rpn_bbox": 0.08541, "loss_cls": 0.46854, "acc": 89.48267, "loss_bbox": 0.37875, "loss_mask": 0.44363, "loss": 1.45377, "time": 0.51048} -{"mode": "train", "epoch": 1, "iter": 550, "lr": 0.0002, "memory": 12504, "data_time": 0.04945, "loss_rpn_cls": 0.0767, "loss_rpn_bbox": 0.08127, "loss_cls": 0.45755, "acc": 89.48804, "loss_bbox": 0.37599, "loss_mask": 0.42181, "loss": 1.41332, "time": 0.53289} -{"mode": "train", "epoch": 1, "iter": 600, "lr": 0.0002, "memory": 12504, "data_time": 0.04376, "loss_rpn_cls": 0.075, "loss_rpn_bbox": 0.08242, "loss_cls": 0.43191, "acc": 89.55786, "loss_bbox": 0.36654, "loss_mask": 0.415, "loss": 1.37086, "time": 0.52822} -{"mode": "train", "epoch": 1, "iter": 650, "lr": 0.0002, "memory": 12555, "data_time": 0.06405, "loss_rpn_cls": 0.07143, "loss_rpn_bbox": 0.08436, "loss_cls": 0.42985, "acc": 89.20337, "loss_bbox": 0.37571, "loss_mask": 0.39525, "loss": 1.3566, "time": 0.51781} -{"mode": "train", "epoch": 1, "iter": 700, "lr": 0.0002, "memory": 12555, "data_time": 0.05297, "loss_rpn_cls": 0.07205, "loss_rpn_bbox": 0.0818, "loss_cls": 0.41066, "acc": 89.12451, "loss_bbox": 0.37493, "loss_mask": 0.38492, "loss": 1.32436, "time": 0.52597} -{"mode": "train", "epoch": 1, "iter": 750, "lr": 0.0002, "memory": 12555, "data_time": 0.04922, "loss_rpn_cls": 0.06745, "loss_rpn_bbox": 0.08021, "loss_cls": 0.39479, "acc": 89.45068, "loss_bbox": 0.36178, "loss_mask": 0.37443, "loss": 1.27867, "time": 0.74636} -{"mode": "train", "epoch": 1, "iter": 800, "lr": 0.0002, "memory": 12560, "data_time": 0.05238, "loss_rpn_cls": 0.06793, "loss_rpn_bbox": 0.08042, "loss_cls": 0.4028, "acc": 89.32471, "loss_bbox": 0.36602, "loss_mask": 0.37544, "loss": 1.29262, "time": 0.51142} -{"mode": "train", "epoch": 1, "iter": 850, "lr": 0.0002, "memory": 12560, "data_time": 0.05793, "loss_rpn_cls": 0.06263, "loss_rpn_bbox": 0.07643, "loss_cls": 0.37681, "acc": 89.62769, "loss_bbox": 0.35866, "loss_mask": 0.36386, "loss": 1.23839, "time": 0.51532} -{"mode": "train", "epoch": 1, "iter": 900, "lr": 0.0002, "memory": 12560, "data_time": 0.05623, "loss_rpn_cls": 0.06503, "loss_rpn_bbox": 0.07611, "loss_cls": 0.384, "acc": 89.45581, "loss_bbox": 0.36046, "loss_mask": 0.36492, "loss": 1.25052, "time": 0.50286} -{"mode": "train", "epoch": 1, "iter": 950, "lr": 0.0002, "memory": 12560, "data_time": 0.05035, "loss_rpn_cls": 0.06276, "loss_rpn_bbox": 0.07511, "loss_cls": 0.379, "acc": 89.53418, "loss_bbox": 0.34573, "loss_mask": 0.35563, "loss": 1.21822, "time": 0.50216} -{"mode": "train", "epoch": 1, "iter": 1000, "lr": 0.0002, "memory": 12584, "data_time": 0.05453, "loss_rpn_cls": 0.06276, "loss_rpn_bbox": 0.07392, "loss_cls": 0.36088, "acc": 89.81543, "loss_bbox": 0.34008, "loss_mask": 0.34621, "loss": 1.18384, "time": 0.5166} -{"mode": "train", "epoch": 1, "iter": 
1050, "lr": 0.0002, "memory": 12584, "data_time": 0.05202, "loss_rpn_cls": 0.0607, "loss_rpn_bbox": 0.07211, "loss_cls": 0.37023, "acc": 89.6377, "loss_bbox": 0.34647, "loss_mask": 0.34986, "loss": 1.19937, "time": 0.51722} -{"mode": "train", "epoch": 1, "iter": 1100, "lr": 0.0002, "memory": 12584, "data_time": 0.05682, "loss_rpn_cls": 0.05718, "loss_rpn_bbox": 0.07081, "loss_cls": 0.35775, "acc": 89.77832, "loss_bbox": 0.33718, "loss_mask": 0.34515, "loss": 1.16808, "time": 0.49537} -{"mode": "train", "epoch": 1, "iter": 1150, "lr": 0.0002, "memory": 12584, "data_time": 0.05745, "loss_rpn_cls": 0.06233, "loss_rpn_bbox": 0.07742, "loss_cls": 0.36444, "acc": 89.48682, "loss_bbox": 0.34774, "loss_mask": 0.34408, "loss": 1.19602, "time": 0.51968} -{"mode": "train", "epoch": 1, "iter": 1200, "lr": 0.0002, "memory": 12584, "data_time": 0.06079, "loss_rpn_cls": 0.06205, "loss_rpn_bbox": 0.07795, "loss_cls": 0.35728, "acc": 89.59668, "loss_bbox": 0.34681, "loss_mask": 0.34105, "loss": 1.18513, "time": 0.51552} -{"mode": "train", "epoch": 1, "iter": 1250, "lr": 0.0002, "memory": 12584, "data_time": 0.0475, "loss_rpn_cls": 0.05452, "loss_rpn_bbox": 0.06787, "loss_cls": 0.34568, "acc": 90.1084, "loss_bbox": 0.32642, "loss_mask": 0.33481, "loss": 1.12931, "time": 0.50004} -{"mode": "train", "epoch": 1, "iter": 1300, "lr": 0.0002, "memory": 12584, "data_time": 0.05889, "loss_rpn_cls": 0.05209, "loss_rpn_bbox": 0.06997, "loss_cls": 0.34646, "acc": 89.96094, "loss_bbox": 0.33459, "loss_mask": 0.33246, "loss": 1.13557, "time": 0.51249} -{"mode": "train", "epoch": 1, "iter": 1350, "lr": 0.0002, "memory": 12584, "data_time": 0.05307, "loss_rpn_cls": 0.05956, "loss_rpn_bbox": 0.07591, "loss_cls": 0.3452, "acc": 89.89941, "loss_bbox": 0.33452, "loss_mask": 0.33143, "loss": 1.14662, "time": 0.50452} -{"mode": "train", "epoch": 1, "iter": 1400, "lr": 0.0002, "memory": 12584, "data_time": 0.05699, "loss_rpn_cls": 0.05874, "loss_rpn_bbox": 0.06973, "loss_cls": 0.32136, "acc": 90.70142, "loss_bbox": 0.31458, "loss_mask": 0.32396, "loss": 1.08836, "time": 0.50144} -{"mode": "train", "epoch": 1, "iter": 1450, "lr": 0.0002, "memory": 12584, "data_time": 0.04879, "loss_rpn_cls": 0.0546, "loss_rpn_bbox": 0.07045, "loss_cls": 0.34428, "acc": 89.78955, "loss_bbox": 0.33855, "loss_mask": 0.31884, "loss": 1.12673, "time": 0.49621} -{"mode": "train", "epoch": 1, "iter": 1500, "lr": 0.0002, "memory": 12584, "data_time": 0.0627, "loss_rpn_cls": 0.06106, "loss_rpn_bbox": 0.076, "loss_cls": 0.33244, "acc": 90.11157, "loss_bbox": 0.33344, "loss_mask": 0.32693, "loss": 1.12988, "time": 0.50055} -{"mode": "train", "epoch": 1, "iter": 1550, "lr": 0.0002, "memory": 12584, "data_time": 0.06349, "loss_rpn_cls": 0.05226, "loss_rpn_bbox": 0.06964, "loss_cls": 0.32641, "acc": 90.33374, "loss_bbox": 0.32423, "loss_mask": 0.32559, "loss": 1.09813, "time": 0.50837} -{"mode": "train", "epoch": 1, "iter": 1600, "lr": 0.0002, "memory": 12584, "data_time": 0.0522, "loss_rpn_cls": 0.05664, "loss_rpn_bbox": 0.07154, "loss_cls": 0.33648, "acc": 90.15796, "loss_bbox": 0.32682, "loss_mask": 0.32737, "loss": 1.11886, "time": 0.5175} -{"mode": "train", "epoch": 1, "iter": 1650, "lr": 0.0002, "memory": 12584, "data_time": 0.0613, "loss_rpn_cls": 0.0542, "loss_rpn_bbox": 0.07049, "loss_cls": 0.3219, "acc": 90.49512, "loss_bbox": 0.31842, "loss_mask": 0.32413, "loss": 1.08913, "time": 0.50827} -{"mode": "train", "epoch": 1, "iter": 1700, "lr": 0.0002, "memory": 12584, "data_time": 0.05883, "loss_rpn_cls": 0.05565, "loss_rpn_bbox": 0.06654, "loss_cls": 
0.33779, "acc": 89.93115, "loss_bbox": 0.33315, "loss_mask": 0.32119, "loss": 1.11432, "time": 0.51704} -{"mode": "train", "epoch": 1, "iter": 1750, "lr": 0.0002, "memory": 12584, "data_time": 0.05092, "loss_rpn_cls": 0.05339, "loss_rpn_bbox": 0.06983, "loss_cls": 0.33133, "acc": 89.92163, "loss_bbox": 0.33037, "loss_mask": 0.31699, "loss": 1.10192, "time": 0.51107} -{"mode": "train", "epoch": 1, "iter": 1800, "lr": 0.0002, "memory": 12584, "data_time": 0.05271, "loss_rpn_cls": 0.053, "loss_rpn_bbox": 0.07084, "loss_cls": 0.32627, "acc": 90.36499, "loss_bbox": 0.32481, "loss_mask": 0.32437, "loss": 1.09929, "time": 0.49969} -{"mode": "train", "epoch": 1, "iter": 1850, "lr": 0.0002, "memory": 12584, "data_time": 0.05969, "loss_rpn_cls": 0.05219, "loss_rpn_bbox": 0.07053, "loss_cls": 0.33484, "acc": 89.96509, "loss_bbox": 0.33038, "loss_mask": 0.3174, "loss": 1.10534, "time": 0.505} -{"mode": "train", "epoch": 1, "iter": 1900, "lr": 0.0002, "memory": 12584, "data_time": 0.05387, "loss_rpn_cls": 0.05061, "loss_rpn_bbox": 0.06805, "loss_cls": 0.3197, "acc": 90.50073, "loss_bbox": 0.31625, "loss_mask": 0.31937, "loss": 1.07399, "time": 0.50714} -{"mode": "train", "epoch": 1, "iter": 1950, "lr": 0.0002, "memory": 12584, "data_time": 0.06034, "loss_rpn_cls": 0.05218, "loss_rpn_bbox": 0.06994, "loss_cls": 0.30788, "acc": 90.62085, "loss_bbox": 0.31372, "loss_mask": 0.30743, "loss": 1.05115, "time": 0.51222} -{"mode": "train", "epoch": 1, "iter": 2000, "lr": 0.0002, "memory": 12584, "data_time": 0.0596, "loss_rpn_cls": 0.05084, "loss_rpn_bbox": 0.06656, "loss_cls": 0.32631, "acc": 90.29834, "loss_bbox": 0.32355, "loss_mask": 0.31629, "loss": 1.08356, "time": 0.50105} -{"mode": "train", "epoch": 1, "iter": 2050, "lr": 0.0002, "memory": 12584, "data_time": 0.05232, "loss_rpn_cls": 0.04982, "loss_rpn_bbox": 0.06798, "loss_cls": 0.32127, "acc": 90.31396, "loss_bbox": 0.32433, "loss_mask": 0.31449, "loss": 1.07789, "time": 0.51745} -{"mode": "train", "epoch": 1, "iter": 2100, "lr": 0.0002, "memory": 12584, "data_time": 0.04928, "loss_rpn_cls": 0.04911, "loss_rpn_bbox": 0.06372, "loss_cls": 0.30386, "acc": 90.71948, "loss_bbox": 0.31201, "loss_mask": 0.30441, "loss": 1.03311, "time": 0.49766} -{"mode": "train", "epoch": 1, "iter": 2150, "lr": 0.0002, "memory": 12584, "data_time": 0.04695, "loss_rpn_cls": 0.05119, "loss_rpn_bbox": 0.06447, "loss_cls": 0.30842, "acc": 90.57397, "loss_bbox": 0.31721, "loss_mask": 0.30681, "loss": 1.0481, "time": 0.49586} -{"mode": "train", "epoch": 1, "iter": 2200, "lr": 0.0002, "memory": 12584, "data_time": 0.04824, "loss_rpn_cls": 0.05372, "loss_rpn_bbox": 0.07033, "loss_cls": 0.31467, "acc": 90.42993, "loss_bbox": 0.31514, "loss_mask": 0.30376, "loss": 1.05762, "time": 0.50936} -{"mode": "train", "epoch": 1, "iter": 2250, "lr": 0.0002, "memory": 12584, "data_time": 0.06065, "loss_rpn_cls": 0.04909, "loss_rpn_bbox": 0.06691, "loss_cls": 0.30503, "acc": 90.72681, "loss_bbox": 0.31123, "loss_mask": 0.30292, "loss": 1.03518, "time": 0.50689} -{"mode": "train", "epoch": 1, "iter": 2300, "lr": 0.0002, "memory": 12584, "data_time": 0.05094, "loss_rpn_cls": 0.05093, "loss_rpn_bbox": 0.06877, "loss_cls": 0.32249, "acc": 90.17554, "loss_bbox": 0.33106, "loss_mask": 0.31208, "loss": 1.08532, "time": 0.51494} -{"mode": "train", "epoch": 1, "iter": 2350, "lr": 0.0002, "memory": 12584, "data_time": 0.04731, "loss_rpn_cls": 0.05138, "loss_rpn_bbox": 0.06746, "loss_cls": 0.3104, "acc": 90.5813, "loss_bbox": 0.3109, "loss_mask": 0.29879, "loss": 1.03892, "time": 0.49773} -{"mode": 
"train", "epoch": 1, "iter": 2400, "lr": 0.0002, "memory": 12584, "data_time": 0.05455, "loss_rpn_cls": 0.04986, "loss_rpn_bbox": 0.06838, "loss_cls": 0.31869, "acc": 90.20947, "loss_bbox": 0.32134, "loss_mask": 0.30702, "loss": 1.0653, "time": 0.5055} -{"mode": "train", "epoch": 1, "iter": 2450, "lr": 0.0002, "memory": 12584, "data_time": 0.05749, "loss_rpn_cls": 0.05466, "loss_rpn_bbox": 0.06548, "loss_cls": 0.31461, "acc": 90.48438, "loss_bbox": 0.31893, "loss_mask": 0.30213, "loss": 1.05579, "time": 0.49739} -{"mode": "train", "epoch": 1, "iter": 2500, "lr": 0.0002, "memory": 12584, "data_time": 0.04142, "loss_rpn_cls": 0.04774, "loss_rpn_bbox": 0.06356, "loss_cls": 0.30908, "acc": 90.58862, "loss_bbox": 0.31263, "loss_mask": 0.29791, "loss": 1.03091, "time": 0.5031} -{"mode": "train", "epoch": 1, "iter": 2550, "lr": 0.0002, "memory": 12584, "data_time": 0.03812, "loss_rpn_cls": 0.04897, "loss_rpn_bbox": 0.06548, "loss_cls": 0.31238, "acc": 90.33203, "loss_bbox": 0.32363, "loss_mask": 0.31137, "loss": 1.06184, "time": 0.49317} -{"mode": "train", "epoch": 1, "iter": 2600, "lr": 0.0002, "memory": 12584, "data_time": 0.05286, "loss_rpn_cls": 0.04768, "loss_rpn_bbox": 0.06328, "loss_cls": 0.30637, "acc": 90.59229, "loss_bbox": 0.31671, "loss_mask": 0.29309, "loss": 1.02713, "time": 0.52055} -{"mode": "train", "epoch": 1, "iter": 2650, "lr": 0.0002, "memory": 12584, "data_time": 0.0489, "loss_rpn_cls": 0.04915, "loss_rpn_bbox": 0.06604, "loss_cls": 0.31383, "acc": 90.31738, "loss_bbox": 0.32779, "loss_mask": 0.29901, "loss": 1.05582, "time": 0.4948} -{"mode": "train", "epoch": 1, "iter": 2700, "lr": 0.0002, "memory": 12584, "data_time": 0.05586, "loss_rpn_cls": 0.05062, "loss_rpn_bbox": 0.06726, "loss_cls": 0.30744, "acc": 90.49512, "loss_bbox": 0.31956, "loss_mask": 0.30619, "loss": 1.05108, "time": 0.51322} -{"mode": "train", "epoch": 1, "iter": 2750, "lr": 0.0002, "memory": 12584, "data_time": 0.05966, "loss_rpn_cls": 0.05202, "loss_rpn_bbox": 0.0638, "loss_cls": 0.29551, "acc": 90.95483, "loss_bbox": 0.30562, "loss_mask": 0.29299, "loss": 1.00993, "time": 0.50196} -{"mode": "train", "epoch": 1, "iter": 2800, "lr": 0.0002, "memory": 12584, "data_time": 0.05334, "loss_rpn_cls": 0.04935, "loss_rpn_bbox": 0.06423, "loss_cls": 0.30279, "acc": 90.67139, "loss_bbox": 0.30545, "loss_mask": 0.29743, "loss": 1.01925, "time": 0.6784} -{"mode": "train", "epoch": 1, "iter": 2850, "lr": 0.0002, "memory": 12584, "data_time": 0.05785, "loss_rpn_cls": 0.04988, "loss_rpn_bbox": 0.06586, "loss_cls": 0.29801, "acc": 90.6792, "loss_bbox": 0.31233, "loss_mask": 0.29728, "loss": 1.02337, "time": 0.51977} -{"mode": "train", "epoch": 1, "iter": 2900, "lr": 0.0002, "memory": 12584, "data_time": 0.06235, "loss_rpn_cls": 0.04187, "loss_rpn_bbox": 0.06267, "loss_cls": 0.29618, "acc": 90.82397, "loss_bbox": 0.31039, "loss_mask": 0.30211, "loss": 1.01323, "time": 0.50541} -{"mode": "train", "epoch": 1, "iter": 2950, "lr": 0.0002, "memory": 12584, "data_time": 0.04272, "loss_rpn_cls": 0.04921, "loss_rpn_bbox": 0.06556, "loss_cls": 0.28885, "acc": 91.03125, "loss_bbox": 0.29905, "loss_mask": 0.2935, "loss": 0.99616, "time": 0.50753} -{"mode": "train", "epoch": 1, "iter": 3000, "lr": 0.0002, "memory": 12584, "data_time": 0.03957, "loss_rpn_cls": 0.05087, "loss_rpn_bbox": 0.07082, "loss_cls": 0.29748, "acc": 90.83081, "loss_bbox": 0.30784, "loss_mask": 0.29909, "loss": 1.0261, "time": 0.50517} -{"mode": "train", "epoch": 1, "iter": 3050, "lr": 0.0002, "memory": 12584, "data_time": 0.06772, "loss_rpn_cls": 0.05204, 
"loss_rpn_bbox": 0.06414, "loss_cls": 0.30077, "acc": 90.64062, "loss_bbox": 0.31081, "loss_mask": 0.30428, "loss": 1.03204, "time": 0.51295} -{"mode": "train", "epoch": 1, "iter": 3100, "lr": 0.0002, "memory": 12584, "data_time": 0.05372, "loss_rpn_cls": 0.04663, "loss_rpn_bbox": 0.0626, "loss_cls": 0.30026, "acc": 90.67285, "loss_bbox": 0.31, "loss_mask": 0.29846, "loss": 1.01794, "time": 0.50419} -{"mode": "train", "epoch": 1, "iter": 3150, "lr": 0.0002, "memory": 12584, "data_time": 0.04332, "loss_rpn_cls": 0.04431, "loss_rpn_bbox": 0.06389, "loss_cls": 0.29521, "acc": 90.7854, "loss_bbox": 0.30886, "loss_mask": 0.29986, "loss": 1.01214, "time": 0.49832} -{"mode": "train", "epoch": 1, "iter": 3200, "lr": 0.0002, "memory": 12584, "data_time": 0.04724, "loss_rpn_cls": 0.05041, "loss_rpn_bbox": 0.06318, "loss_cls": 0.29823, "acc": 90.87939, "loss_bbox": 0.30527, "loss_mask": 0.28913, "loss": 1.00621, "time": 0.50327} -{"mode": "train", "epoch": 1, "iter": 3250, "lr": 0.0002, "memory": 12584, "data_time": 0.05324, "loss_rpn_cls": 0.04352, "loss_rpn_bbox": 0.06263, "loss_cls": 0.30095, "acc": 90.54956, "loss_bbox": 0.31538, "loss_mask": 0.29268, "loss": 1.01515, "time": 0.50368} -{"mode": "train", "epoch": 1, "iter": 3300, "lr": 0.0002, "memory": 12584, "data_time": 0.05171, "loss_rpn_cls": 0.04927, "loss_rpn_bbox": 0.06552, "loss_cls": 0.3022, "acc": 90.63379, "loss_bbox": 0.31364, "loss_mask": 0.29368, "loss": 1.02432, "time": 0.51725} -{"mode": "train", "epoch": 1, "iter": 3350, "lr": 0.0002, "memory": 12584, "data_time": 0.05545, "loss_rpn_cls": 0.051, "loss_rpn_bbox": 0.06016, "loss_cls": 0.29894, "acc": 90.84644, "loss_bbox": 0.30112, "loss_mask": 0.29515, "loss": 1.00638, "time": 0.5128} -{"mode": "train", "epoch": 1, "iter": 3400, "lr": 0.0002, "memory": 12584, "data_time": 0.06384, "loss_rpn_cls": 0.04865, "loss_rpn_bbox": 0.06692, "loss_cls": 0.29119, "acc": 90.84424, "loss_bbox": 0.30351, "loss_mask": 0.29775, "loss": 1.00803, "time": 0.5156} -{"mode": "train", "epoch": 1, "iter": 3450, "lr": 0.0002, "memory": 12584, "data_time": 0.04089, "loss_rpn_cls": 0.04823, "loss_rpn_bbox": 0.06725, "loss_cls": 0.30413, "acc": 90.69873, "loss_bbox": 0.31121, "loss_mask": 0.29591, "loss": 1.02674, "time": 0.48365} -{"mode": "train", "epoch": 1, "iter": 3500, "lr": 0.0002, "memory": 12584, "data_time": 0.06062, "loss_rpn_cls": 0.04872, "loss_rpn_bbox": 0.06392, "loss_cls": 0.29815, "acc": 90.90942, "loss_bbox": 0.30611, "loss_mask": 0.29901, "loss": 1.01592, "time": 0.51357} -{"mode": "train", "epoch": 1, "iter": 3550, "lr": 0.0002, "memory": 12584, "data_time": 0.04975, "loss_rpn_cls": 0.04359, "loss_rpn_bbox": 0.0618, "loss_cls": 0.29899, "acc": 90.76514, "loss_bbox": 0.30771, "loss_mask": 0.29628, "loss": 1.00838, "time": 0.49715} -{"mode": "train", "epoch": 1, "iter": 3600, "lr": 0.0002, "memory": 12584, "data_time": 0.05206, "loss_rpn_cls": 0.04488, "loss_rpn_bbox": 0.06017, "loss_cls": 0.29782, "acc": 90.60034, "loss_bbox": 0.31145, "loss_mask": 0.29138, "loss": 1.00571, "time": 0.50751} -{"mode": "train", "epoch": 1, "iter": 3650, "lr": 0.0002, "memory": 12584, "data_time": 0.05859, "loss_rpn_cls": 0.04694, "loss_rpn_bbox": 0.06477, "loss_cls": 0.29926, "acc": 90.70264, "loss_bbox": 0.30853, "loss_mask": 0.29237, "loss": 1.01187, "time": 0.52155} -{"mode": "train", "epoch": 1, "iter": 3700, "lr": 0.0002, "memory": 12584, "data_time": 0.05284, "loss_rpn_cls": 0.04569, "loss_rpn_bbox": 0.06336, "loss_cls": 0.28937, "acc": 90.90332, "loss_bbox": 0.30627, "loss_mask": 0.28456, "loss": 
0.98926, "time": 0.53046} -{"mode": "train", "epoch": 1, "iter": 3750, "lr": 0.0002, "memory": 12584, "data_time": 0.04517, "loss_rpn_cls": 0.04669, "loss_rpn_bbox": 0.06705, "loss_cls": 0.31042, "acc": 90.22607, "loss_bbox": 0.32595, "loss_mask": 0.29671, "loss": 1.04682, "time": 0.51363} -{"mode": "train", "epoch": 1, "iter": 3800, "lr": 0.0002, "memory": 12584, "data_time": 0.05405, "loss_rpn_cls": 0.04363, "loss_rpn_bbox": 0.06225, "loss_cls": 0.29697, "acc": 90.9043, "loss_bbox": 0.30124, "loss_mask": 0.28866, "loss": 0.99274, "time": 0.50742} -{"mode": "train", "epoch": 1, "iter": 3850, "lr": 0.0002, "memory": 12584, "data_time": 0.06261, "loss_rpn_cls": 0.04618, "loss_rpn_bbox": 0.06471, "loss_cls": 0.30204, "acc": 90.62305, "loss_bbox": 0.31021, "loss_mask": 0.29547, "loss": 1.01861, "time": 0.5208} -{"mode": "train", "epoch": 1, "iter": 3900, "lr": 0.0002, "memory": 12584, "data_time": 0.05186, "loss_rpn_cls": 0.04074, "loss_rpn_bbox": 0.05985, "loss_cls": 0.30258, "acc": 90.5459, "loss_bbox": 0.31343, "loss_mask": 0.29216, "loss": 1.00876, "time": 0.49951} -{"mode": "train", "epoch": 1, "iter": 3950, "lr": 0.0002, "memory": 12584, "data_time": 0.03973, "loss_rpn_cls": 0.04416, "loss_rpn_bbox": 0.05956, "loss_cls": 0.28779, "acc": 91.14722, "loss_bbox": 0.29818, "loss_mask": 0.28783, "loss": 0.97753, "time": 0.49388} -{"mode": "train", "epoch": 1, "iter": 4000, "lr": 0.0002, "memory": 12584, "data_time": 0.0453, "loss_rpn_cls": 0.04394, "loss_rpn_bbox": 0.06213, "loss_cls": 0.28696, "acc": 91.01562, "loss_bbox": 0.30032, "loss_mask": 0.28698, "loss": 0.98032, "time": 0.50493} -{"mode": "train", "epoch": 1, "iter": 4050, "lr": 0.0002, "memory": 12584, "data_time": 0.05458, "loss_rpn_cls": 0.04356, "loss_rpn_bbox": 0.06218, "loss_cls": 0.2825, "acc": 91.08716, "loss_bbox": 0.29691, "loss_mask": 0.29034, "loss": 0.9755, "time": 0.51973} -{"mode": "train", "epoch": 1, "iter": 4100, "lr": 0.0002, "memory": 12584, "data_time": 0.0514, "loss_rpn_cls": 0.04708, "loss_rpn_bbox": 0.06157, "loss_cls": 0.28033, "acc": 91.19336, "loss_bbox": 0.28869, "loss_mask": 0.28557, "loss": 0.96324, "time": 0.50923} -{"mode": "train", "epoch": 1, "iter": 4150, "lr": 0.0002, "memory": 12584, "data_time": 0.06036, "loss_rpn_cls": 0.04707, "loss_rpn_bbox": 0.06221, "loss_cls": 0.28961, "acc": 90.7085, "loss_bbox": 0.30772, "loss_mask": 0.29067, "loss": 0.99727, "time": 0.52004} -{"mode": "train", "epoch": 1, "iter": 4200, "lr": 0.0002, "memory": 12584, "data_time": 0.05178, "loss_rpn_cls": 0.04447, "loss_rpn_bbox": 0.06125, "loss_cls": 0.29157, "acc": 91.0166, "loss_bbox": 0.29969, "loss_mask": 0.27899, "loss": 0.97597, "time": 0.66876} -{"mode": "train", "epoch": 1, "iter": 4250, "lr": 0.0002, "memory": 12584, "data_time": 0.04966, "loss_rpn_cls": 0.04502, "loss_rpn_bbox": 0.06231, "loss_cls": 0.28675, "acc": 90.9668, "loss_bbox": 0.29816, "loss_mask": 0.28239, "loss": 0.97463, "time": 0.57091} -{"mode": "train", "epoch": 1, "iter": 4300, "lr": 0.0002, "memory": 12584, "data_time": 0.05202, "loss_rpn_cls": 0.04532, "loss_rpn_bbox": 0.06102, "loss_cls": 0.28337, "acc": 91.06982, "loss_bbox": 0.30357, "loss_mask": 0.28873, "loss": 0.98201, "time": 0.51384} -{"mode": "train", "epoch": 1, "iter": 4350, "lr": 0.0002, "memory": 12584, "data_time": 0.04773, "loss_rpn_cls": 0.04344, "loss_rpn_bbox": 0.05803, "loss_cls": 0.27308, "acc": 91.27197, "loss_bbox": 0.29224, "loss_mask": 0.28295, "loss": 0.94974, "time": 0.50521} -{"mode": "train", "epoch": 1, "iter": 4400, "lr": 0.0002, "memory": 12584, "data_time": 
0.05165, "loss_rpn_cls": 0.04672, "loss_rpn_bbox": 0.06281, "loss_cls": 0.28315, "acc": 91.23462, "loss_bbox": 0.29519, "loss_mask": 0.29426, "loss": 0.98213, "time": 0.50501} -{"mode": "train", "epoch": 1, "iter": 4450, "lr": 0.0002, "memory": 12584, "data_time": 0.0575, "loss_rpn_cls": 0.0437, "loss_rpn_bbox": 0.05807, "loss_cls": 0.28908, "acc": 90.94873, "loss_bbox": 0.2989, "loss_mask": 0.28064, "loss": 0.97039, "time": 0.51595} -{"mode": "train", "epoch": 1, "iter": 4500, "lr": 0.0002, "memory": 12584, "data_time": 0.05338, "loss_rpn_cls": 0.03959, "loss_rpn_bbox": 0.05952, "loss_cls": 0.28077, "acc": 91.13013, "loss_bbox": 0.29987, "loss_mask": 0.27827, "loss": 0.95802, "time": 0.49577} -{"mode": "train", "epoch": 1, "iter": 4550, "lr": 0.0002, "memory": 12584, "data_time": 0.05374, "loss_rpn_cls": 0.04208, "loss_rpn_bbox": 0.05819, "loss_cls": 0.27779, "acc": 91.30469, "loss_bbox": 0.28426, "loss_mask": 0.28318, "loss": 0.94551, "time": 0.5198} -{"mode": "train", "epoch": 1, "iter": 4600, "lr": 0.0002, "memory": 12584, "data_time": 0.04368, "loss_rpn_cls": 0.03957, "loss_rpn_bbox": 0.05824, "loss_cls": 0.268, "acc": 91.51636, "loss_bbox": 0.27911, "loss_mask": 0.28002, "loss": 0.92493, "time": 0.50403} -{"mode": "train", "epoch": 1, "iter": 4650, "lr": 0.0002, "memory": 12584, "data_time": 0.04923, "loss_rpn_cls": 0.04505, "loss_rpn_bbox": 0.06199, "loss_cls": 0.28093, "acc": 91.10938, "loss_bbox": 0.29896, "loss_mask": 0.28795, "loss": 0.97489, "time": 0.50524} -{"mode": "train", "epoch": 1, "iter": 4700, "lr": 0.0002, "memory": 12584, "data_time": 0.04053, "loss_rpn_cls": 0.04256, "loss_rpn_bbox": 0.05625, "loss_cls": 0.2842, "acc": 91.15503, "loss_bbox": 0.29308, "loss_mask": 0.28103, "loss": 0.95711, "time": 0.50192} -{"mode": "train", "epoch": 1, "iter": 4750, "lr": 0.0002, "memory": 12584, "data_time": 0.05512, "loss_rpn_cls": 0.04634, "loss_rpn_bbox": 0.06356, "loss_cls": 0.28828, "acc": 90.97632, "loss_bbox": 0.30153, "loss_mask": 0.28439, "loss": 0.98411, "time": 0.52169} -{"mode": "train", "epoch": 1, "iter": 4800, "lr": 0.0002, "memory": 12584, "data_time": 0.05, "loss_rpn_cls": 0.04625, "loss_rpn_bbox": 0.0619, "loss_cls": 0.27433, "acc": 91.29761, "loss_bbox": 0.28773, "loss_mask": 0.28169, "loss": 0.95191, "time": 0.49847} -{"mode": "train", "epoch": 1, "iter": 4850, "lr": 0.0002, "memory": 12584, "data_time": 0.05409, "loss_rpn_cls": 0.04369, "loss_rpn_bbox": 0.05974, "loss_cls": 0.28961, "acc": 90.91309, "loss_bbox": 0.29872, "loss_mask": 0.27854, "loss": 0.9703, "time": 0.51596} -{"mode": "train", "epoch": 1, "iter": 4900, "lr": 0.0002, "memory": 12584, "data_time": 0.06016, "loss_rpn_cls": 0.04847, "loss_rpn_bbox": 0.06462, "loss_cls": 0.27902, "acc": 91.16626, "loss_bbox": 0.2918, "loss_mask": 0.28564, "loss": 0.96955, "time": 0.50638} -{"mode": "train", "epoch": 1, "iter": 4950, "lr": 0.0002, "memory": 12584, "data_time": 0.05593, "loss_rpn_cls": 0.04478, "loss_rpn_bbox": 0.0621, "loss_cls": 0.27711, "acc": 91.2146, "loss_bbox": 0.29248, "loss_mask": 0.27979, "loss": 0.95626, "time": 0.52329} -{"mode": "train", "epoch": 1, "iter": 5000, "lr": 0.0002, "memory": 12584, "data_time": 0.04469, "loss_rpn_cls": 0.04005, "loss_rpn_bbox": 0.05527, "loss_cls": 0.27276, "acc": 91.59448, "loss_bbox": 0.27992, "loss_mask": 0.27729, "loss": 0.9253, "time": 0.50096} -{"mode": "train", "epoch": 1, "iter": 5050, "lr": 0.0002, "memory": 12584, "data_time": 0.06854, "loss_rpn_cls": 0.04214, "loss_rpn_bbox": 0.06066, "loss_cls": 0.28265, "acc": 90.95898, "loss_bbox": 0.30088, 
"loss_mask": 0.27753, "loss": 0.96386, "time": 0.51537} -{"mode": "train", "epoch": 1, "iter": 5100, "lr": 0.0002, "memory": 12584, "data_time": 0.05213, "loss_rpn_cls": 0.04202, "loss_rpn_bbox": 0.05556, "loss_cls": 0.29086, "acc": 91.05054, "loss_bbox": 0.29003, "loss_mask": 0.28166, "loss": 0.96012, "time": 0.49076} -{"mode": "train", "epoch": 1, "iter": 5150, "lr": 0.0002, "memory": 12584, "data_time": 0.05112, "loss_rpn_cls": 0.04356, "loss_rpn_bbox": 0.06009, "loss_cls": 0.27638, "acc": 91.14722, "loss_bbox": 0.30182, "loss_mask": 0.28479, "loss": 0.96664, "time": 0.50941} -{"mode": "train", "epoch": 1, "iter": 5200, "lr": 0.0002, "memory": 12584, "data_time": 0.06726, "loss_rpn_cls": 0.04528, "loss_rpn_bbox": 0.06154, "loss_cls": 0.2829, "acc": 90.96851, "loss_bbox": 0.30013, "loss_mask": 0.282, "loss": 0.97186, "time": 0.50684} -{"mode": "train", "epoch": 1, "iter": 5250, "lr": 0.0002, "memory": 12584, "data_time": 0.04826, "loss_rpn_cls": 0.04596, "loss_rpn_bbox": 0.06105, "loss_cls": 0.2905, "acc": 90.84131, "loss_bbox": 0.30296, "loss_mask": 0.28904, "loss": 0.98951, "time": 0.50388} -{"mode": "train", "epoch": 1, "iter": 5300, "lr": 0.0002, "memory": 12584, "data_time": 0.05414, "loss_rpn_cls": 0.04392, "loss_rpn_bbox": 0.06049, "loss_cls": 0.2895, "acc": 90.79785, "loss_bbox": 0.30296, "loss_mask": 0.29004, "loss": 0.98691, "time": 0.51707} -{"mode": "train", "epoch": 1, "iter": 5350, "lr": 0.0002, "memory": 12584, "data_time": 0.05909, "loss_rpn_cls": 0.04265, "loss_rpn_bbox": 0.06053, "loss_cls": 0.28937, "acc": 90.75293, "loss_bbox": 0.30145, "loss_mask": 0.2851, "loss": 0.97911, "time": 0.50917} -{"mode": "train", "epoch": 1, "iter": 5400, "lr": 0.0002, "memory": 12584, "data_time": 0.04344, "loss_rpn_cls": 0.04264, "loss_rpn_bbox": 0.05825, "loss_cls": 0.27785, "acc": 91.38354, "loss_bbox": 0.28616, "loss_mask": 0.27929, "loss": 0.94419, "time": 0.51278} -{"mode": "train", "epoch": 1, "iter": 5450, "lr": 0.0002, "memory": 12584, "data_time": 0.05065, "loss_rpn_cls": 0.04169, "loss_rpn_bbox": 0.06245, "loss_cls": 0.2736, "acc": 91.37598, "loss_bbox": 0.28617, "loss_mask": 0.27707, "loss": 0.94098, "time": 0.51188} -{"mode": "train", "epoch": 1, "iter": 5500, "lr": 0.0002, "memory": 12584, "data_time": 0.04797, "loss_rpn_cls": 0.04075, "loss_rpn_bbox": 0.06072, "loss_cls": 0.268, "acc": 91.42603, "loss_bbox": 0.28943, "loss_mask": 0.2759, "loss": 0.9348, "time": 0.49721} -{"mode": "train", "epoch": 1, "iter": 5550, "lr": 0.0002, "memory": 12584, "data_time": 0.05134, "loss_rpn_cls": 0.04016, "loss_rpn_bbox": 0.05975, "loss_cls": 0.28312, "acc": 90.92139, "loss_bbox": 0.30156, "loss_mask": 0.27887, "loss": 0.96346, "time": 0.4974} -{"mode": "train", "epoch": 1, "iter": 5600, "lr": 0.0002, "memory": 12584, "data_time": 0.04756, "loss_rpn_cls": 0.04457, "loss_rpn_bbox": 0.05746, "loss_cls": 0.28182, "acc": 91.13306, "loss_bbox": 0.2867, "loss_mask": 0.27768, "loss": 0.94823, "time": 0.49838} -{"mode": "train", "epoch": 1, "iter": 5650, "lr": 0.0002, "memory": 12584, "data_time": 0.05522, "loss_rpn_cls": 0.04252, "loss_rpn_bbox": 0.06283, "loss_cls": 0.29468, "acc": 90.77832, "loss_bbox": 0.30206, "loss_mask": 0.2823, "loss": 0.98438, "time": 0.51587} -{"mode": "train", "epoch": 1, "iter": 5700, "lr": 0.0002, "memory": 12584, "data_time": 0.05388, "loss_rpn_cls": 0.04135, "loss_rpn_bbox": 0.05772, "loss_cls": 0.2795, "acc": 91.04102, "loss_bbox": 0.29609, "loss_mask": 0.27187, "loss": 0.94653, "time": 0.50828} -{"mode": "train", "epoch": 1, "iter": 5750, "lr": 0.0002, "memory": 
12584, "data_time": 0.05515, "loss_rpn_cls": 0.04432, "loss_rpn_bbox": 0.06101, "loss_cls": 0.28424, "acc": 91.11353, "loss_bbox": 0.2924, "loss_mask": 0.28267, "loss": 0.96465, "time": 0.50473} -{"mode": "train", "epoch": 1, "iter": 5800, "lr": 0.0002, "memory": 12584, "data_time": 0.05672, "loss_rpn_cls": 0.04278, "loss_rpn_bbox": 0.06215, "loss_cls": 0.29451, "acc": 90.85693, "loss_bbox": 0.30145, "loss_mask": 0.27988, "loss": 0.98077, "time": 0.50377} -{"mode": "train", "epoch": 1, "iter": 5850, "lr": 0.0002, "memory": 12584, "data_time": 0.05197, "loss_rpn_cls": 0.042, "loss_rpn_bbox": 0.05682, "loss_cls": 0.26145, "acc": 91.54395, "loss_bbox": 0.2797, "loss_mask": 0.26854, "loss": 0.90851, "time": 0.51341} -{"mode": "train", "epoch": 1, "iter": 5900, "lr": 0.0002, "memory": 12584, "data_time": 0.06064, "loss_rpn_cls": 0.04218, "loss_rpn_bbox": 0.06353, "loss_cls": 0.28986, "acc": 90.75928, "loss_bbox": 0.3017, "loss_mask": 0.28511, "loss": 0.98239, "time": 0.51426} -{"mode": "train", "epoch": 1, "iter": 5950, "lr": 0.0002, "memory": 12584, "data_time": 0.04965, "loss_rpn_cls": 0.03812, "loss_rpn_bbox": 0.05576, "loss_cls": 0.26402, "acc": 91.48657, "loss_bbox": 0.28334, "loss_mask": 0.27307, "loss": 0.91431, "time": 0.50552} -{"mode": "train", "epoch": 1, "iter": 6000, "lr": 0.0002, "memory": 12584, "data_time": 0.05454, "loss_rpn_cls": 0.04073, "loss_rpn_bbox": 0.05789, "loss_cls": 0.27465, "acc": 91.36377, "loss_bbox": 0.28563, "loss_mask": 0.27412, "loss": 0.93302, "time": 0.50393} -{"mode": "train", "epoch": 1, "iter": 6050, "lr": 0.0002, "memory": 12584, "data_time": 0.05463, "loss_rpn_cls": 0.04442, "loss_rpn_bbox": 0.05849, "loss_cls": 0.27231, "acc": 91.35547, "loss_bbox": 0.2831, "loss_mask": 0.27423, "loss": 0.93255, "time": 0.52141} -{"mode": "train", "epoch": 1, "iter": 6100, "lr": 0.0002, "memory": 12584, "data_time": 0.05581, "loss_rpn_cls": 0.04251, "loss_rpn_bbox": 0.0584, "loss_cls": 0.26383, "acc": 91.72241, "loss_bbox": 0.27384, "loss_mask": 0.26803, "loss": 0.9066, "time": 0.67731} -{"mode": "train", "epoch": 1, "iter": 6150, "lr": 0.0002, "memory": 12584, "data_time": 0.06017, "loss_rpn_cls": 0.04098, "loss_rpn_bbox": 0.05559, "loss_cls": 0.26114, "acc": 91.70874, "loss_bbox": 0.27852, "loss_mask": 0.28203, "loss": 0.91827, "time": 0.50888} -{"mode": "train", "epoch": 1, "iter": 6200, "lr": 0.0002, "memory": 12584, "data_time": 0.05672, "loss_rpn_cls": 0.0418, "loss_rpn_bbox": 0.06389, "loss_cls": 0.28118, "acc": 91.14771, "loss_bbox": 0.29254, "loss_mask": 0.28317, "loss": 0.96257, "time": 0.50547} -{"mode": "train", "epoch": 1, "iter": 6250, "lr": 0.0002, "memory": 12584, "data_time": 0.06712, "loss_rpn_cls": 0.03653, "loss_rpn_bbox": 0.05826, "loss_cls": 0.26704, "acc": 91.4624, "loss_bbox": 0.2815, "loss_mask": 0.27204, "loss": 0.91538, "time": 0.51545} -{"mode": "train", "epoch": 1, "iter": 6300, "lr": 0.0002, "memory": 12584, "data_time": 0.06341, "loss_rpn_cls": 0.04168, "loss_rpn_bbox": 0.05752, "loss_cls": 0.27103, "acc": 91.36646, "loss_bbox": 0.28117, "loss_mask": 0.27733, "loss": 0.92874, "time": 0.5628} -{"mode": "train", "epoch": 1, "iter": 6350, "lr": 0.0002, "memory": 12584, "data_time": 0.04943, "loss_rpn_cls": 0.04265, "loss_rpn_bbox": 0.05828, "loss_cls": 0.26823, "acc": 91.41211, "loss_bbox": 0.27731, "loss_mask": 0.27148, "loss": 0.91795, "time": 0.49975} -{"mode": "train", "epoch": 1, "iter": 6400, "lr": 0.0002, "memory": 12584, "data_time": 0.04391, "loss_rpn_cls": 0.03924, "loss_rpn_bbox": 0.05592, "loss_cls": 0.25752, "acc": 91.81543, 
"loss_bbox": 0.27371, "loss_mask": 0.27345, "loss": 0.89985, "time": 0.49424} -{"mode": "train", "epoch": 1, "iter": 6450, "lr": 0.0002, "memory": 12584, "data_time": 0.04392, "loss_rpn_cls": 0.03993, "loss_rpn_bbox": 0.05447, "loss_cls": 0.25768, "acc": 91.75659, "loss_bbox": 0.27698, "loss_mask": 0.27255, "loss": 0.90161, "time": 0.50143} -{"mode": "train", "epoch": 1, "iter": 6500, "lr": 0.0002, "memory": 12584, "data_time": 0.06866, "loss_rpn_cls": 0.03772, "loss_rpn_bbox": 0.05773, "loss_cls": 0.2672, "acc": 91.60913, "loss_bbox": 0.28128, "loss_mask": 0.2724, "loss": 0.91632, "time": 0.51575} -{"mode": "train", "epoch": 1, "iter": 6550, "lr": 0.0002, "memory": 12584, "data_time": 0.04904, "loss_rpn_cls": 0.03826, "loss_rpn_bbox": 0.05689, "loss_cls": 0.26258, "acc": 91.78394, "loss_bbox": 0.27608, "loss_mask": 0.28072, "loss": 0.91453, "time": 0.50191} -{"mode": "train", "epoch": 1, "iter": 6600, "lr": 0.0002, "memory": 12584, "data_time": 0.05603, "loss_rpn_cls": 0.03796, "loss_rpn_bbox": 0.05772, "loss_cls": 0.27453, "acc": 91.24414, "loss_bbox": 0.2861, "loss_mask": 0.27025, "loss": 0.92656, "time": 0.5166} -{"mode": "train", "epoch": 1, "iter": 6650, "lr": 0.0002, "memory": 12584, "data_time": 0.05715, "loss_rpn_cls": 0.03724, "loss_rpn_bbox": 0.05713, "loss_cls": 0.26883, "acc": 91.3833, "loss_bbox": 0.28797, "loss_mask": 0.27916, "loss": 0.93033, "time": 0.5056} -{"mode": "train", "epoch": 1, "iter": 6700, "lr": 0.0002, "memory": 12584, "data_time": 0.04031, "loss_rpn_cls": 0.03972, "loss_rpn_bbox": 0.06007, "loss_cls": 0.27163, "acc": 91.37354, "loss_bbox": 0.2862, "loss_mask": 0.27522, "loss": 0.93284, "time": 0.50661} -{"mode": "train", "epoch": 1, "iter": 6750, "lr": 0.0002, "memory": 12584, "data_time": 0.05274, "loss_rpn_cls": 0.03986, "loss_rpn_bbox": 0.05729, "loss_cls": 0.28151, "acc": 91.14526, "loss_bbox": 0.28869, "loss_mask": 0.27347, "loss": 0.94083, "time": 0.51264} -{"mode": "train", "epoch": 1, "iter": 6800, "lr": 0.0002, "memory": 12584, "data_time": 0.06145, "loss_rpn_cls": 0.04108, "loss_rpn_bbox": 0.06231, "loss_cls": 0.28711, "acc": 91.13965, "loss_bbox": 0.29244, "loss_mask": 0.27279, "loss": 0.95573, "time": 0.5154} -{"mode": "train", "epoch": 1, "iter": 6850, "lr": 0.0002, "memory": 12584, "data_time": 0.05593, "loss_rpn_cls": 0.04547, "loss_rpn_bbox": 0.0585, "loss_cls": 0.27407, "acc": 91.08374, "loss_bbox": 0.29279, "loss_mask": 0.27283, "loss": 0.94365, "time": 0.51324} -{"mode": "train", "epoch": 1, "iter": 6900, "lr": 0.0002, "memory": 12584, "data_time": 0.05219, "loss_rpn_cls": 0.03782, "loss_rpn_bbox": 0.06012, "loss_cls": 0.26038, "acc": 91.65552, "loss_bbox": 0.27174, "loss_mask": 0.26706, "loss": 0.89712, "time": 0.49992} -{"mode": "train", "epoch": 1, "iter": 6950, "lr": 0.0002, "memory": 12584, "data_time": 0.05481, "loss_rpn_cls": 0.03761, "loss_rpn_bbox": 0.0555, "loss_cls": 0.26709, "acc": 91.49463, "loss_bbox": 0.284, "loss_mask": 0.27232, "loss": 0.91652, "time": 0.50381} -{"mode": "train", "epoch": 1, "iter": 7000, "lr": 0.0002, "memory": 12584, "data_time": 0.04172, "loss_rpn_cls": 0.04294, "loss_rpn_bbox": 0.06145, "loss_cls": 0.26541, "acc": 91.43091, "loss_bbox": 0.28423, "loss_mask": 0.27053, "loss": 0.92456, "time": 0.50101} -{"mode": "train", "epoch": 1, "iter": 7050, "lr": 0.0002, "memory": 12584, "data_time": 0.06165, "loss_rpn_cls": 0.04046, "loss_rpn_bbox": 0.05913, "loss_cls": 0.27429, "acc": 91.23193, "loss_bbox": 0.28615, "loss_mask": 0.27142, "loss": 0.93144, "time": 0.50663} -{"mode": "train", "epoch": 1, "iter": 
7100, "lr": 0.0002, "memory": 12584, "data_time": 0.0563, "loss_rpn_cls": 0.03849, "loss_rpn_bbox": 0.05505, "loss_cls": 0.26277, "acc": 91.5376, "loss_bbox": 0.28532, "loss_mask": 0.27073, "loss": 0.91235, "time": 0.50866} -{"mode": "train", "epoch": 1, "iter": 7150, "lr": 0.0002, "memory": 12584, "data_time": 0.05326, "loss_rpn_cls": 0.04021, "loss_rpn_bbox": 0.05847, "loss_cls": 0.26532, "acc": 91.2688, "loss_bbox": 0.2902, "loss_mask": 0.27945, "loss": 0.93366, "time": 0.50054} -{"mode": "train", "epoch": 1, "iter": 7200, "lr": 0.0002, "memory": 12584, "data_time": 0.04966, "loss_rpn_cls": 0.03944, "loss_rpn_bbox": 0.05669, "loss_cls": 0.26784, "acc": 91.52734, "loss_bbox": 0.27783, "loss_mask": 0.26418, "loss": 0.90598, "time": 0.50307} -{"mode": "train", "epoch": 1, "iter": 7250, "lr": 0.0002, "memory": 12584, "data_time": 0.05476, "loss_rpn_cls": 0.0419, "loss_rpn_bbox": 0.05562, "loss_cls": 0.25554, "acc": 91.92285, "loss_bbox": 0.265, "loss_mask": 0.27003, "loss": 0.88808, "time": 0.51061} -{"mode": "train", "epoch": 1, "iter": 7300, "lr": 0.0002, "memory": 12584, "data_time": 0.05116, "loss_rpn_cls": 0.04006, "loss_rpn_bbox": 0.05728, "loss_cls": 0.26915, "acc": 91.43555, "loss_bbox": 0.28187, "loss_mask": 0.27119, "loss": 0.91955, "time": 0.51621} -{"mode": "val", "epoch": 1, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.2878, "bbox_mAP_50": 0.5141, "bbox_mAP_75": 0.3011, "bbox_mAP_s": 0.1621, "bbox_mAP_m": 0.3268, "bbox_mAP_l": 0.3685, "bbox_mAP_copypaste": "0.2878 0.5141 0.3011 0.1621 0.3268 0.3685", "segm_mAP": 0.2932, "segm_mAP_50": 0.4919, "segm_mAP_75": 0.307, "segm_mAP_s": 0.1292, "segm_mAP_m": 0.3272, "segm_mAP_l": 0.4231, "segm_mAP_copypaste": "0.2932 0.4919 0.3070 0.1292 0.3272 0.4231"} -{"mode": "train", "epoch": 2, "iter": 50, "lr": 0.0002, "memory": 12584, "data_time": 0.1223, "loss_rpn_cls": 0.03754, "loss_rpn_bbox": 0.05653, "loss_cls": 0.26635, "acc": 91.29272, "loss_bbox": 0.28558, "loss_mask": 0.26769, "loss": 0.91369, "time": 0.60867} -{"mode": "train", "epoch": 2, "iter": 100, "lr": 0.0002, "memory": 12584, "data_time": 0.04975, "loss_rpn_cls": 0.03658, "loss_rpn_bbox": 0.05854, "loss_cls": 0.25732, "acc": 91.6394, "loss_bbox": 0.27845, "loss_mask": 0.26288, "loss": 0.89378, "time": 0.5342} -{"mode": "train", "epoch": 2, "iter": 150, "lr": 0.0002, "memory": 12584, "data_time": 0.04881, "loss_rpn_cls": 0.03518, "loss_rpn_bbox": 0.05699, "loss_cls": 0.26154, "acc": 91.52515, "loss_bbox": 0.29109, "loss_mask": 0.27048, "loss": 0.91527, "time": 0.52514} -{"mode": "train", "epoch": 2, "iter": 200, "lr": 0.0002, "memory": 12584, "data_time": 0.04709, "loss_rpn_cls": 0.03495, "loss_rpn_bbox": 0.05866, "loss_cls": 0.26228, "acc": 91.43677, "loss_bbox": 0.28576, "loss_mask": 0.266, "loss": 0.90764, "time": 0.52365} -{"mode": "train", "epoch": 2, "iter": 250, "lr": 0.0002, "memory": 12584, "data_time": 0.04479, "loss_rpn_cls": 0.03622, "loss_rpn_bbox": 0.05884, "loss_cls": 0.25439, "acc": 91.70654, "loss_bbox": 0.27468, "loss_mask": 0.26947, "loss": 0.8936, "time": 0.51349} -{"mode": "train", "epoch": 2, "iter": 300, "lr": 0.0002, "memory": 12584, "data_time": 0.04586, "loss_rpn_cls": 0.03754, "loss_rpn_bbox": 0.05697, "loss_cls": 0.25737, "acc": 91.72998, "loss_bbox": 0.27428, "loss_mask": 0.26592, "loss": 0.89209, "time": 0.50461} -{"mode": "train", "epoch": 2, "iter": 350, "lr": 0.0002, "memory": 12584, "data_time": 0.04856, "loss_rpn_cls": 0.03272, "loss_rpn_bbox": 0.05476, "loss_cls": 0.25237, "acc": 91.74854, "loss_bbox": 0.28156, "loss_mask": 0.26386, "loss": 
0.88526, "time": 0.50676} -{"mode": "train", "epoch": 2, "iter": 400, "lr": 0.0002, "memory": 12584, "data_time": 0.04645, "loss_rpn_cls": 0.03466, "loss_rpn_bbox": 0.0543, "loss_cls": 0.25258, "acc": 91.76611, "loss_bbox": 0.27198, "loss_mask": 0.26778, "loss": 0.8813, "time": 0.50573} -{"mode": "train", "epoch": 2, "iter": 450, "lr": 0.0002, "memory": 12584, "data_time": 0.04407, "loss_rpn_cls": 0.03624, "loss_rpn_bbox": 0.05583, "loss_cls": 0.26412, "acc": 91.30225, "loss_bbox": 0.2877, "loss_mask": 0.27217, "loss": 0.91606, "time": 0.50273} -{"mode": "train", "epoch": 2, "iter": 500, "lr": 0.0002, "memory": 12584, "data_time": 0.04448, "loss_rpn_cls": 0.03612, "loss_rpn_bbox": 0.05439, "loss_cls": 0.25798, "acc": 91.65576, "loss_bbox": 0.27054, "loss_mask": 0.27226, "loss": 0.89129, "time": 0.49693} -{"mode": "train", "epoch": 2, "iter": 550, "lr": 0.0002, "memory": 12584, "data_time": 0.046, "loss_rpn_cls": 0.03483, "loss_rpn_bbox": 0.05614, "loss_cls": 0.24757, "acc": 91.81567, "loss_bbox": 0.271, "loss_mask": 0.2634, "loss": 0.87294, "time": 0.50677} -{"mode": "train", "epoch": 2, "iter": 600, "lr": 0.0002, "memory": 12584, "data_time": 0.05416, "loss_rpn_cls": 0.03653, "loss_rpn_bbox": 0.05965, "loss_cls": 0.26717, "acc": 91.28345, "loss_bbox": 0.28567, "loss_mask": 0.26803, "loss": 0.91705, "time": 0.51819} -{"mode": "train", "epoch": 2, "iter": 650, "lr": 0.0002, "memory": 12584, "data_time": 0.04918, "loss_rpn_cls": 0.03503, "loss_rpn_bbox": 0.05518, "loss_cls": 0.25354, "acc": 91.67114, "loss_bbox": 0.27695, "loss_mask": 0.26853, "loss": 0.88923, "time": 0.50228} -{"mode": "train", "epoch": 2, "iter": 700, "lr": 0.0002, "memory": 12584, "data_time": 0.03523, "loss_rpn_cls": 0.03602, "loss_rpn_bbox": 0.05367, "loss_cls": 0.24791, "acc": 92.03271, "loss_bbox": 0.2704, "loss_mask": 0.26786, "loss": 0.87587, "time": 0.4817} -{"mode": "train", "epoch": 2, "iter": 750, "lr": 0.0002, "memory": 12584, "data_time": 0.04198, "loss_rpn_cls": 0.0322, "loss_rpn_bbox": 0.05272, "loss_cls": 0.2505, "acc": 91.69067, "loss_bbox": 0.28197, "loss_mask": 0.26639, "loss": 0.88378, "time": 0.6885} -{"mode": "train", "epoch": 2, "iter": 800, "lr": 0.0002, "memory": 12584, "data_time": 0.04245, "loss_rpn_cls": 0.03726, "loss_rpn_bbox": 0.0548, "loss_cls": 0.26093, "acc": 91.50269, "loss_bbox": 0.28164, "loss_mask": 0.26033, "loss": 0.89496, "time": 0.5131} -{"mode": "train", "epoch": 2, "iter": 850, "lr": 0.0002, "memory": 12584, "data_time": 0.04638, "loss_rpn_cls": 0.03633, "loss_rpn_bbox": 0.05601, "loss_cls": 0.25697, "acc": 91.57642, "loss_bbox": 0.27764, "loss_mask": 0.26579, "loss": 0.89276, "time": 0.52095} -{"mode": "train", "epoch": 2, "iter": 900, "lr": 0.0002, "memory": 12584, "data_time": 0.05177, "loss_rpn_cls": 0.03273, "loss_rpn_bbox": 0.05369, "loss_cls": 0.24945, "acc": 91.81177, "loss_bbox": 0.27679, "loss_mask": 0.26267, "loss": 0.87533, "time": 0.50991} -{"mode": "train", "epoch": 2, "iter": 950, "lr": 0.0002, "memory": 12584, "data_time": 0.03844, "loss_rpn_cls": 0.03447, "loss_rpn_bbox": 0.05412, "loss_cls": 0.2612, "acc": 91.60596, "loss_bbox": 0.27992, "loss_mask": 0.26259, "loss": 0.8923, "time": 0.50146} -{"mode": "train", "epoch": 2, "iter": 1000, "lr": 0.0002, "memory": 12584, "data_time": 0.04422, "loss_rpn_cls": 0.03084, "loss_rpn_bbox": 0.05236, "loss_cls": 0.25619, "acc": 91.61816, "loss_bbox": 0.27566, "loss_mask": 0.266, "loss": 0.88105, "time": 0.50963} -{"mode": "train", "epoch": 2, "iter": 1050, "lr": 0.0002, "memory": 12584, "data_time": 0.04651, "loss_rpn_cls": 
0.03867, "loss_rpn_bbox": 0.05529, "loss_cls": 0.26078, "acc": 91.31519, "loss_bbox": 0.29199, "loss_mask": 0.26527, "loss": 0.91201, "time": 0.51164} -{"mode": "train", "epoch": 2, "iter": 1100, "lr": 0.0002, "memory": 12584, "data_time": 0.03947, "loss_rpn_cls": 0.03247, "loss_rpn_bbox": 0.05436, "loss_cls": 0.26303, "acc": 91.60059, "loss_bbox": 0.27621, "loss_mask": 0.26501, "loss": 0.89108, "time": 0.56095} -{"mode": "train", "epoch": 2, "iter": 1150, "lr": 0.0002, "memory": 12584, "data_time": 0.04511, "loss_rpn_cls": 0.0327, "loss_rpn_bbox": 0.0532, "loss_cls": 0.25216, "acc": 91.8396, "loss_bbox": 0.26899, "loss_mask": 0.27215, "loss": 0.8792, "time": 0.505} -{"mode": "train", "epoch": 2, "iter": 1200, "lr": 0.0002, "memory": 12584, "data_time": 0.0434, "loss_rpn_cls": 0.03593, "loss_rpn_bbox": 0.05329, "loss_cls": 0.248, "acc": 91.8125, "loss_bbox": 0.27414, "loss_mask": 0.26031, "loss": 0.87167, "time": 0.50677} -{"mode": "train", "epoch": 2, "iter": 1250, "lr": 0.0002, "memory": 12584, "data_time": 0.04405, "loss_rpn_cls": 0.03875, "loss_rpn_bbox": 0.0595, "loss_cls": 0.26565, "acc": 91.24316, "loss_bbox": 0.2902, "loss_mask": 0.27223, "loss": 0.92634, "time": 0.49956} -{"mode": "train", "epoch": 2, "iter": 1300, "lr": 0.0002, "memory": 12584, "data_time": 0.03944, "loss_rpn_cls": 0.03623, "loss_rpn_bbox": 0.05309, "loss_cls": 0.25841, "acc": 91.62866, "loss_bbox": 0.27539, "loss_mask": 0.2639, "loss": 0.88703, "time": 0.5126} -{"mode": "train", "epoch": 2, "iter": 1350, "lr": 0.0002, "memory": 12584, "data_time": 0.04861, "loss_rpn_cls": 0.03722, "loss_rpn_bbox": 0.05588, "loss_cls": 0.25415, "acc": 91.84009, "loss_bbox": 0.27466, "loss_mask": 0.26546, "loss": 0.88736, "time": 0.50981} -{"mode": "train", "epoch": 2, "iter": 1400, "lr": 0.0002, "memory": 12584, "data_time": 0.03694, "loss_rpn_cls": 0.03561, "loss_rpn_bbox": 0.05483, "loss_cls": 0.24747, "acc": 92.00269, "loss_bbox": 0.26597, "loss_mask": 0.25681, "loss": 0.86069, "time": 0.50462} -{"mode": "train", "epoch": 2, "iter": 1450, "lr": 0.0002, "memory": 12584, "data_time": 0.04568, "loss_rpn_cls": 0.03797, "loss_rpn_bbox": 0.05782, "loss_cls": 0.25393, "acc": 91.60547, "loss_bbox": 0.27987, "loss_mask": 0.26297, "loss": 0.89257, "time": 0.51812} -{"mode": "train", "epoch": 2, "iter": 1500, "lr": 0.0002, "memory": 12584, "data_time": 0.04128, "loss_rpn_cls": 0.03839, "loss_rpn_bbox": 0.05965, "loss_cls": 0.25752, "acc": 91.45361, "loss_bbox": 0.27953, "loss_mask": 0.26018, "loss": 0.89528, "time": 0.51975} -{"mode": "train", "epoch": 2, "iter": 1550, "lr": 0.0002, "memory": 12584, "data_time": 0.0462, "loss_rpn_cls": 0.03383, "loss_rpn_bbox": 0.05206, "loss_cls": 0.23782, "acc": 92.19067, "loss_bbox": 0.2639, "loss_mask": 0.26527, "loss": 0.85288, "time": 0.5084} -{"mode": "train", "epoch": 2, "iter": 1600, "lr": 0.0002, "memory": 12584, "data_time": 0.04951, "loss_rpn_cls": 0.0329, "loss_rpn_bbox": 0.05536, "loss_cls": 0.25171, "acc": 91.80273, "loss_bbox": 0.27246, "loss_mask": 0.25946, "loss": 0.87188, "time": 0.50562} -{"mode": "train", "epoch": 2, "iter": 1650, "lr": 0.0002, "memory": 12584, "data_time": 0.04708, "loss_rpn_cls": 0.03282, "loss_rpn_bbox": 0.05476, "loss_cls": 0.24706, "acc": 91.9751, "loss_bbox": 0.26847, "loss_mask": 0.25801, "loss": 0.86112, "time": 0.49966} -{"mode": "train", "epoch": 2, "iter": 1700, "lr": 0.0002, "memory": 12584, "data_time": 0.04317, "loss_rpn_cls": 0.03734, "loss_rpn_bbox": 0.05507, "loss_cls": 0.25119, "acc": 91.96997, "loss_bbox": 0.26552, "loss_mask": 0.26214, "loss": 
0.87126, "time": 0.51136} -{"mode": "train", "epoch": 2, "iter": 1750, "lr": 0.0002, "memory": 12584, "data_time": 0.04142, "loss_rpn_cls": 0.03554, "loss_rpn_bbox": 0.05592, "loss_cls": 0.25564, "acc": 91.68359, "loss_bbox": 0.27762, "loss_mask": 0.26536, "loss": 0.89008, "time": 0.50404} -{"mode": "train", "epoch": 2, "iter": 1800, "lr": 0.0002, "memory": 12584, "data_time": 0.04555, "loss_rpn_cls": 0.03562, "loss_rpn_bbox": 0.0579, "loss_cls": 0.25564, "acc": 91.63403, "loss_bbox": 0.27254, "loss_mask": 0.27039, "loss": 0.89209, "time": 0.50841} -{"mode": "train", "epoch": 2, "iter": 1850, "lr": 0.0002, "memory": 12584, "data_time": 0.04664, "loss_rpn_cls": 0.03443, "loss_rpn_bbox": 0.05433, "loss_cls": 0.24805, "acc": 92.11865, "loss_bbox": 0.25995, "loss_mask": 0.26469, "loss": 0.86146, "time": 0.49631} -{"mode": "train", "epoch": 2, "iter": 1900, "lr": 0.0002, "memory": 12584, "data_time": 0.04733, "loss_rpn_cls": 0.0399, "loss_rpn_bbox": 0.05768, "loss_cls": 0.24941, "acc": 91.8562, "loss_bbox": 0.26954, "loss_mask": 0.26459, "loss": 0.88112, "time": 0.5163} -{"mode": "train", "epoch": 2, "iter": 1950, "lr": 0.0002, "memory": 12584, "data_time": 0.05787, "loss_rpn_cls": 0.03864, "loss_rpn_bbox": 0.05657, "loss_cls": 0.2477, "acc": 92.00195, "loss_bbox": 0.26357, "loss_mask": 0.26507, "loss": 0.87155, "time": 0.52048} -{"mode": "train", "epoch": 2, "iter": 2000, "lr": 0.0002, "memory": 12584, "data_time": 0.04043, "loss_rpn_cls": 0.03581, "loss_rpn_bbox": 0.05743, "loss_cls": 0.26136, "acc": 91.46094, "loss_bbox": 0.28377, "loss_mask": 0.26841, "loss": 0.90678, "time": 0.63639} -{"mode": "train", "epoch": 2, "iter": 2050, "lr": 0.0002, "memory": 12584, "data_time": 0.04084, "loss_rpn_cls": 0.03147, "loss_rpn_bbox": 0.05455, "loss_cls": 0.2565, "acc": 91.73584, "loss_bbox": 0.27053, "loss_mask": 0.26512, "loss": 0.87816, "time": 0.50776} -{"mode": "train", "epoch": 2, "iter": 2100, "lr": 0.0002, "memory": 12584, "data_time": 0.04175, "loss_rpn_cls": 0.03349, "loss_rpn_bbox": 0.05601, "loss_cls": 0.25009, "acc": 91.89062, "loss_bbox": 0.26904, "loss_mask": 0.26529, "loss": 0.87391, "time": 0.50004} -{"mode": "train", "epoch": 2, "iter": 2150, "lr": 0.0002, "memory": 12584, "data_time": 0.04092, "loss_rpn_cls": 0.03595, "loss_rpn_bbox": 0.05725, "loss_cls": 0.25369, "acc": 91.73438, "loss_bbox": 0.27454, "loss_mask": 0.26511, "loss": 0.88654, "time": 0.49284} -{"mode": "train", "epoch": 2, "iter": 2200, "lr": 0.0002, "memory": 12584, "data_time": 0.0438, "loss_rpn_cls": 0.03273, "loss_rpn_bbox": 0.05481, "loss_cls": 0.23764, "acc": 92.28833, "loss_bbox": 0.26078, "loss_mask": 0.26227, "loss": 0.84824, "time": 0.5023} -{"mode": "train", "epoch": 2, "iter": 2250, "lr": 0.0002, "memory": 12584, "data_time": 0.03919, "loss_rpn_cls": 0.03461, "loss_rpn_bbox": 0.0562, "loss_cls": 0.25562, "acc": 91.63452, "loss_bbox": 0.27128, "loss_mask": 0.26425, "loss": 0.88197, "time": 0.5064} -{"mode": "train", "epoch": 2, "iter": 2300, "lr": 0.0002, "memory": 12584, "data_time": 0.04496, "loss_rpn_cls": 0.0353, "loss_rpn_bbox": 0.0543, "loss_cls": 0.26169, "acc": 91.50635, "loss_bbox": 0.28054, "loss_mask": 0.26504, "loss": 0.89687, "time": 0.49826} -{"mode": "train", "epoch": 2, "iter": 2350, "lr": 0.0002, "memory": 12584, "data_time": 0.04375, "loss_rpn_cls": 0.03584, "loss_rpn_bbox": 0.05531, "loss_cls": 0.25434, "acc": 91.82031, "loss_bbox": 0.26862, "loss_mask": 0.26749, "loss": 0.8816, "time": 0.507} -{"mode": "train", "epoch": 2, "iter": 2400, "lr": 0.0002, "memory": 12584, "data_time": 0.03903, 
"loss_rpn_cls": 0.03277, "loss_rpn_bbox": 0.05283, "loss_cls": 0.24372, "acc": 92.18506, "loss_bbox": 0.25979, "loss_mask": 0.26197, "loss": 0.85109, "time": 0.50126} -{"mode": "train", "epoch": 2, "iter": 2450, "lr": 0.0002, "memory": 12584, "data_time": 0.04153, "loss_rpn_cls": 0.03304, "loss_rpn_bbox": 0.05457, "loss_cls": 0.24884, "acc": 91.99292, "loss_bbox": 0.27033, "loss_mask": 0.26472, "loss": 0.87149, "time": 0.51159} -{"mode": "train", "epoch": 2, "iter": 2500, "lr": 0.0002, "memory": 12584, "data_time": 0.04183, "loss_rpn_cls": 0.03456, "loss_rpn_bbox": 0.05385, "loss_cls": 0.25216, "acc": 91.87354, "loss_bbox": 0.27037, "loss_mask": 0.26247, "loss": 0.87342, "time": 0.57941} -{"mode": "train", "epoch": 2, "iter": 2550, "lr": 0.0002, "memory": 12584, "data_time": 0.04414, "loss_rpn_cls": 0.03717, "loss_rpn_bbox": 0.05314, "loss_cls": 0.23815, "acc": 92.35815, "loss_bbox": 0.26224, "loss_mask": 0.25694, "loss": 0.84766, "time": 0.5084} -{"mode": "train", "epoch": 2, "iter": 2600, "lr": 0.0002, "memory": 12584, "data_time": 0.0491, "loss_rpn_cls": 0.03808, "loss_rpn_bbox": 0.05628, "loss_cls": 0.25053, "acc": 91.79224, "loss_bbox": 0.26583, "loss_mask": 0.26134, "loss": 0.87207, "time": 0.50835} -{"mode": "train", "epoch": 2, "iter": 2650, "lr": 0.0002, "memory": 12584, "data_time": 0.03986, "loss_rpn_cls": 0.03813, "loss_rpn_bbox": 0.05541, "loss_cls": 0.25066, "acc": 91.84644, "loss_bbox": 0.26662, "loss_mask": 0.26374, "loss": 0.87456, "time": 0.66964} -{"mode": "train", "epoch": 2, "iter": 2700, "lr": 0.0002, "memory": 12584, "data_time": 0.04936, "loss_rpn_cls": 0.03594, "loss_rpn_bbox": 0.0565, "loss_cls": 0.25568, "acc": 91.60303, "loss_bbox": 0.27619, "loss_mask": 0.26557, "loss": 0.88987, "time": 0.49584} -{"mode": "train", "epoch": 2, "iter": 2750, "lr": 0.0002, "memory": 12584, "data_time": 0.04524, "loss_rpn_cls": 0.03185, "loss_rpn_bbox": 0.05596, "loss_cls": 0.24992, "acc": 91.84131, "loss_bbox": 0.26589, "loss_mask": 0.26359, "loss": 0.8672, "time": 0.49077} -{"mode": "train", "epoch": 2, "iter": 2800, "lr": 0.0002, "memory": 12584, "data_time": 0.04909, "loss_rpn_cls": 0.03555, "loss_rpn_bbox": 0.05461, "loss_cls": 0.25588, "acc": 91.73169, "loss_bbox": 0.2717, "loss_mask": 0.26528, "loss": 0.88303, "time": 0.49963} -{"mode": "train", "epoch": 2, "iter": 2850, "lr": 0.0002, "memory": 12584, "data_time": 0.04808, "loss_rpn_cls": 0.03348, "loss_rpn_bbox": 0.0555, "loss_cls": 0.2547, "acc": 91.66968, "loss_bbox": 0.26976, "loss_mask": 0.25993, "loss": 0.87336, "time": 0.50078} -{"mode": "train", "epoch": 2, "iter": 2900, "lr": 0.0002, "memory": 12584, "data_time": 0.04676, "loss_rpn_cls": 0.03446, "loss_rpn_bbox": 0.05554, "loss_cls": 0.24726, "acc": 91.87842, "loss_bbox": 0.27005, "loss_mask": 0.26375, "loss": 0.87107, "time": 0.50353} -{"mode": "train", "epoch": 2, "iter": 2950, "lr": 0.0002, "memory": 12584, "data_time": 0.04846, "loss_rpn_cls": 0.03616, "loss_rpn_bbox": 0.05744, "loss_cls": 0.26253, "acc": 91.56592, "loss_bbox": 0.27633, "loss_mask": 0.26264, "loss": 0.8951, "time": 0.50072} -{"mode": "train", "epoch": 2, "iter": 3000, "lr": 0.0002, "memory": 12584, "data_time": 0.0435, "loss_rpn_cls": 0.03465, "loss_rpn_bbox": 0.05477, "loss_cls": 0.25506, "acc": 91.73413, "loss_bbox": 0.27291, "loss_mask": 0.26112, "loss": 0.87851, "time": 0.50491} -{"mode": "train", "epoch": 2, "iter": 3050, "lr": 0.0002, "memory": 12584, "data_time": 0.04921, "loss_rpn_cls": 0.03239, "loss_rpn_bbox": 0.05502, "loss_cls": 0.25786, "acc": 91.52612, "loss_bbox": 0.2734, 
"loss_mask": 0.25912, "loss": 0.87779, "time": 0.53075} -{"mode": "train", "epoch": 2, "iter": 3100, "lr": 0.0002, "memory": 12584, "data_time": 0.04725, "loss_rpn_cls": 0.03458, "loss_rpn_bbox": 0.05462, "loss_cls": 0.25075, "acc": 91.91626, "loss_bbox": 0.26634, "loss_mask": 0.25873, "loss": 0.86502, "time": 0.49945} -{"mode": "train", "epoch": 2, "iter": 3150, "lr": 0.0002, "memory": 12584, "data_time": 0.04188, "loss_rpn_cls": 0.03372, "loss_rpn_bbox": 0.05309, "loss_cls": 0.24483, "acc": 92.0542, "loss_bbox": 0.26778, "loss_mask": 0.25696, "loss": 0.85638, "time": 0.5008} -{"mode": "train", "epoch": 2, "iter": 3200, "lr": 0.0002, "memory": 12584, "data_time": 0.04158, "loss_rpn_cls": 0.03099, "loss_rpn_bbox": 0.05187, "loss_cls": 0.24523, "acc": 92.21411, "loss_bbox": 0.25799, "loss_mask": 0.26146, "loss": 0.84754, "time": 0.49606} -{"mode": "train", "epoch": 2, "iter": 3250, "lr": 0.0002, "memory": 12584, "data_time": 0.04357, "loss_rpn_cls": 0.03427, "loss_rpn_bbox": 0.05343, "loss_cls": 0.24013, "acc": 91.96875, "loss_bbox": 0.26573, "loss_mask": 0.26009, "loss": 0.85365, "time": 0.49275} -{"mode": "train", "epoch": 2, "iter": 3300, "lr": 0.0002, "memory": 12584, "data_time": 0.04423, "loss_rpn_cls": 0.03569, "loss_rpn_bbox": 0.05257, "loss_cls": 0.24129, "acc": 92.17603, "loss_bbox": 0.25712, "loss_mask": 0.25997, "loss": 0.84664, "time": 0.49} -{"mode": "train", "epoch": 2, "iter": 3350, "lr": 0.0002, "memory": 12584, "data_time": 0.04469, "loss_rpn_cls": 0.0356, "loss_rpn_bbox": 0.05517, "loss_cls": 0.25459, "acc": 91.82373, "loss_bbox": 0.26508, "loss_mask": 0.2592, "loss": 0.86963, "time": 0.49948} -{"mode": "train", "epoch": 2, "iter": 3400, "lr": 0.0002, "memory": 12584, "data_time": 0.04599, "loss_rpn_cls": 0.03615, "loss_rpn_bbox": 0.05423, "loss_cls": 0.24463, "acc": 92.22217, "loss_bbox": 0.25872, "loss_mask": 0.26411, "loss": 0.85784, "time": 0.49736} -{"mode": "train", "epoch": 2, "iter": 3450, "lr": 0.0002, "memory": 12584, "data_time": 0.03722, "loss_rpn_cls": 0.03356, "loss_rpn_bbox": 0.05028, "loss_cls": 0.23192, "acc": 92.59546, "loss_bbox": 0.24627, "loss_mask": 0.25669, "loss": 0.81872, "time": 0.48819} -{"mode": "train", "epoch": 2, "iter": 3500, "lr": 0.0002, "memory": 12584, "data_time": 0.04644, "loss_rpn_cls": 0.03824, "loss_rpn_bbox": 0.05708, "loss_cls": 0.25778, "acc": 91.60815, "loss_bbox": 0.2779, "loss_mask": 0.26853, "loss": 0.89953, "time": 0.51807} -{"mode": "train", "epoch": 2, "iter": 3550, "lr": 0.0002, "memory": 12584, "data_time": 0.04296, "loss_rpn_cls": 0.03287, "loss_rpn_bbox": 0.05668, "loss_cls": 0.25816, "acc": 91.60522, "loss_bbox": 0.2759, "loss_mask": 0.26994, "loss": 0.89356, "time": 0.50399} -{"mode": "train", "epoch": 2, "iter": 3600, "lr": 0.0002, "memory": 12584, "data_time": 0.04169, "loss_rpn_cls": 0.03157, "loss_rpn_bbox": 0.05151, "loss_cls": 0.23983, "acc": 92.22021, "loss_bbox": 0.25879, "loss_mask": 0.25451, "loss": 0.8362, "time": 0.50494} -{"mode": "train", "epoch": 2, "iter": 3650, "lr": 0.0002, "memory": 12584, "data_time": 0.04612, "loss_rpn_cls": 0.03752, "loss_rpn_bbox": 0.05562, "loss_cls": 0.24558, "acc": 91.91968, "loss_bbox": 0.26358, "loss_mask": 0.26094, "loss": 0.86322, "time": 0.49733} -{"mode": "train", "epoch": 2, "iter": 3700, "lr": 0.0002, "memory": 12584, "data_time": 0.05812, "loss_rpn_cls": 0.03172, "loss_rpn_bbox": 0.05272, "loss_cls": 0.24943, "acc": 91.76636, "loss_bbox": 0.26285, "loss_mask": 0.25585, "loss": 0.85257, "time": 0.50236} -{"mode": "train", "epoch": 2, "iter": 3750, "lr": 0.0002, 
"memory": 12584, "data_time": 0.04913, "loss_rpn_cls": 0.03238, "loss_rpn_bbox": 0.05431, "loss_cls": 0.24756, "acc": 91.95947, "loss_bbox": 0.26563, "loss_mask": 0.25593, "loss": 0.8558, "time": 0.49377} -{"mode": "train", "epoch": 2, "iter": 3800, "lr": 0.0002, "memory": 12584, "data_time": 0.05039, "loss_rpn_cls": 0.03417, "loss_rpn_bbox": 0.05706, "loss_cls": 0.25579, "acc": 91.63599, "loss_bbox": 0.2745, "loss_mask": 0.2655, "loss": 0.88701, "time": 0.51366} -{"mode": "train", "epoch": 2, "iter": 3850, "lr": 0.0002, "memory": 12584, "data_time": 0.04704, "loss_rpn_cls": 0.03277, "loss_rpn_bbox": 0.05412, "loss_cls": 0.24336, "acc": 91.99121, "loss_bbox": 0.2637, "loss_mask": 0.2533, "loss": 0.84725, "time": 0.49055} -{"mode": "train", "epoch": 2, "iter": 3900, "lr": 0.0002, "memory": 12584, "data_time": 0.04485, "loss_rpn_cls": 0.03405, "loss_rpn_bbox": 0.05486, "loss_cls": 0.25378, "acc": 91.82031, "loss_bbox": 0.27107, "loss_mask": 0.25832, "loss": 0.87208, "time": 0.51969} -{"mode": "train", "epoch": 2, "iter": 3950, "lr": 0.0002, "memory": 12584, "data_time": 0.05487, "loss_rpn_cls": 0.03692, "loss_rpn_bbox": 0.05626, "loss_cls": 0.25421, "acc": 91.53467, "loss_bbox": 0.28021, "loss_mask": 0.26116, "loss": 0.88875, "time": 0.532} -{"mode": "train", "epoch": 2, "iter": 4000, "lr": 0.0002, "memory": 12584, "data_time": 0.04448, "loss_rpn_cls": 0.03225, "loss_rpn_bbox": 0.05429, "loss_cls": 0.24841, "acc": 91.91895, "loss_bbox": 0.26327, "loss_mask": 0.25214, "loss": 0.85036, "time": 0.49995} -{"mode": "train", "epoch": 2, "iter": 4050, "lr": 0.0002, "memory": 12584, "data_time": 0.04586, "loss_rpn_cls": 0.03503, "loss_rpn_bbox": 0.053, "loss_cls": 0.2424, "acc": 92.10522, "loss_bbox": 0.26261, "loss_mask": 0.25494, "loss": 0.84799, "time": 0.60788} -{"mode": "train", "epoch": 2, "iter": 4100, "lr": 0.0002, "memory": 12584, "data_time": 0.04909, "loss_rpn_cls": 0.03557, "loss_rpn_bbox": 0.05364, "loss_cls": 0.24356, "acc": 91.96777, "loss_bbox": 0.25914, "loss_mask": 0.25739, "loss": 0.8493, "time": 0.49866} -{"mode": "train", "epoch": 2, "iter": 4150, "lr": 0.0002, "memory": 12584, "data_time": 0.04791, "loss_rpn_cls": 0.03276, "loss_rpn_bbox": 0.05145, "loss_cls": 0.24593, "acc": 92.26001, "loss_bbox": 0.25283, "loss_mask": 0.25395, "loss": 0.83692, "time": 0.50569} -{"mode": "train", "epoch": 2, "iter": 4200, "lr": 0.0002, "memory": 12584, "data_time": 0.04796, "loss_rpn_cls": 0.03443, "loss_rpn_bbox": 0.05856, "loss_cls": 0.25011, "acc": 91.84277, "loss_bbox": 0.2731, "loss_mask": 0.26343, "loss": 0.87964, "time": 0.51101} -{"mode": "train", "epoch": 2, "iter": 4250, "lr": 0.0002, "memory": 12584, "data_time": 0.04373, "loss_rpn_cls": 0.03518, "loss_rpn_bbox": 0.05133, "loss_cls": 0.24406, "acc": 92.21167, "loss_bbox": 0.25767, "loss_mask": 0.26027, "loss": 0.84852, "time": 0.50413} -{"mode": "train", "epoch": 2, "iter": 4300, "lr": 0.0002, "memory": 12584, "data_time": 0.04658, "loss_rpn_cls": 0.03765, "loss_rpn_bbox": 0.05853, "loss_cls": 0.25877, "acc": 91.50098, "loss_bbox": 0.27714, "loss_mask": 0.25987, "loss": 0.89195, "time": 0.51945} -{"mode": "train", "epoch": 2, "iter": 4350, "lr": 0.0002, "memory": 12584, "data_time": 0.04748, "loss_rpn_cls": 0.03366, "loss_rpn_bbox": 0.05264, "loss_cls": 0.24745, "acc": 91.79932, "loss_bbox": 0.26883, "loss_mask": 0.25137, "loss": 0.85395, "time": 0.51386} -{"mode": "train", "epoch": 2, "iter": 4400, "lr": 0.0002, "memory": 12584, "data_time": 0.04136, "loss_rpn_cls": 0.03476, "loss_rpn_bbox": 0.05391, "loss_cls": 0.25283, "acc": 
91.91284, "loss_bbox": 0.26838, "loss_mask": 0.26131, "loss": 0.87119, "time": 0.59101} -{"mode": "train", "epoch": 2, "iter": 4450, "lr": 0.0002, "memory": 12584, "data_time": 0.05262, "loss_rpn_cls": 0.03397, "loss_rpn_bbox": 0.05412, "loss_cls": 0.249, "acc": 91.71753, "loss_bbox": 0.2752, "loss_mask": 0.25743, "loss": 0.86972, "time": 0.5171} -{"mode": "train", "epoch": 2, "iter": 4500, "lr": 0.0002, "memory": 12584, "data_time": 0.04827, "loss_rpn_cls": 0.03225, "loss_rpn_bbox": 0.05211, "loss_cls": 0.23243, "acc": 92.26318, "loss_bbox": 0.25399, "loss_mask": 0.25566, "loss": 0.82644, "time": 0.50088} -{"mode": "train", "epoch": 2, "iter": 4550, "lr": 0.0002, "memory": 12584, "data_time": 0.04734, "loss_rpn_cls": 0.03872, "loss_rpn_bbox": 0.057, "loss_cls": 0.25879, "acc": 91.5708, "loss_bbox": 0.27556, "loss_mask": 0.26496, "loss": 0.89502, "time": 0.51142} -{"mode": "train", "epoch": 2, "iter": 4600, "lr": 0.0002, "memory": 12584, "data_time": 0.04332, "loss_rpn_cls": 0.03552, "loss_rpn_bbox": 0.05444, "loss_cls": 0.26408, "acc": 91.55713, "loss_bbox": 0.27346, "loss_mask": 0.26413, "loss": 0.89163, "time": 0.50637} -{"mode": "train", "epoch": 2, "iter": 4650, "lr": 0.0002, "memory": 12584, "data_time": 0.04748, "loss_rpn_cls": 0.03833, "loss_rpn_bbox": 0.05368, "loss_cls": 0.25114, "acc": 91.62695, "loss_bbox": 0.27335, "loss_mask": 0.26208, "loss": 0.87857, "time": 0.50147} -{"mode": "train", "epoch": 2, "iter": 4700, "lr": 0.0002, "memory": 12584, "data_time": 0.04582, "loss_rpn_cls": 0.03293, "loss_rpn_bbox": 0.05246, "loss_cls": 0.24411, "acc": 92.1062, "loss_bbox": 0.26057, "loss_mask": 0.25773, "loss": 0.84781, "time": 0.4921} -{"mode": "train", "epoch": 2, "iter": 4750, "lr": 0.0002, "memory": 12584, "data_time": 0.04184, "loss_rpn_cls": 0.0342, "loss_rpn_bbox": 0.05215, "loss_cls": 0.24164, "acc": 92.1084, "loss_bbox": 0.26643, "loss_mask": 0.25151, "loss": 0.84593, "time": 0.50367} -{"mode": "train", "epoch": 2, "iter": 4800, "lr": 0.0002, "memory": 12584, "data_time": 0.04842, "loss_rpn_cls": 0.03291, "loss_rpn_bbox": 0.05338, "loss_cls": 0.23757, "acc": 92.32324, "loss_bbox": 0.25365, "loss_mask": 0.25534, "loss": 0.83286, "time": 0.50076} -{"mode": "train", "epoch": 2, "iter": 4850, "lr": 0.0002, "memory": 12584, "data_time": 0.0417, "loss_rpn_cls": 0.03215, "loss_rpn_bbox": 0.0528, "loss_cls": 0.24055, "acc": 92.22925, "loss_bbox": 0.25608, "loss_mask": 0.24975, "loss": 0.83133, "time": 0.49356} -{"mode": "train", "epoch": 2, "iter": 4900, "lr": 0.0002, "memory": 12584, "data_time": 0.03754, "loss_rpn_cls": 0.03537, "loss_rpn_bbox": 0.05247, "loss_cls": 0.24928, "acc": 91.8103, "loss_bbox": 0.2686, "loss_mask": 0.25174, "loss": 0.85745, "time": 0.51107} -{"mode": "train", "epoch": 2, "iter": 4950, "lr": 0.0002, "memory": 12584, "data_time": 0.0508, "loss_rpn_cls": 0.03223, "loss_rpn_bbox": 0.05186, "loss_cls": 0.24913, "acc": 91.66846, "loss_bbox": 0.26992, "loss_mask": 0.25357, "loss": 0.85671, "time": 0.50936} -{"mode": "train", "epoch": 2, "iter": 5000, "lr": 0.0002, "memory": 12584, "data_time": 0.0503, "loss_rpn_cls": 0.03341, "loss_rpn_bbox": 0.05237, "loss_cls": 0.24997, "acc": 91.76807, "loss_bbox": 0.27283, "loss_mask": 0.25194, "loss": 0.86051, "time": 0.50206} -{"mode": "train", "epoch": 2, "iter": 5050, "lr": 0.0002, "memory": 12584, "data_time": 0.04008, "loss_rpn_cls": 0.03265, "loss_rpn_bbox": 0.05439, "loss_cls": 0.24623, "acc": 92.02124, "loss_bbox": 0.26282, "loss_mask": 0.25679, "loss": 0.85288, "time": 0.51526} -{"mode": "train", "epoch": 2, "iter": 
5100, "lr": 0.0002, "memory": 12584, "data_time": 0.04235, "loss_rpn_cls": 0.03625, "loss_rpn_bbox": 0.05184, "loss_cls": 0.23699, "acc": 92.33643, "loss_bbox": 0.2608, "loss_mask": 0.25714, "loss": 0.84301, "time": 0.50969} -{"mode": "train", "epoch": 2, "iter": 5150, "lr": 0.0002, "memory": 12584, "data_time": 0.04727, "loss_rpn_cls": 0.03372, "loss_rpn_bbox": 0.05347, "loss_cls": 0.25753, "acc": 91.55835, "loss_bbox": 0.26966, "loss_mask": 0.26357, "loss": 0.87795, "time": 0.51286} -{"mode": "train", "epoch": 2, "iter": 5200, "lr": 0.0002, "memory": 12584, "data_time": 0.03872, "loss_rpn_cls": 0.03536, "loss_rpn_bbox": 0.05516, "loss_cls": 0.25501, "acc": 91.57178, "loss_bbox": 0.26845, "loss_mask": 0.25979, "loss": 0.87377, "time": 0.50219} -{"mode": "train", "epoch": 2, "iter": 5250, "lr": 0.0002, "memory": 12584, "data_time": 0.04411, "loss_rpn_cls": 0.03788, "loss_rpn_bbox": 0.05802, "loss_cls": 0.24508, "acc": 91.88037, "loss_bbox": 0.26656, "loss_mask": 0.25621, "loss": 0.86375, "time": 0.51728} -{"mode": "train", "epoch": 2, "iter": 5300, "lr": 0.0002, "memory": 12584, "data_time": 0.04476, "loss_rpn_cls": 0.03433, "loss_rpn_bbox": 0.05462, "loss_cls": 0.24326, "acc": 91.97217, "loss_bbox": 0.27051, "loss_mask": 0.25352, "loss": 0.85624, "time": 0.50552} -{"mode": "train", "epoch": 2, "iter": 5350, "lr": 0.0002, "memory": 12584, "data_time": 0.04359, "loss_rpn_cls": 0.0338, "loss_rpn_bbox": 0.05568, "loss_cls": 0.24458, "acc": 91.84863, "loss_bbox": 0.26737, "loss_mask": 0.25853, "loss": 0.85996, "time": 0.5112} -{"mode": "train", "epoch": 2, "iter": 5400, "lr": 0.0002, "memory": 12584, "data_time": 0.04483, "loss_rpn_cls": 0.0309, "loss_rpn_bbox": 0.05138, "loss_cls": 0.24382, "acc": 92.03247, "loss_bbox": 0.26141, "loss_mask": 0.25712, "loss": 0.84463, "time": 0.49841} -{"mode": "train", "epoch": 2, "iter": 5450, "lr": 0.0002, "memory": 12584, "data_time": 0.0523, "loss_rpn_cls": 0.03527, "loss_rpn_bbox": 0.05533, "loss_cls": 0.24959, "acc": 91.8269, "loss_bbox": 0.2633, "loss_mask": 0.2581, "loss": 0.8616, "time": 0.61273} -{"mode": "train", "epoch": 2, "iter": 5500, "lr": 0.0002, "memory": 12584, "data_time": 0.04169, "loss_rpn_cls": 0.03534, "loss_rpn_bbox": 0.05416, "loss_cls": 0.24582, "acc": 92.08545, "loss_bbox": 0.26528, "loss_mask": 0.25434, "loss": 0.85494, "time": 0.50762} -{"mode": "train", "epoch": 2, "iter": 5550, "lr": 0.0002, "memory": 12584, "data_time": 0.0486, "loss_rpn_cls": 0.03457, "loss_rpn_bbox": 0.05329, "loss_cls": 0.24085, "acc": 91.98486, "loss_bbox": 0.2711, "loss_mask": 0.25314, "loss": 0.85295, "time": 0.49519} -{"mode": "train", "epoch": 2, "iter": 5600, "lr": 0.0002, "memory": 12584, "data_time": 0.04756, "loss_rpn_cls": 0.03539, "loss_rpn_bbox": 0.05717, "loss_cls": 0.25618, "acc": 91.85278, "loss_bbox": 0.26821, "loss_mask": 0.25768, "loss": 0.87464, "time": 0.50262} -{"mode": "train", "epoch": 2, "iter": 5650, "lr": 0.0002, "memory": 12584, "data_time": 0.04096, "loss_rpn_cls": 0.03352, "loss_rpn_bbox": 0.05095, "loss_cls": 0.23966, "acc": 92.15527, "loss_bbox": 0.25775, "loss_mask": 0.25694, "loss": 0.8388, "time": 0.49754} -{"mode": "train", "epoch": 2, "iter": 5700, "lr": 0.0002, "memory": 12584, "data_time": 0.04985, "loss_rpn_cls": 0.03339, "loss_rpn_bbox": 0.05471, "loss_cls": 0.24873, "acc": 92.04956, "loss_bbox": 0.26229, "loss_mask": 0.25903, "loss": 0.85816, "time": 0.50539} -{"mode": "train", "epoch": 2, "iter": 5750, "lr": 0.0002, "memory": 12584, "data_time": 0.04322, "loss_rpn_cls": 0.03452, "loss_rpn_bbox": 0.05301, "loss_cls": 
0.24696, "acc": 91.88623, "loss_bbox": 0.26066, "loss_mask": 0.25914, "loss": 0.8543, "time": 0.5045} -{"mode": "train", "epoch": 2, "iter": 5800, "lr": 0.0002, "memory": 12584, "data_time": 0.03883, "loss_rpn_cls": 0.03253, "loss_rpn_bbox": 0.05574, "loss_cls": 0.24464, "acc": 91.98657, "loss_bbox": 0.26042, "loss_mask": 0.25943, "loss": 0.85275, "time": 0.5172} -{"mode": "train", "epoch": 2, "iter": 5850, "lr": 0.0002, "memory": 12584, "data_time": 0.04808, "loss_rpn_cls": 0.03622, "loss_rpn_bbox": 0.05472, "loss_cls": 0.24317, "acc": 92.13696, "loss_bbox": 0.25891, "loss_mask": 0.25458, "loss": 0.8476, "time": 0.5113} -{"mode": "train", "epoch": 2, "iter": 5900, "lr": 0.0002, "memory": 12584, "data_time": 0.03795, "loss_rpn_cls": 0.03411, "loss_rpn_bbox": 0.05141, "loss_cls": 0.24551, "acc": 91.96655, "loss_bbox": 0.26642, "loss_mask": 0.25344, "loss": 0.85088, "time": 0.49461} -{"mode": "train", "epoch": 2, "iter": 5950, "lr": 0.0002, "memory": 12584, "data_time": 0.04212, "loss_rpn_cls": 0.03316, "loss_rpn_bbox": 0.05211, "loss_cls": 0.23485, "acc": 92.30127, "loss_bbox": 0.25311, "loss_mask": 0.25666, "loss": 0.82989, "time": 0.49792} -{"mode": "train", "epoch": 2, "iter": 6000, "lr": 0.0002, "memory": 12584, "data_time": 0.04345, "loss_rpn_cls": 0.03499, "loss_rpn_bbox": 0.05387, "loss_cls": 0.24791, "acc": 91.97534, "loss_bbox": 0.26203, "loss_mask": 0.25858, "loss": 0.85737, "time": 0.5109} -{"mode": "train", "epoch": 2, "iter": 6050, "lr": 0.0002, "memory": 12584, "data_time": 0.04494, "loss_rpn_cls": 0.03146, "loss_rpn_bbox": 0.05352, "loss_cls": 0.23703, "acc": 92.1521, "loss_bbox": 0.25753, "loss_mask": 0.25602, "loss": 0.83556, "time": 0.50395} -{"mode": "train", "epoch": 2, "iter": 6100, "lr": 0.0002, "memory": 12584, "data_time": 0.04888, "loss_rpn_cls": 0.0307, "loss_rpn_bbox": 0.04974, "loss_cls": 0.23912, "acc": 92.06323, "loss_bbox": 0.26031, "loss_mask": 0.2539, "loss": 0.83377, "time": 0.50733} -{"mode": "train", "epoch": 2, "iter": 6150, "lr": 0.0002, "memory": 12584, "data_time": 0.04576, "loss_rpn_cls": 0.03618, "loss_rpn_bbox": 0.055, "loss_cls": 0.24058, "acc": 92.04492, "loss_bbox": 0.26383, "loss_mask": 0.26013, "loss": 0.85571, "time": 0.50013} -{"mode": "train", "epoch": 2, "iter": 6200, "lr": 0.0002, "memory": 12584, "data_time": 0.0436, "loss_rpn_cls": 0.03662, "loss_rpn_bbox": 0.05321, "loss_cls": 0.25286, "acc": 91.65625, "loss_bbox": 0.2703, "loss_mask": 0.25741, "loss": 0.87041, "time": 0.50266} -{"mode": "train", "epoch": 2, "iter": 6250, "lr": 0.0002, "memory": 12584, "data_time": 0.04892, "loss_rpn_cls": 0.03418, "loss_rpn_bbox": 0.05533, "loss_cls": 0.24801, "acc": 91.8186, "loss_bbox": 0.26744, "loss_mask": 0.2573, "loss": 0.86224, "time": 0.50778} -{"mode": "train", "epoch": 2, "iter": 6300, "lr": 0.0002, "memory": 12584, "data_time": 0.04328, "loss_rpn_cls": 0.03588, "loss_rpn_bbox": 0.0553, "loss_cls": 0.24011, "acc": 92.33057, "loss_bbox": 0.25457, "loss_mask": 0.26056, "loss": 0.84643, "time": 0.50544} -{"mode": "train", "epoch": 2, "iter": 6350, "lr": 0.0002, "memory": 12584, "data_time": 0.03715, "loss_rpn_cls": 0.03412, "loss_rpn_bbox": 0.05186, "loss_cls": 0.25226, "acc": 91.97095, "loss_bbox": 0.25619, "loss_mask": 0.25293, "loss": 0.84736, "time": 0.49343} -{"mode": "train", "epoch": 2, "iter": 6400, "lr": 0.0002, "memory": 12584, "data_time": 0.04054, "loss_rpn_cls": 0.03378, "loss_rpn_bbox": 0.05712, "loss_cls": 0.25759, "acc": 91.68433, "loss_bbox": 0.27396, "loss_mask": 0.26371, "loss": 0.88616, "time": 0.49996} -{"mode": "train", 
"epoch": 2, "iter": 6450, "lr": 0.0002, "memory": 12584, "data_time": 0.03848, "loss_rpn_cls": 0.02979, "loss_rpn_bbox": 0.05116, "loss_cls": 0.22992, "acc": 92.46655, "loss_bbox": 0.24795, "loss_mask": 0.2552, "loss": 0.81402, "time": 0.54343} -{"mode": "train", "epoch": 2, "iter": 6500, "lr": 0.0002, "memory": 12584, "data_time": 0.04932, "loss_rpn_cls": 0.03437, "loss_rpn_bbox": 0.05667, "loss_cls": 0.24887, "acc": 91.80029, "loss_bbox": 0.26403, "loss_mask": 0.26238, "loss": 0.86632, "time": 0.64855} -{"mode": "train", "epoch": 2, "iter": 6550, "lr": 0.0002, "memory": 12584, "data_time": 0.04344, "loss_rpn_cls": 0.03327, "loss_rpn_bbox": 0.05095, "loss_cls": 0.24105, "acc": 92.23608, "loss_bbox": 0.25025, "loss_mask": 0.25904, "loss": 0.83455, "time": 0.49696} -{"mode": "train", "epoch": 2, "iter": 6600, "lr": 0.0002, "memory": 12584, "data_time": 0.04085, "loss_rpn_cls": 0.0352, "loss_rpn_bbox": 0.05096, "loss_cls": 0.24124, "acc": 92.177, "loss_bbox": 0.25784, "loss_mask": 0.25078, "loss": 0.83602, "time": 0.49848} -{"mode": "train", "epoch": 2, "iter": 6650, "lr": 0.0002, "memory": 12584, "data_time": 0.05087, "loss_rpn_cls": 0.03282, "loss_rpn_bbox": 0.05496, "loss_cls": 0.25058, "acc": 91.79785, "loss_bbox": 0.26841, "loss_mask": 0.25467, "loss": 0.86144, "time": 0.50173} -{"mode": "train", "epoch": 2, "iter": 6700, "lr": 0.0002, "memory": 12584, "data_time": 0.03851, "loss_rpn_cls": 0.03448, "loss_rpn_bbox": 0.05327, "loss_cls": 0.25311, "acc": 91.72729, "loss_bbox": 0.27228, "loss_mask": 0.25869, "loss": 0.87182, "time": 0.5157} -{"mode": "train", "epoch": 2, "iter": 6750, "lr": 0.0002, "memory": 12584, "data_time": 0.04229, "loss_rpn_cls": 0.0333, "loss_rpn_bbox": 0.05505, "loss_cls": 0.2316, "acc": 92.22046, "loss_bbox": 0.25301, "loss_mask": 0.26421, "loss": 0.83718, "time": 0.49477} -{"mode": "train", "epoch": 2, "iter": 6800, "lr": 0.0002, "memory": 12584, "data_time": 0.04436, "loss_rpn_cls": 0.03071, "loss_rpn_bbox": 0.05102, "loss_cls": 0.23818, "acc": 92.17627, "loss_bbox": 0.25284, "loss_mask": 0.25597, "loss": 0.82873, "time": 0.49987} -{"mode": "train", "epoch": 2, "iter": 6850, "lr": 0.0002, "memory": 12584, "data_time": 0.04036, "loss_rpn_cls": 0.03179, "loss_rpn_bbox": 0.05164, "loss_cls": 0.23592, "acc": 92.26562, "loss_bbox": 0.25449, "loss_mask": 0.25548, "loss": 0.82931, "time": 0.49907} -{"mode": "train", "epoch": 2, "iter": 6900, "lr": 0.0002, "memory": 12584, "data_time": 0.04038, "loss_rpn_cls": 0.03281, "loss_rpn_bbox": 0.05261, "loss_cls": 0.24391, "acc": 92.04663, "loss_bbox": 0.26023, "loss_mask": 0.25745, "loss": 0.847, "time": 0.48521} -{"mode": "train", "epoch": 2, "iter": 6950, "lr": 0.0002, "memory": 12584, "data_time": 0.0503, "loss_rpn_cls": 0.03491, "loss_rpn_bbox": 0.05288, "loss_cls": 0.23157, "acc": 92.48926, "loss_bbox": 0.25018, "loss_mask": 0.25433, "loss": 0.82386, "time": 0.49253} -{"mode": "train", "epoch": 2, "iter": 7000, "lr": 0.0002, "memory": 12584, "data_time": 0.05258, "loss_rpn_cls": 0.03459, "loss_rpn_bbox": 0.05359, "loss_cls": 0.2399, "acc": 91.96606, "loss_bbox": 0.26066, "loss_mask": 0.25541, "loss": 0.84415, "time": 0.50709} -{"mode": "train", "epoch": 2, "iter": 7050, "lr": 0.0002, "memory": 12584, "data_time": 0.041, "loss_rpn_cls": 0.03083, "loss_rpn_bbox": 0.05194, "loss_cls": 0.23957, "acc": 92.15747, "loss_bbox": 0.25904, "loss_mask": 0.25798, "loss": 0.83935, "time": 0.48212} -{"mode": "train", "epoch": 2, "iter": 7100, "lr": 0.0002, "memory": 12584, "data_time": 0.04149, "loss_rpn_cls": 0.03353, "loss_rpn_bbox": 
0.05397, "loss_cls": 0.24511, "acc": 91.87256, "loss_bbox": 0.26455, "loss_mask": 0.25777, "loss": 0.85494, "time": 0.49793} -{"mode": "train", "epoch": 2, "iter": 7150, "lr": 0.0002, "memory": 12584, "data_time": 0.0482, "loss_rpn_cls": 0.03354, "loss_rpn_bbox": 0.05304, "loss_cls": 0.23475, "acc": 92.27051, "loss_bbox": 0.25368, "loss_mask": 0.25263, "loss": 0.82765, "time": 0.49974} -{"mode": "train", "epoch": 2, "iter": 7200, "lr": 0.0002, "memory": 12584, "data_time": 0.04789, "loss_rpn_cls": 0.03215, "loss_rpn_bbox": 0.04868, "loss_cls": 0.22988, "acc": 92.47607, "loss_bbox": 0.24858, "loss_mask": 0.25609, "loss": 0.81538, "time": 0.4933} -{"mode": "train", "epoch": 2, "iter": 7250, "lr": 0.0002, "memory": 12584, "data_time": 0.04749, "loss_rpn_cls": 0.03335, "loss_rpn_bbox": 0.04977, "loss_cls": 0.23042, "acc": 92.40186, "loss_bbox": 0.25307, "loss_mask": 0.24907, "loss": 0.81567, "time": 0.49674} -{"mode": "train", "epoch": 2, "iter": 7300, "lr": 0.0002, "memory": 12584, "data_time": 0.04802, "loss_rpn_cls": 0.03471, "loss_rpn_bbox": 0.05269, "loss_cls": 0.24267, "acc": 92.23364, "loss_bbox": 0.25719, "loss_mask": 0.25203, "loss": 0.83929, "time": 0.49242} -{"mode": "val", "epoch": 2, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3363, "bbox_mAP_50": 0.5582, "bbox_mAP_75": 0.3671, "bbox_mAP_s": 0.1997, "bbox_mAP_m": 0.3764, "bbox_mAP_l": 0.4415, "bbox_mAP_copypaste": "0.3363 0.5582 0.3671 0.1997 0.3764 0.4415", "segm_mAP": 0.3271, "segm_mAP_50": 0.5295, "segm_mAP_75": 0.3518, "segm_mAP_s": 0.1515, "segm_mAP_m": 0.3643, "segm_mAP_l": 0.4822, "segm_mAP_copypaste": "0.3271 0.5295 0.3518 0.1515 0.3643 0.4822"} -{"mode": "train", "epoch": 3, "iter": 50, "lr": 0.0002, "memory": 12584, "data_time": 0.119, "loss_rpn_cls": 0.02924, "loss_rpn_bbox": 0.05233, "loss_cls": 0.22249, "acc": 92.42749, "loss_bbox": 0.25227, "loss_mask": 0.25359, "loss": 0.80992, "time": 0.61828} -{"mode": "train", "epoch": 3, "iter": 100, "lr": 0.0002, "memory": 12584, "data_time": 0.04413, "loss_rpn_cls": 0.02977, "loss_rpn_bbox": 0.05167, "loss_cls": 0.22508, "acc": 92.50269, "loss_bbox": 0.25218, "loss_mask": 0.24872, "loss": 0.80742, "time": 0.53688} -{"mode": "train", "epoch": 3, "iter": 150, "lr": 0.0002, "memory": 12584, "data_time": 0.03953, "loss_rpn_cls": 0.02588, "loss_rpn_bbox": 0.05082, "loss_cls": 0.22352, "acc": 92.4729, "loss_bbox": 0.24803, "loss_mask": 0.24063, "loss": 0.78889, "time": 0.49848} -{"mode": "train", "epoch": 3, "iter": 200, "lr": 0.0002, "memory": 12584, "data_time": 0.04831, "loss_rpn_cls": 0.02981, "loss_rpn_bbox": 0.05184, "loss_cls": 0.22983, "acc": 92.15576, "loss_bbox": 0.25575, "loss_mask": 0.24919, "loss": 0.81642, "time": 0.61305} -{"mode": "train", "epoch": 3, "iter": 250, "lr": 0.0002, "memory": 12584, "data_time": 0.05749, "loss_rpn_cls": 0.03124, "loss_rpn_bbox": 0.05439, "loss_cls": 0.24873, "acc": 91.66357, "loss_bbox": 0.26953, "loss_mask": 0.25742, "loss": 0.86129, "time": 0.52509} -{"mode": "train", "epoch": 3, "iter": 300, "lr": 0.0002, "memory": 12584, "data_time": 0.04966, "loss_rpn_cls": 0.02889, "loss_rpn_bbox": 0.04796, "loss_cls": 0.21658, "acc": 92.59009, "loss_bbox": 0.24623, "loss_mask": 0.25339, "loss": 0.79306, "time": 0.50845} -{"mode": "train", "epoch": 3, "iter": 350, "lr": 0.0002, "memory": 12584, "data_time": 0.05189, "loss_rpn_cls": 0.03049, "loss_rpn_bbox": 0.0539, "loss_cls": 0.22938, "acc": 92.23071, "loss_bbox": 0.26254, "loss_mask": 0.25403, "loss": 0.83035, "time": 0.52582} -{"mode": "train", "epoch": 3, "iter": 400, "lr": 0.0002, "memory": 12584, 
"data_time": 0.03908, "loss_rpn_cls": 0.02829, "loss_rpn_bbox": 0.04838, "loss_cls": 0.21728, "acc": 92.63208, "loss_bbox": 0.25217, "loss_mask": 0.24933, "loss": 0.79545, "time": 0.51912} -{"mode": "train", "epoch": 3, "iter": 450, "lr": 0.0002, "memory": 12584, "data_time": 0.05336, "loss_rpn_cls": 0.02922, "loss_rpn_bbox": 0.05249, "loss_cls": 0.23091, "acc": 92.09595, "loss_bbox": 0.26095, "loss_mask": 0.25152, "loss": 0.82508, "time": 0.52342} -{"mode": "train", "epoch": 3, "iter": 500, "lr": 0.0002, "memory": 12585, "data_time": 0.05234, "loss_rpn_cls": 0.02761, "loss_rpn_bbox": 0.05066, "loss_cls": 0.23155, "acc": 92.19556, "loss_bbox": 0.25429, "loss_mask": 0.2492, "loss": 0.81331, "time": 0.52579} -{"mode": "train", "epoch": 3, "iter": 550, "lr": 0.0002, "memory": 12585, "data_time": 0.04925, "loss_rpn_cls": 0.02696, "loss_rpn_bbox": 0.04855, "loss_cls": 0.22473, "acc": 92.52588, "loss_bbox": 0.25084, "loss_mask": 0.24629, "loss": 0.79737, "time": 0.49997} -{"mode": "train", "epoch": 3, "iter": 600, "lr": 0.0002, "memory": 12585, "data_time": 0.04249, "loss_rpn_cls": 0.02725, "loss_rpn_bbox": 0.04899, "loss_cls": 0.22247, "acc": 92.53003, "loss_bbox": 0.24652, "loss_mask": 0.24758, "loss": 0.79281, "time": 0.50888} -{"mode": "train", "epoch": 3, "iter": 650, "lr": 0.0002, "memory": 12585, "data_time": 0.04806, "loss_rpn_cls": 0.03059, "loss_rpn_bbox": 0.05049, "loss_cls": 0.23351, "acc": 92.10791, "loss_bbox": 0.26359, "loss_mask": 0.25588, "loss": 0.83406, "time": 0.56681} -{"mode": "train", "epoch": 3, "iter": 700, "lr": 0.0002, "memory": 12585, "data_time": 0.04371, "loss_rpn_cls": 0.02915, "loss_rpn_bbox": 0.04982, "loss_cls": 0.22299, "acc": 92.37866, "loss_bbox": 0.24824, "loss_mask": 0.24293, "loss": 0.79314, "time": 0.50897} -{"mode": "train", "epoch": 3, "iter": 750, "lr": 0.0002, "memory": 12585, "data_time": 0.04864, "loss_rpn_cls": 0.02923, "loss_rpn_bbox": 0.05221, "loss_cls": 0.23399, "acc": 92.1062, "loss_bbox": 0.25899, "loss_mask": 0.2468, "loss": 0.82121, "time": 0.5246} -{"mode": "train", "epoch": 3, "iter": 800, "lr": 0.0002, "memory": 12585, "data_time": 0.05011, "loss_rpn_cls": 0.03009, "loss_rpn_bbox": 0.05098, "loss_cls": 0.23272, "acc": 92.19336, "loss_bbox": 0.2525, "loss_mask": 0.24482, "loss": 0.81111, "time": 0.50862} -{"mode": "train", "epoch": 3, "iter": 850, "lr": 0.0002, "memory": 12585, "data_time": 0.0536, "loss_rpn_cls": 0.03116, "loss_rpn_bbox": 0.0529, "loss_cls": 0.23374, "acc": 92.13208, "loss_bbox": 0.25688, "loss_mask": 0.25475, "loss": 0.82943, "time": 0.55739} -{"mode": "train", "epoch": 3, "iter": 900, "lr": 0.0002, "memory": 12585, "data_time": 0.04771, "loss_rpn_cls": 0.02883, "loss_rpn_bbox": 0.05155, "loss_cls": 0.22732, "acc": 92.30103, "loss_bbox": 0.25206, "loss_mask": 0.24528, "loss": 0.80504, "time": 0.51089} -{"mode": "train", "epoch": 3, "iter": 950, "lr": 0.0002, "memory": 12585, "data_time": 0.04357, "loss_rpn_cls": 0.03056, "loss_rpn_bbox": 0.04835, "loss_cls": 0.22782, "acc": 92.44946, "loss_bbox": 0.24567, "loss_mask": 0.24781, "loss": 0.8002, "time": 0.52408} -{"mode": "train", "epoch": 3, "iter": 1000, "lr": 0.0002, "memory": 12585, "data_time": 0.04394, "loss_rpn_cls": 0.03006, "loss_rpn_bbox": 0.05017, "loss_cls": 0.22473, "acc": 92.40967, "loss_bbox": 0.25594, "loss_mask": 0.2441, "loss": 0.80499, "time": 0.49916} -{"mode": "train", "epoch": 3, "iter": 1050, "lr": 0.0002, "memory": 12585, "data_time": 0.04414, "loss_rpn_cls": 0.03073, "loss_rpn_bbox": 0.05033, "loss_cls": 0.22047, "acc": 92.52612, "loss_bbox": 
0.2477, "loss_mask": 0.24457, "loss": 0.7938, "time": 0.50011} -{"mode": "train", "epoch": 3, "iter": 1100, "lr": 0.0002, "memory": 12585, "data_time": 0.04908, "loss_rpn_cls": 0.03055, "loss_rpn_bbox": 0.05066, "loss_cls": 0.22516, "acc": 92.43994, "loss_bbox": 0.25097, "loss_mask": 0.24217, "loss": 0.79952, "time": 0.51902} -{"mode": "train", "epoch": 3, "iter": 1150, "lr": 0.0002, "memory": 12585, "data_time": 0.04819, "loss_rpn_cls": 0.03057, "loss_rpn_bbox": 0.05007, "loss_cls": 0.22626, "acc": 92.33301, "loss_bbox": 0.25213, "loss_mask": 0.25297, "loss": 0.812, "time": 0.53337} -{"mode": "train", "epoch": 3, "iter": 1200, "lr": 0.0002, "memory": 12585, "data_time": 0.04588, "loss_rpn_cls": 0.02958, "loss_rpn_bbox": 0.05076, "loss_cls": 0.22863, "acc": 92.08398, "loss_bbox": 0.26299, "loss_mask": 0.256, "loss": 0.82796, "time": 0.51152} -{"mode": "train", "epoch": 3, "iter": 1250, "lr": 0.0002, "memory": 12585, "data_time": 0.04212, "loss_rpn_cls": 0.02827, "loss_rpn_bbox": 0.05106, "loss_cls": 0.23444, "acc": 92.13892, "loss_bbox": 0.25854, "loss_mask": 0.25122, "loss": 0.82353, "time": 0.50909} -{"mode": "train", "epoch": 3, "iter": 1300, "lr": 0.0002, "memory": 12585, "data_time": 0.04566, "loss_rpn_cls": 0.02869, "loss_rpn_bbox": 0.05, "loss_cls": 0.22239, "acc": 92.45044, "loss_bbox": 0.24812, "loss_mask": 0.24839, "loss": 0.79758, "time": 0.4964} -{"mode": "train", "epoch": 3, "iter": 1350, "lr": 0.0002, "memory": 12585, "data_time": 0.04373, "loss_rpn_cls": 0.03249, "loss_rpn_bbox": 0.05527, "loss_cls": 0.22699, "acc": 92.36304, "loss_bbox": 0.25294, "loss_mask": 0.24849, "loss": 0.81618, "time": 0.51547} -{"mode": "train", "epoch": 3, "iter": 1400, "lr": 0.0002, "memory": 12585, "data_time": 0.05177, "loss_rpn_cls": 0.03124, "loss_rpn_bbox": 0.05007, "loss_cls": 0.22732, "acc": 92.46045, "loss_bbox": 0.251, "loss_mask": 0.25166, "loss": 0.81129, "time": 0.57075} -{"mode": "train", "epoch": 3, "iter": 1450, "lr": 0.0002, "memory": 12585, "data_time": 0.05262, "loss_rpn_cls": 0.03336, "loss_rpn_bbox": 0.05558, "loss_cls": 0.23131, "acc": 92.22485, "loss_bbox": 0.25706, "loss_mask": 0.24798, "loss": 0.82529, "time": 0.52342} -{"mode": "train", "epoch": 3, "iter": 1500, "lr": 0.0002, "memory": 12585, "data_time": 0.0487, "loss_rpn_cls": 0.02786, "loss_rpn_bbox": 0.04897, "loss_cls": 0.22956, "acc": 92.36523, "loss_bbox": 0.25106, "loss_mask": 0.24772, "loss": 0.80517, "time": 0.50243} -{"mode": "train", "epoch": 3, "iter": 1550, "lr": 0.0002, "memory": 12585, "data_time": 0.04405, "loss_rpn_cls": 0.02987, "loss_rpn_bbox": 0.05178, "loss_cls": 0.2288, "acc": 92.40308, "loss_bbox": 0.24902, "loss_mask": 0.24807, "loss": 0.80755, "time": 0.50194} -{"mode": "train", "epoch": 3, "iter": 1600, "lr": 0.0002, "memory": 12585, "data_time": 0.04488, "loss_rpn_cls": 0.03426, "loss_rpn_bbox": 0.05483, "loss_cls": 0.24364, "acc": 91.83496, "loss_bbox": 0.26693, "loss_mask": 0.25256, "loss": 0.85223, "time": 0.52948} -{"mode": "train", "epoch": 3, "iter": 1650, "lr": 0.0002, "memory": 12585, "data_time": 0.0503, "loss_rpn_cls": 0.02969, "loss_rpn_bbox": 0.05048, "loss_cls": 0.23418, "acc": 92.17212, "loss_bbox": 0.25858, "loss_mask": 0.25111, "loss": 0.82404, "time": 0.60176} -{"mode": "train", "epoch": 3, "iter": 1700, "lr": 0.0002, "memory": 12585, "data_time": 0.03964, "loss_rpn_cls": 0.03266, "loss_rpn_bbox": 0.05198, "loss_cls": 0.23146, "acc": 92.29468, "loss_bbox": 0.25598, "loss_mask": 0.24735, "loss": 0.81942, "time": 0.61883} -{"mode": "train", "epoch": 3, "iter": 1750, "lr": 0.0002, 
"memory": 12585, "data_time": 0.04531, "loss_rpn_cls": 0.03052, "loss_rpn_bbox": 0.05296, "loss_cls": 0.22972, "acc": 92.46387, "loss_bbox": 0.25259, "loss_mask": 0.25101, "loss": 0.81679, "time": 0.50166} -{"mode": "train", "epoch": 3, "iter": 1800, "lr": 0.0002, "memory": 12585, "data_time": 0.05252, "loss_rpn_cls": 0.03194, "loss_rpn_bbox": 0.05245, "loss_cls": 0.23193, "acc": 92.20776, "loss_bbox": 0.25618, "loss_mask": 0.24795, "loss": 0.82045, "time": 0.50594} -{"mode": "train", "epoch": 3, "iter": 1850, "lr": 0.0002, "memory": 12585, "data_time": 0.04631, "loss_rpn_cls": 0.03105, "loss_rpn_bbox": 0.05196, "loss_cls": 0.23732, "acc": 92.03857, "loss_bbox": 0.2614, "loss_mask": 0.25632, "loss": 0.83806, "time": 0.50424} -{"mode": "train", "epoch": 3, "iter": 1900, "lr": 0.0002, "memory": 12585, "data_time": 0.04349, "loss_rpn_cls": 0.03026, "loss_rpn_bbox": 0.05228, "loss_cls": 0.22849, "acc": 92.28809, "loss_bbox": 0.25272, "loss_mask": 0.24525, "loss": 0.809, "time": 0.51361} -{"mode": "train", "epoch": 3, "iter": 1950, "lr": 0.0002, "memory": 12585, "data_time": 0.04932, "loss_rpn_cls": 0.0302, "loss_rpn_bbox": 0.04915, "loss_cls": 0.23048, "acc": 92.31689, "loss_bbox": 0.24853, "loss_mask": 0.24497, "loss": 0.80333, "time": 0.49693} -{"mode": "train", "epoch": 3, "iter": 2000, "lr": 0.0002, "memory": 12585, "data_time": 0.04168, "loss_rpn_cls": 0.0331, "loss_rpn_bbox": 0.05216, "loss_cls": 0.22808, "acc": 92.45679, "loss_bbox": 0.24733, "loss_mask": 0.25063, "loss": 0.8113, "time": 0.50588} -{"mode": "train", "epoch": 3, "iter": 2050, "lr": 0.0002, "memory": 12586, "data_time": 0.0526, "loss_rpn_cls": 0.03142, "loss_rpn_bbox": 0.05398, "loss_cls": 0.23413, "acc": 92.12549, "loss_bbox": 0.26287, "loss_mask": 0.25145, "loss": 0.83385, "time": 0.51584} -{"mode": "train", "epoch": 3, "iter": 2100, "lr": 0.0002, "memory": 12586, "data_time": 0.04965, "loss_rpn_cls": 0.03231, "loss_rpn_bbox": 0.05584, "loss_cls": 0.22622, "acc": 92.26001, "loss_bbox": 0.25589, "loss_mask": 0.24766, "loss": 0.81792, "time": 0.52486} -{"mode": "train", "epoch": 3, "iter": 2150, "lr": 0.0002, "memory": 12586, "data_time": 0.04282, "loss_rpn_cls": 0.03025, "loss_rpn_bbox": 0.05039, "loss_cls": 0.21952, "acc": 92.55347, "loss_bbox": 0.24588, "loss_mask": 0.24942, "loss": 0.79547, "time": 0.51133} -{"mode": "train", "epoch": 3, "iter": 2200, "lr": 0.0002, "memory": 12586, "data_time": 0.05234, "loss_rpn_cls": 0.0305, "loss_rpn_bbox": 0.0528, "loss_cls": 0.23372, "acc": 92.18457, "loss_bbox": 0.24975, "loss_mask": 0.25535, "loss": 0.82212, "time": 0.51899} -{"mode": "train", "epoch": 3, "iter": 2250, "lr": 0.0002, "memory": 12586, "data_time": 0.05362, "loss_rpn_cls": 0.03068, "loss_rpn_bbox": 0.05289, "loss_cls": 0.22091, "acc": 92.5293, "loss_bbox": 0.25049, "loss_mask": 0.24701, "loss": 0.80197, "time": 0.51764} -{"mode": "train", "epoch": 3, "iter": 2300, "lr": 0.0002, "memory": 12586, "data_time": 0.0511, "loss_rpn_cls": 0.03196, "loss_rpn_bbox": 0.04956, "loss_cls": 0.23205, "acc": 92.22559, "loss_bbox": 0.25444, "loss_mask": 0.24517, "loss": 0.81317, "time": 0.49359} -{"mode": "train", "epoch": 3, "iter": 2350, "lr": 0.0002, "memory": 12586, "data_time": 0.04276, "loss_rpn_cls": 0.03095, "loss_rpn_bbox": 0.05176, "loss_cls": 0.23837, "acc": 91.99609, "loss_bbox": 0.26306, "loss_mask": 0.25407, "loss": 0.83821, "time": 0.50939} -{"mode": "train", "epoch": 3, "iter": 2400, "lr": 0.0002, "memory": 12586, "data_time": 0.04469, "loss_rpn_cls": 0.0315, "loss_rpn_bbox": 0.04946, "loss_cls": 0.22811, "acc": 
92.40845, "loss_bbox": 0.24866, "loss_mask": 0.24403, "loss": 0.80176, "time": 0.5095} -{"mode": "train", "epoch": 3, "iter": 2450, "lr": 0.0002, "memory": 12586, "data_time": 0.04302, "loss_rpn_cls": 0.03282, "loss_rpn_bbox": 0.05324, "loss_cls": 0.23018, "acc": 92.36255, "loss_bbox": 0.25644, "loss_mask": 0.25073, "loss": 0.82341, "time": 0.51028} -{"mode": "train", "epoch": 3, "iter": 2500, "lr": 0.0002, "memory": 12586, "data_time": 0.04128, "loss_rpn_cls": 0.02953, "loss_rpn_bbox": 0.04941, "loss_cls": 0.23061, "acc": 92.24536, "loss_bbox": 0.25206, "loss_mask": 0.24694, "loss": 0.80856, "time": 0.50863} -{"mode": "train", "epoch": 3, "iter": 2550, "lr": 0.0002, "memory": 12586, "data_time": 0.04212, "loss_rpn_cls": 0.02939, "loss_rpn_bbox": 0.04928, "loss_cls": 0.22136, "acc": 92.59033, "loss_bbox": 0.24946, "loss_mask": 0.25042, "loss": 0.79992, "time": 0.54567} -{"mode": "train", "epoch": 3, "iter": 2600, "lr": 0.0002, "memory": 12586, "data_time": 0.04087, "loss_rpn_cls": 0.03197, "loss_rpn_bbox": 0.05336, "loss_cls": 0.24026, "acc": 91.93945, "loss_bbox": 0.26326, "loss_mask": 0.26002, "loss": 0.84886, "time": 0.51302} -{"mode": "train", "epoch": 3, "iter": 2650, "lr": 0.0002, "memory": 12586, "data_time": 0.05361, "loss_rpn_cls": 0.02891, "loss_rpn_bbox": 0.05136, "loss_cls": 0.23929, "acc": 92.13135, "loss_bbox": 0.25192, "loss_mask": 0.25389, "loss": 0.82537, "time": 0.52353} -{"mode": "train", "epoch": 3, "iter": 2700, "lr": 0.0002, "memory": 12586, "data_time": 0.04134, "loss_rpn_cls": 0.03183, "loss_rpn_bbox": 0.05362, "loss_cls": 0.23478, "acc": 92.0144, "loss_bbox": 0.2621, "loss_mask": 0.25193, "loss": 0.83425, "time": 0.51113} -{"mode": "train", "epoch": 3, "iter": 2750, "lr": 0.0002, "memory": 12586, "data_time": 0.04589, "loss_rpn_cls": 0.03086, "loss_rpn_bbox": 0.05185, "loss_cls": 0.236, "acc": 92.01587, "loss_bbox": 0.26136, "loss_mask": 0.25334, "loss": 0.83341, "time": 0.50859} -{"mode": "train", "epoch": 3, "iter": 2800, "lr": 0.0002, "memory": 12586, "data_time": 0.04239, "loss_rpn_cls": 0.02861, "loss_rpn_bbox": 0.04997, "loss_cls": 0.22793, "acc": 92.44849, "loss_bbox": 0.24581, "loss_mask": 0.24631, "loss": 0.79863, "time": 0.50355} -{"mode": "train", "epoch": 3, "iter": 2850, "lr": 0.0002, "memory": 12586, "data_time": 0.04493, "loss_rpn_cls": 0.03095, "loss_rpn_bbox": 0.0528, "loss_cls": 0.23118, "acc": 92.33105, "loss_bbox": 0.25172, "loss_mask": 0.25154, "loss": 0.81819, "time": 0.52188} -{"mode": "train", "epoch": 3, "iter": 2900, "lr": 0.0002, "memory": 12586, "data_time": 0.0516, "loss_rpn_cls": 0.03146, "loss_rpn_bbox": 0.05098, "loss_cls": 0.23071, "acc": 92.21021, "loss_bbox": 0.25881, "loss_mask": 0.24891, "loss": 0.82087, "time": 0.5185} -{"mode": "train", "epoch": 3, "iter": 2950, "lr": 0.0002, "memory": 12586, "data_time": 0.04341, "loss_rpn_cls": 0.02799, "loss_rpn_bbox": 0.04973, "loss_cls": 0.22098, "acc": 92.43774, "loss_bbox": 0.24688, "loss_mask": 0.25048, "loss": 0.79607, "time": 0.50727} -{"mode": "train", "epoch": 3, "iter": 3000, "lr": 0.0002, "memory": 12586, "data_time": 0.04272, "loss_rpn_cls": 0.03121, "loss_rpn_bbox": 0.05038, "loss_cls": 0.2351, "acc": 92.20361, "loss_bbox": 0.26136, "loss_mask": 0.25752, "loss": 0.83556, "time": 0.51476} -{"mode": "train", "epoch": 3, "iter": 3050, "lr": 0.0002, "memory": 12586, "data_time": 0.05, "loss_rpn_cls": 0.02913, "loss_rpn_bbox": 0.05143, "loss_cls": 0.22958, "acc": 92.22729, "loss_bbox": 0.25842, "loss_mask": 0.25023, "loss": 0.81879, "time": 0.51039} -{"mode": "train", "epoch": 3, 
"iter": 3100, "lr": 0.0002, "memory": 12586, "data_time": 0.05054, "loss_rpn_cls": 0.02815, "loss_rpn_bbox": 0.05017, "loss_cls": 0.22505, "acc": 92.54663, "loss_bbox": 0.24817, "loss_mask": 0.24721, "loss": 0.79875, "time": 0.50824} -{"mode": "train", "epoch": 3, "iter": 3150, "lr": 0.0002, "memory": 12586, "data_time": 0.04501, "loss_rpn_cls": 0.02859, "loss_rpn_bbox": 0.04997, "loss_cls": 0.22295, "acc": 92.4812, "loss_bbox": 0.25254, "loss_mask": 0.24338, "loss": 0.79743, "time": 0.50575} -{"mode": "train", "epoch": 3, "iter": 3200, "lr": 0.0002, "memory": 12586, "data_time": 0.04449, "loss_rpn_cls": 0.03161, "loss_rpn_bbox": 0.05426, "loss_cls": 0.2333, "acc": 92.12793, "loss_bbox": 0.25911, "loss_mask": 0.25371, "loss": 0.832, "time": 0.50679} -{"mode": "train", "epoch": 3, "iter": 3250, "lr": 0.0002, "memory": 12586, "data_time": 0.05023, "loss_rpn_cls": 0.03262, "loss_rpn_bbox": 0.05114, "loss_cls": 0.22812, "acc": 92.32812, "loss_bbox": 0.25137, "loss_mask": 0.2491, "loss": 0.81235, "time": 0.50924} -{"mode": "train", "epoch": 3, "iter": 3300, "lr": 0.0002, "memory": 12586, "data_time": 0.04856, "loss_rpn_cls": 0.02911, "loss_rpn_bbox": 0.04842, "loss_cls": 0.22575, "acc": 92.47437, "loss_bbox": 0.2433, "loss_mask": 0.24533, "loss": 0.79191, "time": 0.50996} -{"mode": "train", "epoch": 3, "iter": 3350, "lr": 0.0002, "memory": 12586, "data_time": 0.05162, "loss_rpn_cls": 0.03102, "loss_rpn_bbox": 0.05392, "loss_cls": 0.23227, "acc": 91.97046, "loss_bbox": 0.26272, "loss_mask": 0.25203, "loss": 0.83196, "time": 0.51359} -{"mode": "train", "epoch": 3, "iter": 3400, "lr": 0.0002, "memory": 12586, "data_time": 0.04404, "loss_rpn_cls": 0.02799, "loss_rpn_bbox": 0.05089, "loss_cls": 0.22395, "acc": 92.39941, "loss_bbox": 0.24744, "loss_mask": 0.24112, "loss": 0.7914, "time": 0.49161} -{"mode": "train", "epoch": 3, "iter": 3450, "lr": 0.0002, "memory": 12586, "data_time": 0.03589, "loss_rpn_cls": 0.0311, "loss_rpn_bbox": 0.05169, "loss_cls": 0.23667, "acc": 92.18335, "loss_bbox": 0.25352, "loss_mask": 0.24882, "loss": 0.82181, "time": 0.50491} -{"mode": "train", "epoch": 3, "iter": 3500, "lr": 0.0002, "memory": 12586, "data_time": 0.0467, "loss_rpn_cls": 0.03299, "loss_rpn_bbox": 0.0543, "loss_cls": 0.23763, "acc": 92.19263, "loss_bbox": 0.26211, "loss_mask": 0.25533, "loss": 0.84236, "time": 0.51439} -{"mode": "train", "epoch": 3, "iter": 3550, "lr": 0.0002, "memory": 12586, "data_time": 0.04441, "loss_rpn_cls": 0.03182, "loss_rpn_bbox": 0.05133, "loss_cls": 0.22705, "acc": 92.35327, "loss_bbox": 0.2506, "loss_mask": 0.24738, "loss": 0.80817, "time": 0.50778} -{"mode": "train", "epoch": 3, "iter": 3600, "lr": 0.0002, "memory": 12586, "data_time": 0.05215, "loss_rpn_cls": 0.03019, "loss_rpn_bbox": 0.05097, "loss_cls": 0.23109, "acc": 92.18481, "loss_bbox": 0.25788, "loss_mask": 0.24558, "loss": 0.8157, "time": 0.53291} -{"mode": "train", "epoch": 3, "iter": 3650, "lr": 0.0002, "memory": 12586, "data_time": 0.04752, "loss_rpn_cls": 0.02994, "loss_rpn_bbox": 0.04876, "loss_cls": 0.22412, "acc": 92.56006, "loss_bbox": 0.2424, "loss_mask": 0.24724, "loss": 0.79246, "time": 0.49961} -{"mode": "train", "epoch": 3, "iter": 3700, "lr": 0.0002, "memory": 12586, "data_time": 0.04179, "loss_rpn_cls": 0.03194, "loss_rpn_bbox": 0.04889, "loss_cls": 0.22866, "acc": 92.44434, "loss_bbox": 0.24471, "loss_mask": 0.24475, "loss": 0.79896, "time": 0.49768} -{"mode": "train", "epoch": 3, "iter": 3750, "lr": 0.0002, "memory": 12586, "data_time": 0.05097, "loss_rpn_cls": 0.03204, "loss_rpn_bbox": 0.05296, 
"loss_cls": 0.23269, "acc": 92.24268, "loss_bbox": 0.25724, "loss_mask": 0.25126, "loss": 0.8262, "time": 0.60887} -{"mode": "train", "epoch": 3, "iter": 3800, "lr": 0.0002, "memory": 12586, "data_time": 0.04991, "loss_rpn_cls": 0.03035, "loss_rpn_bbox": 0.052, "loss_cls": 0.23534, "acc": 92.22632, "loss_bbox": 0.24809, "loss_mask": 0.24528, "loss": 0.81106, "time": 0.50161} -{"mode": "train", "epoch": 3, "iter": 3850, "lr": 0.0002, "memory": 12586, "data_time": 0.05398, "loss_rpn_cls": 0.03241, "loss_rpn_bbox": 0.05171, "loss_cls": 0.23926, "acc": 91.99463, "loss_bbox": 0.25293, "loss_mask": 0.24812, "loss": 0.82443, "time": 0.51305} -{"mode": "train", "epoch": 3, "iter": 3900, "lr": 0.0002, "memory": 12586, "data_time": 0.05273, "loss_rpn_cls": 0.03505, "loss_rpn_bbox": 0.05623, "loss_cls": 0.23553, "acc": 92.07031, "loss_bbox": 0.25549, "loss_mask": 0.25457, "loss": 0.83688, "time": 0.50309} -{"mode": "train", "epoch": 3, "iter": 3950, "lr": 0.0002, "memory": 12586, "data_time": 0.05151, "loss_rpn_cls": 0.0304, "loss_rpn_bbox": 0.04907, "loss_cls": 0.21511, "acc": 92.75732, "loss_bbox": 0.24071, "loss_mask": 0.24321, "loss": 0.77849, "time": 0.5047} -{"mode": "train", "epoch": 3, "iter": 4000, "lr": 0.0002, "memory": 12586, "data_time": 0.04957, "loss_rpn_cls": 0.0295, "loss_rpn_bbox": 0.05197, "loss_cls": 0.23308, "acc": 92.14331, "loss_bbox": 0.25664, "loss_mask": 0.2504, "loss": 0.82159, "time": 0.51752} -{"mode": "train", "epoch": 3, "iter": 4050, "lr": 0.0002, "memory": 12586, "data_time": 0.04047, "loss_rpn_cls": 0.03153, "loss_rpn_bbox": 0.05235, "loss_cls": 0.22427, "acc": 92.55981, "loss_bbox": 0.24253, "loss_mask": 0.24586, "loss": 0.79655, "time": 0.57166} -{"mode": "train", "epoch": 3, "iter": 4100, "lr": 0.0002, "memory": 12586, "data_time": 0.04581, "loss_rpn_cls": 0.0324, "loss_rpn_bbox": 0.05324, "loss_cls": 0.241, "acc": 92.0376, "loss_bbox": 0.25924, "loss_mask": 0.25159, "loss": 0.83748, "time": 0.50569} -{"mode": "train", "epoch": 3, "iter": 4150, "lr": 0.0002, "memory": 12586, "data_time": 0.03859, "loss_rpn_cls": 0.03173, "loss_rpn_bbox": 0.05453, "loss_cls": 0.22887, "acc": 92.34131, "loss_bbox": 0.25279, "loss_mask": 0.24809, "loss": 0.81601, "time": 0.48955} -{"mode": "train", "epoch": 3, "iter": 4200, "lr": 0.0002, "memory": 12586, "data_time": 0.04679, "loss_rpn_cls": 0.02908, "loss_rpn_bbox": 0.04984, "loss_cls": 0.22779, "acc": 92.38306, "loss_bbox": 0.24731, "loss_mask": 0.24714, "loss": 0.80116, "time": 0.49946} -{"mode": "train", "epoch": 3, "iter": 4250, "lr": 0.0002, "memory": 12586, "data_time": 0.04333, "loss_rpn_cls": 0.03172, "loss_rpn_bbox": 0.05066, "loss_cls": 0.22872, "acc": 92.16016, "loss_bbox": 0.25808, "loss_mask": 0.25002, "loss": 0.8192, "time": 0.49346} -{"mode": "train", "epoch": 3, "iter": 4300, "lr": 0.0002, "memory": 12586, "data_time": 0.04385, "loss_rpn_cls": 0.02883, "loss_rpn_bbox": 0.04931, "loss_cls": 0.22704, "acc": 92.4082, "loss_bbox": 0.24863, "loss_mask": 0.24203, "loss": 0.79584, "time": 0.50234} -{"mode": "train", "epoch": 3, "iter": 4350, "lr": 0.0002, "memory": 12586, "data_time": 0.0458, "loss_rpn_cls": 0.02877, "loss_rpn_bbox": 0.04914, "loss_cls": 0.21956, "acc": 92.77441, "loss_bbox": 0.24163, "loss_mask": 0.24325, "loss": 0.78234, "time": 0.51286} -{"mode": "train", "epoch": 3, "iter": 4400, "lr": 0.0002, "memory": 12586, "data_time": 0.03949, "loss_rpn_cls": 0.03047, "loss_rpn_bbox": 0.0527, "loss_cls": 0.22324, "acc": 92.41821, "loss_bbox": 0.24665, "loss_mask": 0.2432, "loss": 0.79626, "time": 0.49907} -{"mode": 
"train", "epoch": 3, "iter": 4450, "lr": 0.0002, "memory": 12586, "data_time": 0.04456, "loss_rpn_cls": 0.02816, "loss_rpn_bbox": 0.04795, "loss_cls": 0.22204, "acc": 92.35693, "loss_bbox": 0.24869, "loss_mask": 0.24573, "loss": 0.79258, "time": 0.49486} -{"mode": "train", "epoch": 3, "iter": 4500, "lr": 0.0002, "memory": 12586, "data_time": 0.04498, "loss_rpn_cls": 0.02654, "loss_rpn_bbox": 0.04698, "loss_cls": 0.22095, "acc": 92.59985, "loss_bbox": 0.2464, "loss_mask": 0.24792, "loss": 0.78879, "time": 0.49824} -{"mode": "train", "epoch": 3, "iter": 4550, "lr": 0.0002, "memory": 12586, "data_time": 0.04508, "loss_rpn_cls": 0.02828, "loss_rpn_bbox": 0.04751, "loss_cls": 0.22804, "acc": 92.33911, "loss_bbox": 0.25017, "loss_mask": 0.24383, "loss": 0.79782, "time": 0.49157} -{"mode": "train", "epoch": 3, "iter": 4600, "lr": 0.0002, "memory": 12586, "data_time": 0.0449, "loss_rpn_cls": 0.02876, "loss_rpn_bbox": 0.05186, "loss_cls": 0.22545, "acc": 92.40649, "loss_bbox": 0.25469, "loss_mask": 0.24846, "loss": 0.80923, "time": 0.49179} -{"mode": "train", "epoch": 3, "iter": 4650, "lr": 0.0002, "memory": 12586, "data_time": 0.04306, "loss_rpn_cls": 0.02996, "loss_rpn_bbox": 0.04906, "loss_cls": 0.22682, "acc": 92.43628, "loss_bbox": 0.24742, "loss_mask": 0.24805, "loss": 0.80131, "time": 0.503} -{"mode": "train", "epoch": 3, "iter": 4700, "lr": 0.0002, "memory": 12586, "data_time": 0.04207, "loss_rpn_cls": 0.03068, "loss_rpn_bbox": 0.05228, "loss_cls": 0.23394, "acc": 92.19897, "loss_bbox": 0.25285, "loss_mask": 0.25047, "loss": 0.82023, "time": 0.55469} -{"mode": "train", "epoch": 3, "iter": 4750, "lr": 0.0002, "memory": 12586, "data_time": 0.0508, "loss_rpn_cls": 0.03177, "loss_rpn_bbox": 0.05105, "loss_cls": 0.23348, "acc": 92.26953, "loss_bbox": 0.25575, "loss_mask": 0.24519, "loss": 0.81724, "time": 0.50935} -{"mode": "train", "epoch": 3, "iter": 4800, "lr": 0.0002, "memory": 12586, "data_time": 0.04939, "loss_rpn_cls": 0.0287, "loss_rpn_bbox": 0.04805, "loss_cls": 0.22289, "acc": 92.44897, "loss_bbox": 0.2499, "loss_mask": 0.24829, "loss": 0.79784, "time": 0.50339} -{"mode": "train", "epoch": 3, "iter": 4850, "lr": 0.0002, "memory": 12586, "data_time": 0.0427, "loss_rpn_cls": 0.02848, "loss_rpn_bbox": 0.0492, "loss_cls": 0.22984, "acc": 92.55884, "loss_bbox": 0.23824, "loss_mask": 0.24566, "loss": 0.79142, "time": 0.49783} -{"mode": "train", "epoch": 3, "iter": 4900, "lr": 0.0002, "memory": 12586, "data_time": 0.05724, "loss_rpn_cls": 0.02982, "loss_rpn_bbox": 0.0511, "loss_cls": 0.23172, "acc": 92.32446, "loss_bbox": 0.24675, "loss_mask": 0.24775, "loss": 0.80714, "time": 0.52333} -{"mode": "train", "epoch": 3, "iter": 4950, "lr": 0.0002, "memory": 12586, "data_time": 0.04975, "loss_rpn_cls": 0.02906, "loss_rpn_bbox": 0.04936, "loss_cls": 0.23436, "acc": 92.29932, "loss_bbox": 0.25347, "loss_mask": 0.25553, "loss": 0.82179, "time": 0.5019} -{"mode": "train", "epoch": 3, "iter": 5000, "lr": 0.0002, "memory": 12586, "data_time": 0.0443, "loss_rpn_cls": 0.02989, "loss_rpn_bbox": 0.05013, "loss_cls": 0.234, "acc": 92.36182, "loss_bbox": 0.24727, "loss_mask": 0.24884, "loss": 0.81013, "time": 0.4967} -{"mode": "train", "epoch": 3, "iter": 5050, "lr": 0.0002, "memory": 12586, "data_time": 0.04698, "loss_rpn_cls": 0.03172, "loss_rpn_bbox": 0.05048, "loss_cls": 0.2311, "acc": 92.49561, "loss_bbox": 0.2446, "loss_mask": 0.24617, "loss": 0.80407, "time": 0.51143} -{"mode": "train", "epoch": 3, "iter": 5100, "lr": 0.0002, "memory": 12586, "data_time": 0.04072, "loss_rpn_cls": 0.03085, 
"loss_rpn_bbox": 0.04773, "loss_cls": 0.23503, "acc": 92.19067, "loss_bbox": 0.25068, "loss_mask": 0.25189, "loss": 0.81617, "time": 0.50025} -{"mode": "train", "epoch": 3, "iter": 5150, "lr": 0.0002, "memory": 12586, "data_time": 0.03937, "loss_rpn_cls": 0.02836, "loss_rpn_bbox": 0.05011, "loss_cls": 0.22768, "acc": 92.26392, "loss_bbox": 0.24951, "loss_mask": 0.24582, "loss": 0.80147, "time": 0.55468} -{"mode": "train", "epoch": 3, "iter": 5200, "lr": 0.0002, "memory": 12586, "data_time": 0.04439, "loss_rpn_cls": 0.02738, "loss_rpn_bbox": 0.0491, "loss_cls": 0.22307, "acc": 92.45874, "loss_bbox": 0.24659, "loss_mask": 0.24508, "loss": 0.79121, "time": 0.54954} -{"mode": "train", "epoch": 3, "iter": 5250, "lr": 0.0002, "memory": 12586, "data_time": 0.04389, "loss_rpn_cls": 0.02799, "loss_rpn_bbox": 0.04846, "loss_cls": 0.22457, "acc": 92.53076, "loss_bbox": 0.24334, "loss_mask": 0.24829, "loss": 0.79265, "time": 0.48772} -{"mode": "train", "epoch": 3, "iter": 5300, "lr": 0.0002, "memory": 12586, "data_time": 0.05033, "loss_rpn_cls": 0.03199, "loss_rpn_bbox": 0.05212, "loss_cls": 0.22719, "acc": 92.38696, "loss_bbox": 0.24588, "loss_mask": 0.24533, "loss": 0.80252, "time": 0.50068} -{"mode": "train", "epoch": 3, "iter": 5350, "lr": 0.0002, "memory": 12586, "data_time": 0.04932, "loss_rpn_cls": 0.03271, "loss_rpn_bbox": 0.05234, "loss_cls": 0.23993, "acc": 92.08472, "loss_bbox": 0.25551, "loss_mask": 0.25311, "loss": 0.8336, "time": 0.49787} -{"mode": "train", "epoch": 3, "iter": 5400, "lr": 0.0002, "memory": 12586, "data_time": 0.04574, "loss_rpn_cls": 0.03111, "loss_rpn_bbox": 0.04859, "loss_cls": 0.22638, "acc": 92.58325, "loss_bbox": 0.2478, "loss_mask": 0.25011, "loss": 0.80399, "time": 0.50099} -{"mode": "train", "epoch": 3, "iter": 5450, "lr": 0.0002, "memory": 12586, "data_time": 0.04333, "loss_rpn_cls": 0.02725, "loss_rpn_bbox": 0.04723, "loss_cls": 0.21601, "acc": 92.9104, "loss_bbox": 0.23191, "loss_mask": 0.24385, "loss": 0.76625, "time": 0.49824} -{"mode": "train", "epoch": 3, "iter": 5500, "lr": 0.0002, "memory": 12586, "data_time": 0.04395, "loss_rpn_cls": 0.02768, "loss_rpn_bbox": 0.04844, "loss_cls": 0.2282, "acc": 92.6228, "loss_bbox": 0.24117, "loss_mask": 0.24685, "loss": 0.79234, "time": 0.6067} -{"mode": "train", "epoch": 3, "iter": 5550, "lr": 0.0002, "memory": 12586, "data_time": 0.04288, "loss_rpn_cls": 0.03141, "loss_rpn_bbox": 0.04869, "loss_cls": 0.22548, "acc": 92.35864, "loss_bbox": 0.2549, "loss_mask": 0.2521, "loss": 0.81258, "time": 0.50817} -{"mode": "train", "epoch": 3, "iter": 5600, "lr": 0.0002, "memory": 12586, "data_time": 0.05028, "loss_rpn_cls": 0.03174, "loss_rpn_bbox": 0.05128, "loss_cls": 0.23284, "acc": 92.33374, "loss_bbox": 0.25033, "loss_mask": 0.24468, "loss": 0.81087, "time": 0.50737} -{"mode": "train", "epoch": 3, "iter": 5650, "lr": 0.0002, "memory": 12586, "data_time": 0.05262, "loss_rpn_cls": 0.0329, "loss_rpn_bbox": 0.05157, "loss_cls": 0.23297, "acc": 92.37305, "loss_bbox": 0.24773, "loss_mask": 0.24267, "loss": 0.80784, "time": 0.51429} -{"mode": "train", "epoch": 3, "iter": 5700, "lr": 0.0002, "memory": 12586, "data_time": 0.04451, "loss_rpn_cls": 0.02984, "loss_rpn_bbox": 0.04838, "loss_cls": 0.21622, "acc": 92.76587, "loss_bbox": 0.23802, "loss_mask": 0.2422, "loss": 0.77466, "time": 0.50432} -{"mode": "train", "epoch": 3, "iter": 5750, "lr": 0.0002, "memory": 12586, "data_time": 0.04724, "loss_rpn_cls": 0.02911, "loss_rpn_bbox": 0.04994, "loss_cls": 0.22685, "acc": 92.4043, "loss_bbox": 0.24607, "loss_mask": 0.24596, "loss": 
0.79793, "time": 0.49829} -{"mode": "train", "epoch": 3, "iter": 5800, "lr": 0.0002, "memory": 12586, "data_time": 0.05253, "loss_rpn_cls": 0.03517, "loss_rpn_bbox": 0.05198, "loss_cls": 0.24232, "acc": 92.10938, "loss_bbox": 0.2578, "loss_mask": 0.24619, "loss": 0.83347, "time": 0.50924} -{"mode": "train", "epoch": 3, "iter": 5850, "lr": 0.0002, "memory": 12586, "data_time": 0.05027, "loss_rpn_cls": 0.03243, "loss_rpn_bbox": 0.05138, "loss_cls": 0.22311, "acc": 92.35498, "loss_bbox": 0.24995, "loss_mask": 0.24288, "loss": 0.79975, "time": 0.56014} -{"mode": "train", "epoch": 3, "iter": 5900, "lr": 0.0002, "memory": 12586, "data_time": 0.04037, "loss_rpn_cls": 0.02966, "loss_rpn_bbox": 0.04956, "loss_cls": 0.22673, "acc": 92.36572, "loss_bbox": 0.25085, "loss_mask": 0.24633, "loss": 0.80313, "time": 0.50748} -{"mode": "train", "epoch": 3, "iter": 5950, "lr": 0.0002, "memory": 12586, "data_time": 0.04934, "loss_rpn_cls": 0.02882, "loss_rpn_bbox": 0.04963, "loss_cls": 0.22044, "acc": 92.78345, "loss_bbox": 0.23901, "loss_mask": 0.24396, "loss": 0.78186, "time": 0.50291} -{"mode": "train", "epoch": 3, "iter": 6000, "lr": 0.0002, "memory": 12586, "data_time": 0.0415, "loss_rpn_cls": 0.02749, "loss_rpn_bbox": 0.04704, "loss_cls": 0.21861, "acc": 92.83691, "loss_bbox": 0.23599, "loss_mask": 0.24188, "loss": 0.77101, "time": 0.4853} -{"mode": "train", "epoch": 3, "iter": 6050, "lr": 0.0002, "memory": 12586, "data_time": 0.04211, "loss_rpn_cls": 0.0294, "loss_rpn_bbox": 0.05002, "loss_cls": 0.22513, "acc": 92.45117, "loss_bbox": 0.24716, "loss_mask": 0.24725, "loss": 0.79896, "time": 0.51055} -{"mode": "train", "epoch": 3, "iter": 6100, "lr": 0.0002, "memory": 12586, "data_time": 0.04184, "loss_rpn_cls": 0.02934, "loss_rpn_bbox": 0.05173, "loss_cls": 0.22799, "acc": 92.46997, "loss_bbox": 0.2502, "loss_mask": 0.24546, "loss": 0.80473, "time": 0.56724} -{"mode": "train", "epoch": 3, "iter": 6150, "lr": 0.0002, "memory": 12586, "data_time": 0.04174, "loss_rpn_cls": 0.02859, "loss_rpn_bbox": 0.05002, "loss_cls": 0.22584, "acc": 92.39819, "loss_bbox": 0.24423, "loss_mask": 0.24644, "loss": 0.79513, "time": 0.50199} -{"mode": "train", "epoch": 3, "iter": 6200, "lr": 0.0002, "memory": 12586, "data_time": 0.04318, "loss_rpn_cls": 0.02876, "loss_rpn_bbox": 0.04837, "loss_cls": 0.22991, "acc": 92.38525, "loss_bbox": 0.25269, "loss_mask": 0.24335, "loss": 0.80308, "time": 0.49557} -{"mode": "train", "epoch": 3, "iter": 6250, "lr": 0.0002, "memory": 12586, "data_time": 0.05129, "loss_rpn_cls": 0.03096, "loss_rpn_bbox": 0.05332, "loss_cls": 0.23756, "acc": 92.12793, "loss_bbox": 0.2575, "loss_mask": 0.25095, "loss": 0.83028, "time": 0.51479} -{"mode": "train", "epoch": 3, "iter": 6300, "lr": 0.0002, "memory": 12586, "data_time": 0.04508, "loss_rpn_cls": 0.03206, "loss_rpn_bbox": 0.05204, "loss_cls": 0.22625, "acc": 92.427, "loss_bbox": 0.24949, "loss_mask": 0.25145, "loss": 0.8113, "time": 0.50192} -{"mode": "train", "epoch": 3, "iter": 6350, "lr": 0.0002, "memory": 12586, "data_time": 0.0358, "loss_rpn_cls": 0.02915, "loss_rpn_bbox": 0.04653, "loss_cls": 0.21933, "acc": 92.79199, "loss_bbox": 0.23595, "loss_mask": 0.2463, "loss": 0.77726, "time": 0.486} -{"mode": "train", "epoch": 3, "iter": 6400, "lr": 0.0002, "memory": 12586, "data_time": 0.10554, "loss_rpn_cls": 0.02877, "loss_rpn_bbox": 0.05012, "loss_cls": 0.21675, "acc": 92.65894, "loss_bbox": 0.24388, "loss_mask": 0.24351, "loss": 0.78303, "time": 0.56115} -{"mode": "train", "epoch": 3, "iter": 6450, "lr": 0.0002, "memory": 12586, "data_time": 
0.04428, "loss_rpn_cls": 0.02806, "loss_rpn_bbox": 0.04753, "loss_cls": 0.22507, "acc": 92.51904, "loss_bbox": 0.24608, "loss_mask": 0.24475, "loss": 0.79148, "time": 0.50645} -{"mode": "train", "epoch": 3, "iter": 6500, "lr": 0.0002, "memory": 12586, "data_time": 0.04638, "loss_rpn_cls": 0.02782, "loss_rpn_bbox": 0.04824, "loss_cls": 0.23467, "acc": 92.2395, "loss_bbox": 0.24351, "loss_mask": 0.24034, "loss": 0.79458, "time": 0.51231} -{"mode": "train", "epoch": 3, "iter": 6550, "lr": 0.0002, "memory": 12586, "data_time": 0.04371, "loss_rpn_cls": 0.02738, "loss_rpn_bbox": 0.04823, "loss_cls": 0.22651, "acc": 92.47778, "loss_bbox": 0.2442, "loss_mask": 0.24763, "loss": 0.79396, "time": 0.5028} -{"mode": "train", "epoch": 3, "iter": 6600, "lr": 0.0002, "memory": 12586, "data_time": 0.03925, "loss_rpn_cls": 0.02917, "loss_rpn_bbox": 0.04925, "loss_cls": 0.22689, "acc": 92.58203, "loss_bbox": 0.24549, "loss_mask": 0.25061, "loss": 0.8014, "time": 0.48414} -{"mode": "train", "epoch": 3, "iter": 6650, "lr": 0.0002, "memory": 12586, "data_time": 0.04993, "loss_rpn_cls": 0.03007, "loss_rpn_bbox": 0.05266, "loss_cls": 0.22216, "acc": 92.48877, "loss_bbox": 0.24702, "loss_mask": 0.24606, "loss": 0.79797, "time": 0.50434} -{"mode": "train", "epoch": 3, "iter": 6700, "lr": 0.0002, "memory": 12586, "data_time": 0.05226, "loss_rpn_cls": 0.03048, "loss_rpn_bbox": 0.05117, "loss_cls": 0.22818, "acc": 92.36353, "loss_bbox": 0.24965, "loss_mask": 0.24054, "loss": 0.80002, "time": 0.4992} -{"mode": "train", "epoch": 3, "iter": 6750, "lr": 0.0002, "memory": 12586, "data_time": 0.04134, "loss_rpn_cls": 0.02766, "loss_rpn_bbox": 0.04825, "loss_cls": 0.23374, "acc": 92.23193, "loss_bbox": 0.25393, "loss_mask": 0.24552, "loss": 0.8091, "time": 0.54772} -{"mode": "train", "epoch": 3, "iter": 6800, "lr": 0.0002, "memory": 12586, "data_time": 0.04829, "loss_rpn_cls": 0.03246, "loss_rpn_bbox": 0.05304, "loss_cls": 0.23769, "acc": 92.01099, "loss_bbox": 0.25668, "loss_mask": 0.25028, "loss": 0.83014, "time": 0.51619} -{"mode": "train", "epoch": 3, "iter": 6850, "lr": 0.0002, "memory": 12586, "data_time": 0.0441, "loss_rpn_cls": 0.03047, "loss_rpn_bbox": 0.04938, "loss_cls": 0.21892, "acc": 92.78003, "loss_bbox": 0.23816, "loss_mask": 0.24322, "loss": 0.78015, "time": 0.49117} -{"mode": "train", "epoch": 3, "iter": 6900, "lr": 0.0002, "memory": 12586, "data_time": 0.05467, "loss_rpn_cls": 0.03017, "loss_rpn_bbox": 0.05043, "loss_cls": 0.23091, "acc": 92.43433, "loss_bbox": 0.24987, "loss_mask": 0.24957, "loss": 0.81095, "time": 0.5006} -{"mode": "train", "epoch": 3, "iter": 6950, "lr": 0.0002, "memory": 12586, "data_time": 0.04845, "loss_rpn_cls": 0.02753, "loss_rpn_bbox": 0.04868, "loss_cls": 0.22841, "acc": 92.38696, "loss_bbox": 0.24908, "loss_mask": 0.24294, "loss": 0.79664, "time": 0.51406} -{"mode": "train", "epoch": 3, "iter": 7000, "lr": 0.0002, "memory": 12586, "data_time": 0.04923, "loss_rpn_cls": 0.02988, "loss_rpn_bbox": 0.04886, "loss_cls": 0.23227, "acc": 92.42944, "loss_bbox": 0.25001, "loss_mask": 0.24957, "loss": 0.81059, "time": 0.50106} -{"mode": "train", "epoch": 3, "iter": 7050, "lr": 0.0002, "memory": 12586, "data_time": 0.04618, "loss_rpn_cls": 0.02863, "loss_rpn_bbox": 0.05038, "loss_cls": 0.23444, "acc": 92.20874, "loss_bbox": 0.24895, "loss_mask": 0.25193, "loss": 0.81433, "time": 0.60166} -{"mode": "train", "epoch": 3, "iter": 7100, "lr": 0.0002, "memory": 12586, "data_time": 0.04774, "loss_rpn_cls": 0.03242, "loss_rpn_bbox": 0.05046, "loss_cls": 0.22779, "acc": 92.36865, "loss_bbox": 
0.24555, "loss_mask": 0.24565, "loss": 0.80186, "time": 0.49472} -{"mode": "train", "epoch": 3, "iter": 7150, "lr": 0.0002, "memory": 12586, "data_time": 0.04117, "loss_rpn_cls": 0.03192, "loss_rpn_bbox": 0.05166, "loss_cls": 0.2204, "acc": 92.61475, "loss_bbox": 0.24457, "loss_mask": 0.24892, "loss": 0.79748, "time": 0.51079} -{"mode": "train", "epoch": 3, "iter": 7200, "lr": 0.0002, "memory": 12586, "data_time": 0.03747, "loss_rpn_cls": 0.03277, "loss_rpn_bbox": 0.05084, "loss_cls": 0.23253, "acc": 92.30811, "loss_bbox": 0.24936, "loss_mask": 0.24221, "loss": 0.8077, "time": 0.49495} -{"mode": "train", "epoch": 3, "iter": 7250, "lr": 0.0002, "memory": 12586, "data_time": 0.0417, "loss_rpn_cls": 0.02997, "loss_rpn_bbox": 0.04839, "loss_cls": 0.21966, "acc": 92.73218, "loss_bbox": 0.24467, "loss_mask": 0.24139, "loss": 0.78409, "time": 0.49693} -{"mode": "train", "epoch": 3, "iter": 7300, "lr": 0.0002, "memory": 12586, "data_time": 0.04747, "loss_rpn_cls": 0.0289, "loss_rpn_bbox": 0.04632, "loss_cls": 0.22432, "acc": 92.48462, "loss_bbox": 0.24618, "loss_mask": 0.2459, "loss": 0.79163, "time": 0.51006} -{"mode": "val", "epoch": 3, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3525, "bbox_mAP_50": 0.5689, "bbox_mAP_75": 0.3859, "bbox_mAP_s": 0.206, "bbox_mAP_m": 0.3955, "bbox_mAP_l": 0.4514, "bbox_mAP_copypaste": "0.3525 0.5689 0.3859 0.2060 0.3955 0.4514", "segm_mAP": 0.3412, "segm_mAP_50": 0.5445, "segm_mAP_75": 0.3684, "segm_mAP_s": 0.1576, "segm_mAP_m": 0.3787, "segm_mAP_l": 0.4838, "segm_mAP_copypaste": "0.3412 0.5445 0.3684 0.1576 0.3787 0.4838"} -{"mode": "train", "epoch": 4, "iter": 50, "lr": 0.0002, "memory": 12586, "data_time": 0.13487, "loss_rpn_cls": 0.02664, "loss_rpn_bbox": 0.05132, "loss_cls": 0.22637, "acc": 92.16309, "loss_bbox": 0.25683, "loss_mask": 0.24133, "loss": 0.80248, "time": 0.62793} -{"mode": "train", "epoch": 4, "iter": 100, "lr": 0.0002, "memory": 12586, "data_time": 0.04295, "loss_rpn_cls": 0.02506, "loss_rpn_bbox": 0.04718, "loss_cls": 0.21632, "acc": 92.51367, "loss_bbox": 0.25239, "loss_mask": 0.24442, "loss": 0.78537, "time": 0.51431} -{"mode": "train", "epoch": 4, "iter": 150, "lr": 0.0002, "memory": 12586, "data_time": 0.05407, "loss_rpn_cls": 0.02459, "loss_rpn_bbox": 0.04872, "loss_cls": 0.21582, "acc": 92.80054, "loss_bbox": 0.23574, "loss_mask": 0.23867, "loss": 0.76353, "time": 0.51357} -{"mode": "train", "epoch": 4, "iter": 200, "lr": 0.0002, "memory": 12586, "data_time": 0.0561, "loss_rpn_cls": 0.02598, "loss_rpn_bbox": 0.04906, "loss_cls": 0.22092, "acc": 92.35645, "loss_bbox": 0.25149, "loss_mask": 0.2466, "loss": 0.79406, "time": 0.51126} -{"mode": "train", "epoch": 4, "iter": 250, "lr": 0.0002, "memory": 12586, "data_time": 0.04535, "loss_rpn_cls": 0.02577, "loss_rpn_bbox": 0.04767, "loss_cls": 0.21302, "acc": 92.49512, "loss_bbox": 0.24257, "loss_mask": 0.23508, "loss": 0.76412, "time": 0.50891} -{"mode": "train", "epoch": 4, "iter": 300, "lr": 0.0002, "memory": 12586, "data_time": 0.04461, "loss_rpn_cls": 0.02696, "loss_rpn_bbox": 0.0476, "loss_cls": 0.20947, "acc": 92.83008, "loss_bbox": 0.24199, "loss_mask": 0.23464, "loss": 0.76066, "time": 0.51829} -{"mode": "train", "epoch": 4, "iter": 350, "lr": 0.0002, "memory": 12586, "data_time": 0.04737, "loss_rpn_cls": 0.02577, "loss_rpn_bbox": 0.04639, "loss_cls": 0.2117, "acc": 92.75366, "loss_bbox": 0.24284, "loss_mask": 0.24007, "loss": 0.76677, "time": 0.52524} -{"mode": "train", "epoch": 4, "iter": 400, "lr": 0.0002, "memory": 12586, "data_time": 0.05004, "loss_rpn_cls": 0.02785, "loss_rpn_bbox": 
0.04842, "loss_cls": 0.22648, "acc": 92.29932, "loss_bbox": 0.25061, "loss_mask": 0.24393, "loss": 0.79729, "time": 0.5179} -{"mode": "train", "epoch": 4, "iter": 450, "lr": 0.0002, "memory": 12586, "data_time": 0.0526, "loss_rpn_cls": 0.0276, "loss_rpn_bbox": 0.05047, "loss_cls": 0.22046, "acc": 92.38696, "loss_bbox": 0.25841, "loss_mask": 0.24954, "loss": 0.80648, "time": 0.51374} -{"mode": "train", "epoch": 4, "iter": 500, "lr": 0.0002, "memory": 12586, "data_time": 0.04973, "loss_rpn_cls": 0.02772, "loss_rpn_bbox": 0.04818, "loss_cls": 0.22438, "acc": 92.43896, "loss_bbox": 0.24816, "loss_mask": 0.23892, "loss": 0.78736, "time": 0.52052} -{"mode": "train", "epoch": 4, "iter": 550, "lr": 0.0002, "memory": 12586, "data_time": 0.04854, "loss_rpn_cls": 0.02679, "loss_rpn_bbox": 0.04935, "loss_cls": 0.21729, "acc": 92.60498, "loss_bbox": 0.24541, "loss_mask": 0.2474, "loss": 0.78624, "time": 0.51054} -{"mode": "train", "epoch": 4, "iter": 600, "lr": 0.0002, "memory": 12586, "data_time": 0.0537, "loss_rpn_cls": 0.02767, "loss_rpn_bbox": 0.04842, "loss_cls": 0.21636, "acc": 92.61426, "loss_bbox": 0.24725, "loss_mask": 0.24257, "loss": 0.78227, "time": 0.57385} -{"mode": "train", "epoch": 4, "iter": 650, "lr": 0.0002, "memory": 12586, "data_time": 0.04658, "loss_rpn_cls": 0.02579, "loss_rpn_bbox": 0.04589, "loss_cls": 0.20645, "acc": 93.01904, "loss_bbox": 0.23702, "loss_mask": 0.23763, "loss": 0.75278, "time": 0.50103} -{"mode": "train", "epoch": 4, "iter": 700, "lr": 0.0002, "memory": 12586, "data_time": 0.04604, "loss_rpn_cls": 0.02614, "loss_rpn_bbox": 0.04796, "loss_cls": 0.20804, "acc": 92.77563, "loss_bbox": 0.2422, "loss_mask": 0.24205, "loss": 0.76639, "time": 0.5019} -{"mode": "train", "epoch": 4, "iter": 750, "lr": 0.0002, "memory": 12586, "data_time": 0.04607, "loss_rpn_cls": 0.02477, "loss_rpn_bbox": 0.04695, "loss_cls": 0.21687, "acc": 92.55029, "loss_bbox": 0.24389, "loss_mask": 0.2393, "loss": 0.77178, "time": 0.51141} -{"mode": "train", "epoch": 4, "iter": 800, "lr": 0.0002, "memory": 12586, "data_time": 0.04143, "loss_rpn_cls": 0.02767, "loss_rpn_bbox": 0.05065, "loss_cls": 0.21277, "acc": 92.70215, "loss_bbox": 0.24109, "loss_mask": 0.2441, "loss": 0.77628, "time": 0.50682} -{"mode": "train", "epoch": 4, "iter": 850, "lr": 0.0002, "memory": 12586, "data_time": 0.04347, "loss_rpn_cls": 0.02689, "loss_rpn_bbox": 0.04893, "loss_cls": 0.20983, "acc": 92.78857, "loss_bbox": 0.24006, "loss_mask": 0.23789, "loss": 0.76359, "time": 0.48875} -{"mode": "train", "epoch": 4, "iter": 900, "lr": 0.0002, "memory": 12586, "data_time": 0.04941, "loss_rpn_cls": 0.02691, "loss_rpn_bbox": 0.0508, "loss_cls": 0.22034, "acc": 92.43774, "loss_bbox": 0.24489, "loss_mask": 0.2435, "loss": 0.78643, "time": 0.51074} -{"mode": "train", "epoch": 4, "iter": 950, "lr": 0.0002, "memory": 12586, "data_time": 0.04372, "loss_rpn_cls": 0.02542, "loss_rpn_bbox": 0.04521, "loss_cls": 0.21146, "acc": 92.82178, "loss_bbox": 0.23222, "loss_mask": 0.24394, "loss": 0.75824, "time": 0.49765} -{"mode": "train", "epoch": 4, "iter": 1000, "lr": 0.0002, "memory": 12586, "data_time": 0.04114, "loss_rpn_cls": 0.02749, "loss_rpn_bbox": 0.04947, "loss_cls": 0.20551, "acc": 92.78687, "loss_bbox": 0.24073, "loss_mask": 0.23926, "loss": 0.76248, "time": 0.50603} -{"mode": "train", "epoch": 4, "iter": 1050, "lr": 0.0002, "memory": 12586, "data_time": 0.0537, "loss_rpn_cls": 0.0266, "loss_rpn_bbox": 0.04947, "loss_cls": 0.21785, "acc": 92.51025, "loss_bbox": 0.24854, "loss_mask": 0.24253, "loss": 0.78498, "time": 0.51849} 
-{"mode": "train", "epoch": 4, "iter": 1100, "lr": 0.0002, "memory": 12586, "data_time": 0.04355, "loss_rpn_cls": 0.02688, "loss_rpn_bbox": 0.05331, "loss_cls": 0.21594, "acc": 92.54785, "loss_bbox": 0.24294, "loss_mask": 0.24575, "loss": 0.78482, "time": 0.55403} -{"mode": "train", "epoch": 4, "iter": 1150, "lr": 0.0002, "memory": 12586, "data_time": 0.04277, "loss_rpn_cls": 0.02827, "loss_rpn_bbox": 0.04993, "loss_cls": 0.21942, "acc": 92.54663, "loss_bbox": 0.24651, "loss_mask": 0.24405, "loss": 0.78818, "time": 0.50587} -{"mode": "train", "epoch": 4, "iter": 1200, "lr": 0.0002, "memory": 12586, "data_time": 0.04147, "loss_rpn_cls": 0.02861, "loss_rpn_bbox": 0.05113, "loss_cls": 0.21247, "acc": 92.72095, "loss_bbox": 0.24534, "loss_mask": 0.23733, "loss": 0.77487, "time": 0.51843} -{"mode": "train", "epoch": 4, "iter": 1250, "lr": 0.0002, "memory": 12586, "data_time": 0.0479, "loss_rpn_cls": 0.02702, "loss_rpn_bbox": 0.04966, "loss_cls": 0.21613, "acc": 92.60718, "loss_bbox": 0.2439, "loss_mask": 0.23795, "loss": 0.77466, "time": 0.50925} -{"mode": "train", "epoch": 4, "iter": 1300, "lr": 0.0002, "memory": 12586, "data_time": 0.05, "loss_rpn_cls": 0.02793, "loss_rpn_bbox": 0.04881, "loss_cls": 0.21221, "acc": 92.78687, "loss_bbox": 0.23682, "loss_mask": 0.24213, "loss": 0.76791, "time": 0.50444} -{"mode": "train", "epoch": 4, "iter": 1350, "lr": 0.0002, "memory": 12586, "data_time": 0.05129, "loss_rpn_cls": 0.03131, "loss_rpn_bbox": 0.05161, "loss_cls": 0.21948, "acc": 92.51733, "loss_bbox": 0.2466, "loss_mask": 0.24506, "loss": 0.79405, "time": 0.51556} -{"mode": "train", "epoch": 4, "iter": 1400, "lr": 0.0002, "memory": 12588, "data_time": 0.05075, "loss_rpn_cls": 0.02794, "loss_rpn_bbox": 0.05184, "loss_cls": 0.22141, "acc": 92.34766, "loss_bbox": 0.25194, "loss_mask": 0.24002, "loss": 0.79315, "time": 0.57389} -{"mode": "train", "epoch": 4, "iter": 1450, "lr": 0.0002, "memory": 12588, "data_time": 0.0424, "loss_rpn_cls": 0.02807, "loss_rpn_bbox": 0.05036, "loss_cls": 0.22542, "acc": 92.23999, "loss_bbox": 0.24961, "loss_mask": 0.24675, "loss": 0.80021, "time": 0.57492} -{"mode": "train", "epoch": 4, "iter": 1500, "lr": 0.0002, "memory": 12588, "data_time": 0.04206, "loss_rpn_cls": 0.02961, "loss_rpn_bbox": 0.05089, "loss_cls": 0.22269, "acc": 92.3811, "loss_bbox": 0.24809, "loss_mask": 0.24052, "loss": 0.7918, "time": 0.5227} -{"mode": "train", "epoch": 4, "iter": 1550, "lr": 0.0002, "memory": 12588, "data_time": 0.04266, "loss_rpn_cls": 0.02509, "loss_rpn_bbox": 0.04551, "loss_cls": 0.21365, "acc": 92.90405, "loss_bbox": 0.23339, "loss_mask": 0.23712, "loss": 0.75475, "time": 0.50008} -{"mode": "train", "epoch": 4, "iter": 1600, "lr": 0.0002, "memory": 12588, "data_time": 0.04217, "loss_rpn_cls": 0.02828, "loss_rpn_bbox": 0.04781, "loss_cls": 0.2186, "acc": 92.6897, "loss_bbox": 0.23512, "loss_mask": 0.23648, "loss": 0.76629, "time": 0.5583} -{"mode": "train", "epoch": 4, "iter": 1650, "lr": 0.0002, "memory": 12588, "data_time": 0.0435, "loss_rpn_cls": 0.02613, "loss_rpn_bbox": 0.04644, "loss_cls": 0.21797, "acc": 92.51904, "loss_bbox": 0.23889, "loss_mask": 0.24351, "loss": 0.77293, "time": 0.50788} -{"mode": "train", "epoch": 4, "iter": 1700, "lr": 0.0002, "memory": 12588, "data_time": 0.04718, "loss_rpn_cls": 0.02905, "loss_rpn_bbox": 0.05107, "loss_cls": 0.22609, "acc": 92.2666, "loss_bbox": 0.24766, "loss_mask": 0.24368, "loss": 0.79756, "time": 0.51634} -{"mode": "train", "epoch": 4, "iter": 1750, "lr": 0.0002, "memory": 12588, "data_time": 0.05276, "loss_rpn_cls": 0.02807, 
"loss_rpn_bbox": 0.04854, "loss_cls": 0.22144, "acc": 92.51172, "loss_bbox": 0.24395, "loss_mask": 0.24121, "loss": 0.7832, "time": 0.49789} -{"mode": "train", "epoch": 4, "iter": 1800, "lr": 0.0002, "memory": 12588, "data_time": 0.04365, "loss_rpn_cls": 0.02788, "loss_rpn_bbox": 0.04653, "loss_cls": 0.21506, "acc": 92.71899, "loss_bbox": 0.23726, "loss_mask": 0.23946, "loss": 0.76619, "time": 0.51513} -{"mode": "train", "epoch": 4, "iter": 1850, "lr": 0.0002, "memory": 12588, "data_time": 0.04702, "loss_rpn_cls": 0.029, "loss_rpn_bbox": 0.05127, "loss_cls": 0.22098, "acc": 92.58105, "loss_bbox": 0.24365, "loss_mask": 0.2424, "loss": 0.7873, "time": 0.56073} -{"mode": "train", "epoch": 4, "iter": 1900, "lr": 0.0002, "memory": 12588, "data_time": 0.04067, "loss_rpn_cls": 0.02719, "loss_rpn_bbox": 0.04805, "loss_cls": 0.22343, "acc": 92.44702, "loss_bbox": 0.23828, "loss_mask": 0.24328, "loss": 0.78023, "time": 0.5021} -{"mode": "train", "epoch": 4, "iter": 1950, "lr": 0.0002, "memory": 12588, "data_time": 0.04901, "loss_rpn_cls": 0.02748, "loss_rpn_bbox": 0.05054, "loss_cls": 0.21738, "acc": 92.70093, "loss_bbox": 0.23695, "loss_mask": 0.24323, "loss": 0.77558, "time": 0.49992} -{"mode": "train", "epoch": 4, "iter": 2000, "lr": 0.0002, "memory": 12588, "data_time": 0.04753, "loss_rpn_cls": 0.02997, "loss_rpn_bbox": 0.05045, "loss_cls": 0.21635, "acc": 92.70483, "loss_bbox": 0.24282, "loss_mask": 0.24721, "loss": 0.7868, "time": 0.55732} -{"mode": "train", "epoch": 4, "iter": 2050, "lr": 0.0002, "memory": 12588, "data_time": 0.03993, "loss_rpn_cls": 0.02705, "loss_rpn_bbox": 0.0476, "loss_cls": 0.21437, "acc": 92.68091, "loss_bbox": 0.23955, "loss_mask": 0.23701, "loss": 0.76558, "time": 0.50737} -{"mode": "train", "epoch": 4, "iter": 2100, "lr": 0.0002, "memory": 12588, "data_time": 0.03959, "loss_rpn_cls": 0.02698, "loss_rpn_bbox": 0.04705, "loss_cls": 0.21609, "acc": 92.68921, "loss_bbox": 0.24085, "loss_mask": 0.24236, "loss": 0.77332, "time": 0.50172} -{"mode": "train", "epoch": 4, "iter": 2150, "lr": 0.0002, "memory": 12588, "data_time": 0.04278, "loss_rpn_cls": 0.02631, "loss_rpn_bbox": 0.04765, "loss_cls": 0.21042, "acc": 92.72827, "loss_bbox": 0.24455, "loss_mask": 0.24142, "loss": 0.77035, "time": 0.5077} -{"mode": "train", "epoch": 4, "iter": 2200, "lr": 0.0002, "memory": 12588, "data_time": 0.04278, "loss_rpn_cls": 0.02808, "loss_rpn_bbox": 0.04993, "loss_cls": 0.21632, "acc": 92.50439, "loss_bbox": 0.24258, "loss_mask": 0.24116, "loss": 0.77807, "time": 0.50355} -{"mode": "train", "epoch": 4, "iter": 2250, "lr": 0.0002, "memory": 12588, "data_time": 0.04265, "loss_rpn_cls": 0.02881, "loss_rpn_bbox": 0.04951, "loss_cls": 0.22043, "acc": 92.45728, "loss_bbox": 0.2468, "loss_mask": 0.24499, "loss": 0.79053, "time": 0.5709} -{"mode": "train", "epoch": 4, "iter": 2300, "lr": 0.0002, "memory": 12588, "data_time": 0.04762, "loss_rpn_cls": 0.03023, "loss_rpn_bbox": 0.04783, "loss_cls": 0.21986, "acc": 92.5127, "loss_bbox": 0.24411, "loss_mask": 0.2411, "loss": 0.78312, "time": 0.50156} -{"mode": "train", "epoch": 4, "iter": 2350, "lr": 0.0002, "memory": 12588, "data_time": 0.05185, "loss_rpn_cls": 0.02587, "loss_rpn_bbox": 0.04593, "loss_cls": 0.21513, "acc": 92.6626, "loss_bbox": 0.24141, "loss_mask": 0.23833, "loss": 0.76667, "time": 0.50214} -{"mode": "train", "epoch": 4, "iter": 2400, "lr": 0.0002, "memory": 12588, "data_time": 0.05008, "loss_rpn_cls": 0.02795, "loss_rpn_bbox": 0.04866, "loss_cls": 0.22411, "acc": 92.27222, "loss_bbox": 0.25084, "loss_mask": 0.24596, "loss": 
0.79753, "time": 0.50925} -{"mode": "train", "epoch": 4, "iter": 2450, "lr": 0.0002, "memory": 12588, "data_time": 0.05112, "loss_rpn_cls": 0.02549, "loss_rpn_bbox": 0.04634, "loss_cls": 0.2153, "acc": 92.74268, "loss_bbox": 0.23573, "loss_mask": 0.23862, "loss": 0.76149, "time": 0.51335} -{"mode": "train", "epoch": 4, "iter": 2500, "lr": 0.0002, "memory": 12588, "data_time": 0.04152, "loss_rpn_cls": 0.03007, "loss_rpn_bbox": 0.05394, "loss_cls": 0.23494, "acc": 92.1145, "loss_bbox": 0.26212, "loss_mask": 0.2491, "loss": 0.83018, "time": 0.55764} -{"mode": "train", "epoch": 4, "iter": 2550, "lr": 0.0002, "memory": 12588, "data_time": 0.04383, "loss_rpn_cls": 0.02595, "loss_rpn_bbox": 0.04708, "loss_cls": 0.21612, "acc": 92.51514, "loss_bbox": 0.24782, "loss_mask": 0.24194, "loss": 0.77892, "time": 0.49921} -{"mode": "train", "epoch": 4, "iter": 2600, "lr": 0.0002, "memory": 12588, "data_time": 0.04239, "loss_rpn_cls": 0.02722, "loss_rpn_bbox": 0.04732, "loss_cls": 0.21998, "acc": 92.4856, "loss_bbox": 0.24032, "loss_mask": 0.23833, "loss": 0.77318, "time": 0.51689} -{"mode": "train", "epoch": 4, "iter": 2650, "lr": 0.0002, "memory": 12588, "data_time": 0.04037, "loss_rpn_cls": 0.02528, "loss_rpn_bbox": 0.04781, "loss_cls": 0.21211, "acc": 92.6897, "loss_bbox": 0.24407, "loss_mask": 0.2402, "loss": 0.76946, "time": 0.50447} -{"mode": "train", "epoch": 4, "iter": 2700, "lr": 0.0002, "memory": 12588, "data_time": 0.04924, "loss_rpn_cls": 0.0317, "loss_rpn_bbox": 0.05079, "loss_cls": 0.2262, "acc": 92.42188, "loss_bbox": 0.2469, "loss_mask": 0.24549, "loss": 0.80109, "time": 0.52511} -{"mode": "train", "epoch": 4, "iter": 2750, "lr": 0.0002, "memory": 12588, "data_time": 0.05097, "loss_rpn_cls": 0.02934, "loss_rpn_bbox": 0.05238, "loss_cls": 0.22444, "acc": 92.35913, "loss_bbox": 0.25422, "loss_mask": 0.24773, "loss": 0.80811, "time": 0.51101} -{"mode": "train", "epoch": 4, "iter": 2800, "lr": 0.0002, "memory": 12588, "data_time": 0.05404, "loss_rpn_cls": 0.02635, "loss_rpn_bbox": 0.04881, "loss_cls": 0.21133, "acc": 92.81567, "loss_bbox": 0.23876, "loss_mask": 0.24217, "loss": 0.76742, "time": 0.50682} -{"mode": "train", "epoch": 4, "iter": 2850, "lr": 0.0002, "memory": 12588, "data_time": 0.04019, "loss_rpn_cls": 0.02667, "loss_rpn_bbox": 0.04709, "loss_cls": 0.20803, "acc": 92.88452, "loss_bbox": 0.23333, "loss_mask": 0.24574, "loss": 0.76085, "time": 0.49775} -{"mode": "train", "epoch": 4, "iter": 2900, "lr": 0.0002, "memory": 12588, "data_time": 0.03722, "loss_rpn_cls": 0.02893, "loss_rpn_bbox": 0.04887, "loss_cls": 0.22309, "acc": 92.64111, "loss_bbox": 0.24037, "loss_mask": 0.23742, "loss": 0.77868, "time": 0.50146} -{"mode": "train", "epoch": 4, "iter": 2950, "lr": 0.0002, "memory": 12588, "data_time": 0.04251, "loss_rpn_cls": 0.02533, "loss_rpn_bbox": 0.04614, "loss_cls": 0.21544, "acc": 92.6311, "loss_bbox": 0.24088, "loss_mask": 0.24059, "loss": 0.76838, "time": 0.498} -{"mode": "train", "epoch": 4, "iter": 3000, "lr": 0.0002, "memory": 12588, "data_time": 0.04688, "loss_rpn_cls": 0.02927, "loss_rpn_bbox": 0.04766, "loss_cls": 0.21768, "acc": 92.66772, "loss_bbox": 0.23959, "loss_mask": 0.23748, "loss": 0.77167, "time": 0.60672} -{"mode": "train", "epoch": 4, "iter": 3050, "lr": 0.0002, "memory": 12588, "data_time": 0.05069, "loss_rpn_cls": 0.02766, "loss_rpn_bbox": 0.04914, "loss_cls": 0.21687, "acc": 92.45996, "loss_bbox": 0.24589, "loss_mask": 0.23693, "loss": 0.77649, "time": 0.50177} -{"mode": "train", "epoch": 4, "iter": 3100, "lr": 0.0002, "memory": 12588, "data_time": 
0.04864, "loss_rpn_cls": 0.02997, "loss_rpn_bbox": 0.04834, "loss_cls": 0.22166, "acc": 92.46338, "loss_bbox": 0.24256, "loss_mask": 0.23771, "loss": 0.78025, "time": 0.51585} -{"mode": "train", "epoch": 4, "iter": 3150, "lr": 0.0002, "memory": 12588, "data_time": 0.04618, "loss_rpn_cls": 0.02695, "loss_rpn_bbox": 0.04902, "loss_cls": 0.21261, "acc": 92.83716, "loss_bbox": 0.23541, "loss_mask": 0.23936, "loss": 0.76335, "time": 0.50294} -{"mode": "train", "epoch": 4, "iter": 3200, "lr": 0.0002, "memory": 12588, "data_time": 0.04873, "loss_rpn_cls": 0.02527, "loss_rpn_bbox": 0.04783, "loss_cls": 0.20732, "acc": 92.9292, "loss_bbox": 0.23412, "loss_mask": 0.23917, "loss": 0.7537, "time": 0.49879} -{"mode": "train", "epoch": 4, "iter": 3250, "lr": 0.0002, "memory": 12588, "data_time": 0.05005, "loss_rpn_cls": 0.02938, "loss_rpn_bbox": 0.04975, "loss_cls": 0.2163, "acc": 92.61743, "loss_bbox": 0.24177, "loss_mask": 0.23988, "loss": 0.77708, "time": 0.50481} -{"mode": "train", "epoch": 4, "iter": 3300, "lr": 0.0002, "memory": 12588, "data_time": 0.09464, "loss_rpn_cls": 0.02976, "loss_rpn_bbox": 0.0491, "loss_cls": 0.22179, "acc": 92.37939, "loss_bbox": 0.248, "loss_mask": 0.25054, "loss": 0.79919, "time": 0.54303} -{"mode": "train", "epoch": 4, "iter": 3350, "lr": 0.0002, "memory": 12588, "data_time": 0.0384, "loss_rpn_cls": 0.02788, "loss_rpn_bbox": 0.0503, "loss_cls": 0.21457, "acc": 92.57739, "loss_bbox": 0.24543, "loss_mask": 0.24863, "loss": 0.7868, "time": 0.49357} -{"mode": "train", "epoch": 4, "iter": 3400, "lr": 0.0002, "memory": 12588, "data_time": 0.05006, "loss_rpn_cls": 0.02421, "loss_rpn_bbox": 0.04661, "loss_cls": 0.21342, "acc": 92.70044, "loss_bbox": 0.23734, "loss_mask": 0.24535, "loss": 0.76692, "time": 0.49933} -{"mode": "train", "epoch": 4, "iter": 3450, "lr": 0.0002, "memory": 12588, "data_time": 0.04628, "loss_rpn_cls": 0.02797, "loss_rpn_bbox": 0.04751, "loss_cls": 0.21536, "acc": 92.76953, "loss_bbox": 0.2339, "loss_mask": 0.24115, "loss": 0.76589, "time": 0.56299} -{"mode": "train", "epoch": 4, "iter": 3500, "lr": 0.0002, "memory": 12588, "data_time": 0.045, "loss_rpn_cls": 0.02761, "loss_rpn_bbox": 0.04724, "loss_cls": 0.21126, "acc": 92.89062, "loss_bbox": 0.23461, "loss_mask": 0.24007, "loss": 0.76079, "time": 0.54458} -{"mode": "train", "epoch": 4, "iter": 3550, "lr": 0.0002, "memory": 12588, "data_time": 0.04232, "loss_rpn_cls": 0.03141, "loss_rpn_bbox": 0.04854, "loss_cls": 0.2149, "acc": 92.61304, "loss_bbox": 0.24448, "loss_mask": 0.24142, "loss": 0.78075, "time": 0.50482} -{"mode": "train", "epoch": 4, "iter": 3600, "lr": 0.0002, "memory": 12588, "data_time": 0.04216, "loss_rpn_cls": 0.0259, "loss_rpn_bbox": 0.04622, "loss_cls": 0.21149, "acc": 92.70776, "loss_bbox": 0.23641, "loss_mask": 0.23832, "loss": 0.75834, "time": 0.50397} -{"mode": "train", "epoch": 4, "iter": 3650, "lr": 0.0002, "memory": 12588, "data_time": 0.04174, "loss_rpn_cls": 0.0277, "loss_rpn_bbox": 0.04722, "loss_cls": 0.22229, "acc": 92.62598, "loss_bbox": 0.24004, "loss_mask": 0.24027, "loss": 0.77752, "time": 0.49783} -{"mode": "train", "epoch": 4, "iter": 3700, "lr": 0.0002, "memory": 12588, "data_time": 0.04995, "loss_rpn_cls": 0.02817, "loss_rpn_bbox": 0.0504, "loss_cls": 0.23192, "acc": 92.25537, "loss_bbox": 0.25182, "loss_mask": 0.24369, "loss": 0.806, "time": 0.51644} -{"mode": "train", "epoch": 4, "iter": 3750, "lr": 0.0002, "memory": 12588, "data_time": 0.03844, "loss_rpn_cls": 0.02444, "loss_rpn_bbox": 0.04462, "loss_cls": 0.20954, "acc": 92.9231, "loss_bbox": 0.23076, 
"loss_mask": 0.23249, "loss": 0.74186, "time": 0.50024} -{"mode": "train", "epoch": 4, "iter": 3800, "lr": 0.0002, "memory": 12588, "data_time": 0.0452, "loss_rpn_cls": 0.02889, "loss_rpn_bbox": 0.04966, "loss_cls": 0.22045, "acc": 92.50732, "loss_bbox": 0.24622, "loss_mask": 0.24799, "loss": 0.79321, "time": 0.50205} -{"mode": "train", "epoch": 4, "iter": 3850, "lr": 0.0002, "memory": 12588, "data_time": 0.04213, "loss_rpn_cls": 0.02755, "loss_rpn_bbox": 0.04591, "loss_cls": 0.211, "acc": 92.9231, "loss_bbox": 0.23186, "loss_mask": 0.24517, "loss": 0.76149, "time": 0.49883} -{"mode": "train", "epoch": 4, "iter": 3900, "lr": 0.0002, "memory": 12588, "data_time": 0.04195, "loss_rpn_cls": 0.03083, "loss_rpn_bbox": 0.05026, "loss_cls": 0.21901, "acc": 92.66431, "loss_bbox": 0.24651, "loss_mask": 0.24479, "loss": 0.7914, "time": 0.50158} -{"mode": "train", "epoch": 4, "iter": 3950, "lr": 0.0002, "memory": 12588, "data_time": 0.05029, "loss_rpn_cls": 0.03056, "loss_rpn_bbox": 0.05026, "loss_cls": 0.2207, "acc": 92.50049, "loss_bbox": 0.24524, "loss_mask": 0.23761, "loss": 0.78437, "time": 0.49446} -{"mode": "train", "epoch": 4, "iter": 4000, "lr": 0.0002, "memory": 12588, "data_time": 0.04314, "loss_rpn_cls": 0.0271, "loss_rpn_bbox": 0.04811, "loss_cls": 0.22163, "acc": 92.53418, "loss_bbox": 0.24142, "loss_mask": 0.23599, "loss": 0.77425, "time": 0.51229} -{"mode": "train", "epoch": 4, "iter": 4050, "lr": 0.0002, "memory": 12588, "data_time": 0.04271, "loss_rpn_cls": 0.02806, "loss_rpn_bbox": 0.04596, "loss_cls": 0.22006, "acc": 92.59497, "loss_bbox": 0.24279, "loss_mask": 0.23793, "loss": 0.77479, "time": 0.51222} -{"mode": "train", "epoch": 4, "iter": 4100, "lr": 0.0002, "memory": 12588, "data_time": 0.0482, "loss_rpn_cls": 0.03019, "loss_rpn_bbox": 0.04888, "loss_cls": 0.22492, "acc": 92.47778, "loss_bbox": 0.25104, "loss_mask": 0.2376, "loss": 0.79265, "time": 0.50669} -{"mode": "train", "epoch": 4, "iter": 4150, "lr": 0.0002, "memory": 12588, "data_time": 0.04142, "loss_rpn_cls": 0.0276, "loss_rpn_bbox": 0.04725, "loss_cls": 0.21243, "acc": 92.69385, "loss_bbox": 0.23782, "loss_mask": 0.23482, "loss": 0.75992, "time": 0.49559} -{"mode": "train", "epoch": 4, "iter": 4200, "lr": 0.0002, "memory": 12588, "data_time": 0.04225, "loss_rpn_cls": 0.02919, "loss_rpn_bbox": 0.04807, "loss_cls": 0.21625, "acc": 92.7251, "loss_bbox": 0.23899, "loss_mask": 0.23847, "loss": 0.77097, "time": 0.50374} -{"mode": "train", "epoch": 4, "iter": 4250, "lr": 0.0002, "memory": 12588, "data_time": 0.0462, "loss_rpn_cls": 0.02992, "loss_rpn_bbox": 0.05175, "loss_cls": 0.21958, "acc": 92.79272, "loss_bbox": 0.23394, "loss_mask": 0.24126, "loss": 0.77645, "time": 0.50564} -{"mode": "train", "epoch": 4, "iter": 4300, "lr": 0.0002, "memory": 12588, "data_time": 0.04258, "loss_rpn_cls": 0.02579, "loss_rpn_bbox": 0.04675, "loss_cls": 0.21438, "acc": 92.64429, "loss_bbox": 0.23768, "loss_mask": 0.24613, "loss": 0.77073, "time": 0.48861} -{"mode": "train", "epoch": 4, "iter": 4350, "lr": 0.0002, "memory": 12588, "data_time": 0.04846, "loss_rpn_cls": 0.03102, "loss_rpn_bbox": 0.04917, "loss_cls": 0.21484, "acc": 92.75732, "loss_bbox": 0.24209, "loss_mask": 0.24945, "loss": 0.78657, "time": 0.5145} -{"mode": "train", "epoch": 4, "iter": 4400, "lr": 0.0002, "memory": 12588, "data_time": 0.05213, "loss_rpn_cls": 0.02839, "loss_rpn_bbox": 0.0464, "loss_cls": 0.21518, "acc": 92.76514, "loss_bbox": 0.24376, "loss_mask": 0.23851, "loss": 0.77224, "time": 0.49826} -{"mode": "train", "epoch": 4, "iter": 4450, "lr": 0.0002, 
"memory": 12588, "data_time": 0.04614, "loss_rpn_cls": 0.0279, "loss_rpn_bbox": 0.05027, "loss_cls": 0.2171, "acc": 92.60596, "loss_bbox": 0.23956, "loss_mask": 0.24415, "loss": 0.77898, "time": 0.49613} -{"mode": "train", "epoch": 4, "iter": 4500, "lr": 0.0002, "memory": 12588, "data_time": 0.03958, "loss_rpn_cls": 0.02745, "loss_rpn_bbox": 0.04747, "loss_cls": 0.21451, "acc": 92.73218, "loss_bbox": 0.24162, "loss_mask": 0.23629, "loss": 0.76734, "time": 0.54847} -{"mode": "train", "epoch": 4, "iter": 4550, "lr": 0.0002, "memory": 12588, "data_time": 0.04358, "loss_rpn_cls": 0.02666, "loss_rpn_bbox": 0.04629, "loss_cls": 0.20947, "acc": 92.88013, "loss_bbox": 0.2349, "loss_mask": 0.2394, "loss": 0.75673, "time": 0.48499} -{"mode": "train", "epoch": 4, "iter": 4600, "lr": 0.0002, "memory": 12588, "data_time": 0.04554, "loss_rpn_cls": 0.02769, "loss_rpn_bbox": 0.04797, "loss_cls": 0.21551, "acc": 92.63135, "loss_bbox": 0.23978, "loss_mask": 0.23471, "loss": 0.76566, "time": 0.50199} -{"mode": "train", "epoch": 4, "iter": 4650, "lr": 0.0002, "memory": 12588, "data_time": 0.04258, "loss_rpn_cls": 0.02616, "loss_rpn_bbox": 0.04852, "loss_cls": 0.22344, "acc": 92.52515, "loss_bbox": 0.24028, "loss_mask": 0.24497, "loss": 0.78336, "time": 0.49717} -{"mode": "train", "epoch": 4, "iter": 4700, "lr": 0.0002, "memory": 12588, "data_time": 0.04708, "loss_rpn_cls": 0.02934, "loss_rpn_bbox": 0.04773, "loss_cls": 0.21629, "acc": 92.71436, "loss_bbox": 0.23661, "loss_mask": 0.23723, "loss": 0.76721, "time": 0.50391} -{"mode": "train", "epoch": 4, "iter": 4750, "lr": 0.0002, "memory": 12588, "data_time": 0.04947, "loss_rpn_cls": 0.02732, "loss_rpn_bbox": 0.04687, "loss_cls": 0.21677, "acc": 92.66943, "loss_bbox": 0.23998, "loss_mask": 0.2447, "loss": 0.77563, "time": 0.49952} -{"mode": "train", "epoch": 4, "iter": 4800, "lr": 0.0002, "memory": 12588, "data_time": 0.04426, "loss_rpn_cls": 0.0295, "loss_rpn_bbox": 0.05268, "loss_cls": 0.22528, "acc": 92.43384, "loss_bbox": 0.24895, "loss_mask": 0.23926, "loss": 0.79568, "time": 0.50861} -{"mode": "train", "epoch": 4, "iter": 4850, "lr": 0.0002, "memory": 12588, "data_time": 0.04806, "loss_rpn_cls": 0.02733, "loss_rpn_bbox": 0.04718, "loss_cls": 0.2178, "acc": 92.7854, "loss_bbox": 0.23346, "loss_mask": 0.24298, "loss": 0.76874, "time": 0.49307} -{"mode": "train", "epoch": 4, "iter": 4900, "lr": 0.0002, "memory": 12588, "data_time": 0.04134, "loss_rpn_cls": 0.02948, "loss_rpn_bbox": 0.0526, "loss_cls": 0.22624, "acc": 92.18408, "loss_bbox": 0.25004, "loss_mask": 0.24666, "loss": 0.80503, "time": 0.59936} -{"mode": "train", "epoch": 4, "iter": 4950, "lr": 0.0002, "memory": 12588, "data_time": 0.03975, "loss_rpn_cls": 0.02754, "loss_rpn_bbox": 0.04747, "loss_cls": 0.21571, "acc": 92.72485, "loss_bbox": 0.24081, "loss_mask": 0.2399, "loss": 0.77143, "time": 0.50693} -{"mode": "train", "epoch": 4, "iter": 5000, "lr": 0.0002, "memory": 12588, "data_time": 0.04516, "loss_rpn_cls": 0.02745, "loss_rpn_bbox": 0.04763, "loss_cls": 0.2113, "acc": 92.83008, "loss_bbox": 0.23539, "loss_mask": 0.23878, "loss": 0.76055, "time": 0.49785} -{"mode": "train", "epoch": 4, "iter": 5050, "lr": 0.0002, "memory": 12588, "data_time": 0.04679, "loss_rpn_cls": 0.02998, "loss_rpn_bbox": 0.05117, "loss_cls": 0.22503, "acc": 92.49146, "loss_bbox": 0.2437, "loss_mask": 0.24754, "loss": 0.79743, "time": 0.51993} -{"mode": "train", "epoch": 4, "iter": 5100, "lr": 0.0002, "memory": 12588, "data_time": 0.04997, "loss_rpn_cls": 0.03241, "loss_rpn_bbox": 0.0525, "loss_cls": 0.22224, "acc": 
92.40039, "loss_bbox": 0.24203, "loss_mask": 0.24159, "loss": 0.79077, "time": 0.51681} -{"mode": "train", "epoch": 4, "iter": 5150, "lr": 0.0002, "memory": 12588, "data_time": 0.05766, "loss_rpn_cls": 0.02764, "loss_rpn_bbox": 0.04896, "loss_cls": 0.21412, "acc": 92.69775, "loss_bbox": 0.23868, "loss_mask": 0.23547, "loss": 0.76487, "time": 0.51657} -{"mode": "train", "epoch": 4, "iter": 5200, "lr": 0.0002, "memory": 12588, "data_time": 0.0535, "loss_rpn_cls": 0.02593, "loss_rpn_bbox": 0.04981, "loss_cls": 0.22407, "acc": 92.48779, "loss_bbox": 0.24182, "loss_mask": 0.23651, "loss": 0.77814, "time": 0.5199} -{"mode": "train", "epoch": 4, "iter": 5250, "lr": 0.0002, "memory": 12588, "data_time": 0.0408, "loss_rpn_cls": 0.02732, "loss_rpn_bbox": 0.04893, "loss_cls": 0.21904, "acc": 92.63696, "loss_bbox": 0.24247, "loss_mask": 0.24554, "loss": 0.7833, "time": 0.48255} -{"mode": "train", "epoch": 4, "iter": 5300, "lr": 0.0002, "memory": 12588, "data_time": 0.05023, "loss_rpn_cls": 0.02661, "loss_rpn_bbox": 0.04965, "loss_cls": 0.22662, "acc": 92.3855, "loss_bbox": 0.24853, "loss_mask": 0.24248, "loss": 0.79389, "time": 0.58883} -{"mode": "train", "epoch": 4, "iter": 5350, "lr": 0.0002, "memory": 12588, "data_time": 0.04632, "loss_rpn_cls": 0.02622, "loss_rpn_bbox": 0.04924, "loss_cls": 0.21963, "acc": 92.58325, "loss_bbox": 0.24466, "loss_mask": 0.24272, "loss": 0.78247, "time": 0.50507} -{"mode": "train", "epoch": 4, "iter": 5400, "lr": 0.0002, "memory": 12588, "data_time": 0.04561, "loss_rpn_cls": 0.02851, "loss_rpn_bbox": 0.05037, "loss_cls": 0.21853, "acc": 92.48999, "loss_bbox": 0.24193, "loss_mask": 0.24022, "loss": 0.77956, "time": 0.51518} -{"mode": "train", "epoch": 4, "iter": 5450, "lr": 0.0002, "memory": 12588, "data_time": 0.03705, "loss_rpn_cls": 0.02831, "loss_rpn_bbox": 0.04676, "loss_cls": 0.21457, "acc": 92.86011, "loss_bbox": 0.23497, "loss_mask": 0.23807, "loss": 0.76269, "time": 0.50337} -{"mode": "train", "epoch": 4, "iter": 5500, "lr": 0.0002, "memory": 12588, "data_time": 0.03637, "loss_rpn_cls": 0.02921, "loss_rpn_bbox": 0.0496, "loss_cls": 0.23259, "acc": 92.24512, "loss_bbox": 0.24803, "loss_mask": 0.24077, "loss": 0.8002, "time": 0.52013} -{"mode": "train", "epoch": 4, "iter": 5550, "lr": 0.0002, "memory": 12588, "data_time": 0.04229, "loss_rpn_cls": 0.02884, "loss_rpn_bbox": 0.04988, "loss_cls": 0.2198, "acc": 92.64526, "loss_bbox": 0.23769, "loss_mask": 0.24038, "loss": 0.7766, "time": 0.52461} -{"mode": "train", "epoch": 4, "iter": 5600, "lr": 0.0002, "memory": 12588, "data_time": 0.05436, "loss_rpn_cls": 0.02839, "loss_rpn_bbox": 0.0488, "loss_cls": 0.21699, "acc": 92.66504, "loss_bbox": 0.24293, "loss_mask": 0.24838, "loss": 0.78549, "time": 0.5109} -{"mode": "train", "epoch": 4, "iter": 5650, "lr": 0.0002, "memory": 12588, "data_time": 0.03778, "loss_rpn_cls": 0.02706, "loss_rpn_bbox": 0.04598, "loss_cls": 0.21882, "acc": 92.70239, "loss_bbox": 0.23894, "loss_mask": 0.23686, "loss": 0.76767, "time": 0.49496} -{"mode": "train", "epoch": 4, "iter": 5700, "lr": 0.0002, "memory": 12588, "data_time": 0.04839, "loss_rpn_cls": 0.02709, "loss_rpn_bbox": 0.04821, "loss_cls": 0.21186, "acc": 92.81958, "loss_bbox": 0.23882, "loss_mask": 0.24383, "loss": 0.76982, "time": 0.54669} -{"mode": "train", "epoch": 4, "iter": 5750, "lr": 0.0002, "memory": 12588, "data_time": 0.05124, "loss_rpn_cls": 0.03215, "loss_rpn_bbox": 0.05195, "loss_cls": 0.22888, "acc": 92.2019, "loss_bbox": 0.25406, "loss_mask": 0.24539, "loss": 0.81244, "time": 0.50517} -{"mode": "train", "epoch": 4, 
"iter": 5800, "lr": 0.0002, "memory": 12588, "data_time": 0.05028, "loss_rpn_cls": 0.02732, "loss_rpn_bbox": 0.0508, "loss_cls": 0.21942, "acc": 92.53149, "loss_bbox": 0.24502, "loss_mask": 0.24198, "loss": 0.78454, "time": 0.50261} -{"mode": "train", "epoch": 4, "iter": 5850, "lr": 0.0002, "memory": 12588, "data_time": 0.04074, "loss_rpn_cls": 0.02591, "loss_rpn_bbox": 0.04843, "loss_cls": 0.2202, "acc": 92.60156, "loss_bbox": 0.24084, "loss_mask": 0.23903, "loss": 0.77441, "time": 0.50242} -{"mode": "train", "epoch": 4, "iter": 5900, "lr": 0.0002, "memory": 12588, "data_time": 0.04773, "loss_rpn_cls": 0.02906, "loss_rpn_bbox": 0.04959, "loss_cls": 0.21889, "acc": 92.71338, "loss_bbox": 0.24282, "loss_mask": 0.24076, "loss": 0.78113, "time": 0.52143} -{"mode": "train", "epoch": 4, "iter": 5950, "lr": 0.0002, "memory": 12588, "data_time": 0.04443, "loss_rpn_cls": 0.02591, "loss_rpn_bbox": 0.04571, "loss_cls": 0.21678, "acc": 92.6748, "loss_bbox": 0.24213, "loss_mask": 0.23597, "loss": 0.7665, "time": 0.4867} -{"mode": "train", "epoch": 4, "iter": 6000, "lr": 0.0002, "memory": 12588, "data_time": 0.04427, "loss_rpn_cls": 0.02636, "loss_rpn_bbox": 0.05063, "loss_cls": 0.22341, "acc": 92.45825, "loss_bbox": 0.24664, "loss_mask": 0.24059, "loss": 0.78763, "time": 0.50361} -{"mode": "train", "epoch": 4, "iter": 6050, "lr": 0.0002, "memory": 12588, "data_time": 0.04549, "loss_rpn_cls": 0.03098, "loss_rpn_bbox": 0.04914, "loss_cls": 0.22503, "acc": 92.43262, "loss_bbox": 0.24925, "loss_mask": 0.24691, "loss": 0.8013, "time": 0.51437} -{"mode": "train", "epoch": 4, "iter": 6100, "lr": 0.0002, "memory": 12588, "data_time": 0.0416, "loss_rpn_cls": 0.0295, "loss_rpn_bbox": 0.04938, "loss_cls": 0.21959, "acc": 92.45361, "loss_bbox": 0.24227, "loss_mask": 0.24754, "loss": 0.78827, "time": 0.51777} -{"mode": "train", "epoch": 4, "iter": 6150, "lr": 0.0002, "memory": 12588, "data_time": 0.04175, "loss_rpn_cls": 0.02657, "loss_rpn_bbox": 0.04663, "loss_cls": 0.20134, "acc": 93.17139, "loss_bbox": 0.22601, "loss_mask": 0.23799, "loss": 0.73853, "time": 0.5} -{"mode": "train", "epoch": 4, "iter": 6200, "lr": 0.0002, "memory": 12588, "data_time": 0.04367, "loss_rpn_cls": 0.03056, "loss_rpn_bbox": 0.04884, "loss_cls": 0.21663, "acc": 92.59668, "loss_bbox": 0.23774, "loss_mask": 0.24379, "loss": 0.77757, "time": 0.50625} -{"mode": "train", "epoch": 4, "iter": 6250, "lr": 0.0002, "memory": 12588, "data_time": 0.04983, "loss_rpn_cls": 0.02671, "loss_rpn_bbox": 0.04685, "loss_cls": 0.22069, "acc": 92.5542, "loss_bbox": 0.24433, "loss_mask": 0.24797, "loss": 0.78656, "time": 0.50618} -{"mode": "train", "epoch": 4, "iter": 6300, "lr": 0.0002, "memory": 12588, "data_time": 0.04869, "loss_rpn_cls": 0.02676, "loss_rpn_bbox": 0.04634, "loss_cls": 0.21739, "acc": 92.68481, "loss_bbox": 0.23841, "loss_mask": 0.23535, "loss": 0.76425, "time": 0.4958} -{"mode": "train", "epoch": 4, "iter": 6350, "lr": 0.0002, "memory": 12588, "data_time": 0.03512, "loss_rpn_cls": 0.02679, "loss_rpn_bbox": 0.04445, "loss_cls": 0.22037, "acc": 92.6499, "loss_bbox": 0.23648, "loss_mask": 0.23534, "loss": 0.76344, "time": 0.54104} -{"mode": "train", "epoch": 4, "iter": 6400, "lr": 0.0002, "memory": 12588, "data_time": 0.04825, "loss_rpn_cls": 0.03172, "loss_rpn_bbox": 0.04897, "loss_cls": 0.22678, "acc": 92.45581, "loss_bbox": 0.24681, "loss_mask": 0.24129, "loss": 0.79556, "time": 0.50669} -{"mode": "train", "epoch": 4, "iter": 6450, "lr": 0.0002, "memory": 12588, "data_time": 0.03861, "loss_rpn_cls": 0.02957, "loss_rpn_bbox": 0.04883, 
"loss_cls": 0.22648, "acc": 92.40942, "loss_bbox": 0.2455, "loss_mask": 0.24485, "loss": 0.79524, "time": 0.49689} -{"mode": "train", "epoch": 4, "iter": 6500, "lr": 0.0002, "memory": 12588, "data_time": 0.0483, "loss_rpn_cls": 0.02773, "loss_rpn_bbox": 0.04983, "loss_cls": 0.21251, "acc": 92.72461, "loss_bbox": 0.24198, "loss_mask": 0.24462, "loss": 0.77668, "time": 0.49908} -{"mode": "train", "epoch": 4, "iter": 6550, "lr": 0.0002, "memory": 12588, "data_time": 0.05, "loss_rpn_cls": 0.03094, "loss_rpn_bbox": 0.04903, "loss_cls": 0.2142, "acc": 92.8689, "loss_bbox": 0.23704, "loss_mask": 0.24203, "loss": 0.77324, "time": 0.5069} -{"mode": "train", "epoch": 4, "iter": 6600, "lr": 0.0002, "memory": 12588, "data_time": 0.04174, "loss_rpn_cls": 0.02994, "loss_rpn_bbox": 0.04857, "loss_cls": 0.21826, "acc": 92.61304, "loss_bbox": 0.23991, "loss_mask": 0.24451, "loss": 0.78119, "time": 0.48892} -{"mode": "train", "epoch": 4, "iter": 6650, "lr": 0.0002, "memory": 12588, "data_time": 0.04696, "loss_rpn_cls": 0.0296, "loss_rpn_bbox": 0.05065, "loss_cls": 0.23002, "acc": 92.34692, "loss_bbox": 0.25068, "loss_mask": 0.24341, "loss": 0.80435, "time": 0.49856} -{"mode": "train", "epoch": 4, "iter": 6700, "lr": 0.0002, "memory": 12588, "data_time": 0.03904, "loss_rpn_cls": 0.02887, "loss_rpn_bbox": 0.04904, "loss_cls": 0.22859, "acc": 92.31982, "loss_bbox": 0.24933, "loss_mask": 0.24973, "loss": 0.80555, "time": 0.49872} -{"mode": "train", "epoch": 4, "iter": 6750, "lr": 0.0002, "memory": 12588, "data_time": 0.04901, "loss_rpn_cls": 0.02735, "loss_rpn_bbox": 0.04647, "loss_cls": 0.2248, "acc": 92.48706, "loss_bbox": 0.24575, "loss_mask": 0.23863, "loss": 0.783, "time": 0.49796} -{"mode": "train", "epoch": 4, "iter": 6800, "lr": 0.0002, "memory": 12588, "data_time": 0.0539, "loss_rpn_cls": 0.02877, "loss_rpn_bbox": 0.04866, "loss_cls": 0.22074, "acc": 92.57422, "loss_bbox": 0.23676, "loss_mask": 0.23523, "loss": 0.77014, "time": 0.55997} -{"mode": "train", "epoch": 4, "iter": 6850, "lr": 0.0002, "memory": 12588, "data_time": 0.04286, "loss_rpn_cls": 0.02871, "loss_rpn_bbox": 0.04843, "loss_cls": 0.21772, "acc": 92.73413, "loss_bbox": 0.23604, "loss_mask": 0.24487, "loss": 0.77576, "time": 0.54242} -{"mode": "train", "epoch": 4, "iter": 6900, "lr": 0.0002, "memory": 12588, "data_time": 0.04925, "loss_rpn_cls": 0.02852, "loss_rpn_bbox": 0.04903, "loss_cls": 0.22019, "acc": 92.52979, "loss_bbox": 0.24063, "loss_mask": 0.24563, "loss": 0.78401, "time": 0.50548} -{"mode": "train", "epoch": 4, "iter": 6950, "lr": 0.0002, "memory": 12588, "data_time": 0.04482, "loss_rpn_cls": 0.02868, "loss_rpn_bbox": 0.04839, "loss_cls": 0.21949, "acc": 92.52832, "loss_bbox": 0.24305, "loss_mask": 0.2371, "loss": 0.77671, "time": 0.50203} -{"mode": "train", "epoch": 4, "iter": 7000, "lr": 0.0002, "memory": 12588, "data_time": 0.0451, "loss_rpn_cls": 0.02957, "loss_rpn_bbox": 0.04834, "loss_cls": 0.21187, "acc": 92.80811, "loss_bbox": 0.23217, "loss_mask": 0.23603, "loss": 0.75798, "time": 0.50339} -{"mode": "train", "epoch": 4, "iter": 7050, "lr": 0.0002, "memory": 12588, "data_time": 0.03737, "loss_rpn_cls": 0.02693, "loss_rpn_bbox": 0.04655, "loss_cls": 0.211, "acc": 92.79858, "loss_bbox": 0.23371, "loss_mask": 0.23433, "loss": 0.75253, "time": 0.49092} -{"mode": "train", "epoch": 4, "iter": 7100, "lr": 0.0002, "memory": 12588, "data_time": 0.05453, "loss_rpn_cls": 0.02967, "loss_rpn_bbox": 0.04882, "loss_cls": 0.21772, "acc": 92.55371, "loss_bbox": 0.24063, "loss_mask": 0.24167, "loss": 0.77851, "time": 0.5578} -{"mode": 
"train", "epoch": 4, "iter": 7150, "lr": 0.0002, "memory": 12588, "data_time": 0.04279, "loss_rpn_cls": 0.02686, "loss_rpn_bbox": 0.04771, "loss_cls": 0.22044, "acc": 92.59497, "loss_bbox": 0.2452, "loss_mask": 0.24433, "loss": 0.78454, "time": 0.51324} -{"mode": "train", "epoch": 4, "iter": 7200, "lr": 0.0002, "memory": 12588, "data_time": 0.0448, "loss_rpn_cls": 0.02874, "loss_rpn_bbox": 0.0515, "loss_cls": 0.22423, "acc": 92.4939, "loss_bbox": 0.24766, "loss_mask": 0.24385, "loss": 0.79599, "time": 0.50715} -{"mode": "train", "epoch": 4, "iter": 7250, "lr": 0.0002, "memory": 12588, "data_time": 0.04312, "loss_rpn_cls": 0.02925, "loss_rpn_bbox": 0.04826, "loss_cls": 0.21942, "acc": 92.70361, "loss_bbox": 0.23386, "loss_mask": 0.24449, "loss": 0.77527, "time": 0.48824} -{"mode": "train", "epoch": 4, "iter": 7300, "lr": 0.0002, "memory": 12588, "data_time": 0.04724, "loss_rpn_cls": 0.02548, "loss_rpn_bbox": 0.04721, "loss_cls": 0.21985, "acc": 92.6123, "loss_bbox": 0.23966, "loss_mask": 0.24147, "loss": 0.77367, "time": 0.51172} -{"mode": "val", "epoch": 4, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3766, "bbox_mAP_50": 0.5997, "bbox_mAP_75": 0.4132, "bbox_mAP_s": 0.215, "bbox_mAP_m": 0.4239, "bbox_mAP_l": 0.4868, "bbox_mAP_copypaste": "0.3766 0.5997 0.4132 0.2150 0.4239 0.4868", "segm_mAP": 0.3522, "segm_mAP_50": 0.5656, "segm_mAP_75": 0.376, "segm_mAP_s": 0.164, "segm_mAP_m": 0.3892, "segm_mAP_l": 0.5183, "segm_mAP_copypaste": "0.3522 0.5656 0.3760 0.1640 0.3892 0.5183"} -{"mode": "train", "epoch": 5, "iter": 50, "lr": 0.0002, "memory": 12588, "data_time": 0.11662, "loss_rpn_cls": 0.02369, "loss_rpn_bbox": 0.04614, "loss_cls": 0.21691, "acc": 92.55176, "loss_bbox": 0.24642, "loss_mask": 0.24137, "loss": 0.77453, "time": 0.59651} -{"mode": "train", "epoch": 5, "iter": 100, "lr": 0.0002, "memory": 12588, "data_time": 0.05766, "loss_rpn_cls": 0.02473, "loss_rpn_bbox": 0.04916, "loss_cls": 0.21036, "acc": 92.63013, "loss_bbox": 0.24693, "loss_mask": 0.24076, "loss": 0.77195, "time": 0.59966} -{"mode": "train", "epoch": 5, "iter": 150, "lr": 0.0002, "memory": 12588, "data_time": 0.04784, "loss_rpn_cls": 0.02438, "loss_rpn_bbox": 0.04745, "loss_cls": 0.2074, "acc": 92.81494, "loss_bbox": 0.23777, "loss_mask": 0.23882, "loss": 0.75582, "time": 0.51237} -{"mode": "train", "epoch": 5, "iter": 200, "lr": 0.0002, "memory": 12588, "data_time": 0.04813, "loss_rpn_cls": 0.02576, "loss_rpn_bbox": 0.04894, "loss_cls": 0.2022, "acc": 92.95239, "loss_bbox": 0.23399, "loss_mask": 0.2377, "loss": 0.74859, "time": 0.50182} -{"mode": "train", "epoch": 5, "iter": 250, "lr": 0.0002, "memory": 12588, "data_time": 0.04732, "loss_rpn_cls": 0.02515, "loss_rpn_bbox": 0.04884, "loss_cls": 0.20488, "acc": 92.94312, "loss_bbox": 0.23269, "loss_mask": 0.23846, "loss": 0.75002, "time": 0.51373} -{"mode": "train", "epoch": 5, "iter": 300, "lr": 0.0002, "memory": 12588, "data_time": 0.03968, "loss_rpn_cls": 0.02449, "loss_rpn_bbox": 0.04641, "loss_cls": 0.20264, "acc": 92.95508, "loss_bbox": 0.23225, "loss_mask": 0.23739, "loss": 0.7432, "time": 0.50958} -{"mode": "train", "epoch": 5, "iter": 350, "lr": 0.0002, "memory": 12588, "data_time": 0.05456, "loss_rpn_cls": 0.02525, "loss_rpn_bbox": 0.04702, "loss_cls": 0.20306, "acc": 92.79688, "loss_bbox": 0.24258, "loss_mask": 0.23855, "loss": 0.75646, "time": 0.51171} -{"mode": "train", "epoch": 5, "iter": 400, "lr": 0.0002, "memory": 12588, "data_time": 0.03421, "loss_rpn_cls": 0.02308, "loss_rpn_bbox": 0.045, "loss_cls": 0.20816, "acc": 92.85742, "loss_bbox": 0.2378, 
"loss_mask": 0.23583, "loss": 0.74987, "time": 0.51758} -{"mode": "train", "epoch": 5, "iter": 450, "lr": 0.0002, "memory": 12588, "data_time": 0.04626, "loss_rpn_cls": 0.02618, "loss_rpn_bbox": 0.04731, "loss_cls": 0.2026, "acc": 92.97021, "loss_bbox": 0.23699, "loss_mask": 0.23558, "loss": 0.74865, "time": 0.49495} -{"mode": "train", "epoch": 5, "iter": 500, "lr": 0.0002, "memory": 12588, "data_time": 0.04187, "loss_rpn_cls": 0.02413, "loss_rpn_bbox": 0.0479, "loss_cls": 0.2042, "acc": 92.88843, "loss_bbox": 0.23366, "loss_mask": 0.23592, "loss": 0.7458, "time": 0.50377} -{"mode": "train", "epoch": 5, "iter": 550, "lr": 0.0002, "memory": 12588, "data_time": 0.03732, "loss_rpn_cls": 0.02301, "loss_rpn_bbox": 0.0453, "loss_cls": 0.19673, "acc": 93.18384, "loss_bbox": 0.22538, "loss_mask": 0.22902, "loss": 0.71944, "time": 0.49496} -{"mode": "train", "epoch": 5, "iter": 600, "lr": 0.0002, "memory": 12588, "data_time": 0.04162, "loss_rpn_cls": 0.02637, "loss_rpn_bbox": 0.04854, "loss_cls": 0.2176, "acc": 92.40332, "loss_bbox": 0.2502, "loss_mask": 0.23276, "loss": 0.77547, "time": 0.51517} -{"mode": "train", "epoch": 5, "iter": 650, "lr": 0.0002, "memory": 12588, "data_time": 0.04344, "loss_rpn_cls": 0.02462, "loss_rpn_bbox": 0.04571, "loss_cls": 0.202, "acc": 92.89087, "loss_bbox": 0.23674, "loss_mask": 0.23739, "loss": 0.74646, "time": 0.51752} -{"mode": "train", "epoch": 5, "iter": 700, "lr": 0.0002, "memory": 12588, "data_time": 0.04952, "loss_rpn_cls": 0.02486, "loss_rpn_bbox": 0.04826, "loss_cls": 0.20838, "acc": 92.89795, "loss_bbox": 0.23681, "loss_mask": 0.24261, "loss": 0.76092, "time": 0.519} -{"mode": "train", "epoch": 5, "iter": 750, "lr": 0.0002, "memory": 12588, "data_time": 0.04618, "loss_rpn_cls": 0.02265, "loss_rpn_bbox": 0.04402, "loss_cls": 0.19976, "acc": 93.26807, "loss_bbox": 0.22602, "loss_mask": 0.23046, "loss": 0.72291, "time": 0.51632} -{"mode": "train", "epoch": 5, "iter": 800, "lr": 0.0002, "memory": 12588, "data_time": 0.05628, "loss_rpn_cls": 0.02759, "loss_rpn_bbox": 0.04921, "loss_cls": 0.20635, "acc": 92.79663, "loss_bbox": 0.23885, "loss_mask": 0.23895, "loss": 0.76094, "time": 0.51635} -{"mode": "train", "epoch": 5, "iter": 850, "lr": 0.0002, "memory": 12588, "data_time": 0.04556, "loss_rpn_cls": 0.02603, "loss_rpn_bbox": 0.04849, "loss_cls": 0.20973, "acc": 92.65527, "loss_bbox": 0.24347, "loss_mask": 0.23947, "loss": 0.76719, "time": 0.5649} -{"mode": "train", "epoch": 5, "iter": 900, "lr": 0.0002, "memory": 12588, "data_time": 0.04749, "loss_rpn_cls": 0.02595, "loss_rpn_bbox": 0.04898, "loss_cls": 0.21977, "acc": 92.49048, "loss_bbox": 0.24383, "loss_mask": 0.24077, "loss": 0.7793, "time": 0.48523} -{"mode": "train", "epoch": 5, "iter": 950, "lr": 0.0002, "memory": 12588, "data_time": 0.04209, "loss_rpn_cls": 0.02597, "loss_rpn_bbox": 0.04658, "loss_cls": 0.20578, "acc": 92.94629, "loss_bbox": 0.23065, "loss_mask": 0.23417, "loss": 0.74314, "time": 0.50352} -{"mode": "train", "epoch": 5, "iter": 1000, "lr": 0.0002, "memory": 12588, "data_time": 0.04888, "loss_rpn_cls": 0.02647, "loss_rpn_bbox": 0.0498, "loss_cls": 0.20785, "acc": 92.88184, "loss_bbox": 0.24066, "loss_mask": 0.24136, "loss": 0.76613, "time": 0.51751} -{"mode": "train", "epoch": 5, "iter": 1050, "lr": 0.0002, "memory": 12588, "data_time": 0.0455, "loss_rpn_cls": 0.0279, "loss_rpn_bbox": 0.04758, "loss_cls": 0.21404, "acc": 92.54688, "loss_bbox": 0.2467, "loss_mask": 0.24093, "loss": 0.77716, "time": 0.5125} -{"mode": "train", "epoch": 5, "iter": 1100, "lr": 0.0002, "memory": 12588, 
"data_time": 0.04157, "loss_rpn_cls": 0.02463, "loss_rpn_bbox": 0.0449, "loss_cls": 0.20002, "acc": 93.09302, "loss_bbox": 0.22816, "loss_mask": 0.23587, "loss": 0.73357, "time": 0.50687} -{"mode": "train", "epoch": 5, "iter": 1150, "lr": 0.0002, "memory": 12588, "data_time": 0.04201, "loss_rpn_cls": 0.02554, "loss_rpn_bbox": 0.04722, "loss_cls": 0.21352, "acc": 92.60718, "loss_bbox": 0.24182, "loss_mask": 0.23786, "loss": 0.76597, "time": 0.49935} -{"mode": "train", "epoch": 5, "iter": 1200, "lr": 0.0002, "memory": 12588, "data_time": 0.05128, "loss_rpn_cls": 0.02381, "loss_rpn_bbox": 0.04472, "loss_cls": 0.2085, "acc": 92.90942, "loss_bbox": 0.23196, "loss_mask": 0.23139, "loss": 0.74038, "time": 0.56107} -{"mode": "train", "epoch": 5, "iter": 1250, "lr": 0.0002, "memory": 12588, "data_time": 0.04486, "loss_rpn_cls": 0.02608, "loss_rpn_bbox": 0.04838, "loss_cls": 0.21817, "acc": 92.4939, "loss_bbox": 0.23729, "loss_mask": 0.23134, "loss": 0.76126, "time": 0.56275} -{"mode": "train", "epoch": 5, "iter": 1300, "lr": 0.0002, "memory": 12588, "data_time": 0.04344, "loss_rpn_cls": 0.02385, "loss_rpn_bbox": 0.04459, "loss_cls": 0.1927, "acc": 93.49194, "loss_bbox": 0.21629, "loss_mask": 0.23329, "loss": 0.71071, "time": 0.50115} -{"mode": "train", "epoch": 5, "iter": 1350, "lr": 0.0002, "memory": 12588, "data_time": 0.04376, "loss_rpn_cls": 0.0276, "loss_rpn_bbox": 0.04658, "loss_cls": 0.20648, "acc": 92.90649, "loss_bbox": 0.23777, "loss_mask": 0.23601, "loss": 0.75443, "time": 0.55651} -{"mode": "train", "epoch": 5, "iter": 1400, "lr": 0.0002, "memory": 12588, "data_time": 0.0367, "loss_rpn_cls": 0.02566, "loss_rpn_bbox": 0.05041, "loss_cls": 0.21051, "acc": 92.74561, "loss_bbox": 0.24099, "loss_mask": 0.24374, "loss": 0.7713, "time": 0.55785} -{"mode": "train", "epoch": 5, "iter": 1450, "lr": 0.0002, "memory": 12588, "data_time": 0.09266, "loss_rpn_cls": 0.0263, "loss_rpn_bbox": 0.04517, "loss_cls": 0.20918, "acc": 92.85303, "loss_bbox": 0.23536, "loss_mask": 0.24115, "loss": 0.75717, "time": 0.54413} -{"mode": "train", "epoch": 5, "iter": 1500, "lr": 0.0002, "memory": 12588, "data_time": 0.03792, "loss_rpn_cls": 0.02558, "loss_rpn_bbox": 0.04579, "loss_cls": 0.20704, "acc": 93.04517, "loss_bbox": 0.22967, "loss_mask": 0.23738, "loss": 0.74546, "time": 0.50473} -{"mode": "train", "epoch": 5, "iter": 1550, "lr": 0.0002, "memory": 12588, "data_time": 0.04761, "loss_rpn_cls": 0.02302, "loss_rpn_bbox": 0.04792, "loss_cls": 0.20852, "acc": 92.7312, "loss_bbox": 0.24122, "loss_mask": 0.2355, "loss": 0.75619, "time": 0.51317} -{"mode": "train", "epoch": 5, "iter": 1600, "lr": 0.0002, "memory": 12588, "data_time": 0.04648, "loss_rpn_cls": 0.02755, "loss_rpn_bbox": 0.04739, "loss_cls": 0.20893, "acc": 92.76172, "loss_bbox": 0.24083, "loss_mask": 0.2433, "loss": 0.768, "time": 0.49462} -{"mode": "train", "epoch": 5, "iter": 1650, "lr": 0.0002, "memory": 12588, "data_time": 0.04689, "loss_rpn_cls": 0.02447, "loss_rpn_bbox": 0.04485, "loss_cls": 0.20172, "acc": 92.99268, "loss_bbox": 0.22798, "loss_mask": 0.23017, "loss": 0.72919, "time": 0.50793} -{"mode": "train", "epoch": 5, "iter": 1700, "lr": 0.0002, "memory": 12588, "data_time": 0.05113, "loss_rpn_cls": 0.02558, "loss_rpn_bbox": 0.04735, "loss_cls": 0.21156, "acc": 92.82983, "loss_bbox": 0.23052, "loss_mask": 0.23811, "loss": 0.75313, "time": 0.51609} -{"mode": "train", "epoch": 5, "iter": 1750, "lr": 0.0002, "memory": 12588, "data_time": 0.04909, "loss_rpn_cls": 0.02502, "loss_rpn_bbox": 0.04865, "loss_cls": 0.21373, "acc": 92.65601, 
"loss_bbox": 0.24076, "loss_mask": 0.24038, "loss": 0.76854, "time": 0.50129} -{"mode": "train", "epoch": 5, "iter": 1800, "lr": 0.0002, "memory": 12588, "data_time": 0.04957, "loss_rpn_cls": 0.02754, "loss_rpn_bbox": 0.04786, "loss_cls": 0.21608, "acc": 92.57324, "loss_bbox": 0.24126, "loss_mask": 0.24064, "loss": 0.77337, "time": 0.50292} -{"mode": "train", "epoch": 5, "iter": 1850, "lr": 0.0002, "memory": 12588, "data_time": 0.05417, "loss_rpn_cls": 0.02466, "loss_rpn_bbox": 0.04887, "loss_cls": 0.20724, "acc": 92.86108, "loss_bbox": 0.23339, "loss_mask": 0.23313, "loss": 0.7473, "time": 0.50746} -{"mode": "train", "epoch": 5, "iter": 1900, "lr": 0.0002, "memory": 12588, "data_time": 0.05788, "loss_rpn_cls": 0.02461, "loss_rpn_bbox": 0.04516, "loss_cls": 0.20939, "acc": 92.77954, "loss_bbox": 0.2333, "loss_mask": 0.23341, "loss": 0.74588, "time": 0.49852} -{"mode": "train", "epoch": 5, "iter": 1950, "lr": 0.0002, "memory": 12588, "data_time": 0.03699, "loss_rpn_cls": 0.02397, "loss_rpn_bbox": 0.04497, "loss_cls": 0.21906, "acc": 92.41309, "loss_bbox": 0.24408, "loss_mask": 0.24083, "loss": 0.77292, "time": 0.56277} -{"mode": "train", "epoch": 5, "iter": 2000, "lr": 0.0002, "memory": 12588, "data_time": 0.05667, "loss_rpn_cls": 0.02701, "loss_rpn_bbox": 0.0451, "loss_cls": 0.20119, "acc": 93.19702, "loss_bbox": 0.22837, "loss_mask": 0.23361, "loss": 0.73528, "time": 0.50626} -{"mode": "train", "epoch": 5, "iter": 2050, "lr": 0.0002, "memory": 12588, "data_time": 0.04631, "loss_rpn_cls": 0.02612, "loss_rpn_bbox": 0.04655, "loss_cls": 0.21422, "acc": 92.67163, "loss_bbox": 0.23778, "loss_mask": 0.24144, "loss": 0.7661, "time": 0.50171} -{"mode": "train", "epoch": 5, "iter": 2100, "lr": 0.0002, "memory": 12588, "data_time": 0.04751, "loss_rpn_cls": 0.02617, "loss_rpn_bbox": 0.04749, "loss_cls": 0.21539, "acc": 92.40869, "loss_bbox": 0.24193, "loss_mask": 0.23889, "loss": 0.76987, "time": 0.50776} -{"mode": "train", "epoch": 5, "iter": 2150, "lr": 0.0002, "memory": 12588, "data_time": 0.04494, "loss_rpn_cls": 0.02647, "loss_rpn_bbox": 0.04518, "loss_cls": 0.20903, "acc": 92.76172, "loss_bbox": 0.23783, "loss_mask": 0.23884, "loss": 0.75735, "time": 0.51013} -{"mode": "train", "epoch": 5, "iter": 2200, "lr": 0.0002, "memory": 12588, "data_time": 0.04598, "loss_rpn_cls": 0.02636, "loss_rpn_bbox": 0.04664, "loss_cls": 0.20949, "acc": 92.76953, "loss_bbox": 0.2414, "loss_mask": 0.23832, "loss": 0.76222, "time": 0.49872} -{"mode": "train", "epoch": 5, "iter": 2250, "lr": 0.0002, "memory": 12588, "data_time": 0.04936, "loss_rpn_cls": 0.02503, "loss_rpn_bbox": 0.04522, "loss_cls": 0.21119, "acc": 92.78857, "loss_bbox": 0.23482, "loss_mask": 0.23286, "loss": 0.74912, "time": 0.48872} -{"mode": "train", "epoch": 5, "iter": 2300, "lr": 0.0002, "memory": 12588, "data_time": 0.04246, "loss_rpn_cls": 0.0255, "loss_rpn_bbox": 0.04459, "loss_cls": 0.21189, "acc": 92.66797, "loss_bbox": 0.24375, "loss_mask": 0.23888, "loss": 0.7646, "time": 0.51342} -{"mode": "train", "epoch": 5, "iter": 2350, "lr": 0.0002, "memory": 12588, "data_time": 0.04598, "loss_rpn_cls": 0.02666, "loss_rpn_bbox": 0.04832, "loss_cls": 0.20879, "acc": 92.74146, "loss_bbox": 0.23801, "loss_mask": 0.23294, "loss": 0.75472, "time": 0.50926} -{"mode": "train", "epoch": 5, "iter": 2400, "lr": 0.0002, "memory": 12588, "data_time": 0.04206, "loss_rpn_cls": 0.02644, "loss_rpn_bbox": 0.04612, "loss_cls": 0.20439, "acc": 92.96899, "loss_bbox": 0.23029, "loss_mask": 0.23394, "loss": 0.74118, "time": 0.50612} -{"mode": "train", "epoch": 5, "iter": 
2450, "lr": 0.0002, "memory": 12588, "data_time": 0.04313, "loss_rpn_cls": 0.02586, "loss_rpn_bbox": 0.04876, "loss_cls": 0.21143, "acc": 92.69824, "loss_bbox": 0.23791, "loss_mask": 0.2373, "loss": 0.76126, "time": 0.49588} -{"mode": "train", "epoch": 5, "iter": 2500, "lr": 0.0002, "memory": 12588, "data_time": 0.05663, "loss_rpn_cls": 0.02761, "loss_rpn_bbox": 0.04897, "loss_cls": 0.22542, "acc": 92.32129, "loss_bbox": 0.24624, "loss_mask": 0.24153, "loss": 0.78978, "time": 0.50675} -{"mode": "train", "epoch": 5, "iter": 2550, "lr": 0.0002, "memory": 12588, "data_time": 0.05094, "loss_rpn_cls": 0.02536, "loss_rpn_bbox": 0.04776, "loss_cls": 0.20135, "acc": 93.0835, "loss_bbox": 0.23064, "loss_mask": 0.2326, "loss": 0.7377, "time": 0.49348} -{"mode": "train", "epoch": 5, "iter": 2600, "lr": 0.0002, "memory": 12588, "data_time": 0.04313, "loss_rpn_cls": 0.02359, "loss_rpn_bbox": 0.04334, "loss_cls": 0.19362, "acc": 93.28809, "loss_bbox": 0.2264, "loss_mask": 0.2275, "loss": 0.71445, "time": 0.48041} -{"mode": "train", "epoch": 5, "iter": 2650, "lr": 0.0002, "memory": 12588, "data_time": 0.04623, "loss_rpn_cls": 0.02599, "loss_rpn_bbox": 0.04832, "loss_cls": 0.22245, "acc": 92.55103, "loss_bbox": 0.24313, "loss_mask": 0.24387, "loss": 0.78376, "time": 0.50216} -{"mode": "train", "epoch": 5, "iter": 2700, "lr": 0.0002, "memory": 12588, "data_time": 0.04886, "loss_rpn_cls": 0.0274, "loss_rpn_bbox": 0.04869, "loss_cls": 0.22126, "acc": 92.52686, "loss_bbox": 0.24328, "loss_mask": 0.23897, "loss": 0.77959, "time": 0.51599} -{"mode": "train", "epoch": 5, "iter": 2750, "lr": 0.0002, "memory": 12588, "data_time": 0.04895, "loss_rpn_cls": 0.0249, "loss_rpn_bbox": 0.04726, "loss_cls": 0.21094, "acc": 92.76709, "loss_bbox": 0.23564, "loss_mask": 0.23915, "loss": 0.75789, "time": 0.51361} -{"mode": "train", "epoch": 5, "iter": 2800, "lr": 0.0002, "memory": 12588, "data_time": 0.04236, "loss_rpn_cls": 0.02426, "loss_rpn_bbox": 0.04558, "loss_cls": 0.20014, "acc": 93.12793, "loss_bbox": 0.22466, "loss_mask": 0.23581, "loss": 0.73045, "time": 0.50631} -{"mode": "train", "epoch": 5, "iter": 2850, "lr": 0.0002, "memory": 12588, "data_time": 0.04333, "loss_rpn_cls": 0.0268, "loss_rpn_bbox": 0.04691, "loss_cls": 0.21205, "acc": 92.67236, "loss_bbox": 0.23977, "loss_mask": 0.23661, "loss": 0.76214, "time": 0.51101} -{"mode": "train", "epoch": 5, "iter": 2900, "lr": 0.0002, "memory": 12588, "data_time": 0.04416, "loss_rpn_cls": 0.02479, "loss_rpn_bbox": 0.04749, "loss_cls": 0.20923, "acc": 92.75342, "loss_bbox": 0.23856, "loss_mask": 0.23677, "loss": 0.75684, "time": 0.56904} -{"mode": "train", "epoch": 5, "iter": 2950, "lr": 0.0002, "memory": 12588, "data_time": 0.04989, "loss_rpn_cls": 0.02751, "loss_rpn_bbox": 0.04923, "loss_cls": 0.21716, "acc": 92.47144, "loss_bbox": 0.24294, "loss_mask": 0.23644, "loss": 0.77329, "time": 0.50208} -{"mode": "train", "epoch": 5, "iter": 3000, "lr": 0.0002, "memory": 12588, "data_time": 0.04379, "loss_rpn_cls": 0.0275, "loss_rpn_bbox": 0.04857, "loss_cls": 0.20773, "acc": 92.78931, "loss_bbox": 0.2376, "loss_mask": 0.23287, "loss": 0.75427, "time": 0.51162} -{"mode": "train", "epoch": 5, "iter": 3050, "lr": 0.0002, "memory": 12588, "data_time": 0.03852, "loss_rpn_cls": 0.02578, "loss_rpn_bbox": 0.04799, "loss_cls": 0.22844, "acc": 92.39429, "loss_bbox": 0.23884, "loss_mask": 0.24277, "loss": 0.78381, "time": 0.56379} -{"mode": "train", "epoch": 5, "iter": 3100, "lr": 0.0002, "memory": 12588, "data_time": 0.03417, "loss_rpn_cls": 0.02562, "loss_rpn_bbox": 0.04665, 
"loss_cls": 0.20547, "acc": 92.89673, "loss_bbox": 0.22935, "loss_mask": 0.23875, "loss": 0.74583, "time": 0.50988} -{"mode": "train", "epoch": 5, "iter": 3150, "lr": 0.0002, "memory": 12588, "data_time": 0.04078, "loss_rpn_cls": 0.02589, "loss_rpn_bbox": 0.04703, "loss_cls": 0.21322, "acc": 92.73755, "loss_bbox": 0.23964, "loss_mask": 0.24018, "loss": 0.76596, "time": 0.50534} -{"mode": "train", "epoch": 5, "iter": 3200, "lr": 0.0002, "memory": 12588, "data_time": 0.04838, "loss_rpn_cls": 0.02817, "loss_rpn_bbox": 0.049, "loss_cls": 0.22127, "acc": 92.27197, "loss_bbox": 0.24843, "loss_mask": 0.2441, "loss": 0.79097, "time": 0.51486} -{"mode": "train", "epoch": 5, "iter": 3250, "lr": 0.0002, "memory": 12588, "data_time": 0.04635, "loss_rpn_cls": 0.0266, "loss_rpn_bbox": 0.05054, "loss_cls": 0.22017, "acc": 92.49146, "loss_bbox": 0.24649, "loss_mask": 0.24342, "loss": 0.78721, "time": 0.50334} -{"mode": "train", "epoch": 5, "iter": 3300, "lr": 0.0002, "memory": 12588, "data_time": 0.04211, "loss_rpn_cls": 0.02713, "loss_rpn_bbox": 0.04443, "loss_cls": 0.20943, "acc": 92.8667, "loss_bbox": 0.23099, "loss_mask": 0.23338, "loss": 0.74536, "time": 0.55431} -{"mode": "train", "epoch": 5, "iter": 3350, "lr": 0.0002, "memory": 12588, "data_time": 0.04364, "loss_rpn_cls": 0.02591, "loss_rpn_bbox": 0.04744, "loss_cls": 0.20635, "acc": 92.83594, "loss_bbox": 0.23802, "loss_mask": 0.23591, "loss": 0.75363, "time": 0.50929} -{"mode": "train", "epoch": 5, "iter": 3400, "lr": 0.0002, "memory": 12588, "data_time": 0.04743, "loss_rpn_cls": 0.02651, "loss_rpn_bbox": 0.04847, "loss_cls": 0.21086, "acc": 92.89453, "loss_bbox": 0.23151, "loss_mask": 0.23619, "loss": 0.75354, "time": 0.51701} -{"mode": "train", "epoch": 5, "iter": 3450, "lr": 0.0002, "memory": 12588, "data_time": 0.04185, "loss_rpn_cls": 0.02689, "loss_rpn_bbox": 0.04765, "loss_cls": 0.21468, "acc": 92.66724, "loss_bbox": 0.24048, "loss_mask": 0.23746, "loss": 0.76716, "time": 0.51053} -{"mode": "train", "epoch": 5, "iter": 3500, "lr": 0.0002, "memory": 12588, "data_time": 0.0486, "loss_rpn_cls": 0.02991, "loss_rpn_bbox": 0.04964, "loss_cls": 0.21173, "acc": 92.70654, "loss_bbox": 0.24116, "loss_mask": 0.23814, "loss": 0.77057, "time": 0.56877} -{"mode": "train", "epoch": 5, "iter": 3550, "lr": 0.0002, "memory": 12588, "data_time": 0.05179, "loss_rpn_cls": 0.02836, "loss_rpn_bbox": 0.0496, "loss_cls": 0.21656, "acc": 92.63965, "loss_bbox": 0.23981, "loss_mask": 0.24435, "loss": 0.77868, "time": 0.50142} -{"mode": "train", "epoch": 5, "iter": 3600, "lr": 0.0002, "memory": 12588, "data_time": 0.04527, "loss_rpn_cls": 0.02876, "loss_rpn_bbox": 0.05167, "loss_cls": 0.21191, "acc": 92.70557, "loss_bbox": 0.23926, "loss_mask": 0.24055, "loss": 0.77215, "time": 0.51856} -{"mode": "train", "epoch": 5, "iter": 3650, "lr": 0.0002, "memory": 12588, "data_time": 0.04896, "loss_rpn_cls": 0.02805, "loss_rpn_bbox": 0.04991, "loss_cls": 0.2086, "acc": 92.88989, "loss_bbox": 0.23603, "loss_mask": 0.23212, "loss": 0.75471, "time": 0.50834} -{"mode": "train", "epoch": 5, "iter": 3700, "lr": 0.0002, "memory": 12588, "data_time": 0.03927, "loss_rpn_cls": 0.02679, "loss_rpn_bbox": 0.04657, "loss_cls": 0.21443, "acc": 92.93799, "loss_bbox": 0.23415, "loss_mask": 0.23636, "loss": 0.7583, "time": 0.49947} -{"mode": "train", "epoch": 5, "iter": 3750, "lr": 0.0002, "memory": 12588, "data_time": 0.05212, "loss_rpn_cls": 0.02855, "loss_rpn_bbox": 0.04678, "loss_cls": 0.21286, "acc": 92.76343, "loss_bbox": 0.22905, "loss_mask": 0.23548, "loss": 0.75273, "time": 0.50119} 
-{"mode": "train", "epoch": 5, "iter": 3800, "lr": 0.0002, "memory": 12588, "data_time": 0.04789, "loss_rpn_cls": 0.0247, "loss_rpn_bbox": 0.04573, "loss_cls": 0.21697, "acc": 92.6333, "loss_bbox": 0.23363, "loss_mask": 0.23959, "loss": 0.76062, "time": 0.49174} -{"mode": "train", "epoch": 5, "iter": 3850, "lr": 0.0002, "memory": 12588, "data_time": 0.04445, "loss_rpn_cls": 0.02506, "loss_rpn_bbox": 0.04531, "loss_cls": 0.2109, "acc": 92.92407, "loss_bbox": 0.2314, "loss_mask": 0.2304, "loss": 0.74308, "time": 0.49393} -{"mode": "train", "epoch": 5, "iter": 3900, "lr": 0.0002, "memory": 12588, "data_time": 0.04603, "loss_rpn_cls": 0.0266, "loss_rpn_bbox": 0.04635, "loss_cls": 0.21169, "acc": 92.75977, "loss_bbox": 0.23149, "loss_mask": 0.24211, "loss": 0.75824, "time": 0.50382} -{"mode": "train", "epoch": 5, "iter": 3950, "lr": 0.0002, "memory": 12588, "data_time": 0.04741, "loss_rpn_cls": 0.02815, "loss_rpn_bbox": 0.04583, "loss_cls": 0.21222, "acc": 92.79663, "loss_bbox": 0.23527, "loss_mask": 0.24184, "loss": 0.76331, "time": 0.49577} -{"mode": "train", "epoch": 5, "iter": 4000, "lr": 0.0002, "memory": 12588, "data_time": 0.04938, "loss_rpn_cls": 0.02417, "loss_rpn_bbox": 0.0466, "loss_cls": 0.21112, "acc": 92.74463, "loss_bbox": 0.23322, "loss_mask": 0.23042, "loss": 0.74553, "time": 0.50458} -{"mode": "train", "epoch": 5, "iter": 4050, "lr": 0.0002, "memory": 12588, "data_time": 0.04276, "loss_rpn_cls": 0.02717, "loss_rpn_bbox": 0.04961, "loss_cls": 0.21225, "acc": 92.73755, "loss_bbox": 0.23544, "loss_mask": 0.23178, "loss": 0.75624, "time": 0.52245} -{"mode": "train", "epoch": 5, "iter": 4100, "lr": 0.0002, "memory": 12588, "data_time": 0.04638, "loss_rpn_cls": 0.02729, "loss_rpn_bbox": 0.04937, "loss_cls": 0.22181, "acc": 92.48169, "loss_bbox": 0.2485, "loss_mask": 0.24093, "loss": 0.7879, "time": 0.51678} -{"mode": "train", "epoch": 5, "iter": 4150, "lr": 0.0002, "memory": 12588, "data_time": 0.04538, "loss_rpn_cls": 0.02639, "loss_rpn_bbox": 0.04552, "loss_cls": 0.21056, "acc": 92.8562, "loss_bbox": 0.23268, "loss_mask": 0.23206, "loss": 0.74722, "time": 0.4958} -{"mode": "train", "epoch": 5, "iter": 4200, "lr": 0.0002, "memory": 12588, "data_time": 0.0449, "loss_rpn_cls": 0.02904, "loss_rpn_bbox": 0.04938, "loss_cls": 0.21197, "acc": 92.90356, "loss_bbox": 0.23407, "loss_mask": 0.245, "loss": 0.76945, "time": 0.50252} -{"mode": "train", "epoch": 5, "iter": 4250, "lr": 0.0002, "memory": 12588, "data_time": 0.0432, "loss_rpn_cls": 0.02555, "loss_rpn_bbox": 0.04738, "loss_cls": 0.21032, "acc": 92.92114, "loss_bbox": 0.23244, "loss_mask": 0.23598, "loss": 0.75167, "time": 0.51243} -{"mode": "train", "epoch": 5, "iter": 4300, "lr": 0.0002, "memory": 12588, "data_time": 0.0412, "loss_rpn_cls": 0.02339, "loss_rpn_bbox": 0.0453, "loss_cls": 0.20577, "acc": 93.04956, "loss_bbox": 0.22511, "loss_mask": 0.23751, "loss": 0.73709, "time": 0.55561} -{"mode": "train", "epoch": 5, "iter": 4350, "lr": 0.0002, "memory": 12588, "data_time": 0.05435, "loss_rpn_cls": 0.02425, "loss_rpn_bbox": 0.04756, "loss_cls": 0.20993, "acc": 92.72705, "loss_bbox": 0.2416, "loss_mask": 0.23589, "loss": 0.75923, "time": 0.50788} -{"mode": "train", "epoch": 5, "iter": 4400, "lr": 0.0002, "memory": 12588, "data_time": 0.0425, "loss_rpn_cls": 0.02682, "loss_rpn_bbox": 0.05002, "loss_cls": 0.22215, "acc": 92.31616, "loss_bbox": 0.25139, "loss_mask": 0.23703, "loss": 0.78741, "time": 0.51229} -{"mode": "train", "epoch": 5, "iter": 4450, "lr": 0.0002, "memory": 12588, "data_time": 0.04975, "loss_rpn_cls": 0.0247, 
"loss_rpn_bbox": 0.04549, "loss_cls": 0.20476, "acc": 92.97266, "loss_bbox": 0.22493, "loss_mask": 0.22429, "loss": 0.72417, "time": 0.4968} -{"mode": "train", "epoch": 5, "iter": 4500, "lr": 0.0002, "memory": 12588, "data_time": 0.04478, "loss_rpn_cls": 0.02461, "loss_rpn_bbox": 0.04195, "loss_cls": 0.20055, "acc": 93.05835, "loss_bbox": 0.22674, "loss_mask": 0.23022, "loss": 0.72407, "time": 0.49349} -{"mode": "train", "epoch": 5, "iter": 4550, "lr": 0.0002, "memory": 12588, "data_time": 0.05035, "loss_rpn_cls": 0.02539, "loss_rpn_bbox": 0.04544, "loss_cls": 0.21374, "acc": 92.63208, "loss_bbox": 0.2405, "loss_mask": 0.24139, "loss": 0.76646, "time": 0.50279} -{"mode": "train", "epoch": 5, "iter": 4600, "lr": 0.0002, "memory": 12588, "data_time": 0.04788, "loss_rpn_cls": 0.02528, "loss_rpn_bbox": 0.04464, "loss_cls": 0.20216, "acc": 93.15942, "loss_bbox": 0.23036, "loss_mask": 0.23058, "loss": 0.73302, "time": 0.50445} -{"mode": "train", "epoch": 5, "iter": 4650, "lr": 0.0002, "memory": 12588, "data_time": 0.04951, "loss_rpn_cls": 0.0272, "loss_rpn_bbox": 0.04649, "loss_cls": 0.21249, "acc": 92.80225, "loss_bbox": 0.23165, "loss_mask": 0.23979, "loss": 0.75762, "time": 0.4986} -{"mode": "train", "epoch": 5, "iter": 4700, "lr": 0.0002, "memory": 12588, "data_time": 0.04869, "loss_rpn_cls": 0.02982, "loss_rpn_bbox": 0.04894, "loss_cls": 0.22434, "acc": 92.40698, "loss_bbox": 0.24122, "loss_mask": 0.23989, "loss": 0.78421, "time": 0.55749} -{"mode": "train", "epoch": 5, "iter": 4750, "lr": 0.0002, "memory": 12588, "data_time": 0.05299, "loss_rpn_cls": 0.0282, "loss_rpn_bbox": 0.04961, "loss_cls": 0.22172, "acc": 92.44385, "loss_bbox": 0.24204, "loss_mask": 0.24453, "loss": 0.7861, "time": 0.57173} -{"mode": "train", "epoch": 5, "iter": 4800, "lr": 0.0002, "memory": 12588, "data_time": 0.04205, "loss_rpn_cls": 0.02662, "loss_rpn_bbox": 0.04366, "loss_cls": 0.21837, "acc": 92.64575, "loss_bbox": 0.23906, "loss_mask": 0.23282, "loss": 0.76052, "time": 0.51366} -{"mode": "train", "epoch": 5, "iter": 4850, "lr": 0.0002, "memory": 12588, "data_time": 0.04444, "loss_rpn_cls": 0.02818, "loss_rpn_bbox": 0.05111, "loss_cls": 0.22332, "acc": 92.32251, "loss_bbox": 0.25018, "loss_mask": 0.24461, "loss": 0.7974, "time": 0.52086} -{"mode": "train", "epoch": 5, "iter": 4900, "lr": 0.0002, "memory": 12588, "data_time": 0.04349, "loss_rpn_cls": 0.02668, "loss_rpn_bbox": 0.04875, "loss_cls": 0.20326, "acc": 93.0874, "loss_bbox": 0.22522, "loss_mask": 0.23086, "loss": 0.73476, "time": 0.54268} -{"mode": "train", "epoch": 5, "iter": 4950, "lr": 0.0002, "memory": 12588, "data_time": 0.04261, "loss_rpn_cls": 0.02614, "loss_rpn_bbox": 0.04873, "loss_cls": 0.21702, "acc": 92.63867, "loss_bbox": 0.23936, "loss_mask": 0.24139, "loss": 0.77264, "time": 0.52126} -{"mode": "train", "epoch": 5, "iter": 5000, "lr": 0.0002, "memory": 12588, "data_time": 0.04147, "loss_rpn_cls": 0.02445, "loss_rpn_bbox": 0.04635, "loss_cls": 0.20472, "acc": 93.00684, "loss_bbox": 0.22275, "loss_mask": 0.2388, "loss": 0.73706, "time": 0.49104} -{"mode": "train", "epoch": 5, "iter": 5050, "lr": 0.0002, "memory": 12588, "data_time": 0.09125, "loss_rpn_cls": 0.02606, "loss_rpn_bbox": 0.04834, "loss_cls": 0.20857, "acc": 92.97974, "loss_bbox": 0.22815, "loss_mask": 0.23173, "loss": 0.74284, "time": 0.54845} -{"mode": "train", "epoch": 5, "iter": 5100, "lr": 0.0002, "memory": 12588, "data_time": 0.03906, "loss_rpn_cls": 0.02788, "loss_rpn_bbox": 0.0478, "loss_cls": 0.21189, "acc": 92.77539, "loss_bbox": 0.2357, "loss_mask": 0.24184, "loss": 
0.76511, "time": 0.49999} -{"mode": "train", "epoch": 5, "iter": 5150, "lr": 0.0002, "memory": 12588, "data_time": 0.05101, "loss_rpn_cls": 0.02611, "loss_rpn_bbox": 0.04655, "loss_cls": 0.20508, "acc": 92.92432, "loss_bbox": 0.234, "loss_mask": 0.23758, "loss": 0.74932, "time": 0.52483} -{"mode": "train", "epoch": 5, "iter": 5200, "lr": 0.0002, "memory": 12588, "data_time": 0.05241, "loss_rpn_cls": 0.02668, "loss_rpn_bbox": 0.04698, "loss_cls": 0.21402, "acc": 92.71948, "loss_bbox": 0.23438, "loss_mask": 0.23631, "loss": 0.75837, "time": 0.55835} -{"mode": "train", "epoch": 5, "iter": 5250, "lr": 0.0002, "memory": 12588, "data_time": 0.03996, "loss_rpn_cls": 0.02418, "loss_rpn_bbox": 0.04461, "loss_cls": 0.2044, "acc": 93.01123, "loss_bbox": 0.23081, "loss_mask": 0.23728, "loss": 0.74128, "time": 0.54347} -{"mode": "train", "epoch": 5, "iter": 5300, "lr": 0.0002, "memory": 12588, "data_time": 0.04171, "loss_rpn_cls": 0.02509, "loss_rpn_bbox": 0.0477, "loss_cls": 0.21003, "acc": 92.8313, "loss_bbox": 0.23642, "loss_mask": 0.24054, "loss": 0.75978, "time": 0.50022} -{"mode": "train", "epoch": 5, "iter": 5350, "lr": 0.0002, "memory": 12588, "data_time": 0.0418, "loss_rpn_cls": 0.02376, "loss_rpn_bbox": 0.04518, "loss_cls": 0.20302, "acc": 92.97754, "loss_bbox": 0.23018, "loss_mask": 0.2367, "loss": 0.73884, "time": 0.50843} -{"mode": "train", "epoch": 5, "iter": 5400, "lr": 0.0002, "memory": 12588, "data_time": 0.04669, "loss_rpn_cls": 0.02727, "loss_rpn_bbox": 0.04728, "loss_cls": 0.21817, "acc": 92.56201, "loss_bbox": 0.24064, "loss_mask": 0.24158, "loss": 0.77494, "time": 0.4992} -{"mode": "train", "epoch": 5, "iter": 5450, "lr": 0.0002, "memory": 12588, "data_time": 0.04478, "loss_rpn_cls": 0.02847, "loss_rpn_bbox": 0.04931, "loss_cls": 0.21007, "acc": 92.77417, "loss_bbox": 0.23806, "loss_mask": 0.23653, "loss": 0.76245, "time": 0.49257} -{"mode": "train", "epoch": 5, "iter": 5500, "lr": 0.0002, "memory": 12588, "data_time": 0.05311, "loss_rpn_cls": 0.02975, "loss_rpn_bbox": 0.04837, "loss_cls": 0.21081, "acc": 92.80518, "loss_bbox": 0.23653, "loss_mask": 0.23708, "loss": 0.76254, "time": 0.50471} -{"mode": "train", "epoch": 5, "iter": 5550, "lr": 0.0002, "memory": 12588, "data_time": 0.05006, "loss_rpn_cls": 0.02724, "loss_rpn_bbox": 0.04957, "loss_cls": 0.21297, "acc": 92.73242, "loss_bbox": 0.24212, "loss_mask": 0.23586, "loss": 0.76776, "time": 0.50066} -{"mode": "train", "epoch": 5, "iter": 5600, "lr": 0.0002, "memory": 12588, "data_time": 0.04427, "loss_rpn_cls": 0.0279, "loss_rpn_bbox": 0.04838, "loss_cls": 0.21236, "acc": 92.77759, "loss_bbox": 0.23891, "loss_mask": 0.24325, "loss": 0.77079, "time": 0.50384} -{"mode": "train", "epoch": 5, "iter": 5650, "lr": 0.0002, "memory": 12588, "data_time": 0.05192, "loss_rpn_cls": 0.02866, "loss_rpn_bbox": 0.05092, "loss_cls": 0.22324, "acc": 92.41357, "loss_bbox": 0.24211, "loss_mask": 0.24127, "loss": 0.7862, "time": 0.49255} -{"mode": "train", "epoch": 5, "iter": 5700, "lr": 0.0002, "memory": 12588, "data_time": 0.04872, "loss_rpn_cls": 0.02601, "loss_rpn_bbox": 0.04598, "loss_cls": 0.20594, "acc": 92.92554, "loss_bbox": 0.23236, "loss_mask": 0.2383, "loss": 0.74858, "time": 0.5059} -{"mode": "train", "epoch": 5, "iter": 5750, "lr": 0.0002, "memory": 12588, "data_time": 0.04871, "loss_rpn_cls": 0.02587, "loss_rpn_bbox": 0.04509, "loss_cls": 0.21262, "acc": 92.6875, "loss_bbox": 0.23901, "loss_mask": 0.23578, "loss": 0.75838, "time": 0.5129} -{"mode": "train", "epoch": 5, "iter": 5800, "lr": 0.0002, "memory": 12588, "data_time": 0.04485, 
"loss_rpn_cls": 0.0275, "loss_rpn_bbox": 0.04855, "loss_cls": 0.20532, "acc": 93.08179, "loss_bbox": 0.22731, "loss_mask": 0.23805, "loss": 0.74672, "time": 0.5658} -{"mode": "train", "epoch": 5, "iter": 5850, "lr": 0.0002, "memory": 12588, "data_time": 0.04997, "loss_rpn_cls": 0.02555, "loss_rpn_bbox": 0.04771, "loss_cls": 0.20461, "acc": 92.89697, "loss_bbox": 0.23839, "loss_mask": 0.23842, "loss": 0.75468, "time": 0.50335} -{"mode": "train", "epoch": 5, "iter": 5900, "lr": 0.0002, "memory": 12588, "data_time": 0.04687, "loss_rpn_cls": 0.02907, "loss_rpn_bbox": 0.05127, "loss_cls": 0.21844, "acc": 92.53369, "loss_bbox": 0.24489, "loss_mask": 0.24662, "loss": 0.79029, "time": 0.50617} -{"mode": "train", "epoch": 5, "iter": 5950, "lr": 0.0002, "memory": 12588, "data_time": 0.0404, "loss_rpn_cls": 0.02483, "loss_rpn_bbox": 0.04582, "loss_cls": 0.2153, "acc": 92.71167, "loss_bbox": 0.23469, "loss_mask": 0.23814, "loss": 0.75879, "time": 0.49624} -{"mode": "train", "epoch": 5, "iter": 6000, "lr": 0.0002, "memory": 12588, "data_time": 0.04587, "loss_rpn_cls": 0.02657, "loss_rpn_bbox": 0.04712, "loss_cls": 0.21866, "acc": 92.62061, "loss_bbox": 0.23546, "loss_mask": 0.23792, "loss": 0.76573, "time": 0.49787} -{"mode": "train", "epoch": 5, "iter": 6050, "lr": 0.0002, "memory": 12588, "data_time": 0.04608, "loss_rpn_cls": 0.02644, "loss_rpn_bbox": 0.04691, "loss_cls": 0.20983, "acc": 92.74927, "loss_bbox": 0.23109, "loss_mask": 0.23987, "loss": 0.75414, "time": 0.49384} -{"mode": "train", "epoch": 5, "iter": 6100, "lr": 0.0002, "memory": 12588, "data_time": 0.04444, "loss_rpn_cls": 0.02738, "loss_rpn_bbox": 0.04735, "loss_cls": 0.21312, "acc": 92.70459, "loss_bbox": 0.23446, "loss_mask": 0.23283, "loss": 0.75514, "time": 0.49285} -{"mode": "train", "epoch": 5, "iter": 6150, "lr": 0.0002, "memory": 12588, "data_time": 0.04731, "loss_rpn_cls": 0.02761, "loss_rpn_bbox": 0.04822, "loss_cls": 0.20917, "acc": 92.81592, "loss_bbox": 0.23611, "loss_mask": 0.23947, "loss": 0.76058, "time": 0.50251} -{"mode": "train", "epoch": 5, "iter": 6200, "lr": 0.0002, "memory": 12588, "data_time": 0.04317, "loss_rpn_cls": 0.02676, "loss_rpn_bbox": 0.04699, "loss_cls": 0.21144, "acc": 92.7688, "loss_bbox": 0.23501, "loss_mask": 0.23944, "loss": 0.75963, "time": 0.5454} -{"mode": "train", "epoch": 5, "iter": 6250, "lr": 0.0002, "memory": 12588, "data_time": 0.04404, "loss_rpn_cls": 0.02625, "loss_rpn_bbox": 0.04816, "loss_cls": 0.21056, "acc": 92.75879, "loss_bbox": 0.2344, "loss_mask": 0.23558, "loss": 0.75494, "time": 0.50559} -{"mode": "train", "epoch": 5, "iter": 6300, "lr": 0.0002, "memory": 12588, "data_time": 0.04705, "loss_rpn_cls": 0.02637, "loss_rpn_bbox": 0.04355, "loss_cls": 0.20889, "acc": 92.86646, "loss_bbox": 0.22924, "loss_mask": 0.23134, "loss": 0.73939, "time": 0.50148} -{"mode": "train", "epoch": 5, "iter": 6350, "lr": 0.0002, "memory": 12588, "data_time": 0.05201, "loss_rpn_cls": 0.02881, "loss_rpn_bbox": 0.04884, "loss_cls": 0.22342, "acc": 92.28394, "loss_bbox": 0.24351, "loss_mask": 0.23465, "loss": 0.77923, "time": 0.51044} -{"mode": "train", "epoch": 5, "iter": 6400, "lr": 0.0002, "memory": 12588, "data_time": 0.04561, "loss_rpn_cls": 0.02929, "loss_rpn_bbox": 0.04773, "loss_cls": 0.22742, "acc": 92.37891, "loss_bbox": 0.24734, "loss_mask": 0.23929, "loss": 0.79107, "time": 0.50459} -{"mode": "train", "epoch": 5, "iter": 6450, "lr": 0.0002, "memory": 12588, "data_time": 0.04147, "loss_rpn_cls": 0.02547, "loss_rpn_bbox": 0.04694, "loss_cls": 0.21767, "acc": 92.77051, "loss_bbox": 0.23465, 
"loss_mask": 0.23436, "loss": 0.75909, "time": 0.51113} -{"mode": "train", "epoch": 5, "iter": 6500, "lr": 0.0002, "memory": 12588, "data_time": 0.04276, "loss_rpn_cls": 0.02682, "loss_rpn_bbox": 0.04584, "loss_cls": 0.21068, "acc": 92.83984, "loss_bbox": 0.23308, "loss_mask": 0.23798, "loss": 0.75439, "time": 0.49323} -{"mode": "train", "epoch": 5, "iter": 6550, "lr": 0.0002, "memory": 12588, "data_time": 0.04125, "loss_rpn_cls": 0.02599, "loss_rpn_bbox": 0.04526, "loss_cls": 0.2041, "acc": 93.01978, "loss_bbox": 0.22913, "loss_mask": 0.23888, "loss": 0.74337, "time": 0.49812} -{"mode": "train", "epoch": 5, "iter": 6600, "lr": 0.0002, "memory": 12588, "data_time": 0.05181, "loss_rpn_cls": 0.02591, "loss_rpn_bbox": 0.04572, "loss_cls": 0.22171, "acc": 92.50708, "loss_bbox": 0.24573, "loss_mask": 0.24121, "loss": 0.78029, "time": 0.54279} -{"mode": "train", "epoch": 5, "iter": 6650, "lr": 0.0002, "memory": 12588, "data_time": 0.04342, "loss_rpn_cls": 0.02703, "loss_rpn_bbox": 0.04639, "loss_cls": 0.21663, "acc": 92.66821, "loss_bbox": 0.24262, "loss_mask": 0.23849, "loss": 0.77117, "time": 0.49794} -{"mode": "train", "epoch": 5, "iter": 6700, "lr": 0.0002, "memory": 12588, "data_time": 0.04989, "loss_rpn_cls": 0.02692, "loss_rpn_bbox": 0.04608, "loss_cls": 0.21316, "acc": 92.88525, "loss_bbox": 0.23063, "loss_mask": 0.23905, "loss": 0.75585, "time": 0.50249} -{"mode": "train", "epoch": 5, "iter": 6750, "lr": 0.0002, "memory": 12588, "data_time": 0.04633, "loss_rpn_cls": 0.02729, "loss_rpn_bbox": 0.0501, "loss_cls": 0.22118, "acc": 92.49731, "loss_bbox": 0.23918, "loss_mask": 0.23479, "loss": 0.77254, "time": 0.50483} -{"mode": "train", "epoch": 5, "iter": 6800, "lr": 0.0002, "memory": 12588, "data_time": 0.03871, "loss_rpn_cls": 0.02487, "loss_rpn_bbox": 0.04413, "loss_cls": 0.20246, "acc": 93.03613, "loss_bbox": 0.22523, "loss_mask": 0.23101, "loss": 0.7277, "time": 0.54831} -{"mode": "train", "epoch": 5, "iter": 6850, "lr": 0.0002, "memory": 12588, "data_time": 0.04685, "loss_rpn_cls": 0.02616, "loss_rpn_bbox": 0.04783, "loss_cls": 0.21835, "acc": 92.59155, "loss_bbox": 0.23971, "loss_mask": 0.24314, "loss": 0.7752, "time": 0.48717} -{"mode": "train", "epoch": 5, "iter": 6900, "lr": 0.0002, "memory": 12588, "data_time": 0.03861, "loss_rpn_cls": 0.02649, "loss_rpn_bbox": 0.04556, "loss_cls": 0.21505, "acc": 92.8208, "loss_bbox": 0.23122, "loss_mask": 0.23561, "loss": 0.75394, "time": 0.51693} -{"mode": "train", "epoch": 5, "iter": 6950, "lr": 0.0002, "memory": 12588, "data_time": 0.04428, "loss_rpn_cls": 0.02864, "loss_rpn_bbox": 0.04748, "loss_cls": 0.22053, "acc": 92.51172, "loss_bbox": 0.24335, "loss_mask": 0.23927, "loss": 0.77927, "time": 0.49252} -{"mode": "train", "epoch": 5, "iter": 7000, "lr": 0.0002, "memory": 12588, "data_time": 0.04644, "loss_rpn_cls": 0.02631, "loss_rpn_bbox": 0.04954, "loss_cls": 0.20958, "acc": 92.76465, "loss_bbox": 0.23522, "loss_mask": 0.2393, "loss": 0.75995, "time": 0.56414} -{"mode": "train", "epoch": 5, "iter": 7050, "lr": 0.0002, "memory": 12588, "data_time": 0.04335, "loss_rpn_cls": 0.02434, "loss_rpn_bbox": 0.04497, "loss_cls": 0.20607, "acc": 93.06177, "loss_bbox": 0.22797, "loss_mask": 0.23711, "loss": 0.74046, "time": 0.49334} -{"mode": "train", "epoch": 5, "iter": 7100, "lr": 0.0002, "memory": 12588, "data_time": 0.05288, "loss_rpn_cls": 0.02734, "loss_rpn_bbox": 0.04704, "loss_cls": 0.22054, "acc": 92.58618, "loss_bbox": 0.23625, "loss_mask": 0.23436, "loss": 0.76553, "time": 0.55324} -{"mode": "train", "epoch": 5, "iter": 7150, "lr": 0.0002, 
"memory": 12588, "data_time": 0.0386, "loss_rpn_cls": 0.02774, "loss_rpn_bbox": 0.04561, "loss_cls": 0.21131, "acc": 92.89062, "loss_bbox": 0.23231, "loss_mask": 0.23545, "loss": 0.75241, "time": 0.49158} -{"mode": "train", "epoch": 5, "iter": 7200, "lr": 0.0002, "memory": 12588, "data_time": 0.04222, "loss_rpn_cls": 0.02362, "loss_rpn_bbox": 0.04162, "loss_cls": 0.20606, "acc": 93.13232, "loss_bbox": 0.21995, "loss_mask": 0.23528, "loss": 0.72653, "time": 0.48904} -{"mode": "train", "epoch": 5, "iter": 7250, "lr": 0.0002, "memory": 12588, "data_time": 0.05754, "loss_rpn_cls": 0.02519, "loss_rpn_bbox": 0.04633, "loss_cls": 0.21712, "acc": 92.5354, "loss_bbox": 0.24096, "loss_mask": 0.23649, "loss": 0.7661, "time": 0.53923} -{"mode": "train", "epoch": 5, "iter": 7300, "lr": 0.0002, "memory": 12588, "data_time": 0.04214, "loss_rpn_cls": 0.0254, "loss_rpn_bbox": 0.04588, "loss_cls": 0.20883, "acc": 92.85693, "loss_bbox": 0.23321, "loss_mask": 0.23592, "loss": 0.74924, "time": 0.53199} -{"mode": "val", "epoch": 5, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3853, "bbox_mAP_50": 0.6021, "bbox_mAP_75": 0.4245, "bbox_mAP_s": 0.2352, "bbox_mAP_m": 0.4322, "bbox_mAP_l": 0.495, "bbox_mAP_copypaste": "0.3853 0.6021 0.4245 0.2352 0.4322 0.4950", "segm_mAP": 0.3612, "segm_mAP_50": 0.5714, "segm_mAP_75": 0.3877, "segm_mAP_s": 0.1804, "segm_mAP_m": 0.3969, "segm_mAP_l": 0.5324, "segm_mAP_copypaste": "0.3612 0.5714 0.3877 0.1804 0.3969 0.5324"} -{"mode": "train", "epoch": 6, "iter": 50, "lr": 0.0002, "memory": 12588, "data_time": 0.12734, "loss_rpn_cls": 0.02519, "loss_rpn_bbox": 0.04713, "loss_cls": 0.20026, "acc": 92.97876, "loss_bbox": 0.23597, "loss_mask": 0.23626, "loss": 0.7448, "time": 0.59273} -{"mode": "train", "epoch": 6, "iter": 100, "lr": 0.0002, "memory": 12588, "data_time": 0.06291, "loss_rpn_cls": 0.02214, "loss_rpn_bbox": 0.0454, "loss_cls": 0.20646, "acc": 92.72437, "loss_bbox": 0.24674, "loss_mask": 0.23588, "loss": 0.75662, "time": 0.53151} -{"mode": "train", "epoch": 6, "iter": 150, "lr": 0.0002, "memory": 12588, "data_time": 0.0513, "loss_rpn_cls": 0.02274, "loss_rpn_bbox": 0.04498, "loss_cls": 0.20288, "acc": 93.00366, "loss_bbox": 0.22918, "loss_mask": 0.2338, "loss": 0.73357, "time": 0.51851} -{"mode": "train", "epoch": 6, "iter": 200, "lr": 0.0002, "memory": 12588, "data_time": 0.0544, "loss_rpn_cls": 0.0232, "loss_rpn_bbox": 0.04579, "loss_cls": 0.19661, "acc": 93.08496, "loss_bbox": 0.22824, "loss_mask": 0.23104, "loss": 0.72488, "time": 0.51509} -{"mode": "train", "epoch": 6, "iter": 250, "lr": 0.0002, "memory": 12588, "data_time": 0.04964, "loss_rpn_cls": 0.02452, "loss_rpn_bbox": 0.04564, "loss_cls": 0.19678, "acc": 93.14941, "loss_bbox": 0.22856, "loss_mask": 0.2295, "loss": 0.725, "time": 0.51214} -{"mode": "train", "epoch": 6, "iter": 300, "lr": 0.0002, "memory": 12588, "data_time": 0.04504, "loss_rpn_cls": 0.02365, "loss_rpn_bbox": 0.04713, "loss_cls": 0.20041, "acc": 92.93604, "loss_bbox": 0.23597, "loss_mask": 0.23124, "loss": 0.7384, "time": 0.51202} -{"mode": "train", "epoch": 6, "iter": 350, "lr": 0.0002, "memory": 12588, "data_time": 0.0491, "loss_rpn_cls": 0.02309, "loss_rpn_bbox": 0.04534, "loss_cls": 0.19401, "acc": 93.14185, "loss_bbox": 0.23079, "loss_mask": 0.233, "loss": 0.72622, "time": 0.52858} -{"mode": "train", "epoch": 6, "iter": 400, "lr": 0.0002, "memory": 12588, "data_time": 0.06203, "loss_rpn_cls": 0.02608, "loss_rpn_bbox": 0.04769, "loss_cls": 0.20485, "acc": 92.8396, "loss_bbox": 0.23822, "loss_mask": 0.22946, "loss": 0.74629, "time": 0.5808} 
-{"mode": "train", "epoch": 6, "iter": 450, "lr": 0.0002, "memory": 12588, "data_time": 0.0487, "loss_rpn_cls": 0.02336, "loss_rpn_bbox": 0.04651, "loss_cls": 0.20647, "acc": 92.88086, "loss_bbox": 0.23113, "loss_mask": 0.22395, "loss": 0.73142, "time": 0.51764} -{"mode": "train", "epoch": 6, "iter": 500, "lr": 0.0002, "memory": 12588, "data_time": 0.04939, "loss_rpn_cls": 0.02462, "loss_rpn_bbox": 0.04714, "loss_cls": 0.19644, "acc": 93.09106, "loss_bbox": 0.22758, "loss_mask": 0.23483, "loss": 0.73061, "time": 0.5171} -{"mode": "train", "epoch": 6, "iter": 550, "lr": 0.0002, "memory": 12588, "data_time": 0.04359, "loss_rpn_cls": 0.02412, "loss_rpn_bbox": 0.04232, "loss_cls": 0.19423, "acc": 93.10498, "loss_bbox": 0.22656, "loss_mask": 0.22993, "loss": 0.71716, "time": 0.50479} -{"mode": "train", "epoch": 6, "iter": 600, "lr": 0.0002, "memory": 12588, "data_time": 0.04065, "loss_rpn_cls": 0.02727, "loss_rpn_bbox": 0.04576, "loss_cls": 0.20599, "acc": 92.87842, "loss_bbox": 0.23248, "loss_mask": 0.23552, "loss": 0.74701, "time": 0.51673} -{"mode": "train", "epoch": 6, "iter": 650, "lr": 0.0002, "memory": 12588, "data_time": 0.06159, "loss_rpn_cls": 0.02332, "loss_rpn_bbox": 0.04506, "loss_cls": 0.19313, "acc": 93.2915, "loss_bbox": 0.23171, "loss_mask": 0.2298, "loss": 0.72301, "time": 0.50928} -{"mode": "train", "epoch": 6, "iter": 700, "lr": 0.0002, "memory": 12588, "data_time": 0.04142, "loss_rpn_cls": 0.0246, "loss_rpn_bbox": 0.04749, "loss_cls": 0.20345, "acc": 92.87427, "loss_bbox": 0.23624, "loss_mask": 0.23699, "loss": 0.74878, "time": 0.50341} -{"mode": "train", "epoch": 6, "iter": 750, "lr": 0.0002, "memory": 12588, "data_time": 0.04847, "loss_rpn_cls": 0.02331, "loss_rpn_bbox": 0.04485, "loss_cls": 0.20849, "acc": 92.88379, "loss_bbox": 0.22882, "loss_mask": 0.23208, "loss": 0.73754, "time": 0.48389} -{"mode": "train", "epoch": 6, "iter": 800, "lr": 0.0002, "memory": 12588, "data_time": 0.04573, "loss_rpn_cls": 0.02361, "loss_rpn_bbox": 0.04565, "loss_cls": 0.19923, "acc": 92.96216, "loss_bbox": 0.23135, "loss_mask": 0.23204, "loss": 0.73188, "time": 0.50497} -{"mode": "train", "epoch": 6, "iter": 850, "lr": 0.0002, "memory": 12588, "data_time": 0.05252, "loss_rpn_cls": 0.02396, "loss_rpn_bbox": 0.04559, "loss_cls": 0.20146, "acc": 92.93945, "loss_bbox": 0.23198, "loss_mask": 0.23208, "loss": 0.73507, "time": 0.49977} -{"mode": "train", "epoch": 6, "iter": 900, "lr": 0.0002, "memory": 12588, "data_time": 0.05134, "loss_rpn_cls": 0.02479, "loss_rpn_bbox": 0.04806, "loss_cls": 0.19985, "acc": 93.01587, "loss_bbox": 0.22667, "loss_mask": 0.22826, "loss": 0.72763, "time": 0.51669} -{"mode": "train", "epoch": 6, "iter": 950, "lr": 0.0002, "memory": 12588, "data_time": 0.05363, "loss_rpn_cls": 0.02596, "loss_rpn_bbox": 0.04544, "loss_cls": 0.19515, "acc": 93.35864, "loss_bbox": 0.22087, "loss_mask": 0.22878, "loss": 0.7162, "time": 0.56025} -{"mode": "train", "epoch": 6, "iter": 1000, "lr": 0.0002, "memory": 12588, "data_time": 0.04719, "loss_rpn_cls": 0.0258, "loss_rpn_bbox": 0.05007, "loss_cls": 0.22055, "acc": 92.47949, "loss_bbox": 0.23956, "loss_mask": 0.23609, "loss": 0.77207, "time": 0.53164} -{"mode": "train", "epoch": 6, "iter": 1050, "lr": 0.0002, "memory": 12588, "data_time": 0.04525, "loss_rpn_cls": 0.02352, "loss_rpn_bbox": 0.04506, "loss_cls": 0.20006, "acc": 93.12524, "loss_bbox": 0.22094, "loss_mask": 0.22732, "loss": 0.71689, "time": 0.51256} -{"mode": "train", "epoch": 6, "iter": 1100, "lr": 0.0002, "memory": 12588, "data_time": 0.04189, "loss_rpn_cls": 0.02444, 
"loss_rpn_bbox": 0.04451, "loss_cls": 0.20296, "acc": 93.08813, "loss_bbox": 0.22539, "loss_mask": 0.23205, "loss": 0.72935, "time": 0.56055} -{"mode": "train", "epoch": 6, "iter": 1150, "lr": 0.0002, "memory": 12588, "data_time": 0.03693, "loss_rpn_cls": 0.02401, "loss_rpn_bbox": 0.04675, "loss_cls": 0.21093, "acc": 92.7998, "loss_bbox": 0.24095, "loss_mask": 0.23952, "loss": 0.76216, "time": 0.5096} -{"mode": "train", "epoch": 6, "iter": 1200, "lr": 0.0002, "memory": 12588, "data_time": 0.04022, "loss_rpn_cls": 0.02436, "loss_rpn_bbox": 0.04752, "loss_cls": 0.20055, "acc": 93.08008, "loss_bbox": 0.22712, "loss_mask": 0.23475, "loss": 0.7343, "time": 0.49513} -{"mode": "train", "epoch": 6, "iter": 1250, "lr": 0.0002, "memory": 12588, "data_time": 0.04292, "loss_rpn_cls": 0.02141, "loss_rpn_bbox": 0.04349, "loss_cls": 0.20163, "acc": 92.99023, "loss_bbox": 0.22756, "loss_mask": 0.23062, "loss": 0.7247, "time": 0.56009} -{"mode": "train", "epoch": 6, "iter": 1300, "lr": 0.0002, "memory": 12588, "data_time": 0.05056, "loss_rpn_cls": 0.02476, "loss_rpn_bbox": 0.04896, "loss_cls": 0.21149, "acc": 92.68823, "loss_bbox": 0.23897, "loss_mask": 0.23399, "loss": 0.75817, "time": 0.51337} -{"mode": "train", "epoch": 6, "iter": 1350, "lr": 0.0002, "memory": 12588, "data_time": 0.04193, "loss_rpn_cls": 0.02518, "loss_rpn_bbox": 0.04676, "loss_cls": 0.20462, "acc": 92.91577, "loss_bbox": 0.23245, "loss_mask": 0.23524, "loss": 0.74424, "time": 0.51034} -{"mode": "train", "epoch": 6, "iter": 1400, "lr": 0.0002, "memory": 12588, "data_time": 0.0492, "loss_rpn_cls": 0.02564, "loss_rpn_bbox": 0.04561, "loss_cls": 0.20352, "acc": 92.98486, "loss_bbox": 0.2326, "loss_mask": 0.23569, "loss": 0.74305, "time": 0.50633} -{"mode": "train", "epoch": 6, "iter": 1450, "lr": 0.0002, "memory": 12588, "data_time": 0.03997, "loss_rpn_cls": 0.02438, "loss_rpn_bbox": 0.04676, "loss_cls": 0.19981, "acc": 93.0144, "loss_bbox": 0.23074, "loss_mask": 0.23177, "loss": 0.73346, "time": 0.49881} -{"mode": "train", "epoch": 6, "iter": 1500, "lr": 0.0002, "memory": 12588, "data_time": 0.0502, "loss_rpn_cls": 0.02385, "loss_rpn_bbox": 0.04584, "loss_cls": 0.20724, "acc": 92.87646, "loss_bbox": 0.23162, "loss_mask": 0.23754, "loss": 0.74609, "time": 0.55222} -{"mode": "train", "epoch": 6, "iter": 1550, "lr": 0.0002, "memory": 12588, "data_time": 0.04264, "loss_rpn_cls": 0.02426, "loss_rpn_bbox": 0.04224, "loss_cls": 0.20255, "acc": 93.07617, "loss_bbox": 0.23042, "loss_mask": 0.23314, "loss": 0.73261, "time": 0.48126} -{"mode": "train", "epoch": 6, "iter": 1600, "lr": 0.0002, "memory": 12588, "data_time": 0.05526, "loss_rpn_cls": 0.02453, "loss_rpn_bbox": 0.04755, "loss_cls": 0.21102, "acc": 92.71021, "loss_bbox": 0.23988, "loss_mask": 0.23845, "loss": 0.76142, "time": 0.51014} -{"mode": "train", "epoch": 6, "iter": 1650, "lr": 0.0002, "memory": 12588, "data_time": 0.04361, "loss_rpn_cls": 0.02506, "loss_rpn_bbox": 0.04709, "loss_cls": 0.20435, "acc": 92.81909, "loss_bbox": 0.23454, "loss_mask": 0.23479, "loss": 0.74583, "time": 0.50852} -{"mode": "train", "epoch": 6, "iter": 1700, "lr": 0.0002, "memory": 12588, "data_time": 0.04956, "loss_rpn_cls": 0.02366, "loss_rpn_bbox": 0.04317, "loss_cls": 0.18636, "acc": 93.52686, "loss_bbox": 0.21795, "loss_mask": 0.22788, "loss": 0.69902, "time": 0.55535} -{"mode": "train", "epoch": 6, "iter": 1750, "lr": 0.0002, "memory": 12588, "data_time": 0.04405, "loss_rpn_cls": 0.02387, "loss_rpn_bbox": 0.04485, "loss_cls": 0.19379, "acc": 93.2373, "loss_bbox": 0.22524, "loss_mask": 0.23391, "loss": 
0.72165, "time": 0.49718} -{"mode": "train", "epoch": 6, "iter": 1800, "lr": 0.0002, "memory": 12588, "data_time": 0.04452, "loss_rpn_cls": 0.02398, "loss_rpn_bbox": 0.04571, "loss_cls": 0.19331, "acc": 93.33789, "loss_bbox": 0.21827, "loss_mask": 0.22689, "loss": 0.70817, "time": 0.49257} -{"mode": "train", "epoch": 6, "iter": 1850, "lr": 0.0002, "memory": 12588, "data_time": 0.04555, "loss_rpn_cls": 0.02362, "loss_rpn_bbox": 0.04715, "loss_cls": 0.1954, "acc": 93.16504, "loss_bbox": 0.21927, "loss_mask": 0.23314, "loss": 0.71857, "time": 0.49869} -{"mode": "train", "epoch": 6, "iter": 1900, "lr": 0.0002, "memory": 12588, "data_time": 0.05413, "loss_rpn_cls": 0.02509, "loss_rpn_bbox": 0.04539, "loss_cls": 0.21197, "acc": 92.6145, "loss_bbox": 0.2394, "loss_mask": 0.23173, "loss": 0.75358, "time": 0.50194} -{"mode": "train", "epoch": 6, "iter": 1950, "lr": 0.0002, "memory": 12588, "data_time": 0.04676, "loss_rpn_cls": 0.02513, "loss_rpn_bbox": 0.04658, "loss_cls": 0.20158, "acc": 93.03491, "loss_bbox": 0.22854, "loss_mask": 0.23196, "loss": 0.73379, "time": 0.50834} -{"mode": "train", "epoch": 6, "iter": 2000, "lr": 0.0002, "memory": 12588, "data_time": 0.04251, "loss_rpn_cls": 0.02399, "loss_rpn_bbox": 0.04583, "loss_cls": 0.20563, "acc": 92.90552, "loss_bbox": 0.23641, "loss_mask": 0.23498, "loss": 0.74684, "time": 0.49695} -{"mode": "train", "epoch": 6, "iter": 2050, "lr": 0.0002, "memory": 12588, "data_time": 0.05124, "loss_rpn_cls": 0.02505, "loss_rpn_bbox": 0.04834, "loss_cls": 0.21337, "acc": 92.83521, "loss_bbox": 0.2304, "loss_mask": 0.23556, "loss": 0.75272, "time": 0.49584} -{"mode": "train", "epoch": 6, "iter": 2100, "lr": 0.0002, "memory": 12588, "data_time": 0.04528, "loss_rpn_cls": 0.0258, "loss_rpn_bbox": 0.04644, "loss_cls": 0.21187, "acc": 92.9209, "loss_bbox": 0.22567, "loss_mask": 0.23105, "loss": 0.74083, "time": 0.49589} -{"mode": "train", "epoch": 6, "iter": 2150, "lr": 0.0002, "memory": 12588, "data_time": 0.04487, "loss_rpn_cls": 0.02391, "loss_rpn_bbox": 0.04599, "loss_cls": 0.20926, "acc": 92.87256, "loss_bbox": 0.23218, "loss_mask": 0.23794, "loss": 0.74929, "time": 0.49913} -{"mode": "train", "epoch": 6, "iter": 2200, "lr": 0.0002, "memory": 12588, "data_time": 0.05262, "loss_rpn_cls": 0.02403, "loss_rpn_bbox": 0.04654, "loss_cls": 0.19642, "acc": 93.19727, "loss_bbox": 0.22854, "loss_mask": 0.2342, "loss": 0.72973, "time": 0.50543} -{"mode": "train", "epoch": 6, "iter": 2250, "lr": 0.0002, "memory": 12588, "data_time": 0.0475, "loss_rpn_cls": 0.027, "loss_rpn_bbox": 0.04776, "loss_cls": 0.20853, "acc": 92.68188, "loss_bbox": 0.23982, "loss_mask": 0.24073, "loss": 0.76383, "time": 0.50495} -{"mode": "train", "epoch": 6, "iter": 2300, "lr": 0.0002, "memory": 12588, "data_time": 0.04685, "loss_rpn_cls": 0.02519, "loss_rpn_bbox": 0.04505, "loss_cls": 0.19627, "acc": 93.24854, "loss_bbox": 0.22917, "loss_mask": 0.2328, "loss": 0.72849, "time": 0.49964} -{"mode": "train", "epoch": 6, "iter": 2350, "lr": 0.0002, "memory": 12588, "data_time": 0.04109, "loss_rpn_cls": 0.02693, "loss_rpn_bbox": 0.04409, "loss_cls": 0.19969, "acc": 93.04541, "loss_bbox": 0.23096, "loss_mask": 0.22858, "loss": 0.73025, "time": 0.49711} -{"mode": "train", "epoch": 6, "iter": 2400, "lr": 0.0002, "memory": 12588, "data_time": 0.04566, "loss_rpn_cls": 0.02474, "loss_rpn_bbox": 0.04623, "loss_cls": 0.19612, "acc": 93.13306, "loss_bbox": 0.22657, "loss_mask": 0.23525, "loss": 0.72891, "time": 0.5107} -{"mode": "train", "epoch": 6, "iter": 2450, "lr": 0.0002, "memory": 12588, "data_time": 
0.04184, "loss_rpn_cls": 0.02302, "loss_rpn_bbox": 0.04443, "loss_cls": 0.1991, "acc": 93.13965, "loss_bbox": 0.23252, "loss_mask": 0.23076, "loss": 0.72983, "time": 0.48894} -{"mode": "train", "epoch": 6, "iter": 2500, "lr": 0.0002, "memory": 12588, "data_time": 0.04455, "loss_rpn_cls": 0.02401, "loss_rpn_bbox": 0.04459, "loss_cls": 0.20613, "acc": 92.82422, "loss_bbox": 0.23856, "loss_mask": 0.23922, "loss": 0.75251, "time": 0.6363} -{"mode": "train", "epoch": 6, "iter": 2550, "lr": 0.0002, "memory": 12588, "data_time": 0.04157, "loss_rpn_cls": 0.0242, "loss_rpn_bbox": 0.04552, "loss_cls": 0.20152, "acc": 93.09521, "loss_bbox": 0.22896, "loss_mask": 0.22973, "loss": 0.72993, "time": 0.50583} -{"mode": "train", "epoch": 6, "iter": 2600, "lr": 0.0002, "memory": 12588, "data_time": 0.04384, "loss_rpn_cls": 0.02699, "loss_rpn_bbox": 0.04805, "loss_cls": 0.20261, "acc": 93.03076, "loss_bbox": 0.22817, "loss_mask": 0.23773, "loss": 0.74355, "time": 0.50651} -{"mode": "train", "epoch": 6, "iter": 2650, "lr": 0.0002, "memory": 12588, "data_time": 0.04035, "loss_rpn_cls": 0.02319, "loss_rpn_bbox": 0.04371, "loss_cls": 0.20289, "acc": 92.96826, "loss_bbox": 0.22864, "loss_mask": 0.23401, "loss": 0.73245, "time": 0.49216} -{"mode": "train", "epoch": 6, "iter": 2700, "lr": 0.0002, "memory": 12588, "data_time": 0.05305, "loss_rpn_cls": 0.02418, "loss_rpn_bbox": 0.04914, "loss_cls": 0.21488, "acc": 92.42188, "loss_bbox": 0.24497, "loss_mask": 0.23446, "loss": 0.76763, "time": 0.50706} -{"mode": "train", "epoch": 6, "iter": 2750, "lr": 0.0002, "memory": 12588, "data_time": 0.04657, "loss_rpn_cls": 0.02307, "loss_rpn_bbox": 0.04381, "loss_cls": 0.1972, "acc": 93.08423, "loss_bbox": 0.22278, "loss_mask": 0.23045, "loss": 0.71731, "time": 0.55156} -{"mode": "train", "epoch": 6, "iter": 2800, "lr": 0.0002, "memory": 12588, "data_time": 0.04172, "loss_rpn_cls": 0.02181, "loss_rpn_bbox": 0.04272, "loss_cls": 0.19851, "acc": 93.20825, "loss_bbox": 0.22147, "loss_mask": 0.22671, "loss": 0.71123, "time": 0.50209} -{"mode": "train", "epoch": 6, "iter": 2850, "lr": 0.0002, "memory": 12588, "data_time": 0.04018, "loss_rpn_cls": 0.02535, "loss_rpn_bbox": 0.04534, "loss_cls": 0.20432, "acc": 92.79053, "loss_bbox": 0.23511, "loss_mask": 0.23843, "loss": 0.74855, "time": 0.54912} -{"mode": "train", "epoch": 6, "iter": 2900, "lr": 0.0002, "memory": 12588, "data_time": 0.05215, "loss_rpn_cls": 0.02601, "loss_rpn_bbox": 0.04817, "loss_cls": 0.20866, "acc": 92.7771, "loss_bbox": 0.2379, "loss_mask": 0.23632, "loss": 0.75708, "time": 0.51393} -{"mode": "train", "epoch": 6, "iter": 2950, "lr": 0.0002, "memory": 12588, "data_time": 0.04598, "loss_rpn_cls": 0.0241, "loss_rpn_bbox": 0.04601, "loss_cls": 0.2081, "acc": 92.7998, "loss_bbox": 0.23801, "loss_mask": 0.23231, "loss": 0.74854, "time": 0.51276} -{"mode": "train", "epoch": 6, "iter": 3000, "lr": 0.0002, "memory": 12588, "data_time": 0.05215, "loss_rpn_cls": 0.02352, "loss_rpn_bbox": 0.04261, "loss_cls": 0.20438, "acc": 92.98315, "loss_bbox": 0.22887, "loss_mask": 0.23513, "loss": 0.73452, "time": 0.50268} -{"mode": "train", "epoch": 6, "iter": 3050, "lr": 0.0002, "memory": 12588, "data_time": 0.04724, "loss_rpn_cls": 0.02701, "loss_rpn_bbox": 0.04649, "loss_cls": 0.20832, "acc": 92.84692, "loss_bbox": 0.22702, "loss_mask": 0.2334, "loss": 0.74223, "time": 0.50014} -{"mode": "train", "epoch": 6, "iter": 3100, "lr": 0.0002, "memory": 12588, "data_time": 0.04013, "loss_rpn_cls": 0.02455, "loss_rpn_bbox": 0.04452, "loss_cls": 0.21216, "acc": 92.70728, "loss_bbox": 0.24169, 
"loss_mask": 0.23412, "loss": 0.75704, "time": 0.54078} -{"mode": "train", "epoch": 6, "iter": 3150, "lr": 0.0002, "memory": 12588, "data_time": 0.04001, "loss_rpn_cls": 0.02532, "loss_rpn_bbox": 0.04733, "loss_cls": 0.22233, "acc": 92.46606, "loss_bbox": 0.24242, "loss_mask": 0.24341, "loss": 0.78081, "time": 0.50062} -{"mode": "train", "epoch": 6, "iter": 3200, "lr": 0.0002, "memory": 12588, "data_time": 0.05206, "loss_rpn_cls": 0.02338, "loss_rpn_bbox": 0.04369, "loss_cls": 0.20433, "acc": 92.96606, "loss_bbox": 0.22848, "loss_mask": 0.22875, "loss": 0.72864, "time": 0.50335} -{"mode": "train", "epoch": 6, "iter": 3250, "lr": 0.0002, "memory": 12588, "data_time": 0.04646, "loss_rpn_cls": 0.0232, "loss_rpn_bbox": 0.04364, "loss_cls": 0.195, "acc": 93.21729, "loss_bbox": 0.22678, "loss_mask": 0.22903, "loss": 0.71764, "time": 0.50725} -{"mode": "train", "epoch": 6, "iter": 3300, "lr": 0.0002, "memory": 12588, "data_time": 0.04438, "loss_rpn_cls": 0.02503, "loss_rpn_bbox": 0.04652, "loss_cls": 0.20917, "acc": 92.79028, "loss_bbox": 0.23429, "loss_mask": 0.23834, "loss": 0.75335, "time": 0.49986} -{"mode": "train", "epoch": 6, "iter": 3350, "lr": 0.0002, "memory": 12588, "data_time": 0.04792, "loss_rpn_cls": 0.02281, "loss_rpn_bbox": 0.04424, "loss_cls": 0.20216, "acc": 92.96191, "loss_bbox": 0.22948, "loss_mask": 0.23272, "loss": 0.73141, "time": 0.49091} -{"mode": "train", "epoch": 6, "iter": 3400, "lr": 0.0002, "memory": 12588, "data_time": 0.04435, "loss_rpn_cls": 0.02425, "loss_rpn_bbox": 0.04576, "loss_cls": 0.21165, "acc": 92.68652, "loss_bbox": 0.23873, "loss_mask": 0.23065, "loss": 0.75104, "time": 0.50029} -{"mode": "train", "epoch": 6, "iter": 3450, "lr": 0.0002, "memory": 12588, "data_time": 0.05911, "loss_rpn_cls": 0.02351, "loss_rpn_bbox": 0.045, "loss_cls": 0.2009, "acc": 93.1084, "loss_bbox": 0.22587, "loss_mask": 0.23549, "loss": 0.73077, "time": 0.50926} -{"mode": "train", "epoch": 6, "iter": 3500, "lr": 0.0002, "memory": 12588, "data_time": 0.04963, "loss_rpn_cls": 0.02443, "loss_rpn_bbox": 0.04562, "loss_cls": 0.20782, "acc": 92.76953, "loss_bbox": 0.23351, "loss_mask": 0.23873, "loss": 0.75011, "time": 0.50959} -{"mode": "train", "epoch": 6, "iter": 3550, "lr": 0.0002, "memory": 12588, "data_time": 0.04499, "loss_rpn_cls": 0.02544, "loss_rpn_bbox": 0.04608, "loss_cls": 0.19849, "acc": 93.19409, "loss_bbox": 0.22226, "loss_mask": 0.2337, "loss": 0.72598, "time": 0.49781} -{"mode": "train", "epoch": 6, "iter": 3600, "lr": 0.0002, "memory": 12588, "data_time": 0.04925, "loss_rpn_cls": 0.02894, "loss_rpn_bbox": 0.05064, "loss_cls": 0.21096, "acc": 92.69849, "loss_bbox": 0.2386, "loss_mask": 0.2407, "loss": 0.76984, "time": 0.53501} -{"mode": "train", "epoch": 6, "iter": 3650, "lr": 0.0002, "memory": 12588, "data_time": 0.04527, "loss_rpn_cls": 0.02478, "loss_rpn_bbox": 0.04443, "loss_cls": 0.19451, "acc": 93.22168, "loss_bbox": 0.22375, "loss_mask": 0.23018, "loss": 0.71766, "time": 0.51616} -{"mode": "train", "epoch": 6, "iter": 3700, "lr": 0.0002, "memory": 12588, "data_time": 0.04723, "loss_rpn_cls": 0.02449, "loss_rpn_bbox": 0.04661, "loss_cls": 0.19758, "acc": 93.05688, "loss_bbox": 0.23064, "loss_mask": 0.23033, "loss": 0.72965, "time": 0.51474} -{"mode": "train", "epoch": 6, "iter": 3750, "lr": 0.0002, "memory": 12588, "data_time": 0.04121, "loss_rpn_cls": 0.02722, "loss_rpn_bbox": 0.04551, "loss_cls": 0.21148, "acc": 92.81592, "loss_bbox": 0.23805, "loss_mask": 0.23824, "loss": 0.7605, "time": 0.49427} -{"mode": "train", "epoch": 6, "iter": 3800, "lr": 0.0002, 
"memory": 12588, "data_time": 0.04338, "loss_rpn_cls": 0.02532, "loss_rpn_bbox": 0.04549, "loss_cls": 0.2036, "acc": 93.00513, "loss_bbox": 0.23333, "loss_mask": 0.23186, "loss": 0.73959, "time": 0.49458} -{"mode": "train", "epoch": 6, "iter": 3850, "lr": 0.0002, "memory": 12588, "data_time": 0.0482, "loss_rpn_cls": 0.02391, "loss_rpn_bbox": 0.04487, "loss_cls": 0.20188, "acc": 93.0083, "loss_bbox": 0.23093, "loss_mask": 0.23738, "loss": 0.73898, "time": 0.51282} -{"mode": "train", "epoch": 6, "iter": 3900, "lr": 0.0002, "memory": 12588, "data_time": 0.0599, "loss_rpn_cls": 0.02468, "loss_rpn_bbox": 0.04736, "loss_cls": 0.19875, "acc": 93.05225, "loss_bbox": 0.23065, "loss_mask": 0.23157, "loss": 0.73301, "time": 0.5593} -{"mode": "train", "epoch": 6, "iter": 3950, "lr": 0.0002, "memory": 12588, "data_time": 0.04119, "loss_rpn_cls": 0.02475, "loss_rpn_bbox": 0.04451, "loss_cls": 0.21169, "acc": 92.68604, "loss_bbox": 0.2347, "loss_mask": 0.23169, "loss": 0.74734, "time": 0.49802} -{"mode": "train", "epoch": 6, "iter": 4000, "lr": 0.0002, "memory": 12588, "data_time": 0.04837, "loss_rpn_cls": 0.0284, "loss_rpn_bbox": 0.04964, "loss_cls": 0.21978, "acc": 92.48047, "loss_bbox": 0.24413, "loss_mask": 0.24158, "loss": 0.78354, "time": 0.52772} -{"mode": "train", "epoch": 6, "iter": 4050, "lr": 0.0002, "memory": 12588, "data_time": 0.04818, "loss_rpn_cls": 0.02549, "loss_rpn_bbox": 0.04684, "loss_cls": 0.20557, "acc": 92.96704, "loss_bbox": 0.22819, "loss_mask": 0.23763, "loss": 0.74371, "time": 0.49811} -{"mode": "train", "epoch": 6, "iter": 4100, "lr": 0.0002, "memory": 12588, "data_time": 0.04933, "loss_rpn_cls": 0.02532, "loss_rpn_bbox": 0.04762, "loss_cls": 0.20732, "acc": 92.91797, "loss_bbox": 0.23648, "loss_mask": 0.23781, "loss": 0.75455, "time": 0.50855} -{"mode": "train", "epoch": 6, "iter": 4150, "lr": 0.0002, "memory": 12588, "data_time": 0.04583, "loss_rpn_cls": 0.02638, "loss_rpn_bbox": 0.04475, "loss_cls": 0.19477, "acc": 93.27954, "loss_bbox": 0.22441, "loss_mask": 0.23091, "loss": 0.72123, "time": 0.55263} -{"mode": "train", "epoch": 6, "iter": 4200, "lr": 0.0002, "memory": 12588, "data_time": 0.04937, "loss_rpn_cls": 0.02391, "loss_rpn_bbox": 0.04238, "loss_cls": 0.19384, "acc": 93.27295, "loss_bbox": 0.22342, "loss_mask": 0.23255, "loss": 0.71611, "time": 0.49745} -{"mode": "train", "epoch": 6, "iter": 4250, "lr": 0.0002, "memory": 12588, "data_time": 0.04137, "loss_rpn_cls": 0.02643, "loss_rpn_bbox": 0.04518, "loss_cls": 0.20448, "acc": 92.8689, "loss_bbox": 0.23124, "loss_mask": 0.23569, "loss": 0.74302, "time": 0.51176} -{"mode": "train", "epoch": 6, "iter": 4300, "lr": 0.0002, "memory": 12588, "data_time": 0.04401, "loss_rpn_cls": 0.02736, "loss_rpn_bbox": 0.04562, "loss_cls": 0.21648, "acc": 92.63306, "loss_bbox": 0.23502, "loss_mask": 0.23784, "loss": 0.76233, "time": 0.5097} -{"mode": "train", "epoch": 6, "iter": 4350, "lr": 0.0002, "memory": 12588, "data_time": 0.04501, "loss_rpn_cls": 0.02416, "loss_rpn_bbox": 0.04651, "loss_cls": 0.2085, "acc": 92.81201, "loss_bbox": 0.23539, "loss_mask": 0.23669, "loss": 0.75125, "time": 0.50566} -{"mode": "train", "epoch": 6, "iter": 4400, "lr": 0.0002, "memory": 12588, "data_time": 0.04975, "loss_rpn_cls": 0.02317, "loss_rpn_bbox": 0.04479, "loss_cls": 0.20822, "acc": 93.01221, "loss_bbox": 0.22798, "loss_mask": 0.23612, "loss": 0.74028, "time": 0.49653} -{"mode": "train", "epoch": 6, "iter": 4450, "lr": 0.0002, "memory": 12588, "data_time": 0.05088, "loss_rpn_cls": 0.02823, "loss_rpn_bbox": 0.04985, "loss_cls": 0.21847, "acc": 
92.5752, "loss_bbox": 0.24136, "loss_mask": 0.2443, "loss": 0.78222, "time": 0.51361} -{"mode": "train", "epoch": 6, "iter": 4500, "lr": 0.0002, "memory": 12588, "data_time": 0.04872, "loss_rpn_cls": 0.02283, "loss_rpn_bbox": 0.0444, "loss_cls": 0.21441, "acc": 92.74878, "loss_bbox": 0.23123, "loss_mask": 0.23908, "loss": 0.75195, "time": 0.49654} -{"mode": "train", "epoch": 6, "iter": 4550, "lr": 0.0002, "memory": 12588, "data_time": 0.04377, "loss_rpn_cls": 0.02294, "loss_rpn_bbox": 0.04344, "loss_cls": 0.20507, "acc": 93.06958, "loss_bbox": 0.2235, "loss_mask": 0.23332, "loss": 0.72826, "time": 0.48026} -{"mode": "train", "epoch": 6, "iter": 4600, "lr": 0.0002, "memory": 12588, "data_time": 0.0517, "loss_rpn_cls": 0.02443, "loss_rpn_bbox": 0.04511, "loss_cls": 0.20767, "acc": 92.87231, "loss_bbox": 0.22951, "loss_mask": 0.23504, "loss": 0.74177, "time": 0.51415} -{"mode": "train", "epoch": 6, "iter": 4650, "lr": 0.0002, "memory": 12588, "data_time": 0.03934, "loss_rpn_cls": 0.02649, "loss_rpn_bbox": 0.04968, "loss_cls": 0.20323, "acc": 92.94287, "loss_bbox": 0.2303, "loss_mask": 0.23483, "loss": 0.74453, "time": 0.50651} -{"mode": "train", "epoch": 6, "iter": 4700, "lr": 0.0002, "memory": 12588, "data_time": 0.04469, "loss_rpn_cls": 0.02444, "loss_rpn_bbox": 0.04274, "loss_cls": 0.19752, "acc": 93.23804, "loss_bbox": 0.22282, "loss_mask": 0.23221, "loss": 0.71972, "time": 0.50176} -{"mode": "train", "epoch": 6, "iter": 4750, "lr": 0.0002, "memory": 12588, "data_time": 0.03641, "loss_rpn_cls": 0.02429, "loss_rpn_bbox": 0.04602, "loss_cls": 0.2059, "acc": 92.97925, "loss_bbox": 0.23193, "loss_mask": 0.23491, "loss": 0.74305, "time": 0.48568} -{"mode": "train", "epoch": 6, "iter": 4800, "lr": 0.0002, "memory": 12588, "data_time": 0.03642, "loss_rpn_cls": 0.02877, "loss_rpn_bbox": 0.04604, "loss_cls": 0.20591, "acc": 92.82568, "loss_bbox": 0.23142, "loss_mask": 0.2286, "loss": 0.74073, "time": 0.56094} -{"mode": "train", "epoch": 6, "iter": 4850, "lr": 0.0002, "memory": 12588, "data_time": 0.04096, "loss_rpn_cls": 0.02482, "loss_rpn_bbox": 0.04705, "loss_cls": 0.21217, "acc": 92.81567, "loss_bbox": 0.22962, "loss_mask": 0.23139, "loss": 0.74504, "time": 0.50983} -{"mode": "train", "epoch": 6, "iter": 4900, "lr": 0.0002, "memory": 12588, "data_time": 0.04524, "loss_rpn_cls": 0.02979, "loss_rpn_bbox": 0.04937, "loss_cls": 0.21806, "acc": 92.62061, "loss_bbox": 0.24237, "loss_mask": 0.23442, "loss": 0.77401, "time": 0.51456} -{"mode": "train", "epoch": 6, "iter": 4950, "lr": 0.0002, "memory": 12588, "data_time": 0.03888, "loss_rpn_cls": 0.02573, "loss_rpn_bbox": 0.04633, "loss_cls": 0.20753, "acc": 92.90527, "loss_bbox": 0.22639, "loss_mask": 0.23503, "loss": 0.741, "time": 0.54122} -{"mode": "train", "epoch": 6, "iter": 5000, "lr": 0.0002, "memory": 12588, "data_time": 0.04111, "loss_rpn_cls": 0.02326, "loss_rpn_bbox": 0.04448, "loss_cls": 0.2098, "acc": 92.77026, "loss_bbox": 0.23084, "loss_mask": 0.23852, "loss": 0.7469, "time": 0.54887} -{"mode": "train", "epoch": 6, "iter": 5050, "lr": 0.0002, "memory": 12588, "data_time": 0.04553, "loss_rpn_cls": 0.02529, "loss_rpn_bbox": 0.04926, "loss_cls": 0.21064, "acc": 92.76831, "loss_bbox": 0.23369, "loss_mask": 0.23576, "loss": 0.75464, "time": 0.49843} -{"mode": "train", "epoch": 6, "iter": 5100, "lr": 0.0002, "memory": 12588, "data_time": 0.052, "loss_rpn_cls": 0.02576, "loss_rpn_bbox": 0.04934, "loss_cls": 0.21616, "acc": 92.68481, "loss_bbox": 0.2312, "loss_mask": 0.24296, "loss": 0.76542, "time": 0.55945} -{"mode": "train", "epoch": 6, 
"iter": 5150, "lr": 0.0002, "memory": 12588, "data_time": 0.03903, "loss_rpn_cls": 0.02568, "loss_rpn_bbox": 0.04493, "loss_cls": 0.21448, "acc": 92.7417, "loss_bbox": 0.23466, "loss_mask": 0.23681, "loss": 0.75657, "time": 0.4952} -{"mode": "train", "epoch": 6, "iter": 5200, "lr": 0.0002, "memory": 12588, "data_time": 0.04492, "loss_rpn_cls": 0.02807, "loss_rpn_bbox": 0.04765, "loss_cls": 0.21127, "acc": 92.74805, "loss_bbox": 0.23191, "loss_mask": 0.23068, "loss": 0.74959, "time": 0.50302} -{"mode": "train", "epoch": 6, "iter": 5250, "lr": 0.0002, "memory": 12588, "data_time": 0.05214, "loss_rpn_cls": 0.02598, "loss_rpn_bbox": 0.04924, "loss_cls": 0.20867, "acc": 92.7832, "loss_bbox": 0.22889, "loss_mask": 0.23583, "loss": 0.74862, "time": 0.51066} -{"mode": "train", "epoch": 6, "iter": 5300, "lr": 0.0002, "memory": 12588, "data_time": 0.0437, "loss_rpn_cls": 0.02873, "loss_rpn_bbox": 0.05085, "loss_cls": 0.21301, "acc": 92.6377, "loss_bbox": 0.23834, "loss_mask": 0.23692, "loss": 0.76785, "time": 0.5237} -{"mode": "train", "epoch": 6, "iter": 5350, "lr": 0.0002, "memory": 12588, "data_time": 0.05314, "loss_rpn_cls": 0.02532, "loss_rpn_bbox": 0.04679, "loss_cls": 0.21228, "acc": 92.74341, "loss_bbox": 0.23646, "loss_mask": 0.23475, "loss": 0.7556, "time": 0.54674} -{"mode": "train", "epoch": 6, "iter": 5400, "lr": 0.0002, "memory": 12588, "data_time": 0.03514, "loss_rpn_cls": 0.02728, "loss_rpn_bbox": 0.04387, "loss_cls": 0.205, "acc": 92.99072, "loss_bbox": 0.22415, "loss_mask": 0.23508, "loss": 0.73538, "time": 0.49125} -{"mode": "train", "epoch": 6, "iter": 5450, "lr": 0.0002, "memory": 12588, "data_time": 0.04857, "loss_rpn_cls": 0.02796, "loss_rpn_bbox": 0.05212, "loss_cls": 0.21956, "acc": 92.46655, "loss_bbox": 0.24276, "loss_mask": 0.23906, "loss": 0.78145, "time": 0.50721} -{"mode": "train", "epoch": 6, "iter": 5500, "lr": 0.0002, "memory": 12588, "data_time": 0.04398, "loss_rpn_cls": 0.02546, "loss_rpn_bbox": 0.04581, "loss_cls": 0.19907, "acc": 93.09009, "loss_bbox": 0.22613, "loss_mask": 0.22959, "loss": 0.72605, "time": 0.50439} -{"mode": "train", "epoch": 6, "iter": 5550, "lr": 0.0002, "memory": 12588, "data_time": 0.04896, "loss_rpn_cls": 0.02585, "loss_rpn_bbox": 0.048, "loss_cls": 0.20897, "acc": 92.79761, "loss_bbox": 0.22991, "loss_mask": 0.23905, "loss": 0.75178, "time": 0.50135} -{"mode": "train", "epoch": 6, "iter": 5600, "lr": 0.0002, "memory": 12588, "data_time": 0.04968, "loss_rpn_cls": 0.02759, "loss_rpn_bbox": 0.04666, "loss_cls": 0.2022, "acc": 93.12695, "loss_bbox": 0.22562, "loss_mask": 0.23525, "loss": 0.73733, "time": 0.50634} -{"mode": "train", "epoch": 6, "iter": 5650, "lr": 0.0002, "memory": 12588, "data_time": 0.05313, "loss_rpn_cls": 0.02651, "loss_rpn_bbox": 0.04667, "loss_cls": 0.20034, "acc": 93.05005, "loss_bbox": 0.23129, "loss_mask": 0.23308, "loss": 0.73789, "time": 0.51721} -{"mode": "train", "epoch": 6, "iter": 5700, "lr": 0.0002, "memory": 12588, "data_time": 0.04517, "loss_rpn_cls": 0.02566, "loss_rpn_bbox": 0.04531, "loss_cls": 0.19958, "acc": 93.0293, "loss_bbox": 0.22434, "loss_mask": 0.2298, "loss": 0.7247, "time": 0.52311} -{"mode": "train", "epoch": 6, "iter": 5750, "lr": 0.0002, "memory": 12588, "data_time": 0.04541, "loss_rpn_cls": 0.02367, "loss_rpn_bbox": 0.04379, "loss_cls": 0.19785, "acc": 93.12305, "loss_bbox": 0.22526, "loss_mask": 0.22818, "loss": 0.71875, "time": 0.49627} -{"mode": "train", "epoch": 6, "iter": 5800, "lr": 0.0002, "memory": 12588, "data_time": 0.04523, "loss_rpn_cls": 0.02622, "loss_rpn_bbox": 0.04694, 
"loss_cls": 0.20314, "acc": 93.12378, "loss_bbox": 0.22458, "loss_mask": 0.23224, "loss": 0.73311, "time": 0.55523} -{"mode": "train", "epoch": 6, "iter": 5850, "lr": 0.0002, "memory": 12588, "data_time": 0.05099, "loss_rpn_cls": 0.02535, "loss_rpn_bbox": 0.04446, "loss_cls": 0.2045, "acc": 92.9834, "loss_bbox": 0.22472, "loss_mask": 0.23251, "loss": 0.73154, "time": 0.50481} -{"mode": "train", "epoch": 6, "iter": 5900, "lr": 0.0002, "memory": 12588, "data_time": 0.04772, "loss_rpn_cls": 0.02707, "loss_rpn_bbox": 0.04532, "loss_cls": 0.20396, "acc": 92.98999, "loss_bbox": 0.22626, "loss_mask": 0.23362, "loss": 0.73623, "time": 0.49681} -{"mode": "train", "epoch": 6, "iter": 5950, "lr": 0.0002, "memory": 12588, "data_time": 0.05252, "loss_rpn_cls": 0.02613, "loss_rpn_bbox": 0.04706, "loss_cls": 0.21282, "acc": 92.75098, "loss_bbox": 0.23246, "loss_mask": 0.24143, "loss": 0.75989, "time": 0.50131} -{"mode": "train", "epoch": 6, "iter": 6000, "lr": 0.0002, "memory": 12588, "data_time": 0.04844, "loss_rpn_cls": 0.02785, "loss_rpn_bbox": 0.04467, "loss_cls": 0.20554, "acc": 93.02808, "loss_bbox": 0.2258, "loss_mask": 0.22831, "loss": 0.73218, "time": 0.49528} -{"mode": "train", "epoch": 6, "iter": 6050, "lr": 0.0002, "memory": 12588, "data_time": 0.04453, "loss_rpn_cls": 0.02702, "loss_rpn_bbox": 0.05058, "loss_cls": 0.21173, "acc": 92.79175, "loss_bbox": 0.23719, "loss_mask": 0.24156, "loss": 0.76808, "time": 0.50638} -{"mode": "train", "epoch": 6, "iter": 6100, "lr": 0.0002, "memory": 12588, "data_time": 0.05768, "loss_rpn_cls": 0.02604, "loss_rpn_bbox": 0.04797, "loss_cls": 0.2093, "acc": 92.82544, "loss_bbox": 0.23462, "loss_mask": 0.2394, "loss": 0.75732, "time": 0.51085} -{"mode": "train", "epoch": 6, "iter": 6150, "lr": 0.0002, "memory": 12588, "data_time": 0.043, "loss_rpn_cls": 0.02383, "loss_rpn_bbox": 0.04545, "loss_cls": 0.19996, "acc": 93.01367, "loss_bbox": 0.22882, "loss_mask": 0.23426, "loss": 0.73232, "time": 0.50095} -{"mode": "train", "epoch": 6, "iter": 6200, "lr": 0.0002, "memory": 12588, "data_time": 0.04415, "loss_rpn_cls": 0.02509, "loss_rpn_bbox": 0.04285, "loss_cls": 0.20289, "acc": 93.14478, "loss_bbox": 0.22162, "loss_mask": 0.2316, "loss": 0.72405, "time": 0.54251} -{"mode": "train", "epoch": 6, "iter": 6250, "lr": 0.0002, "memory": 12588, "data_time": 0.05964, "loss_rpn_cls": 0.02334, "loss_rpn_bbox": 0.04541, "loss_cls": 0.21333, "acc": 92.71973, "loss_bbox": 0.23824, "loss_mask": 0.23298, "loss": 0.75331, "time": 0.50621} -{"mode": "train", "epoch": 6, "iter": 6300, "lr": 0.0002, "memory": 12588, "data_time": 0.05451, "loss_rpn_cls": 0.02872, "loss_rpn_bbox": 0.04686, "loss_cls": 0.2071, "acc": 92.91357, "loss_bbox": 0.22666, "loss_mask": 0.24065, "loss": 0.74999, "time": 0.49247} -{"mode": "train", "epoch": 6, "iter": 6350, "lr": 0.0002, "memory": 12588, "data_time": 0.03908, "loss_rpn_cls": 0.02552, "loss_rpn_bbox": 0.04579, "loss_cls": 0.19821, "acc": 93.21045, "loss_bbox": 0.22494, "loss_mask": 0.23761, "loss": 0.73206, "time": 0.54113} -{"mode": "train", "epoch": 6, "iter": 6400, "lr": 0.0002, "memory": 12588, "data_time": 0.05822, "loss_rpn_cls": 0.02606, "loss_rpn_bbox": 0.04896, "loss_cls": 0.2098, "acc": 92.73291, "loss_bbox": 0.24161, "loss_mask": 0.24082, "loss": 0.76726, "time": 0.49879} -{"mode": "train", "epoch": 6, "iter": 6450, "lr": 0.0002, "memory": 12588, "data_time": 0.04116, "loss_rpn_cls": 0.02652, "loss_rpn_bbox": 0.04606, "loss_cls": 0.2073, "acc": 92.84399, "loss_bbox": 0.23171, "loss_mask": 0.23334, "loss": 0.74492, "time": 0.50024} 
-{"mode": "train", "epoch": 6, "iter": 6500, "lr": 0.0002, "memory": 12588, "data_time": 0.05393, "loss_rpn_cls": 0.02538, "loss_rpn_bbox": 0.04743, "loss_cls": 0.21301, "acc": 92.76587, "loss_bbox": 0.23628, "loss_mask": 0.23606, "loss": 0.75816, "time": 0.50532} -{"mode": "train", "epoch": 6, "iter": 6550, "lr": 0.0002, "memory": 12588, "data_time": 0.04565, "loss_rpn_cls": 0.02674, "loss_rpn_bbox": 0.04508, "loss_cls": 0.20721, "acc": 92.99365, "loss_bbox": 0.2271, "loss_mask": 0.22981, "loss": 0.73595, "time": 0.50133} -{"mode": "train", "epoch": 6, "iter": 6600, "lr": 0.0002, "memory": 12588, "data_time": 0.05651, "loss_rpn_cls": 0.02495, "loss_rpn_bbox": 0.04589, "loss_cls": 0.20951, "acc": 92.80005, "loss_bbox": 0.23162, "loss_mask": 0.22953, "loss": 0.7415, "time": 0.51534} -{"mode": "train", "epoch": 6, "iter": 6650, "lr": 0.0002, "memory": 12588, "data_time": 0.05301, "loss_rpn_cls": 0.02699, "loss_rpn_bbox": 0.04861, "loss_cls": 0.21277, "acc": 92.82666, "loss_bbox": 0.23458, "loss_mask": 0.23648, "loss": 0.75943, "time": 0.51729} -{"mode": "train", "epoch": 6, "iter": 6700, "lr": 0.0002, "memory": 12588, "data_time": 0.04806, "loss_rpn_cls": 0.0271, "loss_rpn_bbox": 0.04741, "loss_cls": 0.20693, "acc": 92.99146, "loss_bbox": 0.23538, "loss_mask": 0.23729, "loss": 0.75411, "time": 0.55947} -{"mode": "train", "epoch": 6, "iter": 6750, "lr": 0.0002, "memory": 12588, "data_time": 0.04706, "loss_rpn_cls": 0.02433, "loss_rpn_bbox": 0.04321, "loss_cls": 0.19971, "acc": 93.09546, "loss_bbox": 0.22808, "loss_mask": 0.232, "loss": 0.72733, "time": 0.48931} -{"mode": "train", "epoch": 6, "iter": 6800, "lr": 0.0002, "memory": 12588, "data_time": 0.05279, "loss_rpn_cls": 0.02413, "loss_rpn_bbox": 0.04621, "loss_cls": 0.20676, "acc": 92.87622, "loss_bbox": 0.22959, "loss_mask": 0.24158, "loss": 0.74827, "time": 0.50685} -{"mode": "train", "epoch": 6, "iter": 6850, "lr": 0.0002, "memory": 12588, "data_time": 0.0464, "loss_rpn_cls": 0.02538, "loss_rpn_bbox": 0.04641, "loss_cls": 0.20092, "acc": 93.05396, "loss_bbox": 0.23182, "loss_mask": 0.22798, "loss": 0.73251, "time": 0.509} -{"mode": "train", "epoch": 6, "iter": 6900, "lr": 0.0002, "memory": 12588, "data_time": 0.04345, "loss_rpn_cls": 0.02527, "loss_rpn_bbox": 0.04611, "loss_cls": 0.21726, "acc": 92.59253, "loss_bbox": 0.23783, "loss_mask": 0.23326, "loss": 0.75974, "time": 0.50797} -{"mode": "train", "epoch": 6, "iter": 6950, "lr": 0.0002, "memory": 12588, "data_time": 0.04809, "loss_rpn_cls": 0.02372, "loss_rpn_bbox": 0.04542, "loss_cls": 0.1944, "acc": 93.2124, "loss_bbox": 0.22093, "loss_mask": 0.22826, "loss": 0.71273, "time": 0.5155} -{"mode": "train", "epoch": 6, "iter": 7000, "lr": 0.0002, "memory": 12588, "data_time": 0.05252, "loss_rpn_cls": 0.02586, "loss_rpn_bbox": 0.04717, "loss_cls": 0.21909, "acc": 92.78564, "loss_bbox": 0.22703, "loss_mask": 0.23547, "loss": 0.75462, "time": 0.5472} -{"mode": "train", "epoch": 6, "iter": 7050, "lr": 0.0002, "memory": 12588, "data_time": 0.04339, "loss_rpn_cls": 0.02812, "loss_rpn_bbox": 0.04532, "loss_cls": 0.20467, "acc": 93.10303, "loss_bbox": 0.22682, "loss_mask": 0.23276, "loss": 0.73768, "time": 0.50362} -{"mode": "train", "epoch": 6, "iter": 7100, "lr": 0.0002, "memory": 12588, "data_time": 0.04043, "loss_rpn_cls": 0.02444, "loss_rpn_bbox": 0.04412, "loss_cls": 0.20727, "acc": 92.87402, "loss_bbox": 0.22875, "loss_mask": 0.23603, "loss": 0.74062, "time": 0.50403} -{"mode": "train", "epoch": 6, "iter": 7150, "lr": 0.0002, "memory": 12588, "data_time": 0.05052, "loss_rpn_cls": 
0.02618, "loss_rpn_bbox": 0.04806, "loss_cls": 0.21655, "acc": 92.5564, "loss_bbox": 0.24504, "loss_mask": 0.24002, "loss": 0.77586, "time": 0.55809} -{"mode": "train", "epoch": 6, "iter": 7200, "lr": 0.0002, "memory": 12588, "data_time": 0.05046, "loss_rpn_cls": 0.02557, "loss_rpn_bbox": 0.04646, "loss_cls": 0.20978, "acc": 92.78345, "loss_bbox": 0.23371, "loss_mask": 0.23387, "loss": 0.74939, "time": 0.5019} -{"mode": "train", "epoch": 6, "iter": 7250, "lr": 0.0002, "memory": 12588, "data_time": 0.06675, "loss_rpn_cls": 0.026, "loss_rpn_bbox": 0.05006, "loss_cls": 0.20397, "acc": 92.80884, "loss_bbox": 0.23263, "loss_mask": 0.23051, "loss": 0.74318, "time": 0.52041} -{"mode": "train", "epoch": 6, "iter": 7300, "lr": 0.0002, "memory": 12588, "data_time": 0.04504, "loss_rpn_cls": 0.02669, "loss_rpn_bbox": 0.04448, "loss_cls": 0.21005, "acc": 92.76807, "loss_bbox": 0.23537, "loss_mask": 0.23251, "loss": 0.74909, "time": 0.4966} -{"mode": "val", "epoch": 6, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3872, "bbox_mAP_50": 0.6071, "bbox_mAP_75": 0.4266, "bbox_mAP_s": 0.2217, "bbox_mAP_m": 0.4299, "bbox_mAP_l": 0.5045, "bbox_mAP_copypaste": "0.3872 0.6071 0.4266 0.2217 0.4299 0.5045", "segm_mAP": 0.3629, "segm_mAP_50": 0.5798, "segm_mAP_75": 0.39, "segm_mAP_s": 0.1687, "segm_mAP_m": 0.3981, "segm_mAP_l": 0.5287, "segm_mAP_copypaste": "0.3629 0.5798 0.3900 0.1687 0.3981 0.5287"} -{"mode": "train", "epoch": 7, "iter": 50, "lr": 0.0002, "memory": 12588, "data_time": 0.12866, "loss_rpn_cls": 0.02387, "loss_rpn_bbox": 0.04632, "loss_cls": 0.19732, "acc": 93.03784, "loss_bbox": 0.22973, "loss_mask": 0.2275, "loss": 0.72475, "time": 0.61485} -{"mode": "train", "epoch": 7, "iter": 100, "lr": 0.0002, "memory": 12588, "data_time": 0.04577, "loss_rpn_cls": 0.02307, "loss_rpn_bbox": 0.04413, "loss_cls": 0.18846, "acc": 93.3186, "loss_bbox": 0.22469, "loss_mask": 0.22841, "loss": 0.70876, "time": 0.53474} -{"mode": "train", "epoch": 7, "iter": 150, "lr": 0.0002, "memory": 12588, "data_time": 0.05161, "loss_rpn_cls": 0.02346, "loss_rpn_bbox": 0.04732, "loss_cls": 0.19616, "acc": 93.03882, "loss_bbox": 0.23042, "loss_mask": 0.23252, "loss": 0.72988, "time": 0.54851} -{"mode": "train", "epoch": 7, "iter": 200, "lr": 0.0002, "memory": 12588, "data_time": 0.04759, "loss_rpn_cls": 0.02613, "loss_rpn_bbox": 0.04889, "loss_cls": 0.21067, "acc": 92.61987, "loss_bbox": 0.23886, "loss_mask": 0.23556, "loss": 0.7601, "time": 0.53388} -{"mode": "train", "epoch": 7, "iter": 250, "lr": 0.0002, "memory": 12588, "data_time": 0.04207, "loss_rpn_cls": 0.02402, "loss_rpn_bbox": 0.044, "loss_cls": 0.20527, "acc": 93.02002, "loss_bbox": 0.2222, "loss_mask": 0.23127, "loss": 0.72676, "time": 0.49623} -{"mode": "train", "epoch": 7, "iter": 300, "lr": 0.0002, "memory": 12588, "data_time": 0.04786, "loss_rpn_cls": 0.02365, "loss_rpn_bbox": 0.04384, "loss_cls": 0.19732, "acc": 93.2041, "loss_bbox": 0.23029, "loss_mask": 0.22996, "loss": 0.72506, "time": 0.51474} -{"mode": "train", "epoch": 7, "iter": 350, "lr": 0.0002, "memory": 12588, "data_time": 0.0475, "loss_rpn_cls": 0.02322, "loss_rpn_bbox": 0.04474, "loss_cls": 0.19861, "acc": 93.12305, "loss_bbox": 0.22669, "loss_mask": 0.22968, "loss": 0.72294, "time": 0.49734} -{"mode": "train", "epoch": 7, "iter": 400, "lr": 0.0002, "memory": 12588, "data_time": 0.03767, "loss_rpn_cls": 0.02522, "loss_rpn_bbox": 0.04637, "loss_cls": 0.19967, "acc": 93.02808, "loss_bbox": 0.22836, "loss_mask": 0.23204, "loss": 0.73166, "time": 0.50114} -{"mode": "train", "epoch": 7, "iter": 450, "lr": 0.0002, 
"memory": 12588, "data_time": 0.0502, "loss_rpn_cls": 0.02089, "loss_rpn_bbox": 0.04143, "loss_cls": 0.18514, "acc": 93.50464, "loss_bbox": 0.21657, "loss_mask": 0.22205, "loss": 0.68608, "time": 0.50366} -{"mode": "train", "epoch": 7, "iter": 500, "lr": 0.0002, "memory": 12588, "data_time": 0.05118, "loss_rpn_cls": 0.02414, "loss_rpn_bbox": 0.04464, "loss_cls": 0.20125, "acc": 93.07275, "loss_bbox": 0.22919, "loss_mask": 0.22778, "loss": 0.727, "time": 0.51486} -{"mode": "train", "epoch": 7, "iter": 550, "lr": 0.0002, "memory": 12588, "data_time": 0.05628, "loss_rpn_cls": 0.02161, "loss_rpn_bbox": 0.04449, "loss_cls": 0.19369, "acc": 93.15967, "loss_bbox": 0.22578, "loss_mask": 0.22784, "loss": 0.71341, "time": 0.62527} -{"mode": "train", "epoch": 7, "iter": 600, "lr": 0.0002, "memory": 12588, "data_time": 0.0492, "loss_rpn_cls": 0.02225, "loss_rpn_bbox": 0.04635, "loss_cls": 0.20007, "acc": 92.95703, "loss_bbox": 0.23516, "loss_mask": 0.23221, "loss": 0.73603, "time": 0.51442} -{"mode": "train", "epoch": 7, "iter": 650, "lr": 0.0002, "memory": 12588, "data_time": 0.04735, "loss_rpn_cls": 0.02381, "loss_rpn_bbox": 0.0458, "loss_cls": 0.20261, "acc": 92.93408, "loss_bbox": 0.23237, "loss_mask": 0.2322, "loss": 0.73679, "time": 0.52166} -{"mode": "train", "epoch": 7, "iter": 700, "lr": 0.0002, "memory": 12588, "data_time": 0.04801, "loss_rpn_cls": 0.02315, "loss_rpn_bbox": 0.04638, "loss_cls": 0.18934, "acc": 93.38794, "loss_bbox": 0.22406, "loss_mask": 0.22908, "loss": 0.71202, "time": 0.51031} -{"mode": "train", "epoch": 7, "iter": 750, "lr": 0.0002, "memory": 12588, "data_time": 0.04495, "loss_rpn_cls": 0.02474, "loss_rpn_bbox": 0.04595, "loss_cls": 0.20525, "acc": 92.8313, "loss_bbox": 0.23073, "loss_mask": 0.23184, "loss": 0.73852, "time": 0.51421} -{"mode": "train", "epoch": 7, "iter": 800, "lr": 0.0002, "memory": 12588, "data_time": 0.04043, "loss_rpn_cls": 0.0255, "loss_rpn_bbox": 0.04956, "loss_cls": 0.21002, "acc": 92.80371, "loss_bbox": 0.2374, "loss_mask": 0.23916, "loss": 0.76164, "time": 0.52225} -{"mode": "train", "epoch": 7, "iter": 850, "lr": 0.0002, "memory": 12588, "data_time": 0.04337, "loss_rpn_cls": 0.02346, "loss_rpn_bbox": 0.04585, "loss_cls": 0.20398, "acc": 93.04907, "loss_bbox": 0.22208, "loss_mask": 0.23183, "loss": 0.72721, "time": 0.51877} -{"mode": "train", "epoch": 7, "iter": 900, "lr": 0.0002, "memory": 12588, "data_time": 0.05079, "loss_rpn_cls": 0.0207, "loss_rpn_bbox": 0.04329, "loss_cls": 0.18733, "acc": 93.49902, "loss_bbox": 0.21573, "loss_mask": 0.22256, "loss": 0.6896, "time": 0.51118} -{"mode": "train", "epoch": 7, "iter": 950, "lr": 0.0002, "memory": 12588, "data_time": 0.04288, "loss_rpn_cls": 0.02316, "loss_rpn_bbox": 0.04677, "loss_cls": 0.20022, "acc": 92.93018, "loss_bbox": 0.23135, "loss_mask": 0.23075, "loss": 0.73224, "time": 0.56173} -{"mode": "train", "epoch": 7, "iter": 1000, "lr": 0.0002, "memory": 12588, "data_time": 0.04558, "loss_rpn_cls": 0.02262, "loss_rpn_bbox": 0.04736, "loss_cls": 0.19672, "acc": 92.98853, "loss_bbox": 0.23439, "loss_mask": 0.23164, "loss": 0.73274, "time": 0.51414} -{"mode": "train", "epoch": 7, "iter": 1050, "lr": 0.0002, "memory": 12588, "data_time": 0.04748, "loss_rpn_cls": 0.02194, "loss_rpn_bbox": 0.04384, "loss_cls": 0.19283, "acc": 93.25586, "loss_bbox": 0.22656, "loss_mask": 0.23133, "loss": 0.7165, "time": 0.55027} -{"mode": "train", "epoch": 7, "iter": 1100, "lr": 0.0002, "memory": 12588, "data_time": 0.04984, "loss_rpn_cls": 0.02256, "loss_rpn_bbox": 0.04498, "loss_cls": 0.18666, "acc": 93.38257, 
"loss_bbox": 0.22164, "loss_mask": 0.22871, "loss": 0.70454, "time": 0.51506} -{"mode": "train", "epoch": 7, "iter": 1150, "lr": 0.0002, "memory": 12588, "data_time": 0.05196, "loss_rpn_cls": 0.02383, "loss_rpn_bbox": 0.04732, "loss_cls": 0.20254, "acc": 92.93701, "loss_bbox": 0.23137, "loss_mask": 0.23175, "loss": 0.73682, "time": 0.56275} -{"mode": "train", "epoch": 7, "iter": 1200, "lr": 0.0002, "memory": 12588, "data_time": 0.04059, "loss_rpn_cls": 0.02187, "loss_rpn_bbox": 0.04371, "loss_cls": 0.18829, "acc": 93.32983, "loss_bbox": 0.21606, "loss_mask": 0.22456, "loss": 0.6945, "time": 0.49553} -{"mode": "train", "epoch": 7, "iter": 1250, "lr": 0.0002, "memory": 12588, "data_time": 0.04132, "loss_rpn_cls": 0.02316, "loss_rpn_bbox": 0.04562, "loss_cls": 0.19794, "acc": 93.07935, "loss_bbox": 0.22286, "loss_mask": 0.2307, "loss": 0.72029, "time": 0.49686} -{"mode": "train", "epoch": 7, "iter": 1300, "lr": 0.0002, "memory": 12588, "data_time": 0.04261, "loss_rpn_cls": 0.02491, "loss_rpn_bbox": 0.04587, "loss_cls": 0.18858, "acc": 93.34692, "loss_bbox": 0.22218, "loss_mask": 0.22983, "loss": 0.71137, "time": 0.50328} -{"mode": "train", "epoch": 7, "iter": 1350, "lr": 0.0002, "memory": 12588, "data_time": 0.04517, "loss_rpn_cls": 0.02446, "loss_rpn_bbox": 0.04648, "loss_cls": 0.20376, "acc": 92.95654, "loss_bbox": 0.23307, "loss_mask": 0.23958, "loss": 0.74735, "time": 0.51389} -{"mode": "train", "epoch": 7, "iter": 1400, "lr": 0.0002, "memory": 12588, "data_time": 0.04679, "loss_rpn_cls": 0.02323, "loss_rpn_bbox": 0.04432, "loss_cls": 0.19237, "acc": 93.32422, "loss_bbox": 0.21876, "loss_mask": 0.22855, "loss": 0.70724, "time": 0.50938} -{"mode": "train", "epoch": 7, "iter": 1450, "lr": 0.0002, "memory": 12588, "data_time": 0.04969, "loss_rpn_cls": 0.02279, "loss_rpn_bbox": 0.04346, "loss_cls": 0.1915, "acc": 93.34619, "loss_bbox": 0.22278, "loss_mask": 0.2255, "loss": 0.70603, "time": 0.4989} -{"mode": "train", "epoch": 7, "iter": 1500, "lr": 0.0002, "memory": 12588, "data_time": 0.04198, "loss_rpn_cls": 0.02347, "loss_rpn_bbox": 0.0458, "loss_cls": 0.20395, "acc": 92.86035, "loss_bbox": 0.23033, "loss_mask": 0.23085, "loss": 0.7344, "time": 0.56033} -{"mode": "train", "epoch": 7, "iter": 1550, "lr": 0.0002, "memory": 12588, "data_time": 0.03811, "loss_rpn_cls": 0.02227, "loss_rpn_bbox": 0.04273, "loss_cls": 0.19106, "acc": 93.44678, "loss_bbox": 0.21349, "loss_mask": 0.22831, "loss": 0.69786, "time": 0.49703} -{"mode": "train", "epoch": 7, "iter": 1600, "lr": 0.0002, "memory": 12588, "data_time": 0.04451, "loss_rpn_cls": 0.02335, "loss_rpn_bbox": 0.04459, "loss_cls": 0.19228, "acc": 93.29907, "loss_bbox": 0.22194, "loss_mask": 0.2253, "loss": 0.70747, "time": 0.50216} -{"mode": "train", "epoch": 7, "iter": 1650, "lr": 0.0002, "memory": 12588, "data_time": 0.04157, "loss_rpn_cls": 0.02516, "loss_rpn_bbox": 0.04722, "loss_cls": 0.20707, "acc": 92.71802, "loss_bbox": 0.23692, "loss_mask": 0.23281, "loss": 0.74919, "time": 0.50966} -{"mode": "train", "epoch": 7, "iter": 1700, "lr": 0.0002, "memory": 12588, "data_time": 0.04126, "loss_rpn_cls": 0.02189, "loss_rpn_bbox": 0.04512, "loss_cls": 0.20391, "acc": 93.02246, "loss_bbox": 0.2264, "loss_mask": 0.22916, "loss": 0.72647, "time": 0.50351} -{"mode": "train", "epoch": 7, "iter": 1750, "lr": 0.0002, "memory": 12588, "data_time": 0.04454, "loss_rpn_cls": 0.02225, "loss_rpn_bbox": 0.0457, "loss_cls": 0.19989, "acc": 93.14819, "loss_bbox": 0.22408, "loss_mask": 0.22973, "loss": 0.72165, "time": 0.49836} -{"mode": "train", "epoch": 7, "iter": 
1800, "lr": 0.0002, "memory": 12588, "data_time": 0.04016, "loss_rpn_cls": 0.0233, "loss_rpn_bbox": 0.04691, "loss_cls": 0.19748, "acc": 93.12256, "loss_bbox": 0.23315, "loss_mask": 0.22797, "loss": 0.72881, "time": 0.50928} -{"mode": "train", "epoch": 7, "iter": 1850, "lr": 0.0002, "memory": 12588, "data_time": 0.04361, "loss_rpn_cls": 0.02493, "loss_rpn_bbox": 0.04487, "loss_cls": 0.1982, "acc": 93.15527, "loss_bbox": 0.22171, "loss_mask": 0.22788, "loss": 0.71759, "time": 0.62279} -{"mode": "train", "epoch": 7, "iter": 1900, "lr": 0.0002, "memory": 12588, "data_time": 0.04418, "loss_rpn_cls": 0.02234, "loss_rpn_bbox": 0.04121, "loss_cls": 0.19527, "acc": 93.19653, "loss_bbox": 0.22236, "loss_mask": 0.22912, "loss": 0.71029, "time": 0.51302} -{"mode": "train", "epoch": 7, "iter": 1950, "lr": 0.0002, "memory": 12588, "data_time": 0.04439, "loss_rpn_cls": 0.02138, "loss_rpn_bbox": 0.04158, "loss_cls": 0.19064, "acc": 93.48218, "loss_bbox": 0.21772, "loss_mask": 0.22951, "loss": 0.70084, "time": 0.59784} -{"mode": "train", "epoch": 7, "iter": 2000, "lr": 0.0002, "memory": 12588, "data_time": 0.04123, "loss_rpn_cls": 0.0205, "loss_rpn_bbox": 0.04022, "loss_cls": 0.18899, "acc": 93.5271, "loss_bbox": 0.21277, "loss_mask": 0.22628, "loss": 0.68876, "time": 0.49711} -{"mode": "train", "epoch": 7, "iter": 2050, "lr": 0.0002, "memory": 12588, "data_time": 0.04913, "loss_rpn_cls": 0.02262, "loss_rpn_bbox": 0.04497, "loss_cls": 0.19664, "acc": 93.14526, "loss_bbox": 0.22353, "loss_mask": 0.22942, "loss": 0.71719, "time": 0.50843} -{"mode": "train", "epoch": 7, "iter": 2100, "lr": 0.0002, "memory": 12588, "data_time": 0.04181, "loss_rpn_cls": 0.02586, "loss_rpn_bbox": 0.0465, "loss_cls": 0.19624, "acc": 93.20703, "loss_bbox": 0.22336, "loss_mask": 0.23116, "loss": 0.72312, "time": 0.50433} -{"mode": "train", "epoch": 7, "iter": 2150, "lr": 0.0002, "memory": 12588, "data_time": 0.04325, "loss_rpn_cls": 0.0249, "loss_rpn_bbox": 0.04742, "loss_cls": 0.19146, "acc": 93.33057, "loss_bbox": 0.22371, "loss_mask": 0.23567, "loss": 0.72316, "time": 0.5041} -{"mode": "train", "epoch": 7, "iter": 2200, "lr": 0.0002, "memory": 12588, "data_time": 0.04727, "loss_rpn_cls": 0.02323, "loss_rpn_bbox": 0.0455, "loss_cls": 0.19933, "acc": 93.02808, "loss_bbox": 0.22667, "loss_mask": 0.23209, "loss": 0.72683, "time": 0.50035} -{"mode": "train", "epoch": 7, "iter": 2250, "lr": 0.0002, "memory": 12588, "data_time": 0.04567, "loss_rpn_cls": 0.02488, "loss_rpn_bbox": 0.04358, "loss_cls": 0.20259, "acc": 92.9895, "loss_bbox": 0.22816, "loss_mask": 0.22973, "loss": 0.72893, "time": 0.50902} -{"mode": "train", "epoch": 7, "iter": 2300, "lr": 0.0002, "memory": 12588, "data_time": 0.04989, "loss_rpn_cls": 0.02586, "loss_rpn_bbox": 0.04974, "loss_cls": 0.20652, "acc": 92.64233, "loss_bbox": 0.23751, "loss_mask": 0.23411, "loss": 0.75374, "time": 0.51089} -{"mode": "train", "epoch": 7, "iter": 2350, "lr": 0.0002, "memory": 12588, "data_time": 0.04511, "loss_rpn_cls": 0.02543, "loss_rpn_bbox": 0.0469, "loss_cls": 0.19967, "acc": 93.09644, "loss_bbox": 0.227, "loss_mask": 0.23157, "loss": 0.73058, "time": 0.55542} -{"mode": "train", "epoch": 7, "iter": 2400, "lr": 0.0002, "memory": 12588, "data_time": 0.04729, "loss_rpn_cls": 0.02296, "loss_rpn_bbox": 0.04896, "loss_cls": 0.20778, "acc": 92.70093, "loss_bbox": 0.24038, "loss_mask": 0.24211, "loss": 0.7622, "time": 0.50494} -{"mode": "train", "epoch": 7, "iter": 2450, "lr": 0.0002, "memory": 12588, "data_time": 0.04315, "loss_rpn_cls": 0.02245, "loss_rpn_bbox": 0.04612, "loss_cls": 
0.19737, "acc": 93.03076, "loss_bbox": 0.23045, "loss_mask": 0.23238, "loss": 0.72877, "time": 0.50859} -{"mode": "train", "epoch": 7, "iter": 2500, "lr": 0.0002, "memory": 12588, "data_time": 0.04952, "loss_rpn_cls": 0.02328, "loss_rpn_bbox": 0.04236, "loss_cls": 0.19839, "acc": 93.19946, "loss_bbox": 0.22448, "loss_mask": 0.22948, "loss": 0.71799, "time": 0.50673} -{"mode": "train", "epoch": 7, "iter": 2550, "lr": 0.0002, "memory": 12588, "data_time": 0.03856, "loss_rpn_cls": 0.02028, "loss_rpn_bbox": 0.04265, "loss_cls": 0.19607, "acc": 93.23877, "loss_bbox": 0.22511, "loss_mask": 0.23053, "loss": 0.71464, "time": 0.49895} -{"mode": "train", "epoch": 7, "iter": 2600, "lr": 0.0002, "memory": 12588, "data_time": 0.05084, "loss_rpn_cls": 0.02493, "loss_rpn_bbox": 0.04644, "loss_cls": 0.22041, "acc": 92.48657, "loss_bbox": 0.23928, "loss_mask": 0.23707, "loss": 0.76814, "time": 0.51336} -{"mode": "train", "epoch": 7, "iter": 2650, "lr": 0.0002, "memory": 12588, "data_time": 0.04051, "loss_rpn_cls": 0.02469, "loss_rpn_bbox": 0.04562, "loss_cls": 0.20205, "acc": 93.04541, "loss_bbox": 0.22313, "loss_mask": 0.23612, "loss": 0.73161, "time": 0.51613} -{"mode": "train", "epoch": 7, "iter": 2700, "lr": 0.0002, "memory": 12588, "data_time": 0.04922, "loss_rpn_cls": 0.02502, "loss_rpn_bbox": 0.04744, "loss_cls": 0.20249, "acc": 93.07739, "loss_bbox": 0.22904, "loss_mask": 0.23929, "loss": 0.74328, "time": 0.50851} -{"mode": "train", "epoch": 7, "iter": 2750, "lr": 0.0002, "memory": 12588, "data_time": 0.04237, "loss_rpn_cls": 0.02508, "loss_rpn_bbox": 0.04415, "loss_cls": 0.20489, "acc": 92.82837, "loss_bbox": 0.22939, "loss_mask": 0.23266, "loss": 0.73617, "time": 0.51049} -{"mode": "train", "epoch": 7, "iter": 2800, "lr": 0.0002, "memory": 12588, "data_time": 0.05044, "loss_rpn_cls": 0.02474, "loss_rpn_bbox": 0.04607, "loss_cls": 0.21155, "acc": 92.72656, "loss_bbox": 0.23065, "loss_mask": 0.23945, "loss": 0.75246, "time": 0.51204} -{"mode": "train", "epoch": 7, "iter": 2850, "lr": 0.0002, "memory": 12588, "data_time": 0.04115, "loss_rpn_cls": 0.02167, "loss_rpn_bbox": 0.0439, "loss_cls": 0.1941, "acc": 93.26514, "loss_bbox": 0.21912, "loss_mask": 0.22722, "loss": 0.70601, "time": 0.49659} -{"mode": "train", "epoch": 7, "iter": 2900, "lr": 0.0002, "memory": 12588, "data_time": 0.04626, "loss_rpn_cls": 0.02803, "loss_rpn_bbox": 0.04969, "loss_cls": 0.21021, "acc": 92.73877, "loss_bbox": 0.2367, "loss_mask": 0.23404, "loss": 0.75867, "time": 0.57179} -{"mode": "train", "epoch": 7, "iter": 2950, "lr": 0.0002, "memory": 12588, "data_time": 0.04978, "loss_rpn_cls": 0.02641, "loss_rpn_bbox": 0.04508, "loss_cls": 0.20703, "acc": 92.87988, "loss_bbox": 0.22667, "loss_mask": 0.2297, "loss": 0.7349, "time": 0.50943} -{"mode": "train", "epoch": 7, "iter": 3000, "lr": 0.0002, "memory": 12588, "data_time": 0.04663, "loss_rpn_cls": 0.02489, "loss_rpn_bbox": 0.04871, "loss_cls": 0.21073, "acc": 92.70972, "loss_bbox": 0.235, "loss_mask": 0.23195, "loss": 0.75128, "time": 0.50757} -{"mode": "train", "epoch": 7, "iter": 3050, "lr": 0.0002, "memory": 12588, "data_time": 0.04331, "loss_rpn_cls": 0.02421, "loss_rpn_bbox": 0.04596, "loss_cls": 0.19853, "acc": 93.03052, "loss_bbox": 0.22547, "loss_mask": 0.2311, "loss": 0.72526, "time": 0.50667} -{"mode": "train", "epoch": 7, "iter": 3100, "lr": 0.0002, "memory": 12588, "data_time": 0.04023, "loss_rpn_cls": 0.02299, "loss_rpn_bbox": 0.04427, "loss_cls": 0.19271, "acc": 93.36084, "loss_bbox": 0.21861, "loss_mask": 0.2303, "loss": 0.70889, "time": 0.49417} -{"mode": 
"train", "epoch": 7, "iter": 3150, "lr": 0.0002, "memory": 12588, "data_time": 0.04654, "loss_rpn_cls": 0.02836, "loss_rpn_bbox": 0.04933, "loss_cls": 0.20607, "acc": 92.8728, "loss_bbox": 0.23883, "loss_mask": 0.23387, "loss": 0.75645, "time": 0.51985} -{"mode": "train", "epoch": 7, "iter": 3200, "lr": 0.0002, "memory": 12588, "data_time": 0.04097, "loss_rpn_cls": 0.02587, "loss_rpn_bbox": 0.04556, "loss_cls": 0.20546, "acc": 92.8855, "loss_bbox": 0.23278, "loss_mask": 0.23323, "loss": 0.7429, "time": 0.50652} -{"mode": "train", "epoch": 7, "iter": 3250, "lr": 0.0002, "memory": 12588, "data_time": 0.04581, "loss_rpn_cls": 0.02339, "loss_rpn_bbox": 0.04413, "loss_cls": 0.19578, "acc": 93.27246, "loss_bbox": 0.22247, "loss_mask": 0.22717, "loss": 0.71295, "time": 0.51608} -{"mode": "train", "epoch": 7, "iter": 3300, "lr": 0.0002, "memory": 12588, "data_time": 0.04088, "loss_rpn_cls": 0.02282, "loss_rpn_bbox": 0.04191, "loss_cls": 0.20214, "acc": 93.07788, "loss_bbox": 0.22731, "loss_mask": 0.23046, "loss": 0.72464, "time": 0.55884} -{"mode": "train", "epoch": 7, "iter": 3350, "lr": 0.0002, "memory": 12588, "data_time": 0.04054, "loss_rpn_cls": 0.02185, "loss_rpn_bbox": 0.04191, "loss_cls": 0.19625, "acc": 93.20752, "loss_bbox": 0.22032, "loss_mask": 0.22671, "loss": 0.70704, "time": 0.55011} -{"mode": "train", "epoch": 7, "iter": 3400, "lr": 0.0002, "memory": 12588, "data_time": 0.04725, "loss_rpn_cls": 0.02352, "loss_rpn_bbox": 0.04361, "loss_cls": 0.20128, "acc": 93.01758, "loss_bbox": 0.23158, "loss_mask": 0.23064, "loss": 0.73063, "time": 0.5027} -{"mode": "train", "epoch": 7, "iter": 3450, "lr": 0.0002, "memory": 12588, "data_time": 0.04251, "loss_rpn_cls": 0.02329, "loss_rpn_bbox": 0.0461, "loss_cls": 0.19642, "acc": 93.03613, "loss_bbox": 0.22738, "loss_mask": 0.23175, "loss": 0.72494, "time": 0.50858} -{"mode": "train", "epoch": 7, "iter": 3500, "lr": 0.0002, "memory": 12588, "data_time": 0.04133, "loss_rpn_cls": 0.02212, "loss_rpn_bbox": 0.04142, "loss_cls": 0.19451, "acc": 93.32031, "loss_bbox": 0.21795, "loss_mask": 0.22823, "loss": 0.70422, "time": 0.50206} -{"mode": "train", "epoch": 7, "iter": 3550, "lr": 0.0002, "memory": 12588, "data_time": 0.04581, "loss_rpn_cls": 0.0228, "loss_rpn_bbox": 0.04308, "loss_cls": 0.20186, "acc": 93.12817, "loss_bbox": 0.2246, "loss_mask": 0.23089, "loss": 0.72324, "time": 0.49399} -{"mode": "train", "epoch": 7, "iter": 3600, "lr": 0.0002, "memory": 12588, "data_time": 0.04699, "loss_rpn_cls": 0.02511, "loss_rpn_bbox": 0.04545, "loss_cls": 0.20822, "acc": 92.80566, "loss_bbox": 0.23004, "loss_mask": 0.23021, "loss": 0.73904, "time": 0.51453} -{"mode": "train", "epoch": 7, "iter": 3650, "lr": 0.0002, "memory": 12588, "data_time": 0.04184, "loss_rpn_cls": 0.02335, "loss_rpn_bbox": 0.04475, "loss_cls": 0.20581, "acc": 92.92969, "loss_bbox": 0.22745, "loss_mask": 0.23963, "loss": 0.74099, "time": 0.48987} -{"mode": "train", "epoch": 7, "iter": 3700, "lr": 0.0002, "memory": 12588, "data_time": 0.04466, "loss_rpn_cls": 0.02431, "loss_rpn_bbox": 0.04378, "loss_cls": 0.194, "acc": 93.25806, "loss_bbox": 0.22073, "loss_mask": 0.22813, "loss": 0.71094, "time": 0.49231} -{"mode": "train", "epoch": 7, "iter": 3750, "lr": 0.0002, "memory": 12588, "data_time": 0.04424, "loss_rpn_cls": 0.02421, "loss_rpn_bbox": 0.04552, "loss_cls": 0.20193, "acc": 93.08325, "loss_bbox": 0.22825, "loss_mask": 0.23291, "loss": 0.73282, "time": 0.49835} -{"mode": "train", "epoch": 7, "iter": 3800, "lr": 0.0002, "memory": 12588, "data_time": 0.04562, "loss_rpn_cls": 0.02776, 
"loss_rpn_bbox": 0.04882, "loss_cls": 0.2127, "acc": 92.64893, "loss_bbox": 0.23686, "loss_mask": 0.23416, "loss": 0.76029, "time": 0.51745} -{"mode": "train", "epoch": 7, "iter": 3850, "lr": 0.0002, "memory": 12588, "data_time": 0.04006, "loss_rpn_cls": 0.02119, "loss_rpn_bbox": 0.04283, "loss_cls": 0.20263, "acc": 92.94702, "loss_bbox": 0.22635, "loss_mask": 0.23605, "loss": 0.72905, "time": 0.59333} -{"mode": "train", "epoch": 7, "iter": 3900, "lr": 0.0002, "memory": 12588, "data_time": 0.04381, "loss_rpn_cls": 0.02275, "loss_rpn_bbox": 0.04225, "loss_cls": 0.20241, "acc": 92.86328, "loss_bbox": 0.23278, "loss_mask": 0.23109, "loss": 0.73129, "time": 0.49312} -{"mode": "train", "epoch": 7, "iter": 3950, "lr": 0.0002, "memory": 12588, "data_time": 0.04038, "loss_rpn_cls": 0.02422, "loss_rpn_bbox": 0.04454, "loss_cls": 0.20578, "acc": 92.95239, "loss_bbox": 0.22956, "loss_mask": 0.23523, "loss": 0.73933, "time": 0.49876} -{"mode": "train", "epoch": 7, "iter": 4000, "lr": 0.0002, "memory": 12588, "data_time": 0.04232, "loss_rpn_cls": 0.02361, "loss_rpn_bbox": 0.0435, "loss_cls": 0.18986, "acc": 93.44067, "loss_bbox": 0.21816, "loss_mask": 0.23223, "loss": 0.70736, "time": 0.49429} -{"mode": "train", "epoch": 7, "iter": 4050, "lr": 0.0002, "memory": 12588, "data_time": 0.04783, "loss_rpn_cls": 0.02505, "loss_rpn_bbox": 0.04684, "loss_cls": 0.20624, "acc": 92.78052, "loss_bbox": 0.23188, "loss_mask": 0.22588, "loss": 0.73589, "time": 0.49828} -{"mode": "train", "epoch": 7, "iter": 4100, "lr": 0.0002, "memory": 12588, "data_time": 0.05567, "loss_rpn_cls": 0.02393, "loss_rpn_bbox": 0.04432, "loss_cls": 0.20126, "acc": 93.11133, "loss_bbox": 0.22341, "loss_mask": 0.2307, "loss": 0.72362, "time": 0.49776} -{"mode": "train", "epoch": 7, "iter": 4150, "lr": 0.0002, "memory": 12588, "data_time": 0.04339, "loss_rpn_cls": 0.02543, "loss_rpn_bbox": 0.04604, "loss_cls": 0.20188, "acc": 92.94727, "loss_bbox": 0.22787, "loss_mask": 0.23735, "loss": 0.73858, "time": 0.50579} -{"mode": "train", "epoch": 7, "iter": 4200, "lr": 0.0002, "memory": 12588, "data_time": 0.04381, "loss_rpn_cls": 0.02417, "loss_rpn_bbox": 0.04617, "loss_cls": 0.19654, "acc": 93.12305, "loss_bbox": 0.22761, "loss_mask": 0.23729, "loss": 0.73177, "time": 0.49712} -{"mode": "train", "epoch": 7, "iter": 4250, "lr": 0.0002, "memory": 12588, "data_time": 0.04351, "loss_rpn_cls": 0.02308, "loss_rpn_bbox": 0.04289, "loss_cls": 0.20288, "acc": 92.96973, "loss_bbox": 0.22844, "loss_mask": 0.23231, "loss": 0.72961, "time": 0.55178} -{"mode": "train", "epoch": 7, "iter": 4300, "lr": 0.0002, "memory": 12588, "data_time": 0.05792, "loss_rpn_cls": 0.02498, "loss_rpn_bbox": 0.04558, "loss_cls": 0.20798, "acc": 92.95801, "loss_bbox": 0.23054, "loss_mask": 0.22753, "loss": 0.7366, "time": 0.52644} -{"mode": "train", "epoch": 7, "iter": 4350, "lr": 0.0002, "memory": 12588, "data_time": 0.04265, "loss_rpn_cls": 0.02282, "loss_rpn_bbox": 0.04377, "loss_cls": 0.19611, "acc": 93.19849, "loss_bbox": 0.22195, "loss_mask": 0.22608, "loss": 0.71072, "time": 0.51314} -{"mode": "train", "epoch": 7, "iter": 4400, "lr": 0.0002, "memory": 12588, "data_time": 0.04476, "loss_rpn_cls": 0.02723, "loss_rpn_bbox": 0.04916, "loss_cls": 0.20963, "acc": 92.71851, "loss_bbox": 0.23627, "loss_mask": 0.23428, "loss": 0.75657, "time": 0.5045} -{"mode": "train", "epoch": 7, "iter": 4450, "lr": 0.0002, "memory": 12588, "data_time": 0.04647, "loss_rpn_cls": 0.0235, "loss_rpn_bbox": 0.04592, "loss_cls": 0.19873, "acc": 93.01636, "loss_bbox": 0.23209, "loss_mask": 0.23374, "loss": 
0.73398, "time": 0.50082} -{"mode": "train", "epoch": 7, "iter": 4500, "lr": 0.0002, "memory": 12588, "data_time": 0.04595, "loss_rpn_cls": 0.02588, "loss_rpn_bbox": 0.04817, "loss_cls": 0.20936, "acc": 92.79077, "loss_bbox": 0.23641, "loss_mask": 0.23505, "loss": 0.75488, "time": 0.51049} -{"mode": "train", "epoch": 7, "iter": 4550, "lr": 0.0002, "memory": 12588, "data_time": 0.03792, "loss_rpn_cls": 0.0223, "loss_rpn_bbox": 0.04366, "loss_cls": 0.19556, "acc": 93.21606, "loss_bbox": 0.22082, "loss_mask": 0.2227, "loss": 0.70505, "time": 0.48707} -{"mode": "train", "epoch": 7, "iter": 4600, "lr": 0.0002, "memory": 12588, "data_time": 0.04664, "loss_rpn_cls": 0.02454, "loss_rpn_bbox": 0.04434, "loss_cls": 0.20854, "acc": 92.87622, "loss_bbox": 0.2282, "loss_mask": 0.22936, "loss": 0.73497, "time": 0.51635} -{"mode": "train", "epoch": 7, "iter": 4650, "lr": 0.0002, "memory": 12588, "data_time": 0.04833, "loss_rpn_cls": 0.02705, "loss_rpn_bbox": 0.04491, "loss_cls": 0.21428, "acc": 92.7959, "loss_bbox": 0.22993, "loss_mask": 0.24178, "loss": 0.75794, "time": 0.51339} -{"mode": "train", "epoch": 7, "iter": 4700, "lr": 0.0002, "memory": 12588, "data_time": 0.04183, "loss_rpn_cls": 0.02493, "loss_rpn_bbox": 0.04648, "loss_cls": 0.2095, "acc": 92.79199, "loss_bbox": 0.23436, "loss_mask": 0.23377, "loss": 0.74904, "time": 0.51065} -{"mode": "train", "epoch": 7, "iter": 4750, "lr": 0.0002, "memory": 12588, "data_time": 0.05127, "loss_rpn_cls": 0.02508, "loss_rpn_bbox": 0.04716, "loss_cls": 0.20603, "acc": 92.86987, "loss_bbox": 0.23203, "loss_mask": 0.22723, "loss": 0.73753, "time": 0.51749} -{"mode": "train", "epoch": 7, "iter": 4800, "lr": 0.0002, "memory": 12588, "data_time": 0.04461, "loss_rpn_cls": 0.02395, "loss_rpn_bbox": 0.04323, "loss_cls": 0.19695, "acc": 93.21265, "loss_bbox": 0.2237, "loss_mask": 0.23159, "loss": 0.71942, "time": 0.5384} -{"mode": "train", "epoch": 7, "iter": 4850, "lr": 0.0002, "memory": 12588, "data_time": 0.03859, "loss_rpn_cls": 0.0239, "loss_rpn_bbox": 0.04248, "loss_cls": 0.19654, "acc": 93.34277, "loss_bbox": 0.21838, "loss_mask": 0.22692, "loss": 0.70822, "time": 0.47993} -{"mode": "train", "epoch": 7, "iter": 4900, "lr": 0.0002, "memory": 12588, "data_time": 0.04384, "loss_rpn_cls": 0.02731, "loss_rpn_bbox": 0.04679, "loss_cls": 0.20648, "acc": 92.87671, "loss_bbox": 0.22706, "loss_mask": 0.2323, "loss": 0.73995, "time": 0.55938} -{"mode": "train", "epoch": 7, "iter": 4950, "lr": 0.0002, "memory": 12588, "data_time": 0.05182, "loss_rpn_cls": 0.02692, "loss_rpn_bbox": 0.04622, "loss_cls": 0.2103, "acc": 92.72241, "loss_bbox": 0.23643, "loss_mask": 0.22944, "loss": 0.74932, "time": 0.51149} -{"mode": "train", "epoch": 7, "iter": 5000, "lr": 0.0002, "memory": 12588, "data_time": 0.04406, "loss_rpn_cls": 0.02503, "loss_rpn_bbox": 0.04501, "loss_cls": 0.19554, "acc": 93.20483, "loss_bbox": 0.22625, "loss_mask": 0.23595, "loss": 0.72777, "time": 0.5506} -{"mode": "train", "epoch": 7, "iter": 5050, "lr": 0.0002, "memory": 12588, "data_time": 0.04143, "loss_rpn_cls": 0.02702, "loss_rpn_bbox": 0.04422, "loss_cls": 0.20006, "acc": 93.03125, "loss_bbox": 0.22637, "loss_mask": 0.23075, "loss": 0.72841, "time": 0.49816} -{"mode": "train", "epoch": 7, "iter": 5100, "lr": 0.0002, "memory": 12588, "data_time": 0.05055, "loss_rpn_cls": 0.02533, "loss_rpn_bbox": 0.04739, "loss_cls": 0.2115, "acc": 92.63647, "loss_bbox": 0.23388, "loss_mask": 0.2361, "loss": 0.75421, "time": 0.51106} -{"mode": "train", "epoch": 7, "iter": 5150, "lr": 0.0002, "memory": 12588, "data_time": 0.0486, 
"loss_rpn_cls": 0.02479, "loss_rpn_bbox": 0.04424, "loss_cls": 0.20844, "acc": 92.79248, "loss_bbox": 0.22776, "loss_mask": 0.22889, "loss": 0.73412, "time": 0.50106} -{"mode": "train", "epoch": 7, "iter": 5200, "lr": 0.0002, "memory": 12588, "data_time": 0.04214, "loss_rpn_cls": 0.02261, "loss_rpn_bbox": 0.04329, "loss_cls": 0.20303, "acc": 92.90894, "loss_bbox": 0.2276, "loss_mask": 0.22714, "loss": 0.72368, "time": 0.60616} -{"mode": "train", "epoch": 7, "iter": 5250, "lr": 0.0002, "memory": 12588, "data_time": 0.04519, "loss_rpn_cls": 0.02334, "loss_rpn_bbox": 0.04316, "loss_cls": 0.19232, "acc": 93.35181, "loss_bbox": 0.21891, "loss_mask": 0.22807, "loss": 0.7058, "time": 0.50566} -{"mode": "train", "epoch": 7, "iter": 5300, "lr": 0.0002, "memory": 12588, "data_time": 0.05301, "loss_rpn_cls": 0.0217, "loss_rpn_bbox": 0.04266, "loss_cls": 0.19989, "acc": 93.09497, "loss_bbox": 0.21748, "loss_mask": 0.23017, "loss": 0.7119, "time": 0.50325} -{"mode": "train", "epoch": 7, "iter": 5350, "lr": 0.0002, "memory": 12588, "data_time": 0.04401, "loss_rpn_cls": 0.02634, "loss_rpn_bbox": 0.04699, "loss_cls": 0.20598, "acc": 92.91772, "loss_bbox": 0.22704, "loss_mask": 0.23936, "loss": 0.74571, "time": 0.50336} -{"mode": "train", "epoch": 7, "iter": 5400, "lr": 0.0002, "memory": 12588, "data_time": 0.04816, "loss_rpn_cls": 0.02578, "loss_rpn_bbox": 0.0472, "loss_cls": 0.20654, "acc": 92.77954, "loss_bbox": 0.23512, "loss_mask": 0.23388, "loss": 0.74853, "time": 0.50964} -{"mode": "train", "epoch": 7, "iter": 5450, "lr": 0.0002, "memory": 12588, "data_time": 0.04222, "loss_rpn_cls": 0.02358, "loss_rpn_bbox": 0.04562, "loss_cls": 0.2017, "acc": 93.03394, "loss_bbox": 0.23112, "loss_mask": 0.23037, "loss": 0.73238, "time": 0.48121} -{"mode": "train", "epoch": 7, "iter": 5500, "lr": 0.0002, "memory": 12588, "data_time": 0.04914, "loss_rpn_cls": 0.02461, "loss_rpn_bbox": 0.04693, "loss_cls": 0.21217, "acc": 92.72314, "loss_bbox": 0.23817, "loss_mask": 0.23249, "loss": 0.75437, "time": 0.52304} -{"mode": "train", "epoch": 7, "iter": 5550, "lr": 0.0002, "memory": 12588, "data_time": 0.05119, "loss_rpn_cls": 0.02273, "loss_rpn_bbox": 0.04449, "loss_cls": 0.20268, "acc": 92.87061, "loss_bbox": 0.23387, "loss_mask": 0.23252, "loss": 0.73629, "time": 0.50369} -{"mode": "train", "epoch": 7, "iter": 5600, "lr": 0.0002, "memory": 12588, "data_time": 0.04831, "loss_rpn_cls": 0.02394, "loss_rpn_bbox": 0.04412, "loss_cls": 0.20354, "acc": 92.86719, "loss_bbox": 0.2317, "loss_mask": 0.2322, "loss": 0.7355, "time": 0.50976} -{"mode": "train", "epoch": 7, "iter": 5650, "lr": 0.0002, "memory": 12588, "data_time": 0.04382, "loss_rpn_cls": 0.02455, "loss_rpn_bbox": 0.04541, "loss_cls": 0.20152, "acc": 93.01685, "loss_bbox": 0.22838, "loss_mask": 0.2317, "loss": 0.73156, "time": 0.50209} -{"mode": "train", "epoch": 7, "iter": 5700, "lr": 0.0002, "memory": 12588, "data_time": 0.05024, "loss_rpn_cls": 0.02308, "loss_rpn_bbox": 0.0453, "loss_cls": 0.20542, "acc": 92.97437, "loss_bbox": 0.23209, "loss_mask": 0.23084, "loss": 0.73673, "time": 0.51801} -{"mode": "train", "epoch": 7, "iter": 5750, "lr": 0.0002, "memory": 12588, "data_time": 0.04767, "loss_rpn_cls": 0.02393, "loss_rpn_bbox": 0.04321, "loss_cls": 0.19933, "acc": 92.95728, "loss_bbox": 0.22653, "loss_mask": 0.22746, "loss": 0.72045, "time": 0.56767} -{"mode": "train", "epoch": 7, "iter": 5800, "lr": 0.0002, "memory": 12588, "data_time": 0.04061, "loss_rpn_cls": 0.02265, "loss_rpn_bbox": 0.0425, "loss_cls": 0.20121, "acc": 92.97339, "loss_bbox": 0.22857, 
"loss_mask": 0.22848, "loss": 0.72341, "time": 0.5069} -{"mode": "train", "epoch": 7, "iter": 5850, "lr": 0.0002, "memory": 12588, "data_time": 0.0405, "loss_rpn_cls": 0.02478, "loss_rpn_bbox": 0.04702, "loss_cls": 0.19295, "acc": 93.271, "loss_bbox": 0.22819, "loss_mask": 0.23126, "loss": 0.7242, "time": 0.49978} -{"mode": "train", "epoch": 7, "iter": 5900, "lr": 0.0002, "memory": 12588, "data_time": 0.04204, "loss_rpn_cls": 0.02364, "loss_rpn_bbox": 0.04526, "loss_cls": 0.20245, "acc": 92.87158, "loss_bbox": 0.23286, "loss_mask": 0.22887, "loss": 0.73309, "time": 0.50528} -{"mode": "train", "epoch": 7, "iter": 5950, "lr": 0.0002, "memory": 12588, "data_time": 0.03959, "loss_rpn_cls": 0.02605, "loss_rpn_bbox": 0.04523, "loss_cls": 0.21106, "acc": 92.92017, "loss_bbox": 0.23013, "loss_mask": 0.23586, "loss": 0.74833, "time": 0.49061} -{"mode": "train", "epoch": 7, "iter": 6000, "lr": 0.0002, "memory": 12588, "data_time": 0.04564, "loss_rpn_cls": 0.024, "loss_rpn_bbox": 0.04533, "loss_cls": 0.19976, "acc": 93.10132, "loss_bbox": 0.22238, "loss_mask": 0.22753, "loss": 0.719, "time": 0.4931} -{"mode": "train", "epoch": 7, "iter": 6050, "lr": 0.0002, "memory": 12588, "data_time": 0.04895, "loss_rpn_cls": 0.0257, "loss_rpn_bbox": 0.04726, "loss_cls": 0.2095, "acc": 92.80811, "loss_bbox": 0.23349, "loss_mask": 0.23674, "loss": 0.75268, "time": 0.51638} -{"mode": "train", "epoch": 7, "iter": 6100, "lr": 0.0002, "memory": 12588, "data_time": 0.04606, "loss_rpn_cls": 0.02493, "loss_rpn_bbox": 0.04533, "loss_cls": 0.20589, "acc": 92.99805, "loss_bbox": 0.2267, "loss_mask": 0.23539, "loss": 0.73824, "time": 0.51231} -{"mode": "train", "epoch": 7, "iter": 6150, "lr": 0.0002, "memory": 12588, "data_time": 0.04783, "loss_rpn_cls": 0.02785, "loss_rpn_bbox": 0.04887, "loss_cls": 0.21304, "acc": 92.64893, "loss_bbox": 0.23898, "loss_mask": 0.23532, "loss": 0.76405, "time": 0.51847} -{"mode": "train", "epoch": 7, "iter": 6200, "lr": 0.0002, "memory": 12588, "data_time": 0.0471, "loss_rpn_cls": 0.02341, "loss_rpn_bbox": 0.04377, "loss_cls": 0.20519, "acc": 92.99243, "loss_bbox": 0.22465, "loss_mask": 0.22758, "loss": 0.7246, "time": 0.49022} -{"mode": "train", "epoch": 7, "iter": 6250, "lr": 0.0002, "memory": 12588, "data_time": 0.04794, "loss_rpn_cls": 0.02735, "loss_rpn_bbox": 0.0478, "loss_cls": 0.21125, "acc": 92.80371, "loss_bbox": 0.23238, "loss_mask": 0.23857, "loss": 0.75734, "time": 0.50649} -{"mode": "train", "epoch": 7, "iter": 6300, "lr": 0.0002, "memory": 12588, "data_time": 0.03982, "loss_rpn_cls": 0.02377, "loss_rpn_bbox": 0.0442, "loss_cls": 0.19795, "acc": 93.16187, "loss_bbox": 0.2189, "loss_mask": 0.22649, "loss": 0.71131, "time": 0.50696} -{"mode": "train", "epoch": 7, "iter": 6350, "lr": 0.0002, "memory": 12588, "data_time": 0.04482, "loss_rpn_cls": 0.02705, "loss_rpn_bbox": 0.04702, "loss_cls": 0.21664, "acc": 92.65918, "loss_bbox": 0.2356, "loss_mask": 0.2377, "loss": 0.76401, "time": 0.51868} -{"mode": "train", "epoch": 7, "iter": 6400, "lr": 0.0002, "memory": 12588, "data_time": 0.04177, "loss_rpn_cls": 0.02347, "loss_rpn_bbox": 0.0445, "loss_cls": 0.21653, "acc": 92.76709, "loss_bbox": 0.23577, "loss_mask": 0.23328, "loss": 0.75354, "time": 0.51739} -{"mode": "train", "epoch": 7, "iter": 6450, "lr": 0.0002, "memory": 12588, "data_time": 0.04418, "loss_rpn_cls": 0.02555, "loss_rpn_bbox": 0.04611, "loss_cls": 0.21319, "acc": 92.77686, "loss_bbox": 0.23122, "loss_mask": 0.2309, "loss": 0.74697, "time": 0.5045} -{"mode": "train", "epoch": 7, "iter": 6500, "lr": 0.0002, "memory": 12588, 
"data_time": 0.04801, "loss_rpn_cls": 0.0227, "loss_rpn_bbox": 0.04572, "loss_cls": 0.20635, "acc": 92.80542, "loss_bbox": 0.23476, "loss_mask": 0.23311, "loss": 0.74263, "time": 0.50791} -{"mode": "train", "epoch": 7, "iter": 6550, "lr": 0.0002, "memory": 12588, "data_time": 0.05199, "loss_rpn_cls": 0.02706, "loss_rpn_bbox": 0.04741, "loss_cls": 0.20909, "acc": 92.66479, "loss_bbox": 0.23364, "loss_mask": 0.23006, "loss": 0.74726, "time": 0.49932} -{"mode": "train", "epoch": 7, "iter": 6600, "lr": 0.0002, "memory": 12588, "data_time": 0.04609, "loss_rpn_cls": 0.02725, "loss_rpn_bbox": 0.04898, "loss_cls": 0.22361, "acc": 92.38647, "loss_bbox": 0.24238, "loss_mask": 0.2417, "loss": 0.78391, "time": 0.50189} -{"mode": "train", "epoch": 7, "iter": 6650, "lr": 0.0002, "memory": 12588, "data_time": 0.04936, "loss_rpn_cls": 0.02768, "loss_rpn_bbox": 0.04611, "loss_cls": 0.21024, "acc": 92.85376, "loss_bbox": 0.22983, "loss_mask": 0.23541, "loss": 0.74927, "time": 0.53358} -{"mode": "train", "epoch": 7, "iter": 6700, "lr": 0.0002, "memory": 12588, "data_time": 0.04269, "loss_rpn_cls": 0.02678, "loss_rpn_bbox": 0.04775, "loss_cls": 0.20725, "acc": 92.76587, "loss_bbox": 0.23439, "loss_mask": 0.2328, "loss": 0.74898, "time": 0.50788} -{"mode": "train", "epoch": 7, "iter": 6750, "lr": 0.0002, "memory": 12588, "data_time": 0.05347, "loss_rpn_cls": 0.02522, "loss_rpn_bbox": 0.04769, "loss_cls": 0.21556, "acc": 92.81689, "loss_bbox": 0.22982, "loss_mask": 0.23626, "loss": 0.75455, "time": 0.50588} -{"mode": "train", "epoch": 7, "iter": 6800, "lr": 0.0002, "memory": 12588, "data_time": 0.06042, "loss_rpn_cls": 0.027, "loss_rpn_bbox": 0.04936, "loss_cls": 0.21805, "acc": 92.36719, "loss_bbox": 0.24979, "loss_mask": 0.24057, "loss": 0.78478, "time": 0.51959} -{"mode": "train", "epoch": 7, "iter": 6850, "lr": 0.0002, "memory": 12588, "data_time": 0.03974, "loss_rpn_cls": 0.0243, "loss_rpn_bbox": 0.04338, "loss_cls": 0.20399, "acc": 93.01074, "loss_bbox": 0.22051, "loss_mask": 0.22627, "loss": 0.71846, "time": 0.49702} -{"mode": "train", "epoch": 7, "iter": 6900, "lr": 0.0002, "memory": 12588, "data_time": 0.04261, "loss_rpn_cls": 0.02253, "loss_rpn_bbox": 0.04241, "loss_cls": 0.19265, "acc": 93.30688, "loss_bbox": 0.22301, "loss_mask": 0.2304, "loss": 0.711, "time": 0.49172} -{"mode": "train", "epoch": 7, "iter": 6950, "lr": 0.0002, "memory": 12588, "data_time": 0.04143, "loss_rpn_cls": 0.02303, "loss_rpn_bbox": 0.04087, "loss_cls": 0.19258, "acc": 93.51392, "loss_bbox": 0.21613, "loss_mask": 0.23247, "loss": 0.70509, "time": 0.55915} -{"mode": "train", "epoch": 7, "iter": 7000, "lr": 0.0002, "memory": 12588, "data_time": 0.04324, "loss_rpn_cls": 0.02512, "loss_rpn_bbox": 0.04414, "loss_cls": 0.2031, "acc": 93.06592, "loss_bbox": 0.22439, "loss_mask": 0.23562, "loss": 0.73237, "time": 0.50063} -{"mode": "train", "epoch": 7, "iter": 7050, "lr": 0.0002, "memory": 12588, "data_time": 0.05182, "loss_rpn_cls": 0.02548, "loss_rpn_bbox": 0.047, "loss_cls": 0.20962, "acc": 92.85889, "loss_bbox": 0.22952, "loss_mask": 0.23109, "loss": 0.7427, "time": 0.55596} -{"mode": "train", "epoch": 7, "iter": 7100, "lr": 0.0002, "memory": 12588, "data_time": 0.04949, "loss_rpn_cls": 0.02693, "loss_rpn_bbox": 0.04738, "loss_cls": 0.20773, "acc": 93.04053, "loss_bbox": 0.22316, "loss_mask": 0.23454, "loss": 0.73973, "time": 0.50671} -{"mode": "train", "epoch": 7, "iter": 7150, "lr": 0.0002, "memory": 12588, "data_time": 0.04259, "loss_rpn_cls": 0.02658, "loss_rpn_bbox": 0.04552, "loss_cls": 0.21202, "acc": 92.73096, 
"loss_bbox": 0.23117, "loss_mask": 0.23249, "loss": 0.74778, "time": 0.49275} -{"mode": "train", "epoch": 7, "iter": 7200, "lr": 0.0002, "memory": 12588, "data_time": 0.05384, "loss_rpn_cls": 0.02618, "loss_rpn_bbox": 0.04991, "loss_cls": 0.21788, "acc": 92.60742, "loss_bbox": 0.24612, "loss_mask": 0.2398, "loss": 0.77989, "time": 0.51078} -{"mode": "train", "epoch": 7, "iter": 7250, "lr": 0.0002, "memory": 12588, "data_time": 0.05065, "loss_rpn_cls": 0.02691, "loss_rpn_bbox": 0.04574, "loss_cls": 0.21417, "acc": 92.73926, "loss_bbox": 0.23419, "loss_mask": 0.23286, "loss": 0.75387, "time": 0.50469} -{"mode": "train", "epoch": 7, "iter": 7300, "lr": 0.0002, "memory": 12588, "data_time": 0.03949, "loss_rpn_cls": 0.02437, "loss_rpn_bbox": 0.04295, "loss_cls": 0.2086, "acc": 92.85474, "loss_bbox": 0.23194, "loss_mask": 0.23097, "loss": 0.73883, "time": 0.5096} -{"mode": "val", "epoch": 7, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3907, "bbox_mAP_50": 0.6067, "bbox_mAP_75": 0.4244, "bbox_mAP_s": 0.2208, "bbox_mAP_m": 0.4261, "bbox_mAP_l": 0.5173, "bbox_mAP_copypaste": "0.3907 0.6067 0.4244 0.2208 0.4261 0.5173", "segm_mAP": 0.3657, "segm_mAP_50": 0.5835, "segm_mAP_75": 0.3932, "segm_mAP_s": 0.1686, "segm_mAP_m": 0.3993, "segm_mAP_l": 0.5382, "segm_mAP_copypaste": "0.3657 0.5835 0.3932 0.1686 0.3993 0.5382"} -{"mode": "train", "epoch": 8, "iter": 50, "lr": 0.0002, "memory": 12588, "data_time": 0.11928, "loss_rpn_cls": 0.02029, "loss_rpn_bbox": 0.04079, "loss_cls": 0.17749, "acc": 93.7334, "loss_bbox": 0.20891, "loss_mask": 0.22285, "loss": 0.67033, "time": 0.60585} -{"mode": "train", "epoch": 8, "iter": 100, "lr": 0.0002, "memory": 12588, "data_time": 0.04351, "loss_rpn_cls": 0.0221, "loss_rpn_bbox": 0.04344, "loss_cls": 0.19191, "acc": 93.27466, "loss_bbox": 0.22323, "loss_mask": 0.2215, "loss": 0.70218, "time": 0.58651} -{"mode": "train", "epoch": 8, "iter": 150, "lr": 0.0002, "memory": 12588, "data_time": 0.04689, "loss_rpn_cls": 0.02034, "loss_rpn_bbox": 0.03976, "loss_cls": 0.17354, "acc": 93.84912, "loss_bbox": 0.20242, "loss_mask": 0.21772, "loss": 0.65378, "time": 0.49773} -{"mode": "train", "epoch": 8, "iter": 200, "lr": 0.0002, "memory": 12588, "data_time": 0.05694, "loss_rpn_cls": 0.0222, "loss_rpn_bbox": 0.04537, "loss_cls": 0.1772, "acc": 93.66772, "loss_bbox": 0.21612, "loss_mask": 0.22696, "loss": 0.68786, "time": 0.537} -{"mode": "train", "epoch": 8, "iter": 250, "lr": 0.0002, "memory": 12588, "data_time": 0.05073, "loss_rpn_cls": 0.02291, "loss_rpn_bbox": 0.04515, "loss_cls": 0.1912, "acc": 93.27295, "loss_bbox": 0.2237, "loss_mask": 0.22735, "loss": 0.71031, "time": 0.51734} -{"mode": "train", "epoch": 8, "iter": 300, "lr": 0.0002, "memory": 12588, "data_time": 0.05504, "loss_rpn_cls": 0.02246, "loss_rpn_bbox": 0.04757, "loss_cls": 0.19176, "acc": 93.10596, "loss_bbox": 0.22587, "loss_mask": 0.22587, "loss": 0.71354, "time": 0.51325} -{"mode": "train", "epoch": 8, "iter": 350, "lr": 0.0002, "memory": 12588, "data_time": 0.04284, "loss_rpn_cls": 0.02107, "loss_rpn_bbox": 0.04329, "loss_cls": 0.1827, "acc": 93.60254, "loss_bbox": 0.21288, "loss_mask": 0.22879, "loss": 0.68873, "time": 0.51314} -{"mode": "train", "epoch": 8, "iter": 400, "lr": 0.0002, "memory": 12588, "data_time": 0.04808, "loss_rpn_cls": 0.02274, "loss_rpn_bbox": 0.04443, "loss_cls": 0.185, "acc": 93.44507, "loss_bbox": 0.21382, "loss_mask": 0.22277, "loss": 0.68877, "time": 0.5951} -{"mode": "train", "epoch": 8, "iter": 450, "lr": 0.0002, "memory": 12588, "data_time": 0.04641, "loss_rpn_cls": 0.02051, 
"loss_rpn_bbox": 0.04014, "loss_cls": 0.18713, "acc": 93.46631, "loss_bbox": 0.21424, "loss_mask": 0.22607, "loss": 0.6881, "time": 0.49802} -{"mode": "train", "epoch": 8, "iter": 500, "lr": 0.0002, "memory": 12588, "data_time": 0.04039, "loss_rpn_cls": 0.02078, "loss_rpn_bbox": 0.04299, "loss_cls": 0.197, "acc": 92.94849, "loss_bbox": 0.23282, "loss_mask": 0.22483, "loss": 0.71841, "time": 0.50142} -{"mode": "train", "epoch": 8, "iter": 550, "lr": 0.0002, "memory": 12588, "data_time": 0.04988, "loss_rpn_cls": 0.02423, "loss_rpn_bbox": 0.04265, "loss_cls": 0.18357, "acc": 93.53589, "loss_bbox": 0.21287, "loss_mask": 0.22094, "loss": 0.68425, "time": 0.56037} -{"mode": "train", "epoch": 8, "iter": 600, "lr": 0.0002, "memory": 12588, "data_time": 0.04432, "loss_rpn_cls": 0.02162, "loss_rpn_bbox": 0.04052, "loss_cls": 0.18657, "acc": 93.40527, "loss_bbox": 0.218, "loss_mask": 0.22836, "loss": 0.69507, "time": 0.50564} -{"mode": "train", "epoch": 8, "iter": 650, "lr": 0.0002, "memory": 12588, "data_time": 0.04831, "loss_rpn_cls": 0.02149, "loss_rpn_bbox": 0.04515, "loss_cls": 0.19418, "acc": 93.24121, "loss_bbox": 0.22532, "loss_mask": 0.22853, "loss": 0.71467, "time": 0.49728} -{"mode": "train", "epoch": 8, "iter": 700, "lr": 0.0002, "memory": 12588, "data_time": 0.04217, "loss_rpn_cls": 0.02021, "loss_rpn_bbox": 0.0413, "loss_cls": 0.18984, "acc": 93.34839, "loss_bbox": 0.22054, "loss_mask": 0.22268, "loss": 0.69458, "time": 0.50614} -{"mode": "train", "epoch": 8, "iter": 750, "lr": 0.0002, "memory": 12588, "data_time": 0.0461, "loss_rpn_cls": 0.02243, "loss_rpn_bbox": 0.0444, "loss_cls": 0.19008, "acc": 93.24536, "loss_bbox": 0.22742, "loss_mask": 0.2228, "loss": 0.70713, "time": 0.50264} -{"mode": "train", "epoch": 8, "iter": 800, "lr": 0.0002, "memory": 12588, "data_time": 0.0531, "loss_rpn_cls": 0.01904, "loss_rpn_bbox": 0.04251, "loss_cls": 0.18538, "acc": 93.4519, "loss_bbox": 0.21473, "loss_mask": 0.22216, "loss": 0.68382, "time": 0.51326} -{"mode": "train", "epoch": 8, "iter": 850, "lr": 0.0002, "memory": 12588, "data_time": 0.04283, "loss_rpn_cls": 0.02057, "loss_rpn_bbox": 0.04271, "loss_cls": 0.19481, "acc": 93.18726, "loss_bbox": 0.21799, "loss_mask": 0.22433, "loss": 0.70042, "time": 0.52193} -{"mode": "train", "epoch": 8, "iter": 900, "lr": 0.0002, "memory": 12588, "data_time": 0.04733, "loss_rpn_cls": 0.02347, "loss_rpn_bbox": 0.04551, "loss_cls": 0.20655, "acc": 92.71851, "loss_bbox": 0.22923, "loss_mask": 0.23291, "loss": 0.73766, "time": 0.50894} -{"mode": "train", "epoch": 8, "iter": 950, "lr": 0.0002, "memory": 12588, "data_time": 0.05088, "loss_rpn_cls": 0.02436, "loss_rpn_bbox": 0.04486, "loss_cls": 0.18998, "acc": 93.24023, "loss_bbox": 0.22728, "loss_mask": 0.22919, "loss": 0.71566, "time": 0.5162} -{"mode": "train", "epoch": 8, "iter": 1000, "lr": 0.0002, "memory": 12588, "data_time": 0.04565, "loss_rpn_cls": 0.02309, "loss_rpn_bbox": 0.04271, "loss_cls": 0.19173, "acc": 93.31177, "loss_bbox": 0.22112, "loss_mask": 0.22708, "loss": 0.70573, "time": 0.51191} -{"mode": "train", "epoch": 8, "iter": 1050, "lr": 0.0002, "memory": 12588, "data_time": 0.03882, "loss_rpn_cls": 0.02097, "loss_rpn_bbox": 0.04335, "loss_cls": 0.18645, "acc": 93.50903, "loss_bbox": 0.21803, "loss_mask": 0.22811, "loss": 0.69691, "time": 0.49428} -{"mode": "train", "epoch": 8, "iter": 1100, "lr": 0.0002, "memory": 12588, "data_time": 0.04438, "loss_rpn_cls": 0.02192, "loss_rpn_bbox": 0.04221, "loss_cls": 0.19089, "acc": 93.25195, "loss_bbox": 0.21967, "loss_mask": 0.22968, "loss": 0.70437, 
"time": 0.5095} -{"mode": "train", "epoch": 8, "iter": 1150, "lr": 0.0002, "memory": 12588, "data_time": 0.04851, "loss_rpn_cls": 0.02331, "loss_rpn_bbox": 0.0453, "loss_cls": 0.18655, "acc": 93.3938, "loss_bbox": 0.21554, "loss_mask": 0.23033, "loss": 0.70103, "time": 0.50935} -{"mode": "train", "epoch": 8, "iter": 1200, "lr": 0.0002, "memory": 12588, "data_time": 0.04932, "loss_rpn_cls": 0.02218, "loss_rpn_bbox": 0.04442, "loss_cls": 0.18732, "acc": 93.44971, "loss_bbox": 0.22248, "loss_mask": 0.227, "loss": 0.70341, "time": 0.49502} -{"mode": "train", "epoch": 8, "iter": 1250, "lr": 0.0002, "memory": 12588, "data_time": 0.04434, "loss_rpn_cls": 0.02142, "loss_rpn_bbox": 0.04389, "loss_cls": 0.19385, "acc": 93.26416, "loss_bbox": 0.22419, "loss_mask": 0.22476, "loss": 0.70811, "time": 0.50297} -{"mode": "train", "epoch": 8, "iter": 1300, "lr": 0.0002, "memory": 12588, "data_time": 0.05449, "loss_rpn_cls": 0.02735, "loss_rpn_bbox": 0.04773, "loss_cls": 0.20224, "acc": 92.8623, "loss_bbox": 0.23369, "loss_mask": 0.23487, "loss": 0.74587, "time": 0.51162} -{"mode": "train", "epoch": 8, "iter": 1350, "lr": 0.0002, "memory": 12588, "data_time": 0.04327, "loss_rpn_cls": 0.02237, "loss_rpn_bbox": 0.04273, "loss_cls": 0.18997, "acc": 93.30176, "loss_bbox": 0.21655, "loss_mask": 0.22476, "loss": 0.69637, "time": 0.54576} -{"mode": "train", "epoch": 8, "iter": 1400, "lr": 0.0002, "memory": 12588, "data_time": 0.04737, "loss_rpn_cls": 0.02204, "loss_rpn_bbox": 0.04529, "loss_cls": 0.20352, "acc": 92.90454, "loss_bbox": 0.23014, "loss_mask": 0.22833, "loss": 0.72931, "time": 0.55309} -{"mode": "train", "epoch": 8, "iter": 1450, "lr": 0.0002, "memory": 12588, "data_time": 0.04867, "loss_rpn_cls": 0.02211, "loss_rpn_bbox": 0.04697, "loss_cls": 0.19357, "acc": 93.23706, "loss_bbox": 0.22741, "loss_mask": 0.22723, "loss": 0.7173, "time": 0.50026} -{"mode": "train", "epoch": 8, "iter": 1500, "lr": 0.0002, "memory": 12588, "data_time": 0.03879, "loss_rpn_cls": 0.02372, "loss_rpn_bbox": 0.0468, "loss_cls": 0.20029, "acc": 92.99585, "loss_bbox": 0.23272, "loss_mask": 0.2306, "loss": 0.73413, "time": 0.50335} -{"mode": "train", "epoch": 8, "iter": 1550, "lr": 0.0002, "memory": 12588, "data_time": 0.04948, "loss_rpn_cls": 0.02206, "loss_rpn_bbox": 0.04402, "loss_cls": 0.18954, "acc": 93.2771, "loss_bbox": 0.22031, "loss_mask": 0.22859, "loss": 0.70453, "time": 0.50295} -{"mode": "train", "epoch": 8, "iter": 1600, "lr": 0.0002, "memory": 12588, "data_time": 0.05237, "loss_rpn_cls": 0.02526, "loss_rpn_bbox": 0.04741, "loss_cls": 0.19399, "acc": 93.21973, "loss_bbox": 0.22533, "loss_mask": 0.23024, "loss": 0.72223, "time": 0.51443} -{"mode": "train", "epoch": 8, "iter": 1650, "lr": 0.0002, "memory": 12588, "data_time": 0.0453, "loss_rpn_cls": 0.02174, "loss_rpn_bbox": 0.04421, "loss_cls": 0.19799, "acc": 93.09863, "loss_bbox": 0.22676, "loss_mask": 0.2285, "loss": 0.71918, "time": 0.52605} -{"mode": "train", "epoch": 8, "iter": 1700, "lr": 0.0002, "memory": 12588, "data_time": 0.04249, "loss_rpn_cls": 0.02243, "loss_rpn_bbox": 0.04372, "loss_cls": 0.2045, "acc": 92.86841, "loss_bbox": 0.22916, "loss_mask": 0.23167, "loss": 0.73147, "time": 0.49723} -{"mode": "train", "epoch": 8, "iter": 1750, "lr": 0.0002, "memory": 12588, "data_time": 0.04529, "loss_rpn_cls": 0.02113, "loss_rpn_bbox": 0.04135, "loss_cls": 0.18633, "acc": 93.42676, "loss_bbox": 0.21517, "loss_mask": 0.22219, "loss": 0.68617, "time": 0.50723} -{"mode": "train", "epoch": 8, "iter": 1800, "lr": 0.0002, "memory": 12588, "data_time": 0.04808, 
"loss_rpn_cls": 0.02293, "loss_rpn_bbox": 0.04323, "loss_cls": 0.19547, "acc": 93.14209, "loss_bbox": 0.22142, "loss_mask": 0.22885, "loss": 0.7119, "time": 0.50502} -{"mode": "train", "epoch": 8, "iter": 1850, "lr": 0.0002, "memory": 12588, "data_time": 0.04873, "loss_rpn_cls": 0.0229, "loss_rpn_bbox": 0.04551, "loss_cls": 0.20797, "acc": 92.80884, "loss_bbox": 0.23301, "loss_mask": 0.23212, "loss": 0.74151, "time": 0.56891} -{"mode": "train", "epoch": 8, "iter": 1900, "lr": 0.0002, "memory": 12588, "data_time": 0.04631, "loss_rpn_cls": 0.02365, "loss_rpn_bbox": 0.04298, "loss_cls": 0.19718, "acc": 92.99121, "loss_bbox": 0.22866, "loss_mask": 0.22822, "loss": 0.72069, "time": 0.49988} -{"mode": "train", "epoch": 8, "iter": 1950, "lr": 0.0002, "memory": 12588, "data_time": 0.04541, "loss_rpn_cls": 0.02054, "loss_rpn_bbox": 0.04226, "loss_cls": 0.18486, "acc": 93.56616, "loss_bbox": 0.21751, "loss_mask": 0.22859, "loss": 0.69375, "time": 0.5372} -{"mode": "train", "epoch": 8, "iter": 2000, "lr": 0.0002, "memory": 12588, "data_time": 0.04535, "loss_rpn_cls": 0.02322, "loss_rpn_bbox": 0.04588, "loss_cls": 0.2086, "acc": 92.76416, "loss_bbox": 0.23344, "loss_mask": 0.2322, "loss": 0.74333, "time": 0.51442} -{"mode": "train", "epoch": 8, "iter": 2050, "lr": 0.0002, "memory": 12588, "data_time": 0.05075, "loss_rpn_cls": 0.02375, "loss_rpn_bbox": 0.04389, "loss_cls": 0.2022, "acc": 93.11108, "loss_bbox": 0.22231, "loss_mask": 0.22918, "loss": 0.72133, "time": 0.49862} -{"mode": "train", "epoch": 8, "iter": 2100, "lr": 0.0002, "memory": 12588, "data_time": 0.04774, "loss_rpn_cls": 0.0236, "loss_rpn_bbox": 0.04483, "loss_cls": 0.19528, "acc": 93.20459, "loss_bbox": 0.22581, "loss_mask": 0.22799, "loss": 0.71752, "time": 0.5077} -{"mode": "train", "epoch": 8, "iter": 2150, "lr": 0.0002, "memory": 12588, "data_time": 0.05186, "loss_rpn_cls": 0.02508, "loss_rpn_bbox": 0.04521, "loss_cls": 0.1946, "acc": 93.22217, "loss_bbox": 0.22417, "loss_mask": 0.2248, "loss": 0.71386, "time": 0.51811} -{"mode": "train", "epoch": 8, "iter": 2200, "lr": 0.0002, "memory": 12588, "data_time": 0.04873, "loss_rpn_cls": 0.02127, "loss_rpn_bbox": 0.04354, "loss_cls": 0.19277, "acc": 93.28662, "loss_bbox": 0.22023, "loss_mask": 0.2282, "loss": 0.706, "time": 0.55741} -{"mode": "train", "epoch": 8, "iter": 2250, "lr": 0.0002, "memory": 12588, "data_time": 0.03499, "loss_rpn_cls": 0.02379, "loss_rpn_bbox": 0.04628, "loss_cls": 0.1985, "acc": 93.15039, "loss_bbox": 0.22737, "loss_mask": 0.22815, "loss": 0.72408, "time": 0.50108} -{"mode": "train", "epoch": 8, "iter": 2300, "lr": 0.0002, "memory": 12588, "data_time": 0.04748, "loss_rpn_cls": 0.0223, "loss_rpn_bbox": 0.04549, "loss_cls": 0.19796, "acc": 93.04614, "loss_bbox": 0.22912, "loss_mask": 0.22871, "loss": 0.72358, "time": 0.5933} -{"mode": "train", "epoch": 8, "iter": 2350, "lr": 0.0002, "memory": 12588, "data_time": 0.04598, "loss_rpn_cls": 0.02097, "loss_rpn_bbox": 0.04409, "loss_cls": 0.1985, "acc": 93.1084, "loss_bbox": 0.22554, "loss_mask": 0.22895, "loss": 0.71805, "time": 0.49071} -{"mode": "train", "epoch": 8, "iter": 2400, "lr": 0.0002, "memory": 12588, "data_time": 0.04888, "loss_rpn_cls": 0.02557, "loss_rpn_bbox": 0.04659, "loss_cls": 0.20121, "acc": 92.88745, "loss_bbox": 0.22935, "loss_mask": 0.23133, "loss": 0.73405, "time": 0.52489} -{"mode": "train", "epoch": 8, "iter": 2450, "lr": 0.0002, "memory": 12588, "data_time": 0.04306, "loss_rpn_cls": 0.02162, "loss_rpn_bbox": 0.04342, "loss_cls": 0.19952, "acc": 93.14282, "loss_bbox": 0.21971, "loss_mask": 
0.22704, "loss": 0.71131, "time": 0.49827} -{"mode": "train", "epoch": 8, "iter": 2500, "lr": 0.0002, "memory": 12588, "data_time": 0.05427, "loss_rpn_cls": 0.02311, "loss_rpn_bbox": 0.04571, "loss_cls": 0.19338, "acc": 93.23877, "loss_bbox": 0.22111, "loss_mask": 0.22955, "loss": 0.71286, "time": 0.50487} -{"mode": "train", "epoch": 8, "iter": 2550, "lr": 0.0002, "memory": 12588, "data_time": 0.04457, "loss_rpn_cls": 0.02326, "loss_rpn_bbox": 0.04324, "loss_cls": 0.19383, "acc": 93.25342, "loss_bbox": 0.22222, "loss_mask": 0.22377, "loss": 0.70633, "time": 0.51715} -{"mode": "train", "epoch": 8, "iter": 2600, "lr": 0.0002, "memory": 12588, "data_time": 0.0504, "loss_rpn_cls": 0.02177, "loss_rpn_bbox": 0.04471, "loss_cls": 0.20705, "acc": 92.82495, "loss_bbox": 0.22878, "loss_mask": 0.22774, "loss": 0.73006, "time": 0.51062} -{"mode": "train", "epoch": 8, "iter": 2650, "lr": 0.0002, "memory": 12588, "data_time": 0.04629, "loss_rpn_cls": 0.02412, "loss_rpn_bbox": 0.04564, "loss_cls": 0.19745, "acc": 93.08936, "loss_bbox": 0.22341, "loss_mask": 0.23251, "loss": 0.72312, "time": 0.52525} -{"mode": "train", "epoch": 8, "iter": 2700, "lr": 0.0002, "memory": 12588, "data_time": 0.05147, "loss_rpn_cls": 0.02366, "loss_rpn_bbox": 0.04534, "loss_cls": 0.19258, "acc": 93.21436, "loss_bbox": 0.22956, "loss_mask": 0.23125, "loss": 0.72239, "time": 0.51154} -{"mode": "train", "epoch": 8, "iter": 2750, "lr": 0.0002, "memory": 12588, "data_time": 0.03696, "loss_rpn_cls": 0.02231, "loss_rpn_bbox": 0.04628, "loss_cls": 0.19412, "acc": 93.10864, "loss_bbox": 0.22733, "loss_mask": 0.2335, "loss": 0.72355, "time": 0.54665} -{"mode": "train", "epoch": 8, "iter": 2800, "lr": 0.0002, "memory": 12588, "data_time": 0.04597, "loss_rpn_cls": 0.0233, "loss_rpn_bbox": 0.04491, "loss_cls": 0.20574, "acc": 92.69507, "loss_bbox": 0.23433, "loss_mask": 0.23165, "loss": 0.73993, "time": 0.4959} -{"mode": "train", "epoch": 8, "iter": 2850, "lr": 0.0002, "memory": 12588, "data_time": 0.05174, "loss_rpn_cls": 0.02675, "loss_rpn_bbox": 0.04446, "loss_cls": 0.19278, "acc": 93.32031, "loss_bbox": 0.2214, "loss_mask": 0.22816, "loss": 0.71355, "time": 0.4972} -{"mode": "train", "epoch": 8, "iter": 2900, "lr": 0.0002, "memory": 12588, "data_time": 0.04551, "loss_rpn_cls": 0.02202, "loss_rpn_bbox": 0.0447, "loss_cls": 0.19872, "acc": 93.20093, "loss_bbox": 0.22514, "loss_mask": 0.23023, "loss": 0.7208, "time": 0.5001} -{"mode": "train", "epoch": 8, "iter": 2950, "lr": 0.0002, "memory": 12588, "data_time": 0.03981, "loss_rpn_cls": 0.02204, "loss_rpn_bbox": 0.04337, "loss_cls": 0.19539, "acc": 93.2334, "loss_bbox": 0.21854, "loss_mask": 0.22794, "loss": 0.70728, "time": 0.48941} -{"mode": "train", "epoch": 8, "iter": 3000, "lr": 0.0002, "memory": 12588, "data_time": 0.04955, "loss_rpn_cls": 0.02536, "loss_rpn_bbox": 0.04367, "loss_cls": 0.19797, "acc": 92.9856, "loss_bbox": 0.23017, "loss_mask": 0.23356, "loss": 0.73074, "time": 0.50462} -{"mode": "train", "epoch": 8, "iter": 3050, "lr": 0.0002, "memory": 12588, "data_time": 0.04011, "loss_rpn_cls": 0.02253, "loss_rpn_bbox": 0.04438, "loss_cls": 0.19476, "acc": 93.1687, "loss_bbox": 0.22334, "loss_mask": 0.233, "loss": 0.71801, "time": 0.50489} -{"mode": "train", "epoch": 8, "iter": 3100, "lr": 0.0002, "memory": 12588, "data_time": 0.05351, "loss_rpn_cls": 0.02375, "loss_rpn_bbox": 0.04775, "loss_cls": 0.19941, "acc": 93.11353, "loss_bbox": 0.22692, "loss_mask": 0.23267, "loss": 0.7305, "time": 0.50678} -{"mode": "train", "epoch": 8, "iter": 3150, "lr": 0.0002, "memory": 12588, 
"data_time": 0.04815, "loss_rpn_cls": 0.02069, "loss_rpn_bbox": 0.04171, "loss_cls": 0.19195, "acc": 93.44995, "loss_bbox": 0.21294, "loss_mask": 0.22627, "loss": 0.69356, "time": 0.49366} -{"mode": "train", "epoch": 8, "iter": 3200, "lr": 0.0002, "memory": 12588, "data_time": 0.05503, "loss_rpn_cls": 0.02422, "loss_rpn_bbox": 0.04843, "loss_cls": 0.19951, "acc": 92.91748, "loss_bbox": 0.23311, "loss_mask": 0.23386, "loss": 0.73913, "time": 0.50795} -{"mode": "train", "epoch": 8, "iter": 3250, "lr": 0.0002, "memory": 12588, "data_time": 0.04197, "loss_rpn_cls": 0.02152, "loss_rpn_bbox": 0.04354, "loss_cls": 0.1929, "acc": 93.21069, "loss_bbox": 0.21897, "loss_mask": 0.22724, "loss": 0.70417, "time": 0.5122} -{"mode": "train", "epoch": 8, "iter": 3300, "lr": 0.0002, "memory": 12588, "data_time": 0.03891, "loss_rpn_cls": 0.02479, "loss_rpn_bbox": 0.04488, "loss_cls": 0.19463, "acc": 93.27466, "loss_bbox": 0.21729, "loss_mask": 0.23091, "loss": 0.7125, "time": 0.55257} -{"mode": "train", "epoch": 8, "iter": 3350, "lr": 0.0002, "memory": 12588, "data_time": 0.03627, "loss_rpn_cls": 0.02417, "loss_rpn_bbox": 0.04176, "loss_cls": 0.19184, "acc": 93.18237, "loss_bbox": 0.21934, "loss_mask": 0.22653, "loss": 0.70365, "time": 0.50517} -{"mode": "train", "epoch": 8, "iter": 3400, "lr": 0.0002, "memory": 12588, "data_time": 0.03952, "loss_rpn_cls": 0.02264, "loss_rpn_bbox": 0.04462, "loss_cls": 0.19626, "acc": 93.18506, "loss_bbox": 0.22328, "loss_mask": 0.22996, "loss": 0.71676, "time": 0.48637} -{"mode": "train", "epoch": 8, "iter": 3450, "lr": 0.0002, "memory": 12588, "data_time": 0.0463, "loss_rpn_cls": 0.02402, "loss_rpn_bbox": 0.04693, "loss_cls": 0.20561, "acc": 92.78223, "loss_bbox": 0.2275, "loss_mask": 0.22969, "loss": 0.73375, "time": 0.51159} -{"mode": "train", "epoch": 8, "iter": 3500, "lr": 0.0002, "memory": 12588, "data_time": 0.05532, "loss_rpn_cls": 0.02412, "loss_rpn_bbox": 0.04463, "loss_cls": 0.19828, "acc": 93.20776, "loss_bbox": 0.22226, "loss_mask": 0.22906, "loss": 0.71836, "time": 0.50631} -{"mode": "train", "epoch": 8, "iter": 3550, "lr": 0.0002, "memory": 12588, "data_time": 0.05745, "loss_rpn_cls": 0.02515, "loss_rpn_bbox": 0.04561, "loss_cls": 0.2065, "acc": 92.81348, "loss_bbox": 0.23426, "loss_mask": 0.23727, "loss": 0.74879, "time": 0.49608} -{"mode": "train", "epoch": 8, "iter": 3600, "lr": 0.0002, "memory": 12588, "data_time": 0.04234, "loss_rpn_cls": 0.02447, "loss_rpn_bbox": 0.04473, "loss_cls": 0.19617, "acc": 93.12476, "loss_bbox": 0.22882, "loss_mask": 0.22875, "loss": 0.72294, "time": 0.50779} -{"mode": "train", "epoch": 8, "iter": 3650, "lr": 0.0002, "memory": 12588, "data_time": 0.04859, "loss_rpn_cls": 0.02388, "loss_rpn_bbox": 0.04465, "loss_cls": 0.21364, "acc": 92.8269, "loss_bbox": 0.23126, "loss_mask": 0.22614, "loss": 0.73957, "time": 0.511} -{"mode": "train", "epoch": 8, "iter": 3700, "lr": 0.0002, "memory": 12588, "data_time": 0.0443, "loss_rpn_cls": 0.02398, "loss_rpn_bbox": 0.04423, "loss_cls": 0.20409, "acc": 92.99512, "loss_bbox": 0.22385, "loss_mask": 0.22908, "loss": 0.72523, "time": 0.49531} -{"mode": "train", "epoch": 8, "iter": 3750, "lr": 0.0002, "memory": 12588, "data_time": 0.04688, "loss_rpn_cls": 0.02438, "loss_rpn_bbox": 0.04423, "loss_cls": 0.20434, "acc": 93.03687, "loss_bbox": 0.22627, "loss_mask": 0.23407, "loss": 0.73329, "time": 0.49954} -{"mode": "train", "epoch": 8, "iter": 3800, "lr": 0.0002, "memory": 12588, "data_time": 0.05045, "loss_rpn_cls": 0.02204, "loss_rpn_bbox": 0.04376, "loss_cls": 0.19347, "acc": 93.15479, 
"loss_bbox": 0.22331, "loss_mask": 0.22908, "loss": 0.71165, "time": 0.51305} -{"mode": "train", "epoch": 8, "iter": 3850, "lr": 0.0002, "memory": 12588, "data_time": 0.04944, "loss_rpn_cls": 0.02291, "loss_rpn_bbox": 0.04358, "loss_cls": 0.20202, "acc": 93.15234, "loss_bbox": 0.22264, "loss_mask": 0.23061, "loss": 0.72177, "time": 0.50688} -{"mode": "train", "epoch": 8, "iter": 3900, "lr": 0.0002, "memory": 12588, "data_time": 0.04532, "loss_rpn_cls": 0.02318, "loss_rpn_bbox": 0.04311, "loss_cls": 0.19879, "acc": 93.23584, "loss_bbox": 0.22415, "loss_mask": 0.22806, "loss": 0.71729, "time": 0.48861} -{"mode": "train", "epoch": 8, "iter": 3950, "lr": 0.0002, "memory": 12588, "data_time": 0.04603, "loss_rpn_cls": 0.02293, "loss_rpn_bbox": 0.04365, "loss_cls": 0.19573, "acc": 93.3335, "loss_bbox": 0.21869, "loss_mask": 0.22922, "loss": 0.71021, "time": 0.49164} -{"mode": "train", "epoch": 8, "iter": 4000, "lr": 0.0002, "memory": 12588, "data_time": 0.04293, "loss_rpn_cls": 0.02399, "loss_rpn_bbox": 0.04467, "loss_cls": 0.19779, "acc": 93.10034, "loss_bbox": 0.23049, "loss_mask": 0.23387, "loss": 0.73081, "time": 0.5527} -{"mode": "train", "epoch": 8, "iter": 4050, "lr": 0.0002, "memory": 12588, "data_time": 0.04505, "loss_rpn_cls": 0.02302, "loss_rpn_bbox": 0.04278, "loss_cls": 0.2035, "acc": 93.08911, "loss_bbox": 0.22155, "loss_mask": 0.2278, "loss": 0.71866, "time": 0.49947} -{"mode": "train", "epoch": 8, "iter": 4100, "lr": 0.0002, "memory": 12588, "data_time": 0.04564, "loss_rpn_cls": 0.02398, "loss_rpn_bbox": 0.04438, "loss_cls": 0.20717, "acc": 92.92725, "loss_bbox": 0.22346, "loss_mask": 0.23556, "loss": 0.73455, "time": 0.48576} -{"mode": "train", "epoch": 8, "iter": 4150, "lr": 0.0002, "memory": 12588, "data_time": 0.04236, "loss_rpn_cls": 0.02379, "loss_rpn_bbox": 0.04347, "loss_cls": 0.20202, "acc": 93.06226, "loss_bbox": 0.22326, "loss_mask": 0.23112, "loss": 0.72366, "time": 0.49464} -{"mode": "train", "epoch": 8, "iter": 4200, "lr": 0.0002, "memory": 12588, "data_time": 0.05085, "loss_rpn_cls": 0.02385, "loss_rpn_bbox": 0.04485, "loss_cls": 0.1914, "acc": 93.29395, "loss_bbox": 0.21702, "loss_mask": 0.22796, "loss": 0.70507, "time": 0.50106} -{"mode": "train", "epoch": 8, "iter": 4250, "lr": 0.0002, "memory": 12588, "data_time": 0.04351, "loss_rpn_cls": 0.02346, "loss_rpn_bbox": 0.04658, "loss_cls": 0.19541, "acc": 93.18872, "loss_bbox": 0.22781, "loss_mask": 0.22741, "loss": 0.72067, "time": 0.60981} -{"mode": "train", "epoch": 8, "iter": 4300, "lr": 0.0002, "memory": 12588, "data_time": 0.04966, "loss_rpn_cls": 0.02375, "loss_rpn_bbox": 0.04685, "loss_cls": 0.19789, "acc": 93.05615, "loss_bbox": 0.22207, "loss_mask": 0.2316, "loss": 0.72216, "time": 0.50701} -{"mode": "train", "epoch": 8, "iter": 4350, "lr": 0.0002, "memory": 12588, "data_time": 0.04334, "loss_rpn_cls": 0.02388, "loss_rpn_bbox": 0.04432, "loss_cls": 0.20177, "acc": 93.06934, "loss_bbox": 0.22663, "loss_mask": 0.23136, "loss": 0.72796, "time": 0.51065} -{"mode": "train", "epoch": 8, "iter": 4400, "lr": 0.0002, "memory": 12588, "data_time": 0.04287, "loss_rpn_cls": 0.02422, "loss_rpn_bbox": 0.04389, "loss_cls": 0.20125, "acc": 93.03223, "loss_bbox": 0.22357, "loss_mask": 0.23216, "loss": 0.7251, "time": 0.55911} -{"mode": "train", "epoch": 8, "iter": 4450, "lr": 0.0002, "memory": 12588, "data_time": 0.04584, "loss_rpn_cls": 0.02469, "loss_rpn_bbox": 0.04639, "loss_cls": 0.20533, "acc": 92.90649, "loss_bbox": 0.22845, "loss_mask": 0.23405, "loss": 0.73891, "time": 0.50312} -{"mode": "train", "epoch": 8, "iter": 
4500, "lr": 0.0002, "memory": 12588, "data_time": 0.05112, "loss_rpn_cls": 0.02431, "loss_rpn_bbox": 0.04516, "loss_cls": 0.20275, "acc": 92.83691, "loss_bbox": 0.22963, "loss_mask": 0.22881, "loss": 0.73065, "time": 0.51044} -{"mode": "train", "epoch": 8, "iter": 4550, "lr": 0.0002, "memory": 12588, "data_time": 0.04726, "loss_rpn_cls": 0.0235, "loss_rpn_bbox": 0.04571, "loss_cls": 0.1979, "acc": 93.17236, "loss_bbox": 0.22338, "loss_mask": 0.22857, "loss": 0.71906, "time": 0.50164} -{"mode": "train", "epoch": 8, "iter": 4600, "lr": 0.0002, "memory": 12588, "data_time": 0.05918, "loss_rpn_cls": 0.02459, "loss_rpn_bbox": 0.0445, "loss_cls": 0.20528, "acc": 92.80029, "loss_bbox": 0.23101, "loss_mask": 0.23623, "loss": 0.74161, "time": 0.49854} -{"mode": "train", "epoch": 8, "iter": 4650, "lr": 0.0002, "memory": 12588, "data_time": 0.04415, "loss_rpn_cls": 0.0243, "loss_rpn_bbox": 0.0428, "loss_cls": 0.19952, "acc": 93.05762, "loss_bbox": 0.22962, "loss_mask": 0.2311, "loss": 0.72734, "time": 0.5486} -{"mode": "train", "epoch": 8, "iter": 4700, "lr": 0.0002, "memory": 12588, "data_time": 0.03835, "loss_rpn_cls": 0.02611, "loss_rpn_bbox": 0.04437, "loss_cls": 0.20697, "acc": 92.93799, "loss_bbox": 0.22696, "loss_mask": 0.227, "loss": 0.73141, "time": 0.50439} -{"mode": "train", "epoch": 8, "iter": 4750, "lr": 0.0002, "memory": 12588, "data_time": 0.04407, "loss_rpn_cls": 0.02238, "loss_rpn_bbox": 0.04511, "loss_cls": 0.20197, "acc": 93.04395, "loss_bbox": 0.22523, "loss_mask": 0.23102, "loss": 0.72571, "time": 0.49318} -{"mode": "train", "epoch": 8, "iter": 4800, "lr": 0.0002, "memory": 12588, "data_time": 0.04018, "loss_rpn_cls": 0.02503, "loss_rpn_bbox": 0.04555, "loss_cls": 0.2049, "acc": 92.85327, "loss_bbox": 0.22994, "loss_mask": 0.23298, "loss": 0.7384, "time": 0.52109} -{"mode": "train", "epoch": 8, "iter": 4850, "lr": 0.0002, "memory": 12588, "data_time": 0.04587, "loss_rpn_cls": 0.02657, "loss_rpn_bbox": 0.04435, "loss_cls": 0.19985, "acc": 92.86621, "loss_bbox": 0.23489, "loss_mask": 0.2339, "loss": 0.73957, "time": 0.51059} -{"mode": "train", "epoch": 8, "iter": 4900, "lr": 0.0002, "memory": 12588, "data_time": 0.0417, "loss_rpn_cls": 0.02272, "loss_rpn_bbox": 0.04374, "loss_cls": 0.19032, "acc": 93.30322, "loss_bbox": 0.22151, "loss_mask": 0.22864, "loss": 0.70693, "time": 0.50348} -{"mode": "train", "epoch": 8, "iter": 4950, "lr": 0.0002, "memory": 12588, "data_time": 0.04802, "loss_rpn_cls": 0.02404, "loss_rpn_bbox": 0.04486, "loss_cls": 0.20351, "acc": 92.93115, "loss_bbox": 0.22685, "loss_mask": 0.23162, "loss": 0.73087, "time": 0.50263} -{"mode": "train", "epoch": 8, "iter": 5000, "lr": 0.0002, "memory": 12588, "data_time": 0.04216, "loss_rpn_cls": 0.02371, "loss_rpn_bbox": 0.04423, "loss_cls": 0.20812, "acc": 92.7605, "loss_bbox": 0.22851, "loss_mask": 0.22994, "loss": 0.7345, "time": 0.50714} -{"mode": "train", "epoch": 8, "iter": 5050, "lr": 0.0002, "memory": 12588, "data_time": 0.04077, "loss_rpn_cls": 0.02137, "loss_rpn_bbox": 0.03964, "loss_cls": 0.19071, "acc": 93.35815, "loss_bbox": 0.2176, "loss_mask": 0.22744, "loss": 0.69676, "time": 0.50671} -{"mode": "train", "epoch": 8, "iter": 5100, "lr": 0.0002, "memory": 12588, "data_time": 0.04634, "loss_rpn_cls": 0.02527, "loss_rpn_bbox": 0.04626, "loss_cls": 0.20648, "acc": 92.80811, "loss_bbox": 0.23426, "loss_mask": 0.23503, "loss": 0.7473, "time": 0.5059} -{"mode": "train", "epoch": 8, "iter": 5150, "lr": 0.0002, "memory": 12588, "data_time": 0.05858, "loss_rpn_cls": 0.0232, "loss_rpn_bbox": 0.04543, "loss_cls": 
0.19297, "acc": 93.26123, "loss_bbox": 0.22367, "loss_mask": 0.22907, "loss": 0.71434, "time": 0.50543} -{"mode": "train", "epoch": 8, "iter": 5200, "lr": 0.0002, "memory": 12588, "data_time": 0.04274, "loss_rpn_cls": 0.02227, "loss_rpn_bbox": 0.04093, "loss_cls": 0.19888, "acc": 93.04834, "loss_bbox": 0.22222, "loss_mask": 0.22709, "loss": 0.7114, "time": 0.50418} -{"mode": "train", "epoch": 8, "iter": 5250, "lr": 0.0002, "memory": 12588, "data_time": 0.04083, "loss_rpn_cls": 0.0259, "loss_rpn_bbox": 0.04834, "loss_cls": 0.2117, "acc": 92.70679, "loss_bbox": 0.23492, "loss_mask": 0.2368, "loss": 0.75767, "time": 0.50323} -{"mode": "train", "epoch": 8, "iter": 5300, "lr": 0.0002, "memory": 12588, "data_time": 0.04134, "loss_rpn_cls": 0.02494, "loss_rpn_bbox": 0.04238, "loss_cls": 0.20325, "acc": 92.83667, "loss_bbox": 0.22486, "loss_mask": 0.22402, "loss": 0.71945, "time": 0.52122} -{"mode": "train", "epoch": 8, "iter": 5350, "lr": 0.0002, "memory": 12588, "data_time": 0.04287, "loss_rpn_cls": 0.02128, "loss_rpn_bbox": 0.04169, "loss_cls": 0.19487, "acc": 93.23047, "loss_bbox": 0.22415, "loss_mask": 0.22772, "loss": 0.70972, "time": 0.48815} -{"mode": "train", "epoch": 8, "iter": 5400, "lr": 0.0002, "memory": 12588, "data_time": 0.04502, "loss_rpn_cls": 0.0279, "loss_rpn_bbox": 0.04707, "loss_cls": 0.21079, "acc": 92.69214, "loss_bbox": 0.23619, "loss_mask": 0.24043, "loss": 0.76238, "time": 0.51695} -{"mode": "train", "epoch": 8, "iter": 5450, "lr": 0.0002, "memory": 12588, "data_time": 0.04479, "loss_rpn_cls": 0.02308, "loss_rpn_bbox": 0.04452, "loss_cls": 0.20462, "acc": 92.90625, "loss_bbox": 0.23003, "loss_mask": 0.23482, "loss": 0.73707, "time": 0.51304} -{"mode": "train", "epoch": 8, "iter": 5500, "lr": 0.0002, "memory": 12588, "data_time": 0.05046, "loss_rpn_cls": 0.02509, "loss_rpn_bbox": 0.04657, "loss_cls": 0.20714, "acc": 92.8208, "loss_bbox": 0.23118, "loss_mask": 0.23561, "loss": 0.74559, "time": 0.51405} -{"mode": "train", "epoch": 8, "iter": 5550, "lr": 0.0002, "memory": 12588, "data_time": 0.04275, "loss_rpn_cls": 0.02518, "loss_rpn_bbox": 0.04585, "loss_cls": 0.20046, "acc": 93.10498, "loss_bbox": 0.22449, "loss_mask": 0.23186, "loss": 0.72783, "time": 0.50612} -{"mode": "train", "epoch": 8, "iter": 5600, "lr": 0.0002, "memory": 12588, "data_time": 0.04406, "loss_rpn_cls": 0.02516, "loss_rpn_bbox": 0.04548, "loss_cls": 0.20113, "acc": 92.97729, "loss_bbox": 0.23025, "loss_mask": 0.23026, "loss": 0.73228, "time": 0.50494} -{"mode": "train", "epoch": 8, "iter": 5650, "lr": 0.0002, "memory": 12588, "data_time": 0.04372, "loss_rpn_cls": 0.02389, "loss_rpn_bbox": 0.0448, "loss_cls": 0.20701, "acc": 92.90063, "loss_bbox": 0.23464, "loss_mask": 0.2332, "loss": 0.74354, "time": 0.50662} -{"mode": "train", "epoch": 8, "iter": 5700, "lr": 0.0002, "memory": 12588, "data_time": 0.04518, "loss_rpn_cls": 0.02548, "loss_rpn_bbox": 0.04325, "loss_cls": 0.20515, "acc": 93.01147, "loss_bbox": 0.23099, "loss_mask": 0.23084, "loss": 0.73571, "time": 0.59821} -{"mode": "train", "epoch": 8, "iter": 5750, "lr": 0.0002, "memory": 12588, "data_time": 0.04033, "loss_rpn_cls": 0.02476, "loss_rpn_bbox": 0.04375, "loss_cls": 0.20331, "acc": 93.02246, "loss_bbox": 0.2225, "loss_mask": 0.23085, "loss": 0.72519, "time": 0.49358} -{"mode": "train", "epoch": 8, "iter": 5800, "lr": 0.0002, "memory": 12588, "data_time": 0.04103, "loss_rpn_cls": 0.02068, "loss_rpn_bbox": 0.04355, "loss_cls": 0.19816, "acc": 93.08398, "loss_bbox": 0.22411, "loss_mask": 0.22991, "loss": 0.71641, "time": 0.54232} -{"mode": 
"train", "epoch": 8, "iter": 5850, "lr": 0.0002, "memory": 12588, "data_time": 0.04347, "loss_rpn_cls": 0.02462, "loss_rpn_bbox": 0.045, "loss_cls": 0.20469, "acc": 93.0144, "loss_bbox": 0.22993, "loss_mask": 0.23391, "loss": 0.73816, "time": 0.4938} -{"mode": "train", "epoch": 8, "iter": 5900, "lr": 0.0002, "memory": 12588, "data_time": 0.04553, "loss_rpn_cls": 0.02487, "loss_rpn_bbox": 0.04801, "loss_cls": 0.20829, "acc": 92.77588, "loss_bbox": 0.23556, "loss_mask": 0.23467, "loss": 0.7514, "time": 0.49577} -{"mode": "train", "epoch": 8, "iter": 5950, "lr": 0.0002, "memory": 12588, "data_time": 0.04732, "loss_rpn_cls": 0.02171, "loss_rpn_bbox": 0.04148, "loss_cls": 0.19507, "acc": 93.29272, "loss_bbox": 0.22488, "loss_mask": 0.23049, "loss": 0.71364, "time": 0.48678} -{"mode": "train", "epoch": 8, "iter": 6000, "lr": 0.0002, "memory": 12588, "data_time": 0.04816, "loss_rpn_cls": 0.02274, "loss_rpn_bbox": 0.04372, "loss_cls": 0.19865, "acc": 93.14258, "loss_bbox": 0.22705, "loss_mask": 0.22936, "loss": 0.72153, "time": 0.49707} -{"mode": "train", "epoch": 8, "iter": 6050, "lr": 0.0002, "memory": 12588, "data_time": 0.04963, "loss_rpn_cls": 0.02475, "loss_rpn_bbox": 0.04677, "loss_cls": 0.20735, "acc": 92.94312, "loss_bbox": 0.22582, "loss_mask": 0.22526, "loss": 0.72996, "time": 0.50963} -{"mode": "train", "epoch": 8, "iter": 6100, "lr": 0.0002, "memory": 12588, "data_time": 0.04537, "loss_rpn_cls": 0.02452, "loss_rpn_bbox": 0.04694, "loss_cls": 0.20897, "acc": 92.78027, "loss_bbox": 0.23957, "loss_mask": 0.23454, "loss": 0.75455, "time": 0.51194} -{"mode": "train", "epoch": 8, "iter": 6150, "lr": 0.0002, "memory": 12588, "data_time": 0.04708, "loss_rpn_cls": 0.02487, "loss_rpn_bbox": 0.04732, "loss_cls": 0.20053, "acc": 93.08203, "loss_bbox": 0.22411, "loss_mask": 0.23568, "loss": 0.7325, "time": 0.50323} -{"mode": "train", "epoch": 8, "iter": 6200, "lr": 0.0002, "memory": 12588, "data_time": 0.04287, "loss_rpn_cls": 0.02519, "loss_rpn_bbox": 0.04526, "loss_cls": 0.20595, "acc": 92.87622, "loss_bbox": 0.23157, "loss_mask": 0.22732, "loss": 0.73529, "time": 0.5612} -{"mode": "train", "epoch": 8, "iter": 6250, "lr": 0.0002, "memory": 12588, "data_time": 0.04141, "loss_rpn_cls": 0.02437, "loss_rpn_bbox": 0.04955, "loss_cls": 0.21309, "acc": 92.56079, "loss_bbox": 0.24436, "loss_mask": 0.23622, "loss": 0.76759, "time": 0.51523} -{"mode": "train", "epoch": 8, "iter": 6300, "lr": 0.0002, "memory": 12588, "data_time": 0.04784, "loss_rpn_cls": 0.02222, "loss_rpn_bbox": 0.04382, "loss_cls": 0.18882, "acc": 93.40186, "loss_bbox": 0.21821, "loss_mask": 0.22931, "loss": 0.70238, "time": 0.58449} -{"mode": "train", "epoch": 8, "iter": 6350, "lr": 0.0002, "memory": 12588, "data_time": 0.04436, "loss_rpn_cls": 0.02414, "loss_rpn_bbox": 0.04521, "loss_cls": 0.19893, "acc": 93.09277, "loss_bbox": 0.23079, "loss_mask": 0.22925, "loss": 0.72832, "time": 0.49851} -{"mode": "train", "epoch": 8, "iter": 6400, "lr": 0.0002, "memory": 12588, "data_time": 0.04327, "loss_rpn_cls": 0.02333, "loss_rpn_bbox": 0.04369, "loss_cls": 0.19522, "acc": 93.12988, "loss_bbox": 0.22213, "loss_mask": 0.23219, "loss": 0.71657, "time": 0.50176} -{"mode": "train", "epoch": 8, "iter": 6450, "lr": 0.0002, "memory": 12588, "data_time": 0.04667, "loss_rpn_cls": 0.02503, "loss_rpn_bbox": 0.0483, "loss_cls": 0.19975, "acc": 92.97119, "loss_bbox": 0.22923, "loss_mask": 0.23758, "loss": 0.73989, "time": 0.5482} -{"mode": "train", "epoch": 8, "iter": 6500, "lr": 0.0002, "memory": 12588, "data_time": 0.04797, "loss_rpn_cls": 0.0257, 
"loss_rpn_bbox": 0.04465, "loss_cls": 0.19449, "acc": 93.2605, "loss_bbox": 0.22369, "loss_mask": 0.22891, "loss": 0.71743, "time": 0.51176} -{"mode": "train", "epoch": 8, "iter": 6550, "lr": 0.0002, "memory": 12588, "data_time": 0.04364, "loss_rpn_cls": 0.02552, "loss_rpn_bbox": 0.04848, "loss_cls": 0.20885, "acc": 92.68066, "loss_bbox": 0.23407, "loss_mask": 0.23629, "loss": 0.75321, "time": 0.50836} -{"mode": "train", "epoch": 8, "iter": 6600, "lr": 0.0002, "memory": 12588, "data_time": 0.04953, "loss_rpn_cls": 0.02414, "loss_rpn_bbox": 0.04519, "loss_cls": 0.2074, "acc": 92.84668, "loss_bbox": 0.23086, "loss_mask": 0.23369, "loss": 0.74128, "time": 0.50074} -{"mode": "train", "epoch": 8, "iter": 6650, "lr": 0.0002, "memory": 12588, "data_time": 0.04731, "loss_rpn_cls": 0.02376, "loss_rpn_bbox": 0.04584, "loss_cls": 0.20936, "acc": 92.73193, "loss_bbox": 0.23775, "loss_mask": 0.23411, "loss": 0.75081, "time": 0.50729} -{"mode": "train", "epoch": 8, "iter": 6700, "lr": 0.0002, "memory": 12588, "data_time": 0.04867, "loss_rpn_cls": 0.02477, "loss_rpn_bbox": 0.04429, "loss_cls": 0.20617, "acc": 92.8479, "loss_bbox": 0.23374, "loss_mask": 0.23164, "loss": 0.74061, "time": 0.49856} -{"mode": "train", "epoch": 8, "iter": 6750, "lr": 0.0002, "memory": 12588, "data_time": 0.0409, "loss_rpn_cls": 0.02462, "loss_rpn_bbox": 0.04446, "loss_cls": 0.19702, "acc": 93.05176, "loss_bbox": 0.22774, "loss_mask": 0.2328, "loss": 0.72664, "time": 0.49872} -{"mode": "train", "epoch": 8, "iter": 6800, "lr": 0.0002, "memory": 12588, "data_time": 0.0411, "loss_rpn_cls": 0.02366, "loss_rpn_bbox": 0.04558, "loss_cls": 0.20398, "acc": 92.99707, "loss_bbox": 0.23123, "loss_mask": 0.23101, "loss": 0.73545, "time": 0.49355} -{"mode": "train", "epoch": 8, "iter": 6850, "lr": 0.0002, "memory": 12588, "data_time": 0.04573, "loss_rpn_cls": 0.0236, "loss_rpn_bbox": 0.04499, "loss_cls": 0.19821, "acc": 93.09033, "loss_bbox": 0.22361, "loss_mask": 0.23073, "loss": 0.72115, "time": 0.49533} -{"mode": "train", "epoch": 8, "iter": 6900, "lr": 0.0002, "memory": 12588, "data_time": 0.04348, "loss_rpn_cls": 0.0242, "loss_rpn_bbox": 0.04744, "loss_cls": 0.20914, "acc": 92.82544, "loss_bbox": 0.23274, "loss_mask": 0.23244, "loss": 0.74596, "time": 0.50887} -{"mode": "train", "epoch": 8, "iter": 6950, "lr": 0.0002, "memory": 12588, "data_time": 0.05095, "loss_rpn_cls": 0.02422, "loss_rpn_bbox": 0.04656, "loss_cls": 0.2035, "acc": 92.95508, "loss_bbox": 0.22957, "loss_mask": 0.2302, "loss": 0.73405, "time": 0.51419} -{"mode": "train", "epoch": 8, "iter": 7000, "lr": 0.0002, "memory": 12588, "data_time": 0.04768, "loss_rpn_cls": 0.02361, "loss_rpn_bbox": 0.04266, "loss_cls": 0.20107, "acc": 92.97925, "loss_bbox": 0.22896, "loss_mask": 0.23466, "loss": 0.73096, "time": 0.49044} -{"mode": "train", "epoch": 8, "iter": 7050, "lr": 0.0002, "memory": 12588, "data_time": 0.03757, "loss_rpn_cls": 0.02383, "loss_rpn_bbox": 0.0464, "loss_cls": 0.20065, "acc": 93.04785, "loss_bbox": 0.22465, "loss_mask": 0.23408, "loss": 0.7296, "time": 0.50069} -{"mode": "train", "epoch": 8, "iter": 7100, "lr": 0.0002, "memory": 12588, "data_time": 0.04267, "loss_rpn_cls": 0.02505, "loss_rpn_bbox": 0.04888, "loss_cls": 0.19638, "acc": 93.18921, "loss_bbox": 0.22182, "loss_mask": 0.23129, "loss": 0.72341, "time": 0.5247} -{"mode": "train", "epoch": 8, "iter": 7150, "lr": 0.0002, "memory": 12588, "data_time": 0.04224, "loss_rpn_cls": 0.02474, "loss_rpn_bbox": 0.04469, "loss_cls": 0.19944, "acc": 93.06836, "loss_bbox": 0.22331, "loss_mask": 0.22745, "loss": 
0.71964, "time": 0.54067} -{"mode": "train", "epoch": 8, "iter": 7200, "lr": 0.0002, "memory": 12588, "data_time": 0.05058, "loss_rpn_cls": 0.02598, "loss_rpn_bbox": 0.04646, "loss_cls": 0.20151, "acc": 93.15479, "loss_bbox": 0.22128, "loss_mask": 0.23186, "loss": 0.72709, "time": 0.50538} -{"mode": "train", "epoch": 8, "iter": 7250, "lr": 0.0002, "memory": 12588, "data_time": 0.0387, "loss_rpn_cls": 0.02324, "loss_rpn_bbox": 0.0457, "loss_cls": 0.19687, "acc": 93.06445, "loss_bbox": 0.22457, "loss_mask": 0.23066, "loss": 0.72105, "time": 0.49842} -{"mode": "train", "epoch": 8, "iter": 7300, "lr": 0.0002, "memory": 12588, "data_time": 0.04861, "loss_rpn_cls": 0.02564, "loss_rpn_bbox": 0.04696, "loss_cls": 0.21235, "acc": 92.70557, "loss_bbox": 0.236, "loss_mask": 0.23528, "loss": 0.75622, "time": 0.50114} -{"mode": "val", "epoch": 8, "iter": 625, "lr": 0.0002, "bbox_mAP": 0.3866, "bbox_mAP_50": 0.6051, "bbox_mAP_75": 0.4283, "bbox_mAP_s": 0.2352, "bbox_mAP_m": 0.4285, "bbox_mAP_l": 0.4934, "bbox_mAP_copypaste": "0.3866 0.6051 0.4283 0.2352 0.4285 0.4934", "segm_mAP": 0.3644, "segm_mAP_50": 0.5796, "segm_mAP_75": 0.3897, "segm_mAP_s": 0.1758, "segm_mAP_m": 0.3975, "segm_mAP_l": 0.53, "segm_mAP_copypaste": "0.3644 0.5796 0.3897 0.1758 0.3975 0.5300"} -{"mode": "train", "epoch": 9, "iter": 50, "lr": 2e-05, "memory": 12588, "data_time": 0.12812, "loss_rpn_cls": 0.02123, "loss_rpn_bbox": 0.04423, "loss_cls": 0.18691, "acc": 93.38013, "loss_bbox": 0.217, "loss_mask": 0.22298, "loss": 0.69234, "time": 0.58486} -{"mode": "train", "epoch": 9, "iter": 100, "lr": 2e-05, "memory": 12588, "data_time": 0.0485, "loss_rpn_cls": 0.02153, "loss_rpn_bbox": 0.04529, "loss_cls": 0.18896, "acc": 93.20898, "loss_bbox": 0.2248, "loss_mask": 0.21982, "loss": 0.70039, "time": 0.52784} -{"mode": "train", "epoch": 9, "iter": 150, "lr": 2e-05, "memory": 12588, "data_time": 0.05421, "loss_rpn_cls": 0.02076, "loss_rpn_bbox": 0.04329, "loss_cls": 0.18501, "acc": 93.25854, "loss_bbox": 0.22142, "loss_mask": 0.21875, "loss": 0.68923, "time": 0.70809} -{"mode": "train", "epoch": 9, "iter": 200, "lr": 2e-05, "memory": 12588, "data_time": 0.05531, "loss_rpn_cls": 0.01909, "loss_rpn_bbox": 0.04007, "loss_cls": 0.1809, "acc": 93.53027, "loss_bbox": 0.21705, "loss_mask": 0.22374, "loss": 0.68085, "time": 0.51899} -{"mode": "train", "epoch": 9, "iter": 250, "lr": 2e-05, "memory": 12588, "data_time": 0.04473, "loss_rpn_cls": 0.01856, "loss_rpn_bbox": 0.0388, "loss_cls": 0.17034, "acc": 93.85205, "loss_bbox": 0.20378, "loss_mask": 0.21836, "loss": 0.64984, "time": 0.51069} -{"mode": "train", "epoch": 9, "iter": 300, "lr": 2e-05, "memory": 12588, "data_time": 0.04599, "loss_rpn_cls": 0.01909, "loss_rpn_bbox": 0.03964, "loss_cls": 0.17305, "acc": 93.70801, "loss_bbox": 0.21066, "loss_mask": 0.21753, "loss": 0.65997, "time": 0.52028} -{"mode": "train", "epoch": 9, "iter": 350, "lr": 2e-05, "memory": 12588, "data_time": 0.05053, "loss_rpn_cls": 0.01976, "loss_rpn_bbox": 0.04212, "loss_cls": 0.17974, "acc": 93.40186, "loss_bbox": 0.21888, "loss_mask": 0.22368, "loss": 0.68417, "time": 0.51298} -{"mode": "train", "epoch": 9, "iter": 400, "lr": 2e-05, "memory": 12588, "data_time": 0.04382, "loss_rpn_cls": 0.01783, "loss_rpn_bbox": 0.03768, "loss_cls": 0.16337, "acc": 94.04785, "loss_bbox": 0.19945, "loss_mask": 0.21628, "loss": 0.63461, "time": 0.50876} -{"mode": "train", "epoch": 9, "iter": 450, "lr": 2e-05, "memory": 12588, "data_time": 0.04302, "loss_rpn_cls": 0.01783, "loss_rpn_bbox": 0.03983, "loss_cls": 0.16834, "acc": 93.8877, 
"loss_bbox": 0.20506, "loss_mask": 0.21598, "loss": 0.64704, "time": 0.5093} -{"mode": "train", "epoch": 9, "iter": 500, "lr": 2e-05, "memory": 12588, "data_time": 0.04, "loss_rpn_cls": 0.01672, "loss_rpn_bbox": 0.03925, "loss_cls": 0.16658, "acc": 93.91577, "loss_bbox": 0.204, "loss_mask": 0.21615, "loss": 0.6427, "time": 0.57393} -{"mode": "train", "epoch": 9, "iter": 550, "lr": 2e-05, "memory": 12588, "data_time": 0.03994, "loss_rpn_cls": 0.01758, "loss_rpn_bbox": 0.03845, "loss_cls": 0.16728, "acc": 93.87817, "loss_bbox": 0.20821, "loss_mask": 0.21496, "loss": 0.64648, "time": 0.51038} -{"mode": "train", "epoch": 9, "iter": 600, "lr": 2e-05, "memory": 12588, "data_time": 0.04313, "loss_rpn_cls": 0.019, "loss_rpn_bbox": 0.04083, "loss_cls": 0.1776, "acc": 93.54517, "loss_bbox": 0.21192, "loss_mask": 0.22011, "loss": 0.66946, "time": 0.55282} -{"mode": "train", "epoch": 9, "iter": 650, "lr": 2e-05, "memory": 12588, "data_time": 0.05466, "loss_rpn_cls": 0.02034, "loss_rpn_bbox": 0.0439, "loss_cls": 0.17894, "acc": 93.33545, "loss_bbox": 0.22244, "loss_mask": 0.22155, "loss": 0.68718, "time": 0.52742} -{"mode": "train", "epoch": 9, "iter": 700, "lr": 2e-05, "memory": 12588, "data_time": 0.05182, "loss_rpn_cls": 0.01757, "loss_rpn_bbox": 0.04004, "loss_cls": 0.17151, "acc": 93.68457, "loss_bbox": 0.21245, "loss_mask": 0.21527, "loss": 0.65684, "time": 0.51587} -{"mode": "train", "epoch": 9, "iter": 750, "lr": 2e-05, "memory": 12588, "data_time": 0.05516, "loss_rpn_cls": 0.01666, "loss_rpn_bbox": 0.0393, "loss_cls": 0.16595, "acc": 93.89844, "loss_bbox": 0.20722, "loss_mask": 0.21659, "loss": 0.64572, "time": 0.52402} -{"mode": "train", "epoch": 9, "iter": 800, "lr": 2e-05, "memory": 12588, "data_time": 0.04857, "loss_rpn_cls": 0.01992, "loss_rpn_bbox": 0.04006, "loss_cls": 0.17073, "acc": 93.75317, "loss_bbox": 0.20901, "loss_mask": 0.20979, "loss": 0.64952, "time": 0.52088} -{"mode": "train", "epoch": 9, "iter": 850, "lr": 2e-05, "memory": 12588, "data_time": 0.04545, "loss_rpn_cls": 0.01611, "loss_rpn_bbox": 0.03957, "loss_cls": 0.15978, "acc": 94.21851, "loss_bbox": 0.19951, "loss_mask": 0.21312, "loss": 0.62808, "time": 0.50947} -{"mode": "train", "epoch": 9, "iter": 900, "lr": 2e-05, "memory": 12588, "data_time": 0.0419, "loss_rpn_cls": 0.01829, "loss_rpn_bbox": 0.0405, "loss_cls": 0.16678, "acc": 93.90942, "loss_bbox": 0.20709, "loss_mask": 0.22186, "loss": 0.65453, "time": 0.5045} -{"mode": "train", "epoch": 9, "iter": 950, "lr": 2e-05, "memory": 12588, "data_time": 0.05078, "loss_rpn_cls": 0.01786, "loss_rpn_bbox": 0.04024, "loss_cls": 0.16732, "acc": 93.88281, "loss_bbox": 0.21149, "loss_mask": 0.21637, "loss": 0.65328, "time": 0.51049} -{"mode": "train", "epoch": 9, "iter": 1000, "lr": 2e-05, "memory": 12588, "data_time": 0.04198, "loss_rpn_cls": 0.02002, "loss_rpn_bbox": 0.04195, "loss_cls": 0.17871, "acc": 93.34644, "loss_bbox": 0.22051, "loss_mask": 0.22017, "loss": 0.68136, "time": 0.52517} -{"mode": "train", "epoch": 9, "iter": 1050, "lr": 2e-05, "memory": 12588, "data_time": 0.04164, "loss_rpn_cls": 0.01762, "loss_rpn_bbox": 0.04205, "loss_cls": 0.17452, "acc": 93.65259, "loss_bbox": 0.21704, "loss_mask": 0.21733, "loss": 0.66855, "time": 0.51676} -{"mode": "train", "epoch": 9, "iter": 1100, "lr": 2e-05, "memory": 12588, "data_time": 0.04575, "loss_rpn_cls": 0.01847, "loss_rpn_bbox": 0.04384, "loss_cls": 0.17944, "acc": 93.50439, "loss_bbox": 0.21633, "loss_mask": 0.21556, "loss": 0.67363, "time": 0.52938} -{"mode": "train", "epoch": 9, "iter": 1150, "lr": 2e-05, "memory": 
12588, "data_time": 0.04852, "loss_rpn_cls": 0.0176, "loss_rpn_bbox": 0.04028, "loss_cls": 0.16811, "acc": 93.85229, "loss_bbox": 0.20636, "loss_mask": 0.2117, "loss": 0.64405, "time": 0.50344} -{"mode": "train", "epoch": 9, "iter": 1200, "lr": 2e-05, "memory": 12588, "data_time": 0.04838, "loss_rpn_cls": 0.01663, "loss_rpn_bbox": 0.04088, "loss_cls": 0.16499, "acc": 93.8606, "loss_bbox": 0.20218, "loss_mask": 0.21045, "loss": 0.63513, "time": 0.52181} -{"mode": "train", "epoch": 9, "iter": 1250, "lr": 2e-05, "memory": 12588, "data_time": 0.03955, "loss_rpn_cls": 0.01774, "loss_rpn_bbox": 0.03847, "loss_cls": 0.16745, "acc": 93.9165, "loss_bbox": 0.20377, "loss_mask": 0.21327, "loss": 0.6407, "time": 0.50423} -{"mode": "train", "epoch": 9, "iter": 1300, "lr": 2e-05, "memory": 12588, "data_time": 0.0419, "loss_rpn_cls": 0.01552, "loss_rpn_bbox": 0.03727, "loss_cls": 0.159, "acc": 94.20312, "loss_bbox": 0.19849, "loss_mask": 0.21344, "loss": 0.62373, "time": 0.50291} -{"mode": "train", "epoch": 9, "iter": 1350, "lr": 2e-05, "memory": 12588, "data_time": 0.05711, "loss_rpn_cls": 0.01729, "loss_rpn_bbox": 0.04039, "loss_cls": 0.16403, "acc": 93.92163, "loss_bbox": 0.20334, "loss_mask": 0.21295, "loss": 0.63799, "time": 0.50223} -{"mode": "train", "epoch": 9, "iter": 1400, "lr": 2e-05, "memory": 12588, "data_time": 0.03886, "loss_rpn_cls": 0.01672, "loss_rpn_bbox": 0.03701, "loss_cls": 0.15816, "acc": 94.20752, "loss_bbox": 0.19666, "loss_mask": 0.21094, "loss": 0.61949, "time": 0.54324} -{"mode": "train", "epoch": 9, "iter": 1450, "lr": 2e-05, "memory": 12588, "data_time": 0.03736, "loss_rpn_cls": 0.0186, "loss_rpn_bbox": 0.03982, "loss_cls": 0.16747, "acc": 93.79321, "loss_bbox": 0.20702, "loss_mask": 0.21662, "loss": 0.64953, "time": 0.56908} -{"mode": "train", "epoch": 9, "iter": 1500, "lr": 2e-05, "memory": 12588, "data_time": 0.04468, "loss_rpn_cls": 0.01694, "loss_rpn_bbox": 0.03868, "loss_cls": 0.16433, "acc": 93.96655, "loss_bbox": 0.21013, "loss_mask": 0.21762, "loss": 0.6477, "time": 0.49961} -{"mode": "train", "epoch": 9, "iter": 1550, "lr": 2e-05, "memory": 12588, "data_time": 0.05006, "loss_rpn_cls": 0.01798, "loss_rpn_bbox": 0.03879, "loss_cls": 0.16734, "acc": 93.88208, "loss_bbox": 0.20753, "loss_mask": 0.2169, "loss": 0.64855, "time": 0.50511} -{"mode": "train", "epoch": 9, "iter": 1600, "lr": 2e-05, "memory": 12588, "data_time": 0.0478, "loss_rpn_cls": 0.01641, "loss_rpn_bbox": 0.03943, "loss_cls": 0.16423, "acc": 93.9563, "loss_bbox": 0.2059, "loss_mask": 0.21724, "loss": 0.64321, "time": 0.50919} -{"mode": "train", "epoch": 9, "iter": 1650, "lr": 2e-05, "memory": 12588, "data_time": 0.04418, "loss_rpn_cls": 0.01827, "loss_rpn_bbox": 0.04072, "loss_cls": 0.16558, "acc": 93.91919, "loss_bbox": 0.20814, "loss_mask": 0.21426, "loss": 0.64696, "time": 0.50551} -{"mode": "train", "epoch": 9, "iter": 1700, "lr": 2e-05, "memory": 12588, "data_time": 0.05067, "loss_rpn_cls": 0.01732, "loss_rpn_bbox": 0.04046, "loss_cls": 0.17315, "acc": 93.70728, "loss_bbox": 0.21507, "loss_mask": 0.21711, "loss": 0.66311, "time": 0.52238} -{"mode": "train", "epoch": 9, "iter": 1750, "lr": 2e-05, "memory": 12588, "data_time": 0.04437, "loss_rpn_cls": 0.01766, "loss_rpn_bbox": 0.03871, "loss_cls": 0.16327, "acc": 94.00879, "loss_bbox": 0.20389, "loss_mask": 0.20821, "loss": 0.63176, "time": 0.5045} -{"mode": "train", "epoch": 9, "iter": 1800, "lr": 2e-05, "memory": 12588, "data_time": 0.03361, "loss_rpn_cls": 0.01602, "loss_rpn_bbox": 0.03819, "loss_cls": 0.15426, "acc": 94.38452, "loss_bbox": 
0.18962, "loss_mask": 0.21444, "loss": 0.61253, "time": 0.50748} -{"mode": "train", "epoch": 9, "iter": 1850, "lr": 2e-05, "memory": 12588, "data_time": 0.04559, "loss_rpn_cls": 0.01515, "loss_rpn_bbox": 0.03964, "loss_cls": 0.16288, "acc": 93.98877, "loss_bbox": 0.20606, "loss_mask": 0.21247, "loss": 0.6362, "time": 0.51034} -{"mode": "train", "epoch": 9, "iter": 1900, "lr": 2e-05, "memory": 12588, "data_time": 0.0426, "loss_rpn_cls": 0.01943, "loss_rpn_bbox": 0.04036, "loss_cls": 0.16585, "acc": 93.89355, "loss_bbox": 0.20728, "loss_mask": 0.21166, "loss": 0.64458, "time": 0.57695} -{"mode": "train", "epoch": 9, "iter": 1950, "lr": 2e-05, "memory": 12588, "data_time": 0.04091, "loss_rpn_cls": 0.01666, "loss_rpn_bbox": 0.03878, "loss_cls": 0.16032, "acc": 94.12646, "loss_bbox": 0.19817, "loss_mask": 0.21214, "loss": 0.62606, "time": 0.50213} -{"mode": "train", "epoch": 9, "iter": 2000, "lr": 2e-05, "memory": 12588, "data_time": 0.04024, "loss_rpn_cls": 0.01676, "loss_rpn_bbox": 0.03879, "loss_cls": 0.1571, "acc": 94.19312, "loss_bbox": 0.20172, "loss_mask": 0.21438, "loss": 0.62874, "time": 0.5079} -{"mode": "train", "epoch": 9, "iter": 2050, "lr": 2e-05, "memory": 12588, "data_time": 0.0445, "loss_rpn_cls": 0.01678, "loss_rpn_bbox": 0.03797, "loss_cls": 0.1583, "acc": 94.11523, "loss_bbox": 0.19867, "loss_mask": 0.20983, "loss": 0.62156, "time": 0.62835} -{"mode": "train", "epoch": 9, "iter": 2100, "lr": 2e-05, "memory": 12588, "data_time": 0.04155, "loss_rpn_cls": 0.01623, "loss_rpn_bbox": 0.03638, "loss_cls": 0.16066, "acc": 94.07104, "loss_bbox": 0.20164, "loss_mask": 0.20845, "loss": 0.62337, "time": 0.50158} -{"mode": "train", "epoch": 9, "iter": 2150, "lr": 2e-05, "memory": 12588, "data_time": 0.03645, "loss_rpn_cls": 0.0176, "loss_rpn_bbox": 0.03838, "loss_cls": 0.16303, "acc": 94.01904, "loss_bbox": 0.20346, "loss_mask": 0.21366, "loss": 0.63612, "time": 0.50492} -{"mode": "train", "epoch": 9, "iter": 2200, "lr": 2e-05, "memory": 12588, "data_time": 0.04039, "loss_rpn_cls": 0.01862, "loss_rpn_bbox": 0.04, "loss_cls": 0.16935, "acc": 93.79468, "loss_bbox": 0.20455, "loss_mask": 0.21671, "loss": 0.64922, "time": 0.51618} -{"mode": "train", "epoch": 9, "iter": 2250, "lr": 2e-05, "memory": 12588, "data_time": 0.04104, "loss_rpn_cls": 0.01661, "loss_rpn_bbox": 0.03786, "loss_cls": 0.16071, "acc": 94.10742, "loss_bbox": 0.20173, "loss_mask": 0.2136, "loss": 0.6305, "time": 0.51131} -{"mode": "train", "epoch": 9, "iter": 2300, "lr": 2e-05, "memory": 12588, "data_time": 0.03527, "loss_rpn_cls": 0.01609, "loss_rpn_bbox": 0.03808, "loss_cls": 0.15725, "acc": 94.22168, "loss_bbox": 0.19999, "loss_mask": 0.21407, "loss": 0.62548, "time": 0.4996} -{"mode": "train", "epoch": 9, "iter": 2350, "lr": 2e-05, "memory": 12588, "data_time": 0.04885, "loss_rpn_cls": 0.01889, "loss_rpn_bbox": 0.04116, "loss_cls": 0.17084, "acc": 93.65576, "loss_bbox": 0.2164, "loss_mask": 0.21935, "loss": 0.66664, "time": 0.52164} -{"mode": "train", "epoch": 9, "iter": 2400, "lr": 2e-05, "memory": 12588, "data_time": 0.04918, "loss_rpn_cls": 0.01542, "loss_rpn_bbox": 0.03994, "loss_cls": 0.15637, "acc": 94.21655, "loss_bbox": 0.20039, "loss_mask": 0.21197, "loss": 0.62409, "time": 0.50324} -{"mode": "train", "epoch": 9, "iter": 2450, "lr": 2e-05, "memory": 12588, "data_time": 0.04848, "loss_rpn_cls": 0.01621, "loss_rpn_bbox": 0.03811, "loss_cls": 0.1621, "acc": 93.99585, "loss_bbox": 0.20776, "loss_mask": 0.21636, "loss": 0.64054, "time": 0.50548} -{"mode": "train", "epoch": 9, "iter": 2500, "lr": 2e-05, "memory": 
12588, "data_time": 0.04548, "loss_rpn_cls": 0.01676, "loss_rpn_bbox": 0.03701, "loss_cls": 0.16636, "acc": 93.92456, "loss_bbox": 0.20689, "loss_mask": 0.21157, "loss": 0.6386, "time": 0.50681} -{"mode": "train", "epoch": 9, "iter": 2550, "lr": 2e-05, "memory": 12588, "data_time": 0.04698, "loss_rpn_cls": 0.01683, "loss_rpn_bbox": 0.03804, "loss_cls": 0.16159, "acc": 93.98706, "loss_bbox": 0.20099, "loss_mask": 0.2078, "loss": 0.62524, "time": 0.50405} -{"mode": "train", "epoch": 9, "iter": 2600, "lr": 2e-05, "memory": 12588, "data_time": 0.05388, "loss_rpn_cls": 0.01633, "loss_rpn_bbox": 0.03803, "loss_cls": 0.15796, "acc": 94.13184, "loss_bbox": 0.20059, "loss_mask": 0.21236, "loss": 0.62528, "time": 0.50464} -{"mode": "train", "epoch": 9, "iter": 2650, "lr": 2e-05, "memory": 12588, "data_time": 0.04117, "loss_rpn_cls": 0.01562, "loss_rpn_bbox": 0.03637, "loss_cls": 0.1532, "acc": 94.32715, "loss_bbox": 0.19166, "loss_mask": 0.20842, "loss": 0.60526, "time": 0.56262} -{"mode": "train", "epoch": 9, "iter": 2700, "lr": 2e-05, "memory": 12588, "data_time": 0.0435, "loss_rpn_cls": 0.01723, "loss_rpn_bbox": 0.03715, "loss_cls": 0.16287, "acc": 94.04053, "loss_bbox": 0.1995, "loss_mask": 0.21049, "loss": 0.62726, "time": 0.50268} -{"mode": "train", "epoch": 9, "iter": 2750, "lr": 2e-05, "memory": 12588, "data_time": 0.04669, "loss_rpn_cls": 0.01706, "loss_rpn_bbox": 0.03559, "loss_cls": 0.16264, "acc": 94.18115, "loss_bbox": 0.20139, "loss_mask": 0.20839, "loss": 0.62506, "time": 0.50068} -{"mode": "train", "epoch": 9, "iter": 2800, "lr": 2e-05, "memory": 12588, "data_time": 0.05292, "loss_rpn_cls": 0.01697, "loss_rpn_bbox": 0.03739, "loss_cls": 0.1575, "acc": 94.2439, "loss_bbox": 0.20264, "loss_mask": 0.20848, "loss": 0.62298, "time": 0.50968} -{"mode": "train", "epoch": 9, "iter": 2850, "lr": 2e-05, "memory": 12588, "data_time": 0.05005, "loss_rpn_cls": 0.01715, "loss_rpn_bbox": 0.03809, "loss_cls": 0.16461, "acc": 93.91406, "loss_bbox": 0.20805, "loss_mask": 0.20882, "loss": 0.63672, "time": 0.52871} -{"mode": "train", "epoch": 9, "iter": 2900, "lr": 2e-05, "memory": 12588, "data_time": 0.04515, "loss_rpn_cls": 0.01785, "loss_rpn_bbox": 0.03983, "loss_cls": 0.16808, "acc": 93.73584, "loss_bbox": 0.2096, "loss_mask": 0.21322, "loss": 0.64858, "time": 0.57218} -{"mode": "train", "epoch": 9, "iter": 2950, "lr": 2e-05, "memory": 12588, "data_time": 0.04672, "loss_rpn_cls": 0.01773, "loss_rpn_bbox": 0.04153, "loss_cls": 0.17016, "acc": 93.65796, "loss_bbox": 0.21533, "loss_mask": 0.21137, "loss": 0.65611, "time": 0.52795} -{"mode": "train", "epoch": 9, "iter": 3000, "lr": 2e-05, "memory": 12588, "data_time": 0.04326, "loss_rpn_cls": 0.01724, "loss_rpn_bbox": 0.03975, "loss_cls": 0.1661, "acc": 93.86841, "loss_bbox": 0.20658, "loss_mask": 0.21366, "loss": 0.64332, "time": 0.5089} -{"mode": "train", "epoch": 9, "iter": 3050, "lr": 2e-05, "memory": 12588, "data_time": 0.04919, "loss_rpn_cls": 0.01568, "loss_rpn_bbox": 0.03679, "loss_cls": 0.1571, "acc": 94.19458, "loss_bbox": 0.19922, "loss_mask": 0.21097, "loss": 0.61977, "time": 0.51076} -{"mode": "train", "epoch": 9, "iter": 3100, "lr": 2e-05, "memory": 12588, "data_time": 0.04946, "loss_rpn_cls": 0.01717, "loss_rpn_bbox": 0.04122, "loss_cls": 0.16787, "acc": 93.74976, "loss_bbox": 0.21391, "loss_mask": 0.21775, "loss": 0.65793, "time": 0.51341} -{"mode": "train", "epoch": 9, "iter": 3150, "lr": 2e-05, "memory": 12588, "data_time": 0.04793, "loss_rpn_cls": 0.01802, "loss_rpn_bbox": 0.03903, "loss_cls": 0.16352, "acc": 94.00806, "loss_bbox": 
0.20357, "loss_mask": 0.21421, "loss": 0.63835, "time": 0.50393} -{"mode": "train", "epoch": 9, "iter": 3200, "lr": 2e-05, "memory": 12588, "data_time": 0.04336, "loss_rpn_cls": 0.01589, "loss_rpn_bbox": 0.0361, "loss_cls": 0.14878, "acc": 94.49512, "loss_bbox": 0.19073, "loss_mask": 0.20582, "loss": 0.59733, "time": 0.50569} -{"mode": "train", "epoch": 9, "iter": 3250, "lr": 2e-05, "memory": 12588, "data_time": 0.03798, "loss_rpn_cls": 0.01609, "loss_rpn_bbox": 0.03783, "loss_cls": 0.15782, "acc": 94.15576, "loss_bbox": 0.20393, "loss_mask": 0.21545, "loss": 0.63112, "time": 0.50756} -{"mode": "train", "epoch": 9, "iter": 3300, "lr": 2e-05, "memory": 12588, "data_time": 0.05004, "loss_rpn_cls": 0.01762, "loss_rpn_bbox": 0.03937, "loss_cls": 0.15674, "acc": 94.22339, "loss_bbox": 0.19482, "loss_mask": 0.20967, "loss": 0.61822, "time": 0.56117} -{"mode": "train", "epoch": 9, "iter": 3350, "lr": 2e-05, "memory": 12588, "data_time": 0.04326, "loss_rpn_cls": 0.01727, "loss_rpn_bbox": 0.03972, "loss_cls": 0.16079, "acc": 93.93945, "loss_bbox": 0.21056, "loss_mask": 0.21732, "loss": 0.64566, "time": 0.5136} -{"mode": "train", "epoch": 9, "iter": 3400, "lr": 2e-05, "memory": 12588, "data_time": 0.0385, "loss_rpn_cls": 0.0162, "loss_rpn_bbox": 0.03838, "loss_cls": 0.1568, "acc": 94.2124, "loss_bbox": 0.1997, "loss_mask": 0.21307, "loss": 0.62416, "time": 0.49692} -{"mode": "train", "epoch": 9, "iter": 3450, "lr": 2e-05, "memory": 12588, "data_time": 0.05332, "loss_rpn_cls": 0.01666, "loss_rpn_bbox": 0.0384, "loss_cls": 0.15673, "acc": 94.17358, "loss_bbox": 0.19782, "loss_mask": 0.20946, "loss": 0.61908, "time": 0.50974} -{"mode": "train", "epoch": 9, "iter": 3500, "lr": 2e-05, "memory": 12588, "data_time": 0.04672, "loss_rpn_cls": 0.01496, "loss_rpn_bbox": 0.03671, "loss_cls": 0.1547, "acc": 94.3252, "loss_bbox": 0.19452, "loss_mask": 0.20795, "loss": 0.60883, "time": 0.49755} -{"mode": "train", "epoch": 9, "iter": 3550, "lr": 2e-05, "memory": 12588, "data_time": 0.04526, "loss_rpn_cls": 0.01487, "loss_rpn_bbox": 0.03568, "loss_cls": 0.15354, "acc": 94.30054, "loss_bbox": 0.19001, "loss_mask": 0.21139, "loss": 0.60549, "time": 0.48756} -{"mode": "train", "epoch": 9, "iter": 3600, "lr": 2e-05, "memory": 12588, "data_time": 0.04166, "loss_rpn_cls": 0.01653, "loss_rpn_bbox": 0.03993, "loss_cls": 0.16136, "acc": 94.16138, "loss_bbox": 0.20372, "loss_mask": 0.21042, "loss": 0.63195, "time": 0.50897} -{"mode": "train", "epoch": 9, "iter": 3650, "lr": 2e-05, "memory": 12588, "data_time": 0.04381, "loss_rpn_cls": 0.01622, "loss_rpn_bbox": 0.03891, "loss_cls": 0.16016, "acc": 94.12061, "loss_bbox": 0.20041, "loss_mask": 0.21185, "loss": 0.62756, "time": 0.50229} -{"mode": "train", "epoch": 9, "iter": 3700, "lr": 2e-05, "memory": 12588, "data_time": 0.05015, "loss_rpn_cls": 0.01705, "loss_rpn_bbox": 0.03803, "loss_cls": 0.16976, "acc": 93.75659, "loss_bbox": 0.20897, "loss_mask": 0.21224, "loss": 0.64605, "time": 0.50775} -{"mode": "train", "epoch": 9, "iter": 3750, "lr": 2e-05, "memory": 12588, "data_time": 0.04094, "loss_rpn_cls": 0.01581, "loss_rpn_bbox": 0.03587, "loss_cls": 0.15923, "acc": 94.12842, "loss_bbox": 0.1989, "loss_mask": 0.20747, "loss": 0.61729, "time": 0.50889} -{"mode": "train", "epoch": 9, "iter": 3800, "lr": 2e-05, "memory": 12588, "data_time": 0.04494, "loss_rpn_cls": 0.01715, "loss_rpn_bbox": 0.03877, "loss_cls": 0.15811, "acc": 94.19775, "loss_bbox": 0.20234, "loss_mask": 0.20922, "loss": 0.62559, "time": 0.53759} -{"mode": "train", "epoch": 9, "iter": 3850, "lr": 2e-05, "memory": 
12588, "data_time": 0.04178, "loss_rpn_cls": 0.01676, "loss_rpn_bbox": 0.0388, "loss_cls": 0.16279, "acc": 93.94141, "loss_bbox": 0.20946, "loss_mask": 0.21352, "loss": 0.64132, "time": 0.50325} -{"mode": "train", "epoch": 9, "iter": 3900, "lr": 2e-05, "memory": 12588, "data_time": 0.04734, "loss_rpn_cls": 0.01608, "loss_rpn_bbox": 0.03719, "loss_cls": 0.15416, "acc": 94.33472, "loss_bbox": 0.19208, "loss_mask": 0.20629, "loss": 0.6058, "time": 0.51442} -{"mode": "train", "epoch": 9, "iter": 3950, "lr": 2e-05, "memory": 12588, "data_time": 0.05122, "loss_rpn_cls": 0.01678, "loss_rpn_bbox": 0.039, "loss_cls": 0.15566, "acc": 94.1604, "loss_bbox": 0.19865, "loss_mask": 0.21141, "loss": 0.62149, "time": 0.5078} -{"mode": "train", "epoch": 9, "iter": 4000, "lr": 2e-05, "memory": 12588, "data_time": 0.05169, "loss_rpn_cls": 0.01761, "loss_rpn_bbox": 0.04059, "loss_cls": 0.16328, "acc": 93.98022, "loss_bbox": 0.20626, "loss_mask": 0.20922, "loss": 0.63696, "time": 0.55551} -{"mode": "train", "epoch": 9, "iter": 4050, "lr": 2e-05, "memory": 12588, "data_time": 0.04636, "loss_rpn_cls": 0.01583, "loss_rpn_bbox": 0.03702, "loss_cls": 0.15629, "acc": 94.21753, "loss_bbox": 0.19848, "loss_mask": 0.20711, "loss": 0.61473, "time": 0.51011} -{"mode": "train", "epoch": 9, "iter": 4100, "lr": 2e-05, "memory": 12588, "data_time": 0.05156, "loss_rpn_cls": 0.0164, "loss_rpn_bbox": 0.03887, "loss_cls": 0.17112, "acc": 93.69263, "loss_bbox": 0.21277, "loss_mask": 0.2169, "loss": 0.65606, "time": 0.58976} -{"mode": "train", "epoch": 9, "iter": 4150, "lr": 2e-05, "memory": 12588, "data_time": 0.03999, "loss_rpn_cls": 0.01619, "loss_rpn_bbox": 0.03828, "loss_cls": 0.15632, "acc": 94.25879, "loss_bbox": 0.20411, "loss_mask": 0.20928, "loss": 0.62418, "time": 0.50671} -{"mode": "train", "epoch": 9, "iter": 4200, "lr": 2e-05, "memory": 12588, "data_time": 0.04299, "loss_rpn_cls": 0.01697, "loss_rpn_bbox": 0.03596, "loss_cls": 0.16071, "acc": 94.19312, "loss_bbox": 0.19636, "loss_mask": 0.20602, "loss": 0.61602, "time": 0.50437} -{"mode": "train", "epoch": 9, "iter": 4250, "lr": 2e-05, "memory": 12588, "data_time": 0.0533, "loss_rpn_cls": 0.01681, "loss_rpn_bbox": 0.04099, "loss_cls": 0.16438, "acc": 93.8728, "loss_bbox": 0.21036, "loss_mask": 0.21518, "loss": 0.64771, "time": 0.50648} -{"mode": "train", "epoch": 9, "iter": 4300, "lr": 2e-05, "memory": 12588, "data_time": 0.05393, "loss_rpn_cls": 0.01775, "loss_rpn_bbox": 0.03918, "loss_cls": 0.16022, "acc": 94.09033, "loss_bbox": 0.19915, "loss_mask": 0.21271, "loss": 0.629, "time": 0.51088} -{"mode": "train", "epoch": 9, "iter": 4350, "lr": 2e-05, "memory": 12588, "data_time": 0.03989, "loss_rpn_cls": 0.01589, "loss_rpn_bbox": 0.03756, "loss_cls": 0.15751, "acc": 94.17993, "loss_bbox": 0.19704, "loss_mask": 0.21066, "loss": 0.61865, "time": 0.49851} -{"mode": "train", "epoch": 9, "iter": 4400, "lr": 2e-05, "memory": 12588, "data_time": 0.04333, "loss_rpn_cls": 0.01522, "loss_rpn_bbox": 0.03704, "loss_cls": 0.15961, "acc": 94.12012, "loss_bbox": 0.19839, "loss_mask": 0.21272, "loss": 0.62297, "time": 0.48955} -{"mode": "train", "epoch": 9, "iter": 4450, "lr": 2e-05, "memory": 12588, "data_time": 0.04538, "loss_rpn_cls": 0.01659, "loss_rpn_bbox": 0.03814, "loss_cls": 0.15806, "acc": 94.15405, "loss_bbox": 0.19877, "loss_mask": 0.20993, "loss": 0.62148, "time": 0.50339} -{"mode": "train", "epoch": 9, "iter": 4500, "lr": 2e-05, "memory": 12588, "data_time": 0.04524, "loss_rpn_cls": 0.01774, "loss_rpn_bbox": 0.03856, "loss_cls": 0.16702, "acc": 93.83691, "loss_bbox": 
0.21015, "loss_mask": 0.21991, "loss": 0.65338, "time": 0.51374} -{"mode": "train", "epoch": 9, "iter": 4550, "lr": 2e-05, "memory": 12588, "data_time": 0.04627, "loss_rpn_cls": 0.01718, "loss_rpn_bbox": 0.04024, "loss_cls": 0.16308, "acc": 93.89111, "loss_bbox": 0.20828, "loss_mask": 0.21134, "loss": 0.64013, "time": 0.50043} -{"mode": "train", "epoch": 9, "iter": 4600, "lr": 2e-05, "memory": 12588, "data_time": 0.046, "loss_rpn_cls": 0.01547, "loss_rpn_bbox": 0.03941, "loss_cls": 0.15949, "acc": 94.01709, "loss_bbox": 0.2061, "loss_mask": 0.21684, "loss": 0.63732, "time": 0.50266} -{"mode": "train", "epoch": 9, "iter": 4650, "lr": 2e-05, "memory": 12588, "data_time": 0.03726, "loss_rpn_cls": 0.0154, "loss_rpn_bbox": 0.03672, "loss_cls": 0.15032, "acc": 94.49243, "loss_bbox": 0.18962, "loss_mask": 0.2076, "loss": 0.59966, "time": 0.496} -{"mode": "train", "epoch": 9, "iter": 4700, "lr": 2e-05, "memory": 12588, "data_time": 0.04651, "loss_rpn_cls": 0.01635, "loss_rpn_bbox": 0.03631, "loss_cls": 0.15575, "acc": 94.17627, "loss_bbox": 0.19552, "loss_mask": 0.21185, "loss": 0.61577, "time": 0.50216} -{"mode": "train", "epoch": 9, "iter": 4750, "lr": 2e-05, "memory": 12588, "data_time": 0.04381, "loss_rpn_cls": 0.0177, "loss_rpn_bbox": 0.0393, "loss_cls": 0.16807, "acc": 93.79712, "loss_bbox": 0.21181, "loss_mask": 0.2153, "loss": 0.65218, "time": 0.5189} -{"mode": "train", "epoch": 9, "iter": 4800, "lr": 2e-05, "memory": 12588, "data_time": 0.04661, "loss_rpn_cls": 0.01655, "loss_rpn_bbox": 0.03802, "loss_cls": 0.16342, "acc": 94.0769, "loss_bbox": 0.20286, "loss_mask": 0.21136, "loss": 0.6322, "time": 0.50299} -{"mode": "train", "epoch": 9, "iter": 4850, "lr": 2e-05, "memory": 12588, "data_time": 0.04006, "loss_rpn_cls": 0.01515, "loss_rpn_bbox": 0.03652, "loss_cls": 0.15129, "acc": 94.42163, "loss_bbox": 0.19477, "loss_mask": 0.21058, "loss": 0.60831, "time": 0.49788} -{"mode": "train", "epoch": 9, "iter": 4900, "lr": 2e-05, "memory": 12588, "data_time": 0.04729, "loss_rpn_cls": 0.01644, "loss_rpn_bbox": 0.03868, "loss_cls": 0.1644, "acc": 93.82568, "loss_bbox": 0.2088, "loss_mask": 0.21436, "loss": 0.64268, "time": 0.50769} -{"mode": "train", "epoch": 9, "iter": 4950, "lr": 2e-05, "memory": 12588, "data_time": 0.04168, "loss_rpn_cls": 0.0159, "loss_rpn_bbox": 0.0361, "loss_cls": 0.15153, "acc": 94.40625, "loss_bbox": 0.19492, "loss_mask": 0.21218, "loss": 0.61064, "time": 0.49888} -{"mode": "train", "epoch": 9, "iter": 5000, "lr": 2e-05, "memory": 12588, "data_time": 0.04233, "loss_rpn_cls": 0.01736, "loss_rpn_bbox": 0.03889, "loss_cls": 0.15986, "acc": 94.11011, "loss_bbox": 0.2007, "loss_mask": 0.21289, "loss": 0.6297, "time": 0.49988} -{"mode": "train", "epoch": 9, "iter": 5050, "lr": 2e-05, "memory": 12588, "data_time": 0.05062, "loss_rpn_cls": 0.01797, "loss_rpn_bbox": 0.04003, "loss_cls": 0.16216, "acc": 94.04175, "loss_bbox": 0.20092, "loss_mask": 0.21343, "loss": 0.6345, "time": 0.50538} -{"mode": "train", "epoch": 9, "iter": 5100, "lr": 2e-05, "memory": 12588, "data_time": 0.03998, "loss_rpn_cls": 0.01579, "loss_rpn_bbox": 0.04025, "loss_cls": 0.1595, "acc": 94.10791, "loss_bbox": 0.20166, "loss_mask": 0.21086, "loss": 0.62806, "time": 0.50868} -{"mode": "train", "epoch": 9, "iter": 5150, "lr": 2e-05, "memory": 12588, "data_time": 0.04768, "loss_rpn_cls": 0.01629, "loss_rpn_bbox": 0.03704, "loss_cls": 0.15648, "acc": 94.2002, "loss_bbox": 0.20147, "loss_mask": 0.21597, "loss": 0.62726, "time": 0.50058} -{"mode": "train", "epoch": 9, "iter": 5200, "lr": 2e-05, "memory": 12588, 
"data_time": 0.03904, "loss_rpn_cls": 0.01634, "loss_rpn_bbox": 0.03527, "loss_cls": 0.15632, "acc": 94.3208, "loss_bbox": 0.1943, "loss_mask": 0.2111, "loss": 0.61332, "time": 0.49864} -{"mode": "train", "epoch": 9, "iter": 5250, "lr": 2e-05, "memory": 12588, "data_time": 0.03957, "loss_rpn_cls": 0.01558, "loss_rpn_bbox": 0.03803, "loss_cls": 0.15573, "acc": 94.24927, "loss_bbox": 0.19894, "loss_mask": 0.21281, "loss": 0.62108, "time": 0.50553} -{"mode": "train", "epoch": 9, "iter": 5300, "lr": 2e-05, "memory": 12588, "data_time": 0.03991, "loss_rpn_cls": 0.01656, "loss_rpn_bbox": 0.03886, "loss_cls": 0.15916, "acc": 94.1001, "loss_bbox": 0.19858, "loss_mask": 0.21177, "loss": 0.62493, "time": 0.61668} -{"mode": "train", "epoch": 9, "iter": 5350, "lr": 2e-05, "memory": 12588, "data_time": 0.04423, "loss_rpn_cls": 0.01719, "loss_rpn_bbox": 0.03821, "loss_cls": 0.15957, "acc": 94.15063, "loss_bbox": 0.20402, "loss_mask": 0.21569, "loss": 0.63468, "time": 0.50761} -{"mode": "train", "epoch": 9, "iter": 5400, "lr": 2e-05, "memory": 12588, "data_time": 0.03915, "loss_rpn_cls": 0.01639, "loss_rpn_bbox": 0.03784, "loss_cls": 0.15877, "acc": 94.07593, "loss_bbox": 0.20185, "loss_mask": 0.21411, "loss": 0.62897, "time": 0.50031} -{"mode": "train", "epoch": 9, "iter": 5450, "lr": 2e-05, "memory": 12588, "data_time": 0.03888, "loss_rpn_cls": 0.01594, "loss_rpn_bbox": 0.03638, "loss_cls": 0.15695, "acc": 94.1897, "loss_bbox": 0.19767, "loss_mask": 0.20873, "loss": 0.61567, "time": 0.50144} -{"mode": "train", "epoch": 9, "iter": 5500, "lr": 2e-05, "memory": 12588, "data_time": 0.03999, "loss_rpn_cls": 0.01712, "loss_rpn_bbox": 0.03954, "loss_cls": 0.16195, "acc": 94.04785, "loss_bbox": 0.20764, "loss_mask": 0.21447, "loss": 0.64073, "time": 0.50789} -{"mode": "train", "epoch": 9, "iter": 5550, "lr": 2e-05, "memory": 12588, "data_time": 0.05327, "loss_rpn_cls": 0.01693, "loss_rpn_bbox": 0.03861, "loss_cls": 0.16048, "acc": 94.03613, "loss_bbox": 0.20158, "loss_mask": 0.21116, "loss": 0.62876, "time": 0.49878} -{"mode": "train", "epoch": 9, "iter": 5600, "lr": 2e-05, "memory": 12588, "data_time": 0.04207, "loss_rpn_cls": 0.01792, "loss_rpn_bbox": 0.03705, "loss_cls": 0.15417, "acc": 94.39966, "loss_bbox": 0.19164, "loss_mask": 0.20741, "loss": 0.60818, "time": 0.50439} -{"mode": "train", "epoch": 9, "iter": 5650, "lr": 2e-05, "memory": 12588, "data_time": 0.05011, "loss_rpn_cls": 0.01681, "loss_rpn_bbox": 0.03974, "loss_cls": 0.16379, "acc": 93.927, "loss_bbox": 0.20918, "loss_mask": 0.21303, "loss": 0.64254, "time": 0.50571} -{"mode": "train", "epoch": 9, "iter": 5700, "lr": 2e-05, "memory": 12588, "data_time": 0.04669, "loss_rpn_cls": 0.01629, "loss_rpn_bbox": 0.03876, "loss_cls": 0.16232, "acc": 93.99097, "loss_bbox": 0.20896, "loss_mask": 0.21447, "loss": 0.6408, "time": 0.50399} -{"mode": "train", "epoch": 9, "iter": 5750, "lr": 2e-05, "memory": 12588, "data_time": 0.04573, "loss_rpn_cls": 0.01503, "loss_rpn_bbox": 0.03617, "loss_cls": 0.15366, "acc": 94.36108, "loss_bbox": 0.1952, "loss_mask": 0.2091, "loss": 0.60916, "time": 0.50577} -{"mode": "train", "epoch": 9, "iter": 5800, "lr": 2e-05, "memory": 12588, "data_time": 0.04877, "loss_rpn_cls": 0.01645, "loss_rpn_bbox": 0.03682, "loss_cls": 0.1609, "acc": 93.98022, "loss_bbox": 0.20042, "loss_mask": 0.20986, "loss": 0.62444, "time": 0.49217} -{"mode": "train", "epoch": 9, "iter": 5850, "lr": 2e-05, "memory": 12588, "data_time": 0.04453, "loss_rpn_cls": 0.01584, "loss_rpn_bbox": 0.0371, "loss_cls": 0.16194, "acc": 94.05078, "loss_bbox": 0.203, 
"loss_mask": 0.21239, "loss": 0.63028, "time": 0.49468} -{"mode": "train", "epoch": 9, "iter": 5900, "lr": 2e-05, "memory": 12588, "data_time": 0.05033, "loss_rpn_cls": 0.01651, "loss_rpn_bbox": 0.03934, "loss_cls": 0.15789, "acc": 94.19531, "loss_bbox": 0.19719, "loss_mask": 0.21499, "loss": 0.62591, "time": 0.59381} -{"mode": "train", "epoch": 9, "iter": 5950, "lr": 2e-05, "memory": 12588, "data_time": 0.04369, "loss_rpn_cls": 0.01642, "loss_rpn_bbox": 0.03815, "loss_cls": 0.15659, "acc": 94.26855, "loss_bbox": 0.19888, "loss_mask": 0.21115, "loss": 0.62119, "time": 0.55145} -{"mode": "train", "epoch": 9, "iter": 6000, "lr": 2e-05, "memory": 12588, "data_time": 0.04246, "loss_rpn_cls": 0.01535, "loss_rpn_bbox": 0.03833, "loss_cls": 0.16433, "acc": 93.9375, "loss_bbox": 0.20538, "loss_mask": 0.21054, "loss": 0.63394, "time": 0.49211} -{"mode": "train", "epoch": 9, "iter": 6050, "lr": 2e-05, "memory": 12588, "data_time": 0.04675, "loss_rpn_cls": 0.01607, "loss_rpn_bbox": 0.03952, "loss_cls": 0.15717, "acc": 94.15967, "loss_bbox": 0.20019, "loss_mask": 0.21273, "loss": 0.62569, "time": 0.5456} -{"mode": "train", "epoch": 9, "iter": 6100, "lr": 2e-05, "memory": 12589, "data_time": 0.04782, "loss_rpn_cls": 0.01751, "loss_rpn_bbox": 0.04026, "loss_cls": 0.16168, "acc": 93.91772, "loss_bbox": 0.20763, "loss_mask": 0.214, "loss": 0.64108, "time": 0.49737} -{"mode": "train", "epoch": 9, "iter": 6150, "lr": 2e-05, "memory": 12589, "data_time": 0.05077, "loss_rpn_cls": 0.01579, "loss_rpn_bbox": 0.03842, "loss_cls": 0.15573, "acc": 94.17139, "loss_bbox": 0.19527, "loss_mask": 0.20919, "loss": 0.6144, "time": 0.50596} -{"mode": "train", "epoch": 9, "iter": 6200, "lr": 2e-05, "memory": 12589, "data_time": 0.03892, "loss_rpn_cls": 0.01714, "loss_rpn_bbox": 0.03761, "loss_cls": 0.15736, "acc": 94.2854, "loss_bbox": 0.19335, "loss_mask": 0.20906, "loss": 0.61453, "time": 0.4955} -{"mode": "train", "epoch": 9, "iter": 6250, "lr": 2e-05, "memory": 12589, "data_time": 0.04477, "loss_rpn_cls": 0.01727, "loss_rpn_bbox": 0.03518, "loss_cls": 0.15027, "acc": 94.52148, "loss_bbox": 0.1871, "loss_mask": 0.20262, "loss": 0.59244, "time": 0.5133} -{"mode": "train", "epoch": 9, "iter": 6300, "lr": 2e-05, "memory": 12589, "data_time": 0.04308, "loss_rpn_cls": 0.018, "loss_rpn_bbox": 0.0406, "loss_cls": 0.15902, "acc": 94.16382, "loss_bbox": 0.19851, "loss_mask": 0.2091, "loss": 0.62522, "time": 0.50683} -{"mode": "train", "epoch": 9, "iter": 6350, "lr": 2e-05, "memory": 12589, "data_time": 0.04655, "loss_rpn_cls": 0.01793, "loss_rpn_bbox": 0.03913, "loss_cls": 0.16077, "acc": 94.09375, "loss_bbox": 0.2031, "loss_mask": 0.21245, "loss": 0.63338, "time": 0.50989} -{"mode": "train", "epoch": 9, "iter": 6400, "lr": 2e-05, "memory": 12589, "data_time": 0.04055, "loss_rpn_cls": 0.01582, "loss_rpn_bbox": 0.03579, "loss_cls": 0.15011, "acc": 94.4165, "loss_bbox": 0.18666, "loss_mask": 0.20707, "loss": 0.59544, "time": 0.50502} -{"mode": "train", "epoch": 9, "iter": 6450, "lr": 2e-05, "memory": 12589, "data_time": 0.04217, "loss_rpn_cls": 0.01732, "loss_rpn_bbox": 0.03835, "loss_cls": 0.16815, "acc": 93.86182, "loss_bbox": 0.21294, "loss_mask": 0.21578, "loss": 0.65254, "time": 0.49989} -{"mode": "train", "epoch": 9, "iter": 6500, "lr": 2e-05, "memory": 12589, "data_time": 0.0418, "loss_rpn_cls": 0.01697, "loss_rpn_bbox": 0.03978, "loss_cls": 0.16656, "acc": 93.75317, "loss_bbox": 0.213, "loss_mask": 0.21427, "loss": 0.65058, "time": 0.50246} -{"mode": "train", "epoch": 9, "iter": 6550, "lr": 2e-05, "memory": 12589, 
"data_time": 0.04253, "loss_rpn_cls": 0.01667, "loss_rpn_bbox": 0.03978, "loss_cls": 0.16048, "acc": 93.99268, "loss_bbox": 0.2057, "loss_mask": 0.21446, "loss": 0.63709, "time": 0.50051} -{"mode": "train", "epoch": 9, "iter": 6600, "lr": 2e-05, "memory": 12589, "data_time": 0.03986, "loss_rpn_cls": 0.01457, "loss_rpn_bbox": 0.03748, "loss_cls": 0.15406, "acc": 94.29834, "loss_bbox": 0.19773, "loss_mask": 0.20611, "loss": 0.60994, "time": 0.52141} -{"mode": "train", "epoch": 9, "iter": 6650, "lr": 2e-05, "memory": 12589, "data_time": 0.03864, "loss_rpn_cls": 0.01505, "loss_rpn_bbox": 0.03717, "loss_cls": 0.15104, "acc": 94.37305, "loss_bbox": 0.19513, "loss_mask": 0.21045, "loss": 0.60884, "time": 0.50345} -{"mode": "train", "epoch": 9, "iter": 6700, "lr": 2e-05, "memory": 12589, "data_time": 0.04336, "loss_rpn_cls": 0.01548, "loss_rpn_bbox": 0.03871, "loss_cls": 0.15983, "acc": 94.10229, "loss_bbox": 0.20401, "loss_mask": 0.21096, "loss": 0.62899, "time": 0.508} -{"mode": "train", "epoch": 9, "iter": 6750, "lr": 2e-05, "memory": 12589, "data_time": 0.04528, "loss_rpn_cls": 0.01742, "loss_rpn_bbox": 0.03763, "loss_cls": 0.1634, "acc": 94.00537, "loss_bbox": 0.2034, "loss_mask": 0.21657, "loss": 0.63842, "time": 0.50427} -{"mode": "train", "epoch": 9, "iter": 6800, "lr": 2e-05, "memory": 12589, "data_time": 0.04821, "loss_rpn_cls": 0.0161, "loss_rpn_bbox": 0.03881, "loss_cls": 0.15452, "acc": 94.25806, "loss_bbox": 0.19645, "loss_mask": 0.20881, "loss": 0.6147, "time": 0.49631} -{"mode": "train", "epoch": 9, "iter": 6850, "lr": 2e-05, "memory": 12589, "data_time": 0.04831, "loss_rpn_cls": 0.01573, "loss_rpn_bbox": 0.03778, "loss_cls": 0.15775, "acc": 94.21997, "loss_bbox": 0.19932, "loss_mask": 0.20629, "loss": 0.61687, "time": 0.50558} -{"mode": "train", "epoch": 9, "iter": 6900, "lr": 2e-05, "memory": 12589, "data_time": 0.04986, "loss_rpn_cls": 0.01599, "loss_rpn_bbox": 0.03816, "loss_cls": 0.16194, "acc": 94.1189, "loss_bbox": 0.20125, "loss_mask": 0.21455, "loss": 0.6319, "time": 0.50379} -{"mode": "train", "epoch": 9, "iter": 6950, "lr": 2e-05, "memory": 12589, "data_time": 0.04299, "loss_rpn_cls": 0.01624, "loss_rpn_bbox": 0.0367, "loss_cls": 0.15141, "acc": 94.37988, "loss_bbox": 0.19765, "loss_mask": 0.20882, "loss": 0.61081, "time": 0.5522} -{"mode": "train", "epoch": 9, "iter": 7000, "lr": 2e-05, "memory": 12589, "data_time": 0.04847, "loss_rpn_cls": 0.01758, "loss_rpn_bbox": 0.03894, "loss_cls": 0.16743, "acc": 93.84229, "loss_bbox": 0.20724, "loss_mask": 0.21283, "loss": 0.64403, "time": 0.5064} -{"mode": "train", "epoch": 9, "iter": 7050, "lr": 2e-05, "memory": 12589, "data_time": 0.03588, "loss_rpn_cls": 0.01672, "loss_rpn_bbox": 0.03768, "loss_cls": 0.16107, "acc": 93.96924, "loss_bbox": 0.20581, "loss_mask": 0.2148, "loss": 0.63606, "time": 0.50298} -{"mode": "train", "epoch": 9, "iter": 7100, "lr": 2e-05, "memory": 12589, "data_time": 0.04552, "loss_rpn_cls": 0.01677, "loss_rpn_bbox": 0.03674, "loss_cls": 0.15719, "acc": 94.10547, "loss_bbox": 0.20314, "loss_mask": 0.20831, "loss": 0.62216, "time": 0.50312} -{"mode": "train", "epoch": 9, "iter": 7150, "lr": 2e-05, "memory": 12589, "data_time": 0.05709, "loss_rpn_cls": 0.01554, "loss_rpn_bbox": 0.03593, "loss_cls": 0.15334, "acc": 94.33984, "loss_bbox": 0.19776, "loss_mask": 0.20921, "loss": 0.61177, "time": 0.56123} -{"mode": "train", "epoch": 9, "iter": 7200, "lr": 2e-05, "memory": 12589, "data_time": 0.04839, "loss_rpn_cls": 0.01548, "loss_rpn_bbox": 0.03809, "loss_cls": 0.15655, "acc": 94.15869, "loss_bbox": 0.19901, 
"loss_mask": 0.2128, "loss": 0.62193, "time": 0.50366} -{"mode": "train", "epoch": 9, "iter": 7250, "lr": 2e-05, "memory": 12589, "data_time": 0.04121, "loss_rpn_cls": 0.01775, "loss_rpn_bbox": 0.03913, "loss_cls": 0.16357, "acc": 93.90918, "loss_bbox": 0.20447, "loss_mask": 0.21352, "loss": 0.63844, "time": 0.49565} -{"mode": "train", "epoch": 9, "iter": 7300, "lr": 2e-05, "memory": 12589, "data_time": 0.0475, "loss_rpn_cls": 0.01497, "loss_rpn_bbox": 0.03756, "loss_cls": 0.15642, "acc": 94.05713, "loss_bbox": 0.19351, "loss_mask": 0.20485, "loss": 0.60731, "time": 0.51787} -{"mode": "val", "epoch": 9, "iter": 625, "lr": 2e-05, "bbox_mAP": 0.4408, "bbox_mAP_50": 0.6579, "bbox_mAP_75": 0.4856, "bbox_mAP_s": 0.2737, "bbox_mAP_m": 0.4754, "bbox_mAP_l": 0.5773, "bbox_mAP_copypaste": "0.4408 0.6579 0.4856 0.2737 0.4754 0.5773", "segm_mAP": 0.4029, "segm_mAP_50": 0.6294, "segm_mAP_75": 0.4315, "segm_mAP_s": 0.2058, "segm_mAP_m": 0.4301, "segm_mAP_l": 0.5843, "segm_mAP_copypaste": "0.4029 0.6294 0.4315 0.2058 0.4301 0.5843"} -{"mode": "train", "epoch": 10, "iter": 50, "lr": 2e-05, "memory": 12589, "data_time": 0.13268, "loss_rpn_cls": 0.01496, "loss_rpn_bbox": 0.03723, "loss_cls": 0.15711, "acc": 94.11548, "loss_bbox": 0.2026, "loss_mask": 0.20812, "loss": 0.62002, "time": 0.62617} -{"mode": "train", "epoch": 10, "iter": 100, "lr": 2e-05, "memory": 12589, "data_time": 0.04644, "loss_rpn_cls": 0.01485, "loss_rpn_bbox": 0.03638, "loss_cls": 0.14608, "acc": 94.48413, "loss_bbox": 0.18636, "loss_mask": 0.20406, "loss": 0.58772, "time": 0.52148} -{"mode": "train", "epoch": 10, "iter": 150, "lr": 2e-05, "memory": 12589, "data_time": 0.05322, "loss_rpn_cls": 0.01582, "loss_rpn_bbox": 0.03682, "loss_cls": 0.15273, "acc": 94.32153, "loss_bbox": 0.19518, "loss_mask": 0.20961, "loss": 0.61016, "time": 0.5198} -{"mode": "train", "epoch": 10, "iter": 200, "lr": 2e-05, "memory": 12589, "data_time": 0.04844, "loss_rpn_cls": 0.01706, "loss_rpn_bbox": 0.04014, "loss_cls": 0.15874, "acc": 94.08887, "loss_bbox": 0.20165, "loss_mask": 0.21402, "loss": 0.63162, "time": 0.53012} -{"mode": "train", "epoch": 10, "iter": 250, "lr": 2e-05, "memory": 12589, "data_time": 0.04629, "loss_rpn_cls": 0.01661, "loss_rpn_bbox": 0.04073, "loss_cls": 0.1619, "acc": 93.9624, "loss_bbox": 0.21218, "loss_mask": 0.21018, "loss": 0.6416, "time": 0.60006} -{"mode": "train", "epoch": 10, "iter": 300, "lr": 2e-05, "memory": 12589, "data_time": 0.04454, "loss_rpn_cls": 0.01516, "loss_rpn_bbox": 0.03474, "loss_cls": 0.14649, "acc": 94.59814, "loss_bbox": 0.19208, "loss_mask": 0.20529, "loss": 0.59375, "time": 0.5242} -{"mode": "train", "epoch": 10, "iter": 350, "lr": 2e-05, "memory": 12589, "data_time": 0.05287, "loss_rpn_cls": 0.01526, "loss_rpn_bbox": 0.03783, "loss_cls": 0.15581, "acc": 94.21191, "loss_bbox": 0.19741, "loss_mask": 0.20692, "loss": 0.61323, "time": 0.53464} -{"mode": "train", "epoch": 10, "iter": 400, "lr": 2e-05, "memory": 12589, "data_time": 0.05364, "loss_rpn_cls": 0.01535, "loss_rpn_bbox": 0.03893, "loss_cls": 0.15962, "acc": 93.99902, "loss_bbox": 0.20259, "loss_mask": 0.21055, "loss": 0.62704, "time": 0.5221} -{"mode": "train", "epoch": 10, "iter": 450, "lr": 2e-05, "memory": 12589, "data_time": 0.04863, "loss_rpn_cls": 0.01408, "loss_rpn_bbox": 0.03739, "loss_cls": 0.15045, "acc": 94.31714, "loss_bbox": 0.19014, "loss_mask": 0.20868, "loss": 0.60074, "time": 0.51725} -{"mode": "train", "epoch": 10, "iter": 500, "lr": 2e-05, "memory": 12589, "data_time": 0.0483, "loss_rpn_cls": 0.0147, "loss_rpn_bbox": 0.03945, 
"loss_cls": 0.15475, "acc": 94.29321, "loss_bbox": 0.19606, "loss_mask": 0.20581, "loss": 0.61077, "time": 0.50819} -{"mode": "train", "epoch": 10, "iter": 550, "lr": 2e-05, "memory": 12589, "data_time": 0.04982, "loss_rpn_cls": 0.01531, "loss_rpn_bbox": 0.03691, "loss_cls": 0.15241, "acc": 94.34082, "loss_bbox": 0.19771, "loss_mask": 0.20211, "loss": 0.60445, "time": 0.49657} -{"mode": "train", "epoch": 10, "iter": 600, "lr": 2e-05, "memory": 12589, "data_time": 0.05097, "loss_rpn_cls": 0.01532, "loss_rpn_bbox": 0.03858, "loss_cls": 0.1585, "acc": 94.01611, "loss_bbox": 0.20599, "loss_mask": 0.21006, "loss": 0.62845, "time": 0.51939} -{"mode": "train", "epoch": 10, "iter": 650, "lr": 2e-05, "memory": 12589, "data_time": 0.05384, "loss_rpn_cls": 0.01554, "loss_rpn_bbox": 0.03909, "loss_cls": 0.15465, "acc": 94.19751, "loss_bbox": 0.19765, "loss_mask": 0.21095, "loss": 0.61788, "time": 0.57117} -{"mode": "train", "epoch": 10, "iter": 700, "lr": 2e-05, "memory": 12589, "data_time": 0.04353, "loss_rpn_cls": 0.01397, "loss_rpn_bbox": 0.03556, "loss_cls": 0.15154, "acc": 94.36157, "loss_bbox": 0.19157, "loss_mask": 0.20636, "loss": 0.59901, "time": 0.49409} -{"mode": "train", "epoch": 10, "iter": 750, "lr": 2e-05, "memory": 12589, "data_time": 0.0399, "loss_rpn_cls": 0.01517, "loss_rpn_bbox": 0.03734, "loss_cls": 0.15254, "acc": 94.27197, "loss_bbox": 0.19557, "loss_mask": 0.20584, "loss": 0.60646, "time": 0.55632} -{"mode": "train", "epoch": 10, "iter": 800, "lr": 2e-05, "memory": 12589, "data_time": 0.05625, "loss_rpn_cls": 0.01636, "loss_rpn_bbox": 0.03905, "loss_cls": 0.15213, "acc": 94.31543, "loss_bbox": 0.19461, "loss_mask": 0.20556, "loss": 0.60772, "time": 0.51931} -{"mode": "train", "epoch": 10, "iter": 850, "lr": 2e-05, "memory": 12589, "data_time": 0.05245, "loss_rpn_cls": 0.01582, "loss_rpn_bbox": 0.03836, "loss_cls": 0.15049, "acc": 94.30029, "loss_bbox": 0.20062, "loss_mask": 0.21259, "loss": 0.61788, "time": 0.51186} -{"mode": "train", "epoch": 10, "iter": 900, "lr": 2e-05, "memory": 12589, "data_time": 0.04243, "loss_rpn_cls": 0.0155, "loss_rpn_bbox": 0.03688, "loss_cls": 0.15025, "acc": 94.4707, "loss_bbox": 0.19027, "loss_mask": 0.20386, "loss": 0.59675, "time": 0.50075} -{"mode": "train", "epoch": 10, "iter": 950, "lr": 2e-05, "memory": 12589, "data_time": 0.0441, "loss_rpn_cls": 0.01653, "loss_rpn_bbox": 0.03762, "loss_cls": 0.15692, "acc": 94.11011, "loss_bbox": 0.19996, "loss_mask": 0.2078, "loss": 0.61884, "time": 0.5136} -{"mode": "train", "epoch": 10, "iter": 1000, "lr": 2e-05, "memory": 12589, "data_time": 0.03396, "loss_rpn_cls": 0.01475, "loss_rpn_bbox": 0.03685, "loss_cls": 0.14559, "acc": 94.49731, "loss_bbox": 0.19369, "loss_mask": 0.20947, "loss": 0.60034, "time": 0.5004} -{"mode": "train", "epoch": 10, "iter": 1050, "lr": 2e-05, "memory": 12589, "data_time": 0.03985, "loss_rpn_cls": 0.01566, "loss_rpn_bbox": 0.03681, "loss_cls": 0.15415, "acc": 94.23511, "loss_bbox": 0.20084, "loss_mask": 0.21088, "loss": 0.61834, "time": 0.50351} -{"mode": "train", "epoch": 10, "iter": 1100, "lr": 2e-05, "memory": 12589, "data_time": 0.05144, "loss_rpn_cls": 0.01684, "loss_rpn_bbox": 0.04113, "loss_cls": 0.15939, "acc": 93.92285, "loss_bbox": 0.21092, "loss_mask": 0.21067, "loss": 0.63895, "time": 0.52582} -{"mode": "train", "epoch": 10, "iter": 1150, "lr": 2e-05, "memory": 12589, "data_time": 0.05348, "loss_rpn_cls": 0.01611, "loss_rpn_bbox": 0.03657, "loss_cls": 0.15163, "acc": 94.32471, "loss_bbox": 0.1984, "loss_mask": 0.21288, "loss": 0.61559, "time": 0.51529} -{"mode": 
"train", "epoch": 10, "iter": 1200, "lr": 2e-05, "memory": 12589, "data_time": 0.04444, "loss_rpn_cls": 0.01562, "loss_rpn_bbox": 0.03923, "loss_cls": 0.16422, "acc": 93.86035, "loss_bbox": 0.20873, "loss_mask": 0.212, "loss": 0.6398, "time": 0.51373} -{"mode": "train", "epoch": 10, "iter": 1250, "lr": 2e-05, "memory": 12589, "data_time": 0.04473, "loss_rpn_cls": 0.01543, "loss_rpn_bbox": 0.0388, "loss_cls": 0.14823, "acc": 94.40674, "loss_bbox": 0.18942, "loss_mask": 0.20265, "loss": 0.59453, "time": 0.50007} -{"mode": "train", "epoch": 10, "iter": 1300, "lr": 2e-05, "memory": 12589, "data_time": 0.04323, "loss_rpn_cls": 0.01377, "loss_rpn_bbox": 0.03613, "loss_cls": 0.1459, "acc": 94.51709, "loss_bbox": 0.19247, "loss_mask": 0.20137, "loss": 0.58964, "time": 0.50526} -{"mode": "train", "epoch": 10, "iter": 1350, "lr": 2e-05, "memory": 12589, "data_time": 0.04702, "loss_rpn_cls": 0.01538, "loss_rpn_bbox": 0.03839, "loss_cls": 0.15759, "acc": 94.22852, "loss_bbox": 0.20568, "loss_mask": 0.21579, "loss": 0.63284, "time": 0.56169} -{"mode": "train", "epoch": 10, "iter": 1400, "lr": 2e-05, "memory": 12589, "data_time": 0.044, "loss_rpn_cls": 0.01634, "loss_rpn_bbox": 0.03794, "loss_cls": 0.16327, "acc": 93.97729, "loss_bbox": 0.20371, "loss_mask": 0.21123, "loss": 0.6325, "time": 0.52236} -{"mode": "train", "epoch": 10, "iter": 1450, "lr": 2e-05, "memory": 12589, "data_time": 0.04773, "loss_rpn_cls": 0.01536, "loss_rpn_bbox": 0.03677, "loss_cls": 0.15258, "acc": 94.35229, "loss_bbox": 0.19679, "loss_mask": 0.20954, "loss": 0.61104, "time": 0.56546} -{"mode": "train", "epoch": 10, "iter": 1500, "lr": 2e-05, "memory": 12589, "data_time": 0.05267, "loss_rpn_cls": 0.01561, "loss_rpn_bbox": 0.03819, "loss_cls": 0.15161, "acc": 94.30688, "loss_bbox": 0.19865, "loss_mask": 0.20777, "loss": 0.61182, "time": 0.50459} -{"mode": "train", "epoch": 10, "iter": 1550, "lr": 2e-05, "memory": 12589, "data_time": 0.04555, "loss_rpn_cls": 0.01562, "loss_rpn_bbox": 0.03957, "loss_cls": 0.15439, "acc": 94.14917, "loss_bbox": 0.20498, "loss_mask": 0.21355, "loss": 0.62811, "time": 0.51102} -{"mode": "train", "epoch": 10, "iter": 1600, "lr": 2e-05, "memory": 12589, "data_time": 0.0407, "loss_rpn_cls": 0.01563, "loss_rpn_bbox": 0.03747, "loss_cls": 0.15671, "acc": 94.14648, "loss_bbox": 0.20231, "loss_mask": 0.21281, "loss": 0.62493, "time": 0.49579} -{"mode": "train", "epoch": 10, "iter": 1650, "lr": 2e-05, "memory": 12589, "data_time": 0.04467, "loss_rpn_cls": 0.01476, "loss_rpn_bbox": 0.03912, "loss_cls": 0.15714, "acc": 94.1499, "loss_bbox": 0.20134, "loss_mask": 0.21329, "loss": 0.62565, "time": 0.51305} -{"mode": "train", "epoch": 10, "iter": 1700, "lr": 2e-05, "memory": 12589, "data_time": 0.04855, "loss_rpn_cls": 0.01564, "loss_rpn_bbox": 0.03707, "loss_cls": 0.15076, "acc": 94.30664, "loss_bbox": 0.20026, "loss_mask": 0.2102, "loss": 0.61393, "time": 0.5043} -{"mode": "train", "epoch": 10, "iter": 1750, "lr": 2e-05, "memory": 12589, "data_time": 0.04277, "loss_rpn_cls": 0.01402, "loss_rpn_bbox": 0.03563, "loss_cls": 0.14623, "acc": 94.58325, "loss_bbox": 0.18785, "loss_mask": 0.20874, "loss": 0.59247, "time": 0.50529} -{"mode": "train", "epoch": 10, "iter": 1800, "lr": 2e-05, "memory": 12589, "data_time": 0.0425, "loss_rpn_cls": 0.01438, "loss_rpn_bbox": 0.03575, "loss_cls": 0.14905, "acc": 94.5, "loss_bbox": 0.19078, "loss_mask": 0.20434, "loss": 0.59431, "time": 0.51794} -{"mode": "train", "epoch": 10, "iter": 1850, "lr": 2e-05, "memory": 12589, "data_time": 0.04531, "loss_rpn_cls": 0.01594, 
"loss_rpn_bbox": 0.03661, "loss_cls": 0.1511, "acc": 94.36255, "loss_bbox": 0.19502, "loss_mask": 0.20678, "loss": 0.60546, "time": 0.50122} -{"mode": "train", "epoch": 10, "iter": 1900, "lr": 2e-05, "memory": 12589, "data_time": 0.04823, "loss_rpn_cls": 0.01554, "loss_rpn_bbox": 0.03651, "loss_cls": 0.14833, "acc": 94.42676, "loss_bbox": 0.19257, "loss_mask": 0.20138, "loss": 0.59432, "time": 0.48994} -{"mode": "train", "epoch": 10, "iter": 1950, "lr": 2e-05, "memory": 12589, "data_time": 0.04004, "loss_rpn_cls": 0.01467, "loss_rpn_bbox": 0.03695, "loss_cls": 0.1564, "acc": 94.05078, "loss_bbox": 0.20037, "loss_mask": 0.2078, "loss": 0.61618, "time": 0.50522} -{"mode": "train", "epoch": 10, "iter": 2000, "lr": 2e-05, "memory": 12589, "data_time": 0.04943, "loss_rpn_cls": 0.01466, "loss_rpn_bbox": 0.03647, "loss_cls": 0.15362, "acc": 94.40454, "loss_bbox": 0.19435, "loss_mask": 0.20743, "loss": 0.60652, "time": 0.55232} -{"mode": "train", "epoch": 10, "iter": 2050, "lr": 2e-05, "memory": 12589, "data_time": 0.05091, "loss_rpn_cls": 0.01448, "loss_rpn_bbox": 0.03574, "loss_cls": 0.14899, "acc": 94.4375, "loss_bbox": 0.19715, "loss_mask": 0.20619, "loss": 0.60253, "time": 0.49863} -{"mode": "train", "epoch": 10, "iter": 2100, "lr": 2e-05, "memory": 12589, "data_time": 0.04638, "loss_rpn_cls": 0.01459, "loss_rpn_bbox": 0.04025, "loss_cls": 0.15421, "acc": 94.17578, "loss_bbox": 0.20049, "loss_mask": 0.20758, "loss": 0.61712, "time": 0.49965} -{"mode": "train", "epoch": 10, "iter": 2150, "lr": 2e-05, "memory": 12589, "data_time": 0.04276, "loss_rpn_cls": 0.01529, "loss_rpn_bbox": 0.03796, "loss_cls": 0.14938, "acc": 94.4751, "loss_bbox": 0.19455, "loss_mask": 0.20619, "loss": 0.60337, "time": 0.55997} -{"mode": "train", "epoch": 10, "iter": 2200, "lr": 2e-05, "memory": 12589, "data_time": 0.04281, "loss_rpn_cls": 0.01591, "loss_rpn_bbox": 0.04063, "loss_cls": 0.16011, "acc": 94.00073, "loss_bbox": 0.20287, "loss_mask": 0.20758, "loss": 0.62708, "time": 0.53624} -{"mode": "train", "epoch": 10, "iter": 2250, "lr": 2e-05, "memory": 12589, "data_time": 0.04721, "loss_rpn_cls": 0.01506, "loss_rpn_bbox": 0.03884, "loss_cls": 0.15505, "acc": 94.31152, "loss_bbox": 0.19578, "loss_mask": 0.20711, "loss": 0.61184, "time": 0.50603} -{"mode": "train", "epoch": 10, "iter": 2300, "lr": 2e-05, "memory": 12589, "data_time": 0.04404, "loss_rpn_cls": 0.01501, "loss_rpn_bbox": 0.0363, "loss_cls": 0.15351, "acc": 94.32446, "loss_bbox": 0.20027, "loss_mask": 0.20969, "loss": 0.61477, "time": 0.50326} -{"mode": "train", "epoch": 10, "iter": 2350, "lr": 2e-05, "memory": 12589, "data_time": 0.04748, "loss_rpn_cls": 0.01612, "loss_rpn_bbox": 0.03882, "loss_cls": 0.15671, "acc": 94.17822, "loss_bbox": 0.19874, "loss_mask": 0.21331, "loss": 0.62371, "time": 0.50945} -{"mode": "train", "epoch": 10, "iter": 2400, "lr": 2e-05, "memory": 12589, "data_time": 0.04078, "loss_rpn_cls": 0.01469, "loss_rpn_bbox": 0.03618, "loss_cls": 0.14893, "acc": 94.45508, "loss_bbox": 0.1965, "loss_mask": 0.21051, "loss": 0.60682, "time": 0.49687} -{"mode": "train", "epoch": 10, "iter": 2450, "lr": 2e-05, "memory": 12589, "data_time": 0.05326, "loss_rpn_cls": 0.01684, "loss_rpn_bbox": 0.03827, "loss_cls": 0.15513, "acc": 94.25781, "loss_bbox": 0.20195, "loss_mask": 0.21217, "loss": 0.62436, "time": 0.52064} -{"mode": "train", "epoch": 10, "iter": 2500, "lr": 2e-05, "memory": 12589, "data_time": 0.04366, "loss_rpn_cls": 0.01475, "loss_rpn_bbox": 0.03961, "loss_cls": 0.15229, "acc": 94.31982, "loss_bbox": 0.1965, "loss_mask": 0.20897, "loss": 
0.61212, "time": 0.51424} -{"mode": "train", "epoch": 10, "iter": 2550, "lr": 2e-05, "memory": 12589, "data_time": 0.04434, "loss_rpn_cls": 0.01688, "loss_rpn_bbox": 0.03883, "loss_cls": 0.15375, "acc": 94.29541, "loss_bbox": 0.19863, "loss_mask": 0.21118, "loss": 0.61928, "time": 0.51131} -{"mode": "train", "epoch": 10, "iter": 2600, "lr": 2e-05, "memory": 12589, "data_time": 0.03895, "loss_rpn_cls": 0.01428, "loss_rpn_bbox": 0.03623, "loss_cls": 0.15353, "acc": 94.16724, "loss_bbox": 0.19997, "loss_mask": 0.20894, "loss": 0.61295, "time": 0.50231} -{"mode": "train", "epoch": 10, "iter": 2650, "lr": 2e-05, "memory": 12589, "data_time": 0.04688, "loss_rpn_cls": 0.01526, "loss_rpn_bbox": 0.03758, "loss_cls": 0.15542, "acc": 94.25781, "loss_bbox": 0.19607, "loss_mask": 0.20537, "loss": 0.6097, "time": 0.57539} -{"mode": "train", "epoch": 10, "iter": 2700, "lr": 2e-05, "memory": 12589, "data_time": 0.05143, "loss_rpn_cls": 0.01581, "loss_rpn_bbox": 0.03757, "loss_cls": 0.14936, "acc": 94.43701, "loss_bbox": 0.1941, "loss_mask": 0.20725, "loss": 0.60409, "time": 0.55883} -{"mode": "train", "epoch": 10, "iter": 2750, "lr": 2e-05, "memory": 12589, "data_time": 0.0576, "loss_rpn_cls": 0.01651, "loss_rpn_bbox": 0.03899, "loss_cls": 0.15538, "acc": 94.23486, "loss_bbox": 0.20084, "loss_mask": 0.21534, "loss": 0.62706, "time": 0.51082} -{"mode": "train", "epoch": 10, "iter": 2800, "lr": 2e-05, "memory": 12589, "data_time": 0.04794, "loss_rpn_cls": 0.01383, "loss_rpn_bbox": 0.03726, "loss_cls": 0.15559, "acc": 94.22168, "loss_bbox": 0.19918, "loss_mask": 0.20832, "loss": 0.61419, "time": 0.49963} -{"mode": "train", "epoch": 10, "iter": 2850, "lr": 2e-05, "memory": 12589, "data_time": 0.04811, "loss_rpn_cls": 0.01539, "loss_rpn_bbox": 0.03765, "loss_cls": 0.1565, "acc": 94.11255, "loss_bbox": 0.20296, "loss_mask": 0.21016, "loss": 0.62265, "time": 0.51567} -{"mode": "train", "epoch": 10, "iter": 2900, "lr": 2e-05, "memory": 12589, "data_time": 0.04625, "loss_rpn_cls": 0.01486, "loss_rpn_bbox": 0.03592, "loss_cls": 0.14982, "acc": 94.43262, "loss_bbox": 0.19143, "loss_mask": 0.20722, "loss": 0.59925, "time": 0.50762} -{"mode": "train", "epoch": 10, "iter": 2950, "lr": 2e-05, "memory": 12589, "data_time": 0.04697, "loss_rpn_cls": 0.01568, "loss_rpn_bbox": 0.03911, "loss_cls": 0.15773, "acc": 94.11328, "loss_bbox": 0.19961, "loss_mask": 0.21155, "loss": 0.62368, "time": 0.51279} -{"mode": "train", "epoch": 10, "iter": 3000, "lr": 2e-05, "memory": 12589, "data_time": 0.04013, "loss_rpn_cls": 0.01401, "loss_rpn_bbox": 0.03609, "loss_cls": 0.15054, "acc": 94.36914, "loss_bbox": 0.19243, "loss_mask": 0.20651, "loss": 0.59958, "time": 0.50627} -{"mode": "train", "epoch": 10, "iter": 3050, "lr": 2e-05, "memory": 12589, "data_time": 0.05044, "loss_rpn_cls": 0.01633, "loss_rpn_bbox": 0.0396, "loss_cls": 0.16364, "acc": 93.84619, "loss_bbox": 0.20899, "loss_mask": 0.21367, "loss": 0.64221, "time": 0.50925} -{"mode": "train", "epoch": 10, "iter": 3100, "lr": 2e-05, "memory": 12589, "data_time": 0.04838, "loss_rpn_cls": 0.01653, "loss_rpn_bbox": 0.0369, "loss_cls": 0.15643, "acc": 94.104, "loss_bbox": 0.20219, "loss_mask": 0.20842, "loss": 0.62047, "time": 0.51277} -{"mode": "train", "epoch": 10, "iter": 3150, "lr": 2e-05, "memory": 12589, "data_time": 0.04721, "loss_rpn_cls": 0.01614, "loss_rpn_bbox": 0.03652, "loss_cls": 0.15147, "acc": 94.39819, "loss_bbox": 0.19248, "loss_mask": 0.2083, "loss": 0.60491, "time": 0.5091} -{"mode": "train", "epoch": 10, "iter": 3200, "lr": 2e-05, "memory": 12589, "data_time": 
0.05378, "loss_rpn_cls": 0.01725, "loss_rpn_bbox": 0.0391, "loss_cls": 0.1624, "acc": 93.91797, "loss_bbox": 0.20395, "loss_mask": 0.21372, "loss": 0.63641, "time": 0.51142} -{"mode": "train", "epoch": 10, "iter": 3250, "lr": 2e-05, "memory": 12589, "data_time": 0.05324, "loss_rpn_cls": 0.01634, "loss_rpn_bbox": 0.03935, "loss_cls": 0.15729, "acc": 94.14062, "loss_bbox": 0.20458, "loss_mask": 0.21037, "loss": 0.62794, "time": 0.6198} -{"mode": "train", "epoch": 10, "iter": 3300, "lr": 2e-05, "memory": 12589, "data_time": 0.04687, "loss_rpn_cls": 0.01384, "loss_rpn_bbox": 0.03366, "loss_cls": 0.14752, "acc": 94.43726, "loss_bbox": 0.18822, "loss_mask": 0.20008, "loss": 0.58332, "time": 0.5497} -{"mode": "train", "epoch": 10, "iter": 3350, "lr": 2e-05, "memory": 12589, "data_time": 0.04927, "loss_rpn_cls": 0.016, "loss_rpn_bbox": 0.03581, "loss_cls": 0.15138, "acc": 94.34277, "loss_bbox": 0.19543, "loss_mask": 0.20589, "loss": 0.60452, "time": 0.51993} -{"mode": "train", "epoch": 10, "iter": 3400, "lr": 2e-05, "memory": 12589, "data_time": 0.05668, "loss_rpn_cls": 0.01528, "loss_rpn_bbox": 0.0364, "loss_cls": 0.14907, "acc": 94.41772, "loss_bbox": 0.19622, "loss_mask": 0.20981, "loss": 0.60678, "time": 0.54336} -{"mode": "train", "epoch": 10, "iter": 3450, "lr": 2e-05, "memory": 12589, "data_time": 0.05891, "loss_rpn_cls": 0.01691, "loss_rpn_bbox": 0.0388, "loss_cls": 0.16032, "acc": 94.03809, "loss_bbox": 0.20289, "loss_mask": 0.21237, "loss": 0.63129, "time": 0.50267} -{"mode": "train", "epoch": 10, "iter": 3500, "lr": 2e-05, "memory": 12589, "data_time": 0.05965, "loss_rpn_cls": 0.01502, "loss_rpn_bbox": 0.03638, "loss_cls": 0.15118, "acc": 94.32153, "loss_bbox": 0.19941, "loss_mask": 0.20895, "loss": 0.61093, "time": 0.50853} -{"mode": "train", "epoch": 10, "iter": 3550, "lr": 2e-05, "memory": 12589, "data_time": 0.04832, "loss_rpn_cls": 0.01515, "loss_rpn_bbox": 0.03802, "loss_cls": 0.14592, "acc": 94.57153, "loss_bbox": 0.18519, "loss_mask": 0.2042, "loss": 0.58848, "time": 0.49653} -{"mode": "train", "epoch": 10, "iter": 3600, "lr": 2e-05, "memory": 12589, "data_time": 0.048, "loss_rpn_cls": 0.01441, "loss_rpn_bbox": 0.03734, "loss_cls": 0.15051, "acc": 94.45288, "loss_bbox": 0.19184, "loss_mask": 0.20225, "loss": 0.59635, "time": 0.50777} -{"mode": "train", "epoch": 10, "iter": 3650, "lr": 2e-05, "memory": 12589, "data_time": 0.04349, "loss_rpn_cls": 0.01429, "loss_rpn_bbox": 0.03703, "loss_cls": 0.15021, "acc": 94.33887, "loss_bbox": 0.19555, "loss_mask": 0.20839, "loss": 0.60548, "time": 0.50155} -{"mode": "train", "epoch": 10, "iter": 3700, "lr": 2e-05, "memory": 12589, "data_time": 0.04949, "loss_rpn_cls": 0.0169, "loss_rpn_bbox": 0.03841, "loss_cls": 0.15951, "acc": 93.91553, "loss_bbox": 0.20694, "loss_mask": 0.21528, "loss": 0.63703, "time": 0.49438} -{"mode": "train", "epoch": 10, "iter": 3750, "lr": 2e-05, "memory": 12589, "data_time": 0.04722, "loss_rpn_cls": 0.01713, "loss_rpn_bbox": 0.03889, "loss_cls": 0.15927, "acc": 94.07935, "loss_bbox": 0.20107, "loss_mask": 0.20706, "loss": 0.62343, "time": 0.51128} -{"mode": "train", "epoch": 10, "iter": 3800, "lr": 2e-05, "memory": 12589, "data_time": 0.04931, "loss_rpn_cls": 0.01488, "loss_rpn_bbox": 0.03806, "loss_cls": 0.15242, "acc": 94.26587, "loss_bbox": 0.19733, "loss_mask": 0.21161, "loss": 0.6143, "time": 0.49771} -{"mode": "train", "epoch": 10, "iter": 3850, "lr": 2e-05, "memory": 12589, "data_time": 0.04765, "loss_rpn_cls": 0.0165, "loss_rpn_bbox": 0.03862, "loss_cls": 0.15662, "acc": 94.20264, "loss_bbox": 0.20248, 
"loss_mask": 0.20896, "loss": 0.62318, "time": 0.53738} -{"mode": "train", "epoch": 10, "iter": 3900, "lr": 2e-05, "memory": 12589, "data_time": 0.05138, "loss_rpn_cls": 0.0136, "loss_rpn_bbox": 0.03377, "loss_cls": 0.15015, "acc": 94.50098, "loss_bbox": 0.19457, "loss_mask": 0.20472, "loss": 0.59681, "time": 0.49871} -{"mode": "train", "epoch": 10, "iter": 3950, "lr": 2e-05, "memory": 12589, "data_time": 0.04673, "loss_rpn_cls": 0.01525, "loss_rpn_bbox": 0.03417, "loss_cls": 0.14846, "acc": 94.52246, "loss_bbox": 0.1881, "loss_mask": 0.20715, "loss": 0.59312, "time": 0.48888} -{"mode": "train", "epoch": 10, "iter": 4000, "lr": 2e-05, "memory": 12589, "data_time": 0.04445, "loss_rpn_cls": 0.01435, "loss_rpn_bbox": 0.03677, "loss_cls": 0.15074, "acc": 94.4209, "loss_bbox": 0.19585, "loss_mask": 0.20582, "loss": 0.60352, "time": 0.50323} -{"mode": "train", "epoch": 10, "iter": 4050, "lr": 2e-05, "memory": 12589, "data_time": 0.04676, "loss_rpn_cls": 0.01461, "loss_rpn_bbox": 0.03675, "loss_cls": 0.15653, "acc": 94.17529, "loss_bbox": 0.19752, "loss_mask": 0.20642, "loss": 0.61183, "time": 0.50878} -{"mode": "train", "epoch": 10, "iter": 4100, "lr": 2e-05, "memory": 12589, "data_time": 0.04638, "loss_rpn_cls": 0.01432, "loss_rpn_bbox": 0.03507, "loss_cls": 0.14192, "acc": 94.71021, "loss_bbox": 0.18278, "loss_mask": 0.19769, "loss": 0.57179, "time": 0.54721} -{"mode": "train", "epoch": 10, "iter": 4150, "lr": 2e-05, "memory": 12589, "data_time": 0.04409, "loss_rpn_cls": 0.01601, "loss_rpn_bbox": 0.03543, "loss_cls": 0.15023, "acc": 94.41797, "loss_bbox": 0.19211, "loss_mask": 0.20739, "loss": 0.60118, "time": 0.49873} -{"mode": "train", "epoch": 10, "iter": 4200, "lr": 2e-05, "memory": 12589, "data_time": 0.04912, "loss_rpn_cls": 0.01557, "loss_rpn_bbox": 0.03927, "loss_cls": 0.15042, "acc": 94.3479, "loss_bbox": 0.19442, "loss_mask": 0.20608, "loss": 0.60576, "time": 0.50743} -{"mode": "train", "epoch": 10, "iter": 4250, "lr": 2e-05, "memory": 12589, "data_time": 0.04604, "loss_rpn_cls": 0.01675, "loss_rpn_bbox": 0.03915, "loss_cls": 0.15692, "acc": 94.10376, "loss_bbox": 0.20644, "loss_mask": 0.21297, "loss": 0.63223, "time": 0.50518} -{"mode": "train", "epoch": 10, "iter": 4300, "lr": 2e-05, "memory": 12589, "data_time": 0.04485, "loss_rpn_cls": 0.01514, "loss_rpn_bbox": 0.03533, "loss_cls": 0.1488, "acc": 94.47144, "loss_bbox": 0.19407, "loss_mask": 0.20858, "loss": 0.60192, "time": 0.49494} -{"mode": "train", "epoch": 10, "iter": 4350, "lr": 2e-05, "memory": 12589, "data_time": 0.04067, "loss_rpn_cls": 0.0155, "loss_rpn_bbox": 0.03963, "loss_cls": 0.15673, "acc": 94.10205, "loss_bbox": 0.20061, "loss_mask": 0.21051, "loss": 0.62298, "time": 0.50806} -{"mode": "train", "epoch": 10, "iter": 4400, "lr": 2e-05, "memory": 12589, "data_time": 0.04759, "loss_rpn_cls": 0.01507, "loss_rpn_bbox": 0.03747, "loss_cls": 0.15026, "acc": 94.43091, "loss_bbox": 0.19509, "loss_mask": 0.20936, "loss": 0.60726, "time": 0.49445} -{"mode": "train", "epoch": 10, "iter": 4450, "lr": 2e-05, "memory": 12589, "data_time": 0.05463, "loss_rpn_cls": 0.01436, "loss_rpn_bbox": 0.03628, "loss_cls": 0.14889, "acc": 94.42065, "loss_bbox": 0.19125, "loss_mask": 0.20445, "loss": 0.59523, "time": 0.49466} -{"mode": "train", "epoch": 10, "iter": 4500, "lr": 2e-05, "memory": 12589, "data_time": 0.03961, "loss_rpn_cls": 0.01511, "loss_rpn_bbox": 0.03674, "loss_cls": 0.14924, "acc": 94.45752, "loss_bbox": 0.19329, "loss_mask": 0.20595, "loss": 0.60032, "time": 0.50676} -{"mode": "train", "epoch": 10, "iter": 4550, "lr": 2e-05, 
"memory": 12589, "data_time": 0.0513, "loss_rpn_cls": 0.01456, "loss_rpn_bbox": 0.03809, "loss_cls": 0.15547, "acc": 94.11426, "loss_bbox": 0.20434, "loss_mask": 0.21023, "loss": 0.6227, "time": 0.50682} -{"mode": "train", "epoch": 10, "iter": 4600, "lr": 2e-05, "memory": 12589, "data_time": 0.05056, "loss_rpn_cls": 0.01756, "loss_rpn_bbox": 0.04071, "loss_cls": 0.16111, "acc": 93.948, "loss_bbox": 0.20343, "loss_mask": 0.21121, "loss": 0.63402, "time": 0.52021} -{"mode": "train", "epoch": 10, "iter": 4650, "lr": 2e-05, "memory": 12589, "data_time": 0.04466, "loss_rpn_cls": 0.01525, "loss_rpn_bbox": 0.03866, "loss_cls": 0.15273, "acc": 94.3335, "loss_bbox": 0.19573, "loss_mask": 0.21019, "loss": 0.61256, "time": 0.50903} -{"mode": "train", "epoch": 10, "iter": 4700, "lr": 2e-05, "memory": 12589, "data_time": 0.05415, "loss_rpn_cls": 0.01539, "loss_rpn_bbox": 0.03742, "loss_cls": 0.15873, "acc": 94.02344, "loss_bbox": 0.20303, "loss_mask": 0.2113, "loss": 0.62587, "time": 0.51539} -{"mode": "train", "epoch": 10, "iter": 4750, "lr": 2e-05, "memory": 12589, "data_time": 0.04782, "loss_rpn_cls": 0.0151, "loss_rpn_bbox": 0.0372, "loss_cls": 0.15242, "acc": 94.29688, "loss_bbox": 0.19882, "loss_mask": 0.20866, "loss": 0.6122, "time": 0.5023} -{"mode": "train", "epoch": 10, "iter": 4800, "lr": 2e-05, "memory": 12589, "data_time": 0.04833, "loss_rpn_cls": 0.01494, "loss_rpn_bbox": 0.03864, "loss_cls": 0.15648, "acc": 94.09741, "loss_bbox": 0.19967, "loss_mask": 0.20874, "loss": 0.61846, "time": 0.51914} -{"mode": "train", "epoch": 10, "iter": 4850, "lr": 2e-05, "memory": 12589, "data_time": 0.04818, "loss_rpn_cls": 0.01497, "loss_rpn_bbox": 0.03726, "loss_cls": 0.15334, "acc": 94.20312, "loss_bbox": 0.20105, "loss_mask": 0.20537, "loss": 0.61199, "time": 0.51602} -{"mode": "train", "epoch": 10, "iter": 4900, "lr": 2e-05, "memory": 12589, "data_time": 0.04707, "loss_rpn_cls": 0.01681, "loss_rpn_bbox": 0.03843, "loss_cls": 0.15574, "acc": 94.15063, "loss_bbox": 0.20135, "loss_mask": 0.20915, "loss": 0.62149, "time": 0.5645} -{"mode": "train", "epoch": 10, "iter": 4950, "lr": 2e-05, "memory": 12589, "data_time": 0.04734, "loss_rpn_cls": 0.01587, "loss_rpn_bbox": 0.03824, "loss_cls": 0.15515, "acc": 94.26758, "loss_bbox": 0.19843, "loss_mask": 0.20586, "loss": 0.61355, "time": 0.51595} -{"mode": "train", "epoch": 10, "iter": 5000, "lr": 2e-05, "memory": 12589, "data_time": 0.04775, "loss_rpn_cls": 0.01484, "loss_rpn_bbox": 0.03883, "loss_cls": 0.15921, "acc": 94.14941, "loss_bbox": 0.20292, "loss_mask": 0.2124, "loss": 0.6282, "time": 0.51209} -{"mode": "train", "epoch": 10, "iter": 5050, "lr": 2e-05, "memory": 12589, "data_time": 0.05432, "loss_rpn_cls": 0.01662, "loss_rpn_bbox": 0.03614, "loss_cls": 0.15642, "acc": 94.16797, "loss_bbox": 0.19761, "loss_mask": 0.20801, "loss": 0.61479, "time": 0.52274} -{"mode": "train", "epoch": 10, "iter": 5100, "lr": 2e-05, "memory": 12589, "data_time": 0.05297, "loss_rpn_cls": 0.0159, "loss_rpn_bbox": 0.03859, "loss_cls": 0.15384, "acc": 94.21826, "loss_bbox": 0.19653, "loss_mask": 0.20967, "loss": 0.61454, "time": 0.52058} -{"mode": "train", "epoch": 10, "iter": 5150, "lr": 2e-05, "memory": 12589, "data_time": 0.04855, "loss_rpn_cls": 0.01666, "loss_rpn_bbox": 0.03807, "loss_cls": 0.15478, "acc": 94.30078, "loss_bbox": 0.19802, "loss_mask": 0.2075, "loss": 0.61504, "time": 0.49657} -{"mode": "train", "epoch": 10, "iter": 5200, "lr": 2e-05, "memory": 12589, "data_time": 0.05, "loss_rpn_cls": 0.01515, "loss_rpn_bbox": 0.03581, "loss_cls": 0.15474, "acc": 94.29517, 
"loss_bbox": 0.19558, "loss_mask": 0.20521, "loss": 0.60649, "time": 0.4994} -{"mode": "train", "epoch": 10, "iter": 5250, "lr": 2e-05, "memory": 12589, "data_time": 0.05025, "loss_rpn_cls": 0.01537, "loss_rpn_bbox": 0.03849, "loss_cls": 0.154, "acc": 94.24951, "loss_bbox": 0.20072, "loss_mask": 0.20677, "loss": 0.61535, "time": 0.49605} -{"mode": "train", "epoch": 10, "iter": 5300, "lr": 2e-05, "memory": 12589, "data_time": 0.04535, "loss_rpn_cls": 0.01615, "loss_rpn_bbox": 0.03577, "loss_cls": 0.15779, "acc": 94.15137, "loss_bbox": 0.20232, "loss_mask": 0.2119, "loss": 0.62393, "time": 0.58344} -{"mode": "train", "epoch": 10, "iter": 5350, "lr": 2e-05, "memory": 12589, "data_time": 0.04615, "loss_rpn_cls": 0.01498, "loss_rpn_bbox": 0.03647, "loss_cls": 0.15328, "acc": 94.27124, "loss_bbox": 0.1978, "loss_mask": 0.20884, "loss": 0.61138, "time": 0.55957} -{"mode": "train", "epoch": 10, "iter": 5400, "lr": 2e-05, "memory": 12589, "data_time": 0.05542, "loss_rpn_cls": 0.01575, "loss_rpn_bbox": 0.03945, "loss_cls": 0.1557, "acc": 94.14307, "loss_bbox": 0.20171, "loss_mask": 0.2084, "loss": 0.62101, "time": 0.50555} -{"mode": "train", "epoch": 10, "iter": 5450, "lr": 2e-05, "memory": 12589, "data_time": 0.05036, "loss_rpn_cls": 0.01686, "loss_rpn_bbox": 0.03904, "loss_cls": 0.15968, "acc": 94.03247, "loss_bbox": 0.2059, "loss_mask": 0.21083, "loss": 0.63231, "time": 0.50417} -{"mode": "train", "epoch": 10, "iter": 5500, "lr": 2e-05, "memory": 12589, "data_time": 0.04944, "loss_rpn_cls": 0.01721, "loss_rpn_bbox": 0.0385, "loss_cls": 0.1598, "acc": 94.02563, "loss_bbox": 0.2035, "loss_mask": 0.21452, "loss": 0.63352, "time": 0.50899} -{"mode": "train", "epoch": 10, "iter": 5550, "lr": 2e-05, "memory": 12589, "data_time": 0.0518, "loss_rpn_cls": 0.01463, "loss_rpn_bbox": 0.03408, "loss_cls": 0.14582, "acc": 94.58276, "loss_bbox": 0.18616, "loss_mask": 0.20611, "loss": 0.58679, "time": 0.50928} -{"mode": "train", "epoch": 10, "iter": 5600, "lr": 2e-05, "memory": 12589, "data_time": 0.05432, "loss_rpn_cls": 0.01521, "loss_rpn_bbox": 0.0359, "loss_cls": 0.14483, "acc": 94.62036, "loss_bbox": 0.19164, "loss_mask": 0.20898, "loss": 0.59656, "time": 0.49915} -{"mode": "train", "epoch": 10, "iter": 5650, "lr": 2e-05, "memory": 12589, "data_time": 0.05047, "loss_rpn_cls": 0.01408, "loss_rpn_bbox": 0.03576, "loss_cls": 0.14721, "acc": 94.45239, "loss_bbox": 0.19308, "loss_mask": 0.20641, "loss": 0.59654, "time": 0.50152} -{"mode": "train", "epoch": 10, "iter": 5700, "lr": 2e-05, "memory": 12589, "data_time": 0.04633, "loss_rpn_cls": 0.01678, "loss_rpn_bbox": 0.03916, "loss_cls": 0.1563, "acc": 94.20288, "loss_bbox": 0.19906, "loss_mask": 0.21373, "loss": 0.62503, "time": 0.52665} -{"mode": "train", "epoch": 10, "iter": 5750, "lr": 2e-05, "memory": 12589, "data_time": 0.05424, "loss_rpn_cls": 0.01434, "loss_rpn_bbox": 0.03657, "loss_cls": 0.14988, "acc": 94.43945, "loss_bbox": 0.19205, "loss_mask": 0.20355, "loss": 0.59639, "time": 0.49753} -{"mode": "train", "epoch": 10, "iter": 5800, "lr": 2e-05, "memory": 12589, "data_time": 0.04438, "loss_rpn_cls": 0.01454, "loss_rpn_bbox": 0.0366, "loss_cls": 0.15129, "acc": 94.43628, "loss_bbox": 0.19165, "loss_mask": 0.20445, "loss": 0.59853, "time": 0.49431} -{"mode": "train", "epoch": 10, "iter": 5850, "lr": 2e-05, "memory": 12589, "data_time": 0.04573, "loss_rpn_cls": 0.01604, "loss_rpn_bbox": 0.0387, "loss_cls": 0.14958, "acc": 94.42041, "loss_bbox": 0.19687, "loss_mask": 0.20771, "loss": 0.6089, "time": 0.49735} -{"mode": "train", "epoch": 10, "iter": 5900, 
"lr": 2e-05, "memory": 12589, "data_time": 0.04932, "loss_rpn_cls": 0.01589, "loss_rpn_bbox": 0.03928, "loss_cls": 0.15923, "acc": 94.11621, "loss_bbox": 0.20258, "loss_mask": 0.2087, "loss": 0.62568, "time": 0.50756} -{"mode": "train", "epoch": 10, "iter": 5950, "lr": 2e-05, "memory": 12589, "data_time": 0.04481, "loss_rpn_cls": 0.01678, "loss_rpn_bbox": 0.03978, "loss_cls": 0.16014, "acc": 94.05347, "loss_bbox": 0.20732, "loss_mask": 0.21134, "loss": 0.63536, "time": 0.50798} -{"mode": "train", "epoch": 10, "iter": 6000, "lr": 2e-05, "memory": 12589, "data_time": 0.04569, "loss_rpn_cls": 0.01561, "loss_rpn_bbox": 0.03872, "loss_cls": 0.15198, "acc": 94.36035, "loss_bbox": 0.19522, "loss_mask": 0.20569, "loss": 0.60722, "time": 0.62353} -{"mode": "train", "epoch": 10, "iter": 6050, "lr": 2e-05, "memory": 12589, "data_time": 0.04545, "loss_rpn_cls": 0.01474, "loss_rpn_bbox": 0.03744, "loss_cls": 0.14975, "acc": 94.42432, "loss_bbox": 0.19803, "loss_mask": 0.20866, "loss": 0.60863, "time": 0.49977} -{"mode": "train", "epoch": 10, "iter": 6100, "lr": 2e-05, "memory": 12589, "data_time": 0.04334, "loss_rpn_cls": 0.01414, "loss_rpn_bbox": 0.03648, "loss_cls": 0.15125, "acc": 94.33496, "loss_bbox": 0.19803, "loss_mask": 0.21105, "loss": 0.61096, "time": 0.48412} -{"mode": "train", "epoch": 10, "iter": 6150, "lr": 2e-05, "memory": 12589, "data_time": 0.0491, "loss_rpn_cls": 0.01689, "loss_rpn_bbox": 0.03712, "loss_cls": 0.15364, "acc": 94.24756, "loss_bbox": 0.19785, "loss_mask": 0.21076, "loss": 0.61626, "time": 0.51317} -{"mode": "train", "epoch": 10, "iter": 6200, "lr": 2e-05, "memory": 12589, "data_time": 0.04471, "loss_rpn_cls": 0.01356, "loss_rpn_bbox": 0.03653, "loss_cls": 0.14995, "acc": 94.35571, "loss_bbox": 0.19617, "loss_mask": 0.20544, "loss": 0.60164, "time": 0.49386} -{"mode": "train", "epoch": 10, "iter": 6250, "lr": 2e-05, "memory": 12589, "data_time": 0.04118, "loss_rpn_cls": 0.01475, "loss_rpn_bbox": 0.03341, "loss_cls": 0.1409, "acc": 94.72388, "loss_bbox": 0.18979, "loss_mask": 0.20382, "loss": 0.58268, "time": 0.50106} -{"mode": "train", "epoch": 10, "iter": 6300, "lr": 2e-05, "memory": 12589, "data_time": 0.04543, "loss_rpn_cls": 0.01639, "loss_rpn_bbox": 0.03817, "loss_cls": 0.15521, "acc": 94.18286, "loss_bbox": 0.20196, "loss_mask": 0.21134, "loss": 0.62309, "time": 0.49776} -{"mode": "train", "epoch": 10, "iter": 6350, "lr": 2e-05, "memory": 12589, "data_time": 0.05087, "loss_rpn_cls": 0.0159, "loss_rpn_bbox": 0.0384, "loss_cls": 0.15103, "acc": 94.34912, "loss_bbox": 0.19421, "loss_mask": 0.20513, "loss": 0.60466, "time": 0.49323} -{"mode": "train", "epoch": 10, "iter": 6400, "lr": 2e-05, "memory": 12589, "data_time": 0.05684, "loss_rpn_cls": 0.01588, "loss_rpn_bbox": 0.03875, "loss_cls": 0.15749, "acc": 94.14648, "loss_bbox": 0.20309, "loss_mask": 0.21403, "loss": 0.62923, "time": 0.51023} -{"mode": "train", "epoch": 10, "iter": 6450, "lr": 2e-05, "memory": 12589, "data_time": 0.04899, "loss_rpn_cls": 0.01567, "loss_rpn_bbox": 0.03813, "loss_cls": 0.14966, "acc": 94.40186, "loss_bbox": 0.19278, "loss_mask": 0.20918, "loss": 0.60543, "time": 0.51162} -{"mode": "train", "epoch": 10, "iter": 6500, "lr": 2e-05, "memory": 12589, "data_time": 0.05754, "loss_rpn_cls": 0.016, "loss_rpn_bbox": 0.03722, "loss_cls": 0.15562, "acc": 94.15015, "loss_bbox": 0.19953, "loss_mask": 0.20739, "loss": 0.61576, "time": 0.53715} -{"mode": "train", "epoch": 10, "iter": 6550, "lr": 2e-05, "memory": 12589, "data_time": 0.04536, "loss_rpn_cls": 0.01418, "loss_rpn_bbox": 0.03649, "loss_cls": 
0.14735, "acc": 94.53174, "loss_bbox": 0.19206, "loss_mask": 0.20855, "loss": 0.59863, "time": 0.50121} -{"mode": "train", "epoch": 10, "iter": 6600, "lr": 2e-05, "memory": 12589, "data_time": 0.04936, "loss_rpn_cls": 0.01553, "loss_rpn_bbox": 0.03782, "loss_cls": 0.15355, "acc": 94.27295, "loss_bbox": 0.19763, "loss_mask": 0.20744, "loss": 0.61197, "time": 0.50525} -{"mode": "train", "epoch": 10, "iter": 6650, "lr": 2e-05, "memory": 12589, "data_time": 0.04632, "loss_rpn_cls": 0.01689, "loss_rpn_bbox": 0.03969, "loss_cls": 0.15709, "acc": 94.16333, "loss_bbox": 0.20099, "loss_mask": 0.21209, "loss": 0.62675, "time": 0.50451} -{"mode": "train", "epoch": 10, "iter": 6700, "lr": 2e-05, "memory": 12589, "data_time": 0.04227, "loss_rpn_cls": 0.01428, "loss_rpn_bbox": 0.03612, "loss_cls": 0.14943, "acc": 94.42749, "loss_bbox": 0.19407, "loss_mask": 0.20233, "loss": 0.59622, "time": 0.50218} -{"mode": "train", "epoch": 10, "iter": 6750, "lr": 2e-05, "memory": 12589, "data_time": 0.04289, "loss_rpn_cls": 0.01527, "loss_rpn_bbox": 0.03769, "loss_cls": 0.1516, "acc": 94.36279, "loss_bbox": 0.1918, "loss_mask": 0.20566, "loss": 0.60202, "time": 0.48707} -{"mode": "train", "epoch": 10, "iter": 6800, "lr": 2e-05, "memory": 12589, "data_time": 0.04748, "loss_rpn_cls": 0.01418, "loss_rpn_bbox": 0.0336, "loss_cls": 0.14908, "acc": 94.47192, "loss_bbox": 0.18745, "loss_mask": 0.1997, "loss": 0.58401, "time": 0.54773} -{"mode": "train", "epoch": 10, "iter": 6850, "lr": 2e-05, "memory": 12589, "data_time": 0.04723, "loss_rpn_cls": 0.01688, "loss_rpn_bbox": 0.03872, "loss_cls": 0.15662, "acc": 94.18896, "loss_bbox": 0.2025, "loss_mask": 0.21072, "loss": 0.62544, "time": 0.5155} -{"mode": "train", "epoch": 10, "iter": 6900, "lr": 2e-05, "memory": 12589, "data_time": 0.04905, "loss_rpn_cls": 0.01742, "loss_rpn_bbox": 0.03735, "loss_cls": 0.16026, "acc": 93.94922, "loss_bbox": 0.20517, "loss_mask": 0.20905, "loss": 0.62925, "time": 0.50378} -{"mode": "train", "epoch": 10, "iter": 6950, "lr": 2e-05, "memory": 12589, "data_time": 0.0486, "loss_rpn_cls": 0.01584, "loss_rpn_bbox": 0.03763, "loss_cls": 0.15853, "acc": 94.05322, "loss_bbox": 0.20165, "loss_mask": 0.21096, "loss": 0.62461, "time": 0.50048} -{"mode": "train", "epoch": 10, "iter": 7000, "lr": 2e-05, "memory": 12589, "data_time": 0.05303, "loss_rpn_cls": 0.0151, "loss_rpn_bbox": 0.03814, "loss_cls": 0.15366, "acc": 94.24023, "loss_bbox": 0.19837, "loss_mask": 0.20493, "loss": 0.6102, "time": 0.49981} -{"mode": "train", "epoch": 10, "iter": 7050, "lr": 2e-05, "memory": 12589, "data_time": 0.04622, "loss_rpn_cls": 0.01392, "loss_rpn_bbox": 0.0356, "loss_cls": 0.15306, "acc": 94.31177, "loss_bbox": 0.19539, "loss_mask": 0.20418, "loss": 0.60215, "time": 0.54955} -{"mode": "train", "epoch": 10, "iter": 7100, "lr": 2e-05, "memory": 12589, "data_time": 0.04915, "loss_rpn_cls": 0.01386, "loss_rpn_bbox": 0.03568, "loss_cls": 0.15081, "acc": 94.37598, "loss_bbox": 0.19541, "loss_mask": 0.20754, "loss": 0.6033, "time": 0.53949} -{"mode": "train", "epoch": 10, "iter": 7150, "lr": 2e-05, "memory": 12589, "data_time": 0.04548, "loss_rpn_cls": 0.01569, "loss_rpn_bbox": 0.03646, "loss_cls": 0.14982, "acc": 94.42529, "loss_bbox": 0.19051, "loss_mask": 0.20563, "loss": 0.59811, "time": 0.49197} -{"mode": "train", "epoch": 10, "iter": 7200, "lr": 2e-05, "memory": 12589, "data_time": 0.04861, "loss_rpn_cls": 0.01569, "loss_rpn_bbox": 0.03607, "loss_cls": 0.14763, "acc": 94.53613, "loss_bbox": 0.18977, "loss_mask": 0.19927, "loss": 0.58842, "time": 0.49272} -{"mode": 
"train", "epoch": 10, "iter": 7250, "lr": 2e-05, "memory": 12589, "data_time": 0.04085, "loss_rpn_cls": 0.01702, "loss_rpn_bbox": 0.03801, "loss_cls": 0.15673, "acc": 94.229, "loss_bbox": 0.19887, "loss_mask": 0.2079, "loss": 0.61852, "time": 0.49969} -{"mode": "train", "epoch": 10, "iter": 7300, "lr": 2e-05, "memory": 12589, "data_time": 0.05365, "loss_rpn_cls": 0.01496, "loss_rpn_bbox": 0.03742, "loss_cls": 0.15175, "acc": 94.35083, "loss_bbox": 0.19439, "loss_mask": 0.20695, "loss": 0.60547, "time": 0.48975} -{"mode": "val", "epoch": 10, "iter": 625, "lr": 2e-05, "bbox_mAP": 0.4426, "bbox_mAP_50": 0.6601, "bbox_mAP_75": 0.4844, "bbox_mAP_s": 0.2791, "bbox_mAP_m": 0.4765, "bbox_mAP_l": 0.5829, "bbox_mAP_copypaste": "0.4426 0.6601 0.4844 0.2791 0.4765 0.5829", "segm_mAP": 0.4057, "segm_mAP_50": 0.6339, "segm_mAP_75": 0.4353, "segm_mAP_s": 0.2176, "segm_mAP_m": 0.4331, "segm_mAP_l": 0.5894, "segm_mAP_copypaste": "0.4057 0.6339 0.4353 0.2176 0.4331 0.5894"} -{"mode": "train", "epoch": 11, "iter": 50, "lr": 2e-05, "memory": 12589, "data_time": 0.13091, "loss_rpn_cls": 0.01394, "loss_rpn_bbox": 0.03609, "loss_cls": 0.14573, "acc": 94.51416, "loss_bbox": 0.1926, "loss_mask": 0.20997, "loss": 0.59833, "time": 0.59011} -{"mode": "train", "epoch": 11, "iter": 100, "lr": 2e-05, "memory": 12589, "data_time": 0.05806, "loss_rpn_cls": 0.016, "loss_rpn_bbox": 0.0389, "loss_cls": 0.15504, "acc": 94.16797, "loss_bbox": 0.20424, "loss_mask": 0.20706, "loss": 0.62125, "time": 0.54004} -{"mode": "train", "epoch": 11, "iter": 150, "lr": 2e-05, "memory": 12589, "data_time": 0.05291, "loss_rpn_cls": 0.01606, "loss_rpn_bbox": 0.03795, "loss_cls": 0.14791, "acc": 94.43286, "loss_bbox": 0.19202, "loss_mask": 0.21374, "loss": 0.60768, "time": 0.52619} -{"mode": "train", "epoch": 11, "iter": 200, "lr": 2e-05, "memory": 12589, "data_time": 0.04923, "loss_rpn_cls": 0.014, "loss_rpn_bbox": 0.03695, "loss_cls": 0.14793, "acc": 94.44141, "loss_bbox": 0.19329, "loss_mask": 0.20903, "loss": 0.6012, "time": 0.52374} -{"mode": "train", "epoch": 11, "iter": 250, "lr": 2e-05, "memory": 12589, "data_time": 0.05032, "loss_rpn_cls": 0.01434, "loss_rpn_bbox": 0.03945, "loss_cls": 0.15117, "acc": 94.27246, "loss_bbox": 0.2033, "loss_mask": 0.21482, "loss": 0.62308, "time": 0.5241} -{"mode": "train", "epoch": 11, "iter": 300, "lr": 2e-05, "memory": 12589, "data_time": 0.05028, "loss_rpn_cls": 0.01569, "loss_rpn_bbox": 0.03772, "loss_cls": 0.14195, "acc": 94.62036, "loss_bbox": 0.19187, "loss_mask": 0.20243, "loss": 0.58966, "time": 0.57791} -{"mode": "train", "epoch": 11, "iter": 350, "lr": 2e-05, "memory": 12589, "data_time": 0.05412, "loss_rpn_cls": 0.01478, "loss_rpn_bbox": 0.03893, "loss_cls": 0.14769, "acc": 94.46289, "loss_bbox": 0.19438, "loss_mask": 0.20341, "loss": 0.59918, "time": 0.51691} -{"mode": "train", "epoch": 11, "iter": 400, "lr": 2e-05, "memory": 12589, "data_time": 0.05019, "loss_rpn_cls": 0.01601, "loss_rpn_bbox": 0.03648, "loss_cls": 0.14917, "acc": 94.39844, "loss_bbox": 0.19357, "loss_mask": 0.20436, "loss": 0.59959, "time": 0.5292} -{"mode": "train", "epoch": 11, "iter": 450, "lr": 2e-05, "memory": 12589, "data_time": 0.05876, "loss_rpn_cls": 0.01537, "loss_rpn_bbox": 0.03585, "loss_cls": 0.14575, "acc": 94.4751, "loss_bbox": 0.19024, "loss_mask": 0.20683, "loss": 0.59403, "time": 0.50092} -{"mode": "train", "epoch": 11, "iter": 500, "lr": 2e-05, "memory": 12589, "data_time": 0.04141, "loss_rpn_cls": 0.01476, "loss_rpn_bbox": 0.03995, "loss_cls": 0.1499, "acc": 94.34619, "loss_bbox": 0.20037, "loss_mask": 
0.21098, "loss": 0.61595, "time": 0.5087} -{"mode": "train", "epoch": 11, "iter": 550, "lr": 2e-05, "memory": 12589, "data_time": 0.0464, "loss_rpn_cls": 0.01435, "loss_rpn_bbox": 0.03468, "loss_cls": 0.14482, "acc": 94.5083, "loss_bbox": 0.18993, "loss_mask": 0.20067, "loss": 0.58444, "time": 0.50564} -{"mode": "train", "epoch": 11, "iter": 600, "lr": 2e-05, "memory": 12589, "data_time": 0.05728, "loss_rpn_cls": 0.015, "loss_rpn_bbox": 0.03771, "loss_cls": 0.15228, "acc": 94.34375, "loss_bbox": 0.19886, "loss_mask": 0.20662, "loss": 0.61048, "time": 0.50843} -{"mode": "train", "epoch": 11, "iter": 650, "lr": 2e-05, "memory": 12589, "data_time": 0.04787, "loss_rpn_cls": 0.01532, "loss_rpn_bbox": 0.03872, "loss_cls": 0.15185, "acc": 94.32935, "loss_bbox": 0.19621, "loss_mask": 0.21045, "loss": 0.61255, "time": 0.56643} -{"mode": "train", "epoch": 11, "iter": 700, "lr": 2e-05, "memory": 12589, "data_time": 0.05099, "loss_rpn_cls": 0.01405, "loss_rpn_bbox": 0.03697, "loss_cls": 0.14363, "acc": 94.54126, "loss_bbox": 0.19095, "loss_mask": 0.20379, "loss": 0.58939, "time": 0.50514} -{"mode": "train", "epoch": 11, "iter": 750, "lr": 2e-05, "memory": 12589, "data_time": 0.05439, "loss_rpn_cls": 0.01546, "loss_rpn_bbox": 0.03751, "loss_cls": 0.1537, "acc": 94.16406, "loss_bbox": 0.19525, "loss_mask": 0.2084, "loss": 0.61033, "time": 0.5012} -{"mode": "train", "epoch": 11, "iter": 800, "lr": 2e-05, "memory": 12589, "data_time": 0.05301, "loss_rpn_cls": 0.01551, "loss_rpn_bbox": 0.03743, "loss_cls": 0.15487, "acc": 94.21191, "loss_bbox": 0.19979, "loss_mask": 0.21061, "loss": 0.61821, "time": 0.55931} -{"mode": "train", "epoch": 11, "iter": 850, "lr": 2e-05, "memory": 12589, "data_time": 0.05373, "loss_rpn_cls": 0.01412, "loss_rpn_bbox": 0.03601, "loss_cls": 0.14499, "acc": 94.51685, "loss_bbox": 0.19199, "loss_mask": 0.20472, "loss": 0.59182, "time": 0.50288} -{"mode": "train", "epoch": 11, "iter": 900, "lr": 2e-05, "memory": 12589, "data_time": 0.0466, "loss_rpn_cls": 0.01492, "loss_rpn_bbox": 0.0373, "loss_cls": 0.14819, "acc": 94.52808, "loss_bbox": 0.19059, "loss_mask": 0.20529, "loss": 0.59629, "time": 0.50895} -{"mode": "train", "epoch": 11, "iter": 950, "lr": 2e-05, "memory": 12589, "data_time": 0.04572, "loss_rpn_cls": 0.01518, "loss_rpn_bbox": 0.03671, "loss_cls": 0.15473, "acc": 94.20142, "loss_bbox": 0.19798, "loss_mask": 0.21146, "loss": 0.61605, "time": 0.5197} -{"mode": "train", "epoch": 11, "iter": 1000, "lr": 2e-05, "memory": 12589, "data_time": 0.04703, "loss_rpn_cls": 0.01423, "loss_rpn_bbox": 0.03721, "loss_cls": 0.1436, "acc": 94.64233, "loss_bbox": 0.18393, "loss_mask": 0.20242, "loss": 0.58139, "time": 0.57003} -{"mode": "train", "epoch": 11, "iter": 1050, "lr": 2e-05, "memory": 12589, "data_time": 0.04786, "loss_rpn_cls": 0.01331, "loss_rpn_bbox": 0.03448, "loss_cls": 0.14662, "acc": 94.58154, "loss_bbox": 0.19125, "loss_mask": 0.20286, "loss": 0.58852, "time": 0.50819} -{"mode": "train", "epoch": 11, "iter": 1100, "lr": 2e-05, "memory": 12589, "data_time": 0.05038, "loss_rpn_cls": 0.01261, "loss_rpn_bbox": 0.03457, "loss_cls": 0.14464, "acc": 94.4939, "loss_bbox": 0.18586, "loss_mask": 0.19914, "loss": 0.57681, "time": 0.4959} -{"mode": "train", "epoch": 11, "iter": 1150, "lr": 2e-05, "memory": 12589, "data_time": 0.0521, "loss_rpn_cls": 0.01573, "loss_rpn_bbox": 0.0368, "loss_cls": 0.15047, "acc": 94.34692, "loss_bbox": 0.19748, "loss_mask": 0.2105, "loss": 0.61098, "time": 0.50802} -{"mode": "train", "epoch": 11, "iter": 1200, "lr": 2e-05, "memory": 12589, "data_time": 
0.05098, "loss_rpn_cls": 0.01416, "loss_rpn_bbox": 0.0367, "loss_cls": 0.14428, "acc": 94.50854, "loss_bbox": 0.19131, "loss_mask": 0.20641, "loss": 0.59286, "time": 0.51781} -{"mode": "train", "epoch": 11, "iter": 1250, "lr": 2e-05, "memory": 12589, "data_time": 0.04858, "loss_rpn_cls": 0.01693, "loss_rpn_bbox": 0.03953, "loss_cls": 0.16024, "acc": 94.01245, "loss_bbox": 0.20674, "loss_mask": 0.21263, "loss": 0.63607, "time": 0.57715} -{"mode": "train", "epoch": 11, "iter": 1300, "lr": 2e-05, "memory": 12589, "data_time": 0.05144, "loss_rpn_cls": 0.01417, "loss_rpn_bbox": 0.03697, "loss_cls": 0.15075, "acc": 94.38354, "loss_bbox": 0.19279, "loss_mask": 0.20608, "loss": 0.60076, "time": 0.50635} -{"mode": "train", "epoch": 11, "iter": 1350, "lr": 2e-05, "memory": 12589, "data_time": 0.06089, "loss_rpn_cls": 0.0164, "loss_rpn_bbox": 0.03907, "loss_cls": 0.14967, "acc": 94.40332, "loss_bbox": 0.19333, "loss_mask": 0.20739, "loss": 0.60587, "time": 0.50855} -{"mode": "train", "epoch": 11, "iter": 1400, "lr": 2e-05, "memory": 12589, "data_time": 0.0463, "loss_rpn_cls": 0.01408, "loss_rpn_bbox": 0.03521, "loss_cls": 0.14146, "acc": 94.71631, "loss_bbox": 0.19179, "loss_mask": 0.20453, "loss": 0.58708, "time": 0.51001} -{"mode": "train", "epoch": 11, "iter": 1450, "lr": 2e-05, "memory": 12589, "data_time": 0.05447, "loss_rpn_cls": 0.01367, "loss_rpn_bbox": 0.03503, "loss_cls": 0.14987, "acc": 94.38159, "loss_bbox": 0.19214, "loss_mask": 0.20296, "loss": 0.59366, "time": 0.55748} -{"mode": "train", "epoch": 11, "iter": 1500, "lr": 2e-05, "memory": 12589, "data_time": 0.05585, "loss_rpn_cls": 0.01485, "loss_rpn_bbox": 0.03717, "loss_cls": 0.14882, "acc": 94.41309, "loss_bbox": 0.20188, "loss_mask": 0.20703, "loss": 0.60975, "time": 0.50284} -{"mode": "train", "epoch": 11, "iter": 1550, "lr": 2e-05, "memory": 12589, "data_time": 0.04296, "loss_rpn_cls": 0.01463, "loss_rpn_bbox": 0.03716, "loss_cls": 0.14475, "acc": 94.69165, "loss_bbox": 0.18888, "loss_mask": 0.20082, "loss": 0.58623, "time": 0.50075} -{"mode": "train", "epoch": 11, "iter": 1600, "lr": 2e-05, "memory": 12589, "data_time": 0.05138, "loss_rpn_cls": 0.01521, "loss_rpn_bbox": 0.03719, "loss_cls": 0.14887, "acc": 94.40112, "loss_bbox": 0.19353, "loss_mask": 0.20455, "loss": 0.59935, "time": 0.50377} -{"mode": "train", "epoch": 11, "iter": 1650, "lr": 2e-05, "memory": 12589, "data_time": 0.05134, "loss_rpn_cls": 0.0148, "loss_rpn_bbox": 0.03512, "loss_cls": 0.14757, "acc": 94.53979, "loss_bbox": 0.18863, "loss_mask": 0.20333, "loss": 0.58945, "time": 0.51071} -{"mode": "train", "epoch": 11, "iter": 1700, "lr": 2e-05, "memory": 12589, "data_time": 0.05335, "loss_rpn_cls": 0.01418, "loss_rpn_bbox": 0.03503, "loss_cls": 0.14269, "acc": 94.72852, "loss_bbox": 0.18528, "loss_mask": 0.20204, "loss": 0.57921, "time": 0.5009} -{"mode": "train", "epoch": 11, "iter": 1750, "lr": 2e-05, "memory": 12589, "data_time": 0.04757, "loss_rpn_cls": 0.01543, "loss_rpn_bbox": 0.03786, "loss_cls": 0.15245, "acc": 94.32886, "loss_bbox": 0.19705, "loss_mask": 0.20888, "loss": 0.61168, "time": 0.49492} -{"mode": "train", "epoch": 11, "iter": 1800, "lr": 2e-05, "memory": 12589, "data_time": 0.04418, "loss_rpn_cls": 0.0153, "loss_rpn_bbox": 0.03682, "loss_cls": 0.15287, "acc": 94.21777, "loss_bbox": 0.19636, "loss_mask": 0.20745, "loss": 0.60879, "time": 0.49743} -{"mode": "train", "epoch": 11, "iter": 1850, "lr": 2e-05, "memory": 12589, "data_time": 0.10116, "loss_rpn_cls": 0.01357, "loss_rpn_bbox": 0.03546, "loss_cls": 0.14442, "acc": 94.52319, "loss_bbox": 
0.19399, "loss_mask": 0.20484, "loss": 0.59228, "time": 0.55688} -{"mode": "train", "epoch": 11, "iter": 1900, "lr": 2e-05, "memory": 12589, "data_time": 0.0516, "loss_rpn_cls": 0.01373, "loss_rpn_bbox": 0.0351, "loss_cls": 0.14202, "acc": 94.61035, "loss_bbox": 0.18718, "loss_mask": 0.20059, "loss": 0.57861, "time": 0.50316} -{"mode": "train", "epoch": 11, "iter": 1950, "lr": 2e-05, "memory": 12589, "data_time": 0.04865, "loss_rpn_cls": 0.01478, "loss_rpn_bbox": 0.03778, "loss_cls": 0.14687, "acc": 94.49463, "loss_bbox": 0.19183, "loss_mask": 0.20594, "loss": 0.59721, "time": 0.50362} -{"mode": "train", "epoch": 11, "iter": 2000, "lr": 2e-05, "memory": 12589, "data_time": 0.04133, "loss_rpn_cls": 0.0139, "loss_rpn_bbox": 0.03592, "loss_cls": 0.14361, "acc": 94.63501, "loss_bbox": 0.18914, "loss_mask": 0.20677, "loss": 0.58934, "time": 0.49863} -{"mode": "train", "epoch": 11, "iter": 2050, "lr": 2e-05, "memory": 12589, "data_time": 0.04606, "loss_rpn_cls": 0.01586, "loss_rpn_bbox": 0.03823, "loss_cls": 0.15041, "acc": 94.42041, "loss_bbox": 0.19178, "loss_mask": 0.20607, "loss": 0.60235, "time": 0.50642} -{"mode": "train", "epoch": 11, "iter": 2100, "lr": 2e-05, "memory": 12589, "data_time": 0.05418, "loss_rpn_cls": 0.0165, "loss_rpn_bbox": 0.03864, "loss_cls": 0.1528, "acc": 94.33447, "loss_bbox": 0.20138, "loss_mask": 0.20675, "loss": 0.61607, "time": 0.50353} -{"mode": "train", "epoch": 11, "iter": 2150, "lr": 2e-05, "memory": 12589, "data_time": 0.051, "loss_rpn_cls": 0.01493, "loss_rpn_bbox": 0.03768, "loss_cls": 0.15309, "acc": 94.14185, "loss_bbox": 0.20078, "loss_mask": 0.20908, "loss": 0.61555, "time": 0.50446} -{"mode": "train", "epoch": 11, "iter": 2200, "lr": 2e-05, "memory": 12589, "data_time": 0.04696, "loss_rpn_cls": 0.01525, "loss_rpn_bbox": 0.03639, "loss_cls": 0.14974, "acc": 94.3811, "loss_bbox": 0.19289, "loss_mask": 0.2102, "loss": 0.60447, "time": 0.55049} -{"mode": "train", "epoch": 11, "iter": 2250, "lr": 2e-05, "memory": 12589, "data_time": 0.04639, "loss_rpn_cls": 0.01327, "loss_rpn_bbox": 0.03471, "loss_cls": 0.13923, "acc": 94.73706, "loss_bbox": 0.18914, "loss_mask": 0.20255, "loss": 0.57889, "time": 0.49699} -{"mode": "train", "epoch": 11, "iter": 2300, "lr": 2e-05, "memory": 12589, "data_time": 0.0578, "loss_rpn_cls": 0.01423, "loss_rpn_bbox": 0.03815, "loss_cls": 0.14996, "acc": 94.39136, "loss_bbox": 0.19902, "loss_mask": 0.20118, "loss": 0.60254, "time": 0.49736} -{"mode": "train", "epoch": 11, "iter": 2350, "lr": 2e-05, "memory": 12589, "data_time": 0.0478, "loss_rpn_cls": 0.01387, "loss_rpn_bbox": 0.03589, "loss_cls": 0.14042, "acc": 94.75171, "loss_bbox": 0.18514, "loss_mask": 0.20633, "loss": 0.58166, "time": 0.49997} -{"mode": "train", "epoch": 11, "iter": 2400, "lr": 2e-05, "memory": 12589, "data_time": 0.05803, "loss_rpn_cls": 0.01646, "loss_rpn_bbox": 0.03985, "loss_cls": 0.15566, "acc": 94.18286, "loss_bbox": 0.19959, "loss_mask": 0.20971, "loss": 0.62126, "time": 0.50494} -{"mode": "train", "epoch": 11, "iter": 2450, "lr": 2e-05, "memory": 12589, "data_time": 0.04879, "loss_rpn_cls": 0.01289, "loss_rpn_bbox": 0.03381, "loss_cls": 0.14362, "acc": 94.6123, "loss_bbox": 0.18835, "loss_mask": 0.2027, "loss": 0.58137, "time": 0.57071} -{"mode": "train", "epoch": 11, "iter": 2500, "lr": 2e-05, "memory": 12589, "data_time": 0.04115, "loss_rpn_cls": 0.01605, "loss_rpn_bbox": 0.03781, "loss_cls": 0.14934, "acc": 94.37769, "loss_bbox": 0.19793, "loss_mask": 0.20829, "loss": 0.60942, "time": 0.50176} -{"mode": "train", "epoch": 11, "iter": 2550, "lr": 
2e-05, "memory": 12589, "data_time": 0.05266, "loss_rpn_cls": 0.01477, "loss_rpn_bbox": 0.03636, "loss_cls": 0.14727, "acc": 94.51465, "loss_bbox": 0.19013, "loss_mask": 0.20364, "loss": 0.59216, "time": 0.54721} -{"mode": "train", "epoch": 11, "iter": 2600, "lr": 2e-05, "memory": 12589, "data_time": 0.05058, "loss_rpn_cls": 0.0148, "loss_rpn_bbox": 0.03713, "loss_cls": 0.15277, "acc": 94.31274, "loss_bbox": 0.19494, "loss_mask": 0.20399, "loss": 0.60363, "time": 0.4967} -{"mode": "train", "epoch": 11, "iter": 2650, "lr": 2e-05, "memory": 12589, "data_time": 0.04533, "loss_rpn_cls": 0.01366, "loss_rpn_bbox": 0.03529, "loss_cls": 0.14873, "acc": 94.49023, "loss_bbox": 0.19208, "loss_mask": 0.20533, "loss": 0.59509, "time": 0.50404} -{"mode": "train", "epoch": 11, "iter": 2700, "lr": 2e-05, "memory": 12589, "data_time": 0.04622, "loss_rpn_cls": 0.01461, "loss_rpn_bbox": 0.03899, "loss_cls": 0.15104, "acc": 94.40796, "loss_bbox": 0.19606, "loss_mask": 0.20353, "loss": 0.60423, "time": 0.56804} -{"mode": "train", "epoch": 11, "iter": 2750, "lr": 2e-05, "memory": 12589, "data_time": 0.04828, "loss_rpn_cls": 0.01459, "loss_rpn_bbox": 0.03844, "loss_cls": 0.14939, "acc": 94.33569, "loss_bbox": 0.19515, "loss_mask": 0.20826, "loss": 0.60583, "time": 0.51658} -{"mode": "train", "epoch": 11, "iter": 2800, "lr": 2e-05, "memory": 12589, "data_time": 0.04807, "loss_rpn_cls": 0.01522, "loss_rpn_bbox": 0.03734, "loss_cls": 0.14948, "acc": 94.35596, "loss_bbox": 0.1981, "loss_mask": 0.20269, "loss": 0.60283, "time": 0.51557} -{"mode": "train", "epoch": 11, "iter": 2850, "lr": 2e-05, "memory": 12589, "data_time": 0.04331, "loss_rpn_cls": 0.01492, "loss_rpn_bbox": 0.03728, "loss_cls": 0.14815, "acc": 94.45947, "loss_bbox": 0.1891, "loss_mask": 0.20626, "loss": 0.59571, "time": 0.50328} -{"mode": "train", "epoch": 11, "iter": 2900, "lr": 2e-05, "memory": 12589, "data_time": 0.04494, "loss_rpn_cls": 0.01493, "loss_rpn_bbox": 0.03781, "loss_cls": 0.15038, "acc": 94.34644, "loss_bbox": 0.19535, "loss_mask": 0.20918, "loss": 0.60765, "time": 0.5057} -{"mode": "train", "epoch": 11, "iter": 2950, "lr": 2e-05, "memory": 12589, "data_time": 0.05632, "loss_rpn_cls": 0.01548, "loss_rpn_bbox": 0.03855, "loss_cls": 0.15701, "acc": 94.15869, "loss_bbox": 0.20462, "loss_mask": 0.21335, "loss": 0.62901, "time": 0.51736} -{"mode": "train", "epoch": 11, "iter": 3000, "lr": 2e-05, "memory": 12589, "data_time": 0.05479, "loss_rpn_cls": 0.01539, "loss_rpn_bbox": 0.03626, "loss_cls": 0.15113, "acc": 94.36108, "loss_bbox": 0.19898, "loss_mask": 0.20908, "loss": 0.61084, "time": 0.50662} -{"mode": "train", "epoch": 11, "iter": 3050, "lr": 2e-05, "memory": 12589, "data_time": 0.0436, "loss_rpn_cls": 0.01607, "loss_rpn_bbox": 0.04008, "loss_cls": 0.15185, "acc": 94.21094, "loss_bbox": 0.20041, "loss_mask": 0.21339, "loss": 0.6218, "time": 0.51953} -{"mode": "train", "epoch": 11, "iter": 3100, "lr": 2e-05, "memory": 12589, "data_time": 0.04733, "loss_rpn_cls": 0.0133, "loss_rpn_bbox": 0.03695, "loss_cls": 0.14572, "acc": 94.56543, "loss_bbox": 0.18936, "loss_mask": 0.20197, "loss": 0.5873, "time": 0.50595} -{"mode": "train", "epoch": 11, "iter": 3150, "lr": 2e-05, "memory": 12589, "data_time": 0.05623, "loss_rpn_cls": 0.01496, "loss_rpn_bbox": 0.03891, "loss_cls": 0.15369, "acc": 94.2356, "loss_bbox": 0.19434, "loss_mask": 0.20524, "loss": 0.60713, "time": 0.5171} -{"mode": "train", "epoch": 11, "iter": 3200, "lr": 2e-05, "memory": 12589, "data_time": 0.05405, "loss_rpn_cls": 0.015, "loss_rpn_bbox": 0.03812, "loss_cls": 0.15385, 
"acc": 94.22974, "loss_bbox": 0.19911, "loss_mask": 0.20858, "loss": 0.61466, "time": 0.51676} -{"mode": "train", "epoch": 11, "iter": 3250, "lr": 2e-05, "memory": 12589, "data_time": 0.0419, "loss_rpn_cls": 0.01538, "loss_rpn_bbox": 0.03915, "loss_cls": 0.15218, "acc": 94.31396, "loss_bbox": 0.19568, "loss_mask": 0.21119, "loss": 0.61359, "time": 0.56681} -{"mode": "train", "epoch": 11, "iter": 3300, "lr": 2e-05, "memory": 12589, "data_time": 0.04233, "loss_rpn_cls": 0.0143, "loss_rpn_bbox": 0.03478, "loss_cls": 0.13854, "acc": 94.83765, "loss_bbox": 0.18426, "loss_mask": 0.20419, "loss": 0.57607, "time": 0.56232} -{"mode": "train", "epoch": 11, "iter": 3350, "lr": 2e-05, "memory": 12589, "data_time": 0.05227, "loss_rpn_cls": 0.01446, "loss_rpn_bbox": 0.03742, "loss_cls": 0.15095, "acc": 94.34814, "loss_bbox": 0.19774, "loss_mask": 0.21074, "loss": 0.61131, "time": 0.50443} -{"mode": "train", "epoch": 11, "iter": 3400, "lr": 2e-05, "memory": 12589, "data_time": 0.04103, "loss_rpn_cls": 0.01366, "loss_rpn_bbox": 0.03747, "loss_cls": 0.15139, "acc": 94.33423, "loss_bbox": 0.1965, "loss_mask": 0.20563, "loss": 0.60465, "time": 0.49354} -{"mode": "train", "epoch": 11, "iter": 3450, "lr": 2e-05, "memory": 12589, "data_time": 0.04247, "loss_rpn_cls": 0.01446, "loss_rpn_bbox": 0.03619, "loss_cls": 0.1505, "acc": 94.34497, "loss_bbox": 0.19493, "loss_mask": 0.20465, "loss": 0.60074, "time": 0.50497} -{"mode": "train", "epoch": 11, "iter": 3500, "lr": 2e-05, "memory": 12589, "data_time": 0.04644, "loss_rpn_cls": 0.01428, "loss_rpn_bbox": 0.0356, "loss_cls": 0.15019, "acc": 94.4563, "loss_bbox": 0.19279, "loss_mask": 0.2023, "loss": 0.59515, "time": 0.50333} -{"mode": "train", "epoch": 11, "iter": 3550, "lr": 2e-05, "memory": 12589, "data_time": 0.04094, "loss_rpn_cls": 0.01487, "loss_rpn_bbox": 0.03661, "loss_cls": 0.15348, "acc": 94.125, "loss_bbox": 0.19853, "loss_mask": 0.2028, "loss": 0.60629, "time": 0.50049} -{"mode": "train", "epoch": 11, "iter": 3600, "lr": 2e-05, "memory": 12589, "data_time": 0.046, "loss_rpn_cls": 0.0153, "loss_rpn_bbox": 0.03549, "loss_cls": 0.14699, "acc": 94.51025, "loss_bbox": 0.19201, "loss_mask": 0.20256, "loss": 0.59236, "time": 0.50319} -{"mode": "train", "epoch": 11, "iter": 3650, "lr": 2e-05, "memory": 12589, "data_time": 0.04747, "loss_rpn_cls": 0.01509, "loss_rpn_bbox": 0.03709, "loss_cls": 0.1483, "acc": 94.42944, "loss_bbox": 0.19534, "loss_mask": 0.2098, "loss": 0.60561, "time": 0.50496} -{"mode": "train", "epoch": 11, "iter": 3700, "lr": 2e-05, "memory": 12589, "data_time": 0.04212, "loss_rpn_cls": 0.01329, "loss_rpn_bbox": 0.03461, "loss_cls": 0.1497, "acc": 94.40625, "loss_bbox": 0.19412, "loss_mask": 0.20336, "loss": 0.59509, "time": 0.50832} -{"mode": "train", "epoch": 11, "iter": 3750, "lr": 2e-05, "memory": 12589, "data_time": 0.05273, "loss_rpn_cls": 0.01302, "loss_rpn_bbox": 0.03645, "loss_cls": 0.14565, "acc": 94.52319, "loss_bbox": 0.18955, "loss_mask": 0.20345, "loss": 0.58812, "time": 0.49172} -{"mode": "train", "epoch": 11, "iter": 3800, "lr": 2e-05, "memory": 12589, "data_time": 0.05128, "loss_rpn_cls": 0.01343, "loss_rpn_bbox": 0.03456, "loss_cls": 0.14198, "acc": 94.72876, "loss_bbox": 0.18249, "loss_mask": 0.20209, "loss": 0.57455, "time": 0.49573} -{"mode": "train", "epoch": 11, "iter": 3850, "lr": 2e-05, "memory": 12589, "data_time": 0.05012, "loss_rpn_cls": 0.01609, "loss_rpn_bbox": 0.03926, "loss_cls": 0.1553, "acc": 94.05908, "loss_bbox": 0.20674, "loss_mask": 0.21016, "loss": 0.62755, "time": 0.51445} -{"mode": "train", "epoch": 11, 
"iter": 3900, "lr": 2e-05, "memory": 12589, "data_time": 0.04908, "loss_rpn_cls": 0.01503, "loss_rpn_bbox": 0.03681, "loss_cls": 0.14989, "acc": 94.43164, "loss_bbox": 0.19302, "loss_mask": 0.20386, "loss": 0.5986, "time": 0.50323} -{"mode": "train", "epoch": 11, "iter": 3950, "lr": 2e-05, "memory": 12589, "data_time": 0.05064, "loss_rpn_cls": 0.01598, "loss_rpn_bbox": 0.03839, "loss_cls": 0.15451, "acc": 94.2041, "loss_bbox": 0.19824, "loss_mask": 0.20924, "loss": 0.61636, "time": 0.49337} -{"mode": "train", "epoch": 11, "iter": 4000, "lr": 2e-05, "memory": 12589, "data_time": 0.04673, "loss_rpn_cls": 0.0152, "loss_rpn_bbox": 0.03678, "loss_cls": 0.14663, "acc": 94.46777, "loss_bbox": 0.1944, "loss_mask": 0.20263, "loss": 0.59565, "time": 0.51662} -{"mode": "train", "epoch": 11, "iter": 4050, "lr": 2e-05, "memory": 12589, "data_time": 0.05254, "loss_rpn_cls": 0.01402, "loss_rpn_bbox": 0.0364, "loss_cls": 0.1507, "acc": 94.38208, "loss_bbox": 0.1951, "loss_mask": 0.20505, "loss": 0.60127, "time": 0.49918} -{"mode": "train", "epoch": 11, "iter": 4100, "lr": 2e-05, "memory": 12589, "data_time": 0.03977, "loss_rpn_cls": 0.0164, "loss_rpn_bbox": 0.03724, "loss_cls": 0.1542, "acc": 94.20898, "loss_bbox": 0.19751, "loss_mask": 0.20889, "loss": 0.61424, "time": 0.57096} -{"mode": "train", "epoch": 11, "iter": 4150, "lr": 2e-05, "memory": 12589, "data_time": 0.04249, "loss_rpn_cls": 0.01405, "loss_rpn_bbox": 0.03514, "loss_cls": 0.14737, "acc": 94.3833, "loss_bbox": 0.19145, "loss_mask": 0.20673, "loss": 0.59474, "time": 0.50551} -{"mode": "train", "epoch": 11, "iter": 4200, "lr": 2e-05, "memory": 12589, "data_time": 0.04687, "loss_rpn_cls": 0.01449, "loss_rpn_bbox": 0.03814, "loss_cls": 0.14927, "acc": 94.32983, "loss_bbox": 0.19601, "loss_mask": 0.20455, "loss": 0.60245, "time": 0.51676} -{"mode": "train", "epoch": 11, "iter": 4250, "lr": 2e-05, "memory": 12589, "data_time": 0.03934, "loss_rpn_cls": 0.01397, "loss_rpn_bbox": 0.03532, "loss_cls": 0.14434, "acc": 94.61426, "loss_bbox": 0.18779, "loss_mask": 0.20587, "loss": 0.58729, "time": 0.50075} -{"mode": "train", "epoch": 11, "iter": 4300, "lr": 2e-05, "memory": 12589, "data_time": 0.04491, "loss_rpn_cls": 0.0156, "loss_rpn_bbox": 0.03917, "loss_cls": 0.15029, "acc": 94.31616, "loss_bbox": 0.20153, "loss_mask": 0.20801, "loss": 0.6146, "time": 0.51019} -{"mode": "train", "epoch": 11, "iter": 4350, "lr": 2e-05, "memory": 12589, "data_time": 0.0449, "loss_rpn_cls": 0.01454, "loss_rpn_bbox": 0.03866, "loss_cls": 0.15565, "acc": 94.18774, "loss_bbox": 0.20113, "loss_mask": 0.21188, "loss": 0.62187, "time": 0.51126} -{"mode": "train", "epoch": 11, "iter": 4400, "lr": 2e-05, "memory": 12589, "data_time": 0.04704, "loss_rpn_cls": 0.01465, "loss_rpn_bbox": 0.03677, "loss_cls": 0.14834, "acc": 94.39282, "loss_bbox": 0.19531, "loss_mask": 0.20762, "loss": 0.60269, "time": 0.50358} -{"mode": "train", "epoch": 11, "iter": 4450, "lr": 2e-05, "memory": 12589, "data_time": 0.04516, "loss_rpn_cls": 0.01403, "loss_rpn_bbox": 0.03716, "loss_cls": 0.1503, "acc": 94.33228, "loss_bbox": 0.19544, "loss_mask": 0.20852, "loss": 0.60546, "time": 0.50043} -{"mode": "train", "epoch": 11, "iter": 4500, "lr": 2e-05, "memory": 12589, "data_time": 0.05193, "loss_rpn_cls": 0.0144, "loss_rpn_bbox": 0.03725, "loss_cls": 0.14998, "acc": 94.43286, "loss_bbox": 0.19309, "loss_mask": 0.20636, "loss": 0.6011, "time": 0.4981} -{"mode": "train", "epoch": 11, "iter": 4550, "lr": 2e-05, "memory": 12589, "data_time": 0.0498, "loss_rpn_cls": 0.01437, "loss_rpn_bbox": 0.03686, 
"loss_cls": 0.14272, "acc": 94.64868, "loss_bbox": 0.18969, "loss_mask": 0.20292, "loss": 0.58654, "time": 0.49592} -{"mode": "train", "epoch": 11, "iter": 4600, "lr": 2e-05, "memory": 12589, "data_time": 0.04994, "loss_rpn_cls": 0.01484, "loss_rpn_bbox": 0.03631, "loss_cls": 0.14939, "acc": 94.42749, "loss_bbox": 0.19267, "loss_mask": 0.20451, "loss": 0.59772, "time": 0.51111} -{"mode": "train", "epoch": 11, "iter": 4650, "lr": 2e-05, "memory": 12589, "data_time": 0.04743, "loss_rpn_cls": 0.01522, "loss_rpn_bbox": 0.0383, "loss_cls": 0.15617, "acc": 94.10132, "loss_bbox": 0.20257, "loss_mask": 0.21722, "loss": 0.62948, "time": 0.49978} -{"mode": "train", "epoch": 11, "iter": 4700, "lr": 2e-05, "memory": 12589, "data_time": 0.04976, "loss_rpn_cls": 0.0142, "loss_rpn_bbox": 0.0361, "loss_cls": 0.14973, "acc": 94.36182, "loss_bbox": 0.19442, "loss_mask": 0.20766, "loss": 0.60211, "time": 0.49736} -{"mode": "train", "epoch": 11, "iter": 4750, "lr": 2e-05, "memory": 12589, "data_time": 0.04768, "loss_rpn_cls": 0.0144, "loss_rpn_bbox": 0.0361, "loss_cls": 0.14751, "acc": 94.48145, "loss_bbox": 0.19412, "loss_mask": 0.20681, "loss": 0.59895, "time": 0.5408} -{"mode": "train", "epoch": 11, "iter": 4800, "lr": 2e-05, "memory": 12589, "data_time": 0.04926, "loss_rpn_cls": 0.01637, "loss_rpn_bbox": 0.03936, "loss_cls": 0.1611, "acc": 94.04907, "loss_bbox": 0.20452, "loss_mask": 0.21036, "loss": 0.63171, "time": 0.51164} -{"mode": "train", "epoch": 11, "iter": 4850, "lr": 2e-05, "memory": 12589, "data_time": 0.04671, "loss_rpn_cls": 0.01387, "loss_rpn_bbox": 0.03653, "loss_cls": 0.14596, "acc": 94.56519, "loss_bbox": 0.19089, "loss_mask": 0.20177, "loss": 0.58901, "time": 0.5388} -{"mode": "train", "epoch": 11, "iter": 4900, "lr": 2e-05, "memory": 12589, "data_time": 0.05218, "loss_rpn_cls": 0.01423, "loss_rpn_bbox": 0.0372, "loss_cls": 0.15249, "acc": 94.22925, "loss_bbox": 0.2011, "loss_mask": 0.21303, "loss": 0.61805, "time": 0.50109} -{"mode": "train", "epoch": 11, "iter": 4950, "lr": 2e-05, "memory": 12589, "data_time": 0.03828, "loss_rpn_cls": 0.01465, "loss_rpn_bbox": 0.03461, "loss_cls": 0.14378, "acc": 94.61865, "loss_bbox": 0.19081, "loss_mask": 0.20933, "loss": 0.59317, "time": 0.51195} -{"mode": "train", "epoch": 11, "iter": 5000, "lr": 2e-05, "memory": 12589, "data_time": 0.04753, "loss_rpn_cls": 0.01406, "loss_rpn_bbox": 0.03501, "loss_cls": 0.14536, "acc": 94.48608, "loss_bbox": 0.19018, "loss_mask": 0.2019, "loss": 0.5865, "time": 0.51164} -{"mode": "train", "epoch": 11, "iter": 5050, "lr": 2e-05, "memory": 12589, "data_time": 0.04927, "loss_rpn_cls": 0.01349, "loss_rpn_bbox": 0.03646, "loss_cls": 0.14493, "acc": 94.57129, "loss_bbox": 0.18932, "loss_mask": 0.20474, "loss": 0.58893, "time": 0.50708} -{"mode": "train", "epoch": 11, "iter": 5100, "lr": 2e-05, "memory": 12589, "data_time": 0.049, "loss_rpn_cls": 0.0157, "loss_rpn_bbox": 0.03712, "loss_cls": 0.14935, "acc": 94.28979, "loss_bbox": 0.19743, "loss_mask": 0.20762, "loss": 0.60723, "time": 0.51258} -{"mode": "train", "epoch": 11, "iter": 5150, "lr": 2e-05, "memory": 12589, "data_time": 0.04882, "loss_rpn_cls": 0.01423, "loss_rpn_bbox": 0.03606, "loss_cls": 0.14586, "acc": 94.58691, "loss_bbox": 0.1856, "loss_mask": 0.20364, "loss": 0.58539, "time": 0.5584} -{"mode": "train", "epoch": 11, "iter": 5200, "lr": 2e-05, "memory": 12589, "data_time": 0.04502, "loss_rpn_cls": 0.01393, "loss_rpn_bbox": 0.0359, "loss_cls": 0.1418, "acc": 94.60986, "loss_bbox": 0.18811, "loss_mask": 0.19824, "loss": 0.57798, "time": 0.50117} -{"mode": 
"train", "epoch": 11, "iter": 5250, "lr": 2e-05, "memory": 12589, "data_time": 0.04588, "loss_rpn_cls": 0.0128, "loss_rpn_bbox": 0.03619, "loss_cls": 0.14635, "acc": 94.479, "loss_bbox": 0.18916, "loss_mask": 0.2034, "loss": 0.58789, "time": 0.50424} -{"mode": "train", "epoch": 11, "iter": 5300, "lr": 2e-05, "memory": 12589, "data_time": 0.0459, "loss_rpn_cls": 0.01574, "loss_rpn_bbox": 0.03758, "loss_cls": 0.15082, "acc": 94.27368, "loss_bbox": 0.20052, "loss_mask": 0.20832, "loss": 0.61299, "time": 0.56747} -{"mode": "train", "epoch": 11, "iter": 5350, "lr": 2e-05, "memory": 12589, "data_time": 0.05053, "loss_rpn_cls": 0.01465, "loss_rpn_bbox": 0.03862, "loss_cls": 0.15123, "acc": 94.32788, "loss_bbox": 0.19904, "loss_mask": 0.20846, "loss": 0.612, "time": 0.49387} -{"mode": "train", "epoch": 11, "iter": 5400, "lr": 2e-05, "memory": 12589, "data_time": 0.0445, "loss_rpn_cls": 0.01467, "loss_rpn_bbox": 0.0364, "loss_cls": 0.14535, "acc": 94.55688, "loss_bbox": 0.18909, "loss_mask": 0.2031, "loss": 0.5886, "time": 0.51728} -{"mode": "train", "epoch": 11, "iter": 5450, "lr": 2e-05, "memory": 12589, "data_time": 0.05028, "loss_rpn_cls": 0.01422, "loss_rpn_bbox": 0.03613, "loss_cls": 0.14696, "acc": 94.57593, "loss_bbox": 0.18883, "loss_mask": 0.19829, "loss": 0.58442, "time": 0.50785} -{"mode": "train", "epoch": 11, "iter": 5500, "lr": 2e-05, "memory": 12589, "data_time": 0.04229, "loss_rpn_cls": 0.01557, "loss_rpn_bbox": 0.03605, "loss_cls": 0.1501, "acc": 94.5083, "loss_bbox": 0.1922, "loss_mask": 0.20514, "loss": 0.59906, "time": 0.50275} -{"mode": "train", "epoch": 11, "iter": 5550, "lr": 2e-05, "memory": 12589, "data_time": 0.05313, "loss_rpn_cls": 0.01547, "loss_rpn_bbox": 0.03809, "loss_cls": 0.15196, "acc": 94.29175, "loss_bbox": 0.19674, "loss_mask": 0.20907, "loss": 0.61132, "time": 0.50442} -{"mode": "train", "epoch": 11, "iter": 5600, "lr": 2e-05, "memory": 12589, "data_time": 0.03935, "loss_rpn_cls": 0.01431, "loss_rpn_bbox": 0.03607, "loss_cls": 0.14392, "acc": 94.67676, "loss_bbox": 0.18513, "loss_mask": 0.20151, "loss": 0.58094, "time": 0.5126} -{"mode": "train", "epoch": 11, "iter": 5650, "lr": 2e-05, "memory": 12589, "data_time": 0.0523, "loss_rpn_cls": 0.01479, "loss_rpn_bbox": 0.03662, "loss_cls": 0.14795, "acc": 94.51074, "loss_bbox": 0.19097, "loss_mask": 0.20509, "loss": 0.59543, "time": 0.52518} -{"mode": "train", "epoch": 11, "iter": 5700, "lr": 2e-05, "memory": 12589, "data_time": 0.04672, "loss_rpn_cls": 0.01504, "loss_rpn_bbox": 0.03764, "loss_cls": 0.15158, "acc": 94.32446, "loss_bbox": 0.19999, "loss_mask": 0.20999, "loss": 0.61424, "time": 0.51565} -{"mode": "train", "epoch": 11, "iter": 5750, "lr": 2e-05, "memory": 12589, "data_time": 0.05517, "loss_rpn_cls": 0.01535, "loss_rpn_bbox": 0.03679, "loss_cls": 0.14803, "acc": 94.43286, "loss_bbox": 0.19411, "loss_mask": 0.20735, "loss": 0.60163, "time": 0.50949} -{"mode": "train", "epoch": 11, "iter": 5800, "lr": 2e-05, "memory": 12589, "data_time": 0.04617, "loss_rpn_cls": 0.01392, "loss_rpn_bbox": 0.03739, "loss_cls": 0.1449, "acc": 94.6499, "loss_bbox": 0.1866, "loss_mask": 0.20241, "loss": 0.58522, "time": 0.50713} -{"mode": "train", "epoch": 11, "iter": 5850, "lr": 2e-05, "memory": 12589, "data_time": 0.04063, "loss_rpn_cls": 0.01461, "loss_rpn_bbox": 0.03698, "loss_cls": 0.14796, "acc": 94.42456, "loss_bbox": 0.19461, "loss_mask": 0.20756, "loss": 0.60171, "time": 0.50112} -{"mode": "train", "epoch": 11, "iter": 5900, "lr": 2e-05, "memory": 12589, "data_time": 0.04948, "loss_rpn_cls": 0.01327, 
"loss_rpn_bbox": 0.0348, "loss_cls": 0.14385, "acc": 94.61084, "loss_bbox": 0.18521, "loss_mask": 0.203, "loss": 0.58014, "time": 0.49329} -{"mode": "train", "epoch": 11, "iter": 5950, "lr": 2e-05, "memory": 12589, "data_time": 0.04603, "loss_rpn_cls": 0.01345, "loss_rpn_bbox": 0.03553, "loss_cls": 0.1476, "acc": 94.55859, "loss_bbox": 0.18715, "loss_mask": 0.19991, "loss": 0.58362, "time": 0.50436} -{"mode": "train", "epoch": 11, "iter": 6000, "lr": 2e-05, "memory": 12589, "data_time": 0.04793, "loss_rpn_cls": 0.01519, "loss_rpn_bbox": 0.0379, "loss_cls": 0.14963, "acc": 94.33447, "loss_bbox": 0.19368, "loss_mask": 0.20519, "loss": 0.60159, "time": 0.51507} -{"mode": "train", "epoch": 11, "iter": 6050, "lr": 2e-05, "memory": 12589, "data_time": 0.05172, "loss_rpn_cls": 0.01594, "loss_rpn_bbox": 0.03763, "loss_cls": 0.14527, "acc": 94.57568, "loss_bbox": 0.18873, "loss_mask": 0.2025, "loss": 0.59006, "time": 0.5658} -{"mode": "train", "epoch": 11, "iter": 6100, "lr": 2e-05, "memory": 12589, "data_time": 0.0489, "loss_rpn_cls": 0.01563, "loss_rpn_bbox": 0.03778, "loss_cls": 0.14973, "acc": 94.34692, "loss_bbox": 0.19634, "loss_mask": 0.2049, "loss": 0.60438, "time": 0.50606} -{"mode": "train", "epoch": 11, "iter": 6150, "lr": 2e-05, "memory": 12589, "data_time": 0.04982, "loss_rpn_cls": 0.01419, "loss_rpn_bbox": 0.03437, "loss_cls": 0.14508, "acc": 94.55176, "loss_bbox": 0.19311, "loss_mask": 0.20518, "loss": 0.59193, "time": 0.50858} -{"mode": "train", "epoch": 11, "iter": 6200, "lr": 2e-05, "memory": 12589, "data_time": 0.04823, "loss_rpn_cls": 0.01605, "loss_rpn_bbox": 0.03813, "loss_cls": 0.15409, "acc": 94.25024, "loss_bbox": 0.19673, "loss_mask": 0.20575, "loss": 0.61076, "time": 0.51553} -{"mode": "train", "epoch": 11, "iter": 6250, "lr": 2e-05, "memory": 12589, "data_time": 0.05119, "loss_rpn_cls": 0.01352, "loss_rpn_bbox": 0.03516, "loss_cls": 0.14431, "acc": 94.61011, "loss_bbox": 0.18749, "loss_mask": 0.2037, "loss": 0.58419, "time": 0.50682} -{"mode": "train", "epoch": 11, "iter": 6300, "lr": 2e-05, "memory": 12589, "data_time": 0.04467, "loss_rpn_cls": 0.01622, "loss_rpn_bbox": 0.03736, "loss_cls": 0.15632, "acc": 94.15845, "loss_bbox": 0.20325, "loss_mask": 0.2128, "loss": 0.62595, "time": 0.57362} -{"mode": "train", "epoch": 11, "iter": 6350, "lr": 2e-05, "memory": 12589, "data_time": 0.04403, "loss_rpn_cls": 0.01453, "loss_rpn_bbox": 0.03631, "loss_cls": 0.14616, "acc": 94.44116, "loss_bbox": 0.19343, "loss_mask": 0.20704, "loss": 0.59748, "time": 0.50473} -{"mode": "train", "epoch": 11, "iter": 6400, "lr": 2e-05, "memory": 12589, "data_time": 0.0424, "loss_rpn_cls": 0.01505, "loss_rpn_bbox": 0.03558, "loss_cls": 0.15194, "acc": 94.29272, "loss_bbox": 0.19534, "loss_mask": 0.2064, "loss": 0.60432, "time": 0.55675} -{"mode": "train", "epoch": 11, "iter": 6450, "lr": 2e-05, "memory": 12589, "data_time": 0.05487, "loss_rpn_cls": 0.01448, "loss_rpn_bbox": 0.03737, "loss_cls": 0.15565, "acc": 94.1665, "loss_bbox": 0.20443, "loss_mask": 0.20838, "loss": 0.62031, "time": 0.5152} -{"mode": "train", "epoch": 11, "iter": 6500, "lr": 2e-05, "memory": 12589, "data_time": 0.05394, "loss_rpn_cls": 0.01548, "loss_rpn_bbox": 0.03625, "loss_cls": 0.14447, "acc": 94.55591, "loss_bbox": 0.19272, "loss_mask": 0.20914, "loss": 0.59806, "time": 0.51008} -{"mode": "train", "epoch": 11, "iter": 6550, "lr": 2e-05, "memory": 12589, "data_time": 0.04384, "loss_rpn_cls": 0.01581, "loss_rpn_bbox": 0.03804, "loss_cls": 0.15587, "acc": 94.09229, "loss_bbox": 0.20082, "loss_mask": 0.21074, "loss": 0.62128, 
"time": 0.58387} -{"mode": "train", "epoch": 11, "iter": 6600, "lr": 2e-05, "memory": 12589, "data_time": 0.0477, "loss_rpn_cls": 0.01471, "loss_rpn_bbox": 0.03679, "loss_cls": 0.15215, "acc": 94.28613, "loss_bbox": 0.19513, "loss_mask": 0.20554, "loss": 0.60432, "time": 0.50945} -{"mode": "train", "epoch": 11, "iter": 6650, "lr": 2e-05, "memory": 12589, "data_time": 0.05675, "loss_rpn_cls": 0.01574, "loss_rpn_bbox": 0.03794, "loss_cls": 0.15172, "acc": 94.33228, "loss_bbox": 0.19438, "loss_mask": 0.21033, "loss": 0.61012, "time": 0.55465} -{"mode": "train", "epoch": 11, "iter": 6700, "lr": 2e-05, "memory": 12589, "data_time": 0.05801, "loss_rpn_cls": 0.01441, "loss_rpn_bbox": 0.03733, "loss_cls": 0.1459, "acc": 94.6062, "loss_bbox": 0.19192, "loss_mask": 0.2079, "loss": 0.59747, "time": 0.51446} -{"mode": "train", "epoch": 11, "iter": 6750, "lr": 2e-05, "memory": 12589, "data_time": 0.05749, "loss_rpn_cls": 0.01483, "loss_rpn_bbox": 0.03952, "loss_cls": 0.15751, "acc": 94.05005, "loss_bbox": 0.20862, "loss_mask": 0.21125, "loss": 0.63173, "time": 0.50917} -{"mode": "train", "epoch": 11, "iter": 6800, "lr": 2e-05, "memory": 12589, "data_time": 0.04834, "loss_rpn_cls": 0.01416, "loss_rpn_bbox": 0.03712, "loss_cls": 0.15045, "acc": 94.35083, "loss_bbox": 0.19921, "loss_mask": 0.20775, "loss": 0.6087, "time": 0.51351} -{"mode": "train", "epoch": 11, "iter": 6850, "lr": 2e-05, "memory": 12589, "data_time": 0.0554, "loss_rpn_cls": 0.01561, "loss_rpn_bbox": 0.03664, "loss_cls": 0.15137, "acc": 94.40796, "loss_bbox": 0.19323, "loss_mask": 0.20288, "loss": 0.59973, "time": 0.51897} -{"mode": "train", "epoch": 11, "iter": 6900, "lr": 2e-05, "memory": 12589, "data_time": 0.04935, "loss_rpn_cls": 0.01492, "loss_rpn_bbox": 0.03724, "loss_cls": 0.1475, "acc": 94.58594, "loss_bbox": 0.19215, "loss_mask": 0.20327, "loss": 0.59508, "time": 0.56482} -{"mode": "train", "epoch": 11, "iter": 6950, "lr": 2e-05, "memory": 12589, "data_time": 0.05971, "loss_rpn_cls": 0.01631, "loss_rpn_bbox": 0.03695, "loss_cls": 0.15487, "acc": 94.22241, "loss_bbox": 0.20251, "loss_mask": 0.21151, "loss": 0.62214, "time": 0.51591} -{"mode": "train", "epoch": 11, "iter": 7000, "lr": 2e-05, "memory": 12589, "data_time": 0.04766, "loss_rpn_cls": 0.01411, "loss_rpn_bbox": 0.03701, "loss_cls": 0.14842, "acc": 94.45654, "loss_bbox": 0.19532, "loss_mask": 0.20773, "loss": 0.60259, "time": 0.51946} -{"mode": "train", "epoch": 11, "iter": 7050, "lr": 2e-05, "memory": 12589, "data_time": 0.05018, "loss_rpn_cls": 0.01557, "loss_rpn_bbox": 0.0386, "loss_cls": 0.15284, "acc": 94.30811, "loss_bbox": 0.19961, "loss_mask": 0.20963, "loss": 0.61625, "time": 0.52454} -{"mode": "train", "epoch": 11, "iter": 7100, "lr": 2e-05, "memory": 12589, "data_time": 0.04804, "loss_rpn_cls": 0.01426, "loss_rpn_bbox": 0.03679, "loss_cls": 0.1518, "acc": 94.2644, "loss_bbox": 0.19729, "loss_mask": 0.20772, "loss": 0.60786, "time": 0.51843} -{"mode": "train", "epoch": 11, "iter": 7150, "lr": 2e-05, "memory": 12589, "data_time": 0.0473, "loss_rpn_cls": 0.01534, "loss_rpn_bbox": 0.03648, "loss_cls": 0.1492, "acc": 94.38745, "loss_bbox": 0.19403, "loss_mask": 0.20317, "loss": 0.59822, "time": 0.51112} -{"mode": "train", "epoch": 11, "iter": 7200, "lr": 2e-05, "memory": 12589, "data_time": 0.04598, "loss_rpn_cls": 0.01492, "loss_rpn_bbox": 0.0382, "loss_cls": 0.15481, "acc": 94.23975, "loss_bbox": 0.20062, "loss_mask": 0.20396, "loss": 0.61251, "time": 0.51502} -{"mode": "train", "epoch": 11, "iter": 7250, "lr": 2e-05, "memory": 12589, "data_time": 0.04993, 
"loss_rpn_cls": 0.01479, "loss_rpn_bbox": 0.0371, "loss_cls": 0.15047, "acc": 94.36328, "loss_bbox": 0.19293, "loss_mask": 0.20466, "loss": 0.59995, "time": 0.5109} -{"mode": "train", "epoch": 11, "iter": 7300, "lr": 2e-05, "memory": 12589, "data_time": 0.04768, "loss_rpn_cls": 0.0139, "loss_rpn_bbox": 0.03662, "loss_cls": 0.15157, "acc": 94.31006, "loss_bbox": 0.19769, "loss_mask": 0.20634, "loss": 0.60612, "time": 0.51645} -{"mode": "val", "epoch": 11, "iter": 625, "lr": 2e-05, "bbox_mAP": 0.4418, "bbox_mAP_50": 0.66, "bbox_mAP_75": 0.4855, "bbox_mAP_s": 0.2772, "bbox_mAP_m": 0.4791, "bbox_mAP_l": 0.5815, "bbox_mAP_copypaste": "0.4418 0.6600 0.4855 0.2772 0.4791 0.5815", "segm_mAP": 0.4076, "segm_mAP_50": 0.6353, "segm_mAP_75": 0.4378, "segm_mAP_s": 0.2166, "segm_mAP_m": 0.4351, "segm_mAP_l": 0.5895, "segm_mAP_copypaste": "0.4076 0.6353 0.4378 0.2166 0.4351 0.5895"} -{"mode": "train", "epoch": 12, "iter": 50, "lr": 0.0, "memory": 12589, "data_time": 0.12983, "loss_rpn_cls": 0.01552, "loss_rpn_bbox": 0.03756, "loss_cls": 0.14855, "acc": 94.32837, "loss_bbox": 0.19801, "loss_mask": 0.20785, "loss": 0.60749, "time": 0.6192} -{"mode": "train", "epoch": 12, "iter": 100, "lr": 0.0, "memory": 12589, "data_time": 0.04943, "loss_rpn_cls": 0.01534, "loss_rpn_bbox": 0.03609, "loss_cls": 0.14786, "acc": 94.49365, "loss_bbox": 0.19656, "loss_mask": 0.20694, "loss": 0.60279, "time": 0.53063} -{"mode": "train", "epoch": 12, "iter": 150, "lr": 0.0, "memory": 12589, "data_time": 0.05226, "loss_rpn_cls": 0.01336, "loss_rpn_bbox": 0.03635, "loss_cls": 0.14784, "acc": 94.48901, "loss_bbox": 0.19872, "loss_mask": 0.20825, "loss": 0.60453, "time": 0.53832} -{"mode": "train", "epoch": 12, "iter": 200, "lr": 0.0, "memory": 12589, "data_time": 0.05192, "loss_rpn_cls": 0.01295, "loss_rpn_bbox": 0.0358, "loss_cls": 0.13828, "acc": 94.75146, "loss_bbox": 0.18553, "loss_mask": 0.20013, "loss": 0.57269, "time": 0.52573} -{"mode": "train", "epoch": 12, "iter": 250, "lr": 0.0, "memory": 12589, "data_time": 0.05221, "loss_rpn_cls": 0.01505, "loss_rpn_bbox": 0.0375, "loss_cls": 0.1492, "acc": 94.40283, "loss_bbox": 0.19761, "loss_mask": 0.20713, "loss": 0.6065, "time": 0.52099} -{"mode": "train", "epoch": 12, "iter": 300, "lr": 0.0, "memory": 12589, "data_time": 0.05226, "loss_rpn_cls": 0.01274, "loss_rpn_bbox": 0.03449, "loss_cls": 0.1409, "acc": 94.62915, "loss_bbox": 0.18726, "loss_mask": 0.19879, "loss": 0.57418, "time": 0.51126} -{"mode": "train", "epoch": 12, "iter": 350, "lr": 0.0, "memory": 12589, "data_time": 0.05137, "loss_rpn_cls": 0.01413, "loss_rpn_bbox": 0.03702, "loss_cls": 0.14952, "acc": 94.40479, "loss_bbox": 0.19872, "loss_mask": 0.20635, "loss": 0.60573, "time": 0.53691} -{"mode": "train", "epoch": 12, "iter": 400, "lr": 0.0, "memory": 12589, "data_time": 0.05174, "loss_rpn_cls": 0.01421, "loss_rpn_bbox": 0.0374, "loss_cls": 0.14835, "acc": 94.48584, "loss_bbox": 0.19254, "loss_mask": 0.20653, "loss": 0.59902, "time": 0.52426} -{"mode": "train", "epoch": 12, "iter": 450, "lr": 0.0, "memory": 12589, "data_time": 0.04143, "loss_rpn_cls": 0.01463, "loss_rpn_bbox": 0.03905, "loss_cls": 0.14579, "acc": 94.51758, "loss_bbox": 0.19118, "loss_mask": 0.20692, "loss": 0.59756, "time": 0.52724} -{"mode": "train", "epoch": 12, "iter": 500, "lr": 0.0, "memory": 12589, "data_time": 0.03949, "loss_rpn_cls": 0.01295, "loss_rpn_bbox": 0.03572, "loss_cls": 0.14233, "acc": 94.64209, "loss_bbox": 0.19029, "loss_mask": 0.20623, "loss": 0.58752, "time": 0.56696} -{"mode": "train", "epoch": 12, "iter": 550, "lr": 0.0, 
"memory": 12589, "data_time": 0.0435, "loss_rpn_cls": 0.01317, "loss_rpn_bbox": 0.03646, "loss_cls": 0.14238, "acc": 94.62207, "loss_bbox": 0.18582, "loss_mask": 0.19897, "loss": 0.5768, "time": 0.58412} -{"mode": "train", "epoch": 12, "iter": 600, "lr": 0.0, "memory": 12589, "data_time": 0.0475, "loss_rpn_cls": 0.01424, "loss_rpn_bbox": 0.03507, "loss_cls": 0.14442, "acc": 94.53589, "loss_bbox": 0.19245, "loss_mask": 0.20424, "loss": 0.59043, "time": 0.56883} -{"mode": "train", "epoch": 12, "iter": 650, "lr": 0.0, "memory": 12589, "data_time": 0.04752, "loss_rpn_cls": 0.01383, "loss_rpn_bbox": 0.03679, "loss_cls": 0.14208, "acc": 94.70605, "loss_bbox": 0.18888, "loss_mask": 0.20141, "loss": 0.58299, "time": 0.51624} -{"mode": "train", "epoch": 12, "iter": 700, "lr": 0.0, "memory": 12589, "data_time": 0.05034, "loss_rpn_cls": 0.01454, "loss_rpn_bbox": 0.03763, "loss_cls": 0.13978, "acc": 94.66675, "loss_bbox": 0.18761, "loss_mask": 0.20237, "loss": 0.58192, "time": 0.52491} -{"mode": "train", "epoch": 12, "iter": 750, "lr": 0.0, "memory": 12589, "data_time": 0.04894, "loss_rpn_cls": 0.01529, "loss_rpn_bbox": 0.03849, "loss_cls": 0.14852, "acc": 94.45825, "loss_bbox": 0.19302, "loss_mask": 0.2071, "loss": 0.60243, "time": 0.50786} -{"mode": "train", "epoch": 12, "iter": 800, "lr": 0.0, "memory": 12589, "data_time": 0.04429, "loss_rpn_cls": 0.01436, "loss_rpn_bbox": 0.03604, "loss_cls": 0.14274, "acc": 94.58716, "loss_bbox": 0.18931, "loss_mask": 0.20146, "loss": 0.58391, "time": 0.51863} -{"mode": "train", "epoch": 12, "iter": 850, "lr": 0.0, "memory": 12589, "data_time": 0.04233, "loss_rpn_cls": 0.01307, "loss_rpn_bbox": 0.03312, "loss_cls": 0.13616, "acc": 94.84155, "loss_bbox": 0.18405, "loss_mask": 0.20051, "loss": 0.56691, "time": 0.50077} -{"mode": "train", "epoch": 12, "iter": 900, "lr": 0.0, "memory": 12589, "data_time": 0.0417, "loss_rpn_cls": 0.0127, "loss_rpn_bbox": 0.03526, "loss_cls": 0.13955, "acc": 94.74414, "loss_bbox": 0.18545, "loss_mask": 0.19931, "loss": 0.57227, "time": 0.50411} -{"mode": "train", "epoch": 12, "iter": 950, "lr": 0.0, "memory": 12589, "data_time": 0.05117, "loss_rpn_cls": 0.01356, "loss_rpn_bbox": 0.03517, "loss_cls": 0.14166, "acc": 94.69263, "loss_bbox": 0.18922, "loss_mask": 0.20392, "loss": 0.58352, "time": 0.55968} -{"mode": "train", "epoch": 12, "iter": 1000, "lr": 0.0, "memory": 12589, "data_time": 0.03718, "loss_rpn_cls": 0.01447, "loss_rpn_bbox": 0.03488, "loss_cls": 0.14078, "acc": 94.74438, "loss_bbox": 0.1892, "loss_mask": 0.20212, "loss": 0.58146, "time": 0.5508} -{"mode": "train", "epoch": 12, "iter": 1050, "lr": 0.0, "memory": 12589, "data_time": 0.04633, "loss_rpn_cls": 0.01346, "loss_rpn_bbox": 0.03583, "loss_cls": 0.14236, "acc": 94.71411, "loss_bbox": 0.19073, "loss_mask": 0.20585, "loss": 0.58823, "time": 0.50188} -{"mode": "train", "epoch": 12, "iter": 1100, "lr": 0.0, "memory": 12589, "data_time": 0.03541, "loss_rpn_cls": 0.01407, "loss_rpn_bbox": 0.03518, "loss_cls": 0.13795, "acc": 94.86475, "loss_bbox": 0.18109, "loss_mask": 0.20035, "loss": 0.56864, "time": 0.51264} -{"mode": "train", "epoch": 12, "iter": 1150, "lr": 0.0, "memory": 12589, "data_time": 0.04599, "loss_rpn_cls": 0.01392, "loss_rpn_bbox": 0.03546, "loss_cls": 0.14102, "acc": 94.63623, "loss_bbox": 0.19054, "loss_mask": 0.20589, "loss": 0.58684, "time": 0.56216} -{"mode": "train", "epoch": 12, "iter": 1200, "lr": 0.0, "memory": 12589, "data_time": 0.03543, "loss_rpn_cls": 0.01293, "loss_rpn_bbox": 0.03456, "loss_cls": 0.13782, "acc": 94.7832, "loss_bbox": 0.18386, 
"loss_mask": 0.20008, "loss": 0.56926, "time": 0.50266} -{"mode": "train", "epoch": 12, "iter": 1250, "lr": 0.0, "memory": 12589, "data_time": 0.04388, "loss_rpn_cls": 0.01265, "loss_rpn_bbox": 0.03408, "loss_cls": 0.13649, "acc": 94.92139, "loss_bbox": 0.17873, "loss_mask": 0.19777, "loss": 0.55972, "time": 0.50653} -{"mode": "train", "epoch": 12, "iter": 1300, "lr": 0.0, "memory": 12589, "data_time": 0.04981, "loss_rpn_cls": 0.01327, "loss_rpn_bbox": 0.03665, "loss_cls": 0.14204, "acc": 94.63135, "loss_bbox": 0.19325, "loss_mask": 0.20106, "loss": 0.58627, "time": 0.5173} -{"mode": "train", "epoch": 12, "iter": 1350, "lr": 0.0, "memory": 12589, "data_time": 0.04792, "loss_rpn_cls": 0.01419, "loss_rpn_bbox": 0.03588, "loss_cls": 0.1424, "acc": 94.68213, "loss_bbox": 0.18821, "loss_mask": 0.20024, "loss": 0.58093, "time": 0.51265} -{"mode": "train", "epoch": 12, "iter": 1400, "lr": 0.0, "memory": 12589, "data_time": 0.0407, "loss_rpn_cls": 0.01259, "loss_rpn_bbox": 0.03406, "loss_cls": 0.14093, "acc": 94.7334, "loss_bbox": 0.18987, "loss_mask": 0.20394, "loss": 0.58139, "time": 0.50341} -{"mode": "train", "epoch": 12, "iter": 1450, "lr": 0.0, "memory": 12589, "data_time": 0.03885, "loss_rpn_cls": 0.01356, "loss_rpn_bbox": 0.03728, "loss_cls": 0.14392, "acc": 94.59668, "loss_bbox": 0.18674, "loss_mask": 0.19804, "loss": 0.57954, "time": 0.51177} -{"mode": "train", "epoch": 12, "iter": 1500, "lr": 0.0, "memory": 12589, "data_time": 0.03913, "loss_rpn_cls": 0.01363, "loss_rpn_bbox": 0.03431, "loss_cls": 0.14142, "acc": 94.71313, "loss_bbox": 0.1826, "loss_mask": 0.20447, "loss": 0.57644, "time": 0.50699} -{"mode": "train", "epoch": 12, "iter": 1550, "lr": 0.0, "memory": 12589, "data_time": 0.0461, "loss_rpn_cls": 0.01431, "loss_rpn_bbox": 0.03637, "loss_cls": 0.13765, "acc": 94.83618, "loss_bbox": 0.18167, "loss_mask": 0.1983, "loss": 0.5683, "time": 0.51273} -{"mode": "train", "epoch": 12, "iter": 1600, "lr": 0.0, "memory": 12589, "data_time": 0.05248, "loss_rpn_cls": 0.01357, "loss_rpn_bbox": 0.03411, "loss_cls": 0.13569, "acc": 94.93408, "loss_bbox": 0.17852, "loss_mask": 0.19909, "loss": 0.56099, "time": 0.50527} -{"mode": "train", "epoch": 12, "iter": 1650, "lr": 0.0, "memory": 12589, "data_time": 0.05054, "loss_rpn_cls": 0.01466, "loss_rpn_bbox": 0.03483, "loss_cls": 0.14653, "acc": 94.46582, "loss_bbox": 0.18896, "loss_mask": 0.20407, "loss": 0.58905, "time": 0.49716} -{"mode": "train", "epoch": 12, "iter": 1700, "lr": 0.0, "memory": 12589, "data_time": 0.04616, "loss_rpn_cls": 0.01309, "loss_rpn_bbox": 0.0342, "loss_cls": 0.13836, "acc": 94.7981, "loss_bbox": 0.18534, "loss_mask": 0.20153, "loss": 0.57252, "time": 0.50139} -{"mode": "train", "epoch": 12, "iter": 1750, "lr": 0.0, "memory": 12589, "data_time": 0.04557, "loss_rpn_cls": 0.01377, "loss_rpn_bbox": 0.03738, "loss_cls": 0.14634, "acc": 94.4646, "loss_bbox": 0.1947, "loss_mask": 0.20463, "loss": 0.59682, "time": 0.49665} -{"mode": "train", "epoch": 12, "iter": 1800, "lr": 0.0, "memory": 12589, "data_time": 0.03981, "loss_rpn_cls": 0.01364, "loss_rpn_bbox": 0.03564, "loss_cls": 0.14517, "acc": 94.56226, "loss_bbox": 0.18551, "loss_mask": 0.20175, "loss": 0.58171, "time": 0.54533} -{"mode": "train", "epoch": 12, "iter": 1850, "lr": 0.0, "memory": 12589, "data_time": 0.05537, "loss_rpn_cls": 0.01254, "loss_rpn_bbox": 0.03537, "loss_cls": 0.13945, "acc": 94.72339, "loss_bbox": 0.18803, "loss_mask": 0.19823, "loss": 0.57362, "time": 0.50651} -{"mode": "train", "epoch": 12, "iter": 1900, "lr": 0.0, "memory": 12589, "data_time": 
0.04914, "loss_rpn_cls": 0.01437, "loss_rpn_bbox": 0.037, "loss_cls": 0.1507, "acc": 94.35864, "loss_bbox": 0.19212, "loss_mask": 0.20491, "loss": 0.59911, "time": 0.57224} -{"mode": "train", "epoch": 12, "iter": 1950, "lr": 0.0, "memory": 12589, "data_time": 0.04263, "loss_rpn_cls": 0.01467, "loss_rpn_bbox": 0.03826, "loss_cls": 0.14713, "acc": 94.40698, "loss_bbox": 0.19314, "loss_mask": 0.20852, "loss": 0.60172, "time": 0.51889} -{"mode": "train", "epoch": 12, "iter": 2000, "lr": 0.0, "memory": 12589, "data_time": 0.04571, "loss_rpn_cls": 0.01304, "loss_rpn_bbox": 0.03598, "loss_cls": 0.14128, "acc": 94.69653, "loss_bbox": 0.18643, "loss_mask": 0.20271, "loss": 0.57945, "time": 0.55586} -{"mode": "train", "epoch": 12, "iter": 2050, "lr": 0.0, "memory": 12589, "data_time": 0.04606, "loss_rpn_cls": 0.01343, "loss_rpn_bbox": 0.03727, "loss_cls": 0.14666, "acc": 94.53027, "loss_bbox": 0.19228, "loss_mask": 0.20244, "loss": 0.59208, "time": 0.50336} -{"mode": "train", "epoch": 12, "iter": 2100, "lr": 0.0, "memory": 12589, "data_time": 0.04508, "loss_rpn_cls": 0.01481, "loss_rpn_bbox": 0.03604, "loss_cls": 0.14441, "acc": 94.48608, "loss_bbox": 0.19422, "loss_mask": 0.20489, "loss": 0.59436, "time": 0.51303} -{"mode": "train", "epoch": 12, "iter": 2150, "lr": 0.0, "memory": 12589, "data_time": 0.0409, "loss_rpn_cls": 0.01394, "loss_rpn_bbox": 0.03655, "loss_cls": 0.13969, "acc": 94.74854, "loss_bbox": 0.18781, "loss_mask": 0.20037, "loss": 0.57836, "time": 0.50368} -{"mode": "train", "epoch": 12, "iter": 2200, "lr": 0.0, "memory": 12589, "data_time": 0.04498, "loss_rpn_cls": 0.01496, "loss_rpn_bbox": 0.03637, "loss_cls": 0.13842, "acc": 94.70654, "loss_bbox": 0.18749, "loss_mask": 0.20165, "loss": 0.57889, "time": 0.50195} -{"mode": "train", "epoch": 12, "iter": 2250, "lr": 0.0, "memory": 12589, "data_time": 0.04764, "loss_rpn_cls": 0.01287, "loss_rpn_bbox": 0.03672, "loss_cls": 0.14298, "acc": 94.57446, "loss_bbox": 0.19229, "loss_mask": 0.20564, "loss": 0.59051, "time": 0.5002} -{"mode": "train", "epoch": 12, "iter": 2300, "lr": 0.0, "memory": 12589, "data_time": 0.04074, "loss_rpn_cls": 0.01357, "loss_rpn_bbox": 0.03571, "loss_cls": 0.13719, "acc": 94.77905, "loss_bbox": 0.18263, "loss_mask": 0.2003, "loss": 0.5694, "time": 0.4984} -{"mode": "train", "epoch": 12, "iter": 2350, "lr": 0.0, "memory": 12589, "data_time": 0.04644, "loss_rpn_cls": 0.01284, "loss_rpn_bbox": 0.03545, "loss_cls": 0.13565, "acc": 94.875, "loss_bbox": 0.1831, "loss_mask": 0.19931, "loss": 0.56635, "time": 0.49654} -{"mode": "train", "epoch": 12, "iter": 2400, "lr": 0.0, "memory": 12589, "data_time": 0.0425, "loss_rpn_cls": 0.01293, "loss_rpn_bbox": 0.03274, "loss_cls": 0.1418, "acc": 94.65234, "loss_bbox": 0.18857, "loss_mask": 0.20532, "loss": 0.58136, "time": 0.53628} -{"mode": "train", "epoch": 12, "iter": 2450, "lr": 0.0, "memory": 12589, "data_time": 0.04049, "loss_rpn_cls": 0.01408, "loss_rpn_bbox": 0.03936, "loss_cls": 0.15175, "acc": 94.21313, "loss_bbox": 0.20206, "loss_mask": 0.21021, "loss": 0.61746, "time": 0.58603} -{"mode": "train", "epoch": 12, "iter": 2500, "lr": 0.0, "memory": 12589, "data_time": 0.04142, "loss_rpn_cls": 0.0143, "loss_rpn_bbox": 0.03784, "loss_cls": 0.14373, "acc": 94.61963, "loss_bbox": 0.18819, "loss_mask": 0.20881, "loss": 0.59288, "time": 0.49899} -{"mode": "train", "epoch": 12, "iter": 2550, "lr": 0.0, "memory": 12589, "data_time": 0.04912, "loss_rpn_cls": 0.01304, "loss_rpn_bbox": 0.03667, "loss_cls": 0.14383, "acc": 94.51294, "loss_bbox": 0.19248, "loss_mask": 0.20938, "loss": 
0.59539, "time": 0.54708} -{"mode": "train", "epoch": 12, "iter": 2600, "lr": 0.0, "memory": 12589, "data_time": 0.04046, "loss_rpn_cls": 0.01436, "loss_rpn_bbox": 0.03722, "loss_cls": 0.14202, "acc": 94.70459, "loss_bbox": 0.18654, "loss_mask": 0.20219, "loss": 0.58233, "time": 0.50183} -{"mode": "train", "epoch": 12, "iter": 2650, "lr": 0.0, "memory": 12589, "data_time": 0.04441, "loss_rpn_cls": 0.01432, "loss_rpn_bbox": 0.0378, "loss_cls": 0.1481, "acc": 94.39062, "loss_bbox": 0.20236, "loss_mask": 0.20679, "loss": 0.60936, "time": 0.50673} -{"mode": "train", "epoch": 12, "iter": 2700, "lr": 0.0, "memory": 12589, "data_time": 0.04494, "loss_rpn_cls": 0.01199, "loss_rpn_bbox": 0.0357, "loss_cls": 0.14152, "acc": 94.66821, "loss_bbox": 0.18761, "loss_mask": 0.2002, "loss": 0.57702, "time": 0.51215} -{"mode": "train", "epoch": 12, "iter": 2750, "lr": 0.0, "memory": 12589, "data_time": 0.05865, "loss_rpn_cls": 0.01293, "loss_rpn_bbox": 0.03565, "loss_cls": 0.13848, "acc": 94.74756, "loss_bbox": 0.18559, "loss_mask": 0.19813, "loss": 0.57077, "time": 0.50374} -{"mode": "train", "epoch": 12, "iter": 2800, "lr": 0.0, "memory": 12589, "data_time": 0.05327, "loss_rpn_cls": 0.01392, "loss_rpn_bbox": 0.03759, "loss_cls": 0.14417, "acc": 94.58374, "loss_bbox": 0.19, "loss_mask": 0.20744, "loss": 0.59312, "time": 0.57425} -{"mode": "train", "epoch": 12, "iter": 2850, "lr": 0.0, "memory": 12589, "data_time": 0.04714, "loss_rpn_cls": 0.01428, "loss_rpn_bbox": 0.03881, "loss_cls": 0.14977, "acc": 94.37622, "loss_bbox": 0.19621, "loss_mask": 0.21051, "loss": 0.60959, "time": 0.57142} -{"mode": "train", "epoch": 12, "iter": 2900, "lr": 0.0, "memory": 12589, "data_time": 0.04227, "loss_rpn_cls": 0.0139, "loss_rpn_bbox": 0.0383, "loss_cls": 0.14047, "acc": 94.62695, "loss_bbox": 0.19133, "loss_mask": 0.20594, "loss": 0.58995, "time": 0.51977} -{"mode": "train", "epoch": 12, "iter": 2950, "lr": 0.0, "memory": 12589, "data_time": 0.05126, "loss_rpn_cls": 0.01341, "loss_rpn_bbox": 0.03554, "loss_cls": 0.14062, "acc": 94.75464, "loss_bbox": 0.1878, "loss_mask": 0.20451, "loss": 0.58188, "time": 0.51612} -{"mode": "train", "epoch": 12, "iter": 3000, "lr": 0.0, "memory": 12589, "data_time": 0.05205, "loss_rpn_cls": 0.01361, "loss_rpn_bbox": 0.03613, "loss_cls": 0.14018, "acc": 94.73706, "loss_bbox": 0.18452, "loss_mask": 0.20091, "loss": 0.57536, "time": 0.50265} -{"mode": "train", "epoch": 12, "iter": 3050, "lr": 0.0, "memory": 12589, "data_time": 0.04898, "loss_rpn_cls": 0.01409, "loss_rpn_bbox": 0.03858, "loss_cls": 0.14954, "acc": 94.42456, "loss_bbox": 0.19736, "loss_mask": 0.20701, "loss": 0.60658, "time": 0.51885} -{"mode": "train", "epoch": 12, "iter": 3100, "lr": 0.0, "memory": 12589, "data_time": 0.0432, "loss_rpn_cls": 0.01369, "loss_rpn_bbox": 0.03534, "loss_cls": 0.14551, "acc": 94.56128, "loss_bbox": 0.19197, "loss_mask": 0.20721, "loss": 0.59372, "time": 0.50061} -{"mode": "train", "epoch": 12, "iter": 3150, "lr": 0.0, "memory": 12589, "data_time": 0.04943, "loss_rpn_cls": 0.01475, "loss_rpn_bbox": 0.03986, "loss_cls": 0.14925, "acc": 94.36084, "loss_bbox": 0.19517, "loss_mask": 0.20691, "loss": 0.60593, "time": 0.50855} -{"mode": "train", "epoch": 12, "iter": 3200, "lr": 0.0, "memory": 12589, "data_time": 0.04608, "loss_rpn_cls": 0.01452, "loss_rpn_bbox": 0.037, "loss_cls": 0.14402, "acc": 94.57446, "loss_bbox": 0.1918, "loss_mask": 0.20815, "loss": 0.59548, "time": 0.55893} -{"mode": "train", "epoch": 12, "iter": 3250, "lr": 0.0, "memory": 12589, "data_time": 0.06226, "loss_rpn_cls": 0.01452, 
"loss_rpn_bbox": 0.0376, "loss_cls": 0.14838, "acc": 94.39014, "loss_bbox": 0.1935, "loss_mask": 0.20623, "loss": 0.60023, "time": 0.52115} -{"mode": "train", "epoch": 12, "iter": 3300, "lr": 0.0, "memory": 12589, "data_time": 0.05202, "loss_rpn_cls": 0.01498, "loss_rpn_bbox": 0.03867, "loss_cls": 0.1509, "acc": 94.21289, "loss_bbox": 0.20198, "loss_mask": 0.21254, "loss": 0.61907, "time": 0.51885} -{"mode": "train", "epoch": 12, "iter": 3350, "lr": 0.0, "memory": 12589, "data_time": 0.0423, "loss_rpn_cls": 0.01389, "loss_rpn_bbox": 0.03604, "loss_cls": 0.13721, "acc": 94.85791, "loss_bbox": 0.18411, "loss_mask": 0.20445, "loss": 0.5757, "time": 0.49646} -{"mode": "train", "epoch": 12, "iter": 3400, "lr": 0.0, "memory": 12589, "data_time": 0.04957, "loss_rpn_cls": 0.01379, "loss_rpn_bbox": 0.03683, "loss_cls": 0.14353, "acc": 94.50635, "loss_bbox": 0.19669, "loss_mask": 0.20803, "loss": 0.59887, "time": 0.51194} -{"mode": "train", "epoch": 12, "iter": 3450, "lr": 0.0, "memory": 12589, "data_time": 0.04178, "loss_rpn_cls": 0.01277, "loss_rpn_bbox": 0.03519, "loss_cls": 0.13843, "acc": 94.75513, "loss_bbox": 0.18389, "loss_mask": 0.19875, "loss": 0.56903, "time": 0.50609} -{"mode": "train", "epoch": 12, "iter": 3500, "lr": 0.0, "memory": 12589, "data_time": 0.04451, "loss_rpn_cls": 0.01332, "loss_rpn_bbox": 0.03644, "loss_cls": 0.14417, "acc": 94.58105, "loss_bbox": 0.19018, "loss_mask": 0.20015, "loss": 0.58427, "time": 0.50958} -{"mode": "train", "epoch": 12, "iter": 3550, "lr": 0.0, "memory": 12589, "data_time": 0.0442, "loss_rpn_cls": 0.01375, "loss_rpn_bbox": 0.03666, "loss_cls": 0.14199, "acc": 94.54126, "loss_bbox": 0.1909, "loss_mask": 0.20274, "loss": 0.58603, "time": 0.51672} -{"mode": "train", "epoch": 12, "iter": 3600, "lr": 0.0, "memory": 12589, "data_time": 0.04395, "loss_rpn_cls": 0.01365, "loss_rpn_bbox": 0.03641, "loss_cls": 0.13623, "acc": 94.83789, "loss_bbox": 0.1836, "loss_mask": 0.20206, "loss": 0.57195, "time": 0.4969} -{"mode": "train", "epoch": 12, "iter": 3650, "lr": 0.0, "memory": 12589, "data_time": 0.03661, "loss_rpn_cls": 0.0123, "loss_rpn_bbox": 0.03445, "loss_cls": 0.13467, "acc": 94.88232, "loss_bbox": 0.18198, "loss_mask": 0.20479, "loss": 0.56818, "time": 0.48806} -{"mode": "train", "epoch": 12, "iter": 3700, "lr": 0.0, "memory": 12589, "data_time": 0.04946, "loss_rpn_cls": 0.01329, "loss_rpn_bbox": 0.03629, "loss_cls": 0.14515, "acc": 94.50757, "loss_bbox": 0.19494, "loss_mask": 0.20631, "loss": 0.59598, "time": 0.49927} -{"mode": "train", "epoch": 12, "iter": 3750, "lr": 0.0, "memory": 12589, "data_time": 0.04154, "loss_rpn_cls": 0.01362, "loss_rpn_bbox": 0.0357, "loss_cls": 0.14016, "acc": 94.76221, "loss_bbox": 0.18045, "loss_mask": 0.20003, "loss": 0.56997, "time": 0.4998} -{"mode": "train", "epoch": 12, "iter": 3800, "lr": 0.0, "memory": 12589, "data_time": 0.04177, "loss_rpn_cls": 0.01373, "loss_rpn_bbox": 0.03673, "loss_cls": 0.14575, "acc": 94.48877, "loss_bbox": 0.19187, "loss_mask": 0.20691, "loss": 0.59499, "time": 0.55445} -{"mode": "train", "epoch": 12, "iter": 3850, "lr": 0.0, "memory": 12589, "data_time": 0.0571, "loss_rpn_cls": 0.01398, "loss_rpn_bbox": 0.03623, "loss_cls": 0.14036, "acc": 94.70996, "loss_bbox": 0.18971, "loss_mask": 0.20109, "loss": 0.58136, "time": 0.50551} -{"mode": "train", "epoch": 12, "iter": 3900, "lr": 0.0, "memory": 12589, "data_time": 0.04445, "loss_rpn_cls": 0.01674, "loss_rpn_bbox": 0.03755, "loss_cls": 0.14599, "acc": 94.53491, "loss_bbox": 0.19367, "loss_mask": 0.20385, "loss": 0.5978, "time": 0.55726} 
-{"mode": "train", "epoch": 12, "iter": 3950, "lr": 0.0, "memory": 12589, "data_time": 0.04518, "loss_rpn_cls": 0.01175, "loss_rpn_bbox": 0.03244, "loss_cls": 0.13422, "acc": 94.90894, "loss_bbox": 0.18168, "loss_mask": 0.19797, "loss": 0.55805, "time": 0.48352} -{"mode": "train", "epoch": 12, "iter": 4000, "lr": 0.0, "memory": 12589, "data_time": 0.0485, "loss_rpn_cls": 0.01495, "loss_rpn_bbox": 0.0365, "loss_cls": 0.14631, "acc": 94.49487, "loss_bbox": 0.19302, "loss_mask": 0.20494, "loss": 0.59571, "time": 0.50922} -{"mode": "train", "epoch": 12, "iter": 4050, "lr": 0.0, "memory": 12589, "data_time": 0.05075, "loss_rpn_cls": 0.01352, "loss_rpn_bbox": 0.0378, "loss_cls": 0.15069, "acc": 94.30566, "loss_bbox": 0.19782, "loss_mask": 0.20836, "loss": 0.60818, "time": 0.50696} -{"mode": "train", "epoch": 12, "iter": 4100, "lr": 0.0, "memory": 12589, "data_time": 0.04679, "loss_rpn_cls": 0.01462, "loss_rpn_bbox": 0.03699, "loss_cls": 0.14211, "acc": 94.66431, "loss_bbox": 0.18925, "loss_mask": 0.20343, "loss": 0.58641, "time": 0.51393} -{"mode": "train", "epoch": 12, "iter": 4150, "lr": 0.0, "memory": 12589, "data_time": 0.04252, "loss_rpn_cls": 0.01347, "loss_rpn_bbox": 0.03472, "loss_cls": 0.13892, "acc": 94.79077, "loss_bbox": 0.18638, "loss_mask": 0.20266, "loss": 0.57614, "time": 0.49727} -{"mode": "train", "epoch": 12, "iter": 4200, "lr": 0.0, "memory": 12589, "data_time": 0.045, "loss_rpn_cls": 0.01344, "loss_rpn_bbox": 0.03662, "loss_cls": 0.14807, "acc": 94.45703, "loss_bbox": 0.19374, "loss_mask": 0.19858, "loss": 0.59045, "time": 0.50162} -{"mode": "train", "epoch": 12, "iter": 4250, "lr": 0.0, "memory": 12589, "data_time": 0.05474, "loss_rpn_cls": 0.01463, "loss_rpn_bbox": 0.0361, "loss_cls": 0.14662, "acc": 94.4209, "loss_bbox": 0.19342, "loss_mask": 0.20412, "loss": 0.5949, "time": 0.50679} -{"mode": "train", "epoch": 12, "iter": 4300, "lr": 0.0, "memory": 12589, "data_time": 0.05092, "loss_rpn_cls": 0.01295, "loss_rpn_bbox": 0.03462, "loss_cls": 0.14468, "acc": 94.53735, "loss_bbox": 0.19142, "loss_mask": 0.20175, "loss": 0.58542, "time": 0.536} -{"mode": "train", "epoch": 12, "iter": 4350, "lr": 0.0, "memory": 12589, "data_time": 0.04814, "loss_rpn_cls": 0.0146, "loss_rpn_bbox": 0.0359, "loss_cls": 0.14551, "acc": 94.53784, "loss_bbox": 0.19045, "loss_mask": 0.20246, "loss": 0.58892, "time": 0.5031} -{"mode": "train", "epoch": 12, "iter": 4400, "lr": 0.0, "memory": 12589, "data_time": 0.04233, "loss_rpn_cls": 0.0151, "loss_rpn_bbox": 0.03598, "loss_cls": 0.1445, "acc": 94.46973, "loss_bbox": 0.18885, "loss_mask": 0.20331, "loss": 0.58775, "time": 0.51228} -{"mode": "train", "epoch": 12, "iter": 4450, "lr": 0.0, "memory": 12589, "data_time": 0.04367, "loss_rpn_cls": 0.0128, "loss_rpn_bbox": 0.035, "loss_cls": 0.14558, "acc": 94.47559, "loss_bbox": 0.19502, "loss_mask": 0.20503, "loss": 0.59342, "time": 0.60476} -{"mode": "train", "epoch": 12, "iter": 4500, "lr": 0.0, "memory": 12589, "data_time": 0.0465, "loss_rpn_cls": 0.01514, "loss_rpn_bbox": 0.03546, "loss_cls": 0.14983, "acc": 94.35522, "loss_bbox": 0.19757, "loss_mask": 0.20679, "loss": 0.60479, "time": 0.49629} -{"mode": "train", "epoch": 12, "iter": 4550, "lr": 0.0, "memory": 12589, "data_time": 0.04532, "loss_rpn_cls": 0.01406, "loss_rpn_bbox": 0.03407, "loss_cls": 0.14099, "acc": 94.72705, "loss_bbox": 0.18511, "loss_mask": 0.20238, "loss": 0.57661, "time": 0.50705} -{"mode": "train", "epoch": 12, "iter": 4600, "lr": 0.0, "memory": 12589, "data_time": 0.0534, "loss_rpn_cls": 0.01353, "loss_rpn_bbox": 0.03597, 
"loss_cls": 0.14783, "acc": 94.38599, "loss_bbox": 0.19314, "loss_mask": 0.20185, "loss": 0.59232, "time": 0.51628} -{"mode": "train", "epoch": 12, "iter": 4650, "lr": 0.0, "memory": 12589, "data_time": 0.05504, "loss_rpn_cls": 0.01329, "loss_rpn_bbox": 0.03474, "loss_cls": 0.13723, "acc": 94.76416, "loss_bbox": 0.18263, "loss_mask": 0.19743, "loss": 0.56532, "time": 0.50352} -{"mode": "train", "epoch": 12, "iter": 4700, "lr": 0.0, "memory": 12589, "data_time": 0.04848, "loss_rpn_cls": 0.01409, "loss_rpn_bbox": 0.03415, "loss_cls": 0.14107, "acc": 94.71753, "loss_bbox": 0.18869, "loss_mask": 0.19891, "loss": 0.57691, "time": 0.50626} -{"mode": "train", "epoch": 12, "iter": 4750, "lr": 0.0, "memory": 12589, "data_time": 0.04252, "loss_rpn_cls": 0.01401, "loss_rpn_bbox": 0.03641, "loss_cls": 0.14502, "acc": 94.50439, "loss_bbox": 0.18969, "loss_mask": 0.2037, "loss": 0.58883, "time": 0.4999} -{"mode": "train", "epoch": 12, "iter": 4800, "lr": 0.0, "memory": 12589, "data_time": 0.0335, "loss_rpn_cls": 0.01526, "loss_rpn_bbox": 0.03563, "loss_cls": 0.14531, "acc": 94.53296, "loss_bbox": 0.19328, "loss_mask": 0.20239, "loss": 0.59187, "time": 0.49547} -{"mode": "train", "epoch": 12, "iter": 4850, "lr": 0.0, "memory": 12589, "data_time": 0.04625, "loss_rpn_cls": 0.01414, "loss_rpn_bbox": 0.03824, "loss_cls": 0.14724, "acc": 94.38379, "loss_bbox": 0.19517, "loss_mask": 0.20406, "loss": 0.59885, "time": 0.51503} -{"mode": "train", "epoch": 12, "iter": 4900, "lr": 0.0, "memory": 12589, "data_time": 0.04357, "loss_rpn_cls": 0.01315, "loss_rpn_bbox": 0.03627, "loss_cls": 0.14445, "acc": 94.59302, "loss_bbox": 0.19305, "loss_mask": 0.20244, "loss": 0.58936, "time": 0.51422} -{"mode": "train", "epoch": 12, "iter": 4950, "lr": 0.0, "memory": 12589, "data_time": 0.05442, "loss_rpn_cls": 0.01545, "loss_rpn_bbox": 0.03624, "loss_cls": 0.14456, "acc": 94.58081, "loss_bbox": 0.18765, "loss_mask": 0.20115, "loss": 0.58505, "time": 0.51119} -{"mode": "train", "epoch": 12, "iter": 5000, "lr": 0.0, "memory": 12589, "data_time": 0.04609, "loss_rpn_cls": 0.0153, "loss_rpn_bbox": 0.03578, "loss_cls": 0.14736, "acc": 94.47168, "loss_bbox": 0.19245, "loss_mask": 0.20224, "loss": 0.59312, "time": 0.51714} -{"mode": "train", "epoch": 12, "iter": 5050, "lr": 0.0, "memory": 12589, "data_time": 0.0608, "loss_rpn_cls": 0.01515, "loss_rpn_bbox": 0.03688, "loss_cls": 0.15019, "acc": 94.32397, "loss_bbox": 0.19841, "loss_mask": 0.20752, "loss": 0.60816, "time": 0.5034} -{"mode": "train", "epoch": 12, "iter": 5100, "lr": 0.0, "memory": 12589, "data_time": 0.04547, "loss_rpn_cls": 0.0131, "loss_rpn_bbox": 0.0341, "loss_cls": 0.13995, "acc": 94.70508, "loss_bbox": 0.18831, "loss_mask": 0.20003, "loss": 0.57549, "time": 0.57687} -{"mode": "train", "epoch": 12, "iter": 5150, "lr": 0.0, "memory": 12589, "data_time": 0.048, "loss_rpn_cls": 0.01334, "loss_rpn_bbox": 0.03502, "loss_cls": 0.13822, "acc": 94.77344, "loss_bbox": 0.18787, "loss_mask": 0.20336, "loss": 0.57782, "time": 0.50351} -{"mode": "train", "epoch": 12, "iter": 5200, "lr": 0.0, "memory": 12589, "data_time": 0.04334, "loss_rpn_cls": 0.01373, "loss_rpn_bbox": 0.03725, "loss_cls": 0.14102, "acc": 94.57642, "loss_bbox": 0.18887, "loss_mask": 0.20268, "loss": 0.58355, "time": 0.50834} -{"mode": "train", "epoch": 12, "iter": 5250, "lr": 0.0, "memory": 12589, "data_time": 0.03606, "loss_rpn_cls": 0.01373, "loss_rpn_bbox": 0.03629, "loss_cls": 0.15057, "acc": 94.31396, "loss_bbox": 0.19456, "loss_mask": 0.20429, "loss": 0.59945, "time": 0.50916} -{"mode": "train", "epoch": 
12, "iter": 5300, "lr": 0.0, "memory": 12589, "data_time": 0.0503, "loss_rpn_cls": 0.01327, "loss_rpn_bbox": 0.03772, "loss_cls": 0.1431, "acc": 94.65308, "loss_bbox": 0.19443, "loss_mask": 0.2034, "loss": 0.59193, "time": 0.51425} -{"mode": "train", "epoch": 12, "iter": 5350, "lr": 0.0, "memory": 12589, "data_time": 0.04313, "loss_rpn_cls": 0.01446, "loss_rpn_bbox": 0.03644, "loss_cls": 0.14626, "acc": 94.44751, "loss_bbox": 0.19447, "loss_mask": 0.20668, "loss": 0.5983, "time": 0.51648} -{"mode": "train", "epoch": 12, "iter": 5400, "lr": 0.0, "memory": 12589, "data_time": 0.0518, "loss_rpn_cls": 0.01453, "loss_rpn_bbox": 0.03828, "loss_cls": 0.14987, "acc": 94.34399, "loss_bbox": 0.19961, "loss_mask": 0.20962, "loss": 0.61191, "time": 0.50814} -{"mode": "train", "epoch": 12, "iter": 5450, "lr": 0.0, "memory": 12589, "data_time": 0.03922, "loss_rpn_cls": 0.01263, "loss_rpn_bbox": 0.03524, "loss_cls": 0.14181, "acc": 94.55029, "loss_bbox": 0.19266, "loss_mask": 0.20235, "loss": 0.58469, "time": 0.50871} -{"mode": "train", "epoch": 12, "iter": 5500, "lr": 0.0, "memory": 12589, "data_time": 0.03981, "loss_rpn_cls": 0.01426, "loss_rpn_bbox": 0.0347, "loss_cls": 0.14392, "acc": 94.54126, "loss_bbox": 0.19115, "loss_mask": 0.2037, "loss": 0.58773, "time": 0.51005} -{"mode": "train", "epoch": 12, "iter": 5550, "lr": 0.0, "memory": 12589, "data_time": 0.04263, "loss_rpn_cls": 0.01507, "loss_rpn_bbox": 0.0388, "loss_cls": 0.1481, "acc": 94.42651, "loss_bbox": 0.19174, "loss_mask": 0.20373, "loss": 0.59744, "time": 0.51094} -{"mode": "train", "epoch": 12, "iter": 5600, "lr": 0.0, "memory": 12589, "data_time": 0.04657, "loss_rpn_cls": 0.01431, "loss_rpn_bbox": 0.03653, "loss_cls": 0.13889, "acc": 94.71631, "loss_bbox": 0.18531, "loss_mask": 0.20203, "loss": 0.57706, "time": 0.50747} -{"mode": "train", "epoch": 12, "iter": 5650, "lr": 0.0, "memory": 12589, "data_time": 0.04439, "loss_rpn_cls": 0.01363, "loss_rpn_bbox": 0.03644, "loss_cls": 0.14166, "acc": 94.70752, "loss_bbox": 0.18724, "loss_mask": 0.19909, "loss": 0.57806, "time": 0.51257} -{"mode": "train", "epoch": 12, "iter": 5700, "lr": 0.0, "memory": 12589, "data_time": 0.04161, "loss_rpn_cls": 0.01391, "loss_rpn_bbox": 0.03406, "loss_cls": 0.14053, "acc": 94.68457, "loss_bbox": 0.18616, "loss_mask": 0.2038, "loss": 0.57846, "time": 0.51066} -{"mode": "train", "epoch": 12, "iter": 5750, "lr": 0.0, "memory": 12589, "data_time": 0.04632, "loss_rpn_cls": 0.01332, "loss_rpn_bbox": 0.03503, "loss_cls": 0.14234, "acc": 94.58447, "loss_bbox": 0.18767, "loss_mask": 0.20471, "loss": 0.58307, "time": 0.50138} -{"mode": "train", "epoch": 12, "iter": 5800, "lr": 0.0, "memory": 12589, "data_time": 0.03727, "loss_rpn_cls": 0.01244, "loss_rpn_bbox": 0.03557, "loss_cls": 0.14414, "acc": 94.5481, "loss_bbox": 0.19153, "loss_mask": 0.20255, "loss": 0.58623, "time": 0.5183} -{"mode": "train", "epoch": 12, "iter": 5850, "lr": 0.0, "memory": 12589, "data_time": 0.04379, "loss_rpn_cls": 0.01366, "loss_rpn_bbox": 0.03767, "loss_cls": 0.14284, "acc": 94.60156, "loss_bbox": 0.18775, "loss_mask": 0.20137, "loss": 0.58329, "time": 0.50075} -{"mode": "train", "epoch": 12, "iter": 5900, "lr": 0.0, "memory": 12589, "data_time": 0.04723, "loss_rpn_cls": 0.01323, "loss_rpn_bbox": 0.0361, "loss_cls": 0.14605, "acc": 94.52563, "loss_bbox": 0.19358, "loss_mask": 0.20312, "loss": 0.59208, "time": 0.5047} -{"mode": "train", "epoch": 12, "iter": 5950, "lr": 0.0, "memory": 12589, "data_time": 0.0494, "loss_rpn_cls": 0.01281, "loss_rpn_bbox": 0.03374, "loss_cls": 0.13879, "acc": 
94.67969, "loss_bbox": 0.18574, "loss_mask": 0.19923, "loss": 0.57031, "time": 0.51171} -{"mode": "train", "epoch": 12, "iter": 6000, "lr": 0.0, "memory": 12589, "data_time": 0.04221, "loss_rpn_cls": 0.01278, "loss_rpn_bbox": 0.0339, "loss_cls": 0.13832, "acc": 94.79688, "loss_bbox": 0.18415, "loss_mask": 0.19919, "loss": 0.56834, "time": 0.51924} -{"mode": "train", "epoch": 12, "iter": 6050, "lr": 0.0, "memory": 12589, "data_time": 0.0455, "loss_rpn_cls": 0.01322, "loss_rpn_bbox": 0.03542, "loss_cls": 0.14027, "acc": 94.71777, "loss_bbox": 0.18819, "loss_mask": 0.20247, "loss": 0.57955, "time": 0.49618} -{"mode": "train", "epoch": 12, "iter": 6100, "lr": 0.0, "memory": 12589, "data_time": 0.04882, "loss_rpn_cls": 0.01332, "loss_rpn_bbox": 0.03719, "loss_cls": 0.14292, "acc": 94.50879, "loss_bbox": 0.19485, "loss_mask": 0.20395, "loss": 0.59223, "time": 0.51244} -{"mode": "train", "epoch": 12, "iter": 6150, "lr": 0.0, "memory": 12589, "data_time": 0.04299, "loss_rpn_cls": 0.01371, "loss_rpn_bbox": 0.03475, "loss_cls": 0.13517, "acc": 94.85254, "loss_bbox": 0.18172, "loss_mask": 0.1981, "loss": 0.56344, "time": 0.50371} -{"mode": "train", "epoch": 12, "iter": 6200, "lr": 0.0, "memory": 12589, "data_time": 0.04742, "loss_rpn_cls": 0.01354, "loss_rpn_bbox": 0.03602, "loss_cls": 0.14015, "acc": 94.61694, "loss_bbox": 0.1851, "loss_mask": 0.19818, "loss": 0.57299, "time": 0.49675} -{"mode": "train", "epoch": 12, "iter": 6250, "lr": 0.0, "memory": 12589, "data_time": 0.05146, "loss_rpn_cls": 0.01373, "loss_rpn_bbox": 0.03629, "loss_cls": 0.14431, "acc": 94.6084, "loss_bbox": 0.19017, "loss_mask": 0.20184, "loss": 0.58634, "time": 0.52563} -{"mode": "train", "epoch": 12, "iter": 6300, "lr": 0.0, "memory": 12589, "data_time": 0.05075, "loss_rpn_cls": 0.01361, "loss_rpn_bbox": 0.03836, "loss_cls": 0.14701, "acc": 94.38037, "loss_bbox": 0.19655, "loss_mask": 0.20563, "loss": 0.60117, "time": 0.59185} -{"mode": "train", "epoch": 12, "iter": 6350, "lr": 0.0, "memory": 12589, "data_time": 0.05008, "loss_rpn_cls": 0.0128, "loss_rpn_bbox": 0.03594, "loss_cls": 0.14125, "acc": 94.70972, "loss_bbox": 0.18918, "loss_mask": 0.20142, "loss": 0.58058, "time": 0.50762} -{"mode": "train", "epoch": 12, "iter": 6400, "lr": 0.0, "memory": 12589, "data_time": 0.06321, "loss_rpn_cls": 0.01363, "loss_rpn_bbox": 0.03827, "loss_cls": 0.14872, "acc": 94.37866, "loss_bbox": 0.1956, "loss_mask": 0.20037, "loss": 0.59659, "time": 0.52434} -{"mode": "train", "epoch": 12, "iter": 6450, "lr": 0.0, "memory": 12589, "data_time": 0.05041, "loss_rpn_cls": 0.01346, "loss_rpn_bbox": 0.03602, "loss_cls": 0.14352, "acc": 94.48901, "loss_bbox": 0.19585, "loss_mask": 0.20611, "loss": 0.59496, "time": 0.50222} -{"mode": "train", "epoch": 12, "iter": 6500, "lr": 0.0, "memory": 12589, "data_time": 0.04585, "loss_rpn_cls": 0.01377, "loss_rpn_bbox": 0.03651, "loss_cls": 0.1423, "acc": 94.63428, "loss_bbox": 0.18732, "loss_mask": 0.20236, "loss": 0.58226, "time": 0.5215} -{"mode": "train", "epoch": 12, "iter": 6550, "lr": 0.0, "memory": 12589, "data_time": 0.04621, "loss_rpn_cls": 0.01327, "loss_rpn_bbox": 0.03485, "loss_cls": 0.14299, "acc": 94.5918, "loss_bbox": 0.18789, "loss_mask": 0.19698, "loss": 0.57598, "time": 0.50962} -{"mode": "train", "epoch": 12, "iter": 6600, "lr": 0.0, "memory": 12589, "data_time": 0.04283, "loss_rpn_cls": 0.01354, "loss_rpn_bbox": 0.03634, "loss_cls": 0.14377, "acc": 94.53784, "loss_bbox": 0.19514, "loss_mask": 0.20778, "loss": 0.59658, "time": 0.55547} -{"mode": "train", "epoch": 12, "iter": 6650, "lr": 0.0, 
"memory": 12589, "data_time": 0.03826, "loss_rpn_cls": 0.01367, "loss_rpn_bbox": 0.03558, "loss_cls": 0.14394, "acc": 94.59839, "loss_bbox": 0.19164, "loss_mask": 0.20441, "loss": 0.58925, "time": 0.49812} -{"mode": "train", "epoch": 12, "iter": 6700, "lr": 0.0, "memory": 12589, "data_time": 0.046, "loss_rpn_cls": 0.01489, "loss_rpn_bbox": 0.03708, "loss_cls": 0.14975, "acc": 94.45728, "loss_bbox": 0.19405, "loss_mask": 0.20685, "loss": 0.60261, "time": 0.54864} -{"mode": "train", "epoch": 12, "iter": 6750, "lr": 0.0, "memory": 12589, "data_time": 0.05447, "loss_rpn_cls": 0.01411, "loss_rpn_bbox": 0.03689, "loss_cls": 0.1406, "acc": 94.66772, "loss_bbox": 0.18621, "loss_mask": 0.20527, "loss": 0.58308, "time": 0.51442} -{"mode": "train", "epoch": 12, "iter": 6800, "lr": 0.0, "memory": 12589, "data_time": 0.04779, "loss_rpn_cls": 0.01355, "loss_rpn_bbox": 0.0333, "loss_cls": 0.14144, "acc": 94.68457, "loss_bbox": 0.18588, "loss_mask": 0.19828, "loss": 0.57244, "time": 0.50997} -{"mode": "train", "epoch": 12, "iter": 6850, "lr": 0.0, "memory": 12589, "data_time": 0.05032, "loss_rpn_cls": 0.01316, "loss_rpn_bbox": 0.03592, "loss_cls": 0.14014, "acc": 94.67017, "loss_bbox": 0.18636, "loss_mask": 0.19873, "loss": 0.57431, "time": 0.50377} -{"mode": "train", "epoch": 12, "iter": 6900, "lr": 0.0, "memory": 12589, "data_time": 0.04363, "loss_rpn_cls": 0.01372, "loss_rpn_bbox": 0.03479, "loss_cls": 0.14363, "acc": 94.51636, "loss_bbox": 0.18846, "loss_mask": 0.20029, "loss": 0.5809, "time": 0.51343} -{"mode": "train", "epoch": 12, "iter": 6950, "lr": 0.0, "memory": 12589, "data_time": 0.04706, "loss_rpn_cls": 0.01454, "loss_rpn_bbox": 0.03685, "loss_cls": 0.14469, "acc": 94.52002, "loss_bbox": 0.19268, "loss_mask": 0.20293, "loss": 0.59169, "time": 0.52208} -{"mode": "train", "epoch": 12, "iter": 7000, "lr": 0.0, "memory": 12589, "data_time": 0.04653, "loss_rpn_cls": 0.01387, "loss_rpn_bbox": 0.03609, "loss_cls": 0.1405, "acc": 94.69751, "loss_bbox": 0.18698, "loss_mask": 0.20614, "loss": 0.58357, "time": 0.51126} -{"mode": "train", "epoch": 12, "iter": 7050, "lr": 0.0, "memory": 12589, "data_time": 0.04251, "loss_rpn_cls": 0.0139, "loss_rpn_bbox": 0.03489, "loss_cls": 0.13838, "acc": 94.80957, "loss_bbox": 0.17914, "loss_mask": 0.19535, "loss": 0.56167, "time": 0.5047} -{"mode": "train", "epoch": 12, "iter": 7100, "lr": 0.0, "memory": 12589, "data_time": 0.05068, "loss_rpn_cls": 0.01383, "loss_rpn_bbox": 0.03602, "loss_cls": 0.14291, "acc": 94.64233, "loss_bbox": 0.18698, "loss_mask": 0.20351, "loss": 0.58325, "time": 0.49483} -{"mode": "train", "epoch": 12, "iter": 7150, "lr": 0.0, "memory": 12589, "data_time": 0.05352, "loss_rpn_cls": 0.01293, "loss_rpn_bbox": 0.03505, "loss_cls": 0.13901, "acc": 94.73145, "loss_bbox": 0.18858, "loss_mask": 0.20467, "loss": 0.58023, "time": 0.50044} -{"mode": "train", "epoch": 12, "iter": 7200, "lr": 0.0, "memory": 12589, "data_time": 0.04563, "loss_rpn_cls": 0.01325, "loss_rpn_bbox": 0.03197, "loss_cls": 0.134, "acc": 95.06396, "loss_bbox": 0.18083, "loss_mask": 0.19477, "loss": 0.55482, "time": 0.49091} -{"mode": "train", "epoch": 12, "iter": 7250, "lr": 0.0, "memory": 12589, "data_time": 0.04553, "loss_rpn_cls": 0.01436, "loss_rpn_bbox": 0.03742, "loss_cls": 0.14561, "acc": 94.4375, "loss_bbox": 0.19722, "loss_mask": 0.20765, "loss": 0.60227, "time": 0.50736} -{"mode": "train", "epoch": 12, "iter": 7300, "lr": 0.0, "memory": 12589, "data_time": 0.04574, "loss_rpn_cls": 0.01298, "loss_rpn_bbox": 0.03739, "loss_cls": 0.14838, "acc": 94.40747, "loss_bbox": 
0.19729, "loss_mask": 0.20685, "loss": 0.60289, "time": 0.50516} -{"mode": "val", "epoch": 12, "iter": 625, "lr": 0.0, "bbox_mAP": 0.4456, "bbox_mAP_50": 0.6614, "bbox_mAP_75": 0.4881, "bbox_mAP_s": 0.2779, "bbox_mAP_m": 0.4811, "bbox_mAP_l": 0.5832, "bbox_mAP_copypaste": "0.4456 0.6614 0.4881 0.2779 0.4811 0.5832", "segm_mAP": 0.4081, "segm_mAP_50": 0.6365, "segm_mAP_75": 0.4392, "segm_mAP_s": 0.2143, "segm_mAP_m": 0.4337, "segm_mAP_l": 0.5899, "segm_mAP_copypaste": "0.4081 0.6365 0.4392 0.2143 0.4337 0.5899"} diff --git a/cv/classification/repvit/pytorch/detection/mmcv_custom/runner/checkpoint.py b/cv/classification/repvit/pytorch/detection/mmcv_custom/runner/checkpoint.py deleted file mode 100644 index 1add5e13..00000000 --- a/cv/classification/repvit/pytorch/detection/mmcv_custom/runner/checkpoint.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. -import os.path as osp -import time -from tempfile import TemporaryDirectory - -import torch -from torch.optim import Optimizer - -import mmcv -from mmcv.parallel import is_module_wrapper -from mmcv.runner.checkpoint import weights_to_cpu, get_state_dict - -def save_checkpoint(model, filename, optimizer=None, meta=None): - """Save checkpoint to file. - - The checkpoint will have 4 fields: ``meta``, ``state_dict`` and - ``optimizer``, ``amp``. By default ``meta`` will contain version - and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - # save amp state dict in the checkpoint - checkpoint['amp'] = apex.amp.state_dict() - - if filename.startswith('pavi://'): - try: - from pavi import modelcloud - from pavi.exception import NodeNotFoundError - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - mmcv.mkdir_or_exist(osp.dirname(filename)) - # immediately flush buffer - with open(filename, 'wb') as f: - torch.save(checkpoint, f) - f.flush() \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/detection/mmcv_custom/runner/epoch_based_runner.py b/cv/classification/repvit/pytorch/detection/mmcv_custom/runner/epoch_based_runner.py 
deleted file mode 100644 index 24c03ccb..00000000 --- a/cv/classification/repvit/pytorch/detection/mmcv_custom/runner/epoch_based_runner.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. -import os.path as osp -import platform -import shutil - -import torch -from torch.optim import Optimizer - -import mmcv -from mmcv.runner import RUNNERS, EpochBasedRunner -from .checkpoint import save_checkpoint - -@RUNNERS.register_module() -class EpochBasedRunnerAmp(EpochBasedRunner): - """Epoch-based Runner with AMP support. - - This runner train models epoch by epoch. - """ - - def save_checkpoint(self, - out_dir, - filename_tmpl='epoch_{}.pth', - save_optimizer=True, - meta=None, - create_symlink=True): - """Save the checkpoint. - - Args: - out_dir (str): The directory that checkpoints are saved. - filename_tmpl (str, optional): The checkpoint filename template, - which contains a placeholder for the epoch number. - Defaults to 'epoch_{}.pth'. - save_optimizer (bool, optional): Whether to save the optimizer to - the checkpoint. Defaults to True. - meta (dict, optional): The meta information to be saved in the - checkpoint. Defaults to None. - create_symlink (bool, optional): Whether to create a symlink - "latest.pth" to point to the latest checkpoint. - Defaults to True. - """ - if meta is None: - meta = dict(epoch=self.epoch + 1, iter=self.iter) - elif isinstance(meta, dict): - meta.update(epoch=self.epoch + 1, iter=self.iter) - else: - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - - filename = filename_tmpl.format(self.epoch + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - if map_location == 'default': - if torch.cuda.is_available(): - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint(checkpoint) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - if 'amp' in checkpoint: - apex.amp.load_state_dict(checkpoint['amp']) - self.logger.info('load amp state dict') - - self.logger.info('resumed epoch %d, iter %d', self.epoch, self.iter) diff --git a/cv/classification/repvit/pytorch/detection/mmcv_custom/runner/optimizer.py b/cv/classification/repvit/pytorch/detection/mmcv_custom/runner/optimizer.py deleted file mode 100644 index d7f61911..00000000 --- 
a/cv/classification/repvit/pytorch/detection/mmcv_custom/runner/optimizer.py +++ /dev/null @@ -1,29 +0,0 @@ -from mmcv.runner import OptimizerHook, HOOKS - - -@HOOKS.register_module() -class DistOptimizerHook(OptimizerHook): - """Optimizer hook for distributed training.""" - - def __init__(self, update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=-1, use_fp16=False): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.update_interval = update_interval - self.use_fp16 = use_fp16 - - def before_run(self, runner): - runner.optimizer.zero_grad() - - def after_train_iter(self, runner): - runner.outputs['loss'] /= self.update_interval - if self.use_fp16: - with apex.amp.scale_loss(runner.outputs['loss'], runner.optimizer) as scaled_loss: - scaled_loss.backward() - else: - runner.outputs['loss'].backward() - if self.every_n_iters(runner, self.update_interval): - if self.grad_clip is not None: - self.clip_grads(runner.model.parameters()) - runner.optimizer.step() - runner.optimizer.zero_grad() \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/detection/mmdet_custom/apis/train.py b/cv/classification/repvit/pytorch/detection/mmdet_custom/apis/train.py deleted file mode 100644 index d736ef7b..00000000 --- a/cv/classification/repvit/pytorch/detection/mmdet_custom/apis/train.py +++ /dev/null @@ -1,180 +0,0 @@ -import random -import warnings - -import numpy as np -import torch -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner, - Fp16OptimizerHook, OptimizerHook, build_optimizer, - build_runner) -from mmcv.utils import build_from_cfg - -from mmdet.core import DistEvalHook, EvalHook -from mmdet.datasets import (build_dataloader, build_dataset, - replace_ImageToTensor) -from mmdet.utils import get_root_logger - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def train_detector(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - logger = get_root_logger(cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - if 'imgs_per_gpu' in cfg.data: - logger.warning('"imgs_per_gpu" is deprecated in MMDet V2.0. 
' - 'Please use "samples_per_gpu" instead') - if 'samples_per_gpu' in cfg.data: - logger.warning( - f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and ' - f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"' - f'={cfg.data.imgs_per_gpu} is used in this experiments') - else: - logger.warning( - 'Automatically set "samples_per_gpu"="imgs_per_gpu"=' - f'{cfg.data.imgs_per_gpu} in this experiments') - cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu - - data_loaders = [ - build_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # cfg.gpus will be ignored if distributed - len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed) for ds in dataset - ] - - # build optimizer - optimizer = build_optimizer(model, cfg.optimizer) - - # use apex fp16 optimizer - if cfg.optimizer_config.get("type", None) and cfg.optimizer_config["type"] == "DistOptimizerHook": - if cfg.optimizer_config.get("use_fp16", False): - model, optimizer = apex.amp.initialize( - model.cuda(), optimizer, opt_level="O1") - for m in model.modules(): - if hasattr(m, "fp16_enabled"): - m.fp16_enabled = True - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - if 'runner' not in cfg: - cfg.runner = { - 'type': 'EpochBasedRunner', - 'max_epochs': cfg.total_epochs - } - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - else: - if 'total_epochs' in cfg: - assert cfg.total_epochs == cfg.runner.max_epochs - - # build runner - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # an ugly workaround to make .log and .log.json filenames the same - runner.timestamp = timestamp - - # fp16 setting - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - optimizer_config = Fp16OptimizerHook( - **cfg.optimizer_config, **fp16_cfg, distributed=distributed) - elif distributed and 'type' not in cfg.optimizer_config: - optimizer_config = OptimizerHook(**cfg.optimizer_config) - else: - optimizer_config = cfg.optimizer_config - - # register hooks - runner.register_training_hooks(cfg.lr_config, optimizer_config, - cfg.checkpoint_config, cfg.log_config, - cfg.get('momentum_config', None)) - if distributed: - if isinstance(runner, EpochBasedRunner): - runner.register_hook(DistSamplerSeedHook()) - - # register eval hooks - if validate: - # Support batch_size > 1 in validation - val_samples_per_gpu = cfg.data.val.pop('samples_per_gpu', 1) - if val_samples_per_gpu > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.val.pipeline = replace_ImageToTensor( - cfg.data.val.pipeline) - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_dataloader( - val_dataset, - samples_per_gpu=val_samples_per_gpu, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' - eval_hook = DistEvalHook if distributed else EvalHook - runner.register_hook(eval_hook(val_dataloader, 
**eval_cfg)) - - # user-defined hooks - if cfg.get('custom_hooks', None): - custom_hooks = cfg.custom_hooks - assert isinstance(custom_hooks, list), \ - f'custom_hooks expect list type, but got {type(custom_hooks)}' - for hook_cfg in cfg.custom_hooks: - assert isinstance(hook_cfg, dict), \ - 'Each item in custom_hooks expects dict type, but got ' \ - f'{type(hook_cfg)}' - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = build_from_cfg(hook_cfg, HOOKS) - runner.register_hook(hook, priority=priority) - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/detection/repvit.py b/cv/classification/repvit/pytorch/detection/repvit.py deleted file mode 100644 index 6d15c49e..00000000 --- a/cv/classification/repvit/pytorch/detection/repvit.py +++ /dev/null @@ -1,429 +0,0 @@ -import torch.nn as nn -import numpy as np -import itertools - -from mmdet.utils import get_root_logger -from mmdet.models.builder import BACKBONES -from torch.nn.modules.batchnorm import _BatchNorm -from mmcv.runner import _load_checkpoint - -def _make_divisible(v, divisor, min_value=None): - """ - This function is taken from the original tf repo. - It ensures that all layers have a channel number that is divisible by 8 - It can be seen here: - https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py - :param v: - :param divisor: - :param min_value: - :return: - """ - if min_value is None: - min_value = divisor - new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than 10%. - if new_v < 0.9 * v: - new_v += divisor - return new_v - -from timm.models.layers import SqueezeExcite - -import torch - -class Conv2d_BN(torch.nn.Sequential): - def __init__(self, a, b, ks=1, stride=1, pad=0, dilation=1, - groups=1, bn_weight_init=1, resolution=-10000): - super().__init__() - self.add_module('c', torch.nn.Conv2d( - a, b, ks, stride, pad, dilation, groups, bias=False)) - self.add_module('bn', torch.nn.BatchNorm2d(b)) - torch.nn.init.constant_(self.bn.weight, bn_weight_init) - torch.nn.init.constant_(self.bn.bias, 0) - - @torch.no_grad() - def fuse(self): - c, bn = self._modules.values() - w = bn.weight / (bn.running_var + bn.eps)**0.5 - w = c.weight * w[:, None, None, None] - b = bn.bias - bn.running_mean * bn.weight / \ - (bn.running_var + bn.eps)**0.5 - m = torch.nn.Conv2d(w.size(1) * self.c.groups, w.size( - 0), w.shape[2:], stride=self.c.stride, padding=self.c.padding, dilation=self.c.dilation, groups=self.c.groups, - device=c.weight.device) - m.weight.data.copy_(w) - m.bias.data.copy_(b) - return m - -class Residual(torch.nn.Module): - def __init__(self, m, drop=0.): - super().__init__() - self.m = m - self.drop = drop - - def forward(self, x): - if self.training and self.drop > 0: - return x + self.m(x) * torch.rand(x.size(0), 1, 1, 1, - device=x.device).ge_(self.drop).div(1 - self.drop).detach() - else: - return x + self.m(x) - - @torch.no_grad() - def fuse(self): - if isinstance(self.m, Conv2d_BN): - m = self.m.fuse() - assert(m.groups == m.in_channels) - identity = torch.ones(m.weight.shape[0], m.weight.shape[1], 1, 1) - identity = torch.nn.functional.pad(identity, [1,1,1,1]) - m.weight += identity.to(m.weight.device) - return m - elif isinstance(self.m, torch.nn.Conv2d): - m = self.m - assert(m.groups != m.in_channels) - 
identity = torch.ones(m.weight.shape[0], m.weight.shape[1], 1, 1) - identity = torch.nn.functional.pad(identity, [1,1,1,1]) - m.weight += identity.to(m.weight.device) - return m - else: - return self - - -class RepVGGDW(torch.nn.Module): - def __init__(self, ed) -> None: - super().__init__() - self.conv = Conv2d_BN(ed, ed, 3, 1, 1, groups=ed) - self.conv1 = torch.nn.Conv2d(ed, ed, 1, 1, 0, groups=ed) - self.dim = ed - self.bn = torch.nn.BatchNorm2d(ed) - - def forward(self, x): - return self.bn((self.conv(x) + self.conv1(x)) + x) - - @torch.no_grad() - def fuse(self): - conv = self.conv.fuse() - conv1 = self.conv1 - - conv_w = conv.weight - conv_b = conv.bias - conv1_w = conv1.weight - conv1_b = conv1.bias - - conv1_w = torch.nn.functional.pad(conv1_w, [1,1,1,1]) - - identity = torch.nn.functional.pad(torch.ones(conv1_w.shape[0], conv1_w.shape[1], 1, 1, device=conv1_w.device), [1,1,1,1]) - - final_conv_w = conv_w + conv1_w + identity - final_conv_b = conv_b + conv1_b - - conv.weight.data.copy_(final_conv_w) - conv.bias.data.copy_(final_conv_b) - - bn = self.bn - w = bn.weight / (bn.running_var + bn.eps)**0.5 - w = conv.weight * w[:, None, None, None] - b = bn.bias + (conv.bias - bn.running_mean) * bn.weight / \ - (bn.running_var + bn.eps)**0.5 - conv.weight.data.copy_(w) - conv.bias.data.copy_(b) - return conv - - -class RepViTBlock(nn.Module): - def __init__(self, inp, hidden_dim, oup, kernel_size, stride, use_se, use_hs): - super(RepViTBlock, self).__init__() - assert stride in [1, 2] - - self.identity = stride == 1 and inp == oup - assert(hidden_dim == 2 * inp) - - if stride == 2: - self.token_mixer = nn.Sequential( - Conv2d_BN(inp, inp, kernel_size, stride, (kernel_size - 1) // 2, groups=inp), - SqueezeExcite(inp, 0.25) if use_se else nn.Identity(), - Conv2d_BN(inp, oup, ks=1, stride=1, pad=0) - ) - self.channel_mixer = Residual(nn.Sequential( - # pw - Conv2d_BN(oup, 2 * oup, 1, 1, 0), - nn.GELU() if use_hs else nn.GELU(), - # pw-linear - Conv2d_BN(2 * oup, oup, 1, 1, 0, bn_weight_init=0), - )) - else: - assert(self.identity) - self.token_mixer = nn.Sequential( - RepVGGDW(inp), - SqueezeExcite(inp, 0.25) if use_se else nn.Identity(), - ) - self.channel_mixer = Residual(nn.Sequential( - # pw - Conv2d_BN(inp, hidden_dim, 1, 1, 0), - nn.GELU() if use_hs else nn.GELU(), - # pw-linear - Conv2d_BN(hidden_dim, oup, 1, 1, 0, bn_weight_init=0), - )) - - def forward(self, x): - return self.channel_mixer(self.token_mixer(x)) - -from timm.models.vision_transformer import trunc_normal_ -class BN_Linear(torch.nn.Sequential): - def __init__(self, a, b, bias=True, std=0.02): - super().__init__() - self.add_module('bn', torch.nn.BatchNorm1d(a)) - self.add_module('l', torch.nn.Linear(a, b, bias=bias)) - trunc_normal_(self.l.weight, std=std) - if bias: - torch.nn.init.constant_(self.l.bias, 0) - - @torch.no_grad() - def fuse(self): - bn, l = self._modules.values() - w = bn.weight / (bn.running_var + bn.eps)**0.5 - b = bn.bias - self.bn.running_mean * \ - self.bn.weight / (bn.running_var + bn.eps)**0.5 - w = l.weight * w[None, :] - if l.bias is None: - b = b @ self.l.weight.T - else: - b = (l.weight @ b[:, None]).view(-1) + self.l.bias - m = torch.nn.Linear(w.size(1), w.size(0), device=l.weight.device) - m.weight.data.copy_(w) - m.bias.data.copy_(b) - return m - -class RepViT(nn.Module): - def __init__(self, cfgs, distillation=False, pretrained=None, init_cfg=None, out_indices=[]): - super(RepViT, self).__init__() - # setting of inverted residual blocks - self.cfgs = cfgs - - # building first layer - 
input_channel = self.cfgs[0][2] - patch_embed = torch.nn.Sequential(Conv2d_BN(3, input_channel // 2, 3, 2, 1), torch.nn.GELU(), - Conv2d_BN(input_channel // 2, input_channel, 3, 2, 1)) - layers = [patch_embed] - # building inverted residual blocks - block = RepViTBlock - for k, t, c, use_se, use_hs, s in self.cfgs: - output_channel = _make_divisible(c, 8) - exp_size = _make_divisible(input_channel * t, 8) - layers.append(block(input_channel, exp_size, output_channel, k, s, use_se, use_hs)) - input_channel = output_channel - self.features = nn.ModuleList(layers) - - self.init_cfg = init_cfg - assert(self.init_cfg is not None) - self.out_indices = out_indices - self.init_weights() - self = torch.nn.SyncBatchNorm.convert_sync_batchnorm(self) - self.train() - - def init_weights(self, pretrained=None): - logger = get_root_logger() - if self.init_cfg is None and pretrained is None: - logger.warn(f'No pre-trained weights for ' - f'{self.__class__.__name__}, ' - f'training start from scratch') - pass - else: - assert 'checkpoint' in self.init_cfg, f'Only support ' \ - f'specify `Pretrained` in ' \ - f'`init_cfg` in ' \ - f'{self.__class__.__name__} ' - if self.init_cfg is not None: - ckpt_path = self.init_cfg['checkpoint'] - elif pretrained is not None: - ckpt_path = pretrained - - ckpt = _load_checkpoint( - ckpt_path, logger=logger, map_location='cpu') - if 'state_dict' in ckpt: - _state_dict = ckpt['state_dict'] - elif 'model' in ckpt: - _state_dict = ckpt['model'] - else: - _state_dict = ckpt - - state_dict = _state_dict - missing_keys, unexpected_keys = \ - self.load_state_dict(state_dict, False) - logger.info(f"Miss {missing_keys}") - logger.info(f"Unexpected {unexpected_keys}") - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(RepViT, self).train(mode) - if mode: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() - - - def forward(self, x): - outs = [] - for i, f in enumerate(self.features): - x = f(x) - if i in self.out_indices: - outs.append(x) - assert(len(outs) == 4) - return outs - -from timm.models import register_model - -@BACKBONES.register_module() -def repvit_m1_1(pretrained=False, num_classes = 1000, distillation=False, init_cfg=None, out_indices=[], **kwargs): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 64, 1, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 128, 0, 0, 2], - [3, 2, 128, 1, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 256, 0, 1, 2], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 512, 0, 1, 2], - [3, 2, 512, 1, 1, 1], - [3, 2, 512, 0, 1, 1] - ] - return RepViT(cfgs, init_cfg=init_cfg, pretrained=pretrained, distillation=distillation, out_indices=out_indices) - - -@BACKBONES.register_module() -def repvit_m1_5(pretrained=False, num_classes = 1000, distillation=False, init_cfg=None, out_indices=[], **kwargs): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 64, 1, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 64, 1, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 128, 0, 0, 2], - [3, 2, 128, 1, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 128, 1, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 128, 0, 
0, 1], - [3, 2, 256, 0, 1, 2], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 512, 0, 1, 2], - [3, 2, 512, 1, 1, 1], - [3, 2, 512, 0, 1, 1], - [3, 2, 512, 1, 1, 1], - [3, 2, 512, 0, 1, 1] - ] - return RepViT(cfgs, init_cfg=init_cfg, pretrained=pretrained, distillation=distillation, out_indices=out_indices) - - - -@BACKBONES.register_module() -def repvit_m2_3(pretrained=False, num_classes = 1000, distillation=False, init_cfg=None, out_indices=[], **kwargs): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 80, 1, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 80, 1, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 80, 1, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 160, 0, 0, 2], - [3, 2, 160, 1, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 160, 1, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 160, 1, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 320, 0, 1, 2], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - # [3, 2, 320, 1, 1, 1], - # [3, 2, 320, 0, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 640, 0, 1, 2], - [3, 2, 640, 1, 1, 1], - [3, 2, 640, 0, 1, 1], - # [3, 2, 640, 1, 1, 1], - # [3, 2, 640, 0, 1, 1] - ] - return RepViT(cfgs, init_cfg=init_cfg, pretrained=pretrained, distillation=distillation, out_indices=out_indices) \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/detection/slurm_train.sh b/cv/classification/repvit/pytorch/detection/slurm_train.sh deleted file mode 100644 index 3004fa71..00000000 --- a/cv/classification/repvit/pytorch/detection/slurm_train.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env bash - -set -x - -PARTITION=$1 -JOB_NAME=$2 -CONFIG=$3 -WORK_DIR=$4 -GPUS=${GPUS:-8} -GPUS_PER_NODE=${GPUS_PER_NODE:-4} -CPUS_PER_TASK=${CPUS_PER_TASK:-12} -SRUN_ARGS=${SRUN_ARGS:-""} -PY_ARGS=${@:5} - -export NCCL_P2P_DISABLE=1 -export MASTER_PORT=24680 - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --cpus-per-task=${CPUS_PER_TASK} \ - --kill-on-bad-exit=1 \ - --mem 250G \ - ${SRUN_ARGS} \ - python -u train.py ${CONFIG} --work-dir=${WORK_DIR} --launcher="slurm" 
${PY_ARGS} diff --git a/cv/classification/repvit/pytorch/detection/test.py b/cv/classification/repvit/pytorch/detection/test.py deleted file mode 100644 index 5bc115f1..00000000 --- a/cv/classification/repvit/pytorch/detection/test.py +++ /dev/null @@ -1,241 +0,0 @@ -import argparse -import os -import os.path as osp -import time -import warnings - -import mmcv -import torch -from mmcv import Config, DictAction -from mmcv.cnn import fuse_conv_bn -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (get_dist_info, init_dist, load_checkpoint, - wrap_fp16_model) - -from mmdet.apis import multi_gpu_test, single_gpu_test -from mmdet.datasets import (build_dataloader, build_dataset, - replace_ImageToTensor) -from mmdet.models import build_detector - -import sys - -import repvit - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet test (and eval) a model') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument( - '--work-dir', - help='the directory to save the file containing evaluation metrics') - parser.add_argument('--out', help='output result file in pickle format') - parser.add_argument( - '--fuse-conv-bn', - action='store_true', - help='Whether to fuse conv and bn, this will slightly increase' - 'the inference speed') - parser.add_argument( - '--format-only', - action='store_true', - help='Format the output results without perform evaluation. It is' - 'useful when you want to format the result to a specific format and ' - 'submit it to the test server') - parser.add_argument( - '--eval', - type=str, - nargs='+', - help='evaluation metrics, which depends on the dataset, e.g., "bbox",' - ' "segm", "proposal" for COCO, and "mAP", "recall" for PASCAL VOC') - parser.add_argument('--show', action='store_true', help='show results') - parser.add_argument( - '--show-dir', help='directory where painted images will be saved') - parser.add_argument( - '--show-score-thr', - type=float, - default=0.3, - help='score threshold (default: 0.3)') - parser.add_argument( - '--gpu-collect', - action='store_true', - help='whether to use gpu to collect results.') - parser.add_argument( - '--tmpdir', - help='tmp directory used for collecting results from multiple ' - 'workers, available when gpu-collect is not specified') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. 
key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--options', - nargs='+', - action=DictAction, - help='custom options for evaluation, the key-value pair in xxx=yyy ' - 'format will be kwargs for dataset.evaluate() function (deprecate), ' - 'change to --eval-options instead.') - parser.add_argument( - '--eval-options', - nargs='+', - action=DictAction, - help='custom options for evaluation, the key-value pair in xxx=yyy ' - 'format will be kwargs for dataset.evaluate() function') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local-rank', type=int, default=0) - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - if args.options and args.eval_options: - raise ValueError( - '--options and --eval-options cannot be both ' - 'specified, --options is deprecated in favor of --eval-options') - if args.options: - warnings.warn('--options is deprecated in favor of --eval-options') - args.eval_options = args.options - return args - - -def main(): - args = parse_args() - - assert args.out or args.eval or args.format_only or args.show \ - or args.show_dir, \ - ('Please specify at least one operation (save/eval/format/show the ' - 'results / save the results) with the argument "--out", "--eval"' - ', "--format-only", "--show" or "--show-dir"') - - if args.eval and args.format_only: - raise ValueError('--eval and --format_only cannot be both specified') - - if args.out is not None and not args.out.endswith(('.pkl', '.pickle')): - raise ValueError('The output file must be a pkl file.') - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - # import modules from string list. - if cfg.get('custom_imports', None): - from mmcv.utils import import_modules_from_strings - import_modules_from_strings(**cfg['custom_imports']) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - - cfg.model.pretrained = None - if cfg.model.get('neck'): - if isinstance(cfg.model.neck, list): - for neck_cfg in cfg.model.neck: - if neck_cfg.get('rfp_backbone'): - if neck_cfg.rfp_backbone.get('pretrained'): - neck_cfg.rfp_backbone.pretrained = None - elif cfg.model.neck.get('rfp_backbone'): - if cfg.model.neck.rfp_backbone.get('pretrained'): - cfg.model.neck.rfp_backbone.pretrained = None - - # in case the test dataset is concatenated - samples_per_gpu = 1 - if isinstance(cfg.data.test, dict): - cfg.data.test.test_mode = True - samples_per_gpu = cfg.data.test.pop('samples_per_gpu', 1) - if samples_per_gpu > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.test.pipeline = replace_ImageToTensor( - cfg.data.test.pipeline) - elif isinstance(cfg.data.test, list): - for ds_cfg in cfg.data.test: - ds_cfg.test_mode = True - samples_per_gpu = max( - [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test]) - if samples_per_gpu > 1: - for ds_cfg in cfg.data.test: - ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline) - - # init distributed env first, since logger depends on the dist info. 
- if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - - rank, _ = get_dist_info() - # allows not to create - if args.work_dir is not None and rank == 0: - mmcv.mkdir_or_exist(osp.abspath(args.work_dir)) - timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime()) - json_file = osp.join(args.work_dir, f'eval_{timestamp}.json') - - # build the dataloader - dataset = build_dataset(cfg.data.test) - data_loader = build_dataloader( - dataset, - samples_per_gpu=samples_per_gpu, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - - # build the model and load checkpoint - cfg.model.train_cfg = None - model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg')) - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - wrap_fp16_model(model) - checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu') - if args.fuse_conv_bn: - model = fuse_conv_bn(model) - # old versions did not save class info in checkpoints, this walkaround is - # for backward compatibility - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - model.CLASSES = dataset.CLASSES - - if not distributed: - model = MMDataParallel(model, device_ids=[0]) - outputs = single_gpu_test(model, data_loader, args.show, args.show_dir, - args.show_score_thr) - else: - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False) - outputs = multi_gpu_test(model, data_loader, args.tmpdir, - args.gpu_collect) - - rank, _ = get_dist_info() - if rank == 0: - if args.out: - print(f'\nwriting results to {args.out}') - mmcv.dump(outputs, args.out) - kwargs = {} if args.eval_options is None else args.eval_options - if args.format_only: - dataset.format_results(outputs, **kwargs) - if args.eval: - eval_kwargs = cfg.get('evaluation', {}).copy() - # hard-code way to remove EvalHook args - for key in [ - 'interval', 'tmpdir', 'start', 'gpu_collect', 'save_best', - 'rule' - ]: - eval_kwargs.pop(key, None) - eval_kwargs.update(dict(metric=args.eval, **kwargs)) - metric = dataset.evaluate(outputs, **eval_kwargs) - print(metric) - metric_dict = dict(config=args.config, metric=metric) - if args.work_dir is not None and rank == 0: - mmcv.dump(metric_dict, json_file) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/detection/train.py b/cv/classification/repvit/pytorch/detection/train.py deleted file mode 100644 index a203dabc..00000000 --- a/cv/classification/repvit/pytorch/detection/train.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import copy -import os -import os.path as osp -import time -import warnings - -import mmcv -import torch -import torch.distributed as dist -from mmcv import Config, DictAction -from mmcv.runner import get_dist_info, init_dist -from mmcv.utils import get_git_hash - -from mmdet import __version__ -from mmdet.apis import init_random_seed, set_random_seed # , train_detector -from mmdet.datasets import build_dataset -from mmdet.models import build_detector -from mmdet.utils import (collect_env, get_device, get_root_logger, - setup_multi_processes, update_data_root) - -from mmdet_custom.apis.train import train_detector -import mmcv_custom.runner.epoch_based_runner -import mmcv_custom.runner.optimizer - -import sys - -import repvit - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a detector') - parser.add_argument('config', help='train config file path') - parser.add_argument('--work-dir', help='the dir to save logs and models') - parser.add_argument( - '--resume-from', help='the checkpoint file to resume from') - parser.add_argument( - '--auto-resume', - action='store_true', - help='resume from the latest checkpoint automatically') - parser.add_argument( - '--no-validate', - action='store_true', - help='whether not to evaluate the checkpoint during training') - group_gpus = parser.add_mutually_exclusive_group() - group_gpus.add_argument( - '--gpus', - type=int, - help='(Deprecated, please use --gpu-id) number of gpus to use ' - '(only applicable to non-distributed training)') - group_gpus.add_argument( - '--gpu-ids', - type=int, - nargs='+', - help='(Deprecated, please use --gpu-id) ids of gpus to use ' - '(only applicable to non-distributed training)') - group_gpus.add_argument( - '--gpu-id', - type=int, - default=0, - help='id of gpu to use ' - '(only applicable to non-distributed training)') - parser.add_argument('--seed', type=int, default=None, help='random seed') - parser.add_argument( - '--diff-seed', - action='store_true', - help='Whether or not set different seeds for different ranks') - parser.add_argument( - '--deterministic', - action='store_true', - help='whether to set deterministic options for CUDNN backend.') - parser.add_argument( - '--options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file (deprecate), ' - 'change to --cfg-options instead.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. 
key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local-rank', type=int, default=0) - parser.add_argument( - '--auto-scale-lr', - action='store_true', - help='enable automatically scaling LR.') - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - if args.options and args.cfg_options: - raise ValueError( - '--options and --cfg-options cannot be both ' - 'specified, --options is deprecated in favor of --cfg-options') - if args.options: - warnings.warn('--options is deprecated in favor of --cfg-options') - args.cfg_options = args.options - - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - - # update data root according to MMDET_DATASETS - update_data_root(cfg) - - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - if args.auto_scale_lr: - if 'auto_scale_lr' in cfg and \ - 'enable' in cfg.auto_scale_lr and \ - 'base_batch_size' in cfg.auto_scale_lr: - cfg.auto_scale_lr.enable = True - else: - warnings.warn('Can not find "auto_scale_lr" or ' - '"auto_scale_lr.enable" or ' - '"auto_scale_lr.base_batch_size" in your' - ' configuration file. Please update all the ' - 'configuration files to mmdet >= 2.24.1.') - - # set multi-process settings - setup_multi_processes(cfg) - - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - - # work_dir is determined in this priority: CLI > segment in file > filename - if args.work_dir is not None: - # update configs according to CLI args if args.work_dir is not None - cfg.work_dir = args.work_dir - elif cfg.get('work_dir', None) is None: - # use config filename as default work_dir if cfg.work_dir is None - cfg.work_dir = osp.join('./work_dirs', - osp.splitext(osp.basename(args.config))[0]) - if args.resume_from is not None: - cfg.resume_from = args.resume_from - cfg.auto_resume = args.auto_resume - if args.gpus is not None: - cfg.gpu_ids = range(1) - warnings.warn('`--gpus` is deprecated because we only support ' - 'single GPU mode in non-distributed training. ' - 'Use `gpus=1` now.') - if args.gpu_ids is not None: - cfg.gpu_ids = args.gpu_ids[0:1] - warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. ' - 'Because we only support single GPU mode in ' - 'non-distributed training. Use the first GPU ' - 'in `gpu_ids` now.') - if args.gpus is None and args.gpu_ids is None: - cfg.gpu_ids = [args.gpu_id] - - # init distributed env first, since logger depends on the dist info. 
- if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - # re-set gpu_ids with distributed training mode - _, world_size = get_dist_info() - cfg.gpu_ids = range(world_size) - - # create work_dir - mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir)) - # dump config - cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config))) - # init the logger before other steps - timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime()) - log_file = osp.join(cfg.work_dir, f'{timestamp}.log') - logger = get_root_logger(log_file=log_file, log_level=cfg.log_level) - - # init the meta dict to record some important information such as - # environment info and seed, which will be logged - meta = dict() - # log env info - env_info_dict = collect_env() - env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()]) - dash_line = '-' * 60 + '\n' - logger.info('Environment info:\n' + dash_line + env_info + '\n' + - dash_line) - meta['env_info'] = env_info - meta['config'] = cfg.pretty_text - # log some basic info - logger.info(f'Distributed training: {distributed}') - logger.info(f'Config:\n{cfg.pretty_text}') - - cfg.device = get_device() - # set random seeds - seed = init_random_seed(args.seed, device=cfg.device) - seed = seed + dist.get_rank() if args.diff_seed else seed - logger.info(f'Set random seed to {seed}, ' - f'deterministic: {args.deterministic}') - set_random_seed(seed, deterministic=args.deterministic) - cfg.seed = seed - meta['seed'] = seed - meta['exp_name'] = osp.basename(args.config) - - model = build_detector( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - model.init_weights() - - datasets = [build_dataset(cfg.data.train)] - if len(cfg.workflow) == 2: - val_dataset = copy.deepcopy(cfg.data.val) - val_dataset.pipeline = cfg.data.train.pipeline - datasets.append(build_dataset(val_dataset)) - if cfg.checkpoint_config is not None: - # save mmdet version, config file content and class names in - # checkpoints as meta data - cfg.checkpoint_config.meta = dict( - mmdet_version=__version__ + get_git_hash()[:7], - CLASSES=datasets[0].CLASSES) - # add an attribute for visualization convenience - model.CLASSES = datasets[0].CLASSES - train_detector( - model, - datasets, - cfg, - distributed=distributed, - validate=(not args.no_validate), - timestamp=timestamp, - meta=meta) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/detection/train.sh b/cv/classification/repvit/pytorch/detection/train.sh deleted file mode 100644 index a029f1e7..00000000 --- a/cv/classification/repvit/pytorch/detection/train.sh +++ /dev/null @@ -1 +0,0 @@ -./dist_train.sh configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py 8 \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/engine.py b/cv/classification/repvit/pytorch/engine.py deleted file mode 100644 index f6d3e222..00000000 --- a/cv/classification/repvit/pytorch/engine.py +++ /dev/null @@ -1,106 +0,0 @@ -""" -Train and eval functions used in main.py -""" -import math -import sys -from typing import Iterable, Optional - -import torch - -from timm.data import Mixup -from timm.utils import accuracy, ModelEma - -from losses import DistillationLoss -import utils - -def set_bn_state(model): - for m in model.modules(): - if isinstance(m, torch.nn.modules.batchnorm._BatchNorm): - m.eval() - -def train_one_epoch(model: torch.nn.Module, criterion: DistillationLoss, - data_loader: Iterable, optimizer: torch.optim.Optimizer, - 
device: torch.device, epoch: int, loss_scaler, - clip_grad: float = 0, - clip_mode: str = 'norm', - model_ema: Optional[ModelEma] = None, mixup_fn: Optional[Mixup] = None, - set_training_mode=True, - set_bn_eval=False,): - model.train(set_training_mode) - if set_bn_eval: - set_bn_state(model) - metric_logger = utils.MetricLogger(delimiter=" ") - metric_logger.add_meter('lr', utils.SmoothedValue( - window_size=1, fmt='{value:.6f}')) - header = 'Epoch: [{}]'.format(epoch) - print_freq = 100 - - for samples, targets in metric_logger.log_every( - data_loader, print_freq, header): - samples = samples.to(device, non_blocking=True) - targets = targets.to(device, non_blocking=True) - - if mixup_fn is not None: - samples, targets = mixup_fn(samples, targets) - - with torch.cuda.amp.autocast(): - outputs = model(samples) - loss = criterion(samples, outputs, targets) - - loss_value = loss.item() - - if not math.isfinite(loss_value): - print("Loss is {}, stopping training".format(loss_value)) - sys.exit(1) - - optimizer.zero_grad() - - # this attribute is added by timm on one optimizer (adahessian) - is_second_order = hasattr( - optimizer, 'is_second_order') and optimizer.is_second_order - loss_scaler(loss, optimizer, clip_grad=clip_grad, clip_mode=clip_mode, - parameters=model.parameters(), create_graph=is_second_order) - - torch.cuda.synchronize() - if model_ema is not None: - model_ema.update(model) - - metric_logger.update(loss=loss_value) - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print("Averaged stats:", metric_logger) - return {k: meter.global_avg for k, meter in metric_logger.meters.items()} - - -@torch.no_grad() -def evaluate(data_loader, model, device): - criterion = torch.nn.CrossEntropyLoss() - - metric_logger = utils.MetricLogger(delimiter=" ") - header = 'Test:' - - # switch to evaluation mode - model.eval() - - for images, target in metric_logger.log_every(data_loader, 10, header): - images = images.to(device, non_blocking=True) - target = target.to(device, non_blocking=True) - - # compute output - with torch.cuda.amp.autocast(): - output = model(images) - loss = criterion(output, target) - - acc1, acc5 = accuracy(output, target, topk=(1, 5)) - - batch_size = images.shape[0] - metric_logger.update(loss=loss.item()) - metric_logger.meters['acc1'].update(acc1.item(), n=batch_size) - metric_logger.meters['acc5'].update(acc5.item(), n=batch_size) - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print('* Acc@1 {top1.global_avg:.3f} Acc@5 {top5.global_avg:.3f} loss {losses.global_avg:.3f}' - .format(top1=metric_logger.acc1, top5=metric_logger.acc5, losses=metric_logger.loss)) - - return {k: meter.global_avg for k, meter in metric_logger.meters.items()} diff --git a/cv/classification/repvit/pytorch/eval.sh b/cv/classification/repvit/pytorch/eval.sh deleted file mode 100644 index 9b9a7975..00000000 --- a/cv/classification/repvit/pytorch/eval.sh +++ /dev/null @@ -1 +0,0 @@ -python main.py --eval --model repvit_m1_1 --resume pretrain/repvit_m1_1_distill_300e.pth --data-path ~/imagenet \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/export_coreml.py b/cv/classification/repvit/pytorch/export_coreml.py deleted file mode 100644 index af03e0bc..00000000 --- a/cv/classification/repvit/pytorch/export_coreml.py +++ /dev/null @@ -1,43 +0,0 @@ -import torch - -from timm import create_model -import model - -import utils - -import torch 
-import torchvision -from argparse import ArgumentParser - -parser = ArgumentParser() - -parser.add_argument('--model', default='repvit_m1_1', type=str) -parser.add_argument('--resolution', default=224, type=int) -parser.add_argument('--ckpt', default=None, type=str) - -if __name__ == "__main__": - # Load a pre-trained version of MobileNetV2 - args = parser.parse_args() - model = create_model(args.model, distillation=True) - if args.ckpt: - model.load_state_dict(torch.load(args.ckpt)['model']) - utils.replace_batchnorm(model) - model.eval() - - # Trace the model with random data. - resolution = args.resolution - example_input = torch.rand(1, 3, resolution, resolution) - traced_model = torch.jit.trace(model, example_input) - out = traced_model(example_input) - - import coremltools as ct - - # Using image_input in the inputs parameter: - # Convert to Core ML neural network using the Unified Conversion API. - model = ct.convert( - traced_model, - inputs=[ct.ImageType(shape=example_input.shape)] - ) - - # Save the converted model. - model.save(f"coreml/{args.model}_{resolution}.mlmodel") \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/figures/latency.png b/cv/classification/repvit/pytorch/figures/latency.png deleted file mode 100644 index fb804fd6b7a257dcc6be98a55e178a2164714564..0000000000000000000000000000000000000000 Binary files a/cv/classification/repvit/pytorch/figures/latency.png and /dev/null differ
zBJJR{G%#_4y%T>gf0l)7say~zPtCVoDaORaaI*VU%;yLqg>XqhoFh9+{>5lfvzB+P zFQNBHF{e_jO^QBeuE%O9lTd832{!uWVz-Q(lycfxXLc8SeeWEuo7xRAJ59&a(aL-; zPQi{CVB~OuRAF2-2dZSF$DbC#V>b6=2n*7{`kmC2i!q7Mcw-_3LQ-9&3=$DbAMw{x zx)8iD+V$NXd-k`1i%Usy%2A2_n4LlYo}TBJoKi_#f>+snKlzPM{`&KH1%L^}Uppiig9w6qc7c5Y$0aM(Gfnar^Mef;{KSXXzbuDW@F#5l}B2rIP zwT1=Dys}(P)Nr<=l{x9O{II4N!=|MbhQ9`b-F3`kW9!I!I_z)+Gb@qjY-uq#ct})1 zTWUCm;9R&HF@Fo=E&IjAm&O!CV>PXE#jH9Vb|?GdLm14j(oT%U-mr+(c*`DX=4o+) zt;sgEXF4GWRq2&%LEe(9jLO_y$b!Euq^fYV|KL+T@+w{xxmBHAK^SJz(s>JwtNP}p z@DSyMVZ=8p6GWpv_$l*8#cDle1%+!VQEm~)uGtxnm0EA*xeqq|dCZ9M5Cjsh10|7f z-_!P{JV7FPnhRRa%iB}W4{<(jM{Uhx+Tf?aks6z|Cyg6ZH+~#WB#xA8J9QrzKX|!g zIn3rs;8a>dU}}BTEo~{btinPf%JrW5tSjb7Ra)8ZipcFvZ2tZpOQrsyJ~H4frp-Bm zN>=Lpg9LZqqH83yH_qE~vpy4k88-cWBnQcVf;y+yL2ePDTU;cfy14o06BY^OJH}92 zOna5|x71WIKjZH~8ssuCT#NtcV_?Yjw46UFUj%y_2Gg*ORue;(O-)#~BxWOogrgVE zT{z1g(>ZI6Tx~qBMNxknp=9~whMEVJWWEYV(`%zi@yNuD&LfVd`qYl+hi zq+mNdPSkUk8nvY7`9WZwHVGF*;;dmj0q6oFwJWxai0Pr2*OMIy?{ zE|Hts#1h~ls?N2KA(~_JiWS%q9?o~_AYz}yU8sD8c z4fc3eVNcZfjxDxH@2(Q3xPf6g1-XQl-d!Z=-9<`;^L?hHlQ$-rc`wujC8zTU`F`?y zwR%P*$z!NwW~f+DkiZnm&K7~WOunT32}W{y((>_F6n71mwWiU=kZu z`{@pyn2)<{9KFWv7#Dqrf%sj^NS?=G~&JBy8Yi%WJgc-KB8vB^~ ztS}KZ_Q4zth$}HnBud+y@$S(*r~b$WTPY140*$m}8yl&-_;?}2ZS@O*&c&wDpEYW~ zCXz2$O}mAwDAs<~hqlbZjrSJFTuPIZ`(5>+y3)M6UrCoIRO zrp(5j_EqIp59~bJ7P^@=xV?c~>B}{KiJjBY5l~U;FAaB&wEx2Du)9hLe_JKwz#tcN z=q?}Zp+&w2d-wfls7$#umJS!@R(>P{7T5YXxiEWa=9|M>AUD_G&`_EP*d}HwYS1}` zFZzy>x!}>NfWW}?IBA0QY_a!LiwXndXQ?C~)XpO4@&O zaoM#2F^9HUVn3HMuAxa6+G;DSOVjN^Di=7xIrHaJt#zfRe?q*tN*#N=2pJOHkO{4GsxK07AN*_ z+G=0`*N~z6s}!^9=U?IsKl6pe$r;nB>S6#BU>CVmN9|8k>L*Ss3m2Zd(=N$vWkq`t zz%6CYM_?h$_b$XSHUJJY?^haasi$!j**C*8zHR)_hW-WjazmLh5#{)Ut9=PYsNjSj z7`3}ZpXKJJ%!G_h^cK!jMolb`M?es8jS$drU@=W6H!D8Oq%e-ET=-1=W}~(@r+wS_ zc67KI4Pa@nhwStx#XRfOi7pH zmw0ys3{S^<5vHy_!WMqkXNf;Vdu!#4)r4sMtTpI;67hdj!S=%J_g4!Fl|CEwS1y+$ zywKXm?MZ}bX&Tt|8~tU)mrwWh)Z@zh9^C7Fjh7e>X4g9F6>C7!fi1~6o{NbT#2aiG z$DoR`3=ds>ByS!+o5@s)1njd}i6(r^WK30mtlB4$~#SJ=d#XzShbq6)usD*nUkW0$F;QeIQWXiZ&_W6 z#XqRmy?>8^<5{bcli|h$5APjTLMZ|Ok5?RqtgT?Z`_FHxn#Aa3wZQ-7d;a#`w-_4C zO`a}uFf;_qBtJO`?Z;JXcKu>L%C>ZS&DThjxAJU6P`ex#dMmYG$OV@kF6TAVK@FK= znJMf=uCjU zsCqByU^MxfhkEH0olnusHxFv&V@1!~PB|mD@24FpgBgjrJppxv3uKbXip*q>i3V@w zeax$SUwNj|u4uXp-xSeuY!Z6;CR)40Fv=tb|1TKq z#icNnns2kZE2CY~YiyH$o!Br>DZzEBCRdhwE z=`u{HWm=05G`f9>>v3BP{0Pj~yvZUj7uowE{jC0Js|2mEe3zW)`=DZNyD&JgLFp24~x=FWCwe$0~Sz{Fvr90 zxv)zcL)> z!uV*B`;$INmYaV|UoDwf5W)2`(M~P2m|wKA0)gwoIgscbW)QZloShy_-+ z$Duj!$9XHuB7!y2+3}re(&3-xkkhh}Z}Quu%9agFAKrMI&@aLVBPQ z(#mp3O9K&jonxt)Ic@NLu!rw#ZScX|#S0*Aw>>@=$@So+evC)g79N-6hbb#wiPMT$ z+vlhyM&ET7UK0lfQ;rsUyf}C&CAy6sEs};_dUs&d-~ZM9r3Dw761%1eo@fanscj+x z(a{j5CFw)y=yd3RdpS&^?xqmz<=!}dnBEK)%5JuFX|N*b0=d3nOqx(0(T3It+rKu5>Ca?Dwv0_O&>L%s_CC2Q}n4FuN!hDw0|0fq8ekk zIb%ayV$OnnG;SHuX41Q6=dZP>yzM#?w_Uc?@v@h7Enifcwg#$i1JTLP5GI^q{vbdO zu|G?P3%-rKHOZ)_Xz9ia@Jn#XLY~Uo^l+MEj1+p7ej!KtYaVIOQ9yd*v-O>7>RHi4n8c|?Akq%tEFYYz`w{X zz!4R}P)oi%ZZ@`_@jwD`Q;p9)h)QfyaGLE>HFHPU$GLr6dFyOsf;k^393OUB%Dt5M zH4R?JmQ$LpX>}=6(+DPzD7gmkz@)C8l9q6P+8PZH>)zV-@=9z-UqvaY3@_6DfG}C} z6eG(S)&SyQ^xBV}uz<9prO;Zp4Jen|!r5geMQoqUL-kYeGU@;mD%4co6g!?DZ51-d zOKy92QZG@LF1kAt;66gq?9ti2FGz$@L_Px@m$ZS|vvHIMa_B%k*}HqQPA)Op=*#=T zMAbIA^33JfJ4#Ru&dAuNXY}2-m%X~H5);dH|MQH=)^I*I_7n3RUOujo?H*_$HRu7_ z7V8mLQa48Y_#s-8r|y+pnkAA3zmh$#|3g;yXqu)!7#_#?8%w9f0Vs&=d zlj^heNwYqkxo+JV{+pX|*qy*Z?u*q-L z6Ic&pwhoLI>>b@}!G61%RCU>2gtUuc`onjnfEejWUW?xcWBztFb`ZC6+$%H^2^V8O z^PThr&Ve#MWMGDy35<*PTw}DVCpVg#8_c7vO(IdLIcejg^v%gD-#^3+cd&p2gTq*v z_^h|5)CO$L;7wT_GM9Xu;5cRZH<=zVhO2vWrSI3yA3qSo@P6TW^u&En&ze_qZ}AMU 
zs*_zK(E+-HEap!3h%4E){@v1NxnkLdj)}T$tSGy*OHaB;ojnV%0}7Qg%cGfQzLsq) zsnY8cwJ>;Uk}a#l;2!&QaJIkT@H0yzPfB0K<9!dU@9tp_ng0M&uL}`mLUhggcV}ji zguloE_U4oJ?(Fv!UAqSM1pv%SCkNK2iw(93pwuH7eMQvOJIE8^9R=6eKywXRf)m%v zD15E1b)PE@IXxFeS8^N@=)sPGdQ)|BZ?b-U^c5|r2U$7oa9+P~;u@FS@^k1E9QpAz za`QpiBZnV9x|P+973>CkChM^zwyMe`#{) z4pAI)MqQOXbc#E{UroE~T$;3Ka)N0x`RMh;y-+`{WvrU6ISOt{u(xi2^KRR2ulwuF zO=1Kp*C%lc1vYf>p)joX&PsH+Ax@BQO+lRWRbfbSc3X-mb18rs*eQ6DMeb7 z7jqFlYfeY24xyg}GG+Z$m^>#Vh#Fv}_a5ryK>*9k%9iHQwmlCB zY#Fyu!SCed&A$GbornMP0Nr>!2bBm&I$X3BV5}Z43nye4I)nhu0or&T*x;sUy)wH& zIG(xk-dq0?T!m1s^ayEJ)~XJZHEad-=OXTU|VU%9Z{F5W=pINuUXIMs9uB@=h6;Vz z#SP&kp%mUT0g6ecRw3_1q)#z2hS}RRIyH^l21L%}P-fX3On0}I(@JgFvg!jE^YuMh zyJ~iR^lQRAnzT6yR>iGDX}A|5So+an9*xk_qFbrND-F3D*6gP}*20#C>oF{k^oj7H z@x|s^KcgI}3Q+7lvI5>ccX`OdNYdrr+lYzO3x+!PPe-^+-vTU;q$K|1setr%qM(h$ z`%Ahcr6-v=E*uk3#+!CxlV_m3jO`3t zv<}Tnq{^hhFKX{1sRSPiU@Cvp1ZTdK!$vLk0A1e?v!h@Axf5p4OXWU30FRn7nGWf$ zBR(yAt<>%>UK(UlQljG^Kdey~SE|@qG!E({3i}{?=kT>m(}Me9uj#!`6e)t-0maX^ zS$f=tUVLnRK6l6&I&JO?=H>|Pwl=aoV~SGl=(m+z>!x=lMfQ5Z`WW6A4x`f`v7q+H z#}6mi59W$AwZ*^9FAwkj6c^7(rzPWVcR~AT($8aMIX$)Ha{}g=v3K0xq8b%tsAO=Y zU8T&1rRLEJ-Dj*^84*5S!|;VZ(9`4w38Fm6EB=Ds=W+Cj%tNB1U?DYq%4|1^Pu}T( zT~915fY$}`laf-i&qHn7w(%)m#6+beFC2Gh&_}ICizIt2^vx_x3sP?D=qLin;Kok|eB3W`;tB_B z;?15UdDr6v>kPnTeedu(>%AAFDvUM}Hp6h9)BJbta4wZ_;ebd|vexE*h~3}%DSHVC z)b6L1qBsSPz;{w^qf0a}XMxz(TZ>E85V`;e8W#Xccp;Zdi&Ym3;g+HDrKUU1gq@D0 zGrB7qDTdoS$nJ(!D3h27$19$ue!5^`Tvu=>Rny z5K{1rp%^Jq+)_Cgg_yW2wq%!=QQ#7K0MFqz2O%huO-1g5DSR8L&dRk=P`Hw9zYFSI z(hK$QqJ?gCU4xtTe;%!$zS0apTylW~-C}$tDWUB1N1iElcBUMOV$KX&`yiqAlw{)k zto2}pjZksTBBkCFWI_s-SzBk5HtT0P=m32P5qUMZS~~fy8#RH9bei(R#g?NM!e(u8 z{+T(blF3MarJyp3*HIWlK@Y0Uv0{elo7?a`HGq*vo7+(ye3LTFRj3b4Y_ll8~~b6k#!_XJ8<`WG0C8c}82Bs6{OP89zUwf?9?q(YF#Mp9pK6`$fj}x#? zIN~%{!ZTj8#R2~v5|@DiC!oihnz?v+IqOg%V3++fvjg8rLgW<2hZa_K+cLku?*Wzp zARaC4-jmH@iy>1*ZE^)GRVxrA9*_Af!`iL2JrFcnL& z8t|RoLnO>!x7aV9qRW5mW=e>cblLh$N(jW^vMncm@>Zh^qGH9h<1CwXxPfhm4q-Y` zaI8Vr)}7rSAO|4<{m6WgoLHgV$le&w4;#Q-z9nBS9D^yi0xd6>WclR8E~QBIQPyae zna=k~>{^4pKe-o9?1VSEW)aakn}E6Mfxy$Y?LuEv5c=q8#gH#ur!rY6um3qlJT}C`TR~ySyH+yI`~LLv{T=Hy^g;shD2(YY`@_YF`+hlbNWN85 z?o4!15f5*(ap@VD-~K%n{5^aVANsSoU7dt#rXnXG1Ij_!eD`Sgo%8zb?1l%u3zlO? zO2khTyH8wf1}g!<#H=hQi^&XYgR*N7wXI*AJs-8f;BTj@%Bo+=`(|x=Qu+w&*yTMH z@wz5R!o7$n{`@8Bw;?ROpsx=WS&cseJ12OesGz>{xEJUUL7&iFzll&@WxLGD`DE@a zCrCe>VG?65{c4vCdUHZ|<+apg;-@su!A@b7%cQi+l{z5KwpI27n}1(%4q#P2C=fN2 zd>9znyoL!ElmH3YrTdF|3bLI~2Ko5Dy+d%Dniw|sS)bdNi((6stX=JAyT|i^W_Aco zsFWaYjd#SEoQ@9(dBL&Fa81yQ6H5ns*suq4_>QIKiS1wR`R^@9&&Qj|VS;P19u~lv?mYnC{JCdVuNJAan`X^I3hXDd|(NV>WyC$x&){A^1T# zz)=py#y=+n7bKY=3f)X$zVs4a>;9l2zK9jlc+&ZfF4wy5uac*?bU&uvubrVw;YM)l z#|I(^Bhr7&K1<_?4sP;X;{^Oyo)n{{cG|6Pf~hpVaeLSA!{-kl0C(5*U0-^l_^&!vI3aP_oCPLmgnM410OkzC;nK`CzVgFrq(W|ov%r8L*aKz`O~OJHl|yFbA* z*%`|GI@>%DaOJ?nIyP8Q_hnTeGhglzb(+h%}s9>4m3B}wHBK4qc91fJweR# zFE%zh+a6a_!f=TY_R~zxVS2#(UCK(KfswOiUl}Bo7K&Wbn_Tirxvbmx?%HF;1v{BH z`68hyc^7bn5wCEB6Al55hCNE3joJE(iv0a&JZ5vYw|Sypm)~^GE0da|ZeX3=O@1;! 
z$X)$UPr3hXBfz{*VDHcEaU@)qTj=j{87?lB1fvMW;r#is0pRl*dkO6(>O)GU$bGwh zCJ+v}CA-WNgSd&iJGnkfZ)yRh$VnIUSf{o)(xJUSnvO24wJs?vYlW0GIF4iLQLY_` z2$S^qGM_wN(OIw%z>HW^mkhcqWP6_>cjVi5Mw!9a2Z-9k@f{JxNgAdfx&V8Z!o88G5R9Q*(8zh5M0vG&b|mGfYv_3 zydT{*BtOE{mSj9q2sXs<3Jan=*7I8qk9ghg>V(P=_nkXHAFYreV=yxt(vu=q?)+p> zjUx1n8I>X;B%Alcj{ zM>%&6JK3?dL#TrKnD|3xo1W&*ksqEt(Pg;EDO?{rC5s9JAmYJ2j-#}8fiB`%A$-i%LNFO~QGyx*!lq8uV6n!WD3(UxQvmc|sm4%x4lc;r1HzL2B#`i0r7g>9Y><4iH zp!<#B)2o|mjWs82E_U?wF)IUegW4r}u3N$WVO+{}A8&`_{yZuGSaXcbR9ietl(D{DiY3-Yfs%G68y1AZfQ+clU|+YbbJ(k<~J?J@Md!p&=p`so{q#ps8!!;!-!D$`XJ2lrafCUgnAwr>2Uza zkn2AERW4CW+c2?f-||dT=WZ+OY&d>6k*~?MZljho-py}lzKz<@Dni=#61F5eb8Yns z>_<*bGHEpSM>p!Y70WY$^V4`>jSEVNI=puTOCG9_)#PE0W=E>7i^q!D2j+nJtnA=@ zFfbJ>N5 z;meyh{{8vo03()El?ei@1enzSiIudaTtLwz*HWVslK(P3t&#XWAIO?@pIe@nq_|ia z<)(?<%JI3B_lkCUeFWS6)+DA2VAZ#cLUW=y0f9jRAhGcBK5h+EbaHBII|MtT0P5}I zHMV^Xm_pm$x_Y_IT+p_^;tFq^SQwIYnW{;X-*OTMo z#DN!l3&(_E79V!i)@Wl9c-$J}*5XXO7-e*7z9*|$f%Dfaed(6@PUX(g=LBjorbtS!eaa%?HmK34 z!;7!R-J+Q|-5zSI4Sl(E%FBIv0X4$xxkC+DL$9aYsHBHEt8-f~M`!=_D7&;&Qf~bN zZLKkUIHC)OSG}KYbOsvbkYn|6CjtbI-%HMbzT4UN?oRf+KI$=AND9yq^I{KDYsJO<~+Ac%iF~UwqS2{?) zglYMp04z?%!GMVHR;GdZfh%y#sq9`0xgNvwCQKnAA%Kbgk*?)LS}d+N-RU;1)whaw zYq4sRoQ{*2j&lJBNFd12p;B~UaBzIyl={qjGe0qe->_)igHrW;ztdW)czqN&3?77X zYr-O^>wnViZ6r!RR9tP>`5_eF=?sUXuWpO$w0kf47=$7;LFf5!mns) z9m}-4-fHjx<0b;!39SAOvLddI#UVH~HPsv`2VWWjloWc6SP~9PS;V4h1YyW!9Q?@=L6&=!7IbSb41jHIhk)gPLb3vE z4eNUO?~?6T7%yFZ2ST~PyqHud3WZu{XZD|X2M}tqy-=vh+y5^R+x2G9{M@)^z3KPy zd7xgNUD|H99?Y{`R@l&<^_*6M2p^F6rrX=$tgwgvS}?yyA5&5;nh?r*mCVh@h2(ue z5n}_H!SropWWuU3cnS3KOTn6fLfi2u4B0ZYT?7QqB)>y|sBKRTs9*3V!cLJXWi<-8 zsJwCS5-)}~f_B{y2ueyq4kHy3;I@qUP>EB=&=9NF=0bq|U{P&ysN7kQO*PG1V?=k# zsK)qbdb?o_LRp(Q`1>r5dKegj^Y;{+0F}azw}JP-W$FRp08HWlD27ma`PDneo<+&B z>-q!KFTKejxik%c&*@!r$Y<5uz{{kEl{N-uZ?vDE?9*4Fma zG!c|QZ}K=tXfb*t3)*aUd_+VfJv%26jPCFaD_3`kLGofqaCL0-gwVB}mfIpKS=i(r$SO`~A4~+^UW>tYb z6kY?CV%rtiDRv)KUw{uncX?;g4a+9I8XriJ!dLhoI0^{r$f#*)MqpR7SJZcEiIowE zv;(5TTk|`SjLJ`*=w6A@VuV__#&JHbFqQSCVW6u726ApM=MPIp>^jDPjYZkW(~^Nm zISpez2ewy@yo0483W|$4*7A=El$@hXXi_YymXK5u)@Nv+qhgKxPM7@TM_7eLD}Q+}IyitiThF)~^g5j%cU!4Q&^ht?@9NF{Nl7K%oimU=dL_<;(WQ-p z#q#2gVI#lXcyM~jLZ`LoV>eVdyXowKX*eVDm)|Tzkp}hx+@1D&uJGzatvH6*I*z3H ziB~+jj!(R1bI-`R@#4xtemncxn(5A*Uanqs70_0{DU9OQE`v9X~fVvyJ820D^dj)FX27p z0tb=vZHkghnh2?O_y-*aUS z#QI-}i6VbcIVnj&0#aks%~68#Df^b8u&1M|t*u>wV1}e16+IE2NVXV)V-dMNR7yC2 z4TPlz3t_DZ4$AM*kG)U12-z8!kft8zp^|k}0MOQkk>#BSVK6%W#rRyaW;9fB=~8M} z^m1at^7DfNPvts3)0HW3wrXqhzOCzG-ts9Y1S?%}QNVMBm@eTxxN&Itms&VCSdp>%Mt347UH=|JHWW)6rR>ctZ)s^0KD_6x)^z zuW^YV7_}G-D@l5!@bRp8?NB9)S<46$X&tF86%PQtmcG6YXv5T1BiFa%^BTjs00+~g z14y_jrIncusNsD1+1y99Kn5fD#)bPkNL=PWwQpU8;WASWv7}8y%Yb(KV!-&>`3}mI=KtCN5 zu%NgQp8DIK0IjRLeSo4^7oN3MwQC1S@Obm-$Kj(@wc~Yc##`zjsy>-I?ytHUVLu;S zhj$28zznb_n>ZzKE}z_;(f&gx6b%gr4<~w3h|}gvL#6v;l9Q9i!9l&Fwds;bGOE}t z%n5O6mlY2TJV>F!j2*uo5CTnUdRCU~m}g zIGSDJtZ8rL0y-CL<@q}Unq-j~L`RXEZ2^AhOiK8r$%U2)(QY~_c{LGq+=`RkE?htc z1$d|6M0-!zSQDV~=Wzk^QOc|xToT+)&@ma>VTbGQaZ=GE?7mth?y~}EikORTL>4($EkHcGJlO`)HUS~nF5d*ZL;(7 zRzJ3rL9zAsp%Q#T6*+|-O1du%d46jbthicCeJ}O>l17|Ir&V_4=D-b=eZOY#{Serl zNPdu~^|JDoq;*1=J3vFmB;UIaR`too?rBiH_=2p#_He()1@379Y=xk~fOK+Ffdu!o z60kzaok!<+Xb0QbqWI>-Yjlsh2{};fn6V|?R`B&(K&jFI=HH|w32H*Qi#t2)n0R27 z!i#-ZjmkBWU-W>e)1-HPTWlL<&X;o!p3^Q(`({S_|f zQhi+f*v(VFSDoJa-=-s=f|~S!W3<}5%Lo3s*{1e+V1!w;aAW0xvBKw(4rcDHl?9~j zVnfYpE6J=QIGp-kn!KF99M-M@;a`VH| zx@LDVs1Lc`CvJ^Y?0{tlm&pO3Ego`KRg04lnkJn1_~H<4!t&Z6xOmaj+7`#0JYEe@)Kut8nDU)HagU$cbg%ZUYWXpm}7T;XzFJhDaZH1StyduTI za_HLNrzZw6_hAQp_oe+|^ng5ggZbdSPdYMNqc6A-OH>M$%97(z%jaxTw=*S<@$B?^ 
z6$9GuairLjAL(1M7-H-VwclAJZWELM^;i&G?(S1i3D7?Sx`O7H*aJs7+hjzrpMd8q z0tGgI#Pi&fCoE=w(SgoEe6$_fymqaewsz?Szf;c{Mp?P$$HGoCVakw09=eH!AZ_lp zWv)_nnDHX-dKm24*aM}92Z6))*!~=!o%wM9IYw_M+FD!t%M-)p1C-f4FWhlnA5@T* zmWJ34sikT05f5`C_^_K~sqNJ)xs5;kzle^{TC(+j#hF7WA0VQ=C|Lr>t&SUd90o(I4=Rnegnc8A%(_lt&waak)w1cWO3J-3jWo^e1&D$x z+1VU~=(D!QVSppxc7sKW?-HSgg9)=$8#g44|j{{G-XIPnq(Mfd-%gIS5a9 z6TCcCx}}vGEp*yqnE+1+?4vxfQlg+bqs?hxY3Xa%4J}R-O-k2&0Y#{~h6MUv6GnjR zAgIYK>1^RWcGOfNMH)B;=r}0LuK)mqpizB+O_#>4Ckh$5a5+_0*Gg`md1wgUdCk<~(OGvXAYV01( zLg!7u<*a(drD)zS`y6lIh8Px=`*40T3P5d@Ezc(~-?(LmGmqYFCChIoZ~aJ`0?XT?Xm<#PfyQ8{QS&}zdU7J9tA2Z>)M();7(Sa(|W&epFaI39)a(p zRsoFaBPoC(uinu3ec%}i?8MBf{gy1T-I-j%(caw2^0!ccbm)(v-1>m zf~ijvtU(2!@T>k~b^k$>gM5$9Tv}WO2pCB@C5fg2{KC?R+QhU?M|au6F9TjM|ushtgnh zj`t*AJ$b>8<*ENG;BGa|{PuD*!D}f%)@#$1G{CXb!BIO6T34BizmLo+xc!r!=~cP_ zE|PvWwgo-)SxI3t3K_K6Wta@OeQ?>^ zp{`XN>;26N4-WO#@+*1omiCd&qT*Uv!Vof91DX@#NrzszB>Z~7TST_ zhhP7M*wh@XwD*={#gbc2(C%)~B9~v(dr#S#k_;g3bZ%uMDJf~I#Y)a)$ZYxU_l^#_ z;$e3~RvC}wnO+mpKyO_!=;_WIT7ynz_n~nPY)%#^OtGy{F%Wv910+GGaYzaG#aaXV zHXA9lra(j)_{MQbibm3vyvV%AdZ5Q?oas>zDn}}`^lJlyyY2XAu8?}4l4PzJaEs-Z z3Ea4CDW_sCLj_E(sIn4k^|n3E({-$PH6zp6;O;@#IdKNxN6ax|_6Fn`W%qi$cI~fRpD^P*AWw zJM4w86c@AWJ4q|I^-xDww{}{NP->pN&lFU>m8$);H9*gDX7u{*=IUDGHL{klHK&Jp zwr;!2mV-`6hXO8VLl55%yy1XIEq@vhdhEGi5fc-FdJ5-MYd%B~_?|scTnDV3W={6I zF0}-OtYJB9E>*>@e~?u>!?{?|Yq5Y5VH6}BFI4HI40_cdNv|j4ID}#HwZVQotGa5` zCTVya+#Jz(024SX%KPT%l=3cv5Y9baV00W-RRu-b!uEVlD5&X&Af&cl`Y$7GZTdCf zdu4fAavlvbF`V@I^XDtT=c`$JNPg=p4=-<%<;0Ed@83^??!Z=NwSj9GjefQ<2N5L| z)3W#g%vKP_|1kZfiOc7I&Jy;W@6ZXLB_nW$L9OhsWdc6e$}KafT|JOqm@JYoP$mxq zNia{uXRoo*+gMO5+ihsJDx4ym9_b2;iC&A}`f4Y3i7gw?OYZ5do!kJisEkFTw-pWM z=aQ1qa(A4!TJ_Uo)DCxvvF!*x44R)%8%YM2xp~Ou5AEn-xRct&-QP*mhSyKbwkk!cxR`Q@kx3;$K(u+`|R1fY7QU#{~yH$1gP#ct; zYJkQiI!XYa+T>LA`;P2GW*a+DeD|6F!MO$gxTJKR-Pxk*riylT<|i!Bj7=mQpJbNK zh8zc&BA^pDgrF?})y*3*8<{?19nySOtabtWnXDnXW!APd6Y}$dCjG^WLd56PrhZTG zF+2$$20QmV&&>hTT0Mu43?NH-Yk-#+dAa)8gCh_|-Rg`&GeDtf zL0Vz)NkneXKKSzRhWDbaDJ1T=!A_m1*xugu*zC7jWBUy-P=etB4=`vT(zvukBF(K( z)PpbY!ME>(h#KH#&fx-}zfBg^Wk+Ct`lZsaC~@HRYqAd!a%d{imgD>7I1jRoteK7^ zUwOPVeClGq+KGFcn!nWY0L2kuaFGo3r~8##-wAPb&bkhHueIg^en<{OokPSMuaXGV9+Ja*;dRadAYbv_s-A2eCof`StG>(66+vE?WtU zH6h44o&zW&R51zAC@s+tls;kvKKmfcD;C(3&ALeG?n>4Wio~CRk7H| ziYbBi6jId4HT2N6cz=KW(i8o(;FB(t2B-M7kYjHlIqji8umhJ#0wMsHThAU;T)Ak! 
z5|8&zYsWkE=Us$4>8*iY^l67@U9ktxjeTmg5O#t>3Mc|Bi;Ihr^74QHP?q2Buy$|= z1)l?fX(bYWWeCTCCX2EjZy)$@)CZfEcYrfZR{jZ*97k2WK8lOg0GAp>94wMh$L4l` z+iHA&PC|dC)gT-nFl9NLoPtHkdL5J){$z7UP#0WTc;Cz5`<(q&Ff$!>(}8+Ah)EDN zTn;A?cc!MMJb+-ynyHtAYWJQJ2~)Xn@SG*pLO9xED%$EGIYARpr$V4X^r&VTxjG8} z72H*?3w(S)33_n|DnZ~H>*LlrfE>*IAmng9DOrTReL>>nFXbe&X!inQu}mhM-G2wT z`o#2mfCtRViq)i|#CA+P9@SHJ+!H9X>Z)BrOFf3rDfB&mlZJU_Ob@fIOKwag-^Mgy zFpK$#J9a<1yK5ng%q(Q4i0X`wdGqE?!{c(wOGOtK7gj0PUAP0VwUNxude;hch+;%- zHHZmdTOGUx2hBhzL%=V76@=05EKEZ4U%XZWXty=dFL;M6_zH z#lFPWi(mVB*-?rYG6tF}47m~|W#zB%1~SQSuFMRT!O=<`0U6xbnt;KLE6g9*MFDc?N;3p%7qc1g9@gZActLS-x>vMj z$}iPhS?WW~fH{*dU}J#iM`A0PWAeN%{z6l8GQd2XKRHld0kp&^w9i49#*Nf0kpVbs z;Of!(3F#v*u7n|Mj?gprpjahOL3oV}7C^H&NALdqI!HQTmPdM%zjBaoEDV+An@+cb z1b_sEq<*3?+~u{qqpQ1n9pv-adgZhA3Ob70h&D(FgAQw4wS~gqXZ?I!zg6Pm$_AM2 z1%Qa7{{6vvac|n%{N3h(^dkq|lm~{vI$%Dn+~U1F}GyJ)baACNCwd zkS680F_F6x4`=h@FJAGYtqbDVnK{QT^<0sY7u-1!KZ^OpEM%ntJ|++f!mBlI*H{C% z+VMe=(*))D)ctg9(Gn!CwWO_JG$OGBU4Ge}+T11Cxb8#28Ya8rBiT)R51*Ad?fbkB z68Io`bP`-{A#KleB#+n2Q=E)kT1DyTc9jPXuq5ClHg9_?+uPe4c`X(Kd_?Aa+!hxd zKD0jz@JS=>+49Ru_Z*U zHb)tPEFKt$s$7aynUFpM#J}aRS3Nbr=(Ng~toO#UHY~<~5i*D+0r&r(D-rcRMN*>v zji{{kKiA5SU_Q;Mv?Vgtn%7N$4)JP@<)C!3Ti0`IPoxh-dP{8rOF-%*aVMZq64vMSFb}-S>HZKPC6&z#}3F<2}xz3qf~o)n()*v zZG9m>*{+cCgSKlE%&95H_BOH5(u{iS?7Y~!Lr%w7K0+(-sqJ3 zG#(|8vTe6GP`IndE&-nzqB1?52kJH-6j*D8!U26Y4iOG#L>_~W0a*zEA9{=^b7@bI zU1`!o3qN`C#B89zChe`ymk@SYsol_qL;<{KyH`?WWo0iOGJ8P@eE0EwS@asXjwS0Y z0Ts2RB!3`WfvjsE17+Pu7Gvv&oqBV`Pf%+6?bPJ9%yuF0hFgb;&Z)!%>ZXL3tI`#4 zdH8f&dY~+JQPqNq@ zL*jfnPMxC_bF95^q0iUhY53?vM_=jE^XLB+?6h#Qz0ae#zSC7tP^7CX-?cd7Gj0aclyD!owZv8bvr4TepWDE&T)c|3%4I1OmYL`SZ`-`6rcS>W48ZU32=Ql-r z-)|TA7l?G_?#|96LM0x-%Wy>G8OSvWT2k?mz4P(Yi@!1_^yS?LZ*;}5_^I4yd^#DW zr4_@)MLly_;1>@+AXxUTma^TkwAv(PyL7y{lfI@gS}bx_>zoT!>84VYA(l9)Fi_?m zZFi~ibsd&X2Y1x;>g1K%Z-xc%+3_h=4qpsKjpOsm`Dl3n{2Gb~O~W5RitMkZwI{}- z!Y=c8d8#F}9>4U||79@j>hd3FnVG*!kF20BS7QKU^c_!b!?(=DC2oI8>;x3SIw%Hu z)z#Jc17qJ_9(@Kh=^Vf!c{xilVdVoy0C6@4k>~3U)J05u$4<1`tu)QV`&XIcrd#*S z)x-C}XNFaTdAWnI&Bp!ozpBqKj>t+bms=iP1Su5uu4b`-uf#JN4M%=o&)?}kycn>7 zJ-U<47Am)F;w6yT-Al4G{6B1ccUV(d_jS}!M+F%x3IgMx^d<;MS5c5I9Vr1+dWq5@ zG)Dy+BGS8v^d==BLI6digVYFt07~zK8tQM~;LPazefeX?c^(JOJ$IkI*IsMw6M}8G zXWxhW@Q}y2>At)ZwGSzm;P7!R^4v>Tr*4et_C}%lTg(P$3fI{}rk`@N#2VGlRKBUa zyR#zjMOFCwT>|Xz6svPATX%ckJ7-KrwT-oJt*~s#@`c-c9+8^+bi@M$s6}x3X%cdb z<2(4Iq@-4XXG>D^S#8VruuoV)ApJ)C*1V4sd9e0mug|wn`*v3R_mIBhH6uy7#<4xq z#6Ao>q?Y6S#w&~f$5fwM33wwa4x66v`&d-n!T31c=Z;}KC8b0~YU`R!KIaY--P1YG zSI$la$hDuO%@Dq;a`2>(x3oLYnXQ3JN78pbc`tlvi3^w4aA>4VzHL{mU*D_s50wsC zXuED{K+10%-E2!y zYo3?;>2c+C&WjiKF1K@bd(mf4-U{rh zF?u0=*I6XpuB%Bch9Id^W`SgRUjJdhe-DjPLQC`G$#sJjGd#v$19pQ4skE0cQ{=%7 ze;;=MjXHpDJzg!SA;`#tC6MpDltA?h=O5sSAQhnYK9cJ_xF{1=RD zXKDEUBx9_~!=OWA3=GN=Z7h?WU$*9rr+q#QTyM-aBD;p`d)q8c1~pEJ*fGP^!39J9 zZHmihL!?=B%}w;=TsHN+S=g9VYkW+S%;wdle)GL2{|Sts3q-*Gbfud@GV?;#_V#gY z&%hr%zSg%@Mk0q5jV=<5pnR`^lOQ8#T z`za*a^*HgfNg6tshl&?YX;`Ex5$5g@jet(#ir>oA=^F zU1cg|*?6){CvV@qT@LLXkuDJ7ZUv$G-twUbt3%Gq!yn_qdAhro>L71*Z)I9$+)QAasG&6lf9_@WXw1mk zEv}=qVw1!8GsAO*$Kk=Wi}#D!1scC^BQth(4ou|L3)nSPo{>GRxvRpZ$RK)ac}rwU z68Na~J91-X?^?a~nXUSp-L_ad9B!>7?`L-ONuJoGcJ!F#Yz=xQ%r9N!@L%v`htVEm zXTh6f?9smn0L5~KgimfXDDx~XgXo=r4zaBxV+eB~x(;03ncu4eCuRpKlc&^qMWR&@ zzG<-pIFvS$#fgr!9sB-!Qj?%}UL~cyt~Wo9XIEGjJ3Fd?-~*H4MJ zJSYNPEW#t?t?hInw9SSfUtnSMJ*HXk$cLo9a67g(WGp@jCX5+CrS(DRyy5Ra<9*8%@W!Wl-@_MaSkQ4<0~+hCfB}4Quehi z$-O?d*EgS#&FMZF@v{1|B9lz^$Jw`MYD?2!)eZ3(t1ops7Zb8`JwQT01;m%81?3or z1l@?>Ef2n6N4B*ODU$jVgAF!p?4aH3j_I&r_}#ZmY$6R9g@rbA58;(>)W0z zgJ7bUvD-bDzN)&hkoWJ|iq`DEUfj;EqJlx1@Gw$AsUEEAMgh7b^PXyFl8xPMCZ0_< 
zvL9BAy^t`N`{HH73kThtrg;~~U~%S`YhE+?9<=bI#m1!$h_6prl1T^FJw!xnrd_lw z_&)UQeuxry`^gK8x7y;daZ7Qr60S|ebyxg^(JF9xX^0)vdlXO4k8W>_7R6b2q`MO> z%DzsyKrK<>hmK*_%#CtyuVbSB2paI4w6sMGNYmJ{mmLNP8W}pFMQfc6J{bt~%nao} z>O9>22$sJnchiG1{&xS>#Bpq?lUTx>G8z|0HeMFp|4^e}^r!}T^c=)o@U!t5y)<|z zhJE18_1#^OUwp}-`_1}J3eQkMZGPG%(h{3HahHP#p+={Q{*S}!MiIZGb}98B%)eC* z7LT?gmAa!{SF-w6d6DK=9fO&Xo$&(p0k zC9e*l_JqCDOi@+AdM%E3!q==QBMCFd;!F~Jt<0-uMZDL0QE$J? z>w6od6Uh=*$NnyVx^st#$>S0@yaj}b@$vLLgwb7qrd-WAN|9*xCKL^xKmGPHdE19`Dw#Rp3F`oj6x zVViYw7O&%jzV6DQl-xhShnlpq3k4)-EWpvX2_<7S3k}(h=bYS!7q@GM1^6?^rz~4* z2z(JPB&|~c%n&8yLQ$wEQXLpdYysVezN^&8zna)$m7~jz;!!~S*;`*wcSNGIT1AHI zhaBiiJ6l>3v@>+H@BW;4p54xG9cv5TRj!*&EgI%3P&5cO)S~gW#-i*l+Q(=7$>FrU zThB)>e|MEQ+S8g@O^nj!VaOxa8>7F|?cx1uD>(Ax7~q@m%jNzDYjK{%9o#!mzGqVf z!nDO>uD|Ecc5>6Z7vICxH+{UV$yJz8liQH-@x^0)-%=u(c>G0CVr^?!pMuSyb)yg@ z-eXLaxBT+7xU`d0-mu4eYT$4<1{M~deku5aHnG}v&Tai30m1A#m_z?=KdR!sQ8}74 z9bJn&#(;}jFQ_BXHUu@&(|fKE%!6HWtiA+w9%r($>msHXLn?TmfTHL zG*DdX)~6g~dMR@vq`bsnT7{AR$_q%mJiKx(*O$iiIA@9B*QZwpX*AyRn_9zRE4ABk zR#tSx$uIE(D@OhzsgTXRP^4!1#)Vv!Xkx(1WnuR*0gbrPa2h zFX_z&_rw?wwgZrC3T>Af@Y~R;u#{CL}>>Ef$ zcd_Ff#EQ3Y)sIP4#wDpQtrUA$^x0fl(QY?_c z0O(-tl*h^*w0g~H)aRVbw|M$pY{;H%2(4+;Elp~qec8r0l(N+@YI&!gNIHWtxEPle zkcWM2esgnfOVH%;#N=dqZ7@5Fo0qMv^5ZL)?_GSwwWcguJ?3Ox{hUd=tJ1bQkcBWh zIvPTNiKakpSKyca`M4duEtupx)BAR2?t8%+pZP>5Jm%E+Ymh5ZD92$(37pTZi`9Gl zlx2v)&6Xoyt~%5DbsP(@oFBb!@ZFR0OuD>Ct_&Zo96dv~XT;Obu_IlAw!D|jB7Mff zF@Ik)L{SL$GmFsYG~B)Px@?kONK4gRtwM_;8Xj~nI?vG?9^v`HJ{T*Dw-(iMa%r=iuFr0Ej#fDQv#RmDv|)6v%JiqTOU$#K{> zxT+A;TfE+N;;gW<9P2m(jg7&2p5f>FKKx7aXJxAgiA3QMsc8oh;o5w=Xu*-D#+8#; zqu6zSx>|ag^uiK@Y+A<*+RCq51{=J|bH<`2DZaUSEG?vl)Af{}YkZA7904mH~tP3K+Gh0Khk zPnr}aB%Iy=33@SvuvzJ|Lbq=66TmU&j^)EFuaOf4JTw6KY$&IjcR*hT*Ikv5uRuDx{{ zw@&8QU0>H)|IlBs!z8*sDGHB8D(}p}x)QGvTh+f{1t#7{Uy7OiwJ)i;I`sJ&EoB`N zSt52aRQ;HySeF-HTT^CN+h5DWa>Y8WDdZ9H%WAtTG>h`e8aI;O`DRYtt; z6%Eh#5YqbiJ!okr9xAg3j{1U4?$rz)@R)PBj1SW2U&c)&nk`HAv9zfZX#M@>%Y*)sT+nwwC(Xyc+ZE3^->Q zY3ho6;wOcLlN+ITc1tYbE&tJisMu0sz2AmBKQl~-KL_PkbDlzTlM@X2ZY;|qOq#SL)7Oy-0P%~XzNQuxktf4cgv_S@Zv_$4!ruE!W(hRCjZBh2rhyd24h+H;(1XGI083G`E| z6?>&$@q>Gsu5<*%pvls`zFPwU;=oxg15bv7EGzPjpx zU|yTfh~>)ow4O3A1Ekri-;9%F4lIKyKaM|sZCs{!QGD{=--oeb_>+(>z)v{4$ zDQRu0%yn0Jh7J@a_-n@_2DEhK*Zd*g(?t(=1OzQWw;6E7>;x#<(N+>rbz8RFDSoG_ z{_Z}k+i{FGLx`cJaK%h4;I)+_GuyHKkqoFpLc3+XXt`-OqCv^jaWK^Np5Wi@4Luk- zOQU{978oJHCw{gr^U^{)4;(@`p*chaSp^ql+Q7g7C&HW?696mk!o!4AWH{x>kt3@M zDKb(+IE4Jt2Vh@i!6N0Mg{Y!OA75ef!e?cvl+PY0e`UO0Wvu1pUwge=G4lBL`L@w>J04bhQan#$`gCKuO?$6MF~eT+(O?II5eIZ794 zz~Qeqhxl&LIv1)bE|}?LT-r{SD4Ru(>M0M0l$_&R?k@H*|LwRS^5$>C0M}}p%!a1d zs4Pnq`Mv3h)voxB8*rP59H?K(&0EOf&U>4AMhY6a)vwjkI&;OqYIYL)HeFPdl|zgU z_*bqHwpNWNgyJdBsQg3uYDPI%d7trIW{EwiG%Y30LlORuxCQ6v#^f!s@n-$5Es1~Z zMDUJ7I`d975kbix1&Yf4+GkK}r#%=&#` zbiH#XCk%@0JNmXZ`s(eAbUl|4xdAw0%+bK=MT7E1gVpaco3}QiedaL>{F_zfF9;J^ zmOvISOHqFynETkgdhLrQWVS}OG7%y2SnLlO35kXSXfADN4kyBp<}g|Wpv!a@Yq6mY z2B4$wZtL^0ZMmDL7q384xZ*4Q}&6=TKS*=MoJU7PpZd+oAc zRtIJb^v)DgJKQ0EA%=}6v!u$q$EuIR@jn?oh}LC?j=gkhN3jk;AAd+-2GDX~#MVOm zX0Uzl`y^xnOcTBJS+pzx=@?hR49w^*87J1f2IHPtbPy(7$Dan5m6=xV0qVO?Q&Y1o zrb)(QVf5Jy)2UPKh~~CYy|u!!Xj8sqvkGP{$P%k3r5F$TQ%RUrgpJFyf$>->AK(h( zdqO05&VN3~&sV$bnsNfsTHGKU}@m5z4aYBa24<1bwz3b zdj4~GPiMV{Hy5q@RSi#Qi0i%}3~`j0AKf42Mj67=mLJI}%}XuQRbZBqL&4|=&+jfp zMrq;p{G)}&C4FZBivjYK%-SKgUi#^ez~jh)_&jiD6azv&jKnT1gH1(~G!GWh8@kzs zEx^C*w(H7EdMySy5Sgf1lqjEzXd}h8@c(QMgoBv|=h#DfU_eM^c0jDR4>vAXmVJ0tU_(#sXx&mX=0adspoEBb zv$R=uzrSwBX55~pbQh8*XSp87Xr264xu7{^Em0@WU6MT2$B&?PQKnoECipENn~c1~ zdg1fJ<+IR3(hw{L5G=4sY{i}Jb>>BorcF*Fy97%gpV^$ADFEi&3x*204vZ?cLNUZ) 
zm+d#Yb2g%+H|GMI-=!5@=NJV9xRr$XAH*GpmAe+>S02ZI0VrhL{LX#Z)O$6>M?Ze8 zXN1c`BvN@?37X_x#1|==1_l{@Yxusl@>Sg}#e?z`B2piJd`vzF(u{q4NOtMF{9(ND z`o4IKLkZlMLnU8L#Zy;ufZS{Jnb{EC5j;&OAEa4 zx+Y6&>sFS{0hV!4mLH|91wMP0T2f+Q(Htkd*t2(z;<-AqwH`kq>N78jgX;&<8w{;s zoafxo7MfUuA%Nx4{=#meh0l`3D!98O z%-lqSYe%__DNm5v->|w4@L;2rf8!F^Kra+qP3!8(4|iojpPC{1*RteAoCg9Q(u?f75jiU9e@_t z4|lX|MjTFy>ELesiu?3Hu1%~1ThG|P{=xo$NK`{hJE9+v`}S?T=Lm!T;pqz-obi%) z_>)`nYho?aj&*GeKp-fPI^_WN&5P6NaY0giL7eJ@OC-*1fH{oJBnkMIA<0MaCI#`nU{HV@EfIQ%E40{!s{-3hpcRZ8>JUP z3`Ci8CC}F_&~f*MzexO$waP9U;ZoRW9$b{AkVcP*b{RIx#^I8}`ku4td*9nX$Nn2? z`fjly6%is6f4+P*Sk@wt&UI{ItPRl>orfDv($dh3f~k&yfuVI}vJ|_Bprx%xD$BsYM!T2ez zYyiba#}XD5orO?yD5Zw)S&d=4eWNui$DdGp-X85l8RC4f*>a}!>^NwR&kG7iRScq! z%{FK5YckKTWudWN=(1=;uSu5}k&GmOVlyeSelKq3+NK3W<#4>|iMl>j&fai@ifTww zPqV!C@{|g?18)c4QZxUt0164Hmj(g(M_2Ik;fc?iRVxeuWe6m7ck6?B4bw&>Es134 zM&jU#3TJ0$U^xDH5N@4NK*Aj=Qu8TW!;{~|ikG@aGGpvow)qN!yKZLEJ1eGDqNjz|R}V6W zB$kod=OONy-$WGmk)x-TQUzW{lFKejW_j9P&ORP*m5pi#Am2 zbMA*3S17M@{={RWd^>1lpIm|ZQ?|+lUAea16s%g-pQq9uU?7D=s`=4pMLdsUc*qo)^^_WvCLnxd517h(5aEF^}CFc5OziF{f zj)vr1XiLiObYIlZ z-$#8B7Z9Mpu12C;XO`ikrA0Wej!qqDvIN-Mkv(rfxGc?`6(WhL0f|Kgt^%1&mRBVF z`DzSm8doS;gr?f>p0-Wco*{jlB5Yv@-tF$^=C7X8l!KGyM^>gle8qj+a}(Cs#bv&~ zjt1dI_;Kq8)q2Tm0?SSJnOIx&mY1R%ZJuF@7(ea@Cs4cCNCB^t{B6R!n6-tO@!j0n>vqz^*MF^B^>ZCO3BH z7hZBkr+jXu_>&BxRjkwsw$S~qDuU+S%xUz-TIYg;dcU8WQU*lwnBUX1mC$2Q7{cIP za-Ym};ZJ#^SgUz>d(6oZyc}&ZnZU{bpZM^K83=1!C*Uo-lwx4 z8=BT(v8?K#T&nsTVY<6BF{LixmdJR~wzP7V$}~MobzX^@jEbMJc&03OBt@;>XXUMx z#VSz1jzfx+gUl~vzVgE3WFO;JmH@J_-xQiQX_15l%7fN{cY^c{RtUdj{E;dQYi$Ys>!Z2u!dF_&X;5y(cb z4QIsju(}Fp>|(`s;651Ke>JvDFcZ{od{8R7SX!f{h)(w#rxCNFb){fAC)O;-sBIu$ zYPC_*SQ8iwKk$Y*atF~oJmb~VF0>EeuCH}X6&oV#1M@%p@oh-a(#WTWSL7(84-D(Q z!lrkac>lVb7jJ@sU;zWB8AX?q3hhc*mhi-Bn<5BLSZ>%Kl>$yy97&{WT2ccMmv@~z zo4;GLY5MV}1;=B1eCHoO&|p{PaOZh*o8wJSN5;!H?8;rX_N_ViWDr761qA5iK>icp zW!1gD`VlcIn5M4smeU1KKRp84)PZ(q9aF*DsecPUa6zcLR;xWP8@%piGwz8{@1*sx9aDwMCDc3RliJ^ z=yln)E={!#R4(r-XJXn0$>_lYS7VDs98!iv)5CKPX6QY%wF4CuCV9zUncL3kaSx+Kh}33aZ!k%6TFdn?BaV>LZZaEn40+ZU?_(`XuG^RUq24(6{tO zij3lrjHTow#)#L+`R$b=0JwkGf%2{54dTEoGo3^XW*+hgth)Ky(0X}ha`jzL}a{ctO0Uj*z7f)ia zZGY3CB*NDqr%Ew0-gYDlY4odKLAuR*wBOyUi@_(w;x~O99;YC($}I6C*`cIh}B4w#+POL z{-Ue^7h_njR#m{jJ}H*RWBx&@&zGvt$JE`xx60xj?DW9-!QeHWX9Lgw+Gj!jxUF+M zzEG#E(qAzFfKpGiNS%59jrZ`4Q5Cku1Va6=bImpSbZKIjh9m#%MAwyieqV=QYDmS$ z>l`6PYjHAEf0(AGdkMx1Pvw6556yRk=S3G5Q}tj32Y%#Ts3CdMiPeVXeuv0gXMZCNG~hP$a?8SLT(s<_SFoWdBRgJk=wW zSnG^fbbF6XC`%EE(mp#(EF2s06*(4~_ig#~y>~R6+F|l<;6UQp+P`~jkpMl<(jpbf zv}^6Bl;!poWj9fit*>C9on3Y}kD7F%kt}_?*a>yr^RF8dMQ4l{$(+` z-$XSHIf%F^M}N1o5uD;6+nV$>tcSw5ta^6t$w6wjLWdJt8>`8ZR z^os=KT}ldU2!jK87sw zDFu#a2`k{3x)ZAnqH&m87nxITsm^bF=#PmDKLa$yy8pq@%jF+l88V7Ah(DqMLE|{9 zvDPqoH1-R|kU~u3oKae?(#GBiwQ^)yv@h0K3oQxC9Cf#F3;-nuS$~%nf3%H#{5-0k z@0UbwVh_8HBLK|xt3>)SUIP)aNo+rAd+F`aIz${niO^&*1*UT|=!}?L!QR|O9%&Qu zUZ)ea=vcVLjD~E$mMb*a@!zsy?+G;`A43@9!q)0K&5H04|kb~ zJGIjLK9qWn_1Hw>)ME)^BQ8CnNTVp@WB=;4#jY`RXSuZwFjj(eQ2*&8G1!d@#Q zT;7}VaLFd6q(qoJl%tG|w^$RQ+`mM*-%Fk%a${+CKpizP zHPn$RM`QbaK@y~9LbnHI{gJWgz*`<>%7NS8I0Qi@l1igqG}nPyIaf<;wJc zor7?30oWz4L&O7ukCwwk7RrwgDx55ivE>T~6n6hZ9_p=4V-)a!WwQQ*w$_4(Z}8&T z{j4XGDygDZoN4`x(=+8ukuY^(Z*|_{lESp}VxD`fU@N7SK1Re{M6`~Gwx&2k-l*95 zYx?qP8QpN{yM9gFSy7P07l61fvQ40NZDE2mvb$RDZks-x*DsgyL=YJtgFdlz zfl0iySV3m=X_%4Y(F4{>hS~%$u|D5cG4lITBb2Sbb?6QeO9%x*8=&5_eC6m%kcHlS z`jz2CXYJ<^Xy;8%!>d>#M*xOw9XV3hE4XFRu-zg>SF&ww9uwSu6*?mX1D8aH8KXKu zj{PsCR3vi6hoij~y}vH6+y(5G9~MS-F)r_;rW4fJmp7Q09;Gd-jxK+`=Fz=|WgWWI zTMQwju^+|g*BRKW1AqvjCm9&7_p+)Q>Sns8v9wZ`VDp1j`kEiUL^vAz63rgyIqQ$+U zM_=+8zJ*m;Q^*jvYs=gE%# 
zcSVpVYQLN_Tibg^L9NM()loS{Eb*h8#&dmXxTeRqDn%fS7q`At1rmGxwdN_IK0|0dyu3FnDZF{hTi26n?slu8p?9N(cmGllkH04tfmp=eU`MzddLb4(X^f&&wcL02n6+e5 z+%+xyK*aNdpU#J6qgPmi>c$os6T5MR36uKgBHOEc$iWAShFrNsqDv)Q>sV;gr|mE} z5AYoLVxPTEnC!^a=r6(y;MS!dA&wmE#VX}X2a&Dd_;oAJ6vsFXH-Ltlq=b}*EJJsy zc86X5u@W1}U|ma-QFQ$bc*v3BO?^o+#L&nZ}7MeORb8mh5 zmwJ8GP$SB3T{`l=xT~TvNQ7Q{PsU>ScPJdJy;2o+mJsq>?ePryL9(Q>#>C;gy>nV-A-ouT2;Amlqx+t_~I1#*(9CP8i%ZDHCnzbB=_xLZ5ptO{zo<>UEf1bwn zZ9kDf7GvcR59z~gbgb%3Q-M|&Li0V6_IivdUj80gYArQXPM(}O$hBnQWBx&BxeEx( zrI~X2)y0CMdh;TKZ43#i_nK-+2t?YG+u=Qo46q5?jtf+vsIMgE|if-kjnK2_JCojef$7iKrkwl z!BFZ^in}V`9kuv1%zXE{=3ZMn`=a%|2QAnAWzJ)#6nYp%>-v~N_)mmvAv`_9urm}g z-(!GjI}V_TXO~w!exNa2omxFVN)q+Uk>R?#;Q{cP4J3}#2ld+fR{%e{_KH)Ru(xMQ z({M@F`GyZm#lM9Yh$5DP%+623Vz;lov|kNVIf=@?E9T_p7>v-RVDPrHigZVP>SH=D zg_``8wp#?D-?vx*Bv(T9kZsyI94CRuEHIUs_I1LLTqs;iJoLiV=He2=Uv?;gBkOJd z%o9UE`TE!1hT55f#vxg@F$%J*khOB<%saD=C!6`kXoNA)J(;FOoTj6hmcg4A)W)AEb$tyck`(t`yZwcam zY3r2R1!|Yx_5HOmf&n)}FDFmma za4jwhUVQ3rzih5|JI)5^T4RsLNbpU?y{FcHw+E-J?9TC`N+pl!5=;isqeZ7B<{f47 zO)pDqRRvpTFGl1&O~GkDA2vUBGdEIZ@PD{Em*_m`7kp1bPU;DmgPg>lJ=t-tHo{YF z+c?ODLj^XUM%m^#*Ahuu`#0?ELV8C8?@FND&i_&~B)bJ8H7b?^?)BLaz_J>uUs4D& z8n;y6A;(xRlNn>vY0Olj)sp|yD1ybI!i_@)IkBR=>)Z-pU4Wi95$;)?@|p$*@V0@- zb@tUMXXoT8_#q~KH@Q-rZtIPexup%cN{UWxfWL(5@?!i#4&)d7%%%%6cUb}2Xyc9;+L zl>7ce`o*mUh(E*}iPrxN0tl7J$>s<}FdQ4}&Mz`&YyG0pEIf!?dX^cRm2q?h5;2kC z)et>u_~g$qP|+Hy3Oyvc97L(q_V>;Z4iXYph~vKEHF#HkIC}QU0}eDiQvCOsS~dg- z7K2QZppbAt){I~)ryebDkN9~ACReB|T|>0UTEKo*R>gRUlqe<%WEw`2r=*!3CP6^4 z9GaO9!%paQ*%mZ|MfPD*L>*@@Frwh?DO-^>fGL<>dQoANtz*|vS8M75WreYfqnOYE zp>hUFT6PBxK>5XEzXS>ot)IFz^lQ_5z18BA5v9!yP9z}{_!8tLtNLl)km_U6#9T-Q z7EM&HRg*+ud^~1cUf>byQFoH3=-b9sBe8 z;n!Y3GV<{Fox55;bHCo%f)}KejbO$GiR&+czL@M#cPA>|Ggh;?cu*!jAN4Gcm>Jr}Dh{_^^&C)FX9OTk_;Tk4L06p^;W zXzN}mnMMC9q*k6kerW>^#G78z%?2K0*a1{H*yGG z$SyqQY)05$S;^p5EivM?H{ZRxdysCA_F9;z0UmV5ih?I&rGgukfSqA0v`^$_Lr+nq zh3>MsMb6AAg0YzjNtn#~Z)+lC{7xIa*8+=q{FZT8wPb1b+0$YM`k9E$-$c>Xz`M;0 zSli*&k5(|!n23kI+BvU_Qo0+ZbnNHUdC?3Ufo{1+$#}*UDMvq~gpLG*e2rTr*)9L$`0;gj zbi92TV7!;F-*zy*GbB4Z*8*08&2gvj4&z~c_ zgBR9pnUXFJ+E0b;5<^N4<9I0kg7qTT^1J_}K{LlM}5UM%wM`2pnLBvmgcdCO8d+XRHZj94GMLOIjI9cIvgkNm9*F> zGc58BZpt`BesJO>?Z2kMvOq>%>0u4%RR`YhWK({h#pbY!vXPmnNYR*^1}1}Q!Hmr6LXnJO7yAUdnWd>b(etSn1m~H~)aH93w&%$(Odq5q>)`^RcPl)(%@5QW zo&er48z2LCUUqLPWLS+5ZAIL>cPirV5DSFP*5X?Ee;B_++iPKTsDK8gx4LrV;`UY} z|I|~YksY7m$1L+6&mz%;re^VCmhJ3+ROA--Qe5#j)LYT`mh7*)5vlZvAI@q)rwY3+ zH=o+M-X8ZPeoI}%Uv^X0|NPPvMhzQp)5rk)BZALeM_Q(csQ-~)j_1$a9UH8Lt|B)A zX;tm);7Gd?y_gPW6zzluN#_dY=ZNM{Z<-LCYMAD2%p<;eJvWPG`d^nZts^=;WdB2D zy7ZNj`)1HM%ij0&r-PwnmWeQIuXx)q1X%+;t83AaC?ZW$56rLhQMQLe>(#B_T$ zcgi)c$bM(MU>81!E5E*fqr17*ygr<-eIwfnzB7VZ<>w=2k_+K<7&x%C@eCxAak$n5 zSpb&s1$0!*^J*MsIWL}1ql%iT&y62Vgp*3IZjIYzB{y2m7Q>KgA`oVA);qEoS+ocy zX#=jVwinAhKNoaiLIUMNnNGa0n9yl!TG)F(uNX+)K6aQH^LcAPvSZ4d6Fdzr%ijn6 z|50E)OdPgn=|MA|LZ?VFPAJ(`w!JN8B4;|*U@7@zadb%AXC<*6sQLRp6KFhIw{9&H zy}-%&{VpQRb`xjzdfmTdReg{OMrLai&*Y1X|M4%XWGgBGmn-~|vtx6eVb~r0$&U{K zUp<3c7fF#7GrHbKf38J(zthO>_h$zp0UH2?C_UT$ zKU-mcP#E`7=heD9X+xri+qemSW+&*%wvxCj8I(q}Mw}xB%LU@vqrZD8mXn`>-No)i ziCtZB=rzB~)~mD4kuGuIwF;O8wO+4+EPDh5tFi}zyD9}8hSz%}9Idpfl+iAM?sG_0 zk}JEbKov9Zft>Ff6W~eOiPHLIJp1j(QA@~bf0;sg(vq(RNHay;ZZ=pO|38%E3gvRk zbO(C$~kA%0f&G{Ov)Qc6h%Emp&dv67su;lYh&s_$UD$Ryz$*;U^g zra}&~hXf*1Az6PXC~rNoc8ORfUme*WA0_iXqn$ia5beT;LWTYUaOSi0VtSBivz+p5 ztBLHlcRC2+3`h*nF};k60_+7R?rqQih~Wlb5=9S3-$i{DpfOup3zc+Hh9xGc#%i1k zjcQw`XehG@&yCxVr>63#gUD@z->tyXoYWjm-1gi60S)tLGhuRVO}kJgd_QYZzFl&1 zpyR|{xC9fGg)jwzh_EGgj8tOT_l;*lst`dA832X%TCeM`%@MWyI+1kEse7W>peTff 
zNf+=ikhl*m8w$GSdH>PCt+yG|<*JdKq2<%SAe{6&4PQQ03mnMsk)MMq;^|HG@d;>W zu}eINUrJUM-rb+uif_f=cZK6L)BpqDQI_9(6GCakt}`hiB%3Cwz&kQ$QeXPG=2mo{ zASQq0DYkMC(cjGZaWbl54qg(1t#pN;2}+6ypzo6KQjUV28XOiOf{X(*^3S)Ek}qe^i8zkk-mK^TB`&eg7wI^SIqJqZws>Gp zE_5Y+tL^VBQiXlEE1^l^ULO#KElpx|a&&jK@iGAth)*91aB{ z1TmUNGp38adlAmZ5{h1B>4@Hka=ZO=7d~srbzI4=Os0GKX;oToF5H5X@T7bJdQ%k7 zUsN9RQ^ye`PPchXsV{nLIYIm9^TY8FWK*I z_taPeL^?f{`((ufE_)MRRwcH?w!)CI5ZXWA?^xwbbrXR+)H6Z$x7^auo& z-UlKxOv`@sIQJ~E7K^wDK+htc^QqjNd`q$Lrm*Jw3~LI}(CIf`Wo5}iPsG-ozS8uhPeKlFB6Q@ zZyO@OZsMXAYT!L1zDVC1SstDpTVM`SLjKRGu;QaS$F&?h8^|qD9(R)zBGqZ+X=I?=8ST(%v^i~Z1NC)!z82_M z@Nxi01F1U8JO32?gniI_AQci+J-Acl&5RFUJ$tLep3>w-074C7HKxJfHvz>Sk(X?9 zUqKv%%RYcRke}}Vxp;Lxu7cBET}?gh#3ZAVX@BroCvS)9*&k-f3&|Y;!c>1}Y_G;h z%aZn@IM&grZh|{l7J4giq;$sVl{4gVwwaf!Pj?HO<)A|mwA{3?`~J(ygeWPjB5!%1 za|U=-yA1i-5WY+6sk%T2Uf-IcvRnm@4bOf&ze?6JCQjm>I=W`9_{Oz~r{O?8X_GJP?K__(v9DA{8sO(`2 z7$`c6lvHy<)C};z;0Rs*$*onf{-_FcEl3q?EInGjyZ?1~3FXO6A08Dx>v>_MNt69I*c>a=wnD5% z!0_?098QNijLGHcR>MV2We*$(7Jcuy6(aAMo}|3CRV}e9fe1k|nU}}YrV4W8wm&A9>ZsR%om!bf?uFXE+6^c{IP54c zn04i&P1>&YnwEbi-!7c}GLlDmt%`m!@whOoZut0q#dxWd!Q5Nmi$qAs4zqj@pQ|a6 z+{gCUavV3ykEYi`s>JC2??8MVsk?HRE7`o0EWyNTpf&vM5g3?{i16Ia$q4Qp;hC~1 zCz2UoKpAd2_Vg8u0T$v@Ja+UC5fP*~{NSgi#kcFJ9JByxv9EyycR(y1qL?26e?RDP z47f)ZuiP95E}!?DdJ8x>N1yGg@H6Fb9eM(H%pe!&z!gs4YOY;%95FDxePzsXL@WQ6 ze@6y9a06qaCCAS<1(wW zU?YeYX`xQJNA&-y+l$Ebw76oGi{#t%)n|B1!b+T#HZa~#$Am_r5mWX3OYW}I1xO#@ zD<(u$8-?#mXM-KQ2bb%PKw9mC6Cr&B4fj z7J;gIrz17l5Us&*?Aqn3!S$kbX=S*-ZEB!HC+D0*qUf7>hR>~hNu?f^&y+w?zp{463wE;yA!#MqK(tpMfKqNk9(+4CV5&SIY;QfjXw?nE6v{OEOeT( zWMoHIbRu{h6DFf(z9h6R?~`-p9*u$9pBTKsSVz5N!%-Wc464DuC+iIxngz;5*`Eg9 zDK~Q*Yl)qRSf|e+BQ5d_vFMJ99*SruS`2Ux5jN=UlC3(RuF$ zzYZJaW;VqBi!jG{g=}r_uE_)ajyG~Sj3l)8BSFGYW;;zuBqUAnE7&YYx|;VYT5ZWU zNYKIM8$U}FdNlSV9?Y^z3e{OBo|K&pxo&ZS%5$KGTkddWs7cSij|=>CKBS>(MxgRb z_gw1x*Zc7ZD|Zi0k7@h44cfjU@c`?B`H#uB?cEWjl}_r~?5- ziB^|@;==J!rAUlT;-qwV2Y2XEa_dlHQpF4lQu-N23w@h6O?Ql0*{Xn@CkUT%{;QgmmI_$C0{X;-LTdM{tyBo{HHwYIO zXzx zCj}j@+R*1BDhohXs>XXDaY*+i8S)-olB02G(^lg7Lph{L$lz+;Pr|KsqL?{<&e9pZ z@6@UH8kVj6g@iZa5@DA1l$kBDXDWa38|;o=gK5#2N>=0l(e)NkQEuP+_^21XigB+B z0*XaRH%N&^mvoGj(lydOVBr-6=?;|~x{(|Vq(O(0R??vx>HmHQ@0IW8xBj#4TDq>x zaOS-GoU`}yJo`E8wVIeHJ3R%CtT@mc1LX@I3@Klt;wHI#dnnF+ey%P8*tu(*)Gufs zGTF>Aoz_gJ`Cr_KNV&=Vu89uB*3b)Vs(^bC;BTG)Hr_k=(^z3+L6I`iUj1m%fZ-~5 z2OX<}#>@}+>pEBl=9tiRyylT_K6nOW$*gWV?2WRmvO>F5{>6Q$+`W32SJ9xvSt0`5 z`^${V%@K5JtM3%_kPRo(KB*`ffayf7JLsWwBxHaB_(wmyEiIGUX7CQa@1^)_rXEB? 
zB4N!6I_VE`kEirrZ(-P!ItlB>lb0G% z=J(A&mlCpNvmKGri7~|%7@BQ;wI<={KvxJ{)Q8*UN3n#ruf4VwpUDeNa(HoRQKE`V z`_sEiG`9^(fY@2EnG9_H&gBT3gEs}jr)tOoM6c!J#i?Ja?|dc}kYe#i})FS1OmHb95?UBE(&=*_af6ck-6&F_K1>9#vNzLAia(u_^)tM;}l zIW(%1*9TJ9{(fKVjQxVmAjhta6cAMnEvwMiXYl;WzrCwS>*aehjo=Y5ha5B>7-umJ z(QFyzZ#WVC*~Rkzv?}+%N9O;_S;AV5oGG3~_ZkbeM1~+s$-RieEipo|vm86E2Ic_j zy87Oelf`jb2lhz;P$8#1>1Chp1@))%w9al;b)NaTkp`f26US;HXSLXvaNDZc8TZEV zLfd8O8Pv)s0mA=n*Fw4BR57j~CFCfr>ahEuWsmE?bZ@)cR9fsqZ*x;vv0A_rjbeZ` zXana<4u4sR#HlI={XXabQ?uB2{z8S`mjbBM<2${oAi77UBXhAt@yeLr`=cGCY(&>Tl^4FWC7#vtE!@hKml0P>2CgR;G@a!=O zz+Uo=g04(j*PrBqyx*U^cw=Z$kX=IeYJS}GJjy963)HMmeC!DBwY)0M+V9pe!OmFF zoxI$?dJBKdGIQWhXsUR)C*S4@%g{t1_g(qaFhYiZ;5>mueuoJZ2q%)kjxU|*T-!YP zEt=Mm3oc$f=+8vf!?ZmWnM1Bo(zkJ-uib@;?OBpfsILneyaAQ~;fRW>KU{*3S*(Nk zKBS6l-*0&$q>^vCg5=v5SCQZKX+#^WlJ(;KVFR7!$ydd*ebN|aBg4*aj;5i#No8gH z)4x&0P@Gi zFb09Q=V1yT@O9xOwFX$(1Aqsbp3L+G)jmJ8^lS82e|!{7gOT4 zCYJxM3DM?XYZPBKg=WKQkwFHEBPq=p2c=X)Hs8JKLksN6H!7}}%oc90M0G(a-dAKx z@hLyW$6-siadcM4rpJ}sx0w+tg!YZ@wHW17fjtROIiPzsP3lME@Y*n~hI^(p*MZ8% zVYFc+7px`yBD&BPtDg72Id`!9+C5xB`FiufEo4SPICYOr2vczM7Glv+zM9EVKBiM3 z*<#!ndf{Ab$(l)SM|)j~7aqn5xL#vh4@*qH%`yTErIdCpfP-Ke=QU2G1@So3$Y9Bv zbY5_=Ui=G{t$WE)R2Q^>9w8$SqTB&;dUaA75$OuIuW`LAr4}XG`*9MIIc6CAC zWUokykG~P?sl&BPgs>+IH4|jQZghxnc~=BTr@v*<;S}|Exh{XXQ+*eruuxMlO`Y<9 zeH0ZQd9V0YQ}qMprL&4@e<`8MT7!?GQjww&$vy_wHdKb)7F+a+QXK~7!b;=Xa;r}N z?B70bKE1DghRakOg}S&`dQV9XCNTO{7gnHwC_+Dcfn*F+p+~HK2!YG03F>p03~v#e zqe?&4KyUwo`l587fw|MZZxrP-f>0v~cMsFCK~Xhd=Q z?8GjL&&E~r=5>Lf6f#KIwa6n*Hc+PgNfTLZa=OTTBn8OLW9u{>Djy4=PZh1nwh&6fsP=$wM^EL=zht18qDHq4~ zgD7@S(&RlMmy87UAc@yn9E0!i%RdXncKvZ85tIX9Q0?n)3dGui+TGox8|g9&RFDnU zvl-9bQp;K( zSqze>hlBj}Uqe@?|IrU+N>Kl_HpeyH!oLid{2RKg=AM#EbJZJw2{UYi-;~R0IW*Gu zk?%S|6g#ybX$Tu*y#I1=KO>l^X)@ophkT?GL4(0I*ey$s_#ISPt|?%k1Ia68e6QnkY7nn>Y=npng#9mVwPjFjF7<|I)oH$J^6?oeEkJp_+=hzD2A0ujVKAzMeB`!0hiVPWG$V% ziMk$#j>{ge&D7@`wK8XZ3h+LHce|9rY3%Z`yHXL5RaSe$j}z+jG61atE63<6z!4li zcu%>^DIMDRS2OuMx+@Aemt}EpM=W_}oBS~>TYwi+(r+8V3Hj?4#)}vWK@{aw+(Zm` z|3`9#iX4%)w66iOS-E{I=qv(n^HY5Zcul@=dXXXy4CyA976e>looM^olWNgW#bs$v zvrfRZfhL~uiJdusb@M@+H~5gRu8oMeAg#oZr`rJ)@f0}v+i~!LeE8cBcO@T7Q z;P1jsS(Qtiy4TzwzWB{IJ2#t{$CBp+lUov^5}@>{espV3Ud}6c4cu$b`jh%|94A2= zpj%Gr20?{`#VdP3nU}GhG@_peM&Zj>w*v?a~zm1^6xo@p) zRxB0n`)8j}l6S4V7ScYWga#DK^@~!%`hH zJYb;jUO|jBC{GGngP5=SX2{D_h7W=E+WzLI!kRy!WB|_-?&&7M{L-1iO{YIbtSy5J z%!81j3otKxm7IQm%uB8zK;7%v!&$i~05mkWy20XkVvm;Km669?VMhzd@YqZc@da2T zISt0`D!oyZ&u{Z&Ao+*<$kOXuXXQn@ObQSxp~__gpyXNhQfg4z;)zRVHcF~1NsilT z?}4-Ay80Z`=D*JaB!?dY*Ee9v`ay97)_a!l*O*@hI@L!<|Hs7vewfCS!}D9Uo6wYi z>KN72NAZOxa8%pzWe4ec&g1Ew;c~j}f-q4^P<>n3m;Si}n>7P-g7hHyrpZAcVZ$_>|15e#E19_We7%3P4OZ* zmdouuyQRzN7yfXt9{=U+;zW|Jia1Fhf3~ahi@fyz_&q$$-|$ftLZDCgPJRthrvC%i z^V89-*PUAxyv1`m-wn0|Z?gKjGhwBC=^)|);%VVA&U}{hg-IAbF4!`8u#grS?CGB= zuu$=0KXu<#J+cY5w-880o3aF8Df_w*lsw|4?5F)6h7-QJNUhoH^?pJD=00rCm(a36 z#74XJ{J+d!ve^08k$OkYdFI}^CLzF2ItEHs^|j!p6&BE(z%;a0wJlshgmboJ4*|4KYtQ4Xn{eD zTRlZcMNjqRP?Dd0U0nEJnqHM~=1>!P`wS7xHdry%p5K=5u_U+qae{BntxLwB1^Lj~ z0Ra=dkxKHimwJgvNnDo{e+Xu8P7|S%!DPterlV5iMri}|xlp2Y--TJ6XGoLnDRrk5_jQl|P`^ z*E>d}ctH9+Te4|lf>aSxqskO*uHBVm61S0#XpfCxkbR6IAwEE_xp*0TW}&F)$SQR6 z+n7=`Z#$Lz8MC3PV(r&)7Lo$8J#IDCHwA=Q;;!0?&$(~D4V6aF{Ejc&fJpQT>`ECV ztk~k9F1AH-cCd@(4(IcftTMX-`Ai8$U)iY{tlfZahb9tL(NgdKR~Nm08Ni617dMr= z8xs*@k+w6MGo5`p*i?1CP~ruR<`$E-00)j$^JUB-V`t7U35FQ2mrFenSuwLqvgm|R zsDv!sBcQBb$J?Yac8ea(_ zh}{rBj@2n`Bs2GcLiay;l)RsVmaHDe!0!l}7j1oi`qO7I&0vFlQ%bs5@xO)-cOA5w>( z;iwH`joi#<{1=!%*7=9Sy^a9w3qV5>O^;`BZj`xC>y&n8sY06vuYl*p{}ok#kL-$q 
z$4&kakpc*(BZf(c>4478qsT)ry%8p;t74a*;&qQq3HOi1_3kpIrKJfSx!LUt<^gM?iY0lS67F3Sj)zDazKA{oTcLx3J@y@mSve|6UK$js&b1!w~F?N_}e1Q}8j zD>_)kVn3t|y*+%R^XX;kko9e4VJH~Bm#Rp2oe3 zdy;xwaI?vP4a~%z=;d;Mj`a8Z(yyTbvULBo`JP;UHaOAd+%ljsIL+PGnq8kkF{NL@uAfCbxlt#On?7XY|0>D@bi2jWD z0doj9CqQ$M0Q=_T70UMbw<;EEa4*>3c((DE2T!NmQLRfC!%E}`VzA%D8Ye}O;s-93 zSpM2+!r$eVBG%^o;|{Zn0ad|KyQ>HGxt3U~u1|HF&ldcz zY+FeFs1h<``v!`=?KGQR&0kQTJ7z;FRLSdrR&km=`Om|n@%U5g02NGjM2ScE%4pss zd=iW*@xyEP%tbojR=2yX_SUH+Mf>}cKSz+%ob{s))M$ScX8G-mHcKV!JGtHd>+; znjHrWs;4B40JECsj5aZDr^Kiv#y*~Ydd<6Va!W-a(G@AuBMlCFups0c=@ohN^~K6< zHd}*Ch5lg%6mUMJ4+&+l4h&-S+C7FkLZ=}ikK)5(CLRZo9j`vo0Ncy(w&LXG;`DN zJh_=;%g6-}ki~R0FnnHMm3dm>u(qZmy$TucJbJ8^s*>&E;ki-qaL8#Vrx^SUO{^(;__)<|w&rvDlXyCPz zC71MzH?@bodvWKA!pCv1vC^Dq@XCjL=~Z|gljOe-B>c}0f!Lq7h#C4}igCe;3E?WZ zDDl}Niq^V+sRDw2%zf^yp}S`UZ!CP}&d`KGXz~5u2KG{!uEqI6=yYov zgHE1cg<$Syx2A&KxB9~ez3*mh${~z!z6@@soX~BG$a$>7^y}O}8dE>09qn}xzTVuy zJ&){9pJ&PVZA&6tWzhZ;jWbdab}L-!8o$Qm=penbpf7u2JGcF-86ltYpZ_Bg>@R0J zge|cdjc|YwdP~)T(!X~7{zNIZf_!$%^r8PqEP9xiVB!nk~u+ z>_O21<7$Q_ZyYOtgpXol0>(tC^F9U-h91BYyheuTzMp$%+d-Yi--6BvAd z!|f`+t{|`RUFzVL2@-J? zzH7QKoA8pAR`H6vu|p4ZEE_@nKjI5Ig_P-WYYgFsb55vdzusitz4$CO@t zF%a0|nRPn15`<5b-x?h?rh+G$k9ojRz;^q0Xr>k0k0*Jpw94M-vS>7tWv|}Z@RHlv z!prp*t2<-iQp)uS0A{wgCuUHKg@te(UOzs8q?!EoH-C5wG-AJXdk-r0H)VXGJkwM1 zRcWdbLy$bEkV+;iC--EbpozTR(f%S;lHcFI`hJ9UxZXjrvoqE2>VRI3+n$#yw-OpS zX*zhS*Y+ZP3&){9KZoAA__~d{hxzo8w*6!&swPxD#L)0?mE-JW>#TC*>>thx%ga6c zgX)TL!L6f*ldoS4U^ivRU}!{J$rUGUY}~pO*fZ}oIyt%}%rtJ$?~Wg2dT?ASe8yg! zecD&8>jLZ6ib(@T^To=OH{~=V>~-A6Vls9L4T1gRDsPg-ghp4*h|XUXfcT!v9tj=v zT_#kqYorqUl4?}GM2{qVDacfWndG{71w{guA@I2FoP0jgJ~->;WstJ%G@4bCG<1(x zE3T+`pg3mr_U7;(k82~q)5@@H_Ri=7>BaF2;{DEJ@+rtImnb?lQA4xI?SoPr*=)_kI7EIE~AcQ(UvQI2-6m7v@Ny zvF}=vr~P!oHm~=P{#w^es7a=LIF`IMgh%+=A0U@9pnv@zWq&xhL^Zg)Og?+Jcf3iq45C|WPfa@bFHY5HzLTUeFR{FnEb=b~h5`Sb3l zQ7Ed1z|?NM8#DS|!V^ni#_K+RszZ@s9cJ&)Q;oT%Mxkrqx$i46%p@n+w^N#IT-v<5 z6UhdtZvPIRK-uZ($K{(FdB@HsaeB(EZ_SD%{LwcsU|R;(`UetDjZq0^4?7GlIb^6J zdv31QWka*xcaq3hGma3a#SDvTkF|AP=sl-l%k?UH>D-p1)s37SPVy&9tA32mF8yNm zDY5r!Su|AdOE#UdzZ&Y)z+4YsLEk(z_Hs;bT#p!h>OtIX`V{F`dO-~j>>4n`)$(i8 zib?s3)C{M+-K}PHh#lyISbb+7wOnEC^|&qV`Vj6pY8ZPR*GL3$<0UcRJ@UHitl zgSFaPHyV(-5ccR@K$|BTouvq_>%t<6t8v%jh*iiB)5|+LDysb+ji-UHp(TxK5^Fhk=|`9k4OpMx~7_%jFIRL2cA#;`-g zv*@rki{7$xRmC&GyZ_n~_iClX$BECGG8MmwD2ZMY+o@2~G<7_=)bVq^K6tkF>7ayP zBuQ)gEmGS3ZjAF&Sh4AaEJ+O?6jHLLeM=^j=4{y0WGJzwU9%Cov~rggZ*}CKJ!}TQ>Bfy z=5O$fmau-&J;rnIaCdB_dydc>imb2(Ym|a{cb@b&ITtTga&Nj%gMz{6G!8p>bj1a*t zA~4z|X4!do6&@=UbM%g#7Q2^F=8u_ou#IB6+u{~R#_Q0LbeJ10a~wO8FQwy;+IO+( z+_%|x)l$~3`ArcaG#M!gd*JPKN68G}57MiN(oZHrmr){7gX!G^Uc;NP!?I& zCqGU7XEgV@c-Uw8KHkTkh9hj=Da!BBSubpPWbg{2rPjPFKPu7DZJR1CJR$XwEwvcS zNZBX3HAu5B6^11s92R4O{L@k+k<4|yCgFhCHF4gaOx2qmI$Em9 zWbeEg3nO{O-o8zdRif~a4CXW4yb`Au7`sqz27Ss{d2Y9W1)iL=z;flUCHf6s78y0RV?_=2NsJnS-VU2 zEczJHUlY>EyRo2eIiM$i&aXgYm1eWg3UX@yj4Rh@i21Mzxju^d`hP)%;+_6_Y3th| zjnwKpe6~aLZJuX1jLM)D7lmqz<6)MI(o3bG%Ts(=K0J*ClJ8@6oja#KXZjp~i;Kl- zE6T*mCLi6*x@w;P$;3@TyKQ_mVlRdBEo5y1M=PXYYKo^TEYz2_BE6)3qS ziS8qu%a);Dpd3LWXf;&6``m{L1Db&_wp9(Qz!k^&SK0;S)heyz)ghEja(D^Lq%Q?c z%`hPE48b;{p>X}JL*n-GP7bpX~neoA=Gee8BB3R;sj$2Q6^Scj*Mjw|;{1g>&f8nHg z`yV*sxA~mjDMEWNq!>cuGTI@{y|4wK`rrI_rg>{mOeQnM=L(Y@Oq2Tbw8)cn8-IFwAaDDM{2TjL-MMFeOs5 zYcY47Xi;9%=we`LtEI=-o2Uc#=UP5e8W1X_Gi}H*ls+^lB209yJe@w#c$u(rj1Zg? 
z6VYfBF&4{dz2N*rEYXO1Ly+grCt#P7yZM8qwE1cNQY1q4#MmIKgHm*{dp+_bzA2;Urg5}3|7&OfH4d> z7Zo2m#hXTd5>hod+-lr#Sj!==Jep_I5xLcAP@@|(ObFWw%g9aV9&&9j0t zfeA^c)J|I4N;nH#QK8m=v}OK5l2n&O`h@YvW8LQj+VWkVj zy0{TmW%hcCNYs#mAfYCtuOF5@QO7S-iVeXi#=g)1l3<%ep^dWA9l0Ax} z^qxNdl5mDyWidq|Wwu+t<@u=*R%V%SYy@Y7{gPP*9nARABa-fSgW`_)0?1dylj&mJ znzHG{q@rzV0dY~-$SZ;lI$_Bm(_%}6p*Nr-L)Z?9NY#bv%UB$j32T+>?UWPI4>>ul z!4y1XU4MC@!Yfx8_!X4Oo}+WC$g{kiaL@6GL(OmyCFeDW?~l*_jc?jCjJo#4G+3Az zwP$LB!Rs+QatAKydN~=U)_O2(-%v&sn;O@bSVeg-)HCC=ajMW`oEZSDG}o1n7HEp@ z^BPI%ib1rVF$=wMkZNgXq{ft)d|EjC>##9}`wxo_Ren04Qd6K(<7VD^HrR06s{ku9 zr%8&74^+QXmGwzZuRP&>+nt5xwMwgT;t9Js?s{f2drFCuk2;OC;#yX=5YIR2fkDlj z^>6{>ROiip#{St0bE3gU_NAf4X3BO*LOxJxZ)|^+tsa7d>nqjtVF_pcUjdzW$FgD7 zO8j%C%l(*hQDdIxXPR5Nt+Os&s*p?n8n{`2Dd;d47Ib~CR>-)<601O3W`6oGF2UYjpH^@~x-ciIcgiVwBO-&i16Tfx;YXwSWOmbz zH9NSV@#`N~VLfx46C5o`SIje*NF_zG)V{ZZ_pAvRO_?u}86oFdXM#<#Lw!4CpN`8? zyJ|2Y+dSFtG~=^=&_5lAeR2v5VUMV_A4Y6By|?`eYl~bpwTvIWYS@3tTD-k|mY??d z+ar@J$P7Q_S7*uWWljBJJJI{d;Gz(r%Ve@+9ex&6*I6Nx9#}jxnSqo>IAIIBwuxPn z-LzARBFyr`;`0g7y81?1T`Oq1Q(OQnJGB$^!ELO~ zdSPr^lyLIpXDK2Fnx8W2ApC*6Ug3Mz#VL+RHAjB2Ed%>DM;7G&eyHP}o#~q0$ydTm zAIV_vY|ZyEG_z%+|IN$f;6Piw3XNl3}z85Ss;gPOnpXhWp8AA~$5QydR$vvyBak^Oe-RID7Ms{txO@~3- z;Y>v&Dhwifv1u~K5AP)8Tk0#RUED1t4XWe9#U=4>)>1E9l`xo&iq9>O5gLT02y{dSCtlS` z-ih;Ex6Ns$7MD#ugN&V9heu3PPji-Weti?$;jDskQizLY+g#rQq)4qLHWq6P7%kOI z&ov8`tsMR ztDwsERK(jupNIj3HmKfdMLH+c;9pU%B--pk$L!LI#0H4Ark7KMzKgLOKUjE?>W*Q* zlpWQWl1I#$9-OVY(;9j0WZ@=c_f{r5=eeL>$=JrFj?~o$K^jej$e5`QQIG}JkBi5} z#8~cl%xS^Q06WoC-2rWF_*eOLe)TUpi{#bXsv)Y_@$4{0&af#b`!4u))wz2|7=5jO z;*ZW4{)}MH6r^6y5-eO?p&pA~-l@nq2Tb)2kUNwTPvPK5%e zm(2^Fa7xJPXWLczhl%2KyruzGp&&;_tvAis*74HGeVK5}NUY=T$tZ*}>WKXaEm0L4 zgVcg5-x!Fq7eZrg7CSk#*t>OA-c))`zGzXR#QJqgpQ4l&h+&?G&*9>+9f$9?``Vgc zXe6BQGIR8O7#7dLuImOtP$By3FiyEQ|0MRfePd?EC+QY(YXk;aduK5!86&TYPt_Up zP^FfOh%An$qq`MQYOjEX>cxN_iajk=lvB@0R1`bL(5@lJGD&I8TJ)l|mtxP#i^B0^ z_0shEgX%1!IXsvPm~g&HdFHv)0eumh0jA`^3ZYPNqeB_c?AdN5_*nW~5|fV3baAlraog+jM?|pB>Q|XN9)&4zc#n?E@ELbHR;L$mcQ)^Rj;Trb)Xd+az@|(% zAsGS1*(A>x5%JV}m8E0`%-Y|k&G)%11dpsiVb>QmHipb%z|ah`q)0Efj5jySeq1K@ zJ7+2KJP7BE$gb?107wp&EY7FXDQxyn;&OT*K#|iKmSQt5+82{JIpPV!%O|a^A>%E# z;22CATd zCc5ah7xIg1_c6A*&ni*)YWme+)WU}Scd^+eXp04@+>ik+ig}49hn_tq?@1Me=2e4>K%gMv77x;#3i>$ieD4WI^ zJHipVwBICU_z`Kc24EK67p|Dy%tF|3l#M zs)*PHbe`T3_2rrM@W8^7yV2Nm*C>y~WFl*lZgPFXGza)x;moqsT(U!hOb7H}gx62p z`F*f3Sy&`_tuqlz$1@f0`IB?x3PYU1P-YSz_{ko)5vKlh=@V3em-Ei+mvjH6cG`qW9{|-IK+OY!DWUR zQl%&omMO=UDp6BEW0>)-@@wjsFHQFbvu$c*Mog@(U@EOAo7quvgbH#7C6oLmbgqtU zqkuC&45(LU_MF^t;|UQQcv0>B=bw)?1u2U6{|chW6zTGTK57kf^*eAVJ9QN=xn`Ax zJ9)aiQeI>C^UYS}wo?kOUt98Bd~$5KnW=9oVF}VySFzSzqw9XqbIC zW|JesSA^R#DCj%IE8qA%DvVvoN~Q0|m$d+^mlaQiAW3FS-E@=u2^q3)ZIam8mx}A2 z8p3vU6-BH9(&K$5Gq)Tm^`bYC)9~n?n7JC%sX_EPo8&%QB>cQ6yHt{fsEXv#C{Zp`scZ2I=s>#J*t;p4 zST@Qqr>by(skU5k{ISuV(Z{6^4FoZADmJgfXuNCo%y*RYU>HZ0R!?#B1bwHhQUu4S z3<4gagq>eisl!am9=KGgOsnE4=iaaaX#7MEuc@<HZeD}8u z`oZm;Z0bAOJ;i!Z@>y8YM0%Ep-68=3A1LwG$Uu1#j7IjlMeJWs*+4(w+`Jc%Cb?f$~bj;;2M_;uLW|}nBH_5B2*m6Q~iSqN#$SL!&e#Tk$ zCS@aZc!j+kX@9X2KH<8im2@NBRMoSCQT8{@&u{Sx-HT8oFTYP=%s9@owA`L%;$=Q>%ZA9HWMlNXw1KA<(%ZpwyBD{eKg1h_Z#g-Lzkr~K9bW>lWC}_Q7#-j zJ|Lwo{p8qp0e*SHKNrkZ>F?y!tvrw=Rk7wa`>29xP5@iR$h0-%dW+|z29vN}%|X%} zQ-%MIWyxYXsiev5kn7sq@<3(1IHw$Qv zrvbK0{%BvETDe^O?b`QJwz;xAN9&Q?KUw|Vsq>3#TPdf#r-?*a{W7(xlXwpX1|U2b z(46-0xi9EzQ#Lg^Y2y}p>8U$MpQ}AJra#&}Rq<>xVlErP?B3%-j2}`9n& z&j^9RpqOv`hj?+>S$AL7bqG?^wRQ}C2bJ}CH)*@8nTp7uFOq>7=E^Mv7IHYHb_;;j z!+OIpmg?t@n%z(cqx>Tytb2ek&3_*~$)6{bbbz`trm2{>J=4>XJvugI;Cp1w$H9!E z09)ZO0k0}3U_2PG$q2;W*iljhpD+23i+%YUbqISOio@;9Uqo{u?1RlLG&kOvlrkA> 
z(Hfg^A7$2ZEnF(s6E5tSxJ?UM4z&AaO_ zk0A0eInxmf;~Vjcjje4-Q-zAH_AuMyU-NuB+8yvF+g8aNA*ZlyXE=lCK6FiP!tbYh z9>mpNwIdA_fp)5ZAE{cTNI~_T{X$2;w1nPN*I**nA=Tn8S?RX+*ClwD!Ij@-0G0O2 zR1in$Z5Gx7h;WDb8UlVRrh^%N3YGLZn7UMj^GNlKOSxite=@{0W&0m|8Q;C97z@SV zuSZ#58+*t={c5CcqsqUBS zP<)Ygo-chij!84!46E((LIsr?Ow>A})WPJ{>dmkx z>I0Y(LaXMF;qtyf)VK8%5pJE3aVqbS+5H0f)TMSJQx<+q!eTapC!=-oO5~3KgR%cm zKC1XpzLi^CjlDeR&U=$Cvjj9kE19p=6K20QRfU}ZtY&|1|G4UUzHzNA9s$|G7;YPZ zY>(Xs@6+CB`fLmSvU*WLD3qDpBjJM4HRrhj|pg;95jSyt!{ z=%NZZkYt`qdan21B;4R9I1gl3YPby@&ImcGXW1pYvo#nD)h|-7qnN{IoGj*-&RO>; z$9TNs;(J}HQS*)kYF{8GJJVJ=L&vE)2GV23uc}qJJ@x%$+Fbwlv}sH}p!T^wPo zIi$HhOBp`L(|A>QG(PloaixNW6tNDD#YxB#OSm+%4`1t+ci|Ko0v=DV)YxEEzA@-X zYY0%oysd$EADbOPU{|g`=a>bN;lRtlrT4A!dV^+?n0uxokBhE{t=$6WD2cQFc-1mi zF1u^T&8lJcZKtrDUF@kX>HCZ(QBWm_pouj#DOpGo5)3)&A%%Tk9p+YhW42dZpCPOtliLDAw}@xopAWA)z5$L#z&Q=rwPr1awP*S3@0^G zsbfXY6T{I-UK@h-HYLKyyn5q7H|F#-8h}vq=FR5~%cUC?8oT(o1i<(f1BEiI#h9)m zrwUfo0qOtbG0c*2AKv%s)WG|5{8dI@KJZ%yyEml4_QfTY{hV5UjrLFbF4^Rg7`#nq zJJiCZmdW*UYos0O$M18_`+I7cuuz8+kZbTH8^ak=w<%%eBQQsGDqmrtWm`ZXq!Ivs z_h@#p$TTQA6CaU?E{Tbi`{{r8sEKUhURf_>9v$FD2leK9PIKk@!KliPb zqY)>i4vu*y^*cK>AXvT0c-{B=JX43^EEB^>+jDX`_rKvEqwtn$4fpK?dIYKbj`YFX zuVa2AEk`f-1hExuH{zw6n?Xi0@PZm-Z7O`i58T~E2`{~3W7j7i9PK0V;=&YuOiKe* zZebcv)oA{lsC+pQ#w8rDN%^OvHe8c;%evh8Ld;z6LZ=BsbhJuj-#JP5a-Hpk03Jo! zql^E*-sqEk+fqKS>>nh_M7M$P5FMBp?rb5CUi^71Kyi(|jueD4!0xLI@52pg{c{Vh2&ZU49e$i{EU` zi7JE|BF?^wFM1X^_))xjo{}dgo=w-y+%9^?K=aPUEHo3lPUxm0+1Tb7> zv)9ZwoEBGNc%{n5qDSxvnQV)lBt-JQLJ-uOX@E#0`Cz%F`_5d#2Y+QUi-0tPzO>`D zeX5MCPCo*vl~r4WY#+gi-f)Fk8QGb~#&l>Qe>13)2IBvZ2@2d%Ru%Qs+r#rGRGxQ8 zwx72Mvy_iEcXwN~r;njed`bS5zH3BTmer^M^ODNvbf&1zx-KVePtWq2$r*13^BFST zGv3f4Yf%pcex}UW3_@~#qxvs3^nT4s6J}GPyuQ|&rAXk_)MM^4r>9B|`z9=;ITWS%hj5#{BC{t5I2xcIVosmvWi1)ZB4xi|~@01c%PZ|D(G2}J} zAGw)gDu`)P->0dct%{zYFJQ1?%n%MtVTU;x^_y{UF(A*$3+$BRKmjAGgo zzI^Ix(r0w(D>t$`WXc3VjQQu~rPnvUbWXyKlS=yktFik$tVqwR>u1#ApzIjj>se0ge?w zf6cqcrjbPHmDKMu+lTUBa3MtE>${n$g5+lx!Wrd%^e!KN#^-FafJBP-jYJ`uHgvfK zLe#=CrF}ORI=j}{q9f5Wyig`gY2nF$?YK5~y@A)k0MG+9dj046L@b_gh9gvr-TIq7 ze8f>@F!6KxIn0cA^qsMpasrjz8TD!o6WuIs<#fm8VG;5hGmB*>ft<^79kKw?dZs?2$dF zmwju>lpu(Y^3b`4e347+L0pB?28qAmhL!l-p5aPPlcXXk2fJxSvbOvp5zBiy+5VqS zy16^YDK=kdv+W1Sw?dElz}zcXV)Y=1fz*`~&1kBq*suy`3YFr?FZig1-p}Qr@$zLj z-QWGHP^dJ8ofsW?Y2!-Cxu_ZB_|lwVhUXZhrPi~i1AxO?Y`c!>_WAOLr10EOn*lF- zdVpKb@`AD-a3D zWJ})IT3Fixu4Xizr)EhErpSjExZMnMA&sB@Gd7{?b6ipJkd-UIb~+^hn`<6U_35*s zViVkYYdrze;irODwOHjHQW@QmsX3GCMA|$L1pFs+F?e8U9cJ_PM%k)fM#xfANfkrC zQ_RS+4(T+$&I$43W(hRNzopQ#v&+YBsm2_)++{(cHOp0rd1lYrd>=o&X3Tc%$x{ER zbBkd=6_`=I0pZUf>X7kMQq-H2th(#M;|-$!+n*_QQ{ySJdMc`h-QTM{2cVAYXt&jv z!jI(5UjNQ@$O7+Gk|k8}?viv_E@ajjW)(-R~=3ke7molJqPYwetnb|WCO;H@m z05~c|NeI*~Zst9N^9i#+k64YF@fR06>!4PeqUfZYiieh$`?E)*1^6LvjHoNI!kz2x zu9QYPj$~vDJw-=L6oaO}?}IM@6e=y*H#IZSi|V2=Y^0LeZ3p!_^q&%$hvG7e+wUgm zk-^LF*@fRu4RY!_MJ!k&>D(egEk@@SA}Mc zRO)66u9x-MVZrq?hYs{pdXK-jkQ_$lipbOaQpf8i?PBNd#`I@9?AY;d4U*Fa2-(?< zEOKhDe<9?!FC<~dVaKL2ho=Y3B&cP7C{B`!ed^9MRo7Ni-~X+jN8ZdK+3ium-w5sF z+BPLQ&AXSLet=D9;Qdp=DJ5z9F|A)w9@cc8u3SMjHZ8-DD!S7_x`rQik;R)LVU!qN zvD!xl*LhwHpTEM#_#vxpVG)FMRB^)YFFu(##V++yBn;-H?6}_mM|Ps$83^t0pQFrd zq0Kge8#yort|^9|A>Wxaah>%nRnc>K!YVI8Fw1k&$I9uvUN;ke@5^v(m)!pfd-WUM zOp(5U)bDT5DPniKY(|*)?hqsZVlfKqD<#P>m~hYpv>^}Z$Ky=C2uk_-P5`rBRtlsP z>5*IP2gG4a5>hfcApO>wpYc1rlY#0?Vwl%>R!Zi6yXiH$)pm1Pv=*BPz5qUBNv@x_ z!(zZr+V@xsGd{s&@#;uaT-GlFrD;Kn*8InFK55J*ek5t|P8eVn`Q=GbtjTZqRld8b zNXBth#>YunSs?jsN&!7jWLX#|=K7E~9qpK3=M{pj44ww#myL5 z{qnvEM0fq;wvJ&yKt)yS1Xbi#?C~cFufKwR7yjOWANwAeq(@ z!uqaRE+Uha`&&0zUENzCRW@=Gm)SHy-``jlI8!QBwxm+uxtfoxD(?YvGxM$#+Dp6i 
z`SRA{xW%%}u)9ZPkn|ND^<)qOK`q{kX*9DIbMnZoD93a!6u&4I=!O7Eg!g~LyiGN3 z0Nu|&pV%9}or~f4C18qb2^|b&)w!>*?<%_#izsL5P7^PEg!4CEWO!5#ztfO|on?-M z4ox|Ke`EzMj1q=l$`zRGR&7)(|I!@kM&DQak=o~)kRIiqPH2>k_=D?EO2XJ;Ff4lL zn4$J9)55~E<;vC2Dx>EeNX~8!b1XOJxepf(W61s~UJOyvB4*V?&@xBnsax;CJJuV1`u8k zNd;)p0%&ib#dYs+8`Fr}_()0xYU*4JBj!dnpt~bc@in3Ptfg%s4Bvd0aoXaR7IqD@ zeMN+k3$pvd2vS+5PPK;b-z|UEC%wJg_@o#2ZJWx!#ew2X5)Gol}PV z6ZoMwB9r{vgJ&9ph}_nmm>U7+mYz)r%?%1KBoOUJn~WPU2}MQpa^)s0I+iy3TpX~F zz86YH=U&h6ZbBn4O?9s1KC0@&)BWX|n9Akw?e#9aMOU`&`j5nyYWn&)ot+T(hm(_& znSsX)VN@U#nzuPLFsGW$57k5qV&bhlt|CziFBuC=giV!p*|G+6P_D4TDo3SS*@g`> z^gJL3Np0tO<|C?zqVi%JP7DUH&tbc4F;S|F{oNOvPS zC<4+-jpRr(z%b+hL;bFM#y#ik`TWk8zg&6kIM4muab54afBSa>LjMOeAklYszi*v_ zSzVeCygf&dUnOSU>)MV7o1c|->w1)WDTW~^tQ60$;vez_wRdAanZv;OL0LEO>Ajkd{tX{uK2;G^A2c(_m^j7F zp4?l$v{zGcfAJyvskP%E(tGjwnss9N9|~CX5EKwI`3+C^m8daFt;B6E`)+CV7THdH z6V@%V5t|vG8?KvLUueaRh8t!DU6EuTt$MJ-W2sY1!@wYm*jsE31GndCwUwQ+kKFje zYDjc+CZRpqx+_Clr_?D`GvoHVU}f?8p%L^iAx7cU>nKW@dp`6u;i2sPzSzY?{-5h= zet$UwI$}I#qnuNWwslGTI1s_Iv1KPfLirNJ3P0sJJ^VX0BkqW z^q8Co!Luw~>``8?s+S7|ZdQK2rJmpnLNAyeQ}D|_lGe$(ZsF>=r?n~luIt|EJv7pp z-#$rb0Z8Y&q^!vDvtpy5kq(h)FPB7L_&N`?rDx?&dd)ttE1$b=`#tB%Gau!W7 zGyNXKLLBsRj0!Z+Gs}MSEQMA993u`+QDI|1@U4Gg#W0kMl0dFo;23g|KAf~Dxhp>Bg@kmuw^bj|RKa~((t{#^xh|DLs# zpu$Ka6ngi^j57zjAjN7Y}B>I9k6dl z+Fm*bdnucqqT-9ng=o@5<=V`Y>s+y*FDN*jH&@54mnJ(semvME)EOUn-Z=M-5B&Yq zSfstC0Mj&1N5WhM(krlrfR?;N`LRQfj5 zWXj2l!zUF*`%Z~5(`vFOFaIxn6uUCm=-bF*kL&Bv-;-!+jZev~=qcko-d=XH)4G1K z4+x~A{K}QB0W}_lW`>s*=0~Wi_3&21Gq?`}8>kiedMiUJ9ud^N)&0t=#OJ?1OSe@C zmK4mt(Kv>bvDq@$&-$!yqsl^PVnVj@388i}b08%Dtqgo?Bkkl=@64Y9X?wJ9xQXao z{^ln~_DGqFn`79Le)eX)x1fQF;YPg|2YSx+eS4~^el7A|bJ4%|kv=U=d~VlSgKRDH zXy>pN$%PxCAt4s)D|5#M&0dxpWdHH*wUxE?(&idzDpbqh+YA26l#R7z*U}dBCpBfU zYg1jB?Iq(ek!e`fh*K;f_wyR7@Zw^a44Zjj?AdL8? zQ%BqA7td8wLYrdZj$IkbZ50oM{R=c>2)$(bHMOWDvkuqu>Q6Hf z=9~AGLZN_1?9t5C8@_KURIwO$rR@9=4SER#hDjyzzRB(?-fEi?E;C+U4FQ)7jfg2R zZ-xv6L2u~Ypn-zLcq=e12QdLGb{A}kavI+|z-DtNTvS>%Q&+~ppUld=?qOzDKQ^>2 zcq2#-@HDTBgkm3i63${b^y>10%7Xmi_60)Tj|KV>nw&SlR zWu+;B25y7&n57Q2FeNbnnBXVp#=qq(3FcX)3Z=vQ>&Nq{owa}k^rSuI+!=6)!!T>G z+D)$3JbAa9w-8(lH)-M<(;`=*LND@u{B zc{Y4nTR4CLnFjCfyM;~@n{2g$QiJQdxK}K?nhk~Hl;!HJvcw-`W)`vd~H|N8h35ns80lw?TPaO|D`!lO#?6|yw`|cGq zHa;uWCzG54R6M6LiwkVJFJjiF^L0Lt(CmH!s_Op-5CgpLXA$f4-;`k3#N5)`Z)VU4 zBgGexm&3Y~r4?qH!`n=+MRxo+)|s9`BBg72D@NwE_p|t~uCwpy_g2a7T6@DGZ3uiO z$Qcv)4bI5!6kS0Farp0BR6SsCbrLLog|&BPH7DX7Oxyv#7WQ9$x^VzW%7c-SA~sm3 zY%BVm7VzHVedoG~W0_)80Kg)2+D|Huj^U+1XJ@BQzFG6`M(ox~z*dSZtNT?u*E)U$ zPcEo8d|?cn`kirvaO}Ei^7#4!pq=MbVlrTIExN~Mt&ea>ANvZ5!2K-Z34*PXYWx+8 zL1DbQB0tt>hX(`G-v}C2p3^U}W8$kWHPnrrB@5z}SQVH*5H56pV?}wrJ!-hNeta36 zERErf!Y(%14)A1gj&n~^V3pR(V>_2F#-{=V6)mR`?WI_&nKl0T8b>FW4Iq)! 
zb-+DYYURg{iund^7JrTdyY(IF*?XXpO}YVU(+w#iI&oi#4p>Y^RDA2`NNDw*apdCV z?X2gBwb7=9kNAa%dEKV;;1b4T5R}qqM=+NTu3S}ejHRHj!EoPqz~#1~?;?>k)}BTP z)(CHdCXE1Yxsn4L{0=<7?)1>3ixKn*QRe^BPFQuO=KhgESxQ z=Y{+GAozll4!uPtO>}&I3Gi8D^IjQz8qFFhaOdIXxbJ3N2)udb{ZC`O=H;8C#PZvp zRqxxkuXN}+_u{9${BMfJi)?$HIyJQ378w}AEF+-tECQibR8$Mrjket|1yON4ht#B6 z=oz>aRhq{?xv@4_)VWLRsTubb%T zHWm_T?|m=q?c)pDRR>P*O|Qi8733ihfkIi?E)>-pGUan>YCb!tD&PdzawJbjK6)yX zq~|Etvl#`ZhMv`k5R2k-vNaxrd{I}*8X_y`7_NDxZ+rP$nJcM_#kKGLr(x->)z;9r zZ!Ia6#&!Pikot)dr1JdsVgAaMc(kiVJGO{z3~&d^P2@H5sVOQdhL1PDxgdiMOl8cH zI`8eA%f_`YzfF95Z(7{j3HvpRgv$i67~h4?NZ8bG<=)Ovad(;i{tj(b*sTrHniD$s zp^%Sj>*4CZ_sEF^^fTct@Vli%k^0{Jhv=6AZ6mMdEoT0_VJq?7ZMrbSA{g`s0RX9N)o}6xNz}TjNhasqcT=9w(I`ZnMN;tQS z#a^F#_W-4R{`;$R;`?*%N#@!trfya==&Jvx+5;&PpB5<`bQ3ra-^6e5!FB1tam`T6`XHn4*OOZ-vK#&Q4gKtF z>g%Fo5*qN}0?0hPIu~^i0~e<6(ZI@n%n1Bn(V+r(0DJGw$Uc>o(JI ztJRm^6dWZe6*1O*mr81137$rV-FVOA&L+b?*S0%fJoCuD!G72NNJ+swDh@B8)lx2$ zesCZEpgw>Pho=&~qw(36CEk%gmyQTpI+xYdg1a(xT|W(P%`$GiZrk8cibyDN9JlJ- zZTFiuG%tmo%XUh!QWdS91w}=ZPd>X=R*_Nl@;oO`3IDg>iikQTw9y%E7;js!<(<% zf_=^?XRfI!C`yFW>^95jL3qXOg20a=xkO7YA#bib$NfJlXH(~R|7nnPHkc5KXZ1KA z?RtbxRJgnmUU!uue?`aW+_QrmNs`r3VvePfmF^2;K8O8C70Uy|J}#G-OG%Y$PNSh( zWcOz5=)Fr$bu6yuIXGnTTbs+C&cq#zl7Bl$8CX=LdG+r8^AnJ^ws;dR{H+-v3JDY$txEp0efVEL;IBOyqUkFXG-UouyF!g*!(PQPO- z22+_-UanW>LTr0^Oi(q5hSF3WlJ%Fv>2qLK;-D2b&zHw<7C9|62ld)IABS%mhnLz@ zoFW9xxW>&$myR^f15@{uQ98nuF5cI0m;HKWyvCz@;##6n2RY7b>RMBk(@}oK0*P_H=+p=5%Pg$H-|SOcu<{8{v4)C>(_ zy9BxL)Z`?koUEpTaqeps?Yk$_(11NzHkDOYVA^=3VyR0PQ#q#MO9{>@olHBFgysNX zl24l>5l4~+yG>g~eQ;tOYjDe@YU=vC94maSN2Bt+rs)GXhr!|5Z=mM>0U7&`hZBhR zn#9bSUUP2sTC(aGd@R#vki1&Ci7c&My4Vj`LTK?R*_iI`JkOHm z?u*x~Gq~xJN97)mvnyf+H5Ib_v0Lc7RLOm`udNcm!TC~DRK(Ysnvu~lRP#bDg6+uE ziqoWu<0WQq1J4c0Qr+fihnh+!bh=&Rg&qwAr^&&mC%eks7fLtEyr-d*fm#uLAl=Sb zH$6CS2uYDaTaw@P-QGEEZKx{yQ*}&!2G2McZ2R$X0F*$3P?2A(ocrguDQAWBho$%C zLiT5uol_3+>=7y?b{k*UQ-dq_5_xsbf|nmSQRL&As&oAe|HQ<3p9H1A%N5YuD#XSn z7nJBT5-0YJzY!9Lj&|A(J!r_@$)v{l~8zmCNqCcyEmgRL1gb zC`)KK+YR{C3-`?&REmi_uSd#s>}VZV1vY0dV4;nYU!W=hfBoT*u&*aP2mJKxcyO?;TI7F|z|B;HNcD3N$Q5PF z)4(+=yzf0V?9J+I2Km|QDlfp{oF{ZJ$_VzN&IBdY*`d#tSZ5-o7JQmwHENI>`-ClY^)J~Gz zs%7AwIMy1APj;qEQl_$T*}far><6CNwe|}#Ra#IRQ@WM!{~Ls!P<0dn*IwtprM*d{ zT=2hL{q(wjU~Wwxh#CrquQ??KGx!e^nF9>~imHk1J)B8ZO*<t7>qtq0|Q| z`ZGq(q_YX{Yrja{j!(&g9t__1;cM1S{B!{=TM3I2ceXLE{{m0!X9r~WlalFa8@OKZ z*pvI89LdDqUV59KWdKz+66JTL+8Va!0(~yeY^k;OfXzXL)l}yQIFjlu!|(Pp zHs%C4G=fodT~I=nEF^_jdi!LfjclG6vB$ERzX|fUGu1sjw{XX#`Aamv>2PMhZsSK& zV{tq7){g3^emh(Ur&@t4X@IY_3^p@09l8Nn^1B1XnD=Mu6sD<6`OJRY6_fxKnZ^7_ zy|>PLEZ>J84_OcU8Rxdd7EzCTIsqdQyO7HH!n0|VVB`hb(5%_l!KtbZ){EZ0fn;as_>#be3*KMB{6m>(QzLOBF%h7Pjcg3TtM4U+ zKo1OtnKk_8$dcNYM#EdP0jc3}`R?`vij-HffO|cFVymh%Hx@1;JiY2y(6=*fJHl_! 
zXS?kK99b$-s|l6sqx_MTTe2s;6EgL^K6-tmKZ;;4Z=JHzQXQB}P?b3w4lnhlCdc%= zU$`hJs1}9hB_bqxdJ$)6SD~TcBpv4qg-@EwpI06xwemO(DUs1@Nk(UPL#QZ(A zfUFb$>BeKp=H9<^F~!4>jt{bHA_UDq*{`K~R~h^$!OIE3{z@xK+)O8bkmKMmfC%@%ItgwUL))$iOS2S(kswOL}TXv)&p4i7Fn zOp?Z$(I%=H1_w!7c%=zdRr=BhER<{Sl8n`S@I8k$bu-p>5fX}Jh4>5D#70GLC#EV1 z_ZtKoE(dzqW?H!Zc* z`dOIEH7U&KvK$`B>_ia99pEJ<9dWF1l;svYzGc#o9SxCC##GhRWBl`dslq86DB)qNlFAd8)Pa>8zcTz>O>!V92WeJSm+BTf+u z*r#riS!*+8q(AFgB}kH)dL?Zk6&XWL;==CpCdBT1U|`QFM_s!741E?IT}Zju#Ji^n zFrcX=-IGH7x-`T+t>hq@E`Bk;id-wq6;H_nqgI;C9ue@C*8ZgXKT@UIYQ=2b zCX1O^_@wAWo@<%vbSB=Hu_vZ=YCG0CUkcMKzYMhb^|Ngon_^<~tKr-m76cr|-GK(5 zn1efn{HXc44I>iPfz>n#T0ajp&I2iGl=WQD?=;PEE$~QHE4{t?M^Eu;aGQPgz<~bN zH(k~C)0QF6WQ@-723=Qr9kKtMFI)%Io&I+L>F!Bos>BiQL`Dg@EAVLdr5rI)xaXTZOC?`B^O+-&fDBtR&G77k+6UH z9mAT)uaht)IxumsX>9Ji_uSo8g`mfVtx;mI?|3)By!Fds*XcQ&nxz8%bcSTm6uO=X z9XsL|xP8vWa=F%I&~**a$?A>UO>$YeBU4E=!To&ATY2hOEiN6lQD04mwI*EvlL5eI zM}Yc$z}A*hpfkd*Hrs~;2EIcef}!UOH*5-<-h|2!SpZ)W=&Ge5YP$|C?zlj11sdRt5`T7vOlx9n6J%c7PJ z6BnpYI{5e~YHhoop7if=97Z;Xjz1CtpI>FVxY@Lp{+)Rd8tNv^}L)bOK1 zgtS!pNJHp2ksK8i6tqo_R$gDvuu!KAEGJ0JN#s%Ob?Jp^D3t{VOprUVZl;1 zXZ5>vWszp|fu48tG%32n**Ejd!2!R7;#8B3JX&&bef!J8n!bo>Qh1NYa(ePBoOrJp z%Lb?6Jg@FKSxrr@yRfV;tHujMz84?B{%Ok3l0?$kSUA4*Ws!Htj>JedOrE!}4%^Nz z4^4lXw7l(@w4R)*L&QDB+X}5pRR0!8KEZUsbQ&xMKCll=0&Pi}6Ar+T-yl>+dUGz| zxC8)}&j7=gG@!Msp|xk_+66bRf-TrUk6d41uB7|7mwDn>FuYiIN_ ztd%KKs2FJ95)PFhNIE@u5JvT~xcKO@QT`b)nbzMNrq99=eWt?AibdC25lb>Qvw((N+$rGb zD8&Fbe+gdO{d-{b4^cufg+NGKa=a=MZ~oH898@IDtzq-prIL@mXO~oU5iPw0-0)~) z5`c0)7QvK2CoiwSwJGG3goLYzv5FA3E|pwMNUY3C= z>$H`XmAaM|ZdB~3zSB3l#Gg-9J6PR@t2~EUj)K;9d`?b!BXtoJEMWbxgr^etlX_)T zSGsi%BlAs^>4X0c`ww1Sl3B{dCZ}9Nsl1{@#TQu{5@NowYPsn@HUXZ|n(UM;xhjU* zK*+1|XDIFryzc*GBR0fmx%~98BLR;GR?`kyhLpeULaiedd?(;l{aOFz%hxCSP84{p zI&^k*kylFDWc{fqH-eV>0ZWRGSuosMGNjasR(>0FF6+vlR|Yb{wbI9^MNd=mzVR+Tf6dGrP4LhHB(DU zO6qW+p@2L_yymrQj}gJ!NZcIR{Q$eUl;>_ZFc2~Y;jF0@3&UwoYhCakfM%d0X;Me{ z8oy+VRYKKBfW+a} z*#m^mLPp@~)6&xse`?+I<^rhJeFq5YKSH^+CZBP{d`*xKZhDS6UqSA%=8zQeShUz^ zm)>ZvdQ6+F)wnW}!cjvW+g>v`Q(m3q$E@cFy*#-G?c)nVe->|RDWh{>O;!Gui{BSKEw z4?^%}ux$O<5AR{W3k-xGWe8W?nJQxRPi&2?mFlIUqdSOby&QO<1k}vZAJzLhb>N=1 z_S-N%{j|<>ttlM5&HHtIcExs;t7d36LmO{)oz*EKGIFW*gniWP3U+e}3#^tg9j%S` zy94s@nNmec>sCllPlj@PMIqd=vZ>iHe2xts*oiMjU~vR&~3| zGDn(aU%q+$IDa6-NmAO_9cF(RK_L{Pcy$=`mEo`}p{g*`8vYO7FB*-8`y-3Sl1t7} zMtE-=@#$B)2uKB!iAwO9CC%iOKmQR_LPLxNQ2G27UTSK%p^r8k30Z%F}c}*!zCT|vo&~H4m&U{V@md%n$IqlWL>UHvOSa#=VNUS37;0A z?0su4$CnL0B=i%;B*u%O)7Qx}c{boSyblpXJ%!d#rISWi-g76PIgmH`u&ZJ?UJ>5Sz5hcvY>4t8mQAoc@Kidbwq zrjUmGA*C@!ys+-o!5G9RcHg_aG{f+?E#I>c_pbJ@G;9C!zQ}X~0kaja2FzVzRS1gV z4dcs=T0`NUT|8=g&%qN+=*9Cw`}H$uv()0LaxN_w|10-OW zH6OR_&P&AMu#0VqI^}M;&{|p^K6*6y)Zh;Xy!SHakr&P$(*vF)7Kh;1k3Pl$zi39q zdO;ag^uIyZoGseH`|3n8&d^uiaAQ(yNqTERx}wqt1Y6>u)>xt*($Fl2JpA*K4CtCz z-F`e!4$$|Ss}Y~d$lrPCcj|;=Y8$#wDC$j!v{TqzFmfGy+_tf-C?-Y~x>wOz*}kw| zn8|5KgJ}v;tokvr$85XSZ(gR-%sYkNvG)x|4Mn6$9rkP7e#M*EaV(nZvOesGSzY}* zz-kUuTwX~TKs`ZQgw)nHH)(5TXizR2_m}z;;mHp9q%l||F4}Z1uaIcLAAmgA5IRcQ zkzMxr(pWY|TpFf}4x7^LQ(#3PQOAI39=3#uQ@p%NSb$tKbac|HpE52}C}CkoHio4) z-w*h1dG|Xs^5*TUW=xtX8E2(`^l^W70I-Cpci1WQ8*Kne|7{tTU%q%T!E+{%uOcKe z5Po}|j%HR+2sq*+O4^^!8+K`Mm8L#udl=ydYN2Mk?|35XwL#4M=lJL1o}&AR&C!L= zYlr49v4BEDVSSJEyc|+!IQCsyRZxIYG;JI?E>T3o7?vTY?q~lz(L!NGiBnGIk05Zb zyPpLwNJg~MEB+ZNaz46_(xs?vkK-t3Q(!PBw#!oTRHwRJzO?a7DR?Eb=_KutS_@%! 
zt=-hIL1CA1j{s`YfI9P5Yy;$rgF=Ptw_hg$Wm#|O=tRg9GH@U|hy{C3H0xStwol>( z<64G$XQ*Q$U;-{Zeec2!x1ndhKdSwwnwayD&2uYb=bNUGfT-~l{v&2quPpE@f~=d(tn%(!CH z-PVKo$MpHaL9cKoaKN{Uf|gWInQJ!MYVWAMtk@v;YUt|r4lXHz$=q<|(@`Ed8g}<2 z>d8(*gmn=8kB=>X!@t4oU`L~4;0`2zKpAj}4*(qN{m93vY|Iq#ommyMQ1rsvni(~$ z)jh4Tf`@kPs6g=1f5nJ|`yVaQQ!vcAOe1>AIbwNEkR0J1@4Ug&0pXp{w42C2I;5E3 zga!3VDrg&AHiNbteNvF!mL zG9lY*d9xDZUYyoJ?zA(AZ9o3*w%2D1fo7g>ercK@~MHBvB&_^MKDzxOU!7lTsNHGD*1?g)XV z6dj5!A6rz&*8;|oK0UF(W`PWg8=&s<2=39+*UyBa+_gJL{tD1Thq zr`*7kD6D_K=CH?fZk(R=x{@g0+}gx%wZ~t3 z@vqhR^71LMrDc28Dv|T1=8D+O!hdPDInDCB3mi242iz^okkv0MRrQuzfL?56s<$7{ zubNX!^)*2W5Nr+J9KOg4I+k{_*|>$KH^6}(GpmEOewl-IxdAw)xmCqxA!g%XDuWsw z>z<(tmaVn0!l}@nw{CKxW{)mk^QGo;8$wCjqFUkx>1<;q1a7UjD%i!kd!kXG`??zq z1Nq~wlNOz68sc6nD~t`(E@iBq(-)!vFlWh{bCdK~{3fe$^Jd)1^&h(~GQ8LF?CR*4 z$jEO|g|3E*`K|AO%N$_u$s){BOvCk)?7fCh963v!o^&z~@5n)~-UhxwVNu=aBu{QF z1%Qr}qOGX$Ppp=Btl_h;Bk9JA&oCJMEcxJ_UWU~`FWE{let z^zge+^fIGir{$`#&}r`GNl132o4C%yOf@>D6gyEtPUqv!JtpbKae}!i1;d@dUN?*< zbcs85W!|<&3yJT8*8HeEUv|k9_3M45M~`3VJf#`!+@-ze@gFSykf?+I_V}p5_t7QM z0k`*i_Uv&2sR!5bS7a-a%+vjr&~|h2^W(nf8hbitYGhTAbvipbn3sEk*cBJk4L2I% zuTlrr=xcgrXjZwZ2gMR9{#HVW&$=<;$%(VFq~vs1N4H1iEK~PyOqjug*RPx$NQm-X zuiMtmeS{%xwOO>YJXH_ShvzKfPSf9w+?Pd3_Y%$cBX}a4&8V6Pr%!*;EP4$&cRzj< z9$mf~udOhE0H&^j1LE>d;Ya3dnblM%Ni-z}3n(=m1yt4-hZVXOSrVHKpKgJr{Q~Q=k{wpn8oe>B9@3FX-ZEtOUe!Qd%)r&T83O z{5)~%g^;Pd6>SSURkPD|YQG-=9?u!7vdNnqqE{|OD`V!rusM|IVGQg?8$x?N+<&i` zZIB6y@MSJVOUqQGvz-jnwIg&o??{c0{{sj6!~N`4FDYbiIC$Xels`of`ESaDRq8{D$&*#>KixLu-EtXO(eQsj5qHS?>b5s|ZO({*|0i;N+1N zLwSO2U(m&SYd!VdbS$h~i#hb-EsI3rw6?a@+5{Wb^ZzKs+>9K@OD9p6z=G7=CiP<% z?@w}5vc%5X`tpHSW3@EI;w0E(V}2s>)LND8uRBbX`EYDTT3VWNfFa1VxVX3`mr6OJ zoq_;JN&(oy5|kwsrR1VsyY_MEEq7jn@(;MRBpqw(^h&5%(@IJJwN_46!-CYW(CFj# zsfW~4!S5qT*a-125|#&$PV{;}7^~OK(jT;pqQ%~uk=y$^7x9p~*5KL!*b}m#gRF$5 z#U1=T5od_UM~N#p4Rv)Qbnu%m2yfLsj=6B#e)|Ii#>m^;!W+~)m?Mtd)yh@6Z{8>2 zr1Hd{BmggHZU#wr+z=8T^}apB|IYwO!RHb=yETn+ow`g@zn}6Z*MTx#p9Tg}p24(# z*yoJX_W+n*91|83lXVhG3ec8N776H$ORaUcPCjCknwWE*bJN4=Ildb3nBwi;*}cma zsoW&i^#QTKPeeJkWwz5SNrV&L(i8dr46#n=bs~o1d`5TD(xD>c;T)R-)wN& z@2r0lUQMp%=6nMLU4LVbu0B=JsR0awVpSD9tGJ* zyt^-oIXY0|>ineg#BAor##5ZFEsR8*AW0zUFeGM!poR$vYMz{?=MGN9Luy1d#13~* zV~0=I#Q*)+hSbMi!Wm|djg1{-#R7$=)Xl-quY#UfGfOX(il3M2+E+M#J><7V+S;U2 zS^=I!)V)fS(`L@~57Zz*y|mO^nBB@!+B(I`5u!A)BIgD5BsyuS1qO`6c&$&i+L2#l zW+5!gUVGd3z|ce|tRZAWK4^j-J$ZjYCgr8;Z{2|IVP#8Q~%5;Mv61^bXCB{y$9RC{B z9?qwwWa$lgZq1CZAOz@mZ#@7u)r!@zNCd`OicyBZYSn@C@*trx&ejJ#W9hKJ1kVkc zFm-3>zrzRRTfL@qQ52U!Hgz9UwaQW1mL=cVRob^2TW-V@(c^SmX~wD3-BYXSJ7kDf z6!J!hi&s=%0Ixhg1!K107E*~~u;D@!`&)rFE^B^7r0`bxd?Fq)I8ns7g>%JA3%y}% znR*LPfTbs7y5G=M3>qn@7ZlVTcnA(f#}Ve>b8!02uujB2@m5{+t{<|mHh|?e@}H$0 zEklgXhzlL+_W*`+bzg_2dcnJuQ!Qzt=BQx^Fu6e81!QR#+eIhR|9G^Eak3ATRD2{L zO_0BA{v%CJBj>TKPZ)ZUXxoeYJ9EP*QA0Dk2+>zneoWJ9ep_Wfcub3g(;>qbS}0I` zH@p%d6mvvpOT#5nQ%kVOy-gPj{vjp%V-kBOzuvP=zV6(560hF*vbyp=5FO^x@)t)( zQSz;yk}Bzrw!tWfEkCBn@un!*I7Bz=nG{?CHM4g7q{=G5Rpto8tGKG?;rJTE(2n8K zCIcuzrED{(InZ>M8>sT4(o%7tX0cQAQLw2envmk-&!GQqUmcIl28jc4wN+{@!w1OB zq_c{y{>)Se6-|`3!6>LD8@c){orKkQmN;tj1As8Nc7{ zA#4#YE;R6gu*H7R#vcgEiR_ZVxIpko$8ZO^ZM2 z*c$8x?~}BoC+C@Ei)4S^!XPLCNvJaV-*l|^R2>^m5E}+Q-xDEPriB9% z$-~~T;)qfsNJERAQca2#pIt4U`d)4TC=29j>kfq{=iC%2bEii~=Av?Wu2LY_z(tXH zlHh?ZYqQY%hIuvL(jdYc>#2}rW2t;BGgrmH>HEmrKz2CXVKz970pE1{t>=TqhJamW z*%>g8>>)aE{u|z!`tVF2VJEEa15;0_q!f70y4BWCzXjnKhzT>{#d3XXR@&)Fo|d-^ z!#abQfdI}IY4QTd$`;Wk3F?j==rUFKVYvAJ)c7`{_pYza#IRTja)?BYcglOKAcVzT zMM^tx8BcNx?B@eb7z!~xpXv5sx;y|Hww#y0i*}H28B?WBYYzqhWrGX2TBx49VBke< zJtP0tt_zI1em@?}KiVAluvaaAb7K8{BEGQg&(-=?bX_P+o3pUB<+!Fm?qa}kVXMmy 
zO|jC3NUk3T`+r*F0#vlS&IX+A_gZMCjPm=sPmh1T>Q0wUMHjy1Q{ z998V>1fpeNq`u8KK;As+4z(Y}*Q_DdDjGza|bdL;S7PFo$vm)(06C_I}!RzSi9JZJvvUGqkTR(()1J% zOXx0+kydf64G5}uvHjkd`*;J!`MDSJW?8CIMHO*R{~0}d(a5uIfAM+Elc9pW#yt@? z)>!`hR0g=PYRyj0--ENfn){X@#azL+3oCq%UrdLBF>HvoROX|8MReatm4G?Z_3%1f z9i5%$pA}eja+#W#OneDE=<6397#KLujI1Sq!`QQ|i9s8Twq`I$t5qUG9a@S60Em%x zaTsnZaN=i^DII48cPaZh8~A0^*P9XBr(eHQ`$M9@a;SXRp!HF19edc%TR6wD#%Hc-as!{s^}(J#9?sMGOc#!J(KJiRA5jhIw_MqOJ>&DBkj z<;2>XdhPq_Ra=}=?Oa>4a}mW0C^EX%Nb(Jpy7L5yi#=t1Oo|6R%Dr4!gemkVE=zt?8qK$fM8C}(>BD-j(rA}4j_s}&K zdzApgjiOXu@oxa&HayM5X|_8lwuZr=!iL1%RBwD0ZF%`` zpS@@FpEVSR6u`1n!JutE`-}2OpYA1hmvEs4hExlsyv3;##s)u2y;JCIiGu5iZ*SG$ zJvX{`IeL0}{Ud&RRaXwpbHsY78m{%={FSbOLjqZVodEz!wrM>m2ImTSy8C@!Uth>( zG;F+actQ-ypVs3@4 zNaPH5%Lma>Nui8QwO3MBW}5zIPDL2yd|x~+9TK>I z87=5_Ce`fq^djDTyWK`nMa!v9xde_QI7}3pNz9tnjuN33tWjz{0mk9bzWuc8puA8U zk#x0p+<;Uq6e3BjSZZO`mNnde7H05m%UQ~-STVLjd)`a|0z{)&7rJQQDw?|3P;7!E zYe4+BiQTQ7+}tN}w$FZiv`fWp8o3t!^)31zFsf)T-C44#uz~V0b z-pl<#35U5gEqV$p!$#0Np);ohSDexRlI)e(4lkTZ>57i znwFN%EHQS!=dfi!{cYR|M`e&8!es&l?dJYp4>g9q0a? zufFeZMMd)Xb>rQ(1BbidYCnul?_V_`7S_QwMI)XvdrQn)gCO$&>;sF5iX&gX>CGdiU?D{UXlTj`N?4DXDwKglj^@FU? zZ;NDAuD-v8fqWJ?IwSofeWo2soCY6D?xW%sej)z_Pui7Sqb&P|Gh-snKt>}FLd=Fs zm{2Z?_{2;1Mm-il570f^}Gm)w>(ZQ zV&fKz7BLyr6Ya7>#d;pVz7Xb$v?x~|6xc-)Pt;6ga0ow6(p}q1oG0 z+EKxEyEl#Z{7m5iy{dgzcgw%_-YajG7xcEt!N>RI-9XdrV!^?q8>Lx#^Cm5;J~Oix zyMs%{^NpW!=Jzyu5)vmJiNbk_IgdfM6~`^WAoE;ahE;xJANLZG8#rrT3>ng9ORE4v7gyVBl~sn2*Uel@Un>_n$bX+k&@{Cf94ONBLt}eP$LsKLDTBxv2d8y`RSW!Z=j=DxV z$)7oAio;fN}5xbCR%blTIP7QXZbbdR) zyE$}v<1B6nH;z`F%V zpMgrw3Nu@^gGD?!A0QSn1x3YtsTke)wbiliB~;I4zM1i!qp(#Vg^ zKp8_S(i;VOd|FS(kq!|$sIN_}Mi9JT->dQf#(=>|8X4L!!QTW?ss_% z6|O@7$@{F{ag7DoTuJ95L&Q%h+QrJ+#cFZx${j7%eO8m8q&S?2pI=qo*qMHi*z)!W zeqQ2kp1N0QNSHOvR!V2I9nJsYdzXYnBp_D5LCuQ?VTz0Io*ytf@960cdjy!tj z^&OrScte)_0g1(%mIIQ6oNB57F2~R+0)oDrGmm&{Q!*|0XtT%%spHy`LUm6#Ph1fz z!-Orr=kAy#tv>JQIE&fUiV&cu5Pbcy7WtH;Jj}c2nyAIrPbIe!WW%f3# zPWY^B(V?=Jvz7Q?>tbd?(m%f!iq*;*(i6M58k5_rvkJgoorq)zGUHA~d-*G5i2U|~3<|P=ThO5Fy8OpGIrWp7dnbr#$6>Wtjdop6XCrpTr$u0n__4e_ zHtATsug76yNj|!jYzS4_$Y1nqdor`MrnsTM2cAr#$>Mj$QAIX>8UXC;b{;^M!Be1Q!ZB{U`O-`0ta`qb!dKi!s z_$kGjHYRYYCCodd2cOlvO@*TFJfxn)1p5AtS1lM$Z5_Ukf8=JaR9S|fEOGMS*7#tIUli5YM*G2Bg*=_DnB_j7*jN}2WYR^HvV)_ek{l1{dCdFs)~6O$ zg&aoY{$3{DHFw)czWn`O`5i7lPDqP4C&;(41waQ%ZH2#c19De;vJveaows^;JMZ9f zWh9TQ>BZ&Ncy1Xqn&qe7>*C48jQ6c92c0us0~h7xy3c?7{y1&&a0bKq+k;vKr^Q-V z3vI1b3rd@w zQqm*}yg(r$=)o4onyD2?(GIWEnNWyBMu;%W>UKs%f2e^BNx-A%0et{#dv*? 
zJgOI0RqkFUJ#p{+#v9`TLv67Y&0p0LLYz3u+1v%^pJVwp$Py37!>QdX9la5qp`yGzQfbGbb=ln9 zOx3*tQv&8URZy#cOHzvL=~(3v9+eLzz2&qJc@XKHe4}ZSXkXjLUS^$ooF!mup2tJdU>D z$Dz>rd2m!XV*xg%=c8_FiCMK4{U+3Pc>D7JTR=kmBHm-VM#x-0vVLbs|L z3yEEZB-~q@Ent?u=SS=MaBttE39GO7tC+6O!Cy6ZntW#HzCmF~~ydW5)!PDKw?So@exVU|LK ze9EbN!~QRq?Mt7}3l=05C8~`d88!DR87#gmD{ObO@jc^I9H(nzV5li21IVAy->uZ( z{QnXYQ`u-NHXf7SgO%)<;jN;5Uk#o^8WS%(=(*STX#R32v)HVWRQcKEYDZD3eAbn2 zP1y(C3^Hlj?3qy~`8PGQ|MDFO<&74SaVTwy)bJ`DkqeEtU2~0X=G!Mlm};|k@uit| z>HJt<{hc)Hm&JBGbmoH$z_r8JnaLIj6_42~^~2PV@CkO24&v9N!p-9DZYR}RS#(uj zFK)kA)SDXoWYpS5{=s#|MjSlmr{3c9v6-^M4g3PTe4%fpinl$bgphB$|7m$UU(c{H zxuavFXL}P`j9+^}3@oQ)?bT@i7~5#Gs&rOeLuK2CWB~@tsl04Otg}X$;X$jAF211W zSknPL*CL_CL#gwjpK(2E3Gs8PgsYY-+N#$p-NmSH;BxlA?jE{~%dv|VvW+Nk%Z2IQ z*c38T#vg+-%SX3}Onf~4=uXA*wdjU4D|M)Pmz*U+0w=%b8OCHCdfHzgX8e3%>2ulC z4GEPXW|v6+<8RV0)X*PS3(Br$YzSOvY`PA|NFA#d5tP+k^UibMw-(zQHfBm34j+w- zU=JkQZ8Jj~-9&S|lC71OaS=#*~w z?sq^={=Qjft;5>?_{Q@-&wa&x-5z94^<<*NlxK$W7KWe!Fu^9)8gulCUIfQ<`&!~q zz@M@znZp)dY%*T9OAInOXl zvB5j3yr*+~pbJ2$!JI+bE)%6wE zXwg+tfwJshv3oMF9sfSYM4{5#nlG!{>Xf1SJ9=i2@sJIX3_P}mz!-<=nLDP{F&gD3 zr#_rZ&y*7~y8WGBM}ZS|8g?})D*i9;poHDD)J&{Tfm`AzysfD%PQ-B=GskLm3Nx5K z$u}$C23?YiU#aHsSr7>%w^BqmtrX%TmVY!1Ig?|G@=9@HRFMgV1?p3hTe3v#!z;Ew zRGp6N)sJSYQvFGFnncWKhbg$pm&``>$Z@AmA6|p1(pJBm#A>+`C5l_hVrbt`C$G=q zQl6uPb$OY`CKw51L4HO-!RWR`^c2kQqRF`;1c62gFa#3GSF&>ndGuf|D@9IOA>+J4 zhrXm_y;F2fa_&Pxz5zYM%$(~-p(~0DYw^iN&6~NgD=gqJ70*p2hChkU=RA32=_`<0 z;(jUQG*JPQ_%hC#(R}(sPg`3b4*%|q$=-r=fxBRa@x*E*#qQNbTO|&cc+BZFt?XlU z4VEeATQMU>vZ{f$dRdVKW7^K#3-hKu399?V!As#)x_tw-;MSQU@tm*@gELqGb;#jo z;Oj$(su-+i^-0Hy1Q{>FclFM<^Ejyw1r_gNTLUpsetvDBmc|uJI8>ZA4XaI~=c0dh zb_X2xW?*SXS1kl#2P?v1PRF|7uFUd`IVl7x;By}onkvjxok|0zA-qlz05dlU#r6*< z7%4QByC%$Wj%_2CW`j#VUF-U0+!F~^!-~pO0?aU4I6A+8p$UuRNz({wd@r86IJdV+ zk2?{FP~=UF-%(I#6(&QAhpND<=vPBpLKmBgkWzQtt&kID!z;YYO;usmP-k)$$EJ)? 
zx-B)5fjQ>aP9n}$fA^wos~qXc8V*DKDCDq&?GLth7$UVYmEIgtkXo^ARd7SYq0)BC zS0J$;rqiUnK1TJ6eA5V{>e90V!0E40*)SFntJFB}Q77k3$wGN|wJNib6e%ri*KUl%g?_M$d=?^}7ocJNvPnr_BJTn@B`tEi+)1$MuG z(JCj6K60Ee6v5(bhKUTb`dC+7-CD>{-Q;L|(~UHaB9w7a-_k0zPist>MRBCh5#DRP z{c$D7sUHn!MhAWTPBL<lo}`FX zH%18e%RJ+2EiD zm1;m_Fh5BEO)?4@hFGDH(Vo`7+Q~SXDLU!eS}%T3NXs`_F)naglaX!As#s*vc}so` zTfWFu=~!^|F&8DW=x?x;>oHvwm`{KiL zhE~D7Rjd9Z*#5mp=4;ccZONSM67LfT-k>4~tnemWwh!XS zUDoekF&LEq`!sZF3J7j#-7#{rW&InF&wwV-BqCEsROFT4A^2tpLhIuPZltE?q@ylo zWGOY2JIj8hC3Uk&fN}tv(g^@q3VqlD*hqC03RH3nr9N1NFs94G*DWq` zYN9>-AxG-QSd?4?KVB)nLV$ri(s47jiMWet^D8@`%mB7){*mTQ1q%Ho{}Q0u(?NS=vZUYQEqL~my| zafR=?Y-$@zQWZL8vcX!jj;U6j^fNXd~8qfl%; zaTUG%KJORhMjQmj)(rX4=4Z;}JbGXiYRgJyu~}wlV~dMtE!ra4HmlF_WybS&MieEC zu?9v!AXjc_d%8`J^(PvGZUfoW0_V`ZOTKwx|7%8}sFc)U%ydfr^H_KT2mDfP;|iem z%iG&k-WyG+Gm16(p#c@)NU?FN`Zun^v2n|)rp;Iny>lbe5bb^YHb-0DX0btxf9+d* z1N=MVhHphBVl6dI)`{t1y{`+!x!q5r46+i9G7#b2Ok|oGerZNr>u>ajs+0ZEIloMW z9mz$}iaY=WSANEmdX5n;HD>I000b&V8)l?Hvr6wEu^7AiB}Nz;No5sAsGZ2z5^v+{ z&%5wXFi@x?Aa)(O)Q^#kjxwdEN$LfyGYxkgqMtT2pu_#yp2utqS_HC5Um7yNzJ&@c zoiydt?2Roq#jbjNFIq?4?2i~i2o;0g#}A1(chk$(e(iIj22*TPY|qY%CFqGKoK5)f z?)q`omwWp;KV{RNWf{U~ZvGDzrhyhX@@%9QRTSxFsMc!x^5GJ4dUpRfOw}r=YE{aa z&>gZm)K%D%*HO6P6(5N)FAYYPfPL-t+)_1kI9y}%xeNYb_opb&rT?la+ES#`s>d7# z-tuJCH)5scTi0kUod9q@axjI$y7EN;&SbA!G-fsCH^zCg3rP=>;a7u0k^1E$p>K3Q zmbN50_9*!zoa!NNG1nKph}bv^-`|B=+R{NLicP(9=K^<`j#uj3Mn;5&?R5k8+=^|J zu4AbHK}xP!_O3 zbwx&P57d^E9<0C_GA}fKx7?yl67dN|e4>>MfNg+roL6-lj*%|yydI9i_`A-VPqdT1 z)?4v$mOx$Tb%F!_@5r4AScLwM>1lR<`lhXXz<>FJHt8ge*EpF-uR-?npL8@M4sd~K z7h2%iQ6S8P{C6cA)iDgp8`M8CV&^)TcfWYDyE_B*!wx<9I9c`N0wWt(D<33O(I95YS zmxY?n?z3pGgu2fIv(E2uH#4QDp?RrOHMPRg=Obm{qWzPmuks&!vqJRhNPA>u1K8kv zipdkEybmDp2i8fctNst=QUd*^sr>U8Oo_l%;~yHP1RuMkH(LxfI0h?^y7YWHG+`+3 zI>w(zkH93MmM$L*bD>%3H=k$A7(xzc9|4sJJ|&9A5Lk{SdHP?QldzMBc#xY);aC={ z(Gr%kIN!v@J^topabg3X!Q-fDt_kOd7GJ)VS$H!*-TLFOH$_B}v31vt^1=+&Bop(| zV=IOvc-nlW%#^$#&$AA zH#py{0w8*-#+7rIShkbD_Vd6;RWx@e+a5;K{X);T&H=j~`TA?aF~Tm;Tm`-!!e6Lt z>xKr%QUtG&GLGgJRuYD8OmwOyw96x>i_4MhVwevd%n3YR#}Akws82k%+zU4-ccQX* zB83;;K!{1YV9X}Fx)kmc(nV2!j+p_l}dZ)B3H57@F#J+fN$hc18BNZOuEl@!7bmi2dbu$ zRCVg1l@!v)bhad17~LVE>rH<#WEVPi^#S~SG2fasC&3WNeQfV_>BA(~y?aAJ0VNBQ z1`L)BfjHrM;sVVPt!wjyb7V92I%!Klfh8Q|X~GJeDspW*#D-9?TP`zHCytV~?ac=3FKzfZQ!utc=1r7JB%OfY^&pI0qI$)VAj;;-W z%hQ`(lETWcY)O^vo>)C6Mu2c5cUccAlgdo!V({UM*W6*-naZ3kTZ}(W9drkx)8J+O zz5EA98*7`4Db-e#0d~=(JAB*G>O1KuZIg(dGE{+Iz(Y6siGP zu#NJ*(2p`}las^_kiQjuK*>@n!8d3x$a7@c$`-gcg!FjsA?|)%GDV}Q10fut^i>$;-}$(6O}2-^SN_x(5=~R@$Dzge}7N+556Y{ z6A0k%0U!BjhLqu>@_Qlc*;lql-=Pl9hHv7Fqi7ge#mR*L zDw%ijNzoymo&;=L5nI5VnlQD+n3moWmr9|a$W3R`34g%%#6VJ-7>|` zQlv@^#;lJLNJr=jY^gjHa=hY$@}n35?*15H&?h@b^x>_qO>|*z6aZ zJYk)=Jo_F_uqPqd9~s(P#-@Rqot(CsAjA9VHU>rBDgP6m9sm(V6QfoFjDmXsk&IWz z3c&h=-k`<9lnFhn>puNz=hW=I3%7rSQt{DW7q)F+m&{ZV)q^;c$Mq?tr+e zg1FoJ7YP@P%BGB>+S4OYc6qsM$Wq$2L%#KAZ(zcr1HwOL%S+0MdMR}!r{^l_q!}r+ zk>=9W^!A=NS(l=rklBF zFkJANgm0KRHMJ$KP9+#)lF!>J1yyf^+1BMWZ3`wYcOloJudO8~So*UzWl4BN5qgnL znm=Wvw)ELTD#zc-=D@#8e%y?c*M0%<$eD9LSZ$|O_e%{zrM%Y^?UW0BQ z@YpmHZ2GP1V|$lgeINiKIO)346zNIp25w@8wA5IX1FF^Mjx5j6x(|hh&6V#3T%qE& zp_++hF!0H$$QFydSds_%*V7;A+|4-WPoETYKwMsEJ_oAdGXMxD6xZ z!TC(I6T$`bMp`am<1|#&Zk`mK$%S3#YD-uoNoj{n2~I$8GbKX9Sv_a8oJ=XVVwril zc>ZAm$?fE`fv_(+G`t}v!k8*%f6VIiboq#lSa}b%+kMS;@ZCN$k67h@2V7?xmU3m| zJwKCzX*Au=t*_VYDOfwE&LK4QGJrcf*A9=0m1@<0vp=9_KIM7tJ$dLSR-#jPDqh-J z1YrGjB1pF>B(qcs86$rL#&q7C(q!VWu02M+sT`jziyo3dD&nvo4r%W!R2#P(&TXw3 zDl`>q7uCpmdA3`RHq_mjpg9o$I?o_W?-|8p#+WCsT1jFFf$;=Zc`ue4_K9el z&wI)$YCPq?;QvE61Q?B5ehwL)YE(*>%1I-2m%Mh|o+A-zd#@7~qPxaDGv(H=Rq~64 
z=kB^cEy?{~@~Fh9g5FZlEE%w;G&)%P-K`k!xc1jh)ZzpG?#H2jy8rLS$jp>v_gr4# z5$01VlsV%2cYu?16Gga=<68@{6r((IzA*pEVy`(n=pn!sz05x==4NbbYwNlGcj18l z+G^?(qa4%XI%E-F&@t3IdX$U+;!#d~ha!Xk7zPIiZ^Ookrq1fZ1tlhx#l^*%uU|Q! zHhOw8jq+D#UKB8vM<6{2ME{dranD)mXoJHZ6Swd%EFoGh_4|7mI_K*!5LyD4eri_n zqGy^|pj(x)$3H*hEsGO2vaGwlMO{}{wCMH=@nmB|(~-JL(BItLWXL0*k5p0Sw}aaf zqC%s=In@aEZ{yuX;xIG0i7hQDX6V%TXXwAR{KyZg_up--40YDZ=~j!DHshknq(!SE zw;}*;Y5w9C*6r{);WDI(vTjFvJ)pZN7C^f#1kz z_UM$B_M@pw)7s1hY5)ZSdY*?hGV9p|1abs?;dK0uYj#`+JuZB^!XC%*77mPzSZ5HY z+R!*5A0D(k_u*&ueYN2c%CnAQk`uc~sM&uv-g(%= zc6MUNVSKESXSHe0YYZ2nXJfOS{}Rl}XZ9I~fac#a4^{B;K#6%=7ADL{0x~l85^mH` z#=VAN+u9+rUL1};oA0}Fg)pIkVJ`I2fuczcvq(ojWioW$i1U~SI*zBY^a{BY9xe2soua?Y>y=FmQo8z%5X~bq!WWakk!ba@Zcz26VjsCNqtW0%N_u6NJ|6v{T7gza(mjz5keY|j#`vreYn z=%o&DT)MoCLpu8t_a}s~-oI3>lTi@i?9x24nffW@M zkL~HZP4mpFK2%^q06N!ihH`8zy^4TBnO8h!=18zAx;ovkd)C>;DTPacWirnUZMn}XWTXWAq z)>=zjkxD{RyUdHnd|J_E3r69Sbh{KDHdO0B`yXPIW8cGLKYvjEe#3ua*#-*Kfur}y z>ODw>y~%#M4o5}Y-z+(9o&3KyIBzB>+qayxywFM7bY!z3hxQugAkw0~;9Xr^9g0+u z9+()W@DLS6#>PF5^F?!k*(8XA(NI>>pI0q$>LsWwjDX`-P&oUn^n#vH z9ISE55$sYYvJCen=51#3Ndiz^M=0nIi)!-8$?e>C`i02c7wi`8hiYL?nL|Fg=2c;Y z*B5ScmAoUf%@W2XPD$DR9b=acXP>R!b=G%oht5Me+m0&WiBr?lfskzAwsUiL*Di0h zk+hubtLm!o5Jf(<d_m+tSsHpFiC`B^aLzt@unGM`i(>kc{mJqw zx%-#revxJwk!z;=5cHJLfe6h&$oJDOTwWJ%M9*U^H%`jDeqUg$RUul;{%!+;*na`F zT-IW_`QXNc4l_f1a$tgUzUQ{!680}}zRmnNcb(5>j6`&TjLbg%$VfMv;_Q##@&Ah| z@EVjZoIqZNl@)e_4325;TcaS!F%0LFy)y7A^AypG!T>b|nZzTg{q5d7+oWdGM#AOsVQD1!LJZy1;2 zIrrJB zHlqoa4VTC~UywKOzpA8Xvncjy)`ON-4yX-4>j2}kT3&S2iyxcj$m zJ$@{nhh{P_K+$t)=~#aLQ%MO4MlPQz(vjwS$XIX`Zh4t)5Qd6@JNpQ{9RLq%-wM9z1^Ed!na|c-Y$P-g%wo-dj;KORvaZ zEPx9eu!*l1nx1Tu_gyBajt@4H;(R>ZLP`H~ z)sl2k5Jo2~SrMU_%hC&1By&xRh4!13Z-Rg7-YJ<}m`cz6Y)^*-*FMvKcOo}0?XBH`_R`lolu>iC`JP0Aj5~ivu330xrQDqf+pjP7y^iDz2ZOF;@On#t{^7L?y2@5h zD&>$hLQU;5Tx1z6YjJ8Hg)xUI;xR`^?Z?H?>Y4Yzn64vH$fB-t^JL;X+2px<_=l*Pr6rs z4W_QHOxK+HNU|Bh0~@#Z@$Yttz37`Ah;1srGA>ax(PkrQgQQig`7m&JsW>KyvJ%q9 z?MzIYbaizhzQGiZ@7SV5F-mkgtMgx-5BV(StE}*N&yR;<)`nu%=li~*2J)#kL|Dn_ zq#&)x(8$j%T#h@QezMc(SpHqo|DnozA$3G<#0$Z+goa9^uXgOR?~m7mne?i^cO-@$ZAt z@2U3U-$MU&zdP0QnE3t;W5XqEvjxJS5()a~6IRuid1|jiZ+UT@QiUnacFKiYTI;qTTvg7%@QR45k5dn;Iw5nXJ%>*D2+W;Wti!?wN|Voj@A z9JUNNcMp_b`7PA_K(Ea8J0*NgA^36=IfBQ5iitMETisG?_1(jpZ)j7gZtdLi=BoJ2 zbK9C&!G8+ml-#}6k0w_YzWvUvd&cthk=N~vp6H#ge+zQ$zWiPFofGpWWf4=VNwrJS7&I*$QEDjqc_-=JAXV@ zN@@P$y8rJOvmE1s)X_@?g<<{_N&?d-)tMR?xE6#RZ|QSA$xhc7qTjJ2m{2k=c9P-Z zgxho{hKWVu8Ecp#vJlHb6s!IQ4yN50!QNiBj}LCs=6l!Bl9?Fb3(#w-ppq4{s39rh z+KI#8z-G4?wEKXlGus$$~Ln7SpBaRfM;@48Pq` zi2(@Ky&OQd5Om1|4O*e6?=v|`RC-g>fe`7=&?y4n>#3@8Y#$#%pqkV#T{|?^eXgy*6$w>>*ver z`U1v?Z_Ch4RvqiedB<6T9{sshV&M6rL- za_J+38x<-z0l2P%UoN!UNW!tC3W^o7=|^ogVK0>`-G-^L_)R2MG)2b8LTVIc>#dSet^n)gn*(1Upn?hpTmuV-C7maW7Aer)8 zb+u88x+_L6Lr%m*^d|*RSU)kCMabOdD1IMG+XHd#4=gv_PO#$FS&-~ML$bEC`H(xM zx(2D6d*G^%Vj5{rZ(smPiU#9A;O2nkIZ_sZ2YX`b@kDN+{%_BSG>Mb zKotH^8bseS0qzD-HO%`*VZQi%S6EV#Y4f(7in#X*@c6_VlMcC(VSY$WUzK+oip`2) zp2AqD7KqF;?20`H50;G1DXcA$W9>J9FmZHyvl%qQg*sSN?%ugWY>BvKy8CsVkgfpI zzzCch7+%5vpyVh{;FI`Y@gKdqB`u8OdM z0gJ@!ADY{C9twE+^r?77$MWK$1#BbSLj*Dxf zyylKi)74Pwxjwypc6ML+S+lCcM>G6M14H6J98s>KZ?b>Sqj0zvRvYh6H;l&GlcWA~ zyq)CR9fs#aG3m@HOS!rtYGGN+MN;#Q#-@$T<|8Ex@Sfub0yy4;^6t5K?j-ssLL5r` zC2VYL{Jm+NW79qEt>5r}9L3gjc<@1rjn>PGpKs4z9?tC~5fvfYVVB~BcElQ9ij@vQ zXqmb9Z>h?KK5zd1JBV?EQWud0U1Iknsf+q_w{L1rJlOhcq6n)5lh_4L4%pHV%Mycc zAEYiWm?LzZ08q`Z4(}*SQ=hV0kgo&kfPD?jMK4+UqG(SATf$ip2&`BJ{ zz^}k~9eZL0h;6=>(^Y324W__hzU(OuE~V{ZVYa@$zQpazvw#0K3!)d3uCF{jMSyyUv{9u7x_ivg9{#gUKhW8Squh3r(*SAU|yF;b9LzI 
z46i)uF7a@Gtu&m-?OLJ)v@i-aa^`t}d_;+j`r;yELrWU0c1BU6$37u^pvREu$H|57LY-IfDaQ-_oyt6--xC|5@dU1}0d{d=E49{XWL`FSh`1OpK;B#Yi(L zyA;{sE1Rra$hL8c=K)eHETo=Dl*+Prs2V{%vhV!at@QWy%gLSTm_tLQ`x%6?~ zBvd7F%gZTB*Q^9{gVPzke=Pqc!I${% z>o=Gk3PIBmx~q%HzSc!^4|;*+5M2U8uRRSh4*bxi(d}rgZKv5GGwiCoD)Bms>+DDZ zhA-{F8Jh38)g8vcZwN~hntkNZ`tV4BQ}1E40(XA+3`MKu!$hMe>i~~a8o=g&8sI+v<_wexiQn9!uM4xL6*hDf6aFK@8>7l!QOwD8Th%? z_J(om`IX*!@Uer_bo8$ctIuIo_1J$xeHaAY=j>{|?arM8k`wT0MPmOMfrNu1*%nDz zM(Ih(Z-PCqUajrzHS3@yP%}U%4jo5ALql1Qcs-9`DBFb)ix}sIJxBKJQoZ(QvACya zU6ksI1(+s)!`Dp~#=n-ds@guD!xeMa6Ss>6lAXjK7o0)h-vPLq@N_?+@jl-XxH--Q zzA*w|Mm1rHmOTIvnBt>+=Pj@BB*hS^$2zehG42PMdkuoF&t>RefR~E6F@#rT4@pVw zKr4F;^Qtx;F>e29H|geD+S)3(oT4#D zdA<@4uMS&mBgXN^L-IZ^t&JLYeB97(TlssLoBjKH;$*|#y+X;^!h@9kL8@2FBZ$B$&%r4tw?q zOUPs@pUdZA2bf`^5{lOL#X0t$u^pr7sMm@7HCkJdgVxQg6)X{7P6IglP;^lj8+H&) z^Zw$MKiJhD05Viq%eW0u;VIKqnV+2~#dRxUR{lvRQ>f$VNgg6IaYtl@?~3$8-}!Fw zg}aOf{D9nvC!kenZXAC*CWiZ%5K7mt?h}(%XLH83`h4@eMxt1GII+05y89#0Mpvj`^zr~0Zsj=WB$-9yq!O#i z2WWn?sA!&EUPgNQH-IfM4pr?$Gl1jUDzg(MI!=vm-@l)qUQPC$XM?}y9d7FqCb$(4C&gNHfi^u;YLFCidQj_qxak}&Jt^q4&a7U$c z^RmqSlWm23&tA6FYdtN!)$g=>c0fC8WXk4Au)jWMCjxSP85Z4hdjvON57@9g*5sz9 z>T6^tNv>XIXCUa(>fcv!`Z`%t;`HSf z-nZka#ma*=3q@78RI|cvJ&VJii~hLI<~lS77W5X8^0%FSdx9~L+=S0 z>`xU`2MGNvM(s{$?)l7(vOX`47qyPxA4WAaidzb~2f{mLo%`f$icMu7i}ym*NdO{7 zhbF+Vp~d1t>(8l&lNW3MDOO8pr*ltfAaeeiHYBR^M5GVVWjxo&$#=PeF9Kd4Yqj>R zK7^%(hJ=P$}pffxPOW7Pmh(N|B<~v6H?>xHt1&I7V%M>1R6ys3YmIhp2y>GE9-BC8g%D=u3$nAb8S=Cf#FPq` zP{mNkX;P%5)Oex@M*K;n)ogzrv=qeUd$O%SAPns=xnZ8UW06!+)B4(i!sw-Py*vP; zmWVTyK3$Reu@AwQ`gfB`6&5tw4Po7jPJZ`o*P7~=0f#Qe109=qs zRLJACtV!ijZ`!|h#7#AK*RlA&!3mFJT2HB)rV0$s$I{(4dK)^;e^Q8EdRF*pmSMCs ztIYH4>Phb3jrN8uk1~ITq$q9OEnqfYa8;8wzi{vEB>kzSM>c-ot|{WbcE>J zw@>^kKG^43pKJ2u_`M*oe^qs^LAztWrf4|P9r?2zRQrYYF2Pez27m^=aOLJ6o5Nmi zzGjBvBnn4bmke%?XveZH9sW)?e^M9Fh42^(b+2MEsQ`Ei-&Km%5ROar!X+*D$%D=i zW97YE7DDH9?&D=nGsJ$BCApx}xBoa-Usv}5Sgd&7>7I zsvt^%*F-C=Z6MCbTyWM4ebc?YG5GC_+f^*q{VWERQ;uz}>145K-^41NJ@99ePYW*F zz^xqAYZX=+wPW=+NFzAN0|Nsd?4@i?Aiea9cSAgN<@s2%BQo(>3um^bo9i|=*AJwt z(I-E&|6!HOcj2?$G=sj5F$(L!F#Xn zVE(=zL3>_R<};V6FC$wM$=gEnrXbwHg@yASze-!hyFcqoi;}mSrNtE>vak$U6mPdF z0kojbdB=$t!`jqA9zbvej|5htKA{`308bK=tD&qcuJ!-zBgmSMG0TB`t!HI_?DUb&ZQeOvZj6N?`uC?@ZYm9QN{Vi?Z2%GrLw-|ATH%TQUr?9-T@v{ z-lHX`ZthQ-pDK9$S=ip^^{^S|#_Uhvd84ymU`=~}n)}Sph?<3CB%BAY5+W@ur=*{k z6xY{kB01gm3id~G*0{G8Fp5Q!+qOU60+)Mp`48M$V(jnyoofx){v5{n8WH=o#Nzg7 zYaL{XtPZ7{Kkp*sCn+hr?=-e1$#&kb6)CZ5iebpxx=W4~r9Fz;NFI+@#me~8ab3P3 z@&$GsDQ_ObzIfJK2>yXkfAUX;BoB~y=iQgI4I*x%#|3~6UW0aVd~oxP7vV#9adFA) z^jllHxQZU?iViLzD_MfECzgTFdx%$f#Kj9oXK@&RseD7Ax+)p3wgi!+48&HU9{PI8 zOzj0i(cSX2y4lJ_wz5kKEG&aHWix&SKMjgjs(savi@;0SR{Zybu z*1dNC3BecmRUrC2g9^P#rXU6Zgd=X&_J=fkbabr1&j<4(02@(U+>={FUkQdiVJqW3 z%4 zD_tVg1GW{U{n!<67##tT8hpB7%%nod!ea8=^K_C9$Gw4^nGgy*&dB(B$x+Rqt$q*E z{t_=f9aXNtYt$@!4|w7xI{mw{{6a$1LUDgM%?PY(Vs3QQ#%JYsNlYFfNThj_pYlrduj`{WvN9)VSk|szL&)Iri`s1L#{^YV^x@KF zvR9$aAQ&%>z1)R;z>(fnURhhc+MKM=A8CsoqMiaKHFuyZhTUZ4aDQ`E#@v+fTo~iOG?l}M=OuF@C+rbDR(=pkz0B4L2Jwclu zlYP0G5SbkULN9|L327p#++lR}>cF>AWghM-syjE@@3*WN&jUEanCRRiN)ytfS}J ze^ft|0qH+q*ov{{ZybG581i#Na)=sr;Em_cpNr=yV0=3^ja3uF9Xekm*JQ7^4nJwL z`2Z8*oyI)W;;7%Hk1JbM;X8=^e8GGjCEe`@&Br1EYfK|LKGXn+Zr2Z#rqU*AS)WtS zR{J%Sljzzd)XA2P^NbllFmxX8IC%^fdS`n4*?a!RYNmg-CyZpM zJ%v1O9epIwreZ22W%o%DMaElBM(BcYv36NO6kbUB-<7uxPlSwn04PIQC{m-fhkwy} zUij_muqgZfS6Sig+?#{>H2G5Pem$YFaakfS6-Lver+j}2q5+q1&0{3_eG}GA?a9Zx zO9dbK7Hs)sV3wWsyp76%ljh87Q{l?@R?>In`HcUZee~To%;_qpg1+y1ofyBFmTTM8 z2~RwX7+<>uX?0gA$Q5OeY}Ghui40k%J5qAKe^VlEUJr>9-K3P?#O{_Lb}p#tJODOf z?f5#F_PO`xE_yovO(lq|F;e|!?5Vm3Vd 
z30?V}Mt0g8nw?1xo=W0v5t=o1a}ytVpJd-CiOc3iEnqNa1_nL-SWtDwZ5PtH7Zjd? zs(RkCOZ9OEi~r3_&W~J?)41k#cW`HDg4^-{`ZULfJUl#UJ!1yar5SXz!wZ-XHPkc6Z)~IJ^LpockiExwNIhV0Ap?E%op*0OpF@WA;&8U@ zUWxb&k1R1*wAd%?#^`d~49ZwUzrEODc3O_hYkRG=!|gqtL(D(J2jUZzzUAb4=4+Dv z&R6|#J+Xhd{!u9R)JNWdKa^u5nVoD?RJZ@f%F|Y5{r=J_`PHah6fc)GCzzD=1HR#m)Vgh4CT8I4SN?B9DIi1UIuMD4gTF5F%ekomkaXA4EZ; zSvg-vBU%G&44YsayM0TNj?u^*PIaYp@xR-R?Fp?Irt$JeMM0=(#>(*%A{#k5Is6p# zbqp)>p+boewA}Uf#ajk3VDTw-0^C4!0b@l8W2QjZA-AEM8VxCTiC>L#D}=!;0Be}< z>=-n)!{+@(7o&#k4~y&Swo*4jI@n^(nMl)Hd9|o#3^iP#-5MkutLVdbM4|&uG8oez z0_2|S(u^NOh!GM;5b7X2JVNHXOk&^hCQ11)ojesRa|6m5!{EbvMFl13$XU1=zS(QQ zGB;Q+bs){z{b`(!kG0@2Rm#EFQP&dgAo${$-RJG|pN+PU|9K)WBdskZx+%(OTYB~B z?rThBL6irtBVM~=_?e_%66I>ur*aunTC&6buP6TlXsCM`<&kmu?Dn9x1%O=Jc}zLZ zP-Ue{^Avzz4&Kbl|HGgTb1*eiQBz=O5fycK!0X;WMCeFa0}w&4JE+BH}CGq220M& zP`}*8{VW#b1VBrxjw6ULl+4@`QU&6J1MPf0M1NBNfl@&%gEWR7Vza)`4v5{hv$2Ij zvuX*r#<<9OE1DsL2?RF4l|fFNYH@d|)w@m&_21b$y(~MKdgbE?Ri?q_yyKoZb-ob` zU~(7;SpCnh&olrpR6s?4vXjCwLP+W^P4;m@)ld&Vfny(S!_L+t3p+vD+085hDDD!| zZh$F_Hm%KgyqI8`@$JR$mJxXLA07|_j6GCApS`s*XO;^L}2mMRDfyD4Z|#gVXWh-jnYW{ z7D=hL#%)xUEoVYoYi_6ef4g%3a>fT5vMCLHMp`=-!IR>hanEfIi062IFZi`GS)XPi zfvYm>#8Sl`&r1@#`;UgbV7Z4QwRLsX z(78e>RR9?k5g?DyS75+Y=3YPfHAl)xz9uBxTMQ&cAOz{9O2Mwd1D9ebMAkDCf4uBPWsfm>TsX`x-av3`E4vg zHfX-~do)8TVVxmagHpSh-QW4nezo&Xr1`K=VDb{sLHwwr+c#;*ix8rWtj-)57F6ux z`N6M3h$CHBUw^=7(FMb2U<$pCO8p$#p&inxUqE1joG}?1xnH6?2=a|^h2U5*beeWH zAr#t392-QLq5Aom0U;nE;k{!<37}R$)>YaUOUjas&-yXRyy-% z&TYZ!KB3{ZKj@$wZ(h|V^NhtXZ+#blJ?54<5O%4!>j-H;&RZr9j&N*a;_`)O$Xm3D z9hQFZ*40hl{W0kPM02;;$)C(3#wP}gqj}Vhrt4{x>!A84q-1!OY3QB1&_HBY%kxIH zmuvU8#sYO;zU40-!1PV#0B8U?N`Yyt%Nng87r`)P&xY(9WOc=_$n7eYc~3AUUNH-*b@<2-yFBy57O z&@GHZh!s2)c7ztZENZ&{*c~m}I;TNIT7t6Uk1!C5W6t^3y!_|q`Swy+_+Z=H4*0XM z7?ZZ3SwIDkqTJM1vlA!zarV8#g4)BGr%Q6NfY$$daO_1e!JVITH8i52;N12`zfX%I zE3$4%!r;s|)_(H79QE;B$F97rg(OGrlku#ctemN+ajI4BJ4m2Qrx`8OwTr~^*6It0>6p>H&9^%!hAs3DS`SFO|Z$h z@m5w2$?O(I&NRV@(}-3EAS}hQlXwmQep!gvO$^$AcPJ2Qk>doCiSm#I$Hf z@*zMODzEdw-?0MDt%wOMR@pkym-DcYkUyDp-+yUw>YLtKRK^*Ezx09JRR%&*0U_Ts zyoQ7}Ei7TjGMs5HIR1AYav{;WD^i=4_cH9miIttpNf2hira;V!By0><#=D5P*S0gn za)EqFbJf>54FAp}D_c?LidyU#7EbiThvtK(xMZhW4)zb=ru^{p61*g;xD}!Lg1DbR z3I>Wh&5t1qy!rh_Q5;bIWsrTm80C_Hq9B-4pO%swzn!2q z9{~u$;yd5gh-)tl%L)WFU90?x2#RCtnfqY#R6wePRq<27u9zU!Z3HU>$p43AxYe2DChD|I9v3R)L1lpQ}v@K zM!OuOQzcRXH$Hw4>x>AP%l7txdI|;hSAmmD=p3#?yJ3mOtHB%~=pu9mXrC!oC;iut zVILzd-RzPXn4d=>yXpuF2Kv-*$e|9b*b?IOy6@((Jf&+czi!#KF1ow97vNU;kmKxs z=~oANe~w8w?Upe6A1#R?a4wqmMwCiI!>^>T&opConDr$N6#mI?n*vY<^X%Q*cq26I zbg9d^(BuU?EM^ouC3bzcZqX9uI1wL&@}E$16mc*{&o1G&92Y8e1l7XXP6F3VNcY00 zytDn38siygb*v+*VFMyeD1(>LIub!7cq+%R$i&=+)FEYHVi7f0h6U|$*y$%kLXmUd zPWY7sAoC4`nwJ|9O^^m&sgq-P9v0b#mj^^e$o7jo**AGCw(vn*zSv~@yHDUy-4Q#b9 zoO(^Oy9u>_FO&WsnD)euBegQnM*PL z!qQH217ajbUHbn8a`6^zVdRhPew8YM)8Lp({$IK#S^qUT3{om!pim*Cw7A!%-wNj` zrt29TibAK`=RJ93gB0*}7Zl8761--b`T{I zE+M|eJjv>{Wnm(THZk|)E`t8!<2#47BYk{!9dE)^uSb8Jr!U-LhAkB+ryxSLgfJBk zJJ!HG+P(XQc<^5YB776RlCla^(74#58xBKRHwf|K zL2pib!8pyz+4r(B>q&0N*CMDMLMx5iKOkE~xG!o26!I%V6#?+%w3M!okI%hMr~a1_ zf#jQy=5?}on`M0-#P~HL6SHXyJ@9uNzps&o+y-Yz`0N)=G>gJJ-67}F3-1O6AS6o+ zwnKdKj-)yR&-*P^DrGx-Ndn+_g~iFnt$UTZp3Xv z97=G@(C<#c!8Hh^&d=h-SF#wFLrPAG zVGB8*#vlQ0B!ppYJqycd;?Tu9w|8L=W_3ygXHwLVxGev%<=`i7Z=Y$3-5W` z@}azVLtMiPg`E7#64}eH%a~`5(F2zC@96e$e>fFnwTUE1_1aXKJN@eGv4FZtwa7kB zS?*D2P-1VrQQ6V#it=T4DhrUIs39F+Z!|PtPBgD z=o2L}Zh%4HK@H|W1?!MDK-|dl55Ok~|Y{njg*g&euz> zJ3}JPcUTv6{NTNd4-4%2P*m^uYi~J5kHA(J z506GjhP32%W93aa+GIH|p5rGFEKTXu9M@Xq)K^=cTstH{d z%UPdu9Y-KRCtT4dCz=#5T*anyg*!km?+D?dD|1Wd?f-PoeZ~;TXtsGP(}<6||3r@r 
zu^v*X)1Q!CIeg%=)7mnPz;_a8vANaK4{l7~HU3jEx3}O$4%^MkX}gs3qY^hYjhbWD z&q}-2)0b8UVo6CMmd-Wje00`ln^N7uv!Q2nzGz!qV7h&^b4KQqc}+@-qB>)fB~iU? zd^9;?d~<@NZH&pnr)LHCP#=CzK5FcEjV>%1@YQOtdmS-s|1d$3`7cY?{-(VBDVm0u3eqGZ-~>Ev))!yKqbC4LL^$KKMxtXft2?hu z^w|Deb>H6~W@!WB08r=kxKsH8OYbh(E6qdR*REZgCX{fdp0o*lzNM*12(Ere5aw7- z{1-1tVIeo%-g6AH0le4BK;c?`>93#%JCFJ0>E>goLV*WwJ9yAm+B9)?cP<=$G1(^3 zGdg+eBq62F9@9+!)FZlfTZDiHul(WpbqR&ekl;;c&-qW9HM8 zEQJq@R_33UU1jVMK6Jp?#Y1heN9F`(j0mzB7%s=2DdRbmo2_hNz;xTXZzR?-mVIDJ zN>Q2PzgJz8g&2Y5fwvZc6t8Ik0reIrEv$d9$Gk=7aKcj51X%~T!R<~?i=N0YX@8eD zQ1c#S>$y7Pk72q-Ij5u2{MXG!=E~>B_F%5X%UXk!#XKsb?VE@-h2eV{CTxwc@d3SH z@!bUk7*P2j5_1+IVVH)iDitz**i|uBNEwWi7zQ~KT5%$x0PTN^DIE!TycVHXaqWm& zb?C^;$f3D_mluvheuuXL{+W`QaRwcvR)88)Au(gRS&G_i03*CGV|z|L$s zd~lX3J2};QKZeGaXhGYsT%bg@Kxst8ibMOM|4oi@uMS3b>SlJnU6!?iO3nJhzHk%- zhVT)^Y4SDsbch#|;Fr|HmR;aVkU6%XG=m`}rh#-uvVv;SoRH&0aEV({!E$bBseP(^ z{r<c0ePoQqxk7RqIaGLu9Z%aP1Y2dC%DE7Mw zm`{5b2Ll6lclofZyZg2H_;`);t;q_`-@)YR^Pk)_(gG~su{qIbY7WPbOzEvNoaV-B zELW0?QnQBrm!DHN2G!;aGuByhUg+5^x4b8V z?~T@X`<~5iY9;%rP-Y7uqv9anyC2(Q75j%3e@FgTN@GkL0H7&i&-O;Vz_CyC=O+=n zfZ4@gL0Z8ceU~ARU~`qHAD8v``lPsucI7S1gxAwEvD&ozuCKVMhIVcWCX@eSu_;)E zspYDIJGH$BvXSE!cDk%JOSBGPheLCkr_p;6om@EfAu^v|ih1{s;;_*A9cE*1n$wY8 zmjy|1VzFBLne@x+5kyK#Q5*_|E}yN>eBi0DX+Pw>gXFXxT)hiDZ(o~p29Ty;i8A-@ zrO={I+V`k!mCJj9lhdI11?lR&m_Cl{*SC93wx7HoK-FH$aCnogp1yKkN@{AIfxcpN zpSN>Z%5EyXP0=61pDP~qnrv!bT6+^rcU)G&{q6CY)sDA%*7MyfY*c$>iu(V40dmZi z+UVSs|84pk4PlEyYe>V?MtG}GE8?Q!>j^(}q5_vBq57lo-496vVbS|jhBGAey=e?; zrf~pk}=7d8(rPqL{V?^`Pt|1K_;i1y)Xqdv2p@D z!4%#>V|~vvOcAQHo_&CP&Lh?+=KWdrvhs2wO@TjIF9FRo9OeVqiXv{!0I9^#(UL{M zLrloCiQ8g){PFKXtG6xt(hrl63(SP%_*eL^ziifkAzNK73PRefq@%k!b4jb$bFWZi z4}`x;zwo^{J;UYKm7oUqVENM&XSdlrVZ$R43e#77klVM+}#yP8HWSKnjk;=`+|(U;$N@$&I~ zL+WDcYas)Y2e&Pjz1v2vTqxlmVIoPK^`=m4e{%AP&@+kLo#vk3C{8AG;HD#dz5%O) zqv&gE*Zv<>-yM%-+y8GQA{7~tky-XAGowOAQX(NmR)s<+Nl}tbW@MC2RuoEBMW`f_ zQ9=@mY@x*OeRSW?_j!KTANT$0RW8?cp678KpU-=JEGIJ6Ph*oIIc91L{nR_0x;A`$ z8LI)Nq4-oQKe~qW#_=Zv`v91YOTept5wsEm$Jyk_ilp-EH*bdJgU2^y_?>YiF!&JGo zV&4w|aBOItp5ydp@Sdr`CqVDQ_K0r?J1jV-+{VJ%I3#ZYCfq1zlJ{QJ>wDaT2M=;w zv0?Xg>1k%feR}fVU`tWk@-I9;#+Po!U%kbiT$dTh5;E*H>~9|5o4)R$`RVmS0#ho# z=d~wqFy3dNqo7cfJAG~XU9X`MYp#TXD0i`#0VLezq=L*908#=j{(>R-hQMbjSqKCKALYHuY}fR zBVd{wbIms0vpsedLT;4eL0{hB97UByQH$@RDOWxX%M8qH*Dx?l6J$A>aHIn07Z*)H z2P`?Ql)LlQk(`6@moP@T8@L5YP^h5+B`X`5mQ?++josZGXVdd;>d3VD?A8!P4 zNQ``XQf@}fte=4egAb0&*f^O!iWuCgDpeprH^2~>?>K4p4sunP9$)20=fJ}kgeFM6 zdJ1dY@&rWxH1uW=_C=7$FnDop3eMC(3LdQH4`Iaq_Ass7K#9UP&CPPZjWH6Z<5>&=~fo9fO`h`5EWw$k1<*-{kO{O%j|-TG7i z6R+A-duj*2m>2`DG|7S?@COd{ew;oS)~G?OY8cT=pm#XFZtCveM7nX5tpixD>El6!5V+OjQg?XEL||s{|q}aPbCe8+K3q;KFy-*vI374ekl{ z63wSWQc+RSh_FhWXdX?i@5LgG@aJ@T8!=TS=5=lMcO?l+>!_`I`G;mIgnzW$~lxzW)Zi>g+inFWLId-R0c5YY0cXXPkQt?k&Qy zk)%7~&g-f0c$M*Z&Px?4dVf-YKn6`h4q*4P^%5E&mJ>(=X?UC=Eyj=Ye=~<(;H=t} zZO+eBf5E1{ERUaR4ay40$$+TdqL5 zpQB$QEDIo+6Q~dfvj6t6+sUDzWiLspgh(UoIASb!9yao2DJ+|10r8Dr>fcz4sA zpUU~0dbvJrHom!ygGsmkW}IKwSN0P+;){%5^Bva&9&_b*k$>&jwO4|>Db^X#E+0SH zt6?n3FJgsdQbTe&frlwIIXl_sEk^A%d#VQ%}6-}`chIL2Z|Ug zN&P{ZeN?-FU^1-0b`g_ahGX}~k2a8-p}16^M0>_QBERIkGOn`(o@D?Bepii;`pa0G zhD{>wV_i(pQI;$O3_a5`3dQns2CYU9jc(dFQO3iphSGs3Eiaq!l+c~#%<~O zl?K?HFRCB&4rIz(*4!Iz_eo7P`N#be?qVuOG^!L{1?ITC$XBT=s0^))*Kh3{?9tsj zRuUqearC_Ou3Q)QbuJF6c5dyw6tnTK{+TO^GKF7sBT?^?6B}?AeEByA{KJG6;V2Cr zyJ41Xz-`vierkuy%$#5P8ZAo{K5`eiWiAchJ5nThljrlk79`suxR%Pp)P7{Ma!#1$ODqwMZ!|B zXyWA>a7c%vI7panejubTLLs01yjV#8+Z87X8p5BevPnL45y+(j{SbATkw1<3SVhKp zD!Z82P8c!d&piL;qTlDi6vq=YHZdMpnr?n~??pSGWr4IP4!5;|M?E+mGo2Q;O#Hot zu{cDkR_dXe`*_wJ&Euj;7Ia+2lPu$$QWTfjDa}Vmoc_+L#Y)sP-h{*C(W6H!-THzM 
z4q)VK+7O~%{)(WmIqGs4%EBfhHLdiR-j{Nh^GT@+4L%ABS)0DA?Z+OKT-8S3i&`U@nG{_~;|<~;q*9Jso;z&-NioafWgxNM#Vx~MzotS| z+*M-709u2u={A7Zt2H(1z`z6(TDXK*)zrV?Hcit2DURC2;H&FF=D?s(dLO=<0!Hp2 zbmD1v$Q0d%LYq2Q;EW@Ag@pA3lHGW7@^lsp#mA2yU&Uu^NpIcem0#MkHZUyg>*>9k z#^(iv?0x(!Rvky)C>_l&u-DyCC^$U-K&-~-F}Zo)N6`So8M7jkR$T`vgo<- ztaw%J#wD#(Xsl4{rVc%OvsFlxVkQMUxHTo0C7yFZ2*m)nP z&2vH)#D|10pXJ5uoN#YFf=-2a7XqN87Zy2=W(d+I{qH%BjK_Wcc(l^g0&;tFaYh(j zrjXmcYxpjL6%?i$RuK&qO0x1(&CfbEBS1>a?e@NJmyX=FBiUg8bj3&Fm3S~WJJ39 z-CynnHhzzlvgFZ@;HigCx4B{&ZZyxI-6xAL&e#VqiLce4W*$sw^^se&iEHq`&_!k zm#Zo_cI|8`QKGs{Rdi<6SknJYBFIQ9@AYE9Va}?${LG)h6T7FKzHJJUES7NH92z!A z@bmBaK_?9FAs>m!^8yt%{FT8d6pJ$r7R0oMBJ)yu5s@L@N66gLz3%U?Iv1apCkEKa z_GeNY$mM(a`(AF(6v&A>W`E4WLccoe;n<>F>CNuv&o#0HGXvU^2SeTgax$d7iv&~< z#cP3~y#OrCsv|6AMDIJ%S0iP)0rxq`tmOiSrmY+lE|89S$4FWB=8xjcC$xz*c6lV0 z54ayF5{PS~sm*eyK*S(xyyvylW>KN72Fy%`-v+;Y_kAyLw6^)_l99_%lc^UkS_}F| z9WHO3-x%w0ZOazdhW)n3Lm!;t={$ecWt2tGP;~RTnIl%ePwl$j&l7FQZ8dqm_UZor z$=()4ZV(x<1M#P(<#v3evA!D17sgpnyccDW!|*jWo09qJeE(c- zU$MjY1dXEFG}2phxIfFBJb7|Ehj_zmyVH6HaJK^?S&P7$f3KThZe$LCTrpL@p z9{Is{`@^gc2PxJg{8nfG9JT6wchl`_k4+Q&fpkYn-Qg)B8V3>?Nz0 zmzN2~LbfJ{$}MyxVOP@PkuNZerX+(Tp9@xh%FkW7^7hXJPu1SM!+)++705q71+&aD za{Gfh^XzYZ!O}?wm<4zsE@!Im@;o=zvh8%$43sWFT{COH;E<6!%_CAw@n}ps^R0n? z=b>Bz5P64XUdk)jZDJDFL~CeVXknHjof%kKm96ADpoxHurx}h!jkh%W?f|iL+rN&# zO6-3d4TMp%1KdsHj7QMMHsa{k;mlVS6TrRyH+E=xO4siE`SYIRkB6l(X9C)Ht*_T@ zC90RE_bm;puUx$6`~J6rMV1yQ5wHEuD zg14e~Lzv&r364mPL+ksrKe)b<-=`!u>wogjca!h9Q^;3R4cgsS4>(F=HFCDL z>YDrGe`~|QzVhH410bkG$cC&7uRF!R;GM-w=`?L>HXQPRV z)ScOo@I1y~K<_+7A?i_5a){mt3kr&qiIjc)&DsLf$T^~7Bs+dq+k@((dq#*l@z`t zTw`|CHlr(zM}D#F!9EqMQjo8R^bg1e3dctOwn}}x{QPE~7jysb=xZh`W@y$qF!yMP z`Vr(o^P}EZ$|Lu0Vj|Zh}6ALf!S?NR2Rl$ z&%r5u>SNgtBZbhE$~A&?RG>NGHe}kgDG(fd9#gb28?vV$N%@Qi%KN)3=6Dd1GJZsY zqMyVrAeR9~6uW7lp*^+8P$O(gDmvMyLHSGF zzm>5AK1YO&tz|%9j_KTJ-hb|zVXkz4La@%u#;5tbMQln#pU#%blwCf2)+N$#X`o&Q z?D5LVTw7*ir90DAx8Xe#KMJg6OK85GOpmObDARK5I^r78(y#n>lP7Ti8|}X@v4Q@6Uz~qwFuemp-Qdv-gEI^;v^!3{-8JN{wJ~(> z^;-vT{+g&dsg#g_DQbSviHV_2dfDh{LqsXwynY>{t}oN`=@ZegqJr7gz5qOg^+=%w z)2GQVKq`>|`oSmy_3CGs&SjLuB9-*Jjq;8xSPkI7jyL8?h>i8fGRNV1{DtwfY&NwE zgbB7fd(O?w&ennRehUo*_NYYF7F_*MBDp2mIJCUpzK|m7uphFmMr5@5MxjJNxZ~u^0L z>V7C-X%(i4c?Z1rdttB@C;!*M1k1*w102{mk|V<((S8n$kgA>(l`phX_|U#!x`}c1Fh1 z<~!<^vt3YquzETH(jN2D36`-tP|)`KGUhv-o|apY%qf?{%uM013+K-5fnd&755c-7 z1)K&+idUg!FjMISAktcFqy0DW@P@UXbS8zOg2Is(f@n|LPEfjtoF8mtT0-DJgfpFr z*}~DTqIZz=kG$opcZ>2O0yQEDD|c4^lt2Qdq4qfi=H#j)7Z=xkDCkM@oBf%JaIkUE zcunGF$0z@0LMz^C6RVcm5M%8H6&NXF4L>_{u1-pRZ@?$T&VJWKqSOebZGw|Sr@3c= zR%Sy@S?B4g%R29}c?7-rD4{S)D~gk7lV#>21iH5yqwIyCiy&QSE(MzNmwzYw4!+tI|Vw`d5vs2OY;Lpud)_(L34WjQu&CCMhcdWSqEp5Y^ z$BzSljR>eD$x!c&tq{05*HoPtx=?zA!f!Egt4DA6BtyaaWCn$GDq0g1Yfabe;4yx^ zh4HPk$iWAB*8f>1g7f1~d^vs}$dB^VSRUo$*OGup5QfdP29naU0I8Opehf15GPVlw zk6%$NX*{Jza-}4AL=VuHl$W!sntY?`N6)O0Cs&X`MEbIshA*JQ+0VHG%ht=7>oJ4aUlI89 z8GCAuNx^MFHweGD!E%W|d{=2c>orAduv1z2pW0a&L-aq}5`w_b=ozi;G zn-*rbHb&lfbodtEAZK%m#;E=J8pF9%x8XmLd%n%?Dw^<(`$F$iJ*;x~in`suK^;uG z1?F=7(FfWM_`(jTHTkG&W^4m)L8OrxRE1E}v3*d{5XC_f=;OkhJLS3|^gy469A`*P zwpVHVvyzxD(^ke4+g98()&dPmhz@**z<1yQ+OrEd)Ol9>r8W*jlmK-Uy^0!1)tqKZ zcy*h>_ft>LTO@5q7b;yCPtrdfbHahr1!9`PHGA$6RXW*w;OsYzqLxM{PDYp#gB!YV zX2FNAYt~n7TlW`4GPzIjh$SR6RNVD-G|e;A$hgxn+nLRXs+;@(U~-z2ccU0-=p{Ec zR(Ij(X*2`G*Hc@&EBe6_5P^6lx2&d4Rpw2bw*89XF$z8>VfS*q6eT6)UwGYmZoJ?; z)y3GKoyaW6Oz}G&V7`3Tp<&xxRGE|3bU3%Lk|^y5D#B9bo6+v;eQZBdY?b|rM1 zsF#hQ?!z};3;rDey$#CpCT0=^Y$HEu-Y!>V^G^KYn0zhmA^TVLgeLz-LAfuRM7zO> zggA+-<$WeQX$%wIBLY+Zj@q%=h4}bQYd6d5Pg|S&@vU!?Y1Ikz_N`1B+tU!!EpqJiiHBc9n)&N<^dTeR1~3 
z$Ih`(%^?>L4_2}stSE$6k{58!9&QrxM_Z4|_weij*w8fp0-rkuG>Xc~10?7j%R~lJ z-Vt}@kl`L+p(LXlP{kVN+Yw|Yw=pN$CKj=A-=~g9mTs6k|`dFE|nYOj(RQ6KL5# zx1-c~efz!4^YjWyWZ<+D=be8k&5TnQcEd<>nT9X1QO*RKJZdzKa zuX?+#h0SQ%Rzsppcjm)GCe>x2FK{uY2s)yxKn7P`-+FC(hbcxuJyH3Ko4CV{UKnTb z?CJhQUA_D1+r+oJ5Aru2rLe#jgk_-s)ETqogL%~9XydwSwKD=uP}0M61A) zn0##E^O|$ArO(gU+h;YOr^Mq~v;UXscxO?c5o1G+`x-jBTJUAAu-w4|wkxkRzmlXy zkp`DX&3l{So_F9R&%3%7FU$e)7V%%Pj(G?uNHxQ$9+WfJX5wuDSF$|aA zAb&dTY$fX17cdEt4B+TOq@}V%)V<`=a9B4GckDW!*-}#z%}D7aw+Z`C z)`Il(Yx_^DQf`_PdpmWxG#XU^dFd7zaQDAE@E+M1W)G%kHb&lq$pC_DRPF0qJ4wbQ z$#&El9kM?uz&A~K$lAt=rS(z$^Qys1Am_M!mR8QMTSw+1KCx6fb@8pQZXH`e&>79N zRruD#emuK?f5tB^o?rU~)e`gx6GjoIk@}9Dd$yB;vaxUgjueBqlob}S0I&QOIS$yD zU@>U&%SSDntIMEoMn*#V+furyE}%e2 zAm2=7g^7imX;wMlk@WpDAo*% zj;lr&?sOYk3u4&5<2FoNY;uwqfV5`K8Y~A<^pK9R9dDu$MPNO0u(sRm2TR+z^KX2P zp`fF0+`cK&QTiUYpT6MWSH6LQ^2n1zxlcH^ZZ%G-p-{eX0aJBKx7!c$pIsCJ$D5!Y zWI}!CsYn08_eLcab_NICcosD9Hh5)resrexTb-NyY=_@!O6Mx9IR0OE@Zs4%%st$O zr^fR~X(#4Yot`$|n@*YQS{xWbr+qeV;dd6RCQH)T??s9o<`O;Le`jRu;Y0iQaFH_`=%+(ipxKB?nN=iotLLi&A1u}#r zOK@7*b>R9)G1lO2@Dg_V8XZ|sX%dW@@b?(=^K|SYG1R=-IiVA;Uuh8;aQDIAR&fo3 z@4jLvpV$|Z_NHI7So&Dw;{%36G(NO_5-tn@Rn@A-VMz0~HYuYiSH!r3k!L;H$f9+7 z&~a%f;iP6nSa`U+BcoA5NqFwo8(@*@R8-G6Q}5PY%}AxM*o=0x(ZcNU>d<2s))|%G zczQ~FnL@TivUp}l;CWqi@*U&P?+$!6&C9wT)X28!0pnj9!BDHuQZ#b)R#P??>*Z2jVebHgyM0vmT$Co^d$FkT#g}j!a`AKoG zutq=$ur8`E86URl+1`T}JBTOGgmMx*z5e$tE7)`r+oc;h2Zx3Ru;HaOK#-((t61prgC7cqOhsbb*3l9wFklNRlPFIBhr=)|eY zXhe{9xRAN8xw_Y}af4@NyT&t3TZZcv0t21LcN=)#)0*-8S+H(+?-l0Ef4l3nL$BVf z>nf@L72=CXeR=0ME&DW0P9mT>$INKyM4bB4BsEZqqXUmBd$2LUW9AVA# z0yhxOw2CrBf5cWifP^T)k;nE)w}mrEt^En^T*Vr=YO`|*A}-%Rrp zeLfEQnR>Cv#l6sk^5_}9`QV@}M>I=weLW47mn^Eikf;J>lNFBY1u!f8YhD3)c6h&N zB_^{cP%04b3Goj>ljG1FW0YK8}rXyfnXt+@S9LY_@Ax4-2R75 zw} zL5IkMl0$xxP^j=v_-K6#VG!ZrRrT8OaLn|8sH=yEe@)={u&)=l@71O7@^E2qpYqf1 zggM^nIHNsIYak)~;568Za!uGTMTH|?6{brU3@35_-rkhqxE_8?5KT=n`F8=Kwc0r1 z!Leb+l{!7O_3@#zm^lQfa`+2!2^rR`G31MVh*Ub)iC$IWanCw%J9-~{OhM{akZGI6 zm5xH5g_bX9nbHN=qi|8`yKj>SClBJh^!w)-ES`Was(zGxuabcDYhD_axU0obYGj&) zNX;MR?+q~IGqdN<^|fm)+^LT6WU+XTCt5mN%>_Mq8NhD{iM*%uX@30A{#en>j8=bn z-!2>f%~m>U*9>n}ZYloJ4x9+wp#M zF6y-aI|!8XtIWiYmaT>P2GoAE(;MuRpK*%UV{M;pyGlHmvEmxKpqdNCrgxDEagMc+ zfmy+~lyfMP-uGJeNX(6E-6-En1x!l`#0HqTK0Md5rrK%r0;bSGtsj|F4CY&8dR6^E zpoEpl_kb~DgPNL)?WBYG$U5v zaw5aTs43Q7e5@1f6<8M#DXsIU^HBD?)H8W;X)h$rN>&kHpsiyFBFoGW@3N7Cg!!9Q zD5!lm7Aps$AE%fFCX-%JrUj|T{X`HzP-7CAMMZOl7`y%HSu?`5ZJCQ{YSF*lW3#J< z$_iu$tDMP(RYbWVyEWL7ph2jazMd07HW?;+F4~`v&3;&iacWd2HS{U&~K0NzL)@{atV+!E` ztiLPM(*>YZ6Lq`+ms`kq+a>rjqleiq`Ta!QE!=YE9{No9#$y%V^!Dsb!=|Kxm{?(GoK_f9^%nj95LThV|4WBsHA$VfWA!B^y@!(ahZR$U`6%`zXPwlQ}6ybeY<5K6@$>k)8{2ABS z_JLI;BBk-S9}z}o)?t`-F%14Us4zpgzd+8x=D;g$NE&>EHk!{afaJG^RfmR$SK(|@ zOmX1rHi(rBzo49hEK11jb{et;`i(aJyjgI(%G~`#fw{hjF*=k-BdUGYkY5LJ-inSE z=)RqG@cSey?d)K|F(uhB<%$hI7ZWRA^i61)hR$org5#`g%I7%&#=0P zSpvaNbZ|Ricm;qeApZy?A+uiYakuazAmQN7Yll%l>le4su3EKAY$i7Y0yCUu8%p2c z>C#WBfvO_ni3R&Gs6d^4(aSf+qU)A#wgW%M4)f|ZTsN;#9Qv+t{$=LB+t0VW_YjFa zi?!BvuCspI_D<&wOF5H3%QmZ?ikivJs#OWmk(<8xECpSQbrDnH#R zO;@xI3dF`DF(lE|L!6$kQsJxk>Qe zZX<9w462&7%!s8;dx5{!t0*+pE0tZ6LWqbxLbTg*9G@D4nHzX^25~YG6Z~(eKX47Df4f?*i4+v*I`nKQGC;5 zbpJR;;4nAfp@mZS!4Uhtu_Z*1#mZqcO2W6Li(BMPey|K+kYm2Z8q>u@LqM8Oe%{dH zcc0_19pcolkQD0-WD;xXH#ZK}Z_)|v89Aj{xyWWE{i9`qDFA~oPjqD46iH$o_o1(TBnGBYUIXuayStB1*9e$~AF+QqXUcSL?=mhpFSl zDftDiF3w#2P@0)sbNb4~VtSPXg@ubpiBz5|usUy6SLkU@V{guPx_wMn+0 z_T%YEdb;b$e$=AIIlerqUV+uiu*(pOtlbY|Poy&HCn-)%6%jz^=!MD{mc>2|Wv|&& z#Vr{4gTCfv%JW04VkVk3SN^=NUdA>(d`Z=o6`2Q&z~K#TcvU>NIL9+1mp>nB42%a! 
zdjUI4U;b(xy~a0hUZC!`98kpcMnIUl)kQrG4chIGaCqr#(P{_h`|wjy zq~L`K+<51W!yyu>fx5YXo^VVX2id$<&HS5RB4;oj-b!+yTtiXMajjr{!f^N-^I;thf1cUZ5Qv*MYP;h8 zg`zx=g_Bo~@ysNJu)(TA3(9aV%3kk{oA0O-pQpa*9c>PSiREwke=zxUf4&xb}e5xq%d} zebv=aK}wu0qO_uw6{vA9m|6OyJl8$rNXZRmRj>1= zbh^$_-|wHOI!433u&#)4RsSCcYAgpQs*|1kz$U_fQ?iu zBKC1SU6|`okJ5vV2k=B(u-ee`=jHA^G0(o)dUi`qh+_BOc0AOlPwZjWr3w_)zXF)y=sgeBnY1G<-}^DtPAs*Y)zR zyutlRSt&gju}W|DknaA^#IbejaNZ9&@3k7nKTES*~~eY+-T@5Jb;zR0;$4@sN8 zY}}jfcU#_z7-xbA(w|=Q^tum`S_ff8CqJAG>ckU zxXHKpg6pMTYpOABAS=qnoGDV*I`J+%qU@NX%czkc95Jb0sFZHd6F@bVCw}P#xRTvv4bLR z{I2L5rGcR^k5ZDMPBm5kS9W@H5ZdN787{hStE(*2P7tk8RLx(nl*y; zm21xihR zRhX+9-(CgT0|}EL#20bW_Ro{33*igG@(sXtY!eiOglM3!5S_jVK%0zYiYxs(3qfNV zu60@irgm)rGzt~e*E;XZ(wQNT%=H&PWX`4>b2NH#{){msg>|kS#uRHQ#7y$H6~7X> zxiSn}Y?@d0Fa(B##1@kOfVfQ>>zYYyq!`eqo zm-{O))2b)UgO|yOf@||;B-LfTeDj9w{i#=6Z{EDASL=h^z^vnsLb0U+0Ug~4u<@;F z{u5{JI6x_x)842~NF<@Ame%eKt9mLgZltDNM|Q(jbRG$Z*g$O}Q%eA%EHr;Gh&8~z z!+XbDh;H$d4mOw)Y>{7Zl!;F-;?rY5;1H7ho!zg!0#hrlvJGvYsJPErh!q zQFdl$4=zK6EEJxA(UOCskQT2hbD~U3e{W!@NJ|UW;c9vBz_%~RD@x3_pcc+o576ke z7Ej4Ls_NtY+rdWh!&IL{CP!5Cl+}PBNDj{Y^*O%2k1cmUpHO-!h~dIZ$DISgw??`t zXw7<27pujwV+<-qjW&4_YAOB?p_dsT zV}FK~EsfonOhLimCgeLY5}y_9&%3hU@yWaZcfh_43=9N&Uja0jWAV5o!%_4p8d|tM%#A$kmd_;2Br!2@EP+@7K&-Is6o#u8*y#&=-`C~>5SU^!AXaJ0 zL4~*9VM)a0G9M`2dLZR*w1D}%uJt-OGQ=L)j}QflA8#=xvsvXa%?_#?-C>DBmyb|T zB7q~uy($gj;~>rkDl2IHvp2nAHGOTAqYShiOPQTE(|@3*+{dYQRU)LV!XP~mE!D-A265J@t<%5>hYj!7*NOA zj?-ayvbOK0cwOnbP3+`JtG8SFKV7)$@V?yuR7yMQS1H6XsVw%!)N%o4i=F z`t9k(2Ni9r(wQPBssk*xD`=k{`(fzX>pjA%yd^Y0y2EfijxDB|anTazn99_>Uvm5= za(>;P*?w6gasPHsU+&ae^%`?&ZJwK+M@n#R{`=|}0evGJcBJG>`mI~+sQv;_T0T3q zBe(+UUj|RRF1mcn;d)`d+7n4<+5){IW{mHL}cIV zdGZg6&J2G2$|syP_&g{*J-y&c(R~q1lJ!6W)uxV{9qj;DV@S(r@%S7KdV7;Ck%|Te z2E?<2c&AwAo$)Yo-4`C&x$8|ER{V4Hm zU}8vsaAp)el{Ttk!7~{i*be#cAROq9?x#@+SHKp`4LiCb`Yo4H&&c zxU-AjJ-eWJxr;WYm{D}E!2Ju%E_UU1GY^y3s?3+|_z(JyJ|%J#@|1+^HtHnc$JHy;*(lTc`iKOiwVDK(^%qUhAUh1iCmJ01ubuIA(o8u zZHsNkV;(--0VbQpjEe|$oG|2SI{dlbG^kJzPy4voaY~u;l|RPRiSVAD9_9u+Tib2F zJf0fER82s}k*NsI?Xh++aB0FPtjBfa5baU9E-)NWQ^9g*5oB{q;^++$fuyQ>?5b@o zDhX&#Mwvr=G&D4dkrY=S>_d4M%~e{*cFr)uX%Yc8gr+fy%Q*VTaZG@QhWZ*f$e$zc z@L;LK;nqb$HKKxN@l@Hmm9}PCba|ImF?aR8@B9+kv$O6vxv9YdMB7&5;b32sJ6P2D z1sg}4Ew@aaI|UEqid26c!zZ7p*B=T(U?gRK?lUtP-Ut>G&%B|A=mgUxu@t0c0N^|_ zwn@n6CT2q!q(+R+c{Nq%rpVgq1*t#6f-s0GRMcmx1O?CxmYaUd#SkW@SVzIx1oHz z@rb4mMbU6A++E%O+M?X{9#SN{peTmjHGc(`6^2aN>X!|hdK5q;hMOhoB+u2j;!SLzVTtndNoofmMrYJ{e460$|J!E=okY0$`7RY)@>>(J&<7-m#eJV^a@>h&vs70b79$BgrWk z*d(_szP}Xru+;`0o|7EWrr2Lp;U$v$b{ieZtC$z(1G}Xx4)Ntdu?6p!MIfHMr%g?{ zUByq%|9D2mN!P3_A1zaX%#{SOw4$NvslNQA(e>xgpXi;};E<@5Ql$VH+fSkw`2~!2_f0J~G*ULV!FqgwsbA7QG!l zP;QdrgDjm%M5LX7d;pxkHg>8>Gz4*;f6x`Xpba#1#PhrseY^0<7(H)q6}*=(;+>k) zE`p=$_58xPY3+?YcaZcvPm)7II;6F9xqW~9_#p%G!oQ8^4xrV@>_VNe{ODju`^n^$)w)YI2lB@Oktxy;LbIi0TE{S&i2AR3!rDvMk6+4O|rnlSc zBn9hHhnY|pU6JemAm49!fI_Y4{S6yNDUWNp-$uCl+xNU0%2B5nk-613zwzJ7+b}l{ znpy;O38<;{4m#E>3@~UtQNuh3GE;y@n5Qi{DJc}FKaofi*`4k0&3(9Z7|BKFwT2;N zpRCU}>qKPz>B)t3$ZzB79&+NDk}5NDLmB=S zBfV)hCP=EO@sL_-WQurxA^#wbD4aPbNa=2gMJdXq?5=YIdFen|g{~xTG;+_)B8EOh zv+f0#`-4!S>Oa!=hxIRqmuB;sH?; z?S<#Ge_~=io=glLO$GRdB5<|j@ydnv1w_U|%)bf}_3b;qK(<20znnmjVprLj&6vjf zd2sMLB8YBz{kmUj$ef>I!35X5}k*n)T?Qt$YIhh-?2EzfizhA!_l2}gSIPM-& zyRxK+CRU(338|qCm^Mig%V7RVYji~{3o-qZD6zIB26jnJ9UYzG-z{J55$SI8Vr_fD zQFcfPd|OYkcpP(b5(jS*=Fh;OuJ{nb9SC*soLR~sU}tyI7)6Tcm4q`a7pa#VfD~sV~VtumGuo~y;`Sf*YSHw`|hO+b{%+fM4v^D%4{kh=aO}k*h%sH z{q6`MjXS#&9qjJYz10I_5ioH;yCle2P%Y~{|EX`xkIK%7oskwTJrtq6E1lBf9SHee-R&c6)`bn3w-#k=C;;2;Wk 
zhfFRR>#f>c^XF%xS@IzLJIswGw-E==-o@#!pU20S7an7O*E?=13mqqM7Y!GQ9_WnJqq@*PC)|Lrd9#&Qw3Te(TJZ`)6_16+31b!3Ch?+wownxZb*-=PnLu`X| z-9L|Z^}c(zF(oA>D&HoIP|3`Rqh#ITCycj6+L5voaRnQJsk6&fLkTnTp;U}0UV&>7 zB?8SAgO|6U}qOAam-f&-{p^Y z2Ps=r>qZ76qxlv0k_72OdY3N^5u_fV-(dw6CXgjb#<9On88d10eF^S zbkV{aGHuvEfvY|u?j%}*V8G~|YS2-UIC)LPQWFg$m{YYkkD*YwF` zTosuvnv|R@@zVJonq=bgZyY9(VGyPi=khQ$W#xa`Xt)37G$5Ckf1txB_C$mxQ}{ya zV+EEDn>hL=>B{SBi8}t?8=MH=39ihf=oGQUwl4}^k^AJsB?$dR#Ae0 zm7HEpx=LJbDdMY?^EqAbGZxh7AZ|t#%8~xsT+WdSQ*UJdT!NxQ@I}gfoyyWhmFdv9 zPrQ@oiyOW(NxZ$A7NFVG$Q6_*FW@kw|Db68(#-ya@QcTgWt;i_>#^E0Cf_aNb)$P| zcF^4TLCMu47y^3`UbY1t8TZu%Jllh! zzEVea;uf^1aBmtA_p0K74yz6i$LFtK?WVl_0e-1Cbx9K<4Z@LU0lZhXEiQLpU^Cfy zUEIW~=IzJ?JS?CuxeX{-Lm}HTN#5m8qJ+bFR7FQQBh+qX5cf?t=l8(5R&CxBSu_*0 zAFf5IY4>{1>?dzSFI>`tU~Y2p@QWJ;i_YL2g<{)(Z~AU5cc_B)AFn|l3kkb0(1_F1 zrFtZz8)dNTSeH!5Feg?C3SZN~mRr^gzJLDwk->IK=5qk^)e7G@Gm*%G_T!GSQ#}{b zwN?^M^89gQN=iy#x zeGLUU1xa+8mYVvdpH2^)rT3G{W55zm_SmN{Wq0D}&=%wuQXB+0aPeM%gYiVYZgIn|jD5LK~MYTT83N?z}^s{4@H=%9z!*o>wjZ zAoDkJmXO8+k_s6#na261hO?y!a8-nbY3b{S>Y~Sdb*tg1a}^|Kl4J~t$0B0eeiV7E=K+&>=&NzZ90L6HUnuM5%AZG}vc!O9b zdHJ%sfj;a=e2CM=W^g$3_*cA9J=+SR8H>ACJ795??J7D?5!HvYmrU!hZs@Xhbvgax zlz2J%2u>ieP}}ZHZGq387kS*AXwbGcePc#WQXND*Ry)+H3}vB6L~(izGt=uGV{9?f z0FYcAAUvE6(wpAm(sHSJZ$g{|lbq%G^YUa(kYJ`?mC?}~Q6>c8Esw?|V#Fp6io5M= zG{(nk@1J;?kEJ^+b#05)a;nc{b5YzKW6S0i8b|F?P2cQjp!pUTr=~omFVf#kbnKY> z`SdAW{2!VfDoNLKJm(Fl5^1UT&MX%q^#xw?3zToB`&Rx_+VLNXGQTvB0a|SvnQx{< ztqR-ApmChX-RpGuQ%D!@sde;uqn1eNdiOWYM2}yYC`vrlqd2NaKI3a)1IBWtPN-=F}GpYsZ-lkh7cwZ5%tdyg=0L}S2F+*h8!~SvUc=%5CqJQyCL?VrfV1sh(l=~HM+Q-kI8`KZSyUy#EWK4_dE(#7F7};S4FY+|)EID=lIQf%gCXg2 zwQ-(FbG{s!VmKZBu?!#8uHQ-Qxd5IIK0asnU`Re-3mqLJBbkgsf4R50nGxXf3&)}= zlQ9)!Mrq*cFT-K6=g$1p)Kl;*08LuVlO8`rGS2Z-h0{0;#eM~NBA{_@Y|Yrm?APYb z)oZ-}l**zl~Rt^D^;#X&}#unhHldJ z+@7z^iT#b2wrbXgsI1Q)Ut^mc|JO%9*s`bvl^=Q8>0m{`WOi&U9zk?^=eA(LbSyVA z5-{x$mMiJOuc}viTHiAwLhrEtUjJMOy3w^B8H-0hc>)?7f3*uxml826Z~+A@bpQDH z`0U_B2q+A}2*8eodF8*w!7iLNIWpKrQuVRxt){@S&qN0BR$L8yeIN77F2f9hU9a<@^S@|7(<3#LNWqpX|k^61n}D+ z+{{fBZ||SbAbDu`eNYbMSxD6j*Q1ogcR{wyT2h;lPlWO&LVYXXkIz4TgdvUbE|_#e zsFT4Mw~t+yD8*{-$6OD8wIhfJuy}Hw@#oK;(LI{eqK@;LW*ak95b4F_TsgHG2j#*e zQ#Q|vK$D`*bZo5uiNgK%Z4FpA7@(ek>qZ@bWsO8K$a^D-VxPF(N811K<2>Zh1a<{e zbP@ugQ5|d9+uL8=W5q@h0=kntpuis4E^#p5ON{G(emM^4-dwJoU_Jo0{DFx(0o%;;OHPFH3%C&fB>v!;(drm zA#VonXUU+H#6pBBQ||a3AK%+@ltR^v9jB%{^UMz~m*V(Kx#=i9kX&%0%R{Yg;PD5O z`KJSX&Hdl4c5YpL%R~lq7_NMieNePM{x_?N*X&JcqBiYjYz$QJS+~{Ms_x3& zBo7aBC^hgpw8!k@*j`P|=~u6scRN^!mu#2CV$0jJg)1qU)j+Gh_#vDaFH>OBpu}k* zKKmHt!2k*fdi3!}H$VFhuMDiWxXA^z@nnJpsuv(MLS(ub zv1H<=YGJXU2?CmIfxZOb5f8w9A}0jQ%MKyp-C7H%P!_nqG&jBel3GZgR3y{|=75+v>ookD~d| zFQDs~DK9Xn+WPfRB(XlBM!=kwLwp=eW)A>5v3%?rg|pvK5j$C=if2Q5XpT|#I;c%n zJgNKBSMaE@A#1WccfgW@*pjgN-{M`YoyQHENY#O&-y0sX93+gV+Ccb1Msbm{5@#ZY zsjUZE+Yi7Nqri8Z?b!rb!*kqX29}iy3`wY(?|qJ5)L3HqGD))z1xGdj@sQ+s3v+Yx z;`$m$t;wJQQY2me8 zjQ#k5abMVGKur<;3d@KP3bpUv8CBFW;~^)p$F9JWD}R1)6s|fB-R)XlH7@MS|BtWp zj^}!R|Nq;bS!M5$)gU2z6EY$ZA+xOPm1Jh`)uPBKLROSAqs%hO-h^b65b?X7I?m^O z|M|Ucw{vdioH}`p=eVBN^|ljPG#@qSjjIMm-MS@>6`nrdvP_@DP!VKlCHqK3I~;_ ziQ=zCtH#sSgo-{d2NN95t6&#b2*i6g?TWwh8Bg($o38i|!kXXmKg#cKk+n?^pF9YM zRv7vu7)!&b*%`3l$9@k(pJGB%L(#g@&wX?V2YZkF&e@M~tZ@)HoJfQ3s4C-NU>J&$ z69DyrG`ipJ6$qX#YgkpnsUF2T5H0IUs^^(OVgXW1)UZ$SBfmjFt02NXoOSErD33bn zU>{-!NrjvrdjZTx5C-ey=^jUkAP8?7+&ZLhc;MA4+rn;yQU-94hE{9=R8B?$b?{+e zgH@oivJ(6#Jtcp>S0(F9y;HZ6m}qWXv+eSG-kFO>t62ia{BWw%(|57dE)RGm zQP%z&jbkyYyz-wh^0W85(r_FCA>d=E=~^npU4sJw>M+~ zQBzw8xW-VnKzYDA900fw|G?I=v4tQ})JN5_;3Rnxb1x8H;S>M8TSz@-cU}kF3t1sW zO97QQ&tsssYSWg*dhN(pb}x>+&i@j8MRo($XJ+3cWhql1-RzQ*D8w(@5vi-YE>Crw 
zQ~Zs`LOB0W#bkYUOLL81b$`k9bIsuOvmLz2(O(l$pLz=PsmoOJofhCedM4`qIPDJt=>94$Xw+)~{Hz0uW9gIU$qjH@1 zQmD(|Jl5LW>Vxx|-kBI-24R~FPh6OprFiQze6M>SoC!D@brK%f@5VCsc*+N!Q>XXJ z`|MX6GM=5>E2o5`M{boZ=kV=cCtag=E(p}yH=E&IN?>rc6ry-N$#K^~@o5JmRT*U& zFKMOD95GKBR;aPr5aIaeWnk%UoXwcK_#c*T3Ol|cOQL0F0>yUS0R;jCam zN=m~xIE2tZQ|RfHEj0R}n<*EVQU#yj$Vc+$i4*jELs2S>7;P2b5R;f2P!rPW{Y*Yk zS)%@;ngtDBcIVN~ZSOTl<@`Nf$R5nvwpI_$S=NJ2eHMb=+qT(O(}u#JmSV|2|4OdU zXAi#Rahu0!6f0xmyt0-mdz-A<&bfw1sb#k5faRUzo_)8f@nqwvT0%>o)oVD>d6nTN zu2|xmm#EdR<3y{N^?8r~)`rL2Gs*bG*lh(r{O`rji3YX0(ByI$=UEZu?#d&#ZmDWV z-**Q}rY>Nx4}jVP1Yy`6pVGvSLIW&+Hr%%h#P|yD?w+&l#zp0qzmM7kk7~1f6pt?~ zng4D=0?0`QDd80~4}MB78aBeg#{>qp7Q)_W-TQK#vbnW<;n;0f+S?93)zn4?>j}Kn zd}_d^y1l}NTVx26l@%j5KNA{BGNU-ZP~a+&n8YqZfSSqU(>U#_&Tx{KToB-x*H5gf zcL-#|$ZF!9b}Vsta?di5Y(L@9Pa9gJ=@U{P$J}iuP>qNQ>@th@^{WoXVKO#8o(`cv zau4L^K*{yGZVH|Y0OpV2Ihu`-Am`RCZPvuanVC?uQTsS!;7_teZAH1j(F4pQKB#{a zk~kfcGDO@wJwF1X0wq{L^OMUAeURY=n!ZxSo`tYhJt-K5Jwfl;A1^sYsjDM>3-dVA zkBdDV?7xzK>*%idTxok$Z=i(v^QTl!v(HTZ1O)la5mr|1&C0`TQO!{uLUR)AO#v`N z<8S3ed)12lJmc(HY3(M=f?KtMS9f{{wwu3KCS44E_Qww#G8{~mhR;TZjwxpKy#Lfk zE)ytU?B$;5>jxim6dDXxAS$8p2F2Pm!o0%cs@_Ce5=bxn{%Q&22FSlD!Kb~f4kvyt z+3G~%(p*FS_@vxM4H*|g<$f9J2H z_cHm!b-P^akGm$9&3+y!e_Z$^oQm)z;vGGXtWZtUsF%U@oPBhkMx zqk7`~q~OL}RW9Z8ExNn;AS*Knk@LQX^dTu{7-ABm**OrY5Nc2d#p@|ZE;PX@n~g9? zqCQ+z2Qn~_F@6KpyGwDkNbw|S9;BWJ11`A%+pLriDoW>J_g;l>jLe(tng9>VVgK5V zXJtH4$o_IJhLrAah@y>>xPv;UD83uI$}hW#_?F1LI$mYGK&`5xq)SPv(&$tby5OG( zX5LPHaP@3-J3le3NG|__*r~VsE z_oASOM0(>2e7>2_kMniWB4u5IdElT9c@F3`f<_}K`Srwta7(Gzaa3x$mqC`L_{JG( zO3J*fps=I8(RNtN7N8jr&JD0`cvSx(@cH=B0jhdlHkTnyE0c)VHY~5TYxnn0rIl3NP#H z>qnr#n_5*tDJKN9BFNzr_HyZsEr`R5feMuw()^*(N- z8tA0jrv6~x=&0Ct4-{g!0Us_~`gQ!OSG4l_n+l?U+xN{PbO+*x&Ic4WVAY%LG0BJo z)JtOVXh$dKTktb6N(Lq0#wX|S(lgA&-oCl_X(vsTn~!fUkK=J>W8)RimE)8t zn$S9hsL~+!+1x5wSa3jw0b9^;siy%mVh3a#s4YDE`EyWG*Q_9%K5)_MmSJM&wwq%K z4lw!q?>Ft&I!`E2Bj`jzkIxoGf0be72Di4pBl>n!uU^gs;}wYnn&#I>vv>$@gc$j* z2RGyAuvEwWpY}9OLupLL7t)L=_w|<^d<3K;N4)itMGqAhkOfFcdKj@qu$jQX3K-91 zU%%VudAtem>pwNo8l8Imb}$kG$Or~6Xgj8}-l}5H?CyYs336aBFf3!)--(AzAJX=K zvWx#YCkG#stREAqWP!mhrVgAF(C-9c!gE(4_(UEnF9`?IAf1BQeopX2!h;x&>HLr} zfR9&ZLm8i%-)^)xgcHNSS4IQRtSazzy=mgLB-_tQh;G<9&SpRM3;*`V>(rx$r=dK) zruj!=UPjaGV+W|G*AUlOB zq6!Mw7?cwegWUS1$F;C3w=&RRk~CZ3&jM-Dn%*qf&(P$7NDH=bEl*EBtYY2%KM*Js z=pARhQOf<_-whAuK%NxI>~<#@-U=oZ*bvn~@I=MXv%X=C(R;Zrg}g%}e6=TBlZUftbay`2bG zO@``i1}Hv&2gHJ5gD4vXGo{vTiV3LG`FpwtZvs*S1lW9Pxe z1liA1Be(5@9tzxX|8s%I;1yxUe7WHNOZZ#Jn%%C2+WDz1GV+fxVeI0DBW70ArH;qh z{TeRU1wC@EE;{Oaz9+0#O#t3Nwxa->)!7U${;f0>0#ai+G%17vYspk@yBc!Nc6SUU9;Ii(EM)N;C^4W%dcBCk9J^b00{ zE&vk$;Lgkj-2jmA1R^}>g|I=bl(13D7#1gUnUxwNYe;B}v>8?2I^}%|N|SK>C5Pky z--TwroF#LDW3>(X-$&pBm6@fokrowvm5oQxH}OEVwW&{q#h$L_5q zwUX8G9rGgU{7$_g^mm6#q57{(gBS{_A?RCI;?{@1*_jahHq$Cp3ymSezP?Yttjj50 z)rE4!@H;Qb0kdUrHzs&vi#x^U+(g%*a6y^sVnfKHCFZbVxsxrhorhB!LER zWJLuXT#A62{2HsQWK~+~shun$-#Ya;NyrRhW@s$2YZ6q`@tD)5`6FY9>kAASL_M>4 zu2&D8>Xe2&$KUOfF#`LNgF`5ljc{w`K4ZW8j7&w$&0uwz&vxfI=;1&T?mfS#5Cx{O zn|pVL?`(+ZZGE%!{`n)vSwi=!qn?C?fpKZ>mchIo!C2?-lY7+px+l!GOwDe}vT>4q z-=qFd;&M+ptLuGB&k@@QqQuWA21Tt6Y1^ZOkd{F9>SAzwMtCL2Xy?IkCNiVPuB+&p z?16~1zxe5(3tAp&Br*4~j0%5&YWAwmm9DkgYYj;w0oTCv7dXuFdEQD^+)sYVE0tDy zk>?sw3o@h_E)R+TbP}m^GIc*d;il_J+m+k=`|dx@SZ_Em4>3#+UwLC-^uc$7dh-4$ zT9MupT)5L?w`mz}dxoASS(`kIPVFD{)+KFhm^M~!S7R;U7{ljRX&k|IWW_Ry)|Z6# zSGaWYbb8;KP*A3L0;Ts2LAN_r?}3y)6zq_MjS6wmdZ+%&CLNe9tb2dBGP^VX(O7uX z=8yv1Uvy(NoQ*I}Hqyrg!flZML2dZ*kJm})p+9YooiNxPDr7`&v-za>EftMX-#x@Z z9gga#E2h2?pJux~LtiSxM?e;NdJiw4@d%@{(|5sdG+n12sK-@(b7(yM&IWSHk7;T+ zf0S(X8P|wh+5L;ig!!jO3X6i*HQA_H7iZ5v?TdoO)m4V*mm@7)*b_D;$FyM~aq;HK 
zA-z=^-+NC+-A6MH_&^o_S#;eOMT1*JU7O(E>C2AQElDX?p><9@COS>(?o08gYV!oV zC+~g>;yumpU_0BCp&*oa0ah39t@Ql|z9uu<>uq$1;H?r$tTIX7oYbve5u8j`f~!T6L?%u)PSzcZXnU=@k{-{ zuJT@;q>2iKy*Brl(K{Mdb=~XWbP({;B*pOH{<5Q)NKHK%YRAG)xwb!i7vXK$ApVWv z*bf`+Qladesd~nv=Izab^FA?J1d{tkA~1g}rCJ5F1F`*8h{Yy=Hz2xqeQR{NLW z1V4z-QKp95W2pyy_n$%V9+^yx2S#Gx3! zx4Ye1c+Qr}>^);De2y+YLvQO!s=C>!--c&Awm!J9Q=;jU|5XJ7f1{ zT~fYQw!Td0`+F+oIaw9QP-3ygab0pI!JXf0%hd$4&osF-lzQcw&vN*UT>dt!Sz^P| zVcIjkrg1Am_wa_G0z9=afmL0tn9yPGA>o~Tg1jg|u3!9^bnHL36y9iwC78khB>|9K z2jDR}z`Zs@k>UBU(bK0;d2yYGVW;_y>0vZhl-9Vs1z;Rs?oOSqk=%_V%~N%pd#=TK zeaRT0fXh8cO@kE(};F(g`&pBdb|75ot0Yl{@<3SuF zy8Ls|yXx)MvB_@N_9vE00_TdCzDg5aN*H#{%A{EBy$bTj-VL&HyQlA)Sl!(8_&5o% zsl0}jW3e$F3nMg*3c%m1u1@Z)qI-hj`@fh;9IyySYvloPp z8K7!-?PvmWTp}nw+nt9lF%J90G|~d2oME3No)fR(Trg0u=r3LOYLV7V(8%*aN>M(J zew;(B4;BmLGb=x|KZ$7vctjET)-uX|U3d^)Aphg3b2f8au?-pUg;y3XRWIDkgqppB zw|MOqErxVp`$HXgyd1fvFT!v*j1$TspdFaARgH-Yh68NBxR`oPD4-miE;nAnS{tyRB%H^EY{GgeEB$pDb^$8CigoCyU1=FB64150k`QR!{ zgk~c!PtDHZD+A!L+&j;|bKfuns zcEuq3&I>dd{JbX4_%-`C%5Gx4zd|=plxK9^D~WM0ITzsZCtxA@IV&w`U2NYvs}su$ zEz!eo!Equ5(nZNzSH4+UVKBOr78A@Db?zycpmX)#i%&7%6Qhf?%@DG!f%XCyny@oQy7xKO9Zd|-+<8Hk`<2Wg8 zrs>4SSrd+hy)rdaS5ibB^H=HzQ`M>vq#@l@la%{xyJ(;McHNr&tVqs-6NgJ)^I%HJ zh^6&?Hp0&NjR$9M#185xfqIG_fhS{Ba3H#Zyc{Y8f#KEP0am^^L@-z%HqH(q%YfG_IT%5=05))TY+vvc+brUCAy6`%IeUcb0 z*sF9eeGb;+a;&xOV)CRmnUE|vE$nH%;KnJ!U%Pcqr1V=!m<+z(HX#M>w%g%AqcqGz zpe$Jg%EG>@u#}Q>ysur=8U)eU_S*a_S7a=1fOp_&-(AKsMPWZ8aZ8GuOA!DF#zD>LVn#gRJH>jSK zV$l=T4c5qI5)u5_T|Koj1#2DeRj?-f=d!nIZd2C%s>mf1eV6^*>t+`fML#cS$^3hj zYC>bCB1jtZJaNlFMMn);Q|}jqZ{QHBs-~tRiDq9OkeC2SmTY*Th~Q()C6uNr<(nJ0 zMOXft)URr#bF$J$l(??@>8EjrYab*3I2aGVXN}+6IRkRh{qLM&J^4H^H)vpUVHwG< ztG-a&()Xi^d4d+&zG)@t6Aeoa%W-pkPfV~De3=E0Rkr|q>+hBO3bYyjCVGT*`mCNk zW8*n{Y3AQOf{OrzCG1eB(U%MwfP+6CO4HPmk&q~YJTu|(<3bB`K-2wz_Wg~uHaWR~ zzuTFrMn`>y^>den>uDd{j@VL1-zpLgt$y;_#zAcmx{Y@BZo>okPE&jTnm+*#APNt# z)L{^GqvJ(pQP2Ps5;XIq`{+~9!i(1#YCs~Kj z+xar#W_4xTUz)UlM6X86;q$4U9yT!w#qD=qjf8?ejG#YnNQ&7AA$uW)dN)0V(Y@;$ zGZp()|66IWI+~eSwz3A*XMfq2$x3eNYdGmYUlM>n^({eNjn98}!o6=Wn9Zp0HH9A0 z@%z0)^_;_gdQg_^wEw#5{WYQ2(;r z$A^7;V8f*(BWD_W>&V$PSJ(K2G_IS1|BN1vL}r)>6bCEp6)O4^okijq2T%9_D@xKO_k$jiwnB6N8F_>KF|{Rv#hkJ&)ZKQBE$S$JCHX;(2QE9uUj zMGfF|mvr0Tn$Av#M6TC_^p2WNT3LNEe!@qdF7JUcneZPIA(PmOEg*llR&i(fvlTQ_ z+DkeA;c1((-9HVW;r~?l3@Kp)O-PL1+L>t95LvGInbb`+W#0TLg^m7j!t;edomLLy zdR^uA2Fbv2TjZs=J^R>;Wq&`5V;pqBmN1g|pP`3V{x*Y6Ld*Y}A=H4 zZRS`sw(isv*F$)lf3IYuix`LgwvSMkr95^l+w)k|5kQeZUd(@Y{=9M>v<^Y-U;hkx z>=wrzPc{yPhvnEffqQ=rqV$5%xuI3>n=O2eIn!5!2dc}HIf}Ct{Y?wT@$!$bd=#2@;E=ynjloP4QigV-sRnhZP@^vKF zFbmmQ|Auqn0QXlnJ~_#Hm-F0-3v1hDXV3QI9i-ymA(}(Mb;Kc3>$xIu3)l$XUy5sL zYLe|?M!&B9^wxUvRr_6UU=pEJa*kA2pzNt(If|0{J^a}vJ8Zh7YF?6|64N7t)4?Hr zIdZ1POyCQR)xHpJb}fgP-*;79ctKZv-;#Blcs~!4;?3HNyseUDCu?_#j=%80{w;t5 z8d{;lF-)0vQha5&AhyQzPKYaoB zBm@RfgHb^VvBcEG5r2Xs*uHlT!qtNAQp?w?zj_7fq7_<#qbR>E-d&tB%F@E~ibzaI z1ns3#)rRPrg#}F1sfB1%#GU=Y?;=#{VMOqFkTTiJoMdH6uWg3V4zH?|tW#1o>F~vc zt13fZ@&B)wbof{;aKyB7+x_#0MCw;A$qEP<*7f|KKh#k<r*0xX=LJVJQdY9oOFz1Z)D@56dMyaT$a!9_*VE_*nsS?@mwbj5)%> zcedWNf7i!8+084NjXsZkdNw_w*dk=0?{*v=)~4eKoTIF=99i)y$Ov$Qx81hBV(^!7 z)1-1x-S+R5O{{2oIx&-lW7V9Cw6Zd>qv*^0#N0KkCwtTxugj89MfmTfOG6USG6Q*M znw?!})=VZs)`@Fvt=KdI`hV-6{AH!*gCkvRAs5wuE`4LeUE=uSKSgU9fHaYqhRtA1uHHI9&< z*Y|n!Z5_C2Z@hi#p4T(wrc&=7o%TcH_c67E#G z67J5}gI&|2inZ}iYcmsnwsRx9;>N95ie5M@)H!Ycdxyno?HbvQA2f`Zo~&Ed2M zIa@ehGz-Zd4#T`KNy?TPq-F|{dD&$H!89m;bDYt;O$=cJ3s1h+qm2=uv>o<=F+olB z#ZAz+hyLz}cB%JE^TOa&oerN)yIU`63gHMtK9(K}dB*nCy3b^Ru(^%)84Q|bNB%sk z(|O-5RI&KoppJv~?kR*xJJGT$&LkbcE{TQ%|e)DGOQ`Wf#5;hz{BfH*h?cK0MHXr8qjy?j~MoN#c)(VsGR 
zOJ`bw8H5^I7k6jZL;5yJt&T$s+vumPz3 zFGpoM*W&H}d7@0+7?|akkaHY|xcIiq>#_G%*4BJ!k+2e5J%tM5L<1H%`E)OwJbOlB zH5=lm{W&=JzD!oiEL|pV7}TqfI+n-(=PO@h`=7yud1;@`AOlt4LaLs6XfT;jJT1Y7 zQHZ5<$v@q|lTDWPV>_4m)9;)nEc+cy%fw8_(=r0nAq=j_|G8t=va$p!+F;{{zl!IW znZ}pq37`UJICD@ILpME->Yem(mTUlJ?1hDA-{$7>G?3OOv18<2-O>nTm|QYlQH$;6 zH`J-IVJ-Kl_kWZ#wBMk36as3yoIN#gmvbE(8`_RSO?; zruJUDRL{od_lMg1NG{R+XVaJ4u5Xkk2A1Cme8UL$+=Yc>fOT0?H3hw*gG-fCB^V2m zy$5?0syi4sv5LLBy<&P;z}KPAyex@Hpo5!6`>UBC9VVl5sWy&fePiRYFe3^I#@{Ws zpRPM&xVa?v>CxY@k#%8Y?ZHUx(}6z8LE92{o3kK>pjthl|2{t6the0A%S+E#kkyFC z^J=jNsE{aYe-pm z(>LOWc%V&xcmDei{ILwPH{RZ*|8@`MMw=6umD(ZWY3Of3umfIdWFWE2=)Wwons-_kOGN?|Z*hX-JVMR4`6%nziCU0PR{O{*V!Afbq0H@gk|k8HrKLy%Ui3?^PTde z&Rp-Rwlo?~*AlXy1@FLQ953_s*5LE6TTfz(e#4Sf2)r70GIe&FJdh?U=MZ@d*~$3f zg_DMn{vr%VjH{~FCHL&g6FY$W_4ld>MuGEmYegmfYKvBF+!}4~pUsxOXMOWs;2OT& z08fx01+ug~Ul0`w6BXAlLn=bmXxTGAr12+{gUs#PewUQ0$0d$>7xdO9)OK=GZohVP z&NXEw=hjjHam$BT4_in~j6KM>2&rt-Ll5=7?D_6knSl5df{xLir(O5DmU&`S%$BXj zTOHv5AW852>(qY&Bfe&6=XjXM2dS`R2Q7jdv~RY5%R?S}AnoumtloJ`8Pl^927S5% zL#v53&Q#H*NIR=512j0f7oP^{))vBVL#Cluk$q$)TEG01eW&_qEL1 zo^=6Njhp`G92k-5t95|l#*bW}a16h7=W~rVF=NBt-?`>#TI>?~imCC`J3>W^3d6qV z>9Govzf!55mr@2{n9|77t{pmpw=3Q{oY&3V@FU*r+)K(95BpcSVYVu8xHCzAG>Nos zO#1MuVhu~mXXXb+RtnesKL>wO`x(#eYUathX>9y#JEVfV%GTeb514?~*db zv3sBQq(~{xp}8fbE$->Yhll80ScyqgIM6KG4szWBAIwEephB_Phn2d50YyA zI(hL#o3T;3LdxRMI5144uHC_Va%U%(IpQSSH+saI0>5QAgWXuRbG9`yMH;4IqwzX_ zxYsqey1L5&u>>bTfO2r@%t7UDVuo|CSgufuAkKAqf|`O*l7Ik-#Q$Cig$LmkQ(UA! zp1d4rKT5#Y^hpr#s;t@)69ankq1|)MY8o}vr%o+e#ND(Us{7JHW>l!MC}xGXz^^B_ z^0wTriH(Fv1&b)Ph;v6J6XZv=6r_El<_$o%w{oMGBz_tG*q9-QjBEaqB+w`|OSTgsUM(>23*D1e= zsePTjNgPPwKerPD!nVH-|e%;X|3t%d4AA)_q%>@Obs>PfNc3 zE5`-cZ+d!piVh6d&QEr?KM|k3u2$Lt?SL+8D|gjD4GABf1)Bw{O7&E$ZV(KVzm=BB24e0TEY@Tv50SDH;HLW_N3 zAv>}m^(E(JQ!L`+MQTrNM9fxL>psTHX*iEgElLOM-Nt&I$Q_4<*1^T4Z)C^sZgi?( zG21^*{^`Z{18s~!S^+WD)!)|tU8E*}w5h;d?JZjUy$|I7)@}Ec6F8KiQ-Xkzp2R|r zpSqy1@a!IHOa03?;MFgn1IhzAV4w%qkV=f+xDq=}jJ?B1U%xI_XAO1Pa(ivfT! zq5m>uAYc3YGNw1)paRo+hb2N>>J*Pqz>3~e-X;h)oxZd=^%(rBR!5v%Ba_q>PLBpqt zpDboT@UX2qx#6s|SmX4SNxvp`h3O(5@re|ryp!SfL}s0=a%MNir#A}X@)rX}hXO5@ zn^_JkvEjUud=W(9N63Tqv#>BWU%KNdsEK&UMRWf)wZS0Xg*J&0D92yS?}1kJv(N{0 zeMt-+P_I8E1nX)=^Xt*A)wM$O-qzb%!XCJN*ZSBLgwYsw!Tao~Z(~&77TprgTdP__ z8`6B!3sb)${&n=%3X|Ru^{b{bpAxcY1;;VPIg%8lO89|$-&T3Nv)LRa^j zVZOOyhNHhThhe$WebUHg z@#U19J#gwqn98t?ECTGIh=5f1jKEc*Hj#wh==UH8(YE{b-oV0xa!{W>K!rH*_`L6=4Q z%zp8rg7Xtg%S7StuV3bvl(O}jEFk0 zE}JdRbD8{`?1xkb_i?<0j&yd>lV5XB*F=<@QPtGCC;0Dvm9UCl8?b0nC9&aY5~@c0 zcYfhXNqs{r3)iZY1d$1j*3@7bUGVIi4GtQ`bH+-K^m)vMTH~lbr^)wskhn2tUC>Z! 
z-dgqEn2Mw5CS9hwupM<;CwN7qFM-CqQS+8Mp44ysP)&M+7T3eqMVT-+_sXa-G>m=H zkV2S>dg)_Gm;wc^+W$6MRIMYSyn6NO!RyL=f0_%ozx+bY#_peKB%k40WfNvGF-ZHE zRg9Z8qr)3OCK4rIIGIC!;n}S{CibH|SQ0KFqY3uC95&4tvz5P(89rL0HHoB|n0HiI z7wTG3I~H&|F0wfP+v)$jDj`0|9y76qh2M}@jgGkI?x?9wOl5gwEG_wPRZ+FGb0c8; zvICsorx$t0>zBM>Mp!wv-N$kD7_lFq%GK9=0!Iv0s=g=6mgwuLf29+NOHmKR8yS-i zd-Q=zU1dArvz&9nPTd51R;-u$&&_Yx%|W$0HWPEq{!kik|4JA3=2_J8#mR)jPdT5RZ)7V*MG zLywJ*=t`w-&Bn*L`%X{<6dIf(n4qgz<)G3nSNUXlBS%55R>H<8Q2EZirJmPg?AyOR zfTfaT>-4922 zCbnGXTs#GH$ldR}gh6cEyTl>bWR6SD&MqMed7y_r{CymX5UQaE`c`61LO}_;taTgq z;$REP71A%pF$#>=ZV>C1OJvQsl$5<63|!9CQacv`D>8NU?Q@x43xZp|9cL`Z`(Ncj zx`-@DOYZB-9n~7^;f`djQ|fhj5pYf*j+YDzxON^ph(*znpAa0MX6BBE?R&Ou8(F|f@rKhe8Pr_qG@?O`o;z0b)B1opQ7R# z;rp-BeNWH2HuyR5Fgh^kl2b=Xs2z=7E6ip*x4JezfS~CEzX0vsDz;|)Bm5!$(a5p8&^|s_ZoF3D%|88$iiQAX$ zMBon1ESE7M3g(F-{|<%d4~WtWpR{_a3C_6FhHWiGbi%4=05BY8V zHL2TAOmHiH;lJX07&q2`B)bu3ZT_SQkQUhnZB`jbzJYj{bwB7fFSvE1{iCkU|s2U9^L(=q_Umm4EcbvGG8QE z8ijhmX6;ygYf@bFll;iIv!bMaGzC+7N%iN7JLLP+6<)yOO%5s8+MGVZR!owi*1Wg66BbzaQm&{l;FN#p;f-;u zt81z&IC#^+j9O!Hz}R1v6uae9-x|8i7e5W+I6T1au%^lSvmn+ikO3*FsY7yed4b*0 zFf_!d9C=J06w95N$|Sq)8ci zoB{Vrf-v$I^EJIB%k=X<0PNX$X;>WnEFeaI_Kwxn)g1@00{9F=Ws?lrrp_a^4;&?Q z+-kj7vB&)Pr4c|LsMYnL1_EeIJ=73mAqac4uxyo=n3ROtS-`^(4_b4A(V=j#{0zX2 z!18xwc=hi9YFB{ViP;iGa&2~YcHKuMT5+&8zX;y9xOjM2-ql%?&}6LAkM!fa-U1|A;;klrI4CFYA1z)2_iY4f>>! z`wB{UN&2awTj7LUA3i*?^2<@j<003E*iil{uT?5;qO!V{)<>YQAw0@1$gex+x=sy2 z;swayz>YY|RekL&pg&SYZ@vd7NCYjb{(K6K2EYOegZj(+%kN{1!Gt6R2zTtDXcsW8 z>YN`2LXCNMwi>h$osa#{0+?cipaesKhtp?L)I)$@ZwAL9d~1Pd(;D9hWR(lz>y}@u zh}ZzE{)>5fYxoB-LaUU^k20Ym3UnJdh z&<=>8GZ9qBp{M~~u#j~NgEk2&WNrg(W|VYv;Xn>n>qB+1vqlvfPk>Zl(-(_iQQKEa zRie&Z-Al^e+6-Va`9(9`(mrF+&~-t?V=s|Ro;RlwRdezy6IW|yf{`f zIriX2-AAoS-~o?s``5jV3%K&fF#t{9gG(rsjZif@MqbG|z9rIs@v+IcVJ6o?S{fX_ z@t&{|0uSIQa_m8BY*0h_Q14^QSy*t~1NbnoXDpiI*~23u&hTFX?K(n%04~ZB-lz}l zK6V{6&Yb~-dor&MHvD7X&hpO+04E?$9U6%+JafH3yJTGBOOGURFif36&~P#iB;~VL zc1fz1B&4plKXB;CDR^ac3IJ?rmhiiA@$oHyy0aF7?wy|tnr`lb5XAy8q`=su2NOFy z0|SFAhKBXMy);WpOS}dznGlx_KCmSeixKd@(())Z{{n%8>33*pKYF%dV+(ar%%L%UbC%_7sS?i-+!-^$unV+bAeUt9S^c1FOLL}o= zo8Jw;s1%C5x9d)gcm8yEoFe);M)2nTJ(+)88>opx3WxA0>9vv)gZfu4+9W1v3e~2> z9Dyy!{L}3Yjq5N|uDr5IsI+Xp4jlw^V$K-@y|s`MN5oA~AX(pP2af@uzyRLr>$`3B z5{nPW&H#=Orpp?DAZdUm-O^+I{3)%`S~WU4I)+P{pDw^mC~+K-(0ie$R;u2~Q-}f9gG?|0O&9m1P8V{7|-DVF1k=53R_2uO#}XRq<_9Xo>4{e4G7)3FDUsE>0d9uXr(RK*R@PQ9Te& zN^TB`>V&)oN3A9xa781dD`^|v*zc6#6&E9{bqE6(cPOOG+;N7F5^XuRnFrK4I#`-RjQ;$V4u-e73&hGT zKr@1_hRbdbxQyd4Y*3pdEIOf(0FIJa214>1&qW4EUh&!anwJAfTxSZ1HouleqQLPx zRg99_)2>`%N<<3`BiPyNzb}1*{*n>!xV+iBG0l$=ogOO#IJXKdhUfV41F#sNVUG;b z&1&`)U2B5gpjBv8M4oLh$Y`pnt79G|g_kzzi-&YVJ0l`$E2*elcOSX4ZZSKAR}YU1 z{lzhArMW$J(ae%G(6sph=y5{Rkf;P|(RUCl!m^cQeP0aLt=3t03A0s;?cViGf-U^h zlsYv>A+_d+bpy!ukGtP;)Yvsmj=hI7Rn>ztNBK6F=tTq7LncXR>VE7y`8*B!G}we? 
z%4x4K{b*C`0Y5FHH(4Qm$|d#_yIPvK6+bbire__W(0n;ap8k!96T$wWWz{8|kW4{A zgvQ#WbQ;FCOK4Nic3p{yp*B@Z>?F8 zGKmRZT3rSwWkg~Hcp6t03_{4pA*=|BR{(Ss0@~hO5eNeTOH~5i9a2pno0*JxU9RFkoj&8F`G4W4j7MLAZq)9XCX3L9Arpx5k{_yK@PeJpnlw2yhrn zXdOxXbU|tVQnQiSMKd!tU>qju*pRRdnT(G5GGSs_Wn+O^)^1e^mn(Xn71KUnCf);bgL>{P1KGh z1beIElE}|I1Kmt3Jqjf}N0o&@95~HLyKpf5s0KZ#(76-ZlHM@|5eG;6d&!G*Ia$&)1>pqJT9C&cdkEmd`vg13oRtTEe3gOg0J7D z2VyQ9$i-Hp2|-mM(w@EL2$*Js^+X&=SUfX22eaA(*hEW=gTTI zidi;>Q{MV43mXB#ghNo5`S$4Z?CcB}e{XKBb2~Vcj=`a((@#0fUm2Q1JcVwhf|sFLm?U249lJ}^?r>e z1uhs*Jdj|)pz-s@c>)`~K7g=W{lI4ek^^X{&B28fK1Ak8RQ7BpM91XbVsp$``*Ds| zgO|AHEj!=-FaHrDZTHmV2rHc8<>S5Qs7ybl-dNezD!kfoXN8UEsE*%L&+@>&a0bF_ zZa9xVE-y6+=N4@jyq&ps-oT#qo)*s-~G|)1>3o=Ks-HVzYgBR}Dw0^yJ)~ zuRn8O){W1Q5zrd&UZJ?RjN(S5oqzz}8Cv_gW=~xy#sGv@*zhz4EmT*o(7}Ei28{)% z@7gfZ=+n(#3j>A_!!9mD;6mKx1{B332#JudUkC)dd$U{cr2s~O0VRCP+5?b1!db6O zN*nDvwY9b1HPDU^Ffc*DbLmX6%J_!Exz}naZxa-qHgN8H_LmvfQMPsNF{Gv;)Lq?L zwFL6#m9OF;qLPF$K|w>qZRZQ^waPIk+W7${4OGWShPK(HwP1))ppJADah88Aj|u#) z0ynI2aA`@r9uXIZfJH2_eh17jI3Rw26H7c?o1hE0o}rs}5!?{I0yZDo2vOV|jzBqT zY6DKV0{7sY0T#ly2P8;mbCe>e=wy7wP%A5Pv8@Zh@}ZX8+1as#_W}&VGyGv%&l{k@ zwd@c5m>wEH=oVlE@?Lrt?J!SeC}g_9&jnD>5cmr?6)AviKL2S=~1A@%AIK@nQ{ANFG4$`4Id)2J5qxqj1Om^*^3Q-=Y zbnA6|*WJ*`fxAQj;fy+^mL|_geQS2KiXOkuGt&__YOwb{a^?Q+m{uY2iQ`fPmxDdd zoIlHb7UwZpzbwm}@{TvRMp?;)Cqr-l-O`}U&J1K}$o7TDMGC8}7C|Zq)nc+zC8PY; z&p_@Gfp=$O6GFhW5Iq2p1t`+*{Gx{j@CbN~Pl0hg6N`@#0|TcTK-->dA|Ry;;82eI zE@FV(vcW8_Yl@snEPinnL1O)3THl+x^h~2`NBSW;twFb)5PuR3AqYcO0iB`Il%jqd z_KR!-7C1S=#5{J&NbjpR+#+NhbSz;6MgpE_F6Fu4kAbr1xh}oEz2Cp)CD}GmL|EOt z83W-40<;&1L+qpj=iyfl1606%XoT}FxDG|)!QTJn%a?)yIGbBoTZcg0G6DG*6%W;A z;Pca*GD>!e^dIX5^9G2u@S%c!tp5RoQ1`ythx>f__iA- zKdcq@RGuB5FZ3>HzdvG2k)Yn*yk4Xk6T0D#>jhA9VW9lr?}o<bGOc!z+#ylo5BN|GB^&2x2^vCE2oEus&p1+YNnKuG8Uug z|0;ZygFZ5ZmKD2UdN+F+sebBw9iWjsFD!RT7A0-vDSv;_R(hU9nMUhrh$U|7hf!PVIc-2fzU?Un36U=+<#ii%5 zm7L$2Ax{Y1YE8hNYiAA#=ha3@3t#|eZ{C8D2yx3fd5*bq_&|M{R`_I0;jm<@m-fQ9 zhyfg`y3qYIK|w*C(B6>!?AaqpVx7Q`P}Uy*_d++X6JQ!GZV_ECE}tM} zmJsIj`tAQEDgdkrV&Nk{I*tT_(`KMXJEQZY+^!FQ)c+dF4)dN5Vl&%bB|xP`_utFn zDQdRb1Pcs3nCU1nwWG1b(GP+NC@amg>Sy1P)058jZaaBd3!MrvGTf4Yl+45Fy4A{$ zpLCuc=@iFYRCmMShM#h9bq(q9i!hCR@YrgPDKBImj=CR^6vmURKDu^=B85*b)$4^G zKgx!H89`~$pOpzkTwGkMod@aZ*PgvMzLRp=gaKw@YnitnoR}WUkdcx$LTp^hd=Z<-d57HdnykanTc2t#9!g~*26q#LBe}+mv90j=5xunm%UO4wgz~rqkjezON z76#y!(FxmJtVrFe%7e{=PTEJf!eJl?9B|mdqW~6agv^GV7UEa}CzBjtjA!kRz65N> zvGCKMl(y;pkCr-vU3(o+z|Rt2|KPpOSO!B4@{Bs*1$Iff{R^zqLY_}7>a5DJ{)VUAN_vP%ro20a#^`^8JKd`C!!_|k}9 ze!IQ7GMPVzpYC+6h@}bu)$d`6?)Og(Xt)FuQqMAD$VbllxEfM-9bB1d{o>*ct-ZY& zZYliK##K>ug;!jz(Ccie&fLFr|LblmXet0%Sy}|q;xJPbE#P=#p6pM>$$kXqH6o^UVN)8MYrbF#d!N8Cd%Wz{UHfUhMnU!9fPFiD-iT3u2!Dpjm<~Fe!|U zvtGjdY;d%P@H+x_W1|XeGf;aN_MQnfD*Oo9-tiwl?3Q?ylqz-kQ}C%HAyxSm*ESpA z#4rU*?>GP~+}SnpnfXVM4!apQ!C?SM9`(?{iB{s1U`Pc|pa6I_oq{W8V2!DR)S--w zj7@hokqp=zpnCw*iT7aYl>y6A>nrC81yO%4A*gYl8iuUB>mB2#7p=B&s{G91tdF#5 zL3kwJ!XI%$l_Z))TH+Gcdg?x$r0*=@!$B_j+U_q#wmyMwZ5?>im`(LXEFD=yWK0g@^c(NlgB>AqcOi2wsI%AJ*_*z)nTjS{)kOaH` z|Csvjc&^v?e~A(;Sy_=JNkk!$D1@X)Lsms8WN$)bWo4BzlcXXmBRiFu)k4Y4mc5DJ z^X{DU`+R$x^T+8VZ?D%quIst3X=`B!Vv*ZgKW+)8Liro3m@y-vCD8F09cn|&JNdam zjIdUaiV5#4@<7->C54FFvWtVmQkp>?o<7>_TNS0`$>PPExH>(|TUzMFcKtNC1mB+FvFz-f%j zDYCwieUXfyAD#5=7h|T--iNGv)b`h$=>-a|v`-zuo*$DuFuUm;V1-&Ha3Q{*{*ytq_B`~hGz}HI81Q|G%TCwn+LTV}3&ywJZ%5x& z-H+j?W0!mOZc;dj=WD|T`IMB7g6rGAn73{|YM3#yLOr}~4UHsVh+S#bga-*cP*DD# zw2|KL@>PS?YmLq~8sNFOXFZSz7l?rQ)D#WsM@~6&#Dfvhwdv6B);A28dMYU+bhU|0 zx$XkBk%tMxb}&XGo~jeGnQJuO=ione`a%T4aNeNZ7@rzAL(-^_M7Re7F+>39sm`X+ zdbp$FtO>0`HWkLHJIoBT0)X=+g#-bs-Ubf(O=RN~t}0GSO-&s(VKW9Y&&v7H&v#-E 
z%3JiAL$-Ap1{JE|iWl+PYG?Ao*$*s%SjUWxIvdre8kccGlW=uR@-hChG+H>h5p|R& zs@cI?BgmJsSemnZ@@MWq&xX*gIB2ktWO zHGWkoDWF$)5K}mONeuj%<*Ah>XoIooE*{j6@Zr$^;iMkZr7PDm) z+xnjt^nKfo zQW8~X<$l?YK0&^2;?V906>N~Twu4`)W-QSQ;A*ORRD}x)=1#;zoxioIRoyl#G zu`FP2{_WE6O22+Ds9XheltE3ks_)wFGmA!f~kQ(GDS#$2N=M(X|MV;~E z0KKvSR^&A;i4vBmqMK-pALhSgpzk@3p)*@CwX%O4;3uNe5zNjijPLqmBp2ORxM?5( z02WyFQqOe*VhF`genubyCygx1jp;+?=>0HKEL?h1$kXmq=FG#WF)RY$1Xd{b?sIVb!654%UY{;Tc19R|#+ zhV~3^+dQ}K9*^1iAEqsDR{xnP9-&bEuUR%!6yF$>o|9WYcj4yM4H9u(7JD@qUr>HI z984Pwgj2aqU0e!w+?uTBZY3kweq@9RA-AI{|VIr|cli9cGUfVjs@F5&3t=#Td*4+3_nAJ3Sg>_cSI zxS|H$i}i&!Ta->l7^Tj(uO0jnLWq@Wb&>N@rW_iZ%w`sTUwf%ZVSVIzP-AU%hC|l9 zPfrs&TISX@^jhvXp%yMAG^SffqoVzeu0dI59g99sZ|M#S*`<>0zHO5f9zjiJ@_tUV zro%M6id$uTvKbo8H+ASL=j;6cM>wu(yApv;Cd-EL$Rr0rn1->1!5Lb%w{ zQ6~jNJ*Sp)$7zp}64~^oy@uZ7Gc$EFb5ShO`oMBfrq(c#F511xtEA>Z%|A}+9`>F> z&J5QEXN&Z6er)Ec8dq+Ne(#@-jl(QK*O|c|<{>WK-j{dnVdP)#-jpt+e)Rik!!3SI z@947^U6TyfvFPzw{otgz;ce?Cbf8PGSZF!W*=&x(SUY~!F8;#a{{_&8d^n-3Dl4zF z?=_!5g1WZ$yU%F{IDWQ1zVu8@fC8gJ4~L~8=G;@d;)1qzlypVZgmx0JOKG}8zg&ZS z`Q31#`MK9Q7zP-{lDLdd@fIo^$CTt)QWePy9 zo!Z?XApknBqQHQQE;JjVrXz&}SRped4le^B=2Jap_|Sp6)

z@Il)VC+%1~%2%!)kixz{56?L$1Riu6txS`sa4mxslF(&Bwsln4`BbFgWp@(!&tDs0 z5LIa8-@I0*?@4@kD=w>-bV?<~ryBL-2~C0RqcJmSA(YD{Y$CDQb!jypXyca#P8)rx zCTF}jn^UCQQ_a=znKtm(*k|?k$K6P2|I|jl&3yg9YR`R)-_PteZvSudp#;VFH^g+r zcac=gh9!`r&Te2r!wxqURjJ@N%+AMv@ML5HFbwY8(i8vsn~LbTH)k7b{`i4~X#5PE zRUcGo?QUp)|5Z+4d{c9+K6_1jew?<#HcwJP1 zqe?N$&Rj(HfS;f`aSt-Imejb3(6jnUFK%D_I~k6e8*hF4g@K=KVzz1GO;<#w8jU3$ ziLA*K$D6KyhQh1NAt@)#$Kxc84_h+u4w*+*{0&k&QesTzxE$?U0{AIUph`!2b}W+~ zXJK=%$`hfY&1?yGZB6Nrz0Q%>bFPG-zsW)`{*}Rt)9OpU7C#fNUnZjptitWH;BFDp z$O$S;^laS{q-Bq))kg`Ji(JU-Wp_W_{_yH3Rf?XzG)ot9^cLArx27lDMUGyg zJW*j&=2y2Ph9I>}F|HSmBp$pN=x=6C)HHJK&HhgQJx$iCJp=IIbVSAMCBuS;m=pD=Xf*sqks z)}N=!PR3x8T{d@7$h+A5H<5U{<8sy^Udh2_7&1A5nSf8WmS3<<7W>9l_cL#cd%w3e zr4(JYViDOUMD>Pau9ac*T2~&2FHR%J*Kp}~j7eL3j1?i;_sL@J<>z@) z>f~Ff=jtuZR$k6IFb_{ ze>Y0XbD8HReM|UA*TbY~qrlqALW4$(qldg3pL`B(0-M?j+nrdJEX5m{!lF5@{`(d_ z+K!lJOxtCVr%SS>;WWR$Te7-2z}>cG-i|U2IDPj6(~>DWLl7U&*O!B%(0tOv`0arE z^R529(X*92%j-srS$SrbWHBa{=Ia_PsQHYoy1*9kuc?#_>d7!mL4MV?$s{2!K#Lhe4rrf;tjv?f`{zT<++T_u8T`{13@+J@kjN4UYb?ksr(kdcE`XUiVq3z4X?=T}$0OG&9YW@izvNM!#E1T(k}Qp3Y&c zJ7<$gp{r1TQnWN`y+5_G{j}BY?V#4SjE5~b_1SR21KecWI@bS!)z3FvoCPC&9ndj^ zX-VY#FX&uDD^!I-rn@G0n0b;03Nb4@cj5**L;aKYu*&_n1repGRXVaWgd(}_$E47sRg76m^UD`5Q=$N0)%QyHL) zN_cfj%?j2Bq&`pLYx@|g83yt!p1Ld=$DFUrWKA7?xZIg^Qu*5YnG_pu>C?~KzZg*lBi-b95Q{eWtOu1d89Rc?u{`ikyCbu&Wv8V zYt!N3@##B>q48{o`r+1Lw!*?I+RyC{qXvErFM9u-TSd|SAHKdatje|PcF{ z5(3g7C8%_Zq$nyP-O?>7xDk{FgANhtmJ%ryL157(ozigTvvj}T`(5W;XJ7k|Ze_3a zJoi0g%rVD+Ni^D16389x9xFk&lVQS{i)H@29*fHtgF^#`W{$D5h2p3c@CY%-bNxBu zo^qkekCd#xQ|6qr9CkHk6z!hblZ$ba2Z45dxC?KEh}QCMx1t|ESoDv*I(dD(_wvB3 z>^`2oubs;ij*ZLgGVF8%)=c|F%jDH}dfaUO@ufoOP(~JI;n|Xy`|w#~nP51F1@)Vl zFIc~pAlj4+=?ZWK54 zzpkF@o1ZNR*F}NHmD2*{4j(Bae*62In`(M_nM9x;k#RS@a>JM@Qk$07B<|h6lfMr<`8ZkbLk|s7?*5 z(S$g-ZXX=##55omYOqij&_S+t?P8j?(%2s)c;-4tj6)cqAPtf4e+q>|kJn|g#{_k_ zX(;OQ)PJ{ozdmc6uM{M!hEdCt2c7^-hPC6wuPwX($-(cLYhQ)vtkbJ^35H{{6#hCV zca5BSXKzkmb$AsrP*Gh_iI)epB!_}CDpJsY#WrgmS~wi)Ukt5hAqDx=-~N+wEcy`&pLhBVu>f!4a^h1D z=k_oaXsswS|;&YO4_H``jb!Fc?L|WI{#Gx z4E{U$@sRu%KWl#<&_Kc3Z@maX`x>Qd-fJVZ{&`H^0o?5E(n&Zzmc-~PQqWJ+{0(_p zPoc`tUvWC*YD$vmznCW{`Wk<8qParC^8*lBK)rFWlRs}j^Arbu62-2$xXMHQ*9Eex z-;GnQ2brZ@W^kFsVj%xahIMk8PIG#=ji(&1stZ-YNg4YVXa)hdV!p_i^;!!M8aE)_ zPgwhz`KW7B!sF+7k8E`PEfq5Fto1fDUNGbNNcf-8fgQ)q%S-xy=EDJ9911U1Dprbc zux}*kk~S_x$(InK7Na<}S{ePv80+s3B9}6I_umL|qXs8I+g|44w;XX9e5}WL%FvNe zhd3We@R89IwcK%dSjR5G&}Uf}pnjKbQ(ZK$9>^*vxzIeb73ZoDGqgP+7r(FRq6}Ul zwqF_nDvtWX7|FaKc`qXxO&@EJ&1^c1z64bu#N_APKXO93)r`h>Kh`^W8~=APKzs1_ z;Ab|)-oyJX8Bph%hugt7CwmQji^i55Bqsho<<5Y-#cUbUmhID;88a&UM5 zjog{10Tt;~@A1es!aY>;>eL;%@ILaUT@BW7O9Wrnz)RJzmoS~ve)IkDm%v|<95-BY zNKWu6dw=1fuKwsdt8%fMApeNbX}|_`bz8mRTZCVirT;r0+(@65m9Xr=;!fAZWh*#$ zBWNPbDWm2rNb@GU|NEFYNqn(zDtE*8L(Ck z$8mPQXJ_0|PSr3QBEV8xIX*}&LGKMjW*DY@fqj3zZ^RmiVbZs27M+?s;K-IUf!TWdY4+jboQQ< z0l@5~mxn(eCI4F*FI;=xDBuDNMXgrXc&<`-NF$cL*=PTAm_DHAo;!%QEPZG4%?cyP zi|@TcG!lph4(&OOWiqEmYirGOVZ46Wrh(HF+MA6f6pJq z1OPSy0mHFzbnP8*oT|TaD_-C+fFA>{WZe-6w}4HIS(`=?4MS|`Q`&*tjI<{$X^h^%Y5OS6#<@OEpAd9@47!Pfy%F zZ=4ho)}96BjyS=b`FNFoc@qd6-CLzPblPJTb^SAi>aKF1W}B@Q$5lT0nSb3|j%t1D zyk+0UFQ30jE7p3+6{=*jXk}J!kP89f{0xxCU$MP;Chn{SD0x2kkx!3}gF{I{;Rh0x zmf*+y?CiFyoXdC6Z&@3CvWj@SLKugCK{k$eL#BQ6BEX|+0piUeAI=KJ6p+uwVs~qAkt^GmqCpPUbzuy+pg!RPlJRCagqKvdv{a_@6@FdHa^g; zER9o+rMkOPv#=#q?|F}nj{ep8%<4C{zO9Xpty(k(lZvzGq>mSL*;zQeYYE$$Bb&s$ z=aySm)`xitk}T`X0vgL0{whdvS64H4 zf%bY9$TRLv#mKHm)PWrR(qIXW?8%`Jh;oO3NRHZpCKVnCJ~q3Jd540ibM-lQ)l@a4 zgTe*?9L?Qm+ck~L) zFwkRs=y?3;lOkdO23mkEAlN7(PC%V{_WEGC9qaMYj#;vV3(-4`v{rCI1&5^|kZS;y zKsFv8;<~yzLPEl;SFa*kqYmIdm;tIe*+ymDz_kv~zOYU6BVh^B4>DFbi0UmeR&A=Q 
zEV*@;PIaMQXn&rPYJnKDPx2>PXEOaKT0i_M%L7bVRQ-%7mLNMaXOcNG*yg6SFD1^-&7*&Bini+$zpfKv-IC| zsN9kw9xQ96-rE1X&B@zqb*MjkuR$R!TemVy5}6@nPL6{#VZ~hP6Wt<4CMK|G=3^5O z(A*wc0YDyXilafDF30o$42wXAGIPO2x&sta!9i8^OS1SGTE6L8v4_{o?sd|DeBkQ_ zhX`8!R**33%fBVm-(CB7_zK9f;=yl*v`mg*FY5{;%MOU--Qqw|8(4eArly*XgO$|Q z?J4zCsi)v+HM;E%DvrHx)flCI4_dlU$11ahccbrHcD!#+whBhtN-3UfyYk%Xys~Us zHhe-natvNw)bbZqiC4LDz4aANcPPDNzm{FRHh1zD=_eFBd$dqj*$j}e;}GP+W3_zVI09 zzf2kN&v7sP;oMom!_@qz#;!q+ir&{t7Ei-hibld;GzY1^Vn-E%F}VLZ7=jZ7dw4i>tthr(*Rr0hQ;Psh}tel*_UgU zsW5^MTB5iUm(St4GUyv?Z0{?_u(SY-DtPZRmEyH)nP6i<2Od`NrdE2rpo;5wpEqit zRh+~0hilAD_!rvNnfhudDJTRFrkBkW{=fo52bg*K<>@>4Q{tT5kDFt4Kja=+N@K)1`0z?g{e}}M~eeMtoqI1 zhAEj_)PP!R4;qw-T~?qM%S_8}`U%9-iaj>15b-ums`LC3Xhabf473PI@B_n{tH(f0 zlvt$R=V*Wad(Ii;>Vb4K+u5_YO;IiGpI>Z`<2^B<0LeViVnfn~nA2=z_3%X+IDZps zgCI&-?Uf3mdl=!^kxD1f>0EGR{@M*X=7O-u35T8F7VzeJzAbLA?d*Vg06)logJ0N$ zefBY^anMP+@Xe%wBR$p|L}ulTnd*CVxtCsQ zttlCe&AX7oe2nK8`KTe840!JTj-=>e##n+Odmo(9g9B-x5*Zi9KJ59l7S&vm-i*CV zf2jL6AQHs^d-zAsKG_`kl8ficnC!9N0fR^|py2(OHG#~@n^a1=xz0NQ(I9_}LM2Oi zoaR)H{shvf#<||>lS+q>MCL)-)MA+TitEy#KZrwizNrL>&qU#er>k6-a~%y*AA%a6 z`&OT_g_8sOuei@IgG=o2+fqdCwo|CS67G0Y`MMU8(F7HC z$$tPYqrZsC%!4GuJh6a?d$CFl;idPkP30}w$BoBf4krR9?Yk)xg|{Qq+GY84IG*;^uGpc%s*CP z7FlAI^T0U|scS%zzQQ=GxO#?0w9rQyaB+(z&Dm>>>gE4C;sMbntf7d}2YwM2p zAAw&KG}(14wmpxLE50@eGHq4P(Xi@?)qea0xT_j9o86LdFI1viP}vWz#|6RvU>pKS zGpO!ulAM45FLFw|UWrsV&JZC{L<;=pK&;!NaKa3A0mRio`qGaD{lwxhvB+U-UW-Z6 zl>{7LX}~9TbLqMk6|d0;LJ{llh@+z;>E42~#C%kXWGh=3wrZW{Zo=gZU3%G|7F;%A z2~^`m9mRG_wHxNK->Sgau>3s#^Xo^@?m)-ITx+L4K#vrIE6__O*qv%Hm>GAQtfTf4A@y6Ok;N3`-> zBtL!6ej|S$DX(<3Bk2UOL5i~p#EDZcumXg^*wo@dvFu{o~bPY9;@>2t~UX z87PQi*IYA`&ehEc=#f2P0&j&TkXi+8*j;SUp|1RK^XARwPX78xD1GmMTI>b0x%Ix9Bvijnt^v$nH)rXL3S?vmtO7gs1dU~rk zxngCj2Bk54WT55eVu$mDc3M{bJ$A%C>VSq?%}kNy!CZgJeBJwM>j|?g^G^h7vdCVJ zf6?d@gEBvGnE3)c43m)Op$mR6ms=FAy;eRJ+*;+YI*3 zY<_sq$I<3wC(M8t$0V37PQUsqiMto#3(f^mP!3u^eim=4#|){=yNlVSJ0hp^=v6A7 zMZe~uS5Xt@$K? 
z?34Dr?;JpO4N_-7Bq7<3(gSCdyd}0~ZY1mB&6nmILVvlf=*#abF}atQe&eU6QikZkX6d{LjiLGc^sbpG@xfiRk|2dlb?Z zQqG>o>!XlC*j-d?q#&Mm@j}ZrZGu zjtR=g)w8}vplE3uTIv4&^YcSqBzA_WD8!+oQ%-J&i#MA#b)?+ZD@ya4InBo^o9StS z9Gnds#D)jHu*s`~xisH91AHztna&BIa}Curqz3S90=ftv?8hcm-?mRe6tO{%^ULNs z7%R|$Gtal_5y&235mUld_RID?FJkD?3~KSuKmdEL0v13m$LhSn%upHjAzkTbRT;lN z@2X%)cp4P(OQkB=|9cSc-BH2=Znr=DWbx?W&`?VdPyy=#$2#^kFbHF8rab!^gN zPM!lBp@eWYT|i&Vz^C7CEKN;wj8i$+X0(QS{N}amD(a0Nx|OR`t`(RN@9>%somh2k zu-p}3+h2Uz8|q=mRz|0}1W8=HemKUgmT;bgWh=Hxb__eI&e>QL~om5VQ&r0cba;7G)gxi*d4v!JW&Va7qgk#8=D&F<|^7N1(7 z!Q6*r4k8KRYEhD|71Czqlt+{D+FZE)tZTFe`(NK^*|s`=@)bJuv@&0XY5n)@wW$`x zguBe@L9jy)wbPUeX;oi)y)3G}c5(l3n-$B);_A`;{>7*5%}i<%;TDXc6Y@iNdJCr{ zu!=@cxbeR;pycM{B2Hn@IgiBuMH#Tu6foMM4_mmzI^BigX8rSyLBzU3$_<<-pt2-c zshne>mgHstCppOCjvhX$s*&Mnck&eVQ1A^%mhm1!7gPkZ5xRrX>-tiulJ8?7ZtmRn zo?mL(Iy#6XG?-ZljXCTLU_~PbYyK8tU7kKK^^(yv|o{HVq?;z#_uGP{#kZz(B zKu8y95$4sfZL9`pSbs1o(M4i!VHOc<{U!I1{zIz+n3~i^;21E1hJ6ueNIFR}{ccQP z0gHqv{fyE-9!Da(tv^~}1tB0gl?+9=awv8~cKe^C>P=&M$wCR~HrV_Mv0v5?(eH1( z*YG(bIFAyI%I;T7@d=aaeSfBN?bk;Qi}-@v@5Jk;8WRax)3ttH%l(dFrDY~l-D*AF zUR>LWk^P^=(_xlNp4-71C|Ltc+j&UMMRno&X{3y{t3T3ji4JcKz$3Pw`AiK%*N~9h z!6-LaR$CuJ6-$Cs!w~w?KZbB5A3$jEpBN*W;kk31-?Fx|Iq;gigHPc_ycg}D?@~z{ zrsINUf6NF1P2f+P*4OJ};#2@J2ElME#lnnl3$xN?8t}I)=h&e0CzlQizezLo^%=Wg zw(|6RpK)Wq((fQ%Lv>R(&(yZ1<0pSu5mW<@{>eEdRf;Qsm-~Yz_MHH9iO_h%W=ycP z$YOj>D8TONk!JLNNfCYM|34`rGwxln+z-e@Q_bYETNPKAK%$yRMAk22xq?^V*}(4+ z|DNh9*rP{tvu}G!RLQK1fx&8;SKH#Yr|OXvO_O}H@N%bZy!lp#tWR&fukRuJ*F;e} z)4D02-3bE4Amux|JgXxv#u!|+b!ptrC&Lnn-kEX#4qZ7mF1&rRBG;l6!S?2@{MxpQ znoadBy+MuP3n!@g85#Eu6XW@p4q!PYO-4e!cJRqOH@5yt~b<)Nv~6(C2N zmjmPX9K!fT*!8$dTD_~mqq_%b^+frC8{6etMYi5uPs(l)k)d!JkirQP(-bs&d(j8h zO9^3ox5;n_WZZbRIx)y6Wgi;XO%xeyz(3QxjXmTcD&#+(ex%ceNPglXgMuZ9(-Pi- zCl@9*Sdfh`(=}2~52A4V{bOw0YuN*|v7v^(KGL3r3pqx+q74*&!22>+Y^MB+Yd!>e z0u%cEsnXEeAn8Z?)~AmPe$-Iot0ss1&xVKirE<1M_Vxk*YEIcl!L6WyCe-@ln;h^V zn-$Y9FcSFyE;bVk0h(WmI;bs4GOGv_7NegpU4H+yK;@<|=dT<1ll-sQ_pI^1#(fcZ zH*8*h9p!gwEV^ZA%z)OA|D!%LiO0jlACGZROZsgxZiGZcO_B6MObn8TT?56IJF)+8>LqQjX5C?_s&l1E93U-zTWQ)1J8z}f7fcX3tCYXV zN6V2y2lwTkfoHZi4c=JAf7M&lX-n(Pf4M=^u-f11Ml&`=)YB z`oTfkWC=SbvXi@jSxIk~oOiAat%Rk@)H%x&YF5@5ai`hr!Fbo-OFsjVhEQzf%fQWG zO96X1ZMX5w6|p2Gd817FTfg*y`% zW@_?#MlOFhOL0+PZKDJxY0NCLwj*Ng?=2-Ys&s6tPXIX)E1MFejy5fFb0aprPNf9o zG{LRbL%XqPHK9i~-D9!2w z3Xsqb#FT#*Bq%*7hs#tAdtQr4_tcB69w&L?B@WZ8R8M-6{uc|xBOt}VE%V^>%Aoh- za_;TO0LrYS=+?F_*^r_-I(`PsC7I6ne=pCb1t80_J!p)AkBTLf0QT-nIXdr(ZGK)R zt-e?T_@X)Jtcyq2Z+zwvc&TbT-!3T_w!q>w%6xtSZ$79-p=8y2mn*L5yrk}nD4dohdj`Hjx;1<>^xRg1mu7B6dcl?Re&ajh;@cCSAAqh5yO^GkfXY^9- zADpv8ekzBfF%V0A=npQw@Lgwz7Ba_FR)Ci&MY$%F&n{Bl{^gw-LKtFu)Lwk@&Kx+y zO0IS1+7iq^k6u)AmEo}4Y``H;hfqcq!zyGI8G@!71&;yJ17Dhj^_`tIkT^kqk9C_A zA=SiPZ6s51RU=?yQrovR=JPODJ+~s;j{F!(7TH*qBKGuXr(O${bFLkN>vHN=OJmD3* zJpDfcfT#}JwiPqq#vf_EL0xzBYwOs;v&Tw!Ub!yf1;)f+F79_>nDuy-uU6moT`_%A z{dVrQqt_{!q5ZxBS@Ob86l*LEvL_81B@)7}aOMZW3J$!u-Z&U^-&9j;fk@=Yv}b| zDqJ_Nm&@ILj($#9tL4{tVRXNi;lynC!^1$Rg(sk&z>AKGiD?7dph$?PH2XEk(^5Q# zhtn`!{QC7-0Ko~smaqk^BptwG$+*Uy66%lk&!6S>^=Wu%h+)ke3<$@;K?m$Uz|bfi zY(k*pmO>0xzyu`=th-l^9WhmFSWWbv;^k@8Cyh9(uW=3;X_`}fZEiI*nKsOzgY!0b z&!$_^>KSot;3}Vv8Q+d7n$q4shbcrAR#*d$%k~v|shfkbF+WObu!WOgz3t0Il^S>8 z*9wYQXDl}~5=Kpw-3Dyy_?Ypymi|jWH}2us+G>}R5U%R$y7Q^-*gGRmh-tIk>|Rk`v`t+3j;Gzen5-%5LAI^X{*vu@aaW!P*Vo zE46QI>_N#Z$oCyjMDwMY36Bu&zk>A{Y#?(<$W>G9fR%C#ARv;$uApP+zTQHsRq1%I zYacVa-Z)Ew(n*EKhG_of-!3OZW=-&Dog%fU&b5_*ohyhSVWaqrEBufgPj`lfCIm{s zEcnIzUPup-*tM~9}c7+ZVUZ*;5^ZjAjK)sJ}*T0{qvNVd4 zYi$a%Xh;RrhJdK4VWaH2}K_L_+BB@_H)@4@rV+SMvSXqyWCMhS#iq@*}Jc1 z-YUCwPUYiBGcpc3hdkdjBeoUWULs>qP&VeJIRn(&zUQNTd) 
zIL3aqShKt9j#ybCe;sitIasY{g-)djuoCocJZ8=w`)WF$*VxO2uv_|wBrbv1!A;UR zP*4K(rWzgke)WocPGP)7*hY_=7gPZs4~lrPItj4a251V^7t8&*_;bFU(GpC!*VA-x zd$eu7AWM(<iqoejdkvMe5FffINylyt*`pvBEX|6^ z-1$T(>a3s$(2Z5>vv?~d>pL{afcIFQ0mi)?(mzR@Lb`I0A@3zT{=#JvkZPfBGTGY1 zHjb*ETsw@8aqZ43L_JSqIew#hL9dnELJ*5EitH*3e-8sxp(Kis0`Ycg9(_d{oBZaf zIgnWbEc@XgCcehb!KmN%Cqyz9&v|x4F%jC1W_YT0c&@}md$Z^2J>2jEc9w@j2TY%k z0_uUNS%E2?zM`23<_A2K1G4+xIF4a7Yv`TQiG_nK`@Py7w%fcbwo$ees-C#Qd=Zz6 zMq@7X4+NV90+Qh7tICGUFrkAkNxBpsge-((5hTD{c%I;|ZzGr2Hd6ifTnuU z9BOR73xCe~!Y1S^03GMob^&Od1PGJCbvhaP&>68G;3fiJx}6kF;X#Yk<3O?~=eX48 z#krIejlgF&FE=OK2BM92*9yB9jLj?9Ad`FQ{N`%_YT3q?9aHM!Dy!d%uG?}HE&}@t z6RbA!Yf;5^5a0uvkH%ST0{RTd<#=VZ{Gpx^llozE#qu-NaScWy9v61cQR-h$6QKdSOa&*#rgQ0xP$dI{VaS;23|VR#cWlTj?ZX9{$eC^+>V>BmhfDO?*hJhBn#GZP8MblQG>cgoY*A6bU z+}c)1!s4I&rP0Rz-Z8#O6pB3EpyXSAg~jaV%FXqn4h~Wr+qL@(c~BLRMU@#9AdYSy zNyX=5WRDxbp3aGhRowAzpQwUv)Bm5-Fu2p|{nVAO%^}~U}6rCx+8i=lIyUpFh)IS=ZB)4lu!0pW+J^-Cn z>0Y7vids0nU4T>G!I;yHxFE3OoCIUeCU9Sjfc}ax^MdtPVocdlxMRSi7?sT?Wh8X^ z%~8~fZK!SgNG*YbnOE$h<*Y)e{p{28lp?mg*snzSC%d!R0%BOO)Uw9MUi0S9eM{$G z9#o@Ink;f?kc}?#OwMC4@lziI#MJc79eX9O?@OE{W5P9>F1arX!`BPivb;1s;{VKG zG7oom3DWh_{0%oBcc*+}(kt-b`LJ@pba*pfA6a(I%=}h+!wqFGmiWYJq{}1P@yelO z?yn2)fG>j=0Rmt`JyL-%FccT|8T#@`Ocio)3gK8e(L`Q5 zb9i(_X>x^unnzc@wraZ%sV#H7WtV79Wu|$u->wy`{_&MMBIVVm(CEteFQ;o~Diy#E5pWwNbVC0&;q0BI}%SRK-M)m9Z^BM1Y-V(`RV8Y&}paBxsj zQ&ZB^q=18@S!62x!4}N`CV6fL^*WWY_vk?*wiyN^#bAe-Eard>BNYWBBZkYDFN2O9 zHa0duaQtIGF(Q+-rq>B)Dk8Wfn zW#@fc+tkr-Zh*eC88jE=~ zGOlJ*(5o#o;}P9RtkwjsHcT_GeL<{<7f@dmwF%H-vT(iJ{O z60_YY5g__Au{P1Hm{(!mp)W-#mz;`WA}65x30A6GTycw932pgD7>)HN_b zi%7aZ~$} zCd;Z2LedB+Dh&E9b}_LE0kZ(^-z(?{zzxh{FcuaQ)8mwkwhe8$Q@LFR0&^!gC|m-9 zsi)^t8-T0=cl5UIW26Z3kRH9a98AV65d;qRXrhx*5;S-CF^l61OBo7S`+=^bwjBT{-2@vtSA0+b1O|JzfG>a?Jv<&xXv8j6@uCA zNsGWOC*TQ76Qe{{Y%g9?JD6>ET0S_nTlP%U7If{l5((xjb;LZ1EJ#B#Y~4-X%_b_I zk+J?wDPwyq_*d!676#r3;gv$Mji&S~W8{OrJpO$jk*5ePja#eA;8(PknzFdmaaIAV z1@xqN(U!Xj*}I50-cY&S>e}v8P@J_YeK@Ff+#1cCG-ar$h>M_2A-3NgHm0wG3lc39 ztxaG%nqs$G&W2D;pmEa5kjFzDYr#ZUw?yZ0fk`zDl)Il`^q72qSCk*|J%r~K4Ayt! 
zaP|7Ko0DBvMxFwJgF`}s0V?`SKPJpD%oI+}mVkdAy#&>g5JbG>X^*^BIG%o~N&H$7 zO!$xM)B5&f-?~?wJm0<7|K6*)ey`I0msD%y!D%SJK7sBULKm7pR;=~IIq$}CQM@C^ z1K=RutxEJ8IHT%OX5g=S_m=CP7XcYl12Px|zeJdyx6iEEfv*63yddyj_>v^bKd@$J z3)e-^=g?iZ$P|WCOZLA;tOq|nW@Pc+(i(v;vbNW?)!h+(y=+et`6G*MSMgueCuKfm*<~U~qk24$j=u zo+}2X9(dr`Ny<#EZf(ttJUAe|SORC@+3S&_BH{o)Fk24_2?1$aN!{CVXL(I)=n#rP zu|@N<`1m(gi@EO&c@c$HICC*m!8IXZZfXmzbEjbxSZK9VV|DkgIdF22c`1xhf*|9v z3km5wcgkt8j(uh?$b9jyqRk8D_pgtozqDzvG*6*fTOt(jCkLO7D2#xvu7$;f00Xm` z$`I6qKnFp_TvzNyjpt=j3bX(&h73%-U$Zi~R9IqAXxs=#G5M9{l_WvCQ7K_CP=!#F zc=;X?3|~@m9p1rNP6s|kqKvm^>KG$2_zzMe`O5cjJ+#xif&(nV!0l5J3i}dZQW&>^ z2z+s|!0w(nSa!pyExCgSR)tTY?aJ%tLvqcGKbASaBe>mvup486IsL2U{U@b`S@lF4 zV{FBl7ZL?6@j>VoJFh?2XbeFhrf;c#dNVQKAK(@xGJH3G)EAGxdn}48t>`|Xp+Zjm zarkRiANz-e%Of7I3yt46Do?MgADucJVbvsBuObC`6;u3O+&qAv2_n{F8WPC8PBCws zsLnPzlHbvKr<6^jT@dzfaU|@KNgL4AIlXTPOVScC?&Kz;biw17V_}&bQe>Yt5AK&W zey#N%RW6Ud7nzMstV6xy(usdz3`{AbpU@p09SOSIv(uKl1Okte5&VH00akD#&DGC0 zq(sOGAczdXKNx70XV0E(fc;~*vb*4+-U}wC*gck|KLsqBaD^ZAHvGD3rlq2)stEPa zdY9-}w!Y4bdtNXbL#SL|zY^88HEI!%vwY$ystJJkSq1(Y*X}X;j(Fcb0B?@yD_7m7h zhX!n<9L-t@MpU7GS83508rZMuJr2(2mw^#x3CR^H#9+ciT#HlA+es(xcvixB-Va=< z<@>Sy^7DBSqw@3W$rqe2DT6IM9ufqBFtQ?12tj$RncNZ_?vZgIz#d@#3v*b1$l4Y^ z-`{hee0rYAXc|uWnWCBvB8a%n2m445FZ>BroB-a)SD_+uINEp7b3M9rq4$YUMbbdg z;llK6J^MIvI^jJO8}jn`l48zFrT%Qmft}C73m!byz-&3uyC4maF!p3@X3r;zf!%AK z0(MO`P_h=cpotA8E`BrMh?FIr}*e_Gr&Ya1Z*(B{{gS z!UudXsHP6-I`PV5qv$EHlU9bW6#HA-(9Cpd4ZXYu$1AZtWSipW?p~J z<+l6}cQ^-PJPBO}rSI|Hc_eXw&>)q2H&jDekj}v~yYiLUjSPh248=kN6w^d3O99~i zISI9%;*A^Oog(nO5%pu5OGcP`3_O4gR}mLlEm$NTstGL{UZy+#O;NKxDqxsjzRLdt zG=KY+FWMY!nyJ1N!-{1MY6%;ByB8DzGQ56LAe8?jNqR(`rDYAIMW7?zf1mH69UI=&h^l8A0CWF zv{v>b+%71;kzyKpl0;Q5_?IvM714qXjlzcy!r&y#&d0~smXtpd@ag2}sEc9^3V!6{ zAx)j*{v!;K*8}>5`pfl8|89?K8!;eTnKv+7nUs`aEV5)01o~v>_#L)U{|{}V8*#wL zYOCAz-d+4^RvsO^7c!AYP$v)2o9)sc>5mQVN;%7s;dDXu<)P^R+5dAxL?{()9fY1O zK6tj%?En?BM+oA_Tu?sR1dS&XgRIpTI*_B6n<}(DI6hd7O-}au7v7}3MQ)7{=wyHh zV$AwvJ|nc)enK7?Y^@WXqzu9hKqOp)zcK}ntY=3?51t47WJ~Dz=)~;Pzt!elgl-4H z6ku#240Do`;|Sm?Pk}siXnlRXjuTieN5jh?0ctLTi@^ujllqaLT&+eMg0^i-H#o5D ze{}+3X-pP1<}H0D=ELJv`vUz-!{+GEQFQCSHF-@+DjL>Vz1FY-0s{q|=QZDXj+Wk! 
z`Nqhum#f>6v)-bnt4jq?^Rr){q`5rs{LY*?gMb*uwVrwfzfq`xBC|G_7%w@C4QqjQ zWeZ>&m?Nd_3+yPmD>jDFQ%Ed^vW?kPvG7Jbw_%wzHJ5sF7(lfOFJ`>jPN1}zM^}r2 z!FN;e0#csnwo2yLf(gb^keSb33sg(C{>ISBBB(Z?irG5v$PgYlG{B7m0|PljIXO9E zj?;w5k-T*K<9Jd1aTDBWyA;v#RBEM4YHZW@cTXXA8W=p1E~W^Suwfip^m@{{#;NgY z)`fKFyP_mEOhKD>W7w|w73>z03DQw)ZY$0=S19nHpScqy4p0kJ*KLS+-N5pGfs?TV z^LDN4ob$L|qZ>|JhCguQ?RV0f`<&AE#*x;oT#kW4iL$8n z_p4_Ap(sYtI{*fNExLbe3Ks0E0Am@01b1ZxctGvLCv&h=YRviwT_vJlWJ>zHW7^q7 zj$#cqQUM>>aR2otVWZYeFYYYD&|KTzZuzDmyS!ET{aIXG0H9oon`^6T$QuVRv=}F)aEk5kcejW?`OkWlnUuWGJ5S zfP@gTFO{^5#$oL#qG+Wv^a{Xh*1nS?q*_v3&(M%5R-LODmQD(F{AC2n_Y|eyT&x?4zRE5GYNz;D69C z2aK6FP?l239Pe5tbYnonN=sBlQ0Y$__qh}|+oPW^MTJgCkMZ&GC)e9pm3LOqeT62H z?*MN2TRx-L80s?Kb=B~Tw)sD1~}EL3p3rvb|HSUM_W1w*9>-&Ze?J0 z_A)2!`j0`0=%I-Slbj4bY1O*zQz0vkqnK@2SM!>Cy!)dd&s1UhXQ3Giw(l=bniN@( zpgj3;IIMFAXthouBE`Eurq&NC$JRd|6VwKg`4{fbfzpewD zb=upTxWNf>UHarXrhu_;xQZWL1|F=V8>4N4mD*31Gwk z!!D?FgKtiZNkRZ6gvl5Jk8Lo+<|nwy#z2(@C3BVLobwCb&iVd4T8Ni!0}fhL2)PO4 zMbMkxSWx^J7#`gzlA+LXnFC48u+&P;U-HhZ70iPl41Kt0`ea!obPg9E{|@{{w)Pt$ zIMrSQme8Ik6`9tWmrJr{>tWQDsJF_-2#=0;Of9SRv_(+)qEB<7K;v)q4mc?=br=xL z>I)P9Ml(%++egSmgB4RrDc|UEUPnlH-;6P#*b>uiysiE&# zW)HrU^_A&{c`6U@VzwksIsCjJ( zykDpMJxu)mAub%CP;1bU#TMMgC@z^s=_l?4A$Z=%Khk?F6@ffQ=>>h=Pb27Uo zQ8QX;I6X|DhcDn*f7n#7sQUrHK7j|lY@u!Zw?F2qxH>S4+8Re&LRL|P2Mx!MfeZ>v zs0{^s+F%ze8YZuL-P_PnAQzV3KDpnz<&1`uKa7EL9Zi*^Sfm-C;I7@iUuzlzaMn0< zCkWh%PtFnvuZ$FvxwrxIQ>I$OMdiBu0+bpkW*dmOKOKBlvUvOy+xqK8 z5oeCxDT47-0$`gV1q!Gpu)g?0Vv{IDN<8Yr41$F9izVNx-6p5I3v3sVI?}iW>0E_i z;e;;gT4*Z_Ct#K5(NGVBFDSjEnc5&r<^1yKk9+y@>G?FT)8H&wDRbP;88Z$yOK&?F zx_hSwqThiBu(g67hR0>%0NVgw8y)A%kC_0&2Ly5)D6-ceYBA?(}OVs8Ej37udG;ayvL{P{6?QW85HG5E_H)H4l{$~lixP* ze!pjwcmx}5PA#oew#4$71@iRtBW1$cAYe`wfKs9SAQmHBKqPo1*SX1XakWnPMjS)1 z1LvW43v)Bz(9i%A%2kQk)z!4cOzOhRbR@gby)rXAf%OldaNjI*!hoTY_*So8L*v6h)`YYWh00C04hfO^ADA%iwe@IDmBKV zeTQ}Vab!8&uINVCl#C;SWO2_;Uh(7Nv0Hr=rSS6BTZxo0Oq&FOwKF!K(ELou_r3rg@=`{JCOoXlj+Q1G7` z^EoJ3ke=qcc=6Yd+HCKG-z|{Hkm@_Z+64%>P<<%@wO3~o;U+@CQt8}%QdPfll>5CS zqU+T`cSCSSD_=vFtT*|<#x67kMW3L_uZ1HMgvw(l|OVNPGhWX!LF5!Q~ZQcZbHmHQPC-oS z3J=fCQq(WF!Nt&hFtvq}vzqc=xplD}KE0sTmqaPnBE38s#2rel8(nT^_NlcMxJBQy zwJJQfbGf{yXP;}xG?L36YR+rkgRN3zJO#&6%-iRF$9wF&pV42ah_xq6pSfRTzSTh_ ze}lH>+xtAKTmCi%J4W%N^M}}do`EPT_T4%}_ z;*vHpaPgVZ{EXgybJM|l{SI)fYY#7D8?$jY`qMLha5@u_dW5X2WtZvhEHIKydPaXQ#lJyLO8nTL* zuYTzt^OYL80;94uaKVv~^4XZ~03wxkZ4AO1FwBS7j16_X){(K1`(~GDECl1ZDOf)2 zT^fNwTblRpmkO3T+S+RX@I3g@sRkB`fS9?%j*nj6EeO4b(7>wW2@k*>L0MLOrBBr0YwNI^Q$SGGMM3+4}W}YyAY1#z7HQT%3dGx zD6v=rH}hJx)o&4yPb`G8R!WW4_u~*U9>#?;YuVl{WzJ#bzj~t-{Hq3Ztx2U0WEjiB-#&z{x4FhcoOZzr#)ncIk#2y= zV=EXMg>Yewm$b1cfMp0Li#ufH*-+YWdJn#p4s_AwW;w1>gcP1A8&dV@sBxXA> zH3d4az7f2mUyiNI#dh!MK|%d&Qu6(RabQtAmNj#Y$7%4KTE0D~R12&fEMV=FH`QhI z{j@NvXXA{N@g!_oPWcG^Lio9xjwj{S5VLcCd%T0J7}{27T=~91aFhXcg&|74r^5~_ z@KEgZJSiz@3*Z&TLodbnp+0tcU~J(gl_va9yRa>yb8#GKo58$uN8Xr9gsB;2f6Ddv zaPKzzSPcm8!iEQoo-!a3D1Cn^QMbRqqYEdrZ5;~w$_z#%X?=||iCbUvho*2*_gy|d z+MV@@<7GZldhC`3qwMdRg-n_C`$Q7{c5-DeDt8#P2U1(&`CZu@8YvSsxD;X|#<`{* zDf{&`?wwS*ekw5%B=z&j!#eTO&w9MewAOpU#q&Hj7uh)`d9b=F*_)g-W-aG&jT{ui zetPRE2G)lo=XOIatc~ANwq=7~M)7l*Si+*QgePIBsGTN80Q@U95ZYRX6<`elXkmc* z0~X%na$nQH`(U|TAgRlZ86f~cgQO=J`l4=cHpz>Yi`%5?L-WK)ll!l8Z8VV|fvO^4 z5%9K_%GvI*1wuwsdnN4Pe)u&lf&ITZ>F=BCT$7x9Lyry*lOOkAypmVNE+a<;zxzLo zt3WkzV57X0bmLp9PE;1O*IA*S83L15hyGwxpj>AR!z{G;h|d!YF4NLf z*PCW2IW4M}4#~ct(uH19Y<3M$1O`Geb9_^a>aI$5NM=DHIhMg(n zo)@u~hv(yw7!ujIe6w|PFS_=H3#py0&zZRV z6{1DBdSGsl1WIopJbb6SIKkd=PG8U3_SKH``>)@0a}Z=>$bv92+nrIy3&0tT6p3Jf z?Q$GPVPfG0QByp62MAE??CiP83O5n>5HjMk)kGNUJzxl{&9NJ+BU?QbcUv(wJ;hLsD7ayno8|B4K5~kpfM%fft-iB+ 
zi6RL!G9YE;>92BWc$Y5^YMb7eajG1fkUf3I#OzSKXaMWkkx_4W zsjr>e^3WTt^T-z*4a6Eh&08eu;Z|4K|H~PvUc+SBpo%b$}(eoymm!coH{i-W}cj)r^%Gm1U)MM6WF3FEgJHPXW z1wB^pM*U5A>WGYfjcBO(X28$lTVL=r>^1d`fPyP{!ngtnH{0DD-ofs|&(ib2U71@$ zn77%Eqd-BJ9?a09QmTPPy4>#rL$H4LEH-wNVKQX_+OxB@Kt#xbC3vzZHC5G5@Ha(P zR3zcoNVm_-z|(p82IoT57v2r1ITm46O$aUhOz3=fR zNjK$DgYhXbCg_QwJ|%(=6x7su#Tp@)t{`Qrlez1LL|9Z(KaCrU$RwCwl9d@Tx8 z@Y?RK;xf?_k9;F?ccfvHsAmw+a&YFFAxYiarq7pJKJ4NVBKy(+_6s8OnB>S%xZxkmEB_Tk>4tmWDdRB^wxi)VD6Bx+HFtaYv}_N2a0 ziWX%L9175l;T~uM!s6>+i|NoKsBatfE@JFmh-T~3tgky|WMm+O6gSm`)AmBgK&VQb zQuqiOG~#7L7{NAbGv74ThHce3`S6V6Yr+LoxmwDjjh?@G|B5dyewPEIy_`YH8RhQx zoR55uS5H>%#`MU4UJ&xjXDikd^j)3CSd~(2`8Bqo(G)q8<)!r&-(|C&S#A>QY8zbJVrMh4Gke)G{i}!V zjmn-`=l&oT#;2*CW0w(Pnfux;F6E>9O$+rNELtkW3!lWVxH0qF_5@~ONBQUe&^cI5 z%*(_k8od>B;|1h>7ErIIgyH-D;p@M{x&GfbaQJ1WLfIoLWke{E%#0{fQ6hULqwKv? zMuaF)HW_7;y*D9y?+Dp@Wc{v(-k69FDw5rr0JwG>6vD^(Z*2N`CIy>luTo~u}x3i$YC!9pMMF&+Q8=n?ok=(h;v=gFUM7t5UC%SX9X~RZ*kQr~WeM z-0~tJCx4A#f|l^Bu+%2kkO-k`&=(ikltYU(eN5*KDq02tMCiNUo|)%iOqNej+++t} zaV*Tc!9YUprT*mPB%%rp!{}VPePHBxUqy{!C?FOVmv(5WseJ)*(T8n`c!8qMOKGPy zNAI2OF1jsgj%*0Pwr=(w^9Fw06R)i8g#l3c(tQjLKGLfMX7ml0ovcUvWTd1DfGA1Y z{C3NDtpxzfa%WMulV)LcFedB5Sc3JV5Ump^T_{~a-~{?s@Pc;(ze3eTVw(`v*w{$d z(5Xqt0aH&P{psb)?i?sYcC8E5S>>6lAS2cUF%Ooo&@_5xKNwvycx$-bXM4f!=eejq3kjt0e~SecZkJe>P}V zxt6rRw+>bEB03t~ch$AWW@X-{f!-Vja0C&p%H#hL*kSnF@Mrz`P(Betq2bKm4zwx$ zSh5d8N#cNY0N4uOmq5@?t8`X3hd2~AjisZi4D{-M8W^B`h8%3|Nh|Ucmae~my`prd zFxq9)`;5L@N!~fW0@D&-C;A~u9z0kx{^)n z>jd4;+_RjVKxl;UbW)zy~=pdQ^nb$r(rgmn}ii(}Dp$srm59Mz!P&cs~ zpzI=-U>p3gNR-2wDuw=|;%e@$gGlFPyX5IxxS!gy^n7LLLtUEIr!Uy{aD>f-(Of4y z3=q|?n-8gW5+m9wluw@772qxoEqzlu(%Y=Q+N*7TzA5sHoalR_YUg{SlS|joo4$4b z|M~~z2;Gc0Qi1Ki@vv=fh3B!_vFH3~MiwXrsIghso+FMu%pc%n%(Nf#)d3xowz2vC zXr}2!*u;M%-upe4$$(o4@%;?eVah9^OZ?c9K+voYq)73FxCTX^?#wp2O4@+Tzn;8mby0^V1 zk;#w`lD9zU+=%@zwYN&_!iAX+w zei^1_C!`hETyP$d+WojEE8Aq5mU8nQO4rOU(uSA!ISPV$0@n)tmFlFYD~_0#QZqH8 z8Fv~l5YV*4_Vh*)nxRPdsM#79-EXyzmEU}4<>XCTjPRDIIP;KeUtKIqgwp0o<-es* z!DRnOkJ)i?{~RgBtEFXkZ{keCXGpYh=YolsyZM|}qOc4<{cnr1pnSL2pm4h(67Bn) zA4C^E4ZyyVkG#?G^)K>J0sb%GP*8$4ylw(2FUo~J^_1W5ejP*`a98?)n_$}c3d~J@ zW>HWd;$b2lTkVQKw6!pbjAw6bX|dQ92_(g(|0lW-T`Tv4xzF#tLQn?B-|wFIk&(}G z9z(~-xZJ%1ya*T)nRExt0GC)LQ>%5|@_{q`@s+QF_Ie`Pd8x>Su>w@{B)0KUQ&UrP z4LdT3ui3c%RBdr*(gH<<-ieH)j4aI(Z~x3P{QjsR%U#yVvhDbVJUStq9Z83;g^& z6RTvb@*bDR=m`USeZ^zMT6%VJMfs8t$>>PVpCsV3 z>Og=XGxP1p(Oj~dgp6-rG*e1gSytulD}96cU;RDceuy&75go6dkAJQPr&W(2ywO@P z_p}A@7J)fD*WM41eoM5daKq|A%SLJi0lHOAlDX4qn7rx+VYi zS7i#J5yWPwqFwShxv&~_q~oHm))F4lM#Hz@rrEG zU%&kCaYOB4A(KD?42-AIsX+&TaiOP;H2-uBOJ&J`) z_eYR^{=@H#B<{lNc9(PP4q!hn>_8xXlqknxcMF-X(F{eoNA2{SdNrH57SJ?JZaWYA z4Wt#bW@ni~_N7&~&j8@#S?xkXOW~0fj``rXA|-x_Ox_LO*J-(!s5G`B z^(XF_KaL`~A}A1&ryZOy{d(Es`@8G+Ta5?5f0Y?=Nb=M=0VJNR!oq}O#Vo~xz_M4Y zLA_>&FKvXySyII@Npbs-TXcKlI zfAb3?5Aa}tyVr)YCi?kdAB!J2r$>ZvJ?$~McyOf;7Mr&b-D z%zbv(o+*P$5r1W7Si?T6VsGAjFyF9&na8-b&WXLSidHI)Pg;uG-jLT$hQNXzwK}OK^CbeIs4Zuey z`{g9eUA$sqV)C8}d9h%P|0|);oA2dnAZLyP6S;8!rwoIF-n_Zg)90!U{(&+H zyEh+jgq{L!UcSC`;seW@C&YiYArd#e0uhSTg4j5fziW~ADNI+!4-fW7o^^J0Aug&8 zu)ywrYq^dr2%)81!*2CtiiIJV-N<`;X%vidDn`Qe;(FA#@YOak_I!j~QRWsm?ry=L z+-X`qb*3cG7?#96`oigZImB&vZK=YKNhRgmr&a5m@97j26c_Z;ep`<|d#1}lZJ0tLH)%kJ^zOSz#K_EUfRN8%mdoU~VOi#P;~-Y!jQdAlca z*r}q$9r6+Vmy?I4bFMY}(LFQW5eO?fKOPoQ9;tGUM6eGDnc%mhf^}4e0ovdwDHFt? 
zX_R6GPrx@Ipw(WUJ8Tk+6$^@#Z#OqKUU7cz;!;Y_c@<G*9cDG zcSTmci<9aVY##HqH*bw0WB$5Wn1}Pg+*K`SB{TU4nYPUyc4nT#|ELLi1^_8Ebb9UV z9uzE)9@?&NNyNx@t#qda9dB;73VGAY`s;TR50#VFw-xh@_M0gLzea?~PqqRIrFNiC1 z9V*8AAg82ON^r{E-T3r{+^2{1r>QTyyiz^?%GsZ3kNlPr@gG7V7pr};((EH5gSl%g zs=4LbuKUBw&!8cd4%tM#wC@j&4x`#nnicw+AHpnjj~M<4oWdtO5c8xeYM# z>3JXqq$@=I_=)YS#*-4HABYC%Q*(ql^Ze=vX%q~dE}0@whWOYoV- zj8|}1B5pF8@Uy8rD+hi%V(2;AE0IZ)z;Fs8hC+W4&Zk8C7!)9wn-^fCTa;H-y-*cH z3fc3+h~nRx)dHNsIjW}yt1i17Bc z4eD)P4Thp@UKHKh63**5d%0VS(qbEe{%2}!YpWh0-Bt#7I1_X?-AXH0@lI2c-{}4N zhz-FnNq-|*wrLfuhYKxh0z>z2vy#OsOl#ynH^UT~Eduh?#O!o$HOl2))6bvWcJjsx zyo-=p#OeZffdH_e?OW?jbk4A22b$Oq#Yqpuf0_y*wuKyi@iCU5>6)=f9qikK_{|unsQKLB@9#2YU+5Jt1kb=wc{+Jn88% z*QzF0o-x=;KN20|LBi=1V296M0jdGZ7;uU&o6zyPAMU#_ z9)aGlADoz8TAG)xdbj2Y-UTgH3xo4SHJ-3?NQ@-Q3C!$vC{-xY5=QR8{?% z;SVnZoyn@GwDBsaVGyrU{@C-~z#b0oBM_d}08bGpL5(iw#{tguULQSRdQahs#exoQU#%e-J?UZV zrczWg#9|IS#A^leXl{e~a3KlIF533`mA1!#3lRr-KBRkw5Kt06=LPg|LPL-6Yvb7^ zGx;lx-sLx^Jd9e8ylW0eSW8NFTfj%2Tm1JokCFn~(U%!3nNu%4fzIw)cbDQaEf@lO_QsnXNd#*EQ??pzs`EVqRw!iv&q+}5>TUJI zv5$Y&7HyZ7v=P$$J4GQ*GJ6u3A^Jype!73!j$PM20bn9HJw5G>1PegIdsR>%;1S7I zj)V}v+>X-Flz=Kz3hu%xuwiHDn~8WJ)XW$S<)>?ISJ*AmYC+ZHO^+h#!tD0?RY1`E zr}@6kdd*=i*?mh{2reo*Z*!PT;I^$<=~<%FTw11R+(g=_p|S9Uo}%GhhIB7pv(y^H z3&|Y~??(Klfu``nXUye72NbuSY%MG4Iw}2m$+&muTkK}bqI#@oOc4p@B%i(dbBifZ z+lWPkfBk}ppfT;Rs|!oYKfZqCQ@ELiLHx7ZgsTZ*+v8m$mu3{|2adU*n4 zE?vg_s16FHm@aURf7sA@VlhZ`i#k3&-U5my-*{ig_2~%^F&On{#YJ(nfGVhOKCZ_k zI4UY&W5WixyAxnm3FHHOJ$?Q4dMx&pYQKt#n<6427acuA)b9ePud}lgd58% zt=)(Z>QnS+bAtN2pVL)@8m*R=2S_a_+}j=R&0+aw-tXa@`9QjN%$-VzTc?wAWp;Sr zWoYoFc*R?PSE>DHimso0-O^K?d_)#R>BhZy*h=fX((Wejn5aYbG%9~~Deohx z0X8{@;KPR@{`5l$>gKZBZsGFWQ(nP~f1qQo3#0+P*T_8kUA6q@pdr3bE^M|mT!d;( zVC(B2#WUpC&MSjNg+ly(`p2EIqeCB<+@9xZE$tITqzGU(RsEukngC(3SBKFk5gzEZ z&Ukr}_{4|rmE}R=L4<6$x&_wKyLEMSx^R;*myZNdnveI~nJpxAhUA(~oHlLOkB~8t zk=KTPViVZdv$z+;@Zip!toe~Q45R-4sKL;m`k zsqzK8Lzj!L1sRo&LBRM2jj(VWuhWNVCtkvn1C(vn=8vk@T0Q5e+{})WKW3i5ykI3S z<5oNw^0m44#BH(4t%xBO_1t@2*LS|Gvk_SEC#(y>@gpJmRM*;kE=uNYRYqn7yItBd zai-#cC=a#t!5gvvdn-`FA~P^UFK#U4eufuVR8pc3@{osN(`RK~kqD)eIHhAb1z^31 zh;V;DGszan%+HUfr8T-6IB$#J6p{Sp>BS42!l{i{&CHf&+cVw|m~tpt{cDO-hUI=H zb^tYwd#yAqj0z6?OsvTdWRxD>qbk74ihs?pr4mRiiyq&sfEK4savZ?5{Ai!5 zNqkijN-2(HH5?ca>?;7tMV;;*qFIDK-l}k=5R78?qdaJedcRj&sp^g^(YQS`@i*;Y zIBi#LYAL4Vc-%SA@G*CrE4r&fHie$!k~?l9DaEMciMZ30;uf~8!Wn5AG;uLMUolNS zY6xMPeDb}{`osApg~wYTGJL-bRo1okw%V@0&md+NE)I6VxsV_5j&EPMeAOXsw#f6} zX!7Q!VCd@6hDdZq-u9uia8vxppMjH_4CzYGkc<rxAwvj9yAC+wu$W((uDPTYU5{_5+vIrr_p z2H{e}Y|9V82S%v_xpE zW}qqm-67gJ;;ig)eSm&5RtM9F(=C>_?0ydOoj5y?U)J{3v((ii(O#8h9k|UNi45hm;GfRJFXY z3)F<(F4|%7m!D)|Hs>hZ-V6LN2T6!w&;d|gvLHzcE5ENX;`&Wx{Ztzr_-@O86W4jQ`rg-~ z?(|pSxtUVxLZh)$FZU+ck{SPs+Sc$bg{JY|mNns1R2-Z%H+G|J=1C}Y|NKcLIBM9C zX5Bh__~6FJXsuH~MNO`uD{Opmw4I1L`_5%55v^UzlN`sZR^sD9H1Lwr_3w=9!t~-R zvG++6UsV~aDjrg>DWgj8)~&##hhJCVw^HM`2WBd0^Sx^+RBylkDrMqANxBR~f7h<} zIl)ni4*h)}lD$Q6R8$0V>z9t^9}E{;=T=%q)nbjqUIQw}a@DTs+0NVx*c{H)H@k7P zure&xA4>d=XTLh)+DgKpwc)G@B}A@Jn$~C;Xv7VJ?mc*LyDeH2`S+FdB=pU%c0YOm zATyTu&*eCXY`t@vLiS5<;VMP&Ul$dnU#>VR?{OX&g7U=@g0f!2Ciq`Ka)Yb$v;kBh ztxMiS7?{XZf^C{!gP#XN)WBMzanAt=W_BZ}UQQ|1!9aEb_RGZY-*}i#I~P8{#t3q$ijvuehw^RhXf}8{*qJ9CChWbr->T;_HENJ|Qmk#$ibrt6#qfIKiNKJbm^Ny`q$P6{_Vt*R~}7m~ZRky`n|Lp(32JN~mjsYEh@{ z+;YDAoT|g$6Y<92(Yd#4#W&j>x*U9bpF%NCW#yIG!#_Q?ctL^h;Nb8?q;0q|Do@^( z>(u*?kzNag^czA$<;sU{htyYHzNqR;E7pFt#S^|Y$3J}{uoc?$Z05EH5n_4WUapLx zz=#FD9F>z8lyKJ5EtWdcPwyTb9w=t(N#`~+)YZiSXafy@q2}Bt5l3F&7=&-Hf|~3s z9o_S;&+rKn{*+uTv?XxlVQ@N{bg(edk^->yypej~JBMsr0!Q4?G9~}% zM%>s338(D7`KB0jXx#HWFAH$Xb}#AS+u6LzY>LkG+e*H}E9TtJJ|5OJ6-8E3mC&r# 
z0_?)2)h4~<86@QP8jwE;E|t5g9LG=kE0yykt(%LvP&~JGR`XvD6@j-tW&~CeKb0Ja@O=FGBWsqbb=3OccDemsr4lZS@uQjD#;Hc?geUqOTGmS6-)x91J?mE0{RWiI}E~v7F=si zst^2Kzhc^ZxzS*{L1M#6v_6P22&xKt+JN7EapZ<|eiV4cS``OdLzF$oQNU|5;T!K# zVwjed-O1yqAU$X(y*);phkbUuN{1r!%%6tK%ZI6HvsjN=9{lkuT;Y6YxZB1N%)$V+ zD>*6zI~~U`%gWG;Q5n~YzlNDAJ*gbM+nPa#{Cjs*S^vF_F&m<#u?^ z%G_R6o*6116+Ssi2%~>OB6#1)saUn)%N7jOq`1Q(L=OW5Mzi$!1w%+5wlG8lUmEFS z=ljB}<+l1bx8BguQ0I>yRA=X&FWibWg?A?HO+M2ap)!Y~Z)|KFE?P3~tWe}lNCP54 z=lg4}ttZ{*Y&!9p(+3!{(#Y%-k(Q@BO|>gt5R98qn+SYvKj|njlqVjEJW#w%B zNE*`o&1X5fHQsoh1=FafLKz#1`Z{YR7t0wRqU-;)&dWw3aoR*T<+;3^qGxkHoc6c7Z?ZPV3O1#t$`5Vh474|D{ z7ZwC(?PaoK`&7I$6J;hlb^2&JLo$&NrDj3Bb+hB;hDe_K0z^E=bEoi<2zbMxS& zt;_r!5{OUT1mbVKYQp8 zmneI#Y|BZR?sRpkM@jNk5ZkyuJ5SaV$;XJ%1ChDv@L+E}DiucSKu|Y*e9sG1H-gBc z3O-k_0i3uP!62aI%bvPEOjnSy*pKc)SaY`1rU}UNMl$jMPRgHYleYY_Q(ytaab;Wb zoAyg!89M6$?I+ca7WaKHj{`BjCu(}p>V%&FK|<-RZbL?V9L;C}Ev*Fj)}Q_Y0rnC3 z`tkDKg_i{R_63!*8&^#hywB1i>zj{jIB~4Ik<+C7n z{48;J3SoYoq5yRm=bOMOH+^=DLr`2P3^6#MrL%OfSv zw#!-<=Y(OynUBT=YWf$4GcGs(mVa?BDJjxl-A^=hjiybBv_A~Va&LBNZx>dM`TXj8i&R=(5jASZX~ zmg4v3Lzv+j2Hch7y-A=Q33}MiwgKNS4GauCDRsCCBwJ*__Y1IDv~bRHefr*)#KbC{ zD@@?*1l-VS`1eo$gv=yda6*L8V4(DQvX$0AF1=F)?vH zmG%?&N89i#p!|p=s7US%{HjUtj{^TZPp!<`5cUyz0jM}^)5F!()g=IF0ur?Y2Za2I zIgFiV+v8qw#T;jKQOSt~?8ZZxx705YeFRito=j8D4@39#0?VS?!zhZ*>$st%!@dj6 zL-*cfHE>*5$o_HXPTx4di2YX6Vu53Fx0a-`@8{u6OeR^G#oZ!iqLntF<2h_ixn0D6 z`n5dQM3mCNLD<6OcCPk??4qg>7=%6T(VF&lx5g{RiwCDW>OLbnTX+q0i#hVLq13tMW$6z4ASLoFvK-B5(BHedQ{Ej{8}}4Q9_M zWM()QfI*aIu}u|-pHI<~fhz#8Fh7g1`NbPljl*u4TMgliH_DtA8D+OzpZM5CU0JPD zS?%#-sS6yjx8PBBEg*c9R7&inM(ov@?c z=dr-S3Q2PWRX^mr!70I;H*YSCmZ|5I$rrg-?6enF=~TiP5(<{fyNA=bTwFBIGQb*d z0``(A`)EOJ1o;%bq=-4R0u6Zd23@q$e*iK>7A{yg_)MarW$l(|fl62M8MmVYd+@&M zG}{c%fs#i){7PJ-c&b`DB+1t!B^fit)-(6B_0f8{&`><+98M^Gh++4G{bmx7QeUV& zgY3Tp%G0zZKOzj0*@Up%%`%W(J(3|$0+v5)e2DH@M;$^VfSt`)XU?2~g@bfZEBx5j z>#ms?1L1QVK-nqE>9nvmb>h#Tlh9(ahsrhnCe^y3jp&71GBz9!cm^f%`IgZ3WH2v{a_OJE;xClha?xQbtQdqd1SX7|hRvEXF;W zCehj@uxVN>KLUxjKOyG9XTFmrd9*SK1ZBM#_!Cu+{*MX|#jr?Xk>Do8SuR>e$2Dp| z`VDx8JPU%H?CbY#xUbS}G4AK~XCFmeBTFADkc4LV#prf!n;3}eK98>VsA$&#r|Ap@)2r9+Qjx+miCazkMq_-6?`+leT~UqzuLiS z4kAMWOwjBy@22+rin1TQv}Fnw$ezG`^Re~mG06im4wEGbtE66BAnlK_kw zS!1X%D4hW0t+#D9qcKSWcM0f*;_1r5Pu(G-#OJ)l8n^ldrCIg2$AVEN48(_jcXo7u z7>013Yt5Xi$fE&z%}Gc)gjiu$;UxQlJ1XFQXgv$S;}DKS#KFsfj}}Hp{KLb;;FJ0q zx~K%LGAE;sc)Y><7pLv*?Vp7~L9Aqk>s+Ms+-Ht@C{Cz$Sh*5cG5OV&8F@GVH{lZU zoL6u4r^m2F!g6xhl;RjVDJ-Wr|MW6*c$P_3=D&8w5~#eh=;IH}}{i zxj7jH;dtC|T>H{wU9rD* z+ORG9#rFJaw9A?-G9fcy(}SN;M8M@T=~8k{>@~_8nSIol2gY-aWo}&>tE-kbR{COQ zI|sH_7HgDDSb8)5GL-Gv)`YrekP9j&cj=B-z67Y~gpl2W(Ps3-!~~Ku-itm5jw8S+ zG1weTdi^@wy=DsJ&`1#lYVTd8r_;2q)HbM@j!1>FoEP|?fo<*;w_D|160nAAauMtR^e zI0pa`ukh;<`(-UzoJtU7_5<@+RUb4uUXUaQOOqwzD&ZTU$>hRV#n|-b<%y-GK;YQK z1Fe>LAS679VI=;q^BU$wK{w9_%Y*3RKhNk`jt!?S^=}S0QcW@vuCpk1ZfGf zB!3{XiVHRn?XcGY{ge>7s^K?72tI<7?U#m8L1qbmMaHtA&*lQ%?@ItT(ef(fa0&&j z{AeEP#TR@840azZ?C%x|+OR*&e{O4c>Ecz_9Q=VWGOp;Ej$%(6=`VGd8MkZ~tpwf5 zy-RpUxIJMR*E!xtLq~0x7x8p!2U_n7`!Oqe4Fj}?trX+59<4Y6lReX!y!&@ik&0Ux z+c&v0O^_D-H|H^!G2D3*P_=rm?qkW>anDoZGHm5N63qwt1vWhK z;P2jG3vr%<<40OHgrGVoP*_G7F>qd9!RN+7N(sr-42QB@7os2-71{PjV^9QuS33y< zxy!tFUhvVb0u?$ICNTZwt|ediVPS#}bS20sCwM};NFxwble!SXiHVM|vu3BU;c&`m zXha$~^8h>sEEdB6O|Y98xOubA(%v4KEw9v)J-Cw-EePY86_6<=zz;|~gjQ3el1b}cxr~khLlX+g7sZ_Fj*}A;FM;EbHK}pwV{~*NybI*vUW-i9z{5>4q2o#3N122w zct*%j9wzN12$@h*`&>(b%nvBIpUz*0Ndd8*GUZamD!$&d8-Ze~t)1${;J1y`mDR}% zo@&H>NiQK_?VDEW)NL#6$#(XL{|@Xvuk*FKqDVxE)UN+u;anHb9oQEsiUo8J) z33Iy!IKukMi%2gB1eq%!&7mgTebw#BG!C!07Gf65nWJeZ0KP0CC!hg>g}uFi>3t~4 
zkyi-#j_Crw5V=1>&IoncG|f{x-YiynAYdmRt27HFLmt@y9_8|><**S2x0Nk4GcHktt<^BYc6$uT^o28M&#mE6A zvbN$gdN;Dgewz0Bb6=>WDQ}F%O5h-^-bo5LRH0%6Du-3+$uZtL%ldm78DNe0tJ+He z|AgRI-Vjftvn1D+ZxDX{=w&`KyW&m%waKm*-#(!$ zA@16oLS3rg)PLg?Y~Dv9mO2L}8vPZwJ8e_>%)E7%f4*cQ2+4pD8JOH*WC55ga|4(# z;F#nMO27{jEIPaSw=_0{-47O}!Y!u*ql#=;4Gzz`Hhyz>WCWSup&AGPD{m;LG&D7_ zFwkuwLd8qLc=qa80m;m(lF9tyndlGY;JE%o*!pUdXOSc0nOR|iF_*mj{y`VdkytB5 zawi}rF+jnBu{xxG3+rCsvLl~%kS|H+SXeht2gThk7&Yqnjt4nsVtM(57dSU0683yk z@fwtV+EWZX)y(q093XQ(ms<};w~%Ob0Eq<02YRsPz&1BGhgx|8&MPX2kcBynJ>U1K z-~(U@-9IQ#!X9s`0STHi#%1lKa)!pI?V}&0Fb*Fb9evki@UP9|v3U+~X*Og4)cAZ1 zxrZ;l-wG=aK6+u)pZSc zX|!B7AA^1NSX^9hJT)Oc`T{#Z;%=SOoB#6%F*;+R(nii)&_$cBZ9mu3W9HT_e+$R$ z7kJXZ?YP-ej4uE`B3c-X+F5pTucN~cp1^lG9f^mrbt5(YF^z{a_$y6((85D)-Ld$#5b2uVumCG*TTykt`jh|LGAV?@q4 zPk@cw;GoW^8DI$<C@PQ*uu+{W@1-A)i5un2~$;(nlpM9SB8G@9Mz?lnyk7Cx8+! zdLTe8{BI*TVuQnOGANhyAd!JVtpNNi%2R4EK&S<_z~W)Gv~ivL0`z9j_3zxRY1OTX zEpcs_a&*+eaByi52wYq0mS=g?>xFwUjdRE0>y4^MYT~L@r6np<^Zk95%jQqS%xihX zqeeTs#eYX9VvRnV7xjN3P;kDT>uin5nVN=%o7bJW|FZ>`CJT^>9LXhfKPaRep!ukM zS@W_)r8OhhEeoPIX3rrE-G4O7Z3jDlGQp({#XMnyOjv|OhiHv&ov!lCrgqWB_K`De z@v~eKB;gtqkTf*J$Bb{^bMA;&ZPuG{-1#;!{dlKB9Xw;I2J`Apyjhg>GhK9pVYKc9SYS0gQpBw}`R%YVmf@h{qU6y+)~FGtsK|y`tUf*uqfe#i(1N~OpH5nde9Aq| z)Zm1S8V?!B$R=y$q&)rC8dLDUbt7ZM;_d1VyY{CNo7%LypMue%eY z;`0o=<6-ypbjg|ujE=8=!(xa2yF$BkX>*51O~IDKIRd{h;*Wdo7O!M>VCW8htT|%Y zdZ>veetk($`N%f9I?XP(g`whb&FP|Ph$a6Yj{i;$sb85T(an~~z!A=hoVz;$|Itao zg9mCW-^Pn&y}mmsHW!B<+=k&D@1wl-r{^xa@pSh!oN0S4dIDoKR9vQdT+}{4L-Y|J zP?P$^yI~j4%tCSU+RJMKN=Ia+9z_X)FW8w}y4kA^-aH%-Kz-nPp~gWulZuws4<_2b z;MYYRhh|NMK@|9ck3*T)fQ){?uMY_0YNg&VuZEkJ4I%fIjLgS=Sm{$(RHO@KX`$mf zYcIH_qt#bdMZ;-Dz=;IfBmz;uAb4Ox`;+{!cPtR_YM@_PnT{|Qy1ajPdUb8B8ua)U z01Bcl0&p~PO`-rDhhfQ6wEf`UGa{}h2D&P&OR@kaC-T8Ze(`pKi9f+nwF97_hXYuJ z@R}2U9qoeHFo4m0zaxns6madZe;RP!#wkZYWzbeAhv`qx(nco%;snYVSzL#ad^9T8 z;Y$z$4fILbZPnTmS70=$^0@7us)Z(U%OR63=2m0mz`NDf(1%hn;Q`^h!5&_1-L5g; zaJD~^mZ`d}3XF#Ze{7qtz%>$7`%LT0pl8gcbN9dX(n>e(8R8Qk?2F!C9>!PvaOi2D z7a=SpB1(GCdF!RT;b>h79PYKy0y8S$lL;qO5k&P8l-0lMo9HuQ7Yxh$LBYGDvhmKQ zb*f+9h~J6*!+(qu;GHyv@)CG5{n6@e2lfMKlu%s;O(7$ou9TS_-yU-!<2>~HLSJCu z!;uJe_t7|vR6*j6n*rV|uuFs)Om+bz1qBdPjy^C~N1m||q!Z8<3V19X(4&&B9yJF; zpk?{6IYLGy9L^)DZox_E-o|Ci)ssnnk8)>a~@08(Jgg7(@2 zlobd36=gi1yME!})NqS*&}TWUFaaIzg;v+l&^a`mgW~N9(2$iNOnJ&R1*5Ns4Bw#c z1kLj)v$6632nvvzs_W|TAorr|g>YpsjYnS{u-yFIW+?VKvVRPj#<$=KG5})V4w>c3 z(;wvUW|^QQF*7s!$^BIcY&Pg1FRRUAS0YRa0^7pjFMsmuX2ZUKQmKn-lS>oQ-8ieB z|NN?>7%+U{%jf;&Rz4$aGkX?Nkm9U=Bw7;!k@3lr)-30#qikqBbMF7u_OY-jL=KT%~1H$G7PdV}cMa4QCd zwP^tZ#w`M@YyX;DgU{@Kg)TdoE%yRankHGhfdseY42S)qnTe%I)3D7qzKxDQhU(1V z#L&QrDeDcV+UPNUdxci+aPv7h=8NiWP_nU7$<^^ch4_UYH|V2~S<^3Isi4g; z0889bd>s^Y24wta&z|)_tKXobbAS!L>DxnWshsR}1FZmGB~=)fI=fKr#6~J)dzWE-ZT$D9OEo8Z1YJQ7y8ktc|PW zKe;{V%%O-tSPiPDhAZ3|0jsdGlTGxKON2?KvMNlm|HbiGxYO6uSHY^BlJ;;D#-Whj zFP=QY7DwF%@MoY;>YdsJIwB1a*BSsDW*I;z#1FO?-J-)W*IR!y%Eyq4Ed2(#p%mbRZ%k;+)g= zQUFv@fUhy<1TzsBb*GR(K%H86h+?{LBQ}&oQgBNOZT^tLrJG)OD8~YoE%=aqQyO;V z2?Hi8_=wIg@V0aUogE?*Bi#-6LlMI^?tRY{7{LWj%>C``S8%W^NeonmX8h2#Q!8HFqCNA7pAP2FB@1N9(sL8 zr8b!7a!i5+o?5{B4E%V>jr36}_u;=dw-bT}zZ3tLrttpo4cyo=y}|50gh{B{pZPNV zFs0i_*x-iRCL>{y&h3LqARqBW~yG5`R9RwCuhiubSXo0h>2B^{s- z{N%1-hNld*v$Y+K;mOmdf%XL$J8{(gXKNZjAh`;`AflJ&xZwnHqAzRl+EXa|fk{~p z{y+v{Wsn?Gof{4nEz(#q<^1C#skAxVwHFSa@O>9} zvf zgZtMPt-jRP4L&`fdfneJuI=yFY*-quR`!ZJC|FD3fUF8uWx2G&8`GVEYuId8IN zde&Abb(%9(Nn*D20n;6qYgASY5|@0O{dEcp2)wyjacCO!nXDg)E68me?bLFJhZ*md zZe_~l-t3V2Sszuma3R-{<&WQ&dBeGD+2gg?E$%MmU*`W&msl04+Vr=-fef-(KWvlp z|5pX+>-*)rAM;z-NE(3UlzyIVBb8sA&(Bf+Eq>w`ES(}r4<@p;#R)=~jGq1r+c+D^ 
z36JTia8(qWYB5(-F9Hn=c5g@-7+eHY0zp}~q@^V?`>2E(pppOvg(P&}9baV0(E1u^ zTz{W7BnJDYm%!fmEldgX7#P&+S9)OG|9Xj;xq8YHM32ZocFq>u9pARJRO6fmyC3Du z7a7nFP&gh!i3>7e1&%~#UDzX(@*U(H1Q-KeG5pJcMpwoUzZq$n#Nr z={JkzQ>BOOoP{;d#P9u~>^Ak#*H#zH)Vmuir~YRy){5s|gm)bq)npiFh4^b%?!y5~ z(-Uew-|L~?&uRV_5m_D^%JCAXX$PF%+wOnrDqzkiB4$N5XTMOrjX91ZaR93F47g@g9qr_!$kyiDlx}Jv7Lc|LGTQyzSFdZG@#8>_^Jdo zCu`fhjS%V+5sk(f{YWgW8UEQjW9Bm*1(xi8`H3Xfe~7s}$~$Z6AI3DfLXtY;)AEUk zN^F$rZsY-8qk|4Qf!ObxrwE|`UI2f?RgAXNcUzrO(mj>Qc zxM}z=ytVl!V%)tbM9g@S>dcuxeaBIY*`J1}J~B2FMvts)8>W2JTv`pWZJ*en5K6rl z7a|eCHDI-S(nE4ryItjLh*X9_|M5mtxTA6c_#R&sU^-Y09Q#!u?J{w4o))jF%q-#w z6cyqD<|6WicH5aakeyl4pau0M(*qk`rS`ET60|zbl6_z(Un%3e#2}r#o0WwtAsVT>KyRg zUSN}3Ol0dk@yN}oc2^o~gn7rWF>TfJFKFeFU+wdAJ5#4t1#62N3g4tCqz3*k(*L3f z(7XRkI{@piyHo{!*>BykYB+-_nu(_|oIO@ey${)s;Kl;S{R1xnZ}~C`Ht6H&X`D)> zN&vjB0u>GZUTMTPeR`RavW5K{Sr!uk>&)+;AX2PH%5GS=Knz77s?3l#wzv3m8<^Q6 zbPae_b%b#(^881wu3Gwap4dI3vjr(xnU|94Z_=Ln zNpB5~EeQNaTMkMc*V~gHFy6gM^jF1fd#tjul9c2St)g*{UqF0S;(bx5yvBOp>wxj1P32P?~%taK5c2NENgnNEB~Otq+EmvLqWy;8s?cAG@*_@VNb*w>dqM*MsDOO z6j?G)<4)5>y*hVEU?JyFpMpWkKpyC-{~fy=_PI{S$ieJ7Zqq(c7Y5zj76IYfVh&Pn z!d|Yk0n8ghlTstDMD6nn=eGEzGcWDCYMDZ-^n)S??y(%cmk`@VbJlSs z4xbpR0!0;O92QOjZVt+a$Edx1jKRF`9Y> z$V`o(v)w3#LI!rBFXZ2ZWF?#TyTvuP;gBpMu+9G9)#WHGb1}uXz{jChH}stSbh59e4NJ7Q#6F9i10O7 zSI?O_pDFsPB1wc6|EDOc-|;vbkXwDA#{r`zWW@9mC#uF^LX7-QU>Id@Cx`(LJ%aZQ zWo2a@sVY1XecPK{hxgY(8j)p+*#U4+IvQ{@(GPNoBU=!aIo>?Xwe8=25LvhWm;cgD zfz~^JmxRBV9gWQGR$UwQ7}5`teL@m?j>;|1B~bq4cSqU3YANy{=Z?rUuYblSCY8fF z!0o*8ZSeHEP}?iXkLwRr%)-O|dp<$965<8fJQF{6bK6b%ucs#}eC@DnwD>-EliCVz zY!ZSiwB(`eepjaKwt5b?wQr0IiJDRq$xv^|G*WbWS&ft`S!^mu#)aH1K3*6H4@Xh8 z7IdQUB!KVx8cK>8JapWzr)H8eC_$_r(M5?JQpLsIC8 zXBHk}rflkA`kL8fje}1J%AHO#|+iTyIlzN|)%2&G_LJmj1s}^T|K+%ZEoW zm1z@MCxi`uk(u-6vW}J=N@pG5S3_e0|J zzAPUi3fS`F1!|BqA;vNgaz7Zf-G*u|z1 zS|Rl+`y?A91%LlCm9}Fh5D(xMbZ5}O4si-|yLV0^+a^uf&euY@q%>c0lWXS`RGmC8t7+*@Z-CS?E zMnFjT!3EZc#)n)MeB}R1Ws)npuR_+B{@W$hTj`YBN%AkxkFI5&!88t!C<4F)|HG2; z_D6c`-^$8Nk4rNz?{{HQ#dzwQ*x+&SSveRh{iQ2+r0e(1 z+(Hmt&^fKvQNeWfV<}SxyPvX``wOD&&%!qg%7?81Su-FgB$vH^zaI8_0Vn5%1(Om- z=*2zI6~qDC)G#0lthxUmw!Q-z>%RZri0n~D_6W%)*_ojvAtJj_lAXOrMr0?-UfCmL zuP7rs+1V@Ed&d9$y}Iw`d4A{ouT#!*p8IxleXq}WzhCdy07&X`dwmt8E9;oW8Uypr z9Sp~#))b-aqh1kVz^v;lX??fL;hg4JoR!GIA^LccR~ifx9`q0=?&JO#$h)IW+SefA z=d*UrCx_DK3ME=Su}AVJuK^bawH%UU@^Gs2Po^^Ud8e)3dnA{})0}pFB|*?AxK2Q% zD_=(xPAOh!=ht#+cZu?s;A4U}0EhmSSZBMdE9#WP*XM{zij+JH+{qz26{ zAI1n%2un71xhTiT`q4I)jCV)~{tt7935GT5FYoHRO{pNQM!M zJoGqlX^$jhicSCwm)gIioP>{$A8lU*Ui(Hck^CA4UNDU?*t3OHU5NJ_t+fVa6QmV% z$o4bp)UIz_Hr-ug=flHk1CQux;yj{Zb;9~A=`W6BBdk%}#v1M8o|0)jPi0ekqRXbqTBMb!!f3Ld^vt62S7=Y^p3Bdhwz#vksb9I1) z0lC4Kz$Yc59j{><2$;yI8jNlb>=*ca)4=8varjM!7~Q;i^QQI>L!q$nIW8cz0~i5@ z3`Q_8pzz{uvxVPJnSOtBQwA<`*70Qs#`x;CUKfaRd6=m8pn*dgDWOnWn0h1&62R@d zNzZ{Zhv+cl5g3FcbmqIDi2IF+8J9orhdp0}MR6TNTiTQGY}b8%ZW8Z z$Q}&I?nS?U9}G@G&2v={<|>=1e!Dukrbhh3y<{uLDTHf@Oz4qZbs$6Pd|HJ6<0Alf z(Tgx>g9ZDUX8>vDRz(;HjW_q>+p}!tt)cX+xF^Hv&2*hytw<#xF41zpUET~wR_d6* zw{~ulE&=ap=b+p|Q`U^ek?C)Nk1L%5`8Ab9*nLwGH!R^H3kaV(vFnfJ|2T3pe*f>` zjr)Gj^}^!*YKvd|5UE}|h3o71Ag~S*so_FC2v$v#;W8FM5~MV3ib5#FzrSjdLFL| zn|;!WUTkV=S}9qC5os|D8-Lb&jOA}4LkYk`HsO+(0Z6dStO&Ysr2M$%(%ss69zG>S z*f!icz`V-NAGTu~oe}*LujA$SE=mP45bCN(sd>?wd+4kFN-2a=ZhD#1)^zSxTDN!H zz00ZjbGYF`Og1@-=ViJ>IJ51Q20qLHa`C1b?uHqZ0T8klubBy4i*V+}WBRnLR9~*l zrr1MiAhWaU)m!$Iq0OZB^Ce`(#^@uokZTKA9u7KkD#Js*G?!=y5f}&{@b!82A9iLX zJ5dEsgF(R#4yc3x{fH(8qTmkXAhn<26rY!d;z{!8S32{$LL5y#xjf7p(f~|20KDI3 zv762v{s$5M+57LHM1&z84w$j}#Kvm*Y$Fy+AV#M%yhILeml!aI>-esofOZMkUbG(j1o)xRA`kl3x=P=h7NkvMjXIPW?1i 
z!Q>;}8u^A&sTPZVsj1B!3H|*gjMhvdU}oiZ-;W!xdf+qrSKb_%{wgi#PT-y~m^p7O zXO%`KCRxRLaQw*iTc|t!2cIQCT@P3+x%2P4e?=Lei3A_nq218$FCbK z0m(?+gVA^+4vp`U6qF_y8a#NFE_u^*zwpk2>2b2>aMu}yE8X%^9X0=%K*ABM@ojS; z(GJOBhcgu$EqdIGVeLp4EcpOhB!$s85_3rCpEf7>iXOPLVGL4X$gBdKP`JHdW{OZU zF6N~xW@ctS0&{P+9wdH_Bj%fx+de`jfIw;a7r<*lQsRL$CzsVvF1iP|8{z?b;UYeE zJVG`@qGX|RrdUPj$ZsvgX$UV8i--Iu$>rrZsvON?ymCkCfO3X{7qH&_;?xa7{ODq4 zySJ~655lDApe%u<#w_-mdy*PDbICW%OhtVdsPFRLA?Wg;Vn> z@SIUE)4T1rouc%C%#*M4>6?l0k3hc$C~<+?_!;_Frs$ZTlo%if2!-w;4M;ICIDax> z3n3OzR3N?tD*0jbIV@A#xA8i%*G>pJjlI(x zkj?kv!xzoSYrh@~q|`+}Im*B>F;uy^H-EFOZotNi>(W}druR?X)nVpWTp1Bp`SMn| zBx?RV3>qv`LF3CFWK^~kC9mhHcWJf5tn{(+afBoJ+xTUkYhizRYIc_rANb8Socgv+ z3IS_(QQYatK9S(ccTA>Fzr}LIkn=b#>NCPDyDpt|9r$JNRb=N$u4|~;_g%Q}=J2Qp zj?#w6&!OYR5I`lS^obTvvs>c+5Zb@3iQ32myXYjASHMXo?CTRne;v4PabR;$0VOF? zD!|obkOrDk4vlO?a?;q}f3eX^-x~me5a?^KXem~~uK=~e?yfuxVpi%K7;&^4JZ~U$ z^39(J%@z)$O7}y%`dRoEIavC(a4uuK_@MmG7m5hftag54{EKozeY`~rxBLPY5n=0M!{e%K;S4jK%E8~t|%H^+ylGgix9iBxYcVgTpEoBc8P z6Y^1Kt_qp$l1_Ox@NE^PO^FX4*aT)JECULRL=fDGpiSxHyNOb3h&kGmL zb~237X`mt*BaM+~S_;n;aiO@lskRgpuB7@DN!adsB97Ink#) z3?6zV=cB5|3SV&yPPazhFLdy50c?;-ef%FRFz6IAY^V^_Wo0CK9qaSYjUq-I$Zx?w z=&{*%o+*9=>#$i!?dhPz;K^U+5*>soFWzbkfXWNKI<<9mAECg=lWhco4Pujo(D6Yp zvgpv?JphE41aT)KsQ4ms^$68zoII5N+w?DVdi!sHl`dApy0;t=nPAfT4RxH-%4P90Ueok@Q_02hvjg{-O{(ptLJ9VJ z+AXeX2wkk?FHrc)4|m)7Jv?CI(GYoe~kFAp681rWxf{VyYzpcW%OBY?AR zDIKk2(7;rPOShZ=ctg@=W+V7qu%;g_yXuMYC}imfD)LGpGl_w0B6Iu6T@A_I!ciJJj`yD3^XoI=_r$$|=Wo1CFVQkR zuZ4O%b9#tx=y-fI!qNS8ojPQ}yj0+$*Eol`=HGuV@-Bxu^|@gnK& zC|QtLqULstW**~CbU0Jnfkr`o@5H^pHLLeu5rV*QDmC*l={fZEau-VTL$8K@ESyUQ zP9mN(Cp1SI>&-ma$i=5o_&=VW5hM<+za4n~d?I%0PL3@8>%r4{KYOm5=d-yO;a@9K zI{pHXZkQwsqeLT2L2TYfg2>)*Vcz&Hvr2^@yY|uddtarI-o4E z_4xbyi&nlr0p-&MwBnC|p$;~Bp_4t}phVjPe7GK%a?_i}n_1-@_I*4K-OHF7?PuyhI(gW_$8H#Jj znHa|0;53%)$JKVX(5S+j<6mqkA7Hq=rFK5xSQ{!dCKuxt&0yIt%f#}Y7U zaNtzzbc@`D3Ra~s+a0nG_*{NM`3hH$lg)Lkt7LeHc1JT;<1-Wp+(``Jk_FnYX(VqC zW}K_4soM}QKMk;KHxJ|wD&=tUKdU@yED6zxu|uz;(I~zz-_5kYb2F2-Ub^X9OptCR z4j*+_5{(9}R8iY!pK2-(}5!sPI%93r7H#hnb%m-7)%uB8=;<**@*%%6_Lm z$S`gs=>9|UlJ;}PHv$0{k$08t{;JM}u13q#KkoV2t!gU(+7gDhI7bN_*FsUxBHJAT zyVKNOD)WrH5#VVqIXB2g;Hu`%sb-sso@ur-MdSbX;)(2k6Q(uT#sQj`s~a=3pu}Cs zFkiXC!oo7y3LmqBI_n!S69Y!!dkJCZ4I_3k*2N@DB-tHxln$7OEK(C-*KW4ZAnB?R ziHVT24ZosNGWS!kJ{YTVWUL+kW+0*r2FR}&!Z|n_e0~nq@)XuzR&1&u4ah!H(X{*S%u)rY zlRHCAO%0-3RLO1e)YH@3I^_vwPD@Wu)V_OI4m`(Bvompz%oN$<7hQh6Nb z6wQx3x3o0C@qc|nP0X*uSL`5#kBuUUgJ^1Gtb`3Pv4nrzK3Mt^H8oewo^zq|q*_Bg zapar7L)?%6moQqw@nZ|9=TbG^*5SLsh<4i}AtGWbK^6zD2Db3y<733QkErm28EA4Z z-Eb9zEBHyj*cFuoX#lp1L6ZV{Q|0$F$ZC4Gy{~S|`MHEB-Q1tx2KBkIoU|x8D(c|e z$s)OxKnAA6SVO#o*z?#aGim-L-==1Fwp3_hu*Yxo3%6bk26Lt{;d|ADdOuGlTOYn^ z^ttCF7vl4w*~c;3yRo`_T`sdJ@>p?W`XjNe_^u<=qW24pSHFI8PTj;;$we7LX4~JX zW$?d|_ZH?M*Bl>C@|Inw4v!Dw2z?K91LT2m9KXgkR_WDx-iCinO^DT8{~@R)p}P?; zQ?JDJjX9Oz+qz6MI&@XkycZ#s{?BE+1n3>rTQHnip9QUsXwHS>U~sSA-%L*y77YOP z7EpCz7%Ab*M;08ly}SUE-b>)ZY&F^Kg^pOE0k+0%z{|mbhj2Wdokfsf$DD6qV#IMh zf9Z6rz<@;J2kYXd&yDsU<}W)>(F*UzGcNdzkbD|VxQK^^kn~MS#7o}FQz1?zock|) zs^5mM#xv*=1qJxF5>ijiIf;6i?_-bN!^m~OaQ=Mf9T~vBSXU^F#zLE{UCT~7?RE+W zY2fe{|9c0eeZQMEFm+JWz_?XG$oFeEs;KCHM@QC|P&f1fKlRtzh$4gMow&!N!|H$ouisO{&zBvR+8$jXKSe9;^H#D97aQ{CnTPqC}PVz z*_viHk;rjd2S~<>sna9iqk3NJekFtF0XiZnadd>wlecyQgMv%j8#u@_fVh2f9tUg0 z9f*sAY|Age*UjV@N(J3`A83ne)?YxyY$ixugOHNKfOb=!6k6_V(7*=Q*$0ztDlxL( zuDdqt1MR)z4dGK%){O<4oKIur9gR8%FK>5Zg8yMfq0uqc)rak(=#w*gKIK@U__ z4i@1icws52mtL;h z`o`%D0hufH_dq2ImQ(V7AEoVSwHpD?=Eg_S>zB$>>7$XwDDNa-B*%yzFwlqE0_2Tz+9A zwpKRsf@oa)WnzWX%q#PI!$o|^ZcJ0>xqga%s<%y8C{O>f<0uK;=a-NphWRIRpjV+N 
z2s;;GPF|f5R$oQuNhr1i`xQ>R+e6Xq=YaHGx*W@CA}V7DJ+Xtz{{WTD9M(wT3>7et z@cxM+O$f3w)=OWivx}FpIwW$c$&)(7EdtA`NyX6QPay> z;LYOrdPC>N0K&HxV1WbG_gNpz*%9A$fSBVTn!M{1LddY!?fwY`Rrt|oo0uu0$;_Ch zA_o&mp$oMut7C2Zd+P8whlCW+V?z^M4o@r;bFOY?i?Ftqd*lG@2~j&X)Ww`dTb-e z*!Ax#lN__rR6A2CW&_^oq6aNG`TQ62jHe)+g*dvoy0(O|f&nmse`gwV2P8(_LzYJ{ z_Inh@y+Eo{K!I?;L4AgofQ+_~Jn86d1P%rkHPSdwro(hGi>E$;ZR0ft$>B-cz2v~nN+ zvy@C=q8fpP^AXnD=3tHkt_QO9HOhFt-q4zef6U!Nv3EATz0p8GIPiJkL=lwvV3Gn3 zvd?{9z<{9z3Ja!K%2PIZP?p=A9N0=6&P9tjttEUtz6V-pV5&I(&dK6R0`Z~-HEqMY zp!3h(Iy7wBt4(cI`+#p)+aX*Pc^ghX)dAs_G$G732lyR}E}G}4iOMal?ShA2Yj=hu z1nO5f-7cBv(eiBZevjEJQv8u+87^{Ip%32WTou{)Q&Cy$6|O(%w7vIv_Ht{gXDd&X zqdJQJ8FhzD9#v3?j(fb2Yj}){cTAy>b^R7Kn&YlU)4=`q)~0_~%*`h=1>4_>VWu*F zd;pabpl(Ugxsij2*4$~rrFAakGo-6H$;!{i8WgnxfB}e@btvJT|9do!K%spFrpf%^s}_YSmR7NuqQTECK(P|U$}3gyB~d;3nO!VDmw zU`KonK|obGni|kjhAy${-y=CENrBQ-BL`Ut8nS=jL*1$~0-owet8uX_4_HD$Mo0h% zG4Nf{RHv{uTS7+)+9Khc_Je*NHSwU;>++faw))A*b`M)mR~&!3v~)c?=rf1GSza^w z9!SVwE`l66sK*ecY|S^bzx~!pY;{ub3(){c48glBdg&hzota+idFysCjtN(MI4=5j z(M)u7ude+O&lkhSYhy{DYujAfe%v{uO%JoCGb4EG;E19Ma^OFs_`EI)0^h{=Mg!Il!NSrx`IzDAX4ZO2d&M zVGM>;a)Z~e@(I{RU^zt>%WK$uoIp<`3i?%4N7uA(ljXl zz5wCyZW2D&66spknQp#^uoRfCT?SJ(>&>5t)&dUca%XUcP`mAT;EHkKS{X&xro;U- zxdJqC&6{(2aJv2hGgdfVv82~hODwmCLmEGg1HVm_+GZ2k8A0}zVv>@3{0bHfK6hTl zvKTg0J2|+I9qYOeZ^~72jl0%Dne?c7bt;j2zi=(cHC+cC6=h2cogOu|I8j#m))UTIUfyhkE z;rC0h<`OsjMR`z!4?E;LOK!VeAw1IV5HG1l z>YD|{`V#jr#`B|OrdpH>bD zwxv`=vj^yKJ1&umS{--(ZZt__N4AumW{KHZW4LdZH~~lx=LiL zew&&pdK$PZ>1gW>hK+7mj31F!#v=0gTNh1iS zv=%1&x9;8R5#ReY&Wa@?4HN)6eVr;WF_HwmJK+BS_)VGH)$WZx2hGlPmu?W*fQ<=o zi@-rvj^w3m4MF^4zDpb3;_FCSxzIX{SLQ5q>PW@A0kbOd0uMP2W5a+S8f|s83HsPE z7*vYf;P{7Q?_V7dO6T1F*ylxqwq(&XV`Ekc93J}PcNzkT!##ui_K868;K;PPVKBCj zNYDS_jB@0>R>1W9Gl%(8?8SEw#c1Zgrk^!Xm=CzS@|WUwyT^>|Q7=_W|0SRYprL_Z zXjB1jG%jGtFx84GxJq6ni=7DuQA+R!8@|3%X7@yNAb($?=fwT}5rPloUn@ zummD1#76vKVA@(C+Y?TG9dB)IEox;KKrpR5Q$r@KPD4o~;*^nrl(^jayC3SMjWKku{^)K{z6S(=)ekRg z!l@-cTj6A{;KJN*1JA}kJ{FII66+ta4ziqCUvC>eWT}b2{`^6GIF?K!j6?eXQiD-y z+igv9ZcxaB`dTJkA+#6_`D>@v!KzBHY7v`=HV7o^VmUKgtpK0`#R~?0R00rm@$(}V zI-=SNk>GXEnnA`@pyH{+Go z;dP0Fm1{w+z+=(yG)Ztr`^8;La*L_4&NRZ3=J_@6wwSj)d84G+!%gLN?K~<{BB90H zkF@RWNJ|Juq6)dCrL`0JPK165OFW#tJ=VO4+O3(|$5~1KHATsgWMMIYZ7iu9EaaEu ztC#x5LLYTKg~BZLIboa95HmLu4J01~)gaFlW~omui|aov;e_;!-sRU(gJ~L~dt%f8 zx&V3)-Y?GQ|Mh-p6z?Tpn=lVR`Tru@K1c}{EWUi;Vn-AQhrXr0jEq1LM?BMLtZUIj z6=uKSoxK3>TV=4rLX2O)X3}S6Wu?}e*PD60P}I1EKr!Yv8WMs422ODjR~&i=X^;2U zTcKaUzp=i$8VV?bQgs##qF~MkM1AAE1PJZIat%QA^kf}_RK?4W^ASQN;U~H4dyhK1 z6@JXM^L@yeqq}qPgmtZ!kN)V(O)vpwjbw*p8wyKEwrTN+s2XurJ6K}yUNPxd1bS4t z$8Ec-@5ySt4m@6;c(4?*8b-VS5vzA7hM=3c5i7RZfr+RhI7y%$8_gd7*=za(U#Mu7 z{L)Jw>&`DJ!cUsS*|mL9NPnA-XsSMDNx#D_C>hk1v1islT;>M++sdos~; zzli%1LAifZ7`l-RoAi8|clVQc$-Ovn5I90{roo*tzw zS0IxBjGLkmzwJ48F9T0+0(@R)oJ~iW4vu=q+*>+u?E*m2t5>TrrMJ+RUMXtAUJ_in zgcBn*8lThKYWh&3r|vG+n2->)_hmyJ$({xw)tXp1TTQ^R{q3?Y>?kyg%4(UdU48Px zxIsH=&)@qO-(w5KxTBDowD0*=H7ePo8C9AuOzp6Ah+XlFDY)bd9$2yzqOhLe%OFD% zyIgY3{?%B!s&qX)A6W6&R!wmV#wl+`hAmN^nb`yv z&Y_RDsCfTuiVu8X)q5#j@()I05*1-)NJwHXsy%>03$Z@|?`&5)p(x{PhzI@4V?b(q zWS``&!t}=ef)g4AL^mjP6lTGY1jf{bNurMo8&vntn(FI$Y;1dQk0ssdz=q|YR}A;} zisp<9%gw}6U%&u64955K%Sp#Jy+WW;24~8zus$L6${WstG@!_ujXaFOFD>%P}FlpJ~DCOm6VqwQRoUY$mnC6+Z<`enmL&0BH_sW=MB12cR`U@6KbR~t@W%CZwLE-8EhV?}My z?tUxbJvYLIsz9cujKEK7A%8h&Nbhgu;9OVgh;RR3AY{Uz#3A$JTk^r_nM1?s1j+Z^-va+(qmr2f{BWxO&bUmD_1YgpcAz1(>c_AfI zP2O5C_g82|Qg+~dr)U^;3YI6jqG|DfVG@w$~P{9C;vFJsT8_qe8p(?$~vV8rCz#_Yb_rsx%r@@`y*r#n!fI1 zmpxPM)q;eq6ka$aKsn?GDoH3jg7K`NN{({;Z+o3q5(3Z9353;-pj0mlji>47s{Nf$U{*q)tw{L^sX4f`qWszytS 
zhV2ci^mO%1rJ&$o4*S;+@*Hj<(!D)491IUYT$_g`5fpEG~SI^$EX9Q<&<=)d=?QNwTM;{}De7ADT1ZT`j6ZxRBrEA`9 z-f-uAeVstliH{z_HS{< z3&mLu42u!a4Qf-+c^=9(R^j0RC z9f;K+C{9|5%*`B-2#eDFmo^ZT@T!80s}AC$JP{N@IY^(|2GPz2a>m+)wx9(=F68)PYU@!hM&oOSMGtqNii z)v+Uwc8-dEt1;FS-M)0@w~mIvJ=N|le_a-0I1!hH&FK9|GLIh6qhp<$Ih5z@efnMS z?*x@4L$xDO-**6z%hw9e3cZlJ{_m>fsAMP69xPxjNypNC#Ka z0zUfKGIMZ~1g#{*<{^+PqP_$|vPc;b2Rmw!LlFSl%Rc0tLw?%Vu&Nh}VSn%swQT8f zyK;~pzXFu1n+?Z2WkW@}72dKwN+A?7RA0K|=OfL?B0VKqW?TyIVlut>IDNI!0F(A? zbju}`+nZxGe=^o$OJ&)~2JRS;*9#T&OJs{aDNlSgTE8-fl8nxGAT|m$uA=JnUl!DAxu^V=Iw(J=>``#2$4#zXR_d6!1V$y1b@8< zOoF5OBoL7DpBJ6?)gF*O+Jb{MBAXO=*Evn+!$3%vr@?{v93kE`%?+$bIHoc^1NgC- z2rT{)dj_&Bu)GVQ-G*_SXf}1D5&O~IJdmk~XJ3FdAi!$7OP#U$^V#mn6xB6I70Y}{ zuJ$agDv3Z&w*2e1#OTZ6^35t6#vA1aw&CVy<~hsxs&8L9znqGBQlx!j^3J^f{#af3 z^?Uh`4p1WFZ!uuGFkCw&cOtg#=MFu#SK%M*_npdDP-H3%A%LFE1RQSN8)OvyO97}% z4*PMy1q2~R>P$-t$iZRN@6iDiKId)ozdPr5=K*q>o?ga)WTQDHs{iTx zH6DX1{7@rX6&-PMg%27rDUBn*xO2$7pZ+`;0WC==C;|{KnDlT--@Lp$ljF!;M0vEmS0Q3^8^cePn}WK!0`#=jv9rD5O?WOj2+vqj7mo@CYIm zIL0)ihbmVdS0^U1}($OwowC(!yxh|T` z-wZf6)w05;)_+?sB3u>#CBm}rEPh<;Y8NT?yuL$y_v7rn(|Rmot8p7?Q{?o>41}uV z*m_tPp)dZQ5UqIlN2=~lhKVig&-9gI{)nS-zOk!xW}jYK#F)fe!^vO4Z6I-SKB{RP zki+s1{*NdEW?3GW&^|B&rJHbjv8NSOY{g(d0Cr=ao0~85mg#CURu)U4ZFR-KJqi>q z#48TWpMVgJ=kWW7?LwBaKT!Jo+GoKP#b0^h#E=yMM&Z$i(3V16Q{WFI%PzY0&&dhW zXa0SBhp%+F*Xo3@0$!d2Vy%RP>QV@6L{$|cQD&pM@)*r{lTzV^vTFPEAXjgBXt?{M zL)uTmy^k|2kE2Z9pA(YN9xrBYWj9q)wh$ww!M2c&j}RjI%b#dF6u5uC*mz0y<@CvcdVZkdY&%q=@CAL~FrAMiXRi<2mc5|gwds#)>O@~bqSi`M;=yw3r?&1}73QJ8 zym)4RCfD?4xLLo-Lt1Q(hd04Cz8kF1(CS*y;HzJ^r?70_ZCW(sDu^Kv3!PsEBQ$of zA{YRISoT~*`=$TzdBEzx2i@qrZkbu6iBlXI5(hoyb#|OQj@d|j9|EUg^@+i((TNLe zMv$OX2)Kj@pNJSRfd#ulf9M54poCx@2gk?a`6^K=&d!zXvp|4CavuEhuIhnX8Dd5R zA6lG5(SZ3Tg0%x7(=qQ6#4ox`o)JSlrYLa2A6FnSVYW?syb#k;AOmnnrKVY(9hcdV zLJ%v=+INNX6+m*KD_&7(*WRPoRD9fAxQ3ckD5ICXGa9RB5#^1cd?F!)d5x&EO?(01 zV_|wXOpm;MQ(-)$=HbvnCRb}QhD-~iQbI)z8-dreceQmZT{Z(s;kVNe3VN(EtOYcB z6rWycBC2p_C7d@-am*!?n&>h?a=0u`C-pJ>_*Q$_(_N$s(9w5IH z0f#gon%zx8!cO7a{GTSdC~VagftMRF!zo49zz`Zh2YNg#uX(_rM0NHu?*tuA9zW=D zTJJ^oDXQntm(j?@wL>o?JeeMCWH`#OFuB>ncv*m=MX;cK^P3XAs^(6yXY(7kMwfo! zimU4BhzNT3IBHx%Z57v|x*RufouQ>zteG$KE17d1l{zxEOWTr*e&-`6? 
zU&`tX*12;=0DK`rVL|H9c7BLAlZMHd8t8G__rTym5vD+?0}I4nZ6*Twx)nqqw~4Ue zg|{LQ&cup^DCM-W}KjU>-i>CJotVJbxE)o5LJb|~^)x%29l6g_6wz=^)k z4V|&!Kl=xGjoAPP9N#Wq)$PtSQ@7#Vr9fehmNF#Bga^^eLT<=^27*9Lj3s7hJY9dl zd|CAN)^5-Gm_LYun}jpc=1?8kZTbH*GJh*bYQvI)*$AMEl>v~s?O?06&B?y1S&~4V z0mv7%+s$aj(PX++ZF*SBCRA{A*HK+d2?;JS zuTN1=orz*qJvdT=TjIK}9`=v|x{W0nn#W@yfv{G_*_#A66G@@2EilxM#!k>SKCGqI z_r?n?n5>`$-o%)-RclscG6tF_%}%_2_4b3k{MX--Vpt7PbzOFb(*I%I6+JdZ65vsV zmPN4;~)S8=rhun3F5j70e%o` z;{gHWQPYNYxrh;zgrvoA@db}7gD@$b%1ZY11=|o8Nii{M@Y!nyxsO@DB;<}N!1_TV z;@n#Y%&&uuLm-gJM#~V>jfYn}mjp~!q>Oe?@Xq3~h7`&t*><-)0E_K#l&oo26(>~T4SEtvm0!Kw3^g|;aV$i^4&f>A$X7A7C+{i42z zQerl;ereYC($~|2`_%_p5}8z^=qGkbP)*E&bo5`4vV#!z6AVZczXpF;KS~C7b&#DP zwt!&%TxPZk|F-d5TO@MGr_nEQfzhq}%6JVWWP0QeAoQfj-S6zgtYz5TZ<8N}eU3jrduu1seNjkn-1|PU3K(`H z=Wp8g)t2^K`H4cLio&M^eg?dUN7r>feU9sSo7R<*#i+&pcmLetz;jrt6YNM4Zh{bW z{Qr5^4+ECIUznQSC{ku?hRg&$;45Fzl0OGABk>XjIqIxsAjbVXpq&LOQlxW11a+W5 zdPiFdtX=qUF%c^rFoN#|;1Ws7f)*bnHDG8B79|*nN*V@yh>Ho5jk8tlun5RD4ELJS zUe0@9$!O!cVRNiLe`aAm6!U`Lm`z(dQZL+aK zm{*;+l84sFrzRI;HEP@9Zgq-^ofp1uMIrWNFLLA{^Nd&Z+a-1sFG4`^Zw3erWPqrj z7;!pdVW%#~#j56M()_hbWVsGgUl|fSJwsJ$FcZ2&$ZTTwR0{x=e_k4%{AhZzC^iMy zb-q#Wdl>T#=3QVl5=cn%mQs(G5C|6=<4%(RpBkX(eNb<{r*#5Nd4ia!OCa1>h|y~& z%-6Yf%ga6icZN9&&ak30cj}4@0)6!#q>hAf<<|qqr&b?qHCwmjbKR^HqiXq~0fnT8 zyXvz|<~9wESDcqiypeRZGWR4!zTShYOUqaNqDrER!4B=j@lVC$R)ahB0qZ>OfwrNo zqJLwB)4q3rcih}wmk8}9<_>k}+;Ybp9^yh`d7e1%KLjAb~T@O zLn&+g1Pzn_g!rv!jvy=&GSUq8V5RfqG``;K?7sprd`V?wu*l@qg1t2|ogxhGd$YtR z-5s-dw;qmj;r7T#=M$&s{e20Ewv`);^pCwQo^;>Xd%>g@nAZ*lF%3LfQ$-zD{eN_OQSs0d|%MqTj zh>}WEyT5ZulmdO4RHz*pTxv#}Q9Kfid&I>D`ohoJM!tPU*l6bzoZClFavjygG%$>~1K-S^fbA{}YQOI5SJkXX0 zZI2m%2$F|$7vXckMh3W`d_`=DAY=ogr|2LIlbxO2TOr*CsW1Kf;L!6yY^Gs5{~G>y z8Okc_^+`N9pH_qtsvUZ+``!K$hOe_ync}aoD_lK9tT4Io$T>xZ@2B3JI97v%K>Zp0 zyM|M92lw6%-sW^;MJi6_lJDr8AI@^eeS~^C4DYboU7zK6LKo*htFzQKaQMN!Q&jML zh#MD=T@WR5;;L!J#>c#pYBLHk-)~piP6?Gwm4=}$V)*42i=e0^FbQqx$IUG4qv;y96C%~^ zW@{Rkf0aZi$2Z@HoxNb(;0KUW(#@F!;{34vl@UyI-g-u3`J3~-b|Mk6=wDW2Nv;vt z>{mRz-OV)_7oM3FI|CkG&1OP6iHk+?Uen>jQoVT!yi};(g4s?bZRB;oc=_Y?_c3if z?Qgv>Wu!w4!e>rpnx}tHU-j~Y-In_U3cz-=05xmh)T!hTlNi?^(@rE%hSwUixZ0Q6 zn;b9oR(jDld0cSc_@Cv{{hF}R(b4V58c;|u(YIW;3tEGNyQ~i3i>2Z1?RP1E$(+}z zm8@udS@85C|Kh6;LixG&EpNzv#_b3g@oAH_xdS#(38@1~E0f6x*2$mEFZ%fNFH7hTd!snnnp)p?^M>m! 
zFb~tYyr5?I%57wj!%zr49irapSS<<)N3z8M_+lFy5AxbxJjZ&AVrznlNQlBG0`977HI zP+Zm&95>ocno26MI*Wt_n^SHv8YvxLC%=BQ=k=->b&8vA=i@q|!oH4{Te(YlJ17Tu z<~nZ3sdIZ^Yp7v7X*sgF%ZC@#+HUwx^tk3e0?NMh7{oU=m12iOHQyHQy(hJKebfN?s)bU{87%GF~c;rOomHxOM4MdvI#8tWEkPmYnvOlni}+(c7Qtii-f z7P&hCKCs^qI&R=qHky3}%sp2?y?RjMWEJms&Sm%hclB|;T8oF3E` z{#fz|6n(xSKjRum0LF*Ww)X%rSQr8r61m%P#O#|Jp;{oL8YqZx4bXy%17ox}bKC^M zo1_%a7&XKMk0MV|`Tl8w`eSMeC$pv1dWZo`EAT4wm@Nm}Sag7^D$K(qLr!oN?A^PULrAFkP(kAk;23jra}TSE1A+%=a*?UHl(F$3 zr~28X?uvpM;SIY<3=J9#rJu6jXxSCKeak8&L}uvN6;&KLZ)Zk+!tMS&L{cFT+wGk|EXn zqo7lmo$TR0efo44A{RIt!rVzBb(E80aW;w6IedMud&jAZgrkzk(;{K*0T-7uUU06kzCbS_OO%u1qllA>B3jNJyg!9`3w&w&$@r__{tm3-j@`1;pd*i z^Lxw~)THQ2qhrF8CH_4f7avQ8KU9|uEkac^Rk!L4an$`%si~D2W$0f2m(~_1=#)_o5vVW+dwDmD)G^9%S7~2;5`{nIW+@~nLgS^Lo zJs360`pIo^o8lBsR2g=CZ)!WYoKsk`YjL3U;-y#JA7A-&;A{5Vp*24|kq& z>q;-L?vdij$^UvI@`pj1`OMmlEs_#YIoNVPD<{t=K37kg;-WjcgO)>0wR>8)%iB(> zXQz~iy93ovfZ6((&T7Dp{`;|G^TG(dq4;@;>d&dSHwJATsGZF$=;?01$L+7OaVUE+ zbJ)wdy*WwhHpX#uN@w+Wc4Wo=++4rj>lIIP9nLcn{)CSWXL|zj7&oII%fwuk&ki&F`kkLItpA+XyZP)B^ms0A;%rA&&fM)yOi?mUQY>-=Zt&j^ zx5M+f7>)0_RTCYWD2g;RQ6Z+50%u< zUlVCQMHuBoxL$rqIA_3bFUsfN6l%R*Y6Fi|-(}`7mh{v-`v;RYA+xJk}ap~IaR4?-)bLGSx=5bRM z8lUx;S7ipYT8-SuNoyBO)8A$_C6dbC@UCyBxM}^Es+Up5t)Z3QOBc@TulFS{ie>-# zz6k$2c{<(}9GMwN3!Q8+Ix8EmvB<7AIakknEuWeF3B6m(5*Xs{RR59ItuOi9NiAyb zma?t;RCq!7x5~8pd$Lq#d7WcKsw5|5Irlx}HQf;X+YTtEUHGWGYccig{jVL-q_#e%^cOSR-Ujpl;_++zcaj!S@gkjF1GKln5{RCNlSW4 zF6CY6Eq}b_`hWc7&uelFc<73@yrW8SF=jUPR8!MWEVf%+G#*`ZHnAjevb!QdrJOgW zhShsDd)JUq+Zu~3)f)|Oj4}c(_(kRI`NKJ4FR%r_TDO#!jX95R{N2~G$IL(9`qy(~ zv0oF}MXz}A+i3P!TB{e?gsV7@zqIKIU&gI#GF8IfEB|RQcW;kC^d?QEbS1_ix3Fsm z#B!dv**i*zrAtX{mftM%<3*5}X6=da)2Fo+{~hWqd`_F$LO1E)RE6rt2M=#>^?e_H z8Dl3g&NB3pwd%z1(K6}V>(>Mn^6jgWtZ4ckplt1C`PSf2b8w)KoQnLI?}vhmo5k~T z_o})KUe^5e)qlQdTl>$ui;t0BRa1?4RB+zo2L6+`Pw=MZ%LrcTd|-CX`N!su-)ckK z4Mr`F(VFC?e9~a<&1*x&v2>+w{@zOMH1UT)zFgZzM1BRlKi{6_=lk!PU!g5Q6dMVg zr9;me8oJ|s8M5OZ^s&UasfYO2?n}~(+?ygp&H0YL4i|LFRnt?>&n|3~(^S^A zkDQ26e?B{^f3en1s;rdfFz0fUy;)_Hbb~Kk1FHs=_Qo>L*4_<`4KvC-m!;(=x&7z+ z{(8<>K9BIoDE1KSRB5iFXo)U``l-@c9m%K&izhKoX|yt}9|~<=HYY~}K5k^|(n+a& zzFxIi{*}J<U6)}?dOXOH%kjJv(V67yUv$oKRcXVI(4*$v)xH=56tH> z-fdxMgl5OzR9?Ym7~4~Ek*W03gwo>(CdoXA7Z=Yw|M@n*o{R4;NU(`t7p4z~$>Hi_ z`_>ts6|;F<^7Pzb;-uBc&9=3=@*dvCgC3<*w+{Nv>L-d>9#s9!+_=rndjdD{eRa0w z?SkXAYrTWge3!>8GTfB5TO}#uWzlo1r8e;wg@ffbM;qULzwtL$DZ=%SHFopUsu~i` zS8b-k=+Vk7X?$;(%wg!?Q&Az3V4kG3zLHjF_9{!vZ$@;-_`I*(vH8=@X%DUf=B$)R z$>7IIa%>$T&W;Yuv*wDrA4PHZhLi$luTFGGJ-zodYUQt$_~(UVR$We2$FpcD9My z^;!K!Lq4;=zU@9|X-Deh$Pzz6ZpDd9#H9S!!_DD4#RgZf&9H=#8aXEuty%G8pFfFu%f8F+aKO#yPK( zq;o0{3(mszRQE+1`Xy;QU$@v?l)AXl*Qy?PkYe$u8*YF=0t?0 zI7Frx3gszY5r31i`{sRJ^IdbIE*E6Jeqr?;Uuw;H(IK&Ux0KV@|Mji2=Y?q;x3LTH z%jRv*M*qZhR7~8X*COl8b;b}+`3ueKO?Q5|wT!YYDq+l?*}uuBPRgCOz+jbcF9l{v zZjl;86|K(Li4N!AckSbaF86&UyTM!Qr7BhGr77neyS>-A@8O)*5Ag{)=TIM0W>~d3 zEANA8Y>>3f^BzwXcZa&FUgw5&^B8~@jnqB-YH=1pGTBNj9RI>QEZol>Mn69wvooO7 z_Gm39CzAgBOZR#8dRR}|@;z$^V-z0!Q2zHbuFjLQURK{cuFHF`}d8%{}zEI_ebjjCozV-SKAn^K%BX&ovS?p0h7}(SQAaXw!nV z51w!CL|$vER2rl%<`i~69;&%4Zd2@;G{*Q7vv(WoA5ZTzv`yErGzE6C7O?VCQfeEXl|LWZ?uqFJovO{P`V-PZwTE0cGmLpp-#(=HZ#I>MrL?A1B}i%h zb8R(y41_F7!UcuyzU)?*K9e@2$SG&nIC*X38rJSw`!=R%-}PQ2@+qZLE3LRKvoDYO z_z8Jt{15LFx?t{(`wSf8_HVMvt{I41e8M;L*}&anYLcNJ&KM4yon41&al4>g0qd*D ztk#11rZBgpr*XgA?d%8hc6*x!>~`Vp_y;RlkI7qz$0%ngc^QT{nJ>#s-8}J7uqGn? 
z*@ETc6t}6U=-~dJtH!mkYJU$OSW$0r@ZIGU)!Fb(@49VE;#HUzi+#(NV|u6erQ!Gy zU*pdn*Av-@Pnj(E{>C6aTx#^-|5~`s+eu!=|28QH*z!rb=3PObg`3{BYJRW$mUSlf zZ-;ehW4q@bG<3hw=$@;n$UG}lYwEh$Kj%q_^S?In!pq`@g*Dshv*kL?%gG>b^#|dG zoz_TF^c0dp8m8Ry!cG-(bSXT7yUzAGrpBsvGNw|<^Y_fkEo~mQ=C&lj)tMSt^|5>j zS+qlEH|hH+TPu0#hAc{>HO6L(oR4`DyOS2jDR1NEvsFtPx`wX6|9jAXc{Yx};(sM| z67q^e+^LdsEv#uKUVrWjxx{(4MDL5T$d+pt6WQ#w?~pGhS9(>{{DtI+mdQ z0n%q+=77I!qY)nqFx}cJZT0ZxEwGE-~E@FyGJn>6nO~z=u0AzkRvlsf8P$wR9$XR4OYbMrL2}4M*PNi!?jl zU6nBHnU>R3ik|D6Ido~~P^j0c?_xskH?C#wUP^98)Lc59!k+(iGlZ=clF13x2bUdk zY%cQVROZHv1!X-)@P8WVPkiFdWb!D8g3ftxlbZOV0eWh*COx~EzT-%Z<-t1GXOv~1w zz)s~^74zGw?HC}w^I#WhjFTAgO~vFoX_Grlf4>g^o5zkip}-HH(VH5gRr)jZhYLM0Bu}OV5yg)hkx5Dy^NyA7sFV4LZCGvuk9~8l=Et=(me(l3TVTp`IZ_{+i_@ptXYU z-IhtP65PP?TU1LUVv(eU>R3{t7n?NpNmAD8NL1p3v9E-cO(Mg?yTJWbi9V+9m0_dD zjDjgfEkb?{i=OOaVLOwMV5=HECX+Un#@2AgPH*%;#_xd*jmBO`M@@(TpNO09)19CT_1)o2{YYLP&#w1(002um|At{MNe8ewLV zx;ZUFb|}?({vJ8wB9KuQFWxR)#64MS!7Dy$)t0E~zz1RE|MWUR2!k@~%-;eU*MweUHKS+9z3stZ%vBR{Qry<0bw@yoMjFg;sieE$D{ikU4bZ-3=_h(4$UA8L20D=*#YMb67^){gE_6~N7^K6_twS$b3^=#qq)eJN?(OUrl4ZMiN9!}GX8XH? zLj)~-vy^l6N5_nRcjz?5TsXwmjbWmm+tda=hTdU zBm(k6AQpnC| z>({&zVdMj8yegO*CtG0{fm0U5#INbgYSS-omV&ya`HK?9Owr~a_`cPCs_!tNmISps zeIX%MM;9@nsArLz!=8w+=!z5rztnawu0ob@2Fn(M@vbS?me@1yThrp!7#>kDccy;V zc=~Hg{d!Tnh}flnifcfld*;X%WhQYE`j0s4URZs-7ng!;GuzM43?fv~v zwF5uCa#|v&M(a4OSr)QN$zhMw09Sn^3saojK(zAc%vj#W?G8O_R1D zAT=SCqvZl1h5{VoXBf59>XNhmM^j<#Kra`KqxN;Q?*Xpy*+XtU!JmMNDmgU*+FnPp z89`?WdK)VqODn5%bf_cJ)6-YC0*x92mWRgBRGQ)-)S4Y;<_s)!^VPvFn()v@`=bz? zkIJ)dZ>cFiX4wxdXFb|F+aoReZ=PKv&R0!!u#b1=Jo64)M1G?9_Hze;nCj98EA`gL zI4Ygi{NB+l@@v6D-RHeigjgqo4M7hVe7Y32sbFE5E)CK0nDbMWM*tw-+WOL^Ha^+o*CS-R@E&bP1YdRR}#hEYRVZ>1?-w_ z=$F6myj>&pL%@pRPwS;O&J``KlfI6Sh*N(N)6yEwsNke8LzEV|=XL)XasB5jI$kF> zd%0@vO+}a;&pi@=*#Uc5(cv5h`oK#7aIB?*Azi?4iZ}{PyF4qjgOXqj-1J%fMQ+0J zLR&)K()m}Vx5@a+_0m$EXTHR&Qj_fHD}xJTTmg)5c}Kh-7!b3AixA&@{a^!&DnoG}e0t zfOlEVJF2vAlN}hh;EEZemx#=6@^pJXdUh9zjq_0CCFe>a6WEyr%5?+ExYTS=Y&ji0~o!EWShj|FPfgG?@S+1RJ_w zHEkVfG_8mRd_YlBwy-=*BG z^8wQqNqa6@FaXPlqu|vv9}T?XB*BhjwgJ#KAaA*mzG>5@4)*|nuxdb^_N>s_*0u*Y z?~!w46*$K3a@1&9>Rv39dL1M6-0?p9346qC8%Yt-Zc(koP?Ac<#8uC|%q=3G?Efmv zwe7{0B!v93gkXKBcsC)~yH}P+PqZxHQg{%sH(4|okoi~pnNj)3DcGUCSxu?qW;<-l zf`nTnRlr>~y6k&z3eeNUM`47(2XPUm{i=W*1j;QIkx~Id3LIq{%x|=}wqERvX!PyeL2_m^sdvRNc}Tzo+-)B+WyMX-Pak>qVI)WD@*9o@Y+UjH?T`}3UH`S;`4Y-j?@UCvZAV(xMA#Z!WY)!=! 
zz=^Ai82L@!-u6B+jxpeGPRkiTz$zuE+hQ2l9Dy$4y>Ag+Zvm?%s1=qA|HsyQz~#LE z@8cPVW5h9|66a9bp^{WWM5WNwrX=mX%gu2_OM~`6Q(H^BkhV(Q4W*&2CGB0m>(x2m z?>XQ9GPyc5nq%JU3Xau)X(AS~7`f=XC$TMP{MgekMMD+fm zf$m2c+C`n0Eq*5PJE|_R{$74b^{J-$TnLb6hnJ_%x>{(@CujDW%&AOwIR1^=vnc$h zc`w@zH&3EBNOo+LPuR`CDZswV<>(}MX~avy9^Xv?tj#5KPsI5KJ24|;^f$ScSZY)f zn*|OMJ3=ZwlUee$)55fVM?pT` zS_}x6@wQ`Ksm+G)v1-I2$k44}!k3aFvd@UVMXeOZ!0-NNx9g% zQzYke)tDq1pN$YgV>9qUzvFL%4r$>eM33}Qz=F(7>+zl(b#s1lmp=XPT>^X7#J+&& z`a&wrS!?4q#_x3p)~q=sf?1M@iLEN#PHweAYnEyEJ~b!xGmj3+SM&-o?_aQ|#xhYU zo%_ci&k>u{eoj#NoJL`rW32whSX+MT;D4Ih9wYo<=+=coIy3N=8kkg*(7%quY6VWP z)jqhKy(ZgFAR7*1o&CrF?nNEolNe}Z5jOqgytjog;?Y3C(J^t*-R#l13u}I+45LNE zvPCOb(7#%*YB+%ap~+PreB*&eirWW!Tax*WE@8 z&K$wWd2J$tPR7S}$CNp@|9kI-h&2|PSV5y@C-@B!^1KMd0l5p46Y#x=d^KV5N(US} z7bRseF_i~Oy)hx}QL~8hG(?e=uhp*di6Lk>J3Fh|h)TBgbQ-y`s9|Y~TSlzO?n$EX zomiM6BZ~tQIVFqEl=%KaUO?d)i*#!4YRzm)6c&WFkcuU*ShoJ``>{o3Kgm`^{fZUj z?P!8&?J z2b=Gd+!g;@?pJ@l*qk#$Nu7UDbu7uO_yfB->$^|mG!nAoF8iXyS+Z~$9pm%LS{s6TaYAl9Lfaczyk04l;@yCZ1IW5cWdG ziXntf1}a>Su$T46miYNnn6{8m{n!8@!+fC?LZT>g%w@Wf^5w%XY|5Mnhyo?GS^azt z!41i_8<^!)f75#2G3R|)^X@P(j&DEh!QF*0kpp!CD-T7chW1nn{FUO!W+L3at|P$_ z-tb-QkP!PkQWqeavO^UCJA|aNC6xd%2|($%nU=PJtUI8&4xHSvrfh(lS*?y?sKcwu z*n-|*DnIhw=3TvVwrNLlivpfnWv}BBK5T7}Rq2q&71LuNXgg8d#{*AyiT{o*##3>v zR3Kq5VbccR4D67u?%~=!eE5(bsfK_P7hQKBpEA5%7=F=BA7$}8DlD?)PPXZ8zW z{gaTV)!D}jqjYL8#e^&{p)HZdd6>Mh?e=!89p(dJZOOKXLHMxjt=L0c|A<+JpM8nv zK`2hKJ~kjEQcE#&R76t%#L>evb=m`?;m6XaYLX@#^{^$Fj#UJ2ol#v*#e^GvJ>~9L zb|4G>(S@F?GezM?4COu`aHV3^6@b>Z>ds;EG!ofWKtlGy zbmOQHlTf6__9!ri+4RB&#EI3Qvi(r)gz1ju3 zsS*Af_H4P8tAC*wFqxY!7gvH@zt`%N2^xy)8H zes1xjpFEy&;@@w2Rs)bF8}9Ytq5x=Sa#H24dmtJ-M=chM>vBipOb-vfR^?BTCtw?& z;A$dg6U&QtA-_%T1O$9vVM{v}s-Br2TMXYC+Kq-6Qhw~e{}Va7KhB=ics-hgrg}e* z8A~m2y+~WlHJ8tv>66)h1zi$&$XiRvMjQZ6C3Ev}=_Z&=5PyZMr1K;t2uY&oE=c`> zmNy-4mSlB<5S%;J(D;&)n-t;j?yCA(Fnj{H9)MYGw;h&y`^FgW$EaP&dM(gc_yLZw)Pm5WnwEkQjLAE-+x}{kz`X%zeGd{a!6v@8m zYJb>c#|tVmZk!3133x&n@{bSRJ0&&o$LIn)7*42xCVyN+7g&cpHZHenlS12*_EPzi zOq(Bes`jXPrr@ifhe+A1zFLFbW}R7#{N<3+ETkR5@9sC{;^h1^=?G02c`)BQXF>b! 
zn9Q;sX(7g5_1J773qLzG5x}uLPArn^Q4A&F*pSwmEb(-Pkop@S?j(91hW2L%{y2OX zdEmh?_qwu_Izv5sgCbSeD^Dy93b|qfb{6GIrnXwt-)TA2bnsAG*V|AVJ5SHv zzZ|(9ujH>z>9w`~?I_Nc$nqRzA2WUaC;g*}u2BA6=hp=2;vVl$vKeaRa%0a{{>S5u zXxumZ9JC`n79Pvy%@yI2#&}gPbfh%WHh#2K!7M?Kh7i8Iy2m!N(a8Oc0{n5&^)))t z=>D&83{YJlI0s_XAg+5oWS)P#nQQVsY1DET`EgYSy+Yvb=a;HMo@;uzHJ9dm--dK_ zHEQHmsczCe_{?xQGr=lO>2zAun#{|&K?#2?`Ky7!lA-3u;81U1<9og$NNLqOeDywj z`%6Sny4LL}o8u?zVq?Lh4sl$$QfqI1u8p{41(}zv+N(pPhSouR4D4`dzU{r#z7xe2 zjx=vk!2SF8Ipg{?VYXu365vjOsJJY)D=<7EKl_jvz^G%=T3X@oP8c*zL?#@cK1umL zi`1y@3Qf-XjRzfH5RGDSt=-JI_Q$M-P&nwA+ym&&1Z;PB!Y# z)T;}VFdn_?`u&J%U7|-pfret6YVPejlIs{ty}G z#S$of^N|1X6Q5pTn_T&t#C-M(+b$QwO3yqUY_X?)<-KPjZp|0si9({%qVLWsaJLFv z{QX6$2r3SNw$-D0Sa4>z8L|zeMTDp&S@NmuSZQPnNuodFn6+W+zMH4coH!wivPCU= zVr`iU-7fAy4_cG5J2s>K{$86Q_1 zUTJZ}P9Es{4>vID0Y3i>*-)VdK+>4=^${mA=dET2ch~>aM6X~)-wj#hUwju0u)IV@ zlBs;U#!ZQ;0t&CMoqPZO{XaMF6k!%pIad*X+~_!Dz36@^*vr9tV{54=ysdzs3S>B= z?j{}r&fRpF_&}>AKxG|GjYL}1KozaYpw@?rnbTi=T-lE2#UCjxJT-VDQ=TPm9i!A~ z`o#3t7gjlBg>&a8{IA6QJI^ZM2Kq@)u229on3DIr{N^PEwDZ=@3y1ib)4(> zZ?V4@uj(*z(u6t0NiB6wP;F)=CC;hJCy;HsDXa029XNM4o>ny37lrD%Dfq`hLEzu& z^&Mjq{e$Er!Nfmp{N=7iTvN^qe7L4(JvM~Q=+m&HNK~qouSw4nA!}4RItH|5OIV-k z~GpSOFZSC2b+$~0x!ch_$)4}-E#%yI1C6IET(w{yNOOF_?c z8O|D!!`2RO8}7IAu(Ma_T~XwxS;3YaF>>+iS?v~Yb7_pNzv=pSj!(F8oNQ_`mD^1Z zD;K5Ur_EfZZ6fX^+{F+)XcqIOIaXBsmz%xz5<_ZbzUE})$JSTRznAPUv5fP-XL0OX zQr3Y^pQ;Rzy~P}#4VDV&{&bbH61JH5mwNHe5HF#q)KF>I&muJ@XLbuOO*9);qNo9o z>hU-rbaG3ageDZ46;Q<+Ca%fWgvdE{AC%X}P7{Yws500JuYyyD96&ugM8zg2HT2$B zn0Nq>yYrzJ7yxalAA4`JcrLC*EXGN=8eA$zf;}`(tr5H)#3gN#M3M8~ss19IT|!Ty z*w?loK5|m?-S+V*+2BFu)`Z_oSGl{YS?C$_8_V@2W@lDuI+R`inYIdh;*Kvs2(|25 zD@0;49A=Db_ICdjA%@{Tb5k@8sYcOXPTl0EY30`$-EBl`Gddd6)|dD(%~Q&{aBKIE!~^V< zgFKn$Gy@WgL#cJ)&sLMhV#mhzHL?*8SpGvLy3K`qJS>MTVKFm+jDr|q!f;Jk?fN`{ zQjD^_1>jtBYWw(0pkU;4o`xsV+s0Mg>nA5p_wAOy!3xEMfJ>v#@Aho7Z~DvG**@W^ z0Wpq3fxPc&2eRPC_%-7NE|e?vi-#wqMJy`kxPtO@Q@<`e`;c4a*RwbzU78bC$hKzu zMOI?c)s?9fM(^Y)XU+9(V*$J6>p%Wm$J`%2clekWcKqUXPfcT|F9$uOUo*D!6l17v#c+ zIGCfosaZEECK%VXCNiG2uha4&Oj8g9TTd-1?Tu}o==%?$w}*yr?KZ^~QQ}jfA`;rB z)1ighm|!~)0#%=1u)!80n(y<(a4SWtc=vC=y@XY#GCXN*MY=w0~&2 z$r%v^E1*|N83U)`v>}Q(B1AoW@}eOJO(!wGUP$z=l!wm>E*lL@Lby9*}Ac)GnI{yemn8Oqb1%=5K6pV`4MdyPa zA1=#jRnagq8UffYkDd!_&w!}6j^qNBC41?RUb%!;-y55&=f%XP-mTkCoa*6^FlE|{ z>V{{c8Y^}^#^7VcL)3k>dDAiDAK*~SGWF{j0RIMVB)3_oF;ocS+E5me#J{4!Yd&#V zLe^7Xr1cDMm0vW<^H)+XNdng-WmRO&n4LwBl!KRvFv+fI^5gwli(;eg{%Vdd{=8_m z`KQXt=p*{=k{3CGRsXHtyu{QD))(aTrEhLmmpSKgUZ>M0^L{L_FiXHoqz-VN3Blqt z4nhMk$iz!bb{`)$w0(j(6?`~JDlZKPCoP5MoLuUW}PF3=KhWjLy;_S<9m`W z16EpP^7nWbfs+$hmKc432xHj|cpZ&yB%0R&w5>$M?a0!^(go5Fz|V!?piIe4#+e4V zU1Z=(YT@y}W&k#RZ?oyh@74Ce+zB8_2Bt_(_|v4nM00rO&H&EX3rs!FMt^!SYCC=h zCsgF#(R^&e7NwxX8CUyVz!um&IklNWFAfFm8ZFt-V+r1OK1-VPyJs^pjcB>#JNKJP#sJf}U3v`?+7|c(p8BBA^TCZaOqK-Z66! 
z^MzFh7vjb$HY&ZCjF3_lHR~ScnSD`4pXLly^r6fei}MA@jZ03K7&<_7u?dsZ7_!3; z2GTY3tXJarw28$gxHd%f1kwugFFoKb_8Pg+Il2JGJlEivN7B|Sy%Ay*4}RYi!ohct z9J!mUN$N)#}HPrVk1PsDqk3{CJN9diV%F4v$><;NEVireYecUPB;vw?Lhkzk31 z@7GxP%}(#&B-3E*{!hV)=k{O-^muR~rO87hu?V(qo>wLc5OfG@09EWDek=EAA%?XW z%f^82_+D=qBc%E`1rDS3JY{?>^B6VskDg-{%ru!%(b4)i8rz!CaI(NG-m?!)=LyDO z1)Fr>j{J*acdlG<$7o(e^*67_kxh#ZRb^F&`B$EF2k*SyGhn5 zs2E=S)3G7X{CcLvnx{vYp316R@tJtH^6U8g%#WAs)XVHm|0lDCj6fY=HekfH&0J<$ zzKO+YjcEcp5pDqn4PM9y;Ne^)OwDqIKnOrKP4YT9I+}m{Sd!4wuRAQhP$&BA6xe+6 z$jtHReTn)+X%NLC2>a@U`U5?!&q#JZ6empXP2n!yvDrlCW&dMQkBcmsIoE_xhOKTg zxX#P>sG24|>QLU2k5}X(N^#IUI+iv$1R@22cF4;^zN8niPgkufbeJB#jvulWVL1)G zfd`}DiMA<{>2WB~6s-;qf#hA`Utt1FL##(()XHmBjrlf&0Bo}PVjYkNiP{F{V*Tb5 zqaLW~Uy+zUH?e{XO?2mBb)ae@rLuDnB{|!)^G`GlBj!Y~7L`+$5i0wE<@@yL3&s|c7JvKhLZ?BM!&t{f@O`@Vcl~@cyMW}R@*?)~_<}i3 zGQ`vhuI4aV(gb@-9?72SF#_rUkrx4D8F0;azz2%({AsOLhnO|7c||1lKx&9Ubxss@ zqgG`#;8S}*#GW|bD`Bb}`Xl8Ar~PnBDJ*g9#iZ{8W>AC&jY9O5{j^NNH&DKq#g^pV zglV)r@r?sRbC{c1sTuMy5K0&|OuU7-PTmi=vQoGv?Wr1=h10?|jQk+TfkJ8}5~+{< z`0?xH?juM*l7k_}4$NY5D~X^RWqB?}Z>kgnI7r|q5vdBu0u!;Klfe!hhsoQ+8<7LA zrP2Y2hy^B89hM_Nu;~TOBMFEDl!iUD6 z5~!bKf{zWIhV406@LA`cJBxq|~&0^om=cg>wUS!T${?O$y9Q)o`Td08}G*;~$B}yCs+MA)8 zPUu9goMDwe7uQ_=-!i?mU*+)62L2d@pFhku>+fcF6A-Z7&{_-O4MfYTk!Vgp&pd^T zvd9^A7}?iDZ~7U>rw|f75N7RxhXcIl%%7~2;+A#;zVVVwU~mr^+F=FBg94BshollB zqnI~uj`_z37eJW;!kW)98&YeT(S$P^yif@z21XbONFyZF6HmpWS&WD80^gSDP_hNX zhm0)nI2%#&@G3;)x}Q{f{P=N=rYS76`j;7Z4nMC5dFe29x7`AK4x$&r!f>$KIk;*ouD{T z7!5WxV%3*h>r9aWh?o%G?cpL`q=`tJo?H>{T23&Nj_+1NXhKEGUKF6D>BKKrMy27ZppL6I6yx<4c`3q4J7C}$Ca>nm5^wxIzB zW9fE_Wjg~on`B%qEjklKE+DDdiwa`^+%}?}7V&ey(;~xKXfAbmPe4+(|K|`DT@SzG ztZC81U=*`tQDTtrNWeV_l?g(yeu3J62Q(wmB1XVuwfJZ8XaloE_`i5#{vp~nQ^Q~d zda#8nV1v00p}I7!sgPX4CKMs%`_QZD(NtH^{Sq%9t99xmyvrW?(%88LO1~?7v81j! zQq{U8KZ$QevhJakRC#5P6}8NJ=Kr%Jx-{`l^g>)mWlzOw#Y_HU+&SyO4%e>(^P}3S z!6pNT_6*85r%(9Q6tmlxn^CJRO8i2`_=*#{Uhg<%9yjN4RQc7l7|C%i^>^tL;r@(r zMElKd_WfxN)6uA=hme{+1qC8i*1i4qG9lJ|?qPU(Ud^<<=og zdI;1C=X&QZ4%nhdw!eINh^XwL>At3~A4i=ke5?8W)7gOT$#Q5OWP^{#@d)>ggol+Wr8u^Q#Ui86z^ z!tM05UW>Dt|BH#g^9Bi9^30BBZ92DwRL|=dEDN`O`88y(w#K@1imA-|Ok|J$F{h^X z)K+FzVRQM9o!>4%9hF>Pk*Rt)nP9sRvi@`0DvrqI5f@NpYGyS$`7CpdUkn_Z?bq*}d}s5QeQMMxmxQSqs7_ zzy0Lhm=u(1uZU+ndVmj|IhmQ6M5Yc6J?8vm>Y@ay4RI3#ap^;^L?_s*q$H@TTh3O# z4vH#2tawRhKmao0Xp3gB`LiU}WyQkH!TjxE3u zJ=SS7w@gWWnh3OfyOrJ=DtoqH-%%ECFApQ-D(BPN7gxlMnPTTIEobYUi{nxdYutWa zK77Guc}Q(quvpzZaW3=9D;j=Q&9ae@`-f9h)mI+9Ox7uMOo&;ryE)|@By426qyRMSb+2+)N#sl!8Ak#6#t8l;{S$vObsqOH}6*ChLE^#MEcWW^jy4B~c zWFSrkGdbAV>GfRJ7MLrHM!ydJspQI)9rpknuI>iqi1*OKHRUvAIq%-ek_~~3z&7+G z%rv8_LZVBJ^6A-+d9+orUcXrwtBg@_kg$4^2EKpKZL87twrkH4Cu=4f$wn6>$CQA9 z!UX7dB4JCcIe`IK^@rNz$oG>oi>_3nPHekKq$Xua0a%V4C`t{eK@XfVeRnPGI~bDT zxH$i;-gQe?Wn+>>FKqh;LXqC@H)W#Cte@)Z!OJEj3*_uZ3~-EIOXR|(2E@j%Q_dQM zcXUXCD2QX6W4&aLQ$g&30o~SM6X!4)dgC|BV`L^aMhdFV)1iVyUK$yDNcv5aFcguU zSGdgki2o`+U}y;}M~L-hX1#gD3Pi^Vi<7_^u@_u+0yAzR#+AtD<9?e;s`C)6n^u;R zpdA0!usIxw;TlR|LPsNw2!_FxUxm8o98vzQ3Mgc`SM$bgt2#vH2*wMhYv@(5smSgotGIc8V zUc8u1nV4T8bikZlznP?C(M^XyS&DofM%_T{ML?oPhScGYGke0U!0$(CQ&rFb7vbsT%A> znV4-IchPu~QtL=8x{G;qo+f&(r2l|_cp|tm9Pn}I6bu!zX?(Kda7f<}IPRS-XO?HR z^U+89L?VHcl_(;&2yQIxi{F>zRsiD@uA0~wA&wA&J#dia`3xwi;&iJ!cWiWuz!;No zaryY(44|lTj{b#V44xJ6+n$d$!1tO5#4atU>4SOt)}?HzmDX$8_=v{6*yFP485%NR zod!>?Nwfs52i#_p*%b!+X4+7z=A$Db-v!A3;Li}0ZiA>iE#fr*Hx&~0z7AS*8{7Qb zYcS^mz&O-81L&s}@ZHH2g18}(J`mN_H_M>7BSln~Op}fz%v9ujCQBGh z?n2d^m@&M{e~Q)5zInUq;r4u02!YFEgBP!cAk@T~dh_PZ=0X%S8ub`Ailtyv(Nw60 zAK(;;K_=PDz>zul`1q*(9R#>!SIc_y>I-!C%MuxAc0&~raqGlxn2_DXsV}drltS55 zWjb#T{wG-_B{(aY?tp}&0FH7#3?iWXoQpk+A@f8$Yf|;Nv>7*%FJYR9!Yx%LZMw1o 
zs-8NmKk$5_+xd?l7N>X_*1z_C%&}(tj8gFZHmgfxLX-F0W|$?;-D@9IOHMBsrd}3m zH%h+hP%jreSlnx5CES^*ReQF9dt{dT(f!=b?Fx#;^G$w~#6NuMwvKF?oR{7!RUx{a zJ=0gRaa!%ce_s0HCA9eIn0!)WZb{ZbRXT~(3)pFnmOgFyxHdWpsBch^00=|Z4mV@0 zVWi(~fLmlm?Z&+!B(dne*HHG1?OzjU)%R&X)W2EU2u-d{Tg-|iZ~&$Bt^kZ*e<&*A z^{s(CF$N_T^qJbF+sH7$hiv)yaw^vigrSnvLxDsj-7(=zN2nn%m~-@3Nm;BW)rJ1y z7=fV%kTN~|Tq63q^~DQs-7ve7&yzP%I04Nq>IIxK(__UIo{|5{+-T zu9aG5t&CfeWQcV+q@ov|m<-|9`*EZ)XYf z3+dlrads>)$T1)clgn7VD|>{Y-E?%i^K;d+b8`bA`B>1;00@X)I3sMcoAugRFfC2# zXYkSnP_#t%`Y!8jT`&=|K&L5vkVO_;WU*bBA0&+9jzRz6OqR_^96&$U{+1q=;uvR-k zS$2HfBqkqC2Xk(EG?pfA+C0D@a`Zp zHfoJcG-YL3M!OD@@BqVMQJeXdsFli1aI3z(f_A6rpepBHv5c$;P=T2Q=}a3S=pTCW z0P6L&2!3H$3J~|$S6@IO{*LKcV1*J?qQ%TWH3Ax77;TYQ2V z?!llWy3YvwRpO`1XH161Qc@Q5y7gEGGq!Kdu`&>|$|~9=7vU`eNQoo2?MEYJ)22

-UQ zUFkLV^%j@8nqQg?@~m?gP)%W$aT6;@mX)p94Qh1$+E?N@T<1RtBN zjh>J<9v(E}7qO<>Jt&(%H|iD0?^i`5049 zIk^32QSI9X(jwtxO<`nP(e;2qB^fjp{LKx-B+1CJ>WTNJtc97;CImIk>B!nJrrzGa2+!HU$fo$lpi}g!=dt=alfh3*|TT;`tzb;Dbnnx>VtIL z&j;(#ijfK3svzGVDuC8n$&*~UVl(`u(|lWmz_wYS5b9^(SorP=;XfKKFV1aC_Ged+ zL;?S)OW$}JNX2Q7l&>RX>ATdR8N(IlAwaE4h?OE58@)7vDX2wV+mqk?ajt34p_sJw z{_V-7Rk$H+S>kow8lgu4(Mtbh!W4>gD9+c_c7YhLv zfpNi$%kt>m30-3lRcomQfu%5K+F#O&eL$vUG2%>1nDb&OMBpC_vaQuH9T?$@Mv2^9 z_u3?VQ_C)x_yiecGiy^ld8xCo1~Y%$REc5Nw{JHyYphBdV%vXQp*cAo;zuN0aviaz zb3c%9!@pqR>3|0K*0~9d!%OCM03=B7E7Kj)W>vMd`o|wSk;?ne_T7%&bn2bZ*%xo; zF5nDp9s@oYcW~##jrTAT0pzRJ(x)pJ4{SQ z%azqEqCJjK)*u}j^tOj;g|T>RK@Z}|FP@F25xCIwPp2%`nOc`uC(|~I1h$q%wxa#a zti}xcZjM0JDeQ_MRF^IrVD(HYV;S8WH^otKP7oG_riTj**4!)u{e|$=9~;2MLRrm2 zn?P-tSu^Lhg&BWBZYS?nAxW+IFh5Kb2w*mlB#MO#WoumlnjiaH3}3T9lCSuill%AM zxXnBIVR<~C>j)%m0s(c()ue5()g9=64^j2{v=n7O9ubY-@3Gt32>!y)&gM2FDo|46 zQL}O2g{T;B;1Z*Szs;?cVeU0dS}8Y(H<8HV&mtnRPgeDG^zAz-5PP_Y6Ww(5g=4sq zhFx*G#R_r5l40;TeE2XSI$%~ONL{rAT3t0Y1kQ=BEEbDTS)rpCoPWj;!8qq6es*RJ z?}y6D%835MGZQfILd%l?-IR{Q{jfH=i;OMPP41-k;!`Z?ia2Zg^JsGCwsE7qGY=bP z+Wp*nc(LWT-Ouwo=;juLR_>LxF-RN2G9b~Zxf63-?$Pq{8a*622GZMjO=|2zKqUR3 zcX7k1M!2dXh+IVxP;<<ePUN`z&LP-J35bHua?%dI#GW0y5(1cE8I3jm%BG`2(C z_gVbEm(z;@PcZhSc;D3*pNt%(=&rfMhzLoHinHx7bKU2^T-78}b-jC(UwO{1tbDW8 z_$QB<2P;s)Z$z&ei9%AGc)Yxd4Hic$Pv+TV830}66`>h%V)-KJ$2nkG_X814T91Ya zOUB@$(3V~eJRStH zf_j31i31h4ZXwt4X}o!z!ya!7w87vct90C3NF30Gq*KK;HK4`=*}QdN7Fc%lFK7x` z50_ccz7x!T+d5n**KUFg5|4&l_ZI*DEt@u(*=?^u`1evJ^>{478&T7*3Q?qH1-TO% z8k$YAChTW?1ykr%f(c|j>RV&3zGws-e;|nvCq@evl49p*TG`X83Y$K!Cq~Kt{F+r^=0AW1fZ(bQ0folr6UOz06iN*3JF*E5*pck9XNT%S zu*%iLBY!yyIStgzEUH8lNN`s1YhVDZgBm1n`Acc(B{D;vV|*uv_}J~e;Ty~gims`l zTE=6L#a~cF`wp9>rW}`KjQu{9XY}3ykweNR5wKkgK%u>g+GoOmNgm4lwnvIahTQSa zDYDOk?i~Tlf|SKnIF!O0EE{ceS5lM&Tm(^{(Z>B`8d$Yvg8hG2Y9N9Vufkpn znyYN^Dj(2=mcRhY5@)Td*_Iwx95(n{AsmU^*fhQwdSE|){aV~4;!MEjIz;JikfPa7nqy+{@ITqf45%1EpcNqa zW2rqo+L27wzNpz)ONs`{R@(^+lA=ju3*FNj?@wm8EDEJJU!yfTS#K&V{qEm4?6puk zyQN1OmM_8g7RN3vVXFtYv21iT;oe~{NJmy@h3vuh_V@SmRfn>@!}2QJHwL7ei1`&L z+hlkInV&2wFRVreWvelJVr6EA29a!(=_)~yb;~$ zVJhb_b(6V_19YyCPWuZNF6>nxt(8XcdaqX)4+-RBHX=@N*)DxGIZ`1J2n}=0oa3Yl z0D9YJ{S;{<9_4fnq%yb+ObX_rowpzSG2KiPVi7Z!c`1Vle2IKDj4!+$WpyF) zeGF1z8S-8tFJ`ht-0d01GekPWyOLx0v^tqz)lV+~6l??}m?XLl3FFz-Gi(xI2wz7* z8xodc(Juozwo=n-G-s`oSvYIPl(wF~(#) z2`iGKhC@82(~Jd{16cj?Nqzm#S>|1H`!42k+mgW}D1u}RF7gmDVI+7sR{kUf?f@zw z_4vGTdcCMaVEzyXEK_=gp z?KsY#H4bH3M6LPKJSm@sfG)PHka@rHZt-P5FRyQWC7(7p;C)7VD)Fz0MBU{?=uAZ! 
zM9?g$#0hDy0f-ay3q~xpiIWr&YHRbH5l4?kcs)S^CP{d^9qEebcgb`Sw8b*ZHkvyB z=KsGtOM`WNDqblW{QJ*Dqe25-9NlC|`YmjE%XFji(VP85{89ibHXU4;o&Am~zJM(n z4^6*4FPoYMzN_|KtpQ5R3cz_;fD62e$Gj?`y#cdb+p+MC9A*g+b8+s1qFGP#d9R~i z3{S=!2a;FyrzF1SyZB(5I(|w)H%le~$bgb7t4a4^(s2&F#OwI>RF66Qg~S$?42=kP z5yLUmIEi4xT@)7|M1Z2AOGv_xCyQB#S~+?Y3|KO7%J3(us0X)sB?n^sNBe&cG`YdI z3-L4X{P8c4r9@Q?*idf}_Ge~$|6AyrQHN4g5(lX5Id@-F=o$spyB07Cf6pst;E$NZE8D}H#ugN-hSzsV0C){@UzUm?Z} z;8I6nBxLG65eI_zE_%>w7<r5%;>xZ37jH7(N>6zZVuUK7 zS^du=u&%yATB-UnOAjE+Ag^9Yjshm3-<%pn%|b9N(fb5s+1GD+bm3Pb{(?tSbIb7B zH81<=;ep9dI8-Ff4kXot2u?(oqU|9Fxjna+(r#=Qnhs+kEpo#}8=1zr3qQ8EQ__oW zH!vl=Pk6edI3H8Nmme3>m|Fks^Wxmo;1)CqF_=m7y2T%V0mmRDWSSq znJkdd1HsPQ87lB|i^Ob%5H(3712zkRF|h_OMY;nNQw3^*wEN49gxeM!q`~K(yinCd zTS{HO_39hu7&c?2zmAlL^+jX_u(;mX8NQ|H4u|pbOlqlmRo~|S4Sn~A#kzUCgbdsB z1VNWi!eysFT|X<`$vgo5FH>yPYS;s*Jp!>NCYK32ml&D1S#sqM6gF)Idq+F$L^*F*7ufW2VeyTTt{d^YA2ZC8%He z-P-U3Hjohx9qewnO0(_nRn*5T2OVY(+_`9VX4_)hdG(<1wd{Z19Nv>6;9kNfN_c>{`QGyNfj<{&dw6yAWHaeB*6f$HCV~&$Yq77v_X( z&Ugx0^yQ7JHHtXg=?k^^e2y#WTL#E=X2{^B9V^Q-wunI8h%rxbE-y(-OMe12h~;V+ zBlIfI5u|a;?=4$Rz#3+SvmoE2R zOX1m+N7@A*^2K%WONxq9Zptg_f-LGk%|8`%YF_7Gju*a76ld^j?$uAaAb+d_9No*w z8q&`uBf2#C9jCEngy1`unFbPqR8>`>Ht1v%x?qyk>CAlQE3H85G3#a>fBS`2^GYsl z3{(t`6@nM0?c)2W&<0_h^SWAAql!WC$*+dgo0QK!lcOD?BX7ZEFhhHE?7g91G);gZ zI~$t|y4;greqeX>>Y2a|oCd9HY3@p>{ql>S9z4IjC8PZ2bH*z;CPL{yZw^f#lQ8QN zKYH>@QfzTytUNmtg9uyP9TRWG9Y4bdgo>q1XrIimR2y4fnN1lNINFpe>N~i5qELlD zdfDmxm0VL>PTS75%L)yBtU`Z(@iMMQg`b5L6cmcd_+gZ9FX65&&NmgxSdXG@3!hKNd^XerFcSzCMDlhpr5;YyY zJr~3smAqzgMOZ>=ebD;R8$zC=!(GGq?9f~Md4Vr|sP^4a=ZJ1>=KP8k3_N2%1C%%+)Wxp6wUhq3P~=Im$=spY=fV%bgf4k~D3zS?Uy^5vuW5 zq_~gc`%c`x{Ll!)*lt^t(Ecs=FqbGv*Q4&G6jrduM>=U>W9Bv?yS%r?P;X*qhbM+f z&Ut1r&mkXrWolx(}w=>9K%%daxN$HajurD`GQ@YnwRPok9z&HxX~kX{p(J=)bqla zM9sZ$gF(-CkSB|7CEf*uyQu_GPEQwhh z32)91Be>DNC4*D#@tzftLQ_BkTqzU~0Ixdik-w{F4em+$*QcP%zrW&l5)_~fUxz`- z5-j`6Pm3xoARXU^_|*;er1YuaD3G2-5a7d}TozTLx!+gTa2+pq&TNP2TUUyKf!x@~ zfG9T}irV_cqOOi7Px4QyWtXS?{zv%RRF=JYLW!NuqOywfFId^*Jid(o^~7h=c<}G= zdG-~%dQ$!e&AY#!!EUWB-r9I*$<9WUqUkO0wH8QW#A)pc(0;1-hM*4Nj61Ra8D$Bu@E1~cGm+$k%dz*Vubq#wcKaA+|? 
zVc#3Tt1I(o^I{F~7HzT^^p=f-!|i}g>+z#=pHD1K#)x`=_qyS0lx*y5FArruE|*~0 zAf*~Z{MWBvW7q+hqL=R^JhrkWH`DM5bx$!7F646WSoc)s2&metzwRcCZjsnhj40%&g9qoY0OTfA>m|dvk&NXydLS^J=Inu%(ktMp&T%(oIplI->4J;>m zL=&uFxdfaXq44l(>P7i#CFf69kgsl#lzqn#K~FCuF(JX7C_Xg*wOk0gBHx$3e$f>> zqG`0T`Lk?7dYxoS^PS+RhG|UT+`()S>2zQ2#}dvH29rDxfHD8jjtRA{P&R~{j2x8A z`J|2-Im$M=QV#!tNEX|^q>z?*BODJy%iqf3!oo>r<;n+*M#~IR$!FWVri)F)C2HND zthsbCebnbH{0moqeYUk}vr;0NqC=0h= z03mh8rjhbajigggJXKXwW5nt_4hQ6X_ChaGCGP&h>xieW_#N)uKALzU9c<~Wmkx$5 zOU+aA{W?lTDQ=geVYax^rA!Wn5n69ARK>ow#!(e?kyBR>z5MmOMg8?l7g42A^%vl7 z{Sghhx2gCJU`@vP3mdoY`wa-e72)kFVW3@{COi=9gnZT7x)Hq9 z7M)Iav(cQjh^E;`b9btJpD1rZD8?4Y)rHbsj2dtJ{>uHXQ3XTFyop5Z^Shdh#vkW5 zUa%Yr{rAmT;x9Hsk~`d?cLCSL5o1Z(w?YWa7GBaH*KB0`4Rfj3Olkb15R7=nP>}MZ zEG48>>sz*hI3B6cvxu1lPm0#KpQ`cxp@cMJ zxICd$l}%&&XpiC(=x=f-zQwCG#)tU*>y^ zHYKWoJJdHh?bWN5!h5Vr@jH+9W)?0Pk-M4rR-nyN&JnUPF1@lE8cev?DxnJkt)@k% zhTjzJbXyS{SfqIWWw~5vA`@?Y^c7n_1;ITk!+dcjEU4~`yL-C*ZtwckSNtkJ+b1t8 zG~TwAw|ed?ysOqG_ld87%pY9;ob$3EjQvVQSLSN+`b?2J_`k5wJ-Q&uce~8X-BYh( zu&@Kb=2AEsS-s3rq)e6K3JEFQN6l(`^Z~vkzdsqc`{PM*!t~P@fN6dDf+I-#-9YJ| zBDvoQxyr)fR1?I{(Y?H-JT5*dXnMu1x3|{}l^e5PB_gB7=n8|snz@AHz2t^^zC}m5 zx{IS$68$z}vhgx86WJFcXVyME(k;+y(@jKZOyuivK)T$1X`%2 zxN-ZAhM0cX*`JGMt@L;)BlY_|>eBE(kLH(mcr?@Fx*a9MzXYnc4r(->d^?=!cz%Cv zP^4@uD@EJ9q|}P_-eF1CJC_WdB&VPK&xu(dVp%6KHcD9$;z>d6QMExkIlC`NP_x&% zk}K04heT*(8ZDghFoyZkPq8lb^%O=@RAUOF(?uk>V8+bYI<^lLefrO)iZ36n#)y7b zm3HwSr7Q7m;M)f)=8~h{`&_OIfy|_Sb3U@CRg2LxwH*RV)q?#Y-=RXNVgzpF;4GxW zc9wS|=pay;~p&XUWSPxxYI2lnn2n6l~C@mJql z$@5~4_Zi0i_wdY4reKYrbPG?OS8WO~>!-*qEGUu_qcpm%%ggU32lYAl<{{LXx~0s@ zrW$^lcQda+O{?89v^b*%DJ{Dd49oHn3m7Dvi=JL4z z{{D&bqc0-ctU+>_S{7|)eD3R8jD>c;#+7I2f?YLN=9`pjI>8(Bz@7%DMF;XVU1qzz zL8Dfb+qnrnfgx^swJk%PnryIrl*-3~fpE&ET|% z49f_rnCpwho}**I9g8f$8}|Y{x%CNW9oIGjV}G*PWD8nASCB!H3-oD9D>#xF3+Jj# zv=G##rfdddV@gPO@?1`Gp~D!C)zYr(Xg`s7 zQoU=Av(oz#Q=sAwU>hRcPYypVQ)xwN7Jd6g%WcEg#FP?xQlRQ`8EY#jDed^&Hsh>eMmDcBo)9(EKiHFl4)k3%o#|1;% z%WIYY!4l4<%Erp8*>*lkL5?H6#V$8FeC3O(c139g>-`^Zf?v~+$U@YqWrti(^r2F$o-MS z(S7RC_bxY{r!SU0uOn$^-Go!MNUm zU2spNM*i&N^GksnXl9SEhT*9%0iTpK+bkaXMr38NJ}|s%gf4?iRw1yUH-o)GGR))4 zqW=z~mZLuZ1LO64z~8$MHLMxBH9=*b3PSSJLaXQrK3;OWb+FM}WnoC1=q0`jj&yc4 ztgNUA${XxTC7IkaH8u70jQ{iJA0clQI+f)y24YA|oAt*hJJ5_lf-wu{ood|ix#p(a z2x7Qi<`N!HSJ`1t96S3923Nd*XxH9Fej{yV~Z#o*{ReOBL z4$8vUjxG@BDrd9K%gM^_L}N`!z%B^w+r+pyS2RBY7>M+;6f>SQyg|N1WvdNpz+wd@ z^@hl0$}svN35VVxa`>^6_Pk+sYQLcQZF!}b%wNl1Uyn#-TEVABPlw+SIQ0eja|~0& zsD_hmbaZ@?&>)BF!LxWE{>fW zyPe+whJ~O4mezRn6CcT?W9f&@ano<_Y>8^1kr4S`pjt)%%#d~sq(!4{V~LSq^%ALv zIWt?$l|8?LJ-ScW)--g%29>}K9bbi1CILC#JD;AQP2wDx#>~b8EJbFYN(A+gY1iN# zJ0P)VL_&eL+8=))2^NytUU9w!-?Gk0P_OM;&f)A2cUKx zN5|iljk9~Zu%qH>)u(F7@wt_x?T`i<- z=L7QHa410j?RMv_F4JGkArBSJbg=aT3Y_Q+AKRRQxxV16A2~yH`V08DACY51GrsKD zLCTtz!l57eGwlh9Yt^-5m@6hnccKb$Jlf3UodpM zaY4CFVCkP%^)g!-DOZiMG?jvURZ0Q_PJpp6dI!D4t&4aJKp$FnJU(E3BLTIASvj-+ zM=VYZiuB>)KOvFFJG`ajE{;DVj#g!>63#Tvzr7Y(^`syEUQL-iN*LNRkg^fk;u^ysQRL(YGR! 
zO(i+2%gOD+vGN2(?2vKZSP2?`5iL}F##N0R{uIt|H&U1jN+R|cjAbar!>3c}=2M{gV32l;g7eYr(MNJcO%K^yJ=zan9j)ly1O#%WyBgnX}W3k|k4!V{koCY4@bF-Z_RPOIto z*@9%0?3gEI@o@F&p#=A*4*0%Wcwh$gm4Kvz;#hqpqsBwm(w7dS5LL6&uz}IF}26w3(6UL{1InP;R&$hshEw z@RPFF9pJ7N>&GYV#h>%Q?*0V;SOnxCa12Nd$Duu%zCacvwjMamN4^#-hDEqnYaSt%YDmV^*#AJBp> z0kK3SJv}|F^m`#sDl_^i#T=>v`RKFcVRuG$YYz93kJy42zReBHp?yf`y2iB5oja$4LTMjX zF&|5_7)40?h`F{9WaFT~<`WX2q_=imd%Dvqgp6wbwKFn)e|8|s7WKl zI$k5;@bpoRQe{0`B;pQm5`DPNLIMZTt9>M50Ahe{cxLN|up9*1-a*Zr!naWG8;qql ztF4ufDIf0H$e6h@X5P@!-)-5CCw0sySJ1OERVduy^fEnZV$-Q5FWq+?*%7yTQ$eX` z@^3pL?uyx)<&9}~KkE#1c-M99UfHMe&-bGq?r9&4TkYg@sM>NW)hoJOJ@kNwq4e!6 z+pp-*Z9!j)y8yysH*s)&J;{AiiAZji*thSyDWBg$@aTQ$2;kv5|6T@7%EuLNEBgOoyte!#oI5?N1%r>aTJxyjBXrPbDJypXr;GQTcE=5wc_S>kf(f5QEcO1 z#<&!dc52D-CD-kXos{P@{JsL%zBnSb&kf9}Hn-$IvzWIT;Q!W z-nou9;1j{dz}z?8N0=rF{F>a@VESpGnXM=i_@H0>!CUKDD=xfJopCLLrblEIn8kw^ zDu>YE;iDG!%`LFT#r3_9K+ljw^vOiqY*JgWv3}Gc!+UgwfhD!3zESbOB?=(+@Q+gS zKU4@jk()5!d0Zlq`AZeq7-BcrUD%2?nvLH2Wx4(6$-c(IpzNJPmErdn8GK4s+r@Qc z|GjAc!phw2l&_<{_f1pOehxhlrPJ9w!Bh2Snf?W>D=kM}I}RRMR4B)DS58D|vbwAj7^EAS*&+|4vLHg1LgadSq;Hp@x^c&(1q`ZnClhe+CNImR+2y}QgK z9-F9z`n*DXR{zT*8X8=S7C~%aKQ^+gl$+yUzs)Mxy>JHP{~C=%TubrfGg@iK8o`=I8I zk%D^*98p}|>MhygN`)e3)D5+lcHZC;dDYjKEUPl~gfi*JinS@1B&oDD7^HU(b+RLx z5SBWJOqlFT9&;bS&T7+G)2VH(D2?QbQXNk~52Ez%kp6K+h?$Q|zXP(>Yeb|3w!{O$ zPpVQonXj_Fd#T*Wi$lrF9SCI6#y@t9h#TxYJ}>KY z(A|44J{`I>8Jgo5Ul(DyOXJ9XCoRTUw#m|ILfjR}EsP0(tjs2>8q*7>!4I&_0yD1f=`6lHa2J`#@`PcXHy~=kYVWbK zvLbX5;r<;|{8%=h+c=)}RA9GO$IaN2VGR%}A9WyRLRz&u=zbdv_6DQ3T9?k#qDAJY z_~ijN3-o*RsvfP9E4A%9qBwWhUGRiXC*;-nXuYvQd*R{H^tTtyAvy%%&KIv;iCP;c zrCo^YcQnUNXFQmf&>eg?_*Ua)(?_!z^2L2$t{fGC4M^FyK0kL)Y3R$q@z3qTy(4a3 zfxOOI zCx6yDs?9$4ZTPZ9LD@kM<8}8eHMLUR<0jYi-Qv)f-%2~Ym%&Ii>~+|-S>@4au5u?c z(Q7FqCH3Ik-#D^F1@5d%L2c~ihOV#HdFv#^1vI9vnDzg+G>)K{l4^J;OD;pPFfsQ) zfI4j77QNj{sU^Gq3Tj<_zShD#q)S+?b7n?I78!6-~ z^!p03K534YdCn5FZNA(-RniKsEWF9{<)x~4>auN?o0vmI{ej6J!-aG0hWAGuU^)$+ z=Ndk$DYmNqeCp0U9CCmBb2R`$>eVrA)-dU&jkATHH^#c3e{DbFt?}}5b@Vzl&$_By z?~E!rpZbsfeSKjEZo1z|m=qdy`(afXGY-)Wt^VPLBrvO>?p1J=!my{$MtAp*j}(;m zln*W``bUicU%%l({)h?)5wq!OM3*EOe(Br0cai=1+RYwZJHJU^&sv(_FtNHBobi8O zws$C1y7v7_<9qvGM<|VM(VHH$@#H%{*9p1Oi(t_G^OC=Ly=^-i!{uNlmO7l_9@wED#k_-B}Tv2WjNZhpk?+m+1Jjof^uv$_YGL@NLJ5HMilBI#Sz z6r6W#AIg1KmDj2@+-P9lWEn1C+59~B?K<(ECV{@?hGEjyHUjSAkEH*{H6IJ@1wlxp zFFfh!K-jLa_UF+Fg7XH0PFNl;LA0>{cum{(e+2=tAcqKxEq(cB#$kt(KTEx`s1iKK zlfpM46RkC*R^sgFyX(vk7iV~h_3i?LH>mZ$DIN$@aFETGkqr??rS8?Us*>1aKmWjI zW+Ysnc7}1Q&gR?mSk$eB;~0LMJ?8#4`7QXrAG&txL!IZ`{Wv7YoEyU#RDE0DDS`dB zA0GRy8;7a7IM=Uzi|6apyV2^?vcAIXYOnODj7iaGQzP>+^F*uGGNy#j%~K8SGP0G4A19rE zxD`g}DI+>eMSHLjmOt$7OzP&|oM+Z!FClwVL}Tw3`p?QB?xN*Uh zBkba*6?)U{_Z;D!)B3|lxqOruV$zi(%5ng_4Yb^BuOi&OCzEjT=?Z_}+e zn0bA)9eVOHRZl4Y$7_9}bC2F{{3WlBE6wbr=JNLLwDYtrB1|vm`}gW`&h1oP{N~Qd zj=|TKMHX{@+#N1@s;21OJ~=6Q?#!o*vx%wo_us@R$siLk|CAnAXx?z2*C#B?G#)?H zEbPG>z7`&t9^$)cEb;ThCZUL<9Y>EG)4bk${fCuvnRYywi~@Ju(Z>~!dR{5(oC*KX z@@&1VduM&6s_FmOC;1IEopn;Y6NA;}ADm^~oU^{{jr5lltB2HThJE~!eP&G(?(R#C zD7QyGg9_9C-hSrYH(I`oHlOUQ?OMS1%XGDqf_;{^^i;q8jTO?Ti>a~vP|;=O z(JdT4;mM{Z!+z|mSGTw*tGO+TF*|M0*SpZPF411b@!L?+nl3ZBO%PxMITFs~pN6GNd6J!F-%*w#0e6 zZmXnDag|Q%+v2^7=0|pERH*)VFkhFLFG~E+@nUfAV+9e{mvZQ7jg9VBmh!JQa(mWX z(b6k^KUF0!y#2j|@V_etJ4t4U;q{>8KO5O)3mHBJ*Wv%H1HUppn-RJ5KfCnmW`vK7 zTfG0>Pb^vnqxjb6zp-!n&LR%%f5zwOdrg(@G&mXm_rm)X=)=e4J1n{*-uP$thyB~E z0RQc-QSqaV$A`^v#l!yoTDcx2i=3mZ|0_?`YHWiVXg z36#tpWrrigOJFbF#FME#XZ{c0sC#MZHW1S>4pxxe`u@_mro#5Csj2DT-=JV>nlSCm z;u!ew@2?{F%e;n4QL4gbZ?l%`vHtL?=Tr=Q@1fVUGru{v~sT5rFYfSc9 zuD4f-I40j;MfZ>lY^#k1G=UWS-p{OGY_I{P1OLelSbcuFa{IRpk-LBRBAV~%1@^r} 
z?Rju%-C39H^81iuUoXVRxlJ07U4Mkm=7Quq+gJx^>Z^2*Lqqz@r#jJubdagt(YvOA z?eT1?;HDd$oGiiu%|X}O0}wa@eP-|C4F!H&K+>0@I)SDsx;UtM4XKNi;4_02u_dwL@8wd7cLj%~@ ztEcy2@#&zY<4!adOHD)X#s-1o_Z6*2-2UN*FFuMA1n601cYpqBkLQI_(^8bi4@|L9 zT?)tnfU39;NNUc}+cP$mUHs=y<%@?L7d>FNr`vP!-_<9iNU_ZK7k>{Jb3j2HVp#8g z#SaA(cW1@Wse?gpTDhLmR20%)qvh}Kw{0Ng7J7mkC)S!rQM~{CWw--GwFuNn>CJ(t zE*?8}?EUt&^jnvi7|bXOdOB;4<;6{@i+|YupEId`>V(UKckHT$beN*1?(k^dFlC2=nWWDMSl$h0E#w*k-y)*LUZE` z*6&!qe+yx?b^gHESQTd!$?&?9oNVEi*xA|X5X0(#PF)!l74;hR-nBUx4w?8d;P+eq ze*cCIIEfl<)2+ccIsz6O9u5HNfva5f@vj;&n~C-8@eee9=e+q%nxd=kuizOA-?Mhz zE6qpTC>#9vn(@PFpZfXIU;e%Bsh=E{|KY^2ek9NKgZ-L%)Bk?Zzh2~j*9a@{zx(K4 z`}h5bME+B*+Xvzo6N(c`re-LFQRBfvF4hs9dMAT1I0A^72~{B_(_V}UDG8AfTF8~= z0Ap4Z3pXib`QV?}i2fuv04xfiqI+}R7+!N92>bhEzW?*H+`m?E>PNR+HvDTKSU+-^ z_Jg^bdefZ!l-b_0>VVxVL1`?uo5BG=p8~_P+4xB>&{zhe_KSK zaaE8xlhN_dziie1=O^+51K{NCJ$}4Owc)W6`7Tbt~X&E~^dlI_R_7XJ( zwe^3>Ng!5~Ud)uA=E=dhngxcY4LIRU^lSHl&r}FMz(lE*Xt>}iXi*?5-gMy&(YlE1 zR(>&4S;re*M7IHoyq{svF&Y$ z7*4GzSSt})ClDQ7u?!nfi6tfpn&v4-ASPKYxF@u9SdHtwU)Z=y19$F5P~C!}k9*wx z%uSg09#Sn#>okv7MzTzH^b?hfo^egfIFrR#xteINtmSzN{w)!wbx-~f%YwVwLHy@h+$$5z8;|2^8f%+lCFata@|=IPil|7iU3Qv<#FG*RDP9ty&qQsT zZ4kJkjq{Z>r>L*6^Tb&Gun1&y$2l>R27@ivBI}DlA|P%3iIb8BN~=J&S(dy1tcO6l zYlW(D9ploAz@6emCPRk~CyPJ`(h61)8Vv;?1#qA^Yi)wBf3faov7o+bF8mbq-%_M> z6@7X}=pUF*mCm(8wy%<%z+3i2ffjvKt=`zWM#uOdaOsKkw*Xwg@DHCbj4xCAJ_@>z z{`!4xh2XCfNqCHfV5=>69w|Oe^5Gg~zZ9`E74(VnfdK^O*mH%50r=31#WhSEltO9= zp~e;>rOeh13I%;m2x8;Ge(DHK$gQi}qZ@tkLc>7ehz@X|nWfumJkcfG&i|G=M5ymU zMaQuhY#uKbtB!+jOdUxvm7UGOQdNKseoE+yt8)wHik$Uk6L^CM)dFonPf!5XlbnpF z52hTtg9s$E0HdriMvfVQ%zL(OT^4#~S5AQKS-7o6YYAeObI14yaiDK8uhuYNzwldt z)XY?q0?+p{_#AVvfAhgoLC&|jwH1Wfky>!jT4{q5MR>MuU?`>~Z37+SQD{Dou+E-O zsPs#D%u2W+yyhV>42e@~u{aL;(~!nJ{r-Y>KM1J>W=;bkUqNT#X)2`0hf``54_Z*2 z(mX?v))|Uh3mN{@hQ#pdhZ!kAuRkvsIH!O*2&H(<66mr~}IjGo3dP z2gv-ye+dvk3x@S_u!)FQ8~>#S;TRtDb{!a{7YCf>&vS~3T%^B8VYeYN#6=D%^HOS= z({kv`r;@&YR5?HKI*1E`)VNGB3*vTbsNpO$Z(658NzqN*>>4tqq51YG@wb4ZRi+q^ z(b-~7*dGN%LajIrT1g99A?&FMN26om{+oSwoT!YBB%V-+oTITd;S*nCzJin#>Zgvw z(NAo6kn?l%lqH%fKi;Y2qvma*j`lrn9xt07Fxd=w=q0(|suGijbZ(<@IuDAc$A1;N zCtB#@UmAEV6r@9GI^IMrO{s3j-dRCiOC}Hn)94c22qG1W;E1#0?`K0~9O`0{a+^WV zYFHa=>`?9Jk1>secsRHrC|>7nj0s%~mf;z-Z(YF45RtI=d7K_wy?LC?%OI=B8HWgug)m&ETOR( zNzWH-&%=e>OrxI8W1HBi@Na_8?CUx`X%}1ddQK?}cSy>tfirLpzDm@T2;b7we=Bp4vv9#8BKhYsFGl&x{ACMZ{+$*Gi2FZou($(|S#?t4A${ z&N%JK!!h(vQGdnFCEDNm`&_{yjd;+=L)ptosVE8VwMJv?ap%3b0vv`>>RF~h1K`{u zbO4)2GiFt2@i<77=M=W%d`14`oL_U|2lYGCaq|IV*W!0 z)z=fLa}@&lA05K`MM^>7NqaVD^ZWOfuCA5nObYKkdRYzI!xKCqA%om~YgSu9E1>;- zmSEQBXVR9W2ha&lsS0kes-X~Um7}mH%&_t0*0pD20fL2wF1#w+y7wCbw4+d!E$Wbb zRZFDH&FjP6V40W0$;A4pNN+TlM5-*pp9Ucnhj8mM=(=o8!vU!r$sjUPEE>ThpE^OL z;f5xjUqBt9Vx73W@~3r`<;$mQ3NjTeKu1id+!Pn@>sLb(W8Ad z1v=KwU{foB_o`+!G~CcUA^tRlO+>H6Fze*`jHt2@7ez->@WjwMka~m*mP7Wb2y9}}Ft^@G+aSzJEnA8xMn=PUZa3-?zI1)+eYJbx zpo{F%pIi5~S&T1sEpWoPPC?Rt4|P}vlP*FrcN&=d@5hFL6`0-#Ms$>ONX`He&(ZCl z9_$VaFNA(d7^EFl>&GE`Ej8#DJ{DOv(Xa873A}oM@ER#HMfc+4x5R`Sqd1}p9`4DU zLyjBCD67a_k%K84k<~N2=E*ez6J`J5lc*48a?w~ux9~S)v=9ku>MsDE8cU);5fenp zl`dCYwDrAr-|A7i>SzZ?FoC5;4weYJd|L`IWy^DPt+WU^k0?=KP2J#VTRM-NA)TxU znFR;*R2Dbp<}W*m-jFqf=^G7afWVwb9twgj>gP^lm6pT5^kJuqOSc!mYdNulxi%>> zEC2<6;;!X^@a$*pi~$Pn;1=F1V^X`sS6JNoyIQ|}64n!IMFWFgH|RT z_-$`^8L=4iE@~J;j=Z<5B>~-T)aBW}h4)NJ(IxslqG-Pq8|tB;2J+XO#J7cPghS9M zN+%r9RdDT2ZxHOB0D9hvP8JVPR0S+2sdZKo`LX_LIloC1mN@kW}FfjnhcJ_)@SipO<^LxYe z&ZCH4n>TsvEs21Dp6-S?qg~*NmL$Gg2S=f}5jXOf{Cn7?ticZS(KRBeDsBuw5sY~F z)E&_3#8V9OE6!99f$~8Fh0}y?+7`c~K4AFO?Ph(>AHg-eJ22tAcmd4BN7$ghpEfaz 
zWBSP6?iqE)7Hh!`#0}(3qSfln-LA7pE{LkQ2Q5Sa8Iwp^JX6Y~H@7Jpep@o&u^X1bGS-{ll)jB|5&0p{3$fx)Wv9vY-&h5<`(|&dyYvpT&di#CL$>xcn zhw4^U$9bA2DRie4Fd(BBb-mI$uUhnoR*pU?0jZc}93n>fB!4{rxqbcGd* zd_6ur2pMi60L~T|&eD;WxpL`f)TZ7h2N+`8Qpy6!2!!U7wo51??!Pk$%+Tzhkkw+D zS+nD?7lP=u)syhltd4IqM%5d|e1(On`Z#d{QVNgW`zm!dIu~#m40@xTXTH`Kr8yhMV=MMT^?$Y81TNB6=+pzMYH zEk@mLi1pmj%*l@yTz~8E@4WU68If}vk+!tN$c-C?nO&qHk;uCx45wxK^W!o)VQW-4skdxvJ@!)!lV+ml9<(h01eFI8yCFV z%2cpke&N+lYIU}2_|jRm+3DMx@PhP)^E^b6WGBuv7CilwX}?73H2b>bA$DhHR$t3miGIjBhv-Ke#|_V^^$fE8T)HFsM(UE1dxIorCoxAO?^$)?d62-$0oV#x?obU>T|J(9GiX zD?)^zI&9(hZG#x5e|jj3*F~0f9iSCNy9?E{LzWxB|A5V1uh#?DFup`!8o3KZn4xsI z5!SlfHLxJNH!9onX(>;dB<;nF(KWP`n8xTy*$`T7bnyKw=nlS;iAknjWoQzUCC)Te z=DC~p9rc7p4yomY=8vMWf}6TsHDnCuBe=MTW{$QC$A?Lh%v+|vXwNqlG^fC#7Q%?1 zFbJWB&$sQF6=|j74SC2@GD{o@RiN|@4N%qZC7)dMqr!I@!)4M8Z2_z>a;|_EjWGLl4q0&zkl$q@0!jZ#@8~0Hs zwsb4X{88|#L22r8)=0;z?Qc4Nea@G>tPAmbtVA{1jsnuu@esL`DTuEgwM8e#2FyTy z04BikddD)f2wA~F#ZXZ?Rc@Hi`4m#x**C44;A)N<$0sR744f~iK~G!1#eGV1EU1M} zYJ3NtbjUh8Spx8~`+PvZkI&CUzSHG})NY@iz_P;ITgbZcW=Y^_L_A8B?UV$O#6 zd^@z@cX~l{{MIp$LaGsc7-a1)Ks%P@x8Vy%a|8jFv7%miX{wQ5y$1G3jlEy*mD35` zJB#_G(M8oYZX%9YCcc1@$4;*r?7m8frl)X=-Z=NjB>~;%vA`U31>U0Vz;BbF;C)eT zKwbqL)h_QjMg0@f)5X&$pBx$eh7Mm-)k!ctlvdyjdu0|ifc{VkeoV6E9j-^ny8>Q* zKt@<@%PT*@DH?p9yjqJT25&1n;z^IF#2rC^_~x(a@)T;MEK3BqRfSKmYo#f58XmL6 zfYzLcL5d$pQAZYZl^KTO2=j40nyrR&BE^ znEpu0ebQ{2|HgHwF)ygS3T%{0frpL0_Yx9JFEmq{GY<8$^Ofy*^1(DdDa9tb16%im z|H}^uHcD~xtw_py(TRDwvk|PFe218-4VeC;0_nu4x6Uj6~+#Ak(rETARBNLufm26#S?&P@HIW!%{Qj755x5|ZKaL-n4BqYJVc;E_h zhYRe;9dyr7+e4Uj{l;darKtq?lLU!f1LQ_#Yj|ZKKivc6{mw=?7}}YZFX;%P0Vy~3 zVJWshAEmpRCGb2MZ^LtuolBzWF?rcSLvCh4GR!-(53OzjY2(y!4kKQG|7wbyph6j9 z%voqNy6=iu*V7wAajdHgJuah1`_acc9{#n}$jbpVRvG7QI%>0CbnxmGFtc zke`LsQ!*t&4z>68;pSK-Gjaksb|I_mf&Sx?ueB5!l6cs#Agceq2IFlgRM*mJfI0zd9c66mQ4S;ERNNFWir>9cBsJtPlELFFKlUS6o3d_|nYLycAQ zV8JkqfgOaW;?BX?UP?01LzP-5r9j zBnv#FfS_3^nT|f^oA`z5W*0E4z}TE3ZjwI`bz-zr%mHs4|KasjYOVw9@Hj6W(gE8t zt-FI$mb*MV1H6Gx?sILZ**sb7I!bmy8{`}C#Jz8&I*|Sj!K+fO`4v<#S(}55-J2B1 zV3U^M0}ZL@5;FoPyT(eJr+8>vk{+PBHnWyIHzoGv32^t6{N%5?vfMXK5-Elcb>>m` zw5KAMgi)bO6rFmM>(PxO7|m5``a>4#A$O3|N)@x=P%2EELK_s?-ZJvi zFKZt-c9&QR)VkYgDheM5mTKv_Mtp15+tEq%tDo-7w~^OJ7?6RhiO9^fb2FvJSZbki z8jL%wHy9LqAq)7kFr+{AdW!S2=UL8MxrB%bMe0YJ&s6)8qZEeYwre04uBLwWQhXsV zvWM(ZQV&X#SS5t6V?xoPQE3rMTU?D@ z5->9K%7oSR2-OiNBOoBpe0pVdN=IRq8uf@@cc(@1-NEbi)VF95F!ozlKll}TW%#mpT!MSq@nKn6z!s=eNFeC zkr3tOZ0ucyTGj|)G&DxL4C|xYHYco2ONw|j9K8bGuXP>WaEg&QpmsG@DTvv!2!!qOSC~1qvE) z6)-%M=_`8=GO1vQ4)V;m;y`{i?mbyhw2)cgOE|>Z0+x;tsr>9=Vh&@FIedivg%Y}; zKa({opk_caay^S1F!A?vdv}X0xrTy~KCh>6BBUHQa9#Ngb$?UIxu!uME!sX*^6X0T zaCfKXbX*CQ-GJ_w*_o^qE$~>|1hE9FGrGE}lY$1UV5v>MiwP#OpNFJ@XmR61n7Dn| zR*SvNfbTu0#@VmASacRXfgWjzBpbkIt|(({Wr5AYr03+xBO#a+>T8V>;TB%GPT$IG zPXeGJ#dgvx1o)T?75vr}k}oL->`dmMtvo#U>!@jZ-8#~a+r#9AEi?xxXmH`+{T@Ba zd&tjhN$N)J${Q)j_Vsow?BMQ~X2dHkHa(=l{NF5tYcZN0CG= z=6iU1E#;yBDO`zZoOe-ZE~XUPP|?k{cs=;hh!%SFdRTd`*S6~YVmVSXud-O6x?B7_ z<|n*BgqM;x<5uR_Cj&Vpey1x7R_LIVyDa+?wr6C*R;Lm z`#rgozP-zDp#4rt8KHUE2*5?YJ)~^PSE+55Qy+G!bfo5jct(Ejo8Qi7FNju+ zjUVea#iI@Zlq;7NKvW0nGL1k);8vK8;DoHH(Zli&*`9nkjcHQ3E(l_S-uK|EtAgLSLr8n4{Zn2|!)N{u}&>jABZTi~mWCzt8B*&a69RJbXzoQ(&$qVV& zC~ui{>uiC?B-gfc#~jcZy{PY0-{IXtmy0i9Vwr%YJ#83Z!75_u1k1^3y+}lXS<3OV zY0=eHNl$0;L^jEC)Ex4K16!Df?1KfHL@2`9dF3QBCT!3X0!hgZdj?L8 zKSY_M2lcd>4Qr*!i2)OV{CzNe1csv(=0Ro4EF@VRF9;-|qWYpmeN>ttY){7E*l@O| zt1C(->$w{M%FUyah;ICC57&e$!lP$4P{9-WI*sA)8cb4@|1e#5Fz^>BfS$YH3GtSP zPsNdV_z>LsT9`W-lBV0f`ga`LbfbFj#g!Tbo|Cq~r{Nm&;=Z~Z!djU0& zgmjXS!G2vAT#j(>LEbQZ5b+{j2g;wtb7}*2rhaZgGGccAZx?NRDLmZvwP4KEm3pR= zmP2}M1#*(}51LR@y{5iE{<$Z|Y_WXx)WD8%id%sM 
ztkgSL8SSxVm{5%}?VHOM9f7*3L8!SxefRow`T8|_56^DCqY0TPeoF!2NDTH5x43oJw z-8A*Jf?Mte!HvM;3^j$q_gmbgA}lE+a+?_3WD6pzhC-!1G6zn8mYX>1v1OTLEl-Jci*0JpKdj~eLV0)DOUZZ8c9P#1)LPp7I- zgd_+ncaWUB<^MD%TzHKriu-*n97+y}Qq7!MIQ85nDL6|pgP$FaLJ{D>Q*pXW5@LBR z&4_ORb|_>mllIz1*T`m6VTj z(W)zv+4T;-H7dR$XsNe>I5LjYmuydfuq5PLbLMfu+hSMzW%UzyAHKK6xY|9vbsSRo;POSGYr10@8O9BY;ll;h2c|Ey0 zLL|*~8j*Y{nIQaZs<3OdUq#a;VTGcfT3-?=r&^jPr|{}NEf+meZHE{@-7NkP;k^X0 z4}_`gsT^OOj$R}FoO>Q*-$=v^M@C^!9%-lrEGMlGQoPYL^~rggvF>ps z0Z~?mS_)C7`XH*$-#%*3wK*YGb@{JP*lb_iKHQ@>hBJKX-Xx`tZu!%FQ9lX?hvO0I z^ZvEzP~|EhPZ|DaupvrSM&nIyF*fTXDzA4kB1OyhN2xATahb;{s;7@fHe1xavBV8R* z4EHd*%SiAMUgJm%uVZRzf=LO)+XAdYoFg5?-L@~|%}YdG4sr_Ryg}O(E6RsxMES){ zi~5?LAFVq)M01b@xeggVggWPNjYBTIUl%RQ2wm2exv~ganZ&WN*jkm_acT~ z-!Rz~64L4Gt+=74xiP>atIcuYbn4eU|Q$cL_%f32Ffc23?H`K2VP5ai5!2>-d)JZYQltyi3){Skl#30;p$ z&qdbMha?C|oLO^@hqDhQd0rqYwu_W{F0GJhT`xDHsM$>nN@7hCWbn@O)`+Nf>H^5J z>W7sdTGW!mz) zy#@P+Wk90qmgbY6;cHN`L8fEOy9a6J>LfXsrPTAD*yW<6Xh<^QIuTyt?!?k$k-)I@ zOJ2;~Ft5P#ixyZgsJFg8^BC4%lH%R303mp@?FQXRn+ZOY5As?scKUaOkc=4014e13 z`7#j{nzFN|!}b9lUVn4p1F_bfYx#)+OE~8MwlTS`|mWF;C zzF#@+lwcbd<%ZyeU;+i|ia;~d@&?$FW751hGSq7Rj=7fAAm-Y1u?X$OZWY;2Q21%q zT{1FRk^S=*JnyLElAp6Ll#BW;4_@6=o&w)PJQ!U+HbaF>1e|9=NZ;jJEeDC%EexJ` zMc0P^*_tHfFQdXz9?4!I$W~st+vOQ$sE;URk-ZNnoA`FbHDDr)^dhMwjLEH27*{Dv zhbP<79daWh(14|geMNt@h;!^>c_kwH*Z_ow*oM!JAy=@h@^nII?c3`W_gEm>_3ha< zfuHVik9FG8l-s)=Dv7V3PquN&3#L^HNBj~wP9r_~j8$84T?N;<+MsyxWXqK9i`}L}?xvv@a|! z2k}5Ejp|zo>9}y+#f37{g9EC?wBJ8RwiA%y2&px#YQk^-Xwe~^Nrw&YT`SqHI!v zy>3_(9jUCi2iw9NxNSbFH_2C5yb4W=7G^OU5X0SjF;1eA=1R5e`!DoQG8=X$=fnRu5=dj-Iiayb&QyX${ z0FBa$t85h_o$m*Qu9sJOh!m%RalTZyB_~1ZpT-tDm&@O#25c<= zKpNea2wTlzg$opTOqWu`s5Yu{B6GkKJ1O-vqmmXUi0l7wr1F%criZOvkvYYVNK zid|iaMxbL}D(qsyYKvC>jK7X)N=LjrTfbipaIyy?{pc!ep(LX;3@%ilO$0lK)!W0; z%T7}omSNNUse$y7la?MtD(;?g4vJ7bcQ89vct&r`5uYIZwZbuPTwf z$WT_Yw7Q5NjarThe?ch0CQii>$d_1!wvFPaBotlTO|q^76}+1c4>83xix0?l$M2ICJU4R4^`4>;$V3MF$@(6iB3z9me$X-mr$ODUpAbStuB3$FNQXC$6 z0MWzHY{Lyrwgu2z$2W4#HkhufWXvVhe|9RsAfJfsy2Q;0n4Jejc!s|pH(td1vVocTwXLVsP#?|Cl&iWnR|eqZ%ghKSSU^+ zEfMpPfaX0+w)dL0wzdTRxE0zW@26Qs9)xK3sV`bYrK?fX!b_6n)bUMfm$=l!wHSDJ zr2Rk!B}}a6&%f|jD2%1;QlU|@}Chwo&S5iQXGY7dJGv6EYvrGL@5@71LLq*wxlSr zdaJ$+l;@0cvtnYYs|Z<$*N)*vQ4DpvVHp3yZ+#&Qe8oBAn`~moX+>_|w}(z1hP>14xWo;iGZtl*09Eln0`~S!e&O9V@$e#^Fb%?dYF#=mvj$03hLIanVHF5~_l7@| zfm6vM!^a6vB9e(rtJf4)#vo&B6aY#1-jbW6$OlV93~(u*5wP*{QBhnhAR}JRu!!B} z$uY7O@x9)8BD6$+hvAkdhV2n6M*W-emf}uhUsD^}`d6D}E>s7m5eSxeJV3xf)`~0= zM)~FZ;{tQ#p~kPN*j;0;&Ri+d{}G^<{e!(6x9JTgMX_s4V?0p+3oPb8RU$ zb<8Tdf^1a|VuufbBEX-}StS`MhL|dL0lfF7?jg$UwuY>QU`SMxc-XYR4!+gbC~BYm zs@{ux+r2VJY6(VEBoNm}vz)+%zx$A;Be1kI=))m3x`G;ehe2=btM$*wh6ARO;2s@{&f~C^O2-VHf+}{Zho8aLFTt-uTwat zOz>ZqFweJ9Cqxax*nOz9tVLKFs%dlfVB5|WeDrIGkTYr&O2r&Ez#BG$p3ci&EGr58b{TUNKV#*vz@RzI;T|U*}DWMPtR66sUGhShk!XB_%jBo0ijjIUaiGbBQJYH3)I6(#+$9&VTr~KBB$reCBW!2N4-BFw zsd~g-=7mI?gg>0deH79X%N(gDMPy#7l6BjuWr!qN9y?l2GyqC3it?A#U-~=7SLZ7U zWglywK>Lr}mAcAA)LxPxou5Y{=7=f0DZZ#iK(KCbgz#62G(YkGymTH@p@|ljSmXrQ zQK5oJBq3jPGEhRuBN_*cw{o?G81`^JIvkMCdr<@(G&-kG8wG~LEHSEA`)NqUV^hKA z&z#rl_P&L2nAxzjGedtAyEqxoHMh($Ce-2pk?k+O(7gp4XHL{rz`Kixfb|9O)RA$r z=GX_ke(RK9T3lvaG6Swp1>H5I&q^eDlp>>#j8kdP9M)D>Kxkl|OPetmDaDOMCj%pB zhefEeK8QM*^jWO5ED;20k}qCvaD7G4SksEvS(8J^(JZ2(5NDpj-$G#Jafu_*pS0B` zl?mo*B_1;+T|f)(QxhXkQw{2XoHR6NS<3YO7lGGuz&6`P05jD)LP2YVN2xztE^8&(oXlvQFgC^u4==AObn~`+R!3u-R(({ypE{HAH4*-`qb1*C zfsa*p{qYaQ0mHv4^ql=*b#ecG*EbULEG|DjZJ$!W_-mt)RoUkmkdnSw%e#2-9c;6r z=XL5Fzx;9wk(@F5UjWMl?%K6$%kU!bAwg7sR$=gk;lN>e*$Q6!HP+-FfbUp<+bV-e z=bNM-c5|0YPj^|r^>i(9K^I{B%*R#i7)krQPV&mOnG81y^chc#)ZYFzf5C!uVZS`z 
z%+1Mb;Hwn-9^H7XW7n_QWE;k~3LdKPZcw@`Y-lpa5#YuUN-V?*-hsjG zZQGx5Pk)k&v!`^W8>j9dJL3Ajs+uO>(bM*$ooUGhjEbA7(grpS?(CaWFZOC*DTLnc zU7dV)Tgf%;GiJ{01ZlDomgx)X-RiPHw8m+n<_EET7HgZ=&b~$cum^`wH8|*)_oCFoYA%M%ZzIpf7c7^&g|P+C-z8Ok)tG6F8O>9A8o&D9czo8j-+RP& z3;U<&3jzD$E{^xVPs<#TW?PvcGH*BB^rh#_ZN}BdusSj!S3@X9u1{iK0hFpc(uh`o z-59{0v}!YB{|*cj7d!KWVcMK?3UNyMg`OLmPkcc;@1xK^67R>V8!+Vi^~-N|KiHw< zzvWzgYV4I1bG~X$`ETitqt>R4k6yn^!iiMJ{Q7=up4Pa;$7_dkhlZ+*^}N_31&nOV zKBptFYdmY8q?7*zj1D_Vo?%RPeD!Uxi)tq)WrvZhRe=s@S~SNcW~;%YLc{@aXo{wF z^e^*pb2|bTd5SmOY9`~c@aft$vE?FTV`Hj%Y@c(nhdY+Abdp^B;v3(+eselF$s(Ze zx_XAniim{6 zB1LF-&@}U?1MEue%0VLe9~RDXv{KHOp3d-H#={c_vFEyXNqEO*=$G@B6=l;U)PNn= z^&lGq7dopGLa;tUryZLFGA7d=CTefSLo-z@w!AC>%dbo>qOzyGJqFB$=+3muxw*L* z_QC|IQ1QO`<_*tYxUdEbRrTQPZl(4Ftqj$;6N%PFCfFXI z3LMV*U{6Y#I3wXZ4&Zm8L92GWu*;V#Q-?=UttXO`lhcs7&MiY!VzXmVM|qgTh#7;C zvr<-88&_{`^9hLWu~^k8Y>@d_QZbX-tzIj(SE~vSAYh9Mkua+dq!TmJ|KSY)v8r1; zr@v{>%g^rs=iS;!>$KyJ-Z)z}7a^DwFD&B6&+6xLm=mlODaQ#*wdrfw(+$RE6_Tf; zgQaiyo1GlHdKw)DH66DJF{+~mrJjepbs9Cj(v3>48ff=BLcMlqBfvyHb!g6l1>%ln z7@=S__SKvxW!f1IEJtzhwm5j&y9licDr_MzZ~4zUc5IouQYkS#>H%S{$U8#C}^lvM&49|Hl#waLNYGbH72BOvs%&z+g62gNE z0WV<>43V{n>;J@v#R1R|GkYs@{NAw_U!(wt)eLskgz>)lZaa(+uUOaK+-9t{&8Q;p zd0nl?wwYI*_|v+EvA>Ri=!7Tal=Cg~q{XDM+39RG9HRPb)$Q}lG}6x9@E1CL^KLet z2F*;$x_ZqK1J|RmOM<<9V7ecfx0;mv>~bnl^wT0v&bu&XWK5&WHIj5LE#Q^;_$3g= ztJV!mpy=rTuDv}VdXCFle7@aImbl+v{F^V9y8HM%ge8s7dcm`JvF|7I%{3cshoztpv~X!POwD}})lX7{0)ZzH~H;TaeJN3Z}Z!6sUqcJbq@oj{I4IYcbv z{>SmyzYxa*!`(Z91E^Gzhk--a4ieGp-=#K$5i}{NC7pWH2hZhumJ?6nNfXwA9o`3U z))5$M2}keEy>$=D7}z9pN3mxL&)N@mRvo`3&$iHjDM#xZFqG3l+C(-Osko2HWSW-n zC!sd;Ch7L=+b?(@ep~loePUHu&1{BIDWc*Cz-{Z#g^L;ac09XZX$V30dm1Uxv9&a5 z9TiD&1KJEWHjhAbKV!GzGtPtb4*w}*>B8))POyAbaP@jyPoT39eYaI>!F(WN#%}ev ztqHNTxt9}jdi)zF>l;oa-oO_578M}%oCht-bGw@!OB@*`cZtE{wz8|xVyo+lg6o-0 zLtweo=QuttJTbb&nRZ5o7DO4xViD@x*0$H6xdVl_uW+LBE;er)jxP=2;+|KfrYPK zRAh+>?_#F)Vn+v3&vI%$woaIpJk6ytN;T@hH&io~^Uv2l*okfy6|dIsTcH+N=ZbX9 z13&J?BQf-=_|50s)Av6H-9R8r)=_%rNz7{zgCiFk{9yRsZ7`@LWO+Geg`ATX$Tiw` zhLF)nHnK}5aBa7a?`SGS{}|Iugwc3e>bd2dWm`Ue7B$}oX^i$S$_?;Q52g=Xa`gSS z<#~CzkTM5DiEiEV``HKJ9^O&=@uMM+sNyHYXwgg@)SI8o<>cfpUAh$X@BOzRn(0Ks zcSk24EMOy0qqSeYOzA;z>4?ZST4%&UmLXZ4cv|jwtm@kQN2ogR+s?w6na*H$I4AyP ze0+(({I|)yui$8-*9CQv}aWJ?AhaPj=bs0tYkKq%EIR6(-d4FE(llBKy1*NboTj7 zaNeumC8atQjyWeGCh{&S+7ArQQe@tmo%V3vlt0;R+&rpV5_klsT7;d4i_6GTe0HCP zf=;|JquKXu+q^^IDa5NKXx_Lbu+MvWu$*&_h(Xz>sCbz~xa4~P(%sEr;+hLX-|YYu zE(p6))w4Ba9wWzP(Ks490qTzJihc7uY_-QY#mN%p;c{ zh!Y&bdgwif4=T@B-8;?=3fS*B6v-XX2?&o`xbDsGPoBun2hTVE5oY7rujFCu4?9;K zZ`ApG=im1YVyMZL?z&{Xqv)K1=ekJWOZqWgXiJ%0&%NzjB%%u2^{W_SCl&SG`Ax1N#zJ^j|Lm1y8NvG&0A&#G!~MFFHPWll_tnl)uu z7Eb6gB)z1b+q*9lxJRf$zNBn1-c30_^#^+2$#0b;Se;jb2y~E*-U@>lSwJ(i7W(#kzc=leQb&t z=(nZfX1mFB#B8%cx| zYtLW$^G_SeHN9tXB~A#w-ypDEoO)y9Q3bH>|LEXxidN|Okap27Lxpkkns)oV$ACWh_aJBnjMz5ziy+J z2*zT*epqIZ`|24&Le(pyU%&>m-?*mvOxw2|8pmcn zk!hRPeE8n6jXMw+yqLOmoty{FK(^DYy3}Ck8w|J>!^$MwX32^Dx)X8s!NUKGy*H2R zIc>wgzswk8zh~@HX2}q;3#k~}NY+v$2_Z{Go0O=1VIupl-F9@18&mD1y4(@Oj(pyOj+M4N^CS4@uGS46bt6 z`j^jgMPLWroFQX}51#|m_-3s}ZUO<_|BTtyGl+Bh4;}i(+JvJ6+FK$O+>QfITL;Bf z?b@|VqrUfziZ{kv<$Ja$+?aAMownp~?r;6liuywr6jOF?dSCuayIwk4Kbs(|tUv|n%p27|GQP!@ZQJ@Nb?nsX*%3>{cMg;@ zE5j4~H>yx_AZ7GVMJtuDCn?`|rO8>%ZgYht70yX*w@x z@$#(Nv}$!1!FFoPgSE_2{66)d@Oa>v;$- zDm`tazKQYR6_+-4TT8=UpZ9=k>d#vYUb4}~|C|5)Z&BzU-Q6|}FLpUC1wm*SytDD#!~Yx7dvv0J3A`WhBv*KTVS`yI%Xd@5X`8(&PZ&Jf+n? z#>M+uk1yO#0@Q5YJkz~sm8EgbBANqw2lBR4w{9Q&VB zNvXrrzgitSU;B%`*=_qkAXM}o6? 
z+~2#p=DxW!*Vb`*yS8oT(1XE-%2M4*duYTZ^aI@d9&Ryl|834Iq zF~lMxp9*pngZG<%2x9N7Knrojy1{ebI=B3!TDQ_gDP^XK+CITSsY6xY3cz-&9&ENg zg}k+_+vKC`TwPrWNhxi**M24N@>=B2lIyxIP4b@JThDuVXU3am^A-{a5z}vy0Sn3L zBXdeITv*XB<>48Fp#ftQfn7iV_r$v4@uhR~FRqe=Td16szW15?gXEIdUnmMqZBv61 zmGxS&`ejzgNE|Y9!vTo2@RT;1>dYY(>VIfHg?!v}E+co@@mN*WbwHDb4n`08;Ml-@B2fLCztj#(({ zoMe;a>fLVXTR%Ybni>{wIz;Ovv613hzi9iet&+wx+{vQeZP?Xw^acAu{m$z1@9i+I z)nMbVF;AJI%nz3v>9W#)4=SF1@Kd#%|l&PR~nwXVx; z3sO!KQu}LZW%D?(bhcGJZ4%5CgZqRQq_J-Nm-;iRm5A5Q+XhM*Y^>W^5owvJf9={e zLP8`V!Q~-=_#SPsFURZ!=Ugc8cT>Llj(m1($ z)q%l_H-FWxhkbQ5mF#NB=T6@Rxse{DeTV6L1|HV?ZS!W~6Qk}qQlXRj(ln1Dq_aGz zSpUDbTO3s@jm1;zglwx(N_h~NQ^tJo7+xSGBVf1x%_wT2tWrPB5S=I`)Ne+4_15e6 z!w=VO8`VuBhIcO5KJxtETHfGFLe!SxOA5C>A`O7L%*e@`Fk!+)mS}(9+1Ga%2esGz zdnXZk7TqJQLv0m-`nuXa{A+if5AqC*aM-$3$3ijXG!nv*%G%ENo`{v<1|EuvuxvWf zBdsa^ILJkd!_DK{8jLKT_WFH&{rxLWb_QYVu2yNZy>V}sA(g8kNKw5ywZ|9ZrgDa$ z6na(`Wxg#bRC|vOCO{!gzj@I*=TT_G#^;(SF{i~G zFg-ng1b{B@!2(5GM>cnGL;btFdTm->^P|=tDfG`wj}C0zl^+*EqmegH!QkaQaD>UY z-G0O)HC0vBo1>y1cK5n8!Z6mQ7B;p+=HWJq^#=Nm$?)gVg*h|QhCb-#R=OR&lj*t( zfa+)~uJikYnkAsbQT?jV*((XIHD!w@O`aUw?cs%$1|bcbD4tZejlQ8njG0B7tW&RE zg9-yoGcA&RuMI5tsrSW{z9tc;q<-yS@x{0$DN}Y9y&sT|x_CsTvZ|S}#qYoWew@Zl zL_u?ks6+f9S~5|eA0+k94pKWPz>v7OxK8;P>n~lcLPQ_hxf-T!%zf916Mye#bjo*$ zU(Ulz7O5)R)?~l;IS-icqEKZr?-&5P?tClTabI-lar8Oj5gJ@k99hZ3psR52oR{|+` zFv8Hj86_m}@Zt9&v=r}utGyl#`EZPtzP-1P=eG0Z+D&iH(Ni+>$XomS`-DbMLg9V) zfmU+%q;SAiKUOB)TqJpF@kLDU6Nqe_J@)xtN_qTvkwa0&-H4uSGzg|Y>8R?5g@t)V znZ%w4QurJ{J?_`$6r%^aNP2tmD`pkUi_`+fCOiARvIzrMR3o>W{*g?I1zzJsyLaTGH+K$%@@|$uy~F3pX|G z^smYCa;cL2miQgxMcz!-Wa6nuzqx1ptVS!`22FXjt9G>QvspwfEx(;o?bfuvG5B80 z2FK|EI`!5jN5*ftu;gazbv0As8Di=?cI?%q=4TdMTC(aFHH!mI#r4#8{Ra=eQ#gcG zh@wTdQOYlVKq0x?F^fky3j2Pa;HpDQ`~iUD+lyn_CY?MLD)D}MoJZFUnBc{~S`aaW zGk077QMZC(hsh+0`+>S0S?)NOCv7;)u8tG7Y=Gc9&dZ0$lWdgMj3{pw zXDt};q|s@6+K$myyrQ8Se_8B!*`+p|a&XCO>%Qv_awo{qh_bS^kzGzV8#jL)2kuM9 z(+v;lhzH8w?#wSOMZ4h~)GdKxy2xivv$cF)C#th2mM$Guw~cd@iEQ>+T29{EXNku; z_cuB|>&SSO)aQ&QHm&1f?bqdDr~56u5*6z2Q0!d15qf`{na-8Dvu5p1N{T5QP7KLgr^%Z9uzMDJ7wn4?LAsX80tgHcloatlLH`RMXMX`Pl@*zmoAe(EE4w zgiU{VIXc|mUW~NLGU}jJ=4O)cirIJfA-M}JXpkRLy5CI0Jhm)0P$%iQ;mIzOGK7m6 z@p1nSU;8+&lX;gx3oVl}mbc!)tov#B@hGha2u1X-yp1(|=tNY9Y8jA;O6pZqeLELL z**=ca2~xmi+NL=X%-Dn?(ps1pLXd+OxlDVz|5=Yxcf#dLCLOjv%G*$?9htmMv0G)D z@8F+(2AR#c)1^m`6AW9XbJbqi^)_m`R_p&hIHJ{yDUO#2wHj-3{Vb=!6u`}|j`VCt zq~=6vf*Ws`)6K31$>JO+8hGBi&wN=j)B%%|Bfbzk2nepX?72-KZ z#|X08kdInwUR9bG@YL~;-{NjtjZV$mV>yl7fkM8dDy-T2UN72|M%zAjCo(@T>-K>h z{Ndvv5VmKk_kaVI$+$;$>ew;ADvTH(P*g6AD;gS`q7(7q7sksgj` z_nk5D;+Q_IvS}9A(dg{L)4+j#!h>W{4qNH%+25UFJy{N_`{HG9&3YCaPq<~|nP6k}ll8e?=`))1^ zWFxL5t$67dSaipUYvs|LM>M>>`|UXJ>pl0;KRVE(wOPOS7C&_3DY%V9}-N;w{5 z?g0AzmL4M?ATIqT^i7+=5aceDiPVa@d?z-05X#OM)+Y2cE-p>{ z*6{TF`}wth{q@&N3~la_!+Pt^)UMo@V7GSC)Y8#6X^f$brxEPP+Eje&mGE14;5gvq`_^MAqu%)e5!#@SOrC~{ZRrG zg<#e1339-mdg(hC=UpB9v&r^vdkA z^a4DSvg=A8QxOu(^}G_kXx_YnSs#?0iZkXZgDs$7R^?u~WQ_nkvX}!YLXubI3zjTd zQh~zrIKj}daiEaYhGr|qZmhc?O!4uT`Jy;4|81Tk_lDmS6NEp3#P!}Q+O%mSM&*|N z9mALZeF7QbPfF((U2ASi=hMoZQ(=fpbfeeK?xSs0$JjN~*<5>Fhgf|dLSI=0pNtql z9dKAtI;`ykoPwgFqXSfzC3?xlD`~XRi^^SQ_>DTk(xNNbbUEums6ijkf+y}21Se5u zCB{d=Y>>Z1;H#^W>aBhVyOIf!yAr9-T&ZbWB28y>`~3KP8UpMDFaaq;zP>>~a)fsU=t4+W8K0Nn*6?BESVoxd|w{U1-wV zE9P|}G(H#=O;DKuzjF}gbsu7i=ePDuV0z~+67gxJJsdKmUdZ~9uKmS4t*E_@C;*6E*%~T}heIp^En&n~ zvHY7yn$eg)^t9#Hk}g>N!qgl`M%;sGi_Nbcs(#xH6|>^ zjEn%BoS;PxkXf*~3RjA${DcUr1+uZ$T0(d@Gq6WpK7m9#qt~XIBkg>;ajmPp&r?nUx#whRW2-*2Q6Ps~-^>r9TBR+aPwCi6ZQ5%5G} ztzJ}4pqz>3ipOab6w-^}@OO7or2!?hSZ_sJ5fRgy&SJsSlI2Y)^9bbKI21S2*2bi4 z^0s+$y#+4GbJXWwm<8L4^_Sz{2ZsGZ2~sk3%pRx%Dr3#N(|uJoR?J4o`b)&lHjf+8 
z?Kq}na5bXnH>3}PquB$x8R8BVSy_4u5tIm_CYFY6WM)GpbLV+Ljq}s(eK>K$mV0Ey zGyJz}SFf>$pOp)+;G%SEr4DgpX>pmDk9y6(!fbnc`=eF=B#Yj>Shp;>cFnB`zJ0HD zGk`SZG{(>zxyodc*xuwYfOS0ljaBz(q*sYLI;B_M?lOt5CB*DiE+Ev}Cymmr&1-D% zDXKxzcHAUh=;1xs2C62g5njLP-o4N}OcB+=Zt@$Z99#a_w{PETqt;w0-)fRDxb2NU zcJ3Sjy zuy77<4SU4HNP6&|B#B>Rpk8vEUMaajBK*i^nU9BF#DQ69IT z%%K{6=98*0uk&^VGk*N;yYCVcYJ}t^GA>XlAnQ*?LA|$UA3CY=l+8J7Cn>VG5YTuT zj=mN!FIW5zF78H&TG3Ba^O4w+Z*KezR(2$~u;_u!)BOCMFlG6)!{a$09vtN!Ejy?T z=l``7b%l8R(+gUqTHB+NU~9MXJ+Yxji`Jc{*fuc^Hb{0xnp%w)|Cn&QSLnNny=_>G z{Y9gM(fFoo*8=abp3)}V*t9*FLLWg;KDfw_pBVkz>Fq)k1(WUVFW7L)Up`1E9wCee zh+7FhcqG@Jtif3M{w+$<8?W29X)}yfQ1=AZF8`4#y8XKWvyc8BMp}(rcx-XEc#7Da zyxHD8&cfEk)1S}LQE~grcUblq3o&v^96}&%WwNl*Iy3y{*YxIw=iqn$AA)ubQ`L3%gY-Mp(hd%A{0|q&%R^HUrolK8bUiRhT zO;7#9BzM)(QrGPy-(MSVikc*<#~t*+FkpPW00FNelQwT*je zJz;H!4c)Cst-}P45u$-z;Q^-a!Hn11q;3i?R0FxA%|TmyLG2Ce%vsqPp5RYo#ii=2X!fDXrEk7cR4+=Ia$I_O2?&?Y z+q_8b9+JQ$>pZVsi#jN4-YeW>%AJ)KqWnvgqhwbYJQdzVw`zG$8k~;#_Z6s(+H8=S zGF_dAb>n{>3eKy)1pFvRitSXa{Z0R|plrv&`Fo>lpS!vue|;JJ%cvvL!--yPb$b5A z1&{n4F{jq8DlRCncn|ksedS=FY>=N|{$I#Ft=(a97rJIAzsFqd+M7z8=ripbpFDXI zj|I@<3iIy0daa?Ty>^Rf>Z7rG`}-~H371?L+sv0$Z5VxwxDuQ|S9!bM$c9U6pKLBi zY@%5djbdcrY2yWQH$xw%9F{8{vDg#)JT}Pmbv{q{uP&8ORdoC6?H1*?TcW20=;SQ6 zM*a|w1on!*%Ida?h;q5wdC^@8znmTx+lX+hfFd-1b}CuusRT{kpGG_TSwTTTSq&(f zfdc!GI+Y!BapJNVs~k4=-iw=#Abhy?30XK(n@TmKWyPKA@O zKE%5d;E>fE`&PSJv~0Qh&9g+Q#-~*mSkrpv-hj(=d!Cim&|t41EA`dU$+gYnWf%Z} z*ggei>j4;*B-D8&UES)513;2wMXEN5#>yQaag4va8Z86c zx@;nwC<%k}nR_q^S=bn?3dE`-mv!#hv#6|wQcd<|NL>T6lnUg+eVka(v2#J)ey_6n zNx^u-Z_n_#1s)vR8KNa27xh=WTnUV z6Sf|oxc%FMOtoz`r%xh`L?v49TX}gKmVJYjZxC!@QCS9?=3bcjFFCOrvuv-j60_k9 z?NI}6OXx*OPIUa+_Nu}w-eu45=v<%hA5b=9fH;IoGU-O9svsPzD+}DXzSnJzka)o* zvkWh;GCcddh95QWa>s)}ZX2X6S#RW)%Dpk^k}9WH%YhbeW8EqQ`imG!%Zym9n?keD zLniPJWKnLZS5~7rQQgor2~3jv2g1<8Rf?!D?BW}@Q+ErwSRTFc@|$6s-mec)A$`~1 zD;y$y%qv(?#M2Y>z&N4@-O;8TUH)`ZJ{-gdDy^lky_sLPYd76i1)rF{Z@<{xnxrIh zqL9PC{Px>#J83ZXkk?aACD5H0G6clp0SlLpL^z&;ekZ7~yFhZ=k#*y@wC>&nl8zNz z!49_@KxxLCgV+--DRgoLm`9*5gQ zYMHaoNqgrCr+SI%MTtu7&@j-%{%7wd$JV^XrD3##KN)+Sb29H<34U}K#%PV*8A|f8 zp}F_tPd~*+!H9+dgJrx~uFl%)zeZ>^L02=Lw~y=56bC8rvqM)`R9BCLr(DG44t?Tz zR~{sb?i`3Al)zSgIvXF&qjQCe_7oXs&^@DO`yzQIP9=8=6B2XyIE#G)jE5$!Mv{3X z$l)3znWTq5RNu)xQ63KA&CiAjQSg3Z7oqXS&5OsYH zI5L&wA%fkk1gOiMls02CoBz)Ltwc&xx!g}-#MFQ}=%&3LJIv=Bs@aQIN^Y{8y^xqL z?WCs9WaDrC^(!gz^&V=`h)gn+}=AcP3Mzdlq*KPA$v`I~Lp6utDj( z?EpT@`)U3A_X7gV?-L6~payEuqD61bF3QTM={y)p`;6mAZ6Eq&g3z19h=qjy2fRJy zDzl>#TUq5^F(w{eEbfuKg1rW}8l>g)W|WqW!>c*;?&nbW+!v>T>O2G53ovdS+No&G zI)^rR~`53|Yb5Zg!a!RxP~p>eksL_&G9k0H!4+g58RaucZQ zfn}pC=|^7VZ6o(Imj>S&Dko8fQ~JFqo-;romR`Y_x9@vF>8AOr2`-hc>Hy<#RZU85 z!Q}e~80(9G7L~}^=meO73#j;5*_kAnnYQgy&^d@I1llQUUAyHXDjM{n?b}ny-FaG( z6;=W(=cyzdBEg)Zm>lI`%~#c~J8+&+5=e?OGG_(9siM?%N~-U$WV?0#caKbH2|1D; zYUjU)Iei&+kX9dDr$o9!EBvNOg3vSktY7`3tS+JfEpE^aGZOxx1V-Mqxzw5?#~X zNT)Xj-Qx~MG9`5rqLbqZujs`;LU`hpq`W$)DZi3PG%)a8VbC2X>D0ew2@K%cr6*L; z(l)V)=L}miDZQK-44cBbQ!;x3o;^+s{z=IPT54H}pD|MlQ>&6r1LtVdM3VWK0v&p3 zjA@9!mXkgk_dXQs-koh)x7LRzaG;9#a*gGfG{~Uo;LJXGf3-~e@rB1eY;=W1ozG9$ zo9UE_Zf(Kk(1}zlaxFY{Wb}}ezq1$a!}N(aaB@30Pq?KN4t^-_E+lH*+DV7CoQAKs z^PMzcm}iNw>gfo_rho>?BhOWkXh_0v|{}S4osQj zLg6Zgi?#2vT(36DR2AQ!iGo?-Heg~Ql?Ix{`aN~fVz#u7lEdXy@C0`DJ$mG^qw|H1 z78Gv3DR(GEP-SUeUXTvvNqe9D9h!43HFYK&(H)HF49W0W@y&Ms&u(kM1Igep{7&v6eTbnni$2$g|U_MjN}ss5Q6-}aumf&tv@Q|;Fd zd(kG(#q8D9AAbCCKl^rD>V9&=ToS1EVFOzt?%FYDi?snqiU8Hb{ITmo>5CQi#E-%x z>&@C{q%i5eu4pPeGhjATGcMVM{9+gRgNDWfT)78%5QHw-`x6)jV;%mwIH_3~T$$P! 
[GIT binary patch data for preceding deleted image omitted]

diff --git a/cv/classification/repvit/pytorch/figures/repvit_m0_9_latency.png b/cv/classification/repvit/pytorch/figures/repvit_m0_9_latency.png
deleted file mode 100644
index 9b6000a32140fafb7ea9a42211902b3c43a0efeb..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 120998
[binary patch data for deleted repvit_m0_9_latency.png omitted]
z{!hr}>n!fS*{?xYVq_j;TsJy42012{a2~YAe0=Jnf#vmw&2DU3^>H`J_7Ch6fMEs< za3moG{(P01hUNyeQn;He@+rFi;)DW|@VwpTE*jUf$z||me`AUNB!GpQlENh|ZL0Sf$6ihXOT1!OpM|a-yzP?Z=l6_l zC`fqlGnx9Y@d5JVf3~+qVW#v_$T~2V174t7_>69GI%<{C z&pXkGif&}DxEhGTUzT`TeoUWl5laj)AR9+K6Y)d94LNke`~wDg`G!AfC^o~enmW4Y zQHSA)ddr-E%%g1bN7aJnwGDXo{-y_@3kPysm~YEU;9-t|SXe{Iix*$9@i1txZ?-t7 z)XXY920RO$e!}8e>~h%ICPAq$9+};4;Nntf8{gG-D^iy!r;G8xfn*L9AdYxKFk^+a z08m+YpYi3I2ba5F+1POri;3vhc9~jzOaeN)()%;hobHxn|4M4hNap_Zav#ABh1D^G z6FwuCpgAT&kxEK{3*kORB`A)drs>@Wxx40rGHp#vOm5!1i9rnVhP=)0LoE%1VnhV! zWgqjC@q8Da-Rb5KLmx*#8LPczs7#`J>K=SP=t&>9*5~ONV!NhS-Bs>c-aNe#b-{Bb z_3LP+c?Y3RJX5nRFDNo{TJUimcB<5`UspWWM%fn6bF>fK1i!@?t+*cFLT3|-4o|0vHEsoAo3F?D_?pI=NOx6b zBey8LeSMmo)1`vz#;=x}Uz_ITv_h_)*D<0@d^!gqGy}ZO5qJjQcEVL2AH;mFX_PLv z55b;BGs&&(r0+8``$FH9HF%EFZ>MC_n<-aE$K2ilyWjc-%DbvxXtd{IUeczp>^tYb zW5twjn#W61FAumoivR4ZgRM%-e=c4L?T-6I!MA#LAN|3D=!Q*KRjfrfswb{t7g4PD z;KMewjr<`fGn;l+$~*1CoT-Q5xYXt?TQG?9DdTVCh?tD=eXDcU+&tFRR?DvcS%`p7 z-yZGW(Bo0G;6Dg_VQZY>>DfPImVMR1NH)um6^cO4+7 zc*vA|Oh93yIoArUK&QRuX~{~<9Lz}&5?I_91?zHzxDS~oL|*tAIT+3SHv?=mRY7H7 zH9C!<=Qf_eJ{PtSc;^OR5*bb0-PJ4P{y4wbGE=STag*7_Xk+JXs*bJJE-5M6Uc!Y( zVa|kztl(g|X7b`a*^4uCYOS}g0C1J|71h47i^y;>gunfq<$;omsr#P$=l&)EpWt}~sgoKdk{PN|lWrUgV-I*ZYR``hZO?_HDF|vQ z*6j)CfoJo-F_(Ox^LyxW>Rs(G9nnB)mx7Ag4FjfQfiy=w4Ww=_&&~yyItMS#J3W{$ z6PmG(DZemuN53#&ey?AjL=?5H1v2r#JveV*fI}elTzYr@jc5pAqod2sKW_KQDh9fT zicV}M;>Uv9+->HKCcYAnrTHlUO9<-0oqS_s%Ji?Rl=KO^!lZSw8e)msf}VKuL1nxmVKrT-OGQ zB{%$=xd+U;R2=SRJxUIsIm+#r*6Z?@-s4C5$fU{FIWmInd8dEbYNhgau8ihNv z(u0>6qzY?IJEJxPC@Cf1?~1 zCR3lL$tpkm`cc3>6Pj818Pf*kkA)g>`FB!BJGqMO+KQ#3Ej-J&_^h^-mI$7nb{P|i7Q7JXZ9D>Z8-hBnp>Av%c&Zv?pDM+ zXNyp!;SUhD?Oj21-T+W*G{K{`!8Af~u0o2*Am%JSJZM`mSAB3augq{{wnFa2mnz)e zgl-B4>Zkri2TS%mi+ZkS~fv<%9 zA3WGZ9U63!ll_>v|2EkrJ_lyuucP)83hO-gJ`Y!%4)OV*v(u(8sUxE!ocY_YyUnV9 zxfCRZ*-i=*FDB!ngt^VG4sudKWzsJbkyq)OqFj1XD+uc}T_ z@UbxIRgGqAGMTopU3l8|X7TH{lBqB@IDDVUA>XLp1KAYv5+EW{N zubGQ}EkltyxS{hCRl;)o;-J0OpBzWda-QhjyP4=olY5m!{ymZN`S zjodHbM6qq4=d+rqCEJ3tF`{%2_PqG@*Kez&Zg5NZb~h>(?`Qr;;~%TehTQye{if~p zeWcTeVpkw6-uCZ@g}vZ@-|)~Z`u<(DYbxi8U6%fr3t)eiw=vbv_1SEQ(dzuWZI2xe zi};Nn6XVk`O5u>9X%+};VCziz&(9!{2H(@Ob^ZH!|NdiipyJ@B|NimF;9U#y|Ne>A z?sPWUfB!^r^2q=7eVZBmZT|aLq)j{j=MDe+Uu4@@5B~S>?%s{#V>@Wvd8ceE9->s% zXvq(Dx@AB}8~ZLclaU7N2eW$?ih^zEyOHe$Re!#dnQEh&~GEMGZm z4Bxt|B8r?;5prhmtHYa}>2SSGN%{R}ABl9Dcf3ellB{&llb`C}KWzP_l5$C8ig@#p{f zqP4@yB#}1VKW$fH@$bdP-jl=@JOA^0jhx(e#vJNezSuH_Dm?hWSNt5m)j{l;J6s~pE9?$bD9Uxih>0Kp{H2mqWZxV1} zv@}B0@#W&ppgLABIGstEi$oZZ$L5dp-?u%Ku$OeVVdqP!cb~IQWb$Vjze!C>T)>7{ zIXqC@^El~r+|zQM1xSI4UD3{}32;ptJW z-||M8CEFXIt!&J8W^Gz3z%jHC52gc6g!(t|ghNfU4zbcd_>u7FpsI0R;$HoO-`E4r z#ZDFVOtaJ1!K^j13Fa)XB;C%PfPE5E-`(a1RI=Ft^c>pBU!x!Qe|zHJGYvC4S!HEN zs2-ANNCiK#gcG!E*5|c{b`>5^ySd&LK_b1*c06(lx* z-;A=UZ6{!3K5p(fNw=j*dZ<>*Q`eYohxFtC<^=$AfhQD>3=oPQUf)F;TuT--zB@fk zd|P`vW)_&$#Sj*lKl>u|c1JXZ`un$}=%oUM-MwPa#a!$}8NXd5pyOg#12LIQY_}G8 ziS}M88|*I_HS=?FGWV^=`AIPdSUmngt6LnobbK+p^G(`H)`MoS@({!$`?m?e4c=PI zHMC5-vOlp0Spx(B-b;(M`UO!CkjWVROo-fvIrb}QPpPgRJX_Em^wvWAw&I$jU!K4C z%SlfU9!oFaW)mH4YGQ(Cg8WEFM@K+S-MQ95!NC@JhTbD@^vhXqd>J1tRK_qbmIJ>D zk|t0p_Q{MFm0|b*ttS-*gF2Y_pl<%~*WXRwWOu0dL3u53ae+o*IZnl;=zT#^0FZTAnl9Ta>~xC% zn>~}5*+TmNZOI}$6{zO(iKTNFu1aqiJ3g!tDdB>kC>@HJl&fdYo&_drkP1NNxBPa$ z*rfAa<0w1W9-fWOn1b!x24MJFLy$M#l_`n*_^NTDaLKj|MbfRuC5HGckfK($@p~#T zaQE*p!S9%Ftbs%W%x3cI+hKV0;T3!|kD0q#niHKJO=Wz9dIIA+;4dzR)iY)s#l`ey z<;R$s8X8iQlViv+PJgBGzHr~T$97KM6C~0rnp?ni1S=_+t-Q}s#BitptSQwqHil6U z07DQ5s-GUnfg(%n$@-hRkMdV3*tW7z@r^Nk@1Fuh@XXg2!Kn!B10ZPgt67luz%k~y z>G_ilU8$8NR>5a*IVo6yU%tZ@zor44-< 
z3*Ngw={UJ`Yx^fTn_e}Vz+7Bn1DNoQIEJ=sP!Xy{y5u>_Rka;Wth3k-Bc4`IdHF4c{x8n5BZW^}TLAA3 zVyRq2R61Rnlgj?lukD+v%KOCLYv>e9`S-QpAJdhmID7&z4|83b1l0)$QBZWeJFRD& zr=N}v%ji)}9{RZ_UbMW1x%xSlY6=PpSmi55{fgf^YNhzkEmthAq`HM9n_q6clAGR) zx@zO3GpkD20_Sh9=_j<66OQwhX&N~{t~-qQMK^PfJa8US8_v=(JPc)ea^i$D&P(-+ z?N{FF!+9D4MJ4i!*`=^tk&^1anim=x3gLXj)AF$L1${H~F}JRz-=^~yUt2iNm7mDbs}u97-TCk1GqsEIuh z5Q&(EhXN8gQ)p2KsC?dli029>973<9LBTx;-33Nr0hcXQ8_y|=+|Sms^sVZDn7z83 zWg5?TQ^c7$olBEMcS^zj-ZAm}jh5Fi2re9&HC?TBboyysaLNBcl$Y5R>(60V)$8u; zFw5`3kyQIVN=@X#1$nIk?IV%FO-Ds#U!58o+QX^9sTf`jb!gK8)?+mHIk%CF7_%ed zaouW*NpxjpJ+F%NZceM9YiIR~7|qjcTc4=?p*VaZ;D~R{#@b_%Eu=Lwt$2$79DCRj zmN_pU<`Fgx1Ld9&`#%A^P7jVhW|15(+rb%!TtsJ`nt?fIYWo$RAznv!nzvE2$QB&Tn##;G4uJ_q8)A&(_9OjX`7N6j zJ$yGt8)6P@+ZJ9YxUZ-RPn{{-_qhEIP6a25&>JiEBj+gNE-fVn)bWnEjn2uFNRx+d zz4XxcFng=d!Q}H}%M*4iLuXKbapz%Z{Fj1RR}Y8kH4hIDC_TMs0UJSDUz7n<8t||_ zPiR_N8m2eEPT1Mm3C4gQq*SogC;=y8ZEX$EJ)(|a!4Au9cssiC1IutB>o(ZqWmnQYQE5Dh69YUPT?87#wio0u zT%!3voW;Zpe$qgH=?X$6?!Kua6CIsPZ}p!amfpZ{uT8QLrzKQGASijPFP)dy!a?7S zyD&wM+r>jh5eZ{|A!WzwzKJ<8V0O^?Fmx`)v^HBp?$lq<)i5{fsG>d~AV(1p5m5|? z7WWIr3Ay0x0oVu4l4VbG&LJ|LoumSN{g3Ppa-|gz2f-Hxo&bn^0n5PL{M$3p{{0LL zTz+Rp{l<%;A|fJijYLO#LXL!|19QGSn=S$5`CQGLPYQch0x3wM1MC1{n2e(hQw9UP zYMfzRFM>F5ic5x?p|ACOSZ4u2Y9~xHy=j9}(hwmHQ4w9B^DYJk!kQcNEF_54SO^%T zf5EFZc4!m^?|eS|O~WYIKgd9Hwzf`Au*M)1YXt0+y$|*Ir*%im`4`6b*SvEEzh6*sxS!HEPDk>IHjnF>ujTOsY6=mdK>s0St z0_7W`Zy4yxYhgN#c@A>gtUV1opzFr~5VIQQ7?|59V4Y-h$x&KaeQNf7?^py~Cvpw5 z{{y?i!4xEpe@Q3RcsBKNkW2%&Pp}^8RNmf-B?UUin-pZU^QdWLXJ=!kt~4;w67#^n zgE6WPal)_hV<2>!b^`wdq$*^p6YjkNyoX9a=K_V4J*aNf9#L8tcTqB9-Qf#k3dC9O z*|Ue!jKf#4DBvww+#S*+3--v$-{)2$K_(Y-x~%AfEv%WYO3&A_z!l)c%ZrbyAcT|+ zY(%PGQd?9k2XJhH{)U(yt7q^nUItl$RaQq=R|v-#Hp`rSYQrcpqhTTQ1<0SN^U#(_ zON+k&0KPEhx%D|kFIVOCvl9`Gs9#`6!)S22%ZZ8^EK#V+GEYXT8+U@;^b?z;Rk%%C z*34`OzA_6SPGyCw3vLFu2C5uJvO!C_bdYA4h=Hels)>Rb;cJAtv`7_o7wEPqquN#Z zOXcqW#x?|V&)P77o#leA?joo>&^wVq>Hp91XeJS9-euAYR6e?0`JHa_7&UbRT^ z5S`=Ml<;M_1OEuXf(cXNY`btwIu#qY=K|m?|GV)A)Q!CguQ=CHW=-RLofV`tdwPc$ zX=Cn8#cWkdXW|81=0OFvDdnXd2lnh~2QkCRjd{kI!JTkk23zyuRAh8?bZBVxwk}Y+ zuu))D#IX__d4}+=z*DAX5UMqQTie7x;J)w--AQUxXy`IHC6-sNH0*ZAo4HXq8VXBZ z;>lpf+Dp)IWZQEIVaY_hV#sb5f-HoYnr==mkvzj7hh13s)<|uD`&*gE zI2_^e&~gTcSAf5N#eJ&tM*?cT`o!vO%aleSoiyu#j z0v`;E_D`{3{G2zw^%?VBr2!=ra}x&On!%R_Nz`$p)W7XpX*OwUBlwF-e?q|w&m>US%KmP+b=S5%iRHi&j}pGN_xcs-v!T3T zKY0&nak7fW;;G`!aK}5sGT86(al8$;4*o6JIG`F^UC|C!;%3g*m|RE-8ukk^IIkBX zadSM!u~ZbF9=Z*$08#Z50zGJaaN?k00>R#E{Xws-=phw~A^XCOIX}-dlhW`^^>!hj z!8{Vw0#Ysv^;8mm{y<7(a#*Fn4L_iqn5JUv1qphe8D1exVxj(92I(qsp8xgL3 zA3f&7e(fbARb0OJ)B;Bi%=9z)ogkFEQ(ea#u#XBiE~*4XB=n$f#9Rg~Utbb;HZB@s zyJPgmcw5%;RqO_gaN!iE1Nx(uZlyjQ>q`%u7`Q7K$K|Tm;AMb%x(`>Jh;46e;O4P1 zqyKcA^qdvtIRCaT!0rU|kn**8RV;~Q`l0}uA+X9I(t6`@kDgY{`~IuGK!)ag-({%k z-C@ZifDgcipI_Q4{G7J-Jm@N5q$+)}S~1=^9BZ&+UG>2Jq;(lXkJqYTF?zwcaD9QmQ&u%JIC9FJrf%30F`gZyM^&i4l0`q^@|{f09CxHUds>;xW2}an zG+V5BB~HId^Yr%$%N$I<)eV)apI;7uRM5B+rcubbJIcahz;^Q_BkNgrm_o5$^$*c{ zHt!5F!w9AaUbGMr;TAwj3Y1H-;CRuN<}&JLHo{#`MG*_bvPv%7a5fv>5jo^dJ*jll8bkKlY~7!Y4tC$Oz$U zPT*z{6#QSZQ`MN-f)%5zsv7s(0RD`owF@AqU?gY+=}9|N_<|!$^m%xYQGmivLIB>TG7)WmdxO=xInfrVO8sswnP`;p`AcD4D zg#$?w+09c>-5{zmj;5_(0X&q~_Bk>X{}?gEFdcH(VjBlPzvpt- z74H+@U=FtP5sQF8+mIckBp|9{Wj*JSIJSj$pB~v}6oaQMl#~?`y(M$HdI7_jSzuro z8LBEKa|L1V1Z5r!8N&SstwOsk#%OPjdzz&^efaPpLf_tAy(I7wjq5V3fg3`^_03yu ztwUg_MaL#26x=&&t=2i=2+~T1{@eQx2XL-H6(UY&_ycilaeSi%tT`Iu4Oq2w1os;m z14DMToW{g}9g$`7DzviU-p9pxio{+Wz1z1uF@Q@nBrGiFC^BFMfL_2jwp-?{?#PUW zUeY)C`wtIK@P^BMsHsteAsIfSY%>$_(TUxq@D4g zc-Fel7}!f(nZ0U3W8~&;7vjarM+x!ANc8;Duf_RoIYMAXZx?Q8)NeF|R8TshKA 
zb%$}=oCGKH^E^Poqw3Zg0`2j8Zcnt0qA8vRPZXsGWM{D)U=~R`&mOTB4^OzWHdlkY z>rE9@@@sI}V7^v>yAlUizD<`lMT~Vf_?m~q=|LBrot+(nS_=1GzF|7fe8i;J!x~Ok zk8O+BWvc8ataJW0&4FtKZ#N=B)%gBL^OwiXDxrb5euTb)s8AM@i59FoFBP;bU{ddB z&B3XK_d`LSEhgb*zb8{cS7nLk`N-qepk4Zp)ExiOUyFbiSo|bp0yd%IcLfvzP7-nA z6LVnX0hD`bb%!4O;Z#hd_<(;6$A7G9LPd?0{)R}= zzVhkC8HF^2*ug0oDAG|Fdl;>OWeeTyy{*akRtDj#bfXm9JUm8a?qZj(qgEtp7Tm4Y zNB$s)etneRlX;;4mJ>f!kXP-AToMLuczEHFfSdOKg)|or&l)kQ#*yDr5&Yqry&zo@ zfY=pTnYdycBTW<&fb%Y0nlP5HH*eq${&A$rgBy7kK+CQ?3dZ(AMzx@nl!;a_>Pb8< z5kbKqnT;Gsj92wNGN8i8zz z0_nQQ!POtIQDGMl&@ST;e;vz#Y8hS@nl?6Rjg~!#-Kh`Z##2?AB2*#8x26=Hk98N; z{m{$&^y`(9K84n~a|&JV94{-Nl@1rZp)NaxC4=VdaRM*aD=RO6R2z86;Uhu*D;&>a z-x1MYwr%m00~e$?-N~!xudHKj2d+|*yo6#Pk2?2npE$@>Bl(PyWYyS-e?KXNjGo z8%6P`A-P27Hh>;a^D)oT9*dvGeCI*ilY({JnFm{O&rKAm8#ll@p>8`&wIDEEobCy` zBaBDrgoTrJ>DcHmW}x6!7HSlmScQ@)JPIm&`Bdn}8^UK{hIXW?H* zPlji0^<)q?jt86{@##z-w`zF}FNw!nR-lg1VmM;P+Mk#)`sKonJFrYoDhAf$g31^0 z8T|vWu1xZ1;SP>59<9*pUUWh?Z=R_8iROO(dcf$@$B&_bvjo@X_?KHCfuD&6HO>>4 zCkRr4;PipupN$*V;u2a>RQz@F0c*=hxK*y^!usO9j)jl1Rk*$z2NFOPBJv$zqDthE zr+k!rBBZ<0?D0oHWV>0S4F-$%a*kQ>CGW2nQxxtXoa9wxBoT+iaF5%^9>bVBwcwXpj(Y(nunjG|;k1|^$IfS!`D_Jq{9_I6crdZ6r%qoYuto_LwRia`K}Y|U8k^&%~8AHG*kgDym%_}qnC#8DgxzENYolv1SkahZCnsaW@=xY*mT!b z6~b$tf$a9kj41+$Cp>d;>GK*D%n}|;xal?)|ej%xUmG>wu=+(CwlW|{cM72DUEo*j>*4Ye>9)y&g1sHj7`24P~?hf z_4%zfZ<-Z<9LXufW!T8!=n8lS5tXt^s$np~LQ%r*cT--gw$qpbb=9B62?umm5WwZ* zmdq~CPe}pzH1tBpUT2@o8ZMHwo9LQDzB5&3Jww@K0d0(5#42ynzh#EukNN6^DU)-< zTbat^f4U2|#MgbbBohn;ig<*iEO$ot%9R{NcT@rdIaAD>PM+|P#LMk+LUcQlkC+Cu zXHGM5<4ui5$x{Tpc%hWD72Pq$3xGe|e)&i5JS3+sT19?Ct^N0}nZpMTC?`Z^WPo=H zDMqEg7J^S#U$E9_KJXS=9A&$pe!@D>5Z857czAcLJk>mFpyHtJbrsQUHVV$TO>XVA zFP|26$)uN0ju0_On7GTw5kC@LtZYNjo{{gx#3;o{zp1DCLmp_{p4oVqo>j{8b*e9KM(vcABqA!JPj^`kEpHD1`T!0QLf^BU&hjeV; z#d7F|u}VrBi{)y$?v?%e&er2_+>zDG$rM@ zxy_VL9(8$1j=$?3Jw1$DM$YiB{-9v4HC3?XP2Hbosw*RNzs4C3s*p?0*J}~U?B7IZ zcDN;QYwtA^Ujkx57^vZ-0dR!b8_Ia_bL-1Kr52`DQe4VXi+tT|z2&yT+L`V8EH~nS z6u7x_7G(te*)hX&*vBy%f-o0$fI&#|dia}UWY+Q0g2nHNa?Zulmx~N35;c6s5aH1P z#DleX*&zL#4|$sqFDIw6pg<6%vbWaLf=0yJF=Pj|(#dEB)4r*V`Ie1w5q-OI^nM3ePV2Avvs!3Od~KkdKNa? z%XE7lrl}vsOalt{=1-r}l9S&Wrh~~{sXz5S-1o;yJ@==PVwBx^6+U9eY&&m7a-IG> zzbo)O=9%J=_?|KQ{fut`Vj$b63o(uL-pb33HX_AF8PLQyA+p9MZDff5aJ0BD+|eQQ z&?)o>oH#pc*MIN28=x9Ui{3v<>8r(jiGGK{#d%5A+c>OEbam-a_uhEhOmR)9D0uF} zkR2i{&}{B?tgNb1&K%l&0*uN6|5y$Tx?*ExP6)7$Yul0qKJyZ~ths~cHeYD{T7reF zF^ML=EAEl@Y@0&;s(!KanL~_;FNU6>A*T+J-`6%zzD@F`o>9>wN9G?dntiwTa#JQ> z8ubJ<9{vo)4Bn3(IXqWYe_zcc6rKR*s=6HBB9pSz9qF={LN?vyep2)+TkDz1=aJf) zpYE)6NYn&EYL5a8NW-vSmdN|>#YIKoclx|{-H;a~6Bih@*rl+`c3uIyoDyF&G*?n< zmOF;*G~cqC1v4k+*Xrbk1}e{c-3xZUA1hl}9GJNIcj3$F2_Bw{H`SeglH8-h_A-uF zCXW`;omBXXE-a8HAi)|bEbi{&nLGz6S>}C(Gkt#tl94>=z8>&5TK;62Q<5-A)^#A; z(83ItVi)VYFUre^#_DSSa|~2eRB#2)y**=M_Myk&)-3vyzbl=!jV`3c+ zKM-grE*Ke|6}{Ja=9V!H4NanUZ?Mn3eV4m9pB%bw70-Ljmlj~g0cBQe>IXl?)b>g! 
z{83K~@;hA=c)T)=F4;$ne|&x~MqzTec5ONxjT29^GJk-+i7*}9%cTHgD*47^U*F-_;o!!oJ>33p;wp}+M-Ec+PX}Ik z)-ZbXw&J6zn@A>{?;Ucvf0Vvo=V^zzqt*`%)>C~Q_R`2~o6y0_nQzmPNIxpJdRW5Q z|Ih9(S^AkVzHj1g`qkEr4IZZi8AOlEPwmSeQN??$t*wlY!vg}|cMBb4^wFi4!K{vp zI@P)xrv{X5st-@_@YrBALEUuS(2#-hG=U#L52W#h2v_eq%Uxxm1-uXgGdTY#wO{EH z$Mjao$xkuAd_j_&bocH99?<7)Fpq|Tr&tM)s{)+Nm9&nvn{rx?qp(93Z&uxBY za39M)&~6jvNYe}AI%`dUC^AbSY3Go+a8i$Ih*?OLazrgoCD}~EUE_mG!_>${Cfthc zo1JncU=Ngh3?=}~%xBg@=f+&f#y)@%{}%$02lX64sk*v)LN#%;qYVZ6*6Un$ zx=6ur#DDa-tFJ>YC8y~bJG!nRX_$1R62&e6pb1^J`Kc*IfUjK6Jm_`j*dwLpgYKtE@{#}5 zpIizpp=*bXTSTcyE98|--t)U-%OmTWAo;>L3bzHSVSMN{yQ&M^(K708$aXGapwoH+ zw1=vIzcdoF6=*uUlXZ?Qj{_A>?3w-&&ohG>^OZ{(~S)6}SCse+LyY%g9(Na^amu_?w+{S{9=XQxtyf!3hRf111~oheWo`9rkqtH?;KHDE+kWYQ@QD*|Ljy$4r0(jV zZtXb2&+o@KU?E_9XX!HppQww>8b>j!kXVMEqxa)%iXbm*P%5Vet@C3(DsHR}E-s&L z82+JWP@)@-Wn&LQ9+|HmlaJIMh4SyE5@-37$oAb-B_oj$HFH`>R{Qn?r<`_}4f7pa zq*vf_Uaq!qQ274v!3*T(5W=Tj`v3`K+W+KZ_DZk@T7s)35|9E=T+8MtwV-OMYn zJmJDF%Wq)-*4Ns5mW%V5L$2-{z;gg@oYM;_v4Jfnk+H~ZP>#7EvAjmIGdSu&TGmK> z-F8ce?Ddj@$PZX*Av)d5e=ogq%S>N!{_(Uf6a?`1V;1Q^L=v#gL@p-K>YQskL3L*K z>at0=Hwxz#WY_XOwcr!7kwS9@7{2ZZ2qBPz0uYebGR(JCOWqG?5{AX1*SkMdX#p&D zZ~&AO%i;6+>gQXMNa1T5*GZurQztt)?OUVhu?N~rV9pknmi5~|5(0JL7>2%iT{f6X ztTe{8zViNZ&iY<|A;NQiR=;(VwBtb5+2R}HP; zQ4xI?${>Nq(9_2XLK4S^1Zp(59lkth!{EL^fciUk_Cx3bSy+s+3QC9Yrk&&5El8_`Nd+(X-Ei)=4MRp=1DN;sO5@{fH-Cys|=l#9Dx9bnM zx}86qx6YH-@j8y<`FuVe_w^`a5>|Yj0Q<;iuoFoOn0T^|DQuHh2jsDF^!N5|frtgi zrSwJyL)cJ!x0&5i*VG(}*(HjJ2sgeECQkE+ipB~zw4Q?lhA?l;hytYNO~Al$kjns# zMEnoPhYxj*QEK_0v4_+Hd+C4DMGw^+Pp8_Fld>Iz2lZ@uUAH zr(VIRYYNH-f&l!T?!&Lhp@%}I@oUpEm`mX3zhPBvo^-GKiDMTp9D@>E<+b)_G#v_D z-a+-mBI!DUrHp=-Km?%&&QduwEV2I7n9dPg3`mWc+SSKbX55h*3DS<9jt)ZzHgGhs zkIMd5I{$0eu3Hkm%tw(r2CC&thMBds0-OOT*dMu$_S)5=K6kuu0h|53q<6?>Aw;me zbivUB17SdbLeDR70ZGL!AfhSu?;q&xeOF_o6DQIs!p5fE+Mj-ZBK*2g9)OZgQ z1YjCw2mB5Y^RtIYQjbc}Wr6P7Lyrs;$gd}=n)Wgu-;MUxn|GWh^1xi7dYS86nj5pz ziKExnFBYzhH27|Ry0-jg1qN#y47BJcLEb!a`7{~07L4#-iioTLJs64!K8j)&2-mjR zEFpXbjnnVRj~~x#PWQ=&iA4*PX+s&0Z%!JbB|bP;Dl#V<4fqTNdUkFhZVOn3;-ed6E)EG;k&Lslkq zA=AMwyB#2Vnz*<{{&>}4A#v{Gx$D$H0#v(hb3}EO)pZ{O7kxYb^v$nRm=l6w1^+v* z7ImCMJJ8qT*S=bok9S1{l$R^~`|}<0#Es|JX8Us#b0~kqIRzH@@`G`-L(X>3gpwi? 
z)J~86z_TO3Sa|DsZXn@m1B++9W^}~|1C1L0V|N`NpEH1{>W?@F886U2*P5YJtYUZ2 z6zb#@aTVZmas7OMe}^C7A_RIbs;w;&GD0w3!Ty%|xb)qAFerQI`BK6%y-w+p2#}bp zHMi;8`ZoU>G*0vX=@K#aKhQ-T38~8^gu-LWU#VfpS@zAo8+BEuBWdU8bu~0#Af~tk zu@H-3HZOrKpu`u)6Dw`(i^ZFKLebfZrU!p+~s6gZpn{^I%7+zpd|O^7f$w zHvi4Jw4Njzb+Y*$Vxz~YV$s6PI*;DVBf-OSvE-UeVV`S*A?FU1Bm~fW-4Qx1K#nb5 z#F;Je2}Ml1n@lBo^3?gCdgEneIpJ}{P4-K}i?VK%U5MblS@p1cwQlLfP_td3TIz9< zwH2Q8y}NfQ9os8sc{aLK`_}fbJILApnc8i4TvaI0bNV~tAPHI9)BAA-UIU}d%|Cp` zdD3vW9w-txhxi>!0A>$D-MIF2n0 zi3i&~tFPR2+m24|5C&|;b10uk%b?*rc5EFJ7D>s;IrD;rlE~W=!j3WNXFa8iQN8XU zR6XOVN^P6Wd7$$|YSxR!wTXiU`m}Ss|A~>zKEf!IH;L(#@rj8_R=Z4(eH!`k?Qp3;Z|K?gc#w#GY4EyLN@LY_DHLPOz042#+)GTI?J#78`<)ThI zRDOlp0E{4^6w3ct~9EfHpGlHK9U zbY<5r<fPVmA4jc;l3z6Vh=%Ie4>*e_0NbJz{*-dwniEA5woPz{=QM}yuEVrL^0rZDCxj$gk^CSmX{ANzDTHv`F6@VhRE~IpMYEH>fuvwgH6=~nA^o# zMO}R#T*y3IMEs*-JV!n0=Tb4TVj{gc)gqg!0SGFHijj)~tPM;qCnu-+u+CTq)3f>S zxI>Uw<&6Q4bExX8sK{Nd`Bu?%dQ0;7@$V>KaYEya0a<1ktzz`<=lc;#XMTqfyrev( zrbdO1nkV+(qxu!)kOdNJtlFkEGxYVAfb3m?2P4V*cn%MUQ64P)Fgk0jT&%tL6=M(O z<>k4!lns)RND&-NoJ=22-?Pzy3xXtUTsT#kl zsrVAX6lxn#(QUvP;*anM%&!1ChQwK?2dZjn7=WZGEUbOj^8gL!v@t?lAh*dZ##{p^ zhf9XZii#}hjf|O@ihElPOZ9X$KZ<`u;zgl+HfFV(zZhkR;^E?IcWppQU06s6&cu>o z4Xqq)`80ov#%p3!w%_tG;y5?=nfs%z7cb&DICIR|Pm&ho9@FN0M;5F`lANi6L2Ox+ zb;vA5H;;P1!GepE^JC*+k&FJR_oaRIqV+nWq8!zd2Zu;QY}0KRC7s6gJMXy)Q&k51 zxjdw@$yF|FzePHEDtDs0C!ndG%Ov+E*R3(`;kIwtbDe!d+IE`1cv$3(AIjI9Nj~aO z=cLk5kn0-CTpV`l%+gT4rJvE?jiz-w0CDi)5@AB2hKS9q>GJgi_=A z*r%XAPW#BXxn0KMu5bIgl9F^?g@bd)RUe*qdhBrcd$FO=l8$C!QJ4OfVfz zI+Wy@N!0i3=)TZ|z^=jVAviIwl_{FjNjL9f=vLky#X*Fgq_q?v=G7%grX{1CbBi{G<-{3F@&X(>d-~kQ znL;ry=8D)srn;p z_uXly<|9c|uP1I?%p0p0?A@q0?_~}>5ZaV@wtJahD8#T!qg^AGpIuT!e<E()bw{2MV~f{bx-FdL29|0}OCp6-HH*!jOh-$;_4yz|a#qT$u)lTD`nLPXnS-5G zY`!{9bV~b046-FgoB;PX)^7f^yhKyVuq7@vkK47@(vMcS6P^Dvq8@J7UJPS(i zH5un+xCf{5nQm4G+or$M(NU`{G%wUQ{iB&%aFlYYYc}rGa-LV++GRs=2R=cyH4)9T z$u|zriL~#Ybt~=Utus5P^)6Dfe&qZi5mIHtPc&p@M@ORb-7{;2E`2o0ukKc(XUk<7 z?HG*Gcir!OJhS-u{e(WTBy+Clr^M2ng1RRc=$G?*7+Z=JrLpap%$~}Sh~E&Txqk1- z{V+O1<+RcbdlN<_;%erUy}_>~2cOqTMMic?o193!9DH&4SsY;%@7Jf54e>9w^d7Pu zXU#ir@_{!(L9(AWaqQIYN>&kjzm(YIf@|Z8@$>si=hnN4E*lUpn73x4PuOF0(Lq_|Ygp;dh_O**K+#pwjfAPUCoOgVJm>i+N>*^A zKy7HHyK8vIa3Hec$fBCG&#C(g;*nTcWS)-&<}V7FoXU=h=he~(bm8E0DVR8|&wgSj z`4U^XGMS==PycT>Tm*zNv7MCLpE}N+vb+6NfEIJ9er^RsC&h|w`N-@Zr zal{BXCkAcVeWhalY^q6>cGwA*^q+sq4s`$ewdoI)2{Ld(8>i3-HmV$A*`RFybJiBd z;Q#sm(%(@yZj^S$eoPK^(Y$;CD?R3UDpk7})(>F`Q8%8?aLE&`OxmiJinnF@%oyev z``@3m2(zqJ4KVuYR+>mLpnRMXnQwk`%K!a%+rKOe^w%Cx-~Zj$D22}q{S;#GKfjc+ zeMEMajZ$Yd$!*W$KVpOz^lj3AFT427X|0@KnzyNUhg|vl_OVzPQB${zy}V5Ms_uV3 zm#J9FPVo4}ccUzOm_N+l#gh}eIR3xa`=1XOk@dBu`|mfv-)L6`iLm_l`{MG_ElAGc zivHgpX`p_1@&8;l{>7E;|N9Hy{+|yg{GKxJ8pr?Z$0?1L&;OtMN%%d*Z~m`a`2WuZ zl-l;Gjr}P9P!jh|>92V|WA*1qB$m0gm&$K^Z(=!r=g`cr?TRK@+>A@S`d!+0&X%G7 zv20l|@lIc?S1bu4mAm@U&DFz%@mR~zUy|#4UQ6_EsOAh@@C#g>KKJ{&q&y=Y!a(o~ zL49?dyb~><9zE;Mkx7dM&!mM~^i3?q{}k(8@!Y&%(%mn{R<_(!W>itUzr3EgZrw({ zn6G_$K-{pTi!^- zREf3*?elH1mW^0k9)qEB>sH)|U9DwK-Eq80*9=VPHQj5?a%VS`Nb{IJqIk+Jjm$4C zU;4GZ=JaDeOwN$;Nc4x@Y{V^U!8%0NI>MsLRJryGrdy;(2WcmW{D@ig`rF-H>F*uf zAM8$Yfky49eS^i?ftF`w!VDBUg|F`5MNivvhfDdn%-&hwuxZz$*snMRMbkC*1>bIuejCy8walNP9U)ST1x&NJ4d)p;VY zz3lbPn?;l%N)?lBwnz2$;#rO;br;^ww_7raO=+<{n!#B3VV#@1|I%6VCzP@!qH|81 z=|8u3*2m_|%{k^Sp1nmeJC=K$&VM*d@UumT#5~#fFTa+BBVwh`ZU>82mJ7bp<~({~ zLjSw5NeY53EDngZVX1hon5AD6IAk0MO9c20BAm=_>GwJsJ;CZP3Iq%Dh z;S9toLYtMPdjZ6)AJ-PMud31QfXaccVl5b zf#!lKvpJ$H&XNk3)es z%jM50x83|Up|XtEozk0-te~Z-3Gp+29z9>b6mZb6c8-9b#v*t(y}90#)RFhp*4__H z*;iXkr%wBtHUng zd$iXcPY=QVv?qSEd}?uN$Jdxc8~t2|U^t!fOwPi5DTDw*a>^kjv^?DrMIF{Yvlbi@ 
z!UMMv4^PY?$#SGpkMUU-`|Ijs4vSrUQql~*VLm49q01o9ebB)GJ0*Zw4dWwxm!L`l zH8lP1>ZQT?Pg$7vhNy;a?+L;}yLDj5BdM#hdu}Zm$;uqa1Z^bWeR|_u5+z$$dp;OPdD=ZH=L>e zYbk`t6UHD93bZE@67YsR1P%wW%9qJz>ACIr2)(P78xGE%(?Exz*ZKgE856wRcmJYg zW*$eQ``*3z(zDyUVT=71@wOVfj$)i}#HrzWIIYkHb0qCuOrj2hO#v6%YeT0Nci!T7 z3iM&1Rx3s$N7OUJR{&D~5ILZPSt&Uuu1rHMj<(DF(YfPGk8#ToyMnI(s=W?hHBig3 z+FLMr)?(}j0{z)e`Uu0+q^Nk~?W0*>1{oDU3n!>#l4E}#N=h1E2ZhJNf)~2rL(2 zDqJ3|7#nce<)1;wkAz$tWFwuKk!#VwJXgwy#7gQp_Y7HYR{&)n7m~UP`Fj=rZ4SWe zG%VLX$~Auuk$5r~MRk+1c@YaEGTz>+bqd%i05oa%s9?K0|IkQuA~rsG=oL^^McX!F zOpr+t4-^>2{3O^0W-;@L|4kK(&+a+LAwKn@L=J86zp4 z!i@uLB?Qweuj}IEd-NW&ChqB$6uVegkD;sV?0f|v15eBO-1*Q1cRIVbkbDv%KQl93 zAB>CbqEY%70$=T>{{kIc5|D%OX{Vnr-_}2VO9UlHIfn+FA-!!>M;%^p*u2O&a+}Ov zkum?t`L}sk#@mY6cMgozw@>YO-@1kbtH&yqWv!vY1)61Ibn@lQ+Wm8nz9Qg1KUrBu z<{pCs0y7cT2=*8Z1n}&*F8fFCtIdSE9G2r>!6$YB&@UbeGG9}J$Xpt5NEn%n&a7el z2h}YM5QJhj@1K4ED~xtcaM{@`e}7~qT>s&$L9#{?Z^w%l-yYw#O|K~VEW*-Tw6W=r zX9kex{#8GgQ78Sgn(a90yVFK-6pRYp z0l7^^x z^+=Ri=QvgXZ31Vdmnl*9__0~lb_^pO6oVeXzLoI))x@QZ1x>9U6%v9c0w%gcAH-w!8k^k1kj{y{>8#%% zegxJIS9s!Q6^;=HkC)l<&4OB-A|jU{UrldDm;rUz9z}0^68FY*>ZaRJ+kxV?N&4Ft z^#m$wWL#fa9O98Mta8?n)}<(0h{e2A#f>lH!z1)nAfUihk8RZJXoZISC-)cfiRwje ztcF)*;!&p54IJ9od?622y!tshpWnf$qMUpcGhifok5>TQ+=S(^jGMc@+0Q6q_6Jgw zfl7lvthYUQ@32AhsA~s%Rrf!y=1FK{*yHBGujW9WS8A3X^VF<=$FQAWqGT+2lI!tH zCRFKCvl^RRrs`n7x@SYV-NNbeu9et zl!qZK;%?+!VTFmZdpCH~|7C#yz!;sHzh-c={n~>e-<^b;|7=ihNq{}q;PVVfL|A3nBgB# ziPZw>(rn5Ph&F-~^m=EALT#&3GAG0}0}vgJC2==9TeL97z}pI2gF3Yendk!J7Fk@^ zjK>M2MYbZ8RaKI_(MZB_j*97K56C0Xya{dgz4uRlI9X{Pasj~z*oW2@aae+4<+m<5 zYhZRbeAKll#Pn>1$#cUfYcD0r9M@va74o8Yv__x4PdHrIhGQg7fcsC7VduK8R%P_uL{Oo45Lk(bw`9E zr#%}HU{t6IAbaKyFb^dSf0e^OU=s+6^VsxcD=h*;{jFQx@a zd{F!@&YEi1(it+IMKHNEM~A-eY*?b>p66Zl#D2`G2H5U2Gqxeiq?hpIII}N}M(N1O z(d8f0Z|Gb&a0g$Go)?|izM89{)jHLRLlaXK4CQTozr%z?pbyVIf=?EVpAEuBgtz`P zsGQN-HxCRAK~eeEzWR_$=(;;Hld(-JP2Hw-b|@f-sPS!F0fOV&QM>#?@HQ-o?cX8( z=+3~_3R%SvA$I!EHk)1EeK|MRJNf?Od1EoMf9f5NP>GbUS`qqD4y;XBKTpfWoM)U~ z{I!ie0Oa;tSUzEp!v>DmbEwWJxFj;4U}J>!4>A*Dz5qxxKtZdDbb`2eiy`L5bVXQd z1S5&Z$7i5DKa$vfqHm&Yo>YKXhA_DgoJ0L1!C9*8)T1`p2?@Ko$PUP~&Be#ZgQ3FB zd$;JtF5^jTBF3@3V2*5xey1BS--S^!6D6EULo@?H;*F0=>znhEz0&vC8FU~P08|>+ zKedGrE^}_~`Bc~LZ;kj}e(*Yg!}`PbHyC@_)3xm#SOrP1KpBY(Au3|$n5p5&)zW|G z`s(Wc48j{(@vBk7~CQ0?%r+Rsn_1RDU}VbYUd@C~R}6(GC8afWbW{1Fm@iHMZv z=cpWh7(3@A8y}0K&E5gtR26iAvQLpi5=>yc3faoDK2giIC_H*pUBO?UvA2R{T zo-6wCLx<$qNQk~er2dBU=&zH>XM(zx*LGu{I35UY5(nG zNNrit)M{LKxGyRrjI>u&T1UMN6nh*J(1Q7A^(9ry5aPh0#CqH*+PXH_q3BdNlqTHK zg*WSBvUKOa4=CP(=`W8eG9RbF)&CJLFF>h~;H4ZAhG5#esi~nu8%0-7}?2a~n z9C?Tb+n+7ZvP9YP@I4_88#1V3$!9pZ8V>h*|AU(6s8tiOqgwjcU)whorj#*-6DjjD zS9ai4v1#<=$+w+tnlXkub`d+RDeIzVBe;2i;M+Dco{b(Hcl!y&A!aATt^DEzE*{qX z4Y+7LtimgcU9g1`#%mHLY~S4Ok?(T0MaV-bRkZZ$hj1tm97bzvYssgLtgQ)y(l5V% zs6!g4Kq0CJ_n3u+1q`fZ$6kT>f<22SHZ3)^ZSL3h&i|07`617ms_=zxfWY677}U5f zmksNMC%n!RN6~X>DKFo~=^5CdE|H@-%9r32e!r*OmB=4Yo z8bx3*lp-?Jeet^?ae%ef4BiX2Ri9=Iwn3>Xpf0+Pls&E_Hy0*znlop)vOPn$QQTh< zxyVNGNZla0`D;@J?(s%SVPPS3&Ce!Ew6FW37Lx%pgZ2M-8G06sk`OhqN_#^+vpG9I zUkzv6mQmLDCy#GVb$CsLBrW^)t?z(jhRew4F9!qb=FVSJ>+6oFDI;qgBoq|>!lZg> zw4A<}V-DimJ-yU;gJ72L6Ub z{rn}NNFq}!gJj6gFyVSwg7S6-VLB^9dzcu=v>7;pyJirILdl~TfE5o?r(5>)2sImF z$~2BsSUJn`Ph+EF5ViD$X~lqSS4XMx6Ks4h6VSBNTum;5`4m+K61XIt$Lr=$l^g6c z6W5HBVU2}O^Kj2mvu7B3xc(Chvs(SfI#z?qOezRW?GFUs5$c${w$9(12za4JasyKk z=lzWJ?H&I2n^Auc;N%asM8HT$JN&JcA z^0vZNAgWn4!>4yGA};RmaVKrO1h25f2Wt?BkAYp~7qqYDqkT5DG~s4Ud*Zd2=x!_- z+WysxkmQYyjl@EZ2hqNg+}sY(o3@!9y#4w4;~q@H`*Ig=&RfCo*x1;(IIR8fn3zhO zddRkb6=#rJ&=%Y>*k}W{&8ENU*zk1AdMu=juMvh-JX`0?T%uEoW|gUf!#iHLd>jM3 
zHXxiMFZjzmN1mFJV#_GMAvD^&_@61PiBoz;#n8S}RQ7DOHk+7ms6ae~ZjB??SxP}k z{Q6{Rg2oT$a-y3(c$VPY^cL@(F!A;Mjowhwb75gUraEOf9S4fMK(*egb5U52>% zxdVp$7w=vBpdm{+3bi`U5G7K&VzAyvhdMNtuw~%C$zX8Du?XjTt^H1S6>kOL-re^w zBfs-QtMGbv6Ve11pii~EPpxo!F@m`Mff$x4!PB|tYtdB=p}C112qhuK3f0jXAK zjBbTZg=|FEOyDT>I3*?kA$A)^8$l~k6}wWFhL;G%>M1_+>3Yt<7@BRFb8x+r7yL0t zG`oRA7{dTLDDttyU@VE ze16;{$c0WAifiT?IM$tV?)-tg5tKUakKVgKGP25sg{#ngy^S9wZ_KG_kj5@oMM$#9>Y6Wh6qOG5?w!-s@V@W0-z$B`9*zm; zSmWV(xZoB$-HjA-5P^*thLF%u*G}C>N~AA)Eh^stLYNU`+B2B+X_~+~TsXBwNxq9~ z>k>wQVd6rK5=j;Dolma1`TUy59J2cYwjE-JBQTn(&s|*LyLX2Ax3aP_{6ewU^hZ^P z>+$P`d*}<^?UG=QSfa5OWYS}aHXb%U@wuBLGB);AcXw`H*r;oYX~65przCw6+w9DQ zZ92$pat6YFJW%SLDqGqlZ6_P@-oc(3BvQoG5Q8oS59YjY%ajxo z(Ao;OmD#Q$V@rNwGKeXfQG7dh!pzIx3n1eNH*@9ZA=HIa_ipCAv{Xe zQ&VUuPh8ccE_tYFn>_v~vKp(6EXP;;&taRhoBEhB<`mAx z-TGsX41xtCLHRPK3Kb!FGQm1f^2pRFglEs;p0T%_`P)I|WV%u>PN~5=XbT`7t#hZ~ z@K>+c7^sl5id>ldLZ*(l*>ad2 zz@DZNxu;r7lrWKD?z;1Y5*%T`trk-kYln$Uz5p0 zfPuVS>r!KAl*WEQ5@R7587HCRHeSawa{4SWdALEHmcp)to3Ox*zDS77$4ldaSPot* zc+CA_^CgrEQ1qjzL}Pdf12r&T0==6d-QIKT(=SmMW5eP(cyBhE(<*q?<4&nKEJlQo zqz$b?eKl28f@aJ34#|aQFU9hGT&28N55G<@HruT@8rjS@wA1VI$2dID*!pS6@}H~6 zDBF$qUHYQAdMX@n!#8ZJJ*p)7LAgMJ+Gu2j6;#FT?`O~|p=-rRM!)JV-x z+u$?c&FX?R?h@YC&gCDNggaf7X1#?d_Q-wlkC37-h5_-m+WMxgWTNtiJ4!DR@GiW$ z@B_*dZJoIudH1Z=>x25y_Ye1~CQ@kg8%9VZ*%UAjK2Me}Wa*B}l%3F|V#Cwk6HgkA z)GXi=U3R6*;rkE!E#yIc38xlCf!CGXTL+lx_9@YG&s}~khxxQ=s1#Zu#Z@(UVqNVn zoC34-;#l?Jl-*UIl^h0<6$0}tN~s|e{nzNrpCHr+eJ=)on9vpF=6(chfznsr_gk)$ zactKnM0Sx0fv$fJabz4pApyL}D`f$RO04hNxu_tboR*Cq#A1& zlY~ey!}a5v6t(tQO{n# zZvqYFndYRZpxpNQP?B*)q<76^9KWFJ;v%p#g30ZVSx|Y$Yl(DrbtPRnbuy0t-l07+ z`*74W@JjO}dIhKp6K@=x#`+mMU^i5nivd-bt=3`k#CjFVR+mNwRvrklf8v<|-j05y z-%>in6iUkuO|jA0v-^|}pJ0uJ+!*~(@2ivd0{t{Lb<}11?T23mJ{hRWn$V2Nqi(CG z%o~z>8xeb7b8WYtGsqxqT5w@DR^CJiTuYKI}DL zxwoV5=fCF#XN)aVp{zo3fTd-Y*hWp}&!j}NDcN5ARld>Plx_M#@EXR#5@{Vv2aS_1uGC1t+JbW=K4#mqfFnx^#GMucMOSo^s7| z4}=V*Xy#%vId3&G(sVYF$Q_<2%j$YjMb$py3`s9cX60C! 
z4vt%3LLUeIqoYhw^CU(G1whz|=7oWUMX03eS=;T-JX`(~pa+1z!^{R(wauje_kK^# zqKSPR{i^ZftG~99?%)r9$ON4q-Q9i$1_mk5S@mO2LTt|$?5@*NEy?#$a_zC}a``lN zsyT%2+^|MX@9pe;e)n>QQwZyQFY|b^X2c0kAQC9<)OHxhLosixX?QXC!I9tv_@=WC zt+OHO2oM@_VQL;##FLv97ZoAt;-)8Q1&b<`m(|guM_a!(W3Jxg$G#x1aFEI(R|srD zX-zGDPb8pJJSh?2-vKd}Dvg=rE)%@vo4zd{fGTZa82pGtj1SMX6Rbdl%1W{mPpTN5 z*&|{9-m2F^NsohXpzlH5f+K%HQljWwfYdir&s!T6AA$*bHm(|(+0kEPe)}@@3(JYd zCYh4$!&u=R<^>T4!6>A9I5(9@O-w$OBTB?-45-Z}{2b4#6VmKVaNddPV|d~{-FZO% zK^A@C8D%WALofC)Pe0Z~2}PKL2BE%yns}zrhoese{`_7ea5G2gCd&GqD5$B`QuZ~@ zt?xRM-5g@Btd|~JlDC^JNNDJo^%qTD(}aI&b$5g2x%T_ims2o_QyOWN*3!iMoyQ-c zp>p*n&jh)Xf=+oZe&1A5Rt`C!pSW@_E$y1guiSDa(!!I34IiHs9TjC{LjNY(X@se` zqp7^yF5ed@0Z_z?Fh0yh=SIh2HHlHP-%+_eIc_$)`>IbQ`m?SG&L_rO|d-ZHNK4H0dqX z4$0Fs7X_^vBR?jN7>U1e6&zacb3MWc58*Aus}b^BB41K)h+VwxoG)Hp>rKZX-tw)D zf{?q1C59PRFKTMUTq}jG*NLlLzqmCA?D$#ky&hN7j?53j#5#`3tb7tX%})l=!d{cz z0d%2n?n+p^aw>eB^-3?J$dgCP#Kgso{;VyYH;o~SVIQ0|FsMMF4j=gkHuohkFmM+`)Gpn|PY);EM2eOb6(g|RcRMHdQULGGH4p7wv zoP9~RL%oyEkWZ`Um@l2guEFSaiDXB6Kg*GWJvZy+0?spiendj`QD^C9rF)bM{lLJ* zI5B@_1M$`UuaDLlOXzF^H=$EqL>Y(r3u^ExwM~-(W9I%Rgwg5%Ai4(z(jG9bNtC`| z60rdd0`3s$R7Su|w|=ffH5hxXyJoY<)*%<~bo0`8ysG#?jZ zL56*21>Yp{jYVgZEy`HMu74GmE%?Jb$?sog`Y;UIp@x{u`;I~XeuC=8R>R8$iTqva zR31H(QPg;TIou~XIj1AsIb}qedJ|ZJ9bcZ*~2UVW437os3oSDUFfv6jml3xY=PG`s+qQVaU7wKqHcd6nk zP?&+pgsm3D?KK1yfx&O`S&6Lbn^}V|Xc5M*@^X~awWttFQ}yCV@9AJT3hW_p3*O7f z7zdJuXh57FtT6-$n856p{jF2~FmLuIU|#mPF84>6f6BGTlJqr-5dx?+=OPv%7#-Y; zjoT0TBS==_X52Hr`qoje%F|LkZIjRL z)=zu)B=W$3-w%!jNphT8evZ%CcoBkZv>*xq zx-sbs9zggYC!buuYHSj}UA)ZHhJAV4plBr{eEIi)XwLk+a1qBX=B=aAb(W+x9IeK2 zw2>|7Y(cTX_XB-A?+Fdb2Z?knGg+v4EFL&f@z4rJPOteoJWw++x!WJgv8R|-Y^E9C zg+Lr`3@Ve3b)={at^T?^@%X@n@0WF?+*A3$soAUYr z;CdYgt^Ot$lKU}{5fP0jq8FK#hWO>f+-(S;*tNvAztC2<`<)Bn@>9)jWHYz$}15;Q@aI+&@bfM(WhzZh;3*-SHf_ zPiTs{={v}a5n;Go!$fDFvkF}a&MW4$tBsb(c>!{qxcYAZnn$)Jfc&{4Ko=j6u|g^x#5b%g0-|on`F{I|yXnye016>-eyAVP zJ=Dfhv$FnRfvsR^!i9VENBlmc209Fa1x@^GHY53MAh3kB29M_Y zuv05qpo;E@_r`HwWNB+P-e}vdNS@Aapy32`i~pnaA6SN?ps~%&e1ZPcH5?c^dlNfT z4rm&P{JXQ7^&B>Z5?N$1WblBPwfiilTjquZGlxh}fCmbF440Y^IQHN(877acD zJe$Dud@)eU_|=Lz9&vWgzcyfY?C4u<{o_)> ziOt+`UB|Js5zUzC%4Ecsn4G*p$o=K!?&i6u%@!=Mx^F;AUY;=THO!{q&|Y5CD$9)c z_;2XvunpTWFvj%s_uoXmBEkxgI&mim1xO$Wq@IWzUx;F4VP!*hJS>6Q*`>dW{)&rJ zexC~c8)T#N@VsRDQ|BS7P17W$-tEIZL~r3gH1dMb5A!ttq3=2}zqcnSpml1_lW_8& zJ}VB|6T@e&3T^z^YHN|ZGfnm0*TB2ZRbxWw(Q^-lxw!T?$&VP4CSVFYz9k~@9+Z

ET(>Lwx?FFV^Y-bkd zm2wNy{z=yH%Ne+~wc(O8Fz<1~xWKTf!wy^Z0eTVtA!kpFdie)$sT_Np`~8THyWrIb zLIpt5bE<|lfk+5cMA^Bq8+=2z25?1;frBtH!R@Atp^O^;Z=@!A0vdFsZ=1q20-W5D z%Yq~x;F%Fx9n}HJUBvZ$S@THdZR1u(lv&|n<;;zZ4YXBcR3{M{fb|zXN+Nal5|BM? zSdZZ~uaSEW<&{}2yhX-7_lVPY&{S*cPiQy=ixon+h;iO~Hzkez_C-!&_rlMgK(vRt zlk^MCG}kH4!w@X-;Pvr?FZD==z9ON0y5Q{W3_cU@#PRFv4i%$V+q7 zO9WrGS!+Pc@M59rV`m?%7z8mhAbn%3@~IDRro*H6^s3JPe$O~I-;cxObmy6MBB4abiqUr{Xh4)wq%ilyj!CRwmw%fPY{;v_m_n5aDHis! z_Q8@F)VENL0k#f8Kgks*H-z+dt1|I}{2dNc2#XM?uaNHwy&&LvRze1A!6l_|6bRbbBXPRGjR$tk7rGogZDgb( zZdo|mn73|7;sY}bN9B25w6`>xHoK1OAnR(z7P3SB>$4j5yMKkeqL4O2 zP{dpNI6^#&E`RjX0_;M-4r5|tQ`w51*@luubxiv-uRVrmJRvcWkg`Zng!0S9ElatM zEWE$miug6u;s{sfWR*}gKmiukOaM8(3(VFrlY|iSD~IS%f&muRSg`(tu>)WT5vKo< z!C`+CM7HzFvAaYBhGK5YEieiG=HXstvO(Ez4?2_8W75oN8G_?+{j{EQ_ak=qhn1vDda3r(Dqz*X((@hp9d{jfb9Rx8 z5$1zKv14FxfR2GoIJ5YZF_4txPj#rapg8Z~KmX-Bp?E~+gfwxdi}JoY5mHT6h`k|* z%FgZJkTXOlG!4~zlODlf&&q4Q2gpOzuPN16df0bflRMs?iC&( z1Z28?c+$u@dWNgd+IkT}C3psA3)#CpVJ9ywXZ*pYL>fE{kq^#D9ES_SHRQtdU1PBSWfa%-05TBU-@P;{X?=Su?&`W(d@oaH8B5qD zFtqg+PFxO|El)4e;BUTih(u#L*exNBWHVvCqXjo^=clBmf?A>3l+-Ob*brbPI)+sm zbmz{j+@@PTn$3|k{zyY~sn`o=n!tn>&>ZFoGYFux3a81(qjXNJS_W8Pf9>s}}23 z6g&i)alF`|D8{s*{2DpqkirCx)?#z=iqXvYcn{={kK7~-#MxYbtGZV&o4rFvdJQnr|`#!yX~!PDe+A=%z+7|AP(%$obXImdo&b zFIh0c*yE(DOFf#*(Hy+)EcP@$X+gE4;orz?%$@ z!9mt`=o$&hw~l@_V!bOqzgDNFu<@O#iOH^l=Rl(1NHDZ8&)m5)i#s|y4QCAA#0)82 zX#^iwQdsyg!5UBwM2cuK(UJp*bQ!B+^O@o4ztER$8$dte5MIpU*aDD7^Z6s!OXWSP~Bb$TDS;`8uT!#6PFJ^EErh@ zBJW&x8FDn9R(W(f{kc1ir}~ASCLDdy2QL)0QL$ zFUl+*YZn{k8jwe|!pFyV7!hNIdtT@DJFy_DL-Ft0eoyjmL@u9ZZAg_fuC2WL0kjXY z@onfsF;L+R#9YMyV=$2EqCoh}0o#H&jFRdH+gs&DZ-5US~x3G9qlPt)FR4|Ve5T0Rp^CEY=#3E&Jk*Ngu1 zBBGtZUXJzFr6&{mTPsD8YsRlyM zfu^RqXI`zCuzEv$pf9p7p}T~blcO>fgDD0zh=*o>XF*R-Nzq8eJG%pZ@ohxJ9gmlJ z7(5;Ce0N6{7#-Nswd_#0skoh>J(aAE6zEHcTpW4Hc1gO^_FN4f6 zIZnrR8g>2efm?r*Xl;+&hb{yLANZ6w#pU5?$U#6qvN~q)<-d$oPhopPug!m#fP|Tm zSFr5;M6W7?eFtILAqGt$Dmn(qglQUZK=2lY`eJv^@VOk4Zv~M#QjYaw1f|Mr*KeV? 
zBj|}2lBzDJ4qd)T9L3G0ZL_`e7qF4RmB1>?dfMZ=Zf(Dx%D(?i>bYI5cGihC1l89v75T`s z@7FQtU3?Q>Nw3<70WoVtw2@j44h}Exon+*|cM1hY3y|^^^xF_*2eGv|KN$boIG;^` z+Dr2Jv0*lXAiyah2dMxg*a+|X2^4PDuikBp+5meRN4p=W7$Bz%^llpD94J{4^DRs{ z$F~MOG7_hl_I><~J`4gpn3!ttTv{9!0W zqN9Z?VP&Fs5LRpSGy4RbMHcaKrH^y&({N(Q3%2!0gS^CIQ*=wIWrG;TavwPEKZvh> zBUelwDJHc4t5(TYZsZuz!k!bB$Qm;n(Coi+S<{=jp<6y zOwlA1Fu(7%h?ad02<~|C`D%O1BPA=tu6E^Zyl#>*XMB$t)00M(@7vFA*&1-O!>lyK zk%}Q|4lfOSGPb8r?|v%E=<5Viv7tRh!IAKqU;;dN_wrL2=vg22qKb85tUxgke}X=BJy>%YQBoUqHn;J2=A) zD6L`48c^Q^^qedb&bQT?EUOT%1SlYr*IWEo`0dB|ML13vI`?t@CMVv)sRvTXFgkx9 zX#t9Hb^^UJXKiisGqU4t7R#3I!3Q2&bLTA*1XU6pu>au?8cYxl=aIn+7`6N5OFAWZ z$j~l9kB-(!_vu6f!vw1HYox>?($a){BrHHrr4yW1X?@-R1W^o~okXMPPL7l>UgTv*g$GdRuxW)qST zZRI}YhX;9EU53yZZ_z?+VNp>G-g^L*&KyPI`FjY7c7O!5MI(4uRKs;I;Q9gLmVW+3 zQE;#@b$ch^#e}C#kp72}Gao$>MhQTYkXh&M{uQS+SoY`@D+3=sK&kjD2YIz=)ItKF zI>-#UABI7O_?GI%mM><1_$i`!z-~s6Zs6fy+#c2)&{$MtnCf)Km2A!mF`~pw}Fp9+RO4x^89GT!?>)ahbfwsivslU18(wC1yKFvFwESKp*Hv=kjXT*4ZkaB%7 zjaK@bOw3I=*Y&}$&FshoK^qNHo1VU?qy*t_YdB-OEeuT>Fu4oRIjnSVVtU=ilBG1u1v*{nZ4{)i~yN*UOL;)K=E-%k^ zp*ZF|o>#!yG#qRy9FWa(r3#xT;+;QMawvt4kn%s11??s96VRwytE-?w(xU1q91ivQwz0xq9*-w5pzcg zLw)G=iuikx+J)UQ$c6x7+ILas43IQu8S0|!6+>d0)tHN{fv!SH`Zf-rs~F`VcQlf^ z&9$Lv(miq&anK1F8O)4~7aokeV@p6;yE55qs{Sr&*ENhP#D*Ix1n4K>v%UDzbf4=c zn(o&t3rz~Q{{BJ5Y@cxd&)~NSqQg;~CNa;W?@YQQht~t|_pF_Cq=ACDI5}s0P&G?P zu;Xg5|Jz*P%TeS1o7s6eDA77QAC;=OxORbrz;9|4tQwD#dm znF+t;$_<)X)*iUJz`C7dGGXjCI};hhJNmkH`tY5l--A3vrpC|j=efsPL^-x5hjInq zmfW)j$+k0eY~TcuW8Vstlzql(q_dx5vH#1x{8~$7a zI`#CbWda?Ea}&A1OGi&lEjByfLRmw2Qw**QE7aeDX+|vQBlGx*kJT8+8cf)5DrfUh zZQVzb^m*uH`iYbjHb*!9Nw#0QW!lcO*xzMQ=@7~h#J{Cu(2&>?kNtasA*Xd}e@RZ^ zzeuu89D>}hJbGNp^%0FA(NCK2UKL0<}y>~d)@BcS0GP3txA)Cm^%$^~eWbYLrTsGM|RAgmjlNHL&PT4a;$R;G4 z+|TR%d4Img?|0mP-1ootbsP@YQMg{O^L3u*^LaiWkj~q_4 zb*vxV=3cs{65$CLzbH?)-4AJN-uH=g$~_PD`$GqW)HJ?D@Q7>!<^)$MD5iC)hYu2_0oqJOgb zT-~J9H?*$}a0AH63O*wnf5?x34B@5ro$5NS>;3?7c{OEO>j(*Jb<=^*7GyfSY;2f1 zSTzj^7-gk1|H1e&IB|3EAj$)Ir)Zl-q8b!Yu$c|B7gD!HaMNqKJ%IY`cMl5<>My=t&E~A6cFmpR!ksRx_CblhNyCv8#OVe!l^?l`AsG zJM#G-Wp0L*mI7HM4bp!?R5u>u8(>=CGglZb1N{R4t_Ev3$nmkTUlhP%$9hDj;P%nD zG%3my^xPKCbQ2IffSk-H2tH~udK4)-yeteXz@~opTO9T$sO2SX$$yS4Ek&NXpdIC& zud=_8dIm6#o7sMUENhTm=Q}kQyUB{+$W3ym8$rv>N)zu6SfpZQW7DBHpK+=@1pOKm zD{1)5ku5Q+iE>Iyr|}H3KM)$*##y8}-lU3riMxyO%wK)lx&HTW_w<19a@Z*xVI9k2 zvD`@4|5L2kA*xQvq-PEF1nB;XF_Fv?bQ7@Lka92q(Pw<(SY<>1ZWj6;L$I*4!^7fm z4gi09V7^T$3V^Q@HBJ|ldRmd~ihhiLQAx121tHCHCU0t4>6$c`DF;r3i+ter&*ias z!O1bosD|F=Hj4 zG)lumejmC$!nXMg1T;}B_cp;J*2rz4%rQHoczq_n@~u`}aoWqosrS1g3DCTcl4>z? zy$E?6es~HMVt=5-KyR9Th`h8C2*QT)Enh%#`Tr0a@FSyWo5sLUY0^3LZbQ2N!M>U& z$h-pm1-3 zz1}ll1phOwrH!5*R5W5V^R9+HB{?}aK|O{P43~R%6qka-3`8}Pu+=KRoD)FV0jW%h z#QRyKduUkf(J^;(I|5I>L3(FAZUtrhu_nbcihl%uP+mUXoWL2V{pJj*3lxL9sjqEk zi$gI03U^zhtn2O!pi|P}AQbzVjKy4B5SCijyW_8{s#? z%hd*^wKXZTlO`O2`j`dB??#Rnr{yaV0+I?Zs9gye}H zWH_s6#fdo>=4M7vg^(;Bht*|RMyow(|sD>lqA}A~qt1?PWfC2*> zDGaz0IEO?%$EEh6Z({xX1D=%dNZgJt5jT?xO)^4V^Ht>Y&0X1EWN1Z}N9m*jn~f)v zerjOlZuuG-8oImxMzXeIh&2cim#N@}JZ#5q_Ti2Ps#aC>so6f>C*w)%H1<3Z{dAzOfwg9<92#AQBAloMa zN?c9CbPr74ffodT3iL{+l=KxG2;|D79O7#eU{o^PH8VBkjbr?_b|9Li;d3}JvF#t$ zlD9a%Eb)GZbgDxan%RSUG_ZRZgdBAtPl77!j|`m7g-_euZ|h(Pp(K_j;${g6SGeGD z^c(neAV=QbChl_vjUq5nrH zdyr1Tv5l(1Ht9&3W#pdw&nW1V!|94hg9H#CAAb^WtN-@n18OWhN&CjsJ6s#gbA4Jb zDTuz5h4NeE-2d|J$!(;aKQv_n^0LCrv8L7w;NL6J`@98i_X{d{fv%~e%1Dc}D{i6F zHr-d0Ix$%byWSs-;dfLXJOEq^R0L3zfE$8iyz^Y6(4A@ZKuqk`A9aNeK#YLp4Xw>! 
zAmHrNi!trqnDcy+Dg=qZ7Z%#u*Z^I~Pl8^33~Ks7I7AwsfMC#>gq`zO&ULFt8m=Sp zAC=Y%GgBUXISYE{t49)^_!U&;!%Eq<^M^^ZJ z;AKIRNXI&0l6P&Ul^5tF4JWzf(_~2cNYH~lE5PN->>Tx2#dkpb4kb1yAjZbTup1Bw z*#mD2ZkdZhGAAMN?fi?E%(x`IPc!l?znV>WF#AyG;a<1L&C}zy-FE-@T+h1XgIUEk z*9eQett}LO0Z@yNj!qg1dWqyoIievYV3W_^IVgTIiE$ySlk-qvWyjN2_N-S9_?f0n ziCTfN{n%oYhh1IqmDK22`@8P9`tHg+X1dq=>~j)zE)Pon$mUrlx(fxcN$Ope)_y}1 z4=p<-2Wy5$YUD?rnK$_?h0#-AAeb09l&Zs6cj#|LRxwF`{=D>wqD1F;cfIyLZd~F9 z?l^N=v#Y3fWh+IFEf({L+Ir*fgBN(z?*5zst9h0;F^CAi#iBI9Y+exl z9lUUx?5w-Ux4Z6Q0(XO>!{9A?#fuLzcAA{;cna=gsZ^@bg*<;np(nIK$(~yn@wt*q z5^aE8mvj4f5&~2EVabm%riHTTuPW}9HIWtXk4ygiLdBG!Odml}zb@fVPr;|gvzFQ) zo^o|CHqOM+s)C0Wba}Ot(oB$E1Pj=-4!R^`Xk)k34m$M;JGlF#YN(d>3*WMhxHmiSNu=nHY?Kl^7vTnf^aUe_$k7H1+?uQQ-f{ zsj%~G!3CvroTYIX1g)B3ykWAt?5Z}A*!w}f#&fT)uI?}2kJ*pr4I2)B=Kb#M1so{V zURGrz2_eGfl6Glt)jJxh} zOM6?JN28yumq8O(1$hfDXO%*n&-8R^e|O1AWBV$R@s#H$V@ASu%BoNN31&B@K21+` zy;~x3Hjqu4Ub#*ar;Se3^L?+TlG)lyO+otWedQb{LSkj*8Iv6++s5itHvXB_FljOJo`^h}t(HCf$h;ZjnW4zk!06niz>_=W=$Fi6+0j z<|)Gy;iDmN?I6>BGi^8^GfX~xn$?1lA)8#-Y@IycO6Fme|P9{cK$WD5rq#LJvu z@ai6We^0Zg<{K(O=O2n@ZI>t_?~-6kPi~>7ocN9r6;&Fc*g-jHtv05X-YP56qxd;} zN0R+opW&a#dr6DUdd0FK!)7|x<(v$y33wlc_TuO6Zk|0`GBx?0|9c&nmAV5J?(7Ca|)g0-|IDRfk z+fyjbHG_s`xIdb?#$?)cO*2}T`c{B6Ix0o~F)Djplopv589y!8eF5x)Oe6TR*Z%6^ z$nn)9k0WVL(WhDl@qH=jYG#=(rxvM#mNgICD9OmmTV3WKj$xOYMK?bi*u+* zne{VMd%18ysUF|;!G@^hr9-~T^Xgse@=j~Y9Y?wB$O?B%qWCz9pY-%r_pzvBxp4~q zRxj4hObem7Rgr(}@QIRfHYZa|Lg{gCi{uj#6ide&${wSO7P2t%OW-Wp==^v3qrikB zgbAg}%KI&ht(KdHo*srsC0V;9?3Gw9Tlw&s550ocE1RCQVK+gEG1)hM*VMVGg9j9MHM5|EJU9 zsD@7j+5L}cDbuOqQ?&~MT4 zB)+9LEz^xs<%9J=s`m6{9ZdtCLmh9GP1NBgqiMHKu_~pJ6Ru#0!31p*iS@nP3hsyp z$65zX5AMASaJ${bbU=)TvHolO*Y0kao|D%3Nm(2o8l5s+o&sE67XMHQ8Ehhj_Rma( z75t!fG(Iu$l0^j(Xx!UPyRg4H2*SwzAt?t1-X{p4DniDpeJxJcIJNF zI7WSG0ZM^#FWsuo%kyR%F?MDQC1>shWxwL8Z&fC*yyhoFgwSktl(7CO4|`Bi_d1<6 zSm7=E%INCoH?KdJ_if*tv(aUd-%?N@MzpD(GB{6x6s{1%-#T%{IVr5g-l z9&!)`Wo~Cwv72YF&7ZTV#N@tB8SoWI2@ia^=+t-lgd?Jd@p~#?PnDjCVU%zy_Hdew zDM~deLfV6}WNnk#+3nf6^|FIbd5OLqoYm0`u)xQo&3mQ**$ zeEekoQI4e=XR*!W#7ZiJ)9mWd#YU0`p1VT1@{)3Mp#!!46Z8AhnDj8;kUJzMT;xOK z21fNJG*@zOzR1C-L^s`!tzO?g!YUz;-T!c&LP}fD>2dZNN~TA5&r?l%HD?SIjH+OC z#u#N3g!Subd%oTPsmjWk98DiJaN4Pt9SU>xL)W^Y8bg zxh{`yD9U5+;cQau6RGX(gqLapL&ZSDvka3yLWx3nuhJy z5YhC4Z#SF}#qp|vCm}PV#hnp>7f0Rka_beR)3N&>@1hc=6#AgXcTegm1SlNTOx`yl zQI1hnPtF~ewq#zCXqLlQU(`vR@aNFk@}Es96VCRRwL3<&$KYpwshTe*v}QaK^19An zY!9As3_N3Tt85)EWWedhzm<{Kzn=N!Ktc^ILUF1~(t$wtaQEu(E_5*<`0@M8ZL=Ed z&x16i0U`=N?WE4X0bsY7DZcZ5OaqXZCG8vD#4+B^zbL7WG~a6v=)53%5D|~`DT3l+ zeM7?=fQ&6d0nkNa=@?bu8-sNC@rmrlye-zZV^)qus&Jo)30r2D%{RoZKZ#B=upHUX8X?gG#Y({_?OSyqo3&& zJx}q9E4aGSaS5x*dwf&v$h2w{1{I3NS5#K?;GPH$lBJHxEMS8wM_Cj@cObUS zGjz~)-qZ%w(@+l=K*WkN-BqzH^?1lnanjHIg^y8rvTrMPCfr{et`C1)!;Le95|vkM zZ;(z9XuPHUo^Uas^Fb^cSD_*oeU)UIsd`($oN2>t)$_?!UT<>GA1@CdMVd(D6_>H< zXZD*{%%h54vD&XJ?6oo_`@&9_aIj27JG6kqM?QIdbteda%&|I@KJ^SsW^p?OxHM(wnWF27=F;PW3~N z@6kHid&;g;^pcE9{1ifQ+KH69Slohw6Ldsqf!L#PU zTT~XFn2>(pZRGAvRvYDKL2KHBZvAmpKl-P?3 zxtGABRJ3wGVH%J#>#vgW_|*F0hNgs>?wH(dbt|=X4(fO9!$x7meka@(Qs7Cq_`wfV z0k75mYe!;72wC@eRm0L=M?mP_!*s4ozr*|a4ZZS3?@)s4w6Z(>ie=4nwS|id*t=PE zq`OWeyz6)cYTnLWttjC5Uc-sQR&-kD%}4uiVl}dRcl;a!(>I0Q>~O{X&gKuY;i?|S za*h=a(qbF?*xW$>l=hS`d-Y1-#qZT$)#jgsD1p>^G_sGOpW?(&D+`_E{5IwSZSxfjQtn#24h zaQW&X_LOhv^%~|~WTv@O3itid?SX3IsA0nprA~=%=>*d76r>_Z0s<=5M+V~Wen_i9 zsC-OLuC@@|G~K@Wq6aBLf)ey0R6$9T;K~(7;GO?~(3JlVGzR;75>{$vW(z#v0R{E; z@Fq+*+Bfd~9A>Be2C4CY|wC*&$5{*}*%PVCi7W~T@#e@iVxCpsL zqMoLNj9xH`c&nmxOKEFg4?;0uXCT-KTzS#he3-N*HIZJdY(s;84=l~$Y2<@Rm z+YLxBG`m=$xH>eu($kB5c6mE+KlVQFM{6jjBq* 
zXrVBwj{)H(;4esb!4p-&#n8Dz-(xUfKh_NCL~?d^rYoebT2XJ76%|wKMXk)k&79K+; zY^qJQ6lLi1SA(V0Eq5cM;Yv1-VQqb^7n7x?xg~S390IC zZVA@6xv44dZZZR00s=M_cW9M92WkMAI732H##R+O;AsUz2bAh(JwGOZ9J3GN6O_GQ z-ficP7S1N2-h48*8+Z&7-T)xwfiZxV9uMmqr7_}&0&Nb76q|3LB%yY^GCQA{3OpNo z!~3Xy8^`RV1DG@S3q}Ex0vB!FX1+fY+gn?HP?sMP+bUN~Eblm)JgNTwkYdIgo#z_AIo*4=kuVZh zqo8Z@v;45OdE$LEbdj4Ta-{7Wj%G!T-c}snBkcyW6OVq5qNqMFoV(HP$FPbEpx_)h z%a)WP?N)^gX}Jm$43K@$h}towPCmiPDoQgn)w})S$1`L*V9{fcyGI;3fMukGc6Hln zM%#^L-GXZ3MIeZMpPVc#b)${+=3tc_>mhYQNdRsqFvJ}}0uCUrMCyLv!>PHxfM%5&eMRdM(tfEuJwydRa^P}qAjV^tUG^|63%CnQj zqI&;xrANC#E{OXMi|m|vtOJ@4h^R_bE--A{sw&*SB;LLHnR~aiutMtCM&@(=lBQRc z)tuN(Y`0(ranhrfx04;PbcZOBhY1T98av#7qGznBuwQXnhGfFB)OoGcc{glNfKXN# zRxc&~T~CxHyjoY1LIt|ecI}{88A6X=%0GJz+(oYsPfZc1 zN_)gyF=H>0(>lRm-)WD?8`SQ1fUA0%?Na4nMrxh6@yi{ygk}y_m_ zu_x2by)(&&K!m>vabY9GrG8_ibGP1ld#_LW!6S~r-|yZUAGWTJ5M>xeXsD=8;wG-v zrX+t>TAF@A%KUJCHAOq@YYFw?QkE~K8>-zRE-udL_%meKr;3H5lEMGHi#$4LFXkiG zqbA8M*@Rf;w^h-w>Jtce<9}5924vSI{HtAX3OrlSzb3L4qs#;GO!sOn=Z zZAGj1k4Y62sxM$EpG;H+)`1!cAo5?oeyt{?qCk#B7qf5w-q0d+Rj>_UY%pnjUbse-2|ixS2h4c>=7yr$xs3Pk zvL4$KioI2Cj?W|;nvtv%hEs+Aszo-*4$l9)%ewq%soSTjH*oOXh^Bw1CoT4NE2T7K z#{bDve*L<)RbDI06iKxmOB{&s2QO~XHDu#yNaDzrUBS{HNi+c2#l=rHDXRJXG^>Og zo|n*$*~Q!(B|c7n0+|^>twiC0hF~ERPWY*1m6OPc#853a&d1$Mk3ppL-a@F$GBVeM zLtl&=`*g8rvF~B`8!xM?JWfqCq4*0-YiCI*^c+no{eFXM+_dFW#n)AW%VUU#Zy5U= zZI8|0t?YfQ@jF+egeM_lL6WWMs3zYReMcJC`q1bVbAom8uUzWRsF?O*RgqfiwW;eo zBqM4)$SvSH%z_eRl46I7+!ED{YLa?>nhrc{+X@Iz4U7JsVdnRhzc)*aLQN=M+y~A zmSVF!fN2C9EEm#!2r=WJbjZ-krPK z=v6FCeQNQhXU}7z%X3Z7W2Eso_w)vj_Q?a!ZTTe?S!9{+e=jklePO+xDU9bcC%35g z@YO99;>k^@ai0b z4yhfG6K7-Q6IGHk{eORcbNbb87;D9;vc(JNu+Z?n3+Hz8-c-=#R4$d~1SH&9Pe)Y< zB>!kuOIRG_e77v$hJ_dap2Gp;zIw952vL;Wr58y`-}93m#WJz+)f-c@meYjztF7eI z7P3F`AAXJP{ymav1&()5Tl4x`dNi~+)T<|EJE=p5ZAo4yH6?y8x~xBLRq4l<6!9i7 z>lucizu$Ne<3Q09Lxv-uzDU906ndt0S{8A^p*bljVUt zzU~57ADL*!i_5uHM)5PPwE#G*&vN{{X0hr z^-_X-s3B8Wd!pB?l5i&j^pY)v==r2mb9g)k6#_;Nx_rMs=`Hm1Wak_D@hG)zD}$o# zss+bf5+h{x7mfd)k_xnM7qljav^FdbF5~H1o^r1ns-8S_CdB!1Ff1QV()xim3ZD-- za6&L}J^u_G33A{*53bo~uQ{Yc%kGUn=0y2?tHo^6L#Q&*>&!H-Sq9eR-M5~;)AeVt zlA4|#>1_8;TD88piS0Ms^VCM6sDBs>NM(bR9xzW*RSQxLANuW+zCm$=@Dy%pQ{jAnM^a{p5JulXk;lCBI9LYIq&|64rb~Z~9nA*BL@BxSZ zF^1MFZ%QlqI|}3~RIT}~D=i0?Q&N&a7S9Zmu-vpvx3wEXni4?bSUlHUzH#x)F22pr}!O9xzaCGf*9h7sUoO$$48cLXU5KMD)Ym+^<5 z&gd>z4|hU}gk2OKqG{(b_PtXFHbEKJ{Ei%U0;MoZrIp=T(^Gz8j&*zyYD5~8x_$3!CtQ52! z_^nyqOnIjYkJ>~qHX%LNQzpNdK^WijZ1wLO3&ZUX)cyyNoWFPkcZ*Ks2O3yox^CaQ ze^4-p6Uchwikx17YykzEe;Ai6R!XAvJhg3h`ydyuK0o^-9`UC<`iCO&^om4E#bN

diff --git a/cv/classification/repvit/pytorch/flops.py b/cv/classification/repvit/pytorch/flops.py
deleted file mode 100644
index 71b80af3..00000000
--- a/cv/classification/repvit/pytorch/flops.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import torch
-import time
-from timm import create_model
-import model
-import utils
-from fvcore.nn import FlopCountAnalysis
-
-T0 = 5
-T1 = 10
-
-for n, batch_size, resolution in [
-    ('repvit_m0_9', 1024, 224),
-]:
-    inputs = torch.randn(1, 3, resolution,
-                         resolution)
-    model = create_model(n, num_classes=1000)
-    utils.replace_batchnorm(model)
-    n_parameters = sum(p.numel()
-                       for p in model.parameters() if p.requires_grad)
-    print('number of params:', n_parameters / 1e6)
-    flops = FlopCountAnalysis(model, inputs)
-    print("flops: ", flops.total() / 1e9)
\ No newline at end of file
diff --git a/cv/classification/repvit/pytorch/logs/repvit_m0_9_distill_300e.txt b/cv/classification/repvit/pytorch/logs/repvit_m0_9_distill_300e.txt
deleted file mode 100644
index d0f889ad..00000000
--- a/cv/classification/repvit/pytorch/logs/repvit_m0_9_distill_300e.txt
+++ /dev/null
@@ -1,300 +0,0 @@
-{"train_lr": 1.0000000000000165e-06, "train_loss": 6.982649914080576, "test_loss": 6.934983491897583, "test_acc1": 0.11600000423312187, "test_acc5": 0.6040000045239925, "epoch": 0, "n_parameters": 5489328}
-{"train_lr": 1.0000000000000165e-06, "train_loss": 6.980203570936509, "test_loss": 6.933213808319786, "test_acc1": 0.11600000371813775, "test_acc5": 0.618000005363226, "epoch": 1, "n_parameters": 5489328}
-{"train_lr": 0.0006008000000000089, "train_loss": 6.5278757431810135, "test_loss": 5.5409025658260695, "test_acc1": 5.8420000340271, "test_acc5": 17.012000080566406, "epoch": 2, "n_parameters": 5489328}
-{"train_lr": 0.0012005999999999965, "train_loss": 6.0004106841499, "test_loss": 4.452513044530695, "test_acc1": 15.22800017501831, "test_acc5": 35.65600021789551, "epoch": 3, "n_parameters": 5489328}
-{"train_lr": 0.0018004000000000203, "train_loss": 5.479916962645323, "test_loss": 3.5698775378140537, "test_acc1": 26.8200000881958, "test_acc5": 51.46000034942627, "epoch": 4, "n_parameters": 5489328} -{"train_lr": 0.00240020000000003, "train_loss": 5.008051208669333, "test_loss": 3.1019760966300964, "test_acc1": 34.42600043243408, "test_acc5": 60.374000172729495, "epoch": 5, "n_parameters": 5489328} -{"train_lr": 0.0029979511544580427, "train_loss": 4.66632095207985, "test_loss": 2.6850028308955105, "test_acc1": 41.65000046630859, "test_acc5": 67.62600069396973, "epoch": 6, "n_parameters": 5489328} -{"train_lr": 0.0029970499590002224, "train_loss": 4.407360971831589, "test_loss": 2.4368611804463645, "test_acc1": 46.07800023895264, "test_acc5": 71.58800047180176, "epoch": 7, "n_parameters": 5489328} -{"train_lr": 0.0029959851434506303, "train_loss": 4.200504349075633, "test_loss": 2.2634752013466577, "test_acc1": 49.678000061950684, "test_acc5": 74.69600061889649, "epoch": 8, "n_parameters": 5489328} -{"train_lr": 0.0029947568245780043, "train_loss": 4.059042043394322, "test_loss": 2.1879833286458794, "test_acc1": 50.96000021270752, "test_acc5": 75.98400108703613, "epoch": 9, "n_parameters": 5489328} -{"train_lr": 0.002993365137081617, "train_loss": 3.9475623488569145, "test_loss": 2.012806689197367, "test_acc1": 54.52400045104981, "test_acc5": 78.57000065673829, "epoch": 10, "n_parameters": 5489328} -{"train_lr": 0.0029918102335755405, "train_loss": 3.8505038518985684, "test_loss": 1.9666168662634762, "test_acc1": 55.09800032226563, "test_acc5": 79.33400072021485, "epoch": 11, "n_parameters": 5489328} -{"train_lr": 0.00299009228457261, "train_loss": 3.7840481309010254, "test_loss": 1.9200503812594847, "test_acc1": 56.420000356140136, "test_acc5": 80.1240010357666, "epoch": 12, "n_parameters": 5489328} -{"train_lr": 0.002988211478465166, "train_loss": 3.7191767599656997, "test_loss": 1.8602412099188024, "test_acc1": 57.4000005267334, "test_acc5": 80.92200083190917, "epoch": 13, "n_parameters": 5489328} -{"train_lr": 0.0029861680215047875, "train_loss": 3.666098939429084, "test_loss": 1.8317908929152922, "test_acc1": 58.47200075714111, "test_acc5": 81.34200054992675, "epoch": 14, "n_parameters": 5489328} -{"train_lr": 0.002983962137779637, "train_loss": 3.6223566518556014, "test_loss": 1.7595897195014087, "test_acc1": 59.484000420837404, "test_acc5": 82.38600054382324, "epoch": 15, "n_parameters": 5489328} -{"train_lr": 0.002981594069189697, "train_loss": 3.5532970992352464, "test_loss": 1.729432460936633, "test_acc1": 59.942000522766115, "test_acc5": 82.73200067871093, "epoch": 16, "n_parameters": 5489328} -{"train_lr": 0.002979064075420377, "train_loss": 3.5320936797334137, "test_loss": 1.7108701142397793, "test_acc1": 60.54200043273926, "test_acc5": 83.1220004876709, "epoch": 17, "n_parameters": 5489328} -{"train_lr": 0.002976372433913977, "train_loss": 3.5062816803403893, "test_loss": 1.7010381641713055, "test_acc1": 61.08400041107178, "test_acc5": 83.6960003668213, "epoch": 18, "n_parameters": 5489328} -{"train_lr": 0.002973519439839436, "train_loss": 3.465390722385699, "test_loss": 1.6781218702142888, "test_acc1": 61.58800073730469, "test_acc5": 83.38800070007325, "epoch": 19, "n_parameters": 5489328} -{"train_lr": 0.002970505406059452, "train_loss": 3.4363235491071102, "test_loss": 1.6407857821746306, "test_acc1": 61.91000063537598, "test_acc5": 83.90600130615235, "epoch": 20, "n_parameters": 5489328} -{"train_lr": 0.0029673306630970224, "train_loss": 3.4092236555737556, 
"test_loss": 1.643885849551721, "test_acc1": 62.32400022521973, "test_acc5": 83.93000072692871, "epoch": 21, "n_parameters": 5489328} -{"train_lr": 0.0029639955590984074, "train_loss": 3.3889658501822884, "test_loss": 1.5926248932426625, "test_acc1": 62.9940004208374, "test_acc5": 84.78800040039063, "epoch": 22, "n_parameters": 5489328} -{"train_lr": 0.0029605004597953776, "train_loss": 3.365060423847011, "test_loss": 1.6175998374819756, "test_acc1": 62.978000413513186, "test_acc5": 84.68600085327148, "epoch": 23, "n_parameters": 5489328} -{"train_lr": 0.0029568457484649134, "train_loss": 3.346576735198641, "test_loss": 1.6067326231436296, "test_acc1": 62.7400007220459, "test_acc5": 84.79400069824219, "epoch": 24, "n_parameters": 5489328} -{"train_lr": 0.002953031825887318, "train_loss": 3.3284510317609177, "test_loss": 1.5610008023001931, "test_acc1": 64.10800035217285, "test_acc5": 85.34800088989257, "epoch": 25, "n_parameters": 5489328} -{"train_lr": 0.0029490591103021138, "train_loss": 3.3251032213941754, "test_loss": 1.574185999956998, "test_acc1": 64.11600030944824, "test_acc5": 85.3100007788086, "epoch": 26, "n_parameters": 5489328} -{"train_lr": 0.0029449280373624936, "train_loss": 3.2984712626293695, "test_loss": 1.5691163336688823, "test_acc1": 63.880000305175784, "test_acc5": 85.42800085754395, "epoch": 27, "n_parameters": 5489328} -{"train_lr": 0.0029406390600870565, "train_loss": 3.2845331783489073, "test_loss": 1.5228009908036753, "test_acc1": 64.81400064819336, "test_acc5": 85.95400047363282, "epoch": 28, "n_parameters": 5489328} -{"train_lr": 0.0029361926488104344, "train_loss": 3.2676040529490087, "test_loss": 1.5689399005337195, "test_acc1": 64.06400032775879, "test_acc5": 85.59000085876465, "epoch": 29, "n_parameters": 5489328} -{"train_lr": 0.002931589291131832, "train_loss": 3.254195202383206, "test_loss": 1.5162948322567074, "test_acc1": 64.78400083007813, "test_acc5": 85.91200096557617, "epoch": 30, "n_parameters": 5489328} -{"train_lr": 0.002926829491861259, "train_loss": 3.2359598957234437, "test_loss": 1.5258609469641338, "test_acc1": 64.63800040100098, "test_acc5": 85.7840011328125, "epoch": 31, "n_parameters": 5489328} -{"train_lr": 0.002921913772964315, "train_loss": 3.228627877198249, "test_loss": 1.570804637941447, "test_acc1": 63.73600059387207, "test_acc5": 85.33000086120606, "epoch": 32, "n_parameters": 5489328} -{"train_lr": 0.0029168426735050783, "train_loss": 3.212015571782915, "test_loss": 1.5069011280482465, "test_acc1": 64.58000067321777, "test_acc5": 86.0700007574463, "epoch": 33, "n_parameters": 5489328} -{"train_lr": 0.002911616749586609, "train_loss": 3.206843217785696, "test_loss": 1.5038624690337614, "test_acc1": 64.76400076293945, "test_acc5": 86.26800065429687, "epoch": 34, "n_parameters": 5489328} -{"train_lr": 0.0029062365742903795, "train_loss": 3.191110656415816, "test_loss": 1.4681706577539444, "test_acc1": 65.54200082824707, "test_acc5": 86.41600063049316, "epoch": 35, "n_parameters": 5489328} -{"train_lr": 0.002900702737613334, "train_loss": 3.2066745146287143, "test_loss": 1.529716589234092, "test_acc1": 64.67400057678222, "test_acc5": 86.178000758667, "epoch": 36, "n_parameters": 5489328} -{"train_lr": 0.0028950158464029476, "train_loss": 3.1850599932441894, "test_loss": 1.442604360255328, "test_acc1": 66.00600052124024, "test_acc5": 86.8920008807373, "epoch": 37, "n_parameters": 5489328} -{"train_lr": 0.002889176524290917, "train_loss": 3.179959580409441, "test_loss": 1.442764052613215, "test_acc1": 66.37400091522217, "test_acc5": 
87.07400059753418, "epoch": 38, "n_parameters": 5489328} -{"train_lr": 0.002883185411624865, "train_loss": 3.1746888466590315, "test_loss": 1.4658111069690098, "test_acc1": 65.87800032409667, "test_acc5": 86.5220006274414, "epoch": 39, "n_parameters": 5489328} -{"train_lr": 0.0028770431653975456, "train_loss": 3.1484257861864653, "test_loss": 1.4648799781094899, "test_acc1": 65.84200084655761, "test_acc5": 86.64000114562988, "epoch": 40, "n_parameters": 5489328} -{"train_lr": 0.0028707504591756914, "train_loss": 3.1434144889422173, "test_loss": 1.4519595767964015, "test_acc1": 66.4100007006836, "test_acc5": 86.98000137512207, "epoch": 41, "n_parameters": 5489328} -{"train_lr": 0.0028643079830253625, "train_loss": 3.1369852942885825, "test_loss": 1.4261594841426068, "test_acc1": 66.63000101196289, "test_acc5": 87.18400094116211, "epoch": 42, "n_parameters": 5489328} -{"train_lr": 0.0028577164434366873, "train_loss": 3.123919892153866, "test_loss": 1.4313833598386159, "test_acc1": 66.64800058288574, "test_acc5": 87.2340005505371, "epoch": 43, "n_parameters": 5489328} -{"train_lr": 0.002850976563246287, "train_loss": 3.1184884615653425, "test_loss": 1.433833163570274, "test_acc1": 66.55200081665039, "test_acc5": 87.09200049499512, "epoch": 44, "n_parameters": 5489328} -{"train_lr": 0.0028440890815579303, "train_loss": 3.1133779042916334, "test_loss": 1.4401618215170773, "test_acc1": 66.23600057617188, "test_acc5": 87.15400046386719, "epoch": 45, "n_parameters": 5489328} -{"train_lr": 0.002837054753661586, "train_loss": 3.1052396368923234, "test_loss": 1.4640056538310917, "test_acc1": 65.9500004260254, "test_acc5": 86.62800080139161, "epoch": 46, "n_parameters": 5489328} -{"train_lr": 0.0028298743509506535, "train_loss": 3.109855320385034, "test_loss": 1.4230894263495097, "test_acc1": 66.6020008380127, "test_acc5": 86.9460007989502, "epoch": 47, "n_parameters": 5489328} -{"train_lr": 0.002822548660837159, "train_loss": 3.102892851586536, "test_loss": 1.4583244398236275, "test_acc1": 66.12200051757813, "test_acc5": 87.05000080993652, "epoch": 48, "n_parameters": 5489328} -{"train_lr": 0.002815078486665612, "train_loss": 3.089659762253864, "test_loss": 1.4121072556484828, "test_acc1": 67.05200043884277, "test_acc5": 87.48200055114746, "epoch": 49, "n_parameters": 5489328} -{"train_lr": 0.0028074646476246414, "train_loss": 3.082653873186889, "test_loss": 1.444496064023538, "test_acc1": 66.4640007409668, "test_acc5": 86.95200061950683, "epoch": 50, "n_parameters": 5489328} -{"train_lr": 0.0027997079786577367, "train_loss": 3.084147678219157, "test_loss": 1.411378639665517, "test_acc1": 67.20400028564453, "test_acc5": 87.28600041137695, "epoch": 51, "n_parameters": 5489328} -{"train_lr": 0.0027918093303709217, "train_loss": 3.0915799520427374, "test_loss": 1.4523341662504456, "test_acc1": 66.60000071136474, "test_acc5": 87.13200038513183, "epoch": 52, "n_parameters": 5489328} -{"train_lr": 0.0027837695689399552, "train_loss": 3.0660781651663838, "test_loss": 1.3977252583612094, "test_acc1": 67.2380008227539, "test_acc5": 87.46000054138183, "epoch": 53, "n_parameters": 5489328} -{"train_lr": 0.0027755895760153255, "train_loss": 3.062053642470202, "test_loss": 1.382409724999558, "test_acc1": 67.35200053894043, "test_acc5": 87.51400050476074, "epoch": 54, "n_parameters": 5489328} -{"train_lr": 0.00276727024862551, "train_loss": 3.0638390075174167, "test_loss": 1.3727358254519375, "test_acc1": 67.42600036071778, "test_acc5": 87.7780010333252, "epoch": 55, "n_parameters": 5489328} -{"train_lr": 
0.002758812499078425, "train_loss": 3.06613167863098, "test_loss": 1.4401213635097851, "test_acc1": 67.09400093505859, "test_acc5": 87.54400068420411, "epoch": 56, "n_parameters": 5489328} -{"train_lr": 0.0027502172548616124, "train_loss": 3.0614122362088243, "test_loss": 1.4220237569375471, "test_acc1": 67.13000048156738, "test_acc5": 87.53000104492187, "epoch": 57, "n_parameters": 5489328} -{"train_lr": 0.002741485458540409, "train_loss": 3.056485358438046, "test_loss": 1.4455626911737702, "test_acc1": 66.75800064697266, "test_acc5": 87.28800059082032, "epoch": 58, "n_parameters": 5489328} -{"train_lr": 0.002732618067654839, "train_loss": 3.0362050200966624, "test_loss": 1.3690447522835298, "test_acc1": 67.62600059692383, "test_acc5": 87.77600054992676, "epoch": 59, "n_parameters": 5489328} -{"train_lr": 0.002723616054614203, "train_loss": 3.0381337372447663, "test_loss": 1.428373554890806, "test_acc1": 66.68800065551758, "test_acc5": 87.15200032470703, "epoch": 60, "n_parameters": 5489328} -{"train_lr": 0.002714480406590576, "train_loss": 3.0401366553861173, "test_loss": 1.3838141878897494, "test_acc1": 67.76800049438476, "test_acc5": 87.8260004675293, "epoch": 61, "n_parameters": 5489328} -{"train_lr": 0.00270521212541074, "train_loss": 3.031719181570504, "test_loss": 1.4025159566239878, "test_acc1": 67.33400064453124, "test_acc5": 87.73000104614258, "epoch": 62, "n_parameters": 5489328} -{"train_lr": 0.002695812227446153, "train_loss": 3.016277303548454, "test_loss": 1.3900823322209446, "test_acc1": 67.644000390625, "test_acc5": 87.71800099487305, "epoch": 63, "n_parameters": 5489328} -{"train_lr": 0.0026862817435016183, "train_loss": 3.0280274826345397, "test_loss": 1.3811739039692013, "test_acc1": 67.54800050476074, "test_acc5": 87.91000095520019, "epoch": 64, "n_parameters": 5489328} -{"train_lr": 0.002676621718702138, "train_loss": 3.0110837584205097, "test_loss": 1.3759576800194653, "test_acc1": 67.64200027954102, "test_acc5": 88.0980002331543, "epoch": 65, "n_parameters": 5489328} -{"train_lr": 0.0026668332123782144, "train_loss": 3.003496237009835, "test_loss": 1.4057106389240785, "test_acc1": 67.70800037841796, "test_acc5": 87.92200085754395, "epoch": 66, "n_parameters": 5489328} -{"train_lr": 0.00265691729794979, "train_loss": 3.0002371372579097, "test_loss": 1.3600165383382277, "test_acc1": 68.10200042480469, "test_acc5": 88.12000060546875, "epoch": 67, "n_parameters": 5489328} -{"train_lr": 0.002646875062808804, "train_loss": 2.9966356026397336, "test_loss": 1.354297081177885, "test_acc1": 68.47600036743164, "test_acc5": 88.29800081176758, "epoch": 68, "n_parameters": 5489328} -{"train_lr": 0.002636707608199411, "train_loss": 2.9930455872623756, "test_loss": 1.32852184840224, "test_acc1": 68.57200035705566, "test_acc5": 88.34400083007813, "epoch": 69, "n_parameters": 5489328} -{"train_lr": 0.0026264160490975853, "train_loss": 2.9991976085255185, "test_loss": 1.3876942159100012, "test_acc1": 68.33400087402343, "test_acc5": 88.2020010723877, "epoch": 70, "n_parameters": 5489328} -{"train_lr": 0.0026160015140887375, "train_loss": 2.986508907352706, "test_loss": 1.3632259910756892, "test_acc1": 67.9520006640625, "test_acc5": 88.2160009765625, "epoch": 71, "n_parameters": 5489328} -{"train_lr": 0.002605465145244029, "train_loss": 2.987065404117536, "test_loss": 1.3436542586846785, "test_acc1": 68.31200089782715, "test_acc5": 88.31600126464843, "epoch": 72, "n_parameters": 5489328} -{"train_lr": 0.0025948080979949885, "train_loss": 2.9718855207772563, "test_loss": 
1.3114922974597325, "test_acc1": 68.92800043457031, "test_acc5": 88.55400082092285, "epoch": 73, "n_parameters": 5489328} -{"train_lr": 0.002584031541007042, "train_loss": 2.98022096873902, "test_loss": 1.3056595921516418, "test_acc1": 68.98600047668457, "test_acc5": 88.7920005078125, "epoch": 74, "n_parameters": 5489328} -{"train_lr": 0.0025731366560510526, "train_loss": 2.9820126657434503, "test_loss": 1.3511723781173879, "test_acc1": 68.34000089294433, "test_acc5": 88.30200062866211, "epoch": 75, "n_parameters": 5489328} -{"train_lr": 0.002562124637873926, "train_loss": 2.9657894533386617, "test_loss": 1.3421478461135516, "test_acc1": 68.59400088317871, "test_acc5": 88.10800065307618, "epoch": 76, "n_parameters": 5489328} -{"train_lr": 0.002550996694067511, "train_loss": 2.953480788093391, "test_loss": 1.3630854358727282, "test_acc1": 68.33800039733887, "test_acc5": 88.2140007647705, "epoch": 77, "n_parameters": 5489328} -{"train_lr": 0.002539754044936295, "train_loss": 2.9620328497686548, "test_loss": 1.320670879699967, "test_acc1": 68.86600083068848, "test_acc5": 88.50400079284668, "epoch": 78, "n_parameters": 5489328} -{"train_lr": 0.0025283979233634165, "train_loss": 2.9526467806786942, "test_loss": 1.3828121416948058, "test_acc1": 67.50600081481933, "test_acc5": 87.77000102539063, "epoch": 79, "n_parameters": 5489328} -{"train_lr": 0.0025169295746755903, "train_loss": 2.9514542752675874, "test_loss": 1.3260450918566098, "test_acc1": 68.43000069580079, "test_acc5": 88.49400093261718, "epoch": 80, "n_parameters": 5489328} -{"train_lr": 0.002505350256506467, "train_loss": 2.9655020009699484, "test_loss": 1.3262165744196286, "test_acc1": 68.80000077575684, "test_acc5": 88.83800022155762, "epoch": 81, "n_parameters": 5489328} -{"train_lr": 0.0024936612386588365, "train_loss": 2.955407465712058, "test_loss": 1.3233475861224262, "test_acc1": 68.6700007397461, "test_acc5": 88.59200090698242, "epoch": 82, "n_parameters": 5489328} -{"train_lr": 0.002481863802965163, "train_loss": 2.9492059543693094, "test_loss": 1.3679293969815427, "test_acc1": 68.20600039123535, "test_acc5": 88.04000028686524, "epoch": 83, "n_parameters": 5489328} -{"train_lr": 0.0024699592431473524, "train_loss": 2.950202565684879, "test_loss": 1.3441526327620854, "test_acc1": 68.67000058227539, "test_acc5": 88.51200062805175, "epoch": 84, "n_parameters": 5489328} -{"train_lr": 0.0024579488646743037, "train_loss": 2.930438473892155, "test_loss": 1.3374762704426593, "test_acc1": 68.87200047668458, "test_acc5": 88.48600072509765, "epoch": 85, "n_parameters": 5489328} -{"train_lr": 0.002445833984619558, "train_loss": 2.9273927594474753, "test_loss": 1.3107599094510078, "test_acc1": 69.07800030334472, "test_acc5": 88.79600095214843, "epoch": 86, "n_parameters": 5489328} -{"train_lr": 0.002433615931516097, "train_loss": 2.936311906333164, "test_loss": 1.3423645530234685, "test_acc1": 68.3220003930664, "test_acc5": 88.30600037841796, "epoch": 87, "n_parameters": 5489328} -{"train_lr": 0.0024212960452112274, "train_loss": 2.9349906141880893, "test_loss": 1.3106247200207277, "test_acc1": 68.90600073852539, "test_acc5": 88.72800109436035, "epoch": 88, "n_parameters": 5489328} -{"train_lr": 0.002408875676719279, "train_loss": 2.9333155572557335, "test_loss": 1.3417520136995749, "test_acc1": 68.54400094055175, "test_acc5": 88.49600056030273, "epoch": 89, "n_parameters": 5489328} -{"train_lr": 0.002396356188073559, "train_loss": 2.925317129035362, "test_loss": 1.2985680530017072, "test_acc1": 69.22000068908692, "test_acc5": 
88.75400110290528, "epoch": 90, "n_parameters": 5489328} -{"train_lr": 0.0023837389521772496, "train_loss": 2.912679416455811, "test_loss": 1.2864723584868691, "test_acc1": 69.59000073425292, "test_acc5": 89.2240007043457, "epoch": 91, "n_parameters": 5489328} -{"train_lr": 0.00237102535265232, "train_loss": 2.922132308320176, "test_loss": 1.286797214638103, "test_acc1": 69.54400115356445, "test_acc5": 89.08000088562012, "epoch": 92, "n_parameters": 5489328} -{"train_lr": 0.002358216783688205, "train_loss": 2.910284753719108, "test_loss": 1.3071843764998696, "test_acc1": 69.21400046508789, "test_acc5": 88.89000062866211, "epoch": 93, "n_parameters": 5489328} -{"train_lr": 0.0023453146498889055, "train_loss": 2.9081432014989623, "test_loss": 1.303432946178046, "test_acc1": 69.35200064880371, "test_acc5": 89.01400108093262, "epoch": 94, "n_parameters": 5489328} -{"train_lr": 0.0023323203661187596, "train_loss": 2.9041212871849393, "test_loss": 1.2873523235321045, "test_acc1": 69.48000070556641, "test_acc5": 89.02400067749024, "epoch": 95, "n_parameters": 5489328} -{"train_lr": 0.0023192353573475138, "train_loss": 2.9082510722198074, "test_loss": 1.2991093918681145, "test_acc1": 69.17600054138184, "test_acc5": 88.88800094238282, "epoch": 96, "n_parameters": 5489328} -{"train_lr": 0.002306061058493568, "train_loss": 2.9109772084285317, "test_loss": 1.3323754261840473, "test_acc1": 69.06200040161133, "test_acc5": 89.03000049621582, "epoch": 97, "n_parameters": 5489328} -{"train_lr": 0.002292798914267547, "train_loss": 2.9051670660432296, "test_loss": 1.3287428597157651, "test_acc1": 69.14600074890137, "test_acc5": 88.62600056030273, "epoch": 98, "n_parameters": 5489328} -{"train_lr": 0.002279450379012848, "train_loss": 2.9065279915607234, "test_loss": 1.261979422786019, "test_acc1": 70.01600048400879, "test_acc5": 89.33000094909669, "epoch": 99, "n_parameters": 5489328} -{"train_lr": 0.002266016916546765, "train_loss": 2.9016446648813266, "test_loss": 1.2931847463954578, "test_acc1": 69.41000027832031, "test_acc5": 88.73800086242676, "epoch": 100, "n_parameters": 5489328} -{"train_lr": 0.0022524999999999915, "train_loss": 2.900196699608716, "test_loss": 1.2572775354439563, "test_acc1": 70.14400063903808, "test_acc5": 89.55800116271973, "epoch": 101, "n_parameters": 5489328} -{"train_lr": 0.002238901111654604, "train_loss": 2.8859709109619653, "test_loss": 1.2606944644993001, "test_acc1": 70.09800098144531, "test_acc5": 89.36600073669433, "epoch": 102, "n_parameters": 5489328} -{"train_lr": 0.002225221742782058, "train_loss": 2.887156999404196, "test_loss": 1.2794224294749172, "test_acc1": 69.60600083679199, "test_acc5": 89.07600096923828, "epoch": 103, "n_parameters": 5489328} -{"train_lr": 0.002211463393479272, "train_loss": 2.892042436688352, "test_loss": 1.2532702101902529, "test_acc1": 70.23200061523437, "test_acc5": 89.58800067749023, "epoch": 104, "n_parameters": 5489328} -{"train_lr": 0.002197627572504191, "train_loss": 2.8770288831467252, "test_loss": 1.2643966390327974, "test_acc1": 70.1040005859375, "test_acc5": 89.5140003729248, "epoch": 105, "n_parameters": 5489328} -{"train_lr": 0.0021837157971105795, "train_loss": 2.874526688735262, "test_loss": 1.2730488648468798, "test_acc1": 69.78200079101562, "test_acc5": 89.26000125671386, "epoch": 106, "n_parameters": 5489328} -{"train_lr": 0.0021697295928814844, "train_loss": 2.876025534397978, "test_loss": 1.2349401983347805, "test_acc1": 70.46400092834473, "test_acc5": 89.72000085998535, "epoch": 107, "n_parameters": 5489328} 
-{"train_lr": 0.0021556704935616435, "train_loss": 2.869538084637347, "test_loss": 1.271562776782296, "test_acc1": 69.608000859375, "test_acc5": 89.44000096618652, "epoch": 108, "n_parameters": 5489328} -{"train_lr": 0.002141540040889749, "train_loss": 2.8606805109934843, "test_loss": 1.2529681941325015, "test_acc1": 70.4280007067871, "test_acc5": 89.54400075012207, "epoch": 109, "n_parameters": 5489328} -{"train_lr": 0.0021273397844292653, "train_loss": 2.84997950422821, "test_loss": 1.2388226755640723, "test_acc1": 70.3700010510254, "test_acc5": 89.61200043823243, "epoch": 110, "n_parameters": 5489328} -{"train_lr": 0.0021130712813983554, "train_loss": 2.8658530665673227, "test_loss": 1.259788597171957, "test_acc1": 69.78600092285156, "test_acc5": 89.32200120239258, "epoch": 111, "n_parameters": 5489328} -{"train_lr": 0.002098736096499017, "train_loss": 2.8608930307469493, "test_loss": 1.2462838725610212, "test_acc1": 70.5820008795166, "test_acc5": 89.81000068664551, "epoch": 112, "n_parameters": 5489328} -{"train_lr": 0.002084335801745925, "train_loss": 2.8549354622404066, "test_loss": 1.2981592999263243, "test_acc1": 69.11200054199219, "test_acc5": 89.0100005102539, "epoch": 113, "n_parameters": 5489328} -{"train_lr": 0.0020698719762936493, "train_loss": 2.8521940006340722, "test_loss": 1.2827483456243167, "test_acc1": 69.82200069946289, "test_acc5": 89.2560007019043, "epoch": 114, "n_parameters": 5489328} -{"train_lr": 0.0020553462062635787, "train_loss": 2.8486744831863353, "test_loss": 1.2509150884368203, "test_acc1": 70.44400058654786, "test_acc5": 89.61600130737305, "epoch": 115, "n_parameters": 5489328} -{"train_lr": 0.0020407600845702154, "train_loss": 2.8523533128791576, "test_loss": 1.260513426228003, "test_acc1": 70.11400087402343, "test_acc5": 89.6720009197998, "epoch": 116, "n_parameters": 5489328} -{"train_lr": 0.002026115210746134, "train_loss": 2.8317914640517543, "test_loss": 1.2541327564553781, "test_acc1": 70.3900006060791, "test_acc5": 89.53600037719727, "epoch": 117, "n_parameters": 5489328} -{"train_lr": 0.0020114131907666985, "train_loss": 2.8325191564697154, "test_loss": 1.2369797453284264, "test_acc1": 70.67400050598144, "test_acc5": 89.74200069335937, "epoch": 118, "n_parameters": 5489328} -{"train_lr": 0.001996655636874217, "train_loss": 2.843444799312013, "test_loss": 1.2747823555361142, "test_acc1": 70.31200049743653, "test_acc5": 89.45400088134765, "epoch": 119, "n_parameters": 5489328} -{"train_lr": 0.0019818441674006432, "train_loss": 2.817317276335449, "test_loss": 1.2246373993429271, "test_acc1": 70.72600118408204, "test_acc5": 89.8940008013916, "epoch": 120, "n_parameters": 5489328} -{"train_lr": 0.0019669804065905413, "train_loss": 2.828895867585564, "test_loss": 1.2407841269265523, "test_acc1": 70.53400081481934, "test_acc5": 89.7100005065918, "epoch": 121, "n_parameters": 5489328} -{"train_lr": 0.0019520659844228236, "train_loss": 2.811425115743415, "test_loss": 1.2950490123846314, "test_acc1": 69.84200103210449, "test_acc5": 89.2380009729004, "epoch": 122, "n_parameters": 5489328} -{"train_lr": 0.0019371025364320208, "train_loss": 2.8330874309765637, "test_loss": 1.2402492395856164, "test_acc1": 70.88200088073731, "test_acc5": 89.84200102539063, "epoch": 123, "n_parameters": 5489328} -{"train_lr": 0.0019220917035286814, "train_loss": 2.8106724680256217, "test_loss": 1.2178548628633672, "test_acc1": 71.00600078063965, "test_acc5": 90.0820006842041, "epoch": 124, "n_parameters": 5489328} -{"train_lr": 0.0019070351318197952, "train_loss": 
2.8187015124505086, "test_loss": 1.219154865904288, "test_acc1": 71.1820011151123, "test_acc5": 90.01000085083008, "epoch": 125, "n_parameters": 5489328} -{"train_lr": 0.0018919344724282487, "train_loss": 2.8173309459531906, "test_loss": 1.2355267351323909, "test_acc1": 70.74200082946777, "test_acc5": 89.68400081542968, "epoch": 126, "n_parameters": 5489328} -{"train_lr": 0.00187679138131143, "train_loss": 2.8127200637432597, "test_loss": 1.2408980327573689, "test_acc1": 70.80800122192383, "test_acc5": 89.82000061035156, "epoch": 127, "n_parameters": 5489328} -{"train_lr": 0.0018616075190799894, "train_loss": 2.8074514081866906, "test_loss": 1.2143180973150514, "test_acc1": 71.27000076782227, "test_acc5": 90.09600048156739, "epoch": 128, "n_parameters": 5489328} -{"train_lr": 0.0018463845508154277, "train_loss": 2.8018668064182037, "test_loss": 1.2406955537470905, "test_acc1": 70.55400055847169, "test_acc5": 89.76600073364258, "epoch": 129, "n_parameters": 5489328} -{"train_lr": 0.0018311241458878586, "train_loss": 2.78496662439774, "test_loss": 1.2268398146737705, "test_acc1": 70.58800089294434, "test_acc5": 89.74400057250976, "epoch": 130, "n_parameters": 5489328} -{"train_lr": 0.001815827977772535, "train_loss": 2.798799208528418, "test_loss": 1.208004555241628, "test_acc1": 71.2080007458496, "test_acc5": 90.01200079406738, "epoch": 131, "n_parameters": 5489328} -{"train_lr": 0.0018004977238668082, "train_loss": 2.791832945722756, "test_loss": 1.2269997054880315, "test_acc1": 71.07000098144532, "test_acc5": 89.92200095581055, "epoch": 132, "n_parameters": 5489328} -{"train_lr": 0.001785135065305683, "train_loss": 2.7870988657148623, "test_loss": 1.213838512247259, "test_acc1": 70.9980007824707, "test_acc5": 89.92200052917481, "epoch": 133, "n_parameters": 5489328} -{"train_lr": 0.0017697416867777532, "train_loss": 2.779824685350025, "test_loss": 1.2119171937758273, "test_acc1": 70.93600098632812, "test_acc5": 89.91400105224609, "epoch": 134, "n_parameters": 5489328} -{"train_lr": 0.001754319276340568, "train_loss": 2.7789870550235114, "test_loss": 1.2336744395169346, "test_acc1": 71.10400094848633, "test_acc5": 89.93000133728027, "epoch": 135, "n_parameters": 5489328} -{"train_lr": 0.001738869525235161, "train_loss": 2.7827069755795475, "test_loss": 1.1996059878305956, "test_acc1": 71.04800059936524, "test_acc5": 90.09200104187012, "epoch": 136, "n_parameters": 5489328} -{"train_lr": 0.0017233941277008241, "train_loss": 2.8017188030824385, "test_loss": 1.1996900330890308, "test_acc1": 71.55800096679687, "test_acc5": 90.09800036315917, "epoch": 137, "n_parameters": 5489328} -{"train_lr": 0.0017078947807893019, "train_loss": 2.777806167408145, "test_loss": 1.2042339491573246, "test_acc1": 71.29200078247071, "test_acc5": 90.09800065429687, "epoch": 138, "n_parameters": 5489328} -{"train_lr": 0.0016923731841786377, "train_loss": 2.7795464046996274, "test_loss": 1.2312956059520894, "test_acc1": 70.93400051208496, "test_acc5": 89.95000098022462, "epoch": 139, "n_parameters": 5489328} -{"train_lr": 0.0016768310399868367, "train_loss": 2.7779633022969863, "test_loss": 1.225905447520993, "test_acc1": 71.0100006225586, "test_acc5": 90.03200054016114, "epoch": 140, "n_parameters": 5489328} -{"train_lr": 0.0016612700525851145, "train_loss": 2.765111952484083, "test_loss": 1.174434863708236, "test_acc1": 71.62000085083008, "test_acc5": 90.4580009729004, "epoch": 141, "n_parameters": 5489328} -{"train_lr": 0.0016456919284111504, "train_loss": 2.7632018094011346, "test_loss": 1.178583443842151, 
"test_acc1": 71.35200067687988, "test_acc5": 90.2980005670166, "epoch": 142, "n_parameters": 5489328} -{"train_lr": 0.0016300983757817835, "train_loss": 2.761106224833347, "test_loss": 1.2211713926358656, "test_acc1": 71.05600065551758, "test_acc5": 90.02000083740235, "epoch": 143, "n_parameters": 5489328} -{"train_lr": 0.0016144911047057992, "train_loss": 2.7621695786285745, "test_loss": 1.1920244233174757, "test_acc1": 71.7440008227539, "test_acc5": 90.3660002355957, "epoch": 144, "n_parameters": 5489328} -{"train_lr": 0.001598871826696317, "train_loss": 2.7553261750036007, "test_loss": 1.1588926125656476, "test_acc1": 72.09000070617675, "test_acc5": 90.58000088562012, "epoch": 145, "n_parameters": 5489328} -{"train_lr": 0.0015832422545831886, "train_loss": 2.75625000054316, "test_loss": 1.1610492759130218, "test_acc1": 71.95000077575683, "test_acc5": 90.59200078063965, "epoch": 146, "n_parameters": 5489328} -{"train_lr": 0.001567604102325149, "train_loss": 2.739677145148067, "test_loss": 1.1858979097821496, "test_acc1": 71.60400074951171, "test_acc5": 90.23400088500976, "epoch": 147, "n_parameters": 5489328} -{"train_lr": 0.0015519590848217988, "train_loss": 2.7443948920539243, "test_loss": 1.1827229539101773, "test_acc1": 71.85800045471191, "test_acc5": 90.46000080444335, "epoch": 148, "n_parameters": 5489328} -{"train_lr": 0.0015363089177256083, "train_loss": 2.7432561740100527, "test_loss": 1.1669334789568728, "test_acc1": 71.86400083129882, "test_acc5": 90.70600096252441, "epoch": 149, "n_parameters": 5489328} -{"train_lr": 0.0015206553172537663, "train_loss": 2.7367805074230374, "test_loss": 1.1660961163314907, "test_acc1": 72.04400041381835, "test_acc5": 90.50000065368653, "epoch": 150, "n_parameters": 5489328} -{"train_lr": 0.0015050000000000063, "train_loss": 2.733312111106708, "test_loss": 1.2006417465480892, "test_acc1": 71.58200067504883, "test_acc5": 90.33400071533202, "epoch": 151, "n_parameters": 5489328} -{"train_lr": 0.0014893446827462008, "train_loss": 2.7312803056528816, "test_loss": 1.1829913888465275, "test_acc1": 72.06000135253906, "test_acc5": 90.50400056945801, "epoch": 152, "n_parameters": 5489328} -{"train_lr": 0.0014736910822743703, "train_loss": 2.71996616656117, "test_loss": 1.169490132142197, "test_acc1": 71.91800062438965, "test_acc5": 90.46400098999024, "epoch": 153, "n_parameters": 5489328} -{"train_lr": 0.0014580409151781846, "train_loss": 2.7257540726261458, "test_loss": 1.1570914529941299, "test_acc1": 72.13800068908691, "test_acc5": 90.6560006829834, "epoch": 154, "n_parameters": 5489328} -{"train_lr": 0.0014423958976748272, "train_loss": 2.7487421040412046, "test_loss": 1.1761940785429694, "test_acc1": 72.02200087585449, "test_acc5": 90.5940005633545, "epoch": 155, "n_parameters": 5489328} -{"train_lr": 0.0014267577454167891, "train_loss": 2.727275757266463, "test_loss": 1.1534085829149594, "test_acc1": 72.41000091430664, "test_acc5": 90.77000091918946, "epoch": 156, "n_parameters": 5489328} -{"train_lr": 0.0014111281733036705, "train_loss": 2.71384374472163, "test_loss": 1.1772066496989944, "test_acc1": 72.30600072265625, "test_acc5": 90.68600115234375, "epoch": 157, "n_parameters": 5489328} -{"train_lr": 0.0013955088952941799, "train_loss": 2.7097338892573077, "test_loss": 1.1668597967787222, "test_acc1": 72.0740009729004, "test_acc5": 90.49000061767578, "epoch": 158, "n_parameters": 5489328} -{"train_lr": 0.0013799016242181934, "train_loss": 2.718845778196264, "test_loss": 1.1729929494586857, "test_acc1": 72.25400064086914, "test_acc5": 
90.7080006060791, "epoch": 159, "n_parameters": 5489328} -{"train_lr": 0.0013643080715888263, "train_loss": 2.706513445976255, "test_loss": 1.1695706857876345, "test_acc1": 72.04600080688476, "test_acc5": 90.55800101745605, "epoch": 160, "n_parameters": 5489328} -{"train_lr": 0.0013487299474148615, "train_loss": 2.701983116442065, "test_loss": 1.136717946014621, "test_acc1": 72.71000053771972, "test_acc5": 90.83400109375, "epoch": 161, "n_parameters": 5489328} -{"train_lr": 0.0013331689600131439, "train_loss": 2.7060210594742133, "test_loss": 1.1439180848273365, "test_acc1": 72.52800106140137, "test_acc5": 90.69000056274415, "epoch": 162, "n_parameters": 5489328} -{"train_lr": 0.0013176268158213607, "train_loss": 2.701444753675724, "test_loss": 1.1534242440353741, "test_acc1": 72.71800058654784, "test_acc5": 90.81600017028809, "epoch": 163, "n_parameters": 5489328} -{"train_lr": 0.0013021052192107134, "train_loss": 2.701265111458387, "test_loss": 1.1098095022819259, "test_acc1": 73.09200036560058, "test_acc5": 91.13200093383789, "epoch": 164, "n_parameters": 5489328} -{"train_lr": 0.0012866058722992017, "train_loss": 2.686695615057465, "test_loss": 1.171540897678245, "test_acc1": 72.35200070007325, "test_acc5": 90.600000647583, "epoch": 165, "n_parameters": 5489328} -{"train_lr": 0.001271130474764872, "train_loss": 2.680191534493181, "test_loss": 1.1859262050552801, "test_acc1": 72.25400052856445, "test_acc5": 90.53600097473145, "epoch": 166, "n_parameters": 5489328} -{"train_lr": 0.0012556807236594413, "train_loss": 2.682171668973472, "test_loss": 1.1154955517161975, "test_acc1": 72.98800086914062, "test_acc5": 91.14600086975098, "epoch": 167, "n_parameters": 5489328} -{"train_lr": 0.0012402583132222486, "train_loss": 2.6836374318070835, "test_loss": 1.1633765345269984, "test_acc1": 72.2720007763672, "test_acc5": 90.75400100463867, "epoch": 168, "n_parameters": 5489328} -{"train_lr": 0.0012248649346943232, "train_loss": 2.679487286104287, "test_loss": 1.121026681905443, "test_acc1": 72.5580007220459, "test_acc5": 91.01200084411622, "epoch": 169, "n_parameters": 5489328} -{"train_lr": 0.0012095022761332281, "train_loss": 2.6660829035998534, "test_loss": 1.1436210667545146, "test_acc1": 72.98600078918457, "test_acc5": 91.00200018676757, "epoch": 170, "n_parameters": 5489328} -{"train_lr": 0.0011941720222274684, "train_loss": 2.675217351932034, "test_loss": 1.1158477704633365, "test_acc1": 72.97000066223144, "test_acc5": 91.29000096496583, "epoch": 171, "n_parameters": 5489328} -{"train_lr": 0.0011788758541121593, "train_loss": 2.656217823175789, "test_loss": 1.1219261959195137, "test_acc1": 72.7360006463623, "test_acc5": 90.9640009136963, "epoch": 172, "n_parameters": 5489328} -{"train_lr": 0.0011636154491845528, "train_loss": 2.6631108922638194, "test_loss": 1.12674538587982, "test_acc1": 73.07000067871094, "test_acc5": 91.02800106689453, "epoch": 173, "n_parameters": 5489328} -{"train_lr": 0.0011483924809200239, "train_loss": 2.6770155836495277, "test_loss": 1.1095613600178198, "test_acc1": 73.0940010583496, "test_acc5": 91.24800089538574, "epoch": 174, "n_parameters": 5489328} -{"train_lr": 0.0011332086186885677, "train_loss": 2.648247052832759, "test_loss": 1.1026599942283197, "test_acc1": 73.36200082336426, "test_acc5": 91.29000050292969, "epoch": 175, "n_parameters": 5489328} -{"train_lr": 0.0011180655275717427, "train_loss": 2.658767690261205, "test_loss": 1.1265720291571184, "test_acc1": 73.07400102661133, "test_acc5": 90.99200086914063, "epoch": 176, "n_parameters": 5489328} 
-{"train_lr": 0.0011029648681801783, "train_loss": 2.659898082736871, "test_loss": 1.1056312580000272, "test_acc1": 73.3400005090332, "test_acc5": 91.21400053161621, "epoch": 177, "n_parameters": 5489328} -{"train_lr": 0.0010879082964713417, "train_loss": 2.646968958975314, "test_loss": 1.0923045866868712, "test_acc1": 73.6860010534668, "test_acc5": 91.35400087219239, "epoch": 178, "n_parameters": 5489328} -{"train_lr": 0.0010728974635680036, "train_loss": 2.6578639559537103, "test_loss": 1.1224440173669294, "test_acc1": 73.28200066040039, "test_acc5": 91.29800082763671, "epoch": 179, "n_parameters": 5489328} -{"train_lr": 0.0010579340155771634, "train_loss": 2.631283787693337, "test_loss": 1.1290460276332768, "test_acc1": 73.27400086120605, "test_acc5": 91.20000114379883, "epoch": 180, "n_parameters": 5489328} -{"train_lr": 0.0010430195934094646, "train_loss": 2.630698478407711, "test_loss": 1.0871608257293701, "test_acc1": 73.55400110229492, "test_acc5": 91.57200048461914, "epoch": 181, "n_parameters": 5489328} -{"train_lr": 0.001028155832599401, "train_loss": 2.6453926532174186, "test_loss": 1.0986685719002376, "test_acc1": 73.4840005834961, "test_acc5": 91.28800081298829, "epoch": 182, "n_parameters": 5489328} -{"train_lr": 0.001013344363125828, "train_loss": 2.639771506011629, "test_loss": 1.0941215645183215, "test_acc1": 73.65000081481934, "test_acc5": 91.41600058898926, "epoch": 183, "n_parameters": 5489328} -{"train_lr": 0.0009985868092332844, "train_loss": 2.630961024182306, "test_loss": 1.1005237190560861, "test_acc1": 73.6500011138916, "test_acc5": 91.55600101379395, "epoch": 184, "n_parameters": 5489328} -{"train_lr": 0.0009838847892538985, "train_loss": 2.6419538471052686, "test_loss": 1.092784708873792, "test_acc1": 73.76000089904785, "test_acc5": 91.63600066040038, "epoch": 185, "n_parameters": 5489328} -{"train_lr": 0.0009692399154297689, "train_loss": 2.6214854467472586, "test_loss": 1.0863753828135403, "test_acc1": 73.9440008001709, "test_acc5": 91.53600074707032, "epoch": 186, "n_parameters": 5489328} -{"train_lr": 0.0009546537937363906, "train_loss": 2.6226626404826874, "test_loss": 1.0869006663560867, "test_acc1": 73.67400066711426, "test_acc5": 91.51800083435059, "epoch": 187, "n_parameters": 5489328} -{"train_lr": 0.0009401280237063831, "train_loss": 2.607796788358574, "test_loss": 1.0775000730698758, "test_acc1": 74.16600065124511, "test_acc5": 91.61000074829101, "epoch": 188, "n_parameters": 5489328} -{"train_lr": 0.0009256641982541195, "train_loss": 2.6092406416611134, "test_loss": 1.091751673004844, "test_acc1": 73.96600069091797, "test_acc5": 91.40000088806153, "epoch": 189, "n_parameters": 5489328} -{"train_lr": 0.0009112639035010129, "train_loss": 2.600952501812999, "test_loss": 1.0631162652915174, "test_acc1": 74.2620009552002, "test_acc5": 91.63400070556641, "epoch": 190, "n_parameters": 5489328} -{"train_lr": 0.0008969287186016897, "train_loss": 2.5923953903593318, "test_loss": 1.068333013491197, "test_acc1": 74.17200075378418, "test_acc5": 91.7540006213379, "epoch": 191, "n_parameters": 5489328} -{"train_lr": 0.0008826602155707057, "train_loss": 2.60417324247406, "test_loss": 1.0722642717036335, "test_acc1": 74.35400067321777, "test_acc5": 91.84400079162597, "epoch": 192, "n_parameters": 5489328} -{"train_lr": 0.0008684599591102219, "train_loss": 2.6003946437538383, "test_loss": 1.048651290888136, "test_acc1": 74.586000803833, "test_acc5": 91.95800054870605, "epoch": 193, "n_parameters": 5489328} -{"train_lr": 0.0008543295064383764, "train_loss": 
2.5997606179268242, "test_loss": 1.0662899471142075, "test_acc1": 74.39600034484863, "test_acc5": 91.74800090759277, "epoch": 194, "n_parameters": 5489328} -{"train_lr": 0.0008402704071185369, "train_loss": 2.600401768617207, "test_loss": 1.058066845617511, "test_acc1": 74.47000114013672, "test_acc5": 91.98800072814942, "epoch": 195, "n_parameters": 5489328} -{"train_lr": 0.0008262842028893667, "train_loss": 2.57999200207724, "test_loss": 1.049769357524135, "test_acc1": 74.50800104919433, "test_acc5": 91.86400047729492, "epoch": 196, "n_parameters": 5489328} -{"train_lr": 0.0008123724274957999, "train_loss": 2.5846607611119317, "test_loss": 1.0509046051989903, "test_acc1": 74.34600083007813, "test_acc5": 91.7800004449463, "epoch": 197, "n_parameters": 5489328} -{"train_lr": 0.0007985366065207493, "train_loss": 2.5838186333148028, "test_loss": 1.048478253185749, "test_acc1": 74.69000050537109, "test_acc5": 92.0820011114502, "epoch": 198, "n_parameters": 5489328} -{"train_lr": 0.0007847782572179334, "train_loss": 2.584075616060687, "test_loss": 1.0415306741541082, "test_acc1": 74.5200010345459, "test_acc5": 91.96400098999024, "epoch": 199, "n_parameters": 5489328} -{"train_lr": 0.0007710988883453671, "train_loss": 2.5720064666488476, "test_loss": 1.033482779156078, "test_acc1": 74.62400079833985, "test_acc5": 91.98000096496583, "epoch": 200, "n_parameters": 5489328} -{"train_lr": 0.0007575000000000003, "train_loss": 2.570120200085983, "test_loss": 1.0477916171604937, "test_acc1": 74.5300007849121, "test_acc5": 91.97000121582032, "epoch": 201, "n_parameters": 5489328} -{"train_lr": 0.0007439830834532034, "train_loss": 2.5691074629982027, "test_loss": 1.025062590160153, "test_acc1": 75.08200058044433, "test_acc5": 92.21200077392578, "epoch": 202, "n_parameters": 5489328} -{"train_lr": 0.0007305496209871747, "train_loss": 2.559809827475811, "test_loss": 1.0371907245029102, "test_acc1": 74.74400127319336, "test_acc5": 92.14200114929199, "epoch": 203, "n_parameters": 5489328} -{"train_lr": 0.0007172010857324749, "train_loss": 2.564915954494934, "test_loss": 1.0807473998178134, "test_acc1": 74.2440003088379, "test_acc5": 91.6800007183838, "epoch": 204, "n_parameters": 5489328} -{"train_lr": 0.0007039389415063991, "train_loss": 2.5517435227152254, "test_loss": 1.0312461189248345, "test_acc1": 74.99200086669921, "test_acc5": 92.19000057739258, "epoch": 205, "n_parameters": 5489328} -{"train_lr": 0.0006907646426525357, "train_loss": 2.5466621104333043, "test_loss": 1.018813282251358, "test_acc1": 75.20600088562011, "test_acc5": 92.24800092773438, "epoch": 206, "n_parameters": 5489328} -{"train_lr": 0.000677679633881209, "train_loss": 2.552655185941312, "test_loss": 1.0460823320529677, "test_acc1": 74.50200055603027, "test_acc5": 91.96200102539062, "epoch": 207, "n_parameters": 5489328} -{"train_lr": 0.0006646853501110644, "train_loss": 2.5552261030430987, "test_loss": 1.028150273994966, "test_acc1": 75.0700008605957, "test_acc5": 92.21400067993164, "epoch": 208, "n_parameters": 5489328} -{"train_lr": 0.0006517832163117662, "train_loss": 2.535901029797481, "test_loss": 1.035858548500321, "test_acc1": 75.21200107299805, "test_acc5": 92.28600103515625, "epoch": 209, "n_parameters": 5489328} -{"train_lr": 0.0006389746473476941, "train_loss": 2.5355072717943923, "test_loss": 1.0289408666166393, "test_acc1": 75.19400114013672, "test_acc5": 92.0960010369873, "epoch": 210, "n_parameters": 5489328} -{"train_lr": 0.0006262610478227575, "train_loss": 2.5284848421526185, "test_loss": 1.0362500656734814, 
"test_acc1": 75.1000003717041, "test_acc5": 92.32400099365235, "epoch": 211, "n_parameters": 5489328} -{"train_lr": 0.0006136438119264093, "train_loss": 2.5498432767905777, "test_loss": 1.0097397863864899, "test_acc1": 75.59600113342285, "test_acc5": 92.52000104797364, "epoch": 212, "n_parameters": 5489328} -{"train_lr": 0.0006011243232807449, "train_loss": 2.5379712112801824, "test_loss": 1.019668305462057, "test_acc1": 75.1820010357666, "test_acc5": 92.26800073242188, "epoch": 213, "n_parameters": 5489328} -{"train_lr": 0.0005887039547888132, "train_loss": 2.5393184717181776, "test_loss": 1.0052516589110547, "test_acc1": 75.32600108703613, "test_acc5": 92.44000074157715, "epoch": 214, "n_parameters": 5489328} -{"train_lr": 0.0005763840684839261, "train_loss": 2.5160723232108055, "test_loss": 1.0046497014435856, "test_acc1": 75.46000109313965, "test_acc5": 92.5080009185791, "epoch": 215, "n_parameters": 5489328} -{"train_lr": 0.0005641660153804961, "train_loss": 2.5040251108096374, "test_loss": 0.9810452298684553, "test_acc1": 75.81800123046875, "test_acc5": 92.57600110778809, "epoch": 216, "n_parameters": 5489328} -{"train_lr": 0.0005520511353257044, "train_loss": 2.508633638493163, "test_loss": 0.9988361339677464, "test_acc1": 75.41200061340332, "test_acc5": 92.40000098754882, "epoch": 217, "n_parameters": 5489328} -{"train_lr": 0.0005400407568526992, "train_loss": 2.508338349614498, "test_loss": 0.9957357821139422, "test_acc1": 75.65200044799805, "test_acc5": 92.46800108154297, "epoch": 218, "n_parameters": 5489328} -{"train_lr": 0.00052813619703479, "train_loss": 2.5098287384930273, "test_loss": 0.9912437295371835, "test_acc1": 75.84800080871582, "test_acc5": 92.45400080444335, "epoch": 219, "n_parameters": 5489328} -{"train_lr": 0.0005163387613411468, "train_loss": 2.499951646339407, "test_loss": 0.9997073527086865, "test_acc1": 75.68000053955078, "test_acc5": 92.67000037780761, "epoch": 220, "n_parameters": 5489328} -{"train_lr": 0.0005046497434935117, "train_loss": 2.4989565651622607, "test_loss": 0.999553532762961, "test_acc1": 75.60800078552246, "test_acc5": 92.5520008972168, "epoch": 221, "n_parameters": 5489328} -{"train_lr": 0.0004930704253244259, "train_loss": 2.488104790020332, "test_loss": 1.00661229071292, "test_acc1": 75.48200063171387, "test_acc5": 92.56000098510742, "epoch": 222, "n_parameters": 5489328} -{"train_lr": 0.0004816020766366113, "train_loss": 2.50141910633309, "test_loss": 0.9646504656835035, "test_acc1": 76.25200074768067, "test_acc5": 92.92200072937011, "epoch": 223, "n_parameters": 5489328} -{"train_lr": 0.00047024595506373474, "train_loss": 2.4756679499892593, "test_loss": 0.9683435200290247, "test_acc1": 76.10000065368652, "test_acc5": 92.71800062255859, "epoch": 224, "n_parameters": 5489328} -{"train_lr": 0.0004590033059325158, "train_loss": 2.485948639522068, "test_loss": 0.9670453999530185, "test_acc1": 76.41800088623047, "test_acc5": 92.71400125122071, "epoch": 225, "n_parameters": 5489328} -{"train_lr": 0.0004478753621261082, "train_loss": 2.4832942381465464, "test_loss": 0.9604323587634347, "test_acc1": 76.48800116760253, "test_acc5": 92.81600050292968, "epoch": 226, "n_parameters": 5489328} -{"train_lr": 0.0004368633439489532, "train_loss": 2.478800973267578, "test_loss": 0.9644632339477539, "test_acc1": 76.51200071105957, "test_acc5": 92.92600062255859, "epoch": 227, "n_parameters": 5489328} -{"train_lr": 0.00042596845899295257, "train_loss": 2.4864996233932692, "test_loss": 0.9891165914860639, "test_acc1": 76.29800079101562, "test_acc5": 
92.90400073791504, "epoch": 228, "n_parameters": 5489328} -{"train_lr": 0.00041519190200499167, "train_loss": 2.4742173160508956, "test_loss": 0.9703729057853873, "test_acc1": 76.41000023742676, "test_acc5": 92.7740002960205, "epoch": 229, "n_parameters": 5489328} -{"train_lr": 0.0004045348547559999, "train_loss": 2.457650666828636, "test_loss": 0.9675034541975368, "test_acc1": 76.42000048217774, "test_acc5": 92.88200091125488, "epoch": 230, "n_parameters": 5489328} -{"train_lr": 0.0003939984859112978, "train_loss": 2.457672262077423, "test_loss": 0.9513290863145482, "test_acc1": 76.58400076965331, "test_acc5": 92.99400054138184, "epoch": 231, "n_parameters": 5489328} -{"train_lr": 0.0003835839509024671, "train_loss": 2.4515049376433415, "test_loss": 0.9455240524627946, "test_acc1": 76.72200102478027, "test_acc5": 93.07000078491211, "epoch": 232, "n_parameters": 5489328} -{"train_lr": 0.0003732923918006212, "train_loss": 2.445595462473748, "test_loss": 0.9495904201811011, "test_acc1": 76.60600036682129, "test_acc5": 93.09200052001952, "epoch": 233, "n_parameters": 5489328} -{"train_lr": 0.0003631249371912071, "train_loss": 2.4532173156238004, "test_loss": 0.9700723080472513, "test_acc1": 76.2500008795166, "test_acc5": 92.86200078613281, "epoch": 234, "n_parameters": 5489328} -{"train_lr": 0.0003530827020501991, "train_loss": 2.4553281284636443, "test_loss": 0.9540502219037577, "test_acc1": 76.60600050903321, "test_acc5": 93.03400085327148, "epoch": 235, "n_parameters": 5489328} -{"train_lr": 0.00034316678762182974, "train_loss": 2.4407229009006235, "test_loss": 0.9393852237950672, "test_acc1": 76.8520004272461, "test_acc5": 93.11600073730469, "epoch": 236, "n_parameters": 5489328} -{"train_lr": 0.0003333782812978692, "train_loss": 2.452535758844668, "test_loss": 0.9400186362591657, "test_acc1": 76.94400067993163, "test_acc5": 93.14200082702637, "epoch": 237, "n_parameters": 5489328} -{"train_lr": 0.0003237182564983461, "train_loss": 2.437413530384036, "test_loss": 0.9423322189937938, "test_acc1": 76.84200081115722, "test_acc5": 93.0900008392334, "epoch": 238, "n_parameters": 5489328} -{"train_lr": 0.0003141877725538217, "train_loss": 2.4419937852403812, "test_loss": 0.9298432726751674, "test_acc1": 77.0720005621338, "test_acc5": 93.18800047912598, "epoch": 239, "n_parameters": 5489328} -{"train_lr": 0.00030478787458928857, "train_loss": 2.436965294593244, "test_loss": 0.9230398020961068, "test_acc1": 77.12800069824219, "test_acc5": 93.2700006414795, "epoch": 240, "n_parameters": 5489328} -{"train_lr": 0.0002955195934094545, "train_loss": 2.416370161252914, "test_loss": 0.9432167007841847, "test_acc1": 76.82600111206055, "test_acc5": 93.14400065490723, "epoch": 241, "n_parameters": 5489328} -{"train_lr": 0.00028638394538580574, "train_loss": 2.4378121172209726, "test_loss": 0.926462972028689, "test_acc1": 77.18400064086914, "test_acc5": 93.29600081481934, "epoch": 242, "n_parameters": 5489328} -{"train_lr": 0.0002773819323451111, "train_loss": 2.4119101976104775, "test_loss": 0.9164459119466218, "test_acc1": 77.13400038024902, "test_acc5": 93.44600058044433, "epoch": 243, "n_parameters": 5489328} -{"train_lr": 0.00026851454145953397, "train_loss": 2.420333356701499, "test_loss": 0.9156691831621256, "test_acc1": 77.34200098815919, "test_acc5": 93.3960003881836, "epoch": 244, "n_parameters": 5489328} -{"train_lr": 0.00025978274513840325, "train_loss": 2.412523080869544, "test_loss": 0.9295811978253451, "test_acc1": 77.18600069885254, "test_acc5": 93.29000088745117, "epoch": 245, 
"n_parameters": 5489328} -{"train_lr": 0.00025118750092159413, "train_loss": 2.4228956761548845, "test_loss": 0.9213146550411527, "test_acc1": 77.24400077514649, "test_acc5": 93.35800024719238, "epoch": 246, "n_parameters": 5489328} -{"train_lr": 0.00024272975137448746, "train_loss": 2.4302421232922184, "test_loss": 0.9227185530418699, "test_acc1": 77.40200085205078, "test_acc5": 93.41200071472169, "epoch": 247, "n_parameters": 5489328} -{"train_lr": 0.0002344104239846416, "train_loss": 2.4103884291377287, "test_loss": 0.9143274897201494, "test_acc1": 77.55400057373046, "test_acc5": 93.39400121032715, "epoch": 248, "n_parameters": 5489328} -{"train_lr": 0.00022623043106004268, "train_loss": 2.413554388663466, "test_loss": 0.9208044389432127, "test_acc1": 77.42800118896484, "test_acc5": 93.43200094665528, "epoch": 249, "n_parameters": 5489328} -{"train_lr": 0.00021819066962910723, "train_loss": 2.4089133355686134, "test_loss": 0.911855207925493, "test_acc1": 77.69800108215333, "test_acc5": 93.37400060668945, "epoch": 250, "n_parameters": 5489328} -{"train_lr": 0.0002102920213422626, "train_loss": 2.3919918026855522, "test_loss": 0.9014351191845807, "test_acc1": 77.67000116455078, "test_acc5": 93.49400063903809, "epoch": 251, "n_parameters": 5489328} -{"train_lr": 0.00020253535237531743, "train_loss": 2.382273366732849, "test_loss": 0.9178736142136834, "test_acc1": 77.48400079589844, "test_acc5": 93.414001171875, "epoch": 252, "n_parameters": 5489328} -{"train_lr": 0.00019492151333441997, "train_loss": 2.381275333910823, "test_loss": 0.9032369476150383, "test_acc1": 77.67600041503906, "test_acc5": 93.5560009765625, "epoch": 253, "n_parameters": 5489328} -{"train_lr": 0.0001874513391628396, "train_loss": 2.3865845045454495, "test_loss": 0.9050272096964446, "test_acc1": 77.73000063171386, "test_acc5": 93.53200079223633, "epoch": 254, "n_parameters": 5489328} -{"train_lr": 0.0001801256490493344, "train_loss": 2.3941760980468287, "test_loss": 0.9038314656777815, "test_acc1": 77.71200040649414, "test_acc5": 93.48200058044434, "epoch": 255, "n_parameters": 5489328} -{"train_lr": 0.0001729452463383904, "train_loss": 2.3823429156335996, "test_loss": 0.9007301730188456, "test_acc1": 77.88800092956544, "test_acc5": 93.63400074035644, "epoch": 256, "n_parameters": 5489328} -{"train_lr": 0.00016591091844207897, "train_loss": 2.3909174540131497, "test_loss": 0.8972184089097109, "test_acc1": 77.92600046203613, "test_acc5": 93.60400059387207, "epoch": 257, "n_parameters": 5489328} -{"train_lr": 0.00015902343675372216, "train_loss": 2.362817195286568, "test_loss": 0.8894396431066773, "test_acc1": 77.96600097106933, "test_acc5": 93.75400071289063, "epoch": 258, "n_parameters": 5489328} -{"train_lr": 0.0001522835565632992, "train_loss": 2.379428905334404, "test_loss": 0.900025965476578, "test_acc1": 77.91600065734863, "test_acc5": 93.6040007836914, "epoch": 259, "n_parameters": 5489328} -{"train_lr": 0.00014569201697463283, "train_loss": 2.3821296116931263, "test_loss": 0.8999610245227814, "test_acc1": 77.85000065490722, "test_acc5": 93.58600085205079, "epoch": 260, "n_parameters": 5489328} -{"train_lr": 0.00013924954082431066, "train_loss": 2.3617400605258325, "test_loss": 0.8922458660196174, "test_acc1": 77.96400058227539, "test_acc5": 93.63600050903321, "epoch": 261, "n_parameters": 5489328} -{"train_lr": 0.00013295683460244701, "train_loss": 2.363371703193056, "test_loss": 0.8902311277660456, "test_acc1": 77.96600112792969, "test_acc5": 93.67200095703124, "epoch": 262, "n_parameters": 5489328} 
-{"train_lr": 0.00012681458837518887, "train_loss": 2.362482649173668, "test_loss": 0.8872034272009676, "test_acc1": 78.10400048156738, "test_acc5": 93.7100009613037, "epoch": 263, "n_parameters": 5489328} -{"train_lr": 0.00012082347570905826, "train_loss": 2.380023015774697, "test_loss": 0.8855606940659609, "test_acc1": 78.01400048156738, "test_acc5": 93.69200064453125, "epoch": 264, "n_parameters": 5489328} -{"train_lr": 0.00011498415359706417, "train_loss": 2.342684495220367, "test_loss": 0.8852495466443625, "test_acc1": 78.11800085632325, "test_acc5": 93.76200103942871, "epoch": 265, "n_parameters": 5489328} -{"train_lr": 0.00010929726238668528, "train_loss": 2.3575089012475896, "test_loss": 0.8809825513850559, "test_acc1": 78.2880007232666, "test_acc5": 93.69800049438477, "epoch": 266, "n_parameters": 5489328} -{"train_lr": 0.000103763425709623, "train_loss": 2.3688723870532975, "test_loss": 0.8968683708120476, "test_acc1": 77.91600059936523, "test_acc5": 93.67400097473144, "epoch": 267, "n_parameters": 5489328} -{"train_lr": 9.838325041343348e-05, "train_loss": 2.355187349432378, "test_loss": 0.880853113125671, "test_acc1": 78.2200005517578, "test_acc5": 93.78200108032226, "epoch": 268, "n_parameters": 5489328} -{"train_lr": 9.315732649496637e-05, "train_loss": 2.34280242926354, "test_loss": 0.8830713704228401, "test_acc1": 78.27800056640625, "test_acc5": 93.78000067932129, "epoch": 269, "n_parameters": 5489328} -{"train_lr": 8.808622703566918e-05, "train_loss": 2.341078065103478, "test_loss": 0.8783232102339918, "test_acc1": 78.22000091186523, "test_acc5": 93.8480009338379, "epoch": 270, "n_parameters": 5489328} -{"train_lr": 8.317050813874646e-05, "train_loss": 2.3441224316898865, "test_loss": 0.8771902360022068, "test_acc1": 78.18400054992676, "test_acc5": 93.81200080688477, "epoch": 271, "n_parameters": 5489328} -{"train_lr": 7.84107088681662e-05, "train_loss": 2.3454227253830404, "test_loss": 0.87601864812049, "test_acc1": 78.24000083862305, "test_acc5": 93.89800061096192, "epoch": 272, "n_parameters": 5489328} -{"train_lr": 7.380735118955883e-05, "train_loss": 2.337904933652432, "test_loss": 0.8770983828739687, "test_acc1": 78.26200006347656, "test_acc5": 93.87000075683594, "epoch": 273, "n_parameters": 5489328} -{"train_lr": 6.936093991297094e-05, "train_loss": 2.3417823917860034, "test_loss": 0.8710269193080339, "test_acc1": 78.28400085205078, "test_acc5": 93.84800058044434, "epoch": 274, "n_parameters": 5489328} -{"train_lr": 6.507196263750235e-05, "train_loss": 2.333819971346169, "test_loss": 0.8749676919118925, "test_acc1": 78.27600077514649, "test_acc5": 93.82800092956543, "epoch": 275, "n_parameters": 5489328} -{"train_lr": 6.094088969784356e-05, "train_loss": 2.326003105198737, "test_loss": 0.8725792362608693, "test_acc1": 78.41600105224609, "test_acc5": 93.8180012475586, "epoch": 276, "n_parameters": 5489328} -{"train_lr": 5.696817411269586e-05, "train_loss": 2.3274167423053895, "test_loss": 0.8680824302136898, "test_acc1": 78.51400083435058, "test_acc5": 93.90800055847168, "epoch": 277, "n_parameters": 5489328} -{"train_lr": 5.315425153509348e-05, "train_loss": 2.336997047281094, "test_loss": 0.8682596205987714, "test_acc1": 78.44400088562011, "test_acc5": 93.952001015625, "epoch": 278, "n_parameters": 5489328} -{"train_lr": 4.9499540204625534e-05, "train_loss": 2.330068415767855, "test_loss": 0.8720369576053186, "test_acc1": 78.42600038269043, "test_acc5": 93.89400079162597, "epoch": 279, "n_parameters": 5489328} -{"train_lr": 4.600444090157376e-05, "train_loss": 
2.3371108116791977, "test_loss": 0.8707644018259916, "test_acc1": 78.4820006451416, "test_acc5": 93.94600119018554, "epoch": 280, "n_parameters": 5489328} -{"train_lr": 4.26693369029598e-05, "train_loss": 2.3327449468685852, "test_loss": 0.8687785870649598, "test_acc1": 78.4980007043457, "test_acc5": 93.92000095825195, "epoch": 281, "n_parameters": 5489328} -{"train_lr": 3.9494593940526046e-05, "train_loss": 2.325679314722546, "test_loss": 0.8714613999155435, "test_acc1": 78.48800060913086, "test_acc5": 93.938000703125, "epoch": 282, "n_parameters": 5489328} -{"train_lr": 3.648056016061075e-05, "train_loss": 2.3183054037803084, "test_loss": 0.8668204498561946, "test_acc1": 78.4840006842041, "test_acc5": 93.93600100341797, "epoch": 283, "n_parameters": 5489328} -{"train_lr": 3.362756608598404e-05, "train_loss": 2.3169099378714457, "test_loss": 0.8711183338680051, "test_acc1": 78.48800068115234, "test_acc5": 93.98000087341309, "epoch": 284, "n_parameters": 5489328} -{"train_lr": 3.093592457959537e-05, "train_loss": 2.3186273350775672, "test_loss": 0.8700975227085027, "test_acc1": 78.52800067321778, "test_acc5": 93.94800077392578, "epoch": 285, "n_parameters": 5489328} -{"train_lr": 2.8405930810268923e-05, "train_loss": 2.3303842530619328, "test_loss": 0.8696487410502001, "test_acc1": 78.60600080810546, "test_acc5": 93.91000049560547, "epoch": 286, "n_parameters": 5489328} -{"train_lr": 2.6037862220332676e-05, "train_loss": 2.3138679277410894, "test_loss": 0.8683652630583807, "test_acc1": 78.59200096374512, "test_acc5": 94.02400085571288, "epoch": 287, "n_parameters": 5489328} -{"train_lr": 2.3831978495191956e-05, "train_loss": 2.3272241153162447, "test_loss": 0.8806463994763114, "test_acc1": 78.46000049987794, "test_acc5": 93.91600079040528, "epoch": 288, "n_parameters": 5489328} -{"train_lr": 2.1788521534855537e-05, "train_loss": 2.3170523319741805, "test_loss": 0.8650245331227779, "test_acc1": 78.58200076660157, "test_acc5": 93.97200118652344, "epoch": 289, "n_parameters": 5489328} -{"train_lr": 1.990771542740754e-05, "train_loss": 2.3151823685680935, "test_loss": 0.8753454759716988, "test_acc1": 78.46600063476562, "test_acc5": 93.8980005303955, "epoch": 290, "n_parameters": 5489328} -{"train_lr": 1.818976642443126e-05, "train_loss": 2.3058120840602068, "test_loss": 0.8663493079895322, "test_acc1": 78.51200056274413, "test_acc5": 93.95800080566406, "epoch": 291, "n_parameters": 5489328} -{"train_lr": 1.6634862918395725e-05, "train_loss": 2.302420408319798, "test_loss": 0.8672037734226747, "test_acc1": 78.64200114929199, "test_acc5": 94.0060003527832, "epoch": 292, "n_parameters": 5489328} -{"train_lr": 1.5243175421991016e-05, "train_loss": 2.3128656267905408, "test_loss": 0.8675745326009664, "test_acc1": 78.59600097473144, "test_acc5": 93.99400063171387, "epoch": 293, "n_parameters": 5489328} -{"train_lr": 1.401485654943523e-05, "train_loss": 2.310560715534418, "test_loss": 0.8600679818879474, "test_acc1": 78.70200073364258, "test_acc5": 93.99800098571778, "epoch": 294, "n_parameters": 5489328} -{"train_lr": 1.2950040999734157e-05, "train_loss": 2.3136999860441656, "test_loss": 0.866105316714807, "test_acc1": 78.65800076904297, "test_acc5": 94.0060008782959, "epoch": 295, "n_parameters": 5489328} -{"train_lr": 1.2048845541912047e-05, "train_loss": 2.3161479929368274, "test_loss": 0.8627566373483702, "test_acc1": 78.68400093933106, "test_acc5": 94.01200065795898, "epoch": 296, "n_parameters": 5489328} -{"train_lr": 1.1311369002206675e-05, "train_loss": 2.318793747565157, "test_loss": 
0.8604081164706837, "test_acc1": 78.73800100097657, "test_acc5": 94.01000094604493, "epoch": 297, "n_parameters": 5489328} -{"train_lr": 1.0737692253231284e-05, "train_loss": 2.312896420594028, "test_loss": 0.863391715017232, "test_acc1": 78.63800063842774, "test_acc5": 93.96800065307617, "epoch": 298, "n_parameters": 5489328} -{"train_lr": 1.0327878205106021e-05, "train_loss": 2.3050328499550443, "test_loss": 0.8632796481251717, "test_acc1": 78.58200132873534, "test_acc5": 94.00600107849121, "epoch": 299, "n_parameters": 5489328} diff --git a/cv/classification/repvit/pytorch/logs/repvit_m0_9_distill_450e.txt b/cv/classification/repvit/pytorch/logs/repvit_m0_9_distill_450e.txt deleted file mode 100644 index 4d1b8355..00000000 --- a/cv/classification/repvit/pytorch/logs/repvit_m0_9_distill_450e.txt +++ /dev/null @@ -1,450 +0,0 @@ -{"train_lr": 1.000000000000014e-06, "train_loss": 6.982682788944245, "test_loss": 6.936226417036617, "test_acc1": 0.11000000747680665, "test_acc5": 0.5740000230073928, "epoch": 0, "n_parameters": 5489328} -{"train_lr": 1.000000000000014e-06, "train_loss": 6.981119913482666, "test_loss": 6.929457773180569, "test_acc1": 0.1180000080871582, "test_acc5": 0.5960000260591507, "epoch": 1, "n_parameters": 5489328} -{"train_lr": 0.0008007999999999933, "train_loss": 6.485806449985504, "test_loss": 5.328061075771556, "test_acc1": 7.024000212440491, "test_acc5": 20.110000636062622, "epoch": 2, "n_parameters": 5489328} -{"train_lr": 0.0016005999999999787, "train_loss": 5.905580425548553, "test_loss": 4.179676476646872, "test_acc1": 17.60400049316406, "test_acc5": 38.950000974731445, "epoch": 3, "n_parameters": 5489328} -{"train_lr": 0.0024003999999999835, "train_loss": 5.348255184555054, "test_loss": 3.476589534212561, "test_acc1": 27.648000910186767, "test_acc5": 51.89800137573242, "epoch": 4, "n_parameters": 5489328} -{"train_lr": 0.0032001999999999873, "train_loss": 4.928900110912323, "test_loss": 3.024628195692511, "test_acc1": 35.196001001434325, "test_acc5": 60.708001903381344, "epoch": 5, "n_parameters": 5489328} -{"train_lr": 0.003998784699903044, "train_loss": 4.635436840152741, "test_loss": 2.750154462807319, "test_acc1": 40.294001303710935, "test_acc5": 66.0620019543457, "epoch": 6, "n_parameters": 5489328} -{"train_lr": 0.0039982500460471835, "train_loss": 4.387199071502685, "test_loss": 2.502233590273296, "test_acc1": 44.672001258850095, "test_acc5": 70.35200212219239, "epoch": 7, "n_parameters": 5489328} -{"train_lr": 0.003997618243996162, "train_loss": 4.206799716997146, "test_loss": 2.293640235767645, "test_acc1": 48.50200118270874, "test_acc5": 73.46200244812012, "epoch": 8, "n_parameters": 5489328} -{"train_lr": 0.003996889324543062, "train_loss": 4.064500818634033, "test_loss": 2.1734732775127186, "test_acc1": 51.12400143234253, "test_acc5": 75.78800251281739, "epoch": 9, "n_parameters": 5489328} -{"train_lr": 0.003996063323214417, "train_loss": 3.9570414434432983, "test_loss": 2.1125502525007023, "test_acc1": 52.2420013381958, "test_acc5": 76.40600247253418, "epoch": 10, "n_parameters": 5489328} -{"train_lr": 0.003995140280268348, "train_loss": 3.866841499519348, "test_loss": 2.0373643470161102, "test_acc1": 53.78200138916016, "test_acc5": 77.76800242706298, "epoch": 11, "n_parameters": 5489328} -{"train_lr": 0.003994120240692708, "train_loss": 3.8035504395961763, "test_loss": 1.9470819944844526, "test_acc1": 55.33800143157959, "test_acc5": 79.11200235137939, "epoch": 12, "n_parameters": 5489328} -{"train_lr": 0.003993003254202742, "train_loss": 
3.7340546696662904, "test_loss": 1.8524297458284043, "test_acc1": 57.30200146087647, "test_acc5": 80.75200217346192, "epoch": 13, "n_parameters": 5489328} -{"train_lr": 0.0039917893752388625, "train_loss": 3.6687149409294126, "test_loss": 1.8518594144021763, "test_acc1": 57.23400156036377, "test_acc5": 80.64400240203858, "epoch": 14, "n_parameters": 5489328} -{"train_lr": 0.003990478662963736, "train_loss": 3.6198748785972596, "test_loss": 1.7983430207652205, "test_acc1": 58.46400160491943, "test_acc5": 81.50200274597168, "epoch": 15, "n_parameters": 5489328} -{"train_lr": 0.003989071181259754, "train_loss": 3.5829882686138155, "test_loss": 1.784463669447338, "test_acc1": 59.05400185028076, "test_acc5": 81.7300026147461, "epoch": 16, "n_parameters": 5489328} -{"train_lr": 0.0039875669987254015, "train_loss": 3.5467581224441527, "test_loss": 1.7559054415015614, "test_acc1": 59.240001843566894, "test_acc5": 82.16600265808106, "epoch": 17, "n_parameters": 5489328} -{"train_lr": 0.003985966188672574, "train_loss": 3.5381131104946135, "test_loss": 1.7416115661754328, "test_acc1": 59.59400177642822, "test_acc5": 82.30400252349854, "epoch": 18, "n_parameters": 5489328} -{"train_lr": 0.0039842688291223715, "train_loss": 3.484798039484024, "test_loss": 1.7198421876220142, "test_acc1": 60.78200176818848, "test_acc5": 83.15000267059327, "epoch": 19, "n_parameters": 5489328} -{"train_lr": 0.003982475002801825, "train_loss": 3.4620894355773926, "test_loss": 1.7029116149334347, "test_acc1": 60.510001813049314, "test_acc5": 83.01400251129151, "epoch": 20, "n_parameters": 5489328} -{"train_lr": 0.003980584797139465, "train_loss": 3.4461016340732575, "test_loss": 1.6450615577838, "test_acc1": 61.95600199432373, "test_acc5": 84.07800253051758, "epoch": 21, "n_parameters": 5489328} -{"train_lr": 0.003978598304261148, "train_loss": 3.4027491386413575, "test_loss": 1.6297660360441488, "test_acc1": 62.54000175720215, "test_acc5": 84.16800247894287, "epoch": 22, "n_parameters": 5489328} -{"train_lr": 0.003976515620985842, "train_loss": 3.3880037083148955, "test_loss": 1.6342102144570911, "test_acc1": 62.22200176208496, "test_acc5": 83.99800256469726, "epoch": 23, "n_parameters": 5489328} -{"train_lr": 0.0039743368488206155, "train_loss": 3.377678227519989, "test_loss": 1.6501361350802815, "test_acc1": 62.02000161407471, "test_acc5": 84.1580025088501, "epoch": 24, "n_parameters": 5489328} -{"train_lr": 0.0039720620939556715, "train_loss": 3.3598640303134917, "test_loss": 1.6167812592843, "test_acc1": 62.2520017553711, "test_acc5": 84.18600245605468, "epoch": 25, "n_parameters": 5489328} -{"train_lr": 0.003969691467259384, "train_loss": 3.3399931443214417, "test_loss": 1.5670365283594412, "test_acc1": 63.616001839294434, "test_acc5": 85.16800248931885, "epoch": 26, "n_parameters": 5489328} -{"train_lr": 0.003967225084272694, "train_loss": 3.305207725429535, "test_loss": 1.5403287437032251, "test_acc1": 63.704001997375485, "test_acc5": 85.17800261291504, "epoch": 27, "n_parameters": 5489328} -{"train_lr": 0.003964663065203757, "train_loss": 3.3074219027519227, "test_loss": 1.5465779646354563, "test_acc1": 63.722001624145506, "test_acc5": 85.42200269683838, "epoch": 28, "n_parameters": 5489328} -{"train_lr": 0.003962005534921608, "train_loss": 3.291002331829071, "test_loss": 1.53234654095243, "test_acc1": 63.94200179199219, "test_acc5": 85.51200239746093, "epoch": 29, "n_parameters": 5489328} -{"train_lr": 0.003959252622950646, "train_loss": 3.2912186820983886, "test_loss": 1.5318346089299988, "test_acc1": 
63.854001684570314, "test_acc5": 85.38600265472412, "epoch": 30, "n_parameters": 5489328} -{"train_lr": 0.003956404463463954, "train_loss": 3.258279404592514, "test_loss": 1.500951527234386, "test_acc1": 64.84400198486328, "test_acc5": 85.72800285888673, "epoch": 31, "n_parameters": 5489328} -{"train_lr": 0.003953461195276696, "train_loss": 3.252042481660843, "test_loss": 1.5663895953227491, "test_acc1": 63.87400165527344, "test_acc5": 85.28200245330811, "epoch": 32, "n_parameters": 5489328} -{"train_lr": 0.003950422961839594, "train_loss": 3.2522083000421524, "test_loss": 1.5594305058612543, "test_acc1": 64.19000179260254, "test_acc5": 85.36400272247315, "epoch": 33, "n_parameters": 5489328} -{"train_lr": 0.00394728991123201, "train_loss": 3.240077836537361, "test_loss": 1.5672123905490427, "test_acc1": 63.74000185119629, "test_acc5": 85.28000266265869, "epoch": 34, "n_parameters": 5489328} -{"train_lr": 0.003944062196154177, "train_loss": 3.2300604409217835, "test_loss": 1.4874895399107653, "test_acc1": 65.09600183685303, "test_acc5": 86.1300025289917, "epoch": 35, "n_parameters": 5489328} -{"train_lr": 0.003940739973920592, "train_loss": 3.2178120161533355, "test_loss": 1.5133764682447208, "test_acc1": 64.38200207153321, "test_acc5": 85.56400281158447, "epoch": 36, "n_parameters": 5489328} -{"train_lr": 0.003937323406451619, "train_loss": 3.2066711245298385, "test_loss": 1.4726855474359848, "test_acc1": 65.19000200286865, "test_acc5": 86.19600262451172, "epoch": 37, "n_parameters": 5489328} -{"train_lr": 0.003933812660265883, "train_loss": 3.2089511155605317, "test_loss": 1.4796281860155218, "test_acc1": 65.29400186828613, "test_acc5": 86.11800242401124, "epoch": 38, "n_parameters": 5489328} -{"train_lr": 0.003930207906472293, "train_loss": 3.192045785331726, "test_loss": 1.4959008816410513, "test_acc1": 64.95200193359375, "test_acc5": 86.06600268005371, "epoch": 39, "n_parameters": 5489328} -{"train_lr": 0.003926509320761305, "train_loss": 3.179773176622391, "test_loss": 1.479294702410698, "test_acc1": 64.96000180389404, "test_acc5": 86.23400266906738, "epoch": 40, "n_parameters": 5489328} -{"train_lr": 0.003922717083396902, "train_loss": 3.1758796588897704, "test_loss": 1.5646396524765913, "test_acc1": 63.806001772766116, "test_acc5": 85.17400242370606, "epoch": 41, "n_parameters": 5489328} -{"train_lr": 0.003918831379207381, "train_loss": 3.1692817989826203, "test_loss": 1.4891098417779978, "test_acc1": 64.79800197235107, "test_acc5": 86.33800247589112, "epoch": 42, "n_parameters": 5489328} -{"train_lr": 0.003914852397576493, "train_loss": 3.163010068964958, "test_loss": 1.4883535447366096, "test_acc1": 65.30000189331055, "test_acc5": 86.34400275878906, "epoch": 43, "n_parameters": 5489328} -{"train_lr": 0.003910780332434081, "train_loss": 3.1578324907302857, "test_loss": 1.4753978984320866, "test_acc1": 65.26000228179932, "test_acc5": 86.01400231658936, "epoch": 44, "n_parameters": 5489328} -{"train_lr": 0.003906615382246946, "train_loss": 3.146924594163895, "test_loss": 1.44475304160048, "test_acc1": 65.34800198455811, "test_acc5": 86.52200256134033, "epoch": 45, "n_parameters": 5489328} -{"train_lr": 0.0039023577500088094, "train_loss": 3.136498975610733, "test_loss": 1.480575817911064, "test_acc1": 65.16000184814453, "test_acc5": 86.38800255981445, "epoch": 46, "n_parameters": 5489328} -{"train_lr": 0.003898007643230756, "train_loss": 3.1413722690105437, "test_loss": 1.4834517240524292, "test_acc1": 65.77800218017578, "test_acc5": 86.55400256286622, "epoch": 47, "n_parameters": 
[Training-log excerpt removed by this diff: per-epoch JSON records (epochs 48 through 392) for a 5,489,328-parameter model, each reporting train_lr, train_loss, test_loss, test_acc1, test_acc5, epoch, and n_parameters. Over this span the learning rate decays from ~3.9e-3 to ~1.7e-4, train_loss falls from ~3.14 to ~2.33, test_loss from ~1.45 to ~0.87, and top-1/top-5 accuracy improve from 65.7%/86.7% to 78.3%/93.9%. The log continues before and after this excerpt.]
"test_loss": 0.8633257656851235, "test_acc1": 78.30400237792969, "test_acc5": 93.92000242462159, "epoch": 393, "n_parameters": 5489328} -{"train_lr": 0.00016588301258094182, "train_loss": 2.326644114995003, "test_loss": 0.8589911460876465, "test_acc1": 78.2580027545166, "test_acc5": 93.9120024887085, "epoch": 394, "n_parameters": 5489328} -{"train_lr": 0.0001605306629434379, "train_loss": 2.3221926173448564, "test_loss": 0.867661322302678, "test_acc1": 78.14200245056152, "test_acc5": 93.78800266113281, "epoch": 395, "n_parameters": 5489328} -{"train_lr": 0.00015526821013925752, "train_loss": 2.323915548610687, "test_loss": 0.8589649608030039, "test_acc1": 78.43000256011963, "test_acc5": 93.98000248535156, "epoch": 396, "n_parameters": 5489328} -{"train_lr": 0.00015009591065294023, "train_loss": 2.3179795355796813, "test_loss": 0.8534053669256323, "test_acc1": 78.46400244842529, "test_acc5": 93.934002288208, "epoch": 397, "n_parameters": 5489328} -{"train_lr": 0.00014501401657505492, "train_loss": 2.322910798048973, "test_loss": 0.8558578131829991, "test_acc1": 78.33800268463135, "test_acc5": 94.02200278442383, "epoch": 398, "n_parameters": 5489328} -{"train_lr": 0.0001400227755899522, "train_loss": 2.317503710269928, "test_loss": 0.8462123835788053, "test_acc1": 78.64200275482177, "test_acc5": 94.12200248687743, "epoch": 399, "n_parameters": 5489328} -{"train_lr": 0.00013512243096367772, "train_loss": 2.307237675642967, "test_loss": 0.8448777805794688, "test_acc1": 78.58200236053467, "test_acc5": 94.08200258636475, "epoch": 400, "n_parameters": 5489328} -{"train_lr": 0.00013031322153211376, "train_loss": 2.3119441409111023, "test_loss": 0.8475763821864829, "test_acc1": 78.51600261535644, "test_acc5": 93.95400258972168, "epoch": 401, "n_parameters": 5489328} -{"train_lr": 0.00012559538168934326, "train_loss": 2.315199934267998, "test_loss": 0.8498126917025622, "test_acc1": 78.49800247589111, "test_acc5": 94.03400276397706, "epoch": 402, "n_parameters": 5489328} -{"train_lr": 0.00012096914137622728, "train_loss": 2.3118947125196456, "test_loss": 0.8427636119372705, "test_acc1": 78.64000227508545, "test_acc5": 93.9900025189209, "epoch": 403, "n_parameters": 5489328} -{"train_lr": 0.00011643472606918499, "train_loss": 2.302748835539818, "test_loss": 0.8509843667202136, "test_acc1": 78.68400257873535, "test_acc5": 93.99400228607178, "epoch": 404, "n_parameters": 5489328} -{"train_lr": 0.00011199235676923019, "train_loss": 2.3066329703092574, "test_loss": 0.8428904157789314, "test_acc1": 78.73000243682861, "test_acc5": 94.13400231292725, "epoch": 405, "n_parameters": 5489328} -{"train_lr": 0.00010764224999117014, "train_loss": 2.3050150693178177, "test_loss": 0.8498501998974997, "test_acc1": 78.63600258544922, "test_acc5": 94.05400261138917, "epoch": 406, "n_parameters": 5489328} -{"train_lr": 0.0001033846177530702, "train_loss": 2.3074391365289686, "test_loss": 0.839178569834022, "test_acc1": 78.7500024597168, "test_acc5": 94.09400268951416, "epoch": 407, "n_parameters": 5489328} -{"train_lr": 9.921966756592387e-05, "train_loss": 2.302069125318527, "test_loss": 0.8390197068014565, "test_acc1": 78.67800255859375, "test_acc5": 94.15000250030518, "epoch": 408, "n_parameters": 5489328} -{"train_lr": 9.514760242352498e-05, "train_loss": 2.310490322256088, "test_loss": 0.8462823517620564, "test_acc1": 78.6860027432251, "test_acc5": 94.01800286804199, "epoch": 409, "n_parameters": 5489328} -{"train_lr": 9.116862079258612e-05, "train_loss": 2.298784346342087, "test_loss": 0.8497367393882835, 
"test_acc1": 78.75400279663086, "test_acc5": 94.02400264770507, "epoch": 410, "n_parameters": 5489328} -{"train_lr": 8.728291660305303e-05, "train_loss": 2.2977315115690233, "test_loss": 0.8367948159575462, "test_acc1": 78.82000234802246, "test_acc5": 94.11400219909667, "epoch": 411, "n_parameters": 5489328} -{"train_lr": 8.349067923867126e-05, "train_loss": 2.2944240085124967, "test_loss": 0.8429319350158467, "test_acc1": 78.67000214996338, "test_acc5": 94.01000221252441, "epoch": 412, "n_parameters": 5489328} -{"train_lr": 7.979209352773835e-05, "train_loss": 2.29606977994442, "test_loss": 0.8396986519150874, "test_acc1": 78.79600272979737, "test_acc5": 94.10000259246826, "epoch": 413, "n_parameters": 5489328} -{"train_lr": 7.618733973410262e-05, "train_loss": 2.299792330980301, "test_loss": 0.8402848326984573, "test_acc1": 78.8160024835205, "test_acc5": 94.17000248870849, "epoch": 414, "n_parameters": 5489328} -{"train_lr": 7.267659354838017e-05, "train_loss": 2.2903787502527235, "test_loss": 0.8375451098031857, "test_acc1": 78.76200254852294, "test_acc5": 94.13200255279541, "epoch": 415, "n_parameters": 5489328} -{"train_lr": 6.926002607938772e-05, "train_loss": 2.2932934037208557, "test_loss": 0.8356789002085433, "test_acc1": 78.87600262329101, "test_acc5": 94.10400239379882, "epoch": 416, "n_parameters": 5489328} -{"train_lr": 6.593780384579997e-05, "train_loss": 2.277270288681984, "test_loss": 0.8402433044770184, "test_acc1": 78.77600255126953, "test_acc5": 94.13000263702392, "epoch": 417, "n_parameters": 5489328} -{"train_lr": 6.27100887680448e-05, "train_loss": 2.280167933034897, "test_loss": 0.8362148298936731, "test_acc1": 78.83400253448487, "test_acc5": 94.15800254730225, "epoch": 418, "n_parameters": 5489328} -{"train_lr": 5.957703816040123e-05, "train_loss": 2.2910956374406815, "test_loss": 0.8386045799535864, "test_acc1": 78.77600299163818, "test_acc5": 94.126002756958, "epoch": 419, "n_parameters": 5489328} -{"train_lr": 5.6538804723335324e-05, "train_loss": 2.2908202161073685, "test_loss": 0.8340367850135354, "test_acc1": 78.80000250854492, "test_acc5": 94.21400234405517, "epoch": 420, "n_parameters": 5489328} -{"train_lr": 5.359553653605782e-05, "train_loss": 2.2956794839143755, "test_loss": 0.8356021355618449, "test_acc1": 78.83600280181885, "test_acc5": 94.2060025454712, "epoch": 421, "n_parameters": 5489328} -{"train_lr": 5.0747377049308795e-05, "train_loss": 2.2842257081747057, "test_loss": 0.8339896977824324, "test_acc1": 78.92600268249512, "test_acc5": 94.17600270080567, "epoch": 422, "n_parameters": 5489328} -{"train_lr": 4.799446507836315e-05, "train_loss": 2.2837854653835294, "test_loss": 0.8289435681174783, "test_acc1": 78.90600243133545, "test_acc5": 94.24800273132324, "epoch": 423, "n_parameters": 5489328} -{"train_lr": 4.533693479626563e-05, "train_loss": 2.2818058735132216, "test_loss": 0.8319391404442927, "test_acc1": 78.92200259246826, "test_acc5": 94.18600242553711, "epoch": 424, "n_parameters": 5489328} -{"train_lr": 4.2774915727294984e-05, "train_loss": 2.2746737439632416, "test_loss": 0.8295967925120803, "test_acc1": 78.92600275543212, "test_acc5": 94.23800264312744, "epoch": 425, "n_parameters": 5489328} -{"train_lr": 4.030853274064522e-05, "train_loss": 2.2857276471614836, "test_loss": 0.8345952435013126, "test_acc1": 78.87800246429444, "test_acc5": 94.18000253051758, "epoch": 426, "n_parameters": 5489328} -{"train_lr": 3.793790604434225e-05, "train_loss": 2.279724598574638, "test_loss": 0.8321768996470115, "test_acc1": 78.88200255859375, 
"test_acc5": 94.18600269012451, "epoch": 427, "n_parameters": 5489328} -{"train_lr": 3.5663151179389266e-05, "train_loss": 2.2799470175743104, "test_loss": 0.8290559329530772, "test_acc1": 79.04200266723633, "test_acc5": 94.20600255340577, "epoch": 428, "n_parameters": 5489328} -{"train_lr": 3.348437901412699e-05, "train_loss": 2.2757314744234085, "test_loss": 0.8360554746845189, "test_acc1": 78.92000269897461, "test_acc5": 94.13200291229248, "epoch": 429, "n_parameters": 5489328} -{"train_lr": 3.14016957388384e-05, "train_loss": 2.2844612030506135, "test_loss": 0.8276705397840809, "test_acc1": 79.0580026763916, "test_acc5": 94.25800253051757, "epoch": 430, "n_parameters": 5489328} -{"train_lr": 2.9415202860566895e-05, "train_loss": 2.286879315972328, "test_loss": 0.8369652648620746, "test_acc1": 78.95400262939454, "test_acc5": 94.23400258911133, "epoch": 431, "n_parameters": 5489328} -{"train_lr": 2.7524997198174166e-05, "train_loss": 2.269189723134041, "test_loss": 0.8261395885663874, "test_acc1": 79.0300025177002, "test_acc5": 94.27000233856201, "epoch": 432, "n_parameters": 5489328} -{"train_lr": 2.5731170877616888e-05, "train_loss": 2.27684845957756, "test_loss": 0.8357724961550796, "test_acc1": 78.92400254241943, "test_acc5": 94.16200243560792, "epoch": 433, "n_parameters": 5489328} -{"train_lr": 2.403381132745848e-05, "train_loss": 2.283688117837906, "test_loss": 0.8267239809912794, "test_acc1": 79.0360023538208, "test_acc5": 94.22800231018067, "epoch": 434, "n_parameters": 5489328} -{"train_lr": 2.2433001274609897e-05, "train_loss": 2.274532955861092, "test_loss": 0.8254960201242391, "test_acc1": 79.11200280792237, "test_acc5": 94.23400281341553, "epoch": 435, "n_parameters": 5489328} -{"train_lr": 2.0928818740294644e-05, "train_loss": 2.2857620173454283, "test_loss": 0.8236950890106314, "test_acc1": 79.0800024758911, "test_acc5": 94.27600285186767, "epoch": 436, "n_parameters": 5489328} -{"train_lr": 1.9521337036247088e-05, "train_loss": 2.266635660672188, "test_loss": 0.8337272230316611, "test_acc1": 78.9940029196167, "test_acc5": 94.20600286254883, "epoch": 437, "n_parameters": 5489328} -{"train_lr": 1.8210624761139314e-05, "train_loss": 2.2721822620630263, "test_loss": 0.8299992829561234, "test_acc1": 79.01600255310059, "test_acc5": 94.25200260925293, "epoch": 438, "n_parameters": 5489328} -{"train_lr": 1.6996745797238736e-05, "train_loss": 2.2779783382892607, "test_loss": 0.8271425850689411, "test_acc1": 79.06000235595702, "test_acc5": 94.25400253112792, "epoch": 439, "n_parameters": 5489328} -{"train_lr": 1.5879759307294027e-05, "train_loss": 2.271667207932472, "test_loss": 0.8314072413041311, "test_acc1": 79.0300026852417, "test_acc5": 94.24000265380859, "epoch": 440, "n_parameters": 5489328} -{"train_lr": 1.4859719731650575e-05, "train_loss": 2.2804886820077894, "test_loss": 0.8301879047032665, "test_acc1": 79.03400292510986, "test_acc5": 94.26000253936768, "epoch": 441, "n_parameters": 5489328} -{"train_lr": 1.393667678559817e-05, "train_loss": 2.2674054854631422, "test_loss": 0.833733676768401, "test_acc1": 78.94400247253418, "test_acc5": 94.18600250823975, "epoch": 442, "n_parameters": 5489328} -{"train_lr": 1.3110675456947718e-05, "train_loss": 2.2690427226066587, "test_loss": 0.8242344840922776, "test_acc1": 79.07400234436035, "test_acc5": 94.27400244689942, "epoch": 443, "n_parameters": 5489328} -{"train_lr": 1.2381756003839114e-05, "train_loss": 2.2671503870248793, "test_loss": 0.830698562457281, "test_acc1": 79.04200244476318, "test_acc5": 94.244002293396, "epoch": 
444, "n_parameters": 5489328} -{"train_lr": 1.1749953952777368e-05, "train_loss": 2.269840371966362, "test_loss": 0.825350443010821, "test_acc1": 79.09600272155761, "test_acc5": 94.26400236877441, "epoch": 445, "n_parameters": 5489328} -{"train_lr": 1.1215300096904058e-05, "train_loss": 2.273718570971489, "test_loss": 0.8294606349047493, "test_acc1": 79.03200281311035, "test_acc5": 94.23800229949951, "epoch": 446, "n_parameters": 5489328} -{"train_lr": 1.0777820494492919e-05, "train_loss": 2.2723432403087616, "test_loss": 0.8293789489304318, "test_acc1": 79.00800244750977, "test_acc5": 94.2380025338745, "epoch": 447, "n_parameters": 5489328} -{"train_lr": 1.0437536467683126e-05, "train_loss": 2.2611375933170317, "test_loss": 0.8274803808068528, "test_acc1": 79.05000271026611, "test_acc5": 94.20800266723633, "epoch": 448, "n_parameters": 5489328} -{"train_lr": 1.0194464601437938e-05, "train_loss": 2.270895019578934, "test_loss": 0.8323951887295526, "test_acc1": 78.97400266845703, "test_acc5": 94.20800243896484, "epoch": 449, "n_parameters": 5489328} diff --git a/cv/classification/repvit/pytorch/logs/repvit_m1_0_distill_300e.txt b/cv/classification/repvit/pytorch/logs/repvit_m1_0_distill_300e.txt deleted file mode 100644 index e367e7a5..00000000 --- a/cv/classification/repvit/pytorch/logs/repvit_m1_0_distill_300e.txt +++ /dev/null @@ -1,300 +0,0 @@ -{"train_lr": 1.000000000000014e-06, "train_loss": 6.996545906162262, "test_loss": 6.946782283923206, "test_acc1": 0.09600000640869141, "test_acc5": 0.5180000264167786, "epoch": 0, "n_parameters": 7298036} -{"train_lr": 1.000000000000014e-06, "train_loss": 6.990340623855591, "test_loss": 6.930838150136611, "test_acc1": 0.09400000671386718, "test_acc5": 0.5680000236368179, "epoch": 1, "n_parameters": 7298036} -{"train_lr": 0.0008007999999999933, "train_loss": 6.424326080799103, "test_loss": 5.149288678870482, "test_acc1": 8.474000263290405, "test_acc5": 23.312000712738037, "epoch": 2, "n_parameters": 7298036} -{"train_lr": 0.0016005999999999787, "train_loss": 5.770950894451142, "test_loss": 3.920866203658721, "test_acc1": 21.478000618133546, "test_acc5": 44.03800114349365, "epoch": 3, "n_parameters": 7298036} -{"train_lr": 0.0024003999999999835, "train_loss": 5.163202000904083, "test_loss": 3.156571191023378, "test_acc1": 32.73000096343994, "test_acc5": 58.25200153930664, "epoch": 4, "n_parameters": 7298036} -{"train_lr": 0.0032001999999999873, "train_loss": 4.732681879758835, "test_loss": 2.720577548531925, "test_acc1": 40.24600108001709, "test_acc5": 66.03200183105469, "epoch": 5, "n_parameters": 7298036} -{"train_lr": 0.003997265921835383, "train_loss": 4.459653646039963, "test_loss": 2.5398809454020332, "test_acc1": 44.196001249389646, "test_acc5": 69.70800194000245, "epoch": 6, "n_parameters": 7298036} -{"train_lr": 0.003996063323214417, "train_loss": 4.226721474552154, "test_loss": 2.31655812088181, "test_acc1": 48.50000106124878, "test_acc5": 73.43400246917724, "epoch": 7, "n_parameters": 7298036} -{"train_lr": 0.003994642382062749, "train_loss": 4.048776956748962, "test_loss": 2.0986234624596203, "test_acc1": 52.60800140228272, "test_acc5": 76.7740026977539, "epoch": 8, "n_parameters": 7298036} -{"train_lr": 0.003993003254202742, "train_loss": 3.910291869497299, "test_loss": 2.0507471254643272, "test_acc1": 53.118001334533695, "test_acc5": 77.3440023638916, "epoch": 9, "n_parameters": 7298036} -{"train_lr": 0.0039911461193831675, "train_loss": 3.8054256803989412, "test_loss": 1.9204925158444572, "test_acc1": 56.09200157958984, "test_acc5": 
79.71000230102538, "epoch": 10, "n_parameters": 7298036} -{"train_lr": 0.003989071181259754, "train_loss": 3.7126118285655973, "test_loss": 1.855531308142578, "test_acc1": 57.436001713562014, "test_acc5": 80.66000234771728, "epoch": 11, "n_parameters": 7298036} -{"train_lr": 0.0039867786673727715, "train_loss": 3.6532316446781157, "test_loss": 1.79489125048413, "test_acc1": 58.468001560058596, "test_acc5": 81.45800268127441, "epoch": 12, "n_parameters": 7298036} -{"train_lr": 0.0039842688291223715, "train_loss": 3.583187641096115, "test_loss": 1.7722960320465706, "test_acc1": 58.712001424865726, "test_acc5": 81.90200237762451, "epoch": 13, "n_parameters": 7298036} -{"train_lr": 0.0039815419417405275, "train_loss": 3.5213339024543764, "test_loss": 1.6865873205311157, "test_acc1": 60.65600187286377, "test_acc5": 83.1920025454712, "epoch": 14, "n_parameters": 7298036} -{"train_lr": 0.003978598304261148, "train_loss": 3.472724555826187, "test_loss": 1.6693662229706259, "test_acc1": 61.33000174041748, "test_acc5": 83.29600242004395, "epoch": 15, "n_parameters": 7298036} -{"train_lr": 0.0039754382394872915, "train_loss": 3.4404728583335875, "test_loss": 1.688900323036839, "test_acc1": 60.626001799926755, "test_acc5": 83.10400257141113, "epoch": 16, "n_parameters": 7298036} -{"train_lr": 0.0039720620939556715, "train_loss": 3.4021807349205018, "test_loss": 1.5895948414416874, "test_acc1": 62.674001701049804, "test_acc5": 84.48000279937744, "epoch": 17, "n_parameters": 7298036} -{"train_lr": 0.003968470237898645, "train_loss": 3.3920923497200013, "test_loss": 1.5580581005881815, "test_acc1": 63.098001961364744, "test_acc5": 84.84600236968994, "epoch": 18, "n_parameters": 7298036} -{"train_lr": 0.003964663065203757, "train_loss": 3.340391812515259, "test_loss": 1.587934611036497, "test_acc1": 62.92200181976318, "test_acc5": 84.79400290039062, "epoch": 19, "n_parameters": 7298036} -{"train_lr": 0.003960640993370302, "train_loss": 3.3155155145168305, "test_loss": 1.5516358066131086, "test_acc1": 63.79200184814453, "test_acc5": 85.34400254089356, "epoch": 20, "n_parameters": 7298036} -{"train_lr": 0.003956404463463954, "train_loss": 3.2993571688652037, "test_loss": 1.5439158870893366, "test_acc1": 64.07600194702148, "test_acc5": 85.56400269439698, "epoch": 21, "n_parameters": 7298036} -{"train_lr": 0.0039519539400677895, "train_loss": 3.2579265090465546, "test_loss": 1.52309190613382, "test_acc1": 64.18400188964844, "test_acc5": 85.47400255462647, "epoch": 22, "n_parameters": 7298036} -{"train_lr": 0.00394728991123201, "train_loss": 3.2426084147453307, "test_loss": 1.4966315267717136, "test_acc1": 64.7120019821167, "test_acc5": 85.90800233062744, "epoch": 23, "n_parameters": 7298036} -{"train_lr": 0.003942412888419754, "train_loss": 3.23285906457901, "test_loss": 1.5082084764452541, "test_acc1": 64.44600179107665, "test_acc5": 85.87800279205322, "epoch": 24, "n_parameters": 7298036} -{"train_lr": 0.003937323406451619, "train_loss": 3.2163117495059965, "test_loss": 1.5168770104646683, "test_acc1": 64.47600174652099, "test_acc5": 85.63800275939941, "epoch": 25, "n_parameters": 7298036} -{"train_lr": 0.003932022023446712, "train_loss": 3.1966141392230987, "test_loss": 1.550561343484065, "test_acc1": 64.35600186645507, "test_acc5": 85.75200222747803, "epoch": 26, "n_parameters": 7298036} -{"train_lr": 0.003926509320761305, "train_loss": 3.1612900936603547, "test_loss": 1.4780465697540957, "test_acc1": 65.04000155059815, "test_acc5": 86.24600261138916, "epoch": 27, "n_parameters": 7298036} -{"train_lr": 
0.003920785902925497, "train_loss": 3.16104283914566, "test_loss": 1.4789530477103066, "test_acc1": 65.27000187164306, "test_acc5": 86.23800211975097, "epoch": 28, "n_parameters": 7298036} -{"train_lr": 0.003914852397576493, "train_loss": 3.1468934456586837, "test_loss": 1.427885087535662, "test_acc1": 66.13200187103271, "test_acc5": 86.90600236419678, "epoch": 29, "n_parameters": 7298036} -{"train_lr": 0.003908709455390004, "train_loss": 3.144895732355118, "test_loss": 1.433924960301203, "test_acc1": 65.98200190643311, "test_acc5": 86.81600253997803, "epoch": 30, "n_parameters": 7298036} -{"train_lr": 0.0039023577500088094, "train_loss": 3.1151494140148164, "test_loss": 1.42600616023821, "test_acc1": 66.00800172088623, "test_acc5": 86.72800284851074, "epoch": 31, "n_parameters": 7298036} -{"train_lr": 0.0038957979779690624, "train_loss": 3.10938111615181, "test_loss": 1.4645230261718525, "test_acc1": 65.42000200683594, "test_acc5": 86.66400247161866, "epoch": 32, "n_parameters": 7298036} -{"train_lr": 0.003889030858623732, "train_loss": 3.1005525683164596, "test_loss": 1.4183509691673166, "test_acc1": 66.91000205352783, "test_acc5": 87.12400221984863, "epoch": 33, "n_parameters": 7298036} -{"train_lr": 0.0038820571340636525, "train_loss": 3.093326467871666, "test_loss": 1.4138408404062777, "test_acc1": 66.50600213226318, "test_acc5": 87.36200272003174, "epoch": 34, "n_parameters": 7298036} -{"train_lr": 0.0038748775690362956, "train_loss": 3.082571380615234, "test_loss": 1.3978633906911402, "test_acc1": 66.912001953125, "test_acc5": 87.49800237609864, "epoch": 35, "n_parameters": 7298036} -{"train_lr": 0.0038674929508619614, "train_loss": 3.0720184989213943, "test_loss": 1.4035294651985168, "test_acc1": 66.75800202697754, "test_acc5": 87.2420026473999, "epoch": 36, "n_parameters": 7298036} -{"train_lr": 0.003859904089347072, "train_loss": 3.0638939274549486, "test_loss": 1.3905095834942425, "test_acc1": 67.50000230499268, "test_acc5": 87.62600242340088, "epoch": 37, "n_parameters": 7298036} -{"train_lr": 0.003852111816695943, "train_loss": 3.0582713093042373, "test_loss": 1.3941545302377027, "test_acc1": 67.11200167633056, "test_acc5": 87.43600247650147, "epoch": 38, "n_parameters": 7298036} -{"train_lr": 0.0038441169874190843, "train_loss": 3.046226432347298, "test_loss": 1.366710067671888, "test_acc1": 67.16400220336914, "test_acc5": 87.608002583313, "epoch": 39, "n_parameters": 7298036} -{"train_lr": 0.0038359204782394862, "train_loss": 3.0320587174415587, "test_loss": 1.3812676874153755, "test_acc1": 67.52000208892822, "test_acc5": 87.56200237915039, "epoch": 40, "n_parameters": 7298036} -{"train_lr": 0.0038275231879969967, "train_loss": 3.0269130241155624, "test_loss": 1.3831817889038254, "test_acc1": 67.14000202697754, "test_acc5": 87.4300024319458, "epoch": 41, "n_parameters": 7298036} -{"train_lr": 0.003818926037548892, "train_loss": 3.016672101807594, "test_loss": 1.376876715789823, "test_acc1": 67.73000197052002, "test_acc5": 87.84400246887208, "epoch": 42, "n_parameters": 7298036} -{"train_lr": 0.0038101299696697475, "train_loss": 3.014890837907791, "test_loss": 1.4772728098665966, "test_acc1": 66.51400222259521, "test_acc5": 87.000002505188, "epoch": 43, "n_parameters": 7298036} -{"train_lr": 0.003801135948947377, "train_loss": 3.0077090810537337, "test_loss": 1.3714006052297705, "test_acc1": 68.11400195587159, "test_acc5": 87.85400272247314, "epoch": 44, "n_parameters": 7298036} -{"train_lr": 0.003791944961677627, "train_loss": 2.9948545116901397, "test_loss": 
1.3311020144644905, "test_acc1": 67.98600203186035, "test_acc5": 88.01400250213624, "epoch": 45, "n_parameters": 7298036} -{"train_lr": 0.0037825580157558134, "train_loss": 2.9841788262605666, "test_loss": 1.3769900312318522, "test_acc1": 67.35400188537598, "test_acc5": 87.56600262756348, "epoch": 46, "n_parameters": 7298036} -{"train_lr": 0.003772976140566265, "train_loss": 2.9901762791872026, "test_loss": 1.376797047608039, "test_acc1": 67.3420019128418, "test_acc5": 87.7600022177124, "epoch": 47, "n_parameters": 7298036} -{"train_lr": 0.0037632003868696556, "train_loss": 2.985890336918831, "test_loss": 1.333244692753343, "test_acc1": 68.55400234771729, "test_acc5": 88.30400248016358, "epoch": 48, "n_parameters": 7298036} -{"train_lr": 0.003753231826687486, "train_loss": 2.978564969611168, "test_loss": 1.3209374608362423, "test_acc1": 68.89600227966308, "test_acc5": 88.5840025881958, "epoch": 49, "n_parameters": 7298036} -{"train_lr": 0.0037430715531847655, "train_loss": 2.976120478963852, "test_loss": 1.360913823632633, "test_acc1": 67.93200191528321, "test_acc5": 88.01600256652831, "epoch": 50, "n_parameters": 7298036} -{"train_lr": 0.003732720680549938, "train_loss": 2.9615328174114226, "test_loss": 1.3151600864880226, "test_acc1": 68.58400231048584, "test_acc5": 88.35200227020263, "epoch": 51, "n_parameters": 7298036} -{"train_lr": 0.0037221803438729105, "train_loss": 2.96236713283062, "test_loss": 1.3316978834131186, "test_acc1": 68.58000214447021, "test_acc5": 88.2660022958374, "epoch": 52, "n_parameters": 7298036} -{"train_lr": 0.003711451699020238, "train_loss": 2.9560344067811966, "test_loss": 1.3304718570674168, "test_acc1": 68.48200189208984, "test_acc5": 88.2420026751709, "epoch": 53, "n_parameters": 7298036} -{"train_lr": 0.0037005359225088163, "train_loss": 2.940285456061363, "test_loss": 1.3489914676722359, "test_acc1": 68.37400224517822, "test_acc5": 88.06200234008789, "epoch": 54, "n_parameters": 7298036} -{"train_lr": 0.0036894342113765284, "train_loss": 2.9441924720048904, "test_loss": 1.310870728948537, "test_acc1": 68.69800180145263, "test_acc5": 88.3960022418213, "epoch": 55, "n_parameters": 7298036} -{"train_lr": 0.0036781477830511327, "train_loss": 2.940477569293976, "test_loss": 1.34056986473939, "test_acc1": 68.29600216461182, "test_acc5": 88.11400266113282, "epoch": 56, "n_parameters": 7298036} -{"train_lr": 0.00366667787521664, "train_loss": 2.9293638094186782, "test_loss": 1.2868605067624765, "test_acc1": 69.27000235778809, "test_acc5": 88.70600262145996, "epoch": 57, "n_parameters": 7298036} -{"train_lr": 0.0036550257456777722, "train_loss": 2.92315795609951, "test_loss": 1.2919262538061422, "test_acc1": 68.9960023022461, "test_acc5": 88.92400238220215, "epoch": 58, "n_parameters": 7298036} -{"train_lr": 0.003643192672221756, "train_loss": 2.9239499595880507, "test_loss": 1.2775040886857931, "test_acc1": 68.8460025567627, "test_acc5": 88.80400256561279, "epoch": 59, "n_parameters": 7298036} -{"train_lr": 0.0036311799524784525, "train_loss": 2.9050547318458557, "test_loss": 1.3036477925146328, "test_acc1": 69.05000221252442, "test_acc5": 88.74400248046875, "epoch": 60, "n_parameters": 7298036} -{"train_lr": 0.0036189889037780316, "train_loss": 2.9180272254467012, "test_loss": 1.2735355277271831, "test_acc1": 69.29400246002197, "test_acc5": 89.02600267272949, "epoch": 61, "n_parameters": 7298036} -{"train_lr": 0.0036066208630062997, "train_loss": 2.913866351890564, "test_loss": 1.3067156413898748, "test_acc1": 69.17000221130371, "test_acc5": 88.76400258911133, 
"epoch": 62, "n_parameters": 7298036} -{"train_lr": 0.003594077186458248, "train_loss": 2.9109446462631228, "test_loss": 1.3222998244797481, "test_acc1": 68.96200208892822, "test_acc5": 88.558002739563, "epoch": 63, "n_parameters": 7298036} -{"train_lr": 0.0035813592496895586, "train_loss": 2.9014147219657898, "test_loss": 1.3081093255211325, "test_acc1": 68.88800214111328, "test_acc5": 88.55000269470214, "epoch": 64, "n_parameters": 7298036} -{"train_lr": 0.003568468447365067, "train_loss": 2.8917479412317277, "test_loss": 1.2488169661339592, "test_acc1": 69.79200228149413, "test_acc5": 89.33200255004883, "epoch": 65, "n_parameters": 7298036} -{"train_lr": 0.003555406193106677, "train_loss": 2.8849032590627672, "test_loss": 1.2275119577260578, "test_acc1": 70.13600234375, "test_acc5": 89.48000229644775, "epoch": 66, "n_parameters": 7298036} -{"train_lr": 0.0035421739193377214, "train_loss": 2.884925026464462, "test_loss": 1.2953675341080217, "test_acc1": 69.04000215545655, "test_acc5": 88.77200229034423, "epoch": 67, "n_parameters": 7298036} -{"train_lr": 0.0035287730771260805, "train_loss": 2.8757342945575712, "test_loss": 1.2760666069739006, "test_acc1": 69.66200241973877, "test_acc5": 89.126002371521, "epoch": 68, "n_parameters": 7298036} -{"train_lr": 0.0035152051360252245, "train_loss": 2.882862783551216, "test_loss": 1.3013126652906923, "test_acc1": 69.16400218139648, "test_acc5": 88.89000257232667, "epoch": 69, "n_parameters": 7298036} -{"train_lr": 0.003501471583912782, "train_loss": 2.8987921117305757, "test_loss": 1.288423260783448, "test_acc1": 69.57800217437745, "test_acc5": 88.8880025491333, "epoch": 70, "n_parameters": 7298036} -{"train_lr": 0.0034875739268273947, "train_loss": 2.8582205597400665, "test_loss": 1.2609004544861175, "test_acc1": 69.44000222595214, "test_acc5": 89.08400266387939, "epoch": 71, "n_parameters": 7298036} -{"train_lr": 0.0034735136888038418, "train_loss": 2.8653640487194063, "test_loss": 1.2586495284648502, "test_acc1": 69.39800215942383, "test_acc5": 88.93200274871826, "epoch": 72, "n_parameters": 7298036} -{"train_lr": 0.003459292411705684, "train_loss": 2.86430874505043, "test_loss": 1.2802685091600698, "test_acc1": 69.45600208282471, "test_acc5": 88.96400239532471, "epoch": 73, "n_parameters": 7298036} -{"train_lr": 0.0034449116550562854, "train_loss": 2.864935412168503, "test_loss": 1.2222786041743614, "test_acc1": 70.2220020300293, "test_acc5": 89.51200240264893, "epoch": 74, "n_parameters": 7298036} -{"train_lr": 0.0034303729958673978, "train_loss": 2.858577492856979, "test_loss": 1.2787197433850344, "test_acc1": 69.46200228332519, "test_acc5": 89.15200251098632, "epoch": 75, "n_parameters": 7298036} -{"train_lr": 0.003415678028467135, "train_loss": 2.862791300034523, "test_loss": 1.2336019768434412, "test_acc1": 70.22600240905761, "test_acc5": 89.38800245758057, "epoch": 76, "n_parameters": 7298036} -{"train_lr": 0.0034008283643241475, "train_loss": 2.844709085035324, "test_loss": 1.2668427697875921, "test_acc1": 69.90600237060546, "test_acc5": 89.12200248260498, "epoch": 77, "n_parameters": 7298036} -{"train_lr": 0.003385825631871496, "train_loss": 2.8506728066444396, "test_loss": 1.2706355312291313, "test_acc1": 69.7320022756958, "test_acc5": 89.12800263061523, "epoch": 78, "n_parameters": 7298036} -{"train_lr": 0.0033706714763277455, "train_loss": 2.839215397810936, "test_loss": 1.2058536616318367, "test_acc1": 70.61400210205078, "test_acc5": 89.55000245788574, "epoch": 79, "n_parameters": 7298036} -{"train_lr": 0.003355367559516879, 
"train_loss": 2.8330447653532027, "test_loss": 1.2048980405225473, "test_acc1": 70.90400239471435, "test_acc5": 89.89600235351563, "epoch": 80, "n_parameters": 7298036} -{"train_lr": 0.003339915559685877, "train_loss": 2.8348069459676744, "test_loss": 1.2031853650422657, "test_acc1": 70.6700021887207, "test_acc5": 89.7280024407959, "epoch": 81, "n_parameters": 7298036} -{"train_lr": 0.003324317171320666, "train_loss": 2.8390980534791947, "test_loss": 1.2367436868302963, "test_acc1": 70.27800228302002, "test_acc5": 89.5520022821045, "epoch": 82, "n_parameters": 7298036} -{"train_lr": 0.0033085741049602795, "train_loss": 2.839516238331795, "test_loss": 1.21318638763007, "test_acc1": 70.57200241241455, "test_acc5": 89.70200232025147, "epoch": 83, "n_parameters": 7298036} -{"train_lr": 0.0032926880870092624, "train_loss": 2.8147069344997404, "test_loss": 1.273277894977261, "test_acc1": 69.78400219085694, "test_acc5": 89.17000257171631, "epoch": 84, "n_parameters": 7298036} -{"train_lr": 0.003276660859548651, "train_loss": 2.8218267758369446, "test_loss": 1.22079018108985, "test_acc1": 70.1540022845459, "test_acc5": 89.70600269226074, "epoch": 85, "n_parameters": 7298036} -{"train_lr": 0.0032604941801444055, "train_loss": 2.8210701436281203, "test_loss": 1.2012844484518557, "test_acc1": 71.09000232879639, "test_acc5": 89.8040028314209, "epoch": 86, "n_parameters": 7298036} -{"train_lr": 0.003244189821655263, "train_loss": 2.8065209851026536, "test_loss": 1.212363612564171, "test_acc1": 70.59600221710205, "test_acc5": 89.7140026361084, "epoch": 87, "n_parameters": 7298036} -{"train_lr": 0.003227749572037655, "train_loss": 2.8154258788108826, "test_loss": 1.2420243778649498, "test_acc1": 70.53800256469727, "test_acc5": 89.55400258087158, "epoch": 88, "n_parameters": 7298036} -{"train_lr": 0.0032111752341504192, "train_loss": 2.8012271128177644, "test_loss": 1.271223815048442, "test_acc1": 69.76800225158692, "test_acc5": 89.16200266357421, "epoch": 89, "n_parameters": 7298036} -{"train_lr": 0.003194468625556447, "train_loss": 2.8091566697597505, "test_loss": 1.2228057906031609, "test_acc1": 70.59600227966308, "test_acc5": 89.51400222320557, "epoch": 90, "n_parameters": 7298036} -{"train_lr": 0.0031776315783234484, "train_loss": 2.789153494429588, "test_loss": 1.2032753632349127, "test_acc1": 70.66600263275147, "test_acc5": 89.80400255523682, "epoch": 91, "n_parameters": 7298036} -{"train_lr": 0.0031606659388236785, "train_loss": 2.8004688170433045, "test_loss": 1.2004428659291828, "test_acc1": 71.04600243835449, "test_acc5": 89.82800274536133, "epoch": 92, "n_parameters": 7298036} -{"train_lr": 0.003143573567530467, "train_loss": 2.8010775094509124, "test_loss": 1.201245457810514, "test_acc1": 71.01000261444092, "test_acc5": 90.04000263305664, "epoch": 93, "n_parameters": 7298036} -{"train_lr": 0.003126356338815038, "train_loss": 2.791255831360817, "test_loss": 1.224129281938076, "test_acc1": 71.34000202209472, "test_acc5": 89.85200273956299, "epoch": 94, "n_parameters": 7298036} -{"train_lr": 0.0031090161407405044, "train_loss": 2.7868678555727007, "test_loss": 1.2461273678961922, "test_acc1": 70.35800219818115, "test_acc5": 89.52000230163574, "epoch": 95, "n_parameters": 7298036} -{"train_lr": 0.003091554874854953, "train_loss": 2.7865748196840285, "test_loss": 1.1931486572412884, "test_acc1": 71.20600211975098, "test_acc5": 89.88400234558105, "epoch": 96, "n_parameters": 7298036} -{"train_lr": 0.0030739744559831164, "train_loss": 2.7808667219400407, "test_loss": 1.2395068858476246, "test_acc1": 
70.32600244689941, "test_acc5": 89.53400259979249, "epoch": 97, "n_parameters": 7298036} -{"train_lr": 0.0030562768120159086, "train_loss": 2.7741607828617094, "test_loss": 1.2006589818526716, "test_acc1": 71.05400236022949, "test_acc5": 90.01000258789063, "epoch": 98, "n_parameters": 7298036} -{"train_lr": 0.0030384638836993723, "train_loss": 2.7798236853122713, "test_loss": 1.2189802010269726, "test_acc1": 70.71800237976075, "test_acc5": 89.70400262237548, "epoch": 99, "n_parameters": 7298036} -{"train_lr": 0.003020537624421951, "train_loss": 2.7765327590227127, "test_loss": 1.2380169779062271, "test_acc1": 70.6560022756958, "test_acc5": 89.77200285095215, "epoch": 100, "n_parameters": 7298036} -{"train_lr": 0.0030025000000000156, "train_loss": 2.769494745159149, "test_loss": 1.1826450908008743, "test_acc1": 71.43600231323242, "test_acc5": 90.23600259765625, "epoch": 101, "n_parameters": 7298036} -{"train_lr": 0.0029843529884621693, "train_loss": 2.7658523258686065, "test_loss": 1.1718732675208765, "test_acc1": 71.89200251190185, "test_acc5": 90.390002633667, "epoch": 102, "n_parameters": 7298036} -{"train_lr": 0.0029660985798329416, "train_loss": 2.7553889272212984, "test_loss": 1.1761691561516594, "test_acc1": 71.49600237365722, "test_acc5": 90.29200241821289, "epoch": 103, "n_parameters": 7298036} -{"train_lr": 0.0029477387759137964, "train_loss": 2.756905302643776, "test_loss": 1.1918228527202326, "test_acc1": 71.25800243408203, "test_acc5": 90.27200256958008, "epoch": 104, "n_parameters": 7298036} -{"train_lr": 0.002929275590064108, "train_loss": 2.747792113614082, "test_loss": 1.1904765021275072, "test_acc1": 71.51800224395753, "test_acc5": 90.27200255767822, "epoch": 105, "n_parameters": 7298036} -{"train_lr": 0.002910711046980378, "train_loss": 2.748899767613411, "test_loss": 1.1851270014748854, "test_acc1": 71.22600215087891, "test_acc5": 90.0580024911499, "epoch": 106, "n_parameters": 7298036} -{"train_lr": 0.0028920471824738832, "train_loss": 2.7408900070667266, "test_loss": 1.1735106005388147, "test_acc1": 71.5820025769043, "test_acc5": 90.32200246337891, "epoch": 107, "n_parameters": 7298036} -{"train_lr": 0.002873286043247822, "train_loss": 2.7406880177736284, "test_loss": 1.1755481178269667, "test_acc1": 71.96600258422852, "test_acc5": 90.46000254150391, "epoch": 108, "n_parameters": 7298036} -{"train_lr": 0.0028544296866723304, "train_loss": 2.7374384325027465, "test_loss": 1.1538178276489763, "test_acc1": 71.96000241760254, "test_acc5": 90.602002394104, "epoch": 109, "n_parameters": 7298036} -{"train_lr": 0.0028354801805594724, "train_loss": 2.7415000156641005, "test_loss": 1.1490414543625187, "test_acc1": 71.82200256713867, "test_acc5": 90.4940025857544, "epoch": 110, "n_parameters": 7298036} -{"train_lr": 0.002816439602936208, "train_loss": 2.7458119537591936, "test_loss": 1.1613534612252432, "test_acc1": 71.73400252288819, "test_acc5": 90.46000250579834, "epoch": 111, "n_parameters": 7298036} -{"train_lr": 0.002797310041816382, "train_loss": 2.7371725225925445, "test_loss": 1.1937640226062607, "test_acc1": 71.75400250244141, "test_acc5": 90.27200257720948, "epoch": 112, "n_parameters": 7298036} -{"train_lr": 0.002778093594971943, "train_loss": 2.742541198849678, "test_loss": 1.1604705702732592, "test_acc1": 72.18000253173828, "test_acc5": 90.56200250518799, "epoch": 113, "n_parameters": 7298036} -{"train_lr": 0.0027587923697028264, "train_loss": 2.719964967560768, "test_loss": 1.1650713942945004, "test_acc1": 71.6240022128296, "test_acc5": 90.38600232910156, "epoch": 
114, "n_parameters": 7298036} -{"train_lr": 0.002739408482605956, "train_loss": 2.719283892631531, "test_loss": 1.1150342190966887, "test_acc1": 72.55000251617432, "test_acc5": 90.82800244384765, "epoch": 115, "n_parameters": 7298036} -{"train_lr": 0.0027199440593428507, "train_loss": 2.725144488286972, "test_loss": 1.1545306054108284, "test_acc1": 72.18400263275146, "test_acc5": 90.60200261383056, "epoch": 116, "n_parameters": 7298036} -{"train_lr": 0.0027004012344070075, "train_loss": 2.7213669301748276, "test_loss": 1.1563762137118507, "test_acc1": 72.01200246734619, "test_acc5": 90.71600271484375, "epoch": 117, "n_parameters": 7298036} -{"train_lr": 0.0026807821508893883, "train_loss": 2.7159017336368563, "test_loss": 1.163988294408602, "test_acc1": 72.10400262298585, "test_acc5": 90.49000240478516, "epoch": 118, "n_parameters": 7298036} -{"train_lr": 0.0026610889602434354, "train_loss": 2.7192249139785765, "test_loss": 1.1400163024663925, "test_acc1": 72.4320025036621, "test_acc5": 90.60600261169434, "epoch": 119, "n_parameters": 7298036} -{"train_lr": 0.0026413238220496715, "train_loss": 2.710683917427063, "test_loss": 1.1946555915124275, "test_acc1": 71.65200232727051, "test_acc5": 90.23000273101806, "epoch": 120, "n_parameters": 7298036} -{"train_lr": 0.0026214889037780493, "train_loss": 2.701699657702446, "test_loss": 1.1364909800536491, "test_acc1": 72.26400221801758, "test_acc5": 90.6720025326538, "epoch": 121, "n_parameters": 7298036} -{"train_lr": 0.0026015863805508724, "train_loss": 2.702342320203781, "test_loss": 1.145970356376732, "test_acc1": 72.20400248779296, "test_acc5": 90.65400245788574, "epoch": 122, "n_parameters": 7298036} -{"train_lr": 0.0025816184349041886, "train_loss": 2.7036109118938447, "test_loss": 1.1387004510444754, "test_acc1": 72.10600255767822, "test_acc5": 90.47600240753174, "epoch": 123, "n_parameters": 7298036} -{"train_lr": 0.0025615872565482966, "train_loss": 2.6921864577531815, "test_loss": 1.1446565917747862, "test_acc1": 72.18600277435303, "test_acc5": 90.60400260925293, "epoch": 124, "n_parameters": 7298036} -{"train_lr": 0.0025414950421274317, "train_loss": 2.6971765137434005, "test_loss": 1.125643571729169, "test_acc1": 72.63200249908448, "test_acc5": 90.90800215972901, "epoch": 125, "n_parameters": 7298036} -{"train_lr": 0.002521343994979551, "train_loss": 2.6906984821081164, "test_loss": 1.1615426816484506, "test_acc1": 71.91800223388672, "test_acc5": 90.44400264160156, "epoch": 126, "n_parameters": 7298036} -{"train_lr": 0.002501136324893901, "train_loss": 2.684740758228302, "test_loss": 1.156342789530754, "test_acc1": 72.20000222717285, "test_acc5": 90.61400233489991, "epoch": 127, "n_parameters": 7298036} -{"train_lr": 0.0024808742478692517, "train_loss": 2.6863719819545744, "test_loss": 1.1356623652665054, "test_acc1": 72.47200222991944, "test_acc5": 90.80000281829834, "epoch": 128, "n_parameters": 7298036} -{"train_lr": 0.002460559985870747, "train_loss": 2.6869468106746672, "test_loss": 1.1174274678615963, "test_acc1": 72.61400257934571, "test_acc5": 90.97400266662598, "epoch": 129, "n_parameters": 7298036} -{"train_lr": 0.002440195766586069, "train_loss": 2.6802422429800035, "test_loss": 1.1200481346424889, "test_acc1": 72.59400264251708, "test_acc5": 90.88800266174316, "epoch": 130, "n_parameters": 7298036} -{"train_lr": 0.0024197838231814215, "train_loss": 2.674258456683159, "test_loss": 1.1114801981431597, "test_acc1": 72.97200279815674, "test_acc5": 91.02400239868165, "epoch": 131, "n_parameters": 7298036} -{"train_lr": 
0.002399326394056337, "train_loss": 2.674580824661255, "test_loss": 1.1406807351638288, "test_acc1": 72.33400251617432, "test_acc5": 90.60800276794434, "epoch": 132, "n_parameters": 7298036} -{"train_lr": 0.0023788257225985116, "train_loss": 2.6682451501607893, "test_loss": 1.1395994885002865, "test_acc1": 72.43600264068604, "test_acc5": 90.79800241333008, "epoch": 133, "n_parameters": 7298036} -{"train_lr": 0.0023582840569375944, "train_loss": 2.672188775396347, "test_loss": 1.125757076722734, "test_acc1": 72.54000223297119, "test_acc5": 91.01600285644531, "epoch": 134, "n_parameters": 7298036} -{"train_lr": 0.002337703649698603, "train_loss": 2.66116555044651, "test_loss": 1.1256333192919983, "test_acc1": 72.62400260620117, "test_acc5": 90.87600230407715, "epoch": 135, "n_parameters": 7298036} -{"train_lr": 0.002317086757755268, "train_loss": 2.660890611934662, "test_loss": 1.137982293525163, "test_acc1": 72.57200252105713, "test_acc5": 90.88600281188965, "epoch": 136, "n_parameters": 7298036} -{"train_lr": 0.002296435641982043, "train_loss": 2.6541965732097625, "test_loss": 1.0976952187278692, "test_acc1": 73.18000276611328, "test_acc5": 91.16000265014648, "epoch": 137, "n_parameters": 7298036} -{"train_lr": 0.0022757525670064538, "train_loss": 2.662671432805061, "test_loss": 1.145453055115307, "test_acc1": 72.57600254882813, "test_acc5": 90.66000259094238, "epoch": 138, "n_parameters": 7298036} -{"train_lr": 0.0022550398009608037, "train_loss": 2.658701115489006, "test_loss": 1.146797457600341, "test_acc1": 72.69200230163574, "test_acc5": 90.82800274475098, "epoch": 139, "n_parameters": 7298036} -{"train_lr": 0.0022342996152332844, "train_loss": 2.6452890876531603, "test_loss": 1.1185918781248962, "test_acc1": 73.13400246582032, "test_acc5": 91.2000026864624, "epoch": 140, "n_parameters": 7298036} -{"train_lr": 0.0022135342842189523, "train_loss": 2.645230135345459, "test_loss": 1.096876362886499, "test_acc1": 73.4040028112793, "test_acc5": 91.27400221801757, "epoch": 141, "n_parameters": 7298036} -{"train_lr": 0.002192746085070428, "train_loss": 2.6305364119291306, "test_loss": 1.1021068733842934, "test_acc1": 73.10200248565674, "test_acc5": 91.13600251342774, "epoch": 142, "n_parameters": 7298036} -{"train_lr": 0.0021719372974480025, "train_loss": 2.6396239295244217, "test_loss": 1.0678150322945679, "test_acc1": 73.52000241149902, "test_acc5": 91.50400254425048, "epoch": 143, "n_parameters": 7298036} -{"train_lr": 0.0021511102032696337, "train_loss": 2.636941668534279, "test_loss": 1.065899118561955, "test_acc1": 73.86600244598388, "test_acc5": 91.51000258666993, "epoch": 144, "n_parameters": 7298036} -{"train_lr": 0.0021302670864609768, "train_loss": 2.633192768931389, "test_loss": 1.0874447515782188, "test_acc1": 73.45200254089356, "test_acc5": 91.37200261688233, "epoch": 145, "n_parameters": 7298036} -{"train_lr": 0.0021094102327046753, "train_loss": 2.617673898553848, "test_loss": 1.073324018760639, "test_acc1": 73.51200246826171, "test_acc5": 91.35800269714356, "epoch": 146, "n_parameters": 7298036} -{"train_lr": 0.0020885419291897665, "train_loss": 2.6299877017498017, "test_loss": 1.1113169386106379, "test_acc1": 73.32200234527588, "test_acc5": 91.38400247131348, "epoch": 147, "n_parameters": 7298036} -{"train_lr": 0.0020676644643608877, "train_loss": 2.6255562224388123, "test_loss": 1.0910054077120388, "test_acc1": 73.59400245788574, "test_acc5": 91.48600235992431, "epoch": 148, "n_parameters": 7298036} -{"train_lr": 0.0020467801276673257, "train_loss": 2.6155716257095336, 
"test_loss": 1.1059588443707018, "test_acc1": 72.90600264312744, "test_acc5": 91.16000270843506, "epoch": 149, "n_parameters": 7298036} -{"train_lr": 0.002025891209311914, "train_loss": 2.6176024052858353, "test_loss": 1.0937925691113752, "test_acc1": 73.35200236755371, "test_acc5": 91.28800264709473, "epoch": 150, "n_parameters": 7298036} -{"train_lr": 0.0020050000000000176, "train_loss": 2.609076938462257, "test_loss": 1.102897394229384, "test_acc1": 73.23000235046386, "test_acc5": 91.16600265869141, "epoch": 151, "n_parameters": 7298036} -{"train_lr": 0.0019841087906880845, "train_loss": 2.6099142376422884, "test_loss": 1.0829623238567043, "test_acc1": 73.63200246795654, "test_acc5": 91.44400250579834, "epoch": 152, "n_parameters": 7298036} -{"train_lr": 0.0019632198723327173, "train_loss": 2.599513148832321, "test_loss": 1.0756400258225554, "test_acc1": 73.67000248443604, "test_acc5": 91.43400269256591, "epoch": 153, "n_parameters": 7298036} -{"train_lr": 0.0019423355356391558, "train_loss": 2.6100772357940674, "test_loss": 1.0807239500915302, "test_acc1": 73.85400241577149, "test_acc5": 91.4100024520874, "epoch": 154, "n_parameters": 7298036} -{"train_lr": 0.001921458070810235, "train_loss": 2.6054155319690704, "test_loss": 1.055833689211046, "test_acc1": 73.72000239837647, "test_acc5": 91.62200263061523, "epoch": 155, "n_parameters": 7298036} -{"train_lr": 0.0019005897672953243, "train_loss": 2.5918313050985335, "test_loss": 1.069866947391454, "test_acc1": 73.83000217437744, "test_acc5": 91.63600270599365, "epoch": 156, "n_parameters": 7298036} -{"train_lr": 0.0018797329135390225, "train_loss": 2.590130034685135, "test_loss": 1.0839287200394798, "test_acc1": 73.76000243286133, "test_acc5": 91.5540023046875, "epoch": 157, "n_parameters": 7298036} -{"train_lr": 0.0018588897967303623, "train_loss": 2.5789262905836106, "test_loss": 1.094087970607421, "test_acc1": 73.48200239990234, "test_acc5": 91.44800263641358, "epoch": 158, "n_parameters": 7298036} -{"train_lr": 0.0018380627025520412, "train_loss": 2.5983130114555357, "test_loss": 1.0942513655652018, "test_acc1": 73.7760027835083, "test_acc5": 91.4140024130249, "epoch": 159, "n_parameters": 7298036} -{"train_lr": 0.0018172539149295633, "train_loss": 2.5744936166763304, "test_loss": 1.0472531092955786, "test_acc1": 73.95600262969971, "test_acc5": 91.88000227111816, "epoch": 160, "n_parameters": 7298036} -{"train_lr": 0.0017964657157810383, "train_loss": 2.565139394044876, "test_loss": 1.0587081350386143, "test_acc1": 74.14800253204346, "test_acc5": 91.72200297729492, "epoch": 161, "n_parameters": 7298036} -{"train_lr": 0.001775700384766716, "train_loss": 2.5746239227294923, "test_loss": 1.05552039251608, "test_acc1": 74.11000270599365, "test_acc5": 91.76800257476806, "epoch": 162, "n_parameters": 7298036} -{"train_lr": 0.0017549601990392034, "train_loss": 2.5673289600372313, "test_loss": 1.0307811803239233, "test_acc1": 74.46400269500732, "test_acc5": 91.92000245819092, "epoch": 163, "n_parameters": 7298036} -{"train_lr": 0.0017342474329935537, "train_loss": 2.5659127903938295, "test_loss": 1.0496385101886356, "test_acc1": 74.05600251037598, "test_acc5": 91.5360027307129, "epoch": 164, "n_parameters": 7298036} -{"train_lr": 0.0017135643580179704, "train_loss": 2.5670994029045104, "test_loss": 1.062112503192004, "test_acc1": 74.01400244384766, "test_acc5": 91.65800231323242, "epoch": 165, "n_parameters": 7298036} -{"train_lr": 0.001692913242244742, "train_loss": 2.5514530499696733, "test_loss": 1.070329389808809, "test_acc1": 
73.92000223846435, "test_acc5": 91.78000275634766, "epoch": 166, "n_parameters": 7298036} -{"train_lr": 0.00167229635030139, "train_loss": 2.559064494228363, "test_loss": 1.0499488075866419, "test_acc1": 74.24800241241455, "test_acc5": 91.99000242126465, "epoch": 167, "n_parameters": 7298036} -{"train_lr": 0.0016517159430624528, "train_loss": 2.5612751989841462, "test_loss": 1.0270073571625877, "test_acc1": 74.84200262634278, "test_acc5": 91.98200257202149, "epoch": 168, "n_parameters": 7298036} -{"train_lr": 0.0016311742774014707, "train_loss": 2.545291890573502, "test_loss": 1.0255735937286825, "test_acc1": 74.92200251983643, "test_acc5": 92.08200232086182, "epoch": 169, "n_parameters": 7298036} -{"train_lr": 0.001610673605943664, "train_loss": 2.5453528559684755, "test_loss": 1.0377328577725327, "test_acc1": 74.51800267791748, "test_acc5": 91.76400224578857, "epoch": 170, "n_parameters": 7298036} -{"train_lr": 0.001590216176818559, "train_loss": 2.5426793584108354, "test_loss": 1.0584832090227043, "test_acc1": 74.25800265869141, "test_acc5": 91.93400245300293, "epoch": 171, "n_parameters": 7298036} -{"train_lr": 0.0015698042334139058, "train_loss": 2.5389381959915163, "test_loss": 1.0564879566869314, "test_acc1": 74.0300023815918, "test_acc5": 91.59600276733399, "epoch": 172, "n_parameters": 7298036} -{"train_lr": 0.0015494400141292598, "train_loss": 2.5293234243392946, "test_loss": 1.0336648611461414, "test_acc1": 74.71600224578857, "test_acc5": 91.92400248596192, "epoch": 173, "n_parameters": 7298036} -{"train_lr": 0.0015291257521307172, "train_loss": 2.530665948677063, "test_loss": 1.022610088043353, "test_acc1": 74.83600273956299, "test_acc5": 92.0460025390625, "epoch": 174, "n_parameters": 7298036} -{"train_lr": 0.0015088636751061355, "train_loss": 2.5276676824092865, "test_loss": 1.0322164253276938, "test_acc1": 74.64600234893798, "test_acc5": 91.95400270996093, "epoch": 175, "n_parameters": 7298036} -{"train_lr": 0.0014886560050204627, "train_loss": 2.5198777024030687, "test_loss": 1.0236634868471062, "test_acc1": 74.97800259246826, "test_acc5": 92.064002237854, "epoch": 176, "n_parameters": 7298036} -{"train_lr": 0.001468504957872541, "train_loss": 2.524646268892288, "test_loss": 1.021059085998465, "test_acc1": 74.9500024267578, "test_acc5": 92.12600235992431, "epoch": 177, "n_parameters": 7298036} -{"train_lr": 0.0014484127434517488, "train_loss": 2.5196302161455155, "test_loss": 1.0221082026905872, "test_acc1": 74.66400244293213, "test_acc5": 92.08200232086182, "epoch": 178, "n_parameters": 7298036} -{"train_lr": 0.0014283815650957576, "train_loss": 2.5117968289613724, "test_loss": 1.0052364586907274, "test_acc1": 75.01800264251709, "test_acc5": 92.30200265899659, "epoch": 179, "n_parameters": 7298036} -{"train_lr": 0.001408413619449102, "train_loss": 2.5126665816783906, "test_loss": 1.0166301004150335, "test_acc1": 75.05000243988037, "test_acc5": 92.16200264282226, "epoch": 180, "n_parameters": 7298036} -{"train_lr": 0.001388511096221964, "train_loss": 2.514492232513428, "test_loss": 1.0134229335714788, "test_acc1": 75.20800264099121, "test_acc5": 92.2860023348999, "epoch": 181, "n_parameters": 7298036} -{"train_lr": 0.0013686761779503403, "train_loss": 2.517375748872757, "test_loss": 1.0121952365426456, "test_acc1": 75.09000244506836, "test_acc5": 92.17200258972169, "epoch": 182, "n_parameters": 7298036} -{"train_lr": 0.0013489110397565372, "train_loss": 2.503720897722244, "test_loss": 1.0151139480226181, "test_acc1": 75.01000251373291, "test_acc5": 92.18000252716064, 
"epoch": 183, "n_parameters": 7298036} -{"train_lr": 0.0013292178491106572, "train_loss": 2.5023404817819594, "test_loss": 1.0004466872881441, "test_acc1": 75.32800223785401, "test_acc5": 92.30200260284424, "epoch": 184, "n_parameters": 7298036} -{"train_lr": 0.001309598765592993, "train_loss": 2.482987153363228, "test_loss": 0.9924322162919185, "test_acc1": 75.5860023614502, "test_acc5": 92.46600264801026, "epoch": 185, "n_parameters": 7298036} -{"train_lr": 0.0012900559406571204, "train_loss": 2.4928647154808043, "test_loss": 0.9872634424006238, "test_acc1": 75.69200246459961, "test_acc5": 92.38000282348632, "epoch": 186, "n_parameters": 7298036} -{"train_lr": 0.0012705915173940611, "train_loss": 2.491313820171356, "test_loss": 0.9936330614282805, "test_acc1": 75.38000286376953, "test_acc5": 92.58000250823974, "epoch": 187, "n_parameters": 7298036} -{"train_lr": 0.0012512076302971713, "train_loss": 2.4757644552230835, "test_loss": 0.9865674724911943, "test_acc1": 75.51400248077393, "test_acc5": 92.2880024609375, "epoch": 188, "n_parameters": 7298036} -{"train_lr": 0.001231906405028045, "train_loss": 2.474780326962471, "test_loss": 0.978144701132003, "test_acc1": 75.62600256530762, "test_acc5": 92.56400254211425, "epoch": 189, "n_parameters": 7298036} -{"train_lr": 0.0012126899581836061, "train_loss": 2.4858536343336106, "test_loss": 0.9852347856058794, "test_acc1": 75.77600233673095, "test_acc5": 92.63200253631592, "epoch": 190, "n_parameters": 7298036} -{"train_lr": 0.0011935603970637625, "train_loss": 2.4700578172683714, "test_loss": 0.9750669719103504, "test_acc1": 75.95800222412109, "test_acc5": 92.58600217315674, "epoch": 191, "n_parameters": 7298036} -{"train_lr": 0.0011745198194405004, "train_loss": 2.467990641093254, "test_loss": 0.9798409101717612, "test_acc1": 75.6460028829956, "test_acc5": 92.57600241638184, "epoch": 192, "n_parameters": 7298036} -{"train_lr": 0.0011555703133276894, "train_loss": 2.471226690363884, "test_loss": 0.9799047127804336, "test_acc1": 75.86800228302002, "test_acc5": 92.68000223724366, "epoch": 193, "n_parameters": 7298036} -{"train_lr": 0.0011367139567521967, "train_loss": 2.4527605704307556, "test_loss": 0.966713201473741, "test_acc1": 75.95600269866944, "test_acc5": 92.64800272583008, "epoch": 194, "n_parameters": 7298036} -{"train_lr": 0.0011179528175260622, "train_loss": 2.4601806716442107, "test_loss": 0.9572100709466373, "test_acc1": 76.07800235534668, "test_acc5": 92.7900026864624, "epoch": 195, "n_parameters": 7298036} -{"train_lr": 0.0010992889530195922, "train_loss": 2.451168726277351, "test_loss": 0.9533393367686692, "test_acc1": 76.36200225524902, "test_acc5": 92.9700028704834, "epoch": 196, "n_parameters": 7298036} -{"train_lr": 0.0010807244099358875, "train_loss": 2.4444945929050443, "test_loss": 0.9692433559719253, "test_acc1": 75.86600277069091, "test_acc5": 92.50800250213624, "epoch": 197, "n_parameters": 7298036} -{"train_lr": 0.0010622612240862497, "train_loss": 2.4485621072769166, "test_loss": 0.9495304829495794, "test_acc1": 76.5100021496582, "test_acc5": 92.87200252227784, "epoch": 198, "n_parameters": 7298036} -{"train_lr": 0.001043901420167081, "train_loss": 2.450498999595642, "test_loss": 0.9535453295444741, "test_acc1": 76.36200238739013, "test_acc5": 92.82200239440918, "epoch": 199, "n_parameters": 7298036} -{"train_lr": 0.001025647011537802, "train_loss": 2.4409843618392943, "test_loss": 0.9678697191617068, "test_acc1": 76.25200285888671, "test_acc5": 92.80800273925782, "epoch": 200, "n_parameters": 7298036} -{"train_lr": 
0.0010075000000000067, "train_loss": 2.441233262586594, "test_loss": 0.9488543179981849, "test_acc1": 76.49600269042969, "test_acc5": 93.05200243530274, "epoch": 201, "n_parameters": 7298036} -{"train_lr": 0.0009894623755779988, "train_loss": 2.4323109345197675, "test_loss": 0.9414531415437951, "test_acc1": 76.55800258758545, "test_acc5": 92.93600268096924, "epoch": 202, "n_parameters": 7298036} -{"train_lr": 0.0009715361163006195, "train_loss": 2.4237272392511366, "test_loss": 0.9519220882040613, "test_acc1": 76.3720025982666, "test_acc5": 92.94800246124268, "epoch": 203, "n_parameters": 7298036} -{"train_lr": 0.000953723187984137, "train_loss": 2.418271534848213, "test_loss": 0.9337963420240318, "test_acc1": 76.63800232818603, "test_acc5": 93.0520023324585, "epoch": 204, "n_parameters": 7298036} -{"train_lr": 0.0009360255440169043, "train_loss": 2.4214365202903747, "test_loss": 0.9402041838449591, "test_acc1": 76.50200255767822, "test_acc5": 92.95400252166748, "epoch": 205, "n_parameters": 7298036} -{"train_lr": 0.0009184451251450147, "train_loss": 2.4098754527807236, "test_loss": 0.9438128949088209, "test_acc1": 76.44800239105224, "test_acc5": 92.99400246826171, "epoch": 206, "n_parameters": 7298036} -{"train_lr": 0.0009009838592595214, "train_loss": 2.4277090819835663, "test_loss": 0.935694015420535, "test_acc1": 76.86400263244629, "test_acc5": 93.08600283660888, "epoch": 207, "n_parameters": 7298036} -{"train_lr": 0.0008836436611849946, "train_loss": 2.4191668417453767, "test_loss": 0.935195762444945, "test_acc1": 76.92600268005371, "test_acc5": 93.10000243164062, "epoch": 208, "n_parameters": 7298036} -{"train_lr": 0.0008664264324695524, "train_loss": 2.403905984568596, "test_loss": 0.9471020100309568, "test_acc1": 76.67000256530761, "test_acc5": 92.99200243530274, "epoch": 209, "n_parameters": 7298036} -{"train_lr": 0.0008493340611763644, "train_loss": 2.3973234872102736, "test_loss": 0.9271602608701762, "test_acc1": 77.0720023892212, "test_acc5": 93.07200271484375, "epoch": 210, "n_parameters": 7298036} -{"train_lr": 0.0008323684216765116, "train_loss": 2.3993312264204025, "test_loss": 0.9259937199599603, "test_acc1": 77.05200238098145, "test_acc5": 93.15400244171143, "epoch": 211, "n_parameters": 7298036} -{"train_lr": 0.0008155313744436086, "train_loss": 2.4025443796873094, "test_loss": 0.9203305597252706, "test_acc1": 77.08800256439208, "test_acc5": 93.29200261932372, "epoch": 212, "n_parameters": 7298036} -{"train_lr": 0.0007988247658495707, "train_loss": 2.401576587986946, "test_loss": 0.9215519962941899, "test_acc1": 77.00200259613037, "test_acc5": 93.33200246917724, "epoch": 213, "n_parameters": 7298036} -{"train_lr": 0.0007822504279623159, "train_loss": 2.3891451785087585, "test_loss": 0.9148973217781853, "test_acc1": 77.01400268524169, "test_acc5": 93.33600244415283, "epoch": 214, "n_parameters": 7298036} -{"train_lr": 0.0007658101783447642, "train_loss": 2.3877255731105804, "test_loss": 0.9144268081906963, "test_acc1": 77.20400244812012, "test_acc5": 93.30600258361817, "epoch": 215, "n_parameters": 7298036} -{"train_lr": 0.000749505819855568, "train_loss": 2.387241353750229, "test_loss": 0.9090447513496175, "test_acc1": 77.3580022052002, "test_acc5": 93.544002449646, "epoch": 216, "n_parameters": 7298036} -{"train_lr": 0.0007333391404513692, "train_loss": 2.3693468382120133, "test_loss": 0.9142299625365173, "test_acc1": 77.10800272064209, "test_acc5": 93.47200246368408, "epoch": 217, "n_parameters": 7298036} -{"train_lr": 0.0007173119129907244, "train_loss": 
2.380033146071434, "test_loss": 0.9170122880707768, "test_acc1": 77.19400256011963, "test_acc5": 93.36600247253418, "epoch": 218, "n_parameters": 7298036} -{"train_lr": 0.0007014258950397421, "train_loss": 2.3705189427137374, "test_loss": 0.9141664559788564, "test_acc1": 77.37000247375488, "test_acc5": 93.48600272125245, "epoch": 219, "n_parameters": 7298036} -{"train_lr": 0.0006856828286793126, "train_loss": 2.3646087681293486, "test_loss": 0.900331899523735, "test_acc1": 77.44400247833252, "test_acc5": 93.56400253326416, "epoch": 220, "n_parameters": 7298036} -{"train_lr": 0.000670084440314078, "train_loss": 2.356088075017929, "test_loss": 0.9233429087873768, "test_acc1": 77.17600230957031, "test_acc5": 93.35800238494873, "epoch": 221, "n_parameters": 7298036} -{"train_lr": 0.0006546324404830861, "train_loss": 2.358073934864998, "test_loss": 0.8934592497261131, "test_acc1": 77.74200239471436, "test_acc5": 93.63600218170166, "epoch": 222, "n_parameters": 7298036} -{"train_lr": 0.0006393285236722668, "train_loss": 2.351995425581932, "test_loss": 0.882089408881524, "test_acc1": 77.94000262695313, "test_acc5": 93.65200243347168, "epoch": 223, "n_parameters": 7298036} -{"train_lr": 0.0006241743681285438, "train_loss": 2.358582537341118, "test_loss": 0.9062919152133605, "test_acc1": 77.5620029220581, "test_acc5": 93.61400241882325, "epoch": 224, "n_parameters": 7298036} -{"train_lr": 0.0006091716356758274, "train_loss": 2.3478543623685835, "test_loss": 0.8884250271846267, "test_acc1": 77.76600274871826, "test_acc5": 93.57200244903565, "epoch": 225, "n_parameters": 7298036} -{"train_lr": 0.0005943219715328445, "train_loss": 2.348215368962288, "test_loss": 0.8762840334983433, "test_acc1": 77.88600273223877, "test_acc5": 93.65200267578125, "epoch": 226, "n_parameters": 7298036} -{"train_lr": 0.000579627004132555, "train_loss": 2.3310361555814745, "test_loss": 0.8773399473113173, "test_acc1": 77.9220025100708, "test_acc5": 93.71200258087158, "epoch": 227, "n_parameters": 7298036} -{"train_lr": 0.0005650883449437713, "train_loss": 2.3426754274368284, "test_loss": 0.886272586443845, "test_acc1": 77.82200242004394, "test_acc5": 93.69200261474609, "epoch": 228, "n_parameters": 7298036} -{"train_lr": 0.0005507075882942857, "train_loss": 2.3332966658592222, "test_loss": 0.8684262324343709, "test_acc1": 78.20600274932862, "test_acc5": 93.86000263916016, "epoch": 229, "n_parameters": 7298036} -{"train_lr": 0.0005364863111961281, "train_loss": 2.3289395385742186, "test_loss": 0.8758952036938247, "test_acc1": 78.15200259613037, "test_acc5": 93.7320026727295, "epoch": 230, "n_parameters": 7298036} -{"train_lr": 0.0005224260731725992, "train_loss": 2.315483183884621, "test_loss": 0.8748529277303639, "test_acc1": 78.02200243652344, "test_acc5": 93.69800261657714, "epoch": 231, "n_parameters": 7298036} -{"train_lr": 0.0005085284160872295, "train_loss": 2.3252110177993774, "test_loss": 0.8698054703281206, "test_acc1": 78.30400259246827, "test_acc5": 93.77200234466552, "epoch": 232, "n_parameters": 7298036} -{"train_lr": 0.0004947948639747458, "train_loss": 2.3169444283008573, "test_loss": 0.8650499735684956, "test_acc1": 78.22400266021728, "test_acc5": 93.82600261108398, "epoch": 233, "n_parameters": 7298036} -{"train_lr": 0.0004812269228738896, "train_loss": 2.314324055957794, "test_loss": 0.8557157349937102, "test_acc1": 78.45800271911621, "test_acc5": 93.95000275604248, "epoch": 234, "n_parameters": 7298036} -{"train_lr": 0.00046782608066229685, "train_loss": 2.313182456660271, "test_loss": 
0.8649136599372415, "test_acc1": 78.44400260894776, "test_acc5": 93.86000256408691, "epoch": 235, "n_parameters": 7298036} -{"train_lr": 0.00045459380689333805, "train_loss": 2.3138422129392624, "test_loss": 0.860244113513652, "test_acc1": 78.35000234313965, "test_acc5": 93.88400273986817, "epoch": 236, "n_parameters": 7298036} -{"train_lr": 0.0004415315526349521, "train_loss": 2.3024248414993287, "test_loss": 0.8798057419412276, "test_acc1": 78.02800278808594, "test_acc5": 93.77800247619629, "epoch": 237, "n_parameters": 7298036} -{"train_lr": 0.00042864075031049536, "train_loss": 2.2970545565366747, "test_loss": 0.8487709576592726, "test_acc1": 78.6280022390747, "test_acc5": 94.07000256622314, "epoch": 238, "n_parameters": 7298036} -{"train_lr": 0.00041592281354172557, "train_loss": 2.3093529156684873, "test_loss": 0.8588257627013851, "test_acc1": 78.4680025064087, "test_acc5": 93.9420028527832, "epoch": 239, "n_parameters": 7298036} -{"train_lr": 0.0004033791369937298, "train_loss": 2.293317923307419, "test_loss": 0.8553368249798522, "test_acc1": 78.4760025592041, "test_acc5": 94.04400212127686, "epoch": 240, "n_parameters": 7298036} -{"train_lr": 0.00039101109622197687, "train_loss": 2.280245849752426, "test_loss": 0.8445835705189144, "test_acc1": 78.72400256011963, "test_acc5": 94.05400254486084, "epoch": 241, "n_parameters": 7298036} -{"train_lr": 0.0003788200475215383, "train_loss": 2.2822407207012176, "test_loss": 0.8408517092466354, "test_acc1": 78.7200028048706, "test_acc5": 94.11400285797119, "epoch": 242, "n_parameters": 7298036} -{"train_lr": 0.00036680732777826604, "train_loss": 2.282271601366997, "test_loss": 0.845338562174755, "test_acc1": 78.59000238342286, "test_acc5": 94.11800221069336, "epoch": 243, "n_parameters": 7298036} -{"train_lr": 0.0003549742543222451, "train_loss": 2.278850628185272, "test_loss": 0.8557130790808621, "test_acc1": 78.7060029421997, "test_acc5": 94.00200244445801, "epoch": 244, "n_parameters": 7298036} -{"train_lr": 0.00034332212478335543, "train_loss": 2.2751825882434846, "test_loss": 0.8556398442562889, "test_acc1": 78.7100024987793, "test_acc5": 94.11800248352051, "epoch": 245, "n_parameters": 7298036} -{"train_lr": 0.0003318522169488759, "train_loss": 2.2821692679405214, "test_loss": 0.8407476897187093, "test_acc1": 78.87000244384765, "test_acc5": 94.14800241699218, "epoch": 246, "n_parameters": 7298036} -{"train_lr": 0.00032056578862347564, "train_loss": 2.276680469250679, "test_loss": 0.8441872428006986, "test_acc1": 78.73000226776124, "test_acc5": 94.07400259796142, "epoch": 247, "n_parameters": 7298036} -{"train_lr": 0.0003094640774912099, "train_loss": 2.271435017633438, "test_loss": 0.8340558975058443, "test_acc1": 78.99000290039062, "test_acc5": 94.21200216156006, "epoch": 248, "n_parameters": 7298036} -{"train_lr": 0.0002985483009797872, "train_loss": 2.2566624700069426, "test_loss": 0.8344247299520409, "test_acc1": 78.94000246734619, "test_acc5": 94.23800253417969, "epoch": 249, "n_parameters": 7298036} -{"train_lr": 0.00028781965612712917, "train_loss": 2.251102318882942, "test_loss": 0.8305677350829629, "test_acc1": 79.15000223693848, "test_acc5": 94.1400025189209, "epoch": 250, "n_parameters": 7298036} -{"train_lr": 0.00027727931945004304, "train_loss": 2.253050202703476, "test_loss": 0.8358966219512856, "test_acc1": 78.90200239776611, "test_acc5": 94.20400263977051, "epoch": 251, "n_parameters": 7298036} -{"train_lr": 0.00026692844681522316, "train_loss": 2.2593398396730424, "test_loss": 0.8231533912613112, "test_acc1": 
79.04600257568359, "test_acc5": 94.37200248016357, "epoch": 252, "n_parameters": 7298036} -{"train_lr": 0.0002567681733124936, "train_loss": 2.2435153215646744, "test_loss": 0.8257213627152583, "test_acc1": 79.11000237304688, "test_acc5": 94.29400258728027, "epoch": 253, "n_parameters": 7298036} -{"train_lr": 0.00024679961313034334, "train_loss": 2.245332973885536, "test_loss": 0.8196729789761936, "test_acc1": 79.21000270202637, "test_acc5": 94.39600272064209, "epoch": 254, "n_parameters": 7298036} -{"train_lr": 0.0002370238594337239, "train_loss": 2.2327603429079055, "test_loss": 0.8196848937693764, "test_acc1": 79.22600275787353, "test_acc5": 94.38400221343994, "epoch": 255, "n_parameters": 7298036} -{"train_lr": 0.00022744198424420818, "train_loss": 2.240685054731369, "test_loss": 0.8162516517674222, "test_acc1": 79.25200242126465, "test_acc5": 94.40600264526367, "epoch": 256, "n_parameters": 7298036} -{"train_lr": 0.00021805503832237022, "train_loss": 2.2454025808811187, "test_loss": 0.8170623119701358, "test_acc1": 79.28200238464356, "test_acc5": 94.36400240875244, "epoch": 257, "n_parameters": 7298036} -{"train_lr": 0.0002088640510526241, "train_loss": 2.237302639555931, "test_loss": 0.8211817377630402, "test_acc1": 79.21600240692139, "test_acc5": 94.37000264526367, "epoch": 258, "n_parameters": 7298036} -{"train_lr": 0.00019987003033028822, "train_loss": 2.2320779779434203, "test_loss": 0.8176076857044416, "test_acc1": 79.3180025177002, "test_acc5": 94.34800235473632, "epoch": 259, "n_parameters": 7298036} -{"train_lr": 0.00019107396245110126, "train_loss": 2.22953121881485, "test_loss": 0.812110769617207, "test_acc1": 79.26400295593261, "test_acc5": 94.45000278533935, "epoch": 260, "n_parameters": 7298036} -{"train_lr": 0.00018247681200301023, "train_loss": 2.2351234139680862, "test_loss": 0.8128473348915577, "test_acc1": 79.34400265167237, "test_acc5": 94.44800239685058, "epoch": 261, "n_parameters": 7298036} -{"train_lr": 0.00017407952176045884, "train_loss": 2.226163739156723, "test_loss": 0.8055148644044119, "test_acc1": 79.54800250427246, "test_acc5": 94.5600025253296, "epoch": 262, "n_parameters": 7298036} -{"train_lr": 0.0001658830125809418, "train_loss": 2.221389745235443, "test_loss": 0.8135875726447386, "test_acc1": 79.45600261260986, "test_acc5": 94.4400023880005, "epoch": 263, "n_parameters": 7298036} -{"train_lr": 0.0001578881833040612, "train_loss": 2.2171045988321305, "test_loss": 0.8080706245758954, "test_acc1": 79.49400263000489, "test_acc5": 94.54400238891601, "epoch": 264, "n_parameters": 7298036} -{"train_lr": 0.00015009591065294023, "train_loss": 2.2183821927785874, "test_loss": 0.8086656200535157, "test_acc1": 79.56000248229981, "test_acc5": 94.55400251739502, "epoch": 265, "n_parameters": 7298036} -{"train_lr": 0.00014250704913808254, "train_loss": 2.2133004793405533, "test_loss": 0.8068831335095799, "test_acc1": 79.53600268188477, "test_acc5": 94.60200253967285, "epoch": 266, "n_parameters": 7298036} -{"train_lr": 0.00013512243096367772, "train_loss": 2.2120785457134247, "test_loss": 0.8043271487250048, "test_acc1": 79.4920024874878, "test_acc5": 94.59600270324707, "epoch": 267, "n_parameters": 7298036} -{"train_lr": 0.00012794286593631978, "train_loss": 2.210547281074524, "test_loss": 0.8047707751393318, "test_acc1": 79.66600250335694, "test_acc5": 94.54600282165528, "epoch": 268, "n_parameters": 7298036} -{"train_lr": 0.00012096914137622728, "train_loss": 2.2121072183132173, "test_loss": 0.8004494153839701, "test_acc1": 79.60400265441895, "test_acc5": 
94.56600245880126, "epoch": 269, "n_parameters": 7298036} -{"train_lr": 0.00011420202203087673, "train_loss": 2.2009074617147446, "test_loss": 0.8078799924867994, "test_acc1": 79.48200303375243, "test_acc5": 94.47600238006592, "epoch": 270, "n_parameters": 7298036} -{"train_lr": 0.00010764224999117014, "train_loss": 2.2026195067405703, "test_loss": 0.8007585969479645, "test_acc1": 79.67600270507812, "test_acc5": 94.60400258666992, "epoch": 271, "n_parameters": 7298036} -{"train_lr": 0.00010129054461002765, "train_loss": 2.208136276245117, "test_loss": 0.7971953138270799, "test_acc1": 79.74200239593506, "test_acc5": 94.60200254760743, "epoch": 272, "n_parameters": 7298036} -{"train_lr": 9.514760242352498e-05, "train_loss": 2.192712025260925, "test_loss": 0.7950009878943948, "test_acc1": 79.75200286407471, "test_acc5": 94.64400266479493, "epoch": 273, "n_parameters": 7298036} -{"train_lr": 8.921409707449843e-05, "train_loss": 2.2070269002914427, "test_loss": 0.7989918994991219, "test_acc1": 79.74200257354737, "test_acc5": 94.68400275695801, "epoch": 274, "n_parameters": 7298036} -{"train_lr": 8.349067923867126e-05, "train_loss": 2.1948891755104065, "test_loss": 0.7960589372498148, "test_acc1": 79.82200262115478, "test_acc5": 94.63600248657227, "epoch": 275, "n_parameters": 7298036} -{"train_lr": 7.797797655330979e-05, "train_loss": 2.1904393648147584, "test_loss": 0.7977451346814632, "test_acc1": 79.80600260131835, "test_acc5": 94.63600219329834, "epoch": 276, "n_parameters": 7298036} -{"train_lr": 7.267659354838016e-05, "train_loss": 2.2024544734477995, "test_loss": 0.8019434246508514, "test_acc1": 79.78400225036621, "test_acc5": 94.59800246124267, "epoch": 277, "n_parameters": 7298036} -{"train_lr": 6.758711158027577e-05, "train_loss": 2.193653433847427, "test_loss": 0.7970996737918433, "test_acc1": 79.8400024874878, "test_acc5": 94.67200272888184, "epoch": 278, "n_parameters": 7298036} -{"train_lr": 6.27100887680448e-05, "train_loss": 2.189904114460945, "test_loss": 0.7924261246533955, "test_acc1": 79.8020026989746, "test_acc5": 94.67000258666992, "epoch": 279, "n_parameters": 7298036} -{"train_lr": 5.8046059932199434e-05, "train_loss": 2.181450237107277, "test_loss": 0.8011857762056238, "test_acc1": 79.72800268829346, "test_acc5": 94.63200260620117, "epoch": 280, "n_parameters": 7298036} -{"train_lr": 5.359553653605823e-05, "train_loss": 2.1920840031147004, "test_loss": 0.7919480132267755, "test_acc1": 79.81400267669677, "test_acc5": 94.6780026904297, "epoch": 281, "n_parameters": 7298036} -{"train_lr": 4.9359006629664604e-05, "train_loss": 2.182497728443146, "test_loss": 0.7965855613789138, "test_acc1": 79.73200253997803, "test_acc5": 94.63200280426025, "epoch": 282, "n_parameters": 7298036} -{"train_lr": 4.533693479626563e-05, "train_loss": 2.1970236100912093, "test_loss": 0.7907672807136003, "test_acc1": 79.94400243164063, "test_acc5": 94.70200276306153, "epoch": 283, "n_parameters": 7298036} -{"train_lr": 4.152976210136288e-05, "train_loss": 2.1822773602485657, "test_loss": 0.7936071955105838, "test_acc1": 79.82200246520996, "test_acc5": 94.7060023361206, "epoch": 284, "n_parameters": 7298036} -{"train_lr": 3.793790604434225e-05, "train_loss": 2.1874619306325913, "test_loss": 0.7895211448125979, "test_acc1": 79.86200253936768, "test_acc5": 94.72200247283935, "epoch": 285, "n_parameters": 7298036} -{"train_lr": 3.456176051270035e-05, "train_loss": 2.1813245223760607, "test_loss": 0.7868078342255425, "test_acc1": 79.90200285186768, "test_acc5": 94.74400233886719, "epoch": 286, 
"n_parameters": 7298036} -{"train_lr": 3.14016957388384e-05, "train_loss": 2.1804779521226885, "test_loss": 0.7904232043553802, "test_acc1": 79.93400257385254, "test_acc5": 94.75000241973876, "epoch": 287, "n_parameters": 7298036} -{"train_lr": 2.845805825946997e-05, "train_loss": 2.174716061401367, "test_loss": 0.7884576009476886, "test_acc1": 79.99600261260986, "test_acc5": 94.71400280761719, "epoch": 288, "n_parameters": 7298036} -{"train_lr": 2.5731170877616888e-05, "train_loss": 2.174104197716713, "test_loss": 0.7868640385568142, "test_acc1": 79.95200243438721, "test_acc5": 94.75800211517334, "epoch": 289, "n_parameters": 7298036} -{"train_lr": 2.3221332627209112e-05, "train_loss": 2.1675359332323074, "test_loss": 0.7886127183104262, "test_acc1": 79.94800231384278, "test_acc5": 94.73400231933594, "epoch": 290, "n_parameters": 7298036} -{"train_lr": 2.0928818740294644e-05, "train_loss": 2.170308047246933, "test_loss": 0.7926525125170455, "test_acc1": 79.8780027609253, "test_acc5": 94.6700023361206, "epoch": 291, "n_parameters": 7298036} -{"train_lr": 1.885388061685525e-05, "train_loss": 2.1790167866945267, "test_loss": 0.7967344713123405, "test_acc1": 79.79200261871338, "test_acc5": 94.64400272949219, "epoch": 292, "n_parameters": 7298036} -{"train_lr": 1.6996745797238736e-05, "train_loss": 2.179533949279785, "test_loss": 0.7884140571250635, "test_acc1": 79.95200244232177, "test_acc5": 94.71000282653809, "epoch": 293, "n_parameters": 7298036} -{"train_lr": 1.5357617937206103e-05, "train_loss": 2.1743434730529785, "test_loss": 0.7847247136866345, "test_acc1": 80.01200260406495, "test_acc5": 94.7580024810791, "epoch": 294, "n_parameters": 7298036} -{"train_lr": 1.393667678559817e-05, "train_loss": 2.175496062064171, "test_loss": 0.7886179560685859, "test_acc1": 79.93800254882812, "test_acc5": 94.74000214355469, "epoch": 295, "n_parameters": 7298036} -{"train_lr": 1.2734078164625274e-05, "train_loss": 2.18176610019207, "test_loss": 0.7914153278312263, "test_acc1": 79.89400266326905, "test_acc5": 94.71400241973878, "epoch": 296, "n_parameters": 7298036} -{"train_lr": 1.1749953952777368e-05, "train_loss": 2.1861217031002043, "test_loss": 0.784393009455765, "test_acc1": 79.98000264587402, "test_acc5": 94.77000222442626, "epoch": 297, "n_parameters": 7298036} -{"train_lr": 1.0984412070365348e-05, "train_loss": 2.1724857716321946, "test_loss": 0.7882706612786826, "test_acc1": 79.89600235076904, "test_acc5": 94.74000237182617, "epoch": 298, "n_parameters": 7298036} -{"train_lr": 1.0437536467683126e-05, "train_loss": 2.1720380091667177, "test_loss": 0.785580664215719, "test_acc1": 79.94200233398438, "test_acc5": 94.76200281585693, "epoch": 299, "n_parameters": 7298036} diff --git a/cv/classification/repvit/pytorch/logs/repvit_m1_0_distill_450e.txt b/cv/classification/repvit/pytorch/logs/repvit_m1_0_distill_450e.txt deleted file mode 100644 index 28c7e4d7..00000000 --- a/cv/classification/repvit/pytorch/logs/repvit_m1_0_distill_450e.txt +++ /dev/null @@ -1,450 +0,0 @@ -{"train_lr": 1.000000000000014e-06, "train_loss": 6.996546177864075, "test_loss": 6.946746033780715, "test_acc1": 0.09600000640869141, "test_acc5": 0.5200000265693665, "epoch": 0, "n_parameters": 7298036} -{"train_lr": 1.000000000000014e-06, "train_loss": 6.990341601657867, "test_loss": 6.930841431898229, "test_acc1": 0.0920000065612793, "test_acc5": 0.5660000239419937, "epoch": 1, "n_parameters": 7298036} -{"train_lr": 0.0008007999999999933, "train_loss": 6.424154660987854, "test_loss": 5.146933240049026, "test_acc1": 
8.560000221939086, "test_acc5": 23.330000695114137, "epoch": 2, "n_parameters": 7298036} -{"train_lr": 0.0016005999999999787, "train_loss": 5.771530437088012, "test_loss": 3.934614484801012, "test_acc1": 21.14600069671631, "test_acc5": 43.7140012550354, "epoch": 3, "n_parameters": 7298036} -{"train_lr": 0.0024003999999999835, "train_loss": 5.161794794559479, "test_loss": 3.1630375735899983, "test_acc1": 32.61600079345703, "test_acc5": 58.09200160827637, "epoch": 4, "n_parameters": 7298036} -{"train_lr": 0.0032001999999999873, "train_loss": 4.732723784017563, "test_loss": 2.7384808677084305, "test_acc1": 40.21400119918823, "test_acc5": 65.87000196197509, "epoch": 5, "n_parameters": 7298036} -{"train_lr": 0.003998784699903044, "train_loss": 4.462127535390854, "test_loss": 2.55808124647421, "test_acc1": 44.04400123962402, "test_acc5": 69.28200220092774, "epoch": 6, "n_parameters": 7298036} -{"train_lr": 0.0039982500460471835, "train_loss": 4.22705719037056, "test_loss": 2.3208559690152897, "test_acc1": 48.134001237792965, "test_acc5": 73.23000254028321, "epoch": 7, "n_parameters": 7298036} -{"train_lr": 0.003997618243996162, "train_loss": 4.049244967365265, "test_loss": 2.1220928334137974, "test_acc1": 51.922001387634275, "test_acc5": 76.37800263061523, "epoch": 8, "n_parameters": 7298036} -{"train_lr": 0.003996889324543062, "train_loss": 3.912545064163208, "test_loss": 2.0508745637010124, "test_acc1": 53.47800132232666, "test_acc5": 77.63000273498535, "epoch": 9, "n_parameters": 7298036} -{"train_lr": 0.003996063323214417, "train_loss": 3.804067411851883, "test_loss": 1.927887054927209, "test_acc1": 55.85200148071289, "test_acc5": 79.63200253540039, "epoch": 10, "n_parameters": 7298036} -{"train_lr": 0.003995140280268348, "train_loss": 3.712746146869659, "test_loss": 1.8401245610678898, "test_acc1": 57.380001499328614, "test_acc5": 80.93400299713134, "epoch": 11, "n_parameters": 7298036} -{"train_lr": 0.003994120240692708, "train_loss": 3.651302821683884, "test_loss": 1.8067224538501572, "test_acc1": 58.28600157806397, "test_acc5": 81.16400253082276, "epoch": 12, "n_parameters": 7298036} -{"train_lr": 0.003993003254202742, "train_loss": 3.581402754640579, "test_loss": 1.7849617508404396, "test_acc1": 58.77400154541016, "test_acc5": 82.04600223754883, "epoch": 13, "n_parameters": 7298036} -{"train_lr": 0.0039917893752388625, "train_loss": 3.5190082602977752, "test_loss": 1.688932638396235, "test_acc1": 60.80400152557373, "test_acc5": 83.11200221343994, "epoch": 14, "n_parameters": 7298036} -{"train_lr": 0.003990478662963736, "train_loss": 3.470579928636551, "test_loss": 1.688485283185454, "test_acc1": 61.07200178009033, "test_acc5": 83.15600284576416, "epoch": 15, "n_parameters": 7298036} -{"train_lr": 0.003989071181259754, "train_loss": 3.438385342121124, "test_loss": 1.675506721086362, "test_acc1": 60.736001692810056, "test_acc5": 83.19400224121094, "epoch": 16, "n_parameters": 7298036} -{"train_lr": 0.0039875669987254015, "train_loss": 3.400655124950409, "test_loss": 1.6038379533325924, "test_acc1": 62.27600187866211, "test_acc5": 84.28400218597412, "epoch": 17, "n_parameters": 7298036} -{"train_lr": 0.003985966188672574, "train_loss": 3.3900198633670806, "test_loss": 1.593461950912195, "test_acc1": 62.62000177429199, "test_acc5": 84.39800244934082, "epoch": 18, "n_parameters": 7298036} -{"train_lr": 0.0039842688291223715, "train_loss": 3.339180280160904, "test_loss": 1.6007849210325409, "test_acc1": 62.26400176940918, "test_acc5": 84.73000235870362, "epoch": 19, "n_parameters": 7298036} 
-{"train_lr": 0.003982475002801825, "train_loss": 3.3164612229824066, "test_loss": 1.5568593061145615, "test_acc1": 63.632001703796384, "test_acc5": 85.09400252960205, "epoch": 20, "n_parameters": 7298036} -{"train_lr": 0.003980584797139465, "train_loss": 3.299241796064377, "test_loss": 1.557809076326735, "test_acc1": 63.87600203216553, "test_acc5": 85.29800249328613, "epoch": 21, "n_parameters": 7298036} -{"train_lr": 0.003978598304261148, "train_loss": 3.257146124649048, "test_loss": 1.551576562664088, "test_acc1": 64.13800188995361, "test_acc5": 85.32400241729736, "epoch": 22, "n_parameters": 7298036} -{"train_lr": 0.003976515620985842, "train_loss": 3.2442416013240813, "test_loss": 1.4983852102476007, "test_acc1": 64.97000198303223, "test_acc5": 85.95600255157471, "epoch": 23, "n_parameters": 7298036} -{"train_lr": 0.0039743368488206155, "train_loss": 3.2335712834358215, "test_loss": 1.541557899731047, "test_acc1": 64.14800176574707, "test_acc5": 85.6820027810669, "epoch": 24, "n_parameters": 7298036} -{"train_lr": 0.0039720620939556715, "train_loss": 3.217523281669617, "test_loss": 1.503864937407129, "test_acc1": 64.67200161315918, "test_acc5": 85.61200251434326, "epoch": 25, "n_parameters": 7298036} -{"train_lr": 0.003969691467259384, "train_loss": 3.1977948766231536, "test_loss": 1.5052950390998054, "test_acc1": 64.66000168640137, "test_acc5": 85.9620022668457, "epoch": 26, "n_parameters": 7298036} -{"train_lr": 0.003967225084272694, "train_loss": 3.1629677473068236, "test_loss": 1.4796724766492844, "test_acc1": 65.30800171081543, "test_acc5": 86.1780025515747, "epoch": 27, "n_parameters": 7298036} -{"train_lr": 0.003964663065203757, "train_loss": 3.161825146818161, "test_loss": 1.4923765356926357, "test_acc1": 65.17200177581788, "test_acc5": 86.24800254943848, "epoch": 28, "n_parameters": 7298036} -{"train_lr": 0.003962005534921608, "train_loss": 3.1509854679107665, "test_loss": 1.4240203420905506, "test_acc1": 66.10000195068359, "test_acc5": 87.04600241790772, "epoch": 29, "n_parameters": 7298036} -{"train_lr": 0.003959252622950646, "train_loss": 3.1474906748533247, "test_loss": 1.4429662354728754, "test_acc1": 65.80000220672608, "test_acc5": 86.52000273376464, "epoch": 30, "n_parameters": 7298036} -{"train_lr": 0.003956404463463954, "train_loss": 3.119138572573662, "test_loss": 1.4533341176369612, "test_acc1": 65.87800203430176, "test_acc5": 86.55000283660888, "epoch": 31, "n_parameters": 7298036} -{"train_lr": 0.003953461195276696, "train_loss": 3.113514545559883, "test_loss": 1.429458456004367, "test_acc1": 66.5200019934082, "test_acc5": 87.27600300476074, "epoch": 32, "n_parameters": 7298036} -{"train_lr": 0.003950422961839594, "train_loss": 3.106103778195381, "test_loss": 1.4381398107199108, "test_acc1": 66.16400213623047, "test_acc5": 86.67600251525879, "epoch": 33, "n_parameters": 7298036} -{"train_lr": 0.00394728991123201, "train_loss": 3.0972593348026276, "test_loss": 1.4275717564365442, "test_acc1": 66.47400203338623, "test_acc5": 87.06400254882813, "epoch": 34, "n_parameters": 7298036} -{"train_lr": 0.003944062196154177, "train_loss": 3.08783807785511, "test_loss": 1.3976242656216902, "test_acc1": 66.89400195465088, "test_acc5": 87.27200278015137, "epoch": 35, "n_parameters": 7298036} -{"train_lr": 0.003940739973920592, "train_loss": 3.0789684978723524, "test_loss": 1.4543339767876793, "test_acc1": 66.41800170806884, "test_acc5": 86.78400277984619, "epoch": 36, "n_parameters": 7298036} -{"train_lr": 0.003937323406451619, "train_loss": 3.0713806258916856, "test_loss": 
1.387839191538446, "test_acc1": 67.0920018786621, "test_acc5": 87.4840026071167, "epoch": 37, "n_parameters": 7298036} -{"train_lr": 0.003933812660265883, "train_loss": 3.064801841402054, "test_loss": 1.44850707185619, "test_acc1": 66.3140020501709, "test_acc5": 87.07400236999511, "epoch": 38, "n_parameters": 7298036} -{"train_lr": 0.003930207906472293, "train_loss": 3.0542320673227312, "test_loss": 1.4010259403901941, "test_acc1": 66.69200198699951, "test_acc5": 87.3300025805664, "epoch": 39, "n_parameters": 7298036} -{"train_lr": 0.003926509320761305, "train_loss": 3.0393110852956773, "test_loss": 1.3811640537836973, "test_acc1": 67.46200188964843, "test_acc5": 87.60600215759277, "epoch": 40, "n_parameters": 7298036} -{"train_lr": 0.003922717083396902, "train_loss": 3.0367018224716187, "test_loss": 1.3729932711404913, "test_acc1": 67.5600020477295, "test_acc5": 87.73800278839111, "epoch": 41, "n_parameters": 7298036} -{"train_lr": 0.003918831379207381, "train_loss": 3.027012432765961, "test_loss": 1.3684851768262245, "test_acc1": 67.52800219787598, "test_acc5": 87.88000210906982, "epoch": 42, "n_parameters": 7298036} -{"train_lr": 0.003914852397576493, "train_loss": 3.0250262630462648, "test_loss": 1.3983463097144575, "test_acc1": 67.27600192077637, "test_acc5": 87.64800242828369, "epoch": 43, "n_parameters": 7298036} -{"train_lr": 0.003910780332434081, "train_loss": 3.017699542379379, "test_loss": 1.415415186654119, "test_acc1": 67.1000021383667, "test_acc5": 87.5480027307129, "epoch": 44, "n_parameters": 7298036} -{"train_lr": 0.003906615382246946, "train_loss": 3.0064922607421876, "test_loss": 1.3556359923061203, "test_acc1": 67.68400204711914, "test_acc5": 87.68800256378174, "epoch": 45, "n_parameters": 7298036} -{"train_lr": 0.0039023577500088094, "train_loss": 2.9964697070121766, "test_loss": 1.391698279801537, "test_acc1": 67.60000197937012, "test_acc5": 87.44200266052246, "epoch": 46, "n_parameters": 7298036} -{"train_lr": 0.003898007643230756, "train_loss": 3.0019940599918367, "test_loss": 1.4228303270304905, "test_acc1": 67.11800195343018, "test_acc5": 87.53600275268555, "epoch": 47, "n_parameters": 7298036} -{"train_lr": 0.0038935652739308757, "train_loss": 2.999621746826172, "test_loss": 1.3567072731607102, "test_acc1": 67.94600192108155, "test_acc5": 87.91800277069092, "epoch": 48, "n_parameters": 7298036} -{"train_lr": 0.003889030858623732, "train_loss": 2.992978629755974, "test_loss": 1.3362617273541058, "test_acc1": 68.33800235534667, "test_acc5": 88.16000227600098, "epoch": 49, "n_parameters": 7298036} -{"train_lr": 0.003884404618310635, "train_loss": 2.990533433866501, "test_loss": 1.3082674478783327, "test_acc1": 68.21600212554932, "test_acc5": 88.25600245178222, "epoch": 50, "n_parameters": 7298036} -{"train_lr": 0.0038796867784678503, "train_loss": 2.9773987127780916, "test_loss": 1.3554469303173178, "test_acc1": 67.91600196441651, "test_acc5": 87.90200243804932, "epoch": 51, "n_parameters": 7298036} -{"train_lr": 0.0038748775690362956, "train_loss": 2.9775673088788985, "test_loss": 1.3595847066710978, "test_acc1": 67.96800208496094, "test_acc5": 88.02400242492676, "epoch": 52, "n_parameters": 7298036} -{"train_lr": 0.0038699772244100415, "train_loss": 2.9714450040340425, "test_loss": 1.3549648523330688, "test_acc1": 68.02800212158203, "test_acc5": 87.89800254119874, "epoch": 53, "n_parameters": 7298036} -{"train_lr": 0.003864985983424946, "train_loss": 2.955781775975227, "test_loss": 1.4023256415829939, "test_acc1": 67.83800183746338, "test_acc5": 87.8840023046875, 
"epoch": 54, "n_parameters": 7298036} -{"train_lr": 0.003859904089347072, "train_loss": 2.961031496977806, "test_loss": 1.317502821193022, "test_acc1": 68.1000021939087, "test_acc5": 88.29400277740478, "epoch": 55, "n_parameters": 7298036} -{"train_lr": 0.0038547317898607334, "train_loss": 2.957547092342377, "test_loss": 1.3269709528369062, "test_acc1": 68.72600212615967, "test_acc5": 88.46000286224366, "epoch": 56, "n_parameters": 7298036} -{"train_lr": 0.0038494693370565466, "train_loss": 2.9458640316963196, "test_loss": 1.3371764846584375, "test_acc1": 68.17000222229004, "test_acc5": 88.05600239227294, "epoch": 57, "n_parameters": 7298036} -{"train_lr": 0.0038441169874190843, "train_loss": 2.9429121100902558, "test_loss": 1.3287840126191868, "test_acc1": 68.31600218780518, "test_acc5": 88.30600272521973, "epoch": 58, "n_parameters": 7298036} -{"train_lr": 0.003838675001814183, "train_loss": 2.942146599411964, "test_loss": 1.2648146700333147, "test_acc1": 69.37600253112792, "test_acc5": 88.91600262573242, "epoch": 59, "n_parameters": 7298036} -{"train_lr": 0.0038331436454766355, "train_loss": 2.9246367720365525, "test_loss": 1.3291937461232437, "test_acc1": 68.30000218353271, "test_acc5": 88.25600286224365, "epoch": 60, "n_parameters": 7298036} -{"train_lr": 0.0038275231879969967, "train_loss": 2.9387775617837906, "test_loss": 1.3281183080638157, "test_acc1": 68.37800199768067, "test_acc5": 88.28400237609863, "epoch": 61, "n_parameters": 7298036} -{"train_lr": 0.00382181390330831, "train_loss": 2.9365363005399705, "test_loss": 1.3043042044429218, "test_acc1": 68.91400219940185, "test_acc5": 88.75400249694825, "epoch": 62, "n_parameters": 7298036} -{"train_lr": 0.00381601606967318, "train_loss": 2.9319739805698393, "test_loss": 1.3538871775655186, "test_acc1": 67.97000213256835, "test_acc5": 87.88800273620606, "epoch": 63, "n_parameters": 7298036} -{"train_lr": 0.0038101299696697475, "train_loss": 2.924105158472061, "test_loss": 1.2875952746938257, "test_acc1": 69.0540021673584, "test_acc5": 88.74000274230957, "epoch": 64, "n_parameters": 7298036} -{"train_lr": 0.00380415589017823, "train_loss": 2.9131333384037017, "test_loss": 1.2914338462492998, "test_acc1": 69.14400202026367, "test_acc5": 88.81200278656006, "epoch": 65, "n_parameters": 7298036} -{"train_lr": 0.0037980941223668303, "train_loss": 2.906169344687462, "test_loss": 1.2661653353887445, "test_acc1": 69.39200212432861, "test_acc5": 88.98000240478515, "epoch": 66, "n_parameters": 7298036} -{"train_lr": 0.003791944961677627, "train_loss": 2.9098335091352463, "test_loss": 1.305793789379737, "test_acc1": 68.85600200408936, "test_acc5": 88.58600230407716, "epoch": 67, "n_parameters": 7298036} -{"train_lr": 0.0037857087078119896, "train_loss": 2.9016470760822295, "test_loss": 1.3020531657864065, "test_acc1": 69.18400226715087, "test_acc5": 89.00800253601074, "epoch": 68, "n_parameters": 7298036} -{"train_lr": 0.003779385664716107, "train_loss": 2.9084082518339156, "test_loss": 1.3163933806559618, "test_acc1": 68.80400192687988, "test_acc5": 88.48800228179931, "epoch": 69, "n_parameters": 7298036} -{"train_lr": 0.003772976140566265, "train_loss": 2.924442398571968, "test_loss": 1.276019753778682, "test_acc1": 69.4140019821167, "test_acc5": 89.11800244110107, "epoch": 70, "n_parameters": 7298036} -{"train_lr": 0.0037664804477535617, "train_loss": 2.884533396816254, "test_loss": 1.2952623731073212, "test_acc1": 68.33600221282958, "test_acc5": 88.76000261901855, "epoch": 71, "n_parameters": 7298036} -{"train_lr": 0.003759898902868911, 
"train_loss": 2.893633928346634, "test_loss": 1.2982387100072468, "test_acc1": 68.75200207397461, "test_acc5": 88.42800242889405, "epoch": 72, "n_parameters": 7298036} -{"train_lr": 0.003753231826687486, "train_loss": 2.89145554459095, "test_loss": 1.3006356801180279, "test_acc1": 69.02000243682862, "test_acc5": 88.50600247131348, "epoch": 73, "n_parameters": 7298036} -{"train_lr": 0.0037464795441532936, "train_loss": 2.89318241584301, "test_loss": 1.2726124078035355, "test_acc1": 69.26600218658447, "test_acc5": 88.88800258575439, "epoch": 74, "n_parameters": 7298036} -{"train_lr": 0.003739642384362937, "train_loss": 2.8881117799282072, "test_loss": 1.304633056416231, "test_acc1": 69.14400212127686, "test_acc5": 88.77600262451172, "epoch": 75, "n_parameters": 7298036} -{"train_lr": 0.003732720680549938, "train_loss": 2.892132214927673, "test_loss": 1.2521233782172203, "test_acc1": 69.80000219848633, "test_acc5": 89.32000257232666, "epoch": 76, "n_parameters": 7298036} -{"train_lr": 0.003725714770068486, "train_loss": 2.8762981982946396, "test_loss": 1.2658729483099544, "test_acc1": 69.4500017919922, "test_acc5": 89.1120024935913, "epoch": 77, "n_parameters": 7298036} -{"train_lr": 0.0037186249943766602, "train_loss": 2.884427904701233, "test_loss": 1.3061705047593397, "test_acc1": 68.71200213623047, "test_acc5": 88.69600265869141, "epoch": 78, "n_parameters": 7298036} -{"train_lr": 0.003711451699020238, "train_loss": 2.8717648814201353, "test_loss": 1.253862103995155, "test_acc1": 69.91600215789795, "test_acc5": 89.14800237060547, "epoch": 79, "n_parameters": 7298036} -{"train_lr": 0.0037041952336154147, "train_loss": 2.86700092318058, "test_loss": 1.2739331257693909, "test_acc1": 69.58600197357178, "test_acc5": 89.21600234313965, "epoch": 80, "n_parameters": 7298036} -{"train_lr": 0.003696855951832067, "train_loss": 2.8695004572868346, "test_loss": 1.2350372536217464, "test_acc1": 69.7900019442749, "test_acc5": 89.36600257537842, "epoch": 81, "n_parameters": 7298036} -{"train_lr": 0.0036894342113765284, "train_loss": 2.874682143449783, "test_loss": 1.256427007562974, "test_acc1": 69.77600218353271, "test_acc5": 89.05000269744873, "epoch": 82, "n_parameters": 7298036} -{"train_lr": 0.0036819303739738757, "train_loss": 2.8754480736017225, "test_loss": 1.283850837718038, "test_acc1": 69.3040022732544, "test_acc5": 88.83400268341065, "epoch": 83, "n_parameters": 7298036} -{"train_lr": 0.00367434480535066, "train_loss": 2.8518703672647474, "test_loss": 1.291789442739066, "test_acc1": 69.20800216125488, "test_acc5": 88.88400275726319, "epoch": 84, "n_parameters": 7298036} -{"train_lr": 0.00366667787521664, "train_loss": 2.8595035460948943, "test_loss": 1.2608497984269087, "test_acc1": 69.58200216705322, "test_acc5": 89.39800242645263, "epoch": 85, "n_parameters": 7298036} -{"train_lr": 0.003658929957247333, "train_loss": 2.8584667724609374, "test_loss": 1.2392159091199146, "test_acc1": 70.03600206604004, "test_acc5": 89.47800259399413, "epoch": 86, "n_parameters": 7298036} -{"train_lr": 0.0036511014290652147, "train_loss": 2.8466722165107727, "test_loss": 1.271062825094251, "test_acc1": 69.60200221313477, "test_acc5": 89.30600242156983, "epoch": 87, "n_parameters": 7298036} -{"train_lr": 0.003643192672221756, "train_loss": 2.856234351825714, "test_loss": 1.2445952497860964, "test_acc1": 70.03400202819824, "test_acc5": 89.49600239135742, "epoch": 88, "n_parameters": 7298036} -{"train_lr": 0.0036352040721785803, "train_loss": 2.8411135602235795, "test_loss": 1.2598512440043337, "test_acc1": 
70.15000238983154, "test_acc5": 89.2560022958374, "epoch": 89, "n_parameters": 7298036} -{"train_lr": 0.003627136018288861, "train_loss": 2.851404172229767, "test_loss": 1.2540437526562636, "test_acc1": 70.17400237487793, "test_acc5": 89.11600252746582, "epoch": 90, "n_parameters": 7298036} -{"train_lr": 0.0036189889037780316, "train_loss": 2.830674692773819, "test_loss": 1.2277944736620958, "test_acc1": 69.95200238494873, "test_acc5": 89.50200240753173, "epoch": 91, "n_parameters": 7298036} -{"train_lr": 0.0036107631257249954, "train_loss": 2.844798229074478, "test_loss": 1.2586376316407148, "test_acc1": 69.67800243133544, "test_acc5": 89.4440023373413, "epoch": 92, "n_parameters": 7298036} -{"train_lr": 0.003602459085042744, "train_loss": 2.8442047775030135, "test_loss": 1.2380931929630392, "test_acc1": 70.12000197235108, "test_acc5": 89.39600242370605, "epoch": 93, "n_parameters": 7298036} -{"train_lr": 0.003594077186458248, "train_loss": 2.8364009353637694, "test_loss": 1.217537481118651, "test_acc1": 70.7040022265625, "test_acc5": 89.73000252197265, "epoch": 94, "n_parameters": 7298036} -{"train_lr": 0.003585617838493613, "train_loss": 2.8319277475833893, "test_loss": 1.3014139923102714, "test_acc1": 69.39600224578858, "test_acc5": 88.76800240203858, "epoch": 95, "n_parameters": 7298036} -{"train_lr": 0.0035770814534454225, "train_loss": 2.832149552989006, "test_loss": 1.226082798312692, "test_acc1": 70.3380019744873, "test_acc5": 89.4840024710083, "epoch": 96, "n_parameters": 7298036} -{"train_lr": 0.003568468447365067, "train_loss": 2.827371108484268, "test_loss": 1.247724157922408, "test_acc1": 70.15200251220703, "test_acc5": 89.5380025100708, "epoch": 97, "n_parameters": 7298036} -{"train_lr": 0.0035597792400383233, "train_loss": 2.8215230338811876, "test_loss": 1.2708871228729977, "test_acc1": 69.6140021975708, "test_acc5": 89.07600277984619, "epoch": 98, "n_parameters": 7298036} -{"train_lr": 0.0035510142549648235, "train_loss": 2.829666579246521, "test_loss": 1.2364889615598846, "test_acc1": 69.81200235137939, "test_acc5": 89.35800269165038, "epoch": 99, "n_parameters": 7298036} -{"train_lr": 0.0035421739193377214, "train_loss": 2.826340156340599, "test_loss": 1.2580038536997402, "test_acc1": 69.78600229034424, "test_acc5": 89.17400251647949, "epoch": 100, "n_parameters": 7298036} -{"train_lr": 0.003533258664022372, "train_loss": 2.8200066433191298, "test_loss": 1.2437092201674687, "test_acc1": 70.0140024572754, "test_acc5": 89.4660026473999, "epoch": 101, "n_parameters": 7298036} -{"train_lr": 0.0035242689235357775, "train_loss": 2.8166155769586565, "test_loss": 1.223127977374722, "test_acc1": 70.51000231781006, "test_acc5": 89.57000303222657, "epoch": 102, "n_parameters": 7298036} -{"train_lr": 0.0035152051360252245, "train_loss": 2.807154325056076, "test_loss": 1.22829500147525, "test_acc1": 70.37200218719482, "test_acc5": 89.70400250518799, "epoch": 103, "n_parameters": 7298036} -{"train_lr": 0.0035060677432469894, "train_loss": 2.8104967404842376, "test_loss": 1.2309899216189104, "test_acc1": 70.23800230712891, "test_acc5": 89.66400254180908, "epoch": 104, "n_parameters": 7298036} -{"train_lr": 0.0034968571905445293, "train_loss": 2.8017113666296005, "test_loss": 1.2504147663712502, "test_acc1": 70.34600232849121, "test_acc5": 89.44200277252197, "epoch": 105, "n_parameters": 7298036} -{"train_lr": 0.0034875739268273947, "train_loss": 2.8054983381748198, "test_loss": 1.271145900382715, "test_acc1": 69.79600231597901, "test_acc5": 89.15400297668457, "epoch": 106, 
"n_parameters": 7298036} -{"train_lr": 0.00347821840454859, "train_loss": 2.7974525689840317, "test_loss": 1.2044533587553923, "test_acc1": 70.82000231658935, "test_acc5": 89.82000234863281, "epoch": 107, "n_parameters": 7298036} -{"train_lr": 0.003468791079683292, "train_loss": 2.797737002515793, "test_loss": 1.2386262197704876, "test_acc1": 70.13600219665527, "test_acc5": 89.54000247406006, "epoch": 108, "n_parameters": 7298036} -{"train_lr": 0.003459292411705684, "train_loss": 2.796301659607887, "test_loss": 1.2152098382220549, "test_acc1": 70.85400248199463, "test_acc5": 89.79200257141113, "epoch": 109, "n_parameters": 7298036} -{"train_lr": 0.003449722863567734, "train_loss": 2.8018102867126466, "test_loss": 1.2385743183686453, "test_acc1": 70.11600217376709, "test_acc5": 89.47200256072998, "epoch": 110, "n_parameters": 7298036} -{"train_lr": 0.0034400829016756297, "train_loss": 2.8050277500152587, "test_loss": 1.274137268171591, "test_acc1": 69.63000204772949, "test_acc5": 89.31400246887208, "epoch": 111, "n_parameters": 7298036} -{"train_lr": 0.0034303729958673978, "train_loss": 2.7983987142801285, "test_loss": 1.2506804755505394, "test_acc1": 69.98200204620362, "test_acc5": 89.30200276763917, "epoch": 112, "n_parameters": 7298036} -{"train_lr": 0.0034205936193903307, "train_loss": 2.804061582326889, "test_loss": 1.2294344963396298, "test_acc1": 70.59600232818603, "test_acc5": 89.52600220672608, "epoch": 113, "n_parameters": 7298036} -{"train_lr": 0.0034107452488774006, "train_loss": 2.7827276220798494, "test_loss": 1.1978501966770958, "test_acc1": 70.97400205749511, "test_acc5": 89.95800285949707, "epoch": 114, "n_parameters": 7298036} -{"train_lr": 0.0034008283643241475, "train_loss": 2.7827863807439805, "test_loss": 1.2036359761567677, "test_acc1": 71.1260022052002, "test_acc5": 89.74400249664306, "epoch": 115, "n_parameters": 7298036} -{"train_lr": 0.003390843449065705, "train_loss": 2.7899448613643645, "test_loss": 1.2475444363320576, "test_acc1": 70.52000246276856, "test_acc5": 89.58600225128174, "epoch": 116, "n_parameters": 7298036} -{"train_lr": 0.0033807909897526967, "train_loss": 2.7863457988977434, "test_loss": 1.2427952355321716, "test_acc1": 70.06800258972169, "test_acc5": 89.518002444458, "epoch": 117, "n_parameters": 7298036} -{"train_lr": 0.0033706714763277455, "train_loss": 2.784694997549057, "test_loss": 1.231937454903827, "test_acc1": 70.16800199371337, "test_acc5": 89.69200249328614, "epoch": 118, "n_parameters": 7298036} -{"train_lr": 0.003360485402001723, "train_loss": 2.7878955679655073, "test_loss": 1.2155882185872864, "test_acc1": 70.51800249572754, "test_acc5": 89.69000306640625, "epoch": 119, "n_parameters": 7298036} -{"train_lr": 0.0033502332632295347, "train_loss": 2.7791864313602446, "test_loss": 1.2801344175549114, "test_acc1": 69.66200237121582, "test_acc5": 89.33000239562988, "epoch": 120, "n_parameters": 7298036} -{"train_lr": 0.003339915559685877, "train_loss": 2.7715338892698287, "test_loss": 1.2080326601862907, "test_acc1": 70.76200242675782, "test_acc5": 89.63800252746582, "epoch": 121, "n_parameters": 7298036} -{"train_lr": 0.0033295327942412492, "train_loss": 2.7750792606592176, "test_loss": 1.211709973785807, "test_acc1": 70.63200212249755, "test_acc5": 89.8140023828125, "epoch": 122, "n_parameters": 7298036} -{"train_lr": 0.003319085472936782, "train_loss": 2.7755958496332167, "test_loss": 1.2207845975370968, "test_acc1": 70.62200230865479, "test_acc5": 89.8100026776123, "epoch": 123, "n_parameters": 7298036} -{"train_lr": 
0.0033085741049602795, "train_loss": 2.7667891951084136, "test_loss": 1.2342784367501736, "test_acc1": 70.97000230804443, "test_acc5": 89.77200280334473, "epoch": 124, "n_parameters": 7298036} -{"train_lr": 0.003297999202620968, "train_loss": 2.772660490274429, "test_loss": 1.2055354096433695, "test_acc1": 70.72800256744385, "test_acc5": 89.93600248260498, "epoch": 125, "n_parameters": 7298036} -{"train_lr": 0.0032873612813246714, "train_loss": 2.7668412957906723, "test_loss": 1.180936819928534, "test_acc1": 71.17600239654541, "test_acc5": 90.16400238006592, "epoch": 126, "n_parameters": 7298036} -{"train_lr": 0.003276660859548651, "train_loss": 2.760892004966736, "test_loss": 1.2030006226371317, "test_acc1": 71.05000216369629, "test_acc5": 89.87800228729247, "epoch": 127, "n_parameters": 7298036} -{"train_lr": 0.0032658984588163557, "train_loss": 2.7643126065015795, "test_loss": 1.241692784077981, "test_acc1": 70.14400248687744, "test_acc5": 89.61400234375, "epoch": 128, "n_parameters": 7298036} -{"train_lr": 0.003255074603672122, "train_loss": 2.7673361812829973, "test_loss": 1.1681556346661903, "test_acc1": 71.27600240875245, "test_acc5": 90.1820024456787, "epoch": 129, "n_parameters": 7298036} -{"train_lr": 0.003244189821655263, "train_loss": 2.761053791332245, "test_loss": 1.1850097510306274, "test_acc1": 71.46400210479736, "test_acc5": 90.33000247161866, "epoch": 130, "n_parameters": 7298036} -{"train_lr": 0.003233244643274736, "train_loss": 2.7558148704051972, "test_loss": 1.1895602235899252, "test_acc1": 71.01000231292724, "test_acc5": 90.01600211120605, "epoch": 131, "n_parameters": 7298036} -{"train_lr": 0.0032222396019829943, "train_loss": 2.7585041531324386, "test_loss": 1.220232764587683, "test_acc1": 70.96400235107421, "test_acc5": 89.89600225067139, "epoch": 132, "n_parameters": 7298036} -{"train_lr": 0.0032111752341504192, "train_loss": 2.7516683938741684, "test_loss": 1.2101049848339136, "test_acc1": 70.83600222351075, "test_acc5": 89.76400271728515, "epoch": 133, "n_parameters": 7298036} -{"train_lr": 0.0032000520790385592, "train_loss": 2.756665706181526, "test_loss": 1.2090573551900246, "test_acc1": 71.17800234191894, "test_acc5": 90.00800245056152, "epoch": 134, "n_parameters": 7298036} -{"train_lr": 0.0031888706787743812, "train_loss": 2.748698607945442, "test_loss": 1.2304845982614685, "test_acc1": 70.36800229003906, "test_acc5": 89.63000275360108, "epoch": 135, "n_parameters": 7298036} -{"train_lr": 0.0031776315783234484, "train_loss": 2.749942735481262, "test_loss": 1.1799897840794396, "test_acc1": 71.46800260620117, "test_acc5": 90.27600275024415, "epoch": 136, "n_parameters": 7298036} -{"train_lr": 0.0031663353254638284, "train_loss": 2.7433435507535933, "test_loss": 1.1944995583856808, "test_acc1": 71.12200255249023, "test_acc5": 90.0620026663208, "epoch": 137, "n_parameters": 7298036} -{"train_lr": 0.0031549824707587932, "train_loss": 2.7537519922494886, "test_loss": 1.2089915718225872, "test_acc1": 71.13800225646973, "test_acc5": 89.98800264099121, "epoch": 138, "n_parameters": 7298036} -{"train_lr": 0.003143573567530467, "train_loss": 2.750586961603165, "test_loss": 1.202470710829777, "test_acc1": 70.99400232757569, "test_acc5": 89.97800274505616, "epoch": 139, "n_parameters": 7298036} -{"train_lr": 0.0031321091718327755, "train_loss": 2.738236189484596, "test_loss": 1.193385338082033, "test_acc1": 71.50200229125977, "test_acc5": 90.26600242980957, "epoch": 140, "n_parameters": 7298036} -{"train_lr": 0.003120589842424192, "train_loss": 2.738656638979912, 
"test_loss": 1.1594019492759424, "test_acc1": 71.59800252441406, "test_acc5": 90.41800282012939, "epoch": 141, "n_parameters": 7298036} -{"train_lr": 0.0031090161407405044, "train_loss": 2.726146498274803, "test_loss": 1.1756140874589192, "test_acc1": 71.46400240692138, "test_acc5": 90.40800254455566, "epoch": 142, "n_parameters": 7298036} -{"train_lr": 0.003097388630867618, "train_loss": 2.736005064582825, "test_loss": 1.1725546090918428, "test_acc1": 71.29200232025147, "test_acc5": 90.3120026272583, "epoch": 143, "n_parameters": 7298036} -{"train_lr": 0.0030857078795141065, "train_loss": 2.7337994750261307, "test_loss": 1.1641203881624866, "test_acc1": 71.71600260192871, "test_acc5": 90.37200210632324, "epoch": 144, "n_parameters": 7298036} -{"train_lr": 0.0030739744559831164, "train_loss": 2.7309325996637344, "test_loss": 1.1695656756706097, "test_acc1": 71.59400251495362, "test_acc5": 90.35400253082275, "epoch": 145, "n_parameters": 7298036} -{"train_lr": 0.003062188932145215, "train_loss": 2.7170996203660964, "test_loss": 1.1434765515082024, "test_acc1": 71.99000225708008, "test_acc5": 90.72800272247315, "epoch": 146, "n_parameters": 7298036} -{"train_lr": 0.0030503518824103173, "train_loss": 2.7307835283756257, "test_loss": 1.188961842261693, "test_acc1": 71.34200251464844, "test_acc5": 90.31400240814209, "epoch": 147, "n_parameters": 7298036} -{"train_lr": 0.0030384638836993723, "train_loss": 2.7281987243175507, "test_loss": 1.2025675747324438, "test_acc1": 71.22600237121583, "test_acc5": 90.1440025970459, "epoch": 148, "n_parameters": 7298036} -{"train_lr": 0.003026525515416759, "train_loss": 2.717855363726616, "test_loss": 1.176426713957506, "test_acc1": 71.24000273895264, "test_acc5": 90.22600248260498, "epoch": 149, "n_parameters": 7298036} -{"train_lr": 0.0030145373594217015, "train_loss": 2.723557350730896, "test_loss": 1.2114701192168629, "test_acc1": 70.7220024874878, "test_acc5": 89.80600234375, "epoch": 150, "n_parameters": 7298036} -{"train_lr": 0.0030025000000000156, "train_loss": 2.717916741156578, "test_loss": 1.1867851124090307, "test_acc1": 71.09200235290527, "test_acc5": 90.12000218688965, "epoch": 151, "n_parameters": 7298036} -{"train_lr": 0.0029904140238355007, "train_loss": 2.7181020808935163, "test_loss": 1.1901250819073004, "test_acc1": 71.20800248168945, "test_acc5": 90.06400228759766, "epoch": 152, "n_parameters": 7298036} -{"train_lr": 0.0029782800199817903, "train_loss": 2.709710819673538, "test_loss": 1.1530741843230583, "test_acc1": 71.68200237091064, "test_acc5": 90.51600256866455, "epoch": 153, "n_parameters": 7298036} -{"train_lr": 0.0029660985798329416, "train_loss": 2.7206264285087585, "test_loss": 1.1688616043504547, "test_acc1": 71.6800027722168, "test_acc5": 90.41400245544433, "epoch": 154, "n_parameters": 7298036} -{"train_lr": 0.0029538702970952164, "train_loss": 2.716020250368118, "test_loss": 1.1478439223240404, "test_acc1": 71.72200241210938, "test_acc5": 90.52200215881348, "epoch": 155, "n_parameters": 7298036} -{"train_lr": 0.0029415957677578724, "train_loss": 2.705766776895523, "test_loss": 1.154691987835309, "test_acc1": 71.73400242828369, "test_acc5": 90.58400252502442, "epoch": 156, "n_parameters": 7298036} -{"train_lr": 0.002929275590064108, "train_loss": 2.706762760066986, "test_loss": 1.147573577349677, "test_acc1": 72.0040021838379, "test_acc5": 90.7080025164795, "epoch": 157, "n_parameters": 7298036} -{"train_lr": 0.002916910364482115, "train_loss": 2.697211047768593, "test_loss": 1.208337858757552, "test_acc1": 71.2240022418213, 
"test_acc5": 90.16400247406006, "epoch": 158, "n_parameters": 7298036} -{"train_lr": 0.0029045006936754257, "train_loss": 2.717174967122078, "test_loss": 1.1867469137205797, "test_acc1": 71.66000232971192, "test_acc5": 90.33000263336181, "epoch": 159, "n_parameters": 7298036} -{"train_lr": 0.0028920471824738832, "train_loss": 2.6933923840999605, "test_loss": 1.1505466912160902, "test_acc1": 71.72400268737793, "test_acc5": 90.64400270080566, "epoch": 160, "n_parameters": 7298036} -{"train_lr": 0.0028795504378442225, "train_loss": 2.6873402201652525, "test_loss": 1.2013795016443027, "test_acc1": 71.11800234893799, "test_acc5": 90.15400260833741, "epoch": 161, "n_parameters": 7298036} -{"train_lr": 0.002867011068859989, "train_loss": 2.6969776383399964, "test_loss": 1.1612574745188742, "test_acc1": 71.99400207763672, "test_acc5": 90.57200254974366, "epoch": 162, "n_parameters": 7298036} -{"train_lr": 0.0028544296866723304, "train_loss": 2.691287936067581, "test_loss": 1.1362488225978964, "test_acc1": 72.28000235931397, "test_acc5": 90.78800232666016, "epoch": 163, "n_parameters": 7298036} -{"train_lr": 0.0028418069044801562, "train_loss": 2.6925796298503877, "test_loss": 1.1430633589625359, "test_acc1": 71.97000232604981, "test_acc5": 90.72000260528564, "epoch": 164, "n_parameters": 7298036} -{"train_lr": 0.0028291433375, "train_loss": 2.6943301060438154, "test_loss": 1.1259560133604443, "test_acc1": 72.3020026739502, "test_acc5": 90.83000253143311, "epoch": 165, "n_parameters": 7298036} -{"train_lr": 0.002816439602936208, "train_loss": 2.6805055073976516, "test_loss": 1.2353558448307655, "test_acc1": 71.22800245361329, "test_acc5": 89.97400260040283, "epoch": 166, "n_parameters": 7298036} -{"train_lr": 0.002803696319950981, "train_loss": 2.6893674031496047, "test_loss": 1.1327775850453798, "test_acc1": 72.50000224243163, "test_acc5": 90.86000262023926, "epoch": 167, "n_parameters": 7298036} -{"train_lr": 0.0027909141096339935, "train_loss": 2.6910617644309998, "test_loss": 1.1121527760782663, "test_acc1": 73.18400258544922, "test_acc5": 90.99200270874023, "epoch": 168, "n_parameters": 7298036} -{"train_lr": 0.002778093594971943, "train_loss": 2.6809598824501037, "test_loss": 1.185080217964509, "test_acc1": 71.60400264282227, "test_acc5": 90.43400230438232, "epoch": 169, "n_parameters": 7298036} -{"train_lr": 0.002765235400818761, "train_loss": 2.678742490553856, "test_loss": 1.1562197532723932, "test_acc1": 71.91000237731933, "test_acc5": 90.6520025415039, "epoch": 170, "n_parameters": 7298036} -{"train_lr": 0.0027523401538647224, "train_loss": 2.679750859475136, "test_loss": 1.1873583123087883, "test_acc1": 71.26000224182128, "test_acc5": 90.18000223144531, "epoch": 171, "n_parameters": 7298036} -{"train_lr": 0.002739408482605956, "train_loss": 2.676679968357086, "test_loss": 1.1826016543542637, "test_acc1": 71.49400258636474, "test_acc5": 90.41200257171631, "epoch": 172, "n_parameters": 7298036} -{"train_lr": 0.002726441017313784, "train_loss": 2.6686548888206483, "test_loss": 1.1388426186407314, "test_acc1": 72.00200218536376, "test_acc5": 90.71200240447997, "epoch": 173, "n_parameters": 7298036} -{"train_lr": 0.002713438390004251, "train_loss": 2.6710284029483797, "test_loss": 1.1396826637141846, "test_acc1": 72.31400254241943, "test_acc5": 90.74400270629883, "epoch": 174, "n_parameters": 7298036} -{"train_lr": 0.0027004012344070075, "train_loss": 2.669759320950508, "test_loss": 1.1806067079305649, "test_acc1": 71.52400231079102, "test_acc5": 90.52600245300293, "epoch": 175, 
"n_parameters": 7298036} -{"train_lr": 0.0026873301859347007, "train_loss": 2.662749852824211, "test_loss": 1.1469946490491139, "test_acc1": 72.2020025088501, "test_acc5": 90.70800285369873, "epoch": 176, "n_parameters": 7298036} -{"train_lr": 0.002674225881651733, "train_loss": 2.6689642449855806, "test_loss": 1.1333779604995953, "test_acc1": 72.3620024963379, "test_acc5": 90.79800247375488, "epoch": 177, "n_parameters": 7298036} -{"train_lr": 0.0026610889602434354, "train_loss": 2.6663634417533872, "test_loss": 1.154456934069886, "test_acc1": 71.73800257751465, "test_acc5": 90.40000259674072, "epoch": 178, "n_parameters": 7298036} -{"train_lr": 0.0026479200619848845, "train_loss": 2.661807390880585, "test_loss": 1.1343563304227942, "test_acc1": 72.04600269317626, "test_acc5": 90.7380027142334, "epoch": 179, "n_parameters": 7298036} -{"train_lr": 0.0026347198287094897, "train_loss": 2.661804564332962, "test_loss": 1.1647687436026686, "test_acc1": 72.00200257141114, "test_acc5": 90.45600242797852, "epoch": 180, "n_parameters": 7298036} -{"train_lr": 0.0026214889037780493, "train_loss": 2.6658322935819627, "test_loss": 1.1295849514358185, "test_acc1": 72.55400279327392, "test_acc5": 90.90000258880615, "epoch": 181, "n_parameters": 7298036} -{"train_lr": 0.0026082279320471633, "train_loss": 2.6718591308116912, "test_loss": 1.1860816331908983, "test_acc1": 71.20400222961426, "test_acc5": 90.10600260070801, "epoch": 182, "n_parameters": 7298036} -{"train_lr": 0.0025949375598379055, "train_loss": 2.65969391143322, "test_loss": 1.1815963409402792, "test_acc1": 71.64200222686767, "test_acc5": 90.16800265533448, "epoch": 183, "n_parameters": 7298036} -{"train_lr": 0.0025816184349041886, "train_loss": 2.658025332093239, "test_loss": 1.1570363079800325, "test_acc1": 72.18600251800537, "test_acc5": 90.66400264221191, "epoch": 184, "n_parameters": 7298036} -{"train_lr": 0.0025682712064015187, "train_loss": 2.639771134018898, "test_loss": 1.1423691405969507, "test_acc1": 72.28200219024659, "test_acc5": 90.69000234039306, "epoch": 185, "n_parameters": 7298036} -{"train_lr": 0.002554896524854948, "train_loss": 2.653822904205322, "test_loss": 1.0890906330417185, "test_acc1": 73.18000263397217, "test_acc5": 91.21400253936767, "epoch": 186, "n_parameters": 7298036} -{"train_lr": 0.0025414950421274317, "train_loss": 2.6536820988178254, "test_loss": 1.1422562603564823, "test_acc1": 71.94800253479004, "test_acc5": 90.71400264953613, "epoch": 187, "n_parameters": 7298036} -{"train_lr": 0.002528067411388543, "train_loss": 2.640178265976906, "test_loss": 1.1126851873362766, "test_acc1": 72.67000231140136, "test_acc5": 91.12800248779297, "epoch": 188, "n_parameters": 7298036} -{"train_lr": 0.0025146142870819286, "train_loss": 2.6407607492923737, "test_loss": 1.1133012204047512, "test_acc1": 72.51000223968506, "test_acc5": 90.92000216827392, "epoch": 189, "n_parameters": 7298036} -{"train_lr": 0.002501136324893901, "train_loss": 2.6517627828836443, "test_loss": 1.1023529868792086, "test_acc1": 73.13000227172851, "test_acc5": 91.280002394104, "epoch": 190, "n_parameters": 7298036} -{"train_lr": 0.002487634181721322, "train_loss": 2.6378917422533035, "test_loss": 1.158058858531363, "test_acc1": 72.43800245910644, "test_acc5": 90.71200244873047, "epoch": 191, "n_parameters": 7298036} -{"train_lr": 0.002474108515639672, "train_loss": 2.637692283034325, "test_loss": 1.1383179128170013, "test_acc1": 72.10600242034913, "test_acc5": 90.73400252502441, "epoch": 192, "n_parameters": 7298036} -{"train_lr": 
0.002460559985870747, "train_loss": 2.6423336028814317, "test_loss": 1.1313969361431457, "test_acc1": 72.44600219116211, "test_acc5": 90.98600248016358, "epoch": 193, "n_parameters": 7298036} -{"train_lr": 0.002446989252750831, "train_loss": 2.6254419162034988, "test_loss": 1.1923202723264694, "test_acc1": 71.74200242401123, "test_acc5": 90.35800278656006, "epoch": 194, "n_parameters": 7298036} -{"train_lr": 0.002433396977698326, "train_loss": 2.6336471945285798, "test_loss": 1.1414244551869, "test_acc1": 72.02600249053955, "test_acc5": 90.69600245788574, "epoch": 195, "n_parameters": 7298036} -{"train_lr": 0.0024197838231814215, "train_loss": 2.6281891491413116, "test_loss": 1.1118973515489523, "test_acc1": 72.52800254608154, "test_acc5": 90.90800282775879, "epoch": 196, "n_parameters": 7298036} -{"train_lr": 0.002406150452686214, "train_loss": 2.62200645942688, "test_loss": 1.0615131958004307, "test_acc1": 73.7600022619629, "test_acc5": 91.5660023260498, "epoch": 197, "n_parameters": 7298036} -{"train_lr": 0.0023924975306838653, "train_loss": 2.629978430056572, "test_loss": 1.1026081977521671, "test_acc1": 73.10800219421387, "test_acc5": 91.20400264465331, "epoch": 198, "n_parameters": 7298036} -{"train_lr": 0.0023788257225985116, "train_loss": 2.6322596537828447, "test_loss": 1.1003724702579134, "test_acc1": 72.94200223999023, "test_acc5": 91.01800232421876, "epoch": 199, "n_parameters": 7298036} -{"train_lr": 0.002365135694774904, "train_loss": 2.6241918612003325, "test_loss": 1.1121999963241465, "test_acc1": 73.32600241241455, "test_acc5": 91.3140024432373, "epoch": 200, "n_parameters": 7298036} -{"train_lr": 0.0023514281144455126, "train_loss": 2.626209532046318, "test_loss": 1.1081909138490171, "test_acc1": 73.00200273712159, "test_acc5": 91.15600252166747, "epoch": 201, "n_parameters": 7298036} -{"train_lr": 0.002337703649698603, "train_loss": 2.6190173553705214, "test_loss": 1.1110801819492788, "test_acc1": 72.65800276367187, "test_acc5": 91.1120024584961, "epoch": 202, "n_parameters": 7298036} -{"train_lr": 0.002323962969445206, "train_loss": 2.613282528710365, "test_loss": 1.1213552176075823, "test_acc1": 72.40400233886719, "test_acc5": 90.93400228729249, "epoch": 203, "n_parameters": 7298036} -{"train_lr": 0.002310206743386666, "train_loss": 2.6085392309188844, "test_loss": 1.1034928034333622, "test_acc1": 72.91600264801025, "test_acc5": 91.18800237091064, "epoch": 204, "n_parameters": 7298036} -{"train_lr": 0.002296435641982043, "train_loss": 2.612711403942108, "test_loss": 1.1061106740551836, "test_acc1": 73.00800256835937, "test_acc5": 91.07400260284423, "epoch": 205, "n_parameters": 7298036} -{"train_lr": 0.0022826503364153008, "train_loss": 2.6017460937261583, "test_loss": 1.139831574743285, "test_acc1": 72.24000273406982, "test_acc5": 90.79400284240722, "epoch": 206, "n_parameters": 7298036} -{"train_lr": 0.002268851498562944, "train_loss": 2.62364499604702, "test_loss": 1.1277172302498537, "test_acc1": 72.34800232635499, "test_acc5": 90.94000260803223, "epoch": 207, "n_parameters": 7298036} -{"train_lr": 0.0022550398009608037, "train_loss": 2.6168748069286347, "test_loss": 1.1257645393557407, "test_acc1": 72.72000252624511, "test_acc5": 90.840002684021, "epoch": 208, "n_parameters": 7298036} -{"train_lr": 0.002241215916771494, "train_loss": 2.602150402736664, "test_loss": 1.1166668056565172, "test_acc1": 73.00200224090577, "test_acc5": 91.02800256958008, "epoch": 209, "n_parameters": 7298036} -{"train_lr": 0.0022273805197516256, "train_loss": 2.5984171810865404, 
"test_loss": 1.1017490818220026, "test_acc1": 72.82400211669922, "test_acc5": 91.10000240203857, "epoch": 210, "n_parameters": 7298036} -{"train_lr": 0.0022135342842189523, "train_loss": 2.603498088645935, "test_loss": 1.09393518527641, "test_acc1": 73.17200268066406, "test_acc5": 91.2380024142456, "epoch": 211, "n_parameters": 7298036} -{"train_lr": 0.002199677885019512, "train_loss": 2.6059072279453277, "test_loss": 1.1134489568717338, "test_acc1": 72.89600224945069, "test_acc5": 91.12000256652831, "epoch": 212, "n_parameters": 7298036} -{"train_lr": 0.002185811997494599, "train_loss": 2.6087295845270155, "test_loss": 1.102344322073109, "test_acc1": 72.68400274414063, "test_acc5": 91.31000217559814, "epoch": 213, "n_parameters": 7298036} -{"train_lr": 0.0021719372974480025, "train_loss": 2.597718401265144, "test_loss": 1.0974839735118782, "test_acc1": 72.96200226593018, "test_acc5": 91.10400263397217, "epoch": 214, "n_parameters": 7298036} -{"train_lr": 0.002158054461113036, "train_loss": 2.5983903506278994, "test_loss": 1.0804023988106672, "test_acc1": 73.53200263824463, "test_acc5": 91.50800287872315, "epoch": 215, "n_parameters": 7298036} -{"train_lr": 0.0021441641651195054, "train_loss": 2.5976852435588835, "test_loss": 1.089575047878658, "test_acc1": 73.26000250305175, "test_acc5": 91.37000276123047, "epoch": 216, "n_parameters": 7298036} -{"train_lr": 0.0021302670864609768, "train_loss": 2.5810913019418718, "test_loss": 1.1004024051568086, "test_acc1": 72.95400234893799, "test_acc5": 91.29800234375, "epoch": 217, "n_parameters": 7298036} -{"train_lr": 0.0021163639024613505, "train_loss": 2.5951970599889753, "test_loss": 1.1088504405582653, "test_acc1": 72.96800221405029, "test_acc5": 91.22600252746582, "epoch": 218, "n_parameters": 7298036} -{"train_lr": 0.002102455290742262, "train_loss": 2.586338751530647, "test_loss": 1.0813347557011772, "test_acc1": 73.728002315979, "test_acc5": 91.4960024243164, "epoch": 219, "n_parameters": 7298036} -{"train_lr": 0.0020885419291897665, "train_loss": 2.581400453543663, "test_loss": 1.0731956189607872, "test_acc1": 73.62400214996337, "test_acc5": 91.5940022128296, "epoch": 220, "n_parameters": 7298036} -{"train_lr": 0.0020746244959214863, "train_loss": 2.5750847052812578, "test_loss": 1.080921535982805, "test_acc1": 73.52200249420166, "test_acc5": 91.46400248657227, "epoch": 221, "n_parameters": 7298036} -{"train_lr": 0.0020607036692535004, "train_loss": 2.579925388431549, "test_loss": 1.0751159903319443, "test_acc1": 73.30400213134766, "test_acc5": 91.34000243652343, "epoch": 222, "n_parameters": 7298036} -{"train_lr": 0.0020467801276673257, "train_loss": 2.574589943432808, "test_loss": 1.1108947356834131, "test_acc1": 72.91800263427734, "test_acc5": 91.16000237976074, "epoch": 223, "n_parameters": 7298036} -{"train_lr": 0.0020328545497765972, "train_loss": 2.5838713140487672, "test_loss": 1.135703589548083, "test_acc1": 72.76200229766846, "test_acc5": 91.0920022314453, "epoch": 224, "n_parameters": 7298036} -{"train_lr": 0.002018927614294425, "train_loss": 2.575000814437866, "test_loss": 1.0871966722256996, "test_acc1": 73.27800262878418, "test_acc5": 91.33000271392822, "epoch": 225, "n_parameters": 7298036} -{"train_lr": 0.0020050000000000176, "train_loss": 2.5777293172121047, "test_loss": 1.05365838855505, "test_acc1": 73.87200253326417, "test_acc5": 91.58600258331299, "epoch": 226, "n_parameters": 7298036} -{"train_lr": 0.001991072385705573, "train_loss": 2.559157823729515, "test_loss": 1.0649874144617248, "test_acc1": 73.73000274902344, 
"test_acc5": 91.58800278106689, "epoch": 227, "n_parameters": 7298036} -{"train_lr": 0.001977145450223401, "train_loss": 2.5729527430057524, "test_loss": 1.1115328722140367, "test_acc1": 73.04600245544434, "test_acc5": 91.03800294281005, "epoch": 228, "n_parameters": 7298036} -{"train_lr": 0.0019632198723327173, "train_loss": 2.566056352806091, "test_loss": 1.100285498534932, "test_acc1": 73.06600222229004, "test_acc5": 91.19400231781006, "epoch": 229, "n_parameters": 7298036} -{"train_lr": 0.0019492963307464952, "train_loss": 2.5619646339178086, "test_loss": 1.044030527200769, "test_acc1": 74.08800267242431, "test_acc5": 91.83800236602784, "epoch": 230, "n_parameters": 7298036} -{"train_lr": 0.001935375504078517, "train_loss": 2.5521877739191057, "test_loss": 1.053186941672774, "test_acc1": 73.9060022088623, "test_acc5": 91.73400245147705, "epoch": 231, "n_parameters": 7298036} -{"train_lr": 0.001921458070810235, "train_loss": 2.562366943669319, "test_loss": 1.0617768578231335, "test_acc1": 74.17200223175048, "test_acc5": 91.65600245819091, "epoch": 232, "n_parameters": 7298036} -{"train_lr": 0.0019075447092577794, "train_loss": 2.5555278039455414, "test_loss": 1.0484313208828955, "test_acc1": 74.10200268280029, "test_acc5": 91.73800274749756, "epoch": 233, "n_parameters": 7298036} -{"train_lr": 0.0018936360975386397, "train_loss": 2.557155821943283, "test_loss": 1.044089439598953, "test_acc1": 74.11600245788574, "test_acc5": 91.71600278106689, "epoch": 234, "n_parameters": 7298036} -{"train_lr": 0.0018797329135390225, "train_loss": 2.5568383813381197, "test_loss": 1.0656664825099356, "test_acc1": 73.74600229858399, "test_acc5": 91.5120023123169, "epoch": 235, "n_parameters": 7298036} -{"train_lr": 0.001865835834880448, "train_loss": 2.5569966619491575, "test_loss": 1.0675454271190308, "test_acc1": 73.69400258575439, "test_acc5": 91.66000269714355, "epoch": 236, "n_parameters": 7298036} -{"train_lr": 0.0018519455388870075, "train_loss": 2.5487236221313476, "test_loss": 1.0651251174071257, "test_acc1": 73.69400256500244, "test_acc5": 91.54200260345459, "epoch": 237, "n_parameters": 7298036} -{"train_lr": 0.0018380627025520412, "train_loss": 2.544088577103615, "test_loss": 1.05794533758479, "test_acc1": 73.98200234436035, "test_acc5": 91.89200234375, "epoch": 238, "n_parameters": 7298036} -{"train_lr": 0.001824188002505409, "train_loss": 2.5573047666072846, "test_loss": 1.083369691582287, "test_acc1": 73.39400255737304, "test_acc5": 91.4760024822998, "epoch": 239, "n_parameters": 7298036} -{"train_lr": 0.0018103221149804824, "train_loss": 2.543277328467369, "test_loss": 1.0759321617729523, "test_acc1": 73.88200211151123, "test_acc5": 91.66600260894775, "epoch": 240, "n_parameters": 7298036} -{"train_lr": 0.0017964657157810383, "train_loss": 2.5316893547058106, "test_loss": 1.0337719185387386, "test_acc1": 74.14000222686768, "test_acc5": 91.8360025970459, "epoch": 241, "n_parameters": 7298036} -{"train_lr": 0.0017826194802483815, "train_loss": 2.5347032483816148, "test_loss": 1.0256707328645622, "test_acc1": 74.61000281860352, "test_acc5": 91.95800230895996, "epoch": 242, "n_parameters": 7298036} -{"train_lr": 0.0017687840832285521, "train_loss": 2.5362037711143492, "test_loss": 1.0547979478450382, "test_acc1": 73.90400262268066, "test_acc5": 91.57400282836915, "epoch": 243, "n_parameters": 7298036} -{"train_lr": 0.0017549601990392034, "train_loss": 2.5332529230833054, "test_loss": 1.0598403294296825, "test_acc1": 73.83600260650634, "test_acc5": 91.58600240203857, "epoch": 244, 
"n_parameters": 7298036} -{"train_lr": 0.001741148501437039, "train_loss": 2.5318330518245697, "test_loss": 1.05079592830118, "test_acc1": 74.42400215118408, "test_acc5": 91.87600252807617, "epoch": 245, "n_parameters": 7298036} -{"train_lr": 0.0017273496635846672, "train_loss": 2.539151284122467, "test_loss": 1.0351353997693342, "test_acc1": 74.25400242095947, "test_acc5": 91.79600281829833, "epoch": 246, "n_parameters": 7298036} -{"train_lr": 0.0017135643580179704, "train_loss": 2.5341243579149246, "test_loss": 1.0410818731521858, "test_acc1": 74.12000245361328, "test_acc5": 91.91600229736328, "epoch": 247, "n_parameters": 7298036} -{"train_lr": 0.0016997932566133241, "train_loss": 2.531615267109871, "test_loss": 1.0288819436203032, "test_acc1": 74.62200237060547, "test_acc5": 92.04200265045166, "epoch": 248, "n_parameters": 7298036} -{"train_lr": 0.0016860370305547992, "train_loss": 2.5172065652608873, "test_loss": 1.0706198062090313, "test_acc1": 73.85200288146973, "test_acc5": 91.49400285003662, "epoch": 249, "n_parameters": 7298036} -{"train_lr": 0.00167229635030139, "train_loss": 2.512416501379013, "test_loss": 1.0220867592622251, "test_acc1": 74.60200274353028, "test_acc5": 91.99800272033691, "epoch": 250, "n_parameters": 7298036} -{"train_lr": 0.0016585718855544908, "train_loss": 2.515940954661369, "test_loss": 1.0872315192485558, "test_acc1": 73.50800239440917, "test_acc5": 91.57600256622314, "epoch": 251, "n_parameters": 7298036} -{"train_lr": 0.0016448643052251444, "train_loss": 2.5241574343681337, "test_loss": 1.044734899612034, "test_acc1": 74.22200227050782, "test_acc5": 91.80000233215333, "epoch": 252, "n_parameters": 7298036} -{"train_lr": 0.0016311742774014707, "train_loss": 2.507938566350937, "test_loss": 1.057835256352144, "test_acc1": 74.23400271179199, "test_acc5": 91.6660025744629, "epoch": 253, "n_parameters": 7298036} -{"train_lr": 0.0016175024693161407, "train_loss": 2.510973720908165, "test_loss": 1.0333308687104898, "test_acc1": 74.62000245941162, "test_acc5": 92.1860024911499, "epoch": 254, "n_parameters": 7298036} -{"train_lr": 0.0016038495473138241, "train_loss": 2.4986161624908445, "test_loss": 1.0196915882475235, "test_acc1": 74.83000232513427, "test_acc5": 92.26800251861572, "epoch": 255, "n_parameters": 7298036} -{"train_lr": 0.001590216176818559, "train_loss": 2.507596874976158, "test_loss": 1.0158295780420303, "test_acc1": 74.9560023916626, "test_acc5": 92.19600250427246, "epoch": 256, "n_parameters": 7298036} -{"train_lr": 0.0015766030223016928, "train_loss": 2.5128785177469255, "test_loss": 1.0234962734667694, "test_acc1": 74.79600222503662, "test_acc5": 92.12000243255615, "epoch": 257, "n_parameters": 7298036} -{"train_lr": 0.0015630107472491771, "train_loss": 2.505295595169067, "test_loss": 1.0224355809828813, "test_acc1": 74.73800240905761, "test_acc5": 91.99600274780273, "epoch": 258, "n_parameters": 7298036} -{"train_lr": 0.0015494400141292598, "train_loss": 2.501854888510704, "test_loss": 1.0194684251704638, "test_acc1": 74.75400267364502, "test_acc5": 92.05800249420166, "epoch": 259, "n_parameters": 7298036} -{"train_lr": 0.001535891484360313, "train_loss": 2.49896938560009, "test_loss": 1.0139251068672712, "test_acc1": 74.93400257751465, "test_acc5": 92.27400254516601, "epoch": 260, "n_parameters": 7298036} -{"train_lr": 0.0015223658182786706, "train_loss": 2.5053092823505403, "test_loss": 1.0067950550685911, "test_acc1": 74.87000243133545, "test_acc5": 92.27000260009765, "epoch": 261, "n_parameters": 7298036} -{"train_lr": 
0.0015088636751061355, "train_loss": 2.497705894780159, "test_loss": 0.9985965737963424, "test_acc1": 75.0700027960205, "test_acc5": 92.30600257995606, "epoch": 262, "n_parameters": 7298036} -{"train_lr": 0.0014953857129180808, "train_loss": 2.491771837568283, "test_loss": 1.008961298448198, "test_acc1": 75.24800236877441, "test_acc5": 92.278002807312, "epoch": 263, "n_parameters": 7298036} -{"train_lr": 0.001481932588611488, "train_loss": 2.487892787885666, "test_loss": 1.0171663546386887, "test_acc1": 74.83400244812012, "test_acc5": 92.15800217681885, "epoch": 264, "n_parameters": 7298036} -{"train_lr": 0.001468504957872541, "train_loss": 2.4918847252130507, "test_loss": 1.013217196745031, "test_acc1": 75.20200270904542, "test_acc5": 92.19000251068115, "epoch": 265, "n_parameters": 7298036} -{"train_lr": 0.0014551034751450972, "train_loss": 2.4844820635318756, "test_loss": 1.0356521954869522, "test_acc1": 74.32600225219727, "test_acc5": 91.96800250885009, "epoch": 266, "n_parameters": 7298036} -{"train_lr": 0.0014417287935984719, "train_loss": 2.483473820614815, "test_loss": 1.0243457557962221, "test_acc1": 74.88400221618652, "test_acc5": 92.05200239135742, "epoch": 267, "n_parameters": 7298036} -{"train_lr": 0.0014283815650957576, "train_loss": 2.484413605928421, "test_loss": 1.0041120514711912, "test_acc1": 75.02000259613037, "test_acc5": 92.30600256958007, "epoch": 268, "n_parameters": 7298036} -{"train_lr": 0.00141506244016212, "train_loss": 2.4831577420473097, "test_loss": 1.000810155973715, "test_acc1": 75.21400257629395, "test_acc5": 92.29800230895997, "epoch": 269, "n_parameters": 7298036} -{"train_lr": 0.0014017720679528809, "train_loss": 2.472080621790886, "test_loss": 0.9841916530447847, "test_acc1": 75.7220025732422, "test_acc5": 92.5180022454834, "epoch": 270, "n_parameters": 7298036} -{"train_lr": 0.001388511096221964, "train_loss": 2.473742818260193, "test_loss": 1.012401935589664, "test_acc1": 74.97800257965088, "test_acc5": 92.03200265380859, "epoch": 271, "n_parameters": 7298036} -{"train_lr": 0.0013752801712905223, "train_loss": 2.480457191157341, "test_loss": 1.0023301497978323, "test_acc1": 75.20200254577637, "test_acc5": 92.19200277618408, "epoch": 272, "n_parameters": 7298036} -{"train_lr": 0.0013620799380151495, "train_loss": 2.4629456825256346, "test_loss": 0.9821342876290574, "test_acc1": 75.4180024508667, "test_acc5": 92.52000239349366, "epoch": 273, "n_parameters": 7298036} -{"train_lr": 0.0013489110397565372, "train_loss": 2.4799975400447845, "test_loss": 1.0091400153058416, "test_acc1": 75.10600256774903, "test_acc5": 92.47800262573242, "epoch": 274, "n_parameters": 7298036} -{"train_lr": 0.0013357741183482558, "train_loss": 2.466777157521248, "test_loss": 0.9851598903975066, "test_acc1": 75.47000258789062, "test_acc5": 92.54000248931885, "epoch": 275, "n_parameters": 7298036} -{"train_lr": 0.0013226698140652842, "train_loss": 2.461476747226715, "test_loss": 0.9993378138717484, "test_acc1": 75.56000249816894, "test_acc5": 92.48600254150391, "epoch": 276, "n_parameters": 7298036} -{"train_lr": 0.001309598765592993, "train_loss": 2.471724288845062, "test_loss": 1.0109269963029552, "test_acc1": 75.21400258209229, "test_acc5": 92.21400256103516, "epoch": 277, "n_parameters": 7298036} -{"train_lr": 0.0012965616099957775, "train_loss": 2.4615280864953997, "test_loss": 0.9980578177115497, "test_acc1": 75.4380025189209, "test_acc5": 92.57000234344483, "epoch": 278, "n_parameters": 7298036} -{"train_lr": 0.0012835589826862073, "train_loss": 2.4575456416606904, 
"test_loss": 0.9708173516042092, "test_acc1": 75.58800263641358, "test_acc5": 92.59800277160645, "epoch": 279, "n_parameters": 7298036} -{"train_lr": 0.0012705915173940611, "train_loss": 2.4466985244989394, "test_loss": 1.0174343588159365, "test_acc1": 75.14000249938965, "test_acc5": 92.284002661438, "epoch": 280, "n_parameters": 7298036} -{"train_lr": 0.0012576598461352462, "train_loss": 2.458017695379257, "test_loss": 1.0056652485886042, "test_acc1": 75.24800258087159, "test_acc5": 92.17600268157959, "epoch": 281, "n_parameters": 7298036} -{"train_lr": 0.0012447645991812122, "train_loss": 2.4478625379323957, "test_loss": 1.0229578566025286, "test_acc1": 75.2140025088501, "test_acc5": 92.14000242675782, "epoch": 282, "n_parameters": 7298036} -{"train_lr": 0.001231906405028045, "train_loss": 2.461859957432747, "test_loss": 0.9600188767208773, "test_acc1": 75.83200248474121, "test_acc5": 92.66400283172608, "epoch": 283, "n_parameters": 7298036} -{"train_lr": 0.0012190858903660415, "train_loss": 2.44466598110199, "test_loss": 0.996317250544534, "test_acc1": 75.40800242401123, "test_acc5": 92.41400248291015, "epoch": 284, "n_parameters": 7298036} -{"train_lr": 0.0012063036800490023, "train_loss": 2.4494924306869508, "test_loss": 0.964211056337637, "test_acc1": 75.94200249633789, "test_acc5": 92.7160022958374, "epoch": 285, "n_parameters": 7298036} -{"train_lr": 0.0011935603970637625, "train_loss": 2.4390205520391466, "test_loss": 0.9973637761876863, "test_acc1": 75.18400232513427, "test_acc5": 92.35200224609375, "epoch": 286, "n_parameters": 7298036} -{"train_lr": 0.0011808566625000356, "train_loss": 2.4381395535469057, "test_loss": 0.98931044077172, "test_acc1": 75.63400264312745, "test_acc5": 92.47800255340576, "epoch": 287, "n_parameters": 7298036} -{"train_lr": 0.0011681930955198627, "train_loss": 2.431986510300636, "test_loss": 0.9689149551970118, "test_acc1": 75.74000250915527, "test_acc5": 92.61000224609376, "epoch": 288, "n_parameters": 7298036} -{"train_lr": 0.0011555703133276894, "train_loss": 2.428816387915611, "test_loss": 0.9613465294241905, "test_acc1": 75.72000255645752, "test_acc5": 92.8800023413086, "epoch": 289, "n_parameters": 7298036} -{"train_lr": 0.0011429889311400574, "train_loss": 2.4206805315732955, "test_loss": 0.9912525308044517, "test_acc1": 75.64000264801025, "test_acc5": 92.48800244750977, "epoch": 290, "n_parameters": 7298036} -{"train_lr": 0.0011304495621557978, "train_loss": 2.4208395852327347, "test_loss": 1.0054507446201408, "test_acc1": 75.13000258148193, "test_acc5": 92.44400255584716, "epoch": 291, "n_parameters": 7298036} -{"train_lr": 0.0011179528175260622, "train_loss": 2.4291888501644134, "test_loss": 0.9992973653270918, "test_acc1": 75.45600247406006, "test_acc5": 92.41000255645751, "epoch": 292, "n_parameters": 7298036} -{"train_lr": 0.0011054993063245714, "train_loss": 2.4273169549942017, "test_loss": 0.9827794721897911, "test_acc1": 75.57000243743896, "test_acc5": 92.58600252471923, "epoch": 293, "n_parameters": 7298036} -{"train_lr": 0.0010930896355179074, "train_loss": 2.4188252673625947, "test_loss": 0.9528712174471687, "test_acc1": 76.27600276550292, "test_acc5": 92.79400262329102, "epoch": 294, "n_parameters": 7298036} -{"train_lr": 0.0010807244099358875, "train_loss": 2.418368045735359, "test_loss": 0.9936138183316764, "test_acc1": 75.36400225372314, "test_acc5": 92.60600241027832, "epoch": 295, "n_parameters": 7298036} -{"train_lr": 0.0010684042322421535, "train_loss": 2.424034754610062, "test_loss": 0.9711610560908037, "test_acc1": 
75.91600247070312, "test_acc5": 92.82400261688232, "epoch": 296, "n_parameters": 7298036} -{"train_lr": 0.0010561297029048062, "train_loss": 2.4254582210302353, "test_loss": 0.9617599253268803, "test_acc1": 75.7940026449585, "test_acc5": 92.75400247436524, "epoch": 297, "n_parameters": 7298036} -{"train_lr": 0.0010439014201670813, "train_loss": 2.4104105238437654, "test_loss": 0.9622917528099874, "test_acc1": 76.01800247955322, "test_acc5": 92.75800295501709, "epoch": 298, "n_parameters": 7298036} -{"train_lr": 0.0010317199800182345, "train_loss": 2.407976567864418, "test_loss": 0.979393698494224, "test_acc1": 76.0440029498291, "test_acc5": 92.7620025112915, "epoch": 299, "n_parameters": 7298036} -{"train_lr": 0.0010195859761644697, "train_loss": 2.4124459301471712, "test_loss": 0.9514966019812752, "test_acc1": 76.30400250061035, "test_acc5": 92.9960027255249, "epoch": 300, "n_parameters": 7298036} -{"train_lr": 0.0010075000000000067, "train_loss": 2.4056717858314514, "test_loss": 0.9736723696046016, "test_acc1": 76.14200224182129, "test_acc5": 92.81400215118408, "epoch": 301, "n_parameters": 7298036} -{"train_lr": 0.000995462640578269, "train_loss": 2.409020874404907, "test_loss": 0.961438053232782, "test_acc1": 76.08400264038086, "test_acc5": 92.94400265350342, "epoch": 302, "n_parameters": 7298036} -{"train_lr": 0.0009834744845832106, "train_loss": 2.404333558678627, "test_loss": 0.9887305579203016, "test_acc1": 75.56200253295899, "test_acc5": 92.50600244903565, "epoch": 303, "n_parameters": 7298036} -{"train_lr": 0.0009715361163006195, "train_loss": 2.3915642235279084, "test_loss": 0.9436375562320737, "test_acc1": 76.47400264678956, "test_acc5": 92.97200205688476, "epoch": 304, "n_parameters": 7298036} -{"train_lr": 0.000959648117589686, "train_loss": 2.4049660050153734, "test_loss": 0.9237527018960785, "test_acc1": 76.7200022668457, "test_acc5": 93.28600291015626, "epoch": 305, "n_parameters": 7298036} -{"train_lr": 0.0009478110678547599, "train_loss": 2.3946733414173127, "test_loss": 0.9429432361879769, "test_acc1": 76.52000252075196, "test_acc5": 93.00400251617431, "epoch": 306, "n_parameters": 7298036} -{"train_lr": 0.0009360255440169043, "train_loss": 2.3823892540454863, "test_loss": 0.9495078454561093, "test_acc1": 76.29200294616699, "test_acc5": 92.73000244232178, "epoch": 307, "n_parameters": 7298036} -{"train_lr": 0.0009242921204859447, "train_loss": 2.3807427861452104, "test_loss": 0.9234843383378842, "test_acc1": 76.63400249847412, "test_acc5": 93.17800248596191, "epoch": 308, "n_parameters": 7298036} -{"train_lr": 0.0009126113691323534, "train_loss": 2.3826080370903013, "test_loss": 0.9388975930564544, "test_acc1": 76.5420027090454, "test_acc5": 92.97600239349366, "epoch": 309, "n_parameters": 7298036} -{"train_lr": 0.0009009838592595214, "train_loss": 2.3891737176418304, "test_loss": 0.944640935782124, "test_acc1": 76.66200279693604, "test_acc5": 93.11000257598877, "epoch": 310, "n_parameters": 7298036} -{"train_lr": 0.0008894101575758612, "train_loss": 2.3734276170730593, "test_loss": 0.9478350555633798, "test_acc1": 76.35800201568604, "test_acc5": 92.84600250549316, "epoch": 311, "n_parameters": 7298036} -{"train_lr": 0.0008778908281672491, "train_loss": 2.377720849061012, "test_loss": 0.9291355566066855, "test_acc1": 76.67800282440186, "test_acc5": 93.16600280914307, "epoch": 312, "n_parameters": 7298036} -{"train_lr": 0.0008664264324695524, "train_loss": 2.373582329535484, "test_loss": 0.9324750076321995, "test_acc1": 76.53200245422363, "test_acc5": 93.07200294128418, 
"epoch": 313, "n_parameters": 7298036} -{"train_lr": 0.0008550175292412688, "train_loss": 2.3757509226322173, "test_loss": 0.9286933600464288, "test_acc1": 76.65000253265382, "test_acc5": 93.18400271057129, "epoch": 314, "n_parameters": 7298036} -{"train_lr": 0.0008436646745362156, "train_loss": 2.369099672627449, "test_loss": 0.9098964320386157, "test_acc1": 77.1940025479126, "test_acc5": 93.39400250823975, "epoch": 315, "n_parameters": 7298036} -{"train_lr": 0.0008323684216765116, "train_loss": 2.3687452283620836, "test_loss": 0.9562607777907568, "test_acc1": 76.51800257873535, "test_acc5": 92.83600247772216, "epoch": 316, "n_parameters": 7298036} -{"train_lr": 0.0008211293212256214, "train_loss": 2.3670384380340574, "test_loss": 0.920820165425539, "test_acc1": 76.8000026373291, "test_acc5": 93.29400252288818, "epoch": 317, "n_parameters": 7298036} -{"train_lr": 0.0008099479209613996, "train_loss": 2.3615950959920884, "test_loss": 0.9389444302548381, "test_acc1": 76.84800248962402, "test_acc5": 93.05800251586913, "epoch": 318, "n_parameters": 7298036} -{"train_lr": 0.0007988247658495707, "train_loss": 2.3733336450099944, "test_loss": 0.938817074193674, "test_acc1": 76.87200226104736, "test_acc5": 93.2440028213501, "epoch": 319, "n_parameters": 7298036} -{"train_lr": 0.0007877603980169765, "train_loss": 2.358410629272461, "test_loss": 0.9088872405974304, "test_acc1": 76.95200258758545, "test_acc5": 93.33800273406982, "epoch": 320, "n_parameters": 7298036} -{"train_lr": 0.0007767553567253202, "train_loss": 2.349117143249512, "test_loss": 0.9172111037461197, "test_acc1": 76.99600252288819, "test_acc5": 93.18600286834717, "epoch": 321, "n_parameters": 7298036} -{"train_lr": 0.0007658101783447642, "train_loss": 2.3479167003154755, "test_loss": 0.9269298663472428, "test_acc1": 76.87000278839112, "test_acc5": 93.16400257598877, "epoch": 322, "n_parameters": 7298036} -{"train_lr": 0.0007549253963278913, "train_loss": 2.357111514568329, "test_loss": 0.9283191601143164, "test_acc1": 76.7840027835083, "test_acc5": 93.20600241638184, "epoch": 323, "n_parameters": 7298036} -{"train_lr": 0.0007441015411836098, "train_loss": 2.3416995874643325, "test_loss": 0.9171226842000204, "test_acc1": 76.98000257110596, "test_acc5": 93.19600259521485, "epoch": 324, "n_parameters": 7298036} -{"train_lr": 0.0007333391404513692, "train_loss": 2.337912822175026, "test_loss": 0.9077348542564055, "test_acc1": 77.19800266082764, "test_acc5": 93.35600246124268, "epoch": 325, "n_parameters": 7298036} -{"train_lr": 0.0007226387186753506, "train_loss": 2.3360916755199432, "test_loss": 0.9101267987314392, "test_acc1": 77.24600243225098, "test_acc5": 93.29600257995605, "epoch": 326, "n_parameters": 7298036} -{"train_lr": 0.0007120007973790458, "train_loss": 2.341637464547157, "test_loss": 0.9160922187216142, "test_acc1": 77.01800240570068, "test_acc5": 93.34600254241943, "epoch": 327, "n_parameters": 7298036} -{"train_lr": 0.0007014258950397421, "train_loss": 2.343623603224754, "test_loss": 0.9033404428730992, "test_acc1": 77.35800248260497, "test_acc5": 93.47600255004883, "epoch": 328, "n_parameters": 7298036} -{"train_lr": 0.0006909145270632263, "train_loss": 2.340073880338669, "test_loss": 0.9072278753361281, "test_acc1": 77.29200259033203, "test_acc5": 93.40400267944337, "epoch": 329, "n_parameters": 7298036} -{"train_lr": 0.0006804672057587739, "train_loss": 2.338018840289116, "test_loss": 0.8926747533328393, "test_acc1": 77.53000266906739, "test_acc5": 93.53800244750977, "epoch": 330, "n_parameters": 7298036} 
-{"train_lr": 0.0006700844403140784, "train_loss": 2.323673962497711, "test_loss": 0.8839146363384583, "test_acc1": 77.59000246337891, "test_acc5": 93.64400249786377, "epoch": 331, "n_parameters": 7298036} -{"train_lr": 0.0006597667367704799, "train_loss": 2.3316104535102844, "test_loss": 0.8968203050248763, "test_acc1": 77.57600218933105, "test_acc5": 93.62400228240966, "epoch": 332, "n_parameters": 7298036} -{"train_lr": 0.0006495145979982786, "train_loss": 2.3246267285346986, "test_loss": 0.8919209086281412, "test_acc1": 77.52600275268554, "test_acc5": 93.63400247009277, "epoch": 333, "n_parameters": 7298036} -{"train_lr": 0.0006393285236722668, "train_loss": 2.332592113661766, "test_loss": 0.8985129364711397, "test_acc1": 77.46600282714844, "test_acc5": 93.65600240814209, "epoch": 334, "n_parameters": 7298036} -{"train_lr": 0.000629209010247336, "train_loss": 2.3157878938913345, "test_loss": 0.8881998855401488, "test_acc1": 77.62600254638672, "test_acc5": 93.61000270629883, "epoch": 335, "n_parameters": 7298036} -{"train_lr": 0.0006191565509343066, "train_loss": 2.3123969563484192, "test_loss": 0.886814687620191, "test_acc1": 77.7400024935913, "test_acc5": 93.67400257019042, "epoch": 336, "n_parameters": 7298036} -{"train_lr": 0.0006091716356758274, "train_loss": 2.3173032944917678, "test_loss": 0.8963595481918138, "test_acc1": 77.71800257080078, "test_acc5": 93.5760023852539, "epoch": 337, "n_parameters": 7298036} -{"train_lr": 0.0005992547511226205, "train_loss": 2.3028970170259475, "test_loss": 0.8956589446786571, "test_acc1": 77.52600264343262, "test_acc5": 93.66400244354249, "epoch": 338, "n_parameters": 7298036} -{"train_lr": 0.0005894063806096327, "train_loss": 2.316052129125595, "test_loss": 0.8995716045884525, "test_acc1": 77.54400272827148, "test_acc5": 93.61000231201172, "epoch": 339, "n_parameters": 7298036} -{"train_lr": 0.000579627004132555, "train_loss": 2.31180285949707, "test_loss": 0.8940930791637477, "test_acc1": 77.78600223999024, "test_acc5": 93.79000271026611, "epoch": 340, "n_parameters": 7298036} -{"train_lr": 0.0005699170983243841, "train_loss": 2.306078789591789, "test_loss": 0.8694823114749264, "test_acc1": 78.07400301208496, "test_acc5": 93.80600263702392, "epoch": 341, "n_parameters": 7298036} -{"train_lr": 0.0005602771364322523, "train_loss": 2.3053324651956557, "test_loss": 0.8837948875392184, "test_acc1": 77.82400196594239, "test_acc5": 93.73400235412598, "epoch": 342, "n_parameters": 7298036} -{"train_lr": 0.0005507075882942857, "train_loss": 2.3063364749193194, "test_loss": 0.8671852039063678, "test_acc1": 77.9920026977539, "test_acc5": 93.83200248413085, "epoch": 343, "n_parameters": 7298036} -{"train_lr": 0.0005412089203167633, "train_loss": 2.295621811437607, "test_loss": 0.8632363593753647, "test_acc1": 78.27800247680663, "test_acc5": 93.97200251098633, "epoch": 344, "n_parameters": 7298036} -{"train_lr": 0.0005317815954513637, "train_loss": 2.28743119931221, "test_loss": 0.8621908787857083, "test_acc1": 78.23600241790771, "test_acc5": 93.81600250915527, "epoch": 345, "n_parameters": 7298036} -{"train_lr": 0.0005224260731725992, "train_loss": 2.297768548130989, "test_loss": 0.8753008713178775, "test_acc1": 78.06000269409179, "test_acc5": 93.80000285705566, "epoch": 346, "n_parameters": 7298036} -{"train_lr": 0.00051314280945543, "train_loss": 2.2997488481521606, "test_loss": 0.8737060727880281, "test_acc1": 78.03000248809815, "test_acc5": 93.7480025567627, "epoch": 347, "n_parameters": 7298036} -{"train_lr": 0.0005039322567530305, "train_loss": 
2.2880155382633207, "test_loss": 0.8761547124561142, "test_acc1": 77.94200280639649, "test_acc5": 93.69600254516601, "epoch": 348, "n_parameters": 7298036} -{"train_lr": 0.0004947948639747458, "train_loss": 2.2805166622161863, "test_loss": 0.8628105635152143, "test_acc1": 78.05800266326904, "test_acc5": 93.89800255645751, "epoch": 349, "n_parameters": 7298036} -{"train_lr": 0.0004857310764642128, "train_loss": 2.282940669322014, "test_loss": 0.8742733192356194, "test_acc1": 78.04600272216797, "test_acc5": 93.8200026599121, "epoch": 350, "n_parameters": 7298036} -{"train_lr": 0.00047674133597763773, "train_loss": 2.2825661782979965, "test_loss": 0.8604613244533539, "test_acc1": 78.27000223541259, "test_acc5": 93.94200245056152, "epoch": 351, "n_parameters": 7298036} -{"train_lr": 0.00046782608066229685, "train_loss": 2.2811500553131103, "test_loss": 0.8544722348451614, "test_acc1": 78.52800251068115, "test_acc5": 94.0700025845337, "epoch": 352, "n_parameters": 7298036} -{"train_lr": 0.0004589857450351661, "train_loss": 2.271720604109764, "test_loss": 0.8587004919262493, "test_acc1": 78.55600243499755, "test_acc5": 93.95400236358643, "epoch": 353, "n_parameters": 7298036} -{"train_lr": 0.000450220759961707, "train_loss": 2.2730486157894134, "test_loss": 0.846900279250215, "test_acc1": 78.4120024874878, "test_acc5": 93.99000258148193, "epoch": 354, "n_parameters": 7298036} -{"train_lr": 0.0004415315526349521, "train_loss": 2.2723387486219404, "test_loss": 0.8546708894564825, "test_acc1": 78.43000236206055, "test_acc5": 94.06800252502441, "epoch": 355, "n_parameters": 7298036} -{"train_lr": 0.0004329185465545854, "train_loss": 2.265364590525627, "test_loss": 0.8611945617286598, "test_acc1": 78.32000274719238, "test_acc5": 94.0100024508667, "epoch": 356, "n_parameters": 7298036} -{"train_lr": 0.0004243821615063944, "train_loss": 2.2618649871587753, "test_loss": 0.8508295237141497, "test_acc1": 78.54000269561767, "test_acc5": 94.0340024609375, "epoch": 357, "n_parameters": 7298036} -{"train_lr": 0.00041592281354172557, "train_loss": 2.2687717294692993, "test_loss": 0.8486351500100949, "test_acc1": 78.34800239471436, "test_acc5": 94.07400261810302, "epoch": 358, "n_parameters": 7298036} -{"train_lr": 0.0004075409149572814, "train_loss": 2.255942825317383, "test_loss": 0.8436768394620979, "test_acc1": 78.41600241790772, "test_acc5": 94.19000255615235, "epoch": 359, "n_parameters": 7298036} -{"train_lr": 0.000399236874274954, "train_loss": 2.25349505572319, "test_loss": 0.8436765361796407, "test_acc1": 78.56400238067627, "test_acc5": 94.13200212432861, "epoch": 360, "n_parameters": 7298036} -{"train_lr": 0.00039101109622197687, "train_loss": 2.2556183048725127, "test_loss": 0.84733236668741, "test_acc1": 78.68800259613037, "test_acc5": 94.16600251953125, "epoch": 361, "n_parameters": 7298036} -{"train_lr": 0.000382863981711175, "train_loss": 2.257443091750145, "test_loss": 0.8519715020323501, "test_acc1": 78.56400246002197, "test_acc5": 94.09600283569335, "epoch": 362, "n_parameters": 7298036} -{"train_lr": 0.0003747959278214157, "train_loss": 2.2603793987989427, "test_loss": 0.847300445113112, "test_acc1": 78.58400239593506, "test_acc5": 94.1600026373291, "epoch": 363, "n_parameters": 7298036} -{"train_lr": 0.00036680732777826604, "train_loss": 2.247489723777771, "test_loss": 0.8371082274791073, "test_acc1": 78.70200255462646, "test_acc5": 94.22800253662109, "epoch": 364, "n_parameters": 7298036} -{"train_lr": 0.00035889857093479767, "train_loss": 2.2464755400180816, "test_loss": 
0.8421895436066038, "test_acc1": 78.76000237823486, "test_acc5": 94.24400287139892, "epoch": 365, "n_parameters": 7298036} -{"train_lr": 0.00035107004275269313, "train_loss": 2.242171888923645, "test_loss": 0.8318258938105667, "test_acc1": 78.68000264007568, "test_acc5": 94.25800275665283, "epoch": 366, "n_parameters": 7298036} -{"train_lr": 0.00034332212478335543, "train_loss": 2.242269374227524, "test_loss": 0.826381018056589, "test_acc1": 78.88800226959229, "test_acc5": 94.33600224365235, "epoch": 367, "n_parameters": 7298036} -{"train_lr": 0.0003356551946493703, "train_loss": 2.2346670835256575, "test_loss": 0.8255993296556613, "test_acc1": 79.2060022177124, "test_acc5": 94.34600234222412, "epoch": 368, "n_parameters": 7298036} -{"train_lr": 0.0003280696260261078, "train_loss": 2.2371157382011413, "test_loss": 0.8318725127507659, "test_acc1": 78.89000243865966, "test_acc5": 94.32000239990235, "epoch": 369, "n_parameters": 7298036} -{"train_lr": 0.00032056578862347564, "train_loss": 2.23090071246624, "test_loss": 0.8340057903791175, "test_acc1": 78.85400240600586, "test_acc5": 94.24200250091553, "epoch": 370, "n_parameters": 7298036} -{"train_lr": 0.00031314404816792945, "train_loss": 2.2336700860261915, "test_loss": 0.8194345330052516, "test_acc1": 79.11600237945557, "test_acc5": 94.39600233886719, "epoch": 371, "n_parameters": 7298036} -{"train_lr": 0.00030580476638461713, "train_loss": 2.230939082121849, "test_loss": 0.8265131694429061, "test_acc1": 78.95200254577637, "test_acc5": 94.37200274871826, "epoch": 372, "n_parameters": 7298036} -{"train_lr": 0.0002985483009797873, "train_loss": 2.22307300593853, "test_loss": 0.8258587636930101, "test_acc1": 78.94400229492187, "test_acc5": 94.36000274871826, "epoch": 373, "n_parameters": 7298036} -{"train_lr": 0.00029137500562332014, "train_loss": 2.2357719297409058, "test_loss": 0.8383553177118301, "test_acc1": 78.98000288726807, "test_acc5": 94.24000254486084, "epoch": 374, "n_parameters": 7298036} -{"train_lr": 0.000284285229931521, "train_loss": 2.216908124089241, "test_loss": 0.8279894511489307, "test_acc1": 78.92600283630371, "test_acc5": 94.35600267059326, "epoch": 375, "n_parameters": 7298036} -{"train_lr": 0.00027727931945004304, "train_loss": 2.2349593551874163, "test_loss": 0.8258826693191248, "test_acc1": 79.06000236236572, "test_acc5": 94.31400272125244, "epoch": 376, "n_parameters": 7298036} -{"train_lr": 0.00027035761563708795, "train_loss": 2.2051746411561965, "test_loss": 0.8156017192146358, "test_acc1": 79.19600241027833, "test_acc5": 94.33200244232178, "epoch": 377, "n_parameters": 7298036} -{"train_lr": 0.0002635204558467305, "train_loss": 2.210484447312355, "test_loss": 0.8111103913363289, "test_acc1": 79.23000246795654, "test_acc5": 94.48600233062744, "epoch": 378, "n_parameters": 7298036} -{"train_lr": 0.0002567681733124936, "train_loss": 2.197012175726891, "test_loss": 0.8184359248946694, "test_acc1": 79.2100026751709, "test_acc5": 94.3960029751587, "epoch": 379, "n_parameters": 7298036} -{"train_lr": 0.0002501010971311009, "train_loss": 2.2179677588701248, "test_loss": 0.8138409611933372, "test_acc1": 79.31000274169922, "test_acc5": 94.4920024420166, "epoch": 380, "n_parameters": 7298036} -{"train_lr": 0.00024351955224644067, "train_loss": 2.2100116575717927, "test_loss": 0.8201799616217613, "test_acc1": 79.22200260681153, "test_acc5": 94.48400216827393, "epoch": 381, "n_parameters": 7298036} -{"train_lr": 0.00023702385943372427, "train_loss": 2.20481566734314, "test_loss": 0.8174867820652092, "test_acc1": 
79.1720024243164, "test_acc5": 94.48000231658935, "epoch": 382, "n_parameters": 7298036} -{"train_lr": 0.00023061433528386214, "train_loss": 2.209949382352829, "test_loss": 0.8101321452242487, "test_acc1": 79.29600261566162, "test_acc5": 94.49000225524902, "epoch": 383, "n_parameters": 7298036} -{"train_lr": 0.00022429129218801117, "train_loss": 2.204105324220657, "test_loss": 0.8111938387155533, "test_acc1": 79.26800217590332, "test_acc5": 94.48000217712402, "epoch": 384, "n_parameters": 7298036} -{"train_lr": 0.00021805503832237022, "train_loss": 2.1997555752038958, "test_loss": 0.8101878330549773, "test_acc1": 79.42000239074707, "test_acc5": 94.43000250946045, "epoch": 385, "n_parameters": 7298036} -{"train_lr": 0.00021190587763316056, "train_loss": 2.207076029372215, "test_loss": 0.8119069997440366, "test_acc1": 79.20200251342773, "test_acc5": 94.39600244781494, "epoch": 386, "n_parameters": 7298036} -{"train_lr": 0.00020584410982180034, "train_loss": 2.199598826432228, "test_loss": 0.8051122523405972, "test_acc1": 79.39800256866455, "test_acc5": 94.61200271240234, "epoch": 387, "n_parameters": 7298036} -{"train_lr": 0.0001998700303302881, "train_loss": 2.1960119938611986, "test_loss": 0.8019180771182565, "test_acc1": 79.49800279449462, "test_acc5": 94.59200262268067, "epoch": 388, "n_parameters": 7298036} -{"train_lr": 0.00019398393032684917, "train_loss": 2.179600920534134, "test_loss": 0.8107875460649238, "test_acc1": 79.40800264923095, "test_acc5": 94.49400253143311, "epoch": 389, "n_parameters": 7298036} -{"train_lr": 0.0001881860966916756, "train_loss": 2.1874710554122925, "test_loss": 0.8011737244532389, "test_acc1": 79.44000270233154, "test_acc5": 94.55800276306152, "epoch": 390, "n_parameters": 7298036} -{"train_lr": 0.00018247681200301023, "train_loss": 2.1898976377248762, "test_loss": 0.7991703119786346, "test_acc1": 79.50400247375488, "test_acc5": 94.59200253417968, "epoch": 391, "n_parameters": 7298036} -{"train_lr": 0.00017685635452333062, "train_loss": 2.1796127670049668, "test_loss": 0.8061414556029964, "test_acc1": 79.4580022314453, "test_acc5": 94.61800252868652, "epoch": 392, "n_parameters": 7298036} -{"train_lr": 0.00017132499818580898, "train_loss": 2.188413988351822, "test_loss": 0.8082629146383089, "test_acc1": 79.43400271667481, "test_acc5": 94.57600255889892, "epoch": 393, "n_parameters": 7298036} -{"train_lr": 0.00016588301258094182, "train_loss": 2.1820069454669953, "test_loss": 0.8009944702772533, "test_acc1": 79.51600256652831, "test_acc5": 94.57400256225586, "epoch": 394, "n_parameters": 7298036} -{"train_lr": 0.0001605306629434379, "train_loss": 2.1765265538215637, "test_loss": 0.8055482706164613, "test_acc1": 79.45200254058838, "test_acc5": 94.57200248077393, "epoch": 395, "n_parameters": 7298036} -{"train_lr": 0.00015526821013925752, "train_loss": 2.1801244829177855, "test_loss": 0.8034393706304186, "test_acc1": 79.5960023953247, "test_acc5": 94.63400225494385, "epoch": 396, "n_parameters": 7298036} -{"train_lr": 0.00015009591065294023, "train_loss": 2.171605939412117, "test_loss": 0.7933506189900286, "test_acc1": 79.71800247680665, "test_acc5": 94.63000221343994, "epoch": 397, "n_parameters": 7298036} -{"train_lr": 0.00014501401657505492, "train_loss": 2.1778587456941603, "test_loss": 0.7929608254309963, "test_acc1": 79.6980024761963, "test_acc5": 94.6140024282837, "epoch": 398, "n_parameters": 7298036} -{"train_lr": 0.0001400227755899522, "train_loss": 2.1727487627744675, "test_loss": 0.7887912252370048, "test_acc1": 79.72400232330322, "test_acc5": 
94.7440024255371, "epoch": 399, "n_parameters": 7298036} -{"train_lr": 0.00013512243096367772, "train_loss": 2.1632856247901917, "test_loss": 0.7919384942335241, "test_acc1": 79.64200234832764, "test_acc5": 94.68200254821777, "epoch": 400, "n_parameters": 7298036} -{"train_lr": 0.00013031322153211376, "train_loss": 2.1652320227622988, "test_loss": 0.7895159971188096, "test_acc1": 79.81400227905273, "test_acc5": 94.65000284454345, "epoch": 401, "n_parameters": 7298036} -{"train_lr": 0.00012559538168934326, "train_loss": 2.1685021246910097, "test_loss": 0.7940862474634367, "test_acc1": 79.6800023590088, "test_acc5": 94.64200249237061, "epoch": 402, "n_parameters": 7298036} -{"train_lr": 0.00012096914137622728, "train_loss": 2.1644829222679136, "test_loss": 0.7873093571294757, "test_acc1": 79.77600254089356, "test_acc5": 94.74400253692627, "epoch": 403, "n_parameters": 7298036} -{"train_lr": 0.00011643472606918499, "train_loss": 2.158027069783211, "test_loss": 0.7881849907776889, "test_acc1": 79.71200249725342, "test_acc5": 94.69400241699219, "epoch": 404, "n_parameters": 7298036} -{"train_lr": 0.00011199235676923019, "train_loss": 2.161591362071037, "test_loss": 0.7869491399649311, "test_acc1": 79.82000261230469, "test_acc5": 94.76800248413086, "epoch": 405, "n_parameters": 7298036} -{"train_lr": 0.00010764224999117014, "train_loss": 2.1609512971878053, "test_loss": 0.79119824705755, "test_acc1": 79.83600251892089, "test_acc5": 94.68200231384277, "epoch": 406, "n_parameters": 7298036} -{"train_lr": 0.0001033846177530702, "train_loss": 2.1624391537189482, "test_loss": 0.786705738700488, "test_acc1": 79.93400250732422, "test_acc5": 94.69000260131835, "epoch": 407, "n_parameters": 7298036} -{"train_lr": 9.921966756592387e-05, "train_loss": 2.1547350258111955, "test_loss": 0.7838799962226082, "test_acc1": 79.89000249633789, "test_acc5": 94.7460027798462, "epoch": 408, "n_parameters": 7298036} -{"train_lr": 9.514760242352498e-05, "train_loss": 2.164170332741737, "test_loss": 0.7880984579815584, "test_acc1": 79.88400230834961, "test_acc5": 94.76600252593994, "epoch": 409, "n_parameters": 7298036} -{"train_lr": 9.116862079258612e-05, "train_loss": 2.1514782518148423, "test_loss": 0.7900709875804537, "test_acc1": 79.95400250396729, "test_acc5": 94.71400287200927, "epoch": 410, "n_parameters": 7298036} -{"train_lr": 8.728291660305303e-05, "train_loss": 2.151021363544464, "test_loss": 0.7815023851306999, "test_acc1": 79.95600261352538, "test_acc5": 94.78000256500245, "epoch": 411, "n_parameters": 7298036} -{"train_lr": 8.349067923867126e-05, "train_loss": 2.1492631515264513, "test_loss": 0.7840851146508666, "test_acc1": 79.90000266357421, "test_acc5": 94.71000260406494, "epoch": 412, "n_parameters": 7298036} -{"train_lr": 7.979209352773835e-05, "train_loss": 2.151286989760399, "test_loss": 0.7804561700014507, "test_acc1": 79.97800273406982, "test_acc5": 94.81000279937744, "epoch": 413, "n_parameters": 7298036} -{"train_lr": 7.618733973410262e-05, "train_loss": 2.153090433740616, "test_loss": 0.7825328067821615, "test_acc1": 79.98000254089355, "test_acc5": 94.77000242828369, "epoch": 414, "n_parameters": 7298036} -{"train_lr": 7.267659354838017e-05, "train_loss": 2.142410843014717, "test_loss": 0.7811045497655869, "test_acc1": 79.87800243774414, "test_acc5": 94.76200274902344, "epoch": 415, "n_parameters": 7298036} -{"train_lr": 6.926002607938772e-05, "train_loss": 2.1478301615715027, "test_loss": 0.7787229642271996, "test_acc1": 80.05000220275879, "test_acc5": 94.8180023110962, "epoch": 416, 
"n_parameters": 7298036} -{"train_lr": 6.593780384579997e-05, "train_loss": 2.1346821863889693, "test_loss": 0.78078228432466, "test_acc1": 80.03800292022706, "test_acc5": 94.81000245605469, "epoch": 417, "n_parameters": 7298036} -{"train_lr": 6.27100887680448e-05, "train_loss": 2.135209207391739, "test_loss": 0.7777341089266188, "test_acc1": 80.00800246551513, "test_acc5": 94.82800265991212, "epoch": 418, "n_parameters": 7298036} -{"train_lr": 5.957703816040123e-05, "train_loss": 2.146470212173462, "test_loss": 0.7795118522994658, "test_acc1": 80.03000242645264, "test_acc5": 94.80200254821777, "epoch": 419, "n_parameters": 7298036} -{"train_lr": 5.6538804723335324e-05, "train_loss": 2.1440317486763, "test_loss": 0.7788517856422592, "test_acc1": 80.02200256317138, "test_acc5": 94.83600273742675, "epoch": 420, "n_parameters": 7298036} -{"train_lr": 5.359553653605782e-05, "train_loss": 2.151880567932129, "test_loss": 0.7780319209046224, "test_acc1": 80.00000254364014, "test_acc5": 94.81600223571778, "epoch": 421, "n_parameters": 7298036} -{"train_lr": 5.0747377049308795e-05, "train_loss": 2.138569548368454, "test_loss": 0.7774156711119062, "test_acc1": 80.09800252410889, "test_acc5": 94.84600243377686, "epoch": 422, "n_parameters": 7298036} -{"train_lr": 4.799446507836315e-05, "train_loss": 2.1389743479013443, "test_loss": 0.7743955964551252, "test_acc1": 80.11400243255615, "test_acc5": 94.83600245056152, "epoch": 423, "n_parameters": 7298036} -{"train_lr": 4.533693479626563e-05, "train_loss": 2.136804186272621, "test_loss": 0.7748413977815825, "test_acc1": 80.0640027975464, "test_acc5": 94.85200221343995, "epoch": 424, "n_parameters": 7298036} -{"train_lr": 4.2774915727294984e-05, "train_loss": 2.133044938659668, "test_loss": 0.7749293166048387, "test_acc1": 80.09600244873047, "test_acc5": 94.85000248962402, "epoch": 425, "n_parameters": 7298036} -{"train_lr": 4.030853274064522e-05, "train_loss": 2.140539645266533, "test_loss": 0.7753426518072101, "test_acc1": 80.06200246551514, "test_acc5": 94.86000209899902, "epoch": 426, "n_parameters": 7298036} -{"train_lr": 3.793790604434225e-05, "train_loss": 2.136482640552521, "test_loss": 0.7755109849659836, "test_acc1": 80.05400262420655, "test_acc5": 94.8640026095581, "epoch": 427, "n_parameters": 7298036} -{"train_lr": 3.5663151179389266e-05, "train_loss": 2.1358518453359605, "test_loss": 0.7747705806704128, "test_acc1": 80.14800246826172, "test_acc5": 94.86600254547119, "epoch": 428, "n_parameters": 7298036} -{"train_lr": 3.348437901412699e-05, "train_loss": 2.1298414205551146, "test_loss": 0.7785103250952328, "test_acc1": 80.03000255767822, "test_acc5": 94.81800239196777, "epoch": 429, "n_parameters": 7298036} -{"train_lr": 3.14016957388384e-05, "train_loss": 2.136502484345436, "test_loss": 0.7717491048662102, "test_acc1": 80.22400269989014, "test_acc5": 94.90400229156494, "epoch": 430, "n_parameters": 7298036} -{"train_lr": 2.9415202860566895e-05, "train_loss": 2.139490125131607, "test_loss": 0.7785972365561653, "test_acc1": 80.14800233703613, "test_acc5": 94.85400248687745, "epoch": 431, "n_parameters": 7298036} -{"train_lr": 2.7524997198174166e-05, "train_loss": 2.1242366835832596, "test_loss": 0.7703370263471323, "test_acc1": 80.2560024319458, "test_acc5": 94.90000257324219, "epoch": 432, "n_parameters": 7298036} -{"train_lr": 2.5731170877616888e-05, "train_loss": 2.131270783662796, "test_loss": 0.7748865984818515, "test_acc1": 80.15600269744873, "test_acc5": 94.8940026235962, "epoch": 433, "n_parameters": 7298036} -{"train_lr": 
2.403381132745848e-05, "train_loss": 2.1404332532167434, "test_loss": 0.7716723288245061, "test_acc1": 80.1920024407959, "test_acc5": 94.90000266815186, "epoch": 434, "n_parameters": 7298036} -{"train_lr": 2.2433001274609897e-05, "train_loss": 2.130281942367554, "test_loss": 0.7703206942361944, "test_acc1": 80.2280025881958, "test_acc5": 94.89200251190185, "epoch": 435, "n_parameters": 7298036} -{"train_lr": 2.0928818740294644e-05, "train_loss": 2.136551797413826, "test_loss": 0.7703693382003728, "test_acc1": 80.26200278350831, "test_acc5": 94.9120025064087, "epoch": 436, "n_parameters": 7298036} -{"train_lr": 1.9521337036247088e-05, "train_loss": 2.1196899954319, "test_loss": 0.7751071968061083, "test_acc1": 80.19800253570557, "test_acc5": 94.84600241149903, "epoch": 437, "n_parameters": 7298036} -{"train_lr": 1.8210624761139314e-05, "train_loss": 2.128845579147339, "test_loss": 0.7745976158801247, "test_acc1": 80.20400258026123, "test_acc5": 94.85400252319336, "epoch": 438, "n_parameters": 7298036} -{"train_lr": 1.6996745797238736e-05, "train_loss": 2.1313156620025633, "test_loss": 0.7715496876660515, "test_acc1": 80.21200256866454, "test_acc5": 94.88200235290527, "epoch": 439, "n_parameters": 7298036} -{"train_lr": 1.5879759307294027e-05, "train_loss": 2.127250641512871, "test_loss": 0.772787895930164, "test_acc1": 80.20600245758057, "test_acc5": 94.92800248962402, "epoch": 440, "n_parameters": 7298036} -{"train_lr": 1.4859719731650575e-05, "train_loss": 2.1346032338619234, "test_loss": 0.7725648428587353, "test_acc1": 80.1860023538208, "test_acc5": 94.89600243652343, "epoch": 441, "n_parameters": 7298036} -{"train_lr": 1.393667678559817e-05, "train_loss": 2.125934087705612, "test_loss": 0.774717049782767, "test_acc1": 80.09000256591797, "test_acc5": 94.82800245605469, "epoch": 442, "n_parameters": 7298036} -{"train_lr": 1.3110675456947718e-05, "train_loss": 2.1236096183538438, "test_loss": 0.7699775283827501, "test_acc1": 80.25600251281739, "test_acc5": 94.91600241699219, "epoch": 443, "n_parameters": 7298036} -{"train_lr": 1.2381756003839114e-05, "train_loss": 2.1218350173950196, "test_loss": 0.7729646421092398, "test_acc1": 80.19400282531738, "test_acc5": 94.8960022607422, "epoch": 444, "n_parameters": 7298036} -{"train_lr": 1.1749953952777368e-05, "train_loss": 2.1244102136611938, "test_loss": 0.7708226426997605, "test_acc1": 80.20000260498047, "test_acc5": 94.92000220214844, "epoch": 445, "n_parameters": 7298036} -{"train_lr": 1.1215300096904058e-05, "train_loss": 2.128528174734116, "test_loss": 0.7711947483613211, "test_acc1": 80.2440023147583, "test_acc5": 94.92600235565186, "epoch": 446, "n_parameters": 7298036} -{"train_lr": 1.0777820494492919e-05, "train_loss": 2.12798429043293, "test_loss": 0.7729313945507302, "test_acc1": 80.21800289001465, "test_acc5": 94.9080024142456, "epoch": 447, "n_parameters": 7298036} -{"train_lr": 1.0437536467683126e-05, "train_loss": 2.1158873163938523, "test_loss": 0.7719080717686344, "test_acc1": 80.18200255249023, "test_acc5": 94.84600274902344, "epoch": 448, "n_parameters": 7298036} -{"train_lr": 1.0194464601437938e-05, "train_loss": 2.125984363794327, "test_loss": 0.7727927267551422, "test_acc1": 80.20000252410888, "test_acc5": 94.9180024560547, "epoch": 449, "n_parameters": 7298036} diff --git a/cv/classification/repvit/pytorch/logs/repvit_m1_1_distill_300e.txt b/cv/classification/repvit/pytorch/logs/repvit_m1_1_distill_300e.txt deleted file mode 100644 index dc3a4955..00000000 --- 
a/cv/classification/repvit/pytorch/logs/repvit_m1_1_distill_300e.txt +++ /dev/null @@ -1,300 +0,0 @@ -{"train_lr": 1.000000000000014e-06, "train_loss": 7.00662564945221, "test_loss": 6.948657460072461, "test_acc1": 0.11400000759601593, "test_acc5": 0.5120000247383117, "epoch": 0, "n_parameters": 8797984} -{"train_lr": 1.000000000000014e-06, "train_loss": 6.9974605115890505, "test_loss": 6.929055077188155, "test_acc1": 0.13800000859737396, "test_acc5": 0.5860000248241425, "epoch": 1, "n_parameters": 8797984} -{"train_lr": 0.0008007999999999933, "train_loss": 6.40715795955658, "test_loss": 5.043183060253368, "test_acc1": 9.480000264358521, "test_acc5": 24.850000758361816, "epoch": 2, "n_parameters": 8797984} -{"train_lr": 0.0016005999999999787, "train_loss": 5.716562725162506, "test_loss": 3.7755942467380974, "test_acc1": 23.506000678405762, "test_acc5": 47.11000126953125, "epoch": 3, "n_parameters": 8797984} -{"train_lr": 0.0024003999999999835, "train_loss": 5.0836966384410855, "test_loss": 2.9830795622923794, "test_acc1": 35.50600110687256, "test_acc5": 61.40000159484863, "epoch": 4, "n_parameters": 8797984} -{"train_lr": 0.0032001999999999873, "train_loss": 4.653420350313187, "test_loss": 2.6670023413265453, "test_acc1": 41.83600116134644, "test_acc5": 67.57200193603515, "epoch": 5, "n_parameters": 8797984} -{"train_lr": 0.003997265921835383, "train_loss": 4.379179119729995, "test_loss": 2.4238605341490578, "test_acc1": 46.35600142593384, "test_acc5": 71.49200211669923, "epoch": 6, "n_parameters": 8797984} -{"train_lr": 0.003996063323214417, "train_loss": 4.137888956928253, "test_loss": 2.1632904410362244, "test_acc1": 51.09200123092651, "test_acc5": 75.68400216125488, "epoch": 7, "n_parameters": 8797984} -{"train_lr": 0.003994642382062749, "train_loss": 3.958152617788315, "test_loss": 2.0830640766550514, "test_acc1": 52.92200130554199, "test_acc5": 76.99200241638184, "epoch": 8, "n_parameters": 8797984} -{"train_lr": 0.003993003254202742, "train_loss": 3.8204752212524413, "test_loss": 1.9857511283720242, "test_acc1": 55.06200118621826, "test_acc5": 78.71600255950928, "epoch": 9, "n_parameters": 8797984} -{"train_lr": 0.0039911461193831675, "train_loss": 3.716270379447937, "test_loss": 1.846644846832051, "test_acc1": 57.70400149871826, "test_acc5": 80.76800270660401, "epoch": 10, "n_parameters": 8797984} -{"train_lr": 0.003989071181259754, "train_loss": 3.625722735118866, "test_loss": 1.7743130286826807, "test_acc1": 58.85600165374756, "test_acc5": 81.89800274414063, "epoch": 11, "n_parameters": 8797984} -{"train_lr": 0.0039867786673727715, "train_loss": 3.5642605731487276, "test_loss": 1.7024617133771671, "test_acc1": 60.292001640625, "test_acc5": 82.80600257873535, "epoch": 12, "n_parameters": 8797984} -{"train_lr": 0.0039842688291223715, "train_loss": 3.5035005945682527, "test_loss": 1.6845679050859284, "test_acc1": 60.92400186889648, "test_acc5": 83.34800255340576, "epoch": 13, "n_parameters": 8797984} -{"train_lr": 0.0039815419417405275, "train_loss": 3.4386805131435394, "test_loss": 1.6343737853800548, "test_acc1": 62.008001869201664, "test_acc5": 83.9580027407837, "epoch": 14, "n_parameters": 8797984} -{"train_lr": 0.003978598304261148, "train_loss": 3.3928624116897583, "test_loss": 1.6133061849019106, "test_acc1": 62.57400168273926, "test_acc5": 84.21000268218994, "epoch": 15, "n_parameters": 8797984} -{"train_lr": 0.0039754382394872915, "train_loss": 3.3609007955551147, "test_loss": 1.6259869332699215, "test_acc1": 62.644001883850095, "test_acc5": 84.43000226989746, "epoch": 16, 
"n_parameters": 8797984} -{"train_lr": 0.0039720620939556715, "train_loss": 3.3212381683349608, "test_loss": 1.5738751594634617, "test_acc1": 63.850002052612304, "test_acc5": 85.27800258789063, "epoch": 17, "n_parameters": 8797984} -{"train_lr": 0.003968470237898645, "train_loss": 3.313477579975128, "test_loss": 1.5619675587205326, "test_acc1": 63.57800186706543, "test_acc5": 85.24200243469238, "epoch": 18, "n_parameters": 8797984} -{"train_lr": 0.003964663065203757, "train_loss": 3.2611653960227964, "test_loss": 1.5248717429006802, "test_acc1": 64.38600171447754, "test_acc5": 85.54400216461181, "epoch": 19, "n_parameters": 8797984} -{"train_lr": 0.003960640993370302, "train_loss": 3.23621964802742, "test_loss": 1.493879080256995, "test_acc1": 64.87600190856934, "test_acc5": 86.16400244750976, "epoch": 20, "n_parameters": 8797984} -{"train_lr": 0.003956404463463954, "train_loss": 3.222177745246887, "test_loss": 1.5087093014050932, "test_acc1": 64.8880018814087, "test_acc5": 86.11400209350586, "epoch": 21, "n_parameters": 8797984} -{"train_lr": 0.0039519539400677895, "train_loss": 3.182385726118088, "test_loss": 1.474853861857863, "test_acc1": 65.53200189392089, "test_acc5": 86.5460021862793, "epoch": 22, "n_parameters": 8797984} -{"train_lr": 0.00394728991123201, "train_loss": 3.164822752952576, "test_loss": 1.432515128132175, "test_acc1": 66.11000179473876, "test_acc5": 86.76400229888915, "epoch": 23, "n_parameters": 8797984} -{"train_lr": 0.003942412888419754, "train_loss": 3.156393798184395, "test_loss": 1.4646268333582317, "test_acc1": 65.55800211456298, "test_acc5": 86.54000267120361, "epoch": 24, "n_parameters": 8797984} -{"train_lr": 0.003937323406451619, "train_loss": 3.1409024054050447, "test_loss": 1.4163770623066847, "test_acc1": 66.39000213775635, "test_acc5": 86.8800023098755, "epoch": 25, "n_parameters": 8797984} -{"train_lr": 0.003932022023446712, "train_loss": 3.1175347202777863, "test_loss": 1.4293811715701048, "test_acc1": 66.272001925354, "test_acc5": 86.9460026461792, "epoch": 26, "n_parameters": 8797984} -{"train_lr": 0.003926509320761305, "train_loss": 3.0826920178413393, "test_loss": 1.4443110451102257, "test_acc1": 66.39400200775147, "test_acc5": 87.01600251861572, "epoch": 27, "n_parameters": 8797984} -{"train_lr": 0.003920785902925497, "train_loss": 3.0859243623256685, "test_loss": 1.3747015981113209, "test_acc1": 67.45600226409913, "test_acc5": 87.69400248870849, "epoch": 28, "n_parameters": 8797984} -{"train_lr": 0.003914852397576493, "train_loss": 3.070430847144127, "test_loss": 1.4136551973574303, "test_acc1": 66.74800167266845, "test_acc5": 87.17800267456055, "epoch": 29, "n_parameters": 8797984} -{"train_lr": 0.003908709455390004, "train_loss": 3.071654429960251, "test_loss": 1.3816218043074888, "test_acc1": 67.58400212219239, "test_acc5": 87.59000260498047, "epoch": 30, "n_parameters": 8797984} -{"train_lr": 0.0039023577500088094, "train_loss": 3.0395385040998457, "test_loss": 1.3869953510515831, "test_acc1": 67.01600195526123, "test_acc5": 87.36000266906738, "epoch": 31, "n_parameters": 8797984} -{"train_lr": 0.0038957979779690624, "train_loss": 3.0342903151273726, "test_loss": 1.3975096112665009, "test_acc1": 66.97200183685302, "test_acc5": 87.51200250457764, "epoch": 32, "n_parameters": 8797984} -{"train_lr": 0.003889030858623732, "train_loss": 3.029416131210327, "test_loss": 1.3551185253788443, "test_acc1": 67.41400198150635, "test_acc5": 87.95800298156739, "epoch": 33, "n_parameters": 8797984} -{"train_lr": 0.0038820571340636525, "train_loss": 
3.0156137319564817, "test_loss": 1.3945688215248726, "test_acc1": 67.41000186431884, "test_acc5": 87.65600232635498, "epoch": 34, "n_parameters": 8797984} -{"train_lr": 0.0038748775690362956, "train_loss": 3.0118677906036377, "test_loss": 1.3608289952663815, "test_acc1": 67.4140022201538, "test_acc5": 87.77600243804932, "epoch": 35, "n_parameters": 8797984} -{"train_lr": 0.0038674929508619614, "train_loss": 3.001497319817543, "test_loss": 1.3310309517032959, "test_acc1": 68.27600195617676, "test_acc5": 88.0160028930664, "epoch": 36, "n_parameters": 8797984} -{"train_lr": 0.003859904089347072, "train_loss": 2.9912898592948913, "test_loss": 1.4092323026236366, "test_acc1": 67.38000186889649, "test_acc5": 87.64200279418945, "epoch": 37, "n_parameters": 8797984} -{"train_lr": 0.003852111816695943, "train_loss": 2.9847582148075102, "test_loss": 1.3973107859492302, "test_acc1": 67.66600192932128, "test_acc5": 87.58600251922607, "epoch": 38, "n_parameters": 8797984} -{"train_lr": 0.0038441169874190843, "train_loss": 2.9684127049684523, "test_loss": 1.3588794829214321, "test_acc1": 68.36000190460206, "test_acc5": 88.27400260559082, "epoch": 39, "n_parameters": 8797984} -{"train_lr": 0.0038359204782394862, "train_loss": 2.9572065189123156, "test_loss": 1.3358681052923203, "test_acc1": 68.43400187927246, "test_acc5": 88.16000233459472, "epoch": 40, "n_parameters": 8797984} -{"train_lr": 0.0038275231879969967, "train_loss": 2.9526279399633406, "test_loss": 1.362792825435891, "test_acc1": 67.84000201141357, "test_acc5": 87.870002449646, "epoch": 41, "n_parameters": 8797984} -{"train_lr": 0.003818926037548892, "train_loss": 2.946421145272255, "test_loss": 1.3377828624318628, "test_acc1": 68.72800195800781, "test_acc5": 88.20600218688965, "epoch": 42, "n_parameters": 8797984} -{"train_lr": 0.0038101299696697475, "train_loss": 2.939489113783836, "test_loss": 1.3185578843249994, "test_acc1": 68.9740023687744, "test_acc5": 88.53400233093262, "epoch": 43, "n_parameters": 8797984} -{"train_lr": 0.003801135948947377, "train_loss": 2.932461163496971, "test_loss": 1.3644730150699615, "test_acc1": 67.91400192260743, "test_acc5": 88.21200228149414, "epoch": 44, "n_parameters": 8797984} -{"train_lr": 0.003791944961677627, "train_loss": 2.920219123959541, "test_loss": 1.3339535747380817, "test_acc1": 68.61400188049316, "test_acc5": 88.41200282592773, "epoch": 45, "n_parameters": 8797984} -{"train_lr": 0.0037825580157558134, "train_loss": 2.9141055622577667, "test_loss": 1.339244392426575, "test_acc1": 68.51800172271729, "test_acc5": 88.37800257171631, "epoch": 46, "n_parameters": 8797984} -{"train_lr": 0.003772976140566265, "train_loss": 2.912625495147705, "test_loss": 1.3835193558650858, "test_acc1": 68.35200188903809, "test_acc5": 88.2400023815918, "epoch": 47, "n_parameters": 8797984} -{"train_lr": 0.0037632003868696556, "train_loss": 2.9120515676498413, "test_loss": 1.3014568094821537, "test_acc1": 68.88000213562012, "test_acc5": 88.71000263092041, "epoch": 48, "n_parameters": 8797984} -{"train_lr": 0.003753231826687486, "train_loss": 2.9055953983306884, "test_loss": 1.2896822445532854, "test_acc1": 69.36000226745605, "test_acc5": 88.88200253540039, "epoch": 49, "n_parameters": 8797984} -{"train_lr": 0.0037430715531847655, "train_loss": 2.9017197492837905, "test_loss": 1.3532365214298754, "test_acc1": 68.51200224456787, "test_acc5": 88.11600274749756, "epoch": 50, "n_parameters": 8797984} -{"train_lr": 0.003732720680549938, "train_loss": 2.887098063158989, "test_loss": 1.248783257077722, "test_acc1": 
70.02200201263427, "test_acc5": 89.49800240509033, "epoch": 51, "n_parameters": 8797984} -{"train_lr": 0.0037221803438729105, "train_loss": 2.886287173962593, "test_loss": 1.341391615131322, "test_acc1": 68.3200021282959, "test_acc5": 88.23800237915039, "epoch": 52, "n_parameters": 8797984} -{"train_lr": 0.003711451699020238, "train_loss": 2.8778722555875778, "test_loss": 1.2912216918433415, "test_acc1": 69.34000213867188, "test_acc5": 89.02400251861572, "epoch": 53, "n_parameters": 8797984} -{"train_lr": 0.0037005359225088163, "train_loss": 2.8664099135398864, "test_loss": 1.3160417921402876, "test_acc1": 68.82800211303712, "test_acc5": 88.71800271697998, "epoch": 54, "n_parameters": 8797984} -{"train_lr": 0.0036894342113765284, "train_loss": 2.867848265337944, "test_loss": 1.2617822736501694, "test_acc1": 69.83600241027833, "test_acc5": 89.13200271453857, "epoch": 55, "n_parameters": 8797984} -{"train_lr": 0.0036781477830511327, "train_loss": 2.8616182975530626, "test_loss": 1.269898412420469, "test_acc1": 69.49600209747314, "test_acc5": 88.94400264709472, "epoch": 56, "n_parameters": 8797984} -{"train_lr": 0.00366667787521664, "train_loss": 2.8544674674510957, "test_loss": 1.298689098919139, "test_acc1": 69.23400203277588, "test_acc5": 88.89800276702881, "epoch": 57, "n_parameters": 8797984} -{"train_lr": 0.0036550257456777722, "train_loss": 2.845240908408165, "test_loss": 1.27922344733687, "test_acc1": 69.51400236114502, "test_acc5": 88.99400228424072, "epoch": 58, "n_parameters": 8797984} -{"train_lr": 0.003643192672221756, "train_loss": 2.845465776991844, "test_loss": 1.2437541191192234, "test_acc1": 70.04400222564698, "test_acc5": 89.39400258270264, "epoch": 59, "n_parameters": 8797984} -{"train_lr": 0.0036311799524784525, "train_loss": 2.8319218487262727, "test_loss": 1.2790250896530992, "test_acc1": 69.8200020098877, "test_acc5": 89.05200277801514, "epoch": 60, "n_parameters": 8797984} -{"train_lr": 0.0036189889037780316, "train_loss": 2.8434263391017915, "test_loss": 1.3115389355841804, "test_acc1": 69.31000227508545, "test_acc5": 88.85400309570312, "epoch": 61, "n_parameters": 8797984} -{"train_lr": 0.0036066208630062997, "train_loss": 2.8375775569200514, "test_loss": 1.2587082337807207, "test_acc1": 69.9120022253418, "test_acc5": 89.27600268676758, "epoch": 62, "n_parameters": 8797984} -{"train_lr": 0.003594077186458248, "train_loss": 2.8348506576538086, "test_loss": 1.2881953330600964, "test_acc1": 69.38000229980469, "test_acc5": 88.8980025326538, "epoch": 63, "n_parameters": 8797984} -{"train_lr": 0.0035813592496895586, "train_loss": 2.828078900194168, "test_loss": 1.2837346073459177, "test_acc1": 69.57000180114746, "test_acc5": 89.0980027331543, "epoch": 64, "n_parameters": 8797984} -{"train_lr": 0.003568468447365067, "train_loss": 2.8174985458612443, "test_loss": 1.2287515818196184, "test_acc1": 70.38800253417969, "test_acc5": 89.47600261413574, "epoch": 65, "n_parameters": 8797984} -{"train_lr": 0.003555406193106677, "train_loss": 2.8100489403009417, "test_loss": 1.241696145166369, "test_acc1": 70.15200214080811, "test_acc5": 89.27000265777588, "epoch": 66, "n_parameters": 8797984} -{"train_lr": 0.0035421739193377214, "train_loss": 2.8115002183437348, "test_loss": 1.2846885275314837, "test_acc1": 69.21600216064454, "test_acc5": 89.13200248596192, "epoch": 67, "n_parameters": 8797984} -{"train_lr": 0.0035287730771260805, "train_loss": 2.7995157326221465, "test_loss": 1.2948848105528776, "test_acc1": 69.36200205108642, "test_acc5": 88.92400281402588, "epoch": 68, 
"n_parameters": 8797984} -{"train_lr": 0.0035152051360252245, "train_loss": 2.8100510515451433, "test_loss": 1.2249338999390602, "test_acc1": 70.29600224090576, "test_acc5": 89.70400255218506, "epoch": 69, "n_parameters": 8797984} -{"train_lr": 0.003501471583912782, "train_loss": 2.820155364871025, "test_loss": 1.2892859100418932, "test_acc1": 69.70000228485108, "test_acc5": 88.91000254608154, "epoch": 70, "n_parameters": 8797984} -{"train_lr": 0.0034875739268273947, "train_loss": 2.784951093792915, "test_loss": 1.2749497829114689, "test_acc1": 69.92600236694337, "test_acc5": 89.39600252624511, "epoch": 71, "n_parameters": 8797984} -{"train_lr": 0.0034735136888038418, "train_loss": 2.7923724996089936, "test_loss": 1.248139856492772, "test_acc1": 70.27400223358154, "test_acc5": 89.39800245483399, "epoch": 72, "n_parameters": 8797984} -{"train_lr": 0.003459292411705684, "train_loss": 2.7897125442028043, "test_loss": 1.207502152551623, "test_acc1": 70.81400249206543, "test_acc5": 89.88400219787597, "epoch": 73, "n_parameters": 8797984} -{"train_lr": 0.0034449116550562854, "train_loss": 2.788037047433853, "test_loss": 1.218254751580603, "test_acc1": 70.6940020892334, "test_acc5": 89.70600261077881, "epoch": 74, "n_parameters": 8797984} -{"train_lr": 0.0034303729958673978, "train_loss": 2.7782343257427216, "test_loss": 1.2423176923218895, "test_acc1": 70.02800225463868, "test_acc5": 89.64800240722656, "epoch": 75, "n_parameters": 8797984} -{"train_lr": 0.003415678028467135, "train_loss": 2.788100484633446, "test_loss": 1.2275819296345991, "test_acc1": 70.16800228729248, "test_acc5": 89.54800255157471, "epoch": 76, "n_parameters": 8797984} -{"train_lr": 0.0034008283643241475, "train_loss": 2.768639791059494, "test_loss": 1.2081144942956812, "test_acc1": 71.02400243896484, "test_acc5": 90.05800249084473, "epoch": 77, "n_parameters": 8797984} -{"train_lr": 0.003385825631871496, "train_loss": 2.7781463117361067, "test_loss": 1.2720260786659576, "test_acc1": 69.62200228759765, "test_acc5": 89.3220024472046, "epoch": 78, "n_parameters": 8797984} -{"train_lr": 0.0033706714763277455, "train_loss": 2.7670633871793746, "test_loss": 1.22465060125379, "test_acc1": 70.566002578125, "test_acc5": 89.71600239837646, "epoch": 79, "n_parameters": 8797984} -{"train_lr": 0.003355367559516879, "train_loss": 2.759076997947693, "test_loss": 1.2177014885579838, "test_acc1": 70.64800217926026, "test_acc5": 89.83600305725098, "epoch": 80, "n_parameters": 8797984} -{"train_lr": 0.003339915559685877, "train_loss": 2.7621600836515428, "test_loss": 1.1790425729225664, "test_acc1": 71.20800218780518, "test_acc5": 90.3020026663208, "epoch": 81, "n_parameters": 8797984} -{"train_lr": 0.003324317171320666, "train_loss": 2.7612019165992736, "test_loss": 1.2239548349205185, "test_acc1": 70.80600252532959, "test_acc5": 89.8480028201294, "epoch": 82, "n_parameters": 8797984} -{"train_lr": 0.0033085741049602795, "train_loss": 2.7644670932531357, "test_loss": 1.2069209608085014, "test_acc1": 71.1120021560669, "test_acc5": 89.99000239318848, "epoch": 83, "n_parameters": 8797984} -{"train_lr": 0.0032926880870092624, "train_loss": 2.7413184492826463, "test_loss": 1.2010729431229479, "test_acc1": 71.19800228607178, "test_acc5": 90.08600257781983, "epoch": 84, "n_parameters": 8797984} -{"train_lr": 0.003276660859548651, "train_loss": 2.7493623904943467, "test_loss": 1.1713051993180723, "test_acc1": 71.45000272674561, "test_acc5": 90.35200231781006, "epoch": 85, "n_parameters": 8797984} -{"train_lr": 0.0032604941801444055, "train_loss": 
2.747970477247238, "test_loss": 1.1570177805774353, "test_acc1": 71.78600234466553, "test_acc5": 90.45600253601074, "epoch": 86, "n_parameters": 8797984} -{"train_lr": 0.003244189821655263, "train_loss": 2.7332126109361647, "test_loss": 1.2080212568535524, "test_acc1": 71.15800213287353, "test_acc5": 90.19000213348389, "epoch": 87, "n_parameters": 8797984} -{"train_lr": 0.003227749572037655, "train_loss": 2.741607307767868, "test_loss": 1.2543326966902788, "test_acc1": 70.56200241088867, "test_acc5": 89.51200256378173, "epoch": 88, "n_parameters": 8797984} -{"train_lr": 0.0032111752341504192, "train_loss": 2.729074546456337, "test_loss": 1.201183251598302, "test_acc1": 70.84000237579346, "test_acc5": 90.09200227416993, "epoch": 89, "n_parameters": 8797984} -{"train_lr": 0.003194468625556447, "train_loss": 2.7312028856992723, "test_loss": 1.1801188535550062, "test_acc1": 71.42800215789795, "test_acc5": 90.0860025527954, "epoch": 90, "n_parameters": 8797984} -{"train_lr": 0.0031776315783234484, "train_loss": 2.71285434384346, "test_loss": 1.1827624256120008, "test_acc1": 71.5480023034668, "test_acc5": 90.33800237121582, "epoch": 91, "n_parameters": 8797984} -{"train_lr": 0.0031606659388236785, "train_loss": 2.7280399065732954, "test_loss": 1.184212037745644, "test_acc1": 71.59600251708984, "test_acc5": 90.21600265594482, "epoch": 92, "n_parameters": 8797984} -{"train_lr": 0.003143573567530467, "train_loss": 2.7254566715478896, "test_loss": 1.1720233945285572, "test_acc1": 71.53200233337402, "test_acc5": 90.55400264221191, "epoch": 93, "n_parameters": 8797984} -{"train_lr": 0.003126356338815038, "train_loss": 2.71612527487278, "test_loss": 1.1871624396127813, "test_acc1": 71.67200212402344, "test_acc5": 90.22200255249024, "epoch": 94, "n_parameters": 8797984} -{"train_lr": 0.0031090161407405044, "train_loss": 2.7116607195854185, "test_loss": 1.1817068727139164, "test_acc1": 71.65200252746583, "test_acc5": 90.15000244140624, "epoch": 95, "n_parameters": 8797984} -{"train_lr": 0.003091554874854953, "train_loss": 2.7118859815835954, "test_loss": 1.1626549814553822, "test_acc1": 72.14200267089844, "test_acc5": 90.46400244598388, "epoch": 96, "n_parameters": 8797984} -{"train_lr": 0.0030739744559831164, "train_loss": 2.704840909266472, "test_loss": 1.1576794850475647, "test_acc1": 71.9420025656128, "test_acc5": 90.51400230133056, "epoch": 97, "n_parameters": 8797984} -{"train_lr": 0.0030562768120159086, "train_loss": 2.697796268439293, "test_loss": 1.1532227112089886, "test_acc1": 71.90800251464844, "test_acc5": 90.52600242675781, "epoch": 98, "n_parameters": 8797984} -{"train_lr": 0.0030384638836993723, "train_loss": 2.7056281283378603, "test_loss": 1.2423078027718208, "test_acc1": 70.27400271850586, "test_acc5": 89.78800257202148, "epoch": 99, "n_parameters": 8797984} -{"train_lr": 0.003020537624421951, "train_loss": 2.701355481362343, "test_loss": 1.180541687809369, "test_acc1": 71.74200269500733, "test_acc5": 90.26600255767822, "epoch": 100, "n_parameters": 8797984} -{"train_lr": 0.0030025000000000156, "train_loss": 2.6943001904726027, "test_loss": 1.1635832554277252, "test_acc1": 71.95600234802247, "test_acc5": 90.61000233184815, "epoch": 101, "n_parameters": 8797984} -{"train_lr": 0.0029843529884621693, "train_loss": 2.688375810170174, "test_loss": 1.2144191010909922, "test_acc1": 71.20800214233398, "test_acc5": 89.96800241271973, "epoch": 102, "n_parameters": 8797984} -{"train_lr": 0.0029660985798329416, "train_loss": 2.6816513046979904, "test_loss": 1.1574835577870117, "test_acc1": 
72.40600229003907, "test_acc5": 90.58800276489258, "epoch": 103, "n_parameters": 8797984} -{"train_lr": 0.0029477387759137964, "train_loss": 2.6801275798082353, "test_loss": 1.1414532924399656, "test_acc1": 71.94000227508545, "test_acc5": 90.75000277893066, "epoch": 104, "n_parameters": 8797984} -{"train_lr": 0.002929275590064108, "train_loss": 2.675430798578262, "test_loss": 1.190634776125936, "test_acc1": 71.89600250061035, "test_acc5": 90.43200245727539, "epoch": 105, "n_parameters": 8797984} -{"train_lr": 0.002910711046980378, "train_loss": 2.6740477435588836, "test_loss": 1.14639713659006, "test_acc1": 72.10400207336426, "test_acc5": 90.59800263061524, "epoch": 106, "n_parameters": 8797984} -{"train_lr": 0.0028920471824738832, "train_loss": 2.6624251825332643, "test_loss": 1.1428927279570524, "test_acc1": 72.38200233398437, "test_acc5": 90.70200256713868, "epoch": 107, "n_parameters": 8797984} -{"train_lr": 0.002873286043247822, "train_loss": 2.6679887310504915, "test_loss": 1.1687933083842783, "test_acc1": 72.25000253723144, "test_acc5": 90.59800268066407, "epoch": 108, "n_parameters": 8797984} -{"train_lr": 0.0028544296866723304, "train_loss": 2.666775965476036, "test_loss": 1.2000315855531132, "test_acc1": 71.50600267730712, "test_acc5": 90.27400247772216, "epoch": 109, "n_parameters": 8797984} -{"train_lr": 0.0028354801805594724, "train_loss": 2.6659722348451615, "test_loss": 1.1696274122771095, "test_acc1": 72.01800223052979, "test_acc5": 90.45600289611816, "epoch": 110, "n_parameters": 8797984} -{"train_lr": 0.002816439602936208, "train_loss": 2.671028081417084, "test_loss": 1.1433361492612784, "test_acc1": 72.40400234344483, "test_acc5": 90.80600260803223, "epoch": 111, "n_parameters": 8797984} -{"train_lr": 0.002797310041816382, "train_loss": 2.6631252997636796, "test_loss": 1.1770663011599989, "test_acc1": 71.9980025064087, "test_acc5": 90.55200247131347, "epoch": 112, "n_parameters": 8797984} -{"train_lr": 0.002778093594971943, "train_loss": 2.6641556940078734, "test_loss": 1.1713457545813393, "test_acc1": 71.85600226135254, "test_acc5": 90.52000273376464, "epoch": 113, "n_parameters": 8797984} -{"train_lr": 0.0027587923697028264, "train_loss": 2.6426822751760484, "test_loss": 1.145811927230919, "test_acc1": 72.15000257080078, "test_acc5": 90.64000213195801, "epoch": 114, "n_parameters": 8797984} -{"train_lr": 0.002739408482605956, "train_loss": 2.645306040096283, "test_loss": 1.1599642033962643, "test_acc1": 72.05800238861084, "test_acc5": 90.68800268341064, "epoch": 115, "n_parameters": 8797984} -{"train_lr": 0.0027199440593428507, "train_loss": 2.648820380425453, "test_loss": 1.2218674728099037, "test_acc1": 71.48800235321045, "test_acc5": 90.24200256317138, "epoch": 116, "n_parameters": 8797984} -{"train_lr": 0.0027004012344070075, "train_loss": 2.64360742559433, "test_loss": 1.1377997974700786, "test_acc1": 72.41600265319825, "test_acc5": 90.80800253631591, "epoch": 117, "n_parameters": 8797984} -{"train_lr": 0.0026807821508893883, "train_loss": 2.6433754105091096, "test_loss": 1.1009051598170225, "test_acc1": 73.23800241058349, "test_acc5": 91.28000245452881, "epoch": 118, "n_parameters": 8797984} -{"train_lr": 0.0026610889602434354, "train_loss": 2.6423984412908554, "test_loss": 1.1152473556644775, "test_acc1": 72.93000214660644, "test_acc5": 91.13000234558106, "epoch": 119, "n_parameters": 8797984} -{"train_lr": 0.0026413238220496715, "train_loss": 2.6352161314964295, "test_loss": 1.1404598822050236, "test_acc1": 72.47200254455566, "test_acc5": 90.75600270019531, 
"epoch": 120, "n_parameters": 8797984} -{"train_lr": 0.0026214889037780493, "train_loss": 2.628158800172806, "test_loss": 1.1227150191279018, "test_acc1": 72.96600257202148, "test_acc5": 91.05800236328125, "epoch": 121, "n_parameters": 8797984} -{"train_lr": 0.0026015863805508724, "train_loss": 2.6280778027534484, "test_loss": 1.160332514301819, "test_acc1": 72.09400242645263, "test_acc5": 90.64000265533447, "epoch": 122, "n_parameters": 8797984} -{"train_lr": 0.0025816184349041886, "train_loss": 2.6265902143478392, "test_loss": 1.1149536927833277, "test_acc1": 73.0880023071289, "test_acc5": 91.15400267272949, "epoch": 123, "n_parameters": 8797984} -{"train_lr": 0.0025615872565482966, "train_loss": 2.61643052110672, "test_loss": 1.0908466332099016, "test_acc1": 73.18400257324218, "test_acc5": 91.30600258666992, "epoch": 124, "n_parameters": 8797984} -{"train_lr": 0.0025414950421274317, "train_loss": 2.6189835176467895, "test_loss": 1.1093420303043198, "test_acc1": 73.2440025415039, "test_acc5": 91.17800242980957, "epoch": 125, "n_parameters": 8797984} -{"train_lr": 0.002521343994979551, "train_loss": 2.6126243747949602, "test_loss": 1.0996280130656326, "test_acc1": 73.29400256622314, "test_acc5": 91.30200281951905, "epoch": 126, "n_parameters": 8797984} -{"train_lr": 0.002501136324893901, "train_loss": 2.6103306737184524, "test_loss": 1.1181046029224115, "test_acc1": 72.57400220062256, "test_acc5": 91.05800243835449, "epoch": 127, "n_parameters": 8797984} -{"train_lr": 0.0024808742478692517, "train_loss": 2.6100456027984618, "test_loss": 1.1125953350873554, "test_acc1": 72.946002427063, "test_acc5": 91.19400262542725, "epoch": 128, "n_parameters": 8797984} -{"train_lr": 0.002460559985870747, "train_loss": 2.6113156700372695, "test_loss": 1.1055143212570864, "test_acc1": 73.02200246154786, "test_acc5": 91.24800244323731, "epoch": 129, "n_parameters": 8797984} -{"train_lr": 0.002440195766586069, "train_loss": 2.6054583287477495, "test_loss": 1.1241742429487847, "test_acc1": 72.69800235809326, "test_acc5": 90.7920028314209, "epoch": 130, "n_parameters": 8797984} -{"train_lr": 0.0024197838231814215, "train_loss": 2.596555658006668, "test_loss": 1.117617811788531, "test_acc1": 72.87400207824707, "test_acc5": 91.11800267089843, "epoch": 131, "n_parameters": 8797984} -{"train_lr": 0.002399326394056337, "train_loss": 2.5968192108154295, "test_loss": 1.1172638811609323, "test_acc1": 73.05600248718261, "test_acc5": 91.16600286193848, "epoch": 132, "n_parameters": 8797984} -{"train_lr": 0.0023788257225985116, "train_loss": 2.590200480270386, "test_loss": 1.102769396322615, "test_acc1": 73.04400256866455, "test_acc5": 91.18000230133056, "epoch": 133, "n_parameters": 8797984} -{"train_lr": 0.0023582840569375944, "train_loss": 2.5961897142887116, "test_loss": 1.0805326595025904, "test_acc1": 73.74400249084472, "test_acc5": 91.56400266113282, "epoch": 134, "n_parameters": 8797984} -{"train_lr": 0.002337703649698603, "train_loss": 2.586422337460518, "test_loss": 1.0961029533954227, "test_acc1": 73.2640025125122, "test_acc5": 91.33600261657715, "epoch": 135, "n_parameters": 8797984} -{"train_lr": 0.002317086757755268, "train_loss": 2.586640829205513, "test_loss": 1.089021375074106, "test_acc1": 73.4840024182129, "test_acc5": 91.2900024597168, "epoch": 136, "n_parameters": 8797984} -{"train_lr": 0.002296435641982043, "train_loss": 2.5776752308368684, "test_loss": 1.095254610566532, "test_acc1": 73.2580025390625, "test_acc5": 91.26600265319824, "epoch": 137, "n_parameters": 8797984} -{"train_lr": 
0.0022757525670064538, "train_loss": 2.587668113732338, "test_loss": 1.0926807286108242, "test_acc1": 73.19600224914551, "test_acc5": 91.20600267028809, "epoch": 138, "n_parameters": 8797984} -{"train_lr": 0.0022550398009608037, "train_loss": 2.5823921482801437, "test_loss": 1.0994210039429806, "test_acc1": 73.50200245330811, "test_acc5": 91.42800252471923, "epoch": 139, "n_parameters": 8797984} -{"train_lr": 0.0022342996152332844, "train_loss": 2.568902717280388, "test_loss": 1.1217149509226574, "test_acc1": 73.16800241912841, "test_acc5": 91.17800271118163, "epoch": 140, "n_parameters": 8797984} -{"train_lr": 0.0022135342842189523, "train_loss": 2.570842918467522, "test_loss": 1.0796257150085533, "test_acc1": 73.69400244934081, "test_acc5": 91.57800280334473, "epoch": 141, "n_parameters": 8797984} -{"train_lr": 0.002192746085070428, "train_loss": 2.555863049006462, "test_loss": 1.053335444892154, "test_acc1": 73.98600255584716, "test_acc5": 91.61400255279541, "epoch": 142, "n_parameters": 8797984} -{"train_lr": 0.0021719372974480025, "train_loss": 2.5668394111156463, "test_loss": 1.0467725448748644, "test_acc1": 74.22200254150391, "test_acc5": 91.80800280639649, "epoch": 143, "n_parameters": 8797984} -{"train_lr": 0.0021511102032696337, "train_loss": 2.558166383600235, "test_loss": 1.0702526431311579, "test_acc1": 73.64200259155274, "test_acc5": 91.68000259948731, "epoch": 144, "n_parameters": 8797984} -{"train_lr": 0.0021302670864609768, "train_loss": 2.555858587312698, "test_loss": 1.0682730915791847, "test_acc1": 73.7800025479126, "test_acc5": 91.55000231903077, "epoch": 145, "n_parameters": 8797984} -{"train_lr": 0.0021094102327046753, "train_loss": 2.543506459712982, "test_loss": 1.0490015245535795, "test_acc1": 73.98400232452393, "test_acc5": 91.8380021395874, "epoch": 146, "n_parameters": 8797984} -{"train_lr": 0.0020885419291897665, "train_loss": 2.5527322645664214, "test_loss": 1.1120192426092483, "test_acc1": 73.06800240936279, "test_acc5": 91.07800266723633, "epoch": 147, "n_parameters": 8797984} -{"train_lr": 0.0020676644643608877, "train_loss": 2.5494332670450213, "test_loss": 1.0921611132867195, "test_acc1": 73.58800272735596, "test_acc5": 91.60000246520995, "epoch": 148, "n_parameters": 8797984} -{"train_lr": 0.0020467801276673257, "train_loss": 2.5386385632514954, "test_loss": 1.0591233924907797, "test_acc1": 74.26200242279053, "test_acc5": 91.72400295166015, "epoch": 149, "n_parameters": 8797984} -{"train_lr": 0.002025891209311914, "train_loss": 2.5420638280391694, "test_loss": 1.0488093598362278, "test_acc1": 74.28600253173828, "test_acc5": 91.61000273345947, "epoch": 150, "n_parameters": 8797984} -{"train_lr": 0.0020050000000000176, "train_loss": 2.536020321583748, "test_loss": 1.0809630295809578, "test_acc1": 73.62200251831055, "test_acc5": 91.34000227050781, "epoch": 151, "n_parameters": 8797984} -{"train_lr": 0.0019841087906880845, "train_loss": 2.5325328210115434, "test_loss": 1.1431410330183365, "test_acc1": 72.96400222137451, "test_acc5": 91.02800264495849, "epoch": 152, "n_parameters": 8797984} -{"train_lr": 0.0019632198723327173, "train_loss": 2.5220769458293915, "test_loss": 1.0765108496827238, "test_acc1": 74.04800274078369, "test_acc5": 91.47200277557373, "epoch": 153, "n_parameters": 8797984} -{"train_lr": 0.0019423355356391558, "train_loss": 2.5316455113649368, "test_loss": 1.0469674954519552, "test_acc1": 74.57800269104004, "test_acc5": 92.02400227874756, "epoch": 154, "n_parameters": 8797984} -{"train_lr": 0.001921458070810235, "train_loss": 
2.5266462695121765, "test_loss": 1.0241846516728401, "test_acc1": 74.45800265502929, "test_acc5": 92.02400246124267, "epoch": 155, "n_parameters": 8797984} -{"train_lr": 0.0019005897672953243, "train_loss": 2.512449283146858, "test_loss": 1.0451790171072763, "test_acc1": 74.2880023071289, "test_acc5": 92.10200230926513, "epoch": 156, "n_parameters": 8797984} -{"train_lr": 0.0018797329135390225, "train_loss": 2.514903481268883, "test_loss": 1.0297048006425886, "test_acc1": 74.75800242797851, "test_acc5": 92.16000271453858, "epoch": 157, "n_parameters": 8797984} -{"train_lr": 0.0018588897967303623, "train_loss": 2.5036504529953003, "test_loss": 1.083050288259983, "test_acc1": 74.14600235534668, "test_acc5": 91.64200273986816, "epoch": 158, "n_parameters": 8797984} -{"train_lr": 0.0018380627025520412, "train_loss": 2.518381821203232, "test_loss": 1.0541779236758457, "test_acc1": 74.36600230224609, "test_acc5": 91.92600243560791, "epoch": 159, "n_parameters": 8797984} -{"train_lr": 0.0018172539149295633, "train_loss": 2.499576439881325, "test_loss": 1.0180462162722559, "test_acc1": 74.55400264038086, "test_acc5": 92.36600283447265, "epoch": 160, "n_parameters": 8797984} -{"train_lr": 0.0017964657157810383, "train_loss": 2.4905323662757874, "test_loss": 1.038012313492158, "test_acc1": 74.46000276733399, "test_acc5": 92.118002449646, "epoch": 161, "n_parameters": 8797984} -{"train_lr": 0.001775700384766716, "train_loss": 2.4975560065507887, "test_loss": 1.0443587070878815, "test_acc1": 74.47400267578125, "test_acc5": 92.0180026083374, "epoch": 162, "n_parameters": 8797984} -{"train_lr": 0.0017549601990392034, "train_loss": 2.4918262166023255, "test_loss": 1.0292732095455421, "test_acc1": 74.74800245178223, "test_acc5": 92.0140023602295, "epoch": 163, "n_parameters": 8797984} -{"train_lr": 0.0017342474329935537, "train_loss": 2.487996785402298, "test_loss": 1.017134561258204, "test_acc1": 74.56000234649659, "test_acc5": 92.26000245544434, "epoch": 164, "n_parameters": 8797984} -{"train_lr": 0.0017135643580179704, "train_loss": 2.4902234442949296, "test_loss": 1.0173201081069077, "test_acc1": 75.03000253204345, "test_acc5": 92.17200259979248, "epoch": 165, "n_parameters": 8797984} -{"train_lr": 0.001692913242244742, "train_loss": 2.4748720911502837, "test_loss": 1.0190116337993567, "test_acc1": 74.98200262878417, "test_acc5": 92.18400236297607, "epoch": 166, "n_parameters": 8797984} -{"train_lr": 0.00167229635030139, "train_loss": 2.48164103243351, "test_loss": 1.0240504334078115, "test_acc1": 75.0880023046875, "test_acc5": 92.28400271362305, "epoch": 167, "n_parameters": 8797984} -{"train_lr": 0.0016517159430624528, "train_loss": 2.484630357503891, "test_loss": 1.0252053362920004, "test_acc1": 74.98000278045654, "test_acc5": 92.23400231506348, "epoch": 168, "n_parameters": 8797984} -{"train_lr": 0.0016311742774014707, "train_loss": 2.4699243451595305, "test_loss": 1.0142395689206964, "test_acc1": 74.9880023727417, "test_acc5": 92.25400219299317, "epoch": 169, "n_parameters": 8797984} -{"train_lr": 0.001610673605943664, "train_loss": 2.466286900305748, "test_loss": 1.020766309079002, "test_acc1": 74.74200230529785, "test_acc5": 92.12600226074218, "epoch": 170, "n_parameters": 8797984} -{"train_lr": 0.001590216176818559, "train_loss": 2.4666355488061904, "test_loss": 0.9851069364915875, "test_acc1": 75.3880025088501, "test_acc5": 92.50400290740967, "epoch": 171, "n_parameters": 8797984} -{"train_lr": 0.0015698042334139058, "train_loss": 2.462085518026352, "test_loss": 1.0085265417309368, 
"test_acc1": 75.4060025177002, "test_acc5": 92.51000295135498, "epoch": 172, "n_parameters": 8797984} -{"train_lr": 0.0015494400141292598, "train_loss": 2.4514950835227967, "test_loss": 0.9977793719838647, "test_acc1": 75.51000249542237, "test_acc5": 92.4700028717041, "epoch": 173, "n_parameters": 8797984} -{"train_lr": 0.0015291257521307172, "train_loss": 2.456798539853096, "test_loss": 1.0165918535169434, "test_acc1": 75.03200249176025, "test_acc5": 92.38000261688232, "epoch": 174, "n_parameters": 8797984} -{"train_lr": 0.0015088636751061355, "train_loss": 2.451893878221512, "test_loss": 0.9816583572503399, "test_acc1": 75.6020023892212, "test_acc5": 92.55600249664306, "epoch": 175, "n_parameters": 8797984} -{"train_lr": 0.0014886560050204627, "train_loss": 2.444070977449417, "test_loss": 1.0006781514076626, "test_acc1": 75.24200288635254, "test_acc5": 92.42800250793456, "epoch": 176, "n_parameters": 8797984} -{"train_lr": 0.001468504957872541, "train_loss": 2.4464133898735048, "test_loss": 0.9744103160851142, "test_acc1": 75.96400259002685, "test_acc5": 92.76600232086182, "epoch": 177, "n_parameters": 8797984} -{"train_lr": 0.0014484127434517488, "train_loss": 2.441909277486801, "test_loss": 0.9679901119978989, "test_acc1": 76.0060026119995, "test_acc5": 92.73000274505615, "epoch": 178, "n_parameters": 8797984} -{"train_lr": 0.0014283815650957576, "train_loss": 2.4352708159446714, "test_loss": 1.0130440264063723, "test_acc1": 75.10400253631592, "test_acc5": 92.43000245178223, "epoch": 179, "n_parameters": 8797984} -{"train_lr": 0.001408413619449102, "train_loss": 2.4358827045202256, "test_loss": 0.9644462439943763, "test_acc1": 76.02600207183838, "test_acc5": 92.89200244628907, "epoch": 180, "n_parameters": 8797984} -{"train_lr": 0.001388511096221964, "train_loss": 2.4363844382047652, "test_loss": 1.0015734620392323, "test_acc1": 75.41200269012451, "test_acc5": 92.68400274139404, "epoch": 181, "n_parameters": 8797984} -{"train_lr": 0.0013686761779503403, "train_loss": 2.441321029353142, "test_loss": 0.9879213385283947, "test_acc1": 75.65600267456054, "test_acc5": 92.58000258972167, "epoch": 182, "n_parameters": 8797984} -{"train_lr": 0.0013489110397565372, "train_loss": 2.427705215644836, "test_loss": 0.9673464846085099, "test_acc1": 76.1500027508545, "test_acc5": 92.79600272277833, "epoch": 183, "n_parameters": 8797984} -{"train_lr": 0.0013292178491106572, "train_loss": 2.427727861785889, "test_loss": 0.9535637933980016, "test_acc1": 76.04400257843018, "test_acc5": 93.00000278961181, "epoch": 184, "n_parameters": 8797984} -{"train_lr": 0.001309598765592993, "train_loss": 2.4062647043704986, "test_loss": 0.9721839850878015, "test_acc1": 75.8920026123047, "test_acc5": 92.70400255828858, "epoch": 185, "n_parameters": 8797984} -{"train_lr": 0.0012900559406571204, "train_loss": 2.4162303750276566, "test_loss": 0.9481784484403974, "test_acc1": 76.43000288116455, "test_acc5": 92.99000270111084, "epoch": 186, "n_parameters": 8797984} -{"train_lr": 0.0012705915173940611, "train_loss": 2.4109337394952775, "test_loss": 0.9805574373287314, "test_acc1": 75.74200247924804, "test_acc5": 92.74800225250245, "epoch": 187, "n_parameters": 8797984} -{"train_lr": 0.0012512076302971713, "train_loss": 2.398541698026657, "test_loss": 0.963898550499888, "test_acc1": 76.23200245330811, "test_acc5": 92.88800242431641, "epoch": 188, "n_parameters": 8797984} -{"train_lr": 0.001231906405028045, "train_loss": 2.399469068264961, "test_loss": 0.9688862551222829, "test_acc1": 76.08200241943359, "test_acc5": 
92.69000274017334, "epoch": 189, "n_parameters": 8797984} -{"train_lr": 0.0012126899581836061, "train_loss": 2.4075324808359144, "test_loss": 0.9474115137229947, "test_acc1": 76.38800232910157, "test_acc5": 92.79400249420166, "epoch": 190, "n_parameters": 8797984} -{"train_lr": 0.0011935603970637625, "train_loss": 2.3916092777013778, "test_loss": 0.9492881149053574, "test_acc1": 76.42600261871338, "test_acc5": 93.1080024420166, "epoch": 191, "n_parameters": 8797984} -{"train_lr": 0.0011745198194405004, "train_loss": 2.3874968413591384, "test_loss": 0.9641284574480617, "test_acc1": 76.31200220184326, "test_acc5": 92.99400267211914, "epoch": 192, "n_parameters": 8797984} -{"train_lr": 0.0011555703133276894, "train_loss": 2.396844467306137, "test_loss": 0.947236714336802, "test_acc1": 76.40000271026611, "test_acc5": 93.0480022946167, "epoch": 193, "n_parameters": 8797984} -{"train_lr": 0.0011367139567521967, "train_loss": 2.3770384763002395, "test_loss": 0.9458958160351304, "test_acc1": 76.48400242767335, "test_acc5": 93.10600244110107, "epoch": 194, "n_parameters": 8797984} -{"train_lr": 0.0011179528175260622, "train_loss": 2.383848299789429, "test_loss": 0.9303207480732132, "test_acc1": 76.60400230865478, "test_acc5": 93.24000252716064, "epoch": 195, "n_parameters": 8797984} -{"train_lr": 0.0010992889530195922, "train_loss": 2.373356815481186, "test_loss": 0.9321163443519789, "test_acc1": 76.82400233825683, "test_acc5": 93.26600251159668, "epoch": 196, "n_parameters": 8797984} -{"train_lr": 0.0010807244099358875, "train_loss": 2.3649015534639357, "test_loss": 0.9443567662554628, "test_acc1": 76.66800250335693, "test_acc5": 93.09400215454102, "epoch": 197, "n_parameters": 8797984} -{"train_lr": 0.0010622612240862497, "train_loss": 2.3707919770717623, "test_loss": 0.9317112810471478, "test_acc1": 76.67400235198974, "test_acc5": 93.21000258361816, "epoch": 198, "n_parameters": 8797984} -{"train_lr": 0.001043901420167081, "train_loss": 2.3720986389398573, "test_loss": 0.9496082503567723, "test_acc1": 76.64400270202637, "test_acc5": 92.98600242553711, "epoch": 199, "n_parameters": 8797984} -{"train_lr": 0.001025647011537802, "train_loss": 2.3639592633008957, "test_loss": 0.9258417046683676, "test_acc1": 77.24000276947021, "test_acc5": 93.25800256317139, "epoch": 200, "n_parameters": 8797984} -{"train_lr": 0.0010075000000000067, "train_loss": 2.3624628940820696, "test_loss": 0.9191775541095173, "test_acc1": 77.15800255218505, "test_acc5": 93.24400252227784, "epoch": 201, "n_parameters": 8797984} -{"train_lr": 0.0009894623755779988, "train_loss": 2.3556620641469954, "test_loss": 0.9040371591553968, "test_acc1": 77.33000236114502, "test_acc5": 93.44800269042969, "epoch": 202, "n_parameters": 8797984} -{"train_lr": 0.0009715361163006195, "train_loss": 2.345656874871254, "test_loss": 0.9255383974489044, "test_acc1": 76.85600253662109, "test_acc5": 93.31800245239258, "epoch": 203, "n_parameters": 8797984} -{"train_lr": 0.000953723187984137, "train_loss": 2.3415804884672164, "test_loss": 0.9098310137496275, "test_acc1": 77.30400271820068, "test_acc5": 93.52200259796143, "epoch": 204, "n_parameters": 8797984} -{"train_lr": 0.0009360255440169043, "train_loss": 2.3425601803064344, "test_loss": 0.897557531209553, "test_acc1": 77.38200234405518, "test_acc5": 93.63800260864258, "epoch": 205, "n_parameters": 8797984} -{"train_lr": 0.0009184451251450147, "train_loss": 2.33097168366909, "test_loss": 0.9379922440823387, "test_acc1": 76.86400252990722, "test_acc5": 93.1000024194336, "epoch": 206, "n_parameters": 
8797984} -{"train_lr": 0.0009009838592595214, "train_loss": 2.351423911356926, "test_loss": 0.9104045042220283, "test_acc1": 77.42800238433838, "test_acc5": 93.58200268676758, "epoch": 207, "n_parameters": 8797984} -{"train_lr": 0.0008836436611849946, "train_loss": 2.340428762173653, "test_loss": 0.9059869793846327, "test_acc1": 77.25200280548096, "test_acc5": 93.56000256408691, "epoch": 208, "n_parameters": 8797984} -{"train_lr": 0.0008664264324695524, "train_loss": 2.326401999950409, "test_loss": 0.9087393450386384, "test_acc1": 77.39800263427735, "test_acc5": 93.44600243255616, "epoch": 209, "n_parameters": 8797984} -{"train_lr": 0.0008493340611763644, "train_loss": 2.3210451282024382, "test_loss": 0.9177266624482239, "test_acc1": 77.37400231109619, "test_acc5": 93.42200246276856, "epoch": 210, "n_parameters": 8797984} -{"train_lr": 0.0008323684216765116, "train_loss": 2.3240459307432175, "test_loss": 0.8875229676418445, "test_acc1": 77.89600263244628, "test_acc5": 93.77000227294921, "epoch": 211, "n_parameters": 8797984} -{"train_lr": 0.0008155313744436086, "train_loss": 2.322563945531845, "test_loss": 0.8893171561991468, "test_acc1": 77.6480026828003, "test_acc5": 93.68800281738281, "epoch": 212, "n_parameters": 8797984} -{"train_lr": 0.0007988247658495707, "train_loss": 2.3248341326475144, "test_loss": 0.8725866004824638, "test_acc1": 78.04000250793457, "test_acc5": 93.89600241271972, "epoch": 213, "n_parameters": 8797984} -{"train_lr": 0.0007822504279623159, "train_loss": 2.3101975283145904, "test_loss": 0.8785238285713336, "test_acc1": 78.01400315338135, "test_acc5": 93.88000241516113, "epoch": 214, "n_parameters": 8797984} -{"train_lr": 0.0007658101783447642, "train_loss": 2.310905122113228, "test_loss": 0.8844653427162591, "test_acc1": 77.69400247406006, "test_acc5": 93.74600234405517, "epoch": 215, "n_parameters": 8797984} -{"train_lr": 0.000749505819855568, "train_loss": 2.3085864757537844, "test_loss": 0.8937932154273286, "test_acc1": 77.81200252380371, "test_acc5": 93.69200228363037, "epoch": 216, "n_parameters": 8797984} -{"train_lr": 0.0007333391404513692, "train_loss": 2.2943482458114626, "test_loss": 0.8888056852361735, "test_acc1": 77.77400274353027, "test_acc5": 93.6740025439453, "epoch": 217, "n_parameters": 8797984} -{"train_lr": 0.0007173119129907244, "train_loss": 2.302545066332817, "test_loss": 0.8799607740605578, "test_acc1": 77.93000234527588, "test_acc5": 93.76600237701416, "epoch": 218, "n_parameters": 8797984} -{"train_lr": 0.0007014258950397421, "train_loss": 2.2938474058151246, "test_loss": 0.8790178128025111, "test_acc1": 78.01400243286133, "test_acc5": 93.71200281555176, "epoch": 219, "n_parameters": 8797984} -{"train_lr": 0.0006856828286793126, "train_loss": 2.28843698117733, "test_loss": 0.8735112821792855, "test_acc1": 78.08600261779785, "test_acc5": 93.90400247955323, "epoch": 220, "n_parameters": 8797984} -{"train_lr": 0.000670084440314078, "train_loss": 2.2775986280679703, "test_loss": 0.8625493604032433, "test_acc1": 78.34400277008056, "test_acc5": 93.95200241333008, "epoch": 221, "n_parameters": 8797984} -{"train_lr": 0.0006546324404830861, "train_loss": 2.27900623652935, "test_loss": 0.8689355127075139, "test_acc1": 78.19800257507325, "test_acc5": 93.97600260345459, "epoch": 222, "n_parameters": 8797984} -{"train_lr": 0.0006393285236722668, "train_loss": 2.2748034262895582, "test_loss": 0.863779600490542, "test_acc1": 78.24600231445312, "test_acc5": 94.02600272949219, "epoch": 223, "n_parameters": 8797984} -{"train_lr": 0.0006241743681285438, 
"train_loss": 2.2802551902294157, "test_loss": 0.8789933332625557, "test_acc1": 78.15200290100098, "test_acc5": 93.80400250091553, "epoch": 224, "n_parameters": 8797984} -{"train_lr": 0.0006091716356758274, "train_loss": 2.2702320793390274, "test_loss": 0.8613051513538641, "test_acc1": 78.44800268585205, "test_acc5": 93.98400248901368, "epoch": 225, "n_parameters": 8797984} -{"train_lr": 0.0005943219715328445, "train_loss": 2.269695570230484, "test_loss": 0.8454571134027313, "test_acc1": 78.52200255493165, "test_acc5": 94.03200260406494, "epoch": 226, "n_parameters": 8797984} -{"train_lr": 0.000579627004132555, "train_loss": 2.256294013094902, "test_loss": 0.858688931693049, "test_acc1": 78.30000262176513, "test_acc5": 94.06200267883301, "epoch": 227, "n_parameters": 8797984} -{"train_lr": 0.0005650883449437713, "train_loss": 2.2617056559324267, "test_loss": 0.8490574611898731, "test_acc1": 78.51200247711182, "test_acc5": 94.08800249969482, "epoch": 228, "n_parameters": 8797984} -{"train_lr": 0.0005507075882942857, "train_loss": 2.254766882801056, "test_loss": 0.8453282930833452, "test_acc1": 78.63000262207031, "test_acc5": 94.19000249786377, "epoch": 229, "n_parameters": 8797984} -{"train_lr": 0.0005364863111961281, "train_loss": 2.253105350160599, "test_loss": 0.8373398662490004, "test_acc1": 78.74400237365722, "test_acc5": 94.25200236907959, "epoch": 230, "n_parameters": 8797984} -{"train_lr": 0.0005224260731725992, "train_loss": 2.239468121433258, "test_loss": 0.8604607693850994, "test_acc1": 78.5740025970459, "test_acc5": 94.00000243988038, "epoch": 231, "n_parameters": 8797984} -{"train_lr": 0.0005085284160872295, "train_loss": 2.246300274682045, "test_loss": 0.8361279589726645, "test_acc1": 78.66000256072998, "test_acc5": 94.18400262054443, "epoch": 232, "n_parameters": 8797984} -{"train_lr": 0.0004947948639747458, "train_loss": 2.2396012403011323, "test_loss": 0.8361681921078878, "test_acc1": 79.12000252990723, "test_acc5": 94.27000230194092, "epoch": 233, "n_parameters": 8797984} -{"train_lr": 0.0004812269228738896, "train_loss": 2.232483561038971, "test_loss": 0.8198219191502122, "test_acc1": 79.27200229248047, "test_acc5": 94.27000237060547, "epoch": 234, "n_parameters": 8797984} -{"train_lr": 0.00046782608066229685, "train_loss": 2.2376874376535416, "test_loss": 0.8328948842690271, "test_acc1": 79.08600243164062, "test_acc5": 94.30000260345459, "epoch": 235, "n_parameters": 8797984} -{"train_lr": 0.00045459380689333805, "train_loss": 2.2338714203834535, "test_loss": 0.8282046385985964, "test_acc1": 79.05400235412597, "test_acc5": 94.33400250762939, "epoch": 236, "n_parameters": 8797984} -{"train_lr": 0.0004415315526349521, "train_loss": 2.224285921430588, "test_loss": 0.8412332339760136, "test_acc1": 78.85200238708497, "test_acc5": 94.19200261535644, "epoch": 237, "n_parameters": 8797984} -{"train_lr": 0.00042864075031049536, "train_loss": 2.2179994625806807, "test_loss": 0.824046637643786, "test_acc1": 79.12400250091552, "test_acc5": 94.41800279663086, "epoch": 238, "n_parameters": 8797984} -{"train_lr": 0.00041592281354172557, "train_loss": 2.229687078690529, "test_loss": 0.8317541554570198, "test_acc1": 78.97400246490479, "test_acc5": 94.28400223632812, "epoch": 239, "n_parameters": 8797984} -{"train_lr": 0.0004033791369937298, "train_loss": 2.2151035534620287, "test_loss": 0.8274886638802641, "test_acc1": 79.11400256042481, "test_acc5": 94.35600241424561, "epoch": 240, "n_parameters": 8797984} -{"train_lr": 0.00039101109622197687, "train_loss": 2.2032485479354857, 
"test_loss": 0.8185789076720967, "test_acc1": 79.31200242126465, "test_acc5": 94.39200267089844, "epoch": 241, "n_parameters": 8797984} -{"train_lr": 0.0003788200475215383, "train_loss": 2.20626659514904, "test_loss": 0.8117338026709416, "test_acc1": 79.47200223449707, "test_acc5": 94.45800217376708, "epoch": 242, "n_parameters": 8797984} -{"train_lr": 0.00036680732777826604, "train_loss": 2.204397435450554, "test_loss": 0.816496495376615, "test_acc1": 79.28400273101806, "test_acc5": 94.44200246124268, "epoch": 243, "n_parameters": 8797984} -{"train_lr": 0.0003549742543222451, "train_loss": 2.200350084710121, "test_loss": 0.8239806594655794, "test_acc1": 79.29000272277833, "test_acc5": 94.32800207550049, "epoch": 244, "n_parameters": 8797984} -{"train_lr": 0.00034332212478335543, "train_loss": 2.198197116661072, "test_loss": 0.812060030286803, "test_acc1": 79.35200249450683, "test_acc5": 94.54400254394531, "epoch": 245, "n_parameters": 8797984} -{"train_lr": 0.0003318522169488759, "train_loss": 2.2029127745628356, "test_loss": 0.8133631800027454, "test_acc1": 79.42600226776123, "test_acc5": 94.53200248199462, "epoch": 246, "n_parameters": 8797984} -{"train_lr": 0.00032056578862347564, "train_loss": 2.197422416305542, "test_loss": 0.8022520640755401, "test_acc1": 79.66400254974366, "test_acc5": 94.6920024951172, "epoch": 247, "n_parameters": 8797984} -{"train_lr": 0.0003094640774912099, "train_loss": 2.192817654132843, "test_loss": 0.8061204103424269, "test_acc1": 79.65000249847412, "test_acc5": 94.62000204437255, "epoch": 248, "n_parameters": 8797984} -{"train_lr": 0.0002985483009797872, "train_loss": 2.179452992272377, "test_loss": 0.8050729634568972, "test_acc1": 79.56000227020263, "test_acc5": 94.6260029876709, "epoch": 249, "n_parameters": 8797984} -{"train_lr": 0.00028781965612712917, "train_loss": 2.17367519800663, "test_loss": 0.799533420625855, "test_acc1": 79.62400273834228, "test_acc5": 94.654002684021, "epoch": 250, "n_parameters": 8797984} -{"train_lr": 0.00027727931945004304, "train_loss": 2.174968121910095, "test_loss": 0.8054483081488049, "test_acc1": 79.65600268829346, "test_acc5": 94.67600228363037, "epoch": 251, "n_parameters": 8797984} -{"train_lr": 0.00026692844681522316, "train_loss": 2.1794476939439775, "test_loss": 0.7917735605117153, "test_acc1": 79.80800224853516, "test_acc5": 94.69000260925293, "epoch": 252, "n_parameters": 8797984} -{"train_lr": 0.0002567681733124936, "train_loss": 2.1653352103471755, "test_loss": 0.7970319905263537, "test_acc1": 79.75200279754638, "test_acc5": 94.69200223236084, "epoch": 253, "n_parameters": 8797984} -{"train_lr": 0.00024679961313034334, "train_loss": 2.1663529244661333, "test_loss": 0.7930895379360985, "test_acc1": 79.85400246826173, "test_acc5": 94.7420020147705, "epoch": 254, "n_parameters": 8797984} -{"train_lr": 0.0002370238594337239, "train_loss": 2.157053434586525, "test_loss": 0.7847723134738558, "test_acc1": 80.00800232635498, "test_acc5": 94.74000256774903, "epoch": 255, "n_parameters": 8797984} -{"train_lr": 0.00022744198424420818, "train_loss": 2.1645934719085695, "test_loss": 0.7873001935727456, "test_acc1": 79.98200257415772, "test_acc5": 94.77200272399902, "epoch": 256, "n_parameters": 8797984} -{"train_lr": 0.00021805503832237022, "train_loss": 2.1672429827451705, "test_loss": 0.7874313266400028, "test_acc1": 80.07400241516113, "test_acc5": 94.89200225738526, "epoch": 257, "n_parameters": 8797984} -{"train_lr": 0.0002088640510526241, "train_loss": 2.1574661384820937, "test_loss": 0.7872424664742806, "test_acc1": 
80.05400258605957, "test_acc5": 94.74800261077881, "epoch": 258, "n_parameters": 8797984} -{"train_lr": 0.00019987003033028822, "train_loss": 2.1548511963129044, "test_loss": 0.7889032028615475, "test_acc1": 79.93600245147705, "test_acc5": 94.79800236297608, "epoch": 259, "n_parameters": 8797984} -{"train_lr": 0.00019107396245110126, "train_loss": 2.1519443960905074, "test_loss": 0.7846254640642334, "test_acc1": 80.10600252685546, "test_acc5": 94.78400222503662, "epoch": 260, "n_parameters": 8797984} -{"train_lr": 0.00018247681200301023, "train_loss": 2.1545569406509397, "test_loss": 0.7763242184677545, "test_acc1": 80.23600276428223, "test_acc5": 94.93000257324219, "epoch": 261, "n_parameters": 8797984} -{"train_lr": 0.00017407952176045884, "train_loss": 2.148888859200478, "test_loss": 0.7795399455901455, "test_acc1": 80.06000260253906, "test_acc5": 94.85400249511719, "epoch": 262, "n_parameters": 8797984} -{"train_lr": 0.0001658830125809418, "train_loss": 2.14183042447567, "test_loss": 0.7816555002594695, "test_acc1": 80.02200232666016, "test_acc5": 94.91200269042969, "epoch": 263, "n_parameters": 8797984} -{"train_lr": 0.0001578881833040612, "train_loss": 2.1388972400188444, "test_loss": 0.7762076203875682, "test_acc1": 80.24600244903564, "test_acc5": 94.9180025592041, "epoch": 264, "n_parameters": 8797984} -{"train_lr": 0.00015009591065294023, "train_loss": 2.1415247331380844, "test_loss": 0.7751467543489793, "test_acc1": 80.23400228118896, "test_acc5": 94.93200253967285, "epoch": 265, "n_parameters": 8797984} -{"train_lr": 0.00014250704913808254, "train_loss": 2.1371633058309554, "test_loss": 0.7773057537920335, "test_acc1": 80.17800253509522, "test_acc5": 94.92600259613037, "epoch": 266, "n_parameters": 8797984} -{"train_lr": 0.00013512243096367772, "train_loss": 2.1357727833032607, "test_loss": 0.7756684792830664, "test_acc1": 80.30600245697022, "test_acc5": 94.91400233367919, "epoch": 267, "n_parameters": 8797984} -{"train_lr": 0.00012794286593631978, "train_loss": 2.1343108714580534, "test_loss": 0.7719246438320946, "test_acc1": 80.39800276672364, "test_acc5": 94.98000262634277, "epoch": 268, "n_parameters": 8797984} -{"train_lr": 0.00012096914137622728, "train_loss": 2.135501954627037, "test_loss": 0.7672526621643234, "test_acc1": 80.39800272247315, "test_acc5": 94.98200253417968, "epoch": 269, "n_parameters": 8797984} -{"train_lr": 0.00011420202203087673, "train_loss": 2.1230517618894575, "test_loss": 0.7720202108954682, "test_acc1": 80.31000271697998, "test_acc5": 95.02200245056153, "epoch": 270, "n_parameters": 8797984} -{"train_lr": 0.00010764224999117014, "train_loss": 2.122815388584137, "test_loss": 0.7673828262178337, "test_acc1": 80.4140025946045, "test_acc5": 95.01800249176026, "epoch": 271, "n_parameters": 8797984} -{"train_lr": 0.00010129054461002765, "train_loss": 2.1326320339918134, "test_loss": 0.7642901467488092, "test_acc1": 80.54200263336182, "test_acc5": 95.03800239746094, "epoch": 272, "n_parameters": 8797984} -{"train_lr": 9.514760242352498e-05, "train_loss": 2.1147342514276506, "test_loss": 0.7640822459669674, "test_acc1": 80.42400254943847, "test_acc5": 95.06800238342285, "epoch": 273, "n_parameters": 8797984} -{"train_lr": 8.921409707449843e-05, "train_loss": 2.1288970571994783, "test_loss": 0.7678982006276355, "test_acc1": 80.27800255249024, "test_acc5": 95.06600254211426, "epoch": 274, "n_parameters": 8797984} -{"train_lr": 8.349067923867126e-05, "train_loss": 2.121355558872223, "test_loss": 0.7667576593949514, "test_acc1": 80.39800224792481, "test_acc5": 
95.046002578125, "epoch": 275, "n_parameters": 8797984} -{"train_lr": 7.797797655330979e-05, "train_loss": 2.112449050116539, "test_loss": 0.7650132957188522, "test_acc1": 80.50600266143799, "test_acc5": 95.05200294158935, "epoch": 276, "n_parameters": 8797984} -{"train_lr": 7.267659354838016e-05, "train_loss": 2.123628714966774, "test_loss": 0.7677471784546095, "test_acc1": 80.45600269744872, "test_acc5": 94.97000257873535, "epoch": 277, "n_parameters": 8797984} -{"train_lr": 6.758711158027577e-05, "train_loss": 2.1145796382188795, "test_loss": 0.7652344438521301, "test_acc1": 80.5120027609253, "test_acc5": 95.07400258148193, "epoch": 278, "n_parameters": 8797984} -{"train_lr": 6.27100887680448e-05, "train_loss": 2.1105474678993223, "test_loss": 0.7602299947072478, "test_acc1": 80.56800300994873, "test_acc5": 95.09600238067627, "epoch": 279, "n_parameters": 8797984} -{"train_lr": 5.8046059932199434e-05, "train_loss": 2.104593182229996, "test_loss": 0.7660377862698892, "test_acc1": 80.41200239562988, "test_acc5": 94.97600260070801, "epoch": 280, "n_parameters": 8797984} -{"train_lr": 5.359553653605823e-05, "train_loss": 2.1147620722055436, "test_loss": 0.7614464076126323, "test_acc1": 80.49400267791748, "test_acc5": 95.0880024810791, "epoch": 281, "n_parameters": 8797984} -{"train_lr": 4.9359006629664604e-05, "train_loss": 2.1046359468460083, "test_loss": 0.764514276648269, "test_acc1": 80.4640025302124, "test_acc5": 95.06200243591309, "epoch": 282, "n_parameters": 8797984} -{"train_lr": 4.533693479626563e-05, "train_loss": 2.118258895468712, "test_loss": 0.7594240018550087, "test_acc1": 80.60600263885497, "test_acc5": 95.13800246154786, "epoch": 283, "n_parameters": 8797984} -{"train_lr": 4.152976210136288e-05, "train_loss": 2.1059252162218094, "test_loss": 0.7614748504231957, "test_acc1": 80.5160024710083, "test_acc5": 95.03800253082275, "epoch": 284, "n_parameters": 8797984} -{"train_lr": 3.793790604434225e-05, "train_loss": 2.111562433934212, "test_loss": 0.7579311084221391, "test_acc1": 80.61200200836181, "test_acc5": 95.1360027154541, "epoch": 285, "n_parameters": 8797984} -{"train_lr": 3.456176051270035e-05, "train_loss": 2.103490874147415, "test_loss": 0.7547027722877615, "test_acc1": 80.63000254974365, "test_acc5": 95.18000242523193, "epoch": 286, "n_parameters": 8797984} -{"train_lr": 3.14016957388384e-05, "train_loss": 2.1029143022060395, "test_loss": 0.7590424587183139, "test_acc1": 80.57000278625489, "test_acc5": 95.11400255645752, "epoch": 287, "n_parameters": 8797984} -{"train_lr": 2.845805825946997e-05, "train_loss": 2.096513543319702, "test_loss": 0.7568075639360091, "test_acc1": 80.61200254364013, "test_acc5": 95.12400296386718, "epoch": 288, "n_parameters": 8797984} -{"train_lr": 2.5731170877616888e-05, "train_loss": 2.097456101131439, "test_loss": 0.7558876937803101, "test_acc1": 80.60400241577149, "test_acc5": 95.12600268768311, "epoch": 289, "n_parameters": 8797984} -{"train_lr": 2.3221332627209112e-05, "train_loss": 2.090725235581398, "test_loss": 0.7567271007334485, "test_acc1": 80.64200250732422, "test_acc5": 95.12800257324218, "epoch": 290, "n_parameters": 8797984} -{"train_lr": 2.0928818740294644e-05, "train_loss": 2.091590406560898, "test_loss": 0.7610782391446478, "test_acc1": 80.55400254638671, "test_acc5": 95.10600255371094, "epoch": 291, "n_parameters": 8797984} -{"train_lr": 1.885388061685525e-05, "train_loss": 2.099250428366661, "test_loss": 0.7650780557271313, "test_acc1": 80.56400251525879, "test_acc5": 95.08600236114502, "epoch": 292, "n_parameters": 
8797984}
-{"train_lr": 1.6996745797238736e-05, "train_loss": 2.1022204178333284, "test_loss": 0.7565811863716911, "test_acc1": 80.63400277526856, "test_acc5": 95.12600290802001, "epoch": 293, "n_parameters": 8797984}
-{"train_lr": 1.5357617937206103e-05, "train_loss": 2.0967053665399553, "test_loss": 0.7545781394138056, "test_acc1": 80.61800257415771, "test_acc5": 95.10800268768311, "epoch": 294, "n_parameters": 8797984}
-{"train_lr": 1.393667678559817e-05, "train_loss": 2.0979882511854173, "test_loss": 0.7571322304361007, "test_acc1": 80.56800273895264, "test_acc5": 95.15800236114502, "epoch": 295, "n_parameters": 8797984}
-{"train_lr": 1.2734078164625274e-05, "train_loss": 2.1050916015386583, "test_loss": 0.7590663980911759, "test_acc1": 80.57400272491455, "test_acc5": 95.15200255096435, "epoch": 296, "n_parameters": 8797984}
-{"train_lr": 1.1749953952777368e-05, "train_loss": 2.110148558592796, "test_loss": 0.7544577877749415, "test_acc1": 80.68600262939454, "test_acc5": 95.15200259552002, "epoch": 297, "n_parameters": 8797984}
-{"train_lr": 1.0984412070365348e-05, "train_loss": 2.096974531507492, "test_loss": 0.757622724508538, "test_acc1": 80.63600231750489, "test_acc5": 95.14600269042968, "epoch": 298, "n_parameters": 8797984}
-{"train_lr": 1.0437536467683126e-05, "train_loss": 2.09555684530735, "test_loss": 0.7557909422937561, "test_acc1": 80.62000246765136, "test_acc5": 95.1220027267456, "epoch": 299, "n_parameters": 8797984}
diff --git a/cv/classification/repvit/pytorch/logs/repvit_m1_1_distill_450e.txt b/cv/classification/repvit/pytorch/logs/repvit_m1_1_distill_450e.txt
deleted file mode 100644
index a6c3dcff..00000000
--- a/cv/classification/repvit/pytorch/logs/repvit_m1_1_distill_450e.txt
+++ /dev/null
@@ -1,450 +0,0 @@
-{"train_lr": 1.000000000000014e-06, "train_loss": 7.00757974281311, "test_loss": 6.94837687997257, "test_acc1": 0.11200000808715821, "test_acc5": 0.5800000217199326, "epoch": 0, "n_parameters": 8802912}
-{"train_lr": 1.000000000000014e-06, "train_loss": 7.002298969078064, "test_loss": 6.933731990702012, "test_acc1": 0.12800000747680665, "test_acc5": 0.6260000270605087, "epoch": 1, "n_parameters": 8802912}
-{"train_lr": 0.0008007999999999933, "train_loss": 6.4456030395507815, "test_loss": 5.2014855286654305, "test_acc1": 8.166000214538574, "test_acc5": 22.088000678100585, "epoch": 2, "n_parameters": 8802912}
-{"train_lr": 0.0016005999999999787, "train_loss": 5.8110536100387575, "test_loss": 3.9179715496652268, "test_acc1": 21.372000612335206, "test_acc5": 44.12200125213623, "epoch": 3, "n_parameters": 8802912}
-{"train_lr": 0.0024003999999999835, "train_loss": 5.199463818740845, "test_loss": 3.213488471858642, "test_acc1": 32.15400086578369, "test_acc5": 57.662001741638186, "epoch": 4, "n_parameters": 8802912}
-{"train_lr": 0.0032001999999999873, "train_loss": 4.761039460325241, "test_loss": 2.7896020973429962, "test_acc1": 39.3820011807251, "test_acc5": 65.21000186676025, "epoch": 5, "n_parameters": 8802912}
-{"train_lr": 0.003998784699903044, "train_loss": 4.45670155210495, "test_loss": 2.5229435466668186, "test_acc1": 44.5920013154602, "test_acc5": 70.0220021017456, "epoch": 6, "n_parameters": 8802912}
-{"train_lr": 0.0039982500460471835, "train_loss": 4.18971237077713, "test_loss": 2.30037355160012, "test_acc1": 48.70600115570068, "test_acc5": 73.94600262359619, "epoch": 7, "n_parameters": 8802912}
-{"train_lr": 0.003997618243996162, "train_loss": 3.993955112838745, "test_loss": 2.062746558995808, "test_acc1": 53.310001466064456, "test_acc5":
77.42800248016357, "epoch": 8, "n_parameters": 8802912} -{"train_lr": 0.003996889324543062, "train_loss": 3.850284407424927, "test_loss": 1.9726097049082028, "test_acc1": 54.99200151824951, "test_acc5": 78.8440025326538, "epoch": 9, "n_parameters": 8802912} -{"train_lr": 0.003996063323214417, "train_loss": 3.7400550690174104, "test_loss": 1.853277571937617, "test_acc1": 57.204001482238766, "test_acc5": 80.45600232635498, "epoch": 10, "n_parameters": 8802912} -{"train_lr": 0.003995140280268348, "train_loss": 3.6469556347846983, "test_loss": 1.8098409513340277, "test_acc1": 58.23600189239502, "test_acc5": 81.19400246643066, "epoch": 11, "n_parameters": 8802912} -{"train_lr": 0.003994120240692708, "train_loss": 3.583845655536652, "test_loss": 1.7339006916565054, "test_acc1": 59.624001550598145, "test_acc5": 82.40200255493164, "epoch": 12, "n_parameters": 8802912} -{"train_lr": 0.003993003254202742, "train_loss": 3.518181526517868, "test_loss": 1.7265122721300405, "test_acc1": 60.3520017880249, "test_acc5": 82.84200258087158, "epoch": 13, "n_parameters": 8802912} -{"train_lr": 0.0039917893752388625, "train_loss": 3.45453241314888, "test_loss": 1.6790055303889162, "test_acc1": 60.970001957397464, "test_acc5": 83.02400239837647, "epoch": 14, "n_parameters": 8802912} -{"train_lr": 0.003990478662963736, "train_loss": 3.4055942385196687, "test_loss": 1.6012991969199741, "test_acc1": 62.64400190612793, "test_acc5": 84.40800228790283, "epoch": 15, "n_parameters": 8802912} -{"train_lr": 0.003989071181259754, "train_loss": 3.369740376996994, "test_loss": 1.589534041197861, "test_acc1": 62.85400166564941, "test_acc5": 84.56200266723633, "epoch": 16, "n_parameters": 8802912} -{"train_lr": 0.0039875669987254015, "train_loss": 3.3330695972442625, "test_loss": 1.5464726178085102, "test_acc1": 63.82600168395996, "test_acc5": 85.23000285797119, "epoch": 17, "n_parameters": 8802912} -{"train_lr": 0.003985966188672574, "train_loss": 3.3230633949279786, "test_loss": 1.5190706748296232, "test_acc1": 63.95800186126709, "test_acc5": 85.49400236114502, "epoch": 18, "n_parameters": 8802912} -{"train_lr": 0.0039842688291223715, "train_loss": 3.26986241402626, "test_loss": 1.5116736963391304, "test_acc1": 64.61000198913574, "test_acc5": 85.75200263641358, "epoch": 19, "n_parameters": 8802912} -{"train_lr": 0.003982475002801825, "train_loss": 3.2447737285614013, "test_loss": 1.512159539934467, "test_acc1": 64.68800177215576, "test_acc5": 85.95000227233886, "epoch": 20, "n_parameters": 8802912} -{"train_lr": 0.003980584797139465, "train_loss": 3.2290869401454927, "test_loss": 1.532008578233859, "test_acc1": 64.70200183288574, "test_acc5": 85.67400277130128, "epoch": 21, "n_parameters": 8802912} -{"train_lr": 0.003978598304261148, "train_loss": 3.1858438288688657, "test_loss": 1.5212818498120588, "test_acc1": 64.81200184295655, "test_acc5": 85.75200249298096, "epoch": 22, "n_parameters": 8802912} -{"train_lr": 0.003976515620985842, "train_loss": 3.1710791205883027, "test_loss": 1.4879575704827028, "test_acc1": 65.3840017300415, "test_acc5": 86.20600263793945, "epoch": 23, "n_parameters": 8802912} -{"train_lr": 0.0039743368488206155, "train_loss": 3.158749589395523, "test_loss": 1.5097149312496185, "test_acc1": 64.7500020022583, "test_acc5": 85.85800253509521, "epoch": 24, "n_parameters": 8802912} -{"train_lr": 0.0039720620939556715, "train_loss": 3.1461593976020814, "test_loss": 1.455172301215284, "test_acc1": 66.00400188781738, "test_acc5": 86.64400255493165, "epoch": 25, "n_parameters": 8802912} -{"train_lr": 
0.003969691467259384, "train_loss": 3.1242377655506135, "test_loss": 1.4215183801510756, "test_acc1": 66.52800196594238, "test_acc5": 87.13200230926513, "epoch": 26, "n_parameters": 8802912} -{"train_lr": 0.003967225084272694, "train_loss": 3.0870540767908095, "test_loss": 1.3879302314975683, "test_acc1": 67.19000197723389, "test_acc5": 87.36200242126465, "epoch": 27, "n_parameters": 8802912} -{"train_lr": 0.003964663065203757, "train_loss": 3.0877624022722245, "test_loss": 1.4345473673413782, "test_acc1": 66.55800206207276, "test_acc5": 87.15800252441406, "epoch": 28, "n_parameters": 8802912} -{"train_lr": 0.003962005534921608, "train_loss": 3.0738050204753877, "test_loss": 1.3919297500568277, "test_acc1": 66.93800215301513, "test_acc5": 87.44400267700195, "epoch": 29, "n_parameters": 8802912} -{"train_lr": 0.003959252622950646, "train_loss": 3.074115948700905, "test_loss": 1.3871767227264011, "test_acc1": 67.1740021182251, "test_acc5": 87.40600320587158, "epoch": 30, "n_parameters": 8802912} -{"train_lr": 0.003956404463463954, "train_loss": 3.0395767622709275, "test_loss": 1.399406509802622, "test_acc1": 66.9980021774292, "test_acc5": 87.35800238739013, "epoch": 31, "n_parameters": 8802912} -{"train_lr": 0.003953461195276696, "train_loss": 3.0359605996847154, "test_loss": 1.4062637831358349, "test_acc1": 67.28400184112549, "test_acc5": 87.60000234527588, "epoch": 32, "n_parameters": 8802912} -{"train_lr": 0.003950422961839594, "train_loss": 3.0334449071645735, "test_loss": 1.3869044320548283, "test_acc1": 67.30000207000732, "test_acc5": 87.50400276397706, "epoch": 33, "n_parameters": 8802912} -{"train_lr": 0.00394728991123201, "train_loss": 3.0212832870721815, "test_loss": 1.3975171608959926, "test_acc1": 67.44200193359374, "test_acc5": 87.76600242370606, "epoch": 34, "n_parameters": 8802912} -{"train_lr": 0.003944062196154177, "train_loss": 3.011615329360962, "test_loss": 1.3201785021845032, "test_acc1": 68.49200202331544, "test_acc5": 88.46800249084473, "epoch": 35, "n_parameters": 8802912} -{"train_lr": 0.003940739973920592, "train_loss": 3.003369493365288, "test_loss": 1.3427800474797977, "test_acc1": 68.1320021975708, "test_acc5": 87.98000201324463, "epoch": 36, "n_parameters": 8802912} -{"train_lr": 0.003937323406451619, "train_loss": 2.992035495233536, "test_loss": 1.3538860683055485, "test_acc1": 67.9460018157959, "test_acc5": 88.09600287994385, "epoch": 37, "n_parameters": 8802912} -{"train_lr": 0.003933812660265883, "train_loss": 2.985352141666412, "test_loss": 1.3489182324093931, "test_acc1": 68.15400214202882, "test_acc5": 87.99800272277832, "epoch": 38, "n_parameters": 8802912} -{"train_lr": 0.003930207906472293, "train_loss": 2.9737408992290497, "test_loss": 1.3386975610080887, "test_acc1": 68.49600214355469, "test_acc5": 88.17800259643555, "epoch": 39, "n_parameters": 8802912} -{"train_lr": 0.003926509320761305, "train_loss": 2.958854973292351, "test_loss": 1.3849371084395576, "test_acc1": 67.67600179351807, "test_acc5": 87.87200262756348, "epoch": 40, "n_parameters": 8802912} -{"train_lr": 0.003922717083396902, "train_loss": 2.958423114347458, "test_loss": 1.3967782964601236, "test_acc1": 67.49200215667724, "test_acc5": 87.36600255798339, "epoch": 41, "n_parameters": 8802912} -{"train_lr": 0.003918831379207381, "train_loss": 2.9483619425058363, "test_loss": 1.3751029591349995, "test_acc1": 68.08400216705323, "test_acc5": 88.05600254638672, "epoch": 42, "n_parameters": 8802912} -{"train_lr": 0.003914852397576493, "train_loss": 2.9460404906988145, "test_loss": 
1.3325762437546955, "test_acc1": 68.58000184967041, "test_acc5": 88.35600241241455, "epoch": 43, "n_parameters": 8802912} -{"train_lr": 0.003910780332434081, "train_loss": 2.9356827599525452, "test_loss": 1.323133861317354, "test_acc1": 68.76800203613281, "test_acc5": 88.61800266662598, "epoch": 44, "n_parameters": 8802912} -{"train_lr": 0.003906615382246946, "train_loss": 2.927689355254173, "test_loss": 1.3322298583738945, "test_acc1": 68.27400215881347, "test_acc5": 88.31200257781983, "epoch": 45, "n_parameters": 8802912} -{"train_lr": 0.0039023577500088094, "train_loss": 2.914213638472557, "test_loss": 1.3642746118061684, "test_acc1": 68.16600203430175, "test_acc5": 87.97800246551513, "epoch": 46, "n_parameters": 8802912} -{"train_lr": 0.003898007643230756, "train_loss": 2.9190436950206755, "test_loss": 1.3619632203789318, "test_acc1": 68.08000231628418, "test_acc5": 88.27800243225097, "epoch": 47, "n_parameters": 8802912} -{"train_lr": 0.0038935652739308757, "train_loss": 2.9172109978199003, "test_loss": 1.3997363789993174, "test_acc1": 67.73600198638916, "test_acc5": 87.77000257110596, "epoch": 48, "n_parameters": 8802912} -{"train_lr": 0.003889030858623732, "train_loss": 2.9118568771123887, "test_loss": 1.328559831223067, "test_acc1": 68.63600172912598, "test_acc5": 88.33400218109131, "epoch": 49, "n_parameters": 8802912} -{"train_lr": 0.003884404618310635, "train_loss": 2.907359966492653, "test_loss": 1.3047574600752663, "test_acc1": 69.13600206115723, "test_acc5": 88.71600242065429, "epoch": 50, "n_parameters": 8802912} -{"train_lr": 0.0038796867784678503, "train_loss": 2.893872502374649, "test_loss": 1.2745483272215898, "test_acc1": 70.03200228546143, "test_acc5": 89.23800271179199, "epoch": 51, "n_parameters": 8802912} -{"train_lr": 0.0038748775690362956, "train_loss": 2.891839105272293, "test_loss": 1.3590129705912926, "test_acc1": 67.93600199279786, "test_acc5": 88.05400221954346, "epoch": 52, "n_parameters": 8802912} -{"train_lr": 0.0038699772244100415, "train_loss": 2.888479748249054, "test_loss": 1.3008417811463862, "test_acc1": 69.1700021899414, "test_acc5": 88.78600255584716, "epoch": 53, "n_parameters": 8802912} -{"train_lr": 0.003864985983424946, "train_loss": 2.8764600900411605, "test_loss": 1.367189863587127, "test_acc1": 68.49600216644288, "test_acc5": 88.23400250457763, "epoch": 54, "n_parameters": 8802912} -{"train_lr": 0.003859904089347072, "train_loss": 2.87621153550148, "test_loss": 1.2707928059732212, "test_acc1": 69.62800231750488, "test_acc5": 88.8960025805664, "epoch": 55, "n_parameters": 8802912} -{"train_lr": 0.0038547317898607334, "train_loss": 2.87452378411293, "test_loss": 1.362220043206916, "test_acc1": 68.41600197265625, "test_acc5": 88.42000240997315, "epoch": 56, "n_parameters": 8802912} -{"train_lr": 0.0038494693370565466, "train_loss": 2.8642717346668243, "test_loss": 1.2691086425500757, "test_acc1": 70.10000220367432, "test_acc5": 89.36200222564698, "epoch": 57, "n_parameters": 8802912} -{"train_lr": 0.0038441169874190843, "train_loss": 2.8563370568990707, "test_loss": 1.3219118490815163, "test_acc1": 69.0260023526001, "test_acc5": 88.74200253265381, "epoch": 58, "n_parameters": 8802912} -{"train_lr": 0.003838675001814183, "train_loss": 2.8593198444604875, "test_loss": 1.2517330278368557, "test_acc1": 69.91600218444825, "test_acc5": 89.29000235321045, "epoch": 59, "n_parameters": 8802912} -{"train_lr": 0.0038331436454766355, "train_loss": 2.8390841595411302, "test_loss": 1.3166714556076948, "test_acc1": 69.26000238677979, "test_acc5": 
88.69800219451905, "epoch": 60, "n_parameters": 8802912} -{"train_lr": 0.0038275231879969967, "train_loss": 2.8532621945381162, "test_loss": 1.2803420402547891, "test_acc1": 70.02600229248047, "test_acc5": 89.28800246826172, "epoch": 61, "n_parameters": 8802912} -{"train_lr": 0.00382181390330831, "train_loss": 2.8500570197343826, "test_loss": 1.2823673666400068, "test_acc1": 69.47400248840331, "test_acc5": 89.13800287872314, "epoch": 62, "n_parameters": 8802912} -{"train_lr": 0.00381601606967318, "train_loss": 2.847277815937996, "test_loss": 1.3367713667890604, "test_acc1": 68.9420020941162, "test_acc5": 88.46400250915528, "epoch": 63, "n_parameters": 8802912} -{"train_lr": 0.0038101299696697475, "train_loss": 2.838800849676132, "test_loss": 1.2881186464253593, "test_acc1": 69.38200219299317, "test_acc5": 89.10800263031005, "epoch": 64, "n_parameters": 8802912} -{"train_lr": 0.00380415589017823, "train_loss": 2.8299730539798738, "test_loss": 1.2424979488201, "test_acc1": 70.0260019543457, "test_acc5": 89.68200272827148, "epoch": 65, "n_parameters": 8802912} -{"train_lr": 0.0037980941223668303, "train_loss": 2.8220232197761534, "test_loss": 1.2861629207344616, "test_acc1": 69.23000219726562, "test_acc5": 89.03000251037598, "epoch": 66, "n_parameters": 8802912} -{"train_lr": 0.003791944961677627, "train_loss": 2.8225324197769166, "test_loss": 1.2920263988130234, "test_acc1": 69.64000186889649, "test_acc5": 89.15200270874024, "epoch": 67, "n_parameters": 8802912} -{"train_lr": 0.0037857087078119896, "train_loss": 2.8138319599866866, "test_loss": 1.2647533745450132, "test_acc1": 70.1720023864746, "test_acc5": 89.42800241821288, "epoch": 68, "n_parameters": 8802912} -{"train_lr": 0.003779385664716107, "train_loss": 2.8268703518152236, "test_loss": 1.2662970612154287, "test_acc1": 69.60800186798096, "test_acc5": 89.01800268035889, "epoch": 69, "n_parameters": 8802912} -{"train_lr": 0.003772976140566265, "train_loss": 2.834527232480049, "test_loss": 1.2999131162376965, "test_acc1": 69.10200221832275, "test_acc5": 89.0640024105835, "epoch": 70, "n_parameters": 8802912} -{"train_lr": 0.0037664804477535617, "train_loss": 2.8003930218696595, "test_loss": 1.2845862564795159, "test_acc1": 69.41200203918457, "test_acc5": 89.21000280944824, "epoch": 71, "n_parameters": 8802912} -{"train_lr": 0.003759898902868911, "train_loss": 2.809183050441742, "test_loss": 1.2868426187950022, "test_acc1": 69.50400257415771, "test_acc5": 89.04200268859863, "epoch": 72, "n_parameters": 8802912} -{"train_lr": 0.003753231826687486, "train_loss": 2.806111382174492, "test_loss": 1.2443238531841951, "test_acc1": 70.3360021826172, "test_acc5": 89.4760028201294, "epoch": 73, "n_parameters": 8802912} -{"train_lr": 0.0037464795441532936, "train_loss": 2.8081279205322267, "test_loss": 1.2164020810057135, "test_acc1": 70.75000210327148, "test_acc5": 89.73600234588623, "epoch": 74, "n_parameters": 8802912} -{"train_lr": 0.003739642384362937, "train_loss": 2.8001057704925536, "test_loss": 1.2539568999234367, "test_acc1": 69.76800243621827, "test_acc5": 89.31200266967774, "epoch": 75, "n_parameters": 8802912} -{"train_lr": 0.003732720680549938, "train_loss": 2.8041526014328, "test_loss": 1.238751567681046, "test_acc1": 70.20000210083008, "test_acc5": 89.46000230407715, "epoch": 76, "n_parameters": 8802912} -{"train_lr": 0.003725714770068486, "train_loss": 2.789421655893326, "test_loss": 1.3086714424631174, "test_acc1": 69.4200019229126, "test_acc5": 89.20800234558105, "epoch": 77, "n_parameters": 8802912} -{"train_lr": 
0.0037186249943766602, "train_loss": 2.798131973671913, "test_loss": 1.290232282351045, "test_acc1": 69.53200206878662, "test_acc5": 88.94200254974365, "epoch": 78, "n_parameters": 8802912} -{"train_lr": 0.003711451699020238, "train_loss": 2.7871337986469267, "test_loss": 1.223076876033755, "test_acc1": 70.90800209991455, "test_acc5": 89.86600255798339, "epoch": 79, "n_parameters": 8802912} -{"train_lr": 0.0037041952336154147, "train_loss": 2.778705654788017, "test_loss": 1.2650055039454908, "test_acc1": 69.81400222808838, "test_acc5": 89.35200251373291, "epoch": 80, "n_parameters": 8802912} -{"train_lr": 0.003696855951832067, "train_loss": 2.7836963824510574, "test_loss": 1.1932876355507795, "test_acc1": 70.97600226379394, "test_acc5": 90.04200239654541, "epoch": 81, "n_parameters": 8802912} -{"train_lr": 0.0036894342113765284, "train_loss": 2.787019657564163, "test_loss": 1.2635693541344475, "test_acc1": 69.84800236022949, "test_acc5": 89.0500023071289, "epoch": 82, "n_parameters": 8802912} -{"train_lr": 0.0036819303739738757, "train_loss": 2.786663887000084, "test_loss": 1.2404922308290707, "test_acc1": 70.6320025302124, "test_acc5": 89.64400282043457, "epoch": 83, "n_parameters": 8802912} -{"train_lr": 0.00367434480535066, "train_loss": 2.7658754073619845, "test_loss": 1.3128296106177217, "test_acc1": 69.65400215881348, "test_acc5": 89.21200248840331, "epoch": 84, "n_parameters": 8802912} -{"train_lr": 0.00366667787521664, "train_loss": 2.771395058131218, "test_loss": 1.2277726999977057, "test_acc1": 70.47800237457275, "test_acc5": 89.60600271697999, "epoch": 85, "n_parameters": 8802912} -{"train_lr": 0.003658929957247333, "train_loss": 2.7749514417648316, "test_loss": 1.2549144388998257, "test_acc1": 70.10000208709717, "test_acc5": 89.39200226257324, "epoch": 86, "n_parameters": 8802912} -{"train_lr": 0.0036511014290652147, "train_loss": 2.7584387183189394, "test_loss": 1.2147804446080153, "test_acc1": 71.26200244262695, "test_acc5": 90.09200261932374, "epoch": 87, "n_parameters": 8802912} -{"train_lr": 0.003643192672221756, "train_loss": 2.7656126528501512, "test_loss": 1.2904045266263626, "test_acc1": 69.56000201538086, "test_acc5": 89.1520025189209, "epoch": 88, "n_parameters": 8802912} -{"train_lr": 0.0036352040721785803, "train_loss": 2.7543541821241377, "test_loss": 1.2646858867476969, "test_acc1": 69.86600222625732, "test_acc5": 89.49800262176514, "epoch": 89, "n_parameters": 8802912} -{"train_lr": 0.003627136018288861, "train_loss": 2.7622146857738494, "test_loss": 1.2114516146042769, "test_acc1": 70.6960021987915, "test_acc5": 89.87000242980957, "epoch": 90, "n_parameters": 8802912} -{"train_lr": 0.0036189889037780316, "train_loss": 2.7438843497276304, "test_loss": 1.241044652812621, "test_acc1": 70.30000216979981, "test_acc5": 89.66800254974365, "epoch": 91, "n_parameters": 8802912} -{"train_lr": 0.0036107631257249954, "train_loss": 2.759021883010864, "test_loss": 1.2326184182482607, "test_acc1": 70.54000243591308, "test_acc5": 89.65600253601075, "epoch": 92, "n_parameters": 8802912} -{"train_lr": 0.003602459085042744, "train_loss": 2.7571325706720353, "test_loss": 1.2376215874272234, "test_acc1": 70.50800260864258, "test_acc5": 89.75000231201172, "epoch": 93, "n_parameters": 8802912} -{"train_lr": 0.003594077186458248, "train_loss": 2.7484602867126466, "test_loss": 1.2216600714360966, "test_acc1": 70.50600241485596, "test_acc5": 89.85800260559083, "epoch": 94, "n_parameters": 8802912} -{"train_lr": 0.003585617838493613, "train_loss": 2.742923972773552, "test_loss": 
1.2582991977824884, "test_acc1": 70.33200227508544, "test_acc5": 89.56000241027832, "epoch": 95, "n_parameters": 8802912} -{"train_lr": 0.0035770814534454225, "train_loss": 2.749069707942009, "test_loss": 1.1932634023182533, "test_acc1": 71.2320020413208, "test_acc5": 89.9020026385498, "epoch": 96, "n_parameters": 8802912} -{"train_lr": 0.003568468447365067, "train_loss": 2.741363721370697, "test_loss": 1.216847019160495, "test_acc1": 70.54800244293213, "test_acc5": 89.88000256622314, "epoch": 97, "n_parameters": 8802912} -{"train_lr": 0.0035597792400383233, "train_loss": 2.7341262596607208, "test_loss": 1.2126203806084745, "test_acc1": 71.04000236602784, "test_acc5": 90.0800025970459, "epoch": 98, "n_parameters": 8802912} -{"train_lr": 0.0035510142549648235, "train_loss": 2.743459006547928, "test_loss": 1.2090957892291687, "test_acc1": 71.19400203887939, "test_acc5": 89.82400218383789, "epoch": 99, "n_parameters": 8802912} -{"train_lr": 0.0035421739193377214, "train_loss": 2.7411962458610533, "test_loss": 1.263271478607374, "test_acc1": 69.88200216430664, "test_acc5": 89.40000239074708, "epoch": 100, "n_parameters": 8802912} -{"train_lr": 0.003533258664022372, "train_loss": 2.733297230052948, "test_loss": 1.2669737593216055, "test_acc1": 70.73800235412598, "test_acc5": 89.45000254669189, "epoch": 101, "n_parameters": 8802912} -{"train_lr": 0.0035242689235357775, "train_loss": 2.7277202188968657, "test_loss": 1.1923884990460731, "test_acc1": 71.24600225830078, "test_acc5": 90.11200247497558, "epoch": 102, "n_parameters": 8802912} -{"train_lr": 0.0035152051360252245, "train_loss": 2.721015006232262, "test_loss": 1.2059869332348598, "test_acc1": 70.74400211914063, "test_acc5": 89.96600235107422, "epoch": 103, "n_parameters": 8802912} -{"train_lr": 0.0035060677432469894, "train_loss": 2.72244071764946, "test_loss": 1.2031660969643032, "test_acc1": 70.85800234039307, "test_acc5": 89.96000231292724, "epoch": 104, "n_parameters": 8802912} -{"train_lr": 0.0034968571905445293, "train_loss": 2.713634927415848, "test_loss": 1.2625468728296898, "test_acc1": 70.04200245697021, "test_acc5": 89.39400263092041, "epoch": 105, "n_parameters": 8802912} -{"train_lr": 0.0034875739268273947, "train_loss": 2.7156151170969007, "test_loss": 1.1931169247802567, "test_acc1": 71.31000257019043, "test_acc5": 90.12000244934082, "epoch": 106, "n_parameters": 8802912} -{"train_lr": 0.00347821840454859, "train_loss": 2.7084970991134645, "test_loss": 1.2363116219639778, "test_acc1": 70.63000222167969, "test_acc5": 89.69800274200439, "epoch": 107, "n_parameters": 8802912} -{"train_lr": 0.003468791079683292, "train_loss": 2.71336643280983, "test_loss": 1.2005542486029512, "test_acc1": 70.95800235778809, "test_acc5": 90.04400251831055, "epoch": 108, "n_parameters": 8802912} -{"train_lr": 0.003459292411705684, "train_loss": 2.708950842499733, "test_loss": 1.2734895975274199, "test_acc1": 70.0660020602417, "test_acc5": 89.37800269256591, "epoch": 109, "n_parameters": 8802912} -{"train_lr": 0.003449722863567734, "train_loss": 2.711569919848442, "test_loss": 1.1708419890526462, "test_acc1": 71.41600223937988, "test_acc5": 90.35400227600098, "epoch": 110, "n_parameters": 8802912} -{"train_lr": 0.0034400829016756297, "train_loss": 2.71836285841465, "test_loss": 1.193154356935445, "test_acc1": 71.38400228424072, "test_acc5": 89.9120027734375, "epoch": 111, "n_parameters": 8802912} -{"train_lr": 0.0034303729958673978, "train_loss": 2.711472584056854, "test_loss": 1.2386341235216927, "test_acc1": 70.03200205444335, "test_acc5": 
89.4400024105835, "epoch": 112, "n_parameters": 8802912} -{"train_lr": 0.0034205936193903307, "train_loss": 2.7141998252391817, "test_loss": 1.239805735209409, "test_acc1": 70.67000250579834, "test_acc5": 89.91400250549316, "epoch": 113, "n_parameters": 8802912} -{"train_lr": 0.0034107452488774006, "train_loss": 2.694856726717949, "test_loss": 1.1904923617839813, "test_acc1": 71.43600217163086, "test_acc5": 89.97000266967774, "epoch": 114, "n_parameters": 8802912} -{"train_lr": 0.0034008283643241475, "train_loss": 2.6965582564353943, "test_loss": 1.2092351676786648, "test_acc1": 70.8440021459961, "test_acc5": 89.87200218383789, "epoch": 115, "n_parameters": 8802912} -{"train_lr": 0.003390843449065705, "train_loss": 2.7017865480184553, "test_loss": 1.211741224369582, "test_acc1": 71.00800231750489, "test_acc5": 90.08400278686524, "epoch": 116, "n_parameters": 8802912} -{"train_lr": 0.0033807909897526967, "train_loss": 2.696418272638321, "test_loss": 1.2115104496479034, "test_acc1": 71.41800226257324, "test_acc5": 90.2200027420044, "epoch": 117, "n_parameters": 8802912} -{"train_lr": 0.0033706714763277455, "train_loss": 2.6976484503269194, "test_loss": 1.1957371164770687, "test_acc1": 70.95400230987549, "test_acc5": 90.15200232086181, "epoch": 118, "n_parameters": 8802912} -{"train_lr": 0.003360485402001723, "train_loss": 2.7006652145147325, "test_loss": 1.1483490907532328, "test_acc1": 71.86400246337891, "test_acc5": 90.69200231506348, "epoch": 119, "n_parameters": 8802912} -{"train_lr": 0.0033502332632295347, "train_loss": 2.6918533306121826, "test_loss": 1.1915686919408686, "test_acc1": 71.53600252288818, "test_acc5": 90.08600248291016, "epoch": 120, "n_parameters": 8802912} -{"train_lr": 0.003339915559685877, "train_loss": 2.684101654076576, "test_loss": 1.1945234734345884, "test_acc1": 71.380002449646, "test_acc5": 90.08200272613526, "epoch": 121, "n_parameters": 8802912} -{"train_lr": 0.0033295327942412492, "train_loss": 2.688681266283989, "test_loss": 1.2906139580642475, "test_acc1": 69.8520022076416, "test_acc5": 89.5080025137329, "epoch": 122, "n_parameters": 8802912} -{"train_lr": 0.003319085472936782, "train_loss": 2.6881786192417145, "test_loss": 1.2081944118527805, "test_acc1": 71.44600255950928, "test_acc5": 90.24600265594482, "epoch": 123, "n_parameters": 8802912} -{"train_lr": 0.0033085741049602795, "train_loss": 2.678794109749794, "test_loss": 1.1892642006278038, "test_acc1": 71.62000217163086, "test_acc5": 90.3540024053955, "epoch": 124, "n_parameters": 8802912} -{"train_lr": 0.003297999202620968, "train_loss": 2.682017682480812, "test_loss": 1.1772824100711767, "test_acc1": 71.80000256408691, "test_acc5": 90.70000240417481, "epoch": 125, "n_parameters": 8802912} -{"train_lr": 0.0032873612813246714, "train_loss": 2.678594740843773, "test_loss": 1.200793846126865, "test_acc1": 71.3000023602295, "test_acc5": 90.04800263916016, "epoch": 126, "n_parameters": 8802912} -{"train_lr": 0.003276660859548651, "train_loss": 2.674001724052429, "test_loss": 1.1759266507099657, "test_acc1": 71.64400240692139, "test_acc5": 90.29200252807617, "epoch": 127, "n_parameters": 8802912} -{"train_lr": 0.0032658984588163557, "train_loss": 2.6754158237218855, "test_loss": 1.2104625671225435, "test_acc1": 70.8500023236084, "test_acc5": 89.91800273895264, "epoch": 128, "n_parameters": 8802912} -{"train_lr": 0.003255074603672122, "train_loss": 2.677900793671608, "test_loss": 1.184881956261747, "test_acc1": 71.48600242401123, "test_acc5": 90.27000269775391, "epoch": 129, "n_parameters": 8802912} 
-{"train_lr": 0.003244189821655263, "train_loss": 2.6747010142326353, "test_loss": 1.2241844881106825, "test_acc1": 70.75600220733642, "test_acc5": 89.91600250518799, "epoch": 130, "n_parameters": 8802912} -{"train_lr": 0.003233244643274736, "train_loss": 2.666985640358925, "test_loss": 1.1913521381862022, "test_acc1": 71.27600249084473, "test_acc5": 90.27200258605957, "epoch": 131, "n_parameters": 8802912} -{"train_lr": 0.0032222396019829943, "train_loss": 2.6723787071466445, "test_loss": 1.20225558780572, "test_acc1": 71.80000215026855, "test_acc5": 90.484002527771, "epoch": 132, "n_parameters": 8802912} -{"train_lr": 0.0032111752341504192, "train_loss": 2.663997348880768, "test_loss": 1.2077304628842018, "test_acc1": 71.27400250152589, "test_acc5": 90.23200259429932, "epoch": 133, "n_parameters": 8802912} -{"train_lr": 0.0032000520790385592, "train_loss": 2.6693343281269075, "test_loss": 1.1650966997532284, "test_acc1": 71.64000235565186, "test_acc5": 90.43800235443115, "epoch": 134, "n_parameters": 8802912} -{"train_lr": 0.0031888706787743812, "train_loss": 2.661883883142471, "test_loss": 1.2126008614021189, "test_acc1": 71.43800234466553, "test_acc5": 90.36600261932374, "epoch": 135, "n_parameters": 8802912} -{"train_lr": 0.0031776315783234484, "train_loss": 2.6604898091316223, "test_loss": 1.1847548734615831, "test_acc1": 71.3720023425293, "test_acc5": 90.33800258544922, "epoch": 136, "n_parameters": 8802912} -{"train_lr": 0.0031663353254638284, "train_loss": 2.660472065258026, "test_loss": 1.1562395135269445, "test_acc1": 71.94600248565673, "test_acc5": 90.64200278106689, "epoch": 137, "n_parameters": 8802912} -{"train_lr": 0.0031549824707587932, "train_loss": 2.664892584967613, "test_loss": 1.1834398319616037, "test_acc1": 72.09600239318847, "test_acc5": 90.42000225372314, "epoch": 138, "n_parameters": 8802912} -{"train_lr": 0.003143573567530467, "train_loss": 2.6627353639125824, "test_loss": 1.2486067791195476, "test_acc1": 70.45000225616455, "test_acc5": 89.42600243774415, "epoch": 139, "n_parameters": 8802912} -{"train_lr": 0.0031321091718327755, "train_loss": 2.6514323102474213, "test_loss": 1.1431264084051638, "test_acc1": 72.07800213134766, "test_acc5": 90.76400256317139, "epoch": 140, "n_parameters": 8802912} -{"train_lr": 0.003120589842424192, "train_loss": 2.655152057623863, "test_loss": 1.1767640635371208, "test_acc1": 71.4720023776245, "test_acc5": 90.4320025604248, "epoch": 141, "n_parameters": 8802912} -{"train_lr": 0.0031090161407405044, "train_loss": 2.639615317106247, "test_loss": 1.1347106436596197, "test_acc1": 72.25000246734619, "test_acc5": 90.708002237854, "epoch": 142, "n_parameters": 8802912} -{"train_lr": 0.003097388630867618, "train_loss": 2.6509745678424834, "test_loss": 1.1263248298098059, "test_acc1": 72.24400234680176, "test_acc5": 90.74800236846924, "epoch": 143, "n_parameters": 8802912} -{"train_lr": 0.0030857078795141065, "train_loss": 2.645409398174286, "test_loss": 1.18303443403805, "test_acc1": 71.58600234985352, "test_acc5": 90.2900023348999, "epoch": 144, "n_parameters": 8802912} -{"train_lr": 0.0030739744559831164, "train_loss": 2.6473371891260147, "test_loss": 1.1482038156074637, "test_acc1": 71.8780023248291, "test_acc5": 90.63400276245117, "epoch": 145, "n_parameters": 8802912} -{"train_lr": 0.003062188932145215, "train_loss": 2.6329492732048037, "test_loss": 1.1652677186271723, "test_acc1": 72.13400230804443, "test_acc5": 90.61000246856689, "epoch": 146, "n_parameters": 8802912} -{"train_lr": 0.0030503518824103173, "train_loss": 
2.6430299598693847, "test_loss": 1.1984967348330162, "test_acc1": 71.3280025454712, "test_acc5": 90.34400218933105, "epoch": 147, "n_parameters": 8802912} -{"train_lr": 0.0030384638836993723, "train_loss": 2.641696829319, "test_loss": 1.223995253443718, "test_acc1": 71.44600240478516, "test_acc5": 90.1920026296997, "epoch": 148, "n_parameters": 8802912} -{"train_lr": 0.003026525515416759, "train_loss": 2.632146650004387, "test_loss": 1.1759142860331957, "test_acc1": 71.91800233795166, "test_acc5": 90.59600252716065, "epoch": 149, "n_parameters": 8802912} -{"train_lr": 0.0030145373594217015, "train_loss": 2.636480943775177, "test_loss": 1.128433358581627, "test_acc1": 72.64000220977783, "test_acc5": 90.952002527771, "epoch": 150, "n_parameters": 8802912} -{"train_lr": 0.0030025000000000156, "train_loss": 2.6319951573133467, "test_loss": 1.186610540484681, "test_acc1": 71.68800260528565, "test_acc5": 90.39800204193115, "epoch": 151, "n_parameters": 8802912} -{"train_lr": 0.0029904140238355007, "train_loss": 2.62986329703331, "test_loss": 1.1822106476654024, "test_acc1": 71.75200229431152, "test_acc5": 90.3340025946045, "epoch": 152, "n_parameters": 8802912} -{"train_lr": 0.0029782800199817903, "train_loss": 2.6230160806417464, "test_loss": 1.1357619311003124, "test_acc1": 72.60000257843018, "test_acc5": 90.67400257781982, "epoch": 153, "n_parameters": 8802912} -{"train_lr": 0.0029660985798329416, "train_loss": 2.6331616760492325, "test_loss": 1.1696988970917814, "test_acc1": 71.82200262237549, "test_acc5": 90.52800233215332, "epoch": 154, "n_parameters": 8802912} -{"train_lr": 0.0029538702970952164, "train_loss": 2.630023702287674, "test_loss": 1.1325136317926294, "test_acc1": 71.98600257904053, "test_acc5": 90.97800230987549, "epoch": 155, "n_parameters": 8802912} -{"train_lr": 0.0029415957677578724, "train_loss": 2.6157315504074097, "test_loss": 1.1464958488941193, "test_acc1": 72.0660022619629, "test_acc5": 90.48200245300293, "epoch": 156, "n_parameters": 8802912} -{"train_lr": 0.002929275590064108, "train_loss": 2.6222502685785294, "test_loss": 1.1982037880841423, "test_acc1": 71.55400256622315, "test_acc5": 90.39400268096924, "epoch": 157, "n_parameters": 8802912} -{"train_lr": 0.002916910364482115, "train_loss": 2.6114053263425827, "test_loss": 1.2099787807639908, "test_acc1": 71.08200244415283, "test_acc5": 90.24000278717041, "epoch": 158, "n_parameters": 8802912} -{"train_lr": 0.0029045006936754257, "train_loss": 2.6298371148109436, "test_loss": 1.1921825930476189, "test_acc1": 71.77200267150879, "test_acc5": 90.44200236785889, "epoch": 159, "n_parameters": 8802912} -{"train_lr": 0.0028920471824738832, "train_loss": 2.607103025650978, "test_loss": 1.1250100784442003, "test_acc1": 72.29000252227783, "test_acc5": 91.05200239593506, "epoch": 160, "n_parameters": 8802912} -{"train_lr": 0.0028795504378442225, "train_loss": 2.6031004187345506, "test_loss": 1.128390966092839, "test_acc1": 72.43600252441406, "test_acc5": 91.00800245758056, "epoch": 161, "n_parameters": 8802912} -{"train_lr": 0.002867011068859989, "train_loss": 2.6127615695476534, "test_loss": 1.1785339774454342, "test_acc1": 71.67600199768066, "test_acc5": 90.53400247772217, "epoch": 162, "n_parameters": 8802912} -{"train_lr": 0.0028544296866723304, "train_loss": 2.6016048929452897, "test_loss": 1.117519370992394, "test_acc1": 72.73400228515625, "test_acc5": 91.28800221862792, "epoch": 163, "n_parameters": 8802912} -{"train_lr": 0.0028418069044801562, "train_loss": 2.6044637111902236, "test_loss": 1.1320514860836899, 
"test_acc1": 72.42200228698731, "test_acc5": 90.87800283386231, "epoch": 164, "n_parameters": 8802912} -{"train_lr": 0.0028291433375, "train_loss": 2.608439180970192, "test_loss": 1.1847845359760172, "test_acc1": 71.83800240783691, "test_acc5": 90.36600278930663, "epoch": 165, "n_parameters": 8802912} -{"train_lr": 0.002816439602936208, "train_loss": 2.5928482845067977, "test_loss": 1.1859395933501862, "test_acc1": 71.96600265197753, "test_acc5": 90.75800258483886, "epoch": 166, "n_parameters": 8802912} -{"train_lr": 0.002803696319950981, "train_loss": 2.6005174310445787, "test_loss": 1.16901648964952, "test_acc1": 71.71200243896484, "test_acc5": 90.45600274261474, "epoch": 167, "n_parameters": 8802912} -{"train_lr": 0.0027909141096339935, "train_loss": 2.6074734619617463, "test_loss": 1.1616473452133291, "test_acc1": 72.59000288513184, "test_acc5": 90.6820027481079, "epoch": 168, "n_parameters": 8802912} -{"train_lr": 0.002778093594971943, "train_loss": 2.5931850266695022, "test_loss": 1.1688530445098877, "test_acc1": 72.14000259918213, "test_acc5": 90.74600259094238, "epoch": 169, "n_parameters": 8802912} -{"train_lr": 0.002765235400818761, "train_loss": 2.5934141369581223, "test_loss": 1.1238390378215735, "test_acc1": 72.48000247161865, "test_acc5": 90.92200277832032, "epoch": 170, "n_parameters": 8802912} -{"train_lr": 0.0027523401538647224, "train_loss": 2.59390983774662, "test_loss": 1.1270536712425596, "test_acc1": 72.31800257080079, "test_acc5": 90.93200239379883, "epoch": 171, "n_parameters": 8802912} -{"train_lr": 0.002739408482605956, "train_loss": 2.587843846297264, "test_loss": 1.1475514120915358, "test_acc1": 72.41800236846923, "test_acc5": 90.82000228729248, "epoch": 172, "n_parameters": 8802912} -{"train_lr": 0.002726441017313784, "train_loss": 2.581195289158821, "test_loss": 1.1433854372624088, "test_acc1": 72.38600246185302, "test_acc5": 90.91800246582031, "epoch": 173, "n_parameters": 8802912} -{"train_lr": 0.002713438390004251, "train_loss": 2.584689849972725, "test_loss": 1.1281452292905134, "test_acc1": 72.81200222625732, "test_acc5": 90.84400266937256, "epoch": 174, "n_parameters": 8802912} -{"train_lr": 0.0027004012344070075, "train_loss": 2.583056979584694, "test_loss": 1.1493739051853908, "test_acc1": 72.1320024142456, "test_acc5": 90.66200204223632, "epoch": 175, "n_parameters": 8802912} -{"train_lr": 0.0026873301859347007, "train_loss": 2.5788612841129304, "test_loss": 1.182425534023958, "test_acc1": 72.1060023400879, "test_acc5": 90.48600250152587, "epoch": 176, "n_parameters": 8802912} -{"train_lr": 0.002674225881651733, "train_loss": 2.5814184900045394, "test_loss": 1.1419935824678225, "test_acc1": 72.62800236785888, "test_acc5": 91.0140026123047, "epoch": 177, "n_parameters": 8802912} -{"train_lr": 0.0026610889602434354, "train_loss": 2.5790953180074694, "test_loss": 1.118449683794204, "test_acc1": 72.19600240783691, "test_acc5": 91.04200250793457, "epoch": 178, "n_parameters": 8802912} -{"train_lr": 0.0026479200619848845, "train_loss": 2.573088735413551, "test_loss": 1.1011702801813097, "test_acc1": 72.95600268981934, "test_acc5": 91.22800257019043, "epoch": 179, "n_parameters": 8802912} -{"train_lr": 0.0026347198287094897, "train_loss": 2.575206805109978, "test_loss": 1.1347377988345482, "test_acc1": 72.73200262176513, "test_acc5": 90.90000244354248, "epoch": 180, "n_parameters": 8802912} -{"train_lr": 0.0026214889037780493, "train_loss": 2.5807328763246535, "test_loss": 1.1208167957032429, "test_acc1": 72.94000228881836, "test_acc5": 91.20000250823975, 
"epoch": 181, "n_parameters": 8802912} -{"train_lr": 0.0026082279320471633, "train_loss": 2.587697632265091, "test_loss": 1.1356581800124224, "test_acc1": 72.66800257751464, "test_acc5": 90.84000229187012, "epoch": 182, "n_parameters": 8802912} -{"train_lr": 0.0025949375598379055, "train_loss": 2.5760198566675188, "test_loss": 1.1258967562633402, "test_acc1": 72.79200242156982, "test_acc5": 91.01400235443116, "epoch": 183, "n_parameters": 8802912} -{"train_lr": 0.0025816184349041886, "train_loss": 2.574317347407341, "test_loss": 1.1552698515793856, "test_acc1": 72.16600228149414, "test_acc5": 90.65600266418457, "epoch": 184, "n_parameters": 8802912} -{"train_lr": 0.0025682712064015187, "train_loss": 2.556429031252861, "test_loss": 1.1358413827769898, "test_acc1": 73.09400231414794, "test_acc5": 90.93200226745606, "epoch": 185, "n_parameters": 8802912} -{"train_lr": 0.002554896524854948, "train_loss": 2.5702195051431658, "test_loss": 1.1258788060616045, "test_acc1": 72.73800241516113, "test_acc5": 91.08800276000977, "epoch": 186, "n_parameters": 8802912} -{"train_lr": 0.0025414950421274317, "train_loss": 2.5643902470827102, "test_loss": 1.1001654574099708, "test_acc1": 73.11800225738526, "test_acc5": 91.2120027368164, "epoch": 187, "n_parameters": 8802912} -{"train_lr": 0.002528067411388543, "train_loss": 2.5546228750705717, "test_loss": 1.114708869772799, "test_acc1": 72.63200235778808, "test_acc5": 91.21000243469238, "epoch": 188, "n_parameters": 8802912} -{"train_lr": 0.0025146142870819286, "train_loss": 2.5554607296943663, "test_loss": 1.1099232945810347, "test_acc1": 72.78000221740723, "test_acc5": 91.09200276550293, "epoch": 189, "n_parameters": 8802912} -{"train_lr": 0.002501136324893901, "train_loss": 2.56572049844265, "test_loss": 1.0967789697734749, "test_acc1": 73.00600224395752, "test_acc5": 91.34800255035401, "epoch": 190, "n_parameters": 8802912} -{"train_lr": 0.002487634181721322, "train_loss": 2.5525357033014298, "test_loss": 1.072232031427762, "test_acc1": 73.68400241973877, "test_acc5": 91.62600219329833, "epoch": 191, "n_parameters": 8802912} -{"train_lr": 0.002474108515639672, "train_loss": 2.5491821521520617, "test_loss": 1.113099125378272, "test_acc1": 73.0140023336792, "test_acc5": 91.19600248260498, "epoch": 192, "n_parameters": 8802912} -{"train_lr": 0.002460559985870747, "train_loss": 2.5579778899908066, "test_loss": 1.0711433804210495, "test_acc1": 73.46000260192871, "test_acc5": 91.52400237915039, "epoch": 193, "n_parameters": 8802912} -{"train_lr": 0.002446989252750831, "train_loss": 2.5417408557891847, "test_loss": 1.1391959133393623, "test_acc1": 72.68200261627197, "test_acc5": 90.9940023236084, "epoch": 194, "n_parameters": 8802912} -{"train_lr": 0.002433396977698326, "train_loss": 2.5498608253479005, "test_loss": 1.0891889596686644, "test_acc1": 73.45600240600587, "test_acc5": 91.44000239135742, "epoch": 195, "n_parameters": 8802912} -{"train_lr": 0.0024197838231814215, "train_loss": 2.5418812530994415, "test_loss": 1.0774515985566027, "test_acc1": 73.6460024130249, "test_acc5": 91.5360027142334, "epoch": 196, "n_parameters": 8802912} -{"train_lr": 0.002406150452686214, "train_loss": 2.5348350748062134, "test_loss": 1.143979862332344, "test_acc1": 72.31400230255127, "test_acc5": 90.8180028994751, "epoch": 197, "n_parameters": 8802912} -{"train_lr": 0.0023924975306838653, "train_loss": 2.544350483083725, "test_loss": 1.0814029495505726, "test_acc1": 73.59400250061036, "test_acc5": 91.46600270904541, "epoch": 198, "n_parameters": 8802912} -{"train_lr": 
0.0023788257225985116, "train_loss": 2.54661043817997, "test_loss": 1.1208867084454088, "test_acc1": 73.19400227661133, "test_acc5": 90.8820028149414, "epoch": 199, "n_parameters": 8802912} -{"train_lr": 0.002365135694774904, "train_loss": 2.538269160723686, "test_loss": 1.091546661713544, "test_acc1": 73.51000256958008, "test_acc5": 91.53000242950439, "epoch": 200, "n_parameters": 8802912} -{"train_lr": 0.0023514281144455126, "train_loss": 2.5387278804302214, "test_loss": 1.0999216222587753, "test_acc1": 73.4840027178955, "test_acc5": 91.37800279083253, "epoch": 201, "n_parameters": 8802912} -{"train_lr": 0.002337703649698603, "train_loss": 2.534332941699028, "test_loss": 1.1123261416659636, "test_acc1": 73.03600240203858, "test_acc5": 91.20400272583008, "epoch": 202, "n_parameters": 8802912} -{"train_lr": 0.002323962969445206, "train_loss": 2.526767383813858, "test_loss": 1.0508502241881454, "test_acc1": 74.12000233276368, "test_acc5": 91.83000274505615, "epoch": 203, "n_parameters": 8802912} -{"train_lr": 0.002310206743386666, "train_loss": 2.521291253066063, "test_loss": 1.1033476724782412, "test_acc1": 73.26200255889893, "test_acc5": 91.39200235717773, "epoch": 204, "n_parameters": 8802912} -{"train_lr": 0.002296435641982043, "train_loss": 2.528657273244858, "test_loss": 1.052001643268501, "test_acc1": 74.16400247833252, "test_acc5": 91.82800264709472, "epoch": 205, "n_parameters": 8802912} -{"train_lr": 0.0022826503364153008, "train_loss": 2.516057552862167, "test_loss": 1.1017902274342144, "test_acc1": 73.19600234344482, "test_acc5": 91.19000254119874, "epoch": 206, "n_parameters": 8802912} -{"train_lr": 0.002268851498562944, "train_loss": 2.538595021247864, "test_loss": 1.0500936067717916, "test_acc1": 73.88800272247315, "test_acc5": 91.8300022314453, "epoch": 207, "n_parameters": 8802912} -{"train_lr": 0.0022550398009608037, "train_loss": 2.529685878634453, "test_loss": 1.1305344472913181, "test_acc1": 72.54200221191407, "test_acc5": 90.9720026638794, "epoch": 208, "n_parameters": 8802912} -{"train_lr": 0.002241215916771494, "train_loss": 2.5152113378763197, "test_loss": 1.0899192776311846, "test_acc1": 73.44000264343262, "test_acc5": 91.42400230499267, "epoch": 209, "n_parameters": 8802912} -{"train_lr": 0.0022273805197516256, "train_loss": 2.514386184000969, "test_loss": 1.1236931063673075, "test_acc1": 73.11600250823975, "test_acc5": 91.32600272216797, "epoch": 210, "n_parameters": 8802912} -{"train_lr": 0.0022135342842189523, "train_loss": 2.5174347745656966, "test_loss": 1.132155286915162, "test_acc1": 73.51400250457763, "test_acc5": 91.51400258605958, "epoch": 211, "n_parameters": 8802912} -{"train_lr": 0.002199677885019512, "train_loss": 2.522267490696907, "test_loss": 1.1270191807518988, "test_acc1": 72.85400230743409, "test_acc5": 91.30400253540039, "epoch": 212, "n_parameters": 8802912} -{"train_lr": 0.002185811997494599, "train_loss": 2.5220797347307204, "test_loss": 1.0744847495327978, "test_acc1": 74.14600222961425, "test_acc5": 91.67000270935058, "epoch": 213, "n_parameters": 8802912} -{"train_lr": 0.0021719372974480025, "train_loss": 2.508114530825615, "test_loss": 1.0909595645087606, "test_acc1": 73.37600252044678, "test_acc5": 91.67200236816406, "epoch": 214, "n_parameters": 8802912} -{"train_lr": 0.002158054461113036, "train_loss": 2.5124550097465517, "test_loss": 1.072978124679888, "test_acc1": 74.12400251983642, "test_acc5": 91.75200248840332, "epoch": 215, "n_parameters": 8802912} -{"train_lr": 0.0021441641651195054, "train_loss": 2.512985480761528, 
"test_loss": 1.0711318070397657, "test_acc1": 74.01200238098144, "test_acc5": 91.62000252410888, "epoch": 216, "n_parameters": 8802912} -{"train_lr": 0.0021302670864609768, "train_loss": 2.4981127818107605, "test_loss": 1.0955076226416756, "test_acc1": 73.36600226806641, "test_acc5": 91.49600238769531, "epoch": 217, "n_parameters": 8802912} -{"train_lr": 0.0021163639024613505, "train_loss": 2.5086812860012055, "test_loss": 1.117870643734932, "test_acc1": 73.15800254089355, "test_acc5": 91.31400268035888, "epoch": 218, "n_parameters": 8802912} -{"train_lr": 0.002102455290742262, "train_loss": 2.500440197467804, "test_loss": 1.0767008364200592, "test_acc1": 73.96000260498047, "test_acc5": 91.55800237426757, "epoch": 219, "n_parameters": 8802912} -{"train_lr": 0.0020885419291897665, "train_loss": 2.4993754625558853, "test_loss": 1.1081211378469187, "test_acc1": 73.37000230743408, "test_acc5": 91.49400250762939, "epoch": 220, "n_parameters": 8802912} -{"train_lr": 0.0020746244959214863, "train_loss": 2.491419598913193, "test_loss": 1.0986771423588781, "test_acc1": 73.53600260772706, "test_acc5": 91.3100027090454, "epoch": 221, "n_parameters": 8802912} -{"train_lr": 0.0020607036692535004, "train_loss": 2.494174570417404, "test_loss": 1.0759035530774033, "test_acc1": 73.9100021484375, "test_acc5": 91.62200292572021, "epoch": 222, "n_parameters": 8802912} -{"train_lr": 0.0020467801276673257, "train_loss": 2.4903147062540056, "test_loss": 1.0543653084074749, "test_acc1": 74.39800264892578, "test_acc5": 91.95600281158447, "epoch": 223, "n_parameters": 8802912} -{"train_lr": 0.0020328545497765972, "train_loss": 2.4957724907159804, "test_loss": 1.087625122464755, "test_acc1": 73.67800223754882, "test_acc5": 91.70600235198975, "epoch": 224, "n_parameters": 8802912} -{"train_lr": 0.002018927614294425, "train_loss": 2.488828461027145, "test_loss": 1.0242567754843657, "test_acc1": 74.74200256744385, "test_acc5": 92.254002527771, "epoch": 225, "n_parameters": 8802912} -{"train_lr": 0.0020050000000000176, "train_loss": 2.490553906393051, "test_loss": 1.060360787107664, "test_acc1": 74.18000267456054, "test_acc5": 91.57800261322022, "epoch": 226, "n_parameters": 8802912} -{"train_lr": 0.001991072385705573, "train_loss": 2.477818633508682, "test_loss": 1.0360273447106867, "test_acc1": 74.45600251983643, "test_acc5": 92.12400261138916, "epoch": 227, "n_parameters": 8802912} -{"train_lr": 0.001977145450223401, "train_loss": 2.486578170442581, "test_loss": 1.0727028272607748, "test_acc1": 73.97800247131347, "test_acc5": 91.69800241607666, "epoch": 228, "n_parameters": 8802912} -{"train_lr": 0.0019632198723327173, "train_loss": 2.4801866537094117, "test_loss": 1.0577357917585795, "test_acc1": 74.23000261779785, "test_acc5": 91.96800282592774, "epoch": 229, "n_parameters": 8802912} -{"train_lr": 0.0019492963307464952, "train_loss": 2.4804067786455155, "test_loss": 1.0660248560940517, "test_acc1": 73.92200257293702, "test_acc5": 91.64600279846191, "epoch": 230, "n_parameters": 8802912} -{"train_lr": 0.001935375504078517, "train_loss": 2.4687070182323456, "test_loss": 1.0597361650537043, "test_acc1": 74.26800237884521, "test_acc5": 91.92600259155273, "epoch": 231, "n_parameters": 8802912} -{"train_lr": 0.001921458070810235, "train_loss": 2.4768204506874083, "test_loss": 1.0257845672176165, "test_acc1": 74.68800270385742, "test_acc5": 92.2100024609375, "epoch": 232, "n_parameters": 8802912} -{"train_lr": 0.0019075447092577794, "train_loss": 2.46774526321888, "test_loss": 1.020549598204739, "test_acc1": 
74.90200233856201, "test_acc5": 92.09200266754151, "epoch": 233, "n_parameters": 8802912} -{"train_lr": 0.0018936360975386397, "train_loss": 2.469535802912712, "test_loss": 0.9928030415492899, "test_acc1": 75.27800255737304, "test_acc5": 92.32400263061524, "epoch": 234, "n_parameters": 8802912} -{"train_lr": 0.0018797329135390225, "train_loss": 2.4706064199924467, "test_loss": 1.0519538179916494, "test_acc1": 74.11000265625, "test_acc5": 91.91000253570557, "epoch": 235, "n_parameters": 8802912} -{"train_lr": 0.001865835834880448, "train_loss": 2.4715217574596404, "test_loss": 1.0420573949813843, "test_acc1": 74.34800219299316, "test_acc5": 92.07600243591308, "epoch": 236, "n_parameters": 8802912} -{"train_lr": 0.0018519455388870075, "train_loss": 2.4627477085351943, "test_loss": 1.0235423014006193, "test_acc1": 74.7640028540039, "test_acc5": 92.07600255340576, "epoch": 237, "n_parameters": 8802912} -{"train_lr": 0.0018380627025520412, "train_loss": 2.460023950481415, "test_loss": 1.0071833517183275, "test_acc1": 74.85200255737304, "test_acc5": 92.28200225708008, "epoch": 238, "n_parameters": 8802912} -{"train_lr": 0.001824188002505409, "train_loss": 2.469959481334686, "test_loss": 1.166262144332423, "test_acc1": 72.48800257049561, "test_acc5": 91.34600257537842, "epoch": 239, "n_parameters": 8802912} -{"train_lr": 0.0018103221149804824, "train_loss": 2.4578725676059725, "test_loss": 1.070676745079896, "test_acc1": 73.94400226257324, "test_acc5": 91.91000279815674, "epoch": 240, "n_parameters": 8802912} -{"train_lr": 0.0017964657157810383, "train_loss": 2.444655114555359, "test_loss": 1.0800352232421147, "test_acc1": 74.17600243652343, "test_acc5": 91.85400252410889, "epoch": 241, "n_parameters": 8802912} -{"train_lr": 0.0017826194802483815, "train_loss": 2.4487457510709763, "test_loss": 1.034357904949609, "test_acc1": 74.53400223449707, "test_acc5": 91.85000267089843, "epoch": 242, "n_parameters": 8802912} -{"train_lr": 0.0017687840832285521, "train_loss": 2.449190316438675, "test_loss": 1.0487424814525772, "test_acc1": 74.92400251403808, "test_acc5": 92.10400250549317, "epoch": 243, "n_parameters": 8802912} -{"train_lr": 0.0017549601990392034, "train_loss": 2.4490826187372208, "test_loss": 1.0805276323329, "test_acc1": 73.95400270599366, "test_acc5": 91.82000235168456, "epoch": 244, "n_parameters": 8802912} -{"train_lr": 0.001741148501437039, "train_loss": 2.4471388262033464, "test_loss": 1.0856803596458013, "test_acc1": 73.65200241668701, "test_acc5": 91.82800239837647, "epoch": 245, "n_parameters": 8802912} -{"train_lr": 0.0017273496635846672, "train_loss": 2.4539292171001432, "test_loss": 1.03655271736138, "test_acc1": 74.5660026321411, "test_acc5": 92.10600242156983, "epoch": 246, "n_parameters": 8802912} -{"train_lr": 0.0017135643580179704, "train_loss": 2.4469535983800887, "test_loss": 1.0087965590988888, "test_acc1": 75.17400241485596, "test_acc5": 92.54400261779786, "epoch": 247, "n_parameters": 8802912} -{"train_lr": 0.0016997932566133241, "train_loss": 2.444410781955719, "test_loss": 1.0247314309372622, "test_acc1": 74.59600245056153, "test_acc5": 92.07800263061523, "epoch": 248, "n_parameters": 8802912} -{"train_lr": 0.0016860370305547992, "train_loss": 2.4308273503303526, "test_loss": 1.0320759124177343, "test_acc1": 74.78600229370117, "test_acc5": 92.29200270019531, "epoch": 249, "n_parameters": 8802912} -{"train_lr": 0.00167229635030139, "train_loss": 2.4275312590122224, "test_loss": 0.998486482702634, "test_acc1": 75.31200241699219, "test_acc5": 92.37200247497559, "epoch": 
250, "n_parameters": 8802912} -{"train_lr": 0.0016585718855544908, "train_loss": 2.430759438252449, "test_loss": 1.0259777138776638, "test_acc1": 74.97600257110595, "test_acc5": 92.29400238006592, "epoch": 251, "n_parameters": 8802912} -{"train_lr": 0.0016448643052251444, "train_loss": 2.436187075996399, "test_loss": 1.0252608503488934, "test_acc1": 75.1060025088501, "test_acc5": 92.31800235137939, "epoch": 252, "n_parameters": 8802912} -{"train_lr": 0.0016311742774014707, "train_loss": 2.4226906020879744, "test_loss": 1.0081365014262058, "test_acc1": 75.16000274536133, "test_acc5": 92.45000248321533, "epoch": 253, "n_parameters": 8802912} -{"train_lr": 0.0016175024693161407, "train_loss": 2.4258243584871293, "test_loss": 1.0081383568399094, "test_acc1": 75.10200262573242, "test_acc5": 92.52200285644531, "epoch": 254, "n_parameters": 8802912} -{"train_lr": 0.0016038495473138241, "train_loss": 2.415545753073692, "test_loss": 1.0118173485731377, "test_acc1": 74.8900023171997, "test_acc5": 92.38800250457764, "epoch": 255, "n_parameters": 8802912} -{"train_lr": 0.001590216176818559, "train_loss": 2.4230620580911637, "test_loss": 0.9836603933397461, "test_acc1": 75.62000253448487, "test_acc5": 92.67800249145508, "epoch": 256, "n_parameters": 8802912} -{"train_lr": 0.0015766030223016928, "train_loss": 2.4250809122800825, "test_loss": 0.9980065629762762, "test_acc1": 75.47600234466553, "test_acc5": 92.37600257507324, "epoch": 257, "n_parameters": 8802912} -{"train_lr": 0.0015630107472491771, "train_loss": 2.4182885842323305, "test_loss": 1.003571409951238, "test_acc1": 75.41200240478516, "test_acc5": 92.67800252532959, "epoch": 258, "n_parameters": 8802912} -{"train_lr": 0.0015494400141292598, "train_loss": 2.4167381868362425, "test_loss": 1.0201460563522928, "test_acc1": 74.8120025592041, "test_acc5": 92.36600265045166, "epoch": 259, "n_parameters": 8802912} -{"train_lr": 0.001535891484360313, "train_loss": 2.412824540567398, "test_loss": 1.0581611210809034, "test_acc1": 74.58000227508545, "test_acc5": 92.12400251068115, "epoch": 260, "n_parameters": 8802912} -{"train_lr": 0.0015223658182786706, "train_loss": 2.4184564230203627, "test_loss": 0.9861624370164731, "test_acc1": 75.53200234954834, "test_acc5": 92.73800259216308, "epoch": 261, "n_parameters": 8802912} -{"train_lr": 0.0015088636751061355, "train_loss": 2.4104201919078827, "test_loss": 1.0040870151099037, "test_acc1": 75.5400024810791, "test_acc5": 92.3800025164795, "epoch": 262, "n_parameters": 8802912} -{"train_lr": 0.0014953857129180808, "train_loss": 2.4053339702129364, "test_loss": 1.0213478067341972, "test_acc1": 75.12600247253418, "test_acc5": 92.18400234680176, "epoch": 263, "n_parameters": 8802912} -{"train_lr": 0.001481932588611488, "train_loss": 2.4035803348064424, "test_loss": 1.0133089565617197, "test_acc1": 75.0080026739502, "test_acc5": 92.44600238250733, "epoch": 264, "n_parameters": 8802912} -{"train_lr": 0.001468504957872541, "train_loss": 2.4052466153621674, "test_loss": 0.9823872642043758, "test_acc1": 75.79600258789063, "test_acc5": 92.81000237365723, "epoch": 265, "n_parameters": 8802912} -{"train_lr": 0.0014551034751450972, "train_loss": 2.4011172059774397, "test_loss": 1.0269925142912304, "test_acc1": 75.24800248565674, "test_acc5": 92.3560025668335, "epoch": 266, "n_parameters": 8802912} -{"train_lr": 0.0014417287935984719, "train_loss": 2.3984264409303666, "test_loss": 0.9993550306295648, "test_acc1": 75.23600236083985, "test_acc5": 92.4220023739624, "epoch": 267, "n_parameters": 8802912} -{"train_lr": 
0.0014283815650957576, "train_loss": 2.3996414154291155, "test_loss": 1.0062363717485876, "test_acc1": 75.13800256225586, "test_acc5": 92.358002371521, "epoch": 268, "n_parameters": 8802912} -{"train_lr": 0.00141506244016212, "train_loss": 2.3978533087968827, "test_loss": 0.9750935444060493, "test_acc1": 75.7460026095581, "test_acc5": 92.8600029385376, "epoch": 269, "n_parameters": 8802912} -{"train_lr": 0.0014017720679528809, "train_loss": 2.3873388515472413, "test_loss": 0.9933815000250059, "test_acc1": 75.68200256011963, "test_acc5": 92.50200249664307, "epoch": 270, "n_parameters": 8802912} -{"train_lr": 0.001388511096221964, "train_loss": 2.3880348885059357, "test_loss": 0.9784809552571353, "test_acc1": 75.80200262573243, "test_acc5": 92.67200243041992, "epoch": 271, "n_parameters": 8802912} -{"train_lr": 0.0013752801712905223, "train_loss": 2.3939721091032027, "test_loss": 0.9513416485313106, "test_acc1": 76.26600255065918, "test_acc5": 92.95200276550293, "epoch": 272, "n_parameters": 8802912} -{"train_lr": 0.0013620799380151495, "train_loss": 2.378580770611763, "test_loss": 0.9594222390476395, "test_acc1": 76.00800240936279, "test_acc5": 92.850002421875, "epoch": 273, "n_parameters": 8802912} -{"train_lr": 0.0013489110397565372, "train_loss": 2.392060397863388, "test_loss": 0.9782284000778899, "test_acc1": 75.54400265594482, "test_acc5": 92.77600272003174, "epoch": 274, "n_parameters": 8802912} -{"train_lr": 0.0013357741183482558, "train_loss": 2.382560392832756, "test_loss": 0.9615063790012809, "test_acc1": 76.07400250457763, "test_acc5": 92.86200240997314, "epoch": 275, "n_parameters": 8802912} -{"train_lr": 0.0013226698140652842, "train_loss": 2.3745041449308397, "test_loss": 0.9912158645251218, "test_acc1": 76.0180023751831, "test_acc5": 92.77200273590088, "epoch": 276, "n_parameters": 8802912} -{"train_lr": 0.001309598765592993, "train_loss": 2.3870569831848143, "test_loss": 0.9912875572986463, "test_acc1": 75.4680025479126, "test_acc5": 92.80000256469727, "epoch": 277, "n_parameters": 8802912} -{"train_lr": 0.0012965616099957775, "train_loss": 2.3743462139368057, "test_loss": 1.0167885405175827, "test_acc1": 75.56200237640381, "test_acc5": 92.49600265625, "epoch": 278, "n_parameters": 8802912} -{"train_lr": 0.0012835589826862073, "train_loss": 2.371912781739235, "test_loss": 0.9812103122034493, "test_acc1": 75.88200265167237, "test_acc5": 92.70400253234864, "epoch": 279, "n_parameters": 8802912} -{"train_lr": 0.0012705915173940611, "train_loss": 2.3629370184898377, "test_loss": 0.9747664972263224, "test_acc1": 76.03400268463135, "test_acc5": 92.86000253814697, "epoch": 280, "n_parameters": 8802912} -{"train_lr": 0.0012576598461352462, "train_loss": 2.371208876991272, "test_loss": 0.9573772643857142, "test_acc1": 76.43000236694336, "test_acc5": 92.94600271484374, "epoch": 281, "n_parameters": 8802912} -{"train_lr": 0.0012447645991812122, "train_loss": 2.3600691187858582, "test_loss": 1.0026746427311617, "test_acc1": 75.87400247955323, "test_acc5": 92.54400256103516, "epoch": 282, "n_parameters": 8802912} -{"train_lr": 0.001231906405028045, "train_loss": 2.373417446422577, "test_loss": 0.9672680793001371, "test_acc1": 76.33600237579346, "test_acc5": 92.9960029095459, "epoch": 283, "n_parameters": 8802912} -{"train_lr": 0.0012190858903660415, "train_loss": 2.3592100303173065, "test_loss": 0.9725156866890543, "test_acc1": 75.99600247589112, "test_acc5": 92.67600259155273, "epoch": 284, "n_parameters": 8802912} -{"train_lr": 0.0012063036800490023, "train_loss": 2.3650057861566545, 
"test_loss": 0.9488397589062944, "test_acc1": 76.42800273864746, "test_acc5": 93.27200248138428, "epoch": 285, "n_parameters": 8802912} -{"train_lr": 0.0011935603970637625, "train_loss": 2.3528033335208893, "test_loss": 0.9492182021631914, "test_acc1": 76.58400264312745, "test_acc5": 92.98200251068116, "epoch": 286, "n_parameters": 8802912} -{"train_lr": 0.0011808566625000356, "train_loss": 2.353589914727211, "test_loss": 1.0051489154643871, "test_acc1": 75.86400267944336, "test_acc5": 92.63400263183594, "epoch": 287, "n_parameters": 8802912} -{"train_lr": 0.0011681930955198627, "train_loss": 2.3433619570732116, "test_loss": 0.9575225932194906, "test_acc1": 76.24600258209229, "test_acc5": 92.87200269989013, "epoch": 288, "n_parameters": 8802912} -{"train_lr": 0.0011555703133276894, "train_loss": 2.343400845503807, "test_loss": 0.9462038348702824, "test_acc1": 76.23800230865479, "test_acc5": 93.10200283416748, "epoch": 289, "n_parameters": 8802912} -{"train_lr": 0.0011429889311400574, "train_loss": 2.336573574781418, "test_loss": 0.9556533036863103, "test_acc1": 76.36800242462158, "test_acc5": 93.10800251037598, "epoch": 290, "n_parameters": 8802912} -{"train_lr": 0.0011304495621557978, "train_loss": 2.3357489563941956, "test_loss": 0.9453259548720192, "test_acc1": 76.42200252502441, "test_acc5": 93.16800236938477, "epoch": 291, "n_parameters": 8802912} -{"train_lr": 0.0011179528175260622, "train_loss": 2.342373822808266, "test_loss": 1.0194665158496183, "test_acc1": 75.40600251708985, "test_acc5": 92.58600239074707, "epoch": 292, "n_parameters": 8802912} -{"train_lr": 0.0011054993063245714, "train_loss": 2.342305346941948, "test_loss": 0.9247610174557742, "test_acc1": 76.88200223815917, "test_acc5": 93.34000252197265, "epoch": 293, "n_parameters": 8802912} -{"train_lr": 0.0010930896355179074, "train_loss": 2.3317987708568575, "test_loss": 0.9394776582279626, "test_acc1": 76.48200255859375, "test_acc5": 92.99200259490966, "epoch": 294, "n_parameters": 8802912} -{"train_lr": 0.0010807244099358875, "train_loss": 2.335050850224495, "test_loss": 0.9719734180937795, "test_acc1": 75.94400268951416, "test_acc5": 93.01200268859863, "epoch": 295, "n_parameters": 8802912} -{"train_lr": 0.0010684042322421535, "train_loss": 2.337995481514931, "test_loss": 0.9493657037177506, "test_acc1": 76.70000260375977, "test_acc5": 93.13000244140625, "epoch": 296, "n_parameters": 8802912} -{"train_lr": 0.0010561297029048062, "train_loss": 2.3403138323545454, "test_loss": 0.9464916731504833, "test_acc1": 76.34000235931397, "test_acc5": 93.05400253295899, "epoch": 297, "n_parameters": 8802912} -{"train_lr": 0.0010439014201670813, "train_loss": 2.3272110107421873, "test_loss": 0.9458064929965664, "test_acc1": 76.65000242279052, "test_acc5": 93.10400242279053, "epoch": 298, "n_parameters": 8802912} -{"train_lr": 0.0010317199800182345, "train_loss": 2.3235699743509293, "test_loss": 0.9332919046282768, "test_acc1": 76.61600272644043, "test_acc5": 93.19800258331298, "epoch": 299, "n_parameters": 8802912} -{"train_lr": 0.0010195859761644697, "train_loss": 2.3241101622581484, "test_loss": 0.9187867321512279, "test_acc1": 76.81000269927979, "test_acc5": 93.29200252227783, "epoch": 300, "n_parameters": 8802912} -{"train_lr": 0.0010075000000000067, "train_loss": 2.321313760781288, "test_loss": 0.9624901234227068, "test_acc1": 76.57200276824952, "test_acc5": 93.18800236602783, "epoch": 301, "n_parameters": 8802912} -{"train_lr": 0.000995462640578269, "train_loss": 2.3237937082529068, "test_loss": 0.9206300713121891, 
"test_acc1": 77.16800258728027, "test_acc5": 93.51400260284424, "epoch": 302, "n_parameters": 8802912} -{"train_lr": 0.0009834744845832106, "train_loss": 2.3193284062623976, "test_loss": 0.944353304365102, "test_acc1": 76.54800273040772, "test_acc5": 93.13800229248046, "epoch": 303, "n_parameters": 8802912} -{"train_lr": 0.0009715361163006195, "train_loss": 2.3055600822925566, "test_loss": 0.9389462514835245, "test_acc1": 76.60800243377686, "test_acc5": 93.238002633667, "epoch": 304, "n_parameters": 8802912} -{"train_lr": 0.000959648117589686, "train_loss": 2.319485888648033, "test_loss": 0.921706677578828, "test_acc1": 77.15800247283936, "test_acc5": 93.42200260833741, "epoch": 305, "n_parameters": 8802912} -{"train_lr": 0.0009478110678547599, "train_loss": 2.306406626009941, "test_loss": 0.9588734373888549, "test_acc1": 76.36400208831787, "test_acc5": 93.06600306030273, "epoch": 306, "n_parameters": 8802912} -{"train_lr": 0.0009360255440169043, "train_loss": 2.2972592967271805, "test_loss": 0.930164492524722, "test_acc1": 76.68200244598388, "test_acc5": 93.3520025970459, "epoch": 307, "n_parameters": 8802912} -{"train_lr": 0.0009242921204859447, "train_loss": 2.2967053968191147, "test_loss": 0.9124530385084012, "test_acc1": 77.10600272460937, "test_acc5": 93.45800254180908, "epoch": 308, "n_parameters": 8802912} -{"train_lr": 0.0009126113691323534, "train_loss": 2.2972530581712722, "test_loss": 0.934481860302827, "test_acc1": 76.56800255096435, "test_acc5": 93.22600243499755, "epoch": 309, "n_parameters": 8802912} -{"train_lr": 0.0009009838592595214, "train_loss": 2.3038690823793413, "test_loss": 0.9395043738186359, "test_acc1": 76.94400238677979, "test_acc5": 93.12800250305176, "epoch": 310, "n_parameters": 8802912} -{"train_lr": 0.0008894101575758612, "train_loss": 2.2868576136827468, "test_loss": 0.9219647803727318, "test_acc1": 77.10400232513427, "test_acc5": 93.35600211120605, "epoch": 311, "n_parameters": 8802912} -{"train_lr": 0.0008778908281672491, "train_loss": 2.2908714654445648, "test_loss": 0.9186732569599853, "test_acc1": 77.01600248352051, "test_acc5": 93.31600253814698, "epoch": 312, "n_parameters": 8802912} -{"train_lr": 0.0008664264324695524, "train_loss": 2.285632735681534, "test_loss": 0.9020089035963311, "test_acc1": 77.40000246765136, "test_acc5": 93.54800218841552, "epoch": 313, "n_parameters": 8802912} -{"train_lr": 0.0008550175292412688, "train_loss": 2.2894469646692275, "test_loss": 0.9261526430354399, "test_acc1": 76.92600265350342, "test_acc5": 93.33600241394043, "epoch": 314, "n_parameters": 8802912} -{"train_lr": 0.0008436646745362156, "train_loss": 2.2856356060028076, "test_loss": 0.8992229719810626, "test_acc1": 77.43000267181397, "test_acc5": 93.57600261199951, "epoch": 315, "n_parameters": 8802912} -{"train_lr": 0.0008323684216765116, "train_loss": 2.282478279185295, "test_loss": 0.9078522070365793, "test_acc1": 77.3640024798584, "test_acc5": 93.66000294311523, "epoch": 316, "n_parameters": 8802912} -{"train_lr": 0.0008211293212256214, "train_loss": 2.2807609046697617, "test_loss": 0.9158135786214295, "test_acc1": 77.17600261932373, "test_acc5": 93.36800268737792, "epoch": 317, "n_parameters": 8802912} -{"train_lr": 0.0008099479209613996, "train_loss": 2.276206053781509, "test_loss": 0.9013342629460728, "test_acc1": 77.52400239196777, "test_acc5": 93.62800251586914, "epoch": 318, "n_parameters": 8802912} -{"train_lr": 0.0007988247658495707, "train_loss": 2.2865639132738114, "test_loss": 0.9110528682084644, "test_acc1": 77.43400238952637, "test_acc5": 
93.5880027734375, "epoch": 319, "n_parameters": 8802912} -{"train_lr": 0.0007877603980169765, "train_loss": 2.2741829414844514, "test_loss": 0.8722478865700609, "test_acc1": 77.89000231994629, "test_acc5": 93.85200241394043, "epoch": 320, "n_parameters": 8802912} -{"train_lr": 0.0007767553567253202, "train_loss": 2.261508320879936, "test_loss": 0.8891522259834934, "test_acc1": 77.78400280456543, "test_acc5": 93.7920022845459, "epoch": 321, "n_parameters": 8802912} -{"train_lr": 0.0007658101783447642, "train_loss": 2.2624458954811097, "test_loss": 0.8744938577360967, "test_acc1": 77.74800251281738, "test_acc5": 93.78200236419677, "epoch": 322, "n_parameters": 8802912} -{"train_lr": 0.0007549253963278913, "train_loss": 2.272594901895523, "test_loss": 0.892631834044176, "test_acc1": 77.78200269927979, "test_acc5": 93.69000268432617, "epoch": 323, "n_parameters": 8802912} -{"train_lr": 0.0007441015411836098, "train_loss": 2.2591507011413574, "test_loss": 0.8698577920303625, "test_acc1": 78.18800238372803, "test_acc5": 93.82800220977784, "epoch": 324, "n_parameters": 8802912} -{"train_lr": 0.0007333391404513692, "train_loss": 2.253259333062172, "test_loss": 0.8891201933078906, "test_acc1": 77.94400256011963, "test_acc5": 93.80000277008057, "epoch": 325, "n_parameters": 8802912} -{"train_lr": 0.0007226387186753506, "train_loss": 2.251421787452698, "test_loss": 0.8865866871441112, "test_acc1": 77.94400261657715, "test_acc5": 93.88600297912598, "epoch": 326, "n_parameters": 8802912} -{"train_lr": 0.0007120007973790458, "train_loss": 2.2556623353004457, "test_loss": 0.8618302704656825, "test_acc1": 78.27400252410888, "test_acc5": 93.95600274780273, "epoch": 327, "n_parameters": 8802912} -{"train_lr": 0.0007014258950397421, "train_loss": 2.257809988808632, "test_loss": 0.8893819276024314, "test_acc1": 77.75400210235595, "test_acc5": 93.79000258911132, "epoch": 328, "n_parameters": 8802912} -{"train_lr": 0.0006909145270632263, "train_loss": 2.2548349143981934, "test_loss": 0.8678661569514695, "test_acc1": 78.27800251281738, "test_acc5": 93.96200239013672, "epoch": 329, "n_parameters": 8802912} -{"train_lr": 0.0006804672057587739, "train_loss": 2.254263136982918, "test_loss": 0.884122750338386, "test_acc1": 77.8300027078247, "test_acc5": 93.82400236816406, "epoch": 330, "n_parameters": 8802912} -{"train_lr": 0.0006700844403140784, "train_loss": 2.2395645513296127, "test_loss": 0.8836199985269237, "test_acc1": 77.83800224670411, "test_acc5": 93.76400254760742, "epoch": 331, "n_parameters": 8802912} -{"train_lr": 0.0006597667367704799, "train_loss": 2.2476229377269745, "test_loss": 0.8688090829288259, "test_acc1": 78.28400264404297, "test_acc5": 94.04000263092041, "epoch": 332, "n_parameters": 8802912} -{"train_lr": 0.0006495145979982786, "train_loss": 2.242548645043373, "test_loss": 0.8614001806606265, "test_acc1": 78.24600238739013, "test_acc5": 94.02200260040283, "epoch": 333, "n_parameters": 8802912} -{"train_lr": 0.0006393285236722668, "train_loss": 2.2469553867578504, "test_loss": 0.8792864809579709, "test_acc1": 78.03200247039796, "test_acc5": 94.00000252166748, "epoch": 334, "n_parameters": 8802912} -{"train_lr": 0.000629209010247336, "train_loss": 2.231515493106842, "test_loss": 0.857284192872398, "test_acc1": 78.32200233154298, "test_acc5": 94.10200256683349, "epoch": 335, "n_parameters": 8802912} -{"train_lr": 0.0006191565509343066, "train_loss": 2.2296433639764786, "test_loss": 0.8646224715253886, "test_acc1": 78.38600276916505, "test_acc5": 94.18200260559082, "epoch": 336, "n_parameters": 
8802912} -{"train_lr": 0.0006091716356758274, "train_loss": 2.230228359770775, "test_loss": 0.8842549324035645, "test_acc1": 78.01400243377685, "test_acc5": 93.85600263336181, "epoch": 337, "n_parameters": 8802912} -{"train_lr": 0.0005992547511226205, "train_loss": 2.2186210971832274, "test_loss": 0.8422662168741226, "test_acc1": 78.67000244293213, "test_acc5": 94.2380025479126, "epoch": 338, "n_parameters": 8802912} -{"train_lr": 0.0005894063806096327, "train_loss": 2.230004249238968, "test_loss": 0.8510870764798978, "test_acc1": 78.6060021749878, "test_acc5": 94.19400251281738, "epoch": 339, "n_parameters": 8802912} -{"train_lr": 0.000579627004132555, "train_loss": 2.2259789878368377, "test_loss": 0.8678487335496089, "test_acc1": 78.38600250488281, "test_acc5": 93.99600277404785, "epoch": 340, "n_parameters": 8802912} -{"train_lr": 0.0005699170983243841, "train_loss": 2.219413055062294, "test_loss": 0.8531131571268334, "test_acc1": 78.48600224731446, "test_acc5": 94.14800271759033, "epoch": 341, "n_parameters": 8802912} -{"train_lr": 0.0005602771364322523, "train_loss": 2.2194655583143232, "test_loss": 0.8383295917335678, "test_acc1": 78.80000256561279, "test_acc5": 94.32400245758056, "epoch": 342, "n_parameters": 8802912} -{"train_lr": 0.0005507075882942857, "train_loss": 2.2222323662042616, "test_loss": 0.8608833562363597, "test_acc1": 78.24600257751464, "test_acc5": 94.08200264312744, "epoch": 343, "n_parameters": 8802912} -{"train_lr": 0.0005412089203167633, "train_loss": 2.210113369822502, "test_loss": 0.8309423627660555, "test_acc1": 78.82000258239746, "test_acc5": 94.50200248962402, "epoch": 344, "n_parameters": 8802912} -{"train_lr": 0.0005317815954513637, "train_loss": 2.203175178861618, "test_loss": 0.8365735659266219, "test_acc1": 78.95200244476318, "test_acc5": 94.34200248382568, "epoch": 345, "n_parameters": 8802912} -{"train_lr": 0.0005224260731725992, "train_loss": 2.2141412459135057, "test_loss": 0.8454593665459577, "test_acc1": 78.68600238372802, "test_acc5": 94.20800258636474, "epoch": 346, "n_parameters": 8802912} -{"train_lr": 0.00051314280945543, "train_loss": 2.2149076700687407, "test_loss": 0.848565008272143, "test_acc1": 78.69800232299805, "test_acc5": 94.32200253601074, "epoch": 347, "n_parameters": 8802912} -{"train_lr": 0.0005039322567530305, "train_loss": 2.2021272256851194, "test_loss": 0.8389715390170321, "test_acc1": 78.8700029510498, "test_acc5": 94.24600246185302, "epoch": 348, "n_parameters": 8802912} -{"train_lr": 0.0004947948639747458, "train_loss": 2.1929888575315477, "test_loss": 0.8375660413328339, "test_acc1": 78.90600257720948, "test_acc5": 94.37000258636475, "epoch": 349, "n_parameters": 8802912} -{"train_lr": 0.0004857310764642128, "train_loss": 2.199724263572693, "test_loss": 0.8305928143946564, "test_acc1": 79.22800258087159, "test_acc5": 94.45800252319336, "epoch": 350, "n_parameters": 8802912} -{"train_lr": 0.00047674133597763773, "train_loss": 2.196790407681465, "test_loss": 0.8294842705568847, "test_acc1": 79.14800280853271, "test_acc5": 94.4440028012085, "epoch": 351, "n_parameters": 8802912} -{"train_lr": 0.00046782608066229685, "train_loss": 2.197732003068924, "test_loss": 0.8354320569949991, "test_acc1": 78.98600252929687, "test_acc5": 94.382002711792, "epoch": 352, "n_parameters": 8802912} -{"train_lr": 0.0004589857450351661, "train_loss": 2.189996877503395, "test_loss": 0.8315567365464043, "test_acc1": 79.09600269165038, "test_acc5": 94.41600259765625, "epoch": 353, "n_parameters": 8802912} -{"train_lr": 0.000450220759961707, 
"train_loss": 2.184710729289055, "test_loss": 0.8259145602145616, "test_acc1": 79.08200255981446, "test_acc5": 94.46400259796143, "epoch": 354, "n_parameters": 8802912} -{"train_lr": 0.0004415315526349521, "train_loss": 2.1880140792608263, "test_loss": 0.827222386484637, "test_acc1": 79.13200244262696, "test_acc5": 94.43800255279541, "epoch": 355, "n_parameters": 8802912} -{"train_lr": 0.0004329185465545854, "train_loss": 2.18097491979599, "test_loss": 0.8300838262280997, "test_acc1": 79.06600223724365, "test_acc5": 94.43400235137939, "epoch": 356, "n_parameters": 8802912} -{"train_lr": 0.0004243821615063944, "train_loss": 2.1783664808273318, "test_loss": 0.8278282766833025, "test_acc1": 79.12400278961182, "test_acc5": 94.44000233520508, "epoch": 357, "n_parameters": 8802912} -{"train_lr": 0.00041592281354172557, "train_loss": 2.1835879442214967, "test_loss": 0.8177238253547865, "test_acc1": 79.30800220336914, "test_acc5": 94.4780025479126, "epoch": 358, "n_parameters": 8802912} -{"train_lr": 0.0004075409149572814, "train_loss": 2.1709422487974166, "test_loss": 0.8121458960368353, "test_acc1": 79.47600254089356, "test_acc5": 94.526002527771, "epoch": 359, "n_parameters": 8802912} -{"train_lr": 0.000399236874274954, "train_loss": 2.169428454065323, "test_loss": 0.8159934987916666, "test_acc1": 79.33800268249512, "test_acc5": 94.59600259002686, "epoch": 360, "n_parameters": 8802912} -{"train_lr": 0.00039101109622197687, "train_loss": 2.1707143297195435, "test_loss": 0.8256213956457727, "test_acc1": 79.22600250152588, "test_acc5": 94.49200259735107, "epoch": 361, "n_parameters": 8802912} -{"train_lr": 0.000382863981711175, "train_loss": 2.172377712082863, "test_loss": 0.8224404890309361, "test_acc1": 79.39400224884034, "test_acc5": 94.48000241943359, "epoch": 362, "n_parameters": 8802912} -{"train_lr": 0.0003747959278214157, "train_loss": 2.1745034873485567, "test_loss": 0.8070758815635654, "test_acc1": 79.64200261260986, "test_acc5": 94.73400264434814, "epoch": 363, "n_parameters": 8802912} -{"train_lr": 0.00036680732777826604, "train_loss": 2.1633327790260317, "test_loss": 0.8091909740777576, "test_acc1": 79.59800251281739, "test_acc5": 94.616002477417, "epoch": 364, "n_parameters": 8802912} -{"train_lr": 0.00035889857093479767, "train_loss": 2.159312206864357, "test_loss": 0.811264168471098, "test_acc1": 79.38600269470214, "test_acc5": 94.64600235992431, "epoch": 365, "n_parameters": 8802912} -{"train_lr": 0.00035107004275269313, "train_loss": 2.157475806260109, "test_loss": 0.8024204075336456, "test_acc1": 79.57200257720947, "test_acc5": 94.70000266723633, "epoch": 366, "n_parameters": 8802912} -{"train_lr": 0.00034332212478335543, "train_loss": 2.1555263712882997, "test_loss": 0.7955219614155152, "test_acc1": 79.71400283111572, "test_acc5": 94.71400278533936, "epoch": 367, "n_parameters": 8802912} -{"train_lr": 0.0003356551946493703, "train_loss": 2.150041740512848, "test_loss": 0.7990382059532053, "test_acc1": 79.63000267578126, "test_acc5": 94.84000284576416, "epoch": 368, "n_parameters": 8802912} -{"train_lr": 0.0003280696260261078, "train_loss": 2.1519452327489854, "test_loss": 0.7973103578038075, "test_acc1": 79.71600276123047, "test_acc5": 94.736002293396, "epoch": 369, "n_parameters": 8802912} -{"train_lr": 0.00032056578862347564, "train_loss": 2.147808137202263, "test_loss": 0.8042589018450064, "test_acc1": 79.60400271118164, "test_acc5": 94.76600254821777, "epoch": 370, "n_parameters": 8802912} -{"train_lr": 0.00031314404816792945, "train_loss": 2.149571741271019, "test_loss": 
0.7960999717607218, "test_acc1": 79.83800244659423, "test_acc5": 94.72000258300781, "epoch": 371, "n_parameters": 8802912} -{"train_lr": 0.00030580476638461713, "train_loss": 2.148436837053299, "test_loss": 0.794623577638584, "test_acc1": 79.89800252746582, "test_acc5": 94.83000266479492, "epoch": 372, "n_parameters": 8802912} -{"train_lr": 0.0002985483009797873, "train_loss": 2.13867980029583, "test_loss": 0.7938631772994995, "test_acc1": 79.82200272216797, "test_acc5": 94.74600280181885, "epoch": 373, "n_parameters": 8802912} -{"train_lr": 0.00029137500562332014, "train_loss": 2.1493874601840974, "test_loss": 0.79805342699675, "test_acc1": 79.84800245178222, "test_acc5": 94.79400213104248, "epoch": 374, "n_parameters": 8802912} -{"train_lr": 0.000284285229931521, "train_loss": 2.132440699839592, "test_loss": 0.7987377680838108, "test_acc1": 79.80000246185303, "test_acc5": 94.82400234924316, "epoch": 375, "n_parameters": 8802912} -{"train_lr": 0.00027727931945004304, "train_loss": 2.151890300130844, "test_loss": 0.7864132416160667, "test_acc1": 79.98800271942139, "test_acc5": 94.83000248046875, "epoch": 376, "n_parameters": 8802912} -{"train_lr": 0.00027035761563708795, "train_loss": 2.123177099132538, "test_loss": 0.7831812656539328, "test_acc1": 80.03000282623292, "test_acc5": 94.95200261657715, "epoch": 377, "n_parameters": 8802912} -{"train_lr": 0.0002635204558467305, "train_loss": 2.1275127417087556, "test_loss": 0.7836187635274494, "test_acc1": 79.92600243774415, "test_acc5": 94.81600231018066, "epoch": 378, "n_parameters": 8802912} -{"train_lr": 0.0002567681733124936, "train_loss": 2.1135940309524535, "test_loss": 0.7817345038056374, "test_acc1": 80.00800257446289, "test_acc5": 94.90800220977783, "epoch": 379, "n_parameters": 8802912} -{"train_lr": 0.0002501010971311009, "train_loss": 2.1325760640621185, "test_loss": 0.7893616820082945, "test_acc1": 79.82800258270264, "test_acc5": 94.82400265045166, "epoch": 380, "n_parameters": 8802912} -{"train_lr": 0.00024351955224644067, "train_loss": 2.126667087006569, "test_loss": 0.791089139002211, "test_acc1": 79.8520023703003, "test_acc5": 94.96600269226074, "epoch": 381, "n_parameters": 8802912} -{"train_lr": 0.00023702385943372427, "train_loss": 2.119239133453369, "test_loss": 0.7867367659859797, "test_acc1": 80.0180024710083, "test_acc5": 94.8580027142334, "epoch": 382, "n_parameters": 8802912} -{"train_lr": 0.00023061433528386214, "train_loss": 2.1252701262712477, "test_loss": 0.7793995845405495, "test_acc1": 80.11800259399413, "test_acc5": 94.95400231842041, "epoch": 383, "n_parameters": 8802912} -{"train_lr": 0.00022429129218801117, "train_loss": 2.11843846988678, "test_loss": 0.7795084717080873, "test_acc1": 80.17000235412597, "test_acc5": 94.93000254455566, "epoch": 384, "n_parameters": 8802912} -{"train_lr": 0.00021805503832237022, "train_loss": 2.115817010307312, "test_loss": 0.7761347907430985, "test_acc1": 80.20800270446777, "test_acc5": 94.90200281005859, "epoch": 385, "n_parameters": 8802912} -{"train_lr": 0.00021190587763316056, "train_loss": 2.120494583106041, "test_loss": 0.7759140889872523, "test_acc1": 80.24000234588623, "test_acc5": 95.02800281799317, "epoch": 386, "n_parameters": 8802912} -{"train_lr": 0.00020584410982180034, "train_loss": 2.115719668889046, "test_loss": 0.7700108931783367, "test_acc1": 80.27600261627197, "test_acc5": 95.09000238830566, "epoch": 387, "n_parameters": 8802912} -{"train_lr": 0.0001998700303302881, "train_loss": 2.11289814286232, "test_loss": 0.7699887252905789, "test_acc1": 
80.22800247131347, "test_acc5": 95.02200253936768, "epoch": 388, "n_parameters": 8802912} -{"train_lr": 0.00019398393032684917, "train_loss": 2.0959777050971984, "test_loss": 0.7745645852649913, "test_acc1": 80.3800025164795, "test_acc5": 94.96400282714843, "epoch": 389, "n_parameters": 8802912} -{"train_lr": 0.0001881860966916756, "train_loss": 2.103974385499954, "test_loss": 0.7662942576057771, "test_acc1": 80.48200259094239, "test_acc5": 95.12400251922607, "epoch": 390, "n_parameters": 8802912} -{"train_lr": 0.00018247681200301023, "train_loss": 2.10454733171463, "test_loss": 0.7714541385717252, "test_acc1": 80.37200292572021, "test_acc5": 95.06600265533447, "epoch": 391, "n_parameters": 8802912} -{"train_lr": 0.00017685635452333062, "train_loss": 2.0966897549390793, "test_loss": 0.7727652121992672, "test_acc1": 80.32800253295899, "test_acc5": 95.03800239318848, "epoch": 392, "n_parameters": 8802912} -{"train_lr": 0.00017132499818580898, "train_loss": 2.1037274630069733, "test_loss": 0.768835389000528, "test_acc1": 80.44400224090576, "test_acc5": 95.04400253692627, "epoch": 393, "n_parameters": 8802912} -{"train_lr": 0.00016588301258094182, "train_loss": 2.0978885699033736, "test_loss": 0.7629450767794076, "test_acc1": 80.47600234039307, "test_acc5": 95.18600258666993, "epoch": 394, "n_parameters": 8802912} -{"train_lr": 0.0001605306629434379, "train_loss": 2.093850972175598, "test_loss": 0.7739407255807343, "test_acc1": 80.2780027947998, "test_acc5": 95.03000229980469, "epoch": 395, "n_parameters": 8802912} -{"train_lr": 0.00015526821013925752, "train_loss": 2.0971151699066164, "test_loss": 0.7688054757959703, "test_acc1": 80.4600026864624, "test_acc5": 95.12400244445801, "epoch": 396, "n_parameters": 8802912} -{"train_lr": 0.00015009591065294023, "train_loss": 2.0893484116077423, "test_loss": 0.7621755527661127, "test_acc1": 80.46800256988526, "test_acc5": 95.13400272064209, "epoch": 397, "n_parameters": 8802912} -{"train_lr": 0.00014501401657505492, "train_loss": 2.0933464641809465, "test_loss": 0.7677063665845815, "test_acc1": 80.44200261383057, "test_acc5": 95.11200270111084, "epoch": 398, "n_parameters": 8802912} -{"train_lr": 0.0001400227755899522, "train_loss": 2.0883269896984102, "test_loss": 0.7577400575665867, "test_acc1": 80.70000294525147, "test_acc5": 95.14400259460449, "epoch": 399, "n_parameters": 8802912} -{"train_lr": 0.00013512243096367772, "train_loss": 2.079770263314247, "test_loss": 0.7572178630267873, "test_acc1": 80.67400243530274, "test_acc5": 95.19800227935791, "epoch": 400, "n_parameters": 8802912} -{"train_lr": 0.00013031322153211376, "train_loss": 2.082069941997528, "test_loss": 0.7561709644163356, "test_acc1": 80.61000227874756, "test_acc5": 95.20400253814697, "epoch": 401, "n_parameters": 8802912} -{"train_lr": 0.00012559538168934326, "train_loss": 2.084484007740021, "test_loss": 0.7602436031927081, "test_acc1": 80.61600277557373, "test_acc5": 95.13800278442383, "epoch": 402, "n_parameters": 8802912} -{"train_lr": 0.00012096914137622728, "train_loss": 2.0829938852548597, "test_loss": 0.7547944591325872, "test_acc1": 80.66200234039307, "test_acc5": 95.26200250305176, "epoch": 403, "n_parameters": 8802912} -{"train_lr": 0.00011643472606918499, "train_loss": 2.075128895330429, "test_loss": 0.7542206498191637, "test_acc1": 80.65400250732422, "test_acc5": 95.21200221343994, "epoch": 404, "n_parameters": 8802912} -{"train_lr": 0.00011199235676923019, "train_loss": 2.0784793405532835, "test_loss": 0.7528919511858154, "test_acc1": 80.80800278167725, "test_acc5": 
95.25000248077393, "epoch": 405, "n_parameters": 8802912} -{"train_lr": 0.00010764224999117014, "train_loss": 2.0771301867246628, "test_loss": 0.7559833885992274, "test_acc1": 80.68600258911133, "test_acc5": 95.29000258117676, "epoch": 406, "n_parameters": 8802912} -{"train_lr": 0.0001033846177530702, "train_loss": 2.0800893121004105, "test_loss": 0.7514787669129231, "test_acc1": 80.77800271881104, "test_acc5": 95.21800253875732, "epoch": 407, "n_parameters": 8802912} -{"train_lr": 9.921966756592387e-05, "train_loss": 2.0728273374080657, "test_loss": 0.7476655667757287, "test_acc1": 80.80400246276855, "test_acc5": 95.32600263977051, "epoch": 408, "n_parameters": 8802912} -{"train_lr": 9.514760242352498e-05, "train_loss": 2.08168813765049, "test_loss": 0.7560478369979298, "test_acc1": 80.80600250183106, "test_acc5": 95.26600251861572, "epoch": 409, "n_parameters": 8802912} -{"train_lr": 9.116862079258612e-05, "train_loss": 2.0711160086154936, "test_loss": 0.7561923048513777, "test_acc1": 80.77200240966796, "test_acc5": 95.3280022970581, "epoch": 410, "n_parameters": 8802912} -{"train_lr": 8.728291660305303e-05, "train_loss": 2.067354734826088, "test_loss": 0.7478374071857509, "test_acc1": 80.76800252746582, "test_acc5": 95.3500026095581, "epoch": 411, "n_parameters": 8802912} -{"train_lr": 8.349067923867126e-05, "train_loss": 2.064986595416069, "test_loss": 0.7517838250188267, "test_acc1": 80.84400254119873, "test_acc5": 95.30400243652343, "epoch": 412, "n_parameters": 8802912} -{"train_lr": 7.979209352773835e-05, "train_loss": 2.06815638589859, "test_loss": 0.7480608752983458, "test_acc1": 80.85600279785156, "test_acc5": 95.36400281036377, "epoch": 413, "n_parameters": 8802912} -{"train_lr": 7.618733973410262e-05, "train_loss": 2.070025790333748, "test_loss": 0.7489545932587456, "test_acc1": 80.88000235717773, "test_acc5": 95.35000233886718, "epoch": 414, "n_parameters": 8802912} -{"train_lr": 7.267659354838017e-05, "train_loss": 2.0613095519781113, "test_loss": 0.7469652518630028, "test_acc1": 80.82800259429932, "test_acc5": 95.37000270904541, "epoch": 415, "n_parameters": 8802912} -{"train_lr": 6.926002607938772e-05, "train_loss": 2.062598721575737, "test_loss": 0.7450882790281492, "test_acc1": 80.9560024798584, "test_acc5": 95.36400226074218, "epoch": 416, "n_parameters": 8802912} -{"train_lr": 6.593780384579997e-05, "train_loss": 2.051753142786026, "test_loss": 0.7477088728371788, "test_acc1": 80.93400256347657, "test_acc5": 95.35600272857666, "epoch": 417, "n_parameters": 8802912} -{"train_lr": 6.27100887680448e-05, "train_loss": 2.0520748597860337, "test_loss": 0.744843174210366, "test_acc1": 80.9760018157959, "test_acc5": 95.39000267974853, "epoch": 418, "n_parameters": 8802912} -{"train_lr": 5.957703816040123e-05, "train_loss": 2.0614806449174883, "test_loss": 0.7482929852078942, "test_acc1": 80.90600258026123, "test_acc5": 95.37000234130859, "epoch": 419, "n_parameters": 8802912} -{"train_lr": 5.6538804723335324e-05, "train_loss": 2.06133599255085, "test_loss": 0.7455202520770186, "test_acc1": 80.96000234588622, "test_acc5": 95.3340026727295, "epoch": 420, "n_parameters": 8802912} -{"train_lr": 5.359553653605782e-05, "train_loss": 2.0665272137880324, "test_loss": 0.7455598554190468, "test_acc1": 81.05000299835206, "test_acc5": 95.40000262023926, "epoch": 421, "n_parameters": 8802912} -{"train_lr": 5.0747377049308795e-05, "train_loss": 2.056276075196266, "test_loss": 0.7428394777371603, "test_acc1": 81.07600289794922, "test_acc5": 95.38200244445801, "epoch": 422, "n_parameters": 
8802912} -{"train_lr": 4.799446507836315e-05, "train_loss": 2.0539174494981767, "test_loss": 0.7406903768286985, "test_acc1": 81.05400259979248, "test_acc5": 95.41200268707276, "epoch": 423, "n_parameters": 8802912} -{"train_lr": 4.533693479626563e-05, "train_loss": 2.0527466562509535, "test_loss": 0.741633709958371, "test_acc1": 81.02200249420166, "test_acc5": 95.40200267547607, "epoch": 424, "n_parameters": 8802912} -{"train_lr": 4.2774915727294984e-05, "train_loss": 2.0493616263628005, "test_loss": 0.7399411880794693, "test_acc1": 81.0700024154663, "test_acc5": 95.46000270111084, "epoch": 425, "n_parameters": 8802912} -{"train_lr": 4.030853274064522e-05, "train_loss": 2.056658226656914, "test_loss": 0.7414427699849886, "test_acc1": 81.11000255828857, "test_acc5": 95.44200269500732, "epoch": 426, "n_parameters": 8802912} -{"train_lr": 3.793790604434225e-05, "train_loss": 2.0521021610975265, "test_loss": 0.7408183821860481, "test_acc1": 81.0760026638794, "test_acc5": 95.40400245788574, "epoch": 427, "n_parameters": 8802912} -{"train_lr": 3.5663151179389266e-05, "train_loss": 2.051672790026665, "test_loss": 0.740456909598673, "test_acc1": 81.1180026940918, "test_acc5": 95.42800255096435, "epoch": 428, "n_parameters": 8802912} -{"train_lr": 3.348437901412699e-05, "train_loss": 2.047314824080467, "test_loss": 0.7436066622681478, "test_acc1": 81.06000278381347, "test_acc5": 95.37200267547607, "epoch": 429, "n_parameters": 8802912} -{"train_lr": 3.14016957388384e-05, "train_loss": 2.054726559972763, "test_loss": 0.7379481614950825, "test_acc1": 81.18800238220214, "test_acc5": 95.4400025326538, "epoch": 430, "n_parameters": 8802912} -{"train_lr": 2.9415202860566895e-05, "train_loss": 2.057950229239464, "test_loss": 0.7449111938476562, "test_acc1": 81.0300023513794, "test_acc5": 95.41400277038574, "epoch": 431, "n_parameters": 8802912} -{"train_lr": 2.7524997198174166e-05, "train_loss": 2.0420039050102234, "test_loss": 0.7364648639717523, "test_acc1": 81.13600264160156, "test_acc5": 95.4680025643921, "epoch": 432, "n_parameters": 8802912} -{"train_lr": 2.5731170877616888e-05, "train_loss": 2.0478152777194976, "test_loss": 0.7420091942390975, "test_acc1": 81.00200252990723, "test_acc5": 95.39800246612549, "epoch": 433, "n_parameters": 8802912} -{"train_lr": 2.403381132745848e-05, "train_loss": 2.0562989787578583, "test_loss": 0.73860027224702, "test_acc1": 81.11800259124756, "test_acc5": 95.42400232940673, "epoch": 434, "n_parameters": 8802912} -{"train_lr": 2.2433001274609897e-05, "train_loss": 2.0464507545232773, "test_loss": 0.7371521826614352, "test_acc1": 81.13600287597656, "test_acc5": 95.42000265930176, "epoch": 435, "n_parameters": 8802912} -{"train_lr": 2.0928818740294644e-05, "train_loss": 2.052907079935074, "test_loss": 0.7375068955999964, "test_acc1": 81.15400267791748, "test_acc5": 95.47600283782958, "epoch": 436, "n_parameters": 8802912} -{"train_lr": 1.9521337036247088e-05, "train_loss": 2.0375041194915773, "test_loss": 0.7405890125562163, "test_acc1": 81.07000239318847, "test_acc5": 95.46000237182618, "epoch": 437, "n_parameters": 8802912} -{"train_lr": 1.8210624761139314e-05, "train_loss": 2.0431576354265215, "test_loss": 0.7387776819660383, "test_acc1": 81.12000260589599, "test_acc5": 95.45800246398926, "epoch": 438, "n_parameters": 8802912} -{"train_lr": 1.6996745797238736e-05, "train_loss": 2.0486454112291335, "test_loss": 0.738609870786176, "test_acc1": 81.13400233184815, "test_acc5": 95.49000246063233, "epoch": 439, "n_parameters": 8802912} -{"train_lr": 
1.5879759307294027e-05, "train_loss": 2.0433913497924805, "test_loss": 0.7390869099865941, "test_acc1": 81.14800288421631, "test_acc5": 95.4660025112915, "epoch": 440, "n_parameters": 8802912}
-{"train_lr": 1.4859719731650575e-05, "train_loss": 2.048901915884018, "test_loss": 0.739242367884692, "test_acc1": 81.11600266113281, "test_acc5": 95.45400263641358, "epoch": 441, "n_parameters": 8802912}
-{"train_lr": 1.393667678559817e-05, "train_loss": 2.0417265288591384, "test_loss": 0.740910637904616, "test_acc1": 81.03400212249755, "test_acc5": 95.42800244995117, "epoch": 442, "n_parameters": 8802912}
-{"train_lr": 1.3110675456947718e-05, "train_loss": 2.03997964451313, "test_loss": 0.736109651186887, "test_acc1": 81.16200228454589, "test_acc5": 95.49400273681641, "epoch": 443, "n_parameters": 8802912}
-{"train_lr": 1.2381756003839114e-05, "train_loss": 2.038857909345627, "test_loss": 0.739153818172567, "test_acc1": 81.13000268615723, "test_acc5": 95.42200289642334, "epoch": 444, "n_parameters": 8802912}
-{"train_lr": 1.1749953952777368e-05, "train_loss": 2.0419404532432557, "test_loss": 0.736341873950818, "test_acc1": 81.12200244903565, "test_acc5": 95.43800255004882, "epoch": 445, "n_parameters": 8802912}
-{"train_lr": 1.1215300096904058e-05, "train_loss": 2.0453065361976623, "test_loss": 0.7371754944324493, "test_acc1": 81.17800286987304, "test_acc5": 95.45800255889893, "epoch": 446, "n_parameters": 8802912}
-{"train_lr": 1.0777820494492919e-05, "train_loss": 2.0442736115455626, "test_loss": 0.7387418946360841, "test_acc1": 81.1440025857544, "test_acc5": 95.4640027874756, "epoch": 447, "n_parameters": 8802912}
-{"train_lr": 1.0437536467683126e-05, "train_loss": 2.032862963414192, "test_loss": 0.7381034828722477, "test_acc1": 81.15000259979249, "test_acc5": 95.43200241699219, "epoch": 448, "n_parameters": 8802912}
-{"train_lr": 1.0194464601437938e-05, "train_loss": 2.0418562072992326, "test_loss": 0.7398468442261219, "test_acc1": 81.15800249359131, "test_acc5": 95.43400259735107, "epoch": 449, "n_parameters": 8802912}
diff --git a/cv/classification/repvit/pytorch/logs/repvit_m1_5_distill_300e.txt b/cv/classification/repvit/pytorch/logs/repvit_m1_5_distill_300e.txt
deleted file mode 100644
index aadff220..00000000
--- a/cv/classification/repvit/pytorch/logs/repvit_m1_5_distill_300e.txt
+++ /dev/null
@@ -1,300 +0,0 @@
-{"train_lr": 1.000000000000014e-06, "train_loss": 7.0084885094642635, "test_loss": 6.952229661100051, "test_acc1": 0.0880000067138672, "test_acc5": 0.49200002499103546, "epoch": 0, "n_parameters": 14635024}
-{"train_lr": 1.000000000000014e-06, "train_loss": 7.004423981571198, "test_loss": 6.939732621697819, "test_acc1": 0.09600000732421875, "test_acc5": 0.5280000284671783, "epoch": 1, "n_parameters": 14635024}
-{"train_lr": 0.0008007999999999933, "train_loss": 6.4324036357879635, "test_loss": 5.182443804600659, "test_acc1": 8.146000188064574, "test_acc5": 22.23200067047119, "epoch": 2, "n_parameters": 14635024}
-{"train_lr": 0.0016005999999999787, "train_loss": 5.768660827350616, "test_loss": 3.937949208652272, "test_acc1": 20.93000057144165, "test_acc5": 43.694001222076416, "epoch": 3, "n_parameters": 14635024}
-{"train_lr": 0.0024003999999999835, "train_loss": 5.122080135059357, "test_loss": 3.176813847001861, "test_acc1": 32.53200097686768, "test_acc5": 57.84400156524658, "epoch": 4, "n_parameters": 14635024}
-{"train_lr": 0.0032001999999999873, "train_loss": 4.673259150266647, "test_loss": 2.6555412455516705, "test_acc1": 41.810001302947995, "test_acc5": 67.68000182739257,
"epoch": 5, "n_parameters": 14635024} -{"train_lr": 0.003997265921835383, "train_loss": 4.381957061481476, "test_loss": 2.5195155283984016, "test_acc1": 44.79800117248535, "test_acc5": 69.9560022201538, "epoch": 6, "n_parameters": 14635024} -{"train_lr": 0.003996063323214417, "train_loss": 4.114765290307998, "test_loss": 2.208562319769579, "test_acc1": 51.1260013482666, "test_acc5": 75.59800253936767, "epoch": 7, "n_parameters": 14635024} -{"train_lr": 0.003994642382062749, "train_loss": 3.9235779257774355, "test_loss": 2.0436292422168396, "test_acc1": 53.618001389770505, "test_acc5": 77.92600272003173, "epoch": 8, "n_parameters": 14635024} -{"train_lr": 0.003993003254202742, "train_loss": 3.7721920563220976, "test_loss": 1.9707106956664253, "test_acc1": 54.85600138153076, "test_acc5": 78.8340023348999, "epoch": 9, "n_parameters": 14635024} -{"train_lr": 0.0039911461193831675, "train_loss": 3.6595628350257874, "test_loss": 1.8306503168800299, "test_acc1": 57.67200165008545, "test_acc5": 80.95800298248291, "epoch": 10, "n_parameters": 14635024} -{"train_lr": 0.003989071181259754, "train_loss": 3.5590855560302734, "test_loss": 1.7818616475252544, "test_acc1": 59.32000181640625, "test_acc5": 82.27200269226074, "epoch": 11, "n_parameters": 14635024} -{"train_lr": 0.0039867786673727715, "train_loss": 3.492379592514038, "test_loss": 1.7050994178828072, "test_acc1": 60.46200176940918, "test_acc5": 83.00600265686035, "epoch": 12, "n_parameters": 14635024} -{"train_lr": 0.0039842688291223715, "train_loss": 3.419259351873398, "test_loss": 1.6778714805841446, "test_acc1": 61.67400190490723, "test_acc5": 83.90800253814697, "epoch": 13, "n_parameters": 14635024} -{"train_lr": 0.0039815419417405275, "train_loss": 3.349964776325226, "test_loss": 1.5805353706373888, "test_acc1": 62.93600196166992, "test_acc5": 84.6020029574585, "epoch": 14, "n_parameters": 14635024} -{"train_lr": 0.003978598304261148, "train_loss": 3.3016368045806885, "test_loss": 1.5377339909181875, "test_acc1": 63.914001807556154, "test_acc5": 85.45600256958008, "epoch": 15, "n_parameters": 14635024} -{"train_lr": 0.0039754382394872915, "train_loss": 3.263258225488663, "test_loss": 1.514564213945585, "test_acc1": 64.08400176025391, "test_acc5": 85.5440023928833, "epoch": 16, "n_parameters": 14635024} -{"train_lr": 0.0039720620939556715, "train_loss": 3.2234153276443482, "test_loss": 1.5200267359614372, "test_acc1": 64.54600178039551, "test_acc5": 85.74000265869141, "epoch": 17, "n_parameters": 14635024} -{"train_lr": 0.003968470237898645, "train_loss": 3.2114568531036376, "test_loss": 1.501023080857361, "test_acc1": 64.6260018359375, "test_acc5": 85.99000271850586, "epoch": 18, "n_parameters": 14635024} -{"train_lr": 0.003964663065203757, "train_loss": 3.152591049385071, "test_loss": 1.4462598461438627, "test_acc1": 65.73200197601318, "test_acc5": 86.68200242004394, "epoch": 19, "n_parameters": 14635024} -{"train_lr": 0.003960640993370302, "train_loss": 3.1269438134670255, "test_loss": 1.4120729570003117, "test_acc1": 66.39400201721192, "test_acc5": 87.0500026776123, "epoch": 20, "n_parameters": 14635024} -{"train_lr": 0.003956404463463954, "train_loss": 3.1087948686599733, "test_loss": 1.4612314052441542, "test_acc1": 66.28400191009521, "test_acc5": 86.85400256317139, "epoch": 21, "n_parameters": 14635024} -{"train_lr": 0.0039519539400677895, "train_loss": 3.0648625071525575, "test_loss": 1.3700746771167307, "test_acc1": 67.45000182434082, "test_acc5": 87.84200236450195, "epoch": 22, "n_parameters": 14635024} -{"train_lr": 
0.00394728991123201, "train_loss": 3.050774854040146, "test_loss": 1.358577152385431, "test_acc1": 67.5080019732666, "test_acc5": 87.71200261047363, "epoch": 23, "n_parameters": 14635024} -{"train_lr": 0.003942412888419754, "train_loss": 3.0372900277376176, "test_loss": 1.3593457349959541, "test_acc1": 67.64000206359863, "test_acc5": 87.97800241455079, "epoch": 24, "n_parameters": 14635024} -{"train_lr": 0.003937323406451619, "train_loss": 3.0208886452436445, "test_loss": 1.3685259718228788, "test_acc1": 67.50800207214355, "test_acc5": 87.77000279449463, "epoch": 25, "n_parameters": 14635024} -{"train_lr": 0.003932022023446712, "train_loss": 2.997187345266342, "test_loss": 1.3588789876769571, "test_acc1": 67.66400215972901, "test_acc5": 88.1200025491333, "epoch": 26, "n_parameters": 14635024} -{"train_lr": 0.003926509320761305, "train_loss": 2.9599620769262316, "test_loss": 1.3387346434242584, "test_acc1": 68.20800233459472, "test_acc5": 88.22600239349366, "epoch": 27, "n_parameters": 14635024} -{"train_lr": 0.003920785902925497, "train_loss": 2.959521119379997, "test_loss": 1.3170154072782572, "test_acc1": 68.74000212036132, "test_acc5": 88.57000246276856, "epoch": 28, "n_parameters": 14635024} -{"train_lr": 0.003914852397576493, "train_loss": 2.9445672343969345, "test_loss": 1.305239801897722, "test_acc1": 69.01000195037842, "test_acc5": 88.68800249511719, "epoch": 29, "n_parameters": 14635024} -{"train_lr": 0.003908709455390004, "train_loss": 2.942554824638367, "test_loss": 1.2700130562571919, "test_acc1": 69.34800177124023, "test_acc5": 88.90200264129639, "epoch": 30, "n_parameters": 14635024} -{"train_lr": 0.0039023577500088094, "train_loss": 2.909557375216484, "test_loss": 1.2669480495593126, "test_acc1": 69.7040023425293, "test_acc5": 89.13200236877441, "epoch": 31, "n_parameters": 14635024} -{"train_lr": 0.0038957979779690624, "train_loss": 2.902030280160904, "test_loss": 1.2900826361249476, "test_acc1": 69.41800214874267, "test_acc5": 89.0160025466919, "epoch": 32, "n_parameters": 14635024} -{"train_lr": 0.003889030858623732, "train_loss": 2.8979876755475997, "test_loss": 1.251826962127405, "test_acc1": 69.64800240112305, "test_acc5": 89.2400023538208, "epoch": 33, "n_parameters": 14635024} -{"train_lr": 0.0038820571340636525, "train_loss": 2.8838001067399976, "test_loss": 1.2704121676437996, "test_acc1": 69.7380022088623, "test_acc5": 89.3440024545288, "epoch": 34, "n_parameters": 14635024} -{"train_lr": 0.0038748775690362956, "train_loss": 2.8763358629226685, "test_loss": 1.2743076353388674, "test_acc1": 69.64600236541749, "test_acc5": 89.108002633667, "epoch": 35, "n_parameters": 14635024} -{"train_lr": 0.0038674929508619614, "train_loss": 2.861101145553589, "test_loss": 1.2360960635192253, "test_acc1": 70.37800213439941, "test_acc5": 89.4620027230835, "epoch": 36, "n_parameters": 14635024} -{"train_lr": 0.003859904089347072, "train_loss": 2.854618036532402, "test_loss": 1.2124484917696785, "test_acc1": 70.81200232757568, "test_acc5": 89.7500024243164, "epoch": 37, "n_parameters": 14635024} -{"train_lr": 0.003852111816695943, "train_loss": 2.8479023877382277, "test_loss": 1.2292337222572636, "test_acc1": 70.54800220916748, "test_acc5": 89.5800025466919, "epoch": 38, "n_parameters": 14635024} -{"train_lr": 0.0038441169874190843, "train_loss": 2.8302554711580274, "test_loss": 1.2510057253872646, "test_acc1": 70.28200209289551, "test_acc5": 89.55800249053955, "epoch": 39, "n_parameters": 14635024} -{"train_lr": 0.0038359204782394862, "train_loss": 2.816298244214058, "test_loss": 
1.2319338825695656, "test_acc1": 70.6360025604248, "test_acc5": 89.85000239898682, "epoch": 40, "n_parameters": 14635024} -{"train_lr": 0.0038275231879969967, "train_loss": 2.8140914922475813, "test_loss": 1.241631375516162, "test_acc1": 70.6000024633789, "test_acc5": 89.55800246063232, "epoch": 41, "n_parameters": 14635024} -{"train_lr": 0.003818926037548892, "train_loss": 2.8048237040758135, "test_loss": 1.2319387881194843, "test_acc1": 70.40400227539062, "test_acc5": 89.50600260864258, "epoch": 42, "n_parameters": 14635024} -{"train_lr": 0.0038101299696697475, "train_loss": 2.8012983701467515, "test_loss": 1.2276041231611197, "test_acc1": 70.97200237609863, "test_acc5": 89.86200279724122, "epoch": 43, "n_parameters": 14635024} -{"train_lr": 0.003801135948947377, "train_loss": 2.7902367380380633, "test_loss": 1.19313053479966, "test_acc1": 71.39000231658936, "test_acc5": 90.22200234039306, "epoch": 44, "n_parameters": 14635024} -{"train_lr": 0.003791944961677627, "train_loss": 2.779478725385666, "test_loss": 1.2082479195121456, "test_acc1": 70.77000241790772, "test_acc5": 90.02000282043457, "epoch": 45, "n_parameters": 14635024} -{"train_lr": 0.0037825580157558134, "train_loss": 2.7692357589006424, "test_loss": 1.2143663544865215, "test_acc1": 70.96600232513428, "test_acc5": 89.80800243469238, "epoch": 46, "n_parameters": 14635024} -{"train_lr": 0.003772976140566265, "train_loss": 2.769208755135536, "test_loss": 1.2055932936422966, "test_acc1": 70.8920023587036, "test_acc5": 90.03000277832031, "epoch": 47, "n_parameters": 14635024} -{"train_lr": 0.0037632003868696556, "train_loss": 2.7681687794446943, "test_loss": 1.1930167692549087, "test_acc1": 71.32800270843506, "test_acc5": 90.2600022479248, "epoch": 48, "n_parameters": 14635024} -{"train_lr": 0.003753231826687486, "train_loss": 2.759938165307045, "test_loss": 1.1926450247273725, "test_acc1": 71.23000228851318, "test_acc5": 90.26200267547607, "epoch": 49, "n_parameters": 14635024} -{"train_lr": 0.0037430715531847655, "train_loss": 2.7562825431346893, "test_loss": 1.1500861237154287, "test_acc1": 72.0160025076294, "test_acc5": 90.56000262207031, "epoch": 50, "n_parameters": 14635024} -{"train_lr": 0.003732720680549938, "train_loss": 2.7425877616405487, "test_loss": 1.1829591450445793, "test_acc1": 71.63400245056152, "test_acc5": 90.21800300109864, "epoch": 51, "n_parameters": 14635024} -{"train_lr": 0.0037221803438729105, "train_loss": 2.7390850293159485, "test_loss": 1.169563772266402, "test_acc1": 71.66200254302979, "test_acc5": 90.324002288208, "epoch": 52, "n_parameters": 14635024} -{"train_lr": 0.003711451699020238, "train_loss": 2.731433552312851, "test_loss": 1.148443390341366, "test_acc1": 72.148002288208, "test_acc5": 90.77800263854981, "epoch": 53, "n_parameters": 14635024} -{"train_lr": 0.0037005359225088163, "train_loss": 2.7199712455272675, "test_loss": 1.2022730896578115, "test_acc1": 71.734002315979, "test_acc5": 90.4020026699829, "epoch": 54, "n_parameters": 14635024} -{"train_lr": 0.0036894342113765284, "train_loss": 2.721943637442589, "test_loss": 1.1657923525747131, "test_acc1": 72.020002366333, "test_acc5": 90.60800257507324, "epoch": 55, "n_parameters": 14635024} -{"train_lr": 0.0036781477830511327, "train_loss": 2.715258173370361, "test_loss": 1.2287086704198051, "test_acc1": 71.37000245025635, "test_acc5": 90.25400281158447, "epoch": 56, "n_parameters": 14635024} -{"train_lr": 0.00366667787521664, "train_loss": 2.7017625034093857, "test_loss": 1.1509752983556074, "test_acc1": 72.53000292999268, "test_acc5": 
90.60000246368408, "epoch": 57, "n_parameters": 14635024} -{"train_lr": 0.0036550257456777722, "train_loss": 2.69814714653492, "test_loss": 1.1611011194832184, "test_acc1": 72.19800251708985, "test_acc5": 90.38800244903564, "epoch": 58, "n_parameters": 14635024} -{"train_lr": 0.003643192672221756, "train_loss": 2.695821371269226, "test_loss": 1.1192046579192667, "test_acc1": 72.59600266113281, "test_acc5": 90.82000235168456, "epoch": 59, "n_parameters": 14635024} -{"train_lr": 0.0036311799524784525, "train_loss": 2.6802169582843782, "test_loss": 1.148838761536514, "test_acc1": 71.99600220581054, "test_acc5": 90.67600255828857, "epoch": 60, "n_parameters": 14635024} -{"train_lr": 0.0036189889037780316, "train_loss": 2.6912066464185713, "test_loss": 1.145149259006276, "test_acc1": 72.06400244262696, "test_acc5": 90.58200238311768, "epoch": 61, "n_parameters": 14635024} -{"train_lr": 0.0036066208630062997, "train_loss": 2.68860595703125, "test_loss": 1.1448214325834722, "test_acc1": 71.80600236206055, "test_acc5": 90.61000241851806, "epoch": 62, "n_parameters": 14635024} -{"train_lr": 0.003594077186458248, "train_loss": 2.6815669948339464, "test_loss": 1.1475488821811535, "test_acc1": 72.08400226867676, "test_acc5": 90.70000263916016, "epoch": 63, "n_parameters": 14635024} -{"train_lr": 0.0035813592496895586, "train_loss": 2.676478939461708, "test_loss": 1.116785522112075, "test_acc1": 72.59600251861572, "test_acc5": 91.03800237091065, "epoch": 64, "n_parameters": 14635024} -{"train_lr": 0.003568468447365067, "train_loss": 2.6644667405605316, "test_loss": 1.124901302597102, "test_acc1": 72.51600236297607, "test_acc5": 91.12800286773681, "epoch": 65, "n_parameters": 14635024} -{"train_lr": 0.003555406193106677, "train_loss": 2.6580114840745925, "test_loss": 1.0916991939439493, "test_acc1": 73.0700023614502, "test_acc5": 91.31200290405273, "epoch": 66, "n_parameters": 14635024} -{"train_lr": 0.0035421739193377214, "train_loss": 2.6542122586250305, "test_loss": 1.1088627881425268, "test_acc1": 73.04600254974365, "test_acc5": 91.06600246948243, "epoch": 67, "n_parameters": 14635024} -{"train_lr": 0.0035287730771260805, "train_loss": 2.647336489367485, "test_loss": 1.118355513057288, "test_acc1": 72.78800246185303, "test_acc5": 91.17000239898681, "epoch": 68, "n_parameters": 14635024} -{"train_lr": 0.0035152051360252245, "train_loss": 2.6531433739185335, "test_loss": 1.151622185812277, "test_acc1": 72.10800251190186, "test_acc5": 90.64400259185791, "epoch": 69, "n_parameters": 14635024} -{"train_lr": 0.003501471583912782, "train_loss": 2.6648873148679733, "test_loss": 1.110732607981738, "test_acc1": 72.86400255462647, "test_acc5": 91.22800235198974, "epoch": 70, "n_parameters": 14635024} -{"train_lr": 0.0034875739268273947, "train_loss": 2.629285999774933, "test_loss": 1.1037482538205736, "test_acc1": 73.0960023260498, "test_acc5": 91.18200264160156, "epoch": 71, "n_parameters": 14635024} -{"train_lr": 0.0034735136888038418, "train_loss": 2.6351823521852493, "test_loss": 1.1130409521215103, "test_acc1": 72.83600213928223, "test_acc5": 90.98600254699707, "epoch": 72, "n_parameters": 14635024} -{"train_lr": 0.003459292411705684, "train_loss": 2.6328317997932436, "test_loss": 1.1180168933728163, "test_acc1": 73.05000237060547, "test_acc5": 91.17800234313965, "epoch": 73, "n_parameters": 14635024} -{"train_lr": 0.0034449116550562854, "train_loss": 2.6309286134004592, "test_loss": 1.0715705614317865, "test_acc1": 73.38200223388672, "test_acc5": 91.56200243804932, "epoch": 74, "n_parameters": 14635024} 
-{"train_lr": 0.0034303729958673978, "train_loss": 2.6208770953178404, "test_loss": 1.087347521501429, "test_acc1": 72.95200209197998, "test_acc5": 91.41800246643066, "epoch": 75, "n_parameters": 14635024} -{"train_lr": 0.003415678028467135, "train_loss": 2.6293863820791246, "test_loss": 1.0793530309901518, "test_acc1": 73.47600244415283, "test_acc5": 91.48800275939941, "epoch": 76, "n_parameters": 14635024} -{"train_lr": 0.0034008283643241475, "train_loss": 2.6140382974147798, "test_loss": 1.1267612563336598, "test_acc1": 72.97000253265381, "test_acc5": 91.16200256408692, "epoch": 77, "n_parameters": 14635024} -{"train_lr": 0.003385825631871496, "train_loss": 2.6165645290613173, "test_loss": 1.1446586514220518, "test_acc1": 72.69000208465576, "test_acc5": 90.94000238250733, "epoch": 78, "n_parameters": 14635024} -{"train_lr": 0.0033706714763277455, "train_loss": 2.605862063407898, "test_loss": 1.100001063416986, "test_acc1": 73.5040026550293, "test_acc5": 91.40200250549316, "epoch": 79, "n_parameters": 14635024} -{"train_lr": 0.003355367559516879, "train_loss": 2.5986893322706224, "test_loss": 1.0653049216112669, "test_acc1": 73.67600270507812, "test_acc5": 91.59600252441406, "epoch": 80, "n_parameters": 14635024} -{"train_lr": 0.003339915559685877, "train_loss": 2.598620568919182, "test_loss": 1.0768731674727272, "test_acc1": 73.17400227416992, "test_acc5": 91.30600255859375, "epoch": 81, "n_parameters": 14635024} -{"train_lr": 0.003324317171320666, "train_loss": 2.601451517367363, "test_loss": 1.0562997839468367, "test_acc1": 74.01400259460449, "test_acc5": 91.74800220458984, "epoch": 82, "n_parameters": 14635024} -{"train_lr": 0.0033085741049602795, "train_loss": 2.6031070964336394, "test_loss": 1.0700129318763227, "test_acc1": 73.7780026538086, "test_acc5": 91.63600269744873, "epoch": 83, "n_parameters": 14635024} -{"train_lr": 0.0032926880870092624, "train_loss": 2.5791460530281065, "test_loss": 1.0970431176178597, "test_acc1": 73.66600232513427, "test_acc5": 91.46800246307373, "epoch": 84, "n_parameters": 14635024} -{"train_lr": 0.003276660859548651, "train_loss": 2.5853986984491346, "test_loss": 1.0545320318025702, "test_acc1": 73.95000249633789, "test_acc5": 91.71000235107422, "epoch": 85, "n_parameters": 14635024} -{"train_lr": 0.0032604941801444055, "train_loss": 2.58502169547081, "test_loss": 1.0811584894271458, "test_acc1": 73.21400237976074, "test_acc5": 91.57800217285157, "epoch": 86, "n_parameters": 14635024} -{"train_lr": 0.003244189821655263, "train_loss": 2.570812246131897, "test_loss": 1.0669915492043776, "test_acc1": 73.82200277404785, "test_acc5": 91.74200247711181, "epoch": 87, "n_parameters": 14635024} -{"train_lr": 0.003227749572037655, "train_loss": 2.5758040194272995, "test_loss": 1.099634897621239, "test_acc1": 72.91600277832032, "test_acc5": 91.22200241851807, "epoch": 88, "n_parameters": 14635024} -{"train_lr": 0.0032111752341504192, "train_loss": 2.5645405571222306, "test_loss": 1.0993791595101357, "test_acc1": 73.38600235931396, "test_acc5": 91.35600236114502, "epoch": 89, "n_parameters": 14635024} -{"train_lr": 0.003194468625556447, "train_loss": 2.5692114056110382, "test_loss": 1.0399401902275927, "test_acc1": 74.09600266876221, "test_acc5": 91.74600278747559, "epoch": 90, "n_parameters": 14635024} -{"train_lr": 0.0031776315783234484, "train_loss": 2.5500866854190827, "test_loss": 1.0485088658683441, "test_acc1": 74.01600213104248, "test_acc5": 91.83600242218017, "epoch": 91, "n_parameters": 14635024} -{"train_lr": 0.0031606659388236785, "train_loss": 
2.563412369918823, "test_loss": 1.0513617201324772, "test_acc1": 74.1200020602417, "test_acc5": 91.67200266967774, "epoch": 92, "n_parameters": 14635024} -{"train_lr": 0.003143573567530467, "train_loss": 2.5580865959167483, "test_loss": 1.0776274651288986, "test_acc1": 73.41800256469726, "test_acc5": 91.56800269287109, "epoch": 93, "n_parameters": 14635024} -{"train_lr": 0.003126356338815038, "train_loss": 2.5524689707279204, "test_loss": 1.0537152323214447, "test_acc1": 73.98400255218506, "test_acc5": 91.76200222290039, "epoch": 94, "n_parameters": 14635024} -{"train_lr": 0.0031090161407405044, "train_loss": 2.5465300158023836, "test_loss": 1.0429105947122854, "test_acc1": 74.15000237548828, "test_acc5": 91.87600253631592, "epoch": 95, "n_parameters": 14635024} -{"train_lr": 0.003091554874854953, "train_loss": 2.550401001572609, "test_loss": 1.0346838281873394, "test_acc1": 74.42400275939941, "test_acc5": 92.00400243011475, "epoch": 96, "n_parameters": 14635024} -{"train_lr": 0.0030739744559831164, "train_loss": 2.538812317609787, "test_loss": 1.0590019857182222, "test_acc1": 73.93600246398925, "test_acc5": 91.76800241119385, "epoch": 97, "n_parameters": 14635024} -{"train_lr": 0.0030562768120159086, "train_loss": 2.531609162211418, "test_loss": 1.0327960480661953, "test_acc1": 74.620002807312, "test_acc5": 91.9920025189209, "epoch": 98, "n_parameters": 14635024} -{"train_lr": 0.0030384638836993723, "train_loss": 2.5378674788951874, "test_loss": 1.0844243957715876, "test_acc1": 73.560002421875, "test_acc5": 91.60800243530274, "epoch": 99, "n_parameters": 14635024} -{"train_lr": 0.003020537624421951, "train_loss": 2.5331902249336244, "test_loss": 1.06381834517507, "test_acc1": 74.00600283508301, "test_acc5": 91.68800242431641, "epoch": 100, "n_parameters": 14635024} -{"train_lr": 0.0030025000000000156, "train_loss": 2.5270299388170243, "test_loss": 1.02511194151114, "test_acc1": 74.4860027697754, "test_acc5": 92.03600253387451, "epoch": 101, "n_parameters": 14635024} -{"train_lr": 0.0029843529884621693, "train_loss": 2.5224661358833314, "test_loss": 1.0479958532925915, "test_acc1": 74.06600221405029, "test_acc5": 91.82200282958985, "epoch": 102, "n_parameters": 14635024} -{"train_lr": 0.0029660985798329416, "train_loss": 2.5126967831134794, "test_loss": 1.0451506472685759, "test_acc1": 74.48200244964599, "test_acc5": 91.99400262847901, "epoch": 103, "n_parameters": 14635024} -{"train_lr": 0.0029477387759137964, "train_loss": 2.5113472267627714, "test_loss": 1.033312485060271, "test_acc1": 74.31600257232667, "test_acc5": 91.95400252227783, "epoch": 104, "n_parameters": 14635024} -{"train_lr": 0.002929275590064108, "train_loss": 2.502416784620285, "test_loss": 1.021280337125063, "test_acc1": 74.71000238555908, "test_acc5": 92.18600268707276, "epoch": 105, "n_parameters": 14635024} -{"train_lr": 0.002910711046980378, "train_loss": 2.505182149267197, "test_loss": 1.0394687532063793, "test_acc1": 74.36200260437012, "test_acc5": 91.98400275634765, "epoch": 106, "n_parameters": 14635024} -{"train_lr": 0.0028920471824738832, "train_loss": 2.49331994240284, "test_loss": 1.016819057000034, "test_acc1": 74.68800244628906, "test_acc5": 92.11200229278565, "epoch": 107, "n_parameters": 14635024} -{"train_lr": 0.002873286043247822, "train_loss": 2.4976906627178193, "test_loss": 1.0057583201457472, "test_acc1": 75.11000220947265, "test_acc5": 92.35000265563964, "epoch": 108, "n_parameters": 14635024} -{"train_lr": 0.0028544296866723304, "train_loss": 2.4946251371860506, "test_loss": 1.0443755844060112, 
"test_acc1": 74.61400227722169, "test_acc5": 92.01800230743409, "epoch": 109, "n_parameters": 14635024} -{"train_lr": 0.0028354801805594724, "train_loss": 2.4965917791843415, "test_loss": 1.003603276084451, "test_acc1": 75.03600262939453, "test_acc5": 92.48600240203858, "epoch": 110, "n_parameters": 14635024} -{"train_lr": 0.002816439602936208, "train_loss": 2.4999699425458908, "test_loss": 1.0334447856773348, "test_acc1": 74.6160026498413, "test_acc5": 92.11600258087158, "epoch": 111, "n_parameters": 14635024} -{"train_lr": 0.002797310041816382, "train_loss": 2.491388480091095, "test_loss": 1.0040799440706478, "test_acc1": 75.01600255096436, "test_acc5": 92.4500026953125, "epoch": 112, "n_parameters": 14635024} -{"train_lr": 0.002778093594971943, "train_loss": 2.4935693233966827, "test_loss": 1.0332412419511992, "test_acc1": 74.780002522583, "test_acc5": 92.2160021963501, "epoch": 113, "n_parameters": 14635024} -{"train_lr": 0.0027587923697028264, "train_loss": 2.4712094358682632, "test_loss": 1.0021358658285702, "test_acc1": 75.12800241149903, "test_acc5": 92.29200218139648, "epoch": 114, "n_parameters": 14635024} -{"train_lr": 0.002739408482605956, "train_loss": 2.4735076610565185, "test_loss": 1.0075564888470314, "test_acc1": 75.16600235565186, "test_acc5": 92.29200250854493, "epoch": 115, "n_parameters": 14635024} -{"train_lr": 0.0027199440593428507, "train_loss": 2.4749995725393297, "test_loss": 1.01169639365638, "test_acc1": 74.9480027835083, "test_acc5": 92.4440022543335, "epoch": 116, "n_parameters": 14635024} -{"train_lr": 0.0027004012344070075, "train_loss": 2.471109974336624, "test_loss": 1.0148502285866177, "test_acc1": 75.08400261688233, "test_acc5": 92.35800251037598, "epoch": 117, "n_parameters": 14635024} -{"train_lr": 0.0026807821508893883, "train_loss": 2.4692989517211914, "test_loss": 1.0118109792032663, "test_acc1": 75.04400241821288, "test_acc5": 92.32000259490967, "epoch": 118, "n_parameters": 14635024} -{"train_lr": 0.0026610889602434354, "train_loss": 2.4720983571767805, "test_loss": 0.9901764616370201, "test_acc1": 75.58600247772216, "test_acc5": 92.4600026864624, "epoch": 119, "n_parameters": 14635024} -{"train_lr": 0.0026413238220496715, "train_loss": 2.4601832890748976, "test_loss": 1.0051177726948963, "test_acc1": 75.1940023449707, "test_acc5": 92.3300022567749, "epoch": 120, "n_parameters": 14635024} -{"train_lr": 0.0026214889037780493, "train_loss": 2.4547134467840195, "test_loss": 1.0233692439163433, "test_acc1": 74.79000260101319, "test_acc5": 92.12800250213623, "epoch": 121, "n_parameters": 14635024} -{"train_lr": 0.0026015863805508724, "train_loss": 2.4541589466571807, "test_loss": 1.0284325366511065, "test_acc1": 74.89200265808105, "test_acc5": 92.1880022958374, "epoch": 122, "n_parameters": 14635024} -{"train_lr": 0.0025816184349041886, "train_loss": 2.45337007830143, "test_loss": 0.9951068964951179, "test_acc1": 75.48200219085693, "test_acc5": 92.49600241333007, "epoch": 123, "n_parameters": 14635024} -{"train_lr": 0.0025615872565482966, "train_loss": 2.4424573600292208, "test_loss": 0.9782673164325602, "test_acc1": 75.50200243469239, "test_acc5": 92.65600239715576, "epoch": 124, "n_parameters": 14635024} -{"train_lr": 0.0025414950421274317, "train_loss": 2.4421314146995545, "test_loss": 0.980571087449789, "test_acc1": 75.61800222564698, "test_acc5": 92.68400244659423, "epoch": 125, "n_parameters": 14635024} -{"train_lr": 0.002521343994979551, "train_loss": 2.4398035313129425, "test_loss": 0.9872330131337923, "test_acc1": 75.65200264892579, 
"test_acc5": 92.74200280181884, "epoch": 126, "n_parameters": 14635024} -{"train_lr": 0.002501136324893901, "train_loss": 2.434983744311333, "test_loss": 0.9906127209610799, "test_acc1": 75.67400245635986, "test_acc5": 92.64400264770508, "epoch": 127, "n_parameters": 14635024} -{"train_lr": 0.0024808742478692517, "train_loss": 2.4337609479904176, "test_loss": 0.9966512127395939, "test_acc1": 75.50000262969971, "test_acc5": 92.49000271484375, "epoch": 128, "n_parameters": 14635024} -{"train_lr": 0.002460559985870747, "train_loss": 2.4321835339069366, "test_loss": 0.9705383994561785, "test_acc1": 75.82400235351562, "test_acc5": 92.75400235565185, "epoch": 129, "n_parameters": 14635024} -{"train_lr": 0.002440195766586069, "train_loss": 2.428828772211075, "test_loss": 1.005873595309608, "test_acc1": 75.29400278289795, "test_acc5": 92.53000283508301, "epoch": 130, "n_parameters": 14635024} -{"train_lr": 0.0024197838231814215, "train_loss": 2.4198735173225403, "test_loss": 0.9712368863470414, "test_acc1": 75.4880024633789, "test_acc5": 92.8720027935791, "epoch": 131, "n_parameters": 14635024} -{"train_lr": 0.002399326394056337, "train_loss": 2.4240345028162005, "test_loss": 0.9940603647161933, "test_acc1": 75.4340024307251, "test_acc5": 92.55400255004882, "epoch": 132, "n_parameters": 14635024} -{"train_lr": 0.0023788257225985116, "train_loss": 2.4140852580070495, "test_loss": 0.9822354323285467, "test_acc1": 75.6840023336792, "test_acc5": 92.78400212890625, "epoch": 133, "n_parameters": 14635024} -{"train_lr": 0.0023582840569375944, "train_loss": 2.4154191996335985, "test_loss": 0.980549193918705, "test_acc1": 75.64600260314941, "test_acc5": 92.75800248535157, "epoch": 134, "n_parameters": 14635024} -{"train_lr": 0.002337703649698603, "train_loss": 2.411737289428711, "test_loss": 0.9688848079565693, "test_acc1": 75.81400273620605, "test_acc5": 92.80800236358643, "epoch": 135, "n_parameters": 14635024} -{"train_lr": 0.002317086757755268, "train_loss": 2.406751788520813, "test_loss": 0.9543393975233331, "test_acc1": 76.40600287384034, "test_acc5": 93.10400253936767, "epoch": 136, "n_parameters": 14635024} -{"train_lr": 0.002296435641982043, "train_loss": 2.4002585480928422, "test_loss": 0.9607174685772728, "test_acc1": 76.10800256134033, "test_acc5": 92.88800237792968, "epoch": 137, "n_parameters": 14635024} -{"train_lr": 0.0022757525670064538, "train_loss": 2.4072413693666457, "test_loss": 0.9645924984532244, "test_acc1": 76.13800237823486, "test_acc5": 92.86200240753173, "epoch": 138, "n_parameters": 14635024} -{"train_lr": 0.0022550398009608037, "train_loss": 2.4032689255952837, "test_loss": 0.9767370824428165, "test_acc1": 75.6680022644043, "test_acc5": 92.72400226898193, "epoch": 139, "n_parameters": 14635024} -{"train_lr": 0.0022342996152332844, "train_loss": 2.3884897265672684, "test_loss": 0.947490405948723, "test_acc1": 76.44000260528564, "test_acc5": 93.02400253936767, "epoch": 140, "n_parameters": 14635024} -{"train_lr": 0.0022135342842189523, "train_loss": 2.391751857447624, "test_loss": 0.9540577973513042, "test_acc1": 76.18400221374512, "test_acc5": 93.04200228210449, "epoch": 141, "n_parameters": 14635024} -{"train_lr": 0.002192746085070428, "train_loss": 2.37689594745636, "test_loss": 0.9547568399678258, "test_acc1": 76.26800265991211, "test_acc5": 93.04200248870849, "epoch": 142, "n_parameters": 14635024} -{"train_lr": 0.0021719372974480025, "train_loss": 2.3863122648477555, "test_loss": 0.9719617680153426, "test_acc1": 75.7960025680542, "test_acc5": 92.67200268737793, "epoch": 
143, "n_parameters": 14635024} -{"train_lr": 0.0021511102032696337, "train_loss": 2.380632731580734, "test_loss": 0.9421959230128456, "test_acc1": 76.53200263000488, "test_acc5": 93.08400256347656, "epoch": 144, "n_parameters": 14635024} -{"train_lr": 0.0021302670864609768, "train_loss": 2.3742016892671587, "test_loss": 0.9286056702189586, "test_acc1": 76.60800246124268, "test_acc5": 93.34600274261474, "epoch": 145, "n_parameters": 14635024} -{"train_lr": 0.0021094102327046753, "train_loss": 2.36232274851799, "test_loss": 0.9542053131496205, "test_acc1": 76.22400235198974, "test_acc5": 92.8200023776245, "epoch": 146, "n_parameters": 14635024} -{"train_lr": 0.0020885419291897665, "train_loss": 2.3717861619472504, "test_loss": 0.9363608669270488, "test_acc1": 76.60200226226807, "test_acc5": 93.2280023638916, "epoch": 147, "n_parameters": 14635024} -{"train_lr": 0.0020676644643608877, "train_loss": 2.368433395862579, "test_loss": 0.9459190476028358, "test_acc1": 76.56400264282226, "test_acc5": 93.154002315979, "epoch": 148, "n_parameters": 14635024} -{"train_lr": 0.0020467801276673257, "train_loss": 2.357728848361969, "test_loss": 0.9594122061834616, "test_acc1": 76.50800252807618, "test_acc5": 93.05800253631591, "epoch": 149, "n_parameters": 14635024} -{"train_lr": 0.002025891209311914, "train_loss": 2.3619740161418914, "test_loss": 0.9349752250401413, "test_acc1": 76.46800237670898, "test_acc5": 93.23600261505128, "epoch": 150, "n_parameters": 14635024} -{"train_lr": 0.0020050000000000176, "train_loss": 2.354039988565445, "test_loss": 0.9201177568996654, "test_acc1": 76.76800223083497, "test_acc5": 93.22600234893798, "epoch": 151, "n_parameters": 14635024} -{"train_lr": 0.0019841087906880845, "train_loss": 2.3519832239151, "test_loss": 0.9532386494033477, "test_acc1": 76.31400268615722, "test_acc5": 93.22400257263183, "epoch": 152, "n_parameters": 14635024} -{"train_lr": 0.0019632198723327173, "train_loss": 2.343254959535599, "test_loss": 0.9343010767856065, "test_acc1": 76.62000260742188, "test_acc5": 93.19000277923584, "epoch": 153, "n_parameters": 14635024} -{"train_lr": 0.0019423355356391558, "train_loss": 2.3496272327184675, "test_loss": 0.9260044284164906, "test_acc1": 76.75000255432128, "test_acc5": 93.36600231903076, "epoch": 154, "n_parameters": 14635024} -{"train_lr": 0.001921458070810235, "train_loss": 2.345919791483879, "test_loss": 0.9105459979790098, "test_acc1": 77.0640024295044, "test_acc5": 93.5140025, "epoch": 155, "n_parameters": 14635024} -{"train_lr": 0.0019005897672953243, "train_loss": 2.3321957904338837, "test_loss": 0.9194517554167438, "test_acc1": 76.8880026208496, "test_acc5": 93.36800248565673, "epoch": 156, "n_parameters": 14635024} -{"train_lr": 0.0018797329135390225, "train_loss": 2.3309873267412184, "test_loss": 0.930313469732509, "test_acc1": 76.84200254882812, "test_acc5": 93.43200256134033, "epoch": 157, "n_parameters": 14635024} -{"train_lr": 0.0018588897967303623, "train_loss": 2.3209416528701783, "test_loss": 0.9325012812281356, "test_acc1": 77.01000291595459, "test_acc5": 93.52000262542725, "epoch": 158, "n_parameters": 14635024} -{"train_lr": 0.0018380627025520412, "train_loss": 2.335115595149994, "test_loss": 0.9177369694499409, "test_acc1": 77.45400243774414, "test_acc5": 93.4920027432251, "epoch": 159, "n_parameters": 14635024} -{"train_lr": 0.0018172539149295633, "train_loss": 2.315282396674156, "test_loss": 0.9098364559604841, "test_acc1": 76.85600242614746, "test_acc5": 93.45400257537842, "epoch": 160, "n_parameters": 14635024} -{"train_lr": 
0.0017964657157810383, "train_loss": 2.3045800117254256, "test_loss": 0.9135036295389428, "test_acc1": 77.21200237854003, "test_acc5": 93.49200258392334, "epoch": 161, "n_parameters": 14635024} -{"train_lr": 0.001775700384766716, "train_loss": 2.3152376883745194, "test_loss": 0.9130853072685354, "test_acc1": 77.01800229248047, "test_acc5": 93.37000251434326, "epoch": 162, "n_parameters": 14635024} -{"train_lr": 0.0017549601990392034, "train_loss": 2.306278000664711, "test_loss": 0.9147269305060891, "test_acc1": 76.90400280181885, "test_acc5": 93.45000256774902, "epoch": 163, "n_parameters": 14635024} -{"train_lr": 0.0017342474329935537, "train_loss": 2.3067558505058288, "test_loss": 0.9050429945482927, "test_acc1": 77.26200248199463, "test_acc5": 93.4620025756836, "epoch": 164, "n_parameters": 14635024} -{"train_lr": 0.0017135643580179704, "train_loss": 2.3081211486816406, "test_loss": 0.9070606266751009, "test_acc1": 77.07200267059326, "test_acc5": 93.53400289916992, "epoch": 165, "n_parameters": 14635024} -{"train_lr": 0.001692913242244742, "train_loss": 2.2870739699602125, "test_loss": 0.9054245894007823, "test_acc1": 77.33600272918702, "test_acc5": 93.45800226318359, "epoch": 166, "n_parameters": 14635024} -{"train_lr": 0.00167229635030139, "train_loss": 2.2934294464826586, "test_loss": 0.9045759478912634, "test_acc1": 77.40400270935059, "test_acc5": 93.47200266967774, "epoch": 167, "n_parameters": 14635024} -{"train_lr": 0.0016517159430624528, "train_loss": 2.298650993967056, "test_loss": 0.8898187845068819, "test_acc1": 77.66800216461182, "test_acc5": 93.58800249511718, "epoch": 168, "n_parameters": 14635024} -{"train_lr": 0.0016311742774014707, "train_loss": 2.2843149079561234, "test_loss": 0.8886679581859532, "test_acc1": 77.76200262207031, "test_acc5": 93.7940032876587, "epoch": 169, "n_parameters": 14635024} -{"train_lr": 0.001610673605943664, "train_loss": 2.2795941779851914, "test_loss": 0.8836695845512783, "test_acc1": 77.70200274169922, "test_acc5": 93.68800296936035, "epoch": 170, "n_parameters": 14635024} -{"train_lr": 0.001590216176818559, "train_loss": 2.2808990640163422, "test_loss": 0.8864313131746124, "test_acc1": 77.70600259887695, "test_acc5": 93.69400252471924, "epoch": 171, "n_parameters": 14635024} -{"train_lr": 0.0015698042334139058, "train_loss": 2.275710647559166, "test_loss": 0.8854850535007084, "test_acc1": 77.73400260284424, "test_acc5": 93.72800254821777, "epoch": 172, "n_parameters": 14635024} -{"train_lr": 0.0015494400141292598, "train_loss": 2.2635853153228758, "test_loss": 0.8814994434223455, "test_acc1": 77.87400271636963, "test_acc5": 93.76200278167724, "epoch": 173, "n_parameters": 14635024} -{"train_lr": 0.0015291257521307172, "train_loss": 2.268682176208496, "test_loss": 0.8902432798024487, "test_acc1": 77.52800249084473, "test_acc5": 93.81400231658935, "epoch": 174, "n_parameters": 14635024} -{"train_lr": 0.0015088636751061355, "train_loss": 2.2615846705675127, "test_loss": 0.8736651798381525, "test_acc1": 78.03800242004395, "test_acc5": 93.92200259521485, "epoch": 175, "n_parameters": 14635024} -{"train_lr": 0.0014886560050204627, "train_loss": 2.2571368155002594, "test_loss": 0.8835824688567835, "test_acc1": 77.91000231414795, "test_acc5": 93.85800261230469, "epoch": 176, "n_parameters": 14635024} -{"train_lr": 0.001468504957872541, "train_loss": 2.258315887069702, "test_loss": 0.8775279958020238, "test_acc1": 78.07600280853272, "test_acc5": 94.0040024673462, "epoch": 177, "n_parameters": 14635024} -{"train_lr": 0.0014484127434517488, 
"train_loss": 2.2549368814468385, "test_loss": 0.8658205294872031, "test_acc1": 78.0020022567749, "test_acc5": 93.9820027166748, "epoch": 178, "n_parameters": 14635024} -{"train_lr": 0.0014283815650957576, "train_loss": 2.2489981445550917, "test_loss": 0.8622220013948048, "test_acc1": 78.14400262207032, "test_acc5": 93.99000271728515, "epoch": 179, "n_parameters": 14635024} -{"train_lr": 0.001408413619449102, "train_loss": 2.2473128587722777, "test_loss": 0.8581664862439913, "test_acc1": 78.32400232086182, "test_acc5": 94.05200256652832, "epoch": 180, "n_parameters": 14635024} -{"train_lr": 0.001388511096221964, "train_loss": 2.2506633516311645, "test_loss": 0.881490187609897, "test_acc1": 77.87600233154296, "test_acc5": 93.93800242492676, "epoch": 181, "n_parameters": 14635024} -{"train_lr": 0.0013686761779503403, "train_loss": 2.2387099551677703, "test_loss": 0.868472193751265, "test_acc1": 77.80200262176514, "test_acc5": 93.89000217041016, "epoch": 182, "n_parameters": 14635024} -{"train_lr": 0.0013489110397565372, "train_loss": 2.2340584660291674, "test_loss": 0.8623946346342564, "test_acc1": 78.32600243255615, "test_acc5": 94.0600025366211, "epoch": 183, "n_parameters": 14635024} -{"train_lr": 0.0013292178491106572, "train_loss": 2.2254148035049437, "test_loss": 0.8546886742115021, "test_acc1": 78.3940026473999, "test_acc5": 94.1280023110962, "epoch": 184, "n_parameters": 14635024} -{"train_lr": 0.001309598765592993, "train_loss": 2.2196825875759125, "test_loss": 0.8536171019077301, "test_acc1": 78.42800224243165, "test_acc5": 94.27400240905762, "epoch": 185, "n_parameters": 14635024} -{"train_lr": 0.0012900559406571204, "train_loss": 2.2221514944076537, "test_loss": 0.857072432689807, "test_acc1": 78.43000246765136, "test_acc5": 94.08600206848145, "epoch": 186, "n_parameters": 14635024} -{"train_lr": 0.0012705915173940611, "train_loss": 2.222288849902153, "test_loss": 0.8421244516092188, "test_acc1": 78.77600263824463, "test_acc5": 94.37000284332275, "epoch": 187, "n_parameters": 14635024} -{"train_lr": 0.0012512076302971713, "train_loss": 2.222497997760773, "test_loss": 0.8427152088021531, "test_acc1": 78.68000267730713, "test_acc5": 94.36200268676758, "epoch": 188, "n_parameters": 14635024} -{"train_lr": 0.001231906405028045, "train_loss": 2.2188879974126814, "test_loss": 0.847457730813938, "test_acc1": 78.75800249328613, "test_acc5": 94.22200250640869, "epoch": 189, "n_parameters": 14635024} -{"train_lr": 0.0012126899581836061, "train_loss": 2.2155973654031755, "test_loss": 0.8622507400372449, "test_acc1": 78.31600240081787, "test_acc5": 94.03000235290527, "epoch": 190, "n_parameters": 14635024} -{"train_lr": 0.0011935603970637625, "train_loss": 2.204261158466339, "test_loss": 0.84203605945496, "test_acc1": 78.77600271148681, "test_acc5": 94.25400263458252, "epoch": 191, "n_parameters": 14635024} -{"train_lr": 0.0011745198194405004, "train_loss": 2.203715603137016, "test_loss": 0.8261379441794228, "test_acc1": 78.90400259460449, "test_acc5": 94.47000261199952, "epoch": 192, "n_parameters": 14635024} -{"train_lr": 0.0011555703133276894, "train_loss": 2.2047760606765747, "test_loss": 0.8326328694820404, "test_acc1": 78.85200258514405, "test_acc5": 94.54600260375976, "epoch": 193, "n_parameters": 14635024} -{"train_lr": 0.0011367139567521967, "train_loss": 2.1986150622606275, "test_loss": 0.8396835541900467, "test_acc1": 78.72200265319825, "test_acc5": 94.3280023501587, "epoch": 194, "n_parameters": 14635024} -{"train_lr": 0.0011179528175260622, "train_loss": 2.1802130987405777, 
"test_loss": 0.8341447353801307, "test_acc1": 78.83600263061524, "test_acc5": 94.27800281219483, "epoch": 195, "n_parameters": 14635024} -{"train_lr": 0.0010992889530195922, "train_loss": 2.1794676902770997, "test_loss": 0.8333840324159931, "test_acc1": 78.82000255401611, "test_acc5": 94.52200249450684, "epoch": 196, "n_parameters": 14635024} -{"train_lr": 0.0010807244099358875, "train_loss": 2.1814608832120896, "test_loss": 0.8278183430871543, "test_acc1": 79.1260023400879, "test_acc5": 94.49800249420166, "epoch": 197, "n_parameters": 14635024} -{"train_lr": 0.0010622612240862497, "train_loss": 2.1802974442720413, "test_loss": 0.8376982400522512, "test_acc1": 78.80000247070312, "test_acc5": 94.29800266113281, "epoch": 198, "n_parameters": 14635024} -{"train_lr": 0.001043901420167081, "train_loss": 2.190734174346924, "test_loss": 0.822734439635978, "test_acc1": 79.11400266967773, "test_acc5": 94.43600250579834, "epoch": 199, "n_parameters": 14635024} -{"train_lr": 0.001025647011537802, "train_loss": 2.1705811187028883, "test_loss": 0.8172255412620657, "test_acc1": 79.33600237701415, "test_acc5": 94.58600237854004, "epoch": 200, "n_parameters": 14635024} -{"train_lr": 0.0010075000000000067, "train_loss": 2.168136600089073, "test_loss": 0.8262867456411614, "test_acc1": 79.09000239471436, "test_acc5": 94.53400281555176, "epoch": 201, "n_parameters": 14635024} -{"train_lr": 0.0009894623755779988, "train_loss": 2.170270802235603, "test_loss": 0.8299077615580138, "test_acc1": 79.11600252502441, "test_acc5": 94.37000259277343, "epoch": 202, "n_parameters": 14635024} -{"train_lr": 0.0009715361163006195, "train_loss": 2.1517897695064545, "test_loss": 0.8186800922541058, "test_acc1": 79.26600251281738, "test_acc5": 94.58200259765626, "epoch": 203, "n_parameters": 14635024} -{"train_lr": 0.000953723187984137, "train_loss": 2.150259417247772, "test_loss": 0.8138383320149254, "test_acc1": 79.23000271697998, "test_acc5": 94.59200221923828, "epoch": 204, "n_parameters": 14635024} -{"train_lr": 0.0009360255440169043, "train_loss": 2.1561012197971343, "test_loss": 0.8356622694169774, "test_acc1": 78.87000281158447, "test_acc5": 94.42200238586426, "epoch": 205, "n_parameters": 14635024} -{"train_lr": 0.0009184451251450147, "train_loss": 2.153783795952797, "test_loss": 0.8080721808707013, "test_acc1": 79.5480025930786, "test_acc5": 94.6380025012207, "epoch": 206, "n_parameters": 14635024} -{"train_lr": 0.0009009838592595214, "train_loss": 2.1473649198532105, "test_loss": 0.8112337549819666, "test_acc1": 79.50400261901855, "test_acc5": 94.55800244995118, "epoch": 207, "n_parameters": 14635024} -{"train_lr": 0.0008836436611849946, "train_loss": 2.1245522045850755, "test_loss": 0.8060670120312887, "test_acc1": 79.6480026638794, "test_acc5": 94.73400260620117, "epoch": 208, "n_parameters": 14635024} -{"train_lr": 0.0008664264324695524, "train_loss": 2.1323995695352553, "test_loss": 0.7956731001682141, "test_acc1": 79.8860025152588, "test_acc5": 94.83800267333984, "epoch": 209, "n_parameters": 14635024} -{"train_lr": 0.0008493340611763644, "train_loss": 2.1303968190431597, "test_loss": 0.8029743579818922, "test_acc1": 79.7820025164795, "test_acc5": 94.78400244934082, "epoch": 210, "n_parameters": 14635024} -{"train_lr": 0.0008323684216765116, "train_loss": 2.1360297205924987, "test_loss": 0.8052952458315036, "test_acc1": 79.64200264770508, "test_acc5": 94.68000271545411, "epoch": 211, "n_parameters": 14635024} -{"train_lr": 0.0008155313744436086, "train_loss": 2.11755691742897, "test_loss": 0.7906667506870102, 
"test_acc1": 79.84000229034424, "test_acc5": 94.95200217468262, "epoch": 212, "n_parameters": 14635024} -{"train_lr": 0.0007988247658495707, "train_loss": 2.1174602812767027, "test_loss": 0.7955497262232444, "test_acc1": 79.71000259979247, "test_acc5": 94.92600262664794, "epoch": 213, "n_parameters": 14635024} -{"train_lr": 0.0007822504279623159, "train_loss": 2.1157798944950104, "test_loss": 0.7932630023535561, "test_acc1": 79.8000028930664, "test_acc5": 94.8700024319458, "epoch": 214, "n_parameters": 14635024} -{"train_lr": 0.0007658101783447642, "train_loss": 2.1150281487226485, "test_loss": 0.791051059961319, "test_acc1": 79.88600258056641, "test_acc5": 94.89800274047852, "epoch": 215, "n_parameters": 14635024} -{"train_lr": 0.000749505819855568, "train_loss": 2.1131988970279694, "test_loss": 0.7876434291110319, "test_acc1": 79.87600253875732, "test_acc5": 94.85800231689453, "epoch": 216, "n_parameters": 14635024} -{"train_lr": 0.0007333391404513692, "train_loss": 2.1069563972234726, "test_loss": 0.7882909073549158, "test_acc1": 79.76000272796631, "test_acc5": 94.89800235046387, "epoch": 217, "n_parameters": 14635024} -{"train_lr": 0.0007173119129907244, "train_loss": 2.104197657227516, "test_loss": 0.7781317718327045, "test_acc1": 80.0820028137207, "test_acc5": 94.98000235778808, "epoch": 218, "n_parameters": 14635024} -{"train_lr": 0.0007014258950397421, "train_loss": 2.1038340403318405, "test_loss": 0.7805950893637013, "test_acc1": 80.350002449646, "test_acc5": 95.03800256530762, "epoch": 219, "n_parameters": 14635024} -{"train_lr": 0.0006856828286793126, "train_loss": 2.0924764758586885, "test_loss": 0.781080387532711, "test_acc1": 79.99800232696533, "test_acc5": 95.03800253875733, "epoch": 220, "n_parameters": 14635024} -{"train_lr": 0.000670084440314078, "train_loss": 2.089489102768898, "test_loss": 0.7829206077491536, "test_acc1": 79.96800272491456, "test_acc5": 94.94800223602294, "epoch": 221, "n_parameters": 14635024} -{"train_lr": 0.0006546324404830861, "train_loss": 2.083846352529526, "test_loss": 0.7679170744822306, "test_acc1": 80.26400244934082, "test_acc5": 95.11200261260986, "epoch": 222, "n_parameters": 14635024} -{"train_lr": 0.0006393285236722668, "train_loss": 2.078221984577179, "test_loss": 0.7748956121504307, "test_acc1": 80.19400254638671, "test_acc5": 95.04000261444092, "epoch": 223, "n_parameters": 14635024} -{"train_lr": 0.0006241743681285438, "train_loss": 2.076352436542511, "test_loss": 0.7734343648395118, "test_acc1": 80.35600227600098, "test_acc5": 95.07000249542236, "epoch": 224, "n_parameters": 14635024} -{"train_lr": 0.0006091716356758274, "train_loss": 2.0758343604087828, "test_loss": 0.7741681749329847, "test_acc1": 80.25600262969971, "test_acc5": 95.03400241210937, "epoch": 225, "n_parameters": 14635024} -{"train_lr": 0.0005943219715328445, "train_loss": 2.0675455425977707, "test_loss": 0.7690036577336928, "test_acc1": 80.38600232849122, "test_acc5": 95.12600254241943, "epoch": 226, "n_parameters": 14635024} -{"train_lr": 0.000579627004132555, "train_loss": 2.062310416817665, "test_loss": 0.7625430861816687, "test_acc1": 80.4800022454834, "test_acc5": 95.19600242889405, "epoch": 227, "n_parameters": 14635024} -{"train_lr": 0.0005650883449437713, "train_loss": 2.064762346506119, "test_loss": 0.7603000650072799, "test_acc1": 80.50200262207031, "test_acc5": 95.26000246429443, "epoch": 228, "n_parameters": 14635024} -{"train_lr": 0.0005507075882942857, "train_loss": 2.0647213991642, "test_loss": 0.767977541203008, "test_acc1": 80.54000250152588, 
"test_acc5": 95.12200273498536, "epoch": 229, "n_parameters": 14635024} -{"train_lr": 0.0005364863111961281, "train_loss": 2.060899593806267, "test_loss": 0.7596025666331544, "test_acc1": 80.56000251068116, "test_acc5": 95.20400228393555, "epoch": 230, "n_parameters": 14635024} -{"train_lr": 0.0005224260731725992, "train_loss": 2.054299810028076, "test_loss": 0.7540326160104835, "test_acc1": 80.69400276153564, "test_acc5": 95.38400271820069, "epoch": 231, "n_parameters": 14635024} -{"train_lr": 0.0005085284160872295, "train_loss": 2.0503336856842043, "test_loss": 0.7528282664716244, "test_acc1": 80.73600227874756, "test_acc5": 95.26600287445068, "epoch": 232, "n_parameters": 14635024} -{"train_lr": 0.0004947948639747458, "train_loss": 2.051196492242813, "test_loss": 0.7579426820225575, "test_acc1": 80.5620027029419, "test_acc5": 95.33200259552002, "epoch": 233, "n_parameters": 14635024} -{"train_lr": 0.0004812269228738896, "train_loss": 2.0404999817848206, "test_loss": 0.7501720325911746, "test_acc1": 80.81400285614014, "test_acc5": 95.37200249786378, "epoch": 234, "n_parameters": 14635024} -{"train_lr": 0.00046782608066229685, "train_loss": 2.030909739089012, "test_loss": 0.7534324024968287, "test_acc1": 80.79000258605957, "test_acc5": 95.33000264953613, "epoch": 235, "n_parameters": 14635024} -{"train_lr": 0.00045459380689333805, "train_loss": 2.034115887141228, "test_loss": 0.747466500629397, "test_acc1": 80.75200247772217, "test_acc5": 95.44000248443604, "epoch": 236, "n_parameters": 14635024} -{"train_lr": 0.0004415315526349521, "train_loss": 2.0319877266168596, "test_loss": 0.7569202969179434, "test_acc1": 80.78800276489258, "test_acc5": 95.29600230865479, "epoch": 237, "n_parameters": 14635024} -{"train_lr": 0.00042864075031049536, "train_loss": 2.0211067603349684, "test_loss": 0.7417228590039646, "test_acc1": 80.98400280670165, "test_acc5": 95.39000270202637, "epoch": 238, "n_parameters": 14635024} -{"train_lr": 0.00041592281354172557, "train_loss": 2.0180216962337494, "test_loss": 0.7473821609335787, "test_acc1": 81.03000274780274, "test_acc5": 95.43200241546631, "epoch": 239, "n_parameters": 14635024} -{"train_lr": 0.0004033791369937298, "train_loss": 2.018517233181, "test_loss": 0.7424432050217601, "test_acc1": 80.95400251556397, "test_acc5": 95.46600247711181, "epoch": 240, "n_parameters": 14635024} -{"train_lr": 0.00039101109622197687, "train_loss": 2.0032027349710466, "test_loss": 0.7393956304911304, "test_acc1": 80.9340028201294, "test_acc5": 95.44200252563476, "epoch": 241, "n_parameters": 14635024} -{"train_lr": 0.0003788200475215383, "train_loss": 2.0111294474601746, "test_loss": 0.739004955133971, "test_acc1": 81.07000233459473, "test_acc5": 95.4460025024414, "epoch": 242, "n_parameters": 14635024} -{"train_lr": 0.00036680732777826604, "train_loss": 2.0123588945150375, "test_loss": 0.7368852018433458, "test_acc1": 81.11800254699708, "test_acc5": 95.37400244812012, "epoch": 243, "n_parameters": 14635024} -{"train_lr": 0.0003549742543222451, "train_loss": 2.0112011058807373, "test_loss": 0.7379034799249733, "test_acc1": 81.28200256988525, "test_acc5": 95.48400251953124, "epoch": 244, "n_parameters": 14635024} -{"train_lr": 0.00034332212478335543, "train_loss": 2.003249467945099, "test_loss": 0.7294378199559801, "test_acc1": 81.31000261352538, "test_acc5": 95.57600246368408, "epoch": 245, "n_parameters": 14635024} -{"train_lr": 0.0003318522169488759, "train_loss": 1.9926614114522934, "test_loss": 0.7330418950056329, "test_acc1": 81.21000263854981, "test_acc5": 
95.50200243682862, "epoch": 246, "n_parameters": 14635024} -{"train_lr": 0.00032056578862347564, "train_loss": 1.9897459844827652, "test_loss": 0.7266335765666821, "test_acc1": 81.3460027734375, "test_acc5": 95.5980025479126, "epoch": 247, "n_parameters": 14635024} -{"train_lr": 0.0003094640774912099, "train_loss": 1.991629239332676, "test_loss": 0.7370447963476181, "test_acc1": 81.29800256103516, "test_acc5": 95.41800249542236, "epoch": 248, "n_parameters": 14635024} -{"train_lr": 0.0002985483009797872, "train_loss": 1.9790056727647782, "test_loss": 0.7258831104811501, "test_acc1": 81.3400026196289, "test_acc5": 95.57800264007568, "epoch": 249, "n_parameters": 14635024} -{"train_lr": 0.00028781965612712917, "train_loss": 1.9885617131710052, "test_loss": 0.7263623259085066, "test_acc1": 81.45000256378174, "test_acc5": 95.59200259033203, "epoch": 250, "n_parameters": 14635024} -{"train_lr": 0.00027727931945004304, "train_loss": 1.9976691291213036, "test_loss": 0.7247702415813418, "test_acc1": 81.38200275299073, "test_acc5": 95.66800250671386, "epoch": 251, "n_parameters": 14635024} -{"train_lr": 0.00026692844681522316, "train_loss": 1.9686730257511138, "test_loss": 0.7203140887705719, "test_acc1": 81.5240023236084, "test_acc5": 95.69200234710694, "epoch": 252, "n_parameters": 14635024} -{"train_lr": 0.0002567681733124936, "train_loss": 1.9730240814685822, "test_loss": 0.7183823933934465, "test_acc1": 81.61800266693115, "test_acc5": 95.65800252563477, "epoch": 253, "n_parameters": 14635024} -{"train_lr": 0.00024679961313034334, "train_loss": 1.9712176985740661, "test_loss": 0.7181562174330739, "test_acc1": 81.53400215393066, "test_acc5": 95.67800230010987, "epoch": 254, "n_parameters": 14635024} -{"train_lr": 0.0002370238594337239, "train_loss": 1.9701037156105041, "test_loss": 0.7142862213008544, "test_acc1": 81.61600249420167, "test_acc5": 95.66200245635986, "epoch": 255, "n_parameters": 14635024} -{"train_lr": 0.00022744198424420818, "train_loss": 1.9634563100099565, "test_loss": 0.7182575032553252, "test_acc1": 81.53600233825684, "test_acc5": 95.63000221923828, "epoch": 256, "n_parameters": 14635024} -{"train_lr": 0.00021805503832237022, "train_loss": 1.972850040268898, "test_loss": 0.7171550239710247, "test_acc1": 81.54400252685546, "test_acc5": 95.72400260650635, "epoch": 257, "n_parameters": 14635024} -{"train_lr": 0.0002088640510526241, "train_loss": 1.9578622064352036, "test_loss": 0.7150002292850438, "test_acc1": 81.55000264190674, "test_acc5": 95.69600246765137, "epoch": 258, "n_parameters": 14635024} -{"train_lr": 0.00019987003033028822, "train_loss": 1.960743408536911, "test_loss": 0.7206023298203945, "test_acc1": 81.59400280639649, "test_acc5": 95.68200248718261, "epoch": 259, "n_parameters": 14635024} -{"train_lr": 0.00019107396245110126, "train_loss": 1.9517299750804902, "test_loss": 0.7093910620931316, "test_acc1": 81.7380026889038, "test_acc5": 95.78200253112793, "epoch": 260, "n_parameters": 14635024} -{"train_lr": 0.00018247681200301023, "train_loss": 1.9485755353212357, "test_loss": 0.7074563483543256, "test_acc1": 81.8440026638794, "test_acc5": 95.76000263275147, "epoch": 261, "n_parameters": 14635024} -{"train_lr": 0.00017407952176045884, "train_loss": 1.948955102109909, "test_loss": 0.7076595675419358, "test_acc1": 81.91400247161866, "test_acc5": 95.84200233306885, "epoch": 262, "n_parameters": 14635024} -{"train_lr": 0.0001658830125809418, "train_loss": 1.951452920627594, "test_loss": 0.7037951492649668, "test_acc1": 81.84600226470947, "test_acc5": 95.79000262878418, 
"epoch": 263, "n_parameters": 14635024} -{"train_lr": 0.0001578881833040612, "train_loss": 1.9495584125757217, "test_loss": 0.7053149495931232, "test_acc1": 81.91200240234375, "test_acc5": 95.80400255065918, "epoch": 264, "n_parameters": 14635024} -{"train_lr": 0.00015009591065294023, "train_loss": 1.9362858592510224, "test_loss": 0.7061531539348995, "test_acc1": 81.77800235687256, "test_acc5": 95.76400228668213, "epoch": 265, "n_parameters": 14635024} -{"train_lr": 0.00014250704913808254, "train_loss": 1.9407692424297334, "test_loss": 0.7071137570721262, "test_acc1": 81.93400271453858, "test_acc5": 95.7520024420166, "epoch": 266, "n_parameters": 14635024} -{"train_lr": 0.00013512243096367772, "train_loss": 1.9396841402530671, "test_loss": 0.7024399305091185, "test_acc1": 81.98600233825684, "test_acc5": 95.79800261535645, "epoch": 267, "n_parameters": 14635024} -{"train_lr": 0.00012794286593631978, "train_loss": 1.9293860609173774, "test_loss": 0.7030632513410905, "test_acc1": 82.03200261657715, "test_acc5": 95.79400266235352, "epoch": 268, "n_parameters": 14635024} -{"train_lr": 0.00012096914137622728, "train_loss": 1.9360322934865952, "test_loss": 0.7043372782714227, "test_acc1": 81.94600258941651, "test_acc5": 95.78000275665283, "epoch": 269, "n_parameters": 14635024} -{"train_lr": 0.00011420202203087673, "train_loss": 1.9224997882127761, "test_loss": 0.703774356885868, "test_acc1": 81.99400258331299, "test_acc5": 95.82800247589111, "epoch": 270, "n_parameters": 14635024} -{"train_lr": 0.00010764224999117014, "train_loss": 1.9296757005810738, "test_loss": 0.7003473692080554, "test_acc1": 81.95000255218505, "test_acc5": 95.82800283538819, "epoch": 271, "n_parameters": 14635024} -{"train_lr": 0.00010129054461002765, "train_loss": 1.918625832414627, "test_loss": 0.7005044743418694, "test_acc1": 82.05800231201172, "test_acc5": 95.85400272705078, "epoch": 272, "n_parameters": 14635024} -{"train_lr": 9.514760242352498e-05, "train_loss": 1.9254883463025092, "test_loss": 0.6991058169480633, "test_acc1": 82.03600287017822, "test_acc5": 95.83800262054443, "epoch": 273, "n_parameters": 14635024} -{"train_lr": 8.921409707449843e-05, "train_loss": 1.9250803816080093, "test_loss": 0.698040372089428, "test_acc1": 82.0760028314209, "test_acc5": 95.83000266845703, "epoch": 274, "n_parameters": 14635024} -{"train_lr": 8.349067923867126e-05, "train_loss": 1.9221804589271545, "test_loss": 0.6986715664320132, "test_acc1": 82.07200305236816, "test_acc5": 95.88200251220704, "epoch": 275, "n_parameters": 14635024} -{"train_lr": 7.797797655330979e-05, "train_loss": 1.9162270869970321, "test_loss": 0.7027800363652846, "test_acc1": 81.99400234893798, "test_acc5": 95.79200251220703, "epoch": 276, "n_parameters": 14635024} -{"train_lr": 7.267659354838016e-05, "train_loss": 1.9211914346218109, "test_loss": 0.6989509073250434, "test_acc1": 82.10600258087157, "test_acc5": 95.87400259033203, "epoch": 277, "n_parameters": 14635024} -{"train_lr": 6.758711158027577e-05, "train_loss": 1.9157532945513724, "test_loss": 0.6958019374048009, "test_acc1": 82.08600249725342, "test_acc5": 95.91000274383545, "epoch": 278, "n_parameters": 14635024} -{"train_lr": 6.27100887680448e-05, "train_loss": 1.9093472470402717, "test_loss": 0.6959709937081617, "test_acc1": 82.12000279571534, "test_acc5": 95.90800270477295, "epoch": 279, "n_parameters": 14635024} -{"train_lr": 5.8046059932199434e-05, "train_loss": 1.9148232417106628, "test_loss": 0.6958169288495007, "test_acc1": 82.12000229034423, "test_acc5": 95.90200256744384, "epoch": 280, 
"n_parameters": 14635024} -{"train_lr": 5.359553653605823e-05, "train_loss": 1.912014953160286, "test_loss": 0.6967848828610252, "test_acc1": 82.12000234100341, "test_acc5": 95.91000272155762, "epoch": 281, "n_parameters": 14635024} -{"train_lr": 4.9359006629664604e-05, "train_loss": 1.911918233561516, "test_loss": 0.6941521003404084, "test_acc1": 82.12000262786866, "test_acc5": 95.91000272155762, "epoch": 282, "n_parameters": 14635024} -{"train_lr": 4.533693479626563e-05, "train_loss": 1.9085830712318421, "test_loss": 0.6957223827348036, "test_acc1": 82.07000233825684, "test_acc5": 95.9140027633667, "epoch": 283, "n_parameters": 14635024} -{"train_lr": 4.152976210136288e-05, "train_loss": 1.89970385890007, "test_loss": 0.6939927854520433, "test_acc1": 82.19000271209717, "test_acc5": 95.89000246429444, "epoch": 284, "n_parameters": 14635024} -{"train_lr": 3.793790604434225e-05, "train_loss": 1.904931198132038, "test_loss": 0.6952023907181095, "test_acc1": 82.14800271209717, "test_acc5": 95.94000276611328, "epoch": 285, "n_parameters": 14635024} -{"train_lr": 3.456176051270035e-05, "train_loss": 1.8996695263028145, "test_loss": 0.6914426108055255, "test_acc1": 82.19400263671875, "test_acc5": 95.94200230194092, "epoch": 286, "n_parameters": 14635024} -{"train_lr": 3.14016957388384e-05, "train_loss": 1.90046166203022, "test_loss": 0.6912505390012965, "test_acc1": 82.24400244140625, "test_acc5": 95.93200268463134, "epoch": 287, "n_parameters": 14635024} -{"train_lr": 2.845805825946997e-05, "train_loss": 1.8954494747161865, "test_loss": 0.6925920646856812, "test_acc1": 82.18400242706299, "test_acc5": 95.91600276000976, "epoch": 288, "n_parameters": 14635024} -{"train_lr": 2.5731170877616888e-05, "train_loss": 1.9013289905667305, "test_loss": 0.693348590284586, "test_acc1": 82.19600250793457, "test_acc5": 95.9320027633667, "epoch": 289, "n_parameters": 14635024} -{"train_lr": 2.3221332627209112e-05, "train_loss": 1.9003433240175247, "test_loss": 0.6920366504174822, "test_acc1": 82.20200277069092, "test_acc5": 95.94600270141602, "epoch": 290, "n_parameters": 14635024} -{"train_lr": 2.0928818740294644e-05, "train_loss": 1.9047676013231278, "test_loss": 0.6924964467830518, "test_acc1": 82.23200268981934, "test_acc5": 95.92600254516601, "epoch": 291, "n_parameters": 14635024} -{"train_lr": 1.885388061685525e-05, "train_loss": 1.9105107673645019, "test_loss": 0.6922693909967647, "test_acc1": 82.232002315979, "test_acc5": 95.90400249725342, "epoch": 292, "n_parameters": 14635024} -{"train_lr": 1.6996745797238736e-05, "train_loss": 1.9083107931733132, "test_loss": 0.6918739555951428, "test_acc1": 82.27400230194093, "test_acc5": 95.95000257537842, "epoch": 293, "n_parameters": 14635024} -{"train_lr": 1.5357617937206103e-05, "train_loss": 1.9070727568626404, "test_loss": 0.6935213243260103, "test_acc1": 82.19400235168457, "test_acc5": 95.91200268188477, "epoch": 294, "n_parameters": 14635024} -{"train_lr": 1.393667678559817e-05, "train_loss": 1.8907882341265678, "test_loss": 0.6909689175731996, "test_acc1": 82.29000279022216, "test_acc5": 95.93800263397216, "epoch": 295, "n_parameters": 14635024} -{"train_lr": 1.2734078164625274e-05, "train_loss": 1.893856303668022, "test_loss": 0.6922610647538129, "test_acc1": 82.27600259765624, "test_acc5": 95.92200255584717, "epoch": 296, "n_parameters": 14635024} -{"train_lr": 1.1749953952777368e-05, "train_loss": 1.8984243153333664, "test_loss": 0.6909527726033154, "test_acc1": 82.2120027255249, "test_acc5": 95.94200274047851, "epoch": 297, "n_parameters": 14635024} 
-{"train_lr": 1.0984412070365348e-05, "train_loss": 1.9023100278615952, "test_loss": 0.6914110558436197, "test_acc1": 82.2740026171875, "test_acc5": 95.94200264556885, "epoch": 298, "n_parameters": 14635024} -{"train_lr": 1.0437536467683126e-05, "train_loss": 1.9029136568427085, "test_loss": 0.6915355414590415, "test_acc1": 82.26800282318115, "test_acc5": 95.94400275115967, "epoch": 299, "n_parameters": 14635024} diff --git a/cv/classification/repvit/pytorch/logs/repvit_m1_5_distill_450e.txt b/cv/classification/repvit/pytorch/logs/repvit_m1_5_distill_450e.txt deleted file mode 100644 index fa9fd91e..00000000 --- a/cv/classification/repvit/pytorch/logs/repvit_m1_5_distill_450e.txt +++ /dev/null @@ -1,450 +0,0 @@ -{"train_lr": 1.000000000000014e-06, "train_loss": 7.008284466838837, "test_loss": 6.951188168104957, "test_acc1": 0.10800000778198242, "test_acc5": 0.5140000247383117, "epoch": 0, "n_parameters": 14644432} -{"train_lr": 1.000000000000014e-06, "train_loss": 7.006554371547699, "test_loss": 6.9447544883279235, "test_acc1": 0.11800000835895538, "test_acc5": 0.5260000248241424, "epoch": 1, "n_parameters": 14644432} -{"train_lr": 0.0008007999999999933, "train_loss": 6.518093123626709, "test_loss": 5.419627831262701, "test_acc1": 6.158000177307129, "test_acc5": 17.874000563659667, "epoch": 2, "n_parameters": 14644432} -{"train_lr": 0.0016005999999999787, "train_loss": 5.937290691757203, "test_loss": 4.316322635201847, "test_acc1": 15.920000427398682, "test_acc5": 35.90200115737915, "epoch": 3, "n_parameters": 14644432} -{"train_lr": 0.0024003999999999835, "train_loss": 5.3397601802825925, "test_loss": 3.47492315576357, "test_acc1": 26.992000716552734, "test_acc5": 52.20800130889893, "epoch": 4, "n_parameters": 14644432} -{"train_lr": 0.0032001999999999873, "train_loss": 4.880985075902939, "test_loss": 2.9093371138853183, "test_acc1": 37.22600098754883, "test_acc5": 63.340001943664554, "epoch": 5, "n_parameters": 14644432} -{"train_lr": 0.003998784699903044, "train_loss": 4.552185489749909, "test_loss": 2.660707438693327, "test_acc1": 42.27400104904175, "test_acc5": 67.88000193969727, "epoch": 6, "n_parameters": 14644432} -{"train_lr": 0.0039982500460471835, "train_loss": 4.264975570583344, "test_loss": 2.389086161466206, "test_acc1": 47.44800124298096, "test_acc5": 72.56000242401123, "epoch": 7, "n_parameters": 14644432} -{"train_lr": 0.003997618243996162, "train_loss": 4.0588179904937745, "test_loss": 2.118877914898536, "test_acc1": 51.53000127105713, "test_acc5": 76.35400264709473, "epoch": 8, "n_parameters": 14644432} -{"train_lr": 0.003996889324543062, "train_loss": 3.8961911264419555, "test_loss": 2.0409970388692966, "test_acc1": 53.31800113861084, "test_acc5": 77.99800229797363, "epoch": 9, "n_parameters": 14644432} -{"train_lr": 0.003996063323214417, "train_loss": 3.7744434077739717, "test_loss": 1.9093990378520067, "test_acc1": 56.09400146484375, "test_acc5": 79.92800251617432, "epoch": 10, "n_parameters": 14644432} -{"train_lr": 0.003995140280268348, "train_loss": 3.6709865481853483, "test_loss": 1.7701611891388893, "test_acc1": 58.35400166046143, "test_acc5": 81.63400288726807, "epoch": 11, "n_parameters": 14644432} -{"train_lr": 0.003994120240692708, "train_loss": 3.6012020011901855, "test_loss": 1.7532399792005033, "test_acc1": 59.15800175689697, "test_acc5": 82.08400241027832, "epoch": 12, "n_parameters": 14644432} -{"train_lr": 0.003993003254202742, "train_loss": 3.525584548377991, "test_loss": 1.7154930935186499, "test_acc1": 60.742002059936524, "test_acc5": 
82.93200237640382, "epoch": 13, "n_parameters": 14644432} -{"train_lr": 0.0039917893752388625, "train_loss": 3.452783676147461, "test_loss": 1.7037041919196354, "test_acc1": 60.14800155303955, "test_acc5": 82.81600243072509, "epoch": 14, "n_parameters": 14644432} -{"train_lr": 0.003990478662963736, "train_loss": 3.4000623994350434, "test_loss": 1.6311201365555035, "test_acc1": 62.47400193908691, "test_acc5": 84.17000220733642, "epoch": 15, "n_parameters": 14644432} -{"train_lr": 0.003989071181259754, "train_loss": 3.358888169002533, "test_loss": 1.5872814725427067, "test_acc1": 62.8160018762207, "test_acc5": 84.59800259338378, "epoch": 16, "n_parameters": 14644432} -{"train_lr": 0.0039875669987254015, "train_loss": 3.315856420469284, "test_loss": 1.4994699784061487, "test_acc1": 64.43400213439942, "test_acc5": 85.74400284973144, "epoch": 17, "n_parameters": 14644432} -{"train_lr": 0.003985966188672574, "train_loss": 3.303329928445816, "test_loss": 1.5704924924408687, "test_acc1": 63.24800186523437, "test_acc5": 85.03000240997315, "epoch": 18, "n_parameters": 14644432} -{"train_lr": 0.0039842688291223715, "train_loss": 3.2463722860813142, "test_loss": 1.5296235356260748, "test_acc1": 64.1840019631958, "test_acc5": 85.40000232635498, "epoch": 19, "n_parameters": 14644432} -{"train_lr": 0.003982475002801825, "train_loss": 3.2158166055202484, "test_loss": 1.4546482330735993, "test_acc1": 65.63000193206787, "test_acc5": 86.4800023562622, "epoch": 20, "n_parameters": 14644432} -{"train_lr": 0.003980584797139465, "train_loss": 3.194486325454712, "test_loss": 1.4520771946100628, "test_acc1": 65.79600175567627, "test_acc5": 86.67600225891113, "epoch": 21, "n_parameters": 14644432} -{"train_lr": 0.003978598304261148, "train_loss": 3.1518150411129, "test_loss": 1.5053721889853477, "test_acc1": 64.99800197662354, "test_acc5": 86.02800244293213, "epoch": 22, "n_parameters": 14644432} -{"train_lr": 0.003976515620985842, "train_loss": 3.1331355843782425, "test_loss": 1.447665103656404, "test_acc1": 66.21400216461181, "test_acc5": 87.08600245697022, "epoch": 23, "n_parameters": 14644432} -{"train_lr": 0.0039743368488206155, "train_loss": 3.117431927418709, "test_loss": 1.4275341971832163, "test_acc1": 66.03400173156739, "test_acc5": 86.81400241271973, "epoch": 24, "n_parameters": 14644432} -{"train_lr": 0.0039720620939556715, "train_loss": 3.0989305408239365, "test_loss": 1.398505777997129, "test_acc1": 67.0240020401001, "test_acc5": 87.37400248840332, "epoch": 25, "n_parameters": 14644432} -{"train_lr": 0.003969691467259384, "train_loss": 3.075309196639061, "test_loss": 1.4065674205913263, "test_acc1": 67.4620020147705, "test_acc5": 87.55000265808106, "epoch": 26, "n_parameters": 14644432} -{"train_lr": 0.003967225084272694, "train_loss": 3.0370455713033677, "test_loss": 1.345239028772887, "test_acc1": 67.87800236358643, "test_acc5": 88.00800259429931, "epoch": 27, "n_parameters": 14644432} -{"train_lr": 0.003964663065203757, "train_loss": 3.0356533358812334, "test_loss": 1.4050118217573446, "test_acc1": 67.3600020147705, "test_acc5": 87.71200255523682, "epoch": 28, "n_parameters": 14644432} -{"train_lr": 0.003962005534921608, "train_loss": 3.020168463230133, "test_loss": 1.3432285509565298, "test_acc1": 67.90400173583984, "test_acc5": 88.13400253601074, "epoch": 29, "n_parameters": 14644432} -{"train_lr": 0.003959252622950646, "train_loss": 3.014135610342026, "test_loss": 1.3232944441192291, "test_acc1": 68.45400199188232, "test_acc5": 88.13000246612549, "epoch": 30, "n_parameters": 14644432} 
-{"train_lr": 0.003956404463463954, "train_loss": 2.9801429943799973, "test_loss": 1.2924990412943504, "test_acc1": 69.38200229522705, "test_acc5": 88.71400245452881, "epoch": 31, "n_parameters": 14644432} -{"train_lr": 0.003953461195276696, "train_loss": 2.9719162997961046, "test_loss": 1.319385301979149, "test_acc1": 68.77600205932617, "test_acc5": 88.67000249847412, "epoch": 32, "n_parameters": 14644432} -{"train_lr": 0.003950422961839594, "train_loss": 2.9652118861436843, "test_loss": 1.3077770326943958, "test_acc1": 69.06400217712402, "test_acc5": 88.65800261138916, "epoch": 33, "n_parameters": 14644432} -{"train_lr": 0.00394728991123201, "train_loss": 2.9515589082717897, "test_loss": 1.3288197271964128, "test_acc1": 68.71800204376221, "test_acc5": 88.53200261444091, "epoch": 34, "n_parameters": 14644432} -{"train_lr": 0.003944062196154177, "train_loss": 2.9417968888759614, "test_loss": 1.250486870022381, "test_acc1": 69.83400206298828, "test_acc5": 89.28400238769531, "epoch": 35, "n_parameters": 14644432} -{"train_lr": 0.003940739973920592, "train_loss": 2.9317537396907807, "test_loss": 1.2969833866638296, "test_acc1": 68.84600205780029, "test_acc5": 88.65200244598388, "epoch": 36, "n_parameters": 14644432} -{"train_lr": 0.003937323406451619, "train_loss": 2.9172662548065187, "test_loss": 1.255044880158761, "test_acc1": 69.85400247589111, "test_acc5": 89.41200278503418, "epoch": 37, "n_parameters": 14644432} -{"train_lr": 0.003933812660265883, "train_loss": 2.914268874192238, "test_loss": 1.238330475109465, "test_acc1": 69.88200226074218, "test_acc5": 89.37000230987549, "epoch": 38, "n_parameters": 14644432} -{"train_lr": 0.003930207906472293, "train_loss": 2.895674155163765, "test_loss": 1.2654331681482933, "test_acc1": 70.03200194580079, "test_acc5": 89.35200248565674, "epoch": 39, "n_parameters": 14644432} -{"train_lr": 0.003926509320761305, "train_loss": 2.882482876396179, "test_loss": 1.2245763566564112, "test_acc1": 70.24200239501953, "test_acc5": 89.60200258636475, "epoch": 40, "n_parameters": 14644432} -{"train_lr": 0.003922717083396902, "train_loss": 2.878323216176033, "test_loss": 1.2609136757605217, "test_acc1": 69.81600234985352, "test_acc5": 89.26400292907715, "epoch": 41, "n_parameters": 14644432} -{"train_lr": 0.003918831379207381, "train_loss": 2.86767467341423, "test_loss": 1.262873303364305, "test_acc1": 69.87200188995361, "test_acc5": 89.47600250518799, "epoch": 42, "n_parameters": 14644432} -{"train_lr": 0.003914852397576493, "train_loss": 2.8625484871149065, "test_loss": 1.2738893842872452, "test_acc1": 70.11400206848144, "test_acc5": 89.49800253265381, "epoch": 43, "n_parameters": 14644432} -{"train_lr": 0.003910780332434081, "train_loss": 2.853123303580284, "test_loss": 1.2361809208112604, "test_acc1": 70.23800212402344, "test_acc5": 89.638002527771, "epoch": 44, "n_parameters": 14644432} -{"train_lr": 0.003906615382246946, "train_loss": 2.8408728387832642, "test_loss": 1.2007137883235426, "test_acc1": 70.65600210876465, "test_acc5": 89.94800273132324, "epoch": 45, "n_parameters": 14644432} -{"train_lr": 0.0039023577500088094, "train_loss": 2.8271664252996445, "test_loss": 1.2514411062002182, "test_acc1": 69.93800233673096, "test_acc5": 89.41000258605958, "epoch": 46, "n_parameters": 14644432} -{"train_lr": 0.003898007643230756, "train_loss": 2.832879266381264, "test_loss": 1.2605286950574202, "test_acc1": 70.20800250549317, "test_acc5": 89.46400247711182, "epoch": 47, "n_parameters": 14644432} -{"train_lr": 0.0038935652739308757, "train_loss": 
2.8289256808519365, "test_loss": 1.2204523748334717, "test_acc1": 70.51000225921631, "test_acc5": 89.8640026196289, "epoch": 48, "n_parameters": 14644432} -{"train_lr": 0.003889030858623732, "train_loss": 2.8171999957799914, "test_loss": 1.2052408592665897, "test_acc1": 70.97400234222413, "test_acc5": 89.90800281188965, "epoch": 49, "n_parameters": 14644432} -{"train_lr": 0.003884404618310635, "train_loss": 2.813540559577942, "test_loss": 1.194179067716879, "test_acc1": 71.32000228149414, "test_acc5": 90.158002472229, "epoch": 50, "n_parameters": 14644432} -{"train_lr": 0.0038796867784678503, "train_loss": 2.7983796468257904, "test_loss": 1.17974222802064, "test_acc1": 71.25000220977783, "test_acc5": 90.05200237579346, "epoch": 51, "n_parameters": 14644432} -{"train_lr": 0.0038748775690362956, "train_loss": 2.8022860885858534, "test_loss": 1.2046621674123932, "test_acc1": 70.98000203308105, "test_acc5": 89.86000269744873, "epoch": 52, "n_parameters": 14644432} -{"train_lr": 0.0038699772244100415, "train_loss": 2.791049560713768, "test_loss": 1.2004990987479687, "test_acc1": 70.78400213439942, "test_acc5": 89.9540029071045, "epoch": 53, "n_parameters": 14644432} -{"train_lr": 0.003864985983424946, "train_loss": 2.778411211514473, "test_loss": 1.2062437104828216, "test_acc1": 70.93800227478027, "test_acc5": 90.11600274536133, "epoch": 54, "n_parameters": 14644432} -{"train_lr": 0.003859904089347072, "train_loss": 2.7792170516252517, "test_loss": 1.1913484462043817, "test_acc1": 70.9980022668457, "test_acc5": 90.10400248260498, "epoch": 55, "n_parameters": 14644432} -{"train_lr": 0.0038547317898607334, "train_loss": 2.774047027659416, "test_loss": 1.185920160044642, "test_acc1": 71.4840020779419, "test_acc5": 90.30400270324706, "epoch": 56, "n_parameters": 14644432} -{"train_lr": 0.0038494693370565466, "train_loss": 2.759350538659096, "test_loss": 1.1522059519501293, "test_acc1": 71.93400256835938, "test_acc5": 90.41600262176513, "epoch": 57, "n_parameters": 14644432} -{"train_lr": 0.0038441169874190843, "train_loss": 2.75507874751091, "test_loss": 1.174687815939679, "test_acc1": 71.89200259033203, "test_acc5": 90.33000273651123, "epoch": 58, "n_parameters": 14644432} -{"train_lr": 0.003838675001814183, "train_loss": 2.755833395910263, "test_loss": 1.1515371678944897, "test_acc1": 71.89600246551514, "test_acc5": 90.61600257019043, "epoch": 59, "n_parameters": 14644432} -{"train_lr": 0.0038331436454766355, "train_loss": 2.739069173383713, "test_loss": 1.1532514064627535, "test_acc1": 71.65600240203858, "test_acc5": 90.73600258575439, "epoch": 60, "n_parameters": 14644432} -{"train_lr": 0.0038275231879969967, "train_loss": 2.7515201347112654, "test_loss": 1.166433783576769, "test_acc1": 71.77600223602295, "test_acc5": 90.58200270629882, "epoch": 61, "n_parameters": 14644432} -{"train_lr": 0.00382181390330831, "train_loss": 2.7462666627168657, "test_loss": 1.1784911585204743, "test_acc1": 71.46600232635498, "test_acc5": 90.32000222015381, "epoch": 62, "n_parameters": 14644432} -{"train_lr": 0.00381601606967318, "train_loss": 2.7425789399147034, "test_loss": 1.133149261422017, "test_acc1": 72.41200255218506, "test_acc5": 90.75800222625732, "epoch": 63, "n_parameters": 14644432} -{"train_lr": 0.0038101299696697475, "train_loss": 2.732644404888153, "test_loss": 1.148283217321424, "test_acc1": 72.04000223724366, "test_acc5": 90.69200235168456, "epoch": 64, "n_parameters": 14644432} -{"train_lr": 0.00380415589017823, "train_loss": 2.722247062945366, "test_loss": 1.1357493391808342, "test_acc1": 
72.23200231994629, "test_acc5": 90.58400235778808, "epoch": 65, "n_parameters": 14644432} -{"train_lr": 0.0037980941223668303, "train_loss": 2.7147997661590577, "test_loss": 1.1120255116154165, "test_acc1": 72.7440022894287, "test_acc5": 91.1160024661255, "epoch": 66, "n_parameters": 14644432} -{"train_lr": 0.003791944961677627, "train_loss": 2.718214280152321, "test_loss": 1.1718948286245852, "test_acc1": 71.81600249847412, "test_acc5": 90.32200261047363, "epoch": 67, "n_parameters": 14644432} -{"train_lr": 0.0037857087078119896, "train_loss": 2.7077870247840883, "test_loss": 1.1235760998199968, "test_acc1": 72.3960025692749, "test_acc5": 90.98400294067383, "epoch": 68, "n_parameters": 14644432} -{"train_lr": 0.003779385664716107, "train_loss": 2.7138586283445356, "test_loss": 1.1173144795877092, "test_acc1": 72.28400256469726, "test_acc5": 90.94000274230957, "epoch": 69, "n_parameters": 14644432} -{"train_lr": 0.003772976140566265, "train_loss": 2.7263456097364425, "test_loss": 1.1144990018185448, "test_acc1": 72.690002444458, "test_acc5": 91.1400023626709, "epoch": 70, "n_parameters": 14644432} -{"train_lr": 0.0037664804477535617, "train_loss": 2.688573930120468, "test_loss": 1.1196541488170624, "test_acc1": 72.6580020678711, "test_acc5": 91.10200267547607, "epoch": 71, "n_parameters": 14644432} -{"train_lr": 0.003759898902868911, "train_loss": 2.696794249343872, "test_loss": 1.1106504767256624, "test_acc1": 72.82400226196289, "test_acc5": 91.10400230438232, "epoch": 72, "n_parameters": 14644432} -{"train_lr": 0.003753231826687486, "train_loss": 2.6960800410032273, "test_loss": 1.139154666048639, "test_acc1": 72.45000271484375, "test_acc5": 90.70600267700195, "epoch": 73, "n_parameters": 14644432} -{"train_lr": 0.0037464795441532936, "train_loss": 2.690470236778259, "test_loss": 1.102291907019475, "test_acc1": 72.91200279418945, "test_acc5": 91.07200271942139, "epoch": 74, "n_parameters": 14644432} -{"train_lr": 0.003739642384362937, "train_loss": 2.6831923714876176, "test_loss": 1.0846440371345072, "test_acc1": 73.12800248870849, "test_acc5": 91.35600231842041, "epoch": 75, "n_parameters": 14644432} -{"train_lr": 0.003732720680549938, "train_loss": 2.690972243642807, "test_loss": 1.0984460264444351, "test_acc1": 72.99400258911133, "test_acc5": 91.29400268981934, "epoch": 76, "n_parameters": 14644432} -{"train_lr": 0.003725714770068486, "train_loss": 2.67374376578331, "test_loss": 1.1411481648683548, "test_acc1": 72.34400236419678, "test_acc5": 90.86200226715088, "epoch": 77, "n_parameters": 14644432} -{"train_lr": 0.0037186249943766602, "train_loss": 2.6779138342142104, "test_loss": 1.1429322725709747, "test_acc1": 72.50800257598877, "test_acc5": 90.7600025378418, "epoch": 78, "n_parameters": 14644432} -{"train_lr": 0.003711451699020238, "train_loss": 2.671588545703888, "test_loss": 1.1211429201066494, "test_acc1": 72.71800269256592, "test_acc5": 90.7220024243164, "epoch": 79, "n_parameters": 14644432} -{"train_lr": 0.0037041952336154147, "train_loss": 2.661923124718666, "test_loss": 1.1109782606363297, "test_acc1": 72.83000244934082, "test_acc5": 91.1480025415039, "epoch": 80, "n_parameters": 14644432} -{"train_lr": 0.003696855951832067, "train_loss": 2.6647210408210755, "test_loss": 1.0689263148781132, "test_acc1": 73.38000240234375, "test_acc5": 91.33400241638184, "epoch": 81, "n_parameters": 14644432} -{"train_lr": 0.0036894342113765284, "train_loss": 2.666898946881294, "test_loss": 1.0916876146460281, "test_acc1": 73.2820026248169, "test_acc5": 91.34600272338866, "epoch": 82, 
"n_parameters": 14644432} -{"train_lr": 0.0036819303739738757, "train_loss": 2.6685909784078596, "test_loss": 1.0957123356706955, "test_acc1": 73.39000217132569, "test_acc5": 91.39400275146484, "epoch": 83, "n_parameters": 14644432} -{"train_lr": 0.00367434480535066, "train_loss": 2.647059836292267, "test_loss": 1.1194452248951967, "test_acc1": 73.0800024005127, "test_acc5": 91.26000267547607, "epoch": 84, "n_parameters": 14644432} -{"train_lr": 0.00366667787521664, "train_loss": 2.652027036714554, "test_loss": 1.102800321929595, "test_acc1": 72.86800243652344, "test_acc5": 91.21000261901855, "epoch": 85, "n_parameters": 14644432} -{"train_lr": 0.003658929957247333, "train_loss": 2.6517084219932556, "test_loss": 1.0901143614421873, "test_acc1": 72.99200250671387, "test_acc5": 91.29200248229981, "epoch": 86, "n_parameters": 14644432} -{"train_lr": 0.0036511014290652147, "train_loss": 2.6402752996206282, "test_loss": 1.0831571153419859, "test_acc1": 73.2420024658203, "test_acc5": 91.44600257507324, "epoch": 87, "n_parameters": 14644432} -{"train_lr": 0.003643192672221756, "train_loss": 2.6429654801368714, "test_loss": 1.099335196921054, "test_acc1": 73.0380026638794, "test_acc5": 91.18600249206543, "epoch": 88, "n_parameters": 14644432} -{"train_lr": 0.0036352040721785803, "train_loss": 2.6328382944583892, "test_loss": 1.0927010862266315, "test_acc1": 73.46200239654542, "test_acc5": 91.2780024154663, "epoch": 89, "n_parameters": 14644432} -{"train_lr": 0.003627136018288861, "train_loss": 2.638186792373657, "test_loss": 1.0603573458159672, "test_acc1": 73.65000247283936, "test_acc5": 91.65200240783692, "epoch": 90, "n_parameters": 14644432} -{"train_lr": 0.0036189889037780316, "train_loss": 2.618958823680878, "test_loss": 1.0663818642497063, "test_acc1": 73.83000275268554, "test_acc5": 91.60200234893799, "epoch": 91, "n_parameters": 14644432} -{"train_lr": 0.0036107631257249954, "train_loss": 2.632265359520912, "test_loss": 1.0954553033499157, "test_acc1": 73.17600260742188, "test_acc5": 91.27200229034423, "epoch": 92, "n_parameters": 14644432} -{"train_lr": 0.003602459085042744, "train_loss": 2.6308042375564575, "test_loss": 1.0962702921208214, "test_acc1": 73.46200265441894, "test_acc5": 91.37200260894775, "epoch": 93, "n_parameters": 14644432} -{"train_lr": 0.003594077186458248, "train_loss": 2.624654632616043, "test_loss": 1.092516953673433, "test_acc1": 73.7020021798706, "test_acc5": 91.31200260894775, "epoch": 94, "n_parameters": 14644432} -{"train_lr": 0.003585617838493613, "train_loss": 2.616440146827698, "test_loss": 1.091936216196593, "test_acc1": 73.27800258544922, "test_acc5": 91.51600267547607, "epoch": 95, "n_parameters": 14644432} -{"train_lr": 0.0035770814534454225, "train_loss": 2.622430535531044, "test_loss": 1.0527586831766016, "test_acc1": 73.97600271942139, "test_acc5": 91.86400253570557, "epoch": 96, "n_parameters": 14644432} -{"train_lr": 0.003568468447365067, "train_loss": 2.614762443470955, "test_loss": 1.0668924518806093, "test_acc1": 73.5820024874878, "test_acc5": 91.58800261108398, "epoch": 97, "n_parameters": 14644432} -{"train_lr": 0.0035597792400383233, "train_loss": 2.6047208726882936, "test_loss": 1.068926732548896, "test_acc1": 73.94600249176025, "test_acc5": 91.63600243316651, "epoch": 98, "n_parameters": 14644432} -{"train_lr": 0.0035510142549648235, "train_loss": 2.6173360176801683, "test_loss": 1.0887897321406532, "test_acc1": 73.43800244445801, "test_acc5": 91.41800256164551, "epoch": 99, "n_parameters": 14644432} -{"train_lr": 0.0035421739193377214, 
"train_loss": 2.6133335992097853, "test_loss": 1.0771124485661001, "test_acc1": 73.66600253814697, "test_acc5": 91.54000227325439, "epoch": 100, "n_parameters": 14644432} -{"train_lr": 0.003533258664022372, "train_loss": 2.604357618331909, "test_loss": 1.0649314426323946, "test_acc1": 73.90200236328126, "test_acc5": 91.72800254638672, "epoch": 101, "n_parameters": 14644432} -{"train_lr": 0.0035242689235357775, "train_loss": 2.6003109339237214, "test_loss": 1.0644695009378826, "test_acc1": 73.85600242095947, "test_acc5": 91.59600270874023, "epoch": 102, "n_parameters": 14644432} -{"train_lr": 0.0035152051360252245, "train_loss": 2.591521009349823, "test_loss": 1.0690864392501467, "test_acc1": 73.79000241149902, "test_acc5": 91.59800271759033, "epoch": 103, "n_parameters": 14644432} -{"train_lr": 0.0035060677432469894, "train_loss": 2.5944792801856993, "test_loss": 1.068486130632022, "test_acc1": 73.47400246612548, "test_acc5": 91.6160022567749, "epoch": 104, "n_parameters": 14644432} -{"train_lr": 0.0034968571905445293, "train_loss": 2.5866521033525465, "test_loss": 1.072594429640209, "test_acc1": 73.91800252807617, "test_acc5": 91.78600223266602, "epoch": 105, "n_parameters": 14644432} -{"train_lr": 0.0034875739268273947, "train_loss": 2.5860131551265715, "test_loss": 1.0650176272234495, "test_acc1": 73.98200262207031, "test_acc5": 91.64800236236572, "epoch": 106, "n_parameters": 14644432} -{"train_lr": 0.00347821840454859, "train_loss": 2.5770531233072282, "test_loss": 1.0673080785747837, "test_acc1": 73.75400263244629, "test_acc5": 91.62800247497559, "epoch": 107, "n_parameters": 14644432} -{"train_lr": 0.003468791079683292, "train_loss": 2.581898590183258, "test_loss": 1.0477981525747215, "test_acc1": 73.95800260192871, "test_acc5": 91.76000234954834, "epoch": 108, "n_parameters": 14644432} -{"train_lr": 0.003459292411705684, "train_loss": 2.578146036028862, "test_loss": 1.0724450823138743, "test_acc1": 73.3940025656128, "test_acc5": 91.63600260314941, "epoch": 109, "n_parameters": 14644432} -{"train_lr": 0.003449722863567734, "train_loss": 2.5816635498762133, "test_loss": 1.0725192046340775, "test_acc1": 73.74400229003906, "test_acc5": 91.62200263427735, "epoch": 110, "n_parameters": 14644432} -{"train_lr": 0.0034400829016756297, "train_loss": 2.5854579980373384, "test_loss": 1.0373928897521074, "test_acc1": 74.32600252288819, "test_acc5": 91.95800279022217, "epoch": 111, "n_parameters": 14644432} -{"train_lr": 0.0034303729958673978, "train_loss": 2.5819211410522462, "test_loss": 1.07029871291974, "test_acc1": 73.8600022894287, "test_acc5": 91.70400250274658, "epoch": 112, "n_parameters": 14644432} -{"train_lr": 0.0034205936193903307, "train_loss": 2.5821726234674456, "test_loss": 1.0588722097523071, "test_acc1": 73.8980026828003, "test_acc5": 91.65800246063232, "epoch": 113, "n_parameters": 14644432} -{"train_lr": 0.0034107452488774006, "train_loss": 2.5610309550285337, "test_loss": 1.0747716501355171, "test_acc1": 73.65400286743164, "test_acc5": 91.67200270599365, "epoch": 114, "n_parameters": 14644432} -{"train_lr": 0.0034008283643241475, "train_loss": 2.5618724781513214, "test_loss": 1.0359906876350151, "test_acc1": 74.30600260742187, "test_acc5": 92.04200268920899, "epoch": 115, "n_parameters": 14644432} -{"train_lr": 0.003390843449065705, "train_loss": 2.5664499773025513, "test_loss": 1.0647801489514463, "test_acc1": 73.77000270233154, "test_acc5": 91.80200263000488, "epoch": 116, "n_parameters": 14644432} -{"train_lr": 0.0033807909897526967, "train_loss": 2.5628699987649917, 
"test_loss": 1.0441649323877167, "test_acc1": 74.41400235229491, "test_acc5": 92.02800270019532, "epoch": 117, "n_parameters": 14644432} -{"train_lr": 0.0033706714763277455, "train_loss": 2.562810378885269, "test_loss": 1.0251829006216104, "test_acc1": 74.64600249694824, "test_acc5": 92.07000242980958, "epoch": 118, "n_parameters": 14644432} -{"train_lr": 0.003360485402001723, "train_loss": 2.564703248310089, "test_loss": 1.0182839835829594, "test_acc1": 74.47200251159668, "test_acc5": 92.31400220703125, "epoch": 119, "n_parameters": 14644432} -{"train_lr": 0.0033502332632295347, "train_loss": 2.5557704483747483, "test_loss": 1.035627002006068, "test_acc1": 74.54000254272461, "test_acc5": 92.0800025302124, "epoch": 120, "n_parameters": 14644432} -{"train_lr": 0.003339915559685877, "train_loss": 2.551942544031143, "test_loss": 1.03343084016267, "test_acc1": 74.62000232574462, "test_acc5": 91.81800278747559, "epoch": 121, "n_parameters": 14644432} -{"train_lr": 0.0033295327942412492, "train_loss": 2.5511802800416947, "test_loss": 1.0386262411142098, "test_acc1": 74.52400268493652, "test_acc5": 92.10200217590332, "epoch": 122, "n_parameters": 14644432} -{"train_lr": 0.003319085472936782, "train_loss": 2.551176596593857, "test_loss": 1.0236438440487665, "test_acc1": 74.74800247375488, "test_acc5": 92.25800214569092, "epoch": 123, "n_parameters": 14644432} -{"train_lr": 0.0033085741049602795, "train_loss": 2.5406541544675827, "test_loss": 1.0200479004312963, "test_acc1": 74.69400287200928, "test_acc5": 92.15400278747559, "epoch": 124, "n_parameters": 14644432} -{"train_lr": 0.003297999202620968, "train_loss": 2.542414959359169, "test_loss": 1.0655197972322212, "test_acc1": 73.75600249328613, "test_acc5": 91.8120023852539, "epoch": 125, "n_parameters": 14644432} -{"train_lr": 0.0032873612813246714, "train_loss": 2.5421835290431978, "test_loss": 1.0044124786029844, "test_acc1": 74.99000230041504, "test_acc5": 92.40000229248047, "epoch": 126, "n_parameters": 14644432} -{"train_lr": 0.003276660859548651, "train_loss": 2.539451810956001, "test_loss": 1.0102833888548262, "test_acc1": 74.76400266967774, "test_acc5": 92.31800250549317, "epoch": 127, "n_parameters": 14644432} -{"train_lr": 0.0032658984588163557, "train_loss": 2.537626796793938, "test_loss": 1.022060335120734, "test_acc1": 74.70800231384277, "test_acc5": 92.09800248565674, "epoch": 128, "n_parameters": 14644432} -{"train_lr": 0.003255074603672122, "train_loss": 2.537217797112465, "test_loss": 1.0048571610275436, "test_acc1": 75.10000280761719, "test_acc5": 92.30000246063233, "epoch": 129, "n_parameters": 14644432} -{"train_lr": 0.003244189821655263, "train_loss": 2.5363189699649813, "test_loss": 1.042021280483288, "test_acc1": 74.43800241912842, "test_acc5": 92.07000245758057, "epoch": 130, "n_parameters": 14644432} -{"train_lr": 0.003233244643274736, "train_loss": 2.5279310478925705, "test_loss": 1.0404886296566795, "test_acc1": 74.21200250305176, "test_acc5": 92.18200278686524, "epoch": 131, "n_parameters": 14644432} -{"train_lr": 0.0032222396019829943, "train_loss": 2.530080285382271, "test_loss": 1.022953789900331, "test_acc1": 74.8660026373291, "test_acc5": 92.1860023071289, "epoch": 132, "n_parameters": 14644432} -{"train_lr": 0.0032111752341504192, "train_loss": 2.5264592044591905, "test_loss": 1.027657877872972, "test_acc1": 74.97800235198974, "test_acc5": 92.1560027456665, "epoch": 133, "n_parameters": 14644432} -{"train_lr": 0.0032000520790385592, "train_loss": 2.526892432188988, "test_loss": 1.0197436138987541, "test_acc1": 
74.85800263061523, "test_acc5": 92.25800243011474, "epoch": 134, "n_parameters": 14644432} -{"train_lr": 0.0031888706787743812, "train_loss": 2.5214610470294954, "test_loss": 1.0216573996140677, "test_acc1": 74.69000220306397, "test_acc5": 92.17400283447266, "epoch": 135, "n_parameters": 14644432} -{"train_lr": 0.0031776315783234484, "train_loss": 2.520404613637924, "test_loss": 1.0302109431256266, "test_acc1": 74.57200273498535, "test_acc5": 92.21600265869141, "epoch": 136, "n_parameters": 14644432} -{"train_lr": 0.0031663353254638284, "train_loss": 2.5173986617088318, "test_loss": 1.0073342505185043, "test_acc1": 74.72200246368408, "test_acc5": 92.4400027279663, "epoch": 137, "n_parameters": 14644432} -{"train_lr": 0.0031549824707587932, "train_loss": 2.5255429135799408, "test_loss": 1.0066507035756813, "test_acc1": 74.88200246826172, "test_acc5": 92.41200260559081, "epoch": 138, "n_parameters": 14644432} -{"train_lr": 0.003143573567530467, "train_loss": 2.5188443867206574, "test_loss": 1.0222833489670473, "test_acc1": 74.92800226959228, "test_acc5": 92.17400278381348, "epoch": 139, "n_parameters": 14644432} -{"train_lr": 0.0031321091718327755, "train_loss": 2.508847153902054, "test_loss": 1.0088582816807663, "test_acc1": 75.29400281738282, "test_acc5": 92.34600276123047, "epoch": 140, "n_parameters": 14644432} -{"train_lr": 0.003120589842424192, "train_loss": 2.510888283610344, "test_loss": 1.0018632609178038, "test_acc1": 75.25400263549804, "test_acc5": 92.53000291198731, "epoch": 141, "n_parameters": 14644432} -{"train_lr": 0.0031090161407405044, "train_loss": 2.497389319086075, "test_loss": 0.9974721282282296, "test_acc1": 74.90400243927002, "test_acc5": 92.29000217071533, "epoch": 142, "n_parameters": 14644432} -{"train_lr": 0.003097388630867618, "train_loss": 2.5100986449003218, "test_loss": 0.9870462318991914, "test_acc1": 75.42800235992432, "test_acc5": 92.48600246887207, "epoch": 143, "n_parameters": 14644432} -{"train_lr": 0.0030857078795141065, "train_loss": 2.502269327545166, "test_loss": 0.9802120864829597, "test_acc1": 75.57400249298095, "test_acc5": 92.63000270935059, "epoch": 144, "n_parameters": 14644432} -{"train_lr": 0.0030739744559831164, "train_loss": 2.5025472364664076, "test_loss": 0.9924784087959457, "test_acc1": 75.45200255004883, "test_acc5": 92.46200261657715, "epoch": 145, "n_parameters": 14644432} -{"train_lr": 0.003062188932145215, "train_loss": 2.486561963105202, "test_loss": 1.0016497310031862, "test_acc1": 75.19800243591308, "test_acc5": 92.47000248565674, "epoch": 146, "n_parameters": 14644432} -{"train_lr": 0.0030503518824103173, "train_loss": 2.4974504533290864, "test_loss": 1.0033418117638897, "test_acc1": 75.11200231872559, "test_acc5": 92.45800208374024, "epoch": 147, "n_parameters": 14644432} -{"train_lr": 0.0030384638836993723, "train_loss": 2.49754859790802, "test_loss": 1.0376444117111319, "test_acc1": 74.90200242095948, "test_acc5": 92.2660025302124, "epoch": 148, "n_parameters": 14644432} -{"train_lr": 0.003026525515416759, "train_loss": 2.486808262252808, "test_loss": 1.0180032119593199, "test_acc1": 74.91800256347656, "test_acc5": 92.37600243164063, "epoch": 149, "n_parameters": 14644432} -{"train_lr": 0.0030145373594217015, "train_loss": 2.4919574544906617, "test_loss": 0.9872786437763887, "test_acc1": 75.3020024597168, "test_acc5": 92.6600027230835, "epoch": 150, "n_parameters": 14644432} -{"train_lr": 0.0030025000000000156, "train_loss": 2.485015009665489, "test_loss": 1.01998840754523, "test_acc1": 75.0580024432373, "test_acc5": 
92.37600264770508, "epoch": 151, "n_parameters": 14644432} -{"train_lr": 0.0029904140238355007, "train_loss": 2.4851486999988555, "test_loss": 1.0138441953150665, "test_acc1": 75.60600264312744, "test_acc5": 92.5120024130249, "epoch": 152, "n_parameters": 14644432} -{"train_lr": 0.0029782800199817903, "train_loss": 2.4757453747749327, "test_loss": 0.9898308513357359, "test_acc1": 75.39800255676269, "test_acc5": 92.59400246032715, "epoch": 153, "n_parameters": 14644432} -{"train_lr": 0.0029660985798329416, "train_loss": 2.4872015516281127, "test_loss": 0.9848947220427149, "test_acc1": 75.63400241027831, "test_acc5": 92.6820024520874, "epoch": 154, "n_parameters": 14644432} -{"train_lr": 0.0029538702970952164, "train_loss": 2.4828611624240877, "test_loss": 0.9665873107664725, "test_acc1": 75.75000257629395, "test_acc5": 92.80800260559081, "epoch": 155, "n_parameters": 14644432} -{"train_lr": 0.0029415957677578724, "train_loss": 2.4713316747665406, "test_loss": 1.000809188493911, "test_acc1": 75.06200261505127, "test_acc5": 92.57000279571533, "epoch": 156, "n_parameters": 14644432} -{"train_lr": 0.002929275590064108, "train_loss": 2.474997405076027, "test_loss": 1.0123125687241554, "test_acc1": 75.22600264709473, "test_acc5": 92.5680022845459, "epoch": 157, "n_parameters": 14644432} -{"train_lr": 0.002916910364482115, "train_loss": 2.464137577819824, "test_loss": 0.9967569154413307, "test_acc1": 75.31000241973877, "test_acc5": 92.64400251617431, "epoch": 158, "n_parameters": 14644432} -{"train_lr": 0.0029045006936754257, "train_loss": 2.4769646788835527, "test_loss": 0.9991204975282445, "test_acc1": 75.342002578125, "test_acc5": 92.54400227630616, "epoch": 159, "n_parameters": 14644432} -{"train_lr": 0.0028920471824738832, "train_loss": 2.4628052623033523, "test_loss": 0.9718471623080618, "test_acc1": 75.60000268554687, "test_acc5": 92.91600269256591, "epoch": 160, "n_parameters": 14644432} -{"train_lr": 0.0028795504378442225, "train_loss": 2.4530190636873246, "test_loss": 0.9694889702779406, "test_acc1": 75.88400221374512, "test_acc5": 92.76200235748291, "epoch": 161, "n_parameters": 14644432} -{"train_lr": 0.002867011068859989, "train_loss": 2.4631530643939974, "test_loss": 0.9830352992695921, "test_acc1": 75.59400231964112, "test_acc5": 92.73400263153076, "epoch": 162, "n_parameters": 14644432} -{"train_lr": 0.0028544296866723304, "train_loss": 2.4552556029081343, "test_loss": 0.9758348226108972, "test_acc1": 75.7380026461792, "test_acc5": 92.78800263702392, "epoch": 163, "n_parameters": 14644432} -{"train_lr": 0.0028418069044801562, "train_loss": 2.4568763515233996, "test_loss": 0.9597905519254067, "test_acc1": 75.8280024005127, "test_acc5": 92.85800277618408, "epoch": 164, "n_parameters": 14644432} -{"train_lr": 0.0028291433375, "train_loss": 2.4570828877449036, "test_loss": 0.9902266993680421, "test_acc1": 75.50600237884521, "test_acc5": 92.5840026675415, "epoch": 165, "n_parameters": 14644432} -{"train_lr": 0.002816439602936208, "train_loss": 2.444617389655113, "test_loss": 0.956026341766119, "test_acc1": 76.01600277496338, "test_acc5": 92.90800247161866, "epoch": 166, "n_parameters": 14644432} -{"train_lr": 0.002803696319950981, "train_loss": 2.4509379133701326, "test_loss": 0.9738420787103036, "test_acc1": 76.11400223083496, "test_acc5": 92.86400261657715, "epoch": 167, "n_parameters": 14644432} -{"train_lr": 0.0027909141096339935, "train_loss": 2.456623383498192, "test_loss": 0.9611923600382665, "test_acc1": 76.01000259552002, "test_acc5": 92.9900025479126, "epoch": 168, 
"n_parameters": 14644432} -{"train_lr": 0.002778093594971943, "train_loss": 2.4415668269634248, "test_loss": 0.9789058936431128, "test_acc1": 75.97600256561279, "test_acc5": 92.90800267547607, "epoch": 169, "n_parameters": 14644432} -{"train_lr": 0.002765235400818761, "train_loss": 2.4428994225740435, "test_loss": 0.9567826521747252, "test_acc1": 76.06200275970458, "test_acc5": 93.09200263641357, "epoch": 170, "n_parameters": 14644432} -{"train_lr": 0.0027523401538647224, "train_loss": 2.4428655291318893, "test_loss": 0.9634987551938085, "test_acc1": 75.96200239715576, "test_acc5": 92.97200278411866, "epoch": 171, "n_parameters": 14644432} -{"train_lr": 0.002739408482605956, "train_loss": 2.438224412059784, "test_loss": 0.9684398619129377, "test_acc1": 75.84000228332519, "test_acc5": 92.732002527771, "epoch": 172, "n_parameters": 14644432} -{"train_lr": 0.002726441017313784, "train_loss": 2.429723677420616, "test_loss": 0.9706335547654068, "test_acc1": 75.80000250579835, "test_acc5": 92.79200250488282, "epoch": 173, "n_parameters": 14644432} -{"train_lr": 0.002713438390004251, "train_loss": 2.4355383128643036, "test_loss": 0.9571902530596537, "test_acc1": 76.21200252593994, "test_acc5": 92.99400233520508, "epoch": 174, "n_parameters": 14644432} -{"train_lr": 0.0027004012344070075, "train_loss": 2.431719440150261, "test_loss": 0.9606954523307436, "test_acc1": 75.87400257385254, "test_acc5": 92.96200249389648, "epoch": 175, "n_parameters": 14644432} -{"train_lr": 0.0026873301859347007, "train_loss": 2.427032042360306, "test_loss": 0.9773518506656674, "test_acc1": 75.57600255432129, "test_acc5": 92.74600250793458, "epoch": 176, "n_parameters": 14644432} -{"train_lr": 0.002674225881651733, "train_loss": 2.429347982788086, "test_loss": 0.9541274723322952, "test_acc1": 76.12600289703369, "test_acc5": 92.95200261199952, "epoch": 177, "n_parameters": 14644432} -{"train_lr": 0.0026610889602434354, "train_loss": 2.4262933370828628, "test_loss": 0.9487058303373701, "test_acc1": 76.3680024822998, "test_acc5": 93.09600260803222, "epoch": 178, "n_parameters": 14644432} -{"train_lr": 0.0026479200619848845, "train_loss": 2.4205800812244416, "test_loss": 0.9598895157961285, "test_acc1": 75.88600222290039, "test_acc5": 92.98800266418458, "epoch": 179, "n_parameters": 14644432} -{"train_lr": 0.0026347198287094897, "train_loss": 2.4217813963651658, "test_loss": 0.9593071337131893, "test_acc1": 76.14600254180908, "test_acc5": 92.916002371521, "epoch": 180, "n_parameters": 14644432} -{"train_lr": 0.0026214889037780493, "train_loss": 2.4275010081768036, "test_loss": 0.9644953380612766, "test_acc1": 76.09600220642089, "test_acc5": 93.01600240325928, "epoch": 181, "n_parameters": 14644432} -{"train_lr": 0.0026082279320471633, "train_loss": 2.431760982990265, "test_loss": 0.9673345332198283, "test_acc1": 76.08600268554687, "test_acc5": 92.746002265625, "epoch": 182, "n_parameters": 14644432} -{"train_lr": 0.0025949375598379055, "train_loss": 2.4189007735729215, "test_loss": 0.9474517163984916, "test_acc1": 76.33800261291503, "test_acc5": 93.09000300170898, "epoch": 183, "n_parameters": 14644432} -{"train_lr": 0.0025816184349041886, "train_loss": 2.419967445588112, "test_loss": 0.9637696423074779, "test_acc1": 76.10000246154785, "test_acc5": 92.87800291809081, "epoch": 184, "n_parameters": 14644432} -{"train_lr": 0.0025682712064015187, "train_loss": 2.4012995896100997, "test_loss": 0.9596342319513068, "test_acc1": 76.00600245727539, "test_acc5": 92.97400268371582, "epoch": 185, "n_parameters": 14644432} 
-{"train_lr": 0.002554896524854948, "train_loss": 2.4127788978815077, "test_loss": 0.946751927409102, "test_acc1": 76.13000291748047, "test_acc5": 93.17600250579834, "epoch": 186, "n_parameters": 14644432} -{"train_lr": 0.0025414950421274317, "train_loss": 2.4132883436918258, "test_loss": 0.9550477432854035, "test_acc1": 76.09200250427246, "test_acc5": 92.96600270355225, "epoch": 187, "n_parameters": 14644432} -{"train_lr": 0.002528067411388543, "train_loss": 2.3973791504859925, "test_loss": 0.9399141802945558, "test_acc1": 76.49000264038087, "test_acc5": 93.34600236877442, "epoch": 188, "n_parameters": 14644432} -{"train_lr": 0.0025146142870819286, "train_loss": 2.4001392053604125, "test_loss": 0.9439231683226192, "test_acc1": 76.64200273620605, "test_acc5": 93.1120024295044, "epoch": 189, "n_parameters": 14644432} -{"train_lr": 0.002501136324893901, "train_loss": 2.4113122057676315, "test_loss": 0.9474824783118332, "test_acc1": 76.42600248138427, "test_acc5": 93.18600248077392, "epoch": 190, "n_parameters": 14644432} -{"train_lr": 0.002487634181721322, "train_loss": 2.3959370755195617, "test_loss": 0.9401186597259605, "test_acc1": 76.56800247802734, "test_acc5": 93.1460024609375, "epoch": 191, "n_parameters": 14644432} -{"train_lr": 0.002474108515639672, "train_loss": 2.3944999575853347, "test_loss": 0.9429464848602519, "test_acc1": 76.5160025756836, "test_acc5": 93.20000208740234, "epoch": 192, "n_parameters": 14644432} -{"train_lr": 0.002460559985870747, "train_loss": 2.3998017702817918, "test_loss": 0.9555521394838306, "test_acc1": 76.22600276306153, "test_acc5": 93.13200248229981, "epoch": 193, "n_parameters": 14644432} -{"train_lr": 0.002446989252750831, "train_loss": 2.383102368593216, "test_loss": 0.94449606866521, "test_acc1": 76.52400267395019, "test_acc5": 93.30400222961426, "epoch": 194, "n_parameters": 14644432} -{"train_lr": 0.002433396977698326, "train_loss": 2.3943027203559875, "test_loss": 0.9425070910331081, "test_acc1": 76.62200286468506, "test_acc5": 93.1660025869751, "epoch": 195, "n_parameters": 14644432} -{"train_lr": 0.0024197838231814215, "train_loss": 2.387287039136887, "test_loss": 0.9402796135229223, "test_acc1": 76.49000259307861, "test_acc5": 93.25400259185791, "epoch": 196, "n_parameters": 14644432} -{"train_lr": 0.002406150452686214, "train_loss": 2.378365732717514, "test_loss": 0.9496210318277863, "test_acc1": 76.38800239776612, "test_acc5": 93.17200267608642, "epoch": 197, "n_parameters": 14644432} -{"train_lr": 0.0023924975306838653, "train_loss": 2.387211174368858, "test_loss": 0.9050649744184578, "test_acc1": 77.22400266082764, "test_acc5": 93.61600245758056, "epoch": 198, "n_parameters": 14644432} -{"train_lr": 0.0023788257225985116, "train_loss": 2.388748096370697, "test_loss": 0.9252007757039631, "test_acc1": 76.7440026473999, "test_acc5": 93.27600256774902, "epoch": 199, "n_parameters": 14644432} -{"train_lr": 0.002365135694774904, "train_loss": 2.3793591475725173, "test_loss": 0.9342659915633061, "test_acc1": 76.87600259765625, "test_acc5": 93.26000229278564, "epoch": 200, "n_parameters": 14644432} -{"train_lr": 0.0023514281144455126, "train_loss": 2.3821985756635664, "test_loss": 0.9156757016830585, "test_acc1": 77.07600268188476, "test_acc5": 93.4640023803711, "epoch": 201, "n_parameters": 14644432} -{"train_lr": 0.002337703649698603, "train_loss": 2.375049981188774, "test_loss": 0.9434572951320339, "test_acc1": 76.53800275512695, "test_acc5": 93.27200265136719, "epoch": 202, "n_parameters": 14644432} -{"train_lr": 0.002323962969445206, 
"train_loss": 2.3667534192085267, "test_loss": 0.9224573082345373, "test_acc1": 76.91600287261963, "test_acc5": 93.40200252502441, "epoch": 203, "n_parameters": 14644432} -{"train_lr": 0.002310206743386666, "train_loss": 2.361911857318878, "test_loss": 0.9325095412924009, "test_acc1": 76.82200309570312, "test_acc5": 93.44200267578125, "epoch": 204, "n_parameters": 14644432} -{"train_lr": 0.002296435641982043, "train_loss": 2.367554174423218, "test_loss": 0.919285978683654, "test_acc1": 76.99200254852295, "test_acc5": 93.53000244689942, "epoch": 205, "n_parameters": 14644432} -{"train_lr": 0.0022826503364153008, "train_loss": 2.3556892956495283, "test_loss": 0.9366802849313792, "test_acc1": 77.0860027355957, "test_acc5": 93.37400245513916, "epoch": 206, "n_parameters": 14644432} -{"train_lr": 0.002268851498562944, "train_loss": 2.3786955635547637, "test_loss": 0.9176601241616642, "test_acc1": 77.23800248413086, "test_acc5": 93.56000274780274, "epoch": 207, "n_parameters": 14644432} -{"train_lr": 0.0022550398009608037, "train_loss": 2.3683229177236558, "test_loss": 0.9299823023817119, "test_acc1": 76.85600276184083, "test_acc5": 93.39200263702392, "epoch": 208, "n_parameters": 14644432} -{"train_lr": 0.002241215916771494, "train_loss": 2.3549082159757613, "test_loss": 0.920084528186742, "test_acc1": 77.02400277893067, "test_acc5": 93.4220024407959, "epoch": 209, "n_parameters": 14644432} -{"train_lr": 0.0022273805197516256, "train_loss": 2.3520528597831727, "test_loss": 0.9281943755991319, "test_acc1": 76.87000277374267, "test_acc5": 93.3660026260376, "epoch": 210, "n_parameters": 14644432} -{"train_lr": 0.0022135342842189523, "train_loss": 2.356937094759941, "test_loss": 0.930694739389069, "test_acc1": 77.0360023614502, "test_acc5": 93.50800238311767, "epoch": 211, "n_parameters": 14644432} -{"train_lr": 0.002199677885019512, "train_loss": 2.356708559346199, "test_loss": 0.9101824006613564, "test_acc1": 77.28800253509522, "test_acc5": 93.6820023828125, "epoch": 212, "n_parameters": 14644432} -{"train_lr": 0.002185811997494599, "train_loss": 2.358726715540886, "test_loss": 0.9096451945164624, "test_acc1": 76.91200228881836, "test_acc5": 93.4520024911499, "epoch": 213, "n_parameters": 14644432} -{"train_lr": 0.0021719372974480025, "train_loss": 2.3488243489027023, "test_loss": 0.9052529475268196, "test_acc1": 77.28000224151612, "test_acc5": 93.702002605896, "epoch": 214, "n_parameters": 14644432} -{"train_lr": 0.002158054461113036, "train_loss": 2.349529097032547, "test_loss": 0.9196497210684944, "test_acc1": 76.99800247619629, "test_acc5": 93.3740026751709, "epoch": 215, "n_parameters": 14644432} -{"train_lr": 0.0021441641651195054, "train_loss": 2.348751469278336, "test_loss": 0.9289583035689943, "test_acc1": 76.89400246490479, "test_acc5": 93.30000234863282, "epoch": 216, "n_parameters": 14644432} -{"train_lr": 0.0021302670864609768, "train_loss": 2.336058083939552, "test_loss": 0.9071564172558925, "test_acc1": 77.11000262512206, "test_acc5": 93.57200248962403, "epoch": 217, "n_parameters": 14644432} -{"train_lr": 0.0021163639024613505, "train_loss": 2.346371303796768, "test_loss": 0.9136426564963425, "test_acc1": 77.14000211151124, "test_acc5": 93.49400260040284, "epoch": 218, "n_parameters": 14644432} -{"train_lr": 0.002102455290742262, "train_loss": 2.334925742983818, "test_loss": 0.911964259822579, "test_acc1": 77.1900028414917, "test_acc5": 93.59400253112793, "epoch": 219, "n_parameters": 14644432} -{"train_lr": 0.0020885419291897665, "train_loss": 2.3319269308805466, "test_loss": 
0.9056697663138894, "test_acc1": 77.31600272460938, "test_acc5": 93.65600208740234, "epoch": 220, "n_parameters": 14644432} -{"train_lr": 0.0020746244959214863, "train_loss": 2.326057073020935, "test_loss": 0.9217295964412829, "test_acc1": 77.03800246551513, "test_acc5": 93.4480023022461, "epoch": 221, "n_parameters": 14644432} -{"train_lr": 0.0020607036692535004, "train_loss": 2.327583364224434, "test_loss": 0.8963831601773992, "test_acc1": 77.47200250671386, "test_acc5": 93.60800268920899, "epoch": 222, "n_parameters": 14644432} -{"train_lr": 0.0020467801276673257, "train_loss": 2.324955131840706, "test_loss": 0.8907753726577058, "test_acc1": 77.71000241638184, "test_acc5": 93.81200255615235, "epoch": 223, "n_parameters": 14644432} -{"train_lr": 0.0020328545497765972, "train_loss": 2.328627539968491, "test_loss": 0.9300611948265749, "test_acc1": 77.16800291015625, "test_acc5": 93.51400290374755, "epoch": 224, "n_parameters": 14644432} -{"train_lr": 0.002018927614294425, "train_loss": 2.3251516570329667, "test_loss": 0.8992707816993489, "test_acc1": 77.4400025744629, "test_acc5": 93.64600296081542, "epoch": 225, "n_parameters": 14644432} -{"train_lr": 0.0020050000000000176, "train_loss": 2.3233235283374785, "test_loss": 0.886970917968189, "test_acc1": 77.75600260406495, "test_acc5": 93.72800258636475, "epoch": 226, "n_parameters": 14644432} -{"train_lr": 0.001991072385705573, "train_loss": 2.3088970940589904, "test_loss": 0.8873695908662151, "test_acc1": 77.5400026626587, "test_acc5": 93.79800264434814, "epoch": 227, "n_parameters": 14644432} -{"train_lr": 0.001977145450223401, "train_loss": 2.3198727637529375, "test_loss": 0.910687634611831, "test_acc1": 77.19600239501953, "test_acc5": 93.48600260009766, "epoch": 228, "n_parameters": 14644432} -{"train_lr": 0.0019632198723327173, "train_loss": 2.3157082497119905, "test_loss": 0.8953483468469452, "test_acc1": 77.4280024206543, "test_acc5": 93.79400246734619, "epoch": 229, "n_parameters": 14644432} -{"train_lr": 0.0019492963307464952, "train_loss": 2.3121706690549853, "test_loss": 0.887924979933921, "test_acc1": 77.66400243682861, "test_acc5": 93.88400247711182, "epoch": 230, "n_parameters": 14644432} -{"train_lr": 0.001935375504078517, "train_loss": 2.301927711081505, "test_loss": 0.8951577182640048, "test_acc1": 77.11800267333984, "test_acc5": 93.82000264556885, "epoch": 231, "n_parameters": 14644432} -{"train_lr": 0.001921458070810235, "train_loss": 2.308832682466507, "test_loss": 0.8850333499996101, "test_acc1": 77.65400244842529, "test_acc5": 93.82600286010742, "epoch": 232, "n_parameters": 14644432} -{"train_lr": 0.0019075447092577794, "train_loss": 2.3004082919836044, "test_loss": 0.8751153253457126, "test_acc1": 77.83400244812012, "test_acc5": 93.9360022543335, "epoch": 233, "n_parameters": 14644432} -{"train_lr": 0.0018936360975386397, "train_loss": 2.3029363746881484, "test_loss": 0.8714386318974635, "test_acc1": 77.86800265777588, "test_acc5": 94.02200284484863, "epoch": 234, "n_parameters": 14644432} -{"train_lr": 0.0018797329135390225, "train_loss": 2.30359673640728, "test_loss": 0.8852894722538835, "test_acc1": 77.89600230285644, "test_acc5": 93.75600248840333, "epoch": 235, "n_parameters": 14644432} -{"train_lr": 0.001865835834880448, "train_loss": 2.3025263266324996, "test_loss": 0.8871886728002745, "test_acc1": 77.6940023260498, "test_acc5": 93.84600278167724, "epoch": 236, "n_parameters": 14644432} -{"train_lr": 0.0018519455388870075, "train_loss": 2.2922758467435838, "test_loss": 0.9123687586363625, "test_acc1": 
77.39600247650147, "test_acc5": 93.58200245635986, "epoch": 237, "n_parameters": 14644432} -{"train_lr": 0.0018380627025520412, "train_loss": 2.2894143171310426, "test_loss": 0.8844378759317538, "test_acc1": 77.53000246704102, "test_acc5": 93.86600230957032, "epoch": 238, "n_parameters": 14644432} -{"train_lr": 0.001824188002505409, "train_loss": 2.3001453861951826, "test_loss": 0.885349144173019, "test_acc1": 77.86200246124268, "test_acc5": 93.94400262878418, "epoch": 239, "n_parameters": 14644432} -{"train_lr": 0.0018103221149804824, "train_loss": 2.2885710789203646, "test_loss": 0.8951462625580675, "test_acc1": 77.60200269348145, "test_acc5": 93.73000269012451, "epoch": 240, "n_parameters": 14644432} -{"train_lr": 0.0017964657157810383, "train_loss": 2.2776868677139284, "test_loss": 0.8768755482400165, "test_acc1": 77.99400246124267, "test_acc5": 93.97000234619141, "epoch": 241, "n_parameters": 14644432} -{"train_lr": 0.0017826194802483815, "train_loss": 2.27923393945694, "test_loss": 0.8767980672419071, "test_acc1": 77.99400250610351, "test_acc5": 93.8840023135376, "epoch": 242, "n_parameters": 14644432} -{"train_lr": 0.0017687840832285521, "train_loss": 2.2798223489284517, "test_loss": 0.8718501233002719, "test_acc1": 77.94800252716064, "test_acc5": 94.06200282623291, "epoch": 243, "n_parameters": 14644432} -{"train_lr": 0.0017549601990392034, "train_loss": 2.2792635464191435, "test_loss": 0.8819670710055267, "test_acc1": 77.99000239562989, "test_acc5": 93.86200225769043, "epoch": 244, "n_parameters": 14644432} -{"train_lr": 0.001741148501437039, "train_loss": 2.278652398443222, "test_loss": 0.8828468162785558, "test_acc1": 77.86200251434326, "test_acc5": 93.81000287658692, "epoch": 245, "n_parameters": 14644432} -{"train_lr": 0.0017273496635846672, "train_loss": 2.2834743703365326, "test_loss": 0.8853334177504567, "test_acc1": 77.8220024685669, "test_acc5": 93.77200234680176, "epoch": 246, "n_parameters": 14644432} -{"train_lr": 0.0017135643580179704, "train_loss": 2.279866909790039, "test_loss": 0.8767632833298515, "test_acc1": 77.78600231506347, "test_acc5": 93.97000243591309, "epoch": 247, "n_parameters": 14644432} -{"train_lr": 0.0016997932566133241, "train_loss": 2.273913697910309, "test_loss": 0.8682133217944819, "test_acc1": 78.17800266357422, "test_acc5": 94.02000256347657, "epoch": 248, "n_parameters": 14644432} -{"train_lr": 0.0016860370305547992, "train_loss": 2.262735151386261, "test_loss": 0.8892561600488775, "test_acc1": 77.79200256500245, "test_acc5": 93.82400277008057, "epoch": 249, "n_parameters": 14644432} -{"train_lr": 0.00167229635030139, "train_loss": 2.2575932300567625, "test_loss": 0.8664558065288207, "test_acc1": 78.03400239837646, "test_acc5": 93.97600236083984, "epoch": 250, "n_parameters": 14644432} -{"train_lr": 0.0016585718855544908, "train_loss": 2.2609459050416945, "test_loss": 0.8785623686716837, "test_acc1": 77.96600273101807, "test_acc5": 94.10600258636475, "epoch": 251, "n_parameters": 14644432} -{"train_lr": 0.0016448643052251444, "train_loss": 2.2620825137615204, "test_loss": 0.8571217941010699, "test_acc1": 78.34800234222412, "test_acc5": 94.16400256958008, "epoch": 252, "n_parameters": 14644432} -{"train_lr": 0.0016311742774014707, "train_loss": 2.252650984120369, "test_loss": 0.8800082842216772, "test_acc1": 78.08000234680176, "test_acc5": 93.88200255340575, "epoch": 253, "n_parameters": 14644432} -{"train_lr": 0.0016175024693161407, "train_loss": 2.2549595155477524, "test_loss": 0.8671156869215124, "test_acc1": 78.26400232818604, "test_acc5": 
94.01800230438232, "epoch": 254, "n_parameters": 14644432} -{"train_lr": 0.0016038495473138241, "train_loss": 2.2433535368442534, "test_loss": 0.8646712881677291, "test_acc1": 78.19800217315674, "test_acc5": 94.09400249145507, "epoch": 255, "n_parameters": 14644432} -{"train_lr": 0.001590216176818559, "train_loss": 2.2521208713531493, "test_loss": 0.870599228231346, "test_acc1": 78.27600234832764, "test_acc5": 93.99000234313965, "epoch": 256, "n_parameters": 14644432} -{"train_lr": 0.0015766030223016928, "train_loss": 2.253848235654831, "test_loss": 0.8585249021211091, "test_acc1": 78.23200259094239, "test_acc5": 94.17600243041993, "epoch": 257, "n_parameters": 14644432} -{"train_lr": 0.0015630107472491771, "train_loss": 2.2463192692279814, "test_loss": 0.8674441720632946, "test_acc1": 78.38800227874756, "test_acc5": 94.02200239440918, "epoch": 258, "n_parameters": 14644432} -{"train_lr": 0.0015494400141292598, "train_loss": 2.244365702223778, "test_loss": 0.85308832740959, "test_acc1": 78.37400268249512, "test_acc5": 94.2080022265625, "epoch": 259, "n_parameters": 14644432} -{"train_lr": 0.001535891484360313, "train_loss": 2.2431818103551864, "test_loss": 0.8562104149776346, "test_acc1": 78.25200245178223, "test_acc5": 94.12400242492676, "epoch": 260, "n_parameters": 14644432} -{"train_lr": 0.0015223658182786706, "train_loss": 2.2461932315826414, "test_loss": 0.8431256349910708, "test_acc1": 78.66000251831055, "test_acc5": 94.32400263977051, "epoch": 261, "n_parameters": 14644432} -{"train_lr": 0.0015088636751061355, "train_loss": 2.2393710055589677, "test_loss": 0.8561551062499776, "test_acc1": 78.47800272979737, "test_acc5": 94.2560024533081, "epoch": 262, "n_parameters": 14644432} -{"train_lr": 0.0014953857129180808, "train_loss": 2.230603923487663, "test_loss": 0.8620811376501533, "test_acc1": 78.25400242858886, "test_acc5": 94.15800279602051, "epoch": 263, "n_parameters": 14644432} -{"train_lr": 0.001481932588611488, "train_loss": 2.228229825925827, "test_loss": 0.8506314112421345, "test_acc1": 78.63400229766846, "test_acc5": 94.30200260406494, "epoch": 264, "n_parameters": 14644432} -{"train_lr": 0.001468504957872541, "train_loss": 2.231067123198509, "test_loss": 0.8458955384352628, "test_acc1": 78.5500025466919, "test_acc5": 94.28800241760254, "epoch": 265, "n_parameters": 14644432} -{"train_lr": 0.0014551034751450972, "train_loss": 2.226256445837021, "test_loss": 0.8449154277058208, "test_acc1": 78.65200240692138, "test_acc5": 94.19600252197266, "epoch": 266, "n_parameters": 14644432} -{"train_lr": 0.0014417287935984719, "train_loss": 2.2255757328510284, "test_loss": 0.8434030308442957, "test_acc1": 78.71200238616943, "test_acc5": 94.18800266479492, "epoch": 267, "n_parameters": 14644432} -{"train_lr": 0.0014283815650957576, "train_loss": 2.2236312935829163, "test_loss": 0.8555197240236927, "test_acc1": 78.37200218322754, "test_acc5": 94.17200243927002, "epoch": 268, "n_parameters": 14644432} -{"train_lr": 0.00141506244016212, "train_loss": 2.2247671367645263, "test_loss": 0.84219566360116, "test_acc1": 78.74000246887206, "test_acc5": 94.31200286834716, "epoch": 269, "n_parameters": 14644432} -{"train_lr": 0.0014017720679528809, "train_loss": 2.2103102148294447, "test_loss": 0.8480033333248952, "test_acc1": 78.62600274169922, "test_acc5": 94.30400222686768, "epoch": 270, "n_parameters": 14644432} -{"train_lr": 0.001388511096221964, "train_loss": 2.2128832716941833, "test_loss": 0.8351058225859614, "test_acc1": 78.78200262451172, "test_acc5": 94.42000243408204, "epoch": 271, 
"n_parameters": 14644432} -{"train_lr": 0.0013752801712905223, "train_loss": 2.2181834563493728, "test_loss": 0.8404164912507814, "test_acc1": 78.85200275299073, "test_acc5": 94.30600231658936, "epoch": 272, "n_parameters": 14644432} -{"train_lr": 0.0013620799380151495, "train_loss": 2.2023749286174774, "test_loss": 0.8346509056932786, "test_acc1": 78.71000279205322, "test_acc5": 94.35800251373291, "epoch": 273, "n_parameters": 14644432} -{"train_lr": 0.0013489110397565372, "train_loss": 2.2130212624311447, "test_loss": 0.838735823683879, "test_acc1": 78.91200221130372, "test_acc5": 94.41800239471435, "epoch": 274, "n_parameters": 14644432} -{"train_lr": 0.0013357741183482558, "train_loss": 2.2070435631990435, "test_loss": 0.8311163844431148, "test_acc1": 78.96800230895997, "test_acc5": 94.39600234069825, "epoch": 275, "n_parameters": 14644432} -{"train_lr": 0.0013226698140652842, "train_loss": 2.200695264816284, "test_loss": 0.8467198667280814, "test_acc1": 78.79200258270264, "test_acc5": 94.20800246887207, "epoch": 276, "n_parameters": 14644432} -{"train_lr": 0.001309598765592993, "train_loss": 2.2084183532953263, "test_loss": 0.8457542245878893, "test_acc1": 78.61000237823487, "test_acc5": 94.31400223236083, "epoch": 277, "n_parameters": 14644432} -{"train_lr": 0.0012965616099957775, "train_loss": 2.2004794236660006, "test_loss": 0.8399940851856681, "test_acc1": 78.90800240386963, "test_acc5": 94.51600244781494, "epoch": 278, "n_parameters": 14644432} -{"train_lr": 0.0012835589826862073, "train_loss": 2.196091612625122, "test_loss": 0.8302350745481604, "test_acc1": 78.8860024230957, "test_acc5": 94.44600294769288, "epoch": 279, "n_parameters": 14644432} -{"train_lr": 0.0012705915173940611, "train_loss": 2.1859992015361787, "test_loss": 0.8411792985656682, "test_acc1": 78.90600252838135, "test_acc5": 94.37600238037109, "epoch": 280, "n_parameters": 14644432} -{"train_lr": 0.0012576598461352462, "train_loss": 2.193873863005638, "test_loss": 0.8340215476996758, "test_acc1": 79.00200268798828, "test_acc5": 94.34800238311767, "epoch": 281, "n_parameters": 14644432} -{"train_lr": 0.0012447645991812122, "train_loss": 2.184743403983116, "test_loss": 0.8454755358397961, "test_acc1": 78.86400270874023, "test_acc5": 94.30800258392334, "epoch": 282, "n_parameters": 14644432} -{"train_lr": 0.001231906405028045, "train_loss": 2.1971245656728744, "test_loss": 0.8335827700793743, "test_acc1": 78.91400228668213, "test_acc5": 94.42800248687745, "epoch": 283, "n_parameters": 14644432} -{"train_lr": 0.0012190858903660415, "train_loss": 2.181000202894211, "test_loss": 0.829011498566936, "test_acc1": 78.96800255737304, "test_acc5": 94.450002633667, "epoch": 284, "n_parameters": 14644432} -{"train_lr": 0.0012063036800490023, "train_loss": 2.1880723589897157, "test_loss": 0.8235121176523321, "test_acc1": 79.2260026324463, "test_acc5": 94.48400240905762, "epoch": 285, "n_parameters": 14644432} -{"train_lr": 0.0011935603970637625, "train_loss": 2.1812364821195604, "test_loss": 0.8181308069649864, "test_acc1": 79.11400271575928, "test_acc5": 94.67200244781495, "epoch": 286, "n_parameters": 14644432} -{"train_lr": 0.0011808566625000356, "train_loss": 2.175161252427101, "test_loss": 0.8364557731239235, "test_acc1": 78.96000248382569, "test_acc5": 94.43400229888915, "epoch": 287, "n_parameters": 14644432} -{"train_lr": 0.0011681930955198627, "train_loss": 2.165868051576614, "test_loss": 0.8188688305809217, "test_acc1": 79.2420026940918, "test_acc5": 94.59800282714843, "epoch": 288, "n_parameters": 14644432} 
-{"train_lr": 0.0011555703133276894, "train_loss": 2.165940290641785, "test_loss": 0.818586403175312, "test_acc1": 79.20400211181641, "test_acc5": 94.64600227752686, "epoch": 289, "n_parameters": 14644432} -{"train_lr": 0.0011429889311400574, "train_loss": 2.1603198742866514, "test_loss": 0.8140084739117062, "test_acc1": 79.26800267669678, "test_acc5": 94.59600220428467, "epoch": 290, "n_parameters": 14644432} -{"train_lr": 0.0011304495621557978, "train_loss": 2.159823133444786, "test_loss": 0.8171816941569833, "test_acc1": 79.22600266052245, "test_acc5": 94.73000230560302, "epoch": 291, "n_parameters": 14644432} -{"train_lr": 0.0011179528175260622, "train_loss": 2.1648028638839723, "test_loss": 0.834381979835384, "test_acc1": 79.14400225891113, "test_acc5": 94.57400260253907, "epoch": 292, "n_parameters": 14644432} -{"train_lr": 0.0011054993063245714, "train_loss": 2.1639160779953004, "test_loss": 0.824189857524984, "test_acc1": 79.20800257141113, "test_acc5": 94.6520021963501, "epoch": 293, "n_parameters": 14644432} -{"train_lr": 0.0010930896355179074, "train_loss": 2.1562773304700853, "test_loss": 0.8145915914107772, "test_acc1": 79.37400287200927, "test_acc5": 94.61800250030518, "epoch": 294, "n_parameters": 14644432} -{"train_lr": 0.0010807244099358875, "train_loss": 2.154001876950264, "test_loss": 0.8101970489849063, "test_acc1": 79.49800267089844, "test_acc5": 94.73400282592773, "epoch": 295, "n_parameters": 14644432} -{"train_lr": 0.0010684042322421535, "train_loss": 2.158974649429321, "test_loss": 0.8231486338464653, "test_acc1": 79.23000241577148, "test_acc5": 94.4180023602295, "epoch": 296, "n_parameters": 14644432} -{"train_lr": 0.0010561297029048062, "train_loss": 2.160519556903839, "test_loss": 0.8175181376145166, "test_acc1": 79.2280026409912, "test_acc5": 94.64600280456543, "epoch": 297, "n_parameters": 14644432} -{"train_lr": 0.0010439014201670813, "train_loss": 2.1489318937540056, "test_loss": 0.8193365938084967, "test_acc1": 79.31800271392822, "test_acc5": 94.70200275085449, "epoch": 298, "n_parameters": 14644432} -{"train_lr": 0.0010317199800182345, "train_loss": 2.1434435996770858, "test_loss": 0.8125920547720265, "test_acc1": 79.54200254882812, "test_acc5": 94.67000265136718, "epoch": 299, "n_parameters": 14644432} -{"train_lr": 0.0010195859761644697, "train_loss": 2.1451399575471877, "test_loss": 0.8076981636969482, "test_acc1": 79.38000256072998, "test_acc5": 94.78800253723145, "epoch": 300, "n_parameters": 14644432} -{"train_lr": 0.0010075000000000067, "train_loss": 2.1418717903137208, "test_loss": 0.8149674271836, "test_acc1": 79.43200291229248, "test_acc5": 94.65800252593993, "epoch": 301, "n_parameters": 14644432} -{"train_lr": 0.000995462640578269, "train_loss": 2.1443246034145353, "test_loss": 0.7994964681565762, "test_acc1": 79.80000229064942, "test_acc5": 94.73600243133545, "epoch": 302, "n_parameters": 14644432} -{"train_lr": 0.0009834744845832106, "train_loss": 2.138472586917877, "test_loss": 0.8192075681598747, "test_acc1": 79.29000241485596, "test_acc5": 94.55000245330811, "epoch": 303, "n_parameters": 14644432} -{"train_lr": 0.0009715361163006195, "train_loss": 2.1268363349437713, "test_loss": 0.8041115828296718, "test_acc1": 79.62000245178223, "test_acc5": 94.72200245666504, "epoch": 304, "n_parameters": 14644432} -{"train_lr": 0.000959648117589686, "train_loss": 2.1399843096256257, "test_loss": 0.8017512128195342, "test_acc1": 79.76000257385255, "test_acc5": 94.80800248352051, "epoch": 305, "n_parameters": 14644432} -{"train_lr": 
0.0009478110678547599, "train_loss": 2.1292216998338698, "test_loss": 0.791861147784135, "test_acc1": 79.95000232879639, "test_acc5": 94.88000250579834, "epoch": 306, "n_parameters": 14644432} -{"train_lr": 0.0009360255440169043, "train_loss": 2.119354291796684, "test_loss": 0.7883714566774228, "test_acc1": 79.99200276153564, "test_acc5": 94.84600237731934, "epoch": 307, "n_parameters": 14644432} -{"train_lr": 0.0009242921204859447, "train_loss": 2.117926901960373, "test_loss": 0.7924940831520978, "test_acc1": 79.8320025012207, "test_acc5": 94.9440023034668, "epoch": 308, "n_parameters": 14644432} -{"train_lr": 0.0009126113691323534, "train_loss": 2.117405355811119, "test_loss": 0.7978518311591709, "test_acc1": 79.84200246246338, "test_acc5": 94.792002522583, "epoch": 309, "n_parameters": 14644432} -{"train_lr": 0.0009009838592595214, "train_loss": 2.126254099011421, "test_loss": 0.7906534095459125, "test_acc1": 79.88200264953613, "test_acc5": 94.832002315979, "epoch": 310, "n_parameters": 14644432} -{"train_lr": 0.0008894101575758612, "train_loss": 2.1081273740291597, "test_loss": 0.7933799869873944, "test_acc1": 79.72400230316163, "test_acc5": 94.83800261474609, "epoch": 311, "n_parameters": 14644432} -{"train_lr": 0.0008778908281672491, "train_loss": 2.112644783568382, "test_loss": 0.7869679108262062, "test_acc1": 79.97400258544921, "test_acc5": 94.90800248687744, "epoch": 312, "n_parameters": 14644432} -{"train_lr": 0.0008664264324695524, "train_loss": 2.1072550389528275, "test_loss": 0.7921348827726701, "test_acc1": 79.93400266113281, "test_acc5": 94.92000254760742, "epoch": 313, "n_parameters": 14644432} -{"train_lr": 0.0008550175292412688, "train_loss": 2.1080673930883407, "test_loss": 0.780780642567312, "test_acc1": 80.0100025491333, "test_acc5": 94.9440025616455, "epoch": 314, "n_parameters": 14644432} -{"train_lr": 0.0008436646745362156, "train_loss": 2.1049666350603102, "test_loss": 0.7840102634885732, "test_acc1": 79.93200237945557, "test_acc5": 94.87600247833252, "epoch": 315, "n_parameters": 14644432} -{"train_lr": 0.0008323684216765116, "train_loss": 2.102830903124809, "test_loss": 0.7851325676721685, "test_acc1": 80.06200261962891, "test_acc5": 94.91800236724853, "epoch": 316, "n_parameters": 14644432} -{"train_lr": 0.0008211293212256214, "train_loss": 2.1007350898265837, "test_loss": 0.780007224968251, "test_acc1": 79.96200256988526, "test_acc5": 94.96800235565186, "epoch": 317, "n_parameters": 14644432} -{"train_lr": 0.0008099479209613996, "train_loss": 2.094916911673546, "test_loss": 0.7804765775799751, "test_acc1": 80.06400234313965, "test_acc5": 94.9560025756836, "epoch": 318, "n_parameters": 14644432} -{"train_lr": 0.0007988247658495707, "train_loss": 2.1043096370458603, "test_loss": 0.7914079613983631, "test_acc1": 79.90400238189697, "test_acc5": 94.84600272064209, "epoch": 319, "n_parameters": 14644432} -{"train_lr": 0.0007877603980169765, "train_loss": 2.092169943642616, "test_loss": 0.7727471511153614, "test_acc1": 80.18800205566406, "test_acc5": 95.09600259887695, "epoch": 320, "n_parameters": 14644432} -{"train_lr": 0.0007767553567253202, "train_loss": 2.08183091275692, "test_loss": 0.7797033543534139, "test_acc1": 80.07200234527588, "test_acc5": 95.01600241394043, "epoch": 321, "n_parameters": 14644432} -{"train_lr": 0.0007658101783447642, "train_loss": 2.080571981191635, "test_loss": 0.7800328008392278, "test_acc1": 80.26200265808106, "test_acc5": 94.93400239471435, "epoch": 322, "n_parameters": 14644432} -{"train_lr": 0.0007549253963278913, "train_loss": 
2.0901854033231735, "test_loss": 0.7818297588211649, "test_acc1": 80.07400236206054, "test_acc5": 94.92800243896484, "epoch": 323, "n_parameters": 14644432} -{"train_lr": 0.0007441015411836098, "train_loss": 2.077150841140747, "test_loss": 0.7796191691475756, "test_acc1": 80.26000281524658, "test_acc5": 94.88200256500244, "epoch": 324, "n_parameters": 14644432} -{"train_lr": 0.0007333391404513692, "train_loss": 2.0702812563180926, "test_loss": 0.7671845675829578, "test_acc1": 80.31000243835449, "test_acc5": 95.09600219055176, "epoch": 325, "n_parameters": 14644432} -{"train_lr": 0.0007226387186753506, "train_loss": 2.0695504348516462, "test_loss": 0.7757071992930245, "test_acc1": 80.16000279571533, "test_acc5": 95.05800276550293, "epoch": 326, "n_parameters": 14644432} -{"train_lr": 0.0007120007973790458, "train_loss": 2.0755706747531892, "test_loss": 0.7766221479019698, "test_acc1": 80.20000277252197, "test_acc5": 95.10800262298584, "epoch": 327, "n_parameters": 14644432} -{"train_lr": 0.0007014258950397421, "train_loss": 2.073395503854752, "test_loss": 0.7701463431996458, "test_acc1": 80.23800279754639, "test_acc5": 95.21400233337403, "epoch": 328, "n_parameters": 14644432} -{"train_lr": 0.0006909145270632263, "train_loss": 2.0728023956775665, "test_loss": 0.7615720428088132, "test_acc1": 80.46000301574708, "test_acc5": 95.17400239471435, "epoch": 329, "n_parameters": 14644432} -{"train_lr": 0.0006804672057587739, "train_loss": 2.0706855823755266, "test_loss": 0.7740035536972916, "test_acc1": 80.27800222351074, "test_acc5": 95.01000249511719, "epoch": 330, "n_parameters": 14644432} -{"train_lr": 0.0006700844403140784, "train_loss": 2.058548561811447, "test_loss": 0.7599051811677568, "test_acc1": 80.53800224151611, "test_acc5": 95.18400247894287, "epoch": 331, "n_parameters": 14644432} -{"train_lr": 0.0006597667367704799, "train_loss": 2.0644254323482514, "test_loss": 0.768208755289807, "test_acc1": 80.36000234344482, "test_acc5": 95.06800278533936, "epoch": 332, "n_parameters": 14644432} -{"train_lr": 0.0006495145979982786, "train_loss": 2.059019343662262, "test_loss": 0.7680303489460665, "test_acc1": 80.4080023312378, "test_acc5": 95.16800265319824, "epoch": 333, "n_parameters": 14644432} -{"train_lr": 0.0006393285236722668, "train_loss": 2.0649638771772385, "test_loss": 0.7621916219153825, "test_acc1": 80.50600252197266, "test_acc5": 95.16600251739501, "epoch": 334, "n_parameters": 14644432} -{"train_lr": 0.000629209010247336, "train_loss": 2.0491137909173966, "test_loss": 0.7621786799281836, "test_acc1": 80.59400298431396, "test_acc5": 95.10000269317626, "epoch": 335, "n_parameters": 14644432} -{"train_lr": 0.0006191565509343066, "train_loss": 2.047744801616669, "test_loss": 0.7631370436181041, "test_acc1": 80.48200247344971, "test_acc5": 95.23200242248535, "epoch": 336, "n_parameters": 14644432} -{"train_lr": 0.0006091716356758274, "train_loss": 2.047822431254387, "test_loss": 0.7630747591747957, "test_acc1": 80.58200256591797, "test_acc5": 95.15800272857666, "epoch": 337, "n_parameters": 14644432} -{"train_lr": 0.0005992547511226205, "train_loss": 2.037434673976898, "test_loss": 0.7553827433901674, "test_acc1": 80.7180023449707, "test_acc5": 95.29400255035401, "epoch": 338, "n_parameters": 14644432} -{"train_lr": 0.0005894063806096327, "train_loss": 2.046631807041168, "test_loss": 0.7600345598424182, "test_acc1": 80.56000247375488, "test_acc5": 95.25600277404786, "epoch": 339, "n_parameters": 14644432} -{"train_lr": 0.000579627004132555, "train_loss": 2.0433492158174515, 
"test_loss": 0.7534672547789181, "test_acc1": 80.66200242156982, "test_acc5": 95.26000260986328, "epoch": 340, "n_parameters": 14644432} -{"train_lr": 0.0005699170983243841, "train_loss": 2.037170747303963, "test_loss": 0.7682744525372982, "test_acc1": 80.56800252716064, "test_acc5": 95.10400253051758, "epoch": 341, "n_parameters": 14644432} -{"train_lr": 0.0005602771364322523, "train_loss": 2.03823720099926, "test_loss": 0.754180941511603, "test_acc1": 80.79600267456054, "test_acc5": 95.31200246765137, "epoch": 342, "n_parameters": 14644432} -{"train_lr": 0.0005507075882942857, "train_loss": 2.036054890012741, "test_loss": 0.7492879008983865, "test_acc1": 80.78600255004883, "test_acc5": 95.31800293029785, "epoch": 343, "n_parameters": 14644432} -{"train_lr": 0.0005412089203167633, "train_loss": 2.0274003850460054, "test_loss": 0.7552605948465712, "test_acc1": 80.6720025579834, "test_acc5": 95.21000258666992, "epoch": 344, "n_parameters": 14644432} -{"train_lr": 0.0005317815954513637, "train_loss": 2.020665373873711, "test_loss": 0.7500698478782878, "test_acc1": 80.81200295043945, "test_acc5": 95.36400235778808, "epoch": 345, "n_parameters": 14644432} -{"train_lr": 0.0005224260731725992, "train_loss": 2.0282314709186555, "test_loss": 0.752015763126752, "test_acc1": 80.61800251556396, "test_acc5": 95.38000255310058, "epoch": 346, "n_parameters": 14644432} -{"train_lr": 0.00051314280945543, "train_loss": 2.0269103865861893, "test_loss": 0.7484394176041379, "test_acc1": 80.90000280822754, "test_acc5": 95.34400222381592, "epoch": 347, "n_parameters": 14644432} -{"train_lr": 0.0005039322567530305, "train_loss": 2.0186123415470125, "test_loss": 0.7493291249608293, "test_acc1": 80.99000241851806, "test_acc5": 95.25400261779785, "epoch": 348, "n_parameters": 14644432} -{"train_lr": 0.0004947948639747458, "train_loss": 2.010388493347168, "test_loss": 0.7450415649834801, "test_acc1": 80.99800256256104, "test_acc5": 95.35000245605468, "epoch": 349, "n_parameters": 14644432} -{"train_lr": 0.0004857310764642128, "train_loss": 2.014192424893379, "test_loss": 0.7442669864086544, "test_acc1": 80.90200254669189, "test_acc5": 95.38000268035888, "epoch": 350, "n_parameters": 14644432} -{"train_lr": 0.00047674133597763773, "train_loss": 2.0134257645606994, "test_loss": 0.7423504932838327, "test_acc1": 81.0080027581787, "test_acc5": 95.38000268432617, "epoch": 351, "n_parameters": 14644432} -{"train_lr": 0.00046782608066229685, "train_loss": 2.013339639019966, "test_loss": 0.7387743441059309, "test_acc1": 81.07200287628174, "test_acc5": 95.39400269317628, "epoch": 352, "n_parameters": 14644432} -{"train_lr": 0.0004589857450351661, "train_loss": 2.0037132870197296, "test_loss": 0.7447369414217332, "test_acc1": 80.8920024017334, "test_acc5": 95.35800242431641, "epoch": 353, "n_parameters": 14644432} -{"train_lr": 0.000450220759961707, "train_loss": 2.003134480500221, "test_loss": 0.7425402317415265, "test_acc1": 81.04600258270264, "test_acc5": 95.35000260345458, "epoch": 354, "n_parameters": 14644432} -{"train_lr": 0.0004415315526349521, "train_loss": 2.0047036505937577, "test_loss": 0.7373017677489448, "test_acc1": 81.10200251922608, "test_acc5": 95.38400273010254, "epoch": 355, "n_parameters": 14644432} -{"train_lr": 0.0004329185465545854, "train_loss": 1.9968857483506202, "test_loss": 0.7363924697479781, "test_acc1": 81.23200231811524, "test_acc5": 95.41800267272949, "epoch": 356, "n_parameters": 14644432} -{"train_lr": 0.0004243821615063944, "train_loss": 1.9930577622413634, "test_loss": 0.7375234141945839, 
"test_acc1": 81.0920024710083, "test_acc5": 95.40200244140625, "epoch": 357, "n_parameters": 14644432} -{"train_lr": 0.00041592281354172557, "train_loss": 2.002008916926384, "test_loss": 0.7371732496163425, "test_acc1": 81.11600255767823, "test_acc5": 95.48600245544434, "epoch": 358, "n_parameters": 14644432} -{"train_lr": 0.0004075409149572814, "train_loss": 1.9893780679941178, "test_loss": 0.7373416982591152, "test_acc1": 81.08400281127929, "test_acc5": 95.37400243774414, "epoch": 359, "n_parameters": 14644432} -{"train_lr": 0.000399236874274954, "train_loss": 1.983378790307045, "test_loss": 0.7363958729102331, "test_acc1": 81.16000275909424, "test_acc5": 95.38800252960205, "epoch": 360, "n_parameters": 14644432} -{"train_lr": 0.00039101109622197687, "train_loss": 1.9849244111299515, "test_loss": 0.7325157779542839, "test_acc1": 81.27200242675781, "test_acc5": 95.4900027456665, "epoch": 361, "n_parameters": 14644432} -{"train_lr": 0.000382863981711175, "train_loss": 1.9873437984466553, "test_loss": 0.7353764908278689, "test_acc1": 81.23200265563965, "test_acc5": 95.3760023538208, "epoch": 362, "n_parameters": 14644432} -{"train_lr": 0.0003747959278214157, "train_loss": 1.989047701191902, "test_loss": 0.7362166421816629, "test_acc1": 81.18000268035888, "test_acc5": 95.43600240844727, "epoch": 363, "n_parameters": 14644432} -{"train_lr": 0.00036680732777826604, "train_loss": 1.9778188276290893, "test_loss": 0.7249268223257626, "test_acc1": 81.44800260528565, "test_acc5": 95.58400270660401, "epoch": 364, "n_parameters": 14644432} -{"train_lr": 0.00035889857093479767, "train_loss": 1.976019456911087, "test_loss": 0.7254558159586262, "test_acc1": 81.25200241577149, "test_acc5": 95.54800273284913, "epoch": 365, "n_parameters": 14644432} -{"train_lr": 0.00035107004275269313, "train_loss": 1.9704632642030715, "test_loss": 0.7273815013468266, "test_acc1": 81.49000226165772, "test_acc5": 95.48600241088867, "epoch": 366, "n_parameters": 14644432} -{"train_lr": 0.00034332212478335543, "train_loss": 1.9692828476428985, "test_loss": 0.7225918592337299, "test_acc1": 81.56600294616699, "test_acc5": 95.52800266387939, "epoch": 367, "n_parameters": 14644432} -{"train_lr": 0.0003356551946493703, "train_loss": 1.9650333142280578, "test_loss": 0.7211172731921953, "test_acc1": 81.49600255310058, "test_acc5": 95.55600258209229, "epoch": 368, "n_parameters": 14644432} -{"train_lr": 0.0003280696260261078, "train_loss": 1.9663497287034988, "test_loss": 0.7263684535727781, "test_acc1": 81.32800252471924, "test_acc5": 95.50400245941162, "epoch": 369, "n_parameters": 14644432} -{"train_lr": 0.00032056578862347564, "train_loss": 1.962503769469261, "test_loss": 0.7209974260014647, "test_acc1": 81.4200028201294, "test_acc5": 95.56600241973877, "epoch": 370, "n_parameters": 14644432} -{"train_lr": 0.00031314404816792945, "train_loss": 1.9630246537208558, "test_loss": 0.7166317951153306, "test_acc1": 81.6420025717163, "test_acc5": 95.69200231658935, "epoch": 371, "n_parameters": 14644432} -{"train_lr": 0.00030580476638461713, "train_loss": 1.9603365182161332, "test_loss": 0.7173000482075355, "test_acc1": 81.51600236755371, "test_acc5": 95.6300029019165, "epoch": 372, "n_parameters": 14644432} -{"train_lr": 0.0002985483009797873, "train_loss": 1.953730888724327, "test_loss": 0.7166697139687398, "test_acc1": 81.67800251281739, "test_acc5": 95.62200254577637, "epoch": 373, "n_parameters": 14644432} -{"train_lr": 0.00029137500562332014, "train_loss": 1.9626653195619583, "test_loss": 0.7201198146623724, "test_acc1": 
81.5760024017334, "test_acc5": 95.59600281585693, "epoch": 374, "n_parameters": 14644432} -{"train_lr": 0.000284285229931521, "train_loss": 1.9470620208263396, "test_loss": 0.7197983446804916, "test_acc1": 81.60400261108398, "test_acc5": 95.61800233001709, "epoch": 375, "n_parameters": 14644432} -{"train_lr": 0.00027727931945004304, "train_loss": 1.966079558157921, "test_loss": 0.7140199614360052, "test_acc1": 81.61600269226074, "test_acc5": 95.75800254302979, "epoch": 376, "n_parameters": 14644432} -{"train_lr": 0.00027035761563708795, "train_loss": 1.938252211523056, "test_loss": 0.715643314535127, "test_acc1": 81.70800284851074, "test_acc5": 95.66000231414795, "epoch": 377, "n_parameters": 14644432} -{"train_lr": 0.0002635204558467305, "train_loss": 1.9411064824819564, "test_loss": 0.71343959670733, "test_acc1": 81.7140025857544, "test_acc5": 95.70600260406493, "epoch": 378, "n_parameters": 14644432} -{"train_lr": 0.0002567681733124936, "train_loss": 1.9296714122056962, "test_loss": 0.7121003774159095, "test_acc1": 81.66000265838623, "test_acc5": 95.79600254577636, "epoch": 379, "n_parameters": 14644432} -{"train_lr": 0.0002501010971311009, "train_loss": 1.9440606165885925, "test_loss": 0.7145262933610117, "test_acc1": 81.5400024911499, "test_acc5": 95.75200235321044, "epoch": 380, "n_parameters": 14644432} -{"train_lr": 0.00024351955224644067, "train_loss": 1.9428935190200807, "test_loss": 0.7144244254073676, "test_acc1": 81.62800247802734, "test_acc5": 95.736002394104, "epoch": 381, "n_parameters": 14644432} -{"train_lr": 0.00023702385943372427, "train_loss": 1.9334639280319215, "test_loss": 0.7097312527544358, "test_acc1": 81.78400255889892, "test_acc5": 95.796002678833, "epoch": 382, "n_parameters": 14644432} -{"train_lr": 0.00023061433528386214, "train_loss": 1.9383020144939422, "test_loss": 0.708961403764346, "test_acc1": 81.85200268707275, "test_acc5": 95.80200275787354, "epoch": 383, "n_parameters": 14644432} -{"train_lr": 0.00022429129218801117, "train_loss": 1.932455139040947, "test_loss": 0.7062309340519064, "test_acc1": 81.73800253509522, "test_acc5": 95.71200228302001, "epoch": 384, "n_parameters": 14644432} -{"train_lr": 0.00021805503832237022, "train_loss": 1.9294837582588196, "test_loss": 0.7119643425240236, "test_acc1": 81.66400258300781, "test_acc5": 95.76800259277344, "epoch": 385, "n_parameters": 14644432} -{"train_lr": 0.00021190587763316056, "train_loss": 1.9337775450468064, "test_loss": 0.7077027879217092, "test_acc1": 81.82600255523681, "test_acc5": 95.7540022164917, "epoch": 386, "n_parameters": 14644432} -{"train_lr": 0.00020584410982180034, "train_loss": 1.9292793531298638, "test_loss": 0.7049576543709811, "test_acc1": 81.8660023300171, "test_acc5": 95.79200269592285, "epoch": 387, "n_parameters": 14644432} -{"train_lr": 0.0001998700303302881, "train_loss": 1.9260736905574798, "test_loss": 0.7089402278994813, "test_acc1": 81.85800289886474, "test_acc5": 95.75800242584228, "epoch": 388, "n_parameters": 14644432} -{"train_lr": 0.00019398393032684917, "train_loss": 1.9110684110879899, "test_loss": 0.707160093328532, "test_acc1": 81.78000233734132, "test_acc5": 95.79600251556397, "epoch": 389, "n_parameters": 14644432} -{"train_lr": 0.0001881860966916756, "train_loss": 1.9181381858825683, "test_loss": 0.7039791431058856, "test_acc1": 81.87600264831543, "test_acc5": 95.83000258544922, "epoch": 390, "n_parameters": 14644432} -{"train_lr": 0.00018247681200301023, "train_loss": 1.9183492998838425, "test_loss": 0.7060225612538702, "test_acc1": 81.87600237945557, 
"test_acc5": 95.80400274047851, "epoch": 391, "n_parameters": 14644432} -{"train_lr": 0.00017685635452333062, "train_loss": 1.9094015722393989, "test_loss": 0.7058947816052857, "test_acc1": 81.82400240447998, "test_acc5": 95.79200257263183, "epoch": 392, "n_parameters": 14644432} -{"train_lr": 0.00017132499818580898, "train_loss": 1.9190484129667282, "test_loss": 0.7041817953919663, "test_acc1": 81.92400254760742, "test_acc5": 95.7960024206543, "epoch": 393, "n_parameters": 14644432} -{"train_lr": 0.00016588301258094182, "train_loss": 1.9094256176471711, "test_loss": 0.7045052730861832, "test_acc1": 81.97600241333008, "test_acc5": 95.83400265960694, "epoch": 394, "n_parameters": 14644432} -{"train_lr": 0.0001605306629434379, "train_loss": 1.9065420143008232, "test_loss": 0.7020648575442678, "test_acc1": 81.9440025946045, "test_acc5": 95.88400239562988, "epoch": 395, "n_parameters": 14644432} -{"train_lr": 0.00015526821013925752, "train_loss": 1.9080480643749238, "test_loss": 0.700748438563417, "test_acc1": 81.98000253601074, "test_acc5": 95.8760024029541, "epoch": 396, "n_parameters": 14644432} -{"train_lr": 0.00015009591065294023, "train_loss": 1.9033146166324615, "test_loss": 0.7006678960340864, "test_acc1": 82.02200237609863, "test_acc5": 95.84200264007568, "epoch": 397, "n_parameters": 14644432} -{"train_lr": 0.00014501401657505492, "train_loss": 1.907233038365841, "test_loss": 0.6994432457448805, "test_acc1": 82.042002633667, "test_acc5": 95.80600249450684, "epoch": 398, "n_parameters": 14644432} -{"train_lr": 0.0001400227755899522, "train_loss": 1.9026837960481644, "test_loss": 0.6962427685365957, "test_acc1": 82.19400271697998, "test_acc5": 95.91600239227294, "epoch": 399, "n_parameters": 14644432} -{"train_lr": 0.00013512243096367772, "train_loss": 1.89314926071167, "test_loss": 0.6944817670566195, "test_acc1": 82.1120025076294, "test_acc5": 95.93800269073486, "epoch": 400, "n_parameters": 14644432} -{"train_lr": 0.00013031322153211376, "train_loss": 1.8945241536974906, "test_loss": 0.6925518642015317, "test_acc1": 82.17800268951416, "test_acc5": 95.90400266296386, "epoch": 401, "n_parameters": 14644432} -{"train_lr": 0.00012559538168934326, "train_loss": 1.8991916872739791, "test_loss": 0.6953297209213761, "test_acc1": 82.02600257965088, "test_acc5": 95.84800234497071, "epoch": 402, "n_parameters": 14644432} -{"train_lr": 0.00012096914137622728, "train_loss": 1.896589391207695, "test_loss": 0.6934648419127745, "test_acc1": 82.09800281219482, "test_acc5": 95.84400260437012, "epoch": 403, "n_parameters": 14644432} -{"train_lr": 0.00011643472606918499, "train_loss": 1.8881960542678833, "test_loss": 0.6939842593582237, "test_acc1": 82.17600231292725, "test_acc5": 95.95000263793945, "epoch": 404, "n_parameters": 14644432} -{"train_lr": 0.00011199235676923019, "train_loss": 1.889745704960823, "test_loss": 0.6947740228737102, "test_acc1": 82.21200266052246, "test_acc5": 95.8740024710083, "epoch": 405, "n_parameters": 14644432} -{"train_lr": 0.00010764224999117014, "train_loss": 1.8901097463846206, "test_loss": 0.6957297031493748, "test_acc1": 82.11400292663575, "test_acc5": 95.92400287780762, "epoch": 406, "n_parameters": 14644432} -{"train_lr": 0.0001033846177530702, "train_loss": 1.8908258248329162, "test_loss": 0.6966066763681524, "test_acc1": 82.23800257843017, "test_acc5": 95.9000027545166, "epoch": 407, "n_parameters": 14644432} -{"train_lr": 9.921966756592387e-05, "train_loss": 1.8854652364611626, "test_loss": 0.6938120665795663, "test_acc1": 82.19000243499755, "test_acc5": 
95.92400276062011, "epoch": 408, "n_parameters": 14644432} -{"train_lr": 9.514760242352498e-05, "train_loss": 1.8947328332185744, "test_loss": 0.6926266899441972, "test_acc1": 82.20600243011475, "test_acc5": 95.96800247650147, "epoch": 409, "n_parameters": 14644432} -{"train_lr": 9.116862079258612e-05, "train_loss": 1.8845370596170425, "test_loss": 0.6912294873858199, "test_acc1": 82.18000250244141, "test_acc5": 95.94000261047363, "epoch": 410, "n_parameters": 14644432} -{"train_lr": 8.728291660305303e-05, "train_loss": 1.8809439139962196, "test_loss": 0.6903324677225422, "test_acc1": 82.25200248016357, "test_acc5": 95.93200285827636, "epoch": 411, "n_parameters": 14644432} -{"train_lr": 8.349067923867126e-05, "train_loss": 1.8804415259122849, "test_loss": 0.6933000970850972, "test_acc1": 82.23400224578857, "test_acc5": 95.96200266021728, "epoch": 412, "n_parameters": 14644432} -{"train_lr": 7.979209352773835e-05, "train_loss": 1.8809592279553413, "test_loss": 0.6900636558147037, "test_acc1": 82.21200263031005, "test_acc5": 95.97000260437012, "epoch": 413, "n_parameters": 14644432} -{"train_lr": 7.618733973410262e-05, "train_loss": 1.88183807336092, "test_loss": 0.6895537821247297, "test_acc1": 82.27400239837647, "test_acc5": 95.94800242523193, "epoch": 414, "n_parameters": 14644432} -{"train_lr": 7.267659354838017e-05, "train_loss": 1.8753189448714256, "test_loss": 0.6920196541091975, "test_acc1": 82.17400250244141, "test_acc5": 95.95600272674561, "epoch": 415, "n_parameters": 14644432} -{"train_lr": 6.926002607938772e-05, "train_loss": 1.8761766585588455, "test_loss": 0.6876350391437026, "test_acc1": 82.31200255889893, "test_acc5": 95.96600276062011, "epoch": 416, "n_parameters": 14644432} -{"train_lr": 6.593780384579997e-05, "train_loss": 1.8643295570850373, "test_loss": 0.6888668497695642, "test_acc1": 82.33200243225097, "test_acc5": 95.97000243469239, "epoch": 417, "n_parameters": 14644432} -{"train_lr": 6.27100887680448e-05, "train_loss": 1.8655161537647247, "test_loss": 0.6881866284153041, "test_acc1": 82.28400251342774, "test_acc5": 95.9740026159668, "epoch": 418, "n_parameters": 14644432} -{"train_lr": 5.957703816040123e-05, "train_loss": 1.8769179522633552, "test_loss": 0.6874386315398356, "test_acc1": 82.30600246612549, "test_acc5": 95.9760023336792, "epoch": 419, "n_parameters": 14644432} -{"train_lr": 5.6538804723335324e-05, "train_loss": 1.8727938571929932, "test_loss": 0.6893886801074532, "test_acc1": 82.25600213195801, "test_acc5": 95.9800026461792, "epoch": 420, "n_parameters": 14644432} -{"train_lr": 5.359553653605782e-05, "train_loss": 1.878782054400444, "test_loss": 0.6891155147596317, "test_acc1": 82.24200238800049, "test_acc5": 95.98200250946044, "epoch": 421, "n_parameters": 14644432} -{"train_lr": 5.0747377049308795e-05, "train_loss": 1.8713161778211593, "test_loss": 0.6863510136437767, "test_acc1": 82.38600237976074, "test_acc5": 95.96800253509521, "epoch": 422, "n_parameters": 14644432} -{"train_lr": 4.799446507836315e-05, "train_loss": 1.8689822697520255, "test_loss": 0.6889928864205584, "test_acc1": 82.30800260803223, "test_acc5": 95.98200266296386, "epoch": 423, "n_parameters": 14644432} -{"train_lr": 4.533693479626563e-05, "train_loss": 1.8660423449039458, "test_loss": 0.6874132905812824, "test_acc1": 82.37200241546631, "test_acc5": 96.02600267181397, "epoch": 424, "n_parameters": 14644432} -{"train_lr": 4.2774915727294984e-05, "train_loss": 1.862619765806198, "test_loss": 0.6870079597129541, "test_acc1": 82.36000240142822, "test_acc5": 96.05000237884522, 
"epoch": 425, "n_parameters": 14644432} -{"train_lr": 4.030853274064522e-05, "train_loss": 1.8696968845129014, "test_loss": 0.6842456449042348, "test_acc1": 82.34200250030517, "test_acc5": 96.02400299224854, "epoch": 426, "n_parameters": 14644432} -{"train_lr": 3.793790604434225e-05, "train_loss": 1.865544829893112, "test_loss": 0.6864264263387989, "test_acc1": 82.39000239349365, "test_acc5": 96.01600248260497, "epoch": 427, "n_parameters": 14644432} -{"train_lr": 3.5663151179389266e-05, "train_loss": 1.865717422056198, "test_loss": 0.68479898035088, "test_acc1": 82.41200240478516, "test_acc5": 96.03800267974853, "epoch": 428, "n_parameters": 14644432} -{"train_lr": 3.348437901412699e-05, "train_loss": 1.861180189847946, "test_loss": 0.6857684341204517, "test_acc1": 82.31000281707763, "test_acc5": 96.01600282806396, "epoch": 429, "n_parameters": 14644432} -{"train_lr": 3.14016957388384e-05, "train_loss": 1.869162977194786, "test_loss": 0.6850220426259672, "test_acc1": 82.45600241363526, "test_acc5": 96.0420026852417, "epoch": 430, "n_parameters": 14644432} -{"train_lr": 2.9415202860566895e-05, "train_loss": 1.869455711376667, "test_loss": 0.6844582002171699, "test_acc1": 82.3620027456665, "test_acc5": 96.05400261657715, "epoch": 431, "n_parameters": 14644432} -{"train_lr": 2.7524997198174166e-05, "train_loss": 1.8541804674863815, "test_loss": 0.6841002782697186, "test_acc1": 82.4460024243164, "test_acc5": 96.03800256866455, "epoch": 432, "n_parameters": 14644432} -{"train_lr": 2.5731170877616888e-05, "train_loss": 1.8630968676328659, "test_loss": 0.6869816341820885, "test_acc1": 82.36200239410401, "test_acc5": 96.01800260437011, "epoch": 433, "n_parameters": 14644432} -{"train_lr": 2.403381132745848e-05, "train_loss": 1.870025201356411, "test_loss": 0.6846885754562476, "test_acc1": 82.43600260894776, "test_acc5": 96.06000258026123, "epoch": 434, "n_parameters": 14644432} -{"train_lr": 2.2433001274609897e-05, "train_loss": 1.8606269687891006, "test_loss": 0.6854885698241346, "test_acc1": 82.41000241027832, "test_acc5": 96.05600269592286, "epoch": 435, "n_parameters": 14644432} -{"train_lr": 2.0928818740294644e-05, "train_loss": 1.8667979100227357, "test_loss": 0.6860016251311583, "test_acc1": 82.46800242156982, "test_acc5": 96.05400261230469, "epoch": 436, "n_parameters": 14644432} -{"train_lr": 1.9521337036247088e-05, "train_loss": 1.853089206957817, "test_loss": 0.6839373320998514, "test_acc1": 82.34800254486083, "test_acc5": 96.05800279968261, "epoch": 437, "n_parameters": 14644432} -{"train_lr": 1.8210624761139314e-05, "train_loss": 1.8571117868423461, "test_loss": 0.6828877178623396, "test_acc1": 82.40600291595459, "test_acc5": 96.06200256866455, "epoch": 438, "n_parameters": 14644432} -{"train_lr": 1.6996745797238736e-05, "train_loss": 1.862385506784916, "test_loss": 0.6835613623261452, "test_acc1": 82.40000233215332, "test_acc5": 96.0720024710083, "epoch": 439, "n_parameters": 14644432} -{"train_lr": 1.5879759307294027e-05, "train_loss": 1.8569214275240897, "test_loss": 0.6821257930029841, "test_acc1": 82.38000223236084, "test_acc5": 96.07000243804931, "epoch": 440, "n_parameters": 14644432} -{"train_lr": 1.4859719731650575e-05, "train_loss": 1.8622953770041466, "test_loss": 0.6846911127076429, "test_acc1": 82.39200244049073, "test_acc5": 96.02600273834229, "epoch": 441, "n_parameters": 14644432} -{"train_lr": 1.393667678559817e-05, "train_loss": 1.8542459638118745, "test_loss": 0.6844795793294907, "test_acc1": 82.39000240753174, "test_acc5": 96.02800255737304, "epoch": 442, 
"n_parameters": 14644432} -{"train_lr": 1.3110675456947718e-05, "train_loss": 1.8537878739118576, "test_loss": 0.6837402816642734, "test_acc1": 82.3800022769165, "test_acc5": 96.08200259368897, "epoch": 443, "n_parameters": 14644432} -{"train_lr": 1.2381756003839114e-05, "train_loss": 1.8514744840860367, "test_loss": 0.6822275293662268, "test_acc1": 82.45600226013184, "test_acc5": 96.04400280303955, "epoch": 444, "n_parameters": 14644432} -{"train_lr": 1.1749953952777368e-05, "train_loss": 1.8535707010149955, "test_loss": 0.6825845541761202, "test_acc1": 82.47800252197266, "test_acc5": 96.0400027835083, "epoch": 445, "n_parameters": 14644432} -{"train_lr": 1.1215300096904058e-05, "train_loss": 1.8594912578105927, "test_loss": 0.682269995484282, "test_acc1": 82.45600251647949, "test_acc5": 96.05400288116455, "epoch": 446, "n_parameters": 14644432} -{"train_lr": 1.0777820494492919e-05, "train_loss": 1.8557264672279359, "test_loss": 0.6828777511768481, "test_acc1": 82.39600244384765, "test_acc5": 96.03400245422364, "epoch": 447, "n_parameters": 14644432} -{"train_lr": 1.0437536467683126e-05, "train_loss": 1.8463314975619316, "test_loss": 0.6844738144427538, "test_acc1": 82.4000025643921, "test_acc5": 96.02200297546386, "epoch": 448, "n_parameters": 14644432} -{"train_lr": 1.0194464601437938e-05, "train_loss": 1.8551643808603286, "test_loss": 0.6836049074635786, "test_acc1": 82.37800234680176, "test_acc5": 96.05400266906739, "epoch": 449, "n_parameters": 14644432} diff --git a/cv/classification/repvit/pytorch/logs/repvit_m2_3_distill_300e.txt b/cv/classification/repvit/pytorch/logs/repvit_m2_3_distill_300e.txt deleted file mode 100644 index 0df355c5..00000000 --- a/cv/classification/repvit/pytorch/logs/repvit_m2_3_distill_300e.txt +++ /dev/null @@ -1,300 +0,0 @@ -{"train_lr": 9.999999999999953e-07, "train_loss": 7.028671503543473, "test_loss": 6.947310715010672, "test_acc1": 0.12200000847816467, "test_acc5": 0.5820000323009491, "epoch": 0, "n_parameters": 23674632} -{"train_lr": 9.999999999999953e-07, "train_loss": 7.013530400016611, "test_loss": 6.929137390671355, "test_acc1": 0.1540000113773346, "test_acc5": 0.7180000362682343, "epoch": 1, "n_parameters": 23674632} -{"train_lr": 0.000400799999999992, "train_loss": 6.542853530410001, "test_loss": 5.542588701753905, "test_acc1": 7.124000216064453, "test_acc5": 20.33800058425903, "epoch": 2, "n_parameters": 23674632} -{"train_lr": 0.0008006000000000176, "train_loss": 6.049540851661246, "test_loss": 4.587968978014859, "test_acc1": 15.814000458221436, "test_acc5": 36.264001013336184, "epoch": 3, "n_parameters": 23674632} -{"train_lr": 0.0012004000000000285, "train_loss": 5.598522345129725, "test_loss": 3.7754338374643615, "test_acc1": 25.752000729370117, "test_acc5": 50.32400142730713, "epoch": 4, "n_parameters": 23674632} -{"train_lr": 0.0016001999999999824, "train_loss": 5.069169552896997, "test_loss": 2.9104072844440285, "test_acc1": 38.080000981903076, "test_acc5": 64.13600177001953, "epoch": 5, "n_parameters": 23674632} -{"train_lr": 0.0019986363870807653, "train_loss": 4.570817369660981, "test_loss": 2.5233675435636984, "test_acc1": 44.87200132949829, "test_acc5": 70.72200208496093, "epoch": 6, "n_parameters": 23674632} -{"train_lr": 0.0019980365947861213, "train_loss": 4.18011409480223, "test_loss": 2.230768616903912, "test_acc1": 50.462001322021486, "test_acc5": 75.37600238555908, "epoch": 7, "n_parameters": 23674632} -{"train_lr": 0.0019973279048383337, "train_loss": 3.9541061202303873, "test_loss": 2.0478962127006417, "test_acc1": 
53.52400139007568, "test_acc5": 78.10000245788574, "epoch": 8, "n_parameters": 23674632} -{"train_lr": 0.0019965103949532445, "train_loss": 3.7992727634051056, "test_loss": 1.8585209042736979, "test_acc1": 57.45200161468506, "test_acc5": 81.03600246246337, "epoch": 9, "n_parameters": 23674632} -{"train_lr": 0.0019955841547800715, "train_loss": 3.663231690081475, "test_loss": 1.7752914654486107, "test_acc1": 59.4020017288208, "test_acc5": 82.27200261505126, "epoch": 10, "n_parameters": 23674632} -{"train_lr": 0.0019945492858914116, "train_loss": 3.539731670578988, "test_loss": 1.6790366680784659, "test_acc1": 61.314001546325684, "test_acc5": 83.83400247009277, "epoch": 11, "n_parameters": 23674632} -{"train_lr": 0.0019934059017723614, "train_loss": 3.459473499886805, "test_loss": 1.6353098061500173, "test_acc1": 62.274001620483396, "test_acc5": 84.30200245117187, "epoch": 12, "n_parameters": 23674632} -{"train_lr": 0.0019921541278078774, "train_loss": 3.3684585540176486, "test_loss": 1.5610981012384098, "test_acc1": 63.59800175292969, "test_acc5": 85.29600256652832, "epoch": 13, "n_parameters": 23674632} -{"train_lr": 0.0019907941012690697, "train_loss": 3.314292826657768, "test_loss": 1.5201764063853207, "test_acc1": 64.41400179656982, "test_acc5": 85.90400257659913, "epoch": 14, "n_parameters": 23674632} -{"train_lr": 0.0019893259712981484, "train_loss": 3.2558824573751455, "test_loss": 1.5188083253575093, "test_acc1": 65.14200187957763, "test_acc5": 86.23800253082275, "epoch": 15, "n_parameters": 23674632} -{"train_lr": 0.0019877498988920997, "train_loss": 3.203067158111375, "test_loss": 1.4383029143015544, "test_acc1": 66.49000190673829, "test_acc5": 87.16200285888672, "epoch": 16, "n_parameters": 23674632} -{"train_lr": 0.0019860660568851683, "train_loss": 3.165598945330373, "test_loss": 1.427652822073662, "test_acc1": 66.89400187866211, "test_acc5": 87.38600247283935, "epoch": 17, "n_parameters": 23674632} -{"train_lr": 0.0019842746299293954, "train_loss": 3.1242057206414398, "test_loss": 1.410327848837231, "test_acc1": 66.95400181152344, "test_acc5": 87.53800258605958, "epoch": 18, "n_parameters": 23674632} -{"train_lr": 0.0019823758144750267, "train_loss": 3.083684859384926, "test_loss": 1.3952099678642822, "test_acc1": 67.56400200683593, "test_acc5": 87.82200256774902, "epoch": 19, "n_parameters": 23674632} -{"train_lr": 0.0019803698187486067, "train_loss": 3.0489340134971528, "test_loss": 1.3496633451996427, "test_acc1": 68.47000199432372, "test_acc5": 88.31600267913818, "epoch": 20, "n_parameters": 23674632} -{"train_lr": 0.0019782568627301467, "train_loss": 3.021253827128479, "test_loss": 1.3503189901962425, "test_acc1": 68.55200184570313, "test_acc5": 88.24400251861572, "epoch": 21, "n_parameters": 23674632} -{"train_lr": 0.0019760371781290314, "train_loss": 2.9895761268649648, "test_loss": 1.3097318358945125, "test_acc1": 69.38400220916748, "test_acc5": 88.88400266937256, "epoch": 22, "n_parameters": 23674632} -{"train_lr": 0.001973711008358824, "train_loss": 2.9687505258883027, "test_loss": 1.2973835782119723, "test_acc1": 69.90000208343506, "test_acc5": 88.94800252716064, "epoch": 23, "n_parameters": 23674632} -{"train_lr": 0.0019712786085100717, "train_loss": 2.953759967709045, "test_loss": 1.2848406833681194, "test_acc1": 69.77800217895508, "test_acc5": 89.1800026425171, "epoch": 24, "n_parameters": 23674632} -{"train_lr": 0.001968740245323006, "train_loss": 2.9287071316767275, "test_loss": 1.2575784343661685, "test_acc1": 70.53800224212647, "test_acc5": 89.46000256439208, 
"epoch": 25, "n_parameters": 23674632} -{"train_lr": 0.0019660961971576653, "train_loss": 2.902812722477791, "test_loss": 1.2498558813875371, "test_acc1": 70.5920022341919, "test_acc5": 89.71400251037598, "epoch": 26, "n_parameters": 23674632} -{"train_lr": 0.001963346753963693, "train_loss": 2.884111875431429, "test_loss": 1.2404610420944113, "test_acc1": 70.82200229034424, "test_acc5": 89.55600250915528, "epoch": 27, "n_parameters": 23674632} -{"train_lr": 0.001960492217248555, "train_loss": 2.8716133058690527, "test_loss": 1.267926553885142, "test_acc1": 70.65400250793456, "test_acc5": 89.61800271575927, "epoch": 28, "n_parameters": 23674632} -{"train_lr": 0.001957532900044387, "train_loss": 2.848315531294123, "test_loss": 1.2422792342576114, "test_acc1": 70.85400214447021, "test_acc5": 89.9460024798584, "epoch": 29, "n_parameters": 23674632} -{"train_lr": 0.001954469126873758, "train_loss": 2.828659397556627, "test_loss": 1.2209853336892345, "test_acc1": 71.25400232971191, "test_acc5": 90.130002550354, "epoch": 30, "n_parameters": 23674632} -{"train_lr": 0.0019513012337137013, "train_loss": 2.824760337218011, "test_loss": 1.198929077860984, "test_acc1": 71.57600218231201, "test_acc5": 90.03600248748779, "epoch": 31, "n_parameters": 23674632} -{"train_lr": 0.0019480295679595498, "train_loss": 2.8064687316842694, "test_loss": 1.2024575878273358, "test_acc1": 72.04000226928711, "test_acc5": 90.43400248962402, "epoch": 32, "n_parameters": 23674632} -{"train_lr": 0.001944654488386279, "train_loss": 2.7899218848902736, "test_loss": 1.1921154171454185, "test_acc1": 71.6640024520874, "test_acc5": 90.26200237548828, "epoch": 33, "n_parameters": 23674632} -{"train_lr": 0.0019411763651094522, "train_loss": 2.7834657421834366, "test_loss": 1.156994390442516, "test_acc1": 72.39000257080077, "test_acc5": 90.9040027420044, "epoch": 34, "n_parameters": 23674632} -{"train_lr": 0.0019375955795444654, "train_loss": 2.773014918327046, "test_loss": 1.1691528984317274, "test_acc1": 72.20000227661133, "test_acc5": 90.56600262878418, "epoch": 35, "n_parameters": 23674632} -{"train_lr": 0.0019339125243647028, "train_loss": 2.757252192528223, "test_loss": 1.1503202740215894, "test_acc1": 72.55200240783691, "test_acc5": 90.87400247711182, "epoch": 36, "n_parameters": 23674632} -{"train_lr": 0.0019301276034588161, "train_loss": 2.7465882809947337, "test_loss": 1.1514497980701202, "test_acc1": 72.34600214019775, "test_acc5": 90.91200240020751, "epoch": 37, "n_parameters": 23674632} -{"train_lr": 0.0019262412318859438, "train_loss": 2.7381019534991324, "test_loss": 1.1438853092717403, "test_acc1": 72.83600236480713, "test_acc5": 90.99400238952637, "epoch": 38, "n_parameters": 23674632} -{"train_lr": 0.0019222538358305563, "train_loss": 2.7215652773729997, "test_loss": 1.123619888655164, "test_acc1": 73.09000244995117, "test_acc5": 91.2300024798584, "epoch": 39, "n_parameters": 23674632} -{"train_lr": 0.001918165852555554, "train_loss": 2.7119534005649943, "test_loss": 1.152502892821124, "test_acc1": 72.57800230773925, "test_acc5": 90.95400240264892, "epoch": 40, "n_parameters": 23674632} -{"train_lr": 0.0019139777303543704, "train_loss": 2.7144805499070364, "test_loss": 1.1454694299309542, "test_acc1": 73.03600222930908, "test_acc5": 91.04600301940918, "epoch": 41, "n_parameters": 23674632} -{"train_lr": 0.0019096899285018238, "train_loss": 2.6937548407047487, "test_loss": 1.102821859107776, "test_acc1": 73.36800241607666, "test_acc5": 91.41200261352539, "epoch": 42, "n_parameters": 23674632} -{"train_lr": 
0.0019053029172036347, "train_loss": 2.6890565349305753, "test_loss": 1.1331457896440318, "test_acc1": 73.22400248168945, "test_acc5": 91.30000242767333, "epoch": 43, "n_parameters": 23674632} -{"train_lr": 0.0019008171775451975, "train_loss": 2.675827024985465, "test_loss": 1.1202139092439956, "test_acc1": 73.25800235351562, "test_acc5": 91.38400264709473, "epoch": 44, "n_parameters": 23674632} -{"train_lr": 0.0018962332014382094, "train_loss": 2.667000353145752, "test_loss": 1.1123256108751802, "test_acc1": 73.22000242492676, "test_acc5": 91.29200243255615, "epoch": 45, "n_parameters": 23674632} -{"train_lr": 0.0018915514915674328, "train_loss": 2.6658985841569662, "test_loss": 1.1291604622295408, "test_acc1": 73.60600240600586, "test_acc5": 91.39200261047364, "epoch": 46, "n_parameters": 23674632} -{"train_lr": 0.0018867725613350395, "train_loss": 2.6554124338163745, "test_loss": 1.1377086799704668, "test_acc1": 73.04400237060547, "test_acc5": 91.07400247955323, "epoch": 47, "n_parameters": 23674632} -{"train_lr": 0.0018818969348046717, "train_loss": 2.6505308856638217, "test_loss": 1.0777307362719015, "test_acc1": 73.90400254272461, "test_acc5": 91.79400266174316, "epoch": 48, "n_parameters": 23674632} -{"train_lr": 0.0018769251466436553, "train_loss": 2.6434170987894783, "test_loss": 1.1160355031941875, "test_acc1": 73.5620024194336, "test_acc5": 91.34800255065917, "epoch": 49, "n_parameters": 23674632} -{"train_lr": 0.0018718577420645938, "train_loss": 2.6393211149721503, "test_loss": 1.0805073614147576, "test_acc1": 73.69600252288818, "test_acc5": 91.68000242736817, "epoch": 50, "n_parameters": 23674632} -{"train_lr": 0.0018666952767654964, "train_loss": 2.620102807283878, "test_loss": 1.0535380852719147, "test_acc1": 74.32400227874756, "test_acc5": 91.94800263183593, "epoch": 51, "n_parameters": 23674632} -{"train_lr": 0.0018614383168689135, "train_loss": 2.614571997635275, "test_loss": 1.0994400440743475, "test_acc1": 73.90000248962403, "test_acc5": 91.74000251800537, "epoch": 52, "n_parameters": 23674632} -{"train_lr": 0.001856087438859743, "train_loss": 2.608169463410271, "test_loss": 1.0501775986543207, "test_acc1": 74.37600247650147, "test_acc5": 91.94800260986328, "epoch": 53, "n_parameters": 23674632} -{"train_lr": 0.001850643229521931, "train_loss": 2.6076732861409657, "test_loss": 1.0688474307006055, "test_acc1": 74.30600235900879, "test_acc5": 91.87200262664796, "epoch": 54, "n_parameters": 23674632} -{"train_lr": 0.0018451062858744953, "train_loss": 2.607445873731046, "test_loss": 1.082743859878092, "test_acc1": 74.49000276763915, "test_acc5": 91.90000253601075, "epoch": 55, "n_parameters": 23674632} -{"train_lr": 0.0018394772151057238, "train_loss": 2.604476198434925, "test_loss": 1.0626203686450466, "test_acc1": 74.01200222625732, "test_acc5": 91.95800251098633, "epoch": 56, "n_parameters": 23674632} -{"train_lr": 0.0018337566345065688, "train_loss": 2.5806280537713153, "test_loss": 1.0594278645109048, "test_acc1": 74.3480023751831, "test_acc5": 92.01200234924316, "epoch": 57, "n_parameters": 23674632} -{"train_lr": 0.0018279451714031977, "train_loss": 2.587183403644821, "test_loss": 1.0457321021592978, "test_acc1": 74.58800234527588, "test_acc5": 92.17800254577637, "epoch": 58, "n_parameters": 23674632} -{"train_lr": 0.001822043463088011, "train_loss": 2.5831356053110315, "test_loss": 1.0611413545680768, "test_acc1": 74.27400242218017, "test_acc5": 91.95400249847413, "epoch": 59, "n_parameters": 23674632} -{"train_lr": 0.001816052156749885, "train_loss": 
2.5712812338873063, "test_loss": 1.0671976985353413, "test_acc1": 74.39400213134766, "test_acc5": 91.94600235107421, "epoch": 60, "n_parameters": 23674632} -{"train_lr": 0.001809971909403067, "train_loss": 2.5692195423024833, "test_loss": 1.0408201529221102, "test_acc1": 74.70000259063721, "test_acc5": 92.18800259552002, "epoch": 61, "n_parameters": 23674632} -{"train_lr": 0.0018038033878151612, "train_loss": 2.562958811541541, "test_loss": 1.0472293584184214, "test_acc1": 74.56000242736816, "test_acc5": 92.19200270294189, "epoch": 62, "n_parameters": 23674632} -{"train_lr": 0.001797547268434101, "train_loss": 2.5573378815043935, "test_loss": 1.010281561902075, "test_acc1": 75.28800243591309, "test_acc5": 92.54200247283936, "epoch": 63, "n_parameters": 23674632} -{"train_lr": 0.0017912042373137995, "train_loss": 2.551833665532936, "test_loss": 1.0236831815405325, "test_acc1": 75.1180026486206, "test_acc5": 92.43400251403808, "epoch": 64, "n_parameters": 23674632} -{"train_lr": 0.0017847749900392269, "train_loss": 2.5493268894849064, "test_loss": 1.04915718564933, "test_acc1": 74.87000238311768, "test_acc5": 92.26200237609864, "epoch": 65, "n_parameters": 23674632} -{"train_lr": 0.0017782602316496804, "train_loss": 2.54351332553809, "test_loss": 1.020719919466611, "test_acc1": 74.9320025680542, "test_acc5": 92.57600271026611, "epoch": 66, "n_parameters": 23674632} -{"train_lr": 0.0017716606765619024, "train_loss": 2.539691241644174, "test_loss": 1.0240747427398509, "test_acc1": 75.16600246459961, "test_acc5": 92.31000269348145, "epoch": 67, "n_parameters": 23674632} -{"train_lr": 0.001764977048491505, "train_loss": 2.532698775259711, "test_loss": 1.0065522554020088, "test_acc1": 75.47800234039306, "test_acc5": 92.6160025680542, "epoch": 68, "n_parameters": 23674632} -{"train_lr": 0.0017582100803734976, "train_loss": 2.536707830419548, "test_loss": 1.0414887926343717, "test_acc1": 74.58400263336182, "test_acc5": 92.20600247070313, "epoch": 69, "n_parameters": 23674632} -{"train_lr": 0.001751360514282295, "train_loss": 2.523125802632049, "test_loss": 0.9988815419827447, "test_acc1": 75.53400270874023, "test_acc5": 92.48400263580322, "epoch": 70, "n_parameters": 23674632} -{"train_lr": 0.0017444291013499857, "train_loss": 2.5141956792842093, "test_loss": 1.0291258085406187, "test_acc1": 74.74000248687744, "test_acc5": 92.30400251556397, "epoch": 71, "n_parameters": 23674632} -{"train_lr": 0.0017374166016841382, "train_loss": 2.5191290226581096, "test_loss": 1.045431965792721, "test_acc1": 74.75200258209229, "test_acc5": 92.26200252716065, "epoch": 72, "n_parameters": 23674632} -{"train_lr": 0.0017303237842843022, "train_loss": 2.5023488185698275, "test_loss": 1.0130353067634683, "test_acc1": 75.42600235076904, "test_acc5": 92.61800257202148, "epoch": 73, "n_parameters": 23674632} -{"train_lr": 0.0017231514269578298, "train_loss": 2.504168885300676, "test_loss": 1.0104543430109818, "test_acc1": 75.52200256408692, "test_acc5": 92.61200240234375, "epoch": 74, "n_parameters": 23674632} -{"train_lr": 0.0017159003162346151, "train_loss": 2.5009488232320636, "test_loss": 1.0080303338666756, "test_acc1": 75.41800218719483, "test_acc5": 92.6280025024414, "epoch": 75, "n_parameters": 23674632} -{"train_lr": 0.0017085712472806156, "train_loss": 2.495158234886605, "test_loss": 1.0058160129595886, "test_acc1": 75.35400245758056, "test_acc5": 92.65800270263672, "epoch": 76, "n_parameters": 23674632} -{"train_lr": 0.0017011650238107918, "train_loss": 2.4955507457303963, "test_loss": 1.0094872110269286, 
"test_acc1": 75.58400258728027, "test_acc5": 92.65000245880127, "epoch": 77, "n_parameters": 23674632} -{"train_lr": 0.0016936824580010095, "train_loss": 2.4869383431190877, "test_loss": 1.0088220170953057, "test_acc1": 75.55600233184815, "test_acc5": 92.63200242492675, "epoch": 78, "n_parameters": 23674632} -{"train_lr": 0.001686124370399008, "train_loss": 2.486904711912957, "test_loss": 0.9687424685918924, "test_acc1": 76.2400027658081, "test_acc5": 93.05400259155273, "epoch": 79, "n_parameters": 23674632} -{"train_lr": 0.0016784915898342416, "train_loss": 2.4734913689388836, "test_loss": 1.0196352829084252, "test_acc1": 75.5060023751831, "test_acc5": 92.59000251495361, "epoch": 80, "n_parameters": 23674632} -{"train_lr": 0.0016707849533270642, "train_loss": 2.467465571255135, "test_loss": 1.020131050637274, "test_acc1": 75.34200251281739, "test_acc5": 92.51000257080078, "epoch": 81, "n_parameters": 23674632} -{"train_lr": 0.0016630053059970035, "train_loss": 2.467835176131613, "test_loss": 0.983576074468367, "test_acc1": 75.94200266662598, "test_acc5": 92.88200250396729, "epoch": 82, "n_parameters": 23674632} -{"train_lr": 0.0016551535009701297, "train_loss": 2.4693450159449086, "test_loss": 0.9710261452604424, "test_acc1": 76.16800247100831, "test_acc5": 93.05200258789063, "epoch": 83, "n_parameters": 23674632} -{"train_lr": 0.0016472303992853623, "train_loss": 2.4613067525015366, "test_loss": 0.9901614029976454, "test_acc1": 75.77000264678955, "test_acc5": 92.77200261260987, "epoch": 84, "n_parameters": 23674632} -{"train_lr": 0.001639236869799948, "train_loss": 2.457386907437246, "test_loss": 0.9864572788955588, "test_acc1": 75.84800235961914, "test_acc5": 93.01000257019042, "epoch": 85, "n_parameters": 23674632} -{"train_lr": 0.0016311737890946018, "train_loss": 2.450212701285581, "test_loss": 0.983309224925258, "test_acc1": 75.90000257843018, "test_acc5": 92.95200231018066, "epoch": 86, "n_parameters": 23674632} -{"train_lr": 0.0016230420413769352, "train_loss": 2.4479038351707514, "test_loss": 0.9664611184235775, "test_acc1": 76.29000236633301, "test_acc5": 93.05400243011475, "epoch": 87, "n_parameters": 23674632} -{"train_lr": 0.0016148425183846724, "train_loss": 2.4444479668121355, "test_loss": 0.9743553474545479, "test_acc1": 76.13600242706299, "test_acc5": 92.9120023953247, "epoch": 88, "n_parameters": 23674632} -{"train_lr": 0.0016065761192880512, "train_loss": 2.4402576681973933, "test_loss": 0.9660781616288604, "test_acc1": 76.3020024319458, "test_acc5": 93.06200258789063, "epoch": 89, "n_parameters": 23674632} -{"train_lr": 0.0015982437505908077, "train_loss": 2.441242078916251, "test_loss": 0.9856527340457295, "test_acc1": 75.858002583313, "test_acc5": 92.90600224761963, "epoch": 90, "n_parameters": 23674632} -{"train_lr": 0.0015898463260310446, "train_loss": 2.441713045981767, "test_loss": 0.9680407168061445, "test_acc1": 76.16400255615234, "test_acc5": 92.97400264038086, "epoch": 91, "n_parameters": 23674632} -{"train_lr": 0.0015813847664809546, "train_loss": 2.432839146894898, "test_loss": 0.9619194260149291, "test_acc1": 76.45400231689453, "test_acc5": 93.1460024835205, "epoch": 92, "n_parameters": 23674632} -{"train_lr": 0.001572859999845966, "train_loss": 2.4315952127047487, "test_loss": 0.9713713933121074, "test_acc1": 76.198002338562, "test_acc5": 93.04200243804932, "epoch": 93, "n_parameters": 23674632} -{"train_lr": 0.001564272960962846, "train_loss": 2.417742719634069, "test_loss": 0.9707369469106197, "test_acc1": 76.43800245056153, "test_acc5": 
93.2080025466919, "epoch": 94, "n_parameters": 23674632} -{"train_lr": 0.001555624591497151, "train_loss": 2.4188562472470756, "test_loss": 0.978413288904862, "test_acc1": 76.48200265045166, "test_acc5": 93.1060026626587, "epoch": 95, "n_parameters": 23674632} -{"train_lr": 0.0015469158398399693, "train_loss": 2.4166340805786692, "test_loss": 0.9568610358418841, "test_acc1": 76.5740025918579, "test_acc5": 93.16600244140625, "epoch": 96, "n_parameters": 23674632} -{"train_lr": 0.0015381476610041188, "train_loss": 2.4090757823700337, "test_loss": 0.9444657336130287, "test_acc1": 76.60000250366211, "test_acc5": 93.21800261688233, "epoch": 97, "n_parameters": 23674632} -{"train_lr": 0.001529321016519214, "train_loss": 2.4064858896459795, "test_loss": 0.9709514904428612, "test_acc1": 76.35000258605957, "test_acc5": 93.02600244476318, "epoch": 98, "n_parameters": 23674632} -{"train_lr": 0.0015204368743262477, "train_loss": 2.4104801023440015, "test_loss": 0.9483697584858446, "test_acc1": 76.6300026626587, "test_acc5": 93.16800251495361, "epoch": 99, "n_parameters": 23674632} -{"train_lr": 0.0015114962086715817, "train_loss": 2.3996390219596173, "test_loss": 0.9527620301779473, "test_acc1": 76.65600240722657, "test_acc5": 93.30600230377198, "epoch": 100, "n_parameters": 23674632} -{"train_lr": 0.0015024999999999741, "train_loss": 2.3923949666684576, "test_loss": 0.951590853771477, "test_acc1": 76.84800255767823, "test_acc5": 93.20200233154297, "epoch": 101, "n_parameters": 23674632} -{"train_lr": 0.0014934492348470663, "train_loss": 2.3904296242290264, "test_loss": 0.9532376843871493, "test_acc1": 76.80400269866944, "test_acc5": 93.3380025579834, "epoch": 102, "n_parameters": 23674632} -{"train_lr": 0.0014843449057311761, "train_loss": 2.3950603423859005, "test_loss": 0.9343842174293417, "test_acc1": 77.02200245269775, "test_acc5": 93.53600270355224, "epoch": 103, "n_parameters": 23674632} -{"train_lr": 0.0014751880110446746, "train_loss": 2.3891586826907263, "test_loss": 0.9366757958901651, "test_acc1": 76.83000270935058, "test_acc5": 93.36200271026611, "epoch": 104, "n_parameters": 23674632} -{"train_lr": 0.0014659795549442738, "train_loss": 2.37941605844896, "test_loss": 0.9298215698112141, "test_acc1": 76.84000267730713, "test_acc5": 93.47400241394043, "epoch": 105, "n_parameters": 23674632} -{"train_lr": 0.001456720547240874, "train_loss": 2.3894584278503865, "test_loss": 0.9306312847995397, "test_acc1": 77.18600245635986, "test_acc5": 93.53200243988037, "epoch": 106, "n_parameters": 23674632} -{"train_lr": 0.001447412003289014, "train_loss": 2.379643604683457, "test_loss": 0.9346611445600336, "test_acc1": 77.23200268066407, "test_acc5": 93.36600229797364, "epoch": 107, "n_parameters": 23674632} -{"train_lr": 0.001438054943875488, "train_loss": 2.3682541368748073, "test_loss": 0.953125424683094, "test_acc1": 76.75000258178711, "test_acc5": 93.31400234527588, "epoch": 108, "n_parameters": 23674632} -{"train_lr": 0.0014286503951072642, "train_loss": 2.368850970392128, "test_loss": 0.9232851214932672, "test_acc1": 77.38400248535156, "test_acc5": 93.73200263641357, "epoch": 109, "n_parameters": 23674632} -{"train_lr": 0.001419199388299089, "train_loss": 2.365081130255231, "test_loss": 0.9331681516134378, "test_acc1": 76.99400249481201, "test_acc5": 93.44200246368408, "epoch": 110, "n_parameters": 23674632} -{"train_lr": 0.0014097029598603777, "train_loss": 2.3609060350772766, "test_loss": 0.9151926176114515, "test_acc1": 77.23800245452881, "test_acc5": 93.6040027294922, "epoch": 111, 
"n_parameters": 23674632} -{"train_lr": 0.001400162151181645, "train_loss": 2.360984646707035, "test_loss": 0.9278319956678333, "test_acc1": 77.33200261932373, "test_acc5": 93.52400241058349, "epoch": 112, "n_parameters": 23674632} -{"train_lr": 0.0013905780085198772, "train_loss": 2.354395049712736, "test_loss": 0.9124803712422197, "test_acc1": 77.18400230651855, "test_acc5": 93.55400258514405, "epoch": 113, "n_parameters": 23674632} -{"train_lr": 0.0013809515828844105, "train_loss": 2.3535821861405073, "test_loss": 0.8995381283263365, "test_acc1": 77.70400268707276, "test_acc5": 93.69200285705567, "epoch": 114, "n_parameters": 23674632} -{"train_lr": 0.0013712839299212982, "train_loss": 2.3455048350097654, "test_loss": 0.9108139685157574, "test_acc1": 77.60800238189697, "test_acc5": 93.57000244537353, "epoch": 115, "n_parameters": 23674632} -{"train_lr": 0.0013615761097975877, "train_loss": 2.346665995036193, "test_loss": 0.9189098733618404, "test_acc1": 77.22800229217529, "test_acc5": 93.45600259765625, "epoch": 116, "n_parameters": 23674632} -{"train_lr": 0.0013518291870851622, "train_loss": 2.3415914375742943, "test_loss": 0.927187896023194, "test_acc1": 77.15600273284912, "test_acc5": 93.62200259521484, "epoch": 117, "n_parameters": 23674632} -{"train_lr": 0.0013420442306440433, "train_loss": 2.335092773385566, "test_loss": 0.9117830874341907, "test_acc1": 77.37600263549805, "test_acc5": 93.72000251068116, "epoch": 118, "n_parameters": 23674632} -{"train_lr": 0.001332222313504878, "train_loss": 2.339308992707186, "test_loss": 0.9234430688348684, "test_acc1": 77.44200256835937, "test_acc5": 93.71600252868652, "epoch": 119, "n_parameters": 23674632} -{"train_lr": 0.0013223645127515833, "train_loss": 2.3260304393956988, "test_loss": 0.9049439977741602, "test_acc1": 77.5940024182129, "test_acc5": 93.7300024923706, "epoch": 120, "n_parameters": 23674632} -{"train_lr": 0.0013124719094030866, "train_loss": 2.326747494933607, "test_loss": 0.898953902111812, "test_acc1": 77.64600251983643, "test_acc5": 93.81200275726319, "epoch": 121, "n_parameters": 23674632} -{"train_lr": 0.0013025455882948365, "train_loss": 2.323583811045074, "test_loss": 0.911176670568459, "test_acc1": 77.62600256072999, "test_acc5": 93.78200286651611, "epoch": 122, "n_parameters": 23674632} -{"train_lr": 0.0012925866379597717, "train_loss": 2.3254625921745857, "test_loss": 0.9096861257020271, "test_acc1": 77.77800241638184, "test_acc5": 93.72600247375489, "epoch": 123, "n_parameters": 23674632} -{"train_lr": 0.0012825961505090187, "train_loss": 2.3173446513051323, "test_loss": 0.8950760348728208, "test_acc1": 77.8980024621582, "test_acc5": 93.87400259552003, "epoch": 124, "n_parameters": 23674632} -{"train_lr": 0.001272575221512157, "train_loss": 2.3078848625496806, "test_loss": 0.8893336956248139, "test_acc1": 77.90400240142823, "test_acc5": 94.04600240356446, "epoch": 125, "n_parameters": 23674632} -{"train_lr": 0.0012625249498769983, "train_loss": 2.305432522265936, "test_loss": 0.9014886928101381, "test_acc1": 77.80800250305175, "test_acc5": 93.87400231658935, "epoch": 126, "n_parameters": 23674632} -{"train_lr": 0.001252446437729037, "train_loss": 2.304441943836155, "test_loss": 0.8771884574583082, "test_acc1": 77.918002449646, "test_acc5": 94.12400262115479, "epoch": 127, "n_parameters": 23674632} -{"train_lr": 0.0012423407902906738, "train_loss": 2.3045364604722396, "test_loss": 0.8837747304051211, "test_acc1": 77.80800240112305, "test_acc5": 93.952002522583, "epoch": 128, "n_parameters": 23674632} -{"train_lr": 
0.001232209115760109, "train_loss": 2.297391520189248, "test_loss": 0.8812247354424361, "test_acc1": 77.97800255767822, "test_acc5": 94.04600252197265, "epoch": 129, "n_parameters": 23674632} -{"train_lr": 0.0012220525251895827, "train_loss": 2.298771728476365, "test_loss": 0.8936988566861008, "test_acc1": 77.8540025958252, "test_acc5": 93.87400238433838, "epoch": 130, "n_parameters": 23674632} -{"train_lr": 0.0012118721323636779, "train_loss": 2.2882113532363464, "test_loss": 0.8636000567313397, "test_acc1": 78.42800238189697, "test_acc5": 94.1180025793457, "epoch": 131, "n_parameters": 23674632} -{"train_lr": 0.001201669053677205, "train_loss": 2.2895332947313833, "test_loss": 0.8859650722958825, "test_acc1": 77.99600227905273, "test_acc5": 93.97400249816894, "epoch": 132, "n_parameters": 23674632} -{"train_lr": 0.0011914444080127799, "train_loss": 2.2858335819842814, "test_loss": 0.8848573163603292, "test_acc1": 77.94800249328614, "test_acc5": 93.86000235809327, "epoch": 133, "n_parameters": 23674632} -{"train_lr": 0.0011811993166179768, "train_loss": 2.27832063882471, "test_loss": 0.89191699174769, "test_acc1": 78.00800271179199, "test_acc5": 93.9900025994873, "epoch": 134, "n_parameters": 23674632} -{"train_lr": 0.001170934902982497, "train_loss": 2.2772262986186598, "test_loss": 0.8878307842621298, "test_acc1": 78.04200234954834, "test_acc5": 94.02800273986817, "epoch": 135, "n_parameters": 23674632} -{"train_lr": 0.001160652292715008, "train_loss": 2.2759168817223214, "test_loss": 0.8867087769463207, "test_acc1": 78.26400233612061, "test_acc5": 94.0980026260376, "epoch": 136, "n_parameters": 23674632} -{"train_lr": 0.0011503526134195741, "train_loss": 2.273585062923668, "test_loss": 0.8555671945214272, "test_acc1": 78.48200250061035, "test_acc5": 94.28200272033692, "epoch": 137, "n_parameters": 23674632} -{"train_lr": 0.0011400369945721502, "train_loss": 2.273059729978049, "test_loss": 0.9015664076714804, "test_acc1": 77.58200257904053, "test_acc5": 93.86000274719238, "epoch": 138, "n_parameters": 23674632} -{"train_lr": 0.0011297065673965083, "train_loss": 2.26235218550995, "test_loss": 0.8824909103639198, "test_acc1": 78.25800233032227, "test_acc5": 93.97200265686035, "epoch": 139, "n_parameters": 23674632} -{"train_lr": 0.001119362464740389, "train_loss": 2.258752130561595, "test_loss": 0.8768163273731867, "test_acc1": 78.20400256164551, "test_acc5": 94.09600250823975, "epoch": 140, "n_parameters": 23674632} -{"train_lr": 0.0011090058209513043, "train_loss": 2.261332073729101, "test_loss": 0.8648452141293974, "test_acc1": 78.5260023840332, "test_acc5": 94.29800260162354, "epoch": 141, "n_parameters": 23674632} -{"train_lr": 0.0010986377717519026, "train_loss": 2.2575851801535687, "test_loss": 0.8882467930741382, "test_acc1": 77.89400258911132, "test_acc5": 93.94000240631104, "epoch": 142, "n_parameters": 23674632} -{"train_lr": 0.0010882594541156603, "train_loss": 2.250349061166068, "test_loss": 0.8496538829622846, "test_acc1": 78.77600272064208, "test_acc5": 94.40800244995117, "epoch": 143, "n_parameters": 23674632} -{"train_lr": 0.001077872006141972, "train_loss": 2.239838519208818, "test_loss": 0.858550379109202, "test_acc1": 78.58400250823975, "test_acc5": 94.29600255584717, "epoch": 144, "n_parameters": 23674632} -{"train_lr": 0.0010674765669316437, "train_loss": 2.239097916727348, "test_loss": 0.8626692433926192, "test_acc1": 78.65600238311768, "test_acc5": 94.22600236358643, "epoch": 145, "n_parameters": 23674632} -{"train_lr": 0.0010570742764617334, "train_loss": 
2.2404981018494454, "test_loss": 0.855592544896133, "test_acc1": 78.57000270477295, "test_acc5": 94.39400243804931, "epoch": 146, "n_parameters": 23674632} -{"train_lr": 0.0010466662754605423, "train_loss": 2.2364469693838167, "test_loss": 0.8591165662263379, "test_acc1": 78.74800253997803, "test_acc5": 94.25200271148681, "epoch": 147, "n_parameters": 23674632} -{"train_lr": 0.0010362537052827495, "train_loss": 2.2365117459226664, "test_loss": 0.8536303148350932, "test_acc1": 78.64200249664307, "test_acc5": 94.416002605896, "epoch": 148, "n_parameters": 23674632} -{"train_lr": 0.001025837707783931, "train_loss": 2.2323568923343764, "test_loss": 0.8515519853116889, "test_acc1": 78.79400241119384, "test_acc5": 94.40000236450196, "epoch": 149, "n_parameters": 23674632} -{"train_lr": 0.0010154194251956726, "train_loss": 2.2276509207882564, "test_loss": 0.842156617140228, "test_acc1": 78.8760025491333, "test_acc5": 94.50000259613037, "epoch": 150, "n_parameters": 23674632} -{"train_lr": 0.0010049999999999942, "train_loss": 2.2282166001345995, "test_loss": 0.8374538198113441, "test_acc1": 79.17000242309571, "test_acc5": 94.56200243103028, "epoch": 151, "n_parameters": 23674632} -{"train_lr": 0.0009945805748043264, "train_loss": 2.221568247480549, "test_loss": 0.8349143432622607, "test_acc1": 79.13600255187988, "test_acc5": 94.5000025982666, "epoch": 152, "n_parameters": 23674632} -{"train_lr": 0.0009841622922160565, "train_loss": 2.2106874980491034, "test_loss": 0.8528418246317994, "test_acc1": 78.8200025466919, "test_acc5": 94.34200264282227, "epoch": 153, "n_parameters": 23674632} -{"train_lr": 0.0009737462947172526, "train_loss": 2.211563217947237, "test_loss": 0.8491827487155343, "test_acc1": 78.77600257385254, "test_acc5": 94.45600278015137, "epoch": 154, "n_parameters": 23674632} -{"train_lr": 0.0009633337245394539, "train_loss": 2.2064708630791863, "test_loss": 0.8521235243727764, "test_acc1": 78.7000025088501, "test_acc5": 94.38400249450683, "epoch": 155, "n_parameters": 23674632} -{"train_lr": 0.0009529257235382627, "train_loss": 2.205914626173455, "test_loss": 0.8374005797685999, "test_acc1": 79.16600277740478, "test_acc5": 94.52200237609863, "epoch": 156, "n_parameters": 23674632} -{"train_lr": 0.0009425234330683488, "train_loss": 2.1986750432198567, "test_loss": 0.8289885452073632, "test_acc1": 79.10400240966797, "test_acc5": 94.64800236907959, "epoch": 157, "n_parameters": 23674632} -{"train_lr": 0.000932127993858017, "train_loss": 2.1988314176849326, "test_loss": 0.8308466991240328, "test_acc1": 79.13800236236573, "test_acc5": 94.61400266387939, "epoch": 158, "n_parameters": 23674632} -{"train_lr": 0.0009217405458843423, "train_loss": 2.1936570602379066, "test_loss": 0.830462084010695, "test_acc1": 79.14600242584228, "test_acc5": 94.58000259979248, "epoch": 159, "n_parameters": 23674632} -{"train_lr": 0.0009113622282480768, "train_loss": 2.1901023208285024, "test_loss": 0.8349496882521745, "test_acc1": 79.00000265441895, "test_acc5": 94.38800257934571, "epoch": 160, "n_parameters": 23674632} -{"train_lr": 0.0009009941790486844, "train_loss": 2.192301337750886, "test_loss": 0.8213236744543819, "test_acc1": 79.26400239379883, "test_acc5": 94.75400263641357, "epoch": 161, "n_parameters": 23674632} -{"train_lr": 0.0008906375352596069, "train_loss": 2.174428109559033, "test_loss": 0.85534762150862, "test_acc1": 78.81200237701417, "test_acc5": 94.37600256866455, "epoch": 162, "n_parameters": 23674632} -{"train_lr": 0.0008802934326035345, "train_loss": 2.180521368301458, "test_loss": 
0.8337887813986251, "test_acc1": 79.09400280639649, "test_acc5": 94.61200244049073, "epoch": 163, "n_parameters": 23674632} -{"train_lr": 0.0008699630054278557, "train_loss": 2.180668344564861, "test_loss": 0.8332443055555676, "test_acc1": 79.15600275238037, "test_acc5": 94.54200269226074, "epoch": 164, "n_parameters": 23674632} -{"train_lr": 0.0008596473865804057, "train_loss": 2.1725651730331395, "test_loss": 0.8250848059965805, "test_acc1": 79.22200258972168, "test_acc5": 94.69400259155273, "epoch": 165, "n_parameters": 23674632} -{"train_lr": 0.0008493477072849649, "train_loss": 2.1732553562028802, "test_loss": 0.8171610820361159, "test_acc1": 79.33400261749267, "test_acc5": 94.71200268859863, "epoch": 166, "n_parameters": 23674632} -{"train_lr": 0.0008390650970174828, "train_loss": 2.1616838094737414, "test_loss": 0.8176948528623942, "test_acc1": 79.5280025302124, "test_acc5": 94.78400268280029, "epoch": 167, "n_parameters": 23674632} -{"train_lr": 0.0008288006833820049, "train_loss": 2.1613348435512263, "test_loss": 0.8112125089109847, "test_acc1": 79.70600264648438, "test_acc5": 94.87000252105713, "epoch": 168, "n_parameters": 23674632} -{"train_lr": 0.0008185555919871891, "train_loss": 2.1590069403632177, "test_loss": 0.8111590868934537, "test_acc1": 79.70400239440917, "test_acc5": 94.9780024105835, "epoch": 169, "n_parameters": 23674632} -{"train_lr": 0.000808330946322793, "train_loss": 2.156406996478375, "test_loss": 0.8055071424354207, "test_acc1": 79.77200235290528, "test_acc5": 94.90200253967285, "epoch": 170, "n_parameters": 23674632} -{"train_lr": 0.000798127867636333, "train_loss": 2.1589071449520683, "test_loss": 0.8076449817780292, "test_acc1": 79.854002605896, "test_acc5": 94.80000266479492, "epoch": 171, "n_parameters": 23674632} -{"train_lr": 0.000787947474810457, "train_loss": 2.141394469473097, "test_loss": 0.8021792807249408, "test_acc1": 79.8100025970459, "test_acc5": 94.92000258605957, "epoch": 172, "n_parameters": 23674632} -{"train_lr": 0.0007777908842399036, "train_loss": 2.146198669640566, "test_loss": 0.8205311964407112, "test_acc1": 79.50000254730224, "test_acc5": 94.73800265411377, "epoch": 173, "n_parameters": 23674632} -{"train_lr": 0.00076765920970933, "train_loss": 2.139552868789525, "test_loss": 0.8180694666437127, "test_acc1": 79.6640024862671, "test_acc5": 94.7140022064209, "epoch": 174, "n_parameters": 23674632} -{"train_lr": 0.0007575535622709474, "train_loss": 2.1385381953845872, "test_loss": 0.8058951688986836, "test_acc1": 79.69400260986328, "test_acc5": 94.91000248168945, "epoch": 175, "n_parameters": 23674632} -{"train_lr": 0.0007474750501229816, "train_loss": 2.135537542861333, "test_loss": 0.7982382180564331, "test_acc1": 79.80800252838135, "test_acc5": 94.9400023272705, "epoch": 176, "n_parameters": 23674632} -{"train_lr": 0.0007374247784878159, "train_loss": 2.1265576857028248, "test_loss": 0.7997241337981188, "test_acc1": 79.94000270446777, "test_acc5": 94.95000239837647, "epoch": 177, "n_parameters": 23674632} -{"train_lr": 0.0007274038494909853, "train_loss": 2.1291268262550602, "test_loss": 0.8035071634997925, "test_acc1": 79.77600251831055, "test_acc5": 94.93600275024414, "epoch": 178, "n_parameters": 23674632} -{"train_lr": 0.0007174133620402375, "train_loss": 2.121557547319993, "test_loss": 0.7960153695083025, "test_acc1": 79.96000225311279, "test_acc5": 94.9680025680542, "epoch": 179, "n_parameters": 23674632} -{"train_lr": 0.0007074544117051741, "train_loss": 2.116082024243143, "test_loss": 0.8002377430710829, "test_acc1": 
79.96400241241454, "test_acc5": 94.88800244812012, "epoch": 180, "n_parameters": 23674632} -{"train_lr": 0.000697528090596946, "train_loss": 2.1164597512053835, "test_loss": 0.7944511200555346, "test_acc1": 80.02000253112793, "test_acc5": 95.04200273223877, "epoch": 181, "n_parameters": 23674632} -{"train_lr": 0.0006876354872484032, "train_loss": 2.1164494466652974, "test_loss": 0.7800706257535652, "test_acc1": 80.32600266357422, "test_acc5": 95.18200214172363, "epoch": 182, "n_parameters": 23674632} -{"train_lr": 0.0006777776864951223, "train_loss": 2.103074126564961, "test_loss": 0.7970214548774741, "test_acc1": 79.94000231689454, "test_acc5": 94.94200250579834, "epoch": 183, "n_parameters": 23674632} -{"train_lr": 0.0006679557693559405, "train_loss": 2.1040827207070745, "test_loss": 0.7785147919573567, "test_acc1": 80.30600271270752, "test_acc5": 95.21600247253419, "epoch": 184, "n_parameters": 23674632} -{"train_lr": 0.0006581708129147805, "train_loss": 2.0973862382791024, "test_loss": 0.7798348068062103, "test_acc1": 80.54000268096924, "test_acc5": 95.17600238952637, "epoch": 185, "n_parameters": 23674632} -{"train_lr": 0.0006484238902024427, "train_loss": 2.094101869743124, "test_loss": 0.7815392011155685, "test_acc1": 80.38400243225098, "test_acc5": 95.05400259979248, "epoch": 186, "n_parameters": 23674632} -{"train_lr": 0.0006387160700787323, "train_loss": 2.098712220788002, "test_loss": 0.7789028827100992, "test_acc1": 80.44000246490478, "test_acc5": 95.10200279876709, "epoch": 187, "n_parameters": 23674632} -{"train_lr": 0.0006290484171156341, "train_loss": 2.083750256805492, "test_loss": 0.7807146813720465, "test_acc1": 80.56200262512208, "test_acc5": 95.15800254394532, "epoch": 188, "n_parameters": 23674632} -{"train_lr": 0.000619421991480156, "train_loss": 2.0780949638806563, "test_loss": 0.7893501240195651, "test_acc1": 80.17200255706787, "test_acc5": 95.04000268493652, "epoch": 189, "n_parameters": 23674632} -{"train_lr": 0.0006098378488184041, "train_loss": 2.086560629957395, "test_loss": 0.775267330365199, "test_acc1": 80.55000249084473, "test_acc5": 95.22400242584229, "epoch": 190, "n_parameters": 23674632} -{"train_lr": 0.0006002970401395665, "train_loss": 2.0780565406111697, "test_loss": 0.7779008844359354, "test_acc1": 80.55000263031006, "test_acc5": 95.20200266479492, "epoch": 191, "n_parameters": 23674632} -{"train_lr": 0.0005908006117009122, "train_loss": 2.0762361296646885, "test_loss": 0.7776829188858921, "test_acc1": 80.30200254730225, "test_acc5": 95.24000249267579, "epoch": 192, "n_parameters": 23674632} -{"train_lr": 0.0005813496048927556, "train_loss": 2.0761221809972294, "test_loss": 0.7792747336242235, "test_acc1": 80.6640026022339, "test_acc5": 95.27400250213623, "epoch": 193, "n_parameters": 23674632} -{"train_lr": 0.000571945056124538, "train_loss": 2.0577223720929796, "test_loss": 0.7700564386605313, "test_acc1": 80.60200229064941, "test_acc5": 95.23200258422851, "epoch": 194, "n_parameters": 23674632} -{"train_lr": 0.0005625879967110111, "train_loss": 2.063040064559947, "test_loss": 0.762483549163197, "test_acc1": 80.70400262817382, "test_acc5": 95.35200270141601, "epoch": 195, "n_parameters": 23674632} -{"train_lr": 0.000553279452759151, "train_loss": 2.058568944986299, "test_loss": 0.7612141865785375, "test_acc1": 80.76200258239746, "test_acc5": 95.35800264953613, "epoch": 196, "n_parameters": 23674632} -{"train_lr": 0.000544020445055746, "train_loss": 2.0542113661027543, "test_loss": 0.7675608686086807, "test_acc1": 80.59200260681152, "test_acc5": 
95.30000249694824, "epoch": 197, "n_parameters": 23674632} -{"train_lr": 0.0005348119889552969, "train_loss": 2.052741454123593, "test_loss": 0.7573377701143423, "test_acc1": 80.88600232879638, "test_acc5": 95.40400252380371, "epoch": 198, "n_parameters": 23674632} -{"train_lr": 0.0005256550942687951, "train_loss": 2.0433241033570755, "test_loss": 0.7609199094162746, "test_acc1": 80.87000258209228, "test_acc5": 95.3480023727417, "epoch": 199, "n_parameters": 23674632} -{"train_lr": 0.0005165507651529303, "train_loss": 2.0427952490562347, "test_loss": 0.7659713797497026, "test_acc1": 80.74600233978272, "test_acc5": 95.37000272766113, "epoch": 200, "n_parameters": 23674632} -{"train_lr": 0.0005074999999999948, "train_loss": 2.0417200470571037, "test_loss": 0.7553581987705195, "test_acc1": 81.04600256042481, "test_acc5": 95.4580024118042, "epoch": 201, "n_parameters": 23674632} -{"train_lr": 0.0004985037913283869, "train_loss": 2.0338319914101315, "test_loss": 0.7498747764550375, "test_acc1": 81.00200247253417, "test_acc5": 95.38400241821289, "epoch": 202, "n_parameters": 23674632} -{"train_lr": 0.0004895631256737284, "train_loss": 2.031308743355276, "test_loss": 0.7461879007292517, "test_acc1": 81.11200255401612, "test_acc5": 95.42200227844238, "epoch": 203, "n_parameters": 23674632} -{"train_lr": 0.0004806789834808088, "train_loss": 2.0315475256489717, "test_loss": 0.7526004862491832, "test_acc1": 80.90200259124755, "test_acc5": 95.41200259124756, "epoch": 204, "n_parameters": 23674632} -{"train_lr": 0.00047185233899589926, "train_loss": 2.02053971259857, "test_loss": 0.7453646178949963, "test_acc1": 81.10800247558593, "test_acc5": 95.5260024810791, "epoch": 205, "n_parameters": 23674632} -{"train_lr": 0.0004630841601600537, "train_loss": 2.018731469974863, "test_loss": 0.7470499962906946, "test_acc1": 81.17600264831543, "test_acc5": 95.49000233978272, "epoch": 206, "n_parameters": 23674632} -{"train_lr": 0.00045437540850286774, "train_loss": 2.0169428394709845, "test_loss": 0.7386862715762673, "test_acc1": 81.2460026574707, "test_acc5": 95.60200244720458, "epoch": 207, "n_parameters": 23674632} -{"train_lr": 0.00044572703903712334, "train_loss": 2.0087376983403016, "test_loss": 0.7431443859681939, "test_acc1": 81.14200254058838, "test_acc5": 95.57800254211426, "epoch": 208, "n_parameters": 23674632} -{"train_lr": 0.0004371400001539953, "train_loss": 2.0057549571259607, "test_loss": 0.7381120254708962, "test_acc1": 81.27600235839844, "test_acc5": 95.6040028225708, "epoch": 209, "n_parameters": 23674632} -{"train_lr": 0.0004286152335190424, "train_loss": 2.0073371522551438, "test_loss": 0.7395362268681779, "test_acc1": 81.23800280853271, "test_acc5": 95.56000270568848, "epoch": 210, "n_parameters": 23674632} -{"train_lr": 0.000420153673968981, "train_loss": 2.0014276991454625, "test_loss": 0.7372909027970198, "test_acc1": 81.2420023526001, "test_acc5": 95.58600286804199, "epoch": 211, "n_parameters": 23674632} -{"train_lr": 0.0004117562494092057, "train_loss": 1.9951673041787936, "test_loss": 0.7322124433562611, "test_acc1": 81.40200239471436, "test_acc5": 95.67400272094727, "epoch": 212, "n_parameters": 23674632} -{"train_lr": 0.0004034238807119475, "train_loss": 1.9943688186429007, "test_loss": 0.7329509124498476, "test_acc1": 81.31200252624512, "test_acc5": 95.6500025592041, "epoch": 213, "n_parameters": 23674632} -{"train_lr": 0.0003951574816152834, "train_loss": 1.9890572676519982, "test_loss": 0.7317971325846333, "test_acc1": 81.43600266998291, "test_acc5": 95.71800223968506, "epoch": 
214, "n_parameters": 23674632} -{"train_lr": 0.0003869579586230879, "train_loss": 1.9891120796378472, "test_loss": 0.7308707465841011, "test_acc1": 81.52600256988525, "test_acc5": 95.6840023880005, "epoch": 215, "n_parameters": 23674632} -{"train_lr": 0.00037882621090540595, "train_loss": 1.9854818805456638, "test_loss": 0.7385751524883689, "test_acc1": 81.24000246765137, "test_acc5": 95.65000244018555, "epoch": 216, "n_parameters": 23674632} -{"train_lr": 0.00037076313020005785, "train_loss": 1.9795309277794821, "test_loss": 0.7244955827905373, "test_acc1": 81.71200276000977, "test_acc5": 95.74400268798829, "epoch": 217, "n_parameters": 23674632} -{"train_lr": 0.00036276960071467357, "train_loss": 1.9763569626078712, "test_loss": 0.7314223626233411, "test_acc1": 81.52200247650147, "test_acc5": 95.7740027557373, "epoch": 218, "n_parameters": 23674632} -{"train_lr": 0.0003548464990298402, "train_loss": 1.9744451918380914, "test_loss": 0.7253207257299712, "test_acc1": 81.55000256072998, "test_acc5": 95.75600251312255, "epoch": 219, "n_parameters": 23674632} -{"train_lr": 0.0003469946940029594, "train_loss": 1.9691343160568, "test_loss": 0.7275639499791644, "test_acc1": 81.47400261657715, "test_acc5": 95.80600247467041, "epoch": 220, "n_parameters": 23674632} -{"train_lr": 0.0003392150466729432, "train_loss": 1.960126058773981, "test_loss": 0.726351813328537, "test_acc1": 81.64800245605468, "test_acc5": 95.7100025857544, "epoch": 221, "n_parameters": 23674632} -{"train_lr": 0.0003315084101657573, "train_loss": 1.9579175059064973, "test_loss": 0.719451610111829, "test_acc1": 81.77600255126953, "test_acc5": 95.76000245758057, "epoch": 222, "n_parameters": 23674632} -{"train_lr": 0.0003238756296009559, "train_loss": 1.9539548433084282, "test_loss": 0.7207214160624779, "test_acc1": 81.69000239013671, "test_acc5": 95.76800264221191, "epoch": 223, "n_parameters": 23674632} -{"train_lr": 0.00031631754199895195, "train_loss": 1.9503294451238153, "test_loss": 0.7187537616401007, "test_acc1": 81.79000248809814, "test_acc5": 95.82200273315429, "epoch": 224, "n_parameters": 23674632} -{"train_lr": 0.0003088349761892061, "train_loss": 1.9484935290605234, "test_loss": 0.7161156608525551, "test_acc1": 81.85400249969483, "test_acc5": 95.85200247680665, "epoch": 225, "n_parameters": 23674632} -{"train_lr": 0.00030142875271938044, "train_loss": 1.94992163042418, "test_loss": 0.7117963569859663, "test_acc1": 81.89200227020264, "test_acc5": 95.87400232055664, "epoch": 226, "n_parameters": 23674632} -{"train_lr": 0.00029409968376536603, "train_loss": 1.9439259016995045, "test_loss": 0.7098434396991224, "test_acc1": 82.02000249786377, "test_acc5": 95.92600252014161, "epoch": 227, "n_parameters": 23674632} -{"train_lr": 0.0002868485730421238, "train_loss": 1.9403443149180053, "test_loss": 0.7110286479759397, "test_acc1": 81.95600264526367, "test_acc5": 95.85800248809815, "epoch": 228, "n_parameters": 23674632} -{"train_lr": 0.00027967621571569273, "train_loss": 1.936508745747171, "test_loss": 0.7062922031477545, "test_acc1": 82.04600248565674, "test_acc5": 95.96400263275146, "epoch": 229, "n_parameters": 23674632} -{"train_lr": 0.0002725833983158693, "train_loss": 1.9242371463899512, "test_loss": 0.7152549591141216, "test_acc1": 81.91200267272949, "test_acc5": 95.83200271453858, "epoch": 230, "n_parameters": 23674632} -{"train_lr": 0.0002655708986499856, "train_loss": 1.9270733194433147, "test_loss": 0.7142806261439215, "test_acc1": 81.96000263824463, "test_acc5": 95.82600248901367, "epoch": 231, "n_parameters": 
23674632} -{"train_lr": 0.0002586394857176932, "train_loss": 1.9221196299512608, "test_loss": 0.7098730084570971, "test_acc1": 82.02400259368896, "test_acc5": 95.88600245697022, "epoch": 232, "n_parameters": 23674632} -{"train_lr": 0.00025178991962650487, "train_loss": 1.9178030738751475, "test_loss": 0.7018264888814001, "test_acc1": 82.16600260284424, "test_acc5": 95.92800241241456, "epoch": 233, "n_parameters": 23674632} -{"train_lr": 0.00024502295150852677, "train_loss": 1.9095911363439022, "test_loss": 0.6990771552152706, "test_acc1": 82.15800234344482, "test_acc5": 95.93800279449462, "epoch": 234, "n_parameters": 23674632} -{"train_lr": 0.00023833932343808706, "train_loss": 1.9066613604374927, "test_loss": 0.6988492080320915, "test_acc1": 82.16400270202637, "test_acc5": 95.96800248352051, "epoch": 235, "n_parameters": 23674632} -{"train_lr": 0.0002317397683503139, "train_loss": 1.9098849880585758, "test_loss": 0.7016996666789055, "test_acc1": 82.19800268432617, "test_acc5": 95.98400252716064, "epoch": 236, "n_parameters": 23674632} -{"train_lr": 0.00022522500996078677, "train_loss": 1.9070999146544105, "test_loss": 0.7010150292718952, "test_acc1": 82.17200233245849, "test_acc5": 96.01200255279541, "epoch": 237, "n_parameters": 23674632} -{"train_lr": 0.0002187957626861862, "train_loss": 1.8989386981923422, "test_loss": 0.693379067336068, "test_acc1": 82.33000252380371, "test_acc5": 96.05200267547607, "epoch": 238, "n_parameters": 23674632} -{"train_lr": 0.00021245273156592372, "train_loss": 1.9005813147786328, "test_loss": 0.6985225599597801, "test_acc1": 82.26200255828857, "test_acc5": 95.99600254852295, "epoch": 239, "n_parameters": 23674632} -{"train_lr": 0.00020619661218484483, "train_loss": 1.8897759235448404, "test_loss": 0.6948535290518494, "test_acc1": 82.39200246795654, "test_acc5": 96.06000266296387, "epoch": 240, "n_parameters": 23674632} -{"train_lr": 0.00020002809059693196, "train_loss": 1.8903780047842067, "test_loss": 0.6930544753417824, "test_acc1": 82.51600252563476, "test_acc5": 96.09800244537354, "epoch": 241, "n_parameters": 23674632} -{"train_lr": 0.0001939478432500862, "train_loss": 1.8947295763080927, "test_loss": 0.6956843064928596, "test_acc1": 82.40400248291016, "test_acc5": 96.03600231048584, "epoch": 242, "n_parameters": 23674632} -{"train_lr": 0.00018795653691196918, "train_loss": 1.885340933099115, "test_loss": 0.6878945016386834, "test_acc1": 82.50200270385743, "test_acc5": 96.16800223052978, "epoch": 243, "n_parameters": 23674632} -{"train_lr": 0.0001820548285968152, "train_loss": 1.877378008759422, "test_loss": 0.6890693048974781, "test_acc1": 82.50600276184082, "test_acc5": 96.13200279449462, "epoch": 244, "n_parameters": 23674632} -{"train_lr": 0.00017624336549345505, "train_loss": 1.8794486181829855, "test_loss": 0.6876538705193636, "test_acc1": 82.54000276733399, "test_acc5": 96.13400251220703, "epoch": 245, "n_parameters": 23674632} -{"train_lr": 0.00017052278489430102, "train_loss": 1.866330471518133, "test_loss": 0.6882877308649548, "test_acc1": 82.43200254699707, "test_acc5": 96.12000257415771, "epoch": 246, "n_parameters": 23674632} -{"train_lr": 0.00016489371412549964, "train_loss": 1.8716210417812773, "test_loss": 0.6861813995196964, "test_acc1": 82.60800268218995, "test_acc5": 96.13400245361328, "epoch": 247, "n_parameters": 23674632} -{"train_lr": 0.00015935677047806976, "train_loss": 1.865009898529779, "test_loss": 0.6866794617451502, "test_acc1": 82.66600270965576, "test_acc5": 96.0860025857544, "epoch": 248, "n_parameters": 23674632} 
-{"train_lr": 0.00015391256114029848, "train_loss": 1.8608152890424554, "test_loss": 0.6857708327359322, "test_acc1": 82.60800241699219, "test_acc5": 96.16800257415771, "epoch": 249, "n_parameters": 23674632} -{"train_lr": 0.00014856168313107316, "train_loss": 1.8617713526784658, "test_loss": 0.6906871605438717, "test_acc1": 82.49400259338378, "test_acc5": 96.15000267913818, "epoch": 250, "n_parameters": 23674632} -{"train_lr": 0.00014330472323448312, "train_loss": 1.856742965577127, "test_loss": 0.6859504349078193, "test_acc1": 82.57200233062744, "test_acc5": 96.23000245361328, "epoch": 251, "n_parameters": 23674632} -{"train_lr": 0.00013814225793541067, "train_loss": 1.8564208229656796, "test_loss": 0.6803848850681926, "test_acc1": 82.66600236999511, "test_acc5": 96.2980025805664, "epoch": 252, "n_parameters": 23674632} -{"train_lr": 0.0001330748533563583, "train_loss": 1.8483893078961056, "test_loss": 0.6792835824643121, "test_acc1": 82.75400239074708, "test_acc5": 96.2540024697876, "epoch": 253, "n_parameters": 23674632} -{"train_lr": 0.00012810306519533694, "train_loss": 1.8441892998348133, "test_loss": 0.6810044746733073, "test_acc1": 82.73400236083984, "test_acc5": 96.22800269134521, "epoch": 254, "n_parameters": 23674632} -{"train_lr": 0.00012322743866494126, "train_loss": 1.841813487198046, "test_loss": 0.6791777305982329, "test_acc1": 82.80000282958984, "test_acc5": 96.2640025479126, "epoch": 255, "n_parameters": 23674632} -{"train_lr": 0.00011844850843257583, "train_loss": 1.8385265963159496, "test_loss": 0.6809278167219777, "test_acc1": 82.77400253997803, "test_acc5": 96.2580024508667, "epoch": 256, "n_parameters": 23674632} -{"train_lr": 0.00011376679856178666, "train_loss": 1.8429881056352295, "test_loss": 0.6740502941450386, "test_acc1": 82.77800258789063, "test_acc5": 96.33400252288818, "epoch": 257, "n_parameters": 23674632} -{"train_lr": 0.00010918282245481492, "train_loss": 1.837818184255553, "test_loss": 0.677249933903416, "test_acc1": 82.71400244506836, "test_acc5": 96.23600261413574, "epoch": 258, "n_parameters": 23674632} -{"train_lr": 0.00010469708279631207, "train_loss": 1.8367025311187588, "test_loss": 0.6765622201968323, "test_acc1": 82.79800241363526, "test_acc5": 96.31800253265381, "epoch": 259, "n_parameters": 23674632} -{"train_lr": 0.00010031007149816415, "train_loss": 1.82822202219904, "test_loss": 0.675347691458283, "test_acc1": 82.80400257049561, "test_acc5": 96.31400265167237, "epoch": 260, "n_parameters": 23674632} -{"train_lr": 9.602226964561253e-05, "train_loss": 1.8307636170435866, "test_loss": 0.6784832841632041, "test_acc1": 82.94400241668701, "test_acc5": 96.18400252990723, "epoch": 261, "n_parameters": 23674632} -{"train_lr": 9.183414744444096e-05, "train_loss": 1.8236459384826447, "test_loss": 0.6726572436816765, "test_acc1": 82.9060027005005, "test_acc5": 96.25000249176026, "epoch": 262, "n_parameters": 23674632} -{"train_lr": 8.77461641694416e-05, "train_loss": 1.8174053693977859, "test_loss": 0.6696104135815845, "test_acc1": 82.9860023590088, "test_acc5": 96.36400252197265, "epoch": 263, "n_parameters": 23674632} -{"train_lr": 8.37587681140576e-05, "train_loss": 1.8224526514752593, "test_loss": 0.6701857620342211, "test_acc1": 83.03800240783691, "test_acc5": 96.3180024963379, "epoch": 264, "n_parameters": 23674632} -{"train_lr": 7.98723965411878e-05, "train_loss": 1.8209654480981217, "test_loss": 0.6710484942816424, "test_acc1": 82.97400225677491, "test_acc5": 96.3060022921753, "epoch": 265, "n_parameters": 23674632} -{"train_lr": 
7.608747563528369e-05, "train_loss": 1.811561862788469, "test_loss": 0.6698399673244266, "test_acc1": 82.91600243438721, "test_acc5": 96.35800249450683, "epoch": 266, "n_parameters": 23674632} -{"train_lr": 7.240442045557008e-05, "train_loss": 1.8127867883260398, "test_loss": 0.6671685903248462, "test_acc1": 83.12000257385255, "test_acc5": 96.3700024874878, "epoch": 267, "n_parameters": 23674632} -{"train_lr": 6.882363489054739e-05, "train_loss": 1.8157024249124776, "test_loss": 0.6687131474979899, "test_acc1": 82.99000244415284, "test_acc5": 96.31400234344483, "epoch": 268, "n_parameters": 23674632} -{"train_lr": 6.534551161370524e-05, "train_loss": 1.8100585136041463, "test_loss": 0.6680600790476258, "test_acc1": 82.99200246185303, "test_acc5": 96.30200249603271, "epoch": 269, "n_parameters": 23674632} -{"train_lr": 6.197043204046283e-05, "train_loss": 1.8082292513262264, "test_loss": 0.6707597165509607, "test_acc1": 82.99400237670899, "test_acc5": 96.34800253875733, "epoch": 270, "n_parameters": 23674632} -{"train_lr": 5.8698766286322e-05, "train_loss": 1.8072402023082728, "test_loss": 0.6675704057243738, "test_acc1": 83.11600243682861, "test_acc5": 96.4300025289917, "epoch": 271, "n_parameters": 23674632} -{"train_lr": 5.5530873126305155e-05, "train_loss": 1.8059129172937571, "test_loss": 0.6677711774228197, "test_acc1": 83.10800232421875, "test_acc5": 96.41400230438232, "epoch": 272, "n_parameters": 23674632} -{"train_lr": 5.246709995559324e-05, "train_loss": 1.8020626902520704, "test_loss": 0.6653897110372782, "test_acc1": 83.01800250915527, "test_acc5": 96.3920023425293, "epoch": 273, "n_parameters": 23674632} -{"train_lr": 4.950778275144282e-05, "train_loss": 1.804957490971239, "test_loss": 0.6643984670553243, "test_acc1": 83.14000222412109, "test_acc5": 96.43000223510742, "epoch": 274, "n_parameters": 23674632} -{"train_lr": 4.6653246036330756e-05, "train_loss": 1.7949742732955207, "test_loss": 0.6653525847941637, "test_acc1": 83.16000268127442, "test_acc5": 96.40200254394531, "epoch": 275, "n_parameters": 23674632} -{"train_lr": 4.390380284237578e-05, "train_loss": 1.7963768709882748, "test_loss": 0.6632925446399234, "test_acc1": 83.13600252075196, "test_acc5": 96.41200235839844, "epoch": 276, "n_parameters": 23674632} -{"train_lr": 4.1259754677011786e-05, "train_loss": 1.792266890400653, "test_loss": 0.6652497252957388, "test_acc1": 83.15400266723633, "test_acc5": 96.40000235229492, "epoch": 277, "n_parameters": 23674632} -{"train_lr": 3.8721391489913164e-05, "train_loss": 1.7919875488971635, "test_loss": 0.6622015124356205, "test_acc1": 83.12400241851806, "test_acc5": 96.39600244384765, "epoch": 278, "n_parameters": 23674632} -{"train_lr": 3.628899164120446e-05, "train_loss": 1.7945935445830976, "test_loss": 0.6605893105381366, "test_acc1": 83.25400257751465, "test_acc5": 96.42600264678956, "epoch": 279, "n_parameters": 23674632} -{"train_lr": 3.396282187094542e-05, "train_loss": 1.7899002528590835, "test_loss": 0.6633771292752388, "test_acc1": 83.13200263336182, "test_acc5": 96.40800259735107, "epoch": 280, "n_parameters": 23674632} -{"train_lr": 3.174313726986302e-05, "train_loss": 1.7887012487430747, "test_loss": 0.6597029004020222, "test_acc1": 83.1980024295044, "test_acc5": 96.3900024862671, "epoch": 281, "n_parameters": 23674632} -{"train_lr": 2.9630181251387e-05, "train_loss": 1.7891262778632646, "test_loss": 0.6613493052057244, "test_acc1": 83.11800244781494, "test_acc5": 96.43600250579834, "epoch": 282, "n_parameters": 23674632} -{"train_lr": 2.762418552495418e-05, 
"train_loss": 1.7907932260625368, "test_loss": 0.6589531985421976, "test_acc1": 83.15200259155273, "test_acc5": 96.43800239227295, "epoch": 283, "n_parameters": 23674632} -{"train_lr": 2.572537007060402e-05, "train_loss": 1.7796013446139107, "test_loss": 0.6595599941457763, "test_acc1": 83.22400227752685, "test_acc5": 96.42200241424561, "epoch": 284, "n_parameters": 23674632} -{"train_lr": 2.3933943114847106e-05, "train_loss": 1.78061285165312, "test_loss": 0.6603434763171456, "test_acc1": 83.22000241149902, "test_acc5": 96.45000252166749, "epoch": 285, "n_parameters": 23674632} -{"train_lr": 2.225010110783857e-05, "train_loss": 1.7776048699204774, "test_loss": 0.6619861926319022, "test_acc1": 83.23600267944336, "test_acc5": 96.46000251190186, "epoch": 286, "n_parameters": 23674632} -{"train_lr": 2.0674028701827162e-05, "train_loss": 1.7819798741337778, "test_loss": 0.6575288885470593, "test_acc1": 83.2360025704956, "test_acc5": 96.47400246307373, "epoch": 287, "n_parameters": 23674632} -{"train_lr": 1.9205898730913854e-05, "train_loss": 1.7852935116115711, "test_loss": 0.6590995851791266, "test_acc1": 83.21800246032714, "test_acc5": 96.48200234863282, "epoch": 288, "n_parameters": 23674632} -{"train_lr": 1.784587219209501e-05, "train_loss": 1.7876553010883378, "test_loss": 0.6589689433687564, "test_acc1": 83.15000263336182, "test_acc5": 96.44800247039795, "epoch": 289, "n_parameters": 23674632} -{"train_lr": 1.6594098227605158e-05, "train_loss": 1.7731075771301865, "test_loss": 0.6595228312141967, "test_acc1": 83.23000240875244, "test_acc5": 96.43400237884522, "epoch": 290, "n_parameters": 23674632} -{"train_lr": 1.545071410856787e-05, "train_loss": 1.7811763699082352, "test_loss": 0.6575870276852087, "test_acc1": 83.22800220214843, "test_acc5": 96.46600242492676, "epoch": 291, "n_parameters": 23674632} -{"train_lr": 1.4415845219935669e-05, "train_loss": 1.7857277637524762, "test_loss": 0.6573574195305506, "test_acc1": 83.18400240447998, "test_acc5": 96.43600271697998, "epoch": 292, "n_parameters": 23674632} -{"train_lr": 1.3489605046743441e-05, "train_loss": 1.7791869480380242, "test_loss": 0.6565095283881281, "test_acc1": 83.30400273376465, "test_acc5": 96.46600256347656, "epoch": 293, "n_parameters": 23674632} -{"train_lr": 1.267209516166461e-05, "train_loss": 1.7776049780462095, "test_loss": 0.6575961404790481, "test_acc1": 83.22800250213623, "test_acc5": 96.43600235931396, "epoch": 294, "n_parameters": 23674632} -{"train_lr": 1.1963405213869678e-05, "train_loss": 1.778250541558845, "test_loss": 0.657947035449924, "test_acc1": 83.21800243804931, "test_acc5": 96.45600251556397, "epoch": 295, "n_parameters": 23674632} -{"train_lr": 1.1363612919198828e-05, "train_loss": 1.7759816746393458, "test_loss": 0.6590973856492024, "test_acc1": 83.26600230682374, "test_acc5": 96.48600251556397, "epoch": 296, "n_parameters": 23674632} -{"train_lr": 1.0872784051636045e-05, "train_loss": 1.7778462773080257, "test_loss": 0.6570738648261988, "test_acc1": 83.24000260406494, "test_acc5": 96.50000254943848, "epoch": 297, "n_parameters": 23674632} -{"train_lr": 1.0490972436097075e-05, "train_loss": 1.7771521425194783, "test_loss": 0.6582362476849195, "test_acc1": 83.24400240447999, "test_acc5": 96.4580024081421, "epoch": 298, "n_parameters": 23674632} -{"train_lr": 1.0218219942528797e-05, "train_loss": 1.7794868382428, "test_loss": 0.658030013074026, "test_acc1": 83.30000237792969, "test_acc5": 96.482002444458, "epoch": 299, "n_parameters": 23674632} diff --git 
a/cv/classification/repvit/pytorch/logs/repvit_m2_3_distill_450e.txt b/cv/classification/repvit/pytorch/logs/repvit_m2_3_distill_450e.txt deleted file mode 100644 index 867297c8..00000000 --- a/cv/classification/repvit/pytorch/logs/repvit_m2_3_distill_450e.txt +++ /dev/null @@ -1,450 +0,0 @@ -{"train_lr": 9.999999999999953e-07, "train_loss": 7.028661812190339, "test_loss": 6.9473059538638955, "test_acc1": 0.12200000866413116, "test_acc5": 0.5840000324535369, "epoch": 0, "n_parameters": 23674632} -{"train_lr": 9.999999999999953e-07, "train_loss": 7.0135364792615675, "test_loss": 6.929130725788347, "test_acc1": 0.16600001183509827, "test_acc5": 0.7100000356578827, "epoch": 1, "n_parameters": 23674632} -{"train_lr": 0.000400799999999992, "train_loss": 6.542746398469908, "test_loss": 5.551603456338246, "test_acc1": 7.062000201187134, "test_acc5": 20.224000598602295, "epoch": 2, "n_parameters": 23674632} -{"train_lr": 0.0008006000000000176, "train_loss": 6.048615605758725, "test_loss": 4.568462046709928, "test_acc1": 15.900000462799072, "test_acc5": 36.572001139526364, "epoch": 3, "n_parameters": 23674632} -{"train_lr": 0.0012004000000000285, "train_loss": 5.598474353956852, "test_loss": 3.76579338266994, "test_acc1": 25.68600075942993, "test_acc5": 50.314001300354, "epoch": 4, "n_parameters": 23674632} -{"train_lr": 0.0016001999999999824, "train_loss": 5.072072735936236, "test_loss": 2.927180756222118, "test_acc1": 37.70400099914551, "test_acc5": 64.042001796875, "epoch": 5, "n_parameters": 23674632} -{"train_lr": 0.0019993938728840033, "train_loss": 4.57422440815315, "test_loss": 2.520600895086924, "test_acc1": 44.946001317443844, "test_acc5": 70.72600224700928, "epoch": 6, "n_parameters": 23674632} -{"train_lr": 0.0019991272159483823, "train_loss": 4.182143216367534, "test_loss": 2.242982774069815, "test_acc1": 50.398001309204105, "test_acc5": 75.1940022592163, "epoch": 7, "n_parameters": 23674632} -{"train_lr": 0.0019988121066547484, "train_loss": 3.9536258678594463, "test_loss": 2.0095873348640674, "test_acc1": 54.31400154724121, "test_acc5": 78.69000274932861, "epoch": 8, "n_parameters": 23674632} -{"train_lr": 0.0019984485603610773, "train_loss": 3.796824180727287, "test_loss": 1.8876004733822562, "test_acc1": 56.876001674194335, "test_acc5": 80.64800243286133, "epoch": 9, "n_parameters": 23674632} -{"train_lr": 0.0019980365947861213, "train_loss": 3.6599651825466126, "test_loss": 1.7641342914465703, "test_acc1": 59.838001592407224, "test_acc5": 82.53000258636474, "epoch": 10, "n_parameters": 23674632} -{"train_lr": 0.001997576230008536, "train_loss": 3.537867943755538, "test_loss": 1.6878389558105757, "test_acc1": 61.0580017855835, "test_acc5": 83.5080024899292, "epoch": 11, "n_parameters": 23674632} -{"train_lr": 0.0019970674884657426, "train_loss": 3.4580030032723164, "test_loss": 1.6289895888079295, "test_acc1": 62.43000170166015, "test_acc5": 84.4960024685669, "epoch": 12, "n_parameters": 23674632} -{"train_lr": 0.0019965103949532445, "train_loss": 3.367487786223086, "test_loss": 1.597528363493356, "test_acc1": 63.248001854248045, "test_acc5": 85.18000245513916, "epoch": 13, "n_parameters": 23674632} -{"train_lr": 0.001995904976622931, "train_loss": 3.3136043563115893, "test_loss": 1.512741717651035, "test_acc1": 64.7680019506836, "test_acc5": 85.94600257324218, "epoch": 14, "n_parameters": 23674632} -{"train_lr": 0.001995251262981942, "train_loss": 3.2551728431150306, "test_loss": 1.4411390672127407, "test_acc1": 66.40400192504883, "test_acc5": 86.70800248016357, "epoch": 15, 
"n_parameters": 23674632} -{"train_lr": 0.0019945492858914116, "train_loss": 3.2021339922952805, "test_loss": 1.437987937845967, "test_acc1": 66.43600183166504, "test_acc5": 87.16600259399414, "epoch": 16, "n_parameters": 23674632} -{"train_lr": 0.001993799079564837, "train_loss": 3.1664236810781974, "test_loss": 1.4605896640004534, "test_acc1": 66.61400195434571, "test_acc5": 87.16600232788086, "epoch": 17, "n_parameters": 23674632} -{"train_lr": 0.0019930006805659703, "train_loss": 3.1257025470002757, "test_loss": 1.4017437560991808, "test_acc1": 67.28200194274902, "test_acc5": 87.6140024142456, "epoch": 18, "n_parameters": 23674632} -{"train_lr": 0.0019921541278078774, "train_loss": 3.0850324005746157, "test_loss": 1.4033687385645779, "test_acc1": 67.37600193145752, "test_acc5": 87.78000240509033, "epoch": 19, "n_parameters": 23674632} -{"train_lr": 0.0019912594625502737, "train_loss": 3.0509677338752623, "test_loss": 1.3520973584417142, "test_acc1": 68.31600214569092, "test_acc5": 88.1600025189209, "epoch": 20, "n_parameters": 23674632} -{"train_lr": 0.0019903167283978865, "train_loss": 3.0218634672111553, "test_loss": 1.3259023601810138, "test_acc1": 69.21000210144042, "test_acc5": 88.90000276397706, "epoch": 21, "n_parameters": 23674632} -{"train_lr": 0.0019893259712981484, "train_loss": 2.9914106734388834, "test_loss": 1.282287425841346, "test_acc1": 69.59200198608399, "test_acc5": 89.17600269592285, "epoch": 22, "n_parameters": 23674632} -{"train_lr": 0.001988287239539311, "train_loss": 2.9708871626072555, "test_loss": 1.2873304310170086, "test_acc1": 69.83200201721192, "test_acc5": 89.25400234436034, "epoch": 23, "n_parameters": 23674632} -{"train_lr": 0.0019872005837475834, "train_loss": 2.9562376529145107, "test_loss": 1.2720997676704868, "test_acc1": 69.69800224029541, "test_acc5": 89.37400241027832, "epoch": 24, "n_parameters": 23674632} -{"train_lr": 0.0019860660568851683, "train_loss": 2.929753255894144, "test_loss": 1.2647564806269878, "test_acc1": 70.3460020880127, "test_acc5": 89.44600262145997, "epoch": 25, "n_parameters": 23674632} -{"train_lr": 0.0019848837142471525, "train_loss": 2.9056842155474647, "test_loss": 1.2721718549728394, "test_acc1": 70.26000234069824, "test_acc5": 89.3500024194336, "epoch": 26, "n_parameters": 23674632} -{"train_lr": 0.00198365361345938, "train_loss": 2.8850000717228266, "test_loss": 1.246594167004029, "test_acc1": 70.39600224395753, "test_acc5": 89.51400243652344, "epoch": 27, "n_parameters": 23674632} -{"train_lr": 0.0019823758144750267, "train_loss": 2.874905311065517, "test_loss": 1.2549262216145343, "test_acc1": 70.93400227752686, "test_acc5": 89.76200259155273, "epoch": 28, "n_parameters": 23674632} -{"train_lr": 0.0019810503795724453, "train_loss": 2.8521747248802636, "test_loss": 1.2173543299237888, "test_acc1": 71.13200202148437, "test_acc5": 89.98200250213623, "epoch": 29, "n_parameters": 23674632} -{"train_lr": 0.0019796773733513177, "train_loss": 2.8317441787127016, "test_loss": 1.1995628009917159, "test_acc1": 71.3000023098755, "test_acc5": 90.20200268585205, "epoch": 30, "n_parameters": 23674632} -{"train_lr": 0.0019782568627301467, "train_loss": 2.8273272807244583, "test_loss": 1.2003586039398655, "test_acc1": 71.39200225158692, "test_acc5": 90.14800239105224, "epoch": 31, "n_parameters": 23674632} -{"train_lr": 0.0019767889169424544, "train_loss": 2.8110896623868356, "test_loss": 1.1861933535247138, "test_acc1": 71.86400238342286, "test_acc5": 90.53400241943359, "epoch": 32, "n_parameters": 23674632} -{"train_lr": 
0.0019752736075340656, "train_loss": 2.7925926808783, "test_loss": 1.1901946343255765, "test_acc1": 71.16000226104737, "test_acc5": 90.32200269897461, "epoch": 33, "n_parameters": 23674632} -{"train_lr": 0.001973711008358824, "train_loss": 2.780271666251022, "test_loss": 1.1545316536318173, "test_acc1": 72.27200220367432, "test_acc5": 90.77800262207032, "epoch": 34, "n_parameters": 23674632} -{"train_lr": 0.0019721011955756446, "train_loss": 2.7768269829231675, "test_loss": 1.1566716545458995, "test_acc1": 72.23400253509521, "test_acc5": 90.66400257202149, "epoch": 35, "n_parameters": 23674632} -{"train_lr": 0.0019704442476446197, "train_loss": 2.765373977694771, "test_loss": 1.1829499393475778, "test_acc1": 72.13600240447998, "test_acc5": 90.35200240264892, "epoch": 36, "n_parameters": 23674632} -{"train_lr": 0.001968740245323006, "train_loss": 2.7522349164402073, "test_loss": 1.1261391268309318, "test_acc1": 72.9360022253418, "test_acc5": 91.1780024383545, "epoch": 37, "n_parameters": 23674632} -{"train_lr": 0.001966989271661401, "train_loss": 2.7458750642865875, "test_loss": 1.1564711465528517, "test_acc1": 72.66400246520996, "test_acc5": 90.96000229156495, "epoch": 38, "n_parameters": 23674632} -{"train_lr": 0.0019651914119999453, "train_loss": 2.7226847042020657, "test_loss": 1.1383709106029887, "test_acc1": 72.81800241119385, "test_acc5": 91.15200263610839, "epoch": 39, "n_parameters": 23674632} -{"train_lr": 0.001963346753963693, "train_loss": 2.716710502497679, "test_loss": 1.1412093411340858, "test_acc1": 72.99600236480713, "test_acc5": 91.12400264465332, "epoch": 40, "n_parameters": 23674632} -{"train_lr": 0.0019614553874586467, "train_loss": 2.720998429232459, "test_loss": 1.1357963647354732, "test_acc1": 72.8740023626709, "test_acc5": 90.9120026928711, "epoch": 41, "n_parameters": 23674632} -{"train_lr": 0.0019595174046673747, "train_loss": 2.708590019616387, "test_loss": 1.131203093989329, "test_acc1": 72.8800021182251, "test_acc5": 91.17600268615723, "epoch": 42, "n_parameters": 23674632} -{"train_lr": 0.001957532900044387, "train_loss": 2.6906316679754227, "test_loss": 1.1237044710327277, "test_acc1": 73.49000241760254, "test_acc5": 91.45800250030517, "epoch": 43, "n_parameters": 23674632} -{"train_lr": 0.0019555019703116996, "train_loss": 2.6926511208430752, "test_loss": 1.1881925398201654, "test_acc1": 72.64600219970703, "test_acc5": 90.79200254577637, "epoch": 44, "n_parameters": 23674632} -{"train_lr": 0.0019534247144539964, "train_loss": 2.669253727979511, "test_loss": 1.1056232055028279, "test_acc1": 73.32800236694337, "test_acc5": 91.24400281707764, "epoch": 45, "n_parameters": 23674632} -{"train_lr": 0.0019513012337137013, "train_loss": 2.672120205706639, "test_loss": 1.1311405066287878, "test_acc1": 73.43200234008789, "test_acc5": 91.2960025112915, "epoch": 46, "n_parameters": 23674632} -{"train_lr": 0.0019491316315862757, "train_loss": 2.6618877924103246, "test_loss": 1.0918339218831423, "test_acc1": 73.62600237945557, "test_acc5": 91.52200273590088, "epoch": 47, "n_parameters": 23674632} -{"train_lr": 0.0019469160138151087, "train_loss": 2.655660231955808, "test_loss": 1.0967326478073092, "test_acc1": 73.7500025604248, "test_acc5": 91.56400254333496, "epoch": 48, "n_parameters": 23674632} -{"train_lr": 0.001944654488386279, "train_loss": 2.653644362561327, "test_loss": 1.117539778803334, "test_acc1": 73.73400222503662, "test_acc5": 91.40800271392823, "epoch": 49, "n_parameters": 23674632} -{"train_lr": 0.0019423471655233763, "train_loss": 2.6471054494785937, 
"test_loss": 1.0964316455929568, "test_acc1": 73.76400243896484, "test_acc5": 91.41000250457763, "epoch": 50, "n_parameters": 23674632} -{"train_lr": 0.0019399941576820443, "train_loss": 2.6386362530892606, "test_loss": 1.0782580411795415, "test_acc1": 74.03800250823974, "test_acc5": 91.83400246276855, "epoch": 51, "n_parameters": 23674632} -{"train_lr": 0.0019375955795444654, "train_loss": 2.6287055357302025, "test_loss": 1.1008769603389683, "test_acc1": 73.6640023373413, "test_acc5": 91.45200231170654, "epoch": 52, "n_parameters": 23674632} -{"train_lr": 0.0019351515480140196, "train_loss": 2.6251729451995387, "test_loss": 1.0804623428619269, "test_acc1": 73.94800250152588, "test_acc5": 91.52600238800049, "epoch": 53, "n_parameters": 23674632} -{"train_lr": 0.0019326621822094159, "train_loss": 2.6182354852426157, "test_loss": 1.067708346766956, "test_acc1": 74.29800248229981, "test_acc5": 91.67600251312255, "epoch": 54, "n_parameters": 23674632} -{"train_lr": 0.0019301276034588161, "train_loss": 2.6152079407450297, "test_loss": 1.0625954653051766, "test_acc1": 74.23200255615234, "test_acc5": 91.88200266235351, "epoch": 55, "n_parameters": 23674632} -{"train_lr": 0.0019275479352939572, "train_loss": 2.613620963766516, "test_loss": 1.0511416563707772, "test_acc1": 74.37200253570556, "test_acc5": 91.95400262512207, "epoch": 56, "n_parameters": 23674632} -{"train_lr": 0.0019249233034442181, "train_loss": 2.6063726664685327, "test_loss": 1.0869849975587744, "test_acc1": 73.9840024584961, "test_acc5": 91.69400271942139, "epoch": 57, "n_parameters": 23674632} -{"train_lr": 0.0019222538358305563, "train_loss": 2.599999952016117, "test_loss": 1.0701658532700755, "test_acc1": 74.20200257049561, "test_acc5": 91.72800234191895, "epoch": 58, "n_parameters": 23674632} -{"train_lr": 0.0019195396625589479, "train_loss": 2.594913657966087, "test_loss": 1.0470554258561495, "test_acc1": 74.57600217681885, "test_acc5": 91.99800264160156, "epoch": 59, "n_parameters": 23674632} -{"train_lr": 0.001916780915914446, "train_loss": 2.5906916927400347, "test_loss": 1.069072867207455, "test_acc1": 74.03400257751464, "test_acc5": 92.08800257476807, "epoch": 60, "n_parameters": 23674632} -{"train_lr": 0.0019139777303543704, "train_loss": 2.5849767789113627, "test_loss": 1.0592132888057015, "test_acc1": 74.27400248901367, "test_acc5": 91.98400257232666, "epoch": 61, "n_parameters": 23674632} -{"train_lr": 0.001911130242502158, "train_loss": 2.5733981787634317, "test_loss": 1.0600332232813041, "test_acc1": 74.32800243835449, "test_acc5": 92.05800260498047, "epoch": 62, "n_parameters": 23674632} -{"train_lr": 0.0019082385911402455, "train_loss": 2.5776837415379776, "test_loss": 1.036757068200545, "test_acc1": 74.77400236724854, "test_acc5": 92.16400248474122, "epoch": 63, "n_parameters": 23674632} -{"train_lr": 0.0019053029172036347, "train_loss": 2.568179278374671, "test_loss": 1.0968205006510923, "test_acc1": 74.08000238891601, "test_acc5": 91.85800244171142, "epoch": 64, "n_parameters": 23674632} -{"train_lr": 0.0019023233637730892, "train_loss": 2.559221243936953, "test_loss": 1.0567774637178942, "test_acc1": 74.55600243103028, "test_acc5": 92.17400241851807, "epoch": 65, "n_parameters": 23674632} -{"train_lr": 0.001899300076067683, "train_loss": 2.5611906952852252, "test_loss": 1.0191919556395574, "test_acc1": 74.83600230438232, "test_acc5": 92.4280025289917, "epoch": 66, "n_parameters": 23674632} -{"train_lr": 0.0018962332014382094, "train_loss": 2.5581404020746263, "test_loss": 1.043987557743535, "test_acc1": 
74.81000247619629, "test_acc5": 92.11400260681152, "epoch": 67, "n_parameters": 23674632} -{"train_lr": 0.001893122889359906, "train_loss": 2.550681145142594, "test_loss": 1.0535988781714078, "test_acc1": 74.75600253509522, "test_acc5": 92.12600244781494, "epoch": 68, "n_parameters": 23674632} -{"train_lr": 0.0018899692914248528, "train_loss": 2.5494831402739173, "test_loss": 1.0637075704607097, "test_acc1": 74.51800247283936, "test_acc5": 91.9640024697876, "epoch": 69, "n_parameters": 23674632} -{"train_lr": 0.0018867725613350395, "train_loss": 2.5468780107254223, "test_loss": 0.9995830167423595, "test_acc1": 75.55000243347168, "test_acc5": 92.53800259857178, "epoch": 70, "n_parameters": 23674632} -{"train_lr": 0.0018835328548946853, "train_loss": 2.5386204090978888, "test_loss": 1.0347655713558197, "test_acc1": 75.05200260498047, "test_acc5": 92.18400266998292, "epoch": 71, "n_parameters": 23674632} -{"train_lr": 0.00188025033000234, "train_loss": 2.5357063405281255, "test_loss": 1.0346686766680442, "test_acc1": 75.08000273254395, "test_acc5": 92.45600247619629, "epoch": 72, "n_parameters": 23674632} -{"train_lr": 0.0018769251466436553, "train_loss": 2.5395764926616713, "test_loss": 1.0423876438854318, "test_acc1": 74.69600231842041, "test_acc5": 92.39600240325927, "epoch": 73, "n_parameters": 23674632} -{"train_lr": 0.0018735574668835124, "train_loss": 2.5250419324560225, "test_loss": 1.0034422681412913, "test_acc1": 75.73800239990234, "test_acc5": 92.62000249267578, "epoch": 74, "n_parameters": 23674632} -{"train_lr": 0.001870147454857663, "train_loss": 2.523793592227163, "test_loss": 1.042537679500652, "test_acc1": 74.9440026184082, "test_acc5": 92.20400248382569, "epoch": 75, "n_parameters": 23674632} -{"train_lr": 0.0018666952767654964, "train_loss": 2.515186340367194, "test_loss": 1.0099044568610913, "test_acc1": 75.44400238037109, "test_acc5": 92.38200275177002, "epoch": 76, "n_parameters": 23674632} -{"train_lr": 0.0018632011008612105, "train_loss": 2.5127259317657455, "test_loss": 1.0330266662393555, "test_acc1": 75.3340023904419, "test_acc5": 92.44400267913818, "epoch": 77, "n_parameters": 23674632} -{"train_lr": 0.0018596650974460499, "train_loss": 2.5203188904922165, "test_loss": 1.0324109745296566, "test_acc1": 75.54800278503419, "test_acc5": 92.50600236846924, "epoch": 78, "n_parameters": 23674632} -{"train_lr": 0.001856087438859743, "train_loss": 2.511093680366433, "test_loss": 1.0058152502910658, "test_acc1": 75.29400244659423, "test_acc5": 92.58600276641846, "epoch": 79, "n_parameters": 23674632} -{"train_lr": 0.0018524682994723429, "train_loss": 2.5079577987094956, "test_loss": 1.0179408920759505, "test_acc1": 75.44400267364502, "test_acc5": 92.77400252166748, "epoch": 80, "n_parameters": 23674632} -{"train_lr": 0.0018488078556755862, "train_loss": 2.5035631124564497, "test_loss": 1.046843944625421, "test_acc1": 74.92800241424561, "test_acc5": 92.37600273071288, "epoch": 81, "n_parameters": 23674632} -{"train_lr": 0.0018451062858744953, "train_loss": 2.502321364246398, "test_loss": 0.9906590834937312, "test_acc1": 75.79800271728516, "test_acc5": 92.70600232177735, "epoch": 82, "n_parameters": 23674632} -{"train_lr": 0.0018413637704782046, "train_loss": 2.493414182218907, "test_loss": 0.9930989391198664, "test_acc1": 75.62200256744384, "test_acc5": 92.7400023840332, "epoch": 83, "n_parameters": 23674632} -{"train_lr": 0.0018375804918916805, "train_loss": 2.4842096553574935, "test_loss": 0.995877200342489, "test_acc1": 75.52800233703613, "test_acc5": 92.62800248138427, 
"epoch": 84, "n_parameters": 23674632} -{"train_lr": 0.0018337566345065688, "train_loss": 2.4799253754883552, "test_loss": 0.9632780700922012, "test_acc1": 76.26600257110596, "test_acc5": 93.07400265197754, "epoch": 85, "n_parameters": 23674632} -{"train_lr": 0.0018298923846922339, "train_loss": 2.4852469105371755, "test_loss": 0.9982868427354278, "test_acc1": 75.66400239074707, "test_acc5": 92.66200250823975, "epoch": 86, "n_parameters": 23674632} -{"train_lr": 0.001825987930786891, "train_loss": 2.4873847608848347, "test_loss": 1.0085755188130971, "test_acc1": 75.63400277160645, "test_acc5": 92.69600250549317, "epoch": 87, "n_parameters": 23674632} -{"train_lr": 0.001822043463088011, "train_loss": 2.4883046793304, "test_loss": 1.0216012322767214, "test_acc1": 75.2980025289917, "test_acc5": 92.6020024130249, "epoch": 88, "n_parameters": 23674632} -{"train_lr": 0.0018180591738434484, "train_loss": 2.470752347311337, "test_loss": 0.9907527987026807, "test_acc1": 75.96000245391846, "test_acc5": 92.7520026095581, "epoch": 89, "n_parameters": 23674632} -{"train_lr": 0.001814035257241804, "train_loss": 2.4772405807229636, "test_loss": 0.9947223787506422, "test_acc1": 75.53600247192382, "test_acc5": 92.62600239715576, "epoch": 90, "n_parameters": 23674632} -{"train_lr": 0.001809971909403067, "train_loss": 2.4777074161765102, "test_loss": 0.9735540884236494, "test_acc1": 76.2420024609375, "test_acc5": 92.9780024822998, "epoch": 91, "n_parameters": 23674632} -{"train_lr": 0.0018058693283691516, "train_loss": 2.4668329380851666, "test_loss": 0.9743707519814824, "test_acc1": 76.1580023562622, "test_acc5": 92.92600266052246, "epoch": 92, "n_parameters": 23674632} -{"train_lr": 0.0018017277140940004, "train_loss": 2.466900534409699, "test_loss": 0.9735702445561235, "test_acc1": 75.92800248596191, "test_acc5": 92.930002628479, "epoch": 93, "n_parameters": 23674632} -{"train_lr": 0.001797547268434101, "train_loss": 2.4640541057959258, "test_loss": 0.9873968748883768, "test_acc1": 75.87600259307861, "test_acc5": 92.87000252258301, "epoch": 94, "n_parameters": 23674632} -{"train_lr": 0.001793328195138373, "train_loss": 2.4594512021393893, "test_loss": 0.9773468118951176, "test_acc1": 75.86800239349365, "test_acc5": 92.92800258972169, "epoch": 95, "n_parameters": 23674632} -{"train_lr": 0.0017890706998386666, "train_loss": 2.4610727241070722, "test_loss": 0.9902109318610394, "test_acc1": 75.87800243499755, "test_acc5": 92.790002421875, "epoch": 96, "n_parameters": 23674632} -{"train_lr": 0.0017847749900392269, "train_loss": 2.4561611906015615, "test_loss": 0.9580596130002629, "test_acc1": 76.2720026235962, "test_acc5": 93.24200275390625, "epoch": 97, "n_parameters": 23674632} -{"train_lr": 0.001780441275106795, "train_loss": 2.454640916128048, "test_loss": 1.0005119690163569, "test_acc1": 75.64600250183105, "test_acc5": 92.76400262451172, "epoch": 98, "n_parameters": 23674632} -{"train_lr": 0.0017760697662607182, "train_loss": 2.4524167169579307, "test_loss": 0.9825683989082322, "test_acc1": 75.7740024975586, "test_acc5": 92.93200261932373, "epoch": 99, "n_parameters": 23674632} -{"train_lr": 0.0017716606765619024, "train_loss": 2.449532974550097, "test_loss": 0.9986273950022279, "test_acc1": 75.75600243774414, "test_acc5": 92.94400269073486, "epoch": 100, "n_parameters": 23674632} -{"train_lr": 0.0017672142209033527, "train_loss": 2.4505179482850905, "test_loss": 0.9782555642904658, "test_acc1": 76.10800239868163, "test_acc5": 92.81800255889893, "epoch": 101, "n_parameters": 23674632} -{"train_lr": 
0.001762730615999072, "train_loss": 2.44269978221086, "test_loss": 0.9499874060804193, "test_acc1": 76.55600260223389, "test_acc5": 93.17400246459961, "epoch": 102, "n_parameters": 23674632} -{"train_lr": 0.0017582100803734976, "train_loss": 2.437293980404627, "test_loss": 0.9567815400660038, "test_acc1": 76.45000264251709, "test_acc5": 93.12400250823974, "epoch": 103, "n_parameters": 23674632} -{"train_lr": 0.0017536528343512082, "train_loss": 2.4394713453442263, "test_loss": 0.970225046642802, "test_acc1": 76.26600243164063, "test_acc5": 93.11000265197754, "epoch": 104, "n_parameters": 23674632} -{"train_lr": 0.0017490591000460089, "train_loss": 2.4301701794139485, "test_loss": 0.9571477396018577, "test_acc1": 76.44000235565186, "test_acc5": 93.30000265625, "epoch": 105, "n_parameters": 23674632} -{"train_lr": 0.0017444291013499857, "train_loss": 2.434681732758439, "test_loss": 0.9915644988191851, "test_acc1": 75.77200257781982, "test_acc5": 92.77000268890382, "epoch": 106, "n_parameters": 23674632} -{"train_lr": 0.0017397630639227185, "train_loss": 2.431138311799386, "test_loss": 0.9610535173930905, "test_acc1": 76.56400254638672, "test_acc5": 93.06000264221191, "epoch": 107, "n_parameters": 23674632} -{"train_lr": 0.0017350612151803562, "train_loss": 2.4242601192516866, "test_loss": 0.9573961207360933, "test_acc1": 76.50200255096435, "test_acc5": 93.23400229827881, "epoch": 108, "n_parameters": 23674632} -{"train_lr": 0.0017303237842843022, "train_loss": 2.4300030732540776, "test_loss": 0.9950741286756415, "test_acc1": 75.94200252502442, "test_acc5": 92.84200253723145, "epoch": 109, "n_parameters": 23674632} -{"train_lr": 0.0017255510021302878, "train_loss": 2.420560175351959, "test_loss": 0.9803514820382451, "test_acc1": 75.89600260009766, "test_acc5": 93.03600229858398, "epoch": 110, "n_parameters": 23674632} -{"train_lr": 0.0017207431013369194, "train_loss": 2.4216428123694436, "test_loss": 0.9533975974402644, "test_acc1": 76.84600227386474, "test_acc5": 93.21600223205566, "epoch": 111, "n_parameters": 23674632} -{"train_lr": 0.0017159003162346151, "train_loss": 2.408453074314421, "test_loss": 0.9512832521489172, "test_acc1": 76.62800246429444, "test_acc5": 93.32000226165772, "epoch": 112, "n_parameters": 23674632} -{"train_lr": 0.0017110228828538367, "train_loss": 2.407640092747865, "test_loss": 0.9658774340694601, "test_acc1": 76.31000259613037, "test_acc5": 92.98000257293701, "epoch": 113, "n_parameters": 23674632} -{"train_lr": 0.0017061110389138003, "train_loss": 2.41146745629829, "test_loss": 0.9485105854092222, "test_acc1": 76.6560025845337, "test_acc5": 93.21200261444092, "epoch": 114, "n_parameters": 23674632} -{"train_lr": 0.0017011650238107918, "train_loss": 2.408730362447427, "test_loss": 0.9517113302241672, "test_acc1": 76.93200238616943, "test_acc5": 93.18200237213135, "epoch": 115, "n_parameters": 23674632} -{"train_lr": 0.0016961850786067405, "train_loss": 2.4074596738119682, "test_loss": 0.9690595943142067, "test_acc1": 76.43200234039307, "test_acc5": 93.12000244354248, "epoch": 116, "n_parameters": 23674632} -{"train_lr": 0.0016911714460170083, "train_loss": 2.4029114807276226, "test_loss": 0.9515209310885632, "test_acc1": 76.69400230041504, "test_acc5": 93.24200285705567, "epoch": 117, "n_parameters": 23674632} -{"train_lr": 0.001686124370399008, "train_loss": 2.3966799414843964, "test_loss": 0.9569985177932363, "test_acc1": 76.45600250549316, "test_acc5": 93.29800251403809, "epoch": 118, "n_parameters": 23674632} -{"train_lr": 0.0016810440977401808, "train_loss": 
2.397041397510196, "test_loss": 0.9477255343261993, "test_acc1": 76.77600239318848, "test_acc5": 93.26200268280029, "epoch": 119, "n_parameters": 23674632} -{"train_lr": 0.0016759308756457906, "train_loss": 2.3965240997209443, "test_loss": 0.9470923041755502, "test_acc1": 76.73200252624511, "test_acc5": 93.34200257476806, "epoch": 120, "n_parameters": 23674632} -{"train_lr": 0.0016707849533270642, "train_loss": 2.391548952288765, "test_loss": 0.9436041687925657, "test_acc1": 76.75200249450684, "test_acc5": 93.28600266174317, "epoch": 121, "n_parameters": 23674632} -{"train_lr": 0.001665606581588985, "train_loss": 2.3944383112074945, "test_loss": 0.9839616380631924, "test_acc1": 76.13400232727051, "test_acc5": 92.90400278778077, "epoch": 122, "n_parameters": 23674632} -{"train_lr": 0.0016603960128180801, "train_loss": 2.395859982029235, "test_loss": 0.932567030868747, "test_acc1": 76.84600239898681, "test_acc5": 93.47200246368408, "epoch": 123, "n_parameters": 23674632} -{"train_lr": 0.0016551535009701297, "train_loss": 2.387055763213945, "test_loss": 0.9624366560442881, "test_acc1": 76.59600270294189, "test_acc5": 93.13200250305175, "epoch": 124, "n_parameters": 23674632} -{"train_lr": 0.0016498793015578023, "train_loss": 2.38620214300047, "test_loss": 0.9289655079218474, "test_acc1": 77.33800248779296, "test_acc5": 93.4260023361206, "epoch": 125, "n_parameters": 23674632} -{"train_lr": 0.0016445736716381243, "train_loss": 2.377324535084857, "test_loss": 0.931503171496319, "test_acc1": 77.13200252136231, "test_acc5": 93.5940025088501, "epoch": 126, "n_parameters": 23674632} -{"train_lr": 0.001639236869799948, "train_loss": 2.382151820712524, "test_loss": 0.9262266726882169, "test_acc1": 77.08800256988525, "test_acc5": 93.51200227111816, "epoch": 127, "n_parameters": 23674632} -{"train_lr": 0.0016338691561515017, "train_loss": 2.3786492453133175, "test_loss": 0.9397467785712444, "test_acc1": 77.00800240661621, "test_acc5": 93.38200251403809, "epoch": 128, "n_parameters": 23674632} -{"train_lr": 0.0016284707923075977, "train_loss": 2.3725956648731117, "test_loss": 0.9328662517170111, "test_acc1": 77.13600245941161, "test_acc5": 93.35800262756348, "epoch": 129, "n_parameters": 23674632} -{"train_lr": 0.0016230420413769352, "train_loss": 2.3689503879022062, "test_loss": 0.9444249626813512, "test_acc1": 77.05400251434327, "test_acc5": 93.28200254425049, "epoch": 130, "n_parameters": 23674632} -{"train_lr": 0.001617583167949022, "train_loss": 2.376231709889752, "test_loss": 0.9323822458585104, "test_acc1": 77.13000247253417, "test_acc5": 93.61000261108398, "epoch": 131, "n_parameters": 23674632} -{"train_lr": 0.0016120944380817502, "train_loss": 2.366143221847064, "test_loss": 0.9229171491707816, "test_acc1": 77.46600251831055, "test_acc5": 93.51600255432129, "epoch": 132, "n_parameters": 23674632} -{"train_lr": 0.0016065761192880512, "train_loss": 2.3636687905025138, "test_loss": 0.9544883466353922, "test_acc1": 76.58400261566162, "test_acc5": 93.33600243621827, "epoch": 133, "n_parameters": 23674632} -{"train_lr": 0.0016010284805230144, "train_loss": 2.3589863693185276, "test_loss": 0.9429874363722224, "test_acc1": 77.01600241821289, "test_acc5": 93.45000276672363, "epoch": 134, "n_parameters": 23674632} -{"train_lr": 0.0015954517921706322, "train_loss": 2.363728223492106, "test_loss": 0.9281501076889761, "test_acc1": 77.20200239440918, "test_acc5": 93.61000239624023, "epoch": 135, "n_parameters": 23674632} -{"train_lr": 0.0015898463260310446, "train_loss": 2.3639958831641694, "test_loss": 
0.929974162894668, "test_acc1": 77.23000253967285, "test_acc5": 93.53400251098633, "epoch": 136, "n_parameters": 23674632} -{"train_lr": 0.0015842123553064837, "train_loss": 2.3559913335443974, "test_loss": 0.9060135542443304, "test_acc1": 77.3640024810791, "test_acc5": 93.6400024645996, "epoch": 137, "n_parameters": 23674632} -{"train_lr": 0.0015785501545889072, "train_loss": 2.3600902683991225, "test_loss": 0.9251669150861827, "test_acc1": 77.13600242248535, "test_acc5": 93.494002578125, "epoch": 138, "n_parameters": 23674632} -{"train_lr": 0.001572859999845966, "train_loss": 2.3542424221571494, "test_loss": 0.9075656538885651, "test_acc1": 77.45200244537354, "test_acc5": 93.74200230285645, "epoch": 139, "n_parameters": 23674632} -{"train_lr": 0.001567142168407809, "train_loss": 2.3472880297999303, "test_loss": 0.9488741222656134, "test_acc1": 76.86400240875244, "test_acc5": 93.39000249481201, "epoch": 140, "n_parameters": 23674632} -{"train_lr": 0.001561396938953405, "train_loss": 2.3519763433628325, "test_loss": 0.9014055391378475, "test_acc1": 77.7900024331665, "test_acc5": 93.81200255615235, "epoch": 141, "n_parameters": 23674632} -{"train_lr": 0.001555624591497151, "train_loss": 2.3443326057671166, "test_loss": 0.9201101497041456, "test_acc1": 77.33600248413086, "test_acc5": 93.56400243103027, "epoch": 142, "n_parameters": 23674632} -{"train_lr": 0.0015498254073751242, "train_loss": 2.3427161007381074, "test_loss": 0.9129735618603952, "test_acc1": 77.5760026184082, "test_acc5": 93.79200254760742, "epoch": 143, "n_parameters": 23674632} -{"train_lr": 0.001543999669231291, "train_loss": 2.346766409089716, "test_loss": 0.9200402717247154, "test_acc1": 77.38600257537841, "test_acc5": 93.73800254760742, "epoch": 144, "n_parameters": 23674632} -{"train_lr": 0.0015381476610041188, "train_loss": 2.3413067513780534, "test_loss": 0.9221692764849374, "test_acc1": 77.30400241760253, "test_acc5": 93.6120024746704, "epoch": 145, "n_parameters": 23674632} -{"train_lr": 0.0015322696679120752, "train_loss": 2.3370988417467435, "test_loss": 0.9145435039518457, "test_acc1": 77.35600240509034, "test_acc5": 93.72200234283447, "epoch": 146, "n_parameters": 23674632} -{"train_lr": 0.001526365976440195, "train_loss": 2.33150366739594, "test_loss": 0.9029338000850244, "test_acc1": 77.69400251739502, "test_acc5": 93.89200275054931, "epoch": 147, "n_parameters": 23674632} -{"train_lr": 0.0015204368743262477, "train_loss": 2.3336874108186825, "test_loss": 0.9196664908844413, "test_acc1": 77.44800241577148, "test_acc5": 93.71200240539551, "epoch": 148, "n_parameters": 23674632} -{"train_lr": 0.0015144826505462119, "train_loss": 2.333453637220972, "test_loss": 0.9347541019546263, "test_acc1": 77.27200254455566, "test_acc5": 93.41600258544922, "epoch": 149, "n_parameters": 23674632} -{"train_lr": 0.001508503595300589, "train_loss": 2.3327814052335554, "test_loss": 0.897119368341836, "test_acc1": 77.70600249786376, "test_acc5": 93.91600242645264, "epoch": 150, "n_parameters": 23674632} -{"train_lr": 0.0015024999999999741, "train_loss": 2.3356509086944692, "test_loss": 0.9110601425848224, "test_acc1": 77.52400238616943, "test_acc5": 93.84400255981446, "epoch": 151, "n_parameters": 23674632} -{"train_lr": 0.001496472157251286, "train_loss": 2.319668770705958, "test_loss": 0.9029949480159716, "test_acc1": 77.67000243743897, "test_acc5": 93.72600267852783, "epoch": 152, "n_parameters": 23674632} -{"train_lr": 0.0014904203608430181, "train_loss": 2.3211226105975875, "test_loss": 0.933666670751391, "test_acc1": 
76.95400270721436, "test_acc5": 93.47200261932373, "epoch": 153, "n_parameters": 23674632} -{"train_lr": 0.0014843449057311761, "train_loss": 2.321056483258351, "test_loss": 0.912113118352312, "test_acc1": 77.53000252288818, "test_acc5": 93.68600240112305, "epoch": 154, "n_parameters": 23674632} -{"train_lr": 0.001478246088024936, "train_loss": 2.3267108600059574, "test_loss": 0.8912327543578364, "test_acc1": 77.73200251373291, "test_acc5": 93.9640025164795, "epoch": 155, "n_parameters": 23674632} -{"train_lr": 0.0014721242049719888, "train_loss": 2.317100684705684, "test_loss": 0.9082492090987436, "test_acc1": 77.75600239013671, "test_acc5": 93.85400254119872, "epoch": 156, "n_parameters": 23674632} -{"train_lr": 0.0014659795549442738, "train_loss": 2.311547778528943, "test_loss": 0.9190242828970606, "test_acc1": 76.98000238586425, "test_acc5": 93.67600281646729, "epoch": 157, "n_parameters": 23674632} -{"train_lr": 0.0014598124374234239, "train_loss": 2.3123664903483516, "test_loss": 0.9320285556217035, "test_acc1": 77.46200237457275, "test_acc5": 93.57200259979248, "epoch": 158, "n_parameters": 23674632} -{"train_lr": 0.0014536231529860136, "train_loss": 2.3074802518438853, "test_loss": 0.8918340232110384, "test_acc1": 77.88800248565674, "test_acc5": 93.90600272399902, "epoch": 159, "n_parameters": 23674632} -{"train_lr": 0.001447412003289014, "train_loss": 2.3094657339352214, "test_loss": 0.891006710741556, "test_acc1": 77.92400268096924, "test_acc5": 93.94400246948243, "epoch": 160, "n_parameters": 23674632} -{"train_lr": 0.001441179291055139, "train_loss": 2.311056453249247, "test_loss": 0.8984054417321177, "test_acc1": 77.71600257537841, "test_acc5": 93.87000230987549, "epoch": 161, "n_parameters": 23674632} -{"train_lr": 0.0014349253200580082, "train_loss": 2.3127624113544476, "test_loss": 0.9305622384629466, "test_acc1": 77.12800247314453, "test_acc5": 93.6480025918579, "epoch": 162, "n_parameters": 23674632} -{"train_lr": 0.0014286503951072642, "train_loss": 2.299905186839146, "test_loss": 0.8934898607884393, "test_acc1": 77.69800257598877, "test_acc5": 93.97400234283447, "epoch": 163, "n_parameters": 23674632} -{"train_lr": 0.0014223548220339633, "train_loss": 2.3018215062568705, "test_loss": 0.9030935185199435, "test_acc1": 77.55400237915039, "test_acc5": 93.81400244171142, "epoch": 164, "n_parameters": 23674632} -{"train_lr": 0.0014160389076753996, "train_loss": 2.298819356851822, "test_loss": 0.8856238063537714, "test_acc1": 77.96000260894776, "test_acc5": 94.04400244628906, "epoch": 165, "n_parameters": 23674632} -{"train_lr": 0.0014097029598603777, "train_loss": 2.294658946738445, "test_loss": 0.8816852905985081, "test_acc1": 77.99400257659912, "test_acc5": 94.01600252807617, "epoch": 166, "n_parameters": 23674632} -{"train_lr": 0.0014033472873941557, "train_loss": 2.293040415782818, "test_loss": 0.9006092684964339, "test_acc1": 77.68200231567383, "test_acc5": 93.90200244506836, "epoch": 167, "n_parameters": 23674632} -{"train_lr": 0.0013969722000430292, "train_loss": 2.2925363708195166, "test_loss": 0.9068647397286964, "test_acc1": 77.6840023828125, "test_acc5": 93.92800225006104, "epoch": 168, "n_parameters": 23674632} -{"train_lr": 0.0013905780085198772, "train_loss": 2.293812284581095, "test_loss": 0.8904510330070149, "test_acc1": 77.85800231597901, "test_acc5": 94.0480026538086, "epoch": 169, "n_parameters": 23674632} -{"train_lr": 0.001384165024468538, "train_loss": 2.2941772529570987, "test_loss": 0.9045293528699514, "test_acc1": 77.97200258666992, "test_acc5": 
94.05200245941163, "epoch": 170, "n_parameters": 23674632} -{"train_lr": 0.001377733560448806, "train_loss": 2.2852075304821144, "test_loss": 0.8892888838820385, "test_acc1": 78.05200234954835, "test_acc5": 93.89800226593017, "epoch": 171, "n_parameters": 23674632} -{"train_lr": 0.0013712839299212982, "train_loss": 2.2830550349254213, "test_loss": 0.8959864595848502, "test_acc1": 77.66800249450684, "test_acc5": 93.91000272766114, "epoch": 172, "n_parameters": 23674632} -{"train_lr": 0.0013648164472316938, "train_loss": 2.2864060001336126, "test_loss": 0.8964155743067915, "test_acc1": 77.93000239074708, "test_acc5": 94.12400268035888, "epoch": 173, "n_parameters": 23674632} -{"train_lr": 0.0013583314275960664, "train_loss": 2.2849046724091333, "test_loss": 0.8776531928416454, "test_acc1": 78.2560023095703, "test_acc5": 94.22200246459961, "epoch": 174, "n_parameters": 23674632} -{"train_lr": 0.0013518291870851622, "train_loss": 2.2782498187870144, "test_loss": 0.8936613740568812, "test_acc1": 77.86400263732911, "test_acc5": 94.02400254058838, "epoch": 175, "n_parameters": 23674632} -{"train_lr": 0.0013453100426090588, "train_loss": 2.2694001310854124, "test_loss": 0.8676405456481557, "test_acc1": 78.29600263061523, "test_acc5": 94.18000244873046, "epoch": 176, "n_parameters": 23674632} -{"train_lr": 0.0013387743119015316, "train_loss": 2.269167763640364, "test_loss": 0.8816767803421526, "test_acc1": 78.09400248840332, "test_acc5": 94.07800244873047, "epoch": 177, "n_parameters": 23674632} -{"train_lr": 0.001332222313504878, "train_loss": 2.2730469468900627, "test_loss": 0.8924445254784642, "test_acc1": 77.90800280090332, "test_acc5": 94.07600260986328, "epoch": 178, "n_parameters": 23674632} -{"train_lr": 0.001325654366754359, "train_loss": 2.268406944773752, "test_loss": 0.883020167888114, "test_acc1": 78.20400249786377, "test_acc5": 94.10600230072022, "epoch": 179, "n_parameters": 23674632} -{"train_lr": 0.0013190707917623515, "train_loss": 2.2741476108344623, "test_loss": 0.8848660684218912, "test_acc1": 78.01000250213623, "test_acc5": 94.04200241241455, "epoch": 180, "n_parameters": 23674632} -{"train_lr": 0.0013124719094030866, "train_loss": 2.265736633258091, "test_loss": 0.877197431456862, "test_acc1": 78.39200238220215, "test_acc5": 94.16000249847413, "epoch": 181, "n_parameters": 23674632} -{"train_lr": 0.0013058580412967258, "train_loss": 2.267990934870226, "test_loss": 0.8687445536030062, "test_acc1": 78.270002348938, "test_acc5": 94.17000225769043, "epoch": 182, "n_parameters": 23674632} -{"train_lr": 0.0012992295097937925, "train_loss": 2.261614302091366, "test_loss": 0.8859694279504545, "test_acc1": 78.0860026449585, "test_acc5": 94.04600250549316, "epoch": 183, "n_parameters": 23674632} -{"train_lr": 0.0012925866379597717, "train_loss": 2.2613180603364484, "test_loss": 0.8555459184854319, "test_acc1": 78.48200268463135, "test_acc5": 94.36800258300781, "epoch": 184, "n_parameters": 23674632} -{"train_lr": 0.0012859297495586568, "train_loss": 2.2520714745723565, "test_loss": 0.8647028141175256, "test_acc1": 78.43000251037597, "test_acc5": 94.23000258270264, "epoch": 185, "n_parameters": 23674632} -{"train_lr": 0.0012792591690378846, "train_loss": 2.2520919285899255, "test_loss": 0.875474264337258, "test_acc1": 78.31400265197755, "test_acc5": 94.22200248413085, "epoch": 186, "n_parameters": 23674632} -{"train_lr": 0.001272575221512157, "train_loss": 2.250092155439295, "test_loss": 0.8594567533018012, "test_acc1": 78.38200254455566, "test_acc5": 94.3300026852417, "epoch": 187, 
"n_parameters": 23674632} -{"train_lr": 0.0012658782327476911, "train_loss": 2.252468040449728, "test_loss": 0.8696386875076727, "test_acc1": 78.40400256530762, "test_acc5": 94.20400250610352, "epoch": 188, "n_parameters": 23674632} -{"train_lr": 0.0012591685291460967, "train_loss": 2.2432635046666762, "test_loss": 0.8723023741534262, "test_acc1": 78.42000241638183, "test_acc5": 94.21600227813721, "epoch": 189, "n_parameters": 23674632} -{"train_lr": 0.001252446437729037, "train_loss": 2.2508608740296583, "test_loss": 0.8651800952625998, "test_acc1": 78.50400260681153, "test_acc5": 94.25600256958008, "epoch": 190, "n_parameters": 23674632} -{"train_lr": 0.001245712286121648, "train_loss": 2.2421685133715994, "test_loss": 0.8768533929956682, "test_acc1": 78.30400239990234, "test_acc5": 94.11400250854493, "epoch": 191, "n_parameters": 23674632} -{"train_lr": 0.0012389664025371076, "train_loss": 2.2412809660609105, "test_loss": 0.8588402511721308, "test_acc1": 78.7020025692749, "test_acc5": 94.35200265838623, "epoch": 192, "n_parameters": 23674632} -{"train_lr": 0.001232209115760109, "train_loss": 2.2413928117588173, "test_loss": 0.873671527613293, "test_acc1": 78.25800243286133, "test_acc5": 94.06400255523681, "epoch": 193, "n_parameters": 23674632} -{"train_lr": 0.001225440755131367, "train_loss": 2.22768487238722, "test_loss": 0.8594393380212061, "test_acc1": 78.50600233825683, "test_acc5": 94.31800276489258, "epoch": 194, "n_parameters": 23674632} -{"train_lr": 0.0012186616505312083, "train_loss": 2.2360062015404423, "test_loss": 0.8669061668668733, "test_acc1": 78.49600264892578, "test_acc5": 94.32400233062744, "epoch": 195, "n_parameters": 23674632} -{"train_lr": 0.0012118721323636779, "train_loss": 2.233335185584595, "test_loss": 0.8809474278805833, "test_acc1": 78.20400270263671, "test_acc5": 94.18800237701416, "epoch": 196, "n_parameters": 23674632} -{"train_lr": 0.0012050725315402304, "train_loss": 2.231606672195603, "test_loss": 0.8640154066185156, "test_acc1": 78.3740025152588, "test_acc5": 94.28600264190673, "epoch": 197, "n_parameters": 23674632} -{"train_lr": 0.0011982631794638656, "train_loss": 2.2372776496944002, "test_loss": 0.8514163136256464, "test_acc1": 78.8840026385498, "test_acc5": 94.45800245056152, "epoch": 198, "n_parameters": 23674632} -{"train_lr": 0.0011914444080127799, "train_loss": 2.2240391194606, "test_loss": 0.8526329578775348, "test_acc1": 78.7480025177002, "test_acc5": 94.38400277832031, "epoch": 199, "n_parameters": 23674632} -{"train_lr": 0.0011846165495243265, "train_loss": 2.220555909758182, "test_loss": 0.8473053065439066, "test_acc1": 78.82600258148193, "test_acc5": 94.54400250579835, "epoch": 200, "n_parameters": 23674632} -{"train_lr": 0.001177779936778625, "train_loss": 2.2244954019499055, "test_loss": 0.8668799587158542, "test_acc1": 78.37000256774903, "test_acc5": 94.17200245422363, "epoch": 201, "n_parameters": 23674632} -{"train_lr": 0.001170934902982497, "train_loss": 2.2193911698320026, "test_loss": 0.8591729530105086, "test_acc1": 78.61000233947753, "test_acc5": 94.39800267822265, "epoch": 202, "n_parameters": 23674632} -{"train_lr": 0.0011640817817533703, "train_loss": 2.2273191184543975, "test_loss": 0.8538258344386563, "test_acc1": 78.88000243164062, "test_acc5": 94.47400245452882, "epoch": 203, "n_parameters": 23674632} -{"train_lr": 0.0011572209071026263, "train_loss": 2.2099945152001226, "test_loss": 0.8341428749263287, "test_acc1": 79.18400243347168, "test_acc5": 94.5780025415039, "epoch": 204, "n_parameters": 23674632} -{"train_lr": 
0.0011503526134195741, "train_loss": 2.218700084135496, "test_loss": 0.8594885389461662, "test_acc1": 78.83000265930175, "test_acc5": 94.326002605896, "epoch": 205, "n_parameters": 23674632} -{"train_lr": 0.0011434772354552698, "train_loss": 2.2091660731368594, "test_loss": 0.8434654698904717, "test_acc1": 78.9140024407959, "test_acc5": 94.48800254058838, "epoch": 206, "n_parameters": 23674632} -{"train_lr": 0.001136595108305831, "train_loss": 2.213486420998661, "test_loss": 0.8414751215640343, "test_acc1": 79.01200234100342, "test_acc5": 94.50400249938964, "epoch": 207, "n_parameters": 23674632} -{"train_lr": 0.0011297065673965083, "train_loss": 2.2101269560418637, "test_loss": 0.8439535225431124, "test_acc1": 78.79800236480713, "test_acc5": 94.47200252593994, "epoch": 208, "n_parameters": 23674632} -{"train_lr": 0.0011228119484649741, "train_loss": 2.2049848150959215, "test_loss": 0.8487010459330949, "test_acc1": 78.96600253173828, "test_acc5": 94.50600256866456, "epoch": 209, "n_parameters": 23674632} -{"train_lr": 0.001115911587545279, "train_loss": 2.2027746656839606, "test_loss": 0.8381966822075121, "test_acc1": 79.1300026159668, "test_acc5": 94.49400269500732, "epoch": 210, "n_parameters": 23674632} -{"train_lr": 0.0011090058209513043, "train_loss": 2.2035727397405465, "test_loss": 0.848078556697477, "test_acc1": 78.94000264251709, "test_acc5": 94.44800249694825, "epoch": 211, "n_parameters": 23674632} -{"train_lr": 0.0011020949852603367, "train_loss": 2.199791239486705, "test_loss": 0.839084279582356, "test_acc1": 79.01400263000488, "test_acc5": 94.66200240020751, "epoch": 212, "n_parameters": 23674632} -{"train_lr": 0.0010951794172968006, "train_loss": 2.200444107433017, "test_loss": 0.855833813040094, "test_acc1": 78.74800225982666, "test_acc5": 94.47600248077393, "epoch": 213, "n_parameters": 23674632} -{"train_lr": 0.0010882594541156603, "train_loss": 2.1964619575882796, "test_loss": 0.8298250601598711, "test_acc1": 79.03600280059814, "test_acc5": 94.68800264343261, "epoch": 214, "n_parameters": 23674632} -{"train_lr": 0.0010813354329861687, "train_loss": 2.189066496779688, "test_loss": 0.8421574185291926, "test_acc1": 78.82000268798828, "test_acc5": 94.54600261657716, "epoch": 215, "n_parameters": 23674632} -{"train_lr": 0.0010744076913754073, "train_loss": 2.1929787138192585, "test_loss": 0.8359170114677964, "test_acc1": 79.05600231933593, "test_acc5": 94.68200230773925, "epoch": 216, "n_parameters": 23674632} -{"train_lr": 0.0010674765669316437, "train_loss": 2.1872000570753687, "test_loss": 0.830441726089427, "test_acc1": 79.02200266693116, "test_acc5": 94.6760024533081, "epoch": 217, "n_parameters": 23674632} -{"train_lr": 0.00106054239746819, "train_loss": 2.1867370080890702, "test_loss": 0.8397593441786189, "test_acc1": 79.32800263824463, "test_acc5": 94.57000252258301, "epoch": 218, "n_parameters": 23674632} -{"train_lr": 0.0010536055209466667, "train_loss": 2.193705463211695, "test_loss": 0.8289178114271525, "test_acc1": 79.27200240051269, "test_acc5": 94.68600258911133, "epoch": 219, "n_parameters": 23674632} -{"train_lr": 0.0010466662754605423, "train_loss": 2.1788522116452764, "test_loss": 0.8388158882206137, "test_acc1": 79.00800252563477, "test_acc5": 94.52000243103028, "epoch": 220, "n_parameters": 23674632} -{"train_lr": 0.0010397249992189753, "train_loss": 2.1749409723529616, "test_loss": 0.8251907345697735, "test_acc1": 79.43000280303956, "test_acc5": 94.69400251220704, "epoch": 221, "n_parameters": 23674632} -{"train_lr": 0.001032782030529963, "train_loss": 
2.1794291551021647, "test_loss": 0.8130524421505856, "test_acc1": 79.65400247375489, "test_acc5": 94.79800236907958, "epoch": 222, "n_parameters": 23674632} -{"train_lr": 0.001025837707783931, "train_loss": 2.1760056942677517, "test_loss": 0.8360612567401293, "test_acc1": 79.21000254852295, "test_acc5": 94.6400027685547, "epoch": 223, "n_parameters": 23674632} -{"train_lr": 0.001018892369437435, "train_loss": 2.1745788559710664, "test_loss": 0.8049222312190316, "test_acc1": 79.62000248962403, "test_acc5": 94.85600252685546, "epoch": 224, "n_parameters": 23674632} -{"train_lr": 0.0010119463539964599, "train_loss": 2.1764683867220302, "test_loss": 0.8381282923122247, "test_acc1": 79.17000258361817, "test_acc5": 94.78000226409912, "epoch": 225, "n_parameters": 23674632} -{"train_lr": 0.0010049999999999942, "train_loss": 2.159868488423258, "test_loss": 0.834346293612863, "test_acc1": 79.20000252563477, "test_acc5": 94.64800258636474, "epoch": 226, "n_parameters": 23674632} -{"train_lr": 0.0009980536460035385, "train_loss": 2.1674123813899207, "test_loss": 0.8248602898057663, "test_acc1": 79.56000238647461, "test_acc5": 94.68400260772705, "epoch": 227, "n_parameters": 23674632} -{"train_lr": 0.0009911076305625605, "train_loss": 2.168499893505129, "test_loss": 0.8149709305302664, "test_acc1": 79.46400268859863, "test_acc5": 94.79200258544923, "epoch": 228, "n_parameters": 23674632} -{"train_lr": 0.0009841622922160565, "train_loss": 2.1621291593586704, "test_loss": 0.8107775289452437, "test_acc1": 79.68000257019042, "test_acc5": 94.89600269927979, "epoch": 229, "n_parameters": 23674632} -{"train_lr": 0.0009772179694700388, "train_loss": 2.1604311839401196, "test_loss": 0.8266180527932716, "test_acc1": 79.29600269348144, "test_acc5": 94.53000258483887, "epoch": 230, "n_parameters": 23674632} -{"train_lr": 0.0009702750007810201, "train_loss": 2.1571004064344197, "test_loss": 0.8197024631116426, "test_acc1": 79.44200260101319, "test_acc5": 94.73600262786866, "epoch": 231, "n_parameters": 23674632} -{"train_lr": 0.0009633337245394539, "train_loss": 2.156103022783685, "test_loss": 0.8322429389438846, "test_acc1": 79.16600243530273, "test_acc5": 94.71000265655518, "epoch": 232, "n_parameters": 23674632} -{"train_lr": 0.0009563944790533752, "train_loss": 2.153547275766766, "test_loss": 0.824376381143476, "test_acc1": 79.58400243499756, "test_acc5": 94.66400253509522, "epoch": 233, "n_parameters": 23674632} -{"train_lr": 0.0009494576025318052, "train_loss": 2.1496953555076814, "test_loss": 0.8095130158418958, "test_acc1": 79.83000255615234, "test_acc5": 94.84800240722656, "epoch": 234, "n_parameters": 23674632} -{"train_lr": 0.0009425234330683488, "train_loss": 2.150594845009174, "test_loss": 0.8055130211692868, "test_acc1": 79.75600268615723, "test_acc5": 94.88400270660401, "epoch": 235, "n_parameters": 23674632} -{"train_lr": 0.0009355923086245902, "train_loss": 2.1497514385232726, "test_loss": 0.806616640903733, "test_acc1": 79.75800243591308, "test_acc5": 94.8400025125122, "epoch": 236, "n_parameters": 23674632} -{"train_lr": 0.0009286645670138138, "train_loss": 2.1431874090747582, "test_loss": 0.8015019237769373, "test_acc1": 79.7600024282837, "test_acc5": 94.88400253448486, "epoch": 237, "n_parameters": 23674632} -{"train_lr": 0.0009217405458843423, "train_loss": 2.140005906792186, "test_loss": 0.813968557187102, "test_acc1": 79.37800236511231, "test_acc5": 94.85000252105714, "epoch": 238, "n_parameters": 23674632} -{"train_lr": 0.0009148205827032033, "train_loss": 2.1411338831118636, "test_loss": 
0.8114350268786604, "test_acc1": 79.71600276092529, "test_acc5": 94.80000245635986, "epoch": 239, "n_parameters": 23674632} -{"train_lr": 0.0009079050147396589, "train_loss": 2.136957728093286, "test_loss": 0.8087191113807035, "test_acc1": 79.75200221160888, "test_acc5": 94.91800279418945, "epoch": 240, "n_parameters": 23674632} -{"train_lr": 0.0009009941790486844, "train_loss": 2.1342858694178117, "test_loss": 0.8047103475440632, "test_acc1": 79.66200236419678, "test_acc5": 94.94400274536133, "epoch": 241, "n_parameters": 23674632} -{"train_lr": 0.0008940884124547167, "train_loss": 2.1385240570175275, "test_loss": 0.8129551880287401, "test_acc1": 79.67600230773925, "test_acc5": 94.90800230499268, "epoch": 242, "n_parameters": 23674632} -{"train_lr": 0.0008871880515350175, "train_loss": 2.1288500198190636, "test_loss": 0.7955897033327457, "test_acc1": 79.97000250976562, "test_acc5": 95.0180023513794, "epoch": 243, "n_parameters": 23674632} -{"train_lr": 0.0008802934326035345, "train_loss": 2.128260092876798, "test_loss": 0.8039196114422698, "test_acc1": 79.89600226806641, "test_acc5": 94.92600251739502, "epoch": 244, "n_parameters": 23674632} -{"train_lr": 0.0008734048916941698, "train_loss": 2.1267971279833624, "test_loss": 0.8289501812647689, "test_acc1": 79.29200272033691, "test_acc5": 94.66000258026124, "epoch": 245, "n_parameters": 23674632} -{"train_lr": 0.0008665227645447198, "train_loss": 2.12365596636022, "test_loss": 0.8007995044869004, "test_acc1": 79.87200238830566, "test_acc5": 95.00600230865479, "epoch": 246, "n_parameters": 23674632} -{"train_lr": 0.0008596473865804057, "train_loss": 2.1258281203244422, "test_loss": 0.8151980845088308, "test_acc1": 79.65600238830567, "test_acc5": 94.86000245758056, "epoch": 247, "n_parameters": 23674632} -{"train_lr": 0.0008527790928973824, "train_loss": 2.120397764751665, "test_loss": 0.8111515246670354, "test_acc1": 79.60800258636475, "test_acc5": 94.88200250946045, "epoch": 248, "n_parameters": 23674632} -{"train_lr": 0.0008459182182466325, "train_loss": 2.1205295658916783, "test_loss": 0.8061568195169623, "test_acc1": 79.66000269073486, "test_acc5": 94.9740024282837, "epoch": 249, "n_parameters": 23674632} -{"train_lr": 0.0008390650970174828, "train_loss": 2.1143399871867907, "test_loss": 0.8028703589895458, "test_acc1": 79.92800235870361, "test_acc5": 94.96200260620117, "epoch": 250, "n_parameters": 23674632} -{"train_lr": 0.0008322200632213884, "train_loss": 2.114326893068332, "test_loss": 0.7984738197516311, "test_acc1": 79.95000262786866, "test_acc5": 95.04400260070801, "epoch": 251, "n_parameters": 23674632} -{"train_lr": 0.000825383450475676, "train_loss": 2.113347759754728, "test_loss": 0.7965630231368722, "test_acc1": 80.14000260498047, "test_acc5": 94.98800266357422, "epoch": 252, "n_parameters": 23674632} -{"train_lr": 0.0008185555919871891, "train_loss": 2.105185791230697, "test_loss": 0.8083052842341589, "test_acc1": 79.77200253448487, "test_acc5": 94.86200240081787, "epoch": 253, "n_parameters": 23674632} -{"train_lr": 0.0008117368205361392, "train_loss": 2.1042456755058754, "test_loss": 0.8052445506733475, "test_acc1": 79.87600269256592, "test_acc5": 94.91000258453369, "epoch": 254, "n_parameters": 23674632} -{"train_lr": 0.0008049274684597493, "train_loss": 2.1031248433341223, "test_loss": 0.7936637092833266, "test_acc1": 80.17000256896972, "test_acc5": 95.08800253082275, "epoch": 255, "n_parameters": 23674632} -{"train_lr": 0.000798127867636333, "train_loss": 2.1005175944140775, "test_loss": 0.7921087618239901, 
"test_acc1": 80.05200267303466, "test_acc5": 95.090002789917, "epoch": 256, "n_parameters": 23674632} -{"train_lr": 0.0007913383494687635, "train_loss": 2.097981393992377, "test_loss": 0.7866097203258312, "test_acc1": 80.0520024609375, "test_acc5": 95.10000248718262, "epoch": 257, "n_parameters": 23674632} -{"train_lr": 0.0007845592448686416, "train_loss": 2.1039767914491114, "test_loss": 0.786056245987614, "test_acc1": 80.2220023965454, "test_acc5": 95.11600233184815, "epoch": 258, "n_parameters": 23674632} -{"train_lr": 0.0007777908842399036, "train_loss": 2.098961731917757, "test_loss": 0.7923772494788422, "test_acc1": 80.00800273620605, "test_acc5": 95.16200249572753, "epoch": 259, "n_parameters": 23674632} -{"train_lr": 0.000771033597462911, "train_loss": 2.0938470427342932, "test_loss": 0.7857769284058701, "test_acc1": 80.28000221862793, "test_acc5": 95.22000259521485, "epoch": 260, "n_parameters": 23674632} -{"train_lr": 0.000764287713878348, "train_loss": 2.09263975322723, "test_loss": 0.7790632988467361, "test_acc1": 80.39600231750488, "test_acc5": 95.27600254943847, "epoch": 261, "n_parameters": 23674632} -{"train_lr": 0.0007575535622709474, "train_loss": 2.082406867381861, "test_loss": 0.7944991311453509, "test_acc1": 80.23800281280518, "test_acc5": 95.12200255218505, "epoch": 262, "n_parameters": 23674632} -{"train_lr": 0.0007508314708538705, "train_loss": 2.0873756892770694, "test_loss": 0.7772344250909307, "test_acc1": 80.49000258666992, "test_acc5": 95.2080024508667, "epoch": 263, "n_parameters": 23674632} -{"train_lr": 0.000744121767252353, "train_loss": 2.0811384286907173, "test_loss": 0.78039043272535, "test_acc1": 80.41600258300781, "test_acc5": 95.2940025354004, "epoch": 264, "n_parameters": 23674632} -{"train_lr": 0.0007374247784878159, "train_loss": 2.0867376651623837, "test_loss": 0.7846199300591693, "test_acc1": 80.4060026159668, "test_acc5": 95.1860025857544, "epoch": 265, "n_parameters": 23674632} -{"train_lr": 0.0007307408309621095, "train_loss": 2.0806439400529215, "test_loss": 0.7780952513443701, "test_acc1": 80.38600259216308, "test_acc5": 95.2020025680542, "epoch": 266, "n_parameters": 23674632} -{"train_lr": 0.0007240702504413246, "train_loss": 2.0752149743856574, "test_loss": 0.7833857348922527, "test_acc1": 80.23000262756348, "test_acc5": 95.16800254821777, "epoch": 267, "n_parameters": 23674632} -{"train_lr": 0.0007174133620402375, "train_loss": 2.0764546583679846, "test_loss": 0.7674137339786147, "test_acc1": 80.53800240783691, "test_acc5": 95.31400269653321, "epoch": 268, "n_parameters": 23674632} -{"train_lr": 0.0007107704902061801, "train_loss": 2.0692254172311983, "test_loss": 0.7774832465431907, "test_acc1": 80.30000250854492, "test_acc5": 95.20600239074707, "epoch": 269, "n_parameters": 23674632} -{"train_lr": 0.0007041419587032991, "train_loss": 2.071122929120331, "test_loss": 0.7893438453814297, "test_acc1": 80.15600235321045, "test_acc5": 95.14000252990722, "epoch": 270, "n_parameters": 23674632} -{"train_lr": 0.000697528090596946, "train_loss": 2.071135453290219, "test_loss": 0.7769116281785748, "test_acc1": 80.3760025125122, "test_acc5": 95.22200256774903, "epoch": 271, "n_parameters": 23674632} -{"train_lr": 0.0006909292082376485, "train_loss": 2.0721463267918496, "test_loss": 0.7639604629102078, "test_acc1": 80.63200229309082, "test_acc5": 95.38000253601074, "epoch": 272, "n_parameters": 23674632} -{"train_lr": 0.0006843456332456282, "train_loss": 2.060156352579784, "test_loss": 0.7722312069752, "test_acc1": 80.60200253936767, "test_acc5": 
95.29000240570069, "epoch": 273, "n_parameters": 23674632} -{"train_lr": 0.0006777776864951223, "train_loss": 2.060504762793807, "test_loss": 0.7756212874682564, "test_acc1": 80.51400247558594, "test_acc5": 95.28400234802245, "epoch": 274, "n_parameters": 23674632} -{"train_lr": 0.0006712256880985121, "train_loss": 2.0513656612018125, "test_loss": 0.766961112166896, "test_acc1": 80.68000269439698, "test_acc5": 95.32000257110596, "epoch": 275, "n_parameters": 23674632} -{"train_lr": 0.0006646899573909693, "train_loss": 2.051346013288227, "test_loss": 0.7886920428524414, "test_acc1": 80.1720024963379, "test_acc5": 95.04800245635987, "epoch": 276, "n_parameters": 23674632} -{"train_lr": 0.0006581708129147805, "train_loss": 2.047327056241264, "test_loss": 0.7710158977883331, "test_acc1": 80.51800238464355, "test_acc5": 95.24400250213623, "epoch": 277, "n_parameters": 23674632} -{"train_lr": 0.0006516685724039192, "train_loss": 2.04984220172123, "test_loss": 0.7702161069733627, "test_acc1": 80.53800242401122, "test_acc5": 95.40000242858886, "epoch": 278, "n_parameters": 23674632} -{"train_lr": 0.0006451835527683242, "train_loss": 2.042706018961448, "test_loss": 0.7709774660003005, "test_acc1": 80.44400254669189, "test_acc5": 95.40600227600098, "epoch": 279, "n_parameters": 23674632} -{"train_lr": 0.0006387160700787323, "train_loss": 2.050115541231146, "test_loss": 0.7720029834432133, "test_acc1": 80.69000264556885, "test_acc5": 95.25800253112793, "epoch": 280, "n_parameters": 23674632} -{"train_lr": 0.0006322664395511658, "train_loss": 2.041179666862928, "test_loss": 0.7639348439759377, "test_acc1": 80.66800236236573, "test_acc5": 95.42400255645752, "epoch": 281, "n_parameters": 23674632} -{"train_lr": 0.0006258349755314805, "train_loss": 2.041423116418288, "test_loss": 0.7718627505907507, "test_acc1": 80.70600240203858, "test_acc5": 95.28400241973877, "epoch": 282, "n_parameters": 23674632} -{"train_lr": 0.000619421991480156, "train_loss": 2.0365088719019977, "test_loss": 0.7676312551466804, "test_acc1": 80.68800252044677, "test_acc5": 95.32800257080078, "epoch": 283, "n_parameters": 23674632} -{"train_lr": 0.0006130277999569868, "train_loss": 2.031454813042038, "test_loss": 0.7661263468026211, "test_acc1": 80.74800249176026, "test_acc5": 95.31400247741699, "epoch": 284, "n_parameters": 23674632} -{"train_lr": 0.0006066527126058782, "train_loss": 2.031157542422092, "test_loss": 0.7550599386520458, "test_acc1": 80.912002661438, "test_acc5": 95.45400251037597, "epoch": 285, "n_parameters": 23674632} -{"train_lr": 0.0006002970401395665, "train_loss": 2.0266579148366297, "test_loss": 0.7599297161919601, "test_acc1": 80.72400248962403, "test_acc5": 95.32800223052979, "epoch": 286, "n_parameters": 23674632} -{"train_lr": 0.0005939610923245708, "train_loss": 2.032529461577975, "test_loss": 0.7734424318892487, "test_acc1": 80.62800257110595, "test_acc5": 95.17600262176514, "epoch": 287, "n_parameters": 23674632} -{"train_lr": 0.0005876451779660561, "train_loss": 2.0300076566368555, "test_loss": 0.7603996418077837, "test_acc1": 80.87200254730224, "test_acc5": 95.53200247283935, "epoch": 288, "n_parameters": 23674632} -{"train_lr": 0.0005813496048927556, "train_loss": 2.028488085203939, "test_loss": 0.7555649057030678, "test_acc1": 81.05400250549316, "test_acc5": 95.39200247833251, "epoch": 289, "n_parameters": 23674632} -{"train_lr": 0.0005750746799420099, "train_loss": 2.0195631625698054, "test_loss": 0.7557831280723666, "test_acc1": 80.87000248413086, "test_acc5": 95.43600252441406, "epoch": 290, 
"n_parameters": 23674632} -{"train_lr": 0.0005688207089448558, "train_loss": 2.0168300025302064, "test_loss": 0.759918658232147, "test_acc1": 80.84400248321533, "test_acc5": 95.43000249389648, "epoch": 291, "n_parameters": 23674632} -{"train_lr": 0.0005625879967110111, "train_loss": 2.0201259323721117, "test_loss": 0.7507833955865918, "test_acc1": 81.19400234588623, "test_acc5": 95.46600256988525, "epoch": 292, "n_parameters": 23674632} -{"train_lr": 0.00055637684701403, "train_loss": 2.0222607224786597, "test_loss": 0.7656381076032465, "test_acc1": 80.704002371521, "test_acc5": 95.31000229736328, "epoch": 293, "n_parameters": 23674632} -{"train_lr": 0.0005501875625766011, "train_loss": 2.0107726173030196, "test_loss": 0.7489583162082867, "test_acc1": 81.14600249298095, "test_acc5": 95.47400245513916, "epoch": 294, "n_parameters": 23674632} -{"train_lr": 0.000544020445055746, "train_loss": 2.012718982708683, "test_loss": 0.747052573401368, "test_acc1": 80.96800248687744, "test_acc5": 95.5300026196289, "epoch": 295, "n_parameters": 23674632} -{"train_lr": 0.0005378757950280543, "train_loss": 2.0039056176476056, "test_loss": 0.7543544383776007, "test_acc1": 81.05400259857177, "test_acc5": 95.54200259552002, "epoch": 296, "n_parameters": 23674632} -{"train_lr": 0.0005317539119750884, "train_loss": 2.0093490477624556, "test_loss": 0.7521801323940357, "test_acc1": 81.1440026361084, "test_acc5": 95.54600251312256, "epoch": 297, "n_parameters": 23674632} -{"train_lr": 0.0005256550942687951, "train_loss": 2.0053137017549467, "test_loss": 0.7518560218088555, "test_acc1": 81.03600251190186, "test_acc5": 95.58400247741699, "epoch": 298, "n_parameters": 23674632} -{"train_lr": 0.0005195796391569505, "train_loss": 2.0019812659262946, "test_loss": 0.744069047893087, "test_acc1": 81.1640023904419, "test_acc5": 95.66400237792969, "epoch": 299, "n_parameters": 23674632} -{"train_lr": 0.0005135278427487069, "train_loss": 1.996284485923396, "test_loss": 0.7516894224240924, "test_acc1": 81.1180023272705, "test_acc5": 95.57200276428223, "epoch": 300, "n_parameters": 23674632} -{"train_lr": 0.0005074999999999948, "train_loss": 1.9925259367691623, "test_loss": 0.7393281884830106, "test_acc1": 81.23400264434814, "test_acc5": 95.60400270935058, "epoch": 301, "n_parameters": 23674632} -{"train_lr": 0.0005014964046994335, "train_loss": 1.9872543028087544, "test_loss": 0.7472640184516256, "test_acc1": 81.2740024130249, "test_acc5": 95.41800226989746, "epoch": 302, "n_parameters": 23674632} -{"train_lr": 0.0004955173494537805, "train_loss": 1.9928264976572172, "test_loss": 0.7444869947027076, "test_acc1": 81.16200235717774, "test_acc5": 95.59400264434815, "epoch": 303, "n_parameters": 23674632} -{"train_lr": 0.0004895631256737284, "train_loss": 1.9911339553259164, "test_loss": 0.7488920013561393, "test_acc1": 81.26600252258301, "test_acc5": 95.48200230438232, "epoch": 304, "n_parameters": 23674632} -{"train_lr": 0.00048363402355977526, "train_loss": 1.987968945734316, "test_loss": 0.7433947483359864, "test_acc1": 81.32000244232178, "test_acc5": 95.53400219573975, "epoch": 305, "n_parameters": 23674632} -{"train_lr": 0.000477730332087969, "train_loss": 1.9896936054578978, "test_loss": 0.7391428954786423, "test_acc1": 81.38000249969483, "test_acc5": 95.67200240020752, "epoch": 306, "n_parameters": 23674632} -{"train_lr": 0.00047185233899589926, "train_loss": 1.9731915495176013, "test_loss": 0.7345887393775311, "test_acc1": 81.48000252716065, "test_acc5": 95.62000279266357, "epoch": 307, "n_parameters": 23674632} 
-{"train_lr": 0.0004660003307686737, "train_loss": 1.9758650222056204, "test_loss": 0.7309022429540302, "test_acc1": 81.42800230926514, "test_acc5": 95.75000234771728, "epoch": 308, "n_parameters": 23674632} -{"train_lr": 0.00046017459262491967, "train_loss": 1.9804340221172423, "test_loss": 0.7335097449972774, "test_acc1": 81.4140024697876, "test_acc5": 95.69800232299805, "epoch": 309, "n_parameters": 23674632} -{"train_lr": 0.00045437540850286774, "train_loss": 1.9776329848763468, "test_loss": 0.741749818295692, "test_acc1": 81.24800240478515, "test_acc5": 95.63000274383545, "epoch": 310, "n_parameters": 23674632} -{"train_lr": 0.0004486030610466103, "train_loss": 1.9650027861567043, "test_loss": 0.7359824680695028, "test_acc1": 81.40800257080078, "test_acc5": 95.68000255218506, "epoch": 311, "n_parameters": 23674632} -{"train_lr": 0.00044285783159218775, "train_loss": 1.972480813864705, "test_loss": 0.7415021563118155, "test_acc1": 81.32400235198975, "test_acc5": 95.61200260162353, "epoch": 312, "n_parameters": 23674632} -{"train_lr": 0.0004371400001539953, "train_loss": 1.9583111328663205, "test_loss": 0.7332714551100226, "test_acc1": 81.54000269470215, "test_acc5": 95.62400229858399, "epoch": 313, "n_parameters": 23674632} -{"train_lr": 0.00043144984541105375, "train_loss": 1.9637609720587446, "test_loss": 0.7240908502629309, "test_acc1": 81.57800260894776, "test_acc5": 95.7680026889038, "epoch": 314, "n_parameters": 23674632} -{"train_lr": 0.0004257876446934883, "train_loss": 1.9608476222288027, "test_loss": 0.7302970041831335, "test_acc1": 81.42400261871337, "test_acc5": 95.8060024835205, "epoch": 315, "n_parameters": 23674632} -{"train_lr": 0.000420153673968981, "train_loss": 1.9551541439825586, "test_loss": 0.7196315389803865, "test_acc1": 81.76800264251709, "test_acc5": 95.77800247924804, "epoch": 316, "n_parameters": 23674632} -{"train_lr": 0.00041454820782932107, "train_loss": 1.958623132271637, "test_loss": 0.7391634765786655, "test_acc1": 81.25200266571045, "test_acc5": 95.66600258789063, "epoch": 317, "n_parameters": 23674632} -{"train_lr": 0.00040897151947699516, "train_loss": 1.9552327406218202, "test_loss": 0.7423901235063871, "test_acc1": 81.40800244598388, "test_acc5": 95.62400256988525, "epoch": 318, "n_parameters": 23674632} -{"train_lr": 0.0004034238807119475, "train_loss": 1.9526871408609083, "test_loss": 0.7242432410518328, "test_acc1": 81.6540025881958, "test_acc5": 95.83800248016358, "epoch": 319, "n_parameters": 23674632} -{"train_lr": 0.0003979055619182519, "train_loss": 1.949179187405119, "test_loss": 0.7255946849896149, "test_acc1": 81.55200247497558, "test_acc5": 95.83400231323242, "epoch": 320, "n_parameters": 23674632} -{"train_lr": 0.00039241683205097877, "train_loss": 1.9444169278958623, "test_loss": 0.7235599182771913, "test_acc1": 81.8220024343872, "test_acc5": 95.79400242675781, "epoch": 321, "n_parameters": 23674632} -{"train_lr": 0.0003869579586230879, "train_loss": 1.941482362832478, "test_loss": 0.7231330106204207, "test_acc1": 81.71400271209717, "test_acc5": 95.82800259307861, "epoch": 322, "n_parameters": 23674632} -{"train_lr": 0.0003815292076923558, "train_loss": 1.9401421858657368, "test_loss": 0.727745499518333, "test_acc1": 81.67400260864258, "test_acc5": 95.79600258911132, "epoch": 323, "n_parameters": 23674632} -{"train_lr": 0.00037613084384846844, "train_loss": 1.9426356096276276, "test_loss": 0.7182148872225573, "test_acc1": 81.85000251983642, "test_acc5": 95.77200224578857, "epoch": 324, "n_parameters": 23674632} -{"train_lr": 
0.00037076313020005785, "train_loss": 1.936991665086491, "test_loss": 0.7194784415152037, "test_acc1": 81.81000263122559, "test_acc5": 95.77600247833252, "epoch": 325, "n_parameters": 23674632} -{"train_lr": 0.000365426328361895, "train_loss": 1.933695569289007, "test_loss": 0.7187451961817164, "test_acc1": 81.83600255096435, "test_acc5": 95.74000264007569, "epoch": 326, "n_parameters": 23674632} -{"train_lr": 0.00036012069844219694, "train_loss": 1.9302988845667393, "test_loss": 0.7230738088149916, "test_acc1": 81.78000258605957, "test_acc5": 95.75400237915039, "epoch": 327, "n_parameters": 23674632} -{"train_lr": 0.0003548464990298402, "train_loss": 1.9288869658224492, "test_loss": 0.7180854063481092, "test_acc1": 81.8540025805664, "test_acc5": 95.85200257263183, "epoch": 328, "n_parameters": 23674632} -{"train_lr": 0.0003496039871819201, "train_loss": 1.9225422619022339, "test_loss": 0.7144778618645488, "test_acc1": 81.84400254760742, "test_acc5": 95.89400262207032, "epoch": 329, "n_parameters": 23674632} -{"train_lr": 0.00034439341841102536, "train_loss": 1.9181471513472588, "test_loss": 0.7125816913603833, "test_acc1": 81.93400239440918, "test_acc5": 95.91000264282226, "epoch": 330, "n_parameters": 23674632} -{"train_lr": 0.0003392150466729432, "train_loss": 1.9207267479347192, "test_loss": 0.7170561300872853, "test_acc1": 81.96600266448975, "test_acc5": 95.80800251647949, "epoch": 331, "n_parameters": 23674632} -{"train_lr": 0.00033406912435418937, "train_loss": 1.917569068207634, "test_loss": 0.7124018072517533, "test_acc1": 81.99800246307373, "test_acc5": 95.89800251953125, "epoch": 332, "n_parameters": 23674632} -{"train_lr": 0.0003289559022597874, "train_loss": 1.9096247925699281, "test_loss": 0.7116374610499903, "test_acc1": 81.9920024710083, "test_acc5": 95.85400264007568, "epoch": 333, "n_parameters": 23674632} -{"train_lr": 0.00032387562960095595, "train_loss": 1.9140368478713656, "test_loss": 0.7077419345803333, "test_acc1": 82.02600250579835, "test_acc5": 95.88400258850098, "epoch": 334, "n_parameters": 23674632} -{"train_lr": 0.0003188285539830153, "train_loss": 1.9135553209746865, "test_loss": 0.711362985329646, "test_acc1": 81.98600245513916, "test_acc5": 95.85800263122559, "epoch": 335, "n_parameters": 23674632} -{"train_lr": 0.00031381492139330404, "train_loss": 1.9082496725112128, "test_loss": 0.7080927188084885, "test_acc1": 82.03600249511719, "test_acc5": 95.94800237884522, "epoch": 336, "n_parameters": 23674632} -{"train_lr": 0.0003088349761892061, "train_loss": 1.9099590593664575, "test_loss": 0.7153508064308853, "test_acc1": 82.03400252319337, "test_acc5": 95.85400266937256, "epoch": 337, "n_parameters": 23674632} -{"train_lr": 0.0003038889610862266, "train_loss": 1.904867643283473, "test_loss": 0.7109595400591692, "test_acc1": 82.12000261932373, "test_acc5": 95.98000232574464, "epoch": 338, "n_parameters": 23674632} -{"train_lr": 0.00029897711714615987, "train_loss": 1.8982809147865272, "test_loss": 0.7028659302741289, "test_acc1": 82.22600240600586, "test_acc5": 95.9880025982666, "epoch": 339, "n_parameters": 23674632} -{"train_lr": 0.00029409968376536603, "train_loss": 1.8976688618056303, "test_loss": 0.7080865487117659, "test_acc1": 81.97200250610352, "test_acc5": 95.97000247192383, "epoch": 340, "n_parameters": 23674632} -{"train_lr": 0.0002892568986630472, "train_loss": 1.903039646687077, "test_loss": 0.7110782011101643, "test_acc1": 82.07000266937256, "test_acc5": 95.97800263549804, "epoch": 341, "n_parameters": 23674632} -{"train_lr": 
0.000284448997869723, "train_loss": 1.8884865178121366, "test_loss": 0.706154797113303, "test_acc1": 82.28000267028808, "test_acc5": 95.9600027758789, "epoch": 342, "n_parameters": 23674632} -{"train_lr": 0.00027967621571569273, "train_loss": 1.8901419911465103, "test_loss": 0.6999441826659621, "test_acc1": 82.15800235809326, "test_acc5": 96.006002472229, "epoch": 343, "n_parameters": 23674632} -{"train_lr": 0.00027493878481963996, "train_loss": 1.8832271987919231, "test_loss": 0.7064303651339177, "test_acc1": 82.10600238952637, "test_acc5": 96.10600244537353, "epoch": 344, "n_parameters": 23674632} -{"train_lr": 0.00027023693607724944, "train_loss": 1.881001831411267, "test_loss": 0.7004713770566564, "test_acc1": 82.21200233703614, "test_acc5": 96.09800269714356, "epoch": 345, "n_parameters": 23674632} -{"train_lr": 0.0002655708986499856, "train_loss": 1.886350481856546, "test_loss": 0.7044994231087692, "test_acc1": 82.0620024520874, "test_acc5": 96.00600237823487, "epoch": 346, "n_parameters": 23674632} -{"train_lr": 0.0002609408999539663, "train_loss": 1.881041363149667, "test_loss": 0.6991715684646007, "test_acc1": 82.14600271697998, "test_acc5": 96.04200231872558, "epoch": 347, "n_parameters": 23674632} -{"train_lr": 0.0002563471656487584, "train_loss": 1.8773427491994212, "test_loss": 0.6968541063481208, "test_acc1": 82.43800248931885, "test_acc5": 96.20600273254395, "epoch": 348, "n_parameters": 23674632} -{"train_lr": 0.00025178991962650487, "train_loss": 1.8779494834425066, "test_loss": 0.6987068550943425, "test_acc1": 82.27600254058838, "test_acc5": 96.02200250579834, "epoch": 349, "n_parameters": 23674632} -{"train_lr": 0.0002472693840009418, "train_loss": 1.8746696807724013, "test_loss": 0.6917445482968381, "test_acc1": 82.47600264465332, "test_acc5": 96.1660026513672, "epoch": 350, "n_parameters": 23674632} -{"train_lr": 0.0002427857790966209, "train_loss": 1.8654595921508415, "test_loss": 0.6927531739188866, "test_acc1": 82.42000219543458, "test_acc5": 96.13000245544434, "epoch": 351, "n_parameters": 23674632} -{"train_lr": 0.00023833932343808706, "train_loss": 1.863659268422283, "test_loss": 0.697753064729499, "test_acc1": 82.36600271026612, "test_acc5": 96.06600265686035, "epoch": 352, "n_parameters": 23674632} -{"train_lr": 0.00023393023373934857, "train_loss": 1.8595573313885647, "test_loss": 0.6945693195995056, "test_acc1": 82.41800260253906, "test_acc5": 96.07200258148194, "epoch": 353, "n_parameters": 23674632} -{"train_lr": 0.00022955872489318366, "train_loss": 1.8624185261954602, "test_loss": 0.7003068047378099, "test_acc1": 82.40600241271973, "test_acc5": 96.07600257965088, "epoch": 354, "n_parameters": 23674632} -{"train_lr": 0.00022522500996078677, "train_loss": 1.8634622599795567, "test_loss": 0.6892498753626238, "test_acc1": 82.51800229278564, "test_acc5": 96.15000251739502, "epoch": 355, "n_parameters": 23674632} -{"train_lr": 0.00022092930016131605, "train_loss": 1.8644092223770041, "test_loss": 0.6933146268129349, "test_acc1": 82.46600254516602, "test_acc5": 96.15400234527588, "epoch": 356, "n_parameters": 23674632} -{"train_lr": 0.0002166718048615883, "train_loss": 1.847170227782713, "test_loss": 0.6874493380839174, "test_acc1": 82.57400269195557, "test_acc5": 96.17400239959717, "epoch": 357, "n_parameters": 23674632} -{"train_lr": 0.00021245273156592372, "train_loss": 1.8520108368947543, "test_loss": 0.6916374932184364, "test_acc1": 82.44800245697022, "test_acc5": 96.15000251556397, "epoch": 358, "n_parameters": 23674632} -{"train_lr": 
0.00020827228590600882, "train_loss": 1.853487022328291, "test_loss": 0.6893974201697292, "test_acc1": 82.48000271331787, "test_acc5": 96.13400234436035, "epoch": 359, "n_parameters": 23674632} -{"train_lr": 0.00020413067163086113, "train_loss": 1.8461410176279924, "test_loss": 0.6855263475215796, "test_acc1": 82.63400242004394, "test_acc5": 96.21400255035401, "epoch": 360, "n_parameters": 23674632} -{"train_lr": 0.00020002809059693196, "train_loss": 1.8443448882487465, "test_loss": 0.6876265536429305, "test_acc1": 82.65600260437012, "test_acc5": 96.24000255401612, "epoch": 361, "n_parameters": 23674632} -{"train_lr": 0.00019596474275820886, "train_loss": 1.8435451124342892, "test_loss": 0.6874730902526415, "test_acc1": 82.67400240325928, "test_acc5": 96.21000244384766, "epoch": 362, "n_parameters": 23674632} -{"train_lr": 0.00019194082615654084, "train_loss": 1.843477178070185, "test_loss": 0.6861627658998425, "test_acc1": 82.72800266693115, "test_acc5": 96.22600236846924, "epoch": 363, "n_parameters": 23674632} -{"train_lr": 0.00018795653691196918, "train_loss": 1.8390061163204274, "test_loss": 0.6824297072986761, "test_acc1": 82.61600239837647, "test_acc5": 96.28800269439698, "epoch": 364, "n_parameters": 23674632} -{"train_lr": 0.00018401206921309264, "train_loss": 1.839283546586212, "test_loss": 0.6851596362663038, "test_acc1": 82.67400249084473, "test_acc5": 96.23800279083252, "epoch": 365, "n_parameters": 23674632} -{"train_lr": 0.00018010761530773406, "train_loss": 1.8332490649321478, "test_loss": 0.6799759033390067, "test_acc1": 82.59800254425049, "test_acc5": 96.28200265075684, "epoch": 366, "n_parameters": 23674632} -{"train_lr": 0.00017624336549345505, "train_loss": 1.8331825701976947, "test_loss": 0.6842304366556081, "test_acc1": 82.8240027331543, "test_acc5": 96.24800240844726, "epoch": 367, "n_parameters": 23674632} -{"train_lr": 0.00017241950810833412, "train_loss": 1.83181582714061, "test_loss": 0.678817053127921, "test_acc1": 82.75400251464843, "test_acc5": 96.28400246582031, "epoch": 368, "n_parameters": 23674632} -{"train_lr": 0.0001686362295217918, "train_loss": 1.8361900211226263, "test_loss": 0.6785438204692169, "test_acc1": 82.73400261444093, "test_acc5": 96.2740026159668, "epoch": 369, "n_parameters": 23674632} -{"train_lr": 0.00016489371412549964, "train_loss": 1.8259906530630388, "test_loss": 0.6755713010489037, "test_acc1": 82.90600258361816, "test_acc5": 96.29400248016357, "epoch": 370, "n_parameters": 23674632} -{"train_lr": 0.00016119214432436035, "train_loss": 1.8217629744864101, "test_loss": 0.675348197194663, "test_acc1": 82.9760026763916, "test_acc5": 96.2960026940918, "epoch": 371, "n_parameters": 23674632} -{"train_lr": 0.0001575317005276713, "train_loss": 1.8218710842333157, "test_loss": 0.6788157134345083, "test_acc1": 82.80000264465332, "test_acc5": 96.30800245361328, "epoch": 372, "n_parameters": 23674632} -{"train_lr": 0.0001539125611402985, "train_loss": 1.8142692350941025, "test_loss": 0.6779614267024127, "test_acc1": 82.94400263763428, "test_acc5": 96.27400266845703, "epoch": 373, "n_parameters": 23674632} -{"train_lr": 0.00015033490255398664, "train_loss": 1.81811164713187, "test_loss": 0.6771427145735784, "test_acc1": 82.86400235595703, "test_acc5": 96.29400250946046, "epoch": 374, "n_parameters": 23674632} -{"train_lr": 0.00014679889913877598, "train_loss": 1.8152869669713085, "test_loss": 0.6784232467074286, "test_acc1": 82.90600225494384, "test_acc5": 96.23200253601074, "epoch": 375, "n_parameters": 23674632} -{"train_lr": 
0.0001433047232344831, "train_loss": 1.8115896435557224, "test_loss": 0.6769774670176434, "test_acc1": 82.92800241394043, "test_acc5": 96.27000252868652, "epoch": 376, "n_parameters": 23674632} -{"train_lr": 0.00013985254514230816, "train_loss": 1.8154216779650545, "test_loss": 0.6766265005324826, "test_acc1": 83.0260025744629, "test_acc5": 96.27000234283447, "epoch": 377, "n_parameters": 23674632} -{"train_lr": 0.00013644253311653802, "train_loss": 1.8023448365983916, "test_loss": 0.6739698515809847, "test_acc1": 82.93800249084472, "test_acc5": 96.27200253540039, "epoch": 378, "n_parameters": 23674632} -{"train_lr": 0.0001330748533563583, "train_loss": 1.8045401188740722, "test_loss": 0.6748732139328213, "test_acc1": 82.9580024633789, "test_acc5": 96.26800258758544, "epoch": 379, "n_parameters": 23674632} -{"train_lr": 0.0001297496699977167, "train_loss": 1.7962062672483359, "test_loss": 0.6768250010456099, "test_acc1": 82.8620025289917, "test_acc5": 96.26600251312256, "epoch": 380, "n_parameters": 23674632} -{"train_lr": 0.00012646714510536547, "train_loss": 1.794834685566233, "test_loss": 0.6730091811129542, "test_acc1": 82.90800250213623, "test_acc5": 96.28000254943848, "epoch": 381, "n_parameters": 23674632} -{"train_lr": 0.00012322743866494126, "train_loss": 1.8014631235211207, "test_loss": 0.6695989632245266, "test_acc1": 82.9380024093628, "test_acc5": 96.37000252105713, "epoch": 382, "n_parameters": 23674632} -{"train_lr": 0.0001200307085751616, "train_loss": 1.7956985593747368, "test_loss": 0.6721737783853755, "test_acc1": 83.10000263671876, "test_acc5": 96.35000235778809, "epoch": 383, "n_parameters": 23674632} -{"train_lr": 0.0001168771106401347, "train_loss": 1.7954405772981408, "test_loss": 0.6728786354263624, "test_acc1": 82.96400258087158, "test_acc5": 96.36200250518799, "epoch": 384, "n_parameters": 23674632} -{"train_lr": 0.00011376679856178666, "train_loss": 1.790133726349075, "test_loss": 0.6672320338248303, "test_acc1": 83.04200238372803, "test_acc5": 96.40800239837647, "epoch": 385, "n_parameters": 23674632} -{"train_lr": 0.000110699923932329, "train_loss": 1.785875902133737, "test_loss": 0.6680467110127211, "test_acc1": 83.1580026574707, "test_acc5": 96.35000241638184, "epoch": 386, "n_parameters": 23674632} -{"train_lr": 0.00010767663622690869, "train_loss": 1.7865205761923684, "test_loss": 0.666129750735832, "test_acc1": 83.2380025201416, "test_acc5": 96.39000260467529, "epoch": 387, "n_parameters": 23674632} -{"train_lr": 0.00010469708279631165, "train_loss": 1.7841396909906901, "test_loss": 0.6661653956116149, "test_acc1": 83.14800243255615, "test_acc5": 96.41200252197265, "epoch": 388, "n_parameters": 23674632} -{"train_lr": 0.00010176140885975717, "train_loss": 1.7796134863325732, "test_loss": 0.6649694731176803, "test_acc1": 83.24400249206543, "test_acc5": 96.4160025415039, "epoch": 389, "n_parameters": 23674632} -{"train_lr": 9.88697574978549e-05, "train_loss": 1.7866669231824737, "test_loss": 0.6638856508176435, "test_acc1": 83.24600256744385, "test_acc5": 96.44000265777588, "epoch": 390, "n_parameters": 23674632} -{"train_lr": 9.602226964561253e-05, "train_loss": 1.7855164160807546, "test_loss": 0.6678475582915725, "test_acc1": 83.10000270324707, "test_acc5": 96.36200265075684, "epoch": 391, "n_parameters": 23674632} -{"train_lr": 9.321908408557381e-05, "train_loss": 1.77933728190468, "test_loss": 0.6667322442499977, "test_acc1": 83.17400275848388, "test_acc5": 96.37200249542236, "epoch": 392, "n_parameters": 23674632} -{"train_lr": 9.046033744104409e-05, 
"train_loss": 1.780306733757567, "test_loss": 0.663859857415611, "test_acc1": 83.16400244598388, "test_acc5": 96.44000258605956, "epoch": 393, "n_parameters": 23674632} -{"train_lr": 8.774616416944166e-05, "train_loss": 1.7691684998125194, "test_loss": 0.6649593329903755, "test_acc1": 83.1480027609253, "test_acc5": 96.47400270568848, "epoch": 394, "n_parameters": 23674632} -{"train_lr": 8.507669655574727e-05, "train_loss": 1.7743383791985559, "test_loss": 0.6654665038892718, "test_acc1": 83.21200254821777, "test_acc5": 96.40000226348877, "epoch": 395, "n_parameters": 23674632} -{"train_lr": 8.2452064706046e-05, "train_loss": 1.772549662313206, "test_loss": 0.6666285457710425, "test_acc1": 83.08000267211914, "test_acc5": 96.39200251129151, "epoch": 396, "n_parameters": 23674632} -{"train_lr": 7.98723965411878e-05, "train_loss": 1.767426280237788, "test_loss": 0.6648095601774526, "test_acc1": 83.24200235931397, "test_acc5": 96.45800258605956, "epoch": 397, "n_parameters": 23674632} -{"train_lr": 7.73378177905671e-05, "train_loss": 1.7617858628110348, "test_loss": 0.6664955999815103, "test_acc1": 83.15200255523682, "test_acc5": 96.43400261322022, "epoch": 398, "n_parameters": 23674632} -{"train_lr": 7.484845198596496e-05, "train_loss": 1.7696496462054867, "test_loss": 0.6624569619130908, "test_acc1": 83.27000261138916, "test_acc5": 96.4520026437378, "epoch": 399, "n_parameters": 23674632} -{"train_lr": 7.240442045557008e-05, "train_loss": 1.7613771274018821, "test_loss": 0.6638359359719537, "test_acc1": 83.20000272277832, "test_acc5": 96.45000264801025, "epoch": 400, "n_parameters": 23674632} -{"train_lr": 7.000584231801986e-05, "train_loss": 1.761879297915837, "test_loss": 0.6613105781602137, "test_acc1": 83.21200258270264, "test_acc5": 96.42800244567871, "epoch": 401, "n_parameters": 23674632} -{"train_lr": 6.765283447664094e-05, "train_loss": 1.757611952299218, "test_loss": 0.6638948607512496, "test_acc1": 83.21000247070313, "test_acc5": 96.43600240997314, "epoch": 402, "n_parameters": 23674632} -{"train_lr": 6.534551161370524e-05, "train_loss": 1.7622877960856393, "test_loss": 0.6574116071516817, "test_acc1": 83.42200270080566, "test_acc5": 96.5100023751831, "epoch": 403, "n_parameters": 23674632} -{"train_lr": 6.308398618488081e-05, "train_loss": 1.7590083337599616, "test_loss": 0.6591561624893185, "test_acc1": 83.27200257629394, "test_acc5": 96.53600271636962, "epoch": 404, "n_parameters": 23674632} -{"train_lr": 6.0868368413726135e-05, "train_loss": 1.7529776289487342, "test_loss": 0.663204971616241, "test_acc1": 83.28000273986817, "test_acc5": 96.49600248565673, "epoch": 405, "n_parameters": 23674632} -{"train_lr": 5.8698766286322e-05, "train_loss": 1.7602134907274223, "test_loss": 0.6568176426614324, "test_acc1": 83.37400237823486, "test_acc5": 96.54400260894775, "epoch": 406, "n_parameters": 23674632} -{"train_lr": 5.6575285546019094e-05, "train_loss": 1.7567870839953803, "test_loss": 0.6609503267276468, "test_acc1": 83.4680024331665, "test_acc5": 96.438002583313, "epoch": 407, "n_parameters": 23674632} -{"train_lr": 5.4498029688267055e-05, "train_loss": 1.7515891516249147, "test_loss": 0.6601412367696563, "test_acc1": 83.31400242156982, "test_acc5": 96.3960024005127, "epoch": 408, "n_parameters": 23674632} -{"train_lr": 5.246709995559324e-05, "train_loss": 1.7521577046024237, "test_loss": 0.6559221116600163, "test_acc1": 83.36000256652832, "test_acc5": 96.48200237182617, "epoch": 409, "n_parameters": 23674632} -{"train_lr": 5.0482595332642215e-05, "train_loss": 1.746906806203387, 
"test_loss": 0.6590257895207314, "test_acc1": 83.32400264770507, "test_acc5": 96.45400254333497, "epoch": 410, "n_parameters": 23674632} -{"train_lr": 4.854461254137116e-05, "train_loss": 1.7456298677755488, "test_loss": 0.6593245894394137, "test_acc1": 83.43600228515625, "test_acc5": 96.53000246887207, "epoch": 411, "n_parameters": 23674632} -{"train_lr": 4.6653246036330756e-05, "train_loss": 1.7490537037833227, "test_loss": 0.6563722186185645, "test_acc1": 83.376002394104, "test_acc5": 96.53400257629394, "epoch": 412, "n_parameters": 23674632} -{"train_lr": 4.48085880000513e-05, "train_loss": 1.745370429583924, "test_loss": 0.6554916786933036, "test_acc1": 83.44400242004394, "test_acc5": 96.52600230285644, "epoch": 413, "n_parameters": 23674632} -{"train_lr": 4.3010728338564125e-05, "train_loss": 1.743386061428834, "test_loss": 0.660346466709267, "test_acc1": 83.35800262939453, "test_acc5": 96.53400234191895, "epoch": 414, "n_parameters": 23674632} -{"train_lr": 4.125975467701182e-05, "train_loss": 1.7392673734411728, "test_loss": 0.6581653400578282, "test_acc1": 83.3900026385498, "test_acc5": 96.526002315979, "epoch": 415, "n_parameters": 23674632} -{"train_lr": 3.955575235538311e-05, "train_loss": 1.7411868051647377, "test_loss": 0.6552334422753616, "test_acc1": 83.41000245819092, "test_acc5": 96.54000214904785, "epoch": 416, "n_parameters": 23674632} -{"train_lr": 3.78988044243475e-05, "train_loss": 1.7415525764941597, "test_loss": 0.6565157809492314, "test_acc1": 83.43000256988526, "test_acc5": 96.52600236053466, "epoch": 417, "n_parameters": 23674632} -{"train_lr": 3.628899164120446e-05, "train_loss": 1.739362525205008, "test_loss": 0.6584816938830595, "test_acc1": 83.50600291290283, "test_acc5": 96.53600235168457, "epoch": 418, "n_parameters": 23674632} -{"train_lr": 3.472639246596378e-05, "train_loss": 1.7420617556400437, "test_loss": 0.6563458379188721, "test_acc1": 83.42600255859375, "test_acc5": 96.50600234466553, "epoch": 419, "n_parameters": 23674632} -{"train_lr": 3.321108305750343e-05, "train_loss": 1.7309157088053502, "test_loss": 0.654438120792761, "test_acc1": 83.43800239074707, "test_acc5": 96.5420024911499, "epoch": 420, "n_parameters": 23674632} -{"train_lr": 3.174313726986302e-05, "train_loss": 1.7367473230194703, "test_loss": 0.6551916318465815, "test_acc1": 83.46000270233154, "test_acc5": 96.55600229278565, "epoch": 421, "n_parameters": 23674632} -{"train_lr": 3.0322626648653058e-05, "train_loss": 1.7350506001739479, "test_loss": 0.6538615852142825, "test_acc1": 83.44800221252441, "test_acc5": 96.53000260528565, "epoch": 422, "n_parameters": 23674632} -{"train_lr": 2.8949620427554466e-05, "train_loss": 1.7390320242809163, "test_loss": 0.6534283785431674, "test_acc1": 83.47600234802246, "test_acc5": 96.52600253631591, "epoch": 423, "n_parameters": 23674632} -{"train_lr": 2.762418552495418e-05, "train_loss": 1.7331759886299487, "test_loss": 0.6527249638386297, "test_acc1": 83.47600252502441, "test_acc5": 96.53600256988526, "epoch": 424, "n_parameters": 23674632} -{"train_lr": 2.6346386540681185e-05, "train_loss": 1.7283465054145248, "test_loss": 0.653346730283264, "test_acc1": 83.46200229492187, "test_acc5": 96.55400243865967, "epoch": 425, "n_parameters": 23674632} -{"train_lr": 2.511628575285337e-05, "train_loss": 1.7274134349670531, "test_loss": 0.6535363888198679, "test_acc1": 83.54400241851806, "test_acc5": 96.53800243927002, "epoch": 426, "n_parameters": 23674632} -{"train_lr": 2.3933943114847106e-05, "train_loss": 1.7264788519087837, "test_loss": 
0.6531774646348574, "test_acc1": 83.55200245758057, "test_acc5": 96.52600236846924, "epoch": 427, "n_parameters": 23674632} -{"train_lr": 2.279941625237679e-05, "train_loss": 1.7316904250178025, "test_loss": 0.6543428979136727, "test_acc1": 83.46800271423339, "test_acc5": 96.52000242004395, "epoch": 428, "n_parameters": 23674632} -{"train_lr": 2.171276046068042e-05, "train_loss": 1.7293563828193883, "test_loss": 0.6530164526211042, "test_acc1": 83.47200260742187, "test_acc5": 96.51000235504151, "epoch": 429, "n_parameters": 23674632} -{"train_lr": 2.0674028701827162e-05, "train_loss": 1.7336264221335678, "test_loss": 0.653721841193284, "test_acc1": 83.51200253204345, "test_acc5": 96.53000235778809, "epoch": 430, "n_parameters": 23674632} -{"train_lr": 1.968327160213717e-05, "train_loss": 1.7250373499225273, "test_loss": 0.655056933533739, "test_acc1": 83.51400260742187, "test_acc5": 96.57000234893799, "epoch": 431, "n_parameters": 23674632} -{"train_lr": 1.8740537449716184e-05, "train_loss": 1.7279878530463726, "test_loss": 0.6521409600408692, "test_acc1": 83.51800244293213, "test_acc5": 96.55200243682862, "epoch": 432, "n_parameters": 23674632} -{"train_lr": 1.784587219209501e-05, "train_loss": 1.7261477295133612, "test_loss": 0.6528385120980216, "test_acc1": 83.52800265350342, "test_acc5": 96.54400236846924, "epoch": 433, "n_parameters": 23674632} -{"train_lr": 1.699931943399582e-05, "train_loss": 1.7237375965781159, "test_loss": 0.6504903594596367, "test_acc1": 83.63200243011475, "test_acc5": 96.58600237457276, "epoch": 434, "n_parameters": 23674632} -{"train_lr": 1.6200920435206837e-05, "train_loss": 1.7262302422814138, "test_loss": 0.651487850499424, "test_acc1": 83.56600261291504, "test_acc5": 96.52000256744385, "epoch": 435, "n_parameters": 23674632} -{"train_lr": 1.545071410856787e-05, "train_loss": 1.7229251380292154, "test_loss": 0.6537092885171826, "test_acc1": 83.51000252929687, "test_acc5": 96.51600246520997, "epoch": 436, "n_parameters": 23674632} -{"train_lr": 1.4748737018078088e-05, "train_loss": 1.7272362805349077, "test_loss": 0.6515299257624781, "test_acc1": 83.55800258087159, "test_acc5": 96.54600249542236, "epoch": 437, "n_parameters": 23674632} -{"train_lr": 1.4095023377109483e-05, "train_loss": 1.7256559643575804, "test_loss": 0.650579965955606, "test_acc1": 83.53000245117188, "test_acc5": 96.51200244750977, "epoch": 438, "n_parameters": 23674632} -{"train_lr": 1.3489605046743441e-05, "train_loss": 1.723962129156986, "test_loss": 0.6506682271137834, "test_acc1": 83.564002734375, "test_acc5": 96.54000246612549, "epoch": 439, "n_parameters": 23674632} -{"train_lr": 1.2932511534214492e-05, "train_loss": 1.7213228724116711, "test_loss": 0.6515475463009242, "test_acc1": 83.49800257385255, "test_acc5": 96.54000250152588, "epoch": 440, "n_parameters": 23674632} -{"train_lr": 1.2423769991474675e-05, "train_loss": 1.7271800329156821, "test_loss": 0.6506823117308544, "test_acc1": 83.46800251525879, "test_acc5": 96.56000247497559, "epoch": 441, "n_parameters": 23674632} -{"train_lr": 1.1963405213869678e-05, "train_loss": 1.7266957349926353, "test_loss": 0.649017399515618, "test_acc1": 83.58600250549317, "test_acc5": 96.55400225463868, "epoch": 442, "n_parameters": 23674632} -{"train_lr": 1.1551439638929232e-05, "train_loss": 1.7216172909839071, "test_loss": 0.6501274538423979, "test_acc1": 83.60400247009278, "test_acc5": 96.53600250427246, "epoch": 443, "n_parameters": 23674632} -{"train_lr": 1.1187893345273409e-05, "train_loss": 1.7160098849893284, "test_loss": 
0.6510792858616421, "test_acc1": 83.6140025692749, "test_acc5": 96.5280024206543, "epoch": 444, "n_parameters": 23674632} -{"train_lr": 1.0872784051636045e-05, "train_loss": 1.7161437333297196, "test_loss": 0.6515945196716171, "test_acc1": 83.63600252227783, "test_acc5": 96.5820025894165, "epoch": 445, "n_parameters": 23674632} -{"train_lr": 1.0606127115999802e-05, "train_loss": 1.7221731035341081, "test_loss": 0.6516613550766399, "test_acc1": 83.67400251800537, "test_acc5": 96.57000250152588, "epoch": 446, "n_parameters": 23674632} -{"train_lr": 1.038793553484761e-05, "train_loss": 1.720911447479904, "test_loss": 0.6527998449958183, "test_acc1": 83.59400230407715, "test_acc5": 96.5120024093628, "epoch": 447, "n_parameters": 23674632} -{"train_lr": 1.0218219942528797e-05, "train_loss": 1.725287520318485, "test_loss": 0.6523206591549696, "test_acc1": 83.5860024609375, "test_acc5": 96.53600245269776, "epoch": 448, "n_parameters": 23674632} -{"train_lr": 1.009698861074248e-05, "train_loss": 1.7215464074822733, "test_loss": 0.6521446833274129, "test_acc1": 83.5780023562622, "test_acc5": 96.57400233825683, "epoch": 449, "n_parameters": 23674632} diff --git a/cv/classification/repvit/pytorch/losses.py b/cv/classification/repvit/pytorch/losses.py deleted file mode 100644 index ff5f8c2f..00000000 --- a/cv/classification/repvit/pytorch/losses.py +++ /dev/null @@ -1,64 +0,0 @@ -""" -Implements the knowledge distillation loss, proposed in deit -""" -import torch -from torch.nn import functional as F - - -class DistillationLoss(torch.nn.Module): - """ - This module wraps a standard criterion and adds an extra knowledge distillation loss by - taking a teacher model prediction and using it as additional supervision. - """ - - def __init__(self, base_criterion: torch.nn.Module, teacher_model: torch.nn.Module, - distillation_type: str, alpha: float, tau: float): - super().__init__() - self.base_criterion = base_criterion - self.teacher_model = teacher_model - assert distillation_type in ['none', 'soft', 'hard'] - self.distillation_type = distillation_type - self.alpha = alpha - self.tau = tau - - def forward(self, inputs, outputs, labels): - """ - Args: - inputs: The original inputs that are feed to the teacher model - outputs: the outputs of the model to be trained. 
It is expected to be - either a Tensor, or a Tuple[Tensor, Tensor], with the original output - in the first position and the distillation predictions as the second output - labels: the labels for the base criterion - """ - outputs_kd = None - if not isinstance(outputs, torch.Tensor): - # assume that the model outputs a tuple of [outputs, outputs_kd] - outputs, outputs_kd = outputs - base_loss = self.base_criterion(outputs, labels) - if self.distillation_type == 'none': - return base_loss - - if outputs_kd is None: - raise ValueError("When knowledge distillation is enabled, the model is " - "expected to return a Tuple[Tensor, Tensor] with the output of the " - "class_token and the dist_token") - # don't backprop throught the teacher - with torch.no_grad(): - teacher_outputs = self.teacher_model(inputs) - - if self.distillation_type == 'soft': - T = self.tau - # taken from https://github.com/peterliht/knowledge-distillation-pytorch/blob/master/model/net.py#L100 - # with slight modifications - distillation_loss = F.kl_div( - F.log_softmax(outputs_kd / T, dim=1), - F.log_softmax(teacher_outputs / T, dim=1), - reduction='sum', - log_target=True - ) * (T * T) / outputs_kd.numel() - elif self.distillation_type == 'hard': - distillation_loss = F.cross_entropy( - outputs_kd, teacher_outputs.argmax(dim=1)) - - loss = base_loss * (1 - self.alpha) + distillation_loss * self.alpha - return loss diff --git a/cv/classification/repvit/pytorch/main.py b/cv/classification/repvit/pytorch/main.py deleted file mode 100644 index 54374da3..00000000 --- a/cv/classification/repvit/pytorch/main.py +++ /dev/null @@ -1,486 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -import argparse -import datetime -import numpy as np -import time -import torch -import torch.backends.cudnn as cudnn -import json -import os - -from pathlib import Path - -from timm.data import Mixup -from timm.models import create_model -from timm.loss import LabelSmoothingCrossEntropy, SoftTargetCrossEntropy -from timm.scheduler import create_scheduler -from timm.optim import create_optimizer -from timm.utils import NativeScaler, get_state_dict, ModelEma - -from data.samplers import RASampler -from data.datasets import build_dataset -from data.threeaugment import new_data_aug_generator -from engine import train_one_epoch, evaluate -from losses import DistillationLoss - -import model -import utils - - -def get_args_parser(): - parser = argparse.ArgumentParser( - 'RepViT training and evaluation script', add_help=False) - parser.add_argument('--batch-size', default=256, type=int) - parser.add_argument('--epochs', default=300, type=int) - - # Model parameters - parser.add_argument('--model', default='repvit_m1_1', type=str, metavar='MODEL', - help='Name of model to train') - parser.add_argument('--input-size', default=224, - type=int, help='images input size') - - parser.add_argument('--model-ema', action='store_true') - parser.add_argument( - '--no-model-ema', action='store_false', dest='model_ema') - parser.set_defaults(model_ema=True) - parser.add_argument('--model-ema-decay', type=float, - default=0.99996, help='') - parser.add_argument('--model-ema-force-cpu', - action='store_true', default=False, help='') - - # Optimizer parameters - parser.add_argument('--opt', default='adamw', type=str, metavar='OPTIMIZER', - help='Optimizer (default: "adamw"') - parser.add_argument('--opt-eps', default=1e-8, type=float, metavar='EPSILON', - help='Optimizer Epsilon (default: 1e-8)') - 
parser.add_argument('--opt-betas', default=None, type=float, nargs='+', metavar='BETA', - help='Optimizer Betas (default: None, use opt default)') - parser.add_argument('--clip-grad', type=float, default=0.02, metavar='NORM', - help='Clip gradient norm (default: None, no clipping)') - parser.add_argument('--clip-mode', type=str, default='agc', - help='Gradient clipping mode. One of ("norm", "value", "agc")') - parser.add_argument('--momentum', type=float, default=0.9, metavar='M', - help='SGD momentum (default: 0.9)') - parser.add_argument('--weight-decay', type=float, default=0.025, - help='weight decay (default: 0.025)') - - # Learning rate schedule parameters - parser.add_argument('--sched', default='cosine', type=str, metavar='SCHEDULER', - help='LR scheduler (default: "cosine"') - parser.add_argument('--lr', type=float, default=1e-3, metavar='LR', - help='learning rate (default: 1e-3)') - parser.add_argument('--lr-noise', type=float, nargs='+', default=None, metavar='pct, pct', - help='learning rate noise on/off epoch percentages') - parser.add_argument('--lr-noise-pct', type=float, default=0.67, metavar='PERCENT', - help='learning rate noise limit percent (default: 0.67)') - parser.add_argument('--lr-noise-std', type=float, default=1.0, metavar='STDDEV', - help='learning rate noise std-dev (default: 1.0)') - parser.add_argument('--warmup-lr', type=float, default=1e-6, metavar='LR', - help='warmup learning rate (default: 1e-6)') - parser.add_argument('--min-lr', type=float, default=1e-5, metavar='LR', - help='lower lr bound for cyclic schedulers that hit 0 (1e-5)') - parser.add_argument('--decay-epochs', type=float, default=30, metavar='N', - help='epoch interval to decay LR') - parser.add_argument('--warmup-epochs', type=int, default=5, metavar='N', - help='epochs to warmup LR, if scheduler supports') - parser.add_argument('--cooldown-epochs', type=int, default=10, metavar='N', - help='epochs to cooldown LR at min_lr, after cyclic schedule ends') - parser.add_argument('--patience-epochs', type=int, default=10, metavar='N', - help='patience epochs for Plateau LR scheduler (default: 10') - parser.add_argument('--decay-rate', '--dr', type=float, default=0.1, metavar='RATE', - help='LR decay rate (default: 0.1)') - - # Augmentation parameters - parser.add_argument('--ThreeAugment', action='store_true') - parser.add_argument('--color-jitter', type=float, default=0.4, metavar='PCT', - help='Color jitter factor (default: 0.4)') - parser.add_argument('--aa', type=str, default='rand-m9-mstd0.5-inc1', metavar='NAME', - help='Use AutoAugment policy. "v0" or "original". 
" + \ - "(default: rand-m9-mstd0.5-inc1)'), - parser.add_argument('--smoothing', type=float, default=0.1, - help='Label smoothing (default: 0.1)') - parser.add_argument('--train-interpolation', type=str, default='bicubic', - help='Training interpolation (random, bilinear, bicubic default: "bicubic")') - parser.add_argument('--repeated-aug', action='store_true') - parser.add_argument('--no-repeated-aug', - action='store_false', dest='repeated_aug') - parser.set_defaults(repeated_aug=True) - - # Random Erase params - parser.add_argument('--reprob', type=float, default=0.25, metavar='PCT', - help='Random erase prob (default: 0.25)') - parser.add_argument('--remode', type=str, default='pixel', - help='Random erase mode (default: "pixel")') - parser.add_argument('--recount', type=int, default=1, - help='Random erase count (default: 1)') - parser.add_argument('--resplit', action='store_true', default=False, - help='Do not random erase first (clean) augmentation split') - - # Mixup params - parser.add_argument('--mixup', type=float, default=0.8, - help='mixup alpha, mixup enabled if > 0. (default: 0.8)') - parser.add_argument('--cutmix', type=float, default=1.0, - help='cutmix alpha, cutmix enabled if > 0. (default: 1.0)') - parser.add_argument('--cutmix-minmax', type=float, nargs='+', default=None, - help='cutmix min/max ratio, overrides alpha and enables cutmix if set (default: None)') - parser.add_argument('--mixup-prob', type=float, default=1.0, - help='Probability of performing mixup or cutmix when either/both is enabled') - parser.add_argument('--mixup-switch-prob', type=float, default=0.5, - help='Probability of switching to cutmix when both mixup and cutmix enabled') - parser.add_argument('--mixup-mode', type=str, default='batch', - help='How to apply mixup/cutmix params. 
Per "batch", "pair", or "elem"') - - # Distillation parameters - parser.add_argument('--teacher-model', default='regnety_160', type=str, metavar='MODEL', - help='Name of teacher model to train (default: "regnety_160"') - parser.add_argument('--teacher-path', type=str, - default='https://dl.fbaipublicfiles.com/deit/regnety_160-a5fe301d.pth') - parser.add_argument('--distillation-type', default='hard', - choices=['none', 'soft', 'hard'], type=str, help="") - parser.add_argument('--distillation-alpha', - default=0.5, type=float, help="") - parser.add_argument('--distillation-tau', default=1.0, type=float, help="") - - # Finetuning params - parser.add_argument('--finetune', default='', - help='finetune from checkpoint') - parser.add_argument('--set_bn_eval', action='store_true', default=False, - help='set BN layers to eval mode during finetuning.') - - # Dataset parameters - parser.add_argument('--data-path', default='/root/FastBaseline/data/imagenet', type=str, - help='dataset path') - parser.add_argument('--data-set', default='IMNET', choices=['CIFAR', 'IMNET', 'INAT', 'INAT19'], - type=str, help='Image Net dataset path') - parser.add_argument('--inat-category', default='name', - choices=['kingdom', 'phylum', 'class', 'order', - 'supercategory', 'family', 'genus', 'name'], - type=str, help='semantic granularity') - parser.add_argument('--output_dir', default='checkpoints', - help='path where to save, empty for no saving') - parser.add_argument('--device', default='cuda', - help='device to use for training / testing') - parser.add_argument('--seed', default=0, type=int) - parser.add_argument('--resume', default='', help='resume from checkpoint') - parser.add_argument('--start_epoch', default=0, type=int, metavar='N', - help='start epoch') - parser.add_argument('--eval', action='store_true', - help='Perform evaluation only') - parser.add_argument('--dist-eval', action='store_true', - default=False, help='Enabling distributed evaluation') - parser.add_argument('--num_workers', default=10, type=int) - parser.add_argument('--pin-mem', action='store_true', - help='Pin CPU memory in DataLoader for more efficient (sometimes) transfer to GPU.') - parser.add_argument('--no-pin-mem', action='store_false', dest='pin_mem', - help='') - parser.set_defaults(pin_mem=True) - - # training parameters - parser.add_argument('--world_size', default=1, type=int, - help='number of distributed processes') - parser.add_argument('--dist_url', default='env://', - help='url used to set up distributed training') - parser.add_argument('--save_freq', default=1, type=int, - help='frequency of model saving') - - parser.add_argument('--deploy', action='store_true', default=False) - parser.add_argument('--project', default='repvit', type=str) - return parser - -import wandb - -def main(args): - - utils.init_distributed_mode(args) - - if utils.is_main_process() and not args.eval: - wandb.init(project=args.project, config=args) - wandb.run.log_code('model') - if args.distillation_type != 'none' and args.finetune and not args.eval: - raise NotImplementedError( - "Finetuning with distillation not yet supported") - - device = torch.device(args.device) - - # fix the seed for reproducibility - seed = args.seed + utils.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - # random.seed(seed) - - cudnn.benchmark = True - - dataset_train, args.nb_classes = build_dataset(is_train=True, args=args) - dataset_val, _ = build_dataset(is_train=False, args=args) - - if True: # args.distributed: - num_tasks = utils.get_world_size() - 
global_rank = utils.get_rank() - if args.repeated_aug: - sampler_train = RASampler( - dataset_train, num_replicas=num_tasks, rank=global_rank, shuffle=True - ) - else: - sampler_train = torch.utils.data.DistributedSampler( - dataset_train, num_replicas=num_tasks, rank=global_rank, shuffle=True - ) - if args.dist_eval: - if len(dataset_val) % num_tasks != 0: - print('Warning: Enabling distributed evaluation with an eval dataset not divisible by process number. ' - 'This will slightly alter validation results as extra duplicate entries are added to achieve ' - 'equal num of samples per-process.') - sampler_val = torch.utils.data.DistributedSampler( - dataset_val, num_replicas=num_tasks, rank=global_rank, shuffle=False) - else: - sampler_val = torch.utils.data.SequentialSampler(dataset_val) - else: - sampler_train = torch.utils.data.RandomSampler(dataset_train) - sampler_val = torch.utils.data.SequentialSampler(dataset_val) - - data_loader_train = torch.utils.data.DataLoader( - dataset_train, sampler=sampler_train, - batch_size=args.batch_size, - num_workers=args.num_workers, - pin_memory=args.pin_mem, - drop_last=True, - ) - - if args.ThreeAugment: - data_loader_train.dataset.transform = new_data_aug_generator(args) - - data_loader_val = torch.utils.data.DataLoader( - dataset_val, sampler=sampler_val, - batch_size=int(1.5 * args.batch_size), - num_workers=args.num_workers, - pin_memory=args.pin_mem, - drop_last=False - ) - - mixup_fn = None - mixup_active = args.mixup > 0 or args.cutmix > 0. or args.cutmix_minmax is not None - if mixup_active: - mixup_fn = Mixup( - mixup_alpha=args.mixup, cutmix_alpha=args.cutmix, cutmix_minmax=args.cutmix_minmax, - prob=args.mixup_prob, switch_prob=args.mixup_switch_prob, mode=args.mixup_mode, - label_smoothing=args.smoothing, num_classes=args.nb_classes) - - print(f"Creating model: {args.model}") - model = create_model( - args.model, - num_classes=args.nb_classes, - distillation=(args.distillation_type != 'none'), - pretrained=False, - ) - export_onnx(model, args.output_dir) - - if args.finetune: - if args.finetune.startswith('https'): - checkpoint = torch.hub.load_state_dict_from_url( - args.finetune, map_location='cpu', check_hash=True) - else: - checkpoint = utils.load_model(args.finetune, model) - - checkpoint_model = checkpoint['model'] - state_dict = model.state_dict() - for k in ['head.l.weight', 'head.l.bias', - 'head_dist.l.weight', 'head_dist.l.bias']: - if k in checkpoint_model and checkpoint_model[k].shape != state_dict[k].shape: - print(f"Removing key {k} from pretrained checkpoint") - del checkpoint_model[k] - - msg = model.load_state_dict(checkpoint_model, strict=False) - print(msg) - - model.to(device) - - model_ema = None - if args.model_ema: - # Important to create EMA model after cuda(), DP wrapper, and AMP but - # before SyncBN and DDP wrapper - model_ema = ModelEma( - model, - decay=args.model_ema_decay, - device='cpu' if args.model_ema_force_cpu else '', - resume='') - - model_without_ddp = model - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel( - model, device_ids=[args.gpu]) - model_without_ddp = model.module - n_parameters = sum(p.numel() - for p in model.parameters() if p.requires_grad) - print('number of params:', n_parameters) - - linear_scaled_lr = args.lr * args.batch_size * utils.get_world_size() / 512.0 - args.lr = linear_scaled_lr - optimizer = create_optimizer(args, model_without_ddp) - loss_scaler = NativeScaler() - - lr_scheduler, _ = create_scheduler(args, optimizer) - - criterion = 
LabelSmoothingCrossEntropy() - - if args.mixup > 0.: - # smoothing is handled with mixup label transform - criterion = SoftTargetCrossEntropy() - elif args.smoothing: - criterion = LabelSmoothingCrossEntropy(smoothing=args.smoothing) - else: - criterion = torch.nn.CrossEntropyLoss() - - teacher_model = None - if args.distillation_type != 'none': - assert args.teacher_path, 'need to specify teacher-path when using distillation' - print(f"Creating teacher model: {args.teacher_model}") - teacher_model = create_model( - args.teacher_model, - pretrained=False, - num_classes=args.nb_classes, - global_pool='avg', - ) - if args.teacher_path.startswith('https'): - checkpoint = torch.hub.load_state_dict_from_url( - args.teacher_path, map_location='cpu', check_hash=True) - else: - checkpoint = torch.load(args.teacher_path, map_location='cpu') - teacher_model.load_state_dict(checkpoint['model']) - teacher_model.to(device) - teacher_model.eval() - - # wrap the criterion in our custom DistillationLoss, which - # just dispatches to the original criterion if args.distillation_type is - # 'none' - criterion = DistillationLoss( - criterion, teacher_model, args.distillation_type, args.distillation_alpha, args.distillation_tau - ) - - output_dir = Path(args.output_dir) - if args.output_dir and utils.is_main_process(): - with (output_dir / "model.txt").open("a") as f: - f.write(str(model)) - print(str(model)) - if args.output_dir and utils.is_main_process(): - with (output_dir / "args.txt").open("a") as f: - f.write(json.dumps(args.__dict__, indent=2) + "\n") - print(json.dumps(args.__dict__, indent=2) + "\n") - if args.resume: - if args.resume.startswith('https'): - checkpoint = torch.hub.load_state_dict_from_url( - args.resume, map_location='cpu', check_hash=True) - else: - print("Loading local checkpoint at {}".format(args.resume)) - checkpoint = torch.load(args.resume, map_location='cpu') - msg = model_without_ddp.load_state_dict(checkpoint['model'], strict=True) - print(msg) - if not args.eval and 'optimizer' in checkpoint and 'lr_scheduler' in checkpoint and 'epoch' in checkpoint: - optimizer.load_state_dict(checkpoint['optimizer']) - lr_scheduler.load_state_dict(checkpoint['lr_scheduler']) - args.start_epoch = checkpoint['epoch'] + 1 - if args.model_ema: - utils._load_checkpoint_for_ema( - model_ema, checkpoint['model_ema']) - if 'scaler' in checkpoint: - loss_scaler.load_state_dict(checkpoint['scaler']) - if args.eval: - utils.replace_batchnorm(model) # Users may choose whether to merge Conv-BN layers during eval - print(f"Evaluating model: {args.model}") - test_stats = evaluate(data_loader_val, model, device) - print( - f"Accuracy of the network on the {len(dataset_val)} test images: {test_stats['acc1']:.1f}%") - return - - print(f"Start training for {args.epochs} epochs") - start_time = time.time() - max_accuracy = 0.0 - max_accuracy_ema = 0.0 - for epoch in range(args.start_epoch, args.epochs): - if args.distributed: - data_loader_train.sampler.set_epoch(epoch) - - train_stats = train_one_epoch( - model, criterion, data_loader_train, - optimizer, device, epoch, loss_scaler, - args.clip_grad, args.clip_mode, model_ema, mixup_fn, - # set_training_mode=args.finetune == '' # keep in eval mode during finetuning - set_training_mode=True, - set_bn_eval=args.set_bn_eval, # set bn to eval if finetune - ) - - lr_scheduler.step(epoch) - - test_stats = evaluate(data_loader_val, model, device) - print( - f"Accuracy of the network on the {len(dataset_val)} test images: {test_stats['acc1']:.1f}%") - - if 
args.output_dir: - ckpt_path = os.path.join(output_dir, 'checkpoint_'+str(epoch)+'.pth') - checkpoint_paths = [ckpt_path] - print("Saving checkpoint to {}".format(ckpt_path)) - for checkpoint_path in checkpoint_paths: - utils.save_on_master({ - 'model': model_without_ddp.state_dict(), - 'optimizer': optimizer.state_dict(), - 'lr_scheduler': lr_scheduler.state_dict(), - 'epoch': epoch, - 'model_ema': get_state_dict(model_ema), - 'scaler': loss_scaler.state_dict(), - 'args': args, - }, checkpoint_path) - remove_epoch = epoch - 3 - if remove_epoch >= 0 and utils.is_main_process(): - os.remove(os.path.join(output_dir, 'checkpoint_'+str(remove_epoch)+'.pth')) - - if max_accuracy < test_stats["acc1"]: - utils.save_on_master({ - 'model': model_without_ddp.state_dict(), - 'optimizer': optimizer.state_dict(), - 'lr_scheduler': lr_scheduler.state_dict(), - 'epoch': epoch, - 'model_ema': get_state_dict(model_ema), - 'scaler': loss_scaler.state_dict(), - 'args': args, - }, os.path.join(output_dir, 'checkpoint_best.pth')) - max_accuracy = max(max_accuracy, test_stats["acc1"]) - - print(f'Max accuracy: {max_accuracy:.2f}%') - - log_stats = {**{f'train_{k}': v for k, v in train_stats.items()}, - **{f'test_{k}': v for k, v in test_stats.items()}, - 'epoch': epoch, - 'n_parameters': n_parameters} - if utils.is_main_process(): - wandb.log({**{f'train_{k}': v for k, v in train_stats.items()}, - **{f'test_{k}': v for k, v in test_stats.items()}, - 'epoch': epoch, - 'max_accuracy': max_accuracy}, step=epoch) - if args.output_dir and utils.is_main_process(): - with (output_dir / "log.txt").open("a") as f: - f.write(json.dumps(log_stats) + "\n") - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('Training time {}'.format(total_time_str)) - if utils.is_main_process(): - wandb.finish() - -def export_onnx(model, output_dir): - # if utils.is_main_process(): - # dummy_input = torch.randn(1, 3, 224, 224) - # torch.onnx.export(model, dummy_input, f"{output_dir}/model.onnx") - # wandb.save(f"{output_dir}/model.onnx") - pass - -if __name__ == '__main__': - parser = argparse.ArgumentParser( - 'RepViT training and evaluation script', parents=[get_args_parser()]) - args = parser.parse_args() - if args.resume and not args.eval: - args.output_dir = '/'.join(args.resume.split('/')[:-1]) - elif args.output_dir: - args.output_dir = args.output_dir + f"/{args.model}/" + datetime.datetime.now().strftime('%Y_%m_%d_%H_%M_%S') - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - else: - assert(False) - main(args) diff --git a/cv/classification/repvit/pytorch/model/__init__.py b/cv/classification/repvit/pytorch/model/__init__.py deleted file mode 100644 index 9f713ac6..00000000 --- a/cv/classification/repvit/pytorch/model/__init__.py +++ /dev/null @@ -1 +0,0 @@ -import model.repvit \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/model/repvit.py b/cv/classification/repvit/pytorch/model/repvit.py deleted file mode 100644 index 2c1bcf14..00000000 --- a/cv/classification/repvit/pytorch/model/repvit.py +++ /dev/null @@ -1,479 +0,0 @@ -# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -import torch.nn as nn - -def _make_divisible(v, divisor, min_value=None): - """ - This function is taken from the original tf repo. 
- It ensures that all layers have a channel number that is divisible by 8 - It can be seen here: - https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py - :param v: - :param divisor: - :param min_value: - :return: - """ - if min_value is None: - min_value = divisor - new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than 10%. - if new_v < 0.9 * v: - new_v += divisor - return new_v - -from timm.models.layers import SqueezeExcite - -import torch - -class Conv2d_BN(torch.nn.Sequential): - def __init__(self, a, b, ks=1, stride=1, pad=0, dilation=1, - groups=1, bn_weight_init=1, resolution=-10000): - super().__init__() - self.add_module('c', torch.nn.Conv2d( - a, b, ks, stride, pad, dilation, groups, bias=False)) - self.add_module('bn', torch.nn.BatchNorm2d(b)) - torch.nn.init.constant_(self.bn.weight, bn_weight_init) - torch.nn.init.constant_(self.bn.bias, 0) - - @torch.no_grad() - def fuse(self): - c, bn = self._modules.values() - w = bn.weight / (bn.running_var + bn.eps)**0.5 - w = c.weight * w[:, None, None, None] - b = bn.bias - bn.running_mean * bn.weight / \ - (bn.running_var + bn.eps)**0.5 - m = torch.nn.Conv2d(w.size(1) * self.c.groups, w.size( - 0), w.shape[2:], stride=self.c.stride, padding=self.c.padding, dilation=self.c.dilation, groups=self.c.groups, - device=c.weight.device) - m.weight.data.copy_(w) - m.bias.data.copy_(b) - return m - -class Residual(torch.nn.Module): - def __init__(self, m, drop=0.): - super().__init__() - self.m = m - self.drop = drop - - def forward(self, x): - if self.training and self.drop > 0: - return x + self.m(x) * torch.rand(x.size(0), 1, 1, 1, - device=x.device).ge_(self.drop).div(1 - self.drop).detach() - else: - return x + self.m(x) - - @torch.no_grad() - def fuse(self): - if isinstance(self.m, Conv2d_BN): - m = self.m.fuse() - assert(m.groups == m.in_channels) - identity = torch.ones(m.weight.shape[0], m.weight.shape[1], 1, 1) - identity = torch.nn.functional.pad(identity, [1,1,1,1]) - m.weight += identity.to(m.weight.device) - return m - elif isinstance(self.m, torch.nn.Conv2d): - m = self.m - assert(m.groups != m.in_channels) - identity = torch.ones(m.weight.shape[0], m.weight.shape[1], 1, 1) - identity = torch.nn.functional.pad(identity, [1,1,1,1]) - m.weight += identity.to(m.weight.device) - return m - else: - return self - - -class RepVGGDW(torch.nn.Module): - def __init__(self, ed) -> None: - super().__init__() - self.conv = Conv2d_BN(ed, ed, 3, 1, 1, groups=ed) - self.conv1 = torch.nn.Conv2d(ed, ed, 1, 1, 0, groups=ed) - self.dim = ed - self.bn = torch.nn.BatchNorm2d(ed) - - def forward(self, x): - return self.bn((self.conv(x) + self.conv1(x)) + x) - - @torch.no_grad() - def fuse(self): - conv = self.conv.fuse() - conv1 = self.conv1 - - conv_w = conv.weight - conv_b = conv.bias - conv1_w = conv1.weight - conv1_b = conv1.bias - - conv1_w = torch.nn.functional.pad(conv1_w, [1,1,1,1]) - - identity = torch.nn.functional.pad(torch.ones(conv1_w.shape[0], conv1_w.shape[1], 1, 1, device=conv1_w.device), [1,1,1,1]) - - final_conv_w = conv_w + conv1_w + identity - final_conv_b = conv_b + conv1_b - - conv.weight.data.copy_(final_conv_w) - conv.bias.data.copy_(final_conv_b) - - bn = self.bn - w = bn.weight / (bn.running_var + bn.eps)**0.5 - w = conv.weight * w[:, None, None, None] - b = bn.bias + (conv.bias - bn.running_mean) * bn.weight / \ - (bn.running_var + bn.eps)**0.5 - conv.weight.data.copy_(w) - conv.bias.data.copy_(b) - return conv - - 
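Editorial note (not part of the deleted file): the `fuse()` methods of `Conv2d_BN` and `RepVGGDW` above rely on the standard Conv+BatchNorm folding identity. The following standalone sketch illustrates that identity with assumed, arbitrary shapes (8 channels, 3x3 kernel) and tolerances; it is an illustrative check, not the repository's code.

```python
# Illustrative sketch: folding an eval-mode BatchNorm into the preceding
# convolution reproduces the original Conv+BN output. All names and sizes
# here are assumptions chosen for the example.
import torch
import torch.nn as nn

conv = nn.Conv2d(8, 8, 3, padding=1, bias=False)
bn = nn.BatchNorm2d(8)
bn.eval()

# Give BN non-trivial statistics so the equivalence check is meaningful.
with torch.no_grad():
    bn.running_mean.uniform_(-1.0, 1.0)
    bn.running_var.uniform_(0.5, 2.0)
    bn.weight.uniform_(0.5, 1.5)
    bn.bias.uniform_(-0.5, 0.5)

# Fold: w' = w * gamma / sqrt(var + eps), b' = beta - mean * gamma / sqrt(var + eps)
scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
fused = nn.Conv2d(8, 8, 3, padding=1, bias=True)
with torch.no_grad():
    fused.weight.copy_(conv.weight * scale[:, None, None, None])
    fused.bias.copy_(bn.bias - bn.running_mean * scale)

x = torch.randn(2, 8, 16, 16)
with torch.no_grad():
    assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
print("fused conv matches conv+bn")
```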
-class RepViTBlock(nn.Module): - def __init__(self, inp, hidden_dim, oup, kernel_size, stride, use_se, use_hs): - super(RepViTBlock, self).__init__() - assert stride in [1, 2] - - self.identity = stride == 1 and inp == oup - assert(hidden_dim == 2 * inp) - - if stride == 2: - self.token_mixer = nn.Sequential( - Conv2d_BN(inp, inp, kernel_size, stride, (kernel_size - 1) // 2, groups=inp), - SqueezeExcite(inp, 0.25) if use_se else nn.Identity(), - Conv2d_BN(inp, oup, ks=1, stride=1, pad=0) - ) - self.channel_mixer = Residual(nn.Sequential( - # pw - Conv2d_BN(oup, 2 * oup, 1, 1, 0), - nn.GELU() if use_hs else nn.GELU(), - # pw-linear - Conv2d_BN(2 * oup, oup, 1, 1, 0, bn_weight_init=0), - )) - else: - assert(self.identity) - self.token_mixer = nn.Sequential( - RepVGGDW(inp), - SqueezeExcite(inp, 0.25) if use_se else nn.Identity(), - ) - self.channel_mixer = Residual(nn.Sequential( - # pw - Conv2d_BN(inp, hidden_dim, 1, 1, 0), - nn.GELU() if use_hs else nn.GELU(), - # pw-linear - Conv2d_BN(hidden_dim, oup, 1, 1, 0, bn_weight_init=0), - )) - - def forward(self, x): - return self.channel_mixer(self.token_mixer(x)) - -from timm.models.vision_transformer import trunc_normal_ -class BN_Linear(torch.nn.Sequential): - def __init__(self, a, b, bias=True, std=0.02): - super().__init__() - self.add_module('bn', torch.nn.BatchNorm1d(a)) - self.add_module('l', torch.nn.Linear(a, b, bias=bias)) - trunc_normal_(self.l.weight, std=std) - if bias: - torch.nn.init.constant_(self.l.bias, 0) - - @torch.no_grad() - def fuse(self): - bn, l = self._modules.values() - w = bn.weight / (bn.running_var + bn.eps)**0.5 - b = bn.bias - self.bn.running_mean * \ - self.bn.weight / (bn.running_var + bn.eps)**0.5 - w = l.weight * w[None, :] - if l.bias is None: - b = b @ self.l.weight.T - else: - b = (l.weight @ b[:, None]).view(-1) + self.l.bias - m = torch.nn.Linear(w.size(1), w.size(0), device=l.weight.device) - m.weight.data.copy_(w) - m.bias.data.copy_(b) - return m - -class Classfier(nn.Module): - def __init__(self, dim, num_classes, distillation=True): - super().__init__() - self.classifier = BN_Linear(dim, num_classes) if num_classes > 0 else torch.nn.Identity() - self.distillation = distillation - if distillation: - self.classifier_dist = BN_Linear(dim, num_classes) if num_classes > 0 else torch.nn.Identity() - - def forward(self, x): - if self.distillation: - x = self.classifier(x), self.classifier_dist(x) - if not self.training: - x = (x[0] + x[1]) / 2 - else: - x = self.classifier(x) - return x - - @torch.no_grad() - def fuse(self): - classifier = self.classifier.fuse() - if self.distillation: - classifier_dist = self.classifier_dist.fuse() - classifier.weight += classifier_dist.weight - classifier.bias += classifier_dist.bias - classifier.weight /= 2 - classifier.bias /= 2 - return classifier - else: - return classifier - -class RepViT(nn.Module): - def __init__(self, cfgs, num_classes=1000, distillation=False): - super(RepViT, self).__init__() - # setting of inverted residual blocks - self.cfgs = cfgs - - # building first layer - input_channel = self.cfgs[0][2] - patch_embed = torch.nn.Sequential(Conv2d_BN(3, input_channel // 2, 3, 2, 1), torch.nn.GELU(), - Conv2d_BN(input_channel // 2, input_channel, 3, 2, 1)) - layers = [patch_embed] - # building inverted residual blocks - block = RepViTBlock - for k, t, c, use_se, use_hs, s in self.cfgs: - output_channel = _make_divisible(c, 8) - exp_size = _make_divisible(input_channel * t, 8) - layers.append(block(input_channel, exp_size, output_channel, k, s, use_se, 
use_hs)) - input_channel = output_channel - self.features = nn.ModuleList(layers) - self.classifier = Classfier(output_channel, num_classes, distillation) - - def forward(self, x): - # x = self.features(x) - for f in self.features: - x = f(x) - x = torch.nn.functional.adaptive_avg_pool2d(x, 1).flatten(1) - x = self.classifier(x) - return x - -from timm.models import register_model - -@register_model -def repvit_m0_9(pretrained=False, num_classes = 1000, distillation=False): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 48, 1, 0, 1], - [3, 2, 48, 0, 0, 1], - [3, 2, 48, 0, 0, 1], - [3, 2, 96, 0, 0, 2], - [3, 2, 96, 1, 0, 1], - [3, 2, 96, 0, 0, 1], - [3, 2, 96, 0, 0, 1], - [3, 2, 192, 0, 1, 2], - [3, 2, 192, 1, 1, 1], - [3, 2, 192, 0, 1, 1], - [3, 2, 192, 1, 1, 1], - [3, 2, 192, 0, 1, 1], - [3, 2, 192, 1, 1, 1], - [3, 2, 192, 0, 1, 1], - [3, 2, 192, 1, 1, 1], - [3, 2, 192, 0, 1, 1], - [3, 2, 192, 1, 1, 1], - [3, 2, 192, 0, 1, 1], - [3, 2, 192, 1, 1, 1], - [3, 2, 192, 0, 1, 1], - [3, 2, 192, 1, 1, 1], - [3, 2, 192, 0, 1, 1], - [3, 2, 192, 0, 1, 1], - [3, 2, 384, 0, 1, 2], - [3, 2, 384, 1, 1, 1], - [3, 2, 384, 0, 1, 1] - ] - return RepViT(cfgs, num_classes=num_classes, distillation=distillation) - -@register_model -def repvit_m1_0(pretrained=False, num_classes = 1000, distillation=False): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 56, 1, 0, 1], - [3, 2, 56, 0, 0, 1], - [3, 2, 56, 0, 0, 1], - [3, 2, 112, 0, 0, 2], - [3, 2, 112, 1, 0, 1], - [3, 2, 112, 0, 0, 1], - [3, 2, 112, 0, 0, 1], - [3, 2, 224, 0, 1, 2], - [3, 2, 224, 1, 1, 1], - [3, 2, 224, 0, 1, 1], - [3, 2, 224, 1, 1, 1], - [3, 2, 224, 0, 1, 1], - [3, 2, 224, 1, 1, 1], - [3, 2, 224, 0, 1, 1], - [3, 2, 224, 1, 1, 1], - [3, 2, 224, 0, 1, 1], - [3, 2, 224, 1, 1, 1], - [3, 2, 224, 0, 1, 1], - [3, 2, 224, 1, 1, 1], - [3, 2, 224, 0, 1, 1], - [3, 2, 224, 1, 1, 1], - [3, 2, 224, 0, 1, 1], - [3, 2, 224, 0, 1, 1], - [3, 2, 448, 0, 1, 2], - [3, 2, 448, 1, 1, 1], - [3, 2, 448, 0, 1, 1] - ] - return RepViT(cfgs, num_classes=num_classes, distillation=distillation) - - -@register_model -def repvit_m1_1(pretrained=False, num_classes = 1000, distillation=False): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 64, 1, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 128, 0, 0, 2], - [3, 2, 128, 1, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 256, 0, 1, 2], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 512, 0, 1, 2], - [3, 2, 512, 1, 1, 1], - [3, 2, 512, 0, 1, 1] - ] - return RepViT(cfgs, num_classes=num_classes, distillation=distillation) - - -@register_model -def repvit_m1_5(pretrained=False, num_classes = 1000, distillation=False): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 64, 1, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 64, 1, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 128, 0, 0, 2], - [3, 2, 128, 1, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 128, 1, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 256, 0, 1, 2], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - 
[3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 512, 0, 1, 2], - [3, 2, 512, 1, 1, 1], - [3, 2, 512, 0, 1, 1], - [3, 2, 512, 1, 1, 1], - [3, 2, 512, 0, 1, 1] - ] - return RepViT(cfgs, num_classes=num_classes, distillation=distillation) - - - -@register_model -def repvit_m2_3(pretrained=False, num_classes = 1000, distillation=False): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 80, 1, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 80, 1, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 80, 1, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 160, 0, 0, 2], - [3, 2, 160, 1, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 160, 1, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 160, 1, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 320, 0, 1, 2], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - # [3, 2, 320, 1, 1, 1], - # [3, 2, 320, 0, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 640, 0, 1, 2], - [3, 2, 640, 1, 1, 1], - [3, 2, 640, 0, 1, 1], - # [3, 2, 640, 1, 1, 1], - # [3, 2, 640, 0, 1, 1] - ] - return RepViT(cfgs, num_classes=num_classes, distillation=distillation) diff --git a/cv/classification/repvit/pytorch/requirements.txt b/cv/classification/repvit/pytorch/requirements.txt deleted file mode 100644 index 2bc1f2c2..00000000 --- a/cv/classification/repvit/pytorch/requirements.txt +++ /dev/null @@ -1,5 +0,0 @@ -torch -timm==0.5.4 -fvcore -wandb -urllib3==1.26.6 \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/segmentation/.gitignore b/cv/classification/repvit/pytorch/segmentation/.gitignore deleted file mode 100644 index 2dba40fc..00000000 --- a/cv/classification/repvit/pytorch/segmentation/.gitignore +++ /dev/null @@ -1,4 +0,0 @@ -pretrain -work_dirs -data -seg_pretrain \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/segmentation/README.md b/cv/classification/repvit/pytorch/segmentation/README.md deleted file mode 100644 index 0baccc7b..00000000 --- a/cv/classification/repvit/pytorch/segmentation/README.md +++ /dev/null @@ -1,64 +0,0 @@ -# Semantic Segmentation - -Segmentation on ADE20K is implemented based on [MMSegmentation](https://github.com/open-mmlab/mmsegmentation). 
- -## Models -| Model | mIoU | Latency | Ckpt | Log | -|:---------------|:----:|:---:|:--:|:--:| -| RepViT-M1.1 | 40.6 | 4.9ms | [M1.1](https://github.com/THU-MIG/RepViT/releases/download/v1.0/repvit_m1_1_ade20k.pth) | [M1.1](./logs/repvit_m1_1_ade20k.json) | -| RepViT-M1.5 | 43.6 | 6.4ms | [M1.5](https://github.com/THU-MIG/RepViT/releases/download/v1.0/repvit_m1_5_ade20k.pth) | [M1.5](./logs/repvit_m1_5_ade20k.json) | -| RepViT-M2.3 | 46.1 | 9.9ms | [M2.3](https://github.com/THU-MIG/RepViT/releases/download/v1.0/repvit_m2_3_ade20k.pth) | [M2.3](./logs/repvit_m2_3_ade20k.json) | - -The backbone latency is measured with image crops of 512x512 on iPhone 12 by Core ML Tools. - -## Requirements -Install [mmcv-full](https://github.com/open-mmlab/mmcv) and [MMSegmentation v0.30.0](https://github.com/open-mmlab/mmsegmentation/tree/v0.30.0). -Later versions should work as well. -The easiest way is to install via [MIM](https://github.com/open-mmlab/mim) -``` -pip install -U openmim -mim install mmcv-full==1.7.1 -mim install mmseg==0.30.0 -``` - -## Data preparation - -We benchmark RepViT on the challenging ADE20K dataset, which can be downloaded and prepared following [insructions in MMSeg](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/en/dataset_prepare.md#prepare-datasets). -The data should appear as: -``` -├── segmentation -│ ├── data -│ │ ├── ade -│ │ │ ├── ADEChallengeData2016 -│ │ │ │ ├── annotations -│ │ │ │ │ ├── training -│ │ │ │ │ ├── validation -│ │ │ │ ├── images -│ │ │ │ │ ├── training -│ │ │ │ │ ├── validation - -``` - - - -## Testing - -We provide a multi-GPU testing script, specify config file, checkpoint, and number of GPUs to use: -``` -./tools/dist_test.sh config_file path/to/checkpoint #GPUs --eval mIoU -``` - -For example, to test RepViT-M1.1 on ADE20K on an 8-GPU machine, - -``` -./tools/dist_test.sh configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py path/to/repvit_m1_1_ade20k.pth 8 --eval mIoU -``` - -## Training -Download ImageNet-1K pretrained weights into `./pretrain` - -We provide PyTorch distributed data parallel (DDP) training script `dist_train.sh`, for example, to train RepViT-M1.1 on an 8-GPU machine: -``` -./tools/dist_train.sh configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py 8 -``` -Tips: specify configs and #GPUs! diff --git a/cv/classification/repvit/pytorch/segmentation/align_resize.py b/cv/classification/repvit/pytorch/segmentation/align_resize.py deleted file mode 100644 index 135942d4..00000000 --- a/cv/classification/repvit/pytorch/segmentation/align_resize.py +++ /dev/null @@ -1,230 +0,0 @@ -import mmcv -import numpy as np -from mmcv.utils import deprecated_api_warning, is_tuple_of -from numpy import random - -from mmseg.datasets.builder import PIPELINES - - -@PIPELINES.register_module() -class AlignResize(object): - """Resize images & seg. 
Align - """ - - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=True, - size_divisor=32): - if img_scale is None: - self.img_scale = None - else: - if isinstance(img_scale, list): - self.img_scale = img_scale - else: - self.img_scale = [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) - - if ratio_range is not None: - # mode 1: given img_scale=None and a range of image ratio - # mode 2: given a scale and a range of image ratio - assert self.img_scale is None or len(self.img_scale) == 1 - else: - # mode 3 and 4: given multiple scales or a range of scales - assert multiscale_mode in ['value', 'range'] - - self.multiscale_mode = multiscale_mode - self.ratio_range = ratio_range - self.keep_ratio = keep_ratio - self.size_divisor = size_divisor - - @staticmethod - def random_select(img_scales): - """Randomly select an img_scale from given candidates. - - Args: - img_scales (list[tuple]): Images scales for selection. - - Returns: - (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, - where ``img_scale`` is the selected image scale and - ``scale_idx`` is the selected index in the given candidates. - """ - - assert mmcv.is_list_of(img_scales, tuple) - scale_idx = np.random.randint(len(img_scales)) - img_scale = img_scales[scale_idx] - return img_scale, scale_idx - - @staticmethod - def random_sample(img_scales): - """Randomly sample an img_scale when ``multiscale_mode=='range'``. - - Args: - img_scales (list[tuple]): Images scale range for sampling. - There must be two tuples in img_scales, which specify the lower - and uper bound of image scales. - - Returns: - (tuple, None): Returns a tuple ``(img_scale, None)``, where - ``img_scale`` is sampled scale and None is just a placeholder - to be consistent with :func:`random_select`. - """ - - assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 - img_scale_long = [max(s) for s in img_scales] - img_scale_short = [min(s) for s in img_scales] - long_edge = np.random.randint( - min(img_scale_long), - max(img_scale_long) + 1) - short_edge = np.random.randint( - min(img_scale_short), - max(img_scale_short) + 1) - img_scale = (long_edge, short_edge) - return img_scale, None - - @staticmethod - def random_sample_ratio(img_scale, ratio_range): - """Randomly sample an img_scale when ``ratio_range`` is specified. - - A ratio will be randomly sampled from the range specified by - ``ratio_range``. Then it would be multiplied with ``img_scale`` to - generate sampled scale. - - Args: - img_scale (tuple): Images scale base to multiply with ratio. - ratio_range (tuple[float]): The minimum and maximum ratio to scale - the ``img_scale``. - - Returns: - (tuple, None): Returns a tuple ``(scale, None)``, where - ``scale`` is sampled ratio multiplied with ``img_scale`` and - None is just a placeholder to be consistent with - :func:`random_select`. - """ - - assert isinstance(img_scale, tuple) and len(img_scale) == 2 - min_ratio, max_ratio = ratio_range - assert min_ratio <= max_ratio - ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio - scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) - return scale, None - - def _random_scale(self, results): - """Randomly sample an img_scale according to ``ratio_range`` and - ``multiscale_mode``. - - If ``ratio_range`` is specified, a ratio will be sampled and be - multiplied with ``img_scale``. - If multiple scales are specified by ``img_scale``, a scale will be - sampled according to ``multiscale_mode``. 
- Otherwise, single scale will be used. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: Two new keys 'scale` and 'scale_idx` are added into - ``results``, which would be used by subsequent pipelines. - """ - - if self.ratio_range is not None: - if self.img_scale is None: - h, w = results['img'].shape[:2] - scale, scale_idx = self.random_sample_ratio((w, h), - self.ratio_range) - else: - scale, scale_idx = self.random_sample_ratio( - self.img_scale[0], self.ratio_range) - elif len(self.img_scale) == 1: - scale, scale_idx = self.img_scale[0], 0 - elif self.multiscale_mode == 'range': - scale, scale_idx = self.random_sample(self.img_scale) - elif self.multiscale_mode == 'value': - scale, scale_idx = self.random_select(self.img_scale) - else: - raise NotImplementedError - - results['scale'] = scale - results['scale_idx'] = scale_idx - - def _align(self, img, size_divisor, interpolation=None): - align_h = int(np.ceil(img.shape[0] / size_divisor)) * size_divisor - align_w = int(np.ceil(img.shape[1] / size_divisor)) * size_divisor - if interpolation == None: - img = mmcv.imresize(img, (align_w, align_h)) - else: - img = mmcv.imresize(img, (align_w, align_h), interpolation=interpolation) - return img - - def _resize_img(self, results): - """Resize images with ``results['scale']``.""" - if self.keep_ratio: - img, scale_factor = mmcv.imrescale( - results['img'], results['scale'], return_scale=True) - #### align #### - img = self._align(img, self.size_divisor) - # the w_scale and h_scale has minor difference - # a real fix should be done in the mmcv.imrescale in the future - new_h, new_w = img.shape[:2] - h, w = results['img'].shape[:2] - w_scale = new_w / w - h_scale = new_h / h - else: - img, w_scale, h_scale = mmcv.imresize( - results['img'], results['scale'], return_scale=True) - - h, w = img.shape[:2] - assert int(np.ceil(h / self.size_divisor)) * self.size_divisor == h and \ - int(np.ceil(w / self.size_divisor)) * self.size_divisor == w, \ - "img size not align. h:{} w:{}".format(h, w) - scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], - dtype=np.float32) - results['img'] = img - results['img_shape'] = img.shape - results['pad_shape'] = img.shape # in case that there is no padding - results['scale_factor'] = scale_factor - results['keep_ratio'] = self.keep_ratio - - def _resize_seg(self, results): - """Resize semantic segmentation map with ``results['scale']``.""" - for key in results.get('seg_fields', []): - if self.keep_ratio: - gt_seg = mmcv.imrescale( - results[key], results['scale'], interpolation='nearest') - gt_seg = self._align(gt_seg, self.size_divisor, interpolation='nearest') - else: - gt_seg = mmcv.imresize( - results[key], results['scale'], interpolation='nearest') - h, w = gt_seg.shape[:2] - assert int(np.ceil(h / self.size_divisor)) * self.size_divisor == h and \ - int(np.ceil(w / self.size_divisor)) * self.size_divisor == w, \ - "gt_seg size not align. h:{} w:{}".format(h, w) - results[key] = gt_seg - - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', - 'keep_ratio' keys are added into result dict. 
- """ - - if 'scale' not in results: - self._random_scale(results) - self._resize_img(results) - self._resize_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(img_scale={self.img_scale}, ' - f'multiscale_mode={self.multiscale_mode}, ' - f'ratio_range={self.ratio_range}, ' - f'keep_ratio={self.keep_ratio})') - return repr_str diff --git a/cv/classification/repvit/pytorch/segmentation/configs/_base_/datasets/ade20k.py b/cv/classification/repvit/pytorch/segmentation/configs/_base_/datasets/ade20k.py deleted file mode 100644 index aa07e13c..00000000 --- a/cv/classification/repvit/pytorch/segmentation/configs/_base_/datasets/ade20k.py +++ /dev/null @@ -1,57 +0,0 @@ -# dataset settings -dataset_type = 'ADE20KDataset' -data_root = 'data/ade/ADEChallengeData2016' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (512, 512) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', reduce_zero_label=True), - dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 512), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='AlignResize', keep_ratio=True, size_divisor=32), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=50, - dataset=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/cv/classification/repvit/pytorch/segmentation/configs/_base_/default_runtime.py b/cv/classification/repvit/pytorch/segmentation/configs/_base_/default_runtime.py deleted file mode 100644 index b564cc4e..00000000 --- a/cv/classification/repvit/pytorch/segmentation/configs/_base_/default_runtime.py +++ /dev/null @@ -1,14 +0,0 @@ -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook', by_epoch=False), - # dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] -cudnn_benchmark = True diff --git a/cv/classification/repvit/pytorch/segmentation/configs/_base_/models/fpn_r50.py b/cv/classification/repvit/pytorch/segmentation/configs/_base_/models/fpn_r50.py deleted file mode 100644 index 86ab327d..00000000 --- a/cv/classification/repvit/pytorch/segmentation/configs/_base_/models/fpn_r50.py +++ /dev/null @@ -1,36 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - 
pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 1, 1), - strides=(1, 2, 2, 2), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=4), - decode_head=dict( - type='FPNHead', - in_channels=[256, 256, 256, 256], - in_index=[0, 1, 2, 3], - feature_strides=[4, 8, 16, 32], - channels=128, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_160k.py b/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_160k.py deleted file mode 100644 index 52603890..00000000 --- a/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_160k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=160000) -checkpoint_config = dict(by_epoch=False, interval=16000) -evaluation = dict(interval=16000, metric='mIoU') diff --git a/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_20k.py b/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_20k.py deleted file mode 100644 index bf780a1b..00000000 --- a/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_20k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=20000) -checkpoint_config = dict(by_epoch=False, interval=2000) -evaluation = dict(interval=2000, metric='mIoU') diff --git a/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_40k.py b/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_40k.py deleted file mode 100644 index cdbf841a..00000000 --- a/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_40k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=40000) -checkpoint_config = dict(by_epoch=False, interval=4000) -evaluation = dict(interval=4000, metric='mIoU') diff --git a/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_80k.py b/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_80k.py deleted file mode 100644 index c190cee6..00000000 --- a/cv/classification/repvit/pytorch/segmentation/configs/_base_/schedules/schedule_80k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = 
dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=80000) -checkpoint_config = dict(by_epoch=False, interval=8000) -evaluation = dict(interval=8000, metric='mIoU') diff --git a/cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py b/cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py deleted file mode 100644 index a974105d..00000000 --- a/cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py +++ /dev/null @@ -1,30 +0,0 @@ -_base_ = [ - '../_base_/models/fpn_r50.py', - '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py' -] -# model settings -model = dict( - type='EncoderDecoder', - backbone=dict( - type='repvit_m1_1', - style='pytorch', - init_cfg=dict( - type='Pretrained', - checkpoint='pretrain/repvit_m1_1_distill_300e.pth', - ), - out_indices = [3,7,21,24] - ), - neck=dict(in_channels=[64, 128, 256, 512]), - decode_head=dict(num_classes=150)) - -gpu_multiples = 2 # we use 8 gpu instead of 4 in mmsegmentation, so lr*2 and max_iters/2 -# optimizer -optimizer = dict(type='AdamW', lr=0.0001 * gpu_multiples, weight_decay=0.0001) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-6, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=80000 // gpu_multiples) -checkpoint_config = dict(by_epoch=False, interval=8000 // gpu_multiples) -evaluation = dict(interval=8000 // gpu_multiples, metric='mIoU') diff --git a/cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m1_5_ade20k_40k.py b/cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m1_5_ade20k_40k.py deleted file mode 100644 index bd924f0c..00000000 --- a/cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m1_5_ade20k_40k.py +++ /dev/null @@ -1,30 +0,0 @@ -_base_ = [ - '../_base_/models/fpn_r50.py', - '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py' -] -# model settings -model = dict( - type='EncoderDecoder', - backbone=dict( - type='repvit_m1_5', - style='pytorch', - init_cfg=dict( - type='Pretrained', - checkpoint='pretrain/repvit_m1_5_distill_300e.pth', - ), - out_indices=[5, 11, 37, 42] - ), - neck=dict(in_channels=[64, 128, 256, 512]), - decode_head=dict(num_classes=150)) - -gpu_multiples = 2 # we use 8 gpu instead of 4 in mmsegmentation, so lr*2 and max_iters/2 -# optimizer -optimizer = dict(type='AdamW', lr=0.0001 * gpu_multiples, weight_decay=0.0001) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-6, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=80000 // gpu_multiples) -checkpoint_config = dict(by_epoch=False, interval=8000 // gpu_multiples) -evaluation = dict(interval=8000 // gpu_multiples, metric='mIoU') diff --git a/cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m2_3_ade20k_40k.py b/cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m2_3_ade20k_40k.py deleted file mode 100644 index 7633b01e..00000000 --- a/cv/classification/repvit/pytorch/segmentation/configs/sem_fpn/fpn_repvit_m2_3_ade20k_40k.py +++ /dev/null @@ -1,30 +0,0 @@ -_base_ = [ - '../_base_/models/fpn_r50.py', - '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py' -] -# model settings -model = dict( - type='EncoderDecoder', - backbone=dict( - type='repvit_m2_3', - 
style='pytorch', - init_cfg=dict( - type='Pretrained', - checkpoint='pretrain/repvit_m2_3_distill_450e.pth', - ), - out_indices=[7, 15, 51, 54] - ), - neck=dict(in_channels=[80, 160, 320, 640]), - decode_head=dict(num_classes=150)) - -gpu_multiples = 2 # we use 8 gpu instead of 4 in mmsegmentation, so lr*2 and max_iters/2 -# optimizer -optimizer = dict(type='AdamW', lr=0.0001 * gpu_multiples, weight_decay=0.0001) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-6, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=80000 // gpu_multiples) -checkpoint_config = dict(by_epoch=False, interval=8000 // gpu_multiples) -evaluation = dict(interval=8000 // gpu_multiples, metric='mIoU') diff --git a/cv/classification/repvit/pytorch/segmentation/eval.sh b/cv/classification/repvit/pytorch/segmentation/eval.sh deleted file mode 100644 index d791888b..00000000 --- a/cv/classification/repvit/pytorch/segmentation/eval.sh +++ /dev/null @@ -1 +0,0 @@ -PORT=12345 ./tools/dist_test.sh configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py seg_pretrain/repvit_m1_1_ade20k.pth 8 --eval mIoU \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/segmentation/logs/repvit_m1_1_ade20k.json b/cv/classification/repvit/pytorch/segmentation/logs/repvit_m1_1_ade20k.json deleted file mode 100644 index 2f976538..00000000 --- a/cv/classification/repvit/pytorch/segmentation/logs/repvit_m1_1_ade20k.json +++ /dev/null @@ -1,811 +0,0 @@ -{"env_info": "sys.platform: linux\nPython: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]\nCUDA available: True\nGPU 0,1,2,3,4,5,6,7: NVIDIA GeForce RTX 3090\nCUDA_HOME: /usr/local/cuda\nNVCC: Cuda compilation tools, release 11.3, V11.3.109\nGCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0\nPyTorch: 2.0.1+cu117\nPyTorch compiling details: PyTorch built with:\n - GCC 9.3\n - C++ Version: 201703\n - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\n - LAPACK is enabled (usually provided by MKL)\n - NNPACK is enabled\n - CPU capability usage: AVX2\n - CUDA Runtime 11.7\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\n - CuDNN 8.9.2 (built against CUDA 12.1)\n - Built with CuDNN 8.5\n - Magma 2.6.1\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n\nTorchVision: 0.15.2+cu117\nOpenCV: 4.8.0\nMMCV: 1.7.1\nMMCV Compiler: GCC 9.4\nMMCV CUDA Compiler: 11.3\nMMSegmentation: 0.30.0+25971f7", "seed": null, "exp_name": "fpn_repvit_m1_1_ade20k_40k.py", "mmseg_version": "0.30.0+25971f7", "config": "norm_cfg = dict(type='SyncBN', requires_grad=True)\nmodel = dict(\n type='EncoderDecoder',\n pretrained='open-mmlab://resnet50_v1c',\n backbone=dict(\n type='repvit_m1_1',\n depth=50,\n num_stages=4,\n out_indices=[3, 7, 21, 24],\n dilations=(1, 1, 1, 1),\n strides=(1, 2, 2, 2),\n norm_cfg=dict(type='SyncBN', requires_grad=True),\n norm_eval=False,\n style='pytorch',\n contract_dilation=True,\n init_cfg=dict(\n type='Pretrained',\n checkpoint='pretrain/repvit_m1_1_distill_300e.pth'),\n pretrained='open-mmlab://resnet50_v1c'),\n neck=dict(\n type='FPN',\n in_channels=[64, 128, 256, 512],\n out_channels=256,\n num_outs=4),\n decode_head=dict(\n type='FPNHead',\n in_channels=[256, 256, 256, 256],\n in_index=[0, 1, 2, 3],\n feature_strides=[4, 8, 16, 32],\n channels=128,\n dropout_ratio=0.1,\n num_classes=150,\n norm_cfg=dict(type='SyncBN', requires_grad=True),\n align_corners=False,\n loss_decode=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),\n train_cfg=dict(),\n test_cfg=dict(mode='whole'))\ndataset_type = 'ADE20KDataset'\ndata_root = 'data/ade/ADEChallengeData2016'\nimg_norm_cfg = dict(\n mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)\ncrop_size = (512, 512)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', reduce_zero_label=True),\n dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),\n dict(type='RandomCrop', 
crop_size=(512, 512), cat_max_ratio=0.75),\n dict(type='RandomFlip', prob=0.5),\n dict(type='PhotoMetricDistortion'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_semantic_seg'])\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(2048, 512),\n flip=False,\n transforms=[\n dict(type='AlignResize', keep_ratio=True, size_divisor=32),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n]\ndata = dict(\n samples_per_gpu=4,\n workers_per_gpu=4,\n train=dict(\n type='RepeatDataset',\n times=50,\n dataset=dict(\n type='ADE20KDataset',\n data_root='data/ade/ADEChallengeData2016',\n img_dir='images/training',\n ann_dir='annotations/training',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', reduce_zero_label=True),\n dict(\n type='Resize',\n img_scale=(2048, 512),\n ratio_range=(0.5, 2.0)),\n dict(\n type='RandomCrop',\n crop_size=(512, 512),\n cat_max_ratio=0.75),\n dict(type='RandomFlip', prob=0.5),\n dict(type='PhotoMetricDistortion'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_semantic_seg'])\n ])),\n val=dict(\n type='ADE20KDataset',\n data_root='data/ade/ADEChallengeData2016',\n img_dir='images/validation',\n ann_dir='annotations/validation',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(2048, 512),\n flip=False,\n transforms=[\n dict(type='AlignResize', keep_ratio=True, size_divisor=32),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]),\n test=dict(\n type='ADE20KDataset',\n data_root='data/ade/ADEChallengeData2016',\n img_dir='images/validation',\n ann_dir='annotations/validation',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(2048, 512),\n flip=False,\n transforms=[\n dict(type='AlignResize', keep_ratio=True, size_divisor=32),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]))\nlog_config = dict(\n interval=50, hooks=[dict(type='TextLoggerHook', by_epoch=False)])\ndist_params = dict(backend='nccl')\nlog_level = 'INFO'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\ncudnn_benchmark = True\ngpu_multiples = 2\noptimizer = dict(type='AdamW', lr=0.0002, weight_decay=0.0001)\noptimizer_config = dict()\nlr_config = dict(policy='poly', power=0.9, min_lr=1e-06, by_epoch=False)\nrunner = dict(type='IterBasedRunner', max_iters=40000)\ncheckpoint_config = dict(by_epoch=False, interval=4000)\nevaluation = dict(interval=4000, metric='mIoU')\nwork_dir = './work_dirs/fpn_repvit_m1_1_ade20k_40k'\ngpu_ids = range(0, 8)\ndevice = 'cuda'\nseed = None\n", "CLASSES": ["wall", "building", "sky", 
"floor", "tree", "ceiling", "road", "bed ", "windowpane", "grass", "cabinet", "sidewalk", "person", "earth", "door", "table", "mountain", "plant", "curtain", "chair", "car", "water", "painting", "sofa", "shelf", "house", "sea", "mirror", "rug", "field", "armchair", "seat", "fence", "desk", "rock", "wardrobe", "lamp", "bathtub", "railing", "cushion", "base", "box", "column", "signboard", "chest of drawers", "counter", "sand", "sink", "skyscraper", "fireplace", "refrigerator", "grandstand", "path", "stairs", "runway", "case", "pool table", "pillow", "screen door", "stairway", "river", "bridge", "bookcase", "blind", "coffee table", "toilet", "flower", "book", "hill", "bench", "countertop", "stove", "palm", "kitchen island", "computer", "swivel chair", "boat", "bar", "arcade machine", "hovel", "bus", "towel", "light", "truck", "tower", "chandelier", "awning", "streetlight", "booth", "television receiver", "airplane", "dirt track", "apparel", "pole", "land", "bannister", "escalator", "ottoman", "bottle", "buffet", "poster", "stage", "van", "ship", "fountain", "conveyer belt", "canopy", "washer", "plaything", "swimming pool", "stool", "barrel", "basket", "waterfall", "tent", "bag", "minibike", "cradle", "oven", "ball", "food", "step", "tank", "trade name", "microwave", "pot", "animal", "bicycle", "lake", "dishwasher", "screen", "blanket", "sculpture", "hood", "sconce", "vase", "traffic light", "tray", "ashcan", "fan", "pier", "crt screen", "plate", "monitor", "bulletin board", "shower", "radiator", "glass", "clock", "flag"], "PALETTE": [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], [255, 255, 0], [0, 153, 255], [0, 41, 
255], [0, 255, 204], [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], [102, 255, 0], [92, 0, 255]]} -{"mode": "train", "epoch": 1, "iter": 50, "lr": 0.0002, "memory": 4602, "data_time": 0.02081, "decode.loss_ce": 2.6497, "decode.acc_seg": 41.65224, "loss": 2.6497, "time": 0.26173} -{"mode": "train", "epoch": 1, "iter": 100, "lr": 0.0002, "memory": 4602, "data_time": 0.01155, "decode.loss_ce": 1.82189, "decode.acc_seg": 52.23696, "loss": 1.82189, "time": 0.14413} -{"mode": "train", "epoch": 1, "iter": 150, "lr": 0.0002, "memory": 4602, "data_time": 0.01169, "decode.loss_ce": 1.65146, "decode.acc_seg": 54.48229, "loss": 1.65146, "time": 0.14572} -{"mode": "train", "epoch": 1, "iter": 200, "lr": 0.0002, "memory": 4602, "data_time": 0.01206, "decode.loss_ce": 1.58567, "decode.acc_seg": 55.80722, "loss": 1.58567, "time": 0.147} -{"mode": "train", "epoch": 1, "iter": 250, "lr": 0.0002, "memory": 4602, "data_time": 0.01235, "decode.loss_ce": 1.46564, "decode.acc_seg": 58.12866, "loss": 1.46564, "time": 0.14678} -{"mode": "train", "epoch": 1, "iter": 300, "lr": 0.0002, "memory": 4602, "data_time": 0.01192, "decode.loss_ce": 1.41791, "decode.acc_seg": 58.3628, "loss": 1.41791, "time": 0.15242} -{"mode": "train", "epoch": 1, "iter": 350, "lr": 0.0002, "memory": 4602, "data_time": 0.01282, "decode.loss_ce": 1.36668, "decode.acc_seg": 59.37099, "loss": 1.36668, "time": 0.14844} -{"mode": "train", "epoch": 1, "iter": 400, "lr": 0.0002, "memory": 4602, "data_time": 0.01144, "decode.loss_ce": 1.32435, "decode.acc_seg": 60.02076, "loss": 1.32435, "time": 0.14463} -{"mode": "train", "epoch": 1, "iter": 450, "lr": 0.0002, "memory": 4602, "data_time": 0.0119, "decode.loss_ce": 1.29526, "decode.acc_seg": 60.92618, "loss": 1.29526, "time": 0.14629} -{"mode": "train", "epoch": 1, "iter": 500, "lr": 0.0002, "memory": 4602, "data_time": 0.01224, "decode.loss_ce": 1.26483, "decode.acc_seg": 61.18323, "loss": 1.26483, "time": 0.14355} -{"mode": "train", "epoch": 1, "iter": 550, "lr": 0.0002, "memory": 4602, "data_time": 0.01197, "decode.loss_ce": 1.21542, "decode.acc_seg": 62.60061, "loss": 1.21542, "time": 0.1471} -{"mode": "train", "epoch": 1, "iter": 600, "lr": 0.0002, "memory": 4602, "data_time": 0.01182, "decode.loss_ce": 1.18816, "decode.acc_seg": 63.25093, "loss": 1.18816, "time": 0.149} -{"mode": "train", "epoch": 1, "iter": 650, "lr": 0.0002, "memory": 4602, "data_time": 0.01184, "decode.loss_ce": 1.144, "decode.acc_seg": 64.4275, "loss": 1.144, "time": 0.14848} -{"mode": "train", "epoch": 1, "iter": 700, "lr": 0.0002, "memory": 4602, "data_time": 0.01208, "decode.loss_ce": 1.14273, "decode.acc_seg": 64.00464, "loss": 1.14273, "time": 0.14749} -{"mode": "train", "epoch": 1, "iter": 750, "lr": 0.0002, "memory": 4602, "data_time": 0.01166, "decode.loss_ce": 1.12749, "decode.acc_seg": 64.32151, "loss": 1.12749, "time": 0.14427} -{"mode": "train", "epoch": 1, "iter": 800, "lr": 0.0002, "memory": 4602, "data_time": 0.01176, "decode.loss_ce": 1.123, "decode.acc_seg": 64.28081, "loss": 1.123, "time": 0.14909} -{"mode": "train", "epoch": 1, "iter": 850, "lr": 0.0002, "memory": 4602, "data_time": 0.01174, "decode.loss_ce": 1.15027, "decode.acc_seg": 63.73398, "loss": 1.15027, "time": 0.14744} -{"mode": "train", "epoch": 1, "iter": 900, "lr": 0.0002, "memory": 4602, "data_time": 0.01243, "decode.loss_ce": 1.06738, "decode.acc_seg": 65.40488, "loss": 1.06738, "time": 0.14621} -{"mode": 
"train", "epoch": 1, "iter": 950, "lr": 0.0002, "memory": 4602, "data_time": 0.0123, "decode.loss_ce": 1.07914, "decode.acc_seg": 64.97744, "loss": 1.07914, "time": 0.14902} -{"mode": "train", "epoch": 1, "iter": 1000, "lr": 0.0002, "memory": 4602, "data_time": 0.01184, "decode.loss_ce": 1.08334, "decode.acc_seg": 64.41407, "loss": 1.08334, "time": 0.14909} -{"mode": "train", "epoch": 1, "iter": 1050, "lr": 0.0002, "memory": 4602, "data_time": 0.0117, "decode.loss_ce": 1.04855, "decode.acc_seg": 66.13308, "loss": 1.04855, "time": 0.14521} -{"mode": "train", "epoch": 1, "iter": 1100, "lr": 0.0002, "memory": 4602, "data_time": 0.01244, "decode.loss_ce": 1.03966, "decode.acc_seg": 65.48211, "loss": 1.03966, "time": 0.1455} -{"mode": "train", "epoch": 1, "iter": 1150, "lr": 0.00019, "memory": 4602, "data_time": 0.01166, "decode.loss_ce": 1.025, "decode.acc_seg": 66.21056, "loss": 1.025, "time": 0.14383} -{"mode": "train", "epoch": 1, "iter": 1200, "lr": 0.00019, "memory": 4602, "data_time": 0.01172, "decode.loss_ce": 1.00329, "decode.acc_seg": 66.41818, "loss": 1.00329, "time": 0.14435} -{"mode": "train", "epoch": 1, "iter": 1250, "lr": 0.00019, "memory": 4602, "data_time": 0.01239, "decode.loss_ce": 1.01784, "decode.acc_seg": 66.34611, "loss": 1.01784, "time": 0.14397} -{"mode": "train", "epoch": 1, "iter": 1300, "lr": 0.00019, "memory": 4602, "data_time": 0.01194, "decode.loss_ce": 1.03484, "decode.acc_seg": 66.21362, "loss": 1.03484, "time": 0.14346} -{"mode": "train", "epoch": 1, "iter": 1350, "lr": 0.00019, "memory": 4602, "data_time": 0.01164, "decode.loss_ce": 1.00596, "decode.acc_seg": 67.16723, "loss": 1.00596, "time": 0.1473} -{"mode": "train", "epoch": 1, "iter": 1400, "lr": 0.00019, "memory": 4602, "data_time": 0.01226, "decode.loss_ce": 1.01394, "decode.acc_seg": 66.44373, "loss": 1.01394, "time": 0.14725} -{"mode": "train", "epoch": 1, "iter": 1450, "lr": 0.00019, "memory": 4602, "data_time": 0.01143, "decode.loss_ce": 1.02537, "decode.acc_seg": 65.86423, "loss": 1.02537, "time": 0.14799} -{"mode": "train", "epoch": 1, "iter": 1500, "lr": 0.00019, "memory": 4602, "data_time": 0.01149, "decode.loss_ce": 0.97505, "decode.acc_seg": 68.02438, "loss": 0.97505, "time": 0.14169} -{"mode": "train", "epoch": 1, "iter": 1550, "lr": 0.00019, "memory": 4602, "data_time": 0.01173, "decode.loss_ce": 0.93636, "decode.acc_seg": 68.66424, "loss": 0.93636, "time": 0.14285} -{"mode": "train", "epoch": 1, "iter": 1600, "lr": 0.00019, "memory": 4602, "data_time": 0.01202, "decode.loss_ce": 0.98648, "decode.acc_seg": 67.53567, "loss": 0.98648, "time": 0.14563} -{"mode": "train", "epoch": 1, "iter": 1650, "lr": 0.00019, "memory": 4602, "data_time": 0.01169, "decode.loss_ce": 0.95322, "decode.acc_seg": 67.9739, "loss": 0.95322, "time": 0.14569} -{"mode": "train", "epoch": 1, "iter": 1700, "lr": 0.00019, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.98929, "decode.acc_seg": 67.35134, "loss": 0.98929, "time": 0.14689} -{"mode": "train", "epoch": 1, "iter": 1750, "lr": 0.00019, "memory": 4602, "data_time": 0.01194, "decode.loss_ce": 0.95203, "decode.acc_seg": 67.6923, "loss": 0.95203, "time": 0.14674} -{"mode": "train", "epoch": 1, "iter": 1800, "lr": 0.00019, "memory": 4602, "data_time": 0.01158, "decode.loss_ce": 0.93027, "decode.acc_seg": 68.05219, "loss": 0.93027, "time": 0.14643} -{"mode": "train", "epoch": 1, "iter": 1850, "lr": 0.00019, "memory": 4602, "data_time": 0.01183, "decode.loss_ce": 0.95148, "decode.acc_seg": 68.2812, "loss": 0.95148, "time": 0.14844} -{"mode": "train", "epoch": 
1, "iter": 1900, "lr": 0.00019, "memory": 4602, "data_time": 0.01198, "decode.loss_ce": 0.90866, "decode.acc_seg": 68.88535, "loss": 0.90866, "time": 0.14679} -{"mode": "train", "epoch": 1, "iter": 1950, "lr": 0.00019, "memory": 4602, "data_time": 0.01184, "decode.loss_ce": 0.92403, "decode.acc_seg": 68.35143, "loss": 0.92403, "time": 0.14394} -{"mode": "train", "epoch": 1, "iter": 2000, "lr": 0.00019, "memory": 4602, "data_time": 0.01171, "decode.loss_ce": 0.89716, "decode.acc_seg": 69.17555, "loss": 0.89716, "time": 0.14472} -{"mode": "train", "epoch": 1, "iter": 2050, "lr": 0.00019, "memory": 4602, "data_time": 0.01147, "decode.loss_ce": 0.92293, "decode.acc_seg": 68.79348, "loss": 0.92293, "time": 0.14612} -{"mode": "train", "epoch": 1, "iter": 2100, "lr": 0.00019, "memory": 4602, "data_time": 0.01205, "decode.loss_ce": 0.89107, "decode.acc_seg": 69.28813, "loss": 0.89107, "time": 0.14535} -{"mode": "train", "epoch": 1, "iter": 2150, "lr": 0.00019, "memory": 4602, "data_time": 0.01192, "decode.loss_ce": 0.87474, "decode.acc_seg": 69.72538, "loss": 0.87474, "time": 0.14621} -{"mode": "train", "epoch": 1, "iter": 2200, "lr": 0.00019, "memory": 4602, "data_time": 0.0124, "decode.loss_ce": 0.91419, "decode.acc_seg": 68.77643, "loss": 0.91419, "time": 0.1452} -{"mode": "train", "epoch": 1, "iter": 2250, "lr": 0.00019, "memory": 4602, "data_time": 0.01169, "decode.loss_ce": 0.89607, "decode.acc_seg": 69.09409, "loss": 0.89607, "time": 0.1465} -{"mode": "train", "epoch": 1, "iter": 2300, "lr": 0.00019, "memory": 4602, "data_time": 0.01197, "decode.loss_ce": 0.88858, "decode.acc_seg": 68.49078, "loss": 0.88858, "time": 0.14534} -{"mode": "train", "epoch": 1, "iter": 2350, "lr": 0.00019, "memory": 4602, "data_time": 0.01229, "decode.loss_ce": 0.907, "decode.acc_seg": 69.34269, "loss": 0.907, "time": 0.14205} -{"mode": "train", "epoch": 1, "iter": 2400, "lr": 0.00019, "memory": 4602, "data_time": 0.0117, "decode.loss_ce": 0.86919, "decode.acc_seg": 70.47127, "loss": 0.86919, "time": 0.14488} -{"mode": "train", "epoch": 1, "iter": 2450, "lr": 0.00019, "memory": 4602, "data_time": 0.01188, "decode.loss_ce": 0.83664, "decode.acc_seg": 71.16424, "loss": 0.83664, "time": 0.14218} -{"mode": "train", "epoch": 1, "iter": 2500, "lr": 0.00019, "memory": 4602, "data_time": 0.01176, "decode.loss_ce": 0.84883, "decode.acc_seg": 70.59153, "loss": 0.84883, "time": 0.14199} -{"mode": "train", "epoch": 1, "iter": 2550, "lr": 0.00019, "memory": 4602, "data_time": 0.01159, "decode.loss_ce": 0.88687, "decode.acc_seg": 70.36678, "loss": 0.88687, "time": 0.14619} -{"mode": "train", "epoch": 1, "iter": 2600, "lr": 0.00019, "memory": 4602, "data_time": 0.01255, "decode.loss_ce": 0.85285, "decode.acc_seg": 70.47846, "loss": 0.85285, "time": 0.15068} -{"mode": "train", "epoch": 1, "iter": 2650, "lr": 0.00019, "memory": 4602, "data_time": 0.01188, "decode.loss_ce": 0.89643, "decode.acc_seg": 69.39845, "loss": 0.89643, "time": 0.1425} -{"mode": "train", "epoch": 1, "iter": 2700, "lr": 0.00019, "memory": 4602, "data_time": 0.01178, "decode.loss_ce": 0.85991, "decode.acc_seg": 70.34168, "loss": 0.85991, "time": 0.14338} -{"mode": "train", "epoch": 1, "iter": 2750, "lr": 0.00019, "memory": 4602, "data_time": 0.01175, "decode.loss_ce": 0.86727, "decode.acc_seg": 69.92831, "loss": 0.86727, "time": 0.14273} -{"mode": "train", "epoch": 1, "iter": 2800, "lr": 0.00019, "memory": 4602, "data_time": 0.0116, "decode.loss_ce": 0.86903, "decode.acc_seg": 69.84334, "loss": 0.86903, "time": 0.14557} -{"mode": "train", "epoch": 1, "iter": 
2850, "lr": 0.00019, "memory": 4602, "data_time": 0.01154, "decode.loss_ce": 0.85788, "decode.acc_seg": 70.47497, "loss": 0.85788, "time": 0.14381} -{"mode": "train", "epoch": 1, "iter": 2900, "lr": 0.00019, "memory": 4602, "data_time": 0.01228, "decode.loss_ce": 0.86376, "decode.acc_seg": 70.15885, "loss": 0.86376, "time": 0.14446} -{"mode": "train", "epoch": 1, "iter": 2950, "lr": 0.00019, "memory": 4602, "data_time": 0.01154, "decode.loss_ce": 0.86481, "decode.acc_seg": 69.86787, "loss": 0.86481, "time": 0.14225} -{"mode": "train", "epoch": 1, "iter": 3000, "lr": 0.00019, "memory": 4602, "data_time": 0.0121, "decode.loss_ce": 0.87053, "decode.acc_seg": 70.5118, "loss": 0.87053, "time": 0.14453} -{"mode": "train", "epoch": 1, "iter": 3050, "lr": 0.00019, "memory": 4602, "data_time": 0.01235, "decode.loss_ce": 0.82726, "decode.acc_seg": 70.96837, "loss": 0.82726, "time": 0.1453} -{"mode": "train", "epoch": 1, "iter": 3100, "lr": 0.00019, "memory": 4602, "data_time": 0.01093, "decode.loss_ce": 0.79418, "decode.acc_seg": 72.10297, "loss": 0.79418, "time": 0.14396} -{"mode": "train", "epoch": 1, "iter": 3150, "lr": 0.00019, "memory": 4602, "data_time": 0.01164, "decode.loss_ce": 0.80829, "decode.acc_seg": 71.30203, "loss": 0.80829, "time": 0.14352} -{"mode": "train", "epoch": 1, "iter": 3200, "lr": 0.00019, "memory": 4602, "data_time": 0.01208, "decode.loss_ce": 0.8485, "decode.acc_seg": 70.747, "loss": 0.8485, "time": 0.14427} -{"mode": "train", "epoch": 1, "iter": 3250, "lr": 0.00019, "memory": 4602, "data_time": 0.01174, "decode.loss_ce": 0.82958, "decode.acc_seg": 71.07425, "loss": 0.82958, "time": 0.14222} -{"mode": "train", "epoch": 1, "iter": 3300, "lr": 0.00019, "memory": 4602, "data_time": 0.01229, "decode.loss_ce": 0.83865, "decode.acc_seg": 70.38798, "loss": 0.83865, "time": 0.14703} -{"mode": "train", "epoch": 1, "iter": 3350, "lr": 0.00018, "memory": 4602, "data_time": 0.01223, "decode.loss_ce": 0.8524, "decode.acc_seg": 70.59268, "loss": 0.8524, "time": 0.14381} -{"mode": "train", "epoch": 1, "iter": 3400, "lr": 0.00018, "memory": 4602, "data_time": 0.01171, "decode.loss_ce": 0.81754, "decode.acc_seg": 71.32221, "loss": 0.81754, "time": 0.1418} -{"mode": "train", "epoch": 1, "iter": 3450, "lr": 0.00018, "memory": 4602, "data_time": 0.01318, "decode.loss_ce": 0.80687, "decode.acc_seg": 71.63051, "loss": 0.80687, "time": 0.14543} -{"mode": "train", "epoch": 1, "iter": 3500, "lr": 0.00018, "memory": 4602, "data_time": 0.01264, "decode.loss_ce": 0.80162, "decode.acc_seg": 72.27138, "loss": 0.80162, "time": 0.14522} -{"mode": "train", "epoch": 1, "iter": 3550, "lr": 0.00018, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.81815, "decode.acc_seg": 71.03255, "loss": 0.81815, "time": 0.14861} -{"mode": "train", "epoch": 1, "iter": 3600, "lr": 0.00018, "memory": 4602, "data_time": 0.01177, "decode.loss_ce": 0.85296, "decode.acc_seg": 70.06105, "loss": 0.85296, "time": 0.14506} -{"mode": "train", "epoch": 1, "iter": 3650, "lr": 0.00018, "memory": 4602, "data_time": 0.01253, "decode.loss_ce": 0.82834, "decode.acc_seg": 70.95969, "loss": 0.82834, "time": 0.14413} -{"mode": "train", "epoch": 1, "iter": 3700, "lr": 0.00018, "memory": 4602, "data_time": 0.01263, "decode.loss_ce": 0.80228, "decode.acc_seg": 71.56616, "loss": 0.80228, "time": 0.14523} -{"mode": "train", "epoch": 1, "iter": 3750, "lr": 0.00018, "memory": 4602, "data_time": 0.01208, "decode.loss_ce": 0.80184, "decode.acc_seg": 71.93569, "loss": 0.80184, "time": 0.14215} -{"mode": "train", "epoch": 1, "iter": 3800, "lr": 
0.00018, "memory": 4602, "data_time": 0.01156, "decode.loss_ce": 0.78263, "decode.acc_seg": 72.22106, "loss": 0.78263, "time": 0.14634} -{"mode": "train", "epoch": 1, "iter": 3850, "lr": 0.00018, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.80507, "decode.acc_seg": 71.66821, "loss": 0.80507, "time": 0.14337} -{"mode": "train", "epoch": 1, "iter": 3900, "lr": 0.00018, "memory": 4602, "data_time": 0.0126, "decode.loss_ce": 0.79404, "decode.acc_seg": 71.97677, "loss": 0.79404, "time": 0.16024} -{"mode": "train", "epoch": 1, "iter": 3950, "lr": 0.00018, "memory": 4602, "data_time": 0.01449, "decode.loss_ce": 0.77911, "decode.acc_seg": 71.83132, "loss": 0.77911, "time": 0.16185} -{"mode": "train", "epoch": 1, "iter": 4000, "lr": 0.00018, "memory": 4602, "data_time": 0.01421, "decode.loss_ce": 0.79294, "decode.acc_seg": 71.87616, "loss": 0.79294, "time": 0.17604} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00018, "aAcc": 0.7504, "mIoU": 0.2859, "mAcc": 0.3741, "IoU.wall": 0.6619, "IoU.building": 0.7688, "IoU.sky": 0.9224, "IoU.floor": 0.722, "IoU.tree": 0.6678, "IoU.ceiling": 0.7438, "IoU.road": 0.719, "IoU.bed ": 0.7498, "IoU.windowpane": 0.4883, "IoU.grass": 0.6469, "IoU.cabinet": 0.471, "IoU.sidewalk": 0.5043, "IoU.person": 0.7272, "IoU.earth": 0.2496, "IoU.door": 0.2541, "IoU.table": 0.3874, "IoU.mountain": 0.5055, "IoU.plant": 0.4447, "IoU.curtain": 0.5743, "IoU.chair": 0.405, "IoU.car": 0.775, "IoU.water": 0.4135, "IoU.painting": 0.5855, "IoU.sofa": 0.4436, "IoU.shelf": 0.3299, "IoU.house": 0.2338, "IoU.sea": 0.5054, "IoU.mirror": 0.2729, "IoU.rug": 0.3821, "IoU.field": 0.1773, "IoU.armchair": 0.2031, "IoU.seat": 0.4671, "IoU.fence": 0.2621, "IoU.desk": 0.2379, "IoU.rock": 0.3369, "IoU.wardrobe": 0.3815, "IoU.lamp": 0.4627, "IoU.bathtub": 0.426, "IoU.railing": 0.2262, "IoU.cushion": 0.3525, "IoU.base": 0.0253, "IoU.box": 0.0941, "IoU.column": 0.2432, "IoU.signboard": 0.2461, "IoU.chest of drawers": 0.2875, "IoU.counter": 0.2092, "IoU.sand": 0.2927, "IoU.sink": 0.4624, "IoU.skyscraper": 0.5487, "IoU.fireplace": 0.4761, "IoU.refrigerator": 0.4272, "IoU.grandstand": 0.297, "IoU.path": 0.0518, "IoU.stairs": 0.1418, "IoU.runway": 0.6054, "IoU.case": 0.3614, "IoU.pool table": 0.8126, "IoU.pillow": 0.2709, "IoU.screen door": 0.3647, "IoU.stairway": 0.1564, "IoU.river": 0.0642, "IoU.bridge": 0.2037, "IoU.bookcase": 0.2394, "IoU.blind": 0.0879, "IoU.coffee table": 0.396, "IoU.toilet": 0.6776, "IoU.flower": 0.2952, "IoU.book": 0.3911, "IoU.hill": 0.0208, "IoU.bench": 0.3306, "IoU.countertop": 0.3433, "IoU.stove": 0.4416, "IoU.palm": 0.3658, "IoU.kitchen island": 0.1612, "IoU.computer": 0.4723, "IoU.swivel chair": 0.2386, "IoU.boat": 0.5058, "IoU.bar": 0.1354, "IoU.arcade machine": 0.1578, "IoU.hovel": 0.0801, "IoU.bus": 0.6628, "IoU.towel": 0.355, "IoU.light": 0.1691, "IoU.truck": 0.1104, "IoU.tower": 0.001, "IoU.chandelier": 0.4746, "IoU.awning": 0.0479, "IoU.streetlight": 0.091, "IoU.booth": 0.005, "IoU.television receiver": 0.5257, "IoU.airplane": 0.4805, "IoU.dirt track": 0.0, "IoU.apparel": 0.2585, "IoU.pole": 0.0402, "IoU.land": 0.0075, "IoU.bannister": 0.0005, "IoU.escalator": 0.0274, "IoU.ottoman": 0.0325, "IoU.bottle": 0.1849, "IoU.buffet": 0.0265, "IoU.poster": 0.0, "IoU.stage": 0.0473, "IoU.van": 0.1065, "IoU.ship": 0.0505, "IoU.fountain": 0.0117, "IoU.conveyer belt": 0.2456, "IoU.canopy": 0.0127, "IoU.washer": 0.4734, "IoU.plaything": 0.044, "IoU.swimming pool": 0.2113, "IoU.stool": 0.1007, "IoU.barrel": 0.308, "IoU.basket": 0.093, "IoU.waterfall": 0.4982, "IoU.tent": 
0.8122, "IoU.bag": 0.0006, "IoU.minibike": 0.4043, "IoU.cradle": 0.5634, "IoU.oven": 0.0281, "IoU.ball": 0.2391, "IoU.food": 0.3291, "IoU.step": 0.0, "IoU.tank": 0.1523, "IoU.trade name": 0.0415, "IoU.microwave": 0.2232, "IoU.pot": 0.1554, "IoU.animal": 0.4302, "IoU.bicycle": 0.3961, "IoU.lake": 0.0, "IoU.dishwasher": 0.1605, "IoU.screen": 0.4835, "IoU.blanket": 0.0, "IoU.sculpture": 0.1026, "IoU.hood": 0.1216, "IoU.sconce": 0.023, "IoU.vase": 0.1456, "IoU.traffic light": 0.0368, "IoU.tray": 0.0005, "IoU.ashcan": 0.1771, "IoU.fan": 0.1857, "IoU.pier": 0.0721, "IoU.crt screen": 0.0, "IoU.plate": 0.2599, "IoU.monitor": 0.2661, "IoU.bulletin board": 0.1009, "IoU.shower": 0.0, "IoU.radiator": 0.0495, "IoU.glass": 0.0006, "IoU.clock": 0.1044, "IoU.flag": 0.0358, "Acc.wall": 0.8399, "Acc.building": 0.9218, "Acc.sky": 0.9605, "Acc.floor": 0.8824, "Acc.tree": 0.8466, "Acc.ceiling": 0.8644, "Acc.road": 0.7823, "Acc.bed ": 0.9134, "Acc.windowpane": 0.6763, "Acc.grass": 0.8405, "Acc.cabinet": 0.6623, "Acc.sidewalk": 0.855, "Acc.person": 0.8628, "Acc.earth": 0.3003, "Acc.door": 0.3275, "Acc.table": 0.5408, "Acc.mountain": 0.6662, "Acc.plant": 0.5954, "Acc.curtain": 0.7356, "Acc.chair": 0.6213, "Acc.car": 0.8775, "Acc.water": 0.5484, "Acc.painting": 0.7753, "Acc.sofa": 0.6122, "Acc.shelf": 0.6184, "Acc.house": 0.2623, "Acc.sea": 0.838, "Acc.mirror": 0.3881, "Acc.rug": 0.4389, "Acc.field": 0.2696, "Acc.armchair": 0.3375, "Acc.seat": 0.6251, "Acc.fence": 0.345, "Acc.desk": 0.3143, "Acc.rock": 0.5558, "Acc.wardrobe": 0.7034, "Acc.lamp": 0.6421, "Acc.bathtub": 0.4738, "Acc.railing": 0.2575, "Acc.cushion": 0.6255, "Acc.base": 0.0282, "Acc.box": 0.1177, "Acc.column": 0.2689, "Acc.signboard": 0.3204, "Acc.chest of drawers": 0.3555, "Acc.counter": 0.3466, "Acc.sand": 0.3541, "Acc.sink": 0.5757, "Acc.skyscraper": 0.7308, "Acc.fireplace": 0.6255, "Acc.refrigerator": 0.6092, "Acc.grandstand": 0.5272, "Acc.path": 0.0618, "Acc.stairs": 0.1457, "Acc.runway": 0.8916, "Acc.case": 0.5663, "Acc.pool table": 0.9474, "Acc.pillow": 0.3056, "Acc.screen door": 0.5347, "Acc.stairway": 0.2032, "Acc.river": 0.0802, "Acc.bridge": 0.2403, "Acc.bookcase": 0.3293, "Acc.blind": 0.0922, "Acc.coffee table": 0.6566, "Acc.toilet": 0.7664, "Acc.flower": 0.399, "Acc.book": 0.535, "Acc.hill": 0.022, "Acc.bench": 0.3753, "Acc.countertop": 0.4634, "Acc.stove": 0.6864, "Acc.palm": 0.4188, "Acc.kitchen island": 0.2629, "Acc.computer": 0.5702, "Acc.swivel chair": 0.3472, "Acc.boat": 0.6073, "Acc.bar": 0.1622, "Acc.arcade machine": 0.1877, "Acc.hovel": 0.1064, "Acc.bus": 0.8977, "Acc.towel": 0.5271, "Acc.light": 0.1848, "Acc.truck": 0.1575, "Acc.tower": 0.001, "Acc.chandelier": 0.6786, "Acc.awning": 0.0541, "Acc.streetlight": 0.1125, "Acc.booth": 0.005, "Acc.television receiver": 0.6464, "Acc.airplane": 0.6266, "Acc.dirt track": 0.0, "Acc.apparel": 0.3674, "Acc.pole": 0.0456, "Acc.land": 0.0076, "Acc.bannister": 0.0005, "Acc.escalator": 0.0337, "Acc.ottoman": 0.0332, "Acc.bottle": 0.2667, "Acc.buffet": 0.0266, "Acc.poster": 0.0, "Acc.stage": 0.0843, "Acc.van": 0.1256, "Acc.ship": 0.0512, "Acc.fountain": 0.012, "Acc.conveyer belt": 0.3202, "Acc.canopy": 0.0128, "Acc.washer": 0.56, "Acc.plaything": 0.0514, "Acc.swimming pool": 0.536, "Acc.stool": 0.1121, "Acc.barrel": 0.5524, "Acc.basket": 0.1113, "Acc.waterfall": 0.6664, "Acc.tent": 0.9611, "Acc.bag": 0.0006, "Acc.minibike": 0.4924, "Acc.cradle": 0.6322, "Acc.oven": 0.0325, "Acc.ball": 0.2929, "Acc.food": 0.3816, "Acc.step": 0.0, "Acc.tank": 0.1527, "Acc.trade name": 0.042, "Acc.microwave": 
0.2448, "Acc.pot": 0.1649, "Acc.animal": 0.4889, "Acc.bicycle": 0.5634, "Acc.lake": 0.0, "Acc.dishwasher": 0.1824, "Acc.screen": 0.5947, "Acc.blanket": 0.0, "Acc.sculpture": 0.1309, "Acc.hood": 0.1383, "Acc.sconce": 0.0231, "Acc.vase": 0.2, "Acc.traffic light": 0.0371, "Acc.tray": 0.0005, "Acc.ashcan": 0.2, "Acc.fan": 0.1994, "Acc.pier": 0.0728, "Acc.crt screen": 0.0, "Acc.plate": 0.3249, "Acc.monitor": 0.3463, "Acc.bulletin board": 0.1171, "Acc.shower": 0.0, "Acc.radiator": 0.0495, "Acc.glass": 0.0006, "Acc.clock": 0.1113, "Acc.flag": 0.0365} -{"mode": "train", "epoch": 1, "iter": 4050, "lr": 0.00018, "memory": 4602, "data_time": 1.02195, "decode.loss_ce": 0.76594, "decode.acc_seg": 72.80854, "loss": 0.76594, "time": 1.15132} -{"mode": "train", "epoch": 1, "iter": 4100, "lr": 0.00018, "memory": 4602, "data_time": 0.01212, "decode.loss_ce": 0.78443, "decode.acc_seg": 71.77901, "loss": 0.78443, "time": 0.14411} -{"mode": "train", "epoch": 1, "iter": 4150, "lr": 0.00018, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.77247, "decode.acc_seg": 72.55516, "loss": 0.77247, "time": 0.14071} -{"mode": "train", "epoch": 1, "iter": 4200, "lr": 0.00018, "memory": 4602, "data_time": 0.01268, "decode.loss_ce": 0.77947, "decode.acc_seg": 72.47601, "loss": 0.77947, "time": 0.14423} -{"mode": "train", "epoch": 1, "iter": 4250, "lr": 0.00018, "memory": 4602, "data_time": 0.01186, "decode.loss_ce": 0.7671, "decode.acc_seg": 72.99337, "loss": 0.7671, "time": 0.1428} -{"mode": "train", "epoch": 1, "iter": 4300, "lr": 0.00018, "memory": 4602, "data_time": 0.01239, "decode.loss_ce": 0.75579, "decode.acc_seg": 72.85976, "loss": 0.75579, "time": 0.14251} -{"mode": "train", "epoch": 1, "iter": 4350, "lr": 0.00018, "memory": 4602, "data_time": 0.01314, "decode.loss_ce": 0.81957, "decode.acc_seg": 71.47679, "loss": 0.81957, "time": 0.14224} -{"mode": "train", "epoch": 1, "iter": 4400, "lr": 0.00018, "memory": 4602, "data_time": 0.0119, "decode.loss_ce": 0.79292, "decode.acc_seg": 71.87861, "loss": 0.79292, "time": 0.14405} -{"mode": "train", "epoch": 1, "iter": 4450, "lr": 0.00018, "memory": 4602, "data_time": 0.012, "decode.loss_ce": 0.73066, "decode.acc_seg": 72.97823, "loss": 0.73066, "time": 0.14361} -{"mode": "train", "epoch": 1, "iter": 4500, "lr": 0.00018, "memory": 4602, "data_time": 0.01182, "decode.loss_ce": 0.74852, "decode.acc_seg": 73.3149, "loss": 0.74852, "time": 0.14308} -{"mode": "train", "epoch": 1, "iter": 4550, "lr": 0.00018, "memory": 4602, "data_time": 0.01228, "decode.loss_ce": 0.73548, "decode.acc_seg": 73.55574, "loss": 0.73548, "time": 0.14329} -{"mode": "train", "epoch": 1, "iter": 4600, "lr": 0.00018, "memory": 4602, "data_time": 0.0118, "decode.loss_ce": 0.78471, "decode.acc_seg": 71.87626, "loss": 0.78471, "time": 0.14333} -{"mode": "train", "epoch": 1, "iter": 4650, "lr": 0.00018, "memory": 4602, "data_time": 0.01242, "decode.loss_ce": 0.76111, "decode.acc_seg": 72.75304, "loss": 0.76111, "time": 0.14568} -{"mode": "train", "epoch": 1, "iter": 4700, "lr": 0.00018, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.77916, "decode.acc_seg": 72.22738, "loss": 0.77916, "time": 0.14199} -{"mode": "train", "epoch": 1, "iter": 4750, "lr": 0.00018, "memory": 4602, "data_time": 0.01151, "decode.loss_ce": 0.73658, "decode.acc_seg": 73.48513, "loss": 0.73658, "time": 0.14241} -{"mode": "train", "epoch": 1, "iter": 4800, "lr": 0.00018, "memory": 4602, "data_time": 0.01299, "decode.loss_ce": 0.72706, "decode.acc_seg": 73.78842, "loss": 0.72706, "time": 0.14542} -{"mode": "train", 
"epoch": 1, "iter": 4850, "lr": 0.00018, "memory": 4602, "data_time": 0.01269, "decode.loss_ce": 0.73104, "decode.acc_seg": 73.94567, "loss": 0.73104, "time": 0.16445} -{"mode": "train", "epoch": 1, "iter": 4900, "lr": 0.00018, "memory": 4602, "data_time": 0.0125, "decode.loss_ce": 0.78908, "decode.acc_seg": 71.90642, "loss": 0.78908, "time": 0.16268} -{"mode": "train", "epoch": 1, "iter": 4950, "lr": 0.00018, "memory": 4602, "data_time": 0.01117, "decode.loss_ce": 0.74818, "decode.acc_seg": 73.33134, "loss": 0.74818, "time": 0.16074} -{"mode": "train", "epoch": 1, "iter": 5000, "lr": 0.00018, "memory": 4602, "data_time": 0.01156, "decode.loss_ce": 0.7426, "decode.acc_seg": 72.8833, "loss": 0.7426, "time": 0.14694} -{"mode": "train", "epoch": 1, "iter": 5050, "lr": 0.00018, "memory": 4602, "data_time": 0.01192, "decode.loss_ce": 0.7491, "decode.acc_seg": 73.30248, "loss": 0.7491, "time": 0.14378} -{"mode": "train", "epoch": 1, "iter": 5100, "lr": 0.00018, "memory": 4602, "data_time": 0.01179, "decode.loss_ce": 0.75093, "decode.acc_seg": 73.73842, "loss": 0.75093, "time": 0.15691} -{"mode": "train", "epoch": 1, "iter": 5150, "lr": 0.00018, "memory": 4602, "data_time": 0.01159, "decode.loss_ce": 0.75233, "decode.acc_seg": 73.07259, "loss": 0.75233, "time": 0.14865} -{"mode": "train", "epoch": 1, "iter": 5200, "lr": 0.00018, "memory": 4602, "data_time": 0.01234, "decode.loss_ce": 0.74484, "decode.acc_seg": 73.16855, "loss": 0.74484, "time": 0.14375} -{"mode": "train", "epoch": 1, "iter": 5250, "lr": 0.00018, "memory": 4602, "data_time": 0.01186, "decode.loss_ce": 0.74913, "decode.acc_seg": 73.07539, "loss": 0.74913, "time": 0.142} -{"mode": "train", "epoch": 1, "iter": 5300, "lr": 0.00018, "memory": 4602, "data_time": 0.01261, "decode.loss_ce": 0.71707, "decode.acc_seg": 74.22499, "loss": 0.71707, "time": 0.14252} -{"mode": "train", "epoch": 1, "iter": 5350, "lr": 0.00018, "memory": 4602, "data_time": 0.01272, "decode.loss_ce": 0.7596, "decode.acc_seg": 73.53833, "loss": 0.7596, "time": 0.14435} -{"mode": "train", "epoch": 1, "iter": 5400, "lr": 0.00018, "memory": 4602, "data_time": 0.01183, "decode.loss_ce": 0.73595, "decode.acc_seg": 73.75162, "loss": 0.73595, "time": 0.14238} -{"mode": "train", "epoch": 1, "iter": 5450, "lr": 0.00018, "memory": 4602, "data_time": 0.01242, "decode.loss_ce": 0.70108, "decode.acc_seg": 74.39371, "loss": 0.70108, "time": 0.14155} -{"mode": "train", "epoch": 1, "iter": 5500, "lr": 0.00018, "memory": 4602, "data_time": 0.01248, "decode.loss_ce": 0.70552, "decode.acc_seg": 74.48684, "loss": 0.70552, "time": 0.14351} -{"mode": "train", "epoch": 1, "iter": 5550, "lr": 0.00017, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.71266, "decode.acc_seg": 74.38585, "loss": 0.71266, "time": 0.14161} -{"mode": "train", "epoch": 1, "iter": 5600, "lr": 0.00017, "memory": 4602, "data_time": 0.01168, "decode.loss_ce": 0.74497, "decode.acc_seg": 74.03318, "loss": 0.74497, "time": 0.14407} -{"mode": "train", "epoch": 1, "iter": 5650, "lr": 0.00017, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.72788, "decode.acc_seg": 73.78088, "loss": 0.72788, "time": 0.14547} -{"mode": "train", "epoch": 1, "iter": 5700, "lr": 0.00017, "memory": 4602, "data_time": 0.01215, "decode.loss_ce": 0.73778, "decode.acc_seg": 74.00842, "loss": 0.73778, "time": 0.14398} -{"mode": "train", "epoch": 1, "iter": 5750, "lr": 0.00017, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.71509, "decode.acc_seg": 74.61908, "loss": 0.71509, "time": 0.14356} -{"mode": "train", "epoch": 1, 
"iter": 5800, "lr": 0.00017, "memory": 4602, "data_time": 0.01215, "decode.loss_ce": 0.73861, "decode.acc_seg": 73.68715, "loss": 0.73861, "time": 0.14285} -{"mode": "train", "epoch": 1, "iter": 5850, "lr": 0.00017, "memory": 4602, "data_time": 0.01216, "decode.loss_ce": 0.74418, "decode.acc_seg": 73.45512, "loss": 0.74418, "time": 0.14273} -{"mode": "train", "epoch": 1, "iter": 5900, "lr": 0.00017, "memory": 4602, "data_time": 0.01278, "decode.loss_ce": 0.72589, "decode.acc_seg": 74.12044, "loss": 0.72589, "time": 0.14207} -{"mode": "train", "epoch": 1, "iter": 5950, "lr": 0.00017, "memory": 4602, "data_time": 0.01271, "decode.loss_ce": 0.69594, "decode.acc_seg": 74.51923, "loss": 0.69594, "time": 0.14087} -{"mode": "train", "epoch": 1, "iter": 6000, "lr": 0.00017, "memory": 4602, "data_time": 0.01197, "decode.loss_ce": 0.74444, "decode.acc_seg": 74.04272, "loss": 0.74444, "time": 0.1495} -{"mode": "train", "epoch": 1, "iter": 6050, "lr": 0.00017, "memory": 4602, "data_time": 0.01182, "decode.loss_ce": 0.72688, "decode.acc_seg": 73.43206, "loss": 0.72688, "time": 0.14474} -{"mode": "train", "epoch": 1, "iter": 6100, "lr": 0.00017, "memory": 4602, "data_time": 0.01198, "decode.loss_ce": 0.72617, "decode.acc_seg": 73.71495, "loss": 0.72617, "time": 0.14819} -{"mode": "train", "epoch": 1, "iter": 6150, "lr": 0.00017, "memory": 4602, "data_time": 0.01178, "decode.loss_ce": 0.69183, "decode.acc_seg": 74.73903, "loss": 0.69183, "time": 0.14904} -{"mode": "train", "epoch": 1, "iter": 6200, "lr": 0.00017, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.71786, "decode.acc_seg": 74.70845, "loss": 0.71786, "time": 0.14242} -{"mode": "train", "epoch": 1, "iter": 6250, "lr": 0.00017, "memory": 4602, "data_time": 0.01172, "decode.loss_ce": 0.71016, "decode.acc_seg": 74.29755, "loss": 0.71016, "time": 0.14207} -{"mode": "train", "epoch": 1, "iter": 6300, "lr": 0.00017, "memory": 4602, "data_time": 0.01193, "decode.loss_ce": 0.70877, "decode.acc_seg": 74.55708, "loss": 0.70877, "time": 0.14354} -{"mode": "train", "epoch": 1, "iter": 6350, "lr": 0.00017, "memory": 4602, "data_time": 0.01192, "decode.loss_ce": 0.68024, "decode.acc_seg": 75.64949, "loss": 0.68024, "time": 0.14288} -{"mode": "train", "epoch": 1, "iter": 6400, "lr": 0.00017, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.731, "decode.acc_seg": 73.52007, "loss": 0.731, "time": 0.143} -{"mode": "train", "epoch": 1, "iter": 6450, "lr": 0.00017, "memory": 4602, "data_time": 0.01271, "decode.loss_ce": 0.70803, "decode.acc_seg": 74.37245, "loss": 0.70803, "time": 0.13992} -{"mode": "train", "epoch": 1, "iter": 6500, "lr": 0.00017, "memory": 4602, "data_time": 0.01242, "decode.loss_ce": 0.69325, "decode.acc_seg": 75.21026, "loss": 0.69325, "time": 0.1429} -{"mode": "train", "epoch": 1, "iter": 6550, "lr": 0.00017, "memory": 4602, "data_time": 0.01272, "decode.loss_ce": 0.6746, "decode.acc_seg": 75.48279, "loss": 0.6746, "time": 0.14352} -{"mode": "train", "epoch": 1, "iter": 6600, "lr": 0.00017, "memory": 4602, "data_time": 0.01329, "decode.loss_ce": 0.69424, "decode.acc_seg": 75.09987, "loss": 0.69424, "time": 0.14234} -{"mode": "train", "epoch": 1, "iter": 6650, "lr": 0.00017, "memory": 4602, "data_time": 0.01198, "decode.loss_ce": 0.69527, "decode.acc_seg": 74.68375, "loss": 0.69527, "time": 0.14011} -{"mode": "train", "epoch": 1, "iter": 6700, "lr": 0.00017, "memory": 4602, "data_time": 0.01178, "decode.loss_ce": 0.6901, "decode.acc_seg": 75.29175, "loss": 0.6901, "time": 0.14456} -{"mode": "train", "epoch": 1, "iter": 6750, 
"lr": 0.00017, "memory": 4602, "data_time": 0.01165, "decode.loss_ce": 0.68385, "decode.acc_seg": 75.40127, "loss": 0.68385, "time": 0.14147} -{"mode": "train", "epoch": 1, "iter": 6800, "lr": 0.00017, "memory": 4602, "data_time": 0.01285, "decode.loss_ce": 0.70721, "decode.acc_seg": 74.77559, "loss": 0.70721, "time": 0.14328} -{"mode": "train", "epoch": 1, "iter": 6850, "lr": 0.00017, "memory": 4602, "data_time": 0.01251, "decode.loss_ce": 0.69607, "decode.acc_seg": 74.73037, "loss": 0.69607, "time": 0.14433} -{"mode": "train", "epoch": 1, "iter": 6900, "lr": 0.00017, "memory": 4602, "data_time": 0.01284, "decode.loss_ce": 0.69711, "decode.acc_seg": 74.5955, "loss": 0.69711, "time": 0.14453} -{"mode": "train", "epoch": 1, "iter": 6950, "lr": 0.00017, "memory": 4602, "data_time": 0.01234, "decode.loss_ce": 0.70186, "decode.acc_seg": 74.65486, "loss": 0.70186, "time": 0.14071} -{"mode": "train", "epoch": 1, "iter": 7000, "lr": 0.00017, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.70499, "decode.acc_seg": 74.72077, "loss": 0.70499, "time": 0.14267} -{"mode": "train", "epoch": 1, "iter": 7050, "lr": 0.00017, "memory": 4602, "data_time": 0.01175, "decode.loss_ce": 0.68305, "decode.acc_seg": 74.95806, "loss": 0.68305, "time": 0.14189} -{"mode": "train", "epoch": 1, "iter": 7100, "lr": 0.00017, "memory": 4602, "data_time": 0.0124, "decode.loss_ce": 0.68417, "decode.acc_seg": 74.79185, "loss": 0.68417, "time": 0.14028} -{"mode": "train", "epoch": 1, "iter": 7150, "lr": 0.00017, "memory": 4602, "data_time": 0.01195, "decode.loss_ce": 0.67075, "decode.acc_seg": 75.50171, "loss": 0.67075, "time": 0.14499} -{"mode": "train", "epoch": 1, "iter": 7200, "lr": 0.00017, "memory": 4602, "data_time": 0.01261, "decode.loss_ce": 0.65965, "decode.acc_seg": 76.18452, "loss": 0.65965, "time": 0.14032} -{"mode": "train", "epoch": 1, "iter": 7250, "lr": 0.00017, "memory": 4602, "data_time": 0.01243, "decode.loss_ce": 0.66514, "decode.acc_seg": 75.58957, "loss": 0.66514, "time": 0.14791} -{"mode": "train", "epoch": 1, "iter": 7300, "lr": 0.00017, "memory": 4602, "data_time": 0.01183, "decode.loss_ce": 0.65874, "decode.acc_seg": 75.9094, "loss": 0.65874, "time": 0.14351} -{"mode": "train", "epoch": 1, "iter": 7350, "lr": 0.00017, "memory": 4602, "data_time": 0.01289, "decode.loss_ce": 0.71702, "decode.acc_seg": 74.41838, "loss": 0.71702, "time": 0.14292} -{"mode": "train", "epoch": 1, "iter": 7400, "lr": 0.00017, "memory": 4602, "data_time": 0.01194, "decode.loss_ce": 0.66922, "decode.acc_seg": 75.6527, "loss": 0.66922, "time": 0.14527} -{"mode": "train", "epoch": 1, "iter": 7450, "lr": 0.00017, "memory": 4602, "data_time": 0.0125, "decode.loss_ce": 0.66134, "decode.acc_seg": 75.65571, "loss": 0.66134, "time": 0.14273} -{"mode": "train", "epoch": 1, "iter": 7500, "lr": 0.00017, "memory": 4602, "data_time": 0.01269, "decode.loss_ce": 0.68114, "decode.acc_seg": 75.10051, "loss": 0.68114, "time": 0.14305} -{"mode": "train", "epoch": 1, "iter": 7550, "lr": 0.00017, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.66274, "decode.acc_seg": 75.8582, "loss": 0.66274, "time": 0.14078} -{"mode": "train", "epoch": 1, "iter": 7600, "lr": 0.00017, "memory": 4602, "data_time": 0.01263, "decode.loss_ce": 0.66911, "decode.acc_seg": 75.3065, "loss": 0.66911, "time": 0.14378} -{"mode": "train", "epoch": 1, "iter": 7650, "lr": 0.00017, "memory": 4602, "data_time": 0.01286, "decode.loss_ce": 0.64464, "decode.acc_seg": 76.27689, "loss": 0.64464, "time": 0.14505} -{"mode": "train", "epoch": 1, "iter": 7700, "lr": 
0.00017, "memory": 4602, "data_time": 0.01239, "decode.loss_ce": 0.65749, "decode.acc_seg": 75.76167, "loss": 0.65749, "time": 0.14465} -{"mode": "train", "epoch": 1, "iter": 7750, "lr": 0.00016, "memory": 4602, "data_time": 0.01209, "decode.loss_ce": 0.67668, "decode.acc_seg": 75.52487, "loss": 0.67668, "time": 0.14173} -{"mode": "train", "epoch": 1, "iter": 7800, "lr": 0.00016, "memory": 4602, "data_time": 0.01215, "decode.loss_ce": 0.68373, "decode.acc_seg": 75.28167, "loss": 0.68373, "time": 0.14192} -{"mode": "train", "epoch": 1, "iter": 7850, "lr": 0.00016, "memory": 4602, "data_time": 0.01302, "decode.loss_ce": 0.65673, "decode.acc_seg": 76.16382, "loss": 0.65673, "time": 0.14247} -{"mode": "train", "epoch": 1, "iter": 7900, "lr": 0.00016, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.65319, "decode.acc_seg": 75.98192, "loss": 0.65319, "time": 0.14532} -{"mode": "train", "epoch": 1, "iter": 7950, "lr": 0.00016, "memory": 4602, "data_time": 0.01197, "decode.loss_ce": 0.6713, "decode.acc_seg": 75.39693, "loss": 0.6713, "time": 0.14145} -{"mode": "train", "epoch": 1, "iter": 8000, "lr": 0.00016, "memory": 4602, "data_time": 0.01173, "decode.loss_ce": 0.64516, "decode.acc_seg": 76.44533, "loss": 0.64516, "time": 0.15684} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00016, "aAcc": 0.7658, "mIoU": 0.3321, "mAcc": 0.4452, "IoU.wall": 0.6901, "IoU.building": 0.7768, "IoU.sky": 0.9324, "IoU.floor": 0.7358, "IoU.tree": 0.6761, "IoU.ceiling": 0.7635, "IoU.road": 0.7658, "IoU.bed ": 0.7933, "IoU.windowpane": 0.5165, "IoU.grass": 0.65, "IoU.cabinet": 0.4857, "IoU.sidewalk": 0.5416, "IoU.person": 0.725, "IoU.earth": 0.2951, "IoU.door": 0.2656, "IoU.table": 0.4276, "IoU.mountain": 0.461, "IoU.plant": 0.4676, "IoU.curtain": 0.5855, "IoU.chair": 0.4366, "IoU.car": 0.7758, "IoU.water": 0.3778, "IoU.painting": 0.608, "IoU.sofa": 0.4918, "IoU.shelf": 0.3427, "IoU.house": 0.2607, "IoU.sea": 0.5416, "IoU.mirror": 0.3473, "IoU.rug": 0.4052, "IoU.field": 0.2326, "IoU.armchair": 0.1701, "IoU.seat": 0.483, "IoU.fence": 0.1839, "IoU.desk": 0.3075, "IoU.rock": 0.2677, "IoU.wardrobe": 0.4086, "IoU.lamp": 0.4821, "IoU.bathtub": 0.5112, "IoU.railing": 0.3162, "IoU.cushion": 0.3292, "IoU.base": 0.1584, "IoU.box": 0.1355, "IoU.column": 0.3522, "IoU.signboard": 0.2684, "IoU.chest of drawers": 0.3178, "IoU.counter": 0.1814, "IoU.sand": 0.2566, "IoU.sink": 0.5244, "IoU.skyscraper": 0.4936, "IoU.fireplace": 0.534, "IoU.refrigerator": 0.5015, "IoU.grandstand": 0.2746, "IoU.path": 0.0816, "IoU.stairs": 0.2162, "IoU.runway": 0.7208, "IoU.case": 0.4049, "IoU.pool table": 0.887, "IoU.pillow": 0.3783, "IoU.screen door": 0.329, "IoU.stairway": 0.1807, "IoU.river": 0.0851, "IoU.bridge": 0.3191, "IoU.bookcase": 0.2757, "IoU.blind": 0.1117, "IoU.coffee table": 0.4405, "IoU.toilet": 0.6972, "IoU.flower": 0.3403, "IoU.book": 0.3751, "IoU.hill": 0.0207, "IoU.bench": 0.3401, "IoU.countertop": 0.3887, "IoU.stove": 0.4858, "IoU.palm": 0.3828, "IoU.kitchen island": 0.188, "IoU.computer": 0.5034, "IoU.swivel chair": 0.2699, "IoU.boat": 0.4463, "IoU.bar": 0.2241, "IoU.arcade machine": 0.2558, "IoU.hovel": 0.1526, "IoU.bus": 0.7461, "IoU.towel": 0.3531, "IoU.light": 0.3126, "IoU.truck": 0.1406, "IoU.tower": 0.2415, "IoU.chandelier": 0.541, "IoU.awning": 0.111, "IoU.streetlight": 0.1154, "IoU.booth": 0.1516, "IoU.television receiver": 0.5742, "IoU.airplane": 0.4399, "IoU.dirt track": 0.0, "IoU.apparel": 0.347, "IoU.pole": 0.1105, "IoU.land": 0.0551, "IoU.bannister": 0.0583, "IoU.escalator": 0.2464, "IoU.ottoman": 0.1307, 
"IoU.bottle": 0.1425, "IoU.buffet": 0.3074, "IoU.poster": 0.0642, "IoU.stage": 0.0248, "IoU.van": 0.1055, "IoU.ship": 0.2984, "IoU.fountain": 0.1652, "IoU.conveyer belt": 0.3764, "IoU.canopy": 0.1086, "IoU.washer": 0.5719, "IoU.plaything": 0.1122, "IoU.swimming pool": 0.0726, "IoU.stool": 0.2035, "IoU.barrel": 0.0935, "IoU.basket": 0.1512, "IoU.waterfall": 0.5079, "IoU.tent": 0.6074, "IoU.bag": 0.0282, "IoU.minibike": 0.5134, "IoU.cradle": 0.6822, "IoU.oven": 0.1493, "IoU.ball": 0.4358, "IoU.food": 0.5027, "IoU.step": 0.0127, "IoU.tank": 0.1873, "IoU.trade name": 0.0899, "IoU.microwave": 0.27, "IoU.pot": 0.2427, "IoU.animal": 0.4619, "IoU.bicycle": 0.444, "IoU.lake": 0.0145, "IoU.dishwasher": 0.2616, "IoU.screen": 0.5413, "IoU.blanket": 0.0023, "IoU.sculpture": 0.2147, "IoU.hood": 0.1599, "IoU.sconce": 0.0913, "IoU.vase": 0.2021, "IoU.traffic light": 0.1723, "IoU.tray": 0.0006, "IoU.ashcan": 0.2267, "IoU.fan": 0.3424, "IoU.pier": 0.2903, "IoU.crt screen": 0.0, "IoU.plate": 0.2103, "IoU.monitor": 0.2722, "IoU.bulletin board": 0.1696, "IoU.shower": 0.0, "IoU.radiator": 0.2846, "IoU.glass": 0.0299, "IoU.clock": 0.1714, "IoU.flag": 0.2165, "Acc.wall": 0.8506, "Acc.building": 0.9039, "Acc.sky": 0.9693, "Acc.floor": 0.8736, "Acc.tree": 0.8596, "Acc.ceiling": 0.8775, "Acc.road": 0.8617, "Acc.bed ": 0.9346, "Acc.windowpane": 0.665, "Acc.grass": 0.7697, "Acc.cabinet": 0.6578, "Acc.sidewalk": 0.7635, "Acc.person": 0.9159, "Acc.earth": 0.3717, "Acc.door": 0.3334, "Acc.table": 0.5864, "Acc.mountain": 0.7568, "Acc.plant": 0.5915, "Acc.curtain": 0.7802, "Acc.chair": 0.6368, "Acc.car": 0.9021, "Acc.water": 0.4479, "Acc.painting": 0.7768, "Acc.sofa": 0.7917, "Acc.shelf": 0.5422, "Acc.house": 0.2856, "Acc.sea": 0.9172, "Acc.mirror": 0.4285, "Acc.rug": 0.4709, "Acc.field": 0.3552, "Acc.armchair": 0.2244, "Acc.seat": 0.6847, "Acc.fence": 0.2194, "Acc.desk": 0.513, "Acc.rock": 0.3956, "Acc.wardrobe": 0.743, "Acc.lamp": 0.6443, "Acc.bathtub": 0.7222, "Acc.railing": 0.4248, "Acc.cushion": 0.4173, "Acc.base": 0.2609, "Acc.box": 0.1845, "Acc.column": 0.4708, "Acc.signboard": 0.3439, "Acc.chest of drawers": 0.4243, "Acc.counter": 0.2205, "Acc.sand": 0.3057, "Acc.sink": 0.6276, "Acc.skyscraper": 0.6373, "Acc.fireplace": 0.6175, "Acc.refrigerator": 0.7663, "Acc.grandstand": 0.6934, "Acc.path": 0.103, "Acc.stairs": 0.2251, "Acc.runway": 0.8992, "Acc.case": 0.6091, "Acc.pool table": 0.9499, "Acc.pillow": 0.4498, "Acc.screen door": 0.3692, "Acc.stairway": 0.2575, "Acc.river": 0.1538, "Acc.bridge": 0.4076, "Acc.bookcase": 0.3815, "Acc.blind": 0.1144, "Acc.coffee table": 0.6461, "Acc.toilet": 0.901, "Acc.flower": 0.4995, "Acc.book": 0.4844, "Acc.hill": 0.0222, "Acc.bench": 0.4733, "Acc.countertop": 0.4966, "Acc.stove": 0.7257, "Acc.palm": 0.4626, "Acc.kitchen island": 0.4114, "Acc.computer": 0.6013, "Acc.swivel chair": 0.503, "Acc.boat": 0.7624, "Acc.bar": 0.2809, "Acc.arcade machine": 0.5227, "Acc.hovel": 0.1889, "Acc.bus": 0.8982, "Acc.towel": 0.4722, "Acc.light": 0.3575, "Acc.truck": 0.186, "Acc.tower": 0.3658, "Acc.chandelier": 0.7102, "Acc.awning": 0.1558, "Acc.streetlight": 0.1384, "Acc.booth": 0.1886, "Acc.television receiver": 0.7047, "Acc.airplane": 0.6627, "Acc.dirt track": 0.0, "Acc.apparel": 0.5195, "Acc.pole": 0.1349, "Acc.land": 0.0603, "Acc.bannister": 0.0775, "Acc.escalator": 0.3621, "Acc.ottoman": 0.147, "Acc.bottle": 0.1699, "Acc.buffet": 0.3546, "Acc.poster": 0.0753, "Acc.stage": 0.0419, "Acc.van": 0.118, "Acc.ship": 0.3616, "Acc.fountain": 0.2322, "Acc.conveyer belt": 0.7859, "Acc.canopy": 0.1336, 
"Acc.washer": 0.6422, "Acc.plaything": 0.1918, "Acc.swimming pool": 0.131, "Acc.stool": 0.2814, "Acc.barrel": 0.8357, "Acc.basket": 0.1746, "Acc.waterfall": 0.5975, "Acc.tent": 0.9957, "Acc.bag": 0.0292, "Acc.minibike": 0.6422, "Acc.cradle": 0.9048, "Acc.oven": 0.2212, "Acc.ball": 0.6168, "Acc.food": 0.6019, "Acc.step": 0.013, "Acc.tank": 0.2534, "Acc.trade name": 0.0918, "Acc.microwave": 0.2894, "Acc.pot": 0.2682, "Acc.animal": 0.4826, "Acc.bicycle": 0.7092, "Acc.lake": 0.0169, "Acc.dishwasher": 0.3151, "Acc.screen": 0.8002, "Acc.blanket": 0.0023, "Acc.sculpture": 0.3058, "Acc.hood": 0.1668, "Acc.sconce": 0.097, "Acc.vase": 0.3236, "Acc.traffic light": 0.2247, "Acc.tray": 0.0006, "Acc.ashcan": 0.2872, "Acc.fan": 0.3886, "Acc.pier": 0.4153, "Acc.crt screen": 0.0, "Acc.plate": 0.2388, "Acc.monitor": 0.5175, "Acc.bulletin board": 0.1998, "Acc.shower": 0.0, "Acc.radiator": 0.294, "Acc.glass": 0.0308, "Acc.clock": 0.2103, "Acc.flag": 0.2409} -{"mode": "train", "epoch": 1, "iter": 8050, "lr": 0.00016, "memory": 4602, "data_time": 0.47679, "decode.loss_ce": 0.66164, "decode.acc_seg": 75.59983, "loss": 0.66164, "time": 0.60935} -{"mode": "train", "epoch": 1, "iter": 8100, "lr": 0.00016, "memory": 4602, "data_time": 0.01253, "decode.loss_ce": 0.64489, "decode.acc_seg": 76.71315, "loss": 0.64489, "time": 0.1473} -{"mode": "train", "epoch": 1, "iter": 8150, "lr": 0.00016, "memory": 4602, "data_time": 0.01335, "decode.loss_ce": 0.68232, "decode.acc_seg": 75.5943, "loss": 0.68232, "time": 0.14418} -{"mode": "train", "epoch": 1, "iter": 8200, "lr": 0.00016, "memory": 4602, "data_time": 0.01203, "decode.loss_ce": 0.62231, "decode.acc_seg": 76.74748, "loss": 0.62231, "time": 0.14685} -{"mode": "train", "epoch": 1, "iter": 8250, "lr": 0.00016, "memory": 4602, "data_time": 0.01263, "decode.loss_ce": 0.64608, "decode.acc_seg": 76.60825, "loss": 0.64608, "time": 0.14383} -{"mode": "train", "epoch": 1, "iter": 8300, "lr": 0.00016, "memory": 4602, "data_time": 0.0123, "decode.loss_ce": 0.65924, "decode.acc_seg": 75.68827, "loss": 0.65924, "time": 0.1451} -{"mode": "train", "epoch": 1, "iter": 8350, "lr": 0.00016, "memory": 4602, "data_time": 0.01214, "decode.loss_ce": 0.65525, "decode.acc_seg": 76.21018, "loss": 0.65525, "time": 0.14441} -{"mode": "train", "epoch": 1, "iter": 8400, "lr": 0.00016, "memory": 4602, "data_time": 0.01252, "decode.loss_ce": 0.67071, "decode.acc_seg": 75.58052, "loss": 0.67071, "time": 0.15386} -{"mode": "train", "epoch": 1, "iter": 8450, "lr": 0.00016, "memory": 4602, "data_time": 0.01216, "decode.loss_ce": 0.65151, "decode.acc_seg": 76.74902, "loss": 0.65151, "time": 0.14143} -{"mode": "train", "epoch": 1, "iter": 8500, "lr": 0.00016, "memory": 4602, "data_time": 0.01202, "decode.loss_ce": 0.67438, "decode.acc_seg": 75.77396, "loss": 0.67438, "time": 0.14304} -{"mode": "train", "epoch": 1, "iter": 8550, "lr": 0.00016, "memory": 4602, "data_time": 0.01287, "decode.loss_ce": 0.64005, "decode.acc_seg": 76.79397, "loss": 0.64005, "time": 0.14166} -{"mode": "train", "epoch": 1, "iter": 8600, "lr": 0.00016, "memory": 4602, "data_time": 0.01194, "decode.loss_ce": 0.62723, "decode.acc_seg": 76.70778, "loss": 0.62723, "time": 0.14476} -{"mode": "train", "epoch": 1, "iter": 8650, "lr": 0.00016, "memory": 4602, "data_time": 0.01215, "decode.loss_ce": 0.62867, "decode.acc_seg": 77.25304, "loss": 0.62867, "time": 0.14126} -{"mode": "train", "epoch": 1, "iter": 8700, "lr": 0.00016, "memory": 4602, "data_time": 0.01158, "decode.loss_ce": 0.63612, "decode.acc_seg": 76.78856, "loss": 0.63612, 
"time": 0.1467} -{"mode": "train", "epoch": 1, "iter": 8750, "lr": 0.00016, "memory": 4602, "data_time": 0.01233, "decode.loss_ce": 0.60898, "decode.acc_seg": 77.0454, "loss": 0.60898, "time": 0.14356} -{"mode": "train", "epoch": 1, "iter": 8800, "lr": 0.00016, "memory": 4602, "data_time": 0.01177, "decode.loss_ce": 0.61872, "decode.acc_seg": 77.15437, "loss": 0.61872, "time": 0.14491} -{"mode": "train", "epoch": 1, "iter": 8850, "lr": 0.00016, "memory": 4602, "data_time": 0.01176, "decode.loss_ce": 0.62838, "decode.acc_seg": 76.61511, "loss": 0.62838, "time": 0.14219} -{"mode": "train", "epoch": 1, "iter": 8900, "lr": 0.00016, "memory": 4602, "data_time": 0.01185, "decode.loss_ce": 0.62225, "decode.acc_seg": 76.50939, "loss": 0.62225, "time": 0.13909} -{"mode": "train", "epoch": 1, "iter": 8950, "lr": 0.00016, "memory": 4602, "data_time": 0.01194, "decode.loss_ce": 0.62317, "decode.acc_seg": 76.75422, "loss": 0.62317, "time": 0.14292} -{"mode": "train", "epoch": 1, "iter": 9000, "lr": 0.00016, "memory": 4602, "data_time": 0.01312, "decode.loss_ce": 0.65596, "decode.acc_seg": 75.96724, "loss": 0.65596, "time": 0.14671} -{"mode": "train", "epoch": 1, "iter": 9050, "lr": 0.00016, "memory": 4602, "data_time": 0.01185, "decode.loss_ce": 0.63403, "decode.acc_seg": 76.99634, "loss": 0.63403, "time": 0.14393} -{"mode": "train", "epoch": 1, "iter": 9100, "lr": 0.00016, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.6394, "decode.acc_seg": 76.54393, "loss": 0.6394, "time": 0.14167} -{"mode": "train", "epoch": 1, "iter": 9150, "lr": 0.00016, "memory": 4602, "data_time": 0.01337, "decode.loss_ce": 0.63901, "decode.acc_seg": 76.33367, "loss": 0.63901, "time": 0.14574} -{"mode": "train", "epoch": 1, "iter": 9200, "lr": 0.00016, "memory": 4602, "data_time": 0.01242, "decode.loss_ce": 0.6356, "decode.acc_seg": 76.73387, "loss": 0.6356, "time": 0.14152} -{"mode": "train", "epoch": 1, "iter": 9250, "lr": 0.00016, "memory": 4602, "data_time": 0.01225, "decode.loss_ce": 0.63107, "decode.acc_seg": 76.86479, "loss": 0.63107, "time": 0.14253} -{"mode": "train", "epoch": 1, "iter": 9300, "lr": 0.00016, "memory": 4602, "data_time": 0.01219, "decode.loss_ce": 0.63302, "decode.acc_seg": 76.96807, "loss": 0.63302, "time": 0.145} -{"mode": "train", "epoch": 1, "iter": 9350, "lr": 0.00016, "memory": 4602, "data_time": 0.01232, "decode.loss_ce": 0.64075, "decode.acc_seg": 76.33344, "loss": 0.64075, "time": 0.14611} -{"mode": "train", "epoch": 1, "iter": 9400, "lr": 0.00016, "memory": 4602, "data_time": 0.0123, "decode.loss_ce": 0.62457, "decode.acc_seg": 76.91163, "loss": 0.62457, "time": 0.14088} -{"mode": "train", "epoch": 1, "iter": 9450, "lr": 0.00016, "memory": 4602, "data_time": 0.01278, "decode.loss_ce": 0.65671, "decode.acc_seg": 76.18272, "loss": 0.65671, "time": 0.14188} -{"mode": "train", "epoch": 1, "iter": 9500, "lr": 0.00016, "memory": 4602, "data_time": 0.0125, "decode.loss_ce": 0.62128, "decode.acc_seg": 77.269, "loss": 0.62128, "time": 0.14469} -{"mode": "train", "epoch": 1, "iter": 9550, "lr": 0.00016, "memory": 4602, "data_time": 0.01277, "decode.loss_ce": 0.60358, "decode.acc_seg": 77.42561, "loss": 0.60358, "time": 0.14261} -{"mode": "train", "epoch": 1, "iter": 9600, "lr": 0.00016, "memory": 4602, "data_time": 0.01172, "decode.loss_ce": 0.6298, "decode.acc_seg": 76.75039, "loss": 0.6298, "time": 0.15428} -{"mode": "train", "epoch": 1, "iter": 9650, "lr": 0.00016, "memory": 4602, "data_time": 0.01251, "decode.loss_ce": 0.61928, "decode.acc_seg": 77.13232, "loss": 0.61928, "time": 
0.14209} -{"mode": "train", "epoch": 1, "iter": 9700, "lr": 0.00016, "memory": 4602, "data_time": 0.01167, "decode.loss_ce": 0.64666, "decode.acc_seg": 76.93321, "loss": 0.64666, "time": 0.14111} -{"mode": "train", "epoch": 1, "iter": 9750, "lr": 0.00016, "memory": 4602, "data_time": 0.01242, "decode.loss_ce": 0.64379, "decode.acc_seg": 77.01171, "loss": 0.64379, "time": 0.14341} -{"mode": "train", "epoch": 1, "iter": 9800, "lr": 0.00016, "memory": 4602, "data_time": 0.01236, "decode.loss_ce": 0.61755, "decode.acc_seg": 76.87452, "loss": 0.61755, "time": 0.14232} -{"mode": "train", "epoch": 1, "iter": 9850, "lr": 0.00016, "memory": 4602, "data_time": 0.01397, "decode.loss_ce": 0.59284, "decode.acc_seg": 77.75528, "loss": 0.59284, "time": 0.14383} -{"mode": "train", "epoch": 1, "iter": 9900, "lr": 0.00016, "memory": 4602, "data_time": 0.012, "decode.loss_ce": 0.61942, "decode.acc_seg": 77.30599, "loss": 0.61942, "time": 0.14374} -{"mode": "train", "epoch": 1, "iter": 9950, "lr": 0.00015, "memory": 4602, "data_time": 0.01199, "decode.loss_ce": 0.61233, "decode.acc_seg": 77.72052, "loss": 0.61233, "time": 0.14402} -{"mode": "train", "epoch": 1, "iter": 10000, "lr": 0.00015, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.60546, "decode.acc_seg": 77.88765, "loss": 0.60546, "time": 0.14436} -{"mode": "train", "epoch": 1, "iter": 10050, "lr": 0.00015, "memory": 4602, "data_time": 0.01223, "decode.loss_ce": 0.61747, "decode.acc_seg": 77.48734, "loss": 0.61747, "time": 0.14505} -{"mode": "train", "epoch": 1, "iter": 10100, "lr": 0.00015, "memory": 4602, "data_time": 0.01209, "decode.loss_ce": 0.60693, "decode.acc_seg": 77.73314, "loss": 0.60693, "time": 0.14203} -{"mode": "train", "epoch": 1, "iter": 10150, "lr": 0.00015, "memory": 4602, "data_time": 0.01254, "decode.loss_ce": 0.62587, "decode.acc_seg": 77.02175, "loss": 0.62587, "time": 0.14235} -{"mode": "train", "epoch": 1, "iter": 10200, "lr": 0.00015, "memory": 4602, "data_time": 0.01259, "decode.loss_ce": 0.61396, "decode.acc_seg": 77.61967, "loss": 0.61396, "time": 0.14231} -{"mode": "train", "epoch": 1, "iter": 10250, "lr": 0.00015, "memory": 4602, "data_time": 0.01251, "decode.loss_ce": 0.61618, "decode.acc_seg": 77.15577, "loss": 0.61618, "time": 0.14446} -{"mode": "train", "epoch": 1, "iter": 10300, "lr": 0.00015, "memory": 4602, "data_time": 0.01248, "decode.loss_ce": 0.60362, "decode.acc_seg": 77.73135, "loss": 0.60362, "time": 0.14564} -{"mode": "train", "epoch": 1, "iter": 10350, "lr": 0.00015, "memory": 4602, "data_time": 0.01253, "decode.loss_ce": 0.59133, "decode.acc_seg": 77.7831, "loss": 0.59133, "time": 0.14229} -{"mode": "train", "epoch": 1, "iter": 10400, "lr": 0.00015, "memory": 4602, "data_time": 0.01216, "decode.loss_ce": 0.62643, "decode.acc_seg": 76.97268, "loss": 0.62643, "time": 0.14435} -{"mode": "train", "epoch": 1, "iter": 10450, "lr": 0.00015, "memory": 4602, "data_time": 0.01198, "decode.loss_ce": 0.59456, "decode.acc_seg": 77.71123, "loss": 0.59456, "time": 0.13928} -{"mode": "train", "epoch": 1, "iter": 10500, "lr": 0.00015, "memory": 4602, "data_time": 0.01173, "decode.loss_ce": 0.59561, "decode.acc_seg": 78.03163, "loss": 0.59561, "time": 0.14675} -{"mode": "train", "epoch": 1, "iter": 10550, "lr": 0.00015, "memory": 4602, "data_time": 0.01241, "decode.loss_ce": 0.61125, "decode.acc_seg": 77.84811, "loss": 0.61125, "time": 0.14237} -{"mode": "train", "epoch": 1, "iter": 10600, "lr": 0.00015, "memory": 4602, "data_time": 0.01277, "decode.loss_ce": 0.58015, "decode.acc_seg": 78.58493, "loss": 0.58015, 
"time": 0.14277} -{"mode": "train", "epoch": 1, "iter": 10650, "lr": 0.00015, "memory": 4602, "data_time": 0.0124, "decode.loss_ce": 0.60265, "decode.acc_seg": 78.15967, "loss": 0.60265, "time": 0.14243} -{"mode": "train", "epoch": 1, "iter": 10700, "lr": 0.00015, "memory": 4602, "data_time": 0.01174, "decode.loss_ce": 0.58528, "decode.acc_seg": 78.21358, "loss": 0.58528, "time": 0.14154} -{"mode": "train", "epoch": 1, "iter": 10750, "lr": 0.00015, "memory": 4602, "data_time": 0.01201, "decode.loss_ce": 0.61832, "decode.acc_seg": 77.34404, "loss": 0.61832, "time": 0.14817} -{"mode": "train", "epoch": 1, "iter": 10800, "lr": 0.00015, "memory": 4602, "data_time": 0.01309, "decode.loss_ce": 0.61302, "decode.acc_seg": 77.84589, "loss": 0.61302, "time": 0.14329} -{"mode": "train", "epoch": 1, "iter": 10850, "lr": 0.00015, "memory": 4602, "data_time": 0.0121, "decode.loss_ce": 0.60347, "decode.acc_seg": 77.75932, "loss": 0.60347, "time": 0.14315} -{"mode": "train", "epoch": 1, "iter": 10900, "lr": 0.00015, "memory": 4602, "data_time": 0.01206, "decode.loss_ce": 0.59343, "decode.acc_seg": 78.01551, "loss": 0.59343, "time": 0.14423} -{"mode": "train", "epoch": 1, "iter": 10950, "lr": 0.00015, "memory": 4602, "data_time": 0.01214, "decode.loss_ce": 0.61854, "decode.acc_seg": 76.60281, "loss": 0.61854, "time": 0.13907} -{"mode": "train", "epoch": 1, "iter": 11000, "lr": 0.00015, "memory": 4602, "data_time": 0.01266, "decode.loss_ce": 0.60372, "decode.acc_seg": 77.67895, "loss": 0.60372, "time": 0.14139} -{"mode": "train", "epoch": 1, "iter": 11050, "lr": 0.00015, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.59627, "decode.acc_seg": 78.2355, "loss": 0.59627, "time": 0.13971} -{"mode": "train", "epoch": 1, "iter": 11100, "lr": 0.00015, "memory": 4602, "data_time": 0.01212, "decode.loss_ce": 0.58907, "decode.acc_seg": 78.39256, "loss": 0.58907, "time": 0.13907} -{"mode": "train", "epoch": 1, "iter": 11150, "lr": 0.00015, "memory": 4602, "data_time": 0.01214, "decode.loss_ce": 0.60253, "decode.acc_seg": 77.91688, "loss": 0.60253, "time": 0.14046} -{"mode": "train", "epoch": 1, "iter": 11200, "lr": 0.00015, "memory": 4602, "data_time": 0.01315, "decode.loss_ce": 0.61587, "decode.acc_seg": 77.50986, "loss": 0.61587, "time": 0.14367} -{"mode": "train", "epoch": 1, "iter": 11250, "lr": 0.00015, "memory": 4602, "data_time": 0.01251, "decode.loss_ce": 0.57806, "decode.acc_seg": 78.1877, "loss": 0.57806, "time": 0.14034} -{"mode": "train", "epoch": 1, "iter": 11300, "lr": 0.00015, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.63099, "decode.acc_seg": 77.12218, "loss": 0.63099, "time": 0.14119} -{"mode": "train", "epoch": 1, "iter": 11350, "lr": 0.00015, "memory": 4602, "data_time": 0.01182, "decode.loss_ce": 0.60787, "decode.acc_seg": 77.68771, "loss": 0.60787, "time": 0.14229} -{"mode": "train", "epoch": 1, "iter": 11400, "lr": 0.00015, "memory": 4602, "data_time": 0.01233, "decode.loss_ce": 0.60424, "decode.acc_seg": 77.86948, "loss": 0.60424, "time": 0.14227} -{"mode": "train", "epoch": 1, "iter": 11450, "lr": 0.00015, "memory": 4602, "data_time": 0.01214, "decode.loss_ce": 0.56627, "decode.acc_seg": 78.52214, "loss": 0.56627, "time": 0.14409} -{"mode": "train", "epoch": 1, "iter": 11500, "lr": 0.00015, "memory": 4602, "data_time": 0.01242, "decode.loss_ce": 0.59121, "decode.acc_seg": 78.3318, "loss": 0.59121, "time": 0.14165} -{"mode": "train", "epoch": 1, "iter": 11550, "lr": 0.00015, "memory": 4602, "data_time": 0.01232, "decode.loss_ce": 0.60204, "decode.acc_seg": 78.2997, 
"loss": 0.60204, "time": 0.14051} -{"mode": "train", "epoch": 1, "iter": 11600, "lr": 0.00015, "memory": 4602, "data_time": 0.01194, "decode.loss_ce": 0.60052, "decode.acc_seg": 78.30119, "loss": 0.60052, "time": 0.13993} -{"mode": "train", "epoch": 1, "iter": 11650, "lr": 0.00015, "memory": 4602, "data_time": 0.01172, "decode.loss_ce": 0.60607, "decode.acc_seg": 77.46664, "loss": 0.60607, "time": 0.14085} -{"mode": "train", "epoch": 1, "iter": 11700, "lr": 0.00015, "memory": 4602, "data_time": 0.01213, "decode.loss_ce": 0.6179, "decode.acc_seg": 77.31924, "loss": 0.6179, "time": 0.14558} -{"mode": "train", "epoch": 1, "iter": 11750, "lr": 0.00015, "memory": 4602, "data_time": 0.01219, "decode.loss_ce": 0.59279, "decode.acc_seg": 77.58125, "loss": 0.59279, "time": 0.14388} -{"mode": "train", "epoch": 1, "iter": 11800, "lr": 0.00015, "memory": 4602, "data_time": 0.0124, "decode.loss_ce": 0.59366, "decode.acc_seg": 78.35469, "loss": 0.59366, "time": 0.14148} -{"mode": "train", "epoch": 1, "iter": 11850, "lr": 0.00015, "memory": 4602, "data_time": 0.01396, "decode.loss_ce": 0.60247, "decode.acc_seg": 77.96848, "loss": 0.60247, "time": 0.14246} -{"mode": "train", "epoch": 1, "iter": 11900, "lr": 0.00015, "memory": 4602, "data_time": 0.01244, "decode.loss_ce": 0.60508, "decode.acc_seg": 77.69491, "loss": 0.60508, "time": 0.15032} -{"mode": "train", "epoch": 1, "iter": 11950, "lr": 0.00015, "memory": 4602, "data_time": 0.01256, "decode.loss_ce": 0.58742, "decode.acc_seg": 78.54817, "loss": 0.58742, "time": 0.14211} -{"mode": "train", "epoch": 1, "iter": 12000, "lr": 0.00015, "memory": 4602, "data_time": 0.01258, "decode.loss_ce": 0.58245, "decode.acc_seg": 78.40121, "loss": 0.58245, "time": 0.16393} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00015, "aAcc": 0.7751, "mIoU": 0.3556, "mAcc": 0.463, "IoU.wall": 0.6969, "IoU.building": 0.7887, "IoU.sky": 0.9349, "IoU.floor": 0.7553, "IoU.tree": 0.6972, "IoU.ceiling": 0.7696, "IoU.road": 0.779, "IoU.bed ": 0.8072, "IoU.windowpane": 0.5312, "IoU.grass": 0.6695, "IoU.cabinet": 0.5104, "IoU.sidewalk": 0.5342, "IoU.person": 0.735, "IoU.earth": 0.3289, "IoU.door": 0.3032, "IoU.table": 0.4341, "IoU.mountain": 0.4839, "IoU.plant": 0.4569, "IoU.curtain": 0.6277, "IoU.chair": 0.3814, "IoU.car": 0.7822, "IoU.water": 0.3107, "IoU.painting": 0.6156, "IoU.sofa": 0.5324, "IoU.shelf": 0.2968, "IoU.house": 0.4411, "IoU.sea": 0.5244, "IoU.mirror": 0.346, "IoU.rug": 0.4484, "IoU.field": 0.3178, "IoU.armchair": 0.234, "IoU.seat": 0.529, "IoU.fence": 0.3357, "IoU.desk": 0.3629, "IoU.rock": 0.3795, "IoU.wardrobe": 0.416, "IoU.lamp": 0.5052, "IoU.bathtub": 0.5048, "IoU.railing": 0.2923, "IoU.cushion": 0.3192, "IoU.base": 0.163, "IoU.box": 0.1545, "IoU.column": 0.3665, "IoU.signboard": 0.2905, "IoU.chest of drawers": 0.346, "IoU.counter": 0.1974, "IoU.sand": 0.2633, "IoU.sink": 0.5265, "IoU.skyscraper": 0.5175, "IoU.fireplace": 0.5422, "IoU.refrigerator": 0.5466, "IoU.grandstand": 0.2941, "IoU.path": 0.1971, "IoU.stairs": 0.345, "IoU.runway": 0.6813, "IoU.case": 0.4231, "IoU.pool table": 0.9008, "IoU.pillow": 0.4273, "IoU.screen door": 0.3366, "IoU.stairway": 0.2542, "IoU.river": 0.1063, "IoU.bridge": 0.2708, "IoU.bookcase": 0.2737, "IoU.blind": 0.232, "IoU.coffee table": 0.4391, "IoU.toilet": 0.7137, "IoU.flower": 0.3493, "IoU.book": 0.428, "IoU.hill": 0.0494, "IoU.bench": 0.3437, "IoU.countertop": 0.3581, "IoU.stove": 0.5174, "IoU.palm": 0.442, "IoU.kitchen island": 0.2289, "IoU.computer": 0.5164, "IoU.swivel chair": 0.3294, "IoU.boat": 0.5425, "IoU.bar": 0.1867, 
"IoU.arcade machine": 0.1524, "IoU.hovel": 0.1765, "IoU.bus": 0.8211, "IoU.towel": 0.4291, "IoU.light": 0.2862, "IoU.truck": 0.1915, "IoU.tower": 0.2094, "IoU.chandelier": 0.5676, "IoU.awning": 0.0511, "IoU.streetlight": 0.1461, "IoU.booth": 0.1754, "IoU.television receiver": 0.6075, "IoU.airplane": 0.4801, "IoU.dirt track": 0.0045, "IoU.apparel": 0.1874, "IoU.pole": 0.1247, "IoU.land": 0.0029, "IoU.bannister": 0.0842, "IoU.escalator": 0.2458, "IoU.ottoman": 0.1854, "IoU.bottle": 0.1342, "IoU.buffet": 0.3477, "IoU.poster": 0.0658, "IoU.stage": 0.0921, "IoU.van": 0.2576, "IoU.ship": 0.0447, "IoU.fountain": 0.1665, "IoU.conveyer belt": 0.4685, "IoU.canopy": 0.1314, "IoU.washer": 0.625, "IoU.plaything": 0.1146, "IoU.swimming pool": 0.2436, "IoU.stool": 0.1898, "IoU.barrel": 0.176, "IoU.basket": 0.198, "IoU.waterfall": 0.5779, "IoU.tent": 0.8694, "IoU.bag": 0.0336, "IoU.minibike": 0.486, "IoU.cradle": 0.7138, "IoU.oven": 0.2053, "IoU.ball": 0.3994, "IoU.food": 0.4999, "IoU.step": 0.0437, "IoU.tank": 0.3589, "IoU.trade name": 0.187, "IoU.microwave": 0.3262, "IoU.pot": 0.2387, "IoU.animal": 0.5147, "IoU.bicycle": 0.4574, "IoU.lake": 0.0003, "IoU.dishwasher": 0.3515, "IoU.screen": 0.5503, "IoU.blanket": 0.0027, "IoU.sculpture": 0.2678, "IoU.hood": 0.3147, "IoU.sconce": 0.1415, "IoU.vase": 0.2026, "IoU.traffic light": 0.183, "IoU.tray": 0.0064, "IoU.ashcan": 0.3181, "IoU.fan": 0.4068, "IoU.pier": 0.356, "IoU.crt screen": 0.0002, "IoU.plate": 0.2824, "IoU.monitor": 0.0603, "IoU.bulletin board": 0.1683, "IoU.shower": 0.0028, "IoU.radiator": 0.2361, "IoU.glass": 0.045, "IoU.clock": 0.1516, "IoU.flag": 0.174, "Acc.wall": 0.8645, "Acc.building": 0.9108, "Acc.sky": 0.9691, "Acc.floor": 0.8699, "Acc.tree": 0.868, "Acc.ceiling": 0.8655, "Acc.road": 0.8896, "Acc.bed ": 0.9356, "Acc.windowpane": 0.7254, "Acc.grass": 0.8068, "Acc.cabinet": 0.647, "Acc.sidewalk": 0.6397, "Acc.person": 0.9185, "Acc.earth": 0.5381, "Acc.door": 0.387, "Acc.table": 0.641, "Acc.mountain": 0.6147, "Acc.plant": 0.5514, "Acc.curtain": 0.7337, "Acc.chair": 0.448, "Acc.car": 0.8865, "Acc.water": 0.3801, "Acc.painting": 0.7606, "Acc.sofa": 0.8004, "Acc.shelf": 0.3829, "Acc.house": 0.5882, "Acc.sea": 0.874, "Acc.mirror": 0.3841, "Acc.rug": 0.5752, "Acc.field": 0.3515, "Acc.armchair": 0.3944, "Acc.seat": 0.6834, "Acc.fence": 0.5474, "Acc.desk": 0.5707, "Acc.rock": 0.5579, "Acc.wardrobe": 0.6582, "Acc.lamp": 0.6367, "Acc.bathtub": 0.6833, "Acc.railing": 0.402, "Acc.cushion": 0.3624, "Acc.base": 0.2317, "Acc.box": 0.2094, "Acc.column": 0.5707, "Acc.signboard": 0.3651, "Acc.chest of drawers": 0.5097, "Acc.counter": 0.2931, "Acc.sand": 0.3514, "Acc.sink": 0.7292, "Acc.skyscraper": 0.7271, "Acc.fireplace": 0.7225, "Acc.refrigerator": 0.696, "Acc.grandstand": 0.725, "Acc.path": 0.2936, "Acc.stairs": 0.4752, "Acc.runway": 0.92, "Acc.case": 0.5639, "Acc.pool table": 0.9441, "Acc.pillow": 0.533, "Acc.screen door": 0.7473, "Acc.stairway": 0.3715, "Acc.river": 0.3859, "Acc.bridge": 0.3096, "Acc.bookcase": 0.4254, "Acc.blind": 0.2481, "Acc.coffee table": 0.667, "Acc.toilet": 0.9203, "Acc.flower": 0.4609, "Acc.book": 0.651, "Acc.hill": 0.0665, "Acc.bench": 0.4319, "Acc.countertop": 0.4161, "Acc.stove": 0.6257, "Acc.palm": 0.6161, "Acc.kitchen island": 0.4412, "Acc.computer": 0.628, "Acc.swivel chair": 0.4226, "Acc.boat": 0.6494, "Acc.bar": 0.247, "Acc.arcade machine": 0.1619, "Acc.hovel": 0.2191, "Acc.bus": 0.8842, "Acc.towel": 0.5386, "Acc.light": 0.3065, "Acc.truck": 0.242, "Acc.tower": 0.2477, "Acc.chandelier": 0.7174, "Acc.awning": 0.0539, 
"Acc.streetlight": 0.1812, "Acc.booth": 0.2165, "Acc.television receiver": 0.7094, "Acc.airplane": 0.649, "Acc.dirt track": 0.0056, "Acc.apparel": 0.2257, "Acc.pole": 0.181, "Acc.land": 0.0033, "Acc.bannister": 0.1122, "Acc.escalator": 0.3215, "Acc.ottoman": 0.2283, "Acc.bottle": 0.154, "Acc.buffet": 0.3853, "Acc.poster": 0.0699, "Acc.stage": 0.157, "Acc.van": 0.3055, "Acc.ship": 0.0527, "Acc.fountain": 0.214, "Acc.conveyer belt": 0.6917, "Acc.canopy": 0.1387, "Acc.washer": 0.6993, "Acc.plaything": 0.1515, "Acc.swimming pool": 0.3726, "Acc.stool": 0.274, "Acc.barrel": 0.5878, "Acc.basket": 0.2607, "Acc.waterfall": 0.7096, "Acc.tent": 0.9854, "Acc.bag": 0.0341, "Acc.minibike": 0.7627, "Acc.cradle": 0.9521, "Acc.oven": 0.2659, "Acc.ball": 0.4764, "Acc.food": 0.6903, "Acc.step": 0.0563, "Acc.tank": 0.4565, "Acc.trade name": 0.2174, "Acc.microwave": 0.354, "Acc.pot": 0.2768, "Acc.animal": 0.5489, "Acc.bicycle": 0.6085, "Acc.lake": 0.0003, "Acc.dishwasher": 0.3844, "Acc.screen": 0.8265, "Acc.blanket": 0.0029, "Acc.sculpture": 0.438, "Acc.hood": 0.3969, "Acc.sconce": 0.1551, "Acc.vase": 0.28, "Acc.traffic light": 0.3156, "Acc.tray": 0.0074, "Acc.ashcan": 0.4022, "Acc.fan": 0.4795, "Acc.pier": 0.473, "Acc.crt screen": 0.0003, "Acc.plate": 0.3182, "Acc.monitor": 0.0736, "Acc.bulletin board": 0.193, "Acc.shower": 0.0265, "Acc.radiator": 0.2496, "Acc.glass": 0.0464, "Acc.clock": 0.181, "Acc.flag": 0.1815} -{"mode": "train", "epoch": 1, "iter": 12050, "lr": 0.00015, "memory": 4602, "data_time": 0.48686, "decode.loss_ce": 0.56567, "decode.acc_seg": 78.69026, "loss": 0.56567, "time": 0.61658} -{"mode": "train", "epoch": 1, "iter": 12100, "lr": 0.00014, "memory": 4602, "data_time": 0.01325, "decode.loss_ce": 0.58305, "decode.acc_seg": 79.07104, "loss": 0.58305, "time": 0.14093} -{"mode": "train", "epoch": 1, "iter": 12150, "lr": 0.00014, "memory": 4602, "data_time": 0.01173, "decode.loss_ce": 0.56054, "decode.acc_seg": 79.28065, "loss": 0.56054, "time": 0.14559} -{"mode": "train", "epoch": 1, "iter": 12200, "lr": 0.00014, "memory": 4602, "data_time": 0.01305, "decode.loss_ce": 0.57995, "decode.acc_seg": 78.56285, "loss": 0.57995, "time": 0.14226} -{"mode": "train", "epoch": 1, "iter": 12250, "lr": 0.00014, "memory": 4602, "data_time": 0.01316, "decode.loss_ce": 0.59741, "decode.acc_seg": 77.9258, "loss": 0.59741, "time": 0.14217} -{"mode": "train", "epoch": 1, "iter": 12300, "lr": 0.00014, "memory": 4602, "data_time": 0.01183, "decode.loss_ce": 0.58415, "decode.acc_seg": 78.82179, "loss": 0.58415, "time": 0.14134} -{"mode": "train", "epoch": 1, "iter": 12350, "lr": 0.00014, "memory": 4602, "data_time": 0.01208, "decode.loss_ce": 0.55639, "decode.acc_seg": 78.98392, "loss": 0.55639, "time": 0.1424} -{"mode": "train", "epoch": 1, "iter": 12400, "lr": 0.00014, "memory": 4602, "data_time": 0.01234, "decode.loss_ce": 0.56001, "decode.acc_seg": 78.84404, "loss": 0.56001, "time": 0.14242} -{"mode": "train", "epoch": 1, "iter": 12450, "lr": 0.00014, "memory": 4602, "data_time": 0.01253, "decode.loss_ce": 0.57141, "decode.acc_seg": 79.12574, "loss": 0.57141, "time": 0.14409} -{"mode": "train", "epoch": 1, "iter": 12500, "lr": 0.00014, "memory": 4602, "data_time": 0.01229, "decode.loss_ce": 0.58127, "decode.acc_seg": 78.24434, "loss": 0.58127, "time": 0.14139} -{"mode": "train", "epoch": 1, "iter": 12550, "lr": 0.00014, "memory": 4602, "data_time": 0.01272, "decode.loss_ce": 0.59268, "decode.acc_seg": 78.21632, "loss": 0.59268, "time": 0.14453} -{"mode": "train", "epoch": 1, "iter": 12600, "lr": 0.00014, 
"memory": 4602, "data_time": 0.01201, "decode.loss_ce": 0.55284, "decode.acc_seg": 79.55573, "loss": 0.55284, "time": 0.14216} -{"mode": "train", "epoch": 1, "iter": 12650, "lr": 0.00014, "memory": 4602, "data_time": 0.01225, "decode.loss_ce": 0.58266, "decode.acc_seg": 78.30921, "loss": 0.58266, "time": 0.14018} -{"mode": "train", "epoch": 1, "iter": 12700, "lr": 0.00014, "memory": 4602, "data_time": 0.01191, "decode.loss_ce": 0.55657, "decode.acc_seg": 79.70156, "loss": 0.55657, "time": 0.14574} -{"mode": "train", "epoch": 1, "iter": 12750, "lr": 0.00014, "memory": 4602, "data_time": 0.01302, "decode.loss_ce": 0.57705, "decode.acc_seg": 78.4929, "loss": 0.57705, "time": 0.14382} -{"mode": "train", "epoch": 1, "iter": 12800, "lr": 0.00014, "memory": 4602, "data_time": 0.01244, "decode.loss_ce": 0.5675, "decode.acc_seg": 78.73561, "loss": 0.5675, "time": 0.14213} -{"mode": "train", "epoch": 1, "iter": 12850, "lr": 0.00014, "memory": 4602, "data_time": 0.0126, "decode.loss_ce": 0.55591, "decode.acc_seg": 79.26772, "loss": 0.55591, "time": 0.14085} -{"mode": "train", "epoch": 1, "iter": 12900, "lr": 0.00014, "memory": 4602, "data_time": 0.01234, "decode.loss_ce": 0.57291, "decode.acc_seg": 79.24932, "loss": 0.57291, "time": 0.14254} -{"mode": "train", "epoch": 1, "iter": 12950, "lr": 0.00014, "memory": 4602, "data_time": 0.01217, "decode.loss_ce": 0.59122, "decode.acc_seg": 78.44357, "loss": 0.59122, "time": 0.1417} -{"mode": "train", "epoch": 1, "iter": 13000, "lr": 0.00014, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.54873, "decode.acc_seg": 79.52689, "loss": 0.54873, "time": 0.14244} -{"mode": "train", "epoch": 1, "iter": 13050, "lr": 0.00014, "memory": 4602, "data_time": 0.01234, "decode.loss_ce": 0.5601, "decode.acc_seg": 79.32956, "loss": 0.5601, "time": 0.14201} -{"mode": "train", "epoch": 1, "iter": 13100, "lr": 0.00014, "memory": 4602, "data_time": 0.01187, "decode.loss_ce": 0.58263, "decode.acc_seg": 78.353, "loss": 0.58263, "time": 0.1477} -{"mode": "train", "epoch": 1, "iter": 13150, "lr": 0.00014, "memory": 4602, "data_time": 0.01315, "decode.loss_ce": 0.5559, "decode.acc_seg": 79.10388, "loss": 0.5559, "time": 0.14712} -{"mode": "train", "epoch": 1, "iter": 13200, "lr": 0.00014, "memory": 4602, "data_time": 0.01184, "decode.loss_ce": 0.58044, "decode.acc_seg": 78.54181, "loss": 0.58044, "time": 0.14167} -{"mode": "train", "epoch": 1, "iter": 13250, "lr": 0.00014, "memory": 4602, "data_time": 0.01211, "decode.loss_ce": 0.55803, "decode.acc_seg": 78.93842, "loss": 0.55803, "time": 0.14117} -{"mode": "train", "epoch": 1, "iter": 13300, "lr": 0.00014, "memory": 4602, "data_time": 0.01212, "decode.loss_ce": 0.57498, "decode.acc_seg": 78.91345, "loss": 0.57498, "time": 0.1404} -{"mode": "train", "epoch": 1, "iter": 13350, "lr": 0.00014, "memory": 4602, "data_time": 0.01249, "decode.loss_ce": 0.57911, "decode.acc_seg": 78.44514, "loss": 0.57911, "time": 0.14118} -{"mode": "train", "epoch": 1, "iter": 13400, "lr": 0.00014, "memory": 4602, "data_time": 0.01175, "decode.loss_ce": 0.55778, "decode.acc_seg": 78.94509, "loss": 0.55778, "time": 0.14022} -{"mode": "train", "epoch": 1, "iter": 13450, "lr": 0.00014, "memory": 4602, "data_time": 0.01165, "decode.loss_ce": 0.55109, "decode.acc_seg": 79.55098, "loss": 0.55109, "time": 0.14158} -{"mode": "train", "epoch": 1, "iter": 13500, "lr": 0.00014, "memory": 4602, "data_time": 0.0126, "decode.loss_ce": 0.5611, "decode.acc_seg": 79.16019, "loss": 0.5611, "time": 0.14112} -{"mode": "train", "epoch": 1, "iter": 13550, "lr": 
0.00014, "memory": 4602, "data_time": 0.01162, "decode.loss_ce": 0.5549, "decode.acc_seg": 79.58552, "loss": 0.5549, "time": 0.1433} -{"mode": "train", "epoch": 1, "iter": 13600, "lr": 0.00014, "memory": 4602, "data_time": 0.01208, "decode.loss_ce": 0.55101, "decode.acc_seg": 79.15676, "loss": 0.55101, "time": 0.14554} -{"mode": "train", "epoch": 1, "iter": 13650, "lr": 0.00014, "memory": 4602, "data_time": 0.01177, "decode.loss_ce": 0.54183, "decode.acc_seg": 79.84546, "loss": 0.54183, "time": 0.13927} -{"mode": "train", "epoch": 1, "iter": 13700, "lr": 0.00014, "memory": 4602, "data_time": 0.01213, "decode.loss_ce": 0.57924, "decode.acc_seg": 78.33739, "loss": 0.57924, "time": 0.14173} -{"mode": "train", "epoch": 1, "iter": 13750, "lr": 0.00014, "memory": 4602, "data_time": 0.01213, "decode.loss_ce": 0.54742, "decode.acc_seg": 79.70966, "loss": 0.54742, "time": 0.14341} -{"mode": "train", "epoch": 1, "iter": 13800, "lr": 0.00014, "memory": 4602, "data_time": 0.01247, "decode.loss_ce": 0.55603, "decode.acc_seg": 79.18692, "loss": 0.55603, "time": 0.14548} -{"mode": "train", "epoch": 1, "iter": 13850, "lr": 0.00014, "memory": 4602, "data_time": 0.01162, "decode.loss_ce": 0.55293, "decode.acc_seg": 79.72073, "loss": 0.55293, "time": 0.14091} -{"mode": "train", "epoch": 1, "iter": 13900, "lr": 0.00014, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.5517, "decode.acc_seg": 79.45132, "loss": 0.5517, "time": 0.14556} -{"mode": "train", "epoch": 1, "iter": 13950, "lr": 0.00014, "memory": 4602, "data_time": 0.01174, "decode.loss_ce": 0.51475, "decode.acc_seg": 80.77057, "loss": 0.51475, "time": 0.14178} -{"mode": "train", "epoch": 1, "iter": 14000, "lr": 0.00014, "memory": 4602, "data_time": 0.01193, "decode.loss_ce": 0.5445, "decode.acc_seg": 79.51675, "loss": 0.5445, "time": 0.1403} -{"mode": "train", "epoch": 1, "iter": 14050, "lr": 0.00014, "memory": 4602, "data_time": 0.01182, "decode.loss_ce": 0.55633, "decode.acc_seg": 79.14789, "loss": 0.55633, "time": 0.14076} -{"mode": "train", "epoch": 1, "iter": 14100, "lr": 0.00014, "memory": 4602, "data_time": 0.01258, "decode.loss_ce": 0.53496, "decode.acc_seg": 80.08002, "loss": 0.53496, "time": 0.14173} -{"mode": "train", "epoch": 1, "iter": 14150, "lr": 0.00014, "memory": 4602, "data_time": 0.0114, "decode.loss_ce": 0.54037, "decode.acc_seg": 80.07833, "loss": 0.54037, "time": 0.14174} -{"mode": "train", "epoch": 1, "iter": 14200, "lr": 0.00014, "memory": 4602, "data_time": 0.01236, "decode.loss_ce": 0.54873, "decode.acc_seg": 79.21571, "loss": 0.54873, "time": 0.14137} -{"mode": "train", "epoch": 1, "iter": 14250, "lr": 0.00013, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.56113, "decode.acc_seg": 79.0246, "loss": 0.56113, "time": 0.15039} -{"mode": "train", "epoch": 1, "iter": 14300, "lr": 0.00013, "memory": 4602, "data_time": 0.012, "decode.loss_ce": 0.54701, "decode.acc_seg": 79.80015, "loss": 0.54701, "time": 0.14003} -{"mode": "train", "epoch": 1, "iter": 14350, "lr": 0.00013, "memory": 4602, "data_time": 0.01235, "decode.loss_ce": 0.52794, "decode.acc_seg": 79.96227, "loss": 0.52794, "time": 0.14562} -{"mode": "train", "epoch": 1, "iter": 14400, "lr": 0.00013, "memory": 4602, "data_time": 0.01244, "decode.loss_ce": 0.53108, "decode.acc_seg": 79.92808, "loss": 0.53108, "time": 0.13941} -{"mode": "train", "epoch": 1, "iter": 14450, "lr": 0.00013, "memory": 4602, "data_time": 0.01296, "decode.loss_ce": 0.53771, "decode.acc_seg": 80.16171, "loss": 0.53771, "time": 0.14301} -{"mode": "train", "epoch": 1, "iter": 14500, 
"lr": 0.00013, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.5351, "decode.acc_seg": 80.04835, "loss": 0.5351, "time": 0.14134} -{"mode": "train", "epoch": 1, "iter": 14550, "lr": 0.00013, "memory": 4602, "data_time": 0.01201, "decode.loss_ce": 0.53481, "decode.acc_seg": 80.54734, "loss": 0.53481, "time": 0.14331} -{"mode": "train", "epoch": 1, "iter": 14600, "lr": 0.00013, "memory": 4602, "data_time": 0.01254, "decode.loss_ce": 0.55715, "decode.acc_seg": 79.40662, "loss": 0.55715, "time": 0.14099} -{"mode": "train", "epoch": 1, "iter": 14650, "lr": 0.00013, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.54306, "decode.acc_seg": 79.79439, "loss": 0.54306, "time": 0.14198} -{"mode": "train", "epoch": 1, "iter": 14700, "lr": 0.00013, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.54169, "decode.acc_seg": 80.10432, "loss": 0.54169, "time": 0.13957} -{"mode": "train", "epoch": 1, "iter": 14750, "lr": 0.00013, "memory": 4602, "data_time": 0.01238, "decode.loss_ce": 0.53543, "decode.acc_seg": 80.21414, "loss": 0.53543, "time": 0.14087} -{"mode": "train", "epoch": 1, "iter": 14800, "lr": 0.00013, "memory": 4602, "data_time": 0.01307, "decode.loss_ce": 0.55042, "decode.acc_seg": 79.36679, "loss": 0.55042, "time": 0.13995} -{"mode": "train", "epoch": 1, "iter": 14850, "lr": 0.00013, "memory": 4602, "data_time": 0.01249, "decode.loss_ce": 0.52735, "decode.acc_seg": 80.29, "loss": 0.52735, "time": 0.13991} -{"mode": "train", "epoch": 1, "iter": 14900, "lr": 0.00013, "memory": 4602, "data_time": 0.01215, "decode.loss_ce": 0.53615, "decode.acc_seg": 80.00962, "loss": 0.53615, "time": 0.14314} -{"mode": "train", "epoch": 1, "iter": 14950, "lr": 0.00013, "memory": 4602, "data_time": 0.0123, "decode.loss_ce": 0.5483, "decode.acc_seg": 79.84936, "loss": 0.5483, "time": 0.13983} -{"mode": "train", "epoch": 1, "iter": 15000, "lr": 0.00013, "memory": 4602, "data_time": 0.01246, "decode.loss_ce": 0.55445, "decode.acc_seg": 79.47106, "loss": 0.55445, "time": 0.14393} -{"mode": "train", "epoch": 1, "iter": 15050, "lr": 0.00013, "memory": 4602, "data_time": 0.01248, "decode.loss_ce": 0.53146, "decode.acc_seg": 79.87095, "loss": 0.53146, "time": 0.14383} -{"mode": "train", "epoch": 1, "iter": 15100, "lr": 0.00013, "memory": 4602, "data_time": 0.01229, "decode.loss_ce": 0.54156, "decode.acc_seg": 79.61276, "loss": 0.54156, "time": 0.1392} -{"mode": "train", "epoch": 1, "iter": 15150, "lr": 0.00013, "memory": 4602, "data_time": 0.01209, "decode.loss_ce": 0.53143, "decode.acc_seg": 80.54074, "loss": 0.53143, "time": 0.14051} -{"mode": "train", "epoch": 1, "iter": 15200, "lr": 0.00013, "memory": 4602, "data_time": 0.01238, "decode.loss_ce": 0.52458, "decode.acc_seg": 80.17012, "loss": 0.52458, "time": 0.14146} -{"mode": "train", "epoch": 1, "iter": 15250, "lr": 0.00013, "memory": 4602, "data_time": 0.01243, "decode.loss_ce": 0.52448, "decode.acc_seg": 80.86216, "loss": 0.52448, "time": 0.14199} -{"mode": "train", "epoch": 1, "iter": 15300, "lr": 0.00013, "memory": 4602, "data_time": 0.0124, "decode.loss_ce": 0.53034, "decode.acc_seg": 80.40591, "loss": 0.53034, "time": 0.14256} -{"mode": "train", "epoch": 1, "iter": 15350, "lr": 0.00013, "memory": 4602, "data_time": 0.01226, "decode.loss_ce": 0.4872, "decode.acc_seg": 81.53168, "loss": 0.4872, "time": 0.14263} -{"mode": "train", "epoch": 1, "iter": 15400, "lr": 0.00013, "memory": 4602, "data_time": 0.0119, "decode.loss_ce": 0.52302, "decode.acc_seg": 80.14477, "loss": 0.52302, "time": 0.1509} -{"mode": "train", "epoch": 1, "iter": 
15450, "lr": 0.00013, "memory": 4602, "data_time": 0.01245, "decode.loss_ce": 0.521, "decode.acc_seg": 80.41206, "loss": 0.521, "time": 0.14123} -{"mode": "train", "epoch": 1, "iter": 15500, "lr": 0.00013, "memory": 4602, "data_time": 0.01219, "decode.loss_ce": 0.54842, "decode.acc_seg": 79.55217, "loss": 0.54842, "time": 0.14483} -{"mode": "train", "epoch": 1, "iter": 15550, "lr": 0.00013, "memory": 4602, "data_time": 0.01233, "decode.loss_ce": 0.51086, "decode.acc_seg": 80.72896, "loss": 0.51086, "time": 0.14113} -{"mode": "train", "epoch": 1, "iter": 15600, "lr": 0.00013, "memory": 4602, "data_time": 0.01317, "decode.loss_ce": 0.5338, "decode.acc_seg": 80.36127, "loss": 0.5338, "time": 0.14309} -{"mode": "train", "epoch": 1, "iter": 15650, "lr": 0.00013, "memory": 4602, "data_time": 0.01224, "decode.loss_ce": 0.51052, "decode.acc_seg": 80.62785, "loss": 0.51052, "time": 0.14333} -{"mode": "train", "epoch": 1, "iter": 15700, "lr": 0.00013, "memory": 4602, "data_time": 0.01214, "decode.loss_ce": 0.52901, "decode.acc_seg": 80.248, "loss": 0.52901, "time": 0.14315} -{"mode": "train", "epoch": 1, "iter": 15750, "lr": 0.00013, "memory": 4602, "data_time": 0.01313, "decode.loss_ce": 0.54011, "decode.acc_seg": 79.88103, "loss": 0.54011, "time": 0.14323} -{"mode": "train", "epoch": 1, "iter": 15800, "lr": 0.00013, "memory": 4602, "data_time": 0.01321, "decode.loss_ce": 0.54088, "decode.acc_seg": 79.84865, "loss": 0.54088, "time": 0.14045} -{"mode": "train", "epoch": 1, "iter": 15850, "lr": 0.00013, "memory": 4602, "data_time": 0.01246, "decode.loss_ce": 0.52119, "decode.acc_seg": 80.20609, "loss": 0.52119, "time": 0.14027} -{"mode": "train", "epoch": 1, "iter": 15900, "lr": 0.00013, "memory": 4602, "data_time": 0.01264, "decode.loss_ce": 0.53581, "decode.acc_seg": 79.79781, "loss": 0.53581, "time": 0.14249} -{"mode": "train", "epoch": 1, "iter": 15950, "lr": 0.00013, "memory": 4602, "data_time": 0.01225, "decode.loss_ce": 0.50504, "decode.acc_seg": 80.97665, "loss": 0.50504, "time": 0.1411} -{"mode": "train", "epoch": 1, "iter": 16000, "lr": 0.00013, "memory": 4602, "data_time": 0.01233, "decode.loss_ce": 0.51017, "decode.acc_seg": 80.89692, "loss": 0.51017, "time": 0.15445} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00013, "aAcc": 0.7795, "mIoU": 0.3728, "mAcc": 0.4844, "IoU.wall": 0.7071, "IoU.building": 0.7945, "IoU.sky": 0.9322, "IoU.floor": 0.7511, "IoU.tree": 0.6779, "IoU.ceiling": 0.7768, "IoU.road": 0.7733, "IoU.bed ": 0.8214, "IoU.windowpane": 0.5379, "IoU.grass": 0.6258, "IoU.cabinet": 0.5069, "IoU.sidewalk": 0.566, "IoU.person": 0.7529, "IoU.earth": 0.2595, "IoU.door": 0.3096, "IoU.table": 0.4574, "IoU.mountain": 0.5092, "IoU.plant": 0.4314, "IoU.curtain": 0.6414, "IoU.chair": 0.4608, "IoU.car": 0.7731, "IoU.water": 0.4719, "IoU.painting": 0.632, "IoU.sofa": 0.5452, "IoU.shelf": 0.3466, "IoU.house": 0.4716, "IoU.sea": 0.5614, "IoU.mirror": 0.4505, "IoU.rug": 0.4518, "IoU.field": 0.2976, "IoU.armchair": 0.2654, "IoU.seat": 0.5028, "IoU.fence": 0.3133, "IoU.desk": 0.3257, "IoU.rock": 0.3967, "IoU.wardrobe": 0.4109, "IoU.lamp": 0.5268, "IoU.bathtub": 0.5876, "IoU.railing": 0.2563, "IoU.cushion": 0.4002, "IoU.base": 0.1895, "IoU.box": 0.138, "IoU.column": 0.3805, "IoU.signboard": 0.2943, "IoU.chest of drawers": 0.3606, "IoU.counter": 0.2223, "IoU.sand": 0.2759, "IoU.sink": 0.59, "IoU.skyscraper": 0.6287, "IoU.fireplace": 0.5476, "IoU.refrigerator": 0.5537, "IoU.grandstand": 0.3128, "IoU.path": 0.1875, "IoU.stairs": 0.3034, "IoU.runway": 0.7037, "IoU.case": 0.4435, "IoU.pool table": 
0.896, "IoU.pillow": 0.4187, "IoU.screen door": 0.5083, "IoU.stairway": 0.2472, "IoU.river": 0.1162, "IoU.bridge": 0.4415, "IoU.bookcase": 0.2884, "IoU.blind": 0.2168, "IoU.coffee table": 0.4789, "IoU.toilet": 0.7624, "IoU.flower": 0.3336, "IoU.book": 0.3413, "IoU.hill": 0.0441, "IoU.bench": 0.3694, "IoU.countertop": 0.4605, "IoU.stove": 0.5565, "IoU.palm": 0.3806, "IoU.kitchen island": 0.1788, "IoU.computer": 0.4935, "IoU.swivel chair": 0.3309, "IoU.boat": 0.4023, "IoU.bar": 0.2078, "IoU.arcade machine": 0.3265, "IoU.hovel": 0.1417, "IoU.bus": 0.7187, "IoU.towel": 0.4333, "IoU.light": 0.3979, "IoU.truck": 0.1006, "IoU.tower": 0.2438, "IoU.chandelier": 0.5778, "IoU.awning": 0.0871, "IoU.streetlight": 0.1321, "IoU.booth": 0.2499, "IoU.television receiver": 0.5543, "IoU.airplane": 0.4884, "IoU.dirt track": 0.0408, "IoU.apparel": 0.2271, "IoU.pole": 0.1116, "IoU.land": 0.0498, "IoU.bannister": 0.0591, "IoU.escalator": 0.3843, "IoU.ottoman": 0.2466, "IoU.bottle": 0.0891, "IoU.buffet": 0.315, "IoU.poster": 0.1223, "IoU.stage": 0.099, "IoU.van": 0.2002, "IoU.ship": 0.1894, "IoU.fountain": 0.2027, "IoU.conveyer belt": 0.5871, "IoU.canopy": 0.1384, "IoU.washer": 0.5954, "IoU.plaything": 0.2041, "IoU.swimming pool": 0.1475, "IoU.stool": 0.1622, "IoU.barrel": 0.1815, "IoU.basket": 0.1967, "IoU.waterfall": 0.5848, "IoU.tent": 0.8689, "IoU.bag": 0.0448, "IoU.minibike": 0.5986, "IoU.cradle": 0.7132, "IoU.oven": 0.1791, "IoU.ball": 0.4209, "IoU.food": 0.4684, "IoU.step": 0.0467, "IoU.tank": 0.2772, "IoU.trade name": 0.0728, "IoU.microwave": 0.33, "IoU.pot": 0.2207, "IoU.animal": 0.5512, "IoU.bicycle": 0.4729, "IoU.lake": 0.0044, "IoU.dishwasher": 0.432, "IoU.screen": 0.6169, "IoU.blanket": 0.0146, "IoU.sculpture": 0.2871, "IoU.hood": 0.3368, "IoU.sconce": 0.1727, "IoU.vase": 0.2307, "IoU.traffic light": 0.1883, "IoU.tray": 0.0084, "IoU.ashcan": 0.2772, "IoU.fan": 0.5061, "IoU.pier": 0.4655, "IoU.crt screen": 0.0014, "IoU.plate": 0.3813, "IoU.monitor": 0.1187, "IoU.bulletin board": 0.3113, "IoU.shower": 0.0, "IoU.radiator": 0.4363, "IoU.glass": 0.0307, "IoU.clock": 0.1113, "IoU.flag": 0.252, "Acc.wall": 0.8615, "Acc.building": 0.9005, "Acc.sky": 0.9685, "Acc.floor": 0.8831, "Acc.tree": 0.8691, "Acc.ceiling": 0.8815, "Acc.road": 0.8615, "Acc.bed ": 0.9459, "Acc.windowpane": 0.7034, "Acc.grass": 0.7176, "Acc.cabinet": 0.662, "Acc.sidewalk": 0.691, "Acc.person": 0.8883, "Acc.earth": 0.3567, "Acc.door": 0.375, "Acc.table": 0.6326, "Acc.mountain": 0.6977, "Acc.plant": 0.529, "Acc.curtain": 0.7767, "Acc.chair": 0.6221, "Acc.car": 0.912, "Acc.water": 0.6389, "Acc.painting": 0.7622, "Acc.sofa": 0.8215, "Acc.shelf": 0.5354, "Acc.house": 0.7849, "Acc.sea": 0.8128, "Acc.mirror": 0.5393, "Acc.rug": 0.5475, "Acc.field": 0.558, "Acc.armchair": 0.3639, "Acc.seat": 0.7006, "Acc.fence": 0.5016, "Acc.desk": 0.5257, "Acc.rock": 0.5704, "Acc.wardrobe": 0.7593, "Acc.lamp": 0.6797, "Acc.bathtub": 0.6549, "Acc.railing": 0.3009, "Acc.cushion": 0.5162, "Acc.base": 0.3086, "Acc.box": 0.1746, "Acc.column": 0.4971, "Acc.signboard": 0.4333, "Acc.chest of drawers": 0.4648, "Acc.counter": 0.3423, "Acc.sand": 0.4419, "Acc.sink": 0.7111, "Acc.skyscraper": 0.8035, "Acc.fireplace": 0.7732, "Acc.refrigerator": 0.7411, "Acc.grandstand": 0.726, "Acc.path": 0.3493, "Acc.stairs": 0.3868, "Acc.runway": 0.9126, "Acc.case": 0.6144, "Acc.pool table": 0.9509, "Acc.pillow": 0.4808, "Acc.screen door": 0.6835, "Acc.stairway": 0.3227, "Acc.river": 0.1922, "Acc.bridge": 0.6911, "Acc.bookcase": 0.5859, "Acc.blind": 0.2311, "Acc.coffee table": 0.7532, 
"Acc.toilet": 0.9019, "Acc.flower": 0.4709, "Acc.book": 0.4669, "Acc.hill": 0.0748, "Acc.bench": 0.4269, "Acc.countertop": 0.5565, "Acc.stove": 0.7135, "Acc.palm": 0.4602, "Acc.kitchen island": 0.331, "Acc.computer": 0.5504, "Acc.swivel chair": 0.5025, "Acc.boat": 0.4507, "Acc.bar": 0.2282, "Acc.arcade machine": 0.3694, "Acc.hovel": 0.1682, "Acc.bus": 0.9032, "Acc.towel": 0.5752, "Acc.light": 0.4642, "Acc.truck": 0.1415, "Acc.tower": 0.296, "Acc.chandelier": 0.6818, "Acc.awning": 0.0933, "Acc.streetlight": 0.1558, "Acc.booth": 0.2989, "Acc.television receiver": 0.7384, "Acc.airplane": 0.6546, "Acc.dirt track": 0.1165, "Acc.apparel": 0.2928, "Acc.pole": 0.1393, "Acc.land": 0.0651, "Acc.bannister": 0.0689, "Acc.escalator": 0.4739, "Acc.ottoman": 0.3481, "Acc.bottle": 0.0947, "Acc.buffet": 0.3448, "Acc.poster": 0.145, "Acc.stage": 0.146, "Acc.van": 0.241, "Acc.ship": 0.2736, "Acc.fountain": 0.2094, "Acc.conveyer belt": 0.7985, "Acc.canopy": 0.1699, "Acc.washer": 0.6449, "Acc.plaything": 0.4063, "Acc.swimming pool": 0.2128, "Acc.stool": 0.1894, "Acc.barrel": 0.7381, "Acc.basket": 0.2707, "Acc.waterfall": 0.8257, "Acc.tent": 0.9795, "Acc.bag": 0.0467, "Acc.minibike": 0.7423, "Acc.cradle": 0.9008, "Acc.oven": 0.2201, "Acc.ball": 0.4931, "Acc.food": 0.5504, "Acc.step": 0.0591, "Acc.tank": 0.2998, "Acc.trade name": 0.0758, "Acc.microwave": 0.3594, "Acc.pot": 0.2523, "Acc.animal": 0.6071, "Acc.bicycle": 0.6455, "Acc.lake": 0.0068, "Acc.dishwasher": 0.6138, "Acc.screen": 0.7848, "Acc.blanket": 0.016, "Acc.sculpture": 0.3239, "Acc.hood": 0.3781, "Acc.sconce": 0.1954, "Acc.vase": 0.3266, "Acc.traffic light": 0.2599, "Acc.tray": 0.0096, "Acc.ashcan": 0.3918, "Acc.fan": 0.6094, "Acc.pier": 0.6506, "Acc.crt screen": 0.0032, "Acc.plate": 0.5042, "Acc.monitor": 0.1839, "Acc.bulletin board": 0.4653, "Acc.shower": 0.0, "Acc.radiator": 0.484, "Acc.glass": 0.0314, "Acc.clock": 0.1225, "Acc.flag": 0.2924} -{"mode": "train", "epoch": 1, "iter": 16050, "lr": 0.00013, "memory": 4602, "data_time": 0.46232, "decode.loss_ce": 0.50203, "decode.acc_seg": 81.35896, "loss": 0.50203, "time": 0.59138} -{"mode": "train", "epoch": 1, "iter": 16100, "lr": 0.00013, "memory": 4602, "data_time": 0.0132, "decode.loss_ce": 0.52092, "decode.acc_seg": 80.17557, "loss": 0.52092, "time": 0.14581} -{"mode": "train", "epoch": 1, "iter": 16150, "lr": 0.00013, "memory": 4602, "data_time": 0.01298, "decode.loss_ce": 0.52673, "decode.acc_seg": 80.55217, "loss": 0.52673, "time": 0.14123} -{"mode": "train", "epoch": 1, "iter": 16200, "lr": 0.00013, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.53121, "decode.acc_seg": 80.16072, "loss": 0.53121, "time": 0.14258} -{"mode": "train", "epoch": 1, "iter": 16250, "lr": 0.00013, "memory": 4602, "data_time": 0.01217, "decode.loss_ce": 0.49961, "decode.acc_seg": 81.21167, "loss": 0.49961, "time": 0.14367} -{"mode": "train", "epoch": 1, "iter": 16300, "lr": 0.00013, "memory": 4602, "data_time": 0.01205, "decode.loss_ce": 0.51853, "decode.acc_seg": 80.81772, "loss": 0.51853, "time": 0.14276} -{"mode": "train", "epoch": 1, "iter": 16350, "lr": 0.00013, "memory": 4602, "data_time": 0.01212, "decode.loss_ce": 0.50017, "decode.acc_seg": 81.0951, "loss": 0.50017, "time": 0.14495} -{"mode": "train", "epoch": 1, "iter": 16400, "lr": 0.00012, "memory": 4602, "data_time": 0.01233, "decode.loss_ce": 0.49853, "decode.acc_seg": 81.45496, "loss": 0.49853, "time": 0.14173} -{"mode": "train", "epoch": 1, "iter": 16450, "lr": 0.00012, "memory": 4602, "data_time": 0.01205, "decode.loss_ce": 0.52337, 
"decode.acc_seg": 80.61664, "loss": 0.52337, "time": 0.145} -{"mode": "train", "epoch": 1, "iter": 16500, "lr": 0.00012, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.52799, "decode.acc_seg": 80.18608, "loss": 0.52799, "time": 0.14424} -{"mode": "train", "epoch": 1, "iter": 16550, "lr": 0.00012, "memory": 4602, "data_time": 0.01219, "decode.loss_ce": 0.51069, "decode.acc_seg": 80.89432, "loss": 0.51069, "time": 0.15094} -{"mode": "train", "epoch": 1, "iter": 16600, "lr": 0.00012, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.51158, "decode.acc_seg": 80.64491, "loss": 0.51158, "time": 0.14398} -{"mode": "train", "epoch": 1, "iter": 16650, "lr": 0.00012, "memory": 4602, "data_time": 0.01177, "decode.loss_ce": 0.51608, "decode.acc_seg": 80.47034, "loss": 0.51608, "time": 0.14669} -{"mode": "train", "epoch": 1, "iter": 16700, "lr": 0.00012, "memory": 4602, "data_time": 0.01185, "decode.loss_ce": 0.54099, "decode.acc_seg": 79.98961, "loss": 0.54099, "time": 0.14131} -{"mode": "train", "epoch": 1, "iter": 16750, "lr": 0.00012, "memory": 4602, "data_time": 0.01243, "decode.loss_ce": 0.51524, "decode.acc_seg": 80.48725, "loss": 0.51524, "time": 0.14071} -{"mode": "train", "epoch": 1, "iter": 16800, "lr": 0.00012, "memory": 4602, "data_time": 0.01248, "decode.loss_ce": 0.53301, "decode.acc_seg": 80.40087, "loss": 0.53301, "time": 0.14216} -{"mode": "train", "epoch": 1, "iter": 16850, "lr": 0.00012, "memory": 4602, "data_time": 0.01299, "decode.loss_ce": 0.49414, "decode.acc_seg": 81.18118, "loss": 0.49414, "time": 0.14063} -{"mode": "train", "epoch": 1, "iter": 16900, "lr": 0.00012, "memory": 4602, "data_time": 0.01238, "decode.loss_ce": 0.52008, "decode.acc_seg": 80.66314, "loss": 0.52008, "time": 0.14368} -{"mode": "train", "epoch": 1, "iter": 16950, "lr": 0.00012, "memory": 4602, "data_time": 0.01227, "decode.loss_ce": 0.52231, "decode.acc_seg": 80.80986, "loss": 0.52231, "time": 0.14475} -{"mode": "train", "epoch": 1, "iter": 17000, "lr": 0.00012, "memory": 4602, "data_time": 0.01254, "decode.loss_ce": 0.52558, "decode.acc_seg": 80.56175, "loss": 0.52558, "time": 0.14363} -{"mode": "train", "epoch": 1, "iter": 17050, "lr": 0.00012, "memory": 4602, "data_time": 0.01277, "decode.loss_ce": 0.49485, "decode.acc_seg": 81.20542, "loss": 0.49485, "time": 0.14153} -{"mode": "train", "epoch": 1, "iter": 17100, "lr": 0.00012, "memory": 4602, "data_time": 0.0127, "decode.loss_ce": 0.50627, "decode.acc_seg": 80.80393, "loss": 0.50627, "time": 0.14155} -{"mode": "train", "epoch": 1, "iter": 17150, "lr": 0.00012, "memory": 4602, "data_time": 0.01241, "decode.loss_ce": 0.50014, "decode.acc_seg": 81.29447, "loss": 0.50014, "time": 0.14612} -{"mode": "train", "epoch": 1, "iter": 17200, "lr": 0.00012, "memory": 4602, "data_time": 0.01229, "decode.loss_ce": 0.50077, "decode.acc_seg": 81.5458, "loss": 0.50077, "time": 0.13906} -{"mode": "train", "epoch": 1, "iter": 17250, "lr": 0.00012, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.50947, "decode.acc_seg": 80.86211, "loss": 0.50947, "time": 0.14615} -{"mode": "train", "epoch": 1, "iter": 17300, "lr": 0.00012, "memory": 4602, "data_time": 0.01266, "decode.loss_ce": 0.48392, "decode.acc_seg": 81.68008, "loss": 0.48392, "time": 0.14276} -{"mode": "train", "epoch": 1, "iter": 17350, "lr": 0.00012, "memory": 4602, "data_time": 0.01186, "decode.loss_ce": 0.51547, "decode.acc_seg": 81.20737, "loss": 0.51547, "time": 0.13973} -{"mode": "train", "epoch": 1, "iter": 17400, "lr": 0.00012, "memory": 4602, "data_time": 0.01219, 
"decode.loss_ce": 0.48854, "decode.acc_seg": 81.32505, "loss": 0.48854, "time": 0.14067} -{"mode": "train", "epoch": 1, "iter": 17450, "lr": 0.00012, "memory": 4602, "data_time": 0.01286, "decode.loss_ce": 0.50423, "decode.acc_seg": 81.40806, "loss": 0.50423, "time": 0.14203} -{"mode": "train", "epoch": 1, "iter": 17500, "lr": 0.00012, "memory": 4602, "data_time": 0.01307, "decode.loss_ce": 0.52326, "decode.acc_seg": 80.62669, "loss": 0.52326, "time": 0.13841} -{"mode": "train", "epoch": 1, "iter": 17550, "lr": 0.00012, "memory": 4602, "data_time": 0.01224, "decode.loss_ce": 0.49384, "decode.acc_seg": 81.12962, "loss": 0.49384, "time": 0.14069} -{"mode": "train", "epoch": 1, "iter": 17600, "lr": 0.00012, "memory": 4602, "data_time": 0.01184, "decode.loss_ce": 0.50865, "decode.acc_seg": 80.89396, "loss": 0.50865, "time": 0.13993} -{"mode": "train", "epoch": 1, "iter": 17650, "lr": 0.00012, "memory": 4602, "data_time": 0.01182, "decode.loss_ce": 0.5174, "decode.acc_seg": 80.67732, "loss": 0.5174, "time": 0.14328} -{"mode": "train", "epoch": 1, "iter": 17700, "lr": 0.00012, "memory": 4602, "data_time": 0.01389, "decode.loss_ce": 0.49639, "decode.acc_seg": 81.20763, "loss": 0.49639, "time": 0.14611} -{"mode": "train", "epoch": 1, "iter": 17750, "lr": 0.00012, "memory": 4602, "data_time": 0.0166, "decode.loss_ce": 0.50872, "decode.acc_seg": 81.10175, "loss": 0.50872, "time": 0.14821} -{"mode": "train", "epoch": 1, "iter": 17800, "lr": 0.00012, "memory": 4602, "data_time": 0.0123, "decode.loss_ce": 0.49268, "decode.acc_seg": 81.09752, "loss": 0.49268, "time": 0.14119} -{"mode": "train", "epoch": 1, "iter": 17850, "lr": 0.00012, "memory": 4602, "data_time": 0.01234, "decode.loss_ce": 0.48661, "decode.acc_seg": 81.51311, "loss": 0.48661, "time": 0.14149} -{"mode": "train", "epoch": 1, "iter": 17900, "lr": 0.00012, "memory": 4602, "data_time": 0.0127, "decode.loss_ce": 0.49947, "decode.acc_seg": 81.20772, "loss": 0.49947, "time": 0.142} -{"mode": "train", "epoch": 1, "iter": 17950, "lr": 0.00012, "memory": 4602, "data_time": 0.01193, "decode.loss_ce": 0.47382, "decode.acc_seg": 82.06673, "loss": 0.47382, "time": 0.14304} -{"mode": "train", "epoch": 1, "iter": 18000, "lr": 0.00012, "memory": 4602, "data_time": 0.01239, "decode.loss_ce": 0.48852, "decode.acc_seg": 81.51108, "loss": 0.48852, "time": 0.14129} -{"mode": "train", "epoch": 1, "iter": 18050, "lr": 0.00012, "memory": 4602, "data_time": 0.01295, "decode.loss_ce": 0.50432, "decode.acc_seg": 80.82901, "loss": 0.50432, "time": 0.1427} -{"mode": "train", "epoch": 1, "iter": 18100, "lr": 0.00012, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.50135, "decode.acc_seg": 81.16468, "loss": 0.50135, "time": 0.14093} -{"mode": "train", "epoch": 1, "iter": 18150, "lr": 0.00012, "memory": 4602, "data_time": 0.01243, "decode.loss_ce": 0.48972, "decode.acc_seg": 81.82104, "loss": 0.48972, "time": 0.14025} -{"mode": "train", "epoch": 1, "iter": 18200, "lr": 0.00012, "memory": 4602, "data_time": 0.01278, "decode.loss_ce": 0.50228, "decode.acc_seg": 81.22854, "loss": 0.50228, "time": 0.14186} -{"mode": "train", "epoch": 1, "iter": 18250, "lr": 0.00012, "memory": 4602, "data_time": 0.01208, "decode.loss_ce": 0.48866, "decode.acc_seg": 81.88204, "loss": 0.48866, "time": 0.14204} -{"mode": "train", "epoch": 1, "iter": 18300, "lr": 0.00012, "memory": 4602, "data_time": 0.012, "decode.loss_ce": 0.47059, "decode.acc_seg": 81.96147, "loss": 0.47059, "time": 0.13954} -{"mode": "train", "epoch": 1, "iter": 18350, "lr": 0.00012, "memory": 4602, "data_time": 
0.01167, "decode.loss_ce": 0.48924, "decode.acc_seg": 81.57343, "loss": 0.48924, "time": 0.14054} -{"mode": "train", "epoch": 1, "iter": 18400, "lr": 0.00012, "memory": 4602, "data_time": 0.01226, "decode.loss_ce": 0.49012, "decode.acc_seg": 81.55925, "loss": 0.49012, "time": 0.14483} -{"mode": "train", "epoch": 1, "iter": 18450, "lr": 0.00012, "memory": 4602, "data_time": 0.0124, "decode.loss_ce": 0.48689, "decode.acc_seg": 81.30354, "loss": 0.48689, "time": 0.14266} -{"mode": "train", "epoch": 1, "iter": 18500, "lr": 0.00011, "memory": 4602, "data_time": 0.012, "decode.loss_ce": 0.49413, "decode.acc_seg": 81.49595, "loss": 0.49413, "time": 0.14179} -{"mode": "train", "epoch": 1, "iter": 18550, "lr": 0.00011, "memory": 4602, "data_time": 0.01219, "decode.loss_ce": 0.49861, "decode.acc_seg": 81.41718, "loss": 0.49861, "time": 0.14279} -{"mode": "train", "epoch": 1, "iter": 18600, "lr": 0.00011, "memory": 4602, "data_time": 0.01214, "decode.loss_ce": 0.48337, "decode.acc_seg": 81.63961, "loss": 0.48337, "time": 0.1411} -{"mode": "train", "epoch": 1, "iter": 18650, "lr": 0.00011, "memory": 4602, "data_time": 0.01187, "decode.loss_ce": 0.48554, "decode.acc_seg": 81.39649, "loss": 0.48554, "time": 0.13873} -{"mode": "train", "epoch": 1, "iter": 18700, "lr": 0.00011, "memory": 4602, "data_time": 0.01168, "decode.loss_ce": 0.49542, "decode.acc_seg": 81.05146, "loss": 0.49542, "time": 0.14232} -{"mode": "train", "epoch": 1, "iter": 18750, "lr": 0.00011, "memory": 4602, "data_time": 0.01206, "decode.loss_ce": 0.48331, "decode.acc_seg": 81.49227, "loss": 0.48331, "time": 0.14135} -{"mode": "train", "epoch": 1, "iter": 18800, "lr": 0.00011, "memory": 4602, "data_time": 0.01221, "decode.loss_ce": 0.47852, "decode.acc_seg": 81.81722, "loss": 0.47852, "time": 0.14198} -{"mode": "train", "epoch": 1, "iter": 18850, "lr": 0.00011, "memory": 4602, "data_time": 0.01205, "decode.loss_ce": 0.48408, "decode.acc_seg": 81.85968, "loss": 0.48408, "time": 0.1419} -{"mode": "train", "epoch": 1, "iter": 18900, "lr": 0.00011, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.48585, "decode.acc_seg": 81.78714, "loss": 0.48585, "time": 0.14941} -{"mode": "train", "epoch": 1, "iter": 18950, "lr": 0.00011, "memory": 4602, "data_time": 0.01203, "decode.loss_ce": 0.47704, "decode.acc_seg": 82.27166, "loss": 0.47704, "time": 0.1414} -{"mode": "train", "epoch": 1, "iter": 19000, "lr": 0.00011, "memory": 4602, "data_time": 0.01235, "decode.loss_ce": 0.48159, "decode.acc_seg": 82.12111, "loss": 0.48159, "time": 0.14375} -{"mode": "train", "epoch": 1, "iter": 19050, "lr": 0.00011, "memory": 4602, "data_time": 0.01235, "decode.loss_ce": 0.49394, "decode.acc_seg": 81.61588, "loss": 0.49394, "time": 0.14279} -{"mode": "train", "epoch": 1, "iter": 19100, "lr": 0.00011, "memory": 4602, "data_time": 0.0129, "decode.loss_ce": 0.46313, "decode.acc_seg": 82.45619, "loss": 0.46313, "time": 0.13981} -{"mode": "train", "epoch": 1, "iter": 19150, "lr": 0.00011, "memory": 4602, "data_time": 0.01311, "decode.loss_ce": 0.47475, "decode.acc_seg": 82.40076, "loss": 0.47475, "time": 0.14268} -{"mode": "train", "epoch": 1, "iter": 19200, "lr": 0.00011, "memory": 4602, "data_time": 0.01199, "decode.loss_ce": 0.4977, "decode.acc_seg": 80.94914, "loss": 0.4977, "time": 0.14207} -{"mode": "train", "epoch": 1, "iter": 19250, "lr": 0.00011, "memory": 4602, "data_time": 0.01254, "decode.loss_ce": 0.48262, "decode.acc_seg": 82.10675, "loss": 0.48262, "time": 0.13946} -{"mode": "train", "epoch": 1, "iter": 19300, "lr": 0.00011, "memory": 4602, 
"data_time": 0.01231, "decode.loss_ce": 0.51461, "decode.acc_seg": 80.89516, "loss": 0.51461, "time": 0.14124} -{"mode": "train", "epoch": 1, "iter": 19350, "lr": 0.00011, "memory": 4602, "data_time": 0.01226, "decode.loss_ce": 0.48637, "decode.acc_seg": 81.41101, "loss": 0.48637, "time": 0.14151} -{"mode": "train", "epoch": 1, "iter": 19400, "lr": 0.00011, "memory": 4602, "data_time": 0.01254, "decode.loss_ce": 0.47747, "decode.acc_seg": 82.16857, "loss": 0.47747, "time": 0.14082} -{"mode": "train", "epoch": 1, "iter": 19450, "lr": 0.00011, "memory": 4602, "data_time": 0.01305, "decode.loss_ce": 0.47016, "decode.acc_seg": 82.18272, "loss": 0.47016, "time": 0.14148} -{"mode": "train", "epoch": 1, "iter": 19500, "lr": 0.00011, "memory": 4602, "data_time": 0.0126, "decode.loss_ce": 0.49096, "decode.acc_seg": 81.56574, "loss": 0.49096, "time": 0.14028} -{"mode": "train", "epoch": 1, "iter": 19550, "lr": 0.00011, "memory": 4602, "data_time": 0.01232, "decode.loss_ce": 0.5, "decode.acc_seg": 81.35707, "loss": 0.5, "time": 0.14406} -{"mode": "train", "epoch": 1, "iter": 19600, "lr": 0.00011, "memory": 4602, "data_time": 0.01202, "decode.loss_ce": 0.50144, "decode.acc_seg": 81.04257, "loss": 0.50144, "time": 0.1458} -{"mode": "train", "epoch": 1, "iter": 19650, "lr": 0.00011, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.48706, "decode.acc_seg": 81.46769, "loss": 0.48706, "time": 0.14165} -{"mode": "train", "epoch": 1, "iter": 19700, "lr": 0.00011, "memory": 4602, "data_time": 0.01183, "decode.loss_ce": 0.47746, "decode.acc_seg": 82.50332, "loss": 0.47746, "time": 0.13973} -{"mode": "train", "epoch": 1, "iter": 19750, "lr": 0.00011, "memory": 4602, "data_time": 0.01284, "decode.loss_ce": 0.47626, "decode.acc_seg": 81.83138, "loss": 0.47626, "time": 0.14371} -{"mode": "train", "epoch": 1, "iter": 19800, "lr": 0.00011, "memory": 4602, "data_time": 0.01228, "decode.loss_ce": 0.46863, "decode.acc_seg": 82.26815, "loss": 0.46863, "time": 0.14184} -{"mode": "train", "epoch": 1, "iter": 19850, "lr": 0.00011, "memory": 4602, "data_time": 0.01256, "decode.loss_ce": 0.48256, "decode.acc_seg": 82.02814, "loss": 0.48256, "time": 0.14223} -{"mode": "train", "epoch": 1, "iter": 19900, "lr": 0.00011, "memory": 4602, "data_time": 0.01288, "decode.loss_ce": 0.47024, "decode.acc_seg": 82.1241, "loss": 0.47024, "time": 0.13907} -{"mode": "train", "epoch": 1, "iter": 19950, "lr": 0.00011, "memory": 4602, "data_time": 0.01172, "decode.loss_ce": 0.46429, "decode.acc_seg": 82.60383, "loss": 0.46429, "time": 0.14158} -{"mode": "train", "epoch": 1, "iter": 20000, "lr": 0.00011, "memory": 4602, "data_time": 0.01201, "decode.loss_ce": 0.46395, "decode.acc_seg": 82.12672, "loss": 0.46395, "time": 0.16198} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00011, "aAcc": 0.7851, "mIoU": 0.3797, "mAcc": 0.4889, "IoU.wall": 0.7114, "IoU.building": 0.792, "IoU.sky": 0.936, "IoU.floor": 0.7648, "IoU.tree": 0.6862, "IoU.ceiling": 0.7839, "IoU.road": 0.784, "IoU.bed ": 0.8307, "IoU.windowpane": 0.5474, "IoU.grass": 0.6845, "IoU.cabinet": 0.5072, "IoU.sidewalk": 0.5758, "IoU.person": 0.7633, "IoU.earth": 0.3147, "IoU.door": 0.3185, "IoU.table": 0.4668, "IoU.mountain": 0.5521, "IoU.plant": 0.4553, "IoU.curtain": 0.6431, "IoU.chair": 0.4595, "IoU.car": 0.7863, "IoU.water": 0.4432, "IoU.painting": 0.6411, "IoU.sofa": 0.5445, "IoU.shelf": 0.3641, "IoU.house": 0.3842, "IoU.sea": 0.4674, "IoU.mirror": 0.4234, "IoU.rug": 0.4507, "IoU.field": 0.315, "IoU.armchair": 0.304, "IoU.seat": 0.5276, "IoU.fence": 0.3218, "IoU.desk": 
0.3805, "IoU.rock": 0.2948, "IoU.wardrobe": 0.4498, "IoU.lamp": 0.5143, "IoU.bathtub": 0.5652, "IoU.railing": 0.275, "IoU.cushion": 0.3961, "IoU.base": 0.1687, "IoU.box": 0.1816, "IoU.column": 0.3852, "IoU.signboard": 0.2971, "IoU.chest of drawers": 0.3374, "IoU.counter": 0.2225, "IoU.sand": 0.3364, "IoU.sink": 0.6043, "IoU.skyscraper": 0.3764, "IoU.fireplace": 0.5671, "IoU.refrigerator": 0.5832, "IoU.grandstand": 0.3622, "IoU.path": 0.1602, "IoU.stairs": 0.301, "IoU.runway": 0.6997, "IoU.case": 0.404, "IoU.pool table": 0.906, "IoU.pillow": 0.4675, "IoU.screen door": 0.4782, "IoU.stairway": 0.248, "IoU.river": 0.1135, "IoU.bridge": 0.2854, "IoU.bookcase": 0.283, "IoU.blind": 0.3609, "IoU.coffee table": 0.4647, "IoU.toilet": 0.7826, "IoU.flower": 0.3411, "IoU.book": 0.4397, "IoU.hill": 0.0483, "IoU.bench": 0.3577, "IoU.countertop": 0.4583, "IoU.stove": 0.5741, "IoU.palm": 0.4317, "IoU.kitchen island": 0.1843, "IoU.computer": 0.5181, "IoU.swivel chair": 0.3438, "IoU.boat": 0.4331, "IoU.bar": 0.3113, "IoU.arcade machine": 0.1996, "IoU.hovel": 0.3264, "IoU.bus": 0.7725, "IoU.towel": 0.4434, "IoU.light": 0.3437, "IoU.truck": 0.1007, "IoU.tower": 0.2683, "IoU.chandelier": 0.5758, "IoU.awning": 0.1132, "IoU.streetlight": 0.1508, "IoU.booth": 0.2643, "IoU.television receiver": 0.5872, "IoU.airplane": 0.5279, "IoU.dirt track": 0.0124, "IoU.apparel": 0.348, "IoU.pole": 0.1242, "IoU.land": 0.0366, "IoU.bannister": 0.077, "IoU.escalator": 0.4149, "IoU.ottoman": 0.2571, "IoU.bottle": 0.1891, "IoU.buffet": 0.345, "IoU.poster": 0.1083, "IoU.stage": 0.0758, "IoU.van": 0.3545, "IoU.ship": 0.3194, "IoU.fountain": 0.18, "IoU.conveyer belt": 0.433, "IoU.canopy": 0.1137, "IoU.washer": 0.6111, "IoU.plaything": 0.1843, "IoU.swimming pool": 0.1933, "IoU.stool": 0.1917, "IoU.barrel": 0.2966, "IoU.basket": 0.207, "IoU.waterfall": 0.6448, "IoU.tent": 0.9192, "IoU.bag": 0.0775, "IoU.minibike": 0.5868, "IoU.cradle": 0.7599, "IoU.oven": 0.2352, "IoU.ball": 0.3807, "IoU.food": 0.4905, "IoU.step": 0.0241, "IoU.tank": 0.2355, "IoU.trade name": 0.0962, "IoU.microwave": 0.335, "IoU.pot": 0.2045, "IoU.animal": 0.5219, "IoU.bicycle": 0.4597, "IoU.lake": 0.0208, "IoU.dishwasher": 0.4362, "IoU.screen": 0.6493, "IoU.blanket": 0.0518, "IoU.sculpture": 0.3234, "IoU.hood": 0.3444, "IoU.sconce": 0.222, "IoU.vase": 0.2163, "IoU.traffic light": 0.2056, "IoU.tray": 0.012, "IoU.ashcan": 0.3115, "IoU.fan": 0.4723, "IoU.pier": 0.2084, "IoU.crt screen": 0.0133, "IoU.plate": 0.304, "IoU.monitor": 0.0707, "IoU.bulletin board": 0.2383, "IoU.shower": 0.0, "IoU.radiator": 0.4615, "IoU.glass": 0.0711, "IoU.clock": 0.1184, "IoU.flag": 0.2427, "Acc.wall": 0.8616, "Acc.building": 0.923, "Acc.sky": 0.9688, "Acc.floor": 0.8745, "Acc.tree": 0.8523, "Acc.ceiling": 0.9065, "Acc.road": 0.8811, "Acc.bed ": 0.9213, "Acc.windowpane": 0.6929, "Acc.grass": 0.8494, "Acc.cabinet": 0.6585, "Acc.sidewalk": 0.7667, "Acc.person": 0.896, "Acc.earth": 0.4283, "Acc.door": 0.3928, "Acc.table": 0.645, "Acc.mountain": 0.7341, "Acc.plant": 0.5452, "Acc.curtain": 0.7922, "Acc.chair": 0.594, "Acc.car": 0.9059, "Acc.water": 0.6103, "Acc.painting": 0.7734, "Acc.sofa": 0.7803, "Acc.shelf": 0.5364, "Acc.house": 0.4889, "Acc.sea": 0.6595, "Acc.mirror": 0.482, "Acc.rug": 0.5075, "Acc.field": 0.3994, "Acc.armchair": 0.4752, "Acc.seat": 0.7286, "Acc.fence": 0.4753, "Acc.desk": 0.5167, "Acc.rock": 0.4618, "Acc.wardrobe": 0.7074, "Acc.lamp": 0.7316, "Acc.bathtub": 0.7012, "Acc.railing": 0.3399, "Acc.cushion": 0.4836, "Acc.base": 0.2283, "Acc.box": 0.2626, "Acc.column": 0.4797, 
"Acc.signboard": 0.3749, "Acc.chest of drawers": 0.5809, "Acc.counter": 0.3716, "Acc.sand": 0.435, "Acc.sink": 0.7106, "Acc.skyscraper": 0.4511, "Acc.fireplace": 0.769, "Acc.refrigerator": 0.7418, "Acc.grandstand": 0.654, "Acc.path": 0.2516, "Acc.stairs": 0.346, "Acc.runway": 0.9155, "Acc.case": 0.6196, "Acc.pool table": 0.9527, "Acc.pillow": 0.6012, "Acc.screen door": 0.6713, "Acc.stairway": 0.3216, "Acc.river": 0.352, "Acc.bridge": 0.4284, "Acc.bookcase": 0.3884, "Acc.blind": 0.4308, "Acc.coffee table": 0.7481, "Acc.toilet": 0.8915, "Acc.flower": 0.5423, "Acc.book": 0.6045, "Acc.hill": 0.0778, "Acc.bench": 0.4903, "Acc.countertop": 0.63, "Acc.stove": 0.6934, "Acc.palm": 0.5495, "Acc.kitchen island": 0.4129, "Acc.computer": 0.5744, "Acc.swivel chair": 0.4908, "Acc.boat": 0.602, "Acc.bar": 0.3722, "Acc.arcade machine": 0.2204, "Acc.hovel": 0.474, "Acc.bus": 0.8922, "Acc.towel": 0.6662, "Acc.light": 0.3711, "Acc.truck": 0.1376, "Acc.tower": 0.384, "Acc.chandelier": 0.7036, "Acc.awning": 0.1209, "Acc.streetlight": 0.1869, "Acc.booth": 0.2688, "Acc.television receiver": 0.7293, "Acc.airplane": 0.6272, "Acc.dirt track": 0.0483, "Acc.apparel": 0.451, "Acc.pole": 0.1705, "Acc.land": 0.0835, "Acc.bannister": 0.1404, "Acc.escalator": 0.5242, "Acc.ottoman": 0.3234, "Acc.bottle": 0.2257, "Acc.buffet": 0.4045, "Acc.poster": 0.1364, "Acc.stage": 0.1643, "Acc.van": 0.4258, "Acc.ship": 0.4122, "Acc.fountain": 0.2057, "Acc.conveyer belt": 0.8059, "Acc.canopy": 0.1249, "Acc.washer": 0.6902, "Acc.plaything": 0.2405, "Acc.swimming pool": 0.2835, "Acc.stool": 0.2224, "Acc.barrel": 0.7243, "Acc.basket": 0.3145, "Acc.waterfall": 0.7278, "Acc.tent": 0.9863, "Acc.bag": 0.0845, "Acc.minibike": 0.7118, "Acc.cradle": 0.8971, "Acc.oven": 0.3158, "Acc.ball": 0.4481, "Acc.food": 0.5948, "Acc.step": 0.0294, "Acc.tank": 0.3298, "Acc.trade name": 0.1003, "Acc.microwave": 0.3675, "Acc.pot": 0.2299, "Acc.animal": 0.5486, "Acc.bicycle": 0.6126, "Acc.lake": 0.0209, "Acc.dishwasher": 0.6559, "Acc.screen": 0.7893, "Acc.blanket": 0.0625, "Acc.sculpture": 0.458, "Acc.hood": 0.4266, "Acc.sconce": 0.2636, "Acc.vase": 0.3116, "Acc.traffic light": 0.3111, "Acc.tray": 0.015, "Acc.ashcan": 0.4371, "Acc.fan": 0.5472, "Acc.pier": 0.395, "Acc.crt screen": 0.0365, "Acc.plate": 0.3809, "Acc.monitor": 0.0891, "Acc.bulletin board": 0.3125, "Acc.shower": 0.0, "Acc.radiator": 0.4972, "Acc.glass": 0.0776, "Acc.clock": 0.137, "Acc.flag": 0.2607} -{"mode": "train", "epoch": 1, "iter": 20050, "lr": 0.00011, "memory": 4602, "data_time": 0.47663, "decode.loss_ce": 0.50659, "decode.acc_seg": 81.15093, "loss": 0.50659, "time": 0.61473} -{"mode": "train", "epoch": 1, "iter": 20100, "lr": 0.00011, "memory": 4602, "data_time": 0.01221, "decode.loss_ce": 0.47631, "decode.acc_seg": 81.85072, "loss": 0.47631, "time": 0.14378} -{"mode": "train", "epoch": 1, "iter": 20150, "lr": 0.00011, "memory": 4602, "data_time": 0.01278, "decode.loss_ce": 0.47073, "decode.acc_seg": 82.3371, "loss": 0.47073, "time": 0.14144} -{"mode": "train", "epoch": 1, "iter": 20200, "lr": 0.00011, "memory": 4602, "data_time": 0.01213, "decode.loss_ce": 0.47696, "decode.acc_seg": 81.97405, "loss": 0.47696, "time": 0.14286} -{"mode": "train", "epoch": 1, "iter": 20250, "lr": 0.00011, "memory": 4602, "data_time": 0.013, "decode.loss_ce": 0.47115, "decode.acc_seg": 82.01398, "loss": 0.47115, "time": 0.14204} -{"mode": "train", "epoch": 1, "iter": 20300, "lr": 0.00011, "memory": 4602, "data_time": 0.01263, "decode.loss_ce": 0.45992, "decode.acc_seg": 82.40914, "loss": 0.45992, "time": 
0.14265} -{"mode": "train", "epoch": 1, "iter": 20350, "lr": 0.00011, "memory": 4602, "data_time": 0.01239, "decode.loss_ce": 0.47828, "decode.acc_seg": 81.88384, "loss": 0.47828, "time": 0.14174} -{"mode": "train", "epoch": 1, "iter": 20400, "lr": 0.00011, "memory": 4602, "data_time": 0.01249, "decode.loss_ce": 0.47663, "decode.acc_seg": 82.2093, "loss": 0.47663, "time": 0.14327} -{"mode": "train", "epoch": 1, "iter": 20450, "lr": 0.00011, "memory": 4602, "data_time": 0.01316, "decode.loss_ce": 0.46984, "decode.acc_seg": 82.14418, "loss": 0.46984, "time": 0.1451} -{"mode": "train", "epoch": 1, "iter": 20500, "lr": 0.00011, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.47142, "decode.acc_seg": 82.06169, "loss": 0.47142, "time": 0.14216} -{"mode": "train", "epoch": 1, "iter": 20550, "lr": 0.00011, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.46022, "decode.acc_seg": 82.41024, "loss": 0.46022, "time": 0.13946} -{"mode": "train", "epoch": 1, "iter": 20600, "lr": 0.0001, "memory": 4602, "data_time": 0.01355, "decode.loss_ce": 0.48308, "decode.acc_seg": 81.96888, "loss": 0.48308, "time": 0.14096} -{"mode": "train", "epoch": 1, "iter": 20650, "lr": 0.0001, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.46372, "decode.acc_seg": 82.72042, "loss": 0.46372, "time": 0.14602} -{"mode": "train", "epoch": 1, "iter": 20700, "lr": 0.0001, "memory": 4602, "data_time": 0.01208, "decode.loss_ce": 0.48697, "decode.acc_seg": 81.78955, "loss": 0.48697, "time": 0.14169} -{"mode": "train", "epoch": 1, "iter": 20750, "lr": 0.0001, "memory": 4602, "data_time": 0.01211, "decode.loss_ce": 0.48674, "decode.acc_seg": 81.32403, "loss": 0.48674, "time": 0.14377} -{"mode": "train", "epoch": 1, "iter": 20800, "lr": 0.0001, "memory": 4602, "data_time": 0.01289, "decode.loss_ce": 0.47289, "decode.acc_seg": 82.41613, "loss": 0.47289, "time": 0.14282} -{"mode": "train", "epoch": 1, "iter": 20850, "lr": 0.0001, "memory": 4602, "data_time": 0.01199, "decode.loss_ce": 0.46693, "decode.acc_seg": 82.40634, "loss": 0.46693, "time": 0.1417} -{"mode": "train", "epoch": 1, "iter": 20900, "lr": 0.0001, "memory": 4602, "data_time": 0.01273, "decode.loss_ce": 0.47158, "decode.acc_seg": 82.36839, "loss": 0.47158, "time": 0.14403} -{"mode": "train", "epoch": 1, "iter": 20950, "lr": 0.0001, "memory": 4602, "data_time": 0.01298, "decode.loss_ce": 0.46147, "decode.acc_seg": 82.39719, "loss": 0.46147, "time": 0.14175} -{"mode": "train", "epoch": 1, "iter": 21000, "lr": 0.0001, "memory": 4602, "data_time": 0.01228, "decode.loss_ce": 0.46884, "decode.acc_seg": 82.56946, "loss": 0.46884, "time": 0.14196} -{"mode": "train", "epoch": 1, "iter": 21050, "lr": 0.0001, "memory": 4602, "data_time": 0.01253, "decode.loss_ce": 0.46493, "decode.acc_seg": 82.4389, "loss": 0.46493, "time": 0.14015} -{"mode": "train", "epoch": 1, "iter": 21100, "lr": 0.0001, "memory": 4602, "data_time": 0.01315, "decode.loss_ce": 0.47488, "decode.acc_seg": 82.15842, "loss": 0.47488, "time": 0.14398} -{"mode": "train", "epoch": 1, "iter": 21150, "lr": 0.0001, "memory": 4602, "data_time": 0.0134, "decode.loss_ce": 0.47793, "decode.acc_seg": 82.05584, "loss": 0.47793, "time": 0.1425} -{"mode": "train", "epoch": 1, "iter": 21200, "lr": 0.0001, "memory": 4602, "data_time": 0.01285, "decode.loss_ce": 0.46865, "decode.acc_seg": 82.1098, "loss": 0.46865, "time": 0.1444} -{"mode": "train", "epoch": 1, "iter": 21250, "lr": 0.0001, "memory": 4602, "data_time": 0.012, "decode.loss_ce": 0.47407, "decode.acc_seg": 82.11259, "loss": 0.47407, "time": 
0.14538} -{"mode": "train", "epoch": 1, "iter": 21300, "lr": 0.0001, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.45401, "decode.acc_seg": 82.66159, "loss": 0.45401, "time": 0.14064} -{"mode": "train", "epoch": 1, "iter": 21350, "lr": 0.0001, "memory": 4602, "data_time": 0.01286, "decode.loss_ce": 0.47043, "decode.acc_seg": 82.3938, "loss": 0.47043, "time": 0.14336} -{"mode": "train", "epoch": 1, "iter": 21400, "lr": 0.0001, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.47276, "decode.acc_seg": 82.01788, "loss": 0.47276, "time": 0.1393} -{"mode": "train", "epoch": 1, "iter": 21450, "lr": 0.0001, "memory": 4602, "data_time": 0.01211, "decode.loss_ce": 0.4663, "decode.acc_seg": 82.39868, "loss": 0.4663, "time": 0.14349} -{"mode": "train", "epoch": 1, "iter": 21500, "lr": 0.0001, "memory": 4602, "data_time": 0.01283, "decode.loss_ce": 0.46919, "decode.acc_seg": 82.68879, "loss": 0.46919, "time": 0.1427} -{"mode": "train", "epoch": 1, "iter": 21550, "lr": 0.0001, "memory": 4602, "data_time": 0.01223, "decode.loss_ce": 0.45818, "decode.acc_seg": 82.85777, "loss": 0.45818, "time": 0.14548} -{"mode": "train", "epoch": 1, "iter": 21600, "lr": 0.0001, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.46157, "decode.acc_seg": 82.3918, "loss": 0.46157, "time": 0.13898} -{"mode": "train", "epoch": 1, "iter": 21650, "lr": 0.0001, "memory": 4602, "data_time": 0.01276, "decode.loss_ce": 0.45975, "decode.acc_seg": 82.32684, "loss": 0.45975, "time": 0.14331} -{"mode": "train", "epoch": 1, "iter": 21700, "lr": 0.0001, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.4592, "decode.acc_seg": 82.77917, "loss": 0.4592, "time": 0.14134} -{"mode": "train", "epoch": 1, "iter": 21750, "lr": 0.0001, "memory": 4602, "data_time": 0.01216, "decode.loss_ce": 0.44522, "decode.acc_seg": 83.04419, "loss": 0.44522, "time": 0.14252} -{"mode": "train", "epoch": 1, "iter": 21800, "lr": 0.0001, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.4645, "decode.acc_seg": 82.48687, "loss": 0.4645, "time": 0.14596} -{"mode": "train", "epoch": 1, "iter": 21850, "lr": 0.0001, "memory": 4602, "data_time": 0.01225, "decode.loss_ce": 0.45753, "decode.acc_seg": 82.78297, "loss": 0.45753, "time": 0.14106} -{"mode": "train", "epoch": 1, "iter": 21900, "lr": 0.0001, "memory": 4602, "data_time": 0.01278, "decode.loss_ce": 0.47175, "decode.acc_seg": 82.08443, "loss": 0.47175, "time": 0.14116} -{"mode": "train", "epoch": 1, "iter": 21950, "lr": 0.0001, "memory": 4602, "data_time": 0.01223, "decode.loss_ce": 0.46017, "decode.acc_seg": 82.26392, "loss": 0.46017, "time": 0.1401} -{"mode": "train", "epoch": 1, "iter": 22000, "lr": 0.0001, "memory": 4602, "data_time": 0.01257, "decode.loss_ce": 0.46025, "decode.acc_seg": 82.31968, "loss": 0.46025, "time": 0.14056} -{"mode": "train", "epoch": 1, "iter": 22050, "lr": 0.0001, "memory": 4602, "data_time": 0.01243, "decode.loss_ce": 0.46647, "decode.acc_seg": 82.57355, "loss": 0.46647, "time": 0.14297} -{"mode": "train", "epoch": 1, "iter": 22100, "lr": 0.0001, "memory": 4602, "data_time": 0.01221, "decode.loss_ce": 0.46361, "decode.acc_seg": 82.96163, "loss": 0.46361, "time": 0.14186} -{"mode": "train", "epoch": 1, "iter": 22150, "lr": 0.0001, "memory": 4602, "data_time": 0.01334, "decode.loss_ce": 0.46142, "decode.acc_seg": 82.7134, "loss": 0.46142, "time": 0.14106} -{"mode": "train", "epoch": 1, "iter": 22200, "lr": 0.0001, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.46181, "decode.acc_seg": 82.77244, "loss": 0.46181, "time": 0.1404} 
-{"mode": "train", "epoch": 1, "iter": 22250, "lr": 0.0001, "memory": 4602, "data_time": 0.0127, "decode.loss_ce": 0.44714, "decode.acc_seg": 82.99975, "loss": 0.44714, "time": 0.1393} -{"mode": "train", "epoch": 1, "iter": 22300, "lr": 0.0001, "memory": 4602, "data_time": 0.01198, "decode.loss_ce": 0.44819, "decode.acc_seg": 82.98709, "loss": 0.44819, "time": 0.13932} -{"mode": "train", "epoch": 1, "iter": 22350, "lr": 0.0001, "memory": 4602, "data_time": 0.01292, "decode.loss_ce": 0.42939, "decode.acc_seg": 83.58517, "loss": 0.42939, "time": 0.14353} -{"mode": "train", "epoch": 1, "iter": 22400, "lr": 0.0001, "memory": 4602, "data_time": 0.01263, "decode.loss_ce": 0.45465, "decode.acc_seg": 82.83576, "loss": 0.45465, "time": 0.15089} -{"mode": "train", "epoch": 1, "iter": 22450, "lr": 0.0001, "memory": 4602, "data_time": 0.01212, "decode.loss_ce": 0.45889, "decode.acc_seg": 82.66167, "loss": 0.45889, "time": 0.13833} -{"mode": "train", "epoch": 1, "iter": 22500, "lr": 0.0001, "memory": 4602, "data_time": 0.01249, "decode.loss_ce": 0.46177, "decode.acc_seg": 82.79787, "loss": 0.46177, "time": 0.14148} -{"mode": "train", "epoch": 1, "iter": 22550, "lr": 0.0001, "memory": 4602, "data_time": 0.01252, "decode.loss_ce": 0.45262, "decode.acc_seg": 82.85059, "loss": 0.45262, "time": 0.14296} -{"mode": "train", "epoch": 1, "iter": 22600, "lr": 0.0001, "memory": 4602, "data_time": 0.01209, "decode.loss_ce": 0.44709, "decode.acc_seg": 83.24435, "loss": 0.44709, "time": 0.14167} -{"mode": "train", "epoch": 1, "iter": 22650, "lr": 9e-05, "memory": 4602, "data_time": 0.01203, "decode.loss_ce": 0.44284, "decode.acc_seg": 83.46179, "loss": 0.44284, "time": 0.14031} -{"mode": "train", "epoch": 1, "iter": 22700, "lr": 9e-05, "memory": 4602, "data_time": 0.01209, "decode.loss_ce": 0.4341, "decode.acc_seg": 83.25463, "loss": 0.4341, "time": 0.14419} -{"mode": "train", "epoch": 1, "iter": 22750, "lr": 9e-05, "memory": 4602, "data_time": 0.01222, "decode.loss_ce": 0.43457, "decode.acc_seg": 83.63783, "loss": 0.43457, "time": 0.14116} -{"mode": "train", "epoch": 1, "iter": 22800, "lr": 9e-05, "memory": 4602, "data_time": 0.01255, "decode.loss_ce": 0.44856, "decode.acc_seg": 82.85742, "loss": 0.44856, "time": 0.13976} -{"mode": "train", "epoch": 1, "iter": 22850, "lr": 9e-05, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.44334, "decode.acc_seg": 83.30707, "loss": 0.44334, "time": 0.14249} -{"mode": "train", "epoch": 1, "iter": 22900, "lr": 9e-05, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.44636, "decode.acc_seg": 83.16964, "loss": 0.44636, "time": 0.14075} -{"mode": "train", "epoch": 1, "iter": 22950, "lr": 9e-05, "memory": 4602, "data_time": 0.01191, "decode.loss_ce": 0.44837, "decode.acc_seg": 82.69708, "loss": 0.44837, "time": 0.14502} -{"mode": "train", "epoch": 1, "iter": 23000, "lr": 9e-05, "memory": 4602, "data_time": 0.01201, "decode.loss_ce": 0.45855, "decode.acc_seg": 82.86427, "loss": 0.45855, "time": 0.14157} -{"mode": "train", "epoch": 1, "iter": 23050, "lr": 9e-05, "memory": 4602, "data_time": 0.01278, "decode.loss_ce": 0.45812, "decode.acc_seg": 82.58007, "loss": 0.45812, "time": 0.14072} -{"mode": "train", "epoch": 1, "iter": 23100, "lr": 9e-05, "memory": 4602, "data_time": 0.01217, "decode.loss_ce": 0.43387, "decode.acc_seg": 83.61105, "loss": 0.43387, "time": 0.13994} -{"mode": "train", "epoch": 1, "iter": 23150, "lr": 9e-05, "memory": 4602, "data_time": 0.01241, "decode.loss_ce": 0.43007, "decode.acc_seg": 83.60638, "loss": 0.43007, "time": 0.14125} -{"mode": 
"train", "epoch": 1, "iter": 23200, "lr": 9e-05, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.43699, "decode.acc_seg": 83.23795, "loss": 0.43699, "time": 0.14283} -{"mode": "train", "epoch": 1, "iter": 23250, "lr": 9e-05, "memory": 4602, "data_time": 0.01222, "decode.loss_ce": 0.43431, "decode.acc_seg": 83.73365, "loss": 0.43431, "time": 0.14019} -{"mode": "train", "epoch": 1, "iter": 23300, "lr": 9e-05, "memory": 4602, "data_time": 0.01404, "decode.loss_ce": 0.43752, "decode.acc_seg": 83.15406, "loss": 0.43752, "time": 0.14197} -{"mode": "train", "epoch": 1, "iter": 23350, "lr": 9e-05, "memory": 4602, "data_time": 0.0121, "decode.loss_ce": 0.4476, "decode.acc_seg": 83.19378, "loss": 0.4476, "time": 0.14137} -{"mode": "train", "epoch": 1, "iter": 23400, "lr": 9e-05, "memory": 4602, "data_time": 0.01263, "decode.loss_ce": 0.43188, "decode.acc_seg": 84.0012, "loss": 0.43188, "time": 0.14579} -{"mode": "train", "epoch": 1, "iter": 23450, "lr": 9e-05, "memory": 4602, "data_time": 0.0121, "decode.loss_ce": 0.45935, "decode.acc_seg": 82.85472, "loss": 0.45935, "time": 0.14204} -{"mode": "train", "epoch": 1, "iter": 23500, "lr": 9e-05, "memory": 4602, "data_time": 0.01192, "decode.loss_ce": 0.44959, "decode.acc_seg": 83.0564, "loss": 0.44959, "time": 0.13902} -{"mode": "train", "epoch": 1, "iter": 23550, "lr": 9e-05, "memory": 4602, "data_time": 0.01203, "decode.loss_ce": 0.4464, "decode.acc_seg": 82.95284, "loss": 0.4464, "time": 0.14995} -{"mode": "train", "epoch": 1, "iter": 23600, "lr": 9e-05, "memory": 4602, "data_time": 0.01222, "decode.loss_ce": 0.4245, "decode.acc_seg": 83.61387, "loss": 0.4245, "time": 0.14218} -{"mode": "train", "epoch": 1, "iter": 23650, "lr": 9e-05, "memory": 4602, "data_time": 0.01293, "decode.loss_ce": 0.44547, "decode.acc_seg": 83.16137, "loss": 0.44547, "time": 0.14051} -{"mode": "train", "epoch": 1, "iter": 23700, "lr": 9e-05, "memory": 4602, "data_time": 0.01266, "decode.loss_ce": 0.42425, "decode.acc_seg": 84.16718, "loss": 0.42425, "time": 0.14184} -{"mode": "train", "epoch": 1, "iter": 23750, "lr": 9e-05, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.44155, "decode.acc_seg": 83.25876, "loss": 0.44155, "time": 0.14048} -{"mode": "train", "epoch": 1, "iter": 23800, "lr": 9e-05, "memory": 4602, "data_time": 0.01276, "decode.loss_ce": 0.41259, "decode.acc_seg": 83.8225, "loss": 0.41259, "time": 0.14199} -{"mode": "train", "epoch": 1, "iter": 23850, "lr": 9e-05, "memory": 4602, "data_time": 0.01275, "decode.loss_ce": 0.43512, "decode.acc_seg": 83.59613, "loss": 0.43512, "time": 0.1425} -{"mode": "train", "epoch": 1, "iter": 23900, "lr": 9e-05, "memory": 4602, "data_time": 0.01256, "decode.loss_ce": 0.45178, "decode.acc_seg": 83.06699, "loss": 0.45178, "time": 0.14269} -{"mode": "train", "epoch": 1, "iter": 23950, "lr": 9e-05, "memory": 4602, "data_time": 0.013, "decode.loss_ce": 0.4391, "decode.acc_seg": 83.48491, "loss": 0.4391, "time": 0.14256} -{"mode": "train", "epoch": 1, "iter": 24000, "lr": 9e-05, "memory": 4602, "data_time": 0.01151, "decode.loss_ce": 0.42868, "decode.acc_seg": 83.68824, "loss": 0.42868, "time": 0.15419} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 9e-05, "aAcc": 0.7875, "mIoU": 0.386, "mAcc": 0.4951, "IoU.wall": 0.7121, "IoU.building": 0.7976, "IoU.sky": 0.934, "IoU.floor": 0.762, "IoU.tree": 0.705, "IoU.ceiling": 0.7798, "IoU.road": 0.7898, "IoU.bed ": 0.8332, "IoU.windowpane": 0.5402, "IoU.grass": 0.658, "IoU.cabinet": 0.5215, "IoU.sidewalk": 0.5811, "IoU.person": 0.7652, "IoU.earth": 0.3781, "IoU.door": 
0.3488, "IoU.table": 0.4772, "IoU.mountain": 0.5504, "IoU.plant": 0.4692, "IoU.curtain": 0.6287, "IoU.chair": 0.4675, "IoU.car": 0.8024, "IoU.water": 0.4436, "IoU.painting": 0.6506, "IoU.sofa": 0.5315, "IoU.shelf": 0.3392, "IoU.house": 0.3143, "IoU.sea": 0.4759, "IoU.mirror": 0.4573, "IoU.rug": 0.4812, "IoU.field": 0.2926, "IoU.armchair": 0.2883, "IoU.seat": 0.5363, "IoU.fence": 0.3374, "IoU.desk": 0.3835, "IoU.rock": 0.297, "IoU.wardrobe": 0.4346, "IoU.lamp": 0.5254, "IoU.bathtub": 0.5753, "IoU.railing": 0.2907, "IoU.cushion": 0.428, "IoU.base": 0.1875, "IoU.box": 0.1615, "IoU.column": 0.343, "IoU.signboard": 0.2934, "IoU.chest of drawers": 0.3419, "IoU.counter": 0.1841, "IoU.sand": 0.3343, "IoU.sink": 0.5922, "IoU.skyscraper": 0.4886, "IoU.fireplace": 0.6248, "IoU.refrigerator": 0.611, "IoU.grandstand": 0.4144, "IoU.path": 0.1824, "IoU.stairs": 0.3042, "IoU.runway": 0.7094, "IoU.case": 0.4192, "IoU.pool table": 0.8925, "IoU.pillow": 0.4539, "IoU.screen door": 0.3899, "IoU.stairway": 0.2279, "IoU.river": 0.1136, "IoU.bridge": 0.2527, "IoU.bookcase": 0.2713, "IoU.blind": 0.3274, "IoU.coffee table": 0.4552, "IoU.toilet": 0.7894, "IoU.flower": 0.3452, "IoU.book": 0.435, "IoU.hill": 0.0452, "IoU.bench": 0.3416, "IoU.countertop": 0.4366, "IoU.stove": 0.5551, "IoU.palm": 0.4426, "IoU.kitchen island": 0.2216, "IoU.computer": 0.5194, "IoU.swivel chair": 0.3577, "IoU.boat": 0.4315, "IoU.bar": 0.2293, "IoU.arcade machine": 0.3855, "IoU.hovel": 0.2029, "IoU.bus": 0.773, "IoU.towel": 0.4714, "IoU.light": 0.3939, "IoU.truck": 0.0965, "IoU.tower": 0.3632, "IoU.chandelier": 0.5848, "IoU.awning": 0.183, "IoU.streetlight": 0.1479, "IoU.booth": 0.2733, "IoU.television receiver": 0.4788, "IoU.airplane": 0.4463, "IoU.dirt track": 0.0223, "IoU.apparel": 0.3338, "IoU.pole": 0.0976, "IoU.land": 0.0115, "IoU.bannister": 0.1095, "IoU.escalator": 0.3774, "IoU.ottoman": 0.2757, "IoU.bottle": 0.1523, "IoU.buffet": 0.3474, "IoU.poster": 0.1685, "IoU.stage": 0.1063, "IoU.van": 0.3761, "IoU.ship": 0.4292, "IoU.fountain": 0.1919, "IoU.conveyer belt": 0.599, "IoU.canopy": 0.1432, "IoU.washer": 0.6197, "IoU.plaything": 0.2256, "IoU.swimming pool": 0.2709, "IoU.stool": 0.2368, "IoU.barrel": 0.1666, "IoU.basket": 0.197, "IoU.waterfall": 0.436, "IoU.tent": 0.8428, "IoU.bag": 0.0691, "IoU.minibike": 0.525, "IoU.cradle": 0.7536, "IoU.oven": 0.2717, "IoU.ball": 0.4424, "IoU.food": 0.514, "IoU.step": 0.0791, "IoU.tank": 0.2676, "IoU.trade name": 0.1302, "IoU.microwave": 0.3113, "IoU.pot": 0.2293, "IoU.animal": 0.5438, "IoU.bicycle": 0.4314, "IoU.lake": 0.1361, "IoU.dishwasher": 0.4847, "IoU.screen": 0.6177, "IoU.blanket": 0.0215, "IoU.sculpture": 0.2995, "IoU.hood": 0.3739, "IoU.sconce": 0.2487, "IoU.vase": 0.2341, "IoU.traffic light": 0.2071, "IoU.tray": 0.0308, "IoU.ashcan": 0.3157, "IoU.fan": 0.5014, "IoU.pier": 0.4117, "IoU.crt screen": 0.0049, "IoU.plate": 0.3818, "IoU.monitor": 0.0834, "IoU.bulletin board": 0.2065, "IoU.shower": 0.0, "IoU.radiator": 0.4144, "IoU.glass": 0.072, "IoU.clock": 0.1938, "IoU.flag": 0.2472, "Acc.wall": 0.862, "Acc.building": 0.9242, "Acc.sky": 0.9695, "Acc.floor": 0.8788, "Acc.tree": 0.8548, "Acc.ceiling": 0.8949, "Acc.road": 0.8589, "Acc.bed ": 0.9337, "Acc.windowpane": 0.7273, "Acc.grass": 0.8066, "Acc.cabinet": 0.6991, "Acc.sidewalk": 0.771, "Acc.person": 0.9008, "Acc.earth": 0.5669, "Acc.door": 0.463, "Acc.table": 0.6435, "Acc.mountain": 0.7393, "Acc.plant": 0.5544, "Acc.curtain": 0.7208, "Acc.chair": 0.5831, "Acc.car": 0.8953, "Acc.water": 0.5721, "Acc.painting": 0.811, "Acc.sofa": 0.7691, 
"Acc.shelf": 0.4814, "Acc.house": 0.3821, "Acc.sea": 0.6666, "Acc.mirror": 0.5604, "Acc.rug": 0.6254, "Acc.field": 0.3745, "Acc.armchair": 0.4419, "Acc.seat": 0.7169, "Acc.fence": 0.4467, "Acc.desk": 0.5985, "Acc.rock": 0.3974, "Acc.wardrobe": 0.6567, "Acc.lamp": 0.6991, "Acc.bathtub": 0.6905, "Acc.railing": 0.4125, "Acc.cushion": 0.5493, "Acc.base": 0.3732, "Acc.box": 0.1953, "Acc.column": 0.4056, "Acc.signboard": 0.3733, "Acc.chest of drawers": 0.5251, "Acc.counter": 0.2488, "Acc.sand": 0.5644, "Acc.sink": 0.7121, "Acc.skyscraper": 0.6268, "Acc.fireplace": 0.7673, "Acc.refrigerator": 0.7452, "Acc.grandstand": 0.6837, "Acc.path": 0.2487, "Acc.stairs": 0.3718, "Acc.runway": 0.9537, "Acc.case": 0.5437, "Acc.pool table": 0.9626, "Acc.pillow": 0.5635, "Acc.screen door": 0.485, "Acc.stairway": 0.2872, "Acc.river": 0.2777, "Acc.bridge": 0.3375, "Acc.bookcase": 0.3623, "Acc.blind": 0.4103, "Acc.coffee table": 0.7206, "Acc.toilet": 0.8858, "Acc.flower": 0.4876, "Acc.book": 0.6298, "Acc.hill": 0.082, "Acc.bench": 0.4315, "Acc.countertop": 0.5598, "Acc.stove": 0.6459, "Acc.palm": 0.5343, "Acc.kitchen island": 0.4279, "Acc.computer": 0.596, "Acc.swivel chair": 0.4893, "Acc.boat": 0.525, "Acc.bar": 0.2975, "Acc.arcade machine": 0.4626, "Acc.hovel": 0.2862, "Acc.bus": 0.8934, "Acc.towel": 0.5935, "Acc.light": 0.4485, "Acc.truck": 0.1203, "Acc.tower": 0.5438, "Acc.chandelier": 0.7167, "Acc.awning": 0.2081, "Acc.streetlight": 0.1823, "Acc.booth": 0.3213, "Acc.television receiver": 0.6924, "Acc.airplane": 0.6461, "Acc.dirt track": 0.054, "Acc.apparel": 0.467, "Acc.pole": 0.117, "Acc.land": 0.0261, "Acc.bannister": 0.1583, "Acc.escalator": 0.4587, "Acc.ottoman": 0.3841, "Acc.bottle": 0.18, "Acc.buffet": 0.4239, "Acc.poster": 0.2202, "Acc.stage": 0.1527, "Acc.van": 0.4971, "Acc.ship": 0.6402, "Acc.fountain": 0.2124, "Acc.conveyer belt": 0.7857, "Acc.canopy": 0.1884, "Acc.washer": 0.7016, "Acc.plaything": 0.3171, "Acc.swimming pool": 0.4438, "Acc.stool": 0.3106, "Acc.barrel": 0.5556, "Acc.basket": 0.2986, "Acc.waterfall": 0.4718, "Acc.tent": 0.9902, "Acc.bag": 0.0775, "Acc.minibike": 0.6233, "Acc.cradle": 0.9585, "Acc.oven": 0.3821, "Acc.ball": 0.5659, "Acc.food": 0.6783, "Acc.step": 0.1004, "Acc.tank": 0.2868, "Acc.trade name": 0.1377, "Acc.microwave": 0.3298, "Acc.pot": 0.2684, "Acc.animal": 0.5813, "Acc.bicycle": 0.6458, "Acc.lake": 0.1382, "Acc.dishwasher": 0.6093, "Acc.screen": 0.8343, "Acc.blanket": 0.0242, "Acc.sculpture": 0.4463, "Acc.hood": 0.4524, "Acc.sconce": 0.291, "Acc.vase": 0.3293, "Acc.traffic light": 0.3173, "Acc.tray": 0.0425, "Acc.ashcan": 0.4578, "Acc.fan": 0.621, "Acc.pier": 0.5873, "Acc.crt screen": 0.0116, "Acc.plate": 0.4885, "Acc.monitor": 0.0962, "Acc.bulletin board": 0.2564, "Acc.shower": 0.0, "Acc.radiator": 0.4444, "Acc.glass": 0.077, "Acc.clock": 0.22, "Acc.flag": 0.2745} -{"mode": "train", "epoch": 1, "iter": 24050, "lr": 9e-05, "memory": 4602, "data_time": 0.48908, "decode.loss_ce": 0.47089, "decode.acc_seg": 82.16605, "loss": 0.47089, "time": 0.61832} -{"mode": "train", "epoch": 1, "iter": 24100, "lr": 9e-05, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.43392, "decode.acc_seg": 83.42938, "loss": 0.43392, "time": 0.14511} -{"mode": "train", "epoch": 1, "iter": 24150, "lr": 9e-05, "memory": 4602, "data_time": 0.01222, "decode.loss_ce": 0.42474, "decode.acc_seg": 84.08973, "loss": 0.42474, "time": 0.14264} -{"mode": "train", "epoch": 1, "iter": 24200, "lr": 9e-05, "memory": 4602, "data_time": 0.01184, "decode.loss_ce": 0.43838, "decode.acc_seg": 83.32925, "loss": 
0.43838, "time": 0.14055} -{"mode": "train", "epoch": 1, "iter": 24250, "lr": 9e-05, "memory": 4602, "data_time": 0.01241, "decode.loss_ce": 0.43367, "decode.acc_seg": 83.39007, "loss": 0.43367, "time": 0.14067} -{"mode": "train", "epoch": 1, "iter": 24300, "lr": 9e-05, "memory": 4602, "data_time": 0.01276, "decode.loss_ce": 0.42414, "decode.acc_seg": 83.87768, "loss": 0.42414, "time": 0.14134} -{"mode": "train", "epoch": 1, "iter": 24350, "lr": 9e-05, "memory": 4602, "data_time": 0.01296, "decode.loss_ce": 0.44282, "decode.acc_seg": 83.06774, "loss": 0.44282, "time": 0.14133} -{"mode": "train", "epoch": 1, "iter": 24400, "lr": 9e-05, "memory": 4602, "data_time": 0.01256, "decode.loss_ce": 0.43606, "decode.acc_seg": 83.39555, "loss": 0.43606, "time": 0.143} -{"mode": "train", "epoch": 1, "iter": 24450, "lr": 9e-05, "memory": 4602, "data_time": 0.0127, "decode.loss_ce": 0.42375, "decode.acc_seg": 83.95825, "loss": 0.42375, "time": 0.14361} -{"mode": "train", "epoch": 1, "iter": 24500, "lr": 9e-05, "memory": 4602, "data_time": 0.01249, "decode.loss_ce": 0.42887, "decode.acc_seg": 83.89301, "loss": 0.42887, "time": 0.14307} -{"mode": "train", "epoch": 1, "iter": 24550, "lr": 9e-05, "memory": 4602, "data_time": 0.01266, "decode.loss_ce": 0.42619, "decode.acc_seg": 83.38777, "loss": 0.42619, "time": 0.13995} -{"mode": "train", "epoch": 1, "iter": 24600, "lr": 9e-05, "memory": 4602, "data_time": 0.01264, "decode.loss_ce": 0.4247, "decode.acc_seg": 83.65722, "loss": 0.4247, "time": 0.14399} -{"mode": "train", "epoch": 1, "iter": 24650, "lr": 9e-05, "memory": 4602, "data_time": 0.01266, "decode.loss_ce": 0.42107, "decode.acc_seg": 84.06251, "loss": 0.42107, "time": 0.14221} -{"mode": "train", "epoch": 1, "iter": 24700, "lr": 8e-05, "memory": 4602, "data_time": 0.01206, "decode.loss_ce": 0.42748, "decode.acc_seg": 83.76853, "loss": 0.42748, "time": 0.14396} -{"mode": "train", "epoch": 1, "iter": 24750, "lr": 8e-05, "memory": 4602, "data_time": 0.01582, "decode.loss_ce": 0.42709, "decode.acc_seg": 83.82235, "loss": 0.42709, "time": 0.14428} -{"mode": "train", "epoch": 1, "iter": 24800, "lr": 8e-05, "memory": 4602, "data_time": 0.01209, "decode.loss_ce": 0.42561, "decode.acc_seg": 83.81321, "loss": 0.42561, "time": 0.14333} -{"mode": "train", "epoch": 1, "iter": 24850, "lr": 8e-05, "memory": 4602, "data_time": 0.01258, "decode.loss_ce": 0.43508, "decode.acc_seg": 83.50562, "loss": 0.43508, "time": 0.14236} -{"mode": "train", "epoch": 1, "iter": 24900, "lr": 8e-05, "memory": 4602, "data_time": 0.01228, "decode.loss_ce": 0.43567, "decode.acc_seg": 83.40023, "loss": 0.43567, "time": 0.142} -{"mode": "train", "epoch": 1, "iter": 24950, "lr": 8e-05, "memory": 4602, "data_time": 0.01259, "decode.loss_ce": 0.40521, "decode.acc_seg": 84.63938, "loss": 0.40521, "time": 0.14303} -{"mode": "train", "epoch": 1, "iter": 25000, "lr": 8e-05, "memory": 4602, "data_time": 0.01239, "decode.loss_ce": 0.4193, "decode.acc_seg": 83.68122, "loss": 0.4193, "time": 0.13989} -{"mode": "train", "epoch": 1, "iter": 25050, "lr": 8e-05, "memory": 4602, "data_time": 0.01239, "decode.loss_ce": 0.43087, "decode.acc_seg": 83.47243, "loss": 0.43087, "time": 0.14104} -{"mode": "train", "epoch": 1, "iter": 25100, "lr": 8e-05, "memory": 4602, "data_time": 0.0125, "decode.loss_ce": 0.42367, "decode.acc_seg": 84.12254, "loss": 0.42367, "time": 0.14281} -{"mode": "train", "epoch": 1, "iter": 25150, "lr": 8e-05, "memory": 4602, "data_time": 0.01227, "decode.loss_ce": 0.42917, "decode.acc_seg": 83.3837, "loss": 0.42917, "time": 0.14049} 
-{"mode": "train", "epoch": 1, "iter": 25200, "lr": 8e-05, "memory": 4602, "data_time": 0.01291, "decode.loss_ce": 0.43314, "decode.acc_seg": 83.77394, "loss": 0.43314, "time": 0.14956} -{"mode": "train", "epoch": 1, "iter": 25250, "lr": 8e-05, "memory": 4602, "data_time": 0.012, "decode.loss_ce": 0.42921, "decode.acc_seg": 83.6154, "loss": 0.42921, "time": 0.14107} -{"mode": "train", "epoch": 1, "iter": 25300, "lr": 8e-05, "memory": 4602, "data_time": 0.01195, "decode.loss_ce": 0.4158, "decode.acc_seg": 84.24436, "loss": 0.4158, "time": 0.14361} -{"mode": "train", "epoch": 1, "iter": 25350, "lr": 8e-05, "memory": 4602, "data_time": 0.01249, "decode.loss_ce": 0.39835, "decode.acc_seg": 84.67082, "loss": 0.39835, "time": 0.14307} -{"mode": "train", "epoch": 1, "iter": 25400, "lr": 8e-05, "memory": 4602, "data_time": 0.01191, "decode.loss_ce": 0.43165, "decode.acc_seg": 83.81385, "loss": 0.43165, "time": 0.14618} -{"mode": "train", "epoch": 1, "iter": 25450, "lr": 8e-05, "memory": 4602, "data_time": 0.01234, "decode.loss_ce": 0.44132, "decode.acc_seg": 83.14413, "loss": 0.44132, "time": 0.14111} -{"mode": "train", "epoch": 1, "iter": 25500, "lr": 8e-05, "memory": 4602, "data_time": 0.01294, "decode.loss_ce": 0.42226, "decode.acc_seg": 84.11336, "loss": 0.42226, "time": 0.14154} -{"mode": "train", "epoch": 1, "iter": 25550, "lr": 8e-05, "memory": 4602, "data_time": 0.01168, "decode.loss_ce": 0.4146, "decode.acc_seg": 84.32527, "loss": 0.4146, "time": 0.14447} -{"mode": "train", "epoch": 1, "iter": 25600, "lr": 8e-05, "memory": 4602, "data_time": 0.01222, "decode.loss_ce": 0.41941, "decode.acc_seg": 84.3005, "loss": 0.41941, "time": 0.14129} -{"mode": "train", "epoch": 1, "iter": 25650, "lr": 8e-05, "memory": 4602, "data_time": 0.01262, "decode.loss_ce": 0.40516, "decode.acc_seg": 84.35129, "loss": 0.40516, "time": 0.14242} -{"mode": "train", "epoch": 1, "iter": 25700, "lr": 8e-05, "memory": 4602, "data_time": 0.01223, "decode.loss_ce": 0.4232, "decode.acc_seg": 83.92989, "loss": 0.4232, "time": 0.13953} -{"mode": "train", "epoch": 1, "iter": 25750, "lr": 8e-05, "memory": 4602, "data_time": 0.01173, "decode.loss_ce": 0.42192, "decode.acc_seg": 83.93107, "loss": 0.42192, "time": 0.14015} -{"mode": "train", "epoch": 1, "iter": 25800, "lr": 8e-05, "memory": 4602, "data_time": 0.01216, "decode.loss_ce": 0.4072, "decode.acc_seg": 84.69787, "loss": 0.4072, "time": 0.14114} -{"mode": "train", "epoch": 1, "iter": 25850, "lr": 8e-05, "memory": 4602, "data_time": 0.01193, "decode.loss_ce": 0.42351, "decode.acc_seg": 84.146, "loss": 0.42351, "time": 0.14044} -{"mode": "train", "epoch": 1, "iter": 25900, "lr": 8e-05, "memory": 4602, "data_time": 0.01194, "decode.loss_ce": 0.41838, "decode.acc_seg": 83.93478, "loss": 0.41838, "time": 0.15132} -{"mode": "train", "epoch": 1, "iter": 25950, "lr": 8e-05, "memory": 4602, "data_time": 0.01264, "decode.loss_ce": 0.42952, "decode.acc_seg": 83.78484, "loss": 0.42952, "time": 0.14273} -{"mode": "train", "epoch": 1, "iter": 26000, "lr": 8e-05, "memory": 4602, "data_time": 0.01202, "decode.loss_ce": 0.41761, "decode.acc_seg": 83.99302, "loss": 0.41761, "time": 0.14126} -{"mode": "train", "epoch": 1, "iter": 26050, "lr": 8e-05, "memory": 4602, "data_time": 0.01217, "decode.loss_ce": 0.43187, "decode.acc_seg": 83.44458, "loss": 0.43187, "time": 0.14221} -{"mode": "train", "epoch": 1, "iter": 26100, "lr": 8e-05, "memory": 4602, "data_time": 0.01222, "decode.loss_ce": 0.42651, "decode.acc_seg": 83.87458, "loss": 0.42651, "time": 0.13975} -{"mode": "train", "epoch": 1, 
"iter": 26150, "lr": 8e-05, "memory": 4602, "data_time": 0.01333, "decode.loss_ce": 0.43725, "decode.acc_seg": 83.37231, "loss": 0.43725, "time": 0.14474} -{"mode": "train", "epoch": 1, "iter": 26200, "lr": 8e-05, "memory": 4602, "data_time": 0.01246, "decode.loss_ce": 0.41934, "decode.acc_seg": 83.96846, "loss": 0.41934, "time": 0.14426} -{"mode": "train", "epoch": 1, "iter": 26250, "lr": 8e-05, "memory": 4602, "data_time": 0.01307, "decode.loss_ce": 0.40903, "decode.acc_seg": 84.4417, "loss": 0.40903, "time": 0.14082} -{"mode": "train", "epoch": 1, "iter": 26300, "lr": 8e-05, "memory": 4602, "data_time": 0.01188, "decode.loss_ce": 0.41142, "decode.acc_seg": 84.1675, "loss": 0.41142, "time": 0.14122} -{"mode": "train", "epoch": 1, "iter": 26350, "lr": 8e-05, "memory": 4602, "data_time": 0.01177, "decode.loss_ce": 0.41304, "decode.acc_seg": 84.30118, "loss": 0.41304, "time": 0.14342} -{"mode": "train", "epoch": 1, "iter": 26400, "lr": 8e-05, "memory": 4602, "data_time": 0.01247, "decode.loss_ce": 0.40672, "decode.acc_seg": 84.43371, "loss": 0.40672, "time": 0.14365} -{"mode": "train", "epoch": 1, "iter": 26450, "lr": 8e-05, "memory": 4602, "data_time": 0.01229, "decode.loss_ce": 0.42155, "decode.acc_seg": 84.27987, "loss": 0.42155, "time": 0.14251} -{"mode": "train", "epoch": 1, "iter": 26500, "lr": 8e-05, "memory": 4602, "data_time": 0.01154, "decode.loss_ce": 0.39678, "decode.acc_seg": 84.59799, "loss": 0.39678, "time": 0.14012} -{"mode": "train", "epoch": 1, "iter": 26550, "lr": 8e-05, "memory": 4602, "data_time": 0.01226, "decode.loss_ce": 0.40757, "decode.acc_seg": 84.27671, "loss": 0.40757, "time": 0.14149} -{"mode": "train", "epoch": 1, "iter": 26600, "lr": 8e-05, "memory": 4602, "data_time": 0.01389, "decode.loss_ce": 0.41646, "decode.acc_seg": 84.10667, "loss": 0.41646, "time": 0.14307} -{"mode": "train", "epoch": 1, "iter": 26650, "lr": 8e-05, "memory": 4602, "data_time": 0.01159, "decode.loss_ce": 0.41587, "decode.acc_seg": 84.08665, "loss": 0.41587, "time": 0.14229} -{"mode": "train", "epoch": 1, "iter": 26700, "lr": 7e-05, "memory": 4602, "data_time": 0.01208, "decode.loss_ce": 0.41233, "decode.acc_seg": 84.05629, "loss": 0.41233, "time": 0.14267} -{"mode": "train", "epoch": 1, "iter": 26750, "lr": 7e-05, "memory": 4602, "data_time": 0.0121, "decode.loss_ce": 0.42257, "decode.acc_seg": 83.99666, "loss": 0.42257, "time": 0.13919} -{"mode": "train", "epoch": 1, "iter": 26800, "lr": 7e-05, "memory": 4602, "data_time": 0.01246, "decode.loss_ce": 0.41526, "decode.acc_seg": 84.04873, "loss": 0.41526, "time": 0.14122} -{"mode": "train", "epoch": 1, "iter": 26850, "lr": 7e-05, "memory": 4602, "data_time": 0.01255, "decode.loss_ce": 0.41596, "decode.acc_seg": 84.23003, "loss": 0.41596, "time": 0.1398} -{"mode": "train", "epoch": 1, "iter": 26900, "lr": 7e-05, "memory": 4602, "data_time": 0.01368, "decode.loss_ce": 0.43419, "decode.acc_seg": 83.4912, "loss": 0.43419, "time": 0.14047} -{"mode": "train", "epoch": 1, "iter": 26950, "lr": 7e-05, "memory": 4602, "data_time": 0.01241, "decode.loss_ce": 0.40494, "decode.acc_seg": 84.6101, "loss": 0.40494, "time": 0.14191} -{"mode": "train", "epoch": 1, "iter": 27000, "lr": 7e-05, "memory": 4602, "data_time": 0.01258, "decode.loss_ce": 0.40624, "decode.acc_seg": 84.23755, "loss": 0.40624, "time": 0.14232} -{"mode": "train", "epoch": 1, "iter": 27050, "lr": 7e-05, "memory": 4602, "data_time": 0.01255, "decode.loss_ce": 0.41071, "decode.acc_seg": 84.25094, "loss": 0.41071, "time": 0.14909} -{"mode": "train", "epoch": 1, "iter": 27100, "lr": 
7e-05, "memory": 4602, "data_time": 0.01297, "decode.loss_ce": 0.42559, "decode.acc_seg": 84.03773, "loss": 0.42559, "time": 0.14334} -{"mode": "train", "epoch": 1, "iter": 27150, "lr": 7e-05, "memory": 4602, "data_time": 0.01239, "decode.loss_ce": 0.39668, "decode.acc_seg": 85.03677, "loss": 0.39668, "time": 0.14392} -{"mode": "train", "epoch": 1, "iter": 27200, "lr": 7e-05, "memory": 4602, "data_time": 0.01278, "decode.loss_ce": 0.40276, "decode.acc_seg": 84.68597, "loss": 0.40276, "time": 0.13976} -{"mode": "train", "epoch": 1, "iter": 27250, "lr": 7e-05, "memory": 4602, "data_time": 0.01247, "decode.loss_ce": 0.41493, "decode.acc_seg": 84.16292, "loss": 0.41493, "time": 0.14414} -{"mode": "train", "epoch": 1, "iter": 27300, "lr": 7e-05, "memory": 4602, "data_time": 0.01307, "decode.loss_ce": 0.40032, "decode.acc_seg": 84.67066, "loss": 0.40032, "time": 0.14166} -{"mode": "train", "epoch": 1, "iter": 27350, "lr": 7e-05, "memory": 4602, "data_time": 0.01356, "decode.loss_ce": 0.40922, "decode.acc_seg": 84.28643, "loss": 0.40922, "time": 0.14272} -{"mode": "train", "epoch": 1, "iter": 27400, "lr": 7e-05, "memory": 4602, "data_time": 0.01482, "decode.loss_ce": 0.40152, "decode.acc_seg": 84.72892, "loss": 0.40152, "time": 0.14125} -{"mode": "train", "epoch": 1, "iter": 27450, "lr": 7e-05, "memory": 4602, "data_time": 0.01355, "decode.loss_ce": 0.40659, "decode.acc_seg": 84.52205, "loss": 0.40659, "time": 0.1429} -{"mode": "train", "epoch": 1, "iter": 27500, "lr": 7e-05, "memory": 4602, "data_time": 0.01318, "decode.loss_ce": 0.42856, "decode.acc_seg": 83.4758, "loss": 0.42856, "time": 0.14373} -{"mode": "train", "epoch": 1, "iter": 27550, "lr": 7e-05, "memory": 4602, "data_time": 0.01281, "decode.loss_ce": 0.42356, "decode.acc_seg": 84.00221, "loss": 0.42356, "time": 0.1484} -{"mode": "train", "epoch": 1, "iter": 27600, "lr": 7e-05, "memory": 4602, "data_time": 0.01256, "decode.loss_ce": 0.40136, "decode.acc_seg": 84.50404, "loss": 0.40136, "time": 0.14425} -{"mode": "train", "epoch": 1, "iter": 27650, "lr": 7e-05, "memory": 4602, "data_time": 0.01251, "decode.loss_ce": 0.41431, "decode.acc_seg": 84.28119, "loss": 0.41431, "time": 0.14233} -{"mode": "train", "epoch": 1, "iter": 27700, "lr": 7e-05, "memory": 4602, "data_time": 0.01192, "decode.loss_ce": 0.41332, "decode.acc_seg": 84.41524, "loss": 0.41332, "time": 0.14165} -{"mode": "train", "epoch": 1, "iter": 27750, "lr": 7e-05, "memory": 4602, "data_time": 0.01205, "decode.loss_ce": 0.40083, "decode.acc_seg": 84.54781, "loss": 0.40083, "time": 0.14156} -{"mode": "train", "epoch": 1, "iter": 27800, "lr": 7e-05, "memory": 4602, "data_time": 0.01212, "decode.loss_ce": 0.39598, "decode.acc_seg": 84.81713, "loss": 0.39598, "time": 0.14014} -{"mode": "train", "epoch": 1, "iter": 27850, "lr": 7e-05, "memory": 4602, "data_time": 0.01293, "decode.loss_ce": 0.39262, "decode.acc_seg": 84.96679, "loss": 0.39262, "time": 0.14306} -{"mode": "train", "epoch": 1, "iter": 27900, "lr": 7e-05, "memory": 4602, "data_time": 0.01257, "decode.loss_ce": 0.40533, "decode.acc_seg": 84.58112, "loss": 0.40533, "time": 0.1421} -{"mode": "train", "epoch": 1, "iter": 27950, "lr": 7e-05, "memory": 4602, "data_time": 0.01216, "decode.loss_ce": 0.40435, "decode.acc_seg": 84.45112, "loss": 0.40435, "time": 0.14} -{"mode": "train", "epoch": 1, "iter": 28000, "lr": 7e-05, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.40199, "decode.acc_seg": 84.70446, "loss": 0.40199, "time": 0.15868} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 7e-05, "aAcc": 0.7906, "mIoU": 
0.3967, "mAcc": 0.5131, "IoU.wall": 0.721, "IoU.building": 0.8027, "IoU.sky": 0.9371, "IoU.floor": 0.7705, "IoU.tree": 0.7093, "IoU.ceiling": 0.7861, "IoU.road": 0.794, "IoU.bed ": 0.8403, "IoU.windowpane": 0.5523, "IoU.grass": 0.6571, "IoU.cabinet": 0.5177, "IoU.sidewalk": 0.5783, "IoU.person": 0.762, "IoU.earth": 0.3335, "IoU.door": 0.3582, "IoU.table": 0.467, "IoU.mountain": 0.5457, "IoU.plant": 0.4785, "IoU.curtain": 0.6375, "IoU.chair": 0.4874, "IoU.car": 0.8067, "IoU.water": 0.4707, "IoU.painting": 0.6694, "IoU.sofa": 0.543, "IoU.shelf": 0.3465, "IoU.house": 0.3499, "IoU.sea": 0.4993, "IoU.mirror": 0.4523, "IoU.rug": 0.4893, "IoU.field": 0.3037, "IoU.armchair": 0.3349, "IoU.seat": 0.5282, "IoU.fence": 0.3455, "IoU.desk": 0.3798, "IoU.rock": 0.3317, "IoU.wardrobe": 0.4332, "IoU.lamp": 0.5461, "IoU.bathtub": 0.5976, "IoU.railing": 0.291, "IoU.cushion": 0.4413, "IoU.base": 0.1464, "IoU.box": 0.1751, "IoU.column": 0.3521, "IoU.signboard": 0.3129, "IoU.chest of drawers": 0.3534, "IoU.counter": 0.1765, "IoU.sand": 0.3248, "IoU.sink": 0.6051, "IoU.skyscraper": 0.6212, "IoU.fireplace": 0.6353, "IoU.refrigerator": 0.6233, "IoU.grandstand": 0.3725, "IoU.path": 0.2159, "IoU.stairs": 0.3145, "IoU.runway": 0.7035, "IoU.case": 0.2669, "IoU.pool table": 0.9101, "IoU.pillow": 0.4279, "IoU.screen door": 0.5349, "IoU.stairway": 0.264, "IoU.river": 0.1155, "IoU.bridge": 0.2096, "IoU.bookcase": 0.2842, "IoU.blind": 0.3504, "IoU.coffee table": 0.4478, "IoU.toilet": 0.7907, "IoU.flower": 0.3657, "IoU.book": 0.4441, "IoU.hill": 0.0628, "IoU.bench": 0.3606, "IoU.countertop": 0.4527, "IoU.stove": 0.5903, "IoU.palm": 0.4413, "IoU.kitchen island": 0.2445, "IoU.computer": 0.5337, "IoU.swivel chair": 0.3758, "IoU.boat": 0.5271, "IoU.bar": 0.1853, "IoU.arcade machine": 0.3217, "IoU.hovel": 0.0692, "IoU.bus": 0.7956, "IoU.towel": 0.4396, "IoU.light": 0.4093, "IoU.truck": 0.1685, "IoU.tower": 0.338, "IoU.chandelier": 0.6063, "IoU.awning": 0.1661, "IoU.streetlight": 0.1641, "IoU.booth": 0.2262, "IoU.television receiver": 0.5855, "IoU.airplane": 0.5178, "IoU.dirt track": 0.0244, "IoU.apparel": 0.3303, "IoU.pole": 0.1379, "IoU.land": 0.0132, "IoU.bannister": 0.0912, "IoU.escalator": 0.3219, "IoU.ottoman": 0.3085, "IoU.bottle": 0.2419, "IoU.buffet": 0.4442, "IoU.poster": 0.2384, "IoU.stage": 0.0992, "IoU.van": 0.3404, "IoU.ship": 0.3224, "IoU.fountain": 0.1643, "IoU.conveyer belt": 0.6864, "IoU.canopy": 0.1635, "IoU.washer": 0.6159, "IoU.plaything": 0.1565, "IoU.swimming pool": 0.3121, "IoU.stool": 0.1772, "IoU.barrel": 0.1426, "IoU.basket": 0.1964, "IoU.waterfall": 0.5699, "IoU.tent": 0.9201, "IoU.bag": 0.0887, "IoU.minibike": 0.5566, "IoU.cradle": 0.7478, "IoU.oven": 0.1793, "IoU.ball": 0.4324, "IoU.food": 0.4773, "IoU.step": 0.1192, "IoU.tank": 0.3467, "IoU.trade name": 0.2199, "IoU.microwave": 0.3281, "IoU.pot": 0.2663, "IoU.animal": 0.5331, "IoU.bicycle": 0.4397, "IoU.lake": 0.2479, "IoU.dishwasher": 0.4984, "IoU.screen": 0.622, "IoU.blanket": 0.0126, "IoU.sculpture": 0.3206, "IoU.hood": 0.4017, "IoU.sconce": 0.2559, "IoU.vase": 0.2526, "IoU.traffic light": 0.2047, "IoU.tray": 0.0202, "IoU.ashcan": 0.3196, "IoU.fan": 0.5144, "IoU.pier": 0.4723, "IoU.crt screen": 0.0216, "IoU.plate": 0.3834, "IoU.monitor": 0.1608, "IoU.bulletin board": 0.1685, "IoU.shower": 0.0041, "IoU.radiator": 0.4758, "IoU.glass": 0.086, "IoU.clock": 0.2255, "IoU.flag": 0.2619, "Acc.wall": 0.848, "Acc.building": 0.9155, "Acc.sky": 0.9679, "Acc.floor": 0.8977, "Acc.tree": 0.8785, "Acc.ceiling": 0.8843, "Acc.road": 0.8803, "Acc.bed ": 0.9409, 
"Acc.windowpane": 0.7294, "Acc.grass": 0.8436, "Acc.cabinet": 0.6148, "Acc.sidewalk": 0.7592, "Acc.person": 0.903, "Acc.earth": 0.4566, "Acc.door": 0.4772, "Acc.table": 0.6586, "Acc.mountain": 0.7062, "Acc.plant": 0.6212, "Acc.curtain": 0.8061, "Acc.chair": 0.7004, "Acc.car": 0.911, "Acc.water": 0.6084, "Acc.painting": 0.8127, "Acc.sofa": 0.7052, "Acc.shelf": 0.5854, "Acc.house": 0.4656, "Acc.sea": 0.7111, "Acc.mirror": 0.5094, "Acc.rug": 0.5613, "Acc.field": 0.4114, "Acc.armchair": 0.5255, "Acc.seat": 0.7323, "Acc.fence": 0.4466, "Acc.desk": 0.5991, "Acc.rock": 0.4768, "Acc.wardrobe": 0.6626, "Acc.lamp": 0.6998, "Acc.bathtub": 0.7086, "Acc.railing": 0.4137, "Acc.cushion": 0.6086, "Acc.base": 0.2324, "Acc.box": 0.2154, "Acc.column": 0.4668, "Acc.signboard": 0.4258, "Acc.chest of drawers": 0.6324, "Acc.counter": 0.2921, "Acc.sand": 0.5217, "Acc.sink": 0.7186, "Acc.skyscraper": 0.8218, "Acc.fireplace": 0.8202, "Acc.refrigerator": 0.7827, "Acc.grandstand": 0.6544, "Acc.path": 0.3477, "Acc.stairs": 0.3811, "Acc.runway": 0.9221, "Acc.case": 0.2971, "Acc.pool table": 0.9503, "Acc.pillow": 0.4984, "Acc.screen door": 0.6599, "Acc.stairway": 0.3496, "Acc.river": 0.2612, "Acc.bridge": 0.2603, "Acc.bookcase": 0.3975, "Acc.blind": 0.4279, "Acc.coffee table": 0.7301, "Acc.toilet": 0.9044, "Acc.flower": 0.5314, "Acc.book": 0.618, "Acc.hill": 0.1139, "Acc.bench": 0.4794, "Acc.countertop": 0.6554, "Acc.stove": 0.747, "Acc.palm": 0.5789, "Acc.kitchen island": 0.4665, "Acc.computer": 0.6184, "Acc.swivel chair": 0.5182, "Acc.boat": 0.6472, "Acc.bar": 0.2185, "Acc.arcade machine": 0.3922, "Acc.hovel": 0.0812, "Acc.bus": 0.931, "Acc.towel": 0.5752, "Acc.light": 0.4628, "Acc.truck": 0.1971, "Acc.tower": 0.444, "Acc.chandelier": 0.7451, "Acc.awning": 0.2058, "Acc.streetlight": 0.212, "Acc.booth": 0.2969, "Acc.television receiver": 0.7452, "Acc.airplane": 0.6376, "Acc.dirt track": 0.0888, "Acc.apparel": 0.4472, "Acc.pole": 0.1777, "Acc.land": 0.0262, "Acc.bannister": 0.1592, "Acc.escalator": 0.4405, "Acc.ottoman": 0.4447, "Acc.bottle": 0.3182, "Acc.buffet": 0.5985, "Acc.poster": 0.2727, "Acc.stage": 0.1629, "Acc.van": 0.4046, "Acc.ship": 0.3933, "Acc.fountain": 0.2105, "Acc.conveyer belt": 0.8457, "Acc.canopy": 0.2238, "Acc.washer": 0.6591, "Acc.plaything": 0.2185, "Acc.swimming pool": 0.4965, "Acc.stool": 0.2187, "Acc.barrel": 0.7572, "Acc.basket": 0.2897, "Acc.waterfall": 0.684, "Acc.tent": 0.9799, "Acc.bag": 0.1076, "Acc.minibike": 0.7247, "Acc.cradle": 0.934, "Acc.oven": 0.2079, "Acc.ball": 0.5429, "Acc.food": 0.597, "Acc.step": 0.1473, "Acc.tank": 0.3863, "Acc.trade name": 0.2823, "Acc.microwave": 0.3576, "Acc.pot": 0.3299, "Acc.animal": 0.5543, "Acc.bicycle": 0.6259, "Acc.lake": 0.3245, "Acc.dishwasher": 0.6268, "Acc.screen": 0.813, "Acc.blanket": 0.0143, "Acc.sculpture": 0.5685, "Acc.hood": 0.5025, "Acc.sconce": 0.2889, "Acc.vase": 0.4277, "Acc.traffic light": 0.3074, "Acc.tray": 0.0256, "Acc.ashcan": 0.4363, "Acc.fan": 0.632, "Acc.pier": 0.7088, "Acc.crt screen": 0.0817, "Acc.plate": 0.5163, "Acc.monitor": 0.25, "Acc.bulletin board": 0.2198, "Acc.shower": 0.0144, "Acc.radiator": 0.5141, "Acc.glass": 0.092, "Acc.clock": 0.2563, "Acc.flag": 0.2887} -{"mode": "train", "epoch": 1, "iter": 28050, "lr": 7e-05, "memory": 4602, "data_time": 0.48292, "decode.loss_ce": 0.39455, "decode.acc_seg": 84.85667, "loss": 0.39455, "time": 0.60877} -{"mode": "train", "epoch": 1, "iter": 28100, "lr": 7e-05, "memory": 4602, "data_time": 0.01303, "decode.loss_ce": 0.39615, "decode.acc_seg": 84.898, "loss": 0.39615, "time": 
0.14232} -{"mode": "train", "epoch": 1, "iter": 28150, "lr": 7e-05, "memory": 4602, "data_time": 0.01261, "decode.loss_ce": 0.39694, "decode.acc_seg": 84.73165, "loss": 0.39694, "time": 0.14321} -{"mode": "train", "epoch": 1, "iter": 28200, "lr": 7e-05, "memory": 4602, "data_time": 0.01306, "decode.loss_ce": 0.40929, "decode.acc_seg": 84.12334, "loss": 0.40929, "time": 0.14948} -{"mode": "train", "epoch": 1, "iter": 28250, "lr": 7e-05, "memory": 4602, "data_time": 0.0127, "decode.loss_ce": 0.40965, "decode.acc_seg": 84.6045, "loss": 0.40965, "time": 0.14145} -{"mode": "train", "epoch": 1, "iter": 28300, "lr": 7e-05, "memory": 4602, "data_time": 0.01264, "decode.loss_ce": 0.39052, "decode.acc_seg": 85.04916, "loss": 0.39052, "time": 0.14237} -{"mode": "train", "epoch": 1, "iter": 28350, "lr": 7e-05, "memory": 4602, "data_time": 0.01179, "decode.loss_ce": 0.39749, "decode.acc_seg": 84.73266, "loss": 0.39749, "time": 0.14058} -{"mode": "train", "epoch": 1, "iter": 28400, "lr": 7e-05, "memory": 4602, "data_time": 0.01272, "decode.loss_ce": 0.40967, "decode.acc_seg": 84.51969, "loss": 0.40967, "time": 0.14228} -{"mode": "train", "epoch": 1, "iter": 28450, "lr": 7e-05, "memory": 4602, "data_time": 0.01269, "decode.loss_ce": 0.40586, "decode.acc_seg": 84.47689, "loss": 0.40586, "time": 0.14633} -{"mode": "train", "epoch": 1, "iter": 28500, "lr": 7e-05, "memory": 4602, "data_time": 0.0123, "decode.loss_ce": 0.41446, "decode.acc_seg": 84.55445, "loss": 0.41446, "time": 0.14321} -{"mode": "train", "epoch": 1, "iter": 28550, "lr": 7e-05, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.42339, "decode.acc_seg": 83.99729, "loss": 0.42339, "time": 0.14314} -{"mode": "train", "epoch": 1, "iter": 28600, "lr": 7e-05, "memory": 4602, "data_time": 0.01235, "decode.loss_ce": 0.39431, "decode.acc_seg": 85.07599, "loss": 0.39431, "time": 0.14693} -{"mode": "train", "epoch": 1, "iter": 28650, "lr": 7e-05, "memory": 4602, "data_time": 0.0124, "decode.loss_ce": 0.40496, "decode.acc_seg": 84.358, "loss": 0.40496, "time": 0.14241} -{"mode": "train", "epoch": 1, "iter": 28700, "lr": 6e-05, "memory": 4602, "data_time": 0.01217, "decode.loss_ce": 0.40329, "decode.acc_seg": 84.72952, "loss": 0.40329, "time": 0.14113} -{"mode": "train", "epoch": 1, "iter": 28750, "lr": 6e-05, "memory": 4602, "data_time": 0.01275, "decode.loss_ce": 0.41083, "decode.acc_seg": 84.19822, "loss": 0.41083, "time": 0.14383} -{"mode": "train", "epoch": 1, "iter": 28800, "lr": 6e-05, "memory": 4602, "data_time": 0.01183, "decode.loss_ce": 0.41506, "decode.acc_seg": 83.98956, "loss": 0.41506, "time": 0.14509} -{"mode": "train", "epoch": 1, "iter": 28850, "lr": 6e-05, "memory": 4602, "data_time": 0.01262, "decode.loss_ce": 0.41473, "decode.acc_seg": 84.18779, "loss": 0.41473, "time": 0.14486} -{"mode": "train", "epoch": 1, "iter": 28900, "lr": 6e-05, "memory": 4602, "data_time": 0.012, "decode.loss_ce": 0.38645, "decode.acc_seg": 85.45161, "loss": 0.38645, "time": 0.14148} -{"mode": "train", "epoch": 1, "iter": 28950, "lr": 6e-05, "memory": 4602, "data_time": 0.01326, "decode.loss_ce": 0.38828, "decode.acc_seg": 85.12428, "loss": 0.38828, "time": 0.14158} -{"mode": "train", "epoch": 1, "iter": 29000, "lr": 6e-05, "memory": 4602, "data_time": 0.01219, "decode.loss_ce": 0.40885, "decode.acc_seg": 84.41683, "loss": 0.40885, "time": 0.14162} -{"mode": "train", "epoch": 1, "iter": 29050, "lr": 6e-05, "memory": 4602, "data_time": 0.0128, "decode.loss_ce": 0.40259, "decode.acc_seg": 84.51802, "loss": 0.40259, "time": 0.14028} -{"mode": "train", 
"epoch": 1, "iter": 29100, "lr": 6e-05, "memory": 4602, "data_time": 0.01251, "decode.loss_ce": 0.37948, "decode.acc_seg": 85.41347, "loss": 0.37948, "time": 0.14076} -{"mode": "train", "epoch": 1, "iter": 29150, "lr": 6e-05, "memory": 4602, "data_time": 0.01236, "decode.loss_ce": 0.40765, "decode.acc_seg": 84.33219, "loss": 0.40765, "time": 0.14032} -{"mode": "train", "epoch": 1, "iter": 29200, "lr": 6e-05, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.39839, "decode.acc_seg": 84.852, "loss": 0.39839, "time": 0.14359} -{"mode": "train", "epoch": 1, "iter": 29250, "lr": 6e-05, "memory": 4602, "data_time": 0.01237, "decode.loss_ce": 0.40709, "decode.acc_seg": 84.4616, "loss": 0.40709, "time": 0.14179} -{"mode": "train", "epoch": 1, "iter": 29300, "lr": 6e-05, "memory": 4602, "data_time": 0.01203, "decode.loss_ce": 0.3986, "decode.acc_seg": 84.54873, "loss": 0.3986, "time": 0.14255} -{"mode": "train", "epoch": 1, "iter": 29350, "lr": 6e-05, "memory": 4602, "data_time": 0.01247, "decode.loss_ce": 0.40651, "decode.acc_seg": 84.56371, "loss": 0.40651, "time": 0.14573} -{"mode": "train", "epoch": 1, "iter": 29400, "lr": 6e-05, "memory": 4602, "data_time": 0.01211, "decode.loss_ce": 0.39477, "decode.acc_seg": 84.88568, "loss": 0.39477, "time": 0.14746} -{"mode": "train", "epoch": 1, "iter": 29450, "lr": 6e-05, "memory": 4602, "data_time": 0.01216, "decode.loss_ce": 0.39434, "decode.acc_seg": 84.84767, "loss": 0.39434, "time": 0.14241} -{"mode": "train", "epoch": 1, "iter": 29500, "lr": 6e-05, "memory": 4602, "data_time": 0.01201, "decode.loss_ce": 0.40444, "decode.acc_seg": 84.59982, "loss": 0.40444, "time": 0.1426} -{"mode": "train", "epoch": 1, "iter": 29550, "lr": 6e-05, "memory": 4602, "data_time": 0.01212, "decode.loss_ce": 0.39153, "decode.acc_seg": 85.18692, "loss": 0.39153, "time": 0.14272} -{"mode": "train", "epoch": 1, "iter": 29600, "lr": 6e-05, "memory": 4602, "data_time": 0.01277, "decode.loss_ce": 0.39515, "decode.acc_seg": 85.12372, "loss": 0.39515, "time": 0.14156} -{"mode": "train", "epoch": 1, "iter": 29650, "lr": 6e-05, "memory": 4602, "data_time": 0.01301, "decode.loss_ce": 0.38828, "decode.acc_seg": 84.87737, "loss": 0.38828, "time": 0.14506} -{"mode": "train", "epoch": 1, "iter": 29700, "lr": 6e-05, "memory": 4602, "data_time": 0.01234, "decode.loss_ce": 0.38053, "decode.acc_seg": 85.34863, "loss": 0.38053, "time": 0.14333} -{"mode": "train", "epoch": 1, "iter": 29750, "lr": 6e-05, "memory": 4602, "data_time": 0.01185, "decode.loss_ce": 0.39956, "decode.acc_seg": 84.81924, "loss": 0.39956, "time": 0.14352} -{"mode": "train", "epoch": 1, "iter": 29800, "lr": 6e-05, "memory": 4602, "data_time": 0.01224, "decode.loss_ce": 0.38946, "decode.acc_seg": 85.12009, "loss": 0.38946, "time": 0.14138} -{"mode": "train", "epoch": 1, "iter": 29850, "lr": 6e-05, "memory": 4602, "data_time": 0.01381, "decode.loss_ce": 0.39475, "decode.acc_seg": 84.91708, "loss": 0.39475, "time": 0.14664} -{"mode": "train", "epoch": 1, "iter": 29900, "lr": 6e-05, "memory": 4602, "data_time": 0.01203, "decode.loss_ce": 0.39572, "decode.acc_seg": 84.68237, "loss": 0.39572, "time": 0.14125} -{"mode": "train", "epoch": 1, "iter": 29950, "lr": 6e-05, "memory": 4602, "data_time": 0.01246, "decode.loss_ce": 0.37275, "decode.acc_seg": 85.70852, "loss": 0.37275, "time": 0.14189} -{"mode": "train", "epoch": 1, "iter": 30000, "lr": 6e-05, "memory": 4602, "data_time": 0.01246, "decode.loss_ce": 0.38808, "decode.acc_seg": 85.14246, "loss": 0.38808, "time": 0.14343} -{"mode": "train", "epoch": 1, "iter": 30050, 
"lr": 6e-05, "memory": 4602, "data_time": 0.01241, "decode.loss_ce": 0.37635, "decode.acc_seg": 85.47931, "loss": 0.37635, "time": 0.14138} -{"mode": "train", "epoch": 1, "iter": 30100, "lr": 6e-05, "memory": 4602, "data_time": 0.0117, "decode.loss_ce": 0.38107, "decode.acc_seg": 85.38188, "loss": 0.38107, "time": 0.13995} -{"mode": "train", "epoch": 1, "iter": 30150, "lr": 6e-05, "memory": 4602, "data_time": 0.01258, "decode.loss_ce": 0.40367, "decode.acc_seg": 84.34149, "loss": 0.40367, "time": 0.14028} -{"mode": "train", "epoch": 1, "iter": 30200, "lr": 6e-05, "memory": 4602, "data_time": 0.01205, "decode.loss_ce": 0.38515, "decode.acc_seg": 85.22765, "loss": 0.38515, "time": 0.14237} -{"mode": "train", "epoch": 1, "iter": 30250, "lr": 6e-05, "memory": 4602, "data_time": 0.01329, "decode.loss_ce": 0.39091, "decode.acc_seg": 84.97687, "loss": 0.39091, "time": 0.14132} -{"mode": "train", "epoch": 1, "iter": 30300, "lr": 6e-05, "memory": 4602, "data_time": 0.01219, "decode.loss_ce": 0.38477, "decode.acc_seg": 85.26236, "loss": 0.38477, "time": 0.14344} -{"mode": "train", "epoch": 1, "iter": 30350, "lr": 6e-05, "memory": 4602, "data_time": 0.01225, "decode.loss_ce": 0.38606, "decode.acc_seg": 85.07098, "loss": 0.38606, "time": 0.13953} -{"mode": "train", "epoch": 1, "iter": 30400, "lr": 6e-05, "memory": 4602, "data_time": 0.01228, "decode.loss_ce": 0.38671, "decode.acc_seg": 85.24576, "loss": 0.38671, "time": 0.13985} -{"mode": "train", "epoch": 1, "iter": 30450, "lr": 6e-05, "memory": 4602, "data_time": 0.01222, "decode.loss_ce": 0.3841, "decode.acc_seg": 84.74506, "loss": 0.3841, "time": 0.14149} -{"mode": "train", "epoch": 1, "iter": 30500, "lr": 6e-05, "memory": 4602, "data_time": 0.01234, "decode.loss_ce": 0.38211, "decode.acc_seg": 85.33965, "loss": 0.38211, "time": 0.1421} -{"mode": "train", "epoch": 1, "iter": 30550, "lr": 6e-05, "memory": 4602, "data_time": 0.01211, "decode.loss_ce": 0.38124, "decode.acc_seg": 85.31162, "loss": 0.38124, "time": 0.1495} -{"mode": "train", "epoch": 1, "iter": 30600, "lr": 6e-05, "memory": 4602, "data_time": 0.01186, "decode.loss_ce": 0.38957, "decode.acc_seg": 85.16492, "loss": 0.38957, "time": 0.14481} -{"mode": "train", "epoch": 1, "iter": 30650, "lr": 5e-05, "memory": 4602, "data_time": 0.01202, "decode.loss_ce": 0.36769, "decode.acc_seg": 85.73963, "loss": 0.36769, "time": 0.14141} -{"mode": "train", "epoch": 1, "iter": 30700, "lr": 5e-05, "memory": 4602, "data_time": 0.0127, "decode.loss_ce": 0.38108, "decode.acc_seg": 85.26819, "loss": 0.38108, "time": 0.1426} -{"mode": "train", "epoch": 1, "iter": 30750, "lr": 5e-05, "memory": 4602, "data_time": 0.01216, "decode.loss_ce": 0.39144, "decode.acc_seg": 85.18672, "loss": 0.39144, "time": 0.14167} -{"mode": "train", "epoch": 1, "iter": 30800, "lr": 5e-05, "memory": 4602, "data_time": 0.0121, "decode.loss_ce": 0.37555, "decode.acc_seg": 85.50444, "loss": 0.37555, "time": 0.1405} -{"mode": "train", "epoch": 1, "iter": 30850, "lr": 5e-05, "memory": 4602, "data_time": 0.01285, "decode.loss_ce": 0.39718, "decode.acc_seg": 84.93095, "loss": 0.39718, "time": 0.14399} -{"mode": "train", "epoch": 1, "iter": 30900, "lr": 5e-05, "memory": 4602, "data_time": 0.01216, "decode.loss_ce": 0.39218, "decode.acc_seg": 85.13247, "loss": 0.39218, "time": 0.14435} -{"mode": "train", "epoch": 1, "iter": 30950, "lr": 5e-05, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.38406, "decode.acc_seg": 85.17866, "loss": 0.38406, "time": 0.14055} -{"mode": "train", "epoch": 1, "iter": 31000, "lr": 5e-05, "memory": 4602, 
"data_time": 0.01186, "decode.loss_ce": 0.38662, "decode.acc_seg": 85.04224, "loss": 0.38662, "time": 0.14093} -{"mode": "train", "epoch": 1, "iter": 31050, "lr": 5e-05, "memory": 4602, "data_time": 0.012, "decode.loss_ce": 0.38165, "decode.acc_seg": 85.25031, "loss": 0.38165, "time": 0.14005} -{"mode": "train", "epoch": 1, "iter": 31100, "lr": 5e-05, "memory": 4602, "data_time": 0.01373, "decode.loss_ce": 0.37998, "decode.acc_seg": 85.39104, "loss": 0.37998, "time": 0.14233} -{"mode": "train", "epoch": 1, "iter": 31150, "lr": 5e-05, "memory": 4602, "data_time": 0.01252, "decode.loss_ce": 0.38497, "decode.acc_seg": 85.16399, "loss": 0.38497, "time": 0.14059} -{"mode": "train", "epoch": 1, "iter": 31200, "lr": 5e-05, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.37961, "decode.acc_seg": 85.29421, "loss": 0.37961, "time": 0.14062} -{"mode": "train", "epoch": 1, "iter": 31250, "lr": 5e-05, "memory": 4602, "data_time": 0.01213, "decode.loss_ce": 0.38234, "decode.acc_seg": 85.37842, "loss": 0.38234, "time": 0.14129} -{"mode": "train", "epoch": 1, "iter": 31300, "lr": 5e-05, "memory": 4602, "data_time": 0.01301, "decode.loss_ce": 0.38007, "decode.acc_seg": 85.50524, "loss": 0.38007, "time": 0.14249} -{"mode": "train", "epoch": 1, "iter": 31350, "lr": 5e-05, "memory": 4602, "data_time": 0.01214, "decode.loss_ce": 0.39054, "decode.acc_seg": 84.99976, "loss": 0.39054, "time": 0.14118} -{"mode": "train", "epoch": 1, "iter": 31400, "lr": 5e-05, "memory": 4602, "data_time": 0.01161, "decode.loss_ce": 0.39967, "decode.acc_seg": 84.75335, "loss": 0.39967, "time": 0.14166} -{"mode": "train", "epoch": 1, "iter": 31450, "lr": 5e-05, "memory": 4602, "data_time": 0.01186, "decode.loss_ce": 0.40351, "decode.acc_seg": 84.73746, "loss": 0.40351, "time": 0.14225} -{"mode": "train", "epoch": 1, "iter": 31500, "lr": 5e-05, "memory": 4602, "data_time": 0.01132, "decode.loss_ce": 0.382, "decode.acc_seg": 85.46904, "loss": 0.382, "time": 0.14345} -{"mode": "train", "epoch": 1, "iter": 31550, "lr": 5e-05, "memory": 4602, "data_time": 0.01181, "decode.loss_ce": 0.39039, "decode.acc_seg": 85.06294, "loss": 0.39039, "time": 0.14053} -{"mode": "train", "epoch": 2, "iter": 31600, "lr": 5e-05, "memory": 4602, "data_time": 0.06485, "decode.loss_ce": 0.37525, "decode.acc_seg": 85.62903, "loss": 0.37525, "time": 0.19423} -{"mode": "train", "epoch": 2, "iter": 31650, "lr": 5e-05, "memory": 4602, "data_time": 0.01174, "decode.loss_ce": 0.38406, "decode.acc_seg": 85.42113, "loss": 0.38406, "time": 0.14052} -{"mode": "train", "epoch": 2, "iter": 31700, "lr": 5e-05, "memory": 4602, "data_time": 0.01143, "decode.loss_ce": 0.36682, "decode.acc_seg": 85.79579, "loss": 0.36682, "time": 0.15096} -{"mode": "train", "epoch": 2, "iter": 31750, "lr": 5e-05, "memory": 4602, "data_time": 0.01157, "decode.loss_ce": 0.38825, "decode.acc_seg": 85.03793, "loss": 0.38825, "time": 0.14099} -{"mode": "train", "epoch": 2, "iter": 31800, "lr": 5e-05, "memory": 4602, "data_time": 0.0117, "decode.loss_ce": 0.36869, "decode.acc_seg": 85.98304, "loss": 0.36869, "time": 0.13925} -{"mode": "train", "epoch": 2, "iter": 31850, "lr": 5e-05, "memory": 4602, "data_time": 0.01187, "decode.loss_ce": 0.36784, "decode.acc_seg": 85.69576, "loss": 0.36784, "time": 0.14173} -{"mode": "train", "epoch": 2, "iter": 31900, "lr": 5e-05, "memory": 4602, "data_time": 0.01164, "decode.loss_ce": 0.36123, "decode.acc_seg": 85.97262, "loss": 0.36123, "time": 0.14287} -{"mode": "train", "epoch": 2, "iter": 31950, "lr": 5e-05, "memory": 4602, "data_time": 0.01307, 
"decode.loss_ce": 0.37324, "decode.acc_seg": 85.55468, "loss": 0.37324, "time": 0.1409} -{"mode": "train", "epoch": 2, "iter": 32000, "lr": 5e-05, "memory": 4602, "data_time": 0.01197, "decode.loss_ce": 0.3707, "decode.acc_seg": 85.58148, "loss": 0.3707, "time": 0.1588} -{"mode": "val", "epoch": 2, "iter": 250, "lr": 5e-05, "aAcc": 0.7918, "mIoU": 0.3999, "mAcc": 0.5133, "IoU.wall": 0.7186, "IoU.building": 0.7951, "IoU.sky": 0.9377, "IoU.floor": 0.7745, "IoU.tree": 0.7092, "IoU.ceiling": 0.7895, "IoU.road": 0.7954, "IoU.bed ": 0.8444, "IoU.windowpane": 0.5606, "IoU.grass": 0.6552, "IoU.cabinet": 0.5187, "IoU.sidewalk": 0.5957, "IoU.person": 0.768, "IoU.earth": 0.332, "IoU.door": 0.353, "IoU.table": 0.4872, "IoU.mountain": 0.5478, "IoU.plant": 0.4923, "IoU.curtain": 0.6411, "IoU.chair": 0.4769, "IoU.car": 0.8083, "IoU.water": 0.4816, "IoU.painting": 0.6475, "IoU.sofa": 0.5376, "IoU.shelf": 0.3653, "IoU.house": 0.445, "IoU.sea": 0.58, "IoU.mirror": 0.502, "IoU.rug": 0.4992, "IoU.field": 0.3046, "IoU.armchair": 0.285, "IoU.seat": 0.5234, "IoU.fence": 0.3207, "IoU.desk": 0.3735, "IoU.rock": 0.3237, "IoU.wardrobe": 0.4568, "IoU.lamp": 0.5408, "IoU.bathtub": 0.6113, "IoU.railing": 0.2964, "IoU.cushion": 0.4422, "IoU.base": 0.1224, "IoU.box": 0.1893, "IoU.column": 0.3788, "IoU.signboard": 0.3097, "IoU.chest of drawers": 0.3746, "IoU.counter": 0.189, "IoU.sand": 0.3139, "IoU.sink": 0.5833, "IoU.skyscraper": 0.454, "IoU.fireplace": 0.6013, "IoU.refrigerator": 0.6279, "IoU.grandstand": 0.4569, "IoU.path": 0.2248, "IoU.stairs": 0.2599, "IoU.runway": 0.6921, "IoU.case": 0.3926, "IoU.pool table": 0.913, "IoU.pillow": 0.4779, "IoU.screen door": 0.5447, "IoU.stairway": 0.2014, "IoU.river": 0.1299, "IoU.bridge": 0.2499, "IoU.bookcase": 0.2753, "IoU.blind": 0.3425, "IoU.coffee table": 0.4381, "IoU.toilet": 0.7609, "IoU.flower": 0.346, "IoU.book": 0.437, "IoU.hill": 0.0844, "IoU.bench": 0.3877, "IoU.countertop": 0.4672, "IoU.stove": 0.5962, "IoU.palm": 0.4623, "IoU.kitchen island": 0.1996, "IoU.computer": 0.5264, "IoU.swivel chair": 0.3534, "IoU.boat": 0.5019, "IoU.bar": 0.2566, "IoU.arcade machine": 0.3971, "IoU.hovel": 0.1082, "IoU.bus": 0.7755, "IoU.towel": 0.4897, "IoU.light": 0.3953, "IoU.truck": 0.1663, "IoU.tower": 0.2849, "IoU.chandelier": 0.5889, "IoU.awning": 0.1514, "IoU.streetlight": 0.174, "IoU.booth": 0.2705, "IoU.television receiver": 0.4944, "IoU.airplane": 0.5294, "IoU.dirt track": 0.0373, "IoU.apparel": 0.2665, "IoU.pole": 0.1319, "IoU.land": 0.0085, "IoU.bannister": 0.093, "IoU.escalator": 0.3126, "IoU.ottoman": 0.2715, "IoU.bottle": 0.1864, "IoU.buffet": 0.3727, "IoU.poster": 0.2133, "IoU.stage": 0.1068, "IoU.van": 0.3685, "IoU.ship": 0.2677, "IoU.fountain": 0.203, "IoU.conveyer belt": 0.7046, "IoU.canopy": 0.1587, "IoU.washer": 0.618, "IoU.plaything": 0.2369, "IoU.swimming pool": 0.3414, "IoU.stool": 0.1801, "IoU.barrel": 0.3539, "IoU.basket": 0.2029, "IoU.waterfall": 0.4381, "IoU.tent": 0.9005, "IoU.bag": 0.0831, "IoU.minibike": 0.5562, "IoU.cradle": 0.7356, "IoU.oven": 0.3232, "IoU.ball": 0.3754, "IoU.food": 0.5345, "IoU.step": 0.0731, "IoU.tank": 0.3676, "IoU.trade name": 0.1837, "IoU.microwave": 0.3489, "IoU.pot": 0.2557, "IoU.animal": 0.5325, "IoU.bicycle": 0.4346, "IoU.lake": 0.4494, "IoU.dishwasher": 0.4713, "IoU.screen": 0.6079, "IoU.blanket": 0.0257, "IoU.sculpture": 0.3586, "IoU.hood": 0.4472, "IoU.sconce": 0.2338, "IoU.vase": 0.2341, "IoU.traffic light": 0.2109, "IoU.tray": 0.0438, "IoU.ashcan": 0.3177, "IoU.fan": 0.5112, "IoU.pier": 0.4634, "IoU.crt screen": 0.0002, 
"IoU.plate": 0.3842, "IoU.monitor": 0.1469, "IoU.bulletin board": 0.2247, "IoU.shower": 0.0, "IoU.radiator": 0.4587, "IoU.glass": 0.0781, "IoU.clock": 0.2069, "IoU.flag": 0.2506, "Acc.wall": 0.8585, "Acc.building": 0.9106, "Acc.sky": 0.9692, "Acc.floor": 0.8846, "Acc.tree": 0.8568, "Acc.ceiling": 0.8864, "Acc.road": 0.8791, "Acc.bed ": 0.9366, "Acc.windowpane": 0.7323, "Acc.grass": 0.8075, "Acc.cabinet": 0.6749, "Acc.sidewalk": 0.7603, "Acc.person": 0.9, "Acc.earth": 0.4796, "Acc.door": 0.4781, "Acc.table": 0.653, "Acc.mountain": 0.7076, "Acc.plant": 0.6059, "Acc.curtain": 0.7549, "Acc.chair": 0.6023, "Acc.car": 0.8967, "Acc.water": 0.614, "Acc.painting": 0.8143, "Acc.sofa": 0.8107, "Acc.shelf": 0.5792, "Acc.house": 0.7541, "Acc.sea": 0.8763, "Acc.mirror": 0.6196, "Acc.rug": 0.5953, "Acc.field": 0.4073, "Acc.armchair": 0.4089, "Acc.seat": 0.6869, "Acc.fence": 0.4164, "Acc.desk": 0.6319, "Acc.rock": 0.4922, "Acc.wardrobe": 0.6768, "Acc.lamp": 0.6926, "Acc.bathtub": 0.6866, "Acc.railing": 0.3962, "Acc.cushion": 0.5695, "Acc.base": 0.1622, "Acc.box": 0.2654, "Acc.column": 0.4723, "Acc.signboard": 0.4062, "Acc.chest of drawers": 0.554, "Acc.counter": 0.2828, "Acc.sand": 0.4681, "Acc.sink": 0.7347, "Acc.skyscraper": 0.5837, "Acc.fireplace": 0.8059, "Acc.refrigerator": 0.7779, "Acc.grandstand": 0.6176, "Acc.path": 0.3433, "Acc.stairs": 0.2884, "Acc.runway": 0.903, "Acc.case": 0.506, "Acc.pool table": 0.9556, "Acc.pillow": 0.5906, "Acc.screen door": 0.746, "Acc.stairway": 0.3774, "Acc.river": 0.2619, "Acc.bridge": 0.285, "Acc.bookcase": 0.3671, "Acc.blind": 0.4173, "Acc.coffee table": 0.7977, "Acc.toilet": 0.9051, "Acc.flower": 0.5289, "Acc.book": 0.6393, "Acc.hill": 0.1452, "Acc.bench": 0.4767, "Acc.countertop": 0.631, "Acc.stove": 0.7365, "Acc.palm": 0.6089, "Acc.kitchen island": 0.4325, "Acc.computer": 0.625, "Acc.swivel chair": 0.5254, "Acc.boat": 0.6183, "Acc.bar": 0.3023, "Acc.arcade machine": 0.4777, "Acc.hovel": 0.1416, "Acc.bus": 0.8837, "Acc.towel": 0.6456, "Acc.light": 0.4379, "Acc.truck": 0.2233, "Acc.tower": 0.3799, "Acc.chandelier": 0.7257, "Acc.awning": 0.1648, "Acc.streetlight": 0.2427, "Acc.booth": 0.3306, "Acc.television receiver": 0.749, "Acc.airplane": 0.6515, "Acc.dirt track": 0.2497, "Acc.apparel": 0.3409, "Acc.pole": 0.1636, "Acc.land": 0.0161, "Acc.bannister": 0.1485, "Acc.escalator": 0.4091, "Acc.ottoman": 0.3514, "Acc.bottle": 0.2169, "Acc.buffet": 0.4744, "Acc.poster": 0.2842, "Acc.stage": 0.15, "Acc.van": 0.4545, "Acc.ship": 0.3502, "Acc.fountain": 0.2146, "Acc.conveyer belt": 0.8117, "Acc.canopy": 0.204, "Acc.washer": 0.6826, "Acc.plaything": 0.3296, "Acc.swimming pool": 0.5238, "Acc.stool": 0.2078, "Acc.barrel": 0.5381, "Acc.basket": 0.2818, "Acc.waterfall": 0.4763, "Acc.tent": 0.9798, "Acc.bag": 0.097, "Acc.minibike": 0.7032, "Acc.cradle": 0.9136, "Acc.oven": 0.4287, "Acc.ball": 0.4296, "Acc.food": 0.6808, "Acc.step": 0.0836, "Acc.tank": 0.4052, "Acc.trade name": 0.211, "Acc.microwave": 0.3883, "Acc.pot": 0.313, "Acc.animal": 0.5637, "Acc.bicycle": 0.6082, "Acc.lake": 0.519, "Acc.dishwasher": 0.5702, "Acc.screen": 0.8538, "Acc.blanket": 0.0277, "Acc.sculpture": 0.5844, "Acc.hood": 0.5799, "Acc.sconce": 0.2613, "Acc.vase": 0.347, "Acc.traffic light": 0.3205, "Acc.tray": 0.0685, "Acc.ashcan": 0.4454, "Acc.fan": 0.6395, "Acc.pier": 0.8149, "Acc.crt screen": 0.0006, "Acc.plate": 0.5169, "Acc.monitor": 0.1996, "Acc.bulletin board": 0.3131, "Acc.shower": 0.0, "Acc.radiator": 0.5014, "Acc.glass": 0.0835, "Acc.clock": 0.2244, "Acc.flag": 0.2715} -{"mode": "train", "epoch": 
2, "iter": 32050, "lr": 5e-05, "memory": 4602, "data_time": 0.47762, "decode.loss_ce": 0.35947, "decode.acc_seg": 86.19052, "loss": 0.35947, "time": 0.6065} -{"mode": "train", "epoch": 2, "iter": 32100, "lr": 5e-05, "memory": 4602, "data_time": 0.01169, "decode.loss_ce": 0.38759, "decode.acc_seg": 85.12507, "loss": 0.38759, "time": 0.14243} -{"mode": "train", "epoch": 2, "iter": 32150, "lr": 5e-05, "memory": 4602, "data_time": 0.01195, "decode.loss_ce": 0.37649, "decode.acc_seg": 85.59468, "loss": 0.37649, "time": 0.1424} -{"mode": "train", "epoch": 2, "iter": 32200, "lr": 5e-05, "memory": 4602, "data_time": 0.01153, "decode.loss_ce": 0.37972, "decode.acc_seg": 85.43534, "loss": 0.37972, "time": 0.14245} -{"mode": "train", "epoch": 2, "iter": 32250, "lr": 5e-05, "memory": 4602, "data_time": 0.01259, "decode.loss_ce": 0.37492, "decode.acc_seg": 85.56355, "loss": 0.37492, "time": 0.14428} -{"mode": "train", "epoch": 2, "iter": 32300, "lr": 5e-05, "memory": 4602, "data_time": 0.01229, "decode.loss_ce": 0.37704, "decode.acc_seg": 85.39332, "loss": 0.37704, "time": 0.14351} -{"mode": "train", "epoch": 2, "iter": 32350, "lr": 5e-05, "memory": 4602, "data_time": 0.01176, "decode.loss_ce": 0.38597, "decode.acc_seg": 85.24857, "loss": 0.38597, "time": 0.14319} -{"mode": "train", "epoch": 2, "iter": 32400, "lr": 5e-05, "memory": 4602, "data_time": 0.01188, "decode.loss_ce": 0.37454, "decode.acc_seg": 85.71655, "loss": 0.37454, "time": 0.14107} -{"mode": "train", "epoch": 2, "iter": 32450, "lr": 5e-05, "memory": 4602, "data_time": 0.01229, "decode.loss_ce": 0.36688, "decode.acc_seg": 85.84311, "loss": 0.36688, "time": 0.14249} -{"mode": "train", "epoch": 2, "iter": 32500, "lr": 5e-05, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.37754, "decode.acc_seg": 85.46113, "loss": 0.37754, "time": 0.14457} -{"mode": "train", "epoch": 2, "iter": 32550, "lr": 4e-05, "memory": 4602, "data_time": 0.01225, "decode.loss_ce": 0.3693, "decode.acc_seg": 85.92234, "loss": 0.3693, "time": 0.14299} -{"mode": "train", "epoch": 2, "iter": 32600, "lr": 4e-05, "memory": 4602, "data_time": 0.01192, "decode.loss_ce": 0.37573, "decode.acc_seg": 85.60647, "loss": 0.37573, "time": 0.14192} -{"mode": "train", "epoch": 2, "iter": 32650, "lr": 4e-05, "memory": 4602, "data_time": 0.01263, "decode.loss_ce": 0.37003, "decode.acc_seg": 85.97314, "loss": 0.37003, "time": 0.14079} -{"mode": "train", "epoch": 2, "iter": 32700, "lr": 4e-05, "memory": 4602, "data_time": 0.01269, "decode.loss_ce": 0.37267, "decode.acc_seg": 85.7516, "loss": 0.37267, "time": 0.14289} -{"mode": "train", "epoch": 2, "iter": 32750, "lr": 4e-05, "memory": 4602, "data_time": 0.01186, "decode.loss_ce": 0.36805, "decode.acc_seg": 85.54624, "loss": 0.36805, "time": 0.14456} -{"mode": "train", "epoch": 2, "iter": 32800, "lr": 4e-05, "memory": 4602, "data_time": 0.01213, "decode.loss_ce": 0.36595, "decode.acc_seg": 85.67333, "loss": 0.36595, "time": 0.1449} -{"mode": "train", "epoch": 2, "iter": 32850, "lr": 4e-05, "memory": 4602, "data_time": 0.012, "decode.loss_ce": 0.38783, "decode.acc_seg": 85.08896, "loss": 0.38783, "time": 0.14521} -{"mode": "train", "epoch": 2, "iter": 32900, "lr": 4e-05, "memory": 4602, "data_time": 0.01212, "decode.loss_ce": 0.37148, "decode.acc_seg": 85.7186, "loss": 0.37148, "time": 0.14927} -{"mode": "train", "epoch": 2, "iter": 32950, "lr": 4e-05, "memory": 4602, "data_time": 0.01144, "decode.loss_ce": 0.37646, "decode.acc_seg": 85.5737, "loss": 0.37646, "time": 0.1472} -{"mode": "train", "epoch": 2, "iter": 33000, "lr": 4e-05, 
"memory": 4602, "data_time": 0.01188, "decode.loss_ce": 0.35239, "decode.acc_seg": 86.17135, "loss": 0.35239, "time": 0.14333} -{"mode": "train", "epoch": 2, "iter": 33050, "lr": 4e-05, "memory": 4602, "data_time": 0.01251, "decode.loss_ce": 0.3625, "decode.acc_seg": 85.78206, "loss": 0.3625, "time": 0.14227} -{"mode": "train", "epoch": 2, "iter": 33100, "lr": 4e-05, "memory": 4602, "data_time": 0.01303, "decode.loss_ce": 0.37342, "decode.acc_seg": 85.40364, "loss": 0.37342, "time": 0.14399} -{"mode": "train", "epoch": 2, "iter": 33150, "lr": 4e-05, "memory": 4602, "data_time": 0.0123, "decode.loss_ce": 0.38331, "decode.acc_seg": 85.29082, "loss": 0.38331, "time": 0.14209} -{"mode": "train", "epoch": 2, "iter": 33200, "lr": 4e-05, "memory": 4602, "data_time": 0.01197, "decode.loss_ce": 0.34186, "decode.acc_seg": 86.67828, "loss": 0.34186, "time": 0.1456} -{"mode": "train", "epoch": 2, "iter": 33250, "lr": 4e-05, "memory": 4602, "data_time": 0.01135, "decode.loss_ce": 0.35779, "decode.acc_seg": 86.22401, "loss": 0.35779, "time": 0.14326} -{"mode": "train", "epoch": 2, "iter": 33300, "lr": 4e-05, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.37023, "decode.acc_seg": 85.81281, "loss": 0.37023, "time": 0.14169} -{"mode": "train", "epoch": 2, "iter": 33350, "lr": 4e-05, "memory": 4602, "data_time": 0.0121, "decode.loss_ce": 0.36882, "decode.acc_seg": 85.83329, "loss": 0.36882, "time": 0.14311} -{"mode": "train", "epoch": 2, "iter": 33400, "lr": 4e-05, "memory": 4602, "data_time": 0.01289, "decode.loss_ce": 0.37578, "decode.acc_seg": 85.49354, "loss": 0.37578, "time": 0.14098} -{"mode": "train", "epoch": 2, "iter": 33450, "lr": 4e-05, "memory": 4602, "data_time": 0.01193, "decode.loss_ce": 0.37246, "decode.acc_seg": 85.4687, "loss": 0.37246, "time": 0.1422} -{"mode": "train", "epoch": 2, "iter": 33500, "lr": 4e-05, "memory": 4602, "data_time": 0.01269, "decode.loss_ce": 0.364, "decode.acc_seg": 85.65624, "loss": 0.364, "time": 0.1415} -{"mode": "train", "epoch": 2, "iter": 33550, "lr": 4e-05, "memory": 4602, "data_time": 0.01172, "decode.loss_ce": 0.36302, "decode.acc_seg": 86.21067, "loss": 0.36302, "time": 0.14255} -{"mode": "train", "epoch": 2, "iter": 33600, "lr": 4e-05, "memory": 4602, "data_time": 0.01211, "decode.loss_ce": 0.36215, "decode.acc_seg": 86.1064, "loss": 0.36215, "time": 0.13823} -{"mode": "train", "epoch": 2, "iter": 33650, "lr": 4e-05, "memory": 4602, "data_time": 0.0119, "decode.loss_ce": 0.36819, "decode.acc_seg": 85.65847, "loss": 0.36819, "time": 0.14095} -{"mode": "train", "epoch": 2, "iter": 33700, "lr": 4e-05, "memory": 4602, "data_time": 0.01245, "decode.loss_ce": 0.36179, "decode.acc_seg": 85.81224, "loss": 0.36179, "time": 0.14073} -{"mode": "train", "epoch": 2, "iter": 33750, "lr": 4e-05, "memory": 4602, "data_time": 0.01195, "decode.loss_ce": 0.36007, "decode.acc_seg": 86.00794, "loss": 0.36007, "time": 0.14181} -{"mode": "train", "epoch": 2, "iter": 33800, "lr": 4e-05, "memory": 4602, "data_time": 0.01181, "decode.loss_ce": 0.36543, "decode.acc_seg": 85.98006, "loss": 0.36543, "time": 0.13959} -{"mode": "train", "epoch": 2, "iter": 33850, "lr": 4e-05, "memory": 4602, "data_time": 0.01286, "decode.loss_ce": 0.36801, "decode.acc_seg": 85.80524, "loss": 0.36801, "time": 0.14214} -{"mode": "train", "epoch": 2, "iter": 33900, "lr": 4e-05, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.36696, "decode.acc_seg": 86.05803, "loss": 0.36696, "time": 0.14534} -{"mode": "train", "epoch": 2, "iter": 33950, "lr": 4e-05, "memory": 4602, "data_time": 
0.01308, "decode.loss_ce": 0.35149, "decode.acc_seg": 86.36965, "loss": 0.35149, "time": 0.14194} -{"mode": "train", "epoch": 2, "iter": 34000, "lr": 4e-05, "memory": 4602, "data_time": 0.0123, "decode.loss_ce": 0.36122, "decode.acc_seg": 86.02381, "loss": 0.36122, "time": 0.14468} -{"mode": "train", "epoch": 2, "iter": 34050, "lr": 4e-05, "memory": 4602, "data_time": 0.01157, "decode.loss_ce": 0.36771, "decode.acc_seg": 85.74068, "loss": 0.36771, "time": 0.15055} -{"mode": "train", "epoch": 2, "iter": 34100, "lr": 4e-05, "memory": 4602, "data_time": 0.01137, "decode.loss_ce": 0.34975, "decode.acc_seg": 86.34717, "loss": 0.34975, "time": 0.14101} -{"mode": "train", "epoch": 2, "iter": 34150, "lr": 4e-05, "memory": 4602, "data_time": 0.01176, "decode.loss_ce": 0.37571, "decode.acc_seg": 85.68053, "loss": 0.37571, "time": 0.14441} -{"mode": "train", "epoch": 2, "iter": 34200, "lr": 4e-05, "memory": 4602, "data_time": 0.01205, "decode.loss_ce": 0.35939, "decode.acc_seg": 86.23052, "loss": 0.35939, "time": 0.13985} -{"mode": "train", "epoch": 2, "iter": 34250, "lr": 4e-05, "memory": 4602, "data_time": 0.01203, "decode.loss_ce": 0.35359, "decode.acc_seg": 86.36229, "loss": 0.35359, "time": 0.14034} -{"mode": "train", "epoch": 2, "iter": 34300, "lr": 4e-05, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.3609, "decode.acc_seg": 85.98311, "loss": 0.3609, "time": 0.14038} -{"mode": "train", "epoch": 2, "iter": 34350, "lr": 4e-05, "memory": 4602, "data_time": 0.01146, "decode.loss_ce": 0.37107, "decode.acc_seg": 85.74918, "loss": 0.37107, "time": 0.14513} -{"mode": "train", "epoch": 2, "iter": 34400, "lr": 3e-05, "memory": 4602, "data_time": 0.01188, "decode.loss_ce": 0.3653, "decode.acc_seg": 85.99636, "loss": 0.3653, "time": 0.14279} -{"mode": "train", "epoch": 2, "iter": 34450, "lr": 3e-05, "memory": 4602, "data_time": 0.01254, "decode.loss_ce": 0.36713, "decode.acc_seg": 85.94772, "loss": 0.36713, "time": 0.14546} -{"mode": "train", "epoch": 2, "iter": 34500, "lr": 3e-05, "memory": 4602, "data_time": 0.01164, "decode.loss_ce": 0.35027, "decode.acc_seg": 86.42398, "loss": 0.35027, "time": 0.14125} -{"mode": "train", "epoch": 2, "iter": 34550, "lr": 3e-05, "memory": 4602, "data_time": 0.01243, "decode.loss_ce": 0.36929, "decode.acc_seg": 85.77824, "loss": 0.36929, "time": 0.14126} -{"mode": "train", "epoch": 2, "iter": 34600, "lr": 3e-05, "memory": 4602, "data_time": 0.01225, "decode.loss_ce": 0.36704, "decode.acc_seg": 86.16703, "loss": 0.36704, "time": 0.13908} -{"mode": "train", "epoch": 2, "iter": 34650, "lr": 3e-05, "memory": 4602, "data_time": 0.01186, "decode.loss_ce": 0.36884, "decode.acc_seg": 85.89191, "loss": 0.36884, "time": 0.1425} -{"mode": "train", "epoch": 2, "iter": 34700, "lr": 3e-05, "memory": 4602, "data_time": 0.01169, "decode.loss_ce": 0.36903, "decode.acc_seg": 86.01965, "loss": 0.36903, "time": 0.14189} -{"mode": "train", "epoch": 2, "iter": 34750, "lr": 3e-05, "memory": 4602, "data_time": 0.01163, "decode.loss_ce": 0.36119, "decode.acc_seg": 86.14186, "loss": 0.36119, "time": 0.14302} -{"mode": "train", "epoch": 2, "iter": 34800, "lr": 3e-05, "memory": 4602, "data_time": 0.01168, "decode.loss_ce": 0.36122, "decode.acc_seg": 86.25048, "loss": 0.36122, "time": 0.14236} -{"mode": "train", "epoch": 2, "iter": 34850, "lr": 3e-05, "memory": 4602, "data_time": 0.01222, "decode.loss_ce": 0.36488, "decode.acc_seg": 85.88489, "loss": 0.36488, "time": 0.14048} -{"mode": "train", "epoch": 2, "iter": 34900, "lr": 3e-05, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 
0.35644, "decode.acc_seg": 86.25908, "loss": 0.35644, "time": 0.14192} -{"mode": "train", "epoch": 2, "iter": 34950, "lr": 3e-05, "memory": 4602, "data_time": 0.01211, "decode.loss_ce": 0.36356, "decode.acc_seg": 85.82705, "loss": 0.36356, "time": 0.14331} -{"mode": "train", "epoch": 2, "iter": 35000, "lr": 3e-05, "memory": 4602, "data_time": 0.01174, "decode.loss_ce": 0.35078, "decode.acc_seg": 86.33745, "loss": 0.35078, "time": 0.14158} -{"mode": "train", "epoch": 2, "iter": 35050, "lr": 3e-05, "memory": 4602, "data_time": 0.01383, "decode.loss_ce": 0.36187, "decode.acc_seg": 86.11729, "loss": 0.36187, "time": 0.14417} -{"mode": "train", "epoch": 2, "iter": 35100, "lr": 3e-05, "memory": 4602, "data_time": 0.0119, "decode.loss_ce": 0.37055, "decode.acc_seg": 85.56276, "loss": 0.37055, "time": 0.14229} -{"mode": "train", "epoch": 2, "iter": 35150, "lr": 3e-05, "memory": 4602, "data_time": 0.01232, "decode.loss_ce": 0.35679, "decode.acc_seg": 86.17848, "loss": 0.35679, "time": 0.14133} -{"mode": "train", "epoch": 2, "iter": 35200, "lr": 3e-05, "memory": 4602, "data_time": 0.01175, "decode.loss_ce": 0.36395, "decode.acc_seg": 86.00197, "loss": 0.36395, "time": 0.15239} -{"mode": "train", "epoch": 2, "iter": 35250, "lr": 3e-05, "memory": 4602, "data_time": 0.01161, "decode.loss_ce": 0.36881, "decode.acc_seg": 85.72325, "loss": 0.36881, "time": 0.14309} -{"mode": "train", "epoch": 2, "iter": 35300, "lr": 3e-05, "memory": 4602, "data_time": 0.01241, "decode.loss_ce": 0.34815, "decode.acc_seg": 86.40048, "loss": 0.34815, "time": 0.14236} -{"mode": "train", "epoch": 2, "iter": 35350, "lr": 3e-05, "memory": 4602, "data_time": 0.0115, "decode.loss_ce": 0.34649, "decode.acc_seg": 86.49088, "loss": 0.34649, "time": 0.14204} -{"mode": "train", "epoch": 2, "iter": 35400, "lr": 3e-05, "memory": 4602, "data_time": 0.01173, "decode.loss_ce": 0.35457, "decode.acc_seg": 86.25917, "loss": 0.35457, "time": 0.14173} -{"mode": "train", "epoch": 2, "iter": 35450, "lr": 3e-05, "memory": 4602, "data_time": 0.01166, "decode.loss_ce": 0.34958, "decode.acc_seg": 86.23827, "loss": 0.34958, "time": 0.13986} -{"mode": "train", "epoch": 2, "iter": 35500, "lr": 3e-05, "memory": 4602, "data_time": 0.01156, "decode.loss_ce": 0.35093, "decode.acc_seg": 86.44537, "loss": 0.35093, "time": 0.1406} -{"mode": "train", "epoch": 2, "iter": 35550, "lr": 3e-05, "memory": 4602, "data_time": 0.01158, "decode.loss_ce": 0.35784, "decode.acc_seg": 86.35072, "loss": 0.35784, "time": 0.14496} -{"mode": "train", "epoch": 2, "iter": 35600, "lr": 3e-05, "memory": 4602, "data_time": 0.0125, "decode.loss_ce": 0.35967, "decode.acc_seg": 86.26246, "loss": 0.35967, "time": 0.14398} -{"mode": "train", "epoch": 2, "iter": 35650, "lr": 3e-05, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.35273, "decode.acc_seg": 86.31885, "loss": 0.35273, "time": 0.14103} -{"mode": "train", "epoch": 2, "iter": 35700, "lr": 3e-05, "memory": 4602, "data_time": 0.01169, "decode.loss_ce": 0.36159, "decode.acc_seg": 86.3545, "loss": 0.36159, "time": 0.14115} -{"mode": "train", "epoch": 2, "iter": 35750, "lr": 3e-05, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.35486, "decode.acc_seg": 86.35389, "loss": 0.35486, "time": 0.14117} -{"mode": "train", "epoch": 2, "iter": 35800, "lr": 3e-05, "memory": 4602, "data_time": 0.01192, "decode.loss_ce": 0.36875, "decode.acc_seg": 86.05035, "loss": 0.36875, "time": 0.13977} -{"mode": "train", "epoch": 2, "iter": 35850, "lr": 3e-05, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.35988, "decode.acc_seg": 
86.18493, "loss": 0.35988, "time": 0.14228} -{"mode": "train", "epoch": 2, "iter": 35900, "lr": 3e-05, "memory": 4602, "data_time": 0.01246, "decode.loss_ce": 0.35578, "decode.acc_seg": 86.1952, "loss": 0.35578, "time": 0.1407} -{"mode": "train", "epoch": 2, "iter": 35950, "lr": 3e-05, "memory": 4602, "data_time": 0.0118, "decode.loss_ce": 0.35755, "decode.acc_seg": 86.23991, "loss": 0.35755, "time": 0.142} -{"mode": "train", "epoch": 2, "iter": 36000, "lr": 3e-05, "memory": 4602, "data_time": 0.01198, "decode.loss_ce": 0.34274, "decode.acc_seg": 86.6948, "loss": 0.34274, "time": 0.15586} -{"mode": "val", "epoch": 2, "iter": 250, "lr": 3e-05, "aAcc": 0.7935, "mIoU": 0.4008, "mAcc": 0.5109, "IoU.wall": 0.7187, "IoU.building": 0.7943, "IoU.sky": 0.939, "IoU.floor": 0.7751, "IoU.tree": 0.707, "IoU.ceiling": 0.7916, "IoU.road": 0.802, "IoU.bed ": 0.8469, "IoU.windowpane": 0.5599, "IoU.grass": 0.6524, "IoU.cabinet": 0.535, "IoU.sidewalk": 0.5998, "IoU.person": 0.7677, "IoU.earth": 0.3391, "IoU.door": 0.3594, "IoU.table": 0.4939, "IoU.mountain": 0.5574, "IoU.plant": 0.4905, "IoU.curtain": 0.6523, "IoU.chair": 0.4987, "IoU.car": 0.8079, "IoU.water": 0.4725, "IoU.painting": 0.6542, "IoU.sofa": 0.562, "IoU.shelf": 0.327, "IoU.house": 0.3137, "IoU.sea": 0.5344, "IoU.mirror": 0.5057, "IoU.rug": 0.4886, "IoU.field": 0.2724, "IoU.armchair": 0.3322, "IoU.seat": 0.5487, "IoU.fence": 0.3448, "IoU.desk": 0.4053, "IoU.rock": 0.3159, "IoU.wardrobe": 0.4494, "IoU.lamp": 0.5451, "IoU.bathtub": 0.6198, "IoU.railing": 0.3046, "IoU.cushion": 0.445, "IoU.base": 0.1569, "IoU.box": 0.2044, "IoU.column": 0.3811, "IoU.signboard": 0.3051, "IoU.chest of drawers": 0.3618, "IoU.counter": 0.2103, "IoU.sand": 0.296, "IoU.sink": 0.6062, "IoU.skyscraper": 0.4474, "IoU.fireplace": 0.6111, "IoU.refrigerator": 0.608, "IoU.grandstand": 0.3948, "IoU.path": 0.2214, "IoU.stairs": 0.3353, "IoU.runway": 0.6764, "IoU.case": 0.3916, "IoU.pool table": 0.9091, "IoU.pillow": 0.493, "IoU.screen door": 0.46, "IoU.stairway": 0.2433, "IoU.river": 0.1376, "IoU.bridge": 0.2807, "IoU.bookcase": 0.2802, "IoU.blind": 0.3673, "IoU.coffee table": 0.477, "IoU.toilet": 0.7603, "IoU.flower": 0.3476, "IoU.book": 0.4301, "IoU.hill": 0.0851, "IoU.bench": 0.3699, "IoU.countertop": 0.4728, "IoU.stove": 0.6019, "IoU.palm": 0.4539, "IoU.kitchen island": 0.2319, "IoU.computer": 0.5355, "IoU.swivel chair": 0.3628, "IoU.boat": 0.454, "IoU.bar": 0.2268, "IoU.arcade machine": 0.4051, "IoU.hovel": 0.0588, "IoU.bus": 0.8098, "IoU.towel": 0.481, "IoU.light": 0.4342, "IoU.truck": 0.1674, "IoU.tower": 0.2678, "IoU.chandelier": 0.5989, "IoU.awning": 0.1468, "IoU.streetlight": 0.1584, "IoU.booth": 0.3037, "IoU.television receiver": 0.5287, "IoU.airplane": 0.5447, "IoU.dirt track": 0.0186, "IoU.apparel": 0.3388, "IoU.pole": 0.1417, "IoU.land": 0.0114, "IoU.bannister": 0.0888, "IoU.escalator": 0.2572, "IoU.ottoman": 0.3206, "IoU.bottle": 0.1693, "IoU.buffet": 0.4045, "IoU.poster": 0.1882, "IoU.stage": 0.1091, "IoU.van": 0.3671, "IoU.ship": 0.3533, "IoU.fountain": 0.1848, "IoU.conveyer belt": 0.7009, "IoU.canopy": 0.1407, "IoU.washer": 0.6101, "IoU.plaything": 0.2293, "IoU.swimming pool": 0.2363, "IoU.stool": 0.2566, "IoU.barrel": 0.1852, "IoU.basket": 0.1993, "IoU.waterfall": 0.4594, "IoU.tent": 0.9174, "IoU.bag": 0.0862, "IoU.minibike": 0.5391, "IoU.cradle": 0.7545, "IoU.oven": 0.3878, "IoU.ball": 0.3572, "IoU.food": 0.5005, "IoU.step": 0.0651, "IoU.tank": 0.3755, "IoU.trade name": 0.1672, "IoU.microwave": 0.3324, "IoU.pot": 0.2551, "IoU.animal": 0.5275, "IoU.bicycle": 
0.4268, "IoU.lake": 0.3814, "IoU.dishwasher": 0.4971, "IoU.screen": 0.6225, "IoU.blanket": 0.0309, "IoU.sculpture": 0.4027, "IoU.hood": 0.4566, "IoU.sconce": 0.2415, "IoU.vase": 0.2196, "IoU.traffic light": 0.2048, "IoU.tray": 0.0342, "IoU.ashcan": 0.3251, "IoU.fan": 0.5111, "IoU.pier": 0.4085, "IoU.crt screen": 0.0012, "IoU.plate": 0.4054, "IoU.monitor": 0.1863, "IoU.bulletin board": 0.2209, "IoU.shower": 0.0013, "IoU.radiator": 0.4514, "IoU.glass": 0.0916, "IoU.clock": 0.271, "IoU.flag": 0.272, "Acc.wall": 0.8575, "Acc.building": 0.923, "Acc.sky": 0.9704, "Acc.floor": 0.8858, "Acc.tree": 0.8718, "Acc.ceiling": 0.9015, "Acc.road": 0.8831, "Acc.bed ": 0.9417, "Acc.windowpane": 0.7332, "Acc.grass": 0.8079, "Acc.cabinet": 0.6809, "Acc.sidewalk": 0.7623, "Acc.person": 0.9044, "Acc.earth": 0.4668, "Acc.door": 0.4694, "Acc.table": 0.6585, "Acc.mountain": 0.7158, "Acc.plant": 0.6104, "Acc.curtain": 0.8039, "Acc.chair": 0.6823, "Acc.car": 0.9014, "Acc.water": 0.6016, "Acc.painting": 0.8242, "Acc.sofa": 0.7715, "Acc.shelf": 0.4667, "Acc.house": 0.427, "Acc.sea": 0.7993, "Acc.mirror": 0.5869, "Acc.rug": 0.5816, "Acc.field": 0.425, "Acc.armchair": 0.4631, "Acc.seat": 0.7376, "Acc.fence": 0.4482, "Acc.desk": 0.5937, "Acc.rock": 0.4921, "Acc.wardrobe": 0.6632, "Acc.lamp": 0.7033, "Acc.bathtub": 0.6958, "Acc.railing": 0.4032, "Acc.cushion": 0.564, "Acc.base": 0.2315, "Acc.box": 0.2877, "Acc.column": 0.4875, "Acc.signboard": 0.3914, "Acc.chest of drawers": 0.5474, "Acc.counter": 0.2921, "Acc.sand": 0.4262, "Acc.sink": 0.7346, "Acc.skyscraper": 0.5702, "Acc.fireplace": 0.8028, "Acc.refrigerator": 0.7489, "Acc.grandstand": 0.6547, "Acc.path": 0.3364, "Acc.stairs": 0.404, "Acc.runway": 0.897, "Acc.case": 0.537, "Acc.pool table": 0.9542, "Acc.pillow": 0.63, "Acc.screen door": 0.5071, "Acc.stairway": 0.3408, "Acc.river": 0.2341, "Acc.bridge": 0.3221, "Acc.bookcase": 0.3916, "Acc.blind": 0.4577, "Acc.coffee table": 0.7473, "Acc.toilet": 0.911, "Acc.flower": 0.5505, "Acc.book": 0.672, "Acc.hill": 0.1626, "Acc.bench": 0.4731, "Acc.countertop": 0.6637, "Acc.stove": 0.6962, "Acc.palm": 0.6084, "Acc.kitchen island": 0.4749, "Acc.computer": 0.614, "Acc.swivel chair": 0.5269, "Acc.boat": 0.5186, "Acc.bar": 0.2692, "Acc.arcade machine": 0.4749, "Acc.hovel": 0.0684, "Acc.bus": 0.9219, "Acc.towel": 0.645, "Acc.light": 0.4965, "Acc.truck": 0.2089, "Acc.tower": 0.3472, "Acc.chandelier": 0.7466, "Acc.awning": 0.1632, "Acc.streetlight": 0.2045, "Acc.booth": 0.3384, "Acc.television receiver": 0.7412, "Acc.airplane": 0.6335, "Acc.dirt track": 0.0907, "Acc.apparel": 0.4609, "Acc.pole": 0.1841, "Acc.land": 0.02, "Acc.bannister": 0.1393, "Acc.escalator": 0.3123, "Acc.ottoman": 0.455, "Acc.bottle": 0.1953, "Acc.buffet": 0.4896, "Acc.poster": 0.2345, "Acc.stage": 0.1449, "Acc.van": 0.4426, "Acc.ship": 0.4509, "Acc.fountain": 0.2095, "Acc.conveyer belt": 0.8516, "Acc.canopy": 0.1886, "Acc.washer": 0.6758, "Acc.plaything": 0.3203, "Acc.swimming pool": 0.348, "Acc.stool": 0.3082, "Acc.barrel": 0.6091, "Acc.basket": 0.2882, "Acc.waterfall": 0.5163, "Acc.tent": 0.9822, "Acc.bag": 0.0999, "Acc.minibike": 0.6674, "Acc.cradle": 0.9001, "Acc.oven": 0.5562, "Acc.ball": 0.4105, "Acc.food": 0.616, "Acc.step": 0.0796, "Acc.tank": 0.4132, "Acc.trade name": 0.1848, "Acc.microwave": 0.3653, "Acc.pot": 0.3021, "Acc.animal": 0.5557, "Acc.bicycle": 0.6753, "Acc.lake": 0.5752, "Acc.dishwasher": 0.615, "Acc.screen": 0.8403, "Acc.blanket": 0.0335, "Acc.sculpture": 0.5774, "Acc.hood": 0.5722, "Acc.sconce": 0.2761, "Acc.vase": 0.3412, "Acc.traffic 
light": 0.3073, "Acc.tray": 0.0487, "Acc.ashcan": 0.4201, "Acc.fan": 0.5929, "Acc.pier": 0.5942, "Acc.crt screen": 0.0039, "Acc.plate": 0.5525, "Acc.monitor": 0.265, "Acc.bulletin board": 0.3311, "Acc.shower": 0.004, "Acc.radiator": 0.4911, "Acc.glass": 0.0999, "Acc.clock": 0.2944, "Acc.flag": 0.3051} -{"mode": "train", "epoch": 2, "iter": 36050, "lr": 3e-05, "memory": 4602, "data_time": 0.47889, "decode.loss_ce": 0.35452, "decode.acc_seg": 86.34307, "loss": 0.35452, "time": 0.60672} -{"mode": "train", "epoch": 2, "iter": 36100, "lr": 3e-05, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.36632, "decode.acc_seg": 85.96534, "loss": 0.36632, "time": 0.14227} -{"mode": "train", "epoch": 2, "iter": 36150, "lr": 3e-05, "memory": 4602, "data_time": 0.01153, "decode.loss_ce": 0.35488, "decode.acc_seg": 86.08054, "loss": 0.35488, "time": 0.14242} -{"mode": "train", "epoch": 2, "iter": 36200, "lr": 2e-05, "memory": 4602, "data_time": 0.01168, "decode.loss_ce": 0.35719, "decode.acc_seg": 86.26278, "loss": 0.35719, "time": 0.14372} -{"mode": "train", "epoch": 2, "iter": 36250, "lr": 2e-05, "memory": 4602, "data_time": 0.01149, "decode.loss_ce": 0.35101, "decode.acc_seg": 86.43265, "loss": 0.35101, "time": 0.14296} -{"mode": "train", "epoch": 2, "iter": 36300, "lr": 2e-05, "memory": 4602, "data_time": 0.01205, "decode.loss_ce": 0.34997, "decode.acc_seg": 86.34632, "loss": 0.34997, "time": 0.14237} -{"mode": "train", "epoch": 2, "iter": 36350, "lr": 2e-05, "memory": 4602, "data_time": 0.01185, "decode.loss_ce": 0.35416, "decode.acc_seg": 86.31547, "loss": 0.35416, "time": 0.15531} -{"mode": "train", "epoch": 2, "iter": 36400, "lr": 2e-05, "memory": 4602, "data_time": 0.01212, "decode.loss_ce": 0.3404, "decode.acc_seg": 86.59121, "loss": 0.3404, "time": 0.14456} -{"mode": "train", "epoch": 2, "iter": 36450, "lr": 2e-05, "memory": 4602, "data_time": 0.01162, "decode.loss_ce": 0.36042, "decode.acc_seg": 86.34249, "loss": 0.36042, "time": 0.14465} -{"mode": "train", "epoch": 2, "iter": 36500, "lr": 2e-05, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.33903, "decode.acc_seg": 86.74244, "loss": 0.33903, "time": 0.14242} -{"mode": "train", "epoch": 2, "iter": 36550, "lr": 2e-05, "memory": 4602, "data_time": 0.01242, "decode.loss_ce": 0.35904, "decode.acc_seg": 86.04622, "loss": 0.35904, "time": 0.14488} -{"mode": "train", "epoch": 2, "iter": 36600, "lr": 2e-05, "memory": 4602, "data_time": 0.01327, "decode.loss_ce": 0.36307, "decode.acc_seg": 86.09326, "loss": 0.36307, "time": 0.14337} -{"mode": "train", "epoch": 2, "iter": 36650, "lr": 2e-05, "memory": 4602, "data_time": 0.01199, "decode.loss_ce": 0.36489, "decode.acc_seg": 85.89026, "loss": 0.36489, "time": 0.13888} -{"mode": "train", "epoch": 2, "iter": 36700, "lr": 2e-05, "memory": 4602, "data_time": 0.01201, "decode.loss_ce": 0.36112, "decode.acc_seg": 86.0712, "loss": 0.36112, "time": 0.14247} -{"mode": "train", "epoch": 2, "iter": 36750, "lr": 2e-05, "memory": 4602, "data_time": 0.0117, "decode.loss_ce": 0.35787, "decode.acc_seg": 86.33485, "loss": 0.35787, "time": 0.1446} -{"mode": "train", "epoch": 2, "iter": 36800, "lr": 2e-05, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.34447, "decode.acc_seg": 86.67624, "loss": 0.34447, "time": 0.14392} -{"mode": "train", "epoch": 2, "iter": 36850, "lr": 2e-05, "memory": 4602, "data_time": 0.01196, "decode.loss_ce": 0.36218, "decode.acc_seg": 86.00846, "loss": 0.36218, "time": 0.1381} -{"mode": "train", "epoch": 2, "iter": 36900, "lr": 2e-05, "memory": 4602, "data_time": 
0.01353, "decode.loss_ce": 0.34695, "decode.acc_seg": 86.74068, "loss": 0.34695, "time": 0.14323} -{"mode": "train", "epoch": 2, "iter": 36950, "lr": 2e-05, "memory": 4602, "data_time": 0.01277, "decode.loss_ce": 0.35697, "decode.acc_seg": 86.08216, "loss": 0.35697, "time": 0.14289} -{"mode": "train", "epoch": 2, "iter": 37000, "lr": 2e-05, "memory": 4602, "data_time": 0.01151, "decode.loss_ce": 0.35232, "decode.acc_seg": 86.3889, "loss": 0.35232, "time": 0.14155} -{"mode": "train", "epoch": 2, "iter": 37050, "lr": 2e-05, "memory": 4602, "data_time": 0.01241, "decode.loss_ce": 0.35864, "decode.acc_seg": 86.16388, "loss": 0.35864, "time": 0.13965} -{"mode": "train", "epoch": 2, "iter": 37100, "lr": 2e-05, "memory": 4602, "data_time": 0.01166, "decode.loss_ce": 0.35349, "decode.acc_seg": 86.33803, "loss": 0.35349, "time": 0.13942} -{"mode": "train", "epoch": 2, "iter": 37150, "lr": 2e-05, "memory": 4602, "data_time": 0.01191, "decode.loss_ce": 0.34673, "decode.acc_seg": 86.60127, "loss": 0.34673, "time": 0.14243} -{"mode": "train", "epoch": 2, "iter": 37200, "lr": 2e-05, "memory": 4602, "data_time": 0.01184, "decode.loss_ce": 0.34395, "decode.acc_seg": 86.77592, "loss": 0.34395, "time": 0.14034} -{"mode": "train", "epoch": 2, "iter": 37250, "lr": 2e-05, "memory": 4602, "data_time": 0.01208, "decode.loss_ce": 0.34685, "decode.acc_seg": 86.38127, "loss": 0.34685, "time": 0.14176} -{"mode": "train", "epoch": 2, "iter": 37300, "lr": 2e-05, "memory": 4602, "data_time": 0.01137, "decode.loss_ce": 0.33715, "decode.acc_seg": 86.97451, "loss": 0.33715, "time": 0.1423} -{"mode": "train", "epoch": 2, "iter": 37350, "lr": 2e-05, "memory": 4602, "data_time": 0.01343, "decode.loss_ce": 0.35381, "decode.acc_seg": 86.3986, "loss": 0.35381, "time": 0.1416} -{"mode": "train", "epoch": 2, "iter": 37400, "lr": 2e-05, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.35206, "decode.acc_seg": 86.25067, "loss": 0.35206, "time": 0.14209} -{"mode": "train", "epoch": 2, "iter": 37450, "lr": 2e-05, "memory": 4602, "data_time": 0.01224, "decode.loss_ce": 0.34272, "decode.acc_seg": 86.77985, "loss": 0.34272, "time": 0.14087} -{"mode": "train", "epoch": 2, "iter": 37500, "lr": 2e-05, "memory": 4602, "data_time": 0.01192, "decode.loss_ce": 0.35483, "decode.acc_seg": 86.57113, "loss": 0.35483, "time": 0.14554} -{"mode": "train", "epoch": 2, "iter": 37550, "lr": 2e-05, "memory": 4602, "data_time": 0.01478, "decode.loss_ce": 0.3663, "decode.acc_seg": 85.85455, "loss": 0.3663, "time": 0.15102} -{"mode": "train", "epoch": 2, "iter": 37600, "lr": 2e-05, "memory": 4602, "data_time": 0.01153, "decode.loss_ce": 0.34639, "decode.acc_seg": 86.44299, "loss": 0.34639, "time": 0.14231} -{"mode": "train", "epoch": 2, "iter": 37650, "lr": 2e-05, "memory": 4602, "data_time": 0.01247, "decode.loss_ce": 0.34774, "decode.acc_seg": 86.57197, "loss": 0.34774, "time": 0.14311} -{"mode": "train", "epoch": 2, "iter": 37700, "lr": 2e-05, "memory": 4602, "data_time": 0.01281, "decode.loss_ce": 0.33306, "decode.acc_seg": 87.13131, "loss": 0.33306, "time": 0.14557} -{"mode": "train", "epoch": 2, "iter": 37750, "lr": 2e-05, "memory": 4602, "data_time": 0.01242, "decode.loss_ce": 0.34707, "decode.acc_seg": 86.4951, "loss": 0.34707, "time": 0.14616} -{"mode": "train", "epoch": 2, "iter": 37800, "lr": 2e-05, "memory": 4602, "data_time": 0.01257, "decode.loss_ce": 0.3357, "decode.acc_seg": 86.68546, "loss": 0.3357, "time": 0.14237} -{"mode": "train", "epoch": 2, "iter": 37850, "lr": 2e-05, "memory": 4602, "data_time": 0.01213, "decode.loss_ce": 
0.35128, "decode.acc_seg": 86.50247, "loss": 0.35128, "time": 0.14096} -{"mode": "train", "epoch": 2, "iter": 37900, "lr": 2e-05, "memory": 4602, "data_time": 0.01187, "decode.loss_ce": 0.34919, "decode.acc_seg": 86.54988, "loss": 0.34919, "time": 0.14092} -{"mode": "train", "epoch": 2, "iter": 37950, "lr": 1e-05, "memory": 4602, "data_time": 0.01168, "decode.loss_ce": 0.34008, "decode.acc_seg": 86.80242, "loss": 0.34008, "time": 0.14648} -{"mode": "train", "epoch": 2, "iter": 38000, "lr": 1e-05, "memory": 4602, "data_time": 0.01295, "decode.loss_ce": 0.34141, "decode.acc_seg": 86.86048, "loss": 0.34141, "time": 0.14287} -{"mode": "train", "epoch": 2, "iter": 38050, "lr": 1e-05, "memory": 4602, "data_time": 0.013, "decode.loss_ce": 0.34791, "decode.acc_seg": 86.67654, "loss": 0.34791, "time": 0.1425} -{"mode": "train", "epoch": 2, "iter": 38100, "lr": 1e-05, "memory": 4602, "data_time": 0.0117, "decode.loss_ce": 0.35795, "decode.acc_seg": 86.18542, "loss": 0.35795, "time": 0.14162} -{"mode": "train", "epoch": 2, "iter": 38150, "lr": 1e-05, "memory": 4602, "data_time": 0.01155, "decode.loss_ce": 0.34344, "decode.acc_seg": 86.74548, "loss": 0.34344, "time": 0.14245} -{"mode": "train", "epoch": 2, "iter": 38200, "lr": 1e-05, "memory": 4602, "data_time": 0.01154, "decode.loss_ce": 0.35688, "decode.acc_seg": 86.34564, "loss": 0.35688, "time": 0.14546} -{"mode": "train", "epoch": 2, "iter": 38250, "lr": 1e-05, "memory": 4602, "data_time": 0.01166, "decode.loss_ce": 0.34169, "decode.acc_seg": 86.77431, "loss": 0.34169, "time": 0.13892} -{"mode": "train", "epoch": 2, "iter": 38300, "lr": 1e-05, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.34484, "decode.acc_seg": 86.54774, "loss": 0.34484, "time": 0.14103} -{"mode": "train", "epoch": 2, "iter": 38350, "lr": 1e-05, "memory": 4602, "data_time": 0.01179, "decode.loss_ce": 0.34342, "decode.acc_seg": 86.85297, "loss": 0.34342, "time": 0.14097} -{"mode": "train", "epoch": 2, "iter": 38400, "lr": 1e-05, "memory": 4602, "data_time": 0.0122, "decode.loss_ce": 0.34526, "decode.acc_seg": 86.63751, "loss": 0.34526, "time": 0.14026} -{"mode": "train", "epoch": 2, "iter": 38450, "lr": 1e-05, "memory": 4602, "data_time": 0.01269, "decode.loss_ce": 0.34802, "decode.acc_seg": 86.76643, "loss": 0.34802, "time": 0.14324} -{"mode": "train", "epoch": 2, "iter": 38500, "lr": 1e-05, "memory": 4602, "data_time": 0.01219, "decode.loss_ce": 0.3493, "decode.acc_seg": 86.54707, "loss": 0.3493, "time": 0.14011} -{"mode": "train", "epoch": 2, "iter": 38550, "lr": 1e-05, "memory": 4602, "data_time": 0.01326, "decode.loss_ce": 0.35147, "decode.acc_seg": 86.41205, "loss": 0.35147, "time": 0.14139} -{"mode": "train", "epoch": 2, "iter": 38600, "lr": 1e-05, "memory": 4602, "data_time": 0.01263, "decode.loss_ce": 0.35284, "decode.acc_seg": 86.44634, "loss": 0.35284, "time": 0.14251} -{"mode": "train", "epoch": 2, "iter": 38650, "lr": 1e-05, "memory": 4602, "data_time": 0.01265, "decode.loss_ce": 0.35003, "decode.acc_seg": 86.42188, "loss": 0.35003, "time": 0.13955} -{"mode": "train", "epoch": 2, "iter": 38700, "lr": 1e-05, "memory": 4602, "data_time": 0.01218, "decode.loss_ce": 0.34077, "decode.acc_seg": 86.51391, "loss": 0.34077, "time": 0.15407} -{"mode": "train", "epoch": 2, "iter": 38750, "lr": 1e-05, "memory": 4602, "data_time": 0.01212, "decode.loss_ce": 0.34764, "decode.acc_seg": 86.67962, "loss": 0.34764, "time": 0.13951} -{"mode": "train", "epoch": 2, "iter": 38800, "lr": 1e-05, "memory": 4602, "data_time": 0.01198, "decode.loss_ce": 0.33602, "decode.acc_seg": 
86.99702, "loss": 0.33602, "time": 0.14358} -{"mode": "train", "epoch": 2, "iter": 38850, "lr": 1e-05, "memory": 4602, "data_time": 0.01133, "decode.loss_ce": 0.34056, "decode.acc_seg": 86.80212, "loss": 0.34056, "time": 0.14164} -{"mode": "train", "epoch": 2, "iter": 38900, "lr": 1e-05, "memory": 4602, "data_time": 0.01206, "decode.loss_ce": 0.33968, "decode.acc_seg": 86.83298, "loss": 0.33968, "time": 0.14303} -{"mode": "train", "epoch": 2, "iter": 38950, "lr": 1e-05, "memory": 4602, "data_time": 0.01204, "decode.loss_ce": 0.3387, "decode.acc_seg": 86.76827, "loss": 0.3387, "time": 0.14013} -{"mode": "train", "epoch": 2, "iter": 39000, "lr": 1e-05, "memory": 4602, "data_time": 0.0121, "decode.loss_ce": 0.34663, "decode.acc_seg": 86.7266, "loss": 0.34663, "time": 0.13871} -{"mode": "train", "epoch": 2, "iter": 39050, "lr": 1e-05, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.33613, "decode.acc_seg": 87.15781, "loss": 0.33613, "time": 0.14068} -{"mode": "train", "epoch": 2, "iter": 39100, "lr": 1e-05, "memory": 4602, "data_time": 0.01187, "decode.loss_ce": 0.34292, "decode.acc_seg": 86.70301, "loss": 0.34292, "time": 0.14153} -{"mode": "train", "epoch": 2, "iter": 39150, "lr": 1e-05, "memory": 4602, "data_time": 0.01279, "decode.loss_ce": 0.34336, "decode.acc_seg": 86.92384, "loss": 0.34336, "time": 0.14164} -{"mode": "train", "epoch": 2, "iter": 39200, "lr": 1e-05, "memory": 4602, "data_time": 0.01183, "decode.loss_ce": 0.3544, "decode.acc_seg": 86.3049, "loss": 0.3544, "time": 0.14003} -{"mode": "train", "epoch": 2, "iter": 39250, "lr": 1e-05, "memory": 4602, "data_time": 0.01173, "decode.loss_ce": 0.34745, "decode.acc_seg": 86.53175, "loss": 0.34745, "time": 0.14328} -{"mode": "train", "epoch": 2, "iter": 39300, "lr": 1e-05, "memory": 4602, "data_time": 0.01317, "decode.loss_ce": 0.34423, "decode.acc_seg": 86.74051, "loss": 0.34423, "time": 0.14431} -{"mode": "train", "epoch": 2, "iter": 39350, "lr": 1e-05, "memory": 4602, "data_time": 0.01203, "decode.loss_ce": 0.34015, "decode.acc_seg": 86.80511, "loss": 0.34015, "time": 0.14087} -{"mode": "train", "epoch": 2, "iter": 39400, "lr": 1e-05, "memory": 4602, "data_time": 0.01242, "decode.loss_ce": 0.33236, "decode.acc_seg": 87.22589, "loss": 0.33236, "time": 0.14399} -{"mode": "train", "epoch": 2, "iter": 39450, "lr": 1e-05, "memory": 4602, "data_time": 0.01169, "decode.loss_ce": 0.33197, "decode.acc_seg": 86.90112, "loss": 0.33197, "time": 0.13891} -{"mode": "train", "epoch": 2, "iter": 39500, "lr": 0.0, "memory": 4602, "data_time": 0.01307, "decode.loss_ce": 0.33382, "decode.acc_seg": 86.9914, "loss": 0.33382, "time": 0.14132} -{"mode": "train", "epoch": 2, "iter": 39550, "lr": 0.0, "memory": 4602, "data_time": 0.01174, "decode.loss_ce": 0.33893, "decode.acc_seg": 86.96712, "loss": 0.33893, "time": 0.14002} -{"mode": "train", "epoch": 2, "iter": 39600, "lr": 0.0, "memory": 4602, "data_time": 0.0117, "decode.loss_ce": 0.34869, "decode.acc_seg": 86.515, "loss": 0.34869, "time": 0.14214} -{"mode": "train", "epoch": 2, "iter": 39650, "lr": 0.0, "memory": 4602, "data_time": 0.01207, "decode.loss_ce": 0.33868, "decode.acc_seg": 86.7538, "loss": 0.33868, "time": 0.14325} -{"mode": "train", "epoch": 2, "iter": 39700, "lr": 0.0, "memory": 4602, "data_time": 0.01178, "decode.loss_ce": 0.35229, "decode.acc_seg": 86.16108, "loss": 0.35229, "time": 0.14306} -{"mode": "train", "epoch": 2, "iter": 39750, "lr": 0.0, "memory": 4602, "data_time": 0.01171, "decode.loss_ce": 0.33239, "decode.acc_seg": 87.11182, "loss": 0.33239, "time": 0.14099} 
-{"mode": "train", "epoch": 2, "iter": 39800, "lr": 0.0, "memory": 4602, "data_time": 0.01322, "decode.loss_ce": 0.34611, "decode.acc_seg": 86.75829, "loss": 0.34611, "time": 0.14069} -{"mode": "train", "epoch": 2, "iter": 39850, "lr": 0.0, "memory": 4602, "data_time": 0.01167, "decode.loss_ce": 0.34228, "decode.acc_seg": 86.81678, "loss": 0.34228, "time": 0.1522} -{"mode": "train", "epoch": 2, "iter": 39900, "lr": 0.0, "memory": 4602, "data_time": 0.01231, "decode.loss_ce": 0.35166, "decode.acc_seg": 86.32942, "loss": 0.35166, "time": 0.14271} -{"mode": "train", "epoch": 2, "iter": 39950, "lr": 0.0, "memory": 4602, "data_time": 0.01209, "decode.loss_ce": 0.33223, "decode.acc_seg": 87.03382, "loss": 0.33223, "time": 0.14087} -{"mode": "train", "epoch": 2, "iter": 40000, "lr": 0.0, "memory": 4602, "data_time": 0.01205, "decode.loss_ce": 0.33594, "decode.acc_seg": 86.88957, "loss": 0.33594, "time": 0.15708} -{"mode": "val", "epoch": 2, "iter": 250, "lr": 0.0, "aAcc": 0.7953, "mIoU": 0.406, "mAcc": 0.5178, "IoU.wall": 0.7196, "IoU.building": 0.8024, "IoU.sky": 0.939, "IoU.floor": 0.7778, "IoU.tree": 0.7125, "IoU.ceiling": 0.7906, "IoU.road": 0.7973, "IoU.bed ": 0.8479, "IoU.windowpane": 0.5614, "IoU.grass": 0.6485, "IoU.cabinet": 0.5435, "IoU.sidewalk": 0.6017, "IoU.person": 0.7661, "IoU.earth": 0.3407, "IoU.door": 0.3636, "IoU.table": 0.4975, "IoU.mountain": 0.5557, "IoU.plant": 0.4879, "IoU.curtain": 0.6504, "IoU.chair": 0.4961, "IoU.car": 0.8114, "IoU.water": 0.4725, "IoU.painting": 0.6462, "IoU.sofa": 0.5604, "IoU.shelf": 0.3486, "IoU.house": 0.4193, "IoU.sea": 0.5398, "IoU.mirror": 0.5017, "IoU.rug": 0.5032, "IoU.field": 0.2751, "IoU.armchair": 0.3371, "IoU.seat": 0.5322, "IoU.fence": 0.3364, "IoU.desk": 0.4071, "IoU.rock": 0.3306, "IoU.wardrobe": 0.4726, "IoU.lamp": 0.544, "IoU.bathtub": 0.6425, "IoU.railing": 0.2898, "IoU.cushion": 0.4533, "IoU.base": 0.1725, "IoU.box": 0.1923, "IoU.column": 0.3854, "IoU.signboard": 0.3148, "IoU.chest of drawers": 0.3726, "IoU.counter": 0.2052, "IoU.sand": 0.3034, "IoU.sink": 0.6245, "IoU.skyscraper": 0.4484, "IoU.fireplace": 0.6251, "IoU.refrigerator": 0.6145, "IoU.grandstand": 0.3863, "IoU.path": 0.2197, "IoU.stairs": 0.311, "IoU.runway": 0.7036, "IoU.case": 0.3853, "IoU.pool table": 0.9087, "IoU.pillow": 0.4866, "IoU.screen door": 0.5374, "IoU.stairway": 0.2317, "IoU.river": 0.117, "IoU.bridge": 0.2951, "IoU.bookcase": 0.2797, "IoU.blind": 0.3413, "IoU.coffee table": 0.4719, "IoU.toilet": 0.7979, "IoU.flower": 0.3538, "IoU.book": 0.4297, "IoU.hill": 0.0713, "IoU.bench": 0.381, "IoU.countertop": 0.4658, "IoU.stove": 0.621, "IoU.palm": 0.4606, "IoU.kitchen island": 0.232, "IoU.computer": 0.5416, "IoU.swivel chair": 0.3623, "IoU.boat": 0.4543, "IoU.bar": 0.2312, "IoU.arcade machine": 0.37, "IoU.hovel": 0.1739, "IoU.bus": 0.8041, "IoU.towel": 0.4906, "IoU.light": 0.4168, "IoU.truck": 0.2043, "IoU.tower": 0.2412, "IoU.chandelier": 0.5955, "IoU.awning": 0.1603, "IoU.streetlight": 0.1639, "IoU.booth": 0.2941, "IoU.television receiver": 0.531, "IoU.airplane": 0.5182, "IoU.dirt track": 0.0343, "IoU.apparel": 0.3199, "IoU.pole": 0.1444, "IoU.land": 0.0189, "IoU.bannister": 0.0937, "IoU.escalator": 0.3494, "IoU.ottoman": 0.3204, "IoU.bottle": 0.2194, "IoU.buffet": 0.3886, "IoU.poster": 0.2276, "IoU.stage": 0.1053, "IoU.van": 0.3823, "IoU.ship": 0.2717, "IoU.fountain": 0.1901, "IoU.conveyer belt": 0.6949, "IoU.canopy": 0.1328, "IoU.washer": 0.6149, "IoU.plaything": 0.1969, "IoU.swimming pool": 0.3023, "IoU.stool": 0.267, "IoU.barrel": 0.1904, "IoU.basket": 
0.2113, "IoU.waterfall": 0.3901, "IoU.tent": 0.8915, "IoU.bag": 0.0896, "IoU.minibike": 0.5985, "IoU.cradle": 0.7675, "IoU.oven": 0.4057, "IoU.ball": 0.4074, "IoU.food": 0.4912, "IoU.step": 0.0892, "IoU.tank": 0.3785, "IoU.trade name": 0.1953, "IoU.microwave": 0.3315, "IoU.pot": 0.2703, "IoU.animal": 0.5342, "IoU.bicycle": 0.4716, "IoU.lake": 0.4376, "IoU.dishwasher": 0.4775, "IoU.screen": 0.6459, "IoU.blanket": 0.038, "IoU.sculpture": 0.3959, "IoU.hood": 0.4425, "IoU.sconce": 0.2538, "IoU.vase": 0.2234, "IoU.traffic light": 0.2138, "IoU.tray": 0.0354, "IoU.ashcan": 0.3153, "IoU.fan": 0.5144, "IoU.pier": 0.4448, "IoU.crt screen": 0.0068, "IoU.plate": 0.3736, "IoU.monitor": 0.1329, "IoU.bulletin board": 0.2351, "IoU.shower": 0.0001, "IoU.radiator": 0.483, "IoU.glass": 0.0864, "IoU.clock": 0.2431, "IoU.flag": 0.2915, "Acc.wall": 0.8628, "Acc.building": 0.9166, "Acc.sky": 0.9699, "Acc.floor": 0.8948, "Acc.tree": 0.8748, "Acc.ceiling": 0.8837, "Acc.road": 0.8736, "Acc.bed ": 0.9415, "Acc.windowpane": 0.7284, "Acc.grass": 0.8008, "Acc.cabinet": 0.6802, "Acc.sidewalk": 0.7772, "Acc.person": 0.9066, "Acc.earth": 0.4776, "Acc.door": 0.4799, "Acc.table": 0.6726, "Acc.mountain": 0.7224, "Acc.plant": 0.601, "Acc.curtain": 0.7957, "Acc.chair": 0.6474, "Acc.car": 0.903, "Acc.water": 0.6156, "Acc.painting": 0.8219, "Acc.sofa": 0.7728, "Acc.shelf": 0.5259, "Acc.house": 0.5532, "Acc.sea": 0.8113, "Acc.mirror": 0.602, "Acc.rug": 0.5878, "Acc.field": 0.413, "Acc.armchair": 0.5043, "Acc.seat": 0.7409, "Acc.fence": 0.432, "Acc.desk": 0.5924, "Acc.rock": 0.4946, "Acc.wardrobe": 0.6733, "Acc.lamp": 0.7134, "Acc.bathtub": 0.7116, "Acc.railing": 0.3826, "Acc.cushion": 0.5837, "Acc.base": 0.2531, "Acc.box": 0.2664, "Acc.column": 0.4894, "Acc.signboard": 0.4106, "Acc.chest of drawers": 0.5638, "Acc.counter": 0.311, "Acc.sand": 0.4691, "Acc.sink": 0.7455, "Acc.skyscraper": 0.5665, "Acc.fireplace": 0.8151, "Acc.refrigerator": 0.7799, "Acc.grandstand": 0.6776, "Acc.path": 0.3364, "Acc.stairs": 0.3707, "Acc.runway": 0.937, "Acc.case": 0.5419, "Acc.pool table": 0.9566, "Acc.pillow": 0.6003, "Acc.screen door": 0.6193, "Acc.stairway": 0.3424, "Acc.river": 0.1903, "Acc.bridge": 0.3533, "Acc.bookcase": 0.4087, "Acc.blind": 0.3961, "Acc.coffee table": 0.7509, "Acc.toilet": 0.9, "Acc.flower": 0.5185, "Acc.book": 0.6476, "Acc.hill": 0.1202, "Acc.bench": 0.4749, "Acc.countertop": 0.6128, "Acc.stove": 0.7437, "Acc.palm": 0.599, "Acc.kitchen island": 0.4545, "Acc.computer": 0.6242, "Acc.swivel chair": 0.5315, "Acc.boat": 0.5408, "Acc.bar": 0.279, "Acc.arcade machine": 0.4188, "Acc.hovel": 0.2356, "Acc.bus": 0.9295, "Acc.towel": 0.65, "Acc.light": 0.4628, "Acc.truck": 0.2691, "Acc.tower": 0.3072, "Acc.chandelier": 0.7292, "Acc.awning": 0.182, "Acc.streetlight": 0.214, "Acc.booth": 0.3608, "Acc.television receiver": 0.7404, "Acc.airplane": 0.6499, "Acc.dirt track": 0.1409, "Acc.apparel": 0.4333, "Acc.pole": 0.1886, "Acc.land": 0.0405, "Acc.bannister": 0.1494, "Acc.escalator": 0.4818, "Acc.ottoman": 0.4378, "Acc.bottle": 0.2582, "Acc.buffet": 0.4543, "Acc.poster": 0.2907, "Acc.stage": 0.1594, "Acc.van": 0.4704, "Acc.ship": 0.3677, "Acc.fountain": 0.2122, "Acc.conveyer belt": 0.8226, "Acc.canopy": 0.1779, "Acc.washer": 0.6777, "Acc.plaything": 0.2783, "Acc.swimming pool": 0.504, "Acc.stool": 0.3193, "Acc.barrel": 0.6871, "Acc.basket": 0.2869, "Acc.waterfall": 0.4324, "Acc.tent": 0.9866, "Acc.bag": 0.1066, "Acc.minibike": 0.7437, "Acc.cradle": 0.9086, "Acc.oven": 0.5199, "Acc.ball": 0.4934, "Acc.food": 0.5933, "Acc.step": 0.1086, 
"Acc.tank": 0.4138, "Acc.trade name": 0.2269, "Acc.microwave": 0.3604, "Acc.pot": 0.3208, "Acc.animal": 0.5691, "Acc.bicycle": 0.6253, "Acc.lake": 0.5818, "Acc.dishwasher": 0.5878, "Acc.screen": 0.8361, "Acc.blanket": 0.0416, "Acc.sculpture": 0.6005, "Acc.hood": 0.5539, "Acc.sconce": 0.2904, "Acc.vase": 0.3467, "Acc.traffic light": 0.3383, "Acc.tray": 0.0529, "Acc.ashcan": 0.441, "Acc.fan": 0.6125, "Acc.pier": 0.6071, "Acc.crt screen": 0.0257, "Acc.plate": 0.4905, "Acc.monitor": 0.1652, "Acc.bulletin board": 0.344, "Acc.shower": 0.0002, "Acc.radiator": 0.5382, "Acc.glass": 0.0934, "Acc.clock": 0.2633, "Acc.flag": 0.3252} diff --git a/cv/classification/repvit/pytorch/segmentation/logs/repvit_m1_5_ade20k.json b/cv/classification/repvit/pytorch/segmentation/logs/repvit_m1_5_ade20k.json deleted file mode 100644 index b5ba88ad..00000000 --- a/cv/classification/repvit/pytorch/segmentation/logs/repvit_m1_5_ade20k.json +++ /dev/null @@ -1,811 +0,0 @@ -{"env_info": "sys.platform: linux\nPython: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]\nCUDA available: True\nGPU 0,1,2,3,4,5,6,7: NVIDIA GeForce RTX 3090\nCUDA_HOME: /home/shijiang/wangao/cuda\nNVCC: Cuda compilation tools, release 11.8, V11.8.89\nGCC: gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0\nPyTorch: 2.0.1+cu117\nPyTorch compiling details: PyTorch built with:\n - GCC 9.3\n - C++ Version: 201703\n - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\n - LAPACK is enabled (usually provided by MKL)\n - NNPACK is enabled\n - CPU capability usage: AVX2\n - CUDA Runtime 11.7\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\n - CuDNN 8.5\n - Magma 2.6.1\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n\nTorchVision: 0.15.2+cu117\nOpenCV: 4.8.0\nMMCV: 1.7.1\nMMCV Compiler: GCC 9.3\nMMCV CUDA Compiler: 11.8\nMMSegmentation: 0.30.0+c54117a", "seed": 
null, "exp_name": "fpn_repvit_m4_ade20k_40k.py", "mmseg_version": "0.30.0+c54117a", "config": "norm_cfg = dict(type='SyncBN', requires_grad=True)\nmodel = dict(\n type='EncoderDecoder',\n pretrained='open-mmlab://resnet50_v1c',\n backbone=dict(\n type='repvit_m4',\n depth=50,\n num_stages=4,\n out_indices=[5, 11, 37, 42],\n dilations=(1, 1, 1, 1),\n strides=(1, 2, 2, 2),\n norm_cfg=dict(type='SyncBN', requires_grad=True),\n norm_eval=False,\n style='pytorch',\n contract_dilation=True,\n init_cfg=dict(\n type='Pretrained',\n checkpoint='pretrain/repvit_m4_distill_300e.pth'),\n pretrained='open-mmlab://resnet50_v1c'),\n neck=dict(\n type='FPN',\n in_channels=[64, 128, 256, 512],\n out_channels=256,\n num_outs=4),\n decode_head=dict(\n type='FPNHead',\n in_channels=[256, 256, 256, 256],\n in_index=[0, 1, 2, 3],\n feature_strides=[4, 8, 16, 32],\n channels=128,\n dropout_ratio=0.1,\n num_classes=150,\n norm_cfg=dict(type='SyncBN', requires_grad=True),\n align_corners=False,\n loss_decode=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),\n train_cfg=dict(),\n test_cfg=dict(mode='whole'))\ndataset_type = 'ADE20KDataset'\ndata_root = 'data/ade/ADEChallengeData2016'\nimg_norm_cfg = dict(\n mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)\ncrop_size = (512, 512)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', reduce_zero_label=True),\n dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),\n dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),\n dict(type='RandomFlip', prob=0.5),\n dict(type='PhotoMetricDistortion'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_semantic_seg'])\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(2048, 512),\n flip=False,\n transforms=[\n dict(type='AlignResize', keep_ratio=True, size_divisor=32),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n]\ndata = dict(\n samples_per_gpu=4,\n workers_per_gpu=4,\n train=dict(\n type='RepeatDataset',\n times=50,\n dataset=dict(\n type='ADE20KDataset',\n data_root='data/ade/ADEChallengeData2016',\n img_dir='images/training',\n ann_dir='annotations/training',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', reduce_zero_label=True),\n dict(\n type='Resize',\n img_scale=(2048, 512),\n ratio_range=(0.5, 2.0)),\n dict(\n type='RandomCrop',\n crop_size=(512, 512),\n cat_max_ratio=0.75),\n dict(type='RandomFlip', prob=0.5),\n dict(type='PhotoMetricDistortion'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_semantic_seg'])\n ])),\n val=dict(\n type='ADE20KDataset',\n data_root='data/ade/ADEChallengeData2016',\n img_dir='images/validation',\n ann_dir='annotations/validation',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(2048, 512),\n flip=False,\n transforms=[\n dict(type='AlignResize', keep_ratio=True, size_divisor=32),\n 
dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]),\n test=dict(\n type='ADE20KDataset',\n data_root='data/ade/ADEChallengeData2016',\n img_dir='images/validation',\n ann_dir='annotations/validation',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(2048, 512),\n flip=False,\n transforms=[\n dict(type='AlignResize', keep_ratio=True, size_divisor=32),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]))\nlog_config = dict(\n interval=50, hooks=[dict(type='TextLoggerHook', by_epoch=False)])\ndist_params = dict(backend='nccl')\nlog_level = 'INFO'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\ncudnn_benchmark = True\ngpu_multiples = 2\noptimizer = dict(type='AdamW', lr=0.0002, weight_decay=0.0001)\noptimizer_config = dict()\nlr_config = dict(policy='poly', power=0.9, min_lr=1e-06, by_epoch=False)\nrunner = dict(type='IterBasedRunner', max_iters=40000)\ncheckpoint_config = dict(by_epoch=False, interval=4000)\nevaluation = dict(interval=4000, metric='mIoU')\nwork_dir = './work_dirs/fpn_repvit_m4_ade20k_40k'\ngpu_ids = range(0, 8)\ndevice = 'cuda'\nseed = None\n", "CLASSES": ["wall", "building", "sky", "floor", "tree", "ceiling", "road", "bed ", "windowpane", "grass", "cabinet", "sidewalk", "person", "earth", "door", "table", "mountain", "plant", "curtain", "chair", "car", "water", "painting", "sofa", "shelf", "house", "sea", "mirror", "rug", "field", "armchair", "seat", "fence", "desk", "rock", "wardrobe", "lamp", "bathtub", "railing", "cushion", "base", "box", "column", "signboard", "chest of drawers", "counter", "sand", "sink", "skyscraper", "fireplace", "refrigerator", "grandstand", "path", "stairs", "runway", "case", "pool table", "pillow", "screen door", "stairway", "river", "bridge", "bookcase", "blind", "coffee table", "toilet", "flower", "book", "hill", "bench", "countertop", "stove", "palm", "kitchen island", "computer", "swivel chair", "boat", "bar", "arcade machine", "hovel", "bus", "towel", "light", "truck", "tower", "chandelier", "awning", "streetlight", "booth", "television receiver", "airplane", "dirt track", "apparel", "pole", "land", "bannister", "escalator", "ottoman", "bottle", "buffet", "poster", "stage", "van", "ship", "fountain", "conveyer belt", "canopy", "washer", "plaything", "swimming pool", "stool", "barrel", "basket", "waterfall", "tent", "bag", "minibike", "cradle", "oven", "ball", "food", "step", "tank", "trade name", "microwave", "pot", "animal", "bicycle", "lake", "dishwasher", "screen", "blanket", "sculpture", "hood", "sconce", "vase", "traffic light", "tray", "ashcan", "fan", "pier", "crt screen", "plate", "monitor", "bulletin board", "shower", "radiator", "glass", "clock", "flag"], "PALETTE": [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 
255, 224], [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], [102, 255, 0], [92, 0, 255]]} -{"mode": "train", "epoch": 1, "iter": 50, "lr": 0.0002, "memory": 20279, "data_time": 0.03151, "decode.loss_ce": 2.62003, "decode.acc_seg": 43.34343, "loss": 2.62003, "time": 0.49481} -{"mode": "train", "epoch": 1, "iter": 100, "lr": 0.0002, "memory": 20279, "data_time": 0.01953, "decode.loss_ce": 1.8005, "decode.acc_seg": 53.94683, "loss": 1.8005, "time": 0.27777} -{"mode": "train", "epoch": 1, "iter": 150, "lr": 0.0002, "memory": 20279, "data_time": 0.02096, "decode.loss_ce": 1.56437, "decode.acc_seg": 57.34185, "loss": 1.56437, "time": 0.2739} -{"mode": "train", "epoch": 1, "iter": 200, "lr": 0.0002, "memory": 20279, "data_time": 0.01897, "decode.loss_ce": 1.52875, "decode.acc_seg": 57.98145, "loss": 1.52875, "time": 0.27406} -{"mode": "train", "epoch": 1, "iter": 250, "lr": 0.0002, "memory": 20279, "data_time": 0.01971, "decode.loss_ce": 1.37788, "decode.acc_seg": 60.68106, "loss": 1.37788, "time": 0.26709} -{"mode": "train", "epoch": 1, "iter": 300, "lr": 0.0002, "memory": 20279, "data_time": 0.01965, "decode.loss_ce": 1.29245, "decode.acc_seg": 62.90603, "loss": 1.29245, "time": 0.26786} -{"mode": "train", "epoch": 1, "iter": 350, "lr": 0.0002, "memory": 20279, "data_time": 0.01962, "decode.loss_ce": 1.27093, "decode.acc_seg": 62.5947, "loss": 1.27093, "time": 0.26821} -{"mode": "train", "epoch": 1, "iter": 400, "lr": 0.0002, "memory": 20279, "data_time": 0.02023, "decode.loss_ce": 1.25883, "decode.acc_seg": 61.69349, "loss": 1.25883, "time": 0.27137} -{"mode": "train", "epoch": 1, "iter": 450, "lr": 0.0002, "memory": 20279, "data_time": 0.01775, "decode.loss_ce": 1.23661, "decode.acc_seg": 62.87101, "loss": 1.23661, "time": 0.26975} -{"mode": "train", "epoch": 1, "iter": 500, "lr": 0.0002, "memory": 20279, "data_time": 0.01978, "decode.loss_ce": 1.20041, 
"decode.acc_seg": 63.39562, "loss": 1.20041, "time": 0.26846} -{"mode": "train", "epoch": 1, "iter": 550, "lr": 0.0002, "memory": 20279, "data_time": 0.01988, "decode.loss_ce": 1.16538, "decode.acc_seg": 64.24006, "loss": 1.16538, "time": 0.2683} -{"mode": "train", "epoch": 1, "iter": 600, "lr": 0.0002, "memory": 20279, "data_time": 0.02086, "decode.loss_ce": 1.1066, "decode.acc_seg": 65.66854, "loss": 1.1066, "time": 0.26586} -{"mode": "train", "epoch": 1, "iter": 650, "lr": 0.0002, "memory": 20279, "data_time": 0.02007, "decode.loss_ce": 1.12313, "decode.acc_seg": 64.1598, "loss": 1.12313, "time": 0.2649} -{"mode": "train", "epoch": 1, "iter": 700, "lr": 0.0002, "memory": 20279, "data_time": 0.01984, "decode.loss_ce": 1.05495, "decode.acc_seg": 65.76144, "loss": 1.05495, "time": 0.26844} -{"mode": "train", "epoch": 1, "iter": 750, "lr": 0.0002, "memory": 20279, "data_time": 0.02066, "decode.loss_ce": 1.04822, "decode.acc_seg": 66.23512, "loss": 1.04822, "time": 0.27005} -{"mode": "train", "epoch": 1, "iter": 800, "lr": 0.0002, "memory": 20279, "data_time": 0.01913, "decode.loss_ce": 1.06809, "decode.acc_seg": 65.05076, "loss": 1.06809, "time": 0.26722} -{"mode": "train", "epoch": 1, "iter": 850, "lr": 0.0002, "memory": 20279, "data_time": 0.0196, "decode.loss_ce": 1.01774, "decode.acc_seg": 66.9929, "loss": 1.01774, "time": 0.27035} -{"mode": "train", "epoch": 1, "iter": 900, "lr": 0.0002, "memory": 20279, "data_time": 0.02033, "decode.loss_ce": 1.00707, "decode.acc_seg": 67.02223, "loss": 1.00707, "time": 0.27082} -{"mode": "train", "epoch": 1, "iter": 950, "lr": 0.0002, "memory": 20279, "data_time": 0.01976, "decode.loss_ce": 0.98181, "decode.acc_seg": 67.61421, "loss": 0.98181, "time": 0.26742} -{"mode": "train", "epoch": 1, "iter": 1000, "lr": 0.0002, "memory": 20279, "data_time": 0.02193, "decode.loss_ce": 1.00857, "decode.acc_seg": 67.04961, "loss": 1.00857, "time": 0.27027} -{"mode": "train", "epoch": 1, "iter": 1050, "lr": 0.0002, "memory": 20279, "data_time": 0.01985, "decode.loss_ce": 1.00363, "decode.acc_seg": 66.87983, "loss": 1.00363, "time": 0.26364} -{"mode": "train", "epoch": 1, "iter": 1100, "lr": 0.0002, "memory": 20279, "data_time": 0.02105, "decode.loss_ce": 0.97566, "decode.acc_seg": 67.912, "loss": 0.97566, "time": 0.27618} -{"mode": "train", "epoch": 1, "iter": 1150, "lr": 0.00019, "memory": 20279, "data_time": 0.02048, "decode.loss_ce": 0.9785, "decode.acc_seg": 68.42685, "loss": 0.9785, "time": 0.26939} -{"mode": "train", "epoch": 1, "iter": 1200, "lr": 0.00019, "memory": 20279, "data_time": 0.02008, "decode.loss_ce": 0.96777, "decode.acc_seg": 67.82864, "loss": 0.96777, "time": 0.26653} -{"mode": "train", "epoch": 1, "iter": 1250, "lr": 0.00019, "memory": 20279, "data_time": 0.02005, "decode.loss_ce": 0.89166, "decode.acc_seg": 69.86159, "loss": 0.89166, "time": 0.2675} -{"mode": "train", "epoch": 1, "iter": 1300, "lr": 0.00019, "memory": 20279, "data_time": 0.02031, "decode.loss_ce": 0.95049, "decode.acc_seg": 67.75644, "loss": 0.95049, "time": 0.26651} -{"mode": "train", "epoch": 1, "iter": 1350, "lr": 0.00019, "memory": 20279, "data_time": 0.01993, "decode.loss_ce": 0.89204, "decode.acc_seg": 69.90202, "loss": 0.89204, "time": 0.2677} -{"mode": "train", "epoch": 1, "iter": 1400, "lr": 0.00019, "memory": 20279, "data_time": 0.01855, "decode.loss_ce": 0.93683, "decode.acc_seg": 69.00999, "loss": 0.93683, "time": 0.26438} -{"mode": "train", "epoch": 1, "iter": 1450, "lr": 0.00019, "memory": 20279, "data_time": 0.0195, "decode.loss_ce": 0.89739, "decode.acc_seg": 
69.43619, "loss": 0.89739, "time": 0.26797} -{"mode": "train", "epoch": 1, "iter": 1500, "lr": 0.00019, "memory": 20279, "data_time": 0.01802, "decode.loss_ce": 0.89869, "decode.acc_seg": 69.7566, "loss": 0.89869, "time": 0.26972} -{"mode": "train", "epoch": 1, "iter": 1550, "lr": 0.00019, "memory": 20279, "data_time": 0.01962, "decode.loss_ce": 0.89281, "decode.acc_seg": 69.93085, "loss": 0.89281, "time": 0.26988} -{"mode": "train", "epoch": 1, "iter": 1600, "lr": 0.00019, "memory": 20279, "data_time": 0.02041, "decode.loss_ce": 0.89911, "decode.acc_seg": 69.48337, "loss": 0.89911, "time": 0.26805} -{"mode": "train", "epoch": 1, "iter": 1650, "lr": 0.00019, "memory": 20279, "data_time": 0.02152, "decode.loss_ce": 0.87353, "decode.acc_seg": 70.14293, "loss": 0.87353, "time": 0.26669} -{"mode": "train", "epoch": 1, "iter": 1700, "lr": 0.00019, "memory": 20279, "data_time": 0.02039, "decode.loss_ce": 0.88088, "decode.acc_seg": 70.43658, "loss": 0.88088, "time": 0.26782} -{"mode": "train", "epoch": 1, "iter": 1750, "lr": 0.00019, "memory": 20279, "data_time": 0.02128, "decode.loss_ce": 0.89356, "decode.acc_seg": 70.20326, "loss": 0.89356, "time": 0.27056} -{"mode": "train", "epoch": 1, "iter": 1800, "lr": 0.00019, "memory": 20279, "data_time": 0.01823, "decode.loss_ce": 0.87908, "decode.acc_seg": 69.85708, "loss": 0.87908, "time": 0.26709} -{"mode": "train", "epoch": 1, "iter": 1850, "lr": 0.00019, "memory": 20279, "data_time": 0.01934, "decode.loss_ce": 0.87942, "decode.acc_seg": 69.52898, "loss": 0.87942, "time": 0.27299} -{"mode": "train", "epoch": 1, "iter": 1900, "lr": 0.00019, "memory": 20279, "data_time": 0.01912, "decode.loss_ce": 0.85869, "decode.acc_seg": 70.46213, "loss": 0.85869, "time": 0.26569} -{"mode": "train", "epoch": 1, "iter": 1950, "lr": 0.00019, "memory": 20279, "data_time": 0.02074, "decode.loss_ce": 0.86318, "decode.acc_seg": 70.55742, "loss": 0.86318, "time": 0.26579} -{"mode": "train", "epoch": 1, "iter": 2000, "lr": 0.00019, "memory": 20279, "data_time": 0.02101, "decode.loss_ce": 0.84842, "decode.acc_seg": 70.87496, "loss": 0.84842, "time": 0.26962} -{"mode": "train", "epoch": 1, "iter": 2050, "lr": 0.00019, "memory": 20279, "data_time": 0.02073, "decode.loss_ce": 0.83178, "decode.acc_seg": 71.24526, "loss": 0.83178, "time": 0.26846} -{"mode": "train", "epoch": 1, "iter": 2100, "lr": 0.00019, "memory": 20279, "data_time": 0.02023, "decode.loss_ce": 0.81652, "decode.acc_seg": 72.00908, "loss": 0.81652, "time": 0.27457} -{"mode": "train", "epoch": 1, "iter": 2150, "lr": 0.00019, "memory": 20279, "data_time": 0.02134, "decode.loss_ce": 0.82314, "decode.acc_seg": 71.84284, "loss": 0.82314, "time": 0.27065} -{"mode": "train", "epoch": 1, "iter": 2200, "lr": 0.00019, "memory": 20279, "data_time": 0.02172, "decode.loss_ce": 0.8258, "decode.acc_seg": 71.08238, "loss": 0.8258, "time": 0.2699} -{"mode": "train", "epoch": 1, "iter": 2250, "lr": 0.00019, "memory": 20279, "data_time": 0.02003, "decode.loss_ce": 0.82804, "decode.acc_seg": 71.55953, "loss": 0.82804, "time": 0.26381} -{"mode": "train", "epoch": 1, "iter": 2300, "lr": 0.00019, "memory": 20279, "data_time": 0.01964, "decode.loss_ce": 0.80247, "decode.acc_seg": 71.80861, "loss": 0.80247, "time": 0.26859} -{"mode": "train", "epoch": 1, "iter": 2350, "lr": 0.00019, "memory": 20279, "data_time": 0.01875, "decode.loss_ce": 0.84207, "decode.acc_seg": 70.50561, "loss": 0.84207, "time": 0.26687} -{"mode": "train", "epoch": 1, "iter": 2400, "lr": 0.00019, "memory": 20279, "data_time": 0.02035, "decode.loss_ce": 0.82082, 
"decode.acc_seg": 71.84458, "loss": 0.82082, "time": 0.26442} -{"mode": "train", "epoch": 1, "iter": 2450, "lr": 0.00019, "memory": 20279, "data_time": 0.01941, "decode.loss_ce": 0.81194, "decode.acc_seg": 71.9543, "loss": 0.81194, "time": 0.27092} -{"mode": "train", "epoch": 1, "iter": 2500, "lr": 0.00019, "memory": 20279, "data_time": 0.0205, "decode.loss_ce": 0.77219, "decode.acc_seg": 72.57929, "loss": 0.77219, "time": 0.26822} -{"mode": "train", "epoch": 1, "iter": 2550, "lr": 0.00019, "memory": 20279, "data_time": 0.01944, "decode.loss_ce": 0.80678, "decode.acc_seg": 72.14775, "loss": 0.80678, "time": 0.26617} -{"mode": "train", "epoch": 1, "iter": 2600, "lr": 0.00019, "memory": 20279, "data_time": 0.01867, "decode.loss_ce": 0.82741, "decode.acc_seg": 71.3506, "loss": 0.82741, "time": 0.26431} -{"mode": "train", "epoch": 1, "iter": 2650, "lr": 0.00019, "memory": 20279, "data_time": 0.01985, "decode.loss_ce": 0.78083, "decode.acc_seg": 72.6892, "loss": 0.78083, "time": 0.26671} -{"mode": "train", "epoch": 1, "iter": 2700, "lr": 0.00019, "memory": 20279, "data_time": 0.02154, "decode.loss_ce": 0.79765, "decode.acc_seg": 72.06312, "loss": 0.79765, "time": 0.26946} -{"mode": "train", "epoch": 1, "iter": 2750, "lr": 0.00019, "memory": 20279, "data_time": 0.02122, "decode.loss_ce": 0.78182, "decode.acc_seg": 72.76973, "loss": 0.78182, "time": 0.26763} -{"mode": "train", "epoch": 1, "iter": 2800, "lr": 0.00019, "memory": 20279, "data_time": 0.02176, "decode.loss_ce": 0.77617, "decode.acc_seg": 73.02993, "loss": 0.77617, "time": 0.26888} -{"mode": "train", "epoch": 1, "iter": 2850, "lr": 0.00019, "memory": 20279, "data_time": 0.02011, "decode.loss_ce": 0.76376, "decode.acc_seg": 72.62038, "loss": 0.76376, "time": 0.2743} -{"mode": "train", "epoch": 1, "iter": 2900, "lr": 0.00019, "memory": 20279, "data_time": 0.02084, "decode.loss_ce": 0.7511, "decode.acc_seg": 73.323, "loss": 0.7511, "time": 0.26523} -{"mode": "train", "epoch": 1, "iter": 2950, "lr": 0.00019, "memory": 20279, "data_time": 0.02054, "decode.loss_ce": 0.77708, "decode.acc_seg": 72.49452, "loss": 0.77708, "time": 0.26466} -{"mode": "train", "epoch": 1, "iter": 3000, "lr": 0.00019, "memory": 20279, "data_time": 0.01987, "decode.loss_ce": 0.78173, "decode.acc_seg": 72.39638, "loss": 0.78173, "time": 0.26317} -{"mode": "train", "epoch": 1, "iter": 3050, "lr": 0.00019, "memory": 20279, "data_time": 0.01927, "decode.loss_ce": 0.76042, "decode.acc_seg": 73.16764, "loss": 0.76042, "time": 0.26681} -{"mode": "train", "epoch": 1, "iter": 3100, "lr": 0.00019, "memory": 20279, "data_time": 0.0194, "decode.loss_ce": 0.77602, "decode.acc_seg": 72.42564, "loss": 0.77602, "time": 0.27328} -{"mode": "train", "epoch": 1, "iter": 3150, "lr": 0.00019, "memory": 20279, "data_time": 0.02158, "decode.loss_ce": 0.76345, "decode.acc_seg": 73.03734, "loss": 0.76345, "time": 0.26683} -{"mode": "train", "epoch": 1, "iter": 3200, "lr": 0.00019, "memory": 20279, "data_time": 0.01997, "decode.loss_ce": 0.7443, "decode.acc_seg": 73.58507, "loss": 0.7443, "time": 0.2659} -{"mode": "train", "epoch": 1, "iter": 3250, "lr": 0.00019, "memory": 20279, "data_time": 0.02106, "decode.loss_ce": 0.74752, "decode.acc_seg": 73.46893, "loss": 0.74752, "time": 0.26548} -{"mode": "train", "epoch": 1, "iter": 3300, "lr": 0.00019, "memory": 20279, "data_time": 0.02077, "decode.loss_ce": 0.76925, "decode.acc_seg": 73.14004, "loss": 0.76925, "time": 0.27022} -{"mode": "train", "epoch": 1, "iter": 3350, "lr": 0.00018, "memory": 20279, "data_time": 0.01958, "decode.loss_ce": 
0.76096, "decode.acc_seg": 73.26491, "loss": 0.76096, "time": 0.26263} -{"mode": "train", "epoch": 1, "iter": 3400, "lr": 0.00018, "memory": 20279, "data_time": 0.02243, "decode.loss_ce": 0.76844, "decode.acc_seg": 73.26544, "loss": 0.76844, "time": 0.26775} -{"mode": "train", "epoch": 1, "iter": 3450, "lr": 0.00018, "memory": 20279, "data_time": 0.02015, "decode.loss_ce": 0.75687, "decode.acc_seg": 73.5067, "loss": 0.75687, "time": 0.2698} -{"mode": "train", "epoch": 1, "iter": 3500, "lr": 0.00018, "memory": 20279, "data_time": 0.02, "decode.loss_ce": 0.76185, "decode.acc_seg": 73.09075, "loss": 0.76185, "time": 0.26899} -{"mode": "train", "epoch": 1, "iter": 3550, "lr": 0.00018, "memory": 20279, "data_time": 0.02108, "decode.loss_ce": 0.72605, "decode.acc_seg": 73.97653, "loss": 0.72605, "time": 0.26683} -{"mode": "train", "epoch": 1, "iter": 3600, "lr": 0.00018, "memory": 20279, "data_time": 0.02069, "decode.loss_ce": 0.73577, "decode.acc_seg": 73.56069, "loss": 0.73577, "time": 0.26755} -{"mode": "train", "epoch": 1, "iter": 3650, "lr": 0.00018, "memory": 20279, "data_time": 0.01949, "decode.loss_ce": 0.73058, "decode.acc_seg": 73.86721, "loss": 0.73058, "time": 0.26327} -{"mode": "train", "epoch": 1, "iter": 3700, "lr": 0.00018, "memory": 20279, "data_time": 0.02063, "decode.loss_ce": 0.7135, "decode.acc_seg": 74.43347, "loss": 0.7135, "time": 0.26408} -{"mode": "train", "epoch": 1, "iter": 3750, "lr": 0.00018, "memory": 20279, "data_time": 0.01834, "decode.loss_ce": 0.73468, "decode.acc_seg": 73.8633, "loss": 0.73468, "time": 0.26429} -{"mode": "train", "epoch": 1, "iter": 3800, "lr": 0.00018, "memory": 20279, "data_time": 0.02015, "decode.loss_ce": 0.71828, "decode.acc_seg": 74.11227, "loss": 0.71828, "time": 0.2661} -{"mode": "train", "epoch": 1, "iter": 3850, "lr": 0.00018, "memory": 20279, "data_time": 0.01919, "decode.loss_ce": 0.69458, "decode.acc_seg": 74.69197, "loss": 0.69458, "time": 0.27561} -{"mode": "train", "epoch": 1, "iter": 3900, "lr": 0.00018, "memory": 20279, "data_time": 0.02142, "decode.loss_ce": 0.73489, "decode.acc_seg": 74.2501, "loss": 0.73489, "time": 0.26663} -{"mode": "train", "epoch": 1, "iter": 3950, "lr": 0.00018, "memory": 20279, "data_time": 0.01871, "decode.loss_ce": 0.72226, "decode.acc_seg": 74.27717, "loss": 0.72226, "time": 0.26842} -{"mode": "train", "epoch": 1, "iter": 4000, "lr": 0.00018, "memory": 20279, "data_time": 0.02021, "decode.loss_ce": 0.72574, "decode.acc_seg": 74.40362, "loss": 0.72574, "time": 0.2808} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00018, "aAcc": 0.7641, "mIoU": 0.3201, "mAcc": 0.4277, "IoU.wall": 0.6797, "IoU.building": 0.7624, "IoU.sky": 0.9278, "IoU.floor": 0.7388, "IoU.tree": 0.683, "IoU.ceiling": 0.7629, "IoU.road": 0.7425, "IoU.bed ": 0.7768, "IoU.windowpane": 0.5081, "IoU.grass": 0.6852, "IoU.cabinet": 0.4877, "IoU.sidewalk": 0.5214, "IoU.person": 0.7495, "IoU.earth": 0.3225, "IoU.door": 0.2978, "IoU.table": 0.4276, "IoU.mountain": 0.5304, "IoU.plant": 0.4558, "IoU.curtain": 0.6062, "IoU.chair": 0.3988, "IoU.car": 0.7763, "IoU.water": 0.5431, "IoU.painting": 0.5825, "IoU.sofa": 0.4967, "IoU.shelf": 0.325, "IoU.house": 0.3739, "IoU.sea": 0.583, "IoU.mirror": 0.403, "IoU.rug": 0.4472, "IoU.field": 0.3129, "IoU.armchair": 0.1997, "IoU.seat": 0.466, "IoU.fence": 0.2859, "IoU.desk": 0.2508, "IoU.rock": 0.3619, "IoU.wardrobe": 0.3699, "IoU.lamp": 0.4827, "IoU.bathtub": 0.5221, "IoU.railing": 0.2486, "IoU.cushion": 0.3669, "IoU.base": 0.0864, "IoU.box": 0.1357, "IoU.column": 0.3457, "IoU.signboard": 0.2976, 
"IoU.chest of drawers": 0.3553, "IoU.counter": 0.1819, "IoU.sand": 0.2314, "IoU.sink": 0.4575, "IoU.skyscraper": 0.3566, "IoU.fireplace": 0.5216, "IoU.refrigerator": 0.525, "IoU.grandstand": 0.3326, "IoU.path": 0.1077, "IoU.stairs": 0.1988, "IoU.runway": 0.6246, "IoU.case": 0.4117, "IoU.pool table": 0.8913, "IoU.pillow": 0.4105, "IoU.screen door": 0.3075, "IoU.stairway": 0.2248, "IoU.river": 0.0444, "IoU.bridge": 0.4014, "IoU.bookcase": 0.2445, "IoU.blind": 0.1695, "IoU.coffee table": 0.4351, "IoU.toilet": 0.6558, "IoU.flower": 0.2489, "IoU.book": 0.381, "IoU.hill": 0.0153, "IoU.bench": 0.2111, "IoU.countertop": 0.3382, "IoU.stove": 0.4443, "IoU.palm": 0.4212, "IoU.kitchen island": 0.0982, "IoU.computer": 0.4922, "IoU.swivel chair": 0.235, "IoU.boat": 0.579, "IoU.bar": 0.0174, "IoU.arcade machine": 0.2582, "IoU.hovel": 0.3472, "IoU.bus": 0.7949, "IoU.towel": 0.4404, "IoU.light": 0.2137, "IoU.truck": 0.1252, "IoU.tower": 0.0168, "IoU.chandelier": 0.5072, "IoU.awning": 0.0665, "IoU.streetlight": 0.1342, "IoU.booth": 0.0003, "IoU.television receiver": 0.4404, "IoU.airplane": 0.4526, "IoU.dirt track": 0.0131, "IoU.apparel": 0.2706, "IoU.pole": 0.0651, "IoU.land": 0.0, "IoU.bannister": 0.0112, "IoU.escalator": 0.1362, "IoU.ottoman": 0.0932, "IoU.bottle": 0.1404, "IoU.buffet": 0.11, "IoU.poster": 0.0011, "IoU.stage": 0.013, "IoU.van": 0.1645, "IoU.ship": 0.0, "IoU.fountain": 0.136, "IoU.conveyer belt": 0.3169, "IoU.canopy": 0.0926, "IoU.washer": 0.5946, "IoU.plaything": 0.0567, "IoU.swimming pool": 0.1644, "IoU.stool": 0.0898, "IoU.barrel": 0.0484, "IoU.basket": 0.1477, "IoU.waterfall": 0.6064, "IoU.tent": 0.8583, "IoU.bag": 0.0365, "IoU.minibike": 0.4453, "IoU.cradle": 0.5597, "IoU.oven": 0.0129, "IoU.ball": 0.3856, "IoU.food": 0.4813, "IoU.step": 0.0, "IoU.tank": 0.017, "IoU.trade name": 0.1649, "IoU.microwave": 0.3513, "IoU.pot": 0.2658, "IoU.animal": 0.4727, "IoU.bicycle": 0.4558, "IoU.lake": 0.0, "IoU.dishwasher": 0.2743, "IoU.screen": 0.6232, "IoU.blanket": 0.0, "IoU.sculpture": 0.0685, "IoU.hood": 0.2049, "IoU.sconce": 0.0175, "IoU.vase": 0.2209, "IoU.traffic light": 0.1387, "IoU.tray": 0.0152, "IoU.ashcan": 0.2268, "IoU.fan": 0.232, "IoU.pier": 0.0003, "IoU.crt screen": 0.0012, "IoU.plate": 0.3328, "IoU.monitor": 0.1032, "IoU.bulletin board": 0.2984, "IoU.shower": 0.0, "IoU.radiator": 0.2907, "IoU.glass": 0.0099, "IoU.clock": 0.1653, "IoU.flag": 0.3134, "Acc.wall": 0.7929, "Acc.building": 0.9285, "Acc.sky": 0.9668, "Acc.floor": 0.8816, "Acc.tree": 0.8612, "Acc.ceiling": 0.884, "Acc.road": 0.8501, "Acc.bed ": 0.9394, "Acc.windowpane": 0.7148, "Acc.grass": 0.8179, "Acc.cabinet": 0.6729, "Acc.sidewalk": 0.7582, "Acc.person": 0.8929, "Acc.earth": 0.4471, "Acc.door": 0.4622, "Acc.table": 0.583, "Acc.mountain": 0.7064, "Acc.plant": 0.5738, "Acc.curtain": 0.8419, "Acc.chair": 0.5109, "Acc.car": 0.8957, "Acc.water": 0.7507, "Acc.painting": 0.7823, "Acc.sofa": 0.7512, "Acc.shelf": 0.5748, "Acc.house": 0.4309, "Acc.sea": 0.8303, "Acc.mirror": 0.5117, "Acc.rug": 0.52, "Acc.field": 0.5071, "Acc.armchair": 0.3276, "Acc.seat": 0.5804, "Acc.fence": 0.3803, "Acc.desk": 0.3532, "Acc.rock": 0.4645, "Acc.wardrobe": 0.5692, "Acc.lamp": 0.6634, "Acc.bathtub": 0.6626, "Acc.railing": 0.3665, "Acc.cushion": 0.5735, "Acc.base": 0.1219, "Acc.box": 0.2302, "Acc.column": 0.4084, "Acc.signboard": 0.4005, "Acc.chest of drawers": 0.4861, "Acc.counter": 0.3661, "Acc.sand": 0.2426, "Acc.sink": 0.7755, "Acc.skyscraper": 0.4082, "Acc.fireplace": 0.8142, "Acc.refrigerator": 0.6997, "Acc.grandstand": 0.5715, "Acc.path": 
0.1443, "Acc.stairs": 0.2191, "Acc.runway": 0.7046, "Acc.case": 0.5332, "Acc.pool table": 0.934, "Acc.pillow": 0.5665, "Acc.screen door": 0.6719, "Acc.stairway": 0.3626, "Acc.river": 0.0477, "Acc.bridge": 0.4428, "Acc.bookcase": 0.3589, "Acc.blind": 0.1957, "Acc.coffee table": 0.6742, "Acc.toilet": 0.8932, "Acc.flower": 0.3774, "Acc.book": 0.6361, "Acc.hill": 0.0154, "Acc.bench": 0.244, "Acc.countertop": 0.4185, "Acc.stove": 0.4809, "Acc.palm": 0.4921, "Acc.kitchen island": 0.1184, "Acc.computer": 0.6295, "Acc.swivel chair": 0.2872, "Acc.boat": 0.8347, "Acc.bar": 0.0175, "Acc.arcade machine": 0.3983, "Acc.hovel": 0.5434, "Acc.bus": 0.8895, "Acc.towel": 0.6179, "Acc.light": 0.236, "Acc.truck": 0.1682, "Acc.tower": 0.0169, "Acc.chandelier": 0.681, "Acc.awning": 0.0789, "Acc.streetlight": 0.1847, "Acc.booth": 0.0003, "Acc.television receiver": 0.8141, "Acc.airplane": 0.654, "Acc.dirt track": 0.0177, "Acc.apparel": 0.4254, "Acc.pole": 0.0717, "Acc.land": 0.0, "Acc.bannister": 0.0171, "Acc.escalator": 0.1605, "Acc.ottoman": 0.1049, "Acc.bottle": 0.1677, "Acc.buffet": 0.1148, "Acc.poster": 0.0014, "Acc.stage": 0.0148, "Acc.van": 0.2099, "Acc.ship": 0.0, "Acc.fountain": 0.1533, "Acc.conveyer belt": 0.5421, "Acc.canopy": 0.1181, "Acc.washer": 0.7321, "Acc.plaything": 0.0912, "Acc.swimming pool": 0.2025, "Acc.stool": 0.0957, "Acc.barrel": 0.843, "Acc.basket": 0.2274, "Acc.waterfall": 0.7506, "Acc.tent": 0.9573, "Acc.bag": 0.0376, "Acc.minibike": 0.6069, "Acc.cradle": 0.747, "Acc.oven": 0.0135, "Acc.ball": 0.5344, "Acc.food": 0.6733, "Acc.step": 0.0, "Acc.tank": 0.0175, "Acc.trade name": 0.1843, "Acc.microwave": 0.4013, "Acc.pot": 0.3499, "Acc.animal": 0.5059, "Acc.bicycle": 0.7504, "Acc.lake": 0.0, "Acc.dishwasher": 0.3662, "Acc.screen": 0.8022, "Acc.blanket": 0.0, "Acc.sculpture": 0.0707, "Acc.hood": 0.2196, "Acc.sconce": 0.0176, "Acc.vase": 0.4273, "Acc.traffic light": 0.2867, "Acc.tray": 0.0178, "Acc.ashcan": 0.2984, "Acc.fan": 0.2658, "Acc.pier": 0.0003, "Acc.crt screen": 0.0023, "Acc.plate": 0.588, "Acc.monitor": 0.1157, "Acc.bulletin board": 0.3186, "Acc.shower": 0.0, "Acc.radiator": 0.2951, "Acc.glass": 0.0102, "Acc.clock": 0.1858, "Acc.flag": 0.332} -{"mode": "train", "epoch": 1, "iter": 4050, "lr": 0.00018, "memory": 20279, "data_time": 2.55973, "decode.loss_ce": 0.70634, "decode.acc_seg": 74.25934, "loss": 0.70634, "time": 2.80962} -{"mode": "train", "epoch": 1, "iter": 4100, "lr": 0.00018, "memory": 20279, "data_time": 0.02104, "decode.loss_ce": 0.72506, "decode.acc_seg": 73.80892, "loss": 0.72506, "time": 0.27158} -{"mode": "train", "epoch": 1, "iter": 4150, "lr": 0.00018, "memory": 20279, "data_time": 0.0208, "decode.loss_ce": 0.74227, "decode.acc_seg": 73.38658, "loss": 0.74227, "time": 0.26656} -{"mode": "train", "epoch": 1, "iter": 4200, "lr": 0.00018, "memory": 20279, "data_time": 0.02053, "decode.loss_ce": 0.71227, "decode.acc_seg": 74.49108, "loss": 0.71227, "time": 0.26296} -{"mode": "train", "epoch": 1, "iter": 4250, "lr": 0.00018, "memory": 20279, "data_time": 0.02066, "decode.loss_ce": 0.67858, "decode.acc_seg": 75.42567, "loss": 0.67858, "time": 0.26864} -{"mode": "train", "epoch": 1, "iter": 4300, "lr": 0.00018, "memory": 20279, "data_time": 0.01904, "decode.loss_ce": 0.74, "decode.acc_seg": 73.62642, "loss": 0.74, "time": 0.26657} -{"mode": "train", "epoch": 1, "iter": 4350, "lr": 0.00018, "memory": 20279, "data_time": 0.02068, "decode.loss_ce": 0.69114, "decode.acc_seg": 75.18222, "loss": 0.69114, "time": 0.26661} -{"mode": "train", "epoch": 1, "iter": 4400, "lr": 
0.00018, "memory": 20279, "data_time": 0.01977, "decode.loss_ce": 0.71441, "decode.acc_seg": 74.47626, "loss": 0.71441, "time": 0.2637} -{"mode": "train", "epoch": 1, "iter": 4450, "lr": 0.00018, "memory": 20279, "data_time": 0.02341, "decode.loss_ce": 0.6757, "decode.acc_seg": 75.68255, "loss": 0.6757, "time": 0.27688} -{"mode": "train", "epoch": 1, "iter": 4500, "lr": 0.00018, "memory": 20279, "data_time": 0.02139, "decode.loss_ce": 0.69842, "decode.acc_seg": 74.5713, "loss": 0.69842, "time": 0.27081} -{"mode": "train", "epoch": 1, "iter": 4550, "lr": 0.00018, "memory": 20279, "data_time": 0.01999, "decode.loss_ce": 0.72331, "decode.acc_seg": 74.40029, "loss": 0.72331, "time": 0.26889} -{"mode": "train", "epoch": 1, "iter": 4600, "lr": 0.00018, "memory": 20279, "data_time": 0.02071, "decode.loss_ce": 0.66296, "decode.acc_seg": 76.09033, "loss": 0.66296, "time": 0.26649} -{"mode": "train", "epoch": 1, "iter": 4650, "lr": 0.00018, "memory": 20279, "data_time": 0.01988, "decode.loss_ce": 0.67556, "decode.acc_seg": 75.63889, "loss": 0.67556, "time": 0.27069} -{"mode": "train", "epoch": 1, "iter": 4700, "lr": 0.00018, "memory": 20279, "data_time": 0.02019, "decode.loss_ce": 0.67554, "decode.acc_seg": 75.67504, "loss": 0.67554, "time": 0.27189} -{"mode": "train", "epoch": 1, "iter": 4750, "lr": 0.00018, "memory": 20279, "data_time": 0.01966, "decode.loss_ce": 0.66985, "decode.acc_seg": 75.77847, "loss": 0.66985, "time": 0.26379} -{"mode": "train", "epoch": 1, "iter": 4800, "lr": 0.00018, "memory": 20279, "data_time": 0.01908, "decode.loss_ce": 0.68417, "decode.acc_seg": 74.73835, "loss": 0.68417, "time": 0.27272} -{"mode": "train", "epoch": 1, "iter": 4850, "lr": 0.00018, "memory": 20279, "data_time": 0.02109, "decode.loss_ce": 0.70779, "decode.acc_seg": 74.9776, "loss": 0.70779, "time": 0.26666} -{"mode": "train", "epoch": 1, "iter": 4900, "lr": 0.00018, "memory": 20279, "data_time": 0.01954, "decode.loss_ce": 0.71593, "decode.acc_seg": 74.28155, "loss": 0.71593, "time": 0.26453} -{"mode": "train", "epoch": 1, "iter": 4950, "lr": 0.00018, "memory": 20279, "data_time": 0.01941, "decode.loss_ce": 0.72396, "decode.acc_seg": 73.85265, "loss": 0.72396, "time": 0.26169} -{"mode": "train", "epoch": 1, "iter": 5000, "lr": 0.00018, "memory": 20279, "data_time": 0.02073, "decode.loss_ce": 0.66039, "decode.acc_seg": 76.50398, "loss": 0.66039, "time": 0.2694} -{"mode": "train", "epoch": 1, "iter": 5050, "lr": 0.00018, "memory": 20279, "data_time": 0.01908, "decode.loss_ce": 0.67389, "decode.acc_seg": 76.06689, "loss": 0.67389, "time": 0.27053} -{"mode": "train", "epoch": 1, "iter": 5100, "lr": 0.00018, "memory": 20279, "data_time": 0.01788, "decode.loss_ce": 0.64276, "decode.acc_seg": 77.07387, "loss": 0.64276, "time": 0.26503} -{"mode": "train", "epoch": 1, "iter": 5150, "lr": 0.00018, "memory": 20279, "data_time": 0.01942, "decode.loss_ce": 0.66373, "decode.acc_seg": 76.40841, "loss": 0.66373, "time": 0.26313} -{"mode": "train", "epoch": 1, "iter": 5200, "lr": 0.00018, "memory": 20279, "data_time": 0.02056, "decode.loss_ce": 0.68229, "decode.acc_seg": 75.5935, "loss": 0.68229, "time": 0.26773} -{"mode": "train", "epoch": 1, "iter": 5250, "lr": 0.00018, "memory": 20279, "data_time": 0.01997, "decode.loss_ce": 0.67696, "decode.acc_seg": 75.73643, "loss": 0.67696, "time": 0.26553} -{"mode": "train", "epoch": 1, "iter": 5300, "lr": 0.00018, "memory": 20279, "data_time": 0.01914, "decode.loss_ce": 0.66844, "decode.acc_seg": 75.80699, "loss": 0.66844, "time": 0.26423} -{"mode": "train", "epoch": 1, "iter": 
5350, "lr": 0.00018, "memory": 20279, "data_time": 0.01976, "decode.loss_ce": 0.68965, "decode.acc_seg": 75.33416, "loss": 0.68965, "time": 0.26482} -{"mode": "train", "epoch": 1, "iter": 5400, "lr": 0.00018, "memory": 20279, "data_time": 0.02136, "decode.loss_ce": 0.62904, "decode.acc_seg": 76.9581, "loss": 0.62904, "time": 0.26768} -{"mode": "train", "epoch": 1, "iter": 5450, "lr": 0.00018, "memory": 20279, "data_time": 0.01883, "decode.loss_ce": 0.68146, "decode.acc_seg": 75.18564, "loss": 0.68146, "time": 0.27016} -{"mode": "train", "epoch": 1, "iter": 5500, "lr": 0.00018, "memory": 20279, "data_time": 0.01943, "decode.loss_ce": 0.65695, "decode.acc_seg": 76.25748, "loss": 0.65695, "time": 0.26567} -{"mode": "train", "epoch": 1, "iter": 5550, "lr": 0.00017, "memory": 20279, "data_time": 0.02007, "decode.loss_ce": 0.65519, "decode.acc_seg": 76.23802, "loss": 0.65519, "time": 0.2648} -{"mode": "train", "epoch": 1, "iter": 5600, "lr": 0.00017, "memory": 20279, "data_time": 0.01998, "decode.loss_ce": 0.63569, "decode.acc_seg": 76.90096, "loss": 0.63569, "time": 0.26643} -{"mode": "train", "epoch": 1, "iter": 5650, "lr": 0.00017, "memory": 20279, "data_time": 0.01989, "decode.loss_ce": 0.6671, "decode.acc_seg": 76.30091, "loss": 0.6671, "time": 0.27268} -{"mode": "train", "epoch": 1, "iter": 5700, "lr": 0.00017, "memory": 20279, "data_time": 0.02066, "decode.loss_ce": 0.63442, "decode.acc_seg": 76.76979, "loss": 0.63442, "time": 0.27468} -{"mode": "train", "epoch": 1, "iter": 5750, "lr": 0.00017, "memory": 20279, "data_time": 0.02122, "decode.loss_ce": 0.6351, "decode.acc_seg": 76.32847, "loss": 0.6351, "time": 0.26658} -{"mode": "train", "epoch": 1, "iter": 5800, "lr": 0.00017, "memory": 20279, "data_time": 0.01965, "decode.loss_ce": 0.66414, "decode.acc_seg": 75.76992, "loss": 0.66414, "time": 0.27266} -{"mode": "train", "epoch": 1, "iter": 5850, "lr": 0.00017, "memory": 20279, "data_time": 0.01858, "decode.loss_ce": 0.67141, "decode.acc_seg": 75.24527, "loss": 0.67141, "time": 0.26349} -{"mode": "train", "epoch": 1, "iter": 5900, "lr": 0.00017, "memory": 20279, "data_time": 0.0203, "decode.loss_ce": 0.64444, "decode.acc_seg": 76.55926, "loss": 0.64444, "time": 0.26461} -{"mode": "train", "epoch": 1, "iter": 5950, "lr": 0.00017, "memory": 20279, "data_time": 0.01937, "decode.loss_ce": 0.62569, "decode.acc_seg": 76.81418, "loss": 0.62569, "time": 0.26289} -{"mode": "train", "epoch": 1, "iter": 6000, "lr": 0.00017, "memory": 20279, "data_time": 0.01904, "decode.loss_ce": 0.63603, "decode.acc_seg": 76.44859, "loss": 0.63603, "time": 0.2707} -{"mode": "train", "epoch": 1, "iter": 6050, "lr": 0.00017, "memory": 20279, "data_time": 0.01961, "decode.loss_ce": 0.65608, "decode.acc_seg": 76.2119, "loss": 0.65608, "time": 0.27184} -{"mode": "train", "epoch": 1, "iter": 6100, "lr": 0.00017, "memory": 20279, "data_time": 0.01851, "decode.loss_ce": 0.62248, "decode.acc_seg": 77.50435, "loss": 0.62248, "time": 0.26455} -{"mode": "train", "epoch": 1, "iter": 6150, "lr": 0.00017, "memory": 20279, "data_time": 0.02156, "decode.loss_ce": 0.64063, "decode.acc_seg": 76.60645, "loss": 0.64063, "time": 0.26663} -{"mode": "train", "epoch": 1, "iter": 6200, "lr": 0.00017, "memory": 20279, "data_time": 0.01936, "decode.loss_ce": 0.64709, "decode.acc_seg": 76.86917, "loss": 0.64709, "time": 0.2647} -{"mode": "train", "epoch": 1, "iter": 6250, "lr": 0.00017, "memory": 20279, "data_time": 0.02085, "decode.loss_ce": 0.63656, "decode.acc_seg": 77.00529, "loss": 0.63656, "time": 0.26749} -{"mode": "train", "epoch": 1, 
"iter": 6300, "lr": 0.00017, "memory": 20279, "data_time": 0.02011, "decode.loss_ce": 0.62619, "decode.acc_seg": 77.18838, "loss": 0.62619, "time": 0.26422} -{"mode": "train", "epoch": 1, "iter": 6350, "lr": 0.00017, "memory": 20279, "data_time": 0.02008, "decode.loss_ce": 0.64241, "decode.acc_seg": 76.62223, "loss": 0.64241, "time": 0.2659} -{"mode": "train", "epoch": 1, "iter": 6400, "lr": 0.00017, "memory": 20279, "data_time": 0.01994, "decode.loss_ce": 0.64739, "decode.acc_seg": 76.42645, "loss": 0.64739, "time": 0.26465} -{"mode": "train", "epoch": 1, "iter": 6450, "lr": 0.00017, "memory": 20279, "data_time": 0.01972, "decode.loss_ce": 0.59805, "decode.acc_seg": 77.97127, "loss": 0.59805, "time": 0.2753} -{"mode": "train", "epoch": 1, "iter": 6500, "lr": 0.00017, "memory": 20279, "data_time": 0.02084, "decode.loss_ce": 0.62739, "decode.acc_seg": 77.46328, "loss": 0.62739, "time": 0.26741} -{"mode": "train", "epoch": 1, "iter": 6550, "lr": 0.00017, "memory": 20279, "data_time": 0.01866, "decode.loss_ce": 0.63833, "decode.acc_seg": 76.81931, "loss": 0.63833, "time": 0.26681} -{"mode": "train", "epoch": 1, "iter": 6600, "lr": 0.00017, "memory": 20279, "data_time": 0.02053, "decode.loss_ce": 0.60638, "decode.acc_seg": 77.37884, "loss": 0.60638, "time": 0.26736} -{"mode": "train", "epoch": 1, "iter": 6650, "lr": 0.00017, "memory": 20279, "data_time": 0.02078, "decode.loss_ce": 0.64669, "decode.acc_seg": 76.64743, "loss": 0.64669, "time": 0.2694} -{"mode": "train", "epoch": 1, "iter": 6700, "lr": 0.00017, "memory": 20279, "data_time": 0.01904, "decode.loss_ce": 0.64211, "decode.acc_seg": 76.7858, "loss": 0.64211, "time": 0.27081} -{"mode": "train", "epoch": 1, "iter": 6750, "lr": 0.00017, "memory": 20279, "data_time": 0.02217, "decode.loss_ce": 0.60756, "decode.acc_seg": 77.6508, "loss": 0.60756, "time": 0.26949} -{"mode": "train", "epoch": 1, "iter": 6800, "lr": 0.00017, "memory": 20279, "data_time": 0.01776, "decode.loss_ce": 0.63247, "decode.acc_seg": 77.04348, "loss": 0.63247, "time": 0.26938} -{"mode": "train", "epoch": 1, "iter": 6850, "lr": 0.00017, "memory": 20279, "data_time": 0.01955, "decode.loss_ce": 0.62911, "decode.acc_seg": 77.00289, "loss": 0.62911, "time": 0.26249} -{"mode": "train", "epoch": 1, "iter": 6900, "lr": 0.00017, "memory": 20279, "data_time": 0.02067, "decode.loss_ce": 0.61421, "decode.acc_seg": 77.5584, "loss": 0.61421, "time": 0.26689} -{"mode": "train", "epoch": 1, "iter": 6950, "lr": 0.00017, "memory": 20279, "data_time": 0.01918, "decode.loss_ce": 0.58716, "decode.acc_seg": 78.26447, "loss": 0.58716, "time": 0.26558} -{"mode": "train", "epoch": 1, "iter": 7000, "lr": 0.00017, "memory": 20279, "data_time": 0.02002, "decode.loss_ce": 0.63406, "decode.acc_seg": 77.15949, "loss": 0.63406, "time": 0.27047} -{"mode": "train", "epoch": 1, "iter": 7050, "lr": 0.00017, "memory": 20279, "data_time": 0.01917, "decode.loss_ce": 0.60803, "decode.acc_seg": 77.59284, "loss": 0.60803, "time": 0.27397} -{"mode": "train", "epoch": 1, "iter": 7100, "lr": 0.00017, "memory": 20279, "data_time": 0.02021, "decode.loss_ce": 0.61377, "decode.acc_seg": 77.68226, "loss": 0.61377, "time": 0.26142} -{"mode": "train", "epoch": 1, "iter": 7150, "lr": 0.00017, "memory": 20279, "data_time": 0.01955, "decode.loss_ce": 0.60306, "decode.acc_seg": 77.90798, "loss": 0.60306, "time": 0.2626} -{"mode": "train", "epoch": 1, "iter": 7200, "lr": 0.00017, "memory": 20279, "data_time": 0.01968, "decode.loss_ce": 0.61074, "decode.acc_seg": 77.46742, "loss": 0.61074, "time": 0.26731} -{"mode": "train", 
"epoch": 1, "iter": 7250, "lr": 0.00017, "memory": 20279, "data_time": 0.02294, "decode.loss_ce": 0.59841, "decode.acc_seg": 77.86098, "loss": 0.59841, "time": 0.26736} -{"mode": "train", "epoch": 1, "iter": 7300, "lr": 0.00017, "memory": 20279, "data_time": 0.01913, "decode.loss_ce": 0.64226, "decode.acc_seg": 76.81252, "loss": 0.64226, "time": 0.26341} -{"mode": "train", "epoch": 1, "iter": 7350, "lr": 0.00017, "memory": 20279, "data_time": 0.01947, "decode.loss_ce": 0.60699, "decode.acc_seg": 77.89023, "loss": 0.60699, "time": 0.26497} -{"mode": "train", "epoch": 1, "iter": 7400, "lr": 0.00017, "memory": 20279, "data_time": 0.01867, "decode.loss_ce": 0.58565, "decode.acc_seg": 77.94072, "loss": 0.58565, "time": 0.26362} -{"mode": "train", "epoch": 1, "iter": 7450, "lr": 0.00017, "memory": 20279, "data_time": 0.01959, "decode.loss_ce": 0.64055, "decode.acc_seg": 76.52502, "loss": 0.64055, "time": 0.26921} -{"mode": "train", "epoch": 1, "iter": 7500, "lr": 0.00017, "memory": 20279, "data_time": 0.0204, "decode.loss_ce": 0.63698, "decode.acc_seg": 77.22516, "loss": 0.63698, "time": 0.26255} -{"mode": "train", "epoch": 1, "iter": 7550, "lr": 0.00017, "memory": 20279, "data_time": 0.01907, "decode.loss_ce": 0.59603, "decode.acc_seg": 77.93637, "loss": 0.59603, "time": 0.26549} -{"mode": "train", "epoch": 1, "iter": 7600, "lr": 0.00017, "memory": 20279, "data_time": 0.01946, "decode.loss_ce": 0.62452, "decode.acc_seg": 77.1024, "loss": 0.62452, "time": 0.26677} -{"mode": "train", "epoch": 1, "iter": 7650, "lr": 0.00017, "memory": 20279, "data_time": 0.02025, "decode.loss_ce": 0.59701, "decode.acc_seg": 77.70391, "loss": 0.59701, "time": 0.26972} -{"mode": "train", "epoch": 1, "iter": 7700, "lr": 0.00017, "memory": 20279, "data_time": 0.01978, "decode.loss_ce": 0.59818, "decode.acc_seg": 78.40027, "loss": 0.59818, "time": 0.27401} -{"mode": "train", "epoch": 1, "iter": 7750, "lr": 0.00016, "memory": 20279, "data_time": 0.0201, "decode.loss_ce": 0.59557, "decode.acc_seg": 77.88413, "loss": 0.59557, "time": 0.26349} -{"mode": "train", "epoch": 1, "iter": 7800, "lr": 0.00016, "memory": 20279, "data_time": 0.01996, "decode.loss_ce": 0.60373, "decode.acc_seg": 77.378, "loss": 0.60373, "time": 0.27642} -{"mode": "train", "epoch": 1, "iter": 7850, "lr": 0.00016, "memory": 20279, "data_time": 0.01948, "decode.loss_ce": 0.58425, "decode.acc_seg": 78.99498, "loss": 0.58425, "time": 0.26389} -{"mode": "train", "epoch": 1, "iter": 7900, "lr": 0.00016, "memory": 20279, "data_time": 0.01921, "decode.loss_ce": 0.58186, "decode.acc_seg": 78.66923, "loss": 0.58186, "time": 0.26767} -{"mode": "train", "epoch": 1, "iter": 7950, "lr": 0.00016, "memory": 20279, "data_time": 0.01974, "decode.loss_ce": 0.58299, "decode.acc_seg": 78.61402, "loss": 0.58299, "time": 0.26556} -{"mode": "train", "epoch": 1, "iter": 8000, "lr": 0.00016, "memory": 20279, "data_time": 0.01917, "decode.loss_ce": 0.58905, "decode.acc_seg": 78.43285, "loss": 0.58905, "time": 0.28528} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00016, "aAcc": 0.7857, "mIoU": 0.3803, "mAcc": 0.491, "IoU.wall": 0.7113, "IoU.building": 0.8012, "IoU.sky": 0.9381, "IoU.floor": 0.7731, "IoU.tree": 0.7028, "IoU.ceiling": 0.7699, "IoU.road": 0.7713, "IoU.bed ": 0.8206, "IoU.windowpane": 0.5484, "IoU.grass": 0.6694, "IoU.cabinet": 0.5192, "IoU.sidewalk": 0.5192, "IoU.person": 0.7773, "IoU.earth": 0.3272, "IoU.door": 0.3527, "IoU.table": 0.4645, "IoU.mountain": 0.5224, "IoU.plant": 0.4542, "IoU.curtain": 0.6706, "IoU.chair": 0.4562, "IoU.car": 0.7938, "IoU.water": 
0.5374, "IoU.painting": 0.6174, "IoU.sofa": 0.5414, "IoU.shelf": 0.3373, "IoU.house": 0.4701, "IoU.sea": 0.5577, "IoU.mirror": 0.4682, "IoU.rug": 0.5016, "IoU.field": 0.3151, "IoU.armchair": 0.3152, "IoU.seat": 0.4897, "IoU.fence": 0.3394, "IoU.desk": 0.3117, "IoU.rock": 0.3815, "IoU.wardrobe": 0.362, "IoU.lamp": 0.5232, "IoU.bathtub": 0.5875, "IoU.railing": 0.2728, "IoU.cushion": 0.4194, "IoU.base": 0.0698, "IoU.box": 0.1395, "IoU.column": 0.3875, "IoU.signboard": 0.32, "IoU.chest of drawers": 0.3827, "IoU.counter": 0.2393, "IoU.sand": 0.2574, "IoU.sink": 0.5057, "IoU.skyscraper": 0.5087, "IoU.fireplace": 0.6047, "IoU.refrigerator": 0.5859, "IoU.grandstand": 0.4333, "IoU.path": 0.0958, "IoU.stairs": 0.0785, "IoU.runway": 0.7347, "IoU.case": 0.3998, "IoU.pool table": 0.897, "IoU.pillow": 0.4902, "IoU.screen door": 0.4575, "IoU.stairway": 0.1946, "IoU.river": 0.1432, "IoU.bridge": 0.3696, "IoU.bookcase": 0.282, "IoU.blind": 0.2547, "IoU.coffee table": 0.4565, "IoU.toilet": 0.7578, "IoU.flower": 0.3076, "IoU.book": 0.3879, "IoU.hill": 0.0382, "IoU.bench": 0.2607, "IoU.countertop": 0.4582, "IoU.stove": 0.4954, "IoU.palm": 0.4213, "IoU.kitchen island": 0.1787, "IoU.computer": 0.6044, "IoU.swivel chair": 0.3708, "IoU.boat": 0.6016, "IoU.bar": 0.2392, "IoU.arcade machine": 0.4852, "IoU.hovel": 0.1168, "IoU.bus": 0.8086, "IoU.towel": 0.4348, "IoU.light": 0.3418, "IoU.truck": 0.1722, "IoU.tower": 0.2689, "IoU.chandelier": 0.568, "IoU.awning": 0.1513, "IoU.streetlight": 0.1704, "IoU.booth": 0.1914, "IoU.television receiver": 0.5867, "IoU.airplane": 0.4599, "IoU.dirt track": 0.0897, "IoU.apparel": 0.2383, "IoU.pole": 0.0929, "IoU.land": 0.0205, "IoU.bannister": 0.0141, "IoU.escalator": 0.4359, "IoU.ottoman": 0.2152, "IoU.bottle": 0.1219, "IoU.buffet": 0.2756, "IoU.poster": 0.0133, "IoU.stage": 0.0485, "IoU.van": 0.1569, "IoU.ship": 0.7126, "IoU.fountain": 0.2135, "IoU.conveyer belt": 0.6251, "IoU.canopy": 0.2165, "IoU.washer": 0.6241, "IoU.plaything": 0.2089, "IoU.swimming pool": 0.4048, "IoU.stool": 0.2219, "IoU.barrel": 0.258, "IoU.basket": 0.1958, "IoU.waterfall": 0.3113, "IoU.tent": 0.8574, "IoU.bag": 0.0467, "IoU.minibike": 0.5356, "IoU.cradle": 0.6369, "IoU.oven": 0.196, "IoU.ball": 0.3729, "IoU.food": 0.4841, "IoU.step": 0.013, "IoU.tank": 0.2857, "IoU.trade name": 0.1756, "IoU.microwave": 0.4822, "IoU.pot": 0.3201, "IoU.animal": 0.5276, "IoU.bicycle": 0.4887, "IoU.lake": 0.0, "IoU.dishwasher": 0.4373, "IoU.screen": 0.7197, "IoU.blanket": 0.0025, "IoU.sculpture": 0.2878, "IoU.hood": 0.3597, "IoU.sconce": 0.1315, "IoU.vase": 0.2277, "IoU.traffic light": 0.1918, "IoU.tray": 0.0166, "IoU.ashcan": 0.317, "IoU.fan": 0.426, "IoU.pier": 0.2398, "IoU.crt screen": 0.0, "IoU.plate": 0.2928, "IoU.monitor": 0.0339, "IoU.bulletin board": 0.3312, "IoU.shower": 0.0, "IoU.radiator": 0.4669, "IoU.glass": 0.044, "IoU.clock": 0.1859, "IoU.flag": 0.3182, "Acc.wall": 0.8631, "Acc.building": 0.9011, "Acc.sky": 0.9651, "Acc.floor": 0.8628, "Acc.tree": 0.8914, "Acc.ceiling": 0.8477, "Acc.road": 0.9104, "Acc.bed ": 0.9345, "Acc.windowpane": 0.7153, "Acc.grass": 0.7844, "Acc.cabinet": 0.7003, "Acc.sidewalk": 0.6864, "Acc.person": 0.8933, "Acc.earth": 0.4975, "Acc.door": 0.4799, "Acc.table": 0.6126, "Acc.mountain": 0.6983, "Acc.plant": 0.5467, "Acc.curtain": 0.8, "Acc.chair": 0.6508, "Acc.car": 0.9085, "Acc.water": 0.7445, "Acc.painting": 0.7846, "Acc.sofa": 0.7459, "Acc.shelf": 0.5715, "Acc.house": 0.651, "Acc.sea": 0.7308, "Acc.mirror": 0.6256, "Acc.rug": 0.6013, "Acc.field": 0.4223, "Acc.armchair": 0.541, 
"Acc.seat": 0.6509, "Acc.fence": 0.5524, "Acc.desk": 0.4104, "Acc.rock": 0.4986, "Acc.wardrobe": 0.492, "Acc.lamp": 0.6669, "Acc.bathtub": 0.7728, "Acc.railing": 0.3812, "Acc.cushion": 0.5697, "Acc.base": 0.0893, "Acc.box": 0.1662, "Acc.column": 0.4853, "Acc.signboard": 0.4716, "Acc.chest of drawers": 0.5677, "Acc.counter": 0.2852, "Acc.sand": 0.4374, "Acc.sink": 0.5617, "Acc.skyscraper": 0.6566, "Acc.fireplace": 0.724, "Acc.refrigerator": 0.7065, "Acc.grandstand": 0.727, "Acc.path": 0.1219, "Acc.stairs": 0.0841, "Acc.runway": 0.9183, "Acc.case": 0.6843, "Acc.pool table": 0.9616, "Acc.pillow": 0.6161, "Acc.screen door": 0.6768, "Acc.stairway": 0.4396, "Acc.river": 0.2248, "Acc.bridge": 0.5734, "Acc.bookcase": 0.495, "Acc.blind": 0.2946, "Acc.coffee table": 0.7827, "Acc.toilet": 0.883, "Acc.flower": 0.5117, "Acc.book": 0.5717, "Acc.hill": 0.0393, "Acc.bench": 0.4486, "Acc.countertop": 0.633, "Acc.stove": 0.5864, "Acc.palm": 0.5696, "Acc.kitchen island": 0.2918, "Acc.computer": 0.7598, "Acc.swivel chair": 0.4855, "Acc.boat": 0.8158, "Acc.bar": 0.2702, "Acc.arcade machine": 0.6265, "Acc.hovel": 0.1608, "Acc.bus": 0.9099, "Acc.towel": 0.7487, "Acc.light": 0.3705, "Acc.truck": 0.2266, "Acc.tower": 0.3235, "Acc.chandelier": 0.7166, "Acc.awning": 0.205, "Acc.streetlight": 0.2327, "Acc.booth": 0.232, "Acc.television receiver": 0.7014, "Acc.airplane": 0.6251, "Acc.dirt track": 0.2706, "Acc.apparel": 0.4012, "Acc.pole": 0.1108, "Acc.land": 0.0211, "Acc.bannister": 0.0185, "Acc.escalator": 0.6052, "Acc.ottoman": 0.2855, "Acc.bottle": 0.1363, "Acc.buffet": 0.3175, "Acc.poster": 0.0184, "Acc.stage": 0.0573, "Acc.van": 0.191, "Acc.ship": 0.8423, "Acc.fountain": 0.2227, "Acc.conveyer belt": 0.7565, "Acc.canopy": 0.2995, "Acc.washer": 0.6763, "Acc.plaything": 0.3832, "Acc.swimming pool": 0.7169, "Acc.stool": 0.2595, "Acc.barrel": 0.401, "Acc.basket": 0.279, "Acc.waterfall": 0.321, "Acc.tent": 0.9853, "Acc.bag": 0.0489, "Acc.minibike": 0.6785, "Acc.cradle": 0.7718, "Acc.oven": 0.5554, "Acc.ball": 0.4817, "Acc.food": 0.5533, "Acc.step": 0.0133, "Acc.tank": 0.2879, "Acc.trade name": 0.1924, "Acc.microwave": 0.5196, "Acc.pot": 0.3602, "Acc.animal": 0.5845, "Acc.bicycle": 0.7408, "Acc.lake": 0.0, "Acc.dishwasher": 0.5049, "Acc.screen": 0.8239, "Acc.blanket": 0.0027, "Acc.sculpture": 0.3705, "Acc.hood": 0.3844, "Acc.sconce": 0.141, "Acc.vase": 0.3146, "Acc.traffic light": 0.2868, "Acc.tray": 0.0174, "Acc.ashcan": 0.4334, "Acc.fan": 0.5512, "Acc.pier": 0.2594, "Acc.crt screen": 0.0, "Acc.plate": 0.334, "Acc.monitor": 0.0389, "Acc.bulletin board": 0.4074, "Acc.shower": 0.0, "Acc.radiator": 0.5432, "Acc.glass": 0.0459, "Acc.clock": 0.2094, "Acc.flag": 0.3546} -{"mode": "train", "epoch": 1, "iter": 8050, "lr": 0.00016, "memory": 20279, "data_time": 0.9895, "decode.loss_ce": 0.6045, "decode.acc_seg": 78.14554, "loss": 0.6045, "time": 1.2345} -{"mode": "train", "epoch": 1, "iter": 8100, "lr": 0.00016, "memory": 20279, "data_time": 0.01928, "decode.loss_ce": 0.6008, "decode.acc_seg": 78.05601, "loss": 0.6008, "time": 0.2642} -{"mode": "train", "epoch": 1, "iter": 8150, "lr": 0.00016, "memory": 20279, "data_time": 0.02059, "decode.loss_ce": 0.57693, "decode.acc_seg": 78.5632, "loss": 0.57693, "time": 0.26288} -{"mode": "train", "epoch": 1, "iter": 8200, "lr": 0.00016, "memory": 20279, "data_time": 0.02165, "decode.loss_ce": 0.57215, "decode.acc_seg": 78.946, "loss": 0.57215, "time": 0.2647} -{"mode": "train", "epoch": 1, "iter": 8250, "lr": 0.00016, "memory": 20279, "data_time": 0.01981, "decode.loss_ce": 0.56407, 
"decode.acc_seg": 79.18227, "loss": 0.56407, "time": 0.26605} -{"mode": "train", "epoch": 1, "iter": 8300, "lr": 0.00016, "memory": 20279, "data_time": 0.02101, "decode.loss_ce": 0.57077, "decode.acc_seg": 79.24185, "loss": 0.57077, "time": 0.26727} -{"mode": "train", "epoch": 1, "iter": 8350, "lr": 0.00016, "memory": 20279, "data_time": 0.0204, "decode.loss_ce": 0.56416, "decode.acc_seg": 78.82231, "loss": 0.56416, "time": 0.26468} -{"mode": "train", "epoch": 1, "iter": 8400, "lr": 0.00016, "memory": 20279, "data_time": 0.02026, "decode.loss_ce": 0.60305, "decode.acc_seg": 77.8751, "loss": 0.60305, "time": 0.26663} -{"mode": "train", "epoch": 1, "iter": 8450, "lr": 0.00016, "memory": 20279, "data_time": 0.02107, "decode.loss_ce": 0.57513, "decode.acc_seg": 78.92116, "loss": 0.57513, "time": 0.27133} -{"mode": "train", "epoch": 1, "iter": 8500, "lr": 0.00016, "memory": 20279, "data_time": 0.02025, "decode.loss_ce": 0.57352, "decode.acc_seg": 78.71415, "loss": 0.57352, "time": 0.26659} -{"mode": "train", "epoch": 1, "iter": 8550, "lr": 0.00016, "memory": 20279, "data_time": 0.02023, "decode.loss_ce": 0.578, "decode.acc_seg": 78.84303, "loss": 0.578, "time": 0.27047} -{"mode": "train", "epoch": 1, "iter": 8600, "lr": 0.00016, "memory": 20279, "data_time": 0.02019, "decode.loss_ce": 0.56742, "decode.acc_seg": 79.16834, "loss": 0.56742, "time": 0.26581} -{"mode": "train", "epoch": 1, "iter": 8650, "lr": 0.00016, "memory": 20279, "data_time": 0.01881, "decode.loss_ce": 0.58508, "decode.acc_seg": 78.6107, "loss": 0.58508, "time": 0.26747} -{"mode": "train", "epoch": 1, "iter": 8700, "lr": 0.00016, "memory": 20279, "data_time": 0.0211, "decode.loss_ce": 0.54416, "decode.acc_seg": 79.91657, "loss": 0.54416, "time": 0.26782} -{"mode": "train", "epoch": 1, "iter": 8750, "lr": 0.00016, "memory": 20279, "data_time": 0.01968, "decode.loss_ce": 0.58282, "decode.acc_seg": 78.32825, "loss": 0.58282, "time": 0.26377} -{"mode": "train", "epoch": 1, "iter": 8800, "lr": 0.00016, "memory": 20279, "data_time": 0.02081, "decode.loss_ce": 0.54494, "decode.acc_seg": 79.8973, "loss": 0.54494, "time": 0.26813} -{"mode": "train", "epoch": 1, "iter": 8850, "lr": 0.00016, "memory": 20279, "data_time": 0.02168, "decode.loss_ce": 0.56984, "decode.acc_seg": 78.84381, "loss": 0.56984, "time": 0.26779} -{"mode": "train", "epoch": 1, "iter": 8900, "lr": 0.00016, "memory": 20279, "data_time": 0.01949, "decode.loss_ce": 0.59561, "decode.acc_seg": 78.06073, "loss": 0.59561, "time": 0.27228} -{"mode": "train", "epoch": 1, "iter": 8950, "lr": 0.00016, "memory": 20279, "data_time": 0.0181, "decode.loss_ce": 0.57354, "decode.acc_seg": 79.1406, "loss": 0.57354, "time": 0.26304} -{"mode": "train", "epoch": 1, "iter": 9000, "lr": 0.00016, "memory": 20279, "data_time": 0.01969, "decode.loss_ce": 0.56081, "decode.acc_seg": 79.05184, "loss": 0.56081, "time": 0.26535} -{"mode": "train", "epoch": 1, "iter": 9050, "lr": 0.00016, "memory": 20279, "data_time": 0.01965, "decode.loss_ce": 0.54019, "decode.acc_seg": 80.09001, "loss": 0.54019, "time": 0.27093} -{"mode": "train", "epoch": 1, "iter": 9100, "lr": 0.00016, "memory": 20279, "data_time": 0.01899, "decode.loss_ce": 0.56276, "decode.acc_seg": 78.934, "loss": 0.56276, "time": 0.27022} -{"mode": "train", "epoch": 1, "iter": 9150, "lr": 0.00016, "memory": 20279, "data_time": 0.01985, "decode.loss_ce": 0.56917, "decode.acc_seg": 78.92558, "loss": 0.56917, "time": 0.26579} -{"mode": "train", "epoch": 1, "iter": 9200, "lr": 0.00016, "memory": 20279, "data_time": 0.01943, "decode.loss_ce": 
0.55565, "decode.acc_seg": 79.30999, "loss": 0.55565, "time": 0.26393} -{"mode": "train", "epoch": 1, "iter": 9250, "lr": 0.00016, "memory": 20279, "data_time": 0.01979, "decode.loss_ce": 0.54914, "decode.acc_seg": 79.51914, "loss": 0.54914, "time": 0.27056} -{"mode": "train", "epoch": 1, "iter": 9300, "lr": 0.00016, "memory": 20279, "data_time": 0.01968, "decode.loss_ce": 0.55277, "decode.acc_seg": 79.34536, "loss": 0.55277, "time": 0.26223} -{"mode": "train", "epoch": 1, "iter": 9350, "lr": 0.00016, "memory": 20279, "data_time": 0.02027, "decode.loss_ce": 0.56119, "decode.acc_seg": 78.86583, "loss": 0.56119, "time": 0.26355} -{"mode": "train", "epoch": 1, "iter": 9400, "lr": 0.00016, "memory": 20279, "data_time": 0.02068, "decode.loss_ce": 0.57173, "decode.acc_seg": 78.83841, "loss": 0.57173, "time": 0.26866} -{"mode": "train", "epoch": 1, "iter": 9450, "lr": 0.00016, "memory": 20279, "data_time": 0.0196, "decode.loss_ce": 0.5453, "decode.acc_seg": 79.40253, "loss": 0.5453, "time": 0.26453} -{"mode": "train", "epoch": 1, "iter": 9500, "lr": 0.00016, "memory": 20279, "data_time": 0.01993, "decode.loss_ce": 0.56182, "decode.acc_seg": 79.57682, "loss": 0.56182, "time": 0.26656} -{"mode": "train", "epoch": 1, "iter": 9550, "lr": 0.00016, "memory": 20279, "data_time": 0.02014, "decode.loss_ce": 0.55205, "decode.acc_seg": 79.22034, "loss": 0.55205, "time": 0.2703} -{"mode": "train", "epoch": 1, "iter": 9600, "lr": 0.00016, "memory": 20279, "data_time": 0.01901, "decode.loss_ce": 0.54615, "decode.acc_seg": 79.97245, "loss": 0.54615, "time": 0.26472} -{"mode": "train", "epoch": 1, "iter": 9650, "lr": 0.00016, "memory": 20279, "data_time": 0.0189, "decode.loss_ce": 0.56462, "decode.acc_seg": 79.20061, "loss": 0.56462, "time": 0.26411} -{"mode": "train", "epoch": 1, "iter": 9700, "lr": 0.00016, "memory": 20279, "data_time": 0.01977, "decode.loss_ce": 0.55902, "decode.acc_seg": 79.44537, "loss": 0.55902, "time": 0.26615} -{"mode": "train", "epoch": 1, "iter": 9750, "lr": 0.00016, "memory": 20279, "data_time": 0.02143, "decode.loss_ce": 0.53127, "decode.acc_seg": 80.40825, "loss": 0.53127, "time": 0.26414} -{"mode": "train", "epoch": 1, "iter": 9800, "lr": 0.00016, "memory": 20279, "data_time": 0.02049, "decode.loss_ce": 0.54398, "decode.acc_seg": 79.72945, "loss": 0.54398, "time": 0.26893} -{"mode": "train", "epoch": 1, "iter": 9850, "lr": 0.00016, "memory": 20279, "data_time": 0.02133, "decode.loss_ce": 0.56548, "decode.acc_seg": 79.35416, "loss": 0.56548, "time": 0.26757} -{"mode": "train", "epoch": 1, "iter": 9900, "lr": 0.00016, "memory": 20279, "data_time": 0.02001, "decode.loss_ce": 0.54041, "decode.acc_seg": 79.57892, "loss": 0.54041, "time": 0.26987} -{"mode": "train", "epoch": 1, "iter": 9950, "lr": 0.00015, "memory": 20279, "data_time": 0.02104, "decode.loss_ce": 0.55366, "decode.acc_seg": 79.38805, "loss": 0.55366, "time": 0.2627} -{"mode": "train", "epoch": 1, "iter": 10000, "lr": 0.00015, "memory": 20279, "data_time": 0.01966, "decode.loss_ce": 0.56029, "decode.acc_seg": 79.17551, "loss": 0.56029, "time": 0.26825} -{"mode": "train", "epoch": 1, "iter": 10050, "lr": 0.00015, "memory": 20279, "data_time": 0.02099, "decode.loss_ce": 0.54249, "decode.acc_seg": 79.88202, "loss": 0.54249, "time": 0.26655} -{"mode": "train", "epoch": 1, "iter": 10100, "lr": 0.00015, "memory": 20279, "data_time": 0.02097, "decode.loss_ce": 0.53409, "decode.acc_seg": 80.23543, "loss": 0.53409, "time": 0.26858} -{"mode": "train", "epoch": 1, "iter": 10150, "lr": 0.00015, "memory": 20279, "data_time": 0.01854, 
"decode.loss_ce": 0.52734, "decode.acc_seg": 80.12553, "loss": 0.52734, "time": 0.26214} -{"mode": "train", "epoch": 1, "iter": 10200, "lr": 0.00015, "memory": 20279, "data_time": 0.01992, "decode.loss_ce": 0.5534, "decode.acc_seg": 80.1788, "loss": 0.5534, "time": 0.26483} -{"mode": "train", "epoch": 1, "iter": 10250, "lr": 0.00015, "memory": 20279, "data_time": 0.01925, "decode.loss_ce": 0.54664, "decode.acc_seg": 79.54544, "loss": 0.54664, "time": 0.27093} -{"mode": "train", "epoch": 1, "iter": 10300, "lr": 0.00015, "memory": 20279, "data_time": 0.02196, "decode.loss_ce": 0.52532, "decode.acc_seg": 80.61315, "loss": 0.52532, "time": 0.26574} -{"mode": "train", "epoch": 1, "iter": 10350, "lr": 0.00015, "memory": 20279, "data_time": 0.0194, "decode.loss_ce": 0.52622, "decode.acc_seg": 80.382, "loss": 0.52622, "time": 0.26653} -{"mode": "train", "epoch": 1, "iter": 10400, "lr": 0.00015, "memory": 20279, "data_time": 0.02049, "decode.loss_ce": 0.52407, "decode.acc_seg": 80.45014, "loss": 0.52407, "time": 0.262} -{"mode": "train", "epoch": 1, "iter": 10450, "lr": 0.00015, "memory": 20279, "data_time": 0.01981, "decode.loss_ce": 0.51641, "decode.acc_seg": 80.40907, "loss": 0.51641, "time": 0.26656} -{"mode": "train", "epoch": 1, "iter": 10500, "lr": 0.00015, "memory": 20279, "data_time": 0.02054, "decode.loss_ce": 0.53229, "decode.acc_seg": 80.49772, "loss": 0.53229, "time": 0.26672} -{"mode": "train", "epoch": 1, "iter": 10550, "lr": 0.00015, "memory": 20279, "data_time": 0.01973, "decode.loss_ce": 0.53594, "decode.acc_seg": 80.26706, "loss": 0.53594, "time": 0.26835} -{"mode": "train", "epoch": 1, "iter": 10600, "lr": 0.00015, "memory": 20279, "data_time": 0.02182, "decode.loss_ce": 0.55596, "decode.acc_seg": 79.41537, "loss": 0.55596, "time": 0.26965} -{"mode": "train", "epoch": 1, "iter": 10650, "lr": 0.00015, "memory": 20279, "data_time": 0.02181, "decode.loss_ce": 0.52815, "decode.acc_seg": 80.5544, "loss": 0.52815, "time": 0.26546} -{"mode": "train", "epoch": 1, "iter": 10700, "lr": 0.00015, "memory": 20279, "data_time": 0.021, "decode.loss_ce": 0.53427, "decode.acc_seg": 80.33112, "loss": 0.53427, "time": 0.26816} -{"mode": "train", "epoch": 1, "iter": 10750, "lr": 0.00015, "memory": 20279, "data_time": 0.02052, "decode.loss_ce": 0.51949, "decode.acc_seg": 80.664, "loss": 0.51949, "time": 0.26784} -{"mode": "train", "epoch": 1, "iter": 10800, "lr": 0.00015, "memory": 20279, "data_time": 0.02035, "decode.loss_ce": 0.53602, "decode.acc_seg": 80.01364, "loss": 0.53602, "time": 0.26633} -{"mode": "train", "epoch": 1, "iter": 10850, "lr": 0.00015, "memory": 20279, "data_time": 0.01886, "decode.loss_ce": 0.51786, "decode.acc_seg": 80.76655, "loss": 0.51786, "time": 0.26496} -{"mode": "train", "epoch": 1, "iter": 10900, "lr": 0.00015, "memory": 20279, "data_time": 0.01966, "decode.loss_ce": 0.52753, "decode.acc_seg": 80.20014, "loss": 0.52753, "time": 0.26989} -{"mode": "train", "epoch": 1, "iter": 10950, "lr": 0.00015, "memory": 20279, "data_time": 0.01936, "decode.loss_ce": 0.5218, "decode.acc_seg": 80.51516, "loss": 0.5218, "time": 0.26535} -{"mode": "train", "epoch": 1, "iter": 11000, "lr": 0.00015, "memory": 20279, "data_time": 0.01906, "decode.loss_ce": 0.52797, "decode.acc_seg": 80.54803, "loss": 0.52797, "time": 0.26679} -{"mode": "train", "epoch": 1, "iter": 11050, "lr": 0.00015, "memory": 20279, "data_time": 0.0182, "decode.loss_ce": 0.52155, "decode.acc_seg": 80.74111, "loss": 0.52155, "time": 0.2686} -{"mode": "train", "epoch": 1, "iter": 11100, "lr": 0.00015, "memory": 20279, 
"data_time": 0.01815, "decode.loss_ce": 0.5108, "decode.acc_seg": 81.05129, "loss": 0.5108, "time": 0.26913} -{"mode": "train", "epoch": 1, "iter": 11150, "lr": 0.00015, "memory": 20279, "data_time": 0.01993, "decode.loss_ce": 0.51065, "decode.acc_seg": 80.92613, "loss": 0.51065, "time": 0.26327} -{"mode": "train", "epoch": 1, "iter": 11200, "lr": 0.00015, "memory": 20279, "data_time": 0.01981, "decode.loss_ce": 0.52258, "decode.acc_seg": 80.63793, "loss": 0.52258, "time": 0.2662} -{"mode": "train", "epoch": 1, "iter": 11250, "lr": 0.00015, "memory": 20279, "data_time": 0.02183, "decode.loss_ce": 0.52924, "decode.acc_seg": 80.21276, "loss": 0.52924, "time": 0.27086} -{"mode": "train", "epoch": 1, "iter": 11300, "lr": 0.00015, "memory": 20279, "data_time": 0.02134, "decode.loss_ce": 0.51503, "decode.acc_seg": 80.76979, "loss": 0.51503, "time": 0.26724} -{"mode": "train", "epoch": 1, "iter": 11350, "lr": 0.00015, "memory": 20279, "data_time": 0.01906, "decode.loss_ce": 0.50143, "decode.acc_seg": 81.21612, "loss": 0.50143, "time": 0.26328} -{"mode": "train", "epoch": 1, "iter": 11400, "lr": 0.00015, "memory": 20279, "data_time": 0.01958, "decode.loss_ce": 0.49692, "decode.acc_seg": 81.40132, "loss": 0.49692, "time": 0.26268} -{"mode": "train", "epoch": 1, "iter": 11450, "lr": 0.00015, "memory": 20279, "data_time": 0.0195, "decode.loss_ce": 0.51861, "decode.acc_seg": 80.97443, "loss": 0.51861, "time": 0.26674} -{"mode": "train", "epoch": 1, "iter": 11500, "lr": 0.00015, "memory": 20279, "data_time": 0.02038, "decode.loss_ce": 0.49626, "decode.acc_seg": 81.3215, "loss": 0.49626, "time": 0.26389} -{"mode": "train", "epoch": 1, "iter": 11550, "lr": 0.00015, "memory": 20279, "data_time": 0.01968, "decode.loss_ce": 0.51364, "decode.acc_seg": 80.81636, "loss": 0.51364, "time": 0.26645} -{"mode": "train", "epoch": 1, "iter": 11600, "lr": 0.00015, "memory": 20279, "data_time": 0.02028, "decode.loss_ce": 0.50914, "decode.acc_seg": 81.12937, "loss": 0.50914, "time": 0.26924} -{"mode": "train", "epoch": 1, "iter": 11650, "lr": 0.00015, "memory": 20279, "data_time": 0.02023, "decode.loss_ce": 0.51733, "decode.acc_seg": 80.39058, "loss": 0.51733, "time": 0.26397} -{"mode": "train", "epoch": 1, "iter": 11700, "lr": 0.00015, "memory": 20279, "data_time": 0.02055, "decode.loss_ce": 0.49527, "decode.acc_seg": 81.39649, "loss": 0.49527, "time": 0.26867} -{"mode": "train", "epoch": 1, "iter": 11750, "lr": 0.00015, "memory": 20279, "data_time": 0.01971, "decode.loss_ce": 0.4979, "decode.acc_seg": 81.60689, "loss": 0.4979, "time": 0.26454} -{"mode": "train", "epoch": 1, "iter": 11800, "lr": 0.00015, "memory": 20279, "data_time": 0.02074, "decode.loss_ce": 0.51344, "decode.acc_seg": 81.01902, "loss": 0.51344, "time": 0.26528} -{"mode": "train", "epoch": 1, "iter": 11850, "lr": 0.00015, "memory": 20279, "data_time": 0.01944, "decode.loss_ce": 0.50556, "decode.acc_seg": 80.82504, "loss": 0.50556, "time": 0.26224} -{"mode": "train", "epoch": 1, "iter": 11900, "lr": 0.00015, "memory": 20279, "data_time": 0.0209, "decode.loss_ce": 0.51208, "decode.acc_seg": 80.94689, "loss": 0.51208, "time": 0.26995} -{"mode": "train", "epoch": 1, "iter": 11950, "lr": 0.00015, "memory": 20279, "data_time": 0.01933, "decode.loss_ce": 0.51865, "decode.acc_seg": 80.48815, "loss": 0.51865, "time": 0.2649} -{"mode": "train", "epoch": 1, "iter": 12000, "lr": 0.00015, "memory": 20279, "data_time": 0.02112, "decode.loss_ce": 0.52908, "decode.acc_seg": 80.29805, "loss": 0.52908, "time": 0.28771} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 
0.00015, "aAcc": 0.7838, "mIoU": 0.3828, "mAcc": 0.4906, "IoU.wall": 0.7178, "IoU.building": 0.7935, "IoU.sky": 0.935, "IoU.floor": 0.7772, "IoU.tree": 0.7054, "IoU.ceiling": 0.7759, "IoU.road": 0.7814, "IoU.bed ": 0.8207, "IoU.windowpane": 0.5367, "IoU.grass": 0.5416, "IoU.cabinet": 0.5045, "IoU.sidewalk": 0.5528, "IoU.person": 0.7716, "IoU.earth": 0.3273, "IoU.door": 0.3485, "IoU.table": 0.4861, "IoU.mountain": 0.5559, "IoU.plant": 0.4497, "IoU.curtain": 0.6615, "IoU.chair": 0.4909, "IoU.car": 0.8024, "IoU.water": 0.4199, "IoU.painting": 0.5992, "IoU.sofa": 0.5495, "IoU.shelf": 0.3691, "IoU.house": 0.1366, "IoU.sea": 0.5021, "IoU.mirror": 0.4931, "IoU.rug": 0.48, "IoU.field": 0.2526, "IoU.armchair": 0.2985, "IoU.seat": 0.5741, "IoU.fence": 0.3351, "IoU.desk": 0.3525, "IoU.rock": 0.3173, "IoU.wardrobe": 0.368, "IoU.lamp": 0.5426, "IoU.bathtub": 0.6159, "IoU.railing": 0.2639, "IoU.cushion": 0.4494, "IoU.base": 0.1284, "IoU.box": 0.1302, "IoU.column": 0.3949, "IoU.signboard": 0.3027, "IoU.chest of drawers": 0.3987, "IoU.counter": 0.1672, "IoU.sand": 0.3204, "IoU.sink": 0.5865, "IoU.skyscraper": 0.6539, "IoU.fireplace": 0.5778, "IoU.refrigerator": 0.5473, "IoU.grandstand": 0.4158, "IoU.path": 0.1774, "IoU.stairs": 0.2652, "IoU.runway": 0.7615, "IoU.case": 0.4752, "IoU.pool table": 0.8881, "IoU.pillow": 0.4172, "IoU.screen door": 0.467, "IoU.stairway": 0.2745, "IoU.river": 0.1394, "IoU.bridge": 0.3096, "IoU.bookcase": 0.31, "IoU.blind": 0.1996, "IoU.coffee table": 0.4936, "IoU.toilet": 0.7343, "IoU.flower": 0.3298, "IoU.book": 0.4233, "IoU.hill": 0.0326, "IoU.bench": 0.336, "IoU.countertop": 0.4852, "IoU.stove": 0.613, "IoU.palm": 0.4719, "IoU.kitchen island": 0.242, "IoU.computer": 0.5541, "IoU.swivel chair": 0.437, "IoU.boat": 0.6093, "IoU.bar": 0.1051, "IoU.arcade machine": 0.4264, "IoU.hovel": 0.0549, "IoU.bus": 0.786, "IoU.towel": 0.503, "IoU.light": 0.3593, "IoU.truck": 0.1306, "IoU.tower": 0.3195, "IoU.chandelier": 0.5726, "IoU.awning": 0.1188, "IoU.streetlight": 0.185, "IoU.booth": 0.2859, "IoU.television receiver": 0.607, "IoU.airplane": 0.4629, "IoU.dirt track": 0.0, "IoU.apparel": 0.231, "IoU.pole": 0.1219, "IoU.land": 0.0105, "IoU.bannister": 0.0332, "IoU.escalator": 0.3086, "IoU.ottoman": 0.2318, "IoU.bottle": 0.2034, "IoU.buffet": 0.3852, "IoU.poster": 0.1406, "IoU.stage": 0.0718, "IoU.van": 0.3003, "IoU.ship": 0.5311, "IoU.fountain": 0.1456, "IoU.conveyer belt": 0.6926, "IoU.canopy": 0.1321, "IoU.washer": 0.6534, "IoU.plaything": 0.0472, "IoU.swimming pool": 0.2379, "IoU.stool": 0.2139, "IoU.barrel": 0.1646, "IoU.basket": 0.1809, "IoU.waterfall": 0.4393, "IoU.tent": 0.7831, "IoU.bag": 0.0125, "IoU.minibike": 0.5555, "IoU.cradle": 0.5552, "IoU.oven": 0.2398, "IoU.ball": 0.4048, "IoU.food": 0.4873, "IoU.step": 0.0082, "IoU.tank": 0.364, "IoU.trade name": 0.0383, "IoU.microwave": 0.614, "IoU.pot": 0.3397, "IoU.animal": 0.5021, "IoU.bicycle": 0.5307, "IoU.lake": 0.0, "IoU.dishwasher": 0.3767, "IoU.screen": 0.6835, "IoU.blanket": 0.0149, "IoU.sculpture": 0.3005, "IoU.hood": 0.4035, "IoU.sconce": 0.1779, "IoU.vase": 0.2389, "IoU.traffic light": 0.1808, "IoU.tray": 0.0056, "IoU.ashcan": 0.3158, "IoU.fan": 0.488, "IoU.pier": 0.2191, "IoU.crt screen": 0.0058, "IoU.plate": 0.412, "IoU.monitor": 0.0885, "IoU.bulletin board": 0.4463, "IoU.shower": 0.0, "IoU.radiator": 0.4352, "IoU.glass": 0.0475, "IoU.clock": 0.1102, "IoU.flag": 0.3146, "Acc.wall": 0.8513, "Acc.building": 0.9321, "Acc.sky": 0.9671, "Acc.floor": 0.8802, "Acc.tree": 0.8596, "Acc.ceiling": 0.8625, "Acc.road": 0.8445, "Acc.bed 
": 0.9532, "Acc.windowpane": 0.7542, "Acc.grass": 0.5912, "Acc.cabinet": 0.7024, "Acc.sidewalk": 0.828, "Acc.person": 0.8826, "Acc.earth": 0.4447, "Acc.door": 0.4513, "Acc.table": 0.7064, "Acc.mountain": 0.8142, "Acc.plant": 0.5559, "Acc.curtain": 0.7512, "Acc.chair": 0.7171, "Acc.car": 0.8856, "Acc.water": 0.6245, "Acc.painting": 0.8249, "Acc.sofa": 0.7157, "Acc.shelf": 0.5823, "Acc.house": 0.1578, "Acc.sea": 0.7402, "Acc.mirror": 0.6915, "Acc.rug": 0.557, "Acc.field": 0.5591, "Acc.armchair": 0.4612, "Acc.seat": 0.7129, "Acc.fence": 0.5377, "Acc.desk": 0.5199, "Acc.rock": 0.5559, "Acc.wardrobe": 0.6237, "Acc.lamp": 0.6845, "Acc.bathtub": 0.6615, "Acc.railing": 0.3585, "Acc.cushion": 0.6114, "Acc.base": 0.1837, "Acc.box": 0.1461, "Acc.column": 0.5328, "Acc.signboard": 0.3869, "Acc.chest of drawers": 0.5272, "Acc.counter": 0.1872, "Acc.sand": 0.3736, "Acc.sink": 0.7253, "Acc.skyscraper": 0.7705, "Acc.fireplace": 0.8169, "Acc.refrigerator": 0.7694, "Acc.grandstand": 0.7128, "Acc.path": 0.2736, "Acc.stairs": 0.315, "Acc.runway": 0.9358, "Acc.case": 0.6614, "Acc.pool table": 0.9417, "Acc.pillow": 0.4623, "Acc.screen door": 0.579, "Acc.stairway": 0.4738, "Acc.river": 0.317, "Acc.bridge": 0.4615, "Acc.bookcase": 0.5068, "Acc.blind": 0.2276, "Acc.coffee table": 0.5986, "Acc.toilet": 0.858, "Acc.flower": 0.448, "Acc.book": 0.601, "Acc.hill": 0.0372, "Acc.bench": 0.4427, "Acc.countertop": 0.656, "Acc.stove": 0.7817, "Acc.palm": 0.6263, "Acc.kitchen island": 0.4397, "Acc.computer": 0.6627, "Acc.swivel chair": 0.5411, "Acc.boat": 0.7732, "Acc.bar": 0.1141, "Acc.arcade machine": 0.6319, "Acc.hovel": 0.0677, "Acc.bus": 0.9124, "Acc.towel": 0.7311, "Acc.light": 0.4007, "Acc.truck": 0.1626, "Acc.tower": 0.4612, "Acc.chandelier": 0.7161, "Acc.awning": 0.1286, "Acc.streetlight": 0.2425, "Acc.booth": 0.3175, "Acc.television receiver": 0.7724, "Acc.airplane": 0.5421, "Acc.dirt track": 0.0, "Acc.apparel": 0.3046, "Acc.pole": 0.1545, "Acc.land": 0.0126, "Acc.bannister": 0.0509, "Acc.escalator": 0.4713, "Acc.ottoman": 0.2775, "Acc.bottle": 0.2856, "Acc.buffet": 0.5449, "Acc.poster": 0.1637, "Acc.stage": 0.0934, "Acc.van": 0.4225, "Acc.ship": 0.5404, "Acc.fountain": 0.155, "Acc.conveyer belt": 0.8636, "Acc.canopy": 0.1489, "Acc.washer": 0.7453, "Acc.plaything": 0.0504, "Acc.swimming pool": 0.3124, "Acc.stool": 0.2688, "Acc.barrel": 0.6451, "Acc.basket": 0.2073, "Acc.waterfall": 0.4755, "Acc.tent": 0.9957, "Acc.bag": 0.0129, "Acc.minibike": 0.8078, "Acc.cradle": 0.6537, "Acc.oven": 0.2936, "Acc.ball": 0.5307, "Acc.food": 0.6274, "Acc.step": 0.0083, "Acc.tank": 0.4642, "Acc.trade name": 0.0396, "Acc.microwave": 0.7384, "Acc.pot": 0.3784, "Acc.animal": 0.5188, "Acc.bicycle": 0.6408, "Acc.lake": 0.0, "Acc.dishwasher": 0.386, "Acc.screen": 0.8145, "Acc.blanket": 0.0158, "Acc.sculpture": 0.3947, "Acc.hood": 0.4471, "Acc.sconce": 0.2027, "Acc.vase": 0.3639, "Acc.traffic light": 0.2174, "Acc.tray": 0.0062, "Acc.ashcan": 0.4298, "Acc.fan": 0.586, "Acc.pier": 0.2465, "Acc.crt screen": 0.0144, "Acc.plate": 0.5991, "Acc.monitor": 0.1026, "Acc.bulletin board": 0.5167, "Acc.shower": 0.0, "Acc.radiator": 0.4551, "Acc.glass": 0.0507, "Acc.clock": 0.1171, "Acc.flag": 0.3477} -{"mode": "train", "epoch": 1, "iter": 12050, "lr": 0.00015, "memory": 20279, "data_time": 0.96084, "decode.loss_ce": 0.50667, "decode.acc_seg": 81.26921, "loss": 0.50667, "time": 1.20808} -{"mode": "train", "epoch": 1, "iter": 12100, "lr": 0.00014, "memory": 20279, "data_time": 0.02086, "decode.loss_ce": 0.50393, "decode.acc_seg": 81.2225, "loss": 0.50393, 
"time": 0.26864} -{"mode": "train", "epoch": 1, "iter": 12150, "lr": 0.00014, "memory": 20279, "data_time": 0.0207, "decode.loss_ce": 0.47597, "decode.acc_seg": 82.08314, "loss": 0.47597, "time": 0.26449} -{"mode": "train", "epoch": 1, "iter": 12200, "lr": 0.00014, "memory": 20279, "data_time": 0.01991, "decode.loss_ce": 0.49901, "decode.acc_seg": 81.18989, "loss": 0.49901, "time": 0.26576} -{"mode": "train", "epoch": 1, "iter": 12250, "lr": 0.00014, "memory": 20279, "data_time": 0.01931, "decode.loss_ce": 0.48885, "decode.acc_seg": 81.80085, "loss": 0.48885, "time": 0.27417} -{"mode": "train", "epoch": 1, "iter": 12300, "lr": 0.00014, "memory": 20279, "data_time": 0.0207, "decode.loss_ce": 0.50045, "decode.acc_seg": 81.19178, "loss": 0.50045, "time": 0.26482} -{"mode": "train", "epoch": 1, "iter": 12350, "lr": 0.00014, "memory": 20279, "data_time": 0.02021, "decode.loss_ce": 0.50361, "decode.acc_seg": 81.26036, "loss": 0.50361, "time": 0.26504} -{"mode": "train", "epoch": 1, "iter": 12400, "lr": 0.00014, "memory": 20279, "data_time": 0.01969, "decode.loss_ce": 0.50559, "decode.acc_seg": 81.17591, "loss": 0.50559, "time": 0.26974} -{"mode": "train", "epoch": 1, "iter": 12450, "lr": 0.00014, "memory": 20279, "data_time": 0.02042, "decode.loss_ce": 0.50703, "decode.acc_seg": 80.86167, "loss": 0.50703, "time": 0.26646} -{"mode": "train", "epoch": 1, "iter": 12500, "lr": 0.00014, "memory": 20279, "data_time": 0.0199, "decode.loss_ce": 0.50361, "decode.acc_seg": 81.18564, "loss": 0.50361, "time": 0.26701} -{"mode": "train", "epoch": 1, "iter": 12550, "lr": 0.00014, "memory": 20279, "data_time": 0.02019, "decode.loss_ce": 0.48373, "decode.acc_seg": 81.67535, "loss": 0.48373, "time": 0.26643} -{"mode": "train", "epoch": 1, "iter": 12600, "lr": 0.00014, "memory": 20279, "data_time": 0.02008, "decode.loss_ce": 0.52653, "decode.acc_seg": 80.41248, "loss": 0.52653, "time": 0.26719} -{"mode": "train", "epoch": 1, "iter": 12650, "lr": 0.00014, "memory": 20279, "data_time": 0.02042, "decode.loss_ce": 0.50111, "decode.acc_seg": 81.16701, "loss": 0.50111, "time": 0.26963} -{"mode": "train", "epoch": 1, "iter": 12700, "lr": 0.00014, "memory": 20279, "data_time": 0.01938, "decode.loss_ce": 0.49812, "decode.acc_seg": 81.3587, "loss": 0.49812, "time": 0.26111} -{"mode": "train", "epoch": 1, "iter": 12750, "lr": 0.00014, "memory": 20279, "data_time": 0.01953, "decode.loss_ce": 0.4877, "decode.acc_seg": 81.42843, "loss": 0.4877, "time": 0.26307} -{"mode": "train", "epoch": 1, "iter": 12800, "lr": 0.00014, "memory": 20279, "data_time": 0.01941, "decode.loss_ce": 0.49933, "decode.acc_seg": 81.27105, "loss": 0.49933, "time": 0.26792} -{"mode": "train", "epoch": 1, "iter": 12850, "lr": 0.00014, "memory": 20279, "data_time": 0.01943, "decode.loss_ce": 0.49421, "decode.acc_seg": 81.27064, "loss": 0.49421, "time": 0.26894} -{"mode": "train", "epoch": 1, "iter": 12900, "lr": 0.00014, "memory": 20279, "data_time": 0.02043, "decode.loss_ce": 0.47806, "decode.acc_seg": 82.33007, "loss": 0.47806, "time": 0.27304} -{"mode": "train", "epoch": 1, "iter": 12950, "lr": 0.00014, "memory": 20279, "data_time": 0.02001, "decode.loss_ce": 0.51338, "decode.acc_seg": 80.81771, "loss": 0.51338, "time": 0.26841} -{"mode": "train", "epoch": 1, "iter": 13000, "lr": 0.00014, "memory": 20279, "data_time": 0.01907, "decode.loss_ce": 0.50896, "decode.acc_seg": 81.22744, "loss": 0.50896, "time": 0.26642} -{"mode": "train", "epoch": 1, "iter": 13050, "lr": 0.00014, "memory": 20279, "data_time": 0.02075, "decode.loss_ce": 0.50262, 
"decode.acc_seg": 81.42985, "loss": 0.50262, "time": 0.2697} -{"mode": "train", "epoch": 1, "iter": 13100, "lr": 0.00014, "memory": 20279, "data_time": 0.01931, "decode.loss_ce": 0.50449, "decode.acc_seg": 81.19762, "loss": 0.50449, "time": 0.26753} -{"mode": "train", "epoch": 1, "iter": 13150, "lr": 0.00014, "memory": 20279, "data_time": 0.01963, "decode.loss_ce": 0.49062, "decode.acc_seg": 81.53773, "loss": 0.49062, "time": 0.26732} -{"mode": "train", "epoch": 1, "iter": 13200, "lr": 0.00014, "memory": 20279, "data_time": 0.01971, "decode.loss_ce": 0.47353, "decode.acc_seg": 82.32614, "loss": 0.47353, "time": 0.26528} -{"mode": "train", "epoch": 1, "iter": 13250, "lr": 0.00014, "memory": 20279, "data_time": 0.02025, "decode.loss_ce": 0.47653, "decode.acc_seg": 81.96702, "loss": 0.47653, "time": 0.27762} -{"mode": "train", "epoch": 1, "iter": 13300, "lr": 0.00014, "memory": 20279, "data_time": 0.02089, "decode.loss_ce": 0.46811, "decode.acc_seg": 82.064, "loss": 0.46811, "time": 0.26556} -{"mode": "train", "epoch": 1, "iter": 13350, "lr": 0.00014, "memory": 20279, "data_time": 0.0204, "decode.loss_ce": 0.50923, "decode.acc_seg": 81.01142, "loss": 0.50923, "time": 0.26748} -{"mode": "train", "epoch": 1, "iter": 13400, "lr": 0.00014, "memory": 20279, "data_time": 0.01968, "decode.loss_ce": 0.49182, "decode.acc_seg": 81.58473, "loss": 0.49182, "time": 0.26984} -{"mode": "train", "epoch": 1, "iter": 13450, "lr": 0.00014, "memory": 20279, "data_time": 0.01996, "decode.loss_ce": 0.49107, "decode.acc_seg": 81.22425, "loss": 0.49107, "time": 0.26838} -{"mode": "train", "epoch": 1, "iter": 13500, "lr": 0.00014, "memory": 20279, "data_time": 0.02036, "decode.loss_ce": 0.47305, "decode.acc_seg": 82.24012, "loss": 0.47305, "time": 0.2675} -{"mode": "train", "epoch": 1, "iter": 13550, "lr": 0.00014, "memory": 20279, "data_time": 0.01992, "decode.loss_ce": 0.46832, "decode.acc_seg": 82.04106, "loss": 0.46832, "time": 0.26662} -{"mode": "train", "epoch": 1, "iter": 13600, "lr": 0.00014, "memory": 20279, "data_time": 0.01959, "decode.loss_ce": 0.47922, "decode.acc_seg": 81.87354, "loss": 0.47922, "time": 0.26831} -{"mode": "train", "epoch": 1, "iter": 13650, "lr": 0.00014, "memory": 20279, "data_time": 0.01972, "decode.loss_ce": 0.48109, "decode.acc_seg": 81.94137, "loss": 0.48109, "time": 0.26826} -{"mode": "train", "epoch": 1, "iter": 13700, "lr": 0.00014, "memory": 20279, "data_time": 0.02045, "decode.loss_ce": 0.46591, "decode.acc_seg": 82.39169, "loss": 0.46591, "time": 0.2687} -{"mode": "train", "epoch": 1, "iter": 13750, "lr": 0.00014, "memory": 20279, "data_time": 0.01936, "decode.loss_ce": 0.47166, "decode.acc_seg": 82.22444, "loss": 0.47166, "time": 0.26562} -{"mode": "train", "epoch": 1, "iter": 13800, "lr": 0.00014, "memory": 20279, "data_time": 0.01962, "decode.loss_ce": 0.4767, "decode.acc_seg": 82.02298, "loss": 0.4767, "time": 0.26727} -{"mode": "train", "epoch": 1, "iter": 13850, "lr": 0.00014, "memory": 20279, "data_time": 0.02088, "decode.loss_ce": 0.48535, "decode.acc_seg": 81.81899, "loss": 0.48535, "time": 0.26663} -{"mode": "train", "epoch": 1, "iter": 13900, "lr": 0.00014, "memory": 20279, "data_time": 0.02009, "decode.loss_ce": 0.47914, "decode.acc_seg": 81.99121, "loss": 0.47914, "time": 0.27235} -{"mode": "train", "epoch": 1, "iter": 13950, "lr": 0.00014, "memory": 20279, "data_time": 0.02064, "decode.loss_ce": 0.46498, "decode.acc_seg": 82.43076, "loss": 0.46498, "time": 0.26652} -{"mode": "train", "epoch": 1, "iter": 14000, "lr": 0.00014, "memory": 20279, "data_time": 
0.01956, "decode.loss_ce": 0.47356, "decode.acc_seg": 82.43245, "loss": 0.47356, "time": 0.26467} -{"mode": "train", "epoch": 1, "iter": 14050, "lr": 0.00014, "memory": 20279, "data_time": 0.01935, "decode.loss_ce": 0.47516, "decode.acc_seg": 82.31105, "loss": 0.47516, "time": 0.26978} -{"mode": "train", "epoch": 1, "iter": 14100, "lr": 0.00014, "memory": 20279, "data_time": 0.01969, "decode.loss_ce": 0.47308, "decode.acc_seg": 82.23698, "loss": 0.47308, "time": 0.26644} -{"mode": "train", "epoch": 1, "iter": 14150, "lr": 0.00014, "memory": 20279, "data_time": 0.01865, "decode.loss_ce": 0.4906, "decode.acc_seg": 81.65906, "loss": 0.4906, "time": 0.26651} -{"mode": "train", "epoch": 1, "iter": 14200, "lr": 0.00014, "memory": 20279, "data_time": 0.01987, "decode.loss_ce": 0.46552, "decode.acc_seg": 82.60391, "loss": 0.46552, "time": 0.26924} -{"mode": "train", "epoch": 1, "iter": 14250, "lr": 0.00013, "memory": 20279, "data_time": 0.0191, "decode.loss_ce": 0.46196, "decode.acc_seg": 82.53095, "loss": 0.46196, "time": 0.2781} -{"mode": "train", "epoch": 1, "iter": 14300, "lr": 0.00013, "memory": 20279, "data_time": 0.0203, "decode.loss_ce": 0.48214, "decode.acc_seg": 81.8955, "loss": 0.48214, "time": 0.26302} -{"mode": "train", "epoch": 1, "iter": 14350, "lr": 0.00013, "memory": 20279, "data_time": 0.01923, "decode.loss_ce": 0.48935, "decode.acc_seg": 81.68956, "loss": 0.48935, "time": 0.26397} -{"mode": "train", "epoch": 1, "iter": 14400, "lr": 0.00013, "memory": 20279, "data_time": 0.01966, "decode.loss_ce": 0.44481, "decode.acc_seg": 82.90791, "loss": 0.44481, "time": 0.27134} -{"mode": "train", "epoch": 1, "iter": 14450, "lr": 0.00013, "memory": 20279, "data_time": 0.01978, "decode.loss_ce": 0.49313, "decode.acc_seg": 81.22574, "loss": 0.49313, "time": 0.26855} -{"mode": "train", "epoch": 1, "iter": 14500, "lr": 0.00013, "memory": 20279, "data_time": 0.02237, "decode.loss_ce": 0.45591, "decode.acc_seg": 82.76916, "loss": 0.45591, "time": 0.26787} -{"mode": "train", "epoch": 1, "iter": 14550, "lr": 0.00013, "memory": 20279, "data_time": 0.02054, "decode.loss_ce": 0.45084, "decode.acc_seg": 83.0871, "loss": 0.45084, "time": 0.26493} -{"mode": "train", "epoch": 1, "iter": 14600, "lr": 0.00013, "memory": 20279, "data_time": 0.01973, "decode.loss_ce": 0.46154, "decode.acc_seg": 82.63273, "loss": 0.46154, "time": 0.26372} -{"mode": "train", "epoch": 1, "iter": 14650, "lr": 0.00013, "memory": 20279, "data_time": 0.01853, "decode.loss_ce": 0.45403, "decode.acc_seg": 82.72464, "loss": 0.45403, "time": 0.26861} -{"mode": "train", "epoch": 1, "iter": 14700, "lr": 0.00013, "memory": 20279, "data_time": 0.02061, "decode.loss_ce": 0.45143, "decode.acc_seg": 82.95343, "loss": 0.45143, "time": 0.26258} -{"mode": "train", "epoch": 1, "iter": 14750, "lr": 0.00013, "memory": 20279, "data_time": 0.01899, "decode.loss_ce": 0.45891, "decode.acc_seg": 82.51782, "loss": 0.45891, "time": 0.26472} -{"mode": "train", "epoch": 1, "iter": 14800, "lr": 0.00013, "memory": 20279, "data_time": 0.01987, "decode.loss_ce": 0.45349, "decode.acc_seg": 82.82391, "loss": 0.45349, "time": 0.26512} -{"mode": "train", "epoch": 1, "iter": 14850, "lr": 0.00013, "memory": 20279, "data_time": 0.01989, "decode.loss_ce": 0.45479, "decode.acc_seg": 83.07614, "loss": 0.45479, "time": 0.26467} -{"mode": "train", "epoch": 1, "iter": 14900, "lr": 0.00013, "memory": 20279, "data_time": 0.02023, "decode.loss_ce": 0.47096, "decode.acc_seg": 82.45851, "loss": 0.47096, "time": 0.27219} -{"mode": "train", "epoch": 1, "iter": 14950, "lr": 0.00013, 
"memory": 20279, "data_time": 0.02086, "decode.loss_ce": 0.47642, "decode.acc_seg": 82.27077, "loss": 0.47642, "time": 0.26621} -{"mode": "train", "epoch": 1, "iter": 15000, "lr": 0.00013, "memory": 20279, "data_time": 0.01932, "decode.loss_ce": 0.43515, "decode.acc_seg": 83.26998, "loss": 0.43515, "time": 0.26183} -{"mode": "train", "epoch": 1, "iter": 15050, "lr": 0.00013, "memory": 20279, "data_time": 0.01963, "decode.loss_ce": 0.47628, "decode.acc_seg": 82.38987, "loss": 0.47628, "time": 0.27257} -{"mode": "train", "epoch": 1, "iter": 15100, "lr": 0.00013, "memory": 20279, "data_time": 0.01966, "decode.loss_ce": 0.4698, "decode.acc_seg": 82.11087, "loss": 0.4698, "time": 0.26737} -{"mode": "train", "epoch": 1, "iter": 15150, "lr": 0.00013, "memory": 20279, "data_time": 0.01973, "decode.loss_ce": 0.46795, "decode.acc_seg": 82.54791, "loss": 0.46795, "time": 0.26338} -{"mode": "train", "epoch": 1, "iter": 15200, "lr": 0.00013, "memory": 20279, "data_time": 0.01947, "decode.loss_ce": 0.44554, "decode.acc_seg": 82.94932, "loss": 0.44554, "time": 0.26589} -{"mode": "train", "epoch": 1, "iter": 15250, "lr": 0.00013, "memory": 20279, "data_time": 0.01987, "decode.loss_ce": 0.46344, "decode.acc_seg": 82.2799, "loss": 0.46344, "time": 0.271} -{"mode": "train", "epoch": 1, "iter": 15300, "lr": 0.00013, "memory": 20279, "data_time": 0.02067, "decode.loss_ce": 0.44162, "decode.acc_seg": 82.96609, "loss": 0.44162, "time": 0.2656} -{"mode": "train", "epoch": 1, "iter": 15350, "lr": 0.00013, "memory": 20279, "data_time": 0.02047, "decode.loss_ce": 0.45849, "decode.acc_seg": 82.64751, "loss": 0.45849, "time": 0.26254} -{"mode": "train", "epoch": 1, "iter": 15400, "lr": 0.00013, "memory": 20279, "data_time": 0.02038, "decode.loss_ce": 0.46181, "decode.acc_seg": 82.52651, "loss": 0.46181, "time": 0.26864} -{"mode": "train", "epoch": 1, "iter": 15450, "lr": 0.00013, "memory": 20279, "data_time": 0.01999, "decode.loss_ce": 0.47834, "decode.acc_seg": 82.15665, "loss": 0.47834, "time": 0.26809} -{"mode": "train", "epoch": 1, "iter": 15500, "lr": 0.00013, "memory": 20279, "data_time": 0.0204, "decode.loss_ce": 0.44635, "decode.acc_seg": 82.74585, "loss": 0.44635, "time": 0.26153} -{"mode": "train", "epoch": 1, "iter": 15550, "lr": 0.00013, "memory": 20279, "data_time": 0.01976, "decode.loss_ce": 0.44369, "decode.acc_seg": 83.15814, "loss": 0.44369, "time": 0.26466} -{"mode": "train", "epoch": 1, "iter": 15600, "lr": 0.00013, "memory": 20279, "data_time": 0.01956, "decode.loss_ce": 0.44691, "decode.acc_seg": 83.22027, "loss": 0.44691, "time": 0.26317} -{"mode": "train", "epoch": 1, "iter": 15650, "lr": 0.00013, "memory": 20279, "data_time": 0.01952, "decode.loss_ce": 0.45341, "decode.acc_seg": 82.92913, "loss": 0.45341, "time": 0.26584} -{"mode": "train", "epoch": 1, "iter": 15700, "lr": 0.00013, "memory": 20279, "data_time": 0.02084, "decode.loss_ce": 0.44209, "decode.acc_seg": 83.23611, "loss": 0.44209, "time": 0.26267} -{"mode": "train", "epoch": 1, "iter": 15750, "lr": 0.00013, "memory": 20279, "data_time": 0.01954, "decode.loss_ce": 0.44809, "decode.acc_seg": 82.88017, "loss": 0.44809, "time": 0.26563} -{"mode": "train", "epoch": 1, "iter": 15800, "lr": 0.00013, "memory": 20279, "data_time": 0.01963, "decode.loss_ce": 0.42382, "decode.acc_seg": 83.95307, "loss": 0.42382, "time": 0.26528} -{"mode": "train", "epoch": 1, "iter": 15850, "lr": 0.00013, "memory": 20279, "data_time": 0.02047, "decode.loss_ce": 0.43703, "decode.acc_seg": 83.40085, "loss": 0.43703, "time": 0.26442} -{"mode": "train", "epoch": 1, 
"iter": 15900, "lr": 0.00013, "memory": 20279, "data_time": 0.01952, "decode.loss_ce": 0.44765, "decode.acc_seg": 82.8758, "loss": 0.44765, "time": 0.26828} -{"mode": "train", "epoch": 1, "iter": 15950, "lr": 0.00013, "memory": 20279, "data_time": 0.02014, "decode.loss_ce": 0.44682, "decode.acc_seg": 83.21503, "loss": 0.44682, "time": 0.26521} -{"mode": "train", "epoch": 1, "iter": 16000, "lr": 0.00013, "memory": 20279, "data_time": 0.02043, "decode.loss_ce": 0.44605, "decode.acc_seg": 83.11237, "loss": 0.44605, "time": 0.28729} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00013, "aAcc": 0.7966, "mIoU": 0.4016, "mAcc": 0.5141, "IoU.wall": 0.7266, "IoU.building": 0.8042, "IoU.sky": 0.9369, "IoU.floor": 0.7876, "IoU.tree": 0.7123, "IoU.ceiling": 0.8025, "IoU.road": 0.8005, "IoU.bed ": 0.8502, "IoU.windowpane": 0.5687, "IoU.grass": 0.669, "IoU.cabinet": 0.5458, "IoU.sidewalk": 0.6049, "IoU.person": 0.7683, "IoU.earth": 0.3138, "IoU.door": 0.3627, "IoU.table": 0.4827, "IoU.mountain": 0.551, "IoU.plant": 0.4918, "IoU.curtain": 0.6828, "IoU.chair": 0.5017, "IoU.car": 0.7951, "IoU.water": 0.4288, "IoU.painting": 0.6349, "IoU.sofa": 0.5559, "IoU.shelf": 0.3692, "IoU.house": 0.3733, "IoU.sea": 0.5354, "IoU.mirror": 0.5143, "IoU.rug": 0.4834, "IoU.field": 0.3355, "IoU.armchair": 0.3339, "IoU.seat": 0.516, "IoU.fence": 0.3441, "IoU.desk": 0.3977, "IoU.rock": 0.3639, "IoU.wardrobe": 0.3766, "IoU.lamp": 0.5456, "IoU.bathtub": 0.6855, "IoU.railing": 0.279, "IoU.cushion": 0.4605, "IoU.base": 0.1514, "IoU.box": 0.1718, "IoU.column": 0.3761, "IoU.signboard": 0.3094, "IoU.chest of drawers": 0.4193, "IoU.counter": 0.2231, "IoU.sand": 0.2937, "IoU.sink": 0.6381, "IoU.skyscraper": 0.5687, "IoU.fireplace": 0.5755, "IoU.refrigerator": 0.6051, "IoU.grandstand": 0.3949, "IoU.path": 0.2029, "IoU.stairs": 0.3073, "IoU.runway": 0.7095, "IoU.case": 0.4698, "IoU.pool table": 0.9118, "IoU.pillow": 0.4048, "IoU.screen door": 0.5335, "IoU.stairway": 0.3034, "IoU.river": 0.1893, "IoU.bridge": 0.3975, "IoU.bookcase": 0.1999, "IoU.blind": 0.25, "IoU.coffee table": 0.4681, "IoU.toilet": 0.7973, "IoU.flower": 0.3118, "IoU.book": 0.4236, "IoU.hill": 0.0453, "IoU.bench": 0.3174, "IoU.countertop": 0.4477, "IoU.stove": 0.6525, "IoU.palm": 0.4455, "IoU.kitchen island": 0.2416, "IoU.computer": 0.5441, "IoU.swivel chair": 0.43, "IoU.boat": 0.6081, "IoU.bar": 0.3151, "IoU.arcade machine": 0.3818, "IoU.hovel": 0.1197, "IoU.bus": 0.7955, "IoU.towel": 0.5475, "IoU.light": 0.3764, "IoU.truck": 0.3079, "IoU.tower": 0.2604, "IoU.chandelier": 0.5854, "IoU.awning": 0.1602, "IoU.streetlight": 0.1605, "IoU.booth": 0.3409, "IoU.television receiver": 0.6087, "IoU.airplane": 0.533, "IoU.dirt track": 0.0134, "IoU.apparel": 0.2978, "IoU.pole": 0.1137, "IoU.land": 0.0003, "IoU.bannister": 0.01, "IoU.escalator": 0.1876, "IoU.ottoman": 0.3243, "IoU.bottle": 0.2835, "IoU.buffet": 0.3254, "IoU.poster": 0.2107, "IoU.stage": 0.1161, "IoU.van": 0.2768, "IoU.ship": 0.723, "IoU.fountain": 0.1917, "IoU.conveyer belt": 0.3723, "IoU.canopy": 0.099, "IoU.washer": 0.645, "IoU.plaything": 0.2807, "IoU.swimming pool": 0.3458, "IoU.stool": 0.2544, "IoU.barrel": 0.1356, "IoU.basket": 0.2371, "IoU.waterfall": 0.5916, "IoU.tent": 0.8559, "IoU.bag": 0.0824, "IoU.minibike": 0.6341, "IoU.cradle": 0.6803, "IoU.oven": 0.1856, "IoU.ball": 0.1452, "IoU.food": 0.2947, "IoU.step": 0.0443, "IoU.tank": 0.3529, "IoU.trade name": 0.1694, "IoU.microwave": 0.3894, "IoU.pot": 0.333, "IoU.animal": 0.5229, "IoU.bicycle": 0.4987, "IoU.lake": 0.0032, "IoU.dishwasher": 0.5054, 
"IoU.screen": 0.6583, "IoU.blanket": 0.0484, "IoU.sculpture": 0.2765, "IoU.hood": 0.4411, "IoU.sconce": 0.2473, "IoU.vase": 0.2562, "IoU.traffic light": 0.212, "IoU.tray": 0.0158, "IoU.ashcan": 0.3435, "IoU.fan": 0.4857, "IoU.pier": 0.3114, "IoU.crt screen": 0.0372, "IoU.plate": 0.377, "IoU.monitor": 0.2381, "IoU.bulletin board": 0.4525, "IoU.shower": 0.0, "IoU.radiator": 0.4393, "IoU.glass": 0.0681, "IoU.clock": 0.1828, "IoU.flag": 0.2839, "Acc.wall": 0.8578, "Acc.building": 0.9169, "Acc.sky": 0.9683, "Acc.floor": 0.8947, "Acc.tree": 0.8641, "Acc.ceiling": 0.9075, "Acc.road": 0.8967, "Acc.bed ": 0.9429, "Acc.windowpane": 0.7365, "Acc.grass": 0.849, "Acc.cabinet": 0.712, "Acc.sidewalk": 0.7659, "Acc.person": 0.9001, "Acc.earth": 0.422, "Acc.door": 0.4856, "Acc.table": 0.6132, "Acc.mountain": 0.7657, "Acc.plant": 0.6091, "Acc.curtain": 0.8162, "Acc.chair": 0.6312, "Acc.car": 0.923, "Acc.water": 0.5803, "Acc.painting": 0.8203, "Acc.sofa": 0.7739, "Acc.shelf": 0.5819, "Acc.house": 0.5406, "Acc.sea": 0.8936, "Acc.mirror": 0.669, "Acc.rug": 0.5752, "Acc.field": 0.46, "Acc.armchair": 0.5002, "Acc.seat": 0.6488, "Acc.fence": 0.4092, "Acc.desk": 0.589, "Acc.rock": 0.5744, "Acc.wardrobe": 0.5397, "Acc.lamp": 0.6493, "Acc.bathtub": 0.7695, "Acc.railing": 0.4497, "Acc.cushion": 0.6743, "Acc.base": 0.2902, "Acc.box": 0.2522, "Acc.column": 0.5323, "Acc.signboard": 0.4204, "Acc.chest of drawers": 0.5366, "Acc.counter": 0.3431, "Acc.sand": 0.4168, "Acc.sink": 0.7147, "Acc.skyscraper": 0.6644, "Acc.fireplace": 0.8225, "Acc.refrigerator": 0.687, "Acc.grandstand": 0.6946, "Acc.path": 0.2892, "Acc.stairs": 0.4173, "Acc.runway": 0.8299, "Acc.case": 0.6948, "Acc.pool table": 0.9587, "Acc.pillow": 0.4416, "Acc.screen door": 0.7414, "Acc.stairway": 0.3602, "Acc.river": 0.3456, "Acc.bridge": 0.4691, "Acc.bookcase": 0.2219, "Acc.blind": 0.2668, "Acc.coffee table": 0.7843, "Acc.toilet": 0.8714, "Acc.flower": 0.3795, "Acc.book": 0.5786, "Acc.hill": 0.0703, "Acc.bench": 0.505, "Acc.countertop": 0.5884, "Acc.stove": 0.7789, "Acc.palm": 0.6214, "Acc.kitchen island": 0.4614, "Acc.computer": 0.6104, "Acc.swivel chair": 0.5412, "Acc.boat": 0.7995, "Acc.bar": 0.3821, "Acc.arcade machine": 0.683, "Acc.hovel": 0.178, "Acc.bus": 0.9411, "Acc.towel": 0.6602, "Acc.light": 0.4208, "Acc.truck": 0.4073, "Acc.tower": 0.4459, "Acc.chandelier": 0.7793, "Acc.awning": 0.1895, "Acc.streetlight": 0.1934, "Acc.booth": 0.3493, "Acc.television receiver": 0.7308, "Acc.airplane": 0.602, "Acc.dirt track": 0.015, "Acc.apparel": 0.4768, "Acc.pole": 0.1468, "Acc.land": 0.0006, "Acc.bannister": 0.0118, "Acc.escalator": 0.2388, "Acc.ottoman": 0.4404, "Acc.bottle": 0.4077, "Acc.buffet": 0.3637, "Acc.poster": 0.2424, "Acc.stage": 0.1847, "Acc.van": 0.3643, "Acc.ship": 0.7978, "Acc.fountain": 0.2104, "Acc.conveyer belt": 0.7553, "Acc.canopy": 0.1108, "Acc.washer": 0.6854, "Acc.plaything": 0.3975, "Acc.swimming pool": 0.5, "Acc.stool": 0.4053, "Acc.barrel": 0.6587, "Acc.basket": 0.3128, "Acc.waterfall": 0.6478, "Acc.tent": 0.9755, "Acc.bag": 0.1037, "Acc.minibike": 0.7641, "Acc.cradle": 0.7993, "Acc.oven": 0.4052, "Acc.ball": 0.1597, "Acc.food": 0.3214, "Acc.step": 0.0519, "Acc.tank": 0.3718, "Acc.trade name": 0.1917, "Acc.microwave": 0.4275, "Acc.pot": 0.3788, "Acc.animal": 0.5389, "Acc.bicycle": 0.6337, "Acc.lake": 0.0033, "Acc.dishwasher": 0.5943, "Acc.screen": 0.8234, "Acc.blanket": 0.0536, "Acc.sculpture": 0.4786, "Acc.hood": 0.535, "Acc.sconce": 0.2798, "Acc.vase": 0.3723, "Acc.traffic light": 0.4325, "Acc.tray": 0.0206, "Acc.ashcan": 0.4786, 
"Acc.fan": 0.595, "Acc.pier": 0.4027, "Acc.crt screen": 0.0711, "Acc.plate": 0.4656, "Acc.monitor": 0.323, "Acc.bulletin board": 0.485, "Acc.shower": 0.0, "Acc.radiator": 0.4644, "Acc.glass": 0.0728, "Acc.clock": 0.2124, "Acc.flag": 0.3067} -{"mode": "train", "epoch": 1, "iter": 16050, "lr": 0.00013, "memory": 20279, "data_time": 0.95838, "decode.loss_ce": 0.46414, "decode.acc_seg": 82.62435, "loss": 0.46414, "time": 1.20685} -{"mode": "train", "epoch": 1, "iter": 16100, "lr": 0.00013, "memory": 20279, "data_time": 0.02195, "decode.loss_ce": 0.43736, "decode.acc_seg": 83.29194, "loss": 0.43736, "time": 0.26645} -{"mode": "train", "epoch": 1, "iter": 16150, "lr": 0.00013, "memory": 20279, "data_time": 0.01976, "decode.loss_ce": 0.45569, "decode.acc_seg": 82.9552, "loss": 0.45569, "time": 0.26638} -{"mode": "train", "epoch": 1, "iter": 16200, "lr": 0.00013, "memory": 20279, "data_time": 0.02006, "decode.loss_ce": 0.45647, "decode.acc_seg": 82.69323, "loss": 0.45647, "time": 0.26389} -{"mode": "train", "epoch": 1, "iter": 16250, "lr": 0.00013, "memory": 20279, "data_time": 0.01988, "decode.loss_ce": 0.44915, "decode.acc_seg": 83.25386, "loss": 0.44915, "time": 0.26907} -{"mode": "train", "epoch": 1, "iter": 16300, "lr": 0.00013, "memory": 20279, "data_time": 0.02012, "decode.loss_ce": 0.44068, "decode.acc_seg": 83.41656, "loss": 0.44068, "time": 0.2635} -{"mode": "train", "epoch": 1, "iter": 16350, "lr": 0.00013, "memory": 20279, "data_time": 0.01893, "decode.loss_ce": 0.42991, "decode.acc_seg": 83.34856, "loss": 0.42991, "time": 0.26952} -{"mode": "train", "epoch": 1, "iter": 16400, "lr": 0.00012, "memory": 20279, "data_time": 0.01915, "decode.loss_ce": 0.43328, "decode.acc_seg": 83.6575, "loss": 0.43328, "time": 0.27479} -{"mode": "train", "epoch": 1, "iter": 16450, "lr": 0.00012, "memory": 20279, "data_time": 0.02056, "decode.loss_ce": 0.43646, "decode.acc_seg": 83.23306, "loss": 0.43646, "time": 0.26704} -{"mode": "train", "epoch": 1, "iter": 16500, "lr": 0.00012, "memory": 20279, "data_time": 0.0205, "decode.loss_ce": 0.44464, "decode.acc_seg": 83.35279, "loss": 0.44464, "time": 0.26808} -{"mode": "train", "epoch": 1, "iter": 16550, "lr": 0.00012, "memory": 20279, "data_time": 0.02118, "decode.loss_ce": 0.4232, "decode.acc_seg": 83.68212, "loss": 0.4232, "time": 0.26348} -{"mode": "train", "epoch": 1, "iter": 16600, "lr": 0.00012, "memory": 20279, "data_time": 0.02109, "decode.loss_ce": 0.44108, "decode.acc_seg": 83.47268, "loss": 0.44108, "time": 0.26765} -{"mode": "train", "epoch": 1, "iter": 16650, "lr": 0.00012, "memory": 20279, "data_time": 0.0192, "decode.loss_ce": 0.44499, "decode.acc_seg": 83.04609, "loss": 0.44499, "time": 0.26688} -{"mode": "train", "epoch": 1, "iter": 16700, "lr": 0.00012, "memory": 20279, "data_time": 0.01961, "decode.loss_ce": 0.44435, "decode.acc_seg": 83.08733, "loss": 0.44435, "time": 0.26525} -{"mode": "train", "epoch": 1, "iter": 16750, "lr": 0.00012, "memory": 20279, "data_time": 0.0203, "decode.loss_ce": 0.44618, "decode.acc_seg": 83.13051, "loss": 0.44618, "time": 0.27276} -{"mode": "train", "epoch": 1, "iter": 16800, "lr": 0.00012, "memory": 20279, "data_time": 0.01901, "decode.loss_ce": 0.44369, "decode.acc_seg": 83.43132, "loss": 0.44369, "time": 0.26298} -{"mode": "train", "epoch": 1, "iter": 16850, "lr": 0.00012, "memory": 20279, "data_time": 0.02032, "decode.loss_ce": 0.45948, "decode.acc_seg": 82.89707, "loss": 0.45948, "time": 0.26475} -{"mode": "train", "epoch": 1, "iter": 16900, "lr": 0.00012, "memory": 20279, "data_time": 0.02152, 
"decode.loss_ce": 0.43467, "decode.acc_seg": 83.4904, "loss": 0.43467, "time": 0.26626} -{"mode": "train", "epoch": 1, "iter": 16950, "lr": 0.00012, "memory": 20279, "data_time": 0.02051, "decode.loss_ce": 0.43158, "decode.acc_seg": 83.68188, "loss": 0.43158, "time": 0.26516} -{"mode": "train", "epoch": 1, "iter": 17000, "lr": 0.00012, "memory": 20279, "data_time": 0.0217, "decode.loss_ce": 0.44115, "decode.acc_seg": 83.37217, "loss": 0.44115, "time": 0.2665} -{"mode": "train", "epoch": 1, "iter": 17050, "lr": 0.00012, "memory": 20279, "data_time": 0.01976, "decode.loss_ce": 0.43005, "decode.acc_seg": 83.58093, "loss": 0.43005, "time": 0.26957} -{"mode": "train", "epoch": 1, "iter": 17100, "lr": 0.00012, "memory": 20279, "data_time": 0.01987, "decode.loss_ce": 0.42122, "decode.acc_seg": 84.03323, "loss": 0.42122, "time": 0.26799} -{"mode": "train", "epoch": 1, "iter": 17150, "lr": 0.00012, "memory": 20279, "data_time": 0.02173, "decode.loss_ce": 0.42752, "decode.acc_seg": 84.01649, "loss": 0.42752, "time": 0.26414} -{"mode": "train", "epoch": 1, "iter": 17200, "lr": 0.00012, "memory": 20279, "data_time": 0.02004, "decode.loss_ce": 0.43453, "decode.acc_seg": 83.46498, "loss": 0.43453, "time": 0.26271} -{"mode": "train", "epoch": 1, "iter": 17250, "lr": 0.00012, "memory": 20279, "data_time": 0.01958, "decode.loss_ce": 0.43281, "decode.acc_seg": 83.57622, "loss": 0.43281, "time": 0.27384} -{"mode": "train", "epoch": 1, "iter": 17300, "lr": 0.00012, "memory": 20279, "data_time": 0.02042, "decode.loss_ce": 0.43037, "decode.acc_seg": 83.71389, "loss": 0.43037, "time": 0.26723} -{"mode": "train", "epoch": 1, "iter": 17350, "lr": 0.00012, "memory": 20279, "data_time": 0.01935, "decode.loss_ce": 0.44051, "decode.acc_seg": 83.31421, "loss": 0.44051, "time": 0.26732} -{"mode": "train", "epoch": 1, "iter": 17400, "lr": 0.00012, "memory": 20279, "data_time": 0.01944, "decode.loss_ce": 0.42105, "decode.acc_seg": 83.62954, "loss": 0.42105, "time": 0.27625} -{"mode": "train", "epoch": 1, "iter": 17450, "lr": 0.00012, "memory": 20279, "data_time": 0.01991, "decode.loss_ce": 0.42682, "decode.acc_seg": 83.65753, "loss": 0.42682, "time": 0.26499} -{"mode": "train", "epoch": 1, "iter": 17500, "lr": 0.00012, "memory": 20279, "data_time": 0.01933, "decode.loss_ce": 0.42326, "decode.acc_seg": 83.92277, "loss": 0.42326, "time": 0.26277} -{"mode": "train", "epoch": 1, "iter": 17550, "lr": 0.00012, "memory": 20279, "data_time": 0.01973, "decode.loss_ce": 0.43581, "decode.acc_seg": 83.29454, "loss": 0.43581, "time": 0.26566} -{"mode": "train", "epoch": 1, "iter": 17600, "lr": 0.00012, "memory": 20279, "data_time": 0.01912, "decode.loss_ce": 0.43773, "decode.acc_seg": 83.56509, "loss": 0.43773, "time": 0.26756} -{"mode": "train", "epoch": 1, "iter": 17650, "lr": 0.00012, "memory": 20279, "data_time": 0.01977, "decode.loss_ce": 0.42712, "decode.acc_seg": 83.82296, "loss": 0.42712, "time": 0.26942} -{"mode": "train", "epoch": 1, "iter": 17700, "lr": 0.00012, "memory": 20279, "data_time": 0.02038, "decode.loss_ce": 0.44335, "decode.acc_seg": 83.16822, "loss": 0.44335, "time": 0.26801} -{"mode": "train", "epoch": 1, "iter": 17750, "lr": 0.00012, "memory": 20279, "data_time": 0.01931, "decode.loss_ce": 0.45024, "decode.acc_seg": 83.48588, "loss": 0.45024, "time": 0.26937} -{"mode": "train", "epoch": 1, "iter": 17800, "lr": 0.00012, "memory": 20279, "data_time": 0.01923, "decode.loss_ce": 0.42544, "decode.acc_seg": 83.78232, "loss": 0.42544, "time": 0.26461} -{"mode": "train", "epoch": 1, "iter": 17850, "lr": 0.00012, 
"memory": 20279, "data_time": 0.02012, "decode.loss_ce": 0.43521, "decode.acc_seg": 83.66663, "loss": 0.43521, "time": 0.2621} -{"mode": "train", "epoch": 1, "iter": 17900, "lr": 0.00012, "memory": 20279, "data_time": 0.02005, "decode.loss_ce": 0.41954, "decode.acc_seg": 84.2075, "loss": 0.41954, "time": 0.26986} -{"mode": "train", "epoch": 1, "iter": 17950, "lr": 0.00012, "memory": 20279, "data_time": 0.01995, "decode.loss_ce": 0.43924, "decode.acc_seg": 83.49456, "loss": 0.43924, "time": 0.26449} -{"mode": "train", "epoch": 1, "iter": 18000, "lr": 0.00012, "memory": 20279, "data_time": 0.01906, "decode.loss_ce": 0.42869, "decode.acc_seg": 83.95636, "loss": 0.42869, "time": 0.26953} -{"mode": "train", "epoch": 1, "iter": 18050, "lr": 0.00012, "memory": 20279, "data_time": 0.01975, "decode.loss_ce": 0.42212, "decode.acc_seg": 83.76468, "loss": 0.42212, "time": 0.2609} -{"mode": "train", "epoch": 1, "iter": 18100, "lr": 0.00012, "memory": 20279, "data_time": 0.02107, "decode.loss_ce": 0.41425, "decode.acc_seg": 84.05523, "loss": 0.41425, "time": 0.26915} -{"mode": "train", "epoch": 1, "iter": 18150, "lr": 0.00012, "memory": 20279, "data_time": 0.02044, "decode.loss_ce": 0.4045, "decode.acc_seg": 84.55699, "loss": 0.4045, "time": 0.26482} -{"mode": "train", "epoch": 1, "iter": 18200, "lr": 0.00012, "memory": 20279, "data_time": 0.021, "decode.loss_ce": 0.4242, "decode.acc_seg": 83.96765, "loss": 0.4242, "time": 0.26546} -{"mode": "train", "epoch": 1, "iter": 18250, "lr": 0.00012, "memory": 20279, "data_time": 0.01817, "decode.loss_ce": 0.43505, "decode.acc_seg": 83.59598, "loss": 0.43505, "time": 0.27316} -{"mode": "train", "epoch": 1, "iter": 18300, "lr": 0.00012, "memory": 20279, "data_time": 0.01967, "decode.loss_ce": 0.40764, "decode.acc_seg": 84.43755, "loss": 0.40764, "time": 0.26747} -{"mode": "train", "epoch": 1, "iter": 18350, "lr": 0.00012, "memory": 20279, "data_time": 0.01924, "decode.loss_ce": 0.40167, "decode.acc_seg": 84.66512, "loss": 0.40167, "time": 0.26827} -{"mode": "train", "epoch": 1, "iter": 18400, "lr": 0.00012, "memory": 20279, "data_time": 0.01942, "decode.loss_ce": 0.40811, "decode.acc_seg": 84.35828, "loss": 0.40811, "time": 0.27462} -{"mode": "train", "epoch": 1, "iter": 18450, "lr": 0.00012, "memory": 20279, "data_time": 0.01921, "decode.loss_ce": 0.40642, "decode.acc_seg": 84.46119, "loss": 0.40642, "time": 0.26497} -{"mode": "train", "epoch": 1, "iter": 18500, "lr": 0.00011, "memory": 20279, "data_time": 0.02025, "decode.loss_ce": 0.41756, "decode.acc_seg": 84.17875, "loss": 0.41756, "time": 0.26197} -{"mode": "train", "epoch": 1, "iter": 18550, "lr": 0.00011, "memory": 20279, "data_time": 0.0211, "decode.loss_ce": 0.4166, "decode.acc_seg": 84.26837, "loss": 0.4166, "time": 0.26702} -{"mode": "train", "epoch": 1, "iter": 18600, "lr": 0.00011, "memory": 20279, "data_time": 0.01984, "decode.loss_ce": 0.40917, "decode.acc_seg": 84.32568, "loss": 0.40917, "time": 0.26652} -{"mode": "train", "epoch": 1, "iter": 18650, "lr": 0.00011, "memory": 20279, "data_time": 0.02033, "decode.loss_ce": 0.41536, "decode.acc_seg": 83.77344, "loss": 0.41536, "time": 0.27052} -{"mode": "train", "epoch": 1, "iter": 18700, "lr": 0.00011, "memory": 20279, "data_time": 0.01916, "decode.loss_ce": 0.39627, "decode.acc_seg": 84.6941, "loss": 0.39627, "time": 0.26808} -{"mode": "train", "epoch": 1, "iter": 18750, "lr": 0.00011, "memory": 20279, "data_time": 0.01946, "decode.loss_ce": 0.39834, "decode.acc_seg": 84.68366, "loss": 0.39834, "time": 0.27104} -{"mode": "train", "epoch": 1, 
"iter": 18800, "lr": 0.00011, "memory": 20279, "data_time": 0.0199, "decode.loss_ce": 0.41008, "decode.acc_seg": 84.54304, "loss": 0.41008, "time": 0.26858} -{"mode": "train", "epoch": 1, "iter": 18850, "lr": 0.00011, "memory": 20279, "data_time": 0.02145, "decode.loss_ce": 0.41274, "decode.acc_seg": 84.31067, "loss": 0.41274, "time": 0.26687} -{"mode": "train", "epoch": 1, "iter": 18900, "lr": 0.00011, "memory": 20279, "data_time": 0.02073, "decode.loss_ce": 0.42103, "decode.acc_seg": 84.04872, "loss": 0.42103, "time": 0.27142} -{"mode": "train", "epoch": 1, "iter": 18950, "lr": 0.00011, "memory": 20279, "data_time": 0.02052, "decode.loss_ce": 0.4091, "decode.acc_seg": 84.46813, "loss": 0.4091, "time": 0.26569} -{"mode": "train", "epoch": 1, "iter": 19000, "lr": 0.00011, "memory": 20279, "data_time": 0.0202, "decode.loss_ce": 0.39942, "decode.acc_seg": 84.80819, "loss": 0.39942, "time": 0.26955} -{"mode": "train", "epoch": 1, "iter": 19050, "lr": 0.00011, "memory": 20279, "data_time": 0.01943, "decode.loss_ce": 0.40536, "decode.acc_seg": 84.39639, "loss": 0.40536, "time": 0.26335} -{"mode": "train", "epoch": 1, "iter": 19100, "lr": 0.00011, "memory": 20279, "data_time": 0.01903, "decode.loss_ce": 0.40658, "decode.acc_seg": 84.45182, "loss": 0.40658, "time": 0.26868} -{"mode": "train", "epoch": 1, "iter": 19150, "lr": 0.00011, "memory": 20279, "data_time": 0.02049, "decode.loss_ce": 0.39354, "decode.acc_seg": 85.00258, "loss": 0.39354, "time": 0.26279} -{"mode": "train", "epoch": 1, "iter": 19200, "lr": 0.00011, "memory": 20279, "data_time": 0.01942, "decode.loss_ce": 0.40525, "decode.acc_seg": 84.14767, "loss": 0.40525, "time": 0.26431} -{"mode": "train", "epoch": 1, "iter": 19250, "lr": 0.00011, "memory": 20279, "data_time": 0.02475, "decode.loss_ce": 0.3886, "decode.acc_seg": 85.23945, "loss": 0.3886, "time": 0.27202} -{"mode": "train", "epoch": 1, "iter": 19300, "lr": 0.00011, "memory": 20279, "data_time": 0.02021, "decode.loss_ce": 0.42824, "decode.acc_seg": 84.03656, "loss": 0.42824, "time": 0.26864} -{"mode": "train", "epoch": 1, "iter": 19350, "lr": 0.00011, "memory": 20279, "data_time": 0.02022, "decode.loss_ce": 0.42199, "decode.acc_seg": 83.9155, "loss": 0.42199, "time": 0.26756} -{"mode": "train", "epoch": 1, "iter": 19400, "lr": 0.00011, "memory": 20279, "data_time": 0.02143, "decode.loss_ce": 0.40424, "decode.acc_seg": 84.70557, "loss": 0.40424, "time": 0.27442} -{"mode": "train", "epoch": 1, "iter": 19450, "lr": 0.00011, "memory": 20279, "data_time": 0.01939, "decode.loss_ce": 0.38098, "decode.acc_seg": 85.18718, "loss": 0.38098, "time": 0.26763} -{"mode": "train", "epoch": 1, "iter": 19500, "lr": 0.00011, "memory": 20279, "data_time": 0.02159, "decode.loss_ce": 0.4013, "decode.acc_seg": 84.91573, "loss": 0.4013, "time": 0.26688} -{"mode": "train", "epoch": 1, "iter": 19550, "lr": 0.00011, "memory": 20279, "data_time": 0.0195, "decode.loss_ce": 0.43759, "decode.acc_seg": 83.47241, "loss": 0.43759, "time": 0.26519} -{"mode": "train", "epoch": 1, "iter": 19600, "lr": 0.00011, "memory": 20279, "data_time": 0.02031, "decode.loss_ce": 0.38756, "decode.acc_seg": 84.94207, "loss": 0.38756, "time": 0.26288} -{"mode": "train", "epoch": 1, "iter": 19650, "lr": 0.00011, "memory": 20279, "data_time": 0.02045, "decode.loss_ce": 0.38395, "decode.acc_seg": 85.28008, "loss": 0.38395, "time": 0.26702} -{"mode": "train", "epoch": 1, "iter": 19700, "lr": 0.00011, "memory": 20279, "data_time": 0.01832, "decode.loss_ce": 0.38289, "decode.acc_seg": 85.37039, "loss": 0.38289, "time": 0.26916} 
-{"mode": "train", "epoch": 1, "iter": 19750, "lr": 0.00011, "memory": 20279, "data_time": 0.01987, "decode.loss_ce": 0.40395, "decode.acc_seg": 84.8275, "loss": 0.40395, "time": 0.27193} -{"mode": "train", "epoch": 1, "iter": 19800, "lr": 0.00011, "memory": 20279, "data_time": 0.0216, "decode.loss_ce": 0.40718, "decode.acc_seg": 84.58124, "loss": 0.40718, "time": 0.26728} -{"mode": "train", "epoch": 1, "iter": 19850, "lr": 0.00011, "memory": 20279, "data_time": 0.02037, "decode.loss_ce": 0.40125, "decode.acc_seg": 84.5551, "loss": 0.40125, "time": 0.26899} -{"mode": "train", "epoch": 1, "iter": 19900, "lr": 0.00011, "memory": 20279, "data_time": 0.01925, "decode.loss_ce": 0.39757, "decode.acc_seg": 84.85477, "loss": 0.39757, "time": 0.27213} -{"mode": "train", "epoch": 1, "iter": 19950, "lr": 0.00011, "memory": 20279, "data_time": 0.0197, "decode.loss_ce": 0.4182, "decode.acc_seg": 84.27058, "loss": 0.4182, "time": 0.26191} -{"mode": "train", "epoch": 1, "iter": 20000, "lr": 0.00011, "memory": 20279, "data_time": 0.02169, "decode.loss_ce": 0.38343, "decode.acc_seg": 85.43121, "loss": 0.38343, "time": 0.29456} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00011, "aAcc": 0.7959, "mIoU": 0.4094, "mAcc": 0.5315, "IoU.wall": 0.7245, "IoU.building": 0.8049, "IoU.sky": 0.941, "IoU.floor": 0.787, "IoU.tree": 0.721, "IoU.ceiling": 0.7794, "IoU.road": 0.7992, "IoU.bed ": 0.8349, "IoU.windowpane": 0.551, "IoU.grass": 0.6578, "IoU.cabinet": 0.5428, "IoU.sidewalk": 0.6037, "IoU.person": 0.7791, "IoU.earth": 0.3156, "IoU.door": 0.3823, "IoU.table": 0.4972, "IoU.mountain": 0.5187, "IoU.plant": 0.4737, "IoU.curtain": 0.6666, "IoU.chair": 0.4958, "IoU.car": 0.8031, "IoU.water": 0.4506, "IoU.painting": 0.6398, "IoU.sofa": 0.5857, "IoU.shelf": 0.3582, "IoU.house": 0.3493, "IoU.sea": 0.4488, "IoU.mirror": 0.5207, "IoU.rug": 0.5234, "IoU.field": 0.2954, "IoU.armchair": 0.3579, "IoU.seat": 0.5644, "IoU.fence": 0.392, "IoU.desk": 0.4095, "IoU.rock": 0.3801, "IoU.wardrobe": 0.4187, "IoU.lamp": 0.5489, "IoU.bathtub": 0.6286, "IoU.railing": 0.2827, "IoU.cushion": 0.4677, "IoU.base": 0.2002, "IoU.box": 0.183, "IoU.column": 0.4031, "IoU.signboard": 0.3154, "IoU.chest of drawers": 0.4044, "IoU.counter": 0.2803, "IoU.sand": 0.2646, "IoU.sink": 0.5953, "IoU.skyscraper": 0.5776, "IoU.fireplace": 0.5989, "IoU.refrigerator": 0.633, "IoU.grandstand": 0.4497, "IoU.path": 0.1967, "IoU.stairs": 0.2939, "IoU.runway": 0.7345, "IoU.case": 0.4057, "IoU.pool table": 0.9186, "IoU.pillow": 0.4881, "IoU.screen door": 0.4688, "IoU.stairway": 0.2563, "IoU.river": 0.1544, "IoU.bridge": 0.3063, "IoU.bookcase": 0.2768, "IoU.blind": 0.3516, "IoU.coffee table": 0.4652, "IoU.toilet": 0.7686, "IoU.flower": 0.3256, "IoU.book": 0.4107, "IoU.hill": 0.0435, "IoU.bench": 0.3599, "IoU.countertop": 0.4468, "IoU.stove": 0.6312, "IoU.palm": 0.4541, "IoU.kitchen island": 0.2697, "IoU.computer": 0.5747, "IoU.swivel chair": 0.4259, "IoU.boat": 0.5395, "IoU.bar": 0.239, "IoU.arcade machine": 0.6162, "IoU.hovel": 0.2394, "IoU.bus": 0.8268, "IoU.towel": 0.5276, "IoU.light": 0.4603, "IoU.truck": 0.1896, "IoU.tower": 0.147, "IoU.chandelier": 0.6025, "IoU.awning": 0.163, "IoU.streetlight": 0.1943, "IoU.booth": 0.3127, "IoU.television receiver": 0.4701, "IoU.airplane": 0.5543, "IoU.dirt track": 0.1298, "IoU.apparel": 0.3179, "IoU.pole": 0.138, "IoU.land": 0.0147, "IoU.bannister": 0.0157, "IoU.escalator": 0.1942, "IoU.ottoman": 0.3102, "IoU.bottle": 0.3128, "IoU.buffet": 0.3522, "IoU.poster": 0.1785, "IoU.stage": 0.1093, "IoU.van": 0.3573, "IoU.ship": 
0.5684, "IoU.fountain": 0.2386, "IoU.conveyer belt": 0.5045, "IoU.canopy": 0.0983, "IoU.washer": 0.6911, "IoU.plaything": 0.2394, "IoU.swimming pool": 0.3237, "IoU.stool": 0.2554, "IoU.barrel": 0.1813, "IoU.basket": 0.2244, "IoU.waterfall": 0.5428, "IoU.tent": 0.9004, "IoU.bag": 0.0641, "IoU.minibike": 0.6267, "IoU.cradle": 0.6444, "IoU.oven": 0.2809, "IoU.ball": 0.1001, "IoU.food": 0.4887, "IoU.step": 0.0537, "IoU.tank": 0.2435, "IoU.trade name": 0.1217, "IoU.microwave": 0.3254, "IoU.pot": 0.3596, "IoU.animal": 0.5504, "IoU.bicycle": 0.5316, "IoU.lake": 0.1075, "IoU.dishwasher": 0.4478, "IoU.screen": 0.6592, "IoU.blanket": 0.0596, "IoU.sculpture": 0.4081, "IoU.hood": 0.4796, "IoU.sconce": 0.2593, "IoU.vase": 0.273, "IoU.traffic light": 0.2326, "IoU.tray": 0.0567, "IoU.ashcan": 0.3169, "IoU.fan": 0.4921, "IoU.pier": 0.3577, "IoU.crt screen": 0.0088, "IoU.plate": 0.3674, "IoU.monitor": 0.4041, "IoU.bulletin board": 0.4754, "IoU.shower": 0.0, "IoU.radiator": 0.5106, "IoU.glass": 0.0888, "IoU.clock": 0.1747, "IoU.flag": 0.3261, "Acc.wall": 0.854, "Acc.building": 0.9242, "Acc.sky": 0.9705, "Acc.floor": 0.892, "Acc.tree": 0.8811, "Acc.ceiling": 0.848, "Acc.road": 0.8669, "Acc.bed ": 0.9499, "Acc.windowpane": 0.7651, "Acc.grass": 0.847, "Acc.cabinet": 0.6384, "Acc.sidewalk": 0.7859, "Acc.person": 0.9012, "Acc.earth": 0.4558, "Acc.door": 0.4857, "Acc.table": 0.7124, "Acc.mountain": 0.6233, "Acc.plant": 0.5983, "Acc.curtain": 0.8472, "Acc.chair": 0.6388, "Acc.car": 0.9176, "Acc.water": 0.5794, "Acc.painting": 0.8352, "Acc.sofa": 0.7581, "Acc.shelf": 0.5348, "Acc.house": 0.4288, "Acc.sea": 0.7507, "Acc.mirror": 0.676, "Acc.rug": 0.6014, "Acc.field": 0.3894, "Acc.armchair": 0.5041, "Acc.seat": 0.7665, "Acc.fence": 0.5325, "Acc.desk": 0.6372, "Acc.rock": 0.5906, "Acc.wardrobe": 0.7148, "Acc.lamp": 0.7164, "Acc.bathtub": 0.8021, "Acc.railing": 0.3737, "Acc.cushion": 0.5559, "Acc.base": 0.3559, "Acc.box": 0.2397, "Acc.column": 0.5043, "Acc.signboard": 0.4624, "Acc.chest of drawers": 0.5542, "Acc.counter": 0.4517, "Acc.sand": 0.407, "Acc.sink": 0.7543, "Acc.skyscraper": 0.6949, "Acc.fireplace": 0.8622, "Acc.refrigerator": 0.8154, "Acc.grandstand": 0.737, "Acc.path": 0.3141, "Acc.stairs": 0.3713, "Acc.runway": 0.9318, "Acc.case": 0.6058, "Acc.pool table": 0.9585, "Acc.pillow": 0.599, "Acc.screen door": 0.6737, "Acc.stairway": 0.3132, "Acc.river": 0.299, "Acc.bridge": 0.3572, "Acc.bookcase": 0.3713, "Acc.blind": 0.428, "Acc.coffee table": 0.7754, "Acc.toilet": 0.8759, "Acc.flower": 0.5659, "Acc.book": 0.5916, "Acc.hill": 0.0933, "Acc.bench": 0.457, "Acc.countertop": 0.6164, "Acc.stove": 0.7786, "Acc.palm": 0.5959, "Acc.kitchen island": 0.6235, "Acc.computer": 0.699, "Acc.swivel chair": 0.6552, "Acc.boat": 0.8461, "Acc.bar": 0.267, "Acc.arcade machine": 0.6619, "Acc.hovel": 0.3872, "Acc.bus": 0.9321, "Acc.towel": 0.6384, "Acc.light": 0.5481, "Acc.truck": 0.2292, "Acc.tower": 0.1958, "Acc.chandelier": 0.7913, "Acc.awning": 0.1965, "Acc.streetlight": 0.2542, "Acc.booth": 0.364, "Acc.television receiver": 0.748, "Acc.airplane": 0.6831, "Acc.dirt track": 0.2193, "Acc.apparel": 0.4878, "Acc.pole": 0.1796, "Acc.land": 0.0257, "Acc.bannister": 0.0198, "Acc.escalator": 0.2235, "Acc.ottoman": 0.4485, "Acc.bottle": 0.4713, "Acc.buffet": 0.4125, "Acc.poster": 0.2232, "Acc.stage": 0.1729, "Acc.van": 0.4882, "Acc.ship": 0.6413, "Acc.fountain": 0.2654, "Acc.conveyer belt": 0.8271, "Acc.canopy": 0.1191, "Acc.washer": 0.7099, "Acc.plaything": 0.3973, "Acc.swimming pool": 0.572, "Acc.stool": 0.3438, "Acc.barrel": 0.5189, 
"Acc.basket": 0.2946, "Acc.waterfall": 0.6067, "Acc.tent": 0.975, "Acc.bag": 0.0721, "Acc.minibike": 0.8176, "Acc.cradle": 0.8195, "Acc.oven": 0.5106, "Acc.ball": 0.1087, "Acc.food": 0.5907, "Acc.step": 0.0589, "Acc.tank": 0.2493, "Acc.trade name": 0.1315, "Acc.microwave": 0.3522, "Acc.pot": 0.4185, "Acc.animal": 0.5742, "Acc.bicycle": 0.684, "Acc.lake": 0.1091, "Acc.dishwasher": 0.6145, "Acc.screen": 0.8644, "Acc.blanket": 0.0625, "Acc.sculpture": 0.5837, "Acc.hood": 0.6234, "Acc.sconce": 0.302, "Acc.vase": 0.4306, "Acc.traffic light": 0.3148, "Acc.tray": 0.0879, "Acc.ashcan": 0.3756, "Acc.fan": 0.5817, "Acc.pier": 0.5032, "Acc.crt screen": 0.0143, "Acc.plate": 0.4801, "Acc.monitor": 0.6643, "Acc.bulletin board": 0.562, "Acc.shower": 0.0, "Acc.radiator": 0.585, "Acc.glass": 0.0957, "Acc.clock": 0.2009, "Acc.flag": 0.3585} -{"mode": "train", "epoch": 1, "iter": 20050, "lr": 0.00011, "memory": 20279, "data_time": 0.95197, "decode.loss_ce": 0.40367, "decode.acc_seg": 84.56848, "loss": 0.40367, "time": 1.2017} -{"mode": "train", "epoch": 1, "iter": 20100, "lr": 0.00011, "memory": 20279, "data_time": 0.01904, "decode.loss_ce": 0.39732, "decode.acc_seg": 84.7888, "loss": 0.39732, "time": 0.26328} -{"mode": "train", "epoch": 1, "iter": 20150, "lr": 0.00011, "memory": 20279, "data_time": 0.01977, "decode.loss_ce": 0.38309, "decode.acc_seg": 85.16662, "loss": 0.38309, "time": 0.26349} -{"mode": "train", "epoch": 1, "iter": 20200, "lr": 0.00011, "memory": 20279, "data_time": 0.02082, "decode.loss_ce": 0.39608, "decode.acc_seg": 84.63051, "loss": 0.39608, "time": 0.26021} -{"mode": "train", "epoch": 1, "iter": 20250, "lr": 0.00011, "memory": 20279, "data_time": 0.01979, "decode.loss_ce": 0.39416, "decode.acc_seg": 84.77505, "loss": 0.39416, "time": 0.26935} -{"mode": "train", "epoch": 1, "iter": 20300, "lr": 0.00011, "memory": 20279, "data_time": 0.01963, "decode.loss_ce": 0.37707, "decode.acc_seg": 85.44389, "loss": 0.37707, "time": 0.2679} -{"mode": "train", "epoch": 1, "iter": 20350, "lr": 0.00011, "memory": 20279, "data_time": 0.02046, "decode.loss_ce": 0.3863, "decode.acc_seg": 85.04381, "loss": 0.3863, "time": 0.26752} -{"mode": "train", "epoch": 1, "iter": 20400, "lr": 0.00011, "memory": 20279, "data_time": 0.02354, "decode.loss_ce": 0.40143, "decode.acc_seg": 84.77938, "loss": 0.40143, "time": 0.27675} -{"mode": "train", "epoch": 1, "iter": 20450, "lr": 0.00011, "memory": 20279, "data_time": 0.01925, "decode.loss_ce": 0.40155, "decode.acc_seg": 84.79011, "loss": 0.40155, "time": 0.26472} -{"mode": "train", "epoch": 1, "iter": 20500, "lr": 0.00011, "memory": 20279, "data_time": 0.02153, "decode.loss_ce": 0.38346, "decode.acc_seg": 85.05884, "loss": 0.38346, "time": 0.26145} -{"mode": "train", "epoch": 1, "iter": 20550, "lr": 0.00011, "memory": 20279, "data_time": 0.01866, "decode.loss_ce": 0.39766, "decode.acc_seg": 84.47629, "loss": 0.39766, "time": 0.26437} -{"mode": "train", "epoch": 1, "iter": 20600, "lr": 0.0001, "memory": 20279, "data_time": 0.01929, "decode.loss_ce": 0.38777, "decode.acc_seg": 84.92119, "loss": 0.38777, "time": 0.26613} -{"mode": "train", "epoch": 1, "iter": 20650, "lr": 0.0001, "memory": 20279, "data_time": 0.01919, "decode.loss_ce": 0.37254, "decode.acc_seg": 85.84378, "loss": 0.37254, "time": 0.26599} -{"mode": "train", "epoch": 1, "iter": 20700, "lr": 0.0001, "memory": 20279, "data_time": 0.02058, "decode.loss_ce": 0.3936, "decode.acc_seg": 84.99184, "loss": 0.3936, "time": 0.26757} -{"mode": "train", "epoch": 1, "iter": 20750, "lr": 0.0001, "memory": 20279, 
"data_time": 0.01822, "decode.loss_ce": 0.3954, "decode.acc_seg": 85.03422, "loss": 0.3954, "time": 0.28098} -{"mode": "train", "epoch": 1, "iter": 20800, "lr": 0.0001, "memory": 20279, "data_time": 0.01885, "decode.loss_ce": 0.37545, "decode.acc_seg": 85.15936, "loss": 0.37545, "time": 0.26255} -{"mode": "train", "epoch": 1, "iter": 20850, "lr": 0.0001, "memory": 20279, "data_time": 0.02026, "decode.loss_ce": 0.38834, "decode.acc_seg": 85.0119, "loss": 0.38834, "time": 0.26334} -{"mode": "train", "epoch": 1, "iter": 20900, "lr": 0.0001, "memory": 20279, "data_time": 0.02037, "decode.loss_ce": 0.38254, "decode.acc_seg": 85.07659, "loss": 0.38254, "time": 0.26565} -{"mode": "train", "epoch": 1, "iter": 20950, "lr": 0.0001, "memory": 20279, "data_time": 0.01872, "decode.loss_ce": 0.39734, "decode.acc_seg": 84.7781, "loss": 0.39734, "time": 0.26207} -{"mode": "train", "epoch": 1, "iter": 21000, "lr": 0.0001, "memory": 20279, "data_time": 0.02119, "decode.loss_ce": 0.39096, "decode.acc_seg": 85.16835, "loss": 0.39096, "time": 0.26441} -{"mode": "train", "epoch": 1, "iter": 21050, "lr": 0.0001, "memory": 20279, "data_time": 0.02027, "decode.loss_ce": 0.39607, "decode.acc_seg": 84.97264, "loss": 0.39607, "time": 0.26618} -{"mode": "train", "epoch": 1, "iter": 21100, "lr": 0.0001, "memory": 20279, "data_time": 0.01872, "decode.loss_ce": 0.39054, "decode.acc_seg": 84.98788, "loss": 0.39054, "time": 0.26341} -{"mode": "train", "epoch": 1, "iter": 21150, "lr": 0.0001, "memory": 20279, "data_time": 0.01835, "decode.loss_ce": 0.3727, "decode.acc_seg": 85.64462, "loss": 0.3727, "time": 0.26438} -{"mode": "train", "epoch": 1, "iter": 21200, "lr": 0.0001, "memory": 20279, "data_time": 0.02062, "decode.loss_ce": 0.39267, "decode.acc_seg": 85.16508, "loss": 0.39267, "time": 0.26421} -{"mode": "train", "epoch": 1, "iter": 21250, "lr": 0.0001, "memory": 20279, "data_time": 0.0201, "decode.loss_ce": 0.39842, "decode.acc_seg": 84.81504, "loss": 0.39842, "time": 0.26928} -{"mode": "train", "epoch": 1, "iter": 21300, "lr": 0.0001, "memory": 20279, "data_time": 0.02091, "decode.loss_ce": 0.39088, "decode.acc_seg": 84.95499, "loss": 0.39088, "time": 0.26837} -{"mode": "train", "epoch": 1, "iter": 21350, "lr": 0.0001, "memory": 20279, "data_time": 0.0201, "decode.loss_ce": 0.41255, "decode.acc_seg": 84.21688, "loss": 0.41255, "time": 0.26944} -{"mode": "train", "epoch": 1, "iter": 21400, "lr": 0.0001, "memory": 20279, "data_time": 0.01952, "decode.loss_ce": 0.38106, "decode.acc_seg": 85.26864, "loss": 0.38106, "time": 0.27065} -{"mode": "train", "epoch": 1, "iter": 21450, "lr": 0.0001, "memory": 20279, "data_time": 0.01985, "decode.loss_ce": 0.37138, "decode.acc_seg": 85.67837, "loss": 0.37138, "time": 0.26475} -{"mode": "train", "epoch": 1, "iter": 21500, "lr": 0.0001, "memory": 20279, "data_time": 0.0199, "decode.loss_ce": 0.37255, "decode.acc_seg": 85.72862, "loss": 0.37255, "time": 0.26365} -{"mode": "train", "epoch": 1, "iter": 21550, "lr": 0.0001, "memory": 20279, "data_time": 0.0193, "decode.loss_ce": 0.37625, "decode.acc_seg": 85.6235, "loss": 0.37625, "time": 0.2631} -{"mode": "train", "epoch": 1, "iter": 21600, "lr": 0.0001, "memory": 20279, "data_time": 0.02117, "decode.loss_ce": 0.37648, "decode.acc_seg": 85.62209, "loss": 0.37648, "time": 0.2679} -{"mode": "train", "epoch": 1, "iter": 21650, "lr": 0.0001, "memory": 20279, "data_time": 0.02048, "decode.loss_ce": 0.38548, "decode.acc_seg": 85.20309, "loss": 0.38548, "time": 0.26762} -{"mode": "train", "epoch": 1, "iter": 21700, "lr": 0.0001, "memory": 
20279, "data_time": 0.02003, "decode.loss_ce": 0.39185, "decode.acc_seg": 85.08833, "loss": 0.39185, "time": 0.26963} -{"mode": "train", "epoch": 1, "iter": 21750, "lr": 0.0001, "memory": 20279, "data_time": 0.02128, "decode.loss_ce": 0.37805, "decode.acc_seg": 85.37316, "loss": 0.37805, "time": 0.2829} -{"mode": "train", "epoch": 1, "iter": 21800, "lr": 0.0001, "memory": 20279, "data_time": 0.01864, "decode.loss_ce": 0.3779, "decode.acc_seg": 85.32408, "loss": 0.3779, "time": 0.26102} -{"mode": "train", "epoch": 1, "iter": 21850, "lr": 0.0001, "memory": 20279, "data_time": 0.0202, "decode.loss_ce": 0.39374, "decode.acc_seg": 84.95255, "loss": 0.39374, "time": 0.26466} -{"mode": "train", "epoch": 1, "iter": 21900, "lr": 0.0001, "memory": 20279, "data_time": 0.02056, "decode.loss_ce": 0.37727, "decode.acc_seg": 85.69217, "loss": 0.37727, "time": 0.26253} -{"mode": "train", "epoch": 1, "iter": 21950, "lr": 0.0001, "memory": 20279, "data_time": 0.01919, "decode.loss_ce": 0.3748, "decode.acc_seg": 85.43795, "loss": 0.3748, "time": 0.26366} -{"mode": "train", "epoch": 1, "iter": 22000, "lr": 0.0001, "memory": 20279, "data_time": 0.01988, "decode.loss_ce": 0.39649, "decode.acc_seg": 84.65683, "loss": 0.39649, "time": 0.26054} -{"mode": "train", "epoch": 1, "iter": 22050, "lr": 0.0001, "memory": 20279, "data_time": 0.02181, "decode.loss_ce": 0.38458, "decode.acc_seg": 85.11703, "loss": 0.38458, "time": 0.2669} -{"mode": "train", "epoch": 1, "iter": 22100, "lr": 0.0001, "memory": 20279, "data_time": 0.02102, "decode.loss_ce": 0.38341, "decode.acc_seg": 85.52145, "loss": 0.38341, "time": 0.26508} -{"mode": "train", "epoch": 1, "iter": 22150, "lr": 0.0001, "memory": 20279, "data_time": 0.02054, "decode.loss_ce": 0.38065, "decode.acc_seg": 85.11498, "loss": 0.38065, "time": 0.262} -{"mode": "train", "epoch": 1, "iter": 22200, "lr": 0.0001, "memory": 20279, "data_time": 0.02087, "decode.loss_ce": 0.3627, "decode.acc_seg": 86.14038, "loss": 0.3627, "time": 0.26204} -{"mode": "train", "epoch": 1, "iter": 22250, "lr": 0.0001, "memory": 20279, "data_time": 0.0186, "decode.loss_ce": 0.37501, "decode.acc_seg": 85.66998, "loss": 0.37501, "time": 0.26805} -{"mode": "train", "epoch": 1, "iter": 22300, "lr": 0.0001, "memory": 20279, "data_time": 0.02044, "decode.loss_ce": 0.36572, "decode.acc_seg": 85.7775, "loss": 0.36572, "time": 0.26919} -{"mode": "train", "epoch": 1, "iter": 22350, "lr": 0.0001, "memory": 20279, "data_time": 0.0201, "decode.loss_ce": 0.3612, "decode.acc_seg": 85.9155, "loss": 0.3612, "time": 0.27069} -{"mode": "train", "epoch": 1, "iter": 22400, "lr": 0.0001, "memory": 20279, "data_time": 0.01938, "decode.loss_ce": 0.3808, "decode.acc_seg": 85.18144, "loss": 0.3808, "time": 0.27162} -{"mode": "train", "epoch": 1, "iter": 22450, "lr": 0.0001, "memory": 20279, "data_time": 0.02026, "decode.loss_ce": 0.39588, "decode.acc_seg": 84.89936, "loss": 0.39588, "time": 0.26246} -{"mode": "train", "epoch": 1, "iter": 22500, "lr": 0.0001, "memory": 20279, "data_time": 0.01929, "decode.loss_ce": 0.37678, "decode.acc_seg": 85.56917, "loss": 0.37678, "time": 0.26447} -{"mode": "train", "epoch": 1, "iter": 22550, "lr": 0.0001, "memory": 20279, "data_time": 0.02137, "decode.loss_ce": 0.36121, "decode.acc_seg": 86.04727, "loss": 0.36121, "time": 0.26687} -{"mode": "train", "epoch": 1, "iter": 22600, "lr": 0.0001, "memory": 20279, "data_time": 0.01938, "decode.loss_ce": 0.36619, "decode.acc_seg": 85.83808, "loss": 0.36619, "time": 0.26744} -{"mode": "train", "epoch": 1, "iter": 22650, "lr": 9e-05, "memory": 
20279, "data_time": 0.01899, "decode.loss_ce": 0.37287, "decode.acc_seg": 85.46116, "loss": 0.37287, "time": 0.26526} -{"mode": "train", "epoch": 1, "iter": 22700, "lr": 9e-05, "memory": 20279, "data_time": 0.02282, "decode.loss_ce": 0.36024, "decode.acc_seg": 85.93737, "loss": 0.36024, "time": 0.26664} -{"mode": "train", "epoch": 1, "iter": 22750, "lr": 9e-05, "memory": 20279, "data_time": 0.02281, "decode.loss_ce": 0.37457, "decode.acc_seg": 85.84158, "loss": 0.37457, "time": 0.28163} -{"mode": "train", "epoch": 1, "iter": 22800, "lr": 9e-05, "memory": 20279, "data_time": 0.01932, "decode.loss_ce": 0.37609, "decode.acc_seg": 85.77863, "loss": 0.37609, "time": 0.26335} -{"mode": "train", "epoch": 1, "iter": 22850, "lr": 9e-05, "memory": 20279, "data_time": 0.02038, "decode.loss_ce": 0.37394, "decode.acc_seg": 85.73541, "loss": 0.37394, "time": 0.26626} -{"mode": "train", "epoch": 1, "iter": 22900, "lr": 9e-05, "memory": 20279, "data_time": 0.02109, "decode.loss_ce": 0.37884, "decode.acc_seg": 85.69558, "loss": 0.37884, "time": 0.26288} -{"mode": "train", "epoch": 1, "iter": 22950, "lr": 9e-05, "memory": 20279, "data_time": 0.01979, "decode.loss_ce": 0.37427, "decode.acc_seg": 85.64206, "loss": 0.37427, "time": 0.26232} -{"mode": "train", "epoch": 1, "iter": 23000, "lr": 9e-05, "memory": 20279, "data_time": 0.02056, "decode.loss_ce": 0.36906, "decode.acc_seg": 85.8937, "loss": 0.36906, "time": 0.26278} -{"mode": "train", "epoch": 1, "iter": 23050, "lr": 9e-05, "memory": 20279, "data_time": 0.02068, "decode.loss_ce": 0.37636, "decode.acc_seg": 85.34648, "loss": 0.37636, "time": 0.26669} -{"mode": "train", "epoch": 1, "iter": 23100, "lr": 9e-05, "memory": 20279, "data_time": 0.02095, "decode.loss_ce": 0.37295, "decode.acc_seg": 85.53335, "loss": 0.37295, "time": 0.26641} -{"mode": "train", "epoch": 1, "iter": 23150, "lr": 9e-05, "memory": 20279, "data_time": 0.0213, "decode.loss_ce": 0.38329, "decode.acc_seg": 85.08271, "loss": 0.38329, "time": 0.27073} -{"mode": "train", "epoch": 1, "iter": 23200, "lr": 9e-05, "memory": 20279, "data_time": 0.02048, "decode.loss_ce": 0.366, "decode.acc_seg": 86.08027, "loss": 0.366, "time": 0.26577} -{"mode": "train", "epoch": 1, "iter": 23250, "lr": 9e-05, "memory": 20279, "data_time": 0.02002, "decode.loss_ce": 0.36648, "decode.acc_seg": 86.01218, "loss": 0.36648, "time": 0.26972} -{"mode": "train", "epoch": 1, "iter": 23300, "lr": 9e-05, "memory": 20279, "data_time": 0.01966, "decode.loss_ce": 0.36497, "decode.acc_seg": 86.04953, "loss": 0.36497, "time": 0.26562} -{"mode": "train", "epoch": 1, "iter": 23350, "lr": 9e-05, "memory": 20279, "data_time": 0.02052, "decode.loss_ce": 0.37539, "decode.acc_seg": 85.78988, "loss": 0.37539, "time": 0.27166} -{"mode": "train", "epoch": 1, "iter": 23400, "lr": 9e-05, "memory": 20279, "data_time": 0.02007, "decode.loss_ce": 0.37142, "decode.acc_seg": 85.71532, "loss": 0.37142, "time": 0.27146} -{"mode": "train", "epoch": 1, "iter": 23450, "lr": 9e-05, "memory": 20279, "data_time": 0.02033, "decode.loss_ce": 0.36629, "decode.acc_seg": 85.72649, "loss": 0.36629, "time": 0.2637} -{"mode": "train", "epoch": 1, "iter": 23500, "lr": 9e-05, "memory": 20279, "data_time": 0.02181, "decode.loss_ce": 0.36289, "decode.acc_seg": 85.92333, "loss": 0.36289, "time": 0.26445} -{"mode": "train", "epoch": 1, "iter": 23550, "lr": 9e-05, "memory": 20279, "data_time": 0.01987, "decode.loss_ce": 0.36469, "decode.acc_seg": 85.99232, "loss": 0.36469, "time": 0.26413} -{"mode": "train", "epoch": 1, "iter": 23600, "lr": 9e-05, "memory": 20279, 
"data_time": 0.02071, "decode.loss_ce": 0.35831, "decode.acc_seg": 86.12627, "loss": 0.35831, "time": 0.26841} -{"mode": "train", "epoch": 1, "iter": 23650, "lr": 9e-05, "memory": 20279, "data_time": 0.0202, "decode.loss_ce": 0.36661, "decode.acc_seg": 86.01493, "loss": 0.36661, "time": 0.26681} -{"mode": "train", "epoch": 1, "iter": 23700, "lr": 9e-05, "memory": 20279, "data_time": 0.02066, "decode.loss_ce": 0.37571, "decode.acc_seg": 85.59279, "loss": 0.37571, "time": 0.27098} -{"mode": "train", "epoch": 1, "iter": 23750, "lr": 9e-05, "memory": 20279, "data_time": 0.02183, "decode.loss_ce": 0.36557, "decode.acc_seg": 85.7836, "loss": 0.36557, "time": 0.27681} -{"mode": "train", "epoch": 1, "iter": 23800, "lr": 9e-05, "memory": 20279, "data_time": 0.01946, "decode.loss_ce": 0.36088, "decode.acc_seg": 85.98429, "loss": 0.36088, "time": 0.26343} -{"mode": "train", "epoch": 1, "iter": 23850, "lr": 9e-05, "memory": 20279, "data_time": 0.01934, "decode.loss_ce": 0.36217, "decode.acc_seg": 85.79792, "loss": 0.36217, "time": 0.26364} -{"mode": "train", "epoch": 1, "iter": 23900, "lr": 9e-05, "memory": 20279, "data_time": 0.01954, "decode.loss_ce": 0.35842, "decode.acc_seg": 86.05042, "loss": 0.35842, "time": 0.26487} -{"mode": "train", "epoch": 1, "iter": 23950, "lr": 9e-05, "memory": 20279, "data_time": 0.02004, "decode.loss_ce": 0.36225, "decode.acc_seg": 86.14585, "loss": 0.36225, "time": 0.26521} -{"mode": "train", "epoch": 1, "iter": 24000, "lr": 9e-05, "memory": 20279, "data_time": 0.01952, "decode.loss_ce": 0.35687, "decode.acc_seg": 86.13626, "loss": 0.35687, "time": 0.28114} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 9e-05, "aAcc": 0.7988, "mIoU": 0.4205, "mAcc": 0.5356, "IoU.wall": 0.7278, "IoU.building": 0.7941, "IoU.sky": 0.9405, "IoU.floor": 0.7845, "IoU.tree": 0.7214, "IoU.ceiling": 0.8011, "IoU.road": 0.7836, "IoU.bed ": 0.8513, "IoU.windowpane": 0.5675, "IoU.grass": 0.6403, "IoU.cabinet": 0.5367, "IoU.sidewalk": 0.5728, "IoU.person": 0.783, "IoU.earth": 0.3403, "IoU.door": 0.3719, "IoU.table": 0.5135, "IoU.mountain": 0.5811, "IoU.plant": 0.4705, "IoU.curtain": 0.6877, "IoU.chair": 0.4999, "IoU.car": 0.8206, "IoU.water": 0.481, "IoU.painting": 0.6704, "IoU.sofa": 0.5887, "IoU.shelf": 0.3338, "IoU.house": 0.4418, "IoU.sea": 0.4584, "IoU.mirror": 0.5562, "IoU.rug": 0.4932, "IoU.field": 0.326, "IoU.armchair": 0.3681, "IoU.seat": 0.5326, "IoU.fence": 0.3661, "IoU.desk": 0.4239, "IoU.rock": 0.3884, "IoU.wardrobe": 0.4045, "IoU.lamp": 0.5556, "IoU.bathtub": 0.6347, "IoU.railing": 0.2676, "IoU.cushion": 0.434, "IoU.base": 0.1984, "IoU.box": 0.197, "IoU.column": 0.4008, "IoU.signboard": 0.3113, "IoU.chest of drawers": 0.3681, "IoU.counter": 0.2488, "IoU.sand": 0.2998, "IoU.sink": 0.6243, "IoU.skyscraper": 0.5843, "IoU.fireplace": 0.6598, "IoU.refrigerator": 0.6543, "IoU.grandstand": 0.4188, "IoU.path": 0.1762, "IoU.stairs": 0.2986, "IoU.runway": 0.7065, "IoU.case": 0.466, "IoU.pool table": 0.9148, "IoU.pillow": 0.4962, "IoU.screen door": 0.5444, "IoU.stairway": 0.3331, "IoU.river": 0.1515, "IoU.bridge": 0.3274, "IoU.bookcase": 0.288, "IoU.blind": 0.336, "IoU.coffee table": 0.5127, "IoU.toilet": 0.7922, "IoU.flower": 0.3255, "IoU.book": 0.4628, "IoU.hill": 0.0397, "IoU.bench": 0.3693, "IoU.countertop": 0.4874, "IoU.stove": 0.6725, "IoU.palm": 0.4531, "IoU.kitchen island": 0.3152, "IoU.computer": 0.5914, "IoU.swivel chair": 0.4377, "IoU.boat": 0.6053, "IoU.bar": 0.2069, "IoU.arcade machine": 0.6355, "IoU.hovel": 0.2875, "IoU.bus": 0.8602, "IoU.towel": 0.5175, "IoU.light": 0.4891, 
"IoU.truck": 0.2393, "IoU.tower": 0.167, "IoU.chandelier": 0.6134, "IoU.awning": 0.1717, "IoU.streetlight": 0.1995, "IoU.booth": 0.3237, "IoU.television receiver": 0.6382, "IoU.airplane": 0.5245, "IoU.dirt track": 0.0766, "IoU.apparel": 0.2701, "IoU.pole": 0.1739, "IoU.land": 0.0136, "IoU.bannister": 0.03, "IoU.escalator": 0.2059, "IoU.ottoman": 0.3489, "IoU.bottle": 0.2937, "IoU.buffet": 0.2774, "IoU.poster": 0.2458, "IoU.stage": 0.1376, "IoU.van": 0.3543, "IoU.ship": 0.5084, "IoU.fountain": 0.043, "IoU.conveyer belt": 0.5573, "IoU.canopy": 0.1309, "IoU.washer": 0.7008, "IoU.plaything": 0.233, "IoU.swimming pool": 0.3418, "IoU.stool": 0.2702, "IoU.barrel": 0.2956, "IoU.basket": 0.206, "IoU.waterfall": 0.5552, "IoU.tent": 0.7503, "IoU.bag": 0.0781, "IoU.minibike": 0.6109, "IoU.cradle": 0.7573, "IoU.oven": 0.2129, "IoU.ball": 0.3451, "IoU.food": 0.4777, "IoU.step": 0.0496, "IoU.tank": 0.3077, "IoU.trade name": 0.2041, "IoU.microwave": 0.4639, "IoU.pot": 0.3567, "IoU.animal": 0.574, "IoU.bicycle": 0.5066, "IoU.lake": 0.2546, "IoU.dishwasher": 0.5629, "IoU.screen": 0.611, "IoU.blanket": 0.0862, "IoU.sculpture": 0.3899, "IoU.hood": 0.4893, "IoU.sconce": 0.2817, "IoU.vase": 0.2949, "IoU.traffic light": 0.2306, "IoU.tray": 0.014, "IoU.ashcan": 0.3178, "IoU.fan": 0.487, "IoU.pier": 0.2778, "IoU.crt screen": 0.063, "IoU.plate": 0.3794, "IoU.monitor": 0.3654, "IoU.bulletin board": 0.4405, "IoU.shower": 0.0, "IoU.radiator": 0.4961, "IoU.glass": 0.0719, "IoU.clock": 0.1678, "IoU.flag": 0.3761, "Acc.wall": 0.8648, "Acc.building": 0.9148, "Acc.sky": 0.9678, "Acc.floor": 0.8928, "Acc.tree": 0.8739, "Acc.ceiling": 0.904, "Acc.road": 0.9053, "Acc.bed ": 0.9369, "Acc.windowpane": 0.7606, "Acc.grass": 0.741, "Acc.cabinet": 0.6283, "Acc.sidewalk": 0.6716, "Acc.person": 0.9024, "Acc.earth": 0.4976, "Acc.door": 0.4718, "Acc.table": 0.7081, "Acc.mountain": 0.7325, "Acc.plant": 0.5823, "Acc.curtain": 0.8057, "Acc.chair": 0.6144, "Acc.car": 0.9129, "Acc.water": 0.6635, "Acc.painting": 0.8166, "Acc.sofa": 0.7777, "Acc.shelf": 0.4408, "Acc.house": 0.6937, "Acc.sea": 0.6634, "Acc.mirror": 0.6666, "Acc.rug": 0.5437, "Acc.field": 0.5659, "Acc.armchair": 0.5821, "Acc.seat": 0.7667, "Acc.fence": 0.5175, "Acc.desk": 0.6434, "Acc.rock": 0.6235, "Acc.wardrobe": 0.5737, "Acc.lamp": 0.7122, "Acc.bathtub": 0.7748, "Acc.railing": 0.3577, "Acc.cushion": 0.4974, "Acc.base": 0.3955, "Acc.box": 0.2814, "Acc.column": 0.5696, "Acc.signboard": 0.4278, "Acc.chest of drawers": 0.7226, "Acc.counter": 0.3778, "Acc.sand": 0.4807, "Acc.sink": 0.747, "Acc.skyscraper": 0.7224, "Acc.fireplace": 0.8375, "Acc.refrigerator": 0.8416, "Acc.grandstand": 0.6838, "Acc.path": 0.2435, "Acc.stairs": 0.374, "Acc.runway": 0.9187, "Acc.case": 0.7051, "Acc.pool table": 0.9584, "Acc.pillow": 0.6296, "Acc.screen door": 0.7731, "Acc.stairway": 0.4026, "Acc.river": 0.2792, "Acc.bridge": 0.3635, "Acc.bookcase": 0.451, "Acc.blind": 0.3736, "Acc.coffee table": 0.7393, "Acc.toilet": 0.8682, "Acc.flower": 0.4628, "Acc.book": 0.6803, "Acc.hill": 0.0761, "Acc.bench": 0.4395, "Acc.countertop": 0.6655, "Acc.stove": 0.7968, "Acc.palm": 0.5924, "Acc.kitchen island": 0.6214, "Acc.computer": 0.6825, "Acc.swivel chair": 0.5856, "Acc.boat": 0.8434, "Acc.bar": 0.2219, "Acc.arcade machine": 0.6792, "Acc.hovel": 0.4657, "Acc.bus": 0.9359, "Acc.towel": 0.6242, "Acc.light": 0.5879, "Acc.truck": 0.2973, "Acc.tower": 0.2427, "Acc.chandelier": 0.7709, "Acc.awning": 0.1969, "Acc.streetlight": 0.252, "Acc.booth": 0.3478, "Acc.television receiver": 0.7746, "Acc.airplane": 0.6409, 
"Acc.dirt track": 0.2763, "Acc.apparel": 0.3943, "Acc.pole": 0.2499, "Acc.land": 0.0203, "Acc.bannister": 0.0385, "Acc.escalator": 0.2364, "Acc.ottoman": 0.466, "Acc.bottle": 0.3982, "Acc.buffet": 0.283, "Acc.poster": 0.3266, "Acc.stage": 0.2003, "Acc.van": 0.4363, "Acc.ship": 0.5708, "Acc.fountain": 0.0433, "Acc.conveyer belt": 0.7603, "Acc.canopy": 0.1573, "Acc.washer": 0.7181, "Acc.plaything": 0.2975, "Acc.swimming pool": 0.5544, "Acc.stool": 0.3727, "Acc.barrel": 0.621, "Acc.basket": 0.2535, "Acc.waterfall": 0.606, "Acc.tent": 0.9841, "Acc.bag": 0.0943, "Acc.minibike": 0.8217, "Acc.cradle": 0.9212, "Acc.oven": 0.4272, "Acc.ball": 0.4067, "Acc.food": 0.5541, "Acc.step": 0.0532, "Acc.tank": 0.317, "Acc.trade name": 0.252, "Acc.microwave": 0.5067, "Acc.pot": 0.4152, "Acc.animal": 0.5982, "Acc.bicycle": 0.6386, "Acc.lake": 0.2674, "Acc.dishwasher": 0.6376, "Acc.screen": 0.8345, "Acc.blanket": 0.0982, "Acc.sculpture": 0.4997, "Acc.hood": 0.5945, "Acc.sconce": 0.3135, "Acc.vase": 0.4131, "Acc.traffic light": 0.3367, "Acc.tray": 0.016, "Acc.ashcan": 0.422, "Acc.fan": 0.5622, "Acc.pier": 0.366, "Acc.crt screen": 0.1139, "Acc.plate": 0.4905, "Acc.monitor": 0.4867, "Acc.bulletin board": 0.5558, "Acc.shower": 0.0, "Acc.radiator": 0.5444, "Acc.glass": 0.0747, "Acc.clock": 0.1849, "Acc.flag": 0.4282} -{"mode": "train", "epoch": 1, "iter": 24050, "lr": 9e-05, "memory": 20279, "data_time": 0.93233, "decode.loss_ce": 0.35663, "decode.acc_seg": 85.98847, "loss": 0.35663, "time": 1.17885} -{"mode": "train", "epoch": 1, "iter": 24100, "lr": 9e-05, "memory": 20279, "data_time": 0.02127, "decode.loss_ce": 0.35214, "decode.acc_seg": 86.70468, "loss": 0.35214, "time": 0.26905} -{"mode": "train", "epoch": 1, "iter": 24150, "lr": 9e-05, "memory": 20279, "data_time": 0.0198, "decode.loss_ce": 0.34823, "decode.acc_seg": 86.53775, "loss": 0.34823, "time": 0.26814} -{"mode": "train", "epoch": 1, "iter": 24200, "lr": 9e-05, "memory": 20279, "data_time": 0.02054, "decode.loss_ce": 0.37187, "decode.acc_seg": 85.60299, "loss": 0.37187, "time": 0.26991} -{"mode": "train", "epoch": 1, "iter": 24250, "lr": 9e-05, "memory": 20279, "data_time": 0.02213, "decode.loss_ce": 0.35462, "decode.acc_seg": 86.41526, "loss": 0.35462, "time": 0.26619} -{"mode": "train", "epoch": 1, "iter": 24300, "lr": 9e-05, "memory": 20279, "data_time": 0.01996, "decode.loss_ce": 0.38824, "decode.acc_seg": 85.30567, "loss": 0.38824, "time": 0.2674} -{"mode": "train", "epoch": 1, "iter": 24350, "lr": 9e-05, "memory": 20279, "data_time": 0.01841, "decode.loss_ce": 0.37487, "decode.acc_seg": 85.47373, "loss": 0.37487, "time": 0.2667} -{"mode": "train", "epoch": 1, "iter": 24400, "lr": 9e-05, "memory": 20279, "data_time": 0.01888, "decode.loss_ce": 0.35865, "decode.acc_seg": 86.01894, "loss": 0.35865, "time": 0.26356} -{"mode": "train", "epoch": 1, "iter": 24450, "lr": 9e-05, "memory": 20279, "data_time": 0.01981, "decode.loss_ce": 0.35375, "decode.acc_seg": 86.64904, "loss": 0.35375, "time": 0.26624} -{"mode": "train", "epoch": 1, "iter": 24500, "lr": 9e-05, "memory": 20279, "data_time": 0.0189, "decode.loss_ce": 0.36046, "decode.acc_seg": 86.06442, "loss": 0.36046, "time": 0.26567} -{"mode": "train", "epoch": 1, "iter": 24550, "lr": 9e-05, "memory": 20279, "data_time": 0.02041, "decode.loss_ce": 0.36678, "decode.acc_seg": 85.83101, "loss": 0.36678, "time": 0.26485} -{"mode": "train", "epoch": 1, "iter": 24600, "lr": 9e-05, "memory": 20279, "data_time": 0.01848, "decode.loss_ce": 0.34425, "decode.acc_seg": 86.55943, "loss": 0.34425, "time": 0.26737} 
-{"mode": "train", "epoch": 1, "iter": 24650, "lr": 9e-05, "memory": 20279, "data_time": 0.02044, "decode.loss_ce": 0.34946, "decode.acc_seg": 86.66551, "loss": 0.34946, "time": 0.26949} -{"mode": "train", "epoch": 1, "iter": 24700, "lr": 8e-05, "memory": 20279, "data_time": 0.01955, "decode.loss_ce": 0.36201, "decode.acc_seg": 86.0928, "loss": 0.36201, "time": 0.27203} -{"mode": "train", "epoch": 1, "iter": 24750, "lr": 8e-05, "memory": 20279, "data_time": 0.01972, "decode.loss_ce": 0.36201, "decode.acc_seg": 85.95279, "loss": 0.36201, "time": 0.26553} -{"mode": "train", "epoch": 1, "iter": 24800, "lr": 8e-05, "memory": 20279, "data_time": 0.0215, "decode.loss_ce": 0.35032, "decode.acc_seg": 86.71038, "loss": 0.35032, "time": 0.26624} -{"mode": "train", "epoch": 1, "iter": 24850, "lr": 8e-05, "memory": 20279, "data_time": 0.02037, "decode.loss_ce": 0.36039, "decode.acc_seg": 86.25964, "loss": 0.36039, "time": 0.26449} -{"mode": "train", "epoch": 1, "iter": 24900, "lr": 8e-05, "memory": 20279, "data_time": 0.02009, "decode.loss_ce": 0.34889, "decode.acc_seg": 86.35274, "loss": 0.34889, "time": 0.266} -{"mode": "train", "epoch": 1, "iter": 24950, "lr": 8e-05, "memory": 20279, "data_time": 0.02017, "decode.loss_ce": 0.34351, "decode.acc_seg": 86.64494, "loss": 0.34351, "time": 0.26667} -{"mode": "train", "epoch": 1, "iter": 25000, "lr": 8e-05, "memory": 20279, "data_time": 0.02113, "decode.loss_ce": 0.34908, "decode.acc_seg": 86.53283, "loss": 0.34908, "time": 0.26655} -{"mode": "train", "epoch": 1, "iter": 25050, "lr": 8e-05, "memory": 20279, "data_time": 0.02028, "decode.loss_ce": 0.35985, "decode.acc_seg": 85.96837, "loss": 0.35985, "time": 0.26515} -{"mode": "train", "epoch": 1, "iter": 25100, "lr": 8e-05, "memory": 20279, "data_time": 0.02008, "decode.loss_ce": 0.35091, "decode.acc_seg": 86.4496, "loss": 0.35091, "time": 0.26939} -{"mode": "train", "epoch": 1, "iter": 25150, "lr": 8e-05, "memory": 20279, "data_time": 0.02156, "decode.loss_ce": 0.3554, "decode.acc_seg": 86.42744, "loss": 0.3554, "time": 0.26944} -{"mode": "train", "epoch": 1, "iter": 25200, "lr": 8e-05, "memory": 20279, "data_time": 0.02076, "decode.loss_ce": 0.3517, "decode.acc_seg": 86.44301, "loss": 0.3517, "time": 0.27251} -{"mode": "train", "epoch": 1, "iter": 25250, "lr": 8e-05, "memory": 20279, "data_time": 0.02037, "decode.loss_ce": 0.35567, "decode.acc_seg": 85.98999, "loss": 0.35567, "time": 0.2647} -{"mode": "train", "epoch": 1, "iter": 25300, "lr": 8e-05, "memory": 20279, "data_time": 0.02131, "decode.loss_ce": 0.35407, "decode.acc_seg": 86.34271, "loss": 0.35407, "time": 0.27336} -{"mode": "train", "epoch": 1, "iter": 25350, "lr": 8e-05, "memory": 20279, "data_time": 0.02105, "decode.loss_ce": 0.35691, "decode.acc_seg": 86.2866, "loss": 0.35691, "time": 0.26807} -{"mode": "train", "epoch": 1, "iter": 25400, "lr": 8e-05, "memory": 20279, "data_time": 0.01973, "decode.loss_ce": 0.34763, "decode.acc_seg": 86.426, "loss": 0.34763, "time": 0.26632} -{"mode": "train", "epoch": 1, "iter": 25450, "lr": 8e-05, "memory": 20279, "data_time": 0.02097, "decode.loss_ce": 0.36495, "decode.acc_seg": 85.8741, "loss": 0.36495, "time": 0.26372} -{"mode": "train", "epoch": 1, "iter": 25500, "lr": 8e-05, "memory": 20279, "data_time": 0.02185, "decode.loss_ce": 0.36148, "decode.acc_seg": 86.3688, "loss": 0.36148, "time": 0.27125} -{"mode": "train", "epoch": 1, "iter": 25550, "lr": 8e-05, "memory": 20279, "data_time": 0.02042, "decode.loss_ce": 0.35211, "decode.acc_seg": 86.41701, "loss": 0.35211, "time": 0.26173} -{"mode": 
"train", "epoch": 1, "iter": 25600, "lr": 8e-05, "memory": 20279, "data_time": 0.02098, "decode.loss_ce": 0.34884, "decode.acc_seg": 86.47086, "loss": 0.34884, "time": 0.26725} -{"mode": "train", "epoch": 1, "iter": 25650, "lr": 8e-05, "memory": 20279, "data_time": 0.02043, "decode.loss_ce": 0.33888, "decode.acc_seg": 86.78367, "loss": 0.33888, "time": 0.27017} -{"mode": "train", "epoch": 1, "iter": 25700, "lr": 8e-05, "memory": 20279, "data_time": 0.02002, "decode.loss_ce": 0.358, "decode.acc_seg": 86.53325, "loss": 0.358, "time": 0.26981} -{"mode": "train", "epoch": 1, "iter": 25750, "lr": 8e-05, "memory": 20279, "data_time": 0.01958, "decode.loss_ce": 0.34502, "decode.acc_seg": 86.31985, "loss": 0.34502, "time": 0.26686} -{"mode": "train", "epoch": 1, "iter": 25800, "lr": 8e-05, "memory": 20279, "data_time": 0.02099, "decode.loss_ce": 0.34342, "decode.acc_seg": 86.66136, "loss": 0.34342, "time": 0.26497} -{"mode": "train", "epoch": 1, "iter": 25850, "lr": 8e-05, "memory": 20279, "data_time": 0.0205, "decode.loss_ce": 0.34171, "decode.acc_seg": 86.79904, "loss": 0.34171, "time": 0.26804} -{"mode": "train", "epoch": 1, "iter": 25900, "lr": 8e-05, "memory": 20279, "data_time": 0.02056, "decode.loss_ce": 0.33746, "decode.acc_seg": 86.92923, "loss": 0.33746, "time": 0.26171} -{"mode": "train", "epoch": 1, "iter": 25950, "lr": 8e-05, "memory": 20279, "data_time": 0.02103, "decode.loss_ce": 0.34401, "decode.acc_seg": 86.82613, "loss": 0.34401, "time": 0.26714} -{"mode": "train", "epoch": 1, "iter": 26000, "lr": 8e-05, "memory": 20279, "data_time": 0.02078, "decode.loss_ce": 0.35661, "decode.acc_seg": 86.26102, "loss": 0.35661, "time": 0.26474} -{"mode": "train", "epoch": 1, "iter": 26050, "lr": 8e-05, "memory": 20279, "data_time": 0.01902, "decode.loss_ce": 0.35577, "decode.acc_seg": 86.21854, "loss": 0.35577, "time": 0.26452} -{"mode": "train", "epoch": 1, "iter": 26100, "lr": 8e-05, "memory": 20279, "data_time": 0.02385, "decode.loss_ce": 0.3481, "decode.acc_seg": 86.49532, "loss": 0.3481, "time": 0.26333} -{"mode": "train", "epoch": 1, "iter": 26150, "lr": 8e-05, "memory": 20279, "data_time": 0.0198, "decode.loss_ce": 0.34573, "decode.acc_seg": 86.48331, "loss": 0.34573, "time": 0.26499} -{"mode": "train", "epoch": 1, "iter": 26200, "lr": 8e-05, "memory": 20279, "data_time": 0.02061, "decode.loss_ce": 0.33309, "decode.acc_seg": 86.88973, "loss": 0.33309, "time": 0.26912} -{"mode": "train", "epoch": 1, "iter": 26250, "lr": 8e-05, "memory": 20279, "data_time": 0.02087, "decode.loss_ce": 0.33492, "decode.acc_seg": 86.81496, "loss": 0.33492, "time": 0.2638} -{"mode": "train", "epoch": 1, "iter": 26300, "lr": 8e-05, "memory": 20279, "data_time": 0.02004, "decode.loss_ce": 0.35777, "decode.acc_seg": 86.45933, "loss": 0.35777, "time": 0.27031} -{"mode": "train", "epoch": 1, "iter": 26350, "lr": 8e-05, "memory": 20279, "data_time": 0.02076, "decode.loss_ce": 0.33223, "decode.acc_seg": 87.07385, "loss": 0.33223, "time": 0.26793} -{"mode": "train", "epoch": 1, "iter": 26400, "lr": 8e-05, "memory": 20279, "data_time": 0.02015, "decode.loss_ce": 0.3571, "decode.acc_seg": 86.36536, "loss": 0.3571, "time": 0.26747} -{"mode": "train", "epoch": 1, "iter": 26450, "lr": 8e-05, "memory": 20279, "data_time": 0.01977, "decode.loss_ce": 0.35158, "decode.acc_seg": 86.49309, "loss": 0.35158, "time": 0.26673} -{"mode": "train", "epoch": 1, "iter": 26500, "lr": 8e-05, "memory": 20279, "data_time": 0.02074, "decode.loss_ce": 0.34127, "decode.acc_seg": 86.94244, "loss": 0.34127, "time": 0.27099} -{"mode": "train", 
"epoch": 1, "iter": 26550, "lr": 8e-05, "memory": 20279, "data_time": 0.02142, "decode.loss_ce": 0.32794, "decode.acc_seg": 87.25357, "loss": 0.32794, "time": 0.26894} -{"mode": "train", "epoch": 1, "iter": 26600, "lr": 8e-05, "memory": 20279, "data_time": 0.02102, "decode.loss_ce": 0.33891, "decode.acc_seg": 86.90896, "loss": 0.33891, "time": 0.26693} -{"mode": "train", "epoch": 1, "iter": 26650, "lr": 8e-05, "memory": 20279, "data_time": 0.01969, "decode.loss_ce": 0.35074, "decode.acc_seg": 86.35665, "loss": 0.35074, "time": 0.27086} -{"mode": "train", "epoch": 1, "iter": 26700, "lr": 7e-05, "memory": 20279, "data_time": 0.02065, "decode.loss_ce": 0.33907, "decode.acc_seg": 87.08859, "loss": 0.33907, "time": 0.26638} -{"mode": "train", "epoch": 1, "iter": 26750, "lr": 7e-05, "memory": 20279, "data_time": 0.02259, "decode.loss_ce": 0.34859, "decode.acc_seg": 86.7385, "loss": 0.34859, "time": 0.26546} -{"mode": "train", "epoch": 1, "iter": 26800, "lr": 7e-05, "memory": 20279, "data_time": 0.02031, "decode.loss_ce": 0.33011, "decode.acc_seg": 87.37308, "loss": 0.33011, "time": 0.26237} -{"mode": "train", "epoch": 1, "iter": 26850, "lr": 7e-05, "memory": 20279, "data_time": 0.01996, "decode.loss_ce": 0.34468, "decode.acc_seg": 86.42237, "loss": 0.34468, "time": 0.26513} -{"mode": "train", "epoch": 1, "iter": 26900, "lr": 7e-05, "memory": 20279, "data_time": 0.02098, "decode.loss_ce": 0.3301, "decode.acc_seg": 87.15563, "loss": 0.3301, "time": 0.26414} -{"mode": "train", "epoch": 1, "iter": 26950, "lr": 7e-05, "memory": 20279, "data_time": 0.0184, "decode.loss_ce": 0.32541, "decode.acc_seg": 87.29153, "loss": 0.32541, "time": 0.26375} -{"mode": "train", "epoch": 1, "iter": 27000, "lr": 7e-05, "memory": 20279, "data_time": 0.0203, "decode.loss_ce": 0.33561, "decode.acc_seg": 86.82046, "loss": 0.33561, "time": 0.26622} -{"mode": "train", "epoch": 1, "iter": 27050, "lr": 7e-05, "memory": 20279, "data_time": 0.01999, "decode.loss_ce": 0.34435, "decode.acc_seg": 86.79936, "loss": 0.34435, "time": 0.27019} -{"mode": "train", "epoch": 1, "iter": 27100, "lr": 7e-05, "memory": 20279, "data_time": 0.02019, "decode.loss_ce": 0.33012, "decode.acc_seg": 87.08729, "loss": 0.33012, "time": 0.26709} -{"mode": "train", "epoch": 1, "iter": 27150, "lr": 7e-05, "memory": 20279, "data_time": 0.01985, "decode.loss_ce": 0.32042, "decode.acc_seg": 87.39842, "loss": 0.32042, "time": 0.269} -{"mode": "train", "epoch": 1, "iter": 27200, "lr": 7e-05, "memory": 20279, "data_time": 0.02087, "decode.loss_ce": 0.33108, "decode.acc_seg": 87.28034, "loss": 0.33108, "time": 0.27068} -{"mode": "train", "epoch": 1, "iter": 27250, "lr": 7e-05, "memory": 20279, "data_time": 0.02087, "decode.loss_ce": 0.3256, "decode.acc_seg": 87.37642, "loss": 0.3256, "time": 0.26094} -{"mode": "train", "epoch": 1, "iter": 27300, "lr": 7e-05, "memory": 20279, "data_time": 0.02439, "decode.loss_ce": 0.3473, "decode.acc_seg": 86.69836, "loss": 0.3473, "time": 0.26956} -{"mode": "train", "epoch": 1, "iter": 27350, "lr": 7e-05, "memory": 20279, "data_time": 0.02019, "decode.loss_ce": 0.34274, "decode.acc_seg": 86.64463, "loss": 0.34274, "time": 0.27134} -{"mode": "train", "epoch": 1, "iter": 27400, "lr": 7e-05, "memory": 20279, "data_time": 0.02009, "decode.loss_ce": 0.34544, "decode.acc_seg": 86.84865, "loss": 0.34544, "time": 0.26538} -{"mode": "train", "epoch": 1, "iter": 27450, "lr": 7e-05, "memory": 20279, "data_time": 0.02073, "decode.loss_ce": 0.32163, "decode.acc_seg": 87.16485, "loss": 0.32163, "time": 0.26744} -{"mode": "train", "epoch": 1, 
"iter": 27500, "lr": 7e-05, "memory": 20279, "data_time": 0.02064, "decode.loss_ce": 0.34203, "decode.acc_seg": 86.59756, "loss": 0.34203, "time": 0.26952} -{"mode": "train", "epoch": 1, "iter": 27550, "lr": 7e-05, "memory": 20279, "data_time": 0.01956, "decode.loss_ce": 0.32354, "decode.acc_seg": 87.28884, "loss": 0.32354, "time": 0.26701} -{"mode": "train", "epoch": 1, "iter": 27600, "lr": 7e-05, "memory": 20279, "data_time": 0.02014, "decode.loss_ce": 0.33369, "decode.acc_seg": 87.18751, "loss": 0.33369, "time": 0.26435} -{"mode": "train", "epoch": 1, "iter": 27650, "lr": 7e-05, "memory": 20279, "data_time": 0.02047, "decode.loss_ce": 0.33065, "decode.acc_seg": 87.15555, "loss": 0.33065, "time": 0.26628} -{"mode": "train", "epoch": 1, "iter": 27700, "lr": 7e-05, "memory": 20279, "data_time": 0.02005, "decode.loss_ce": 0.33207, "decode.acc_seg": 87.03061, "loss": 0.33207, "time": 0.26646} -{"mode": "train", "epoch": 1, "iter": 27750, "lr": 7e-05, "memory": 20279, "data_time": 0.01968, "decode.loss_ce": 0.34161, "decode.acc_seg": 86.99287, "loss": 0.34161, "time": 0.27015} -{"mode": "train", "epoch": 1, "iter": 27800, "lr": 7e-05, "memory": 20279, "data_time": 0.02076, "decode.loss_ce": 0.33638, "decode.acc_seg": 86.85558, "loss": 0.33638, "time": 0.26603} -{"mode": "train", "epoch": 1, "iter": 27850, "lr": 7e-05, "memory": 20279, "data_time": 0.01985, "decode.loss_ce": 0.34431, "decode.acc_seg": 86.77155, "loss": 0.34431, "time": 0.27181} -{"mode": "train", "epoch": 1, "iter": 27900, "lr": 7e-05, "memory": 20279, "data_time": 0.02093, "decode.loss_ce": 0.33187, "decode.acc_seg": 87.02583, "loss": 0.33187, "time": 0.26737} -{"mode": "train", "epoch": 1, "iter": 27950, "lr": 7e-05, "memory": 20279, "data_time": 0.01953, "decode.loss_ce": 0.33266, "decode.acc_seg": 87.13412, "loss": 0.33266, "time": 0.26417} -{"mode": "train", "epoch": 1, "iter": 28000, "lr": 7e-05, "memory": 20279, "data_time": 0.02105, "decode.loss_ce": 0.33001, "decode.acc_seg": 87.04637, "loss": 0.33001, "time": 0.28691} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 7e-05, "aAcc": 0.8024, "mIoU": 0.4214, "mAcc": 0.5409, "IoU.wall": 0.7364, "IoU.building": 0.796, "IoU.sky": 0.9399, "IoU.floor": 0.7946, "IoU.tree": 0.718, "IoU.ceiling": 0.7994, "IoU.road": 0.7963, "IoU.bed ": 0.8628, "IoU.windowpane": 0.57, "IoU.grass": 0.6766, "IoU.cabinet": 0.562, "IoU.sidewalk": 0.6173, "IoU.person": 0.7802, "IoU.earth": 0.331, "IoU.door": 0.4037, "IoU.table": 0.5022, "IoU.mountain": 0.5886, "IoU.plant": 0.478, "IoU.curtain": 0.6903, "IoU.chair": 0.5169, "IoU.car": 0.8231, "IoU.water": 0.5074, "IoU.painting": 0.6775, "IoU.sofa": 0.5652, "IoU.shelf": 0.3721, "IoU.house": 0.4039, "IoU.sea": 0.4741, "IoU.mirror": 0.5457, "IoU.rug": 0.5389, "IoU.field": 0.3645, "IoU.armchair": 0.3671, "IoU.seat": 0.5481, "IoU.fence": 0.3794, "IoU.desk": 0.382, "IoU.rock": 0.4541, "IoU.wardrobe": 0.3844, "IoU.lamp": 0.562, "IoU.bathtub": 0.6881, "IoU.railing": 0.2729, "IoU.cushion": 0.495, "IoU.base": 0.1869, "IoU.box": 0.198, "IoU.column": 0.3905, "IoU.signboard": 0.3074, "IoU.chest of drawers": 0.3788, "IoU.counter": 0.2357, "IoU.sand": 0.3099, "IoU.sink": 0.6324, "IoU.skyscraper": 0.47, "IoU.fireplace": 0.6644, "IoU.refrigerator": 0.6435, "IoU.grandstand": 0.459, "IoU.path": 0.2258, "IoU.stairs": 0.2169, "IoU.runway": 0.6777, "IoU.case": 0.4775, "IoU.pool table": 0.914, "IoU.pillow": 0.5006, "IoU.screen door": 0.6087, "IoU.stairway": 0.291, "IoU.river": 0.1451, "IoU.bridge": 0.3083, "IoU.bookcase": 0.2901, "IoU.blind": 0.3924, "IoU.coffee table": 
0.4932, "IoU.toilet": 0.75, "IoU.flower": 0.3249, "IoU.book": 0.447, "IoU.hill": 0.0516, "IoU.bench": 0.384, "IoU.countertop": 0.4895, "IoU.stove": 0.6609, "IoU.palm": 0.4536, "IoU.kitchen island": 0.2694, "IoU.computer": 0.5712, "IoU.swivel chair": 0.4238, "IoU.boat": 0.5429, "IoU.bar": 0.2252, "IoU.arcade machine": 0.6106, "IoU.hovel": 0.2765, "IoU.bus": 0.8483, "IoU.towel": 0.5117, "IoU.light": 0.4654, "IoU.truck": 0.3056, "IoU.tower": 0.0708, "IoU.chandelier": 0.6059, "IoU.awning": 0.146, "IoU.streetlight": 0.1922, "IoU.booth": 0.285, "IoU.television receiver": 0.6042, "IoU.airplane": 0.5011, "IoU.dirt track": 0.0469, "IoU.apparel": 0.2703, "IoU.pole": 0.1297, "IoU.land": 0.0476, "IoU.bannister": 0.0574, "IoU.escalator": 0.152, "IoU.ottoman": 0.3367, "IoU.bottle": 0.1896, "IoU.buffet": 0.4247, "IoU.poster": 0.2232, "IoU.stage": 0.1118, "IoU.van": 0.3421, "IoU.ship": 0.3424, "IoU.fountain": 0.0471, "IoU.conveyer belt": 0.6201, "IoU.canopy": 0.1083, "IoU.washer": 0.6964, "IoU.plaything": 0.243, "IoU.swimming pool": 0.3549, "IoU.stool": 0.2774, "IoU.barrel": 0.1831, "IoU.basket": 0.2494, "IoU.waterfall": 0.6165, "IoU.tent": 0.8415, "IoU.bag": 0.1148, "IoU.minibike": 0.5848, "IoU.cradle": 0.7801, "IoU.oven": 0.2756, "IoU.ball": 0.4455, "IoU.food": 0.4452, "IoU.step": 0.0858, "IoU.tank": 0.347, "IoU.trade name": 0.1712, "IoU.microwave": 0.5042, "IoU.pot": 0.3538, "IoU.animal": 0.5092, "IoU.bicycle": 0.4778, "IoU.lake": 0.4189, "IoU.dishwasher": 0.5807, "IoU.screen": 0.5984, "IoU.blanket": 0.158, "IoU.sculpture": 0.4727, "IoU.hood": 0.4265, "IoU.sconce": 0.3129, "IoU.vase": 0.2876, "IoU.traffic light": 0.2296, "IoU.tray": 0.0347, "IoU.ashcan": 0.2994, "IoU.fan": 0.5253, "IoU.pier": 0.2748, "IoU.crt screen": 0.0417, "IoU.plate": 0.3365, "IoU.monitor": 0.2062, "IoU.bulletin board": 0.4472, "IoU.shower": 0.0, "IoU.radiator": 0.5014, "IoU.glass": 0.0817, "IoU.clock": 0.1935, "IoU.flag": 0.3745, "Acc.wall": 0.856, "Acc.building": 0.9198, "Acc.sky": 0.9717, "Acc.floor": 0.8887, "Acc.tree": 0.8696, "Acc.ceiling": 0.8875, "Acc.road": 0.8889, "Acc.bed ": 0.9355, "Acc.windowpane": 0.7371, "Acc.grass": 0.824, "Acc.cabinet": 0.6902, "Acc.sidewalk": 0.7549, "Acc.person": 0.9133, "Acc.earth": 0.4829, "Acc.door": 0.5486, "Acc.table": 0.6979, "Acc.mountain": 0.7224, "Acc.plant": 0.5802, "Acc.curtain": 0.8447, "Acc.chair": 0.6861, "Acc.car": 0.9168, "Acc.water": 0.6866, "Acc.painting": 0.8268, "Acc.sofa": 0.7384, "Acc.shelf": 0.5877, "Acc.house": 0.5271, "Acc.sea": 0.6856, "Acc.mirror": 0.6805, "Acc.rug": 0.6194, "Acc.field": 0.5188, "Acc.armchair": 0.5821, "Acc.seat": 0.7446, "Acc.fence": 0.5456, "Acc.desk": 0.5403, "Acc.rock": 0.7386, "Acc.wardrobe": 0.549, "Acc.lamp": 0.729, "Acc.bathtub": 0.7794, "Acc.railing": 0.4131, "Acc.cushion": 0.7233, "Acc.base": 0.3566, "Acc.box": 0.263, "Acc.column": 0.5569, "Acc.signboard": 0.4341, "Acc.chest of drawers": 0.5846, "Acc.counter": 0.3511, "Acc.sand": 0.4531, "Acc.sink": 0.7451, "Acc.skyscraper": 0.5763, "Acc.fireplace": 0.8014, "Acc.refrigerator": 0.8282, "Acc.grandstand": 0.7368, "Acc.path": 0.3836, "Acc.stairs": 0.244, "Acc.runway": 0.927, "Acc.case": 0.5935, "Acc.pool table": 0.9517, "Acc.pillow": 0.5839, "Acc.screen door": 0.7466, "Acc.stairway": 0.4335, "Acc.river": 0.2548, "Acc.bridge": 0.3661, "Acc.bookcase": 0.3822, "Acc.blind": 0.513, "Acc.coffee table": 0.7732, "Acc.toilet": 0.8912, "Acc.flower": 0.4584, "Acc.book": 0.6365, "Acc.hill": 0.0756, "Acc.bench": 0.4638, "Acc.countertop": 0.6946, "Acc.stove": 0.7493, "Acc.palm": 0.6316, "Acc.kitchen island": 
0.6159, "Acc.computer": 0.6952, "Acc.swivel chair": 0.5935, "Acc.boat": 0.6468, "Acc.bar": 0.2507, "Acc.arcade machine": 0.6579, "Acc.hovel": 0.4233, "Acc.bus": 0.9282, "Acc.towel": 0.678, "Acc.light": 0.5179, "Acc.truck": 0.3721, "Acc.tower": 0.1098, "Acc.chandelier": 0.7437, "Acc.awning": 0.1788, "Acc.streetlight": 0.2488, "Acc.booth": 0.3154, "Acc.television receiver": 0.7414, "Acc.airplane": 0.7468, "Acc.dirt track": 0.131, "Acc.apparel": 0.4466, "Acc.pole": 0.1662, "Acc.land": 0.092, "Acc.bannister": 0.0729, "Acc.escalator": 0.1702, "Acc.ottoman": 0.4432, "Acc.bottle": 0.2407, "Acc.buffet": 0.5258, "Acc.poster": 0.3222, "Acc.stage": 0.1749, "Acc.van": 0.4323, "Acc.ship": 0.4115, "Acc.fountain": 0.0487, "Acc.conveyer belt": 0.8065, "Acc.canopy": 0.1439, "Acc.washer": 0.7178, "Acc.plaything": 0.45, "Acc.swimming pool": 0.5635, "Acc.stool": 0.3635, "Acc.barrel": 0.786, "Acc.basket": 0.3325, "Acc.waterfall": 0.664, "Acc.tent": 0.9681, "Acc.bag": 0.1385, "Acc.minibike": 0.6941, "Acc.cradle": 0.929, "Acc.oven": 0.5629, "Acc.ball": 0.5492, "Acc.food": 0.5095, "Acc.step": 0.0945, "Acc.tank": 0.391, "Acc.trade name": 0.1909, "Acc.microwave": 0.5587, "Acc.pot": 0.4118, "Acc.animal": 0.5285, "Acc.bicycle": 0.6767, "Acc.lake": 0.468, "Acc.dishwasher": 0.6489, "Acc.screen": 0.8531, "Acc.blanket": 0.2002, "Acc.sculpture": 0.693, "Acc.hood": 0.5718, "Acc.sconce": 0.373, "Acc.vase": 0.4547, "Acc.traffic light": 0.3251, "Acc.tray": 0.0422, "Acc.ashcan": 0.3718, "Acc.fan": 0.6268, "Acc.pier": 0.3361, "Acc.crt screen": 0.1138, "Acc.plate": 0.3944, "Acc.monitor": 0.3037, "Acc.bulletin board": 0.5729, "Acc.shower": 0.0, "Acc.radiator": 0.573, "Acc.glass": 0.0878, "Acc.clock": 0.2273, "Acc.flag": 0.4447} -{"mode": "train", "epoch": 1, "iter": 28050, "lr": 7e-05, "memory": 20279, "data_time": 0.96224, "decode.loss_ce": 0.33798, "decode.acc_seg": 87.1532, "loss": 0.33798, "time": 1.21103} -{"mode": "train", "epoch": 1, "iter": 28100, "lr": 7e-05, "memory": 20279, "data_time": 0.01945, "decode.loss_ce": 0.33479, "decode.acc_seg": 87.02477, "loss": 0.33479, "time": 0.26517} -{"mode": "train", "epoch": 1, "iter": 28150, "lr": 7e-05, "memory": 20279, "data_time": 0.02049, "decode.loss_ce": 0.34499, "decode.acc_seg": 86.93922, "loss": 0.34499, "time": 0.26502} -{"mode": "train", "epoch": 1, "iter": 28200, "lr": 7e-05, "memory": 20279, "data_time": 0.02041, "decode.loss_ce": 0.32711, "decode.acc_seg": 87.326, "loss": 0.32711, "time": 0.27627} -{"mode": "train", "epoch": 1, "iter": 28250, "lr": 7e-05, "memory": 20279, "data_time": 0.01966, "decode.loss_ce": 0.33406, "decode.acc_seg": 87.14813, "loss": 0.33406, "time": 0.26722} -{"mode": "train", "epoch": 1, "iter": 28300, "lr": 7e-05, "memory": 20279, "data_time": 0.0209, "decode.loss_ce": 0.33124, "decode.acc_seg": 87.04998, "loss": 0.33124, "time": 0.26288} -{"mode": "train", "epoch": 1, "iter": 28350, "lr": 7e-05, "memory": 20279, "data_time": 0.02018, "decode.loss_ce": 0.31631, "decode.acc_seg": 87.71187, "loss": 0.31631, "time": 0.27148} -{"mode": "train", "epoch": 1, "iter": 28400, "lr": 7e-05, "memory": 20279, "data_time": 0.02091, "decode.loss_ce": 0.32726, "decode.acc_seg": 87.37143, "loss": 0.32726, "time": 0.27258} -{"mode": "train", "epoch": 1, "iter": 28450, "lr": 7e-05, "memory": 20279, "data_time": 0.02214, "decode.loss_ce": 0.33669, "decode.acc_seg": 87.10203, "loss": 0.33669, "time": 0.27032} -{"mode": "train", "epoch": 1, "iter": 28500, "lr": 7e-05, "memory": 20279, "data_time": 0.02285, "decode.loss_ce": 0.33156, "decode.acc_seg": 87.14268, 
"loss": 0.33156, "time": 0.27051} -{"mode": "train", "epoch": 1, "iter": 28550, "lr": 7e-05, "memory": 20279, "data_time": 0.01997, "decode.loss_ce": 0.33246, "decode.acc_seg": 87.20853, "loss": 0.33246, "time": 0.26831} -{"mode": "train", "epoch": 1, "iter": 28600, "lr": 7e-05, "memory": 20279, "data_time": 0.01996, "decode.loss_ce": 0.33311, "decode.acc_seg": 86.95584, "loss": 0.33311, "time": 0.26444} -{"mode": "train", "epoch": 1, "iter": 28650, "lr": 7e-05, "memory": 20279, "data_time": 0.02011, "decode.loss_ce": 0.32458, "decode.acc_seg": 87.24443, "loss": 0.32458, "time": 0.27112} -{"mode": "train", "epoch": 1, "iter": 28700, "lr": 6e-05, "memory": 20279, "data_time": 0.01887, "decode.loss_ce": 0.32072, "decode.acc_seg": 87.42934, "loss": 0.32072, "time": 0.2661} -{"mode": "train", "epoch": 1, "iter": 28750, "lr": 6e-05, "memory": 20279, "data_time": 0.01967, "decode.loss_ce": 0.32971, "decode.acc_seg": 87.0529, "loss": 0.32971, "time": 0.26237} -{"mode": "train", "epoch": 1, "iter": 28800, "lr": 6e-05, "memory": 20279, "data_time": 0.02085, "decode.loss_ce": 0.34027, "decode.acc_seg": 86.97381, "loss": 0.34027, "time": 0.2681} -{"mode": "train", "epoch": 1, "iter": 28850, "lr": 6e-05, "memory": 20279, "data_time": 0.02119, "decode.loss_ce": 0.34071, "decode.acc_seg": 86.64085, "loss": 0.34071, "time": 0.27107} -{"mode": "train", "epoch": 1, "iter": 28900, "lr": 6e-05, "memory": 20279, "data_time": 0.02049, "decode.loss_ce": 0.33483, "decode.acc_seg": 87.13751, "loss": 0.33483, "time": 0.26895} -{"mode": "train", "epoch": 1, "iter": 28950, "lr": 6e-05, "memory": 20279, "data_time": 0.0208, "decode.loss_ce": 0.32232, "decode.acc_seg": 87.37141, "loss": 0.32232, "time": 0.26332} -{"mode": "train", "epoch": 1, "iter": 29000, "lr": 6e-05, "memory": 20279, "data_time": 0.01915, "decode.loss_ce": 0.32937, "decode.acc_seg": 87.19878, "loss": 0.32937, "time": 0.2635} -{"mode": "train", "epoch": 1, "iter": 29050, "lr": 6e-05, "memory": 20279, "data_time": 0.02025, "decode.loss_ce": 0.33229, "decode.acc_seg": 87.2143, "loss": 0.33229, "time": 0.26339} -{"mode": "train", "epoch": 1, "iter": 29100, "lr": 6e-05, "memory": 20279, "data_time": 0.02048, "decode.loss_ce": 0.34502, "decode.acc_seg": 86.79128, "loss": 0.34502, "time": 0.26595} -{"mode": "train", "epoch": 1, "iter": 29150, "lr": 6e-05, "memory": 20279, "data_time": 0.01998, "decode.loss_ce": 0.34083, "decode.acc_seg": 86.73914, "loss": 0.34083, "time": 0.2667} -{"mode": "train", "epoch": 1, "iter": 29200, "lr": 6e-05, "memory": 20279, "data_time": 0.02281, "decode.loss_ce": 0.32976, "decode.acc_seg": 87.17641, "loss": 0.32976, "time": 0.27726} -{"mode": "train", "epoch": 1, "iter": 29250, "lr": 6e-05, "memory": 20279, "data_time": 0.02156, "decode.loss_ce": 0.32062, "decode.acc_seg": 87.29581, "loss": 0.32062, "time": 0.26961} -{"mode": "train", "epoch": 1, "iter": 29300, "lr": 6e-05, "memory": 20279, "data_time": 0.02012, "decode.loss_ce": 0.3174, "decode.acc_seg": 87.74584, "loss": 0.3174, "time": 0.26769} -{"mode": "train", "epoch": 1, "iter": 29350, "lr": 6e-05, "memory": 20279, "data_time": 0.0197, "decode.loss_ce": 0.31599, "decode.acc_seg": 87.43491, "loss": 0.31599, "time": 0.2684} -{"mode": "train", "epoch": 1, "iter": 29400, "lr": 6e-05, "memory": 20279, "data_time": 0.0194, "decode.loss_ce": 0.32558, "decode.acc_seg": 87.31199, "loss": 0.32558, "time": 0.27019} -{"mode": "train", "epoch": 1, "iter": 29450, "lr": 6e-05, "memory": 20279, "data_time": 0.0199, "decode.loss_ce": 0.33023, "decode.acc_seg": 87.17815, "loss": 
0.33023, "time": 0.26651} -{"mode": "train", "epoch": 1, "iter": 29500, "lr": 6e-05, "memory": 20279, "data_time": 0.02084, "decode.loss_ce": 0.32688, "decode.acc_seg": 87.22859, "loss": 0.32688, "time": 0.26514} -{"mode": "train", "epoch": 1, "iter": 29550, "lr": 6e-05, "memory": 20279, "data_time": 0.02003, "decode.loss_ce": 0.31039, "decode.acc_seg": 87.8281, "loss": 0.31039, "time": 0.26703} -{"mode": "train", "epoch": 1, "iter": 29600, "lr": 6e-05, "memory": 20279, "data_time": 0.02211, "decode.loss_ce": 0.30854, "decode.acc_seg": 88.02547, "loss": 0.30854, "time": 0.26583} -{"mode": "train", "epoch": 1, "iter": 29650, "lr": 6e-05, "memory": 20279, "data_time": 0.02141, "decode.loss_ce": 0.33232, "decode.acc_seg": 86.95244, "loss": 0.33232, "time": 0.26883} -{"mode": "train", "epoch": 1, "iter": 29700, "lr": 6e-05, "memory": 20279, "data_time": 0.02048, "decode.loss_ce": 0.32453, "decode.acc_seg": 87.41606, "loss": 0.32453, "time": 0.26557} -{"mode": "train", "epoch": 1, "iter": 29750, "lr": 6e-05, "memory": 20279, "data_time": 0.02072, "decode.loss_ce": 0.32122, "decode.acc_seg": 87.39173, "loss": 0.32122, "time": 0.27177} -{"mode": "train", "epoch": 1, "iter": 29800, "lr": 6e-05, "memory": 20279, "data_time": 0.02203, "decode.loss_ce": 0.32503, "decode.acc_seg": 87.29899, "loss": 0.32503, "time": 0.26916} -{"mode": "train", "epoch": 1, "iter": 29850, "lr": 6e-05, "memory": 20279, "data_time": 0.02185, "decode.loss_ce": 0.31803, "decode.acc_seg": 87.50321, "loss": 0.31803, "time": 0.27123} -{"mode": "train", "epoch": 1, "iter": 29900, "lr": 6e-05, "memory": 20279, "data_time": 0.02021, "decode.loss_ce": 0.31849, "decode.acc_seg": 87.70502, "loss": 0.31849, "time": 0.26948} -{"mode": "train", "epoch": 1, "iter": 29950, "lr": 6e-05, "memory": 20279, "data_time": 0.02089, "decode.loss_ce": 0.32094, "decode.acc_seg": 87.47354, "loss": 0.32094, "time": 0.26964} -{"mode": "train", "epoch": 1, "iter": 30000, "lr": 6e-05, "memory": 20279, "data_time": 0.01947, "decode.loss_ce": 0.3202, "decode.acc_seg": 87.72761, "loss": 0.3202, "time": 0.26442} -{"mode": "train", "epoch": 1, "iter": 30050, "lr": 6e-05, "memory": 20279, "data_time": 0.02082, "decode.loss_ce": 0.31666, "decode.acc_seg": 87.83577, "loss": 0.31666, "time": 0.26903} -{"mode": "train", "epoch": 1, "iter": 30100, "lr": 6e-05, "memory": 20279, "data_time": 0.02232, "decode.loss_ce": 0.32221, "decode.acc_seg": 87.6813, "loss": 0.32221, "time": 0.2675} -{"mode": "train", "epoch": 1, "iter": 30150, "lr": 6e-05, "memory": 20279, "data_time": 0.01915, "decode.loss_ce": 0.32178, "decode.acc_seg": 87.52337, "loss": 0.32178, "time": 0.26694} -{"mode": "train", "epoch": 1, "iter": 30200, "lr": 6e-05, "memory": 20279, "data_time": 0.01918, "decode.loss_ce": 0.31634, "decode.acc_seg": 87.56178, "loss": 0.31634, "time": 0.27955} -{"mode": "train", "epoch": 1, "iter": 30250, "lr": 6e-05, "memory": 20279, "data_time": 0.02007, "decode.loss_ce": 0.3165, "decode.acc_seg": 87.59287, "loss": 0.3165, "time": 0.26708} -{"mode": "train", "epoch": 1, "iter": 30300, "lr": 6e-05, "memory": 20279, "data_time": 0.02148, "decode.loss_ce": 0.31005, "decode.acc_seg": 87.91751, "loss": 0.31005, "time": 0.27064} -{"mode": "train", "epoch": 1, "iter": 30350, "lr": 6e-05, "memory": 20279, "data_time": 0.02015, "decode.loss_ce": 0.31617, "decode.acc_seg": 87.9618, "loss": 0.31617, "time": 0.27184} -{"mode": "train", "epoch": 1, "iter": 30400, "lr": 6e-05, "memory": 20279, "data_time": 0.0205, "decode.loss_ce": 0.31615, "decode.acc_seg": 87.66622, "loss": 0.31615, 
"time": 0.27428} -{"mode": "train", "epoch": 1, "iter": 30450, "lr": 6e-05, "memory": 20279, "data_time": 0.02116, "decode.loss_ce": 0.32305, "decode.acc_seg": 87.52393, "loss": 0.32305, "time": 0.26494} -{"mode": "train", "epoch": 1, "iter": 30500, "lr": 6e-05, "memory": 20279, "data_time": 0.0208, "decode.loss_ce": 0.30736, "decode.acc_seg": 87.94151, "loss": 0.30736, "time": 0.26528} -{"mode": "train", "epoch": 1, "iter": 30550, "lr": 6e-05, "memory": 20279, "data_time": 0.0211, "decode.loss_ce": 0.32068, "decode.acc_seg": 87.86533, "loss": 0.32068, "time": 0.27125} -{"mode": "train", "epoch": 1, "iter": 30600, "lr": 6e-05, "memory": 20279, "data_time": 0.02156, "decode.loss_ce": 0.32105, "decode.acc_seg": 87.65752, "loss": 0.32105, "time": 0.26478} -{"mode": "train", "epoch": 1, "iter": 30650, "lr": 5e-05, "memory": 20279, "data_time": 0.01967, "decode.loss_ce": 0.31835, "decode.acc_seg": 87.67495, "loss": 0.31835, "time": 0.26798} -{"mode": "train", "epoch": 1, "iter": 30700, "lr": 5e-05, "memory": 20279, "data_time": 0.02022, "decode.loss_ce": 0.30294, "decode.acc_seg": 88.19504, "loss": 0.30294, "time": 0.27163} -{"mode": "train", "epoch": 1, "iter": 30750, "lr": 5e-05, "memory": 20279, "data_time": 0.02189, "decode.loss_ce": 0.31123, "decode.acc_seg": 87.84706, "loss": 0.31123, "time": 0.26576} -{"mode": "train", "epoch": 1, "iter": 30800, "lr": 5e-05, "memory": 20279, "data_time": 0.01993, "decode.loss_ce": 0.3054, "decode.acc_seg": 88.10007, "loss": 0.3054, "time": 0.27174} -{"mode": "train", "epoch": 1, "iter": 30850, "lr": 5e-05, "memory": 20279, "data_time": 0.02074, "decode.loss_ce": 0.3155, "decode.acc_seg": 87.64689, "loss": 0.3155, "time": 0.26891} -{"mode": "train", "epoch": 1, "iter": 30900, "lr": 5e-05, "memory": 20279, "data_time": 0.02026, "decode.loss_ce": 0.32245, "decode.acc_seg": 87.45307, "loss": 0.32245, "time": 0.26993} -{"mode": "train", "epoch": 1, "iter": 30950, "lr": 5e-05, "memory": 20279, "data_time": 0.01882, "decode.loss_ce": 0.32262, "decode.acc_seg": 87.42998, "loss": 0.32262, "time": 0.26517} -{"mode": "train", "epoch": 1, "iter": 31000, "lr": 5e-05, "memory": 20279, "data_time": 0.01994, "decode.loss_ce": 0.31983, "decode.acc_seg": 87.49958, "loss": 0.31983, "time": 0.26478} -{"mode": "train", "epoch": 1, "iter": 31050, "lr": 5e-05, "memory": 20279, "data_time": 0.02102, "decode.loss_ce": 0.30755, "decode.acc_seg": 87.95068, "loss": 0.30755, "time": 0.26507} -{"mode": "train", "epoch": 1, "iter": 31100, "lr": 5e-05, "memory": 20279, "data_time": 0.02048, "decode.loss_ce": 0.30863, "decode.acc_seg": 87.98744, "loss": 0.30863, "time": 0.26502} -{"mode": "train", "epoch": 1, "iter": 31150, "lr": 5e-05, "memory": 20279, "data_time": 0.01993, "decode.loss_ce": 0.32333, "decode.acc_seg": 87.41354, "loss": 0.32333, "time": 0.26361} -{"mode": "train", "epoch": 1, "iter": 31200, "lr": 5e-05, "memory": 20279, "data_time": 0.01899, "decode.loss_ce": 0.30777, "decode.acc_seg": 87.924, "loss": 0.30777, "time": 0.2765} -{"mode": "train", "epoch": 1, "iter": 31250, "lr": 5e-05, "memory": 20279, "data_time": 0.02101, "decode.loss_ce": 0.31598, "decode.acc_seg": 87.8098, "loss": 0.31598, "time": 0.26774} -{"mode": "train", "epoch": 1, "iter": 31300, "lr": 5e-05, "memory": 20279, "data_time": 0.01953, "decode.loss_ce": 0.30307, "decode.acc_seg": 88.01823, "loss": 0.30307, "time": 0.26817} -{"mode": "train", "epoch": 1, "iter": 31350, "lr": 5e-05, "memory": 20279, "data_time": 0.01922, "decode.loss_ce": 0.29678, "decode.acc_seg": 88.31079, "loss": 0.29678, "time": 
0.26474} -{"mode": "train", "epoch": 1, "iter": 31400, "lr": 5e-05, "memory": 20279, "data_time": 0.01953, "decode.loss_ce": 0.31104, "decode.acc_seg": 87.56829, "loss": 0.31104, "time": 0.26681} -{"mode": "train", "epoch": 1, "iter": 31450, "lr": 5e-05, "memory": 20279, "data_time": 0.01925, "decode.loss_ce": 0.29822, "decode.acc_seg": 88.09378, "loss": 0.29822, "time": 0.26563} -{"mode": "train", "epoch": 1, "iter": 31500, "lr": 5e-05, "memory": 20279, "data_time": 0.01896, "decode.loss_ce": 0.30425, "decode.acc_seg": 88.12362, "loss": 0.30425, "time": 0.26162} -{"mode": "train", "epoch": 1, "iter": 31550, "lr": 5e-05, "memory": 20279, "data_time": 0.01955, "decode.loss_ce": 0.31322, "decode.acc_seg": 87.65697, "loss": 0.31322, "time": 0.26611} -{"mode": "train", "epoch": 2, "iter": 31600, "lr": 5e-05, "memory": 20279, "data_time": 0.08043, "decode.loss_ce": 0.31095, "decode.acc_seg": 87.94607, "loss": 0.31095, "time": 0.32618} -{"mode": "train", "epoch": 2, "iter": 31650, "lr": 5e-05, "memory": 20279, "data_time": 0.01931, "decode.loss_ce": 0.30942, "decode.acc_seg": 87.90766, "loss": 0.30942, "time": 0.26695} -{"mode": "train", "epoch": 2, "iter": 31700, "lr": 5e-05, "memory": 20279, "data_time": 0.01944, "decode.loss_ce": 0.29967, "decode.acc_seg": 88.24023, "loss": 0.29967, "time": 0.2641} -{"mode": "train", "epoch": 2, "iter": 31750, "lr": 5e-05, "memory": 20279, "data_time": 0.02088, "decode.loss_ce": 0.30801, "decode.acc_seg": 87.95178, "loss": 0.30801, "time": 0.26674} -{"mode": "train", "epoch": 2, "iter": 31800, "lr": 5e-05, "memory": 20279, "data_time": 0.02067, "decode.loss_ce": 0.30049, "decode.acc_seg": 88.29387, "loss": 0.30049, "time": 0.26797} -{"mode": "train", "epoch": 2, "iter": 31850, "lr": 5e-05, "memory": 20279, "data_time": 0.01909, "decode.loss_ce": 0.30226, "decode.acc_seg": 88.32625, "loss": 0.30226, "time": 0.27285} -{"mode": "train", "epoch": 2, "iter": 31900, "lr": 5e-05, "memory": 20279, "data_time": 0.01878, "decode.loss_ce": 0.28949, "decode.acc_seg": 88.45236, "loss": 0.28949, "time": 0.2687} -{"mode": "train", "epoch": 2, "iter": 31950, "lr": 5e-05, "memory": 20279, "data_time": 0.02017, "decode.loss_ce": 0.30403, "decode.acc_seg": 88.25797, "loss": 0.30403, "time": 0.26348} -{"mode": "train", "epoch": 2, "iter": 32000, "lr": 5e-05, "memory": 20279, "data_time": 0.0205, "decode.loss_ce": 0.30154, "decode.acc_seg": 88.07198, "loss": 0.30154, "time": 0.28483} -{"mode": "val", "epoch": 2, "iter": 250, "lr": 5e-05, "aAcc": 0.8058, "mIoU": 0.4322, "mAcc": 0.5459, "IoU.wall": 0.7378, "IoU.building": 0.807, "IoU.sky": 0.9408, "IoU.floor": 0.7948, "IoU.tree": 0.7218, "IoU.ceiling": 0.8031, "IoU.road": 0.8132, "IoU.bed ": 0.8647, "IoU.windowpane": 0.5639, "IoU.grass": 0.671, "IoU.cabinet": 0.559, "IoU.sidewalk": 0.6223, "IoU.person": 0.7851, "IoU.earth": 0.3227, "IoU.door": 0.4022, "IoU.table": 0.517, "IoU.mountain": 0.5631, "IoU.plant": 0.4905, "IoU.curtain": 0.6742, "IoU.chair": 0.5167, "IoU.car": 0.8243, "IoU.water": 0.4699, "IoU.painting": 0.655, "IoU.sofa": 0.583, "IoU.shelf": 0.3712, "IoU.house": 0.4272, "IoU.sea": 0.5095, "IoU.mirror": 0.5517, "IoU.rug": 0.527, "IoU.field": 0.3803, "IoU.armchair": 0.3651, "IoU.seat": 0.5424, "IoU.fence": 0.3902, "IoU.desk": 0.4267, "IoU.rock": 0.4444, "IoU.wardrobe": 0.3861, "IoU.lamp": 0.5748, "IoU.bathtub": 0.6727, "IoU.railing": 0.2754, "IoU.cushion": 0.5018, "IoU.base": 0.1742, "IoU.box": 0.1949, "IoU.column": 0.4085, "IoU.signboard": 0.3125, "IoU.chest of drawers": 0.4052, "IoU.counter": 0.2525, "IoU.sand": 0.2982, 
"IoU.sink": 0.6114, "IoU.skyscraper": 0.6258, "IoU.fireplace": 0.6747, "IoU.refrigerator": 0.6781, "IoU.grandstand": 0.4554, "IoU.path": 0.1912, "IoU.stairs": 0.2797, "IoU.runway": 0.7142, "IoU.case": 0.4781, "IoU.pool table": 0.9199, "IoU.pillow": 0.5047, "IoU.screen door": 0.6535, "IoU.stairway": 0.2577, "IoU.river": 0.1347, "IoU.bridge": 0.4269, "IoU.bookcase": 0.3098, "IoU.blind": 0.3261, "IoU.coffee table": 0.5099, "IoU.toilet": 0.7758, "IoU.flower": 0.3345, "IoU.book": 0.4397, "IoU.hill": 0.0553, "IoU.bench": 0.3801, "IoU.countertop": 0.4561, "IoU.stove": 0.6742, "IoU.palm": 0.4384, "IoU.kitchen island": 0.2915, "IoU.computer": 0.5738, "IoU.swivel chair": 0.401, "IoU.boat": 0.6701, "IoU.bar": 0.3088, "IoU.arcade machine": 0.5458, "IoU.hovel": 0.1882, "IoU.bus": 0.8433, "IoU.towel": 0.5553, "IoU.light": 0.4413, "IoU.truck": 0.3202, "IoU.tower": 0.089, "IoU.chandelier": 0.6018, "IoU.awning": 0.1778, "IoU.streetlight": 0.2109, "IoU.booth": 0.2958, "IoU.television receiver": 0.6147, "IoU.airplane": 0.5157, "IoU.dirt track": 0.0134, "IoU.apparel": 0.2377, "IoU.pole": 0.1413, "IoU.land": 0.0505, "IoU.bannister": 0.0522, "IoU.escalator": 0.1798, "IoU.ottoman": 0.4004, "IoU.bottle": 0.2652, "IoU.buffet": 0.3393, "IoU.poster": 0.2207, "IoU.stage": 0.1176, "IoU.van": 0.348, "IoU.ship": 0.5443, "IoU.fountain": 0.0257, "IoU.conveyer belt": 0.6228, "IoU.canopy": 0.1767, "IoU.washer": 0.6942, "IoU.plaything": 0.2087, "IoU.swimming pool": 0.4015, "IoU.stool": 0.303, "IoU.barrel": 0.1533, "IoU.basket": 0.2372, "IoU.waterfall": 0.6155, "IoU.tent": 0.8829, "IoU.bag": 0.0901, "IoU.minibike": 0.6381, "IoU.cradle": 0.7086, "IoU.oven": 0.2827, "IoU.ball": 0.4374, "IoU.food": 0.3604, "IoU.step": 0.145, "IoU.tank": 0.3363, "IoU.trade name": 0.2267, "IoU.microwave": 0.5433, "IoU.pot": 0.3785, "IoU.animal": 0.5534, "IoU.bicycle": 0.5116, "IoU.lake": 0.4571, "IoU.dishwasher": 0.6035, "IoU.screen": 0.6437, "IoU.blanket": 0.1285, "IoU.sculpture": 0.5216, "IoU.hood": 0.4921, "IoU.sconce": 0.3213, "IoU.vase": 0.2909, "IoU.traffic light": 0.2422, "IoU.tray": 0.0558, "IoU.ashcan": 0.3366, "IoU.fan": 0.5365, "IoU.pier": 0.3198, "IoU.crt screen": 0.0486, "IoU.plate": 0.431, "IoU.monitor": 0.291, "IoU.bulletin board": 0.4375, "IoU.shower": 0.0, "IoU.radiator": 0.516, "IoU.glass": 0.0707, "IoU.clock": 0.233, "IoU.flag": 0.3606, "Acc.wall": 0.87, "Acc.building": 0.9198, "Acc.sky": 0.9738, "Acc.floor": 0.899, "Acc.tree": 0.8516, "Acc.ceiling": 0.8898, "Acc.road": 0.8968, "Acc.bed ": 0.9451, "Acc.windowpane": 0.7215, "Acc.grass": 0.7828, "Acc.cabinet": 0.7095, "Acc.sidewalk": 0.7708, "Acc.person": 0.9054, "Acc.earth": 0.4648, "Acc.door": 0.5228, "Acc.table": 0.7301, "Acc.mountain": 0.6951, "Acc.plant": 0.6389, "Acc.curtain": 0.856, "Acc.chair": 0.6708, "Acc.car": 0.9187, "Acc.water": 0.6035, "Acc.painting": 0.8424, "Acc.sofa": 0.78, "Acc.shelf": 0.5759, "Acc.house": 0.6044, "Acc.sea": 0.8009, "Acc.mirror": 0.6297, "Acc.rug": 0.6205, "Acc.field": 0.5586, "Acc.armchair": 0.5267, "Acc.seat": 0.7717, "Acc.fence": 0.5492, "Acc.desk": 0.6289, "Acc.rock": 0.6889, "Acc.wardrobe": 0.5736, "Acc.lamp": 0.7102, "Acc.bathtub": 0.7497, "Acc.railing": 0.3939, "Acc.cushion": 0.6219, "Acc.base": 0.272, "Acc.box": 0.2629, "Acc.column": 0.5596, "Acc.signboard": 0.4173, "Acc.chest of drawers": 0.5684, "Acc.counter": 0.3487, "Acc.sand": 0.4849, "Acc.sink": 0.7574, "Acc.skyscraper": 0.7168, "Acc.fireplace": 0.8076, "Acc.refrigerator": 0.8191, "Acc.grandstand": 0.7114, "Acc.path": 0.2718, "Acc.stairs": 0.35, "Acc.runway": 0.9417, "Acc.case": 
0.6439, "Acc.pool table": 0.9571, "Acc.pillow": 0.5791, "Acc.screen door": 0.7916, "Acc.stairway": 0.3336, "Acc.river": 0.2343, "Acc.bridge": 0.5294, "Acc.bookcase": 0.4016, "Acc.blind": 0.37, "Acc.coffee table": 0.7588, "Acc.toilet": 0.8764, "Acc.flower": 0.4871, "Acc.book": 0.6041, "Acc.hill": 0.0915, "Acc.bench": 0.4512, "Acc.countertop": 0.752, "Acc.stove": 0.8207, "Acc.palm": 0.6894, "Acc.kitchen island": 0.5793, "Acc.computer": 0.6661, "Acc.swivel chair": 0.5455, "Acc.boat": 0.8369, "Acc.bar": 0.3556, "Acc.arcade machine": 0.6721, "Acc.hovel": 0.3157, "Acc.bus": 0.9352, "Acc.towel": 0.6695, "Acc.light": 0.4894, "Acc.truck": 0.4017, "Acc.tower": 0.1183, "Acc.chandelier": 0.7823, "Acc.awning": 0.217, "Acc.streetlight": 0.2741, "Acc.booth": 0.3052, "Acc.television receiver": 0.7583, "Acc.airplane": 0.6539, "Acc.dirt track": 0.0315, "Acc.apparel": 0.3885, "Acc.pole": 0.1827, "Acc.land": 0.0874, "Acc.bannister": 0.0623, "Acc.escalator": 0.2037, "Acc.ottoman": 0.5078, "Acc.bottle": 0.3588, "Acc.buffet": 0.3955, "Acc.poster": 0.3079, "Acc.stage": 0.1775, "Acc.van": 0.4207, "Acc.ship": 0.6234, "Acc.fountain": 0.0263, "Acc.conveyer belt": 0.7749, "Acc.canopy": 0.2231, "Acc.washer": 0.7259, "Acc.plaything": 0.2704, "Acc.swimming pool": 0.5692, "Acc.stool": 0.4338, "Acc.barrel": 0.7423, "Acc.basket": 0.3081, "Acc.waterfall": 0.683, "Acc.tent": 0.9787, "Acc.bag": 0.1026, "Acc.minibike": 0.7815, "Acc.cradle": 0.876, "Acc.oven": 0.4899, "Acc.ball": 0.4944, "Acc.food": 0.4091, "Acc.step": 0.1722, "Acc.tank": 0.3608, "Acc.trade name": 0.2926, "Acc.microwave": 0.6157, "Acc.pot": 0.4359, "Acc.animal": 0.595, "Acc.bicycle": 0.6477, "Acc.lake": 0.5321, "Acc.dishwasher": 0.7011, "Acc.screen": 0.753, "Acc.blanket": 0.1447, "Acc.sculpture": 0.6763, "Acc.hood": 0.5875, "Acc.sconce": 0.3885, "Acc.vase": 0.4364, "Acc.traffic light": 0.3543, "Acc.tray": 0.0871, "Acc.ashcan": 0.4495, "Acc.fan": 0.6337, "Acc.pier": 0.3547, "Acc.crt screen": 0.1196, "Acc.plate": 0.5219, "Acc.monitor": 0.363, "Acc.bulletin board": 0.5652, "Acc.shower": 0.0, "Acc.radiator": 0.5829, "Acc.glass": 0.0753, "Acc.clock": 0.2675, "Acc.flag": 0.4273} -{"mode": "train", "epoch": 2, "iter": 32050, "lr": 5e-05, "memory": 20279, "data_time": 0.95283, "decode.loss_ce": 0.3056, "decode.acc_seg": 88.11488, "loss": 0.3056, "time": 1.19238} -{"mode": "train", "epoch": 2, "iter": 32100, "lr": 5e-05, "memory": 20279, "data_time": 0.01947, "decode.loss_ce": 0.29196, "decode.acc_seg": 88.52729, "loss": 0.29196, "time": 0.26301} -{"mode": "train", "epoch": 2, "iter": 32150, "lr": 5e-05, "memory": 20279, "data_time": 0.02031, "decode.loss_ce": 0.29774, "decode.acc_seg": 88.50522, "loss": 0.29774, "time": 0.26463} -{"mode": "train", "epoch": 2, "iter": 32200, "lr": 5e-05, "memory": 20279, "data_time": 0.01889, "decode.loss_ce": 0.30217, "decode.acc_seg": 87.8898, "loss": 0.30217, "time": 0.26635} -{"mode": "train", "epoch": 2, "iter": 32250, "lr": 5e-05, "memory": 20279, "data_time": 0.01882, "decode.loss_ce": 0.31381, "decode.acc_seg": 87.78813, "loss": 0.31381, "time": 0.27126} -{"mode": "train", "epoch": 2, "iter": 32300, "lr": 5e-05, "memory": 20279, "data_time": 0.02027, "decode.loss_ce": 0.3086, "decode.acc_seg": 87.82824, "loss": 0.3086, "time": 0.26921} -{"mode": "train", "epoch": 2, "iter": 32350, "lr": 5e-05, "memory": 20279, "data_time": 0.02127, "decode.loss_ce": 0.30258, "decode.acc_seg": 88.10144, "loss": 0.30258, "time": 0.27355} -{"mode": "train", "epoch": 2, "iter": 32400, "lr": 5e-05, "memory": 20279, "data_time": 0.01964, 
"decode.loss_ce": 0.307, "decode.acc_seg": 88.02653, "loss": 0.307, "time": 0.26041} -{"mode": "train", "epoch": 2, "iter": 32450, "lr": 5e-05, "memory": 20279, "data_time": 0.02082, "decode.loss_ce": 0.29816, "decode.acc_seg": 88.30411, "loss": 0.29816, "time": 0.26735} -{"mode": "train", "epoch": 2, "iter": 32500, "lr": 5e-05, "memory": 20279, "data_time": 0.02153, "decode.loss_ce": 0.29733, "decode.acc_seg": 88.27632, "loss": 0.29733, "time": 0.26219} -{"mode": "train", "epoch": 2, "iter": 32550, "lr": 4e-05, "memory": 20279, "data_time": 0.02047, "decode.loss_ce": 0.29978, "decode.acc_seg": 88.34096, "loss": 0.29978, "time": 0.26878} -{"mode": "train", "epoch": 2, "iter": 32600, "lr": 4e-05, "memory": 20279, "data_time": 0.01937, "decode.loss_ce": 0.30493, "decode.acc_seg": 88.01535, "loss": 0.30493, "time": 0.26766} -{"mode": "train", "epoch": 2, "iter": 32650, "lr": 4e-05, "memory": 20279, "data_time": 0.01914, "decode.loss_ce": 0.30025, "decode.acc_seg": 88.31974, "loss": 0.30025, "time": 0.26722} -{"mode": "train", "epoch": 2, "iter": 32700, "lr": 4e-05, "memory": 20279, "data_time": 0.01954, "decode.loss_ce": 0.31065, "decode.acc_seg": 87.94055, "loss": 0.31065, "time": 0.27513} -{"mode": "train", "epoch": 2, "iter": 32750, "lr": 4e-05, "memory": 20279, "data_time": 0.01821, "decode.loss_ce": 0.29365, "decode.acc_seg": 88.51529, "loss": 0.29365, "time": 0.26479} -{"mode": "train", "epoch": 2, "iter": 32800, "lr": 4e-05, "memory": 20279, "data_time": 0.02055, "decode.loss_ce": 0.29103, "decode.acc_seg": 88.61129, "loss": 0.29103, "time": 0.26297} -{"mode": "train", "epoch": 2, "iter": 32850, "lr": 4e-05, "memory": 20279, "data_time": 0.01948, "decode.loss_ce": 0.30948, "decode.acc_seg": 87.98062, "loss": 0.30948, "time": 0.26392} -{"mode": "train", "epoch": 2, "iter": 32900, "lr": 4e-05, "memory": 20279, "data_time": 0.01963, "decode.loss_ce": 0.29529, "decode.acc_seg": 88.4039, "loss": 0.29529, "time": 0.2721} -{"mode": "train", "epoch": 2, "iter": 32950, "lr": 4e-05, "memory": 20279, "data_time": 0.02079, "decode.loss_ce": 0.3013, "decode.acc_seg": 88.43287, "loss": 0.3013, "time": 0.26709} -{"mode": "train", "epoch": 2, "iter": 33000, "lr": 4e-05, "memory": 20279, "data_time": 0.02038, "decode.loss_ce": 0.30466, "decode.acc_seg": 88.08106, "loss": 0.30466, "time": 0.26384} -{"mode": "train", "epoch": 2, "iter": 33050, "lr": 4e-05, "memory": 20279, "data_time": 0.01955, "decode.loss_ce": 0.30194, "decode.acc_seg": 88.29715, "loss": 0.30194, "time": 0.26569} -{"mode": "train", "epoch": 2, "iter": 33100, "lr": 4e-05, "memory": 20279, "data_time": 0.0201, "decode.loss_ce": 0.30611, "decode.acc_seg": 88.26119, "loss": 0.30611, "time": 0.26877} -{"mode": "train", "epoch": 2, "iter": 33150, "lr": 4e-05, "memory": 20279, "data_time": 0.01999, "decode.loss_ce": 0.30351, "decode.acc_seg": 88.14842, "loss": 0.30351, "time": 0.26428} -{"mode": "train", "epoch": 2, "iter": 33200, "lr": 4e-05, "memory": 20279, "data_time": 0.02099, "decode.loss_ce": 0.29802, "decode.acc_seg": 88.44077, "loss": 0.29802, "time": 0.27032} -{"mode": "train", "epoch": 2, "iter": 33250, "lr": 4e-05, "memory": 20279, "data_time": 0.01931, "decode.loss_ce": 0.30132, "decode.acc_seg": 88.36491, "loss": 0.30132, "time": 0.27199} -{"mode": "train", "epoch": 2, "iter": 33300, "lr": 4e-05, "memory": 20279, "data_time": 0.02012, "decode.loss_ce": 0.28712, "decode.acc_seg": 88.73373, "loss": 0.28712, "time": 0.27094} -{"mode": "train", "epoch": 2, "iter": 33350, "lr": 4e-05, "memory": 20279, "data_time": 0.02139, 
"decode.loss_ce": 0.29522, "decode.acc_seg": 88.48258, "loss": 0.29522, "time": 0.26809} -{"mode": "train", "epoch": 2, "iter": 33400, "lr": 4e-05, "memory": 20279, "data_time": 0.01947, "decode.loss_ce": 0.30533, "decode.acc_seg": 88.18339, "loss": 0.30533, "time": 0.26355} -{"mode": "train", "epoch": 2, "iter": 33450, "lr": 4e-05, "memory": 20279, "data_time": 0.02046, "decode.loss_ce": 0.30036, "decode.acc_seg": 88.3046, "loss": 0.30036, "time": 0.26517} -{"mode": "train", "epoch": 2, "iter": 33500, "lr": 4e-05, "memory": 20279, "data_time": 0.021, "decode.loss_ce": 0.29942, "decode.acc_seg": 88.10064, "loss": 0.29942, "time": 0.26381} -{"mode": "train", "epoch": 2, "iter": 33550, "lr": 4e-05, "memory": 20279, "data_time": 0.01945, "decode.loss_ce": 0.29699, "decode.acc_seg": 88.371, "loss": 0.29699, "time": 0.26839} -{"mode": "train", "epoch": 2, "iter": 33600, "lr": 4e-05, "memory": 20279, "data_time": 0.01959, "decode.loss_ce": 0.30952, "decode.acc_seg": 88.15327, "loss": 0.30952, "time": 0.26502} -{"mode": "train", "epoch": 2, "iter": 33650, "lr": 4e-05, "memory": 20279, "data_time": 0.01912, "decode.loss_ce": 0.29757, "decode.acc_seg": 88.30627, "loss": 0.29757, "time": 0.26623} -{"mode": "train", "epoch": 2, "iter": 33700, "lr": 4e-05, "memory": 20279, "data_time": 0.02021, "decode.loss_ce": 0.29301, "decode.acc_seg": 88.69758, "loss": 0.29301, "time": 0.27511} -{"mode": "train", "epoch": 2, "iter": 33750, "lr": 4e-05, "memory": 20279, "data_time": 0.02112, "decode.loss_ce": 0.30604, "decode.acc_seg": 88.07678, "loss": 0.30604, "time": 0.26536} -{"mode": "train", "epoch": 2, "iter": 33800, "lr": 4e-05, "memory": 20279, "data_time": 0.02043, "decode.loss_ce": 0.28785, "decode.acc_seg": 88.74889, "loss": 0.28785, "time": 0.26964} -{"mode": "train", "epoch": 2, "iter": 33850, "lr": 4e-05, "memory": 20279, "data_time": 0.01948, "decode.loss_ce": 0.29415, "decode.acc_seg": 88.46351, "loss": 0.29415, "time": 0.2652} -{"mode": "train", "epoch": 2, "iter": 33900, "lr": 4e-05, "memory": 20279, "data_time": 0.0193, "decode.loss_ce": 0.30083, "decode.acc_seg": 88.26434, "loss": 0.30083, "time": 0.26685} -{"mode": "train", "epoch": 2, "iter": 33950, "lr": 4e-05, "memory": 20279, "data_time": 0.01952, "decode.loss_ce": 0.29131, "decode.acc_seg": 88.51026, "loss": 0.29131, "time": 0.26777} -{"mode": "train", "epoch": 2, "iter": 34000, "lr": 4e-05, "memory": 20279, "data_time": 0.01959, "decode.loss_ce": 0.28963, "decode.acc_seg": 88.78199, "loss": 0.28963, "time": 0.26241} -{"mode": "train", "epoch": 2, "iter": 34050, "lr": 4e-05, "memory": 20279, "data_time": 0.02025, "decode.loss_ce": 0.29121, "decode.acc_seg": 88.66416, "loss": 0.29121, "time": 0.26666} -{"mode": "train", "epoch": 2, "iter": 34100, "lr": 4e-05, "memory": 20279, "data_time": 0.01752, "decode.loss_ce": 0.29197, "decode.acc_seg": 88.45156, "loss": 0.29197, "time": 0.26459} -{"mode": "train", "epoch": 2, "iter": 34150, "lr": 4e-05, "memory": 20279, "data_time": 0.02, "decode.loss_ce": 0.29432, "decode.acc_seg": 88.59986, "loss": 0.29432, "time": 0.266} -{"mode": "train", "epoch": 2, "iter": 34200, "lr": 4e-05, "memory": 20279, "data_time": 0.0186, "decode.loss_ce": 0.29957, "decode.acc_seg": 88.32061, "loss": 0.29957, "time": 0.26672} -{"mode": "train", "epoch": 2, "iter": 34250, "lr": 4e-05, "memory": 20279, "data_time": 0.01917, "decode.loss_ce": 0.29199, "decode.acc_seg": 88.58727, "loss": 0.29199, "time": 0.26692} -{"mode": "train", "epoch": 2, "iter": 34300, "lr": 4e-05, "memory": 20279, "data_time": 0.01862, 
"decode.loss_ce": 0.28758, "decode.acc_seg": 88.81116, "loss": 0.28758, "time": 0.2682} -{"mode": "train", "epoch": 2, "iter": 34350, "lr": 4e-05, "memory": 20279, "data_time": 0.01915, "decode.loss_ce": 0.29174, "decode.acc_seg": 88.63321, "loss": 0.29174, "time": 0.27202} -{"mode": "train", "epoch": 2, "iter": 34400, "lr": 3e-05, "memory": 20279, "data_time": 0.01985, "decode.loss_ce": 0.29232, "decode.acc_seg": 88.6902, "loss": 0.29232, "time": 0.26182} -{"mode": "train", "epoch": 2, "iter": 34450, "lr": 3e-05, "memory": 20279, "data_time": 0.0201, "decode.loss_ce": 0.28213, "decode.acc_seg": 88.91469, "loss": 0.28213, "time": 0.26261} -{"mode": "train", "epoch": 2, "iter": 34500, "lr": 3e-05, "memory": 20279, "data_time": 0.02049, "decode.loss_ce": 0.29225, "decode.acc_seg": 88.40171, "loss": 0.29225, "time": 0.26339} -{"mode": "train", "epoch": 2, "iter": 34550, "lr": 3e-05, "memory": 20279, "data_time": 0.01881, "decode.loss_ce": 0.28377, "decode.acc_seg": 88.57651, "loss": 0.28377, "time": 0.26824} -{"mode": "train", "epoch": 2, "iter": 34600, "lr": 3e-05, "memory": 20279, "data_time": 0.01955, "decode.loss_ce": 0.29799, "decode.acc_seg": 88.46429, "loss": 0.29799, "time": 0.26986} -{"mode": "train", "epoch": 2, "iter": 34650, "lr": 3e-05, "memory": 20279, "data_time": 0.01937, "decode.loss_ce": 0.29318, "decode.acc_seg": 88.4478, "loss": 0.29318, "time": 0.26947} -{"mode": "train", "epoch": 2, "iter": 34700, "lr": 3e-05, "memory": 20279, "data_time": 0.01816, "decode.loss_ce": 0.28423, "decode.acc_seg": 88.79729, "loss": 0.28423, "time": 0.27672} -{"mode": "train", "epoch": 2, "iter": 34750, "lr": 3e-05, "memory": 20279, "data_time": 0.0196, "decode.loss_ce": 0.28704, "decode.acc_seg": 88.7969, "loss": 0.28704, "time": 0.26793} -{"mode": "train", "epoch": 2, "iter": 34800, "lr": 3e-05, "memory": 20279, "data_time": 0.01975, "decode.loss_ce": 0.29299, "decode.acc_seg": 88.62407, "loss": 0.29299, "time": 0.26693} -{"mode": "train", "epoch": 2, "iter": 34850, "lr": 3e-05, "memory": 20279, "data_time": 0.01921, "decode.loss_ce": 0.29694, "decode.acc_seg": 88.41684, "loss": 0.29694, "time": 0.26885} -{"mode": "train", "epoch": 2, "iter": 34900, "lr": 3e-05, "memory": 20279, "data_time": 0.02021, "decode.loss_ce": 0.29237, "decode.acc_seg": 88.46308, "loss": 0.29237, "time": 0.26579} -{"mode": "train", "epoch": 2, "iter": 34950, "lr": 3e-05, "memory": 20279, "data_time": 0.01841, "decode.loss_ce": 0.295, "decode.acc_seg": 88.39513, "loss": 0.295, "time": 0.26592} -{"mode": "train", "epoch": 2, "iter": 35000, "lr": 3e-05, "memory": 20279, "data_time": 0.01971, "decode.loss_ce": 0.30001, "decode.acc_seg": 88.15819, "loss": 0.30001, "time": 0.26379} -{"mode": "train", "epoch": 2, "iter": 35050, "lr": 3e-05, "memory": 20279, "data_time": 0.0206, "decode.loss_ce": 0.28919, "decode.acc_seg": 88.67839, "loss": 0.28919, "time": 0.26196} -{"mode": "train", "epoch": 2, "iter": 35100, "lr": 3e-05, "memory": 20279, "data_time": 0.01926, "decode.loss_ce": 0.27997, "decode.acc_seg": 88.95088, "loss": 0.27997, "time": 0.26737} -{"mode": "train", "epoch": 2, "iter": 35150, "lr": 3e-05, "memory": 20279, "data_time": 0.01927, "decode.loss_ce": 0.28149, "decode.acc_seg": 89.02456, "loss": 0.28149, "time": 0.26467} -{"mode": "train", "epoch": 2, "iter": 35200, "lr": 3e-05, "memory": 20279, "data_time": 0.02002, "decode.loss_ce": 0.30031, "decode.acc_seg": 88.31108, "loss": 0.30031, "time": 0.26722} -{"mode": "train", "epoch": 2, "iter": 35250, "lr": 3e-05, "memory": 20279, "data_time": 0.01889, 
"decode.loss_ce": 0.29561, "decode.acc_seg": 88.68621, "loss": 0.29561, "time": 0.27217} -{"mode": "train", "epoch": 2, "iter": 35300, "lr": 3e-05, "memory": 20279, "data_time": 0.01922, "decode.loss_ce": 0.29118, "decode.acc_seg": 88.60628, "loss": 0.29118, "time": 0.27063} -{"mode": "train", "epoch": 2, "iter": 35350, "lr": 3e-05, "memory": 20279, "data_time": 0.0185, "decode.loss_ce": 0.2963, "decode.acc_seg": 88.43731, "loss": 0.2963, "time": 0.26659} -{"mode": "train", "epoch": 2, "iter": 35400, "lr": 3e-05, "memory": 20279, "data_time": 0.022, "decode.loss_ce": 0.28868, "decode.acc_seg": 88.54184, "loss": 0.28868, "time": 0.26499} -{"mode": "train", "epoch": 2, "iter": 35450, "lr": 3e-05, "memory": 20279, "data_time": 0.02121, "decode.loss_ce": 0.30114, "decode.acc_seg": 88.3176, "loss": 0.30114, "time": 0.26335} -{"mode": "train", "epoch": 2, "iter": 35500, "lr": 3e-05, "memory": 20279, "data_time": 0.02087, "decode.loss_ce": 0.3021, "decode.acc_seg": 88.24001, "loss": 0.3021, "time": 0.2655} -{"mode": "train", "epoch": 2, "iter": 35550, "lr": 3e-05, "memory": 20279, "data_time": 0.01991, "decode.loss_ce": 0.29269, "decode.acc_seg": 88.48029, "loss": 0.29269, "time": 0.26773} -{"mode": "train", "epoch": 2, "iter": 35600, "lr": 3e-05, "memory": 20279, "data_time": 0.02015, "decode.loss_ce": 0.27958, "decode.acc_seg": 88.84074, "loss": 0.27958, "time": 0.26709} -{"mode": "train", "epoch": 2, "iter": 35650, "lr": 3e-05, "memory": 20279, "data_time": 0.01898, "decode.loss_ce": 0.28327, "decode.acc_seg": 88.94448, "loss": 0.28327, "time": 0.27332} -{"mode": "train", "epoch": 2, "iter": 35700, "lr": 3e-05, "memory": 20279, "data_time": 0.02015, "decode.loss_ce": 0.29413, "decode.acc_seg": 88.38527, "loss": 0.29413, "time": 0.26875} -{"mode": "train", "epoch": 2, "iter": 35750, "lr": 3e-05, "memory": 20279, "data_time": 0.01907, "decode.loss_ce": 0.30518, "decode.acc_seg": 87.98767, "loss": 0.30518, "time": 0.26793} -{"mode": "train", "epoch": 2, "iter": 35800, "lr": 3e-05, "memory": 20279, "data_time": 0.01861, "decode.loss_ce": 0.28703, "decode.acc_seg": 88.89022, "loss": 0.28703, "time": 0.26214} -{"mode": "train", "epoch": 2, "iter": 35850, "lr": 3e-05, "memory": 20279, "data_time": 0.01871, "decode.loss_ce": 0.29121, "decode.acc_seg": 88.3564, "loss": 0.29121, "time": 0.27002} -{"mode": "train", "epoch": 2, "iter": 35900, "lr": 3e-05, "memory": 20279, "data_time": 0.01883, "decode.loss_ce": 0.29046, "decode.acc_seg": 88.53334, "loss": 0.29046, "time": 0.26673} -{"mode": "train", "epoch": 2, "iter": 35950, "lr": 3e-05, "memory": 20279, "data_time": 0.01892, "decode.loss_ce": 0.28413, "decode.acc_seg": 88.6277, "loss": 0.28413, "time": 0.26929} -{"mode": "train", "epoch": 2, "iter": 36000, "lr": 3e-05, "memory": 20279, "data_time": 0.01896, "decode.loss_ce": 0.28806, "decode.acc_seg": 88.78426, "loss": 0.28806, "time": 0.28692} -{"mode": "val", "epoch": 2, "iter": 250, "lr": 3e-05, "aAcc": 0.8059, "mIoU": 0.4372, "mAcc": 0.5545, "IoU.wall": 0.7407, "IoU.building": 0.8044, "IoU.sky": 0.9412, "IoU.floor": 0.796, "IoU.tree": 0.7229, "IoU.ceiling": 0.8087, "IoU.road": 0.8088, "IoU.bed ": 0.8625, "IoU.windowpane": 0.5668, "IoU.grass": 0.6433, "IoU.cabinet": 0.5696, "IoU.sidewalk": 0.6316, "IoU.person": 0.7885, "IoU.earth": 0.33, "IoU.door": 0.3961, "IoU.table": 0.5281, "IoU.mountain": 0.5779, "IoU.plant": 0.4799, "IoU.curtain": 0.7077, "IoU.chair": 0.5188, "IoU.car": 0.8265, "IoU.water": 0.4387, "IoU.painting": 0.6789, "IoU.sofa": 0.5818, "IoU.shelf": 0.3856, "IoU.house": 0.4061, "IoU.sea": 
0.4268, "IoU.mirror": 0.5329, "IoU.rug": 0.5418, "IoU.field": 0.3629, "IoU.armchair": 0.3681, "IoU.seat": 0.5615, "IoU.fence": 0.3662, "IoU.desk": 0.4288, "IoU.rock": 0.4231, "IoU.wardrobe": 0.4002, "IoU.lamp": 0.5718, "IoU.bathtub": 0.684, "IoU.railing": 0.2897, "IoU.cushion": 0.5087, "IoU.base": 0.1551, "IoU.box": 0.1917, "IoU.column": 0.4114, "IoU.signboard": 0.3142, "IoU.chest of drawers": 0.3948, "IoU.counter": 0.2405, "IoU.sand": 0.2999, "IoU.sink": 0.618, "IoU.skyscraper": 0.6224, "IoU.fireplace": 0.6504, "IoU.refrigerator": 0.6758, "IoU.grandstand": 0.4633, "IoU.path": 0.231, "IoU.stairs": 0.2404, "IoU.runway": 0.7088, "IoU.case": 0.5122, "IoU.pool table": 0.9241, "IoU.pillow": 0.5133, "IoU.screen door": 0.5713, "IoU.stairway": 0.2455, "IoU.river": 0.1426, "IoU.bridge": 0.4265, "IoU.bookcase": 0.3012, "IoU.blind": 0.4089, "IoU.coffee table": 0.5269, "IoU.toilet": 0.7898, "IoU.flower": 0.3303, "IoU.book": 0.4558, "IoU.hill": 0.0598, "IoU.bench": 0.384, "IoU.countertop": 0.4839, "IoU.stove": 0.6944, "IoU.palm": 0.4651, "IoU.kitchen island": 0.2793, "IoU.computer": 0.5834, "IoU.swivel chair": 0.4047, "IoU.boat": 0.6098, "IoU.bar": 0.2422, "IoU.arcade machine": 0.5821, "IoU.hovel": 0.1431, "IoU.bus": 0.854, "IoU.towel": 0.5488, "IoU.light": 0.4559, "IoU.truck": 0.2445, "IoU.tower": 0.1388, "IoU.chandelier": 0.6088, "IoU.awning": 0.1802, "IoU.streetlight": 0.2144, "IoU.booth": 0.2902, "IoU.television receiver": 0.6326, "IoU.airplane": 0.5391, "IoU.dirt track": 0.0896, "IoU.apparel": 0.2646, "IoU.pole": 0.1567, "IoU.land": 0.0521, "IoU.bannister": 0.0728, "IoU.escalator": 0.2174, "IoU.ottoman": 0.384, "IoU.bottle": 0.2434, "IoU.buffet": 0.4349, "IoU.poster": 0.2322, "IoU.stage": 0.1332, "IoU.van": 0.387, "IoU.ship": 0.538, "IoU.fountain": 0.1406, "IoU.conveyer belt": 0.6295, "IoU.canopy": 0.1803, "IoU.washer": 0.6903, "IoU.plaything": 0.2814, "IoU.swimming pool": 0.3812, "IoU.stool": 0.3163, "IoU.barrel": 0.1813, "IoU.basket": 0.2479, "IoU.waterfall": 0.6453, "IoU.tent": 0.8808, "IoU.bag": 0.0885, "IoU.minibike": 0.6348, "IoU.cradle": 0.749, "IoU.oven": 0.3324, "IoU.ball": 0.4837, "IoU.food": 0.458, "IoU.step": 0.1454, "IoU.tank": 0.2923, "IoU.trade name": 0.2258, "IoU.microwave": 0.6004, "IoU.pot": 0.3627, "IoU.animal": 0.5704, "IoU.bicycle": 0.5054, "IoU.lake": 0.4536, "IoU.dishwasher": 0.6162, "IoU.screen": 0.6209, "IoU.blanket": 0.1256, "IoU.sculpture": 0.4989, "IoU.hood": 0.4753, "IoU.sconce": 0.3121, "IoU.vase": 0.2932, "IoU.traffic light": 0.257, "IoU.tray": 0.0463, "IoU.ashcan": 0.3459, "IoU.fan": 0.5469, "IoU.pier": 0.3089, "IoU.crt screen": 0.0472, "IoU.plate": 0.4147, "IoU.monitor": 0.2624, "IoU.bulletin board": 0.4134, "IoU.shower": 0.0, "IoU.radiator": 0.5172, "IoU.glass": 0.0854, "IoU.clock": 0.3163, "IoU.flag": 0.3734, "Acc.wall": 0.8663, "Acc.building": 0.9259, "Acc.sky": 0.9698, "Acc.floor": 0.8908, "Acc.tree": 0.872, "Acc.ceiling": 0.8983, "Acc.road": 0.8979, "Acc.bed ": 0.9474, "Acc.windowpane": 0.744, "Acc.grass": 0.7508, "Acc.cabinet": 0.6934, "Acc.sidewalk": 0.7714, "Acc.person": 0.9044, "Acc.earth": 0.4969, "Acc.door": 0.521, "Acc.table": 0.7161, "Acc.mountain": 0.7231, "Acc.plant": 0.6124, "Acc.curtain": 0.8442, "Acc.chair": 0.6724, "Acc.car": 0.9113, "Acc.water": 0.5659, "Acc.painting": 0.832, "Acc.sofa": 0.7679, "Acc.shelf": 0.5929, "Acc.house": 0.5283, "Acc.sea": 0.6899, "Acc.mirror": 0.6165, "Acc.rug": 0.6636, "Acc.field": 0.5673, "Acc.armchair": 0.5452, "Acc.seat": 0.7423, "Acc.fence": 0.5037, "Acc.desk": 0.6328, "Acc.rock": 0.633, "Acc.wardrobe": 0.6151, 
"Acc.lamp": 0.7214, "Acc.bathtub": 0.7711, "Acc.railing": 0.4036, "Acc.cushion": 0.6557, "Acc.base": 0.2552, "Acc.box": 0.2566, "Acc.column": 0.5196, "Acc.signboard": 0.4239, "Acc.chest of drawers": 0.5893, "Acc.counter": 0.3378, "Acc.sand": 0.4468, "Acc.sink": 0.7623, "Acc.skyscraper": 0.7879, "Acc.fireplace": 0.8413, "Acc.refrigerator": 0.8291, "Acc.grandstand": 0.6812, "Acc.path": 0.3628, "Acc.stairs": 0.3112, "Acc.runway": 0.9442, "Acc.case": 0.6639, "Acc.pool table": 0.96, "Acc.pillow": 0.5923, "Acc.screen door": 0.7391, "Acc.stairway": 0.3399, "Acc.river": 0.2981, "Acc.bridge": 0.5318, "Acc.bookcase": 0.391, "Acc.blind": 0.4789, "Acc.coffee table": 0.7374, "Acc.toilet": 0.881, "Acc.flower": 0.4904, "Acc.book": 0.666, "Acc.hill": 0.0999, "Acc.bench": 0.4517, "Acc.countertop": 0.6706, "Acc.stove": 0.8134, "Acc.palm": 0.6346, "Acc.kitchen island": 0.5634, "Acc.computer": 0.6791, "Acc.swivel chair": 0.56, "Acc.boat": 0.8328, "Acc.bar": 0.2725, "Acc.arcade machine": 0.6342, "Acc.hovel": 0.2096, "Acc.bus": 0.9316, "Acc.towel": 0.6932, "Acc.light": 0.5131, "Acc.truck": 0.3097, "Acc.tower": 0.2071, "Acc.chandelier": 0.7739, "Acc.awning": 0.2052, "Acc.streetlight": 0.2736, "Acc.booth": 0.3175, "Acc.television receiver": 0.7463, "Acc.airplane": 0.7203, "Acc.dirt track": 0.2523, "Acc.apparel": 0.4046, "Acc.pole": 0.2065, "Acc.land": 0.0833, "Acc.bannister": 0.0939, "Acc.escalator": 0.2545, "Acc.ottoman": 0.564, "Acc.bottle": 0.3242, "Acc.buffet": 0.5063, "Acc.poster": 0.3646, "Acc.stage": 0.1769, "Acc.van": 0.5329, "Acc.ship": 0.6348, "Acc.fountain": 0.1438, "Acc.conveyer belt": 0.7966, "Acc.canopy": 0.2325, "Acc.washer": 0.7116, "Acc.plaything": 0.3916, "Acc.swimming pool": 0.5775, "Acc.stool": 0.4639, "Acc.barrel": 0.8268, "Acc.basket": 0.3253, "Acc.waterfall": 0.7428, "Acc.tent": 0.9825, "Acc.bag": 0.1024, "Acc.minibike": 0.8068, "Acc.cradle": 0.8822, "Acc.oven": 0.5501, "Acc.ball": 0.5715, "Acc.food": 0.5406, "Acc.step": 0.1702, "Acc.tank": 0.3474, "Acc.trade name": 0.284, "Acc.microwave": 0.6768, "Acc.pot": 0.4243, "Acc.animal": 0.5967, "Acc.bicycle": 0.6294, "Acc.lake": 0.5183, "Acc.dishwasher": 0.6896, "Acc.screen": 0.7549, "Acc.blanket": 0.1443, "Acc.sculpture": 0.6865, "Acc.hood": 0.5518, "Acc.sconce": 0.3719, "Acc.vase": 0.4641, "Acc.traffic light": 0.3699, "Acc.tray": 0.0702, "Acc.ashcan": 0.4585, "Acc.fan": 0.6432, "Acc.pier": 0.3804, "Acc.crt screen": 0.1182, "Acc.plate": 0.5162, "Acc.monitor": 0.383, "Acc.bulletin board": 0.4931, "Acc.shower": 0.0, "Acc.radiator": 0.5851, "Acc.glass": 0.0925, "Acc.clock": 0.3673, "Acc.flag": 0.4322} -{"mode": "train", "epoch": 2, "iter": 36050, "lr": 3e-05, "memory": 20279, "data_time": 0.96047, "decode.loss_ce": 0.29716, "decode.acc_seg": 88.48372, "loss": 0.29716, "time": 1.20249} -{"mode": "train", "epoch": 2, "iter": 36100, "lr": 3e-05, "memory": 20279, "data_time": 0.01935, "decode.loss_ce": 0.27329, "decode.acc_seg": 89.24429, "loss": 0.27329, "time": 0.26427} -{"mode": "train", "epoch": 2, "iter": 36150, "lr": 3e-05, "memory": 20279, "data_time": 0.02056, "decode.loss_ce": 0.29312, "decode.acc_seg": 88.42522, "loss": 0.29312, "time": 0.26945} -{"mode": "train", "epoch": 2, "iter": 36200, "lr": 2e-05, "memory": 20279, "data_time": 0.01965, "decode.loss_ce": 0.28623, "decode.acc_seg": 88.707, "loss": 0.28623, "time": 0.26376} -{"mode": "train", "epoch": 2, "iter": 36250, "lr": 2e-05, "memory": 20279, "data_time": 0.02081, "decode.loss_ce": 0.28803, "decode.acc_seg": 88.68176, "loss": 0.28803, "time": 0.26486} -{"mode": "train", "epoch": 2, 
"iter": 36300, "lr": 2e-05, "memory": 20279, "data_time": 0.02033, "decode.loss_ce": 0.28056, "decode.acc_seg": 88.80839, "loss": 0.28056, "time": 0.26885} -{"mode": "train", "epoch": 2, "iter": 36350, "lr": 2e-05, "memory": 20279, "data_time": 0.01949, "decode.loss_ce": 0.2716, "decode.acc_seg": 89.20823, "loss": 0.2716, "time": 0.26477} -{"mode": "train", "epoch": 2, "iter": 36400, "lr": 2e-05, "memory": 20279, "data_time": 0.0196, "decode.loss_ce": 0.28491, "decode.acc_seg": 88.87403, "loss": 0.28491, "time": 0.27009} -{"mode": "train", "epoch": 2, "iter": 36450, "lr": 2e-05, "memory": 20279, "data_time": 0.01881, "decode.loss_ce": 0.28677, "decode.acc_seg": 88.78562, "loss": 0.28677, "time": 0.26921} -{"mode": "train", "epoch": 2, "iter": 36500, "lr": 2e-05, "memory": 20279, "data_time": 0.01963, "decode.loss_ce": 0.28605, "decode.acc_seg": 88.5965, "loss": 0.28605, "time": 0.26398} -{"mode": "train", "epoch": 2, "iter": 36550, "lr": 2e-05, "memory": 20279, "data_time": 0.02142, "decode.loss_ce": 0.27732, "decode.acc_seg": 89.03749, "loss": 0.27732, "time": 0.26759} -{"mode": "train", "epoch": 2, "iter": 36600, "lr": 2e-05, "memory": 20279, "data_time": 0.02075, "decode.loss_ce": 0.27664, "decode.acc_seg": 89.12545, "loss": 0.27664, "time": 0.2674} -{"mode": "train", "epoch": 2, "iter": 36650, "lr": 2e-05, "memory": 20279, "data_time": 0.02029, "decode.loss_ce": 0.28712, "decode.acc_seg": 88.6289, "loss": 0.28712, "time": 0.26765} -{"mode": "train", "epoch": 2, "iter": 36700, "lr": 2e-05, "memory": 20279, "data_time": 0.01995, "decode.loss_ce": 0.28, "decode.acc_seg": 88.76725, "loss": 0.28, "time": 0.26698} -{"mode": "train", "epoch": 2, "iter": 36750, "lr": 2e-05, "memory": 20279, "data_time": 0.01936, "decode.loss_ce": 0.28285, "decode.acc_seg": 88.95799, "loss": 0.28285, "time": 0.26778} -{"mode": "train", "epoch": 2, "iter": 36800, "lr": 2e-05, "memory": 20279, "data_time": 0.01935, "decode.loss_ce": 0.28493, "decode.acc_seg": 88.84394, "loss": 0.28493, "time": 0.2687} -{"mode": "train", "epoch": 2, "iter": 36850, "lr": 2e-05, "memory": 20279, "data_time": 0.02094, "decode.loss_ce": 0.27921, "decode.acc_seg": 88.84774, "loss": 0.27921, "time": 0.26125} -{"mode": "train", "epoch": 2, "iter": 36900, "lr": 2e-05, "memory": 20279, "data_time": 0.01951, "decode.loss_ce": 0.28383, "decode.acc_seg": 88.8524, "loss": 0.28383, "time": 0.26403} -{"mode": "train", "epoch": 2, "iter": 36950, "lr": 2e-05, "memory": 20279, "data_time": 0.01997, "decode.loss_ce": 0.28641, "decode.acc_seg": 88.65796, "loss": 0.28641, "time": 0.26909} -{"mode": "train", "epoch": 2, "iter": 37000, "lr": 2e-05, "memory": 20279, "data_time": 0.02086, "decode.loss_ce": 0.28141, "decode.acc_seg": 88.89013, "loss": 0.28141, "time": 0.26418} -{"mode": "train", "epoch": 2, "iter": 37050, "lr": 2e-05, "memory": 20279, "data_time": 0.01958, "decode.loss_ce": 0.28334, "decode.acc_seg": 88.91062, "loss": 0.28334, "time": 0.2615} -{"mode": "train", "epoch": 2, "iter": 37100, "lr": 2e-05, "memory": 20279, "data_time": 0.01958, "decode.loss_ce": 0.28962, "decode.acc_seg": 88.95145, "loss": 0.28962, "time": 0.2693} -{"mode": "train", "epoch": 2, "iter": 37150, "lr": 2e-05, "memory": 20279, "data_time": 0.01955, "decode.loss_ce": 0.29788, "decode.acc_seg": 88.28615, "loss": 0.29788, "time": 0.26654} -{"mode": "train", "epoch": 2, "iter": 37200, "lr": 2e-05, "memory": 20279, "data_time": 0.01889, "decode.loss_ce": 0.28952, "decode.acc_seg": 88.70069, "loss": 0.28952, "time": 0.26931} -{"mode": "train", "epoch": 2, "iter": 37250, 
"lr": 2e-05, "memory": 20279, "data_time": 0.02006, "decode.loss_ce": 0.28655, "decode.acc_seg": 88.74584, "loss": 0.28655, "time": 0.26207} -{"mode": "train", "epoch": 2, "iter": 37300, "lr": 2e-05, "memory": 20279, "data_time": 0.02001, "decode.loss_ce": 0.27785, "decode.acc_seg": 89.0252, "loss": 0.27785, "time": 0.27529} -{"mode": "train", "epoch": 2, "iter": 37350, "lr": 2e-05, "memory": 20279, "data_time": 0.02042, "decode.loss_ce": 0.27558, "decode.acc_seg": 89.04409, "loss": 0.27558, "time": 0.262} -{"mode": "train", "epoch": 2, "iter": 37400, "lr": 2e-05, "memory": 20279, "data_time": 0.01758, "decode.loss_ce": 0.28576, "decode.acc_seg": 88.65197, "loss": 0.28576, "time": 0.26617} -{"mode": "train", "epoch": 2, "iter": 37450, "lr": 2e-05, "memory": 20279, "data_time": 0.01901, "decode.loss_ce": 0.2734, "decode.acc_seg": 89.03326, "loss": 0.2734, "time": 0.26624} -{"mode": "train", "epoch": 2, "iter": 37500, "lr": 2e-05, "memory": 20279, "data_time": 0.01873, "decode.loss_ce": 0.27897, "decode.acc_seg": 88.84996, "loss": 0.27897, "time": 0.26479} -{"mode": "train", "epoch": 2, "iter": 37550, "lr": 2e-05, "memory": 20279, "data_time": 0.01937, "decode.loss_ce": 0.28191, "decode.acc_seg": 88.79375, "loss": 0.28191, "time": 0.26463} -{"mode": "train", "epoch": 2, "iter": 37600, "lr": 2e-05, "memory": 20279, "data_time": 0.01932, "decode.loss_ce": 0.28039, "decode.acc_seg": 88.95736, "loss": 0.28039, "time": 0.2704} -{"mode": "train", "epoch": 2, "iter": 37650, "lr": 2e-05, "memory": 20279, "data_time": 0.01874, "decode.loss_ce": 0.2858, "decode.acc_seg": 88.68072, "loss": 0.2858, "time": 0.26587} -{"mode": "train", "epoch": 2, "iter": 37700, "lr": 2e-05, "memory": 20279, "data_time": 0.02069, "decode.loss_ce": 0.29686, "decode.acc_seg": 88.37203, "loss": 0.29686, "time": 0.26108} -{"mode": "train", "epoch": 2, "iter": 37750, "lr": 2e-05, "memory": 20279, "data_time": 0.02068, "decode.loss_ce": 0.27473, "decode.acc_seg": 89.05919, "loss": 0.27473, "time": 0.27335} -{"mode": "train", "epoch": 2, "iter": 37800, "lr": 2e-05, "memory": 20279, "data_time": 0.01971, "decode.loss_ce": 0.27626, "decode.acc_seg": 88.94493, "loss": 0.27626, "time": 0.26421} -{"mode": "train", "epoch": 2, "iter": 37850, "lr": 2e-05, "memory": 20279, "data_time": 0.02033, "decode.loss_ce": 0.27228, "decode.acc_seg": 89.28132, "loss": 0.27228, "time": 0.26248} -{"mode": "train", "epoch": 2, "iter": 37900, "lr": 2e-05, "memory": 20279, "data_time": 0.01903, "decode.loss_ce": 0.27182, "decode.acc_seg": 89.33414, "loss": 0.27182, "time": 0.2648} -{"mode": "train", "epoch": 2, "iter": 37950, "lr": 1e-05, "memory": 20279, "data_time": 0.01926, "decode.loss_ce": 0.28297, "decode.acc_seg": 88.78782, "loss": 0.28297, "time": 0.2733} -{"mode": "train", "epoch": 2, "iter": 38000, "lr": 1e-05, "memory": 20279, "data_time": 0.02078, "decode.loss_ce": 0.26637, "decode.acc_seg": 89.4865, "loss": 0.26637, "time": 0.26669} -{"mode": "train", "epoch": 2, "iter": 38050, "lr": 1e-05, "memory": 20279, "data_time": 0.02112, "decode.loss_ce": 0.27419, "decode.acc_seg": 89.17631, "loss": 0.27419, "time": 0.26359} -{"mode": "train", "epoch": 2, "iter": 38100, "lr": 1e-05, "memory": 20279, "data_time": 0.01985, "decode.loss_ce": 0.27614, "decode.acc_seg": 89.22592, "loss": 0.27614, "time": 0.26651} -{"mode": "train", "epoch": 2, "iter": 38150, "lr": 1e-05, "memory": 20279, "data_time": 0.02084, "decode.loss_ce": 0.28715, "decode.acc_seg": 88.82249, "loss": 0.28715, "time": 0.26777} -{"mode": "train", "epoch": 2, "iter": 38200, "lr": 1e-05, 
"memory": 20279, "data_time": 0.02092, "decode.loss_ce": 0.27559, "decode.acc_seg": 89.21658, "loss": 0.27559, "time": 0.26878} -{"mode": "train", "epoch": 2, "iter": 38250, "lr": 1e-05, "memory": 20279, "data_time": 0.01998, "decode.loss_ce": 0.26309, "decode.acc_seg": 89.66555, "loss": 0.26309, "time": 0.26408} -{"mode": "train", "epoch": 2, "iter": 38300, "lr": 1e-05, "memory": 20279, "data_time": 0.01887, "decode.loss_ce": 0.27759, "decode.acc_seg": 89.07873, "loss": 0.27759, "time": 0.27192} -{"mode": "train", "epoch": 2, "iter": 38350, "lr": 1e-05, "memory": 20279, "data_time": 0.02004, "decode.loss_ce": 0.28151, "decode.acc_seg": 89.01637, "loss": 0.28151, "time": 0.26799} -{"mode": "train", "epoch": 2, "iter": 38400, "lr": 1e-05, "memory": 20279, "data_time": 0.0193, "decode.loss_ce": 0.27531, "decode.acc_seg": 89.141, "loss": 0.27531, "time": 0.26804} -{"mode": "train", "epoch": 2, "iter": 38450, "lr": 1e-05, "memory": 20279, "data_time": 0.01927, "decode.loss_ce": 0.28673, "decode.acc_seg": 88.78983, "loss": 0.28673, "time": 0.27003} -{"mode": "train", "epoch": 2, "iter": 38500, "lr": 1e-05, "memory": 20279, "data_time": 0.02123, "decode.loss_ce": 0.2789, "decode.acc_seg": 89.09006, "loss": 0.2789, "time": 0.26433} -{"mode": "train", "epoch": 2, "iter": 38550, "lr": 1e-05, "memory": 20279, "data_time": 0.02101, "decode.loss_ce": 0.28227, "decode.acc_seg": 88.9255, "loss": 0.28227, "time": 0.27068} -{"mode": "train", "epoch": 2, "iter": 38600, "lr": 1e-05, "memory": 20279, "data_time": 0.021, "decode.loss_ce": 0.27661, "decode.acc_seg": 89.08842, "loss": 0.27661, "time": 0.2692} -{"mode": "train", "epoch": 2, "iter": 38650, "lr": 1e-05, "memory": 20279, "data_time": 0.01988, "decode.loss_ce": 0.27516, "decode.acc_seg": 89.1473, "loss": 0.27516, "time": 0.26949} -{"mode": "train", "epoch": 2, "iter": 38700, "lr": 1e-05, "memory": 20279, "data_time": 0.02157, "decode.loss_ce": 0.28504, "decode.acc_seg": 88.81862, "loss": 0.28504, "time": 0.26709} -{"mode": "train", "epoch": 2, "iter": 38750, "lr": 1e-05, "memory": 20279, "data_time": 0.02028, "decode.loss_ce": 0.2806, "decode.acc_seg": 88.96738, "loss": 0.2806, "time": 0.26657} -{"mode": "train", "epoch": 2, "iter": 38800, "lr": 1e-05, "memory": 20279, "data_time": 0.01923, "decode.loss_ce": 0.27858, "decode.acc_seg": 89.02162, "loss": 0.27858, "time": 0.26911} -{"mode": "train", "epoch": 2, "iter": 38850, "lr": 1e-05, "memory": 20279, "data_time": 0.02014, "decode.loss_ce": 0.27184, "decode.acc_seg": 89.23424, "loss": 0.27184, "time": 0.26719} -{"mode": "train", "epoch": 2, "iter": 38900, "lr": 1e-05, "memory": 20279, "data_time": 0.0194, "decode.loss_ce": 0.28183, "decode.acc_seg": 88.88442, "loss": 0.28183, "time": 0.26471} -{"mode": "train", "epoch": 2, "iter": 38950, "lr": 1e-05, "memory": 20279, "data_time": 0.01892, "decode.loss_ce": 0.2861, "decode.acc_seg": 88.73525, "loss": 0.2861, "time": 0.2717} -{"mode": "train", "epoch": 2, "iter": 39000, "lr": 1e-05, "memory": 20279, "data_time": 0.02061, "decode.loss_ce": 0.28374, "decode.acc_seg": 88.87237, "loss": 0.28374, "time": 0.26277} -{"mode": "train", "epoch": 2, "iter": 39050, "lr": 1e-05, "memory": 20279, "data_time": 0.01987, "decode.loss_ce": 0.27002, "decode.acc_seg": 89.29, "loss": 0.27002, "time": 0.26103} -{"mode": "train", "epoch": 2, "iter": 39100, "lr": 1e-05, "memory": 20279, "data_time": 0.01922, "decode.loss_ce": 0.28121, "decode.acc_seg": 88.97259, "loss": 0.28121, "time": 0.26778} -{"mode": "train", "epoch": 2, "iter": 39150, "lr": 1e-05, "memory": 20279, 
"data_time": 0.01896, "decode.loss_ce": 0.27353, "decode.acc_seg": 89.04364, "loss": 0.27353, "time": 0.26989} -{"mode": "train", "epoch": 2, "iter": 39200, "lr": 1e-05, "memory": 20279, "data_time": 0.0194, "decode.loss_ce": 0.28343, "decode.acc_seg": 88.92322, "loss": 0.28343, "time": 0.26408} -{"mode": "train", "epoch": 2, "iter": 39250, "lr": 1e-05, "memory": 20279, "data_time": 0.01967, "decode.loss_ce": 0.28508, "decode.acc_seg": 89.01137, "loss": 0.28508, "time": 0.26995} -{"mode": "train", "epoch": 2, "iter": 39300, "lr": 1e-05, "memory": 20279, "data_time": 0.01888, "decode.loss_ce": 0.28892, "decode.acc_seg": 88.59949, "loss": 0.28892, "time": 0.2642} -{"mode": "train", "epoch": 2, "iter": 39350, "lr": 1e-05, "memory": 20279, "data_time": 0.01963, "decode.loss_ce": 0.27942, "decode.acc_seg": 89.12899, "loss": 0.27942, "time": 0.26597} -{"mode": "train", "epoch": 2, "iter": 39400, "lr": 1e-05, "memory": 20279, "data_time": 0.02024, "decode.loss_ce": 0.28798, "decode.acc_seg": 88.67979, "loss": 0.28798, "time": 0.26753} -{"mode": "train", "epoch": 2, "iter": 39450, "lr": 1e-05, "memory": 20279, "data_time": 0.01951, "decode.loss_ce": 0.27794, "decode.acc_seg": 89.1922, "loss": 0.27794, "time": 0.26778} -{"mode": "train", "epoch": 2, "iter": 39500, "lr": 0.0, "memory": 20279, "data_time": 0.02057, "decode.loss_ce": 0.27932, "decode.acc_seg": 89.01626, "loss": 0.27932, "time": 0.26312} -{"mode": "train", "epoch": 2, "iter": 39550, "lr": 0.0, "memory": 20279, "data_time": 0.0249, "decode.loss_ce": 0.27558, "decode.acc_seg": 89.22619, "loss": 0.27558, "time": 0.27062} -{"mode": "train", "epoch": 2, "iter": 39600, "lr": 0.0, "memory": 20279, "data_time": 0.02309, "decode.loss_ce": 0.26526, "decode.acc_seg": 89.65793, "loss": 0.26526, "time": 0.2652} -{"mode": "train", "epoch": 2, "iter": 39650, "lr": 0.0, "memory": 20279, "data_time": 0.02055, "decode.loss_ce": 0.26829, "decode.acc_seg": 89.49862, "loss": 0.26829, "time": 0.26907} -{"mode": "train", "epoch": 2, "iter": 39700, "lr": 0.0, "memory": 20279, "data_time": 0.02042, "decode.loss_ce": 0.27022, "decode.acc_seg": 89.26284, "loss": 0.27022, "time": 0.26452} -{"mode": "train", "epoch": 2, "iter": 39750, "lr": 0.0, "memory": 20279, "data_time": 0.01953, "decode.loss_ce": 0.27511, "decode.acc_seg": 89.14335, "loss": 0.27511, "time": 0.27279} -{"mode": "train", "epoch": 2, "iter": 39800, "lr": 0.0, "memory": 20279, "data_time": 0.02041, "decode.loss_ce": 0.27788, "decode.acc_seg": 89.08547, "loss": 0.27788, "time": 0.26969} -{"mode": "train", "epoch": 2, "iter": 39850, "lr": 0.0, "memory": 20279, "data_time": 0.01965, "decode.loss_ce": 0.29558, "decode.acc_seg": 88.54126, "loss": 0.29558, "time": 0.26397} -{"mode": "train", "epoch": 2, "iter": 39900, "lr": 0.0, "memory": 20279, "data_time": 0.01877, "decode.loss_ce": 0.27782, "decode.acc_seg": 89.08317, "loss": 0.27782, "time": 0.26974} -{"mode": "train", "epoch": 2, "iter": 39950, "lr": 0.0, "memory": 20279, "data_time": 0.02175, "decode.loss_ce": 0.26913, "decode.acc_seg": 89.30066, "loss": 0.26913, "time": 0.26837} -{"mode": "train", "epoch": 2, "iter": 40000, "lr": 0.0, "memory": 20279, "data_time": 0.01766, "decode.loss_ce": 0.27011, "decode.acc_seg": 89.30692, "loss": 0.27011, "time": 0.28045} -{"mode": "val", "epoch": 2, "iter": 250, "lr": 0.0, "aAcc": 0.8083, "mIoU": 0.436, "mAcc": 0.5507, "IoU.wall": 0.7419, "IoU.building": 0.8099, "IoU.sky": 0.9414, "IoU.floor": 0.7961, "IoU.tree": 0.7207, "IoU.ceiling": 0.8074, "IoU.road": 0.8138, "IoU.bed ": 0.8658, "IoU.windowpane": 
0.5696, "IoU.grass": 0.6655, "IoU.cabinet": 0.5715, "IoU.sidewalk": 0.6285, "IoU.person": 0.7889, "IoU.earth": 0.3291, "IoU.door": 0.4022, "IoU.table": 0.5199, "IoU.mountain": 0.5915, "IoU.plant": 0.4874, "IoU.curtain": 0.7068, "IoU.chair": 0.5205, "IoU.car": 0.8245, "IoU.water": 0.5184, "IoU.painting": 0.6679, "IoU.sofa": 0.5835, "IoU.shelf": 0.3767, "IoU.house": 0.4625, "IoU.sea": 0.4816, "IoU.mirror": 0.5465, "IoU.rug": 0.5211, "IoU.field": 0.3896, "IoU.armchair": 0.3584, "IoU.seat": 0.5625, "IoU.fence": 0.3769, "IoU.desk": 0.4384, "IoU.rock": 0.4062, "IoU.wardrobe": 0.3896, "IoU.lamp": 0.5779, "IoU.bathtub": 0.6875, "IoU.railing": 0.275, "IoU.cushion": 0.5123, "IoU.base": 0.1568, "IoU.box": 0.1943, "IoU.column": 0.4098, "IoU.signboard": 0.3145, "IoU.chest of drawers": 0.3904, "IoU.counter": 0.2643, "IoU.sand": 0.2956, "IoU.sink": 0.6235, "IoU.skyscraper": 0.5938, "IoU.fireplace": 0.6608, "IoU.refrigerator": 0.6708, "IoU.grandstand": 0.4598, "IoU.path": 0.2151, "IoU.stairs": 0.2435, "IoU.runway": 0.7037, "IoU.case": 0.4801, "IoU.pool table": 0.9213, "IoU.pillow": 0.5163, "IoU.screen door": 0.5955, "IoU.stairway": 0.2473, "IoU.river": 0.1508, "IoU.bridge": 0.3862, "IoU.bookcase": 0.3059, "IoU.blind": 0.4201, "IoU.coffee table": 0.5116, "IoU.toilet": 0.7999, "IoU.flower": 0.3565, "IoU.book": 0.4478, "IoU.hill": 0.066, "IoU.bench": 0.3914, "IoU.countertop": 0.4785, "IoU.stove": 0.6729, "IoU.palm": 0.4647, "IoU.kitchen island": 0.277, "IoU.computer": 0.5922, "IoU.swivel chair": 0.4239, "IoU.boat": 0.6239, "IoU.bar": 0.2981, "IoU.arcade machine": 0.5948, "IoU.hovel": 0.1924, "IoU.bus": 0.8442, "IoU.towel": 0.5683, "IoU.light": 0.4613, "IoU.truck": 0.2538, "IoU.tower": 0.1288, "IoU.chandelier": 0.6082, "IoU.awning": 0.1887, "IoU.streetlight": 0.2196, "IoU.booth": 0.2955, "IoU.television receiver": 0.6351, "IoU.airplane": 0.5302, "IoU.dirt track": 0.0492, "IoU.apparel": 0.2265, "IoU.pole": 0.1525, "IoU.land": 0.041, "IoU.bannister": 0.0732, "IoU.escalator": 0.2312, "IoU.ottoman": 0.3754, "IoU.bottle": 0.2498, "IoU.buffet": 0.4462, "IoU.poster": 0.2221, "IoU.stage": 0.1282, "IoU.van": 0.3662, "IoU.ship": 0.4252, "IoU.fountain": 0.0527, "IoU.conveyer belt": 0.6349, "IoU.canopy": 0.1448, "IoU.washer": 0.6946, "IoU.plaything": 0.2649, "IoU.swimming pool": 0.3927, "IoU.stool": 0.3128, "IoU.barrel": 0.1594, "IoU.basket": 0.2437, "IoU.waterfall": 0.6592, "IoU.tent": 0.895, "IoU.bag": 0.0881, "IoU.minibike": 0.6469, "IoU.cradle": 0.7396, "IoU.oven": 0.2854, "IoU.ball": 0.4701, "IoU.food": 0.4036, "IoU.step": 0.1148, "IoU.tank": 0.2782, "IoU.trade name": 0.1952, "IoU.microwave": 0.5098, "IoU.pot": 0.3699, "IoU.animal": 0.5556, "IoU.bicycle": 0.5253, "IoU.lake": 0.4651, "IoU.dishwasher": 0.6218, "IoU.screen": 0.6756, "IoU.blanket": 0.1321, "IoU.sculpture": 0.5052, "IoU.hood": 0.4905, "IoU.sconce": 0.3229, "IoU.vase": 0.2969, "IoU.traffic light": 0.2566, "IoU.tray": 0.0309, "IoU.ashcan": 0.3673, "IoU.fan": 0.541, "IoU.pier": 0.2721, "IoU.crt screen": 0.0534, "IoU.plate": 0.4292, "IoU.monitor": 0.2471, "IoU.bulletin board": 0.4332, "IoU.shower": 0.0, "IoU.radiator": 0.5133, "IoU.glass": 0.0879, "IoU.clock": 0.2727, "IoU.flag": 0.3743, "Acc.wall": 0.874, "Acc.building": 0.9222, "Acc.sky": 0.9716, "Acc.floor": 0.8961, "Acc.tree": 0.8706, "Acc.ceiling": 0.8972, "Acc.road": 0.8937, "Acc.bed ": 0.9449, "Acc.windowpane": 0.7504, "Acc.grass": 0.777, "Acc.cabinet": 0.694, "Acc.sidewalk": 0.7796, "Acc.person": 0.9093, "Acc.earth": 0.485, "Acc.door": 0.5158, "Acc.table": 0.7049, "Acc.mountain": 0.7459, "Acc.plant": 
0.6192, "Acc.curtain": 0.8355, "Acc.chair": 0.6717, "Acc.car": 0.9158, "Acc.water": 0.6831, "Acc.painting": 0.8375, "Acc.sofa": 0.7698, "Acc.shelf": 0.5712, "Acc.house": 0.6034, "Acc.sea": 0.6697, "Acc.mirror": 0.6393, "Acc.rug": 0.6048, "Acc.field": 0.5486, "Acc.armchair": 0.5232, "Acc.seat": 0.7248, "Acc.fence": 0.5229, "Acc.desk": 0.6446, "Acc.rock": 0.638, "Acc.wardrobe": 0.5731, "Acc.lamp": 0.7358, "Acc.bathtub": 0.7716, "Acc.railing": 0.4063, "Acc.cushion": 0.6493, "Acc.base": 0.2604, "Acc.box": 0.2683, "Acc.column": 0.5484, "Acc.signboard": 0.4333, "Acc.chest of drawers": 0.5939, "Acc.counter": 0.3833, "Acc.sand": 0.438, "Acc.sink": 0.7534, "Acc.skyscraper": 0.7176, "Acc.fireplace": 0.8485, "Acc.refrigerator": 0.8049, "Acc.grandstand": 0.6915, "Acc.path": 0.3183, "Acc.stairs": 0.315, "Acc.runway": 0.9452, "Acc.case": 0.6618, "Acc.pool table": 0.9609, "Acc.pillow": 0.6008, "Acc.screen door": 0.749, "Acc.stairway": 0.3389, "Acc.river": 0.2995, "Acc.bridge": 0.4652, "Acc.bookcase": 0.4124, "Acc.blind": 0.4895, "Acc.coffee table": 0.7598, "Acc.toilet": 0.8801, "Acc.flower": 0.4773, "Acc.book": 0.6496, "Acc.hill": 0.0926, "Acc.bench": 0.4559, "Acc.countertop": 0.7006, "Acc.stove": 0.7686, "Acc.palm": 0.6627, "Acc.kitchen island": 0.5821, "Acc.computer": 0.6943, "Acc.swivel chair": 0.5998, "Acc.boat": 0.7996, "Acc.bar": 0.3431, "Acc.arcade machine": 0.6319, "Acc.hovel": 0.2886, "Acc.bus": 0.9351, "Acc.towel": 0.6971, "Acc.light": 0.521, "Acc.truck": 0.3144, "Acc.tower": 0.1733, "Acc.chandelier": 0.7668, "Acc.awning": 0.2184, "Acc.streetlight": 0.2976, "Acc.booth": 0.3248, "Acc.television receiver": 0.7413, "Acc.airplane": 0.6605, "Acc.dirt track": 0.1423, "Acc.apparel": 0.3454, "Acc.pole": 0.2015, "Acc.land": 0.0659, "Acc.bannister": 0.098, "Acc.escalator": 0.2766, "Acc.ottoman": 0.5001, "Acc.bottle": 0.3453, "Acc.buffet": 0.5453, "Acc.poster": 0.3467, "Acc.stage": 0.1857, "Acc.van": 0.482, "Acc.ship": 0.517, "Acc.fountain": 0.0533, "Acc.conveyer belt": 0.7984, "Acc.canopy": 0.1879, "Acc.washer": 0.7131, "Acc.plaything": 0.3655, "Acc.swimming pool": 0.5618, "Acc.stool": 0.4413, "Acc.barrel": 0.7855, "Acc.basket": 0.3276, "Acc.waterfall": 0.7531, "Acc.tent": 0.981, "Acc.bag": 0.1017, "Acc.minibike": 0.8078, "Acc.cradle": 0.8704, "Acc.oven": 0.5574, "Acc.ball": 0.5228, "Acc.food": 0.4621, "Acc.step": 0.1332, "Acc.tank": 0.302, "Acc.trade name": 0.235, "Acc.microwave": 0.567, "Acc.pot": 0.4286, "Acc.animal": 0.5833, "Acc.bicycle": 0.6812, "Acc.lake": 0.5329, "Acc.dishwasher": 0.7001, "Acc.screen": 0.798, "Acc.blanket": 0.1536, "Acc.sculpture": 0.679, "Acc.hood": 0.5914, "Acc.sconce": 0.3851, "Acc.vase": 0.4706, "Acc.traffic light": 0.4058, "Acc.tray": 0.0442, "Acc.ashcan": 0.4951, "Acc.fan": 0.6293, "Acc.pier": 0.358, "Acc.crt screen": 0.1241, "Acc.plate": 0.5546, "Acc.monitor": 0.3724, "Acc.bulletin board": 0.5108, "Acc.shower": 0.0, "Acc.radiator": 0.5785, "Acc.glass": 0.0955, "Acc.clock": 0.318, "Acc.flag": 0.4161} diff --git a/cv/classification/repvit/pytorch/segmentation/logs/repvit_m2_3_ade20k.json b/cv/classification/repvit/pytorch/segmentation/logs/repvit_m2_3_ade20k.json deleted file mode 100644 index 518100c6..00000000 --- a/cv/classification/repvit/pytorch/segmentation/logs/repvit_m2_3_ade20k.json +++ /dev/null @@ -1,811 +0,0 @@ -{"env_info": "sys.platform: linux\nPython: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]\nCUDA available: True\nGPU 0,1,2,3,4,5,6,7: NVIDIA GeForce RTX 3090\nCUDA_HOME: /usr/local/cuda\nNVCC: Cuda compilation tools, release 11.3, V11.3.109\nGCC: gcc 
(Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0\nPyTorch: 2.0.1+cu117\nPyTorch compiling details: PyTorch built with:\n - GCC 9.3\n - C++ Version: 201703\n - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\n - LAPACK is enabled (usually provided by MKL)\n - NNPACK is enabled\n - CPU capability usage: AVX2\n - CUDA Runtime 11.7\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\n - CuDNN 8.9.2 (built against CUDA 12.1)\n - Built with CuDNN 8.5\n - Magma 2.6.1\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n\nTorchVision: 0.15.2+cu117\nOpenCV: 4.8.0\nMMCV: 1.7.1\nMMCV Compiler: GCC 9.4\nMMCV CUDA Compiler: 11.3\nMMSegmentation: 0.30.0+c54117a", "seed": 0, "exp_name": "fpn_repvit_m6_ade20k_40k.py", "mmseg_version": "0.30.0+c54117a", "config": "norm_cfg = dict(type='SyncBN', requires_grad=True)\nmodel = dict(\n type='EncoderDecoder',\n pretrained='open-mmlab://resnet50_v1c',\n backbone=dict(\n type='repvit_m6',\n depth=50,\n num_stages=4,\n out_indices=[7, 15, 51, 54],\n dilations=(1, 1, 1, 1),\n strides=(1, 2, 2, 2),\n norm_cfg=dict(type='SyncBN', requires_grad=True),\n norm_eval=False,\n style='pytorch',\n contract_dilation=True,\n init_cfg=dict(\n type='Pretrained',\n checkpoint='pretrain/repvit_m6_distill_450e.pth'),\n pretrained='open-mmlab://resnet50_v1c'),\n neck=dict(\n type='FPN',\n in_channels=[80, 160, 320, 640],\n out_channels=256,\n num_outs=4),\n decode_head=dict(\n type='FPNHead',\n in_channels=[256, 256, 256, 256],\n in_index=[0, 1, 2, 3],\n feature_strides=[4, 8, 16, 32],\n channels=128,\n dropout_ratio=0.1,\n num_classes=150,\n norm_cfg=dict(type='SyncBN', requires_grad=True),\n align_corners=False,\n loss_decode=dict(\n type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),\n train_cfg=dict(),\n test_cfg=dict(mode='whole'))\ndataset_type = 'ADE20KDataset'\ndata_root = 
'data/ade/ADEChallengeData2016'\nimg_norm_cfg = dict(\n mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)\ncrop_size = (512, 512)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', reduce_zero_label=True),\n dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),\n dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),\n dict(type='RandomFlip', prob=0.5),\n dict(type='PhotoMetricDistortion'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_semantic_seg'])\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(2048, 512),\n flip=False,\n transforms=[\n dict(type='AlignResize', keep_ratio=True, size_divisor=32),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n]\ndata = dict(\n samples_per_gpu=4,\n workers_per_gpu=4,\n train=dict(\n type='RepeatDataset',\n times=50,\n dataset=dict(\n type='ADE20KDataset',\n data_root='data/ade/ADEChallengeData2016',\n img_dir='images/training',\n ann_dir='annotations/training',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', reduce_zero_label=True),\n dict(\n type='Resize',\n img_scale=(2048, 512),\n ratio_range=(0.5, 2.0)),\n dict(\n type='RandomCrop',\n crop_size=(512, 512),\n cat_max_ratio=0.75),\n dict(type='RandomFlip', prob=0.5),\n dict(type='PhotoMetricDistortion'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_semantic_seg'])\n ])),\n val=dict(\n type='ADE20KDataset',\n data_root='data/ade/ADEChallengeData2016',\n img_dir='images/validation',\n ann_dir='annotations/validation',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(2048, 512),\n flip=False,\n transforms=[\n dict(type='AlignResize', keep_ratio=True, size_divisor=32),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]),\n test=dict(\n type='ADE20KDataset',\n data_root='data/ade/ADEChallengeData2016',\n img_dir='images/validation',\n ann_dir='annotations/validation',\n pipeline=[\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(2048, 512),\n flip=False,\n transforms=[\n dict(type='AlignResize', keep_ratio=True, size_divisor=32),\n dict(type='RandomFlip'),\n dict(\n type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img'])\n ])\n ]))\nlog_config = dict(\n interval=50, hooks=[dict(type='TextLoggerHook', by_epoch=False)])\ndist_params = dict(backend='nccl')\nlog_level = 'INFO'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\ncudnn_benchmark = True\ngpu_multiples = 2\noptimizer = dict(type='AdamW', lr=0.0002, weight_decay=0.0001)\noptimizer_config = dict()\nlr_config = 
dict(policy='poly', power=0.9, min_lr=1e-06, by_epoch=False)\nrunner = dict(type='IterBasedRunner', max_iters=40000)\ncheckpoint_config = dict(by_epoch=False, interval=4000)\nevaluation = dict(interval=4000, metric='mIoU')\nwork_dir = './work_dirs/fpn_repvit_m6_ade20k_40k'\ngpu_ids = range(0, 8)\ndevice = 'cuda'\nseed = 0\n", "CLASSES": ["wall", "building", "sky", "floor", "tree", "ceiling", "road", "bed ", "windowpane", "grass", "cabinet", "sidewalk", "person", "earth", "door", "table", "mountain", "plant", "curtain", "chair", "car", "water", "painting", "sofa", "shelf", "house", "sea", "mirror", "rug", "field", "armchair", "seat", "fence", "desk", "rock", "wardrobe", "lamp", "bathtub", "railing", "cushion", "base", "box", "column", "signboard", "chest of drawers", "counter", "sand", "sink", "skyscraper", "fireplace", "refrigerator", "grandstand", "path", "stairs", "runway", "case", "pool table", "pillow", "screen door", "stairway", "river", "bridge", "bookcase", "blind", "coffee table", "toilet", "flower", "book", "hill", "bench", "countertop", "stove", "palm", "kitchen island", "computer", "swivel chair", "boat", "bar", "arcade machine", "hovel", "bus", "towel", "light", "truck", "tower", "chandelier", "awning", "streetlight", "booth", "television receiver", "airplane", "dirt track", "apparel", "pole", "land", "bannister", "escalator", "ottoman", "bottle", "buffet", "poster", "stage", "van", "ship", "fountain", "conveyer belt", "canopy", "washer", "plaything", "swimming pool", "stool", "barrel", "basket", "waterfall", "tent", "bag", "minibike", "cradle", "oven", "ball", "food", "step", "tank", "trade name", "microwave", "pot", "animal", "bicycle", "lake", "dishwasher", "screen", "blanket", "sculpture", "hood", "sconce", "vase", "traffic light", "tray", "ashcan", "fan", "pier", "crt screen", "plate", "monitor", "bulletin board", "shower", "radiator", "glass", "clock", "flag"], "PALETTE": [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], [255, 0, 31], [0, 184, 255], [0, 
214, 255], [255, 0, 112], [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], [102, 255, 0], [92, 0, 255]]} -{"mode": "train", "epoch": 1, "iter": 50, "lr": 0.0002, "memory": 7401, "data_time": 0.02149, "decode.loss_ce": 2.6068, "decode.acc_seg": 43.82846, "loss": 2.6068, "time": 0.38423} -{"mode": "train", "epoch": 1, "iter": 100, "lr": 0.0002, "memory": 7401, "data_time": 0.01406, "decode.loss_ce": 1.81305, "decode.acc_seg": 53.28397, "loss": 1.81305, "time": 0.24121} -{"mode": "train", "epoch": 1, "iter": 150, "lr": 0.0002, "memory": 7401, "data_time": 0.01454, "decode.loss_ce": 1.57513, "decode.acc_seg": 57.00116, "loss": 1.57513, "time": 0.23319} -{"mode": "train", "epoch": 1, "iter": 200, "lr": 0.0002, "memory": 7401, "data_time": 0.01428, "decode.loss_ce": 1.44328, "decode.acc_seg": 60.30138, "loss": 1.44328, "time": 0.23272} -{"mode": "train", "epoch": 1, "iter": 250, "lr": 0.0002, "memory": 7401, "data_time": 0.01482, "decode.loss_ce": 1.37346, "decode.acc_seg": 61.42057, "loss": 1.37346, "time": 0.23629} -{"mode": "train", "epoch": 1, "iter": 300, "lr": 0.0002, "memory": 7401, "data_time": 0.01416, "decode.loss_ce": 1.27841, "decode.acc_seg": 62.90869, "loss": 1.27841, "time": 0.23143} -{"mode": "train", "epoch": 1, "iter": 350, "lr": 0.0002, "memory": 7401, "data_time": 0.01335, "decode.loss_ce": 1.30403, "decode.acc_seg": 62.03881, "loss": 1.30403, "time": 0.2301} -{"mode": "train", "epoch": 1, "iter": 400, "lr": 0.0002, "memory": 7401, "data_time": 0.01376, "decode.loss_ce": 1.20095, "decode.acc_seg": 63.93601, "loss": 1.20095, "time": 0.23347} -{"mode": "train", "epoch": 1, "iter": 450, "lr": 0.0002, "memory": 7401, "data_time": 0.01356, "decode.loss_ce": 1.16725, "decode.acc_seg": 64.30468, "loss": 1.16725, "time": 0.23132} -{"mode": "train", "epoch": 1, "iter": 500, "lr": 0.0002, "memory": 7401, "data_time": 0.01362, "decode.loss_ce": 1.17929, "decode.acc_seg": 64.59575, "loss": 1.17929, "time": 0.22738} -{"mode": "train", "epoch": 1, "iter": 550, "lr": 0.0002, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 1.12954, "decode.acc_seg": 64.38405, "loss": 1.12954, "time": 0.2395} -{"mode": "train", "epoch": 1, "iter": 600, "lr": 0.0002, "memory": 7401, "data_time": 0.01371, "decode.loss_ce": 1.13768, "decode.acc_seg": 64.49749, "loss": 1.13768, "time": 0.2304} -{"mode": "train", "epoch": 1, "iter": 650, "lr": 0.0002, "memory": 7401, "data_time": 0.01369, "decode.loss_ce": 1.10794, "decode.acc_seg": 65.204, "loss": 1.10794, "time": 0.22942} -{"mode": "train", "epoch": 1, "iter": 700, "lr": 0.0002, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 1.05953, "decode.acc_seg": 66.37804, "loss": 1.05953, "time": 0.22803} -{"mode": "train", "epoch": 1, "iter": 750, "lr": 0.0002, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 1.0286, "decode.acc_seg": 67.5845, "loss": 1.0286, "time": 0.22713} -{"mode": "train", "epoch": 1, "iter": 800, "lr": 0.0002, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.99313, "decode.acc_seg": 68.2557, "loss": 0.99313, "time": 0.23317} -{"mode": 
"train", "epoch": 1, "iter": 850, "lr": 0.0002, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.9414, "decode.acc_seg": 69.38768, "loss": 0.9414, "time": 0.22747} -{"mode": "train", "epoch": 1, "iter": 900, "lr": 0.0002, "memory": 7401, "data_time": 0.01351, "decode.loss_ce": 1.02652, "decode.acc_seg": 66.90498, "loss": 1.02652, "time": 0.22942} -{"mode": "train", "epoch": 1, "iter": 950, "lr": 0.0002, "memory": 7401, "data_time": 0.01338, "decode.loss_ce": 0.94773, "decode.acc_seg": 69.08593, "loss": 0.94773, "time": 0.22669} -{"mode": "train", "epoch": 1, "iter": 1000, "lr": 0.0002, "memory": 7401, "data_time": 0.01404, "decode.loss_ce": 0.94242, "decode.acc_seg": 68.92042, "loss": 0.94242, "time": 0.23213} -{"mode": "train", "epoch": 1, "iter": 1050, "lr": 0.0002, "memory": 7401, "data_time": 0.0132, "decode.loss_ce": 0.94801, "decode.acc_seg": 69.01423, "loss": 0.94801, "time": 0.23285} -{"mode": "train", "epoch": 1, "iter": 1100, "lr": 0.0002, "memory": 7401, "data_time": 0.01363, "decode.loss_ce": 0.96125, "decode.acc_seg": 68.4046, "loss": 0.96125, "time": 0.22691} -{"mode": "train", "epoch": 1, "iter": 1150, "lr": 0.00019, "memory": 7401, "data_time": 0.01322, "decode.loss_ce": 0.91534, "decode.acc_seg": 69.0096, "loss": 0.91534, "time": 0.22879} -{"mode": "train", "epoch": 1, "iter": 1200, "lr": 0.00019, "memory": 7401, "data_time": 0.01351, "decode.loss_ce": 0.91844, "decode.acc_seg": 69.87155, "loss": 0.91844, "time": 0.22673} -{"mode": "train", "epoch": 1, "iter": 1250, "lr": 0.00019, "memory": 7401, "data_time": 0.01411, "decode.loss_ce": 0.89548, "decode.acc_seg": 70.32852, "loss": 0.89548, "time": 0.23152} -{"mode": "train", "epoch": 1, "iter": 1300, "lr": 0.00019, "memory": 7401, "data_time": 0.01337, "decode.loss_ce": 0.89246, "decode.acc_seg": 70.37671, "loss": 0.89246, "time": 0.23007} -{"mode": "train", "epoch": 1, "iter": 1350, "lr": 0.00019, "memory": 7401, "data_time": 0.01387, "decode.loss_ce": 0.83955, "decode.acc_seg": 71.52322, "loss": 0.83955, "time": 0.22901} -{"mode": "train", "epoch": 1, "iter": 1400, "lr": 0.00019, "memory": 7401, "data_time": 0.01331, "decode.loss_ce": 0.87601, "decode.acc_seg": 70.64549, "loss": 0.87601, "time": 0.2291} -{"mode": "train", "epoch": 1, "iter": 1450, "lr": 0.00019, "memory": 7401, "data_time": 0.01352, "decode.loss_ce": 0.87825, "decode.acc_seg": 70.58978, "loss": 0.87825, "time": 0.23184} -{"mode": "train", "epoch": 1, "iter": 1500, "lr": 0.00019, "memory": 7401, "data_time": 0.01433, "decode.loss_ce": 0.86748, "decode.acc_seg": 70.66666, "loss": 0.86748, "time": 0.23294} -{"mode": "train", "epoch": 1, "iter": 1550, "lr": 0.00019, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.84587, "decode.acc_seg": 71.45389, "loss": 0.84587, "time": 0.22416} -{"mode": "train", "epoch": 1, "iter": 1600, "lr": 0.00019, "memory": 7401, "data_time": 0.01385, "decode.loss_ce": 0.83057, "decode.acc_seg": 71.50714, "loss": 0.83057, "time": 0.22737} -{"mode": "train", "epoch": 1, "iter": 1650, "lr": 0.00019, "memory": 7401, "data_time": 0.01352, "decode.loss_ce": 0.8374, "decode.acc_seg": 71.49613, "loss": 0.8374, "time": 0.23643} -{"mode": "train", "epoch": 1, "iter": 1700, "lr": 0.00019, "memory": 7401, "data_time": 0.01371, "decode.loss_ce": 0.86773, "decode.acc_seg": 70.65878, "loss": 0.86773, "time": 0.22443} -{"mode": "train", "epoch": 1, "iter": 1750, "lr": 0.00019, "memory": 7401, "data_time": 0.01356, "decode.loss_ce": 0.82127, "decode.acc_seg": 71.93842, "loss": 0.82127, "time": 0.23013} -{"mode": "train", "epoch": 
1, "iter": 1800, "lr": 0.00019, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.83907, "decode.acc_seg": 71.37513, "loss": 0.83907, "time": 0.22943} -{"mode": "train", "epoch": 1, "iter": 1850, "lr": 0.00019, "memory": 7401, "data_time": 0.01441, "decode.loss_ce": 0.79767, "decode.acc_seg": 72.55169, "loss": 0.79767, "time": 0.22713} -{"mode": "train", "epoch": 1, "iter": 1900, "lr": 0.00019, "memory": 7401, "data_time": 0.01435, "decode.loss_ce": 0.83441, "decode.acc_seg": 71.19938, "loss": 0.83441, "time": 0.23001} -{"mode": "train", "epoch": 1, "iter": 1950, "lr": 0.00019, "memory": 7401, "data_time": 0.01356, "decode.loss_ce": 0.82066, "decode.acc_seg": 71.6075, "loss": 0.82066, "time": 0.23123} -{"mode": "train", "epoch": 1, "iter": 2000, "lr": 0.00019, "memory": 7401, "data_time": 0.014, "decode.loss_ce": 0.78931, "decode.acc_seg": 72.55481, "loss": 0.78931, "time": 0.23036} -{"mode": "train", "epoch": 1, "iter": 2050, "lr": 0.00019, "memory": 7401, "data_time": 0.01378, "decode.loss_ce": 0.78221, "decode.acc_seg": 72.81202, "loss": 0.78221, "time": 0.22955} -{"mode": "train", "epoch": 1, "iter": 2100, "lr": 0.00019, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.76195, "decode.acc_seg": 73.33477, "loss": 0.76195, "time": 0.22763} -{"mode": "train", "epoch": 1, "iter": 2150, "lr": 0.00019, "memory": 7401, "data_time": 0.01318, "decode.loss_ce": 0.7739, "decode.acc_seg": 73.99163, "loss": 0.7739, "time": 0.2302} -{"mode": "train", "epoch": 1, "iter": 2200, "lr": 0.00019, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.79582, "decode.acc_seg": 72.46683, "loss": 0.79582, "time": 0.228} -{"mode": "train", "epoch": 1, "iter": 2250, "lr": 0.00019, "memory": 7401, "data_time": 0.01388, "decode.loss_ce": 0.82622, "decode.acc_seg": 71.67787, "loss": 0.82622, "time": 0.22876} -{"mode": "train", "epoch": 1, "iter": 2300, "lr": 0.00019, "memory": 7401, "data_time": 0.01388, "decode.loss_ce": 0.78973, "decode.acc_seg": 72.69743, "loss": 0.78973, "time": 0.2308} -{"mode": "train", "epoch": 1, "iter": 2350, "lr": 0.00019, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.77208, "decode.acc_seg": 72.82557, "loss": 0.77208, "time": 0.22566} -{"mode": "train", "epoch": 1, "iter": 2400, "lr": 0.00019, "memory": 7401, "data_time": 0.0134, "decode.loss_ce": 0.7784, "decode.acc_seg": 73.13938, "loss": 0.7784, "time": 0.22886} -{"mode": "train", "epoch": 1, "iter": 2450, "lr": 0.00019, "memory": 7401, "data_time": 0.01356, "decode.loss_ce": 0.77019, "decode.acc_seg": 72.79663, "loss": 0.77019, "time": 0.22861} -{"mode": "train", "epoch": 1, "iter": 2500, "lr": 0.00019, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.75563, "decode.acc_seg": 73.5635, "loss": 0.75563, "time": 0.22723} -{"mode": "train", "epoch": 1, "iter": 2550, "lr": 0.00019, "memory": 7401, "data_time": 0.01346, "decode.loss_ce": 0.7726, "decode.acc_seg": 73.38328, "loss": 0.7726, "time": 0.2383} -{"mode": "train", "epoch": 1, "iter": 2600, "lr": 0.00019, "memory": 7401, "data_time": 0.01401, "decode.loss_ce": 0.75409, "decode.acc_seg": 72.83732, "loss": 0.75409, "time": 0.22573} -{"mode": "train", "epoch": 1, "iter": 2650, "lr": 0.00019, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.73943, "decode.acc_seg": 73.89218, "loss": 0.73943, "time": 0.2259} -{"mode": "train", "epoch": 1, "iter": 2700, "lr": 0.00019, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.74756, "decode.acc_seg": 73.73506, "loss": 0.74756, "time": 0.23033} -{"mode": "train", "epoch": 1, "iter": 2750, 
"lr": 0.00019, "memory": 7401, "data_time": 0.01396, "decode.loss_ce": 0.7742, "decode.acc_seg": 72.73662, "loss": 0.7742, "time": 0.22506} -{"mode": "train", "epoch": 1, "iter": 2800, "lr": 0.00019, "memory": 7401, "data_time": 0.01409, "decode.loss_ce": 0.71122, "decode.acc_seg": 74.61253, "loss": 0.71122, "time": 0.22768} -{"mode": "train", "epoch": 1, "iter": 2850, "lr": 0.00019, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.72482, "decode.acc_seg": 74.33592, "loss": 0.72482, "time": 0.2279} -{"mode": "train", "epoch": 1, "iter": 2900, "lr": 0.00019, "memory": 7401, "data_time": 0.01434, "decode.loss_ce": 0.77104, "decode.acc_seg": 73.8742, "loss": 0.77104, "time": 0.22746} -{"mode": "train", "epoch": 1, "iter": 2950, "lr": 0.00019, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.72853, "decode.acc_seg": 74.36189, "loss": 0.72853, "time": 0.22531} -{"mode": "train", "epoch": 1, "iter": 3000, "lr": 0.00019, "memory": 7401, "data_time": 0.01457, "decode.loss_ce": 0.69374, "decode.acc_seg": 75.26212, "loss": 0.69374, "time": 0.23172} -{"mode": "train", "epoch": 1, "iter": 3050, "lr": 0.00019, "memory": 7401, "data_time": 0.01362, "decode.loss_ce": 0.69718, "decode.acc_seg": 75.40693, "loss": 0.69718, "time": 0.22878} -{"mode": "train", "epoch": 1, "iter": 3100, "lr": 0.00019, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.73876, "decode.acc_seg": 74.06515, "loss": 0.73876, "time": 0.22689} -{"mode": "train", "epoch": 1, "iter": 3150, "lr": 0.00019, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.70332, "decode.acc_seg": 75.10285, "loss": 0.70332, "time": 0.22631} -{"mode": "train", "epoch": 1, "iter": 3200, "lr": 0.00019, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.68259, "decode.acc_seg": 75.34497, "loss": 0.68259, "time": 0.22922} -{"mode": "train", "epoch": 1, "iter": 3250, "lr": 0.00019, "memory": 7401, "data_time": 0.01428, "decode.loss_ce": 0.70233, "decode.acc_seg": 74.91258, "loss": 0.70233, "time": 0.22389} -{"mode": "train", "epoch": 1, "iter": 3300, "lr": 0.00019, "memory": 7401, "data_time": 0.01412, "decode.loss_ce": 0.71058, "decode.acc_seg": 74.95214, "loss": 0.71058, "time": 0.22621} -{"mode": "train", "epoch": 1, "iter": 3350, "lr": 0.00018, "memory": 7401, "data_time": 0.01396, "decode.loss_ce": 0.70496, "decode.acc_seg": 74.76801, "loss": 0.70496, "time": 0.23284} -{"mode": "train", "epoch": 1, "iter": 3400, "lr": 0.00018, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.67101, "decode.acc_seg": 75.90312, "loss": 0.67101, "time": 0.22547} -{"mode": "train", "epoch": 1, "iter": 3450, "lr": 0.00018, "memory": 7401, "data_time": 0.01381, "decode.loss_ce": 0.7232, "decode.acc_seg": 74.60055, "loss": 0.7232, "time": 0.22494} -{"mode": "train", "epoch": 1, "iter": 3500, "lr": 0.00018, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.68937, "decode.acc_seg": 75.31696, "loss": 0.68937, "time": 0.22969} -{"mode": "train", "epoch": 1, "iter": 3550, "lr": 0.00018, "memory": 7401, "data_time": 0.01369, "decode.loss_ce": 0.75435, "decode.acc_seg": 73.02864, "loss": 0.75435, "time": 0.22381} -{"mode": "train", "epoch": 1, "iter": 3600, "lr": 0.00018, "memory": 7401, "data_time": 0.0133, "decode.loss_ce": 0.69343, "decode.acc_seg": 75.96599, "loss": 0.69343, "time": 0.22816} -{"mode": "train", "epoch": 1, "iter": 3650, "lr": 0.00018, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.67099, "decode.acc_seg": 75.73371, "loss": 0.67099, "time": 0.22461} -{"mode": "train", "epoch": 1, "iter": 3700, "lr": 
0.00018, "memory": 7401, "data_time": 0.01444, "decode.loss_ce": 0.68371, "decode.acc_seg": 76.00757, "loss": 0.68371, "time": 0.23293} -{"mode": "train", "epoch": 1, "iter": 3750, "lr": 0.00018, "memory": 7401, "data_time": 0.01437, "decode.loss_ce": 0.6743, "decode.acc_seg": 75.92185, "loss": 0.6743, "time": 0.23024} -{"mode": "train", "epoch": 1, "iter": 3800, "lr": 0.00018, "memory": 7401, "data_time": 0.01429, "decode.loss_ce": 0.66239, "decode.acc_seg": 76.04053, "loss": 0.66239, "time": 0.22651} -{"mode": "train", "epoch": 1, "iter": 3850, "lr": 0.00018, "memory": 7401, "data_time": 0.014, "decode.loss_ce": 0.66252, "decode.acc_seg": 76.3963, "loss": 0.66252, "time": 0.22754} -{"mode": "train", "epoch": 1, "iter": 3900, "lr": 0.00018, "memory": 7401, "data_time": 0.01381, "decode.loss_ce": 0.65885, "decode.acc_seg": 76.18308, "loss": 0.65885, "time": 0.22533} -{"mode": "train", "epoch": 1, "iter": 3950, "lr": 0.00018, "memory": 7401, "data_time": 0.01386, "decode.loss_ce": 0.67256, "decode.acc_seg": 75.85651, "loss": 0.67256, "time": 0.22779} -{"mode": "train", "epoch": 1, "iter": 4000, "lr": 0.00018, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.67354, "decode.acc_seg": 75.76522, "loss": 0.67354, "time": 0.25385} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00018, "aAcc": 0.7746, "mIoU": 0.3481, "mAcc": 0.4557, "IoU.wall": 0.7051, "IoU.building": 0.7801, "IoU.sky": 0.9357, "IoU.floor": 0.7711, "IoU.tree": 0.6868, "IoU.ceiling": 0.7742, "IoU.road": 0.7469, "IoU.bed ": 0.8083, "IoU.windowpane": 0.5153, "IoU.grass": 0.614, "IoU.cabinet": 0.5225, "IoU.sidewalk": 0.5318, "IoU.person": 0.7809, "IoU.earth": 0.2278, "IoU.door": 0.3382, "IoU.table": 0.4585, "IoU.mountain": 0.5213, "IoU.plant": 0.4172, "IoU.curtain": 0.6287, "IoU.chair": 0.4723, "IoU.car": 0.7882, "IoU.water": 0.4541, "IoU.painting": 0.6124, "IoU.sofa": 0.4907, "IoU.shelf": 0.3503, "IoU.house": 0.3055, "IoU.sea": 0.5482, "IoU.mirror": 0.3989, "IoU.rug": 0.4571, "IoU.field": 0.1946, "IoU.armchair": 0.2794, "IoU.seat": 0.4498, "IoU.fence": 0.2799, "IoU.desk": 0.3353, "IoU.rock": 0.3095, "IoU.wardrobe": 0.4003, "IoU.lamp": 0.5079, "IoU.bathtub": 0.6145, "IoU.railing": 0.2431, "IoU.cushion": 0.4253, "IoU.base": 0.0511, "IoU.box": 0.1436, "IoU.column": 0.3954, "IoU.signboard": 0.3034, "IoU.chest of drawers": 0.2932, "IoU.counter": 0.2463, "IoU.sand": 0.2509, "IoU.sink": 0.5559, "IoU.skyscraper": 0.4353, "IoU.fireplace": 0.5818, "IoU.refrigerator": 0.5394, "IoU.grandstand": 0.314, "IoU.path": 0.1163, "IoU.stairs": 0.3202, "IoU.runway": 0.4632, "IoU.case": 0.4311, "IoU.pool table": 0.8862, "IoU.pillow": 0.2751, "IoU.screen door": 0.4607, "IoU.stairway": 0.299, "IoU.river": 0.117, "IoU.bridge": 0.4515, "IoU.bookcase": 0.302, "IoU.blind": 0.2488, "IoU.coffee table": 0.4523, "IoU.toilet": 0.7945, "IoU.flower": 0.3081, "IoU.book": 0.3912, "IoU.hill": 0.0178, "IoU.bench": 0.3612, "IoU.countertop": 0.4509, "IoU.stove": 0.5403, "IoU.palm": 0.4191, "IoU.kitchen island": 0.1799, "IoU.computer": 0.6128, "IoU.swivel chair": 0.304, "IoU.boat": 0.4255, "IoU.bar": 0.1994, "IoU.arcade machine": 0.4802, "IoU.hovel": 0.1146, "IoU.bus": 0.7897, "IoU.towel": 0.5231, "IoU.light": 0.2182, "IoU.truck": 0.1731, "IoU.tower": 0.1419, "IoU.chandelier": 0.548, "IoU.awning": 0.2184, "IoU.streetlight": 0.1288, "IoU.booth": 0.1301, "IoU.television receiver": 0.5868, "IoU.airplane": 0.5121, "IoU.dirt track": 0.0, "IoU.apparel": 0.1931, "IoU.pole": 0.1536, "IoU.land": 0.002, "IoU.bannister": 0.0006, "IoU.escalator": 0.0282, "IoU.ottoman": 0.2348, 
"IoU.bottle": 0.1612, "IoU.buffet": 0.4162, "IoU.poster": 0.1063, "IoU.stage": 0.0127, "IoU.van": 0.211, "IoU.ship": 0.3251, "IoU.fountain": 0.042, "IoU.conveyer belt": 0.2105, "IoU.canopy": 0.0268, "IoU.washer": 0.5721, "IoU.plaything": 0.214, "IoU.swimming pool": 0.1517, "IoU.stool": 0.1598, "IoU.barrel": 0.0746, "IoU.basket": 0.1494, "IoU.waterfall": 0.5677, "IoU.tent": 0.5037, "IoU.bag": 0.007, "IoU.minibike": 0.384, "IoU.cradle": 0.7319, "IoU.oven": 0.1034, "IoU.ball": 0.306, "IoU.food": 0.5098, "IoU.step": 0.0059, "IoU.tank": 0.2101, "IoU.trade name": 0.0608, "IoU.microwave": 0.299, "IoU.pot": 0.2414, "IoU.animal": 0.4937, "IoU.bicycle": 0.4072, "IoU.lake": 0.0, "IoU.dishwasher": 0.4159, "IoU.screen": 0.5477, "IoU.blanket": 0.008, "IoU.sculpture": 0.2222, "IoU.hood": 0.2475, "IoU.sconce": 0.1316, "IoU.vase": 0.2448, "IoU.traffic light": 0.1965, "IoU.tray": 0.0032, "IoU.ashcan": 0.2941, "IoU.fan": 0.4112, "IoU.pier": 0.1624, "IoU.crt screen": 0.018, "IoU.plate": 0.3963, "IoU.monitor": 0.2749, "IoU.bulletin board": 0.2487, "IoU.shower": 0.0, "IoU.radiator": 0.4642, "IoU.glass": 0.0678, "IoU.clock": 0.1726, "IoU.flag": 0.2302, "Acc.wall": 0.8555, "Acc.building": 0.9194, "Acc.sky": 0.9701, "Acc.floor": 0.8779, "Acc.tree": 0.8817, "Acc.ceiling": 0.89, "Acc.road": 0.8652, "Acc.bed ": 0.9055, "Acc.windowpane": 0.6783, "Acc.grass": 0.9313, "Acc.cabinet": 0.643, "Acc.sidewalk": 0.7476, "Acc.person": 0.893, "Acc.earth": 0.267, "Acc.door": 0.4648, "Acc.table": 0.6071, "Acc.mountain": 0.7013, "Acc.plant": 0.4816, "Acc.curtain": 0.7562, "Acc.chair": 0.6494, "Acc.car": 0.8773, "Acc.water": 0.5954, "Acc.painting": 0.773, "Acc.sofa": 0.7042, "Acc.shelf": 0.6524, "Acc.house": 0.3405, "Acc.sea": 0.7453, "Acc.mirror": 0.4454, "Acc.rug": 0.4987, "Acc.field": 0.2482, "Acc.armchair": 0.48, "Acc.seat": 0.7864, "Acc.fence": 0.6167, "Acc.desk": 0.5599, "Acc.rock": 0.4095, "Acc.wardrobe": 0.4915, "Acc.lamp": 0.6185, "Acc.bathtub": 0.7234, "Acc.railing": 0.3486, "Acc.cushion": 0.6, "Acc.base": 0.0696, "Acc.box": 0.1734, "Acc.column": 0.5455, "Acc.signboard": 0.3939, "Acc.chest of drawers": 0.4868, "Acc.counter": 0.4171, "Acc.sand": 0.5385, "Acc.sink": 0.6652, "Acc.skyscraper": 0.5423, "Acc.fireplace": 0.7458, "Acc.refrigerator": 0.7653, "Acc.grandstand": 0.6171, "Acc.path": 0.1484, "Acc.stairs": 0.3457, "Acc.runway": 0.474, "Acc.case": 0.5732, "Acc.pool table": 0.9248, "Acc.pillow": 0.2965, "Acc.screen door": 0.5921, "Acc.stairway": 0.3772, "Acc.river": 0.3314, "Acc.bridge": 0.7035, "Acc.bookcase": 0.516, "Acc.blind": 0.2766, "Acc.coffee table": 0.7853, "Acc.toilet": 0.9005, "Acc.flower": 0.4503, "Acc.book": 0.5956, "Acc.hill": 0.0181, "Acc.bench": 0.4465, "Acc.countertop": 0.5098, "Acc.stove": 0.7831, "Acc.palm": 0.4867, "Acc.kitchen island": 0.4578, "Acc.computer": 0.7551, "Acc.swivel chair": 0.5851, "Acc.boat": 0.5522, "Acc.bar": 0.2399, "Acc.arcade machine": 0.5117, "Acc.hovel": 0.121, "Acc.bus": 0.9249, "Acc.towel": 0.6082, "Acc.light": 0.233, "Acc.truck": 0.2241, "Acc.tower": 0.1497, "Acc.chandelier": 0.6844, "Acc.awning": 0.2888, "Acc.streetlight": 0.1493, "Acc.booth": 0.1302, "Acc.television receiver": 0.6888, "Acc.airplane": 0.5919, "Acc.dirt track": 0.0, "Acc.apparel": 0.2739, "Acc.pole": 0.2181, "Acc.land": 0.002, "Acc.bannister": 0.0006, "Acc.escalator": 0.034, "Acc.ottoman": 0.2681, "Acc.bottle": 0.2295, "Acc.buffet": 0.5184, "Acc.poster": 0.1195, "Acc.stage": 0.0212, "Acc.van": 0.23, "Acc.ship": 0.3426, "Acc.fountain": 0.0433, "Acc.conveyer belt": 0.8096, "Acc.canopy": 0.0275, "Acc.washer": 
0.6542, "Acc.plaything": 0.3038, "Acc.swimming pool": 0.309, "Acc.stool": 0.1764, "Acc.barrel": 0.2806, "Acc.basket": 0.2011, "Acc.waterfall": 0.8831, "Acc.tent": 0.5083, "Acc.bag": 0.0071, "Acc.minibike": 0.4691, "Acc.cradle": 0.9402, "Acc.oven": 0.1508, "Acc.ball": 0.636, "Acc.food": 0.7043, "Acc.step": 0.0062, "Acc.tank": 0.2137, "Acc.trade name": 0.0621, "Acc.microwave": 0.316, "Acc.pot": 0.2558, "Acc.animal": 0.545, "Acc.bicycle": 0.6715, "Acc.lake": 0.0, "Acc.dishwasher": 0.5034, "Acc.screen": 0.8985, "Acc.blanket": 0.008, "Acc.sculpture": 0.2724, "Acc.hood": 0.2582, "Acc.sconce": 0.1479, "Acc.vase": 0.4584, "Acc.traffic light": 0.2942, "Acc.tray": 0.0033, "Acc.ashcan": 0.3781, "Acc.fan": 0.4924, "Acc.pier": 0.2053, "Acc.crt screen": 0.0271, "Acc.plate": 0.439, "Acc.monitor": 0.3384, "Acc.bulletin board": 0.2745, "Acc.shower": 0.0, "Acc.radiator": 0.5096, "Acc.glass": 0.0696, "Acc.clock": 0.2136, "Acc.flag": 0.2369} -{"mode": "train", "epoch": 1, "iter": 4050, "lr": 0.00018, "memory": 7401, "data_time": 1.08746, "decode.loss_ce": 0.65655, "decode.acc_seg": 76.83212, "loss": 0.65655, "time": 1.2982} -{"mode": "train", "epoch": 1, "iter": 4100, "lr": 0.00018, "memory": 7401, "data_time": 0.01424, "decode.loss_ce": 0.64859, "decode.acc_seg": 76.71943, "loss": 0.64859, "time": 0.2246} -{"mode": "train", "epoch": 1, "iter": 4150, "lr": 0.00018, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.7073, "decode.acc_seg": 74.89157, "loss": 0.7073, "time": 0.22102} -{"mode": "train", "epoch": 1, "iter": 4200, "lr": 0.00018, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.64876, "decode.acc_seg": 76.37138, "loss": 0.64876, "time": 0.22984} -{"mode": "train", "epoch": 1, "iter": 4250, "lr": 0.00018, "memory": 7401, "data_time": 0.01441, "decode.loss_ce": 0.64831, "decode.acc_seg": 76.81152, "loss": 0.64831, "time": 0.22922} -{"mode": "train", "epoch": 1, "iter": 4300, "lr": 0.00018, "memory": 7401, "data_time": 0.01335, "decode.loss_ce": 0.64418, "decode.acc_seg": 77.20067, "loss": 0.64418, "time": 0.22488} -{"mode": "train", "epoch": 1, "iter": 4350, "lr": 0.00018, "memory": 7401, "data_time": 0.0136, "decode.loss_ce": 0.65693, "decode.acc_seg": 76.1537, "loss": 0.65693, "time": 0.22532} -{"mode": "train", "epoch": 1, "iter": 4400, "lr": 0.00018, "memory": 7401, "data_time": 0.01373, "decode.loss_ce": 0.6437, "decode.acc_seg": 76.18158, "loss": 0.6437, "time": 0.22217} -{"mode": "train", "epoch": 1, "iter": 4450, "lr": 0.00018, "memory": 7401, "data_time": 0.01367, "decode.loss_ce": 0.65523, "decode.acc_seg": 76.80026, "loss": 0.65523, "time": 0.2275} -{"mode": "train", "epoch": 1, "iter": 4500, "lr": 0.00018, "memory": 7401, "data_time": 0.01425, "decode.loss_ce": 0.64666, "decode.acc_seg": 76.40415, "loss": 0.64666, "time": 0.22845} -{"mode": "train", "epoch": 1, "iter": 4550, "lr": 0.00018, "memory": 7401, "data_time": 0.01434, "decode.loss_ce": 0.62431, "decode.acc_seg": 77.7595, "loss": 0.62431, "time": 0.22553} -{"mode": "train", "epoch": 1, "iter": 4600, "lr": 0.00018, "memory": 7401, "data_time": 0.01381, "decode.loss_ce": 0.65704, "decode.acc_seg": 76.6679, "loss": 0.65704, "time": 0.22435} -{"mode": "train", "epoch": 1, "iter": 4650, "lr": 0.00018, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.64523, "decode.acc_seg": 76.53526, "loss": 0.64523, "time": 0.22636} -{"mode": "train", "epoch": 1, "iter": 4700, "lr": 0.00018, "memory": 7401, "data_time": 0.01348, "decode.loss_ce": 0.63625, "decode.acc_seg": 76.87318, "loss": 0.63625, "time": 0.22374} -{"mode": 
"train", "epoch": 1, "iter": 4750, "lr": 0.00018, "memory": 7401, "data_time": 0.01349, "decode.loss_ce": 0.62193, "decode.acc_seg": 77.4393, "loss": 0.62193, "time": 0.22779} -{"mode": "train", "epoch": 1, "iter": 4800, "lr": 0.00018, "memory": 7401, "data_time": 0.01365, "decode.loss_ce": 0.66934, "decode.acc_seg": 75.51455, "loss": 0.66934, "time": 0.22323} -{"mode": "train", "epoch": 1, "iter": 4850, "lr": 0.00018, "memory": 7401, "data_time": 0.01352, "decode.loss_ce": 0.64175, "decode.acc_seg": 76.87041, "loss": 0.64175, "time": 0.23022} -{"mode": "train", "epoch": 1, "iter": 4900, "lr": 0.00018, "memory": 7401, "data_time": 0.01392, "decode.loss_ce": 0.62891, "decode.acc_seg": 77.01829, "loss": 0.62891, "time": 0.22515} -{"mode": "train", "epoch": 1, "iter": 4950, "lr": 0.00018, "memory": 7401, "data_time": 0.01352, "decode.loss_ce": 0.589, "decode.acc_seg": 78.50052, "loss": 0.589, "time": 0.22247} -{"mode": "train", "epoch": 1, "iter": 5000, "lr": 0.00018, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.64214, "decode.acc_seg": 76.71305, "loss": 0.64214, "time": 0.23693} -{"mode": "train", "epoch": 1, "iter": 5050, "lr": 0.00018, "memory": 7401, "data_time": 0.01402, "decode.loss_ce": 0.62676, "decode.acc_seg": 77.14063, "loss": 0.62676, "time": 0.22571} -{"mode": "train", "epoch": 1, "iter": 5100, "lr": 0.00018, "memory": 7401, "data_time": 0.01376, "decode.loss_ce": 0.65278, "decode.acc_seg": 76.50497, "loss": 0.65278, "time": 0.22773} -{"mode": "train", "epoch": 1, "iter": 5150, "lr": 0.00018, "memory": 7401, "data_time": 0.01414, "decode.loss_ce": 0.62806, "decode.acc_seg": 77.24606, "loss": 0.62806, "time": 0.22725} -{"mode": "train", "epoch": 1, "iter": 5200, "lr": 0.00018, "memory": 7401, "data_time": 0.01419, "decode.loss_ce": 0.60711, "decode.acc_seg": 77.22653, "loss": 0.60711, "time": 0.22351} -{"mode": "train", "epoch": 1, "iter": 5250, "lr": 0.00018, "memory": 7401, "data_time": 0.01389, "decode.loss_ce": 0.6172, "decode.acc_seg": 77.90631, "loss": 0.6172, "time": 0.22313} -{"mode": "train", "epoch": 1, "iter": 5300, "lr": 0.00018, "memory": 7401, "data_time": 0.01818, "decode.loss_ce": 0.61653, "decode.acc_seg": 77.78814, "loss": 0.61653, "time": 0.22876} -{"mode": "train", "epoch": 1, "iter": 5350, "lr": 0.00018, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.58811, "decode.acc_seg": 78.69139, "loss": 0.58811, "time": 0.22303} -{"mode": "train", "epoch": 1, "iter": 5400, "lr": 0.00018, "memory": 7401, "data_time": 0.01353, "decode.loss_ce": 0.60249, "decode.acc_seg": 77.93847, "loss": 0.60249, "time": 0.22717} -{"mode": "train", "epoch": 1, "iter": 5450, "lr": 0.00018, "memory": 7401, "data_time": 0.01446, "decode.loss_ce": 0.61255, "decode.acc_seg": 77.54215, "loss": 0.61255, "time": 0.2271} -{"mode": "train", "epoch": 1, "iter": 5500, "lr": 0.00018, "memory": 7401, "data_time": 0.01933, "decode.loss_ce": 0.5844, "decode.acc_seg": 78.52731, "loss": 0.5844, "time": 0.22798} -{"mode": "train", "epoch": 1, "iter": 5550, "lr": 0.00017, "memory": 7401, "data_time": 0.01347, "decode.loss_ce": 0.59555, "decode.acc_seg": 78.65097, "loss": 0.59555, "time": 0.23438} -{"mode": "train", "epoch": 1, "iter": 5600, "lr": 0.00017, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.58865, "decode.acc_seg": 78.45124, "loss": 0.58865, "time": 0.22729} -{"mode": "train", "epoch": 1, "iter": 5650, "lr": 0.00017, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.56914, "decode.acc_seg": 78.56942, "loss": 0.56914, "time": 0.23079} -{"mode": "train", 
"epoch": 1, "iter": 5700, "lr": 0.00017, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.57398, "decode.acc_seg": 78.35514, "loss": 0.57398, "time": 0.22113} -{"mode": "train", "epoch": 1, "iter": 5750, "lr": 0.00017, "memory": 7401, "data_time": 0.01379, "decode.loss_ce": 0.60325, "decode.acc_seg": 77.71568, "loss": 0.60325, "time": 0.22295} -{"mode": "train", "epoch": 1, "iter": 5800, "lr": 0.00017, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.60027, "decode.acc_seg": 77.61635, "loss": 0.60027, "time": 0.2363} -{"mode": "train", "epoch": 1, "iter": 5850, "lr": 0.00017, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.59791, "decode.acc_seg": 77.90207, "loss": 0.59791, "time": 0.22897} -{"mode": "train", "epoch": 1, "iter": 5900, "lr": 0.00017, "memory": 7401, "data_time": 0.01378, "decode.loss_ce": 0.59542, "decode.acc_seg": 78.14964, "loss": 0.59542, "time": 0.2221} -{"mode": "train", "epoch": 1, "iter": 5950, "lr": 0.00017, "memory": 7401, "data_time": 0.01368, "decode.loss_ce": 0.58002, "decode.acc_seg": 78.36603, "loss": 0.58002, "time": 0.22781} -{"mode": "train", "epoch": 1, "iter": 6000, "lr": 0.00017, "memory": 7401, "data_time": 0.01397, "decode.loss_ce": 0.58941, "decode.acc_seg": 78.35238, "loss": 0.58941, "time": 0.22316} -{"mode": "train", "epoch": 1, "iter": 6050, "lr": 0.00017, "memory": 7401, "data_time": 0.01371, "decode.loss_ce": 0.5885, "decode.acc_seg": 78.47783, "loss": 0.5885, "time": 0.2389} -{"mode": "train", "epoch": 1, "iter": 6100, "lr": 0.00017, "memory": 7401, "data_time": 0.01352, "decode.loss_ce": 0.58459, "decode.acc_seg": 78.73064, "loss": 0.58459, "time": 0.22333} -{"mode": "train", "epoch": 1, "iter": 6150, "lr": 0.00017, "memory": 7401, "data_time": 0.01412, "decode.loss_ce": 0.58423, "decode.acc_seg": 78.81201, "loss": 0.58423, "time": 0.22274} -{"mode": "train", "epoch": 1, "iter": 6200, "lr": 0.00017, "memory": 7401, "data_time": 0.01418, "decode.loss_ce": 0.59467, "decode.acc_seg": 78.1577, "loss": 0.59467, "time": 0.22205} -{"mode": "train", "epoch": 1, "iter": 6250, "lr": 0.00017, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.57987, "decode.acc_seg": 78.62819, "loss": 0.57987, "time": 0.22323} -{"mode": "train", "epoch": 1, "iter": 6300, "lr": 0.00017, "memory": 7401, "data_time": 0.01347, "decode.loss_ce": 0.56139, "decode.acc_seg": 79.08065, "loss": 0.56139, "time": 0.23371} -{"mode": "train", "epoch": 1, "iter": 6350, "lr": 0.00017, "memory": 7401, "data_time": 0.01404, "decode.loss_ce": 0.59383, "decode.acc_seg": 78.25587, "loss": 0.59383, "time": 0.22401} -{"mode": "train", "epoch": 1, "iter": 6400, "lr": 0.00017, "memory": 7401, "data_time": 0.01384, "decode.loss_ce": 0.57179, "decode.acc_seg": 79.00797, "loss": 0.57179, "time": 0.22924} -{"mode": "train", "epoch": 1, "iter": 6450, "lr": 0.00017, "memory": 7401, "data_time": 0.01355, "decode.loss_ce": 0.56338, "decode.acc_seg": 78.91095, "loss": 0.56338, "time": 0.22344} -{"mode": "train", "epoch": 1, "iter": 6500, "lr": 0.00017, "memory": 7401, "data_time": 0.01389, "decode.loss_ce": 0.59187, "decode.acc_seg": 78.5811, "loss": 0.59187, "time": 0.2243} -{"mode": "train", "epoch": 1, "iter": 6550, "lr": 0.00017, "memory": 7401, "data_time": 0.01349, "decode.loss_ce": 0.56451, "decode.acc_seg": 79.30067, "loss": 0.56451, "time": 0.24086} -{"mode": "train", "epoch": 1, "iter": 6600, "lr": 0.00017, "memory": 7401, "data_time": 0.01378, "decode.loss_ce": 0.57403, "decode.acc_seg": 78.80969, "loss": 0.57403, "time": 0.23143} -{"mode": "train", "epoch": 
1, "iter": 6650, "lr": 0.00017, "memory": 7401, "data_time": 0.01414, "decode.loss_ce": 0.59292, "decode.acc_seg": 78.26675, "loss": 0.59292, "time": 0.22574} -{"mode": "train", "epoch": 1, "iter": 6700, "lr": 0.00017, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.59199, "decode.acc_seg": 78.00054, "loss": 0.59199, "time": 0.22502} -{"mode": "train", "epoch": 1, "iter": 6750, "lr": 0.00017, "memory": 7401, "data_time": 0.01404, "decode.loss_ce": 0.58133, "decode.acc_seg": 78.60433, "loss": 0.58133, "time": 0.22763} -{"mode": "train", "epoch": 1, "iter": 6800, "lr": 0.00017, "memory": 7401, "data_time": 0.01336, "decode.loss_ce": 0.60553, "decode.acc_seg": 78.27398, "loss": 0.60553, "time": 0.23663} -{"mode": "train", "epoch": 1, "iter": 6850, "lr": 0.00017, "memory": 7401, "data_time": 0.01373, "decode.loss_ce": 0.56092, "decode.acc_seg": 79.41443, "loss": 0.56092, "time": 0.22416} -{"mode": "train", "epoch": 1, "iter": 6900, "lr": 0.00017, "memory": 7401, "data_time": 0.01332, "decode.loss_ce": 0.57551, "decode.acc_seg": 79.00117, "loss": 0.57551, "time": 0.22638} -{"mode": "train", "epoch": 1, "iter": 6950, "lr": 0.00017, "memory": 7401, "data_time": 0.01419, "decode.loss_ce": 0.5452, "decode.acc_seg": 79.8279, "loss": 0.5452, "time": 0.22539} -{"mode": "train", "epoch": 1, "iter": 7000, "lr": 0.00017, "memory": 7401, "data_time": 0.01363, "decode.loss_ce": 0.55403, "decode.acc_seg": 79.16377, "loss": 0.55403, "time": 0.23137} -{"mode": "train", "epoch": 1, "iter": 7050, "lr": 0.00017, "memory": 7401, "data_time": 0.01369, "decode.loss_ce": 0.55818, "decode.acc_seg": 79.35419, "loss": 0.55818, "time": 0.22776} -{"mode": "train", "epoch": 1, "iter": 7100, "lr": 0.00017, "memory": 7401, "data_time": 0.01435, "decode.loss_ce": 0.55181, "decode.acc_seg": 79.64719, "loss": 0.55181, "time": 0.22844} -{"mode": "train", "epoch": 1, "iter": 7150, "lr": 0.00017, "memory": 7401, "data_time": 0.01373, "decode.loss_ce": 0.59016, "decode.acc_seg": 78.79134, "loss": 0.59016, "time": 0.22203} -{"mode": "train", "epoch": 1, "iter": 7200, "lr": 0.00017, "memory": 7401, "data_time": 0.01386, "decode.loss_ce": 0.57353, "decode.acc_seg": 79.25576, "loss": 0.57353, "time": 0.22283} -{"mode": "train", "epoch": 1, "iter": 7250, "lr": 0.00017, "memory": 7401, "data_time": 0.01426, "decode.loss_ce": 0.55705, "decode.acc_seg": 79.75166, "loss": 0.55705, "time": 0.23309} -{"mode": "train", "epoch": 1, "iter": 7300, "lr": 0.00017, "memory": 7401, "data_time": 0.01372, "decode.loss_ce": 0.56026, "decode.acc_seg": 79.616, "loss": 0.56026, "time": 0.22686} -{"mode": "train", "epoch": 1, "iter": 7350, "lr": 0.00017, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.54837, "decode.acc_seg": 79.42044, "loss": 0.54837, "time": 0.23322} -{"mode": "train", "epoch": 1, "iter": 7400, "lr": 0.00017, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.54798, "decode.acc_seg": 79.73734, "loss": 0.54798, "time": 0.22246} -{"mode": "train", "epoch": 1, "iter": 7450, "lr": 0.00017, "memory": 7401, "data_time": 0.01358, "decode.loss_ce": 0.54886, "decode.acc_seg": 80.082, "loss": 0.54886, "time": 0.23569} -{"mode": "train", "epoch": 1, "iter": 7500, "lr": 0.00017, "memory": 7401, "data_time": 0.01358, "decode.loss_ce": 0.53331, "decode.acc_seg": 80.56802, "loss": 0.53331, "time": 0.2355} -{"mode": "train", "epoch": 1, "iter": 7550, "lr": 0.00017, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.5307, "decode.acc_seg": 80.20493, "loss": 0.5307, "time": 0.22743} -{"mode": "train", "epoch": 1, "iter": 
7600, "lr": 0.00017, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.56224, "decode.acc_seg": 79.35433, "loss": 0.56224, "time": 0.22868} -{"mode": "train", "epoch": 1, "iter": 7650, "lr": 0.00017, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.54851, "decode.acc_seg": 79.83118, "loss": 0.54851, "time": 0.22649} -{"mode": "train", "epoch": 1, "iter": 7700, "lr": 0.00017, "memory": 7401, "data_time": 0.01403, "decode.loss_ce": 0.57711, "decode.acc_seg": 79.11201, "loss": 0.57711, "time": 0.22495} -{"mode": "train", "epoch": 1, "iter": 7750, "lr": 0.00016, "memory": 7401, "data_time": 0.01401, "decode.loss_ce": 0.523, "decode.acc_seg": 80.47355, "loss": 0.523, "time": 0.22177} -{"mode": "train", "epoch": 1, "iter": 7800, "lr": 0.00016, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.54473, "decode.acc_seg": 80.08859, "loss": 0.54473, "time": 0.22999} -{"mode": "train", "epoch": 1, "iter": 7850, "lr": 0.00016, "memory": 7401, "data_time": 0.01422, "decode.loss_ce": 0.53549, "decode.acc_seg": 80.13729, "loss": 0.53549, "time": 0.23531} -{"mode": "train", "epoch": 1, "iter": 7900, "lr": 0.00016, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.52461, "decode.acc_seg": 80.15831, "loss": 0.52461, "time": 0.22394} -{"mode": "train", "epoch": 1, "iter": 7950, "lr": 0.00016, "memory": 7401, "data_time": 0.01388, "decode.loss_ce": 0.54196, "decode.acc_seg": 79.9705, "loss": 0.54196, "time": 0.22935} -{"mode": "train", "epoch": 1, "iter": 8000, "lr": 0.00016, "memory": 7401, "data_time": 0.01369, "decode.loss_ce": 0.55444, "decode.acc_seg": 80.0267, "loss": 0.55444, "time": 0.25448} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00016, "aAcc": 0.787, "mIoU": 0.3918, "mAcc": 0.5129, "IoU.wall": 0.7099, "IoU.building": 0.7979, "IoU.sky": 0.9351, "IoU.floor": 0.778, "IoU.tree": 0.7132, "IoU.ceiling": 0.7772, "IoU.road": 0.7835, "IoU.bed ": 0.8362, "IoU.windowpane": 0.546, "IoU.grass": 0.6229, "IoU.cabinet": 0.5147, "IoU.sidewalk": 0.5883, "IoU.person": 0.7764, "IoU.earth": 0.2985, "IoU.door": 0.3794, "IoU.table": 0.4831, "IoU.mountain": 0.5091, "IoU.plant": 0.4683, "IoU.curtain": 0.6723, "IoU.chair": 0.4919, "IoU.car": 0.8066, "IoU.water": 0.3792, "IoU.painting": 0.6283, "IoU.sofa": 0.5839, "IoU.shelf": 0.3856, "IoU.house": 0.3857, "IoU.sea": 0.5088, "IoU.mirror": 0.5101, "IoU.rug": 0.5227, "IoU.field": 0.2408, "IoU.armchair": 0.3011, "IoU.seat": 0.5181, "IoU.fence": 0.3257, "IoU.desk": 0.3853, "IoU.rock": 0.3742, "IoU.wardrobe": 0.4002, "IoU.lamp": 0.5379, "IoU.bathtub": 0.6984, "IoU.railing": 0.2644, "IoU.cushion": 0.4693, "IoU.base": 0.1782, "IoU.box": 0.1547, "IoU.column": 0.4008, "IoU.signboard": 0.2917, "IoU.chest of drawers": 0.3343, "IoU.counter": 0.2303, "IoU.sand": 0.2892, "IoU.sink": 0.6026, "IoU.skyscraper": 0.4707, "IoU.fireplace": 0.619, "IoU.refrigerator": 0.5743, "IoU.grandstand": 0.3722, "IoU.path": 0.199, "IoU.stairs": 0.1744, "IoU.runway": 0.7009, "IoU.case": 0.4489, "IoU.pool table": 0.8954, "IoU.pillow": 0.3533, "IoU.screen door": 0.4725, "IoU.stairway": 0.2738, "IoU.river": 0.1479, "IoU.bridge": 0.3513, "IoU.bookcase": 0.2873, "IoU.blind": 0.2609, "IoU.coffee table": 0.5194, "IoU.toilet": 0.7345, "IoU.flower": 0.3155, "IoU.book": 0.445, "IoU.hill": 0.0148, "IoU.bench": 0.3844, "IoU.countertop": 0.4534, "IoU.stove": 0.5731, "IoU.palm": 0.4528, "IoU.kitchen island": 0.2142, "IoU.computer": 0.5696, "IoU.swivel chair": 0.3333, "IoU.boat": 0.4834, "IoU.bar": 0.3015, "IoU.arcade machine": 0.3554, "IoU.hovel": 0.3652, "IoU.bus": 0.8408, 
"IoU.towel": 0.549, "IoU.light": 0.2861, "IoU.truck": 0.2317, "IoU.tower": 0.3375, "IoU.chandelier": 0.5443, "IoU.awning": 0.1646, "IoU.streetlight": 0.1631, "IoU.booth": 0.2611, "IoU.television receiver": 0.6521, "IoU.airplane": 0.4426, "IoU.dirt track": 0.019, "IoU.apparel": 0.3213, "IoU.pole": 0.1205, "IoU.land": 0.0786, "IoU.bannister": 0.0729, "IoU.escalator": 0.0996, "IoU.ottoman": 0.3564, "IoU.bottle": 0.1335, "IoU.buffet": 0.4472, "IoU.poster": 0.0607, "IoU.stage": 0.0555, "IoU.van": 0.2217, "IoU.ship": 0.6677, "IoU.fountain": 0.063, "IoU.conveyer belt": 0.5268, "IoU.canopy": 0.3176, "IoU.washer": 0.5787, "IoU.plaything": 0.2417, "IoU.swimming pool": 0.3883, "IoU.stool": 0.2553, "IoU.barrel": 0.0868, "IoU.basket": 0.2201, "IoU.waterfall": 0.5556, "IoU.tent": 0.8738, "IoU.bag": 0.01, "IoU.minibike": 0.4067, "IoU.cradle": 0.6819, "IoU.oven": 0.1834, "IoU.ball": 0.423, "IoU.food": 0.516, "IoU.step": 0.0315, "IoU.tank": 0.3021, "IoU.trade name": 0.1208, "IoU.microwave": 0.4591, "IoU.pot": 0.2583, "IoU.animal": 0.5657, "IoU.bicycle": 0.3838, "IoU.lake": 0.0321, "IoU.dishwasher": 0.488, "IoU.screen": 0.5513, "IoU.blanket": 0.0415, "IoU.sculpture": 0.4055, "IoU.hood": 0.3453, "IoU.sconce": 0.1839, "IoU.vase": 0.2675, "IoU.traffic light": 0.1871, "IoU.tray": 0.0318, "IoU.ashcan": 0.3179, "IoU.fan": 0.5353, "IoU.pier": 0.1432, "IoU.crt screen": 0.0411, "IoU.plate": 0.4436, "IoU.monitor": 0.1258, "IoU.bulletin board": 0.4222, "IoU.shower": 0.0, "IoU.radiator": 0.4695, "IoU.glass": 0.0561, "IoU.clock": 0.1757, "IoU.flag": 0.4223, "Acc.wall": 0.8439, "Acc.building": 0.9279, "Acc.sky": 0.9663, "Acc.floor": 0.8814, "Acc.tree": 0.8656, "Acc.ceiling": 0.9387, "Acc.road": 0.8524, "Acc.bed ": 0.9284, "Acc.windowpane": 0.6871, "Acc.grass": 0.7089, "Acc.cabinet": 0.5908, "Acc.sidewalk": 0.747, "Acc.person": 0.9027, "Acc.earth": 0.4288, "Acc.door": 0.4712, "Acc.table": 0.7089, "Acc.mountain": 0.7266, "Acc.plant": 0.6032, "Acc.curtain": 0.7505, "Acc.chair": 0.6812, "Acc.car": 0.8925, "Acc.water": 0.4454, "Acc.painting": 0.8219, "Acc.sofa": 0.831, "Acc.shelf": 0.6862, "Acc.house": 0.5171, "Acc.sea": 0.8338, "Acc.mirror": 0.5802, "Acc.rug": 0.5929, "Acc.field": 0.4976, "Acc.armchair": 0.401, "Acc.seat": 0.6619, "Acc.fence": 0.4871, "Acc.desk": 0.551, "Acc.rock": 0.8331, "Acc.wardrobe": 0.5794, "Acc.lamp": 0.6766, "Acc.bathtub": 0.8014, "Acc.railing": 0.3311, "Acc.cushion": 0.6536, "Acc.base": 0.293, "Acc.box": 0.1901, "Acc.column": 0.5393, "Acc.signboard": 0.3796, "Acc.chest of drawers": 0.4489, "Acc.counter": 0.3346, "Acc.sand": 0.4425, "Acc.sink": 0.6611, "Acc.skyscraper": 0.618, "Acc.fireplace": 0.8447, "Acc.refrigerator": 0.7463, "Acc.grandstand": 0.7875, "Acc.path": 0.3696, "Acc.stairs": 0.1843, "Acc.runway": 0.9113, "Acc.case": 0.595, "Acc.pool table": 0.9429, "Acc.pillow": 0.3991, "Acc.screen door": 0.7725, "Acc.stairway": 0.3792, "Acc.river": 0.4164, "Acc.bridge": 0.3905, "Acc.bookcase": 0.4553, "Acc.blind": 0.3009, "Acc.coffee table": 0.7666, "Acc.toilet": 0.8755, "Acc.flower": 0.5654, "Acc.book": 0.6277, "Acc.hill": 0.0156, "Acc.bench": 0.4179, "Acc.countertop": 0.6786, "Acc.stove": 0.8449, "Acc.palm": 0.586, "Acc.kitchen island": 0.6934, "Acc.computer": 0.6307, "Acc.swivel chair": 0.4467, "Acc.boat": 0.6397, "Acc.bar": 0.3494, "Acc.arcade machine": 0.3652, "Acc.hovel": 0.6295, "Acc.bus": 0.9033, "Acc.towel": 0.6186, "Acc.light": 0.3058, "Acc.truck": 0.3239, "Acc.tower": 0.4774, "Acc.chandelier": 0.7752, "Acc.awning": 0.1835, "Acc.streetlight": 0.2048, "Acc.booth": 0.4556, "Acc.television 
receiver": 0.7793, "Acc.airplane": 0.652, "Acc.dirt track": 0.1412, "Acc.apparel": 0.4895, "Acc.pole": 0.158, "Acc.land": 0.124, "Acc.bannister": 0.1004, "Acc.escalator": 0.1037, "Acc.ottoman": 0.4476, "Acc.bottle": 0.1533, "Acc.buffet": 0.4841, "Acc.poster": 0.0728, "Acc.stage": 0.1416, "Acc.van": 0.2561, "Acc.ship": 0.8099, "Acc.fountain": 0.0648, "Acc.conveyer belt": 0.8117, "Acc.canopy": 0.4946, "Acc.washer": 0.6583, "Acc.plaything": 0.3105, "Acc.swimming pool": 0.6498, "Acc.stool": 0.2747, "Acc.barrel": 0.3862, "Acc.basket": 0.3088, "Acc.waterfall": 0.6519, "Acc.tent": 0.9757, "Acc.bag": 0.0107, "Acc.minibike": 0.6381, "Acc.cradle": 0.8756, "Acc.oven": 0.2958, "Acc.ball": 0.5747, "Acc.food": 0.6702, "Acc.step": 0.0323, "Acc.tank": 0.3557, "Acc.trade name": 0.1234, "Acc.microwave": 0.4888, "Acc.pot": 0.2833, "Acc.animal": 0.5904, "Acc.bicycle": 0.7453, "Acc.lake": 0.0351, "Acc.dishwasher": 0.5961, "Acc.screen": 0.7073, "Acc.blanket": 0.0429, "Acc.sculpture": 0.571, "Acc.hood": 0.3757, "Acc.sconce": 0.1987, "Acc.vase": 0.4512, "Acc.traffic light": 0.2735, "Acc.tray": 0.0342, "Acc.ashcan": 0.3847, "Acc.fan": 0.6654, "Acc.pier": 0.3057, "Acc.crt screen": 0.0725, "Acc.plate": 0.5336, "Acc.monitor": 0.1389, "Acc.bulletin board": 0.4546, "Acc.shower": 0.0, "Acc.radiator": 0.5284, "Acc.glass": 0.0577, "Acc.clock": 0.1907, "Acc.flag": 0.459} -{"mode": "train", "epoch": 1, "iter": 8050, "lr": 0.00016, "memory": 7401, "data_time": 0.54653, "decode.loss_ce": 0.54392, "decode.acc_seg": 80.39178, "loss": 0.54392, "time": 0.76015} -{"mode": "train", "epoch": 1, "iter": 8100, "lr": 0.00016, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.54189, "decode.acc_seg": 79.98301, "loss": 0.54189, "time": 0.22745} -{"mode": "train", "epoch": 1, "iter": 8150, "lr": 0.00016, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.54152, "decode.acc_seg": 79.86608, "loss": 0.54152, "time": 0.22373} -{"mode": "train", "epoch": 1, "iter": 8200, "lr": 0.00016, "memory": 7401, "data_time": 0.01419, "decode.loss_ce": 0.549, "decode.acc_seg": 79.60521, "loss": 0.549, "time": 0.22397} -{"mode": "train", "epoch": 1, "iter": 8250, "lr": 0.00016, "memory": 7401, "data_time": 0.01481, "decode.loss_ce": 0.5415, "decode.acc_seg": 80.12466, "loss": 0.5415, "time": 0.23309} -{"mode": "train", "epoch": 1, "iter": 8300, "lr": 0.00016, "memory": 7401, "data_time": 0.0144, "decode.loss_ce": 0.50629, "decode.acc_seg": 80.96322, "loss": 0.50629, "time": 0.22795} -{"mode": "train", "epoch": 1, "iter": 8350, "lr": 0.00016, "memory": 7401, "data_time": 0.01423, "decode.loss_ce": 0.52787, "decode.acc_seg": 80.32397, "loss": 0.52787, "time": 0.22853} -{"mode": "train", "epoch": 1, "iter": 8400, "lr": 0.00016, "memory": 7401, "data_time": 0.01396, "decode.loss_ce": 0.54977, "decode.acc_seg": 79.85113, "loss": 0.54977, "time": 0.22579} -{"mode": "train", "epoch": 1, "iter": 8450, "lr": 0.00016, "memory": 7401, "data_time": 0.01421, "decode.loss_ce": 0.5742, "decode.acc_seg": 78.31223, "loss": 0.5742, "time": 0.22102} -{"mode": "train", "epoch": 1, "iter": 8500, "lr": 0.00016, "memory": 7401, "data_time": 0.01352, "decode.loss_ce": 0.56744, "decode.acc_seg": 79.37788, "loss": 0.56744, "time": 0.23215} -{"mode": "train", "epoch": 1, "iter": 8550, "lr": 0.00016, "memory": 7401, "data_time": 0.01404, "decode.loss_ce": 0.53606, "decode.acc_seg": 80.02197, "loss": 0.53606, "time": 0.22472} -{"mode": "train", "epoch": 1, "iter": 8600, "lr": 0.00016, "memory": 7401, "data_time": 0.01418, "decode.loss_ce": 0.52494, "decode.acc_seg": 
80.7904, "loss": 0.52494, "time": 0.22704} -{"mode": "train", "epoch": 1, "iter": 8650, "lr": 0.00016, "memory": 7401, "data_time": 0.01365, "decode.loss_ce": 0.50481, "decode.acc_seg": 81.28454, "loss": 0.50481, "time": 0.22256} -{"mode": "train", "epoch": 1, "iter": 8700, "lr": 0.00016, "memory": 7401, "data_time": 0.01361, "decode.loss_ce": 0.50311, "decode.acc_seg": 81.15834, "loss": 0.50311, "time": 0.22629} -{"mode": "train", "epoch": 1, "iter": 8750, "lr": 0.00016, "memory": 7401, "data_time": 0.01403, "decode.loss_ce": 0.52207, "decode.acc_seg": 80.65809, "loss": 0.52207, "time": 0.23061} -{"mode": "train", "epoch": 1, "iter": 8800, "lr": 0.00016, "memory": 7401, "data_time": 0.01387, "decode.loss_ce": 0.52199, "decode.acc_seg": 80.81503, "loss": 0.52199, "time": 0.23228} -{"mode": "train", "epoch": 1, "iter": 8850, "lr": 0.00016, "memory": 7401, "data_time": 0.01421, "decode.loss_ce": 0.50557, "decode.acc_seg": 81.08652, "loss": 0.50557, "time": 0.2238} -{"mode": "train", "epoch": 1, "iter": 8900, "lr": 0.00016, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.50235, "decode.acc_seg": 81.22182, "loss": 0.50235, "time": 0.22773} -{"mode": "train", "epoch": 1, "iter": 8950, "lr": 0.00016, "memory": 7401, "data_time": 0.01379, "decode.loss_ce": 0.5162, "decode.acc_seg": 80.50969, "loss": 0.5162, "time": 0.23466} -{"mode": "train", "epoch": 1, "iter": 9000, "lr": 0.00016, "memory": 7401, "data_time": 0.01411, "decode.loss_ce": 0.50324, "decode.acc_seg": 81.26449, "loss": 0.50324, "time": 0.23073} -{"mode": "train", "epoch": 1, "iter": 9050, "lr": 0.00016, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.50582, "decode.acc_seg": 81.45042, "loss": 0.50582, "time": 0.22519} -{"mode": "train", "epoch": 1, "iter": 9100, "lr": 0.00016, "memory": 7401, "data_time": 0.01413, "decode.loss_ce": 0.51539, "decode.acc_seg": 80.63761, "loss": 0.51539, "time": 0.22544} -{"mode": "train", "epoch": 1, "iter": 9150, "lr": 0.00016, "memory": 7401, "data_time": 0.01353, "decode.loss_ce": 0.51971, "decode.acc_seg": 80.37975, "loss": 0.51971, "time": 0.22504} -{"mode": "train", "epoch": 1, "iter": 9200, "lr": 0.00016, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.50139, "decode.acc_seg": 81.07438, "loss": 0.50139, "time": 0.23322} -{"mode": "train", "epoch": 1, "iter": 9250, "lr": 0.00016, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.48557, "decode.acc_seg": 81.53267, "loss": 0.48557, "time": 0.23829} -{"mode": "train", "epoch": 1, "iter": 9300, "lr": 0.00016, "memory": 7401, "data_time": 0.01406, "decode.loss_ce": 0.48007, "decode.acc_seg": 82.16591, "loss": 0.48007, "time": 0.22803} -{"mode": "train", "epoch": 1, "iter": 9350, "lr": 0.00016, "memory": 7401, "data_time": 0.01405, "decode.loss_ce": 0.48898, "decode.acc_seg": 81.90656, "loss": 0.48898, "time": 0.22719} -{"mode": "train", "epoch": 1, "iter": 9400, "lr": 0.00016, "memory": 7401, "data_time": 0.0141, "decode.loss_ce": 0.50573, "decode.acc_seg": 81.32031, "loss": 0.50573, "time": 0.22597} -{"mode": "train", "epoch": 1, "iter": 9450, "lr": 0.00016, "memory": 7401, "data_time": 0.0141, "decode.loss_ce": 0.50649, "decode.acc_seg": 81.2483, "loss": 0.50649, "time": 0.23196} -{"mode": "train", "epoch": 1, "iter": 9500, "lr": 0.00016, "memory": 7401, "data_time": 0.01361, "decode.loss_ce": 0.50089, "decode.acc_seg": 81.53336, "loss": 0.50089, "time": 0.23633} -{"mode": "train", "epoch": 1, "iter": 9550, "lr": 0.00016, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.49805, "decode.acc_seg": 81.42197, 
"loss": 0.49805, "time": 0.22438} -{"mode": "train", "epoch": 1, "iter": 9600, "lr": 0.00016, "memory": 7401, "data_time": 0.01387, "decode.loss_ce": 0.48132, "decode.acc_seg": 81.57894, "loss": 0.48132, "time": 0.2305} -{"mode": "train", "epoch": 1, "iter": 9650, "lr": 0.00016, "memory": 7401, "data_time": 0.01834, "decode.loss_ce": 0.49019, "decode.acc_seg": 81.72945, "loss": 0.49019, "time": 0.2303} -{"mode": "train", "epoch": 1, "iter": 9700, "lr": 0.00016, "memory": 7401, "data_time": 0.014, "decode.loss_ce": 0.48538, "decode.acc_seg": 81.75836, "loss": 0.48538, "time": 0.22872} -{"mode": "train", "epoch": 1, "iter": 9750, "lr": 0.00016, "memory": 7401, "data_time": 0.01475, "decode.loss_ce": 0.52149, "decode.acc_seg": 80.38362, "loss": 0.52149, "time": 0.23816} -{"mode": "train", "epoch": 1, "iter": 9800, "lr": 0.00016, "memory": 7401, "data_time": 0.01406, "decode.loss_ce": 0.48866, "decode.acc_seg": 81.45425, "loss": 0.48866, "time": 0.23137} -{"mode": "train", "epoch": 1, "iter": 9850, "lr": 0.00016, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.49305, "decode.acc_seg": 81.82794, "loss": 0.49305, "time": 0.22578} -{"mode": "train", "epoch": 1, "iter": 9900, "lr": 0.00016, "memory": 7401, "data_time": 0.01384, "decode.loss_ce": 0.50919, "decode.acc_seg": 81.32684, "loss": 0.50919, "time": 0.2225} -{"mode": "train", "epoch": 1, "iter": 9950, "lr": 0.00015, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.48314, "decode.acc_seg": 81.44073, "loss": 0.48314, "time": 0.22755} -{"mode": "train", "epoch": 1, "iter": 10000, "lr": 0.00015, "memory": 7401, "data_time": 0.01378, "decode.loss_ce": 0.48981, "decode.acc_seg": 82.04504, "loss": 0.48981, "time": 0.23458} -{"mode": "train", "epoch": 1, "iter": 10050, "lr": 0.00015, "memory": 7401, "data_time": 0.01372, "decode.loss_ce": 0.48449, "decode.acc_seg": 81.83507, "loss": 0.48449, "time": 0.22618} -{"mode": "train", "epoch": 1, "iter": 10100, "lr": 0.00015, "memory": 7401, "data_time": 0.01396, "decode.loss_ce": 0.48555, "decode.acc_seg": 81.47687, "loss": 0.48555, "time": 0.2272} -{"mode": "train", "epoch": 1, "iter": 10150, "lr": 0.00015, "memory": 7401, "data_time": 0.01403, "decode.loss_ce": 0.48985, "decode.acc_seg": 81.59661, "loss": 0.48985, "time": 0.2258} -{"mode": "train", "epoch": 1, "iter": 10200, "lr": 0.00015, "memory": 7401, "data_time": 0.01397, "decode.loss_ce": 0.53307, "decode.acc_seg": 81.06003, "loss": 0.53307, "time": 0.22979} -{"mode": "train", "epoch": 1, "iter": 10250, "lr": 0.00015, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.50517, "decode.acc_seg": 81.00784, "loss": 0.50517, "time": 0.22938} -{"mode": "train", "epoch": 1, "iter": 10300, "lr": 0.00015, "memory": 7401, "data_time": 0.01372, "decode.loss_ce": 0.48607, "decode.acc_seg": 81.68567, "loss": 0.48607, "time": 0.22099} -{"mode": "train", "epoch": 1, "iter": 10350, "lr": 0.00015, "memory": 7401, "data_time": 0.01384, "decode.loss_ce": 0.46033, "decode.acc_seg": 82.80425, "loss": 0.46033, "time": 0.23038} -{"mode": "train", "epoch": 1, "iter": 10400, "lr": 0.00015, "memory": 7401, "data_time": 0.01364, "decode.loss_ce": 0.49933, "decode.acc_seg": 81.32009, "loss": 0.49933, "time": 0.23333} -{"mode": "train", "epoch": 1, "iter": 10450, "lr": 0.00015, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.48412, "decode.acc_seg": 81.9499, "loss": 0.48412, "time": 0.22256} -{"mode": "train", "epoch": 1, "iter": 10500, "lr": 0.00015, "memory": 7401, "data_time": 0.01404, "decode.loss_ce": 0.45842, "decode.acc_seg": 82.74224, 
"loss": 0.45842, "time": 0.23072} -{"mode": "train", "epoch": 1, "iter": 10550, "lr": 0.00015, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.48529, "decode.acc_seg": 82.14766, "loss": 0.48529, "time": 0.23208} -{"mode": "train", "epoch": 1, "iter": 10600, "lr": 0.00015, "memory": 7401, "data_time": 0.01414, "decode.loss_ce": 0.47944, "decode.acc_seg": 81.82474, "loss": 0.47944, "time": 0.23061} -{"mode": "train", "epoch": 1, "iter": 10650, "lr": 0.00015, "memory": 7401, "data_time": 0.01429, "decode.loss_ce": 0.46317, "decode.acc_seg": 82.29828, "loss": 0.46317, "time": 0.22412} -{"mode": "train", "epoch": 1, "iter": 10700, "lr": 0.00015, "memory": 7401, "data_time": 0.01363, "decode.loss_ce": 0.47894, "decode.acc_seg": 82.29631, "loss": 0.47894, "time": 0.23911} -{"mode": "train", "epoch": 1, "iter": 10750, "lr": 0.00015, "memory": 7401, "data_time": 0.01351, "decode.loss_ce": 0.46814, "decode.acc_seg": 82.25405, "loss": 0.46814, "time": 0.22659} -{"mode": "train", "epoch": 1, "iter": 10800, "lr": 0.00015, "memory": 7401, "data_time": 0.01379, "decode.loss_ce": 0.47621, "decode.acc_seg": 81.94543, "loss": 0.47621, "time": 0.22608} -{"mode": "train", "epoch": 1, "iter": 10850, "lr": 0.00015, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.46783, "decode.acc_seg": 82.43557, "loss": 0.46783, "time": 0.22647} -{"mode": "train", "epoch": 1, "iter": 10900, "lr": 0.00015, "memory": 7401, "data_time": 0.01313, "decode.loss_ce": 0.47222, "decode.acc_seg": 82.09711, "loss": 0.47222, "time": 0.22448} -{"mode": "train", "epoch": 1, "iter": 10950, "lr": 0.00015, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.46949, "decode.acc_seg": 82.56419, "loss": 0.46949, "time": 0.23425} -{"mode": "train", "epoch": 1, "iter": 11000, "lr": 0.00015, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.46081, "decode.acc_seg": 82.78617, "loss": 0.46081, "time": 0.2272} -{"mode": "train", "epoch": 1, "iter": 11050, "lr": 0.00015, "memory": 7401, "data_time": 0.01388, "decode.loss_ce": 0.45, "decode.acc_seg": 82.90714, "loss": 0.45, "time": 0.22771} -{"mode": "train", "epoch": 1, "iter": 11100, "lr": 0.00015, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.48284, "decode.acc_seg": 82.14117, "loss": 0.48284, "time": 0.2321} -{"mode": "train", "epoch": 1, "iter": 11150, "lr": 0.00015, "memory": 7401, "data_time": 0.01437, "decode.loss_ce": 0.45827, "decode.acc_seg": 82.79062, "loss": 0.45827, "time": 0.22916} -{"mode": "train", "epoch": 1, "iter": 11200, "lr": 0.00015, "memory": 7401, "data_time": 0.0136, "decode.loss_ce": 0.47323, "decode.acc_seg": 82.21498, "loss": 0.47323, "time": 0.23359} -{"mode": "train", "epoch": 1, "iter": 11250, "lr": 0.00015, "memory": 7401, "data_time": 0.01409, "decode.loss_ce": 0.47682, "decode.acc_seg": 82.23955, "loss": 0.47682, "time": 0.22444} -{"mode": "train", "epoch": 1, "iter": 11300, "lr": 0.00015, "memory": 7401, "data_time": 0.0145, "decode.loss_ce": 0.48035, "decode.acc_seg": 81.82036, "loss": 0.48035, "time": 0.23079} -{"mode": "train", "epoch": 1, "iter": 11350, "lr": 0.00015, "memory": 7401, "data_time": 0.0141, "decode.loss_ce": 0.47087, "decode.acc_seg": 82.39469, "loss": 0.47087, "time": 0.22365} -{"mode": "train", "epoch": 1, "iter": 11400, "lr": 0.00015, "memory": 7401, "data_time": 0.01365, "decode.loss_ce": 0.46243, "decode.acc_seg": 82.48008, "loss": 0.46243, "time": 0.22989} -{"mode": "train", "epoch": 1, "iter": 11450, "lr": 0.00015, "memory": 7401, "data_time": 0.01407, "decode.loss_ce": 0.46934, "decode.acc_seg": 
82.27625, "loss": 0.46934, "time": 0.23429} -{"mode": "train", "epoch": 1, "iter": 11500, "lr": 0.00015, "memory": 7401, "data_time": 0.01389, "decode.loss_ce": 0.47977, "decode.acc_seg": 81.95869, "loss": 0.47977, "time": 0.23057} -{"mode": "train", "epoch": 1, "iter": 11550, "lr": 0.00015, "memory": 7401, "data_time": 0.01415, "decode.loss_ce": 0.43837, "decode.acc_seg": 83.22661, "loss": 0.43837, "time": 0.23382} -{"mode": "train", "epoch": 1, "iter": 11600, "lr": 0.00015, "memory": 7401, "data_time": 0.01432, "decode.loss_ce": 0.4509, "decode.acc_seg": 82.96174, "loss": 0.4509, "time": 0.22648} -{"mode": "train", "epoch": 1, "iter": 11650, "lr": 0.00015, "memory": 7401, "data_time": 0.0143, "decode.loss_ce": 0.4602, "decode.acc_seg": 82.72678, "loss": 0.4602, "time": 0.22718} -{"mode": "train", "epoch": 1, "iter": 11700, "lr": 0.00015, "memory": 7401, "data_time": 0.01799, "decode.loss_ce": 0.45725, "decode.acc_seg": 82.7806, "loss": 0.45725, "time": 0.23124} -{"mode": "train", "epoch": 1, "iter": 11750, "lr": 0.00015, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.45542, "decode.acc_seg": 82.85492, "loss": 0.45542, "time": 0.22876} -{"mode": "train", "epoch": 1, "iter": 11800, "lr": 0.00015, "memory": 7401, "data_time": 0.01763, "decode.loss_ce": 0.4641, "decode.acc_seg": 82.60395, "loss": 0.4641, "time": 0.23039} -{"mode": "train", "epoch": 1, "iter": 11850, "lr": 0.00015, "memory": 7401, "data_time": 0.01355, "decode.loss_ce": 0.44216, "decode.acc_seg": 83.28534, "loss": 0.44216, "time": 0.22503} -{"mode": "train", "epoch": 1, "iter": 11900, "lr": 0.00015, "memory": 7401, "data_time": 0.01379, "decode.loss_ce": 0.44748, "decode.acc_seg": 83.35052, "loss": 0.44748, "time": 0.23331} -{"mode": "train", "epoch": 1, "iter": 11950, "lr": 0.00015, "memory": 7401, "data_time": 0.01418, "decode.loss_ce": 0.45453, "decode.acc_seg": 82.92979, "loss": 0.45453, "time": 0.22962} -{"mode": "train", "epoch": 1, "iter": 12000, "lr": 0.00015, "memory": 7401, "data_time": 0.01474, "decode.loss_ce": 0.44997, "decode.acc_seg": 82.91989, "loss": 0.44997, "time": 0.26089} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00015, "aAcc": 0.7958, "mIoU": 0.4176, "mAcc": 0.53, "IoU.wall": 0.7265, "IoU.building": 0.7859, "IoU.sky": 0.9342, "IoU.floor": 0.7876, "IoU.tree": 0.6945, "IoU.ceiling": 0.7908, "IoU.road": 0.7968, "IoU.bed ": 0.8434, "IoU.windowpane": 0.5636, "IoU.grass": 0.669, "IoU.cabinet": 0.5464, "IoU.sidewalk": 0.6071, "IoU.person": 0.7915, "IoU.earth": 0.3171, "IoU.door": 0.356, "IoU.table": 0.5093, "IoU.mountain": 0.4804, "IoU.plant": 0.4338, "IoU.curtain": 0.7026, "IoU.chair": 0.5127, "IoU.car": 0.8064, "IoU.water": 0.5224, "IoU.painting": 0.679, "IoU.sofa": 0.5967, "IoU.shelf": 0.4025, "IoU.house": 0.3184, "IoU.sea": 0.5217, "IoU.mirror": 0.5317, "IoU.rug": 0.5168, "IoU.field": 0.2801, "IoU.armchair": 0.366, "IoU.seat": 0.5477, "IoU.fence": 0.3267, "IoU.desk": 0.3927, "IoU.rock": 0.3749, "IoU.wardrobe": 0.4563, "IoU.lamp": 0.5632, "IoU.bathtub": 0.6608, "IoU.railing": 0.2627, "IoU.cushion": 0.4811, "IoU.base": 0.1502, "IoU.box": 0.1775, "IoU.column": 0.409, "IoU.signboard": 0.3272, "IoU.chest of drawers": 0.3604, "IoU.counter": 0.2281, "IoU.sand": 0.2371, "IoU.sink": 0.6517, "IoU.skyscraper": 0.4493, "IoU.fireplace": 0.6731, "IoU.refrigerator": 0.5792, "IoU.grandstand": 0.3618, "IoU.path": 0.1831, "IoU.stairs": 0.2632, "IoU.runway": 0.7132, "IoU.case": 0.4649, "IoU.pool table": 0.8979, "IoU.pillow": 0.5123, "IoU.screen door": 0.4879, "IoU.stairway": 0.2797, "IoU.river": 0.079, 
"IoU.bridge": 0.4082, "IoU.bookcase": 0.2684, "IoU.blind": 0.4157, "IoU.coffee table": 0.5337, "IoU.toilet": 0.7835, "IoU.flower": 0.3752, "IoU.book": 0.4528, "IoU.hill": 0.1053, "IoU.bench": 0.3802, "IoU.countertop": 0.5026, "IoU.stove": 0.6768, "IoU.palm": 0.4435, "IoU.kitchen island": 0.2916, "IoU.computer": 0.6048, "IoU.swivel chair": 0.3711, "IoU.boat": 0.5067, "IoU.bar": 0.2516, "IoU.arcade machine": 0.5949, "IoU.hovel": 0.07, "IoU.bus": 0.8484, "IoU.towel": 0.5144, "IoU.light": 0.3245, "IoU.truck": 0.1754, "IoU.tower": 0.1769, "IoU.chandelier": 0.6041, "IoU.awning": 0.1911, "IoU.streetlight": 0.1961, "IoU.booth": 0.3659, "IoU.television receiver": 0.6507, "IoU.airplane": 0.5317, "IoU.dirt track": 0.1597, "IoU.apparel": 0.3081, "IoU.pole": 0.0862, "IoU.land": 0.1182, "IoU.bannister": 0.0734, "IoU.escalator": 0.4008, "IoU.ottoman": 0.3806, "IoU.bottle": 0.1052, "IoU.buffet": 0.3864, "IoU.poster": 0.2624, "IoU.stage": 0.0891, "IoU.van": 0.3843, "IoU.ship": 0.1053, "IoU.fountain": 0.1762, "IoU.conveyer belt": 0.6582, "IoU.canopy": 0.1253, "IoU.washer": 0.6508, "IoU.plaything": 0.2374, "IoU.swimming pool": 0.2869, "IoU.stool": 0.3337, "IoU.barrel": 0.1908, "IoU.basket": 0.3, "IoU.waterfall": 0.5655, "IoU.tent": 0.8858, "IoU.bag": 0.0203, "IoU.minibike": 0.4772, "IoU.cradle": 0.7261, "IoU.oven": 0.2906, "IoU.ball": 0.4728, "IoU.food": 0.4957, "IoU.step": 0.1419, "IoU.tank": 0.2927, "IoU.trade name": 0.1921, "IoU.microwave": 0.5918, "IoU.pot": 0.3054, "IoU.animal": 0.5562, "IoU.bicycle": 0.5359, "IoU.lake": 0.2499, "IoU.dishwasher": 0.5424, "IoU.screen": 0.6002, "IoU.blanket": 0.122, "IoU.sculpture": 0.3735, "IoU.hood": 0.3892, "IoU.sconce": 0.3585, "IoU.vase": 0.2881, "IoU.traffic light": 0.2201, "IoU.tray": 0.0067, "IoU.ashcan": 0.3913, "IoU.fan": 0.5205, "IoU.pier": 0.3755, "IoU.crt screen": 0.0046, "IoU.plate": 0.3628, "IoU.monitor": 0.3033, "IoU.bulletin board": 0.4706, "IoU.shower": 0.0, "IoU.radiator": 0.5151, "IoU.glass": 0.0522, "IoU.clock": 0.2777, "IoU.flag": 0.2934, "Acc.wall": 0.8584, "Acc.building": 0.9007, "Acc.sky": 0.9632, "Acc.floor": 0.8789, "Acc.tree": 0.9238, "Acc.ceiling": 0.8789, "Acc.road": 0.8754, "Acc.bed ": 0.9418, "Acc.windowpane": 0.7769, "Acc.grass": 0.861, "Acc.cabinet": 0.7306, "Acc.sidewalk": 0.7688, "Acc.person": 0.9, "Acc.earth": 0.477, "Acc.door": 0.4072, "Acc.table": 0.7494, "Acc.mountain": 0.5625, "Acc.plant": 0.5058, "Acc.curtain": 0.834, "Acc.chair": 0.6296, "Acc.car": 0.9084, "Acc.water": 0.7715, "Acc.painting": 0.8248, "Acc.sofa": 0.7984, "Acc.shelf": 0.6286, "Acc.house": 0.445, "Acc.sea": 0.6437, "Acc.mirror": 0.6649, "Acc.rug": 0.6016, "Acc.field": 0.4231, "Acc.armchair": 0.4908, "Acc.seat": 0.7432, "Acc.fence": 0.4852, "Acc.desk": 0.584, "Acc.rock": 0.4586, "Acc.wardrobe": 0.6604, "Acc.lamp": 0.6982, "Acc.bathtub": 0.7726, "Acc.railing": 0.3426, "Acc.cushion": 0.5815, "Acc.base": 0.2098, "Acc.box": 0.2272, "Acc.column": 0.4711, "Acc.signboard": 0.4426, "Acc.chest of drawers": 0.4416, "Acc.counter": 0.3093, "Acc.sand": 0.4214, "Acc.sink": 0.7517, "Acc.skyscraper": 0.5738, "Acc.fireplace": 0.8409, "Acc.refrigerator": 0.6517, "Acc.grandstand": 0.6823, "Acc.path": 0.2639, "Acc.stairs": 0.3365, "Acc.runway": 0.9119, "Acc.case": 0.6467, "Acc.pool table": 0.9423, "Acc.pillow": 0.7249, "Acc.screen door": 0.8762, "Acc.stairway": 0.4845, "Acc.river": 0.1357, "Acc.bridge": 0.5077, "Acc.bookcase": 0.3602, "Acc.blind": 0.4797, "Acc.coffee table": 0.7509, "Acc.toilet": 0.8919, "Acc.flower": 0.5916, "Acc.book": 0.5941, "Acc.hill": 0.2018, "Acc.bench": 0.5515, 
"Acc.countertop": 0.7077, "Acc.stove": 0.7749, "Acc.palm": 0.5449, "Acc.kitchen island": 0.4995, "Acc.computer": 0.6811, "Acc.swivel chair": 0.5804, "Acc.boat": 0.6602, "Acc.bar": 0.2732, "Acc.arcade machine": 0.6203, "Acc.hovel": 0.0791, "Acc.bus": 0.9105, "Acc.towel": 0.6325, "Acc.light": 0.3417, "Acc.truck": 0.2673, "Acc.tower": 0.3062, "Acc.chandelier": 0.763, "Acc.awning": 0.2268, "Acc.streetlight": 0.2831, "Acc.booth": 0.3941, "Acc.television receiver": 0.8016, "Acc.airplane": 0.6477, "Acc.dirt track": 0.2156, "Acc.apparel": 0.442, "Acc.pole": 0.1021, "Acc.land": 0.1835, "Acc.bannister": 0.1263, "Acc.escalator": 0.5519, "Acc.ottoman": 0.4697, "Acc.bottle": 0.1668, "Acc.buffet": 0.4452, "Acc.poster": 0.3612, "Acc.stage": 0.1626, "Acc.van": 0.4568, "Acc.ship": 0.1392, "Acc.fountain": 0.1912, "Acc.conveyer belt": 0.8889, "Acc.canopy": 0.2065, "Acc.washer": 0.6858, "Acc.plaything": 0.3852, "Acc.swimming pool": 0.3966, "Acc.stool": 0.4752, "Acc.barrel": 0.299, "Acc.basket": 0.3886, "Acc.waterfall": 0.6315, "Acc.tent": 0.9791, "Acc.bag": 0.0207, "Acc.minibike": 0.6379, "Acc.cradle": 0.8714, "Acc.oven": 0.6104, "Acc.ball": 0.6109, "Acc.food": 0.5746, "Acc.step": 0.1886, "Acc.tank": 0.314, "Acc.trade name": 0.2179, "Acc.microwave": 0.6512, "Acc.pot": 0.3392, "Acc.animal": 0.5858, "Acc.bicycle": 0.6679, "Acc.lake": 0.4433, "Acc.dishwasher": 0.6326, "Acc.screen": 0.7736, "Acc.blanket": 0.1372, "Acc.sculpture": 0.4874, "Acc.hood": 0.4787, "Acc.sconce": 0.4572, "Acc.vase": 0.4311, "Acc.traffic light": 0.3334, "Acc.tray": 0.0092, "Acc.ashcan": 0.4792, "Acc.fan": 0.6195, "Acc.pier": 0.6459, "Acc.crt screen": 0.0066, "Acc.plate": 0.4017, "Acc.monitor": 0.459, "Acc.bulletin board": 0.5699, "Acc.shower": 0.0, "Acc.radiator": 0.5966, "Acc.glass": 0.0532, "Acc.clock": 0.3023, "Acc.flag": 0.3122} -{"mode": "train", "epoch": 1, "iter": 12050, "lr": 0.00015, "memory": 7401, "data_time": 0.53886, "decode.loss_ce": 0.45351, "decode.acc_seg": 82.92769, "loss": 0.45351, "time": 0.75067} -{"mode": "train", "epoch": 1, "iter": 12100, "lr": 0.00014, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.45003, "decode.acc_seg": 82.89501, "loss": 0.45003, "time": 0.23132} -{"mode": "train", "epoch": 1, "iter": 12150, "lr": 0.00014, "memory": 7401, "data_time": 0.0135, "decode.loss_ce": 0.44258, "decode.acc_seg": 83.32744, "loss": 0.44258, "time": 0.22364} -{"mode": "train", "epoch": 1, "iter": 12200, "lr": 0.00014, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.44592, "decode.acc_seg": 83.34103, "loss": 0.44592, "time": 0.22518} -{"mode": "train", "epoch": 1, "iter": 12250, "lr": 0.00014, "memory": 7401, "data_time": 0.01344, "decode.loss_ce": 0.45009, "decode.acc_seg": 82.93597, "loss": 0.45009, "time": 0.22934} -{"mode": "train", "epoch": 1, "iter": 12300, "lr": 0.00014, "memory": 7401, "data_time": 0.014, "decode.loss_ce": 0.44627, "decode.acc_seg": 83.05068, "loss": 0.44627, "time": 0.22467} -{"mode": "train", "epoch": 1, "iter": 12350, "lr": 0.00014, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.43822, "decode.acc_seg": 83.36784, "loss": 0.43822, "time": 0.23276} -{"mode": "train", "epoch": 1, "iter": 12400, "lr": 0.00014, "memory": 7401, "data_time": 0.01368, "decode.loss_ce": 0.43658, "decode.acc_seg": 83.33342, "loss": 0.43658, "time": 0.22234} -{"mode": "train", "epoch": 1, "iter": 12450, "lr": 0.00014, "memory": 7401, "data_time": 0.01413, "decode.loss_ce": 0.43944, "decode.acc_seg": 83.75643, "loss": 0.43944, "time": 0.22978} -{"mode": "train", "epoch": 1, "iter": 12500, "lr": 
0.00014, "memory": 7401, "data_time": 0.01377, "decode.loss_ce": 0.4348, "decode.acc_seg": 83.37668, "loss": 0.4348, "time": 0.22603} -{"mode": "train", "epoch": 1, "iter": 12550, "lr": 0.00014, "memory": 7401, "data_time": 0.01351, "decode.loss_ce": 0.42177, "decode.acc_seg": 83.57986, "loss": 0.42177, "time": 0.22488} -{"mode": "train", "epoch": 1, "iter": 12600, "lr": 0.00014, "memory": 7401, "data_time": 0.01419, "decode.loss_ce": 0.44678, "decode.acc_seg": 82.85038, "loss": 0.44678, "time": 0.22927} -{"mode": "train", "epoch": 1, "iter": 12650, "lr": 0.00014, "memory": 7401, "data_time": 0.01378, "decode.loss_ce": 0.44918, "decode.acc_seg": 83.21786, "loss": 0.44918, "time": 0.22431} -{"mode": "train", "epoch": 1, "iter": 12700, "lr": 0.00014, "memory": 7401, "data_time": 0.01345, "decode.loss_ce": 0.42719, "decode.acc_seg": 83.91257, "loss": 0.42719, "time": 0.22832} -{"mode": "train", "epoch": 1, "iter": 12750, "lr": 0.00014, "memory": 7401, "data_time": 0.01356, "decode.loss_ce": 0.42215, "decode.acc_seg": 84.11563, "loss": 0.42215, "time": 0.22704} -{"mode": "train", "epoch": 1, "iter": 12800, "lr": 0.00014, "memory": 7401, "data_time": 0.01386, "decode.loss_ce": 0.44757, "decode.acc_seg": 83.0129, "loss": 0.44757, "time": 0.22895} -{"mode": "train", "epoch": 1, "iter": 12850, "lr": 0.00014, "memory": 7401, "data_time": 0.01371, "decode.loss_ce": 0.45676, "decode.acc_seg": 82.74381, "loss": 0.45676, "time": 0.22709} -{"mode": "train", "epoch": 1, "iter": 12900, "lr": 0.00014, "memory": 7401, "data_time": 0.01379, "decode.loss_ce": 0.42976, "decode.acc_seg": 83.42097, "loss": 0.42976, "time": 0.22601} -{"mode": "train", "epoch": 1, "iter": 12950, "lr": 0.00014, "memory": 7401, "data_time": 0.01744, "decode.loss_ce": 0.43563, "decode.acc_seg": 83.5016, "loss": 0.43563, "time": 0.22519} -{"mode": "train", "epoch": 1, "iter": 13000, "lr": 0.00014, "memory": 7401, "data_time": 0.01431, "decode.loss_ce": 0.4367, "decode.acc_seg": 83.73526, "loss": 0.4367, "time": 0.21825} -{"mode": "train", "epoch": 1, "iter": 13050, "lr": 0.00014, "memory": 7401, "data_time": 0.01399, "decode.loss_ce": 0.43586, "decode.acc_seg": 83.5521, "loss": 0.43586, "time": 0.23115} -{"mode": "train", "epoch": 1, "iter": 13100, "lr": 0.00014, "memory": 7401, "data_time": 0.01811, "decode.loss_ce": 0.41183, "decode.acc_seg": 84.39617, "loss": 0.41183, "time": 0.23363} -{"mode": "train", "epoch": 1, "iter": 13150, "lr": 0.00014, "memory": 7401, "data_time": 0.01362, "decode.loss_ce": 0.40597, "decode.acc_seg": 84.3222, "loss": 0.40597, "time": 0.22327} -{"mode": "train", "epoch": 1, "iter": 13200, "lr": 0.00014, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.43612, "decode.acc_seg": 83.26661, "loss": 0.43612, "time": 0.23052} -{"mode": "train", "epoch": 1, "iter": 13250, "lr": 0.00014, "memory": 7401, "data_time": 0.01424, "decode.loss_ce": 0.42235, "decode.acc_seg": 83.65011, "loss": 0.42235, "time": 0.22578} -{"mode": "train", "epoch": 1, "iter": 13300, "lr": 0.00014, "memory": 7401, "data_time": 0.01362, "decode.loss_ce": 0.41422, "decode.acc_seg": 84.42055, "loss": 0.41422, "time": 0.22692} -{"mode": "train", "epoch": 1, "iter": 13350, "lr": 0.00014, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.41236, "decode.acc_seg": 84.4991, "loss": 0.41236, "time": 0.22936} -{"mode": "train", "epoch": 1, "iter": 13400, "lr": 0.00014, "memory": 7401, "data_time": 0.01413, "decode.loss_ce": 0.40844, "decode.acc_seg": 84.72846, "loss": 0.40844, "time": 0.22977} -{"mode": "train", "epoch": 1, "iter": 
13450, "lr": 0.00014, "memory": 7401, "data_time": 0.01356, "decode.loss_ce": 0.42455, "decode.acc_seg": 83.88151, "loss": 0.42455, "time": 0.22111} -{"mode": "train", "epoch": 1, "iter": 13500, "lr": 0.00014, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.40581, "decode.acc_seg": 84.80105, "loss": 0.40581, "time": 0.22489} -{"mode": "train", "epoch": 1, "iter": 13550, "lr": 0.00014, "memory": 7401, "data_time": 0.01357, "decode.loss_ce": 0.40199, "decode.acc_seg": 84.69702, "loss": 0.40199, "time": 0.23424} -{"mode": "train", "epoch": 1, "iter": 13600, "lr": 0.00014, "memory": 7401, "data_time": 0.01356, "decode.loss_ce": 0.39695, "decode.acc_seg": 84.91949, "loss": 0.39695, "time": 0.22796} -{"mode": "train", "epoch": 1, "iter": 13650, "lr": 0.00014, "memory": 7401, "data_time": 0.01384, "decode.loss_ce": 0.39163, "decode.acc_seg": 84.9046, "loss": 0.39163, "time": 0.22242} -{"mode": "train", "epoch": 1, "iter": 13700, "lr": 0.00014, "memory": 7401, "data_time": 0.01799, "decode.loss_ce": 0.41543, "decode.acc_seg": 84.0481, "loss": 0.41543, "time": 0.22593} -{"mode": "train", "epoch": 1, "iter": 13750, "lr": 0.00014, "memory": 7401, "data_time": 0.01405, "decode.loss_ce": 0.3968, "decode.acc_seg": 84.73841, "loss": 0.3968, "time": 0.22875} -{"mode": "train", "epoch": 1, "iter": 13800, "lr": 0.00014, "memory": 7401, "data_time": 0.01403, "decode.loss_ce": 0.41737, "decode.acc_seg": 84.10487, "loss": 0.41737, "time": 0.22412} -{"mode": "train", "epoch": 1, "iter": 13850, "lr": 0.00014, "memory": 7401, "data_time": 0.01353, "decode.loss_ce": 0.41991, "decode.acc_seg": 83.83713, "loss": 0.41991, "time": 0.23242} -{"mode": "train", "epoch": 1, "iter": 13900, "lr": 0.00014, "memory": 7401, "data_time": 0.01489, "decode.loss_ce": 0.41687, "decode.acc_seg": 84.5111, "loss": 0.41687, "time": 0.22549} -{"mode": "train", "epoch": 1, "iter": 13950, "lr": 0.00014, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.41382, "decode.acc_seg": 84.46467, "loss": 0.41382, "time": 0.22108} -{"mode": "train", "epoch": 1, "iter": 14000, "lr": 0.00014, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.40406, "decode.acc_seg": 84.58658, "loss": 0.40406, "time": 0.22965} -{"mode": "train", "epoch": 1, "iter": 14050, "lr": 0.00014, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.40542, "decode.acc_seg": 84.77157, "loss": 0.40542, "time": 0.22493} -{"mode": "train", "epoch": 1, "iter": 14100, "lr": 0.00014, "memory": 7401, "data_time": 0.01378, "decode.loss_ce": 0.42695, "decode.acc_seg": 83.43952, "loss": 0.42695, "time": 0.2287} -{"mode": "train", "epoch": 1, "iter": 14150, "lr": 0.00014, "memory": 7401, "data_time": 0.01381, "decode.loss_ce": 0.40232, "decode.acc_seg": 84.60006, "loss": 0.40232, "time": 0.22644} -{"mode": "train", "epoch": 1, "iter": 14200, "lr": 0.00014, "memory": 7401, "data_time": 0.01402, "decode.loss_ce": 0.40484, "decode.acc_seg": 84.25217, "loss": 0.40484, "time": 0.22155} -{"mode": "train", "epoch": 1, "iter": 14250, "lr": 0.00013, "memory": 7401, "data_time": 0.01401, "decode.loss_ce": 0.4036, "decode.acc_seg": 84.77196, "loss": 0.4036, "time": 0.22737} -{"mode": "train", "epoch": 1, "iter": 14300, "lr": 0.00013, "memory": 7401, "data_time": 0.01402, "decode.loss_ce": 0.40109, "decode.acc_seg": 84.68111, "loss": 0.40109, "time": 0.22811} -{"mode": "train", "epoch": 1, "iter": 14350, "lr": 0.00013, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.41308, "decode.acc_seg": 84.28862, "loss": 0.41308, "time": 0.23288} -{"mode": "train", "epoch": 1, 
"iter": 14400, "lr": 0.00013, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.39097, "decode.acc_seg": 85.12298, "loss": 0.39097, "time": 0.2221} -{"mode": "train", "epoch": 1, "iter": 14450, "lr": 0.00013, "memory": 7401, "data_time": 0.01376, "decode.loss_ce": 0.4028, "decode.acc_seg": 84.60355, "loss": 0.4028, "time": 0.22397} -{"mode": "train", "epoch": 1, "iter": 14500, "lr": 0.00013, "memory": 7401, "data_time": 0.01355, "decode.loss_ce": 0.4364, "decode.acc_seg": 83.50081, "loss": 0.4364, "time": 0.2371} -{"mode": "train", "epoch": 1, "iter": 14550, "lr": 0.00013, "memory": 7401, "data_time": 0.01403, "decode.loss_ce": 0.42058, "decode.acc_seg": 84.01986, "loss": 0.42058, "time": 0.22255} -{"mode": "train", "epoch": 1, "iter": 14600, "lr": 0.00013, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.42349, "decode.acc_seg": 84.57711, "loss": 0.42349, "time": 0.22526} -{"mode": "train", "epoch": 1, "iter": 14650, "lr": 0.00013, "memory": 7401, "data_time": 0.01368, "decode.loss_ce": 0.4094, "decode.acc_seg": 84.14217, "loss": 0.4094, "time": 0.22588} -{"mode": "train", "epoch": 1, "iter": 14700, "lr": 0.00013, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.40013, "decode.acc_seg": 84.43904, "loss": 0.40013, "time": 0.22083} -{"mode": "train", "epoch": 1, "iter": 14750, "lr": 0.00013, "memory": 7401, "data_time": 0.01362, "decode.loss_ce": 0.43677, "decode.acc_seg": 83.78609, "loss": 0.43677, "time": 0.23203} -{"mode": "train", "epoch": 1, "iter": 14800, "lr": 0.00013, "memory": 7401, "data_time": 0.01368, "decode.loss_ce": 0.40397, "decode.acc_seg": 84.60906, "loss": 0.40397, "time": 0.233} -{"mode": "train", "epoch": 1, "iter": 14850, "lr": 0.00013, "memory": 7401, "data_time": 0.0137, "decode.loss_ce": 0.39578, "decode.acc_seg": 84.77959, "loss": 0.39578, "time": 0.22145} -{"mode": "train", "epoch": 1, "iter": 14900, "lr": 0.00013, "memory": 7401, "data_time": 0.01392, "decode.loss_ce": 0.41022, "decode.acc_seg": 84.58992, "loss": 0.41022, "time": 0.22493} -{"mode": "train", "epoch": 1, "iter": 14950, "lr": 0.00013, "memory": 7401, "data_time": 0.01449, "decode.loss_ce": 0.40762, "decode.acc_seg": 84.33431, "loss": 0.40762, "time": 0.23296} -{"mode": "train", "epoch": 1, "iter": 15000, "lr": 0.00013, "memory": 7401, "data_time": 0.01418, "decode.loss_ce": 0.37972, "decode.acc_seg": 85.10214, "loss": 0.37972, "time": 0.22445} -{"mode": "train", "epoch": 1, "iter": 15050, "lr": 0.00013, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.40372, "decode.acc_seg": 84.64401, "loss": 0.40372, "time": 0.2248} -{"mode": "train", "epoch": 1, "iter": 15100, "lr": 0.00013, "memory": 7401, "data_time": 0.01369, "decode.loss_ce": 0.3927, "decode.acc_seg": 84.93071, "loss": 0.3927, "time": 0.23044} -{"mode": "train", "epoch": 1, "iter": 15150, "lr": 0.00013, "memory": 7401, "data_time": 0.01414, "decode.loss_ce": 0.38771, "decode.acc_seg": 85.14642, "loss": 0.38771, "time": 0.23092} -{"mode": "train", "epoch": 1, "iter": 15200, "lr": 0.00013, "memory": 7401, "data_time": 0.01389, "decode.loss_ce": 0.39871, "decode.acc_seg": 84.85741, "loss": 0.39871, "time": 0.22376} -{"mode": "train", "epoch": 1, "iter": 15250, "lr": 0.00013, "memory": 7401, "data_time": 0.01811, "decode.loss_ce": 0.39578, "decode.acc_seg": 84.85553, "loss": 0.39578, "time": 0.2344} -{"mode": "train", "epoch": 1, "iter": 15300, "lr": 0.00013, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.39916, "decode.acc_seg": 84.68471, "loss": 0.39916, "time": 0.22848} -{"mode": "train", "epoch": 1, 
"iter": 15350, "lr": 0.00013, "memory": 7401, "data_time": 0.01412, "decode.loss_ce": 0.3746, "decode.acc_seg": 85.49077, "loss": 0.3746, "time": 0.22809} -{"mode": "train", "epoch": 1, "iter": 15400, "lr": 0.00013, "memory": 7401, "data_time": 0.01435, "decode.loss_ce": 0.38814, "decode.acc_seg": 84.95106, "loss": 0.38814, "time": 0.23222} -{"mode": "train", "epoch": 1, "iter": 15450, "lr": 0.00013, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.3821, "decode.acc_seg": 85.34043, "loss": 0.3821, "time": 0.22721} -{"mode": "train", "epoch": 1, "iter": 15500, "lr": 0.00013, "memory": 7401, "data_time": 0.01346, "decode.loss_ce": 0.3965, "decode.acc_seg": 84.82256, "loss": 0.3965, "time": 0.21979} -{"mode": "train", "epoch": 1, "iter": 15550, "lr": 0.00013, "memory": 7401, "data_time": 0.01362, "decode.loss_ce": 0.37875, "decode.acc_seg": 85.20255, "loss": 0.37875, "time": 0.23462} -{"mode": "train", "epoch": 1, "iter": 15600, "lr": 0.00013, "memory": 7401, "data_time": 0.01341, "decode.loss_ce": 0.39254, "decode.acc_seg": 85.09445, "loss": 0.39254, "time": 0.23131} -{"mode": "train", "epoch": 1, "iter": 15650, "lr": 0.00013, "memory": 7401, "data_time": 0.01365, "decode.loss_ce": 0.3841, "decode.acc_seg": 84.99717, "loss": 0.3841, "time": 0.22333} -{"mode": "train", "epoch": 1, "iter": 15700, "lr": 0.00013, "memory": 7401, "data_time": 0.01409, "decode.loss_ce": 0.38176, "decode.acc_seg": 85.25994, "loss": 0.38176, "time": 0.22966} -{"mode": "train", "epoch": 1, "iter": 15750, "lr": 0.00013, "memory": 7401, "data_time": 0.01414, "decode.loss_ce": 0.38605, "decode.acc_seg": 85.35393, "loss": 0.38605, "time": 0.23034} -{"mode": "train", "epoch": 1, "iter": 15800, "lr": 0.00013, "memory": 7401, "data_time": 0.01388, "decode.loss_ce": 0.40119, "decode.acc_seg": 84.717, "loss": 0.40119, "time": 0.22202} -{"mode": "train", "epoch": 1, "iter": 15850, "lr": 0.00013, "memory": 7401, "data_time": 0.01428, "decode.loss_ce": 0.39155, "decode.acc_seg": 85.17165, "loss": 0.39155, "time": 0.22945} -{"mode": "train", "epoch": 1, "iter": 15900, "lr": 0.00013, "memory": 7401, "data_time": 0.01387, "decode.loss_ce": 0.39243, "decode.acc_seg": 85.05212, "loss": 0.39243, "time": 0.22696} -{"mode": "train", "epoch": 1, "iter": 15950, "lr": 0.00013, "memory": 7401, "data_time": 0.01399, "decode.loss_ce": 0.40458, "decode.acc_seg": 84.47035, "loss": 0.40458, "time": 0.22377} -{"mode": "train", "epoch": 1, "iter": 16000, "lr": 0.00013, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.38948, "decode.acc_seg": 84.87191, "loss": 0.38948, "time": 0.25558} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00013, "aAcc": 0.8029, "mIoU": 0.4275, "mAcc": 0.5444, "IoU.wall": 0.734, "IoU.building": 0.8039, "IoU.sky": 0.9375, "IoU.floor": 0.7833, "IoU.tree": 0.7294, "IoU.ceiling": 0.7921, "IoU.road": 0.804, "IoU.bed ": 0.8431, "IoU.windowpane": 0.5835, "IoU.grass": 0.6705, "IoU.cabinet": 0.5597, "IoU.sidewalk": 0.5799, "IoU.person": 0.7908, "IoU.earth": 0.3486, "IoU.door": 0.3546, "IoU.table": 0.5144, "IoU.mountain": 0.5725, "IoU.plant": 0.5043, "IoU.curtain": 0.6575, "IoU.chair": 0.4737, "IoU.car": 0.8158, "IoU.water": 0.4153, "IoU.painting": 0.6735, "IoU.sofa": 0.61, "IoU.shelf": 0.409, "IoU.house": 0.5158, "IoU.sea": 0.5276, "IoU.mirror": 0.5351, "IoU.rug": 0.5838, "IoU.field": 0.2675, "IoU.armchair": 0.3789, "IoU.seat": 0.4711, "IoU.fence": 0.3953, "IoU.desk": 0.454, "IoU.rock": 0.3998, "IoU.wardrobe": 0.4835, "IoU.lamp": 0.5698, "IoU.bathtub": 0.7368, "IoU.railing": 0.2968, "IoU.cushion": 0.4756, 
"IoU.base": 0.2207, "IoU.box": 0.1509, "IoU.column": 0.438, "IoU.signboard": 0.3189, "IoU.chest of drawers": 0.4022, "IoU.counter": 0.2251, "IoU.sand": 0.3234, "IoU.sink": 0.6644, "IoU.skyscraper": 0.4666, "IoU.fireplace": 0.4908, "IoU.refrigerator": 0.6709, "IoU.grandstand": 0.4397, "IoU.path": 0.137, "IoU.stairs": 0.3326, "IoU.runway": 0.6636, "IoU.case": 0.441, "IoU.pool table": 0.9022, "IoU.pillow": 0.4986, "IoU.screen door": 0.5258, "IoU.stairway": 0.3445, "IoU.river": 0.0899, "IoU.bridge": 0.5888, "IoU.bookcase": 0.2721, "IoU.blind": 0.3744, "IoU.coffee table": 0.5023, "IoU.toilet": 0.8202, "IoU.flower": 0.3153, "IoU.book": 0.4061, "IoU.hill": 0.0482, "IoU.bench": 0.3509, "IoU.countertop": 0.477, "IoU.stove": 0.6641, "IoU.palm": 0.4729, "IoU.kitchen island": 0.2957, "IoU.computer": 0.5725, "IoU.swivel chair": 0.4017, "IoU.boat": 0.6301, "IoU.bar": 0.2922, "IoU.arcade machine": 0.4664, "IoU.hovel": 0.388, "IoU.bus": 0.8718, "IoU.towel": 0.537, "IoU.light": 0.3542, "IoU.truck": 0.25, "IoU.tower": 0.4254, "IoU.chandelier": 0.6142, "IoU.awning": 0.22, "IoU.streetlight": 0.2207, "IoU.booth": 0.3864, "IoU.television receiver": 0.6247, "IoU.airplane": 0.5464, "IoU.dirt track": 0.0103, "IoU.apparel": 0.3028, "IoU.pole": 0.1056, "IoU.land": 0.0005, "IoU.bannister": 0.1047, "IoU.escalator": 0.3497, "IoU.ottoman": 0.3726, "IoU.bottle": 0.1836, "IoU.buffet": 0.3691, "IoU.poster": 0.244, "IoU.stage": 0.0778, "IoU.van": 0.3574, "IoU.ship": 0.2927, "IoU.fountain": 0.1268, "IoU.conveyer belt": 0.576, "IoU.canopy": 0.273, "IoU.washer": 0.6664, "IoU.plaything": 0.2694, "IoU.swimming pool": 0.2753, "IoU.stool": 0.3844, "IoU.barrel": 0.2709, "IoU.basket": 0.2817, "IoU.waterfall": 0.5887, "IoU.tent": 0.9149, "IoU.bag": 0.068, "IoU.minibike": 0.5926, "IoU.cradle": 0.7161, "IoU.oven": 0.2421, "IoU.ball": 0.4739, "IoU.food": 0.5319, "IoU.step": 0.047, "IoU.tank": 0.3275, "IoU.trade name": 0.2003, "IoU.microwave": 0.3563, "IoU.pot": 0.3547, "IoU.animal": 0.553, "IoU.bicycle": 0.5014, "IoU.lake": 0.2928, "IoU.dishwasher": 0.5803, "IoU.screen": 0.6789, "IoU.blanket": 0.0775, "IoU.sculpture": 0.4059, "IoU.hood": 0.5021, "IoU.sconce": 0.3141, "IoU.vase": 0.3073, "IoU.traffic light": 0.2168, "IoU.tray": 0.0093, "IoU.ashcan": 0.3489, "IoU.fan": 0.5444, "IoU.pier": 0.2303, "IoU.crt screen": 0.0063, "IoU.plate": 0.2793, "IoU.monitor": 0.2046, "IoU.bulletin board": 0.4677, "IoU.shower": 0.0, "IoU.radiator": 0.4796, "IoU.glass": 0.0525, "IoU.clock": 0.2786, "IoU.flag": 0.2925, "Acc.wall": 0.8605, "Acc.building": 0.913, "Acc.sky": 0.966, "Acc.floor": 0.88, "Acc.tree": 0.8721, "Acc.ceiling": 0.8818, "Acc.road": 0.9069, "Acc.bed ": 0.9527, "Acc.windowpane": 0.7674, "Acc.grass": 0.8357, "Acc.cabinet": 0.6713, "Acc.sidewalk": 0.7439, "Acc.person": 0.897, "Acc.earth": 0.4725, "Acc.door": 0.3962, "Acc.table": 0.7413, "Acc.mountain": 0.7834, "Acc.plant": 0.6591, "Acc.curtain": 0.8569, "Acc.chair": 0.5938, "Acc.car": 0.9244, "Acc.water": 0.53, "Acc.painting": 0.8028, "Acc.sofa": 0.8238, "Acc.shelf": 0.6091, "Acc.house": 0.6999, "Acc.sea": 0.8047, "Acc.mirror": 0.6024, "Acc.rug": 0.6851, "Acc.field": 0.4125, "Acc.armchair": 0.6271, "Acc.seat": 0.6765, "Acc.fence": 0.5763, "Acc.desk": 0.6014, "Acc.rock": 0.5683, "Acc.wardrobe": 0.7484, "Acc.lamp": 0.6662, "Acc.bathtub": 0.8012, "Acc.railing": 0.4447, "Acc.cushion": 0.6568, "Acc.base": 0.465, "Acc.box": 0.1783, "Acc.column": 0.5453, "Acc.signboard": 0.4133, "Acc.chest of drawers": 0.57, "Acc.counter": 0.3053, "Acc.sand": 0.4922, "Acc.sink": 0.7383, "Acc.skyscraper": 0.5706, 
"Acc.fireplace": 0.9306, "Acc.refrigerator": 0.8174, "Acc.grandstand": 0.6368, "Acc.path": 0.1837, "Acc.stairs": 0.4033, "Acc.runway": 0.8575, "Acc.case": 0.6419, "Acc.pool table": 0.9386, "Acc.pillow": 0.634, "Acc.screen door": 0.7626, "Acc.stairway": 0.4465, "Acc.river": 0.2608, "Acc.bridge": 0.6758, "Acc.bookcase": 0.3683, "Acc.blind": 0.4263, "Acc.coffee table": 0.8433, "Acc.toilet": 0.8733, "Acc.flower": 0.4521, "Acc.book": 0.5067, "Acc.hill": 0.0693, "Acc.bench": 0.4324, "Acc.countertop": 0.6794, "Acc.stove": 0.7219, "Acc.palm": 0.7267, "Acc.kitchen island": 0.627, "Acc.computer": 0.6782, "Acc.swivel chair": 0.5856, "Acc.boat": 0.7556, "Acc.bar": 0.3697, "Acc.arcade machine": 0.4849, "Acc.hovel": 0.4433, "Acc.bus": 0.9374, "Acc.towel": 0.6202, "Acc.light": 0.3829, "Acc.truck": 0.3155, "Acc.tower": 0.536, "Acc.chandelier": 0.7847, "Acc.awning": 0.29, "Acc.streetlight": 0.3205, "Acc.booth": 0.4182, "Acc.television receiver": 0.8319, "Acc.airplane": 0.6384, "Acc.dirt track": 0.0375, "Acc.apparel": 0.5394, "Acc.pole": 0.1453, "Acc.land": 0.0005, "Acc.bannister": 0.1622, "Acc.escalator": 0.459, "Acc.ottoman": 0.5109, "Acc.bottle": 0.2323, "Acc.buffet": 0.4638, "Acc.poster": 0.3201, "Acc.stage": 0.1078, "Acc.van": 0.4218, "Acc.ship": 0.3158, "Acc.fountain": 0.1522, "Acc.conveyer belt": 0.8916, "Acc.canopy": 0.3444, "Acc.washer": 0.7062, "Acc.plaything": 0.4426, "Acc.swimming pool": 0.6189, "Acc.stool": 0.4837, "Acc.barrel": 0.4867, "Acc.basket": 0.4001, "Acc.waterfall": 0.7732, "Acc.tent": 0.962, "Acc.bag": 0.079, "Acc.minibike": 0.6964, "Acc.cradle": 0.8179, "Acc.oven": 0.5682, "Acc.ball": 0.6151, "Acc.food": 0.6723, "Acc.step": 0.053, "Acc.tank": 0.3676, "Acc.trade name": 0.2198, "Acc.microwave": 0.4045, "Acc.pot": 0.4165, "Acc.animal": 0.5697, "Acc.bicycle": 0.6693, "Acc.lake": 0.3092, "Acc.dishwasher": 0.729, "Acc.screen": 0.8362, "Acc.blanket": 0.0826, "Acc.sculpture": 0.5181, "Acc.hood": 0.5279, "Acc.sconce": 0.3853, "Acc.vase": 0.5042, "Acc.traffic light": 0.3705, "Acc.tray": 0.0112, "Acc.ashcan": 0.4314, "Acc.fan": 0.7039, "Acc.pier": 0.2652, "Acc.crt screen": 0.0133, "Acc.plate": 0.3214, "Acc.monitor": 0.2816, "Acc.bulletin board": 0.5368, "Acc.shower": 0.0, "Acc.radiator": 0.5174, "Acc.glass": 0.0534, "Acc.clock": 0.3156, "Acc.flag": 0.3174} -{"mode": "train", "epoch": 1, "iter": 16050, "lr": 0.00013, "memory": 7401, "data_time": 0.53693, "decode.loss_ce": 0.38878, "decode.acc_seg": 84.91629, "loss": 0.38878, "time": 0.74639} -{"mode": "train", "epoch": 1, "iter": 16100, "lr": 0.00013, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.38649, "decode.acc_seg": 85.33462, "loss": 0.38649, "time": 0.224} -{"mode": "train", "epoch": 1, "iter": 16150, "lr": 0.00013, "memory": 7401, "data_time": 0.01402, "decode.loss_ce": 0.41434, "decode.acc_seg": 84.23532, "loss": 0.41434, "time": 0.22096} -{"mode": "train", "epoch": 1, "iter": 16200, "lr": 0.00013, "memory": 7401, "data_time": 0.01379, "decode.loss_ce": 0.40255, "decode.acc_seg": 84.67513, "loss": 0.40255, "time": 0.22097} -{"mode": "train", "epoch": 1, "iter": 16250, "lr": 0.00013, "memory": 7401, "data_time": 0.01409, "decode.loss_ce": 0.37449, "decode.acc_seg": 85.79529, "loss": 0.37449, "time": 0.23394} -{"mode": "train", "epoch": 1, "iter": 16300, "lr": 0.00013, "memory": 7401, "data_time": 0.01789, "decode.loss_ce": 0.38799, "decode.acc_seg": 85.24943, "loss": 0.38799, "time": 0.2366} -{"mode": "train", "epoch": 1, "iter": 16350, "lr": 0.00013, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.3838, 
"decode.acc_seg": 85.27096, "loss": 0.3838, "time": 0.23205} -{"mode": "train", "epoch": 1, "iter": 16400, "lr": 0.00012, "memory": 7401, "data_time": 0.0136, "decode.loss_ce": 0.36217, "decode.acc_seg": 85.82955, "loss": 0.36217, "time": 0.21956} -{"mode": "train", "epoch": 1, "iter": 16450, "lr": 0.00012, "memory": 7401, "data_time": 0.01377, "decode.loss_ce": 0.37575, "decode.acc_seg": 85.5437, "loss": 0.37575, "time": 0.22353} -{"mode": "train", "epoch": 1, "iter": 16500, "lr": 0.00012, "memory": 7401, "data_time": 0.01425, "decode.loss_ce": 0.37538, "decode.acc_seg": 85.63213, "loss": 0.37538, "time": 0.22286} -{"mode": "train", "epoch": 1, "iter": 16550, "lr": 0.00012, "memory": 7401, "data_time": 0.01362, "decode.loss_ce": 0.36407, "decode.acc_seg": 86.06335, "loss": 0.36407, "time": 0.23441} -{"mode": "train", "epoch": 1, "iter": 16600, "lr": 0.00012, "memory": 7401, "data_time": 0.01434, "decode.loss_ce": 0.37039, "decode.acc_seg": 85.67366, "loss": 0.37039, "time": 0.23121} -{"mode": "train", "epoch": 1, "iter": 16650, "lr": 0.00012, "memory": 7401, "data_time": 0.01434, "decode.loss_ce": 0.36464, "decode.acc_seg": 85.89135, "loss": 0.36464, "time": 0.22529} -{"mode": "train", "epoch": 1, "iter": 16700, "lr": 0.00012, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.38302, "decode.acc_seg": 85.6347, "loss": 0.38302, "time": 0.22602} -{"mode": "train", "epoch": 1, "iter": 16750, "lr": 0.00012, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.37746, "decode.acc_seg": 85.6067, "loss": 0.37746, "time": 0.22207} -{"mode": "train", "epoch": 1, "iter": 16800, "lr": 0.00012, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.36361, "decode.acc_seg": 85.99208, "loss": 0.36361, "time": 0.24491} -{"mode": "train", "epoch": 1, "iter": 16850, "lr": 0.00012, "memory": 7401, "data_time": 0.0134, "decode.loss_ce": 0.39091, "decode.acc_seg": 85.06568, "loss": 0.39091, "time": 0.22531} -{"mode": "train", "epoch": 1, "iter": 16900, "lr": 0.00012, "memory": 7401, "data_time": 0.01348, "decode.loss_ce": 0.37602, "decode.acc_seg": 85.53989, "loss": 0.37602, "time": 0.22644} -{"mode": "train", "epoch": 1, "iter": 16950, "lr": 0.00012, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.37253, "decode.acc_seg": 85.64786, "loss": 0.37253, "time": 0.22173} -{"mode": "train", "epoch": 1, "iter": 17000, "lr": 0.00012, "memory": 7401, "data_time": 0.01396, "decode.loss_ce": 0.38002, "decode.acc_seg": 85.38096, "loss": 0.38002, "time": 0.22305} -{"mode": "train", "epoch": 1, "iter": 17050, "lr": 0.00012, "memory": 7401, "data_time": 0.01376, "decode.loss_ce": 0.36424, "decode.acc_seg": 85.99034, "loss": 0.36424, "time": 0.23784} -{"mode": "train", "epoch": 1, "iter": 17100, "lr": 0.00012, "memory": 7401, "data_time": 0.01387, "decode.loss_ce": 0.35095, "decode.acc_seg": 86.27368, "loss": 0.35095, "time": 0.22951} -{"mode": "train", "epoch": 1, "iter": 17150, "lr": 0.00012, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.37223, "decode.acc_seg": 86.05524, "loss": 0.37223, "time": 0.22221} -{"mode": "train", "epoch": 1, "iter": 17200, "lr": 0.00012, "memory": 7401, "data_time": 0.01386, "decode.loss_ce": 0.38448, "decode.acc_seg": 85.28365, "loss": 0.38448, "time": 0.22491} -{"mode": "train", "epoch": 1, "iter": 17250, "lr": 0.00012, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.37603, "decode.acc_seg": 85.6903, "loss": 0.37603, "time": 0.22786} -{"mode": "train", "epoch": 1, "iter": 17300, "lr": 0.00012, "memory": 7401, "data_time": 0.01875, 
"decode.loss_ce": 0.3702, "decode.acc_seg": 85.73497, "loss": 0.3702, "time": 0.24085} -{"mode": "train", "epoch": 1, "iter": 17350, "lr": 0.00012, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.37383, "decode.acc_seg": 85.46116, "loss": 0.37383, "time": 0.22742} -{"mode": "train", "epoch": 1, "iter": 17400, "lr": 0.00012, "memory": 7401, "data_time": 0.01397, "decode.loss_ce": 0.36041, "decode.acc_seg": 85.91064, "loss": 0.36041, "time": 0.22564} -{"mode": "train", "epoch": 1, "iter": 17450, "lr": 0.00012, "memory": 7401, "data_time": 0.01466, "decode.loss_ce": 0.36352, "decode.acc_seg": 86.01896, "loss": 0.36352, "time": 0.22954} -{"mode": "train", "epoch": 1, "iter": 17500, "lr": 0.00012, "memory": 7401, "data_time": 0.01405, "decode.loss_ce": 0.36599, "decode.acc_seg": 85.83166, "loss": 0.36599, "time": 0.22741} -{"mode": "train", "epoch": 1, "iter": 17550, "lr": 0.00012, "memory": 7401, "data_time": 0.0145, "decode.loss_ce": 0.36489, "decode.acc_seg": 85.89325, "loss": 0.36489, "time": 0.23426} -{"mode": "train", "epoch": 1, "iter": 17600, "lr": 0.00012, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.35996, "decode.acc_seg": 86.00997, "loss": 0.35996, "time": 0.22602} -{"mode": "train", "epoch": 1, "iter": 17650, "lr": 0.00012, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.36151, "decode.acc_seg": 86.04543, "loss": 0.36151, "time": 0.226} -{"mode": "train", "epoch": 1, "iter": 17700, "lr": 0.00012, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.37258, "decode.acc_seg": 85.65338, "loss": 0.37258, "time": 0.22709} -{"mode": "train", "epoch": 1, "iter": 17750, "lr": 0.00012, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.34593, "decode.acc_seg": 86.27273, "loss": 0.34593, "time": 0.22802} -{"mode": "train", "epoch": 1, "iter": 17800, "lr": 0.00012, "memory": 7401, "data_time": 0.01423, "decode.loss_ce": 0.35144, "decode.acc_seg": 86.31738, "loss": 0.35144, "time": 0.24608} -{"mode": "train", "epoch": 1, "iter": 17850, "lr": 0.00012, "memory": 7401, "data_time": 0.01376, "decode.loss_ce": 0.37009, "decode.acc_seg": 85.90001, "loss": 0.37009, "time": 0.2208} -{"mode": "train", "epoch": 1, "iter": 17900, "lr": 0.00012, "memory": 7401, "data_time": 0.01403, "decode.loss_ce": 0.34559, "decode.acc_seg": 86.69646, "loss": 0.34559, "time": 0.2221} -{"mode": "train", "epoch": 1, "iter": 17950, "lr": 0.00012, "memory": 7401, "data_time": 0.0141, "decode.loss_ce": 0.34532, "decode.acc_seg": 86.65609, "loss": 0.34532, "time": 0.22615} -{"mode": "train", "epoch": 1, "iter": 18000, "lr": 0.00012, "memory": 7401, "data_time": 0.01407, "decode.loss_ce": 0.35467, "decode.acc_seg": 86.37172, "loss": 0.35467, "time": 0.22409} -{"mode": "train", "epoch": 1, "iter": 18050, "lr": 0.00012, "memory": 7401, "data_time": 0.01413, "decode.loss_ce": 0.36323, "decode.acc_seg": 86.3726, "loss": 0.36323, "time": 0.22766} -{"mode": "train", "epoch": 1, "iter": 18100, "lr": 0.00012, "memory": 7401, "data_time": 0.01427, "decode.loss_ce": 0.35012, "decode.acc_seg": 86.61465, "loss": 0.35012, "time": 0.22227} -{"mode": "train", "epoch": 1, "iter": 18150, "lr": 0.00012, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.36804, "decode.acc_seg": 85.96309, "loss": 0.36804, "time": 0.22954} -{"mode": "train", "epoch": 1, "iter": 18200, "lr": 0.00012, "memory": 7401, "data_time": 0.01405, "decode.loss_ce": 0.3461, "decode.acc_seg": 86.39048, "loss": 0.3461, "time": 0.23184} -{"mode": "train", "epoch": 1, "iter": 18250, "lr": 0.00012, "memory": 7401, "data_time": 
0.01448, "decode.loss_ce": 0.3656, "decode.acc_seg": 85.86794, "loss": 0.3656, "time": 0.22619} -{"mode": "train", "epoch": 1, "iter": 18300, "lr": 0.00012, "memory": 7401, "data_time": 0.01358, "decode.loss_ce": 0.33413, "decode.acc_seg": 86.95959, "loss": 0.33413, "time": 0.23488} -{"mode": "train", "epoch": 1, "iter": 18350, "lr": 0.00012, "memory": 7401, "data_time": 0.01384, "decode.loss_ce": 0.3853, "decode.acc_seg": 85.14934, "loss": 0.3853, "time": 0.22265} -{"mode": "train", "epoch": 1, "iter": 18400, "lr": 0.00012, "memory": 7401, "data_time": 0.01344, "decode.loss_ce": 0.37317, "decode.acc_seg": 85.73167, "loss": 0.37317, "time": 0.2286} -{"mode": "train", "epoch": 1, "iter": 18450, "lr": 0.00012, "memory": 7401, "data_time": 0.01426, "decode.loss_ce": 0.37181, "decode.acc_seg": 85.82212, "loss": 0.37181, "time": 0.22826} -{"mode": "train", "epoch": 1, "iter": 18500, "lr": 0.00011, "memory": 7401, "data_time": 0.01513, "decode.loss_ce": 0.3481, "decode.acc_seg": 86.4927, "loss": 0.3481, "time": 0.22723} -{"mode": "train", "epoch": 1, "iter": 18550, "lr": 0.00011, "memory": 7401, "data_time": 0.01336, "decode.loss_ce": 0.34328, "decode.acc_seg": 86.81062, "loss": 0.34328, "time": 0.22864} -{"mode": "train", "epoch": 1, "iter": 18600, "lr": 0.00011, "memory": 7401, "data_time": 0.01329, "decode.loss_ce": 0.36922, "decode.acc_seg": 86.06112, "loss": 0.36922, "time": 0.23213} -{"mode": "train", "epoch": 1, "iter": 18650, "lr": 0.00011, "memory": 7401, "data_time": 0.01347, "decode.loss_ce": 0.33562, "decode.acc_seg": 86.84701, "loss": 0.33562, "time": 0.22582} -{"mode": "train", "epoch": 1, "iter": 18700, "lr": 0.00011, "memory": 7401, "data_time": 0.01458, "decode.loss_ce": 0.35249, "decode.acc_seg": 86.637, "loss": 0.35249, "time": 0.22797} -{"mode": "train", "epoch": 1, "iter": 18750, "lr": 0.00011, "memory": 7401, "data_time": 0.01441, "decode.loss_ce": 0.33334, "decode.acc_seg": 87.06937, "loss": 0.33334, "time": 0.23249} -{"mode": "train", "epoch": 1, "iter": 18800, "lr": 0.00011, "memory": 7401, "data_time": 0.01372, "decode.loss_ce": 0.34873, "decode.acc_seg": 86.51966, "loss": 0.34873, "time": 0.2324} -{"mode": "train", "epoch": 1, "iter": 18850, "lr": 0.00011, "memory": 7401, "data_time": 0.01405, "decode.loss_ce": 0.35095, "decode.acc_seg": 86.72336, "loss": 0.35095, "time": 0.22534} -{"mode": "train", "epoch": 1, "iter": 18900, "lr": 0.00011, "memory": 7401, "data_time": 0.01433, "decode.loss_ce": 0.33774, "decode.acc_seg": 86.90246, "loss": 0.33774, "time": 0.23131} -{"mode": "train", "epoch": 1, "iter": 18950, "lr": 0.00011, "memory": 7401, "data_time": 0.01355, "decode.loss_ce": 0.32983, "decode.acc_seg": 87.19046, "loss": 0.32983, "time": 0.22258} -{"mode": "train", "epoch": 1, "iter": 19000, "lr": 0.00011, "memory": 7401, "data_time": 0.01385, "decode.loss_ce": 0.32802, "decode.acc_seg": 87.31137, "loss": 0.32802, "time": 0.22119} -{"mode": "train", "epoch": 1, "iter": 19050, "lr": 0.00011, "memory": 7401, "data_time": 0.01357, "decode.loss_ce": 0.33744, "decode.acc_seg": 86.95112, "loss": 0.33744, "time": 0.23231} -{"mode": "train", "epoch": 1, "iter": 19100, "lr": 0.00011, "memory": 7401, "data_time": 0.01336, "decode.loss_ce": 0.33968, "decode.acc_seg": 86.93245, "loss": 0.33968, "time": 0.22445} -{"mode": "train", "epoch": 1, "iter": 19150, "lr": 0.00011, "memory": 7401, "data_time": 0.01367, "decode.loss_ce": 0.33855, "decode.acc_seg": 86.90706, "loss": 0.33855, "time": 0.22256} -{"mode": "train", "epoch": 1, "iter": 19200, "lr": 0.00011, "memory": 7401, 
"data_time": 0.01395, "decode.loss_ce": 0.34884, "decode.acc_seg": 86.32514, "loss": 0.34884, "time": 0.23588} -{"mode": "train", "epoch": 1, "iter": 19250, "lr": 0.00011, "memory": 7401, "data_time": 0.01369, "decode.loss_ce": 0.35707, "decode.acc_seg": 86.12925, "loss": 0.35707, "time": 0.22859} -{"mode": "train", "epoch": 1, "iter": 19300, "lr": 0.00011, "memory": 7401, "data_time": 0.01381, "decode.loss_ce": 0.33328, "decode.acc_seg": 87.16172, "loss": 0.33328, "time": 0.22964} -{"mode": "train", "epoch": 1, "iter": 19350, "lr": 0.00011, "memory": 7401, "data_time": 0.01422, "decode.loss_ce": 0.33183, "decode.acc_seg": 87.17401, "loss": 0.33183, "time": 0.22641} -{"mode": "train", "epoch": 1, "iter": 19400, "lr": 0.00011, "memory": 7401, "data_time": 0.01368, "decode.loss_ce": 0.33957, "decode.acc_seg": 86.75789, "loss": 0.33957, "time": 0.22936} -{"mode": "train", "epoch": 1, "iter": 19450, "lr": 0.00011, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.34621, "decode.acc_seg": 86.82627, "loss": 0.34621, "time": 0.23203} -{"mode": "train", "epoch": 1, "iter": 19500, "lr": 0.00011, "memory": 7401, "data_time": 0.01376, "decode.loss_ce": 0.3567, "decode.acc_seg": 86.55767, "loss": 0.3567, "time": 0.22552} -{"mode": "train", "epoch": 1, "iter": 19550, "lr": 0.00011, "memory": 7401, "data_time": 0.01453, "decode.loss_ce": 0.34228, "decode.acc_seg": 86.77573, "loss": 0.34228, "time": 0.22721} -{"mode": "train", "epoch": 1, "iter": 19600, "lr": 0.00011, "memory": 7401, "data_time": 0.01365, "decode.loss_ce": 0.33482, "decode.acc_seg": 87.06891, "loss": 0.33482, "time": 0.23312} -{"mode": "train", "epoch": 1, "iter": 19650, "lr": 0.00011, "memory": 7401, "data_time": 0.01431, "decode.loss_ce": 0.33797, "decode.acc_seg": 86.95939, "loss": 0.33797, "time": 0.22667} -{"mode": "train", "epoch": 1, "iter": 19700, "lr": 0.00011, "memory": 7401, "data_time": 0.01435, "decode.loss_ce": 0.359, "decode.acc_seg": 86.19759, "loss": 0.359, "time": 0.23133} -{"mode": "train", "epoch": 1, "iter": 19750, "lr": 0.00011, "memory": 7401, "data_time": 0.01448, "decode.loss_ce": 0.34007, "decode.acc_seg": 86.87378, "loss": 0.34007, "time": 0.23387} -{"mode": "train", "epoch": 1, "iter": 19800, "lr": 0.00011, "memory": 7401, "data_time": 0.0136, "decode.loss_ce": 0.32542, "decode.acc_seg": 87.2527, "loss": 0.32542, "time": 0.23073} -{"mode": "train", "epoch": 1, "iter": 19850, "lr": 0.00011, "memory": 7401, "data_time": 0.01397, "decode.loss_ce": 0.33904, "decode.acc_seg": 87.00543, "loss": 0.33904, "time": 0.22588} -{"mode": "train", "epoch": 1, "iter": 19900, "lr": 0.00011, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.34622, "decode.acc_seg": 86.38797, "loss": 0.34622, "time": 0.22221} -{"mode": "train", "epoch": 1, "iter": 19950, "lr": 0.00011, "memory": 7401, "data_time": 0.01439, "decode.loss_ce": 0.32842, "decode.acc_seg": 87.44023, "loss": 0.32842, "time": 0.23848} -{"mode": "train", "epoch": 1, "iter": 20000, "lr": 0.00011, "memory": 7401, "data_time": 0.01407, "decode.loss_ce": 0.31675, "decode.acc_seg": 87.69488, "loss": 0.31675, "time": 0.25779} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 0.00011, "aAcc": 0.8064, "mIoU": 0.436, "mAcc": 0.5542, "IoU.wall": 0.7374, "IoU.building": 0.8026, "IoU.sky": 0.941, "IoU.floor": 0.8005, "IoU.tree": 0.7151, "IoU.ceiling": 0.8066, "IoU.road": 0.803, "IoU.bed ": 0.8506, "IoU.windowpane": 0.5791, "IoU.grass": 0.6593, "IoU.cabinet": 0.5775, "IoU.sidewalk": 0.6289, "IoU.person": 0.7892, "IoU.earth": 0.3445, "IoU.door": 0.4323, "IoU.table": 
0.5495, "IoU.mountain": 0.4843, "IoU.plant": 0.4975, "IoU.curtain": 0.6889, "IoU.chair": 0.5399, "IoU.car": 0.825, "IoU.water": 0.5057, "IoU.painting": 0.6742, "IoU.sofa": 0.6126, "IoU.shelf": 0.4157, "IoU.house": 0.4339, "IoU.sea": 0.5252, "IoU.mirror": 0.5816, "IoU.rug": 0.5731, "IoU.field": 0.3066, "IoU.armchair": 0.4053, "IoU.seat": 0.6014, "IoU.fence": 0.3513, "IoU.desk": 0.4699, "IoU.rock": 0.3394, "IoU.wardrobe": 0.4434, "IoU.lamp": 0.5967, "IoU.bathtub": 0.65, "IoU.railing": 0.2907, "IoU.cushion": 0.4933, "IoU.base": 0.2754, "IoU.box": 0.2143, "IoU.column": 0.4315, "IoU.signboard": 0.3377, "IoU.chest of drawers": 0.3513, "IoU.counter": 0.2368, "IoU.sand": 0.3542, "IoU.sink": 0.6499, "IoU.skyscraper": 0.3482, "IoU.fireplace": 0.6247, "IoU.refrigerator": 0.6473, "IoU.grandstand": 0.4185, "IoU.path": 0.1878, "IoU.stairs": 0.2966, "IoU.runway": 0.6549, "IoU.case": 0.4024, "IoU.pool table": 0.8958, "IoU.pillow": 0.5304, "IoU.screen door": 0.6049, "IoU.stairway": 0.328, "IoU.river": 0.1226, "IoU.bridge": 0.3419, "IoU.bookcase": 0.3282, "IoU.blind": 0.3719, "IoU.coffee table": 0.5459, "IoU.toilet": 0.8286, "IoU.flower": 0.3666, "IoU.book": 0.4337, "IoU.hill": 0.0489, "IoU.bench": 0.4047, "IoU.countertop": 0.4947, "IoU.stove": 0.7179, "IoU.palm": 0.464, "IoU.kitchen island": 0.3464, "IoU.computer": 0.6101, "IoU.swivel chair": 0.3925, "IoU.boat": 0.4177, "IoU.bar": 0.3683, "IoU.arcade machine": 0.2792, "IoU.hovel": 0.3769, "IoU.bus": 0.849, "IoU.towel": 0.6055, "IoU.light": 0.4247, "IoU.truck": 0.3982, "IoU.tower": 0.3184, "IoU.chandelier": 0.6108, "IoU.awning": 0.1552, "IoU.streetlight": 0.1978, "IoU.booth": 0.356, "IoU.television receiver": 0.6595, "IoU.airplane": 0.4607, "IoU.dirt track": 0.0189, "IoU.apparel": 0.3344, "IoU.pole": 0.1745, "IoU.land": 0.069, "IoU.bannister": 0.1167, "IoU.escalator": 0.2004, "IoU.ottoman": 0.4014, "IoU.bottle": 0.2426, "IoU.buffet": 0.5193, "IoU.poster": 0.2082, "IoU.stage": 0.1205, "IoU.van": 0.3936, "IoU.ship": 0.4001, "IoU.fountain": 0.2022, "IoU.conveyer belt": 0.5699, "IoU.canopy": 0.1822, "IoU.washer": 0.6743, "IoU.plaything": 0.1937, "IoU.swimming pool": 0.3916, "IoU.stool": 0.3094, "IoU.barrel": 0.1376, "IoU.basket": 0.3134, "IoU.waterfall": 0.6097, "IoU.tent": 0.8406, "IoU.bag": 0.1329, "IoU.minibike": 0.5245, "IoU.cradle": 0.7345, "IoU.oven": 0.2619, "IoU.ball": 0.4746, "IoU.food": 0.5514, "IoU.step": 0.133, "IoU.tank": 0.2959, "IoU.trade name": 0.1862, "IoU.microwave": 0.4777, "IoU.pot": 0.366, "IoU.animal": 0.5467, "IoU.bicycle": 0.4531, "IoU.lake": 0.2283, "IoU.dishwasher": 0.63, "IoU.screen": 0.6099, "IoU.blanket": 0.1245, "IoU.sculpture": 0.4794, "IoU.hood": 0.4694, "IoU.sconce": 0.3514, "IoU.vase": 0.359, "IoU.traffic light": 0.2853, "IoU.tray": 0.0616, "IoU.ashcan": 0.3882, "IoU.fan": 0.5639, "IoU.pier": 0.4292, "IoU.crt screen": 0.0532, "IoU.plate": 0.4718, "IoU.monitor": 0.1288, "IoU.bulletin board": 0.4557, "IoU.shower": 0.0, "IoU.radiator": 0.3965, "IoU.glass": 0.1005, "IoU.clock": 0.3404, "IoU.flag": 0.3044, "Acc.wall": 0.8705, "Acc.building": 0.9008, "Acc.sky": 0.9758, "Acc.floor": 0.8915, "Acc.tree": 0.865, "Acc.ceiling": 0.9016, "Acc.road": 0.842, "Acc.bed ": 0.9474, "Acc.windowpane": 0.7539, "Acc.grass": 0.7778, "Acc.cabinet": 0.703, "Acc.sidewalk": 0.804, "Acc.person": 0.9247, "Acc.earth": 0.5363, "Acc.door": 0.5416, "Acc.table": 0.7466, "Acc.mountain": 0.718, "Acc.plant": 0.6253, "Acc.curtain": 0.7743, "Acc.chair": 0.6918, "Acc.car": 0.8899, "Acc.water": 0.678, "Acc.painting": 0.8311, "Acc.sofa": 0.7484, "Acc.shelf": 0.6315, 
"Acc.house": 0.6887, "Acc.sea": 0.8205, "Acc.mirror": 0.6585, "Acc.rug": 0.6636, "Acc.field": 0.5545, "Acc.armchair": 0.6204, "Acc.seat": 0.768, "Acc.fence": 0.4497, "Acc.desk": 0.7039, "Acc.rock": 0.5056, "Acc.wardrobe": 0.631, "Acc.lamp": 0.7428, "Acc.bathtub": 0.7417, "Acc.railing": 0.4012, "Acc.cushion": 0.5739, "Acc.base": 0.4326, "Acc.box": 0.2781, "Acc.column": 0.6182, "Acc.signboard": 0.4439, "Acc.chest of drawers": 0.4698, "Acc.counter": 0.331, "Acc.sand": 0.5076, "Acc.sink": 0.7382, "Acc.skyscraper": 0.4365, "Acc.fireplace": 0.7959, "Acc.refrigerator": 0.7805, "Acc.grandstand": 0.6706, "Acc.path": 0.2398, "Acc.stairs": 0.4013, "Acc.runway": 0.8342, "Acc.case": 0.4641, "Acc.pool table": 0.9556, "Acc.pillow": 0.669, "Acc.screen door": 0.7574, "Acc.stairway": 0.4004, "Acc.river": 0.1469, "Acc.bridge": 0.3954, "Acc.bookcase": 0.4505, "Acc.blind": 0.4175, "Acc.coffee table": 0.7426, "Acc.toilet": 0.8927, "Acc.flower": 0.5311, "Acc.book": 0.6233, "Acc.hill": 0.0747, "Acc.bench": 0.5135, "Acc.countertop": 0.7606, "Acc.stove": 0.8419, "Acc.palm": 0.6464, "Acc.kitchen island": 0.6771, "Acc.computer": 0.7147, "Acc.swivel chair": 0.584, "Acc.boat": 0.4665, "Acc.bar": 0.4701, "Acc.arcade machine": 0.2978, "Acc.hovel": 0.621, "Acc.bus": 0.9255, "Acc.towel": 0.7292, "Acc.light": 0.4674, "Acc.truck": 0.5232, "Acc.tower": 0.395, "Acc.chandelier": 0.7372, "Acc.awning": 0.1688, "Acc.streetlight": 0.2364, "Acc.booth": 0.3728, "Acc.television receiver": 0.8181, "Acc.airplane": 0.6491, "Acc.dirt track": 0.0892, "Acc.apparel": 0.4739, "Acc.pole": 0.2612, "Acc.land": 0.0915, "Acc.bannister": 0.1579, "Acc.escalator": 0.2202, "Acc.ottoman": 0.6773, "Acc.bottle": 0.3401, "Acc.buffet": 0.712, "Acc.poster": 0.2511, "Acc.stage": 0.1519, "Acc.van": 0.5273, "Acc.ship": 0.6852, "Acc.fountain": 0.2199, "Acc.conveyer belt": 0.8845, "Acc.canopy": 0.2232, "Acc.washer": 0.7154, "Acc.plaything": 0.3328, "Acc.swimming pool": 0.6484, "Acc.stool": 0.4058, "Acc.barrel": 0.3991, "Acc.basket": 0.4005, "Acc.waterfall": 0.7192, "Acc.tent": 0.9366, "Acc.bag": 0.1602, "Acc.minibike": 0.6089, "Acc.cradle": 0.8345, "Acc.oven": 0.5396, "Acc.ball": 0.5622, "Acc.food": 0.76, "Acc.step": 0.1553, "Acc.tank": 0.3281, "Acc.trade name": 0.194, "Acc.microwave": 0.5237, "Acc.pot": 0.4386, "Acc.animal": 0.5644, "Acc.bicycle": 0.6578, "Acc.lake": 0.3139, "Acc.dishwasher": 0.7347, "Acc.screen": 0.6988, "Acc.blanket": 0.1492, "Acc.sculpture": 0.7903, "Acc.hood": 0.5373, "Acc.sconce": 0.4143, "Acc.vase": 0.5355, "Acc.traffic light": 0.4106, "Acc.tray": 0.0818, "Acc.ashcan": 0.5089, "Acc.fan": 0.6713, "Acc.pier": 0.4801, "Acc.crt screen": 0.1436, "Acc.plate": 0.5538, "Acc.monitor": 0.1862, "Acc.bulletin board": 0.5214, "Acc.shower": 0.0, "Acc.radiator": 0.4616, "Acc.glass": 0.1041, "Acc.clock": 0.4231, "Acc.flag": 0.3418} -{"mode": "train", "epoch": 1, "iter": 20050, "lr": 0.00011, "memory": 7401, "data_time": 0.53808, "decode.loss_ce": 0.34327, "decode.acc_seg": 86.52045, "loss": 0.34327, "time": 0.74523} -{"mode": "train", "epoch": 1, "iter": 20100, "lr": 0.00011, "memory": 7401, "data_time": 0.014, "decode.loss_ce": 0.34755, "decode.acc_seg": 86.49062, "loss": 0.34755, "time": 0.22764} -{"mode": "train", "epoch": 1, "iter": 20150, "lr": 0.00011, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.32728, "decode.acc_seg": 87.27159, "loss": 0.32728, "time": 0.23587} -{"mode": "train", "epoch": 1, "iter": 20200, "lr": 0.00011, "memory": 7401, "data_time": 0.01431, "decode.loss_ce": 0.33188, "decode.acc_seg": 87.02549, "loss": 0.33188, 
"time": 0.22706} -{"mode": "train", "epoch": 1, "iter": 20250, "lr": 0.00011, "memory": 7401, "data_time": 0.01417, "decode.loss_ce": 0.32771, "decode.acc_seg": 87.18824, "loss": 0.32771, "time": 0.22422} -{"mode": "train", "epoch": 1, "iter": 20300, "lr": 0.00011, "memory": 7401, "data_time": 0.01347, "decode.loss_ce": 0.32465, "decode.acc_seg": 87.20915, "loss": 0.32465, "time": 0.22721} -{"mode": "train", "epoch": 1, "iter": 20350, "lr": 0.00011, "memory": 7401, "data_time": 0.0145, "decode.loss_ce": 0.33295, "decode.acc_seg": 86.94314, "loss": 0.33295, "time": 0.22562} -{"mode": "train", "epoch": 1, "iter": 20400, "lr": 0.00011, "memory": 7401, "data_time": 0.01373, "decode.loss_ce": 0.33654, "decode.acc_seg": 86.93734, "loss": 0.33654, "time": 0.23154} -{"mode": "train", "epoch": 1, "iter": 20450, "lr": 0.00011, "memory": 7401, "data_time": 0.01431, "decode.loss_ce": 0.35363, "decode.acc_seg": 86.40729, "loss": 0.35363, "time": 0.22239} -{"mode": "train", "epoch": 1, "iter": 20500, "lr": 0.00011, "memory": 7401, "data_time": 0.0136, "decode.loss_ce": 0.33656, "decode.acc_seg": 87.13339, "loss": 0.33656, "time": 0.22477} -{"mode": "train", "epoch": 1, "iter": 20550, "lr": 0.00011, "memory": 7401, "data_time": 0.01442, "decode.loss_ce": 0.32964, "decode.acc_seg": 87.1397, "loss": 0.32964, "time": 0.23011} -{"mode": "train", "epoch": 1, "iter": 20600, "lr": 0.0001, "memory": 7401, "data_time": 0.01412, "decode.loss_ce": 0.33271, "decode.acc_seg": 87.40062, "loss": 0.33271, "time": 0.22765} -{"mode": "train", "epoch": 1, "iter": 20650, "lr": 0.0001, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.3532, "decode.acc_seg": 86.30349, "loss": 0.3532, "time": 0.23437} -{"mode": "train", "epoch": 1, "iter": 20700, "lr": 0.0001, "memory": 7401, "data_time": 0.01448, "decode.loss_ce": 0.32793, "decode.acc_seg": 87.24848, "loss": 0.32793, "time": 0.22425} -{"mode": "train", "epoch": 1, "iter": 20750, "lr": 0.0001, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.33373, "decode.acc_seg": 87.32289, "loss": 0.33373, "time": 0.23302} -{"mode": "train", "epoch": 1, "iter": 20800, "lr": 0.0001, "memory": 7401, "data_time": 0.01379, "decode.loss_ce": 0.32059, "decode.acc_seg": 87.52584, "loss": 0.32059, "time": 0.22336} -{"mode": "train", "epoch": 1, "iter": 20850, "lr": 0.0001, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.32332, "decode.acc_seg": 87.39319, "loss": 0.32332, "time": 0.22635} -{"mode": "train", "epoch": 1, "iter": 20900, "lr": 0.0001, "memory": 7401, "data_time": 0.01346, "decode.loss_ce": 0.33883, "decode.acc_seg": 86.75072, "loss": 0.33883, "time": 0.23405} -{"mode": "train", "epoch": 1, "iter": 20950, "lr": 0.0001, "memory": 7401, "data_time": 0.01386, "decode.loss_ce": 0.33031, "decode.acc_seg": 87.09981, "loss": 0.33031, "time": 0.23053} -{"mode": "train", "epoch": 1, "iter": 21000, "lr": 0.0001, "memory": 7401, "data_time": 0.01412, "decode.loss_ce": 0.32276, "decode.acc_seg": 87.7261, "loss": 0.32276, "time": 0.22386} -{"mode": "train", "epoch": 1, "iter": 21050, "lr": 0.0001, "memory": 7401, "data_time": 0.01412, "decode.loss_ce": 0.3156, "decode.acc_seg": 87.74911, "loss": 0.3156, "time": 0.22153} -{"mode": "train", "epoch": 1, "iter": 21100, "lr": 0.0001, "memory": 7401, "data_time": 0.0143, "decode.loss_ce": 0.33493, "decode.acc_seg": 87.22443, "loss": 0.33493, "time": 0.23508} -{"mode": "train", "epoch": 1, "iter": 21150, "lr": 0.0001, "memory": 7401, "data_time": 0.01827, "decode.loss_ce": 0.32292, "decode.acc_seg": 87.34676, "loss": 0.32292, 
"time": 0.23683} -{"mode": "train", "epoch": 1, "iter": 21200, "lr": 0.0001, "memory": 7401, "data_time": 0.01386, "decode.loss_ce": 0.31937, "decode.acc_seg": 87.49494, "loss": 0.31937, "time": 0.22317} -{"mode": "train", "epoch": 1, "iter": 21250, "lr": 0.0001, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.32873, "decode.acc_seg": 87.53491, "loss": 0.32873, "time": 0.22889} -{"mode": "train", "epoch": 1, "iter": 21300, "lr": 0.0001, "memory": 7401, "data_time": 0.01469, "decode.loss_ce": 0.31149, "decode.acc_seg": 87.84935, "loss": 0.31149, "time": 0.22993} -{"mode": "train", "epoch": 1, "iter": 21350, "lr": 0.0001, "memory": 7401, "data_time": 0.01369, "decode.loss_ce": 0.32131, "decode.acc_seg": 87.5639, "loss": 0.32131, "time": 0.22496} -{"mode": "train", "epoch": 1, "iter": 21400, "lr": 0.0001, "memory": 7401, "data_time": 0.01435, "decode.loss_ce": 0.30567, "decode.acc_seg": 87.90837, "loss": 0.30567, "time": 0.23154} -{"mode": "train", "epoch": 1, "iter": 21450, "lr": 0.0001, "memory": 7401, "data_time": 0.01363, "decode.loss_ce": 0.32096, "decode.acc_seg": 87.4727, "loss": 0.32096, "time": 0.22788} -{"mode": "train", "epoch": 1, "iter": 21500, "lr": 0.0001, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.31308, "decode.acc_seg": 87.87945, "loss": 0.31308, "time": 0.22967} -{"mode": "train", "epoch": 1, "iter": 21550, "lr": 0.0001, "memory": 7401, "data_time": 0.01388, "decode.loss_ce": 0.30905, "decode.acc_seg": 87.90369, "loss": 0.30905, "time": 0.22911} -{"mode": "train", "epoch": 1, "iter": 21600, "lr": 0.0001, "memory": 7401, "data_time": 0.01432, "decode.loss_ce": 0.32152, "decode.acc_seg": 87.44824, "loss": 0.32152, "time": 0.22577} -{"mode": "train", "epoch": 1, "iter": 21650, "lr": 0.0001, "memory": 7401, "data_time": 0.01377, "decode.loss_ce": 0.317, "decode.acc_seg": 87.53676, "loss": 0.317, "time": 0.23317} -{"mode": "train", "epoch": 1, "iter": 21700, "lr": 0.0001, "memory": 7401, "data_time": 0.01364, "decode.loss_ce": 0.33291, "decode.acc_seg": 87.08955, "loss": 0.33291, "time": 0.22668} -{"mode": "train", "epoch": 1, "iter": 21750, "lr": 0.0001, "memory": 7401, "data_time": 0.01369, "decode.loss_ce": 0.30747, "decode.acc_seg": 87.98398, "loss": 0.30747, "time": 0.22017} -{"mode": "train", "epoch": 1, "iter": 21800, "lr": 0.0001, "memory": 7401, "data_time": 0.01363, "decode.loss_ce": 0.3238, "decode.acc_seg": 87.40259, "loss": 0.3238, "time": 0.22555} -{"mode": "train", "epoch": 1, "iter": 21850, "lr": 0.0001, "memory": 7401, "data_time": 0.01443, "decode.loss_ce": 0.3082, "decode.acc_seg": 88.04081, "loss": 0.3082, "time": 0.22797} -{"mode": "train", "epoch": 1, "iter": 21900, "lr": 0.0001, "memory": 7401, "data_time": 0.0137, "decode.loss_ce": 0.32775, "decode.acc_seg": 87.30987, "loss": 0.32775, "time": 0.22773} -{"mode": "train", "epoch": 1, "iter": 21950, "lr": 0.0001, "memory": 7401, "data_time": 0.01331, "decode.loss_ce": 0.31427, "decode.acc_seg": 87.83382, "loss": 0.31427, "time": 0.22896} -{"mode": "train", "epoch": 1, "iter": 22000, "lr": 0.0001, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.31027, "decode.acc_seg": 87.75872, "loss": 0.31027, "time": 0.2332} -{"mode": "train", "epoch": 1, "iter": 22050, "lr": 0.0001, "memory": 7401, "data_time": 0.01364, "decode.loss_ce": 0.32792, "decode.acc_seg": 87.20563, "loss": 0.32792, "time": 0.22269} -{"mode": "train", "epoch": 1, "iter": 22100, "lr": 0.0001, "memory": 7401, "data_time": 0.01409, "decode.loss_ce": 0.31347, "decode.acc_seg": 87.77058, "loss": 0.31347, "time": 
0.22105} -{"mode": "train", "epoch": 1, "iter": 22150, "lr": 0.0001, "memory": 7401, "data_time": 0.01832, "decode.loss_ce": 0.31105, "decode.acc_seg": 87.80008, "loss": 0.31105, "time": 0.23769} -{"mode": "train", "epoch": 1, "iter": 22200, "lr": 0.0001, "memory": 7401, "data_time": 0.01372, "decode.loss_ce": 0.32508, "decode.acc_seg": 87.32412, "loss": 0.32508, "time": 0.22443} -{"mode": "train", "epoch": 1, "iter": 22250, "lr": 0.0001, "memory": 7401, "data_time": 0.014, "decode.loss_ce": 0.29829, "decode.acc_seg": 88.12952, "loss": 0.29829, "time": 0.22277} -{"mode": "train", "epoch": 1, "iter": 22300, "lr": 0.0001, "memory": 7401, "data_time": 0.01415, "decode.loss_ce": 0.30967, "decode.acc_seg": 87.69754, "loss": 0.30967, "time": 0.22527} -{"mode": "train", "epoch": 1, "iter": 22350, "lr": 0.0001, "memory": 7401, "data_time": 0.01414, "decode.loss_ce": 0.31452, "decode.acc_seg": 87.729, "loss": 0.31452, "time": 0.242} -{"mode": "train", "epoch": 1, "iter": 22400, "lr": 0.0001, "memory": 7401, "data_time": 0.01346, "decode.loss_ce": 0.31007, "decode.acc_seg": 87.79196, "loss": 0.31007, "time": 0.22052} -{"mode": "train", "epoch": 1, "iter": 22450, "lr": 0.0001, "memory": 7401, "data_time": 0.01411, "decode.loss_ce": 0.30647, "decode.acc_seg": 88.25079, "loss": 0.30647, "time": 0.22666} -{"mode": "train", "epoch": 1, "iter": 22500, "lr": 0.0001, "memory": 7401, "data_time": 0.01416, "decode.loss_ce": 0.311, "decode.acc_seg": 88.0259, "loss": 0.311, "time": 0.2331} -{"mode": "train", "epoch": 1, "iter": 22550, "lr": 0.0001, "memory": 7401, "data_time": 0.01334, "decode.loss_ce": 0.31454, "decode.acc_seg": 87.7114, "loss": 0.31454, "time": 0.23022} -{"mode": "train", "epoch": 1, "iter": 22600, "lr": 0.0001, "memory": 7401, "data_time": 0.0133, "decode.loss_ce": 0.31467, "decode.acc_seg": 87.86691, "loss": 0.31467, "time": 0.22561} -{"mode": "train", "epoch": 1, "iter": 22650, "lr": 9e-05, "memory": 7401, "data_time": 0.01411, "decode.loss_ce": 0.31323, "decode.acc_seg": 87.49755, "loss": 0.31323, "time": 0.22148} -{"mode": "train", "epoch": 1, "iter": 22700, "lr": 9e-05, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.29907, "decode.acc_seg": 88.28051, "loss": 0.29907, "time": 0.22639} -{"mode": "train", "epoch": 1, "iter": 22750, "lr": 9e-05, "memory": 7401, "data_time": 0.01456, "decode.loss_ce": 0.30918, "decode.acc_seg": 87.98117, "loss": 0.30918, "time": 0.22715} -{"mode": "train", "epoch": 1, "iter": 22800, "lr": 9e-05, "memory": 7401, "data_time": 0.01415, "decode.loss_ce": 0.30819, "decode.acc_seg": 88.07622, "loss": 0.30819, "time": 0.22989} -{"mode": "train", "epoch": 1, "iter": 22850, "lr": 9e-05, "memory": 7401, "data_time": 0.014, "decode.loss_ce": 0.30658, "decode.acc_seg": 87.9708, "loss": 0.30658, "time": 0.22569} -{"mode": "train", "epoch": 1, "iter": 22900, "lr": 9e-05, "memory": 7401, "data_time": 0.01386, "decode.loss_ce": 0.29472, "decode.acc_seg": 88.25464, "loss": 0.29472, "time": 0.23139} -{"mode": "train", "epoch": 1, "iter": 22950, "lr": 9e-05, "memory": 7401, "data_time": 0.0136, "decode.loss_ce": 0.30789, "decode.acc_seg": 87.93537, "loss": 0.30789, "time": 0.22504} -{"mode": "train", "epoch": 1, "iter": 23000, "lr": 9e-05, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.2993, "decode.acc_seg": 87.94907, "loss": 0.2993, "time": 0.23261} -{"mode": "train", "epoch": 1, "iter": 23050, "lr": 9e-05, "memory": 7401, "data_time": 0.0137, "decode.loss_ce": 0.29835, "decode.acc_seg": 88.37659, "loss": 0.29835, "time": 0.23133} -{"mode": "train", 
"epoch": 1, "iter": 23100, "lr": 9e-05, "memory": 7401, "data_time": 0.01364, "decode.loss_ce": 0.30582, "decode.acc_seg": 88.12177, "loss": 0.30582, "time": 0.22225} -{"mode": "train", "epoch": 1, "iter": 23150, "lr": 9e-05, "memory": 7401, "data_time": 0.01329, "decode.loss_ce": 0.30488, "decode.acc_seg": 88.03827, "loss": 0.30488, "time": 0.22822} -{"mode": "train", "epoch": 1, "iter": 23200, "lr": 9e-05, "memory": 7401, "data_time": 0.01427, "decode.loss_ce": 0.30549, "decode.acc_seg": 87.98179, "loss": 0.30549, "time": 0.22689} -{"mode": "train", "epoch": 1, "iter": 23250, "lr": 9e-05, "memory": 7401, "data_time": 0.01365, "decode.loss_ce": 0.30347, "decode.acc_seg": 88.03303, "loss": 0.30347, "time": 0.23251} -{"mode": "train", "epoch": 1, "iter": 23300, "lr": 9e-05, "memory": 7401, "data_time": 0.01436, "decode.loss_ce": 0.31985, "decode.acc_seg": 87.7908, "loss": 0.31985, "time": 0.22732} -{"mode": "train", "epoch": 1, "iter": 23350, "lr": 9e-05, "memory": 7401, "data_time": 0.01406, "decode.loss_ce": 0.31509, "decode.acc_seg": 87.73078, "loss": 0.31509, "time": 0.22285} -{"mode": "train", "epoch": 1, "iter": 23400, "lr": 9e-05, "memory": 7401, "data_time": 0.01387, "decode.loss_ce": 0.29022, "decode.acc_seg": 88.2324, "loss": 0.29022, "time": 0.23322} -{"mode": "train", "epoch": 1, "iter": 23450, "lr": 9e-05, "memory": 7401, "data_time": 0.01761, "decode.loss_ce": 0.29546, "decode.acc_seg": 88.39574, "loss": 0.29546, "time": 0.22486} -{"mode": "train", "epoch": 1, "iter": 23500, "lr": 9e-05, "memory": 7401, "data_time": 0.01389, "decode.loss_ce": 0.29705, "decode.acc_seg": 88.43025, "loss": 0.29705, "time": 0.22616} -{"mode": "train", "epoch": 1, "iter": 23550, "lr": 9e-05, "memory": 7401, "data_time": 0.01433, "decode.loss_ce": 0.30808, "decode.acc_seg": 87.8872, "loss": 0.30808, "time": 0.22862} -{"mode": "train", "epoch": 1, "iter": 23600, "lr": 9e-05, "memory": 7401, "data_time": 0.01368, "decode.loss_ce": 0.29727, "decode.acc_seg": 88.43492, "loss": 0.29727, "time": 0.22501} -{"mode": "train", "epoch": 1, "iter": 23650, "lr": 9e-05, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.30718, "decode.acc_seg": 88.17554, "loss": 0.30718, "time": 0.22931} -{"mode": "train", "epoch": 1, "iter": 23700, "lr": 9e-05, "memory": 7401, "data_time": 0.01396, "decode.loss_ce": 0.29733, "decode.acc_seg": 88.10539, "loss": 0.29733, "time": 0.23041} -{"mode": "train", "epoch": 1, "iter": 23750, "lr": 9e-05, "memory": 7401, "data_time": 0.01432, "decode.loss_ce": 0.28155, "decode.acc_seg": 88.90577, "loss": 0.28155, "time": 0.2254} -{"mode": "train", "epoch": 1, "iter": 23800, "lr": 9e-05, "memory": 7401, "data_time": 0.01346, "decode.loss_ce": 0.30318, "decode.acc_seg": 88.25682, "loss": 0.30318, "time": 0.22272} -{"mode": "train", "epoch": 1, "iter": 23850, "lr": 9e-05, "memory": 7401, "data_time": 0.01459, "decode.loss_ce": 0.30695, "decode.acc_seg": 88.0493, "loss": 0.30695, "time": 0.22539} -{"mode": "train", "epoch": 1, "iter": 23900, "lr": 9e-05, "memory": 7401, "data_time": 0.01373, "decode.loss_ce": 0.2955, "decode.acc_seg": 88.25924, "loss": 0.2955, "time": 0.22534} -{"mode": "train", "epoch": 1, "iter": 23950, "lr": 9e-05, "memory": 7401, "data_time": 0.0143, "decode.loss_ce": 0.29028, "decode.acc_seg": 88.57241, "loss": 0.29028, "time": 0.22528} -{"mode": "train", "epoch": 1, "iter": 24000, "lr": 9e-05, "memory": 7401, "data_time": 0.01409, "decode.loss_ce": 0.30843, "decode.acc_seg": 87.90342, "loss": 0.30843, "time": 0.25373} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 
9e-05, "aAcc": 0.811, "mIoU": 0.4471, "mAcc": 0.5584, "IoU.wall": 0.7452, "IoU.building": 0.8037, "IoU.sky": 0.9421, "IoU.floor": 0.7972, "IoU.tree": 0.7142, "IoU.ceiling": 0.8048, "IoU.road": 0.8219, "IoU.bed ": 0.8657, "IoU.windowpane": 0.5792, "IoU.grass": 0.6773, "IoU.cabinet": 0.5782, "IoU.sidewalk": 0.6201, "IoU.person": 0.8033, "IoU.earth": 0.3431, "IoU.door": 0.4458, "IoU.table": 0.5582, "IoU.mountain": 0.5546, "IoU.plant": 0.4587, "IoU.curtain": 0.6918, "IoU.chair": 0.554, "IoU.car": 0.8308, "IoU.water": 0.51, "IoU.painting": 0.6676, "IoU.sofa": 0.6194, "IoU.shelf": 0.4322, "IoU.house": 0.2771, "IoU.sea": 0.5647, "IoU.mirror": 0.5844, "IoU.rug": 0.5378, "IoU.field": 0.323, "IoU.armchair": 0.3641, "IoU.seat": 0.6091, "IoU.fence": 0.3573, "IoU.desk": 0.4475, "IoU.rock": 0.4076, "IoU.wardrobe": 0.4419, "IoU.lamp": 0.5699, "IoU.bathtub": 0.7082, "IoU.railing": 0.2855, "IoU.cushion": 0.4829, "IoU.base": 0.2225, "IoU.box": 0.1915, "IoU.column": 0.4557, "IoU.signboard": 0.323, "IoU.chest of drawers": 0.4508, "IoU.counter": 0.2469, "IoU.sand": 0.3251, "IoU.sink": 0.6794, "IoU.skyscraper": 0.4308, "IoU.fireplace": 0.6391, "IoU.refrigerator": 0.6536, "IoU.grandstand": 0.4154, "IoU.path": 0.2326, "IoU.stairs": 0.2979, "IoU.runway": 0.6703, "IoU.case": 0.3858, "IoU.pool table": 0.9131, "IoU.pillow": 0.5244, "IoU.screen door": 0.5325, "IoU.stairway": 0.3145, "IoU.river": 0.1148, "IoU.bridge": 0.3783, "IoU.bookcase": 0.3524, "IoU.blind": 0.3759, "IoU.coffee table": 0.5305, "IoU.toilet": 0.8325, "IoU.flower": 0.3843, "IoU.book": 0.432, "IoU.hill": 0.0657, "IoU.bench": 0.4397, "IoU.countertop": 0.5144, "IoU.stove": 0.733, "IoU.palm": 0.4222, "IoU.kitchen island": 0.2863, "IoU.computer": 0.5819, "IoU.swivel chair": 0.4034, "IoU.boat": 0.5447, "IoU.bar": 0.3463, "IoU.arcade machine": 0.5379, "IoU.hovel": 0.2971, "IoU.bus": 0.8495, "IoU.towel": 0.5985, "IoU.light": 0.4733, "IoU.truck": 0.3412, "IoU.tower": 0.1568, "IoU.chandelier": 0.6347, "IoU.awning": 0.2234, "IoU.streetlight": 0.2385, "IoU.booth": 0.3675, "IoU.television receiver": 0.7066, "IoU.airplane": 0.5046, "IoU.dirt track": 0.0817, "IoU.apparel": 0.3411, "IoU.pole": 0.1691, "IoU.land": 0.0625, "IoU.bannister": 0.0789, "IoU.escalator": 0.3317, "IoU.ottoman": 0.4074, "IoU.bottle": 0.1864, "IoU.buffet": 0.4497, "IoU.poster": 0.2081, "IoU.stage": 0.1164, "IoU.van": 0.3741, "IoU.ship": 0.5011, "IoU.fountain": 0.2277, "IoU.conveyer belt": 0.7508, "IoU.canopy": 0.1002, "IoU.washer": 0.6665, "IoU.plaything": 0.2683, "IoU.swimming pool": 0.1891, "IoU.stool": 0.3403, "IoU.barrel": 0.3177, "IoU.basket": 0.2906, "IoU.waterfall": 0.6273, "IoU.tent": 0.8926, "IoU.bag": 0.1362, "IoU.minibike": 0.6016, "IoU.cradle": 0.781, "IoU.oven": 0.3462, "IoU.ball": 0.3985, "IoU.food": 0.5, "IoU.step": 0.0971, "IoU.tank": 0.5033, "IoU.trade name": 0.2073, "IoU.microwave": 0.5933, "IoU.pot": 0.4284, "IoU.animal": 0.5587, "IoU.bicycle": 0.4694, "IoU.lake": 0.4941, "IoU.dishwasher": 0.6622, "IoU.screen": 0.6723, "IoU.blanket": 0.1157, "IoU.sculpture": 0.4355, "IoU.hood": 0.487, "IoU.sconce": 0.3347, "IoU.vase": 0.3503, "IoU.traffic light": 0.25, "IoU.tray": 0.0496, "IoU.ashcan": 0.4369, "IoU.fan": 0.5231, "IoU.pier": 0.5162, "IoU.crt screen": 0.0686, "IoU.plate": 0.4899, "IoU.monitor": 0.147, "IoU.bulletin board": 0.4106, "IoU.shower": 0.0001, "IoU.radiator": 0.3955, "IoU.glass": 0.1095, "IoU.clock": 0.2467, "IoU.flag": 0.3119, "Acc.wall": 0.8844, "Acc.building": 0.9264, "Acc.sky": 0.97, "Acc.floor": 0.9012, "Acc.tree": 0.8747, "Acc.ceiling": 0.8657, "Acc.road": 0.8977, 
"Acc.bed ": 0.9457, "Acc.windowpane": 0.7373, "Acc.grass": 0.8233, "Acc.cabinet": 0.7113, "Acc.sidewalk": 0.7588, "Acc.person": 0.9045, "Acc.earth": 0.4883, "Acc.door": 0.5836, "Acc.table": 0.7382, "Acc.mountain": 0.7646, "Acc.plant": 0.5517, "Acc.curtain": 0.8025, "Acc.chair": 0.7167, "Acc.car": 0.9218, "Acc.water": 0.6822, "Acc.painting": 0.8355, "Acc.sofa": 0.8034, "Acc.shelf": 0.649, "Acc.house": 0.3363, "Acc.sea": 0.7623, "Acc.mirror": 0.6326, "Acc.rug": 0.6224, "Acc.field": 0.4512, "Acc.armchair": 0.4932, "Acc.seat": 0.7853, "Acc.fence": 0.4699, "Acc.desk": 0.6421, "Acc.rock": 0.7006, "Acc.wardrobe": 0.6225, "Acc.lamp": 0.6684, "Acc.bathtub": 0.8149, "Acc.railing": 0.3575, "Acc.cushion": 0.5412, "Acc.base": 0.3538, "Acc.box": 0.249, "Acc.column": 0.5585, "Acc.signboard": 0.4517, "Acc.chest of drawers": 0.6828, "Acc.counter": 0.3552, "Acc.sand": 0.4963, "Acc.sink": 0.7484, "Acc.skyscraper": 0.558, "Acc.fireplace": 0.7985, "Acc.refrigerator": 0.776, "Acc.grandstand": 0.7123, "Acc.path": 0.3169, "Acc.stairs": 0.3522, "Acc.runway": 0.8606, "Acc.case": 0.5448, "Acc.pool table": 0.9665, "Acc.pillow": 0.6768, "Acc.screen door": 0.757, "Acc.stairway": 0.3983, "Acc.river": 0.2164, "Acc.bridge": 0.4584, "Acc.bookcase": 0.4578, "Acc.blind": 0.4118, "Acc.coffee table": 0.8175, "Acc.toilet": 0.8931, "Acc.flower": 0.5112, "Acc.book": 0.5854, "Acc.hill": 0.1327, "Acc.bench": 0.5174, "Acc.countertop": 0.6412, "Acc.stove": 0.7988, "Acc.palm": 0.8264, "Acc.kitchen island": 0.4739, "Acc.computer": 0.6532, "Acc.swivel chair": 0.5703, "Acc.boat": 0.6961, "Acc.bar": 0.4275, "Acc.arcade machine": 0.578, "Acc.hovel": 0.3718, "Acc.bus": 0.9419, "Acc.towel": 0.7531, "Acc.light": 0.5869, "Acc.truck": 0.4558, "Acc.tower": 0.195, "Acc.chandelier": 0.7942, "Acc.awning": 0.341, "Acc.streetlight": 0.312, "Acc.booth": 0.3907, "Acc.television receiver": 0.806, "Acc.airplane": 0.6545, "Acc.dirt track": 0.1938, "Acc.apparel": 0.4542, "Acc.pole": 0.2545, "Acc.land": 0.0802, "Acc.bannister": 0.1144, "Acc.escalator": 0.387, "Acc.ottoman": 0.5328, "Acc.bottle": 0.226, "Acc.buffet": 0.5371, "Acc.poster": 0.3211, "Acc.stage": 0.1392, "Acc.van": 0.4997, "Acc.ship": 0.6315, "Acc.fountain": 0.2488, "Acc.conveyer belt": 0.8688, "Acc.canopy": 0.1396, "Acc.washer": 0.6854, "Acc.plaything": 0.471, "Acc.swimming pool": 0.1929, "Acc.stool": 0.4364, "Acc.barrel": 0.4214, "Acc.basket": 0.3437, "Acc.waterfall": 0.7447, "Acc.tent": 0.9552, "Acc.bag": 0.1539, "Acc.minibike": 0.784, "Acc.cradle": 0.957, "Acc.oven": 0.4837, "Acc.ball": 0.4651, "Acc.food": 0.59, "Acc.step": 0.1107, "Acc.tank": 0.5685, "Acc.trade name": 0.2267, "Acc.microwave": 0.6618, "Acc.pot": 0.5132, "Acc.animal": 0.5991, "Acc.bicycle": 0.6311, "Acc.lake": 0.5827, "Acc.dishwasher": 0.7459, "Acc.screen": 0.8449, "Acc.blanket": 0.1366, "Acc.sculpture": 0.7039, "Acc.hood": 0.573, "Acc.sconce": 0.4773, "Acc.vase": 0.526, "Acc.traffic light": 0.4409, "Acc.tray": 0.0657, "Acc.ashcan": 0.5437, "Acc.fan": 0.652, "Acc.pier": 0.6412, "Acc.crt screen": 0.1837, "Acc.plate": 0.6078, "Acc.monitor": 0.2063, "Acc.bulletin board": 0.53, "Acc.shower": 0.0001, "Acc.radiator": 0.4279, "Acc.glass": 0.1158, "Acc.clock": 0.2762, "Acc.flag": 0.3265} -{"mode": "train", "epoch": 1, "iter": 24050, "lr": 9e-05, "memory": 7401, "data_time": 0.55188, "decode.loss_ce": 0.29279, "decode.acc_seg": 88.53831, "loss": 0.29279, "time": 0.75639} -{"mode": "train", "epoch": 1, "iter": 24100, "lr": 9e-05, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.30765, "decode.acc_seg": 88.24599, "loss": 
0.30765, "time": 0.22798} -{"mode": "train", "epoch": 1, "iter": 24150, "lr": 9e-05, "memory": 7401, "data_time": 0.01363, "decode.loss_ce": 0.29441, "decode.acc_seg": 88.69019, "loss": 0.29441, "time": 0.22229} -{"mode": "train", "epoch": 1, "iter": 24200, "lr": 9e-05, "memory": 7401, "data_time": 0.01364, "decode.loss_ce": 0.29248, "decode.acc_seg": 88.72412, "loss": 0.29248, "time": 0.2236} -{"mode": "train", "epoch": 1, "iter": 24250, "lr": 9e-05, "memory": 7401, "data_time": 0.01396, "decode.loss_ce": 0.29118, "decode.acc_seg": 88.64651, "loss": 0.29118, "time": 0.22234} -{"mode": "train", "epoch": 1, "iter": 24300, "lr": 9e-05, "memory": 7401, "data_time": 0.01422, "decode.loss_ce": 0.29579, "decode.acc_seg": 88.33409, "loss": 0.29579, "time": 0.23489} -{"mode": "train", "epoch": 1, "iter": 24350, "lr": 9e-05, "memory": 7401, "data_time": 0.01377, "decode.loss_ce": 0.28864, "decode.acc_seg": 88.71916, "loss": 0.28864, "time": 0.234} -{"mode": "train", "epoch": 1, "iter": 24400, "lr": 9e-05, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.29684, "decode.acc_seg": 88.34716, "loss": 0.29684, "time": 0.22508} -{"mode": "train", "epoch": 1, "iter": 24450, "lr": 9e-05, "memory": 7401, "data_time": 0.01352, "decode.loss_ce": 0.28672, "decode.acc_seg": 88.73064, "loss": 0.28672, "time": 0.22317} -{"mode": "train", "epoch": 1, "iter": 24500, "lr": 9e-05, "memory": 7401, "data_time": 0.01403, "decode.loss_ce": 0.2732, "decode.acc_seg": 89.12514, "loss": 0.2732, "time": 0.23676} -{"mode": "train", "epoch": 1, "iter": 24550, "lr": 9e-05, "memory": 7401, "data_time": 0.01497, "decode.loss_ce": 0.29021, "decode.acc_seg": 88.63957, "loss": 0.29021, "time": 0.22687} -{"mode": "train", "epoch": 1, "iter": 24600, "lr": 9e-05, "memory": 7401, "data_time": 0.01399, "decode.loss_ce": 0.30463, "decode.acc_seg": 88.18009, "loss": 0.30463, "time": 0.22802} -{"mode": "train", "epoch": 1, "iter": 24650, "lr": 9e-05, "memory": 7401, "data_time": 0.01385, "decode.loss_ce": 0.28167, "decode.acc_seg": 88.97034, "loss": 0.28167, "time": 0.22806} -{"mode": "train", "epoch": 1, "iter": 24700, "lr": 8e-05, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.29307, "decode.acc_seg": 88.3089, "loss": 0.29307, "time": 0.23199} -{"mode": "train", "epoch": 1, "iter": 24750, "lr": 8e-05, "memory": 7401, "data_time": 0.01421, "decode.loss_ce": 0.30932, "decode.acc_seg": 87.86559, "loss": 0.30932, "time": 0.22097} -{"mode": "train", "epoch": 1, "iter": 24800, "lr": 8e-05, "memory": 7401, "data_time": 0.014, "decode.loss_ce": 0.30584, "decode.acc_seg": 88.08893, "loss": 0.30584, "time": 0.22696} -{"mode": "train", "epoch": 1, "iter": 24850, "lr": 8e-05, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.30324, "decode.acc_seg": 88.14184, "loss": 0.30324, "time": 0.23355} -{"mode": "train", "epoch": 1, "iter": 24900, "lr": 8e-05, "memory": 7401, "data_time": 0.01349, "decode.loss_ce": 0.28797, "decode.acc_seg": 88.51484, "loss": 0.28797, "time": 0.22945} -{"mode": "train", "epoch": 1, "iter": 24950, "lr": 8e-05, "memory": 7401, "data_time": 0.01401, "decode.loss_ce": 0.2797, "decode.acc_seg": 88.9479, "loss": 0.2797, "time": 0.22693} -{"mode": "train", "epoch": 1, "iter": 25000, "lr": 8e-05, "memory": 7401, "data_time": 0.01387, "decode.loss_ce": 0.30071, "decode.acc_seg": 88.06391, "loss": 0.30071, "time": 0.22327} -{"mode": "train", "epoch": 1, "iter": 25050, "lr": 8e-05, "memory": 7401, "data_time": 0.01416, "decode.loss_ce": 0.291, "decode.acc_seg": 88.75163, "loss": 0.291, "time": 0.23459} -{"mode": 
"train", "epoch": 1, "iter": 25100, "lr": 8e-05, "memory": 7401, "data_time": 0.01365, "decode.loss_ce": 0.29312, "decode.acc_seg": 88.37805, "loss": 0.29312, "time": 0.23108} -{"mode": "train", "epoch": 1, "iter": 25150, "lr": 8e-05, "memory": 7401, "data_time": 0.01371, "decode.loss_ce": 0.29615, "decode.acc_seg": 88.48054, "loss": 0.29615, "time": 0.22116} -{"mode": "train", "epoch": 1, "iter": 25200, "lr": 8e-05, "memory": 7401, "data_time": 0.01359, "decode.loss_ce": 0.29282, "decode.acc_seg": 88.42914, "loss": 0.29282, "time": 0.22525} -{"mode": "train", "epoch": 1, "iter": 25250, "lr": 8e-05, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.27157, "decode.acc_seg": 89.34627, "loss": 0.27157, "time": 0.22048} -{"mode": "train", "epoch": 1, "iter": 25300, "lr": 8e-05, "memory": 7401, "data_time": 0.01365, "decode.loss_ce": 0.27933, "decode.acc_seg": 88.90296, "loss": 0.27933, "time": 0.23296} -{"mode": "train", "epoch": 1, "iter": 25350, "lr": 8e-05, "memory": 7401, "data_time": 0.01354, "decode.loss_ce": 0.27856, "decode.acc_seg": 89.05348, "loss": 0.27856, "time": 0.22728} -{"mode": "train", "epoch": 1, "iter": 25400, "lr": 8e-05, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.2826, "decode.acc_seg": 88.9258, "loss": 0.2826, "time": 0.23262} -{"mode": "train", "epoch": 1, "iter": 25450, "lr": 8e-05, "memory": 7401, "data_time": 0.01379, "decode.loss_ce": 0.28267, "decode.acc_seg": 88.92869, "loss": 0.28267, "time": 0.22475} -{"mode": "train", "epoch": 1, "iter": 25500, "lr": 8e-05, "memory": 7401, "data_time": 0.01417, "decode.loss_ce": 0.28147, "decode.acc_seg": 88.77121, "loss": 0.28147, "time": 0.22603} -{"mode": "train", "epoch": 1, "iter": 25550, "lr": 8e-05, "memory": 7401, "data_time": 0.01362, "decode.loss_ce": 0.27325, "decode.acc_seg": 89.12536, "loss": 0.27325, "time": 0.23521} -{"mode": "train", "epoch": 1, "iter": 25600, "lr": 8e-05, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.28031, "decode.acc_seg": 88.94078, "loss": 0.28031, "time": 0.22752} -{"mode": "train", "epoch": 1, "iter": 25650, "lr": 8e-05, "memory": 7401, "data_time": 0.01421, "decode.loss_ce": 0.28221, "decode.acc_seg": 88.90694, "loss": 0.28221, "time": 0.22632} -{"mode": "train", "epoch": 1, "iter": 25700, "lr": 8e-05, "memory": 7401, "data_time": 0.01362, "decode.loss_ce": 0.28565, "decode.acc_seg": 88.73966, "loss": 0.28565, "time": 0.22111} -{"mode": "train", "epoch": 1, "iter": 25750, "lr": 8e-05, "memory": 7401, "data_time": 0.01387, "decode.loss_ce": 0.28247, "decode.acc_seg": 88.85954, "loss": 0.28247, "time": 0.23233} -{"mode": "train", "epoch": 1, "iter": 25800, "lr": 8e-05, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.28267, "decode.acc_seg": 88.73835, "loss": 0.28267, "time": 0.22878} -{"mode": "train", "epoch": 1, "iter": 25850, "lr": 8e-05, "memory": 7401, "data_time": 0.01348, "decode.loss_ce": 0.2735, "decode.acc_seg": 89.33288, "loss": 0.2735, "time": 0.23411} -{"mode": "train", "epoch": 1, "iter": 25900, "lr": 8e-05, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.28612, "decode.acc_seg": 88.79624, "loss": 0.28612, "time": 0.22927} -{"mode": "train", "epoch": 1, "iter": 25950, "lr": 8e-05, "memory": 7401, "data_time": 0.01339, "decode.loss_ce": 0.27951, "decode.acc_seg": 89.06565, "loss": 0.27951, "time": 0.22397} -{"mode": "train", "epoch": 1, "iter": 26000, "lr": 8e-05, "memory": 7401, "data_time": 0.01361, "decode.loss_ce": 0.27081, "decode.acc_seg": 89.33503, "loss": 0.27081, "time": 0.23335} -{"mode": "train", "epoch": 1, 
"iter": 26050, "lr": 8e-05, "memory": 7401, "data_time": 0.01422, "decode.loss_ce": 0.27663, "decode.acc_seg": 89.0346, "loss": 0.27663, "time": 0.22582} -{"mode": "train", "epoch": 1, "iter": 26100, "lr": 8e-05, "memory": 7401, "data_time": 0.01399, "decode.loss_ce": 0.26611, "decode.acc_seg": 89.6732, "loss": 0.26611, "time": 0.22816} -{"mode": "train", "epoch": 1, "iter": 26150, "lr": 8e-05, "memory": 7401, "data_time": 0.0135, "decode.loss_ce": 0.26783, "decode.acc_seg": 89.42938, "loss": 0.26783, "time": 0.2261} -{"mode": "train", "epoch": 1, "iter": 26200, "lr": 8e-05, "memory": 7401, "data_time": 0.01846, "decode.loss_ce": 0.27278, "decode.acc_seg": 89.27343, "loss": 0.27278, "time": 0.23111} -{"mode": "train", "epoch": 1, "iter": 26250, "lr": 8e-05, "memory": 7401, "data_time": 0.01384, "decode.loss_ce": 0.27343, "decode.acc_seg": 89.30415, "loss": 0.27343, "time": 0.23075} -{"mode": "train", "epoch": 1, "iter": 26300, "lr": 8e-05, "memory": 7401, "data_time": 0.01412, "decode.loss_ce": 0.27865, "decode.acc_seg": 89.03808, "loss": 0.27865, "time": 0.23149} -{"mode": "train", "epoch": 1, "iter": 26350, "lr": 8e-05, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.28319, "decode.acc_seg": 88.85292, "loss": 0.28319, "time": 0.22496} -{"mode": "train", "epoch": 1, "iter": 26400, "lr": 8e-05, "memory": 7401, "data_time": 0.01819, "decode.loss_ce": 0.27966, "decode.acc_seg": 88.92101, "loss": 0.27966, "time": 0.22864} -{"mode": "train", "epoch": 1, "iter": 26450, "lr": 8e-05, "memory": 7401, "data_time": 0.01426, "decode.loss_ce": 0.26622, "decode.acc_seg": 89.53067, "loss": 0.26622, "time": 0.23664} -{"mode": "train", "epoch": 1, "iter": 26500, "lr": 8e-05, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.27501, "decode.acc_seg": 88.97253, "loss": 0.27501, "time": 0.2275} -{"mode": "train", "epoch": 1, "iter": 26550, "lr": 8e-05, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.27343, "decode.acc_seg": 89.00139, "loss": 0.27343, "time": 0.22687} -{"mode": "train", "epoch": 1, "iter": 26600, "lr": 8e-05, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.27051, "decode.acc_seg": 89.35079, "loss": 0.27051, "time": 0.23223} -{"mode": "train", "epoch": 1, "iter": 26650, "lr": 8e-05, "memory": 7401, "data_time": 0.01439, "decode.loss_ce": 0.26079, "decode.acc_seg": 89.61889, "loss": 0.26079, "time": 0.22786} -{"mode": "train", "epoch": 1, "iter": 26700, "lr": 7e-05, "memory": 7401, "data_time": 0.01401, "decode.loss_ce": 0.29074, "decode.acc_seg": 88.66896, "loss": 0.29074, "time": 0.2292} -{"mode": "train", "epoch": 1, "iter": 26750, "lr": 7e-05, "memory": 7401, "data_time": 0.01423, "decode.loss_ce": 0.2662, "decode.acc_seg": 89.481, "loss": 0.2662, "time": 0.23118} -{"mode": "train", "epoch": 1, "iter": 26800, "lr": 7e-05, "memory": 7401, "data_time": 0.01372, "decode.loss_ce": 0.26544, "decode.acc_seg": 89.40111, "loss": 0.26544, "time": 0.23334} -{"mode": "train", "epoch": 1, "iter": 26850, "lr": 7e-05, "memory": 7401, "data_time": 0.01378, "decode.loss_ce": 0.27646, "decode.acc_seg": 89.49659, "loss": 0.27646, "time": 0.22565} -{"mode": "train", "epoch": 1, "iter": 26900, "lr": 7e-05, "memory": 7401, "data_time": 0.01883, "decode.loss_ce": 0.27331, "decode.acc_seg": 89.1079, "loss": 0.27331, "time": 0.23256} -{"mode": "train", "epoch": 1, "iter": 26950, "lr": 7e-05, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.26937, "decode.acc_seg": 89.27706, "loss": 0.26937, "time": 0.2388} -{"mode": "train", "epoch": 1, "iter": 27000, "lr": 7e-05, 
"memory": 7401, "data_time": 0.01347, "decode.loss_ce": 0.27755, "decode.acc_seg": 88.86215, "loss": 0.27755, "time": 0.22757} -{"mode": "train", "epoch": 1, "iter": 27050, "lr": 7e-05, "memory": 7401, "data_time": 0.01412, "decode.loss_ce": 0.27204, "decode.acc_seg": 89.3767, "loss": 0.27204, "time": 0.22037} -{"mode": "train", "epoch": 1, "iter": 27100, "lr": 7e-05, "memory": 7401, "data_time": 0.01452, "decode.loss_ce": 0.25905, "decode.acc_seg": 89.67118, "loss": 0.25905, "time": 0.23305} -{"mode": "train", "epoch": 1, "iter": 27150, "lr": 7e-05, "memory": 7401, "data_time": 0.01369, "decode.loss_ce": 0.26473, "decode.acc_seg": 89.48576, "loss": 0.26473, "time": 0.22826} -{"mode": "train", "epoch": 1, "iter": 27200, "lr": 7e-05, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.25869, "decode.acc_seg": 89.55859, "loss": 0.25869, "time": 0.23753} -{"mode": "train", "epoch": 1, "iter": 27250, "lr": 7e-05, "memory": 7401, "data_time": 0.01399, "decode.loss_ce": 0.2601, "decode.acc_seg": 89.78804, "loss": 0.2601, "time": 0.23034} -{"mode": "train", "epoch": 1, "iter": 27300, "lr": 7e-05, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.2616, "decode.acc_seg": 89.69532, "loss": 0.2616, "time": 0.22867} -{"mode": "train", "epoch": 1, "iter": 27350, "lr": 7e-05, "memory": 7401, "data_time": 0.01444, "decode.loss_ce": 0.25567, "decode.acc_seg": 89.91842, "loss": 0.25567, "time": 0.23535} -{"mode": "train", "epoch": 1, "iter": 27400, "lr": 7e-05, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.27063, "decode.acc_seg": 89.47956, "loss": 0.27063, "time": 0.22826} -{"mode": "train", "epoch": 1, "iter": 27450, "lr": 7e-05, "memory": 7401, "data_time": 0.014, "decode.loss_ce": 0.26361, "decode.acc_seg": 89.50887, "loss": 0.26361, "time": 0.23545} -{"mode": "train", "epoch": 1, "iter": 27500, "lr": 7e-05, "memory": 7401, "data_time": 0.0142, "decode.loss_ce": 0.27589, "decode.acc_seg": 89.16469, "loss": 0.27589, "time": 0.22582} -{"mode": "train", "epoch": 1, "iter": 27550, "lr": 7e-05, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.26625, "decode.acc_seg": 89.5813, "loss": 0.26625, "time": 0.22982} -{"mode": "train", "epoch": 1, "iter": 27600, "lr": 7e-05, "memory": 7401, "data_time": 0.01422, "decode.loss_ce": 0.27654, "decode.acc_seg": 88.97836, "loss": 0.27654, "time": 0.23102} -{"mode": "train", "epoch": 1, "iter": 27650, "lr": 7e-05, "memory": 7401, "data_time": 0.01424, "decode.loss_ce": 0.26035, "decode.acc_seg": 89.73067, "loss": 0.26035, "time": 0.23357} -{"mode": "train", "epoch": 1, "iter": 27700, "lr": 7e-05, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.25768, "decode.acc_seg": 89.73759, "loss": 0.25768, "time": 0.23239} -{"mode": "train", "epoch": 1, "iter": 27750, "lr": 7e-05, "memory": 7401, "data_time": 0.0141, "decode.loss_ce": 0.26422, "decode.acc_seg": 89.60646, "loss": 0.26422, "time": 0.22479} -{"mode": "train", "epoch": 1, "iter": 27800, "lr": 7e-05, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.27855, "decode.acc_seg": 89.21922, "loss": 0.27855, "time": 0.22513} -{"mode": "train", "epoch": 1, "iter": 27850, "lr": 7e-05, "memory": 7401, "data_time": 0.01397, "decode.loss_ce": 0.27032, "decode.acc_seg": 89.28107, "loss": 0.27032, "time": 0.23591} -{"mode": "train", "epoch": 1, "iter": 27900, "lr": 7e-05, "memory": 7401, "data_time": 0.01371, "decode.loss_ce": 0.27511, "decode.acc_seg": 89.2836, "loss": 0.27511, "time": 0.22876} -{"mode": "train", "epoch": 1, "iter": 27950, "lr": 7e-05, "memory": 7401, "data_time": 
0.01375, "decode.loss_ce": 0.25888, "decode.acc_seg": 89.62478, "loss": 0.25888, "time": 0.22701} -{"mode": "train", "epoch": 1, "iter": 28000, "lr": 7e-05, "memory": 7401, "data_time": 0.01418, "decode.loss_ce": 0.25614, "decode.acc_seg": 89.93634, "loss": 0.25614, "time": 0.25265} -{"mode": "val", "epoch": 1, "iter": 250, "lr": 7e-05, "aAcc": 0.8141, "mIoU": 0.4552, "mAcc": 0.5718, "IoU.wall": 0.748, "IoU.building": 0.8045, "IoU.sky": 0.941, "IoU.floor": 0.8015, "IoU.tree": 0.7191, "IoU.ceiling": 0.8078, "IoU.road": 0.8189, "IoU.bed ": 0.8684, "IoU.windowpane": 0.5829, "IoU.grass": 0.6706, "IoU.cabinet": 0.589, "IoU.sidewalk": 0.6392, "IoU.person": 0.8021, "IoU.earth": 0.3276, "IoU.door": 0.4406, "IoU.table": 0.5602, "IoU.mountain": 0.5768, "IoU.plant": 0.4946, "IoU.curtain": 0.697, "IoU.chair": 0.5483, "IoU.car": 0.8283, "IoU.water": 0.5003, "IoU.painting": 0.6772, "IoU.sofa": 0.6325, "IoU.shelf": 0.4417, "IoU.house": 0.3588, "IoU.sea": 0.5593, "IoU.mirror": 0.595, "IoU.rug": 0.5351, "IoU.field": 0.3173, "IoU.armchair": 0.3804, "IoU.seat": 0.57, "IoU.fence": 0.3574, "IoU.desk": 0.4562, "IoU.rock": 0.4211, "IoU.wardrobe": 0.4638, "IoU.lamp": 0.6023, "IoU.bathtub": 0.6852, "IoU.railing": 0.2724, "IoU.cushion": 0.5249, "IoU.base": 0.2528, "IoU.box": 0.2245, "IoU.column": 0.4367, "IoU.signboard": 0.3398, "IoU.chest of drawers": 0.4563, "IoU.counter": 0.2964, "IoU.sand": 0.3787, "IoU.sink": 0.6605, "IoU.skyscraper": 0.4225, "IoU.fireplace": 0.7159, "IoU.refrigerator": 0.591, "IoU.grandstand": 0.5049, "IoU.path": 0.2285, "IoU.stairs": 0.3201, "IoU.runway": 0.668, "IoU.case": 0.4861, "IoU.pool table": 0.9175, "IoU.pillow": 0.5512, "IoU.screen door": 0.58, "IoU.stairway": 0.3457, "IoU.river": 0.0789, "IoU.bridge": 0.6557, "IoU.bookcase": 0.3094, "IoU.blind": 0.3872, "IoU.coffee table": 0.5472, "IoU.toilet": 0.8179, "IoU.flower": 0.3356, "IoU.book": 0.4589, "IoU.hill": 0.0363, "IoU.bench": 0.3778, "IoU.countertop": 0.4982, "IoU.stove": 0.7097, "IoU.palm": 0.5011, "IoU.kitchen island": 0.3787, "IoU.computer": 0.5617, "IoU.swivel chair": 0.3945, "IoU.boat": 0.5243, "IoU.bar": 0.3429, "IoU.arcade machine": 0.5438, "IoU.hovel": 0.4357, "IoU.bus": 0.8432, "IoU.towel": 0.6113, "IoU.light": 0.4598, "IoU.truck": 0.3989, "IoU.tower": 0.2324, "IoU.chandelier": 0.646, "IoU.awning": 0.2047, "IoU.streetlight": 0.2404, "IoU.booth": 0.3534, "IoU.television receiver": 0.6942, "IoU.airplane": 0.4512, "IoU.dirt track": 0.1112, "IoU.apparel": 0.3357, "IoU.pole": 0.1288, "IoU.land": 0.0697, "IoU.bannister": 0.0623, "IoU.escalator": 0.2693, "IoU.ottoman": 0.4349, "IoU.bottle": 0.2171, "IoU.buffet": 0.5845, "IoU.poster": 0.2241, "IoU.stage": 0.1068, "IoU.van": 0.4111, "IoU.ship": 0.5035, "IoU.fountain": 0.1832, "IoU.conveyer belt": 0.6143, "IoU.canopy": 0.1337, "IoU.washer": 0.6649, "IoU.plaything": 0.2601, "IoU.swimming pool": 0.4309, "IoU.stool": 0.3992, "IoU.barrel": 0.2185, "IoU.basket": 0.3049, "IoU.waterfall": 0.5594, "IoU.tent": 0.8688, "IoU.bag": 0.072, "IoU.minibike": 0.6565, "IoU.cradle": 0.8145, "IoU.oven": 0.3187, "IoU.ball": 0.4636, "IoU.food": 0.5074, "IoU.step": 0.1574, "IoU.tank": 0.4809, "IoU.trade name": 0.2627, "IoU.microwave": 0.5472, "IoU.pot": 0.3265, "IoU.animal": 0.5339, "IoU.bicycle": 0.4994, "IoU.lake": 0.5096, "IoU.dishwasher": 0.544, "IoU.screen": 0.6545, "IoU.blanket": 0.1386, "IoU.sculpture": 0.4084, "IoU.hood": 0.494, "IoU.sconce": 0.3629, "IoU.vase": 0.3439, "IoU.traffic light": 0.2589, "IoU.tray": 0.0682, "IoU.ashcan": 0.4288, "IoU.fan": 0.5708, "IoU.pier": 0.3038, "IoU.crt screen": 
0.0437, "IoU.plate": 0.489, "IoU.monitor": 0.1495, "IoU.bulletin board": 0.4935, "IoU.shower": 0.0006, "IoU.radiator": 0.4477, "IoU.glass": 0.11, "IoU.clock": 0.3687, "IoU.flag": 0.3286, "Acc.wall": 0.8821, "Acc.building": 0.9134, "Acc.sky": 0.9683, "Acc.floor": 0.903, "Acc.tree": 0.9053, "Acc.ceiling": 0.8816, "Acc.road": 0.8842, "Acc.bed ": 0.9526, "Acc.windowpane": 0.7309, "Acc.grass": 0.8353, "Acc.cabinet": 0.6903, "Acc.sidewalk": 0.8202, "Acc.person": 0.9137, "Acc.earth": 0.4264, "Acc.door": 0.5387, "Acc.table": 0.7546, "Acc.mountain": 0.7205, "Acc.plant": 0.5975, "Acc.curtain": 0.8316, "Acc.chair": 0.6809, "Acc.car": 0.923, "Acc.water": 0.6251, "Acc.painting": 0.8575, "Acc.sofa": 0.8429, "Acc.shelf": 0.632, "Acc.house": 0.4551, "Acc.sea": 0.7996, "Acc.mirror": 0.6599, "Acc.rug": 0.6231, "Acc.field": 0.5566, "Acc.armchair": 0.5175, "Acc.seat": 0.7642, "Acc.fence": 0.4783, "Acc.desk": 0.672, "Acc.rock": 0.6809, "Acc.wardrobe": 0.6884, "Acc.lamp": 0.7025, "Acc.bathtub": 0.7799, "Acc.railing": 0.3782, "Acc.cushion": 0.6417, "Acc.base": 0.3805, "Acc.box": 0.3325, "Acc.column": 0.5546, "Acc.signboard": 0.4618, "Acc.chest of drawers": 0.7268, "Acc.counter": 0.4215, "Acc.sand": 0.5841, "Acc.sink": 0.7768, "Acc.skyscraper": 0.523, "Acc.fireplace": 0.8394, "Acc.refrigerator": 0.8048, "Acc.grandstand": 0.6789, "Acc.path": 0.3403, "Acc.stairs": 0.3749, "Acc.runway": 0.8593, "Acc.case": 0.6696, "Acc.pool table": 0.9602, "Acc.pillow": 0.6704, "Acc.screen door": 0.8034, "Acc.stairway": 0.4226, "Acc.river": 0.1791, "Acc.bridge": 0.793, "Acc.bookcase": 0.3804, "Acc.blind": 0.4726, "Acc.coffee table": 0.8284, "Acc.toilet": 0.8934, "Acc.flower": 0.4236, "Acc.book": 0.6353, "Acc.hill": 0.0562, "Acc.bench": 0.491, "Acc.countertop": 0.6407, "Acc.stove": 0.7875, "Acc.palm": 0.6853, "Acc.kitchen island": 0.5719, "Acc.computer": 0.6421, "Acc.swivel chair": 0.5402, "Acc.boat": 0.665, "Acc.bar": 0.4347, "Acc.arcade machine": 0.5631, "Acc.hovel": 0.5801, "Acc.bus": 0.9429, "Acc.towel": 0.7405, "Acc.light": 0.5237, "Acc.truck": 0.4828, "Acc.tower": 0.3187, "Acc.chandelier": 0.7926, "Acc.awning": 0.2665, "Acc.streetlight": 0.299, "Acc.booth": 0.4263, "Acc.television receiver": 0.8224, "Acc.airplane": 0.6596, "Acc.dirt track": 0.1817, "Acc.apparel": 0.4909, "Acc.pole": 0.1782, "Acc.land": 0.1363, "Acc.bannister": 0.0842, "Acc.escalator": 0.297, "Acc.ottoman": 0.5522, "Acc.bottle": 0.268, "Acc.buffet": 0.6766, "Acc.poster": 0.2845, "Acc.stage": 0.1551, "Acc.van": 0.5063, "Acc.ship": 0.6599, "Acc.fountain": 0.2022, "Acc.conveyer belt": 0.8892, "Acc.canopy": 0.1914, "Acc.washer": 0.7012, "Acc.plaything": 0.3524, "Acc.swimming pool": 0.6292, "Acc.stool": 0.5322, "Acc.barrel": 0.4971, "Acc.basket": 0.3552, "Acc.waterfall": 0.5987, "Acc.tent": 0.9607, "Acc.bag": 0.0842, "Acc.minibike": 0.8006, "Acc.cradle": 0.9524, "Acc.oven": 0.5334, "Acc.ball": 0.5473, "Acc.food": 0.6034, "Acc.step": 0.1827, "Acc.tank": 0.5656, "Acc.trade name": 0.3136, "Acc.microwave": 0.6107, "Acc.pot": 0.365, "Acc.animal": 0.5509, "Acc.bicycle": 0.7102, "Acc.lake": 0.6156, "Acc.dishwasher": 0.7884, "Acc.screen": 0.8623, "Acc.blanket": 0.1517, "Acc.sculpture": 0.6466, "Acc.hood": 0.5701, "Acc.sconce": 0.4294, "Acc.vase": 0.4275, "Acc.traffic light": 0.5178, "Acc.tray": 0.114, "Acc.ashcan": 0.5297, "Acc.fan": 0.6523, "Acc.pier": 0.4022, "Acc.crt screen": 0.0998, "Acc.plate": 0.6306, "Acc.monitor": 0.166, "Acc.bulletin board": 0.6732, "Acc.shower": 0.0007, "Acc.radiator": 0.5345, "Acc.glass": 0.1187, "Acc.clock": 0.4225, "Acc.flag": 0.3747} -{"mode": 
"train", "epoch": 1, "iter": 28050, "lr": 7e-05, "memory": 7401, "data_time": 0.53787, "decode.loss_ce": 0.26615, "decode.acc_seg": 89.38399, "loss": 0.26615, "time": 0.74124} -{"mode": "train", "epoch": 1, "iter": 28100, "lr": 7e-05, "memory": 7401, "data_time": 0.01392, "decode.loss_ce": 0.26586, "decode.acc_seg": 89.6249, "loss": 0.26586, "time": 0.22604} -{"mode": "train", "epoch": 1, "iter": 28150, "lr": 7e-05, "memory": 7401, "data_time": 0.01436, "decode.loss_ce": 0.26326, "decode.acc_seg": 89.89526, "loss": 0.26326, "time": 0.22572} -{"mode": "train", "epoch": 1, "iter": 28200, "lr": 7e-05, "memory": 7401, "data_time": 0.01441, "decode.loss_ce": 0.26696, "decode.acc_seg": 89.59746, "loss": 0.26696, "time": 0.22949} -{"mode": "train", "epoch": 1, "iter": 28250, "lr": 7e-05, "memory": 7401, "data_time": 0.01377, "decode.loss_ce": 0.25489, "decode.acc_seg": 89.95484, "loss": 0.25489, "time": 0.22743} -{"mode": "train", "epoch": 1, "iter": 28300, "lr": 7e-05, "memory": 7401, "data_time": 0.01423, "decode.loss_ce": 0.26836, "decode.acc_seg": 89.51988, "loss": 0.26836, "time": 0.22993} -{"mode": "train", "epoch": 1, "iter": 28350, "lr": 7e-05, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.2725, "decode.acc_seg": 89.08147, "loss": 0.2725, "time": 0.23485} -{"mode": "train", "epoch": 1, "iter": 28400, "lr": 7e-05, "memory": 7401, "data_time": 0.01433, "decode.loss_ce": 0.25443, "decode.acc_seg": 90.00185, "loss": 0.25443, "time": 0.22552} -{"mode": "train", "epoch": 1, "iter": 28450, "lr": 7e-05, "memory": 7401, "data_time": 0.01364, "decode.loss_ce": 0.26893, "decode.acc_seg": 89.45928, "loss": 0.26893, "time": 0.2211} -{"mode": "train", "epoch": 1, "iter": 28500, "lr": 7e-05, "memory": 7401, "data_time": 0.01397, "decode.loss_ce": 0.26076, "decode.acc_seg": 89.5004, "loss": 0.26076, "time": 0.21988} -{"mode": "train", "epoch": 1, "iter": 28550, "lr": 7e-05, "memory": 7401, "data_time": 0.01401, "decode.loss_ce": 0.25201, "decode.acc_seg": 90.06334, "loss": 0.25201, "time": 0.22509} -{"mode": "train", "epoch": 1, "iter": 28600, "lr": 7e-05, "memory": 7401, "data_time": 0.01379, "decode.loss_ce": 0.25768, "decode.acc_seg": 89.84184, "loss": 0.25768, "time": 0.2308} -{"mode": "train", "epoch": 1, "iter": 28650, "lr": 7e-05, "memory": 7401, "data_time": 0.01358, "decode.loss_ce": 0.24967, "decode.acc_seg": 90.07331, "loss": 0.24967, "time": 0.22269} -{"mode": "train", "epoch": 1, "iter": 28700, "lr": 6e-05, "memory": 7401, "data_time": 0.01446, "decode.loss_ce": 0.26586, "decode.acc_seg": 89.73213, "loss": 0.26586, "time": 0.221} -{"mode": "train", "epoch": 1, "iter": 28750, "lr": 6e-05, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.26516, "decode.acc_seg": 89.56967, "loss": 0.26516, "time": 0.23472} -{"mode": "train", "epoch": 1, "iter": 28800, "lr": 6e-05, "memory": 7401, "data_time": 0.01386, "decode.loss_ce": 0.26541, "decode.acc_seg": 89.45753, "loss": 0.26541, "time": 0.23088} -{"mode": "train", "epoch": 1, "iter": 28850, "lr": 6e-05, "memory": 7401, "data_time": 0.01433, "decode.loss_ce": 0.26015, "decode.acc_seg": 89.64466, "loss": 0.26015, "time": 0.22271} -{"mode": "train", "epoch": 1, "iter": 28900, "lr": 6e-05, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.25434, "decode.acc_seg": 89.96853, "loss": 0.25434, "time": 0.2228} -{"mode": "train", "epoch": 1, "iter": 28950, "lr": 6e-05, "memory": 7401, "data_time": 0.01463, "decode.loss_ce": 0.25345, "decode.acc_seg": 89.8512, "loss": 0.25345, "time": 0.22717} -{"mode": "train", "epoch": 1, "iter": 
29000, "lr": 6e-05, "memory": 7401, "data_time": 0.01389, "decode.loss_ce": 0.26528, "decode.acc_seg": 89.48612, "loss": 0.26528, "time": 0.221} -{"mode": "train", "epoch": 1, "iter": 29050, "lr": 6e-05, "memory": 7401, "data_time": 0.01868, "decode.loss_ce": 0.24376, "decode.acc_seg": 90.46945, "loss": 0.24376, "time": 0.23122} -{"mode": "train", "epoch": 1, "iter": 29100, "lr": 6e-05, "memory": 7401, "data_time": 0.01368, "decode.loss_ce": 0.25745, "decode.acc_seg": 89.828, "loss": 0.25745, "time": 0.228} -{"mode": "train", "epoch": 1, "iter": 29150, "lr": 6e-05, "memory": 7401, "data_time": 0.01784, "decode.loss_ce": 0.25229, "decode.acc_seg": 90.03561, "loss": 0.25229, "time": 0.22493} -{"mode": "train", "epoch": 1, "iter": 29200, "lr": 6e-05, "memory": 7401, "data_time": 0.01401, "decode.loss_ce": 0.25303, "decode.acc_seg": 89.92095, "loss": 0.25303, "time": 0.2255} -{"mode": "train", "epoch": 1, "iter": 29250, "lr": 6e-05, "memory": 7401, "data_time": 0.01407, "decode.loss_ce": 0.27297, "decode.acc_seg": 89.24089, "loss": 0.27297, "time": 0.22955} -{"mode": "train", "epoch": 1, "iter": 29300, "lr": 6e-05, "memory": 7401, "data_time": 0.01389, "decode.loss_ce": 0.26208, "decode.acc_seg": 89.65811, "loss": 0.26208, "time": 0.23624} -{"mode": "train", "epoch": 1, "iter": 29350, "lr": 6e-05, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.24509, "decode.acc_seg": 90.22958, "loss": 0.24509, "time": 0.22269} -{"mode": "train", "epoch": 1, "iter": 29400, "lr": 6e-05, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.26181, "decode.acc_seg": 89.60919, "loss": 0.26181, "time": 0.2234} -{"mode": "train", "epoch": 1, "iter": 29450, "lr": 6e-05, "memory": 7401, "data_time": 0.01389, "decode.loss_ce": 0.25328, "decode.acc_seg": 90.06646, "loss": 0.25328, "time": 0.22822} -{"mode": "train", "epoch": 1, "iter": 29500, "lr": 6e-05, "memory": 7401, "data_time": 0.01416, "decode.loss_ce": 0.24553, "decode.acc_seg": 90.26669, "loss": 0.24553, "time": 0.22634} -{"mode": "train", "epoch": 1, "iter": 29550, "lr": 6e-05, "memory": 7401, "data_time": 0.01368, "decode.loss_ce": 0.25015, "decode.acc_seg": 90.01254, "loss": 0.25015, "time": 0.2413} -{"mode": "train", "epoch": 1, "iter": 29600, "lr": 6e-05, "memory": 7401, "data_time": 0.01417, "decode.loss_ce": 0.25776, "decode.acc_seg": 89.83906, "loss": 0.25776, "time": 0.22639} -{"mode": "train", "epoch": 1, "iter": 29650, "lr": 6e-05, "memory": 7401, "data_time": 0.01837, "decode.loss_ce": 0.265, "decode.acc_seg": 89.86793, "loss": 0.265, "time": 0.23477} -{"mode": "train", "epoch": 1, "iter": 29700, "lr": 6e-05, "memory": 7401, "data_time": 0.0141, "decode.loss_ce": 0.25072, "decode.acc_seg": 89.94923, "loss": 0.25072, "time": 0.22365} -{"mode": "train", "epoch": 1, "iter": 29750, "lr": 6e-05, "memory": 7401, "data_time": 0.01308, "decode.loss_ce": 0.24965, "decode.acc_seg": 90.11084, "loss": 0.24965, "time": 0.22219} -{"mode": "train", "epoch": 1, "iter": 29800, "lr": 6e-05, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.26276, "decode.acc_seg": 89.62061, "loss": 0.26276, "time": 0.23085} -{"mode": "train", "epoch": 1, "iter": 29850, "lr": 6e-05, "memory": 7401, "data_time": 0.01396, "decode.loss_ce": 0.24946, "decode.acc_seg": 89.87843, "loss": 0.24946, "time": 0.22983} -{"mode": "train", "epoch": 1, "iter": 29900, "lr": 6e-05, "memory": 7401, "data_time": 0.01787, "decode.loss_ce": 0.24423, "decode.acc_seg": 90.2963, "loss": 0.24423, "time": 0.223} -{"mode": "train", "epoch": 1, "iter": 29950, "lr": 6e-05, "memory": 7401, 
"data_time": 0.01413, "decode.loss_ce": 0.25751, "decode.acc_seg": 89.81516, "loss": 0.25751, "time": 0.22129} -{"mode": "train", "epoch": 1, "iter": 30000, "lr": 6e-05, "memory": 7401, "data_time": 0.01349, "decode.loss_ce": 0.24109, "decode.acc_seg": 90.22067, "loss": 0.24109, "time": 0.23639} -{"mode": "train", "epoch": 1, "iter": 30050, "lr": 6e-05, "memory": 7401, "data_time": 0.01405, "decode.loss_ce": 0.24888, "decode.acc_seg": 90.22352, "loss": 0.24888, "time": 0.23269} -{"mode": "train", "epoch": 1, "iter": 30100, "lr": 6e-05, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.2402, "decode.acc_seg": 90.3177, "loss": 0.2402, "time": 0.22334} -{"mode": "train", "epoch": 1, "iter": 30150, "lr": 6e-05, "memory": 7401, "data_time": 0.01399, "decode.loss_ce": 0.25272, "decode.acc_seg": 89.96938, "loss": 0.25272, "time": 0.23106} -{"mode": "train", "epoch": 1, "iter": 30200, "lr": 6e-05, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.25224, "decode.acc_seg": 90.01061, "loss": 0.25224, "time": 0.22964} -{"mode": "train", "epoch": 1, "iter": 30250, "lr": 6e-05, "memory": 7401, "data_time": 0.01382, "decode.loss_ce": 0.24026, "decode.acc_seg": 90.34562, "loss": 0.24026, "time": 0.22685} -{"mode": "train", "epoch": 1, "iter": 30300, "lr": 6e-05, "memory": 7401, "data_time": 0.01416, "decode.loss_ce": 0.24025, "decode.acc_seg": 90.36949, "loss": 0.24025, "time": 0.22954} -{"mode": "train", "epoch": 1, "iter": 30350, "lr": 6e-05, "memory": 7401, "data_time": 0.01348, "decode.loss_ce": 0.2484, "decode.acc_seg": 90.08473, "loss": 0.2484, "time": 0.23292} -{"mode": "train", "epoch": 1, "iter": 30400, "lr": 6e-05, "memory": 7401, "data_time": 0.0136, "decode.loss_ce": 0.25432, "decode.acc_seg": 89.85067, "loss": 0.25432, "time": 0.22641} -{"mode": "train", "epoch": 1, "iter": 30450, "lr": 6e-05, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.24368, "decode.acc_seg": 90.29863, "loss": 0.24368, "time": 0.22081} -{"mode": "train", "epoch": 1, "iter": 30500, "lr": 6e-05, "memory": 7401, "data_time": 0.01385, "decode.loss_ce": 0.2417, "decode.acc_seg": 90.49365, "loss": 0.2417, "time": 0.23064} -{"mode": "train", "epoch": 1, "iter": 30550, "lr": 6e-05, "memory": 7401, "data_time": 0.01436, "decode.loss_ce": 0.24327, "decode.acc_seg": 90.34675, "loss": 0.24327, "time": 0.226} -{"mode": "train", "epoch": 1, "iter": 30600, "lr": 6e-05, "memory": 7401, "data_time": 0.01403, "decode.loss_ce": 0.24054, "decode.acc_seg": 90.2877, "loss": 0.24054, "time": 0.22897} -{"mode": "train", "epoch": 1, "iter": 30650, "lr": 5e-05, "memory": 7401, "data_time": 0.01416, "decode.loss_ce": 0.26102, "decode.acc_seg": 89.7172, "loss": 0.26102, "time": 0.22641} -{"mode": "train", "epoch": 1, "iter": 30700, "lr": 5e-05, "memory": 7401, "data_time": 0.01364, "decode.loss_ce": 0.24674, "decode.acc_seg": 90.1102, "loss": 0.24674, "time": 0.22953} -{"mode": "train", "epoch": 1, "iter": 30750, "lr": 5e-05, "memory": 7401, "data_time": 0.01367, "decode.loss_ce": 0.24554, "decode.acc_seg": 90.19576, "loss": 0.24554, "time": 0.22744} -{"mode": "train", "epoch": 1, "iter": 30800, "lr": 5e-05, "memory": 7401, "data_time": 0.01399, "decode.loss_ce": 0.25018, "decode.acc_seg": 90.13869, "loss": 0.25018, "time": 0.22114} -{"mode": "train", "epoch": 1, "iter": 30850, "lr": 5e-05, "memory": 7401, "data_time": 0.01327, "decode.loss_ce": 0.23524, "decode.acc_seg": 90.59131, "loss": 0.23524, "time": 0.23218} -{"mode": "train", "epoch": 1, "iter": 30900, "lr": 5e-05, "memory": 7401, "data_time": 0.01373, 
"decode.loss_ce": 0.24336, "decode.acc_seg": 90.22935, "loss": 0.24336, "time": 0.22637} -{"mode": "train", "epoch": 1, "iter": 30950, "lr": 5e-05, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.24737, "decode.acc_seg": 90.23347, "loss": 0.24737, "time": 0.22224} -{"mode": "train", "epoch": 1, "iter": 31000, "lr": 5e-05, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.24133, "decode.acc_seg": 90.31915, "loss": 0.24133, "time": 0.22379} -{"mode": "train", "epoch": 1, "iter": 31050, "lr": 5e-05, "memory": 7401, "data_time": 0.01361, "decode.loss_ce": 0.24952, "decode.acc_seg": 89.98261, "loss": 0.24952, "time": 0.22391} -{"mode": "train", "epoch": 1, "iter": 31100, "lr": 5e-05, "memory": 7401, "data_time": 0.01397, "decode.loss_ce": 0.23753, "decode.acc_seg": 90.57205, "loss": 0.23753, "time": 0.23531} -{"mode": "train", "epoch": 1, "iter": 31150, "lr": 5e-05, "memory": 7401, "data_time": 0.01417, "decode.loss_ce": 0.24104, "decode.acc_seg": 90.30558, "loss": 0.24104, "time": 0.23275} -{"mode": "train", "epoch": 1, "iter": 31200, "lr": 5e-05, "memory": 7401, "data_time": 0.0135, "decode.loss_ce": 0.23861, "decode.acc_seg": 90.38136, "loss": 0.23861, "time": 0.22878} -{"mode": "train", "epoch": 1, "iter": 31250, "lr": 5e-05, "memory": 7401, "data_time": 0.01352, "decode.loss_ce": 0.24708, "decode.acc_seg": 90.23609, "loss": 0.24708, "time": 0.22358} -{"mode": "train", "epoch": 1, "iter": 31300, "lr": 5e-05, "memory": 7401, "data_time": 0.01368, "decode.loss_ce": 0.24642, "decode.acc_seg": 90.22902, "loss": 0.24642, "time": 0.22615} -{"mode": "train", "epoch": 1, "iter": 31350, "lr": 5e-05, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.25387, "decode.acc_seg": 89.85779, "loss": 0.25387, "time": 0.22402} -{"mode": "train", "epoch": 1, "iter": 31400, "lr": 5e-05, "memory": 7401, "data_time": 0.01409, "decode.loss_ce": 0.23644, "decode.acc_seg": 90.64109, "loss": 0.23644, "time": 0.22921} -{"mode": "train", "epoch": 1, "iter": 31450, "lr": 5e-05, "memory": 7401, "data_time": 0.01335, "decode.loss_ce": 0.23015, "decode.acc_seg": 90.70054, "loss": 0.23015, "time": 0.22249} -{"mode": "train", "epoch": 1, "iter": 31500, "lr": 5e-05, "memory": 7401, "data_time": 0.01371, "decode.loss_ce": 0.23435, "decode.acc_seg": 90.65848, "loss": 0.23435, "time": 0.22077} -{"mode": "train", "epoch": 1, "iter": 31550, "lr": 5e-05, "memory": 7401, "data_time": 0.01351, "decode.loss_ce": 0.24578, "decode.acc_seg": 90.15818, "loss": 0.24578, "time": 0.23406} -{"mode": "train", "epoch": 2, "iter": 31600, "lr": 5e-05, "memory": 7401, "data_time": 0.06505, "decode.loss_ce": 0.23277, "decode.acc_seg": 90.69416, "loss": 0.23277, "time": 0.27913} -{"mode": "train", "epoch": 2, "iter": 31650, "lr": 5e-05, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.23535, "decode.acc_seg": 90.65026, "loss": 0.23535, "time": 0.23078} -{"mode": "train", "epoch": 2, "iter": 31700, "lr": 5e-05, "memory": 7401, "data_time": 0.01353, "decode.loss_ce": 0.23148, "decode.acc_seg": 90.63577, "loss": 0.23148, "time": 0.22646} -{"mode": "train", "epoch": 2, "iter": 31750, "lr": 5e-05, "memory": 7401, "data_time": 0.01383, "decode.loss_ce": 0.23527, "decode.acc_seg": 90.67277, "loss": 0.23527, "time": 0.23689} -{"mode": "train", "epoch": 2, "iter": 31800, "lr": 5e-05, "memory": 7401, "data_time": 0.01363, "decode.loss_ce": 0.23605, "decode.acc_seg": 90.67432, "loss": 0.23605, "time": 0.22586} -{"mode": "train", "epoch": 2, "iter": 31850, "lr": 5e-05, "memory": 7401, "data_time": 0.01532, "decode.loss_ce": 
0.24392, "decode.acc_seg": 90.3149, "loss": 0.24392, "time": 0.22904} -{"mode": "train", "epoch": 2, "iter": 31900, "lr": 5e-05, "memory": 7401, "data_time": 0.01461, "decode.loss_ce": 0.25354, "decode.acc_seg": 90.00689, "loss": 0.25354, "time": 0.22817} -{"mode": "train", "epoch": 2, "iter": 31950, "lr": 5e-05, "memory": 7401, "data_time": 0.01416, "decode.loss_ce": 0.23201, "decode.acc_seg": 90.80891, "loss": 0.23201, "time": 0.22442} -{"mode": "train", "epoch": 2, "iter": 32000, "lr": 5e-05, "memory": 7401, "data_time": 0.01372, "decode.loss_ce": 0.23611, "decode.acc_seg": 90.51421, "loss": 0.23611, "time": 0.27018} -{"mode": "val", "epoch": 2, "iter": 250, "lr": 5e-05, "aAcc": 0.8151, "mIoU": 0.4536, "mAcc": 0.5629, "IoU.wall": 0.7497, "IoU.building": 0.8018, "IoU.sky": 0.941, "IoU.floor": 0.7977, "IoU.tree": 0.7253, "IoU.ceiling": 0.8142, "IoU.road": 0.8253, "IoU.bed ": 0.8713, "IoU.windowpane": 0.5894, "IoU.grass": 0.6701, "IoU.cabinet": 0.599, "IoU.sidewalk": 0.632, "IoU.person": 0.7979, "IoU.earth": 0.3441, "IoU.door": 0.4468, "IoU.table": 0.5775, "IoU.mountain": 0.5863, "IoU.plant": 0.483, "IoU.curtain": 0.7059, "IoU.chair": 0.5527, "IoU.car": 0.8365, "IoU.water": 0.5674, "IoU.painting": 0.6842, "IoU.sofa": 0.6387, "IoU.shelf": 0.4354, "IoU.house": 0.2574, "IoU.sea": 0.5827, "IoU.mirror": 0.6075, "IoU.rug": 0.5455, "IoU.field": 0.3249, "IoU.armchair": 0.396, "IoU.seat": 0.6067, "IoU.fence": 0.3701, "IoU.desk": 0.4824, "IoU.rock": 0.3926, "IoU.wardrobe": 0.4525, "IoU.lamp": 0.6, "IoU.bathtub": 0.6826, "IoU.railing": 0.2711, "IoU.cushion": 0.5182, "IoU.base": 0.2169, "IoU.box": 0.2113, "IoU.column": 0.4406, "IoU.signboard": 0.3211, "IoU.chest of drawers": 0.4199, "IoU.counter": 0.2653, "IoU.sand": 0.335, "IoU.sink": 0.674, "IoU.skyscraper": 0.396, "IoU.fireplace": 0.6821, "IoU.refrigerator": 0.6067, "IoU.grandstand": 0.4265, "IoU.path": 0.1976, "IoU.stairs": 0.3129, "IoU.runway": 0.6736, "IoU.case": 0.432, "IoU.pool table": 0.9233, "IoU.pillow": 0.5391, "IoU.screen door": 0.56, "IoU.stairway": 0.3311, "IoU.river": 0.1011, "IoU.bridge": 0.4993, "IoU.bookcase": 0.3327, "IoU.blind": 0.4029, "IoU.coffee table": 0.5801, "IoU.toilet": 0.8312, "IoU.flower": 0.3368, "IoU.book": 0.4324, "IoU.hill": 0.0879, "IoU.bench": 0.4114, "IoU.countertop": 0.503, "IoU.stove": 0.7438, "IoU.palm": 0.4816, "IoU.kitchen island": 0.3684, "IoU.computer": 0.5726, "IoU.swivel chair": 0.3461, "IoU.boat": 0.4437, "IoU.bar": 0.3994, "IoU.arcade machine": 0.5981, "IoU.hovel": 0.4077, "IoU.bus": 0.871, "IoU.towel": 0.5882, "IoU.light": 0.4842, "IoU.truck": 0.3586, "IoU.tower": 0.1237, "IoU.chandelier": 0.6553, "IoU.awning": 0.1985, "IoU.streetlight": 0.2445, "IoU.booth": 0.3843, "IoU.television receiver": 0.7191, "IoU.airplane": 0.499, "IoU.dirt track": 0.1433, "IoU.apparel": 0.3458, "IoU.pole": 0.1669, "IoU.land": 0.0623, "IoU.bannister": 0.0792, "IoU.escalator": 0.2604, "IoU.ottoman": 0.4557, "IoU.bottle": 0.2157, "IoU.buffet": 0.4668, "IoU.poster": 0.2061, "IoU.stage": 0.1298, "IoU.van": 0.3922, "IoU.ship": 0.2902, "IoU.fountain": 0.2024, "IoU.conveyer belt": 0.7482, "IoU.canopy": 0.0989, "IoU.washer": 0.6705, "IoU.plaything": 0.3339, "IoU.swimming pool": 0.362, "IoU.stool": 0.3888, "IoU.barrel": 0.3749, "IoU.basket": 0.2952, "IoU.waterfall": 0.6143, "IoU.tent": 0.8853, "IoU.bag": 0.1144, "IoU.minibike": 0.6409, "IoU.cradle": 0.8378, "IoU.oven": 0.3237, "IoU.ball": 0.4937, "IoU.food": 0.5128, "IoU.step": 0.1748, "IoU.tank": 0.5168, "IoU.trade name": 0.2253, "IoU.microwave": 0.4541, "IoU.pot": 0.3492, 
"IoU.animal": 0.5619, "IoU.bicycle": 0.5031, "IoU.lake": 0.5045, "IoU.dishwasher": 0.5904, "IoU.screen": 0.5633, "IoU.blanket": 0.1227, "IoU.sculpture": 0.4733, "IoU.hood": 0.4886, "IoU.sconce": 0.3641, "IoU.vase": 0.3761, "IoU.traffic light": 0.2663, "IoU.tray": 0.0735, "IoU.ashcan": 0.393, "IoU.fan": 0.5581, "IoU.pier": 0.2622, "IoU.crt screen": 0.0238, "IoU.plate": 0.5043, "IoU.monitor": 0.0779, "IoU.bulletin board": 0.4738, "IoU.shower": 0.0023, "IoU.radiator": 0.4543, "IoU.glass": 0.1303, "IoU.clock": 0.3491, "IoU.flag": 0.3622, "Acc.wall": 0.8728, "Acc.building": 0.9289, "Acc.sky": 0.9716, "Acc.floor": 0.899, "Acc.tree": 0.8812, "Acc.ceiling": 0.8968, "Acc.road": 0.9084, "Acc.bed ": 0.9501, "Acc.windowpane": 0.7652, "Acc.grass": 0.8411, "Acc.cabinet": 0.7219, "Acc.sidewalk": 0.7693, "Acc.person": 0.9325, "Acc.earth": 0.4744, "Acc.door": 0.5555, "Acc.table": 0.7671, "Acc.mountain": 0.7355, "Acc.plant": 0.5976, "Acc.curtain": 0.8084, "Acc.chair": 0.6884, "Acc.car": 0.9131, "Acc.water": 0.7431, "Acc.painting": 0.8434, "Acc.sofa": 0.8354, "Acc.shelf": 0.6695, "Acc.house": 0.3094, "Acc.sea": 0.7718, "Acc.mirror": 0.6651, "Acc.rug": 0.6195, "Acc.field": 0.5145, "Acc.armchair": 0.5644, "Acc.seat": 0.7651, "Acc.fence": 0.5133, "Acc.desk": 0.6634, "Acc.rock": 0.6655, "Acc.wardrobe": 0.6333, "Acc.lamp": 0.7482, "Acc.bathtub": 0.7584, "Acc.railing": 0.3807, "Acc.cushion": 0.6203, "Acc.base": 0.334, "Acc.box": 0.2957, "Acc.column": 0.626, "Acc.signboard": 0.4157, "Acc.chest of drawers": 0.6477, "Acc.counter": 0.3532, "Acc.sand": 0.4867, "Acc.sink": 0.7701, "Acc.skyscraper": 0.4992, "Acc.fireplace": 0.7867, "Acc.refrigerator": 0.7406, "Acc.grandstand": 0.6791, "Acc.path": 0.2579, "Acc.stairs": 0.4033, "Acc.runway": 0.87, "Acc.case": 0.5324, "Acc.pool table": 0.9531, "Acc.pillow": 0.6806, "Acc.screen door": 0.7059, "Acc.stairway": 0.4078, "Acc.river": 0.1682, "Acc.bridge": 0.6986, "Acc.bookcase": 0.44, "Acc.blind": 0.4591, "Acc.coffee table": 0.8057, "Acc.toilet": 0.9082, "Acc.flower": 0.4137, "Acc.book": 0.5799, "Acc.hill": 0.1564, "Acc.bench": 0.5406, "Acc.countertop": 0.6255, "Acc.stove": 0.8478, "Acc.palm": 0.6666, "Acc.kitchen island": 0.5568, "Acc.computer": 0.6645, "Acc.swivel chair": 0.5017, "Acc.boat": 0.5316, "Acc.bar": 0.4938, "Acc.arcade machine": 0.6446, "Acc.hovel": 0.6224, "Acc.bus": 0.9377, "Acc.towel": 0.7179, "Acc.light": 0.5653, "Acc.truck": 0.506, "Acc.tower": 0.1706, "Acc.chandelier": 0.8012, "Acc.awning": 0.2372, "Acc.streetlight": 0.3186, "Acc.booth": 0.4019, "Acc.television receiver": 0.8197, "Acc.airplane": 0.6369, "Acc.dirt track": 0.1988, "Acc.apparel": 0.4882, "Acc.pole": 0.2423, "Acc.land": 0.078, "Acc.bannister": 0.117, "Acc.escalator": 0.2886, "Acc.ottoman": 0.6157, "Acc.bottle": 0.2777, "Acc.buffet": 0.5174, "Acc.poster": 0.278, "Acc.stage": 0.2345, "Acc.van": 0.546, "Acc.ship": 0.4066, "Acc.fountain": 0.21, "Acc.conveyer belt": 0.8777, "Acc.canopy": 0.1479, "Acc.washer": 0.7069, "Acc.plaything": 0.5134, "Acc.swimming pool": 0.5041, "Acc.stool": 0.4902, "Acc.barrel": 0.6125, "Acc.basket": 0.3329, "Acc.waterfall": 0.6853, "Acc.tent": 0.9507, "Acc.bag": 0.1385, "Acc.minibike": 0.7756, "Acc.cradle": 0.9424, "Acc.oven": 0.5798, "Acc.ball": 0.603, "Acc.food": 0.6078, "Acc.step": 0.2311, "Acc.tank": 0.5428, "Acc.trade name": 0.2565, "Acc.microwave": 0.4918, "Acc.pot": 0.3999, "Acc.animal": 0.5858, "Acc.bicycle": 0.7228, "Acc.lake": 0.6097, "Acc.dishwasher": 0.7646, "Acc.screen": 0.7761, "Acc.blanket": 0.1377, "Acc.sculpture": 0.6623, "Acc.hood": 0.5522, "Acc.sconce": 
0.4329, "Acc.vase": 0.5087, "Acc.traffic light": 0.3833, "Acc.tray": 0.1055, "Acc.ashcan": 0.4695, "Acc.fan": 0.6482, "Acc.pier": 0.2903, "Acc.crt screen": 0.0652, "Acc.plate": 0.6713, "Acc.monitor": 0.093, "Acc.bulletin board": 0.5571, "Acc.shower": 0.0045, "Acc.radiator": 0.4985, "Acc.glass": 0.144, "Acc.clock": 0.4115, "Acc.flag": 0.3962} -{"mode": "train", "epoch": 2, "iter": 32050, "lr": 5e-05, "memory": 7401, "data_time": 0.54962, "decode.loss_ce": 0.23538, "decode.acc_seg": 90.50749, "loss": 0.23538, "time": 0.76553} -{"mode": "train", "epoch": 2, "iter": 32100, "lr": 5e-05, "memory": 7401, "data_time": 0.01355, "decode.loss_ce": 0.22693, "decode.acc_seg": 90.78199, "loss": 0.22693, "time": 0.22045} -{"mode": "train", "epoch": 2, "iter": 32150, "lr": 5e-05, "memory": 7401, "data_time": 0.01379, "decode.loss_ce": 0.22951, "decode.acc_seg": 90.75181, "loss": 0.22951, "time": 0.2236} -{"mode": "train", "epoch": 2, "iter": 32200, "lr": 5e-05, "memory": 7401, "data_time": 0.01365, "decode.loss_ce": 0.23235, "decode.acc_seg": 90.7428, "loss": 0.23235, "time": 0.23514} -{"mode": "train", "epoch": 2, "iter": 32250, "lr": 5e-05, "memory": 7401, "data_time": 0.01386, "decode.loss_ce": 0.22884, "decode.acc_seg": 90.75896, "loss": 0.22884, "time": 0.23582} -{"mode": "train", "epoch": 2, "iter": 32300, "lr": 5e-05, "memory": 7401, "data_time": 0.01448, "decode.loss_ce": 0.22922, "decode.acc_seg": 90.90773, "loss": 0.22922, "time": 0.22265} -{"mode": "train", "epoch": 2, "iter": 32350, "lr": 5e-05, "memory": 7401, "data_time": 0.01444, "decode.loss_ce": 0.23171, "decode.acc_seg": 90.71425, "loss": 0.23171, "time": 0.2353} -{"mode": "train", "epoch": 2, "iter": 32400, "lr": 5e-05, "memory": 7401, "data_time": 0.01411, "decode.loss_ce": 0.22815, "decode.acc_seg": 90.9131, "loss": 0.22815, "time": 0.22666} -{"mode": "train", "epoch": 2, "iter": 32450, "lr": 5e-05, "memory": 7401, "data_time": 0.01385, "decode.loss_ce": 0.24031, "decode.acc_seg": 90.52723, "loss": 0.24031, "time": 0.22401} -{"mode": "train", "epoch": 2, "iter": 32500, "lr": 5e-05, "memory": 7401, "data_time": 0.01491, "decode.loss_ce": 0.24212, "decode.acc_seg": 90.27845, "loss": 0.24212, "time": 0.22361} -{"mode": "train", "epoch": 2, "iter": 32550, "lr": 4e-05, "memory": 7401, "data_time": 0.01456, "decode.loss_ce": 0.23566, "decode.acc_seg": 90.62327, "loss": 0.23566, "time": 0.23311} -{"mode": "train", "epoch": 2, "iter": 32600, "lr": 4e-05, "memory": 7401, "data_time": 0.01432, "decode.loss_ce": 0.22372, "decode.acc_seg": 90.94242, "loss": 0.22372, "time": 0.2278} -{"mode": "train", "epoch": 2, "iter": 32650, "lr": 4e-05, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.23872, "decode.acc_seg": 90.6027, "loss": 0.23872, "time": 0.21977} -{"mode": "train", "epoch": 2, "iter": 32700, "lr": 4e-05, "memory": 7401, "data_time": 0.0142, "decode.loss_ce": 0.2303, "decode.acc_seg": 90.78958, "loss": 0.2303, "time": 0.22645} -{"mode": "train", "epoch": 2, "iter": 32750, "lr": 4e-05, "memory": 7401, "data_time": 0.01415, "decode.loss_ce": 0.23068, "decode.acc_seg": 90.83794, "loss": 0.23068, "time": 0.236} -{"mode": "train", "epoch": 2, "iter": 32800, "lr": 4e-05, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.22791, "decode.acc_seg": 90.81311, "loss": 0.22791, "time": 0.22145} -{"mode": "train", "epoch": 2, "iter": 32850, "lr": 4e-05, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.23202, "decode.acc_seg": 90.67486, "loss": 0.23202, "time": 0.22056} -{"mode": "train", "epoch": 2, "iter": 32900, "lr": 4e-05, 
"memory": 7401, "data_time": 0.01463, "decode.loss_ce": 0.23085, "decode.acc_seg": 90.79495, "loss": 0.23085, "time": 0.22553} -{"mode": "train", "epoch": 2, "iter": 32950, "lr": 4e-05, "memory": 7401, "data_time": 0.0141, "decode.loss_ce": 0.23477, "decode.acc_seg": 90.65462, "loss": 0.23477, "time": 0.2352} -{"mode": "train", "epoch": 2, "iter": 33000, "lr": 4e-05, "memory": 7401, "data_time": 0.01421, "decode.loss_ce": 0.22978, "decode.acc_seg": 90.90406, "loss": 0.22978, "time": 0.2266} -{"mode": "train", "epoch": 2, "iter": 33050, "lr": 4e-05, "memory": 7401, "data_time": 0.01446, "decode.loss_ce": 0.23362, "decode.acc_seg": 90.57098, "loss": 0.23362, "time": 0.22447} -{"mode": "train", "epoch": 2, "iter": 33100, "lr": 4e-05, "memory": 7401, "data_time": 0.0143, "decode.loss_ce": 0.23233, "decode.acc_seg": 90.70321, "loss": 0.23233, "time": 0.23406} -{"mode": "train", "epoch": 2, "iter": 33150, "lr": 4e-05, "memory": 7401, "data_time": 0.01443, "decode.loss_ce": 0.23172, "decode.acc_seg": 90.77526, "loss": 0.23172, "time": 0.22114} -{"mode": "train", "epoch": 2, "iter": 33200, "lr": 4e-05, "memory": 7401, "data_time": 0.01445, "decode.loss_ce": 0.23356, "decode.acc_seg": 90.64096, "loss": 0.23356, "time": 0.22674} -{"mode": "train", "epoch": 2, "iter": 33250, "lr": 4e-05, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.22775, "decode.acc_seg": 90.8175, "loss": 0.22775, "time": 0.22494} -{"mode": "train", "epoch": 2, "iter": 33300, "lr": 4e-05, "memory": 7401, "data_time": 0.01376, "decode.loss_ce": 0.23218, "decode.acc_seg": 90.6653, "loss": 0.23218, "time": 0.21978} -{"mode": "train", "epoch": 2, "iter": 33350, "lr": 4e-05, "memory": 7401, "data_time": 0.01424, "decode.loss_ce": 0.23095, "decode.acc_seg": 90.70321, "loss": 0.23095, "time": 0.22368} -{"mode": "train", "epoch": 2, "iter": 33400, "lr": 4e-05, "memory": 7401, "data_time": 0.01473, "decode.loss_ce": 0.22436, "decode.acc_seg": 91.02079, "loss": 0.22436, "time": 0.22986} -{"mode": "train", "epoch": 2, "iter": 33450, "lr": 4e-05, "memory": 7401, "data_time": 0.01423, "decode.loss_ce": 0.22362, "decode.acc_seg": 90.9464, "loss": 0.22362, "time": 0.22349} -{"mode": "train", "epoch": 2, "iter": 33500, "lr": 4e-05, "memory": 7401, "data_time": 0.01413, "decode.loss_ce": 0.22712, "decode.acc_seg": 90.93223, "loss": 0.22712, "time": 0.22693} -{"mode": "train", "epoch": 2, "iter": 33550, "lr": 4e-05, "memory": 7401, "data_time": 0.01473, "decode.loss_ce": 0.22321, "decode.acc_seg": 91.0858, "loss": 0.22321, "time": 0.22718} -{"mode": "train", "epoch": 2, "iter": 33600, "lr": 4e-05, "memory": 7401, "data_time": 0.0143, "decode.loss_ce": 0.22577, "decode.acc_seg": 91.08109, "loss": 0.22577, "time": 0.22803} -{"mode": "train", "epoch": 2, "iter": 33650, "lr": 4e-05, "memory": 7401, "data_time": 0.01422, "decode.loss_ce": 0.22976, "decode.acc_seg": 90.71987, "loss": 0.22976, "time": 0.22979} -{"mode": "train", "epoch": 2, "iter": 33700, "lr": 4e-05, "memory": 7401, "data_time": 0.01424, "decode.loss_ce": 0.21722, "decode.acc_seg": 91.09897, "loss": 0.21722, "time": 0.22589} -{"mode": "train", "epoch": 2, "iter": 33750, "lr": 4e-05, "memory": 7401, "data_time": 0.01521, "decode.loss_ce": 0.22663, "decode.acc_seg": 90.88778, "loss": 0.22663, "time": 0.23014} -{"mode": "train", "epoch": 2, "iter": 33800, "lr": 4e-05, "memory": 7401, "data_time": 0.01459, "decode.loss_ce": 0.23182, "decode.acc_seg": 90.8422, "loss": 0.23182, "time": 0.22465} -{"mode": "train", "epoch": 2, "iter": 33850, "lr": 4e-05, "memory": 7401, "data_time": 
0.0144, "decode.loss_ce": 0.23246, "decode.acc_seg": 90.69036, "loss": 0.23246, "time": 0.23781} -{"mode": "train", "epoch": 2, "iter": 33900, "lr": 4e-05, "memory": 7401, "data_time": 0.01466, "decode.loss_ce": 0.22509, "decode.acc_seg": 91.049, "loss": 0.22509, "time": 0.22294} -{"mode": "train", "epoch": 2, "iter": 33950, "lr": 4e-05, "memory": 7401, "data_time": 0.01407, "decode.loss_ce": 0.22558, "decode.acc_seg": 91.02546, "loss": 0.22558, "time": 0.22072} -{"mode": "train", "epoch": 2, "iter": 34000, "lr": 4e-05, "memory": 7401, "data_time": 0.0141, "decode.loss_ce": 0.21729, "decode.acc_seg": 91.26921, "loss": 0.21729, "time": 0.24147} -{"mode": "train", "epoch": 2, "iter": 34050, "lr": 4e-05, "memory": 7401, "data_time": 0.0141, "decode.loss_ce": 0.22985, "decode.acc_seg": 90.67808, "loss": 0.22985, "time": 0.23332} -{"mode": "train", "epoch": 2, "iter": 34100, "lr": 4e-05, "memory": 7401, "data_time": 0.01465, "decode.loss_ce": 0.22444, "decode.acc_seg": 91.07517, "loss": 0.22444, "time": 0.22017} -{"mode": "train", "epoch": 2, "iter": 34150, "lr": 4e-05, "memory": 7401, "data_time": 0.01455, "decode.loss_ce": 0.22712, "decode.acc_seg": 90.91139, "loss": 0.22712, "time": 0.22516} -{"mode": "train", "epoch": 2, "iter": 34200, "lr": 4e-05, "memory": 7401, "data_time": 0.01403, "decode.loss_ce": 0.22939, "decode.acc_seg": 90.78296, "loss": 0.22939, "time": 0.2382} -{"mode": "train", "epoch": 2, "iter": 34250, "lr": 4e-05, "memory": 7401, "data_time": 0.01512, "decode.loss_ce": 0.22408, "decode.acc_seg": 90.94145, "loss": 0.22408, "time": 0.22255} -{"mode": "train", "epoch": 2, "iter": 34300, "lr": 4e-05, "memory": 7401, "data_time": 0.0147, "decode.loss_ce": 0.22897, "decode.acc_seg": 90.93588, "loss": 0.22897, "time": 0.22195} -{"mode": "train", "epoch": 2, "iter": 34350, "lr": 4e-05, "memory": 7401, "data_time": 0.01867, "decode.loss_ce": 0.22513, "decode.acc_seg": 90.99434, "loss": 0.22513, "time": 0.2337} -{"mode": "train", "epoch": 2, "iter": 34400, "lr": 3e-05, "memory": 7401, "data_time": 0.01422, "decode.loss_ce": 0.22185, "decode.acc_seg": 91.01825, "loss": 0.22185, "time": 0.22447} -{"mode": "train", "epoch": 2, "iter": 34450, "lr": 3e-05, "memory": 7401, "data_time": 0.01433, "decode.loss_ce": 0.22254, "decode.acc_seg": 91.20016, "loss": 0.22254, "time": 0.22867} -{"mode": "train", "epoch": 2, "iter": 34500, "lr": 3e-05, "memory": 7401, "data_time": 0.01401, "decode.loss_ce": 0.22587, "decode.acc_seg": 91.01109, "loss": 0.22587, "time": 0.22392} -{"mode": "train", "epoch": 2, "iter": 34550, "lr": 3e-05, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.23325, "decode.acc_seg": 90.61057, "loss": 0.23325, "time": 0.22879} -{"mode": "train", "epoch": 2, "iter": 34600, "lr": 3e-05, "memory": 7401, "data_time": 0.01409, "decode.loss_ce": 0.2188, "decode.acc_seg": 91.15502, "loss": 0.2188, "time": 0.2305} -{"mode": "train", "epoch": 2, "iter": 34650, "lr": 3e-05, "memory": 7401, "data_time": 0.01438, "decode.loss_ce": 0.22099, "decode.acc_seg": 91.1031, "loss": 0.22099, "time": 0.22591} -{"mode": "train", "epoch": 2, "iter": 34700, "lr": 3e-05, "memory": 7401, "data_time": 0.01426, "decode.loss_ce": 0.22814, "decode.acc_seg": 90.90191, "loss": 0.22814, "time": 0.22579} -{"mode": "train", "epoch": 2, "iter": 34750, "lr": 3e-05, "memory": 7401, "data_time": 0.01427, "decode.loss_ce": 0.22903, "decode.acc_seg": 90.97506, "loss": 0.22903, "time": 0.22596} -{"mode": "train", "epoch": 2, "iter": 34800, "lr": 3e-05, "memory": 7401, "data_time": 0.01421, "decode.loss_ce": 
0.22329, "decode.acc_seg": 90.86866, "loss": 0.22329, "time": 0.23383} -{"mode": "train", "epoch": 2, "iter": 34850, "lr": 3e-05, "memory": 7401, "data_time": 0.01364, "decode.loss_ce": 0.21979, "decode.acc_seg": 91.22021, "loss": 0.21979, "time": 0.22357} -{"mode": "train", "epoch": 2, "iter": 34900, "lr": 3e-05, "memory": 7401, "data_time": 0.0142, "decode.loss_ce": 0.22059, "decode.acc_seg": 91.21588, "loss": 0.22059, "time": 0.22448} -{"mode": "train", "epoch": 2, "iter": 34950, "lr": 3e-05, "memory": 7401, "data_time": 0.0134, "decode.loss_ce": 0.21778, "decode.acc_seg": 91.07783, "loss": 0.21778, "time": 0.23858} -{"mode": "train", "epoch": 2, "iter": 35000, "lr": 3e-05, "memory": 7401, "data_time": 0.01452, "decode.loss_ce": 0.21331, "decode.acc_seg": 91.32972, "loss": 0.21331, "time": 0.22271} -{"mode": "train", "epoch": 2, "iter": 35050, "lr": 3e-05, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.21886, "decode.acc_seg": 91.23767, "loss": 0.21886, "time": 0.22693} -{"mode": "train", "epoch": 2, "iter": 35100, "lr": 3e-05, "memory": 7401, "data_time": 0.0147, "decode.loss_ce": 0.21795, "decode.acc_seg": 91.21672, "loss": 0.21795, "time": 0.23112} -{"mode": "train", "epoch": 2, "iter": 35150, "lr": 3e-05, "memory": 7401, "data_time": 0.01357, "decode.loss_ce": 0.21943, "decode.acc_seg": 91.20869, "loss": 0.21943, "time": 0.21853} -{"mode": "train", "epoch": 2, "iter": 35200, "lr": 3e-05, "memory": 7401, "data_time": 0.01387, "decode.loss_ce": 0.22333, "decode.acc_seg": 91.09856, "loss": 0.22333, "time": 0.23292} -{"mode": "train", "epoch": 2, "iter": 35250, "lr": 3e-05, "memory": 7401, "data_time": 0.01372, "decode.loss_ce": 0.21815, "decode.acc_seg": 91.14795, "loss": 0.21815, "time": 0.23023} -{"mode": "train", "epoch": 2, "iter": 35300, "lr": 3e-05, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.21987, "decode.acc_seg": 91.00664, "loss": 0.21987, "time": 0.2212} -{"mode": "train", "epoch": 2, "iter": 35350, "lr": 3e-05, "memory": 7401, "data_time": 0.01361, "decode.loss_ce": 0.21978, "decode.acc_seg": 91.14446, "loss": 0.21978, "time": 0.22077} -{"mode": "train", "epoch": 2, "iter": 35400, "lr": 3e-05, "memory": 7401, "data_time": 0.01371, "decode.loss_ce": 0.22127, "decode.acc_seg": 91.01832, "loss": 0.22127, "time": 0.22352} -{"mode": "train", "epoch": 2, "iter": 35450, "lr": 3e-05, "memory": 7401, "data_time": 0.01387, "decode.loss_ce": 0.23538, "decode.acc_seg": 90.61837, "loss": 0.23538, "time": 0.23389} -{"mode": "train", "epoch": 2, "iter": 35500, "lr": 3e-05, "memory": 7401, "data_time": 0.01414, "decode.loss_ce": 0.22218, "decode.acc_seg": 90.95765, "loss": 0.22218, "time": 0.22304} -{"mode": "train", "epoch": 2, "iter": 35550, "lr": 3e-05, "memory": 7401, "data_time": 0.01448, "decode.loss_ce": 0.21478, "decode.acc_seg": 91.3441, "loss": 0.21478, "time": 0.23047} -{"mode": "train", "epoch": 2, "iter": 35600, "lr": 3e-05, "memory": 7401, "data_time": 0.01416, "decode.loss_ce": 0.2146, "decode.acc_seg": 91.31584, "loss": 0.2146, "time": 0.22925} -{"mode": "train", "epoch": 2, "iter": 35650, "lr": 3e-05, "memory": 7401, "data_time": 0.01364, "decode.loss_ce": 0.21202, "decode.acc_seg": 91.49127, "loss": 0.21202, "time": 0.22406} -{"mode": "train", "epoch": 2, "iter": 35700, "lr": 3e-05, "memory": 7401, "data_time": 0.01351, "decode.loss_ce": 0.2128, "decode.acc_seg": 91.31605, "loss": 0.2128, "time": 0.22667} -{"mode": "train", "epoch": 2, "iter": 35750, "lr": 3e-05, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.21854, "decode.acc_seg": 
91.27688, "loss": 0.21854, "time": 0.22605} -{"mode": "train", "epoch": 2, "iter": 35800, "lr": 3e-05, "memory": 7401, "data_time": 0.01469, "decode.loss_ce": 0.21176, "decode.acc_seg": 91.34979, "loss": 0.21176, "time": 0.22617} -{"mode": "train", "epoch": 2, "iter": 35850, "lr": 3e-05, "memory": 7401, "data_time": 0.01411, "decode.loss_ce": 0.21442, "decode.acc_seg": 91.34667, "loss": 0.21442, "time": 0.22946} -{"mode": "train", "epoch": 2, "iter": 35900, "lr": 3e-05, "memory": 7401, "data_time": 0.0144, "decode.loss_ce": 0.21466, "decode.acc_seg": 91.25406, "loss": 0.21466, "time": 0.23561} -{"mode": "train", "epoch": 2, "iter": 35950, "lr": 3e-05, "memory": 7401, "data_time": 0.01362, "decode.loss_ce": 0.21859, "decode.acc_seg": 91.26463, "loss": 0.21859, "time": 0.22675} -{"mode": "train", "epoch": 2, "iter": 36000, "lr": 3e-05, "memory": 7401, "data_time": 0.01384, "decode.loss_ce": 0.21738, "decode.acc_seg": 91.16469, "loss": 0.21738, "time": 0.25164} -{"mode": "val", "epoch": 2, "iter": 250, "lr": 3e-05, "aAcc": 0.8178, "mIoU": 0.4596, "mAcc": 0.5713, "IoU.wall": 0.7534, "IoU.building": 0.809, "IoU.sky": 0.9425, "IoU.floor": 0.8016, "IoU.tree": 0.7276, "IoU.ceiling": 0.8232, "IoU.road": 0.832, "IoU.bed ": 0.8715, "IoU.windowpane": 0.5825, "IoU.grass": 0.6595, "IoU.cabinet": 0.5895, "IoU.sidewalk": 0.6415, "IoU.person": 0.8063, "IoU.earth": 0.3682, "IoU.door": 0.4571, "IoU.table": 0.5858, "IoU.mountain": 0.6008, "IoU.plant": 0.5104, "IoU.curtain": 0.6964, "IoU.chair": 0.542, "IoU.car": 0.8398, "IoU.water": 0.5134, "IoU.painting": 0.6911, "IoU.sofa": 0.6399, "IoU.shelf": 0.4368, "IoU.house": 0.3694, "IoU.sea": 0.5311, "IoU.mirror": 0.604, "IoU.rug": 0.5768, "IoU.field": 0.2548, "IoU.armchair": 0.4243, "IoU.seat": 0.6111, "IoU.fence": 0.3649, "IoU.desk": 0.4738, "IoU.rock": 0.432, "IoU.wardrobe": 0.4512, "IoU.lamp": 0.6052, "IoU.bathtub": 0.7007, "IoU.railing": 0.2643, "IoU.cushion": 0.5543, "IoU.base": 0.2243, "IoU.box": 0.234, "IoU.column": 0.4423, "IoU.signboard": 0.3339, "IoU.chest of drawers": 0.4385, "IoU.counter": 0.2786, "IoU.sand": 0.3541, "IoU.sink": 0.6703, "IoU.skyscraper": 0.3359, "IoU.fireplace": 0.677, "IoU.refrigerator": 0.6302, "IoU.grandstand": 0.4355, "IoU.path": 0.1663, "IoU.stairs": 0.3201, "IoU.runway": 0.6822, "IoU.case": 0.4263, "IoU.pool table": 0.9174, "IoU.pillow": 0.5398, "IoU.screen door": 0.5349, "IoU.stairway": 0.33, "IoU.river": 0.1084, "IoU.bridge": 0.5691, "IoU.bookcase": 0.405, "IoU.blind": 0.3696, "IoU.coffee table": 0.5875, "IoU.toilet": 0.8311, "IoU.flower": 0.3286, "IoU.book": 0.4489, "IoU.hill": 0.1161, "IoU.bench": 0.4088, "IoU.countertop": 0.542, "IoU.stove": 0.7549, "IoU.palm": 0.4801, "IoU.kitchen island": 0.3903, "IoU.computer": 0.5688, "IoU.swivel chair": 0.3712, "IoU.boat": 0.4896, "IoU.bar": 0.3232, "IoU.arcade machine": 0.6469, "IoU.hovel": 0.3884, "IoU.bus": 0.8708, "IoU.towel": 0.604, "IoU.light": 0.496, "IoU.truck": 0.4005, "IoU.tower": 0.0985, "IoU.chandelier": 0.6383, "IoU.awning": 0.2386, "IoU.streetlight": 0.2512, "IoU.booth": 0.3722, "IoU.television receiver": 0.6872, "IoU.airplane": 0.5528, "IoU.dirt track": 0.1225, "IoU.apparel": 0.3565, "IoU.pole": 0.1599, "IoU.land": 0.1047, "IoU.bannister": 0.067, "IoU.escalator": 0.2603, "IoU.ottoman": 0.4352, "IoU.bottle": 0.1782, "IoU.buffet": 0.4967, "IoU.poster": 0.2112, "IoU.stage": 0.1505, "IoU.van": 0.3976, "IoU.ship": 0.5277, "IoU.fountain": 0.1902, "IoU.conveyer belt": 0.7453, "IoU.canopy": 0.1033, "IoU.washer": 0.6521, "IoU.plaything": 0.3388, "IoU.swimming pool": 0.3259, 
"IoU.stool": 0.3656, "IoU.barrel": 0.2845, "IoU.basket": 0.2911, "IoU.waterfall": 0.6663, "IoU.tent": 0.8858, "IoU.bag": 0.11, "IoU.minibike": 0.6332, "IoU.cradle": 0.8295, "IoU.oven": 0.3219, "IoU.ball": 0.4732, "IoU.food": 0.5276, "IoU.step": 0.1393, "IoU.tank": 0.5584, "IoU.trade name": 0.2425, "IoU.microwave": 0.441, "IoU.pot": 0.3382, "IoU.animal": 0.5645, "IoU.bicycle": 0.512, "IoU.lake": 0.5198, "IoU.dishwasher": 0.629, "IoU.screen": 0.6337, "IoU.blanket": 0.1006, "IoU.sculpture": 0.5161, "IoU.hood": 0.5417, "IoU.sconce": 0.3956, "IoU.vase": 0.3672, "IoU.traffic light": 0.2788, "IoU.tray": 0.0717, "IoU.ashcan": 0.4232, "IoU.fan": 0.5999, "IoU.pier": 0.2809, "IoU.crt screen": 0.0772, "IoU.plate": 0.4707, "IoU.monitor": 0.0786, "IoU.bulletin board": 0.4568, "IoU.shower": 0.0049, "IoU.radiator": 0.418, "IoU.glass": 0.1177, "IoU.clock": 0.3622, "IoU.flag": 0.3437, "Acc.wall": 0.8766, "Acc.building": 0.9287, "Acc.sky": 0.9719, "Acc.floor": 0.8992, "Acc.tree": 0.8758, "Acc.ceiling": 0.9098, "Acc.road": 0.9043, "Acc.bed ": 0.9468, "Acc.windowpane": 0.7721, "Acc.grass": 0.828, "Acc.cabinet": 0.7168, "Acc.sidewalk": 0.7833, "Acc.person": 0.9206, "Acc.earth": 0.5199, "Acc.door": 0.5826, "Acc.table": 0.7599, "Acc.mountain": 0.7406, "Acc.plant": 0.6303, "Acc.curtain": 0.8141, "Acc.chair": 0.6706, "Acc.car": 0.9176, "Acc.water": 0.6812, "Acc.painting": 0.8573, "Acc.sofa": 0.7951, "Acc.shelf": 0.6466, "Acc.house": 0.4578, "Acc.sea": 0.7587, "Acc.mirror": 0.6683, "Acc.rug": 0.6576, "Acc.field": 0.3643, "Acc.armchair": 0.6505, "Acc.seat": 0.7754, "Acc.fence": 0.516, "Acc.desk": 0.6921, "Acc.rock": 0.6742, "Acc.wardrobe": 0.6728, "Acc.lamp": 0.7337, "Acc.bathtub": 0.8024, "Acc.railing": 0.3615, "Acc.cushion": 0.6718, "Acc.base": 0.3082, "Acc.box": 0.3588, "Acc.column": 0.5523, "Acc.signboard": 0.4468, "Acc.chest of drawers": 0.6473, "Acc.counter": 0.372, "Acc.sand": 0.4872, "Acc.sink": 0.7585, "Acc.skyscraper": 0.4185, "Acc.fireplace": 0.7869, "Acc.refrigerator": 0.7759, "Acc.grandstand": 0.7083, "Acc.path": 0.2223, "Acc.stairs": 0.4074, "Acc.runway": 0.8704, "Acc.case": 0.5065, "Acc.pool table": 0.9593, "Acc.pillow": 0.682, "Acc.screen door": 0.7051, "Acc.stairway": 0.4047, "Acc.river": 0.1869, "Acc.bridge": 0.6969, "Acc.bookcase": 0.5505, "Acc.blind": 0.4219, "Acc.coffee table": 0.7835, "Acc.toilet": 0.894, "Acc.flower": 0.4601, "Acc.book": 0.6102, "Acc.hill": 0.2088, "Acc.bench": 0.5317, "Acc.countertop": 0.7345, "Acc.stove": 0.8292, "Acc.palm": 0.6281, "Acc.kitchen island": 0.6702, "Acc.computer": 0.6536, "Acc.swivel chair": 0.5312, "Acc.boat": 0.631, "Acc.bar": 0.3933, "Acc.arcade machine": 0.7141, "Acc.hovel": 0.5367, "Acc.bus": 0.9452, "Acc.towel": 0.7465, "Acc.light": 0.6015, "Acc.truck": 0.4915, "Acc.tower": 0.1293, "Acc.chandelier": 0.781, "Acc.awning": 0.2942, "Acc.streetlight": 0.3163, "Acc.booth": 0.3829, "Acc.television receiver": 0.8118, "Acc.airplane": 0.6361, "Acc.dirt track": 0.2436, "Acc.apparel": 0.5078, "Acc.pole": 0.2163, "Acc.land": 0.1521, "Acc.bannister": 0.0944, "Acc.escalator": 0.2812, "Acc.ottoman": 0.5943, "Acc.bottle": 0.2205, "Acc.buffet": 0.6134, "Acc.poster": 0.2925, "Acc.stage": 0.2631, "Acc.van": 0.5097, "Acc.ship": 0.7155, "Acc.fountain": 0.2007, "Acc.conveyer belt": 0.8779, "Acc.canopy": 0.1368, "Acc.washer": 0.665, "Acc.plaything": 0.5219, "Acc.swimming pool": 0.4191, "Acc.stool": 0.4909, "Acc.barrel": 0.5436, "Acc.basket": 0.3709, "Acc.waterfall": 0.7553, "Acc.tent": 0.9632, "Acc.bag": 0.1301, "Acc.minibike": 0.7759, "Acc.cradle": 0.9451, "Acc.oven": 0.5471, 
"Acc.ball": 0.5466, "Acc.food": 0.6301, "Acc.step": 0.1744, "Acc.tank": 0.5964, "Acc.trade name": 0.2811, "Acc.microwave": 0.4806, "Acc.pot": 0.3839, "Acc.animal": 0.591, "Acc.bicycle": 0.7095, "Acc.lake": 0.6241, "Acc.dishwasher": 0.7633, "Acc.screen": 0.8077, "Acc.blanket": 0.1105, "Acc.sculpture": 0.6882, "Acc.hood": 0.5942, "Acc.sconce": 0.4977, "Acc.vase": 0.5113, "Acc.traffic light": 0.4334, "Acc.tray": 0.1057, "Acc.ashcan": 0.543, "Acc.fan": 0.7318, "Acc.pier": 0.424, "Acc.crt screen": 0.2119, "Acc.plate": 0.5982, "Acc.monitor": 0.087, "Acc.bulletin board": 0.5579, "Acc.shower": 0.0059, "Acc.radiator": 0.4512, "Acc.glass": 0.1263, "Acc.clock": 0.4152, "Acc.flag": 0.3752} -{"mode": "train", "epoch": 2, "iter": 36050, "lr": 3e-05, "memory": 7401, "data_time": 0.55367, "decode.loss_ce": 0.218, "decode.acc_seg": 91.19828, "loss": 0.218, "time": 0.76522} -{"mode": "train", "epoch": 2, "iter": 36100, "lr": 3e-05, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.21955, "decode.acc_seg": 91.26917, "loss": 0.21955, "time": 0.2258} -{"mode": "train", "epoch": 2, "iter": 36150, "lr": 3e-05, "memory": 7401, "data_time": 0.01442, "decode.loss_ce": 0.21936, "decode.acc_seg": 91.1704, "loss": 0.21936, "time": 0.23006} -{"mode": "train", "epoch": 2, "iter": 36200, "lr": 2e-05, "memory": 7401, "data_time": 0.01467, "decode.loss_ce": 0.21509, "decode.acc_seg": 91.33919, "loss": 0.21509, "time": 0.23212} -{"mode": "train", "epoch": 2, "iter": 36250, "lr": 2e-05, "memory": 7401, "data_time": 0.01367, "decode.loss_ce": 0.21094, "decode.acc_seg": 91.46875, "loss": 0.21094, "time": 0.2214} -{"mode": "train", "epoch": 2, "iter": 36300, "lr": 2e-05, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.2284, "decode.acc_seg": 90.89076, "loss": 0.2284, "time": 0.23109} -{"mode": "train", "epoch": 2, "iter": 36350, "lr": 2e-05, "memory": 7401, "data_time": 0.01385, "decode.loss_ce": 0.22582, "decode.acc_seg": 91.08249, "loss": 0.22582, "time": 0.23987} -{"mode": "train", "epoch": 2, "iter": 36400, "lr": 2e-05, "memory": 7401, "data_time": 0.01404, "decode.loss_ce": 0.21723, "decode.acc_seg": 91.20877, "loss": 0.21723, "time": 0.21889} -{"mode": "train", "epoch": 2, "iter": 36450, "lr": 2e-05, "memory": 7401, "data_time": 0.01418, "decode.loss_ce": 0.21886, "decode.acc_seg": 91.25951, "loss": 0.21886, "time": 0.22711} -{"mode": "train", "epoch": 2, "iter": 36500, "lr": 2e-05, "memory": 7401, "data_time": 0.01345, "decode.loss_ce": 0.22184, "decode.acc_seg": 91.24373, "loss": 0.22184, "time": 0.22809} -{"mode": "train", "epoch": 2, "iter": 36550, "lr": 2e-05, "memory": 7401, "data_time": 0.01361, "decode.loss_ce": 0.21782, "decode.acc_seg": 91.33109, "loss": 0.21782, "time": 0.22247} -{"mode": "train", "epoch": 2, "iter": 36600, "lr": 2e-05, "memory": 7401, "data_time": 0.01339, "decode.loss_ce": 0.21428, "decode.acc_seg": 91.28593, "loss": 0.21428, "time": 0.22648} -{"mode": "train", "epoch": 2, "iter": 36650, "lr": 2e-05, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.21246, "decode.acc_seg": 91.46581, "loss": 0.21246, "time": 0.22063} -{"mode": "train", "epoch": 2, "iter": 36700, "lr": 2e-05, "memory": 7401, "data_time": 0.01359, "decode.loss_ce": 0.2079, "decode.acc_seg": 91.64304, "loss": 0.2079, "time": 0.22729} -{"mode": "train", "epoch": 2, "iter": 36750, "lr": 2e-05, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.21503, "decode.acc_seg": 91.34136, "loss": 0.21503, "time": 0.22582} -{"mode": "train", "epoch": 2, "iter": 36800, "lr": 2e-05, "memory": 7401, "data_time": 
0.01389, "decode.loss_ce": 0.21726, "decode.acc_seg": 91.23199, "loss": 0.21726, "time": 0.226} -{"mode": "train", "epoch": 2, "iter": 36850, "lr": 2e-05, "memory": 7401, "data_time": 0.01397, "decode.loss_ce": 0.22757, "decode.acc_seg": 90.96177, "loss": 0.22757, "time": 0.231} -{"mode": "train", "epoch": 2, "iter": 36900, "lr": 2e-05, "memory": 7401, "data_time": 0.01407, "decode.loss_ce": 0.21279, "decode.acc_seg": 91.36138, "loss": 0.21279, "time": 0.22454} -{"mode": "train", "epoch": 2, "iter": 36950, "lr": 2e-05, "memory": 7401, "data_time": 0.01378, "decode.loss_ce": 0.21061, "decode.acc_seg": 91.50533, "loss": 0.21061, "time": 0.22315} -{"mode": "train", "epoch": 2, "iter": 37000, "lr": 2e-05, "memory": 7401, "data_time": 0.01404, "decode.loss_ce": 0.20763, "decode.acc_seg": 91.64266, "loss": 0.20763, "time": 0.23131} -{"mode": "train", "epoch": 2, "iter": 37050, "lr": 2e-05, "memory": 7401, "data_time": 0.0143, "decode.loss_ce": 0.20621, "decode.acc_seg": 91.60276, "loss": 0.20621, "time": 0.2316} -{"mode": "train", "epoch": 2, "iter": 37100, "lr": 2e-05, "memory": 7401, "data_time": 0.01449, "decode.loss_ce": 0.21618, "decode.acc_seg": 91.29533, "loss": 0.21618, "time": 0.23687} -{"mode": "train", "epoch": 2, "iter": 37150, "lr": 2e-05, "memory": 7401, "data_time": 0.01391, "decode.loss_ce": 0.21084, "decode.acc_seg": 91.44727, "loss": 0.21084, "time": 0.22752} -{"mode": "train", "epoch": 2, "iter": 37200, "lr": 2e-05, "memory": 7401, "data_time": 0.01435, "decode.loss_ce": 0.21063, "decode.acc_seg": 91.44848, "loss": 0.21063, "time": 0.22412} -{"mode": "train", "epoch": 2, "iter": 37250, "lr": 2e-05, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.20995, "decode.acc_seg": 91.5832, "loss": 0.20995, "time": 0.22145} -{"mode": "train", "epoch": 2, "iter": 37300, "lr": 2e-05, "memory": 7401, "data_time": 0.01794, "decode.loss_ce": 0.21123, "decode.acc_seg": 91.50569, "loss": 0.21123, "time": 0.22667} -{"mode": "train", "epoch": 2, "iter": 37350, "lr": 2e-05, "memory": 7401, "data_time": 0.0137, "decode.loss_ce": 0.2109, "decode.acc_seg": 91.45923, "loss": 0.2109, "time": 0.22901} -{"mode": "train", "epoch": 2, "iter": 37400, "lr": 2e-05, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.20225, "decode.acc_seg": 91.7988, "loss": 0.20225, "time": 0.23416} -{"mode": "train", "epoch": 2, "iter": 37450, "lr": 2e-05, "memory": 7401, "data_time": 0.01363, "decode.loss_ce": 0.21637, "decode.acc_seg": 91.32616, "loss": 0.21637, "time": 0.23075} -{"mode": "train", "epoch": 2, "iter": 37500, "lr": 2e-05, "memory": 7401, "data_time": 0.01438, "decode.loss_ce": 0.20781, "decode.acc_seg": 91.50418, "loss": 0.20781, "time": 0.22469} -{"mode": "train", "epoch": 2, "iter": 37550, "lr": 2e-05, "memory": 7401, "data_time": 0.01442, "decode.loss_ce": 0.20334, "decode.acc_seg": 91.78832, "loss": 0.20334, "time": 0.22931} -{"mode": "train", "epoch": 2, "iter": 37600, "lr": 2e-05, "memory": 7401, "data_time": 0.01359, "decode.loss_ce": 0.20963, "decode.acc_seg": 91.50683, "loss": 0.20963, "time": 0.21875} -{"mode": "train", "epoch": 2, "iter": 37650, "lr": 2e-05, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.21057, "decode.acc_seg": 91.38515, "loss": 0.21057, "time": 0.22464} -{"mode": "train", "epoch": 2, "iter": 37700, "lr": 2e-05, "memory": 7401, "data_time": 0.01435, "decode.loss_ce": 0.21195, "decode.acc_seg": 91.47684, "loss": 0.21195, "time": 0.22515} -{"mode": "train", "epoch": 2, "iter": 37750, "lr": 2e-05, "memory": 7401, "data_time": 0.01384, "decode.loss_ce": 
0.20811, "decode.acc_seg": 91.57603, "loss": 0.20811, "time": 0.22661} -{"mode": "train", "epoch": 2, "iter": 37800, "lr": 2e-05, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.21289, "decode.acc_seg": 91.39582, "loss": 0.21289, "time": 0.23067} -{"mode": "train", "epoch": 2, "iter": 37850, "lr": 2e-05, "memory": 7401, "data_time": 0.01374, "decode.loss_ce": 0.20567, "decode.acc_seg": 91.67688, "loss": 0.20567, "time": 0.22738} -{"mode": "train", "epoch": 2, "iter": 37900, "lr": 2e-05, "memory": 7401, "data_time": 0.01346, "decode.loss_ce": 0.20742, "decode.acc_seg": 91.59246, "loss": 0.20742, "time": 0.23456} -{"mode": "train", "epoch": 2, "iter": 37950, "lr": 1e-05, "memory": 7401, "data_time": 0.01347, "decode.loss_ce": 0.19964, "decode.acc_seg": 91.92319, "loss": 0.19964, "time": 0.22286} -{"mode": "train", "epoch": 2, "iter": 38000, "lr": 1e-05, "memory": 7401, "data_time": 0.01399, "decode.loss_ce": 0.20808, "decode.acc_seg": 91.66195, "loss": 0.20808, "time": 0.22827} -{"mode": "train", "epoch": 2, "iter": 38050, "lr": 1e-05, "memory": 7401, "data_time": 0.01372, "decode.loss_ce": 0.20857, "decode.acc_seg": 91.64983, "loss": 0.20857, "time": 0.22396} -{"mode": "train", "epoch": 2, "iter": 38100, "lr": 1e-05, "memory": 7401, "data_time": 0.01399, "decode.loss_ce": 0.21365, "decode.acc_seg": 91.26774, "loss": 0.21365, "time": 0.23252} -{"mode": "train", "epoch": 2, "iter": 38150, "lr": 1e-05, "memory": 7401, "data_time": 0.01375, "decode.loss_ce": 0.20805, "decode.acc_seg": 91.59072, "loss": 0.20805, "time": 0.22677} -{"mode": "train", "epoch": 2, "iter": 38200, "lr": 1e-05, "memory": 7401, "data_time": 0.01425, "decode.loss_ce": 0.2066, "decode.acc_seg": 91.64017, "loss": 0.2066, "time": 0.2256} -{"mode": "train", "epoch": 2, "iter": 38250, "lr": 1e-05, "memory": 7401, "data_time": 0.01395, "decode.loss_ce": 0.20893, "decode.acc_seg": 91.56587, "loss": 0.20893, "time": 0.22701} -{"mode": "train", "epoch": 2, "iter": 38300, "lr": 1e-05, "memory": 7401, "data_time": 0.01349, "decode.loss_ce": 0.20987, "decode.acc_seg": 91.5694, "loss": 0.20987, "time": 0.22425} -{"mode": "train", "epoch": 2, "iter": 38350, "lr": 1e-05, "memory": 7401, "data_time": 0.0136, "decode.loss_ce": 0.21411, "decode.acc_seg": 91.33737, "loss": 0.21411, "time": 0.23387} -{"mode": "train", "epoch": 2, "iter": 38400, "lr": 1e-05, "memory": 7401, "data_time": 0.01389, "decode.loss_ce": 0.21123, "decode.acc_seg": 91.4589, "loss": 0.21123, "time": 0.22268} -{"mode": "train", "epoch": 2, "iter": 38450, "lr": 1e-05, "memory": 7401, "data_time": 0.01407, "decode.loss_ce": 0.21262, "decode.acc_seg": 91.56054, "loss": 0.21262, "time": 0.22686} -{"mode": "train", "epoch": 2, "iter": 38500, "lr": 1e-05, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.20743, "decode.acc_seg": 91.68582, "loss": 0.20743, "time": 0.23644} -{"mode": "train", "epoch": 2, "iter": 38550, "lr": 1e-05, "memory": 7401, "data_time": 0.01369, "decode.loss_ce": 0.21278, "decode.acc_seg": 91.45004, "loss": 0.21278, "time": 0.22103} -{"mode": "train", "epoch": 2, "iter": 38600, "lr": 1e-05, "memory": 7401, "data_time": 0.01366, "decode.loss_ce": 0.20901, "decode.acc_seg": 91.50299, "loss": 0.20901, "time": 0.2299} -{"mode": "train", "epoch": 2, "iter": 38650, "lr": 1e-05, "memory": 7401, "data_time": 0.01364, "decode.loss_ce": 0.20754, "decode.acc_seg": 91.56729, "loss": 0.20754, "time": 0.22178} -{"mode": "train", "epoch": 2, "iter": 38700, "lr": 1e-05, "memory": 7401, "data_time": 0.01433, "decode.loss_ce": 0.20927, "decode.acc_seg": 
91.61324, "loss": 0.20927, "time": 0.22424} -{"mode": "train", "epoch": 2, "iter": 38750, "lr": 1e-05, "memory": 7401, "data_time": 0.01416, "decode.loss_ce": 0.21363, "decode.acc_seg": 91.43232, "loss": 0.21363, "time": 0.22585} -{"mode": "train", "epoch": 2, "iter": 38800, "lr": 1e-05, "memory": 7401, "data_time": 0.01361, "decode.loss_ce": 0.20678, "decode.acc_seg": 91.70025, "loss": 0.20678, "time": 0.22383} -{"mode": "train", "epoch": 2, "iter": 38850, "lr": 1e-05, "memory": 7401, "data_time": 0.01398, "decode.loss_ce": 0.2167, "decode.acc_seg": 91.30655, "loss": 0.2167, "time": 0.23605} -{"mode": "train", "epoch": 2, "iter": 38900, "lr": 1e-05, "memory": 7401, "data_time": 0.01365, "decode.loss_ce": 0.20273, "decode.acc_seg": 91.64025, "loss": 0.20273, "time": 0.21971} -{"mode": "train", "epoch": 2, "iter": 38950, "lr": 1e-05, "memory": 7401, "data_time": 0.01373, "decode.loss_ce": 0.20121, "decode.acc_seg": 91.94475, "loss": 0.20121, "time": 0.22646} -{"mode": "train", "epoch": 2, "iter": 39000, "lr": 1e-05, "memory": 7401, "data_time": 0.0138, "decode.loss_ce": 0.20683, "decode.acc_seg": 91.80829, "loss": 0.20683, "time": 0.22594} -{"mode": "train", "epoch": 2, "iter": 39050, "lr": 1e-05, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.20789, "decode.acc_seg": 91.57739, "loss": 0.20789, "time": 0.23087} -{"mode": "train", "epoch": 2, "iter": 39100, "lr": 1e-05, "memory": 7401, "data_time": 0.01393, "decode.loss_ce": 0.20912, "decode.acc_seg": 91.38234, "loss": 0.20912, "time": 0.22115} -{"mode": "train", "epoch": 2, "iter": 39150, "lr": 1e-05, "memory": 7401, "data_time": 0.01424, "decode.loss_ce": 0.20084, "decode.acc_seg": 91.83429, "loss": 0.20084, "time": 0.22527} -{"mode": "train", "epoch": 2, "iter": 39200, "lr": 1e-05, "memory": 7401, "data_time": 0.01402, "decode.loss_ce": 0.20382, "decode.acc_seg": 91.78701, "loss": 0.20382, "time": 0.22396} -{"mode": "train", "epoch": 2, "iter": 39250, "lr": 1e-05, "memory": 7401, "data_time": 0.01429, "decode.loss_ce": 0.20449, "decode.acc_seg": 91.64664, "loss": 0.20449, "time": 0.22731} -{"mode": "train", "epoch": 2, "iter": 39300, "lr": 1e-05, "memory": 7401, "data_time": 0.01349, "decode.loss_ce": 0.20834, "decode.acc_seg": 91.50477, "loss": 0.20834, "time": 0.23214} -{"mode": "train", "epoch": 2, "iter": 39350, "lr": 1e-05, "memory": 7401, "data_time": 0.01368, "decode.loss_ce": 0.20104, "decode.acc_seg": 91.95542, "loss": 0.20104, "time": 0.22573} -{"mode": "train", "epoch": 2, "iter": 39400, "lr": 1e-05, "memory": 7401, "data_time": 0.01377, "decode.loss_ce": 0.19855, "decode.acc_seg": 91.95394, "loss": 0.19855, "time": 0.22725} -{"mode": "train", "epoch": 2, "iter": 39450, "lr": 1e-05, "memory": 7401, "data_time": 0.01417, "decode.loss_ce": 0.20102, "decode.acc_seg": 91.78002, "loss": 0.20102, "time": 0.2247} -{"mode": "train", "epoch": 2, "iter": 39500, "lr": 0.0, "memory": 7401, "data_time": 0.0139, "decode.loss_ce": 0.20402, "decode.acc_seg": 91.58404, "loss": 0.20402, "time": 0.23048} -{"mode": "train", "epoch": 2, "iter": 39550, "lr": 0.0, "memory": 7401, "data_time": 0.014, "decode.loss_ce": 0.20574, "decode.acc_seg": 91.64258, "loss": 0.20574, "time": 0.22812} -{"mode": "train", "epoch": 2, "iter": 39600, "lr": 0.0, "memory": 7401, "data_time": 0.0143, "decode.loss_ce": 0.20536, "decode.acc_seg": 91.68261, "loss": 0.20536, "time": 0.22306} -{"mode": "train", "epoch": 2, "iter": 39650, "lr": 0.0, "memory": 7401, "data_time": 0.01394, "decode.loss_ce": 0.20462, "decode.acc_seg": 91.64666, "loss": 0.20462, "time": 
0.22318} -{"mode": "train", "epoch": 2, "iter": 39700, "lr": 0.0, "memory": 7401, "data_time": 0.01408, "decode.loss_ce": 0.19674, "decode.acc_seg": 91.70103, "loss": 0.19674, "time": 0.23048} -{"mode": "train", "epoch": 2, "iter": 39750, "lr": 0.0, "memory": 7401, "data_time": 0.01388, "decode.loss_ce": 0.19706, "decode.acc_seg": 92.00382, "loss": 0.19706, "time": 0.22764} -{"mode": "train", "epoch": 2, "iter": 39800, "lr": 0.0, "memory": 7401, "data_time": 0.01317, "decode.loss_ce": 0.20338, "decode.acc_seg": 91.75991, "loss": 0.20338, "time": 0.23161} -{"mode": "train", "epoch": 2, "iter": 39850, "lr": 0.0, "memory": 7401, "data_time": 0.01473, "decode.loss_ce": 0.19919, "decode.acc_seg": 91.92282, "loss": 0.19919, "time": 0.22363} -{"mode": "train", "epoch": 2, "iter": 39900, "lr": 0.0, "memory": 7401, "data_time": 0.01401, "decode.loss_ce": 0.20211, "decode.acc_seg": 91.86198, "loss": 0.20211, "time": 0.22947} -{"mode": "train", "epoch": 2, "iter": 39950, "lr": 0.0, "memory": 7401, "data_time": 0.01403, "decode.loss_ce": 0.20048, "decode.acc_seg": 91.70485, "loss": 0.20048, "time": 0.23191} -{"mode": "train", "epoch": 2, "iter": 40000, "lr": 0.0, "memory": 7401, "data_time": 0.01415, "decode.loss_ce": 0.19637, "decode.acc_seg": 92.05002, "loss": 0.19637, "time": 0.24595} -{"mode": "val", "epoch": 2, "iter": 250, "lr": 0.0, "aAcc": 0.8177, "mIoU": 0.4606, "mAcc": 0.5736, "IoU.wall": 0.7519, "IoU.building": 0.8081, "IoU.sky": 0.9412, "IoU.floor": 0.803, "IoU.tree": 0.7263, "IoU.ceiling": 0.82, "IoU.road": 0.8279, "IoU.bed ": 0.8733, "IoU.windowpane": 0.5842, "IoU.grass": 0.6643, "IoU.cabinet": 0.5949, "IoU.sidewalk": 0.6394, "IoU.person": 0.8061, "IoU.earth": 0.367, "IoU.door": 0.4683, "IoU.table": 0.5961, "IoU.mountain": 0.5842, "IoU.plant": 0.5045, "IoU.curtain": 0.7127, "IoU.chair": 0.5623, "IoU.car": 0.8413, "IoU.water": 0.5069, "IoU.painting": 0.685, "IoU.sofa": 0.6451, "IoU.shelf": 0.4463, "IoU.house": 0.3998, "IoU.sea": 0.5449, "IoU.mirror": 0.607, "IoU.rug": 0.5653, "IoU.field": 0.2787, "IoU.armchair": 0.4057, "IoU.seat": 0.6056, "IoU.fence": 0.3581, "IoU.desk": 0.4803, "IoU.rock": 0.4135, "IoU.wardrobe": 0.4356, "IoU.lamp": 0.6091, "IoU.bathtub": 0.709, "IoU.railing": 0.2543, "IoU.cushion": 0.5623, "IoU.base": 0.24, "IoU.box": 0.2255, "IoU.column": 0.4392, "IoU.signboard": 0.3392, "IoU.chest of drawers": 0.4438, "IoU.counter": 0.2939, "IoU.sand": 0.3584, "IoU.sink": 0.6774, "IoU.skyscraper": 0.3624, "IoU.fireplace": 0.6895, "IoU.refrigerator": 0.6198, "IoU.grandstand": 0.4303, "IoU.path": 0.1645, "IoU.stairs": 0.3213, "IoU.runway": 0.6768, "IoU.case": 0.4357, "IoU.pool table": 0.9116, "IoU.pillow": 0.5495, "IoU.screen door": 0.5084, "IoU.stairway": 0.3418, "IoU.river": 0.1083, "IoU.bridge": 0.5342, "IoU.bookcase": 0.4, "IoU.blind": 0.3985, "IoU.coffee table": 0.5918, "IoU.toilet": 0.8362, "IoU.flower": 0.3262, "IoU.book": 0.4592, "IoU.hill": 0.1071, "IoU.bench": 0.4171, "IoU.countertop": 0.5501, "IoU.stove": 0.7517, "IoU.palm": 0.4846, "IoU.kitchen island": 0.3944, "IoU.computer": 0.5717, "IoU.swivel chair": 0.3804, "IoU.boat": 0.4965, "IoU.bar": 0.3532, "IoU.arcade machine": 0.509, "IoU.hovel": 0.3928, "IoU.bus": 0.8787, "IoU.towel": 0.5861, "IoU.light": 0.4922, "IoU.truck": 0.4053, "IoU.tower": 0.0775, "IoU.chandelier": 0.6457, "IoU.awning": 0.2213, "IoU.streetlight": 0.2549, "IoU.booth": 0.3667, "IoU.television receiver": 0.7131, "IoU.airplane": 0.5288, "IoU.dirt track": 0.0996, "IoU.apparel": 0.3448, "IoU.pole": 0.1727, "IoU.land": 0.0676, "IoU.bannister": 0.0748, 
"IoU.escalator": 0.2787, "IoU.ottoman": 0.43, "IoU.bottle": 0.2119, "IoU.buffet": 0.5243, "IoU.poster": 0.222, "IoU.stage": 0.1349, "IoU.van": 0.3883, "IoU.ship": 0.4589, "IoU.fountain": 0.1902, "IoU.conveyer belt": 0.7322, "IoU.canopy": 0.0798, "IoU.washer": 0.67, "IoU.plaything": 0.3573, "IoU.swimming pool": 0.34, "IoU.stool": 0.4004, "IoU.barrel": 0.3153, "IoU.basket": 0.3118, "IoU.waterfall": 0.5588, "IoU.tent": 0.9002, "IoU.bag": 0.1407, "IoU.minibike": 0.6526, "IoU.cradle": 0.8382, "IoU.oven": 0.3005, "IoU.ball": 0.4864, "IoU.food": 0.5546, "IoU.step": 0.1545, "IoU.tank": 0.5269, "IoU.trade name": 0.1826, "IoU.microwave": 0.514, "IoU.pot": 0.3694, "IoU.animal": 0.5612, "IoU.bicycle": 0.5123, "IoU.lake": 0.51, "IoU.dishwasher": 0.6124, "IoU.screen": 0.5865, "IoU.blanket": 0.1271, "IoU.sculpture": 0.5235, "IoU.hood": 0.5548, "IoU.sconce": 0.3917, "IoU.vase": 0.3679, "IoU.traffic light": 0.28, "IoU.tray": 0.087, "IoU.ashcan": 0.4215, "IoU.fan": 0.5985, "IoU.pier": 0.2862, "IoU.crt screen": 0.0584, "IoU.plate": 0.4884, "IoU.monitor": 0.0948, "IoU.bulletin board": 0.4801, "IoU.shower": 0.0113, "IoU.radiator": 0.4375, "IoU.glass": 0.1233, "IoU.clock": 0.3867, "IoU.flag": 0.3453, "Acc.wall": 0.8702, "Acc.building": 0.9196, "Acc.sky": 0.974, "Acc.floor": 0.8979, "Acc.tree": 0.8721, "Acc.ceiling": 0.902, "Acc.road": 0.8994, "Acc.bed ": 0.9481, "Acc.windowpane": 0.7708, "Acc.grass": 0.8326, "Acc.cabinet": 0.7273, "Acc.sidewalk": 0.7981, "Acc.person": 0.9253, "Acc.earth": 0.4952, "Acc.door": 0.6053, "Acc.table": 0.7585, "Acc.mountain": 0.7615, "Acc.plant": 0.6328, "Acc.curtain": 0.8221, "Acc.chair": 0.7088, "Acc.car": 0.9227, "Acc.water": 0.6797, "Acc.painting": 0.8598, "Acc.sofa": 0.8271, "Acc.shelf": 0.6651, "Acc.house": 0.5423, "Acc.sea": 0.7788, "Acc.mirror": 0.6788, "Acc.rug": 0.6466, "Acc.field": 0.4327, "Acc.armchair": 0.5762, "Acc.seat": 0.7871, "Acc.fence": 0.493, "Acc.desk": 0.6932, "Acc.rock": 0.6667, "Acc.wardrobe": 0.6477, "Acc.lamp": 0.733, "Acc.bathtub": 0.7908, "Acc.railing": 0.3393, "Acc.cushion": 0.6822, "Acc.base": 0.3468, "Acc.box": 0.3148, "Acc.column": 0.5674, "Acc.signboard": 0.4681, "Acc.chest of drawers": 0.6485, "Acc.counter": 0.4084, "Acc.sand": 0.5, "Acc.sink": 0.7773, "Acc.skyscraper": 0.4642, "Acc.fireplace": 0.8324, "Acc.refrigerator": 0.7579, "Acc.grandstand": 0.7244, "Acc.path": 0.2317, "Acc.stairs": 0.4041, "Acc.runway": 0.8645, "Acc.case": 0.5467, "Acc.pool table": 0.9574, "Acc.pillow": 0.681, "Acc.screen door": 0.7006, "Acc.stairway": 0.4332, "Acc.river": 0.1829, "Acc.bridge": 0.6912, "Acc.bookcase": 0.4994, "Acc.blind": 0.4664, "Acc.coffee table": 0.7839, "Acc.toilet": 0.8948, "Acc.flower": 0.4583, "Acc.book": 0.6301, "Acc.hill": 0.1699, "Acc.bench": 0.536, "Acc.countertop": 0.7102, "Acc.stove": 0.8373, "Acc.palm": 0.6498, "Acc.kitchen island": 0.6063, "Acc.computer": 0.6582, "Acc.swivel chair": 0.5414, "Acc.boat": 0.6124, "Acc.bar": 0.4358, "Acc.arcade machine": 0.5534, "Acc.hovel": 0.6044, "Acc.bus": 0.9497, "Acc.towel": 0.7463, "Acc.light": 0.5853, "Acc.truck": 0.5045, "Acc.tower": 0.1007, "Acc.chandelier": 0.7823, "Acc.awning": 0.2666, "Acc.streetlight": 0.3295, "Acc.booth": 0.3822, "Acc.television receiver": 0.8298, "Acc.airplane": 0.6442, "Acc.dirt track": 0.2084, "Acc.apparel": 0.4811, "Acc.pole": 0.2438, "Acc.land": 0.0884, "Acc.bannister": 0.1228, "Acc.escalator": 0.2987, "Acc.ottoman": 0.5752, "Acc.bottle": 0.279, "Acc.buffet": 0.635, "Acc.poster": 0.2892, "Acc.stage": 0.1876, "Acc.van": 0.5045, "Acc.ship": 0.6377, "Acc.fountain": 0.2056, 
"Acc.conveyer belt": 0.8972, "Acc.canopy": 0.112, "Acc.washer": 0.7034, "Acc.plaything": 0.4958, "Acc.swimming pool": 0.4277, "Acc.stool": 0.488, "Acc.barrel": 0.5552, "Acc.basket": 0.3892, "Acc.waterfall": 0.6241, "Acc.tent": 0.9437, "Acc.bag": 0.1702, "Acc.minibike": 0.7701, "Acc.cradle": 0.9554, "Acc.oven": 0.554, "Acc.ball": 0.5559, "Acc.food": 0.701, "Acc.step": 0.1839, "Acc.tank": 0.5703, "Acc.trade name": 0.2032, "Acc.microwave": 0.5643, "Acc.pot": 0.4229, "Acc.animal": 0.5852, "Acc.bicycle": 0.6991, "Acc.lake": 0.6102, "Acc.dishwasher": 0.762, "Acc.screen": 0.8025, "Acc.blanket": 0.1394, "Acc.sculpture": 0.7347, "Acc.hood": 0.6419, "Acc.sconce": 0.475, "Acc.vase": 0.5041, "Acc.traffic light": 0.4085, "Acc.tray": 0.1334, "Acc.ashcan": 0.5572, "Acc.fan": 0.7125, "Acc.pier": 0.3619, "Acc.crt screen": 0.1621, "Acc.plate": 0.6517, "Acc.monitor": 0.1116, "Acc.bulletin board": 0.6188, "Acc.shower": 0.0262, "Acc.radiator": 0.483, "Acc.glass": 0.1347, "Acc.clock": 0.4644, "Acc.flag": 0.3793} diff --git a/cv/classification/repvit/pytorch/segmentation/repvit.py b/cv/classification/repvit/pytorch/segmentation/repvit.py deleted file mode 100644 index b912b02f..00000000 --- a/cv/classification/repvit/pytorch/segmentation/repvit.py +++ /dev/null @@ -1,429 +0,0 @@ -import torch.nn as nn -import numpy as np -import itertools - -from mmseg.models.builder import BACKBONES -from mmseg.utils import get_root_logger -from mmcv.runner import _load_checkpoint - -from torch.nn.modules.batchnorm import _BatchNorm - -def _make_divisible(v, divisor, min_value=None): - """ - This function is taken from the original tf repo. - It ensures that all layers have a channel number that is divisible by 8 - It can be seen here: - https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py - :param v: - :param divisor: - :param min_value: - :return: - """ - if min_value is None: - min_value = divisor - new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than 10%. 
- if new_v < 0.9 * v: - new_v += divisor - return new_v - -from timm.models.layers import SqueezeExcite - -import torch - -class Conv2d_BN(torch.nn.Sequential): - def __init__(self, a, b, ks=1, stride=1, pad=0, dilation=1, - groups=1, bn_weight_init=1, resolution=-10000): - super().__init__() - self.add_module('c', torch.nn.Conv2d( - a, b, ks, stride, pad, dilation, groups, bias=False)) - self.add_module('bn', torch.nn.BatchNorm2d(b)) - torch.nn.init.constant_(self.bn.weight, bn_weight_init) - torch.nn.init.constant_(self.bn.bias, 0) - - @torch.no_grad() - def fuse(self): - c, bn = self._modules.values() - w = bn.weight / (bn.running_var + bn.eps)**0.5 - w = c.weight * w[:, None, None, None] - b = bn.bias - bn.running_mean * bn.weight / \ - (bn.running_var + bn.eps)**0.5 - m = torch.nn.Conv2d(w.size(1) * self.c.groups, w.size( - 0), w.shape[2:], stride=self.c.stride, padding=self.c.padding, dilation=self.c.dilation, groups=self.c.groups, - device=c.weight.device) - m.weight.data.copy_(w) - m.bias.data.copy_(b) - return m - -class Residual(torch.nn.Module): - def __init__(self, m, drop=0.): - super().__init__() - self.m = m - self.drop = drop - - def forward(self, x): - if self.training and self.drop > 0: - return x + self.m(x) * torch.rand(x.size(0), 1, 1, 1, - device=x.device).ge_(self.drop).div(1 - self.drop).detach() - else: - return x + self.m(x) - - @torch.no_grad() - def fuse(self): - if isinstance(self.m, Conv2d_BN): - m = self.m.fuse() - assert(m.groups == m.in_channels) - identity = torch.ones(m.weight.shape[0], m.weight.shape[1], 1, 1) - identity = torch.nn.functional.pad(identity, [1,1,1,1]) - m.weight += identity.to(m.weight.device) - return m - elif isinstance(self.m, torch.nn.Conv2d): - m = self.m - assert(m.groups != m.in_channels) - identity = torch.ones(m.weight.shape[0], m.weight.shape[1], 1, 1) - identity = torch.nn.functional.pad(identity, [1,1,1,1]) - m.weight += identity.to(m.weight.device) - return m - else: - return self - - -class RepVGGDW(torch.nn.Module): - def __init__(self, ed) -> None: - super().__init__() - self.conv = Conv2d_BN(ed, ed, 3, 1, 1, groups=ed) - self.conv1 = torch.nn.Conv2d(ed, ed, 1, 1, 0, groups=ed) - self.dim = ed - self.bn = torch.nn.BatchNorm2d(ed) - - def forward(self, x): - return self.bn((self.conv(x) + self.conv1(x)) + x) - - @torch.no_grad() - def fuse(self): - conv = self.conv.fuse() - conv1 = self.conv1 - - conv_w = conv.weight - conv_b = conv.bias - conv1_w = conv1.weight - conv1_b = conv1.bias - - conv1_w = torch.nn.functional.pad(conv1_w, [1,1,1,1]) - - identity = torch.nn.functional.pad(torch.ones(conv1_w.shape[0], conv1_w.shape[1], 1, 1, device=conv1_w.device), [1,1,1,1]) - - final_conv_w = conv_w + conv1_w + identity - final_conv_b = conv_b + conv1_b - - conv.weight.data.copy_(final_conv_w) - conv.bias.data.copy_(final_conv_b) - - bn = self.bn - w = bn.weight / (bn.running_var + bn.eps)**0.5 - w = conv.weight * w[:, None, None, None] - b = bn.bias + (conv.bias - bn.running_mean) * bn.weight / \ - (bn.running_var + bn.eps)**0.5 - conv.weight.data.copy_(w) - conv.bias.data.copy_(b) - return conv - - -class RepViTBlock(nn.Module): - def __init__(self, inp, hidden_dim, oup, kernel_size, stride, use_se, use_hs): - super(RepViTBlock, self).__init__() - assert stride in [1, 2] - - self.identity = stride == 1 and inp == oup - assert(hidden_dim == 2 * inp) - - if stride == 2: - self.token_mixer = nn.Sequential( - Conv2d_BN(inp, inp, kernel_size, stride, (kernel_size - 1) // 2, groups=inp), - SqueezeExcite(inp, 0.25) if use_se else 
nn.Identity(), - Conv2d_BN(inp, oup, ks=1, stride=1, pad=0) - ) - self.channel_mixer = Residual(nn.Sequential( - # pw - Conv2d_BN(oup, 2 * oup, 1, 1, 0), - nn.GELU() if use_hs else nn.GELU(), - # pw-linear - Conv2d_BN(2 * oup, oup, 1, 1, 0, bn_weight_init=0), - )) - else: - assert(self.identity) - self.token_mixer = nn.Sequential( - RepVGGDW(inp), - SqueezeExcite(inp, 0.25) if use_se else nn.Identity(), - ) - self.channel_mixer = Residual(nn.Sequential( - # pw - Conv2d_BN(inp, hidden_dim, 1, 1, 0), - nn.GELU() if use_hs else nn.GELU(), - # pw-linear - Conv2d_BN(hidden_dim, oup, 1, 1, 0, bn_weight_init=0), - )) - - def forward(self, x): - return self.channel_mixer(self.token_mixer(x)) - -from timm.models.vision_transformer import trunc_normal_ -class BN_Linear(torch.nn.Sequential): - def __init__(self, a, b, bias=True, std=0.02): - super().__init__() - self.add_module('bn', torch.nn.BatchNorm1d(a)) - self.add_module('l', torch.nn.Linear(a, b, bias=bias)) - trunc_normal_(self.l.weight, std=std) - if bias: - torch.nn.init.constant_(self.l.bias, 0) - - @torch.no_grad() - def fuse(self): - bn, l = self._modules.values() - w = bn.weight / (bn.running_var + bn.eps)**0.5 - b = bn.bias - self.bn.running_mean * \ - self.bn.weight / (bn.running_var + bn.eps)**0.5 - w = l.weight * w[None, :] - if l.bias is None: - b = b @ self.l.weight.T - else: - b = (l.weight @ b[:, None]).view(-1) + self.l.bias - m = torch.nn.Linear(w.size(1), w.size(0), device=l.weight.device) - m.weight.data.copy_(w) - m.bias.data.copy_(b) - return m - -class RepViT(nn.Module): - def __init__(self, cfgs, distillation=False, pretrained=None, init_cfg=None, out_indices=[]): - super(RepViT, self).__init__() - # setting of inverted residual blocks - self.cfgs = cfgs - - # building first layer - input_channel = self.cfgs[0][2] - patch_embed = torch.nn.Sequential(Conv2d_BN(3, input_channel // 2, 3, 2, 1), torch.nn.GELU(), - Conv2d_BN(input_channel // 2, input_channel, 3, 2, 1)) - layers = [patch_embed] - # building inverted residual blocks - block = RepViTBlock - for k, t, c, use_se, use_hs, s in self.cfgs: - output_channel = _make_divisible(c, 8) - exp_size = _make_divisible(input_channel * t, 8) - layers.append(block(input_channel, exp_size, output_channel, k, s, use_se, use_hs)) - input_channel = output_channel - self.features = nn.ModuleList(layers) - - self.init_cfg = init_cfg - assert(self.init_cfg is not None) - self.out_indices = out_indices - self.init_weights() - self = torch.nn.SyncBatchNorm.convert_sync_batchnorm(self) - self.train() - - def init_weights(self, pretrained=None): - logger = get_root_logger() - if self.init_cfg is None and pretrained is None: - logger.warn(f'No pre-trained weights for ' - f'{self.__class__.__name__}, ' - f'training start from scratch') - pass - else: - assert 'checkpoint' in self.init_cfg, f'Only support ' \ - f'specify `Pretrained` in ' \ - f'`init_cfg` in ' \ - f'{self.__class__.__name__} ' - if self.init_cfg is not None: - ckpt_path = self.init_cfg['checkpoint'] - elif pretrained is not None: - ckpt_path = pretrained - - ckpt = _load_checkpoint( - ckpt_path, logger=logger, map_location='cpu') - if 'state_dict' in ckpt: - _state_dict = ckpt['state_dict'] - elif 'model' in ckpt: - _state_dict = ckpt['model'] - else: - _state_dict = ckpt - - state_dict = _state_dict - missing_keys, unexpected_keys = \ - self.load_state_dict(state_dict, False) - logger.info(f"Miss {missing_keys}") - logger.info(f"Unexpected {unexpected_keys}") - - def train(self, mode=True): - """Convert the model into training 
mode while keep layers freezed.""" - super(RepViT, self).train(mode) - if mode: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() - - - def forward(self, x): - outs = [] - for i, f in enumerate(self.features): - x = f(x) - if i in self.out_indices: - outs.append(x) - assert(len(outs) == 4) - return outs - -from timm.models import register_model - - -@BACKBONES.register_module() -def repvit_m1_1(pretrained=False, num_classes = 1000, distillation=False, init_cfg=None, out_indices=[], **kwargs): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 64, 1, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 128, 0, 0, 2], - [3, 2, 128, 1, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 256, 0, 1, 2], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 512, 0, 1, 2], - [3, 2, 512, 1, 1, 1], - [3, 2, 512, 0, 1, 1] - ] - return RepViT(cfgs, init_cfg=init_cfg, pretrained=pretrained, distillation=distillation, out_indices=out_indices) - -@BACKBONES.register_module() -def repvit_m1_5(pretrained=False, num_classes = 1000, distillation=False, init_cfg=None, out_indices=[], **kwargs): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 64, 1, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 64, 1, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 64, 0, 0, 1], - [3, 2, 128, 0, 0, 2], - [3, 2, 128, 1, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 128, 1, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 128, 0, 0, 1], - [3, 2, 256, 0, 1, 2], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 1, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 256, 0, 1, 1], - [3, 2, 512, 0, 1, 2], - [3, 2, 512, 1, 1, 1], - [3, 2, 512, 0, 1, 1], - [3, 2, 512, 1, 1, 1], - [3, 2, 512, 0, 1, 1] - ] - return RepViT(cfgs, init_cfg=init_cfg, pretrained=pretrained, distillation=distillation, out_indices=out_indices) - - -@BACKBONES.register_module() -def repvit_m2_3(pretrained=False, num_classes = 1000, distillation=False, init_cfg=None, out_indices=[], **kwargs): - """ - Constructs a MobileNetV3-Large model - """ - cfgs = [ - # k, t, c, SE, HS, s - [3, 2, 80, 1, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 80, 1, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 80, 1, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 80, 0, 0, 1], - [3, 2, 160, 0, 0, 2], - [3, 2, 160, 1, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 160, 1, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 160, 1, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 160, 0, 0, 1], - [3, 2, 320, 0, 1, 2], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - 
[3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 320, 1, 1, 1], - [3, 2, 320, 0, 1, 1], - # [3, 2, 320, 1, 1, 1], - # [3, 2, 320, 0, 1, 1], - [3, 2, 320, 0, 1, 1], - [3, 2, 640, 0, 1, 2], - [3, 2, 640, 1, 1, 1], - [3, 2, 640, 0, 1, 1], - # [3, 2, 640, 1, 1, 1], - # [3, 2, 640, 0, 1, 1] - ] - return RepViT(cfgs, init_cfg=init_cfg, pretrained=pretrained, distillation=distillation, out_indices=out_indices) \ No newline at end of file diff --git a/cv/classification/repvit/pytorch/segmentation/tools/analyze_logs.py b/cv/classification/repvit/pytorch/segmentation/tools/analyze_logs.py deleted file mode 100644 index 8c62a34f..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/analyze_logs.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Modified from https://github.com/open- -mmlab/mmdetection/blob/master/tools/analysis_tools/analyze_logs.py.""" -import argparse -import json -from collections import defaultdict - -import matplotlib.pyplot as plt -import seaborn as sns - - -def plot_curve(log_dicts, args): - if args.backend is not None: - plt.switch_backend(args.backend) - sns.set_style(args.style) - # if legend is None, use {filename}_{key} as legend - legend = args.legend - if legend is None: - legend = [] - for json_log in args.json_logs: - for metric in args.keys: - legend.append(f'{json_log}_{metric}') - assert len(legend) == (len(args.json_logs) * len(args.keys)) - metrics = args.keys - - num_metrics = len(metrics) - for i, log_dict in enumerate(log_dicts): - epochs = list(log_dict.keys()) - for j, metric in enumerate(metrics): - print(f'plot curve of {args.json_logs[i]}, metric is {metric}') - plot_epochs = [] - plot_iters = [] - plot_values = [] - # In some log files, iters number is not correct, `pre_iter` is - # used to prevent generate wrong lines. 
- pre_iter = -1 - for epoch in epochs: - epoch_logs = log_dict[epoch] - if metric not in epoch_logs.keys(): - continue - if metric in ['mIoU', 'mAcc', 'aAcc']: - plot_epochs.append(epoch) - plot_values.append(epoch_logs[metric][0]) - else: - for idx in range(len(epoch_logs[metric])): - if pre_iter > epoch_logs['iter'][idx]: - continue - pre_iter = epoch_logs['iter'][idx] - plot_iters.append(epoch_logs['iter'][idx]) - plot_values.append(epoch_logs[metric][idx]) - ax = plt.gca() - label = legend[i * num_metrics + j] - if metric in ['mIoU', 'mAcc', 'aAcc']: - ax.set_xticks(plot_epochs) - plt.xlabel('epoch') - plt.plot(plot_epochs, plot_values, label=label, marker='o') - else: - plt.xlabel('iter') - plt.plot(plot_iters, plot_values, label=label, linewidth=0.5) - plt.legend() - if args.title is not None: - plt.title(args.title) - if args.out is None: - plt.show() - else: - print(f'save curve to: {args.out}') - plt.savefig(args.out) - plt.cla() - - -def parse_args(): - parser = argparse.ArgumentParser(description='Analyze Json Log') - parser.add_argument( - 'json_logs', - type=str, - nargs='+', - help='path of train log in json format') - parser.add_argument( - '--keys', - type=str, - nargs='+', - default=['mIoU'], - help='the metric that you want to plot') - parser.add_argument('--title', type=str, help='title of figure') - parser.add_argument( - '--legend', - type=str, - nargs='+', - default=None, - help='legend of each plot') - parser.add_argument( - '--backend', type=str, default=None, help='backend of plt') - parser.add_argument( - '--style', type=str, default='dark', help='style of plt') - parser.add_argument('--out', type=str, default=None) - args = parser.parse_args() - return args - - -def load_json_logs(json_logs): - # load and convert json_logs to log_dict, key is epoch, value is a sub dict - # keys of sub dict is different metrics - # value of sub dict is a list of corresponding values of all iterations - log_dicts = [dict() for _ in json_logs] - for json_log, log_dict in zip(json_logs, log_dicts): - with open(json_log, 'r') as log_file: - for line in log_file: - log = json.loads(line.strip()) - # skip lines without `epoch` field - if 'epoch' not in log: - continue - epoch = log.pop('epoch') - if epoch not in log_dict: - log_dict[epoch] = defaultdict(list) - for k, v in log.items(): - log_dict[epoch][k].append(v) - return log_dicts - - -def main(): - args = parse_args() - json_logs = args.json_logs - for json_log in json_logs: - assert json_log.endswith('.json') - log_dicts = load_json_logs(json_logs) - plot_curve(log_dicts, args) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/benchmark.py b/cv/classification/repvit/pytorch/segmentation/tools/benchmark.py deleted file mode 100644 index d72980eb..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/benchmark.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import time - -import torch -from mmcv import Config -from mmcv.parallel import MMDataParallel -from mmcv.runner import load_checkpoint, wrap_fp16_model - -from mmseg.datasets import build_dataloader, build_dataset -from mmseg.models import build_segmentor - - -def parse_args(): - parser = argparse.ArgumentParser(description='MMSeg benchmark a model') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument( - '--log-interval', type=int, default=50, help='interval of logging') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - # set cudnn_benchmark - torch.backends.cudnn.benchmark = False - cfg.model.pretrained = None - cfg.data.test.test_mode = True - - # build the dataloader - # TODO: support multiple images per gpu (only minor changes are needed) - dataset = build_dataset(cfg.data.test) - data_loader = build_dataloader( - dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=False, - shuffle=False) - - # build the model and load checkpoint - cfg.model.train_cfg = None - model = build_segmentor(cfg.model, test_cfg=cfg.get('test_cfg')) - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - wrap_fp16_model(model) - load_checkpoint(model, args.checkpoint, map_location='cpu') - - model = MMDataParallel(model, device_ids=[0]) - - model.eval() - - # the first several iterations may be very slow so skip them - num_warmup = 5 - pure_inf_time = 0 - total_iters = 200 - - # benchmark with 200 image and take the average - for i, data in enumerate(data_loader): - - torch.cuda.synchronize() - start_time = time.perf_counter() - - with torch.no_grad(): - model(return_loss=False, rescale=True, **data) - - torch.cuda.synchronize() - elapsed = time.perf_counter() - start_time - - if i >= num_warmup: - pure_inf_time += elapsed - if (i + 1) % args.log_interval == 0: - fps = (i + 1 - num_warmup) / pure_inf_time - print(f'Done image [{i + 1:<3}/ {total_iters}], ' - f'fps: {fps:.2f} img / s') - - if (i + 1) == total_iters: - fps = (i + 1 - num_warmup) / pure_inf_time - print(f'Overall fps: {fps:.2f} img / s') - break - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/browse_dataset.py b/cv/classification/repvit/pytorch/segmentation/tools/browse_dataset.py deleted file mode 100644 index 2ec41428..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/browse_dataset.py +++ /dev/null @@ -1,167 +0,0 @@ -import argparse -import os -import warnings -from pathlib import Path - -import mmcv -import numpy as np -from mmcv import Config - -from mmseg.datasets.builder import build_dataset - - -def parse_args(): - parser = argparse.ArgumentParser(description='Browse a dataset') - parser.add_argument('config', help='train config file path') - parser.add_argument( - '--show-origin', - default=False, - action='store_true', - help='if True, omit all augmentation in pipeline,' - ' show origin image and seg map') - parser.add_argument( - '--skip-type', - type=str, - nargs='+', - default=['DefaultFormatBundle', 'Normalize', 'Collect'], - help='skip some useless pipeline,if `show-origin` is true, ' - 'all pipeline except `Load` will be skipped') - parser.add_argument( - '--output-dir', - default='./output', - type=str, - help='If there is no display interface, you can save it') - parser.add_argument('--show', default=False, action='store_true') - 
parser.add_argument( - '--show-interval', - type=int, - default=999, - help='the interval of show (ms)') - parser.add_argument( - '--opacity', - type=float, - default=0.5, - help='the opacity of semantic map') - args = parser.parse_args() - return args - - -def imshow_semantic(img, - seg, - class_names, - palette=None, - win_name='', - show=False, - wait_time=0, - out_file=None, - opacity=0.5): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - seg (Tensor): The semantic segmentation results to draw over - `img`. - class_names (list[str]): Names of each classes. - palette (list[list[int]]] | np.ndarray | None): The palette of - segmentation map. If None is given, random palette will be - generated. Default: None - win_name (str): The window name. - wait_time (int): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - Returns: - img (Tensor): Only if not `show` or `out_file` - """ - img = mmcv.imread(img) - img = img.copy() - if palette is None: - palette = np.random.randint(0, 255, size=(len(class_names), 3)) - palette = np.array(palette) - assert palette.shape[0] == len(class_names) - assert palette.shape[1] == 3 - assert len(palette.shape) == 2 - assert 0 < opacity <= 1.0 - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - # convert to BGR - color_seg = color_seg[..., ::-1] - - img = img * (1 - opacity) + color_seg * opacity - img = img.astype(np.uint8) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - - if show: - mmcv.imshow(img, win_name, wait_time) - if out_file is not None: - mmcv.imwrite(img, out_file) - - if not (show or out_file): - warnings.warn('show==False and out_file is not specified, only ' - 'result image will be returned') - return img - - -def _retrieve_data_cfg(_data_cfg, skip_type, show_origin): - if show_origin is True: - # only keep pipeline of Loading data and ann - _data_cfg['pipeline'] = [ - x for x in _data_cfg.pipeline if 'Load' in x['type'] - ] - else: - _data_cfg['pipeline'] = [ - x for x in _data_cfg.pipeline if x['type'] not in skip_type - ] - - -def retrieve_data_cfg(config_path, skip_type, show_origin=False): - cfg = Config.fromfile(config_path) - train_data_cfg = cfg.data.train - if isinstance(train_data_cfg, list): - for _data_cfg in train_data_cfg: - if 'pipeline' in _data_cfg: - _retrieve_data_cfg(_data_cfg, skip_type, show_origin) - elif 'dataset' in _data_cfg: - _retrieve_data_cfg(_data_cfg['dataset'], skip_type, - show_origin) - else: - raise ValueError - elif 'dataset' in train_data_cfg: - _retrieve_data_cfg(train_data_cfg['dataset'], skip_type, show_origin) - else: - _retrieve_data_cfg(train_data_cfg, skip_type, show_origin) - return cfg - - -def main(): - args = parse_args() - cfg = retrieve_data_cfg(args.config, args.skip_type, args.show_origin) - dataset = build_dataset(cfg.data.train) - progress_bar = mmcv.ProgressBar(len(dataset)) - for item in dataset: - filename = os.path.join(args.output_dir, - Path(item['filename']).name - ) if args.output_dir is not None else None - imshow_semantic( - item['img'], - item['gt_semantic_seg'], - dataset.CLASSES, - dataset.PALETTE, - show=args.show, - wait_time=args.show_interval, - out_file=filename, - 
opacity=args.opacity, - ) - progress_bar.update() - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/chase_db1.py b/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/chase_db1.py deleted file mode 100644 index 580e6e7e..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/chase_db1.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os -import os.path as osp -import tempfile -import zipfile - -import mmcv - -CHASE_DB1_LEN = 28 * 3 -TRAINING_LEN = 60 - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert CHASE_DB1 dataset to mmsegmentation format') - parser.add_argument('dataset_path', help='path of CHASEDB1.zip') - parser.add_argument('--tmp_dir', help='path of the temporary directory') - parser.add_argument('-o', '--out_dir', help='output path') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - dataset_path = args.dataset_path - if args.out_dir is None: - out_dir = osp.join('data', 'CHASE_DB1') - else: - out_dir = args.out_dir - - print('Making directories...') - mmcv.mkdir_or_exist(out_dir) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images', 'training')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images', 'validation')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations', 'training')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations', 'validation')) - - with tempfile.TemporaryDirectory(dir=args.tmp_dir) as tmp_dir: - print('Extracting CHASEDB1.zip...') - zip_file = zipfile.ZipFile(dataset_path) - zip_file.extractall(tmp_dir) - - print('Generating training dataset...') - - assert len(os.listdir(tmp_dir)) == CHASE_DB1_LEN, \ - 'len(os.listdir(tmp_dir)) != {}'.format(CHASE_DB1_LEN) - - for img_name in sorted(os.listdir(tmp_dir))[:TRAINING_LEN]: - img = mmcv.imread(osp.join(tmp_dir, img_name)) - if osp.splitext(img_name)[1] == '.jpg': - mmcv.imwrite( - img, - osp.join(out_dir, 'images', 'training', - osp.splitext(img_name)[0] + '.png')) - else: - # The annotation img should be divided by 128, because some of - # the annotation imgs are not standard. We should set a - # threshold to convert the nonstandard annotation imgs. The - # value divided by 128 is equivalent to '1 if value >= 128 - # else 0' - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'training', - osp.splitext(img_name)[0] + '.png')) - - for img_name in sorted(os.listdir(tmp_dir))[TRAINING_LEN:]: - img = mmcv.imread(osp.join(tmp_dir, img_name)) - if osp.splitext(img_name)[1] == '.jpg': - mmcv.imwrite( - img, - osp.join(out_dir, 'images', 'validation', - osp.splitext(img_name)[0] + '.png')) - else: - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'validation', - osp.splitext(img_name)[0] + '.png')) - - print('Removing the temporary files...') - - print('Done!') - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/cityscapes.py b/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/cityscapes.py deleted file mode 100644 index 17b61684..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/cityscapes.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import os.path as osp - -import mmcv -from cityscapesscripts.preparation.json2labelImg import json2labelImg - - -def convert_json_to_label(json_file): - label_file = json_file.replace('_polygons.json', '_labelTrainIds.png') - json2labelImg(json_file, label_file, 'trainIds') - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert Cityscapes annotations to TrainIds') - parser.add_argument('cityscapes_path', help='cityscapes data path') - parser.add_argument('--gt-dir', default='gtFine', type=str) - parser.add_argument('-o', '--out-dir', help='output path') - parser.add_argument( - '--nproc', default=1, type=int, help='number of process') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - cityscapes_path = args.cityscapes_path - out_dir = args.out_dir if args.out_dir else cityscapes_path - mmcv.mkdir_or_exist(out_dir) - - gt_dir = osp.join(cityscapes_path, args.gt_dir) - - poly_files = [] - for poly in mmcv.scandir(gt_dir, '_polygons.json', recursive=True): - poly_file = osp.join(gt_dir, poly) - poly_files.append(poly_file) - if args.nproc > 1: - mmcv.track_parallel_progress(convert_json_to_label, poly_files, - args.nproc) - else: - mmcv.track_progress(convert_json_to_label, poly_files) - - split_names = ['train', 'val', 'test'] - - for split in split_names: - filenames = [] - for poly in mmcv.scandir( - osp.join(gt_dir, split), '_polygons.json', recursive=True): - filenames.append(poly.replace('_gtFine_polygons.json', '')) - with open(osp.join(out_dir, f'{split}.txt'), 'w') as f: - f.writelines(f + '\n' for f in filenames) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/coco_stuff10k.py b/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/coco_stuff10k.py deleted file mode 100644 index 4f0fd530..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/coco_stuff10k.py +++ /dev/null @@ -1,306 +0,0 @@ -import argparse -import os.path as osp -import shutil -from functools import partial - -import mmcv -import numpy as np -from PIL import Image -from scipy.io import loadmat - -COCO_LEN = 10000 - -clsID_to_trID = { - 0: 0, - 1: 1, - 2: 2, - 3: 3, - 4: 4, - 5: 5, - 6: 6, - 7: 7, - 8: 8, - 9: 9, - 10: 10, - 11: 11, - 13: 12, - 14: 13, - 15: 14, - 16: 15, - 17: 16, - 18: 17, - 19: 18, - 20: 19, - 21: 20, - 22: 21, - 23: 22, - 24: 23, - 25: 24, - 27: 25, - 28: 26, - 31: 27, - 32: 28, - 33: 29, - 34: 30, - 35: 31, - 36: 32, - 37: 33, - 38: 34, - 39: 35, - 40: 36, - 41: 37, - 42: 38, - 43: 39, - 44: 40, - 46: 41, - 47: 42, - 48: 43, - 49: 44, - 50: 45, - 51: 46, - 52: 47, - 53: 48, - 54: 49, - 55: 50, - 56: 51, - 57: 52, - 58: 53, - 59: 54, - 60: 55, - 61: 56, - 62: 57, - 63: 58, - 64: 59, - 65: 60, - 67: 61, - 70: 62, - 72: 63, - 73: 64, - 74: 65, - 75: 66, - 76: 67, - 77: 68, - 78: 69, - 79: 70, - 80: 71, - 81: 72, - 82: 73, - 84: 74, - 85: 75, - 86: 76, - 87: 77, - 88: 78, - 89: 79, - 90: 80, - 92: 81, - 93: 82, - 94: 83, - 95: 84, - 96: 85, - 97: 86, - 98: 87, - 99: 88, - 100: 89, - 101: 90, - 102: 91, - 103: 92, - 104: 93, - 105: 94, - 106: 95, - 107: 96, - 108: 97, - 109: 98, - 110: 99, - 111: 100, - 112: 101, - 113: 102, - 114: 103, - 115: 104, - 116: 105, - 117: 106, - 118: 107, - 119: 108, - 120: 109, - 121: 110, - 122: 111, - 123: 112, - 124: 113, - 125: 114, - 126: 115, - 127: 116, - 128: 117, - 129: 118, - 130: 119, - 131: 120, - 132: 121, - 133: 122, - 134: 123, - 135: 124, - 136: 125, - 137: 
126, - 138: 127, - 139: 128, - 140: 129, - 141: 130, - 142: 131, - 143: 132, - 144: 133, - 145: 134, - 146: 135, - 147: 136, - 148: 137, - 149: 138, - 150: 139, - 151: 140, - 152: 141, - 153: 142, - 154: 143, - 155: 144, - 156: 145, - 157: 146, - 158: 147, - 159: 148, - 160: 149, - 161: 150, - 162: 151, - 163: 152, - 164: 153, - 165: 154, - 166: 155, - 167: 156, - 168: 157, - 169: 158, - 170: 159, - 171: 160, - 172: 161, - 173: 162, - 174: 163, - 175: 164, - 176: 165, - 177: 166, - 178: 167, - 179: 168, - 180: 169, - 181: 170, - 182: 171 -} - - -def convert_to_trainID(tuple_path, in_img_dir, in_ann_dir, out_img_dir, - out_mask_dir, is_train): - imgpath, maskpath = tuple_path - shutil.copyfile( - osp.join(in_img_dir, imgpath), - osp.join(out_img_dir, 'train2014', imgpath) if is_train else osp.join( - out_img_dir, 'test2014', imgpath)) - annotate = loadmat(osp.join(in_ann_dir, maskpath)) - mask = annotate['S'].astype(np.uint8) - mask_copy = mask.copy() - for clsID, trID in clsID_to_trID.items(): - mask_copy[mask == clsID] = trID - seg_filename = osp.join(out_mask_dir, 'train2014', - maskpath.split('.')[0] + - '_labelTrainIds.png') if is_train else osp.join( - out_mask_dir, 'test2014', - maskpath.split('.')[0] + '_labelTrainIds.png') - Image.fromarray(mask_copy).save(seg_filename, 'PNG') - - -def generate_coco_list(folder): - train_list = osp.join(folder, 'imageLists', 'train.txt') - test_list = osp.join(folder, 'imageLists', 'test.txt') - train_paths = [] - test_paths = [] - - with open(train_list) as f: - for filename in f: - basename = filename.strip() - imgpath = basename + '.jpg' - maskpath = basename + '.mat' - train_paths.append((imgpath, maskpath)) - - with open(test_list) as f: - for filename in f: - basename = filename.strip() - imgpath = basename + '.jpg' - maskpath = basename + '.mat' - test_paths.append((imgpath, maskpath)) - - return train_paths, test_paths - - -def parse_args(): - parser = argparse.ArgumentParser( - description=\ - 'Convert COCO Stuff 10k annotations to mmsegmentation format') # noqa - parser.add_argument('coco_path', help='coco stuff path') - parser.add_argument('-o', '--out_dir', help='output path') - parser.add_argument( - '--nproc', default=16, type=int, help='number of process') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - coco_path = args.coco_path - nproc = args.nproc - - out_dir = args.out_dir or coco_path - out_img_dir = osp.join(out_dir, 'images') - out_mask_dir = osp.join(out_dir, 'annotations') - - mmcv.mkdir_or_exist(osp.join(out_img_dir, 'train2014')) - mmcv.mkdir_or_exist(osp.join(out_img_dir, 'test2014')) - mmcv.mkdir_or_exist(osp.join(out_mask_dir, 'train2014')) - mmcv.mkdir_or_exist(osp.join(out_mask_dir, 'test2014')) - - train_list, test_list = generate_coco_list(coco_path) - assert (len(train_list) + - len(test_list)) == COCO_LEN, 'Wrong length of list {} & {}'.format( - len(train_list), len(test_list)) - - if args.nproc > 1: - mmcv.track_parallel_progress( - partial( - convert_to_trainID, - in_img_dir=osp.join(coco_path, 'images'), - in_ann_dir=osp.join(coco_path, 'annotations'), - out_img_dir=out_img_dir, - out_mask_dir=out_mask_dir, - is_train=True), - train_list, - nproc=nproc) - mmcv.track_parallel_progress( - partial( - convert_to_trainID, - in_img_dir=osp.join(coco_path, 'images'), - in_ann_dir=osp.join(coco_path, 'annotations'), - out_img_dir=out_img_dir, - out_mask_dir=out_mask_dir, - is_train=False), - test_list, - nproc=nproc) - else: - mmcv.track_progress( - partial( - convert_to_trainID, - 
in_img_dir=osp.join(coco_path, 'images'), - in_ann_dir=osp.join(coco_path, 'annotations'), - out_img_dir=out_img_dir, - out_mask_dir=out_mask_dir, - is_train=True), train_list) - mmcv.track_progress( - partial( - convert_to_trainID, - in_img_dir=osp.join(coco_path, 'images'), - in_ann_dir=osp.join(coco_path, 'annotations'), - out_img_dir=out_img_dir, - out_mask_dir=out_mask_dir, - is_train=False), test_list) - - print('Done!') - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/coco_stuff164k.py b/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/coco_stuff164k.py deleted file mode 100644 index 4533bf53..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/coco_stuff164k.py +++ /dev/null @@ -1,263 +0,0 @@ -import argparse -import os.path as osp -import shutil -from functools import partial -from glob import glob - -import mmcv -import numpy as np -from PIL import Image - -COCO_LEN = 123287 - -clsID_to_trID = { - 0: 0, - 1: 1, - 2: 2, - 3: 3, - 4: 4, - 5: 5, - 6: 6, - 7: 7, - 8: 8, - 9: 9, - 10: 10, - 12: 11, - 13: 12, - 14: 13, - 15: 14, - 16: 15, - 17: 16, - 18: 17, - 19: 18, - 20: 19, - 21: 20, - 22: 21, - 23: 22, - 24: 23, - 26: 24, - 27: 25, - 30: 26, - 31: 27, - 32: 28, - 33: 29, - 34: 30, - 35: 31, - 36: 32, - 37: 33, - 38: 34, - 39: 35, - 40: 36, - 41: 37, - 42: 38, - 43: 39, - 45: 40, - 46: 41, - 47: 42, - 48: 43, - 49: 44, - 50: 45, - 51: 46, - 52: 47, - 53: 48, - 54: 49, - 55: 50, - 56: 51, - 57: 52, - 58: 53, - 59: 54, - 60: 55, - 61: 56, - 62: 57, - 63: 58, - 64: 59, - 66: 60, - 69: 61, - 71: 62, - 72: 63, - 73: 64, - 74: 65, - 75: 66, - 76: 67, - 77: 68, - 78: 69, - 79: 70, - 80: 71, - 81: 72, - 83: 73, - 84: 74, - 85: 75, - 86: 76, - 87: 77, - 88: 78, - 89: 79, - 91: 80, - 92: 81, - 93: 82, - 94: 83, - 95: 84, - 96: 85, - 97: 86, - 98: 87, - 99: 88, - 100: 89, - 101: 90, - 102: 91, - 103: 92, - 104: 93, - 105: 94, - 106: 95, - 107: 96, - 108: 97, - 109: 98, - 110: 99, - 111: 100, - 112: 101, - 113: 102, - 114: 103, - 115: 104, - 116: 105, - 117: 106, - 118: 107, - 119: 108, - 120: 109, - 121: 110, - 122: 111, - 123: 112, - 124: 113, - 125: 114, - 126: 115, - 127: 116, - 128: 117, - 129: 118, - 130: 119, - 131: 120, - 132: 121, - 133: 122, - 134: 123, - 135: 124, - 136: 125, - 137: 126, - 138: 127, - 139: 128, - 140: 129, - 141: 130, - 142: 131, - 143: 132, - 144: 133, - 145: 134, - 146: 135, - 147: 136, - 148: 137, - 149: 138, - 150: 139, - 151: 140, - 152: 141, - 153: 142, - 154: 143, - 155: 144, - 156: 145, - 157: 146, - 158: 147, - 159: 148, - 160: 149, - 161: 150, - 162: 151, - 163: 152, - 164: 153, - 165: 154, - 166: 155, - 167: 156, - 168: 157, - 169: 158, - 170: 159, - 171: 160, - 172: 161, - 173: 162, - 174: 163, - 175: 164, - 176: 165, - 177: 166, - 178: 167, - 179: 168, - 180: 169, - 181: 170, - 255: 255 -} - - -def convert_to_trainID(maskpath, out_mask_dir, is_train): - mask = np.array(Image.open(maskpath)) - mask_copy = mask.copy() - for clsID, trID in clsID_to_trID.items(): - mask_copy[mask == clsID] = trID - seg_filename = osp.join( - out_mask_dir, 'train2017', - osp.basename(maskpath).split('.')[0] + - '_labelTrainIds.png') if is_train else osp.join( - out_mask_dir, 'val2017', - osp.basename(maskpath).split('.')[0] + '_labelTrainIds.png') - Image.fromarray(mask_copy).save(seg_filename, 'PNG') - - -def parse_args(): - parser = argparse.ArgumentParser( - description=\ - 'Convert COCO Stuff 164k annotations to mmsegmentation format') # noqa - 
parser.add_argument('coco_path', help='coco stuff path') - parser.add_argument('-o', '--out_dir', help='output path') - parser.add_argument( - '--nproc', default=16, type=int, help='number of process') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - coco_path = args.coco_path - nproc = args.nproc - - out_dir = args.out_dir or coco_path - out_img_dir = osp.join(out_dir, 'images') - out_mask_dir = osp.join(out_dir, 'annotations') - - mmcv.mkdir_or_exist(osp.join(out_mask_dir, 'train2017')) - mmcv.mkdir_or_exist(osp.join(out_mask_dir, 'val2017')) - - if out_dir != coco_path: - shutil.copytree(osp.join(coco_path, 'images'), out_img_dir) - - train_list = glob(osp.join(coco_path, 'annotations', 'train2017', '*.png')) - train_list = [file for file in train_list if '_labelTrainIds' not in file] - test_list = glob(osp.join(coco_path, 'annotations', 'val2017', '*.png')) - test_list = [file for file in test_list if '_labelTrainIds' not in file] - assert (len(train_list) + - len(test_list)) == COCO_LEN, 'Wrong length of list {} & {}'.format( - len(train_list), len(test_list)) - - if args.nproc > 1: - mmcv.track_parallel_progress( - partial( - convert_to_trainID, out_mask_dir=out_mask_dir, is_train=True), - train_list, - nproc=nproc) - mmcv.track_parallel_progress( - partial( - convert_to_trainID, out_mask_dir=out_mask_dir, is_train=False), - test_list, - nproc=nproc) - else: - mmcv.track_progress( - partial( - convert_to_trainID, out_mask_dir=out_mask_dir, is_train=True), - train_list) - mmcv.track_progress( - partial( - convert_to_trainID, out_mask_dir=out_mask_dir, is_train=False), - test_list) - - print('Done!') - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/drive.py b/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/drive.py deleted file mode 100644 index f547579b..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/drive.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import os -import os.path as osp -import tempfile -import zipfile - -import cv2 -import mmcv - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert DRIVE dataset to mmsegmentation format') - parser.add_argument( - 'training_path', help='the training part of DRIVE dataset') - parser.add_argument( - 'testing_path', help='the testing part of DRIVE dataset') - parser.add_argument('--tmp_dir', help='path of the temporary directory') - parser.add_argument('-o', '--out_dir', help='output path') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - training_path = args.training_path - testing_path = args.testing_path - if args.out_dir is None: - out_dir = osp.join('data', 'DRIVE') - else: - out_dir = args.out_dir - - print('Making directories...') - mmcv.mkdir_or_exist(out_dir) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images', 'training')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images', 'validation')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations', 'training')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations', 'validation')) - - with tempfile.TemporaryDirectory(dir=args.tmp_dir) as tmp_dir: - print('Extracting training.zip...') - zip_file = zipfile.ZipFile(training_path) - zip_file.extractall(tmp_dir) - - print('Generating training dataset...') - now_dir = osp.join(tmp_dir, 'training', 'images') - for img_name in os.listdir(now_dir): - img = mmcv.imread(osp.join(now_dir, img_name)) - mmcv.imwrite( - img, - osp.join( - out_dir, 'images', 'training', - osp.splitext(img_name)[0].replace('_training', '') + - '.png')) - - now_dir = osp.join(tmp_dir, 'training', '1st_manual') - for img_name in os.listdir(now_dir): - cap = cv2.VideoCapture(osp.join(now_dir, img_name)) - ret, img = cap.read() - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'training', - osp.splitext(img_name)[0] + '.png')) - - print('Extracting test.zip...') - zip_file = zipfile.ZipFile(testing_path) - zip_file.extractall(tmp_dir) - - print('Generating validation dataset...') - now_dir = osp.join(tmp_dir, 'test', 'images') - for img_name in os.listdir(now_dir): - img = mmcv.imread(osp.join(now_dir, img_name)) - mmcv.imwrite( - img, - osp.join( - out_dir, 'images', 'validation', - osp.splitext(img_name)[0].replace('_test', '') + '.png')) - - now_dir = osp.join(tmp_dir, 'test', '1st_manual') - if osp.exists(now_dir): - for img_name in os.listdir(now_dir): - cap = cv2.VideoCapture(osp.join(now_dir, img_name)) - ret, img = cap.read() - # The annotation img should be divided by 128, because some of - # the annotation imgs are not standard. We should set a - # threshold to convert the nonstandard annotation imgs. 
The - # value divided by 128 is equivalent to '1 if value >= 128 - # else 0' - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'validation', - osp.splitext(img_name)[0] + '.png')) - - now_dir = osp.join(tmp_dir, 'test', '2nd_manual') - if osp.exists(now_dir): - for img_name in os.listdir(now_dir): - cap = cv2.VideoCapture(osp.join(now_dir, img_name)) - ret, img = cap.read() - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'validation', - osp.splitext(img_name)[0] + '.png')) - - print('Removing the temporary files...') - - print('Done!') - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/hrf.py b/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/hrf.py deleted file mode 100644 index 5e016e3c..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/hrf.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os -import os.path as osp -import tempfile -import zipfile - -import mmcv - -HRF_LEN = 15 -TRAINING_LEN = 5 - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert HRF dataset to mmsegmentation format') - parser.add_argument('healthy_path', help='the path of healthy.zip') - parser.add_argument( - 'healthy_manualsegm_path', help='the path of healthy_manualsegm.zip') - parser.add_argument('glaucoma_path', help='the path of glaucoma.zip') - parser.add_argument( - 'glaucoma_manualsegm_path', help='the path of glaucoma_manualsegm.zip') - parser.add_argument( - 'diabetic_retinopathy_path', - help='the path of diabetic_retinopathy.zip') - parser.add_argument( - 'diabetic_retinopathy_manualsegm_path', - help='the path of diabetic_retinopathy_manualsegm.zip') - parser.add_argument('--tmp_dir', help='path of the temporary directory') - parser.add_argument('-o', '--out_dir', help='output path') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - images_path = [ - args.healthy_path, args.glaucoma_path, args.diabetic_retinopathy_path - ] - annotations_path = [ - args.healthy_manualsegm_path, args.glaucoma_manualsegm_path, - args.diabetic_retinopathy_manualsegm_path - ] - if args.out_dir is None: - out_dir = osp.join('data', 'HRF') - else: - out_dir = args.out_dir - - print('Making directories...') - mmcv.mkdir_or_exist(out_dir) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images', 'training')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images', 'validation')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations', 'training')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations', 'validation')) - - print('Generating images...') - for now_path in images_path: - with tempfile.TemporaryDirectory(dir=args.tmp_dir) as tmp_dir: - zip_file = zipfile.ZipFile(now_path) - zip_file.extractall(tmp_dir) - - assert len(os.listdir(tmp_dir)) == HRF_LEN, \ - 'len(os.listdir(tmp_dir)) != {}'.format(HRF_LEN) - - for filename in sorted(os.listdir(tmp_dir))[:TRAINING_LEN]: - img = mmcv.imread(osp.join(tmp_dir, filename)) - mmcv.imwrite( - img, - osp.join(out_dir, 'images', 'training', - osp.splitext(filename)[0] + '.png')) - for filename in sorted(os.listdir(tmp_dir))[TRAINING_LEN:]: - img = mmcv.imread(osp.join(tmp_dir, filename)) - mmcv.imwrite( - img, - osp.join(out_dir, 'images', 'validation', - osp.splitext(filename)[0] + '.png')) - - 
print('Generating annotations...') - for now_path in annotations_path: - with tempfile.TemporaryDirectory(dir=args.tmp_dir) as tmp_dir: - zip_file = zipfile.ZipFile(now_path) - zip_file.extractall(tmp_dir) - - assert len(os.listdir(tmp_dir)) == HRF_LEN, \ - 'len(os.listdir(tmp_dir)) != {}'.format(HRF_LEN) - - for filename in sorted(os.listdir(tmp_dir))[:TRAINING_LEN]: - img = mmcv.imread(osp.join(tmp_dir, filename)) - # The annotation img should be divided by 128, because some of - # the annotation imgs are not standard. We should set a - # threshold to convert the nonstandard annotation imgs. The - # value divided by 128 is equivalent to '1 if value >= 128 - # else 0' - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'training', - osp.splitext(filename)[0] + '.png')) - for filename in sorted(os.listdir(tmp_dir))[TRAINING_LEN:]: - img = mmcv.imread(osp.join(tmp_dir, filename)) - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'validation', - osp.splitext(filename)[0] + '.png')) - - print('Done!') - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/pascal_context.py b/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/pascal_context.py deleted file mode 100644 index 03b79d51..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/pascal_context.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os.path as osp -from functools import partial - -import mmcv -import numpy as np -from detail import Detail -from PIL import Image - -_mapping = np.sort( - np.array([ - 0, 2, 259, 260, 415, 324, 9, 258, 144, 18, 19, 22, 23, 397, 25, 284, - 158, 159, 416, 33, 162, 420, 454, 295, 296, 427, 44, 45, 46, 308, 59, - 440, 445, 31, 232, 65, 354, 424, 68, 326, 72, 458, 34, 207, 80, 355, - 85, 347, 220, 349, 360, 98, 187, 104, 105, 366, 189, 368, 113, 115 - ])) -_key = np.array(range(len(_mapping))).astype('uint8') - - -def generate_labels(img_id, detail, out_dir): - - def _class_to_index(mask, _mapping, _key): - # assert the values - values = np.unique(mask) - for i in range(len(values)): - assert (values[i] in _mapping) - index = np.digitize(mask.ravel(), _mapping, right=True) - return _key[index].reshape(mask.shape) - - mask = Image.fromarray( - _class_to_index(detail.getMask(img_id), _mapping=_mapping, _key=_key)) - filename = img_id['file_name'] - mask.save(osp.join(out_dir, filename.replace('jpg', 'png'))) - return osp.splitext(osp.basename(filename))[0] - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert PASCAL VOC annotations to mmsegmentation format') - parser.add_argument('devkit_path', help='pascal voc devkit path') - parser.add_argument('json_path', help='annoation json filepath') - parser.add_argument('-o', '--out_dir', help='output path') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - devkit_path = args.devkit_path - if args.out_dir is None: - out_dir = osp.join(devkit_path, 'VOC2010', 'SegmentationClassContext') - else: - out_dir = args.out_dir - json_path = args.json_path - mmcv.mkdir_or_exist(out_dir) - img_dir = osp.join(devkit_path, 'VOC2010', 'JPEGImages') - - train_detail = Detail(json_path, img_dir, 'train') - train_ids = train_detail.getImgs() - - val_detail = Detail(json_path, img_dir, 'val') - val_ids = val_detail.getImgs() - - mmcv.mkdir_or_exist( - osp.join(devkit_path, 
'VOC2010/ImageSets/SegmentationContext')) - - train_list = mmcv.track_progress( - partial(generate_labels, detail=train_detail, out_dir=out_dir), - train_ids) - with open( - osp.join(devkit_path, 'VOC2010/ImageSets/SegmentationContext', - 'train.txt'), 'w') as f: - f.writelines(line + '\n' for line in sorted(train_list)) - - val_list = mmcv.track_progress( - partial(generate_labels, detail=val_detail, out_dir=out_dir), val_ids) - with open( - osp.join(devkit_path, 'VOC2010/ImageSets/SegmentationContext', - 'val.txt'), 'w') as f: - f.writelines(line + '\n' for line in sorted(val_list)) - - print('Done!') - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/stare.py b/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/stare.py deleted file mode 100644 index 29b78c00..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/stare.py +++ /dev/null @@ -1,166 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import gzip -import os -import os.path as osp -import tarfile -import tempfile - -import mmcv - -STARE_LEN = 20 -TRAINING_LEN = 10 - - -def un_gz(src, dst): - g_file = gzip.GzipFile(src) - with open(dst, 'wb+') as f: - f.write(g_file.read()) - g_file.close() - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert STARE dataset to mmsegmentation format') - parser.add_argument('image_path', help='the path of stare-images.tar') - parser.add_argument('labels_ah', help='the path of labels-ah.tar') - parser.add_argument('labels_vk', help='the path of labels-vk.tar') - parser.add_argument('--tmp_dir', help='path of the temporary directory') - parser.add_argument('-o', '--out_dir', help='output path') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - image_path = args.image_path - labels_ah = args.labels_ah - labels_vk = args.labels_vk - if args.out_dir is None: - out_dir = osp.join('data', 'STARE') - else: - out_dir = args.out_dir - - print('Making directories...') - mmcv.mkdir_or_exist(out_dir) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images', 'training')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'images', 'validation')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations', 'training')) - mmcv.mkdir_or_exist(osp.join(out_dir, 'annotations', 'validation')) - - with tempfile.TemporaryDirectory(dir=args.tmp_dir) as tmp_dir: - mmcv.mkdir_or_exist(osp.join(tmp_dir, 'gz')) - mmcv.mkdir_or_exist(osp.join(tmp_dir, 'files')) - - print('Extracting stare-images.tar...') - with tarfile.open(image_path) as f: - f.extractall(osp.join(tmp_dir, 'gz')) - - for filename in os.listdir(osp.join(tmp_dir, 'gz')): - un_gz( - osp.join(tmp_dir, 'gz', filename), - osp.join(tmp_dir, 'files', - osp.splitext(filename)[0])) - - now_dir = osp.join(tmp_dir, 'files') - - assert len(os.listdir(now_dir)) == STARE_LEN, \ - 'len(os.listdir(now_dir)) != {}'.format(STARE_LEN) - - for filename in sorted(os.listdir(now_dir))[:TRAINING_LEN]: - img = mmcv.imread(osp.join(now_dir, filename)) - mmcv.imwrite( - img, - osp.join(out_dir, 'images', 'training', - osp.splitext(filename)[0] + '.png')) - - for filename in sorted(os.listdir(now_dir))[TRAINING_LEN:]: - img = mmcv.imread(osp.join(now_dir, filename)) - mmcv.imwrite( - img, - osp.join(out_dir, 'images', 'validation', - osp.splitext(filename)[0] + '.png')) - - print('Removing the 
temporary files...') - - with tempfile.TemporaryDirectory(dir=args.tmp_dir) as tmp_dir: - mmcv.mkdir_or_exist(osp.join(tmp_dir, 'gz')) - mmcv.mkdir_or_exist(osp.join(tmp_dir, 'files')) - - print('Extracting labels-ah.tar...') - with tarfile.open(labels_ah) as f: - f.extractall(osp.join(tmp_dir, 'gz')) - - for filename in os.listdir(osp.join(tmp_dir, 'gz')): - un_gz( - osp.join(tmp_dir, 'gz', filename), - osp.join(tmp_dir, 'files', - osp.splitext(filename)[0])) - - now_dir = osp.join(tmp_dir, 'files') - - assert len(os.listdir(now_dir)) == STARE_LEN, \ - 'len(os.listdir(now_dir)) != {}'.format(STARE_LEN) - - for filename in sorted(os.listdir(now_dir))[:TRAINING_LEN]: - img = mmcv.imread(osp.join(now_dir, filename)) - # The annotation img should be divided by 128, because some of - # the annotation imgs are not standard. We should set a threshold - # to convert the nonstandard annotation imgs. The value divided by - # 128 equivalent to '1 if value >= 128 else 0' - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'training', - osp.splitext(filename)[0] + '.png')) - - for filename in sorted(os.listdir(now_dir))[TRAINING_LEN:]: - img = mmcv.imread(osp.join(now_dir, filename)) - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'validation', - osp.splitext(filename)[0] + '.png')) - - print('Removing the temporary files...') - - with tempfile.TemporaryDirectory(dir=args.tmp_dir) as tmp_dir: - mmcv.mkdir_or_exist(osp.join(tmp_dir, 'gz')) - mmcv.mkdir_or_exist(osp.join(tmp_dir, 'files')) - - print('Extracting labels-vk.tar...') - with tarfile.open(labels_vk) as f: - f.extractall(osp.join(tmp_dir, 'gz')) - - for filename in os.listdir(osp.join(tmp_dir, 'gz')): - un_gz( - osp.join(tmp_dir, 'gz', filename), - osp.join(tmp_dir, 'files', - osp.splitext(filename)[0])) - - now_dir = osp.join(tmp_dir, 'files') - - assert len(os.listdir(now_dir)) == STARE_LEN, \ - 'len(os.listdir(now_dir)) != {}'.format(STARE_LEN) - - for filename in sorted(os.listdir(now_dir))[:TRAINING_LEN]: - img = mmcv.imread(osp.join(now_dir, filename)) - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'training', - osp.splitext(filename)[0] + '.png')) - - for filename in sorted(os.listdir(now_dir))[TRAINING_LEN:]: - img = mmcv.imread(osp.join(now_dir, filename)) - mmcv.imwrite( - img[:, :, 0] // 128, - osp.join(out_dir, 'annotations', 'validation', - osp.splitext(filename)[0] + '.png')) - - print('Removing the temporary files...') - - print('Done!') - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/voc_aug.py b/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/voc_aug.py deleted file mode 100644 index 1d42c270..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/convert_datasets/voc_aug.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import os.path as osp -from functools import partial - -import mmcv -import numpy as np -from PIL import Image -from scipy.io import loadmat - -AUG_LEN = 10582 - - -def convert_mat(mat_file, in_dir, out_dir): - data = loadmat(osp.join(in_dir, mat_file)) - mask = data['GTcls'][0]['Segmentation'][0].astype(np.uint8) - seg_filename = osp.join(out_dir, mat_file.replace('.mat', '.png')) - Image.fromarray(mask).save(seg_filename, 'PNG') - - -def generate_aug_list(merged_list, excluded_list): - return list(set(merged_list) - set(excluded_list)) - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert PASCAL VOC annotations to mmsegmentation format') - parser.add_argument('devkit_path', help='pascal voc devkit path') - parser.add_argument('aug_path', help='pascal voc aug path') - parser.add_argument('-o', '--out_dir', help='output path') - parser.add_argument( - '--nproc', default=1, type=int, help='number of process') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - devkit_path = args.devkit_path - aug_path = args.aug_path - nproc = args.nproc - if args.out_dir is None: - out_dir = osp.join(devkit_path, 'VOC2012', 'SegmentationClassAug') - else: - out_dir = args.out_dir - mmcv.mkdir_or_exist(out_dir) - in_dir = osp.join(aug_path, 'dataset', 'cls') - - mmcv.track_parallel_progress( - partial(convert_mat, in_dir=in_dir, out_dir=out_dir), - list(mmcv.scandir(in_dir, suffix='.mat')), - nproc=nproc) - - full_aug_list = [] - with open(osp.join(aug_path, 'dataset', 'train.txt')) as f: - full_aug_list += [line.strip() for line in f] - with open(osp.join(aug_path, 'dataset', 'val.txt')) as f: - full_aug_list += [line.strip() for line in f] - - with open( - osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', - 'train.txt')) as f: - ori_train_list = [line.strip() for line in f] - with open( - osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', - 'val.txt')) as f: - val_list = [line.strip() for line in f] - - aug_train_list = generate_aug_list(ori_train_list + full_aug_list, - val_list) - assert len(aug_train_list) == AUG_LEN, 'len(aug_train_list) != {}'.format( - AUG_LEN) - - with open( - osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', - 'trainaug.txt'), 'w') as f: - f.writelines(line + '\n' for line in aug_train_list) - - aug_list = generate_aug_list(full_aug_list, ori_train_list + val_list) - assert len(aug_list) == AUG_LEN - len( - ori_train_list), 'len(aug_list) != {}'.format(AUG_LEN - - len(ori_train_list)) - with open( - osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', 'aug.txt'), - 'w') as f: - f.writelines(line + '\n' for line in aug_list) - - print('Done!') - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/deploy_test.py b/cv/classification/repvit/pytorch/segmentation/tools/deploy_test.py deleted file mode 100644 index 593532c0..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/deploy_test.py +++ /dev/null @@ -1,296 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import os -import os.path as osp -import shutil -import warnings -from typing import Any, Iterable - -import mmcv -import numpy as np -import torch -from mmcv.parallel import MMDataParallel -from mmcv.runner import get_dist_info -from mmcv.utils import DictAction - -from mmseg.apis import single_gpu_test -from mmseg.datasets import build_dataloader, build_dataset -from mmseg.models.segmentors.base import BaseSegmentor -from mmseg.ops import resize - - -class ONNXRuntimeSegmentor(BaseSegmentor): - - def __init__(self, onnx_file: str, cfg: Any, device_id: int): - super(ONNXRuntimeSegmentor, self).__init__() - import onnxruntime as ort - - # get the custom op path - ort_custom_op_path = '' - try: - from mmcv.ops import get_onnxruntime_op_path - ort_custom_op_path = get_onnxruntime_op_path() - except (ImportError, ModuleNotFoundError): - warnings.warn('If input model has custom op from mmcv, \ - you may have to build mmcv with ONNXRuntime from source.') - session_options = ort.SessionOptions() - # register custom op for onnxruntime - if osp.exists(ort_custom_op_path): - session_options.register_custom_ops_library(ort_custom_op_path) - sess = ort.InferenceSession(onnx_file, session_options) - providers = ['CPUExecutionProvider'] - options = [{}] - is_cuda_available = ort.get_device() == 'GPU' - if is_cuda_available: - providers.insert(0, 'CUDAExecutionProvider') - options.insert(0, {'device_id': device_id}) - - sess.set_providers(providers, options) - - self.sess = sess - self.device_id = device_id - self.io_binding = sess.io_binding() - self.output_names = [_.name for _ in sess.get_outputs()] - for name in self.output_names: - self.io_binding.bind_output(name) - self.cfg = cfg - self.test_mode = cfg.model.test_cfg.mode - self.is_cuda_available = is_cuda_available - - def extract_feat(self, imgs): - raise NotImplementedError('This method is not implemented.') - - def encode_decode(self, img, img_metas): - raise NotImplementedError('This method is not implemented.') - - def forward_train(self, imgs, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def simple_test(self, img: torch.Tensor, img_meta: Iterable, - **kwargs) -> list: - if not self.is_cuda_available: - img = img.detach().cpu() - elif self.device_id >= 0: - img = img.cuda(self.device_id) - device_type = img.device.type - self.io_binding.bind_input( - name='input', - device_type=device_type, - device_id=self.device_id, - element_type=np.float32, - shape=img.shape, - buffer_ptr=img.data_ptr()) - self.sess.run_with_iobinding(self.io_binding) - seg_pred = self.io_binding.copy_outputs_to_cpu()[0] - # whole might support dynamic reshape - ori_shape = img_meta[0]['ori_shape'] - if not (ori_shape[0] == seg_pred.shape[-2] - and ori_shape[1] == seg_pred.shape[-1]): - seg_pred = torch.from_numpy(seg_pred).float() - seg_pred = resize( - seg_pred, size=tuple(ori_shape[:2]), mode='nearest') - seg_pred = seg_pred.long().detach().cpu().numpy() - seg_pred = seg_pred[0] - seg_pred = list(seg_pred) - return seg_pred - - def aug_test(self, imgs, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - -class TensorRTSegmentor(BaseSegmentor): - - def __init__(self, trt_file: str, cfg: Any, device_id: int): - super(TensorRTSegmentor, self).__init__() - from mmcv.tensorrt import TRTWraper, load_tensorrt_plugin - try: - load_tensorrt_plugin() - except (ImportError, ModuleNotFoundError): - warnings.warn('If input model has custom op from mmcv, \ - you may have to build 
mmcv with TensorRT from source.') - model = TRTWraper( - trt_file, input_names=['input'], output_names=['output']) - - self.model = model - self.device_id = device_id - self.cfg = cfg - self.test_mode = cfg.model.test_cfg.mode - - def extract_feat(self, imgs): - raise NotImplementedError('This method is not implemented.') - - def encode_decode(self, img, img_metas): - raise NotImplementedError('This method is not implemented.') - - def forward_train(self, imgs, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def simple_test(self, img: torch.Tensor, img_meta: Iterable, - **kwargs) -> list: - with torch.cuda.device(self.device_id), torch.no_grad(): - seg_pred = self.model({'input': img})['output'] - seg_pred = seg_pred.detach().cpu().numpy() - # whole might support dynamic reshape - ori_shape = img_meta[0]['ori_shape'] - if not (ori_shape[0] == seg_pred.shape[-2] - and ori_shape[1] == seg_pred.shape[-1]): - seg_pred = torch.from_numpy(seg_pred).float() - seg_pred = resize( - seg_pred, size=tuple(ori_shape[:2]), mode='nearest') - seg_pred = seg_pred.long().detach().cpu().numpy() - seg_pred = seg_pred[0] - seg_pred = list(seg_pred) - return seg_pred - - def aug_test(self, imgs, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser( - description='mmseg backend test (and eval)') - parser.add_argument('config', help='test config file path') - parser.add_argument('model', help='Input model file') - parser.add_argument( - '--backend', - help='Backend of the model.', - choices=['onnxruntime', 'tensorrt']) - parser.add_argument('--out', help='output result file in pickle format') - parser.add_argument( - '--format-only', - action='store_true', - help='Format the output results without perform evaluation. It is' - 'useful when you want to format the result to a specific format and ' - 'submit it to the test server') - parser.add_argument( - '--eval', - type=str, - nargs='+', - help='evaluation metrics, which depends on the dataset, e.g., "mIoU"' - ' for generic datasets, and "cityscapes" for Cityscapes') - parser.add_argument('--show', action='store_true', help='show results') - parser.add_argument( - '--show-dir', help='directory where painted images will be saved') - parser.add_argument( - '--options', nargs='+', action=DictAction, help='custom options') - parser.add_argument( - '--eval-options', - nargs='+', - action=DictAction, - help='custom options for evaluation') - parser.add_argument( - '--opacity', - type=float, - default=0.5, - help='Opacity of painted segmentation map. 
In (0, 1] range.') - parser.add_argument('--local_rank', type=int, default=0) - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - return args - - -def main(): - args = parse_args() - - assert args.out or args.eval or args.format_only or args.show \ - or args.show_dir, \ - ('Please specify at least one operation (save/eval/format/show the ' - 'results / save the results) with the argument "--out", "--eval"' - ', "--format-only", "--show" or "--show-dir"') - - if args.eval and args.format_only: - raise ValueError('--eval and --format_only cannot be both specified') - - if args.out is not None and not args.out.endswith(('.pkl', '.pickle')): - raise ValueError('The output file must be a pkl file.') - - cfg = mmcv.Config.fromfile(args.config) - if args.options is not None: - cfg.merge_from_dict(args.options) - cfg.model.pretrained = None - cfg.data.test.test_mode = True - - # init distributed env first, since logger depends on the dist info. - distributed = False - - # build the dataloader - # TODO: support multiple images per gpu (only minor changes are needed) - dataset = build_dataset(cfg.data.test) - data_loader = build_dataloader( - dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - - # load onnx config and meta - cfg.model.train_cfg = None - - if args.backend == 'onnxruntime': - model = ONNXRuntimeSegmentor(args.model, cfg=cfg, device_id=0) - elif args.backend == 'tensorrt': - model = TensorRTSegmentor(args.model, cfg=cfg, device_id=0) - - model.CLASSES = dataset.CLASSES - model.PALETTE = dataset.PALETTE - - # clean gpu memory when starting a new evaluation. - torch.cuda.empty_cache() - eval_kwargs = {} if args.eval_options is None else args.eval_options - - # Deprecated - efficient_test = eval_kwargs.get('efficient_test', False) - if efficient_test: - warnings.warn( - '``efficient_test=True`` does not have effect in tools/test.py, ' - 'the evaluation and format results are CPU memory efficient by ' - 'default') - - eval_on_format_results = ( - args.eval is not None and 'cityscapes' in args.eval) - if eval_on_format_results: - assert len(args.eval) == 1, 'eval on format results is not ' \ - 'applicable for metrics other than ' \ - 'cityscapes' - if args.format_only or eval_on_format_results: - if 'imgfile_prefix' in eval_kwargs: - tmpdir = eval_kwargs['imgfile_prefix'] - else: - tmpdir = '.format_cityscapes' - eval_kwargs.setdefault('imgfile_prefix', tmpdir) - mmcv.mkdir_or_exist(tmpdir) - else: - tmpdir = None - - model = MMDataParallel(model, device_ids=[0]) - results = single_gpu_test( - model, - data_loader, - args.show, - args.show_dir, - False, - args.opacity, - pre_eval=args.eval is not None and not eval_on_format_results, - format_only=args.format_only or eval_on_format_results, - format_args=eval_kwargs) - - rank, _ = get_dist_info() - if rank == 0: - if args.out: - warnings.warn( - 'The behavior of ``args.out`` has been changed since MMSeg ' - 'v0.16, the pickled outputs could be seg map as type of ' - 'np.array, pre-eval results or file paths for ' - '``dataset.format_results()``.') - print(f'\nwriting results to {args.out}') - mmcv.dump(results, args.out) - if args.eval: - dataset.evaluate(results, args.eval, **eval_kwargs) - if tmpdir is not None and eval_on_format_results: - # remove tmp dir when cityscapes evaluation - shutil.rmtree(tmpdir) - - -if __name__ == '__main__': - main() diff --git 
a/cv/classification/repvit/pytorch/segmentation/tools/dist_test.sh b/cv/classification/repvit/pytorch/segmentation/tools/dist_test.sh deleted file mode 100644 index dbdac900..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/dist_test.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -CONFIG=$1 -CHECKPOINT=$2 -GPUS=$3 -PORT=${PORT:-29500} -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -NCCL_P2P_DISABLE=1 \ -python -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT \ - $(dirname "$0")/test.py $CONFIG $CHECKPOINT --launcher pytorch ${@:4} diff --git a/cv/classification/repvit/pytorch/segmentation/tools/dist_train.sh b/cv/classification/repvit/pytorch/segmentation/tools/dist_train.sh deleted file mode 100644 index d4c1f1ba..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/dist_train.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/usr/bin/env bash - -CONFIG=$1 -GPUS=$2 -NNODES=${NNODES:-1} -NODE_RANK=${NODE_RANK:-0} -PORT=${PORT:-29500} -MASTER_ADDR=${MASTER_ADDR:-"127.0.0.1"} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -NCCL_P2P_DISABLE=1 \ -python -m torch.distributed.launch \ - --nnodes=$NNODES \ - --node_rank=$NODE_RANK \ - --master_addr=$MASTER_ADDR \ - --nproc_per_node=$GPUS \ - --master_port=$PORT \ - $(dirname "$0")/train.py \ - $CONFIG \ - --launcher pytorch ${@:3} diff --git a/cv/classification/repvit/pytorch/segmentation/tools/get_flops.py b/cv/classification/repvit/pytorch/segmentation/tools/get_flops.py deleted file mode 100644 index 55a7d0d3..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/get_flops.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse - -from mmcv import Config -from mmcv.cnn import get_model_complexity_info - -from mmseg.models import build_segmentor -import sys -sys.path.append("..") -import xformer -import pvt - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a segmentor') - parser.add_argument('config', help='train config file path') - parser.add_argument( - '--shape', - type=int, - nargs='+', - default=[2048, 1024], - help='input image size') - args = parser.parse_args() - return args - - -def main(): - - args = parse_args() - - if len(args.shape) == 1: - input_shape = (3, args.shape[0], args.shape[0]) - elif len(args.shape) == 2: - input_shape = (3, ) + tuple(args.shape) - else: - raise ValueError('invalid input shape') - - cfg = Config.fromfile(args.config) - cfg.model.pretrained = None - model = build_segmentor( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')).cuda() - model.eval() - - if hasattr(model, 'forward_dummy'): - model.forward = model.forward_dummy - else: - raise NotImplementedError( - 'FLOPs counter is currently not currently supported with {}'. - format(model.__class__.__name__)) - - flops, params = get_model_complexity_info(model, input_shape) - split_line = '=' * 30 - print('{0}\nInput shape: {1}\nFlops: {2}\nParams: {3}\n{0}'.format( - split_line, input_shape, flops, params)) - print('!!!Please be cautious if you use the results in papers. 
' - 'You may need to check if all ops are supported and verify that the ' - 'flops computation is correct.') - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/model_converters/mit2mmseg.py b/cv/classification/repvit/pytorch/segmentation/tools/model_converters/mit2mmseg.py deleted file mode 100644 index 2eff1f7b..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/model_converters/mit2mmseg.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os.path as osp -from collections import OrderedDict - -import mmcv -import torch -from mmcv.runner import CheckpointLoader - - -def convert_mit(ckpt): - new_ckpt = OrderedDict() - # Process the concat between q linear weights and kv linear weights - for k, v in ckpt.items(): - if k.startswith('head'): - continue - # patch embedding conversion - elif k.startswith('patch_embed'): - stage_i = int(k.split('.')[0].replace('patch_embed', '')) - new_k = k.replace(f'patch_embed{stage_i}', f'layers.{stage_i-1}.0') - new_v = v - if 'proj.' in new_k: - new_k = new_k.replace('proj.', 'projection.') - # transformer encoder layer conversion - elif k.startswith('block'): - stage_i = int(k.split('.')[0].replace('block', '')) - new_k = k.replace(f'block{stage_i}', f'layers.{stage_i-1}.1') - new_v = v - if 'attn.q.' in new_k: - sub_item_k = k.replace('q.', 'kv.') - new_k = new_k.replace('q.', 'attn.in_proj_') - new_v = torch.cat([v, ckpt[sub_item_k]], dim=0) - elif 'attn.kv.' in new_k: - continue - elif 'attn.proj.' in new_k: - new_k = new_k.replace('proj.', 'attn.out_proj.') - elif 'attn.sr.' in new_k: - new_k = new_k.replace('sr.', 'sr.') - elif 'mlp.' in new_k: - string = f'{new_k}-' - new_k = new_k.replace('mlp.', 'ffn.layers.') - if 'fc1.weight' in new_k or 'fc2.weight' in new_k: - new_v = v.reshape((*v.shape, 1, 1)) - new_k = new_k.replace('fc1.', '0.') - new_k = new_k.replace('dwconv.dwconv.', '1.') - new_k = new_k.replace('fc2.', '4.') - string += f'{new_k} {v.shape}-{new_v.shape}' - # norm layer conversion - elif k.startswith('norm'): - stage_i = int(k.split('.')[0].replace('norm', '')) - new_k = k.replace(f'norm{stage_i}', f'layers.{stage_i-1}.2') - new_v = v - else: - new_k = k - new_v = v - new_ckpt[new_k] = new_v - return new_ckpt - - -def main(): - parser = argparse.ArgumentParser( - description='Convert keys in official pretrained segformer to ' - 'MMSegmentation style.') - parser.add_argument('src', help='src model path or url') - # The dst path must be a full path of the new checkpoint. - parser.add_argument('dst', help='save path') - args = parser.parse_args() - - checkpoint = CheckpointLoader.load_checkpoint(args.src, map_location='cpu') - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - elif 'model' in checkpoint: - state_dict = checkpoint['model'] - else: - state_dict = checkpoint - weight = convert_mit(state_dict) - mmcv.mkdir_or_exist(osp.dirname(args.dst)) - torch.save(weight, args.dst) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/model_converters/swin2mmseg.py b/cv/classification/repvit/pytorch/segmentation/tools/model_converters/swin2mmseg.py deleted file mode 100644 index 03b24cea..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/model_converters/swin2mmseg.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import os.path as osp -from collections import OrderedDict - -import mmcv -import torch -from mmcv.runner import CheckpointLoader - - -def convert_swin(ckpt): - new_ckpt = OrderedDict() - - def correct_unfold_reduction_order(x): - out_channel, in_channel = x.shape - x = x.reshape(out_channel, 4, in_channel // 4) - x = x[:, [0, 2, 1, 3], :].transpose(1, - 2).reshape(out_channel, in_channel) - return x - - def correct_unfold_norm_order(x): - in_channel = x.shape[0] - x = x.reshape(4, in_channel // 4) - x = x[[0, 2, 1, 3], :].transpose(0, 1).reshape(in_channel) - return x - - for k, v in ckpt.items(): - if k.startswith('head'): - continue - elif k.startswith('layers'): - new_v = v - if 'attn.' in k: - new_k = k.replace('attn.', 'attn.w_msa.') - elif 'mlp.' in k: - if 'mlp.fc1.' in k: - new_k = k.replace('mlp.fc1.', 'ffn.layers.0.0.') - elif 'mlp.fc2.' in k: - new_k = k.replace('mlp.fc2.', 'ffn.layers.1.') - else: - new_k = k.replace('mlp.', 'ffn.') - elif 'downsample' in k: - new_k = k - if 'reduction.' in k: - new_v = correct_unfold_reduction_order(v) - elif 'norm.' in k: - new_v = correct_unfold_norm_order(v) - else: - new_k = k - new_k = new_k.replace('layers', 'stages', 1) - elif k.startswith('patch_embed'): - new_v = v - if 'proj' in k: - new_k = k.replace('proj', 'projection') - else: - new_k = k - else: - new_v = v - new_k = k - - new_ckpt[new_k] = new_v - - return new_ckpt - - -def main(): - parser = argparse.ArgumentParser( - description='Convert keys in official pretrained swin models to' - 'MMSegmentation style.') - parser.add_argument('src', help='src model path or url') - # The dst path must be a full path of the new checkpoint. - parser.add_argument('dst', help='save path') - args = parser.parse_args() - - checkpoint = CheckpointLoader.load_checkpoint(args.src, map_location='cpu') - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - elif 'model' in checkpoint: - state_dict = checkpoint['model'] - else: - state_dict = checkpoint - weight = convert_swin(state_dict) - mmcv.mkdir_or_exist(osp.dirname(args.dst)) - torch.save(weight, args.dst) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/model_converters/vit2mmseg.py b/cv/classification/repvit/pytorch/segmentation/tools/model_converters/vit2mmseg.py deleted file mode 100644 index bc18ebed..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/model_converters/vit2mmseg.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import os.path as osp -from collections import OrderedDict - -import mmcv -import torch -from mmcv.runner import CheckpointLoader - - -def convert_vit(ckpt): - - new_ckpt = OrderedDict() - - for k, v in ckpt.items(): - if k.startswith('head'): - continue - if k.startswith('norm'): - new_k = k.replace('norm.', 'ln1.') - elif k.startswith('patch_embed'): - if 'proj' in k: - new_k = k.replace('proj', 'projection') - else: - new_k = k - elif k.startswith('blocks'): - if 'norm' in k: - new_k = k.replace('norm', 'ln') - elif 'mlp.fc1' in k: - new_k = k.replace('mlp.fc1', 'ffn.layers.0.0') - elif 'mlp.fc2' in k: - new_k = k.replace('mlp.fc2', 'ffn.layers.1') - elif 'attn.qkv' in k: - new_k = k.replace('attn.qkv.', 'attn.attn.in_proj_') - elif 'attn.proj' in k: - new_k = k.replace('attn.proj', 'attn.attn.out_proj') - else: - new_k = k - new_k = new_k.replace('blocks.', 'layers.') - else: - new_k = k - new_ckpt[new_k] = v - - return new_ckpt - - -def main(): - parser = argparse.ArgumentParser( - description='Convert keys in timm pretrained vit models to ' - 'MMSegmentation style.') - parser.add_argument('src', help='src model path or url') - # The dst path must be a full path of the new checkpoint. - parser.add_argument('dst', help='save path') - args = parser.parse_args() - - checkpoint = CheckpointLoader.load_checkpoint(args.src, map_location='cpu') - if 'state_dict' in checkpoint: - # timm checkpoint - state_dict = checkpoint['state_dict'] - elif 'model' in checkpoint: - # deit checkpoint - state_dict = checkpoint['model'] - else: - state_dict = checkpoint - weight = convert_vit(state_dict) - mmcv.mkdir_or_exist(osp.dirname(args.dst)) - torch.save(weight, args.dst) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/onnx2tensorrt.py b/cv/classification/repvit/pytorch/segmentation/tools/onnx2tensorrt.py deleted file mode 100644 index f8a258fc..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/onnx2tensorrt.py +++ /dev/null @@ -1,276 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import os -import os.path as osp -from typing import Iterable, Optional, Union - -import matplotlib.pyplot as plt -import mmcv -import numpy as np -import onnxruntime as ort -import torch -from mmcv.ops import get_onnxruntime_op_path -from mmcv.tensorrt import (TRTWraper, is_tensorrt_plugin_loaded, onnx2trt, - save_trt_engine) - -from mmseg.apis.inference import LoadImage -from mmseg.datasets import DATASETS -from mmseg.datasets.pipelines import Compose - - -def get_GiB(x: int): - """return x GiB.""" - return x * (1 << 30) - - -def _prepare_input_img(img_path: str, - test_pipeline: Iterable[dict], - shape: Optional[Iterable] = None, - rescale_shape: Optional[Iterable] = None) -> dict: - # build the data pipeline - if shape is not None: - test_pipeline[1]['img_scale'] = (shape[1], shape[0]) - test_pipeline[1]['transforms'][0]['keep_ratio'] = False - test_pipeline = [LoadImage()] + test_pipeline[1:] - test_pipeline = Compose(test_pipeline) - # prepare data - data = dict(img=img_path) - data = test_pipeline(data) - imgs = data['img'] - img_metas = [i.data for i in data['img_metas']] - - if rescale_shape is not None: - for img_meta in img_metas: - img_meta['ori_shape'] = tuple(rescale_shape) + (3, ) - - mm_inputs = {'imgs': imgs, 'img_metas': img_metas} - - return mm_inputs - - -def _update_input_img(img_list: Iterable, img_meta_list: Iterable): - # update img and its meta list - N = img_list[0].size(0) - img_meta = img_meta_list[0][0] - img_shape = img_meta['img_shape'] - ori_shape = img_meta['ori_shape'] - pad_shape = img_meta['pad_shape'] - new_img_meta_list = [[{ - 'img_shape': - img_shape, - 'ori_shape': - ori_shape, - 'pad_shape': - pad_shape, - 'filename': - img_meta['filename'], - 'scale_factor': - (img_shape[1] / ori_shape[1], img_shape[0] / ori_shape[0]) * 2, - 'flip': - False, - } for _ in range(N)]] - - return img_list, new_img_meta_list - - -def show_result_pyplot(img: Union[str, np.ndarray], - result: np.ndarray, - palette: Optional[Iterable] = None, - fig_size: Iterable[int] = (15, 10), - opacity: float = 0.5, - title: str = '', - block: bool = True): - img = mmcv.imread(img) - img = img.copy() - seg = result[0] - seg = mmcv.imresize(seg, img.shape[:2][::-1]) - palette = np.array(palette) - assert palette.shape[1] == 3 - assert len(palette.shape) == 2 - assert 0 < opacity <= 1.0 - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - # convert to BGR - color_seg = color_seg[..., ::-1] - - img = img * (1 - opacity) + color_seg * opacity - img = img.astype(np.uint8) - - plt.figure(figsize=fig_size) - plt.imshow(mmcv.bgr2rgb(img)) - plt.title(title) - plt.tight_layout() - plt.show(block=block) - - -def onnx2tensorrt(onnx_file: str, - trt_file: str, - config: dict, - input_config: dict, - fp16: bool = False, - verify: bool = False, - show: bool = False, - dataset: str = 'CityscapesDataset', - workspace_size: int = 1, - verbose: bool = False): - import tensorrt as trt - min_shape = input_config['min_shape'] - max_shape = input_config['max_shape'] - # create trt engine and wrapper - opt_shape_dict = {'input': [min_shape, min_shape, max_shape]} - max_workspace_size = get_GiB(workspace_size) - trt_engine = onnx2trt( - onnx_file, - opt_shape_dict, - log_level=trt.Logger.VERBOSE if verbose else trt.Logger.ERROR, - fp16_mode=fp16, - max_workspace_size=max_workspace_size) - save_dir, _ = osp.split(trt_file) - if save_dir: - os.makedirs(save_dir, exist_ok=True) - 
save_trt_engine(trt_engine, trt_file) - print(f'Successfully created TensorRT engine: {trt_file}') - - if verify: - inputs = _prepare_input_img( - input_config['input_path'], - config.data.test.pipeline, - shape=min_shape[2:]) - - imgs = inputs['imgs'] - img_metas = inputs['img_metas'] - img_list = [img[None, :] for img in imgs] - img_meta_list = [[img_meta] for img_meta in img_metas] - # update img_meta - img_list, img_meta_list = _update_input_img(img_list, img_meta_list) - - if max_shape[0] > 1: - # concate flip image for batch test - flip_img_list = [_.flip(-1) for _ in img_list] - img_list = [ - torch.cat((ori_img, flip_img), 0) - for ori_img, flip_img in zip(img_list, flip_img_list) - ] - - # Get results from ONNXRuntime - ort_custom_op_path = get_onnxruntime_op_path() - session_options = ort.SessionOptions() - if osp.exists(ort_custom_op_path): - session_options.register_custom_ops_library(ort_custom_op_path) - sess = ort.InferenceSession(onnx_file, session_options) - sess.set_providers(['CPUExecutionProvider'], [{}]) # use cpu mode - onnx_output = sess.run(['output'], - {'input': img_list[0].detach().numpy()})[0][0] - - # Get results from TensorRT - trt_model = TRTWraper(trt_file, ['input'], ['output']) - with torch.no_grad(): - trt_outputs = trt_model({'input': img_list[0].contiguous().cuda()}) - trt_output = trt_outputs['output'][0].cpu().detach().numpy() - - if show: - dataset = DATASETS.get(dataset) - assert dataset is not None - palette = dataset.PALETTE - - show_result_pyplot( - input_config['input_path'], - (onnx_output[0].astype(np.uint8), ), - palette=palette, - title='ONNXRuntime', - block=False) - show_result_pyplot( - input_config['input_path'], (trt_output[0].astype(np.uint8), ), - palette=palette, - title='TensorRT') - - np.testing.assert_allclose( - onnx_output, trt_output, rtol=1e-03, atol=1e-05) - print('TensorRT and ONNXRuntime output all close.') - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert MMSegmentation models from ONNX to TensorRT') - parser.add_argument('config', help='Config file of the model') - parser.add_argument('model', help='Path to the input ONNX model') - parser.add_argument( - '--trt-file', type=str, help='Path to the output TensorRT engine') - parser.add_argument( - '--max-shape', - type=int, - nargs=4, - default=[1, 3, 400, 600], - help='Maximum shape of model input.') - parser.add_argument( - '--min-shape', - type=int, - nargs=4, - default=[1, 3, 400, 600], - help='Minimum shape of model input.') - parser.add_argument('--fp16', action='store_true', help='Enable fp16 mode') - parser.add_argument( - '--workspace-size', - type=int, - default=1, - help='Max workspace size in GiB') - parser.add_argument( - '--input-img', type=str, default='', help='Image for test') - parser.add_argument( - '--show', action='store_true', help='Whether to show output results') - parser.add_argument( - '--dataset', - type=str, - default='CityscapesDataset', - help='Dataset name') - parser.add_argument( - '--verify', - action='store_true', - help='Verify the outputs of ONNXRuntime and TensorRT') - parser.add_argument( - '--verbose', - action='store_true', - help='Whether to verbose logging messages while creating \ - TensorRT engine.') - args = parser.parse_args() - return args - - -if __name__ == '__main__': - - assert is_tensorrt_plugin_loaded(), 'TensorRT plugin should be compiled.' 
- args = parse_args() - - if not args.input_img: - args.input_img = osp.join(osp.dirname(__file__), '../demo/demo.png') - - # check arguments - assert osp.exists(args.config), 'Config {} not found.'.format(args.config) - assert osp.exists(args.model), \ - 'ONNX model {} not found.'.format(args.model) - assert args.workspace_size >= 0, 'Workspace size less than 0.' - assert DATASETS.get(args.dataset) is not None, \ - 'Dataset {} does not found.'.format(args.dataset) - for max_value, min_value in zip(args.max_shape, args.min_shape): - assert max_value >= min_value, \ - 'max_shape should be larger than min shape' - - input_config = { - 'min_shape': args.min_shape, - 'max_shape': args.max_shape, - 'input_path': args.input_img - } - - cfg = mmcv.Config.fromfile(args.config) - onnx2tensorrt( - args.model, - args.trt_file, - cfg, - input_config, - fp16=args.fp16, - verify=args.verify, - show=args.show, - dataset=args.dataset, - workspace_size=args.workspace_size, - verbose=args.verbose) diff --git a/cv/classification/repvit/pytorch/segmentation/tools/print_config.py b/cv/classification/repvit/pytorch/segmentation/tools/print_config.py deleted file mode 100644 index fb978c9b..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/print_config.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse - -from mmcv import Config, DictAction - -from mmseg.apis import init_segmentor - - -def parse_args(): - parser = argparse.ArgumentParser(description='Print the whole config') - parser.add_argument('config', help='config file path') - parser.add_argument( - '--graph', action='store_true', help='print the models graph') - parser.add_argument( - '--options', nargs='+', action=DictAction, help='arguments in dict') - args = parser.parse_args() - - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - if args.options is not None: - cfg.merge_from_dict(args.options) - print(f'Config:\n{cfg.pretty_text}') - # dump config - cfg.dump('example.py') - # dump models graph - if args.graph: - model = init_segmentor(args.config, device='cpu') - print(f'Model graph:\n{str(model)}') - with open('example-graph.txt', 'w') as f: - f.writelines(str(model)) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/publish_model.py b/cv/classification/repvit/pytorch/segmentation/tools/publish_model.py deleted file mode 100644 index e2660578..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/publish_model.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import subprocess - -import torch - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Process a checkpoint to be published') - parser.add_argument('in_file', help='input checkpoint filename') - parser.add_argument('out_file', help='output checkpoint filename') - args = parser.parse_args() - return args - - -def process_checkpoint(in_file, out_file): - checkpoint = torch.load(in_file, map_location='cpu') - # remove optimizer for smaller file size - if 'optimizer' in checkpoint: - del checkpoint['optimizer'] - # if it is necessary to remove some sensitive data in checkpoint['meta'], - # add the code here. 
- torch.save(checkpoint, out_file) - sha = subprocess.check_output(['sha256sum', out_file]).decode() - final_file = out_file.rstrip('.pth') + '-{}.pth'.format(sha[:8]) - subprocess.Popen(['mv', out_file, final_file]) - - -def main(): - args = parse_args() - process_checkpoint(args.in_file, args.out_file) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/pytorch2onnx.py b/cv/classification/repvit/pytorch/segmentation/tools/pytorch2onnx.py deleted file mode 100644 index 1751a7b7..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/pytorch2onnx.py +++ /dev/null @@ -1,391 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -from functools import partial - -import mmcv -import numpy as np -import onnxruntime as rt -import torch -import torch._C -import torch.serialization -from mmcv import DictAction -from mmcv.onnx import register_extra_symbolics -from mmcv.runner import load_checkpoint -from torch import nn - -from mmseg.apis import show_result_pyplot -from mmseg.apis.inference import LoadImage -from mmseg.datasets.pipelines import Compose -from mmseg.models import build_segmentor -from mmseg.ops import resize - -torch.manual_seed(3) - - -def _convert_batchnorm(module): - module_output = module - if isinstance(module, torch.nn.SyncBatchNorm): - module_output = torch.nn.BatchNorm2d(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - module_output.weight.data = module.weight.data.clone().detach() - module_output.bias.data = module.bias.data.clone().detach() - # keep requires_grad unchanged - module_output.weight.requires_grad = module.weight.requires_grad - module_output.bias.requires_grad = module.bias.requires_grad - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - for name, child in module.named_children(): - module_output.add_module(name, _convert_batchnorm(child)) - del module - return module_output - - -def _demo_mm_inputs(input_shape, num_classes): - """Create a superset of inputs needed to run test or train batches. 
- - Args: - input_shape (tuple): - input batch dimensions - num_classes (int): - number of semantic classes - """ - (N, C, H, W) = input_shape - rng = np.random.RandomState(0) - imgs = rng.rand(*input_shape) - segs = rng.randint( - low=0, high=num_classes - 1, size=(N, 1, H, W)).astype(np.uint8) - img_metas = [{ - 'img_shape': (H, W, C), - 'ori_shape': (H, W, C), - 'pad_shape': (H, W, C), - 'filename': '.png', - 'scale_factor': 1.0, - 'flip': False, - } for _ in range(N)] - mm_inputs = { - 'imgs': torch.FloatTensor(imgs).requires_grad_(True), - 'img_metas': img_metas, - 'gt_semantic_seg': torch.LongTensor(segs) - } - return mm_inputs - - -def _prepare_input_img(img_path, - test_pipeline, - shape=None, - rescale_shape=None): - # build the data pipeline - if shape is not None: - test_pipeline[1]['img_scale'] = (shape[1], shape[0]) - test_pipeline[1]['transforms'][0]['keep_ratio'] = False - test_pipeline = [LoadImage()] + test_pipeline[1:] - test_pipeline = Compose(test_pipeline) - # prepare data - data = dict(img=img_path) - data = test_pipeline(data) - imgs = data['img'] - img_metas = [i.data for i in data['img_metas']] - - if rescale_shape is not None: - for img_meta in img_metas: - img_meta['ori_shape'] = tuple(rescale_shape) + (3, ) - - mm_inputs = {'imgs': imgs, 'img_metas': img_metas} - - return mm_inputs - - -def _update_input_img(img_list, img_meta_list, update_ori_shape=False): - # update img and its meta list - N, C, H, W = img_list[0].shape - img_meta = img_meta_list[0][0] - img_shape = (H, W, C) - if update_ori_shape: - ori_shape = img_shape - else: - ori_shape = img_meta['ori_shape'] - pad_shape = img_shape - new_img_meta_list = [[{ - 'img_shape': - img_shape, - 'ori_shape': - ori_shape, - 'pad_shape': - pad_shape, - 'filename': - img_meta['filename'], - 'scale_factor': - (img_shape[1] / ori_shape[1], img_shape[0] / ori_shape[0]) * 2, - 'flip': - False, - } for _ in range(N)]] - - return img_list, new_img_meta_list - - -def pytorch2onnx(model, - mm_inputs, - opset_version=11, - show=False, - output_file='tmp.onnx', - verify=False, - dynamic_export=False): - """Export Pytorch model to ONNX model and verify the outputs are same - between Pytorch and ONNX. - - Args: - model (nn.Module): Pytorch model we want to export. - mm_inputs (dict): Contain the input tensors and img_metas information. - opset_version (int): The onnx op version. Default: 11. - show (bool): Whether print the computation graph. Default: False. - output_file (string): The path to where we store the output ONNX model. - Default: `tmp.onnx`. - verify (bool): Whether compare the outputs between Pytorch and ONNX. - Default: False. - dynamic_export (bool): Whether to export ONNX with dynamic axis. - Default: False. 
- """ - model.cpu().eval() - test_mode = model.test_cfg.mode - - if isinstance(model.decode_head, nn.ModuleList): - num_classes = model.decode_head[-1].num_classes - else: - num_classes = model.decode_head.num_classes - - imgs = mm_inputs.pop('imgs') - img_metas = mm_inputs.pop('img_metas') - - img_list = [img[None, :] for img in imgs] - img_meta_list = [[img_meta] for img_meta in img_metas] - # update img_meta - img_list, img_meta_list = _update_input_img(img_list, img_meta_list) - - # replace original forward function - origin_forward = model.forward - model.forward = partial( - model.forward, - img_metas=img_meta_list, - return_loss=False, - rescale=True) - dynamic_axes = None - if dynamic_export: - if test_mode == 'slide': - dynamic_axes = {'input': {0: 'batch'}, 'output': {1: 'batch'}} - else: - dynamic_axes = { - 'input': { - 0: 'batch', - 2: 'height', - 3: 'width' - }, - 'output': { - 1: 'batch', - 2: 'height', - 3: 'width' - } - } - - register_extra_symbolics(opset_version) - with torch.no_grad(): - torch.onnx.export( - model, (img_list, ), - output_file, - input_names=['input'], - output_names=['output'], - export_params=True, - keep_initializers_as_inputs=False, - verbose=show, - opset_version=opset_version, - dynamic_axes=dynamic_axes) - print(f'Successfully exported ONNX model: {output_file}') - model.forward = origin_forward - - if verify: - # check by onnx - import onnx - onnx_model = onnx.load(output_file) - onnx.checker.check_model(onnx_model) - - if dynamic_export and test_mode == 'whole': - # scale image for dynamic shape test - img_list = [resize(_, scale_factor=1.5) for _ in img_list] - # concate flip image for batch test - flip_img_list = [_.flip(-1) for _ in img_list] - img_list = [ - torch.cat((ori_img, flip_img), 0) - for ori_img, flip_img in zip(img_list, flip_img_list) - ] - - # update img_meta - img_list, img_meta_list = _update_input_img( - img_list, img_meta_list, test_mode == 'whole') - - # check the numerical value - # get pytorch output - with torch.no_grad(): - pytorch_result = model(img_list, img_meta_list, return_loss=False) - pytorch_result = np.stack(pytorch_result, 0) - - # get onnx output - input_all = [node.name for node in onnx_model.graph.input] - input_initializer = [ - node.name for node in onnx_model.graph.initializer - ] - net_feed_input = list(set(input_all) - set(input_initializer)) - assert (len(net_feed_input) == 1) - sess = rt.InferenceSession(output_file) - onnx_result = sess.run( - None, {net_feed_input[0]: img_list[0].detach().numpy()})[0][0] - # show segmentation results - if show: - import cv2 - import os.path as osp - img = img_meta_list[0][0]['filename'] - if not osp.exists(img): - img = imgs[0][:3, ...].permute(1, 2, 0) * 255 - img = img.detach().numpy().astype(np.uint8) - ori_shape = img.shape[:2] - else: - ori_shape = LoadImage()({'img': img})['ori_shape'] - - # resize onnx_result to ori_shape - onnx_result_ = cv2.resize(onnx_result[0].astype(np.uint8), - (ori_shape[1], ori_shape[0])) - show_result_pyplot( - model, - img, (onnx_result_, ), - palette=model.PALETTE, - block=False, - title='ONNXRuntime', - opacity=0.5) - - # resize pytorch_result to ori_shape - pytorch_result_ = cv2.resize(pytorch_result[0].astype(np.uint8), - (ori_shape[1], ori_shape[0])) - show_result_pyplot( - model, - img, (pytorch_result_, ), - title='PyTorch', - palette=model.PALETTE, - opacity=0.5) - # compare results - np.testing.assert_allclose( - pytorch_result.astype(np.float32) / num_classes, - onnx_result.astype(np.float32) / num_classes, - rtol=1e-5, - 
atol=1e-5, - err_msg='The outputs are different between Pytorch and ONNX') - print('The outputs are same between Pytorch and ONNX') - - -def parse_args(): - parser = argparse.ArgumentParser(description='Convert MMSeg to ONNX') - parser.add_argument('config', help='test config file path') - parser.add_argument('--checkpoint', help='checkpoint file', default=None) - parser.add_argument( - '--input-img', type=str, help='Images for input', default=None) - parser.add_argument( - '--show', - action='store_true', - help='show onnx graph and segmentation results') - parser.add_argument( - '--verify', action='store_true', help='verify the onnx model') - parser.add_argument('--output-file', type=str, default='tmp.onnx') - parser.add_argument('--opset-version', type=int, default=11) - parser.add_argument( - '--shape', - type=int, - nargs='+', - default=None, - help='input image height and width.') - parser.add_argument( - '--rescale_shape', - type=int, - nargs='+', - default=None, - help='output image rescale height and width, work for slide mode.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='Override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--dynamic-export', - action='store_true', - help='Whether to export onnx with dynamic axis.') - args = parser.parse_args() - return args - - -if __name__ == '__main__': - args = parse_args() - - cfg = mmcv.Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - cfg.model.pretrained = None - - if args.shape is None: - img_scale = cfg.test_pipeline[1]['img_scale'] - input_shape = (1, 3, img_scale[1], img_scale[0]) - elif len(args.shape) == 1: - input_shape = (1, 3, args.shape[0], args.shape[0]) - elif len(args.shape) == 2: - input_shape = ( - 1, - 3, - ) + tuple(args.shape) - else: - raise ValueError('invalid input shape') - - test_mode = cfg.model.test_cfg.mode - - # build the model and load checkpoint - cfg.model.train_cfg = None - segmentor = build_segmentor( - cfg.model, train_cfg=None, test_cfg=cfg.get('test_cfg')) - # convert SyncBN to BN - segmentor = _convert_batchnorm(segmentor) - - if args.checkpoint: - checkpoint = load_checkpoint( - segmentor, args.checkpoint, map_location='cpu') - segmentor.CLASSES = checkpoint['meta']['CLASSES'] - segmentor.PALETTE = checkpoint['meta']['PALETTE'] - - # read input or create dummpy input - if args.input_img is not None: - preprocess_shape = (input_shape[2], input_shape[3]) - rescale_shape = None - if args.rescale_shape is not None: - rescale_shape = [args.rescale_shape[0], args.rescale_shape[1]] - mm_inputs = _prepare_input_img( - args.input_img, - cfg.data.test.pipeline, - shape=preprocess_shape, - rescale_shape=rescale_shape) - else: - if isinstance(segmentor.decode_head, nn.ModuleList): - num_classes = segmentor.decode_head[-1].num_classes - else: - num_classes = segmentor.decode_head.num_classes - mm_inputs = _demo_mm_inputs(input_shape, num_classes) - - # convert model to onnx file - pytorch2onnx( - segmentor, - mm_inputs, - opset_version=args.opset_version, - show=args.show, - output_file=args.output_file, - verify=args.verify, - dynamic_export=args.dynamic_export) diff --git 
a/cv/classification/repvit/pytorch/segmentation/tools/pytorch2torchscript.py b/cv/classification/repvit/pytorch/segmentation/tools/pytorch2torchscript.py deleted file mode 100644 index d76f5ecb..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/pytorch2torchscript.py +++ /dev/null @@ -1,185 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse - -import mmcv -import numpy as np -import torch -import torch._C -import torch.serialization -from mmcv.runner import load_checkpoint -from torch import nn - -from mmseg.models import build_segmentor - -torch.manual_seed(3) - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -def check_torch_version(): - torch_minimum_version = '1.8.0' - torch_version = digit_version(torch.__version__) - - assert (torch_version >= digit_version(torch_minimum_version)), \ - f'Torch=={torch.__version__} is not support for converting to ' \ - f'torchscript. Please install pytorch>={torch_minimum_version}.' - - -def _convert_batchnorm(module): - module_output = module - if isinstance(module, torch.nn.SyncBatchNorm): - module_output = torch.nn.BatchNorm2d(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - module_output.weight.data = module.weight.data.clone().detach() - module_output.bias.data = module.bias.data.clone().detach() - # keep requires_grad unchanged - module_output.weight.requires_grad = module.weight.requires_grad - module_output.bias.requires_grad = module.bias.requires_grad - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - for name, child in module.named_children(): - module_output.add_module(name, _convert_batchnorm(child)) - del module - return module_output - - -def _demo_mm_inputs(input_shape, num_classes): - """Create a superset of inputs needed to run test or train batches. - - Args: - input_shape (tuple): - input batch dimensions - num_classes (int): - number of semantic classes - """ - (N, C, H, W) = input_shape - rng = np.random.RandomState(0) - imgs = rng.rand(*input_shape) - segs = rng.randint( - low=0, high=num_classes - 1, size=(N, 1, H, W)).astype(np.uint8) - img_metas = [{ - 'img_shape': (H, W, C), - 'ori_shape': (H, W, C), - 'pad_shape': (H, W, C), - 'filename': '.png', - 'scale_factor': 1.0, - 'flip': False, - } for _ in range(N)] - mm_inputs = { - 'imgs': torch.FloatTensor(imgs).requires_grad_(True), - 'img_metas': img_metas, - 'gt_semantic_seg': torch.LongTensor(segs) - } - return mm_inputs - - -def pytorch2libtorch(model, - input_shape, - show=False, - output_file='tmp.pt', - verify=False): - """Export Pytorch model to TorchScript model and verify the outputs are - same between Pytorch and TorchScript. - - Args: - model (nn.Module): Pytorch model we want to export. - input_shape (tuple): Use this input shape to construct - the corresponding dummy input and execute the model. - show (bool): Whether print the computation graph. Default: False. - output_file (string): The path to where we store the - output TorchScript model. Default: `tmp.pt`. - verify (bool): Whether compare the outputs between - Pytorch and TorchScript. Default: False. 
- """ - if isinstance(model.decode_head, nn.ModuleList): - num_classes = model.decode_head[-1].num_classes - else: - num_classes = model.decode_head.num_classes - - mm_inputs = _demo_mm_inputs(input_shape, num_classes) - - imgs = mm_inputs.pop('imgs') - - # replace the original forword with forward_dummy - model.forward = model.forward_dummy - model.eval() - traced_model = torch.jit.trace( - model, - example_inputs=imgs, - check_trace=verify, - ) - - if show: - print(traced_model.graph) - - traced_model.save(output_file) - print('Successfully exported TorchScript model: {}'.format(output_file)) - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert MMSeg to TorchScript') - parser.add_argument('config', help='test config file path') - parser.add_argument('--checkpoint', help='checkpoint file', default=None) - parser.add_argument( - '--show', action='store_true', help='show TorchScript graph') - parser.add_argument( - '--verify', action='store_true', help='verify the TorchScript model') - parser.add_argument('--output-file', type=str, default='tmp.pt') - parser.add_argument( - '--shape', - type=int, - nargs='+', - default=[512, 512], - help='input image size (height, width)') - args = parser.parse_args() - return args - - -if __name__ == '__main__': - args = parse_args() - check_torch_version() - - if len(args.shape) == 1: - input_shape = (1, 3, args.shape[0], args.shape[0]) - elif len(args.shape) == 2: - input_shape = ( - 1, - 3, - ) + tuple(args.shape) - else: - raise ValueError('invalid input shape') - - cfg = mmcv.Config.fromfile(args.config) - cfg.model.pretrained = None - - # build the model and load checkpoint - cfg.model.train_cfg = None - segmentor = build_segmentor( - cfg.model, train_cfg=None, test_cfg=cfg.get('test_cfg')) - # convert SyncBN to BN - segmentor = _convert_batchnorm(segmentor) - - if args.checkpoint: - load_checkpoint(segmentor, args.checkpoint, map_location='cpu') - - # convert the PyTorch model to LibTorch model - pytorch2libtorch( - segmentor, - input_shape, - show=args.show, - output_file=args.output_file, - verify=args.verify) diff --git a/cv/classification/repvit/pytorch/segmentation/tools/slurm_test.sh b/cv/classification/repvit/pytorch/segmentation/tools/slurm_test.sh deleted file mode 100644 index 4e6f7bf4..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/slurm_test.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash - -set -x - -PARTITION=$1 -JOB_NAME=$2 -CONFIG=$3 -CHECKPOINT=$4 -GPUS=${GPUS:-4} -GPUS_PER_NODE=${GPUS_PER_NODE:-4} -CPUS_PER_TASK=${CPUS_PER_TASK:-5} -PY_ARGS=${@:5} -SRUN_ARGS=${SRUN_ARGS:-""} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --cpus-per-task=${CPUS_PER_TASK} \ - --kill-on-bad-exit=1 \ - ${SRUN_ARGS} \ - python -u tools/test.py ${CONFIG} ${CHECKPOINT} --launcher="slurm" ${PY_ARGS} diff --git a/cv/classification/repvit/pytorch/segmentation/tools/slurm_train.sh b/cv/classification/repvit/pytorch/segmentation/tools/slurm_train.sh deleted file mode 100644 index d7d729a2..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/slurm_train.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/env bash - -set -x - -PARTITION=$1 -JOB_NAME=$2 -CONFIG=$3 -GPUS=${GPUS:-8} -GPUS_PER_NODE=${GPUS_PER_NODE:-4} -CPUS_PER_TASK=${CPUS_PER_TASK:-12} -SRUN_ARGS=${SRUN_ARGS:-""} -PY_ARGS=${@:4} - -export NCCL_P2P_DISABLE=1 -export MASTER_PORT=13579 - 
-PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --cpus-per-task=${CPUS_PER_TASK} \ - --kill-on-bad-exit=1 \ - --mem 250G \ - ${SRUN_ARGS} \ - python -u tools/train.py ${CONFIG} --launcher="slurm" ${PY_ARGS} diff --git a/cv/classification/repvit/pytorch/segmentation/tools/test.py b/cv/classification/repvit/pytorch/segmentation/tools/test.py deleted file mode 100644 index bb22090b..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/test.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os -import os.path as osp -import shutil -import time -import warnings - -import mmcv -import torch -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (get_dist_info, init_dist, load_checkpoint, - wrap_fp16_model) -from mmcv.utils import DictAction - -from mmseg.apis import multi_gpu_test, single_gpu_test -from mmseg.datasets import build_dataloader, build_dataset -from mmseg.models import build_segmentor - -import repvit -from align_resize import AlignResize - - -def parse_args(): - parser = argparse.ArgumentParser( - description='mmseg test (and eval) a model') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument( - '--work-dir', - help=('if specified, the evaluation metric results will be dumped' - 'into the directory as json')) - parser.add_argument( - '--aug-test', action='store_true', help='Use Flip and Multi scale aug') - parser.add_argument('--out', help='output result file in pickle format') - parser.add_argument( - '--format-only', - action='store_true', - help='Format the output results without perform evaluation. It is' - 'useful when you want to format the result to a specific format and ' - 'submit it to the test server') - parser.add_argument( - '--eval', - type=str, - nargs='+', - help='evaluation metrics, which depends on the dataset, e.g., "mIoU"' - ' for generic datasets, and "cityscapes" for Cityscapes') - parser.add_argument('--show', action='store_true', help='show results') - parser.add_argument( - '--show-dir', help='directory where painted images will be saved') - parser.add_argument( - '--gpu-collect', - action='store_true', - help='whether to use gpu to collect results.') - parser.add_argument( - '--tmpdir', - help='tmp directory used for collecting results from multiple ' - 'workers, available when gpu_collect is not specified') - parser.add_argument( - '--options', nargs='+', action=DictAction, help='custom options') - parser.add_argument( - '--eval-options', - nargs='+', - action=DictAction, - help='custom options for evaluation') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument( - '--opacity', - type=float, - default=0.5, - help='Opacity of painted segmentation map. 
In (0, 1] range.') - parser.add_argument('--local-rank', type=int, default=0) - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - return args - - -def main(): - args = parse_args() - - assert args.out or args.eval or args.format_only or args.show \ - or args.show_dir, \ - ('Please specify at least one operation (save/eval/format/show the ' - 'results / save the results) with the argument "--out", "--eval"' - ', "--format-only", "--show" or "--show-dir"') - - if args.eval and args.format_only: - raise ValueError('--eval and --format_only cannot be both specified') - - if args.out is not None and not args.out.endswith(('.pkl', '.pickle')): - raise ValueError('The output file must be a pkl file.') - - cfg = mmcv.Config.fromfile(args.config) - if args.options is not None: - cfg.merge_from_dict(args.options) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - if args.aug_test: - # hard code index - cfg.data.test.pipeline[1].img_ratios = [ - 0.5, 0.75, 1.0, 1.25, 1.5, 1.75 - ] - cfg.data.test.pipeline[1].flip = True - cfg.model.pretrained = None - cfg.data.test.test_mode = True - - # init distributed env first, since logger depends on the dist info. - if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - - rank, _ = get_dist_info() - # allows not to create - if args.work_dir is not None and rank == 0: - mmcv.mkdir_or_exist(osp.abspath(args.work_dir)) - timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime()) - json_file = osp.join(args.work_dir, f'eval_{timestamp}.json') - - # build the dataloader - # TODO: support multiple images per gpu (only minor changes are needed) - dataset = build_dataset(cfg.data.test) - data_loader = build_dataloader( - dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - - # build the model and load checkpoint - cfg.model.train_cfg = None - model = build_segmentor(cfg.model, test_cfg=cfg.get('test_cfg')) - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - wrap_fp16_model(model) - checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu') - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - print('"CLASSES" not found in meta, use dataset.CLASSES instead') - model.CLASSES = dataset.CLASSES - if 'PALETTE' in checkpoint.get('meta', {}): - model.PALETTE = checkpoint['meta']['PALETTE'] - else: - print('"PALETTE" not found in meta, use dataset.PALETTE instead') - model.PALETTE = dataset.PALETTE - - # clean gpu memory when starting a new evaluation. 
- torch.cuda.empty_cache() - eval_kwargs = {} if args.eval_options is None else args.eval_options - - # Deprecated - efficient_test = eval_kwargs.get('efficient_test', False) - if efficient_test: - warnings.warn( - '``efficient_test=True`` does not have effect in tools/test.py, ' - 'the evaluation and format results are CPU memory efficient by ' - 'default') - - eval_on_format_results = ( - args.eval is not None and 'cityscapes' in args.eval) - if eval_on_format_results: - assert len(args.eval) == 1, 'eval on format results is not ' \ - 'applicable for metrics other than ' \ - 'cityscapes' - if args.format_only or eval_on_format_results: - if 'imgfile_prefix' in eval_kwargs: - tmpdir = eval_kwargs['imgfile_prefix'] - else: - tmpdir = '.format_cityscapes' - eval_kwargs.setdefault('imgfile_prefix', tmpdir) - mmcv.mkdir_or_exist(tmpdir) - else: - tmpdir = None - - if not distributed: - model = MMDataParallel(model, device_ids=[0]) - results = single_gpu_test( - model, - data_loader, - args.show, - args.show_dir, - False, - args.opacity, - pre_eval=args.eval is not None and not eval_on_format_results, - format_only=args.format_only or eval_on_format_results, - format_args=eval_kwargs) - else: - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False) - results = multi_gpu_test( - model, - data_loader, - args.tmpdir, - args.gpu_collect, - False, - pre_eval=args.eval is not None and not eval_on_format_results, - format_only=args.format_only or eval_on_format_results, - format_args=eval_kwargs) - - rank, _ = get_dist_info() - if rank == 0: - if args.out: - warnings.warn( - 'The behavior of ``args.out`` has been changed since MMSeg ' - 'v0.16, the pickled outputs could be seg map as type of ' - 'np.array, pre-eval results or file paths for ' - '``dataset.format_results()``.') - print(f'\nwriting results to {args.out}') - mmcv.dump(results, args.out) - if args.eval: - eval_kwargs.update(metric=args.eval) - metric = dataset.evaluate(results, **eval_kwargs) - metric_dict = dict(config=args.config, metric=metric) - if args.work_dir is not None and rank == 0: - mmcv.dump(metric_dict, json_file, indent=4) - if tmpdir is not None and eval_on_format_results: - # remove tmp dir when cityscapes evaluation - shutil.rmtree(tmpdir) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/tools/torchserve/mmseg2torchserve.py b/cv/classification/repvit/pytorch/segmentation/tools/torchserve/mmseg2torchserve.py deleted file mode 100644 index 90636348..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/torchserve/mmseg2torchserve.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from argparse import ArgumentParser, Namespace -from pathlib import Path -from tempfile import TemporaryDirectory - -import mmcv - -try: - from model_archiver.model_packaging import package_model - from model_archiver.model_packaging_utils import ModelExportUtils -except ImportError: - package_model = None - - -def mmseg2torchserve( - config_file: str, - checkpoint_file: str, - output_folder: str, - model_name: str, - model_version: str = '1.0', - force: bool = False, -): - """Converts mmsegmentation model (config + checkpoint) to TorchServe - `.mar`. - - Args: - config_file: - In MMSegmentation config format. - The contents vary for each task repository. - checkpoint_file: - In MMSegmentation checkpoint format. - The contents vary for each task repository. 
- output_folder: - Folder where `{model_name}.mar` will be created. - The file created will be in TorchServe archive format. - model_name: - If not None, used for naming the `{model_name}.mar` file - that will be created under `output_folder`. - If None, `{Path(checkpoint_file).stem}` will be used. - model_version: - Model's version. - force: - If True, if there is an existing `{model_name}.mar` - file under `output_folder` it will be overwritten. - """ - mmcv.mkdir_or_exist(output_folder) - - config = mmcv.Config.fromfile(config_file) - - with TemporaryDirectory() as tmpdir: - config.dump(f'{tmpdir}/config.py') - - args = Namespace( - **{ - 'model_file': f'{tmpdir}/config.py', - 'serialized_file': checkpoint_file, - 'handler': f'{Path(__file__).parent}/mmseg_handler.py', - 'model_name': model_name or Path(checkpoint_file).stem, - 'version': model_version, - 'export_path': output_folder, - 'force': force, - 'requirements_file': None, - 'extra_files': None, - 'runtime': 'python', - 'archive_format': 'default' - }) - manifest = ModelExportUtils.generate_manifest_json(args) - package_model(args, manifest) - - -def parse_args(): - parser = ArgumentParser( - description='Convert mmseg models to TorchServe `.mar` format.') - parser.add_argument('config', type=str, help='config file path') - parser.add_argument('checkpoint', type=str, help='checkpoint file path') - parser.add_argument( - '--output-folder', - type=str, - required=True, - help='Folder where `{model_name}.mar` will be created.') - parser.add_argument( - '--model-name', - type=str, - default=None, - help='If not None, used for naming the `{model_name}.mar`' - 'file that will be created under `output_folder`.' - 'If None, `{Path(checkpoint_file).stem}` will be used.') - parser.add_argument( - '--model-version', - type=str, - default='1.0', - help='Number used for versioning.') - parser.add_argument( - '-f', - '--force', - action='store_true', - help='overwrite the existing `{model_name}.mar`') - args = parser.parse_args() - - return args - - -if __name__ == '__main__': - args = parse_args() - - if package_model is None: - raise ImportError('`torch-model-archiver` is required.' - 'Try: pip install torch-model-archiver') - - mmseg2torchserve(args.config, args.checkpoint, args.output_folder, - args.model_name, args.model_version, args.force) diff --git a/cv/classification/repvit/pytorch/segmentation/tools/torchserve/mmseg_handler.py b/cv/classification/repvit/pytorch/segmentation/tools/torchserve/mmseg_handler.py deleted file mode 100644 index 28fe5016..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/torchserve/mmseg_handler.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import base64 -import os - -import cv2 -import mmcv -import torch -from mmcv.cnn.utils.sync_bn import revert_sync_batchnorm -from ts.torch_handler.base_handler import BaseHandler - -from mmseg.apis import inference_segmentor, init_segmentor - - -class MMsegHandler(BaseHandler): - - def initialize(self, context): - properties = context.system_properties - self.map_location = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = torch.device(self.map_location + ':' + - str(properties.get('gpu_id')) if torch.cuda. 
- is_available() else self.map_location) - self.manifest = context.manifest - - model_dir = properties.get('model_dir') - serialized_file = self.manifest['model']['serializedFile'] - checkpoint = os.path.join(model_dir, serialized_file) - self.config_file = os.path.join(model_dir, 'config.py') - - self.model = init_segmentor(self.config_file, checkpoint, self.device) - self.model = revert_sync_batchnorm(self.model) - self.initialized = True - - def preprocess(self, data): - images = [] - - for row in data: - image = row.get('data') or row.get('body') - if isinstance(image, str): - image = base64.b64decode(image) - image = mmcv.imfrombytes(image) - images.append(image) - - return images - - def inference(self, data, *args, **kwargs): - results = [inference_segmentor(self.model, img) for img in data] - return results - - def postprocess(self, data): - output = [] - - for image_result in data: - _, buffer = cv2.imencode('.png', image_result[0].astype('uint8')) - content = buffer.tobytes() - output.append(content) - return output diff --git a/cv/classification/repvit/pytorch/segmentation/tools/torchserve/test_torchserve.py b/cv/classification/repvit/pytorch/segmentation/tools/torchserve/test_torchserve.py deleted file mode 100644 index 59752853..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/torchserve/test_torchserve.py +++ /dev/null @@ -1,57 +0,0 @@ -from argparse import ArgumentParser -from io import BytesIO - -import matplotlib.pyplot as plt -import mmcv -import requests - -from mmseg.apis import inference_segmentor, init_segmentor - - -def parse_args(): - parser = ArgumentParser( - description='Compare result of torchserve and pytorch,' - 'and visualize them.') - parser.add_argument('img', help='Image file') - parser.add_argument('config', help='Config file') - parser.add_argument('checkpoint', help='Checkpoint file') - parser.add_argument('model_name', help='The model name in the server') - parser.add_argument( - '--inference-addr', - default='127.0.0.1:8080', - help='Address and port of the inference server') - parser.add_argument( - '--result-image', - type=str, - default=None, - help='save server output in result-image') - parser.add_argument( - '--device', default='cuda:0', help='Device used for inference') - - args = parser.parse_args() - return args - - -def main(args): - url = 'http://' + args.inference_addr + '/predictions/' + args.model_name - with open(args.img, 'rb') as image: - tmp_res = requests.post(url, image) - content = tmp_res.content - if args.result_image: - with open(args.result_image, 'wb') as out_image: - out_image.write(content) - plt.imshow(mmcv.imread(args.result_image, 'grayscale')) - plt.show() - else: - plt.imshow(plt.imread(BytesIO(content))) - plt.show() - model = init_segmentor(args.config, args.checkpoint, args.device) - image = mmcv.imread(args.img) - result = inference_segmentor(model, image) - plt.imshow(result[0]) - plt.show() - - -if __name__ == '__main__': - args = parse_args() - main(args) diff --git a/cv/classification/repvit/pytorch/segmentation/tools/train.py b/cv/classification/repvit/pytorch/segmentation/tools/train.py deleted file mode 100644 index b995273b..00000000 --- a/cv/classification/repvit/pytorch/segmentation/tools/train.py +++ /dev/null @@ -1,184 +0,0 @@ -import argparse -import copy -import os -import os.path as osp -import time -import warnings - -import mmcv -import torch -from mmcv.cnn.utils import revert_sync_batchnorm -from mmcv.runner import get_dist_info, init_dist -from mmcv.utils import Config, 
DictAction, get_git_hash - -from mmseg import __version__ -from mmseg.apis import set_random_seed, train_segmentor -from mmseg.datasets import build_dataset -from mmseg.models import build_segmentor -from mmseg.utils import collect_env, get_root_logger, get_device - - -import repvit -from align_resize import AlignResize - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a segmentor') - parser.add_argument('config', help='train config file path') - parser.add_argument('--work-dir', help='the dir to save logs and models') - parser.add_argument( - '--load-from', help='the checkpoint file to load weights from') - parser.add_argument( - '--resume-from', help='the checkpoint file to resume from') - parser.add_argument( - '--no-validate', - action='store_true', - help='whether not to evaluate the checkpoint during training') - group_gpus = parser.add_mutually_exclusive_group() - group_gpus.add_argument( - '--gpus', - type=int, - help='number of gpus to use ' - '(only applicable to non-distributed training)') - group_gpus.add_argument( - '--gpu-ids', - type=int, - nargs='+', - help='ids of gpus to use ' - '(only applicable to non-distributed training)') - parser.add_argument('--seed', type=int, default=None, help='random seed') - parser.add_argument( - '--deterministic', - action='store_true', - help='whether to set deterministic options for CUDNN backend.') - parser.add_argument( - '--options', nargs='+', action=DictAction, help='custom options') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local-rank', type=int, default=0) - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - if args.options is not None: - cfg.merge_from_dict(args.options) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - - # work_dir is determined in this priority: CLI > segment in file > filename - if args.work_dir is not None: - # update configs according to CLI args if args.work_dir is not None - cfg.work_dir = args.work_dir - elif cfg.get('work_dir', None) is None: - # use config filename as default work_dir if cfg.work_dir is None - cfg.work_dir = osp.join('./work_dirs', - osp.splitext(osp.basename(args.config))[0]) - if args.load_from is not None: - cfg.load_from = args.load_from - if args.resume_from is not None: - cfg.resume_from = args.resume_from - if args.gpu_ids is not None: - cfg.gpu_ids = args.gpu_ids - else: - cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus) - - # init distributed env first, since logger depends on the dist info. 
- if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - # gpu_ids is used to calculate iter when resuming checkpoint - _, world_size = get_dist_info() - cfg.gpu_ids = range(world_size) - - # create work_dir - mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir)) - # dump config - cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config))) - # init the logger before other steps - timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime()) - log_file = osp.join(cfg.work_dir, f'{timestamp}.log') - logger = get_root_logger(log_file=log_file, log_level=cfg.log_level) - - # init the meta dict to record some important information such as - # environment info and seed, which will be logged - meta = dict() - # log env info - env_info_dict = collect_env() - env_info = '\n'.join([f'{k}: {v}' for k, v in env_info_dict.items()]) - dash_line = '-' * 60 + '\n' - logger.info('Environment info:\n' + dash_line + env_info + '\n' + - dash_line) - meta['env_info'] = env_info - - # log some basic info - logger.info(f'Distributed training: {distributed}') - logger.info(f'Config:\n{cfg.pretty_text}') - - cfg.device = get_device() - # set random seeds - if args.seed is not None: - logger.info(f'Set random seed to {args.seed}, deterministic: ' - f'{args.deterministic}') - set_random_seed(args.seed, deterministic=args.deterministic) - cfg.seed = args.seed - meta['seed'] = args.seed - meta['exp_name'] = osp.basename(args.config) - - model = build_segmentor( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - model.init_weights() - - # SyncBN is not support for DP - if not distributed: - warnings.warn( - 'SyncBN is only supported with DDP. To be compatible with DP, ' - 'we convert SyncBN to BN. 
Please use dist_train.sh which can ' - 'avoid this error.') - model = revert_sync_batchnorm(model) - - logger.info(model) - - datasets = [build_dataset(cfg.data.train)] - if len(cfg.workflow) == 2: - val_dataset = copy.deepcopy(cfg.data.val) - val_dataset.pipeline = cfg.data.train.pipeline - datasets.append(build_dataset(val_dataset)) - if cfg.checkpoint_config is not None: - # save mmseg version, config file content and class names in - # checkpoints as meta data - cfg.checkpoint_config.meta = dict( - mmseg_version=f'{__version__}+{get_git_hash()[:7]}', - config=cfg.pretty_text, - CLASSES=datasets[0].CLASSES, - PALETTE=datasets[0].PALETTE) - # add an attribute for visualization convenience - model.CLASSES = datasets[0].CLASSES - # passing checkpoint meta for saving best checkpoint - meta.update(cfg.checkpoint_config.meta) - train_segmentor( - model, - datasets, - cfg, - distributed=distributed, - validate=(not args.no_validate), - timestamp=timestamp, - meta=meta) - - -if __name__ == '__main__': - main() diff --git a/cv/classification/repvit/pytorch/segmentation/train.sh b/cv/classification/repvit/pytorch/segmentation/train.sh deleted file mode 100644 index 11c706fb..00000000 --- a/cv/classification/repvit/pytorch/segmentation/train.sh +++ /dev/null @@ -1 +0,0 @@ -./tools/dist_train.sh configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py 8 diff --git a/cv/classification/repvit/pytorch/speed_gpu.py b/cv/classification/repvit/pytorch/speed_gpu.py deleted file mode 100644 index 740c42db..00000000 --- a/cv/classification/repvit/pytorch/speed_gpu.py +++ /dev/null @@ -1,51 +0,0 @@ -import torch -import time -from timm import create_model -import model -import utils -torch.autograd.set_grad_enabled(False) - -T0 = 5 -T1 = 10 - -def throughput(name, model, device, batch_size, resolution=224): - inputs = torch.randn(batch_size, 3, resolution, resolution, device=device) - torch.cuda.empty_cache() - torch.cuda.synchronize() - start = time.time() - while time.time() - start < T0: - model(inputs) - timing = [] - torch.cuda.synchronize() - while sum(timing) < T1: - start = time.time() - model(inputs) - torch.cuda.synchronize() - timing.append(time.time() - start) - timing = torch.as_tensor(timing, dtype=torch.float32) - print(name, device, batch_size / timing.mean().item(), - 'images/s @ batch size', batch_size) - -device = "cuda:0" - -from argparse import ArgumentParser - -parser = ArgumentParser() - -parser.add_argument('--model', default='repvit_m0_9', type=str) -parser.add_argument('--resolution', default=224, type=int) -parser.add_argument('--batch-size', default=2048, type=int) - -if __name__ == "__main__": - args = parser.parse_args() - model_name = args.model - batch_size = args.batch_size - resolution = args.resolution - torch.cuda.empty_cache() - inputs = torch.randn(batch_size, 3, resolution, - resolution, device=device) - model = create_model(model_name, num_classes=1000) - utils.replace_batchnorm(model) - model.to(device) - model.eval() - throughput(model_name, model, device, batch_size, resolution=resolution) diff --git a/cv/classification/repvit/pytorch/train.sh b/cv/classification/repvit/pytorch/train.sh deleted file mode 100644 index aef1b109..00000000 --- a/cv/classification/repvit/pytorch/train.sh +++ /dev/null @@ -1 +0,0 @@ -NCCL_P2P_DISABLE=1 python -m torch.distributed.launch --nproc_per_node=8 --master_port 12346 --use_env main.py --model repvit_m0_9 --data-path ~/imagenet --dist-eval diff --git a/cv/classification/repvit/pytorch/utils.py b/cv/classification/repvit/pytorch/utils.py 
deleted file mode 100644 index 55d64846..00000000 --- a/cv/classification/repvit/pytorch/utils.py +++ /dev/null @@ -1,236 +0,0 @@ -import io -import os -import time -from collections import defaultdict, deque -import datetime - -import torch -import torch.distributed as dist - - -class SmoothedValue(object): - """Track a series of values and provide access to smoothed values over a - window or the global series average. - """ - - def __init__(self, window_size=20, fmt=None): - if fmt is None: - fmt = "{median:.4f} ({global_avg:.4f})" - self.deque = deque(maxlen=window_size) - self.total = 0.0 - self.count = 0 - self.fmt = fmt - - def update(self, value, n=1): - self.deque.append(value) - self.count += n - self.total += value * n - - def synchronize_between_processes(self): - """ - Warning: does not synchronize the deque! - """ - if not is_dist_avail_and_initialized(): - return - t = torch.tensor([self.count, self.total], - dtype=torch.float64, device='cuda') - dist.barrier() - dist.all_reduce(t) - t = t.tolist() - self.count = int(t[0]) - self.total = t[1] - - @property - def median(self): - d = torch.tensor(list(self.deque)) - return d.median().item() - - @property - def avg(self): - d = torch.tensor(list(self.deque), dtype=torch.float32) - return d.mean().item() - - @property - def global_avg(self): - return self.total / self.count - - @property - def max(self): - return max(self.deque) - - @property - def value(self): - return self.deque[-1] - - def __str__(self): - return self.fmt.format( - median=self.median, - avg=self.avg, - global_avg=self.global_avg, - max=self.max, - value=self.value) - - -class MetricLogger(object): - def __init__(self, delimiter="\t"): - self.meters = defaultdict(SmoothedValue) - self.delimiter = delimiter - - def update(self, **kwargs): - for k, v in kwargs.items(): - if isinstance(v, torch.Tensor): - v = v.item() - assert isinstance(v, (float, int)) - self.meters[k].update(v) - - def __getattr__(self, attr): - if attr in self.meters: - return self.meters[attr] - if attr in self.__dict__: - return self.__dict__[attr] - raise AttributeError("'{}' object has no attribute '{}'".format( - type(self).__name__, attr)) - - def __str__(self): - loss_str = [] - for name, meter in self.meters.items(): - loss_str.append( - "{}: {}".format(name, str(meter)) - ) - return self.delimiter.join(loss_str) - - def synchronize_between_processes(self): - for meter in self.meters.values(): - meter.synchronize_between_processes() - - def add_meter(self, name, meter): - self.meters[name] = meter - - def log_every(self, iterable, print_freq, header=None): - i = 0 - if not header: - header = '' - start_time = time.time() - end = time.time() - iter_time = SmoothedValue(fmt='{avg:.4f}') - data_time = SmoothedValue(fmt='{avg:.4f}') - space_fmt = ':' + str(len(str(len(iterable)))) + 'd' - log_msg = [ - header, - '[{0' + space_fmt + '}/{1}]', - 'eta: {eta}', - '{meters}', - 'time: {time}', - 'data: {data}' - ] - if torch.cuda.is_available(): - log_msg.append('max mem: {memory:.0f}') - log_msg = self.delimiter.join(log_msg) - MB = 1024.0 * 1024.0 - for obj in iterable: - data_time.update(time.time() - end) - yield obj - iter_time.update(time.time() - end) - if i % print_freq == 0 or i == len(iterable) - 1: - eta_seconds = iter_time.global_avg * (len(iterable) - i) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - if torch.cuda.is_available(): - print(log_msg.format( - i, len(iterable), eta=eta_string, - meters=str(self), - time=str(iter_time), data=str(data_time), - 
memory=torch.cuda.max_memory_allocated() / MB)) - else: - print(log_msg.format( - i, len(iterable), eta=eta_string, - meters=str(self), - time=str(iter_time), data=str(data_time))) - i += 1 - end = time.time() - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('{} Total time: {} ({:.4f} s / it)'.format( - header, total_time_str, total_time / len(iterable))) - -def _load_checkpoint_for_ema(model_ema, checkpoint): - """ - Workaround for ModelEma._load_checkpoint to accept an already-loaded object - """ - mem_file = io.BytesIO() - torch.save(checkpoint, mem_file) - mem_file.seek(0) - model_ema._load_checkpoint(mem_file) - -def setup_for_distributed(is_master): - """ - This function disables printing when not in master process - """ - import builtins as __builtin__ - builtin_print = __builtin__.print - - def print(*args, **kwargs): - force = kwargs.pop('force', False) - if is_master or force: - builtin_print(*args, **kwargs) - - __builtin__.print = print - -def is_dist_avail_and_initialized(): - if not dist.is_available(): - return False - if not dist.is_initialized(): - return False - return True - -def get_world_size(): - if not is_dist_avail_and_initialized(): - return 1 - return dist.get_world_size() - -def get_rank(): - if not is_dist_avail_and_initialized(): - return 0 - return dist.get_rank() - -def is_main_process(): - return get_rank() == 0 - -def save_on_master(*args, **kwargs): - if is_main_process(): - torch.save(*args, **kwargs) - -def init_distributed_mode(args): - if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ: - args.rank = int(os.environ["RANK"]) - args.world_size = int(os.environ['WORLD_SIZE']) - args.gpu = int(os.environ['LOCAL_RANK']) - elif 'SLURM_PROCID' in os.environ: - args.rank = int(os.environ['SLURM_PROCID']) - args.gpu = args.rank % torch.cuda.device_count() - else: - print('Not using distributed mode') - args.distributed = False - return - - args.distributed = True - - torch.cuda.set_device(args.gpu) - args.dist_backend = 'nccl' - print('| distributed init (rank {}): {}'.format( - args.rank, args.dist_url), flush=True) - torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url, - world_size=args.world_size, rank=args.rank) - torch.distributed.barrier() - setup_for_distributed(args.rank == 0) - - -def replace_batchnorm(net): - for child_name, child in net.named_children(): - if hasattr(child, 'fuse'): - fused = child.fuse() - setattr(net, child_name, fused) - replace_batchnorm(fused) - elif isinstance(child, torch.nn.BatchNorm2d): - setattr(net, child_name, torch.nn.Identity()) - else: - replace_batchnorm(child) \ No newline at end of file diff --git a/cv/detection/yolof/pytorch/README.md b/cv/detection/yolof/pytorch/README.md index 8f6ab063..3ca449f3 100755 --- a/cv/detection/yolof/pytorch/README.md +++ b/cv/detection/yolof/pytorch/README.md @@ -63,4 +63,4 @@ bash train_dist.sh configs/yolof/yolof_r50_c5_8x8_1x_coco.py 8 ## Reference -- [mmdetection](https://github.com/WXinlong/SOLO/tree/f4cd03b9404e3bd84ca0be45966fb61d20d2efe6) \ No newline at end of file +- [mmdetection](https://github.com/open-mmlab/mmdetection) \ No newline at end of file -- Gitee From 6441d1e9c74ba36c7b73ff4cec690197b10e9e77 Mon Sep 17 00:00:00 2001 From: "hongliang.yuan" Date: Wed, 5 Mar 2025 15:06:03 +0800 Subject: [PATCH 5/7] update yolof and delete useless code --- cv/detection/yolof/pytorch/.gitignore | 8 - cv/detection/yolof/pytorch/LICENSE | 203 -- 
cv/detection/yolof/pytorch/README.md | 29 +- .../configs/_base_/datasets/coco_detection.py | 49 - .../pytorch/configs/_base_/default_runtime.py | 27 - .../configs/_base_/schedules/schedule_1x.py | 11 - .../yolof/pytorch/configs/yolof/README.md | 35 - .../yolof/pytorch/configs/yolof/metafile.yml | 32 - .../configs/yolof/yolof_r50_c5_8x8_1x_coco.py | 111 - .../yolof/yolof_r50_c5_8x8_iter-1x_coco.py | 14 - cv/detection/yolof/pytorch/mmcv/__init__.py | 28 - .../yolof/pytorch/mmcv/cnn/__init__.py | 34 - .../yolof/pytorch/mmcv/cnn/bricks/__init__.py | 14 - .../pytorch/mmcv/cnn/bricks/activation.py | 92 - .../yolof/pytorch/mmcv/cnn/bricks/conv.py | 44 - .../pytorch/mmcv/cnn/bricks/conv_module.py | 206 -- .../yolof/pytorch/mmcv/cnn/bricks/norm.py | 144 - .../yolof/pytorch/mmcv/cnn/bricks/padding.py | 36 - .../yolof/pytorch/mmcv/cnn/bricks/plugin.py | 88 - .../yolof/pytorch/mmcv/cnn/bricks/registry.py | 16 - .../yolof/pytorch/mmcv/cnn/builder.py | 30 - cv/detection/yolof/pytorch/mmcv/cnn/resnet.py | 316 -- .../yolof/pytorch/mmcv/cnn/utils/__init__.py | 16 - .../pytorch/mmcv/cnn/utils/flops_counter.py | 599 ---- .../pytorch/mmcv/cnn/utils/fuse_conv_bn.py | 59 - .../yolof/pytorch/mmcv/cnn/utils/sync_bn.py | 59 - .../pytorch/mmcv/cnn/utils/weight_init.py | 684 ---- .../yolof/pytorch/mmcv/engine/__init__.py | 8 - .../yolof/pytorch/mmcv/engine/test.py | 202 -- .../yolof/pytorch/mmcv/fileio/__init__.py | 11 - .../yolof/pytorch/mmcv/fileio/file_client.py | 1148 ------- .../pytorch/mmcv/fileio/handlers/__init__.py | 7 - .../pytorch/mmcv/fileio/handlers/base.py | 30 - .../mmcv/fileio/handlers/json_handler.py | 36 - .../mmcv/fileio/handlers/pickle_handler.py | 28 - .../mmcv/fileio/handlers/yaml_handler.py | 24 - cv/detection/yolof/pytorch/mmcv/fileio/io.py | 151 - .../yolof/pytorch/mmcv/fileio/parse.py | 97 - .../yolof/pytorch/mmcv/image/__init__.py | 28 - .../yolof/pytorch/mmcv/image/colorspace.py | 306 -- .../yolof/pytorch/mmcv/image/geometric.py | 728 ---- cv/detection/yolof/pytorch/mmcv/image/io.py | 258 -- cv/detection/yolof/pytorch/mmcv/image/misc.py | 44 - .../yolof/pytorch/mmcv/image/photometric.py | 428 --- .../pytorch/mmcv/model_zoo/deprecated.json | 6 - .../yolof/pytorch/mmcv/model_zoo/mmcls.json | 31 - .../yolof/pytorch/mmcv/model_zoo/model.json | 50 - .../yolof/pytorch/mmcv/ops/__init__.py | 13 - .../yolof/pytorch/mmcv/ops/csrc/README.md | 169 - .../csrc/common/cuda/common_cuda_helper.hpp | 112 - .../ops/csrc/common/cuda/nms_cuda_kernel.cuh | 74 - .../cuda/sigmoid_focal_loss_cuda_kernel.cuh | 71 - .../ops/csrc/common/pytorch_cpp_helper.hpp | 24 - .../ops/csrc/common/pytorch_cuda_helper.hpp | 19 - .../ops/csrc/pytorch/cuda/focal_loss_cuda.cu | 47 - .../mmcv/ops/csrc/pytorch/cuda/nms_cuda.cu | 53 - .../mmcv/ops/csrc/pytorch/focal_loss.cpp | 67 - .../pytorch/mmcv/ops/csrc/pytorch/info.cpp | 56 - .../pytorch/mmcv/ops/csrc/pytorch/nms.cpp | 261 -- .../pytorch/mmcv/ops/csrc/pytorch/pybind.cpp | 36 - .../yolof/pytorch/mmcv/ops/focal_loss.py | 211 -- cv/detection/yolof/pytorch/mmcv/ops/info.py | 36 - cv/detection/yolof/pytorch/mmcv/ops/nms.py | 417 --- .../yolof/pytorch/mmcv/parallel/__init__.py | 13 - .../yolof/pytorch/mmcv/parallel/_functions.py | 79 - .../yolof/pytorch/mmcv/parallel/collate.py | 84 - .../pytorch/mmcv/parallel/data_container.py | 89 - .../pytorch/mmcv/parallel/data_parallel.py | 89 - .../pytorch/mmcv/parallel/distributed.py | 112 - .../mmcv/parallel/distributed_deprecated.py | 70 - .../yolof/pytorch/mmcv/parallel/registry.py | 8 - .../pytorch/mmcv/parallel/scatter_gather.py | 59 - 
.../yolof/pytorch/mmcv/parallel/utils.py | 20 - .../yolof/pytorch/mmcv/runner/__init__.py | 47 - .../yolof/pytorch/mmcv/runner/base_module.py | 195 -- .../yolof/pytorch/mmcv/runner/base_runner.py | 542 --- .../yolof/pytorch/mmcv/runner/builder.py | 24 - .../yolof/pytorch/mmcv/runner/checkpoint.py | 707 ---- .../mmcv/runner/default_constructor.py | 44 - .../yolof/pytorch/mmcv/runner/dist_utils.py | 164 - .../pytorch/mmcv/runner/epoch_based_runner.py | 187 -- .../yolof/pytorch/mmcv/runner/fp16_utils.py | 410 --- .../pytorch/mmcv/runner/hooks/__init__.py | 29 - .../pytorch/mmcv/runner/hooks/checkpoint.py | 167 - .../pytorch/mmcv/runner/hooks/closure.py | 11 - .../yolof/pytorch/mmcv/runner/hooks/ema.py | 89 - .../pytorch/mmcv/runner/hooks/evaluation.py | 509 --- .../yolof/pytorch/mmcv/runner/hooks/hook.py | 92 - .../pytorch/mmcv/runner/hooks/iter_timer.py | 18 - .../mmcv/runner/hooks/logger/__init__.py | 15 - .../pytorch/mmcv/runner/hooks/logger/base.py | 166 - .../mmcv/runner/hooks/logger/dvclive.py | 58 - .../mmcv/runner/hooks/logger/mlflow.py | 78 - .../mmcv/runner/hooks/logger/neptune.py | 82 - .../pytorch/mmcv/runner/hooks/logger/pavi.py | 117 - .../mmcv/runner/hooks/logger/tensorboard.py | 57 - .../pytorch/mmcv/runner/hooks/logger/text.py | 255 -- .../pytorch/mmcv/runner/hooks/logger/wandb.py | 56 - .../pytorch/mmcv/runner/hooks/lr_updater.py | 670 ---- .../yolof/pytorch/mmcv/runner/hooks/memory.py | 25 - .../mmcv/runner/hooks/momentum_updater.py | 493 --- .../pytorch/mmcv/runner/hooks/optimizer.py | 508 --- .../pytorch/mmcv/runner/hooks/profiler.py | 180 - .../pytorch/mmcv/runner/hooks/sampler_seed.py | 20 - .../pytorch/mmcv/runner/hooks/sync_buffer.py | 22 - .../pytorch/mmcv/runner/iter_based_runner.py | 273 -- .../yolof/pytorch/mmcv/runner/log_buffer.py | 41 - .../pytorch/mmcv/runner/optimizer/__init__.py | 9 - .../pytorch/mmcv/runner/optimizer/builder.py | 44 - .../runner/optimizer/default_constructor.py | 250 -- .../yolof/pytorch/mmcv/runner/priority.py | 60 - .../yolof/pytorch/mmcv/runner/utils.py | 93 - .../yolof/pytorch/mmcv/utils/__init__.py | 69 - .../yolof/pytorch/mmcv/utils/config.py | 689 ---- cv/detection/yolof/pytorch/mmcv/utils/env.py | 95 - .../yolof/pytorch/mmcv/utils/ext_loader.py | 71 - .../yolof/pytorch/mmcv/utils/logging.py | 110 - cv/detection/yolof/pytorch/mmcv/utils/misc.py | 377 --- .../yolof/pytorch/mmcv/utils/parrots_jit.py | 41 - .../pytorch/mmcv/utils/parrots_wrapper.py | 107 - cv/detection/yolof/pytorch/mmcv/utils/path.py | 101 - .../yolof/pytorch/mmcv/utils/progressbar.py | 208 -- .../yolof/pytorch/mmcv/utils/registry.py | 315 -- .../yolof/pytorch/mmcv/utils/testing.py | 140 - .../yolof/pytorch/mmcv/utils/timer.py | 118 - .../yolof/pytorch/mmcv/utils/trace.py | 23 - .../yolof/pytorch/mmcv/utils/version_utils.py | 90 - cv/detection/yolof/pytorch/mmcv/version.py | 35 - cv/detection/yolof/pytorch/mmdet/__init__.py | 29 - .../yolof/pytorch/mmdet/apis/__init__.py | 12 - cv/detection/yolof/pytorch/mmdet/apis/test.py | 209 -- .../yolof/pytorch/mmdet/apis/train.py | 244 -- .../yolof/pytorch/mmdet/core/__init__.py | 11 - .../pytorch/mmdet/core/anchor/__init__.py | 12 - .../mmdet/core/anchor/anchor_generator.py | 866 ----- .../pytorch/mmdet/core/anchor/builder.py | 19 - .../yolof/pytorch/mmdet/core/anchor/utils.py | 72 - .../yolof/pytorch/mmdet/core/bbox/__init__.py | 24 - .../mmdet/core/bbox/assigners/__init__.py | 9 - .../core/bbox/assigners/assign_result.py | 206 -- .../core/bbox/assigners/base_assigner.py | 10 - .../core/bbox/assigners/uniform_assigner.py | 135 
- .../yolof/pytorch/mmdet/core/bbox/builder.py | 21 - .../pytorch/mmdet/core/bbox/coder/__init__.py | 6 - .../mmdet/core/bbox/coder/base_bbox_coder.py | 18 - .../core/bbox/coder/delta_xywh_bbox_coder.py | 392 --- .../core/bbox/iou_calculators/__init__.py | 5 - .../core/bbox/iou_calculators/builder.py | 9 - .../bbox/iou_calculators/iou2d_calculator.py | 261 -- .../mmdet/core/bbox/samplers/__init__.py | 7 - .../mmdet/core/bbox/samplers/base_sampler.py | 102 - .../core/bbox/samplers/pseudo_sampler.py | 42 - .../core/bbox/samplers/sampling_result.py | 153 - .../pytorch/mmdet/core/bbox/transforms.py | 270 -- .../pytorch/mmdet/core/evaluation/__init__.py | 19 - .../mmdet/core/evaluation/bbox_overlaps.py | 65 - .../mmdet/core/evaluation/class_names.py | 332 -- .../mmdet/core/evaluation/eval_hooks.py | 140 - .../pytorch/mmdet/core/evaluation/mean_ap.py | 782 ----- .../mmdet/core/evaluation/panoptic_utils.py | 6 - .../pytorch/mmdet/core/evaluation/recall.py | 197 -- .../pytorch/mmdet/core/optimizers/__init__.py | 9 - .../pytorch/mmdet/core/optimizers/builder.py | 33 - .../layer_decay_optimizer_constructor.py | 154 - .../mmdet/core/post_processing/__init__.py | 10 - .../mmdet/core/post_processing/bbox_nms.py | 171 - .../mmdet/core/post_processing/matrix_nms.py | 121 - .../mmdet/core/post_processing/merge_augs.py | 154 - .../pytorch/mmdet/core/utils/__init__.py | 13 - .../pytorch/mmdet/core/utils/dist_utils.py | 193 -- .../yolof/pytorch/mmdet/core/utils/misc.py | 208 -- .../yolof/pytorch/mmdet/datasets/__init__.py | 19 - .../mmdet/datasets/api_wrappers/__init__.py | 7 - .../mmdet/datasets/api_wrappers/coco_api.py | 47 - .../api_wrappers/panoptic_evaluation.py | 228 -- .../yolof/pytorch/mmdet/datasets/builder.py | 215 -- .../yolof/pytorch/mmdet/datasets/coco.py | 649 ---- .../yolof/pytorch/mmdet/datasets/custom.py | 410 --- .../mmdet/datasets/dataset_wrappers.py | 456 --- .../mmdet/datasets/pipelines/__init__.py | 31 - .../mmdet/datasets/pipelines/auto_augment.py | 894 ----- .../mmdet/datasets/pipelines/compose.py | 55 - .../mmdet/datasets/pipelines/formating.py | 9 - .../mmdet/datasets/pipelines/formatting.py | 392 --- .../mmdet/datasets/pipelines/instaboost.py | 118 - .../mmdet/datasets/pipelines/loading.py | 643 ---- .../mmdet/datasets/pipelines/test_time_aug.py | 121 - .../mmdet/datasets/pipelines/transforms.py | 2919 ----------------- .../mmdet/datasets/samplers/__init__.py | 10 - .../datasets/samplers/class_aware_sampler.py | 176 - .../datasets/samplers/distributed_sampler.py | 54 - .../mmdet/datasets/samplers/group_sampler.py | 148 - .../datasets/samplers/infinite_sampler.py | 186 -- .../yolof/pytorch/mmdet/datasets/utils.py | 167 - .../yolof/pytorch/mmdet/models/__init__.py | 16 - .../mmdet/models/backbones/__init__.py | 6 - .../pytorch/mmdet/models/backbones/resnet.py | 672 ---- .../yolof/pytorch/mmdet/models/builder.py | 59 - .../mmdet/models/dense_heads/__init__.py | 3 - .../mmdet/models/dense_heads/anchor_head.py | 542 --- .../models/dense_heads/base_dense_head.py | 526 --- .../models/dense_heads/dense_test_mixins.py | 206 -- .../mmdet/models/dense_heads/yolof_head.py | 416 --- .../mmdet/models/detectors/__init__.py | 6 - .../pytorch/mmdet/models/detectors/base.py | 360 -- .../mmdet/models/detectors/single_stage.py | 171 - .../pytorch/mmdet/models/detectors/yolof.py | 19 - .../pytorch/mmdet/models/losses/__init__.py | 13 - .../pytorch/mmdet/models/losses/accuracy.py | 79 - .../pytorch/mmdet/models/losses/focal_loss.py | 244 -- .../pytorch/mmdet/models/losses/iou_loss.py | 474 --- 
.../pytorch/mmdet/models/losses/utils.py | 105 - .../pytorch/mmdet/models/necks/__init__.py | 7 - .../mmdet/models/necks/dilated_encoder.py | 109 - .../pytorch/mmdet/models/utils/__init__.py | 5 - .../pytorch/mmdet/models/utils/res_layer.py | 190 -- .../yolof/pytorch/mmdet/utils/__init__.py | 17 - .../yolof/pytorch/mmdet/utils/collect_env.py | 17 - .../pytorch/mmdet/utils/compat_config.py | 139 - .../pytorch/mmdet/utils/contextmanagers.py | 122 - .../yolof/pytorch/mmdet/utils/logger.py | 65 - .../yolof/pytorch/mmdet/utils/memory.py | 213 -- .../yolof/pytorch/mmdet/utils/misc.py | 76 - .../pytorch/mmdet/utils/replace_cfg_vals.py | 70 - .../yolof/pytorch/mmdet/utils/setup_env.py | 53 - .../yolof/pytorch/mmdet/utils/split_batch.py | 45 - .../pytorch/mmdet/utils/util_distribution.py | 74 - .../yolof/pytorch/mmdet/utils/util_mixins.py | 105 - cv/detection/yolof/pytorch/mmdet/version.py | 19 - cv/detection/yolof/pytorch/requirements.txt | 10 - cv/detection/yolof/pytorch/setup.cfg | 21 - cv/detection/yolof/pytorch/setup.py | 353 -- cv/detection/yolof/pytorch/train.py | 242 -- cv/detection/yolof/pytorch/train.sh | 10 - cv/detection/yolof/pytorch/train_dist.sh | 22 - 235 files changed, 19 insertions(+), 37877 deletions(-) delete mode 100755 cv/detection/yolof/pytorch/.gitignore delete mode 100755 cv/detection/yolof/pytorch/LICENSE delete mode 100755 cv/detection/yolof/pytorch/configs/_base_/datasets/coco_detection.py delete mode 100755 cv/detection/yolof/pytorch/configs/_base_/default_runtime.py delete mode 100755 cv/detection/yolof/pytorch/configs/_base_/schedules/schedule_1x.py delete mode 100755 cv/detection/yolof/pytorch/configs/yolof/README.md delete mode 100755 cv/detection/yolof/pytorch/configs/yolof/metafile.yml delete mode 100755 cv/detection/yolof/pytorch/configs/yolof/yolof_r50_c5_8x8_1x_coco.py delete mode 100755 cv/detection/yolof/pytorch/configs/yolof/yolof_r50_c5_8x8_iter-1x_coco.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/bricks/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/bricks/activation.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/bricks/conv.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/bricks/conv_module.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/bricks/norm.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/bricks/padding.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/bricks/plugin.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/bricks/registry.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/builder.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/resnet.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/utils/__init__.py delete mode 100644 cv/detection/yolof/pytorch/mmcv/cnn/utils/flops_counter.py delete mode 100644 cv/detection/yolof/pytorch/mmcv/cnn/utils/fuse_conv_bn.py delete mode 100644 cv/detection/yolof/pytorch/mmcv/cnn/utils/sync_bn.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/cnn/utils/weight_init.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/engine/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/engine/test.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/fileio/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/fileio/file_client.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/fileio/handlers/__init__.py delete mode 100755 
cv/detection/yolof/pytorch/mmcv/fileio/handlers/base.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/fileio/handlers/json_handler.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/fileio/handlers/pickle_handler.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/fileio/handlers/yaml_handler.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/fileio/io.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/fileio/parse.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/image/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/image/colorspace.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/image/geometric.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/image/io.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/image/misc.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/image/photometric.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/model_zoo/deprecated.json delete mode 100755 cv/detection/yolof/pytorch/mmcv/model_zoo/mmcls.json delete mode 100644 cv/detection/yolof/pytorch/mmcv/model_zoo/model.json delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/README.md delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/sigmoid_focal_loss_cuda_kernel.cuh delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/cuda/focal_loss_cuda.cu delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/cuda/nms_cuda.cu delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/focal_loss.cpp delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/info.cpp delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/nms.cpp delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/pybind.cpp delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/focal_loss.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/info.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/ops/nms.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/parallel/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/parallel/_functions.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/parallel/collate.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/parallel/data_container.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/parallel/data_parallel.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/parallel/distributed.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/parallel/distributed_deprecated.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/parallel/registry.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/parallel/scatter_gather.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/parallel/utils.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/base_module.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/base_runner.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/builder.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/checkpoint.py delete mode 100755 
cv/detection/yolof/pytorch/mmcv/runner/default_constructor.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/dist_utils.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/epoch_based_runner.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/fp16_utils.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/checkpoint.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/closure.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/ema.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/evaluation.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/hook.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/iter_timer.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/base.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/dvclive.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/mlflow.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/neptune.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/pavi.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/tensorboard.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/text.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/wandb.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/lr_updater.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/memory.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/momentum_updater.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/optimizer.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/profiler.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/sampler_seed.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/hooks/sync_buffer.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/iter_based_runner.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/log_buffer.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/optimizer/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/optimizer/builder.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/optimizer/default_constructor.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/priority.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/runner/utils.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/config.py delete mode 100644 cv/detection/yolof/pytorch/mmcv/utils/env.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/ext_loader.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/logging.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/misc.py delete mode 100644 cv/detection/yolof/pytorch/mmcv/utils/parrots_jit.py delete mode 100644 cv/detection/yolof/pytorch/mmcv/utils/parrots_wrapper.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/path.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/progressbar.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/registry.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/testing.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/timer.py delete mode 100644 
cv/detection/yolof/pytorch/mmcv/utils/trace.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/utils/version_utils.py delete mode 100755 cv/detection/yolof/pytorch/mmcv/version.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/apis/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/apis/test.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/apis/train.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/anchor/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/anchor/anchor_generator.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/anchor/builder.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/anchor/utils.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/assign_result.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/base_assigner.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/uniform_assigner.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/builder.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/coder/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/coder/base_bbox_coder.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/builder.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/iou2d_calculator.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/base_sampler.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/pseudo_sampler.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/sampling_result.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/bbox/transforms.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/evaluation/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/evaluation/bbox_overlaps.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/evaluation/class_names.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/evaluation/eval_hooks.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/evaluation/mean_ap.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/evaluation/panoptic_utils.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/evaluation/recall.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/optimizers/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/optimizers/builder.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/optimizers/layer_decay_optimizer_constructor.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/post_processing/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/post_processing/bbox_nms.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/post_processing/matrix_nms.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/post_processing/merge_augs.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/utils/__init__.py delete mode 
100755 cv/detection/yolof/pytorch/mmdet/core/utils/dist_utils.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/core/utils/misc.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/coco_api.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/panoptic_evaluation.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/builder.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/coco.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/custom.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/dataset_wrappers.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/pipelines/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/pipelines/auto_augment.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/pipelines/compose.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/pipelines/formating.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/pipelines/formatting.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/pipelines/instaboost.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/pipelines/loading.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/pipelines/test_time_aug.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/pipelines/transforms.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/samplers/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/samplers/class_aware_sampler.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/samplers/distributed_sampler.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/samplers/group_sampler.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/samplers/infinite_sampler.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/datasets/utils.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/backbones/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/backbones/resnet.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/builder.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/dense_heads/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/dense_heads/anchor_head.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/dense_heads/base_dense_head.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/dense_heads/dense_test_mixins.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/dense_heads/yolof_head.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/detectors/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/detectors/base.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/detectors/single_stage.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/detectors/yolof.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/losses/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/losses/accuracy.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/losses/focal_loss.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/losses/iou_loss.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/losses/utils.py delete mode 100755 
cv/detection/yolof/pytorch/mmdet/models/necks/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/necks/dilated_encoder.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/utils/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/models/utils/res_layer.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/__init__.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/collect_env.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/compat_config.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/contextmanagers.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/logger.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/memory.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/misc.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/replace_cfg_vals.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/setup_env.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/split_batch.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/utils/util_distribution.py delete mode 100644 cv/detection/yolof/pytorch/mmdet/utils/util_mixins.py delete mode 100755 cv/detection/yolof/pytorch/mmdet/version.py delete mode 100755 cv/detection/yolof/pytorch/requirements.txt delete mode 100755 cv/detection/yolof/pytorch/setup.cfg delete mode 100755 cv/detection/yolof/pytorch/setup.py delete mode 100755 cv/detection/yolof/pytorch/train.py delete mode 100755 cv/detection/yolof/pytorch/train.sh delete mode 100755 cv/detection/yolof/pytorch/train_dist.sh diff --git a/cv/detection/yolof/pytorch/.gitignore b/cv/detection/yolof/pytorch/.gitignore deleted file mode 100755 index 55f5e80d..00000000 --- a/cv/detection/yolof/pytorch/.gitignore +++ /dev/null @@ -1,8 +0,0 @@ -**/__pycache__/ -data -work_dirs -mmcv_full.egg-info -build -*.so -mmcv/_ext.cpython-36m-x86_64-linux-gnu.so -nohup.out diff --git a/cv/detection/yolof/pytorch/LICENSE b/cv/detection/yolof/pytorch/LICENSE deleted file mode 100755 index 1bfc23e4..00000000 --- a/cv/detection/yolof/pytorch/LICENSE +++ /dev/null @@ -1,203 +0,0 @@ -Copyright 2018-2023 OpenMMLab. All rights reserved. - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. 
- - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright 2018-2023 OpenMMLab. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/cv/detection/yolof/pytorch/README.md b/cv/detection/yolof/pytorch/README.md index 3ca449f3..83149b8c 100755 --- a/cv/detection/yolof/pytorch/README.md +++ b/cv/detection/yolof/pytorch/README.md @@ -7,8 +7,16 @@ This paper revisits feature pyramids networks (FPN) for one-stage detectors and ## Step 1: Installing packages ```bash -pip3 install -r requirements.txt -MMCV_WITH_OPS=1 python3 setup.py build && cp build/lib.linux*/mmcv/_ext.cpython* mmcv +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx + +# install MMDetection +git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1 +cd mmdetection +pip install -v -e . 
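+
+# Optional sanity check (assumes mmcv/mmengine dependencies are already available
+# in the environment): confirm that MMDetection imports and reports its version.
+python3 -c "import mmdet; print(mmdet.__version__)"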
``` ## Step 2: Preparing datasets @@ -39,26 +47,27 @@ coco2017 ```bash mkdir -p data ln -s /path/to/coco2017 data/coco + +# Prepare resnet50_caffe-788b5fa3.pth, skip this if fast network +mkdir -p /root/.cache/torch/hub/checkpoints/ +wget -O /root/.cache/torch/hub/checkpoints/resnet50_caffe-788b5fa3.pth https://download.openmmlab.com/pretrain/third_party/resnet50_caffe-788b5fa3.pth ``` -## Step 2: Training +## Step 3: Training #### Training on a single GPU ```bash -bash train.sh +python3 tools/train.py configs/yolof/yolof_r50-c5_8xb8-1x_coco.py ``` #### Training on multiple GPUs ```bash -bash train_dist.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments] -``` +sed -i 's/python /python3 /g' tools/dist_train.sh -for example, - -```bash -bash train_dist.sh configs/yolof/yolof_r50_c5_8x8_1x_coco.py 8 +# Multiple GPUs on one machine +bash tools/dist_train.sh configs/yolof/yolof_r50-c5_8xb8-1x_coco.py 8 ``` ## Reference diff --git a/cv/detection/yolof/pytorch/configs/_base_/datasets/coco_detection.py b/cv/detection/yolof/pytorch/configs/_base_/datasets/coco_detection.py deleted file mode 100755 index 149f590b..00000000 --- a/cv/detection/yolof/pytorch/configs/_base_/datasets/coco_detection.py +++ /dev/null @@ -1,49 +0,0 @@ -# dataset settings -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline)) -evaluation = dict(interval=1, metric='bbox') diff --git a/cv/detection/yolof/pytorch/configs/_base_/default_runtime.py b/cv/detection/yolof/pytorch/configs/_base_/default_runtime.py deleted file mode 100755 index 5b0b1452..00000000 --- a/cv/detection/yolof/pytorch/configs/_base_/default_runtime.py +++ /dev/null @@ -1,27 +0,0 @@ -checkpoint_config = dict(interval=1) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - # dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -custom_hooks = [dict(type='NumClassCheckHook')] - -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] - -# disable opencv multithreading to avoid system being overloaded -opencv_num_threads = 0 -# set multi-process start method as `fork` to speed 
up the training -mp_start_method = 'fork' - -# Default setting for scaling LR automatically -# - `enable` means enable scaling LR automatically -# or not by default. -# - `base_batch_size` = (8 GPUs) x (2 samples per GPU). -auto_scale_lr = dict(enable=False, base_batch_size=16) diff --git a/cv/detection/yolof/pytorch/configs/_base_/schedules/schedule_1x.py b/cv/detection/yolof/pytorch/configs/_base_/schedules/schedule_1x.py deleted file mode 100755 index 13b3783c..00000000 --- a/cv/detection/yolof/pytorch/configs/_base_/schedules/schedule_1x.py +++ /dev/null @@ -1,11 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[8, 11]) -runner = dict(type='EpochBasedRunner', max_epochs=12) diff --git a/cv/detection/yolof/pytorch/configs/yolof/README.md b/cv/detection/yolof/pytorch/configs/yolof/README.md deleted file mode 100755 index e88da022..00000000 --- a/cv/detection/yolof/pytorch/configs/yolof/README.md +++ /dev/null @@ -1,35 +0,0 @@ -# YOLOF - -> [You Only Look One-level Feature](https://arxiv.org/abs/2103.09460) - - - -## Abstract - -This paper revisits feature pyramids networks (FPN) for one-stage detectors and points out that the success of FPN is due to its divide-and-conquer solution to the optimization problem in object detection rather than multi-scale feature fusion. From the perspective of optimization, we introduce an alternative way to address the problem instead of adopting the complex feature pyramids - {\\em utilizing only one-level feature for detection}. Based on the simple and efficient solution, we present You Only Look One-level Feature (YOLOF). In our method, two key components, Dilated Encoder and Uniform Matching, are proposed and bring considerable improvements. Extensive experiments on the COCO benchmark prove the effectiveness of the proposed model. Our YOLOF achieves comparable results with its feature pyramids counterpart RetinaNet while being 2.5× faster. Without transformer layers, YOLOF can match the performance of DETR in a single-level feature manner with 7× less training epochs. With an image size of 608×608, YOLOF achieves 44.3 mAP running at 60 fps on 2080Ti, which is 13% faster than YOLOv4. - -

- -## Results and Models - -| Backbone | Style | Epoch | Lr schd | Mem (GB) | box AP | Config | Download | -| :------: | :---: | :---: | :-----: | :------: | :----: | :-------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| R-50-C5 | caffe | Y | 1x | 8.3 | 37.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/yolof/yolof_r50_c5_8x8_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/yolof/yolof_r50_c5_8x8_1x_coco/yolof_r50_c5_8x8_1x_coco_20210425_024427-8e864411.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/yolof/yolof_r50_c5_8x8_1x_coco/yolof_r50_c5_8x8_1x_coco_20210425_024427.log.json) | - -**Note**: - -1. We find that the performance is unstable and may fluctuate by about 0.3 mAP. mAP 37.4 ~ 37.7 is acceptable in YOLOF_R_50_C5_1x. Such fluctuation can also be found in the [original implementation](https://github.com/chensnathan/YOLOF). -2. In addition to instability issues, sometimes there are large loss fluctuations and NAN, so there may still be problems with this project, which will be improved subsequently. - -## Citation - -```latex -@inproceedings{chen2021you, - title={You Only Look One-level Feature}, - author={Chen, Qiang and Wang, Yingming and Yang, Tong and Zhang, Xiangyu and Cheng, Jian and Sun, Jian}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - year={2021} -} -``` diff --git a/cv/detection/yolof/pytorch/configs/yolof/metafile.yml b/cv/detection/yolof/pytorch/configs/yolof/metafile.yml deleted file mode 100755 index 9436fee2..00000000 --- a/cv/detection/yolof/pytorch/configs/yolof/metafile.yml +++ /dev/null @@ -1,32 +0,0 @@ -Collections: - - Name: YOLOF - Metadata: - Training Data: COCO - Training Techniques: - - SGD with Momentum - - Weight Decay - Training Resources: 8x V100 GPUs - Architecture: - - Dilated Encoder - - ResNet - Paper: - URL: https://arxiv.org/abs/2103.09460 - Title: 'You Only Look One-level Feature' - README: configs/yolof/README.md - Code: - URL: https://github.com/open-mmlab/mmdetection/blob/v2.12.0/mmdet/models/detectors/yolof.py#L6 - Version: v2.12.0 - -Models: - - Name: yolof_r50_c5_8x8_1x_coco - In Collection: YOLOF - Config: configs/yolof/yolof_r50_c5_8x8_1x_coco.py - Metadata: - Training Memory (GB): 8.3 - Epochs: 12 - Results: - - Task: Object Detection - Dataset: COCO - Metrics: - box AP: 37.5 - Weights: https://download.openmmlab.com/mmdetection/v2.0/yolof/yolof_r50_c5_8x8_1x_coco/yolof_r50_c5_8x8_1x_coco_20210425_024427-8e864411.pth diff --git a/cv/detection/yolof/pytorch/configs/yolof/yolof_r50_c5_8x8_1x_coco.py b/cv/detection/yolof/pytorch/configs/yolof/yolof_r50_c5_8x8_1x_coco.py deleted file mode 100755 index d0b9649c..00000000 --- a/cv/detection/yolof/pytorch/configs/yolof/yolof_r50_c5_8x8_1x_coco.py +++ /dev/null @@ -1,111 +0,0 @@ -_base_ = [ - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - type='YOLOF', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(3, ), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe', - init_cfg=dict( - type='Pretrained', - 
checkpoint='open-mmlab://detectron/resnet50_caffe')), - neck=dict( - type='DilatedEncoder', - in_channels=2048, - out_channels=512, - block_mid_channels=128, - num_residual_blocks=4, - block_dilations=[2, 4, 6, 8]), - bbox_head=dict( - type='YOLOFHead', - num_classes=80, - in_channels=512, - reg_decoded_bbox=True, - anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[1, 2, 4, 8, 16], - strides=[32]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1., 1., 1., 1.], - add_ctr_clamp=True, - ctr_clamp=32), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='GIoULoss', loss_weight=1.0)), - # training and testing settings - train_cfg=dict( - assigner=dict( - type='UniformAssigner', pos_ignore_thr=0.15, neg_ignore_thr=0.7), - allowed_border=-1, - pos_weight=-1, - debug=False), - test_cfg=dict( - nms_pre=1000, - min_bbox_size=0, - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.6), - max_per_img=100)) -# optimizer -optimizer = dict( - type='SGD', - lr=0.12, - momentum=0.9, - weight_decay=0.0001, - paramwise_cfg=dict( - norm_decay_mult=0., custom_keys={'backbone': dict(lr_mult=1. / 3)})) -lr_config = dict(warmup_iters=1500, warmup_ratio=0.00066667) - -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='RandomShift', shift_ratio=0.5, max_shift_px=32), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=8, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) - -# NOTE: `auto_scale_lr` is for automatically scaling LR, -# USER SHOULD NOT CHANGE ITS VALUES. -# base_batch_size = (8 GPUs) x (8 samples per GPU) -auto_scale_lr = dict(base_batch_size=64) diff --git a/cv/detection/yolof/pytorch/configs/yolof/yolof_r50_c5_8x8_iter-1x_coco.py b/cv/detection/yolof/pytorch/configs/yolof/yolof_r50_c5_8x8_iter-1x_coco.py deleted file mode 100755 index c95c02da..00000000 --- a/cv/detection/yolof/pytorch/configs/yolof/yolof_r50_c5_8x8_iter-1x_coco.py +++ /dev/null @@ -1,14 +0,0 @@ -_base_ = './yolof_r50_c5_8x8_1x_coco.py' - -# We implemented the iter-based config according to the source code. -# COCO dataset has 117266 images after filtering. We use 8 gpu and -# 8 batch size training, so 22500 is equivalent to -# 22500/(117266/(8x8))=12.3 epoch, 15000 is equivalent to 8.2 epoch, -# 20000 is equivalent to 10.9 epoch. Due to lr(0.12) is large, -# the iter-based and epoch-based setting have about 0.2 difference on -# the mAP evaluation value. 
-lr_config = dict(step=[15000, 20000]) -runner = dict(_delete_=True, type='IterBasedRunner', max_iters=22500) -checkpoint_config = dict(interval=2500) -evaluation = dict(interval=4500) -log_config = dict(interval=20) diff --git a/cv/detection/yolof/pytorch/mmcv/__init__.py b/cv/detection/yolof/pytorch/mmcv/__init__.py deleted file mode 100755 index 35a21951..00000000 --- a/cv/detection/yolof/pytorch/mmcv/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. -# flake8: noqa -from .fileio import * -from .image import * -from .utils import * -from .version import * -# from .video import * -# from .visualization import * - -# The following modules are not imported to this level, so mmcv may be used -# without PyTorch. -# - runner -# - parallel -# - op diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/__init__.py b/cv/detection/yolof/pytorch/mmcv/cnn/__init__.py deleted file mode 100755 index 67e87717..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# from .alexnet import AlexNet -# yapf: disable -from .builder import MODELS, build_model_from_cfg -# yapf: enable -from .resnet import ResNet, make_res_layer -from .bricks import (ConvModule, - build_activation_layer, build_conv_layer, - build_norm_layer, build_plugin_layer, - is_norm) -from .utils import (INITIALIZERS, Caffe2XavierInit, ConstantInit, KaimingInit, - NormalInit, PretrainedInit, TruncNormalInit, UniformInit, - XavierInit, bias_init_with_prob, caffe2_xavier_init, - constant_init, - initialize, kaiming_init, normal_init, trunc_normal_init, - uniform_init, xavier_init) - -__all__ = [ - 'AlexNet', 'VGG', 'make_vgg_layer', 'ResNet', 'make_res_layer', - 'constant_init', 'xavier_init', 'normal_init', 'trunc_normal_init', - 'uniform_init', 'kaiming_init', 'caffe2_xavier_init', - 'bias_init_with_prob', 'ConvModule', 'build_activation_layer', - 'build_conv_layer', 'build_norm_layer', 'build_padding_layer', - 'build_upsample_layer', 'build_plugin_layer', 'is_norm', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'HSigmoid', 'Swish', 'HSwish', - 'GeneralizedAttention', 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', - 'PADDING_LAYERS', 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', - 'get_model_complexity_info', 'conv_ws_2d', 'ConvAWS2d', 'ConvWS2d', - 'fuse_conv_bn', 'DepthwiseSeparableConvModule', 'Linear', 'Conv2d', - 'ConvTranspose2d', 'MaxPool2d', 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', - 'initialize', 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit', - 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit', - 'Caffe2XavierInit', 'MODELS', 'build_model_from_cfg' -] diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/__init__.py b/cv/detection/yolof/pytorch/mmcv/cnn/bricks/__init__.py deleted file mode 100755 index 9093fedb..00000000 
--- a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -from .activation import build_activation_layer -from .conv import build_conv_layer -from .conv_module import ConvModule -from .norm import build_norm_layer, is_norm -from .plugin import build_plugin_layer - -__all__ = [ - 'ConvModule', 'build_activation_layer', 'build_conv_layer', - 'build_norm_layer', - 'build_plugin_layer', 'is_norm' -] diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/activation.py b/cv/detection/yolof/pytorch/mmcv/cnn/bricks/activation.py deleted file mode 100755 index 79f19883..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/activation.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from mmcv.utils import TORCH_VERSION, build_from_cfg, digit_version -from .registry import ACTIVATION_LAYERS - -for module in [ - nn.ReLU, nn.LeakyReLU, nn.PReLU, nn.RReLU, nn.ReLU6, nn.ELU, - nn.Sigmoid, nn.Tanh -]: - ACTIVATION_LAYERS.register_module(module=module) - - -@ACTIVATION_LAYERS.register_module(name='Clip') -@ACTIVATION_LAYERS.register_module() -class Clamp(nn.Module): - """Clamp activation layer. - - This activation function is to clamp the feature map value within - :math:`[min, max]`. More details can be found in ``torch.clamp()``. - - Args: - min (Number | optional): Lower-bound of the range to be clamped to. - Default to -1. - max (Number | optional): Upper-bound of the range to be clamped to. - Default to 1. - """ - - def __init__(self, min=-1., max=1.): - super(Clamp, self).__init__() - self.min = min - self.max = max - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): The input tensor. - - Returns: - torch.Tensor: Clamped tensor. - """ - return torch.clamp(x, min=self.min, max=self.max) - - -class GELU(nn.Module): - r"""Applies the Gaussian Error Linear Units function: - - .. math:: - \text{GELU}(x) = x * \Phi(x) - where :math:`\Phi(x)` is the Cumulative Distribution Function for - Gaussian Distribution. - - Shape: - - Input: :math:`(N, *)` where `*` means, any number of additional - dimensions - - Output: :math:`(N, *)`, same shape as the input - - .. image:: scripts/activation_images/GELU.png - - Examples:: - - >>> m = nn.GELU() - >>> input = torch.randn(2) - >>> output = m(input) - """ - - def forward(self, input): - return F.gelu(input) - - -if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.4')): - ACTIVATION_LAYERS.register_module(module=GELU) -else: - ACTIVATION_LAYERS.register_module(module=nn.GELU) - - -def build_activation_layer(cfg): - """Build activation layer. - - Args: - cfg (dict): The activation layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an activation layer. - - Returns: - nn.Module: Created activation layer. - """ - return build_from_cfg(cfg, ACTIVATION_LAYERS) diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/conv.py b/cv/detection/yolof/pytorch/mmcv/cnn/bricks/conv.py deleted file mode 100755 index cf544919..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/conv.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from torch import nn - -from .registry import CONV_LAYERS - -CONV_LAYERS.register_module('Conv1d', module=nn.Conv1d) -CONV_LAYERS.register_module('Conv2d', module=nn.Conv2d) -CONV_LAYERS.register_module('Conv3d', module=nn.Conv3d) -CONV_LAYERS.register_module('Conv', module=nn.Conv2d) - - -def build_conv_layer(cfg, *args, **kwargs): - """Build convolution layer. - - Args: - cfg (None or dict): The conv layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an conv layer. - args (argument list): Arguments passed to the `__init__` - method of the corresponding conv layer. - kwargs (keyword arguments): Keyword arguments passed to the `__init__` - method of the corresponding conv layer. - - Returns: - nn.Module: Created conv layer. - """ - if cfg is None: - cfg_ = dict(type='Conv2d') - else: - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in CONV_LAYERS: - raise KeyError(f'Unrecognized norm type {layer_type}') - else: - conv_layer = CONV_LAYERS.get(layer_type) - - layer = conv_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/conv_module.py b/cv/detection/yolof/pytorch/mmcv/cnn/bricks/conv_module.py deleted file mode 100755 index 4f19f1d0..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/conv_module.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn - -from mmcv.utils import _BatchNorm, _InstanceNorm -from ..utils import constant_init, kaiming_init -from .activation import build_activation_layer -from .conv import build_conv_layer -from .norm import build_norm_layer -from .padding import build_padding_layer -from .registry import PLUGIN_LAYERS - - -@PLUGIN_LAYERS.register_module() -class ConvModule(nn.Module): - """A conv block that bundles conv/norm/activation layers. - - This block simplifies the usage of convolution layers, which are commonly - used with a norm layer (e.g., BatchNorm) and activation layer (e.g., ReLU). - It is based upon three build methods: `build_conv_layer()`, - `build_norm_layer()` and `build_activation_layer()`. - - Besides, we add some additional features in this module. - 1. Automatically set `bias` of the conv layer. - 2. Spectral norm is supported. - 3. More padding modes are supported. Before PyTorch 1.5, nn.Conv2d only - supports zero and circular padding, and we add "reflect" padding mode. - - Args: - in_channels (int): Number of channels in the input feature map. - Same as that in ``nn._ConvNd``. - out_channels (int): Number of channels produced by the convolution. - Same as that in ``nn._ConvNd``. - kernel_size (int | tuple[int]): Size of the convolving kernel. - Same as that in ``nn._ConvNd``. - stride (int | tuple[int]): Stride of the convolution. - Same as that in ``nn._ConvNd``. - padding (int | tuple[int]): Zero-padding added to both sides of - the input. Same as that in ``nn._ConvNd``. - dilation (int | tuple[int]): Spacing between kernel elements. - Same as that in ``nn._ConvNd``. - groups (int): Number of blocked connections from input channels to - output channels. Same as that in ``nn._ConvNd``. - bias (bool | str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if `norm_cfg` is None, otherwise - False. Default: "auto". 
- conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - inplace (bool): Whether to use inplace mode for activation. - Default: True. - with_spectral_norm (bool): Whether use spectral norm in conv module. - Default: False. - padding_mode (str): If the `padding_mode` has not been supported by - current `Conv2d` in PyTorch, we will use our own padding layer - instead. Currently, we support ['zeros', 'circular'] with official - implementation and ['reflect'] with our own implementation. - Default: 'zeros'. - order (tuple[str]): The order of conv/norm/activation layers. It is a - sequence of "conv", "norm" and "act". Common examples are - ("conv", "norm", "act") and ("act", "conv", "norm"). - Default: ('conv', 'norm', 'act'). - """ - - _abbr_ = 'conv_block' - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias='auto', - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - inplace=True, - with_spectral_norm=False, - padding_mode='zeros', - order=('conv', 'norm', 'act')): - super(ConvModule, self).__init__() - assert conv_cfg is None or isinstance(conv_cfg, dict) - assert norm_cfg is None or isinstance(norm_cfg, dict) - assert act_cfg is None or isinstance(act_cfg, dict) - official_padding_mode = ['zeros', 'circular'] - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.inplace = inplace - self.with_spectral_norm = with_spectral_norm - self.with_explicit_padding = padding_mode not in official_padding_mode - self.order = order - assert isinstance(self.order, tuple) and len(self.order) == 3 - assert set(order) == set(['conv', 'norm', 'act']) - - self.with_norm = norm_cfg is not None - self.with_activation = act_cfg is not None - # if the conv layer is before a norm layer, bias is unnecessary. 
- if bias == 'auto': - bias = not self.with_norm - self.with_bias = bias - - if self.with_explicit_padding: - pad_cfg = dict(type=padding_mode) - self.padding_layer = build_padding_layer(pad_cfg, padding) - - # reset padding to 0 for conv module - conv_padding = 0 if self.with_explicit_padding else padding - # build convolution layer - self.conv = build_conv_layer( - conv_cfg, - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=conv_padding, - dilation=dilation, - groups=groups, - bias=bias) - # export the attributes of self.conv to a higher level for convenience - self.in_channels = self.conv.in_channels - self.out_channels = self.conv.out_channels - self.kernel_size = self.conv.kernel_size - self.stride = self.conv.stride - self.padding = padding - self.dilation = self.conv.dilation - self.transposed = self.conv.transposed - self.output_padding = self.conv.output_padding - self.groups = self.conv.groups - - if self.with_spectral_norm: - self.conv = nn.utils.spectral_norm(self.conv) - - # build normalization layers - if self.with_norm: - # norm layer is after conv layer - if order.index('norm') > order.index('conv'): - norm_channels = out_channels - else: - norm_channels = in_channels - self.norm_name, norm = build_norm_layer(norm_cfg, norm_channels) - self.add_module(self.norm_name, norm) - if self.with_bias: - if isinstance(norm, (_BatchNorm, _InstanceNorm)): - warnings.warn( - 'Unnecessary conv bias before batch/instance norm') - else: - self.norm_name = None - - # build activation layer - if self.with_activation: - act_cfg_ = act_cfg.copy() - # nn.Tanh has no 'inplace' argument - if act_cfg_['type'] not in [ - 'Tanh', 'PReLU', 'Sigmoid', 'HSigmoid', 'Swish' - ]: - act_cfg_.setdefault('inplace', inplace) - self.activate = build_activation_layer(act_cfg_) - - # Use msra init by default - self.init_weights() - - @property - def norm(self): - if self.norm_name: - return getattr(self, self.norm_name) - else: - return None - - def init_weights(self): - # 1. It is mainly for customized conv layers with their own - # initialization manners by calling their own ``init_weights()``, - # and we do not want ConvModule to override the initialization. - # 2. For customized conv layers without their own initialization - # manners (that is, they don't have their own ``init_weights()``) - # and PyTorch's conv layers, they will be initialized by - # this method with default ``kaiming_init``. - # Note: For PyTorch's conv layers, they will be overwritten by our - # initialization implementation using default ``kaiming_init``. - if not hasattr(self.conv, 'init_weights'): - if self.with_activation and self.act_cfg['type'] == 'LeakyReLU': - nonlinearity = 'leaky_relu' - a = self.act_cfg.get('negative_slope', 0.01) - else: - nonlinearity = 'relu' - a = 0 - kaiming_init(self.conv, a=a, nonlinearity=nonlinearity) - if self.with_norm: - constant_init(self.norm, 1, bias=0) - - def forward(self, x, activate=True, norm=True): - for layer in self.order: - if layer == 'conv': - if self.with_explicit_padding: - x = self.padding_layer(x) - x = self.conv(x) - elif layer == 'norm' and norm and self.with_norm: - x = self.norm(x) - elif layer == 'act' and activate and self.with_activation: - x = self.activate(x) - return x diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/norm.py b/cv/detection/yolof/pytorch/mmcv/cnn/bricks/norm.py deleted file mode 100755 index cfb326bd..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/norm.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import inspect - -import torch.nn as nn - -from mmcv.utils import is_tuple_of -from mmcv.utils.parrots_wrapper import SyncBatchNorm, _BatchNorm, _InstanceNorm -from .registry import NORM_LAYERS - -NORM_LAYERS.register_module('BN', module=nn.BatchNorm2d) -NORM_LAYERS.register_module('BN1d', module=nn.BatchNorm1d) -NORM_LAYERS.register_module('BN2d', module=nn.BatchNorm2d) -NORM_LAYERS.register_module('BN3d', module=nn.BatchNorm3d) -NORM_LAYERS.register_module('SyncBN', module=SyncBatchNorm) -NORM_LAYERS.register_module('GN', module=nn.GroupNorm) -NORM_LAYERS.register_module('LN', module=nn.LayerNorm) -NORM_LAYERS.register_module('IN', module=nn.InstanceNorm2d) -NORM_LAYERS.register_module('IN1d', module=nn.InstanceNorm1d) -NORM_LAYERS.register_module('IN2d', module=nn.InstanceNorm2d) -NORM_LAYERS.register_module('IN3d', module=nn.InstanceNorm3d) - - -def infer_abbr(class_type): - """Infer abbreviation from the class name. - - When we build a norm layer with `build_norm_layer()`, we want to preserve - the norm type in variable names, e.g, self.bn1, self.gn. This method will - infer the abbreviation to map class types to abbreviations. - - Rule 1: If the class has the property "_abbr_", return the property. - Rule 2: If the parent class is _BatchNorm, GroupNorm, LayerNorm or - InstanceNorm, the abbreviation of this layer will be "bn", "gn", "ln" and - "in" respectively. - Rule 3: If the class name contains "batch", "group", "layer" or "instance", - the abbreviation of this layer will be "bn", "gn", "ln" and "in" - respectively. - Rule 4: Otherwise, the abbreviation falls back to "norm". - - Args: - class_type (type): The norm layer type. - - Returns: - str: The inferred abbreviation. - """ - if not inspect.isclass(class_type): - raise TypeError( - f'class_type must be a type, but got {type(class_type)}') - if hasattr(class_type, '_abbr_'): - return class_type._abbr_ - if issubclass(class_type, _InstanceNorm): # IN is a subclass of BN - return 'in' - elif issubclass(class_type, _BatchNorm): - return 'bn' - elif issubclass(class_type, nn.GroupNorm): - return 'gn' - elif issubclass(class_type, nn.LayerNorm): - return 'ln' - else: - class_name = class_type.__name__.lower() - if 'batch' in class_name: - return 'bn' - elif 'group' in class_name: - return 'gn' - elif 'layer' in class_name: - return 'ln' - elif 'instance' in class_name: - return 'in' - else: - return 'norm_layer' - - -def build_norm_layer(cfg, num_features, postfix=''): - """Build normalization layer. - - Args: - cfg (dict): The norm layer config, which should contain: - - - type (str): Layer type. - - layer args: Args needed to instantiate a norm layer. - - requires_grad (bool, optional): Whether stop gradient updates. - num_features (int): Number of input channels. - postfix (int | str): The postfix to be appended into norm abbreviation - to create named layer. - - Returns: - (str, nn.Module): The first element is the layer name consisting of - abbreviation and postfix, e.g., bn1, gn. The second element is the - created norm layer. 
- """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in NORM_LAYERS: - raise KeyError(f'Unrecognized norm type {layer_type}') - - norm_layer = NORM_LAYERS.get(layer_type) - abbr = infer_abbr(norm_layer) - - assert isinstance(postfix, (int, str)) - name = abbr + str(postfix) - - requires_grad = cfg_.pop('requires_grad', True) - cfg_.setdefault('eps', 1e-5) - if layer_type != 'GN': - layer = norm_layer(num_features, **cfg_) - if layer_type == 'SyncBN' and hasattr(layer, '_specify_ddp_gpu_num'): - layer._specify_ddp_gpu_num(1) - else: - assert 'num_groups' in cfg_ - layer = norm_layer(num_channels=num_features, **cfg_) - - for param in layer.parameters(): - param.requires_grad = requires_grad - - return name, layer - - -def is_norm(layer, exclude=None): - """Check if a layer is a normalization layer. - - Args: - layer (nn.Module): The layer to be checked. - exclude (type | tuple[type]): Types to be excluded. - - Returns: - bool: Whether the layer is a norm layer. - """ - if exclude is not None: - if not isinstance(exclude, tuple): - exclude = (exclude, ) - if not is_tuple_of(exclude, type): - raise TypeError( - f'"exclude" must be either None or type or a tuple of types, ' - f'but got {type(exclude)}: {exclude}') - - if exclude and isinstance(layer, exclude): - return False - - all_norm_bases = (_BatchNorm, _InstanceNorm, nn.GroupNorm, nn.LayerNorm) - return isinstance(layer, all_norm_bases) diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/padding.py b/cv/detection/yolof/pytorch/mmcv/cnn/bricks/padding.py deleted file mode 100755 index e4ac6b28..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/padding.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .registry import PADDING_LAYERS - -PADDING_LAYERS.register_module('zero', module=nn.ZeroPad2d) -PADDING_LAYERS.register_module('reflect', module=nn.ReflectionPad2d) -PADDING_LAYERS.register_module('replicate', module=nn.ReplicationPad2d) - - -def build_padding_layer(cfg, *args, **kwargs): - """Build padding layer. - - Args: - cfg (None or dict): The padding layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate a padding layer. - - Returns: - nn.Module: Created padding layer. - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - - cfg_ = cfg.copy() - padding_type = cfg_.pop('type') - if padding_type not in PADDING_LAYERS: - raise KeyError(f'Unrecognized padding type {padding_type}.') - else: - padding_layer = PADDING_LAYERS.get(padding_type) - - layer = padding_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/plugin.py b/cv/detection/yolof/pytorch/mmcv/cnn/bricks/plugin.py deleted file mode 100755 index 07c010d4..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/plugin.py +++ /dev/null @@ -1,88 +0,0 @@ -import inspect -import platform - -from .registry import PLUGIN_LAYERS - -if platform.system() == 'Windows': - import regex as re -else: - import re - - -def infer_abbr(class_type): - """Infer abbreviation from the class name. - - This method will infer the abbreviation to map class types to - abbreviations. 
- - Rule 1: If the class has the property "abbr", return the property. - Rule 2: Otherwise, the abbreviation falls back to snake case of class - name, e.g. the abbreviation of ``FancyBlock`` will be ``fancy_block``. - - Args: - class_type (type): The norm layer type. - - Returns: - str: The inferred abbreviation. - """ - - def camel2snack(word): - """Convert camel case word into snack case. - - Modified from `inflection lib - `_. - - Example:: - - >>> camel2snack("FancyBlock") - 'fancy_block' - """ - - word = re.sub(r'([A-Z]+)([A-Z][a-z])', r'\1_\2', word) - word = re.sub(r'([a-z\d])([A-Z])', r'\1_\2', word) - word = word.replace('-', '_') - return word.lower() - - if not inspect.isclass(class_type): - raise TypeError( - f'class_type must be a type, but got {type(class_type)}') - if hasattr(class_type, '_abbr_'): - return class_type._abbr_ - else: - return camel2snack(class_type.__name__) - - -def build_plugin_layer(cfg, postfix='', **kwargs): - """Build plugin layer. - - Args: - cfg (None or dict): cfg should contain: - type (str): identify plugin layer type. - layer args: args needed to instantiate a plugin layer. - postfix (int, str): appended into norm abbreviation to - create named layer. Default: ''. - - Returns: - tuple[str, nn.Module]: - name (str): abbreviation + postfix - layer (nn.Module): created plugin layer - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in PLUGIN_LAYERS: - raise KeyError(f'Unrecognized plugin type {layer_type}') - - plugin_layer = PLUGIN_LAYERS.get(layer_type) - abbr = infer_abbr(plugin_layer) - - assert isinstance(postfix, (int, str)) - name = abbr + str(postfix) - - layer = plugin_layer(**kwargs, **cfg_) - - return name, layer diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/registry.py b/cv/detection/yolof/pytorch/mmcv/cnn/bricks/registry.py deleted file mode 100755 index c2927977..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/bricks/registry.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry - -CONV_LAYERS = Registry('conv layer') -NORM_LAYERS = Registry('norm layer') -ACTIVATION_LAYERS = Registry('activation layer') -PADDING_LAYERS = Registry('padding layer') -UPSAMPLE_LAYERS = Registry('upsample layer') -PLUGIN_LAYERS = Registry('plugin layer') - -DROPOUT_LAYERS = Registry('drop out layers') -POSITIONAL_ENCODING = Registry('position encoding') -ATTENTION = Registry('attention') -FEEDFORWARD_NETWORK = Registry('feed-forward Network') -TRANSFORMER_LAYER = Registry('transformerLayer') -TRANSFORMER_LAYER_SEQUENCE = Registry('transformer-layers sequence') diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/builder.py b/cv/detection/yolof/pytorch/mmcv/cnn/builder.py deleted file mode 100755 index 7567316c..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/builder.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..runner import Sequential -from ..utils import Registry, build_from_cfg - - -def build_model_from_cfg(cfg, registry, default_args=None): - """Build a PyTorch model from config dict(s). Different from - ``build_from_cfg``, if cfg is a list, a ``nn.Sequential`` will be built. - - Args: - cfg (dict, list[dict]): The config of modules, is is either a config - dict or a list of config dicts. 
If cfg is a list, a - the built modules will be wrapped with ``nn.Sequential``. - registry (:obj:`Registry`): A registry the module belongs to. - default_args (dict, optional): Default arguments to build the module. - Defaults to None. - - Returns: - nn.Module: A built nn module. - """ - if isinstance(cfg, list): - modules = [ - build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg - ] - return Sequential(*modules) - else: - return build_from_cfg(cfg, registry, default_args) - - -MODELS = Registry('model', build_func=build_model_from_cfg) diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/resnet.py b/cv/detection/yolof/pytorch/mmcv/cnn/resnet.py deleted file mode 100755 index 1cb3ac05..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/resnet.py +++ /dev/null @@ -1,316 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn -import torch.utils.checkpoint as cp - -from .utils import constant_init, kaiming_init - - -def conv3x3(in_planes, out_planes, stride=1, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - super(BasicBlock, self).__init__() - assert style in ['pytorch', 'caffe'] - self.conv1 = conv3x3(inplanes, planes, stride, dilation) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - assert not with_cp - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - """Bottleneck block. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - super(Bottleneck, self).__init__() - assert style in ['pytorch', 'caffe'] - if style == 'pytorch': - conv1_stride = 1 - conv2_stride = stride - else: - conv1_stride = stride - conv2_stride = 1 - self.conv1 = nn.Conv2d( - inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - stride=conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.bn1 = nn.BatchNorm2d(planes) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - def forward(self, x): - - def _inner_forward(x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -def make_res_layer(block, - inplanes, - planes, - blocks, - stride=1, - dilation=1, - style='pytorch', - with_cp=False): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append( - block( - inplanes, - planes, - stride, - dilation, - downsample, - style=style, - with_cp=with_cp)) - inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp)) - - return nn.Sequential(*layers) - - -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - num_stages (int): Resnet stages, normally 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - with_cp=False): - super(ResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - assert num_stages >= 1 and num_stages <= 4 - block, stage_blocks = self.arch_settings[depth] - stage_blocks = stage_blocks[:num_stages] - assert len(strides) == len(dilations) == num_stages - assert max(out_indices) < num_stages - - self.out_indices = out_indices - self.style = style - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - self.with_cp = with_cp - - self.inplanes = 64 - self.conv1 = nn.Conv2d( - 3, 64, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.res_layers = [] - for i, num_blocks in enumerate(stage_blocks): - stride = strides[i] - dilation = dilations[i] - planes = 64 * 2**i - res_layer = make_res_layer( - block, - self.inplanes, - planes, - num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - with_cp=with_cp) - self.inplanes = planes * block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self.feat_dim = block.expansion * 64 * 2**(len(stage_blocks) - 1) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(ResNet, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - if mode and self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for param in self.bn1.parameters(): - param.requires_grad = False - self.bn1.eval() - self.bn1.weight.requires_grad = False - self.bn1.bias.requires_grad = False - for i in range(1, self.frozen_stages + 1): - mod = getattr(self, f'layer{i}') - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/utils/__init__.py b/cv/detection/yolof/pytorch/mmcv/cnn/utils/__init__.py deleted file mode 100755 index f8d95064..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/utils/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .weight_init import (INITIALIZERS, Caffe2XavierInit, ConstantInit, - KaimingInit, NormalInit, PretrainedInit, - TruncNormalInit, UniformInit, XavierInit, - bias_init_with_prob, caffe2_xavier_init, - constant_init, initialize, kaiming_init, normal_init, - trunc_normal_init, uniform_init, xavier_init) - -__all__ = [ - 'get_model_complexity_info', 'bias_init_with_prob', 'caffe2_xavier_init', - 'constant_init', 'kaiming_init', 'normal_init', 'trunc_normal_init', - 'uniform_init', 'xavier_init', 'fuse_conv_bn', 'initialize', - 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit', - 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit', - 'Caffe2XavierInit', 'revert_sync_batchnorm' -] diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/utils/flops_counter.py b/cv/detection/yolof/pytorch/mmcv/cnn/utils/flops_counter.py deleted file mode 100644 index dceeb398..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/utils/flops_counter.py +++ /dev/null @@ -1,599 +0,0 @@ -# Modified from flops-counter.pytorch by Vladislav Sovrasov -# original repo: https://github.com/sovrasov/flops-counter.pytorch - -# MIT License - -# Copyright (c) 2018 Vladislav Sovrasov - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in -# all copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -import sys -from functools import partial - -import numpy as np -import torch -import torch.nn as nn - -import mmcv - - -def get_model_complexity_info(model, - input_shape, - print_per_layer_stat=True, - as_strings=True, - input_constructor=None, - flush=False, - ost=sys.stdout): - """Get complexity information of a model. - - This method can calculate FLOPs and parameter counts of a model with - corresponding input shape. It can also print complexity information for - each layer in a model. - - Supported layers are listed as below: - - Convolutions: ``nn.Conv1d``, ``nn.Conv2d``, ``nn.Conv3d``. - - Activations: ``nn.ReLU``, ``nn.PReLU``, ``nn.ELU``, ``nn.LeakyReLU``, - ``nn.ReLU6``. - - Poolings: ``nn.MaxPool1d``, ``nn.MaxPool2d``, ``nn.MaxPool3d``, - ``nn.AvgPool1d``, ``nn.AvgPool2d``, ``nn.AvgPool3d``, - ``nn.AdaptiveMaxPool1d``, ``nn.AdaptiveMaxPool2d``, - ``nn.AdaptiveMaxPool3d``, ``nn.AdaptiveAvgPool1d``, - ``nn.AdaptiveAvgPool2d``, ``nn.AdaptiveAvgPool3d``. - - BatchNorms: ``nn.BatchNorm1d``, ``nn.BatchNorm2d``, - ``nn.BatchNorm3d``, ``nn.GroupNorm``, ``nn.InstanceNorm1d``, - ``InstanceNorm2d``, ``InstanceNorm3d``, ``nn.LayerNorm``. - - Linear: ``nn.Linear``. - - Deconvolution: ``nn.ConvTranspose2d``. - - Upsample: ``nn.Upsample``. 
- - Args: - model (nn.Module): The model for complexity calculation. - input_shape (tuple): Input shape used for calculation. - print_per_layer_stat (bool): Whether to print complexity information - for each layer in a model. Default: True. - as_strings (bool): Output FLOPs and params counts in a string form. - Default: True. - input_constructor (None | callable): If specified, it takes a callable - method that generates input. otherwise, it will generate a random - tensor with input shape to calculate FLOPs. Default: None. - flush (bool): same as that in :func:`print`. Default: False. - ost (stream): same as ``file`` param in :func:`print`. - Default: sys.stdout. - - Returns: - tuple[float | str]: If ``as_strings`` is set to True, it will return - FLOPs and parameter counts in a string format. otherwise, it will - return those in a float number format. - """ - assert type(input_shape) is tuple - assert len(input_shape) >= 1 - assert isinstance(model, nn.Module) - flops_model = add_flops_counting_methods(model) - flops_model.eval() - flops_model.start_flops_count() - if input_constructor: - input = input_constructor(input_shape) - _ = flops_model(**input) - else: - try: - batch = torch.ones(()).new_empty( - (1, *input_shape), - dtype=next(flops_model.parameters()).dtype, - device=next(flops_model.parameters()).device) - except StopIteration: - # Avoid StopIteration for models which have no parameters, - # like `nn.Relu()`, `nn.AvgPool2d`, etc. - batch = torch.ones(()).new_empty((1, *input_shape)) - - _ = flops_model(batch) - - flops_count, params_count = flops_model.compute_average_flops_cost() - if print_per_layer_stat: - print_model_with_flops( - flops_model, flops_count, params_count, ost=ost, flush=flush) - flops_model.stop_flops_count() - - if as_strings: - return flops_to_string(flops_count), params_to_string(params_count) - - return flops_count, params_count - - -def flops_to_string(flops, units='GFLOPs', precision=2): - """Convert FLOPs number into a string. - - Note that Here we take a multiply-add counts as one FLOP. - - Args: - flops (float): FLOPs number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'GFLOPs', - 'MFLOPs', 'KFLOPs', 'FLOPs'. If set to None, it will automatically - choose the most suitable unit for FLOPs. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted FLOPs number with units. - - Examples: - >>> flops_to_string(1e9) - '1.0 GFLOPs' - >>> flops_to_string(2e5, 'MFLOPs') - '0.2 MFLOPs' - >>> flops_to_string(3e-9, None) - '3e-09 FLOPs' - """ - if units is None: - if flops // 10**9 > 0: - return str(round(flops / 10.**9, precision)) + ' GFLOPs' - elif flops // 10**6 > 0: - return str(round(flops / 10.**6, precision)) + ' MFLOPs' - elif flops // 10**3 > 0: - return str(round(flops / 10.**3, precision)) + ' KFLOPs' - else: - return str(flops) + ' FLOPs' - else: - if units == 'GFLOPs': - return str(round(flops / 10.**9, precision)) + ' ' + units - elif units == 'MFLOPs': - return str(round(flops / 10.**6, precision)) + ' ' + units - elif units == 'KFLOPs': - return str(round(flops / 10.**3, precision)) + ' ' + units - else: - return str(flops) + ' FLOPs' - - -def params_to_string(num_params, units=None, precision=2): - """Convert parameter number into a string. - - Args: - num_params (float): Parameter number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'M', - 'K' and ''. 
If set to None, it will automatically choose the most - suitable unit for Parameter number. Default: None. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted parameter number with units. - - Examples: - >>> params_to_string(1e9) - '1000.0 M' - >>> params_to_string(2e5) - '200.0 k' - >>> params_to_string(3e-9) - '3e-09' - """ - if units is None: - if num_params // 10**6 > 0: - return str(round(num_params / 10**6, precision)) + ' M' - elif num_params // 10**3: - return str(round(num_params / 10**3, precision)) + ' k' - else: - return str(num_params) - else: - if units == 'M': - return str(round(num_params / 10.**6, precision)) + ' ' + units - elif units == 'K': - return str(round(num_params / 10.**3, precision)) + ' ' + units - else: - return str(num_params) - - -def print_model_with_flops(model, - total_flops, - total_params, - units='GFLOPs', - precision=3, - ost=sys.stdout, - flush=False): - """Print a model with FLOPs for each layer. - - Args: - model (nn.Module): The model to be printed. - total_flops (float): Total FLOPs of the model. - total_params (float): Total parameter counts of the model. - units (str | None): Converted FLOPs units. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 3. - ost (stream): same as `file` param in :func:`print`. - Default: sys.stdout. - flush (bool): same as that in :func:`print`. Default: False. - - Example: - >>> class ExampleModel(nn.Module): - - >>> def __init__(self): - >>> super().__init__() - >>> self.conv1 = nn.Conv2d(3, 8, 3) - >>> self.conv2 = nn.Conv2d(8, 256, 3) - >>> self.conv3 = nn.Conv2d(256, 8, 3) - >>> self.avg_pool = nn.AdaptiveAvgPool2d((1, 1)) - >>> self.flatten = nn.Flatten() - >>> self.fc = nn.Linear(8, 1) - - >>> def forward(self, x): - >>> x = self.conv1(x) - >>> x = self.conv2(x) - >>> x = self.conv3(x) - >>> x = self.avg_pool(x) - >>> x = self.flatten(x) - >>> x = self.fc(x) - >>> return x - - >>> model = ExampleModel() - >>> x = (3, 16, 16) - to print the complexity information state for each layer, you can use - >>> get_model_complexity_info(model, x) - or directly use - >>> print_model_with_flops(model, 4579784.0, 37361) - ExampleModel( - 0.037 M, 100.000% Params, 0.005 GFLOPs, 100.000% FLOPs, - (conv1): Conv2d(0.0 M, 0.600% Params, 0.0 GFLOPs, 0.959% FLOPs, 3, 8, kernel_size=(3, 3), stride=(1, 1)) # noqa: E501 - (conv2): Conv2d(0.019 M, 50.020% Params, 0.003 GFLOPs, 58.760% FLOPs, 8, 256, kernel_size=(3, 3), stride=(1, 1)) - (conv3): Conv2d(0.018 M, 49.356% Params, 0.002 GFLOPs, 40.264% FLOPs, 256, 8, kernel_size=(3, 3), stride=(1, 1)) - (avg_pool): AdaptiveAvgPool2d(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.017% FLOPs, output_size=(1, 1)) - (flatten): Flatten(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.000% FLOPs, ) - (fc): Linear(0.0 M, 0.024% Params, 0.0 GFLOPs, 0.000% FLOPs, in_features=8, out_features=1, bias=True) - ) - """ - - def accumulate_params(self): - if is_supported_instance(self): - return self.__params__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_params() - return sum - - def accumulate_flops(self): - if is_supported_instance(self): - return self.__flops__ / model.__batch_counter__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_flops() - return sum - - def flops_repr(self): - accumulated_num_params = self.accumulate_params() - accumulated_flops_cost = self.accumulate_flops() - return ', '.join([ - params_to_string( - accumulated_num_params, units='M', precision=precision), - '{:.3%} 
Params'.format(accumulated_num_params / total_params), - flops_to_string( - accumulated_flops_cost, units=units, precision=precision), - '{:.3%} FLOPs'.format(accumulated_flops_cost / total_flops), - self.original_extra_repr() - ]) - - def add_extra_repr(m): - m.accumulate_flops = accumulate_flops.__get__(m) - m.accumulate_params = accumulate_params.__get__(m) - flops_extra_repr = flops_repr.__get__(m) - if m.extra_repr != flops_extra_repr: - m.original_extra_repr = m.extra_repr - m.extra_repr = flops_extra_repr - assert m.extra_repr != m.original_extra_repr - - def del_extra_repr(m): - if hasattr(m, 'original_extra_repr'): - m.extra_repr = m.original_extra_repr - del m.original_extra_repr - if hasattr(m, 'accumulate_flops'): - del m.accumulate_flops - - model.apply(add_extra_repr) - print(model, file=ost, flush=flush) - model.apply(del_extra_repr) - - -def get_model_parameters_number(model): - """Calculate parameter number of a model. - - Args: - model (nn.module): The model for parameter number calculation. - - Returns: - float: Parameter number of the model. - """ - num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) - return num_params - - -def add_flops_counting_methods(net_main_module): - # adding additional methods to the existing module object, - # this is done this way so that each function has access to self object - net_main_module.start_flops_count = start_flops_count.__get__( - net_main_module) - net_main_module.stop_flops_count = stop_flops_count.__get__( - net_main_module) - net_main_module.reset_flops_count = reset_flops_count.__get__( - net_main_module) - net_main_module.compute_average_flops_cost = compute_average_flops_cost.__get__( # noqa: E501 - net_main_module) - - net_main_module.reset_flops_count() - - return net_main_module - - -def compute_average_flops_cost(self): - """Compute average FLOPs cost. - - A method to compute average FLOPs cost, which will be available after - `add_flops_counting_methods()` is called on a desired net object. - - Returns: - float: Current mean flops consumption per image. - """ - batches_count = self.__batch_counter__ - flops_sum = 0 - for module in self.modules(): - if is_supported_instance(module): - flops_sum += module.__flops__ - params_sum = get_model_parameters_number(self) - return flops_sum / batches_count, params_sum - - -def start_flops_count(self): - """Activate the computation of mean flops consumption per image. - - A method to activate the computation of mean flops consumption per image. - which will be available after ``add_flops_counting_methods()`` is called on - a desired net object. It should be called before running the network. - """ - add_batch_counter_hook_function(self) - - def add_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - return - - else: - handle = module.register_forward_hook( - get_modules_mapping()[type(module)]) - - module.__flops_handle__ = handle - - self.apply(partial(add_flops_counter_hook_function)) - - -def stop_flops_count(self): - """Stop computing the mean flops consumption per image. - - A method to stop computing the mean flops consumption per image, which will - be available after ``add_flops_counting_methods()`` is called on a desired - net object. It can be called to pause the computation whenever. - """ - remove_batch_counter_hook_function(self) - self.apply(remove_flops_counter_hook_function) - - -def reset_flops_count(self): - """Reset statistics computed so far. 
- - A method to Reset computed statistics, which will be available after - `add_flops_counting_methods()` is called on a desired net object. - """ - add_batch_counter_variables_or_reset(self) - self.apply(add_flops_counter_variable_or_reset) - - -# ---- Internal functions -def empty_flops_counter_hook(module, input, output): - module.__flops__ += 0 - - -def upsample_flops_counter_hook(module, input, output): - output_size = output[0] - batch_size = output_size.shape[0] - output_elements_count = batch_size - for val in output_size.shape[1:]: - output_elements_count *= val - module.__flops__ += int(output_elements_count) - - -def relu_flops_counter_hook(module, input, output): - active_elements_count = output.numel() - module.__flops__ += int(active_elements_count) - - -def linear_flops_counter_hook(module, input, output): - input = input[0] - output_last_dim = output.shape[ - -1] # pytorch checks dimensions, so here we don't care much - module.__flops__ += int(np.prod(input.shape) * output_last_dim) - - -def pool_flops_counter_hook(module, input, output): - input = input[0] - module.__flops__ += int(np.prod(input.shape)) - - -def norm_flops_counter_hook(module, input, output): - input = input[0] - - batch_flops = np.prod(input.shape) - if (getattr(module, 'affine', False) - or getattr(module, 'elementwise_affine', False)): - batch_flops *= 2 - module.__flops__ += int(batch_flops) - - -def deconv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - input_height, input_width = input.shape[2:] - - kernel_height, kernel_width = conv_module.kernel_size - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = ( - kernel_height * kernel_width * in_channels * filters_per_channel) - - active_elements_count = batch_size * input_height * input_width - overall_conv_flops = conv_per_position_flops * active_elements_count - bias_flops = 0 - if conv_module.bias is not None: - output_height, output_width = output.shape[2:] - bias_flops = out_channels * batch_size * output_height * output_height - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def conv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - output_dims = list(output.shape[2:]) - - kernel_dims = list(conv_module.kernel_size) - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = int( - np.prod(kernel_dims)) * in_channels * filters_per_channel - - active_elements_count = batch_size * int(np.prod(output_dims)) - - overall_conv_flops = conv_per_position_flops * active_elements_count - - bias_flops = 0 - - if conv_module.bias is not None: - - bias_flops = out_channels * active_elements_count - - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def batch_counter_hook(module, input, output): - batch_size = 1 - if len(input) > 0: - # Can have multiple inputs, getting the first one - input = input[0] - batch_size = len(input) - else: - pass - print('Warning! 
No positional inputs found for a module, ' - 'assuming batch size is 1.') - module.__batch_counter__ += batch_size - - -def add_batch_counter_variables_or_reset(module): - - module.__batch_counter__ = 0 - - -def add_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - return - - handle = module.register_forward_hook(batch_counter_hook) - module.__batch_counter_handle__ = handle - - -def remove_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - module.__batch_counter_handle__.remove() - del module.__batch_counter_handle__ - - -def add_flops_counter_variable_or_reset(module): - if is_supported_instance(module): - if hasattr(module, '__flops__') or hasattr(module, '__params__'): - print('Warning: variables __flops__ or __params__ are already ' - 'defined for the module' + type(module).__name__ + - ' ptflops can affect your code!') - module.__flops__ = 0 - module.__params__ = get_model_parameters_number(module) - - -def is_supported_instance(module): - if type(module) in get_modules_mapping(): - return True - return False - - -def remove_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - module.__flops_handle__.remove() - del module.__flops_handle__ - - -def get_modules_mapping(): - return { - # convolutions - nn.Conv1d: conv_flops_counter_hook, - nn.Conv2d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv2d: conv_flops_counter_hook, - nn.Conv3d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv3d: conv_flops_counter_hook, - # activations - nn.ReLU: relu_flops_counter_hook, - nn.PReLU: relu_flops_counter_hook, - nn.ELU: relu_flops_counter_hook, - nn.LeakyReLU: relu_flops_counter_hook, - nn.ReLU6: relu_flops_counter_hook, - # poolings - nn.MaxPool1d: pool_flops_counter_hook, - nn.AvgPool1d: pool_flops_counter_hook, - nn.AvgPool2d: pool_flops_counter_hook, - nn.MaxPool2d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool2d: pool_flops_counter_hook, - nn.MaxPool3d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool3d: pool_flops_counter_hook, - nn.AvgPool3d: pool_flops_counter_hook, - nn.AdaptiveMaxPool1d: pool_flops_counter_hook, - nn.AdaptiveAvgPool1d: pool_flops_counter_hook, - nn.AdaptiveMaxPool2d: pool_flops_counter_hook, - nn.AdaptiveAvgPool2d: pool_flops_counter_hook, - nn.AdaptiveMaxPool3d: pool_flops_counter_hook, - nn.AdaptiveAvgPool3d: pool_flops_counter_hook, - # normalizations - nn.BatchNorm1d: norm_flops_counter_hook, - nn.BatchNorm2d: norm_flops_counter_hook, - nn.BatchNorm3d: norm_flops_counter_hook, - nn.GroupNorm: norm_flops_counter_hook, - nn.InstanceNorm1d: norm_flops_counter_hook, - nn.InstanceNorm2d: norm_flops_counter_hook, - nn.InstanceNorm3d: norm_flops_counter_hook, - nn.LayerNorm: norm_flops_counter_hook, - # FC - nn.Linear: linear_flops_counter_hook, - mmcv.cnn.bricks.Linear: linear_flops_counter_hook, - # Upscale - nn.Upsample: upsample_flops_counter_hook, - # Deconvolution - nn.ConvTranspose2d: deconv_flops_counter_hook, - mmcv.cnn.bricks.ConvTranspose2d: deconv_flops_counter_hook, - } diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/utils/fuse_conv_bn.py b/cv/detection/yolof/pytorch/mmcv/cnn/utils/fuse_conv_bn.py deleted file mode 100644 index cb7076f8..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/utils/fuse_conv_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -def _fuse_conv_bn(conv, bn): - """Fuse conv and bn into one module. 
- - Args: - conv (nn.Module): Conv to be fused. - bn (nn.Module): BN to be fused. - - Returns: - nn.Module: Fused module. - """ - conv_w = conv.weight - conv_b = conv.bias if conv.bias is not None else torch.zeros_like( - bn.running_mean) - - factor = bn.weight / torch.sqrt(bn.running_var + bn.eps) - conv.weight = nn.Parameter(conv_w * - factor.reshape([conv.out_channels, 1, 1, 1])) - conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias) - return conv - - -def fuse_conv_bn(module): - """Recursively fuse conv and bn in a module. - - During inference, the functionary of batch norm layers is turned off - but only the mean and var alone channels are used, which exposes the - chance to fuse it with the preceding conv layers to save computations and - simplify network structures. - - Args: - module (nn.Module): Module to be fused. - - Returns: - nn.Module: Fused module. - """ - last_conv = None - last_conv_name = None - - for name, child in module.named_children(): - if isinstance(child, - (nn.modules.batchnorm._BatchNorm, nn.SyncBatchNorm)): - if last_conv is None: # only fuse BN that is after Conv - continue - fused_conv = _fuse_conv_bn(last_conv, child) - module._modules[last_conv_name] = fused_conv - # To reduce changes, set BN as Identity instead of deleting it. - module._modules[name] = nn.Identity() - last_conv = None - elif isinstance(child, nn.Conv2d): - last_conv = child - last_conv_name = name - else: - fuse_conv_bn(child) - return module diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/utils/sync_bn.py b/cv/detection/yolof/pytorch/mmcv/cnn/utils/sync_bn.py deleted file mode 100644 index 8a79ff4a..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/utils/sync_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch - -import mmcv - - -class _BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): - """A general BatchNorm layer without input dimension check. - - Reproduced from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - is `_check_input_dim` that is designed for tensor sanity checks. - The check has been bypassed in this class for the convenience of converting - SyncBatchNorm. - """ - - def _check_input_dim(self, input): - return - - -def revert_sync_batchnorm(module): - """Helper function to convert all `SyncBatchNorm` (SyncBN) and - `mmcv.ops.sync_bn.SyncBatchNorm`(MMSyncBN) layers in the model to - `BatchNormXd` layers. - - Adapted from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - - Args: - module (nn.Module): The module containing `SyncBatchNorm` layers. - - Returns: - module_output: The converted module with `BatchNormXd` layers. 
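# A minimal sketch of the two inference-time clean-ups implemented above:
# fuse_conv_bn folds frozen BatchNorm statistics into the preceding conv, and
# revert_sync_batchnorm swaps SyncBatchNorm layers back to single-device
# BatchNormXd modules. The toy modules below are illustrative assumptions.
import torch
import torch.nn as nn

toy = nn.Sequential(nn.Conv2d(3, 8, 3, bias=False), nn.BatchNorm2d(8), nn.ReLU())
toy.eval()                                # fusion relies on BN running statistics

x = torch.randn(1, 3, 32, 32)
reference = toy(x)
fused = fuse_conv_bn(toy)                 # BN becomes nn.Identity, conv absorbs scale/shift
assert torch.allclose(fused(x), reference, atol=1e-5)

sync_model = nn.SyncBatchNorm.convert_sync_batchnorm(
    nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8)))
single_device = revert_sync_batchnorm(sync_model)   # SyncBatchNorm -> _BatchNormXd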
- """ - module_output = module - module_checklist = [torch.nn.modules.batchnorm.SyncBatchNorm] - if hasattr(mmcv, 'ops'): - module_checklist.append(mmcv.ops.SyncBatchNorm) - if isinstance(module, tuple(module_checklist)): - module_output = _BatchNormXd(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - # no_grad() may not be needed here but - # just to be consistent with `convert_sync_batchnorm()` - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - module_output.training = module.training - # qconfig exists in quantized models - if hasattr(module, 'qconfig'): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output diff --git a/cv/detection/yolof/pytorch/mmcv/cnn/utils/weight_init.py b/cv/detection/yolof/pytorch/mmcv/cnn/utils/weight_init.py deleted file mode 100755 index e1ac999e..00000000 --- a/cv/detection/yolof/pytorch/mmcv/cnn/utils/weight_init.py +++ /dev/null @@ -1,684 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import math -import warnings - -import numpy as np -import torch -import torch.nn as nn -from torch import Tensor - -from mmcv.utils import Registry, build_from_cfg, get_logger, print_log - -INITIALIZERS = Registry('initializer') - - -def update_init_info(module, init_info): - """Update the `_params_init_info` in the module if the value of parameters - are changed. - - Args: - module (obj:`nn.Module`): The module of PyTorch with a user-defined - attribute `_params_init_info` which records the initialization - information. - init_info (str): The string that describes the initialization. - """ - assert hasattr( - module, - '_params_init_info'), f'Can not find `_params_init_info` in {module}' - for name, param in module.named_parameters(): - - assert param in module._params_init_info, ( - f'Find a new :obj:`Parameter` ' - f'named `{name}` during executing the ' - f'`init_weights` of ' - f'`{module.__class__.__name__}`. ' - f'Please do not add or ' - f'replace parameters during executing ' - f'the `init_weights`. 
') - - # The parameter has been changed during executing the - # `init_weights` of module - mean_value = param.data.mean() - if module._params_init_info[param]['tmp_mean_value'] != mean_value: - module._params_init_info[param]['init_info'] = init_info - module._params_init_info[param]['tmp_mean_value'] = mean_value - - -def constant_init(module, val, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.constant_(module.weight, val) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def xavier_init(module, gain=1, bias=0, distribution='normal'): - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.xavier_uniform_(module.weight, gain=gain) - else: - nn.init.xavier_normal_(module.weight, gain=gain) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def normal_init(module, mean=0, std=1, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.normal_(module.weight, mean, std) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def trunc_normal_init(module: nn.Module, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - bias: float = 0) -> None: - if hasattr(module, 'weight') and module.weight is not None: - trunc_normal_(module.weight, mean, std, a, b) # type: ignore - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) # type: ignore - - -def uniform_init(module, a=0, b=1, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.uniform_(module.weight, a, b) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def kaiming_init(module, - a=0, - mode='fan_out', - nonlinearity='relu', - bias=0, - distribution='normal'): - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.kaiming_uniform_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - else: - nn.init.kaiming_normal_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def caffe2_xavier_init(module, bias=0): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - kaiming_init( - module, - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - bias=bias, - distribution='uniform') - - -def bias_init_with_prob(prior_prob): - """initialize conv/fc bias value according to a given probability value.""" - bias_init = float(-np.log((1 - prior_prob) / prior_prob)) - return bias_init - - -def _get_bases_name(m): - return [b.__name__ for b in m.__class__.__bases__] - - -class BaseInit(object): - - def __init__(self, *, bias=0, bias_prob=None, layer=None): - self.wholemodule = False - if not isinstance(bias, (int, float)): - raise TypeError(f'bias must be a number, but got a {type(bias)}') - - if bias_prob is not None: - if not isinstance(bias_prob, float): - raise TypeError(f'bias_prob type must be float, \ - but got {type(bias_prob)}') - - if layer is not None: - if not isinstance(layer, (str, list)): - raise TypeError(f'layer must be a str or a list of str, \ - but got a {type(layer)}') - else: - layer = [] - - if bias_prob is not None: - self.bias 
= bias_init_with_prob(bias_prob) - else: - self.bias = bias - self.layer = [layer] if isinstance(layer, str) else layer - - def _get_init_info(self): - info = f'{self.__class__.__name__}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Constant') -class ConstantInit(BaseInit): - """Initialize module parameters with constant values. - - Args: - val (int | float): the value to fill the weights in the module with - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, val, **kwargs): - super().__init__(**kwargs) - self.val = val - - def __call__(self, module): - - def init(m): - if self.wholemodule: - constant_init(m, self.val, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - constant_init(m, self.val, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: val={self.val}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Xavier') -class XavierInit(BaseInit): - r"""Initialize module parameters with values according to the method - described in `Understanding the difficulty of training deep feedforward - neural networks - Glorot, X. & Bengio, Y. (2010). - `_ - - Args: - gain (int | float): an optional scaling factor. Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` - or ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, gain=1, distribution='normal', **kwargs): - super().__init__(**kwargs) - self.gain = gain - self.distribution = distribution - - def __call__(self, module): - - def init(m): - if self.wholemodule: - xavier_init(m, self.gain, self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - xavier_init(m, self.gain, self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: gain={self.gain}, ' \ - f'distribution={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Normal') -class NormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`. - - Args: - mean (int | float):the mean of the normal distribution. Defaults to 0. - std (int | float): the standard deviation of the normal distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. 
- - """ - - def __init__(self, mean=0, std=1, **kwargs): - super().__init__(**kwargs) - self.mean = mean - self.std = std - - def __call__(self, module): - - def init(m): - if self.wholemodule: - normal_init(m, self.mean, self.std, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - normal_init(m, self.mean, self.std, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: mean={self.mean},' \ - f' std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='TruncNormal') -class TruncNormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` with values - outside :math:`[a, b]`. - - Args: - mean (float): the mean of the normal distribution. Defaults to 0. - std (float): the standard deviation of the normal distribution. - Defaults to 1. - a (float): The minimum cutoff value. - b ( float): The maximum cutoff value. - bias (float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - - """ - - def __init__(self, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - **kwargs) -> None: - super().__init__(**kwargs) - self.mean = mean - self.std = std - self.a = a - self.b = b - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, b={self.b},' \ - f' mean={self.mean}, std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Uniform') -class UniformInit(BaseInit): - r"""Initialize module parameters with values drawn from the uniform - distribution :math:`\mathcal{U}(a, b)`. - - Args: - a (int | float): the lower bound of the uniform distribution. - Defaults to 0. - b (int | float): the upper bound of the uniform distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. 
- """ - - def __init__(self, a=0, b=1, **kwargs): - super().__init__(**kwargs) - self.a = a - self.b = b - - def __call__(self, module): - - def init(m): - if self.wholemodule: - uniform_init(m, self.a, self.b, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - uniform_init(m, self.a, self.b, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a},' \ - f' b={self.b}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Kaiming') -class KaimingInit(BaseInit): - r"""Initialize module parameters with the values according to the method - described in `Delving deep into rectifiers: Surpassing human-level - performance on ImageNet classification - He, K. et al. (2015). - `_ - - Args: - a (int | float): the negative slope of the rectifier used after this - layer (only used with ``'leaky_relu'``). Defaults to 0. - mode (str): either ``'fan_in'`` or ``'fan_out'``. Choosing - ``'fan_in'`` preserves the magnitude of the variance of the weights - in the forward pass. Choosing ``'fan_out'`` preserves the - magnitudes in the backwards pass. Defaults to ``'fan_out'``. - nonlinearity (str): the non-linear function (`nn.functional` name), - recommended to use only with ``'relu'`` or ``'leaky_relu'`` . - Defaults to 'relu'. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` or - ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, - a=0, - mode='fan_out', - nonlinearity='relu', - distribution='normal', - **kwargs): - super().__init__(**kwargs) - self.a = a - self.mode = mode - self.nonlinearity = nonlinearity - self.distribution = distribution - - def __call__(self, module): - - def init(m): - if self.wholemodule: - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, mode={self.mode}, ' \ - f'nonlinearity={self.nonlinearity}, ' \ - f'distribution ={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Caffe2Xavier') -class Caffe2XavierInit(KaimingInit): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - def __init__(self, **kwargs): - super().__init__( - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - distribution='uniform', - **kwargs) - - def __call__(self, module): - super().__call__(module) - - -@INITIALIZERS.register_module(name='Pretrained') -class PretrainedInit(object): - """Initialize module by loading a pretrained model. - - Args: - checkpoint (str): the checkpoint file of the pretrained model should - be load. - prefix (str, optional): the prefix of a sub-module in the pretrained - model. 
it is for loading a part of the pretrained model to - initialize. For example, if we would like to only load the - backbone of a detector model, we can set ``prefix='backbone.'``. - Defaults to None. - map_location (str): map tensors into proper locations. - """ - - def __init__(self, checkpoint, prefix=None, map_location=None): - self.checkpoint = checkpoint - self.prefix = prefix - self.map_location = map_location - - def __call__(self, module): - from mmcv.runner import (_load_checkpoint_with_prefix, load_checkpoint, - load_state_dict) - logger = get_logger('mmcv') - if self.prefix is None: - print_log(f'load model from: {self.checkpoint}', logger=logger) - load_checkpoint( - module, - self.checkpoint, - map_location=self.map_location, - strict=False, - logger=logger) - else: - print_log( - f'load {self.prefix} in model from: {self.checkpoint}', - logger=logger) - state_dict = _load_checkpoint_with_prefix( - self.prefix, self.checkpoint, map_location=self.map_location) - load_state_dict(module, state_dict, strict=False, logger=logger) - - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: load from {self.checkpoint}' - return info - - -def _initialize(module, cfg, wholemodule=False): - func = build_from_cfg(cfg, INITIALIZERS) - # wholemodule flag is for override mode, there is no layer key in override - # and initializer will give init values for the whole module with the name - # in override. - func.wholemodule = wholemodule - func(module) - - -def _initialize_override(module, override, cfg): - if not isinstance(override, (dict, list)): - raise TypeError(f'override must be a dict or a list of dict, \ - but got {type(override)}') - - override = [override] if isinstance(override, dict) else override - - for override_ in override: - - cp_override = copy.deepcopy(override_) - name = cp_override.pop('name', None) - if name is None: - raise ValueError('`override` must contain the key "name",' - f'but got {cp_override}') - # if override only has name key, it means use args in init_cfg - if not cp_override: - cp_override.update(cfg) - # if override has name key and other args except type key, it will - # raise error - elif 'type' not in cp_override.keys(): - raise ValueError( - f'`override` need "type" key, but got {cp_override}') - - if hasattr(module, name): - _initialize(getattr(module, name), cp_override, wholemodule=True) - else: - raise RuntimeError(f'module did not have attribute {name}, ' - f'but init_cfg is {cp_override}.') - - -def initialize(module, init_cfg): - """Initialize a module. - - Args: - module (``torch.nn.Module``): the module will be initialized. - init_cfg (dict | list[dict]): initialization configuration dict to - define initializer. OpenMMLab has implemented 6 initializers - including ``Constant``, ``Xavier``, ``Normal``, ``Uniform``, - ``Kaiming``, and ``Pretrained``. 
- Example: - >>> module = nn.Linear(2, 3, bias=True) - >>> init_cfg = dict(type='Constant', layer='Linear', val =1 , bias =2) - >>> initialize(module, init_cfg) - - >>> module = nn.Sequential(nn.Conv1d(3, 1, 3), nn.Linear(1,2)) - >>> # define key ``'layer'`` for initializing layer with different - >>> # configuration - >>> init_cfg = [dict(type='Constant', layer='Conv1d', val=1), - dict(type='Constant', layer='Linear', val=2)] - >>> initialize(module, init_cfg) - - >>> # define key``'override'`` to initialize some specific part in - >>> # module - >>> class FooNet(nn.Module): - >>> def __init__(self): - >>> super().__init__() - >>> self.feat = nn.Conv2d(3, 16, 3) - >>> self.reg = nn.Conv2d(16, 10, 3) - >>> self.cls = nn.Conv2d(16, 5, 3) - >>> model = FooNet() - >>> init_cfg = dict(type='Constant', val=1, bias=2, layer='Conv2d', - >>> override=dict(type='Constant', name='reg', val=3, bias=4)) - >>> initialize(model, init_cfg) - - >>> model = ResNet(depth=50) - >>> # Initialize weights with the pretrained model. - >>> init_cfg = dict(type='Pretrained', - checkpoint='torchvision://resnet50') - >>> initialize(model, init_cfg) - - >>> # Initialize weights of a sub-module with the specific part of - >>> # a pretrained model by using "prefix". - >>> url = 'http://download.openmmlab.com/mmdetection/v2.0/retinanet/'\ - >>> 'retinanet_r50_fpn_1x_coco/'\ - >>> 'retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth' - >>> init_cfg = dict(type='Pretrained', - checkpoint=url, prefix='backbone.') - """ - if not isinstance(init_cfg, (dict, list)): - raise TypeError(f'init_cfg must be a dict or a list of dict, \ - but got {type(init_cfg)}') - - if isinstance(init_cfg, dict): - init_cfg = [init_cfg] - - for cfg in init_cfg: - # should deeply copy the original config because cfg may be used by - # other modules, e.g., one init_cfg shared by multiple bottleneck - # blocks, the expected cfg will be changed after pop and will change - # the initialization behavior of other modules - cp_cfg = copy.deepcopy(cfg) - override = cp_cfg.pop('override', None) - _initialize(module, cp_cfg) - - if override is not None: - cp_cfg.pop('layer', None) - _initialize_override(module, override, cp_cfg) - else: - # All attributes in module have same initialization. - pass - - -def _no_grad_trunc_normal_(tensor: Tensor, mean: float, std: float, a: float, - b: float) -> Tensor: - # Method based on - # https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - # Modified from - # https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - lower = norm_cdf((a - mean) / std) - upper = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [lower, upper], then translate - # to [2lower-1, 2upper-1]. 
- tensor.uniform_(2 * lower - 1, 2 * upper - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor: Tensor, - mean: float = 0., - std: float = 1., - a: float = -2., - b: float = 2.) -> Tensor: - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. - - Modified from - https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - - Args: - tensor (``torch.Tensor``): an n-dimensional `torch.Tensor`. - mean (float): the mean of the normal distribution. - std (float): the standard deviation of the normal distribution. - a (float): the minimum cutoff value. - b (float): the maximum cutoff value. - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) diff --git a/cv/detection/yolof/pytorch/mmcv/engine/__init__.py b/cv/detection/yolof/pytorch/mmcv/engine/__init__.py deleted file mode 100755 index 3193b7f6..00000000 --- a/cv/detection/yolof/pytorch/mmcv/engine/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .test import (collect_results_cpu, collect_results_gpu, multi_gpu_test, - single_gpu_test) - -__all__ = [ - 'collect_results_cpu', 'collect_results_gpu', 'multi_gpu_test', - 'single_gpu_test' -] diff --git a/cv/detection/yolof/pytorch/mmcv/engine/test.py b/cv/detection/yolof/pytorch/mmcv/engine/test.py deleted file mode 100755 index f236b1cd..00000000 --- a/cv/detection/yolof/pytorch/mmcv/engine/test.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import pickle -import shutil -import tempfile -import time - -import torch -import torch.distributed as dist - -import mmcv -from mmcv.runner import get_dist_info - - -def single_gpu_test(model, data_loader): - """Test model with a single gpu. - - This method tests model with a single gpu and displays test progress bar. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - - Returns: - list: The prediction results. - """ - model.eval() - results = [] - dataset = data_loader.dataset - prog_bar = mmcv.ProgressBar(len(dataset)) - for data in data_loader: - with torch.no_grad(): - result = model(return_loss=False, **data) - results.extend(result) - - # Assume result has the same length of batch_size - # refer to https://github.com/open-mmlab/mmcv/issues/985 - batch_size = len(result) - for _ in range(batch_size): - prog_bar.update() - return results - - -def multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False): - """Test model with multiple gpus. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting - ``gpu_collect=True``, it encodes results to gpu tensors and use gpu - communication for results collection. On cpu mode it saves the results on - different gpus to ``tmpdir`` and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. 
- data_loader (nn.Dataloader): Pytorch data loader. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - - Returns: - list: The prediction results. - """ - model.eval() - results = [] - dataset = data_loader.dataset - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - time.sleep(2) # This line can prevent deadlock problem in some cases. - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, **data) - results.extend(result) - - if rank == 0: - batch_size = len(result) - batch_size_all = batch_size * world_size - if batch_size_all + prog_bar.completed > len(dataset): - batch_size_all = len(dataset) - prog_bar.completed - for _ in range(batch_size_all): - prog_bar.update() - - # collect results from all ranks - if gpu_collect: - results = collect_results_gpu(results, len(dataset)) - else: - results = collect_results_cpu(results, len(dataset), tmpdir) - return results - - -def collect_results_cpu(result_part, size, tmpdir=None): - """Collect results under cpu mode. - - On cpu mode, this function will save the results on different gpus to - ``tmpdir`` and collect them by the rank 0 worker. - - Args: - result_part (list): Result list containing result parts - to be collected. - size (int): Size of the results, commonly equal to length of - the results. - tmpdir (str | None): temporal directory for collected results to - store. If set to None, it will create a random temporal directory - for it. - - Returns: - list: The collected results. - """ - rank, world_size = get_dist_info() - # create a tmp dir if it is not specified - if tmpdir is None: - MAX_LEN = 512 - # 32 is whitespace - dir_tensor = torch.full((MAX_LEN, ), - 32, - dtype=torch.uint8, - device='cuda') - if rank == 0: - mmcv.mkdir_or_exist('.dist_test') - tmpdir = tempfile.mkdtemp(dir='.dist_test') - tmpdir = torch.tensor( - bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda') - dir_tensor[:len(tmpdir)] = tmpdir - dist.broadcast(dir_tensor, 0) - tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip() - else: - mmcv.mkdir_or_exist(tmpdir) - # dump the part result to the dir - mmcv.dump(result_part, osp.join(tmpdir, f'part_{rank}.pkl')) - dist.barrier() - # collect all parts - if rank != 0: - return None - else: - # load results of all parts from tmp dir - part_list = [] - for i in range(world_size): - part_file = osp.join(tmpdir, f'part_{i}.pkl') - part_result = mmcv.load(part_file) - # When data is severely insufficient, an empty part_result - # on a certain gpu could makes the overall outputs empty. - if part_result: - part_list.append(part_result) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - # remove tmp dir - shutil.rmtree(tmpdir) - return ordered_results - - -def collect_results_gpu(result_part, size): - """Collect results under gpu mode. - - On gpu mode, this function will encode results to gpu tensors and use gpu - communication for results collection. - - Args: - result_part (list): Result list containing result parts - to be collected. - size (int): Size of the results, commonly equal to length of - the results. - - Returns: - list: The collected results. 
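# A minimal sketch of how the test helpers above fit together; `detector` and
# `val_loader` are placeholder names, and the model is assumed to accept
# `return_loss=False, **data` as called in single_gpu_test/multi_gpu_test.
from mmcv.runner import get_dist_info

rank, world_size = get_dist_info()
if world_size > 1:
    # each rank evaluates its shard; rank 0 gathers the parts via tmpdir (cpu mode)
    # or over the GPU collective when gpu_collect=True
    results = multi_gpu_test(detector, val_loader, tmpdir='./.dist_test', gpu_collect=False)
else:
    results = single_gpu_test(detector, val_loader)

if rank == 0 and results is not None:
    print(f'{len(results)} predictions collected')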
- """ - rank, world_size = get_dist_info() - # dump result part to tensor with pickle - part_tensor = torch.tensor( - bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda') - # gather all result part tensor shape - shape_tensor = torch.tensor(part_tensor.shape, device='cuda') - shape_list = [shape_tensor.clone() for _ in range(world_size)] - dist.all_gather(shape_list, shape_tensor) - # padding result part tensor to max length - shape_max = torch.tensor(shape_list).max() - part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda') - part_send[:shape_tensor[0]] = part_tensor - part_recv_list = [ - part_tensor.new_zeros(shape_max) for _ in range(world_size) - ] - # gather all result part - dist.all_gather(part_recv_list, part_send) - - if rank == 0: - part_list = [] - for recv, shape in zip(part_recv_list, shape_list): - part_result = pickle.loads(recv[:shape[0]].cpu().numpy().tobytes()) - # When data is severely insufficient, an empty part_result - # on a certain gpu could makes the overall outputs empty. - if part_result: - part_list.append(part_result) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - return ordered_results diff --git a/cv/detection/yolof/pytorch/mmcv/fileio/__init__.py b/cv/detection/yolof/pytorch/mmcv/fileio/__init__.py deleted file mode 100755 index 2051b85f..00000000 --- a/cv/detection/yolof/pytorch/mmcv/fileio/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .file_client import BaseStorageBackend, FileClient -from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler -from .io import dump, load, register_handler -from .parse import dict_from_file, list_from_file - -__all__ = [ - 'BaseStorageBackend', 'FileClient', 'load', 'dump', 'register_handler', - 'BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler', - 'list_from_file', 'dict_from_file' -] diff --git a/cv/detection/yolof/pytorch/mmcv/fileio/file_client.py b/cv/detection/yolof/pytorch/mmcv/fileio/file_client.py deleted file mode 100755 index b2d62286..00000000 --- a/cv/detection/yolof/pytorch/mmcv/fileio/file_client.py +++ /dev/null @@ -1,1148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import os -import os.path as osp -import re -import tempfile -import warnings -from abc import ABCMeta, abstractmethod -from contextlib import contextmanager -from pathlib import Path -from typing import Iterable, Iterator, Optional, Tuple, Union -from urllib.request import urlopen - -import mmcv -from mmcv.utils.misc import has_method -from mmcv.utils.path import is_filepath - - -class BaseStorageBackend(metaclass=ABCMeta): - """Abstract class of storage backends. - - All backends need to implement two apis: ``get()`` and ``get_text()``. - ``get()`` reads the file as a byte stream and ``get_text()`` reads the file - as texts. - """ - - # a flag to indicate whether the backend can create a symlink for a file - _allow_symlink = False - - @property - def name(self): - return self.__class__.__name__ - - @property - def allow_symlink(self): - return self._allow_symlink - - @abstractmethod - def get(self, filepath): - pass - - @abstractmethod - def get_text(self, filepath): - pass - - -class CephBackend(BaseStorageBackend): - """Ceph storage backend (for internal use). - - Args: - path_mapping (dict|None): path mapping dict from local path to Petrel - path. 
When ``path_mapping={'src': 'dst'}``, ``src`` in ``filepath`` - will be replaced by ``dst``. Default: None. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. - """ - - def __init__(self, path_mapping=None): - try: - import ceph - except ImportError: - raise ImportError('Please install ceph to enable CephBackend.') - - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead') - self._client = ceph.S3Client() - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def get(self, filepath): - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class PetrelBackend(BaseStorageBackend): - """Petrel storage backend (for internal use). - - PetrelBackend supports reading and writing data to multiple clusters. - If the file path contains the cluster name, PetrelBackend will read data - from specified cluster or write data to it. Otherwise, PetrelBackend will - access the default cluster. - - Args: - path_mapping (dict, optional): Path mapping dict from local path to - Petrel path. When ``path_mapping={'src': 'dst'}``, ``src`` in - ``filepath`` will be replaced by ``dst``. Default: None. - enable_mc (bool, optional): Whether to enable memcached support. - Default: True. - - Examples: - >>> filepath1 = 's3://path/of/file' - >>> filepath2 = 'cluster-name:s3://path/of/file' - >>> client = PetrelBackend() - >>> client.get(filepath1) # get data from default cluster - >>> client.get(filepath2) # get data from 'cluster-name' cluster - """ - - def __init__(self, - path_mapping: Optional[dict] = None, - enable_mc: bool = True): - try: - from petrel_client import client - except ImportError: - raise ImportError('Please install petrel_client to enable ' - 'PetrelBackend.') - - self._client = client.Client(enable_mc=enable_mc) - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def _map_path(self, filepath: Union[str, Path]) -> str: - """Map ``filepath`` to a string path whose prefix will be replaced by - :attr:`self.path_mapping`. - - Args: - filepath (str): Path to be mapped. - """ - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - return filepath - - def _format_path(self, filepath: str) -> str: - """Convert a ``filepath`` to standard format of petrel oss. - - If the ``filepath`` is concatenated by ``os.path.join``, in a Windows - environment, the ``filepath`` will be the format of - 's3://bucket_name\\image.jpg'. By invoking :meth:`_format_path`, the - above ``filepath`` will be converted to 's3://bucket_name/image.jpg'. - - Args: - filepath (str): Path to be formatted. - """ - return re.sub(r'\\+', '/', filepath) - - def get(self, filepath: Union[str, Path]) -> memoryview: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - memoryview: A memory view of expected bytes object to avoid - copying. The memoryview object can be converted to bytes by - ``value_buf.tobytes()``. 
- """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return str(self.get(filepath), encoding=encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Save data to a given ``filepath``. - - Args: - obj (bytes): Data to be saved. - filepath (str or Path): Path to write data. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.put(filepath, obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Save data to a given ``filepath``. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to encode the ``obj``. - Default: 'utf-8'. - """ - self.put(bytes(obj, encoding=encoding), filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. - """ - if not has_method(self._client, 'delete'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `delete` method, please use a higher version or dev' - ' branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.delete(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - if not (has_method(self._client, 'contains') - and has_method(self._client, 'isdir')): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `contains` and `isdir` methods, please use a higher' - 'version or dev branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) or self._client.isdir(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - if not has_method(self._client, 'isdir'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `isdir` method, please use a higher version or dev' - ' branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. 
- """ - if not has_method(self._client, 'contains'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `contains` method, please use a higher version or ' - 'dev branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result after concatenation. - """ - filepath = self._format_path(self._map_path(filepath)) - if filepath.endswith('/'): - filepath = filepath[:-1] - formatted_paths = [filepath] - for path in filepaths: - formatted_paths.append(self._format_path(self._map_path(path))) - return '/'.join(formatted_paths) - - @contextmanager - def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]: - """Download a file from ``filepath`` and return a temporary path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str | Path): Download a file from ``filepath``. - - Examples: - >>> client = PetrelBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('s3://path/of/your/file') as path: - ... # do something here - - Yields: - Iterable[str]: Only yield one temporary path. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - assert self.isfile(filepath) - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - Petrel has no concept of directories but it simulates the directory - hierarchy in the filesystem through public prefixes. In addition, - if the returned path ends with '/', it means the path is a public - prefix which is a logical directory. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - In addition, the returned path of directory will not contains the - suffix '/' which is consistent with other backends. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. 
- """ - if not has_method(self._client, 'list'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `list` method, please use a higher version or dev' - ' branch instead.')) - - dir_path = self._map_path(dir_path) - dir_path = self._format_path(dir_path) - if list_dir and suffix is not None: - raise TypeError( - '`list_dir` should be False when `suffix` is not None') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - # Petrel's simulated directory hierarchy assumes that directory paths - # should end with `/` - if not dir_path.endswith('/'): - dir_path += '/' - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for path in self._client.list(dir_path): - # the `self.isdir` is not used here to determine whether path - # is a directory, because `self.isdir` relies on - # `self._client.list` - if path.endswith('/'): # a directory path - next_dir_path = self.join_path(dir_path, path) - if list_dir: - # get the relative path and exclude the last - # character '/' - rel_dir = next_dir_path[len(root):-1] - yield rel_dir - if recursive: - yield from _list_dir_or_file(next_dir_path, list_dir, - list_file, suffix, - recursive) - else: # a file path - absolute_path = self.join_path(dir_path, path) - rel_path = absolute_path[len(root):] - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class MemcachedBackend(BaseStorageBackend): - """Memcached storage backend. - - Attributes: - server_list_cfg (str): Config file for memcached server list. - client_cfg (str): Config file for memcached client. - sys_path (str | None): Additional path to be appended to `sys.path`. - Default: None. - """ - - def __init__(self, server_list_cfg, client_cfg, sys_path=None): - if sys_path is not None: - import sys - sys.path.append(sys_path) - try: - import mc - except ImportError: - raise ImportError( - 'Please install memcached to enable MemcachedBackend.') - - self.server_list_cfg = server_list_cfg - self.client_cfg = client_cfg - self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg, - self.client_cfg) - # mc.pyvector servers as a point which points to a memory cache - self._mc_buffer = mc.pyvector() - - def get(self, filepath): - filepath = str(filepath) - import mc - self._client.Get(filepath, self._mc_buffer) - value_buf = mc.ConvertBuffer(self._mc_buffer) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class LmdbBackend(BaseStorageBackend): - """Lmdb storage backend. - - Args: - db_path (str): Lmdb database path. - readonly (bool, optional): Lmdb environment parameter. If True, - disallow any write operations. Default: True. - lock (bool, optional): Lmdb environment parameter. If False, when - concurrent access occurs, do not lock the database. Default: False. - readahead (bool, optional): Lmdb environment parameter. If False, - disable the OS filesystem readahead mechanism, which may improve - random read performance when a database is larger than RAM. - Default: False. - - Attributes: - db_path (str): Lmdb database path. 
- """ - - def __init__(self, - db_path, - readonly=True, - lock=False, - readahead=False, - **kwargs): - try: - import lmdb - except ImportError: - raise ImportError('Please install lmdb to enable LmdbBackend.') - - self.db_path = str(db_path) - self._client = lmdb.open( - self.db_path, - readonly=readonly, - lock=lock, - readahead=readahead, - **kwargs) - - def get(self, filepath): - """Get values according to the filepath. - - Args: - filepath (str | obj:`Path`): Here, filepath is the lmdb key. - """ - filepath = str(filepath) - with self._client.begin(write=False) as txn: - value_buf = txn.get(filepath.encode('ascii')) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class HardDiskBackend(BaseStorageBackend): - """Raw hard disks storage backend.""" - - _allow_symlink = True - - def get(self, filepath: Union[str, Path]) -> bytes: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes: Expected bytes object. - """ - with open(filepath, 'rb') as f: - value_buf = f.read() - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - with open(filepath, 'r', encoding=encoding) as f: - value_buf = f.read() - return value_buf - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` will create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. - """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'wb') as f: - f.write(obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` will create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'w', encoding=encoding) as f: - f.write(obj) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. - """ - os.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return osp.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - return osp.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. 
- - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return osp.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return osp.join(filepath, *filepaths) - - @contextmanager - def get_local_path( - self, filepath: Union[str, Path]) -> Iterable[Union[str, Path]]: - """Only for unified API and do nothing.""" - yield filepath - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - if list_dir and suffix is not None: - raise TypeError('`suffix` should be None when `list_dir` is True') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - elif osp.isdir(entry.path): - if list_dir: - rel_dir = osp.relpath(entry.path, root) - yield rel_dir - if recursive: - yield from _list_dir_or_file(entry.path, list_dir, - list_file, suffix, - recursive) - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class HTTPBackend(BaseStorageBackend): - """HTTP and HTTPS storage bachend.""" - - def get(self, filepath): - value_buf = urlopen(filepath).read() - return value_buf - - def get_text(self, filepath, encoding='utf-8'): - value_buf = urlopen(filepath).read() - return value_buf.decode(encoding) - - @contextmanager - def get_local_path(self, filepath: str) -> Iterable[str]: - """Download a file from ``filepath``. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str): Download a file from ``filepath``. - - Examples: - >>> client = HTTPBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('http://path/of/your/file') as path: - ... # do something here - """ - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - -class FileClient: - """A general file client to access files in different backends. 
- - The client loads a file or text in a specified backend from its path - and returns it as a binary or text file. There are two ways to choose a - backend, the name of backend and the prefix of path. Although both of them - can be used to choose a storage backend, ``backend`` has a higher priority - that is if they are all set, the storage backend will be chosen by the - backend argument. If they are all `None`, the disk backend will be chosen. - Note that It can also register other backend accessor with a given name, - prefixes, and backend class. In addition, We use the singleton pattern to - avoid repeated object creation. If the arguments are the same, the same - object will be returned. - - Args: - backend (str, optional): The storage backend type. Options are "disk", - "ceph", "memcached", "lmdb", "http" and "petrel". Default: None. - prefix (str, optional): The prefix of the registered storage backend. - Options are "s3", "http", "https". Default: None. - - Examples: - >>> # only set backend - >>> file_client = FileClient(backend='petrel') - >>> # only set prefix - >>> file_client = FileClient(prefix='s3') - >>> # set both backend and prefix but use backend to choose client - >>> file_client = FileClient(backend='petrel', prefix='s3') - >>> # if the arguments are the same, the same object is returned - >>> file_client1 = FileClient(backend='petrel') - >>> file_client1 is file_client - True - - Attributes: - client (:obj:`BaseStorageBackend`): The backend object. - """ - - _backends = { - 'disk': HardDiskBackend, - 'ceph': CephBackend, - 'memcached': MemcachedBackend, - 'lmdb': LmdbBackend, - 'petrel': PetrelBackend, - 'http': HTTPBackend, - } - # This collection is used to record the overridden backends, and when a - # backend appears in the collection, the singleton pattern is disabled for - # that backend, because if the singleton pattern is used, then the object - # returned will be the backend before overwriting - _overridden_backends = set() - _prefix_to_backends = { - 's3': PetrelBackend, - 'http': HTTPBackend, - 'https': HTTPBackend, - } - _overridden_prefixes = set() - - _instances = {} - - def __new__(cls, backend=None, prefix=None, **kwargs): - if backend is None and prefix is None: - backend = 'disk' - if backend is not None and backend not in cls._backends: - raise ValueError( - f'Backend {backend} is not supported. Currently supported ones' - f' are {list(cls._backends.keys())}') - if prefix is not None and prefix not in cls._prefix_to_backends: - raise ValueError( - f'prefix {prefix} is not supported. 
Currently supported ones ' - f'are {list(cls._prefix_to_backends.keys())}') - - # concatenate the arguments to a unique key for determining whether - # objects with the same arguments were created - arg_key = f'{backend}:{prefix}' - for key, value in kwargs.items(): - arg_key += f':{key}:{value}' - - # if a backend was overridden, it will create a new object - if (arg_key in cls._instances - and backend not in cls._overridden_backends - and prefix not in cls._overridden_prefixes): - _instance = cls._instances[arg_key] - else: - # create a new object and put it to _instance - _instance = super().__new__(cls) - if backend is not None: - _instance.client = cls._backends[backend](**kwargs) - else: - _instance.client = cls._prefix_to_backends[prefix](**kwargs) - - cls._instances[arg_key] = _instance - - return _instance - - @property - def name(self): - return self.client.name - - @property - def allow_symlink(self): - return self.client.allow_symlink - - @staticmethod - def parse_uri_prefix(uri: Union[str, Path]) -> Optional[str]: - """Parse the prefix of a uri. - - Args: - uri (str | Path): Uri to be parsed that contains the file prefix. - - Examples: - >>> FileClient.parse_uri_prefix('s3://path/of/your/file') - 's3' - - Returns: - str | None: Return the prefix of uri if the uri contains '://' - else ``None``. - """ - assert is_filepath(uri) - uri = str(uri) - if '://' not in uri: - return None - else: - prefix, _ = uri.split('://') - # In the case of PetrelBackend, the prefix may contains the cluster - # name like clusterName:s3 - if ':' in prefix: - _, prefix = prefix.split(':') - return prefix - - @classmethod - def infer_client(cls, - file_client_args: Optional[dict] = None, - uri: Optional[Union[str, Path]] = None) -> 'FileClient': - """Infer a suitable file client based on the URI and arguments. - - Args: - file_client_args (dict, optional): Arguments to instantiate a - FileClient. Default: None. - uri (str | Path, optional): Uri to be parsed that contains the file - prefix. Default: None. - - Examples: - >>> uri = 's3://path/of/your/file' - >>> file_client = FileClient.infer_client(uri=uri) - >>> file_client_args = {'backend': 'petrel'} - >>> file_client = FileClient.infer_client(file_client_args) - - Returns: - FileClient: Instantiated FileClient object. 
- """ - assert file_client_args is not None or uri is not None - if file_client_args is None: - file_prefix = cls.parse_uri_prefix(uri) # type: ignore - return cls(prefix=file_prefix) - else: - return cls(**file_client_args) - - @classmethod - def _register_backend(cls, name, backend, force=False, prefixes=None): - if not isinstance(name, str): - raise TypeError('the backend name should be a string, ' - f'but got {type(name)}') - if not inspect.isclass(backend): - raise TypeError( - f'backend should be a class but got {type(backend)}') - if not issubclass(backend, BaseStorageBackend): - raise TypeError( - f'backend {backend} is not a subclass of BaseStorageBackend') - if not force and name in cls._backends: - raise KeyError( - f'{name} is already registered as a storage backend, ' - 'add "force=True" if you want to override it') - - if name in cls._backends and force: - cls._overridden_backends.add(name) - cls._backends[name] = backend - - if prefixes is not None: - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if prefix not in cls._prefix_to_backends: - cls._prefix_to_backends[prefix] = backend - elif (prefix in cls._prefix_to_backends) and force: - cls._overridden_prefixes.add(prefix) - cls._prefix_to_backends[prefix] = backend - else: - raise KeyError( - f'{prefix} is already registered as a storage backend,' - ' add "force=True" if you want to override it') - - @classmethod - def register_backend(cls, name, backend=None, force=False, prefixes=None): - """Register a backend to FileClient. - - This method can be used as a normal class method or a decorator. - - .. code-block:: python - - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - FileClient.register_backend('new', NewBackend) - - or - - .. code-block:: python - - @FileClient.register_backend('new') - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - Args: - name (str): The name of the registered backend. - backend (class, optional): The backend class to be registered, - which must be a subclass of :class:`BaseStorageBackend`. - When this method is used as a decorator, backend is None. - Defaults to None. - force (bool, optional): Whether to override the backend if the name - has already been registered. Defaults to False. - prefixes (str or list[str] or tuple[str], optional): The prefixes - of the registered storage backend. Default: None. - `New in version 1.3.15.` - """ - if backend is not None: - cls._register_backend( - name, backend, force=force, prefixes=prefixes) - return - - def _register(backend_cls): - cls._register_backend( - name, backend_cls, force=force, prefixes=prefixes) - return backend_cls - - return _register - - def get(self, filepath: Union[str, Path]) -> Union[bytes, memoryview]: - """Read data from a given ``filepath`` with 'rb' mode. - - Note: - There are two types of return values for ``get``, one is ``bytes`` - and the other is ``memoryview``. The advantage of using memoryview - is that you can avoid copying, and if you want to convert it to - ``bytes``, you can use ``.tobytes()``. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes | memoryview: Expected bytes object or a memory view of the - bytes object. 
- """ - return self.client.get(filepath) - - def get_text(self, filepath: Union[str, Path], encoding='utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return self.client.get_text(filepath, encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` should create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. - """ - self.client.put(obj, filepath) - - def put_text(self, obj: str, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` should create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str, optional): The encoding format used to open the - `filepath`. Default: 'utf-8'. - """ - self.client.put_text(obj, filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str, Path): Path to be removed. - """ - self.client.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return self.client.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - return self.client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return self.client.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return self.client.join_path(filepath, *filepaths) - - @contextmanager - def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]: - """Download data from ``filepath`` and write the data to local path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Note: - If the ``filepath`` is a local path, just return itself. - - .. warning:: - ``get_local_path`` is an experimental interface that may change in - the future. - - Args: - filepath (str or Path): Path to be read data. - - Examples: - >>> file_client = FileClient(prefix='s3') - >>> with file_client.get_local_path('s3://bucket/abc.jpg') as path: - ... 
# do something here - - Yields: - Iterable[str]: Only yield one path. - """ - with self.client.get_local_path(str(filepath)) as local_path: - yield local_path - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - yield from self.client.list_dir_or_file(dir_path, list_dir, list_file, - suffix, recursive) diff --git a/cv/detection/yolof/pytorch/mmcv/fileio/handlers/__init__.py b/cv/detection/yolof/pytorch/mmcv/fileio/handlers/__init__.py deleted file mode 100755 index aa24d919..00000000 --- a/cv/detection/yolof/pytorch/mmcv/fileio/handlers/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base import BaseFileHandler -from .json_handler import JsonHandler -from .pickle_handler import PickleHandler -from .yaml_handler import YamlHandler - -__all__ = ['BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler'] diff --git a/cv/detection/yolof/pytorch/mmcv/fileio/handlers/base.py b/cv/detection/yolof/pytorch/mmcv/fileio/handlers/base.py deleted file mode 100755 index 288878bc..00000000 --- a/cv/detection/yolof/pytorch/mmcv/fileio/handlers/base.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BaseFileHandler(metaclass=ABCMeta): - # `str_like` is a flag to indicate whether the type of file object is - # str-like object or bytes-like object. Pickle only processes bytes-like - # objects but json only processes str-like object. If it is str-like - # object, `StringIO` will be used to process the buffer. - str_like = True - - @abstractmethod - def load_from_fileobj(self, file, **kwargs): - pass - - @abstractmethod - def dump_to_fileobj(self, obj, file, **kwargs): - pass - - @abstractmethod - def dump_to_str(self, obj, **kwargs): - pass - - def load_from_path(self, filepath, mode='r', **kwargs): - with open(filepath, mode) as f: - return self.load_from_fileobj(f, **kwargs) - - def dump_to_path(self, obj, filepath, mode='w', **kwargs): - with open(filepath, mode) as f: - self.dump_to_fileobj(obj, f, **kwargs) diff --git a/cv/detection/yolof/pytorch/mmcv/fileio/handlers/json_handler.py b/cv/detection/yolof/pytorch/mmcv/fileio/handlers/json_handler.py deleted file mode 100755 index 18d4f15f..00000000 --- a/cv/detection/yolof/pytorch/mmcv/fileio/handlers/json_handler.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json - -import numpy as np - -from .base import BaseFileHandler - - -def set_default(obj): - """Set default json values for non-serializable values. - - It helps convert ``set``, ``range`` and ``np.ndarray`` data types to list. - It also converts ``np.generic`` (including ``np.int32``, ``np.float32``, - etc.) into plain numbers of plain python built-in types. 
- """ - if isinstance(obj, (set, range)): - return list(obj) - elif isinstance(obj, np.ndarray): - return obj.tolist() - elif isinstance(obj, np.generic): - return obj.item() - raise TypeError(f'{type(obj)} is unsupported for json dump') - - -class JsonHandler(BaseFileHandler): - - def load_from_fileobj(self, file): - return json.load(file) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('default', set_default) - json.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('default', set_default) - return json.dumps(obj, **kwargs) diff --git a/cv/detection/yolof/pytorch/mmcv/fileio/handlers/pickle_handler.py b/cv/detection/yolof/pytorch/mmcv/fileio/handlers/pickle_handler.py deleted file mode 100755 index b37c79be..00000000 --- a/cv/detection/yolof/pytorch/mmcv/fileio/handlers/pickle_handler.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import pickle - -from .base import BaseFileHandler - - -class PickleHandler(BaseFileHandler): - - str_like = False - - def load_from_fileobj(self, file, **kwargs): - return pickle.load(file, **kwargs) - - def load_from_path(self, filepath, **kwargs): - return super(PickleHandler, self).load_from_path( - filepath, mode='rb', **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('protocol', 2) - return pickle.dumps(obj, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('protocol', 2) - pickle.dump(obj, file, **kwargs) - - def dump_to_path(self, obj, filepath, **kwargs): - super(PickleHandler, self).dump_to_path( - obj, filepath, mode='wb', **kwargs) diff --git a/cv/detection/yolof/pytorch/mmcv/fileio/handlers/yaml_handler.py b/cv/detection/yolof/pytorch/mmcv/fileio/handlers/yaml_handler.py deleted file mode 100755 index c5aa2eea..00000000 --- a/cv/detection/yolof/pytorch/mmcv/fileio/handlers/yaml_handler.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import yaml - -try: - from yaml import CLoader as Loader, CDumper as Dumper -except ImportError: - from yaml import Loader, Dumper - -from .base import BaseFileHandler # isort:skip - - -class YamlHandler(BaseFileHandler): - - def load_from_fileobj(self, file, **kwargs): - kwargs.setdefault('Loader', Loader) - return yaml.load(file, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('Dumper', Dumper) - yaml.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('Dumper', Dumper) - return yaml.dump(obj, **kwargs) diff --git a/cv/detection/yolof/pytorch/mmcv/fileio/io.py b/cv/detection/yolof/pytorch/mmcv/fileio/io.py deleted file mode 100755 index aaefde58..00000000 --- a/cv/detection/yolof/pytorch/mmcv/fileio/io.py +++ /dev/null @@ -1,151 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from io import BytesIO, StringIO -from pathlib import Path - -from ..utils import is_list_of, is_str -from .file_client import FileClient -from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler - -file_handlers = { - 'json': JsonHandler(), - 'yaml': YamlHandler(), - 'yml': YamlHandler(), - 'pickle': PickleHandler(), - 'pkl': PickleHandler() -} - - -def load(file, file_format=None, file_client_args=None, **kwargs): - """Load data from json/yaml/pickle files. - - This method provides a unified api for loading data from serialized files. 
- - Note: - In v1.3.16 and later, ``load`` supports loading data from serialized - files those can be storaged in different backends. - - Args: - file (str or :obj:`Path` or file-like object): Filename or a file-like - object. - file_format (str, optional): If not specified, the file format will be - inferred from the file extension, otherwise use the specified one. - Currently supported formats include "json", "yaml/yml" and - "pickle/pkl". - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> load('/path/of/your/file') # file is storaged in disk - >>> load('https://path/of/your/file') # file is storaged in Internet - >>> load('s3://path/of/your/file') # file is storaged in petrel - - Returns: - The content from the file. - """ - if isinstance(file, Path): - file = str(file) - if file_format is None and is_str(file): - file_format = file.split('.')[-1] - if file_format not in file_handlers: - raise TypeError(f'Unsupported format: {file_format}') - - handler = file_handlers[file_format] - if is_str(file): - file_client = FileClient.infer_client(file_client_args, file) - if handler.str_like: - with StringIO(file_client.get_text(file)) as f: - obj = handler.load_from_fileobj(f, **kwargs) - else: - with BytesIO(file_client.get(file)) as f: - obj = handler.load_from_fileobj(f, **kwargs) - elif hasattr(file, 'read'): - obj = handler.load_from_fileobj(file, **kwargs) - else: - raise TypeError('"file" must be a filepath str or a file-object') - return obj - - -def dump(obj, file=None, file_format=None, file_client_args=None, **kwargs): - """Dump data to json/yaml/pickle strings or files. - - This method provides a unified api for dumping data as strings or to files, - and also supports custom arguments for each file format. - - Note: - In v1.3.16 and later, ``dump`` supports dumping data as strings or to - files which is saved to different backends. - - Args: - obj (any): The python object to be dumped. - file (str or :obj:`Path` or file-like object, optional): If not - specified, then the object is dumped to a str, otherwise to a file - specified by the filename or file-like object. - file_format (str, optional): Same as :func:`load`. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> dump('hello world', '/path/of/your/file') # disk - >>> dump('hello world', 's3://path/of/your/file') # ceph or petrel - - Returns: - bool: True for success, False otherwise. 
- """ - if isinstance(file, Path): - file = str(file) - if file_format is None: - if is_str(file): - file_format = file.split('.')[-1] - elif file is None: - raise ValueError( - 'file_format must be specified since file is None') - if file_format not in file_handlers: - raise TypeError(f'Unsupported format: {file_format}') - - handler = file_handlers[file_format] - if file is None: - return handler.dump_to_str(obj, **kwargs) - elif is_str(file): - file_client = FileClient.infer_client(file_client_args, file) - if handler.str_like: - with StringIO() as f: - handler.dump_to_fileobj(obj, f, **kwargs) - file_client.put_text(f.getvalue(), file) - else: - with BytesIO() as f: - handler.dump_to_fileobj(obj, f, **kwargs) - file_client.put(f.getvalue(), file) - elif hasattr(file, 'write'): - handler.dump_to_fileobj(obj, file, **kwargs) - else: - raise TypeError('"file" must be a filename str or a file-object') - - -def _register_handler(handler, file_formats): - """Register a handler for some file extensions. - - Args: - handler (:obj:`BaseFileHandler`): Handler to be registered. - file_formats (str or list[str]): File formats to be handled by this - handler. - """ - if not isinstance(handler, BaseFileHandler): - raise TypeError( - f'handler must be a child of BaseFileHandler, not {type(handler)}') - if isinstance(file_formats, str): - file_formats = [file_formats] - if not is_list_of(file_formats, str): - raise TypeError('file_formats must be a str or a list of str') - for ext in file_formats: - file_handlers[ext] = handler - - -def register_handler(file_formats, **kwargs): - - def wrap(cls): - _register_handler(cls(**kwargs), file_formats) - return cls - - return wrap diff --git a/cv/detection/yolof/pytorch/mmcv/fileio/parse.py b/cv/detection/yolof/pytorch/mmcv/fileio/parse.py deleted file mode 100755 index f60f0d61..00000000 --- a/cv/detection/yolof/pytorch/mmcv/fileio/parse.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -from io import StringIO - -from .file_client import FileClient - - -def list_from_file(filename, - prefix='', - offset=0, - max_num=0, - encoding='utf-8', - file_client_args=None): - """Load a text file and parse the content as a list of strings. - - Note: - In v1.3.16 and later, ``list_from_file`` supports loading a text file - which can be storaged in different backends and parsing the content as - a list for strings. - - Args: - filename (str): Filename. - prefix (str): The prefix to be inserted to the beginning of each item. - offset (int): The offset of lines. - max_num (int): The maximum number of lines to be read, - zeros and negatives mean no limitation. - encoding (str): Encoding used to open the file. Default utf-8. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> list_from_file('/path/of/your/file') # disk - ['hello', 'world'] - >>> list_from_file('s3://path/of/your/file') # ceph or petrel - ['hello', 'world'] - - Returns: - list[str]: A list of strings. 
- """ - cnt = 0 - item_list = [] - file_client = FileClient.infer_client(file_client_args, filename) - with StringIO(file_client.get_text(filename, encoding)) as f: - for _ in range(offset): - f.readline() - for line in f: - if 0 < max_num <= cnt: - break - item_list.append(prefix + line.rstrip('\n\r')) - cnt += 1 - return item_list - - -def dict_from_file(filename, - key_type=str, - encoding='utf-8', - file_client_args=None): - """Load a text file and parse the content as a dict. - - Each line of the text file will be two or more columns split by - whitespaces or tabs. The first column will be parsed as dict keys, and - the following columns will be parsed as dict values. - - Note: - In v1.3.16 and later, ``dict_from_file`` supports loading a text file - which can be storaged in different backends and parsing the content as - a dict. - - Args: - filename(str): Filename. - key_type(type): Type of the dict keys. str is user by default and - type conversion will be performed if specified. - encoding (str): Encoding used to open the file. Default utf-8. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> dict_from_file('/path/of/your/file') # disk - {'key1': 'value1', 'key2': 'value2'} - >>> dict_from_file('s3://path/of/your/file') # ceph or petrel - {'key1': 'value1', 'key2': 'value2'} - - Returns: - dict: The parsed contents. - """ - mapping = {} - file_client = FileClient.infer_client(file_client_args, filename) - with StringIO(file_client.get_text(filename, encoding)) as f: - for line in f: - items = line.rstrip('\n').split() - assert len(items) >= 2 - key = key_type(items[0]) - val = items[1:] if len(items) > 2 else items[1] - mapping[key] = val - return mapping diff --git a/cv/detection/yolof/pytorch/mmcv/image/__init__.py b/cv/detection/yolof/pytorch/mmcv/image/__init__.py deleted file mode 100755 index d0051d60..00000000 --- a/cv/detection/yolof/pytorch/mmcv/image/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr, - gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert, - rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb) -from .geometric import (cutout, imcrop, imflip, imflip_, impad, - impad_to_multiple, imrescale, imresize, imresize_like, - imresize_to_multiple, imrotate, imshear, imtranslate, - rescale_size) -from .io import imfrombytes, imread, imwrite, supported_backends, use_backend -from .misc import tensor2imgs -from .photometric import (adjust_brightness, adjust_color, adjust_contrast, - adjust_lighting, adjust_sharpness, auto_contrast, - clahe, imdenormalize, imequalize, iminvert, - imnormalize, imnormalize_, lut_transform, posterize, - solarize) - -__all__ = [ - 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb', - 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale', - 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size', - 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate', - 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend', - 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize', - 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr', - 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize', - 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe', - 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting' -] diff --git a/cv/detection/yolof/pytorch/mmcv/image/colorspace.py b/cv/detection/yolof/pytorch/mmcv/image/colorspace.py deleted file mode 100755 index 81453395..00000000 --- a/cv/detection/yolof/pytorch/mmcv/image/colorspace.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np - - -def imconvert(img, src, dst): - """Convert an image from the src colorspace to dst colorspace. - - Args: - img (ndarray): The input image. - src (str): The source colorspace, e.g., 'rgb', 'hsv'. - dst (str): The destination colorspace, e.g., 'rgb', 'hsv'. - - Returns: - ndarray: The converted image. - """ - code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}') - out_img = cv2.cvtColor(img, code) - return out_img - - -def bgr2gray(img, keepdim=False): - """Convert a BGR image to grayscale image. - - Args: - img (ndarray): The input image. - keepdim (bool): If False (by default), then return the grayscale image - with 2 dims, otherwise 3 dims. - - Returns: - ndarray: The converted grayscale image. - """ - out_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - if keepdim: - out_img = out_img[..., None] - return out_img - - -def rgb2gray(img, keepdim=False): - """Convert a RGB image to grayscale image. - - Args: - img (ndarray): The input image. - keepdim (bool): If False (by default), then return the grayscale image - with 2 dims, otherwise 3 dims. - - Returns: - ndarray: The converted grayscale image. - """ - out_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) - if keepdim: - out_img = out_img[..., None] - return out_img - - -def gray2bgr(img): - """Convert a grayscale image to BGR image. - - Args: - img (ndarray): The input image. - - Returns: - ndarray: The converted BGR image. - """ - img = img[..., None] if img.ndim == 2 else img - out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - return out_img - - -def gray2rgb(img): - """Convert a grayscale image to RGB image. - - Args: - img (ndarray): The input image. - - Returns: - ndarray: The converted RGB image. 
- """ - img = img[..., None] if img.ndim == 2 else img - out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) - return out_img - - -def _convert_input_type_range(img): - """Convert the type and range of the input image. - - It converts the input image to np.float32 type and range of [0, 1]. - It is mainly used for pre-processing the input image in colorspace - conversion functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - (ndarray): The converted image with type of np.float32 and range of - [0, 1]. - """ - img_type = img.dtype - img = img.astype(np.float32) - if img_type == np.float32: - pass - elif img_type == np.uint8: - img /= 255. - else: - raise TypeError('The img type should be np.float32 or np.uint8, ' - f'but got {img_type}') - return img - - -def _convert_output_type_range(img, dst_type): - """Convert the type and range of the image according to dst_type. - - It converts the image to desired type and range. If `dst_type` is np.uint8, - images will be converted to np.uint8 type with range [0, 255]. If - `dst_type` is np.float32, it converts the image to np.float32 type with - range [0, 1]. - It is mainly used for post-processing images in colorspace conversion - functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The image to be converted with np.float32 type and - range [0, 255]. - dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it - converts the image to np.uint8 type with range [0, 255]. If - dst_type is np.float32, it converts the image to np.float32 type - with range [0, 1]. - - Returns: - (ndarray): The converted image with desired type and range. - """ - if dst_type not in (np.uint8, np.float32): - raise TypeError('The dst_type should be np.float32 or np.uint8, ' - f'but got {dst_type}') - if dst_type == np.uint8: - img = img.round() - else: - img /= 255. - return img.astype(dst_type) - - -def rgb2ycbcr(img, y_only=False): - """Convert a RGB image to YCbCr image. - - This function produces the same results as Matlab's `rgb2ycbcr` function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [65.481, 128.553, 24.966]) + 16.0 - else: - out_img = np.matmul( - img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def bgr2ycbcr(img, y_only=False): - """Convert a BGR image to YCbCr image. - - The bgr version of rgb2ycbcr. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`. 
- In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0 - else: - out_img = np.matmul( - img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2rgb(img): - """Convert a YCbCr image to RGB image. - - This function produces the same results as Matlab's ycbcr2rgb function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted RGB image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], - [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [ - -222.921, 135.576, -276.836 - ] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2bgr(img): - """Convert a YCbCr image to BGR image. - - The bgr version of ycbcr2rgb. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted BGR image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], - [0.00791071, -0.00153632, 0], - [0, -0.00318811, 0.00625893]]) * 255.0 + [ - -276.836, 135.576, -222.921 - ] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def convert_color_factory(src, dst): - - code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}') - - def convert_color(img): - out_img = cv2.cvtColor(img, code) - return out_img - - convert_color.__doc__ = f"""Convert a {src.upper()} image to {dst.upper()} - image. - - Args: - img (ndarray or str): The input image. - - Returns: - ndarray: The converted {dst.upper()} image. 
- """ - - return convert_color - - -bgr2rgb = convert_color_factory('bgr', 'rgb') - -rgb2bgr = convert_color_factory('rgb', 'bgr') - -bgr2hsv = convert_color_factory('bgr', 'hsv') - -hsv2bgr = convert_color_factory('hsv', 'bgr') - -bgr2hls = convert_color_factory('bgr', 'hls') - -hls2bgr = convert_color_factory('hls', 'bgr') diff --git a/cv/detection/yolof/pytorch/mmcv/image/geometric.py b/cv/detection/yolof/pytorch/mmcv/image/geometric.py deleted file mode 100755 index cf97c201..00000000 --- a/cv/detection/yolof/pytorch/mmcv/image/geometric.py +++ /dev/null @@ -1,728 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers - -import cv2 -import numpy as np - -from ..utils import to_2tuple -from .io import imread_backend - -try: - from PIL import Image -except ImportError: - Image = None - - -def _scale_size(size, scale): - """Rescale a size by a ratio. - - Args: - size (tuple[int]): (w, h). - scale (float | tuple(float)): Scaling factor. - - Returns: - tuple[int]: scaled size. - """ - if isinstance(scale, (float, int)): - scale = (scale, scale) - w, h = size - return int(w * float(scale[0]) + 0.5), int(h * float(scale[1]) + 0.5) - - -cv2_interp_codes = { - 'nearest': cv2.INTER_NEAREST, - 'bilinear': cv2.INTER_LINEAR, - 'bicubic': cv2.INTER_CUBIC, - 'area': cv2.INTER_AREA, - 'lanczos': cv2.INTER_LANCZOS4 -} - -if Image is not None: - pillow_interp_codes = { - 'nearest': Image.NEAREST, - 'bilinear': Image.BILINEAR, - 'bicubic': Image.BICUBIC, - 'box': Image.BOX, - 'lanczos': Image.LANCZOS, - 'hamming': Image.HAMMING - } - - -def imresize(img, - size, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image to a given size. - - Args: - img (ndarray): The input image. - size (tuple[int]): Target size (w, h). - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if backend is None: - backend = imread_backend - if backend not in ['cv2', 'pillow']: - raise ValueError(f'backend: {backend} is not supported for resize.' - f"Supported backends are 'cv2', 'pillow'") - - if backend == 'pillow': - assert img.dtype == np.uint8, 'Pillow backend only support uint8 type' - pil_image = Image.fromarray(img) - pil_image = pil_image.resize(size, pillow_interp_codes[interpolation]) - resized_img = np.array(pil_image) - else: - resized_img = cv2.resize( - img, size, dst=out, interpolation=cv2_interp_codes[interpolation]) - if not return_scale: - return resized_img - else: - w_scale = size[0] / w - h_scale = size[1] / h - return resized_img, w_scale, h_scale - - -def imresize_to_multiple(img, - divisor, - size=None, - scale_factor=None, - keep_ratio=False, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image according to a given size or scale factor and then rounds - up the the resized or rescaled image size to the nearest value that can be - divided by the divisor. - - Args: - img (ndarray): The input image. 
- divisor (int | tuple): Resized image size will be a multiple of - divisor. If divisor is a tuple, divisor should be - (w_divisor, h_divisor). - size (None | int | tuple[int]): Target size (w, h). Default: None. - scale_factor (None | float | tuple[float]): Multiplier for spatial - size. Should match input size if it is a tuple and the 2D style is - (w_scale_factor, h_scale_factor). Default: None. - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. Default: False. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if size is not None and scale_factor is not None: - raise ValueError('only one of size or scale_factor should be defined') - elif size is None and scale_factor is None: - raise ValueError('one of size or scale_factor should be defined') - elif size is not None: - size = to_2tuple(size) - if keep_ratio: - size = rescale_size((w, h), size, return_scale=False) - else: - size = _scale_size((w, h), scale_factor) - - divisor = to_2tuple(divisor) - size = tuple([int(np.ceil(s / d)) * d for s, d in zip(size, divisor)]) - resized_img, w_scale, h_scale = imresize( - img, - size, - return_scale=True, - interpolation=interpolation, - out=out, - backend=backend) - if return_scale: - return resized_img, w_scale, h_scale - else: - return resized_img - - -def imresize_like(img, - dst_img, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image to the same size of a given image. - - Args: - img (ndarray): The input image. - dst_img (ndarray): The target image. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = dst_img.shape[:2] - return imresize(img, (w, h), return_scale, interpolation, backend=backend) - - -def rescale_size(old_size, scale, return_scale=False): - """Calculate the new size to be rescaled to. - - Args: - old_size (tuple[int]): The old size (w, h) of image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image size. - - Returns: - tuple[int]: The new rescaled image size. 
- """ - w, h = old_size - if isinstance(scale, (float, int)): - if scale <= 0: - raise ValueError(f'Invalid scale {scale}, must be positive.') - scale_factor = scale - elif isinstance(scale, tuple): - max_long_edge = max(scale) - max_short_edge = min(scale) - scale_factor = min(max_long_edge / max(h, w), - max_short_edge / min(h, w)) - else: - raise TypeError( - f'Scale must be a number or tuple of int, but got {type(scale)}') - - new_size = _scale_size((w, h), scale_factor) - - if return_scale: - return new_size, scale_factor - else: - return new_size - - -def imrescale(img, - scale, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image while keeping the aspect ratio. - - Args: - img (ndarray): The input image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - ndarray: The rescaled image. - """ - h, w = img.shape[:2] - new_size, scale_factor = rescale_size((w, h), scale, return_scale=True) - rescaled_img = imresize( - img, new_size, interpolation=interpolation, backend=backend) - if return_scale: - return rescaled_img, scale_factor - else: - return rescaled_img - - -def imflip(img, direction='horizontal'): - """Flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image. - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return np.flip(img, axis=1) - elif direction == 'vertical': - return np.flip(img, axis=0) - else: - return np.flip(img, axis=(0, 1)) - - -def imflip_(img, direction='horizontal'): - """Inplace flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image (inplace). - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return cv2.flip(img, 1, img) - elif direction == 'vertical': - return cv2.flip(img, 0, img) - else: - return cv2.flip(img, -1, img) - - -def imrotate(img, - angle, - center=None, - scale=1.0, - border_value=0, - interpolation='bilinear', - auto_bound=False): - """Rotate an image. - - Args: - img (ndarray): Image to be rotated. - angle (float): Rotation angle in degrees, positive values mean - clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the rotation in - the source image. If not specified, the center of the image will be - used. - scale (float): Isotropic scale factor. - border_value (int): Border value. - interpolation (str): Same as :func:`resize`. - auto_bound (bool): Whether to adjust the image size to cover the whole - rotated image. - - Returns: - ndarray: The rotated image. 
- """ - if center is not None and auto_bound: - raise ValueError('`auto_bound` conflicts with `center`') - h, w = img.shape[:2] - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - assert isinstance(center, tuple) - - matrix = cv2.getRotationMatrix2D(center, -angle, scale) - if auto_bound: - cos = np.abs(matrix[0, 0]) - sin = np.abs(matrix[0, 1]) - new_w = h * sin + w * cos - new_h = h * cos + w * sin - matrix[0, 2] += (new_w - w) * 0.5 - matrix[1, 2] += (new_h - h) * 0.5 - w = int(np.round(new_w)) - h = int(np.round(new_h)) - rotated = cv2.warpAffine( - img, - matrix, (w, h), - flags=cv2_interp_codes[interpolation], - borderValue=border_value) - return rotated - - -def bbox_clip(bboxes, img_shape): - """Clip bboxes to fit the image shape. - - Args: - bboxes (ndarray): Shape (..., 4*k) - img_shape (tuple[int]): (height, width) of the image. - - Returns: - ndarray: Clipped bboxes. - """ - assert bboxes.shape[-1] % 4 == 0 - cmin = np.empty(bboxes.shape[-1], dtype=bboxes.dtype) - cmin[0::2] = img_shape[1] - 1 - cmin[1::2] = img_shape[0] - 1 - clipped_bboxes = np.maximum(np.minimum(bboxes, cmin), 0) - return clipped_bboxes - - -def bbox_scaling(bboxes, scale, clip_shape=None): - """Scaling bboxes w.r.t the box center. - - Args: - bboxes (ndarray): Shape(..., 4). - scale (float): Scaling factor. - clip_shape (tuple[int], optional): If specified, bboxes that exceed the - boundary will be clipped according to the given shape (h, w). - - Returns: - ndarray: Scaled bboxes. - """ - if float(scale) == 1.0: - scaled_bboxes = bboxes.copy() - else: - w = bboxes[..., 2] - bboxes[..., 0] + 1 - h = bboxes[..., 3] - bboxes[..., 1] + 1 - dw = (w * (scale - 1)) * 0.5 - dh = (h * (scale - 1)) * 0.5 - scaled_bboxes = bboxes + np.stack((-dw, -dh, dw, dh), axis=-1) - if clip_shape is not None: - return bbox_clip(scaled_bboxes, clip_shape) - else: - return scaled_bboxes - - -def imcrop(img, bboxes, scale=1.0, pad_fill=None): - """Crop image patches. - - 3 steps: scale the bboxes -> clip bboxes -> crop and pad. - - Args: - img (ndarray): Image to be cropped. - bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes. - scale (float, optional): Scale ratio of bboxes, the default value - 1.0 means no padding. - pad_fill (Number | list[Number]): Value to be filled for padding. - Default: None, which means no padding. - - Returns: - list[ndarray] | ndarray: The cropped image patches. - """ - chn = 1 if img.ndim == 2 else img.shape[2] - if pad_fill is not None: - if isinstance(pad_fill, (int, float)): - pad_fill = [pad_fill for _ in range(chn)] - assert len(pad_fill) == chn - - _bboxes = bboxes[None, ...] if bboxes.ndim == 1 else bboxes - scaled_bboxes = bbox_scaling(_bboxes, scale).astype(np.int32) - clipped_bbox = bbox_clip(scaled_bboxes, img.shape) - - patches = [] - for i in range(clipped_bbox.shape[0]): - x1, y1, x2, y2 = tuple(clipped_bbox[i, :]) - if pad_fill is None: - patch = img[y1:y2 + 1, x1:x2 + 1, ...] - else: - _x1, _y1, _x2, _y2 = tuple(scaled_bboxes[i, :]) - if chn == 1: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1) - else: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1, chn) - patch = np.array( - pad_fill, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - x_start = 0 if _x1 >= 0 else -_x1 - y_start = 0 if _y1 >= 0 else -_y1 - w = x2 - x1 + 1 - h = y2 - y1 + 1 - patch[y_start:y_start + h, x_start:x_start + w, - ...] = img[y1:y1 + h, x1:x1 + w, ...] 
- patches.append(patch) - - if bboxes.ndim == 1: - return patches[0] - else: - return patches - - -def impad(img, - *, - shape=None, - padding=None, - pad_val=0, - padding_mode='constant'): - """Pad the given image to a certain shape or pad on all sides with - specified padding mode and padding value. - - Args: - img (ndarray): Image to be padded. - shape (tuple[int]): Expected padding shape (h, w). Default: None. - padding (int or tuple[int]): Padding on each border. If a single int is - provided this is used to pad all borders. If tuple of length 2 is - provided this is the padding on left/right and top/bottom - respectively. If a tuple of length 4 is provided this is the - padding for the left, top, right and bottom borders respectively. - Default: None. Note that `shape` and `padding` can not be both - set. - pad_val (Number | Sequence[Number]): Values to be filled in padding - areas when padding_mode is 'constant'. Default: 0. - padding_mode (str): Type of padding. Should be: constant, edge, - reflect or symmetric. Default: constant. - - - constant: pads with a constant value, this value is specified - with pad_val. - - edge: pads with the last value at the edge of the image. - - reflect: pads with reflection of image without repeating the - last value on the edge. For example, padding [1, 2, 3, 4] - with 2 elements on both sides in reflect mode will result - in [3, 2, 1, 2, 3, 4, 3, 2]. - - symmetric: pads with reflection of image repeating the last - value on the edge. For example, padding [1, 2, 3, 4] with - 2 elements on both sides in symmetric mode will result in - [2, 1, 1, 2, 3, 4, 4, 3] - - Returns: - ndarray: The padded image. - """ - - assert (shape is not None) ^ (padding is not None) - if shape is not None: - padding = (0, 0, shape[1] - img.shape[1], shape[0] - img.shape[0]) - - # check pad_val - if isinstance(pad_val, tuple): - assert len(pad_val) == img.shape[-1] - elif not isinstance(pad_val, numbers.Number): - raise TypeError('pad_val must be a int or a tuple. ' - f'But received {type(pad_val)}') - - # check padding - if isinstance(padding, tuple) and len(padding) in [2, 4]: - if len(padding) == 2: - padding = (padding[0], padding[1], padding[0], padding[1]) - elif isinstance(padding, numbers.Number): - padding = (padding, padding, padding, padding) - else: - raise ValueError('Padding must be a int or a 2, or 4 element tuple.' - f'But received {padding}') - - # check padding mode - assert padding_mode in ['constant', 'edge', 'reflect', 'symmetric'] - - border_type = { - 'constant': cv2.BORDER_CONSTANT, - 'edge': cv2.BORDER_REPLICATE, - 'reflect': cv2.BORDER_REFLECT_101, - 'symmetric': cv2.BORDER_REFLECT - } - img = cv2.copyMakeBorder( - img, - padding[1], - padding[3], - padding[0], - padding[2], - border_type[padding_mode], - value=pad_val) - - return img - - -def impad_to_multiple(img, divisor, pad_val=0): - """Pad an image to ensure each edge to be multiple to some number. - - Args: - img (ndarray): Image to be padded. - divisor (int): Padded image edges will be multiple to divisor. - pad_val (Number | Sequence[Number]): Same as :func:`impad`. - - Returns: - ndarray: The padded image. - """ - pad_h = int(np.ceil(img.shape[0] / divisor)) * divisor - pad_w = int(np.ceil(img.shape[1] / divisor)) * divisor - return impad(img, shape=(pad_h, pad_w), pad_val=pad_val) - - -def cutout(img, shape, pad_val=0): - """Randomly cut out a rectangle from the original img. - - Args: - img (ndarray): Image to be cutout. - shape (int | tuple[int]): Expected cutout shape (h, w). 
If given as a - int, the value will be used for both h and w. - pad_val (int | float | tuple[int | float]): Values to be filled in the - cut area. Defaults to 0. - - Returns: - ndarray: The cutout image. - """ - - channels = 1 if img.ndim == 2 else img.shape[2] - if isinstance(shape, int): - cut_h, cut_w = shape, shape - else: - assert isinstance(shape, tuple) and len(shape) == 2, \ - f'shape must be a int or a tuple with length 2, but got type ' \ - f'{type(shape)} instead.' - cut_h, cut_w = shape - if isinstance(pad_val, (int, float)): - pad_val = tuple([pad_val] * channels) - elif isinstance(pad_val, tuple): - assert len(pad_val) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(pad_val), channels) - else: - raise TypeError(f'Invalid type {type(pad_val)} for `pad_val`') - - img_h, img_w = img.shape[:2] - y0 = np.random.uniform(img_h) - x0 = np.random.uniform(img_w) - - y1 = int(max(0, y0 - cut_h / 2.)) - x1 = int(max(0, x0 - cut_w / 2.)) - y2 = min(img_h, y1 + cut_h) - x2 = min(img_w, x1 + cut_w) - - if img.ndim == 2: - patch_shape = (y2 - y1, x2 - x1) - else: - patch_shape = (y2 - y1, x2 - x1, channels) - - img_cutout = img.copy() - patch = np.array( - pad_val, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - img_cutout[y1:y2, x1:x2, ...] = patch - - return img_cutout - - -def _get_shear_matrix(magnitude, direction='horizontal'): - """Generate the shear matrix for transformation. - - Args: - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - - Returns: - ndarray: The shear matrix with dtype float32. - """ - if direction == 'horizontal': - shear_matrix = np.float32([[1, magnitude, 0], [0, 1, 0]]) - elif direction == 'vertical': - shear_matrix = np.float32([[1, 0, 0], [magnitude, 1, 0]]) - return shear_matrix - - -def imshear(img, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear an image. - - Args: - img (ndarray): Image to be sheared with format (h, w) - or (h, w, c). - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The sheared image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`') - shear_matrix = _get_shear_matrix(magnitude, direction) - sheared = cv2.warpAffine( - img, - shear_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. shearing masks whose channels large - # than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. 
- borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return sheared - - -def _get_translate_matrix(offset, direction='horizontal'): - """Generate the translate matrix. - - Args: - offset (int | float): The offset used for translate. - direction (str): The translate direction, either - "horizontal" or "vertical". - - Returns: - ndarray: The translate matrix with dtype float32. - """ - if direction == 'horizontal': - translate_matrix = np.float32([[1, 0, offset], [0, 1, 0]]) - elif direction == 'vertical': - translate_matrix = np.float32([[1, 0, 0], [0, 1, offset]]) - return translate_matrix - - -def imtranslate(img, - offset, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Translate an image. - - Args: - img (ndarray): Image to be translated with format - (h, w) or (h, w, c). - offset (int | float): The offset used for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The translated image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`.') - translate_matrix = _get_translate_matrix(offset, direction) - translated = cv2.warpAffine( - img, - translate_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. translating masks whose channels - # large than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return translated diff --git a/cv/detection/yolof/pytorch/mmcv/image/io.py b/cv/detection/yolof/pytorch/mmcv/image/io.py deleted file mode 100755 index d47aaa84..00000000 --- a/cv/detection/yolof/pytorch/mmcv/image/io.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os.path as osp -from pathlib import Path - -import cv2 -import numpy as np -from cv2 import (IMREAD_COLOR, IMREAD_GRAYSCALE, IMREAD_IGNORE_ORIENTATION, - IMREAD_UNCHANGED) - -from mmcv.utils import check_file_exist, is_str, mkdir_or_exist - -try: - from turbojpeg import TJCS_RGB, TJPF_BGR, TJPF_GRAY, TurboJPEG -except ImportError: - TJCS_RGB = TJPF_GRAY = TJPF_BGR = TurboJPEG = None - -try: - from PIL import Image, ImageOps -except ImportError: - Image = None - -try: - import tifffile -except ImportError: - tifffile = None - -jpeg = None -supported_backends = ['cv2', 'turbojpeg', 'pillow', 'tifffile'] - -imread_flags = { - 'color': IMREAD_COLOR, - 'grayscale': IMREAD_GRAYSCALE, - 'unchanged': IMREAD_UNCHANGED, - 'color_ignore_orientation': IMREAD_IGNORE_ORIENTATION | IMREAD_COLOR, - 'grayscale_ignore_orientation': - IMREAD_IGNORE_ORIENTATION | IMREAD_GRAYSCALE -} - -imread_backend = 'cv2' - - -def use_backend(backend): - """Select a backend for image decoding. - - Args: - backend (str): The image decoding backend type. 
Options are `cv2`, - `pillow`, `turbojpeg` (see https://github.com/lilohuang/PyTurboJPEG) - and `tifffile`. `turbojpeg` is faster but it only supports `.jpeg` - file format. - """ - assert backend in supported_backends - global imread_backend - imread_backend = backend - if imread_backend == 'turbojpeg': - if TurboJPEG is None: - raise ImportError('`PyTurboJPEG` is not installed') - global jpeg - if jpeg is None: - jpeg = TurboJPEG() - elif imread_backend == 'pillow': - if Image is None: - raise ImportError('`Pillow` is not installed') - elif imread_backend == 'tifffile': - if tifffile is None: - raise ImportError('`tifffile` is not installed') - - -def _jpegflag(flag='color', channel_order='bgr'): - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'color': - if channel_order == 'bgr': - return TJPF_BGR - elif channel_order == 'rgb': - return TJCS_RGB - elif flag == 'grayscale': - return TJPF_GRAY - else: - raise ValueError('flag must be "color" or "grayscale"') - - -def _pillow2array(img, flag='color', channel_order='bgr'): - """Convert a pillow image to numpy array. - - Args: - img (:obj:`PIL.Image.Image`): The image loaded using PIL - flag (str): Flags specifying the color type of a loaded image, - candidates are 'color', 'grayscale' and 'unchanged'. - Default to 'color'. - channel_order (str): The channel order of the output image array, - candidates are 'bgr' and 'rgb'. Default to 'bgr'. - - Returns: - np.ndarray: The converted numpy array - """ - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'unchanged': - array = np.array(img) - if array.ndim >= 3 and array.shape[2] >= 3: # color image - array[:, :, :3] = array[:, :, (2, 1, 0)] # RGB to BGR - else: - # Handle exif orientation tag - if flag in ['color', 'grayscale']: - img = ImageOps.exif_transpose(img) - # If the image mode is not 'RGB', convert it to 'RGB' first. - if img.mode != 'RGB': - if img.mode != 'LA': - # Most formats except 'LA' can be directly converted to RGB - img = img.convert('RGB') - else: - # When the mode is 'LA', the default conversion will fill in - # the canvas with black, which sometimes shadows black objects - # in the foreground. - # - # Therefore, a random color (124, 117, 104) is used for canvas - img_rgba = img.convert('RGBA') - img = Image.new('RGB', img_rgba.size, (124, 117, 104)) - img.paste(img_rgba, mask=img_rgba.split()[3]) # 3 is alpha - if flag in ['color', 'color_ignore_orientation']: - array = np.array(img) - if channel_order != 'rgb': - array = array[:, :, ::-1] # RGB to BGR - elif flag in ['grayscale', 'grayscale_ignore_orientation']: - img = img.convert('L') - array = np.array(img) - else: - raise ValueError( - 'flag must be "color", "grayscale", "unchanged", ' - f'"color_ignore_orientation" or "grayscale_ignore_orientation"' - f' but got {flag}') - return array - - -def imread(img_or_path, flag='color', channel_order='bgr', backend=None): - """Read an image. - - Args: - img_or_path (ndarray or str or Path): Either a numpy array or str or - pathlib.Path. If it is a numpy array (loaded image), then - it will be returned as is. - flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale`, `unchanged`, - `color_ignore_orientation` and `grayscale_ignore_orientation`. 
- By default, `cv2` and `pillow` backend would rotate the image - according to its EXIF info unless called with `unchanged` or - `*_ignore_orientation` flags. `turbojpeg` and `tifffile` backend - always ignore image's EXIF info regardless of the flag. - The `turbojpeg` backend only supports `color` and `grayscale`. - channel_order (str): Order of channel, candidates are `bgr` and `rgb`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `tifffile`, `None`. - If backend is None, the global imread_backend specified by - ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if isinstance(img_or_path, Path): - img_or_path = str(img_or_path) - - if isinstance(img_or_path, np.ndarray): - return img_or_path - elif is_str(img_or_path): - check_file_exist(img_or_path, - f'img file does not exist: {img_or_path}') - if backend == 'turbojpeg': - with open(img_or_path, 'rb') as in_file: - img = jpeg.decode(in_file.read(), - _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - img = Image.open(img_or_path) - img = _pillow2array(img, flag, channel_order) - return img - elif backend == 'tifffile': - img = tifffile.imread(img_or_path) - return img - else: - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imread(img_or_path, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - else: - raise TypeError('"img" must be a numpy array or a str or ' - 'a pathlib.Path object') - - -def imfrombytes(content, flag='color', channel_order='bgr', backend=None): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. - flag (str): Same as :func:`imread`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `None`. If backend is None, the - global imread_backend specified by ``mmcv.use_backend()`` will be - used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if backend == 'turbojpeg': - img = jpeg.decode(content, _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - buff = io.BytesIO(content) - img = Image.open(buff) - img = _pillow2array(img, flag, channel_order) - return img - else: - img_np = np.frombuffer(content, np.uint8) - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imdecode(img_np, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. 
- """ - if auto_mkdir: - dir_name = osp.abspath(osp.dirname(file_path)) - mkdir_or_exist(dir_name) - return cv2.imwrite(file_path, img, params) diff --git a/cv/detection/yolof/pytorch/mmcv/image/misc.py b/cv/detection/yolof/pytorch/mmcv/image/misc.py deleted file mode 100755 index dfc4a9c6..00000000 --- a/cv/detection/yolof/pytorch/mmcv/image/misc.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - -import mmcv - -try: - import torch -except ImportError: - torch = None - - -def tensor2imgs(tensor, mean=(0, 0, 0), std=(1, 1, 1), to_rgb=True): - """Convert tensor to 3-channel images. - - Args: - tensor (torch.Tensor): Tensor that contains multiple images, shape ( - N, C, H, W). - mean (tuple[float], optional): Mean of images. Defaults to (0, 0, 0). - std (tuple[float], optional): Standard deviation of images. - Defaults to (1, 1, 1). - to_rgb (bool, optional): Whether the tensor was converted to RGB - format in the first place. If so, convert it back to BGR. - Defaults to True. - - Returns: - list[np.ndarray]: A list that contains multiple images. - """ - - if torch is None: - raise RuntimeError('pytorch is not installed') - assert torch.is_tensor(tensor) and tensor.ndim == 4 - assert len(mean) == 3 - assert len(std) == 3 - - num_imgs = tensor.size(0) - mean = np.array(mean, dtype=np.float32) - std = np.array(std, dtype=np.float32) - imgs = [] - for img_id in range(num_imgs): - img = tensor[img_id, ...].cpu().numpy().transpose(1, 2, 0) - img = mmcv.imdenormalize( - img, mean, std, to_bgr=to_rgb).astype(np.uint8) - imgs.append(np.ascontiguousarray(img)) - return imgs diff --git a/cv/detection/yolof/pytorch/mmcv/image/photometric.py b/cv/detection/yolof/pytorch/mmcv/image/photometric.py deleted file mode 100755 index 5085d012..00000000 --- a/cv/detection/yolof/pytorch/mmcv/image/photometric.py +++ /dev/null @@ -1,428 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np - -from ..utils import is_tuple_of -from .colorspace import bgr2gray, gray2bgr - - -def imnormalize(img, mean, std, to_rgb=True): - """Normalize an image with mean and std. - - Args: - img (ndarray): Image to be normalized. - mean (ndarray): The mean to be used for normalize. - std (ndarray): The std to be used for normalize. - to_rgb (bool): Whether to convert to rgb. - - Returns: - ndarray: The normalized image. - """ - img = img.copy().astype(np.float32) - return imnormalize_(img, mean, std, to_rgb) - - -def imnormalize_(img, mean, std, to_rgb=True): - """Inplace normalize an image with mean and std. - - Args: - img (ndarray): Image to be normalized. - mean (ndarray): The mean to be used for normalize. - std (ndarray): The std to be used for normalize. - to_rgb (bool): Whether to convert to rgb. - - Returns: - ndarray: The normalized image. 
- """ - # cv2 inplace normalization does not accept uint8 - assert img.dtype != np.uint8 - mean = np.float64(mean.reshape(1, -1)) - stdinv = 1 / np.float64(std.reshape(1, -1)) - if to_rgb: - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) # inplace - cv2.subtract(img, mean, img) # inplace - cv2.multiply(img, stdinv, img) # inplace - return img - - -def imdenormalize(img, mean, std, to_bgr=True): - assert img.dtype != np.uint8 - mean = mean.reshape(1, -1).astype(np.float64) - std = std.reshape(1, -1).astype(np.float64) - img = cv2.multiply(img, std) # make a copy - cv2.add(img, mean, img) # inplace - if to_bgr: - cv2.cvtColor(img, cv2.COLOR_RGB2BGR, img) # inplace - return img - - -def iminvert(img): - """Invert (negate) an image. - - Args: - img (ndarray): Image to be inverted. - - Returns: - ndarray: The inverted image. - """ - return np.full_like(img, 255) - img - - -def solarize(img, thr=128): - """Solarize an image (invert all pixel values above a threshold) - - Args: - img (ndarray): Image to be solarized. - thr (int): Threshold for solarizing (0 - 255). - - Returns: - ndarray: The solarized image. - """ - img = np.where(img < thr, img, 255 - img) - return img - - -def posterize(img, bits): - """Posterize an image (reduce the number of bits for each color channel) - - Args: - img (ndarray): Image to be posterized. - bits (int): Number of bits (1 to 8) to use for posterizing. - - Returns: - ndarray: The posterized image. - """ - shift = 8 - bits - img = np.left_shift(np.right_shift(img, shift), shift) - return img - - -def adjust_color(img, alpha=1, beta=None, gamma=0): - r"""It blends the source image and its gray image: - - .. math:: - output = img * alpha + gray\_img * beta + gamma - - Args: - img (ndarray): The input source image. - alpha (int | float): Weight for the source image. Default 1. - beta (int | float): Weight for the converted gray image. - If None, it's assigned the value (1 - `alpha`). - gamma (int | float): Scalar added to each sum. - Same as :func:`cv2.addWeighted`. Default 0. - - Returns: - ndarray: Colored image which has the same size and dtype as input. - """ - gray_img = bgr2gray(img) - gray_img = np.tile(gray_img[..., None], [1, 1, 3]) - if beta is None: - beta = 1 - alpha - colored_img = cv2.addWeighted(img, alpha, gray_img, beta, gamma) - if not colored_img.dtype == np.uint8: - # Note when the dtype of `img` is not the default `np.uint8` - # (e.g. np.float32), the value in `colored_img` got from cv2 - # is not guaranteed to be in range [0, 255], so here clip - # is needed. - colored_img = np.clip(colored_img, 0, 255) - return colored_img - - -def imequalize(img): - """Equalize the image histogram. - - This function applies a non-linear mapping to the input image, - in order to create a uniform distribution of grayscale values - in the output image. - - Args: - img (ndarray): Image to be equalized. - - Returns: - ndarray: The equalized image. - """ - - def _scale_channel(im, c): - """Scale the data in the corresponding channel.""" - im = im[:, :, c] - # Compute the histogram of the image channel. - histo = np.histogram(im, 256, (0, 255))[0] - # For computing the step, filter out the nonzeros. - nonzero_histo = histo[histo > 0] - step = (np.sum(nonzero_histo) - nonzero_histo[-1]) // 255 - if not step: - lut = np.array(range(256)) - else: - # Compute the cumulative sum, shifted by step // 2 - # and then normalized by step. - lut = (np.cumsum(histo) + (step // 2)) // step - # Shift lut, prepending with 0. 
- lut = np.concatenate([[0], lut[:-1]], 0) - # handle potential integer overflow - lut[lut > 255] = 255 - # If step is zero, return the original image. - # Otherwise, index from lut. - return np.where(np.equal(step, 0), im, lut[im]) - - # Scales each channel independently and then stacks - # the result. - s1 = _scale_channel(img, 0) - s2 = _scale_channel(img, 1) - s3 = _scale_channel(img, 2) - equalized_img = np.stack([s1, s2, s3], axis=-1) - return equalized_img.astype(img.dtype) - - -def adjust_brightness(img, factor=1.): - """Adjust image brightness. - - This function controls the brightness of an image. An - enhancement factor of 0.0 gives a black image. - A factor of 1.0 gives the original image. This function - blends the source image and the degenerated black image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be brightened. - factor (float): A value controls the enhancement. - Factor 1.0 returns the original image, lower - factors mean less color (brightness, contrast, - etc), and higher values more. Default 1. - - Returns: - ndarray: The brightened image. - """ - degenerated = np.zeros_like(img) - # Note manually convert the dtype to np.float32, to - # achieve as close results as PIL.ImageEnhance.Brightness. - # Set beta=1-factor, and gamma=0 - brightened_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - brightened_img = np.clip(brightened_img, 0, 255) - return brightened_img.astype(img.dtype) - - -def adjust_contrast(img, factor=1.): - """Adjust image contrast. - - This function controls the contrast of an image. An - enhancement factor of 0.0 gives a solid grey - image. A factor of 1.0 gives the original image. It - blends the source image and the degenerated mean image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be contrasted. BGR order. - factor (float): Same as :func:`mmcv.adjust_brightness`. - - Returns: - ndarray: The contrasted image. - """ - gray_img = bgr2gray(img) - hist = np.histogram(gray_img, 256, (0, 255))[0] - mean = round(np.sum(gray_img) / np.sum(hist)) - degenerated = (np.ones_like(img[..., 0]) * mean).astype(img.dtype) - degenerated = gray2bgr(degenerated) - contrasted_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - contrasted_img = np.clip(contrasted_img, 0, 255) - return contrasted_img.astype(img.dtype) - - -def auto_contrast(img, cutoff=0): - """Auto adjust image contrast. - - This function maximize (normalize) image contrast by first removing cutoff - percent of the lightest and darkest pixels from the histogram and remapping - the image so that the darkest pixel becomes black (0), and the lightest - becomes white (255). - - Args: - img (ndarray): Image to be contrasted. BGR order. - cutoff (int | float | tuple): The cutoff percent of the lightest and - darkest pixels to be removed. If given as tuple, it shall be - (low, high). Otherwise, the single value will be used for both. - Defaults to 0. - - Returns: - ndarray: The contrasted image. - """ - - def _auto_contrast_channel(im, c, cutoff): - im = im[:, :, c] - # Compute the histogram of the image channel. 
- histo = np.histogram(im, 256, (0, 255))[0] - # Remove cut-off percent pixels from histo - histo_sum = np.cumsum(histo) - cut_low = histo_sum[-1] * cutoff[0] // 100 - cut_high = histo_sum[-1] - histo_sum[-1] * cutoff[1] // 100 - histo_sum = np.clip(histo_sum, cut_low, cut_high) - cut_low - histo = np.concatenate([[histo_sum[0]], np.diff(histo_sum)], 0) - - # Compute mapping - low, high = np.nonzero(histo)[0][0], np.nonzero(histo)[0][-1] - # If all the values have been cut off, return the origin img - if low >= high: - return im - scale = 255.0 / (high - low) - offset = -low * scale - lut = np.array(range(256)) - lut = lut * scale + offset - lut = np.clip(lut, 0, 255) - return lut[im] - - if isinstance(cutoff, (int, float)): - cutoff = (cutoff, cutoff) - else: - assert isinstance(cutoff, tuple), 'cutoff must be of type int, ' \ - f'float or tuple, but got {type(cutoff)} instead.' - # Auto adjusts contrast for each channel independently and then stacks - # the result. - s1 = _auto_contrast_channel(img, 0, cutoff) - s2 = _auto_contrast_channel(img, 1, cutoff) - s3 = _auto_contrast_channel(img, 2, cutoff) - contrasted_img = np.stack([s1, s2, s3], axis=-1) - return contrasted_img.astype(img.dtype) - - -def adjust_sharpness(img, factor=1., kernel=None): - """Adjust image sharpness. - - This function controls the sharpness of an image. An - enhancement factor of 0.0 gives a blurred image. A - factor of 1.0 gives the original image. And a factor - of 2.0 gives a sharpened image. It blends the source - image and the degenerated mean image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be sharpened. BGR order. - factor (float): Same as :func:`mmcv.adjust_brightness`. - kernel (np.ndarray, optional): Filter kernel to be applied on the img - to obtain the degenerated img. Defaults to None. - - Note: - No value sanity check is enforced on the kernel set by users. So with - an inappropriate kernel, the ``adjust_sharpness`` may fail to perform - the function its name indicates but end up performing whatever - transform determined by the kernel. - - Returns: - ndarray: The sharpened image. - """ - - if kernel is None: - # adopted from PIL.ImageFilter.SMOOTH - kernel = np.array([[1., 1., 1.], [1., 5., 1.], [1., 1., 1.]]) / 13 - assert isinstance(kernel, np.ndarray), \ - f'kernel must be of type np.ndarray, but got {type(kernel)} instead.' - assert kernel.ndim == 2, \ - f'kernel must have a dimension of 2, but got {kernel.ndim} instead.' - - degenerated = cv2.filter2D(img, -1, kernel) - sharpened_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - sharpened_img = np.clip(sharpened_img, 0, 255) - return sharpened_img.astype(img.dtype) - - -def adjust_lighting(img, eigval, eigvec, alphastd=0.1, to_rgb=True): - """AlexNet-style PCA jitter. - - This data augmentation is proposed in `ImageNet Classification with Deep - Convolutional Neural Networks - `_. - - Args: - img (ndarray): Image to be adjusted lighting. BGR order. - eigval (ndarray): the eigenvalue of the convariance matrix of pixel - values, respectively. - eigvec (ndarray): the eigenvector of the convariance matrix of pixel - values, respectively. - alphastd (float): The standard deviation for distribution of alpha. - Defaults to 0.1 - to_rgb (bool): Whether to convert img to rgb. - - Returns: - ndarray: The adjusted image. 
- """ - assert isinstance(eigval, np.ndarray) and isinstance(eigvec, np.ndarray), \ - f'eigval and eigvec should both be of type np.ndarray, got ' \ - f'{type(eigval)} and {type(eigvec)} instead.' - - assert eigval.ndim == 1 and eigvec.ndim == 2 - assert eigvec.shape == (3, eigval.shape[0]) - n_eigval = eigval.shape[0] - assert isinstance(alphastd, float), 'alphastd should be of type float, ' \ - f'got {type(alphastd)} instead.' - - img = img.copy().astype(np.float32) - if to_rgb: - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) # inplace - - alpha = np.random.normal(0, alphastd, n_eigval) - alter = eigvec \ - * np.broadcast_to(alpha.reshape(1, n_eigval), (3, n_eigval)) \ - * np.broadcast_to(eigval.reshape(1, n_eigval), (3, n_eigval)) - alter = np.broadcast_to(alter.sum(axis=1).reshape(1, 1, 3), img.shape) - img_adjusted = img + alter - return img_adjusted - - -def lut_transform(img, lut_table): - """Transform array by look-up table. - - The function lut_transform fills the output array with values from the - look-up table. Indices of the entries are taken from the input array. - - Args: - img (ndarray): Image to be transformed. - lut_table (ndarray): look-up table of 256 elements; in case of - multi-channel input array, the table should either have a single - channel (in this case the same table is used for all channels) or - the same number of channels as in the input array. - - Returns: - ndarray: The transformed image. - """ - assert isinstance(img, np.ndarray) - assert 0 <= np.min(img) and np.max(img) <= 255 - assert isinstance(lut_table, np.ndarray) - assert lut_table.shape == (256, ) - - return cv2.LUT(np.array(img, dtype=np.uint8), lut_table) - - -def clahe(img, clip_limit=40.0, tile_grid_size=(8, 8)): - """Use CLAHE method to process the image. - - See `ZUIDERVELD,K. Contrast Limited Adaptive Histogram Equalization[J]. - Graphics Gems, 1994:474-485.` for more information. - - Args: - img (ndarray): Image to be processed. - clip_limit (float): Threshold for contrast limiting. Default: 40.0. - tile_grid_size (tuple[int]): Size of grid for histogram equalization. - Input image will be divided into equally sized rectangular tiles. - It defines the number of tiles in row and column. Default: (8, 8). - - Returns: - ndarray: The processed image. 
- """ - assert isinstance(img, np.ndarray) - assert img.ndim == 2 - assert isinstance(clip_limit, (float, int)) - assert is_tuple_of(tile_grid_size, int) - assert len(tile_grid_size) == 2 - - clahe = cv2.createCLAHE(clip_limit, tile_grid_size) - return clahe.apply(np.array(img, dtype=np.uint8)) diff --git a/cv/detection/yolof/pytorch/mmcv/model_zoo/deprecated.json b/cv/detection/yolof/pytorch/mmcv/model_zoo/deprecated.json deleted file mode 100755 index 25cf6f28..00000000 --- a/cv/detection/yolof/pytorch/mmcv/model_zoo/deprecated.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "resnet50_caffe": "detectron/resnet50_caffe", - "resnet50_caffe_bgr": "detectron2/resnet50_caffe_bgr", - "resnet101_caffe": "detectron/resnet101_caffe", - "resnet101_caffe_bgr": "detectron2/resnet101_caffe_bgr" -} diff --git a/cv/detection/yolof/pytorch/mmcv/model_zoo/mmcls.json b/cv/detection/yolof/pytorch/mmcv/model_zoo/mmcls.json deleted file mode 100755 index bdb311d9..00000000 --- a/cv/detection/yolof/pytorch/mmcv/model_zoo/mmcls.json +++ /dev/null @@ -1,31 +0,0 @@ -{ - "vgg11": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_batch256_imagenet_20210208-4271cd6c.pth", - "vgg13": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_batch256_imagenet_20210208-4d1d6080.pth", - "vgg16": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_batch256_imagenet_20210208-db26f1a5.pth", - "vgg19": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_batch256_imagenet_20210208-e6920e4a.pth", - "vgg11_bn": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_bn_batch256_imagenet_20210207-f244902c.pth", - "vgg13_bn": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_bn_batch256_imagenet_20210207-1a8b7864.pth", - "vgg16_bn": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_bn_batch256_imagenet_20210208-7e55cd29.pth", - "vgg19_bn": "https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_bn_batch256_imagenet_20210208-da620c4f.pth", - "resnet18": "https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_batch256_imagenet_20200708-34ab8f90.pth", - "resnet34": "https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_batch256_imagenet_20200708-32ffb4f7.pth", - "resnet50": "https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_batch256_imagenet_20200708-cfb998bf.pth", - "resnet101": "https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_batch256_imagenet_20200708-753f3608.pth", - "resnet152": "https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_batch256_imagenet_20200708-ec25b1f9.pth", - "resnet50_v1d": "https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d50_batch256_imagenet_20200708-1ad0ce94.pth", - "resnet101_v1d": "https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d101_batch256_imagenet_20200708-9cb302ef.pth", - "resnet152_v1d": "https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d152_batch256_imagenet_20200708-e79cb6a2.pth", - "resnext50_32x4d": "https://download.openmmlab.com/mmclassification/v0/resnext/resnext50_32x4d_b32x8_imagenet_20210429-56066e27.pth", - "resnext101_32x4d": "https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_b32x8_imagenet_20210506-e0fa3dd5.pth", - "resnext101_32x8d": "https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_b32x8_imagenet_20210506-23a247d5.pth", - "resnext152_32x4d": 
"https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_b32x8_imagenet_20210524-927787be.pth", - "se-resnet50": "https://download.openmmlab.com/mmclassification/v0/se-resnet/se-resnet50_batch256_imagenet_20200804-ae206104.pth", - "se-resnet101": "https://download.openmmlab.com/mmclassification/v0/se-resnet/se-resnet101_batch256_imagenet_20200804-ba5b51d4.pth", - "resnest50": "https://download.openmmlab.com/mmclassification/v0/resnest/resnest50_imagenet_converted-1ebf0afe.pth", - "resnest101": "https://download.openmmlab.com/mmclassification/v0/resnest/resnest101_imagenet_converted-032caa52.pth", - "resnest200": "https://download.openmmlab.com/mmclassification/v0/resnest/resnest200_imagenet_converted-581a60f2.pth", - "resnest269": "https://download.openmmlab.com/mmclassification/v0/resnest/resnest269_imagenet_converted-59930960.pth", - "shufflenet_v1": "https://download.openmmlab.com/mmclassification/v0/shufflenet_v1/shufflenet_v1_batch1024_imagenet_20200804-5d6cec73.pth", - "shufflenet_v2": "https://download.openmmlab.com/mmclassification/v0/shufflenet_v2/shufflenet_v2_batch1024_imagenet_20200812-5bf4721e.pth", - "mobilenet_v2": "https://download.openmmlab.com/mmclassification/v0/mobilenet_v2/mobilenet_v2_batch256_imagenet_20200708-3b2dc3af.pth" -} diff --git a/cv/detection/yolof/pytorch/mmcv/model_zoo/model.json b/cv/detection/yolof/pytorch/mmcv/model_zoo/model.json deleted file mode 100644 index 8311db4f..00000000 --- a/cv/detection/yolof/pytorch/mmcv/model_zoo/model.json +++ /dev/null @@ -1,50 +0,0 @@ -{ - "vgg16_caffe": "https://download.openmmlab.com/pretrain/third_party/vgg16_caffe-292e1171.pth", - "detectron/resnet50_caffe": "https://download.openmmlab.com/pretrain/third_party/resnet50_caffe-788b5fa3.pth", - "detectron2/resnet50_caffe": "https://download.openmmlab.com/pretrain/third_party/resnet50_msra-5891d200.pth", - "detectron/resnet101_caffe": "https://download.openmmlab.com/pretrain/third_party/resnet101_caffe-3ad79236.pth", - "detectron2/resnet101_caffe": "https://download.openmmlab.com/pretrain/third_party/resnet101_msra-6cc46731.pth", - "detectron2/resnext101_32x8d": "https://download.openmmlab.com/pretrain/third_party/resnext101_32x8d-1516f1aa.pth", - "resnext50_32x4d": "https://download.openmmlab.com/pretrain/third_party/resnext50-32x4d-0ab1a123.pth", - "resnext101_32x4d": "https://download.openmmlab.com/pretrain/third_party/resnext101_32x4d-a5af3160.pth", - "resnext101_64x4d": "https://download.openmmlab.com/pretrain/third_party/resnext101_64x4d-ee2c6f71.pth", - "contrib/resnet50_gn": "https://download.openmmlab.com/pretrain/third_party/resnet50_gn_thangvubk-ad1730dd.pth", - "detectron/resnet50_gn": "https://download.openmmlab.com/pretrain/third_party/resnet50_gn-9186a21c.pth", - "detectron/resnet101_gn": "https://download.openmmlab.com/pretrain/third_party/resnet101_gn-cac0ab98.pth", - "jhu/resnet50_gn_ws": "https://download.openmmlab.com/pretrain/third_party/resnet50_gn_ws-15beedd8.pth", - "jhu/resnet101_gn_ws": "https://download.openmmlab.com/pretrain/third_party/resnet101_gn_ws-3e3c308c.pth", - "jhu/resnext50_32x4d_gn_ws": "https://download.openmmlab.com/pretrain/third_party/resnext50_32x4d_gn_ws-0d87ac85.pth", - "jhu/resnext101_32x4d_gn_ws": "https://download.openmmlab.com/pretrain/third_party/resnext101_32x4d_gn_ws-34ac1a9e.pth", - "jhu/resnext50_32x4d_gn": "https://download.openmmlab.com/pretrain/third_party/resnext50_32x4d_gn-c7e8b754.pth", - "jhu/resnext101_32x4d_gn": 
"https://download.openmmlab.com/pretrain/third_party/resnext101_32x4d_gn-ac3bb84e.pth", - "msra/hrnetv2_w18_small": "https://download.openmmlab.com/pretrain/third_party/hrnetv2_w18_small-b5a04e21.pth", - "msra/hrnetv2_w18": "https://download.openmmlab.com/pretrain/third_party/hrnetv2_w18-00eb2006.pth", - "msra/hrnetv2_w32": "https://download.openmmlab.com/pretrain/third_party/hrnetv2_w32-dc9eeb4f.pth", - "msra/hrnetv2_w40": "https://download.openmmlab.com/pretrain/third_party/hrnetv2_w40-ed0b031c.pth", - "msra/hrnetv2_w48": "https://download.openmmlab.com/pretrain/third_party/hrnetv2_w48-d2186c55.pth", - "bninception_caffe": "https://download.openmmlab.com/pretrain/third_party/bn_inception_caffe-ed2e8665.pth", - "kin400/i3d_r50_f32s2_k400": "https://download.openmmlab.com/pretrain/third_party/i3d_r50_f32s2_k400-2c57e077.pth", - "kin400/nl3d_r50_f32s2_k400": "https://download.openmmlab.com/pretrain/third_party/nl3d_r50_f32s2_k400-fa7e7caa.pth", - "res2net101_v1d_26w_4s": "https://download.openmmlab.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth", - "regnetx_400mf": "https://download.openmmlab.com/pretrain/third_party/regnetx_400mf-a5b10d96.pth", - "regnetx_800mf": "https://download.openmmlab.com/pretrain/third_party/regnetx_800mf-1f4be4c7.pth", - "regnetx_1.6gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_1.6gf-5791c176.pth", - "regnetx_3.2gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_3.2gf-c2599b0f.pth", - "regnetx_4.0gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_4.0gf-a88f671e.pth", - "regnetx_6.4gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_6.4gf-006af45d.pth", - "regnetx_8.0gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_8.0gf-3c68abe7.pth", - "regnetx_12gf": "https://download.openmmlab.com/pretrain/third_party/regnetx_12gf-4c2a3350.pth", - "resnet18_v1c": "https://download.openmmlab.com/pretrain/third_party/resnet18_v1c-b5776b93.pth", - "resnet50_v1c": "https://download.openmmlab.com/pretrain/third_party/resnet50_v1c-2cccc1ad.pth", - "resnet101_v1c": "https://download.openmmlab.com/pretrain/third_party/resnet101_v1c-e67eebb6.pth", - "mmedit/vgg16": "https://download.openmmlab.com/mmediting/third_party/vgg_state_dict.pth", - "mmedit/res34_en_nomixup": "https://download.openmmlab.com/mmediting/third_party/model_best_resnet34_En_nomixup.pth", - "mmedit/mobilenet_v2": "https://download.openmmlab.com/mmediting/third_party/mobilenet_v2.pth", - "contrib/mobilenet_v3_large": "https://download.openmmlab.com/pretrain/third_party/mobilenet_v3_large-bc2c3fd3.pth", - "contrib/mobilenet_v3_small": "https://download.openmmlab.com/pretrain/third_party/mobilenet_v3_small-47085aa1.pth", - "resnest50": "https://download.openmmlab.com/pretrain/third_party/resnest50_d2-7497a55b.pth", - "resnest101": "https://download.openmmlab.com/pretrain/third_party/resnest101_d2-f3b931b2.pth", - "resnest200": "https://download.openmmlab.com/pretrain/third_party/resnest200_d2-ca88e41f.pth", - "darknet53": "https://download.openmmlab.com/pretrain/third_party/darknet53-a628ea1b.pth", - "mmdet/mobilenet_v2": "https://download.openmmlab.com/mmdetection/v2.0/third_party/mobilenet_v2_batch256_imagenet-ff34753d.pth" -} diff --git a/cv/detection/yolof/pytorch/mmcv/ops/__init__.py b/cv/detection/yolof/pytorch/mmcv/ops/__init__.py deleted file mode 100755 index a35ad38f..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. - -from .info import (get_compiler_version, get_compiling_cuda_version) -from .nms import nms, batched_nms -from .focal_loss import SigmoidFocalLoss, sigmoid_focal_loss -__all__ = [ - 'SigmoidFocalLoss', - 'sigmoid_focal_loss', - 'get_compiler_version', 'get_compiling_cuda_version', - 'batched_nms', 'nms' -] diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/README.md b/cv/detection/yolof/pytorch/mmcv/ops/csrc/README.md deleted file mode 100755 index 91c237f3..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/README.md +++ /dev/null @@ -1,169 +0,0 @@ -# Code Structure of CUDA operators - -This folder contains all non-python code for MMCV custom ops. Please follow the same architecture if you want to add new ops. - -## Directories Tree - -```folder -. -├── common -│ ├── box_iou_rotated_utils.hpp -│ ├── parrots_cpp_helper.hpp -│ ├── parrots_cuda_helper.hpp -│ ├── pytorch_cpp_helper.hpp -│ ├── pytorch_cuda_helper.hpp -│   └── cuda -│   ├── common_cuda_helper.hpp -│   ├── parrots_cudawarpfunction.cuh -│   ├── ... -│   └── ops_cuda_kernel.cuh -├── onnxruntime -│   ├── onnxruntime_register.h -│   ├── onnxruntime_session_options_config_keys.h -│   ├── ort_mmcv_utils.h -│   ├── ... -│   ├── onnx_ops.h -│   └── cpu -│ ├── onnxruntime_register.cpp -│      ├── ... -│      └── onnx_ops_impl.cpp -├── parrots -│   ├── ... -│   ├── ops.cpp -│   ├── ops_parrots.cpp -│   └── ops_pytorch.h -├── pytorch -│   ├── info.cpp -│   ├── pybind.cpp -│   ├── ... -│   ├── ops.cpp -│   └── cuda -│      ├── ... -│      └── ops_cuda.cu -└── tensorrt - ├── trt_cuda_helper.cuh - ├── trt_plugin_helper.hpp - ├── trt_plugin.hpp - ├── trt_serialize.hpp - ├── ... - ├── trt_ops.hpp - └── plugins -    ├── trt_cuda_helper.cu -    ├── trt_plugin.cpp -    ├── ... -    ├── trt_ops.cpp -    └── trt_ops_kernel.cu -``` - -## Components - -- `common`: This directory contains all tools and shared codes. - - `cuda`: The cuda kernels which can be shared by all backends. **HIP** kernel is also here since they have similar syntax. -- `onnxruntime`: **ONNX Runtime** support for custom ops. - - `cpu`: CPU implementation of supported ops. -- `parrots`: **Parrots** is a deep learning frame for model training and inference. Parrots custom ops are placed in this directory. -- `pytorch`: **PyTorch** custom ops are supported by binding C++ to Python with **pybind11**. The ops implementation and binding codes are placed in this directory. - - `cuda`: This directory contains cuda kernel launchers, which feed memory pointers of tensor to the cuda kernel in `common/cuda`. The launchers provide c++ interface of cuda implementation of corresponding custom ops. -- `tensorrt`: **TensorRT** support for custom ops. - - `plugins`: This directory contains the implementation of the supported custom ops. Some ops might also use shared cuda kernel in `common/cuda`. - -## How to add new PyTorch ops? - -1. (Optional) Add shared kernel in `common` to support special hardware platform. - - ```c++ - // src/common/cuda/new_ops_cuda_kernel.cuh - - template - __global__ void new_ops_forward_cuda_kernel(const T* input, T* output, ...) { - // forward here - } - - ``` - - Add cuda kernel launcher in `pytorch/cuda`. - - ```c++ - // src/pytorch/cuda - #include - - void NewOpsForwardCUDAKernelLauncher(Tensor input, Tensor output, ...){ - // initialize - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - ... 
- AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "new_ops_forward_cuda_kernel", ([&] { - new_ops_forward_cuda_kernel - <<>>( - input.data_ptr(), output.data_ptr(),...); - })); - AT_CUDA_CHECK(cudaGetLastError()); - } - ``` - -2. Add ops implementation in `pytorch` directory. Select different implementations according to device type. - - ```c++ - // src/pytorch/new_ops.cpp - #ifdef MMCV_WITH_CUDA - Tensor new_ops_forward_cuda(Tensor input, Tensor output, ...){ - // implement cuda forward here - // use `NewOpsForwardCUDAKernelLauncher` here - } - #else - - Tensor new_ops_forward_cpu(Tensor input, Tensor output, ...){ - // implement cpu forward here - } - - ... - - Tensor new_ops_forward(Tensor input, Tensor output, ...){ - // select implementation by input device type - if (boxes.device().is_cuda()) { - #ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(output); - return new_ops_forward_cuda(input, output, ...); - #else - AT_ERROR("new ops is not compiled with GPU support"); - #endif - } else { - CHECK_CPU_INPUT(input); - CHECK_CPU_INPUT(output); - return new_ops_forward_cpu(input, output, ...); - } - } - ``` - -3. Binding the implementation in `pytorch/pybind.cpp` - - ```c++ - // src/pytorch/pybind.cpp - - ... - - Tensor new_ops_forward(Tensor input, Tensor output, ...); - - ... - - // bind with pybind11 - m.def("new_ops_forward", &new_ops_forward, "new_ops_forward", - py::arg("input"), py::arg("output"), ...); - - ... - - ``` - -4. Build MMCV again. Enjoy new ops in python - - ```python - from ..utils import ext_loader - ext_module = ext_loader.load_ext('_ext', ['new_ops_forward']) - - ... - - ext_module.new_ops_forward(input, output, ...) - - ``` diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp b/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp deleted file mode 100755 index a1e926ad..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp +++ /dev/null @@ -1,112 +0,0 @@ -#ifndef COMMON_CUDA_HELPER -#define COMMON_CUDA_HELPER - -#include - -#define CUDA_1D_KERNEL_LOOP(i, n) \ - for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < (n); \ - i += blockDim.x * gridDim.x) - -#define THREADS_PER_BLOCK 512 - -#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0)) - -inline int GET_BLOCKS(const int N) { - int optimal_block_num = (N + THREADS_PER_BLOCK - 1) / THREADS_PER_BLOCK; - int max_block_num = 4096; - return std::min(optimal_block_num, max_block_num); -} - -template -__device__ T bilinear_interpolate(const T* input, const int height, - const int width, T y, T x, - const int index /* index for debug only*/) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) return 0; - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - int y_low = (int)y; - int x_low = (int)x; - int y_high; - int x_high; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. 
- lx; - // do bilinear interpolation - T v1 = input[y_low * width + x_low]; - T v2 = input[y_low * width + x_high]; - T v3 = input[y_high * width + x_low]; - T v4 = input[y_high * width + x_high]; - T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - - return val; -} - -template -__device__ void bilinear_interpolate_gradient( - const int height, const int width, T y, T x, T& w1, T& w2, T& w3, T& w4, - int& x_low, int& x_high, int& y_low, int& y_high, - const int index /* index for debug only*/) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - w1 = w2 = w3 = w4 = 0.; - x_low = x_high = y_low = y_high = -1; - return; - } - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - y_low = (int)y; - x_low = (int)x; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. - lx; - - // reference in forward - // T v1 = input[y_low * width + x_low]; - // T v2 = input[y_low * width + x_high]; - // T v3 = input[y_high * width + x_low]; - // T v4 = input[y_high * width + x_high]; - // T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - - w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - return; -} -#endif // COMMON_CUDA_HELPER diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh b/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh deleted file mode 100755 index 40a2f462..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh +++ /dev/null @@ -1,74 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef NMS_CUDA_KERNEL_CUH -#define NMS_CUDA_KERNEL_CUH - -#include -#ifdef MMCV_WITH_TRT -#include "common_cuda_helper.hpp" -#else // MMCV_WITH_TRT -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else // MMCV_USE_PARROTS -#include "pytorch_cuda_helper.hpp" -#endif // MMCV_USE_PARROTS -#endif // MMCV_WITH_TRT - -int const threadsPerBlock = sizeof(unsigned long long int) * 8; - -__device__ inline bool devIoU(float const *const a, float const *const b, - const int offset, const float threshold) { - float left = fmaxf(a[0], b[0]), right = fminf(a[2], b[2]); - float top = fmaxf(a[1], b[1]), bottom = fminf(a[3], b[3]); - float width = fmaxf(right - left + offset, 0.f), - height = fmaxf(bottom - top + offset, 0.f); - float interS = width * height; - float Sa = (a[2] - a[0] + offset) * (a[3] - a[1] + offset); - float Sb = (b[2] - b[0] + offset) * (b[3] - b[1] + offset); - return interS > threshold * (Sa + Sb - interS); -} - -__global__ void nms_cuda(const int n_boxes, const float iou_threshold, - const int offset, const float *dev_boxes, - unsigned long long *dev_mask) { - const int row_start = blockIdx.y; - const int col_start = blockIdx.x; - const int tid = threadIdx.x; - - if (row_start > col_start) return; - - const int row_size = - fminf(n_boxes - row_start * threadsPerBlock, threadsPerBlock); - const int col_size = - fminf(n_boxes - col_start * threadsPerBlock, threadsPerBlock); - - __shared__ float block_boxes[threadsPerBlock * 4]; - if (tid < col_size) { - block_boxes[tid * 4 + 0] = - dev_boxes[(threadsPerBlock * col_start + tid) * 4 + 0]; - block_boxes[tid * 4 + 1] = - dev_boxes[(threadsPerBlock * col_start + tid) * 4 + 1]; - block_boxes[tid * 4 + 2] = - dev_boxes[(threadsPerBlock * col_start + tid) * 4 + 2]; - block_boxes[tid * 4 + 3] = - dev_boxes[(threadsPerBlock * col_start + tid) * 4 + 3]; - } - __syncthreads(); - - if (tid < row_size) { - const int cur_box_idx = threadsPerBlock * row_start + tid; - const float *cur_box = dev_boxes + cur_box_idx * 4; - int i = 0; - unsigned long long int t = 0; - int start = 0; - if (row_start == col_start) { - start = tid + 1; - } - for (i = start; i < col_size; i++) { - if (devIoU(cur_box, block_boxes + i * 4, offset, iou_threshold)) { - t |= 1ULL << i; - } - } - dev_mask[cur_box_idx * gridDim.y + col_start] = t; - } -} -#endif // NMS_CUDA_KERNEL_CUH diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/sigmoid_focal_loss_cuda_kernel.cuh b/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/sigmoid_focal_loss_cuda_kernel.cuh deleted file mode 100755 index 1eb5f8fc..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/cuda/sigmoid_focal_loss_cuda_kernel.cuh +++ /dev/null @@ -1,71 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#ifndef SIGMOID_FOCAL_LOSS_CUDA_KERNEL_CUH -#define SIGMOID_FOCAL_LOSS_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void sigmoid_focal_loss_forward_cuda_kernel( - const int nthreads, const T* input, const int64_t* target, const T* weight, - T* output, const T gamma, const T alpha, const int num_classes) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int n = index / num_classes; - int c = index % num_classes; - - int64_t t = target[n]; - T flag_p = (t == c); - T flag_n = (t != c); - - // p = sigmoid(x) = 1. / 1. + expf(-x) - T p = (T)1. / ((T)1. + expf(-input[index])); - - // (1 - p)**gamma * log(p) - T term_p = pow(((T)1. 
- p), gamma) * log(max(p, (T)FLT_MIN)); - // p**gamma * log(1 - p) - T term_n = pow(p, gamma) * log(max((T)1. - p, (T)FLT_MIN)); - - output[index] = (T)0.; - output[index] += -flag_p * alpha * term_p; - output[index] += -flag_n * ((T)1. - alpha) * term_n; - if (weight != NULL) { - output[index] *= weight[t]; - } - } -} - -template -__global__ void sigmoid_focal_loss_backward_cuda_kernel( - const int nthreads, const T* input, const int64_t* target, const T* weight, - T* grad_input, const T gamma, const T alpha, const int num_classes) { - CUDA_1D_KERNEL_LOOP(index, nthreads) { - int n = index / num_classes; - int c = index % num_classes; - - int64_t t = target[n]; - T flag_p = (t == c); - T flag_n = (t != c); - - // p = sigmoid(x) = 1. / 1. + expf(-x) - T p = (T)1. / ((T)1. + exp(-input[index])); - - // (1 - p)**gamma * (1 - p - gamma*p*log(p)) - T term_p = pow(((T)1. - p), gamma) * - ((T)1. - p - (gamma * p * log(max(p, (T)FLT_MIN)))); - // p**gamma * (gamma * (1 - p) * log(1 - p) - p) - T term_n = pow(p, gamma) * - (gamma * ((T)1. - p) * log(max((T)1. - p, (T)FLT_MIN)) - p); - - grad_input[index] = (T)0.; - grad_input[index] += -flag_p * alpha * term_p; - grad_input[index] += -flag_n * ((T)1. - alpha) * term_n; - if (weight != NULL) { - grad_input[index] *= weight[t]; - } - } -} - -#endif // SIGMOID_FOCAL_LOSS_CUDA_KERNEL_CUH diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp b/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp deleted file mode 100755 index c7f9f35b..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp +++ /dev/null @@ -1,24 +0,0 @@ -#ifndef PYTORCH_CPP_HELPER -#define PYTORCH_CPP_HELPER -#include - -#include - -using namespace at; - -#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0)) - -#define CHECK_CUDA(x) \ - TORCH_CHECK(x.device().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CPU(x) \ - TORCH_CHECK(!x.device().is_cuda(), #x " must be a CPU tensor") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_CUDA_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) -#define CHECK_CPU_INPUT(x) \ - CHECK_CPU(x); \ - CHECK_CONTIGUOUS(x) - -#endif // PYTORCH_CPP_HELPER diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp b/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp deleted file mode 100755 index 9869b535..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp +++ /dev/null @@ -1,19 +0,0 @@ -#ifndef PYTORCH_CUDA_HELPER -#define PYTORCH_CUDA_HELPER - -#include -#include -#include - -#include -#include - -#include "common_cuda_helper.hpp" - -using at::Half; -using at::Tensor; -using phalf = at::Half; - -#define __PHALF(x) (x) - -#endif // PYTORCH_CUDA_HELPER diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/cuda/focal_loss_cuda.cu b/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/cuda/focal_loss_cuda.cu deleted file mode 100755 index 10a73f9c..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/cuda/focal_loss_cuda.cu +++ /dev/null @@ -1,47 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cuda_helper.hpp" -#include "sigmoid_focal_loss_cuda_kernel.cuh" - -void SigmoidFocalLossForwardCUDAKernelLauncher(Tensor input, Tensor target, - Tensor weight, Tensor output, - const float gamma, - const float alpha) { - int output_size = output.numel(); - int num_classes = input.size(1); - AT_ASSERTM(target.max().item() <= (int64_t)num_classes, - "target label should smaller or equal than num classes"); - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "sigmoid_focal_loss_forward_cuda_kernel", [&] { - sigmoid_focal_loss_forward_cuda_kernel - <<>>( - output_size, input.data_ptr(), - target.data_ptr(), weight.data_ptr(), - output.data_ptr(), gamma, alpha, num_classes); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SigmoidFocalLossBackwardCUDAKernelLauncher(Tensor input, Tensor target, - Tensor weight, - Tensor grad_input, - const float gamma, - const float alpha) { - int output_size = grad_input.numel(); - int num_classes = input.size(1); - - at::cuda::CUDAGuard device_guard(grad_input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "sigmoid_focal_loss_backward_cuda_kernel", [&] { - sigmoid_focal_loss_backward_cuda_kernel - <<>>( - output_size, input.data_ptr(), - target.data_ptr(), weight.data_ptr(), - grad_input.data_ptr(), gamma, alpha, num_classes); - }); - - AT_CUDA_CHECK(cudaGetLastError()); -} \ No newline at end of file diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/cuda/nms_cuda.cu b/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/cuda/nms_cuda.cu deleted file mode 100755 index 16cf6468..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/cuda/nms_cuda.cu +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "nms_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -Tensor NMSCUDAKernelLauncher(Tensor boxes, Tensor scores, float iou_threshold, - int offset) { - at::cuda::CUDAGuard device_guard(boxes.device()); - - if (boxes.numel() == 0) { - return at::empty({0}, boxes.options().dtype(at::kLong)); - } - auto order_t = std::get<1>(scores.sort(0, /*descending=*/true)); - auto boxes_sorted = boxes.index_select(0, order_t); - - int boxes_num = boxes.size(0); - const int col_blocks = DIVUP(boxes_num, threadsPerBlock); - Tensor mask = - at::empty({boxes_num, col_blocks}, boxes.options().dtype(at::kLong)); - dim3 blocks(col_blocks, col_blocks); - dim3 threads(threadsPerBlock); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - nms_cuda<<>>( - boxes_num, iou_threshold, offset, boxes_sorted.data_ptr(), - (unsigned long long*)mask.data_ptr()); - - at::Tensor mask_cpu = mask.to(at::kCPU); - unsigned long long* mask_host = - (unsigned long long*)mask_cpu.data_ptr(); - - std::vector remv(col_blocks); - memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks); - - at::Tensor keep_t = - at::zeros({boxes_num}, boxes.options().dtype(at::kBool).device(at::kCPU)); - bool* keep = keep_t.data_ptr(); - - for (int i = 0; i < boxes_num; i++) { - int nblock = i / threadsPerBlock; - int inblock = i % threadsPerBlock; - - if (!(remv[nblock] & (1ULL << inblock))) { - keep[i] = true; - // set every overlap box with bit 1 in remv - unsigned long long* p = mask_host + i * col_blocks; - for (int j = nblock; j < col_blocks; j++) { - remv[j] |= p[j]; - } - } - } - - AT_CUDA_CHECK(cudaGetLastError()); - return order_t.masked_select(keep_t.to(at::kCUDA)); -} diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/focal_loss.cpp b/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/focal_loss.cpp deleted file mode 100755 index 43a0ec5a..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/focal_loss.cpp +++ /dev/null @@ -1,67 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" - -#ifdef MMCV_WITH_CUDA -void SigmoidFocalLossForwardCUDAKernelLauncher(Tensor input, Tensor target, - Tensor weight, Tensor output, - const float gamma, - const float alpha); - -void SigmoidFocalLossBackwardCUDAKernelLauncher(Tensor input, Tensor target, - Tensor weight, - Tensor grad_input, - const float gamma, - const float alpha); - -void sigmoid_focal_loss_forward_cuda(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - SigmoidFocalLossForwardCUDAKernelLauncher(input, target, weight, output, - gamma, alpha); -} - -void sigmoid_focal_loss_backward_cuda(Tensor input, Tensor target, - Tensor weight, Tensor grad_input, - float gamma, float alpha) { - SigmoidFocalLossBackwardCUDAKernelLauncher(input, target, weight, grad_input, - gamma, alpha); -} - -#endif - -void sigmoid_focal_loss_forward(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(target); - CHECK_CUDA_INPUT(weight); - CHECK_CUDA_INPUT(output); - - sigmoid_focal_loss_forward_cuda(input, target, weight, output, gamma, - alpha); -#else - AT_ERROR("SigmoidFocalLoss is not compiled with GPU support"); -#endif - } else { - AT_ERROR("SigmoidFocalLoss is not implemented on CPU"); - } -} - -void sigmoid_focal_loss_backward(Tensor input, Tensor target, Tensor weight, - Tensor grad_input, float gamma, float alpha) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(target); - CHECK_CUDA_INPUT(weight); - CHECK_CUDA_INPUT(grad_input); - - sigmoid_focal_loss_backward_cuda(input, target, weight, grad_input, gamma, - alpha); -#else - AT_ERROR("SigmoidFocalLoss is not compiled with GPU support"); -#endif - } else { - AT_ERROR("SigmoidFocalLoss is not implemented on CPU"); - } -} diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/info.cpp b/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/info.cpp deleted file mode 100755 index a08d227d..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/info.cpp +++ /dev/null @@ -1,56 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/vision.cpp -#include "pytorch_cpp_helper.hpp" - -#ifdef MMCV_WITH_CUDA -#ifndef HIP_DIFF -#include -int get_cudart_version() { return CUDART_VERSION; } -#endif -#endif - -std::string get_compiling_cuda_version() { -#ifdef MMCV_WITH_CUDA -#ifndef HIP_DIFF - std::ostringstream oss; - // copied from - // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/cuda/detail/CUDAHooks.cpp#L231 - auto printCudaStyleVersion = [&](int v) { - oss << (v / 1000) << "." << (v / 10 % 100); - if (v % 10 != 0) { - oss << "." << (v % 10); - } - }; - printCudaStyleVersion(get_cudart_version()); - return oss.str(); -#else - return std::string("rocm not available"); -#endif -#else - return std::string("not available"); -#endif -} - -// similar to -// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Version.cpp -std::string get_compiler_version() { - std::ostringstream ss; -#if defined(__GNUC__) -#ifndef __clang__ - { ss << "GCC " << __GNUC__ << "." << __GNUC_MINOR__; } -#endif -#endif - -#if defined(__clang_major__) - { - ss << "clang " << __clang_major__ << "." << __clang_minor__ << "." 
- << __clang_patchlevel__; - } -#endif - -#if defined(_MSC_VER) - { ss << "MSVC " << _MSC_FULL_VER; } -#endif - return ss.str(); -} diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/nms.cpp b/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/nms.cpp deleted file mode 100755 index e88208dc..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/nms.cpp +++ /dev/null @@ -1,261 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" - -#ifdef MMCV_WITH_CUDA -Tensor NMSCUDAKernelLauncher(Tensor boxes, Tensor scores, float iou_threshold, - int offset); - -Tensor nms_cuda(Tensor boxes, Tensor scores, float iou_threshold, int offset) { - return NMSCUDAKernelLauncher(boxes, scores, iou_threshold, offset); -} -#endif - -Tensor nms_cpu(Tensor boxes, Tensor scores, float iou_threshold, int offset) { - if (boxes.numel() == 0) { - return at::empty({0}, boxes.options().dtype(at::kLong)); - } - auto x1_t = boxes.select(1, 0).contiguous(); - auto y1_t = boxes.select(1, 1).contiguous(); - auto x2_t = boxes.select(1, 2).contiguous(); - auto y2_t = boxes.select(1, 3).contiguous(); - - Tensor areas_t = (x2_t - x1_t + offset) * (y2_t - y1_t + offset); - - auto order_t = std::get<1>(scores.sort(0, /* descending=*/true)); - - auto nboxes = boxes.size(0); - Tensor select_t = at::ones({nboxes}, boxes.options().dtype(at::kBool)); - - auto select = select_t.data_ptr(); - auto order = order_t.data_ptr(); - auto x1 = x1_t.data_ptr(); - auto y1 = y1_t.data_ptr(); - auto x2 = x2_t.data_ptr(); - auto y2 = y2_t.data_ptr(); - auto areas = areas_t.data_ptr(); - - for (int64_t _i = 0; _i < nboxes; _i++) { - if (select[_i] == false) continue; - auto i = order[_i]; - auto ix1 = x1[i]; - auto iy1 = y1[i]; - auto ix2 = x2[i]; - auto iy2 = y2[i]; - auto iarea = areas[i]; - - for (int64_t _j = _i + 1; _j < nboxes; _j++) { - if (select[_j] == false) continue; - auto j = order[_j]; - auto xx1 = std::max(ix1, x1[j]); - auto yy1 = std::max(iy1, y1[j]); - auto xx2 = std::min(ix2, x2[j]); - auto yy2 = std::min(iy2, y2[j]); - - auto w = std::max(0.f, xx2 - xx1 + offset); - auto h = std::max(0.f, yy2 - yy1 + offset); - auto inter = w * h; - auto ovr = inter / (iarea + areas[j] - inter); - if (ovr > iou_threshold) select[_j] = false; - } - } - return order_t.masked_select(select_t); -} - -Tensor nms(Tensor boxes, Tensor scores, float iou_threshold, int offset) { - if (boxes.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(boxes); - CHECK_CUDA_INPUT(scores); - return nms_cuda(boxes, scores, iou_threshold, offset); -#else - AT_ERROR("nms is not compiled with GPU support"); -#endif - } else { - CHECK_CPU_INPUT(boxes); - CHECK_CPU_INPUT(scores); - return nms_cpu(boxes, scores, iou_threshold, offset); - } -} - -Tensor softnms_cpu(Tensor boxes, Tensor scores, Tensor dets, - float iou_threshold, float sigma, float min_score, - int method, int offset) { - if (boxes.numel() == 0) { - return at::empty({0}, boxes.options().dtype(at::kLong)); - } - - auto x1_t = boxes.select(1, 0).contiguous(); - auto y1_t = boxes.select(1, 1).contiguous(); - auto x2_t = boxes.select(1, 2).contiguous(); - auto y2_t = boxes.select(1, 3).contiguous(); - auto scores_t = scores.clone(); - - Tensor areas_t = (x2_t - x1_t + offset) * (y2_t - y1_t + offset); - - auto nboxes = boxes.size(0); - auto x1 = x1_t.data_ptr(); - auto y1 = y1_t.data_ptr(); - auto x2 = x2_t.data_ptr(); - auto y2 = y2_t.data_ptr(); - auto sc = scores_t.data_ptr(); - auto areas = areas_t.data_ptr(); - auto de = dets.data_ptr(); - - 
int64_t pos = 0; - Tensor inds_t = at::arange(nboxes, boxes.options().dtype(at::kLong)); - auto inds = inds_t.data_ptr(); - - for (int64_t i = 0; i < nboxes; i++) { - auto max_score = sc[i]; - auto max_pos = i; - - pos = i + 1; - // get max box - while (pos < nboxes) { - if (max_score < sc[pos]) { - max_score = sc[pos]; - max_pos = pos; - } - pos = pos + 1; - } - // swap - auto ix1 = de[i * 5 + 0] = x1[max_pos]; - auto iy1 = de[i * 5 + 1] = y1[max_pos]; - auto ix2 = de[i * 5 + 2] = x2[max_pos]; - auto iy2 = de[i * 5 + 3] = y2[max_pos]; - auto iscore = de[i * 5 + 4] = sc[max_pos]; - auto iarea = areas[max_pos]; - auto iind = inds[max_pos]; - x1[max_pos] = x1[i]; - y1[max_pos] = y1[i]; - x2[max_pos] = x2[i]; - y2[max_pos] = y2[i]; - sc[max_pos] = sc[i]; - areas[max_pos] = areas[i]; - inds[max_pos] = inds[i]; - x1[i] = ix1; - y1[i] = iy1; - x2[i] = ix2; - y2[i] = iy2; - sc[i] = iscore; - areas[i] = iarea; - inds[i] = iind; - - pos = i + 1; - while (pos < nboxes) { - auto xx1 = std::max(ix1, x1[pos]); - auto yy1 = std::max(iy1, y1[pos]); - auto xx2 = std::min(ix2, x2[pos]); - auto yy2 = std::min(iy2, y2[pos]); - - auto w = std::max(0.f, xx2 - xx1 + offset); - auto h = std::max(0.f, yy2 - yy1 + offset); - auto inter = w * h; - auto ovr = inter / (iarea + areas[pos] - inter); - - float weight = 1.; - if (method == 0) { - if (ovr >= iou_threshold) weight = 0; - } else if (method == 1) { - if (ovr >= iou_threshold) weight = 1 - ovr; - } else if (method == 2) { - weight = std::exp(-(ovr * ovr) / sigma); - } - sc[pos] *= weight; - // if box score falls below threshold, discard the box by - // swapping with last box update N - if (sc[pos] < min_score) { - x1[pos] = x1[nboxes - 1]; - y1[pos] = y1[nboxes - 1]; - x2[pos] = x2[nboxes - 1]; - y2[pos] = y2[nboxes - 1]; - sc[pos] = sc[nboxes - 1]; - areas[pos] = areas[nboxes - 1]; - inds[pos] = inds[nboxes - 1]; - nboxes = nboxes - 1; - pos = pos - 1; - } - pos = pos + 1; - } - } - return inds_t.slice(0, 0, nboxes); -} - -Tensor softnms(Tensor boxes, Tensor scores, Tensor dets, float iou_threshold, - float sigma, float min_score, int method, int offset) { - if (boxes.device().is_cuda()) { - AT_ERROR("softnms is not implemented on GPU"); - } else { - return softnms_cpu(boxes, scores, dets, iou_threshold, sigma, min_score, - method, offset); - } -} - -std::vector > nms_match_cpu(Tensor dets, float iou_threshold) { - auto x1_t = dets.select(1, 0).contiguous(); - auto y1_t = dets.select(1, 1).contiguous(); - auto x2_t = dets.select(1, 2).contiguous(); - auto y2_t = dets.select(1, 3).contiguous(); - auto scores = dets.select(1, 4).contiguous(); - - at::Tensor areas_t = (x2_t - x1_t) * (y2_t - y1_t); - - auto order_t = std::get<1>(scores.sort(0, /* descending=*/true)); - - auto ndets = dets.size(0); - at::Tensor suppressed_t = - at::zeros({ndets}, dets.options().dtype(at::kByte).device(at::kCPU)); - - auto suppressed = suppressed_t.data_ptr(); - auto order = order_t.data_ptr(); - auto x1 = x1_t.data_ptr(); - auto y1 = y1_t.data_ptr(); - auto x2 = x2_t.data_ptr(); - auto y2 = y2_t.data_ptr(); - auto areas = areas_t.data_ptr(); - - std::vector keep; - std::vector > matched; - - for (int64_t _i = 0; _i < ndets; _i++) { - auto i = order[_i]; - if (suppressed[i] == 1) continue; - keep.push_back(i); - std::vector v_i; - auto ix1 = x1[i]; - auto iy1 = y1[i]; - auto ix2 = x2[i]; - auto iy2 = y2[i]; - auto iarea = areas[i]; - - for (int64_t _j = _i + 1; _j < ndets; _j++) { - auto j = order[_j]; - if (suppressed[j] == 1) continue; - auto xx1 = std::max(ix1, x1[j]); - auto 
yy1 = std::max(iy1, y1[j]); - auto xx2 = std::min(ix2, x2[j]); - auto yy2 = std::min(iy2, y2[j]); - - auto w = std::max(static_cast(0), xx2 - xx1); - auto h = std::max(static_cast(0), yy2 - yy1); - auto inter = w * h; - auto ovr = inter / (iarea + areas[j] - inter); - if (ovr >= iou_threshold) { - suppressed[j] = 1; - v_i.push_back(j); - } - } - matched.push_back(v_i); - } - for (int i = 0; i < keep.size(); i++) - matched[i].insert(matched[i].begin(), keep[i]); - return matched; -} - -std::vector > nms_match(Tensor dets, float iou_threshold) { - if (dets.device().is_cuda()) { - AT_ERROR("nms_match is not implemented on GPU"); - } else { - return nms_match_cpu(dets, iou_threshold); - } -} diff --git a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/pybind.cpp b/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/pybind.cpp deleted file mode 100755 index da215083..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/csrc/pytorch/pybind.cpp +++ /dev/null @@ -1,36 +0,0 @@ -// Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -// All Rights Reserved. -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" - -std::string get_compiler_version(); -std::string get_compiling_cuda_version(); - - - -void sigmoid_focal_loss_forward(Tensor input, Tensor target, Tensor weight, - Tensor output, float gamma, float alpha); - -void sigmoid_focal_loss_backward(Tensor input, Tensor target, Tensor weight, - Tensor grad_input, float gamma, float alpha); - -Tensor nms(Tensor boxes, Tensor scores, float iou_threshold, int offset); - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - - m.def("get_compiler_version", &get_compiler_version, "get_compiler_version"); - m.def("get_compiling_cuda_version", &get_compiling_cuda_version, - "get_compiling_cuda_version"); - - m.def("sigmoid_focal_loss_forward", &sigmoid_focal_loss_forward, - "sigmoid_focal_loss_forward ", py::arg("input"), py::arg("target"), - py::arg("weight"), py::arg("output"), py::arg("gamma"), - py::arg("alpha")); - m.def("sigmoid_focal_loss_backward", &sigmoid_focal_loss_backward, - "sigmoid_focal_loss_backward", py::arg("input"), py::arg("target"), - py::arg("weight"), py::arg("grad_input"), py::arg("gamma"), - py::arg("alpha")); - - m.def("nms", &nms, "nms (CPU/CUDA) ", py::arg("boxes"), py::arg("scores"), - py::arg("iou_threshold"), py::arg("offset")); -} diff --git a/cv/detection/yolof/pytorch/mmcv/ops/focal_loss.py b/cv/detection/yolof/pytorch/mmcv/ops/focal_loss.py deleted file mode 100755 index 23c1a039..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/focal_loss.py +++ /dev/null @@ -1,211 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
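For reference, the per-element loss computed by the sigmoid focal loss CUDA kernels above can be written directly in plain PyTorch. The sketch below is illustrative only (no reduction, no per-class weighting) and is independent of the extension module touched by this patch; the function name and the example inputs are made up.

```python
# Illustrative reference of the per-element sigmoid focal loss:
#   target class:  -alpha       * (1 - p)^gamma * log(p)
#   other classes: -(1 - alpha) * p^gamma       * log(1 - p)
import torch

def sigmoid_focal_loss_reference(logits: torch.Tensor,
                                 targets: torch.Tensor,
                                 gamma: float = 2.0,
                                 alpha: float = 0.25) -> torch.Tensor:
    """logits: (N, C) raw scores; targets: (N,) int64 class indices."""
    num_classes = logits.size(1)
    p = torch.sigmoid(logits)  # (N, C)
    one_hot = torch.nn.functional.one_hot(targets, num_classes).to(logits.dtype)
    loss_pos = -alpha * (1 - p).pow(gamma) * torch.log(p.clamp_min(1e-12))
    loss_neg = -(1 - alpha) * p.pow(gamma) * torch.log((1 - p).clamp_min(1e-12))
    return one_hot * loss_pos + (1 - one_hot) * loss_neg

logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 2])
print(sigmoid_focal_loss_reference(logits, targets).shape)  # torch.Size([4, 3])
```

Up to the numerical clamping, this mirrors the `term_p`/`term_n` expressions in the forward kernel.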
-import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'sigmoid_focal_loss_forward', 'sigmoid_focal_loss_backward' -]) - - -class SigmoidFocalLossFunction(Function): - - @staticmethod - def symbolic(g, input, target, gamma, alpha, weight, reduction): - return g.op( - 'mmcv::MMCVSigmoidFocalLoss', - input, - target, - gamma_f=gamma, - alpha_f=alpha, - weight_f=weight, - reduction_s=reduction) - - @staticmethod - def forward(ctx, - input, - target, - gamma=2.0, - alpha=0.25, - weight=None, - reduction='mean'): - - assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor)) - assert input.dim() == 2 - assert target.dim() == 1 - assert input.size(0) == target.size(0) - if weight is None: - weight = input.new_empty(0) - else: - assert weight.dim() == 1 - assert input.size(1) == weight.size(0) - ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2} - assert reduction in ctx.reduction_dict.keys() - - ctx.gamma = float(gamma) - ctx.alpha = float(alpha) - ctx.reduction = ctx.reduction_dict[reduction] - - output = input.new_zeros(input.size()) - - ext_module.sigmoid_focal_loss_forward( - input, target, weight, output, gamma=ctx.gamma, alpha=ctx.alpha) - if ctx.reduction == ctx.reduction_dict['mean']: - output = output.sum() / input.size(0) - elif ctx.reduction == ctx.reduction_dict['sum']: - output = output.sum() - ctx.save_for_backward(input, target, weight) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, target, weight = ctx.saved_tensors - - grad_input = input.new_zeros(input.size()) - - ext_module.sigmoid_focal_loss_backward( - input, - target, - weight, - grad_input, - gamma=ctx.gamma, - alpha=ctx.alpha) - - grad_input *= grad_output - if ctx.reduction == ctx.reduction_dict['mean']: - grad_input /= input.size(0) - return grad_input, None, None, None, None, None - - -sigmoid_focal_loss = SigmoidFocalLossFunction.apply - - -class SigmoidFocalLoss(nn.Module): - - def __init__(self, gamma, alpha, weight=None, reduction='mean'): - super(SigmoidFocalLoss, self).__init__() - self.gamma = gamma - self.alpha = alpha - self.register_buffer('weight', weight) - self.reduction = reduction - - def forward(self, input, target): - return sigmoid_focal_loss(input, target, self.gamma, self.alpha, - self.weight, self.reduction) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(gamma={self.gamma}, ' - s += f'alpha={self.alpha}, ' - s += f'reduction={self.reduction})' - return s - - -class SoftmaxFocalLossFunction(Function): - - @staticmethod - def symbolic(g, input, target, gamma, alpha, weight, reduction): - return g.op( - 'mmcv::MMCVSoftmaxFocalLoss', - input, - target, - gamma_f=gamma, - alpha_f=alpha, - weight_f=weight, - reduction_s=reduction) - - @staticmethod - def forward(ctx, - input, - target, - gamma=2.0, - alpha=0.25, - weight=None, - reduction='mean'): - - assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor)) - assert input.dim() == 2 - assert target.dim() == 1 - assert input.size(0) == target.size(0) - if weight is None: - weight = input.new_empty(0) - else: - assert weight.dim() == 1 - assert input.size(1) == weight.size(0) - ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2} - assert reduction in ctx.reduction_dict.keys() - - ctx.gamma = float(gamma) - ctx.alpha = float(alpha) - ctx.reduction = ctx.reduction_dict[reduction] - - channel_stats, _ = 
torch.max(input, dim=1) - input_softmax = input - channel_stats.unsqueeze(1).expand_as(input) - input_softmax.exp_() - - channel_stats = input_softmax.sum(dim=1) - input_softmax /= channel_stats.unsqueeze(1).expand_as(input) - - output = input.new_zeros(input.size(0)) - ext_module.softmax_focal_loss_forward( - input_softmax, - target, - weight, - output, - gamma=ctx.gamma, - alpha=ctx.alpha) - - if ctx.reduction == ctx.reduction_dict['mean']: - output = output.sum() / input.size(0) - elif ctx.reduction == ctx.reduction_dict['sum']: - output = output.sum() - ctx.save_for_backward(input_softmax, target, weight) - return output - - @staticmethod - def backward(ctx, grad_output): - input_softmax, target, weight = ctx.saved_tensors - buff = input_softmax.new_zeros(input_softmax.size(0)) - grad_input = input_softmax.new_zeros(input_softmax.size()) - - ext_module.softmax_focal_loss_backward( - input_softmax, - target, - weight, - buff, - grad_input, - gamma=ctx.gamma, - alpha=ctx.alpha) - - grad_input *= grad_output - if ctx.reduction == ctx.reduction_dict['mean']: - grad_input /= input_softmax.size(0) - return grad_input, None, None, None, None, None - - -softmax_focal_loss = SoftmaxFocalLossFunction.apply - - -class SoftmaxFocalLoss(nn.Module): - - def __init__(self, gamma, alpha, weight=None, reduction='mean'): - super(SoftmaxFocalLoss, self).__init__() - self.gamma = gamma - self.alpha = alpha - self.register_buffer('weight', weight) - self.reduction = reduction - - def forward(self, input, target): - return softmax_focal_loss(input, target, self.gamma, self.alpha, - self.weight, self.reduction) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(gamma={self.gamma}, ' - s += f'alpha={self.alpha}, ' - s += f'reduction={self.reduction})' - return s diff --git a/cv/detection/yolof/pytorch/mmcv/ops/info.py b/cv/detection/yolof/pytorch/mmcv/ops/info.py deleted file mode 100755 index 29f2e559..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/info.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import glob -import os - -import torch - -if torch.__version__ == 'parrots': - import parrots - - def get_compiler_version(): - return 'GCC ' + parrots.version.compiler - - def get_compiling_cuda_version(): - return parrots.version.cuda -else: - from ..utils import ext_loader - ext_module = ext_loader.load_ext( - '_ext', ['get_compiler_version', 'get_compiling_cuda_version']) - - def get_compiler_version(): - return ext_module.get_compiler_version() - - def get_compiling_cuda_version(): - return ext_module.get_compiling_cuda_version() - - -def get_onnxruntime_op_path(): - wildcard = os.path.join( - os.path.abspath(os.path.dirname(os.path.dirname(__file__))), - '_ext_ort.*.so') - - paths = glob.glob(wildcard) - if len(paths) > 0: - return paths[0] - else: - return '' diff --git a/cv/detection/yolof/pytorch/mmcv/ops/nms.py b/cv/detection/yolof/pytorch/mmcv/ops/nms.py deleted file mode 100755 index c3e53b20..00000000 --- a/cv/detection/yolof/pytorch/mmcv/ops/nms.py +++ /dev/null @@ -1,417 +0,0 @@ -import os - -import numpy as np -import torch - -from mmcv.utils import deprecated_api_warning -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['nms']) - - -# This function is modified from: https://github.com/pytorch/vision/ -class NMSop(torch.autograd.Function): - - @staticmethod - def forward(ctx, bboxes, scores, iou_threshold, offset, score_threshold, - max_num): - is_filtering_by_score = score_threshold > 0 - if is_filtering_by_score: - valid_mask = scores > score_threshold - bboxes, scores = bboxes[valid_mask], scores[valid_mask] - valid_inds = torch.nonzero( - valid_mask, as_tuple=False).squeeze(dim=1) - - inds = ext_module.nms( - bboxes, scores, iou_threshold=float(iou_threshold), offset=offset) - - if max_num > 0: - inds = inds[:max_num] - if is_filtering_by_score: - inds = valid_inds[inds] - return inds - - @staticmethod - def symbolic(g, bboxes, scores, iou_threshold, offset, score_threshold, - max_num): - from ..onnx import is_custom_op_loaded - has_custom_op = is_custom_op_loaded() - # TensorRT nms plugin is aligned with original nms in ONNXRuntime - is_trt_backend = os.environ.get('ONNX_BACKEND') == 'MMCVTensorRT' - if has_custom_op and (not is_trt_backend): - return g.op( - 'mmcv::NonMaxSuppression', - bboxes, - scores, - iou_threshold_f=float(iou_threshold), - offset_i=int(offset)) - else: - from torch.onnx.symbolic_opset9 import select, squeeze, unsqueeze - from ..onnx.onnx_utils.symbolic_helper import _size_helper - - boxes = unsqueeze(g, bboxes, 0) - scores = unsqueeze(g, unsqueeze(g, scores, 0), 0) - - if max_num > 0: - max_num = g.op( - 'Constant', - value_t=torch.tensor(max_num, dtype=torch.long)) - else: - dim = g.op('Constant', value_t=torch.tensor(0)) - max_num = _size_helper(g, bboxes, dim) - max_output_per_class = max_num - iou_threshold = g.op( - 'Constant', - value_t=torch.tensor([iou_threshold], dtype=torch.float)) - score_threshold = g.op( - 'Constant', - value_t=torch.tensor([score_threshold], dtype=torch.float)) - nms_out = g.op('NonMaxSuppression', boxes, scores, - max_output_per_class, iou_threshold, - score_threshold) - return squeeze( - g, - select( - g, nms_out, 1, - g.op( - 'Constant', - value_t=torch.tensor([2], dtype=torch.long))), 1) - - -class SoftNMSop(torch.autograd.Function): - - @staticmethod - def forward(ctx, boxes, scores, iou_threshold, sigma, min_score, method, - offset): - dets = boxes.new_empty((boxes.size(0), 5), device='cpu') - inds = ext_module.softnms( - boxes.cpu(), - scores.cpu(), - dets.cpu(), - 
iou_threshold=float(iou_threshold), - sigma=float(sigma), - min_score=float(min_score), - method=int(method), - offset=int(offset)) - return dets, inds - - @staticmethod - def symbolic(g, boxes, scores, iou_threshold, sigma, min_score, method, - offset): - from packaging import version - assert version.parse(torch.__version__) >= version.parse('1.7.0') - nms_out = g.op( - 'mmcv::SoftNonMaxSuppression', - boxes, - scores, - iou_threshold_f=float(iou_threshold), - sigma_f=float(sigma), - min_score_f=float(min_score), - method_i=int(method), - offset_i=int(offset), - outputs=2) - return nms_out - - -@deprecated_api_warning({'iou_thr': 'iou_threshold'}) -def nms(boxes, scores, iou_threshold, offset=0, score_threshold=0, max_num=-1): - """Dispatch to either CPU or GPU NMS implementations. - - The input can be either torch tensor or numpy array. GPU NMS will be used - if the input is gpu tensor, otherwise CPU NMS - will be used. The returned type will always be the same as inputs. - - Arguments: - boxes (torch.Tensor or np.ndarray): boxes in shape (N, 4). - scores (torch.Tensor or np.ndarray): scores in shape (N, ). - iou_threshold (float): IoU threshold for NMS. - offset (int, 0 or 1): boxes' width or height is (x2 - x1 + offset). - score_threshold (float): score threshold for NMS. - max_num (int): maximum number of boxes after NMS. - - Returns: - tuple: kept dets(boxes and scores) and indice, which is always the \ - same data type as the input. - - Example: - >>> boxes = np.array([[49.1, 32.4, 51.0, 35.9], - >>> [49.3, 32.9, 51.0, 35.3], - >>> [49.2, 31.8, 51.0, 35.4], - >>> [35.1, 11.5, 39.1, 15.7], - >>> [35.6, 11.8, 39.3, 14.2], - >>> [35.3, 11.5, 39.9, 14.5], - >>> [35.2, 11.7, 39.7, 15.7]], dtype=np.float32) - >>> scores = np.array([0.9, 0.9, 0.5, 0.5, 0.5, 0.4, 0.3],\ - dtype=np.float32) - >>> iou_threshold = 0.6 - >>> dets, inds = nms(boxes, scores, iou_threshold) - >>> assert len(inds) == len(dets) == 3 - """ - assert isinstance(boxes, (torch.Tensor, np.ndarray)) - assert isinstance(scores, (torch.Tensor, np.ndarray)) - is_numpy = False - if isinstance(boxes, np.ndarray): - is_numpy = True - boxes = torch.from_numpy(boxes) - if isinstance(scores, np.ndarray): - scores = torch.from_numpy(scores) - assert boxes.size(1) == 4 - assert boxes.size(0) == scores.size(0) - assert offset in (0, 1) - - if torch.__version__ == 'parrots': - indata_list = [boxes, scores] - indata_dict = { - 'iou_threshold': float(iou_threshold), - 'offset': int(offset) - } - inds = ext_module.nms(*indata_list, **indata_dict) - else: - inds = NMSop.apply(boxes, scores, iou_threshold, offset, - score_threshold, max_num) - dets = torch.cat((boxes[inds], scores[inds].reshape(-1, 1)), dim=1) - if is_numpy: - dets = dets.cpu().numpy() - inds = inds.cpu().numpy() - return dets, inds - - -@deprecated_api_warning({'iou_thr': 'iou_threshold'}) -def soft_nms(boxes, - scores, - iou_threshold=0.3, - sigma=0.5, - min_score=1e-3, - method='linear', - offset=0): - """Dispatch to only CPU Soft NMS implementations. - - The input can be either a torch tensor or numpy array. - The returned type will always be the same as inputs. - - Arguments: - boxes (torch.Tensor or np.ndarray): boxes in shape (N, 4). - scores (torch.Tensor or np.ndarray): scores in shape (N, ). - iou_threshold (float): IoU threshold for NMS. - sigma (float): hyperparameter for gaussian method - min_score (float): score filter threshold - method (str): either 'linear' or 'gaussian' - offset (int, 0 or 1): boxes' width or height is (x2 - x1 + offset). 
- - Returns: - tuple: kept dets(boxes and scores) and indice, which is always the \ - same data type as the input. - - Example: - >>> boxes = np.array([[4., 3., 5., 3.], - >>> [4., 3., 5., 4.], - >>> [3., 1., 3., 1.], - >>> [3., 1., 3., 1.], - >>> [3., 1., 3., 1.], - >>> [3., 1., 3., 1.]], dtype=np.float32) - >>> scores = np.array([0.9, 0.9, 0.5, 0.5, 0.4, 0.0], dtype=np.float32) - >>> iou_threshold = 0.6 - >>> dets, inds = soft_nms(boxes, scores, iou_threshold, sigma=0.5) - >>> assert len(inds) == len(dets) == 5 - """ - - assert isinstance(boxes, (torch.Tensor, np.ndarray)) - assert isinstance(scores, (torch.Tensor, np.ndarray)) - is_numpy = False - if isinstance(boxes, np.ndarray): - is_numpy = True - boxes = torch.from_numpy(boxes) - if isinstance(scores, np.ndarray): - scores = torch.from_numpy(scores) - assert boxes.size(1) == 4 - assert boxes.size(0) == scores.size(0) - assert offset in (0, 1) - method_dict = {'naive': 0, 'linear': 1, 'gaussian': 2} - assert method in method_dict.keys() - - if torch.__version__ == 'parrots': - dets = boxes.new_empty((boxes.size(0), 5), device='cpu') - indata_list = [boxes.cpu(), scores.cpu(), dets.cpu()] - indata_dict = { - 'iou_threshold': float(iou_threshold), - 'sigma': float(sigma), - 'min_score': min_score, - 'method': method_dict[method], - 'offset': int(offset) - } - inds = ext_module.softnms(*indata_list, **indata_dict) - else: - dets, inds = SoftNMSop.apply(boxes.cpu(), scores.cpu(), - float(iou_threshold), float(sigma), - float(min_score), method_dict[method], - int(offset)) - - dets = dets[:inds.size(0)] - - if is_numpy: - dets = dets.cpu().numpy() - inds = inds.cpu().numpy() - return dets, inds - else: - return dets.to(device=boxes.device), inds.to(device=boxes.device) - - -def batched_nms(boxes, scores, idxs, nms_cfg, class_agnostic=False): - """Performs non-maximum suppression in a batched fashion. - - Modified from https://github.com/pytorch/vision/blob - /505cd6957711af790211896d32b40291bea1bc21/torchvision/ops/boxes.py#L39. - In order to perform NMS independently per class, we add an offset to all - the boxes. The offset is dependent only on the class idx, and is large - enough so that boxes from different classes do not overlap. - - Arguments: - boxes (torch.Tensor): boxes in shape (N, 4). - scores (torch.Tensor): scores in shape (N, ). - idxs (torch.Tensor): each index value correspond to a bbox cluster, - and NMS will not be applied between elements of different idxs, - shape (N, ). - nms_cfg (dict): specify nms type and other parameters like iou_thr. - Possible keys includes the following. - - - iou_thr (float): IoU threshold used for NMS. - - split_thr (float): threshold number of boxes. In some cases the - number of boxes is large (e.g., 200k). To avoid OOM during - training, the users could set `split_thr` to a small value. - If the number of boxes is greater than the threshold, it will - perform NMS on each group of boxes separately and sequentially. - Defaults to 10000. - class_agnostic (bool): if true, nms is class agnostic, - i.e. IoU thresholding happens over all boxes, - regardless of the predicted class. - - Returns: - tuple: kept dets and indice. 
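The class-offset trick described in the `batched_nms` docstring above can be illustrated with a few made-up boxes. The sketch below uses torchvision's `nms` only to stay self-contained (assuming torchvision is installed); boxes, scores, and class ids are invented for the example.

```python
# Illustrative sketch: shift boxes of different classes apart so one
# class-agnostic NMS pass never suppresses across classes.
import torch
from torchvision.ops import nms  # any IoU-based NMS works for this sketch

boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.],    # overlaps box 0, same class
                      [0., 0., 10., 10.]])   # identical box, different class
scores = torch.tensor([0.9, 0.8, 0.7])
idxs = torch.tensor([0, 0, 1])               # per-box class ids

max_coordinate = boxes.max()
offsets = idxs.to(boxes) * (max_coordinate + 1)   # large per-class shift
keep = nms(boxes + offsets[:, None], scores, 0.5)
print(keep)  # tensor([0, 2]): box 1 is suppressed, box 2 survives (other class)
```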
- """ - nms_cfg_ = nms_cfg.copy() - class_agnostic = nms_cfg_.pop('class_agnostic', class_agnostic) - if class_agnostic: - boxes_for_nms = boxes - else: - max_coordinate = boxes.max() - offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes)) - boxes_for_nms = boxes + offsets[:, None] - - nms_type = nms_cfg_.pop('type', 'nms') - nms_op = eval(nms_type) - - split_thr = nms_cfg_.pop('split_thr', 10000) - # Won't split to multiple nms nodes when exporting to onnx - if boxes_for_nms.shape[0] < split_thr or torch.onnx.is_in_onnx_export(): - dets, keep = nms_op(boxes_for_nms, scores, **nms_cfg_) - boxes = boxes[keep] - # -1 indexing works abnormal in TensorRT - # This assumes `dets` has 5 dimensions where - # the last dimension is score. - # TODO: more elegant way to handle the dimension issue. - # Some type of nms would reweight the score, such as SoftNMS - scores = dets[:, 4] - else: - max_num = nms_cfg_.pop('max_num', -1) - total_mask = scores.new_zeros(scores.size(), dtype=torch.bool) - # Some type of nms would reweight the score, such as SoftNMS - scores_after_nms = scores.new_zeros(scores.size()) - for id in torch.unique(idxs): - mask = (idxs == id).nonzero(as_tuple=False).view(-1) - dets, keep = nms_op(boxes_for_nms[mask], scores[mask], **nms_cfg_) - total_mask[mask[keep]] = True - scores_after_nms[mask[keep]] = dets[:, -1] - keep = total_mask.nonzero(as_tuple=False).view(-1) - - scores, inds = scores_after_nms[keep].sort(descending=True) - keep = keep[inds] - boxes = boxes[keep] - - if max_num > 0: - keep = keep[:max_num] - boxes = boxes[:max_num] - scores = scores[:max_num] - - return torch.cat([boxes, scores[:, None]], -1), keep - - -def nms_match(dets, iou_threshold): - """Matched dets into different groups by NMS. - - NMS match is Similar to NMS but when a bbox is suppressed, nms match will - record the indice of suppressed bbox and form a group with the indice of - kept bbox. In each group, indice is sorted as score order. - - Arguments: - dets (torch.Tensor | np.ndarray): Det boxes with scores, shape (N, 5). - iou_thr (float): IoU thresh for NMS. - - Returns: - List[torch.Tensor | np.ndarray]: The outer list corresponds different - matched group, the inner Tensor corresponds the indices for a group - in score order. - """ - if dets.shape[0] == 0: - matched = [] - else: - assert dets.shape[-1] == 5, 'inputs dets.shape should be (N, 5), ' \ - f'but get {dets.shape}' - if isinstance(dets, torch.Tensor): - dets_t = dets.detach().cpu() - else: - dets_t = torch.from_numpy(dets) - indata_list = [dets_t] - indata_dict = {'iou_threshold': float(iou_threshold)} - matched = ext_module.nms_match(*indata_list, **indata_dict) - if torch.__version__ == 'parrots': - matched = matched.tolist() - - if isinstance(dets, torch.Tensor): - return [dets.new_tensor(m, dtype=torch.long) for m in matched] - else: - return [np.array(m, dtype=np.int) for m in matched] - - -def nms_rotated(dets, scores, iou_threshold, labels=None): - """Performs non-maximum suppression (NMS) on the rotated boxes according to - their intersection-over-union (IoU). - - Rotated NMS iteratively removes lower scoring rotated boxes which have an - IoU greater than iou_threshold with another (higher scoring) rotated box. - - Args: - boxes (Tensor): Rotated boxes in shape (N, 5). They are expected to \ - be in (x_ctr, y_ctr, width, height, angle_radian) format. - scores (Tensor): scores in shape (N, ). - iou_threshold (float): IoU thresh for NMS. - labels (Tensor): boxes' label in shape (N,). 
- - Returns: - tuple: kept dets(boxes and scores) and indice, which is always the \ - same data type as the input. - """ - if dets.shape[0] == 0: - return dets, None - multi_label = labels is not None - if multi_label: - dets_wl = torch.cat((dets, labels.unsqueeze(1)), 1) - else: - dets_wl = dets - _, order = scores.sort(0, descending=True) - dets_sorted = dets_wl.index_select(0, order) - - if torch.__version__ == 'parrots': - keep_inds = ext_module.nms_rotated( - dets_wl, - scores, - order, - dets_sorted, - iou_threshold=iou_threshold, - multi_label=multi_label) - else: - keep_inds = ext_module.nms_rotated(dets_wl, scores, order, dets_sorted, - iou_threshold, multi_label) - dets = torch.cat((dets[keep_inds], scores[keep_inds].reshape(-1, 1)), - dim=1) - return dets, keep_inds diff --git a/cv/detection/yolof/pytorch/mmcv/parallel/__init__.py b/cv/detection/yolof/pytorch/mmcv/parallel/__init__.py deleted file mode 100755 index 2ed2c17a..00000000 --- a/cv/detection/yolof/pytorch/mmcv/parallel/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .collate import collate -from .data_container import DataContainer -from .data_parallel import MMDataParallel -from .distributed import MMDistributedDataParallel -from .registry import MODULE_WRAPPERS -from .scatter_gather import scatter, scatter_kwargs -from .utils import is_module_wrapper - -__all__ = [ - 'collate', 'DataContainer', 'MMDataParallel', 'MMDistributedDataParallel', - 'scatter', 'scatter_kwargs', 'is_module_wrapper', 'MODULE_WRAPPERS' -] diff --git a/cv/detection/yolof/pytorch/mmcv/parallel/_functions.py b/cv/detection/yolof/pytorch/mmcv/parallel/_functions.py deleted file mode 100755 index 9b5a8a44..00000000 --- a/cv/detection/yolof/pytorch/mmcv/parallel/_functions.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import _get_stream - - -def scatter(input, devices, streams=None): - """Scatters tensor across multiple GPUs.""" - if streams is None: - streams = [None] * len(devices) - - if isinstance(input, list): - chunk_size = (len(input) - 1) // len(devices) + 1 - outputs = [ - scatter(input[i], [devices[i // chunk_size]], - [streams[i // chunk_size]]) for i in range(len(input)) - ] - return outputs - elif isinstance(input, torch.Tensor): - output = input.contiguous() - # TODO: copy to a pinned buffer first (if copying from CPU) - stream = streams[0] if output.numel() > 0 else None - if devices != [-1]: - with torch.cuda.device(devices[0]), torch.cuda.stream(stream): - output = output.cuda(devices[0], non_blocking=True) - else: - # unsqueeze the first dimension thus the tensor's shape is the - # same as those scattered with GPU. 
- output = output.unsqueeze(0) - return output - else: - raise Exception(f'Unknown type {type(input)}.') - - -def synchronize_stream(output, devices, streams): - if isinstance(output, list): - chunk_size = len(output) // len(devices) - for i in range(len(devices)): - for j in range(chunk_size): - synchronize_stream(output[i * chunk_size + j], [devices[i]], - [streams[i]]) - elif isinstance(output, torch.Tensor): - if output.numel() != 0: - with torch.cuda.device(devices[0]): - main_stream = torch.cuda.current_stream() - main_stream.wait_stream(streams[0]) - output.record_stream(main_stream) - else: - raise Exception(f'Unknown type {type(output)}.') - - -def get_input_device(input): - if isinstance(input, list): - for item in input: - input_device = get_input_device(item) - if input_device != -1: - return input_device - return -1 - elif isinstance(input, torch.Tensor): - return input.get_device() if input.is_cuda else -1 - else: - raise Exception(f'Unknown type {type(input)}.') - - -class Scatter: - - @staticmethod - def forward(target_gpus, input): - input_device = get_input_device(input) - streams = None - if input_device == -1 and target_gpus != [-1]: - # Perform CPU to GPU copies in a background stream - streams = [_get_stream(device) for device in target_gpus] - - outputs = scatter(input, target_gpus, streams) - # Synchronize with the copy stream - if streams is not None: - synchronize_stream(outputs, target_gpus, streams) - - return tuple(outputs) diff --git a/cv/detection/yolof/pytorch/mmcv/parallel/collate.py b/cv/detection/yolof/pytorch/mmcv/parallel/collate.py deleted file mode 100755 index ad749197..00000000 --- a/cv/detection/yolof/pytorch/mmcv/parallel/collate.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Mapping, Sequence - -import torch -import torch.nn.functional as F -from torch.utils.data.dataloader import default_collate - -from .data_container import DataContainer - - -def collate(batch, samples_per_gpu=1): - """Puts each data field into a tensor/DataContainer with outer dimension - batch size. - - Extend default_collate to add support for - :type:`~mmcv.parallel.DataContainer`. There are 3 cases. - - 1. cpu_only = True, e.g., meta data - 2. cpu_only = False, stack = True, e.g., images tensors - 3. 
cpu_only = False, stack = False, e.g., gt bboxes - """ - - if not isinstance(batch, Sequence): - raise TypeError(f'{batch.dtype} is not supported.') - - if isinstance(batch[0], DataContainer): - stacked = [] - if batch[0].cpu_only: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer( - stacked, batch[0].stack, batch[0].padding_value, cpu_only=True) - elif batch[0].stack: - for i in range(0, len(batch), samples_per_gpu): - assert isinstance(batch[i].data, torch.Tensor) - - if batch[i].pad_dims is not None: - ndim = batch[i].dim() - assert ndim > batch[i].pad_dims - max_shape = [0 for _ in range(batch[i].pad_dims)] - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = batch[i].size(-dim) - for sample in batch[i:i + samples_per_gpu]: - for dim in range(0, ndim - batch[i].pad_dims): - assert batch[i].size(dim) == sample.size(dim) - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = max(max_shape[dim - 1], - sample.size(-dim)) - padded_samples = [] - for sample in batch[i:i + samples_per_gpu]: - pad = [0 for _ in range(batch[i].pad_dims * 2)] - for dim in range(1, batch[i].pad_dims + 1): - pad[2 * dim - - 1] = max_shape[dim - 1] - sample.size(-dim) - padded_samples.append( - F.pad( - sample.data, pad, value=sample.padding_value)) - stacked.append(default_collate(padded_samples)) - elif batch[i].pad_dims is None: - stacked.append( - default_collate([ - sample.data - for sample in batch[i:i + samples_per_gpu] - ])) - else: - raise ValueError( - 'pad_dims should be either None or integers (1-3)') - - else: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer(stacked, batch[0].stack, batch[0].padding_value) - elif isinstance(batch[0], Sequence): - transposed = zip(*batch) - return [collate(samples, samples_per_gpu) for samples in transposed] - elif isinstance(batch[0], Mapping): - return { - key: collate([d[key] for d in batch], samples_per_gpu) - for key in batch[0] - } - else: - return default_collate(batch) diff --git a/cv/detection/yolof/pytorch/mmcv/parallel/data_container.py b/cv/detection/yolof/pytorch/mmcv/parallel/data_container.py deleted file mode 100755 index cedb0d32..00000000 --- a/cv/detection/yolof/pytorch/mmcv/parallel/data_container.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import torch - - -def assert_tensor_type(func): - - @functools.wraps(func) - def wrapper(*args, **kwargs): - if not isinstance(args[0].data, torch.Tensor): - raise AttributeError( - f'{args[0].__class__.__name__} has no attribute ' - f'{func.__name__} for type {args[0].datatype}') - return func(*args, **kwargs) - - return wrapper - - -class DataContainer: - """A container for any type of objects. - - Typically tensors will be stacked in the collate function and sliced along - some dimension in the scatter function. This behavior has some limitations. - 1. All tensors have to be the same size. - 2. Types are limited (numpy array or Tensor). - - We design `DataContainer` and `MMDataParallel` to overcome these - limitations. The behavior can be either of the following. 
- - - copy to GPU, pad all tensors to the same size and stack them - - copy to GPU without stacking - - leave the objects as is and pass it to the model - - pad_dims specifies the number of last few dimensions to do padding - """ - - def __init__(self, - data, - stack=False, - padding_value=0, - cpu_only=False, - pad_dims=2): - self._data = data - self._cpu_only = cpu_only - self._stack = stack - self._padding_value = padding_value - assert pad_dims in [None, 1, 2, 3] - self._pad_dims = pad_dims - - def __repr__(self): - return f'{self.__class__.__name__}({repr(self.data)})' - - def __len__(self): - return len(self._data) - - @property - def data(self): - return self._data - - @property - def datatype(self): - if isinstance(self.data, torch.Tensor): - return self.data.type() - else: - return type(self.data) - - @property - def cpu_only(self): - return self._cpu_only - - @property - def stack(self): - return self._stack - - @property - def padding_value(self): - return self._padding_value - - @property - def pad_dims(self): - return self._pad_dims - - @assert_tensor_type - def size(self, *args, **kwargs): - return self.data.size(*args, **kwargs) - - @assert_tensor_type - def dim(self): - return self.data.dim() diff --git a/cv/detection/yolof/pytorch/mmcv/parallel/data_parallel.py b/cv/detection/yolof/pytorch/mmcv/parallel/data_parallel.py deleted file mode 100755 index 79b5f69b..00000000 --- a/cv/detection/yolof/pytorch/mmcv/parallel/data_parallel.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from itertools import chain - -from torch.nn.parallel import DataParallel - -from .scatter_gather import scatter_kwargs - - -class MMDataParallel(DataParallel): - """The DataParallel module that supports DataContainer. - - MMDataParallel has two main differences with PyTorch DataParallel: - - - It supports a custom type :class:`DataContainer` which allows more - flexible control of input data during both GPU and CPU inference. - - It implement two more APIs ``train_step()`` and ``val_step()``. - - Args: - module (:class:`nn.Module`): Module to be encapsulated. - device_ids (list[int]): Device IDS of modules to be scattered to. - Defaults to None when GPU is not available. - output_device (str | int): Device ID for output. Defaults to None. - dim (int): Dimension used to scatter the data. Defaults to 0. - """ - - def __init__(self, *args, dim=0, **kwargs): - super(MMDataParallel, self).__init__(*args, dim=dim, **kwargs) - self.dim = dim - - def forward(self, *inputs, **kwargs): - """Override the original forward function. - - The main difference lies in the CPU inference where the data in - :class:`DataContainers` will still be gathered. 
- """ - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module(*inputs[0], **kwargs[0]) - else: - return super().forward(*inputs, **kwargs) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def train_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.train_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - 'instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.train_step(*inputs[0], **kwargs[0]) - - def val_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.val_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - ' instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.val_step(*inputs[0], **kwargs[0]) diff --git a/cv/detection/yolof/pytorch/mmcv/parallel/distributed.py b/cv/detection/yolof/pytorch/mmcv/parallel/distributed.py deleted file mode 100755 index b799a213..00000000 --- a/cv/detection/yolof/pytorch/mmcv/parallel/distributed.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel.distributed import (DistributedDataParallel, - _find_tensors) - -from mmcv import print_log -from mmcv.utils import TORCH_VERSION, digit_version -from .scatter_gather import scatter_kwargs - - -class MMDistributedDataParallel(DistributedDataParallel): - """The DDP module that supports DataContainer. - - MMDDP has two main differences with PyTorch DDP: - - - It supports a custom type :class:`DataContainer` which allows more - flexible control of input data. - - It implement two APIs ``train_step()`` and ``val_step()``. - """ - - def to_kwargs(self, inputs, kwargs, device_id): - # Use `self.to_kwargs` instead of `self.scatter` in pytorch1.8 - # to move all tensors to device_id - return scatter_kwargs(inputs, kwargs, [device_id], dim=self.dim) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def train_step(self, *inputs, **kwargs): - """train_step() API for module wrapped by DistributedDataParallel. 
- - This method is basically the same as - ``DistributedDataParallel.forward()``, while replacing - ``self.module.forward()`` with ``self.module.train_step()``. - It is compatible with PyTorch 1.1 - 1.5. - """ - - # In PyTorch >= 1.7, ``reducer._rebuild_buckets()`` is moved from the - # end of backward to the beginning of forward. - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.7') - and self.reducer._rebuild_buckets()): - print_log( - 'Reducer buckets have been rebuilt in this iteration.', - logger='mmcv') - - if getattr(self, 'require_forward_param_sync', True): - self._sync_params() - if self.device_ids: - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - if len(self.device_ids) == 1: - output = self.module.train_step(*inputs[0], **kwargs[0]) - else: - outputs = self.parallel_apply( - self._module_copies[:len(inputs)], inputs, kwargs) - output = self.gather(outputs, self.output_device) - else: - output = self.module.train_step(*inputs, **kwargs) - - if torch.is_grad_enabled() and getattr( - self, 'require_backward_grad_sync', True): - if self.find_unused_parameters: - self.reducer.prepare_for_backward(list(_find_tensors(output))) - else: - self.reducer.prepare_for_backward([]) - else: - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) > digit_version('1.2')): - self.require_forward_param_sync = False - return output - - def val_step(self, *inputs, **kwargs): - """val_step() API for module wrapped by DistributedDataParallel. - - This method is basically the same as - ``DistributedDataParallel.forward()``, while replacing - ``self.module.forward()`` with ``self.module.val_step()``. - It is compatible with PyTorch 1.1 - 1.5. - """ - # In PyTorch >= 1.7, ``reducer._rebuild_buckets()`` is moved from the - # end of backward to the beginning of forward. - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.7') - and self.reducer._rebuild_buckets()): - print_log( - 'Reducer buckets have been rebuilt in this iteration.', - logger='mmcv') - - if getattr(self, 'require_forward_param_sync', True): - self._sync_params() - if self.device_ids: - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - if len(self.device_ids) == 1: - output = self.module.val_step(*inputs[0], **kwargs[0]) - else: - outputs = self.parallel_apply( - self._module_copies[:len(inputs)], inputs, kwargs) - output = self.gather(outputs, self.output_device) - else: - output = self.module.val_step(*inputs, **kwargs) - - if torch.is_grad_enabled() and getattr( - self, 'require_backward_grad_sync', True): - if self.find_unused_parameters: - self.reducer.prepare_for_backward(list(_find_tensors(output))) - else: - self.reducer.prepare_for_backward([]) - else: - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) > digit_version('1.2')): - self.require_forward_param_sync = False - return output diff --git a/cv/detection/yolof/pytorch/mmcv/parallel/distributed_deprecated.py b/cv/detection/yolof/pytorch/mmcv/parallel/distributed_deprecated.py deleted file mode 100755 index b593d4a9..00000000 --- a/cv/detection/yolof/pytorch/mmcv/parallel/distributed_deprecated.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.distributed as dist -import torch.nn as nn -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - -from mmcv.utils import TORCH_VERSION, digit_version -from .registry import MODULE_WRAPPERS -from .scatter_gather import scatter_kwargs - - -@MODULE_WRAPPERS.register_module() -class MMDistributedDataParallel(nn.Module): - - def __init__(self, - module, - dim=0, - broadcast_buffers=True, - bucket_cap_mb=25): - super(MMDistributedDataParallel, self).__init__() - self.module = module - self.dim = dim - self.broadcast_buffers = broadcast_buffers - - self.broadcast_bucket_size = bucket_cap_mb * 1024 * 1024 - self._sync_params() - - def _dist_broadcast_coalesced(self, tensors, buffer_size): - for tensors in _take_tensors(tensors, buffer_size): - flat_tensors = _flatten_dense_tensors(tensors) - dist.broadcast(flat_tensors, 0) - for tensor, synced in zip( - tensors, _unflatten_dense_tensors(flat_tensors, tensors)): - tensor.copy_(synced) - - def _sync_params(self): - module_states = list(self.module.state_dict().values()) - if len(module_states) > 0: - self._dist_broadcast_coalesced(module_states, - self.broadcast_bucket_size) - if self.broadcast_buffers: - if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) < digit_version('1.0')): - buffers = [b.data for b in self.module._all_buffers()] - else: - buffers = [b.data for b in self.module.buffers()] - if len(buffers) > 0: - self._dist_broadcast_coalesced(buffers, - self.broadcast_bucket_size) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def forward(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - return self.module(*inputs[0], **kwargs[0]) - - def train_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.train_step(*inputs[0], **kwargs[0]) - return output - - def val_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.val_step(*inputs[0], **kwargs[0]) - return output diff --git a/cv/detection/yolof/pytorch/mmcv/parallel/registry.py b/cv/detection/yolof/pytorch/mmcv/parallel/registry.py deleted file mode 100755 index 144f9fb1..00000000 --- a/cv/detection/yolof/pytorch/mmcv/parallel/registry.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from torch.nn.parallel import DataParallel, DistributedDataParallel - -from mmcv.utils import Registry - -MODULE_WRAPPERS = Registry('module wrapper') -MODULE_WRAPPERS.register_module(module=DataParallel) -MODULE_WRAPPERS.register_module(module=DistributedDataParallel) diff --git a/cv/detection/yolof/pytorch/mmcv/parallel/scatter_gather.py b/cv/detection/yolof/pytorch/mmcv/parallel/scatter_gather.py deleted file mode 100755 index 900ff885..00000000 --- a/cv/detection/yolof/pytorch/mmcv/parallel/scatter_gather.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import Scatter as OrigScatter - -from ._functions import Scatter -from .data_container import DataContainer - - -def scatter(inputs, target_gpus, dim=0): - """Scatter inputs to target gpus. - - The only difference from original :func:`scatter` is to add support for - :type:`~mmcv.parallel.DataContainer`. 
- """ - - def scatter_map(obj): - if isinstance(obj, torch.Tensor): - if target_gpus != [-1]: - return OrigScatter.apply(target_gpus, None, dim, obj) - else: - # for CPU inference we use self-implemented scatter - return Scatter.forward(target_gpus, obj) - if isinstance(obj, DataContainer): - if obj.cpu_only: - return obj.data - else: - return Scatter.forward(target_gpus, obj.data) - if isinstance(obj, tuple) and len(obj) > 0: - return list(zip(*map(scatter_map, obj))) - if isinstance(obj, list) and len(obj) > 0: - out = list(map(list, zip(*map(scatter_map, obj)))) - return out - if isinstance(obj, dict) and len(obj) > 0: - out = list(map(type(obj), zip(*map(scatter_map, obj.items())))) - return out - return [obj for targets in target_gpus] - - # After scatter_map is called, a scatter_map cell will exist. This cell - # has a reference to the actual function scatter_map, which has references - # to a closure that has a reference to the scatter_map cell (because the - # fn is recursive). To avoid this reference cycle, we set the function to - # None, clearing the cell - try: - return scatter_map(inputs) - finally: - scatter_map = None - - -def scatter_kwargs(inputs, kwargs, target_gpus, dim=0): - """Scatter with support for kwargs dictionary.""" - inputs = scatter(inputs, target_gpus, dim) if inputs else [] - kwargs = scatter(kwargs, target_gpus, dim) if kwargs else [] - if len(inputs) < len(kwargs): - inputs.extend([() for _ in range(len(kwargs) - len(inputs))]) - elif len(kwargs) < len(inputs): - kwargs.extend([{} for _ in range(len(inputs) - len(kwargs))]) - inputs = tuple(inputs) - kwargs = tuple(kwargs) - return inputs, kwargs diff --git a/cv/detection/yolof/pytorch/mmcv/parallel/utils.py b/cv/detection/yolof/pytorch/mmcv/parallel/utils.py deleted file mode 100755 index 0f5712cb..00000000 --- a/cv/detection/yolof/pytorch/mmcv/parallel/utils.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .registry import MODULE_WRAPPERS - - -def is_module_wrapper(module): - """Check if a module is a module wrapper. - - The following 3 modules in MMCV (and their subclasses) are regarded as - module wrappers: DataParallel, DistributedDataParallel, - MMDistributedDataParallel (the deprecated version). You may add you own - module wrapper by registering it to mmcv.parallel.MODULE_WRAPPERS. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: True if the input module is a module wrapper. - """ - module_wrappers = tuple(MODULE_WRAPPERS.module_dict.values()) - return isinstance(module, module_wrappers) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/__init__.py b/cv/detection/yolof/pytorch/mmcv/runner/__init__.py deleted file mode 100755 index 52e4b48d..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/__init__.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base_module import BaseModule, ModuleList, Sequential -from .base_runner import BaseRunner -from .builder import RUNNERS, build_runner -from .checkpoint import (CheckpointLoader, _load_checkpoint, - _load_checkpoint_with_prefix, load_checkpoint, - load_state_dict, save_checkpoint, weights_to_cpu) -from .default_constructor import DefaultRunnerConstructor -from .dist_utils import (allreduce_grads, allreduce_params, get_dist_info, - init_dist, master_only) -from .epoch_based_runner import EpochBasedRunner, Runner -from .fp16_utils import LossScaler, auto_fp16, force_fp32, wrap_fp16_model -from .hooks import (HOOKS, CheckpointHook, ClosureHook, DistEvalHook, - DistSamplerSeedHook, DvcliveLoggerHook, EMAHook, EvalHook, - Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, Hook, IterTimerHook, - LoggerHook, LrUpdaterHook, MlflowLoggerHook, - NeptuneLoggerHook, OptimizerHook, PaviLoggerHook, - SyncBuffersHook, TensorboardLoggerHook, TextLoggerHook, - WandbLoggerHook) -from .iter_based_runner import IterBasedRunner, IterLoader -from .log_buffer import LogBuffer -from .optimizer import (OPTIMIZER_BUILDERS, OPTIMIZERS, - DefaultOptimizerConstructor, build_optimizer, - build_optimizer_constructor) -from .priority import Priority, get_priority -from .utils import get_host_info, get_time_str, obj_from_dict, set_random_seed - -__all__ = [ - 'BaseRunner', 'Runner', 'EpochBasedRunner', 'IterBasedRunner', 'LogBuffer', - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'OptimizerHook', 'IterTimerHook', 'DistSamplerSeedHook', 'LoggerHook', - 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook', - 'NeptuneLoggerHook', 'WandbLoggerHook', 'MlflowLoggerHook', - 'DvcliveLoggerHook', '_load_checkpoint', 'load_state_dict', - 'load_checkpoint', 'weights_to_cpu', 'save_checkpoint', 'Priority', - 'get_priority', 'get_host_info', 'get_time_str', 'obj_from_dict', - 'init_dist', 'get_dist_info', 'master_only', 'OPTIMIZER_BUILDERS', - 'OPTIMIZERS', 'DefaultOptimizerConstructor', 'build_optimizer', - 'build_optimizer_constructor', 'IterLoader', 'set_random_seed', - 'auto_fp16', 'force_fp32', 'wrap_fp16_model', 'Fp16OptimizerHook', - 'SyncBuffersHook', 'EMAHook', 'build_runner', 'RUNNERS', 'allreduce_grads', - 'allreduce_params', 'LossScaler', 'CheckpointLoader', 'BaseModule', - '_load_checkpoint_with_prefix', 'EvalHook', 'DistEvalHook', 'Sequential', - 'ModuleList', 'GradientCumulativeOptimizerHook', - 'GradientCumulativeFp16OptimizerHook', 'DefaultRunnerConstructor' -] diff --git a/cv/detection/yolof/pytorch/mmcv/runner/base_module.py b/cv/detection/yolof/pytorch/mmcv/runner/base_module.py deleted file mode 100755 index 529575b8..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/base_module.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings -from abc import ABCMeta -from collections import defaultdict -from logging import FileHandler - -import torch.nn as nn - -from mmcv.runner.dist_utils import master_only -from mmcv.utils.logging import get_logger, logger_initialized, print_log - - -class BaseModule(nn.Module, metaclass=ABCMeta): - """Base module for all modules in openmmlab. - - ``BaseModule`` is a wrapper of ``torch.nn.Module`` with additional - functionality of parameter initialization. Compared with - ``torch.nn.Module``, ``BaseModule`` mainly adds three attributes. - - - ``init_cfg``: the config to control the initialization. 
- - ``init_weights``: The function of parameter - initialization and recording initialization - information. - - ``_params_init_info``: Used to track the parameter - initialization information. This attribute only - exists during executing the ``init_weights``. - - Args: - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, init_cfg=None): - """Initialize BaseModule, inherited from `torch.nn.Module`""" - - # NOTE init_cfg can be defined in different levels, but init_cfg - # in low levels has a higher priority. - - super(BaseModule, self).__init__() - # define default value of init_cfg instead of hard code - # in init_weights() function - self._is_init = False - - self.init_cfg = copy.deepcopy(init_cfg) - - # Backward compatibility in derived classes - # if pretrained is not None: - # warnings.warn('DeprecationWarning: pretrained is a deprecated \ - # key, please consider using init_cfg') - # self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - @property - def is_init(self): - return self._is_init - - def init_weights(self): - """Initialize the weights.""" - - is_top_level_module = False - # check if it is top-level module - if not hasattr(self, '_params_init_info'): - # The `_params_init_info` is used to record the initialization - # information of the parameters - # the key should be the obj:`nn.Parameter` of model and the value - # should be a dict containing - # - init_info (str): The string that describes the initialization. - # - tmp_mean_value (FloatTensor): The mean of the parameter, - # which indicates whether the parameter has been modified. - # this attribute would be deleted after all parameters - # is initialized. - self._params_init_info = defaultdict(dict) - is_top_level_module = True - - # Initialize the `_params_init_info`, - # When detecting the `tmp_mean_value` of - # the corresponding parameter is changed, update related - # initialization information - for name, param in self.named_parameters(): - self._params_init_info[param][ - 'init_info'] = f'The value is the same before and ' \ - f'after calling `init_weights` ' \ - f'of {self.__class__.__name__} ' - self._params_init_info[param][ - 'tmp_mean_value'] = param.data.mean() - - # pass `params_init_info` to all submodules - # All submodules share the same `params_init_info`, - # so it will be updated when parameters are - # modified at any level of the model. 
- for sub_module in self.modules(): - sub_module._params_init_info = self._params_init_info - - # Get the initialized logger, if not exist, - # create a logger named `mmcv` - logger_names = list(logger_initialized.keys()) - logger_name = logger_names[0] if logger_names else 'mmcv' - - from ..cnn import initialize - from ..cnn.utils.weight_init import update_init_info - module_name = self.__class__.__name__ - if not self._is_init: - if self.init_cfg: - print_log( - f'initialize {module_name} with init_cfg {self.init_cfg}', - logger=logger_name) - initialize(self, self.init_cfg) - if isinstance(self.init_cfg, dict): - # prevent the parameters of - # the pre-trained model - # from being overwritten by - # the `init_weights` - if self.init_cfg['type'] == 'Pretrained': - return - - for m in self.children(): - if hasattr(m, 'init_weights'): - m.init_weights() - # users may overload the `init_weights` - update_init_info( - m, - init_info=f'Initialized by ' - f'user-defined `init_weights`' - f' in {m.__class__.__name__} ') - - self._is_init = True - else: - warnings.warn(f'init_weights of {self.__class__.__name__} has ' - f'been called more than once.') - - if is_top_level_module: - self._dump_init_info(logger_name) - - for sub_module in self.modules(): - del sub_module._params_init_info - - @master_only - def _dump_init_info(self, logger_name): - """Dump the initialization information to a file named - `initialization.log.json` in workdir. - - Args: - logger_name (str): The name of logger. - """ - - logger = get_logger(logger_name) - - with_file_handler = False - # dump the information to the logger file if there is a `FileHandler` - for handler in logger.handlers: - if isinstance(handler, FileHandler): - handler.stream.write( - 'Name of parameter - Initialization information\n') - for name, param in self.named_parameters(): - handler.stream.write( - f'\n{name} - {param.shape}: ' - f"\n{self._params_init_info[param]['init_info']} \n") - handler.stream.flush() - with_file_handler = True - if not with_file_handler: - for name, param in self.named_parameters(): - print_log( - f'\n{name} - {param.shape}: ' - f"\n{self._params_init_info[param]['init_info']} \n ", - logger=logger_name) - - def __repr__(self): - s = super().__repr__() - if self.init_cfg: - s += f'\ninit_cfg={self.init_cfg}' - return s - - -class Sequential(BaseModule, nn.Sequential): - """Sequential module in openmmlab. - - Args: - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, *args, init_cfg=None): - BaseModule.__init__(self, init_cfg) - nn.Sequential.__init__(self, *args) - - -class ModuleList(BaseModule, nn.ModuleList): - """ModuleList in openmmlab. - - Args: - modules (iterable, optional): an iterable of modules to add. - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, modules=None, init_cfg=None): - BaseModule.__init__(self, init_cfg) - nn.ModuleList.__init__(self, modules) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/base_runner.py b/cv/detection/yolof/pytorch/mmcv/runner/base_runner.py deleted file mode 100755 index 25cd98f5..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/base_runner.py +++ /dev/null @@ -1,542 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import logging -import os.path as osp -import warnings -from abc import ABCMeta, abstractmethod - -import torch -from torch.optim import Optimizer - -import mmcv -from ..parallel import is_module_wrapper -from .checkpoint import load_checkpoint -from .dist_utils import get_dist_info -from .hooks import HOOKS, Hook -from .log_buffer import LogBuffer -from .priority import Priority, get_priority -from .utils import get_time_str - - -class BaseRunner(metaclass=ABCMeta): - """The base class of Runner, a training helper for PyTorch. - - All subclasses should implement the following APIs: - - - ``run()`` - - ``train()`` - - ``val()`` - - ``save_checkpoint()`` - - Args: - model (:obj:`torch.nn.Module`): The model to be run. - batch_processor (callable): A callable method that process a data - batch. The interface of this method should be - `batch_processor(model, data, train_mode) -> dict` - optimizer (dict or :obj:`torch.optim.Optimizer`): It can be either an - optimizer (in most cases) or a dict of optimizers (in models that - requires more than one optimizer, e.g., GAN). - work_dir (str, optional): The working directory to save checkpoints - and logs. Defaults to None. - logger (:obj:`logging.Logger`): Logger used during training. - Defaults to None. (The default value is just for backward - compatibility) - meta (dict | None): A dict records some import information such as - environment info and seed, which will be logged in logger hook. - Defaults to None. - max_epochs (int, optional): Total training epochs. - max_iters (int, optional): Total training iterations. - """ - - def __init__(self, - model, - batch_processor=None, - optimizer=None, - work_dir=None, - logger=None, - meta=None, - max_iters=None, - max_epochs=None): - if batch_processor is not None: - if not callable(batch_processor): - raise TypeError('batch_processor must be callable, ' - f'but got {type(batch_processor)}') - warnings.warn('batch_processor is deprecated, please implement ' - 'train_step() and val_step() in the model instead.') - # raise an error is `batch_processor` is not None and - # `model.train_step()` exists. 
- if is_module_wrapper(model): - _model = model.module - else: - _model = model - if hasattr(_model, 'train_step') or hasattr(_model, 'val_step'): - raise RuntimeError( - 'batch_processor and model.train_step()/model.val_step() ' - 'cannot be both available.') - else: - assert hasattr(model, 'train_step') - - # check the type of `optimizer` - if isinstance(optimizer, dict): - for name, optim in optimizer.items(): - if not isinstance(optim, Optimizer): - raise TypeError( - f'optimizer must be a dict of torch.optim.Optimizers, ' - f'but optimizer["{name}"] is a {type(optim)}') - elif not isinstance(optimizer, Optimizer) and optimizer is not None: - raise TypeError( - f'optimizer must be a torch.optim.Optimizer object ' - f'or dict or None, but got {type(optimizer)}') - - # check the type of `logger` - if not isinstance(logger, logging.Logger): - raise TypeError(f'logger must be a logging.Logger object, ' - f'but got {type(logger)}') - - # check the type of `meta` - if meta is not None and not isinstance(meta, dict): - raise TypeError( - f'meta must be a dict or None, but got {type(meta)}') - - self.model = model - self.batch_processor = batch_processor - self.optimizer = optimizer - self.logger = logger - self.meta = meta - # create work_dir - if mmcv.is_str(work_dir): - self.work_dir = osp.abspath(work_dir) - mmcv.mkdir_or_exist(self.work_dir) - elif work_dir is None: - self.work_dir = None - else: - raise TypeError('"work_dir" must be a str or None') - - # get model name from the model class - if hasattr(self.model, 'module'): - self._model_name = self.model.module.__class__.__name__ - else: - self._model_name = self.model.__class__.__name__ - - self._rank, self._world_size = get_dist_info() - self.timestamp = get_time_str() - self.mode = None - self._hooks = [] - self._epoch = 0 - self._iter = 0 - self._inner_iter = 0 - - if max_epochs is not None and max_iters is not None: - raise ValueError( - 'Only one of `max_epochs` or `max_iters` can be set.') - - self._max_epochs = max_epochs - self._max_iters = max_iters - # TODO: Redesign LogBuffer, it is not flexible and elegant enough - self.log_buffer = LogBuffer() - - @property - def model_name(self): - """str: Name of the model, usually the module class name.""" - return self._model_name - - @property - def rank(self): - """int: Rank of current process. (distributed training)""" - return self._rank - - @property - def world_size(self): - """int: Number of processes participating in the job. - (distributed training)""" - return self._world_size - - @property - def hooks(self): - """list[:obj:`Hook`]: A list of registered hooks.""" - return self._hooks - - @property - def epoch(self): - """int: Current epoch.""" - return self._epoch - - @property - def iter(self): - """int: Current iteration.""" - return self._iter - - @property - def inner_iter(self): - """int: Iteration in an epoch.""" - return self._inner_iter - - @property - def max_epochs(self): - """int: Maximum training epochs.""" - return self._max_epochs - - @property - def max_iters(self): - """int: Maximum training iterations.""" - return self._max_iters - - @abstractmethod - def train(self): - pass - - @abstractmethod - def val(self): - pass - - @abstractmethod - def run(self, data_loaders, workflow, **kwargs): - pass - - @abstractmethod - def save_checkpoint(self, - out_dir, - filename_tmpl, - save_optimizer=True, - meta=None, - create_symlink=True): - pass - - def current_lr(self): - """Get current learning rates. 
- - Returns: - list[float] | dict[str, list[float]]: Current learning rates of all - param groups. If the runner has a dict of optimizers, this - method will return a dict. - """ - if isinstance(self.optimizer, torch.optim.Optimizer): - lr = [group['lr'] for group in self.optimizer.param_groups] - elif isinstance(self.optimizer, dict): - lr = dict() - for name, optim in self.optimizer.items(): - lr[name] = [group['lr'] for group in optim.param_groups] - else: - raise RuntimeError( - 'lr is not applicable because optimizer does not exist.') - return lr - - def current_momentum(self): - """Get current momentums. - - Returns: - list[float] | dict[str, list[float]]: Current momentums of all - param groups. If the runner has a dict of optimizers, this - method will return a dict. - """ - - def _get_momentum(optimizer): - momentums = [] - for group in optimizer.param_groups: - if 'momentum' in group.keys(): - momentums.append(group['momentum']) - elif 'betas' in group.keys(): - momentums.append(group['betas'][0]) - else: - momentums.append(0) - return momentums - - if self.optimizer is None: - raise RuntimeError( - 'momentum is not applicable because optimizer does not exist.') - elif isinstance(self.optimizer, torch.optim.Optimizer): - momentums = _get_momentum(self.optimizer) - elif isinstance(self.optimizer, dict): - momentums = dict() - for name, optim in self.optimizer.items(): - momentums[name] = _get_momentum(optim) - return momentums - - def register_hook(self, hook, priority='NORMAL'): - """Register a hook into the hook list. - - The hook will be inserted into a priority queue, with the specified - priority (See :class:`Priority` for details of priorities). - For hooks with the same priority, they will be triggered in the same - order as they are registered. - - Args: - hook (:obj:`Hook`): The hook to be registered. - priority (int or str or :obj:`Priority`): Hook priority. - Lower value means higher priority. - """ - assert isinstance(hook, Hook) - if hasattr(hook, 'priority'): - raise ValueError('"priority" is a reserved attribute for hooks') - priority = get_priority(priority) - hook.priority = priority - # insert the hook to a sorted list - inserted = False - for i in range(len(self._hooks) - 1, -1, -1): - if priority >= self._hooks[i].priority: - self._hooks.insert(i + 1, hook) - inserted = True - break - if not inserted: - self._hooks.insert(0, hook) - - def register_hook_from_cfg(self, hook_cfg): - """Register a hook from its cfg. - - Args: - hook_cfg (dict): Hook config. It should have at least keys 'type' - and 'priority' indicating its type and priority. - - Notes: - The specific hook class to register should not use 'type' and - 'priority' arguments during initialization. - """ - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = mmcv.build_from_cfg(hook_cfg, HOOKS) - self.register_hook(hook, priority=priority) - - def call_hook(self, fn_name): - """Call all hooks. - - Args: - fn_name (str): The function name in each hook to be called, such as - "before_train_epoch". 
- """ - for hook in self._hooks: - getattr(hook, fn_name)(self) - - def get_hook_info(self): - # Get hooks info in each stage - stage_hook_map = {stage: [] for stage in Hook.stages} - for hook in self.hooks: - try: - priority = Priority(hook.priority).name - except ValueError: - priority = hook.priority - classname = hook.__class__.__name__ - hook_info = f'({priority:<12}) {classname:<35}' - for trigger_stage in hook.get_triggered_stages(): - stage_hook_map[trigger_stage].append(hook_info) - - stage_hook_infos = [] - for stage in Hook.stages: - hook_infos = stage_hook_map[stage] - if len(hook_infos) > 0: - info = f'{stage}:\n' - info += '\n'.join(hook_infos) - info += '\n -------------------- ' - stage_hook_infos.append(info) - return '\n'.join(stage_hook_infos) - - def load_checkpoint(self, - filename, - map_location='cpu', - strict=False, - revise_keys=[(r'^module.', '')]): - return load_checkpoint( - self.model, - filename, - map_location, - strict, - self.logger, - revise_keys=revise_keys) - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - if map_location == 'default': - if torch.cuda.is_available(): - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint(checkpoint) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - if self.meta is None: - self.meta = {} - self.meta.setdefault('hook_msgs', {}) - # load `last_ckpt`, `best_score`, `best_ckpt`, etc. for hook messages - self.meta['hook_msgs'].update(checkpoint['meta'].get('hook_msgs', {})) - - # Re-calculate the number of iterations when resuming - # models with different number of GPUs - if 'config' in checkpoint['meta']: - config = mmcv.Config.fromstring( - checkpoint['meta']['config'], file_format='.py') - previous_gpu_ids = config.get('gpu_ids', None) - if previous_gpu_ids and len(previous_gpu_ids) > 0 and len( - previous_gpu_ids) != self.world_size: - self._iter = int(self._iter * len(previous_gpu_ids) / - self.world_size) - self.logger.info('the iteration number is changed due to ' - 'change of GPU number') - - # resume meta information meta - self.meta = checkpoint['meta'] - - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - self.logger.info('resumed epoch %d, iter %d', self.epoch, self.iter) - - def register_lr_hook(self, lr_config): - if lr_config is None: - return - elif isinstance(lr_config, dict): - assert 'policy' in lr_config - policy_type = lr_config.pop('policy') - # If the type of policy is all in lower case, e.g., 'cyclic', - # then its first letter will be capitalized, e.g., to be 'Cyclic'. - # This is for the convenient usage of Lr updater. - # Since this is not applicable for ` - # CosineAnnealingLrUpdater`, - # the string will not be changed if it contains capital letters. 
- if policy_type == policy_type.lower(): - policy_type = policy_type.title() - hook_type = policy_type + 'LrUpdaterHook' - lr_config['type'] = hook_type - hook = mmcv.build_from_cfg(lr_config, HOOKS) - else: - hook = lr_config - self.register_hook(hook, priority='VERY_HIGH') - - def register_momentum_hook(self, momentum_config): - if momentum_config is None: - return - if isinstance(momentum_config, dict): - assert 'policy' in momentum_config - policy_type = momentum_config.pop('policy') - # If the type of policy is all in lower case, e.g., 'cyclic', - # then its first letter will be capitalized, e.g., to be 'Cyclic'. - # This is for the convenient usage of momentum updater. - # Since this is not applicable for - # `CosineAnnealingMomentumUpdater`, - # the string will not be changed if it contains capital letters. - if policy_type == policy_type.lower(): - policy_type = policy_type.title() - hook_type = policy_type + 'MomentumUpdaterHook' - momentum_config['type'] = hook_type - hook = mmcv.build_from_cfg(momentum_config, HOOKS) - else: - hook = momentum_config - self.register_hook(hook, priority='HIGH') - - def register_optimizer_hook(self, optimizer_config): - if optimizer_config is None: - return - if isinstance(optimizer_config, dict): - optimizer_config.setdefault('type', 'OptimizerHook') - hook = mmcv.build_from_cfg(optimizer_config, HOOKS) - else: - hook = optimizer_config - self.register_hook(hook, priority='ABOVE_NORMAL') - - def register_checkpoint_hook(self, checkpoint_config): - if checkpoint_config is None: - return - if isinstance(checkpoint_config, dict): - checkpoint_config.setdefault('type', 'CheckpointHook') - hook = mmcv.build_from_cfg(checkpoint_config, HOOKS) - else: - hook = checkpoint_config - self.register_hook(hook, priority='NORMAL') - - def register_logger_hooks(self, log_config): - if log_config is None: - return - log_interval = log_config['interval'] - for info in log_config['hooks']: - logger_hook = mmcv.build_from_cfg( - info, HOOKS, default_args=dict(interval=log_interval)) - self.register_hook(logger_hook, priority='VERY_LOW') - - def register_timer_hook(self, timer_config): - if timer_config is None: - return - if isinstance(timer_config, dict): - timer_config_ = copy.deepcopy(timer_config) - hook = mmcv.build_from_cfg(timer_config_, HOOKS) - else: - hook = timer_config - self.register_hook(hook, priority='LOW') - - def register_custom_hooks(self, custom_config): - if custom_config is None: - return - - if not isinstance(custom_config, list): - custom_config = [custom_config] - - for item in custom_config: - if isinstance(item, dict): - self.register_hook_from_cfg(item) - else: - self.register_hook(item, priority='NORMAL') - - def register_profiler_hook(self, profiler_config): - if profiler_config is None: - return - if isinstance(profiler_config, dict): - profiler_config.setdefault('type', 'ProfilerHook') - hook = mmcv.build_from_cfg(profiler_config, HOOKS) - else: - hook = profiler_config - self.register_hook(hook) - - def register_training_hooks(self, - lr_config, - optimizer_config=None, - checkpoint_config=None, - log_config=None, - momentum_config=None, - timer_config=dict(type='IterTimerHook'), - custom_hooks_config=None): - """Register default and custom hooks for training. 
- - Default and custom hooks include: - - +----------------------+-------------------------+ - | Hooks | Priority | - +======================+=========================+ - | LrUpdaterHook | VERY_HIGH (10) | - +----------------------+-------------------------+ - | MomentumUpdaterHook | HIGH (30) | - +----------------------+-------------------------+ - | OptimizerStepperHook | ABOVE_NORMAL (40) | - +----------------------+-------------------------+ - | CheckpointSaverHook | NORMAL (50) | - +----------------------+-------------------------+ - | IterTimerHook | LOW (70) | - +----------------------+-------------------------+ - | LoggerHook(s) | VERY_LOW (90) | - +----------------------+-------------------------+ - | CustomHook(s) | defaults to NORMAL (50) | - +----------------------+-------------------------+ - - If custom hooks have same priority with default hooks, custom hooks - will be triggered after default hooks. - """ - self.register_lr_hook(lr_config) - self.register_momentum_hook(momentum_config) - self.register_optimizer_hook(optimizer_config) - self.register_checkpoint_hook(checkpoint_config) - self.register_timer_hook(timer_config) - self.register_logger_hooks(log_config) - self.register_custom_hooks(custom_hooks_config) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/builder.py b/cv/detection/yolof/pytorch/mmcv/runner/builder.py deleted file mode 100755 index 77c96ba0..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/builder.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -from ..utils import Registry - -RUNNERS = Registry('runner') -RUNNER_BUILDERS = Registry('runner builder') - - -def build_runner_constructor(cfg): - return RUNNER_BUILDERS.build(cfg) - - -def build_runner(cfg, default_args=None): - runner_cfg = copy.deepcopy(cfg) - constructor_type = runner_cfg.pop('constructor', - 'DefaultRunnerConstructor') - runner_constructor = build_runner_constructor( - dict( - type=constructor_type, - runner_cfg=runner_cfg, - default_args=default_args)) - runner = runner_constructor() - return runner diff --git a/cv/detection/yolof/pytorch/mmcv/runner/checkpoint.py b/cv/detection/yolof/pytorch/mmcv/runner/checkpoint.py deleted file mode 100755 index 965c723a..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/checkpoint.py +++ /dev/null @@ -1,707 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os -import os.path as osp -import pkgutil -import re -import time -import warnings -from collections import OrderedDict -from importlib import import_module -from tempfile import TemporaryDirectory - -import torch -import torchvision -from torch.optim import Optimizer -from torch.utils import model_zoo - -import mmcv -from ..fileio import FileClient -from ..fileio import load as load_file -from ..parallel import is_module_wrapper -from ..utils import mkdir_or_exist -from .dist_utils import get_dist_info - -ENV_MMCV_HOME = 'MMCV_HOME' -ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME' -DEFAULT_CACHE_DIR = '~/.cache' - - -def _get_mmcv_home(): - mmcv_home = os.path.expanduser( - os.getenv( - ENV_MMCV_HOME, - os.path.join( - os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv'))) - - mkdir_or_exist(mmcv_home) - return mmcv_home - - -def load_state_dict(module, state_dict, strict=False, logger=None): - """Load state_dict to a module. - - This method is modified from :meth:`torch.nn.Module.load_state_dict`. 
- Default value for ``strict`` is set to ``False`` and the message for - param mismatch will be shown even if strict is False. - - Args: - module (Module): Module that receives the state_dict. - state_dict (OrderedDict): Weights. - strict (bool): whether to strictly enforce that the keys - in :attr:`state_dict` match the keys returned by this module's - :meth:`~torch.nn.Module.state_dict` function. Default: ``False``. - logger (:obj:`logging.Logger`, optional): Logger to log the error - message. If not specified, print function will be used. - """ - unexpected_keys = [] - all_missing_keys = [] - err_msg = [] - - metadata = getattr(state_dict, '_metadata', None) - state_dict = state_dict.copy() - if metadata is not None: - state_dict._metadata = metadata - - # use _load_from_state_dict to enable checkpoint version control - def load(module, prefix=''): - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - local_metadata = {} if metadata is None else metadata.get( - prefix[:-1], {}) - module._load_from_state_dict(state_dict, prefix, local_metadata, True, - all_missing_keys, unexpected_keys, - err_msg) - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + '.') - - load(module) - load = None # break load->load reference cycle - - # ignore "num_batches_tracked" of BN layers - missing_keys = [ - key for key in all_missing_keys if 'num_batches_tracked' not in key - ] - - if unexpected_keys: - err_msg.append('unexpected key in source ' - f'state_dict: {", ".join(unexpected_keys)}\n') - if missing_keys: - err_msg.append( - f'missing keys in source state_dict: {", ".join(missing_keys)}\n') - - rank, _ = get_dist_info() - if len(err_msg) > 0 and rank == 0: - err_msg.insert( - 0, 'The model and loaded state dict do not match exactly\n') - err_msg = '\n'.join(err_msg) - if strict: - raise RuntimeError(err_msg) - elif logger is not None: - logger.warning(err_msg) - else: - print(err_msg) - - -def get_torchvision_models(): - model_urls = dict() - for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__): - if ispkg: - continue - _zoo = import_module(f'torchvision.models.{name}') - if hasattr(_zoo, 'model_urls'): - _urls = getattr(_zoo, 'model_urls') - model_urls.update(_urls) - return model_urls - - -def get_external_models(): - mmcv_home = _get_mmcv_home() - default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/model.json') - default_urls = load_file(default_json_path) - assert isinstance(default_urls, dict) - external_json_path = osp.join(mmcv_home, 'model.json') - if osp.exists(external_json_path): - external_urls = load_file(external_json_path) - assert isinstance(external_urls, dict) - default_urls.update(external_urls) - - return default_urls - - -def get_mmcls_models(): - mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json') - mmcls_urls = load_file(mmcls_json_path) - - return mmcls_urls - - -def get_deprecated_model_names(): - deprecate_json_path = osp.join(mmcv.__path__[0], - 'model_zoo/deprecated.json') - deprecate_urls = load_file(deprecate_json_path) - assert isinstance(deprecate_urls, dict) - - return deprecate_urls - - -def _process_mmcls_checkpoint(checkpoint): - state_dict = checkpoint['state_dict'] - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k.startswith('backbone.'): - new_state_dict[k[9:]] = v - new_checkpoint = dict(state_dict=new_state_dict) - - 
return new_checkpoint - - -class CheckpointLoader: - """A general checkpoint loader to manage all schemes.""" - - _schemes = {} - - @classmethod - def _register_scheme(cls, prefixes, loader, force=False): - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if (prefix not in cls._schemes) or force: - cls._schemes[prefix] = loader - else: - raise KeyError( - f'{prefix} is already registered as a loader backend, ' - 'add "force=True" if you want to override it') - # sort, longer prefixes take priority - cls._schemes = OrderedDict( - sorted(cls._schemes.items(), key=lambda t: t[0], reverse=True)) - - @classmethod - def register_scheme(cls, prefixes, loader=None, force=False): - """Register a loader to CheckpointLoader. - - This method can be used as a normal class method or a decorator. - - Args: - prefixes (str or list[str] or tuple[str]): - The prefix of the registered loader. - loader (function, optional): The loader function to be registered. - When this method is used as a decorator, loader is None. - Defaults to None. - force (bool, optional): Whether to override the loader - if the prefix has already been registered. Defaults to False. - """ - - if loader is not None: - cls._register_scheme(prefixes, loader, force=force) - return - - def _register(loader_cls): - cls._register_scheme(prefixes, loader_cls, force=force) - return loader_cls - - return _register - - @classmethod - def _get_checkpoint_loader(cls, path): - """Finds a loader that supports the given path. Falls back to the local - loader if no other loader is found. - - Args: - path (str): checkpoint path - - Returns: - loader (function): checkpoint loader - """ - - for p in cls._schemes: - if path.startswith(p): - return cls._schemes[p] - - @classmethod - def load_checkpoint(cls, filename, map_location=None, logger=None): - """load checkpoint through URL scheme path. - - Args: - filename (str): checkpoint file name with given prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - logger (:mod:`logging.Logger`, optional): The logger for message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - checkpoint_loader = cls._get_checkpoint_loader(filename) - class_name = checkpoint_loader.__name__ - mmcv.print_log( - f'load checkpoint from {class_name[10:]} path: {filename}', logger) - return checkpoint_loader(filename, map_location) - - -@CheckpointLoader.register_scheme(prefixes='') -def load_from_local(filename, map_location): - """load checkpoint by local file path. - - Args: - filename (str): local checkpoint file path - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=('http://', 'https://')) -def load_from_http(filename, map_location=None, model_dir=None): - """load checkpoint through HTTP or HTTPS scheme path. In distributed - setting, this function only download checkpoint at local rank 0. - - Args: - filename (str): checkpoint file path with modelzoo or - torchvision prefix - map_location (str, optional): Same as :func:`torch.load`. - model_dir (string, optional): directory in which to save the object, - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - if rank == 0: - checkpoint = model_zoo.load_url( - filename, model_dir=model_dir, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - checkpoint = model_zoo.load_url( - filename, model_dir=model_dir, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='pavi://') -def load_from_pavi(filename, map_location=None): - """load checkpoint through the file path prefixed with pavi. In distributed - setting, this function download ckpt at all ranks to different temporary - directories. - - Args: - filename (str): checkpoint file path with pavi prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - assert filename.startswith('pavi://'), \ - f'Expected filename startswith `pavi://`, but get {filename}' - model_path = filename[7:] - - try: - from pavi import modelcloud - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load(downloaded_file, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='s3://') -def load_from_ceph(filename, map_location=None, backend='petrel'): - """load checkpoint through the file path prefixed with s3. In distributed - setting, this function download ckpt at all ranks to different temporary - directories. - - Args: - filename (str): checkpoint file path with s3 prefix - map_location (str, optional): Same as :func:`torch.load`. - backend (str, optional): The storage backend type. Options are 'ceph', - 'petrel'. Default: 'petrel'. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - allowed_backends = ['ceph', 'petrel'] - if backend not in allowed_backends: - raise ValueError(f'Load from Backend {backend} is not supported.') - - if backend == 'ceph': - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead') - - # CephClient and PetrelBackend have the same prefix 's3://' and the latter - # will be chosen as default. If PetrelBackend can not be instantiated - # successfully, the CephClient will be chosen. - try: - file_client = FileClient(backend=backend) - except ImportError: - allowed_backends.remove(backend) - file_client = FileClient(backend=allowed_backends[0]) - - with io.BytesIO(file_client.get(filename)) as buffer: - checkpoint = torch.load(buffer, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=('modelzoo://', 'torchvision://')) -def load_from_torchvision(filename, map_location=None): - """load checkpoint through the file path prefixed with modelzoo or - torchvision. - - Args: - filename (str): checkpoint file path with modelzoo or - torchvision prefix - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - model_urls = get_torchvision_models() - if filename.startswith('modelzoo://'): - warnings.warn('The URL scheme of "modelzoo://" is deprecated, please ' - 'use "torchvision://" instead') - model_name = filename[11:] - else: - model_name = filename[14:] - return load_from_http(model_urls[model_name], map_location=map_location) - - -@CheckpointLoader.register_scheme(prefixes=('open-mmlab://', 'openmmlab://')) -def load_from_openmmlab(filename, map_location=None): - """load checkpoint through the file path prefixed with open-mmlab or - openmmlab. - - Args: - filename (str): checkpoint file path with open-mmlab or - openmmlab prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - model_urls = get_external_models() - prefix_str = 'open-mmlab://' - if filename.startswith(prefix_str): - model_name = filename[13:] - else: - model_name = filename[12:] - prefix_str = 'openmmlab://' - - deprecated_urls = get_deprecated_model_names() - if model_name in deprecated_urls: - warnings.warn(f'{prefix_str}{model_name} is deprecated in favor ' - f'of {prefix_str}{deprecated_urls[model_name]}') - model_name = deprecated_urls[model_name] - model_url = model_urls[model_name] - # check if is url - if model_url.startswith(('http://', 'https://')): - checkpoint = load_from_http(model_url, map_location=map_location) - else: - filename = osp.join(_get_mmcv_home(), model_url) - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='mmcls://') -def load_from_mmcls(filename, map_location=None): - """load checkpoint through the file path prefixed with mmcls. - - Args: - filename (str): checkpoint file path with mmcls prefix - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - model_urls = get_mmcls_models() - model_name = filename[8:] - checkpoint = load_from_http( - model_urls[model_name], map_location=map_location) - checkpoint = _process_mmcls_checkpoint(checkpoint) - return checkpoint - - -def _load_checkpoint(filename, map_location=None, logger=None): - """Load checkpoint from somewhere (modelzoo, file, url). - - Args: - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str, optional): Same as :func:`torch.load`. - Default: None. - logger (:mod:`logging.Logger`, optional): The logger for error message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. It can be either an - OrderedDict storing model weights or a dict containing other - information, which depends on the checkpoint. - """ - return CheckpointLoader.load_checkpoint(filename, map_location, logger) - - -def _load_checkpoint_with_prefix(prefix, filename, map_location=None): - """Load partial pretrained model with specific prefix. - - Args: - prefix (str): The prefix of sub-module. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str | None): Same as :func:`torch.load`. Default: None. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - - checkpoint = _load_checkpoint(filename, map_location=map_location) - - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - if not prefix.endswith('.'): - prefix += '.' - prefix_len = len(prefix) - - state_dict = { - k[prefix_len:]: v - for k, v in state_dict.items() if k.startswith(prefix) - } - - assert state_dict, f'{prefix} is not in the pretrained model' - return state_dict - - -def load_checkpoint(model, - filename, - map_location=None, - strict=False, - logger=None, - revise_keys=[(r'^module\.', '')]): - """Load checkpoint from a file or URI. - - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - revise_keys (list): A list of customized keywords to modify the - state_dict in checkpoint. Each item is a (pattern, replacement) - pair of the regular expression operations. Default: strip - the prefix 'module.' by [(r'^module\\.', '')]. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - checkpoint = _load_checkpoint(filename, map_location, logger) - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - # strip prefix of state_dict - metadata = getattr(state_dict, '_metadata', OrderedDict()) - for p, r in revise_keys: - state_dict = OrderedDict( - {re.sub(p, r, k): v - for k, v in state_dict.items()}) - # Keep metadata in state_dict - state_dict._metadata = metadata - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - - -def weights_to_cpu(state_dict): - """Copy a model state_dict to cpu. - - Args: - state_dict (OrderedDict): Model weights on GPU. - - Returns: - OrderedDict: Model weights on GPU. - """ - state_dict_cpu = OrderedDict() - for key, val in state_dict.items(): - state_dict_cpu[key] = val.cpu() - # Keep metadata in state_dict - state_dict_cpu._metadata = getattr(state_dict, '_metadata', OrderedDict()) - return state_dict_cpu - - -def _save_to_state_dict(module, destination, prefix, keep_vars): - """Saves module state to `destination` dictionary. - - This method is modified from :meth:`torch.nn.Module._save_to_state_dict`. - - Args: - module (nn.Module): The module to generate state_dict. - destination (dict): A dict where state will be stored. - prefix (str): The prefix for parameters and buffers used in this - module. - """ - for name, param in module._parameters.items(): - if param is not None: - destination[prefix + name] = param if keep_vars else param.detach() - for name, buf in module._buffers.items(): - # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d - if buf is not None: - destination[prefix + name] = buf if keep_vars else buf.detach() - - -def get_state_dict(module, destination=None, prefix='', keep_vars=False): - """Returns a dictionary containing a whole state of the module. - - Both parameters and persistent buffers (e.g. running averages) are - included. Keys are corresponding parameter and buffer names. 
- - This method is modified from :meth:`torch.nn.Module.state_dict` to - recursively check parallel module in case that the model has a complicated - structure, e.g., nn.Module(nn.Module(DDP)). - - Args: - module (nn.Module): The module to generate state_dict. - destination (OrderedDict): Returned dict for the state of the - module. - prefix (str): Prefix of the key. - keep_vars (bool): Whether to keep the variable property of the - parameters. Default: False. - - Returns: - dict: A dictionary containing a whole state of the module. - """ - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - - # below is the same as torch.nn.Module.state_dict() - if destination is None: - destination = OrderedDict() - destination._metadata = OrderedDict() - destination._metadata[prefix[:-1]] = local_metadata = dict( - version=module._version) - _save_to_state_dict(module, destination, prefix, keep_vars) - for name, child in module._modules.items(): - if child is not None: - get_state_dict( - child, destination, prefix + name + '.', keep_vars=keep_vars) - for hook in module._state_dict_hooks.values(): - hook_result = hook(module, destination, prefix, local_metadata) - if hook_result is not None: - destination = hook_result - return destination - - -def save_checkpoint(model, - filename, - optimizer=None, - meta=None, - file_client_args=None): - """Save checkpoint to file. - - The checkpoint will have 3 fields: ``meta``, ``state_dict`` and - ``optimizer``. By default ``meta`` will contain version and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. 
- `New in version 1.3.16.` - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - if filename.startswith('pavi://'): - if file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" if filename starts with' - f'"pavi://", but got {file_client_args}') - try: - from pavi import modelcloud - from pavi import exception - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except exception.NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - file_client = FileClient.infer_client(file_client_args, filename) - with io.BytesIO() as f: - torch.save(checkpoint, f) - file_client.put(f.getvalue(), filename) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/default_constructor.py b/cv/detection/yolof/pytorch/mmcv/runner/default_constructor.py deleted file mode 100755 index 0bad847f..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/default_constructor.py +++ /dev/null @@ -1,44 +0,0 @@ -from .builder import RUNNER_BUILDERS, RUNNERS - - -@RUNNER_BUILDERS.register_module() -class DefaultRunnerConstructor: - """Default constructor for runners. - - Custom existing `Runner` like `EpocBasedRunner` though `RunnerConstructor`. - For example, We can inject some new properties and functions for `Runner`. - - Example: - >>> from mmcv.runner import RUNNER_BUILDERS, build_runner - >>> # Define a new RunnerReconstructor - >>> @RUNNER_BUILDERS.register_module() - >>> class MyRunnerConstructor: - ... def __init__(self, runner_cfg, default_args=None): - ... if not isinstance(runner_cfg, dict): - ... raise TypeError('runner_cfg should be a dict', - ... f'but got {type(runner_cfg)}') - ... self.runner_cfg = runner_cfg - ... self.default_args = default_args - ... - ... def __call__(self): - ... runner = RUNNERS.build(self.runner_cfg, - ... default_args=self.default_args) - ... # Add new properties for existing runner - ... runner.my_name = 'my_runner' - ... runner.my_function = lambda self: print(self.my_name) - ... ... - >>> # build your runner - >>> runner_cfg = dict(type='EpochBasedRunner', max_epochs=40, - ... 
constructor='MyRunnerConstructor') - >>> runner = build_runner(runner_cfg) - """ - - def __init__(self, runner_cfg, default_args=None): - if not isinstance(runner_cfg, dict): - raise TypeError('runner_cfg should be a dict', - f'but got {type(runner_cfg)}') - self.runner_cfg = runner_cfg - self.default_args = default_args - - def __call__(self): - return RUNNERS.build(self.runner_cfg, default_args=self.default_args) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/dist_utils.py b/cv/detection/yolof/pytorch/mmcv/runner/dist_utils.py deleted file mode 100755 index d3a1ef3f..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/dist_utils.py +++ /dev/null @@ -1,164 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools -import os -import subprocess -from collections import OrderedDict - -import torch -import torch.multiprocessing as mp -from torch import distributed as dist -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - - -def init_dist(launcher, backend='nccl', **kwargs): - if mp.get_start_method(allow_none=True) is None: - mp.set_start_method('spawn') - if launcher == 'pytorch': - _init_dist_pytorch(backend, **kwargs) - elif launcher == 'mpi': - _init_dist_mpi(backend, **kwargs) - elif launcher == 'slurm': - _init_dist_slurm(backend, **kwargs) - else: - raise ValueError(f'Invalid launcher type: {launcher}') - - -def _init_dist_pytorch(backend, **kwargs): - # TODO: use local_rank instead of rank % num_gpus - rank = int(os.environ['RANK']) - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(rank % num_gpus) - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_mpi(backend, **kwargs): - # TODO: use local_rank instead of rank % num_gpus - rank = int(os.environ['OMPI_COMM_WORLD_RANK']) - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(rank % num_gpus) - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_slurm(backend, port=None): - """Initialize slurm distributed training environment. - - If argument ``port`` is not specified, then the master port will be system - environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system - environment variable, then a default port ``29500`` will be used. - - Args: - backend (str): Backend of torch.distributed. - port (int, optional): Master port. Defaults to None. 
- """ - proc_id = int(os.environ['SLURM_PROCID']) - ntasks = int(os.environ['SLURM_NTASKS']) - node_list = os.environ['SLURM_NODELIST'] - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(proc_id % num_gpus) - addr = subprocess.getoutput( - f'scontrol show hostname {node_list} | head -n1') - # specify master port - if port is not None: - os.environ['MASTER_PORT'] = str(port) - elif 'MASTER_PORT' in os.environ: - pass # use MASTER_PORT in the environment variable - else: - # 29500 is torch.distributed default port - os.environ['MASTER_PORT'] = '29500' - # use MASTER_ADDR in the environment variable if it already exists - if 'MASTER_ADDR' not in os.environ: - os.environ['MASTER_ADDR'] = addr - os.environ['WORLD_SIZE'] = str(ntasks) - os.environ['LOCAL_RANK'] = str(proc_id % num_gpus) - os.environ['RANK'] = str(proc_id) - dist.init_process_group(backend=backend) - - -def get_dist_info(): - if dist.is_available() and dist.is_initialized(): - rank = dist.get_rank() - world_size = dist.get_world_size() - else: - rank = 0 - world_size = 1 - return rank, world_size - - -def master_only(func): - - @functools.wraps(func) - def wrapper(*args, **kwargs): - rank, _ = get_dist_info() - if rank == 0: - return func(*args, **kwargs) - - return wrapper - - -def allreduce_params(params, coalesce=True, bucket_size_mb=-1): - """Allreduce parameters. - - Args: - params (list[torch.Parameters]): List of parameters or buffers of a - model. - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. - """ - _, world_size = get_dist_info() - if world_size == 1: - return - params = [param.data for param in params] - if coalesce: - _allreduce_coalesced(params, world_size, bucket_size_mb) - else: - for tensor in params: - dist.all_reduce(tensor.div_(world_size)) - - -def allreduce_grads(params, coalesce=True, bucket_size_mb=-1): - """Allreduce gradients. - - Args: - params (list[torch.Parameters]): List of parameters of a model - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. - """ - grads = [ - param.grad.data for param in params - if param.requires_grad and param.grad is not None - ] - _, world_size = get_dist_info() - if world_size == 1: - return - if coalesce: - _allreduce_coalesced(grads, world_size, bucket_size_mb) - else: - for tensor in grads: - dist.all_reduce(tensor.div_(world_size)) - - -def _allreduce_coalesced(tensors, world_size, bucket_size_mb=-1): - if bucket_size_mb > 0: - bucket_size_bytes = bucket_size_mb * 1024 * 1024 - buckets = _take_tensors(tensors, bucket_size_bytes) - else: - buckets = OrderedDict() - for tensor in tensors: - tp = tensor.type() - if tp not in buckets: - buckets[tp] = [] - buckets[tp].append(tensor) - buckets = buckets.values() - - for bucket in buckets: - flat_tensors = _flatten_dense_tensors(bucket) - dist.all_reduce(flat_tensors) - flat_tensors.div_(world_size) - for tensor, synced in zip( - bucket, _unflatten_dense_tensors(flat_tensors, bucket)): - tensor.copy_(synced) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/epoch_based_runner.py b/cv/detection/yolof/pytorch/mmcv/runner/epoch_based_runner.py deleted file mode 100755 index 2dd29357..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/epoch_based_runner.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -import platform -import shutil -import time -import warnings - -import torch - -import mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .utils import get_host_info - - -@RUNNERS.register_module() -class EpochBasedRunner(BaseRunner): - """Epoch-based Runner. - - This runner train models epoch by epoch. - """ - - def run_iter(self, data_batch, train_mode, **kwargs): - if self.batch_processor is not None: - outputs = self.batch_processor( - self.model, data_batch, train_mode=train_mode, **kwargs) - elif train_mode: - outputs = self.model.train_step(data_batch, self.optimizer, - **kwargs) - else: - outputs = self.model.val_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('"batch_processor()" or "model.train_step()"' - 'and "model.val_step()" must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._max_iters = self._max_epochs * len(self.data_loader) - self.call_hook('before_train_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self._inner_iter = i - self.call_hook('before_train_iter') - self.run_iter(data_batch, train_mode=True, **kwargs) - self.call_hook('after_train_iter') - self._iter += 1 - - self.call_hook('after_train_epoch') - self._epoch += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - self.call_hook('before_val_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self._inner_iter = i - self.call_hook('before_val_iter') - self.run_iter(data_batch, train_mode=False) - self.call_hook('after_val_iter') - - self.call_hook('after_val_epoch') - - def run(self, data_loaders, workflow, max_epochs=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, epochs) to specify the - running order and epochs. E.g, [('train', 2), ('val', 1)] means - running 2 epochs for training and 1 epoch for validation, - iteratively. 
- """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_epochs is not None: - warnings.warn( - 'setting max_epochs in run is deprecated, ' - 'please set max_epochs in runner_config', DeprecationWarning) - self._max_epochs = max_epochs - - assert self._max_epochs is not None, ( - 'max_epochs must be specified during instantiation') - - for i, flow in enumerate(workflow): - mode, epochs = flow - if mode == 'train': - self._max_iters = self._max_epochs * len(data_loaders[i]) - break - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d epochs', workflow, - self._max_epochs) - self.call_hook('before_run') - - while self.epoch < self._max_epochs: - for i, flow in enumerate(workflow): - mode, epochs = flow - if isinstance(mode, str): # self.train() - if not hasattr(self, mode): - raise ValueError( - f'runner has no method named "{mode}" to run an ' - 'epoch') - epoch_runner = getattr(self, mode) - else: - raise TypeError( - 'mode in workflow must be a str, but got {}'.format( - type(mode))) - - for _ in range(epochs): - if mode == 'train' and self.epoch >= self._max_epochs: - break - epoch_runner(data_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_run') - - def save_checkpoint(self, - out_dir, - filename_tmpl='epoch_{}.pth', - save_optimizer=True, - meta=None, - create_symlink=True): - """Save the checkpoint. - - Args: - out_dir (str): The directory that checkpoints are saved. - filename_tmpl (str, optional): The checkpoint filename template, - which contains a placeholder for the epoch number. - Defaults to 'epoch_{}.pth'. - save_optimizer (bool, optional): Whether to save the optimizer to - the checkpoint. Defaults to True. - meta (dict, optional): The meta information to be saved in the - checkpoint. Defaults to None. - create_symlink (bool, optional): Whether to create a symlink - "latest.pth" to point to the latest checkpoint. - Defaults to True. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. 
- # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.epoch + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - -@RUNNERS.register_module() -class Runner(EpochBasedRunner): - """Deprecated name of EpochBasedRunner.""" - - def __init__(self, *args, **kwargs): - warnings.warn( - 'Runner was deprecated, please use EpochBasedRunner instead') - super().__init__(*args, **kwargs) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/fp16_utils.py b/cv/detection/yolof/pytorch/mmcv/runner/fp16_utils.py deleted file mode 100755 index 4baab939..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/fp16_utils.py +++ /dev/null @@ -1,410 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools -import warnings -from collections import abc -from inspect import getfullargspec - -import numpy as np -import torch -import torch.nn as nn - -from mmcv.utils import TORCH_VERSION, digit_version -from .dist_utils import allreduce_grads as _allreduce_grads - -try: - # If PyTorch version >= 1.6.0, torch.cuda.amp.autocast would be imported - # and used; otherwise, auto fp16 will adopt mmcv's implementation. - # Note that when PyTorch >= 1.6.0, we still cast tensor types to fp16 - # manually, so the behavior may not be consistent with real amp. - from torch.cuda.amp import autocast -except ImportError: - pass - - -def cast_tensor_type(inputs, src_type, dst_type): - """Recursively convert Tensor in inputs from src_type to dst_type. - - Args: - inputs: Inputs that to be casted. - src_type (torch.dtype): Source type.. - dst_type (torch.dtype): Destination type. - - Returns: - The same type with inputs, but all contained Tensors have been cast. - """ - if isinstance(inputs, nn.Module): - return inputs - elif isinstance(inputs, torch.Tensor): - return inputs.to(dst_type) - elif isinstance(inputs, str): - return inputs - elif isinstance(inputs, np.ndarray): - return inputs - elif isinstance(inputs, abc.Mapping): - return type(inputs)({ - k: cast_tensor_type(v, src_type, dst_type) - for k, v in inputs.items() - }) - elif isinstance(inputs, abc.Iterable): - return type(inputs)( - cast_tensor_type(item, src_type, dst_type) for item in inputs) - else: - return inputs - - -def auto_fp16(apply_to=None, out_fp32=False): - """Decorator to enable fp16 training automatically. - - This decorator is useful when you write custom modules and want to support - mixed precision training. If inputs arguments are fp32 tensors, they will - be converted to fp16 automatically. Arguments other than fp32 tensors are - ignored. If you are using PyTorch >= 1.6, torch.cuda.amp is used as the - backend, otherwise, original mmcv implementation will be adopted. - - Args: - apply_to (Iterable, optional): The argument names to be converted. - `None` indicates all arguments. - out_fp32 (bool): Whether to convert the output back to fp32. 
- - Example: - - >>> import torch.nn as nn - >>> class MyModule1(nn.Module): - >>> - >>> # Convert x and y to fp16 - >>> @auto_fp16() - >>> def forward(self, x, y): - >>> pass - - >>> import torch.nn as nn - >>> class MyModule2(nn.Module): - >>> - >>> # convert pred to fp16 - >>> @auto_fp16(apply_to=('pred', )) - >>> def do_something(self, pred, others): - >>> pass - """ - - def auto_fp16_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # check if the module has set the attribute `fp16_enabled`, if not, - # just fallback to the original method. - if not isinstance(args[0], torch.nn.Module): - raise TypeError('@auto_fp16 can only be used to decorate the ' - 'method of nn.Module') - if not (hasattr(args[0], 'fp16_enabled') and args[0].fp16_enabled): - return old_func(*args, **kwargs) - - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get the argument names to be casted - args_to_cast = args_info.args if apply_to is None else apply_to - # convert the args that need to be processed - new_args = [] - # NOTE: default args are not taken into consideration - if args: - arg_names = args_info.args[:len(args)] - for i, arg_name in enumerate(arg_names): - if arg_name in args_to_cast: - new_args.append( - cast_tensor_type(args[i], torch.float, torch.half)) - else: - new_args.append(args[i]) - # convert the kwargs that need to be processed - new_kwargs = {} - if kwargs: - for arg_name, arg_value in kwargs.items(): - if arg_name in args_to_cast: - new_kwargs[arg_name] = cast_tensor_type( - arg_value, torch.float, torch.half) - else: - new_kwargs[arg_name] = arg_value - # apply converted arguments to the decorated method - if (TORCH_VERSION != 'parrots' and - digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - with autocast(enabled=True): - output = old_func(*new_args, **new_kwargs) - else: - output = old_func(*new_args, **new_kwargs) - # cast the results back to fp32 if necessary - if out_fp32: - output = cast_tensor_type(output, torch.half, torch.float) - return output - - return new_func - - return auto_fp16_wrapper - - -def force_fp32(apply_to=None, out_fp16=False): - """Decorator to convert input arguments to fp32 in force. - - This decorator is useful when you write custom modules and want to support - mixed precision training. If there are some inputs that must be processed - in fp32 mode, then this decorator can handle it. If inputs arguments are - fp16 tensors, they will be converted to fp32 automatically. Arguments other - than fp16 tensors are ignored. If you are using PyTorch >= 1.6, - torch.cuda.amp is used as the backend, otherwise, original mmcv - implementation will be adopted. - - Args: - apply_to (Iterable, optional): The argument names to be converted. - `None` indicates all arguments. - out_fp16 (bool): Whether to convert the output back to fp16. - - Example: - - >>> import torch.nn as nn - >>> class MyModule1(nn.Module): - >>> - >>> # Convert x and y to fp32 - >>> @force_fp32() - >>> def loss(self, x, y): - >>> pass - - >>> import torch.nn as nn - >>> class MyModule2(nn.Module): - >>> - >>> # convert pred to fp32 - >>> @force_fp32(apply_to=('pred', )) - >>> def post_process(self, pred, others): - >>> pass - """ - - def force_fp32_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # check if the module has set the attribute `fp16_enabled`, if not, - # just fallback to the original method. 
- if not isinstance(args[0], torch.nn.Module): - raise TypeError('@force_fp32 can only be used to decorate the ' - 'method of nn.Module') - if not (hasattr(args[0], 'fp16_enabled') and args[0].fp16_enabled): - return old_func(*args, **kwargs) - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get the argument names to be casted - args_to_cast = args_info.args if apply_to is None else apply_to - # convert the args that need to be processed - new_args = [] - if args: - arg_names = args_info.args[:len(args)] - for i, arg_name in enumerate(arg_names): - if arg_name in args_to_cast: - new_args.append( - cast_tensor_type(args[i], torch.half, torch.float)) - else: - new_args.append(args[i]) - # convert the kwargs that need to be processed - new_kwargs = dict() - if kwargs: - for arg_name, arg_value in kwargs.items(): - if arg_name in args_to_cast: - new_kwargs[arg_name] = cast_tensor_type( - arg_value, torch.half, torch.float) - else: - new_kwargs[arg_name] = arg_value - # apply converted arguments to the decorated method - if (TORCH_VERSION != 'parrots' and - digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - with autocast(enabled=False): - output = old_func(*new_args, **new_kwargs) - else: - output = old_func(*new_args, **new_kwargs) - # cast the results back to fp32 if necessary - if out_fp16: - output = cast_tensor_type(output, torch.float, torch.half) - return output - - return new_func - - return force_fp32_wrapper - - -def allreduce_grads(params, coalesce=True, bucket_size_mb=-1): - warnings.warning( - '"mmcv.runner.fp16_utils.allreduce_grads" is deprecated, and will be ' - 'removed in v2.8. Please switch to "mmcv.runner.allreduce_grads') - _allreduce_grads(params, coalesce=coalesce, bucket_size_mb=bucket_size_mb) - - -def wrap_fp16_model(model): - """Wrap the FP32 model to FP16. - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the - backend, otherwise, original mmcv implementation will be adopted. - - For PyTorch >= 1.6, this function will - 1. Set fp16 flag inside the model to True. - - Otherwise: - 1. Convert FP32 model to FP16. - 2. Remain some necessary layers to be FP32, e.g., normalization layers. - 3. Set `fp16_enabled` flag inside the model to True. - - Args: - model (nn.Module): Model in FP32. - """ - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.6.0')): - # convert model to fp16 - model.half() - # patch the normalization layers to make it work in fp32 mode - patch_norm_fp32(model) - # set `fp16_enabled` flag - for m in model.modules(): - if hasattr(m, 'fp16_enabled'): - m.fp16_enabled = True - - -def patch_norm_fp32(module): - """Recursively convert normalization layers from FP16 to FP32. - - Args: - module (nn.Module): The modules to be converted in FP16. - - Returns: - nn.Module: The converted module, the normalization layers have been - converted to FP32. - """ - if isinstance(module, (nn.modules.batchnorm._BatchNorm, nn.GroupNorm)): - module.float() - if isinstance(module, nn.GroupNorm) or torch.__version__ < '1.3': - module.forward = patch_forward_method(module.forward, torch.half, - torch.float) - for child in module.children(): - patch_norm_fp32(child) - return module - - -def patch_forward_method(func, src_type, dst_type, convert_output=True): - """Patch the forward method of a module. - - Args: - func (callable): The original forward method. - src_type (torch.dtype): Type of input arguments to be converted from. 
- dst_type (torch.dtype): Type of input arguments to be converted to. - convert_output (bool): Whether to convert the output back to src_type. - - Returns: - callable: The patched forward method. - """ - - def new_forward(*args, **kwargs): - output = func(*cast_tensor_type(args, src_type, dst_type), - **cast_tensor_type(kwargs, src_type, dst_type)) - if convert_output: - output = cast_tensor_type(output, dst_type, src_type) - return output - - return new_forward - - -class LossScaler: - """Class that manages loss scaling in mixed precision training which - supports both dynamic or static mode. - - The implementation refers to - https://github.com/NVIDIA/apex/blob/master/apex/fp16_utils/loss_scaler.py. - Indirectly, by supplying ``mode='dynamic'`` for dynamic loss scaling. - It's important to understand how :class:`LossScaler` operates. - Loss scaling is designed to combat the problem of underflowing - gradients encountered at long times when training fp16 networks. - Dynamic loss scaling begins by attempting a very high loss - scale. Ironically, this may result in OVERflowing gradients. - If overflowing gradients are encountered, :class:`FP16_Optimizer` then - skips the update step for this particular iteration/minibatch, - and :class:`LossScaler` adjusts the loss scale to a lower value. - If a certain number of iterations occur without overflowing gradients - detected,:class:`LossScaler` increases the loss scale once more. - In this way :class:`LossScaler` attempts to "ride the edge" of always - using the highest loss scale possible without incurring overflow. - - Args: - init_scale (float): Initial loss scale value, default: 2**32. - scale_factor (float): Factor used when adjusting the loss scale. - Default: 2. - mode (str): Loss scaling mode. 'dynamic' or 'static' - scale_window (int): Number of consecutive iterations without an - overflow to wait before increasing the loss scale. Default: 1000. 
- """ - - def __init__(self, - init_scale=2**32, - mode='dynamic', - scale_factor=2., - scale_window=1000): - self.cur_scale = init_scale - self.cur_iter = 0 - assert mode in ('dynamic', - 'static'), 'mode can only be dynamic or static' - self.mode = mode - self.last_overflow_iter = -1 - self.scale_factor = scale_factor - self.scale_window = scale_window - - def has_overflow(self, params): - """Check if params contain overflow.""" - if self.mode != 'dynamic': - return False - for p in params: - if p.grad is not None and LossScaler._has_inf_or_nan(p.grad.data): - return True - return False - - def _has_inf_or_nan(x): - """Check if params contain NaN.""" - try: - cpu_sum = float(x.float().sum()) - except RuntimeError as instance: - if 'value cannot be converted' not in instance.args[0]: - raise - return True - else: - if cpu_sum == float('inf') or cpu_sum == -float('inf') \ - or cpu_sum != cpu_sum: - return True - return False - - def update_scale(self, overflow): - """update the current loss scale value when overflow happens.""" - if self.mode != 'dynamic': - return - if overflow: - self.cur_scale = max(self.cur_scale / self.scale_factor, 1) - self.last_overflow_iter = self.cur_iter - else: - if (self.cur_iter - self.last_overflow_iter) % \ - self.scale_window == 0: - self.cur_scale *= self.scale_factor - self.cur_iter += 1 - - def state_dict(self): - """Returns the state of the scaler as a :class:`dict`.""" - return dict( - cur_scale=self.cur_scale, - cur_iter=self.cur_iter, - mode=self.mode, - last_overflow_iter=self.last_overflow_iter, - scale_factor=self.scale_factor, - scale_window=self.scale_window) - - def load_state_dict(self, state_dict): - """Loads the loss_scaler state dict. - - Args: - state_dict (dict): scaler state. - """ - self.cur_scale = state_dict['cur_scale'] - self.cur_iter = state_dict['cur_iter'] - self.mode = state_dict['mode'] - self.last_overflow_iter = state_dict['last_overflow_iter'] - self.scale_factor = state_dict['scale_factor'] - self.scale_window = state_dict['scale_window'] - - @property - def loss_scale(self): - return self.cur_scale diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/__init__.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/__init__.py deleted file mode 100755 index 915af28c..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .checkpoint import CheckpointHook -from .closure import ClosureHook -from .ema import EMAHook -from .evaluation import DistEvalHook, EvalHook -from .hook import HOOKS, Hook -from .iter_timer import IterTimerHook -from .logger import (DvcliveLoggerHook, LoggerHook, MlflowLoggerHook, - NeptuneLoggerHook, PaviLoggerHook, TensorboardLoggerHook, - TextLoggerHook, WandbLoggerHook) -from .lr_updater import LrUpdaterHook -from .memory import EmptyCacheHook -from .momentum_updater import MomentumUpdaterHook -from .optimizer import (Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, OptimizerHook) -from .profiler import ProfilerHook -from .sampler_seed import DistSamplerSeedHook -from .sync_buffer import SyncBuffersHook - -__all__ = [ - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'OptimizerHook', 'Fp16OptimizerHook', 'IterTimerHook', - 'DistSamplerSeedHook', 'EmptyCacheHook', 'LoggerHook', 'MlflowLoggerHook', - 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook', - 'NeptuneLoggerHook', 'WandbLoggerHook', 'DvcliveLoggerHook', - 'MomentumUpdaterHook', 'SyncBuffersHook', 'EMAHook', 'EvalHook', - 'DistEvalHook', 'ProfilerHook', 'GradientCumulativeOptimizerHook', - 'GradientCumulativeFp16OptimizerHook' -] diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/checkpoint.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/checkpoint.py deleted file mode 100755 index 7bb75f40..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/checkpoint.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings - -from mmcv.fileio import FileClient -from ..dist_utils import allreduce_params, master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class CheckpointHook(Hook): - """Save checkpoints periodically. - - Args: - interval (int): The saving period. If ``by_epoch=True``, interval - indicates epochs, otherwise it indicates iterations. - Default: -1, which means "never". - by_epoch (bool): Saving checkpoints by epoch or by iteration. - Default: True. - save_optimizer (bool): Whether to save optimizer state_dict in the - checkpoint. It is usually used for resuming experiments. - Default: True. - out_dir (str, optional): The root directory to save checkpoints. If not - specified, ``runner.work_dir`` will be used by default. If - specified, the ``out_dir`` will be the concatenation of ``out_dir`` - and the last level directory of ``runner.work_dir``. - `Changed in version 1.3.16.` - max_keep_ckpts (int, optional): The maximum checkpoints to keep. - In some cases we want only the latest few checkpoints and would - like to delete old ones to save the disk space. - Default: -1, which means unlimited. - save_last (bool, optional): Whether to force the last checkpoint to be - saved regardless of interval. Default: True. - sync_buffer (bool, optional): Whether to synchronize buffers in - different gpus. Default: False. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - - .. warning:: - Before v1.3.16, the ``out_dir`` argument indicates the path where the - checkpoint is stored. However, since v1.3.16, ``out_dir`` indicates the - root directory and the final path to save checkpoint is the - concatenation of ``out_dir`` and the last level directory of - ``runner.work_dir``. 
Suppose the value of ``out_dir`` is "/path/of/A" - and the value of ``runner.work_dir`` is "/path/of/B", then the final - path will be "/path/of/A/B". - """ - - def __init__(self, - interval=-1, - by_epoch=True, - save_optimizer=True, - out_dir=None, - max_keep_ckpts=-1, - save_last=True, - sync_buffer=False, - file_client_args=None, - **kwargs): - self.interval = interval - self.by_epoch = by_epoch - self.save_optimizer = save_optimizer - self.out_dir = out_dir - self.max_keep_ckpts = max_keep_ckpts - self.save_last = save_last - self.args = kwargs - self.sync_buffer = sync_buffer - self.file_client_args = file_client_args - - def before_run(self, runner): - if not self.out_dir: - self.out_dir = runner.work_dir - - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - - # if `self.out_dir` is not equal to `runner.work_dir`, it means that - # `self.out_dir` is set so the final `self.out_dir` is the - # concatenation of `self.out_dir` and the last level directory of - # `runner.work_dir` - if self.out_dir != runner.work_dir: - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - - runner.logger.info((f'Checkpoints will be saved to {self.out_dir} by ' - f'{self.file_client.name}.')) - - # disable the create_symlink option because some file backends do not - # allow to create a symlink - if 'create_symlink' in self.args: - if self.args[ - 'create_symlink'] and not self.file_client.allow_symlink: - self.args['create_symlink'] = False - warnings.warn( - ('create_symlink is set as True by the user but is changed' - 'to be False because creating symbolic link is not ' - f'allowed in {self.file_client.name}')) - else: - self.args['create_symlink'] = self.file_client.allow_symlink - - def after_train_epoch(self, runner): - if not self.by_epoch: - return - - # save checkpoint for following cases: - # 1. every ``self.interval`` epochs - # 2. 
reach the last epoch of training - if self.every_n_epochs( - runner, self.interval) or (self.save_last - and self.is_last_epoch(runner)): - runner.logger.info( - f'Saving checkpoint at {runner.epoch + 1} epochs') - if self.sync_buffer: - allreduce_params(runner.model.buffers()) - self._save_checkpoint(runner) - - @master_only - def _save_checkpoint(self, runner): - """Save the current checkpoint and delete unwanted checkpoint.""" - runner.save_checkpoint( - self.out_dir, save_optimizer=self.save_optimizer, **self.args) - if runner.meta is not None: - if self.by_epoch: - cur_ckpt_filename = self.args.get( - 'filename_tmpl', 'epoch_{}.pth').format(runner.epoch + 1) - else: - cur_ckpt_filename = self.args.get( - 'filename_tmpl', 'iter_{}.pth').format(runner.iter + 1) - runner.meta.setdefault('hook_msgs', dict()) - runner.meta['hook_msgs']['last_ckpt'] = self.file_client.join_path( - self.out_dir, cur_ckpt_filename) - # remove other checkpoints - if self.max_keep_ckpts > 0: - if self.by_epoch: - name = 'epoch_{}.pth' - current_ckpt = runner.epoch + 1 - else: - name = 'iter_{}.pth' - current_ckpt = runner.iter + 1 - redundant_ckpts = range( - current_ckpt - self.max_keep_ckpts * self.interval, 0, - -self.interval) - filename_tmpl = self.args.get('filename_tmpl', name) - for _step in redundant_ckpts: - ckpt_path = self.file_client.join_path( - self.out_dir, filename_tmpl.format(_step)) - if self.file_client.isfile(ckpt_path): - self.file_client.remove(ckpt_path) - else: - break - - def after_train_iter(self, runner): - if self.by_epoch: - return - - # save checkpoint for following cases: - # 1. every ``self.interval`` iterations - # 2. reach the last iteration of training - if self.every_n_iters( - runner, self.interval) or (self.save_last - and self.is_last_iter(runner)): - runner.logger.info( - f'Saving checkpoint at {runner.iter + 1} iterations') - if self.sync_buffer: - allreduce_params(runner.model.buffers()) - self._save_checkpoint(runner) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/closure.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/closure.py deleted file mode 100755 index b955f81f..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/closure.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class ClosureHook(Hook): - - def __init__(self, fn_name, fn): - assert hasattr(self, fn_name) - assert callable(fn) - setattr(self, fn_name, fn) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/ema.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/ema.py deleted file mode 100755 index 15c7e680..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/ema.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...parallel import is_module_wrapper -from ..hooks.hook import HOOKS, Hook - - -@HOOKS.register_module() -class EMAHook(Hook): - r"""Exponential Moving Average Hook. - - Use Exponential Moving Average on all parameters of model in training - process. All parameters have a ema backup, which update by the formula - as below. EMAHook takes priority over EvalHook and CheckpointSaverHook. - - .. math:: - - \text{Xema\_{t+1}} = (1 - \text{momentum}) \times - \text{Xema\_{t}} + \text{momentum} \times X_t - - Args: - momentum (float): The momentum used for updating ema parameter. - Defaults to 0.0002. - interval (int): Update ema parameter every interval iteration. - Defaults to 1. 
- warm_up (int): During first warm_up steps, we may use smaller momentum - to update ema parameters more slowly. Defaults to 100. - resume_from (str): The checkpoint path. Defaults to None. - """ - - def __init__(self, - momentum=0.0002, - interval=1, - warm_up=100, - resume_from=None): - assert isinstance(interval, int) and interval > 0 - self.warm_up = warm_up - self.interval = interval - assert momentum > 0 and momentum < 1 - self.momentum = momentum**interval - self.checkpoint = resume_from - - def before_run(self, runner): - """To resume model with it's ema parameters more friendly. - - Register ema parameter as ``named_buffer`` to model - """ - model = runner.model - if is_module_wrapper(model): - model = model.module - self.param_ema_buffer = {} - self.model_parameters = dict(model.named_parameters(recurse=True)) - for name, value in self.model_parameters.items(): - # "." is not allowed in module's buffer name - buffer_name = f"ema_{name.replace('.', '_')}" - self.param_ema_buffer[name] = buffer_name - model.register_buffer(buffer_name, value.data.clone()) - self.model_buffers = dict(model.named_buffers(recurse=True)) - if self.checkpoint is not None: - runner.resume(self.checkpoint) - - def after_train_iter(self, runner): - """Update ema parameter every self.interval iterations.""" - curr_step = runner.iter - # We warm up the momentum considering the instability at beginning - momentum = min(self.momentum, - (1 + curr_step) / (self.warm_up + curr_step)) - if curr_step % self.interval != 0: - return - for name, parameter in self.model_parameters.items(): - buffer_name = self.param_ema_buffer[name] - buffer_parameter = self.model_buffers[buffer_name] - buffer_parameter.mul_(1 - momentum).add_(momentum, parameter.data) - - def after_train_epoch(self, runner): - """We load parameter values from ema backup to model before the - EvalHook.""" - self._swap_ema_parameters() - - def before_train_epoch(self, runner): - """We recover model's parameter from ema backup after last epoch's - EvalHook.""" - self._swap_ema_parameters() - - def _swap_ema_parameters(self): - """Swap the parameter of model with parameter in ema_buffer.""" - for name, value in self.model_parameters.items(): - temp = value.data.clone() - ema_buffer = self.model_buffers[self.param_ema_buffer[name]] - value.data.copy_(ema_buffer.data) - ema_buffer.data.copy_(temp) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/evaluation.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/evaluation.py deleted file mode 100755 index 1eeb4465..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/evaluation.py +++ /dev/null @@ -1,509 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings -from math import inf - -import torch.distributed as dist -from torch.nn.modules.batchnorm import _BatchNorm -from torch.utils.data import DataLoader - -from mmcv.fileio import FileClient -from mmcv.utils import is_seq_of -from .hook import Hook -from .logger import LoggerHook - - -class EvalHook(Hook): - """Non-Distributed evaluation hook. - - This hook will regularly perform evaluation in a given interval when - performing in non-distributed environment. - - Args: - dataloader (DataLoader): A PyTorch dataloader, whose dataset has - implemented ``evaluate`` function. - start (int | None, optional): Evaluation starting epoch. It enables - evaluation before the training starts if ``start`` <= the resuming - epoch. If None, whether to evaluate is merely decided by - ``interval``. Default: None. 
- interval (int): Evaluation interval. Default: 1. - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: True. - save_best (str, optional): If a metric is specified, it would measure - the best checkpoint during evaluation. The information about best - checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep - best score value and best checkpoint path, which will be also - loaded when resume checkpoint. Options are the evaluation metrics - on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox - detection and instance segmentation. ``AR@100`` for proposal - recall. If ``save_best`` is ``auto``, the first key of the returned - ``OrderedDict`` result will be used. Default: None. - rule (str | None, optional): Comparison rule for best score. If set to - None, it will infer a reasonable rule. Keys such as 'acc', 'top' - .etc will be inferred by 'greater' rule. Keys contain 'loss' will - be inferred by 'less' rule. Options are 'greater', 'less', None. - Default: None. - test_fn (callable, optional): test a model with samples from a - dataloader, and return the test results. If ``None``, the default - test function ``mmcv.engine.single_gpu_test`` will be used. - (default: ``None``) - greater_keys (List[str] | None, optional): Metric keys that will be - inferred by 'greater' comparison rule. If ``None``, - _default_greater_keys will be used. (default: ``None``) - less_keys (List[str] | None, optional): Metric keys that will be - inferred by 'less' comparison rule. If ``None``, _default_less_keys - will be used. (default: ``None``) - out_dir (str, optional): The root directory to save checkpoints. If not - specified, `runner.work_dir` will be used by default. If specified, - the `out_dir` will be the concatenation of `out_dir` and the last - level directory of `runner.work_dir`. - `New in version 1.3.16.` - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. Default: None. - `New in version 1.3.16.` - **eval_kwargs: Evaluation arguments fed into the evaluate function of - the dataset. - - Notes: - If new arguments are added for EvalHook, tools/test.py, - tools/eval_metric.py may be affected. - """ - - # Since the key for determine greater or less is related to the downstream - # tasks, downstream repos may need to overwrite the following inner - # variable accordingly. 
- - rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y} - init_value_map = {'greater': -inf, 'less': inf} - _default_greater_keys = [ - 'acc', 'top', 'AR@', 'auc', 'precision', 'mAP', 'mDice', 'mIoU', - 'mAcc', 'aAcc' - ] - _default_less_keys = ['loss'] - - def __init__(self, - dataloader, - start=None, - interval=1, - by_epoch=True, - save_best=None, - rule=None, - test_fn=None, - greater_keys=None, - less_keys=None, - out_dir=None, - file_client_args=None, - **eval_kwargs): - if not isinstance(dataloader, DataLoader): - raise TypeError(f'dataloader must be a pytorch DataLoader, ' - f'but got {type(dataloader)}') - - if interval <= 0: - raise ValueError(f'interval must be a positive number, ' - f'but got {interval}') - - assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean' - - if start is not None and start < 0: - raise ValueError(f'The evaluation start epoch {start} is smaller ' - f'than 0') - - self.dataloader = dataloader - self.interval = interval - self.start = start - self.by_epoch = by_epoch - - assert isinstance(save_best, str) or save_best is None, \ - '""save_best"" should be a str or None ' \ - f'rather than {type(save_best)}' - self.save_best = save_best - self.eval_kwargs = eval_kwargs - self.initial_flag = True - - if test_fn is None: - from mmcv.engine import single_gpu_test - self.test_fn = single_gpu_test - else: - self.test_fn = test_fn - - if greater_keys is None: - self.greater_keys = self._default_greater_keys - else: - if not isinstance(greater_keys, (list, tuple)): - greater_keys = (greater_keys, ) - assert is_seq_of(greater_keys, str) - self.greater_keys = greater_keys - - if less_keys is None: - self.less_keys = self._default_less_keys - else: - if not isinstance(less_keys, (list, tuple)): - less_keys = (less_keys, ) - assert is_seq_of(less_keys, str) - self.less_keys = less_keys - - if self.save_best is not None: - self.best_ckpt_path = None - self._init_rule(rule, self.save_best) - - self.out_dir = out_dir - self.file_client_args = file_client_args - - def _init_rule(self, rule, key_indicator): - """Initialize rule, key_indicator, comparison_func, and best score. - - Here is the rule to determine which rule is used for key indicator - when the rule is not specific (note that the key indicator matching - is case-insensitive): - 1. If the key indicator is in ``self.greater_keys``, the rule will be - specified as 'greater'. - 2. Or if the key indicator is in ``self.less_keys``, the rule will be - specified as 'less'. - 3. Or if the key indicator is equal to the substring in any one item - in ``self.greater_keys``, the rule will be specified as 'greater'. - 4. Or if the key indicator is equal to the substring in any one item - in ``self.less_keys``, the rule will be specified as 'less'. - - Args: - rule (str | None): Comparison rule for best score. - key_indicator (str | None): Key indicator to determine the - comparison rule. 
- """ - if rule not in self.rule_map and rule is not None: - raise KeyError(f'rule must be greater, less or None, ' - f'but got {rule}.') - - if rule is None: - if key_indicator != 'auto': - # `_lc` here means we use the lower case of keys for - # case-insensitive matching - key_indicator_lc = key_indicator.lower() - greater_keys = [key.lower() for key in self.greater_keys] - less_keys = [key.lower() for key in self.less_keys] - - if key_indicator_lc in greater_keys: - rule = 'greater' - elif key_indicator_lc in less_keys: - rule = 'less' - elif any(key in key_indicator_lc for key in greater_keys): - rule = 'greater' - elif any(key in key_indicator_lc for key in less_keys): - rule = 'less' - else: - raise ValueError(f'Cannot infer the rule for key ' - f'{key_indicator}, thus a specific rule ' - f'must be specified.') - self.rule = rule - self.key_indicator = key_indicator - if self.rule is not None: - self.compare_func = self.rule_map[self.rule] - - def before_run(self, runner): - if not self.out_dir: - self.out_dir = runner.work_dir - - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - - # if `self.out_dir` is not equal to `runner.work_dir`, it means that - # `self.out_dir` is set so the final `self.out_dir` is the - # concatenation of `self.out_dir` and the last level directory of - # `runner.work_dir` - if self.out_dir != runner.work_dir: - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - runner.logger.info( - (f'The best checkpoint will be saved to {self.out_dir} by ' - f'{self.file_client.name}')) - - if self.save_best is not None: - if runner.meta is None: - warnings.warn('runner.meta is None. Creating an empty one.') - runner.meta = dict() - runner.meta.setdefault('hook_msgs', dict()) - self.best_ckpt_path = runner.meta['hook_msgs'].get( - 'best_ckpt', None) - - def before_train_iter(self, runner): - """Evaluate the model only at the start of training by iteration.""" - if self.by_epoch or not self.initial_flag: - return - if self.start is not None and runner.iter >= self.start: - self.after_train_iter(runner) - self.initial_flag = False - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training by epoch.""" - if not (self.by_epoch and self.initial_flag): - return - if self.start is not None and runner.epoch >= self.start: - self.after_train_epoch(runner) - self.initial_flag = False - - def after_train_iter(self, runner): - """Called after every training iter to evaluate the results.""" - if not self.by_epoch and self._should_evaluate(runner): - # Because the priority of EvalHook is higher than LoggerHook, the - # training log and the evaluating log are mixed. Therefore, - # we need to dump the training log and clear it before evaluating - # log is generated. In addition, this problem will only appear in - # `IterBasedRunner` whose `self.by_epoch` is False, because - # `EpochBasedRunner` whose `self.by_epoch` is True calls - # `_do_evaluate` in `after_train_epoch` stage, and at this stage - # the training log has been printed, so it will not cause any - # problem. 
more details at - # https://github.com/open-mmlab/mmsegmentation/issues/694 - for hook in runner._hooks: - if isinstance(hook, LoggerHook): - hook.after_train_iter(runner) - runner.log_buffer.clear() - - self._do_evaluate(runner) - - def after_train_epoch(self, runner): - """Called after every training epoch to evaluate the results.""" - if self.by_epoch and self._should_evaluate(runner): - self._do_evaluate(runner) - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - results = self.test_fn(runner.model, self.dataloader) - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - # the key_score may be `None` so it needs to skip the action to save - # the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) - - def _should_evaluate(self, runner): - """Judge whether to perform evaluation. - - Here is the rule to judge whether to perform evaluation: - 1. It will not perform evaluation during the epoch/iteration interval, - which is determined by ``self.interval``. - 2. It will not perform evaluation if the start time is larger than - current time. - 3. It will not perform evaluation when current time is larger than - the start time but during epoch/iteration interval. - - Returns: - bool: The flag indicating whether to perform evaluation. - """ - if self.by_epoch: - current = runner.epoch - check_time = self.every_n_epochs - else: - current = runner.iter - check_time = self.every_n_iters - - if self.start is None: - if not check_time(runner, self.interval): - # No evaluation during the interval. - return False - elif (current + 1) < self.start: - # No evaluation if start is larger than the current time. - return False - else: - # Evaluation only at epochs/iters 3, 5, 7... - # if start==3 and interval==2 - if (current + 1 - self.start) % self.interval: - return False - return True - - def _save_ckpt(self, runner, key_score): - """Save the best checkpoint. - - It will compare the score according to the compare function, write - related information (best score, best checkpoint path) and save the - best checkpoint into ``work_dir``. - """ - if self.by_epoch: - current = f'epoch_{runner.epoch + 1}' - cur_type, cur_time = 'epoch', runner.epoch + 1 - else: - current = f'iter_{runner.iter + 1}' - cur_type, cur_time = 'iter', runner.iter + 1 - - best_score = runner.meta['hook_msgs'].get( - 'best_score', self.init_value_map[self.rule]) - if self.compare_func(key_score, best_score): - best_score = key_score - runner.meta['hook_msgs']['best_score'] = best_score - - if self.best_ckpt_path and self.file_client.isfile( - self.best_ckpt_path): - self.file_client.remove(self.best_ckpt_path) - runner.logger.info( - (f'The previous best checkpoint {self.best_ckpt_path} was ' - 'removed')) - - best_ckpt_name = f'best_{self.key_indicator}_{current}.pth' - self.best_ckpt_path = self.file_client.join_path( - self.out_dir, best_ckpt_name) - runner.meta['hook_msgs']['best_ckpt'] = self.best_ckpt_path - - runner.save_checkpoint( - self.out_dir, best_ckpt_name, create_symlink=False) - runner.logger.info( - f'Now best checkpoint is saved as {best_ckpt_name}.') - runner.logger.info( - f'Best {self.key_indicator} is {best_score:0.4f} ' - f'at {cur_time} {cur_type}.') - - def evaluate(self, runner, results): - """Evaluate the results. - - Args: - runner (:obj:`mmcv.Runner`): The underlined training runner. - results (list): Output results. 
- """ - eval_res = self.dataloader.dataset.evaluate( - results, logger=runner.logger, **self.eval_kwargs) - - for name, val in eval_res.items(): - runner.log_buffer.output[name] = val - runner.log_buffer.ready = True - - if self.save_best is not None: - # If the performance of model is pool, the `eval_res` may be an - # empty dict and it will raise exception when `self.save_best` is - # not None. More details at - # https://github.com/open-mmlab/mmdetection/issues/6265. - if not eval_res: - warnings.warn( - 'Since `eval_res` is an empty dict, the behavior to save ' - 'the best checkpoint will be skipped in this evaluation.') - return None - - if self.key_indicator == 'auto': - # infer from eval_results - self._init_rule(self.rule, list(eval_res.keys())[0]) - return eval_res[self.key_indicator] - - return None - - -class DistEvalHook(EvalHook): - """Distributed evaluation hook. - - This hook will regularly perform evaluation in a given interval when - performing in distributed environment. - - Args: - dataloader (DataLoader): A PyTorch dataloader, whose dataset has - implemented ``evaluate`` function. - start (int | None, optional): Evaluation starting epoch. It enables - evaluation before the training starts if ``start`` <= the resuming - epoch. If None, whether to evaluate is merely decided by - ``interval``. Default: None. - interval (int): Evaluation interval. Default: 1. - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - default: True. - save_best (str, optional): If a metric is specified, it would measure - the best checkpoint during evaluation. The information about best - checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep - best score value and best checkpoint path, which will be also - loaded when resume checkpoint. Options are the evaluation metrics - on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox - detection and instance segmentation. ``AR@100`` for proposal - recall. If ``save_best`` is ``auto``, the first key of the returned - ``OrderedDict`` result will be used. Default: None. - rule (str | None, optional): Comparison rule for best score. If set to - None, it will infer a reasonable rule. Keys such as 'acc', 'top' - .etc will be inferred by 'greater' rule. Keys contain 'loss' will - be inferred by 'less' rule. Options are 'greater', 'less', None. - Default: None. - test_fn (callable, optional): test a model with samples from a - dataloader in a multi-gpu manner, and return the test results. If - ``None``, the default test function ``mmcv.engine.multi_gpu_test`` - will be used. (default: ``None``) - tmpdir (str | None): Temporary directory to save the results of all - processes. Default: None. - gpu_collect (bool): Whether to use gpu or cpu to collect results. - Default: False. - broadcast_bn_buffer (bool): Whether to broadcast the - buffer(running_mean and running_var) of rank 0 to other rank - before evaluation. Default: True. - out_dir (str, optional): The root directory to save checkpoints. If not - specified, `runner.work_dir` will be used by default. If specified, - the `out_dir` will be the concatenation of `out_dir` and the last - level directory of `runner.work_dir`. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. Default: None. - **eval_kwargs: Evaluation arguments fed into the evaluate function of - the dataset. 
- """ - - def __init__(self, - dataloader, - start=None, - interval=1, - by_epoch=True, - save_best=None, - rule=None, - test_fn=None, - greater_keys=None, - less_keys=None, - broadcast_bn_buffer=True, - tmpdir=None, - gpu_collect=False, - out_dir=None, - file_client_args=None, - **eval_kwargs): - - if test_fn is None: - from mmcv.engine import multi_gpu_test - test_fn = multi_gpu_test - - super().__init__( - dataloader, - start=start, - interval=interval, - by_epoch=by_epoch, - save_best=save_best, - rule=rule, - test_fn=test_fn, - greater_keys=greater_keys, - less_keys=less_keys, - out_dir=out_dir, - file_client_args=file_client_args, - **eval_kwargs) - - self.broadcast_bn_buffer = broadcast_bn_buffer - self.tmpdir = tmpdir - self.gpu_collect = gpu_collect - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - # Synchronization of BatchNorm's buffer (running_mean - # and running_var) is not supported in the DDP of pytorch, - # which may cause the inconsistent performance of models in - # different ranks, so we broadcast BatchNorm's buffers - # of rank 0 to other ranks to avoid this. - if self.broadcast_bn_buffer: - model = runner.model - for name, module in model.named_modules(): - if isinstance(module, - _BatchNorm) and module.track_running_stats: - dist.broadcast(module.running_var, 0) - dist.broadcast(module.running_mean, 0) - - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - - results = self.test_fn( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect) - if runner.rank == 0: - print('\n') - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - # the key_score may be `None` so it needs to skip the action to - # save the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/hook.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/hook.py deleted file mode 100755 index f2d1c986..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/hook.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmcv.utils import Registry, is_method_overridden
-
-HOOKS = Registry('hook')
-
-
-class Hook:
-    stages = ('before_run', 'before_train_epoch', 'before_train_iter',
-              'after_train_iter', 'after_train_epoch', 'before_val_epoch',
-              'before_val_iter', 'after_val_iter', 'after_val_epoch',
-              'after_run')
-
-    def before_run(self, runner):
-        pass
-
-    def after_run(self, runner):
-        pass
-
-    def before_epoch(self, runner):
-        pass
-
-    def after_epoch(self, runner):
-        pass
-
-    def before_iter(self, runner):
-        pass
-
-    def after_iter(self, runner):
-        pass
-
-    def before_train_epoch(self, runner):
-        self.before_epoch(runner)
-
-    def before_val_epoch(self, runner):
-        self.before_epoch(runner)
-
-    def after_train_epoch(self, runner):
-        self.after_epoch(runner)
-
-    def after_val_epoch(self, runner):
-        self.after_epoch(runner)
-
-    def before_train_iter(self, runner):
-        self.before_iter(runner)
-
-    def before_val_iter(self, runner):
-        self.before_iter(runner)
-
-    def after_train_iter(self, runner):
-        self.after_iter(runner)
-
-    def after_val_iter(self, runner):
-        self.after_iter(runner)
-
-    def every_n_epochs(self, runner, n):
-        return (runner.epoch + 1) % n == 0 if n > 0 else False
-
-    def every_n_inner_iters(self, runner, n):
-        return (runner.inner_iter + 1) % n == 0 if n > 0 else False
-
-    def every_n_iters(self, runner, n):
-        return (runner.iter + 1) % n == 0 if n > 0 else False
-
-    def end_of_epoch(self, runner):
-        return runner.inner_iter + 1 == len(runner.data_loader)
-
-    def is_last_epoch(self, runner):
-        return runner.epoch + 1 == runner._max_epochs
-
-    def is_last_iter(self, runner):
-        return runner.iter + 1 == runner._max_iters
-
-    def get_triggered_stages(self):
-        trigger_stages = set()
-        for stage in Hook.stages:
-            if is_method_overridden(stage, Hook, self):
-                trigger_stages.add(stage)
-
-        # some methods will be triggered in multi stages
-        # use this dict to map method to stages.
-        method_stages_map = {
-            'before_epoch': ['before_train_epoch', 'before_val_epoch'],
-            'after_epoch': ['after_train_epoch', 'after_val_epoch'],
-            'before_iter': ['before_train_iter', 'before_val_iter'],
-            'after_iter': ['after_train_iter', 'after_val_iter'],
-        }
-
-        for method, map_stages in method_stages_map.items():
-            if is_method_overridden(method, Hook, self):
-                trigger_stages.update(map_stages)
-
-        return [stage for stage in Hook.stages if stage in trigger_stages]
diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/iter_timer.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/iter_timer.py
deleted file mode 100755
index cfd5002f..00000000
--- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/iter_timer.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import time
-
-from .hook import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class IterTimerHook(Hook):
-
-    def before_epoch(self, runner):
-        self.t = time.time()
-
-    def before_iter(self, runner):
-        runner.log_buffer.update({'data_time': time.time() - self.t})
-
-    def after_iter(self, runner):
-        runner.log_buffer.update({'time': time.time() - self.t})
-        self.t = time.time()
diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/__init__.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/__init__.py
deleted file mode 100755
index a0b6b345..00000000
--- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .base import LoggerHook
-from .dvclive import DvcliveLoggerHook
-from .mlflow import MlflowLoggerHook
-from .neptune import NeptuneLoggerHook
-from .pavi import PaviLoggerHook
-from .tensorboard import TensorboardLoggerHook
-from .text import TextLoggerHook
-from .wandb import WandbLoggerHook
-
-__all__ = [
-    'LoggerHook', 'MlflowLoggerHook', 'PaviLoggerHook',
-    'TensorboardLoggerHook', 'TextLoggerHook', 'WandbLoggerHook',
-    'NeptuneLoggerHook', 'DvcliveLoggerHook'
-]
diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/base.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/base.py
deleted file mode 100755
index f8452567..00000000
--- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/base.py
+++ /dev/null
@@ -1,166 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numbers
-from abc import ABCMeta, abstractmethod
-
-import numpy as np
-import torch
-
-from ..hook import Hook
-
-
-class LoggerHook(Hook):
-    """Base class for logger hooks.
-
-    Args:
-        interval (int): Logging interval (every k iterations).
-        ignore_last (bool): Ignore the log of last iterations in each epoch
-            if less than `interval`.
-        reset_flag (bool): Whether to clear the output buffer after logging.
-        by_epoch (bool): Whether EpochBasedRunner is used.
-    """
-
-    __metaclass__ = ABCMeta
-
-    def __init__(self,
-                 interval=10,
-                 ignore_last=True,
-                 reset_flag=False,
-                 by_epoch=True):
-        self.interval = interval
-        self.ignore_last = ignore_last
-        self.reset_flag = reset_flag
-        self.by_epoch = by_epoch
-
-    @abstractmethod
-    def log(self, runner):
-        pass
-
-    @staticmethod
-    def is_scalar(val, include_np=True, include_torch=True):
-        """Tell the input variable is a scalar or not.
-
-        Args:
-            val: Input variable.
-            include_np (bool): Whether include 0-d np.ndarray as a scalar.
-            include_torch (bool): Whether include 0-d torch.Tensor as a scalar.
-
-        Returns:
-            bool: True or False.
-        """
-        if isinstance(val, numbers.Number):
-            return True
-        elif include_np and isinstance(val, np.ndarray) and val.ndim == 0:
-            return True
-        elif include_torch and isinstance(val, torch.Tensor) and len(val) == 1:
-            return True
-        else:
-            return False
-
-    def get_mode(self, runner):
-        if runner.mode == 'train':
-            if 'time' in runner.log_buffer.output:
-                mode = 'train'
-            else:
-                mode = 'val'
-        elif runner.mode == 'val':
-            mode = 'val'
-        else:
-            raise ValueError(f"runner mode should be 'train' or 'val', "
-                             f'but got {runner.mode}')
-        return mode
-
-    def get_epoch(self, runner):
-        if runner.mode == 'train':
-            epoch = runner.epoch + 1
-        elif runner.mode == 'val':
-            # normal val mode
-            # runner.epoch += 1 has been done before val workflow
-            epoch = runner.epoch
-        else:
-            raise ValueError(f"runner mode should be 'train' or 'val', "
-                             f'but got {runner.mode}')
-        return epoch
-
-    def get_iter(self, runner, inner_iter=False):
-        """Get the current training iteration step."""
-        if self.by_epoch and inner_iter:
-            current_iter = runner.inner_iter + 1
-        else:
-            current_iter = runner.iter + 1
-        return current_iter
-
-    def get_lr_tags(self, runner):
-        tags = {}
-        lrs = runner.current_lr()
-        if isinstance(lrs, dict):
-            for name, value in lrs.items():
-                tags[f'learning_rate/{name}'] = value[0]
-        else:
-            tags['learning_rate'] = lrs[0]
-        return tags
-
-    def get_momentum_tags(self, runner):
-        tags = {}
-        momentums = runner.current_momentum()
-        if isinstance(momentums, dict):
-            for name, value in momentums.items():
-                tags[f'momentum/{name}'] = value[0]
-        else:
-            tags['momentum'] = momentums[0]
-        return tags
-
-    def get_loggable_tags(self,
-                          runner,
-                          allow_scalar=True,
-                          allow_text=False,
-                          add_mode=True,
-                          tags_to_skip=('time', 'data_time')):
-        tags = {}
-        for var, val in runner.log_buffer.output.items():
-            if var in tags_to_skip:
-                continue
-            if self.is_scalar(val) and not allow_scalar:
-                continue
-            if isinstance(val, str) and not allow_text:
-                continue
-            if add_mode:
-                var = f'{self.get_mode(runner)}/{var}'
-            tags[var] = val
-        tags.update(self.get_lr_tags(runner))
-        tags.update(self.get_momentum_tags(runner))
-        return tags
-
-    def before_run(self, runner):
-        for hook in runner.hooks[::-1]:
-            if isinstance(hook, LoggerHook):
-                hook.reset_flag = True
-                break
-
-    def before_epoch(self, runner):
-        runner.log_buffer.clear()  # clear logs of last epoch
-
-    def after_train_iter(self, runner):
-        if self.by_epoch and self.every_n_inner_iters(runner, self.interval):
-            runner.log_buffer.average(self.interval)
-        elif not self.by_epoch and self.every_n_iters(runner, self.interval):
-            runner.log_buffer.average(self.interval)
-        elif self.end_of_epoch(runner) and not self.ignore_last:
-            # not precise but more stable
-            runner.log_buffer.average(self.interval)
-
-        if runner.log_buffer.ready:
-            self.log(runner)
-            if self.reset_flag:
-                runner.log_buffer.clear_output()
-
-    def after_train_epoch(self, runner):
-        if runner.log_buffer.ready:
-            self.log(runner)
-            if self.reset_flag:
-                runner.log_buffer.clear_output()
-
-    def after_val_epoch(self, runner):
-        runner.log_buffer.average()
-        self.log(runner)
-        if self.reset_flag:
-            runner.log_buffer.clear_output()
diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/dvclive.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/dvclive.py
deleted file mode 100755
index 687cdc58..00000000
--- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/dvclive.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class DvcliveLoggerHook(LoggerHook): - """Class to log metrics with dvclive. - - It requires `dvclive`_ to be installed. - - Args: - path (str): Directory where dvclive will write TSV log files. - interval (int): Logging interval (every k iterations). - Default 10. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - Default: True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default: True. - by_epoch (bool): Whether EpochBasedRunner is used. - Default: True. - - .. _dvclive: - https://dvc.org/doc/dvclive - """ - - def __init__(self, - path, - interval=10, - ignore_last=True, - reset_flag=True, - by_epoch=True): - - super(DvcliveLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.path = path - self.import_dvclive() - - def import_dvclive(self): - try: - import dvclive - except ImportError: - raise ImportError( - 'Please run "pip install dvclive" to install dvclive') - self.dvclive = dvclive - - @master_only - def before_run(self, runner): - self.dvclive.init(self.path) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - for k, v in tags.items(): - self.dvclive.log(k, v, step=self.get_iter(runner)) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/mlflow.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/mlflow.py deleted file mode 100755 index f9a72592..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/mlflow.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class MlflowLoggerHook(LoggerHook): - - def __init__(self, - exp_name=None, - tags=None, - log_model=True, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True): - """Class to log metrics and (optionally) a trained model to MLflow. - - It requires `MLflow`_ to be installed. - - Args: - exp_name (str, optional): Name of the experiment to be used. - Default None. - If not None, set the active experiment. - If experiment does not exist, an experiment with provided name - will be created. - tags (dict of str: str, optional): Tags for the current run. - Default None. - If not None, set tags for the current run. - log_model (bool, optional): Whether to log an MLflow artifact. - Default True. - If True, log runner.model as an MLflow artifact - for the current run. - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging - by_epoch (bool): Whether EpochBasedRunner is used. - - .. 
_MLflow: - https://www.mlflow.org/docs/latest/index.html - """ - super(MlflowLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.import_mlflow() - self.exp_name = exp_name - self.tags = tags - self.log_model = log_model - - def import_mlflow(self): - try: - import mlflow - import mlflow.pytorch as mlflow_pytorch - except ImportError: - raise ImportError( - 'Please run "pip install mlflow" to install mlflow') - self.mlflow = mlflow - self.mlflow_pytorch = mlflow_pytorch - - @master_only - def before_run(self, runner): - super(MlflowLoggerHook, self).before_run(runner) - if self.exp_name is not None: - self.mlflow.set_experiment(self.exp_name) - if self.tags is not None: - self.mlflow.set_tags(self.tags) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - self.mlflow.log_metrics(tags, step=self.get_iter(runner)) - - @master_only - def after_run(self, runner): - if self.log_model: - self.mlflow_pytorch.log_model(runner.model, 'models') diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/neptune.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/neptune.py deleted file mode 100755 index 7a38772b..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/neptune.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class NeptuneLoggerHook(LoggerHook): - """Class to log metrics to NeptuneAI. - - It requires `neptune-client` to be installed. - - Args: - init_kwargs (dict): a dict contains the initialization keys as below: - - project (str): Name of a project in a form of - namespace/project_name. If None, the value of - NEPTUNE_PROJECT environment variable will be taken. - - api_token (str): User’s API token. - If None, the value of NEPTUNE_API_TOKEN environment - variable will be taken. Note: It is strongly recommended - to use NEPTUNE_API_TOKEN environment variable rather than - placing your API token in plain text in your source code. - - name (str, optional, default is 'Untitled'): Editable name of - the run. Name is displayed in the run's Details and in - Runs table as a column. - Check https://docs.neptune.ai/api-reference/neptune#init for - more init arguments. - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging - by_epoch (bool): Whether EpochBasedRunner is used. - - .. 
_NeptuneAI: - https://docs.neptune.ai/you-should-know/logging-metadata - """ - - def __init__(self, - init_kwargs=None, - interval=10, - ignore_last=True, - reset_flag=True, - with_step=True, - by_epoch=True): - - super(NeptuneLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.import_neptune() - self.init_kwargs = init_kwargs - self.with_step = with_step - - def import_neptune(self): - try: - import neptune.new as neptune - except ImportError: - raise ImportError( - 'Please run "pip install neptune-client" to install neptune') - self.neptune = neptune - self.run = None - - @master_only - def before_run(self, runner): - if self.init_kwargs: - self.run = self.neptune.init(**self.init_kwargs) - else: - self.run = self.neptune.init() - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - for tag_name, tag_value in tags.items(): - if self.with_step: - self.run[tag_name].log( - tag_value, step=self.get_iter(runner)) - else: - tags['global_step'] = self.get_iter(runner) - self.run[tag_name].log(tags) - - @master_only - def after_run(self, runner): - self.run.stop() diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/pavi.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/pavi.py deleted file mode 100755 index ba2f6e8d..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/pavi.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json -import os -import os.path as osp - -import torch -import yaml - -import mmcv -from ....parallel.utils import is_module_wrapper -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class PaviLoggerHook(LoggerHook): - - def __init__(self, - init_kwargs=None, - add_graph=False, - add_last_ckpt=False, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True, - img_key='img_info'): - super(PaviLoggerHook, self).__init__(interval, ignore_last, reset_flag, - by_epoch) - self.init_kwargs = init_kwargs - self.add_graph = add_graph - self.add_last_ckpt = add_last_ckpt - self.img_key = img_key - - @master_only - def before_run(self, runner): - super(PaviLoggerHook, self).before_run(runner) - try: - from pavi import SummaryWriter - except ImportError: - raise ImportError('Please run "pip install pavi" to install pavi.') - - self.run_name = runner.work_dir.split('/')[-1] - - if not self.init_kwargs: - self.init_kwargs = dict() - self.init_kwargs['name'] = self.run_name - self.init_kwargs['model'] = runner._model_name - if runner.meta is not None: - if 'config_dict' in runner.meta: - config_dict = runner.meta['config_dict'] - assert isinstance( - config_dict, - dict), ('meta["config_dict"] has to be of a dict, ' - f'but got {type(config_dict)}') - elif 'config_file' in runner.meta: - config_file = runner.meta['config_file'] - config_dict = dict(mmcv.Config.fromfile(config_file)) - else: - config_dict = None - if config_dict is not None: - # 'max_.*iter' is parsed in pavi sdk as the maximum iterations - # to properly set up the progress bar. 
- config_dict = config_dict.copy() - config_dict.setdefault('max_iter', runner.max_iters) - # non-serializable values are first converted in - # mmcv.dump to json - config_dict = json.loads( - mmcv.dump(config_dict, file_format='json')) - session_text = yaml.dump(config_dict) - self.init_kwargs['session_text'] = session_text - self.writer = SummaryWriter(**self.init_kwargs) - - def get_step(self, runner): - """Get the total training step/epoch.""" - if self.get_mode(runner) == 'val' and self.by_epoch: - return self.get_epoch(runner) - else: - return self.get_iter(runner) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner, add_mode=False) - if tags: - self.writer.add_scalars( - self.get_mode(runner), tags, self.get_step(runner)) - - @master_only - def after_run(self, runner): - if self.add_last_ckpt: - ckpt_path = osp.join(runner.work_dir, 'latest.pth') - if osp.islink(ckpt_path): - ckpt_path = osp.join(runner.work_dir, os.readlink(ckpt_path)) - - if osp.isfile(ckpt_path): - # runner.epoch += 1 has been done before `after_run`. - iteration = runner.epoch if self.by_epoch else runner.iter - return self.writer.add_snapshot_file( - tag=self.run_name, - snapshot_file_path=ckpt_path, - iteration=iteration) - - # flush the buffer and send a task ending signal to Pavi - self.writer.close() - - @master_only - def before_epoch(self, runner): - if runner.epoch == 0 and self.add_graph: - if is_module_wrapper(runner.model): - _model = runner.model.module - else: - _model = runner.model - device = next(_model.parameters()).device - data = next(iter(runner.data_loader)) - image = data[self.img_key][0:1].to(device) - with torch.no_grad(): - self.writer.add_graph(_model, image) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/tensorboard.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/tensorboard.py deleted file mode 100755 index a8d50366..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/tensorboard.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp - -from mmcv.utils import TORCH_VERSION, digit_version -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class TensorboardLoggerHook(LoggerHook): - - def __init__(self, - log_dir=None, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True): - super(TensorboardLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.log_dir = log_dir - - @master_only - def before_run(self, runner): - super(TensorboardLoggerHook, self).before_run(runner) - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.1')): - try: - from tensorboardX import SummaryWriter - except ImportError: - raise ImportError('Please install tensorboardX to use ' - 'TensorboardLoggerHook.') - else: - try: - from torch.utils.tensorboard import SummaryWriter - except ImportError: - raise ImportError( - 'Please run "pip install future tensorboard" to install ' - 'the dependencies to use torch.utils.tensorboard ' - '(applicable to PyTorch 1.1 or higher)') - - if self.log_dir is None: - self.log_dir = osp.join(runner.work_dir, 'tf_logs') - self.writer = SummaryWriter(self.log_dir) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner, allow_text=True) - for tag, val in tags.items(): - if isinstance(val, str): - self.writer.add_text(tag, val, self.get_iter(runner)) - else: - self.writer.add_scalar(tag, val, self.get_iter(runner)) - - @master_only - def after_run(self, runner): - self.writer.close() diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/text.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/text.py deleted file mode 100755 index b3c8a31e..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/text.py +++ /dev/null @@ -1,255 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import datetime -import os -import os.path as osp -from collections import OrderedDict - -import torch -import torch.distributed as dist - -import mmcv -from mmcv.fileio.file_client import FileClient -from mmcv.utils import is_tuple_of, scandir -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class TextLoggerHook(LoggerHook): - """Logger hook in text. - - In this logger hook, the information will be printed on terminal and - saved in json file. - - Args: - by_epoch (bool, optional): Whether EpochBasedRunner is used. - Default: True. - interval (int, optional): Logging interval (every k iterations). - Default: 10. - ignore_last (bool, optional): Ignore the log of last iterations in each - epoch if less than :attr:`interval`. Default: True. - reset_flag (bool, optional): Whether to clear the output buffer after - logging. Default: False. - interval_exp_name (int, optional): Logging interval for experiment - name. This feature is to help users conveniently get the experiment - information from screen or log file. Default: 1000. - out_dir (str, optional): Logs are saved in ``runner.work_dir`` default. - If ``out_dir`` is specified, logs will be copied to a new directory - which is the concatenation of ``out_dir`` and the last level - directory of ``runner.work_dir``. Default: None. - `New in version 1.3.16.` - out_suffix (str or tuple[str], optional): Those filenames ending with - ``out_suffix`` will be copied to ``out_dir``. - Default: ('.log.json', '.log', '.py'). - `New in version 1.3.16.` - keep_local (bool, optional): Whether to keep local log when - :attr:`out_dir` is specified. 
If False, the local log will be - removed. Default: True. - `New in version 1.3.16.` - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - """ - - def __init__(self, - by_epoch=True, - interval=10, - ignore_last=True, - reset_flag=False, - interval_exp_name=1000, - out_dir=None, - out_suffix=('.log.json', '.log', '.py'), - keep_local=True, - file_client_args=None): - super(TextLoggerHook, self).__init__(interval, ignore_last, reset_flag, - by_epoch) - self.by_epoch = by_epoch - self.time_sec_tot = 0 - self.interval_exp_name = interval_exp_name - - if out_dir is None and file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" when `out_dir` is not' - 'specified.') - self.out_dir = out_dir - - if not (out_dir is None or isinstance(out_dir, str) - or is_tuple_of(out_dir, str)): - raise TypeError('out_dir should be "None" or string or tuple of ' - 'string, but got {out_dir}') - self.out_suffix = out_suffix - - self.keep_local = keep_local - self.file_client_args = file_client_args - if self.out_dir is not None: - self.file_client = FileClient.infer_client(file_client_args, - self.out_dir) - - def before_run(self, runner): - super(TextLoggerHook, self).before_run(runner) - - if self.out_dir is not None: - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - # The final `self.out_dir` is the concatenation of `self.out_dir` - # and the last level directory of `runner.work_dir` - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - runner.logger.info( - (f'Text logs will be saved to {self.out_dir} by ' - f'{self.file_client.name} after the training process.')) - - self.start_iter = runner.iter - self.json_log_path = osp.join(runner.work_dir, - f'{runner.timestamp}.log.json') - if runner.meta is not None: - self._dump_log(runner.meta, runner) - - def _get_max_memory(self, runner): - device = getattr(runner.model, 'output_device', None) - mem = torch.cuda.max_memory_allocated(device=device) - mem_mb = torch.tensor([mem / (1024 * 1024)], - dtype=torch.int, - device=device) - if runner.world_size > 1: - dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX) - return mem_mb.item() - - def _log_info(self, log_dict, runner): - # print exp name for users to distinguish experiments - # at every ``interval_exp_name`` iterations and the end of each epoch - if runner.meta is not None and 'exp_name' in runner.meta: - if (self.every_n_iters(runner, self.interval_exp_name)) or ( - self.by_epoch and self.end_of_epoch(runner)): - exp_info = f'Exp name: {runner.meta["exp_name"]}' - runner.logger.info(exp_info) - - if log_dict['mode'] == 'train': - if isinstance(log_dict['lr'], dict): - lr_str = [] - for k, val in log_dict['lr'].items(): - lr_str.append(f'lr_{k}: {val:.3e}') - lr_str = ' '.join(lr_str) - else: - lr_str = f'lr: {log_dict["lr"]:.3e}' - - # by epoch: Epoch [4][100/1000] - # by iter: Iter [100/100000] - if self.by_epoch: - log_str = f'Epoch [{log_dict["epoch"]}]' \ - f'[{log_dict["iter"]}/{len(runner.data_loader)}]\t' - else: - log_str = f'Iter [{log_dict["iter"]}/{runner.max_iters}]\t' - log_str += f'{lr_str}, ' - - if 'time' in log_dict.keys(): - self.time_sec_tot += (log_dict['time'] * self.interval) - time_sec_avg = self.time_sec_tot / ( - runner.iter - self.start_iter + 1) - eta_sec = time_sec_avg * (runner.max_iters - runner.iter - 1) - eta_str = 
str(datetime.timedelta(seconds=int(eta_sec))) - log_str += f'eta: {eta_str}, ' - log_str += f'time: {log_dict["time"]:.3f}, ' \ - f'data_time: {log_dict["data_time"]:.3f}, ' - # statistic memory - # if torch.cuda.is_available(): - # log_str += f'memory: {log_dict["memory"]}, ' - else: - # val/test time - # here 1000 is the length of the val dataloader - # by epoch: Epoch[val] [4][1000] - # by iter: Iter[val] [1000] - if self.by_epoch: - log_str = f'Epoch({log_dict["mode"]}) ' \ - f'[{log_dict["epoch"]}][{log_dict["iter"]}]\t' - else: - log_str = f'Iter({log_dict["mode"]}) [{log_dict["iter"]}]\t' - - log_items = [] - for name, val in log_dict.items(): - # TODO: resolve this hack - # these items have been in log_str - if name in [ - 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time', 'epoch' - ]: - continue - if isinstance(val, float): - val = f'{val:.4f}' - log_items.append(f'{name}: {val}') - log_str += ', '.join(log_items) - - runner.logger.info(log_str) - - def _dump_log(self, log_dict, runner): - # dump log in json format - json_log = OrderedDict() - for k, v in log_dict.items(): - json_log[k] = self._round_float(v) - # only append log at last line - if runner.rank == 0: - with open(self.json_log_path, 'a+') as f: - mmcv.dump(json_log, f, file_format='json') - f.write('\n') - - def _round_float(self, items): - if isinstance(items, list): - return [self._round_float(item) for item in items] - elif isinstance(items, float): - return round(items, 5) - else: - return items - - def log(self, runner): - if 'eval_iter_num' in runner.log_buffer.output: - # this doesn't modify runner.iter and is regardless of by_epoch - cur_iter = runner.log_buffer.output.pop('eval_iter_num') - else: - cur_iter = self.get_iter(runner, inner_iter=True) - - log_dict = OrderedDict( - mode=self.get_mode(runner), - epoch=self.get_epoch(runner), - iter=cur_iter) - - # only record lr of the first param group - cur_lr = runner.current_lr() - if isinstance(cur_lr, list): - log_dict['lr'] = cur_lr[0] - else: - assert isinstance(cur_lr, dict) - log_dict['lr'] = {} - for k, lr_ in cur_lr.items(): - assert isinstance(lr_, list) - log_dict['lr'].update({k: lr_[0]}) - - # if 'time' in runner.log_buffer.output: - # statistic memory - # if torch.cuda.is_available(): - # log_dict['memory'] = self._get_max_memory(runner) - - log_dict = dict(log_dict, **runner.log_buffer.output) - - self._log_info(log_dict, runner) - self._dump_log(log_dict, runner) - return log_dict - - def after_run(self, runner): - # copy or upload logs to self.out_dir - if self.out_dir is not None: - for filename in scandir(runner.work_dir, self.out_suffix, True): - local_filepath = osp.join(runner.work_dir, filename) - out_filepath = self.file_client.join_path( - self.out_dir, filename) - with open(local_filepath, 'r') as f: - self.file_client.put_text(f.read(), out_filepath) - - runner.logger.info( - (f'The file {local_filepath} has been uploaded to ' - f'{out_filepath}.')) - - if not self.keep_local: - os.remove(local_filepath) - runner.logger.info( - (f'{local_filepath} was removed due to the ' - '`self.keep_local=False`')) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/wandb.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/wandb.py deleted file mode 100755 index 9f680846..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/logger/wandb.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ...dist_utils import master_only
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class WandbLoggerHook(LoggerHook):
-
-    def __init__(self,
-                 init_kwargs=None,
-                 interval=10,
-                 ignore_last=True,
-                 reset_flag=False,
-                 commit=True,
-                 by_epoch=True,
-                 with_step=True):
-        super(WandbLoggerHook, self).__init__(interval, ignore_last,
-                                              reset_flag, by_epoch)
-        self.import_wandb()
-        self.init_kwargs = init_kwargs
-        self.commit = commit
-        self.with_step = with_step
-
-    def import_wandb(self):
-        try:
-            import wandb
-        except ImportError:
-            raise ImportError(
-                'Please run "pip install wandb" to install wandb')
-        self.wandb = wandb
-
-    @master_only
-    def before_run(self, runner):
-        super(WandbLoggerHook, self).before_run(runner)
-        if self.wandb is None:
-            self.import_wandb()
-        if self.init_kwargs:
-            self.wandb.init(**self.init_kwargs)
-        else:
-            self.wandb.init()
-
-    @master_only
-    def log(self, runner):
-        tags = self.get_loggable_tags(runner)
-        if tags:
-            if self.with_step:
-                self.wandb.log(
-                    tags, step=self.get_iter(runner), commit=self.commit)
-            else:
-                tags['global_step'] = self.get_iter(runner)
-                self.wandb.log(tags, commit=self.commit)
-
-    @master_only
-    def after_run(self, runner):
-        self.wandb.join()
diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/lr_updater.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/lr_updater.py
deleted file mode 100755
index e5a12415..00000000
--- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/lr_updater.py
+++ /dev/null
@@ -1,670 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numbers
-from math import cos, pi
-
-import mmcv
-from .hook import HOOKS, Hook
-
-
-class LrUpdaterHook(Hook):
-    """LR Scheduler in MMCV.
-
-    Args:
-        by_epoch (bool): LR changes epoch by epoch
-        warmup (string): Type of warmup used.
It can be None(use no warmup), - 'constant', 'linear' or 'exp' - warmup_iters (int): The number of iterations or epochs that warmup - lasts - warmup_ratio (float): LR used at the beginning of warmup equals to - warmup_ratio * initial_lr - warmup_by_epoch (bool): When warmup_by_epoch == True, warmup_iters - means the number of epochs that warmup lasts, otherwise means the - number of iteration that warmup lasts - """ - - def __init__(self, - by_epoch=True, - warmup=None, - warmup_iters=0, - warmup_ratio=0.1, - warmup_by_epoch=False): - # validate the "warmup" argument - if warmup is not None: - if warmup not in ['constant', 'linear', 'exp']: - raise ValueError( - f'"{warmup}" is not a supported type for warming up, valid' - ' types are "constant" and "linear"') - if warmup is not None: - assert warmup_iters > 0, \ - '"warmup_iters" must be a positive integer' - assert 0 < warmup_ratio <= 1.0, \ - '"warmup_ratio" must be in range (0,1]' - - self.by_epoch = by_epoch - self.warmup = warmup - self.warmup_iters = warmup_iters - self.warmup_ratio = warmup_ratio - self.warmup_by_epoch = warmup_by_epoch - - if self.warmup_by_epoch: - self.warmup_epochs = self.warmup_iters - self.warmup_iters = None - else: - self.warmup_epochs = None - - self.base_lr = [] # initial lr for all param groups - self.regular_lr = [] # expected lr if no warming up is performed - - def _set_lr(self, runner, lr_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, lr in zip(optim.param_groups, lr_groups[k]): - param_group['lr'] = lr - else: - for param_group, lr in zip(runner.optimizer.param_groups, - lr_groups): - param_group['lr'] = lr - - def get_lr(self, runner, base_lr): - raise NotImplementedError - - def get_regular_lr(self, runner): - if isinstance(runner.optimizer, dict): - lr_groups = {} - for k in runner.optimizer.keys(): - _lr_group = [ - self.get_lr(runner, _base_lr) - for _base_lr in self.base_lr[k] - ] - lr_groups.update({k: _lr_group}) - - return lr_groups - else: - return [self.get_lr(runner, _base_lr) for _base_lr in self.base_lr] - - def get_warmup_lr(self, cur_iters): - - def _get_warmup_lr(cur_iters, regular_lr): - if self.warmup == 'constant': - warmup_lr = [_lr * self.warmup_ratio for _lr in regular_lr] - elif self.warmup == 'linear': - k = (1 - cur_iters / self.warmup_iters) * (1 - - self.warmup_ratio) - warmup_lr = [_lr * (1 - k) for _lr in regular_lr] - elif self.warmup == 'exp': - k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters) - warmup_lr = [_lr * k for _lr in regular_lr] - return warmup_lr - - if isinstance(self.regular_lr, dict): - lr_groups = {} - for key, regular_lr in self.regular_lr.items(): - lr_groups[key] = _get_warmup_lr(cur_iters, regular_lr) - return lr_groups - else: - return _get_warmup_lr(cur_iters, self.regular_lr) - - def before_run(self, runner): - # NOTE: when resuming from a checkpoint, if 'initial_lr' is not saved, - # it will be set according to the optimizer params - if isinstance(runner.optimizer, dict): - self.base_lr = {} - for k, optim in runner.optimizer.items(): - for group in optim.param_groups: - group.setdefault('initial_lr', group['lr']) - _base_lr = [ - group['initial_lr'] for group in optim.param_groups - ] - self.base_lr.update({k: _base_lr}) - else: - for group in runner.optimizer.param_groups: - group.setdefault('initial_lr', group['lr']) - self.base_lr = [ - group['initial_lr'] for group in runner.optimizer.param_groups - ] - - def before_train_epoch(self, runner): - if self.warmup_iters 
is None: - epoch_len = len(runner.data_loader) - self.warmup_iters = self.warmup_epochs * epoch_len - - if not self.by_epoch: - return - - self.regular_lr = self.get_regular_lr(runner) - self._set_lr(runner, self.regular_lr) - - def before_train_iter(self, runner): - cur_iter = runner.iter - if not self.by_epoch: - self.regular_lr = self.get_regular_lr(runner) - if self.warmup is None or cur_iter >= self.warmup_iters: - self._set_lr(runner, self.regular_lr) - else: - warmup_lr = self.get_warmup_lr(cur_iter) - self._set_lr(runner, warmup_lr) - elif self.by_epoch: - if self.warmup is None or cur_iter > self.warmup_iters: - return - elif cur_iter == self.warmup_iters: - self._set_lr(runner, self.regular_lr) - else: - warmup_lr = self.get_warmup_lr(cur_iter) - self._set_lr(runner, warmup_lr) - - -@HOOKS.register_module() -class FixedLrUpdaterHook(LrUpdaterHook): - - def __init__(self, **kwargs): - super(FixedLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - return base_lr - - -@HOOKS.register_module() -class StepLrUpdaterHook(LrUpdaterHook): - """Step LR scheduler with min_lr clipping. - - Args: - step (int | list[int]): Step to decay the LR. If an int value is given, - regard it as the decay interval. If a list is given, decay LR at - these steps. - gamma (float, optional): Decay LR ratio. Default: 0.1. - min_lr (float, optional): Minimum LR value to keep. If LR after decay - is lower than `min_lr`, it will be clipped to this value. If None - is given, we don't perform lr clipping. Default: None. - """ - - def __init__(self, step, gamma=0.1, min_lr=None, **kwargs): - if isinstance(step, list): - assert mmcv.is_list_of(step, int) - assert all([s > 0 for s in step]) - elif isinstance(step, int): - assert step > 0 - else: - raise TypeError('"step" must be a list or integer') - self.step = step - self.gamma = gamma - self.min_lr = min_lr - super(StepLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - - # calculate exponential term - if isinstance(self.step, int): - exp = progress // self.step - else: - exp = len(self.step) - for i, s in enumerate(self.step): - if progress < s: - exp = i - break - - lr = base_lr * (self.gamma**exp) - if self.min_lr is not None: - # clip to a minimum value - lr = max(lr, self.min_lr) - return lr - - -@HOOKS.register_module() -class ExpLrUpdaterHook(LrUpdaterHook): - - def __init__(self, gamma, **kwargs): - self.gamma = gamma - super(ExpLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - return base_lr * self.gamma**progress - - -@HOOKS.register_module() -class PolyLrUpdaterHook(LrUpdaterHook): - - def __init__(self, power=1., min_lr=0., **kwargs): - self.power = power - self.min_lr = min_lr - super(PolyLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - coeff = (1 - progress / max_progress)**self.power - return (base_lr - self.min_lr) * coeff + self.min_lr - - -@HOOKS.register_module() -class InvLrUpdaterHook(LrUpdaterHook): - - def __init__(self, gamma, power=1., **kwargs): - self.gamma = gamma - self.power = power - super(InvLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - return base_lr * (1 + 
self.gamma * progress)**(-self.power) - - -@HOOKS.register_module() -class CosineAnnealingLrUpdaterHook(LrUpdaterHook): - - def __init__(self, min_lr=None, min_lr_ratio=None, **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super(CosineAnnealingLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - return annealing_cos(base_lr, target_lr, progress / max_progress) - - -@HOOKS.register_module() -class FlatCosineAnnealingLrUpdaterHook(LrUpdaterHook): - """Flat + Cosine lr schedule. - - Modified from https://github.com/fastai/fastai/blob/master/fastai/callback/schedule.py#L128 # noqa: E501 - - Args: - start_percent (float): When to start annealing the learning rate - after the percentage of the total training steps. - The value should be in range [0, 1). - Default: 0.75 - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. - """ - - def __init__(self, - start_percent=0.75, - min_lr=None, - min_lr_ratio=None, - **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - if start_percent < 0 or start_percent > 1 or not isinstance( - start_percent, float): - raise ValueError( - 'expected float between 0 and 1 start_percent, but ' - f'got {start_percent}') - self.start_percent = start_percent - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super(FlatCosineAnnealingLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - start = round(runner.max_epochs * self.start_percent) - progress = runner.epoch - start - max_progress = runner.max_epochs - start - else: - start = round(runner.max_iters * self.start_percent) - progress = runner.iter - start - max_progress = runner.max_iters - start - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - - if progress < 0: - return base_lr - else: - return annealing_cos(base_lr, target_lr, progress / max_progress) - - -@HOOKS.register_module() -class CosineRestartLrUpdaterHook(LrUpdaterHook): - """Cosine annealing with restarts learning rate scheme. - - Args: - periods (list[int]): Periods for each cosine anneling cycle. - restart_weights (list[float], optional): Restart weights at each - restart iteration. Default: [1]. - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. - """ - - def __init__(self, - periods, - restart_weights=[1], - min_lr=None, - min_lr_ratio=None, - **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - self.periods = periods - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - self.restart_weights = restart_weights - assert (len(self.periods) == len(self.restart_weights) - ), 'periods and restart_weights should have the same length.' 
- super(CosineRestartLrUpdaterHook, self).__init__(**kwargs) - - self.cumulative_periods = [ - sum(self.periods[0:i + 1]) for i in range(0, len(self.periods)) - ] - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - else: - progress = runner.iter - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - - idx = get_position_from_periods(progress, self.cumulative_periods) - current_weight = self.restart_weights[idx] - nearest_restart = 0 if idx == 0 else self.cumulative_periods[idx - 1] - current_periods = self.periods[idx] - - alpha = min((progress - nearest_restart) / current_periods, 1) - return annealing_cos(base_lr, target_lr, alpha, current_weight) - - -def get_position_from_periods(iteration, cumulative_periods): - """Get the position from a period list. - - It will return the index of the right-closest number in the period list. - For example, the cumulative_periods = [100, 200, 300, 400], - if iteration == 50, return 0; - if iteration == 210, return 2; - if iteration == 300, return 3. - - Args: - iteration (int): Current iteration. - cumulative_periods (list[int]): Cumulative period list. - - Returns: - int: The position of the right-closest number in the period list. - """ - for i, period in enumerate(cumulative_periods): - if iteration < period: - return i - raise ValueError(f'Current iteration {iteration} exceeds ' - f'cumulative_periods {cumulative_periods}') - - -@HOOKS.register_module() -class CyclicLrUpdaterHook(LrUpdaterHook): - """Cyclic LR Scheduler. - - Implement the cyclical learning rate policy (CLR) described in - https://arxiv.org/pdf/1506.01186.pdf - - Different from the original paper, we use cosine annealing rather than - triangular policy inside a cycle. This improves the performance in the - 3D detection area. - - Args: - by_epoch (bool): Whether to update LR by epoch. - target_ratio (tuple[float]): Relative ratio of the highest LR and the - lowest LR to the initial LR. - cyclic_times (int): Number of cycles during training - step_ratio_up (float): The ratio of the increasing process of LR in - the total cycle. - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. Default: 'cos'. 
- """ - - def __init__(self, - by_epoch=False, - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4, - anneal_strategy='cos', - **kwargs): - if isinstance(target_ratio, float): - target_ratio = (target_ratio, target_ratio / 1e5) - elif isinstance(target_ratio, tuple): - target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \ - if len(target_ratio) == 1 else target_ratio - else: - raise ValueError('target_ratio should be either float ' - f'or tuple, got {type(target_ratio)}') - - assert len(target_ratio) == 2, \ - '"target_ratio" must be list or tuple of two floats' - assert 0 <= step_ratio_up < 1.0, \ - '"step_ratio_up" must be in range [0,1)' - - self.target_ratio = target_ratio - self.cyclic_times = cyclic_times - self.step_ratio_up = step_ratio_up - self.lr_phases = [] # init lr_phases - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - - assert not by_epoch, \ - 'currently only support "by_epoch" = False' - super(CyclicLrUpdaterHook, self).__init__(by_epoch, **kwargs) - - def before_run(self, runner): - super(CyclicLrUpdaterHook, self).before_run(runner) - # initiate lr_phases - # total lr_phases are separated as up and down - max_iter_per_phase = runner.max_iters // self.cyclic_times - iter_up_phase = int(self.step_ratio_up * max_iter_per_phase) - self.lr_phases.append( - [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]]) - self.lr_phases.append([ - iter_up_phase, max_iter_per_phase, max_iter_per_phase, - self.target_ratio[0], self.target_ratio[1] - ]) - - def get_lr(self, runner, base_lr): - curr_iter = runner.iter - for (start_iter, end_iter, max_iter_per_phase, start_ratio, - end_ratio) in self.lr_phases: - curr_iter %= max_iter_per_phase - if start_iter <= curr_iter < end_iter: - progress = curr_iter - start_iter - return self.anneal_func(base_lr * start_ratio, - base_lr * end_ratio, - progress / (end_iter - start_iter)) - - -@HOOKS.register_module() -class OneCycleLrUpdaterHook(LrUpdaterHook): - """One Cycle LR Scheduler. - - The 1cycle learning rate policy changes the learning rate after every - batch. The one cycle learning rate policy is described in - https://arxiv.org/pdf/1708.07120.pdf - - Args: - max_lr (float or list): Upper learning rate boundaries in the cycle - for each parameter group. - total_steps (int, optional): The total number of steps in the cycle. - Note that if a value is not provided here, it will be the max_iter - of runner. Default: None. - pct_start (float): The percentage of the cycle (in number of steps) - spent increasing the learning rate. - Default: 0.3 - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. - Default: 'cos' - div_factor (float): Determines the initial learning rate via - initial_lr = max_lr/div_factor - Default: 25 - final_div_factor (float): Determines the minimum learning rate via - min_lr = initial_lr/final_div_factor - Default: 1e4 - three_phase (bool): If three_phase is True, use a third phase of the - schedule to annihilate the learning rate according to - final_div_factor instead of modifying the second phase (the first - two phases will be symmetrical about the step indicated by - pct_start). 
- Default: False - """ - - def __init__(self, - max_lr, - total_steps=None, - pct_start=0.3, - anneal_strategy='cos', - div_factor=25, - final_div_factor=1e4, - three_phase=False, - **kwargs): - # validate by_epoch, currently only support by_epoch = False - if 'by_epoch' not in kwargs: - kwargs['by_epoch'] = False - else: - assert not kwargs['by_epoch'], \ - 'currently only support "by_epoch" = False' - if not isinstance(max_lr, (numbers.Number, list, dict)): - raise ValueError('the type of max_lr must be the one of list or ' - f'dict, but got {type(max_lr)}') - self._max_lr = max_lr - if total_steps is not None: - if not isinstance(total_steps, int): - raise ValueError('the type of total_steps must be int, but' - f'got {type(total_steps)}') - self.total_steps = total_steps - # validate pct_start - if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float): - raise ValueError('expected float between 0 and 1 pct_start, but ' - f'got {pct_start}') - self.pct_start = pct_start - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - self.div_factor = div_factor - self.final_div_factor = final_div_factor - self.three_phase = three_phase - self.lr_phases = [] # init lr_phases - super(OneCycleLrUpdaterHook, self).__init__(**kwargs) - - def before_run(self, runner): - if hasattr(self, 'total_steps'): - total_steps = self.total_steps - else: - total_steps = runner.max_iters - if total_steps < runner.max_iters: - raise ValueError( - 'The total steps must be greater than or equal to max ' - f'iterations {runner.max_iters} of runner, but total steps ' - f'is {total_steps}.') - - if isinstance(runner.optimizer, dict): - self.base_lr = {} - for k, optim in runner.optimizer.items(): - _max_lr = format_param(k, optim, self._max_lr) - self.base_lr[k] = [lr / self.div_factor for lr in _max_lr] - for group, lr in zip(optim.param_groups, self.base_lr[k]): - group.setdefault('initial_lr', lr) - else: - k = type(runner.optimizer).__name__ - _max_lr = format_param(k, runner.optimizer, self._max_lr) - self.base_lr = [lr / self.div_factor for lr in _max_lr] - for group, lr in zip(runner.optimizer.param_groups, self.base_lr): - group.setdefault('initial_lr', lr) - - if self.three_phase: - self.lr_phases.append( - [float(self.pct_start * total_steps) - 1, 1, self.div_factor]) - self.lr_phases.append([ - float(2 * self.pct_start * total_steps) - 2, self.div_factor, 1 - ]) - self.lr_phases.append( - [total_steps - 1, 1, 1 / self.final_div_factor]) - else: - self.lr_phases.append( - [float(self.pct_start * total_steps) - 1, 1, self.div_factor]) - self.lr_phases.append( - [total_steps - 1, self.div_factor, 1 / self.final_div_factor]) - - def get_lr(self, runner, base_lr): - curr_iter = runner.iter - start_iter = 0 - for i, (end_iter, start_lr, end_lr) in enumerate(self.lr_phases): - if curr_iter <= end_iter: - pct = (curr_iter - start_iter) / (end_iter - start_iter) - lr = self.anneal_func(base_lr * start_lr, base_lr * end_lr, - pct) - break - start_iter = end_iter - return lr - - -def annealing_cos(start, end, factor, weight=1): - """Calculate annealing cos learning rate. - - Cosine anneal from `weight * start + (1 - weight) * end` to `end` as - percentage goes from 0.0 to 1.0. 
-
-    Args:
-        start (float): The starting learning rate of the cosine annealing.
-        end (float): The ending learing rate of the cosine annealing.
-        factor (float): The coefficient of `pi` when calculating the current
-            percentage. Range from 0.0 to 1.0.
-        weight (float, optional): The combination factor of `start` and `end`
-            when calculating the actual starting learning rate. Default to 1.
-    """
-    cos_out = cos(pi * factor) + 1
-    return end + 0.5 * weight * (start - end) * cos_out
-
-
-def annealing_linear(start, end, factor):
-    """Calculate annealing linear learning rate.
-
-    Linear anneal from `start` to `end` as percentage goes from 0.0 to 1.0.
-
-    Args:
-        start (float): The starting learning rate of the linear annealing.
-        end (float): The ending learing rate of the linear annealing.
-        factor (float): The coefficient of `pi` when calculating the current
-            percentage. Range from 0.0 to 1.0.
-    """
-    return start + (end - start) * factor
-
-
-def format_param(name, optim, param):
-    if isinstance(param, numbers.Number):
-        return [param] * len(optim.param_groups)
-    elif isinstance(param, (list, tuple)):  # multi param groups
-        if len(param) != len(optim.param_groups):
-            raise ValueError(f'expected {len(optim.param_groups)} '
-                             f'values for {name}, got {len(param)}')
-        return param
-    else:  # multi optimizers
-        if name not in param:
-            raise KeyError(f'{name} is not found in {param.keys()}')
-        return param[name]
diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/memory.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/memory.py
deleted file mode 100755
index 70cf9a83..00000000
--- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/memory.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-
-from .hook import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class EmptyCacheHook(Hook):
-
-    def __init__(self, before_epoch=False, after_epoch=True, after_iter=False):
-        self._before_epoch = before_epoch
-        self._after_epoch = after_epoch
-        self._after_iter = after_iter
-
-    def after_iter(self, runner):
-        if self._after_iter:
-            torch.cuda.empty_cache()
-
-    def before_epoch(self, runner):
-        if self._before_epoch:
-            torch.cuda.empty_cache()
-
-    def after_epoch(self, runner):
-        if self._after_epoch:
-            torch.cuda.empty_cache()
diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/momentum_updater.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/momentum_updater.py
deleted file mode 100755
index 13d0e2fa..00000000
--- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/momentum_updater.py
+++ /dev/null
@@ -1,493 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import mmcv -from .hook import HOOKS, Hook -from .lr_updater import annealing_cos, annealing_linear, format_param - - -class MomentumUpdaterHook(Hook): - - def __init__(self, - by_epoch=True, - warmup=None, - warmup_iters=0, - warmup_ratio=0.9): - # validate the "warmup" argument - if warmup is not None: - if warmup not in ['constant', 'linear', 'exp']: - raise ValueError( - f'"{warmup}" is not a supported type for warming up, valid' - ' types are "constant" and "linear"') - if warmup is not None: - assert warmup_iters > 0, \ - '"warmup_iters" must be a positive integer' - assert 0 < warmup_ratio <= 1.0, \ - '"warmup_momentum" must be in range (0,1]' - - self.by_epoch = by_epoch - self.warmup = warmup - self.warmup_iters = warmup_iters - self.warmup_ratio = warmup_ratio - - self.base_momentum = [] # initial momentum for all param groups - self.regular_momentum = [ - ] # expected momentum if no warming up is performed - - def _set_momentum(self, runner, momentum_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, mom in zip(optim.param_groups, - momentum_groups[k]): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - else: - for param_group, mom in zip(runner.optimizer.param_groups, - momentum_groups): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - - def get_momentum(self, runner, base_momentum): - raise NotImplementedError - - def get_regular_momentum(self, runner): - if isinstance(runner.optimizer, dict): - momentum_groups = {} - for k in runner.optimizer.keys(): - _momentum_group = [ - self.get_momentum(runner, _base_momentum) - for _base_momentum in self.base_momentum[k] - ] - momentum_groups.update({k: _momentum_group}) - return momentum_groups - else: - return [ - self.get_momentum(runner, _base_momentum) - for _base_momentum in self.base_momentum - ] - - def get_warmup_momentum(self, cur_iters): - - def _get_warmup_momentum(cur_iters, regular_momentum): - if self.warmup == 'constant': - warmup_momentum = [ - _momentum / self.warmup_ratio - for _momentum in self.regular_momentum - ] - elif self.warmup == 'linear': - k = (1 - cur_iters / self.warmup_iters) * (1 - - self.warmup_ratio) - warmup_momentum = [ - _momentum / (1 - k) for _momentum in self.regular_mom - ] - elif self.warmup == 'exp': - k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters) - warmup_momentum = [ - _momentum / k for _momentum in self.regular_mom - ] - return warmup_momentum - - if isinstance(self.regular_momentum, dict): - momentum_groups = {} - for key, regular_momentum in self.regular_momentum.items(): - momentum_groups[key] = _get_warmup_momentum( - cur_iters, regular_momentum) - return momentum_groups - else: - return _get_warmup_momentum(cur_iters, self.regular_momentum) - - def before_run(self, runner): - # NOTE: when resuming from a checkpoint, - # if 'initial_momentum' is not saved, - # it will be set according to the optimizer params - if isinstance(runner.optimizer, dict): - self.base_momentum = {} - for k, optim in runner.optimizer.items(): - for group in optim.param_groups: - if 'momentum' in group.keys(): - group.setdefault('initial_momentum', group['momentum']) - else: - group.setdefault('initial_momentum', group['betas'][0]) - _base_momentum = [ - group['initial_momentum'] for group in 
optim.param_groups - ] - self.base_momentum.update({k: _base_momentum}) - else: - for group in runner.optimizer.param_groups: - if 'momentum' in group.keys(): - group.setdefault('initial_momentum', group['momentum']) - else: - group.setdefault('initial_momentum', group['betas'][0]) - self.base_momentum = [ - group['initial_momentum'] - for group in runner.optimizer.param_groups - ] - - def before_train_epoch(self, runner): - if not self.by_epoch: - return - self.regular_mom = self.get_regular_momentum(runner) - self._set_momentum(runner, self.regular_mom) - - def before_train_iter(self, runner): - cur_iter = runner.iter - if not self.by_epoch: - self.regular_mom = self.get_regular_momentum(runner) - if self.warmup is None or cur_iter >= self.warmup_iters: - self._set_momentum(runner, self.regular_mom) - else: - warmup_momentum = self.get_warmup_momentum(cur_iter) - self._set_momentum(runner, warmup_momentum) - elif self.by_epoch: - if self.warmup is None or cur_iter > self.warmup_iters: - return - elif cur_iter == self.warmup_iters: - self._set_momentum(runner, self.regular_mom) - else: - warmup_momentum = self.get_warmup_momentum(cur_iter) - self._set_momentum(runner, warmup_momentum) - - -@HOOKS.register_module() -class StepMomentumUpdaterHook(MomentumUpdaterHook): - """Step momentum scheduler with min value clipping. - - Args: - step (int | list[int]): Step to decay the momentum. If an int value is - given, regard it as the decay interval. If a list is given, decay - momentum at these steps. - gamma (float, optional): Decay momentum ratio. Default: 0.5. - min_momentum (float, optional): Minimum momentum value to keep. If - momentum after decay is lower than this value, it will be clipped - accordingly. If None is given, we don't perform lr clipping. - Default: None. 
- """ - - def __init__(self, step, gamma=0.5, min_momentum=None, **kwargs): - if isinstance(step, list): - assert mmcv.is_list_of(step, int) - assert all([s > 0 for s in step]) - elif isinstance(step, int): - assert step > 0 - else: - raise TypeError('"step" must be a list or integer') - self.step = step - self.gamma = gamma - self.min_momentum = min_momentum - super(StepMomentumUpdaterHook, self).__init__(**kwargs) - - def get_momentum(self, runner, base_momentum): - progress = runner.epoch if self.by_epoch else runner.iter - - # calculate exponential term - if isinstance(self.step, int): - exp = progress // self.step - else: - exp = len(self.step) - for i, s in enumerate(self.step): - if progress < s: - exp = i - break - - momentum = base_momentum * (self.gamma**exp) - if self.min_momentum is not None: - # clip to a minimum value - momentum = max(momentum, self.min_momentum) - return momentum - - -@HOOKS.register_module() -class CosineAnnealingMomentumUpdaterHook(MomentumUpdaterHook): - - def __init__(self, min_momentum=None, min_momentum_ratio=None, **kwargs): - assert (min_momentum is None) ^ (min_momentum_ratio is None) - self.min_momentum = min_momentum - self.min_momentum_ratio = min_momentum_ratio - super(CosineAnnealingMomentumUpdaterHook, self).__init__(**kwargs) - - def get_momentum(self, runner, base_momentum): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - if self.min_momentum_ratio is not None: - target_momentum = base_momentum * self.min_momentum_ratio - else: - target_momentum = self.min_momentum - return annealing_cos(base_momentum, target_momentum, - progress / max_progress) - - -@HOOKS.register_module() -class CyclicMomentumUpdaterHook(MomentumUpdaterHook): - """Cyclic momentum Scheduler. - - Implement the cyclical momentum scheduler policy described in - https://arxiv.org/pdf/1708.07120.pdf - - This momentum scheduler usually used together with the CyclicLRUpdater - to improve the performance in the 3D detection area. - - Attributes: - target_ratio (tuple[float]): Relative ratio of the lowest momentum and - the highest momentum to the initial momentum. - cyclic_times (int): Number of cycles during training - step_ratio_up (float): The ratio of the increasing process of momentum - in the total cycle. - by_epoch (bool): Whether to update momentum by epoch. 
- """ - - def __init__(self, - by_epoch=False, - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4, - **kwargs): - if isinstance(target_ratio, float): - target_ratio = (target_ratio, target_ratio / 1e5) - elif isinstance(target_ratio, tuple): - target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \ - if len(target_ratio) == 1 else target_ratio - else: - raise ValueError('target_ratio should be either float ' - f'or tuple, got {type(target_ratio)}') - - assert len(target_ratio) == 2, \ - '"target_ratio" must be list or tuple of two floats' - assert 0 <= step_ratio_up < 1.0, \ - '"step_ratio_up" must be in range [0,1)' - - self.target_ratio = target_ratio - self.cyclic_times = cyclic_times - self.step_ratio_up = step_ratio_up - self.momentum_phases = [] # init momentum_phases - # currently only support by_epoch=False - assert not by_epoch, \ - 'currently only support "by_epoch" = False' - super(CyclicMomentumUpdaterHook, self).__init__(by_epoch, **kwargs) - - def before_run(self, runner): - super(CyclicMomentumUpdaterHook, self).before_run(runner) - # initiate momentum_phases - # total momentum_phases are separated as up and down - max_iter_per_phase = runner.max_iters // self.cyclic_times - iter_up_phase = int(self.step_ratio_up * max_iter_per_phase) - self.momentum_phases.append( - [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]]) - self.momentum_phases.append([ - iter_up_phase, max_iter_per_phase, max_iter_per_phase, - self.target_ratio[0], self.target_ratio[1] - ]) - - def get_momentum(self, runner, base_momentum): - curr_iter = runner.iter - for (start_iter, end_iter, max_iter_per_phase, start_ratio, - end_ratio) in self.momentum_phases: - curr_iter %= max_iter_per_phase - if start_iter <= curr_iter < end_iter: - progress = curr_iter - start_iter - return annealing_cos(base_momentum * start_ratio, - base_momentum * end_ratio, - progress / (end_iter - start_iter)) - - -@HOOKS.register_module() -class OneCycleMomentumUpdaterHook(MomentumUpdaterHook): - """OneCycle momentum Scheduler. - - This momentum scheduler usually used together with the OneCycleLrUpdater - to improve the performance. - - Args: - base_momentum (float or list): Lower momentum boundaries in the cycle - for each parameter group. Note that momentum is cycled inversely - to learning rate; at the peak of a cycle, momentum is - 'base_momentum' and learning rate is 'max_lr'. - Default: 0.85 - max_momentum (float or list): Upper momentum boundaries in the cycle - for each parameter group. Functionally, - it defines the cycle amplitude (max_momentum - base_momentum). - Note that momentum is cycled inversely - to learning rate; at the start of a cycle, momentum is - 'max_momentum' and learning rate is 'base_lr' - Default: 0.95 - pct_start (float): The percentage of the cycle (in number of steps) - spent increasing the learning rate. - Default: 0.3 - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. - Default: 'cos' - three_phase (bool): If three_phase is True, use a third phase of the - schedule to annihilate the learning rate according to - final_div_factor instead of modifying the second phase (the first - two phases will be symmetrical about the step indicated by - pct_start). 
- Default: False - """ - - def __init__(self, - base_momentum=0.85, - max_momentum=0.95, - pct_start=0.3, - anneal_strategy='cos', - three_phase=False, - **kwargs): - # validate by_epoch, currently only support by_epoch=False - if 'by_epoch' not in kwargs: - kwargs['by_epoch'] = False - else: - assert not kwargs['by_epoch'], \ - 'currently only support "by_epoch" = False' - if not isinstance(base_momentum, (float, list, dict)): - raise ValueError('base_momentum must be the type among of float,' - 'list or dict.') - self._base_momentum = base_momentum - if not isinstance(max_momentum, (float, list, dict)): - raise ValueError('max_momentum must be the type among of float,' - 'list or dict.') - self._max_momentum = max_momentum - # validate pct_start - if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float): - raise ValueError('Expected float between 0 and 1 pct_start, but ' - f'got {pct_start}') - self.pct_start = pct_start - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must by one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - self.three_phase = three_phase - self.momentum_phases = [] # init momentum_phases - super(OneCycleMomentumUpdaterHook, self).__init__(**kwargs) - - def before_run(self, runner): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - if ('momentum' not in optim.defaults - and 'betas' not in optim.defaults): - raise ValueError('optimizer must support momentum with' - 'option enabled') - self.use_beta1 = 'betas' in optim.defaults - _base_momentum = format_param(k, optim, self._base_momentum) - _max_momentum = format_param(k, optim, self._max_momentum) - for group, b_momentum, m_momentum in zip( - optim.param_groups, _base_momentum, _max_momentum): - if self.use_beta1: - _, beta2 = group['betas'] - group['betas'] = (m_momentum, beta2) - else: - group['momentum'] = m_momentum - group['base_momentum'] = b_momentum - group['max_momentum'] = m_momentum - else: - optim = runner.optimizer - if ('momentum' not in optim.defaults - and 'betas' not in optim.defaults): - raise ValueError('optimizer must support momentum with' - 'option enabled') - self.use_beta1 = 'betas' in optim.defaults - k = type(optim).__name__ - _base_momentum = format_param(k, optim, self._base_momentum) - _max_momentum = format_param(k, optim, self._max_momentum) - for group, b_momentum, m_momentum in zip(optim.param_groups, - _base_momentum, - _max_momentum): - if self.use_beta1: - _, beta2 = group['betas'] - group['betas'] = (m_momentum, beta2) - else: - group['momentum'] = m_momentum - group['base_momentum'] = b_momentum - group['max_momentum'] = m_momentum - - if self.three_phase: - self.momentum_phases.append({ - 'end_iter': - float(self.pct_start * runner.max_iters) - 1, - 'start_momentum': - 'max_momentum', - 'end_momentum': - 'base_momentum' - }) - self.momentum_phases.append({ - 'end_iter': - float(2 * self.pct_start * runner.max_iters) - 2, - 'start_momentum': - 'base_momentum', - 'end_momentum': - 'max_momentum' - }) - self.momentum_phases.append({ - 'end_iter': runner.max_iters - 1, - 'start_momentum': 'max_momentum', - 'end_momentum': 'max_momentum' - }) - else: - self.momentum_phases.append({ - 'end_iter': - float(self.pct_start * runner.max_iters) - 1, - 'start_momentum': - 'max_momentum', - 'end_momentum': - 'base_momentum' - }) 
- self.momentum_phases.append({ - 'end_iter': runner.max_iters - 1, - 'start_momentum': 'base_momentum', - 'end_momentum': 'max_momentum' - }) - - def _set_momentum(self, runner, momentum_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, mom in zip(optim.param_groups, - momentum_groups[k]): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - else: - for param_group, mom in zip(runner.optimizer.param_groups, - momentum_groups): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - - def get_momentum(self, runner, param_group): - curr_iter = runner.iter - start_iter = 0 - for i, phase in enumerate(self.momentum_phases): - end_iter = phase['end_iter'] - if curr_iter <= end_iter or i == len(self.momentum_phases) - 1: - pct = (curr_iter - start_iter) / (end_iter - start_iter) - momentum = self.anneal_func( - param_group[phase['start_momentum']], - param_group[phase['end_momentum']], pct) - break - start_iter = end_iter - return momentum - - def get_regular_momentum(self, runner): - if isinstance(runner.optimizer, dict): - momentum_groups = {} - for k, optim in runner.optimizer.items(): - _momentum_group = [ - self.get_momentum(runner, param_group) - for param_group in optim.param_groups - ] - momentum_groups.update({k: _momentum_group}) - return momentum_groups - else: - momentum_groups = [] - for param_group in runner.optimizer.param_groups: - momentum_groups.append(self.get_momentum(runner, param_group)) - return momentum_groups diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/optimizer.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/optimizer.py deleted file mode 100755 index f575ceda..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/optimizer.py +++ /dev/null @@ -1,508 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -from collections import defaultdict -from itertools import chain - -from torch.nn.utils import clip_grad - -from mmcv.utils import TORCH_VERSION, _BatchNorm, digit_version -from ..dist_utils import allreduce_grads -from ..fp16_utils import LossScaler, wrap_fp16_model -from .hook import HOOKS, Hook - -try: - # If PyTorch version >= 1.6.0, torch.cuda.amp.GradScaler would be imported - # and used; otherwise, auto fp16 will adopt mmcv's implementation. - from torch.cuda.amp import GradScaler -except ImportError: - pass - - -@HOOKS.register_module() -class OptimizerHook(Hook): - - def __init__(self, grad_clip=None): - self.grad_clip = grad_clip - - def clip_grads(self, params): - params = list( - filter(lambda p: p.requires_grad and p.grad is not None, params)) - if len(params) > 0: - return clip_grad.clip_grad_norm_(params, **self.grad_clip) - - def after_train_iter(self, runner): - runner.optimizer.zero_grad() - runner.outputs['loss'].backward() - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - - -@HOOKS.register_module() -class GradientCumulativeOptimizerHook(OptimizerHook): - """Optimizer Hook implements multi-iters gradient cumulating. - - Args: - cumulative_iters (int, optional): Num of gradient cumulative iters. 
- The optimizer will step every `cumulative_iters` iters. - Defaults to 1. - - Examples: - >>> # Use cumulative_iters to simulate a large batch size - >>> # It is helpful when the hardware cannot handle a large batch size. - >>> loader = DataLoader(data, batch_size=64) - >>> optim_hook = GradientCumulativeOptimizerHook(cumulative_iters=4) - >>> # almost equals to - >>> loader = DataLoader(data, batch_size=256) - >>> optim_hook = OptimizerHook() - """ - - def __init__(self, cumulative_iters=1, **kwargs): - super(GradientCumulativeOptimizerHook, self).__init__(**kwargs) - - assert isinstance(cumulative_iters, int) and cumulative_iters > 0, \ - f'cumulative_iters only accepts positive int, but got ' \ - f'{type(cumulative_iters)} instead.' - - self.cumulative_iters = cumulative_iters - self.divisible_iters = 0 - self.remainder_iters = 0 - self.initialized = False - - def has_batch_norm(self, module): - if isinstance(module, _BatchNorm): - return True - for m in module.children(): - if self.has_batch_norm(m): - return True - return False - - def _init(self, runner): - if runner.iter % self.cumulative_iters != 0: - runner.logger.warning( - 'Resume iter number is not divisible by cumulative_iters in ' - 'GradientCumulativeOptimizerHook, which means the gradient of ' - 'some iters is lost and the result may be influenced slightly.' - ) - - if self.has_batch_norm(runner.model) and self.cumulative_iters > 1: - runner.logger.warning( - 'GradientCumulativeOptimizerHook may slightly decrease ' - 'performance if the model has BatchNorm layers.') - - residual_iters = runner.max_iters - runner.iter - - self.divisible_iters = ( - residual_iters // self.cumulative_iters * self.cumulative_iters) - self.remainder_iters = residual_iters - self.divisible_iters - - self.initialized = True - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - runner.optimizer.zero_grad() - - -if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (using PyTorch's implementation). - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of GradScalar. - Defaults to 512. For Pytorch >= 1.6, mmcv uses official - implementation of GradScaler. If you use a dict version of - loss_scale to create GradScaler, please refer to: - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler - for the parameters. - - Examples: - >>> loss_scale = dict( - ... init_scale=65536.0, - ... growth_factor=2.0, - ... backoff_factor=0.5, - ... 
growth_interval=2000 - ... ) - >>> optimizer_hook = Fp16OptimizerHook(loss_scale=loss_scale) - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - self._scale_update_param = None - if loss_scale == 'dynamic': - self.loss_scaler = GradScaler() - elif isinstance(loss_scale, float): - self._scale_update_param = loss_scale - self.loss_scaler = GradScaler(init_scale=loss_scale) - elif isinstance(loss_scale, dict): - self.loss_scaler = GradScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training.""" - # wrap model mode to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer to - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler. - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients. - 3. Unscale the optimizer’s gradient tensors. - 4. Call optimizer.step() and update scale factor. - 5. Save loss_scaler state_dict for resume purpose. - """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - - self.loss_scaler.scale(runner.outputs['loss']).backward() - self.loss_scaler.unscale_(runner.optimizer) - # grad clip - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using PyTorch's implementation) implements - multi-iters gradient cumulating. - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. 
- """ - - def __init__(self, *args, **kwargs): - super(GradientCumulativeFp16OptimizerHook, - self).__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - - self.loss_scaler.scale(loss).backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - self.loss_scaler.unscale_(runner.optimizer) - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() - -else: - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (mmcv's implementation). - - The steps of fp16 optimizer is as follows. - 1. Scale the loss value. - 2. BP in the fp16 model. - 2. Copy gradients from fp16 model to fp32 weights. - 3. Update fp32 weights. - 4. Copy updated parameters from fp32 weights to fp16 model. - - Refer to https://arxiv.org/abs/1710.03740 for more details. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of LossScaler. - Defaults to 512. - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - if loss_scale == 'dynamic': - self.loss_scaler = LossScaler(mode='dynamic') - elif isinstance(loss_scale, float): - self.loss_scaler = LossScaler( - init_scale=loss_scale, mode='static') - elif isinstance(loss_scale, dict): - self.loss_scaler = LossScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training. - - 1. Make a master copy of fp32 weights for optimization. - 2. Convert the main model from fp32 to fp16. 
- """ - # keep a copy of fp32 weights - old_groups = runner.optimizer.param_groups - runner.optimizer.param_groups = copy.deepcopy( - runner.optimizer.param_groups) - state = defaultdict(dict) - p_map = { - old_p: p - for old_p, p in zip( - chain(*(g['params'] for g in old_groups)), - chain(*(g['params'] - for g in runner.optimizer.param_groups))) - } - for k, v in runner.optimizer.state.items(): - state[p_map[k]] = v - runner.optimizer.state = state - # convert model to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer `loss_scalar.py` - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients (fp16). - 3. Copy gradients from the model to the fp32 weight copy. - 4. Scale the gradients back and update the fp32 weight copy. - 5. Copy back the params from fp32 weight copy to the fp16 model. - 6. Save loss_scaler state_dict for resume purpose. 
- """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - # scale the loss value - scaled_loss = runner.outputs['loss'] * self.loss_scaler.loss_scale - scaled_loss.backward() - # copy fp16 grads in the model to fp32 params in the optimizer - - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - self.loss_scaler.update_scale(has_overflow) - if has_overflow: - runner.logger.warning('Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using mmcv implementation) implements multi- - iters gradient cumulating.""" - - def __init__(self, *args, **kwargs): - super(GradientCumulativeFp16OptimizerHook, - self).__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - - loss = runner.outputs['loss'] - loss = loss / loss_factor - - # scale the loss value - scaled_loss = loss * self.loss_scaler.loss_scale - scaled_loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - else: - runner.logger.warning( - 'Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - self.loss_scaler.update_scale(has_overflow) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = 
self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/profiler.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/profiler.py deleted file mode 100755 index b7023699..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/profiler.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Callable, List, Optional, Union - -import torch - -from ..dist_utils import master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class ProfilerHook(Hook): - """Profiler to analyze performance during training. - - PyTorch Profiler is a tool that allows the collection of the performance - metrics during the training. More details on Profiler can be found at - https://pytorch.org/docs/1.8.1/profiler.html#torch.profiler.profile - - Args: - by_epoch (bool): Profile performance by epoch or by iteration. - Default: True. - profile_iters (int): Number of iterations for profiling. - If ``by_epoch=True``, profile_iters indicates that they are the - first profile_iters epochs at the beginning of the - training, otherwise it indicates the first profile_iters - iterations. Default: 1. - activities (list[str]): List of activity groups (CPU, CUDA) to use in - profiling. Default: ['cpu', 'cuda']. - schedule (dict, optional): Config of generating the callable schedule. - if schedule is None, profiler will not add step markers into the - trace and table view. Default: None. - on_trace_ready (callable, dict): Either a handler or a dict of generate - handler. Default: None. - record_shapes (bool): Save information about operator's input shapes. - Default: False. - profile_memory (bool): Track tensor memory allocation/deallocation. - Default: False. - with_stack (bool): Record source information (file and line number) - for the ops. Default: False. - with_flops (bool): Use formula to estimate the FLOPS of specific - operators (matrix multiplication and 2D convolution). - Default: False. - json_trace_path (str, optional): Exports the collected trace in Chrome - JSON format. Default: None. - - Example: - >>> runner = ... # instantiate a Runner - >>> # tensorboard trace - >>> trace_config = dict(type='tb_trace', dir_name='work_dir') - >>> profiler_config = dict(on_trace_ready=trace_config) - >>> runner.register_profiler_hook(profiler_config) - >>> runner.run(data_loaders=[trainloader], workflow=[('train', 1)]) - """ - - def __init__(self, - by_epoch: bool = True, - profile_iters: int = 1, - activities: List[str] = ['cpu', 'cuda'], - schedule: Optional[dict] = None, - on_trace_ready: Optional[Union[Callable, dict]] = None, - record_shapes: bool = False, - profile_memory: bool = False, - with_stack: bool = False, - with_flops: bool = False, - json_trace_path: Optional[str] = None) -> None: - try: - from torch import profiler # torch version >= 1.8.1 - except ImportError: - raise ImportError('profiler is the new feature of torch1.8.1, ' - f'but your version is {torch.__version__}') - - assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean.' 
- self.by_epoch = by_epoch - - if profile_iters < 1: - raise ValueError('profile_iters should be greater than 0, but got ' - f'{profile_iters}') - self.profile_iters = profile_iters - - if not isinstance(activities, list): - raise ValueError( - f'activities should be list, but got {type(activities)}') - self.activities = [] - for activity in activities: - activity = activity.lower() - if activity == 'cpu': - self.activities.append(profiler.ProfilerActivity.CPU) - elif activity == 'cuda': - self.activities.append(profiler.ProfilerActivity.CUDA) - else: - raise ValueError( - f'activity should be "cpu" or "cuda", but got {activity}') - - if schedule is not None: - self.schedule = profiler.schedule(**schedule) - else: - self.schedule = None - - self.on_trace_ready = on_trace_ready - self.record_shapes = record_shapes - self.profile_memory = profile_memory - self.with_stack = with_stack - self.with_flops = with_flops - self.json_trace_path = json_trace_path - - @master_only - def before_run(self, runner): - if self.by_epoch and runner.max_epochs < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_epochs}') - - if not self.by_epoch and runner.max_iters < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_iters}') - - if callable(self.on_trace_ready): # handler - _on_trace_ready = self.on_trace_ready - elif isinstance(self.on_trace_ready, dict): # config of handler - trace_cfg = self.on_trace_ready.copy() - trace_type = trace_cfg.pop('type') # log_trace handler - if trace_type == 'log_trace': - - def _log_handler(prof): - print(prof.key_averages().table(**trace_cfg)) - - _on_trace_ready = _log_handler - elif trace_type == 'tb_trace': # tensorboard_trace handler - try: - import torch_tb_profiler # noqa: F401 - except ImportError: - raise ImportError('please run "pip install ' - 'torch-tb-profiler" to install ' - 'torch_tb_profiler') - _on_trace_ready = torch.profiler.tensorboard_trace_handler( - **trace_cfg) - else: - raise ValueError('trace_type should be "log_trace" or ' - f'"tb_trace", but got {trace_type}') - elif self.on_trace_ready is None: - _on_trace_ready = None # type: ignore - else: - raise ValueError('on_trace_ready should be handler, dict or None, ' - f'but got {type(self.on_trace_ready)}') - - if runner.max_epochs > 1: - warnings.warn(f'profiler will profile {runner.max_epochs} epochs ' - 'instead of 1 epoch. Since profiler will slow down ' - 'the training, it is recommended to train 1 epoch ' - 'with ProfilerHook and adjust your setting according' - ' to the profiler summary. 
During normal training ' - '(epoch > 1), you may disable the ProfilerHook.') - - self.profiler = torch.profiler.profile( - activities=self.activities, - schedule=self.schedule, - on_trace_ready=_on_trace_ready, - record_shapes=self.record_shapes, - profile_memory=self.profile_memory, - with_stack=self.with_stack, - with_flops=self.with_flops) - - self.profiler.__enter__() - runner.logger.info('profiler is profiling...') - - @master_only - def after_train_epoch(self, runner): - if self.by_epoch and runner.epoch == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) - - @master_only - def after_train_iter(self, runner): - self.profiler.step() - if not self.by_epoch and runner.iter == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/sampler_seed.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/sampler_seed.py deleted file mode 100755 index ee0dc6bd..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/sampler_seed.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class DistSamplerSeedHook(Hook): - """Data-loading sampler for distributed training. - - When distributed training, it is only useful in conjunction with - :obj:`EpochBasedRunner`, while :obj:`IterBasedRunner` achieves the same - purpose with :obj:`IterLoader`. - """ - - def before_epoch(self, runner): - if hasattr(runner.data_loader.sampler, 'set_epoch'): - # in case the data loader uses `SequentialSampler` in Pytorch - runner.data_loader.sampler.set_epoch(runner.epoch) - elif hasattr(runner.data_loader.batch_sampler.sampler, 'set_epoch'): - # batch sampler in pytorch warps the sampler as its attributes. - runner.data_loader.batch_sampler.sampler.set_epoch(runner.epoch) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/hooks/sync_buffer.py b/cv/detection/yolof/pytorch/mmcv/runner/hooks/sync_buffer.py deleted file mode 100755 index 6376b7ff..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/hooks/sync_buffer.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..dist_utils import allreduce_params -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class SyncBuffersHook(Hook): - """Synchronize model buffers such as running_mean and running_var in BN at - the end of each epoch. - - Args: - distributed (bool): Whether distributed training is used. It is - effective only for distributed training. Defaults to True. - """ - - def __init__(self, distributed=True): - self.distributed = distributed - - def after_epoch(self, runner): - """All-reduce model buffers at the end of each epoch.""" - if self.distributed: - allreduce_params(runner.model.buffers()) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/iter_based_runner.py b/cv/detection/yolof/pytorch/mmcv/runner/iter_based_runner.py deleted file mode 100755 index 9892b07a..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/iter_based_runner.py +++ /dev/null @@ -1,273 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -import platform -import shutil -import time -import warnings - -import torch -from torch.optim import Optimizer - -import mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .hooks import IterTimerHook -from .utils import get_host_info - - -class IterLoader: - - def __init__(self, dataloader): - self._dataloader = dataloader - self.iter_loader = iter(self._dataloader) - self._epoch = 0 - - @property - def epoch(self): - return self._epoch - - def __next__(self): - try: - data = next(self.iter_loader) - except StopIteration: - self._epoch += 1 - if hasattr(self._dataloader.sampler, 'set_epoch'): - self._dataloader.sampler.set_epoch(self._epoch) - time.sleep(2) # Prevent possible deadlock during epoch transition - self.iter_loader = iter(self._dataloader) - data = next(self.iter_loader) - - return data - - def __len__(self): - return len(self._dataloader) - - -@RUNNERS.register_module() -class IterBasedRunner(BaseRunner): - """Iteration-based Runner. - - This runner train models iteration by iteration. - """ - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._epoch = data_loader.epoch - data_batch = next(data_loader) - self.call_hook('before_train_iter') - outputs = self.model.train_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.train_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_train_iter') - self._inner_iter += 1 - self._iter += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - data_batch = next(data_loader) - self.call_hook('before_val_iter') - outputs = self.model.val_step(data_batch, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.val_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_val_iter') - self._inner_iter += 1 - - def run(self, data_loaders, workflow, max_iters=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, iters) to specify the - running order and iterations. E.g, [('train', 10000), - ('val', 1000)] means running 10000 iterations for training and - 1000 iterations for validation, iteratively. 
- """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_iters is not None: - warnings.warn( - 'setting max_iters in run is deprecated, ' - 'please set max_iters in runner_config', DeprecationWarning) - self._max_iters = max_iters - assert self._max_iters is not None, ( - 'max_iters must be specified during instantiation') - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d iters', workflow, - self._max_iters) - self.call_hook('before_run') - - iter_loaders = [IterLoader(x) for x in data_loaders] - - self.call_hook('before_epoch') - - while self.iter < self._max_iters: - for i, flow in enumerate(workflow): - self._inner_iter = 0 - mode, iters = flow - if not isinstance(mode, str) or not hasattr(self, mode): - raise ValueError( - 'runner has no method named "{}" to run a workflow'. - format(mode)) - iter_runner = getattr(self, mode) - for _ in range(iters): - if mode == 'train' and self.iter >= self._max_iters: - break - iter_runner(iter_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_epoch') - self.call_hook('after_run') - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - """Resume model from checkpoint. - - Args: - checkpoint (str): Checkpoint to resume from. - resume_optimizer (bool, optional): Whether resume the optimizer(s) - if the checkpoint file includes optimizer(s). Default to True. - map_location (str, optional): Same as :func:`torch.load`. - Default to 'default'. - """ - if map_location == 'default': - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - self._inner_iter = checkpoint['meta']['iter'] - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}') - - def save_checkpoint(self, - out_dir, - filename_tmpl='iter_{}.pth', - meta=None, - save_optimizer=True, - create_symlink=True): - """Save checkpoint to file. - - Args: - out_dir (str): Directory to save checkpoint files. - filename_tmpl (str, optional): Checkpoint file template. - Defaults to 'iter_{}.pth'. - meta (dict, optional): Metadata to be saved in checkpoint. - Defaults to None. - save_optimizer (bool, optional): Whether save optimizer. - Defaults to True. - create_symlink (bool, optional): Whether create symlink to the - latest checkpoint file. Defaults to True. 
- """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. - # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.iter + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - def register_training_hooks(self, - lr_config, - optimizer_config=None, - checkpoint_config=None, - log_config=None, - momentum_config=None, - custom_hooks_config=None): - """Register default hooks for iter-based training. - - Checkpoint hook, optimizer stepper hook and logger hooks will be set to - `by_epoch=False` by default. - - Default hooks include: - - +----------------------+-------------------------+ - | Hooks | Priority | - +======================+=========================+ - | LrUpdaterHook | VERY_HIGH (10) | - +----------------------+-------------------------+ - | MomentumUpdaterHook | HIGH (30) | - +----------------------+-------------------------+ - | OptimizerStepperHook | ABOVE_NORMAL (40) | - +----------------------+-------------------------+ - | CheckpointSaverHook | NORMAL (50) | - +----------------------+-------------------------+ - | IterTimerHook | LOW (70) | - +----------------------+-------------------------+ - | LoggerHook(s) | VERY_LOW (90) | - +----------------------+-------------------------+ - | CustomHook(s) | defaults to NORMAL (50) | - +----------------------+-------------------------+ - - If custom hooks have same priority with default hooks, custom hooks - will be triggered after default hooks. - """ - if checkpoint_config is not None: - checkpoint_config.setdefault('by_epoch', False) - if lr_config is not None: - lr_config.setdefault('by_epoch', False) - if log_config is not None: - for info in log_config['hooks']: - info.setdefault('by_epoch', False) - super(IterBasedRunner, self).register_training_hooks( - lr_config=lr_config, - momentum_config=momentum_config, - optimizer_config=optimizer_config, - checkpoint_config=checkpoint_config, - log_config=log_config, - timer_config=IterTimerHook(), - custom_hooks_config=custom_hooks_config) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/log_buffer.py b/cv/detection/yolof/pytorch/mmcv/runner/log_buffer.py deleted file mode 100755 index d949e294..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/log_buffer.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections import OrderedDict - -import numpy as np - - -class LogBuffer: - - def __init__(self): - self.val_history = OrderedDict() - self.n_history = OrderedDict() - self.output = OrderedDict() - self.ready = False - - def clear(self): - self.val_history.clear() - self.n_history.clear() - self.clear_output() - - def clear_output(self): - self.output.clear() - self.ready = False - - def update(self, vars, count=1): - assert isinstance(vars, dict) - for key, var in vars.items(): - if key not in self.val_history: - self.val_history[key] = [] - self.n_history[key] = [] - self.val_history[key].append(var) - self.n_history[key].append(count) - - def average(self, n=0): - """Average latest n values or all values.""" - assert n >= 0 - for key in self.val_history: - values = np.array(self.val_history[key][-n:]) - nums = np.array(self.n_history[key][-n:]) - avg = np.sum(values * nums) / np.sum(nums) - self.output[key] = avg - self.ready = True diff --git a/cv/detection/yolof/pytorch/mmcv/runner/optimizer/__init__.py b/cv/detection/yolof/pytorch/mmcv/runner/optimizer/__init__.py deleted file mode 100755 index 53c34d04..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/optimizer/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import (OPTIMIZER_BUILDERS, OPTIMIZERS, build_optimizer, - build_optimizer_constructor) -from .default_constructor import DefaultOptimizerConstructor - -__all__ = [ - 'OPTIMIZER_BUILDERS', 'OPTIMIZERS', 'DefaultOptimizerConstructor', - 'build_optimizer', 'build_optimizer_constructor' -] diff --git a/cv/detection/yolof/pytorch/mmcv/runner/optimizer/builder.py b/cv/detection/yolof/pytorch/mmcv/runner/optimizer/builder.py deleted file mode 100755 index f9234eed..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/optimizer/builder.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import inspect - -import torch - -from ...utils import Registry, build_from_cfg - -OPTIMIZERS = Registry('optimizer') -OPTIMIZER_BUILDERS = Registry('optimizer builder') - - -def register_torch_optimizers(): - torch_optimizers = [] - for module_name in dir(torch.optim): - if module_name.startswith('__'): - continue - _optim = getattr(torch.optim, module_name) - if inspect.isclass(_optim) and issubclass(_optim, - torch.optim.Optimizer): - OPTIMIZERS.register_module()(_optim) - torch_optimizers.append(module_name) - return torch_optimizers - - -TORCH_OPTIMIZERS = register_torch_optimizers() - - -def build_optimizer_constructor(cfg): - return build_from_cfg(cfg, OPTIMIZER_BUILDERS) - - -def build_optimizer(model, cfg): - optimizer_cfg = copy.deepcopy(cfg) - constructor_type = optimizer_cfg.pop('constructor', - 'DefaultOptimizerConstructor') - paramwise_cfg = optimizer_cfg.pop('paramwise_cfg', None) - optim_constructor = build_optimizer_constructor( - dict( - type=constructor_type, - optimizer_cfg=optimizer_cfg, - paramwise_cfg=paramwise_cfg)) - optimizer = optim_constructor(model) - return optimizer diff --git a/cv/detection/yolof/pytorch/mmcv/runner/optimizer/default_constructor.py b/cv/detection/yolof/pytorch/mmcv/runner/optimizer/default_constructor.py deleted file mode 100755 index 30e6577c..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/optimizer/default_constructor.py +++ /dev/null @@ -1,250 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch -from torch.nn import GroupNorm, LayerNorm - -from mmcv.utils import _BatchNorm, _InstanceNorm, build_from_cfg, is_list_of -from mmcv.utils.ext_loader import check_ops_exist -from .builder import OPTIMIZER_BUILDERS, OPTIMIZERS - - -@OPTIMIZER_BUILDERS.register_module() -class DefaultOptimizerConstructor: - """Default constructor for optimizers. - - By default each parameter share the same optimizer settings, and we - provide an argument ``paramwise_cfg`` to specify parameter-wise settings. - It is a dict and may contain the following fields: - - - ``custom_keys`` (dict): Specified parameters-wise settings by keys. If - one of the keys in ``custom_keys`` is a substring of the name of one - parameter, then the setting of the parameter will be specified by - ``custom_keys[key]`` and other setting like ``bias_lr_mult`` etc. will - be ignored. It should be noted that the aforementioned ``key`` is the - longest key that is a substring of the name of the parameter. If there - are multiple matched keys with the same length, then the key with lower - alphabet order will be chosen. - ``custom_keys[key]`` should be a dict and may contain fields ``lr_mult`` - and ``decay_mult``. See Example 2 below. - - ``bias_lr_mult`` (float): It will be multiplied to the learning - rate for all bias parameters (except for those in normalization - layers and offset layers of DCN). - - ``bias_decay_mult`` (float): It will be multiplied to the weight - decay for all bias parameters (except for those in - normalization layers, depthwise conv layers, offset layers of DCN). - - ``norm_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of normalization - layers. - - ``dwconv_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of depthwise conv - layers. - - ``dcn_offset_lr_mult`` (float): It will be multiplied to the learning - rate for parameters of offset layer in the deformable convs - of a model. - - ``bypass_duplicate`` (bool): If true, the duplicate parameters - would not be added into optimizer. Default: False. - - Note: - 1. If the option ``dcn_offset_lr_mult`` is used, the constructor will - override the effect of ``bias_lr_mult`` in the bias of offset - layer. So be careful when using both ``bias_lr_mult`` and - ``dcn_offset_lr_mult``. If you wish to apply both of them to the - offset layer in deformable convs, set ``dcn_offset_lr_mult`` - to the original ``dcn_offset_lr_mult`` * ``bias_lr_mult``. - 2. If the option ``dcn_offset_lr_mult`` is used, the constructor will - apply it to all the DCN layers in the model. So be careful when - the model contains multiple DCN layers in places other than - backbone. - - Args: - model (:obj:`nn.Module`): The model with parameters to be optimized. - optimizer_cfg (dict): The config dict of the optimizer. - Positional fields are - - - `type`: class name of the optimizer. - - Optional fields are - - - any arguments of the corresponding optimizer type, e.g., - lr, weight_decay, momentum, etc. - paramwise_cfg (dict, optional): Parameter-wise options. - - Example 1: - >>> model = torch.nn.modules.Conv1d(1, 1, 1) - >>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9, - >>> weight_decay=0.0001) - >>> paramwise_cfg = dict(norm_decay_mult=0.) 
- >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - - Example 2: - >>> # assume model have attribute model.backbone and model.cls_head - >>> optimizer_cfg = dict(type='SGD', lr=0.01, weight_decay=0.95) - >>> paramwise_cfg = dict(custom_keys={ - '.backbone': dict(lr_mult=0.1, decay_mult=0.9)}) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - >>> # Then the `lr` and `weight_decay` for model.backbone is - >>> # (0.01 * 0.1, 0.95 * 0.9). `lr` and `weight_decay` for - >>> # model.cls_head is (0.01, 0.95). - """ - - def __init__(self, optimizer_cfg, paramwise_cfg=None): - if not isinstance(optimizer_cfg, dict): - raise TypeError('optimizer_cfg should be a dict', - f'but got {type(optimizer_cfg)}') - self.optimizer_cfg = optimizer_cfg - self.paramwise_cfg = {} if paramwise_cfg is None else paramwise_cfg - self.base_lr = optimizer_cfg.get('lr', None) - self.base_wd = optimizer_cfg.get('weight_decay', None) - self._validate_cfg() - - def _validate_cfg(self): - if not isinstance(self.paramwise_cfg, dict): - raise TypeError('paramwise_cfg should be None or a dict, ' - f'but got {type(self.paramwise_cfg)}') - - if 'custom_keys' in self.paramwise_cfg: - if not isinstance(self.paramwise_cfg['custom_keys'], dict): - raise TypeError( - 'If specified, custom_keys must be a dict, ' - f'but got {type(self.paramwise_cfg["custom_keys"])}') - if self.base_wd is None: - for key in self.paramwise_cfg['custom_keys']: - if 'decay_mult' in self.paramwise_cfg['custom_keys'][key]: - raise ValueError('base_wd should not be None') - - # get base lr and weight decay - # weight_decay must be explicitly specified if mult is specified - if ('bias_decay_mult' in self.paramwise_cfg - or 'norm_decay_mult' in self.paramwise_cfg - or 'dwconv_decay_mult' in self.paramwise_cfg): - if self.base_wd is None: - raise ValueError('base_wd should not be None') - - def _is_in(self, param_group, param_group_list): - assert is_list_of(param_group_list, dict) - param = set(param_group['params']) - param_set = set() - for group in param_group_list: - param_set.update(set(group['params'])) - - return not param.isdisjoint(param_set) - - def add_params(self, params, module, prefix='', is_dcn_module=None): - """Add all parameters of module to the params list. - - The parameters of the given module will be added to the list of param - groups, with specific rules defined by paramwise_cfg. - - Args: - params (list[dict]): A list of param groups, it will be modified - in place. - module (nn.Module): The module to be added. - prefix (str): The prefix of the module - is_dcn_module (int|float|None): If the current module is a - submodule of DCN, `is_dcn_module` will be passed to - control conv_offset layer's learning rate. Defaults to None. - """ - # get param-wise options - custom_keys = self.paramwise_cfg.get('custom_keys', {}) - # first sort with alphabet order and then sort with reversed len of str - sorted_keys = sorted(sorted(custom_keys.keys()), key=len, reverse=True) - - bias_lr_mult = self.paramwise_cfg.get('bias_lr_mult', 1.) - bias_decay_mult = self.paramwise_cfg.get('bias_decay_mult', 1.) - norm_decay_mult = self.paramwise_cfg.get('norm_decay_mult', 1.) - dwconv_decay_mult = self.paramwise_cfg.get('dwconv_decay_mult', 1.) - bypass_duplicate = self.paramwise_cfg.get('bypass_duplicate', False) - dcn_offset_lr_mult = self.paramwise_cfg.get('dcn_offset_lr_mult', 1.) 
- - # special rules for norm layers and depth-wise conv layers - is_norm = isinstance(module, - (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm)) - is_dwconv = ( - isinstance(module, torch.nn.Conv2d) - and module.in_channels == module.groups) - - for name, param in module.named_parameters(recurse=False): - param_group = {'params': [param]} - if not param.requires_grad: - params.append(param_group) - continue - if bypass_duplicate and self._is_in(param_group, params): - warnings.warn(f'{prefix} is duplicate. It is skipped since ' - f'bypass_duplicate={bypass_duplicate}') - continue - # if the parameter match one of the custom keys, ignore other rules - is_custom = False - for key in sorted_keys: - if key in f'{prefix}.{name}': - is_custom = True - lr_mult = custom_keys[key].get('lr_mult', 1.) - param_group['lr'] = self.base_lr * lr_mult - if self.base_wd is not None: - decay_mult = custom_keys[key].get('decay_mult', 1.) - param_group['weight_decay'] = self.base_wd * decay_mult - break - - if not is_custom: - # bias_lr_mult affects all bias parameters - # except for norm.bias dcn.conv_offset.bias - if name == 'bias' and not (is_norm or is_dcn_module): - param_group['lr'] = self.base_lr * bias_lr_mult - - if (prefix.find('conv_offset') != -1 and is_dcn_module - and isinstance(module, torch.nn.Conv2d)): - # deal with both dcn_offset's bias & weight - param_group['lr'] = self.base_lr * dcn_offset_lr_mult - - # apply weight decay policies - if self.base_wd is not None: - # norm decay - if is_norm: - param_group[ - 'weight_decay'] = self.base_wd * norm_decay_mult - # depth-wise conv - elif is_dwconv: - param_group[ - 'weight_decay'] = self.base_wd * dwconv_decay_mult - # bias lr and decay - elif name == 'bias' and not is_dcn_module: - # TODO: current bias_decay_mult will have affect on DCN - param_group[ - 'weight_decay'] = self.base_wd * bias_decay_mult - params.append(param_group) - - # if check_ops_exist(): - # from mmcv.ops import DeformConv2d, ModulatedDeformConv2d - # is_dcn_module = isinstance(module, - # (DeformConv2d, ModulatedDeformConv2d)) - # else: - # is_dcn_module = False - is_dcn_module = False - for child_name, child_mod in module.named_children(): - child_prefix = f'{prefix}.{child_name}' if prefix else child_name - self.add_params( - params, - child_mod, - prefix=child_prefix, - is_dcn_module=is_dcn_module) - - def __call__(self, model): - if hasattr(model, 'module'): - model = model.module - - optimizer_cfg = self.optimizer_cfg.copy() - # if no paramwise option is specified, just use the global setting - if not self.paramwise_cfg: - optimizer_cfg['params'] = model.parameters() - return build_from_cfg(optimizer_cfg, OPTIMIZERS) - - # set param-wise lr and weight decay recursively - params = [] - self.add_params(params, model) - optimizer_cfg['params'] = params - - return build_from_cfg(optimizer_cfg, OPTIMIZERS) diff --git a/cv/detection/yolof/pytorch/mmcv/runner/priority.py b/cv/detection/yolof/pytorch/mmcv/runner/priority.py deleted file mode 100755 index 64cc4e3a..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/priority.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import Enum - - -class Priority(Enum): - """Hook priority levels. 
- - +--------------+------------+ - | Level | Value | - +==============+============+ - | HIGHEST | 0 | - +--------------+------------+ - | VERY_HIGH | 10 | - +--------------+------------+ - | HIGH | 30 | - +--------------+------------+ - | ABOVE_NORMAL | 40 | - +--------------+------------+ - | NORMAL | 50 | - +--------------+------------+ - | BELOW_NORMAL | 60 | - +--------------+------------+ - | LOW | 70 | - +--------------+------------+ - | VERY_LOW | 90 | - +--------------+------------+ - | LOWEST | 100 | - +--------------+------------+ - """ - - HIGHEST = 0 - VERY_HIGH = 10 - HIGH = 30 - ABOVE_NORMAL = 40 - NORMAL = 50 - BELOW_NORMAL = 60 - LOW = 70 - VERY_LOW = 90 - LOWEST = 100 - - -def get_priority(priority): - """Get priority value. - - Args: - priority (int or str or :obj:`Priority`): Priority. - - Returns: - int: The priority value. - """ - if isinstance(priority, int): - if priority < 0 or priority > 100: - raise ValueError('priority must be between 0 and 100') - return priority - elif isinstance(priority, Priority): - return priority.value - elif isinstance(priority, str): - return Priority[priority.upper()].value - else: - raise TypeError('priority must be an integer or Priority enum value') diff --git a/cv/detection/yolof/pytorch/mmcv/runner/utils.py b/cv/detection/yolof/pytorch/mmcv/runner/utils.py deleted file mode 100755 index 144d11e1..00000000 --- a/cv/detection/yolof/pytorch/mmcv/runner/utils.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import random -import sys -import time -import warnings -from getpass import getuser -from socket import gethostname - -import numpy as np -import torch - -import mmcv - - -def get_host_info(): - """Get hostname and username. - - Return empty string if exception raised, e.g. ``getpass.getuser()`` will - lead to error in docker container - """ - host = '' - try: - host = f'{getuser()}@{gethostname()}' - except Exception as e: - warnings.warn(f'Host or user not found: {str(e)}') - finally: - return host - - -def get_time_str(): - return time.strftime('%Y%m%d_%H%M%S', time.localtime()) - - -def obj_from_dict(info, parent=None, default_args=None): - """Initialize an object from dict. - - The dict must contain the key "type", which indicates the object type, it - can be either a string or type, such as "list" or ``list``. Remaining - fields are treated as the arguments for constructing the object. - - Args: - info (dict): Object types and arguments. - parent (:class:`module`): Module which may containing expected object - classes. - default_args (dict, optional): Default arguments for initializing the - object. - - Returns: - any type: Object built from the dict. - """ - assert isinstance(info, dict) and 'type' in info - assert isinstance(default_args, dict) or default_args is None - args = info.copy() - obj_type = args.pop('type') - if mmcv.is_str(obj_type): - if parent is not None: - obj_type = getattr(parent, obj_type) - else: - obj_type = sys.modules[obj_type] - elif not isinstance(obj_type, type): - raise TypeError('type must be a str or valid type, but ' - f'got {type(obj_type)}') - if default_args is not None: - for name, value in default_args.items(): - args.setdefault(name, value) - return obj_type(**args) - - -def set_random_seed(seed, deterministic=False, use_rank_shift=False): - """Set random seed. - - Args: - seed (int): Seed to be used. 
- deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - rank_shift (bool): Whether to add rank number to the random seed to - have different random seed in different threads. Default: False. - """ - if use_rank_shift: - rank, _ = mmcv.runner.get_dist_info() - seed += rank - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False diff --git a/cv/detection/yolof/pytorch/mmcv/utils/__init__.py b/cv/detection/yolof/pytorch/mmcv/utils/__init__.py deleted file mode 100755 index 378a0068..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/__init__.py +++ /dev/null @@ -1,69 +0,0 @@ -# flake8: noqa -# Copyright (c) OpenMMLab. All rights reserved. -from .config import Config, ConfigDict, DictAction -from .misc import (check_prerequisites, concat_list, deprecated_api_warning, - has_method, import_modules_from_strings, is_list_of, - is_method_overridden, is_seq_of, is_str, is_tuple_of, - iter_cast, list_cast, requires_executable, requires_package, - slice_list, to_1tuple, to_2tuple, to_3tuple, to_4tuple, - to_ntuple, tuple_cast) -from .path import (check_file_exist, fopen, is_filepath, mkdir_or_exist, - scandir, symlink) -from .progressbar import (ProgressBar, track_iter_progress, - track_parallel_progress, track_progress) -from .testing import (assert_attrs_equal, assert_dict_contains_subset, - assert_dict_has_keys, assert_is_norm_layer, - assert_keys_equal, assert_params_all_zeros, - check_python_script) -from .timer import Timer, TimerError, check_time -from .version_utils import digit_version, get_git_hash - -try: - import torch -except ImportError: - __all__ = [ - 'Config', 'ConfigDict', 'DictAction', 'is_str', 'iter_cast', - 'list_cast', 'tuple_cast', 'is_seq_of', 'is_list_of', 'is_tuple_of', - 'slice_list', 'concat_list', 'check_prerequisites', 'requires_package', - 'requires_executable', 'is_filepath', 'fopen', 'check_file_exist', - 'mkdir_or_exist', 'symlink', 'scandir', 'ProgressBar', - 'track_progress', 'track_iter_progress', 'track_parallel_progress', - 'Timer', 'TimerError', 'check_time', 'deprecated_api_warning', - 'digit_version', 'get_git_hash', 'import_modules_from_strings', - 'assert_dict_contains_subset', 'assert_attrs_equal', - 'assert_dict_has_keys', 'assert_keys_equal', 'check_python_script', - 'to_1tuple', 'to_2tuple', 'to_3tuple', 'to_4tuple', 'to_ntuple', - 'is_method_overridden', 'has_method' - ] -else: - from .env import collect_env - from .logging import get_logger, print_log - from .parrots_jit import jit, skip_no_elena - from .parrots_wrapper import ( - TORCH_VERSION, BuildExtension, CppExtension, CUDAExtension, DataLoader, - PoolDataLoader, SyncBatchNorm, _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, - _AvgPoolNd, _BatchNorm, _ConvNd, _ConvTransposeMixin, _InstanceNorm, - _MaxPoolNd, get_build_config, is_rocm_pytorch, _get_cuda_home) - from .registry import Registry, build_from_cfg - from .trace import is_jit_tracing - __all__ = [ - 'Config', 'ConfigDict', 'DictAction', 'collect_env', 'get_logger', - 'print_log', 'is_str', 'iter_cast', 'list_cast', 'tuple_cast', - 'is_seq_of', 'is_list_of', 'is_tuple_of', 'slice_list', 'concat_list', - 'check_prerequisites', 'requires_package', 'requires_executable', - 
'is_filepath', 'fopen', 'check_file_exist', 'mkdir_or_exist', - 'symlink', 'scandir', 'ProgressBar', 'track_progress', - 'track_iter_progress', 'track_parallel_progress', 'Registry', - 'build_from_cfg', 'Timer', 'TimerError', 'check_time', 'SyncBatchNorm', - '_AdaptiveAvgPoolNd', '_AdaptiveMaxPoolNd', '_AvgPoolNd', '_BatchNorm', - '_ConvNd', '_ConvTransposeMixin', '_InstanceNorm', '_MaxPoolNd', - 'get_build_config', 'BuildExtension', 'CppExtension', 'CUDAExtension', - 'DataLoader', 'PoolDataLoader', 'TORCH_VERSION', - 'deprecated_api_warning', 'digit_version', 'get_git_hash', - 'import_modules_from_strings', 'jit', 'skip_no_elena', - 'assert_dict_contains_subset', 'assert_attrs_equal', - 'assert_dict_has_keys', 'assert_keys_equal', 'assert_is_norm_layer', - 'assert_params_all_zeros', 'check_python_script', - 'is_method_overridden', 'is_jit_tracing', 'is_rocm_pytorch', - '_get_cuda_home', 'has_method' - ] diff --git a/cv/detection/yolof/pytorch/mmcv/utils/config.py b/cv/detection/yolof/pytorch/mmcv/utils/config.py deleted file mode 100755 index db9b35fa..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/config.py +++ /dev/null @@ -1,689 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import ast -import copy -import os -import os.path as osp -import platform -import shutil -import sys -import tempfile -import uuid -import warnings -from argparse import Action, ArgumentParser -from collections import abc -from importlib import import_module - -from addict import Dict -from yapf.yapflib.yapf_api import FormatCode - -from .misc import import_modules_from_strings -from .path import check_file_exist - -if platform.system() == 'Windows': - import regex as re -else: - import re - -BASE_KEY = '_base_' -DELETE_KEY = '_delete_' -DEPRECATION_KEY = '_deprecation_' -RESERVED_KEYS = ['filename', 'text', 'pretty_text'] - - -class ConfigDict(Dict): - - def __missing__(self, name): - raise KeyError(name) - - def __getattr__(self, name): - try: - value = super(ConfigDict, self).__getattr__(name) - except KeyError: - ex = AttributeError(f"'{self.__class__.__name__}' object has no " - f"attribute '{name}'") - except Exception as e: - ex = e - else: - return value - raise ex - - -def add_args(parser, cfg, prefix=''): - for k, v in cfg.items(): - if isinstance(v, str): - parser.add_argument('--' + prefix + k) - elif isinstance(v, int): - parser.add_argument('--' + prefix + k, type=int) - elif isinstance(v, float): - parser.add_argument('--' + prefix + k, type=float) - elif isinstance(v, bool): - parser.add_argument('--' + prefix + k, action='store_true') - elif isinstance(v, dict): - add_args(parser, v, prefix + k + '.') - elif isinstance(v, abc.Iterable): - parser.add_argument('--' + prefix + k, type=type(v[0]), nargs='+') - else: - print(f'cannot parse key {prefix + k} of type {type(v)}') - return parser - - -class Config: - """A facility for config and config files. - - It supports common file formats as configs: python/json/yaml. The interface - is the same as a dict object and also allows access config values as - attributes. 
- - Example: - >>> cfg = Config(dict(a=1, b=dict(b1=[0, 1]))) - >>> cfg.a - 1 - >>> cfg.b - {'b1': [0, 1]} - >>> cfg.b.b1 - [0, 1] - >>> cfg = Config.fromfile('tests/data/config/a.py') - >>> cfg.filename - "/home/kchen/projects/mmcv/tests/data/config/a.py" - >>> cfg.item4 - 'test' - >>> cfg - "Config [path: /home/kchen/projects/mmcv/tests/data/config/a.py]: " - "{'item1': [1, 2], 'item2': {'a': 0}, 'item3': True, 'item4': 'test'}" - """ - - @staticmethod - def _validate_py_syntax(filename): - with open(filename, 'r', encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - content = f.read() - try: - ast.parse(content) - except SyntaxError as e: - raise SyntaxError('There are syntax errors in config ' - f'file {filename}: {e}') - - @staticmethod - def _substitute_predefined_vars(filename, temp_config_name): - file_dirname = osp.dirname(filename) - file_basename = osp.basename(filename) - file_basename_no_extension = osp.splitext(file_basename)[0] - file_extname = osp.splitext(filename)[1] - support_templates = dict( - fileDirname=file_dirname, - fileBasename=file_basename, - fileBasenameNoExtension=file_basename_no_extension, - fileExtname=file_extname) - with open(filename, 'r', encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - config_file = f.read() - for key, value in support_templates.items(): - regexp = r'\{\{\s*' + str(key) + r'\s*\}\}' - value = value.replace('\\', '/') - config_file = re.sub(regexp, value, config_file) - with open(temp_config_name, 'w', encoding='utf-8') as tmp_config_file: - tmp_config_file.write(config_file) - - @staticmethod - def _pre_substitute_base_vars(filename, temp_config_name): - """Substitute base variable placehoders to string, so that parsing - would work.""" - with open(filename, 'r', encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - config_file = f.read() - base_var_dict = {} - regexp = r'\{\{\s*' + BASE_KEY + r'\.([\w\.]+)\s*\}\}' - base_vars = set(re.findall(regexp, config_file)) - for base_var in base_vars: - randstr = f'_{base_var}_{uuid.uuid4().hex.lower()[:6]}' - base_var_dict[randstr] = base_var - regexp = r'\{\{\s*' + BASE_KEY + r'\.' 
+ base_var + r'\s*\}\}' - config_file = re.sub(regexp, f'"{randstr}"', config_file) - with open(temp_config_name, 'w', encoding='utf-8') as tmp_config_file: - tmp_config_file.write(config_file) - return base_var_dict - - @staticmethod - def _substitute_base_vars(cfg, base_var_dict, base_cfg): - """Substitute variable strings to their actual values.""" - cfg = copy.deepcopy(cfg) - - if isinstance(cfg, dict): - for k, v in cfg.items(): - if isinstance(v, str) and v in base_var_dict: - new_v = base_cfg - for new_k in base_var_dict[v].split('.'): - new_v = new_v[new_k] - cfg[k] = new_v - elif isinstance(v, (list, tuple, dict)): - cfg[k] = Config._substitute_base_vars( - v, base_var_dict, base_cfg) - elif isinstance(cfg, tuple): - cfg = tuple( - Config._substitute_base_vars(c, base_var_dict, base_cfg) - for c in cfg) - elif isinstance(cfg, list): - cfg = [ - Config._substitute_base_vars(c, base_var_dict, base_cfg) - for c in cfg - ] - elif isinstance(cfg, str) and cfg in base_var_dict: - new_v = base_cfg - for new_k in base_var_dict[cfg].split('.'): - new_v = new_v[new_k] - cfg = new_v - - return cfg - - @staticmethod - def _file2dict(filename, use_predefined_variables=True): - filename = osp.abspath(osp.expanduser(filename)) - check_file_exist(filename) - fileExtname = osp.splitext(filename)[1] - if fileExtname not in ['.py', '.json', '.yaml', '.yml']: - raise IOError('Only py/yml/yaml/json type are supported now!') - - with tempfile.TemporaryDirectory() as temp_config_dir: - temp_config_file = tempfile.NamedTemporaryFile( - dir=temp_config_dir, suffix=fileExtname) - if platform.system() == 'Windows': - temp_config_file.close() - temp_config_name = osp.basename(temp_config_file.name) - # Substitute predefined variables - if use_predefined_variables: - Config._substitute_predefined_vars(filename, - temp_config_file.name) - else: - shutil.copyfile(filename, temp_config_file.name) - # Substitute base variables from placeholders to strings - base_var_dict = Config._pre_substitute_base_vars( - temp_config_file.name, temp_config_file.name) - - if filename.endswith('.py'): - temp_module_name = osp.splitext(temp_config_name)[0] - sys.path.insert(0, temp_config_dir) - Config._validate_py_syntax(filename) - mod = import_module(temp_module_name) - sys.path.pop(0) - cfg_dict = { - name: value - for name, value in mod.__dict__.items() - if not name.startswith('__') - } - # delete imported module - del sys.modules[temp_module_name] - elif filename.endswith(('.yml', '.yaml', '.json')): - import mmcv - cfg_dict = mmcv.load(temp_config_file.name) - # close temp file - temp_config_file.close() - - # check deprecation information - if DEPRECATION_KEY in cfg_dict: - deprecation_info = cfg_dict.pop(DEPRECATION_KEY) - warning_msg = f'The config file {filename} will be deprecated ' \ - 'in the future.' - if 'expected' in deprecation_info: - warning_msg += f' Please use {deprecation_info["expected"]} ' \ - 'instead.' 
- if 'reference' in deprecation_info: - warning_msg += ' More information can be found at ' \ - f'{deprecation_info["reference"]}' - warnings.warn(warning_msg) - - cfg_text = filename + '\n' - with open(filename, 'r', encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - cfg_text += f.read() - - if BASE_KEY in cfg_dict: - cfg_dir = osp.dirname(filename) - base_filename = cfg_dict.pop(BASE_KEY) - base_filename = base_filename if isinstance( - base_filename, list) else [base_filename] - - cfg_dict_list = list() - cfg_text_list = list() - for f in base_filename: - _cfg_dict, _cfg_text = Config._file2dict(osp.join(cfg_dir, f)) - cfg_dict_list.append(_cfg_dict) - cfg_text_list.append(_cfg_text) - - base_cfg_dict = dict() - for c in cfg_dict_list: - duplicate_keys = base_cfg_dict.keys() & c.keys() - if len(duplicate_keys) > 0: - raise KeyError('Duplicate key is not allowed among bases. ' - f'Duplicate keys: {duplicate_keys}') - base_cfg_dict.update(c) - - # Substitute base variables from strings to their actual values - cfg_dict = Config._substitute_base_vars(cfg_dict, base_var_dict, - base_cfg_dict) - - base_cfg_dict = Config._merge_a_into_b(cfg_dict, base_cfg_dict) - cfg_dict = base_cfg_dict - - # merge cfg_text - cfg_text_list.append(cfg_text) - cfg_text = '\n'.join(cfg_text_list) - - return cfg_dict, cfg_text - - @staticmethod - def _merge_a_into_b(a, b, allow_list_keys=False): - """merge dict ``a`` into dict ``b`` (non-inplace). - - Values in ``a`` will overwrite ``b``. ``b`` is copied first to avoid - in-place modifications. - - Args: - a (dict): The source dict to be merged into ``b``. - b (dict): The origin dict to be fetch keys from ``a``. - allow_list_keys (bool): If True, int string keys (e.g. '0', '1') - are allowed in source ``a`` and will replace the element of the - corresponding index in b if b is a list. Default: False. - - Returns: - dict: The modified dict of ``b`` using ``a``. - - Examples: - # Normally merge a into b. - >>> Config._merge_a_into_b( - ... dict(obj=dict(a=2)), dict(obj=dict(a=1))) - {'obj': {'a': 2}} - - # Delete b first and merge a into b. - >>> Config._merge_a_into_b( - ... dict(obj=dict(_delete_=True, a=2)), dict(obj=dict(a=1))) - {'obj': {'a': 2}} - - # b is a list - >>> Config._merge_a_into_b( - ... {'0': dict(a=2)}, [dict(a=1), dict(b=2)], True) - [{'a': 2}, {'b': 2}] - """ - b = b.copy() - for k, v in a.items(): - if allow_list_keys and k.isdigit() and isinstance(b, list): - k = int(k) - if len(b) <= k: - raise KeyError(f'Index {k} exceeds the length of list {b}') - b[k] = Config._merge_a_into_b(v, b[k], allow_list_keys) - elif isinstance(v, - dict) and k in b and not v.pop(DELETE_KEY, False): - allowed_types = (dict, list) if allow_list_keys else dict - if not isinstance(b[k], allowed_types): - raise TypeError( - f'{k}={v} in child config cannot inherit from base ' - f'because {k} is a dict in the child config but is of ' - f'type {type(b[k])} in base config. 
You may set ' - f'`{DELETE_KEY}=True` to ignore the base config') - b[k] = Config._merge_a_into_b(v, b[k], allow_list_keys) - else: - b[k] = v - return b - - @staticmethod - def fromfile(filename, - use_predefined_variables=True, - import_custom_modules=True): - cfg_dict, cfg_text = Config._file2dict(filename, - use_predefined_variables) - if import_custom_modules and cfg_dict.get('custom_imports', None): - import_modules_from_strings(**cfg_dict['custom_imports']) - return Config(cfg_dict, cfg_text=cfg_text, filename=filename) - - @staticmethod - def fromstring(cfg_str, file_format): - """Generate config from config str. - - Args: - cfg_str (str): Config str. - file_format (str): Config file format corresponding to the - config str. Only py/yml/yaml/json type are supported now! - - Returns: - obj:`Config`: Config obj. - """ - if file_format not in ['.py', '.json', '.yaml', '.yml']: - raise IOError('Only py/yml/yaml/json type are supported now!') - if file_format != '.py' and 'dict(' in cfg_str: - # check if users specify a wrong suffix for python - warnings.warn( - 'Please check "file_format", the file format may be .py') - with tempfile.NamedTemporaryFile( - 'w', encoding='utf-8', suffix=file_format, - delete=False) as temp_file: - temp_file.write(cfg_str) - # on windows, previous implementation cause error - # see PR 1077 for details - cfg = Config.fromfile(temp_file.name) - os.remove(temp_file.name) - return cfg - - @staticmethod - def auto_argparser(description=None): - """Generate argparser from config file automatically (experimental)""" - partial_parser = ArgumentParser(description=description) - partial_parser.add_argument('config', help='config file path') - cfg_file = partial_parser.parse_known_args()[0].config - cfg = Config.fromfile(cfg_file) - parser = ArgumentParser(description=description) - parser.add_argument('config', help='config file path') - add_args(parser, cfg) - return parser, cfg - - def __init__(self, cfg_dict=None, cfg_text=None, filename=None): - if cfg_dict is None: - cfg_dict = dict() - elif not isinstance(cfg_dict, dict): - raise TypeError('cfg_dict must be a dict, but ' - f'got {type(cfg_dict)}') - for key in cfg_dict: - if key in RESERVED_KEYS: - raise KeyError(f'{key} is reserved for config file') - - super(Config, self).__setattr__('_cfg_dict', ConfigDict(cfg_dict)) - super(Config, self).__setattr__('_filename', filename) - if cfg_text: - text = cfg_text - elif filename: - with open(filename, 'r') as f: - text = f.read() - else: - text = '' - super(Config, self).__setattr__('_text', text) - - @property - def filename(self): - return self._filename - - @property - def text(self): - return self._text - - @property - def pretty_text(self): - - indent = 4 - - def _indent(s_, num_spaces): - s = s_.split('\n') - if len(s) == 1: - return s_ - first = s.pop(0) - s = [(num_spaces * ' ') + line for line in s] - s = '\n'.join(s) - s = first + '\n' + s - return s - - def _format_basic_types(k, v, use_mapping=False): - if isinstance(v, str): - v_str = f"'{v}'" - else: - v_str = str(v) - - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f'{k_str}: {v_str}' - else: - attr_str = f'{str(k)}={v_str}' - attr_str = _indent(attr_str, indent) - - return attr_str - - def _format_list(k, v, use_mapping=False): - # check if all items in the list are dict - if all(isinstance(_, dict) for _ in v): - v_str = '[\n' - v_str += '\n'.join( - f'dict({_indent(_format_dict(v_), indent)}),' - for v_ in v).rstrip(',') - if use_mapping: - k_str = f"'{k}'" if 
isinstance(k, str) else str(k) - attr_str = f'{k_str}: {v_str}' - else: - attr_str = f'{str(k)}={v_str}' - attr_str = _indent(attr_str, indent) + ']' - else: - attr_str = _format_basic_types(k, v, use_mapping) - return attr_str - - def _contain_invalid_identifier(dict_str): - contain_invalid_identifier = False - for key_name in dict_str: - contain_invalid_identifier |= \ - (not str(key_name).isidentifier()) - return contain_invalid_identifier - - def _format_dict(input_dict, outest_level=False): - r = '' - s = [] - - use_mapping = _contain_invalid_identifier(input_dict) - if use_mapping: - r += '{' - for idx, (k, v) in enumerate(input_dict.items()): - is_last = idx >= len(input_dict) - 1 - end = '' if outest_level or is_last else ',' - if isinstance(v, dict): - v_str = '\n' + _format_dict(v) - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f'{k_str}: dict({v_str}' - else: - attr_str = f'{str(k)}=dict({v_str}' - attr_str = _indent(attr_str, indent) + ')' + end - elif isinstance(v, list): - attr_str = _format_list(k, v, use_mapping) + end - else: - attr_str = _format_basic_types(k, v, use_mapping) + end - - s.append(attr_str) - r += '\n'.join(s) - if use_mapping: - r += '}' - return r - - cfg_dict = self._cfg_dict.to_dict() - text = _format_dict(cfg_dict, outest_level=True) - # copied from setup.cfg - yapf_style = dict( - based_on_style='pep8', - blank_line_before_nested_class_or_def=True, - split_before_expression_after_opening_paren=True) - text, _ = FormatCode(text, style_config=yapf_style) - #text, _ = FormatCode(text, style_config=yapf_style, verify=True) - - return text - - def __repr__(self): - return f'Config (path: {self.filename}): {self._cfg_dict.__repr__()}' - - def __len__(self): - return len(self._cfg_dict) - - def __getattr__(self, name): - return getattr(self._cfg_dict, name) - - def __getitem__(self, name): - return self._cfg_dict.__getitem__(name) - - def __setattr__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setattr__(name, value) - - def __setitem__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setitem__(name, value) - - def __iter__(self): - return iter(self._cfg_dict) - - def __getstate__(self): - return (self._cfg_dict, self._filename, self._text) - - def __setstate__(self, state): - _cfg_dict, _filename, _text = state - super(Config, self).__setattr__('_cfg_dict', _cfg_dict) - super(Config, self).__setattr__('_filename', _filename) - super(Config, self).__setattr__('_text', _text) - - def dump(self, file=None): - cfg_dict = super(Config, self).__getattribute__('_cfg_dict').to_dict() - if self.filename.endswith('.py'): - if file is None: - return self.pretty_text - else: - with open(file, 'w', encoding='utf-8') as f: - f.write(self.pretty_text) - else: - import mmcv - if file is None: - file_format = self.filename.split('.')[-1] - return mmcv.dump(cfg_dict, file_format=file_format) - else: - mmcv.dump(cfg_dict, file) - - def merge_from_dict(self, options, allow_list_keys=True): - """Merge list into cfg_dict. - - Merge the dict parsed by MultipleKVAction into this cfg. - - Examples: - >>> options = {'model.backbone.depth': 50, - ... 'model.backbone.with_cp':True} - >>> cfg = Config(dict(model=dict(backbone=dict(type='ResNet')))) - >>> cfg.merge_from_dict(options) - >>> cfg_dict = super(Config, self).__getattribute__('_cfg_dict') - >>> assert cfg_dict == dict( - ... 
model=dict(backbone=dict(depth=50, with_cp=True))) - - # Merge list element - >>> cfg = Config(dict(pipeline=[ - ... dict(type='LoadImage'), dict(type='LoadAnnotations')])) - >>> options = dict(pipeline={'0': dict(type='SelfLoadImage')}) - >>> cfg.merge_from_dict(options, allow_list_keys=True) - >>> cfg_dict = super(Config, self).__getattribute__('_cfg_dict') - >>> assert cfg_dict == dict(pipeline=[ - ... dict(type='SelfLoadImage'), dict(type='LoadAnnotations')]) - - Args: - options (dict): dict of configs to merge from. - allow_list_keys (bool): If True, int string keys (e.g. '0', '1') - are allowed in ``options`` and will replace the element of the - corresponding index in the config if the config is a list. - Default: True. - """ - option_cfg_dict = {} - for full_key, v in options.items(): - d = option_cfg_dict - key_list = full_key.split('.') - for subkey in key_list[:-1]: - d.setdefault(subkey, ConfigDict()) - d = d[subkey] - subkey = key_list[-1] - d[subkey] = v - - cfg_dict = super(Config, self).__getattribute__('_cfg_dict') - super(Config, self).__setattr__( - '_cfg_dict', - Config._merge_a_into_b( - option_cfg_dict, cfg_dict, allow_list_keys=allow_list_keys)) - - -class DictAction(Action): - """ - argparse action to split an argument into KEY=VALUE form - on the first = and append to a dictionary. List options can - be passed as comma separated values, i.e 'KEY=V1,V2,V3', or with explicit - brackets, i.e. 'KEY=[V1,V2,V3]'. It also support nested brackets to build - list/tuple values. e.g. 'KEY=[(V1,V2),(V3,V4)]' - """ - - @staticmethod - def _parse_int_float_bool(val): - try: - return int(val) - except ValueError: - pass - try: - return float(val) - except ValueError: - pass - if val.lower() in ['true', 'false']: - return True if val.lower() == 'true' else False - return val - - @staticmethod - def _parse_iterable(val): - """Parse iterable values in the string. - - All elements inside '()' or '[]' are treated as iterable values. - - Args: - val (str): Value string. - - Returns: - list | tuple: The expanded list or tuple from the string. - - Examples: - >>> DictAction._parse_iterable('1,2,3') - [1, 2, 3] - >>> DictAction._parse_iterable('[a, b, c]') - ['a', 'b', 'c'] - >>> DictAction._parse_iterable('[(1, 2, 3), [a, b], c]') - [(1, 2, 3), ['a', 'b'], 'c'] - """ - - def find_next_comma(string): - """Find the position of next comma in the string. - - If no ',' is found in the string, return the string length. All - chars inside '()' and '[]' are treated as one element and thus ',' - inside these brackets are ignored. - """ - assert (string.count('(') == string.count(')')) and ( - string.count('[') == string.count(']')), \ - f'Imbalanced brackets exist in {string}' - end = len(string) - for idx, char in enumerate(string): - pre = string[:idx] - # The string before this ',' is balanced - if ((char == ',') and (pre.count('(') == pre.count(')')) - and (pre.count('[') == pre.count(']'))): - end = idx - break - return end - - # Strip ' and " characters and replace whitespace. 
- val = val.strip('\'\"').replace(' ', '') - is_tuple = False - if val.startswith('(') and val.endswith(')'): - is_tuple = True - val = val[1:-1] - elif val.startswith('[') and val.endswith(']'): - val = val[1:-1] - elif ',' not in val: - # val is a single value - return DictAction._parse_int_float_bool(val) - - values = [] - while len(val) > 0: - comma_idx = find_next_comma(val) - element = DictAction._parse_iterable(val[:comma_idx]) - values.append(element) - val = val[comma_idx + 1:] - if is_tuple: - values = tuple(values) - return values - - def __call__(self, parser, namespace, values, option_string=None): - options = {} - for kv in values: - key, val = kv.split('=', maxsplit=1) - options[key] = self._parse_iterable(val) - setattr(namespace, self.dest, options) diff --git a/cv/detection/yolof/pytorch/mmcv/utils/env.py b/cv/detection/yolof/pytorch/mmcv/utils/env.py deleted file mode 100644 index e46a1094..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/env.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""This file holding some environment constant for sharing by other files.""" - -import os.path as osp -import subprocess -import sys -from collections import defaultdict - -import cv2 -import torch - -import mmcv -from .parrots_wrapper import get_build_config - - -def collect_env(): - """Collect the information of the running environments. - - Returns: - dict: The environment information. The following fields are contained. - - - sys.platform: The variable of ``sys.platform``. - - Python: Python version. - - CUDA available: Bool, indicating if CUDA is available. - - GPU devices: Device type of each GPU. - - CUDA_HOME (optional): The env var ``CUDA_HOME``. - - NVCC (optional): NVCC version. - - GCC: GCC version, "n/a" if GCC is not installed. - - PyTorch: PyTorch version. - - PyTorch compiling details: The output of \ - ``torch.__config__.show()``. - - TorchVision (optional): TorchVision version. - - OpenCV: OpenCV version. - - MMCV: MMCV version. - - MMCV Compiler: The GCC version for compiling MMCV ops. - - MMCV CUDA Compiler: The CUDA version for compiling MMCV ops. 
- """ - env_info = {} - env_info['sys.platform'] = sys.platform - env_info['Python'] = sys.version.replace('\n', '') - - cuda_available = torch.cuda.is_available() - env_info['CUDA available'] = cuda_available - - if cuda_available: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - devices[torch.cuda.get_device_name(k)].append(str(k)) - for name, device_ids in devices.items(): - env_info['GPU ' + ','.join(device_ids)] = name - - from mmcv.utils.parrots_wrapper import _get_cuda_home - CUDA_HOME = _get_cuda_home() - env_info['CUDA_HOME'] = CUDA_HOME - - if CUDA_HOME is not None and osp.isdir(CUDA_HOME): - try: - nvcc = osp.join(CUDA_HOME, 'bin/nvcc') - nvcc = subprocess.check_output( - f'"{nvcc}" -V | tail -n1', shell=True) - nvcc = nvcc.decode('utf-8').strip() - except subprocess.SubprocessError: - nvcc = 'Not Available' - env_info['NVCC'] = nvcc - - try: - gcc = subprocess.check_output('gcc --version | head -n1', shell=True) - gcc = gcc.decode('utf-8').strip() - env_info['GCC'] = gcc - except subprocess.CalledProcessError: # gcc is unavailable - env_info['GCC'] = 'n/a' - - env_info['PyTorch'] = torch.__version__ - env_info['PyTorch compiling details'] = get_build_config() - - try: - import torchvision - env_info['TorchVision'] = torchvision.__version__ - except ModuleNotFoundError: - pass - - env_info['OpenCV'] = cv2.__version__ - - env_info['MMCV'] = mmcv.__version__ - - try: - from mmcv.ops import get_compiler_version, get_compiling_cuda_version - except ModuleNotFoundError: - env_info['MMCV Compiler'] = 'n/a' - env_info['MMCV CUDA Compiler'] = 'n/a' - else: - env_info['MMCV Compiler'] = get_compiler_version() - env_info['MMCV CUDA Compiler'] = get_compiling_cuda_version() - - return env_info diff --git a/cv/detection/yolof/pytorch/mmcv/utils/ext_loader.py b/cv/detection/yolof/pytorch/mmcv/utils/ext_loader.py deleted file mode 100755 index 08132d2c..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/ext_loader.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import importlib -import os -import pkgutil -import warnings -from collections import namedtuple - -import torch - -if torch.__version__ != 'parrots': - - def load_ext(name, funcs): - ext = importlib.import_module('mmcv.' 
+ name) - for fun in funcs: - assert hasattr(ext, fun), f'{fun} miss in module {name}' - return ext -else: - from parrots import extension - from parrots.base import ParrotsException - - has_return_value_ops = [ - 'nms', - 'softnms', - 'nms_match', - 'nms_rotated', - 'top_pool_forward', - 'top_pool_backward', - 'bottom_pool_forward', - 'bottom_pool_backward', - 'left_pool_forward', - 'left_pool_backward', - 'right_pool_forward', - 'right_pool_backward', - 'fused_bias_leakyrelu', - 'upfirdn2d', - 'ms_deform_attn_forward', - 'pixel_group', - 'contour_expand', - ] - - def get_fake_func(name, e): - - def fake_func(*args, **kwargs): - warnings.warn(f'{name} is not supported in parrots now') - raise e - - return fake_func - - def load_ext(name, funcs): - ExtModule = namedtuple('ExtModule', funcs) - ext_list = [] - lib_root = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) - for fun in funcs: - try: - ext_fun = extension.load(fun, name, lib_dir=lib_root) - except ParrotsException as e: - if 'No element registered' not in e.message: - warnings.warn(e.message) - ext_fun = get_fake_func(fun, e) - ext_list.append(ext_fun) - else: - if fun in has_return_value_ops: - ext_list.append(ext_fun.op) - else: - ext_list.append(ext_fun.op_) - return ExtModule(*ext_list) - - -def check_ops_exist(): - ext_loader = pkgutil.find_loader('mmcv._ext') - return ext_loader is not None diff --git a/cv/detection/yolof/pytorch/mmcv/utils/logging.py b/cv/detection/yolof/pytorch/mmcv/utils/logging.py deleted file mode 100755 index 4aa0e04b..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/logging.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.distributed as dist - -logger_initialized = {} - - -def get_logger(name, log_file=None, log_level=logging.INFO, file_mode='w'): - """Initialize and get a logger by name. - - If the logger has not been initialized, this method will initialize the - logger by adding one or two handlers, otherwise the initialized logger will - be directly returned. During initialization, a StreamHandler will always be - added. If `log_file` is specified and the process rank is 0, a FileHandler - will also be added. - - Args: - name (str): Logger name. - log_file (str | None): The log filename. If specified, a FileHandler - will be added to the logger. - log_level (int): The logger level. Note that only the process of - rank 0 is affected, and other processes will set the level to - "Error" thus be silent most of the time. - file_mode (str): The file mode used in opening log file. - Defaults to 'w'. - - Returns: - logging.Logger: The expected logger. - """ - logger = logging.getLogger(name) - if name in logger_initialized: - return logger - # handle hierarchical names - # e.g., logger "a" is initialized, then logger "a.b" will skip the - # initialization since it is a child of "a". - for logger_name in logger_initialized: - if name.startswith(logger_name): - return logger - - # handle duplicate logs to the console - # Starting in 1.8.0, PyTorch DDP attaches a StreamHandler (NOTSET) - # to the root logger. As logger.propagate is True by default, this root - # level handler causes logging messages from rank>0 processes to - # unexpectedly show up on the console, creating much unwanted clutter. - # To fix this issue, we set the root logger's StreamHandler, if any, to log - # at the ERROR level. 
- for handler in logger.root.handlers: - if type(handler) is logging.StreamHandler: - handler.setLevel(logging.ERROR) - - stream_handler = logging.StreamHandler() - handlers = [stream_handler] - - if dist.is_available() and dist.is_initialized(): - rank = dist.get_rank() - else: - rank = 0 - - # only rank 0 will add a FileHandler - if rank == 0 and log_file is not None: - # Here, the default behaviour of the official logger is 'a'. Thus, we - # provide an interface to change the file mode to the default - # behaviour. - file_handler = logging.FileHandler(log_file, file_mode) - handlers.append(file_handler) - - formatter = logging.Formatter( - '%(asctime)s - %(name)s - %(levelname)s - %(message)s') - for handler in handlers: - handler.setFormatter(formatter) - handler.setLevel(log_level) - logger.addHandler(handler) - - if rank == 0: - logger.setLevel(log_level) - else: - logger.setLevel(logging.ERROR) - - logger_initialized[name] = True - - return logger - - -def print_log(msg, logger=None, level=logging.INFO): - """Print a log message. - - Args: - msg (str): The message to be logged. - logger (logging.Logger | str | None): The logger to be used. - Some special loggers are: - - "silent": no message will be printed. - - other str: the logger obtained with `get_root_logger(logger)`. - - None: The `print()` method will be used to print log messages. - level (int): Logging level. Only available when `logger` is a Logger - object or "root". - """ - if logger is None: - print(msg) - elif isinstance(logger, logging.Logger): - logger.log(level, msg) - elif logger == 'silent': - pass - elif isinstance(logger, str): - _logger = get_logger(logger) - _logger.log(level, msg) - else: - raise TypeError( - 'logger should be either a logging.Logger object, str, ' - f'"silent" or None, but got {type(logger)}') diff --git a/cv/detection/yolof/pytorch/mmcv/utils/misc.py b/cv/detection/yolof/pytorch/mmcv/utils/misc.py deleted file mode 100755 index 2c58d0d7..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/misc.py +++ /dev/null @@ -1,377 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections.abc -import functools -import itertools -import subprocess -import warnings -from collections import abc -from importlib import import_module -from inspect import getfullargspec -from itertools import repeat - - -# From PyTorch internals -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def import_modules_from_strings(imports, allow_failed_imports=False): - """Import modules from the given list of strings. - - Args: - imports (list | str | None): The given module names to be imported. - allow_failed_imports (bool): If True, the failed imports will return - None. Otherwise, an ImportError is raise. Default: False. - - Returns: - list[module] | module | None: The imported modules. - - Examples: - >>> osp, sys = import_modules_from_strings( - ... 
['os.path', 'sys']) - >>> import os.path as osp_ - >>> import sys as sys_ - >>> assert osp == osp_ - >>> assert sys == sys_ - """ - if not imports: - return - single_import = False - if isinstance(imports, str): - single_import = True - imports = [imports] - if not isinstance(imports, list): - raise TypeError( - f'custom_imports must be a list but got type {type(imports)}') - imported = [] - for imp in imports: - if not isinstance(imp, str): - raise TypeError( - f'{imp} is of type {type(imp)} and cannot be imported.') - try: - imported_tmp = import_module(imp) - except ImportError: - if allow_failed_imports: - warnings.warn(f'{imp} failed to import and is ignored.', - UserWarning) - imported_tmp = None - else: - raise ImportError - imported.append(imported_tmp) - if single_import: - imported = imported[0] - return imported - - -def iter_cast(inputs, dst_type, return_type=None): - """Cast elements of an iterable object into some type. - - Args: - inputs (Iterable): The input object. - dst_type (type): Destination type. - return_type (type, optional): If specified, the output object will be - converted to this type, otherwise an iterator. - - Returns: - iterator or specified type: The converted object. - """ - if not isinstance(inputs, abc.Iterable): - raise TypeError('inputs must be an iterable object') - if not isinstance(dst_type, type): - raise TypeError('"dst_type" must be a valid type') - - out_iterable = map(dst_type, inputs) - - if return_type is None: - return out_iterable - else: - return return_type(out_iterable) - - -def list_cast(inputs, dst_type): - """Cast elements of an iterable object into a list of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=list) - - -def tuple_cast(inputs, dst_type): - """Cast elements of an iterable object into a tuple of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=tuple) - - -def is_seq_of(seq, expected_type, seq_type=None): - """Check whether it is a sequence of some type. - - Args: - seq (Sequence): The sequence to be checked. - expected_type (type): Expected type of sequence items. - seq_type (type, optional): Expected sequence type. - - Returns: - bool: Whether the sequence is valid. - """ - if seq_type is None: - exp_seq_type = abc.Sequence - else: - assert isinstance(seq_type, type) - exp_seq_type = seq_type - if not isinstance(seq, exp_seq_type): - return False - for item in seq: - if not isinstance(item, expected_type): - return False - return True - - -def is_list_of(seq, expected_type): - """Check whether it is a list of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=list) - - -def is_tuple_of(seq, expected_type): - """Check whether it is a tuple of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=tuple) - - -def slice_list(in_list, lens): - """Slice a list into several sub lists by a list of given length. - - Args: - in_list (list): The list to be sliced. - lens(int or list): The expected length of each out list. - - Returns: - list: A list of sliced list. 
- """ - if isinstance(lens, int): - assert len(in_list) % lens == 0 - lens = [lens] * int(len(in_list) / lens) - if not isinstance(lens, list): - raise TypeError('"indices" must be an integer or a list of integers') - elif sum(lens) != len(in_list): - raise ValueError('sum of lens and list length does not ' - f'match: {sum(lens)} != {len(in_list)}') - out_list = [] - idx = 0 - for i in range(len(lens)): - out_list.append(in_list[idx:idx + lens[i]]) - idx += lens[i] - return out_list - - -def concat_list(in_list): - """Concatenate a list of list into a single list. - - Args: - in_list (list): The list of list to be merged. - - Returns: - list: The concatenated flat list. - """ - return list(itertools.chain(*in_list)) - - -def check_prerequisites( - prerequisites, - checker, - msg_tmpl='Prerequisites "{}" are required in method "{}" but not ' - 'found, please install them first.'): # yapf: disable - """A decorator factory to check if prerequisites are satisfied. - - Args: - prerequisites (str of list[str]): Prerequisites to be checked. - checker (callable): The checker method that returns True if a - prerequisite is meet, False otherwise. - msg_tmpl (str): The message template with two variables. - - Returns: - decorator: A specific decorator. - """ - - def wrap(func): - - @functools.wraps(func) - def wrapped_func(*args, **kwargs): - requirements = [prerequisites] if isinstance( - prerequisites, str) else prerequisites - missing = [] - for item in requirements: - if not checker(item): - missing.append(item) - if missing: - print(msg_tmpl.format(', '.join(missing), func.__name__)) - raise RuntimeError('Prerequisites not meet.') - else: - return func(*args, **kwargs) - - return wrapped_func - - return wrap - - -def _check_py_package(package): - try: - import_module(package) - except ImportError: - return False - else: - return True - - -def _check_executable(cmd): - if subprocess.call(f'which {cmd}', shell=True) != 0: - return False - else: - return True - - -def requires_package(prerequisites): - """A decorator to check if some python packages are installed. - - Example: - >>> @requires_package('numpy') - >>> func(arg1, args): - >>> return numpy.zeros(1) - array([0.]) - >>> @requires_package(['numpy', 'non_package']) - >>> func(arg1, args): - >>> return numpy.zeros(1) - ImportError - """ - return check_prerequisites(prerequisites, checker=_check_py_package) - - -def requires_executable(prerequisites): - """A decorator to check if some executable files are installed. - - Example: - >>> @requires_executable('ffmpeg') - >>> func(arg1, args): - >>> print(1) - 1 - """ - return check_prerequisites(prerequisites, checker=_check_executable) - - -def deprecated_api_warning(name_dict, cls_name=None): - """A decorator to check if some arguments are deprecate and try to replace - deprecate src_arg_name to dst_arg_name. - - Args: - name_dict(dict): - key (str): Deprecate argument names. - val (str): Expected argument names. - - Returns: - func: New function. 
- """ - - def api_warning_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get name of the function - func_name = old_func.__name__ - if cls_name is not None: - func_name = f'{cls_name}.{func_name}' - if args: - arg_names = args_info.args[:len(args)] - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in arg_names: - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - arg_names[arg_names.index(src_arg_name)] = dst_arg_name - if kwargs: - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in kwargs: - - assert dst_arg_name not in kwargs, ( - f'The expected behavior is to replace ' - f'the deprecated key `{src_arg_name}` to ' - f'new key `{dst_arg_name}`, but got them ' - f'in the arguments at the same time, which ' - f'is confusing. `{src_arg_name} will be ' - f'deprecated in the future, please ' - f'use `{dst_arg_name}` instead.') - - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - kwargs[dst_arg_name] = kwargs.pop(src_arg_name) - - # apply converted arguments to the decorated method - output = old_func(*args, **kwargs) - return output - - return new_func - - return api_warning_wrapper - - -def is_method_overridden(method, base_class, derived_class): - """Check if a method of base class is overridden in derived class. - - Args: - method (str): the method name to check. - base_class (type): the class of the base class. - derived_class (type | Any): the class or instance of the derived class. - """ - assert isinstance(base_class, type), \ - "base_class doesn't accept instance, Please pass class instead." - - if not isinstance(derived_class, type): - derived_class = derived_class.__class__ - - base_method = getattr(base_class, method) - derived_method = getattr(derived_class, method) - return derived_method != base_method - - -def has_method(obj: object, method: str) -> bool: - """Check whether the object has a method. - - Args: - method (str): The method name to check. - obj (object): The object to check. - - Returns: - bool: True if the object has the method else False. - """ - return hasattr(obj, method) and callable(getattr(obj, method)) diff --git a/cv/detection/yolof/pytorch/mmcv/utils/parrots_jit.py b/cv/detection/yolof/pytorch/mmcv/utils/parrots_jit.py deleted file mode 100644 index 61873f6d..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/parrots_jit.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os - -from .parrots_wrapper import TORCH_VERSION - -parrots_jit_option = os.getenv('PARROTS_JIT_OPTION') - -if TORCH_VERSION == 'parrots' and parrots_jit_option == 'ON': - from parrots.jit import pat as jit -else: - - def jit(func=None, - check_input=None, - full_shape=True, - derivate=False, - coderize=False, - optimize=False): - - def wrapper(func): - - def wrapper_inner(*args, **kargs): - return func(*args, **kargs) - - return wrapper_inner - - if func is None: - return wrapper - else: - return func - - -if TORCH_VERSION == 'parrots': - from parrots.utils.tester import skip_no_elena -else: - - def skip_no_elena(func): - - def wrapper(*args, **kargs): - return func(*args, **kargs) - - return wrapper diff --git a/cv/detection/yolof/pytorch/mmcv/utils/parrots_wrapper.py b/cv/detection/yolof/pytorch/mmcv/utils/parrots_wrapper.py deleted file mode 100644 index 93c97640..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/parrots_wrapper.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial - -import torch - -TORCH_VERSION = torch.__version__ - - -def is_rocm_pytorch() -> bool: - is_rocm = False - if TORCH_VERSION != 'parrots': - try: - from torch.utils.cpp_extension import ROCM_HOME - is_rocm = True if ((torch.version.hip is not None) and - (ROCM_HOME is not None)) else False - except ImportError: - pass - return is_rocm - - -def _get_cuda_home(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import CUDA_HOME - else: - if is_rocm_pytorch(): - from torch.utils.cpp_extension import ROCM_HOME - CUDA_HOME = ROCM_HOME - else: - from torch.utils.cpp_extension import CUDA_HOME - return CUDA_HOME - - -def get_build_config(): - if TORCH_VERSION == 'parrots': - from parrots.config import get_build_info - return get_build_info() - else: - return torch.__config__.show() - - -def _get_conv(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.conv import _ConvNd, _ConvTransposeMixin - else: - from torch.nn.modules.conv import _ConvNd, _ConvTransposeMixin - return _ConvNd, _ConvTransposeMixin - - -def _get_dataloader(): - if TORCH_VERSION == 'parrots': - from torch.utils.data import DataLoader, PoolDataLoader - else: - from torch.utils.data import DataLoader - PoolDataLoader = DataLoader - return DataLoader, PoolDataLoader - - -def _get_extension(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import BuildExtension, Extension - CppExtension = partial(Extension, cuda=False) - CUDAExtension = partial(Extension, cuda=True) - else: - from torch.utils.cpp_extension import (BuildExtension, CppExtension, - CUDAExtension) - return BuildExtension, CppExtension, CUDAExtension - - -def _get_pool(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.pool import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - else: - from torch.nn.modules.pooling import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - return _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd - - -def _get_norm(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.batchnorm import _BatchNorm, _InstanceNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm2d - else: - from torch.nn.modules.instancenorm import _InstanceNorm - from torch.nn.modules.batchnorm import _BatchNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm - return _BatchNorm, _InstanceNorm, SyncBatchNorm_ - - -_ConvNd, _ConvTransposeMixin = _get_conv() -DataLoader, PoolDataLoader = _get_dataloader() 
-BuildExtension, CppExtension, CUDAExtension = _get_extension() -_BatchNorm, _InstanceNorm, SyncBatchNorm_ = _get_norm() -_AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd = _get_pool() - - -class SyncBatchNorm(SyncBatchNorm_): - - def _check_input_dim(self, input): - if TORCH_VERSION == 'parrots': - if input.dim() < 2: - raise ValueError( - f'expected at least 2D input (got {input.dim()}D input)') - else: - super()._check_input_dim(input) diff --git a/cv/detection/yolof/pytorch/mmcv/utils/path.py b/cv/detection/yolof/pytorch/mmcv/utils/path.py deleted file mode 100755 index 7dab4b30..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/path.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import os.path as osp -from pathlib import Path - -from .misc import is_str - - -def is_filepath(x): - return is_str(x) or isinstance(x, Path) - - -def fopen(filepath, *args, **kwargs): - if is_str(filepath): - return open(filepath, *args, **kwargs) - elif isinstance(filepath, Path): - return filepath.open(*args, **kwargs) - raise ValueError('`filepath` should be a string or a Path') - - -def check_file_exist(filename, msg_tmpl='file "{}" does not exist'): - if not osp.isfile(filename): - raise FileNotFoundError(msg_tmpl.format(filename)) - - -def mkdir_or_exist(dir_name, mode=0o777): - if dir_name == '': - return - dir_name = osp.expanduser(dir_name) - os.makedirs(dir_name, mode=mode, exist_ok=True) - - -def symlink(src, dst, overwrite=True, **kwargs): - if os.path.lexists(dst) and overwrite: - os.remove(dst) - os.symlink(src, dst, **kwargs) - - -def scandir(dir_path, suffix=None, recursive=False, case_sensitive=True): - """Scan a directory to find the interested files. - - Args: - dir_path (str | obj:`Path`): Path of the directory. - suffix (str | tuple(str), optional): File suffix that we are - interested in. Default: None. - recursive (bool, optional): If set to True, recursively scan the - directory. Default: False. - case_sensitive (bool, optional) : If set to False, ignore the case of - suffix. Default: True. - - Returns: - A generator for all the interested files with relative paths. - """ - if isinstance(dir_path, (str, Path)): - dir_path = str(dir_path) - else: - raise TypeError('"dir_path" must be a string or Path object') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('"suffix" must be a string or tuple of strings') - - if suffix is not None and not case_sensitive: - suffix = suffix.lower() if isinstance(suffix, str) else tuple( - item.lower() for item in suffix) - - root = dir_path - - def _scandir(dir_path, suffix, recursive, case_sensitive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - _rel_path = rel_path if case_sensitive else rel_path.lower() - if suffix is None or _rel_path.endswith(suffix): - yield rel_path - elif recursive and os.path.isdir(entry.path): - # scan recursively if entry.path is a directory - yield from _scandir(entry.path, suffix, recursive, - case_sensitive) - - return _scandir(dir_path, suffix, recursive, case_sensitive) - - -def find_vcs_root(path, markers=('.git', )): - """Finds the root directory (including itself) of specified markers. - - Args: - path (str): Path of directory or file. - markers (list[str], optional): List of file or directory names. - - Returns: - The directory contained one of the markers or None if not found. 
- """ - if osp.isfile(path): - path = osp.dirname(path) - - prev, cur = None, osp.abspath(osp.expanduser(path)) - while cur != prev: - if any(osp.exists(osp.join(cur, marker)) for marker in markers): - return cur - prev, cur = cur, osp.split(cur)[0] - return None diff --git a/cv/detection/yolof/pytorch/mmcv/utils/progressbar.py b/cv/detection/yolof/pytorch/mmcv/utils/progressbar.py deleted file mode 100755 index 0062f670..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/progressbar.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import sys -from collections.abc import Iterable -from multiprocessing import Pool -from shutil import get_terminal_size - -from .timer import Timer - - -class ProgressBar: - """A progress bar which can print the progress.""" - - def __init__(self, task_num=0, bar_width=50, start=True, file=sys.stdout): - self.task_num = task_num - self.bar_width = bar_width - self.completed = 0 - self.file = file - if start: - self.start() - - @property - def terminal_width(self): - width, _ = get_terminal_size() - return width - - def start(self): - if self.task_num > 0: - self.file.write(f'[{" " * self.bar_width}] 0/{self.task_num}, ' - 'elapsed: 0s, ETA:') - else: - self.file.write('completed: 0, elapsed: 0s') - self.file.flush() - self.timer = Timer() - - def update(self, num_tasks=1): - assert num_tasks > 0 - self.completed += num_tasks - elapsed = self.timer.since_start() - if elapsed > 0: - fps = self.completed / elapsed - else: - fps = float('inf') - if self.task_num > 0: - percentage = self.completed / float(self.task_num) - eta = int(elapsed * (1 - percentage) / percentage + 0.5) - msg = f'\r[{{}}] {self.completed}/{self.task_num}, ' \ - f'{fps:.1f} task/s, elapsed: {int(elapsed + 0.5)}s, ' \ - f'ETA: {eta:5}s' - - bar_width = min(self.bar_width, - int(self.terminal_width - len(msg)) + 2, - int(self.terminal_width * 0.6)) - bar_width = max(2, bar_width) - mark_width = int(bar_width * percentage) - bar_chars = '>' * mark_width + ' ' * (bar_width - mark_width) - self.file.write(msg.format(bar_chars)) - else: - self.file.write( - f'completed: {self.completed}, elapsed: {int(elapsed + 0.5)}s,' - f' {fps:.1f} tasks/s') - self.file.flush() - - -def track_progress(func, tasks, bar_width=50, file=sys.stdout, **kwargs): - """Track the progress of tasks execution with a progress bar. - - Tasks are done with a simple for-loop. - - Args: - func (callable): The function to be applied to each task. - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - bar_width (int): Width of progress bar. - - Returns: - list: The task results. 
- """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - prog_bar = ProgressBar(task_num, bar_width, file=file) - results = [] - for task in tasks: - results.append(func(task, **kwargs)) - prog_bar.update() - prog_bar.file.write('\n') - return results - - -def init_pool(process_num, initializer=None, initargs=None): - if initializer is None: - return Pool(process_num) - elif initargs is None: - return Pool(process_num, initializer) - else: - if not isinstance(initargs, tuple): - raise TypeError('"initargs" must be a tuple') - return Pool(process_num, initializer, initargs) - - -def track_parallel_progress(func, - tasks, - nproc, - initializer=None, - initargs=None, - bar_width=50, - chunksize=1, - skip_first=False, - keep_order=True, - file=sys.stdout): - """Track the progress of parallel task execution with a progress bar. - - The built-in :mod:`multiprocessing` module is used for process pools and - tasks are done with :func:`Pool.map` or :func:`Pool.imap_unordered`. - - Args: - func (callable): The function to be applied to each task. - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - nproc (int): Process (worker) number. - initializer (None or callable): Refer to :class:`multiprocessing.Pool` - for details. - initargs (None or tuple): Refer to :class:`multiprocessing.Pool` for - details. - chunksize (int): Refer to :class:`multiprocessing.Pool` for details. - bar_width (int): Width of progress bar. - skip_first (bool): Whether to skip the first sample for each worker - when estimating fps, since the initialization step may takes - longer. - keep_order (bool): If True, :func:`Pool.imap` is used, otherwise - :func:`Pool.imap_unordered` is used. - - Returns: - list: The task results. - """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - pool = init_pool(nproc, initializer, initargs) - start = not skip_first - task_num -= nproc * chunksize * int(skip_first) - prog_bar = ProgressBar(task_num, bar_width, start, file=file) - results = [] - if keep_order: - gen = pool.imap(func, tasks, chunksize) - else: - gen = pool.imap_unordered(func, tasks, chunksize) - for result in gen: - results.append(result) - if skip_first: - if len(results) < nproc * chunksize: - continue - elif len(results) == nproc * chunksize: - prog_bar.start() - continue - prog_bar.update() - prog_bar.file.write('\n') - pool.close() - pool.join() - return results - - -def track_iter_progress(tasks, bar_width=50, file=sys.stdout): - """Track the progress of tasks iteration or enumeration with a progress - bar. - - Tasks are yielded with a simple for-loop. - - Args: - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - bar_width (int): Width of progress bar. - - Yields: - list: The task results. 
- """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - prog_bar = ProgressBar(task_num, bar_width, file=file) - for task in tasks: - yield task - prog_bar.update() - prog_bar.file.write('\n') diff --git a/cv/detection/yolof/pytorch/mmcv/utils/registry.py b/cv/detection/yolof/pytorch/mmcv/utils/registry.py deleted file mode 100755 index fa9df39b..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/registry.py +++ /dev/null @@ -1,315 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import warnings -from functools import partial - -from .misc import is_seq_of - - -def build_from_cfg(cfg, registry, default_args=None): - """Build a module from config dict. - - Args: - cfg (dict): Config dict. It should at least contain the key "type". - registry (:obj:`Registry`): The registry to search the type from. - default_args (dict, optional): Default initialization arguments. - - Returns: - object: The constructed object. - """ - if not isinstance(cfg, dict): - raise TypeError(f'cfg must be a dict, but got {type(cfg)}') - if 'type' not in cfg: - if default_args is None or 'type' not in default_args: - raise KeyError( - '`cfg` or `default_args` must contain the key "type", ' - f'but got {cfg}\n{default_args}') - if not isinstance(registry, Registry): - raise TypeError('registry must be an mmcv.Registry object, ' - f'but got {type(registry)}') - if not (isinstance(default_args, dict) or default_args is None): - raise TypeError('default_args must be a dict or None, ' - f'but got {type(default_args)}') - - args = cfg.copy() - - if default_args is not None: - for name, value in default_args.items(): - args.setdefault(name, value) - - obj_type = args.pop('type') - if isinstance(obj_type, str): - obj_cls = registry.get(obj_type) - if obj_cls is None: - raise KeyError( - f'{obj_type} is not in the {registry.name} registry') - elif inspect.isclass(obj_type): - obj_cls = obj_type - else: - raise TypeError( - f'type must be a str or valid type, but got {type(obj_type)}') - try: - return obj_cls(**args) - except Exception as e: - # Normal TypeError does not print class name. - raise type(e)(f'{obj_cls.__name__}: {e}') - - -class Registry: - """A registry to map strings to classes. - - Registered object could be built from registry. - Example: - >>> MODELS = Registry('models') - >>> @MODELS.register_module() - >>> class ResNet: - >>> pass - >>> resnet = MODELS.build(dict(type='ResNet')) - - Please refer to - https://mmcv.readthedocs.io/en/latest/understand_mmcv/registry.html for - advanced usage. - - Args: - name (str): Registry name. - build_func(func, optional): Build function to construct instance from - Registry, func:`build_from_cfg` is used if neither ``parent`` or - ``build_func`` is specified. If ``parent`` is specified and - ``build_func`` is not given, ``build_func`` will be inherited - from ``parent``. Default: None. - parent (Registry, optional): Parent registry. The class registered in - children registry could be built from parent. Default: None. - scope (str, optional): The scope of registry. It is the key to search - for children registry. If not specified, scope will be the name of - the package where class is defined, e.g. mmdet, mmcls, mmseg. - Default: None. 
- """ - - def __init__(self, name, build_func=None, parent=None, scope=None): - self._name = name - self._module_dict = dict() - self._children = dict() - self._scope = self.infer_scope() if scope is None else scope - - # self.build_func will be set with the following priority: - # 1. build_func - # 2. parent.build_func - # 3. build_from_cfg - if build_func is None: - if parent is not None: - self.build_func = parent.build_func - else: - self.build_func = build_from_cfg - else: - self.build_func = build_func - if parent is not None: - assert isinstance(parent, Registry) - parent._add_children(self) - self.parent = parent - else: - self.parent = None - - def __len__(self): - return len(self._module_dict) - - def __contains__(self, key): - return self.get(key) is not None - - def __repr__(self): - format_str = self.__class__.__name__ + \ - f'(name={self._name}, ' \ - f'items={self._module_dict})' - return format_str - - @staticmethod - def infer_scope(): - """Infer the scope of registry. - - The name of the package where registry is defined will be returned. - - Example: - # in mmdet/models/backbone/resnet.py - >>> MODELS = Registry('models') - >>> @MODELS.register_module() - >>> class ResNet: - >>> pass - The scope of ``ResNet`` will be ``mmdet``. - - - Returns: - scope (str): The inferred scope name. - """ - # inspect.stack() trace where this function is called, the index-2 - # indicates the frame where `infer_scope()` is called - filename = inspect.getmodule(inspect.stack()[2][0]).__name__ - split_filename = filename.split('.') - return split_filename[0] - - @staticmethod - def split_scope_key(key): - """Split scope and key. - - The first scope will be split from key. - - Examples: - >>> Registry.split_scope_key('mmdet.ResNet') - 'mmdet', 'ResNet' - >>> Registry.split_scope_key('ResNet') - None, 'ResNet' - - Return: - scope (str, None): The first scope. - key (str): The remaining key. - """ - split_index = key.find('.') - if split_index != -1: - return key[:split_index], key[split_index + 1:] - else: - return None, key - - @property - def name(self): - return self._name - - @property - def scope(self): - return self._scope - - @property - def module_dict(self): - return self._module_dict - - @property - def children(self): - return self._children - - def get(self, key): - """Get the registry record. - - Args: - key (str): The class name in string format. - - Returns: - class: The corresponding class. - """ - scope, real_key = self.split_scope_key(key) - if scope is None or scope == self._scope: - # get from self - if real_key in self._module_dict: - return self._module_dict[real_key] - else: - # get from self._children - if scope in self._children: - return self._children[scope].get(real_key) - else: - # goto root - parent = self.parent - while parent.parent is not None: - parent = parent.parent - return parent.get(key) - - def build(self, *args, **kwargs): - return self.build_func(*args, **kwargs, registry=self) - - def _add_children(self, registry): - """Add children for a registry. - - The ``registry`` will be added as children based on its scope. - The parent registry could build objects from children registry. 
- - Example: - >>> models = Registry('models') - >>> mmdet_models = Registry('models', parent=models) - >>> @mmdet_models.register_module() - >>> class ResNet: - >>> pass - >>> resnet = models.build(dict(type='mmdet.ResNet')) - """ - - assert isinstance(registry, Registry) - assert registry.scope is not None - assert registry.scope not in self.children, \ - f'scope {registry.scope} exists in {self.name} registry' - self.children[registry.scope] = registry - - def _register_module(self, module_class, module_name=None, force=False): - if not inspect.isclass(module_class): - raise TypeError('module must be a class, ' - f'but got {type(module_class)}') - - if module_name is None: - module_name = module_class.__name__ - if isinstance(module_name, str): - module_name = [module_name] - for name in module_name: - if not force and name in self._module_dict: - raise KeyError(f'{name} is already registered ' - f'in {self.name}') - self._module_dict[name] = module_class - - def deprecated_register_module(self, cls=None, force=False): - warnings.warn( - 'The old API of register_module(module, force=False) ' - 'is deprecated and will be removed, please use the new API ' - 'register_module(name=None, force=False, module=None) instead.') - if cls is None: - return partial(self.deprecated_register_module, force=force) - self._register_module(cls, force=force) - return cls - - def register_module(self, name=None, force=False, module=None): - """Register a module. - - A record will be added to `self._module_dict`, whose key is the class - name or the specified name, and value is the class itself. - It can be used as a decorator or a normal function. - - Example: - >>> backbones = Registry('backbone') - >>> @backbones.register_module() - >>> class ResNet: - >>> pass - - >>> backbones = Registry('backbone') - >>> @backbones.register_module(name='mnet') - >>> class MobileNet: - >>> pass - - >>> backbones = Registry('backbone') - >>> class ResNet: - >>> pass - >>> backbones.register_module(ResNet) - - Args: - name (str | None): The module name to be registered. If not - specified, the class name will be used. - force (bool, optional): Whether to override an existing class with - the same name. Default: False. - module (type): Module class to be registered. - """ - if not isinstance(force, bool): - raise TypeError(f'force must be a boolean, but got {type(force)}') - # NOTE: This is a walkaround to be compatible with the old api, - # while it may introduce unexpected bugs. - if isinstance(name, type): - return self.deprecated_register_module(name, force=force) - - # raise the error ahead of time - if not (name is None or isinstance(name, str) or is_seq_of(name, str)): - raise TypeError( - 'name must be either of None, an instance of str or a sequence' - f' of str, but got {type(name)}') - - # use it as a normal method: x.register_module(module=SomeClass) - if module is not None: - self._register_module( - module_class=module, module_name=name, force=force) - return module - - # use it as a decorator: @x.register_module() - def _register(cls): - self._register_module( - module_class=cls, module_name=name, force=force) - return cls - - return _register diff --git a/cv/detection/yolof/pytorch/mmcv/utils/testing.py b/cv/detection/yolof/pytorch/mmcv/utils/testing.py deleted file mode 100755 index a27f936d..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/testing.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Open-MMLab. 
-import sys -from collections.abc import Iterable -from runpy import run_path -from shlex import split -from typing import Any, Dict, List -from unittest.mock import patch - - -def check_python_script(cmd): - """Run the python cmd script with `__main__`. The difference between - `os.system` is that, this function exectues code in the current process, so - that it can be tracked by coverage tools. Currently it supports two forms: - - - ./tests/data/scripts/hello.py zz - - python tests/data/scripts/hello.py zz - """ - args = split(cmd) - if args[0] == 'python': - args = args[1:] - with patch.object(sys, 'argv', args): - run_path(args[0], run_name='__main__') - - -def _any(judge_result): - """Since built-in ``any`` works only when the element of iterable is not - iterable, implement the function.""" - if not isinstance(judge_result, Iterable): - return judge_result - - try: - for element in judge_result: - if _any(element): - return True - except TypeError: - # Maybe encounter the case: torch.tensor(True) | torch.tensor(False) - if judge_result: - return True - return False - - -def assert_dict_contains_subset(dict_obj: Dict[Any, Any], - expected_subset: Dict[Any, Any]) -> bool: - """Check if the dict_obj contains the expected_subset. - - Args: - dict_obj (Dict[Any, Any]): Dict object to be checked. - expected_subset (Dict[Any, Any]): Subset expected to be contained in - dict_obj. - - Returns: - bool: Whether the dict_obj contains the expected_subset. - """ - - for key, value in expected_subset.items(): - if key not in dict_obj.keys() or _any(dict_obj[key] != value): - return False - return True - - -def assert_attrs_equal(obj: Any, expected_attrs: Dict[str, Any]) -> bool: - """Check if attribute of class object is correct. - - Args: - obj (object): Class object to be checked. - expected_attrs (Dict[str, Any]): Dict of the expected attrs. - - Returns: - bool: Whether the attribute of class object is correct. - """ - for attr, value in expected_attrs.items(): - if not hasattr(obj, attr) or _any(getattr(obj, attr) != value): - return False - return True - - -def assert_dict_has_keys(obj: Dict[str, Any], - expected_keys: List[str]) -> bool: - """Check if the obj has all the expected_keys. - - Args: - obj (Dict[str, Any]): Object to be checked. - expected_keys (List[str]): Keys expected to contained in the keys of - the obj. - - Returns: - bool: Whether the obj has the expected keys. - """ - return set(expected_keys).issubset(set(obj.keys())) - - -def assert_keys_equal(result_keys: List[str], target_keys: List[str]) -> bool: - """Check if target_keys is equal to result_keys. - - Args: - result_keys (List[str]): Result keys to be checked. - target_keys (List[str]): Target keys to be checked. - - Returns: - bool: Whether target_keys is equal to result_keys. - """ - return set(result_keys) == set(target_keys) - - -def assert_is_norm_layer(module) -> bool: - """Check if the module is a norm layer. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: Whether the module is a norm layer. - """ - from .parrots_wrapper import _BatchNorm, _InstanceNorm - from torch.nn import GroupNorm, LayerNorm - norm_layer_candidates = (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm) - return isinstance(module, norm_layer_candidates) - - -def assert_params_all_zeros(module) -> bool: - """Check if the parameters of the module is all zeros. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: Whether the parameters of the module is all zeros. 
- """ - weight_data = module.weight.data - is_weight_zero = weight_data.allclose( - weight_data.new_zeros(weight_data.size())) - - if hasattr(module, 'bias') and module.bias is not None: - bias_data = module.bias.data - is_bias_zero = bias_data.allclose( - bias_data.new_zeros(bias_data.size())) - else: - is_bias_zero = True - - return is_weight_zero and is_bias_zero diff --git a/cv/detection/yolof/pytorch/mmcv/utils/timer.py b/cv/detection/yolof/pytorch/mmcv/utils/timer.py deleted file mode 100755 index 66d4a78a..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/timer.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from time import time - - -class TimerError(Exception): - - def __init__(self, message): - self.message = message - super(TimerError, self).__init__(message) - - -class Timer: - """A flexible Timer class. - - :Example: - - >>> import time - >>> import mmcv - >>> with mmcv.Timer(): - >>> # simulate a code block that will run for 1s - >>> time.sleep(1) - 1.000 - >>> with mmcv.Timer(print_tmpl='it takes {:.1f} seconds'): - >>> # simulate a code block that will run for 1s - >>> time.sleep(1) - it takes 1.0 seconds - >>> timer = mmcv.Timer() - >>> time.sleep(0.5) - >>> print(timer.since_start()) - 0.500 - >>> time.sleep(0.5) - >>> print(timer.since_last_check()) - 0.500 - >>> print(timer.since_start()) - 1.000 - """ - - def __init__(self, start=True, print_tmpl=None): - self._is_running = False - self.print_tmpl = print_tmpl if print_tmpl else '{:.3f}' - if start: - self.start() - - @property - def is_running(self): - """bool: indicate whether the timer is running""" - return self._is_running - - def __enter__(self): - self.start() - return self - - def __exit__(self, type, value, traceback): - print(self.print_tmpl.format(self.since_last_check())) - self._is_running = False - - def start(self): - """Start the timer.""" - if not self._is_running: - self._t_start = time() - self._is_running = True - self._t_last = time() - - def since_start(self): - """Total time since the timer is started. - - Returns (float): Time in seconds. - """ - if not self._is_running: - raise TimerError('timer is not running') - self._t_last = time() - return self._t_last - self._t_start - - def since_last_check(self): - """Time since the last checking. - - Either :func:`since_start` or :func:`since_last_check` is a checking - operation. - - Returns (float): Time in seconds. - """ - if not self._is_running: - raise TimerError('timer is not running') - dur = time() - self._t_last - self._t_last = time() - return dur - - -_g_timers = {} # global timers - - -def check_time(timer_id): - """Add check points in a single line. - - This method is suitable for running a task on a list of items. A timer will - be registered when the method is called for the first time. - - :Example: - - >>> import time - >>> import mmcv - >>> for i in range(1, 6): - >>> # simulate a code block - >>> time.sleep(i) - >>> mmcv.check_time('task1') - 2.000 - 3.000 - 4.000 - 5.000 - - Args: - timer_id (str): Timer identifier. 
- """ - if timer_id not in _g_timers: - _g_timers[timer_id] = Timer() - return 0 - else: - return _g_timers[timer_id].since_last_check() diff --git a/cv/detection/yolof/pytorch/mmcv/utils/trace.py b/cv/detection/yolof/pytorch/mmcv/utils/trace.py deleted file mode 100644 index 8e49bfd3..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/trace.py +++ /dev/null @@ -1,23 +0,0 @@ -import warnings - -import torch - -from mmcv.utils import digit_version - - -def is_jit_tracing() -> bool: - if (torch.__version__ != 'parrots' - and digit_version(torch.__version__) >= digit_version('1.6.0')): - on_trace = torch.jit.is_tracing() - # In PyTorch 1.6, torch.jit.is_tracing has a bug. - # Refers to https://github.com/pytorch/pytorch/issues/42448 - if isinstance(on_trace, bool): - return on_trace - else: - return torch._C._is_tracing() - else: - warnings.warn( - 'torch.jit.is_tracing is only supported after v1.6.0. ' - 'Therefore is_tracing returns False automatically. Please ' - 'set on_trace manually if you are using trace.', UserWarning) - return False diff --git a/cv/detection/yolof/pytorch/mmcv/utils/version_utils.py b/cv/detection/yolof/pytorch/mmcv/utils/version_utils.py deleted file mode 100755 index 963c45a2..00000000 --- a/cv/detection/yolof/pytorch/mmcv/utils/version_utils.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import subprocess -import warnings - -from packaging.version import parse - - -def digit_version(version_str: str, length: int = 4): - """Convert a version string into a tuple of integers. - - This method is usually used for comparing two versions. For pre-release - versions: alpha < beta < rc. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. - - Returns: - tuple[int]: The version info in digits (integers). - """ - assert 'parrots' not in version_str - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - mapping = {'a': -3, 'b': -2, 'rc': -1} - val = -4 - # version.pre can be None - if version.pre: - if version.pre[0] not in mapping: - warnings.warn(f'unknown prerelease version {version.pre[0]}, ' - 'version checking may go wrong') - else: - val = mapping[version.pre[0]] - release.extend([val, version.pre[-1]]) - else: - release.extend([val, 0]) - - elif version.is_postrelease: - release.extend([1, version.post]) - else: - release.extend([0, 0]) - return tuple(release) - - -def _minimal_ext_cmd(cmd): - # construct minimal environment - env = {} - for k in ['SYSTEMROOT', 'PATH', 'HOME']: - v = os.environ.get(k) - if v is not None: - env[k] = v - # LANGUAGE is used on win32 - env['LANGUAGE'] = 'C' - env['LANG'] = 'C' - env['LC_ALL'] = 'C' - out = subprocess.Popen( - cmd, stdout=subprocess.PIPE, env=env).communicate()[0] - return out - - -def get_git_hash(fallback='unknown', digits=None): - """Get the git hash of the current repo. - - Args: - fallback (str, optional): The fallback string when git hash is - unavailable. Defaults to 'unknown'. - digits (int, optional): kept digits of the hash. Defaults to None, - meaning all digits are kept. - - Returns: - str: Git commit hash. 
- """ - - if digits is not None and not isinstance(digits, int): - raise TypeError('digits must be None or an integer') - - try: - out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD']) - sha = out.strip().decode('ascii') - if digits is not None: - sha = sha[:digits] - except OSError: - sha = fallback - - return sha diff --git a/cv/detection/yolof/pytorch/mmcv/version.py b/cv/detection/yolof/pytorch/mmcv/version.py deleted file mode 100755 index 1cce4e50..00000000 --- a/cv/detection/yolof/pytorch/mmcv/version.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -__version__ = '1.3.17' - - -def parse_version_info(version_str: str, length: int = 4) -> tuple: - """Parse a version string into a tuple. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. - - Returns: - tuple[int | str]: The version info, e.g., "1.3.0" is parsed into - (1, 3, 0, 0, 0, 0), and "2.0.0rc1" is parsed into - (2, 0, 0, 0, 'rc', 1) (when length is set to 4). - """ - from packaging.version import parse - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - release.extend(list(version.pre)) - elif version.is_postrelease: - release.extend(list(version.post)) - else: - release.extend([0, 0]) - return tuple(release) - - -version_info = tuple(int(x) for x in __version__.split('.')[:3]) - -__all__ = ['__version__', 'version_info', 'parse_version_info'] diff --git a/cv/detection/yolof/pytorch/mmdet/__init__.py b/cv/detection/yolof/pytorch/mmdet/__init__.py deleted file mode 100755 index 1f8ee169..00000000 --- a/cv/detection/yolof/pytorch/mmdet/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv - -from .version import __version__, short_version - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -mmcv_minimum_version = '1.3.17' -mmcv_maximum_version = '1.6.0' -mmcv_version = digit_version(mmcv.__version__) - - -assert (mmcv_version >= digit_version(mmcv_minimum_version) - and mmcv_version <= digit_version(mmcv_maximum_version)), \ - f'MMCV=={mmcv.__version__} is used but incompatible. ' \ - f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.' - -__all__ = ['__version__', 'short_version'] diff --git a/cv/detection/yolof/pytorch/mmdet/apis/__init__.py b/cv/detection/yolof/pytorch/mmdet/apis/__init__.py deleted file mode 100755 index b578cc95..00000000 --- a/cv/detection/yolof/pytorch/mmdet/apis/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
- -from .test import multi_gpu_test, single_gpu_test -from .train import (get_root_logger, init_random_seed, set_random_seed, - train_detector) - -__all__ = [ - 'get_root_logger', 'set_random_seed', 'train_detector', - 'multi_gpu_test', 'single_gpu_test', 'init_random_seed' -] diff --git a/cv/detection/yolof/pytorch/mmdet/apis/test.py b/cv/detection/yolof/pytorch/mmdet/apis/test.py deleted file mode 100755 index 0b21b301..00000000 --- a/cv/detection/yolof/pytorch/mmdet/apis/test.py +++ /dev/null @@ -1,209 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import pickle -import shutil -import tempfile -import time - -import mmcv -import torch -import torch.distributed as dist -from mmcv.image import tensor2imgs -from mmcv.runner import get_dist_info - -# from mmdet.core import encode_mask_results - - -def single_gpu_test(model, - data_loader, - show=False, - out_dir=None, - show_score_thr=0.3): - model.eval() - results = [] - dataset = data_loader.dataset - PALETTE = getattr(dataset, 'PALETTE', None) - prog_bar = mmcv.ProgressBar(len(dataset)) - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - - batch_size = len(result) - if show or out_dir: - if batch_size == 1 and isinstance(data['img'][0], torch.Tensor): - img_tensor = data['img'][0] - else: - img_tensor = data['img'][0].data[0] - img_metas = data['img_metas'][0].data[0] - imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg']) - assert len(imgs) == len(img_metas) - - for i, (img, img_meta) in enumerate(zip(imgs, img_metas)): - h, w, _ = img_meta['img_shape'] - img_show = img[:h, :w, :] - - ori_h, ori_w = img_meta['ori_shape'][:-1] - img_show = mmcv.imresize(img_show, (ori_w, ori_h)) - - if out_dir: - out_file = osp.join(out_dir, img_meta['ori_filename']) - else: - out_file = None - - model.module.show_result( - img_show, - result[i], - bbox_color=PALETTE, - text_color=PALETTE, - mask_color=PALETTE, - show=show, - out_file=out_file, - score_thr=show_score_thr) - - # encode mask results - if isinstance(result[0], tuple): - result = [(bbox_results, encode_mask_results(mask_results)) - for bbox_results, mask_results in result] - # This logic is only used in panoptic segmentation test. - elif isinstance(result[0], dict) and 'ins_results' in result[0]: - for j in range(len(result)): - bbox_results, mask_results = result[j]['ins_results'] - result[j]['ins_results'] = (bbox_results, - encode_mask_results(mask_results)) - - results.extend(result) - - for _ in range(batch_size): - prog_bar.update() - return results - - -def multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False): - """Test model with multiple gpus. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting 'gpu_collect=True' - it encodes results to gpu tensors and use gpu communication for results - collection. On cpu mode it saves the results on different gpus to 'tmpdir' - and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - - Returns: - list: The prediction results. 
- """ - model.eval() - results = [] - dataset = data_loader.dataset - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - time.sleep(2) # This line can prevent deadlock problem in some cases. - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - # encode mask results - if isinstance(result[0], tuple): - result = [(bbox_results, encode_mask_results(mask_results)) - for bbox_results, mask_results in result] - # This logic is only used in panoptic segmentation test. - elif isinstance(result[0], dict) and 'ins_results' in result[0]: - for j in range(len(result)): - bbox_results, mask_results = result[j]['ins_results'] - result[j]['ins_results'] = ( - bbox_results, encode_mask_results(mask_results)) - - results.extend(result) - - if rank == 0: - batch_size = len(result) - for _ in range(batch_size * world_size): - prog_bar.update() - - # collect results from all ranks - if gpu_collect: - results = collect_results_gpu(results, len(dataset)) - else: - results = collect_results_cpu(results, len(dataset), tmpdir) - return results - - -def collect_results_cpu(result_part, size, tmpdir=None): - rank, world_size = get_dist_info() - # create a tmp dir if it is not specified - if tmpdir is None: - MAX_LEN = 512 - # 32 is whitespace - dir_tensor = torch.full((MAX_LEN, ), - 32, - dtype=torch.uint8, - device='cuda') - if rank == 0: - mmcv.mkdir_or_exist('.dist_test') - tmpdir = tempfile.mkdtemp(dir='.dist_test') - tmpdir = torch.tensor( - bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda') - dir_tensor[:len(tmpdir)] = tmpdir - dist.broadcast(dir_tensor, 0) - tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip() - else: - mmcv.mkdir_or_exist(tmpdir) - # dump the part result to the dir - mmcv.dump(result_part, osp.join(tmpdir, f'part_{rank}.pkl')) - dist.barrier() - # collect all parts - if rank != 0: - return None - else: - # load results of all parts from tmp dir - part_list = [] - for i in range(world_size): - part_file = osp.join(tmpdir, f'part_{i}.pkl') - part_list.append(mmcv.load(part_file)) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - # remove tmp dir - shutil.rmtree(tmpdir) - return ordered_results - - -def collect_results_gpu(result_part, size): - rank, world_size = get_dist_info() - # dump result part to tensor with pickle - part_tensor = torch.tensor( - bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda') - # gather all result part tensor shape - shape_tensor = torch.tensor(part_tensor.shape, device='cuda') - shape_list = [shape_tensor.clone() for _ in range(world_size)] - dist.all_gather(shape_list, shape_tensor) - # padding result part tensor to max length - shape_max = torch.tensor(shape_list).max() - part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda') - part_send[:shape_tensor[0]] = part_tensor - part_recv_list = [ - part_tensor.new_zeros(shape_max) for _ in range(world_size) - ] - # gather all result part - dist.all_gather(part_recv_list, part_send) - - if rank == 0: - part_list = [] - for recv, shape in zip(part_recv_list, shape_list): - part_list.append( - pickle.loads(recv[:shape[0]].cpu().numpy().tobytes())) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results 
= ordered_results[:size] - return ordered_results diff --git a/cv/detection/yolof/pytorch/mmdet/apis/train.py b/cv/detection/yolof/pytorch/mmdet/apis/train.py deleted file mode 100755 index ca763318..00000000 --- a/cv/detection/yolof/pytorch/mmdet/apis/train.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import random - -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import (DistSamplerSeedHook, EpochBasedRunner, - Fp16OptimizerHook, OptimizerHook, build_runner, - get_dist_info) - -from mmdet.core import DistEvalHook, EvalHook, build_optimizer -from mmdet.datasets import (build_dataloader, build_dataset, - replace_ImageToTensor) -from mmdet.utils import (build_ddp, build_dp, compat_cfg, - find_latest_checkpoint, get_root_logger) - - -def init_random_seed(seed=None, device='cuda'): - """Initialize random seed. - - If the seed is not set, the seed will be automatically randomized, - and then broadcast to all processes to prevent some potential bugs. - - Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - - Returns: - int: Seed to be used. - """ - if seed is not None: - return seed - - # Make sure all ranks share the same random seed to prevent - # some potential bugs. Please refer to - # https://github.com/open-mmlab/mmdetection/issues/6339 - rank, world_size = get_dist_info() - seed = np.random.randint(2**31) - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def auto_scale_lr(cfg, distributed, logger): - """Automatically scaling LR according to GPU number and sample per GPU. - - Args: - cfg (config): Training config. - distributed (bool): Using distributed or not. - logger (logging.Logger): Logger. - """ - # Get flag from config - if ('auto_scale_lr' not in cfg) or \ - (not cfg.auto_scale_lr.get('enable', False)): - logger.info('Automatic scaling of learning rate (LR)' - ' has been disabled.') - return - - # Get base batch size from config - base_batch_size = cfg.auto_scale_lr.get('base_batch_size', None) - if base_batch_size is None: - return - - # Get gpu number - if distributed: - _, world_size = get_dist_info() - num_gpus = len(range(world_size)) - else: - num_gpus = len(cfg.gpu_ids) - - # calculate the batch size - samples_per_gpu = cfg.data.train_dataloader.samples_per_gpu - batch_size = num_gpus * samples_per_gpu - logger.info(f'Training with {num_gpus} GPU(s) with {samples_per_gpu} ' - f'samples per GPU. 
The total batch size is {batch_size}.') - - if batch_size != base_batch_size: - # scale LR with - # [linear scaling rule](https://arxiv.org/abs/1706.02677) - scaled_lr = (batch_size / base_batch_size) * cfg.optimizer.lr - logger.info('LR has been automatically scaled ' - f'from {cfg.optimizer.lr} to {scaled_lr}') - cfg.optimizer.lr = scaled_lr - else: - logger.info('The batch size match the ' - f'base batch size: {base_batch_size}, ' - f'will not scaling the LR ({cfg.optimizer.lr}).') - - -def train_detector(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - - cfg = compat_cfg(cfg) - logger = get_root_logger(log_level=cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - - runner_type = 'EpochBasedRunner' if 'runner' not in cfg else cfg.runner[ - 'type'] - - train_dataloader_default_args = dict( - samples_per_gpu=2, - workers_per_gpu=2, - # `num_gpus` will be ignored if distributed - num_gpus=len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - runner_type=runner_type, - persistent_workers=False) - - train_loader_cfg = { - **train_dataloader_default_args, - **cfg.data.get('train_dataloader', {}) - } - - data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = build_ddp( - model, - cfg.device, - device_ids=[int(os.environ['LOCAL_RANK'])], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = build_dp(model, cfg.device, device_ids=cfg.gpu_ids) - - # build optimizer - auto_scale_lr(cfg, distributed, logger) - optimizer = build_optimizer(model, cfg.optimizer) - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # an ugly workaround to make .log and .log.json filenames the same - runner.timestamp = timestamp - - # fp16 setting - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - optimizer_config = Fp16OptimizerHook( - **cfg.optimizer_config, **fp16_cfg, distributed=distributed) - elif distributed and 'type' not in cfg.optimizer_config: - optimizer_config = OptimizerHook(**cfg.optimizer_config) - else: - optimizer_config = cfg.optimizer_config - - # register hooks - runner.register_training_hooks( - cfg.lr_config, - optimizer_config, - cfg.checkpoint_config, - cfg.log_config, - cfg.get('momentum_config', None), - custom_hooks_config=cfg.get('custom_hooks', None)) - - if distributed: - if isinstance(runner, EpochBasedRunner): - runner.register_hook(DistSamplerSeedHook()) - - # register eval hooks - if validate: - val_dataloader_default_args = dict( - samples_per_gpu=1, - workers_per_gpu=2, - dist=distributed, - shuffle=False, - persistent_workers=False) - - val_dataloader_args = { - **val_dataloader_default_args, - **cfg.data.get('val_dataloader', {}) - } - # Support batch_size > 1 in validation - - if val_dataloader_args['samples_per_gpu'] > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.val.pipeline = replace_ImageToTensor( - cfg.data.val.pipeline) - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - - val_dataloader = build_dataloader(val_dataset, **val_dataloader_args) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 
'IterBasedRunner' - eval_hook = DistEvalHook if distributed else EvalHook - # In this PR (https://github.com/open-mmlab/mmcv/pull/1193), the - # priority of IterTimerHook has been modified from 'NORMAL' to 'LOW'. - runner.register_hook( - eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - resume_from = None - if cfg.resume_from is None and cfg.get('auto_resume'): - resume_from = find_latest_checkpoint(cfg.work_dir) - if resume_from is not None: - cfg.resume_from = resume_from - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) diff --git a/cv/detection/yolof/pytorch/mmdet/core/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/__init__.py deleted file mode 100755 index 12aa4232..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. - -from .anchor import * # noqa: F401, F403 -from .bbox import * # noqa: F401, F403 -from .evaluation import * # noqa: F401, F403 -# from .mask import * # noqa: F401, F403 -from .optimizers import * # noqa: F401, F403 -from .post_processing import * # noqa: F401, F403 -from .utils import * # noqa: F401, F403 diff --git a/cv/detection/yolof/pytorch/mmdet/core/anchor/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/anchor/__init__.py deleted file mode 100755 index 63730ca8..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/anchor/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# # Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. - -from .anchor_generator import AnchorGenerator -from .builder import PRIOR_GENERATORS, build_prior_generator -from .utils import anchor_inside_flags, images_to_levels - -__all__ = [ - 'AnchorGenerator', 'PRIOR_GENERATORS', 'anchor_inside_flags', 'build_prior_generator', - 'images_to_levels' -] \ No newline at end of file diff --git a/cv/detection/yolof/pytorch/mmdet/core/anchor/anchor_generator.py b/cv/detection/yolof/pytorch/mmdet/core/anchor/anchor_generator.py deleted file mode 100755 index 20886fbd..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/anchor/anchor_generator.py +++ /dev/null @@ -1,866 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -import numpy as np -import torch -from torch.nn.modules.utils import _pair - -from .builder import PRIOR_GENERATORS - - -@PRIOR_GENERATORS.register_module() -class AnchorGenerator: - """Standard anchor generator for 2D anchor-based detectors. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels in order (w, h). - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int] | None): The basic sizes - of anchors in multiple levels. - If None is given, strides will be used as base_sizes. - (If strides are non square, the shortest stride is taken.) - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. 
- scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. If a list of tuple of - float is given, they will be used to shift the centers of anchors. - center_offset (float): The offset of center in proportion to anchors' - width and height. By default it is 0 in V2.0. - - Examples: - >>> from mmdet.core import AnchorGenerator - >>> self = AnchorGenerator([16], [1.], [1.], [9]) - >>> all_anchors = self.grid_priors([(2, 2)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]])] - >>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18]) - >>> all_anchors = self.grid_priors([(2, 2), (1, 1)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]]), \ - tensor([[-9., -9., 9., 9.]])] - """ - - def __init__(self, - strides, - ratios, - scales=None, - base_sizes=None, - scale_major=True, - octave_base_scale=None, - scales_per_octave=None, - centers=None, - center_offset=0.): - # check center and center_offset - if center_offset != 0: - assert centers is None, 'center cannot be set when center_offset' \ - f'!=0, {centers} is given.' - if not (0 <= center_offset <= 1): - raise ValueError('center_offset should be in range [0, 1], ' - f'{center_offset} is given.') - if centers is not None: - assert len(centers) == len(strides), \ - 'The number of strides should be the same as centers, got ' \ - f'{strides} and {centers}' - - # calculate base sizes of anchors - self.strides = [_pair(stride) for stride in strides] - self.base_sizes = [min(stride) for stride in self.strides - ] if base_sizes is None else base_sizes - assert len(self.base_sizes) == len(self.strides), \ - 'The number of strides should be the same as base sizes, got ' \ - f'{self.strides} and {self.base_sizes}' - - # calculate scales of anchors - assert ((octave_base_scale is not None - and scales_per_octave is not None) ^ (scales is not None)), \ - 'scales and octave_base_scale with scales_per_octave cannot' \ - ' be set at the same time' - if scales is not None: - self.scales = torch.Tensor(scales) - elif octave_base_scale is not None and scales_per_octave is not None: - octave_scales = np.array( - [2**(i / scales_per_octave) for i in range(scales_per_octave)]) - scales = octave_scales * octave_base_scale - self.scales = torch.Tensor(scales) - else: - raise ValueError('Either scales or octave_base_scale with ' - 'scales_per_octave should be set') - - self.octave_base_scale = octave_base_scale - self.scales_per_octave = scales_per_octave - self.ratios = torch.Tensor(ratios) - self.scale_major = scale_major - self.centers = centers - self.center_offset = center_offset - self.base_anchors = self.gen_base_anchors() - - @property - def num_base_anchors(self): - """list[int]: total number of base anchors in a feature grid""" - return self.num_base_priors - - @property - def num_base_priors(self): - """list[int]: The number of priors (anchors) at a point - on the feature grid""" - return [base_anchors.size(0) for base_anchors in 
self.base_anchors] - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.strides) - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. - """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors( - base_size, - scales=self.scales, - ratios=self.ratios, - center=center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, - base_size, - scales, - ratios, - center=None): - """Generate base anchors of a single level. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between between the height - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. - """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * w - y_center = self.center_offset * h - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * ws, y_center - 0.5 * hs, x_center + 0.5 * ws, - y_center + 0.5 * hs - ] - base_anchors = torch.stack(base_anchors, dim=-1) - - return base_anchors - - def _meshgrid(self, x, y, row_major=True): - """Generate mesh grid of x and y. - - Args: - x (torch.Tensor): Grids of x dimension. - y (torch.Tensor): Grids of y dimension. - row_major (bool, optional): Whether to return y grids first. - Defaults to True. - - Returns: - tuple[torch.Tensor]: The mesh grids of x and y. - """ - # use shape instead of len to keep tracing while exporting to onnx - xx = x.repeat(y.shape[0]) - yy = y.view(-1, 1).repeat(1, x.shape[0]).view(-1) - if row_major: - return xx, yy - else: - return yy, xx - - def grid_priors(self, featmap_sizes, dtype=torch.float32, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - dtype (:obj:`torch.dtype`): Dtype of priors. - Default: torch.float32. - device (str): The device where the anchors will be put on. - - Return: - list[torch.Tensor]: Anchors in multiple feature levels. \ - The sizes of each tensor should be [N, 4], where \ - N = width * height * num_base_anchors, width and height \ - are the sizes of the corresponding feature level, \ - num_base_anchors is the number of anchors for that level. 
- """ - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_priors( - featmap_sizes[i], level_idx=i, dtype=dtype, device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_priors(self, - featmap_size, - level_idx, - dtype=torch.float32, - device='cuda'): - """Generate grid anchors of a single level. - - Note: - This function is usually called by method ``self.grid_priors``. - - Args: - featmap_size (tuple[int]): Size of the feature maps. - level_idx (int): The index of corresponding feature map level. - dtype (obj:`torch.dtype`): Date type of points.Defaults to - ``torch.float32``. - device (str, optional): The device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature maps. - """ - - base_anchors = self.base_anchors[level_idx].to(device).to(dtype) - feat_h, feat_w = featmap_size - stride_w, stride_h = self.strides[level_idx] - # First create Range with the default dtype, than convert to - # target `dtype` for onnx exporting. - shift_x = torch.arange(0, feat_w, device=device).to(dtype) * stride_w - shift_y = torch.arange(0, feat_h, device=device).to(dtype) * stride_h - - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1) - # first feat_w elements correspond to the first row of shifts - # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get - # shifted anchors (K, A, 4), reshape to (K*A, 4) - - all_anchors = base_anchors[None, :, :] + shifts[:, None, :] - all_anchors = all_anchors.view(-1, 4) - # first A rows correspond to A anchors of (0, 0) in feature map, - # then (0, 1), (0, 2), ... - return all_anchors - - def sparse_priors(self, - prior_idxs, - featmap_size, - level_idx, - dtype=torch.float32, - device='cuda'): - """Generate sparse anchors according to the ``prior_idxs``. - - Args: - prior_idxs (Tensor): The index of corresponding anchors - in the feature map. - featmap_size (tuple[int]): feature map size arrange as (h, w). - level_idx (int): The level index of corresponding feature - map. - dtype (obj:`torch.dtype`): Date type of points.Defaults to - ``torch.float32``. - device (obj:`torch.device`): The device where the points is - located. - Returns: - Tensor: Anchor with shape (N, 4), N should be equal to - the length of ``prior_idxs``. - """ - - height, width = featmap_size - num_base_anchors = self.num_base_anchors[level_idx] - base_anchor_id = prior_idxs % num_base_anchors - x = (prior_idxs // - num_base_anchors) % width * self.strides[level_idx][0] - y = (prior_idxs // width // - num_base_anchors) % height * self.strides[level_idx][1] - priors = torch.stack([x, y, x, y], 1).to(dtype).to(device) + \ - self.base_anchors[level_idx][base_anchor_id, :].to(device) - - return priors - - def grid_anchors(self, featmap_sizes, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - device (str): Device where the anchors will be put on. - - Return: - list[torch.Tensor]: Anchors in multiple feature levels. \ - The sizes of each tensor should be [N, 4], where \ - N = width * height * num_base_anchors, width and height \ - are the sizes of the corresponding feature level, \ - num_base_anchors is the number of anchors for that level. - """ - warnings.warn('``grid_anchors`` would be deprecated soon. 
' - 'Please use ``grid_priors`` ') - - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_anchors( - self.base_anchors[i].to(device), - featmap_sizes[i], - self.strides[i], - device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_anchors(self, - base_anchors, - featmap_size, - stride=(16, 16), - device='cuda'): - """Generate grid anchors of a single level. - - Note: - This function is usually called by method ``self.grid_anchors``. - - Args: - base_anchors (torch.Tensor): The base anchors of a feature grid. - featmap_size (tuple[int]): Size of the feature maps. - stride (tuple[int], optional): Stride of the feature map in order - (w, h). Defaults to (16, 16). - device (str, optional): Device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature maps. - """ - - warnings.warn( - '``single_level_grid_anchors`` would be deprecated soon. ' - 'Please use ``single_level_grid_priors`` ') - - # keep featmap_size as Tensor instead of int, so that we - # can convert to ONNX correctly - feat_h, feat_w = featmap_size - shift_x = torch.arange(0, feat_w, device=device) * stride[0] - shift_y = torch.arange(0, feat_h, device=device) * stride[1] - - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1) - shifts = shifts.type_as(base_anchors) - # first feat_w elements correspond to the first row of shifts - # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get - # shifted anchors (K, A, 4), reshape to (K*A, 4) - - all_anchors = base_anchors[None, :, :] + shifts[:, None, :] - all_anchors = all_anchors.view(-1, 4) - # first A rows correspond to A anchors of (0, 0) in feature map, - # then (0, 1), (0, 2), ... - return all_anchors - - def valid_flags(self, featmap_sizes, pad_shape, device='cuda'): - """Generate valid flags of anchors in multiple feature levels. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in - multiple feature levels. - pad_shape (tuple): The padded shape of the image. - device (str): Device where the anchors will be put on. - - Return: - list(torch.Tensor): Valid flags of anchors in multiple levels. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - feat_h, feat_w = featmap_sizes[i] - h, w = pad_shape[:2] - valid_feat_h = min(int(np.ceil(h / anchor_stride[1])), feat_h) - valid_feat_w = min(int(np.ceil(w / anchor_stride[0])), feat_w) - flags = self.single_level_valid_flags((feat_h, feat_w), - (valid_feat_h, valid_feat_w), - self.num_base_anchors[i], - device=device) - multi_level_flags.append(flags) - return multi_level_flags - - def single_level_valid_flags(self, - featmap_size, - valid_size, - num_base_anchors, - device='cuda'): - """Generate the valid flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps, arrange - as (h, w). - valid_size (tuple[int]): The valid size of the feature maps. - num_base_anchors (int): The number of base anchors. - device (str, optional): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. 
- """ - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - valid = valid[:, None].expand(valid.size(0), - num_base_anchors).contiguous().view(-1) - return valid - - def __repr__(self): - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}octave_base_scale=' - repr_str += f'{self.octave_base_scale},\n' - repr_str += f'{indent_str}scales_per_octave=' - repr_str += f'{self.scales_per_octave},\n' - repr_str += f'{indent_str}num_levels={self.num_levels}\n' - repr_str += f'{indent_str}centers={self.centers},\n' - repr_str += f'{indent_str}center_offset={self.center_offset})' - return repr_str - - -@PRIOR_GENERATORS.register_module() -class SSDAnchorGenerator(AnchorGenerator): - """Anchor generator for SSD. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - min_sizes (list[float]): The list of minimum anchor sizes on each - level. - max_sizes (list[float]): The list of maximum anchor sizes on each - level. - basesize_ratio_range (tuple(float)): Ratio range of anchors. Being - used when not setting min_sizes and max_sizes. - input_size (int): Size of feature map, 300 for SSD300, 512 for - SSD512. Being used when not setting min_sizes and max_sizes. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. It is always set to be False in SSD. - """ - - def __init__(self, - strides, - ratios, - min_sizes=None, - max_sizes=None, - basesize_ratio_range=(0.15, 0.9), - input_size=300, - scale_major=True): - assert len(strides) == len(ratios) - assert not (min_sizes is None) ^ (max_sizes is None) - self.strides = [_pair(stride) for stride in strides] - self.centers = [(stride[0] / 2., stride[1] / 2.) 
- for stride in self.strides] - - if min_sizes is None and max_sizes is None: - # use hard code to generate SSD anchors - self.input_size = input_size - assert mmcv.is_tuple_of(basesize_ratio_range, float) - self.basesize_ratio_range = basesize_ratio_range - # calculate anchor ratios and sizes - min_ratio, max_ratio = basesize_ratio_range - min_ratio = int(min_ratio * 100) - max_ratio = int(max_ratio * 100) - step = int(np.floor(max_ratio - min_ratio) / (self.num_levels - 2)) - min_sizes = [] - max_sizes = [] - for ratio in range(int(min_ratio), int(max_ratio) + 1, step): - min_sizes.append(int(self.input_size * ratio / 100)) - max_sizes.append(int(self.input_size * (ratio + step) / 100)) - if self.input_size == 300: - if basesize_ratio_range[0] == 0.15: # SSD300 COCO - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - elif basesize_ratio_range[0] == 0.2: # SSD300 VOC - min_sizes.insert(0, int(self.input_size * 10 / 100)) - max_sizes.insert(0, int(self.input_size * 20 / 100)) - else: - raise ValueError( - 'basesize_ratio_range[0] should be either 0.15' - 'or 0.2 when input_size is 300, got ' - f'{basesize_ratio_range[0]}.') - elif self.input_size == 512: - if basesize_ratio_range[0] == 0.1: # SSD512 COCO - min_sizes.insert(0, int(self.input_size * 4 / 100)) - max_sizes.insert(0, int(self.input_size * 10 / 100)) - elif basesize_ratio_range[0] == 0.15: # SSD512 VOC - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - else: - raise ValueError( - 'When not setting min_sizes and max_sizes,' - 'basesize_ratio_range[0] should be either 0.1' - 'or 0.15 when input_size is 512, got' - f' {basesize_ratio_range[0]}.') - else: - raise ValueError( - 'Only support 300 or 512 in SSDAnchorGenerator when ' - 'not setting min_sizes and max_sizes, ' - f'got {self.input_size}.') - - assert len(min_sizes) == len(max_sizes) == len(strides) - - anchor_ratios = [] - anchor_scales = [] - for k in range(len(self.strides)): - scales = [1., np.sqrt(max_sizes[k] / min_sizes[k])] - anchor_ratio = [1.] - for r in ratios[k]: - anchor_ratio += [1 / r, r] # 4 or 6 ratio - anchor_ratios.append(torch.Tensor(anchor_ratio)) - anchor_scales.append(torch.Tensor(scales)) - - self.base_sizes = min_sizes - self.scales = anchor_scales - self.ratios = anchor_ratios - self.scale_major = scale_major - self.center_offset = 0 - self.base_anchors = self.gen_base_anchors() - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. 
- """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - base_anchors = self.gen_single_level_base_anchors( - base_size, - scales=self.scales[i], - ratios=self.ratios[i], - center=self.centers[i]) - indices = list(range(len(self.ratios[i]))) - indices.insert(1, len(indices)) - base_anchors = torch.index_select(base_anchors, 0, - torch.LongTensor(indices)) - multi_level_base_anchors.append(base_anchors) - return multi_level_base_anchors - - def __repr__(self): - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}input_size={self.input_size},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}num_levels={self.num_levels},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}basesize_ratio_range=' - repr_str += f'{self.basesize_ratio_range})' - return repr_str - - -@PRIOR_GENERATORS.register_module() -class LegacyAnchorGenerator(AnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - Note: - Difference to the V2.0 anchor generator: - - 1. The center offset of V1.x anchors are set to be 0.5 rather than 0. - 2. The width/height are minused by 1 when calculating the anchors' \ - centers and corners to meet the V1.x coordinate system. - 3. The anchors' corners are quantized. - - Args: - strides (list[int] | list[tuple[int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int]): The basic sizes of anchors in multiple levels. - If None is given, strides will be used to generate base_sizes. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. - scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. It a list of float - is given, this list will be used to shift the centers of anchors. - center_offset (float): The offset of center in proportion to anchors' - width and height. By default it is 0.5 in V2.0 but it should be 0.5 - in v1.x models. - - Examples: - >>> from mmdet.core import LegacyAnchorGenerator - >>> self = LegacyAnchorGenerator( - >>> [16], [1.], [1.], [9], center_offset=0.5) - >>> all_anchors = self.grid_anchors(((2, 2),), device='cpu') - >>> print(all_anchors) - [tensor([[ 0., 0., 8., 8.], - [16., 0., 24., 8.], - [ 0., 16., 8., 24.], - [16., 16., 24., 24.]])] - """ - - def gen_single_level_base_anchors(self, - base_size, - scales, - ratios, - center=None): - """Generate base anchors of a single level. 
- - Note: - The width/height of anchors are minused by 1 when calculating \ - the centers and corners to meet the V1.x coordinate system. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between between the height. - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature map. - """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * (w - 1) - y_center = self.center_offset * (h - 1) - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * (ws - 1), y_center - 0.5 * (hs - 1), - x_center + 0.5 * (ws - 1), y_center + 0.5 * (hs - 1) - ] - base_anchors = torch.stack(base_anchors, dim=-1).round() - - return base_anchors - - -@PRIOR_GENERATORS.register_module() -class LegacySSDAnchorGenerator(SSDAnchorGenerator, LegacyAnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - The difference between `LegacySSDAnchorGenerator` and `SSDAnchorGenerator` - can be found in `LegacyAnchorGenerator`. - """ - - def __init__(self, - strides, - ratios, - basesize_ratio_range, - input_size=300, - scale_major=True): - super(LegacySSDAnchorGenerator, self).__init__( - strides=strides, - ratios=ratios, - basesize_ratio_range=basesize_ratio_range, - input_size=input_size, - scale_major=scale_major) - self.centers = [((stride - 1) / 2., (stride - 1) / 2.) - for stride in strides] - self.base_anchors = self.gen_base_anchors() - - -@PRIOR_GENERATORS.register_module() -class YOLOAnchorGenerator(AnchorGenerator): - """Anchor generator for YOLO. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - base_sizes (list[list[tuple[int, int]]]): The basic sizes - of anchors in multiple levels. - """ - - def __init__(self, strides, base_sizes): - self.strides = [_pair(stride) for stride in strides] - self.centers = [(stride[0] / 2., stride[1] / 2.) - for stride in self.strides] - self.base_sizes = [] - num_anchor_per_level = len(base_sizes[0]) - for base_sizes_per_level in base_sizes: - assert num_anchor_per_level == len(base_sizes_per_level) - self.base_sizes.append( - [_pair(base_size) for base_size in base_sizes_per_level]) - self.base_anchors = self.gen_base_anchors() - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.base_sizes) - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. 
- """ - multi_level_base_anchors = [] - for i, base_sizes_per_level in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors(base_sizes_per_level, - center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, base_sizes_per_level, center=None): - """Generate base anchors of a single level. - - Args: - base_sizes_per_level (list[tuple[int, int]]): Basic sizes of - anchors. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. - """ - x_center, y_center = center - base_anchors = [] - for base_size in base_sizes_per_level: - w, h = base_size - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchor = torch.Tensor([ - x_center - 0.5 * w, y_center - 0.5 * h, x_center + 0.5 * w, - y_center + 0.5 * h - ]) - base_anchors.append(base_anchor) - base_anchors = torch.stack(base_anchors, dim=0) - - return base_anchors - - def responsible_flags(self, featmap_sizes, gt_bboxes, device='cuda'): - """Generate responsible anchor flags of grid cells in multiple scales. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in multiple - feature levels. - gt_bboxes (Tensor): Ground truth boxes, shape (n, 4). - device (str): Device where the anchors will be put on. - - Return: - list(torch.Tensor): responsible flags of anchors in multiple level - """ - assert self.num_levels == len(featmap_sizes) - multi_level_responsible_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - flags = self.single_level_responsible_flags( - featmap_sizes[i], - gt_bboxes, - anchor_stride, - self.num_base_anchors[i], - device=device) - multi_level_responsible_flags.append(flags) - return multi_level_responsible_flags - - def single_level_responsible_flags(self, - featmap_size, - gt_bboxes, - stride, - num_base_anchors, - device='cuda'): - """Generate the responsible flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps. - gt_bboxes (Tensor): Ground truth boxes, shape (n, 4). - stride (tuple(int)): stride of current level - num_base_anchors (int): The number of base anchors. - device (str, optional): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. - """ - feat_h, feat_w = featmap_size - gt_bboxes_cx = ((gt_bboxes[:, 0] + gt_bboxes[:, 2]) * 0.5).to(device) - gt_bboxes_cy = ((gt_bboxes[:, 1] + gt_bboxes[:, 3]) * 0.5).to(device) - gt_bboxes_grid_x = torch.floor(gt_bboxes_cx / stride[0]).long() - gt_bboxes_grid_y = torch.floor(gt_bboxes_cy / stride[1]).long() - - # row major indexing - gt_bboxes_grid_idx = gt_bboxes_grid_y * feat_w + gt_bboxes_grid_x - - responsible_grid = torch.zeros( - feat_h * feat_w, dtype=torch.uint8, device=device) - responsible_grid[gt_bboxes_grid_idx] = 1 - - responsible_grid = responsible_grid[:, None].expand( - responsible_grid.size(0), num_base_anchors).contiguous().view(-1) - return responsible_grid diff --git a/cv/detection/yolof/pytorch/mmdet/core/anchor/builder.py b/cv/detection/yolof/pytorch/mmdet/core/anchor/builder.py deleted file mode 100755 index ddb25ad3..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/anchor/builder.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import warnings - -from mmcv.utils import Registry, build_from_cfg - -PRIOR_GENERATORS = Registry('Generator for anchors and points') - -ANCHOR_GENERATORS = PRIOR_GENERATORS - - -def build_prior_generator(cfg, default_args=None): - return build_from_cfg(cfg, PRIOR_GENERATORS, default_args) - - -def build_anchor_generator(cfg, default_args=None): - warnings.warn( - '``build_anchor_generator`` would be deprecated soon, please use ' - '``build_prior_generator`` ') - return build_prior_generator(cfg, default_args=default_args) diff --git a/cv/detection/yolof/pytorch/mmdet/core/anchor/utils.py b/cv/detection/yolof/pytorch/mmdet/core/anchor/utils.py deleted file mode 100755 index c2f20247..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/anchor/utils.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def images_to_levels(target, num_levels): - """Convert targets by image to targets by feature level. - - [target_img0, target_img1] -> [target_level0, target_level1, ...] - """ - target = torch.stack(target, 0) - level_targets = [] - start = 0 - for n in num_levels: - end = start + n - # level_targets.append(target[:, start:end].squeeze(0)) - level_targets.append(target[:, start:end]) - start = end - return level_targets - - -def anchor_inside_flags(flat_anchors, - valid_flags, - img_shape, - allowed_border=0): - """Check whether the anchors are inside the border. - - Args: - flat_anchors (torch.Tensor): Flatten anchors, shape (n, 4). - valid_flags (torch.Tensor): An existing valid flags of anchors. - img_shape (tuple(int)): Shape of current image. - allowed_border (int, optional): The border to allow the valid anchor. - Defaults to 0. - - Returns: - torch.Tensor: Flags indicating whether the anchors are inside a \ - valid range. - """ - img_h, img_w = img_shape[:2] - if allowed_border >= 0: - inside_flags = valid_flags & \ - (flat_anchors[:, 0] >= -allowed_border) & \ - (flat_anchors[:, 1] >= -allowed_border) & \ - (flat_anchors[:, 2] < img_w + allowed_border) & \ - (flat_anchors[:, 3] < img_h + allowed_border) - else: - inside_flags = valid_flags - return inside_flags - - -def calc_region(bbox, ratio, featmap_size=None): - """Calculate a proportional bbox region. - - The bbox center are fixed and the new h' and w' is h * ratio and w * ratio. - - Args: - bbox (Tensor): Bboxes to calculate regions, shape (n, 4). - ratio (float): Ratio of the output region. - featmap_size (tuple): Feature map size used for clipping the boundary. - - Returns: - tuple: x1, y1, x2, y2 - """ - x1 = torch.round((1 - ratio) * bbox[0] + ratio * bbox[2]).long() - y1 = torch.round((1 - ratio) * bbox[1] + ratio * bbox[3]).long() - x2 = torch.round(ratio * bbox[0] + (1 - ratio) * bbox[2]).long() - y2 = torch.round(ratio * bbox[1] + (1 - ratio) * bbox[3]).long() - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/__init__.py deleted file mode 100755 index 7de18e23..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .builder import build_assigner, build_bbox_coder, build_sampler -from .iou_calculators import BboxOverlaps2D, bbox_overlaps -from .transforms import (bbox2distance, bbox2result, bbox2roi, - bbox_cxcywh_to_xyxy, bbox_flip, bbox_mapping, - bbox_mapping_back, bbox_rescale, bbox_xyxy_to_cxcywh, - distance2bbox, find_inside_bboxes, roi2bbox) - -from .assigners import AssignResult, BaseAssigner -from .coder import BaseBBoxCoder, DeltaXYWHBBoxCoder -from .samplers import PseudoSampler - -__all__ = [ - 'bbox_overlaps', 'BboxOverlaps2D', 'BaseAssigner', - 'AssignResult', 'PseudoSampler', - 'build_assigner', - 'build_sampler', 'bbox_flip', 'bbox_mapping', 'bbox_mapping_back', - 'bbox2roi', 'roi2bbox', 'bbox2result', 'distance2bbox', 'bbox2distance', - 'build_bbox_coder', 'BaseBBoxCoder', - 'DeltaXYWHBBoxCoder', - 'bbox_rescale', 'bbox_cxcywh_to_xyxy', - 'bbox_xyxy_to_cxcywh', 'find_inside_bboxes' -] - diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/__init__.py deleted file mode 100755 index e9e8a8ad..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .assign_result import AssignResult -from .base_assigner import BaseAssigner -from .uniform_assigner import UniformAssigner - -__all__ = [ - 'BaseAssigner', 'AssignResult', - 'UniformAssigner' -] diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/assign_result.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/assign_result.py deleted file mode 100755 index 488010b5..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/assign_result.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.utils import util_mixins - - -class AssignResult(util_mixins.NiceRepr): - """Stores assignments between predicted and truth boxes. - - Attributes: - num_gts (int): the number of truth boxes considered when computing this - assignment - - gt_inds (LongTensor): for each predicted box indicates the 1-based - index of the assigned truth box. 0 means unassigned and -1 means - ignore. - - max_overlaps (FloatTensor): the iou between the predicted box and its - assigned truth box. - - labels (None | LongTensor): If specified, for each predicted box - indicates the category label of the assigned truth box. - - Example: - >>> # An assign result between 4 predicted boxes and 9 true boxes - >>> # where only two boxes were assigned. 
- >>> num_gts = 9 - >>> max_overlaps = torch.LongTensor([0, .5, .9, 0]) - >>> gt_inds = torch.LongTensor([-1, 1, 2, 0]) - >>> labels = torch.LongTensor([0, 3, 4, 0]) - >>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - >>> # Force addition of gt labels (when adding gt as proposals) - >>> new_labels = torch.LongTensor([3, 4, 5]) - >>> self.add_gt_(new_labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - """ - - def __init__(self, num_gts, gt_inds, max_overlaps, labels=None): - self.num_gts = num_gts - self.gt_inds = gt_inds - self.max_overlaps = max_overlaps - self.labels = labels - # Interface for possible user-defined properties - self._extra_properties = {} - - @property - def num_preds(self): - """int: the number of predictions in this assignment""" - return len(self.gt_inds) - - def set_extra_property(self, key, value): - """Set user-defined new property.""" - assert key not in self.info - self._extra_properties[key] = value - - def get_extra_property(self, key): - """Get user-defined property.""" - return self._extra_properties.get(key, None) - - @property - def info(self): - """dict: a dictionary of info about the object""" - basic_info = { - 'num_gts': self.num_gts, - 'num_preds': self.num_preds, - 'gt_inds': self.gt_inds, - 'max_overlaps': self.max_overlaps, - 'labels': self.labels, - } - basic_info.update(self._extra_properties) - return basic_info - - def __nice__(self): - """str: a "nice" summary string describing this assign result""" - parts = [] - parts.append(f'num_gts={self.num_gts!r}') - if self.gt_inds is None: - parts.append(f'gt_inds={self.gt_inds!r}') - else: - parts.append(f'gt_inds.shape={tuple(self.gt_inds.shape)!r}') - if self.max_overlaps is None: - parts.append(f'max_overlaps={self.max_overlaps!r}') - else: - parts.append('max_overlaps.shape=' - f'{tuple(self.max_overlaps.shape)!r}') - if self.labels is None: - parts.append(f'labels={self.labels!r}') - else: - parts.append(f'labels.shape={tuple(self.labels.shape)!r}') - return ', '.join(parts) - - @classmethod - def random(cls, **kwargs): - """Create random AssignResult for tests or debugging. - - Args: - num_preds: number of predicted boxes - num_gts: number of true boxes - p_ignore (float): probability of a predicted box assigned to an - ignored truth - p_assigned (float): probability of a predicted box not being - assigned - p_use_label (float | bool): with labels or not - rng (None | int | numpy.random.RandomState): seed or state - - Returns: - :obj:`AssignResult`: Randomly generated assign results. 
- - Example: - >>> from mmdet.core.bbox.assigners.assign_result import * # NOQA - >>> self = AssignResult.random() - >>> print(self.info) - """ - from mmdet.core.bbox import demodata - rng = demodata.ensure_rng(kwargs.get('rng', None)) - - num_gts = kwargs.get('num_gts', None) - num_preds = kwargs.get('num_preds', None) - p_ignore = kwargs.get('p_ignore', 0.3) - p_assigned = kwargs.get('p_assigned', 0.7) - p_use_label = kwargs.get('p_use_label', 0.5) - num_classes = kwargs.get('p_use_label', 3) - - if num_gts is None: - num_gts = rng.randint(0, 8) - if num_preds is None: - num_preds = rng.randint(0, 16) - - if num_gts == 0: - max_overlaps = torch.zeros(num_preds, dtype=torch.float32) - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - if p_use_label is True or p_use_label < rng.rand(): - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = None - else: - import numpy as np - - # Create an overlap for each predicted box - max_overlaps = torch.from_numpy(rng.rand(num_preds)) - - # Construct gt_inds for each predicted box - is_assigned = torch.from_numpy(rng.rand(num_preds) < p_assigned) - # maximum number of assignments constraints - n_assigned = min(num_preds, min(num_gts, is_assigned.sum())) - - assigned_idxs = np.where(is_assigned)[0] - rng.shuffle(assigned_idxs) - assigned_idxs = assigned_idxs[0:n_assigned] - assigned_idxs.sort() - - is_assigned[:] = 0 - is_assigned[assigned_idxs] = True - - is_ignore = torch.from_numpy( - rng.rand(num_preds) < p_ignore) & is_assigned - - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - - true_idxs = np.arange(num_gts) - rng.shuffle(true_idxs) - true_idxs = torch.from_numpy(true_idxs) - gt_inds[is_assigned] = true_idxs[:n_assigned].long() - - gt_inds = torch.from_numpy( - rng.randint(1, num_gts + 1, size=num_preds)) - gt_inds[is_ignore] = -1 - gt_inds[~is_assigned] = 0 - max_overlaps[~is_assigned] = 0 - - if p_use_label is True or p_use_label < rng.rand(): - if num_classes == 0: - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = torch.from_numpy( - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - rng.randint(0, num_classes, size=num_preds)) - labels[~is_assigned] = 0 - else: - labels = None - - self = cls(num_gts, gt_inds, max_overlaps, labels) - return self - - def add_gt_(self, gt_labels): - """Add ground truth as assigned results. - - Args: - gt_labels (torch.Tensor): Labels of gt boxes - """ - self_inds = torch.arange( - 1, len(gt_labels) + 1, dtype=torch.long, device=gt_labels.device) - self.gt_inds = torch.cat([self_inds, self.gt_inds]) - - self.max_overlaps = torch.cat( - [self.max_overlaps.new_ones(len(gt_labels)), self.max_overlaps]) - - if self.labels is not None: - self.labels = torch.cat([gt_labels, self.labels]) diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/base_assigner.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/base_assigner.py deleted file mode 100755 index 3c2d597a..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/base_assigner.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from abc import ABCMeta, abstractmethod - - -class BaseAssigner(metaclass=ABCMeta): - """Base assigner that assigns boxes to ground truth boxes.""" - - @abstractmethod - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign boxes to either a ground truth boxes or a negative boxes.""" diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/uniform_assigner.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/uniform_assigner.py deleted file mode 100755 index 70294fc4..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/assigners/uniform_assigner.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from ..transforms import bbox_xyxy_to_cxcywh -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class UniformAssigner(BaseAssigner): - """Uniform Matching between the anchors and gt boxes, which can achieve - balance in positive anchors, and gt_bboxes_ignore was not considered for - now. - - Args: - pos_ignore_thr (float): the threshold to ignore positive anchors - neg_ignore_thr (float): the threshold to ignore negative anchors - match_times(int): Number of positive anchors for each gt box. - Default 4. - iou_calculator (dict): iou_calculator config - """ - - def __init__(self, - pos_ignore_thr, - neg_ignore_thr, - match_times=4, - iou_calculator=dict(type='BboxOverlaps2D')): - self.match_times = match_times - self.pos_ignore_thr = pos_ignore_thr - self.neg_ignore_thr = neg_ignore_thr - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, - bbox_pred, - anchor, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0) - - # 1. assign -1 by default - assigned_gt_inds = bbox_pred.new_full((num_bboxes, ), - 0, - dtype=torch.long) - assigned_labels = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - if num_gts == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - assign_result = AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - assign_result.set_extra_property( - 'pos_idx', bbox_pred.new_empty(0, dtype=torch.bool)) - assign_result.set_extra_property('pos_predicted_boxes', - bbox_pred.new_empty((0, 4))) - assign_result.set_extra_property('target_boxes', - bbox_pred.new_empty((0, 4))) - return assign_result - - # 2. Compute the L1 cost between boxes - # Note that we use anchors and predict boxes both - cost_bbox = torch.cdist( - bbox_xyxy_to_cxcywh(bbox_pred), - bbox_xyxy_to_cxcywh(gt_bboxes), - p=1) - cost_bbox_anchors = torch.cdist( - bbox_xyxy_to_cxcywh(anchor), bbox_xyxy_to_cxcywh(gt_bboxes), p=1) - - # We found that topk function has different results in cpu and - # cuda mode. In order to ensure consistency with the source code, - # we also use cpu mode. - # TODO: Check whether the performance of cpu and cuda are the same. 
- C = cost_bbox.cpu() - C1 = cost_bbox_anchors.cpu() - - # self.match_times x n - index = torch.topk( - C, # c=b,n,x c[i]=n,x - k=self.match_times, - dim=0, - largest=False)[1] - - # self.match_times x n - index1 = torch.topk(C1, k=self.match_times, dim=0, largest=False)[1] - # (self.match_times*2) x n - indexes = torch.cat((index, index1), - dim=1).reshape(-1).to(bbox_pred.device) - - pred_overlaps = self.iou_calculator(bbox_pred, gt_bboxes) - anchor_overlaps = self.iou_calculator(anchor, gt_bboxes) - pred_max_overlaps, _ = pred_overlaps.max(dim=1) - anchor_max_overlaps, _ = anchor_overlaps.max(dim=0) - - # 3. Compute the ignore indexes use gt_bboxes and predict boxes - ignore_idx = pred_max_overlaps > self.neg_ignore_thr - assigned_gt_inds[ignore_idx] = -1 - - # 4. Compute the ignore indexes of positive sample use anchors - # and predict boxes - pos_gt_index = torch.arange( - 0, C1.size(1), - device=bbox_pred.device).repeat(self.match_times * 2) - pos_ious = anchor_overlaps[indexes, pos_gt_index] - pos_ignore_idx = pos_ious < self.pos_ignore_thr - - pos_gt_index_with_ignore = pos_gt_index + 1 - pos_gt_index_with_ignore[pos_ignore_idx] = -1 - assigned_gt_inds[indexes] = pos_gt_index_with_ignore - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - assign_result = AssignResult( - num_gts, - assigned_gt_inds, - anchor_max_overlaps, - labels=assigned_labels) - assign_result.set_extra_property('pos_idx', ~pos_ignore_idx) - assign_result.set_extra_property('pos_predicted_boxes', - bbox_pred[indexes]) - assign_result.set_extra_property('target_boxes', - gt_bboxes[pos_gt_index]) - return assign_result diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/builder.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/builder.py deleted file mode 100755 index 9cfa055b..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/builder.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry, build_from_cfg - -BBOX_ASSIGNERS = Registry('bbox_assigner') -BBOX_SAMPLERS = Registry('bbox_sampler') -BBOX_CODERS = Registry('bbox_coder') - - -def build_assigner(cfg, **default_args): - """Builder of box assigner.""" - return build_from_cfg(cfg, BBOX_ASSIGNERS, default_args) - - -def build_sampler(cfg, **default_args): - """Builder of box sampler.""" - return build_from_cfg(cfg, BBOX_SAMPLERS, default_args) - - -def build_bbox_coder(cfg, **default_args): - """Builder of box coder.""" - return build_from_cfg(cfg, BBOX_CODERS, default_args) diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/coder/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/coder/__init__.py deleted file mode 100755 index d85ede79..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/coder/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base_bbox_coder import BaseBBoxCoder -from .delta_xywh_bbox_coder import DeltaXYWHBBoxCoder -__all__ = [ - 'BaseBBoxCoder', 'DeltaXYWHBBoxCoder' -] diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/coder/base_bbox_coder.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/coder/base_bbox_coder.py deleted file mode 100755 index a7ed041a..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/coder/base_bbox_coder.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BaseBBoxCoder(metaclass=ABCMeta): - """Base bounding box coder.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def encode(self, bboxes, gt_bboxes): - """Encode deltas between bboxes and ground truth boxes.""" - - @abstractmethod - def decode(self, bboxes, bboxes_pred): - """Decode the predicted bboxes according to prediction and base - boxes.""" diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py deleted file mode 100755 index a7f1c62f..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py +++ /dev/null @@ -1,392 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -import numpy as np -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class DeltaXYWHBBoxCoder(BaseBBoxCoder): - """Delta XYWH BBox coder. - - Following the practice in `R-CNN `_, - this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and - decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2). - - Args: - target_means (Sequence[float]): Denormalizing means of target for - delta coordinates - target_stds (Sequence[float]): Denormalizing standard deviation of - target for delta coordinates - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - add_ctr_clamp (bool): Whether to add center clamp, when added, the - predicted box is clamped is its center is too far away from - the original anchor's center. Only used by YOLOF. Default False. - ctr_clamp (int): the maximum pixel shift to clamp. Only used by YOLOF. - Default 32. - """ - - def __init__(self, - target_means=(0., 0., 0., 0.), - target_stds=(1., 1., 1., 1.), - clip_border=True, - add_ctr_clamp=False, - ctr_clamp=32): - super(BaseBBoxCoder, self).__init__() - self.means = target_means - self.stds = target_stds - self.clip_border = clip_border - self.add_ctr_clamp = add_ctr_clamp - self.ctr_clamp = ctr_clamp - - def encode(self, bboxes, gt_bboxes): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes``. - - Args: - bboxes (torch.Tensor): Source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): Target of the transformation, e.g., - ground-truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bbox2delta(bboxes, gt_bboxes, self.means, self.stds) - return encoded_bboxes - - def decode(self, - bboxes, - pred_bboxes, - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - bboxes (torch.Tensor): Basic boxes. Shape (B, N, 4) or (N, 4) - pred_bboxes (Tensor): Encoded offsets with respect to each roi. 
- Has shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H - when rois is a grid of anchors.Offset encoding follows [1]_. - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - wh_ratio_clip (float, optional): The allowed ratio between - width and height. - - Returns: - torch.Tensor: Decoded boxes. - """ - - assert pred_bboxes.size(0) == bboxes.size(0) - if pred_bboxes.ndim == 3: - assert pred_bboxes.size(1) == bboxes.size(1) - - if pred_bboxes.ndim == 2 and not torch.onnx.is_in_onnx_export(): - # single image decode - decoded_bboxes = delta2bbox(bboxes, pred_bboxes, self.means, - self.stds, max_shape, wh_ratio_clip, - self.clip_border, self.add_ctr_clamp, - self.ctr_clamp) - else: - if pred_bboxes.ndim == 3 and not torch.onnx.is_in_onnx_export(): - warnings.warn( - 'DeprecationWarning: onnx_delta2bbox is deprecated ' - 'in the case of batch decoding and non-ONNX, ' - 'please use “delta2bbox” instead. In order to improve ' - 'the decoding speed, the batch function will no ' - 'longer be supported. ') - decoded_bboxes = onnx_delta2bbox(bboxes, pred_bboxes, self.means, - self.stds, max_shape, - wh_ratio_clip, self.clip_border, - self.add_ctr_clamp, - self.ctr_clamp) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def bbox2delta(proposals, gt, means=(0., 0., 0., 0.), stds=(1., 1., 1., 1.)): - """Compute deltas of proposals w.r.t. gt. - - We usually compute the deltas of x, y, w, h of proposals w.r.t ground - truth bboxes to get regression target. - This is the inverse function of :func:`delta2bbox`. - - Args: - proposals (Tensor): Boxes to be transformed, shape (N, ..., 4) - gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4) - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - - Returns: - Tensor: deltas with shape (N, 4), where columns represent dx, dy, - dw, dh. - """ - assert proposals.size() == gt.size() - - proposals = proposals.float() - gt = gt.float() - px = (proposals[..., 0] + proposals[..., 2]) * 0.5 - py = (proposals[..., 1] + proposals[..., 3]) * 0.5 - pw = proposals[..., 2] - proposals[..., 0] - ph = proposals[..., 3] - proposals[..., 1] - - gx = (gt[..., 0] + gt[..., 2]) * 0.5 - gy = (gt[..., 1] + gt[..., 3]) * 0.5 - gw = gt[..., 2] - gt[..., 0] - gh = gt[..., 3] - gt[..., 1] - - dx = (gx - px) / pw - dy = (gy - py) / ph - dw = torch.log(gw / pw) - dh = torch.log(gh / ph) - deltas = torch.stack([dx, dy, dw, dh], dim=-1) - - means = deltas.new_tensor(means).unsqueeze(0) - stds = deltas.new_tensor(stds).unsqueeze(0) - deltas = deltas.sub_(means).div_(stds) - - return deltas - - -@mmcv.jit(coderize=True) -def delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000, - clip_border=True, - add_ctr_clamp=False, - ctr_clamp=32): - """Apply deltas to shift/scale base boxes. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. - This is the inverse function of :func:`bbox2delta`. - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4). - deltas (Tensor): Encoded offsets relative to each roi. - Has shape (N, num_classes * 4) or (N, 4). 
Note - N = num_base_anchors * W * H, when rois is a grid of - anchors. Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates. - Default (0., 0., 0., 0.). - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates. Default (1., 1., 1., 1.). - max_shape (tuple[int, int]): Maximum bounds for boxes, specifies - (H, W). Default None. - wh_ratio_clip (float): Maximum aspect ratio for boxes. Default - 16 / 1000. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Default True. - add_ctr_clamp (bool): Whether to add center clamp. When set to True, - the center of the prediction bounding box will be clamped to - avoid being too far away from the center of the anchor. - Only used by YOLOF. Default False. - ctr_clamp (int): the maximum pixel shift to clamp. Only used by YOLOF. - Default 32. - - Returns: - Tensor: Boxes with shape (N, num_classes * 4) or (N, 4), where 4 - represent tl_x, tl_y, br_x, br_y. - - References: - .. [1] https://arxiv.org/abs/1311.2524 - - Example: - >>> rois = torch.Tensor([[ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 5., 5., 5., 5.]]) - >>> deltas = torch.Tensor([[ 0., 0., 0., 0.], - >>> [ 1., 1., 1., 1.], - >>> [ 0., 0., 2., -1.], - >>> [ 0.7, -1.9, -0.5, 0.3]]) - >>> delta2bbox(rois, deltas, max_shape=(32, 32, 3)) - tensor([[0.0000, 0.0000, 1.0000, 1.0000], - [0.1409, 0.1409, 2.8591, 2.8591], - [0.0000, 0.3161, 4.1945, 0.6839], - [5.0000, 5.0000, 5.0000, 5.0000]]) - """ - num_bboxes, num_classes = deltas.size(0), deltas.size(1) // 4 - if num_bboxes == 0: - return deltas - - deltas = deltas.reshape(-1, 4) - - means = deltas.new_tensor(means).view(1, -1) - stds = deltas.new_tensor(stds).view(1, -1) - denorm_deltas = deltas * stds + means - - dxy = denorm_deltas[:, :2] - dwh = denorm_deltas[:, 2:] - - # Compute width/height of each roi - rois_ = rois.repeat(1, num_classes).reshape(-1, 4) - pxy = ((rois_[:, :2] + rois_[:, 2:]) * 0.5) - pwh = (rois_[:, 2:] - rois_[:, :2]) - - dxy_wh = pwh * dxy - - max_ratio = np.abs(np.log(wh_ratio_clip)) - if add_ctr_clamp: - dxy_wh = torch.clamp(dxy_wh, max=ctr_clamp, min=-ctr_clamp) - dwh = torch.clamp(dwh, max=max_ratio) - else: - dwh = dwh.clamp(min=-max_ratio, max=max_ratio) - - gxy = pxy + dxy_wh - gwh = pwh * dwh.exp() - x1y1 = gxy - (gwh * 0.5) - x2y2 = gxy + (gwh * 0.5) - bboxes = torch.cat([x1y1, x2y2], dim=-1) - if clip_border and max_shape is not None: - bboxes[..., 0::2].clamp_(min=0, max=max_shape[1]) - bboxes[..., 1::2].clamp_(min=0, max=max_shape[0]) - bboxes = bboxes.reshape(num_bboxes, -1) - return bboxes - - -def onnx_delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000, - clip_border=True, - add_ctr_clamp=False, - ctr_clamp=32): - """Apply deltas to shift/scale base boxes. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. - This is the inverse function of :func:`bbox2delta`. - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4) or (B, N, 4) - deltas (Tensor): Encoded offsets with respect to each roi. - Has shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H - when rois is a grid of anchors.Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates. - Default (0., 0., 0., 0.). 
- stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates. Default (1., 1., 1., 1.). - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If rois shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. Default None. - wh_ratio_clip (float): Maximum aspect ratio for boxes. - Default 16 / 1000. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Default True. - add_ctr_clamp (bool): Whether to add center clamp, when added, the - predicted box is clamped is its center is too far away from - the original anchor's center. Only used by YOLOF. Default False. - ctr_clamp (int): the maximum pixel shift to clamp. Only used by YOLOF. - Default 32. - - Returns: - Tensor: Boxes with shape (B, N, num_classes * 4) or (B, N, 4) or - (N, num_classes * 4) or (N, 4), where 4 represent - tl_x, tl_y, br_x, br_y. - - References: - .. [1] https://arxiv.org/abs/1311.2524 - - Example: - >>> rois = torch.Tensor([[ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 5., 5., 5., 5.]]) - >>> deltas = torch.Tensor([[ 0., 0., 0., 0.], - >>> [ 1., 1., 1., 1.], - >>> [ 0., 0., 2., -1.], - >>> [ 0.7, -1.9, -0.5, 0.3]]) - >>> delta2bbox(rois, deltas, max_shape=(32, 32, 3)) - tensor([[0.0000, 0.0000, 1.0000, 1.0000], - [0.1409, 0.1409, 2.8591, 2.8591], - [0.0000, 0.3161, 4.1945, 0.6839], - [5.0000, 5.0000, 5.0000, 5.0000]]) - """ - means = deltas.new_tensor(means).view(1, - -1).repeat(1, - deltas.size(-1) // 4) - stds = deltas.new_tensor(stds).view(1, -1).repeat(1, deltas.size(-1) // 4) - denorm_deltas = deltas * stds + means - dx = denorm_deltas[..., 0::4] - dy = denorm_deltas[..., 1::4] - dw = denorm_deltas[..., 2::4] - dh = denorm_deltas[..., 3::4] - - x1, y1 = rois[..., 0], rois[..., 1] - x2, y2 = rois[..., 2], rois[..., 3] - # Compute center of each roi - px = ((x1 + x2) * 0.5).unsqueeze(-1).expand_as(dx) - py = ((y1 + y2) * 0.5).unsqueeze(-1).expand_as(dy) - # Compute width/height of each roi - pw = (x2 - x1).unsqueeze(-1).expand_as(dw) - ph = (y2 - y1).unsqueeze(-1).expand_as(dh) - - dx_width = pw * dx - dy_height = ph * dy - - max_ratio = np.abs(np.log(wh_ratio_clip)) - if add_ctr_clamp: - dx_width = torch.clamp(dx_width, max=ctr_clamp, min=-ctr_clamp) - dy_height = torch.clamp(dy_height, max=ctr_clamp, min=-ctr_clamp) - dw = torch.clamp(dw, max=max_ratio) - dh = torch.clamp(dh, max=max_ratio) - else: - dw = dw.clamp(min=-max_ratio, max=max_ratio) - dh = dh.clamp(min=-max_ratio, max=max_ratio) - # Use exp(network energy) to enlarge/shrink each roi - gw = pw * dw.exp() - gh = ph * dh.exp() - # Use network energy to shift the center of each roi - gx = px + dx_width - gy = py + dy_height - # Convert center-xy/width/height to top-left, bottom-right - x1 = gx - gw * 0.5 - y1 = gy - gh * 0.5 - x2 = gx + gw * 0.5 - y2 = gy + gh * 0.5 - - bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view(deltas.size()) - - if clip_border and max_shape is not None: - # clip bboxes with dynamic `min` and `max` for onnx - if torch.onnx.is_in_onnx_export(): - from mmdet.core.export import dynamic_clip_for_onnx - x1, y1, x2, y2 = dynamic_clip_for_onnx(x1, y1, x2, y2, max_shape) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view(deltas.size()) - return bboxes - if not isinstance(max_shape, torch.Tensor): - max_shape = x1.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(x1) - if 
max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = x1.new_tensor(0) - max_xy = torch.cat( - [max_shape] * (deltas.size(-1) // 2), - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/__init__.py deleted file mode 100755 index bcf8221d..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import build_iou_calculator -from .iou2d_calculator import BboxOverlaps2D, bbox_overlaps - -__all__ = ['build_iou_calculator', 'bbox_overlaps', 'BboxOverlaps2D'] diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/builder.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/builder.py deleted file mode 100755 index 378ee269..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/builder.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry, build_from_cfg - -IOU_CALCULATORS = Registry('IoU calculator') - - -def build_iou_calculator(cfg, default_args=None): - """Builder of IoU calculator.""" - return build_from_cfg(cfg, IOU_CALCULATORS, default_args) diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/iou2d_calculator.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/iou2d_calculator.py deleted file mode 100755 index 4656d619..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/iou_calculators/iou2d_calculator.py +++ /dev/null @@ -1,261 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from .builder import IOU_CALCULATORS - - -def cast_tensor_type(x, scale=1., dtype=None): - if dtype == 'fp16': - # scale is for preventing overflows - x = (x / scale).half() - return x - - -def fp16_clamp(x, min=None, max=None): - if not x.is_cuda and x.dtype == torch.float16: - # clamp for cpu float16, tensor fp16 has no clamp implementation - return x.float().clamp(min, max).half() - - return x.clamp(min, max) - - -@IOU_CALCULATORS.register_module() -class BboxOverlaps2D: - """2D Overlaps (e.g. IoUs, GIoUs) Calculator.""" - - def __init__(self, scale=1., dtype=None): - self.scale = scale - self.dtype = dtype - - def __call__(self, bboxes1, bboxes2, mode='iou', is_aligned=False): - """Calculate IoU between 2D bboxes. - - Args: - bboxes1 (Tensor): bboxes have shape (m, 4) in - format, or shape (m, 5) in format. - bboxes2 (Tensor): bboxes have shape (m, 4) in - format, shape (m, 5) in format, or be - empty. If ``is_aligned `` is ``True``, then m and n must be - equal. - mode (str): "iou" (intersection over union), "iof" (intersection - over foreground), or "giou" (generalized intersection over - union). - is_aligned (bool, optional): If True, then m and n must be equal. - Default False. 
- - Returns: - Tensor: shape (m, n) if ``is_aligned `` is False else shape (m,) - """ - assert bboxes1.size(-1) in [0, 4, 5] - assert bboxes2.size(-1) in [0, 4, 5] - if bboxes2.size(-1) == 5: - bboxes2 = bboxes2[..., :4] - if bboxes1.size(-1) == 5: - bboxes1 = bboxes1[..., :4] - - if self.dtype == 'fp16': - # change tensor type to save cpu and cuda memory and keep speed - bboxes1 = cast_tensor_type(bboxes1, self.scale, self.dtype) - bboxes2 = cast_tensor_type(bboxes2, self.scale, self.dtype) - overlaps = bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) - if not overlaps.is_cuda and overlaps.dtype == torch.float16: - # resume cpu float32 - overlaps = overlaps.float() - return overlaps - - return bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) - - def __repr__(self): - """str: a string describing the module""" - repr_str = self.__class__.__name__ + f'(' \ - f'scale={self.scale}, dtype={self.dtype})' - return repr_str - - -def bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False, eps=1e-6): - """Calculate overlap between two set of bboxes. - - FP16 Contributed by https://github.com/open-mmlab/mmdetection/pull/4889 - Note: - Assume bboxes1 is M x 4, bboxes2 is N x 4, when mode is 'iou', - there are some new generated variable when calculating IOU - using bbox_overlaps function: - - 1) is_aligned is False - area1: M x 1 - area2: N x 1 - lt: M x N x 2 - rb: M x N x 2 - wh: M x N x 2 - overlap: M x N x 1 - union: M x N x 1 - ious: M x N x 1 - - Total memory: - S = (9 x N x M + N + M) * 4 Byte, - - When using FP16, we can reduce: - R = (9 x N x M + N + M) * 4 / 2 Byte - R large than (N + M) * 4 * 2 is always true when N and M >= 1. - Obviously, N + M <= N * M < 3 * N * M, when N >=2 and M >=2, - N + 1 < 3 * N, when N or M is 1. - - Given M = 40 (ground truth), N = 400000 (three anchor boxes - in per grid, FPN, R-CNNs), - R = 275 MB (one times) - - A special case (dense detection), M = 512 (ground truth), - R = 3516 MB = 3.43 GB - - When the batch size is B, reduce: - B x R - - Therefore, CUDA memory runs out frequently. - - Experiments on GeForce RTX 2080Ti (11019 MiB): - - | dtype | M | N | Use | Real | Ideal | - |:----:|:----:|:----:|:----:|:----:|:----:| - | FP32 | 512 | 400000 | 8020 MiB | -- | -- | - | FP16 | 512 | 400000 | 4504 MiB | 3516 MiB | 3516 MiB | - | FP32 | 40 | 400000 | 1540 MiB | -- | -- | - | FP16 | 40 | 400000 | 1264 MiB | 276MiB | 275 MiB | - - 2) is_aligned is True - area1: N x 1 - area2: N x 1 - lt: N x 2 - rb: N x 2 - wh: N x 2 - overlap: N x 1 - union: N x 1 - ious: N x 1 - - Total memory: - S = 11 x N * 4 Byte - - When using FP16, we can reduce: - R = 11 x N * 4 / 2 Byte - - So do the 'giou' (large than 'iou'). - - Time-wise, FP16 is generally faster than FP32. - - When gpu_assign_thr is not -1, it takes more time on cpu - but not reduce memory. - There, we can reduce half the memory and keep the speed. - - If ``is_aligned`` is ``False``, then calculate the overlaps between each - bbox of bboxes1 and bboxes2, otherwise the overlaps between each aligned - pair of bboxes1 and bboxes2. - - Args: - bboxes1 (Tensor): shape (B, m, 4) in format or empty. - bboxes2 (Tensor): shape (B, n, 4) in format or empty. - B indicates the batch dim, in shape (B1, B2, ..., Bn). - If ``is_aligned`` is ``True``, then m and n must be equal. - mode (str): "iou" (intersection over union), "iof" (intersection over - foreground) or "giou" (generalized intersection over union). - Default "iou". - is_aligned (bool, optional): If True, then m and n must be equal. - Default False. 
- eps (float, optional): A value added to the denominator for numerical - stability. Default 1e-6. - - Returns: - Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,) - - Example: - >>> bboxes1 = torch.FloatTensor([ - >>> [0, 0, 10, 10], - >>> [10, 10, 20, 20], - >>> [32, 32, 38, 42], - >>> ]) - >>> bboxes2 = torch.FloatTensor([ - >>> [0, 0, 10, 20], - >>> [0, 10, 10, 19], - >>> [10, 10, 20, 20], - >>> ]) - >>> overlaps = bbox_overlaps(bboxes1, bboxes2) - >>> assert overlaps.shape == (3, 3) - >>> overlaps = bbox_overlaps(bboxes1, bboxes2, is_aligned=True) - >>> assert overlaps.shape == (3, ) - - Example: - >>> empty = torch.empty(0, 4) - >>> nonempty = torch.FloatTensor([[0, 0, 10, 9]]) - >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1) - >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0) - >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0) - """ - - assert mode in ['iou', 'iof', 'giou'], f'Unsupported mode {mode}' - # Either the boxes are empty or the length of boxes' last dimension is 4 - assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0) - assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0) - - # Batch dim must be the same - # Batch dim: (B1, B2, ... Bn) - assert bboxes1.shape[:-2] == bboxes2.shape[:-2] - batch_shape = bboxes1.shape[:-2] - - rows = bboxes1.size(-2) - cols = bboxes2.size(-2) - if is_aligned: - assert rows == cols - - if rows * cols == 0: - if is_aligned: - return bboxes1.new(batch_shape + (rows, )) - else: - return bboxes1.new(batch_shape + (rows, cols)) - - area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * ( - bboxes1[..., 3] - bboxes1[..., 1]) - area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * ( - bboxes2[..., 3] - bboxes2[..., 1]) - - if is_aligned: - lt = torch.max(bboxes1[..., :2], bboxes2[..., :2]) # [B, rows, 2] - rb = torch.min(bboxes1[..., 2:], bboxes2[..., 2:]) # [B, rows, 2] - - wh = fp16_clamp(rb - lt, min=0) - overlap = wh[..., 0] * wh[..., 1] - - if mode in ['iou', 'giou']: - union = area1 + area2 - overlap - else: - union = area1 - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :2], bboxes2[..., :2]) - enclosed_rb = torch.max(bboxes1[..., 2:], bboxes2[..., 2:]) - else: - lt = torch.max(bboxes1[..., :, None, :2], - bboxes2[..., None, :, :2]) # [B, rows, cols, 2] - rb = torch.min(bboxes1[..., :, None, 2:], - bboxes2[..., None, :, 2:]) # [B, rows, cols, 2] - - wh = fp16_clamp(rb - lt, min=0) - overlap = wh[..., 0] * wh[..., 1] - - if mode in ['iou', 'giou']: - union = area1[..., None] + area2[..., None, :] - overlap - else: - union = area1[..., None] - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :, None, :2], - bboxes2[..., None, :, :2]) - enclosed_rb = torch.max(bboxes1[..., :, None, 2:], - bboxes2[..., None, :, 2:]) - - eps = union.new_tensor([eps]) - union = torch.max(union, eps) - ious = overlap / union - if mode in ['iou', 'iof']: - return ious - # calculate gious - enclose_wh = fp16_clamp(enclosed_rb - enclosed_lt, min=0) - enclose_area = enclose_wh[..., 0] * enclose_wh[..., 1] - enclose_area = torch.max(enclose_area, eps) - gious = ious - (enclose_area - union) / enclose_area - return gious diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/__init__.py deleted file mode 100755 index 1811ba0b..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base_sampler import BaseSampler -from .pseudo_sampler import PseudoSampler -from .sampling_result import SamplingResult -__all__ = [ - 'BaseSampler', 'PseudoSampler', 'SamplingResult' -] diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/base_sampler.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/base_sampler.py deleted file mode 100755 index bd15c7c6..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/base_sampler.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -import torch - -from .sampling_result import SamplingResult - - -class BaseSampler(metaclass=ABCMeta): - """Base class of samplers.""" - - def __init__(self, - num, - pos_fraction, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - self.num = num - self.pos_fraction = pos_fraction - self.neg_pos_ub = neg_pos_ub - self.add_gt_as_proposals = add_gt_as_proposals - self.pos_sampler = self - self.neg_sampler = self - - @abstractmethod - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Sample positive samples.""" - pass - - @abstractmethod - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Sample negative samples.""" - pass - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - bboxes (Tensor): Boxes to be sampled from. - gt_bboxes (Tensor): Ground truth bboxes. - gt_labels (Tensor, optional): Class labels of ground truth bboxes. - - Returns: - :obj:`SamplingResult`: Sampling result. - - Example: - >>> from mmdet.core.bbox import RandomSampler - >>> from mmdet.core.bbox import AssignResult - >>> from mmdet.core.bbox.demodata import ensure_rng, random_boxes - >>> rng = ensure_rng(None) - >>> assign_result = AssignResult.random(rng=rng) - >>> bboxes = random_boxes(assign_result.num_preds, rng=rng) - >>> gt_bboxes = random_boxes(assign_result.num_gts, rng=rng) - >>> gt_labels = None - >>> self = RandomSampler(num=32, pos_fraction=0.5, neg_pos_ub=-1, - >>> add_gt_as_proposals=False) - >>> self = self.sample(assign_result, bboxes, gt_bboxes, gt_labels) - """ - if len(bboxes.shape) < 2: - bboxes = bboxes[None, :] - - bboxes = bboxes[:, :4] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8) - if self.add_gt_as_proposals and len(gt_bboxes) > 0: - if gt_labels is None: - raise ValueError( - 'gt_labels must be given when add_gt_as_proposals is True') - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - # We found that sampled indices have duplicated items occasionally. 
- # (may be a bug of PyTorch) - pos_inds = pos_inds.unique() - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds = self.neg_sampler._sample_neg( - assign_result, num_expected_neg, bboxes=bboxes, **kwargs) - neg_inds = neg_inds.unique() - - sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags) - return sampling_result diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/pseudo_sampler.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/pseudo_sampler.py deleted file mode 100755 index b5ce298e..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/pseudo_sampler.py +++ /dev/null @@ -1,42 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_SAMPLERS -from .base_sampler import BaseSampler -from .sampling_result import SamplingResult - - -@BBOX_SAMPLERS.register_module() -class PseudoSampler(BaseSampler): - """A pseudo sampler that does not do sampling actually.""" - - def __init__(self, **kwargs): - pass - - def _sample_pos(self, **kwargs): - """Sample positive samples.""" - raise NotImplementedError - - def _sample_neg(self, **kwargs): - """Sample negative samples.""" - raise NotImplementedError - - def sample(self, assign_result, bboxes, gt_bboxes, *args, **kwargs): - """Directly returns the positive and negative indices of samples. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - bboxes (torch.Tensor): Bounding boxes - gt_bboxes (torch.Tensor): Ground truth boxes - - Returns: - :obj:`SamplingResult`: sampler results - """ - pos_inds = torch.nonzero( - assign_result.gt_inds > 0, as_tuple=False).squeeze(-1).unique() - neg_inds = torch.nonzero( - assign_result.gt_inds == 0, as_tuple=False).squeeze(-1).unique() - gt_flags = bboxes.new_zeros(bboxes.shape[0], dtype=torch.uint8) - sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags) - return sampling_result diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/sampling_result.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/sampling_result.py deleted file mode 100755 index 50676d04..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/samplers/sampling_result.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.utils import util_mixins - - -class SamplingResult(util_mixins.NiceRepr): - """Bbox sampling result. 
- - Example: - >>> # xdoctest: +IGNORE_WANT - >>> from mmdet.core.bbox.samplers.sampling_result import * # NOQA - >>> self = SamplingResult.random(rng=10) - >>> print(f'self = {self}') - self = - """ - - def __init__(self, pos_inds, neg_inds, bboxes, gt_bboxes, assign_result, - gt_flags): - self.pos_inds = pos_inds - self.neg_inds = neg_inds - self.pos_bboxes = bboxes[pos_inds] - self.neg_bboxes = bboxes[neg_inds] - self.pos_is_gt = gt_flags[pos_inds] - - self.num_gts = gt_bboxes.shape[0] - self.pos_assigned_gt_inds = assign_result.gt_inds[pos_inds] - 1 - - if gt_bboxes.numel() == 0: - # hack for index error case - assert self.pos_assigned_gt_inds.numel() == 0 - self.pos_gt_bboxes = torch.empty_like(gt_bboxes).view(-1, 4) - else: - if len(gt_bboxes.shape) < 2: - gt_bboxes = gt_bboxes.view(-1, 4) - - self.pos_gt_bboxes = gt_bboxes[self.pos_assigned_gt_inds.long(), :] - - if assign_result.labels is not None: - self.pos_gt_labels = assign_result.labels[pos_inds] - else: - self.pos_gt_labels = None - - @property - def bboxes(self): - """torch.Tensor: concatenated positive and negative boxes""" - return torch.cat([self.pos_bboxes, self.neg_bboxes]) - - def to(self, device): - """Change the device of the data inplace. - - Example: - >>> self = SamplingResult.random() - >>> print(f'self = {self.to(None)}') - >>> # xdoctest: +REQUIRES(--gpu) - >>> print(f'self = {self.to(0)}') - """ - _dict = self.__dict__ - for key, value in _dict.items(): - if isinstance(value, torch.Tensor): - _dict[key] = value.to(device) - return self - - def __nice__(self): - data = self.info.copy() - data['pos_bboxes'] = data.pop('pos_bboxes').shape - data['neg_bboxes'] = data.pop('neg_bboxes').shape - parts = [f"'{k}': {v!r}" for k, v in sorted(data.items())] - body = ' ' + ',\n '.join(parts) - return '{\n' + body + '\n}' - - @property - def info(self): - """Returns a dictionary of info about the object.""" - return { - 'pos_inds': self.pos_inds, - 'neg_inds': self.neg_inds, - 'pos_bboxes': self.pos_bboxes, - 'neg_bboxes': self.neg_bboxes, - 'pos_is_gt': self.pos_is_gt, - 'num_gts': self.num_gts, - 'pos_assigned_gt_inds': self.pos_assigned_gt_inds, - } - - @classmethod - def random(cls, rng=None, **kwargs): - """ - Args: - rng (None | int | numpy.random.RandomState): seed or state. - kwargs (keyword arguments): - - num_preds: number of predicted boxes - - num_gts: number of true boxes - - p_ignore (float): probability of a predicted box assigned to \ - an ignored truth. - - p_assigned (float): probability of a predicted box not being \ - assigned. - - p_use_label (float | bool): with labels or not. - - Returns: - :obj:`SamplingResult`: Randomly generated sampling result. - - Example: - >>> from mmdet.core.bbox.samplers.sampling_result import * # NOQA - >>> self = SamplingResult.random() - >>> print(self.__dict__) - """ - from mmdet.core.bbox import demodata - from mmdet.core.bbox.assigners.assign_result import AssignResult - from mmdet.core.bbox.samplers.random_sampler import RandomSampler - rng = demodata.ensure_rng(rng) - - # make probabalistic? 
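        # Note on the fixed values below: with num=32 and pos_fraction=0.5,
        # BaseSampler.sample() requests at most int(32 * 0.5) = 16 positives and
        # fills the rest of the quota with negatives; neg_pos_ub=-1 leaves the
        # negative count uncapped.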
- num = 32 - pos_fraction = 0.5 - neg_pos_ub = -1 - - assign_result = AssignResult.random(rng=rng, **kwargs) - - # Note we could just compute an assignment - bboxes = demodata.random_boxes(assign_result.num_preds, rng=rng) - gt_bboxes = demodata.random_boxes(assign_result.num_gts, rng=rng) - - if rng.rand() > 0.2: - # sometimes algorithms squeeze their data, be robust to that - gt_bboxes = gt_bboxes.squeeze() - bboxes = bboxes.squeeze() - - if assign_result.labels is None: - gt_labels = None - else: - gt_labels = None # todo - - if gt_labels is None: - add_gt_as_proposals = False - else: - add_gt_as_proposals = True # make probabalistic? - - sampler = RandomSampler( - num, - pos_fraction, - neg_pos_ub=neg_pos_ub, - add_gt_as_proposals=add_gt_as_proposals, - rng=rng) - self = sampler.sample(assign_result, bboxes, gt_bboxes, gt_labels) - return self diff --git a/cv/detection/yolof/pytorch/mmdet/core/bbox/transforms.py b/cv/detection/yolof/pytorch/mmdet/core/bbox/transforms.py deleted file mode 100755 index 6d72076a..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/bbox/transforms.py +++ /dev/null @@ -1,270 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - - -def find_inside_bboxes(bboxes, img_h, img_w): - """Find bboxes as long as a part of bboxes is inside the image. - - Args: - bboxes (Tensor): Shape (N, 4). - img_h (int): Image height. - img_w (int): Image width. - - Returns: - Tensor: Index of the remaining bboxes. - """ - inside_inds = (bboxes[:, 0] < img_w) & (bboxes[:, 2] > 0) \ - & (bboxes[:, 1] < img_h) & (bboxes[:, 3] > 0) - return inside_inds - - -def bbox_flip(bboxes, img_shape, direction='horizontal'): - """Flip bboxes horizontally or vertically. - - Args: - bboxes (Tensor): Shape (..., 4*k) - img_shape (tuple): Image shape. - direction (str): Flip direction, options are "horizontal", "vertical", - "diagonal". Default: "horizontal" - - Returns: - Tensor: Flipped bboxes. - """ - assert bboxes.shape[-1] % 4 == 0 - assert direction in ['horizontal', 'vertical', 'diagonal'] - flipped = bboxes.clone() - if direction == 'horizontal': - flipped[..., 0::4] = img_shape[1] - bboxes[..., 2::4] - flipped[..., 2::4] = img_shape[1] - bboxes[..., 0::4] - elif direction == 'vertical': - flipped[..., 1::4] = img_shape[0] - bboxes[..., 3::4] - flipped[..., 3::4] = img_shape[0] - bboxes[..., 1::4] - else: - flipped[..., 0::4] = img_shape[1] - bboxes[..., 2::4] - flipped[..., 1::4] = img_shape[0] - bboxes[..., 3::4] - flipped[..., 2::4] = img_shape[1] - bboxes[..., 0::4] - flipped[..., 3::4] = img_shape[0] - bboxes[..., 1::4] - return flipped - - -def bbox_mapping(bboxes, - img_shape, - scale_factor, - flip, - flip_direction='horizontal'): - """Map bboxes from the original image scale to testing scale.""" - new_bboxes = bboxes * bboxes.new_tensor(scale_factor) - if flip: - new_bboxes = bbox_flip(new_bboxes, img_shape, flip_direction) - return new_bboxes - - -def bbox_mapping_back(bboxes, - img_shape, - scale_factor, - flip, - flip_direction='horizontal'): - """Map bboxes from testing scale to original image scale.""" - new_bboxes = bbox_flip(bboxes, img_shape, - flip_direction) if flip else bboxes - new_bboxes = new_bboxes.view(-1, 4) / new_bboxes.new_tensor(scale_factor) - return new_bboxes.view(bboxes.shape) - - -def bbox2roi(bbox_list): - """Convert a list of bboxes to roi format. - - Args: - bbox_list (list[Tensor]): a list of bboxes corresponding to a batch - of images. 
- - Returns: - Tensor: shape (n, 5), [batch_ind, x1, y1, x2, y2] - """ - rois_list = [] - for img_id, bboxes in enumerate(bbox_list): - if bboxes.size(0) > 0: - img_inds = bboxes.new_full((bboxes.size(0), 1), img_id) - rois = torch.cat([img_inds, bboxes[:, :4]], dim=-1) - else: - rois = bboxes.new_zeros((0, 5)) - rois_list.append(rois) - rois = torch.cat(rois_list, 0) - return rois - - -def roi2bbox(rois): - """Convert rois to bounding box format. - - Args: - rois (torch.Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - Returns: - list[torch.Tensor]: Converted boxes of corresponding rois. - """ - bbox_list = [] - img_ids = torch.unique(rois[:, 0].cpu(), sorted=True) - for img_id in img_ids: - inds = (rois[:, 0] == img_id.item()) - bbox = rois[inds, 1:] - bbox_list.append(bbox) - return bbox_list - - -def bbox2result(bboxes, labels, num_classes): - """Convert detection results to a list of numpy arrays. - - Args: - bboxes (torch.Tensor | np.ndarray): shape (n, 5) - labels (torch.Tensor | np.ndarray): shape (n, ) - num_classes (int): class number, including background class - - Returns: - list(ndarray): bbox results of each class - """ - if bboxes.shape[0] == 0: - return [np.zeros((0, 5), dtype=np.float32) for i in range(num_classes)] - else: - if isinstance(bboxes, torch.Tensor): - bboxes = bboxes.detach().cpu().numpy() - labels = labels.detach().cpu().numpy() - return [bboxes[labels == i, :] for i in range(num_classes)] - - -def distance2bbox(points, distance, max_shape=None): - """Decode distance prediction to bounding box. - - Args: - points (Tensor): Shape (B, N, 2) or (N, 2). - distance (Tensor): Distance from the given point to 4 - boundaries (left, top, right, bottom). Shape (B, N, 4) or (N, 4) - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If priors shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - - Returns: - Tensor: Boxes with shape (N, 4) or (B, N, 4) - """ - - x1 = points[..., 0] - distance[..., 0] - y1 = points[..., 1] - distance[..., 1] - x2 = points[..., 0] + distance[..., 2] - y2 = points[..., 1] + distance[..., 3] - - bboxes = torch.stack([x1, y1, x2, y2], -1) - - if max_shape is not None: - if bboxes.dim() == 2 and not torch.onnx.is_in_onnx_export(): - # speed up - bboxes[:, 0::2].clamp_(min=0, max=max_shape[1]) - bboxes[:, 1::2].clamp_(min=0, max=max_shape[0]) - return bboxes - - # clip bboxes with dynamic `min` and `max` for onnx - if torch.onnx.is_in_onnx_export(): - from mmdet.core.export import dynamic_clip_for_onnx - x1, y1, x2, y2 = dynamic_clip_for_onnx(x1, y1, x2, y2, max_shape) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - return bboxes - if not isinstance(max_shape, torch.Tensor): - max_shape = x1.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(x1) - if max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = x1.new_tensor(0) - max_xy = torch.cat([max_shape, max_shape], - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes - - -def bbox2distance(points, bbox, max_dis=None, eps=0.1): - """Decode bounding box based on distances. - - Args: - points (Tensor): Shape (n, 2), [x, y]. - bbox (Tensor): Shape (n, 4), "xyxy" format - max_dis (float): Upper bound of the distance. 
- eps (float): a small value to ensure target < max_dis, instead <= - - Returns: - Tensor: Decoded distances. - """ - left = points[:, 0] - bbox[:, 0] - top = points[:, 1] - bbox[:, 1] - right = bbox[:, 2] - points[:, 0] - bottom = bbox[:, 3] - points[:, 1] - if max_dis is not None: - left = left.clamp(min=0, max=max_dis - eps) - top = top.clamp(min=0, max=max_dis - eps) - right = right.clamp(min=0, max=max_dis - eps) - bottom = bottom.clamp(min=0, max=max_dis - eps) - return torch.stack([left, top, right, bottom], -1) - - -def bbox_rescale(bboxes, scale_factor=1.0): - """Rescale bounding box w.r.t. scale_factor. - - Args: - bboxes (Tensor): Shape (n, 4) for bboxes or (n, 5) for rois - scale_factor (float): rescale factor - - Returns: - Tensor: Rescaled bboxes. - """ - if bboxes.size(1) == 5: - bboxes_ = bboxes[:, 1:] - inds_ = bboxes[:, 0] - else: - bboxes_ = bboxes - cx = (bboxes_[:, 0] + bboxes_[:, 2]) * 0.5 - cy = (bboxes_[:, 1] + bboxes_[:, 3]) * 0.5 - w = bboxes_[:, 2] - bboxes_[:, 0] - h = bboxes_[:, 3] - bboxes_[:, 1] - w = w * scale_factor - h = h * scale_factor - x1 = cx - 0.5 * w - x2 = cx + 0.5 * w - y1 = cy - 0.5 * h - y2 = cy + 0.5 * h - if bboxes.size(1) == 5: - rescaled_bboxes = torch.stack([inds_, x1, y1, x2, y2], dim=-1) - else: - rescaled_bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - return rescaled_bboxes - - -def bbox_cxcywh_to_xyxy(bbox): - """Convert bbox coordinates from (cx, cy, w, h) to (x1, y1, x2, y2). - - Args: - bbox (Tensor): Shape (n, 4) for bboxes. - - Returns: - Tensor: Converted bboxes. - """ - cx, cy, w, h = bbox.split((1, 1, 1, 1), dim=-1) - bbox_new = [(cx - 0.5 * w), (cy - 0.5 * h), (cx + 0.5 * w), (cy + 0.5 * h)] - return torch.cat(bbox_new, dim=-1) - - -def bbox_xyxy_to_cxcywh(bbox): - """Convert bbox coordinates from (x1, y1, x2, y2) to (cx, cy, w, h). - - Args: - bbox (Tensor): Shape (n, 4) for bboxes. - - Returns: - Tensor: Converted bboxes. - """ - x1, y1, x2, y2 = bbox.split((1, 1, 1, 1), dim=-1) - bbox_new = [(x1 + x2) / 2, (y1 + y2) / 2, (x2 - x1), (y2 - y1)] - return torch.cat(bbox_new, dim=-1) diff --git a/cv/detection/yolof/pytorch/mmdet/core/evaluation/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/evaluation/__init__.py deleted file mode 100755 index 67e7c55b..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/evaluation/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .class_names import (cityscapes_classes, coco_classes, dataset_aliases, - get_classes, imagenet_det_classes, - imagenet_vid_classes, oid_challenge_classes, - oid_v6_classes, voc_classes) -from .eval_hooks import DistEvalHook, EvalHook -from .mean_ap import average_precision, eval_map, print_map_summary -from .panoptic_utils import INSTANCE_OFFSET -from .recall import (eval_recalls, plot_iou_recall, plot_num_recall, - print_recall_summary) - -__all__ = [ - 'voc_classes', 'imagenet_det_classes', 'imagenet_vid_classes', - 'coco_classes', 'cityscapes_classes', 'dataset_aliases', 'get_classes', - 'DistEvalHook', 'EvalHook', 'average_precision', 'eval_map', - 'print_map_summary', 'eval_recalls', 'print_recall_summary', - 'plot_num_recall', 'plot_iou_recall', 'oid_v6_classes', - 'oid_challenge_classes', 'INSTANCE_OFFSET' -] diff --git a/cv/detection/yolof/pytorch/mmdet/core/evaluation/bbox_overlaps.py b/cv/detection/yolof/pytorch/mmdet/core/evaluation/bbox_overlaps.py deleted file mode 100755 index 5d6eb82f..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/evaluation/bbox_overlaps.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - - -def bbox_overlaps(bboxes1, - bboxes2, - mode='iou', - eps=1e-6, - use_legacy_coordinate=False): - """Calculate the ious between each bbox of bboxes1 and bboxes2. - - Args: - bboxes1 (ndarray): Shape (n, 4) - bboxes2 (ndarray): Shape (k, 4) - mode (str): IOU (intersection over union) or IOF (intersection - over foreground) - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Note when function is used in `VOCDataset`, it should be - True to align with the official implementation - `http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar` - Default: False. - - Returns: - ious (ndarray): Shape (n, k) - """ - - assert mode in ['iou', 'iof'] - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. 
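    # Illustrative note (box values assumed): for a box [0, 0, 10, 10], the
    # default convention gives width = 10 - 0 = 10, while the legacy mmdet v1.x /
    # PASCAL VOC convention adds extra_length, giving width = 10 - 0 + 1 = 11.
    # The same offset enters every width, height and overlap term below.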
- bboxes1 = bboxes1.astype(np.float32) - bboxes2 = bboxes2.astype(np.float32) - rows = bboxes1.shape[0] - cols = bboxes2.shape[0] - ious = np.zeros((rows, cols), dtype=np.float32) - if rows * cols == 0: - return ious - exchange = False - if bboxes1.shape[0] > bboxes2.shape[0]: - bboxes1, bboxes2 = bboxes2, bboxes1 - ious = np.zeros((cols, rows), dtype=np.float32) - exchange = True - area1 = (bboxes1[:, 2] - bboxes1[:, 0] + extra_length) * ( - bboxes1[:, 3] - bboxes1[:, 1] + extra_length) - area2 = (bboxes2[:, 2] - bboxes2[:, 0] + extra_length) * ( - bboxes2[:, 3] - bboxes2[:, 1] + extra_length) - for i in range(bboxes1.shape[0]): - x_start = np.maximum(bboxes1[i, 0], bboxes2[:, 0]) - y_start = np.maximum(bboxes1[i, 1], bboxes2[:, 1]) - x_end = np.minimum(bboxes1[i, 2], bboxes2[:, 2]) - y_end = np.minimum(bboxes1[i, 3], bboxes2[:, 3]) - overlap = np.maximum(x_end - x_start + extra_length, 0) * np.maximum( - y_end - y_start + extra_length, 0) - if mode == 'iou': - union = area1[i] + area2 - overlap - else: - union = area1[i] if not exchange else area2 - union = np.maximum(union, eps) - ious[i, :] = overlap / union - if exchange: - ious = ious.T - return ious diff --git a/cv/detection/yolof/pytorch/mmdet/core/evaluation/class_names.py b/cv/detection/yolof/pytorch/mmdet/core/evaluation/class_names.py deleted file mode 100755 index 73797118..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/evaluation/class_names.py +++ /dev/null @@ -1,332 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv - - -def wider_face_classes(): - return ['face'] - - -def voc_classes(): - return [ - 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', - 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', - 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor' - ] - - -def imagenet_det_classes(): - return [ - 'accordion', 'airplane', 'ant', 'antelope', 'apple', 'armadillo', - 'artichoke', 'axe', 'baby_bed', 'backpack', 'bagel', 'balance_beam', - 'banana', 'band_aid', 'banjo', 'baseball', 'basketball', 'bathing_cap', - 'beaker', 'bear', 'bee', 'bell_pepper', 'bench', 'bicycle', 'binder', - 'bird', 'bookshelf', 'bow_tie', 'bow', 'bowl', 'brassiere', 'burrito', - 'bus', 'butterfly', 'camel', 'can_opener', 'car', 'cart', 'cattle', - 'cello', 'centipede', 'chain_saw', 'chair', 'chime', 'cocktail_shaker', - 'coffee_maker', 'computer_keyboard', 'computer_mouse', 'corkscrew', - 'cream', 'croquet_ball', 'crutch', 'cucumber', 'cup_or_mug', 'diaper', - 'digital_clock', 'dishwasher', 'dog', 'domestic_cat', 'dragonfly', - 'drum', 'dumbbell', 'electric_fan', 'elephant', 'face_powder', 'fig', - 'filing_cabinet', 'flower_pot', 'flute', 'fox', 'french_horn', 'frog', - 'frying_pan', 'giant_panda', 'goldfish', 'golf_ball', 'golfcart', - 'guacamole', 'guitar', 'hair_dryer', 'hair_spray', 'hamburger', - 'hammer', 'hamster', 'harmonica', 'harp', 'hat_with_a_wide_brim', - 'head_cabbage', 'helmet', 'hippopotamus', 'horizontal_bar', 'horse', - 'hotdog', 'iPod', 'isopod', 'jellyfish', 'koala_bear', 'ladle', - 'ladybug', 'lamp', 'laptop', 'lemon', 'lion', 'lipstick', 'lizard', - 'lobster', 'maillot', 'maraca', 'microphone', 'microwave', 'milk_can', - 'miniskirt', 'monkey', 'motorcycle', 'mushroom', 'nail', 'neck_brace', - 'oboe', 'orange', 'otter', 'pencil_box', 'pencil_sharpener', 'perfume', - 'person', 'piano', 'pineapple', 'ping-pong_ball', 'pitcher', 'pizza', - 'plastic_bag', 'plate_rack', 'pomegranate', 'popsicle', 'porcupine', - 'power_drill', 'pretzel', 'printer', 'puck', 
'punching_bag', 'purse', - 'rabbit', 'racket', 'ray', 'red_panda', 'refrigerator', - 'remote_control', 'rubber_eraser', 'rugby_ball', 'ruler', - 'salt_or_pepper_shaker', 'saxophone', 'scorpion', 'screwdriver', - 'seal', 'sheep', 'ski', 'skunk', 'snail', 'snake', 'snowmobile', - 'snowplow', 'soap_dispenser', 'soccer_ball', 'sofa', 'spatula', - 'squirrel', 'starfish', 'stethoscope', 'stove', 'strainer', - 'strawberry', 'stretcher', 'sunglasses', 'swimming_trunks', 'swine', - 'syringe', 'table', 'tape_player', 'tennis_ball', 'tick', 'tie', - 'tiger', 'toaster', 'traffic_light', 'train', 'trombone', 'trumpet', - 'turtle', 'tv_or_monitor', 'unicycle', 'vacuum', 'violin', - 'volleyball', 'waffle_iron', 'washer', 'water_bottle', 'watercraft', - 'whale', 'wine_bottle', 'zebra' - ] - - -def imagenet_vid_classes(): - return [ - 'airplane', 'antelope', 'bear', 'bicycle', 'bird', 'bus', 'car', - 'cattle', 'dog', 'domestic_cat', 'elephant', 'fox', 'giant_panda', - 'hamster', 'horse', 'lion', 'lizard', 'monkey', 'motorcycle', 'rabbit', - 'red_panda', 'sheep', 'snake', 'squirrel', 'tiger', 'train', 'turtle', - 'watercraft', 'whale', 'zebra' - ] - - -def coco_classes(): - return [ - 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', - 'truck', 'boat', 'traffic_light', 'fire_hydrant', 'stop_sign', - 'parking_meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', - 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', - 'sports_ball', 'kite', 'baseball_bat', 'baseball_glove', 'skateboard', - 'surfboard', 'tennis_racket', 'bottle', 'wine_glass', 'cup', 'fork', - 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', - 'broccoli', 'carrot', 'hot_dog', 'pizza', 'donut', 'cake', 'chair', - 'couch', 'potted_plant', 'bed', 'dining_table', 'toilet', 'tv', - 'laptop', 'mouse', 'remote', 'keyboard', 'cell_phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', - 'scissors', 'teddy_bear', 'hair_drier', 'toothbrush' - ] - - -def cityscapes_classes(): - return [ - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle' - ] - - -def oid_challenge_classes(): - return [ - 'Footwear', 'Jeans', 'House', 'Tree', 'Woman', 'Man', 'Land vehicle', - 'Person', 'Wheel', 'Bus', 'Human face', 'Bird', 'Dress', 'Girl', - 'Vehicle', 'Building', 'Cat', 'Car', 'Belt', 'Elephant', 'Dessert', - 'Butterfly', 'Train', 'Guitar', 'Poster', 'Book', 'Boy', 'Bee', - 'Flower', 'Window', 'Hat', 'Human head', 'Dog', 'Human arm', 'Drink', - 'Human mouth', 'Human hair', 'Human nose', 'Human hand', 'Table', - 'Marine invertebrates', 'Fish', 'Sculpture', 'Rose', 'Street light', - 'Glasses', 'Fountain', 'Skyscraper', 'Swimwear', 'Brassiere', 'Drum', - 'Duck', 'Countertop', 'Furniture', 'Ball', 'Human leg', 'Boat', - 'Balloon', 'Bicycle helmet', 'Goggles', 'Door', 'Human eye', 'Shirt', - 'Toy', 'Teddy bear', 'Pasta', 'Tomato', 'Human ear', - 'Vehicle registration plate', 'Microphone', 'Musical keyboard', - 'Tower', 'Houseplant', 'Flowerpot', 'Fruit', 'Vegetable', - 'Musical instrument', 'Suit', 'Motorcycle', 'Bagel', 'French fries', - 'Hamburger', 'Chair', 'Salt and pepper shakers', 'Snail', 'Airplane', - 'Horse', 'Laptop', 'Computer keyboard', 'Football helmet', 'Cocktail', - 'Juice', 'Tie', 'Computer monitor', 'Human beard', 'Bottle', - 'Saxophone', 'Lemon', 'Mouse', 'Sock', 'Cowboy hat', 'Sun hat', - 'Football', 'Porch', 'Sunglasses', 'Lobster', 'Crab', 'Picture frame', - 'Van', 
'Crocodile', 'Surfboard', 'Shorts', 'Helicopter', 'Helmet', - 'Sports uniform', 'Taxi', 'Swan', 'Goose', 'Coat', 'Jacket', 'Handbag', - 'Flag', 'Skateboard', 'Television', 'Tire', 'Spoon', 'Palm tree', - 'Stairs', 'Salad', 'Castle', 'Oven', 'Microwave oven', 'Wine', - 'Ceiling fan', 'Mechanical fan', 'Cattle', 'Truck', 'Box', 'Ambulance', - 'Desk', 'Wine glass', 'Reptile', 'Tank', 'Traffic light', 'Billboard', - 'Tent', 'Insect', 'Spider', 'Treadmill', 'Cupboard', 'Shelf', - 'Seat belt', 'Human foot', 'Bicycle', 'Bicycle wheel', 'Couch', - 'Bookcase', 'Fedora', 'Backpack', 'Bench', 'Oyster', - 'Moths and butterflies', 'Lavender', 'Waffle', 'Fork', 'Animal', - 'Accordion', 'Mobile phone', 'Plate', 'Coffee cup', 'Saucer', - 'Platter', 'Dagger', 'Knife', 'Bull', 'Tortoise', 'Sea turtle', 'Deer', - 'Weapon', 'Apple', 'Ski', 'Taco', 'Traffic sign', 'Beer', 'Necklace', - 'Sunflower', 'Piano', 'Organ', 'Harpsichord', 'Bed', 'Cabinetry', - 'Nightstand', 'Curtain', 'Chest of drawers', 'Drawer', 'Parrot', - 'Sandal', 'High heels', 'Tableware', 'Cart', 'Mushroom', 'Kite', - 'Missile', 'Seafood', 'Camera', 'Paper towel', 'Toilet paper', - 'Sombrero', 'Radish', 'Lighthouse', 'Segway', 'Pig', 'Watercraft', - 'Golf cart', 'studio couch', 'Dolphin', 'Whale', 'Earrings', 'Otter', - 'Sea lion', 'Whiteboard', 'Monkey', 'Gondola', 'Zebra', - 'Baseball glove', 'Scarf', 'Adhesive tape', 'Trousers', 'Scoreboard', - 'Lily', 'Carnivore', 'Power plugs and sockets', 'Office building', - 'Sandwich', 'Swimming pool', 'Headphones', 'Tin can', 'Crown', 'Doll', - 'Cake', 'Frog', 'Beetle', 'Ant', 'Gas stove', 'Canoe', 'Falcon', - 'Blue jay', 'Egg', 'Fire hydrant', 'Raccoon', 'Muffin', 'Wall clock', - 'Coffee', 'Mug', 'Tea', 'Bear', 'Waste container', 'Home appliance', - 'Candle', 'Lion', 'Mirror', 'Starfish', 'Marine mammal', 'Wheelchair', - 'Umbrella', 'Alpaca', 'Violin', 'Cello', 'Brown bear', 'Canary', 'Bat', - 'Ruler', 'Plastic bag', 'Penguin', 'Watermelon', 'Harbor seal', 'Pen', - 'Pumpkin', 'Harp', 'Kitchen appliance', 'Roller skates', 'Bust', - 'Coffee table', 'Tennis ball', 'Tennis racket', 'Ladder', 'Boot', - 'Bowl', 'Stop sign', 'Volleyball', 'Eagle', 'Paddle', 'Chicken', - 'Skull', 'Lamp', 'Beehive', 'Maple', 'Sink', 'Goldfish', 'Tripod', - 'Coconut', 'Bidet', 'Tap', 'Bathroom cabinet', 'Toilet', - 'Filing cabinet', 'Pretzel', 'Table tennis racket', 'Bronze sculpture', - 'Rocket', 'Mouse', 'Hamster', 'Lizard', 'Lifejacket', 'Goat', - 'Washing machine', 'Trumpet', 'Horn', 'Trombone', 'Sheep', - 'Tablet computer', 'Pillow', 'Kitchen & dining room table', - 'Parachute', 'Raven', 'Glove', 'Loveseat', 'Christmas tree', - 'Shellfish', 'Rifle', 'Shotgun', 'Sushi', 'Sparrow', 'Bread', - 'Toaster', 'Watch', 'Asparagus', 'Artichoke', 'Suitcase', 'Antelope', - 'Broccoli', 'Ice cream', 'Racket', 'Banana', 'Cookie', 'Cucumber', - 'Dragonfly', 'Lynx', 'Caterpillar', 'Light bulb', 'Office supplies', - 'Miniskirt', 'Skirt', 'Fireplace', 'Potato', 'Light switch', - 'Croissant', 'Cabbage', 'Ladybug', 'Handgun', 'Luggage and bags', - 'Window blind', 'Snowboard', 'Baseball bat', 'Digital clock', - 'Serving tray', 'Infant bed', 'Sofa bed', 'Guacamole', 'Fox', 'Pizza', - 'Snowplow', 'Jet ski', 'Refrigerator', 'Lantern', 'Convenience store', - 'Sword', 'Rugby ball', 'Owl', 'Ostrich', 'Pancake', 'Strawberry', - 'Carrot', 'Tart', 'Dice', 'Turkey', 'Rabbit', 'Invertebrate', 'Vase', - 'Stool', 'Swim cap', 'Shower', 'Clock', 'Jellyfish', 'Aircraft', - 'Chopsticks', 'Orange', 'Snake', 'Sewing machine', 'Kangaroo', 'Mixer', - 'Food 
processor', 'Shrimp', 'Towel', 'Porcupine', 'Jaguar', 'Cannon', - 'Limousine', 'Mule', 'Squirrel', 'Kitchen knife', 'Tiara', 'Tiger', - 'Bow and arrow', 'Candy', 'Rhinoceros', 'Shark', 'Cricket ball', - 'Doughnut', 'Plumbing fixture', 'Camel', 'Polar bear', 'Coin', - 'Printer', 'Blender', 'Giraffe', 'Billiard table', 'Kettle', - 'Dinosaur', 'Pineapple', 'Zucchini', 'Jug', 'Barge', 'Teapot', - 'Golf ball', 'Binoculars', 'Scissors', 'Hot dog', 'Door handle', - 'Seahorse', 'Bathtub', 'Leopard', 'Centipede', 'Grapefruit', 'Snowman', - 'Cheetah', 'Alarm clock', 'Grape', 'Wrench', 'Wok', 'Bell pepper', - 'Cake stand', 'Barrel', 'Woodpecker', 'Flute', 'Corded phone', - 'Willow', 'Punching bag', 'Pomegranate', 'Telephone', 'Pear', - 'Common fig', 'Bench', 'Wood-burning stove', 'Burrito', 'Nail', - 'Turtle', 'Submarine sandwich', 'Drinking straw', 'Peach', 'Popcorn', - 'Frying pan', 'Picnic basket', 'Honeycomb', 'Envelope', 'Mango', - 'Cutting board', 'Pitcher', 'Stationary bicycle', 'Dumbbell', - 'Personal care', 'Dog bed', 'Snowmobile', 'Oboe', 'Briefcase', - 'Squash', 'Tick', 'Slow cooker', 'Coffeemaker', 'Measuring cup', - 'Crutch', 'Stretcher', 'Screwdriver', 'Flashlight', 'Spatula', - 'Pressure cooker', 'Ring binder', 'Beaker', 'Torch', 'Winter melon' - ] - - -def oid_v6_classes(): - return [ - 'Tortoise', 'Container', 'Magpie', 'Sea turtle', 'Football', - 'Ambulance', 'Ladder', 'Toothbrush', 'Syringe', 'Sink', 'Toy', - 'Organ (Musical Instrument)', 'Cassette deck', 'Apple', 'Human eye', - 'Cosmetics', 'Paddle', 'Snowman', 'Beer', 'Chopsticks', 'Human beard', - 'Bird', 'Parking meter', 'Traffic light', 'Croissant', 'Cucumber', - 'Radish', 'Towel', 'Doll', 'Skull', 'Washing machine', 'Glove', 'Tick', - 'Belt', 'Sunglasses', 'Banjo', 'Cart', 'Ball', 'Backpack', 'Bicycle', - 'Home appliance', 'Centipede', 'Boat', 'Surfboard', 'Boot', - 'Headphones', 'Hot dog', 'Shorts', 'Fast food', 'Bus', 'Boy', - 'Screwdriver', 'Bicycle wheel', 'Barge', 'Laptop', 'Miniskirt', - 'Drill (Tool)', 'Dress', 'Bear', 'Waffle', 'Pancake', 'Brown bear', - 'Woodpecker', 'Blue jay', 'Pretzel', 'Bagel', 'Tower', 'Teapot', - 'Person', 'Bow and arrow', 'Swimwear', 'Beehive', 'Brassiere', 'Bee', - 'Bat (Animal)', 'Starfish', 'Popcorn', 'Burrito', 'Chainsaw', - 'Balloon', 'Wrench', 'Tent', 'Vehicle registration plate', 'Lantern', - 'Toaster', 'Flashlight', 'Billboard', 'Tiara', 'Limousine', 'Necklace', - 'Carnivore', 'Scissors', 'Stairs', 'Computer keyboard', 'Printer', - 'Traffic sign', 'Chair', 'Shirt', 'Poster', 'Cheese', 'Sock', - 'Fire hydrant', 'Land vehicle', 'Earrings', 'Tie', 'Watercraft', - 'Cabinetry', 'Suitcase', 'Muffin', 'Bidet', 'Snack', 'Snowmobile', - 'Clock', 'Medical equipment', 'Cattle', 'Cello', 'Jet ski', 'Camel', - 'Coat', 'Suit', 'Desk', 'Cat', 'Bronze sculpture', 'Juice', 'Gondola', - 'Beetle', 'Cannon', 'Computer mouse', 'Cookie', 'Office building', - 'Fountain', 'Coin', 'Calculator', 'Cocktail', 'Computer monitor', - 'Box', 'Stapler', 'Christmas tree', 'Cowboy hat', 'Hiking equipment', - 'Studio couch', 'Drum', 'Dessert', 'Wine rack', 'Drink', 'Zucchini', - 'Ladle', 'Human mouth', 'Dairy Product', 'Dice', 'Oven', 'Dinosaur', - 'Ratchet (Device)', 'Couch', 'Cricket ball', 'Winter melon', 'Spatula', - 'Whiteboard', 'Pencil sharpener', 'Door', 'Hat', 'Shower', 'Eraser', - 'Fedora', 'Guacamole', 'Dagger', 'Scarf', 'Dolphin', 'Sombrero', - 'Tin can', 'Mug', 'Tap', 'Harbor seal', 'Stretcher', 'Can opener', - 'Goggles', 'Human body', 'Roller skates', 'Coffee cup', - 'Cutting board', 'Blender', 'Plumbing 
fixture', 'Stop sign', - 'Office supplies', 'Volleyball (Ball)', 'Vase', 'Slow cooker', - 'Wardrobe', 'Coffee', 'Whisk', 'Paper towel', 'Personal care', 'Food', - 'Sun hat', 'Tree house', 'Flying disc', 'Skirt', 'Gas stove', - 'Salt and pepper shakers', 'Mechanical fan', 'Face powder', 'Fax', - 'Fruit', 'French fries', 'Nightstand', 'Barrel', 'Kite', 'Tart', - 'Treadmill', 'Fox', 'Flag', 'French horn', 'Window blind', - 'Human foot', 'Golf cart', 'Jacket', 'Egg (Food)', 'Street light', - 'Guitar', 'Pillow', 'Human leg', 'Isopod', 'Grape', 'Human ear', - 'Power plugs and sockets', 'Panda', 'Giraffe', 'Woman', 'Door handle', - 'Rhinoceros', 'Bathtub', 'Goldfish', 'Houseplant', 'Goat', - 'Baseball bat', 'Baseball glove', 'Mixing bowl', - 'Marine invertebrates', 'Kitchen utensil', 'Light switch', 'House', - 'Horse', 'Stationary bicycle', 'Hammer', 'Ceiling fan', 'Sofa bed', - 'Adhesive tape', 'Harp', 'Sandal', 'Bicycle helmet', 'Saucer', - 'Harpsichord', 'Human hair', 'Heater', 'Harmonica', 'Hamster', - 'Curtain', 'Bed', 'Kettle', 'Fireplace', 'Scale', 'Drinking straw', - 'Insect', 'Hair dryer', 'Kitchenware', 'Indoor rower', 'Invertebrate', - 'Food processor', 'Bookcase', 'Refrigerator', 'Wood-burning stove', - 'Punching bag', 'Common fig', 'Cocktail shaker', 'Jaguar (Animal)', - 'Golf ball', 'Fashion accessory', 'Alarm clock', 'Filing cabinet', - 'Artichoke', 'Table', 'Tableware', 'Kangaroo', 'Koala', 'Knife', - 'Bottle', 'Bottle opener', 'Lynx', 'Lavender (Plant)', 'Lighthouse', - 'Dumbbell', 'Human head', 'Bowl', 'Humidifier', 'Porch', 'Lizard', - 'Billiard table', 'Mammal', 'Mouse', 'Motorcycle', - 'Musical instrument', 'Swim cap', 'Frying pan', 'Snowplow', - 'Bathroom cabinet', 'Missile', 'Bust', 'Man', 'Waffle iron', 'Milk', - 'Ring binder', 'Plate', 'Mobile phone', 'Baked goods', 'Mushroom', - 'Crutch', 'Pitcher (Container)', 'Mirror', 'Personal flotation device', - 'Table tennis racket', 'Pencil case', 'Musical keyboard', 'Scoreboard', - 'Briefcase', 'Kitchen knife', 'Nail (Construction)', 'Tennis ball', - 'Plastic bag', 'Oboe', 'Chest of drawers', 'Ostrich', 'Piano', 'Girl', - 'Plant', 'Potato', 'Hair spray', 'Sports equipment', 'Pasta', - 'Penguin', 'Pumpkin', 'Pear', 'Infant bed', 'Polar bear', 'Mixer', - 'Cupboard', 'Jacuzzi', 'Pizza', 'Digital clock', 'Pig', 'Reptile', - 'Rifle', 'Lipstick', 'Skateboard', 'Raven', 'High heels', 'Red panda', - 'Rose', 'Rabbit', 'Sculpture', 'Saxophone', 'Shotgun', 'Seafood', - 'Submarine sandwich', 'Snowboard', 'Sword', 'Picture frame', 'Sushi', - 'Loveseat', 'Ski', 'Squirrel', 'Tripod', 'Stethoscope', 'Submarine', - 'Scorpion', 'Segway', 'Training bench', 'Snake', 'Coffee table', - 'Skyscraper', 'Sheep', 'Television', 'Trombone', 'Tea', 'Tank', 'Taco', - 'Telephone', 'Torch', 'Tiger', 'Strawberry', 'Trumpet', 'Tree', - 'Tomato', 'Train', 'Tool', 'Picnic basket', 'Cooking spray', - 'Trousers', 'Bowling equipment', 'Football helmet', 'Truck', - 'Measuring cup', 'Coffeemaker', 'Violin', 'Vehicle', 'Handbag', - 'Paper cutter', 'Wine', 'Weapon', 'Wheel', 'Worm', 'Wok', 'Whale', - 'Zebra', 'Auto part', 'Jug', 'Pizza cutter', 'Cream', 'Monkey', 'Lion', - 'Bread', 'Platter', 'Chicken', 'Eagle', 'Helicopter', 'Owl', 'Duck', - 'Turtle', 'Hippopotamus', 'Crocodile', 'Toilet', 'Toilet paper', - 'Squid', 'Clothing', 'Footwear', 'Lemon', 'Spider', 'Deer', 'Frog', - 'Banana', 'Rocket', 'Wine glass', 'Countertop', 'Tablet computer', - 'Waste container', 'Swimming pool', 'Dog', 'Book', 'Elephant', 'Shark', - 'Candle', 'Leopard', 'Axe', 'Hand dryer', 'Soap 
dispenser', - 'Porcupine', 'Flower', 'Canary', 'Cheetah', 'Palm tree', 'Hamburger', - 'Maple', 'Building', 'Fish', 'Lobster', 'Garden Asparagus', - 'Furniture', 'Hedgehog', 'Airplane', 'Spoon', 'Otter', 'Bull', - 'Oyster', 'Horizontal bar', 'Convenience store', 'Bomb', 'Bench', - 'Ice cream', 'Caterpillar', 'Butterfly', 'Parachute', 'Orange', - 'Antelope', 'Beaker', 'Moths and butterflies', 'Window', 'Closet', - 'Castle', 'Jellyfish', 'Goose', 'Mule', 'Swan', 'Peach', 'Coconut', - 'Seat belt', 'Raccoon', 'Chisel', 'Fork', 'Lamp', 'Camera', - 'Squash (Plant)', 'Racket', 'Human face', 'Human arm', 'Vegetable', - 'Diaper', 'Unicycle', 'Falcon', 'Chime', 'Snail', 'Shellfish', - 'Cabbage', 'Carrot', 'Mango', 'Jeans', 'Flowerpot', 'Pineapple', - 'Drawer', 'Stool', 'Envelope', 'Cake', 'Dragonfly', 'Common sunflower', - 'Microwave oven', 'Honeycomb', 'Marine mammal', 'Sea lion', 'Ladybug', - 'Shelf', 'Watch', 'Candy', 'Salad', 'Parrot', 'Handgun', 'Sparrow', - 'Van', 'Grinder', 'Spice rack', 'Light bulb', 'Corded phone', - 'Sports uniform', 'Tennis racket', 'Wall clock', 'Serving tray', - 'Kitchen & dining room table', 'Dog bed', 'Cake stand', - 'Cat furniture', 'Bathroom accessory', 'Facial tissue holder', - 'Pressure cooker', 'Kitchen appliance', 'Tire', 'Ruler', - 'Luggage and bags', 'Microphone', 'Broccoli', 'Umbrella', 'Pastry', - 'Grapefruit', 'Band-aid', 'Animal', 'Bell pepper', 'Turkey', 'Lily', - 'Pomegranate', 'Doughnut', 'Glasses', 'Human nose', 'Pen', 'Ant', - 'Car', 'Aircraft', 'Human hand', 'Skunk', 'Teddy bear', 'Watermelon', - 'Cantaloupe', 'Dishwasher', 'Flute', 'Balance beam', 'Sandwich', - 'Shrimp', 'Sewing machine', 'Binoculars', 'Rays and skates', 'Ipod', - 'Accordion', 'Willow', 'Crab', 'Crown', 'Seahorse', 'Perfume', - 'Alpaca', 'Taxi', 'Canoe', 'Remote control', 'Wheelchair', - 'Rugby ball', 'Armadillo', 'Maracas', 'Helmet' - ] - - -dataset_aliases = { - 'voc': ['voc', 'pascal_voc', 'voc07', 'voc12'], - 'imagenet_det': ['det', 'imagenet_det', 'ilsvrc_det'], - 'imagenet_vid': ['vid', 'imagenet_vid', 'ilsvrc_vid'], - 'coco': ['coco', 'mscoco', 'ms_coco'], - 'wider_face': ['WIDERFaceDataset', 'wider_face', 'WIDERFace'], - 'cityscapes': ['cityscapes'], - 'oid_challenge': ['oid_challenge', 'openimages_challenge'], - 'oid_v6': ['oid_v6', 'openimages_v6'] -} - - -def get_classes(dataset): - """Get class names of a dataset.""" - alias2name = {} - for name, aliases in dataset_aliases.items(): - for alias in aliases: - alias2name[alias] = name - - if mmcv.is_str(dataset): - if dataset in alias2name: - labels = eval(alias2name[dataset] + '_classes()') - else: - raise ValueError(f'Unrecognized dataset: {dataset}') - else: - raise TypeError(f'dataset must a str, but got {type(dataset)}') - return labels diff --git a/cv/detection/yolof/pytorch/mmdet/core/evaluation/eval_hooks.py b/cv/detection/yolof/pytorch/mmdet/core/evaluation/eval_hooks.py deleted file mode 100755 index 98856c18..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/evaluation/eval_hooks.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import bisect -import os.path as osp - -import mmcv -import torch.distributed as dist -from mmcv.runner import DistEvalHook as BaseDistEvalHook -from mmcv.runner import EvalHook as BaseEvalHook -from torch.nn.modules.batchnorm import _BatchNorm - - -def _calc_dynamic_intervals(start_interval, dynamic_interval_list): - assert mmcv.is_list_of(dynamic_interval_list, tuple) - - dynamic_milestones = [0] - dynamic_milestones.extend( - [dynamic_interval[0] for dynamic_interval in dynamic_interval_list]) - dynamic_intervals = [start_interval] - dynamic_intervals.extend( - [dynamic_interval[1] for dynamic_interval in dynamic_interval_list]) - return dynamic_milestones, dynamic_intervals - - -class EvalHook(BaseEvalHook): - - def __init__(self, *args, dynamic_intervals=None, **kwargs): - super(EvalHook, self).__init__(*args, **kwargs) - self.latest_results = None - - self.use_dynamic_intervals = dynamic_intervals is not None - if self.use_dynamic_intervals: - self.dynamic_milestones, self.dynamic_intervals = \ - _calc_dynamic_intervals(self.interval, dynamic_intervals) - - def _decide_interval(self, runner): - if self.use_dynamic_intervals: - progress = runner.epoch if self.by_epoch else runner.iter - step = bisect.bisect(self.dynamic_milestones, (progress + 1)) - # Dynamically modify the evaluation interval - self.interval = self.dynamic_intervals[step - 1] - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training by epoch.""" - self._decide_interval(runner) - super().before_train_epoch(runner) - - def before_train_iter(self, runner): - self._decide_interval(runner) - super().before_train_iter(runner) - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - if not self._should_evaluate(runner): - return - - from mmdet.apis import single_gpu_test - - # Changed results to self.results so that MMDetWandbHook can access - # the evaluation results and log them to wandb. - results = single_gpu_test(runner.model, self.dataloader, show=False) - self.latest_results = results - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - # the key_score may be `None` so it needs to skip the action to save - # the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) - - -# Note: Considering that MMCV's EvalHook updated its interface in V1.3.16, -# in order to avoid strong version dependency, we did not directly -# inherit EvalHook but BaseDistEvalHook. 
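# Illustrative example (interval values assumed): with interval=1 and
# dynamic_intervals=[(8, 2), (11, 1)], _calc_dynamic_intervals() returns
# milestones [0, 8, 11] and intervals [1, 2, 1]; at a 0-based epoch of 9,
# bisect.bisect([0, 8, 11], 10) == 2, so _decide_interval() selects an
# evaluation interval of 2, and once progress + 1 reaches 11 it drops to 1.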
-class DistEvalHook(BaseDistEvalHook): - - def __init__(self, *args, dynamic_intervals=None, **kwargs): - super(DistEvalHook, self).__init__(*args, **kwargs) - self.latest_results = None - - self.use_dynamic_intervals = dynamic_intervals is not None - if self.use_dynamic_intervals: - self.dynamic_milestones, self.dynamic_intervals = \ - _calc_dynamic_intervals(self.interval, dynamic_intervals) - - def _decide_interval(self, runner): - if self.use_dynamic_intervals: - progress = runner.epoch if self.by_epoch else runner.iter - step = bisect.bisect(self.dynamic_milestones, (progress + 1)) - # Dynamically modify the evaluation interval - self.interval = self.dynamic_intervals[step - 1] - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training by epoch.""" - self._decide_interval(runner) - super().before_train_epoch(runner) - - def before_train_iter(self, runner): - self._decide_interval(runner) - super().before_train_iter(runner) - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - # Synchronization of BatchNorm's buffer (running_mean - # and running_var) is not supported in the DDP of pytorch, - # which may cause the inconsistent performance of models in - # different ranks, so we broadcast BatchNorm's buffers - # of rank 0 to other ranks to avoid this. - if self.broadcast_bn_buffer: - model = runner.model - for name, module in model.named_modules(): - if isinstance(module, - _BatchNorm) and module.track_running_stats: - dist.broadcast(module.running_var, 0) - dist.broadcast(module.running_mean, 0) - - if not self._should_evaluate(runner): - return - - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - - from mmdet.apis import multi_gpu_test - - # Changed results to self.results so that MMDetWandbHook can access - # the evaluation results and log them to wandb. - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect) - self.latest_results = results - if runner.rank == 0: - print('\n') - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - - # the key_score may be `None` so it needs to skip - # the action to save the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) diff --git a/cv/detection/yolof/pytorch/mmdet/core/evaluation/mean_ap.py b/cv/detection/yolof/pytorch/mmdet/core/evaluation/mean_ap.py deleted file mode 100755 index a293b80f..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/evaluation/mean_ap.py +++ /dev/null @@ -1,782 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from multiprocessing import Pool - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .bbox_overlaps import bbox_overlaps -from .class_names import get_classes - - -def average_precision(recalls, precisions, mode='area'): - """Calculate average precision (for single or multiple scales). 
- - Args: - recalls (ndarray): shape (num_scales, num_dets) or (num_dets, ) - precisions (ndarray): shape (num_scales, num_dets) or (num_dets, ) - mode (str): 'area' or '11points', 'area' means calculating the area - under precision-recall curve, '11points' means calculating - the average precision of recalls at [0, 0.1, ..., 1] - - Returns: - float or ndarray: calculated average precision - """ - no_scale = False - if recalls.ndim == 1: - no_scale = True - recalls = recalls[np.newaxis, :] - precisions = precisions[np.newaxis, :] - assert recalls.shape == precisions.shape and recalls.ndim == 2 - num_scales = recalls.shape[0] - ap = np.zeros(num_scales, dtype=np.float32) - if mode == 'area': - zeros = np.zeros((num_scales, 1), dtype=recalls.dtype) - ones = np.ones((num_scales, 1), dtype=recalls.dtype) - mrec = np.hstack((zeros, recalls, ones)) - mpre = np.hstack((zeros, precisions, zeros)) - for i in range(mpre.shape[1] - 1, 0, -1): - mpre[:, i - 1] = np.maximum(mpre[:, i - 1], mpre[:, i]) - for i in range(num_scales): - ind = np.where(mrec[i, 1:] != mrec[i, :-1])[0] - ap[i] = np.sum( - (mrec[i, ind + 1] - mrec[i, ind]) * mpre[i, ind + 1]) - elif mode == '11points': - for i in range(num_scales): - for thr in np.arange(0, 1 + 1e-3, 0.1): - precs = precisions[i, recalls[i, :] >= thr] - prec = precs.max() if precs.size > 0 else 0 - ap[i] += prec - ap /= 11 - else: - raise ValueError( - 'Unrecognized mode, only "area" and "11points" are supported') - if no_scale: - ap = ap[0] - return ap - - -def tpfp_imagenet(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - default_iou_thr=0.5, - area_ranges=None, - use_legacy_coordinate=False, - **kwargs): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - default_iou_thr (float): IoU threshold to be considered as matched for - medium and large bboxes (small ones have special rules). - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be evaluated, - in the format [(min1, max1), (min2, max2), ...]. Default: None. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - - Returns: - tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of - each array is (num_scales, m). - """ - - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. - - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp - # of a certain scale. - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] 
= 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp - ious = bbox_overlaps( - det_bboxes, gt_bboxes - 1, use_legacy_coordinate=use_legacy_coordinate) - gt_w = gt_bboxes[:, 2] - gt_bboxes[:, 0] + extra_length - gt_h = gt_bboxes[:, 3] - gt_bboxes[:, 1] + extra_length - iou_thrs = np.minimum((gt_w * gt_h) / ((gt_w + 10.0) * (gt_h + 10.0)), - default_iou_thr) - # sort all detections by scores in descending order - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = gt_w * gt_h - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - max_iou = -1 - matched_gt = -1 - # find best overlapped available gt - for j in range(num_gts): - # different from PASCAL VOC: allow finding other gts if the - # best overlapped ones are already matched by other det bboxes - if gt_covered[j]: - continue - elif ious[i, j] >= iou_thrs[j] and ious[i, j] > max_iou: - max_iou = ious[i, j] - matched_gt = j - # there are 4 cases for a det bbox: - # 1. it matches a gt, tp = 1, fp = 0 - # 2. it matches an ignored gt, tp = 0, fp = 0 - # 3. it matches no gt and within area range, tp = 0, fp = 1 - # 4. it matches no gt but is beyond area range, tp = 0, fp = 0 - if matched_gt >= 0: - gt_covered[matched_gt] = 1 - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - tp[k, i] = 1 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0] + extra_length) * ( - bbox[3] - bbox[1] + extra_length) - if area >= min_area and area < max_area: - fp[k, i] = 1 - return tp, fp - - -def tpfp_default(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - iou_thr=0.5, - area_ranges=None, - use_legacy_coordinate=False, - **kwargs): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be - evaluated, in the format [(min1, max1), (min2, max2), ...]. - Default: None. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - - Returns: - tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of - each array is (num_scales, m). - """ - - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. 
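    # Illustrative note (counts assumed): with 2 annotated gts and 1 ignored gt,
    # the concatenation below gives gt_ignore_inds = [False, False, True]; a
    # detection whose best match is an ignored gt counts as neither tp nor fp.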
- - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp of - # a certain scale - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - - # if there is no gt bboxes in this image, then all det bboxes - # within area range are false positives - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] = 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp - - ious = bbox_overlaps( - det_bboxes, gt_bboxes, use_legacy_coordinate=use_legacy_coordinate) - # for each det, the max iou with all gts - ious_max = ious.max(axis=1) - # for each det, which gt overlaps most with it - ious_argmax = ious.argmax(axis=1) - # sort all dets in descending order by scores - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0] + extra_length) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1] + extra_length) - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - if ious_max[i] >= iou_thr: - matched_gt = ious_argmax[i] - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - if not gt_covered[matched_gt]: - gt_covered[matched_gt] = True - tp[k, i] = 1 - else: - fp[k, i] = 1 - # otherwise ignore this detected bbox, tp = 0, fp = 0 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0] + extra_length) * ( - bbox[3] - bbox[1] + extra_length) - if area >= min_area and area < max_area: - fp[k, i] = 1 - return tp, fp - - -def tpfp_openimages(det_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - iou_thr=0.5, - area_ranges=None, - use_legacy_coordinate=False, - gt_bboxes_group_of=None, - use_group_of=True, - ioa_thr=0.5, - **kwargs): - """Check if detected bboxes are true positive or false positive. - - Args: - det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5). - gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4). - gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, - of shape (k, 4). Default: None - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - area_ranges (list[tuple] | None): Range of bbox areas to be - evaluated, in the format [(min1, max1), (min2, max2), ...]. - Default: None. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - gt_bboxes_group_of (ndarray): GT group_of of this image, of shape - (k, 1). 
Default: None - use_group_of (bool): Whether to use group of when calculate TP and FP, - which only used in OpenImages evaluation. Default: True. - ioa_thr (float | None): IoA threshold to be considered as matched, - which only used in OpenImages evaluation. Default: 0.5. - - Returns: - tuple[np.ndarray]: Returns a tuple (tp, fp, det_bboxes), where - (tp, fp) whose elements are 0 and 1. The shape of each array is - (num_scales, m). (det_bboxes) whose will filter those are not - matched by group of gts when processing Open Images evaluation. - The shape is (num_scales, m). - """ - - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. - - # an indicator of ignored gts - gt_ignore_inds = np.concatenate( - (np.zeros(gt_bboxes.shape[0], dtype=np.bool), - np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool))) - # stack gt_bboxes and gt_bboxes_ignore for convenience - gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) - - num_dets = det_bboxes.shape[0] - num_gts = gt_bboxes.shape[0] - if area_ranges is None: - area_ranges = [(None, None)] - num_scales = len(area_ranges) - # tp and fp are of shape (num_scales, num_gts), each row is tp or fp of - # a certain scale - tp = np.zeros((num_scales, num_dets), dtype=np.float32) - fp = np.zeros((num_scales, num_dets), dtype=np.float32) - - # if there is no gt bboxes in this image, then all det bboxes - # within area range are false positives - if gt_bboxes.shape[0] == 0: - if area_ranges == [(None, None)]: - fp[...] = 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - return tp, fp, det_bboxes - - if gt_bboxes_group_of is not None and use_group_of: - # if handle group-of boxes, divided gt boxes into two parts: - # non-group-of and group-of.Then calculate ious and ioas through - # non-group-of group-of gts respectively. This only used in - # OpenImages evaluation. 
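        # Illustrative note (mask assumed): with gt_bboxes_group_of =
        # [False, True, False], rows 0 and 2 of gt_bboxes form
        # non_group_gt_bboxes (matched by IoU) and row 1 forms group_gt_bboxes
        # (matched by IoF against ioa_thr).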
- assert gt_bboxes_group_of.shape[0] == gt_bboxes.shape[0] - non_group_gt_bboxes = gt_bboxes[~gt_bboxes_group_of] - group_gt_bboxes = gt_bboxes[gt_bboxes_group_of] - num_gts_group = group_gt_bboxes.shape[0] - ious = bbox_overlaps(det_bboxes, non_group_gt_bboxes) - ioas = bbox_overlaps(det_bboxes, group_gt_bboxes, mode='iof') - else: - # if not consider group-of boxes, only calculate ious through gt boxes - ious = bbox_overlaps( - det_bboxes, gt_bboxes, use_legacy_coordinate=use_legacy_coordinate) - ioas = None - - if ious.shape[1] > 0: - # for each det, the max iou with all gts - ious_max = ious.max(axis=1) - # for each det, which gt overlaps most with it - ious_argmax = ious.argmax(axis=1) - # sort all dets in descending order by scores - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - gt_covered = np.zeros(num_gts, dtype=bool) - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = ( - gt_bboxes[:, 2] - gt_bboxes[:, 0] + extra_length) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1] + extra_length) - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - if ious_max[i] >= iou_thr: - matched_gt = ious_argmax[i] - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - if not gt_covered[matched_gt]: - gt_covered[matched_gt] = True - tp[k, i] = 1 - else: - fp[k, i] = 1 - # otherwise ignore this detected bbox, tp = 0, fp = 0 - elif min_area is None: - fp[k, i] = 1 - else: - bbox = det_bboxes[i, :4] - area = (bbox[2] - bbox[0] + extra_length) * ( - bbox[3] - bbox[1] + extra_length) - if area >= min_area and area < max_area: - fp[k, i] = 1 - else: - # if there is no no-group-of gt bboxes in this image, - # then all det bboxes within area range are false positives. - # Only used in OpenImages evaluation. - if area_ranges == [(None, None)]: - fp[...] = 1 - else: - det_areas = ( - det_bboxes[:, 2] - det_bboxes[:, 0] + extra_length) * ( - det_bboxes[:, 3] - det_bboxes[:, 1] + extra_length) - for i, (min_area, max_area) in enumerate(area_ranges): - fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1 - - if ioas is None or ioas.shape[1] <= 0: - return tp, fp, det_bboxes - else: - # The evaluation of group-of TP and FP are done in two stages: - # 1. All detections are first matched to non group-of boxes; true - # positives are determined. - # 2. Detections that are determined as false positives are matched - # against group-of boxes and calculated group-of TP and FP. - # Only used in OpenImages evaluation. 
- det_bboxes_group = np.zeros( - (num_scales, ioas.shape[1], det_bboxes.shape[1]), dtype=float) - match_group_of = np.zeros((num_scales, num_dets), dtype=bool) - tp_group = np.zeros((num_scales, num_gts_group), dtype=np.float32) - ioas_max = ioas.max(axis=1) - # for each det, which gt overlaps most with it - ioas_argmax = ioas.argmax(axis=1) - # sort all dets in descending order by scores - sort_inds = np.argsort(-det_bboxes[:, -1]) - for k, (min_area, max_area) in enumerate(area_ranges): - box_is_covered = tp[k] - # if no area range is specified, gt_area_ignore is all False - if min_area is None: - gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) - else: - gt_areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * ( - gt_bboxes[:, 3] - gt_bboxes[:, 1]) - gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area) - for i in sort_inds: - matched_gt = ioas_argmax[i] - if not box_is_covered[i]: - if ioas_max[i] >= ioa_thr: - if not (gt_ignore_inds[matched_gt] - or gt_area_ignore[matched_gt]): - if not tp_group[k, matched_gt]: - tp_group[k, matched_gt] = 1 - match_group_of[k, i] = True - else: - match_group_of[k, i] = True - - if det_bboxes_group[k, matched_gt, -1] < \ - det_bboxes[i, -1]: - det_bboxes_group[k, matched_gt] = \ - det_bboxes[i] - - fp_group = (tp_group <= 0).astype(float) - tps = [] - fps = [] - # concatenate tp, fp, and det-boxes which not matched group of - # gt boxes and tp_group, fp_group, and det_bboxes_group which - # matched group of boxes respectively. - for i in range(num_scales): - tps.append( - np.concatenate((tp[i][~match_group_of[i]], tp_group[i]))) - fps.append( - np.concatenate((fp[i][~match_group_of[i]], fp_group[i]))) - det_bboxes = np.concatenate( - (det_bboxes[~match_group_of[i]], det_bboxes_group[i])) - - tp = np.vstack(tps) - fp = np.vstack(fps) - return tp, fp, det_bboxes - - -def get_cls_results(det_results, annotations, class_id): - """Get det results and gt information of a certain class. - - Args: - det_results (list[list]): Same as `eval_map()`. - annotations (list[dict]): Same as `eval_map()`. - class_id (int): ID of a specific class. - - Returns: - tuple[list[np.ndarray]]: detected bboxes, gt bboxes, ignored gt bboxes - """ - cls_dets = [img_res[class_id] for img_res in det_results] - cls_gts = [] - cls_gts_ignore = [] - for ann in annotations: - gt_inds = ann['labels'] == class_id - cls_gts.append(ann['bboxes'][gt_inds, :]) - - if ann.get('labels_ignore', None) is not None: - ignore_inds = ann['labels_ignore'] == class_id - cls_gts_ignore.append(ann['bboxes_ignore'][ignore_inds, :]) - else: - cls_gts_ignore.append(np.empty((0, 4), dtype=np.float32)) - - return cls_dets, cls_gts, cls_gts_ignore - - -def get_cls_group_ofs(annotations, class_id): - """Get `gt_group_of` of a certain class, which is used in Open Images. - - Args: - annotations (list[dict]): Same as `eval_map()`. - class_id (int): ID of a specific class. - - Returns: - list[np.ndarray]: `gt_group_of` of a certain class. - """ - gt_group_ofs = [] - for ann in annotations: - gt_inds = ann['labels'] == class_id - if ann.get('gt_is_group_ofs', None) is not None: - gt_group_ofs.append(ann['gt_is_group_ofs'][gt_inds]) - else: - gt_group_ofs.append(np.empty((0, 1), dtype=np.bool)) - - return gt_group_ofs - - -def eval_map(det_results, - annotations, - scale_ranges=None, - iou_thr=0.5, - ioa_thr=None, - dataset=None, - logger=None, - tpfp_fn=None, - nproc=4, - use_legacy_coordinate=False, - use_group_of=False): - """Evaluate mAP of a dataset. 
- - Args: - det_results (list[list]): [[cls1_det, cls2_det, ...], ...]. - The outer list indicates images, and the inner list indicates - per-class detected bboxes. - annotations (list[dict]): Ground truth annotations where each item of - the list indicates an image. Keys of annotations are: - - - `bboxes`: numpy array of shape (n, 4) - - `labels`: numpy array of shape (n, ) - - `bboxes_ignore` (optional): numpy array of shape (k, 4) - - `labels_ignore` (optional): numpy array of shape (k, ) - scale_ranges (list[tuple] | None): Range of scales to be evaluated, - in the format [(min1, max1), (min2, max2), ...]. A range of - (32, 64) means the area range between (32**2, 64**2). - Default: None. - iou_thr (float): IoU threshold to be considered as matched. - Default: 0.5. - ioa_thr (float | None): IoA threshold to be considered as matched, - which only used in OpenImages evaluation. Default: None. - dataset (list[str] | str | None): Dataset name or dataset classes, - there are minor differences in metrics for different datasets, e.g. - "voc07", "imagenet_det", etc. Default: None. - logger (logging.Logger | str | None): The way to print the mAP - summary. See `mmcv.utils.print_log()` for details. Default: None. - tpfp_fn (callable | None): The function used to determine true/ - false positives. If None, :func:`tpfp_default` is used as default - unless dataset is 'det' or 'vid' (:func:`tpfp_imagenet` in this - case). If it is given as a function, then this function is used - to evaluate tp & fp. Default None. - nproc (int): Processes used for computing TP and FP. - Default: 4. - use_legacy_coordinate (bool): Whether to use coordinate system in - mmdet v1.x. which means width, height should be - calculated as 'x2 - x1 + 1` and 'y2 - y1 + 1' respectively. - Default: False. - use_group_of (bool): Whether to use group of when calculate TP and FP, - which only used in OpenImages evaluation. Default: False. - - Returns: - tuple: (mAP, [dict, dict, ...]) - """ - assert len(det_results) == len(annotations) - if not use_legacy_coordinate: - extra_length = 0. - else: - extra_length = 1. - - num_imgs = len(det_results) - num_scales = len(scale_ranges) if scale_ranges is not None else 1 - num_classes = len(det_results[0]) # positive class num - area_ranges = ([(rg[0]**2, rg[1]**2) for rg in scale_ranges] - if scale_ranges is not None else None) - - # There is no need to use multi processes to process - # when num_imgs = 1 . - if num_imgs > 1: - assert nproc > 0, 'nproc must be at least one.' 
- nproc = min(nproc, num_imgs) - pool = Pool(nproc) - - eval_results = [] - for i in range(num_classes): - # get gt and det bboxes of this class - cls_dets, cls_gts, cls_gts_ignore = get_cls_results( - det_results, annotations, i) - # choose proper function according to datasets to compute tp and fp - if tpfp_fn is None: - if dataset in ['det', 'vid']: - tpfp_fn = tpfp_imagenet - elif dataset in ['oid_challenge', 'oid_v6'] \ - or use_group_of is True: - tpfp_fn = tpfp_openimages - else: - tpfp_fn = tpfp_default - if not callable(tpfp_fn): - raise ValueError( - f'tpfp_fn has to be a function or None, but got {tpfp_fn}') - - if num_imgs > 1: - # compute tp and fp for each image with multiple processes - args = [] - if use_group_of: - # used in Open Images Dataset evaluation - gt_group_ofs = get_cls_group_ofs(annotations, i) - args.append(gt_group_ofs) - args.append([use_group_of for _ in range(num_imgs)]) - if ioa_thr is not None: - args.append([ioa_thr for _ in range(num_imgs)]) - - tpfp = pool.starmap( - tpfp_fn, - zip(cls_dets, cls_gts, cls_gts_ignore, - [iou_thr for _ in range(num_imgs)], - [area_ranges for _ in range(num_imgs)], - [use_legacy_coordinate for _ in range(num_imgs)], *args)) - else: - tpfp = tpfp_fn( - cls_dets[0], - cls_gts[0], - cls_gts_ignore[0], - iou_thr, - area_ranges, - use_legacy_coordinate, - gt_bboxes_group_of=(get_cls_group_ofs(annotations, i)[0] - if use_group_of else None), - use_group_of=use_group_of, - ioa_thr=ioa_thr) - tpfp = [tpfp] - - if use_group_of: - tp, fp, cls_dets = tuple(zip(*tpfp)) - else: - tp, fp = tuple(zip(*tpfp)) - # calculate gt number of each scale - # ignored gts or gts beyond the specific scale are not counted - num_gts = np.zeros(num_scales, dtype=int) - for j, bbox in enumerate(cls_gts): - if area_ranges is None: - num_gts[0] += bbox.shape[0] - else: - gt_areas = (bbox[:, 2] - bbox[:, 0] + extra_length) * ( - bbox[:, 3] - bbox[:, 1] + extra_length) - for k, (min_area, max_area) in enumerate(area_ranges): - num_gts[k] += np.sum((gt_areas >= min_area) - & (gt_areas < max_area)) - # sort all det bboxes by score, also sort tp and fp - cls_dets = np.vstack(cls_dets) - num_dets = cls_dets.shape[0] - sort_inds = np.argsort(-cls_dets[:, -1]) - tp = np.hstack(tp)[:, sort_inds] - fp = np.hstack(fp)[:, sort_inds] - # calculate recall and precision with tp and fp - tp = np.cumsum(tp, axis=1) - fp = np.cumsum(fp, axis=1) - eps = np.finfo(np.float32).eps - recalls = tp / np.maximum(num_gts[:, np.newaxis], eps) - precisions = tp / np.maximum((tp + fp), eps) - # calculate AP - if scale_ranges is None: - recalls = recalls[0, :] - precisions = precisions[0, :] - num_gts = num_gts.item() - mode = 'area' if dataset != 'voc07' else '11points' - ap = average_precision(recalls, precisions, mode) - eval_results.append({ - 'num_gts': num_gts, - 'num_dets': num_dets, - 'recall': recalls, - 'precision': precisions, - 'ap': ap - }) - - if num_imgs > 1: - pool.close() - - if scale_ranges is not None: - # shape (num_classes, num_scales) - all_ap = np.vstack([cls_result['ap'] for cls_result in eval_results]) - all_num_gts = np.vstack( - [cls_result['num_gts'] for cls_result in eval_results]) - mean_ap = [] - for i in range(num_scales): - if np.any(all_num_gts[:, i] > 0): - mean_ap.append(all_ap[all_num_gts[:, i] > 0, i].mean()) - else: - mean_ap.append(0.0) - else: - aps = [] - for cls_result in eval_results: - if cls_result['num_gts'] > 0: - aps.append(cls_result['ap']) - mean_ap = np.array(aps).mean().item() if aps else 0.0 - - print_map_summary( - mean_ap, 
eval_results, dataset, area_ranges, logger=logger) - - return mean_ap, eval_results - - -def print_map_summary(mean_ap, - results, - dataset=None, - scale_ranges=None, - logger=None): - """Print mAP and results of each class. - - A table will be printed to show the gts/dets/recall/AP of each class and - the mAP. - - Args: - mean_ap (float): Calculated from `eval_map()`. - results (list[dict]): Calculated from `eval_map()`. - dataset (list[str] | str | None): Dataset name or dataset classes. - scale_ranges (list[tuple] | None): Range of scales to be evaluated. - logger (logging.Logger | str | None): The way to print the mAP - summary. See `mmcv.utils.print_log()` for details. Default: None. - """ - - if logger == 'silent': - return - - if isinstance(results[0]['ap'], np.ndarray): - num_scales = len(results[0]['ap']) - else: - num_scales = 1 - - if scale_ranges is not None: - assert len(scale_ranges) == num_scales - - num_classes = len(results) - - recalls = np.zeros((num_scales, num_classes), dtype=np.float32) - aps = np.zeros((num_scales, num_classes), dtype=np.float32) - num_gts = np.zeros((num_scales, num_classes), dtype=int) - for i, cls_result in enumerate(results): - if cls_result['recall'].size > 0: - recalls[:, i] = np.array(cls_result['recall'], ndmin=2)[:, -1] - aps[:, i] = cls_result['ap'] - num_gts[:, i] = cls_result['num_gts'] - - if dataset is None: - label_names = [str(i) for i in range(num_classes)] - elif mmcv.is_str(dataset): - label_names = get_classes(dataset) - else: - label_names = dataset - - if not isinstance(mean_ap, list): - mean_ap = [mean_ap] - - header = ['class', 'gts', 'dets', 'recall', 'ap'] - for i in range(num_scales): - if scale_ranges is not None: - print_log(f'Scale range {scale_ranges[i]}', logger=logger) - table_data = [header] - for j in range(num_classes): - row_data = [ - label_names[j], num_gts[i, j], results[j]['num_dets'], - f'{recalls[i, j]:.3f}', f'{aps[i, j]:.3f}' - ] - table_data.append(row_data) - table_data.append(['mAP', '', '', '', f'{mean_ap[i]:.3f}']) - table = AsciiTable(table_data) - table.inner_footing_row_border = True - print_log('\n' + table.table, logger=logger) diff --git a/cv/detection/yolof/pytorch/mmdet/core/evaluation/panoptic_utils.py b/cv/detection/yolof/pytorch/mmdet/core/evaluation/panoptic_utils.py deleted file mode 100755 index 10c9ad93..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/evaluation/panoptic_utils.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# A custom value to distinguish instance ID and category ID; need to -# be greater than the number of categories. -# For a pixel in the panoptic result map: -# pan_id = ins_id * INSTANCE_OFFSET + cat_id -INSTANCE_OFFSET = 1000 diff --git a/cv/detection/yolof/pytorch/mmdet/core/evaluation/recall.py b/cv/detection/yolof/pytorch/mmdet/core/evaluation/recall.py deleted file mode 100755 index 82b3c909..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/evaluation/recall.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections.abc import Sequence - -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .bbox_overlaps import bbox_overlaps - - -def _recalls(all_ious, proposal_nums, thrs): - - img_num = all_ious.shape[0] - total_gt_num = sum([ious.shape[0] for ious in all_ious]) - - _ious = np.zeros((proposal_nums.size, total_gt_num), dtype=np.float32) - for k, proposal_num in enumerate(proposal_nums): - tmp_ious = np.zeros(0) - for i in range(img_num): - ious = all_ious[i][:, :proposal_num].copy() - gt_ious = np.zeros((ious.shape[0])) - if ious.size == 0: - tmp_ious = np.hstack((tmp_ious, gt_ious)) - continue - for j in range(ious.shape[0]): - gt_max_overlaps = ious.argmax(axis=1) - max_ious = ious[np.arange(0, ious.shape[0]), gt_max_overlaps] - gt_idx = max_ious.argmax() - gt_ious[j] = max_ious[gt_idx] - box_idx = gt_max_overlaps[gt_idx] - ious[gt_idx, :] = -1 - ious[:, box_idx] = -1 - tmp_ious = np.hstack((tmp_ious, gt_ious)) - _ious[k, :] = tmp_ious - - _ious = np.fliplr(np.sort(_ious, axis=1)) - recalls = np.zeros((proposal_nums.size, thrs.size)) - for i, thr in enumerate(thrs): - recalls[:, i] = (_ious >= thr).sum(axis=1) / float(total_gt_num) - - return recalls - - -def set_recall_param(proposal_nums, iou_thrs): - """Check proposal_nums and iou_thrs and set correct format.""" - if isinstance(proposal_nums, Sequence): - _proposal_nums = np.array(proposal_nums) - elif isinstance(proposal_nums, int): - _proposal_nums = np.array([proposal_nums]) - else: - _proposal_nums = proposal_nums - - if iou_thrs is None: - _iou_thrs = np.array([0.5]) - elif isinstance(iou_thrs, Sequence): - _iou_thrs = np.array(iou_thrs) - elif isinstance(iou_thrs, float): - _iou_thrs = np.array([iou_thrs]) - else: - _iou_thrs = iou_thrs - - return _proposal_nums, _iou_thrs - - -def eval_recalls(gts, - proposals, - proposal_nums=None, - iou_thrs=0.5, - logger=None, - use_legacy_coordinate=False): - """Calculate recalls. - - Args: - gts (list[ndarray]): a list of arrays of shape (n, 4) - proposals (list[ndarray]): a list of arrays of shape (k, 4) or (k, 5) - proposal_nums (int | Sequence[int]): Top N proposals to be evaluated. - iou_thrs (float | Sequence[float]): IoU thresholds. Default: 0.5. - logger (logging.Logger | str | None): The way to print the recall - summary. See `mmcv.utils.print_log()` for details. Default: None. - use_legacy_coordinate (bool): Whether use coordinate system - in mmdet v1.x. "1" was added to both height and width - which means w, h should be - computed as 'x2 - x1 + 1` and 'y2 - y1 + 1'. Default: False. 
- - - Returns: - ndarray: recalls of different ious and proposal nums - """ - - img_num = len(gts) - assert img_num == len(proposals) - proposal_nums, iou_thrs = set_recall_param(proposal_nums, iou_thrs) - all_ious = [] - for i in range(img_num): - if proposals[i].ndim == 2 and proposals[i].shape[1] == 5: - scores = proposals[i][:, 4] - sort_idx = np.argsort(scores)[::-1] - img_proposal = proposals[i][sort_idx, :] - else: - img_proposal = proposals[i] - prop_num = min(img_proposal.shape[0], proposal_nums[-1]) - if gts[i] is None or gts[i].shape[0] == 0: - ious = np.zeros((0, img_proposal.shape[0]), dtype=np.float32) - else: - ious = bbox_overlaps( - gts[i], - img_proposal[:prop_num, :4], - use_legacy_coordinate=use_legacy_coordinate) - all_ious.append(ious) - all_ious = np.array(all_ious) - recalls = _recalls(all_ious, proposal_nums, iou_thrs) - - print_recall_summary(recalls, proposal_nums, iou_thrs, logger=logger) - return recalls - - -def print_recall_summary(recalls, - proposal_nums, - iou_thrs, - row_idxs=None, - col_idxs=None, - logger=None): - """Print recalls in a table. - - Args: - recalls (ndarray): calculated from `bbox_recalls` - proposal_nums (ndarray or list): top N proposals - iou_thrs (ndarray or list): iou thresholds - row_idxs (ndarray): which rows(proposal nums) to print - col_idxs (ndarray): which cols(iou thresholds) to print - logger (logging.Logger | str | None): The way to print the recall - summary. See `mmcv.utils.print_log()` for details. Default: None. - """ - proposal_nums = np.array(proposal_nums, dtype=np.int32) - iou_thrs = np.array(iou_thrs) - if row_idxs is None: - row_idxs = np.arange(proposal_nums.size) - if col_idxs is None: - col_idxs = np.arange(iou_thrs.size) - row_header = [''] + iou_thrs[col_idxs].tolist() - table_data = [row_header] - for i, num in enumerate(proposal_nums[row_idxs]): - row = [f'{val:.3f}' for val in recalls[row_idxs[i], col_idxs].tolist()] - row.insert(0, num) - table_data.append(row) - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - -def plot_num_recall(recalls, proposal_nums): - """Plot Proposal_num-Recalls curve. - - Args: - recalls(ndarray or list): shape (k,) - proposal_nums(ndarray or list): same shape as `recalls` - """ - if isinstance(proposal_nums, np.ndarray): - _proposal_nums = proposal_nums.tolist() - else: - _proposal_nums = proposal_nums - if isinstance(recalls, np.ndarray): - _recalls = recalls.tolist() - else: - _recalls = recalls - - import matplotlib.pyplot as plt - f = plt.figure() - plt.plot([0] + _proposal_nums, [0] + _recalls) - plt.xlabel('Proposal num') - plt.ylabel('Recall') - plt.axis([0, proposal_nums.max(), 0, 1]) - f.show() - - -def plot_iou_recall(recalls, iou_thrs): - """Plot IoU-Recalls curve. 
- - Args: - recalls(ndarray or list): shape (k,) - iou_thrs(ndarray or list): same shape as `recalls` - """ - if isinstance(iou_thrs, np.ndarray): - _iou_thrs = iou_thrs.tolist() - else: - _iou_thrs = iou_thrs - if isinstance(recalls, np.ndarray): - _recalls = recalls.tolist() - else: - _recalls = recalls - - import matplotlib.pyplot as plt - f = plt.figure() - plt.plot(_iou_thrs + [1.0], _recalls + [0.]) - plt.xlabel('IoU') - plt.ylabel('Recall') - plt.axis([iou_thrs.min(), 1, 0, 1]) - f.show() diff --git a/cv/detection/yolof/pytorch/mmdet/core/optimizers/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/optimizers/__init__.py deleted file mode 100755 index e867d076..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/optimizers/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import OPTIMIZER_BUILDERS, build_optimizer -from .layer_decay_optimizer_constructor import \ - LearningRateDecayOptimizerConstructor - -__all__ = [ - 'LearningRateDecayOptimizerConstructor', 'OPTIMIZER_BUILDERS', - 'build_optimizer' -] diff --git a/cv/detection/yolof/pytorch/mmdet/core/optimizers/builder.py b/cv/detection/yolof/pytorch/mmdet/core/optimizers/builder.py deleted file mode 100755 index 406dd9b4..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/optimizers/builder.py +++ /dev/null @@ -1,33 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -from mmcv.runner.optimizer import OPTIMIZER_BUILDERS as MMCV_OPTIMIZER_BUILDERS -from mmcv.utils import Registry, build_from_cfg - -OPTIMIZER_BUILDERS = Registry( - 'optimizer builder', parent=MMCV_OPTIMIZER_BUILDERS) - - -def build_optimizer_constructor(cfg): - constructor_type = cfg.get('type') - if constructor_type in OPTIMIZER_BUILDERS: - return build_from_cfg(cfg, OPTIMIZER_BUILDERS) - elif constructor_type in MMCV_OPTIMIZER_BUILDERS: - return build_from_cfg(cfg, MMCV_OPTIMIZER_BUILDERS) - else: - raise KeyError(f'{constructor_type} is not registered ' - 'in the optimizer builder registry.') - - -def build_optimizer(model, cfg): - optimizer_cfg = copy.deepcopy(cfg) - constructor_type = optimizer_cfg.pop('constructor', - 'DefaultOptimizerConstructor') - paramwise_cfg = optimizer_cfg.pop('paramwise_cfg', None) - optim_constructor = build_optimizer_constructor( - dict( - type=constructor_type, - optimizer_cfg=optimizer_cfg, - paramwise_cfg=paramwise_cfg)) - optimizer = optim_constructor(model) - return optimizer diff --git a/cv/detection/yolof/pytorch/mmdet/core/optimizers/layer_decay_optimizer_constructor.py b/cv/detection/yolof/pytorch/mmdet/core/optimizers/layer_decay_optimizer_constructor.py deleted file mode 100755 index 1bc3469e..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/optimizers/layer_decay_optimizer_constructor.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json - -from mmcv.runner import DefaultOptimizerConstructor, get_dist_info - -from mmdet.utils import get_root_logger -from .builder import OPTIMIZER_BUILDERS - - -def get_layer_id_for_convnext(var_name, max_layer_id): - """Get the layer id to set the different learning rates in ``layer_wise`` - decay_type. - - Args: - var_name (str): The key of the model. - max_layer_id (int): Maximum layer id. - - Returns: - int: The id number corresponding to different learning rate in - ``LearningRateDecayOptimizerConstructor``. 
- """ - - if var_name in ('backbone.cls_token', 'backbone.mask_token', - 'backbone.pos_embed'): - return 0 - elif var_name.startswith('backbone.downsample_layers'): - stage_id = int(var_name.split('.')[2]) - if stage_id == 0: - layer_id = 0 - elif stage_id == 1: - layer_id = 2 - elif stage_id == 2: - layer_id = 3 - elif stage_id == 3: - layer_id = max_layer_id - return layer_id - elif var_name.startswith('backbone.stages'): - stage_id = int(var_name.split('.')[2]) - block_id = int(var_name.split('.')[3]) - if stage_id == 0: - layer_id = 1 - elif stage_id == 1: - layer_id = 2 - elif stage_id == 2: - layer_id = 3 + block_id // 3 - elif stage_id == 3: - layer_id = max_layer_id - return layer_id - else: - return max_layer_id + 1 - - -def get_stage_id_for_convnext(var_name, max_stage_id): - """Get the stage id to set the different learning rates in ``stage_wise`` - decay_type. - - Args: - var_name (str): The key of the model. - max_stage_id (int): Maximum stage id. - - Returns: - int: The id number corresponding to different learning rate in - ``LearningRateDecayOptimizerConstructor``. - """ - - if var_name in ('backbone.cls_token', 'backbone.mask_token', - 'backbone.pos_embed'): - return 0 - elif var_name.startswith('backbone.downsample_layers'): - return 0 - elif var_name.startswith('backbone.stages'): - stage_id = int(var_name.split('.')[2]) - return stage_id + 1 - else: - return max_stage_id - 1 - - -@OPTIMIZER_BUILDERS.register_module() -class LearningRateDecayOptimizerConstructor(DefaultOptimizerConstructor): - # Different learning rates are set for different layers of backbone. - # Note: Currently, this optimizer constructor is built for ConvNeXt. - - def add_params(self, params, module, **kwargs): - """Add all parameters of module to the params list. - - The parameters of the given module will be added to the list of param - groups, with specific rules defined by paramwise_cfg. - - Args: - params (list[dict]): A list of param groups, it will be modified - in place. - module (nn.Module): The module to be added. - """ - logger = get_root_logger() - - parameter_groups = {} - logger.info(f'self.paramwise_cfg is {self.paramwise_cfg}') - num_layers = self.paramwise_cfg.get('num_layers') + 2 - decay_rate = self.paramwise_cfg.get('decay_rate') - decay_type = self.paramwise_cfg.get('decay_type', 'layer_wise') - logger.info('Build LearningRateDecayOptimizerConstructor ' - f'{decay_type} {decay_rate} - {num_layers}') - weight_decay = self.base_wd - for name, param in module.named_parameters(): - if not param.requires_grad: - continue # frozen weights - if len(param.shape) == 1 or name.endswith('.bias') or name in ( - 'pos_embed', 'cls_token'): - group_name = 'no_decay' - this_weight_decay = 0. 
- else: - group_name = 'decay' - this_weight_decay = weight_decay - if 'layer_wise' in decay_type: - if 'ConvNeXt' in module.backbone.__class__.__name__: - layer_id = get_layer_id_for_convnext( - name, self.paramwise_cfg.get('num_layers')) - logger.info(f'set param {name} as id {layer_id}') - else: - raise NotImplementedError() - elif decay_type == 'stage_wise': - if 'ConvNeXt' in module.backbone.__class__.__name__: - layer_id = get_stage_id_for_convnext(name, num_layers) - logger.info(f'set param {name} as id {layer_id}') - else: - raise NotImplementedError() - group_name = f'layer_{layer_id}_{group_name}' - - if group_name not in parameter_groups: - scale = decay_rate**(num_layers - layer_id - 1) - - parameter_groups[group_name] = { - 'weight_decay': this_weight_decay, - 'params': [], - 'param_names': [], - 'lr_scale': scale, - 'group_name': group_name, - 'lr': scale * self.base_lr, - } - - parameter_groups[group_name]['params'].append(param) - parameter_groups[group_name]['param_names'].append(name) - rank, _ = get_dist_info() - if rank == 0: - to_display = {} - for key in parameter_groups: - to_display[key] = { - 'param_names': parameter_groups[key]['param_names'], - 'lr_scale': parameter_groups[key]['lr_scale'], - 'lr': parameter_groups[key]['lr'], - 'weight_decay': parameter_groups[key]['weight_decay'], - } - logger.info(f'Param groups = {json.dumps(to_display, indent=2)}') - params.extend(parameter_groups.values()) diff --git a/cv/detection/yolof/pytorch/mmdet/core/post_processing/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/post_processing/__init__.py deleted file mode 100755 index 00376bd4..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/post_processing/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .bbox_nms import fast_nms, multiclass_nms -from .matrix_nms import mask_matrix_nms -from .merge_augs import (merge_aug_bboxes, merge_aug_masks, - merge_aug_proposals, merge_aug_scores) - -__all__ = [ - 'multiclass_nms', 'merge_aug_proposals', 'merge_aug_bboxes', - 'merge_aug_scores', 'merge_aug_masks', 'mask_matrix_nms', 'fast_nms' -] diff --git a/cv/detection/yolof/pytorch/mmdet/core/post_processing/bbox_nms.py b/cv/detection/yolof/pytorch/mmdet/core/post_processing/bbox_nms.py deleted file mode 100755 index 4fcf57bb..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/post_processing/bbox_nms.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops.nms import batched_nms - -from mmdet.core.bbox.iou_calculators import bbox_overlaps - - -def multiclass_nms(multi_bboxes, - multi_scores, - score_thr, - nms_cfg, - max_num=-1, - score_factors=None, - return_inds=False): - """NMS for multi-class bboxes. - - Args: - multi_bboxes (Tensor): shape (n, #class*4) or (n, 4) - multi_scores (Tensor): shape (n, #class), where the last column - contains scores of the background class, but this will be ignored. - score_thr (float): bbox threshold, bboxes with scores lower than it - will not be considered. - nms_cfg (dict): a dict that contains the arguments of nms operations - max_num (int, optional): if there are more than max_num bboxes after - NMS, only top max_num will be kept. Default to -1. - score_factors (Tensor, optional): The factors multiplied to scores - before applying NMS. Default to None. - return_inds (bool, optional): Whether return the indices of kept - bboxes. Default to False. 
- - Returns: - tuple: (dets, labels, indices (optional)), tensors of shape (k, 5), - (k), and (k). Dets are boxes with scores. Labels are 0-based. - """ - num_classes = multi_scores.size(1) - 1 - # exclude background category - if multi_bboxes.shape[1] > 4: - bboxes = multi_bboxes.view(multi_scores.size(0), -1, 4) - else: - bboxes = multi_bboxes[:, None].expand( - multi_scores.size(0), num_classes, 4) - - scores = multi_scores[:, :-1] - - labels = torch.arange(num_classes, dtype=torch.long, device=scores.device) - labels = labels.view(1, -1).expand_as(scores) - - bboxes = bboxes.reshape(-1, 4) - scores = scores.reshape(-1) - labels = labels.reshape(-1) - - if not torch.onnx.is_in_onnx_export(): - # NonZero not supported in TensorRT - # remove low scoring boxes - valid_mask = scores > score_thr - # multiply score_factor after threshold to preserve more bboxes, improve - # mAP by 1% for YOLOv3 - if score_factors is not None: - # expand the shape to match original shape of score - score_factors = score_factors.view(-1, 1).expand( - multi_scores.size(0), num_classes) - score_factors = score_factors.reshape(-1) - scores = scores * score_factors - - if not torch.onnx.is_in_onnx_export(): - # NonZero not supported in TensorRT - inds = valid_mask.nonzero(as_tuple=False).squeeze(1) - bboxes, scores, labels = bboxes[inds], scores[inds], labels[inds] - else: - # TensorRT NMS plugin has invalid output filled with -1 - # add dummy data to make detection output correct. - bboxes = torch.cat([bboxes, bboxes.new_zeros(1, 4)], dim=0) - scores = torch.cat([scores, scores.new_zeros(1)], dim=0) - labels = torch.cat([labels, labels.new_zeros(1)], dim=0) - - if bboxes.numel() == 0: - if torch.onnx.is_in_onnx_export(): - raise RuntimeError('[ONNX Error] Can not record NMS ' - 'as it has not been executed this time') - dets = torch.cat([bboxes, scores[:, None]], -1) - if return_inds: - return dets, labels, inds - else: - return dets, labels - - dets, keep = batched_nms(bboxes, scores, labels, nms_cfg) - - if max_num > 0: - dets = dets[:max_num] - keep = keep[:max_num] - - if return_inds: - return dets, labels[keep], inds[keep] - else: - return dets, labels[keep] - - -def fast_nms(multi_bboxes, - multi_scores, - multi_coeffs, - score_thr, - iou_thr, - top_k, - max_num=-1): - """Fast NMS in `YOLACT `_. - - Fast NMS allows already-removed detections to suppress other detections so - that every instance can be decided to be kept or discarded in parallel, - which is not possible in traditional NMS. This relaxation allows us to - implement Fast NMS entirely in standard GPU-accelerated matrix operations. - - Args: - multi_bboxes (Tensor): shape (n, #class*4) or (n, 4) - multi_scores (Tensor): shape (n, #class+1), where the last column - contains scores of the background class, but this will be ignored. - multi_coeffs (Tensor): shape (n, #class*coeffs_dim). - score_thr (float): bbox threshold, bboxes with scores lower than it - will not be considered. - iou_thr (float): IoU threshold to be considered as conflicted. - top_k (int): if there are more than top_k bboxes before NMS, - only top top_k will be kept. - max_num (int): if there are more than max_num bboxes after NMS, - only top max_num will be kept. If -1, keep all the bboxes. - Default: -1. - - Returns: - tuple: (dets, labels, coefficients), tensors of shape (k, 5), (k, 1), - and (k, coeffs_dim). Dets are boxes with scores. - Labels are 0-based. 
- """ - - scores = multi_scores[:, :-1].t() # [#class, n] - scores, idx = scores.sort(1, descending=True) - - idx = idx[:, :top_k].contiguous() - scores = scores[:, :top_k] # [#class, topk] - num_classes, num_dets = idx.size() - boxes = multi_bboxes[idx.view(-1), :].view(num_classes, num_dets, 4) - coeffs = multi_coeffs[idx.view(-1), :].view(num_classes, num_dets, -1) - - iou = bbox_overlaps(boxes, boxes) # [#class, topk, topk] - iou.triu_(diagonal=1) - iou_max, _ = iou.max(dim=1) - - # Now just filter out the ones higher than the threshold - keep = iou_max <= iou_thr - - # Second thresholding introduces 0.2 mAP gain at negligible time cost - keep *= scores > score_thr - - # Assign each kept detection to its corresponding class - classes = torch.arange( - num_classes, device=boxes.device)[:, None].expand_as(keep) - classes = classes[keep] - - boxes = boxes[keep] - coeffs = coeffs[keep] - scores = scores[keep] - - # Only keep the top max_num highest scores across all classes - scores, idx = scores.sort(0, descending=True) - if max_num > 0: - idx = idx[:max_num] - scores = scores[:max_num] - - classes = classes[idx] - boxes = boxes[idx] - coeffs = coeffs[idx] - - cls_dets = torch.cat([boxes, scores[:, None]], dim=1) - return cls_dets, classes, coeffs diff --git a/cv/detection/yolof/pytorch/mmdet/core/post_processing/matrix_nms.py b/cv/detection/yolof/pytorch/mmdet/core/post_processing/matrix_nms.py deleted file mode 100755 index 9dc8c4f7..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/post_processing/matrix_nms.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def mask_matrix_nms(masks, - labels, - scores, - filter_thr=-1, - nms_pre=-1, - max_num=-1, - kernel='gaussian', - sigma=2.0, - mask_area=None): - """Matrix NMS for multi-class masks. - - Args: - masks (Tensor): Has shape (num_instances, h, w) - labels (Tensor): Labels of corresponding masks, - has shape (num_instances,). - scores (Tensor): Mask scores of corresponding masks, - has shape (num_instances). - filter_thr (float): Score threshold to filter the masks - after matrix nms. Default: -1, which means do not - use filter_thr. - nms_pre (int): The max number of instances to do the matrix nms. - Default: -1, which means do not use nms_pre. - max_num (int, optional): If there are more than max_num masks after - matrix, only top max_num will be kept. Default: -1, which means - do not use max_num. - kernel (str): 'linear' or 'gaussian'. - sigma (float): std in gaussian method. - mask_area (Tensor): The sum of seg_masks. - - Returns: - tuple(Tensor): Processed mask results. - - - scores (Tensor): Updated scores, has shape (n,). - - labels (Tensor): Remained labels, has shape (n,). - - masks (Tensor): Remained masks, has shape (n, w, h). - - keep_inds (Tensor): The indices number of - the remaining mask in the input mask, has shape (n,). 
- """ - assert len(labels) == len(masks) == len(scores) - if len(labels) == 0: - return scores.new_zeros(0), labels.new_zeros(0), masks.new_zeros( - 0, *masks.shape[-2:]), labels.new_zeros(0) - if mask_area is None: - mask_area = masks.sum((1, 2)).float() - else: - assert len(masks) == len(mask_area) - - # sort and keep top nms_pre - scores, sort_inds = torch.sort(scores, descending=True) - - keep_inds = sort_inds - if nms_pre > 0 and len(sort_inds) > nms_pre: - sort_inds = sort_inds[:nms_pre] - keep_inds = keep_inds[:nms_pre] - scores = scores[:nms_pre] - masks = masks[sort_inds] - mask_area = mask_area[sort_inds] - labels = labels[sort_inds] - - num_masks = len(labels) - flatten_masks = masks.reshape(num_masks, -1).float() - # inter. - inter_matrix = torch.mm(flatten_masks, flatten_masks.transpose(1, 0)) - expanded_mask_area = mask_area.expand(num_masks, num_masks) - # Upper triangle iou matrix. - iou_matrix = (inter_matrix / - (expanded_mask_area + expanded_mask_area.transpose(1, 0) - - inter_matrix)).triu(diagonal=1) - # label_specific matrix. - expanded_labels = labels.expand(num_masks, num_masks) - # Upper triangle label matrix. - label_matrix = (expanded_labels == expanded_labels.transpose( - 1, 0)).triu(diagonal=1) - - # IoU compensation - compensate_iou, _ = (iou_matrix * label_matrix).max(0) - compensate_iou = compensate_iou.expand(num_masks, - num_masks).transpose(1, 0) - - # IoU decay - decay_iou = iou_matrix * label_matrix - - # Calculate the decay_coefficient - if kernel == 'gaussian': - decay_matrix = torch.exp(-1 * sigma * (decay_iou**2)) - compensate_matrix = torch.exp(-1 * sigma * (compensate_iou**2)) - decay_coefficient, _ = (decay_matrix / compensate_matrix).min(0) - elif kernel == 'linear': - decay_matrix = (1 - decay_iou) / (1 - compensate_iou) - decay_coefficient, _ = decay_matrix.min(0) - else: - raise NotImplementedError( - f'{kernel} kernel is not supported in matrix nms!') - # update the score. - scores = scores * decay_coefficient - - if filter_thr > 0: - keep = scores >= filter_thr - keep_inds = keep_inds[keep] - if not keep.any(): - return scores.new_zeros(0), labels.new_zeros(0), masks.new_zeros( - 0, *masks.shape[-2:]), labels.new_zeros(0) - masks = masks[keep] - scores = scores[keep] - labels = labels[keep] - - # sort and keep top max_num - scores, sort_inds = torch.sort(scores, descending=True) - keep_inds = keep_inds[sort_inds] - if max_num > 0 and len(sort_inds) > max_num: - sort_inds = sort_inds[:max_num] - keep_inds = keep_inds[:max_num] - scores = scores[:max_num] - masks = masks[sort_inds] - labels = labels[sort_inds] - - return scores, labels, masks, keep_inds diff --git a/cv/detection/yolof/pytorch/mmdet/core/post_processing/merge_augs.py b/cv/detection/yolof/pytorch/mmdet/core/post_processing/merge_augs.py deleted file mode 100755 index 2ac4603a..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/post_processing/merge_augs.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import numpy as np -import torch -from mmcv import ConfigDict -from mmcv.ops import nms - -from ..bbox import bbox_mapping_back - - -def merge_aug_proposals(aug_proposals, img_metas, cfg): - """Merge augmented proposals (multiscale, flip, etc.) - - Args: - aug_proposals (list[Tensor]): proposals from different testing - schemes, shape (n, 5). Note that they are not rescaled to the - original image size. 
- - img_metas (list[dict]): list of image info dict where each dict has: - 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - cfg (dict): rpn test config. - - Returns: - Tensor: shape (n, 4), proposals corresponding to original image scale. - """ - - cfg = copy.deepcopy(cfg) - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You set max_num and ' \ - f'max_per_img at the same time, but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - f'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set ' \ - f'iou_threshold in nms and ' \ - f'nms_thr at the same time, but get ' \ - f'{cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the nms_thr ' \ - f'which will be deprecated.' - - recovered_proposals = [] - for proposals, img_info in zip(aug_proposals, img_metas): - img_shape = img_info['img_shape'] - scale_factor = img_info['scale_factor'] - flip = img_info['flip'] - flip_direction = img_info['flip_direction'] - _proposals = proposals.clone() - _proposals[:, :4] = bbox_mapping_back(_proposals[:, :4], img_shape, - scale_factor, flip, - flip_direction) - recovered_proposals.append(_proposals) - aug_proposals = torch.cat(recovered_proposals, dim=0) - merged_proposals, _ = nms(aug_proposals[:, :4].contiguous(), - aug_proposals[:, -1].contiguous(), - cfg.nms.iou_threshold) - scores = merged_proposals[:, 4] - _, order = scores.sort(0, descending=True) - num = min(cfg.max_per_img, merged_proposals.shape[0]) - order = order[:num] - merged_proposals = merged_proposals[order, :] - return merged_proposals - - -def merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg): - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - img_shapes (list[Tensor]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. 
- - Returns: - tuple: (bboxes, scores) - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.stack(recovered_bboxes).mean(dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.stack(aug_scores).mean(dim=0) - return bboxes, scores - - -def merge_aug_scores(aug_scores): - """Merge augmented bbox scores.""" - if isinstance(aug_scores[0], torch.Tensor): - return torch.mean(torch.stack(aug_scores), dim=0) - else: - return np.mean(aug_scores, axis=0) - - -def merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None): - """Merge augmented mask prediction. - - Args: - aug_masks (list[ndarray]): shape (n, #class, h, w) - img_shapes (list[ndarray]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. - - Returns: - tuple: (bboxes, scores) - """ - recovered_masks = [] - for mask, img_info in zip(aug_masks, img_metas): - flip = img_info[0]['flip'] - if flip: - flip_direction = img_info[0]['flip_direction'] - if flip_direction == 'horizontal': - mask = mask[:, :, :, ::-1] - elif flip_direction == 'vertical': - mask = mask[:, :, ::-1, :] - elif flip_direction == 'diagonal': - mask = mask[:, :, :, ::-1] - mask = mask[:, :, ::-1, :] - else: - raise ValueError( - f"Invalid flipping direction '{flip_direction}'") - recovered_masks.append(mask) - - if weights is None: - merged_masks = np.mean(recovered_masks, axis=0) - else: - merged_masks = np.average( - np.array(recovered_masks), axis=0, weights=np.array(weights)) - return merged_masks diff --git a/cv/detection/yolof/pytorch/mmdet/core/utils/__init__.py b/cv/detection/yolof/pytorch/mmdet/core/utils/__init__.py deleted file mode 100755 index 8696d2ee..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/utils/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .dist_utils import (DistOptimizerHook, all_reduce_dict, allreduce_grads, - reduce_mean, sync_random_seed) -from .misc import (center_of_mass, filter_scores_and_topk, flip_tensor, - generate_coordinate, multi_apply, - select_single_mlvl, unmap) - -__all__ = [ - 'allreduce_grads', 'DistOptimizerHook', 'reduce_mean', 'multi_apply', - 'unmap', 'flip_tensor', 'all_reduce_dict', - 'center_of_mass', 'generate_coordinate', 'select_single_mlvl', - 'filter_scores_and_topk', 'sync_random_seed' -] diff --git a/cv/detection/yolof/pytorch/mmdet/core/utils/dist_utils.py b/cv/detection/yolof/pytorch/mmdet/core/utils/dist_utils.py deleted file mode 100755 index 8760774f..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/utils/dist_utils.py +++ /dev/null @@ -1,193 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import functools -import pickle -import warnings -from collections import OrderedDict - -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import OptimizerHook, get_dist_info -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - - -def _allreduce_coalesced(tensors, world_size, bucket_size_mb=-1): - if bucket_size_mb > 0: - bucket_size_bytes = bucket_size_mb * 1024 * 1024 - buckets = _take_tensors(tensors, bucket_size_bytes) - else: - buckets = OrderedDict() - for tensor in tensors: - tp = tensor.type() - if tp not in buckets: - buckets[tp] = [] - buckets[tp].append(tensor) - buckets = buckets.values() - - for bucket in buckets: - flat_tensors = _flatten_dense_tensors(bucket) - dist.all_reduce(flat_tensors) - flat_tensors.div_(world_size) - for tensor, synced in zip( - bucket, _unflatten_dense_tensors(flat_tensors, bucket)): - tensor.copy_(synced) - - -def allreduce_grads(params, coalesce=True, bucket_size_mb=-1): - """Allreduce gradients. - - Args: - params (list[torch.Parameters]): List of parameters of a model - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. - """ - grads = [ - param.grad.data for param in params - if param.requires_grad and param.grad is not None - ] - world_size = dist.get_world_size() - if coalesce: - _allreduce_coalesced(grads, world_size, bucket_size_mb) - else: - for tensor in grads: - dist.all_reduce(tensor.div_(world_size)) - - -class DistOptimizerHook(OptimizerHook): - """Deprecated optimizer hook for distributed training.""" - - def __init__(self, *args, **kwargs): - warnings.warn('"DistOptimizerHook" is deprecated, please switch to' - '"mmcv.runner.OptimizerHook".') - super().__init__(*args, **kwargs) - - -def reduce_mean(tensor): - """"Obtain the mean of tensor on different GPUs.""" - if not (dist.is_available() and dist.is_initialized()): - return tensor - tensor = tensor.clone() - dist.all_reduce(tensor.div_(dist.get_world_size()), op=dist.ReduceOp.SUM) - return tensor - - -def obj2tensor(pyobj, device='cuda'): - """Serialize picklable python object to tensor.""" - storage = torch.ByteStorage.from_buffer(pickle.dumps(pyobj)) - return torch.ByteTensor(storage).to(device=device) - - -def tensor2obj(tensor): - """Deserialize tensor to picklable python object.""" - return pickle.loads(tensor.cpu().numpy().tobytes()) - - -@functools.lru_cache() -def _get_global_gloo_group(): - """Return a process group based on gloo backend, containing all the ranks - The result is cached.""" - if dist.get_backend() == 'nccl': - return dist.new_group(backend='gloo') - else: - return dist.group.WORLD - - -def all_reduce_dict(py_dict, op='sum', group=None, to_float=True): - """Apply all reduce function for python dict object. - - The code is modified from https://github.com/Megvii- - BaseDetection/YOLOX/blob/main/yolox/utils/allreduce_norm.py. - - NOTE: make sure that py_dict in different ranks has the same keys and - the values should be in the same shape. Currently only supports - nccl backend. - - Args: - py_dict (dict): Dict to be applied all reduce op. - op (str): Operator, could be 'sum' or 'mean'. Default: 'sum' - group (:obj:`torch.distributed.group`, optional): Distributed group, - Default: None. - to_float (bool): Whether to convert all values of dict to float. - Default: True. - - Returns: - OrderedDict: reduced python dict object. 
- """ - warnings.warn( - 'group` is deprecated. Currently only supports NCCL backend.') - _, world_size = get_dist_info() - if world_size == 1: - return py_dict - - # all reduce logic across different devices. - py_key = list(py_dict.keys()) - if not isinstance(py_dict, OrderedDict): - py_key_tensor = obj2tensor(py_key) - dist.broadcast(py_key_tensor, src=0) - py_key = tensor2obj(py_key_tensor) - - tensor_shapes = [py_dict[k].shape for k in py_key] - tensor_numels = [py_dict[k].numel() for k in py_key] - - if to_float: - warnings.warn('Note: the "to_float" is True, you need to ' - 'ensure that the behavior is reasonable.') - flatten_tensor = torch.cat( - [py_dict[k].flatten().float() for k in py_key]) - else: - flatten_tensor = torch.cat([py_dict[k].flatten() for k in py_key]) - - dist.all_reduce(flatten_tensor, op=dist.ReduceOp.SUM) - if op == 'mean': - flatten_tensor /= world_size - - split_tensors = [ - x.reshape(shape) for x, shape in zip( - torch.split(flatten_tensor, tensor_numels), tensor_shapes) - ] - out_dict = {k: v for k, v in zip(py_key, split_tensors)} - if isinstance(py_dict, OrderedDict): - out_dict = OrderedDict(out_dict) - return out_dict - - -def sync_random_seed(seed=None, device='cuda'): - """Make sure different ranks share the same seed. - - All workers must call this function, otherwise it will deadlock. - This method is generally used in `DistributedSampler`, - because the seed should be identical across all processes - in the distributed group. - - In distributed sampling, different ranks should sample non-overlapped - data in the dataset. Therefore, this function is used to make sure that - each rank shuffles the data indices in the same order based - on the same seed. Then different ranks could use different indices - to select non-overlapped data from the same data list. - - Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - - Returns: - int: Seed to be used. - """ - if seed is None: - seed = np.random.randint(2**31) - assert isinstance(seed, int) - - rank, world_size = get_dist_info() - - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() diff --git a/cv/detection/yolof/pytorch/mmdet/core/utils/misc.py b/cv/detection/yolof/pytorch/mmdet/core/utils/misc.py deleted file mode 100755 index c4786055..00000000 --- a/cv/detection/yolof/pytorch/mmdet/core/utils/misc.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial - -import numpy as np -import torch -from six.moves import map, zip - -# from ..mask.structures import BitmapMasks, PolygonMasks - - -def multi_apply(func, *args, **kwargs): - """Apply function to a list of arguments. - - Note: - This function applies the ``func`` to multiple inputs and - map the multiple outputs of the ``func`` into different - list. Each list contains the same type of outputs corresponding - to different inputs. 
- - Args: - func (Function): A function that will be applied to a list of - arguments - - Returns: - tuple(list): A tuple containing multiple list, each list contains \ - a kind of returned results by the function - """ - pfunc = partial(func, **kwargs) if kwargs else func - map_results = map(pfunc, *args) - return tuple(map(list, zip(*map_results))) - - -def unmap(data, count, inds, fill=0): - """Unmap a subset of item (data) back to the original set of items (of size - count)""" - if data.dim() == 1: - ret = data.new_full((count, ), fill) - ret[inds.type(torch.bool)] = data - else: - new_size = (count, ) + data.size()[1:] - ret = data.new_full(new_size, fill) - ret[inds.type(torch.bool), :] = data - return ret - - -def mask2ndarray(mask): - """Convert Mask to ndarray.. - - Args: - mask (:obj:`BitmapMasks` or :obj:`PolygonMasks` or - torch.Tensor or np.ndarray): The mask to be converted. - - Returns: - np.ndarray: Ndarray mask of shape (n, h, w) that has been converted - """ - if isinstance(mask, (BitmapMasks, PolygonMasks)): - mask = mask.to_ndarray() - elif isinstance(mask, torch.Tensor): - mask = mask.detach().cpu().numpy() - elif not isinstance(mask, np.ndarray): - raise TypeError(f'Unsupported {type(mask)} data type') - return mask - - -def flip_tensor(src_tensor, flip_direction): - """flip tensor base on flip_direction. - - Args: - src_tensor (Tensor): input feature map, shape (B, C, H, W). - flip_direction (str): The flipping direction. Options are - 'horizontal', 'vertical', 'diagonal'. - - Returns: - out_tensor (Tensor): Flipped tensor. - """ - assert src_tensor.ndim == 4 - valid_directions = ['horizontal', 'vertical', 'diagonal'] - assert flip_direction in valid_directions - if flip_direction == 'horizontal': - out_tensor = torch.flip(src_tensor, [3]) - elif flip_direction == 'vertical': - out_tensor = torch.flip(src_tensor, [2]) - else: - out_tensor = torch.flip(src_tensor, [2, 3]) - return out_tensor - - -def select_single_mlvl(mlvl_tensors, batch_id, detach=True): - """Extract a multi-scale single image tensor from a multi-scale batch - tensor based on batch index. - - Note: The default value of detach is True, because the proposal gradient - needs to be detached during the training of the two-stage model. E.g - Cascade Mask R-CNN. - - Args: - mlvl_tensors (list[Tensor]): Batch tensor for all scale levels, - each is a 4D-tensor. - batch_id (int): Batch index. - detach (bool): Whether detach gradient. Default True. - - Returns: - list[Tensor]: Multi-scale single image tensor. - """ - assert isinstance(mlvl_tensors, (list, tuple)) - num_levels = len(mlvl_tensors) - - if detach: - mlvl_tensor_list = [ - mlvl_tensors[i][batch_id].detach() for i in range(num_levels) - ] - else: - mlvl_tensor_list = [ - mlvl_tensors[i][batch_id] for i in range(num_levels) - ] - return mlvl_tensor_list - - -def filter_scores_and_topk(scores, score_thr, topk, results=None): - """Filter results using score threshold and topk candidates. - - Args: - scores (Tensor): The scores, shape (num_bboxes, K). - score_thr (float): The score filter threshold. - topk (int): The number of topk candidates. - results (dict or list or Tensor, Optional): The results to - which the filtering rule is to be applied. The shape - of each item is (num_bboxes, N). - - Returns: - tuple: Filtered results - - - scores (Tensor): The scores after being filtered, \ - shape (num_bboxes_filtered, ). - - labels (Tensor): The class labels, shape \ - (num_bboxes_filtered, ). 
- - anchor_idxs (Tensor): The anchor indexes, shape \ - (num_bboxes_filtered, ). - - filtered_results (dict or list or Tensor, Optional): \ - The filtered results. The shape of each item is \ - (num_bboxes_filtered, N). - """ - valid_mask = scores > score_thr - scores = scores[valid_mask] - valid_idxs = torch.nonzero(valid_mask) - - num_topk = min(topk, valid_idxs.size(0)) - # torch.sort is actually faster than .topk (at least on GPUs) - scores, idxs = scores.sort(descending=True) - scores = scores[:num_topk] - topk_idxs = valid_idxs[idxs[:num_topk]] - keep_idxs, labels = topk_idxs.unbind(dim=1) - - filtered_results = None - if results is not None: - if isinstance(results, dict): - filtered_results = {k: v[keep_idxs] for k, v in results.items()} - elif isinstance(results, list): - filtered_results = [result[keep_idxs] for result in results] - elif isinstance(results, torch.Tensor): - filtered_results = results[keep_idxs] - else: - raise NotImplementedError(f'Only supports dict or list or Tensor, ' - f'but get {type(results)}.') - return scores, labels, keep_idxs, filtered_results - - -def center_of_mass(mask, esp=1e-6): - """Calculate the centroid coordinates of the mask. - - Args: - mask (Tensor): The mask to be calculated, shape (h, w). - esp (float): Avoid dividing by zero. Default: 1e-6. - - Returns: - tuple[Tensor]: the coordinates of the center point of the mask. - - - center_h (Tensor): the center point of the height. - - center_w (Tensor): the center point of the width. - """ - h, w = mask.shape - grid_h = torch.arange(h, device=mask.device)[:, None] - grid_w = torch.arange(w, device=mask.device) - normalizer = mask.sum().float().clamp(min=esp) - center_h = (mask * grid_h).sum() / normalizer - center_w = (mask * grid_w).sum() / normalizer - return center_h, center_w - - -def generate_coordinate(featmap_sizes, device='cuda'): - """Generate the coordinate. - - Args: - featmap_sizes (tuple): The feature to be calculated, - of shape (N, C, W, H). - device (str): The device where the feature will be put on. - Returns: - coord_feat (Tensor): The coordinate feature, of shape (N, 2, W, H). - """ - - x_range = torch.linspace(-1, 1, featmap_sizes[-1], device=device) - y_range = torch.linspace(-1, 1, featmap_sizes[-2], device=device) - y, x = torch.meshgrid(y_range, x_range) - y = y.expand([featmap_sizes[0], 1, -1, -1]) - x = x.expand([featmap_sizes[0], 1, -1, -1]) - coord_feat = torch.cat([x, y], 1) - - return coord_feat diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/__init__.py b/cv/detection/yolof/pytorch/mmdet/datasets/__init__.py deleted file mode 100755 index 2874cbf4..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
- -from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset -from .coco import CocoDataset -from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset, - MultiImageMixDataset, RepeatDataset) -from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler -from .utils import (NumClassCheckHook, get_loading_pipeline, - replace_ImageToTensor) -__all__ = [ - 'CocoDataset', - 'GroupSampler', 'DistributedGroupSampler', - 'DistributedSampler', 'build_dataloader', 'ConcatDataset', 'RepeatDataset', - 'ClassBalancedDataset', 'WIDERFaceDataset', 'DATASETS', 'PIPELINES', - 'build_dataset', 'replace_ImageToTensor', 'get_loading_pipeline', - 'NumClassCheckHook', 'CocoPanopticDataset', 'MultiImageMixDataset' -] diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/__init__.py b/cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/__init__.py deleted file mode 100755 index af855759..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .coco_api import COCO, COCOeval -from .panoptic_evaluation import pq_compute_multi_core, pq_compute_single_core - -__all__ = [ - 'COCO', 'COCOeval', 'pq_compute_multi_core', 'pq_compute_single_core' -] diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/coco_api.py b/cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/coco_api.py deleted file mode 100755 index eef6341e..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/coco_api.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# This file add snake case alias for coco api - -import warnings - -import pycocotools -from pycocotools.coco import COCO as _COCO -from pycocotools.cocoeval import COCOeval as _COCOeval - - -class COCO(_COCO): - """This class is almost the same as official pycocotools package. - - It implements some snake case function aliases. So that the COCO class has - the same interface as LVIS class. - """ - - def __init__(self, annotation_file=None): - if getattr(pycocotools, '__version__', '0') >= '12.0.2': - warnings.warn( - 'mmpycocotools is deprecated. Please install official pycocotools by "pip install pycocotools"', # noqa: E501 - UserWarning) - super().__init__(annotation_file=annotation_file) - self.img_ann_map = self.imgToAnns - self.cat_img_map = self.catToImgs - - def get_ann_ids(self, img_ids=[], cat_ids=[], area_rng=[], iscrowd=None): - return self.getAnnIds(img_ids, cat_ids, area_rng, iscrowd) - - def get_cat_ids(self, cat_names=[], sup_names=[], cat_ids=[]): - return self.getCatIds(cat_names, sup_names, cat_ids) - - def get_img_ids(self, img_ids=[], cat_ids=[]): - return self.getImgIds(img_ids, cat_ids) - - def load_anns(self, ids): - return self.loadAnns(ids) - - def load_cats(self, ids): - return self.loadCats(ids) - - def load_imgs(self, ids): - return self.loadImgs(ids) - - -# just for the ease of import -COCOeval = _COCOeval diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/panoptic_evaluation.py b/cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/panoptic_evaluation.py deleted file mode 100755 index 55f57bf4..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/api_wrappers/panoptic_evaluation.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
- -# Copyright (c) 2018, Alexander Kirillov -# This file supports `file_client` for `panopticapi`, -# the source code is copied from `panopticapi`, -# only the way to load the gt images is modified. -import multiprocessing -import os - -import mmcv -import numpy as np - -try: - from panopticapi.evaluation import OFFSET, VOID, PQStat - from panopticapi.utils import rgb2id -except ImportError: - PQStat = None - rgb2id = None - VOID = 0 - OFFSET = 256 * 256 * 256 - - -def pq_compute_single_core(proc_id, - annotation_set, - gt_folder, - pred_folder, - categories, - file_client=None, - print_log=False): - """The single core function to evaluate the metric of Panoptic - Segmentation. - - Same as the function with the same name in `panopticapi`. Only the function - to load the images is changed to use the file client. - - Args: - proc_id (int): The id of the mini process. - gt_folder (str): The path of the ground truth images. - pred_folder (str): The path of the prediction images. - categories (str): The categories of the dataset. - file_client (object): The file client of the dataset. If None, - the backend will be set to `disk`. - print_log (bool): Whether to print the log. Defaults to False. - """ - if PQStat is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - if file_client is None: - file_client_args = dict(backend='disk') - file_client = mmcv.FileClient(**file_client_args) - - pq_stat = PQStat() - - idx = 0 - for gt_ann, pred_ann in annotation_set: - if print_log and idx % 100 == 0: - print('Core: {}, {} from {} images processed'.format( - proc_id, idx, len(annotation_set))) - idx += 1 - # The gt images can be on the local disk or `ceph`, so we use - # file_client here. - img_bytes = file_client.get( - os.path.join(gt_folder, gt_ann['file_name'])) - pan_gt = mmcv.imfrombytes(img_bytes, flag='color', channel_order='rgb') - pan_gt = rgb2id(pan_gt) - - # The predictions can only be on the local dist now. 
- pan_pred = mmcv.imread( - os.path.join(pred_folder, pred_ann['file_name']), - flag='color', - channel_order='rgb') - pan_pred = rgb2id(pan_pred) - - gt_segms = {el['id']: el for el in gt_ann['segments_info']} - pred_segms = {el['id']: el for el in pred_ann['segments_info']} - - # predicted segments area calculation + prediction sanity checks - pred_labels_set = set(el['id'] for el in pred_ann['segments_info']) - labels, labels_cnt = np.unique(pan_pred, return_counts=True) - for label, label_cnt in zip(labels, labels_cnt): - if label not in pred_segms: - if label == VOID: - continue - raise KeyError( - 'In the image with ID {} segment with ID {} is ' - 'presented in PNG and not presented in JSON.'.format( - gt_ann['image_id'], label)) - pred_segms[label]['area'] = label_cnt - pred_labels_set.remove(label) - if pred_segms[label]['category_id'] not in categories: - raise KeyError( - 'In the image with ID {} segment with ID {} has ' - 'unknown category_id {}.'.format( - gt_ann['image_id'], label, - pred_segms[label]['category_id'])) - if len(pred_labels_set) != 0: - raise KeyError( - 'In the image with ID {} the following segment IDs {} ' - 'are presented in JSON and not presented in PNG.'.format( - gt_ann['image_id'], list(pred_labels_set))) - - # confusion matrix calculation - pan_gt_pred = pan_gt.astype(np.uint64) * OFFSET + pan_pred.astype( - np.uint64) - gt_pred_map = {} - labels, labels_cnt = np.unique(pan_gt_pred, return_counts=True) - for label, intersection in zip(labels, labels_cnt): - gt_id = label // OFFSET - pred_id = label % OFFSET - gt_pred_map[(gt_id, pred_id)] = intersection - - # count all matched pairs - gt_matched = set() - pred_matched = set() - for label_tuple, intersection in gt_pred_map.items(): - gt_label, pred_label = label_tuple - if gt_label not in gt_segms: - continue - if pred_label not in pred_segms: - continue - if gt_segms[gt_label]['iscrowd'] == 1: - continue - if gt_segms[gt_label]['category_id'] != pred_segms[pred_label][ - 'category_id']: - continue - - union = pred_segms[pred_label]['area'] + gt_segms[gt_label][ - 'area'] - intersection - gt_pred_map.get((VOID, pred_label), 0) - iou = intersection / union - if iou > 0.5: - pq_stat[gt_segms[gt_label]['category_id']].tp += 1 - pq_stat[gt_segms[gt_label]['category_id']].iou += iou - gt_matched.add(gt_label) - pred_matched.add(pred_label) - - # count false positives - crowd_labels_dict = {} - for gt_label, gt_info in gt_segms.items(): - if gt_label in gt_matched: - continue - # crowd segments are ignored - if gt_info['iscrowd'] == 1: - crowd_labels_dict[gt_info['category_id']] = gt_label - continue - pq_stat[gt_info['category_id']].fn += 1 - - # count false positives - for pred_label, pred_info in pred_segms.items(): - if pred_label in pred_matched: - continue - # intersection of the segment with VOID - intersection = gt_pred_map.get((VOID, pred_label), 0) - # plus intersection with corresponding CROWD region if it exists - if pred_info['category_id'] in crowd_labels_dict: - intersection += gt_pred_map.get( - (crowd_labels_dict[pred_info['category_id']], pred_label), - 0) - # predicted segment is ignored if more than half of - # the segment correspond to VOID and CROWD regions - if intersection / pred_info['area'] > 0.5: - continue - pq_stat[pred_info['category_id']].fp += 1 - - if print_log: - print('Core: {}, all {} images processed'.format( - proc_id, len(annotation_set))) - return pq_stat - - -def pq_compute_multi_core(matched_annotations_list, - gt_folder, - pred_folder, - categories, - 
file_client=None, - nproc=32): - """Evaluate the metrics of Panoptic Segmentation with multithreading. - - Same as the function with the same name in `panopticapi`. - - Args: - matched_annotations_list (list): The matched annotation list. Each - element is a tuple of annotations of the same image with the - format (gt_anns, pred_anns). - gt_folder (str): The path of the ground truth images. - pred_folder (str): The path of the prediction images. - categories (str): The categories of the dataset. - file_client (object): The file client of the dataset. If None, - the backend will be set to `disk`. - nproc (int): Number of processes for panoptic quality computing. - Defaults to 32. When `nproc` exceeds the number of cpu cores, - the number of cpu cores is used. - """ - if PQStat is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - if file_client is None: - file_client_args = dict(backend='disk') - file_client = mmcv.FileClient(**file_client_args) - - cpu_num = min(nproc, multiprocessing.cpu_count()) - - annotations_split = np.array_split(matched_annotations_list, cpu_num) - print('Number of cores: {}, images per core: {}'.format( - cpu_num, len(annotations_split[0]))) - workers = multiprocessing.Pool(processes=cpu_num) - processes = [] - for proc_id, annotation_set in enumerate(annotations_split): - p = workers.apply_async(pq_compute_single_core, - (proc_id, annotation_set, gt_folder, - pred_folder, categories, file_client)) - processes.append(p) - - # Close the process pool, otherwise it will lead to memory - # leaking problems. - workers.close() - workers.join() - - pq_stat = PQStat() - for p in processes: - pq_stat += p.get() - - return pq_stat diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/builder.py b/cv/detection/yolof/pytorch/mmdet/datasets/builder.py deleted file mode 100755 index 1936296a..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/builder.py +++ /dev/null @@ -1,215 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import platform -import random -import warnings -from functools import partial - -import numpy as np -import torch -from mmcv.parallel import collate -from mmcv.runner import get_dist_info -from mmcv.utils import TORCH_VERSION, Registry, build_from_cfg, digit_version -from torch.utils.data import DataLoader - -from .samplers import (ClassAwareSampler, DistributedGroupSampler, - DistributedSampler, GroupSampler, InfiniteBatchSampler, - InfiniteGroupBatchSampler) - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - base_soft_limit = rlimit[0] - hard_limit = rlimit[1] - soft_limit = min(max(4096, base_soft_limit), hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def _concat_dataset(cfg, default_args=None): - from .dataset_wrappers import ConcatDataset - ann_files = cfg['ann_file'] - img_prefixes = cfg.get('img_prefix', None) - seg_prefixes = cfg.get('seg_prefix', None) - proposal_files = cfg.get('proposal_file', None) - separate_eval = cfg.get('separate_eval', True) - - datasets = [] - num_dset = len(ann_files) - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - # pop 'separate_eval' since it is not a valid key for common datasets. 
- if 'separate_eval' in data_cfg: - data_cfg.pop('separate_eval') - data_cfg['ann_file'] = ann_files[i] - if isinstance(img_prefixes, (list, tuple)): - data_cfg['img_prefix'] = img_prefixes[i] - if isinstance(seg_prefixes, (list, tuple)): - data_cfg['seg_prefix'] = seg_prefixes[i] - if isinstance(proposal_files, (list, tuple)): - data_cfg['proposal_file'] = proposal_files[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets, separate_eval) - - -def build_dataset(cfg, default_args=None): - from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset, - MultiImageMixDataset, RepeatDataset) - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'ConcatDataset': - dataset = ConcatDataset( - [build_dataset(c, default_args) for c in cfg['datasets']], - cfg.get('separate_eval', True)) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif cfg['type'] == 'ClassBalancedDataset': - dataset = ClassBalancedDataset( - build_dataset(cfg['dataset'], default_args), cfg['oversample_thr']) - elif cfg['type'] == 'MultiImageMixDataset': - cp_cfg = copy.deepcopy(cfg) - cp_cfg['dataset'] = build_dataset(cp_cfg['dataset']) - cp_cfg.pop('type') - dataset = MultiImageMixDataset(**cp_cfg) - elif isinstance(cfg.get('ann_file'), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - runner_type='EpochBasedRunner', - persistent_workers=False, - class_aware_sampler=None, - **kwargs): - """Build PyTorch DataLoader. - - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (Dataset): A PyTorch dataset. - samples_per_gpu (int): Number of training samples on each GPU, i.e., - batch size of each GPU. - workers_per_gpu (int): How many subprocesses to use for data loading - for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed training. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - seed (int, Optional): Seed to be used. Default: None. - runner_type (str): Type of runner. Default: `EpochBasedRunner` - persistent_workers (bool): If True, the data loader will not shutdown - the worker processes after a dataset has been consumed once. - This allows to maintain the workers `Dataset` instances alive. - This argument is only valid when PyTorch>=1.7.0. Default: False. - class_aware_sampler (dict): Whether to use `ClassAwareSampler` - during training. Default: None. - kwargs: any keyword argument to be used to initialize DataLoader - - Returns: - DataLoader: A PyTorch dataloader. - """ - rank, world_size = get_dist_info() - - if dist: - # When model is :obj:`DistributedDataParallel`, - # `batch_size` of :obj:`dataloader` is the - # number of training samples on each GPU. 
- batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - # When model is obj:`DataParallel` - # the batch size is samples on all the GPUS - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - if runner_type == 'IterBasedRunner': - # this is a batch sampler, which can yield - # a mini-batch indices each time. - # it can be used in both `DataParallel` and - # `DistributedDataParallel` - if shuffle: - batch_sampler = InfiniteGroupBatchSampler( - dataset, batch_size, world_size, rank, seed=seed) - else: - batch_sampler = InfiniteBatchSampler( - dataset, - batch_size, - world_size, - rank, - seed=seed, - shuffle=False) - batch_size = 1 - sampler = None - else: - if class_aware_sampler is not None: - # ClassAwareSampler can be used in both distributed and - # non-distributed training. - num_sample_class = class_aware_sampler.get('num_sample_class', 1) - sampler = ClassAwareSampler( - dataset, - samples_per_gpu, - world_size, - rank, - seed=seed, - num_sample_class=num_sample_class) - elif dist: - # DistributedGroupSampler will definitely shuffle the data to - # satisfy that images on each GPU are in the same group - if shuffle: - sampler = DistributedGroupSampler( - dataset, samples_per_gpu, world_size, rank, seed=seed) - else: - sampler = DistributedSampler( - dataset, world_size, rank, shuffle=False, seed=seed) - else: - sampler = GroupSampler(dataset, - samples_per_gpu) if shuffle else None - batch_sampler = None - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) >= digit_version('1.7.0')): - kwargs['persistent_workers'] = persistent_workers - elif persistent_workers is True: - warnings.warn('persistent_workers is invalid because your pytorch ' - 'version is lower than 1.7.0') - - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - batch_sampler=batch_sampler, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=kwargs.pop('pin_memory', False), - worker_init_fn=init_fn, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - # The seed of each worker equals to - # num_worker * rank + worker_id + user_seed - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) - torch.manual_seed(worker_seed) diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/coco.py b/cv/detection/yolof/pytorch/mmdet/datasets/coco.py deleted file mode 100755 index bcdd4df3..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/coco.py +++ /dev/null @@ -1,649 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import contextlib -import io -import itertools -import logging -import os.path as osp -import tempfile -import warnings -from collections import OrderedDict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from mmdet.core import eval_recalls -from .api_wrappers import COCO, COCOeval -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class CocoDataset(CustomDataset): - - CLASSES = ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', - 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', - 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', - 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', - 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', - 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', - 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', - 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', - 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', - 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', - 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', - 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', - 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush') - - PALETTE = [(220, 20, 60), (119, 11, 32), (0, 0, 142), (0, 0, 230), - (106, 0, 228), (0, 60, 100), (0, 80, 100), (0, 0, 70), - (0, 0, 192), (250, 170, 30), (100, 170, 30), (220, 220, 0), - (175, 116, 175), (250, 0, 30), (165, 42, 42), (255, 77, 255), - (0, 226, 252), (182, 182, 255), (0, 82, 0), (120, 166, 157), - (110, 76, 0), (174, 57, 255), (199, 100, 0), (72, 0, 118), - (255, 179, 240), (0, 125, 92), (209, 0, 151), (188, 208, 182), - (0, 220, 176), (255, 99, 164), (92, 0, 73), (133, 129, 255), - (78, 180, 255), (0, 228, 0), (174, 255, 243), (45, 89, 255), - (134, 134, 103), (145, 148, 174), (255, 208, 186), - (197, 226, 255), (171, 134, 1), (109, 63, 54), (207, 138, 255), - (151, 0, 95), (9, 80, 61), (84, 105, 51), (74, 65, 105), - (166, 196, 102), (208, 195, 210), (255, 109, 65), (0, 143, 149), - (179, 0, 194), (209, 99, 106), (5, 121, 0), (227, 255, 205), - (147, 186, 208), (153, 69, 1), (3, 95, 161), (163, 255, 0), - (119, 0, 170), (0, 182, 199), (0, 165, 120), (183, 130, 88), - (95, 32, 0), (130, 114, 135), (110, 129, 133), (166, 74, 118), - (219, 142, 185), (79, 210, 114), (178, 90, 62), (65, 70, 15), - (127, 167, 115), (59, 105, 106), (142, 108, 45), (196, 172, 0), - (95, 54, 80), (128, 76, 255), (201, 57, 1), (246, 0, 122), - (191, 162, 208)] - - def load_annotations(self, ann_file): - """Load annotation from COCO style annotation file. - - Args: - ann_file (str): Path of annotation file. - - Returns: - list[dict]: Annotation info from COCO api. - """ - - self.coco = COCO(ann_file) - # The order of returned `cat_ids` will not - # change with the order of the CLASSES - self.cat_ids = self.coco.get_cat_ids(cat_names=self.CLASSES) - - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.img_ids = self.coco.get_img_ids() - data_infos = [] - total_ann_ids = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - info['filename'] = info['file_name'] - data_infos.append(info) - ann_ids = self.coco.get_ann_ids(img_ids=[i]) - total_ann_ids.extend(ann_ids) - assert len(set(total_ann_ids)) == len( - total_ann_ids), f"Annotation ids in '{ann_file}' are not unique!" 
- return data_infos - - def get_ann_info(self, idx): - """Get COCO annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - return self._parse_ann_info(self.data_infos[idx], ann_info) - - def get_cat_ids(self, idx): - """Get COCO category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - return [ann['category_id'] for ann in ann_info] - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - # obtain images that contain annotation - ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values()) - # obtain images that contain annotations of the required categories - ids_in_cat = set() - for i, class_id in enumerate(self.cat_ids): - ids_in_cat |= set(self.coco.cat_img_map[class_id]) - # merge the image id sets of the two conditions and use the merged set - # to filter out images if self.filter_empty_gt=True - ids_in_cat &= ids_with_ann - - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = self.img_ids[i] - if self.filter_empty_gt and img_id not in ids_in_cat: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - ann_info (list[dict]): Annotation info of an image. - with_mask (bool): Whether to parse mask annotations. - - Returns: - dict: A dict containing the following keys: bboxes, bboxes_ignore,\ - labels, masks, seg_map. "masks" are raw annotations and not \ - decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - inter_w = max(0, min(x1 + w, img_info['width']) - max(x1, 0)) - inter_h = max(0, min(y1 + h, img_info['height']) - max(y1, 0)) - if inter_w * inter_h == 0: - continue - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann.get('segmentation', None)) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - seg_map = img_info['filename'].replace('jpg', 'png') - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=seg_map) - - return ann - - def xyxy2xywh(self, bbox): - """Convert ``xyxy`` style bounding boxes to ``xywh`` style for COCO - evaluation. - - Args: - bbox (numpy.ndarray): The bounding boxes, shape (4, ), in - ``xyxy`` order. 
- - Returns: - list[float]: The converted bounding boxes, in ``xywh`` order. - """ - - _bbox = bbox.tolist() - return [ - _bbox[0], - _bbox[1], - _bbox[2] - _bbox[0], - _bbox[3] - _bbox[1], - ] - - def _proposal2json(self, results): - """Convert proposal results to COCO json style.""" - json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - bboxes = results[idx] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = 1 - json_results.append(data) - return json_results - - def _det2json(self, results): - """Convert detection results to COCO json style.""" - json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - result = results[idx] - for label in range(len(result)): - bboxes = result[label] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = self.cat_ids[label] - json_results.append(data) - return json_results - - def _segm2json(self, results): - """Convert instance segmentation results to COCO json style.""" - bbox_json_results = [] - segm_json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - det, seg = results[idx] - for label in range(len(det)): - # bbox results - bboxes = det[label] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = self.cat_ids[label] - bbox_json_results.append(data) - - # segm results - # some detectors use different scores for bbox and mask - if isinstance(seg, tuple): - segms = seg[0][label] - mask_score = seg[1][label] - else: - segms = seg[label] - mask_score = [bbox[4] for bbox in bboxes] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(mask_score[i]) - data['category_id'] = self.cat_ids[label] - if isinstance(segms[i]['counts'], bytes): - segms[i]['counts'] = segms[i]['counts'].decode() - data['segmentation'] = segms[i] - segm_json_results.append(data) - return bbox_json_results, segm_json_results - - def results2json(self, results, outfile_prefix): - """Dump the detection results to a COCO style json file. - - There are 3 types of results: proposals, bbox predictions, mask - predictions, and they have different data types. This method will - automatically recognize the type, and dump them to json files. - - Args: - results (list[list | tuple | ndarray]): Testing results of the - dataset. - outfile_prefix (str): The filename prefix of the json files. If the - prefix is "somepath/xxx", the json files will be named - "somepath/xxx.bbox.json", "somepath/xxx.segm.json", - "somepath/xxx.proposal.json". - - Returns: - dict[str: str]: Possible keys are "bbox", "segm", "proposal", and \ - values are corresponding filenames. 
- """ - result_files = dict() - if isinstance(results[0], list): - json_results = self._det2json(results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - mmcv.dump(json_results, result_files['bbox']) - elif isinstance(results[0], tuple): - json_results = self._segm2json(results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - result_files['segm'] = f'{outfile_prefix}.segm.json' - mmcv.dump(json_results[0], result_files['bbox']) - mmcv.dump(json_results[1], result_files['segm']) - elif isinstance(results[0], np.ndarray): - json_results = self._proposal2json(results) - result_files['proposal'] = f'{outfile_prefix}.proposal.json' - mmcv.dump(json_results, result_files['proposal']) - else: - raise TypeError('invalid type of results') - return result_files - - def fast_eval_recall(self, results, proposal_nums, iou_thrs, logger=None): - gt_bboxes = [] - for i in range(len(self.img_ids)): - ann_ids = self.coco.get_ann_ids(img_ids=self.img_ids[i]) - ann_info = self.coco.load_anns(ann_ids) - if len(ann_info) == 0: - gt_bboxes.append(np.zeros((0, 4))) - continue - bboxes = [] - for ann in ann_info: - if ann.get('ignore', False) or ann['iscrowd']: - continue - x1, y1, w, h = ann['bbox'] - bboxes.append([x1, y1, x1 + w, y1 + h]) - bboxes = np.array(bboxes, dtype=np.float32) - if bboxes.shape[0] == 0: - bboxes = np.zeros((0, 4)) - gt_bboxes.append(bboxes) - - recalls = eval_recalls( - gt_bboxes, results, proposal_nums, iou_thrs, logger=logger) - ar = recalls.mean(axis=1) - return ar - - def format_results(self, results, jsonfile_prefix=None, **kwargs): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[tuple | numpy.ndarray]): Testing results of the - dataset. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing \ - the json filepaths, tmp_dir is the temporal directory created \ - for saving json files when jsonfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2json(results, jsonfile_prefix) - return result_files, tmp_dir - - def evaluate_det_segm(self, - results, - result_files, - coco_gt, - metrics, - logger=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=None, - metric_items=None): - """Instance segmentation and object detection evaluation in COCO - protocol. - - Args: - results (list[list | tuple | dict]): Testing results of the - dataset. - result_files (dict[str, str]): a dict contains json file path. - coco_gt (COCO): COCO API object with ground truth annotation. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - classwise (bool): Whether to evaluating the AP for each class. 
- proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float], optional): IoU threshold used for - evaluating recalls/mAPs. If set to a list, the average of all - IoUs will also be computed. If not specified, [0.50, 0.55, - 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. - Default: None. - metric_items (list[str] | str, optional): Metric items that will - be returned. If not specified, ``['AR@100', 'AR@300', - 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be - used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', - 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when - ``metric=='bbox' or metric=='segm'``. - - Returns: - dict[str, float]: COCO style evaluation metric. - """ - if iou_thrs is None: - iou_thrs = np.linspace( - .5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True) - if metric_items is not None: - if not isinstance(metric_items, list): - metric_items = [metric_items] - - eval_results = OrderedDict() - for metric in metrics: - msg = f'Evaluating {metric}...' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - if metric == 'proposal_fast': - if isinstance(results[0], tuple): - raise KeyError('proposal_fast is not supported for ' - 'instance segmentation result.') - ar = self.fast_eval_recall( - results, proposal_nums, iou_thrs, logger='silent') - log_msg = [] - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - log_msg.append(f'\nAR@{num}\t{ar[i]:.4f}') - log_msg = ''.join(log_msg) - print_log(log_msg, logger=logger) - continue - - iou_type = 'bbox' if metric == 'proposal' else metric - if metric not in result_files: - raise KeyError(f'{metric} is not in results') - try: - predictions = mmcv.load(result_files[metric]) - if iou_type == 'segm': - # Refer to https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/coco.py#L331 # noqa - # When evaluating mask AP, if the results contain bbox, - # cocoapi will use the box area instead of the mask area - # for calculating the instance area. Though the overall AP - # is not affected, this leads to different - # small/medium/large mask AP results. - for x in predictions: - x.pop('bbox') - warnings.simplefilter('once') - warnings.warn( - 'The key "bbox" is deleted for more accurate mask AP ' - 'of small/medium/large instances since v2.12.0. 
This ' - 'does not change the overall mAP calculation.', - UserWarning) - coco_det = coco_gt.loadRes(predictions) - except IndexError: - print_log( - 'The testing results of the whole dataset is empty.', - logger=logger, - level=logging.ERROR) - break - - cocoEval = COCOeval(coco_gt, coco_det, iou_type) - cocoEval.params.catIds = self.cat_ids - cocoEval.params.imgIds = self.img_ids - cocoEval.params.maxDets = list(proposal_nums) - cocoEval.params.iouThrs = iou_thrs - # mapping of cocoEval.stats - coco_metric_names = { - 'mAP': 0, - 'mAP_50': 1, - 'mAP_75': 2, - 'mAP_s': 3, - 'mAP_m': 4, - 'mAP_l': 5, - 'AR@100': 6, - 'AR@300': 7, - 'AR@1000': 8, - 'AR_s@1000': 9, - 'AR_m@1000': 10, - 'AR_l@1000': 11 - } - if metric_items is not None: - for metric_item in metric_items: - if metric_item not in coco_metric_names: - raise KeyError( - f'metric item {metric_item} is not supported') - - if metric == 'proposal': - cocoEval.params.useCats = 0 - cocoEval.evaluate() - cocoEval.accumulate() - - # Save coco summarize print information to logger - redirect_string = io.StringIO() - with contextlib.redirect_stdout(redirect_string): - cocoEval.summarize() - print_log('\n' + redirect_string.getvalue(), logger=logger) - - if metric_items is None: - metric_items = [ - 'AR@100', 'AR@300', 'AR@1000', 'AR_s@1000', - 'AR_m@1000', 'AR_l@1000' - ] - - for item in metric_items: - val = float( - f'{cocoEval.stats[coco_metric_names[item]]:.3f}') - eval_results[item] = val - else: - cocoEval.evaluate() - cocoEval.accumulate() - - # Save coco summarize print information to logger - redirect_string = io.StringIO() - with contextlib.redirect_stdout(redirect_string): - cocoEval.summarize() - print_log('\n' + redirect_string.getvalue(), logger=logger) - - if classwise: # Compute per-category AP - # Compute per-category AP - # from https://github.com/facebookresearch/detectron2/ - precisions = cocoEval.eval['precision'] - # precision: (iou, recall, cls, area range, max dets) - assert len(self.cat_ids) == precisions.shape[2] - - results_per_category = [] - for idx, catId in enumerate(self.cat_ids): - # area range index 0: all area ranges - # max dets index -1: typically 100 per image - nm = self.coco.loadCats(catId)[0] - precision = precisions[:, :, idx, 0, -1] - precision = precision[precision > -1] - if precision.size: - ap = np.mean(precision) - else: - ap = float('nan') - results_per_category.append( - (f'{nm["name"]}', f'{float(ap):0.3f}')) - - num_columns = min(6, len(results_per_category) * 2) - results_flatten = list( - itertools.chain(*results_per_category)) - headers = ['category', 'AP'] * (num_columns // 2) - results_2d = itertools.zip_longest(*[ - results_flatten[i::num_columns] - for i in range(num_columns) - ]) - table_data = [headers] - table_data += [result for result in results_2d] - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - if metric_items is None: - metric_items = [ - 'mAP', 'mAP_50', 'mAP_75', 'mAP_s', 'mAP_m', 'mAP_l' - ] - - for metric_item in metric_items: - key = f'{metric}_{metric_item}' - val = float( - f'{cocoEval.stats[coco_metric_names[metric_item]]:.3f}' - ) - eval_results[key] = val - ap = cocoEval.stats[:6] - eval_results[f'{metric}_mAP_copypaste'] = ( - f'{ap[0]:.3f} {ap[1]:.3f} {ap[2]:.3f} {ap[3]:.3f} ' - f'{ap[4]:.3f} {ap[5]:.3f}') - - return eval_results - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=None, - metric_items=None): - """Evaluation in 
COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float], optional): IoU threshold used for - evaluating recalls/mAPs. If set to a list, the average of all - IoUs will also be computed. If not specified, [0.50, 0.55, - 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. - Default: None. - metric_items (list[str] | str, optional): Metric items that will - be returned. If not specified, ``['AR@100', 'AR@300', - 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be - used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', - 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when - ``metric=='bbox' or metric=='segm'``. - - Returns: - dict[str, float]: COCO style evaluation metric. - """ - - metrics = metric if isinstance(metric, list) else [metric] - allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast'] - for metric in metrics: - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - - coco_gt = self.coco - self.cat_ids = coco_gt.get_cat_ids(cat_names=self.CLASSES) - - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - eval_results = self.evaluate_det_segm(results, result_files, coco_gt, - metrics, logger, classwise, - proposal_nums, iou_thrs, - metric_items) - - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/custom.py b/cv/detection/yolof/pytorch/mmdet/datasets/custom.py deleted file mode 100755 index a4d82589..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/custom.py +++ /dev/null @@ -1,410 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings -from collections import OrderedDict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable -from torch.utils.data import Dataset - -from mmdet.core import eval_map, eval_recalls -from .builder import DATASETS -from .pipelines import Compose - - -@DATASETS.register_module() -class CustomDataset(Dataset): - """Custom dataset for detection. - - The annotation format is shown as follows. The `ann` field is optional for - testing. - - .. code-block:: none - - [ - { - 'filename': 'a.jpg', - 'width': 1280, - 'height': 720, - 'ann': { - 'bboxes': (n, 4) in (x1, y1, x2, y2) order. - 'labels': (n, ), - 'bboxes_ignore': (k, 4), (optional field) - 'labels_ignore': (k, 4) (optional field) - } - }, - ... - ] - - Args: - ann_file (str): Annotation file path. - pipeline (list[dict]): Processing pipeline. - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. Default: None. - data_root (str, optional): Data root for ``ann_file``, - ``img_prefix``, ``seg_prefix``, ``proposal_file`` if specified. - test_mode (bool, optional): If set True, annotation will not be loaded. 
- filter_empty_gt (bool, optional): If set true, images without bounding - boxes of the dataset's classes will be filtered out. This option - only works when `test_mode=False`, i.e., we never filter images - during tests. - """ - - CLASSES = None - - PALETTE = None - - def __init__(self, - ann_file, - pipeline, - classes=None, - data_root=None, - img_prefix='', - seg_prefix=None, - proposal_file=None, - test_mode=False, - filter_empty_gt=True, - file_client_args=dict(backend='disk')): - self.ann_file = ann_file - self.data_root = data_root - self.img_prefix = img_prefix - self.seg_prefix = seg_prefix - self.proposal_file = proposal_file - self.test_mode = test_mode - self.filter_empty_gt = filter_empty_gt - self.file_client = mmcv.FileClient(**file_client_args) - self.CLASSES = self.get_classes(classes) - - # join paths if data_root is specified - if self.data_root is not None: - if not osp.isabs(self.ann_file): - self.ann_file = osp.join(self.data_root, self.ann_file) - if not (self.img_prefix is None or osp.isabs(self.img_prefix)): - self.img_prefix = osp.join(self.data_root, self.img_prefix) - if not (self.seg_prefix is None or osp.isabs(self.seg_prefix)): - self.seg_prefix = osp.join(self.data_root, self.seg_prefix) - if not (self.proposal_file is None - or osp.isabs(self.proposal_file)): - self.proposal_file = osp.join(self.data_root, - self.proposal_file) - # load annotations (and proposals) - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path(self.ann_file) as local_path: - self.data_infos = self.load_annotations(local_path) - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {self.ann_file} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - self.data_infos = self.load_annotations(self.ann_file) - - if self.proposal_file is not None: - if hasattr(self.file_client, 'get_local_path'): - with self.file_client.get_local_path( - self.proposal_file) as local_path: - self.proposals = self.load_proposals(local_path) - else: - warnings.warn( - 'The used MMCV version does not have get_local_path. ' - f'We treat the {self.ann_file} as local paths and it ' - 'might cause errors if the path is not a local path. ' - 'Please use MMCV>= 1.3.16 if you meet errors.') - self.proposals = self.load_proposals(self.proposal_file) - else: - self.proposals = None - - # filter images too small and containing no annotations - if not test_mode: - valid_inds = self._filter_imgs() - self.data_infos = [self.data_infos[i] for i in valid_inds] - if self.proposals is not None: - self.proposals = [self.proposals[i] for i in valid_inds] - # set group flag for the sampler - self._set_group_flag() - - # processing pipeline - self.pipeline = Compose(pipeline) - - def __len__(self): - """Total number of samples of data.""" - return len(self.data_infos) - - def load_annotations(self, ann_file): - """Load annotation from annotation file.""" - return mmcv.load(ann_file) - - def load_proposals(self, proposal_file): - """Load proposal from proposal file.""" - return mmcv.load(proposal_file) - - def get_ann_info(self, idx): - """Get annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.data_infos[idx]['ann'] - - def get_cat_ids(self, idx): - """Get category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. 
- """ - - return self.data_infos[idx]['ann']['labels'].astype(np.int).tolist() - - def pre_pipeline(self, results): - """Prepare results dict for pipeline.""" - results['img_prefix'] = self.img_prefix - results['seg_prefix'] = self.seg_prefix - results['proposal_file'] = self.proposal_file - results['bbox_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - - def _filter_imgs(self, min_size=32): - """Filter images too small.""" - if self.filter_empty_gt: - warnings.warn( - 'CustomDataset does not support filtering empty gt images.') - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - - def _set_group_flag(self): - """Set flag according to image aspect ratio. - - Images with aspect ratio greater than 1 will be set as group 1, - otherwise group 0. - """ - self.flag = np.zeros(len(self), dtype=np.uint8) - for i in range(len(self)): - img_info = self.data_infos[i] - if img_info['width'] / img_info['height'] > 1: - self.flag[i] = 1 - - def _rand_another(self, idx): - """Get another random index from the same group as the given index.""" - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) - - def __getitem__(self, idx): - """Get training/test data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training/test data (with annotation if `test_mode` is set \ - True). - """ - - if self.test_mode: - return self.prepare_test_img(idx) - while True: - data = self.prepare_train_img(idx) - if data is None: - idx = self._rand_another(idx) - continue - return data - - def prepare_train_img(self, idx): - """Get training data and annotations after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training data and annotation after pipeline with new keys \ - introduced by pipeline. - """ - - img_info = self.data_infos[idx] - ann_info = self.get_ann_info(idx) - results = dict(img_info=img_info, ann_info=ann_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Get testing data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Testing data after pipeline with new keys introduced by \ - pipeline. - """ - - img_info = self.data_infos[idx] - results = dict(img_info=img_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - @classmethod - def get_classes(cls, classes=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - - Returns: - tuple[str] or list[str]: Names of categories of the dataset. 
- """ - if classes is None: - return cls.CLASSES - - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - return class_names - - def get_cat2imgs(self): - """Get a dict with class as key and img_ids as values, which will be - used in :class:`ClassAwareSampler`. - - Returns: - dict[list]: A dict of per-label image list, - the item of the dict indicates a label index, - corresponds to the image index that contains the label. - """ - if self.CLASSES is None: - raise ValueError('self.CLASSES can not be None') - # sort the label index - cat2imgs = {i: [] for i in range(len(self.CLASSES))} - for i in range(len(self)): - cat_ids = set(self.get_cat_ids(i)) - for cat in cat_ids: - cat2imgs[cat].append(i) - return cat2imgs - - def format_results(self, results, **kwargs): - """Place holder to format result to dataset specific output.""" - - def evaluate(self, - results, - metric='mAP', - logger=None, - proposal_nums=(100, 300, 1000), - iou_thr=0.5, - scale_ranges=None): - """Evaluate the dataset. - - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thr (float | list[float]): IoU threshold. Default: 0.5. - scale_ranges (list[tuple] | None): Scale ranges for evaluating mAP. - Default: None. - """ - - if not isinstance(metric, str): - assert len(metric) == 1 - metric = metric[0] - allowed_metrics = ['mAP', 'recall'] - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - annotations = [self.get_ann_info(i) for i in range(len(self))] - eval_results = OrderedDict() - iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr - if metric == 'mAP': - assert isinstance(iou_thrs, list) - mean_aps = [] - for iou_thr in iou_thrs: - print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}') - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=scale_ranges, - iou_thr=iou_thr, - dataset=self.CLASSES, - logger=logger) - mean_aps.append(mean_ap) - eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3) - eval_results['mAP'] = sum(mean_aps) / len(mean_aps) - elif metric == 'recall': - gt_bboxes = [ann['bboxes'] for ann in annotations] - recalls = eval_recalls( - gt_bboxes, results, proposal_nums, iou_thr, logger=logger) - for i, num in enumerate(proposal_nums): - for j, iou in enumerate(iou_thrs): - eval_results[f'recall@{num}@{iou}'] = recalls[i, j] - if recalls.shape[1] > 1: - ar = recalls.mean(axis=1) - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - return eval_results - - def __repr__(self): - """Print the number of instance number.""" - dataset_type = 'Test' if self.test_mode else 'Train' - result = (f'\n{self.__class__.__name__} {dataset_type} dataset ' - f'with number of images {len(self)}, ' - f'and instance counts: \n') - if self.CLASSES is None: - result += 'Category names are not provided. 
\n' - return result - instance_count = np.zeros(len(self.CLASSES) + 1).astype(int) - # count the instance number in each image - for idx in range(len(self)): - label = self.get_ann_info(idx)['labels'] - unique, counts = np.unique(label, return_counts=True) - if len(unique) > 0: - # add the occurrence number to each class - instance_count[unique] += counts - else: - # background is the last index - instance_count[-1] += 1 - # create a table with category count - table_data = [['category', 'count'] * 5] - row_data = [] - for cls, count in enumerate(instance_count): - if cls < len(self.CLASSES): - row_data += [f'{cls} [{self.CLASSES[cls]}]', f'{count}'] - else: - # add the background number - row_data += ['-1 background', f'{count}'] - if len(row_data) == 10: - table_data.append(row_data) - row_data = [] - if len(row_data) >= 2: - if row_data[-1] == '0': - row_data = row_data[:-2] - if len(row_data) >= 2: - table_data.append([]) - table_data.append(row_data) - - table = AsciiTable(table_data) - result += table.table - return result diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/dataset_wrappers.py b/cv/detection/yolof/pytorch/mmdet/datasets/dataset_wrappers.py deleted file mode 100755 index e62b88eb..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/dataset_wrappers.py +++ /dev/null @@ -1,456 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import bisect -import collections -import copy -import math -from collections import defaultdict - -import numpy as np -from mmcv.utils import build_from_cfg, print_log -from torch.utils.data.dataset import ConcatDataset as _ConcatDataset - -from .builder import DATASETS, PIPELINES -from .coco import CocoDataset - - -@DATASETS.register_module() -class ConcatDataset(_ConcatDataset): - """A wrapper of concatenated dataset. - - Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but - concat the group flag for image aspect ratio. - - Args: - datasets (list[:obj:`Dataset`]): A list of datasets. - separate_eval (bool): Whether to evaluate the results - separately if it is used as validation dataset. - Defaults to True. - """ - - def __init__(self, datasets, separate_eval=True): - super(ConcatDataset, self).__init__(datasets) - self.CLASSES = datasets[0].CLASSES - self.PALETTE = getattr(datasets[0], 'PALETTE', None) - self.separate_eval = separate_eval - if not separate_eval: - if any([isinstance(ds, CocoDataset) for ds in datasets]): - raise NotImplementedError( - 'Evaluating concatenated CocoDataset as a whole is not' - ' supported! Please set "separate_eval=True"') - elif len(set([type(ds) for ds in datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types') - - if hasattr(datasets[0], 'flag'): - flags = [] - for i in range(0, len(datasets)): - flags.append(datasets[i].flag) - self.flag = np.concatenate(flags) - - def get_cat_ids(self, idx): - """Get category ids of concatenated dataset by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - if idx < 0: - if -idx > len(self): - raise ValueError( - 'absolute value of index should not exceed dataset length') - idx = len(self) + idx - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - return self.datasets[dataset_idx].get_cat_ids(sample_idx) - - def get_ann_info(self, idx): - """Get annotation of concatenated dataset by index. 
- - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - if idx < 0: - if -idx > len(self): - raise ValueError( - 'absolute value of index should not exceed dataset length') - idx = len(self) + idx - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - return self.datasets[dataset_idx].get_ann_info(sample_idx) - - def evaluate(self, results, logger=None, **kwargs): - """Evaluate the results. - - Args: - results (list[list | tuple]): Testing results of the dataset. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: AP results of the total dataset or each separate - dataset if `self.separate_eval=True`. - """ - assert len(results) == self.cumulative_sizes[-1], \ - ('Dataset and results have different sizes: ' - f'{self.cumulative_sizes[-1]} v.s. {len(results)}') - - # Check whether all the datasets support evaluation - for dataset in self.datasets: - assert hasattr(dataset, 'evaluate'), \ - f'{type(dataset)} does not implement evaluate function' - - if self.separate_eval: - dataset_idx = -1 - total_eval_results = dict() - for size, dataset in zip(self.cumulative_sizes, self.datasets): - start_idx = 0 if dataset_idx == -1 else \ - self.cumulative_sizes[dataset_idx] - end_idx = self.cumulative_sizes[dataset_idx + 1] - - results_per_dataset = results[start_idx:end_idx] - print_log( - f'\nEvaluateing {dataset.ann_file} with ' - f'{len(results_per_dataset)} images now', - logger=logger) - - eval_results_per_dataset = dataset.evaluate( - results_per_dataset, logger=logger, **kwargs) - dataset_idx += 1 - for k, v in eval_results_per_dataset.items(): - total_eval_results.update({f'{dataset_idx}_{k}': v}) - - return total_eval_results - elif any([isinstance(ds, CocoDataset) for ds in self.datasets]): - raise NotImplementedError( - 'Evaluating concatenated CocoDataset as a whole is not' - ' supported! Please set "separate_eval=True"') - elif len(set([type(ds) for ds in self.datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types') - else: - original_data_infos = self.datasets[0].data_infos - self.datasets[0].data_infos = sum( - [dataset.data_infos for dataset in self.datasets], []) - eval_results = self.datasets[0].evaluate( - results, logger=logger, **kwargs) - self.datasets[0].data_infos = original_data_infos - return eval_results - - -@DATASETS.register_module() -class RepeatDataset: - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. - """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - self.CLASSES = dataset.CLASSES - self.PALETTE = getattr(dataset, 'PALETTE', None) - if hasattr(self.dataset, 'flag'): - self.flag = np.tile(self.dataset.flag, times) - - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - return self.dataset[idx % self._ori_len] - - def get_cat_ids(self, idx): - """Get category ids of repeat dataset by index. - - Args: - idx (int): Index of data. 
- - Returns: - list[int]: All categories in the image of specified index. - """ - - return self.dataset.get_cat_ids(idx % self._ori_len) - - def get_ann_info(self, idx): - """Get annotation of repeat dataset by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.dataset.get_ann_info(idx % self._ori_len) - - def __len__(self): - """Length after repetition.""" - return self.times * self._ori_len - - -# Modified from https://github.com/facebookresearch/detectron2/blob/41d475b75a230221e21d9cac5d69655e3415e3a4/detectron2/data/samplers/distributed_sampler.py#L57 # noqa -@DATASETS.register_module() -class ClassBalancedDataset: - """A wrapper of repeated dataset with repeat factor. - - Suitable for training on class imbalanced datasets like LVIS. Following - the sampling strategy in the `paper `_, - in each epoch, an image may appear multiple times based on its - "repeat factor". - The repeat factor for an image is a function of the frequency the rarest - category labeled in that image. The "frequency of category c" in [0, 1] - is defined by the fraction of images in the training set (without repeats) - in which category c appears. - The dataset needs to instantiate :func:`self.get_cat_ids` to support - ClassBalancedDataset. - - The repeat factor is computed as followed. - - 1. For each category c, compute the fraction # of images - that contain it: :math:`f(c)` - 2. For each category c, compute the category-level repeat factor: - :math:`r(c) = max(1, sqrt(t/f(c)))` - 3. For each image I, compute the image-level repeat factor: - :math:`r(I) = max_{c in I} r(c)` - - Args: - dataset (:obj:`CustomDataset`): The dataset to be repeated. - oversample_thr (float): frequency threshold below which data is - repeated. For categories with ``f_c >= oversample_thr``, there is - no oversampling. For categories with ``f_c < oversample_thr``, the - degree of oversampling following the square-root inverse frequency - heuristic above. - filter_empty_gt (bool, optional): If set true, images without bounding - boxes will not be oversampled. Otherwise, they will be categorized - as the pure background class and involved into the oversampling. - Default: True. - """ - - def __init__(self, dataset, oversample_thr, filter_empty_gt=True): - self.dataset = dataset - self.oversample_thr = oversample_thr - self.filter_empty_gt = filter_empty_gt - self.CLASSES = dataset.CLASSES - self.PALETTE = getattr(dataset, 'PALETTE', None) - - repeat_factors = self._get_repeat_factors(dataset, oversample_thr) - repeat_indices = [] - for dataset_idx, repeat_factor in enumerate(repeat_factors): - repeat_indices.extend([dataset_idx] * math.ceil(repeat_factor)) - self.repeat_indices = repeat_indices - - flags = [] - if hasattr(self.dataset, 'flag'): - for flag, repeat_factor in zip(self.dataset.flag, repeat_factors): - flags.extend([flag] * int(math.ceil(repeat_factor))) - assert len(flags) == len(repeat_indices) - self.flag = np.asarray(flags, dtype=np.uint8) - - def _get_repeat_factors(self, dataset, repeat_thr): - """Get repeat factor for each images in the dataset. - - Args: - dataset (:obj:`CustomDataset`): The dataset - repeat_thr (float): The threshold of frequency. If an image - contains the categories whose frequency below the threshold, - it would be repeated. - - Returns: - list[float]: The repeat factors for each images in the dataset. - """ - - # 1. 
For each category c, compute the fraction # of images - # that contain it: f(c) - category_freq = defaultdict(int) - num_images = len(dataset) - for idx in range(num_images): - cat_ids = set(self.dataset.get_cat_ids(idx)) - if len(cat_ids) == 0 and not self.filter_empty_gt: - cat_ids = set([len(self.CLASSES)]) - for cat_id in cat_ids: - category_freq[cat_id] += 1 - for k, v in category_freq.items(): - category_freq[k] = v / num_images - - # 2. For each category c, compute the category-level repeat factor: - # r(c) = max(1, sqrt(t/f(c))) - category_repeat = { - cat_id: max(1.0, math.sqrt(repeat_thr / cat_freq)) - for cat_id, cat_freq in category_freq.items() - } - - # 3. For each image I, compute the image-level repeat factor: - # r(I) = max_{c in I} r(c) - repeat_factors = [] - for idx in range(num_images): - cat_ids = set(self.dataset.get_cat_ids(idx)) - if len(cat_ids) == 0 and not self.filter_empty_gt: - cat_ids = set([len(self.CLASSES)]) - repeat_factor = 1 - if len(cat_ids) > 0: - repeat_factor = max( - {category_repeat[cat_id] - for cat_id in cat_ids}) - repeat_factors.append(repeat_factor) - - return repeat_factors - - def __getitem__(self, idx): - ori_index = self.repeat_indices[idx] - return self.dataset[ori_index] - - def get_ann_info(self, idx): - """Get annotation of dataset by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - ori_index = self.repeat_indices[idx] - return self.dataset.get_ann_info(ori_index) - - def __len__(self): - """Length after repetition.""" - return len(self.repeat_indices) - - -@DATASETS.register_module() -class MultiImageMixDataset: - """A wrapper of multiple images mixed dataset. - - Suitable for training on multiple images mixed data augmentation like - mosaic and mixup. For the augmentation pipeline of mixed image data, - the `get_indexes` method needs to be provided to obtain the image - indexes, and you can set `skip_flags` to change the pipeline running - process. At the same time, we provide the `dynamic_scale` parameter - to dynamically change the output image size. - - Args: - dataset (:obj:`CustomDataset`): The dataset to be mixed. - pipeline (Sequence[dict]): Sequence of transform object or - config dict to be composed. - dynamic_scale (tuple[int], optional): The image scale can be changed - dynamically. Default to None. It is deprecated. - skip_type_keys (list[str], optional): Sequence of type string to - be skip pipeline. Default to None. - max_refetch (int): The maximum number of retry iterations for getting - valid results from the pipeline. If the number of iterations is - greater than `max_refetch`, but results is still None, then the - iteration is terminated and raise the error. Default: 15. - """ - - def __init__(self, - dataset, - pipeline, - dynamic_scale=None, - skip_type_keys=None, - max_refetch=15): - if dynamic_scale is not None: - raise RuntimeError( - 'dynamic_scale is deprecated. 
Please use Resize pipeline ' - 'to achieve similar functions') - assert isinstance(pipeline, collections.abc.Sequence) - if skip_type_keys is not None: - assert all([ - isinstance(skip_type_key, str) - for skip_type_key in skip_type_keys - ]) - self._skip_type_keys = skip_type_keys - - self.pipeline = [] - self.pipeline_types = [] - for transform in pipeline: - if isinstance(transform, dict): - self.pipeline_types.append(transform['type']) - transform = build_from_cfg(transform, PIPELINES) - self.pipeline.append(transform) - else: - raise TypeError('pipeline must be a dict') - - self.dataset = dataset - self.CLASSES = dataset.CLASSES - self.PALETTE = getattr(dataset, 'PALETTE', None) - if hasattr(self.dataset, 'flag'): - self.flag = dataset.flag - self.num_samples = len(dataset) - self.max_refetch = max_refetch - - def __len__(self): - return self.num_samples - - def __getitem__(self, idx): - results = copy.deepcopy(self.dataset[idx]) - for (transform, transform_type) in zip(self.pipeline, - self.pipeline_types): - if self._skip_type_keys is not None and \ - transform_type in self._skip_type_keys: - continue - - if hasattr(transform, 'get_indexes'): - for i in range(self.max_refetch): - # Make sure the results passed the loading pipeline - # of the original dataset is not None. - indexes = transform.get_indexes(self.dataset) - if not isinstance(indexes, collections.abc.Sequence): - indexes = [indexes] - mix_results = [ - copy.deepcopy(self.dataset[index]) for index in indexes - ] - if None not in mix_results: - results['mix_results'] = mix_results - break - else: - raise RuntimeError( - 'The loading pipeline of the original dataset' - ' always return None. Please check the correctness ' - 'of the dataset and its pipeline.') - - for i in range(self.max_refetch): - # To confirm the results passed the training pipeline - # of the wrapper is not None. - updated_results = transform(copy.deepcopy(results)) - if updated_results is not None: - results = updated_results - break - else: - raise RuntimeError( - 'The training pipeline of the dataset wrapper' - ' always return None.Please check the correctness ' - 'of the dataset and its pipeline.') - - if 'mix_results' in results: - results.pop('mix_results') - - return results - - def update_skip_type_keys(self, skip_type_keys): - """Update skip_type_keys. It is called by an external hook. - - Args: - skip_type_keys (list[str], optional): Sequence of type - string to be skip pipeline. - """ - assert all([ - isinstance(skip_type_key, str) for skip_type_key in skip_type_keys - ]) - self._skip_type_keys = skip_type_keys diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/__init__.py b/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/__init__.py deleted file mode 100755 index 8260da64..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/__init__.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .auto_augment import (AutoAugment, BrightnessTransform, ColorTransform, - ContrastTransform, EqualizeTransform, Rotate, Shear, - Translate) -from .compose import Compose -from .formatting import (Collect, DefaultFormatBundle, ImageToTensor, - ToDataContainer, ToTensor, Transpose, to_tensor) -from .instaboost import InstaBoost -from .loading import (FilterAnnotations, LoadAnnotations, LoadImageFromFile, - LoadImageFromWebcam, LoadMultiChannelImageFromFiles, - LoadPanopticAnnotations, LoadProposals) -from .test_time_aug import MultiScaleFlipAug -from .transforms import (Albu, CopyPaste, CutOut, Expand, MinIoURandomCrop, - MixUp, Mosaic, Normalize, Pad, PhotoMetricDistortion, - RandomAffine, RandomCenterCropPad, RandomCrop, - RandomFlip, RandomShift, Resize, SegRescale, - YOLOXHSVRandomAug) - -__all__ = [ - 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer', - 'Transpose', 'Collect', 'DefaultFormatBundle', 'LoadAnnotations', - 'LoadImageFromFile', 'LoadImageFromWebcam', 'LoadPanopticAnnotations', - 'LoadMultiChannelImageFromFiles', 'LoadProposals', 'FilterAnnotations', - 'MultiScaleFlipAug', 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', - 'Normalize', 'SegRescale', 'MinIoURandomCrop', 'Expand', - 'PhotoMetricDistortion', 'Albu', 'InstaBoost', 'RandomCenterCropPad', - 'AutoAugment', 'CutOut', 'Shear', 'Rotate', 'ColorTransform', - 'EqualizeTransform', 'BrightnessTransform', 'ContrastTransform', - 'Translate', 'RandomShift', 'Mosaic', 'MixUp', 'RandomAffine', - 'YOLOXHSVRandomAug', 'CopyPaste' -] diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/auto_augment.py b/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/auto_augment.py deleted file mode 100755 index b0ff67db..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/auto_augment.py +++ /dev/null @@ -1,894 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import cv2 -import mmcv -import numpy as np - -from ..builder import PIPELINES -from .compose import Compose - -_MAX_LEVEL = 10 - - -def level_to_value(level, max_value): - """Map from level to values based on max_value.""" - return (level / _MAX_LEVEL) * max_value - - -def enhance_level_to_value(level, a=1.8, b=0.1): - """Map from level to values.""" - return (level / _MAX_LEVEL) * a + b - - -def random_negative(value, random_negative_prob): - """Randomly negate value based on random_negative_prob.""" - return -value if np.random.rand() < random_negative_prob else value - - -def bbox2fields(): - """The key correspondence from bboxes to labels, masks and - segmentations.""" - bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - bbox2seg = { - 'gt_bboxes': 'gt_semantic_seg', - } - return bbox2label, bbox2mask, bbox2seg - - -@PIPELINES.register_module() -class AutoAugment: - """Auto augmentation. - - This data augmentation is proposed in `Learning Data Augmentation - Strategies for Object Detection `_. - - TODO: Implement 'Shear', 'Sharpness' and 'Rotate' transforms - - Args: - policies (list[list[dict]]): The policies of auto augmentation. Each - policy in ``policies`` is a specific augmentation policy, and is - composed by several augmentations (dict). When AutoAugment is - called, a random policy in ``policies`` will be selected to - augment images. 
- - Examples: - >>> replace = (104, 116, 124) - >>> policies = [ - >>> [ - >>> dict(type='Sharpness', prob=0.0, level=8), - >>> dict( - >>> type='Shear', - >>> prob=0.4, - >>> level=0, - >>> replace=replace, - >>> axis='x') - >>> ], - >>> [ - >>> dict( - >>> type='Rotate', - >>> prob=0.6, - >>> level=10, - >>> replace=replace), - >>> dict(type='Color', prob=1.0, level=6) - >>> ] - >>> ] - >>> augmentation = AutoAugment(policies) - >>> img = np.ones(100, 100, 3) - >>> gt_bboxes = np.ones(10, 4) - >>> results = dict(img=img, gt_bboxes=gt_bboxes) - >>> results = augmentation(results) - """ - - def __init__(self, policies): - assert isinstance(policies, list) and len(policies) > 0, \ - 'Policies must be a non-empty list.' - for policy in policies: - assert isinstance(policy, list) and len(policy) > 0, \ - 'Each policy in policies must be a non-empty list.' - for augment in policy: - assert isinstance(augment, dict) and 'type' in augment, \ - 'Each specific augmentation must be a dict with key' \ - ' "type".' - - self.policies = copy.deepcopy(policies) - self.transforms = [Compose(policy) for policy in self.policies] - - def __call__(self, results): - transform = np.random.choice(self.transforms) - return transform(results) - - def __repr__(self): - return f'{self.__class__.__name__}(policies={self.policies})' - - -@PIPELINES.register_module() -class Shear: - """Apply Shear Transformation to image (and its corresponding bbox, mask, - segmentation). - - Args: - level (int | float): The level should be in range [0,_MAX_LEVEL]. - img_fill_val (int | float | tuple): The filled values for image border. - If float, the same fill value will be used for all the three - channels of image. If tuple, the should be 3 elements. - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - prob (float): The probability for performing Shear and should be in - range [0, 1]. - direction (str): The direction for shear, either "horizontal" - or "vertical". - max_shear_magnitude (float): The maximum magnitude for Shear - transformation. - random_negative_prob (float): The probability that turns the - offset negative. Should be in range [0,1] - interpolation (str): Same as in :func:`mmcv.imshear`. - """ - - def __init__(self, - level, - img_fill_val=128, - seg_ignore_label=255, - prob=0.5, - direction='horizontal', - max_shear_magnitude=0.3, - random_negative_prob=0.5, - interpolation='bilinear'): - assert isinstance(level, (int, float)), 'The level must be type ' \ - f'int or float, got {type(level)}.' - assert 0 <= level <= _MAX_LEVEL, 'The level should be in range ' \ - f'[0,{_MAX_LEVEL}], got {level}.' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, 'img_fill_val as tuple must ' \ - f'have 3 elements. got {len(img_fill_val)}.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError( - 'img_fill_val must be float or tuple with 3 elements.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), 'all ' \ - 'elements of img_fill_val should between range [0,255].' \ - f'got {img_fill_val}.' - assert 0 <= prob <= 1.0, 'The probability of shear should be in ' \ - f'range [0,1]. got {prob}.' - assert direction in ('horizontal', 'vertical'), 'direction must ' \ - f'in be either "horizontal" or "vertical". got {direction}.' 
- assert isinstance(max_shear_magnitude, float), 'max_shear_magnitude ' \ - f'should be type float. got {type(max_shear_magnitude)}.' - assert 0. <= max_shear_magnitude <= 1., 'Defaultly ' \ - 'max_shear_magnitude should be in range [0,1]. ' \ - f'got {max_shear_magnitude}.' - self.level = level - self.magnitude = level_to_value(level, max_shear_magnitude) - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.prob = prob - self.direction = direction - self.max_shear_magnitude = max_shear_magnitude - self.random_negative_prob = random_negative_prob - self.interpolation = interpolation - - def _shear_img(self, - results, - magnitude, - direction='horizontal', - interpolation='bilinear'): - """Shear the image. - - Args: - results (dict): Result dict from loading pipeline. - magnitude (int | float): The magnitude used for shear. - direction (str): The direction for shear, either "horizontal" - or "vertical". - interpolation (str): Same as in :func:`mmcv.imshear`. - """ - for key in results.get('img_fields', ['img']): - img = results[key] - img_sheared = mmcv.imshear( - img, - magnitude, - direction, - border_value=self.img_fill_val, - interpolation=interpolation) - results[key] = img_sheared.astype(img.dtype) - results['img_shape'] = results[key].shape - - def _shear_bboxes(self, results, magnitude): - """Shear the bboxes.""" - h, w, c = results['img_shape'] - if self.direction == 'horizontal': - shear_matrix = np.stack([[1, magnitude], - [0, 1]]).astype(np.float32) # [2, 2] - else: - shear_matrix = np.stack([[1, 0], [magnitude, - 1]]).astype(np.float32) - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - coordinates = np.stack([[min_x, min_y], [max_x, min_y], - [min_x, max_y], - [max_x, max_y]]) # [4, 2, nb_box, 1] - coordinates = coordinates[..., 0].transpose( - (2, 1, 0)).astype(np.float32) # [nb_box, 2, 4] - new_coords = np.matmul(shear_matrix[None, :, :], - coordinates) # [nb_box, 2, 4] - min_x = np.min(new_coords[:, 0, :], axis=-1) - min_y = np.min(new_coords[:, 1, :], axis=-1) - max_x = np.max(new_coords[:, 0, :], axis=-1) - max_y = np.max(new_coords[:, 1, :], axis=-1) - min_x = np.clip(min_x, a_min=0, a_max=w) - min_y = np.clip(min_y, a_min=0, a_max=h) - max_x = np.clip(max_x, a_min=min_x, a_max=w) - max_y = np.clip(max_y, a_min=min_y, a_max=h) - results[key] = np.stack([min_x, min_y, max_x, max_y], - axis=-1).astype(results[key].dtype) - - def _shear_masks(self, - results, - magnitude, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Shear the masks.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.shear((h, w), - magnitude, - direction, - border_value=fill_val, - interpolation=interpolation) - - def _shear_seg(self, - results, - magnitude, - direction='horizontal', - fill_val=255, - interpolation='bilinear'): - """Shear the segmentation maps.""" - for key in results.get('seg_fields', []): - seg = results[key] - results[key] = mmcv.imshear( - seg, - magnitude, - direction, - border_value=fill_val, - interpolation=interpolation).astype(seg.dtype) - - def _filter_invalid(self, results, min_bbox_size=0): - """Filter bboxes and corresponding masks too small after shear - augmentation.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] 
- valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - - def __call__(self, results): - """Call function to shear images, bounding boxes, masks and semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Sheared results. - """ - if np.random.rand() > self.prob: - return results - magnitude = random_negative(self.magnitude, self.random_negative_prob) - self._shear_img(results, magnitude, self.direction, self.interpolation) - self._shear_bboxes(results, magnitude) - # fill_val set to 0 for background of mask. - self._shear_masks( - results, - magnitude, - self.direction, - fill_val=0, - interpolation=self.interpolation) - self._shear_seg( - results, - magnitude, - self.direction, - fill_val=self.seg_ignore_label, - interpolation=self.interpolation) - self._filter_invalid(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'img_fill_val={self.img_fill_val}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label}, ' - repr_str += f'prob={self.prob}, ' - repr_str += f'direction={self.direction}, ' - repr_str += f'max_shear_magnitude={self.max_shear_magnitude}, ' - repr_str += f'random_negative_prob={self.random_negative_prob}, ' - repr_str += f'interpolation={self.interpolation})' - return repr_str - - -@PIPELINES.register_module() -class Rotate: - """Apply Rotate Transformation to image (and its corresponding bbox, mask, - segmentation). - - Args: - level (int | float): The level should be in range (0,_MAX_LEVEL]. - scale (int | float): Isotropic scale factor. Same in - ``mmcv.imrotate``. - center (int | float | tuple[float]): Center point (w, h) of the - rotation in the source image. If None, the center of the - image will be used. Same in ``mmcv.imrotate``. - img_fill_val (int | float | tuple): The fill value for image border. - If float, the same value will be used for all the three - channels of image. If tuple, the should be 3 elements (e.g. - equals the number of channels for image). - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - prob (float): The probability for perform transformation and - should be in range 0 to 1. - max_rotate_angle (int | float): The maximum angles for rotate - transformation. - random_negative_prob (float): The probability that turns the - offset negative. - """ - - def __init__(self, - level, - scale=1, - center=None, - img_fill_val=128, - seg_ignore_label=255, - prob=0.5, - max_rotate_angle=30, - random_negative_prob=0.5): - assert isinstance(level, (int, float)), \ - f'The level must be type int or float. got {type(level)}.' - assert 0 <= level <= _MAX_LEVEL, \ - f'The level should be in range (0,{_MAX_LEVEL}]. got {level}.' - assert isinstance(scale, (int, float)), \ - f'The scale must be type int or float. got type {type(scale)}.' 
- if isinstance(center, (int, float)): - center = (center, center) - elif isinstance(center, tuple): - assert len(center) == 2, 'center with type tuple must have '\ - f'2 elements. got {len(center)} elements.' - else: - assert center is None, 'center must be None or type int, '\ - f'float or tuple, got type {type(center)}.' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, 'img_fill_val as tuple must '\ - f'have 3 elements. got {len(img_fill_val)}.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError( - 'img_fill_val must be float or tuple with 3 elements.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), \ - 'all elements of img_fill_val should between range [0,255]. '\ - f'got {img_fill_val}.' - assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. '\ - f'got {prob}.' - assert isinstance(max_rotate_angle, (int, float)), 'max_rotate_angle '\ - f'should be type int or float. got type {type(max_rotate_angle)}.' - self.level = level - self.scale = scale - # Rotation angle in degrees. Positive values mean - # clockwise rotation. - self.angle = level_to_value(level, max_rotate_angle) - self.center = center - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.prob = prob - self.max_rotate_angle = max_rotate_angle - self.random_negative_prob = random_negative_prob - - def _rotate_img(self, results, angle, center=None, scale=1.0): - """Rotate the image. - - Args: - results (dict): Result dict from loading pipeline. - angle (float): Rotation angle in degrees, positive values - mean clockwise rotation. Same in ``mmcv.imrotate``. - center (tuple[float], optional): Center point (w, h) of the - rotation. Same in ``mmcv.imrotate``. - scale (int | float): Isotropic scale factor. Same in - ``mmcv.imrotate``. 
- """ - for key in results.get('img_fields', ['img']): - img = results[key].copy() - img_rotated = mmcv.imrotate( - img, angle, center, scale, border_value=self.img_fill_val) - results[key] = img_rotated.astype(img.dtype) - results['img_shape'] = results[key].shape - - def _rotate_bboxes(self, results, rotate_matrix): - """Rotate the bboxes.""" - h, w, c = results['img_shape'] - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - coordinates = np.stack([[min_x, min_y], [max_x, min_y], - [min_x, max_y], - [max_x, max_y]]) # [4, 2, nb_bbox, 1] - # pad 1 to convert from format [x, y] to homogeneous - # coordinates format [x, y, 1] - coordinates = np.concatenate( - (coordinates, - np.ones((4, 1, coordinates.shape[2], 1), coordinates.dtype)), - axis=1) # [4, 3, nb_bbox, 1] - coordinates = coordinates.transpose( - (2, 0, 1, 3)) # [nb_bbox, 4, 3, 1] - rotated_coords = np.matmul(rotate_matrix, - coordinates) # [nb_bbox, 4, 2, 1] - rotated_coords = rotated_coords[..., 0] # [nb_bbox, 4, 2] - min_x, min_y = np.min( - rotated_coords[:, :, 0], axis=1), np.min( - rotated_coords[:, :, 1], axis=1) - max_x, max_y = np.max( - rotated_coords[:, :, 0], axis=1), np.max( - rotated_coords[:, :, 1], axis=1) - min_x, min_y = np.clip( - min_x, a_min=0, a_max=w), np.clip( - min_y, a_min=0, a_max=h) - max_x, max_y = np.clip( - max_x, a_min=min_x, a_max=w), np.clip( - max_y, a_min=min_y, a_max=h) - results[key] = np.stack([min_x, min_y, max_x, max_y], - axis=-1).astype(results[key].dtype) - - def _rotate_masks(self, - results, - angle, - center=None, - scale=1.0, - fill_val=0): - """Rotate the masks.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.rotate((h, w), angle, center, scale, fill_val) - - def _rotate_seg(self, - results, - angle, - center=None, - scale=1.0, - fill_val=255): - """Rotate the segmentation map.""" - for key in results.get('seg_fields', []): - seg = results[key].copy() - results[key] = mmcv.imrotate( - seg, angle, center, scale, - border_value=fill_val).astype(seg.dtype) - - def _filter_invalid(self, results, min_bbox_size=0): - """Filter bboxes and corresponding masks too small after rotate - augmentation.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - - def __call__(self, results): - """Call function to rotate images, bounding boxes, masks and semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Rotated results. 
- """ - if np.random.rand() > self.prob: - return results - h, w = results['img'].shape[:2] - center = self.center - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - angle = random_negative(self.angle, self.random_negative_prob) - self._rotate_img(results, angle, center, self.scale) - rotate_matrix = cv2.getRotationMatrix2D(center, -angle, self.scale) - self._rotate_bboxes(results, rotate_matrix) - self._rotate_masks(results, angle, center, self.scale, fill_val=0) - self._rotate_seg( - results, angle, center, self.scale, fill_val=self.seg_ignore_label) - self._filter_invalid(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'scale={self.scale}, ' - repr_str += f'center={self.center}, ' - repr_str += f'img_fill_val={self.img_fill_val}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label}, ' - repr_str += f'prob={self.prob}, ' - repr_str += f'max_rotate_angle={self.max_rotate_angle}, ' - repr_str += f'random_negative_prob={self.random_negative_prob})' - return repr_str - - -@PIPELINES.register_module() -class Translate: - """Translate the images, bboxes, masks and segmentation maps horizontally - or vertically. - - Args: - level (int | float): The level for Translate and should be in - range [0,_MAX_LEVEL]. - prob (float): The probability for performing translation and - should be in range [0, 1]. - img_fill_val (int | float | tuple): The filled value for image - border. If float, the same fill value will be used for all - the three channels of image. If tuple, the should be 3 - elements (e.g. equals the number of channels for image). - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - direction (str): The translate direction, either "horizontal" - or "vertical". - max_translate_offset (int | float): The maximum pixel's offset for - Translate. - random_negative_prob (float): The probability that turns the - offset negative. - min_size (int | float): The minimum pixel for filtering - invalid bboxes after the translation. - """ - - def __init__(self, - level, - prob=0.5, - img_fill_val=128, - seg_ignore_label=255, - direction='horizontal', - max_translate_offset=250., - random_negative_prob=0.5, - min_size=0): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level used for calculating Translate\'s offset should be ' \ - 'in range [0,_MAX_LEVEL]' - assert 0 <= prob <= 1.0, \ - 'The probability of translation should be in range [0, 1].' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, \ - 'img_fill_val as tuple must have 3 elements.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError('img_fill_val must be type float or tuple.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), \ - 'all elements of img_fill_val should between range [0,255].' - assert direction in ('horizontal', 'vertical'), \ - 'direction should be "horizontal" or "vertical".' - assert isinstance(max_translate_offset, (int, float)), \ - 'The max_translate_offset must be type int or float.' 
- # the offset used for translation - self.offset = int(level_to_value(level, max_translate_offset)) - self.level = level - self.prob = prob - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.direction = direction - self.max_translate_offset = max_translate_offset - self.random_negative_prob = random_negative_prob - self.min_size = min_size - - def _translate_img(self, results, offset, direction='horizontal'): - """Translate the image. - - Args: - results (dict): Result dict from loading pipeline. - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - """ - for key in results.get('img_fields', ['img']): - img = results[key].copy() - results[key] = mmcv.imtranslate( - img, offset, direction, self.img_fill_val).astype(img.dtype) - results['img_shape'] = results[key].shape - - def _translate_bboxes(self, results, offset): - """Shift bboxes horizontally or vertically, according to offset.""" - h, w, c = results['img_shape'] - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - if self.direction == 'horizontal': - min_x = np.maximum(0, min_x + offset) - max_x = np.minimum(w, max_x + offset) - elif self.direction == 'vertical': - min_y = np.maximum(0, min_y + offset) - max_y = np.minimum(h, max_y + offset) - - # the boxes translated outside of image will be filtered along with - # the corresponding masks, by invoking ``_filter_invalid``. - results[key] = np.concatenate([min_x, min_y, max_x, max_y], - axis=-1) - - def _translate_masks(self, - results, - offset, - direction='horizontal', - fill_val=0): - """Translate masks horizontally or vertically.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.translate((h, w), offset, direction, fill_val) - - def _translate_seg(self, - results, - offset, - direction='horizontal', - fill_val=255): - """Translate segmentation maps horizontally or vertically.""" - for key in results.get('seg_fields', []): - seg = results[key].copy() - results[key] = mmcv.imtranslate(seg, offset, direction, - fill_val).astype(seg.dtype) - - def _filter_invalid(self, results, min_size=0): - """Filter bboxes and masks too small or translated out of image.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_size) & (bbox_h > min_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - return results - - def __call__(self, results): - """Call function to translate images, bounding boxes, masks and - semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Translated results. 
- """ - if np.random.rand() > self.prob: - return results - offset = random_negative(self.offset, self.random_negative_prob) - self._translate_img(results, offset, self.direction) - self._translate_bboxes(results, offset) - # fill_val defaultly 0 for BitmapMasks and None for PolygonMasks. - self._translate_masks(results, offset, self.direction) - # fill_val set to ``seg_ignore_label`` for the ignored value - # of segmentation map. - self._translate_seg( - results, offset, self.direction, fill_val=self.seg_ignore_label) - self._filter_invalid(results, min_size=self.min_size) - return results - - -@PIPELINES.register_module() -class ColorTransform: - """Apply Color transformation to image. The bboxes, masks, and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Color transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_color_img(self, results, factor=1.0): - """Apply Color transformation to image.""" - for key in results.get('img_fields', ['img']): - # NOTE defaultly the image should be BGR format - img = results[key] - results[key] = mmcv.adjust_color(img, factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Color transformation. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Colored results. - """ - if np.random.rand() > self.prob: - return results - self._adjust_color_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str - - -@PIPELINES.register_module() -class EqualizeTransform: - """Apply Equalize transformation to image. The bboxes, masks and - segmentations are not modified. - - Args: - prob (float): The probability for performing Equalize transformation. - """ - - def __init__(self, prob=0.5): - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.prob = prob - - def _imequalize(self, results): - """Equalizes the histogram of one image.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.imequalize(img).astype(img.dtype) - - def __call__(self, results): - """Call function for Equalize transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._imequalize(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(prob={self.prob})' - - -@PIPELINES.register_module() -class BrightnessTransform: - """Apply Brightness transformation to image. The bboxes, masks and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Brightness transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' 
- assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_brightness_img(self, results, factor=1.0): - """Adjust the brightness of image.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.adjust_brightness(img, - factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Brightness transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._adjust_brightness_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str - - -@PIPELINES.register_module() -class ContrastTransform: - """Apply Contrast transformation to image. The bboxes, masks and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Contrast transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_contrast_img(self, results, factor=1.0): - """Adjust the image contrast.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.adjust_contrast(img, factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Contrast transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._adjust_contrast_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/compose.py b/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/compose.py deleted file mode 100755 index d7592200..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/compose.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections - -from mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose: - """Compose multiple transforms sequentially. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. 
- """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - str_ = t.__repr__() - if 'Compose(' in str_: - str_ = str_.replace('\n', '\n ') - format_string += '\n' - format_string += f' {str_}' - format_string += '\n)' - return format_string diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/formating.py b/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/formating.py deleted file mode 100755 index 3b3e45ab..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/formating.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# flake8: noqa -import warnings - -from .formatting import * - -warnings.warn('DeprecationWarning: mmdet.datasets.pipelines.formating will be ' - 'deprecated, please replace it with ' - 'mmdet.datasets.pipelines.formatting.') diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/formatting.py b/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/formatting.py deleted file mode 100755 index 45ca69cf..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/formatting.py +++ /dev/null @@ -1,392 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Sequence - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC - -from ..builder import PIPELINES - - -def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. - - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, - :class:`Sequence`, :class:`int` and :class:`float`. - - Args: - data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to - be converted. - """ - - if isinstance(data, torch.Tensor): - return data - elif isinstance(data, np.ndarray): - return torch.from_numpy(data) - elif isinstance(data, Sequence) and not mmcv.is_str(data): - return torch.tensor(data) - elif isinstance(data, int): - return torch.LongTensor([data]) - elif isinstance(data, float): - return torch.FloatTensor([data]) - else: - raise TypeError(f'type {type(data)} cannot be converted to tensor.') - - -@PIPELINES.register_module() -class ToTensor: - """Convert some results to :obj:`torch.Tensor` by given keys. - - Args: - keys (Sequence[str]): Keys that need to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert data in results to :obj:`torch.Tensor`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted - to :obj:`torch.Tensor`. - """ - for key in self.keys: - results[key] = to_tensor(results[key]) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class ImageToTensor: - """Convert image to :obj:`torch.Tensor` by given keys. - - The dimension order of input image is (H, W, C). The pipeline will convert - it to (C, H, W). If only 2 dimension (H, W) is given, the output would be - (1, H, W). - - Args: - keys (Sequence[str]): Key of images to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. 
- - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - for key in self.keys: - img = results[key] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - results[key] = (to_tensor(img.transpose(2, 0, 1))).contiguous() - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class Transpose: - """Transpose some results by given keys. - - Args: - keys (Sequence[str]): Keys of results to be transposed. - order (Sequence[int]): Order of transpose. - """ - - def __init__(self, keys, order): - self.keys = keys - self.order = order - - def __call__(self, results): - """Call function to transpose the channel order of data in results. - - Args: - results (dict): Result dict contains the data to transpose. - - Returns: - dict: The result dict contains the data transposed to \ - ``self.order``. - """ - for key in self.keys: - results[key] = results[key].transpose(self.order) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, order={self.order})' - - -@PIPELINES.register_module() -class ToDataContainer: - """Convert results to :obj:`mmcv.DataContainer` by given fields. - - Args: - fields (Sequence[dict]): Each field is a dict like - ``dict(key='xxx', **kwargs)``. The ``key`` in result will - be converted to :obj:`mmcv.DataContainer` with ``**kwargs``. - Default: ``(dict(key='img', stack=True), dict(key='gt_bboxes'), - dict(key='gt_labels'))``. - """ - - def __init__(self, - fields=(dict(key='img', stack=True), dict(key='gt_bboxes'), - dict(key='gt_labels'))): - self.fields = fields - - def __call__(self, results): - """Call function to convert data in results to - :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted to \ - :obj:`mmcv.DataContainer`. - """ - - for field in self.fields: - field = field.copy() - key = field.pop('key') - results[key] = DC(results[key], **field) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(fields={self.fields})' - - -@PIPELINES.register_module() -class DefaultFormatBundle: - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img", - "proposals", "gt_bboxes", "gt_labels", "gt_masks" and "gt_semantic_seg". - These fields are formatted as follows. - - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - proposals: (1)to tensor, (2)to DataContainer - - gt_bboxes: (1)to tensor, (2)to DataContainer - - gt_bboxes_ignore: (1)to tensor, (2)to DataContainer - - gt_labels: (1)to tensor, (2)to DataContainer - - gt_masks: (1)to tensor, (2)to DataContainer (cpu_only=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, \ - (3)to DataContainer (stack=True) - - Args: - img_to_float (bool): Whether to force the image to be converted to - float type. Default: True. - pad_val (dict): A dict for padding value in batch collating, - the default value is `dict(img=0, masks=0, seg=255)`. - Without this argument, the padding value of "gt_semantic_seg" - will be set to 0 by default, which should be 255. - """ - - def __init__(self, - img_to_float=True, - pad_val=dict(img=0, masks=0, seg=255)): - self.img_to_float = img_to_float - self.pad_val = pad_val - - def __call__(self, results): - """Call function to transform and format common fields in results. 
- - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with \ - default bundle. - """ - - if 'img' in results: - img = results['img'] - if self.img_to_float is True and img.dtype == np.uint8: - # Normally, image is of uint8 type without normalization. - # At this time, it needs to be forced to be converted to - # flot32, otherwise the model training and inference - # will be wrong. Only used for YOLOX currently . - img = img.astype(np.float32) - # add default meta keys - results = self._add_default_meta_keys(results) - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - results['img'] = DC( - to_tensor(img), padding_value=self.pad_val['img'], stack=True) - for key in ['proposals', 'gt_bboxes', 'gt_bboxes_ignore', 'gt_labels']: - if key not in results: - continue - results[key] = DC(to_tensor(results[key])) - if 'gt_masks' in results: - results['gt_masks'] = DC( - results['gt_masks'], - padding_value=self.pad_val['masks'], - cpu_only=True) - if 'gt_semantic_seg' in results: - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, ...]), - padding_value=self.pad_val['seg'], - stack=True) - return results - - def _add_default_meta_keys(self, results): - """Add default meta keys. - - We set default meta keys including `pad_shape`, `scale_factor` and - `img_norm_cfg` to avoid the case where no `Resize`, `Normalize` and - `Pad` are implemented during the whole pipeline. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - results (dict): Updated result dict contains the data to convert. - """ - img = results['img'] - results.setdefault('pad_shape', img.shape) - results.setdefault('scale_factor', 1.0) - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results.setdefault( - 'img_norm_cfg', - dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False)) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(img_to_float={self.img_to_float})' - - -@PIPELINES.register_module() -class Collect: - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "proposals", "gt_bboxes", - "gt_bboxes_ignore", "gt_labels", and/or "gt_masks". - - The "img_meta" item is always populated. The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - "img_shape": shape of the image input to the network as a tuple \ - (h, w, c). Note that images may be zero padded on the \ - bottom/right if the batch tensor is larger than this shape. - - - "scale_factor": a float indicating the preprocessing scale - - - "flip": a boolean indicating if image flip transform was used - - - "filename": path to the image file - - - "ori_shape": original shape of the image as a tuple (h, w, c) - - - "pad_shape": image shape after padding - - - "img_norm_cfg": a dict of normalization information: - - - mean - per channel mean subtraction - - std - per channel std divisor - - to_rgb - bool indicating if bgr was converted to rgb - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. 
- Default: ``('filename', 'ori_filename', 'ori_shape', 'img_shape', - 'pad_shape', 'scale_factor', 'flip', 'flip_direction', - 'img_norm_cfg')`` - """ - - def __init__(self, - keys, - meta_keys=('filename', 'ori_filename', 'ori_shape', - 'img_shape', 'pad_shape', 'scale_factor', 'flip', - 'flip_direction', 'img_norm_cfg')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:mmcv.DataContainer. - - Args: - results (dict): Result dict contains the data to collect. - - Returns: - dict: The result dict contains the following keys - - - keys in``self.keys`` - - ``img_metas`` - """ - - data = {} - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data['img_metas'] = DC(img_meta, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' - - -@PIPELINES.register_module() -class WrapFieldsToLists: - """Wrap fields of the data dictionary into lists for evaluation. - - This class can be used as a last step of a test or validation - pipeline for single image evaluation or inference. - - Example: - >>> test_pipeline = [ - >>> dict(type='LoadImageFromFile'), - >>> dict(type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - >>> dict(type='Pad', size_divisor=32), - >>> dict(type='ImageToTensor', keys=['img']), - >>> dict(type='Collect', keys=['img']), - >>> dict(type='WrapFieldsToLists') - >>> ] - """ - - def __call__(self, results): - """Call function to wrap fields into lists. - - Args: - results (dict): Result dict contains the data to wrap. - - Returns: - dict: The result dict where value of ``self.keys`` are wrapped \ - into list. - """ - - # Wrap dict fields into lists - for key, val in results.items(): - results[key] = [val] - return results - - def __repr__(self): - return f'{self.__class__.__name__}()' diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/instaboost.py b/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/instaboost.py deleted file mode 100755 index ca10c4c7..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/instaboost.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class InstaBoost: - r"""Data augmentation method in `InstaBoost: Boosting Instance - Segmentation Via Probability Map Guided Copy-Pasting - `_. - - Refer to https://github.com/GothicAi/Instaboost for implementation details. - - Args: - action_candidate (tuple): Action candidates. "normal", "horizontal", \ - "vertical", "skip" are supported. Default: ('normal', \ - 'horizontal', 'skip'). - action_prob (tuple): Corresponding action probabilities. Should be \ - the same length as action_candidate. Default: (1, 0, 0). - scale (tuple): (min scale, max scale). Default: (0.8, 1.2). - dx (int): The maximum x-axis shift will be (instance width) / dx. - Default 15. - dy (int): The maximum y-axis shift will be (instance height) / dy. - Default 15. - theta (tuple): (min rotation degree, max rotation degree). \ - Default: (-1, 1). - color_prob (float): Probability of images for color augmentation. - Default 0.5. - heatmap_flag (bool): Whether to use heatmap guided. Default False. - aug_ratio (float): Probability of applying this transformation. 
\ - Default 0.5. - """ - - def __init__(self, - action_candidate=('normal', 'horizontal', 'skip'), - action_prob=(1, 0, 0), - scale=(0.8, 1.2), - dx=15, - dy=15, - theta=(-1, 1), - color_prob=0.5, - hflag=False, - aug_ratio=0.5): - try: - import instaboostfast as instaboost - except ImportError: - raise ImportError( - 'Please run "pip install instaboostfast" ' - 'to install instaboostfast first for instaboost augmentation.') - self.cfg = instaboost.InstaBoostConfig(action_candidate, action_prob, - scale, dx, dy, theta, - color_prob, hflag) - self.aug_ratio = aug_ratio - - def _load_anns(self, results): - labels = results['ann_info']['labels'] - masks = results['ann_info']['masks'] - bboxes = results['ann_info']['bboxes'] - n = len(labels) - - anns = [] - for i in range(n): - label = labels[i] - bbox = bboxes[i] - mask = masks[i] - x1, y1, x2, y2 = bbox - # assert (x2 - x1) >= 1 and (y2 - y1) >= 1 - bbox = [x1, y1, x2 - x1, y2 - y1] - anns.append({ - 'category_id': label, - 'segmentation': mask, - 'bbox': bbox - }) - - return anns - - def _parse_anns(self, results, anns, img): - gt_bboxes = [] - gt_labels = [] - gt_masks_ann = [] - for ann in anns: - x1, y1, w, h = ann['bbox'] - # TODO: more essential bug need to be fixed in instaboost - if w <= 0 or h <= 0: - continue - bbox = [x1, y1, x1 + w, y1 + h] - gt_bboxes.append(bbox) - gt_labels.append(ann['category_id']) - gt_masks_ann.append(ann['segmentation']) - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - results['ann_info']['labels'] = gt_labels - results['ann_info']['bboxes'] = gt_bboxes - results['ann_info']['masks'] = gt_masks_ann - results['img'] = img - return results - - def __call__(self, results): - img = results['img'] - ori_type = img.dtype - anns = self._load_anns(results) - if np.random.choice([0, 1], p=[1 - self.aug_ratio, self.aug_ratio]): - try: - import instaboostfast as instaboost - except ImportError: - raise ImportError('Please run "pip install instaboostfast" ' - 'to install instaboostfast first.') - anns, img = instaboost.get_new_data( - anns, img.astype(np.uint8), self.cfg, background=None) - - results = self._parse_anns(results, anns, img.astype(ori_type)) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(cfg={self.cfg}, aug_ratio={self.aug_ratio})' - return repr_str diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/loading.py b/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/loading.py deleted file mode 100755 index c1d62623..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/loading.py +++ /dev/null @@ -1,643 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils - -# from mmdet.core import BitmapMasks, PolygonMasks -from ..builder import PIPELINES - -try: - from panopticapi.utils import rgb2id -except ImportError: - rgb2id = None - - -@PIPELINES.register_module() -class LoadImageFromFile: - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. 
- color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - to_float32=False, - color_type='color', - channel_order='bgr', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.channel_order = channel_order - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, channel_order=self.channel_order) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f"channel_order='{self.channel_order}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadImageFromWebcam(LoadImageFromFile): - """Load an image from webcam. - - Similar with :obj:`LoadImageFromFile`, but the image read from webcam is in - ``results['img']``. - """ - - def __call__(self, results): - """Call functions to add image meta information. - - Args: - results (dict): Result dict with Webcam read image in - ``results['img']``. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - img = results['img'] - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = None - results['ori_filename'] = None - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results - - -@PIPELINES.register_module() -class LoadMultiChannelImageFromFiles: - """Load multi-channel images from a list of separate channel files. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename", which is expected to be a list of filenames). - Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. 
- """ - - def __init__(self, - to_float32=False, - color_type='unchanged', - file_client_args=dict(backend='disk')): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - - def __call__(self, results): - """Call functions to load multiple images and get images meta - information. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded images and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results['img_prefix'] is not None: - filename = [ - osp.join(results['img_prefix'], fname) - for fname in results['img_info']['filename'] - ] - else: - filename = results['img_info']['filename'] - - img = [] - for name in filename: - img_bytes = self.file_client.get(name) - img.append(mmcv.imfrombytes(img_bytes, flag=self.color_type)) - img = np.stack(img, axis=-1) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = (f'{self.__class__.__name__}(' - f'to_float32={self.to_float32}, ' - f"color_type='{self.color_type}', " - f'file_client_args={self.file_client_args})') - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations: - """Load multiple types of annotations. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. - with_mask (bool): Whether to parse and load the mask annotation. - Default: False. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: False. - poly2mask (bool): Whether to convert the instance masks from polygons - to bitmaps. Default: True. - denorm_bbox (bool): Whether to convert bbox from relative value to - absolute value. Only used in OpenImage Dataset. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=False, - with_seg=False, - poly2mask=True, - denorm_bbox=False, - file_client_args=dict(backend='disk')): - self.with_bbox = with_bbox - self.with_label = with_label - self.with_mask = with_mask - self.with_seg = with_seg - self.poly2mask = poly2mask - self.denorm_bbox = denorm_bbox - self.file_client_args = file_client_args.copy() - self.file_client = None - - def _load_bboxes(self, results): - """Private function to load bounding box annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box annotations. 
- """ - - ann_info = results['ann_info'] - results['gt_bboxes'] = ann_info['bboxes'].copy() - - if self.denorm_bbox: - bbox_num = results['gt_bboxes'].shape[0] - if bbox_num != 0: - h, w = results['img_shape'][:2] - results['gt_bboxes'][:, 0::2] *= w - results['gt_bboxes'][:, 1::2] *= h - - gt_bboxes_ignore = ann_info.get('bboxes_ignore', None) - if gt_bboxes_ignore is not None: - results['gt_bboxes_ignore'] = gt_bboxes_ignore.copy() - results['bbox_fields'].append('gt_bboxes_ignore') - results['bbox_fields'].append('gt_bboxes') - - gt_is_group_ofs = ann_info.get('gt_is_group_ofs', None) - if gt_is_group_ofs is not None: - results['gt_is_group_ofs'] = gt_is_group_ofs.copy() - - return results - - def _load_labels(self, results): - """Private function to load label annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded label annotations. - """ - - results['gt_labels'] = results['ann_info']['labels'].copy() - return results - - def _poly2mask(self, mask_ann, img_h, img_w): - """Private function to convert masks represented with polygon to - bitmaps. - - Args: - mask_ann (list | dict): Polygon mask annotation input. - img_h (int): The height of output mask. - img_w (int): The width of output mask. - - Returns: - numpy.ndarray: The decode bitmap mask of shape (img_h, img_w). - """ - - if isinstance(mask_ann, list): - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = maskUtils.frPyObjects(mask_ann, img_h, img_w) - rle = maskUtils.merge(rles) - elif isinstance(mask_ann['counts'], list): - # uncompressed RLE - rle = maskUtils.frPyObjects(mask_ann, img_h, img_w) - else: - # rle - rle = mask_ann - mask = maskUtils.decode(rle) - return mask - - def process_polygons(self, polygons): - """Convert polygons to list of ndarray and filter invalid polygons. - - Args: - polygons (list[list]): Polygons of one instance. - - Returns: - list[numpy.ndarray]: Processed polygons. - """ - - polygons = [np.array(p) for p in polygons] - valid_polygons = [] - for polygon in polygons: - if len(polygon) % 2 == 0 and len(polygon) >= 6: - valid_polygons.append(polygon) - return valid_polygons - - def _load_masks(self, results): - """Private function to load mask annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded mask annotations. - If ``self.poly2mask`` is set ``True``, `gt_mask` will contain - :obj:`PolygonMasks`. Otherwise, :obj:`BitmapMasks` is used. - """ - - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = results['ann_info']['masks'] - if self.poly2mask: - gt_masks = BitmapMasks( - [self._poly2mask(mask, h, w) for mask in gt_masks], h, w) - else: - gt_masks = PolygonMasks( - [self.process_polygons(polygons) for polygons in gt_masks], h, - w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - return results - - def _load_semantic_seg(self, results): - """Private function to load semantic segmentation annotations. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - img_bytes = self.file_client.get(filename) - results['gt_semantic_seg'] = mmcv.imfrombytes( - img_bytes, flag='unchanged').squeeze() - results['seg_fields'].append('gt_semantic_seg') - return results - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box, label, mask and - semantic segmentation annotations. - """ - - if self.with_bbox: - results = self._load_bboxes(results) - if results is None: - return None - if self.with_label: - results = self._load_labels(results) - if self.with_mask: - results = self._load_masks(results) - if self.with_seg: - results = self._load_semantic_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(with_bbox={self.with_bbox}, ' - repr_str += f'with_label={self.with_label}, ' - repr_str += f'with_mask={self.with_mask}, ' - repr_str += f'with_seg={self.with_seg}, ' - repr_str += f'poly2mask={self.poly2mask}, ' - repr_str += f'poly2mask={self.file_client_args})' - return repr_str - - -@PIPELINES.register_module() -class LoadPanopticAnnotations(LoadAnnotations): - """Load multiple types of panoptic annotations. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. - with_mask (bool): Whether to parse and load the mask annotation. - Default: True. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: True. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=True, - with_seg=True, - file_client_args=dict(backend='disk')): - if rgb2id is None: - raise RuntimeError( - 'panopticapi is not installed, please install it by: ' - 'pip install git+https://github.com/cocodataset/' - 'panopticapi.git.') - - super(LoadPanopticAnnotations, self).__init__( - with_bbox=with_bbox, - with_label=with_label, - with_mask=with_mask, - with_seg=with_seg, - poly2mask=True, - denorm_bbox=False, - file_client_args=file_client_args) - - def _load_masks_and_semantic_segs(self, results): - """Private function to load mask and semantic segmentation annotations. - - In gt_semantic_seg, the foreground label is from `0` to - `num_things - 1`, the background label is from `num_things` to - `num_things + num_stuff - 1`, 255 means the ignored label (`VOID`). - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded mask and semantic segmentation - annotations. `BitmapMasks` is used for mask annotations. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - img_bytes = self.file_client.get(filename) - pan_png = mmcv.imfrombytes( - img_bytes, flag='color', channel_order='rgb').squeeze() - pan_png = rgb2id(pan_png) - - gt_masks = [] - gt_seg = np.zeros_like(pan_png) + 255 # 255 as ignore - - for mask_info in results['ann_info']['masks']: - mask = (pan_png == mask_info['id']) - gt_seg = np.where(mask, mask_info['category'], gt_seg) - - # The legal thing masks - if mask_info.get('is_thing'): - gt_masks.append(mask.astype(np.uint8)) - - if self.with_mask: - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = BitmapMasks(gt_masks, h, w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - - if self.with_seg: - results['gt_semantic_seg'] = gt_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __call__(self, results): - """Call function to load multiple types panoptic annotations. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded bounding box, label, mask and - semantic segmentation annotations. - """ - - if self.with_bbox: - results = self._load_bboxes(results) - if results is None: - return None - if self.with_label: - results = self._load_labels(results) - if self.with_mask or self.with_seg: - # The tasks completed by '_load_masks' and '_load_semantic_segs' - # in LoadAnnotations are merged to one function. - results = self._load_masks_and_semantic_segs(results) - - return results - - -@PIPELINES.register_module() -class LoadProposals: - """Load proposal pipeline. - - Required key is "proposals". Updated keys are "proposals", "bbox_fields". - - Args: - num_max_proposals (int, optional): Maximum number of proposals to load. - If not specified, all proposals will be loaded. - """ - - def __init__(self, num_max_proposals=None): - self.num_max_proposals = num_max_proposals - - def __call__(self, results): - """Call function to load proposals from file. - - Args: - results (dict): Result dict from :obj:`mmdet.CustomDataset`. - - Returns: - dict: The dict contains loaded proposal annotations. - """ - - proposals = results['proposals'] - if proposals.shape[1] not in (4, 5): - raise AssertionError( - 'proposals should have shapes (n, 4) or (n, 5), ' - f'but found {proposals.shape}') - proposals = proposals[:, :4] - - if self.num_max_proposals is not None: - proposals = proposals[:self.num_max_proposals] - - if len(proposals) == 0: - proposals = np.array([[0, 0, 0, 0]], dtype=np.float32) - results['proposals'] = proposals - results['bbox_fields'].append('proposals') - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(num_max_proposals={self.num_max_proposals})' - - -@PIPELINES.register_module() -class FilterAnnotations: - """Filter invalid annotations. - - Args: - min_gt_bbox_wh (tuple[float]): Minimum width and height of ground truth - boxes. Default: (1., 1.) - min_gt_mask_area (int): Minimum foreground area of ground truth masks. - Default: 1 - by_box (bool): Filter instances with bounding boxes not meeting the - min_gt_bbox_wh threshold. Default: True - by_mask (bool): Filter instances with masks not meeting - min_gt_mask_area threshold. Default: False - keep_empty (bool): Whether to return None when it - becomes an empty bbox after filtering. 
Default: True - """ - - def __init__(self, - min_gt_bbox_wh=(1., 1.), - min_gt_mask_area=1, - by_box=True, - by_mask=False, - keep_empty=True): - # TODO: add more filter options - assert by_box or by_mask - self.min_gt_bbox_wh = min_gt_bbox_wh - self.min_gt_mask_area = min_gt_mask_area - self.by_box = by_box - self.by_mask = by_mask - self.keep_empty = keep_empty - - def __call__(self, results): - if self.by_box: - assert 'gt_bboxes' in results - gt_bboxes = results['gt_bboxes'] - instance_num = gt_bboxes.shape[0] - if self.by_mask: - assert 'gt_masks' in results - gt_masks = results['gt_masks'] - instance_num = len(gt_masks) - - if instance_num == 0: - return results - - tests = [] - if self.by_box: - w = gt_bboxes[:, 2] - gt_bboxes[:, 0] - h = gt_bboxes[:, 3] - gt_bboxes[:, 1] - tests.append((w > self.min_gt_bbox_wh[0]) - & (h > self.min_gt_bbox_wh[1])) - if self.by_mask: - gt_masks = results['gt_masks'] - tests.append(gt_masks.areas >= self.min_gt_mask_area) - - keep = tests[0] - for t in tests[1:]: - keep = keep & t - - keys = ('gt_bboxes', 'gt_labels', 'gt_masks') - for key in keys: - if key in results: - results[key] = results[key][keep] - if not keep.any(): - if self.keep_empty: - return None - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(min_gt_bbox_wh={self.min_gt_bbox_wh},' \ - f'(min_gt_mask_area={self.min_gt_mask_area},' \ - f'(by_box={self.by_box},' \ - f'(by_mask={self.by_mask},' \ - f'always_keep={self.always_keep})' diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/test_time_aug.py b/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/test_time_aug.py deleted file mode 100755 index 5f1ab7b7..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug: - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. code-block:: - - img_scale=[(1333, 400), (1333, 800)], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple] | None): Images scales for resizing. - scale_factor (float | list[float] | None): Scale factors for resizing. - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal", "vertical" and "diagonal". If - flip_direction is a list, multiple flip augmentations will be - applied. It has no effect when flip == False. Default: - "horizontal". 
- """ - - def __init__(self, - transforms, - img_scale=None, - scale_factor=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - assert (img_scale is None) ^ (scale_factor is None), ( - 'Must have but only one variable can be set') - if img_scale is not None: - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.scale_key = 'scale' - assert mmcv.is_list_of(self.img_scale, tuple) - else: - self.img_scale = scale_factor if isinstance( - scale_factor, list) else [scale_factor] - self.scale_key = 'scale_factor' - - self.flip = flip - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - flip_args = [(False, None)] - if self.flip: - flip_args += [(True, direction) - for direction in self.flip_direction] - for scale in self.img_scale: - for flip, direction in flip_args: - _results = results.copy() - _results[self.scale_key] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/transforms.py b/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/transforms.py deleted file mode 100755 index 39b6bc03..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/pipelines/transforms.py +++ /dev/null @@ -1,2919 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import inspect -import math -import warnings - -import cv2 -import mmcv -import numpy as np -from numpy import random - -# from mmdet.core import BitmapMasks, PolygonMasks, find_inside_bboxes -from mmdet.core.evaluation.bbox_overlaps import bbox_overlaps -from mmdet.utils import log_img_scale -from ..builder import PIPELINES - -try: - from imagecorruptions import corrupt -except ImportError: - corrupt = None - -try: - import albumentations - from albumentations import Compose -except ImportError: - albumentations = None - Compose = None - - -@PIPELINES.register_module() -class Resize: - """Resize images & bbox & mask. - - This transform resizes the input image to some scale. Bboxes and masks are - then resized with the same scale factor. If the input dict contains the key - "scale", then the scale in the input dict is used, otherwise the specified - scale in the init method is used. 
If the input dict contains the key - "scale_factor" (if MultiScaleFlipAug does not give img_scale but - scale_factor), the actual scale will be computed by image shape and - scale_factor. - - `img_scale` can either be a tuple (single-scale) or a list of tuple - (multi-scale). There are 3 multiscale modes: - - - ``ratio_range is not None``: randomly sample a ratio from the ratio \ - range and multiply it with the image scale. - - ``ratio_range is None`` and ``multiscale_mode == "range"``: randomly \ - sample a scale from the multiscale range. - - ``ratio_range is None`` and ``multiscale_mode == "value"``: randomly \ - sample a scale from multiple scales. - - Args: - img_scale (tuple or list[tuple]): Images scales for resizing. - multiscale_mode (str): Either "range" or "value". - ratio_range (tuple[float]): (min_ratio, max_ratio) - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - backend (str): Image resize backend, choices are 'cv2' and 'pillow'. - These two backends generates slightly different results. Defaults - to 'cv2'. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - override (bool, optional): Whether to override `scale` and - `scale_factor` so as to call resize twice. Default False. If True, - after the first resizing, the existed `scale` and `scale_factor` - will be ignored so the second resizing can be allowed. - This option is a work-around for multiple times of resize in DETR. - Defaults to False. - """ - - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=True, - bbox_clip_border=True, - backend='cv2', - interpolation='bilinear', - override=False): - if img_scale is None: - self.img_scale = None - else: - if isinstance(img_scale, list): - self.img_scale = img_scale - else: - self.img_scale = [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) - - if ratio_range is not None: - # mode 1: given a scale and a range of image ratio - assert len(self.img_scale) == 1 - else: - # mode 2: given multiple scales or a range of scales - assert multiscale_mode in ['value', 'range'] - - self.backend = backend - self.multiscale_mode = multiscale_mode - self.ratio_range = ratio_range - self.keep_ratio = keep_ratio - # TODO: refactor the override option in Resize - self.interpolation = interpolation - self.override = override - self.bbox_clip_border = bbox_clip_border - - @staticmethod - def random_select(img_scales): - """Randomly select an img_scale from given candidates. - - Args: - img_scales (list[tuple]): Images scales for selection. - - Returns: - (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, \ - where ``img_scale`` is the selected image scale and \ - ``scale_idx`` is the selected index in the given candidates. - """ - - assert mmcv.is_list_of(img_scales, tuple) - scale_idx = np.random.randint(len(img_scales)) - img_scale = img_scales[scale_idx] - return img_scale, scale_idx - - @staticmethod - def random_sample(img_scales): - """Randomly sample an img_scale when ``multiscale_mode=='range'``. - - Args: - img_scales (list[tuple]): Images scale range for sampling. 
- There must be two tuples in img_scales, which specify the lower - and upper bound of image scales. - - Returns: - (tuple, None): Returns a tuple ``(img_scale, None)``, where \ - ``img_scale`` is sampled scale and None is just a placeholder \ - to be consistent with :func:`random_select`. - """ - - assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 - img_scale_long = [max(s) for s in img_scales] - img_scale_short = [min(s) for s in img_scales] - long_edge = np.random.randint( - min(img_scale_long), - max(img_scale_long) + 1) - short_edge = np.random.randint( - min(img_scale_short), - max(img_scale_short) + 1) - img_scale = (long_edge, short_edge) - return img_scale, None - - @staticmethod - def random_sample_ratio(img_scale, ratio_range): - """Randomly sample an img_scale when ``ratio_range`` is specified. - - A ratio will be randomly sampled from the range specified by - ``ratio_range``. Then it would be multiplied with ``img_scale`` to - generate sampled scale. - - Args: - img_scale (tuple): Images scale base to multiply with ratio. - ratio_range (tuple[float]): The minimum and maximum ratio to scale - the ``img_scale``. - - Returns: - (tuple, None): Returns a tuple ``(scale, None)``, where \ - ``scale`` is sampled ratio multiplied with ``img_scale`` and \ - None is just a placeholder to be consistent with \ - :func:`random_select`. - """ - - assert isinstance(img_scale, tuple) and len(img_scale) == 2 - min_ratio, max_ratio = ratio_range - assert min_ratio <= max_ratio - ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio - scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) - return scale, None - - def _random_scale(self, results): - """Randomly sample an img_scale according to ``ratio_range`` and - ``multiscale_mode``. - - If ``ratio_range`` is specified, a ratio will be sampled and be - multiplied with ``img_scale``. - If multiple scales are specified by ``img_scale``, a scale will be - sampled according to ``multiscale_mode``. - Otherwise, single scale will be used. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: Two new keys 'scale` and 'scale_idx` are added into \ - ``results``, which would be used by subsequent pipelines. 
- """ - - if self.ratio_range is not None: - scale, scale_idx = self.random_sample_ratio( - self.img_scale[0], self.ratio_range) - elif len(self.img_scale) == 1: - scale, scale_idx = self.img_scale[0], 0 - elif self.multiscale_mode == 'range': - scale, scale_idx = self.random_sample(self.img_scale) - elif self.multiscale_mode == 'value': - scale, scale_idx = self.random_select(self.img_scale) - else: - raise NotImplementedError - - results['scale'] = scale - results['scale_idx'] = scale_idx - - def _resize_img(self, results): - """Resize images with ``results['scale']``.""" - for key in results.get('img_fields', ['img']): - if self.keep_ratio: - img, scale_factor = mmcv.imrescale( - results[key], - results['scale'], - return_scale=True, - interpolation=self.interpolation, - backend=self.backend) - # the w_scale and h_scale has minor difference - # a real fix should be done in the mmcv.imrescale in the future - new_h, new_w = img.shape[:2] - h, w = results[key].shape[:2] - w_scale = new_w / w - h_scale = new_h / h - else: - img, w_scale, h_scale = mmcv.imresize( - results[key], - results['scale'], - return_scale=True, - interpolation=self.interpolation, - backend=self.backend) - results[key] = img - - scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], - dtype=np.float32) - results['img_shape'] = img.shape - # in case that there is no padding - results['pad_shape'] = img.shape - results['scale_factor'] = scale_factor - results['keep_ratio'] = self.keep_ratio - - def _resize_bboxes(self, results): - """Resize bounding boxes with ``results['scale_factor']``.""" - for key in results.get('bbox_fields', []): - bboxes = results[key] * results['scale_factor'] - if self.bbox_clip_border: - img_shape = results['img_shape'] - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - results[key] = bboxes - - def _resize_masks(self, results): - """Resize masks with ``results['scale']``""" - for key in results.get('mask_fields', []): - if results[key] is None: - continue - if self.keep_ratio: - results[key] = results[key].rescale(results['scale']) - else: - results[key] = results[key].resize(results['img_shape'][:2]) - - def _resize_seg(self, results): - """Resize semantic segmentation map with ``results['scale']``.""" - for key in results.get('seg_fields', []): - if self.keep_ratio: - gt_seg = mmcv.imrescale( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - else: - gt_seg = mmcv.imresize( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - results[key] = gt_seg - - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \ - 'keep_ratio' keys are added into result dict. 
- """ - - if 'scale' not in results: - if 'scale_factor' in results: - img_shape = results['img'].shape[:2] - scale_factor = results['scale_factor'] - assert isinstance(scale_factor, float) - results['scale'] = tuple( - [int(x * scale_factor) for x in img_shape][::-1]) - else: - self._random_scale(results) - else: - if not self.override: - assert 'scale_factor' not in results, ( - 'scale and scale_factor cannot be both set.') - else: - results.pop('scale') - if 'scale_factor' in results: - results.pop('scale_factor') - self._random_scale(results) - - self._resize_img(results) - self._resize_bboxes(results) - self._resize_masks(results) - self._resize_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(img_scale={self.img_scale}, ' - repr_str += f'multiscale_mode={self.multiscale_mode}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'keep_ratio={self.keep_ratio}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class RandomFlip: - """Flip the image & bbox & mask. - - If the input dict contains the key "flip", then the flag will be used, - otherwise it will be randomly decided by a ratio specified in the init - method. - - When random flip is enabled, ``flip_ratio``/``direction`` can either be a - float/string or tuple of float/string. There are 3 flip modes: - - - ``flip_ratio`` is float, ``direction`` is string: the image will be - ``direction``ly flipped with probability of ``flip_ratio`` . - E.g., ``flip_ratio=0.5``, ``direction='horizontal'``, - then image will be horizontally flipped with probability of 0.5. - - ``flip_ratio`` is float, ``direction`` is list of string: the image will - be ``direction[i]``ly flipped with probability of - ``flip_ratio/len(direction)``. - E.g., ``flip_ratio=0.5``, ``direction=['horizontal', 'vertical']``, - then image will be horizontally flipped with probability of 0.25, - vertically with probability of 0.25. - - ``flip_ratio`` is list of float, ``direction`` is list of string: - given ``len(flip_ratio) == len(direction)``, the image will - be ``direction[i]``ly flipped with probability of ``flip_ratio[i]``. - E.g., ``flip_ratio=[0.3, 0.5]``, ``direction=['horizontal', - 'vertical']``, then image will be horizontally flipped with probability - of 0.3, vertically with probability of 0.5. - - Args: - flip_ratio (float | list[float], optional): The flipping probability. - Default: None. - direction(str | list[str], optional): The flipping direction. Options - are 'horizontal', 'vertical', 'diagonal'. Default: 'horizontal'. - If input is a list, the length must equal ``flip_ratio``. Each - element in ``flip_ratio`` indicates the flip probability of - corresponding direction. 
- """ - - def __init__(self, flip_ratio=None, direction='horizontal'): - if isinstance(flip_ratio, list): - assert mmcv.is_list_of(flip_ratio, float) - assert 0 <= sum(flip_ratio) <= 1 - elif isinstance(flip_ratio, float): - assert 0 <= flip_ratio <= 1 - elif flip_ratio is None: - pass - else: - raise ValueError('flip_ratios must be None, float, ' - 'or list of float') - self.flip_ratio = flip_ratio - - valid_directions = ['horizontal', 'vertical', 'diagonal'] - if isinstance(direction, str): - assert direction in valid_directions - elif isinstance(direction, list): - assert mmcv.is_list_of(direction, str) - assert set(direction).issubset(set(valid_directions)) - else: - raise ValueError('direction must be either str or list of str') - self.direction = direction - - if isinstance(flip_ratio, list): - assert len(self.flip_ratio) == len(self.direction) - - def bbox_flip(self, bboxes, img_shape, direction): - """Flip bboxes horizontally. - - Args: - bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k) - img_shape (tuple[int]): Image shape (height, width) - direction (str): Flip direction. Options are 'horizontal', - 'vertical'. - - Returns: - numpy.ndarray: Flipped bounding boxes. - """ - - assert bboxes.shape[-1] % 4 == 0 - flipped = bboxes.copy() - if direction == 'horizontal': - w = img_shape[1] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - elif direction == 'vertical': - h = img_shape[0] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - elif direction == 'diagonal': - w = img_shape[1] - h = img_shape[0] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - else: - raise ValueError(f"Invalid flipping direction '{direction}'") - return flipped - - def __call__(self, results): - """Call function to flip bounding boxes, masks, semantic segmentation - maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Flipped results, 'flip', 'flip_direction' keys are added \ - into result dict. 
- """ - - if 'flip' not in results: - if isinstance(self.direction, list): - # None means non-flip - direction_list = self.direction + [None] - else: - # None means non-flip - direction_list = [self.direction, None] - - if isinstance(self.flip_ratio, list): - non_flip_ratio = 1 - sum(self.flip_ratio) - flip_ratio_list = self.flip_ratio + [non_flip_ratio] - else: - non_flip_ratio = 1 - self.flip_ratio - # exclude non-flip - single_ratio = self.flip_ratio / (len(direction_list) - 1) - flip_ratio_list = [single_ratio] * (len(direction_list) - - 1) + [non_flip_ratio] - - cur_dir = np.random.choice(direction_list, p=flip_ratio_list) - - results['flip'] = cur_dir is not None - if 'flip_direction' not in results: - results['flip_direction'] = cur_dir - if results['flip']: - # flip image - for key in results.get('img_fields', ['img']): - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']) - # flip bboxes - for key in results.get('bbox_fields', []): - results[key] = self.bbox_flip(results[key], - results['img_shape'], - results['flip_direction']) - # flip masks - for key in results.get('mask_fields', []): - results[key] = results[key].flip(results['flip_direction']) - - # flip segs - for key in results.get('seg_fields', []): - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' - - -@PIPELINES.register_module() -class RandomShift: - """Shift the image and box given shift pixels and probability. - - Args: - shift_ratio (float): Probability of shifts. Default 0.5. - max_shift_px (int): The max pixels for shifting. Default 32. - filter_thr_px (int): The width and height threshold for filtering. - The bbox and the rest of the targets below the width and - height threshold will be filtered. Default 1. - """ - - def __init__(self, shift_ratio=0.5, max_shift_px=32, filter_thr_px=1): - assert 0 <= shift_ratio <= 1 - assert max_shift_px >= 0 - self.shift_ratio = shift_ratio - self.max_shift_px = max_shift_px - self.filter_thr_px = int(filter_thr_px) - # The key correspondence from bboxes to labels. - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - - def __call__(self, results): - """Call function to random shift images, bounding boxes. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Shift results. - """ - if random.random() < self.shift_ratio: - img_shape = results['img'].shape[:2] - - random_shift_x = random.randint(-self.max_shift_px, - self.max_shift_px) - random_shift_y = random.randint(-self.max_shift_px, - self.max_shift_px) - new_x = max(0, random_shift_x) - ori_x = max(0, -random_shift_x) - new_y = max(0, random_shift_y) - ori_y = max(0, -random_shift_y) - - # TODO: support mask and semantic segmentation maps. - for key in results.get('bbox_fields', []): - bboxes = results[key].copy() - bboxes[..., 0::2] += random_shift_x - bboxes[..., 1::2] += random_shift_y - - # clip border - bboxes[..., 0::2] = np.clip(bboxes[..., 0::2], 0, img_shape[1]) - bboxes[..., 1::2] = np.clip(bboxes[..., 1::2], 0, img_shape[0]) - - # remove invalid bboxes - bbox_w = bboxes[..., 2] - bboxes[..., 0] - bbox_h = bboxes[..., 3] - bboxes[..., 1] - valid_inds = (bbox_w > self.filter_thr_px) & ( - bbox_h > self.filter_thr_px) - # If the shift does not contain any gt-bbox area, skip this - # image. 
- if key == 'gt_bboxes' and not valid_inds.any(): - return results - bboxes = bboxes[valid_inds] - results[key] = bboxes - - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - - for key in results.get('img_fields', ['img']): - img = results[key] - new_img = np.zeros_like(img) - img_h, img_w = img.shape[:2] - new_h = img_h - np.abs(random_shift_y) - new_w = img_w - np.abs(random_shift_x) - new_img[new_y:new_y + new_h, new_x:new_x + new_w] \ - = img[ori_y:ori_y + new_h, ori_x:ori_x + new_w] - results[key] = new_img - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(max_shift_px={self.max_shift_px}, ' - return repr_str - - -@PIPELINES.register_module() -class Pad: - """Pad the image & masks & segmentation map. - - There are two padding modes: (1) pad to a fixed size and (2) pad to the - minimum size that is divisible by some number. - Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor", - - Args: - size (tuple, optional): Fixed padding size. - size_divisor (int, optional): The divisor of padded size. - pad_to_square (bool): Whether to pad the image into a square. - Currently only used for YOLOX. Default: False. - pad_val (dict, optional): A dict for padding value, the default - value is `dict(img=0, masks=0, seg=255)`. - """ - - def __init__(self, - size=None, - size_divisor=None, - pad_to_square=False, - pad_val=dict(img=0, masks=0, seg=255)): - self.size = size - self.size_divisor = size_divisor - if isinstance(pad_val, float) or isinstance(pad_val, int): - warnings.warn( - 'pad_val of float type is deprecated now, ' - f'please use pad_val=dict(img={pad_val}, ' - f'masks={pad_val}, seg=255) instead.', DeprecationWarning) - pad_val = dict(img=pad_val, masks=pad_val, seg=255) - assert isinstance(pad_val, dict) - self.pad_val = pad_val - self.pad_to_square = pad_to_square - - if pad_to_square: - assert size is None and size_divisor is None, \ - 'The size and size_divisor must be None ' \ - 'when pad2square is True' - else: - assert size is not None or size_divisor is not None, \ - 'only one of size and size_divisor should be valid' - assert size is None or size_divisor is None - - def _pad_img(self, results): - """Pad images according to ``self.size``.""" - pad_val = self.pad_val.get('img', 0) - for key in results.get('img_fields', ['img']): - if self.pad_to_square: - max_size = max(results[key].shape[:2]) - self.size = (max_size, max_size) - if self.size is not None: - padded_img = mmcv.impad( - results[key], shape=self.size, pad_val=pad_val) - elif self.size_divisor is not None: - padded_img = mmcv.impad_to_multiple( - results[key], self.size_divisor, pad_val=pad_val) - results[key] = padded_img - results['pad_shape'] = padded_img.shape - results['pad_fixed_size'] = self.size - results['pad_size_divisor'] = self.size_divisor - - def _pad_masks(self, results): - """Pad masks according to ``results['pad_shape']``.""" - pad_shape = results['pad_shape'][:2] - pad_val = self.pad_val.get('masks', 0) - for key in results.get('mask_fields', []): - results[key] = results[key].pad(pad_shape, pad_val=pad_val) - - def _pad_seg(self, results): - """Pad semantic segmentation map according to - ``results['pad_shape']``.""" - pad_val = self.pad_val.get('seg', 255) - for key in results.get('seg_fields', []): - results[key] = mmcv.impad( - results[key], shape=results['pad_shape'][:2], pad_val=pad_val) - - def __call__(self, 
results): - """Call function to pad images, masks, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Updated result dict. - """ - self._pad_img(results) - self._pad_masks(results) - self._pad_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(size={self.size}, ' - repr_str += f'size_divisor={self.size_divisor}, ' - repr_str += f'pad_to_square={self.pad_to_square}, ' - repr_str += f'pad_val={self.pad_val})' - return repr_str - - -@PIPELINES.register_module() -class Normalize: - """Normalize the image. - - Added key is "img_norm_cfg". - - Args: - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB, - default is true. - """ - - def __init__(self, mean, std, to_rgb=True): - self.mean = np.array(mean, dtype=np.float32) - self.std = np.array(std, dtype=np.float32) - self.to_rgb = to_rgb - - def __call__(self, results): - """Call function to normalize images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Normalized results, 'img_norm_cfg' key is added into - result dict. - """ - for key in results.get('img_fields', ['img']): - results[key] = mmcv.imnormalize(results[key], self.mean, self.std, - self.to_rgb) - results['img_norm_cfg'] = dict( - mean=self.mean, std=self.std, to_rgb=self.to_rgb) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, std={self.std}, to_rgb={self.to_rgb})' - return repr_str - - -@PIPELINES.register_module() -class RandomCrop: - """Random crop the image & bboxes & masks. - - The absolute `crop_size` is sampled based on `crop_type` and `image_size`, - then the cropped results are generated. - - Args: - crop_size (tuple): The relative ratio or absolute pixels of - height and width. - crop_type (str, optional): one of "relative_range", "relative", - "absolute", "absolute_range". "relative" randomly crops - (h * crop_size[0], w * crop_size[1]) part from an input of size - (h, w). "relative_range" uniformly samples relative crop size from - range [crop_size[0], 1] and [crop_size[1], 1] for height and width - respectively. "absolute" crops from an input with absolute size - (crop_size[0], crop_size[1]). "absolute_range" uniformly samples - crop_h in range [crop_size[0], min(h, crop_size[1])] and crop_w - in range [crop_size[0], min(w, crop_size[1])]. Default "absolute". - allow_negative_crop (bool, optional): Whether to allow a crop that does - not contain any bbox area. Default False. - recompute_bbox (bool, optional): Whether to re-compute the boxes based - on cropped instance masks. Default False. - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - - Note: - - If the image is smaller than the absolute crop size, return the - original image. - - The keys for bboxes, labels and masks must be aligned. That is, - `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and - `gt_bboxes_ignore` corresponds to `gt_labels_ignore` and - `gt_masks_ignore`. - - If the crop does not contain any gt-bbox region and - `allow_negative_crop` is set to False, skip this image. 
- """ - - def __init__(self, - crop_size, - crop_type='absolute', - allow_negative_crop=False, - recompute_bbox=False, - bbox_clip_border=True): - if crop_type not in [ - 'relative_range', 'relative', 'absolute', 'absolute_range' - ]: - raise ValueError(f'Invalid crop_type {crop_type}.') - if crop_type in ['absolute', 'absolute_range']: - assert crop_size[0] > 0 and crop_size[1] > 0 - assert isinstance(crop_size[0], int) and isinstance( - crop_size[1], int) - else: - assert 0 < crop_size[0] <= 1 and 0 < crop_size[1] <= 1 - self.crop_size = crop_size - self.crop_type = crop_type - self.allow_negative_crop = allow_negative_crop - self.bbox_clip_border = bbox_clip_border - self.recompute_bbox = recompute_bbox - # The key correspondence from bboxes to labels and masks. - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - self.bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - - def _crop_data(self, results, crop_size, allow_negative_crop): - """Function to randomly crop images, bounding boxes, masks, semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - crop_size (tuple): Expected absolute size after cropping, (h, w). - allow_negative_crop (bool): Whether to allow a crop that does not - contain any bbox area. Default to False. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - assert crop_size[0] > 0 and crop_size[1] > 0 - for key in results.get('img_fields', ['img']): - img = results[key] - margin_h = max(img.shape[0] - crop_size[0], 0) - margin_w = max(img.shape[1] - crop_size[1], 0) - offset_h = np.random.randint(0, margin_h + 1) - offset_w = np.random.randint(0, margin_w + 1) - crop_y1, crop_y2 = offset_h, offset_h + crop_size[0] - crop_x1, crop_x2 = offset_w, offset_w + crop_size[1] - - # crop the image - img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] - img_shape = img.shape - results[key] = img - results['img_shape'] = img_shape - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - # e.g. gt_bboxes and gt_bboxes_ignore - bbox_offset = np.array([offset_w, offset_h, offset_w, offset_h], - dtype=np.float32) - bboxes = results[key] - bbox_offset - if self.bbox_clip_border: - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - valid_inds = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - # If the crop does not contain any gt-bbox area and - # allow_negative_crop is False, skip this image. - if (key == 'gt_bboxes' and not valid_inds.any() - and not allow_negative_crop): - return None - results[key] = bboxes[valid_inds, :] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - - # mask fields, e.g. 
gt_masks and gt_masks_ignore - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - valid_inds.nonzero()[0]].crop( - np.asarray([crop_x1, crop_y1, crop_x2, crop_y2])) - if self.recompute_bbox: - results[key] = results[mask_key].get_bboxes() - - # crop semantic seg - for key in results.get('seg_fields', []): - results[key] = results[key][crop_y1:crop_y2, crop_x1:crop_x2] - - return results - - def _get_crop_size(self, image_size): - """Randomly generates the absolute crop size based on `crop_type` and - `image_size`. - - Args: - image_size (tuple): (h, w). - - Returns: - crop_size (tuple): (crop_h, crop_w) in absolute pixels. - """ - h, w = image_size - if self.crop_type == 'absolute': - return (min(self.crop_size[0], h), min(self.crop_size[1], w)) - elif self.crop_type == 'absolute_range': - assert self.crop_size[0] <= self.crop_size[1] - crop_h = np.random.randint( - min(h, self.crop_size[0]), - min(h, self.crop_size[1]) + 1) - crop_w = np.random.randint( - min(w, self.crop_size[0]), - min(w, self.crop_size[1]) + 1) - return crop_h, crop_w - elif self.crop_type == 'relative': - crop_h, crop_w = self.crop_size - return int(h * crop_h + 0.5), int(w * crop_w + 0.5) - elif self.crop_type == 'relative_range': - crop_size = np.asarray(self.crop_size, dtype=np.float32) - crop_h, crop_w = crop_size + np.random.rand(2) * (1 - crop_size) - return int(h * crop_h + 0.5), int(w * crop_w + 0.5) - - def __call__(self, results): - """Call function to randomly crop images, bounding boxes, masks, - semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - image_size = results['img'].shape[:2] - crop_size = self._get_crop_size(image_size) - results = self._crop_data(results, crop_size, self.allow_negative_crop) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(crop_size={self.crop_size}, ' - repr_str += f'crop_type={self.crop_type}, ' - repr_str += f'allow_negative_crop={self.allow_negative_crop}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class SegRescale: - """Rescale semantic segmentation maps. - - Args: - scale_factor (float): The scale factor of the final output. - backend (str): Image rescale backend, choices are 'cv2' and 'pillow'. - These two backends generates slightly different results. Defaults - to 'cv2'. - """ - - def __init__(self, scale_factor=1, backend='cv2'): - self.scale_factor = scale_factor - self.backend = backend - - def __call__(self, results): - """Call function to scale the semantic segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with semantic segmentation map scaled. - """ - - for key in results.get('seg_fields', []): - if self.scale_factor != 1: - results[key] = mmcv.imrescale( - results[key], - self.scale_factor, - interpolation='nearest', - backend=self.backend) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(scale_factor={self.scale_factor})' - - -@PIPELINES.register_module() -class PhotoMetricDistortion: - """Apply photometric distortion to image sequentially, every transformation - is applied with a probability of 0.5. The position of random contrast is in - second or second to last. - - 1. random brightness - 2. random contrast (mode 0) - 3. 
convert color from BGR to HSV - 4. random saturation - 5. random hue - 6. convert color from HSV to BGR - 7. random contrast (mode 1) - 8. randomly swap channels - - Args: - brightness_delta (int): delta of brightness. - contrast_range (tuple): range of contrast. - saturation_range (tuple): range of saturation. - hue_delta (int): delta of hue. - """ - - def __init__(self, - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18): - self.brightness_delta = brightness_delta - self.contrast_lower, self.contrast_upper = contrast_range - self.saturation_lower, self.saturation_upper = saturation_range - self.hue_delta = hue_delta - - def __call__(self, results): - """Call function to perform photometric distortion on images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images distorted. - """ - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - img = img.astype(np.float32) - # random brightness - if random.randint(2): - delta = random.uniform(-self.brightness_delta, - self.brightness_delta) - img += delta - - # mode == 0 --> do random contrast first - # mode == 1 --> do random contrast last - mode = random.randint(2) - if mode == 1: - if random.randint(2): - alpha = random.uniform(self.contrast_lower, - self.contrast_upper) - img *= alpha - - # convert color from BGR to HSV - img = mmcv.bgr2hsv(img) - - # random saturation - if random.randint(2): - img[..., 1] *= random.uniform(self.saturation_lower, - self.saturation_upper) - - # random hue - if random.randint(2): - img[..., 0] += random.uniform(-self.hue_delta, self.hue_delta) - img[..., 0][img[..., 0] > 360] -= 360 - img[..., 0][img[..., 0] < 0] += 360 - - # convert color from HSV to BGR - img = mmcv.hsv2bgr(img) - - # random contrast - if mode == 0: - if random.randint(2): - alpha = random.uniform(self.contrast_lower, - self.contrast_upper) - img *= alpha - - # randomly swap channels - if random.randint(2): - img = img[..., random.permutation(3)] - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(\nbrightness_delta={self.brightness_delta},\n' - repr_str += 'contrast_range=' - repr_str += f'{(self.contrast_lower, self.contrast_upper)},\n' - repr_str += 'saturation_range=' - repr_str += f'{(self.saturation_lower, self.saturation_upper)},\n' - repr_str += f'hue_delta={self.hue_delta})' - return repr_str - - -@PIPELINES.register_module() -class Expand: - """Random expand the image & bboxes. - - Randomly place the original image on a canvas of 'ratio' x original image - size filled with mean values. The ratio is in the range of ratio_range. - - Args: - mean (tuple): mean value of dataset. - to_rgb (bool): if need to convert the order of mean to align with RGB. - ratio_range (tuple): range of expand ratio. - prob (float): probability of applying this transformation - """ - - def __init__(self, - mean=(0, 0, 0), - to_rgb=True, - ratio_range=(1, 4), - seg_ignore_label=None, - prob=0.5): - self.to_rgb = to_rgb - self.ratio_range = ratio_range - if to_rgb: - self.mean = mean[::-1] - else: - self.mean = mean - self.min_ratio, self.max_ratio = ratio_range - self.seg_ignore_label = seg_ignore_label - self.prob = prob - - def __call__(self, results): - """Call function to expand images, bounding boxes. - - Args: - results (dict): Result dict from loading pipeline. 
- - Returns: - dict: Result dict with images, bounding boxes expanded - """ - - if random.uniform(0, 1) > self.prob: - return results - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - - h, w, c = img.shape - ratio = random.uniform(self.min_ratio, self.max_ratio) - # speedup expand when meets large image - if np.all(self.mean == self.mean[0]): - expand_img = np.empty((int(h * ratio), int(w * ratio), c), - img.dtype) - expand_img.fill(self.mean[0]) - else: - expand_img = np.full((int(h * ratio), int(w * ratio), c), - self.mean, - dtype=img.dtype) - left = int(random.uniform(0, w * ratio - w)) - top = int(random.uniform(0, h * ratio - h)) - expand_img[top:top + h, left:left + w] = img - - results['img'] = expand_img - # expand bboxes - for key in results.get('bbox_fields', []): - results[key] = results[key] + np.tile( - (left, top), 2).astype(results[key].dtype) - - # expand masks - for key in results.get('mask_fields', []): - results[key] = results[key].expand( - int(h * ratio), int(w * ratio), top, left) - - # expand segs - for key in results.get('seg_fields', []): - gt_seg = results[key] - expand_gt_seg = np.full((int(h * ratio), int(w * ratio)), - self.seg_ignore_label, - dtype=gt_seg.dtype) - expand_gt_seg[top:top + h, left:left + w] = gt_seg - results[key] = expand_gt_seg - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, to_rgb={self.to_rgb}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label})' - return repr_str - - -@PIPELINES.register_module() -class MinIoURandomCrop: - """Random crop the image & bboxes, the cropped patches have minimum IoU - requirement with original image & bboxes, the IoU threshold is randomly - selected from min_ious. - - Args: - min_ious (tuple): minimum IoU threshold for all intersections with - bounding boxes - min_crop_size (float): minimum crop's size (i.e. h,w := a*h, a*w, - where a >= min_crop_size). - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - - Note: - The keys for bboxes, labels and masks should be paired. That is, \ - `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and \ - `gt_bboxes_ignore` to `gt_labels_ignore` and `gt_masks_ignore`. - """ - - def __init__(self, - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3, - bbox_clip_border=True): - # 1: return ori img - self.min_ious = min_ious - self.sample_mode = (1, *min_ious, 0) - self.min_crop_size = min_crop_size - self.bbox_clip_border = bbox_clip_border - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - self.bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - - def __call__(self, results): - """Call function to crop images and bounding boxes with minimum IoU - constraint. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images and bounding boxes cropped, \ - 'img_shape' key is updated. 
- """ - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - assert 'bbox_fields' in results - boxes = [results[key] for key in results['bbox_fields']] - boxes = np.concatenate(boxes, 0) - h, w, c = img.shape - while True: - mode = random.choice(self.sample_mode) - self.mode = mode - if mode == 1: - return results - - min_iou = mode - for i in range(50): - new_w = random.uniform(self.min_crop_size * w, w) - new_h = random.uniform(self.min_crop_size * h, h) - - # h / w in [0.5, 2] - if new_h / new_w < 0.5 or new_h / new_w > 2: - continue - - left = random.uniform(w - new_w) - top = random.uniform(h - new_h) - - patch = np.array( - (int(left), int(top), int(left + new_w), int(top + new_h))) - # Line or point crop is not allowed - if patch[2] == patch[0] or patch[3] == patch[1]: - continue - overlaps = bbox_overlaps( - patch.reshape(-1, 4), boxes.reshape(-1, 4)).reshape(-1) - if len(overlaps) > 0 and overlaps.min() < min_iou: - continue - - # center of boxes should inside the crop img - # only adjust boxes and instance masks when the gt is not empty - if len(overlaps) > 0: - # adjust boxes - def is_center_of_bboxes_in_patch(boxes, patch): - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = ((center[:, 0] > patch[0]) * - (center[:, 1] > patch[1]) * - (center[:, 0] < patch[2]) * - (center[:, 1] < patch[3])) - return mask - - mask = is_center_of_bboxes_in_patch(boxes, patch) - if not mask.any(): - continue - for key in results.get('bbox_fields', []): - boxes = results[key].copy() - mask = is_center_of_bboxes_in_patch(boxes, patch) - boxes = boxes[mask] - if self.bbox_clip_border: - boxes[:, 2:] = boxes[:, 2:].clip(max=patch[2:]) - boxes[:, :2] = boxes[:, :2].clip(min=patch[:2]) - boxes -= np.tile(patch[:2], 2) - - results[key] = boxes - # labels - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][mask] - - # mask fields - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - mask.nonzero()[0]].crop(patch) - # adjust the img no matter whether the gt is empty before crop - img = img[patch[1]:patch[3], patch[0]:patch[2]] - results['img'] = img - results['img_shape'] = img.shape - - # seg fields - for key in results.get('seg_fields', []): - results[key] = results[key][patch[1]:patch[3], - patch[0]:patch[2]] - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(min_ious={self.min_ious}, ' - repr_str += f'min_crop_size={self.min_crop_size}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class Corrupt: - """Corruption augmentation. - - Corruption transforms implemented based on - `imagecorruptions `_. - - Args: - corruption (str): Corruption name. - severity (int, optional): The severity of corruption. Default: 1. - """ - - def __init__(self, corruption, severity=1): - self.corruption = corruption - self.severity = severity - - def __call__(self, results): - """Call function to corrupt image. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images corrupted. 
- """ - - if corrupt is None: - raise RuntimeError('imagecorruptions is not installed') - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - results['img'] = corrupt( - results['img'].astype(np.uint8), - corruption_name=self.corruption, - severity=self.severity) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(corruption={self.corruption}, ' - repr_str += f'severity={self.severity})' - return repr_str - - -@PIPELINES.register_module() -class Albu: - """Albumentation augmentation. - - Adds custom transformations from Albumentations library. - Please, visit `https://albumentations.readthedocs.io` - to get more information. - - An example of ``transforms`` is as followed: - - .. code-block:: - - [ - dict( - type='ShiftScaleRotate', - shift_limit=0.0625, - scale_limit=0.0, - rotate_limit=0, - interpolation=1, - p=0.5), - dict( - type='RandomBrightnessContrast', - brightness_limit=[0.1, 0.3], - contrast_limit=[0.1, 0.3], - p=0.2), - dict(type='ChannelShuffle', p=0.1), - dict( - type='OneOf', - transforms=[ - dict(type='Blur', blur_limit=3, p=1.0), - dict(type='MedianBlur', blur_limit=3, p=1.0) - ], - p=0.1), - ] - - Args: - transforms (list[dict]): A list of albu transformations - bbox_params (dict): Bbox_params for albumentation `Compose` - keymap (dict): Contains {'input key':'albumentation-style key'} - skip_img_without_anno (bool): Whether to skip the image if no ann left - after aug - """ - - def __init__(self, - transforms, - bbox_params=None, - keymap=None, - update_pad_shape=False, - skip_img_without_anno=False): - if Compose is None: - raise RuntimeError('albumentations is not installed') - - # Args will be modified later, copying it will be safer - transforms = copy.deepcopy(transforms) - if bbox_params is not None: - bbox_params = copy.deepcopy(bbox_params) - if keymap is not None: - keymap = copy.deepcopy(keymap) - self.transforms = transforms - self.filter_lost_elements = False - self.update_pad_shape = update_pad_shape - self.skip_img_without_anno = skip_img_without_anno - - # A simple workaround to remove masks without boxes - if (isinstance(bbox_params, dict) and 'label_fields' in bbox_params - and 'filter_lost_elements' in bbox_params): - self.filter_lost_elements = True - self.origin_label_fields = bbox_params['label_fields'] - bbox_params['label_fields'] = ['idx_mapper'] - del bbox_params['filter_lost_elements'] - - self.bbox_params = ( - self.albu_builder(bbox_params) if bbox_params else None) - self.aug = Compose([self.albu_builder(t) for t in self.transforms], - bbox_params=self.bbox_params) - - if not keymap: - self.keymap_to_albu = { - 'img': 'image', - 'gt_masks': 'masks', - 'gt_bboxes': 'bboxes' - } - else: - self.keymap_to_albu = keymap - self.keymap_back = {v: k for k, v in self.keymap_to_albu.items()} - - def albu_builder(self, cfg): - """Import a module from albumentations. - - It inherits some of :func:`build_from_cfg` logic. - - Args: - cfg (dict): Config dict. It should at least contain the key "type". - - Returns: - obj: The constructed object. 
- """ - - assert isinstance(cfg, dict) and 'type' in cfg - args = cfg.copy() - - obj_type = args.pop('type') - if mmcv.is_str(obj_type): - if albumentations is None: - raise RuntimeError('albumentations is not installed') - obj_cls = getattr(albumentations, obj_type) - elif inspect.isclass(obj_type): - obj_cls = obj_type - else: - raise TypeError( - f'type must be a str or valid type, but got {type(obj_type)}') - - if 'transforms' in args: - args['transforms'] = [ - self.albu_builder(transform) - for transform in args['transforms'] - ] - - return obj_cls(**args) - - @staticmethod - def mapper(d, keymap): - """Dictionary mapper. Renames keys according to keymap provided. - - Args: - d (dict): old dict - keymap (dict): {'old_key':'new_key'} - Returns: - dict: new dict. - """ - - updated_dict = {} - for k, v in zip(d.keys(), d.values()): - new_k = keymap.get(k, k) - updated_dict[new_k] = d[k] - return updated_dict - - def __call__(self, results): - # dict to albumentations format - results = self.mapper(results, self.keymap_to_albu) - # TODO: add bbox_fields - if 'bboxes' in results: - # to list of boxes - if isinstance(results['bboxes'], np.ndarray): - results['bboxes'] = [x for x in results['bboxes']] - # add pseudo-field for filtration - if self.filter_lost_elements: - results['idx_mapper'] = np.arange(len(results['bboxes'])) - - # TODO: Support mask structure in albu - if 'masks' in results: - if isinstance(results['masks'], PolygonMasks): - raise NotImplementedError( - 'Albu only supports BitMap masks now') - ori_masks = results['masks'] - if albumentations.__version__ < '0.5': - results['masks'] = results['masks'].masks - else: - results['masks'] = [mask for mask in results['masks'].masks] - - results = self.aug(**results) - - if 'bboxes' in results: - if isinstance(results['bboxes'], list): - results['bboxes'] = np.array( - results['bboxes'], dtype=np.float32) - results['bboxes'] = results['bboxes'].reshape(-1, 4) - - # filter label_fields - if self.filter_lost_elements: - - for label in self.origin_label_fields: - results[label] = np.array( - [results[label][i] for i in results['idx_mapper']]) - if 'masks' in results: - results['masks'] = np.array( - [results['masks'][i] for i in results['idx_mapper']]) - results['masks'] = ori_masks.__class__( - results['masks'], results['image'].shape[0], - results['image'].shape[1]) - - if (not len(results['idx_mapper']) - and self.skip_img_without_anno): - return None - - if 'gt_labels' in results: - if isinstance(results['gt_labels'], list): - results['gt_labels'] = np.array(results['gt_labels']) - results['gt_labels'] = results['gt_labels'].astype(np.int64) - - # back to the original format - results = self.mapper(results, self.keymap_back) - - # update final shape - if self.update_pad_shape: - results['pad_shape'] = results['img'].shape - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ + f'(transforms={self.transforms})' - return repr_str - - -@PIPELINES.register_module() -class RandomCenterCropPad: - """Random center crop and random around padding for CornerNet. - - This operation generates randomly cropped image from the original image and - pads it simultaneously. Different from :class:`RandomCrop`, the output - shape may not equal to ``crop_size`` strictly. We choose a random value - from ``ratios`` and the output shape could be larger or smaller than - ``crop_size``. The padding operation is also different from :class:`Pad`, - here we use around padding instead of right-bottom padding. 
- - The relation between output image (padding image) and original image: - - .. code:: text - - output image - - +----------------------------+ - | padded area | - +------|----------------------------|----------+ - | | cropped area | | - | | +---------------+ | | - | | | . center | | | original image - | | | range | | | - | | +---------------+ | | - +------|----------------------------|----------+ - | padded area | - +----------------------------+ - - There are 5 main areas in the figure: - - - output image: output image of this operation, also called padding - image in following instruction. - - original image: input image of this operation. - - padded area: non-intersect area of output image and original image. - - cropped area: the overlap of output image and original image. - - center range: a smaller area where random center chosen from. - center range is computed by ``border`` and original image's shape - to avoid our random center is too close to original image's border. - - Also this operation act differently in train and test mode, the summary - pipeline is listed below. - - Train pipeline: - - 1. Choose a ``random_ratio`` from ``ratios``, the shape of padding image - will be ``random_ratio * crop_size``. - 2. Choose a ``random_center`` in center range. - 3. Generate padding image with center matches the ``random_center``. - 4. Initialize the padding image with pixel value equals to ``mean``. - 5. Copy the cropped area to padding image. - 6. Refine annotations. - - Test pipeline: - - 1. Compute output shape according to ``test_pad_mode``. - 2. Generate padding image with center matches the original image - center. - 3. Initialize the padding image with pixel value equals to ``mean``. - 4. Copy the ``cropped area`` to padding image. - - Args: - crop_size (tuple | None): expected size after crop, final size will - computed according to ratio. Requires (h, w) in train mode, and - None in test mode. - ratios (tuple): random select a ratio from tuple and crop image to - (crop_size[0] * ratio) * (crop_size[1] * ratio). - Only available in train mode. - border (int): max distance from center select area to image border. - Only available in train mode. - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB. - test_mode (bool): whether involve random variables in transform. - In train mode, crop_size is fixed, center coords and ratio is - random selected from predefined lists. In test mode, crop_size - is image's original shape, center coords and ratio is fixed. - test_pad_mode (tuple): padding method and padding shape value, only - available in test mode. Default is using 'logical_or' with - 127 as padding shape value. - - - 'logical_or': final_shape = input_shape | padding_shape_value - - 'size_divisor': final_shape = int( - ceil(input_shape / padding_shape_value) * padding_shape_value) - test_pad_add_pix (int): Extra padding pixel in test mode. Default 0. - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. 
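A minimal sketch of the two `test_pad_mode` behaviours documented above for RandomCenterCropPad. The standalone helper and the sample shapes are illustrative assumptions, not code from the removed file.

```python
import numpy as np

def test_pad_shape(h, w, mode=('logical_or', 127)):
    if mode[0] == 'logical_or':
        # final_shape = input_shape | padding_shape_value
        return h | mode[1], w | mode[1]
    # 'size_divisor': round each side up to a multiple of the value
    divisor = mode[1]
    return (int(np.ceil(h / divisor)) * divisor,
            int(np.ceil(w / divisor)) * divisor)

print(test_pad_shape(800, 1217))                         # (895, 1279)
print(test_pad_shape(800, 1217, ('size_divisor', 32)))   # (800, 1248)
```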
- """ - - def __init__(self, - crop_size=None, - ratios=(0.9, 1.0, 1.1), - border=128, - mean=None, - std=None, - to_rgb=None, - test_mode=False, - test_pad_mode=('logical_or', 127), - test_pad_add_pix=0, - bbox_clip_border=True): - if test_mode: - assert crop_size is None, 'crop_size must be None in test mode' - assert ratios is None, 'ratios must be None in test mode' - assert border is None, 'border must be None in test mode' - assert isinstance(test_pad_mode, (list, tuple)) - assert test_pad_mode[0] in ['logical_or', 'size_divisor'] - else: - assert isinstance(crop_size, (list, tuple)) - assert crop_size[0] > 0 and crop_size[1] > 0, ( - 'crop_size must > 0 in train mode') - assert isinstance(ratios, (list, tuple)) - assert test_pad_mode is None, ( - 'test_pad_mode must be None in train mode') - - self.crop_size = crop_size - self.ratios = ratios - self.border = border - # We do not set default value to mean, std and to_rgb because these - # hyper-parameters are easy to forget but could affect the performance. - # Please use the same setting as Normalize for performance assurance. - assert mean is not None and std is not None and to_rgb is not None - self.to_rgb = to_rgb - self.input_mean = mean - self.input_std = std - if to_rgb: - self.mean = mean[::-1] - self.std = std[::-1] - else: - self.mean = mean - self.std = std - self.test_mode = test_mode - self.test_pad_mode = test_pad_mode - self.test_pad_add_pix = test_pad_add_pix - self.bbox_clip_border = bbox_clip_border - - def _get_border(self, border, size): - """Get final border for the target size. - - This function generates a ``final_border`` according to image's shape. - The area between ``final_border`` and ``size - final_border`` is the - ``center range``. We randomly choose center from the ``center range`` - to avoid our random center is too close to original image's border. - Also ``center range`` should be larger than 0. - - Args: - border (int): The initial border, default is 128. - size (int): The width or height of original image. - Returns: - int: The final border. - """ - k = 2 * border / size - i = pow(2, np.ceil(np.log2(np.ceil(k))) + (k == int(k))) - return border // i - - def _filter_boxes(self, patch, boxes): - """Check whether the center of each box is in the patch. - - Args: - patch (list[int]): The cropped area, [left, top, right, bottom]. - boxes (numpy array, (N x 4)): Ground truth boxes. - - Returns: - mask (numpy array, (N,)): Each box is inside or outside the patch. - """ - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = (center[:, 0] > patch[0]) * (center[:, 1] > patch[1]) * ( - center[:, 0] < patch[2]) * ( - center[:, 1] < patch[3]) - return mask - - def _crop_image_and_paste(self, image, center, size): - """Crop image with a given center and size, then paste the cropped - image to a blank image with two centers align. - - This function is equivalent to generating a blank image with ``size`` - as its shape. Then cover it on the original image with two centers ( - the center of blank image and the random center of original image) - aligned. The overlap area is paste from the original image and the - outside area is filled with ``mean pixel``. - - Args: - image (np array, H x W x C): Original image. - center (list[int]): Target crop center coord. - size (list[int]): Target crop size. [target_h, target_w] - - Returns: - cropped_img (np array, target_h x target_w x C): Cropped image. 
- border (np array, 4): The distance of four border of - ``cropped_img`` to the original image area, [top, bottom, - left, right] - patch (list[int]): The cropped area, [left, top, right, bottom]. - """ - center_y, center_x = center - target_h, target_w = size - img_h, img_w, img_c = image.shape - - x0 = max(0, center_x - target_w // 2) - x1 = min(center_x + target_w // 2, img_w) - y0 = max(0, center_y - target_h // 2) - y1 = min(center_y + target_h // 2, img_h) - patch = np.array((int(x0), int(y0), int(x1), int(y1))) - - left, right = center_x - x0, x1 - center_x - top, bottom = center_y - y0, y1 - center_y - - cropped_center_y, cropped_center_x = target_h // 2, target_w // 2 - cropped_img = np.zeros((target_h, target_w, img_c), dtype=image.dtype) - for i in range(img_c): - cropped_img[:, :, i] += self.mean[i] - y_slice = slice(cropped_center_y - top, cropped_center_y + bottom) - x_slice = slice(cropped_center_x - left, cropped_center_x + right) - cropped_img[y_slice, x_slice, :] = image[y0:y1, x0:x1, :] - - border = np.array([ - cropped_center_y - top, cropped_center_y + bottom, - cropped_center_x - left, cropped_center_x + right - ], - dtype=np.float32) - - return cropped_img, border, patch - - def _train_aug(self, results): - """Random crop and around padding the original image. - - Args: - results (dict): Image infomations in the augment pipeline. - - Returns: - results (dict): The updated dict. - """ - img = results['img'] - h, w, c = img.shape - boxes = results['gt_bboxes'] - while True: - scale = random.choice(self.ratios) - new_h = int(self.crop_size[0] * scale) - new_w = int(self.crop_size[1] * scale) - h_border = self._get_border(self.border, h) - w_border = self._get_border(self.border, w) - - for i in range(50): - center_x = random.randint(low=w_border, high=w - w_border) - center_y = random.randint(low=h_border, high=h - h_border) - - cropped_img, border, patch = self._crop_image_and_paste( - img, [center_y, center_x], [new_h, new_w]) - - mask = self._filter_boxes(patch, boxes) - # if image do not have valid bbox, any crop patch is valid. - if not mask.any() and len(boxes) > 0: - continue - - results['img'] = cropped_img - results['img_shape'] = cropped_img.shape - results['pad_shape'] = cropped_img.shape - - x0, y0, x1, y1 = patch - - left_w, top_h = center_x - x0, center_y - y0 - cropped_center_x, cropped_center_y = new_w // 2, new_h // 2 - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - mask = self._filter_boxes(patch, results[key]) - bboxes = results[key][mask] - bboxes[:, 0:4:2] += cropped_center_x - left_w - x0 - bboxes[:, 1:4:2] += cropped_center_y - top_h - y0 - if self.bbox_clip_border: - bboxes[:, 0:4:2] = np.clip(bboxes[:, 0:4:2], 0, new_w) - bboxes[:, 1:4:2] = np.clip(bboxes[:, 1:4:2], 0, new_h) - keep = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - bboxes = bboxes[keep] - results[key] = bboxes - if key in ['gt_bboxes']: - if 'gt_labels' in results: - labels = results['gt_labels'][mask] - labels = labels[keep] - results['gt_labels'] = labels - if 'gt_masks' in results: - raise NotImplementedError( - 'RandomCenterCropPad only supports bbox.') - - # crop semantic seg - for key in results.get('seg_fields', []): - raise NotImplementedError( - 'RandomCenterCropPad only supports bbox.') - return results - - def _test_aug(self, results): - """Around padding the original image without cropping. - - The padding mode and value are from ``test_pad_mode``. 
- - Args: - results (dict): Image infomations in the augment pipeline. - - Returns: - results (dict): The updated dict. - """ - img = results['img'] - h, w, c = img.shape - results['img_shape'] = img.shape - if self.test_pad_mode[0] in ['logical_or']: - # self.test_pad_add_pix is only used for centernet - target_h = (h | self.test_pad_mode[1]) + self.test_pad_add_pix - target_w = (w | self.test_pad_mode[1]) + self.test_pad_add_pix - elif self.test_pad_mode[0] in ['size_divisor']: - divisor = self.test_pad_mode[1] - target_h = int(np.ceil(h / divisor)) * divisor - target_w = int(np.ceil(w / divisor)) * divisor - else: - raise NotImplementedError( - 'RandomCenterCropPad only support two testing pad mode:' - 'logical-or and size_divisor.') - - cropped_img, border, _ = self._crop_image_and_paste( - img, [h // 2, w // 2], [target_h, target_w]) - results['img'] = cropped_img - results['pad_shape'] = cropped_img.shape - results['border'] = border - return results - - def __call__(self, results): - img = results['img'] - assert img.dtype == np.float32, ( - 'RandomCenterCropPad needs the input image of dtype np.float32,' - ' please set "to_float32=True" in "LoadImageFromFile" pipeline') - h, w, c = img.shape - assert c == len(self.mean) - if self.test_mode: - return self._test_aug(results) - else: - return self._train_aug(results) - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(crop_size={self.crop_size}, ' - repr_str += f'ratios={self.ratios}, ' - repr_str += f'border={self.border}, ' - repr_str += f'mean={self.input_mean}, ' - repr_str += f'std={self.input_std}, ' - repr_str += f'to_rgb={self.to_rgb}, ' - repr_str += f'test_mode={self.test_mode}, ' - repr_str += f'test_pad_mode={self.test_pad_mode}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class CutOut: - """CutOut operation. - - Randomly drop some regions of image used in - `Cutout `_. - - Args: - n_holes (int | tuple[int, int]): Number of regions to be dropped. - If it is given as a list, number of holes will be randomly - selected from the closed interval [`n_holes[0]`, `n_holes[1]`]. - cutout_shape (tuple[int, int] | list[tuple[int, int]]): The candidate - shape of dropped regions. It can be `tuple[int, int]` to use a - fixed cutout shape, or `list[tuple[int, int]]` to randomly choose - shape from the list. - cutout_ratio (tuple[float, float] | list[tuple[float, float]]): The - candidate ratio of dropped regions. It can be `tuple[float, float]` - to use a fixed ratio or `list[tuple[float, float]]` to randomly - choose ratio from the list. Please note that `cutout_shape` - and `cutout_ratio` cannot be both given at the same time. - fill_in (tuple[float, float, float] | tuple[int, int, int]): The value - of pixel to fill in the dropped regions. Default: (0, 0, 0). - """ - - def __init__(self, - n_holes, - cutout_shape=None, - cutout_ratio=None, - fill_in=(0, 0, 0)): - - assert (cutout_shape is None) ^ (cutout_ratio is None), \ - 'Either cutout_shape or cutout_ratio should be specified.' 
- assert (isinstance(cutout_shape, (list, tuple)) - or isinstance(cutout_ratio, (list, tuple))) - if isinstance(n_holes, tuple): - assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1] - else: - n_holes = (n_holes, n_holes) - self.n_holes = n_holes - self.fill_in = fill_in - self.with_ratio = cutout_ratio is not None - self.candidates = cutout_ratio if self.with_ratio else cutout_shape - if not isinstance(self.candidates, list): - self.candidates = [self.candidates] - - def __call__(self, results): - """Call function to drop some regions of image.""" - h, w, c = results['img'].shape - n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1) - for _ in range(n_holes): - x1 = np.random.randint(0, w) - y1 = np.random.randint(0, h) - index = np.random.randint(0, len(self.candidates)) - if not self.with_ratio: - cutout_w, cutout_h = self.candidates[index] - else: - cutout_w = int(self.candidates[index][0] * w) - cutout_h = int(self.candidates[index][1] * h) - - x2 = np.clip(x1 + cutout_w, 0, w) - y2 = np.clip(y1 + cutout_h, 0, h) - results['img'][y1:y2, x1:x2, :] = self.fill_in - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(n_holes={self.n_holes}, ' - repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio - else f'cutout_shape={self.candidates}, ') - repr_str += f'fill_in={self.fill_in})' - return repr_str - - -@PIPELINES.register_module() -class Mosaic: - """Mosaic augmentation. - - Given 4 images, mosaic transform combines them into - one output image. The output image is composed of the parts from each sub- - image. - - .. code:: text - - mosaic transform - center_x - +------------------------------+ - | pad | pad | - | +-----------+ | - | | | | - | | image1 |--------+ | - | | | | | - | | | image2 | | - center_y |----+-------------+-----------| - | | cropped | | - |pad | image3 | image4 | - | | | | - +----|-------------+-----------+ - | | - +-------------+ - - The mosaic transform steps are as follows: - - 1. Choose the mosaic center as the intersections of 4 images - 2. Get the left top image according to the index, and randomly - sample another 3 images from the custom dataset. - 3. Sub image will be cropped if image is larger than mosaic patch - - Args: - img_scale (Sequence[int]): Image size after mosaic pipeline of single - image. The shape order should be (height, width). - Default to (640, 640). - center_ratio_range (Sequence[float]): Center ratio range of mosaic - output. Default to (0.5, 1.5). - min_bbox_size (int | float): The minimum pixel for filtering - invalid bboxes after the mosaic pipeline. Default to 0. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - skip_filter (bool): Whether to skip filtering rules. If it - is True, the filter rule will not be applied, and the - `min_bbox_size` is invalid. Default to True. - pad_val (int): Pad value. Default to 114. - prob (float): Probability of applying this transformation. - Default to 1.0. - """ - - def __init__(self, - img_scale=(640, 640), - center_ratio_range=(0.5, 1.5), - min_bbox_size=0, - bbox_clip_border=True, - skip_filter=True, - pad_val=114, - prob=1.0): - assert isinstance(img_scale, tuple) - assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. '\ - f'got {prob}.' 
- - log_img_scale(img_scale, skip_square=True) - self.img_scale = img_scale - self.center_ratio_range = center_ratio_range - self.min_bbox_size = min_bbox_size - self.bbox_clip_border = bbox_clip_border - self.skip_filter = skip_filter - self.pad_val = pad_val - self.prob = prob - - def __call__(self, results): - """Call function to make a mosaic of image. - - Args: - results (dict): Result dict. - - Returns: - dict: Result dict with mosaic transformed. - """ - - if random.uniform(0, 1) > self.prob: - return results - - results = self._mosaic_transform(results) - return results - - def get_indexes(self, dataset): - """Call function to collect indexes. - - Args: - dataset (:obj:`MultiImageMixDataset`): The dataset. - - Returns: - list: indexes. - """ - - indexes = [random.randint(0, len(dataset)) for _ in range(3)] - return indexes - - def _mosaic_transform(self, results): - """Mosaic transform function. - - Args: - results (dict): Result dict. - - Returns: - dict: Updated result dict. - """ - - assert 'mix_results' in results - mosaic_labels = [] - mosaic_bboxes = [] - if len(results['img'].shape) == 3: - mosaic_img = np.full( - (int(self.img_scale[0] * 2), int(self.img_scale[1] * 2), 3), - self.pad_val, - dtype=results['img'].dtype) - else: - mosaic_img = np.full( - (int(self.img_scale[0] * 2), int(self.img_scale[1] * 2)), - self.pad_val, - dtype=results['img'].dtype) - - # mosaic center x, y - center_x = int( - random.uniform(*self.center_ratio_range) * self.img_scale[1]) - center_y = int( - random.uniform(*self.center_ratio_range) * self.img_scale[0]) - center_position = (center_x, center_y) - - loc_strs = ('top_left', 'top_right', 'bottom_left', 'bottom_right') - for i, loc in enumerate(loc_strs): - if loc == 'top_left': - results_patch = copy.deepcopy(results) - else: - results_patch = copy.deepcopy(results['mix_results'][i - 1]) - - img_i = results_patch['img'] - h_i, w_i = img_i.shape[:2] - # keep_ratio resize - scale_ratio_i = min(self.img_scale[0] / h_i, - self.img_scale[1] / w_i) - img_i = mmcv.imresize( - img_i, (int(w_i * scale_ratio_i), int(h_i * scale_ratio_i))) - - # compute the combine parameters - paste_coord, crop_coord = self._mosaic_combine( - loc, center_position, img_i.shape[:2][::-1]) - x1_p, y1_p, x2_p, y2_p = paste_coord - x1_c, y1_c, x2_c, y2_c = crop_coord - - # crop and paste image - mosaic_img[y1_p:y2_p, x1_p:x2_p] = img_i[y1_c:y2_c, x1_c:x2_c] - - # adjust coordinate - gt_bboxes_i = results_patch['gt_bboxes'] - gt_labels_i = results_patch['gt_labels'] - - if gt_bboxes_i.shape[0] > 0: - padw = x1_p - x1_c - padh = y1_p - y1_c - gt_bboxes_i[:, 0::2] = \ - scale_ratio_i * gt_bboxes_i[:, 0::2] + padw - gt_bboxes_i[:, 1::2] = \ - scale_ratio_i * gt_bboxes_i[:, 1::2] + padh - - mosaic_bboxes.append(gt_bboxes_i) - mosaic_labels.append(gt_labels_i) - - if len(mosaic_labels) > 0: - mosaic_bboxes = np.concatenate(mosaic_bboxes, 0) - mosaic_labels = np.concatenate(mosaic_labels, 0) - - if self.bbox_clip_border: - mosaic_bboxes[:, 0::2] = np.clip(mosaic_bboxes[:, 0::2], 0, - 2 * self.img_scale[1]) - mosaic_bboxes[:, 1::2] = np.clip(mosaic_bboxes[:, 1::2], 0, - 2 * self.img_scale[0]) - - if not self.skip_filter: - mosaic_bboxes, mosaic_labels = \ - self._filter_box_candidates(mosaic_bboxes, mosaic_labels) - - # remove outside bboxes - inside_inds = find_inside_bboxes(mosaic_bboxes, 2 * self.img_scale[0], - 2 * self.img_scale[1]) - mosaic_bboxes = mosaic_bboxes[inside_inds] - mosaic_labels = mosaic_labels[inside_inds] - - results['img'] = mosaic_img - results['img_shape'] = 
mosaic_img.shape - results['gt_bboxes'] = mosaic_bboxes - results['gt_labels'] = mosaic_labels - - return results - - def _mosaic_combine(self, loc, center_position_xy, img_shape_wh): - """Calculate global coordinate of mosaic image and local coordinate of - cropped sub-image. - - Args: - loc (str): Index for the sub-image, loc in ('top_left', - 'top_right', 'bottom_left', 'bottom_right'). - center_position_xy (Sequence[float]): Mixing center for 4 images, - (x, y). - img_shape_wh (Sequence[int]): Width and height of sub-image - - Returns: - tuple[tuple[float]]: Corresponding coordinate of pasting and - cropping - - paste_coord (tuple): paste corner coordinate in mosaic image. - - crop_coord (tuple): crop corner coordinate in mosaic image. - """ - assert loc in ('top_left', 'top_right', 'bottom_left', 'bottom_right') - if loc == 'top_left': - # index0 to top left part of image - x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \ - max(center_position_xy[1] - img_shape_wh[1], 0), \ - center_position_xy[0], \ - center_position_xy[1] - crop_coord = img_shape_wh[0] - (x2 - x1), img_shape_wh[1] - ( - y2 - y1), img_shape_wh[0], img_shape_wh[1] - - elif loc == 'top_right': - # index1 to top right part of image - x1, y1, x2, y2 = center_position_xy[0], \ - max(center_position_xy[1] - img_shape_wh[1], 0), \ - min(center_position_xy[0] + img_shape_wh[0], - self.img_scale[1] * 2), \ - center_position_xy[1] - crop_coord = 0, img_shape_wh[1] - (y2 - y1), min( - img_shape_wh[0], x2 - x1), img_shape_wh[1] - - elif loc == 'bottom_left': - # index2 to bottom left part of image - x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \ - center_position_xy[1], \ - center_position_xy[0], \ - min(self.img_scale[0] * 2, center_position_xy[1] + - img_shape_wh[1]) - crop_coord = img_shape_wh[0] - (x2 - x1), 0, img_shape_wh[0], min( - y2 - y1, img_shape_wh[1]) - - else: - # index3 to bottom right part of image - x1, y1, x2, y2 = center_position_xy[0], \ - center_position_xy[1], \ - min(center_position_xy[0] + img_shape_wh[0], - self.img_scale[1] * 2), \ - min(self.img_scale[0] * 2, center_position_xy[1] + - img_shape_wh[1]) - crop_coord = 0, 0, min(img_shape_wh[0], - x2 - x1), min(y2 - y1, img_shape_wh[1]) - - paste_coord = x1, y1, x2, y2 - return paste_coord, crop_coord - - def _filter_box_candidates(self, bboxes, labels): - """Filter out bboxes too small after Mosaic.""" - bbox_w = bboxes[:, 2] - bboxes[:, 0] - bbox_h = bboxes[:, 3] - bboxes[:, 1] - valid_inds = (bbox_w > self.min_bbox_size) & \ - (bbox_h > self.min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - return bboxes[valid_inds], labels[valid_inds] - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'img_scale={self.img_scale}, ' - repr_str += f'center_ratio_range={self.center_ratio_range}, ' - repr_str += f'pad_val={self.pad_val}, ' - repr_str += f'min_bbox_size={self.min_bbox_size}, ' - repr_str += f'skip_filter={self.skip_filter})' - return repr_str - - -@PIPELINES.register_module() -class MixUp: - """MixUp data augmentation. - - .. code:: text - - mixup transform - +------------------------------+ - | mixup image | | - | +--------|--------+ | - | | | | | - |---------------+ | | - | | | | - | | image | | - | | | | - | | | | - | |-----------------+ | - | pad | - +------------------------------+ - - The mixup transform steps are as follows: - - 1. Another random image is picked by dataset and embedded in - the top left patch(after padding and resizing) - 2. 
The target of mixup transform is the weighted average of mixup - image and origin image. - - Args: - img_scale (Sequence[int]): Image output size after mixup pipeline. - The shape order should be (height, width). Default: (640, 640). - ratio_range (Sequence[float]): Scale ratio of mixup image. - Default: (0.5, 1.5). - flip_ratio (float): Horizontal flip ratio of mixup image. - Default: 0.5. - pad_val (int): Pad value. Default: 114. - max_iters (int): The maximum number of iterations. If the number of - iterations is greater than `max_iters`, but gt_bbox is still - empty, then the iteration is terminated. Default: 15. - min_bbox_size (float): Width and height threshold to filter bboxes. - If the height or width of a box is smaller than this value, it - will be removed. Default: 5. - min_area_ratio (float): Threshold of area ratio between - original bboxes and wrapped bboxes. If smaller than this value, - the box will be removed. Default: 0.2. - max_aspect_ratio (float): Aspect ratio of width and height - threshold to filter bboxes. If max(h/w, w/h) larger than this - value, the box will be removed. Default: 20. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - skip_filter (bool): Whether to skip filtering rules. If it - is True, the filter rule will not be applied, and the - `min_bbox_size` and `min_area_ratio` and `max_aspect_ratio` - is invalid. Default to True. - """ - - def __init__(self, - img_scale=(640, 640), - ratio_range=(0.5, 1.5), - flip_ratio=0.5, - pad_val=114, - max_iters=15, - min_bbox_size=5, - min_area_ratio=0.2, - max_aspect_ratio=20, - bbox_clip_border=True, - skip_filter=True): - assert isinstance(img_scale, tuple) - log_img_scale(img_scale, skip_square=True) - self.dynamic_scale = img_scale - self.ratio_range = ratio_range - self.flip_ratio = flip_ratio - self.pad_val = pad_val - self.max_iters = max_iters - self.min_bbox_size = min_bbox_size - self.min_area_ratio = min_area_ratio - self.max_aspect_ratio = max_aspect_ratio - self.bbox_clip_border = bbox_clip_border - self.skip_filter = skip_filter - - def __call__(self, results): - """Call function to make a mixup of image. - - Args: - results (dict): Result dict. - - Returns: - dict: Result dict with mixup transformed. - """ - - results = self._mixup_transform(results) - return results - - def get_indexes(self, dataset): - """Call function to collect indexes. - - Args: - dataset (:obj:`MultiImageMixDataset`): The dataset. - - Returns: - list: indexes. - """ - - for i in range(self.max_iters): - index = random.randint(0, len(dataset)) - gt_bboxes_i = dataset.get_ann_info(index)['bboxes'] - if len(gt_bboxes_i) != 0: - break - - return index - - def _mixup_transform(self, results): - """MixUp transform function. - - Args: - results (dict): Result dict. - - Returns: - dict: Updated result dict. - """ - - assert 'mix_results' in results - assert len( - results['mix_results']) == 1, 'MixUp only support 2 images now !' 
- - if results['mix_results'][0]['gt_bboxes'].shape[0] == 0: - # empty bbox - return results - - retrieve_results = results['mix_results'][0] - retrieve_img = retrieve_results['img'] - - jit_factor = random.uniform(*self.ratio_range) - is_filp = random.uniform(0, 1) > self.flip_ratio - - if len(retrieve_img.shape) == 3: - out_img = np.ones( - (self.dynamic_scale[0], self.dynamic_scale[1], 3), - dtype=retrieve_img.dtype) * self.pad_val - else: - out_img = np.ones( - self.dynamic_scale, dtype=retrieve_img.dtype) * self.pad_val - - # 1. keep_ratio resize - scale_ratio = min(self.dynamic_scale[0] / retrieve_img.shape[0], - self.dynamic_scale[1] / retrieve_img.shape[1]) - retrieve_img = mmcv.imresize( - retrieve_img, (int(retrieve_img.shape[1] * scale_ratio), - int(retrieve_img.shape[0] * scale_ratio))) - - # 2. paste - out_img[:retrieve_img.shape[0], :retrieve_img.shape[1]] = retrieve_img - - # 3. scale jit - scale_ratio *= jit_factor - out_img = mmcv.imresize(out_img, (int(out_img.shape[1] * jit_factor), - int(out_img.shape[0] * jit_factor))) - - # 4. flip - if is_filp: - out_img = out_img[:, ::-1, :] - - # 5. random crop - ori_img = results['img'] - origin_h, origin_w = out_img.shape[:2] - target_h, target_w = ori_img.shape[:2] - padded_img = np.zeros( - (max(origin_h, target_h), max(origin_w, - target_w), 3)).astype(np.uint8) - padded_img[:origin_h, :origin_w] = out_img - - x_offset, y_offset = 0, 0 - if padded_img.shape[0] > target_h: - y_offset = random.randint(0, padded_img.shape[0] - target_h) - if padded_img.shape[1] > target_w: - x_offset = random.randint(0, padded_img.shape[1] - target_w) - padded_cropped_img = padded_img[y_offset:y_offset + target_h, - x_offset:x_offset + target_w] - - # 6. adjust bbox - retrieve_gt_bboxes = retrieve_results['gt_bboxes'] - retrieve_gt_bboxes[:, 0::2] = retrieve_gt_bboxes[:, 0::2] * scale_ratio - retrieve_gt_bboxes[:, 1::2] = retrieve_gt_bboxes[:, 1::2] * scale_ratio - if self.bbox_clip_border: - retrieve_gt_bboxes[:, 0::2] = np.clip(retrieve_gt_bboxes[:, 0::2], - 0, origin_w) - retrieve_gt_bboxes[:, 1::2] = np.clip(retrieve_gt_bboxes[:, 1::2], - 0, origin_h) - - if is_filp: - retrieve_gt_bboxes[:, 0::2] = ( - origin_w - retrieve_gt_bboxes[:, 0::2][:, ::-1]) - - # 7. filter - cp_retrieve_gt_bboxes = retrieve_gt_bboxes.copy() - cp_retrieve_gt_bboxes[:, 0::2] = \ - cp_retrieve_gt_bboxes[:, 0::2] - x_offset - cp_retrieve_gt_bboxes[:, 1::2] = \ - cp_retrieve_gt_bboxes[:, 1::2] - y_offset - if self.bbox_clip_border: - cp_retrieve_gt_bboxes[:, 0::2] = np.clip( - cp_retrieve_gt_bboxes[:, 0::2], 0, target_w) - cp_retrieve_gt_bboxes[:, 1::2] = np.clip( - cp_retrieve_gt_bboxes[:, 1::2], 0, target_h) - - # 8. 
mix up - ori_img = ori_img.astype(np.float32) - mixup_img = 0.5 * ori_img + 0.5 * padded_cropped_img.astype(np.float32) - - retrieve_gt_labels = retrieve_results['gt_labels'] - if not self.skip_filter: - keep_list = self._filter_box_candidates(retrieve_gt_bboxes.T, - cp_retrieve_gt_bboxes.T) - - retrieve_gt_labels = retrieve_gt_labels[keep_list] - cp_retrieve_gt_bboxes = cp_retrieve_gt_bboxes[keep_list] - - mixup_gt_bboxes = np.concatenate( - (results['gt_bboxes'], cp_retrieve_gt_bboxes), axis=0) - mixup_gt_labels = np.concatenate( - (results['gt_labels'], retrieve_gt_labels), axis=0) - - # remove outside bbox - inside_inds = find_inside_bboxes(mixup_gt_bboxes, target_h, target_w) - mixup_gt_bboxes = mixup_gt_bboxes[inside_inds] - mixup_gt_labels = mixup_gt_labels[inside_inds] - - results['img'] = mixup_img.astype(np.uint8) - results['img_shape'] = mixup_img.shape - results['gt_bboxes'] = mixup_gt_bboxes - results['gt_labels'] = mixup_gt_labels - - return results - - def _filter_box_candidates(self, bbox1, bbox2): - """Compute candidate boxes which include following 5 things: - - bbox1 before augment, bbox2 after augment, min_bbox_size (pixels), - min_area_ratio, max_aspect_ratio. - """ - - w1, h1 = bbox1[2] - bbox1[0], bbox1[3] - bbox1[1] - w2, h2 = bbox2[2] - bbox2[0], bbox2[3] - bbox2[1] - ar = np.maximum(w2 / (h2 + 1e-16), h2 / (w2 + 1e-16)) - return ((w2 > self.min_bbox_size) - & (h2 > self.min_bbox_size) - & (w2 * h2 / (w1 * h1 + 1e-16) > self.min_area_ratio) - & (ar < self.max_aspect_ratio)) - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'dynamic_scale={self.dynamic_scale}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'flip_ratio={self.flip_ratio}, ' - repr_str += f'pad_val={self.pad_val}, ' - repr_str += f'max_iters={self.max_iters}, ' - repr_str += f'min_bbox_size={self.min_bbox_size}, ' - repr_str += f'min_area_ratio={self.min_area_ratio}, ' - repr_str += f'max_aspect_ratio={self.max_aspect_ratio}, ' - repr_str += f'skip_filter={self.skip_filter})' - return repr_str - - -@PIPELINES.register_module() -class RandomAffine: - """Random affine transform data augmentation. - - This operation randomly generates affine transform matrix which including - rotation, translation, shear and scaling transforms. - - Args: - max_rotate_degree (float): Maximum degrees of rotation transform. - Default: 10. - max_translate_ratio (float): Maximum ratio of translation. - Default: 0.1. - scaling_ratio_range (tuple[float]): Min and max ratio of - scaling transform. Default: (0.5, 1.5). - max_shear_degree (float): Maximum degrees of shear - transform. Default: 2. - border (tuple[int]): Distance from height and width sides of input - image to adjust output shape. Only used in mosaic dataset. - Default: (0, 0). - border_val (tuple[int]): Border padding values of 3 channels. - Default: (114, 114, 114). - min_bbox_size (float): Width and height threshold to filter bboxes. - If the height or width of a box is smaller than this value, it - will be removed. Default: 2. - min_area_ratio (float): Threshold of area ratio between - original bboxes and wrapped bboxes. If smaller than this value, - the box will be removed. Default: 0.2. - max_aspect_ratio (float): Aspect ratio of width and height - threshold to filter bboxes. If max(h/w, w/h) larger than this - value, the box will be removed. - bbox_clip_border (bool, optional): Whether to clip the objects outside - the border of the image. 
In some dataset like MOT17, the gt bboxes - are allowed to cross the border of images. Therefore, we don't - need to clip the gt bboxes in these cases. Defaults to True. - skip_filter (bool): Whether to skip filtering rules. If it - is True, the filter rule will not be applied, and the - `min_bbox_size` and `min_area_ratio` and `max_aspect_ratio` - is invalid. Default to True. - """ - - def __init__(self, - max_rotate_degree=10.0, - max_translate_ratio=0.1, - scaling_ratio_range=(0.5, 1.5), - max_shear_degree=2.0, - border=(0, 0), - border_val=(114, 114, 114), - min_bbox_size=2, - min_area_ratio=0.2, - max_aspect_ratio=20, - bbox_clip_border=True, - skip_filter=True): - assert 0 <= max_translate_ratio <= 1 - assert scaling_ratio_range[0] <= scaling_ratio_range[1] - assert scaling_ratio_range[0] > 0 - self.max_rotate_degree = max_rotate_degree - self.max_translate_ratio = max_translate_ratio - self.scaling_ratio_range = scaling_ratio_range - self.max_shear_degree = max_shear_degree - self.border = border - self.border_val = border_val - self.min_bbox_size = min_bbox_size - self.min_area_ratio = min_area_ratio - self.max_aspect_ratio = max_aspect_ratio - self.bbox_clip_border = bbox_clip_border - self.skip_filter = skip_filter - - def __call__(self, results): - img = results['img'] - height = img.shape[0] + self.border[0] * 2 - width = img.shape[1] + self.border[1] * 2 - - # Rotation - rotation_degree = random.uniform(-self.max_rotate_degree, - self.max_rotate_degree) - rotation_matrix = self._get_rotation_matrix(rotation_degree) - - # Scaling - scaling_ratio = random.uniform(self.scaling_ratio_range[0], - self.scaling_ratio_range[1]) - scaling_matrix = self._get_scaling_matrix(scaling_ratio) - - # Shear - x_degree = random.uniform(-self.max_shear_degree, - self.max_shear_degree) - y_degree = random.uniform(-self.max_shear_degree, - self.max_shear_degree) - shear_matrix = self._get_shear_matrix(x_degree, y_degree) - - # Translation - trans_x = random.uniform(-self.max_translate_ratio, - self.max_translate_ratio) * width - trans_y = random.uniform(-self.max_translate_ratio, - self.max_translate_ratio) * height - translate_matrix = self._get_translation_matrix(trans_x, trans_y) - - warp_matrix = ( - translate_matrix @ shear_matrix @ rotation_matrix @ scaling_matrix) - - img = cv2.warpPerspective( - img, - warp_matrix, - dsize=(width, height), - borderValue=self.border_val) - results['img'] = img - results['img_shape'] = img.shape - - for key in results.get('bbox_fields', []): - bboxes = results[key] - num_bboxes = len(bboxes) - if num_bboxes: - # homogeneous coordinates - xs = bboxes[:, [0, 0, 2, 2]].reshape(num_bboxes * 4) - ys = bboxes[:, [1, 3, 3, 1]].reshape(num_bboxes * 4) - ones = np.ones_like(xs) - points = np.vstack([xs, ys, ones]) - - warp_points = warp_matrix @ points - warp_points = warp_points[:2] / warp_points[2] - xs = warp_points[0].reshape(num_bboxes, 4) - ys = warp_points[1].reshape(num_bboxes, 4) - - warp_bboxes = np.vstack( - (xs.min(1), ys.min(1), xs.max(1), ys.max(1))).T - - if self.bbox_clip_border: - warp_bboxes[:, [0, 2]] = \ - warp_bboxes[:, [0, 2]].clip(0, width) - warp_bboxes[:, [1, 3]] = \ - warp_bboxes[:, [1, 3]].clip(0, height) - - # remove outside bbox - valid_index = find_inside_bboxes(warp_bboxes, height, width) - if not self.skip_filter: - # filter bboxes - filter_index = self.filter_gt_bboxes( - bboxes * scaling_ratio, warp_bboxes) - valid_index = valid_index & filter_index - - results[key] = warp_bboxes[valid_index] - if key in ['gt_bboxes']: - if 
'gt_labels' in results: - results['gt_labels'] = results['gt_labels'][ - valid_index] - - if 'gt_masks' in results: - raise NotImplementedError( - 'RandomAffine only supports bbox.') - return results - - def filter_gt_bboxes(self, origin_bboxes, wrapped_bboxes): - origin_w = origin_bboxes[:, 2] - origin_bboxes[:, 0] - origin_h = origin_bboxes[:, 3] - origin_bboxes[:, 1] - wrapped_w = wrapped_bboxes[:, 2] - wrapped_bboxes[:, 0] - wrapped_h = wrapped_bboxes[:, 3] - wrapped_bboxes[:, 1] - aspect_ratio = np.maximum(wrapped_w / (wrapped_h + 1e-16), - wrapped_h / (wrapped_w + 1e-16)) - - wh_valid_idx = (wrapped_w > self.min_bbox_size) & \ - (wrapped_h > self.min_bbox_size) - area_valid_idx = wrapped_w * wrapped_h / (origin_w * origin_h + - 1e-16) > self.min_area_ratio - aspect_ratio_valid_idx = aspect_ratio < self.max_aspect_ratio - return wh_valid_idx & area_valid_idx & aspect_ratio_valid_idx - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(max_rotate_degree={self.max_rotate_degree}, ' - repr_str += f'max_translate_ratio={self.max_translate_ratio}, ' - repr_str += f'scaling_ratio={self.scaling_ratio_range}, ' - repr_str += f'max_shear_degree={self.max_shear_degree}, ' - repr_str += f'border={self.border}, ' - repr_str += f'border_val={self.border_val}, ' - repr_str += f'min_bbox_size={self.min_bbox_size}, ' - repr_str += f'min_area_ratio={self.min_area_ratio}, ' - repr_str += f'max_aspect_ratio={self.max_aspect_ratio}, ' - repr_str += f'skip_filter={self.skip_filter})' - return repr_str - - @staticmethod - def _get_rotation_matrix(rotate_degrees): - radian = math.radians(rotate_degrees) - rotation_matrix = np.array( - [[np.cos(radian), -np.sin(radian), 0.], - [np.sin(radian), np.cos(radian), 0.], [0., 0., 1.]], - dtype=np.float32) - return rotation_matrix - - @staticmethod - def _get_scaling_matrix(scale_ratio): - scaling_matrix = np.array( - [[scale_ratio, 0., 0.], [0., scale_ratio, 0.], [0., 0., 1.]], - dtype=np.float32) - return scaling_matrix - - @staticmethod - def _get_share_matrix(scale_ratio): - scaling_matrix = np.array( - [[scale_ratio, 0., 0.], [0., scale_ratio, 0.], [0., 0., 1.]], - dtype=np.float32) - return scaling_matrix - - @staticmethod - def _get_shear_matrix(x_shear_degrees, y_shear_degrees): - x_radian = math.radians(x_shear_degrees) - y_radian = math.radians(y_shear_degrees) - shear_matrix = np.array([[1, np.tan(x_radian), 0.], - [np.tan(y_radian), 1, 0.], [0., 0., 1.]], - dtype=np.float32) - return shear_matrix - - @staticmethod - def _get_translation_matrix(x, y): - translation_matrix = np.array([[1, 0., x], [0., 1, y], [0., 0., 1.]], - dtype=np.float32) - return translation_matrix - - -@PIPELINES.register_module() -class YOLOXHSVRandomAug: - """Apply HSV augmentation to image sequentially. It is referenced from - https://github.com/Megvii- - BaseDetection/YOLOX/blob/main/yolox/data/data_augment.py#L21. - - Args: - hue_delta (int): delta of hue. Default: 5. - saturation_delta (int): delta of saturation. Default: 30. - value_delta (int): delat of value. Default: 30. 
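A minimal sketch of how the RandomAffine hunk above composes its warp matrix (translate @ shear @ rotation @ scaling) and applies it to a homogeneous point. The matrix values and the point are illustrative assumptions; only the composition order mirrors the removed code.

```python
import math
import numpy as np

def rotation_matrix(deg):
    r = math.radians(deg)
    return np.array([[np.cos(r), -np.sin(r), 0.],
                     [np.sin(r),  np.cos(r), 0.],
                     [0., 0., 1.]], dtype=np.float32)

scaling   = np.diag([1.2, 1.2, 1.0]).astype(np.float32)
shear     = np.array([[1., math.tan(math.radians(2.)), 0.],
                      [math.tan(math.radians(1.)), 1., 0.],
                      [0., 0., 1.]], dtype=np.float32)
translate = np.array([[1., 0., 5.], [0., 1., -3.], [0., 0., 1.]],
                     dtype=np.float32)

warp_matrix = translate @ shear @ rotation_matrix(10.) @ scaling
corner = warp_matrix @ np.array([100., 50., 1.], dtype=np.float32)
print(corner[:2] / corner[2])  # warped (x, y) of a bbox corner
```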
- """ - - def __init__(self, hue_delta=5, saturation_delta=30, value_delta=30): - self.hue_delta = hue_delta - self.saturation_delta = saturation_delta - self.value_delta = value_delta - - def __call__(self, results): - img = results['img'] - hsv_gains = np.random.uniform(-1, 1, 3) * [ - self.hue_delta, self.saturation_delta, self.value_delta - ] - # random selection of h, s, v - hsv_gains *= np.random.randint(0, 2, 3) - # prevent overflow - hsv_gains = hsv_gains.astype(np.int16) - img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.int16) - - img_hsv[..., 0] = (img_hsv[..., 0] + hsv_gains[0]) % 180 - img_hsv[..., 1] = np.clip(img_hsv[..., 1] + hsv_gains[1], 0, 255) - img_hsv[..., 2] = np.clip(img_hsv[..., 2] + hsv_gains[2], 0, 255) - cv2.cvtColor(img_hsv.astype(img.dtype), cv2.COLOR_HSV2BGR, dst=img) - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(hue_delta={self.hue_delta}, ' - repr_str += f'saturation_delta={self.saturation_delta}, ' - repr_str += f'value_delta={self.value_delta})' - return repr_str - - -@PIPELINES.register_module() -class CopyPaste: - """Simple Copy-Paste is a Strong Data Augmentation Method for Instance - Segmentation The simple copy-paste transform steps are as follows: - - 1. The destination image is already resized with aspect ratio kept, - cropped and padded. - 2. Randomly select a source image, which is also already resized - with aspect ratio kept, cropped and padded in a similar way - as the destination image. - 3. Randomly select some objects from the source image. - 4. Paste these source objects to the destination image directly, - due to the source and destination image have the same size. - 5. Update object masks of the destination image, for some origin objects - may be occluded. - 6. Generate bboxes from the updated destination masks and - filter some objects which are totally occluded, and adjust bboxes - which are partly occluded. - 7. Append selected source bboxes, masks, and labels. - - Args: - max_num_pasted (int): The maximum number of pasted objects. - Default: 100. - bbox_occluded_thr (int): The threshold of occluded bbox. - Default: 10. - mask_occluded_thr (int): The threshold of occluded mask. - Default: 300. - selected (bool): Whether select objects or not. If select is False, - all objects of the source image will be pasted to the - destination image. - Default: True. - """ - - def __init__( - self, - max_num_pasted=100, - bbox_occluded_thr=10, - mask_occluded_thr=300, - selected=True, - ): - self.max_num_pasted = max_num_pasted - self.bbox_occluded_thr = bbox_occluded_thr - self.mask_occluded_thr = mask_occluded_thr - self.selected = selected - - def get_indexes(self, dataset): - """Call function to collect indexes.s. - - Args: - dataset (:obj:`MultiImageMixDataset`): The dataset. - Returns: - list: Indexes. - """ - return random.randint(0, len(dataset)) - - def __call__(self, results): - """Call function to make a copy-paste of image. - - Args: - results (dict): Result dict. - Returns: - dict: Result dict with copy-paste transformed. 
- """ - - assert 'mix_results' in results - num_images = len(results['mix_results']) - assert num_images == 1, \ - f'CopyPaste only supports processing 2 images, got {num_images}' - if self.selected: - selected_results = self._select_object(results['mix_results'][0]) - else: - selected_results = results['mix_results'][0] - return self._copy_paste(results, selected_results) - - def _select_object(self, results): - """Select some objects from the source results.""" - bboxes = results['gt_bboxes'] - labels = results['gt_labels'] - masks = results['gt_masks'] - max_num_pasted = min(bboxes.shape[0] + 1, self.max_num_pasted) - num_pasted = np.random.randint(0, max_num_pasted) - selected_inds = np.random.choice( - bboxes.shape[0], size=num_pasted, replace=False) - - selected_bboxes = bboxes[selected_inds] - selected_labels = labels[selected_inds] - selected_masks = masks[selected_inds] - - results['gt_bboxes'] = selected_bboxes - results['gt_labels'] = selected_labels - results['gt_masks'] = selected_masks - return results - - def _copy_paste(self, dst_results, src_results): - """CopyPaste transform function. - - Args: - dst_results (dict): Result dict of the destination image. - src_results (dict): Result dict of the source image. - Returns: - dict: Updated result dict. - """ - dst_img = dst_results['img'] - dst_bboxes = dst_results['gt_bboxes'] - dst_labels = dst_results['gt_labels'] - dst_masks = dst_results['gt_masks'] - - src_img = src_results['img'] - src_bboxes = src_results['gt_bboxes'] - src_labels = src_results['gt_labels'] - src_masks = src_results['gt_masks'] - - if len(src_bboxes) == 0: - return dst_results - - # update masks and generate bboxes from updated masks - composed_mask = np.where(np.any(src_masks.masks, axis=0), 1, 0) - updated_dst_masks = self.get_updated_masks(dst_masks, composed_mask) - updated_dst_bboxes = updated_dst_masks.get_bboxes() - assert len(updated_dst_bboxes) == len(updated_dst_masks) - - # filter totally occluded objects - bboxes_inds = np.all( - np.abs( - (updated_dst_bboxes - dst_bboxes)) <= self.bbox_occluded_thr, - axis=-1) - masks_inds = updated_dst_masks.masks.sum( - axis=(1, 2)) > self.mask_occluded_thr - valid_inds = bboxes_inds | masks_inds - - # Paste source objects to destination image directly - img = dst_img * (1 - composed_mask[..., np.newaxis] - ) + src_img * composed_mask[..., np.newaxis] - bboxes = np.concatenate([updated_dst_bboxes[valid_inds], src_bboxes]) - labels = np.concatenate([dst_labels[valid_inds], src_labels]) - masks = np.concatenate( - [updated_dst_masks.masks[valid_inds], src_masks.masks]) - - dst_results['img'] = img - dst_results['gt_bboxes'] = bboxes - dst_results['gt_labels'] = labels - dst_results['gt_masks'] = BitmapMasks(masks, masks.shape[1], - masks.shape[2]) - - return dst_results - - def get_updated_masks(self, masks, composed_mask): - assert masks.masks.shape[-2:] == composed_mask.shape[-2:], \ - 'Cannot compare two arrays of different size' - masks.masks = np.where(composed_mask, 0, masks.masks) - return masks - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'max_num_pasted={self.max_num_pasted}, ' - repr_str += f'bbox_occluded_thr={self.bbox_occluded_thr}, ' - repr_str += f'mask_occluded_thr={self.mask_occluded_thr}, ' - repr_str += f'selected={self.selected}, ' - return repr_str diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/samplers/__init__.py b/cv/detection/yolof/pytorch/mmdet/datasets/samplers/__init__.py deleted file mode 100755 index a4c7ea13..00000000 --- 
a/cv/detection/yolof/pytorch/mmdet/datasets/samplers/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .class_aware_sampler import ClassAwareSampler -from .distributed_sampler import DistributedSampler -from .group_sampler import DistributedGroupSampler, GroupSampler -from .infinite_sampler import InfiniteBatchSampler, InfiniteGroupBatchSampler - -__all__ = [ - 'DistributedSampler', 'DistributedGroupSampler', 'GroupSampler', - 'InfiniteGroupBatchSampler', 'InfiniteBatchSampler', 'ClassAwareSampler' -] diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/samplers/class_aware_sampler.py b/cv/detection/yolof/pytorch/mmdet/datasets/samplers/class_aware_sampler.py deleted file mode 100755 index c52708eb..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/samplers/class_aware_sampler.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -from mmcv.runner import get_dist_info -from torch.utils.data import Sampler - -from mmdet.core.utils import sync_random_seed - - -class ClassAwareSampler(Sampler): - r"""Sampler that restricts data loading to the label of the dataset. - - A class-aware sampling strategy to effectively tackle the - non-uniform class distribution. The length of the training data is - consistent with source data. Simple improvements based on `Relay - Backpropagation for Effective Learning of Deep Convolutional - Neural Networks `_ - - The implementation logic is referred to - https://github.com/Sense-X/TSD/blob/master/mmdet/datasets/samplers/distributed_classaware_sampler.py - - Args: - dataset: Dataset used for sampling. - samples_per_gpu (int): When model is :obj:`DistributedDataParallel`, - it is the number of training samples on each GPU. - When model is :obj:`DataParallel`, it is - `num_gpus * samples_per_gpu`. - Default : 1. - num_replicas (optional): Number of processes participating in - distributed training. - rank (optional): Rank of the current process within num_replicas. - seed (int, optional): random seed used to shuffle the sampler if - ``shuffle=True``. This number should be identical across all - processes in the distributed group. Default: 0. - num_sample_class (int): The number of samples taken from each - per-label list. Default: 1 - """ - - def __init__(self, - dataset, - samples_per_gpu=1, - num_replicas=None, - rank=None, - seed=0, - num_sample_class=1): - _rank, _num_replicas = get_dist_info() - if num_replicas is None: - num_replicas = _num_replicas - if rank is None: - rank = _rank - - self.dataset = dataset - self.num_replicas = num_replicas - self.samples_per_gpu = samples_per_gpu - self.rank = rank - self.epoch = 0 - # Must be the same across all workers. 
If None, will use a - # random seed shared among workers - # (require synchronization among all workers) - self.seed = sync_random_seed(seed) - - # The number of samples taken from each per-label list - assert num_sample_class > 0 and isinstance(num_sample_class, int) - self.num_sample_class = num_sample_class - # Get per-label image list from dataset - assert hasattr(dataset, 'get_cat2imgs'), \ - 'dataset must have `get_cat2imgs` function' - self.cat_dict = dataset.get_cat2imgs() - - self.num_samples = int( - math.ceil( - len(self.dataset) * 1.0 / self.num_replicas / - self.samples_per_gpu)) * self.samples_per_gpu - self.total_size = self.num_samples * self.num_replicas - - # get number of images containing each category - self.num_cat_imgs = [len(x) for x in self.cat_dict.values()] - # filter labels without images - self.valid_cat_inds = [ - i for i, length in enumerate(self.num_cat_imgs) if length != 0 - ] - self.num_classes = len(self.valid_cat_inds) - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch + self.seed) - - # initialize label list - label_iter_list = RandomCycleIter(self.valid_cat_inds, generator=g) - # initialize each per-label image list - data_iter_dict = dict() - for i in self.valid_cat_inds: - data_iter_dict[i] = RandomCycleIter(self.cat_dict[i], generator=g) - - def gen_cat_img_inds(cls_list, data_dict, num_sample_cls): - """Traverse the categories and extract `num_sample_cls` image - indexes of the corresponding categories one by one.""" - id_indices = [] - for _ in range(len(cls_list)): - cls_idx = next(cls_list) - for _ in range(num_sample_cls): - id = next(data_dict[cls_idx]) - id_indices.append(id) - return id_indices - - # deterministically shuffle based on epoch - num_bins = int( - math.ceil(self.total_size * 1.0 / self.num_classes / - self.num_sample_class)) - indices = [] - for i in range(num_bins): - indices += gen_cat_img_inds(label_iter_list, data_iter_dict, - self.num_sample_class) - - # fix extra samples to make it evenly divisible - if len(indices) >= self.total_size: - indices = indices[:self.total_size] - else: - indices += indices[:(self.total_size - len(indices))] - assert len(indices) == self.total_size - - # subsample - offset = self.num_samples * self.rank - indices = indices[offset:offset + self.num_samples] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch - - -class RandomCycleIter: - """Shuffle the list and do it again after the list have traversed. - - The implementation logic is referred to - https://github.com/wutong16/DistributionBalancedLoss/blob/master/mllt/datasets/loader/sampler.py - - Example: - >>> label_list = [0, 1, 2, 4, 5] - >>> g = torch.Generator() - >>> g.manual_seed(0) - >>> label_iter_list = RandomCycleIter(label_list, generator=g) - >>> index = next(label_iter_list) - Args: - data (list or ndarray): The data that needs to be shuffled. - generator: An torch.Generator object, which is used in setting the seed - for generating random numbers. 
- """ # noqa: W605 - - def __init__(self, data, generator=None): - self.data = data - self.length = len(data) - self.index = torch.randperm(self.length, generator=generator).numpy() - self.i = 0 - self.generator = generator - - def __iter__(self): - return self - - def __len__(self): - return len(self.data) - - def __next__(self): - if self.i == self.length: - self.index = torch.randperm( - self.length, generator=self.generator).numpy() - self.i = 0 - idx = self.data[self.index[self.i]] - self.i += 1 - return idx diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/samplers/distributed_sampler.py b/cv/detection/yolof/pytorch/mmdet/datasets/samplers/distributed_sampler.py deleted file mode 100755 index 1bc8b7c3..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/samplers/distributed_sampler.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -from torch.utils.data import DistributedSampler as _DistributedSampler - -from mmdet.core.utils import sync_random_seed -from mmdet.utils import get_device - - -class DistributedSampler(_DistributedSampler): - - def __init__(self, - dataset, - num_replicas=None, - rank=None, - shuffle=True, - seed=0): - super().__init__( - dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. - device = get_device() - self.seed = sync_random_seed(seed, device) - - def __iter__(self): - # deterministically shuffle based on epoch - if self.shuffle: - g = torch.Generator() - # When :attr:`shuffle=True`, this ensures all replicas - # use a different random ordering for each epoch. - # Otherwise, the next iteration of this sampler will - # yield the same ordering. - g.manual_seed(self.epoch + self.seed) - indices = torch.randperm(len(self.dataset), generator=g).tolist() - else: - indices = torch.arange(len(self.dataset)).tolist() - - # add extra samples to make it evenly divisible - # in case that indices is shorter than half of total_size - indices = (indices * - math.ceil(self.total_size / len(indices)))[:self.total_size] - assert len(indices) == self.total_size - - # subsample - indices = indices[self.rank:self.total_size:self.num_replicas] - assert len(indices) == self.num_samples - - return iter(indices) diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/samplers/group_sampler.py b/cv/detection/yolof/pytorch/mmdet/datasets/samplers/group_sampler.py deleted file mode 100755 index 783d2b21..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/samplers/group_sampler.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math - -import numpy as np -import torch -from mmcv.runner import get_dist_info -from torch.utils.data import Sampler - - -class GroupSampler(Sampler): - - def __init__(self, dataset, samples_per_gpu=1): - assert hasattr(dataset, 'flag') - self.dataset = dataset - self.samples_per_gpu = samples_per_gpu - self.flag = dataset.flag.astype(np.int64) - self.group_sizes = np.bincount(self.flag) - self.num_samples = 0 - for i, size in enumerate(self.group_sizes): - self.num_samples += int(np.ceil( - size / self.samples_per_gpu)) * self.samples_per_gpu - - def __iter__(self): - indices = [] - for i, size in enumerate(self.group_sizes): - if size == 0: - continue - indice = np.where(self.flag == i)[0] - assert len(indice) == size - np.random.shuffle(indice) - num_extra = int(np.ceil(size / self.samples_per_gpu) - ) * self.samples_per_gpu - len(indice) - indice = np.concatenate( - [indice, np.random.choice(indice, num_extra)]) - indices.append(indice) - indices = np.concatenate(indices) - indices = [ - indices[i * self.samples_per_gpu:(i + 1) * self.samples_per_gpu] - for i in np.random.permutation( - range(len(indices) // self.samples_per_gpu)) - ] - indices = np.concatenate(indices) - indices = indices.astype(np.int64).tolist() - assert len(indices) == self.num_samples - return iter(indices) - - def __len__(self): - return self.num_samples - - -class DistributedGroupSampler(Sampler): - """Sampler that restricts data loading to a subset of the dataset. - - It is especially useful in conjunction with - :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each - process can pass a DistributedSampler instance as a DataLoader sampler, - and load a subset of the original dataset that is exclusive to it. - - .. note:: - Dataset is assumed to be of constant size. - - Arguments: - dataset: Dataset used for sampling. - num_replicas (optional): Number of processes participating in - distributed training. - rank (optional): Rank of the current process within num_replicas. - seed (int, optional): random seed used to shuffle the sampler if - ``shuffle=True``. This number should be identical across all - processes in the distributed group. Default: 0. - """ - - def __init__(self, - dataset, - samples_per_gpu=1, - num_replicas=None, - rank=None, - seed=0): - _rank, _num_replicas = get_dist_info() - if num_replicas is None: - num_replicas = _num_replicas - if rank is None: - rank = _rank - self.dataset = dataset - self.samples_per_gpu = samples_per_gpu - self.num_replicas = num_replicas - self.rank = rank - self.epoch = 0 - self.seed = seed if seed is not None else 0 - - assert hasattr(self.dataset, 'flag') - self.flag = self.dataset.flag - self.group_sizes = np.bincount(self.flag) - - self.num_samples = 0 - for i, j in enumerate(self.group_sizes): - self.num_samples += int( - math.ceil(self.group_sizes[i] * 1.0 / self.samples_per_gpu / - self.num_replicas)) * self.samples_per_gpu - self.total_size = self.num_samples * self.num_replicas - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch + self.seed) - - indices = [] - for i, size in enumerate(self.group_sizes): - if size > 0: - indice = np.where(self.flag == i)[0] - assert len(indice) == size - # add .numpy() to avoid bug when selecting indice in parrots. - # TODO: check whether torch.randperm() can be replaced by - # numpy.random.permutation(). 
- indice = indice[list( - torch.randperm(int(size), generator=g).numpy())].tolist() - extra = int( - math.ceil( - size * 1.0 / self.samples_per_gpu / self.num_replicas) - ) * self.samples_per_gpu * self.num_replicas - len(indice) - # pad indice - tmp = indice.copy() - for _ in range(extra // size): - indice.extend(tmp) - indice.extend(tmp[:extra % size]) - indices.extend(indice) - - assert len(indices) == self.total_size - - indices = [ - indices[j] for i in list( - torch.randperm( - len(indices) // self.samples_per_gpu, generator=g)) - for j in range(i * self.samples_per_gpu, (i + 1) * - self.samples_per_gpu) - ] - - # subsample - offset = self.num_samples * self.rank - indices = indices[offset:offset + self.num_samples] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/samplers/infinite_sampler.py b/cv/detection/yolof/pytorch/mmdet/datasets/samplers/infinite_sampler.py deleted file mode 100755 index d42487e6..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/samplers/infinite_sampler.py +++ /dev/null @@ -1,186 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import itertools - -import numpy as np -import torch -from mmcv.runner import get_dist_info -from torch.utils.data.sampler import Sampler - -from mmdet.core.utils import sync_random_seed - - -class InfiniteGroupBatchSampler(Sampler): - """Similar to `BatchSampler` warping a `GroupSampler. It is designed for - iteration-based runners like `IterBasedRunner` and yields a mini-batch - indices each time, all indices in a batch should be in the same group. - - The implementation logic is referred to - https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/samplers/grouped_batch_sampler.py - - Args: - dataset (object): The dataset. - batch_size (int): When model is :obj:`DistributedDataParallel`, - it is the number of training samples on each GPU. - When model is :obj:`DataParallel`, it is - `num_gpus * samples_per_gpu`. - Default : 1. - world_size (int, optional): Number of processes participating in - distributed training. Default: None. - rank (int, optional): Rank of current process. Default: None. - seed (int): Random seed. Default: 0. - shuffle (bool): Whether shuffle the indices of a dummy `epoch`, it - should be noted that `shuffle` can not guarantee that you can - generate sequential indices because it need to ensure - that all indices in a batch is in a group. Default: True. - """ # noqa: W605 - - def __init__(self, - dataset, - batch_size=1, - world_size=None, - rank=None, - seed=0, - shuffle=True): - _rank, _world_size = get_dist_info() - if world_size is None: - world_size = _world_size - if rank is None: - rank = _rank - self.rank = rank - self.world_size = world_size - self.dataset = dataset - self.batch_size = batch_size - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. 
- self.seed = sync_random_seed(seed) - self.shuffle = shuffle - - assert hasattr(self.dataset, 'flag') - self.flag = self.dataset.flag - self.group_sizes = np.bincount(self.flag) - # buffer used to save indices of each group - self.buffer_per_group = {k: [] for k in range(len(self.group_sizes))} - - self.size = len(dataset) - self.indices = self._indices_of_rank() - - def _infinite_indices(self): - """Infinitely yield a sequence of indices.""" - g = torch.Generator() - g.manual_seed(self.seed) - while True: - if self.shuffle: - yield from torch.randperm(self.size, generator=g).tolist() - - else: - yield from torch.arange(self.size).tolist() - - def _indices_of_rank(self): - """Slice the infinite indices by rank.""" - yield from itertools.islice(self._infinite_indices(), self.rank, None, - self.world_size) - - def __iter__(self): - # once batch size is reached, yield the indices - for idx in self.indices: - flag = self.flag[idx] - group_buffer = self.buffer_per_group[flag] - group_buffer.append(idx) - if len(group_buffer) == self.batch_size: - yield group_buffer[:] - del group_buffer[:] - - def __len__(self): - """Length of base dataset.""" - return self.size - - def set_epoch(self, epoch): - """Not supported in `IterationBased` runner.""" - raise NotImplementedError - - -class InfiniteBatchSampler(Sampler): - """Similar to `BatchSampler` warping a `DistributedSampler. It is designed - iteration-based runners like `IterBasedRunner` and yields a mini-batch - indices each time. - - The implementation logic is referred to - https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/samplers/grouped_batch_sampler.py - - Args: - dataset (object): The dataset. - batch_size (int): When model is :obj:`DistributedDataParallel`, - it is the number of training samples on each GPU, - When model is :obj:`DataParallel`, it is - `num_gpus * samples_per_gpu`. - Default : 1. - world_size (int, optional): Number of processes participating in - distributed training. Default: None. - rank (int, optional): Rank of current process. Default: None. - seed (int): Random seed. Default: 0. - shuffle (bool): Whether shuffle the dataset or not. Default: True. - """ # noqa: W605 - - def __init__(self, - dataset, - batch_size=1, - world_size=None, - rank=None, - seed=0, - shuffle=True): - _rank, _world_size = get_dist_info() - if world_size is None: - world_size = _world_size - if rank is None: - rank = _rank - self.rank = rank - self.world_size = world_size - self.dataset = dataset - self.batch_size = batch_size - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. 
- self.seed = sync_random_seed(seed) - self.shuffle = shuffle - self.size = len(dataset) - self.indices = self._indices_of_rank() - - def _infinite_indices(self): - """Infinitely yield a sequence of indices.""" - g = torch.Generator() - g.manual_seed(self.seed) - while True: - if self.shuffle: - yield from torch.randperm(self.size, generator=g).tolist() - - else: - yield from torch.arange(self.size).tolist() - - def _indices_of_rank(self): - """Slice the infinite indices by rank.""" - yield from itertools.islice(self._infinite_indices(), self.rank, None, - self.world_size) - - def __iter__(self): - # once batch size is reached, yield the indices - batch_buffer = [] - for idx in self.indices: - batch_buffer.append(idx) - if len(batch_buffer) == self.batch_size: - yield batch_buffer - batch_buffer = [] - - def __len__(self): - """Length of base dataset.""" - return self.size - - def set_epoch(self, epoch): - """Not supported in `IterationBased` runner.""" - raise NotImplementedError diff --git a/cv/detection/yolof/pytorch/mmdet/datasets/utils.py b/cv/detection/yolof/pytorch/mmdet/datasets/utils.py deleted file mode 100755 index 57e31fda..00000000 --- a/cv/detection/yolof/pytorch/mmdet/datasets/utils.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -# from mmcv.cnn import VGG -from mmcv.runner.hooks import HOOKS, Hook - -from mmdet.datasets.builder import PIPELINES -from mmdet.datasets.pipelines import (LoadAnnotations, LoadImageFromFile, - LoadPanopticAnnotations) -# from mmdet.models.dense_heads import GARPNHead, RPNHead -# from mmdet.models.roi_heads.mask_heads import FusedSemanticHead - - -def replace_ImageToTensor(pipelines): - """Replace the ImageToTensor transform in a data pipeline to - DefaultFormatBundle, which is normally useful in batch inference. - - Args: - pipelines (list[dict]): Data pipeline configs. - - Returns: - list: The new pipeline list with all ImageToTensor replaced by - DefaultFormatBundle. - - Examples: - >>> pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict( - ... type='MultiScaleFlipAug', - ... img_scale=(1333, 800), - ... flip=False, - ... transforms=[ - ... dict(type='Resize', keep_ratio=True), - ... dict(type='RandomFlip'), - ... dict(type='Normalize', mean=[0, 0, 0], std=[1, 1, 1]), - ... dict(type='Pad', size_divisor=32), - ... dict(type='ImageToTensor', keys=['img']), - ... dict(type='Collect', keys=['img']), - ... ]) - ... ] - >>> expected_pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict( - ... type='MultiScaleFlipAug', - ... img_scale=(1333, 800), - ... flip=False, - ... transforms=[ - ... dict(type='Resize', keep_ratio=True), - ... dict(type='RandomFlip'), - ... dict(type='Normalize', mean=[0, 0, 0], std=[1, 1, 1]), - ... dict(type='Pad', size_divisor=32), - ... dict(type='DefaultFormatBundle'), - ... dict(type='Collect', keys=['img']), - ... ]) - ... ] - >>> assert expected_pipelines == replace_ImageToTensor(pipelines) - """ - pipelines = copy.deepcopy(pipelines) - for i, pipeline in enumerate(pipelines): - if pipeline['type'] == 'MultiScaleFlipAug': - assert 'transforms' in pipeline - pipeline['transforms'] = replace_ImageToTensor( - pipeline['transforms']) - elif pipeline['type'] == 'ImageToTensor': - warnings.warn( - '"ImageToTensor" pipeline is replaced by ' - '"DefaultFormatBundle" for batch inference. 
It is ' - 'recommended to manually replace it in the test ' - 'data pipeline in your config file.', UserWarning) - pipelines[i] = {'type': 'DefaultFormatBundle'} - return pipelines - - -def get_loading_pipeline(pipeline): - """Only keep loading image and annotations related configuration. - - Args: - pipeline (list[dict]): Data pipeline configs. - - Returns: - list[dict]: The new pipeline list with only keep - loading image and annotations related configuration. - - Examples: - >>> pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations', with_bbox=True), - ... dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - ... dict(type='RandomFlip', flip_ratio=0.5), - ... dict(type='Normalize', **img_norm_cfg), - ... dict(type='Pad', size_divisor=32), - ... dict(type='DefaultFormatBundle'), - ... dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) - ... ] - >>> expected_pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations', with_bbox=True) - ... ] - >>> assert expected_pipelines ==\ - ... get_loading_pipeline(pipelines) - """ - loading_pipeline_cfg = [] - for cfg in pipeline: - obj_cls = PIPELINES.get(cfg['type']) - # TODO:use more elegant way to distinguish loading modules - if obj_cls is not None and obj_cls in (LoadImageFromFile, - LoadAnnotations, - LoadPanopticAnnotations): - loading_pipeline_cfg.append(cfg) - assert len(loading_pipeline_cfg) == 2, \ - 'The data pipeline in your config file must include ' \ - 'loading image and annotations related pipeline.' - return loading_pipeline_cfg - - -@HOOKS.register_module() -class NumClassCheckHook(Hook): - - def _check_head(self, runner): - """Check whether the `num_classes` in head matches the length of - `CLASSES` in `dataset`. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - model = runner.model - dataset = runner.data_loader.dataset - if dataset.CLASSES is None: - runner.logger.warning( - f'Please set `CLASSES` ' - f'in the {dataset.__class__.__name__} and' - f'check if it is consistent with the `num_classes` ' - f'of head') - else: - assert type(dataset.CLASSES) is not str, \ - (f'`CLASSES` in {dataset.__class__.__name__}' - f'should be a tuple of str.' - f'Add comma if number of classes is 1 as ' - f'CLASSES = ({dataset.CLASSES},)') - for name, module in model.named_modules(): - if hasattr(module, 'num_classes') and not isinstance( - # module, (RPNHead, VGG, FusedSemanticHead, GARPNHead)): - module, ()): - assert module.num_classes == len(dataset.CLASSES), \ - (f'The `num_classes` ({module.num_classes}) in ' - f'{module.__class__.__name__} of ' - f'{model.__class__.__name__} does not matches ' - f'the length of `CLASSES` ' - f'{len(dataset.CLASSES)}) in ' - f'{dataset.__class__.__name__}') - - def before_train_epoch(self, runner): - """Check whether the training dataset is compatible with head. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - self._check_head(runner) - - def before_val_epoch(self, runner): - """Check whether the dataset in val epoch is compatible with head. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - self._check_head(runner) diff --git a/cv/detection/yolof/pytorch/mmdet/models/__init__.py b/cv/detection/yolof/pytorch/mmdet/models/__init__.py deleted file mode 100755 index eb114dde..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .backbones import * # noqa: F401,F403 -from .builder import (BACKBONES, DETECTORS, HEADS, LOSSES, NECKS, - ROI_EXTRACTORS, SHARED_HEADS, build_backbone, - build_detector, build_head, build_loss, build_neck, - build_roi_extractor, build_shared_head) -from .dense_heads import * # noqa: F401,F403 -from .detectors import * # noqa: F401,F403 -from .losses import * # noqa: F401,F403 -from .necks import * # noqa: F401,F403 - -__all__ = [ - 'BACKBONES', 'NECKS', 'ROI_EXTRACTORS', 'SHARED_HEADS', 'HEADS', 'LOSSES', - 'DETECTORS', 'build_backbone', 'build_neck', 'build_roi_extractor', - 'build_shared_head', 'build_head', 'build_loss', 'build_detector' -] diff --git a/cv/detection/yolof/pytorch/mmdet/models/backbones/__init__.py b/cv/detection/yolof/pytorch/mmdet/models/backbones/__init__.py deleted file mode 100755 index c6f2540a..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/backbones/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .resnet import ResNet, ResNetV1d - -__all__ = [ - 'ResNet', 'ResNetV1d' -] diff --git a/cv/detection/yolof/pytorch/mmdet/models/backbones/resnet.py b/cv/detection/yolof/pytorch/mmdet/models/backbones/resnet.py deleted file mode 100755 index 1eaaae67..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/backbones/resnet.py +++ /dev/null @@ -1,672 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer, build_plugin_layer -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import ResLayer - - -class BasicBlock(BaseModule): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - super(BasicBlock, self).__init__(init_cfg) - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' 
- - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=False) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Bottleneck(BaseModule): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - """Bottleneck block for ResNet. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - super(Bottleneck, self).__init__(init_cfg) - assert style in ['pytorch', 'caffe'] - assert dcn is None or isinstance(dcn, dict) - assert plugins is None or isinstance(plugins, list) - if plugins is not None: - allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] - assert all(p['position'] in allowed_position for p in plugins) - - self.inplanes = inplanes - self.planes = planes - self.stride = stride - self.dilation = dilation - self.style = style - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dcn = dcn - self.with_dcn = dcn is not None - self.plugins = plugins - self.with_plugins = plugins is not None - - if self.with_plugins: - # collect plugins for conv1/conv2/conv3 - self.after_conv1_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_conv3_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv3' - ] - - if self.style == 'pytorch': - self.conv1_stride = 1 - self.conv2_stride = stride - else: - self.conv1_stride = stride - self.conv2_stride = 1 - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - norm_cfg, planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - conv_cfg, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - dcn, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - conv_cfg, - planes, - planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - - if self.with_plugins: - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - planes * self.expansion, self.after_conv3_plugins) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. 
- """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - out = x - for name in plugin_names: - out = getattr(self, name)(out) - return out - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - @property - def norm3(self): - """nn.Module: normalization layer after the third convolution layer""" - return getattr(self, self.norm3_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNet(BaseModule): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - stem_channels (int | None): Number of stem channels. If not specified, - it will be the same as `base_channels`. Default: None. - base_channels (int): Number of base channels of res layer. Default: 64. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Resnet stages. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - position (str, required): Position inside block to insert - plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'. - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import ResNet - >>> import torch - >>> self = ResNet(depth=18) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 64, 8, 8) - (1, 128, 4, 4) - (1, 256, 2, 2) - (1, 512, 1, 1) - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - in_channels=3, - stem_channels=None, - base_channels=64, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - with_cp=False, - zero_init_residual=True, - pretrained=None, - init_cfg=None): - super(ResNet, self).__init__(init_cfg) - self.zero_init_residual = zero_init_residual - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - - block_init_cfg = None - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - block = self.arch_settings[depth][0] - if self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm3')) - else: - raise TypeError('pretrained must be a str or None') - - self.depth = depth - if stem_channels is None: - stem_channels = base_channels - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.block, stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - self.inplanes = stem_channels - - self._make_stem_layer(in_channels, stem_channels) - - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = strides[i] - dilation = dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - if plugins is not None: - 
stage_plugins = self.make_stage_plugins(plugins, i) - else: - stage_plugins = None - planes = base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=stage_plugins, - init_cfg=block_init_cfg) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = self.block.expansion * base_channels * 2**( - len(self.stage_blocks) - 1) - - def make_stage_plugins(self, plugins, stage_idx): - """Make plugins for ResNet ``stage_idx`` th stage. - - Currently we support to insert ``context_block``, - ``empirical_attention_block``, ``nonlocal_block`` into the backbone - like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of - Bottleneck. - - An example of plugins format could be: - - Examples: - >>> plugins=[ - ... dict(cfg=dict(type='xxx', arg1='xxx'), - ... stages=(False, True, True, True), - ... position='after_conv2'), - ... dict(cfg=dict(type='yyy'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='1'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='2'), - ... stages=(True, True, True, True), - ... position='after_conv3') - ... ] - >>> self = ResNet(depth=18) - >>> stage_plugins = self.make_stage_plugins(plugins, 0) - >>> assert len(stage_plugins) == 3 - - Suppose ``stage_idx=0``, the structure of blocks in the stage would be: - - .. code-block:: none - - conv1-> conv2->conv3->yyy->zzz1->zzz2 - - Suppose 'stage_idx=1', the structure of blocks in the stage would be: - - .. code-block:: none - - conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 - - If stages is missing, the plugin would be applied to all stages. - - Args: - plugins (list[dict]): List of plugins cfg to build. The postfix is - required if multiple same type plugins are inserted. 
- stage_idx (int): Index of stage to build - - Returns: - list[dict]: Plugins for current stage - """ - stage_plugins = [] - for plugin in plugins: - plugin = plugin.copy() - stages = plugin.pop('stages', None) - assert stages is None or len(stages) == self.num_stages - # whether to insert plugin into current stage - if stages is None or stages[stage_idx]: - stage_plugins.append(plugin) - - return stage_plugins - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer(**kwargs) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def _make_stem_layer(self, in_channels, stem_channels): - if self.deep_stem: - self.stem = nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels)[1], - nn.ReLU(inplace=True)) - else: - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, stem_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - if self.deep_stem: - self.stem.eval() - for param in self.stem.parameters(): - param.requires_grad = False - else: - self.norm1.eval() - for m in [self.conv1, self.norm1]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'layer{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(ResNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - -@BACKBONES.register_module() -class ResNetV1d(ResNet): - r"""ResNetV1d variant described in `Bag of Tricks - `_. - - Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in - the input stem with three 3x3 convs. And in the downsampling block, a 2x2 - avg_pool with stride 2 is added before conv, whose stride is changed to 1. 
- """ - - def __init__(self, **kwargs): - super(ResNetV1d, self).__init__( - deep_stem=True, avg_down=True, **kwargs) diff --git a/cv/detection/yolof/pytorch/mmdet/models/builder.py b/cv/detection/yolof/pytorch/mmdet/models/builder.py deleted file mode 100755 index ace6209f..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/builder.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -from mmcv.cnn import MODELS as MMCV_MODELS -from mmcv.utils import Registry - -MODELS = Registry('models', parent=MMCV_MODELS) - -BACKBONES = MODELS -NECKS = MODELS -ROI_EXTRACTORS = MODELS -SHARED_HEADS = MODELS -HEADS = MODELS -LOSSES = MODELS -DETECTORS = MODELS - - -def build_backbone(cfg): - """Build backbone.""" - return BACKBONES.build(cfg) - - -def build_neck(cfg): - """Build neck.""" - return NECKS.build(cfg) - - -def build_roi_extractor(cfg): - """Build roi extractor.""" - return ROI_EXTRACTORS.build(cfg) - - -def build_shared_head(cfg): - """Build shared head.""" - return SHARED_HEADS.build(cfg) - - -def build_head(cfg): - """Build head.""" - return HEADS.build(cfg) - - -def build_loss(cfg): - """Build loss.""" - return LOSSES.build(cfg) - - -def build_detector(cfg, train_cfg=None, test_cfg=None): - """Build detector.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return DETECTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/cv/detection/yolof/pytorch/mmdet/models/dense_heads/__init__.py b/cv/detection/yolof/pytorch/mmdet/models/dense_heads/__init__.py deleted file mode 100755 index 48ab76fa..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/dense_heads/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .yolof_head import YOLOFHead - -__all__ = ['YOLOFHead'] diff --git a/cv/detection/yolof/pytorch/mmdet/models/dense_heads/anchor_head.py b/cv/detection/yolof/pytorch/mmdet/models/dense_heads/anchor_head.py deleted file mode 100755 index d1bfab62..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/dense_heads/anchor_head.py +++ /dev/null @@ -1,542 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, images_to_levels, - multi_apply, unmap) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class AnchorHead(BaseDenseHead, BBoxTestMixin): - """Anchor-based head (RPN, RetinaNet, SSD, etc.). - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. 
Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=(.0, .0, .0, .0), - target_stds=(1.0, 1.0, 1.0, 1.0)), - reg_decoded_bbox=False, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - train_cfg=None, - test_cfg=None, - init_cfg=dict(type='Normal', layer='Conv2d', std=0.01)): - super(AnchorHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - - if self.cls_out_channels <= 0: - raise ValueError(f'num_classes={num_classes} is too small') - self.reg_decoded_bbox = reg_decoded_bbox - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - if hasattr(self.train_cfg, - 'sampler') and self.train_cfg.sampler.type.split( - '.')[-1] != 'PseudoSampler': - self.sampling = True - sampler_cfg = self.train_cfg.sampler - # avoid BC-breaking - if loss_cls['type'] in [ - 'FocalLoss', 'GHMC', 'QualityFocalLoss' - ]: - warnings.warn( - 'DeprecationWarning: Determining whether to sampling' - 'by loss type is deprecated, please delete sampler in' - 'your config when using `FocalLoss`, `GHMC`, ' - '`QualityFocalLoss` or other FocalLoss variant.') - self.sampling = False - sampler_cfg = dict(type='PseudoSampler') - else: - self.sampling = False - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - self.prior_generator = build_prior_generator(anchor_generator) - - # Usually the numbers of anchors for each level are the same - # except SSD detectors. So it is an int in the most dense - # heads but a list of int in SSDHead - self.num_base_priors = self.prior_generator.num_base_priors[0] - self._init_layers() - - @property - def num_anchors(self): - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'for consistency or also use ' - '`num_base_priors` instead') - return self.prior_generator.num_base_priors[0] - - @property - def anchor_generator(self): - warnings.warn('DeprecationWarning: anchor_generator is deprecated, ' - 'please use "prior_generator" instead') - return self.prior_generator - - def _init_layers(self): - """Initialize layers of the head.""" - self.conv_cls = nn.Conv2d(self.in_channels, - self.num_base_priors * self.cls_out_channels, - 1) - self.conv_reg = nn.Conv2d(self.in_channels, self.num_base_priors * 4, - 1) - - def forward_single(self, x): - """Forward feature of a single scale level. 
- - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level \ - the channels number is num_base_priors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale \ - level, the channels number is num_base_priors * 4. - """ - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - return cls_score, bbox_pred - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: A tuple of classification scores and bbox prediction. - - - cls_scores (list[Tensor]): Classification scores for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_base_priors * num_classes. - - bbox_preds (list[Tensor]): Box energies / deltas for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_base_priors * 4. - """ - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get anchors according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): Device for returned tensors - - Returns: - tuple: - anchor_list (list[Tensor]): Anchors of each image. - valid_flag_list (list[Tensor]): Valid flags of each image. - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # anchors for one time - multi_level_anchors = self.prior_generator.grid_priors( - featmap_sizes, device=device) - anchor_list = [multi_level_anchors for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level anchors - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.prior_generator.valid_flags( - featmap_sizes, img_meta['pad_shape'], device) - valid_flag_list.append(multi_level_flags) - - return anchor_list, valid_flag_list - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - img_meta (dict): Meta info of the image. - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. 
- - Returns: - tuple: - labels_list (list[Tensor]): Labels of each level - label_weights_list (list[Tensor]): Label weights of each level - bbox_targets_list (list[Tensor]): BBox targets of each level - bbox_weights_list (list[Tensor]): BBox weights of each level - num_total_pos (int): Number of positive samples in all images - num_total_neg (int): Number of negative samples in all images - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - assign_result = self.assigner.assign( - anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, - fill=self.num_classes) # fill bg label - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds, sampling_result) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - return_sampling_results=False): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. 
- unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each - level. - - bbox_targets_list (list[Tensor]): BBox targets of each level. - - bbox_weights_list (list[Tensor]): BBox weights of each level. - - num_total_pos (int): Number of positive samples in all - images. - - num_total_neg (int): Number of negative samples in all - images. - - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors to a single tensor - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - pos_inds_list, neg_inds_list, sampling_results_list) = results[:7] - rest_results = list(results[7:]) # user-added return values - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - res = (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - if return_sampling_results: - res = res + (sampling_results_list, ) - for i, r in enumerate(rest_results): # user-added return values - rest_results[i] = images_to_levels(r, num_level_anchors) - - return res + tuple(rest_results) - - def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Compute loss of a single scale level. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). 
- label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - bbox_weights (Tensor): BBox regression loss weights of each anchor - with shape (N, num_total_anchors, 4). - num_total_samples (int): If sampling, num total samples equal to - the number of total anchors; Otherwise, it is the number of - positive anchors. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # classification loss - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - # regression loss - bbox_targets = bbox_targets.reshape(-1, 4) - bbox_weights = bbox_weights.reshape(-1, 4) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - anchors = anchors.reshape(-1, 4) - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_bbox = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - return loss_cls, loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), where - 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,), The length of list should always be 1. - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) diff --git a/cv/detection/yolof/pytorch/mmdet/models/dense_heads/base_dense_head.py b/cv/detection/yolof/pytorch/mmdet/models/dense_heads/base_dense_head.py deleted file mode 100755 index 0c7abb7b..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/dense_heads/base_dense_head.py +++ /dev/null @@ -1,526 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from abc import ABCMeta, abstractmethod - -import torch -from mmcv.cnn.utils.weight_init import constant_init -from mmcv.ops import batched_nms -from mmcv.runner import BaseModule, force_fp32 - -from mmdet.core.utils import filter_scores_and_topk, select_single_mlvl - - -class BaseDenseHead(BaseModule, metaclass=ABCMeta): - """Base class for DenseHeads.""" - - def __init__(self, init_cfg=None): - super(BaseDenseHead, self).__init__(init_cfg) - - def init_weights(self): - super(BaseDenseHead, self).init_weights() - # avoid init_cfg overwrite the initialization of `conv_offset` - for m in self.modules(): - # DeformConv2dPack, ModulatedDeformConv2dPack - if hasattr(m, 'conv_offset'): - constant_init(m.conv_offset, 0) - - @abstractmethod - def loss(self, **kwargs): - """Compute losses of the head.""" - pass - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - score_factors=None, - img_metas=None, - cfg=None, - rescale=False, - with_nms=True, - **kwargs): - """Transform network outputs of a batch into bbox results. - - Note: When score_factors is not None, the cls_scores are - usually multiplied by it then obtain the real score used in NMS, - such as CenterNess in FCOS, IoU branch in ATSS. - - Args: - cls_scores (list[Tensor]): Classification scores for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * 4, H, W). - score_factors (list[Tensor], Optional): Score factor for - all scale level, each is a 4D-tensor, has shape - (batch_size, num_priors * 1, H, W). Default None. - img_metas (list[dict], Optional): Image meta info. Default None. - cfg (mmcv.Config, Optional): Test / postprocessing configuration, - if None, test_cfg would be used. Default None. - rescale (bool): If True, return boxes in original image space. - Default False. - with_nms (bool): If True, do nms before return boxes. - Default True. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. The second item is a - (n,) tensor where each item is the predicted class label of - the corresponding box. - """ - assert len(cls_scores) == len(bbox_preds) - - if score_factors is None: - # e.g. Retina, FreeAnchor, Foveabox, etc. - with_score_factors = False - else: - # e.g. FCOS, PAA, ATSS, AutoAssign, etc. 
- with_score_factors = True - assert len(cls_scores) == len(score_factors) - - num_levels = len(cls_scores) - - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=cls_scores[0].dtype, - device=cls_scores[0].device) - - result_list = [] - - for img_id in range(len(img_metas)): - img_meta = img_metas[img_id] - cls_score_list = select_single_mlvl(cls_scores, img_id) - bbox_pred_list = select_single_mlvl(bbox_preds, img_id) - if with_score_factors: - score_factor_list = select_single_mlvl(score_factors, img_id) - else: - score_factor_list = [None for _ in range(num_levels)] - - results = self._get_bboxes_single(cls_score_list, bbox_pred_list, - score_factor_list, mlvl_priors, - img_meta, cfg, rescale, with_nms, - **kwargs) - result_list.append(results) - return result_list - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image, each item has shape - (num_priors * 1, H, W). - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid. In all - anchor-based methods, it has shape (num_priors, 4). In - all anchor-free methods, it has shape (num_priors, 2) - when `with_stride=True`, otherwise it still has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - if score_factor_list[0] is None: - # e.g. Retina, FreeAnchor, etc. - with_score_factors = False - else: - # e.g. FCOS, PAA, ATSS, etc. 
- with_score_factors = True - - cfg = self.test_cfg if cfg is None else cfg - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - if with_score_factors: - mlvl_score_factors = [] - else: - mlvl_score_factors = None - for level_idx, (cls_score, bbox_pred, score_factor, priors) in \ - enumerate(zip(cls_score_list, bbox_pred_list, - score_factor_list, mlvl_priors)): - - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - if with_score_factors: - score_factor = score_factor.permute(1, 2, - 0).reshape(-1).sigmoid() - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - scores = cls_score.softmax(-1)[:, :-1] - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, keep_idxs, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - if with_score_factors: - score_factor = score_factor[keep_idxs] - - bboxes = self.bbox_coder.decode( - priors, bbox_pred, max_shape=img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - if with_score_factors: - mlvl_score_factors.append(score_factor) - - return self._bbox_post_process(mlvl_scores, mlvl_labels, mlvl_bboxes, - img_meta['scale_factor'], cfg, rescale, - with_nms, mlvl_score_factors, **kwargs) - - def _bbox_post_process(self, - mlvl_scores, - mlvl_labels, - mlvl_bboxes, - scale_factor, - cfg, - rescale=False, - with_nms=True, - mlvl_score_factors=None, - **kwargs): - """bbox post-processing method. - - The boxes would be rescaled to the original image scale and do - the nms operation. Usually `with_nms` is False is used for aug test. - - Args: - mlvl_scores (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_bboxes, ). - mlvl_labels (list[Tensor]): Box class labels from all scale - levels of a single image, each item has shape - (num_bboxes, ). - mlvl_bboxes (list[Tensor]): Decoded bboxes from all scale - levels of a single image, each item has shape (num_bboxes, 4). - scale_factor (ndarray, optional): Scale factor of the image arange - as (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - mlvl_score_factors (list[Tensor], optional): Score factor from - all scale levels of a single image, each item has shape - (num_bboxes, ). Default: None. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. 
If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - assert len(mlvl_scores) == len(mlvl_bboxes) == len(mlvl_labels) - - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_labels = torch.cat(mlvl_labels) - - if mlvl_score_factors is not None: - # TODO: Add sqrt operation in order to be consistent with - # the paper. - mlvl_score_factors = torch.cat(mlvl_score_factors) - mlvl_scores = mlvl_scores * mlvl_score_factors - - if with_nms: - if mlvl_bboxes.numel() == 0: - det_bboxes = torch.cat([mlvl_bboxes, mlvl_scores[:, None]], -1) - return det_bboxes, mlvl_labels - - det_bboxes, keep_idxs = batched_nms(mlvl_bboxes, mlvl_scores, - mlvl_labels, cfg.nms) - det_bboxes = det_bboxes[:cfg.max_per_img] - det_labels = mlvl_labels[keep_idxs][:cfg.max_per_img] - return det_bboxes, det_labels - else: - return mlvl_bboxes, mlvl_scores, mlvl_labels - - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple: - losses: (dict[str, Tensor]): A dictionary of loss components. - proposal_list (list[Tensor]): Proposals of each image. - """ - outs = self(x) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes( - *outs, img_metas=img_metas, cfg=proposal_cfg) - return losses, proposal_list - - def simple_test(self, feats, img_metas, rescale=False): - """Test function without test-time augmentation. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n, ). - """ - return self.simple_test_bboxes(feats, img_metas, rescale=rescale) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def onnx_export(self, - cls_scores, - bbox_preds, - score_factors=None, - img_metas=None, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - with shape (N, num_points * num_classes, H, W). 
- bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W). - score_factors (list[Tensor]): score_factors for each s - cale level with shape (N, num_points * 1, H, W). - Default: None. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. Default: None. - with_nms (bool): Whether apply nms to the bboxes. Default: True. - - Returns: - tuple[Tensor, Tensor] | list[tuple]: When `with_nms` is True, - it is tuple[Tensor, Tensor], first tensor bboxes with shape - [N, num_det, 5], 5 arrange as (x1, y1, x2, y2, score) - and second element is class labels of shape [N, num_det]. - When `with_nms` is False, first tensor is bboxes with - shape [N, num_det, 4], second tensor is raw score has - shape [N, num_det, num_classes]. - """ - assert len(cls_scores) == len(bbox_preds) - - num_levels = len(cls_scores) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - - mlvl_cls_scores = [cls_scores[i].detach() for i in range(num_levels)] - mlvl_bbox_preds = [bbox_preds[i].detach() for i in range(num_levels)] - - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shape = img_metas[0]['img_shape_for_onnx'] - - cfg = self.test_cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_priors) - device = cls_scores[0].device - batch_size = cls_scores[0].shape[0] - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), device=device, dtype=torch.long) - - # e.g. Retina, FreeAnchor, etc. - if score_factors is None: - with_score_factors = False - mlvl_score_factor = [None for _ in range(num_levels)] - else: - # e.g. FCOS, PAA, ATSS, etc. - with_score_factors = True - mlvl_score_factor = [ - score_factors[i].detach() for i in range(num_levels) - ] - mlvl_score_factors = [] - - mlvl_batch_bboxes = [] - mlvl_scores = [] - - for cls_score, bbox_pred, score_factors, priors in zip( - mlvl_cls_scores, mlvl_bbox_preds, mlvl_score_factor, - mlvl_priors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - scores = cls_score.permute(0, 2, 3, - 1).reshape(batch_size, -1, - self.cls_out_channels) - if self.use_sigmoid_cls: - scores = scores.sigmoid() - nms_pre_score = scores - else: - scores = scores.softmax(-1) - nms_pre_score = scores - - if with_score_factors: - score_factors = score_factors.permute(0, 2, 3, 1).reshape( - batch_size, -1).sigmoid() - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - priors = priors.expand(batch_size, -1, priors.size(-1)) - # Get top-k predictions - from mmdet.core.export import get_k_for_topk - nms_pre = get_k_for_topk(nms_pre_tensor, bbox_pred.shape[1]) - if nms_pre > 0: - - if with_score_factors: - nms_pre_score = (nms_pre_score * score_factors[..., None]) - else: - nms_pre_score = nms_pre_score - - # Get maximum scores for foreground classes. 
- if self.use_sigmoid_cls: - max_scores, _ = nms_pre_score.max(-1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = nms_pre_score[..., :-1].max(-1) - _, topk_inds = max_scores.topk(nms_pre) - - batch_inds = torch.arange( - batch_size, device=bbox_pred.device).view( - -1, 1).expand_as(topk_inds).long() - # Avoid onnx2tensorrt issue in https://github.com/NVIDIA/TensorRT/issues/1134 # noqa: E501 - transformed_inds = bbox_pred.shape[1] * batch_inds + topk_inds - priors = priors.reshape( - -1, priors.size(-1))[transformed_inds, :].reshape( - batch_size, -1, priors.size(-1)) - bbox_pred = bbox_pred.reshape(-1, - 4)[transformed_inds, :].reshape( - batch_size, -1, 4) - scores = scores.reshape( - -1, self.cls_out_channels)[transformed_inds, :].reshape( - batch_size, -1, self.cls_out_channels) - if with_score_factors: - score_factors = score_factors.reshape( - -1, 1)[transformed_inds].reshape(batch_size, -1) - - bboxes = self.bbox_coder.decode( - priors, bbox_pred, max_shape=img_shape) - - mlvl_batch_bboxes.append(bboxes) - mlvl_scores.append(scores) - if with_score_factors: - mlvl_score_factors.append(score_factors) - - batch_bboxes = torch.cat(mlvl_batch_bboxes, dim=1) - batch_scores = torch.cat(mlvl_scores, dim=1) - if with_score_factors: - batch_score_factors = torch.cat(mlvl_score_factors, dim=1) - - # Replace multiclass_nms with ONNX::NonMaxSuppression in deployment - - from mmdet.core.export import add_dummy_nms_for_onnx - - if not self.use_sigmoid_cls: - batch_scores = batch_scores[..., :self.num_classes] - - if with_score_factors: - batch_scores = batch_scores * (batch_score_factors.unsqueeze(2)) - - if with_nms: - max_output_boxes_per_class = cfg.nms.get( - 'max_output_boxes_per_class', 200) - iou_threshold = cfg.nms.get('iou_threshold', 0.5) - score_threshold = cfg.score_thr - nms_pre = cfg.get('deploy_nms_pre', -1) - return add_dummy_nms_for_onnx(batch_bboxes, batch_scores, - max_output_boxes_per_class, - iou_threshold, score_threshold, - nms_pre, cfg.max_per_img) - else: - return batch_bboxes, batch_scores diff --git a/cv/detection/yolof/pytorch/mmdet/models/dense_heads/dense_test_mixins.py b/cv/detection/yolof/pytorch/mmdet/models/dense_heads/dense_test_mixins.py deleted file mode 100755 index 34215489..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/dense_heads/dense_test_mixins.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import sys -from inspect import signature - -import torch -from mmcv.ops import batched_nms - -from mmdet.core import bbox_mapping_back, merge_aug_proposals - -if sys.version_info >= (3, 7): - from mmdet.utils.contextmanagers import completed - - -class BBoxTestMixin(object): - """Mixin class for testing det bboxes via DenseHead.""" - - def simple_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes without test-time augmentation, can be applied in - DenseHead except for ``RPNHead`` and its variants, e.g., ``GARPNHead``, - etc. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). 
- The shape of the second tensor in the tuple is ``labels`` - with shape (n,) - """ - outs = self.forward(feats) - results_list = self.get_bboxes( - *outs, img_metas=img_metas, rescale=rescale) - return results_list - - def aug_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes with test time augmentation, can be applied in - DenseHead except for ``RPNHead`` and its variants, e.g., ``GARPNHead``, - etc. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,). The length of list should always be 1. - """ - # check with_nms argument - gb_sig = signature(self.get_bboxes) - gb_args = [p.name for p in gb_sig.parameters.values()] - gbs_sig = signature(self._get_bboxes_single) - gbs_args = [p.name for p in gbs_sig.parameters.values()] - assert ('with_nms' in gb_args) and ('with_nms' in gbs_args), \ - f'{self.__class__.__name__}' \ - ' does not support test-time augmentation' - - aug_bboxes = [] - aug_scores = [] - aug_labels = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - outs = self.forward(x) - bbox_outputs = self.get_bboxes( - *outs, - img_metas=img_meta, - cfg=self.test_cfg, - rescale=False, - with_nms=False)[0] - aug_bboxes.append(bbox_outputs[0]) - aug_scores.append(bbox_outputs[1]) - if len(bbox_outputs) >= 3: - aug_labels.append(bbox_outputs[2]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = self.merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas) - merged_labels = torch.cat(aug_labels, dim=0) if aug_labels else None - - if merged_bboxes.numel() == 0: - det_bboxes = torch.cat([merged_bboxes, merged_scores[:, None]], -1) - return [ - (det_bboxes, merged_labels), - ] - - det_bboxes, keep_idxs = batched_nms(merged_bboxes, merged_scores, - merged_labels, self.test_cfg.nms) - det_bboxes = det_bboxes[:self.test_cfg.max_per_img] - det_labels = merged_labels[keep_idxs][:self.test_cfg.max_per_img] - - if rescale: - _det_bboxes = det_bboxes - else: - _det_bboxes = det_bboxes.clone() - _det_bboxes[:, :4] *= det_bboxes.new_tensor( - img_metas[0][0]['scale_factor']) - - return [ - (_det_bboxes, det_labels), - ] - - def simple_test_rpn(self, x, img_metas): - """Test without augmentation, only for ``RPNHead`` and its variants, - e.g., ``GARPNHead``, etc. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Proposals of each image, each item has shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - """ - rpn_outs = self(x) - proposal_list = self.get_bboxes(*rpn_outs, img_metas=img_metas) - return proposal_list - - def aug_test_rpn(self, feats, img_metas): - """Test with augmentation for only for ``RPNHead`` and its variants, - e.g., ``GARPNHead``, etc. 
- - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): Meta info of each image. - - Returns: - list[Tensor]: Proposals of each image, each item has shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - """ - samples_per_gpu = len(img_metas[0]) - aug_proposals = [[] for _ in range(samples_per_gpu)] - for x, img_meta in zip(feats, img_metas): - proposal_list = self.simple_test_rpn(x, img_meta) - for i, proposals in enumerate(proposal_list): - aug_proposals[i].append(proposals) - # reorganize the order of 'img_metas' to match the dimensions - # of 'aug_proposals' - aug_img_metas = [] - for i in range(samples_per_gpu): - aug_img_meta = [] - for j in range(len(img_metas)): - aug_img_meta.append(img_metas[j][i]) - aug_img_metas.append(aug_img_meta) - # after merging, proposals will be rescaled to the original image size - merged_proposals = [ - merge_aug_proposals(proposals, aug_img_meta, self.test_cfg) - for proposals, aug_img_meta in zip(aug_proposals, aug_img_metas) - ] - return merged_proposals - - if sys.version_info >= (3, 7): - - async def async_simple_test_rpn(self, x, img_metas): - sleep_interval = self.test_cfg.pop('async_sleep_interval', 0.025) - async with completed( - __name__, 'rpn_head_forward', - sleep_interval=sleep_interval): - rpn_outs = self(x) - - proposal_list = self.get_bboxes(*rpn_outs, img_metas=img_metas) - return proposal_list - - def merge_aug_bboxes(self, aug_bboxes, aug_scores, img_metas): - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - img_shapes (list[Tensor]): shape (3, ). - - Returns: - tuple[Tensor]: ``bboxes`` with shape (n,4), where - 4 represent (tl_x, tl_y, br_x, br_y) - and ``scores`` with shape (n,). - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.cat(recovered_bboxes, dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.cat(aug_scores, dim=0) - return bboxes, scores diff --git a/cv/detection/yolof/pytorch/mmdet/models/dense_heads/yolof_head.py b/cv/detection/yolof/pytorch/mmdet/models/dense_heads/yolof_head.py deleted file mode 100755 index 1063524a..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/dense_heads/yolof_head.py +++ /dev/null @@ -1,416 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import (ConvModule, bias_init_with_prob, constant_init, is_norm, - normal_init) -from mmcv.runner import force_fp32 - -from mmdet.core import anchor_inside_flags, multi_apply, reduce_mean, unmap -from ..builder import HEADS -from .anchor_head import AnchorHead - -INF = 1e8 - - -def levels_to_images(mlvl_tensor): - """Concat multi-level feature maps by image. - - [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] - Convert the shape of each element in mlvl_tensor from (N, C, H, W) to - (N, H*W , C), then split the element to N elements with shape (H*W, C), and - concat elements in same image of all level along first dimension. - - Args: - mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from - corresponding level. 
Each element is of shape (N, C, H, W) - - Returns: - list[torch.Tensor]: A list that contains N tensors and each tensor is - of shape (num_elements, C) - """ - batch_size = mlvl_tensor[0].size(0) - batch_list = [[] for _ in range(batch_size)] - channels = mlvl_tensor[0].size(1) - for t in mlvl_tensor: - t = t.permute(0, 2, 3, 1) - t = t.view(batch_size, -1, channels).contiguous() - for img in range(batch_size): - batch_list[img].append(t[img]) - return [torch.cat(item, 0) for item in batch_list] - - -@HEADS.register_module() -class YOLOFHead(AnchorHead): - """YOLOFHead Paper link: https://arxiv.org/abs/2103.09460. - - Args: - num_classes (int): The number of object classes (w/o background) - in_channels (List[int]): The number of input channels per scale. - cls_num_convs (int): The number of convolutions of cls branch. - Default 2. - reg_num_convs (int): The number of convolutions of reg branch. - Default 4. - norm_cfg (dict): Dictionary to construct and config norm layer. - """ - - def __init__(self, - num_classes, - in_channels, - num_cls_convs=2, - num_reg_convs=4, - norm_cfg=dict(type='BN', requires_grad=True), - **kwargs): - self.num_cls_convs = num_cls_convs - self.num_reg_convs = num_reg_convs - self.norm_cfg = norm_cfg - super(YOLOFHead, self).__init__(num_classes, in_channels, **kwargs) - - def _init_layers(self): - cls_subnet = [] - bbox_subnet = [] - for i in range(self.num_cls_convs): - cls_subnet.append( - ConvModule( - self.in_channels, - self.in_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg)) - for i in range(self.num_reg_convs): - bbox_subnet.append( - ConvModule( - self.in_channels, - self.in_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg)) - self.cls_subnet = nn.Sequential(*cls_subnet) - self.bbox_subnet = nn.Sequential(*bbox_subnet) - self.cls_score = nn.Conv2d( - self.in_channels, - self.num_base_priors * self.num_classes, - kernel_size=3, - stride=1, - padding=1) - self.bbox_pred = nn.Conv2d( - self.in_channels, - self.num_base_priors * 4, - kernel_size=3, - stride=1, - padding=1) - self.object_pred = nn.Conv2d( - self.in_channels, - self.num_base_priors, - kernel_size=3, - stride=1, - padding=1) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, mean=0, std=0.01) - if is_norm(m): - constant_init(m, 1) - - # Use prior in model initialization to improve stability - bias_cls = bias_init_with_prob(0.01) - torch.nn.init.constant_(self.cls_score.bias, bias_cls) - - def forward_single(self, feature): - cls_score = self.cls_score(self.cls_subnet(feature)) - N, _, H, W = cls_score.shape - cls_score = cls_score.view(N, -1, self.num_classes, H, W) - - reg_feat = self.bbox_subnet(feature) - bbox_reg = self.bbox_pred(reg_feat) - objectness = self.object_pred(reg_feat) - - # implicit objectness - objectness = objectness.view(N, -1, 1, H, W) - normalized_cls_score = cls_score + objectness - torch.log( - 1. + torch.clamp(cls_score.exp(), max=INF) + - torch.clamp(objectness.exp(), max=INF)) - normalized_cls_score = normalized_cls_score.view(N, -1, H, W) - return normalized_cls_score, bbox_reg - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. 
- - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (batch, num_anchors * num_classes, h, w) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (batch, num_anchors * 4, h, w) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == 1 - assert self.prior_generator.num_levels == 1 - - device = cls_scores[0].device - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - - # The output level is always 1 - anchor_list = [anchors[0] for anchors in anchor_list] - valid_flag_list = [valid_flags[0] for valid_flags in valid_flag_list] - - cls_scores_list = levels_to_images(cls_scores) - bbox_preds_list = levels_to_images(bbox_preds) - - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - cls_scores_list, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (batch_labels, batch_label_weights, num_total_pos, num_total_neg, - batch_bbox_weights, batch_pos_predicted_boxes, - batch_target_boxes) = cls_reg_targets - - flatten_labels = batch_labels.reshape(-1) - batch_label_weights = batch_label_weights.reshape(-1) - cls_score = cls_scores[0].permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - - num_total_samples = (num_total_pos + - num_total_neg) if self.sampling else num_total_pos - num_total_samples = reduce_mean( - cls_score.new_tensor(num_total_samples)).clamp_(1.0).item() - - # classification loss - loss_cls = self.loss_cls( - cls_score, - flatten_labels, - batch_label_weights, - avg_factor=num_total_samples) - - # regression loss - if batch_pos_predicted_boxes.shape[0] == 0: - # no pos sample - loss_bbox = batch_pos_predicted_boxes.sum() * 0 - else: - loss_bbox = self.loss_bbox( - batch_pos_predicted_boxes, - batch_target_boxes, - batch_bbox_weights.float(), - avg_factor=num_total_samples) - - return dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - def get_targets(self, - cls_scores_list, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - cls_scores_list (list[Tensor]): Classification scores of - each image. each is a 4D-tensor, the shape is - (h * w, num_anchors * num_classes). - bbox_preds_list (list[Tensor]): Bbox preds of each image. - each is a 4D-tensor, the shape is (h * w, num_anchors * 4). - anchor_list (list[Tensor]): Anchors of each image. Each element of - is a tensor of shape (h * w * num_anchors, 4). - valid_flag_list (list[Tensor]): Valid flags of each image. Each - element of is a tensor of shape (h * w * num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. 
- img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - batch_labels (Tensor): Label of all images. Each element \ - of is a tensor of shape (batch, h * w * num_anchors) - - batch_label_weights (Tensor): Label weights of all images \ - of is a tensor of shape (batch, h * w * num_anchors) - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - (all_labels, all_label_weights, pos_inds_list, neg_inds_list, - sampling_results_list) = results[:5] - rest_results = list(results[5:]) # user-added return values - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - - batch_labels = torch.stack(all_labels, 0) - batch_label_weights = torch.stack(all_label_weights, 0) - - res = (batch_labels, batch_label_weights, num_total_pos, num_total_neg) - for i, rests in enumerate(rest_results): # user-added return values - rest_results[i] = torch.cat(rests, 0) - - return res + tuple(rest_results) - - def _get_targets_single(self, - bbox_preds, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - bbox_preds (Tensor): Bbox prediction of the image, which - shape is (h * w ,4) - flat_anchors (Tensor): Anchors of the image, which shape is - (h * w * num_anchors ,4) - valid_flags (Tensor): Valid flags of the image, which shape is - (h * w * num_anchors,). - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - img_meta (dict): Meta info of the image. - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: - labels (Tensor): Labels of image, which shape is - (h * w * num_anchors, ). - label_weights (Tensor): Label weights of image, which shape is - (h * w * num_anchors, ). - pos_inds (Tensor): Pos index of image. 
- neg_inds (Tensor): Neg index of image. - sampling_result (obj:`SamplingResult`): Sampling result. - pos_bbox_weights (Tensor): The Weight of using to calculate - the bbox branch loss, which shape is (num, ). - pos_predicted_boxes (Tensor): boxes predicted value of - using to calculate the bbox branch loss, which shape is - (num, 4). - pos_target_boxes (Tensor): boxes target value of - using to calculate the bbox branch loss, which shape is - (num, 4). - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 8 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - bbox_preds = bbox_preds.reshape(-1, 4) - bbox_preds = bbox_preds[inside_flags, :] - - # decoded bbox - decoder_bbox_preds = self.bbox_coder.decode(anchors, bbox_preds) - assign_result = self.assigner.assign( - decoder_bbox_preds, anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - - pos_bbox_weights = assign_result.get_extra_property('pos_idx') - pos_predicted_boxes = assign_result.get_extra_property( - 'pos_predicted_boxes') - pos_target_boxes = assign_result.get_extra_property('target_boxes') - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - num_valid_anchors = anchors.shape[0] - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, - fill=self.num_classes) # fill bg label - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - - return (labels, label_weights, pos_inds, neg_inds, sampling_result, - pos_bbox_weights, pos_predicted_boxes, pos_target_boxes) diff --git a/cv/detection/yolof/pytorch/mmdet/models/detectors/__init__.py b/cv/detection/yolof/pytorch/mmdet/models/detectors/__init__.py deleted file mode 100755 index e9772c6c..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/detectors/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .yolof import YOLOF - -__all__ = ['YOLOF'] - - - diff --git a/cv/detection/yolof/pytorch/mmdet/models/detectors/base.py b/cv/detection/yolof/pytorch/mmdet/models/detectors/base.py deleted file mode 100755 index 691a0c31..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/detectors/base.py +++ /dev/null @@ -1,360 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import mmcv -import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import BaseModule, auto_fp16 - -# from mmdet.core.visualization import imshow_det_bboxes - - -class BaseDetector(BaseModule, metaclass=ABCMeta): - """Base class for detectors.""" - - def __init__(self, init_cfg=None): - super(BaseDetector, self).__init__(init_cfg) - self.fp16_enabled = False - - @property - def with_neck(self): - """bool: whether the detector has a neck""" - return hasattr(self, 'neck') and self.neck is not None - - # TODO: these properties need to be carefully handled - # for both single stage & two stage detectors - @property - def with_shared_head(self): - """bool: whether the detector has a shared head in the RoI Head""" - return hasattr(self, 'roi_head') and self.roi_head.with_shared_head - - @property - def with_bbox(self): - """bool: whether the detector has a bbox head""" - return ((hasattr(self, 'roi_head') and self.roi_head.with_bbox) - or (hasattr(self, 'bbox_head') and self.bbox_head is not None)) - - @property - def with_mask(self): - """bool: whether the detector has a mask head""" - return ((hasattr(self, 'roi_head') and self.roi_head.with_mask) - or (hasattr(self, 'mask_head') and self.mask_head is not None)) - - @abstractmethod - def extract_feat(self, imgs): - """Extract features from images.""" - pass - - def extract_feats(self, imgs): - """Extract features from multiple images. - - Args: - imgs (list[torch.Tensor]): A list of images. The images are - augmented from the same image but in different ways. - - Returns: - list[torch.Tensor]: Features of different images - """ - assert isinstance(imgs, list) - return [self.extract_feat(img) for img in imgs] - - def forward_train(self, imgs, img_metas, **kwargs): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys, see - :class:`mmdet.datasets.pipelines.Collect`. - kwargs (keyword arguments): Specific to concrete implementation. - """ - # NOTE the batched image size information may be useful, e.g. - # in DETR, this is needed for the construction of masks, which is - # then used for the transformer_head. 
- batch_input_shape = tuple(imgs[0].size()[-2:]) - for img_meta in img_metas: - img_meta['batch_input_shape'] = batch_input_shape - - async def async_simple_test(self, img, img_metas, **kwargs): - raise NotImplementedError - - @abstractmethod - def simple_test(self, img, img_metas, **kwargs): - pass - - @abstractmethod - def aug_test(self, imgs, img_metas, **kwargs): - """Test function with test time augmentation.""" - pass - - async def aforward_test(self, *, img, img_metas, **kwargs): - for var, name in [(img, 'img'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(img) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(img)}) ' - f'!= num of image metas ({len(img_metas)})') - # TODO: remove the restriction of samples_per_gpu == 1 when prepared - samples_per_gpu = img[0].size(0) - assert samples_per_gpu == 1 - - if num_augs == 1: - return await self.async_simple_test(img[0], img_metas[0], **kwargs) - else: - raise NotImplementedError - - def forward_test(self, imgs, img_metas, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) ' - f'!= num of image meta ({len(img_metas)})') - - # NOTE the batched image size information may be useful, e.g. - # in DETR, this is needed for the construction of masks, which is - # then used for the transformer_head. - for img, img_meta in zip(imgs, img_metas): - batch_size = len(img_meta) - for img_id in range(batch_size): - img_meta[img_id]['batch_input_shape'] = tuple(img.size()[-2:]) - - if num_augs == 1: - # proposals (List[List[Tensor]]): the outer list indicates - # test-time augs (multiscale, flip, etc.) and the inner list - # indicates images in a batch. - # The Tensor should have a shape Px4, where P is the number of - # proposals. - if 'proposals' in kwargs: - kwargs['proposals'] = kwargs['proposals'][0] - return self.simple_test(imgs[0], img_metas[0], **kwargs) - else: - assert imgs[0].size(0) == 1, 'aug test does not support ' \ - 'inference with batch size ' \ - f'{imgs[0].size(0)}' - # TODO: support test augmentation for predefined proposals - assert 'proposals' not in kwargs - return self.aug_test(imgs, img_metas, **kwargs) - - @auto_fp16(apply_to=('img', )) - def forward(self, img, img_metas, return_loss=True, **kwargs): - """Calls either :func:`forward_train` or :func:`forward_test` depending - on whether ``return_loss`` is ``True``. - - Note this setting will change the expected inputs. When - ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor - and List[dict]), and when ``resturn_loss=False``, img and img_meta - should be double nested (i.e. List[Tensor], List[List[dict]]), with - the outer list indicating test time augmentations. 
- """ - if torch.onnx.is_in_onnx_export(): - assert len(img_metas) == 1 - return self.onnx_export(img[0], img_metas[0]) - - if return_loss: - return self.forward_train(img, img_metas, **kwargs) - else: - return self.forward_test(img, img_metas, **kwargs) - - def _parse_losses(self, losses): - """Parse the raw outputs (losses) of the network. - - Args: - losses (dict): Raw output of the network, which usually contain - losses and other necessary information. - - Returns: - tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor \ - which may be a weighted sum of all losses, log_vars contains \ - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - # If the loss_vars has different length, GPUs will wait infinitely - if dist.is_available() and dist.is_initialized(): - log_var_length = torch.tensor(len(log_vars), device=loss.device) - dist.all_reduce(log_var_length) - message = (f'rank {dist.get_rank()}' + - f' len(log_vars): {len(log_vars)}' + ' keys: ' + - ','.join(log_vars.keys())) - assert log_var_length == len(log_vars) * dist.get_world_size(), \ - 'loss log variables are different across GPUs!\n' + message - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def train_step(self, data, optimizer): - """The iteration step during training. - - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating is also defined in - this method, such as GAN. - - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. - - Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, \ - ``num_samples``. - - - ``loss`` is a tensor for back propagation, which can be a - weighted sum of multiple losses. - - ``log_vars`` contains all the variables to be sent to the - logger. - - ``num_samples`` indicates the batch size (when the model is - DDP, it means the batch size on each GPU), which is used for - averaging the logs. - """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img_metas'])) - - return outputs - - def val_step(self, data, optimizer=None): - """The iteration step during validation. - - This method shares the same signature as :func:`train_step`, but used - during val epochs. Note that the evaluation after training epochs is - not implemented with this method, but an evaluation hook. 
- """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['img_metas'])) - - return outputs - - def show_result(self, - img, - result, - score_thr=0.3, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241), - mask_color=None, - thickness=2, - font_size=13, - win_name='', - show=False, - wait_time=0, - out_file=None): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - score_thr (float, optional): Minimum score of bboxes to be shown. - Default: 0.3. - bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines. - The tuple of color should be in BGR order. Default: 'green' - text_color (str or tuple(int) or :obj:`Color`):Color of texts. - The tuple of color should be in BGR order. Default: 'green' - mask_color (None or str or tuple(int) or :obj:`Color`): - Color of masks. The tuple of color should be in BGR order. - Default: None - thickness (int): Thickness of lines. Default: 2 - font_size (int): Font size of texts. Default: 13 - win_name (str): The window name. Default: '' - wait_time (float): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - - Returns: - img (Tensor): Only if not `show` or `out_file` - """ - img = mmcv.imread(img) - img = img.copy() - if isinstance(result, tuple): - bbox_result, segm_result = result - if isinstance(segm_result, tuple): - segm_result = segm_result[0] # ms rcnn - else: - bbox_result, segm_result = result, None - bboxes = np.vstack(bbox_result) - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - # draw segmentation masks - segms = None - if segm_result is not None and len(labels) > 0: # non empty - segms = mmcv.concat_list(segm_result) - if isinstance(segms[0], torch.Tensor): - segms = torch.stack(segms, dim=0).detach().cpu().numpy() - else: - segms = np.stack(segms, axis=0) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - # draw bounding boxes - # img = imshow_det_bboxes( - # img, - # bboxes, - # labels, - # segms, - # class_names=self.CLASSES, - # score_thr=score_thr, - # bbox_color=bbox_color, - # text_color=text_color, - # mask_color=mask_color, - # thickness=thickness, - # font_size=font_size, - # win_name=win_name, - # show=show, - # wait_time=wait_time, - # out_file=out_file) - - # if not (show or out_file): - # return img - - def onnx_export(self, img, img_metas): - raise NotImplementedError(f'{self.__class__.__name__} does ' - f'not support ONNX EXPORT') diff --git a/cv/detection/yolof/pytorch/mmdet/models/detectors/single_stage.py b/cv/detection/yolof/pytorch/mmdet/models/detectors/single_stage.py deleted file mode 100755 index c375c72d..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/detectors/single_stage.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch - -from mmdet.core import bbox2result -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class SingleStageDetector(BaseDetector): - """Base class for single-stage detectors. 
- - Single-stage detectors directly and densely predict bounding boxes on the - output features of the backbone+neck. - """ - - def __init__(self, - backbone, - neck=None, - bbox_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(SingleStageDetector, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - if neck is not None: - self.neck = build_neck(neck) - bbox_head.update(train_cfg=train_cfg) - bbox_head.update(test_cfg=test_cfg) - self.bbox_head = build_head(bbox_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img): - """Directly extract features from the backbone+neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - super(SingleStageDetector, self).forward_train(img, img_metas) - x = self.extract_feat(img) - losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes, - gt_labels, gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test-time augmentation. - - Args: - img (torch.Tensor): Images with shape (N, C, H, W). - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - feat = self.extract_feat(img) - results_list = self.bbox_head.simple_test( - feat, img_metas, rescale=rescale) - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in results_list - ] - return bbox_results - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - imgs (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. 
- - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - assert hasattr(self.bbox_head, 'aug_test'), \ - f'{self.bbox_head.__class__.__name__}' \ - ' does not support test-time augmentation' - - feats = self.extract_feats(imgs) - results_list = self.bbox_head.aug_test( - feats, img_metas, rescale=rescale) - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in results_list - ] - return bbox_results - - def onnx_export(self, img, img_metas, with_nms=True): - """Test function without test time augmentation. - - Args: - img (torch.Tensor): input images. - img_metas (list[dict]): List of image information. - - Returns: - tuple[Tensor, Tensor]: dets of shape [N, num_det, 5] - and class labels of shape [N, num_det]. - """ - x = self.extract_feat(img) - outs = self.bbox_head(x) - # get origin input shape to support onnx dynamic shape - - # get shape as tensor - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - # get pad input shape to support onnx dynamic shape for exporting - # `CornerNet` and `CentripetalNet`, which 'pad_shape' is used - # for inference - img_metas[0]['pad_shape_for_onnx'] = img_shape - - if len(outs) == 2: - # add dummy score_factor - outs = (*outs, None) - # TODO Can we change to `get_bboxes` when `onnx_export` fail - det_bboxes, det_labels = self.bbox_head.onnx_export( - *outs, img_metas, with_nms=with_nms) - - return det_bboxes, det_labels diff --git a/cv/detection/yolof/pytorch/mmdet/models/detectors/yolof.py b/cv/detection/yolof/pytorch/mmdet/models/detectors/yolof.py deleted file mode 100755 index 6d08d16d..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/detectors/yolof.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLOF(SingleStageDetector): - r"""Implementation of `You Only Look One-level Feature - `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(YOLOF, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/cv/detection/yolof/pytorch/mmdet/models/losses/__init__.py b/cv/detection/yolof/pytorch/mmdet/models/losses/__init__.py deleted file mode 100755 index cd27652c..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/losses/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .accuracy import Accuracy, accuracy -from .focal_loss import FocalLoss, sigmoid_focal_loss -from .iou_loss import (BoundedIoULoss, CIoULoss, DIoULoss, GIoULoss, IoULoss, - bounded_iou_loss, iou_loss) -from .utils import reduce_loss, weight_reduce_loss, weighted_loss - -__all__ = [ - 'accuracy', 'Accuracy', 'sigmoid_focal_loss', - 'FocalLoss', 'reduce_loss', 'weight_reduce_loss', 'weighted_loss', - 'iou_loss', 'bounded_iou_loss', - 'IoULoss', 'BoundedIoULoss', 'GIoULoss', 'DIoULoss', 'CIoULoss' -] diff --git a/cv/detection/yolof/pytorch/mmdet/models/losses/accuracy.py b/cv/detection/yolof/pytorch/mmdet/models/losses/accuracy.py deleted file mode 100755 index fe765a39..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/losses/accuracy.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import mmcv -import torch.nn as nn - - -@mmcv.jit(coderize=True) -def accuracy(pred, target, topk=1, thresh=None): - """Calculate accuracy according to the prediction and target. - - Args: - pred (torch.Tensor): The model prediction, shape (N, num_class) - target (torch.Tensor): The target of each prediction, shape (N, ) - topk (int | tuple[int], optional): If the predictions in ``topk`` - matches the target, the predictions will be regarded as - correct ones. Defaults to 1. - thresh (float, optional): If not None, predictions with scores under - this threshold are considered incorrect. Default to None. - - Returns: - float | tuple[float]: If the input ``topk`` is a single integer, - the function will return a single float as accuracy. If - ``topk`` is a tuple containing multiple integers, the - function will return a tuple containing accuracies of - each ``topk`` number. - """ - assert isinstance(topk, (int, tuple)) - if isinstance(topk, int): - topk = (topk, ) - return_single = True - else: - return_single = False - - maxk = max(topk) - if pred.size(0) == 0: - accu = [pred.new_tensor(0.) for i in range(len(topk))] - return accu[0] if return_single else accu - assert pred.ndim == 2 and target.ndim == 1 - assert pred.size(0) == target.size(0) - assert maxk <= pred.size(1), \ - f'maxk {maxk} exceeds pred dimension {pred.size(1)}' - pred_value, pred_label = pred.topk(maxk, dim=1) - pred_label = pred_label.t() # transpose to shape (maxk, N) - correct = pred_label.eq(target.view(1, -1).expand_as(pred_label)) - if thresh is not None: - # Only prediction values larger than thresh are counted as correct - correct = correct & (pred_value > thresh).t() - res = [] - for k in topk: - correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True) - res.append(correct_k.mul_(100.0 / pred.size(0))) - return res[0] if return_single else res - - -class Accuracy(nn.Module): - - def __init__(self, topk=(1, ), thresh=None): - """Module to calculate the accuracy. - - Args: - topk (tuple, optional): The criterion used to calculate the - accuracy. Defaults to (1,). - thresh (float, optional): If not None, predictions with scores - under this threshold are considered incorrect. Default to None. - """ - super().__init__() - self.topk = topk - self.thresh = thresh - - def forward(self, pred, target): - """Forward function to calculate accuracy. - - Args: - pred (torch.Tensor): Prediction of models. - target (torch.Tensor): Target for each prediction. - - Returns: - tuple[float]: The accuracies under different topk criterions. - """ - return accuracy(pred, target, self.topk, self.thresh) diff --git a/cv/detection/yolof/pytorch/mmdet/models/losses/focal_loss.py b/cv/detection/yolof/pytorch/mmdet/models/losses/focal_loss.py deleted file mode 100755 index 6c20fddd..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/losses/focal_loss.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.ops import sigmoid_focal_loss as _sigmoid_focal_loss - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -# This method is only for debugging -def py_sigmoid_focal_loss(pred, - target, - weight=None, - gamma=2.0, - alpha=0.25, - reduction='mean', - avg_factor=None): - """PyTorch version of `Focal Loss `_. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the - number of classes - target (torch.Tensor): The learning label of the prediction. 
- weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - pred_sigmoid = pred.sigmoid() - target = target.type_as(pred) - pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target) - focal_weight = (alpha * target + (1 - alpha) * - (1 - target)) * pt.pow(gamma) - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') * focal_weight - if weight is not None: - if weight.shape != loss.shape: - if weight.size(0) == loss.size(0): - # For most cases, weight is of shape (num_priors, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - else: - # Sometimes, weight per anchor per class is also needed. e.g. - # in FSAF. But it may be flattened of shape - # (num_priors x num_class, ), while loss is still of shape - # (num_priors, num_class). - assert weight.numel() == loss.numel() - weight = weight.view(loss.size(0), -1) - assert weight.ndim == loss.ndim - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -def py_focal_loss_with_prob(pred, - target, - weight=None, - gamma=2.0, - alpha=0.25, - reduction='mean', - avg_factor=None): - """PyTorch version of `Focal Loss `_. - Different from `py_sigmoid_focal_loss`, this function accepts probability - as input. - - Args: - pred (torch.Tensor): The prediction probability with shape (N, C), - C is the number of classes. - target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - num_classes = pred.size(1) - target = F.one_hot(target, num_classes=num_classes + 1) - target = target[:, :num_classes] - - target = target.type_as(pred) - pt = (1 - pred) * target + pred * (1 - target) - focal_weight = (alpha * target + (1 - alpha) * - (1 - target)) * pt.pow(gamma) - loss = F.binary_cross_entropy( - pred, target, reduction='none') * focal_weight - if weight is not None: - if weight.shape != loss.shape: - if weight.size(0) == loss.size(0): - # For most cases, weight is of shape (num_priors, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - else: - # Sometimes, weight per anchor per class is also needed. e.g. - # in FSAF. But it may be flattened of shape - # (num_priors x num_class, ), while loss is still of shape - # (num_priors, num_class). - assert weight.numel() == loss.numel() - weight = weight.view(loss.size(0), -1) - assert weight.ndim == loss.ndim - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -def sigmoid_focal_loss(pred, - target, - weight=None, - gamma=2.0, - alpha=0.25, - reduction='mean', - avg_factor=None): - r"""A warpper of cuda version `Focal Loss - `_. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. 
- target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - # Function.apply does not accept keyword arguments, so the decorator - # "weighted_loss" is not applicable - loss = _sigmoid_focal_loss(pred.contiguous(), target.contiguous(), gamma, - alpha, None, 'none') - if weight is not None: - if weight.shape != loss.shape: - if weight.size(0) == loss.size(0): - # For most cases, weight is of shape (num_priors, ), - # which means it does not have the second axis num_class - weight = weight.view(-1, 1) - else: - # Sometimes, weight per anchor per class is also needed. e.g. - # in FSAF. But it may be flattened of shape - # (num_priors x num_class, ), while loss is still of shape - # (num_priors, num_class). - assert weight.numel() == loss.numel() - weight = weight.view(loss.size(0), -1) - assert weight.ndim == loss.ndim - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class FocalLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - reduction='mean', - loss_weight=1.0, - activated=False): - """`Focal Loss `_ - - Args: - use_sigmoid (bool, optional): Whether to the prediction is - used for sigmoid or softmax. Defaults to True. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 0.25. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - activated (bool, optional): Whether the input is activated. - If True, it means the input has been activated and can be - treated as probabilities. Else, it should be treated as logits. - Defaults to False. - """ - super(FocalLoss, self).__init__() - assert use_sigmoid is True, 'Only sigmoid focal loss supported now.' - self.use_sigmoid = use_sigmoid - self.gamma = gamma - self.alpha = alpha - self.reduction = reduction - self.loss_weight = loss_weight - self.activated = activated - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". 
- - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - if self.activated: - calculate_loss_func = py_focal_loss_with_prob - else: - if torch.cuda.is_available() and pred.is_cuda: - calculate_loss_func = sigmoid_focal_loss - else: - num_classes = pred.size(1) - target = F.one_hot(target, num_classes=num_classes + 1) - target = target[:, :num_classes] - calculate_loss_func = py_sigmoid_focal_loss - - loss_cls = self.loss_weight * calculate_loss_func( - pred, - target, - weight, - gamma=self.gamma, - alpha=self.alpha, - reduction=reduction, - avg_factor=avg_factor) - - else: - raise NotImplementedError - return loss_cls diff --git a/cv/detection/yolof/pytorch/mmdet/models/losses/iou_loss.py b/cv/detection/yolof/pytorch/mmdet/models/losses/iou_loss.py deleted file mode 100755 index bf1ed04e..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/losses/iou_loss.py +++ /dev/null @@ -1,474 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -import warnings - -import mmcv -import torch -import torch.nn as nn - -from mmdet.core import bbox_overlaps -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def iou_loss(pred, target, linear=False, mode='log', eps=1e-6): - """IoU loss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - The loss is calculated as negative log of IoU. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - linear (bool, optional): If True, use linear scale of loss instead of - log scale. Default: False. - mode (str): Loss scaling mode, including "linear", "square", and "log". - Default: 'log' - eps (float): Eps to avoid log(0). - - Return: - torch.Tensor: Loss tensor. - """ - assert mode in ['linear', 'square', 'log'] - if linear: - mode = 'linear' - warnings.warn('DeprecationWarning: Setting "linear=True" in ' - 'iou_loss is deprecated, please use "mode=`linear`" ' - 'instead.') - ious = bbox_overlaps(pred, target, is_aligned=True).clamp(min=eps) - if mode == 'linear': - loss = 1 - ious - elif mode == 'square': - loss = 1 - ious**2 - elif mode == 'log': - loss = -ious.log() - else: - raise NotImplementedError - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def bounded_iou_loss(pred, target, beta=0.2, eps=1e-3): - """BIoULoss. - - This is an implementation of paper - `Improving Object Localization with Fitness NMS and Bounded IoU Loss. - `_. - - Args: - pred (torch.Tensor): Predicted bboxes. - target (torch.Tensor): Target bboxes. - beta (float): beta parameter in smoothl1. - eps (float): eps to avoid NaN. 
- """ - pred_ctrx = (pred[:, 0] + pred[:, 2]) * 0.5 - pred_ctry = (pred[:, 1] + pred[:, 3]) * 0.5 - pred_w = pred[:, 2] - pred[:, 0] - pred_h = pred[:, 3] - pred[:, 1] - with torch.no_grad(): - target_ctrx = (target[:, 0] + target[:, 2]) * 0.5 - target_ctry = (target[:, 1] + target[:, 3]) * 0.5 - target_w = target[:, 2] - target[:, 0] - target_h = target[:, 3] - target[:, 1] - - dx = target_ctrx - pred_ctrx - dy = target_ctry - pred_ctry - - loss_dx = 1 - torch.max( - (target_w - 2 * dx.abs()) / - (target_w + 2 * dx.abs() + eps), torch.zeros_like(dx)) - loss_dy = 1 - torch.max( - (target_h - 2 * dy.abs()) / - (target_h + 2 * dy.abs() + eps), torch.zeros_like(dy)) - loss_dw = 1 - torch.min(target_w / (pred_w + eps), pred_w / - (target_w + eps)) - loss_dh = 1 - torch.min(target_h / (pred_h + eps), pred_h / - (target_h + eps)) - # view(..., -1) does not work for empty tensor - loss_comb = torch.stack([loss_dx, loss_dy, loss_dw, loss_dh], - dim=-1).flatten(1) - - loss = torch.where(loss_comb < beta, 0.5 * loss_comb * loss_comb / beta, - loss_comb - 0.5 * beta) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def giou_loss(pred, target, eps=1e-7): - r"""`Generalized Intersection over Union: A Metric and A Loss for Bounding - Box Regression `_. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - - Return: - Tensor: Loss tensor. - """ - gious = bbox_overlaps(pred, target, mode='giou', is_aligned=True, eps=eps) - loss = 1 - gious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def diou_loss(pred, target, eps=1e-7): - r"""`Implementation of Distance-IoU Loss: Faster and Better - Learning for Bounding Box Regression, https://arxiv.org/abs/1911.08287`_. - - Code is modified from https://github.com/Zzh-tju/DIoU. - - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - # DIoU - dious = ious - rho2 / c2 - loss = 1 - dious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def ciou_loss(pred, target, eps=1e-7): - r"""`Implementation of paper `Enhancing Geometric Factors into - Model Learning and Inference for Object Detection and Instance - Segmentation `_. - - Code is modified from https://github.com/Zzh-tju/CIoU. 
- - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - factor = 4 / math.pi**2 - v = factor * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - - with torch.no_grad(): - alpha = (ious > 0.5).float() * v / (1 - ious + v) - - # CIoU - cious = ious - (rho2 / c2 + alpha * v) - loss = 1 - cious.clamp(min=-1.0, max=1.0) - return loss - - -@LOSSES.register_module() -class IoULoss(nn.Module): - """IoULoss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - - Args: - linear (bool): If True, use linear scale of loss else determined - by mode. Default: False. - eps (float): Eps to avoid log(0). - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Weight of loss. - mode (str): Loss scaling mode, including "linear", "square", and "log". - Default: 'log' - """ - - def __init__(self, - linear=False, - eps=1e-6, - reduction='mean', - loss_weight=1.0, - mode='log'): - super(IoULoss, self).__init__() - assert mode in ['linear', 'square', 'log'] - if linear: - mode = 'linear' - warnings.warn('DeprecationWarning: Setting "linear=True" in ' - 'IOULoss is deprecated, please use "mode=`linear`" ' - 'instead.') - self.mode = mode - self.linear = linear - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. Options are "none", "mean" and "sum". 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if (weight is not None) and (not torch.any(weight > 0)) and ( - reduction != 'none'): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # iou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * iou_loss( - pred, - target, - weight, - mode=self.mode, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class BoundedIoULoss(nn.Module): - - def __init__(self, beta=0.2, eps=1e-3, reduction='mean', loss_weight=1.0): - super(BoundedIoULoss, self).__init__() - self.beta = beta - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss = self.loss_weight * bounded_iou_loss( - pred, - target, - weight, - beta=self.beta, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class GIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(GIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * giou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class DIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(DIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - 
loss = self.loss_weight * diou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class CIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(CIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - if pred.dim() == weight.dim() + 1: - weight = weight.unsqueeze(1) - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * ciou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss diff --git a/cv/detection/yolof/pytorch/mmdet/models/losses/utils.py b/cv/detection/yolof/pytorch/mmdet/models/losses/utils.py deleted file mode 100755 index 778237eb..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/losses/utils.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import mmcv -import torch -import torch.nn.functional as F - - -def reduce_loss(loss, reduction): - """Reduce loss as specified. - - Args: - loss (Tensor): Elementwise loss tensor. - reduction (str): Options are "none", "mean" and "sum". - - Return: - Tensor: Reduced loss tensor. - """ - reduction_enum = F._Reduction.get_enum(reduction) - # none: 0, elementwise_mean:1, sum: 2 - if reduction_enum == 0: - return loss - elif reduction_enum == 1: - return loss.mean() - elif reduction_enum == 2: - return loss.sum() - - -@mmcv.jit(derivate=True, coderize=True) -def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None): - """Apply element-wise weight and reduce loss. - - Args: - loss (Tensor): Element-wise loss. - weight (Tensor): Element-wise weights. - reduction (str): Same as built-in losses of PyTorch. - avg_factor (float): Average factor when computing the mean of losses. - - Returns: - Tensor: Processed loss values. - """ - # if weight is specified, apply element-wise weight - if weight is not None: - loss = loss * weight - - # if avg_factor is not specified, just reduce the loss - if avg_factor is None: - loss = reduce_loss(loss, reduction) - else: - # if reduction is mean, then average the loss by avg_factor - if reduction == 'mean': - # Avoid causing ZeroDivisionError when avg_factor is 0.0, - # i.e., all labels of an image belong to ignore index. - eps = torch.finfo(torch.float32).eps - loss = loss.sum() / (avg_factor + eps) - # if reduction is 'none', then do nothing, otherwise raise an error - elif reduction != 'none': - raise ValueError('avg_factor can not be used with reduction="sum"') - return loss - - -def weighted_loss(loss_func): - """Create a weighted version of a given loss function. - - To use this decorator, the loss function must have the signature like - `loss_func(pred, target, **kwargs)`. The function only needs to compute - element-wise loss without any reduction. This decorator will add weight - and reduction arguments to the function. 
The decorated function will have - the signature like `loss_func(pred, target, weight=None, reduction='mean', - avg_factor=None, **kwargs)`. - - :Example: - - >>> import torch - >>> @weighted_loss - >>> def l1_loss(pred, target): - >>> return (pred - target).abs() - - >>> pred = torch.Tensor([0, 2, 3]) - >>> target = torch.Tensor([1, 1, 1]) - >>> weight = torch.Tensor([1, 0, 1]) - - >>> l1_loss(pred, target) - tensor(1.3333) - >>> l1_loss(pred, target, weight) - tensor(1.) - >>> l1_loss(pred, target, reduction='none') - tensor([1., 1., 2.]) - >>> l1_loss(pred, target, weight, avg_factor=2) - tensor(1.5000) - """ - - @functools.wraps(loss_func) - def wrapper(pred, - target, - weight=None, - reduction='mean', - avg_factor=None, - **kwargs): - # get element-wise loss - loss = loss_func(pred, target, **kwargs) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - return wrapper diff --git a/cv/detection/yolof/pytorch/mmdet/models/necks/__init__.py b/cv/detection/yolof/pytorch/mmdet/models/necks/__init__.py deleted file mode 100755 index 0b1a9d7c..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/necks/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -from .dilated_encoder import DilatedEncoder - -__all__ = [ - 'DilatedEncoder' -] diff --git a/cv/detection/yolof/pytorch/mmdet/models/necks/dilated_encoder.py b/cv/detection/yolof/pytorch/mmdet/models/necks/dilated_encoder.py deleted file mode 100755 index 79a8f4bb..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/necks/dilated_encoder.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import (ConvModule, caffe2_xavier_init, constant_init, is_norm, - normal_init) -from torch.nn import BatchNorm2d - -from ..builder import NECKS - - -class Bottleneck(nn.Module): - """Bottleneck block for DilatedEncoder used in `YOLOF. - - `. - - The Bottleneck contains three ConvLayers and one residual connection. - - Args: - in_channels (int): The number of input channels. - mid_channels (int): The number of middle output channels. - dilation (int): Dilation rate. - norm_cfg (dict): Dictionary to construct and config norm layer. - """ - - def __init__(self, - in_channels, - mid_channels, - dilation, - norm_cfg=dict(type='BN', requires_grad=True)): - super(Bottleneck, self).__init__() - self.conv1 = ConvModule( - in_channels, mid_channels, 1, norm_cfg=norm_cfg) - self.conv2 = ConvModule( - mid_channels, - mid_channels, - 3, - padding=dilation, - dilation=dilation, - norm_cfg=norm_cfg) - self.conv3 = ConvModule( - mid_channels, in_channels, 1, norm_cfg=norm_cfg) - - def forward(self, x): - identity = x - out = self.conv1(x) - out = self.conv2(out) - out = self.conv3(out) - out = out + identity - return out - - -@NECKS.register_module() -class DilatedEncoder(nn.Module): - """Dilated Encoder for YOLOF `. - - This module contains two types of components: - - the original FPN lateral convolution layer and fpn convolution layer, - which are 1x1 conv + 3x3 conv - - the dilated residual block - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - block_mid_channels (int): The number of middle block output channels - num_residual_blocks (int): The number of residual blocks. - block_dilations (list): The list of residual blocks dilation. 
- """ - - def __init__(self, in_channels, out_channels, block_mid_channels, - num_residual_blocks, block_dilations): - super(DilatedEncoder, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.block_mid_channels = block_mid_channels - self.num_residual_blocks = num_residual_blocks - self.block_dilations = block_dilations - self._init_layers() - - def _init_layers(self): - self.lateral_conv = nn.Conv2d( - self.in_channels, self.out_channels, kernel_size=1) - self.lateral_norm = BatchNorm2d(self.out_channels) - self.fpn_conv = nn.Conv2d( - self.out_channels, self.out_channels, kernel_size=3, padding=1) - self.fpn_norm = BatchNorm2d(self.out_channels) - encoder_blocks = [] - for i in range(self.num_residual_blocks): - dilation = self.block_dilations[i] - encoder_blocks.append( - Bottleneck( - self.out_channels, - self.block_mid_channels, - dilation=dilation)) - self.dilated_encoder_blocks = nn.Sequential(*encoder_blocks) - - def init_weights(self): - caffe2_xavier_init(self.lateral_conv) - caffe2_xavier_init(self.fpn_conv) - for m in [self.lateral_norm, self.fpn_norm]: - constant_init(m, 1) - for m in self.dilated_encoder_blocks.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, mean=0, std=0.01) - if is_norm(m): - constant_init(m, 1) - - def forward(self, feature): - out = self.lateral_norm(self.lateral_conv(feature[-1])) - out = self.fpn_norm(self.fpn_conv(out)) - return self.dilated_encoder_blocks(out), diff --git a/cv/detection/yolof/pytorch/mmdet/models/utils/__init__.py b/cv/detection/yolof/pytorch/mmdet/models/utils/__init__.py deleted file mode 100755 index bacd833c..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .res_layer import ResLayer -__all__ = [ - 'ResLayer'] - diff --git a/cv/detection/yolof/pytorch/mmdet/models/utils/res_layer.py b/cv/detection/yolof/pytorch/mmdet/models/utils/res_layer.py deleted file mode 100755 index 5c3e89fb..00000000 --- a/cv/detection/yolof/pytorch/mmdet/models/utils/res_layer.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule, Sequential -from torch import nn as nn - - -class ResLayer(Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. 
Default: True - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if downsample_first: - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - else: # downsample_first=False is for HourglassModule - for _ in range(num_blocks - 1): - layers.append( - block( - inplanes=inplanes, - planes=inplanes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) - - -class SimplifiedBasicBlock(BaseModule): - """Simplified version of original basic residual block. This is used in - `SCNet `_. - - - Norm layer is now optional - - Last ReLU in forward function is removed - """ - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_fg=None): - super(SimplifiedBasicBlock, self).__init__(init_fg) - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert not with_cp, 'Not implemented yet.' 
- self.with_norm = norm_cfg is not None - with_bias = True if norm_cfg is None else False - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=with_bias) - if self.with_norm: - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, planes, postfix=1) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=with_bias) - if self.with_norm: - self.norm2_name, norm2 = build_norm_layer( - norm_cfg, planes, postfix=2) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) if self.with_norm else None - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) if self.with_norm else None - - def forward(self, x): - """Forward function.""" - - identity = x - - out = self.conv1(x) - if self.with_norm: - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - if self.with_norm: - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out diff --git a/cv/detection/yolof/pytorch/mmdet/utils/__init__.py b/cv/detection/yolof/pytorch/mmdet/utils/__init__.py deleted file mode 100755 index f57acb5f..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .collect_env import collect_env -from .compat_config import compat_cfg -from .logger import get_caller_name, get_root_logger, log_img_scale -from .memory import AvoidCUDAOOM, AvoidOOM -from .misc import find_latest_checkpoint, update_data_root -from .replace_cfg_vals import replace_cfg_vals -from .setup_env import setup_multi_processes -from .split_batch import split_batch -from .util_distribution import build_ddp, build_dp, get_device - -__all__ = [ - 'get_root_logger', 'collect_env', 'find_latest_checkpoint', - 'update_data_root', 'setup_multi_processes', 'get_caller_name', - 'log_img_scale', 'compat_cfg', 'split_batch', 'build_ddp', 'build_dp', - 'get_device', 'replace_cfg_vals', 'AvoidOOM', 'AvoidCUDAOOM' -] diff --git a/cv/detection/yolof/pytorch/mmdet/utils/collect_env.py b/cv/detection/yolof/pytorch/mmdet/utils/collect_env.py deleted file mode 100755 index 97e25c0e..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/collect_env.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import collect_env as collect_base_env -from mmcv.utils import get_git_hash - -import mmdet - - -def collect_env(): - """Collect the information of the running environments.""" - env_info = collect_base_env() - env_info['MMDetection'] = mmdet.__version__ + '+' + get_git_hash()[:7] - return env_info - - -if __name__ == '__main__': - for name, val in collect_env().items(): - print(f'{name}: {val}') diff --git a/cv/detection/yolof/pytorch/mmdet/utils/compat_config.py b/cv/detection/yolof/pytorch/mmdet/utils/compat_config.py deleted file mode 100755 index 05aa37dc..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/compat_config.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import warnings - -from mmcv import ConfigDict - - -def compat_cfg(cfg): - """This function would modify some filed to keep the compatibility of - config. - - For example, it will move some args which will be deprecated to the correct - fields. - """ - cfg = copy.deepcopy(cfg) - cfg = compat_imgs_per_gpu(cfg) - cfg = compat_loader_args(cfg) - cfg = compat_runner_args(cfg) - return cfg - - -def compat_runner_args(cfg): - if 'runner' not in cfg: - cfg.runner = ConfigDict({ - 'type': 'EpochBasedRunner', - 'max_epochs': cfg.total_epochs - }) - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - else: - if 'total_epochs' in cfg: - assert cfg.total_epochs == cfg.runner.max_epochs - return cfg - - -def compat_imgs_per_gpu(cfg): - cfg = copy.deepcopy(cfg) - if 'imgs_per_gpu' in cfg.data: - warnings.warn('"imgs_per_gpu" is deprecated in MMDet V2.0. ' - 'Please use "samples_per_gpu" instead') - if 'samples_per_gpu' in cfg.data: - warnings.warn( - f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and ' - f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"' - f'={cfg.data.imgs_per_gpu} is used in this experiments') - else: - warnings.warn('Automatically set "samples_per_gpu"="imgs_per_gpu"=' - f'{cfg.data.imgs_per_gpu} in this experiments') - cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu - return cfg - - -def compat_loader_args(cfg): - """Deprecated sample_per_gpu in cfg.data.""" - - cfg = copy.deepcopy(cfg) - if 'train_dataloader' not in cfg.data: - cfg.data['train_dataloader'] = ConfigDict() - if 'val_dataloader' not in cfg.data: - cfg.data['val_dataloader'] = ConfigDict() - if 'test_dataloader' not in cfg.data: - cfg.data['test_dataloader'] = ConfigDict() - - # special process for train_dataloader - if 'samples_per_gpu' in cfg.data: - - samples_per_gpu = cfg.data.pop('samples_per_gpu') - assert 'samples_per_gpu' not in \ - cfg.data.train_dataloader, ('`samples_per_gpu` are set ' - 'in `data` field and ` ' - 'data.train_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.train_dataloader`. ') - cfg.data.train_dataloader['samples_per_gpu'] = samples_per_gpu - - if 'persistent_workers' in cfg.data: - - persistent_workers = cfg.data.pop('persistent_workers') - assert 'persistent_workers' not in \ - cfg.data.train_dataloader, ('`persistent_workers` are set ' - 'in `data` field and ` ' - 'data.train_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.train_dataloader`. ') - cfg.data.train_dataloader['persistent_workers'] = persistent_workers - - if 'workers_per_gpu' in cfg.data: - - workers_per_gpu = cfg.data.pop('workers_per_gpu') - cfg.data.train_dataloader['workers_per_gpu'] = workers_per_gpu - cfg.data.val_dataloader['workers_per_gpu'] = workers_per_gpu - cfg.data.test_dataloader['workers_per_gpu'] = workers_per_gpu - - # special process for val_dataloader - if 'samples_per_gpu' in cfg.data.val: - # keep default value of `sample_per_gpu` is 1 - assert 'samples_per_gpu' not in \ - cfg.data.val_dataloader, ('`samples_per_gpu` are set ' - 'in `data.val` field and ` ' - 'data.val_dataloader` at ' - 'the same time. ' - 'Please only set it in ' - '`data.val_dataloader`. 
') - cfg.data.val_dataloader['samples_per_gpu'] = \ - cfg.data.val.pop('samples_per_gpu') - # special process for val_dataloader - - # in case the test dataset is concatenated - if isinstance(cfg.data.test, dict): - if 'samples_per_gpu' in cfg.data.test: - assert 'samples_per_gpu' not in \ - cfg.data.test_dataloader, ('`samples_per_gpu` are set ' - 'in `data.test` field and ` ' - 'data.test_dataloader` ' - 'at the same time. ' - 'Please only set it in ' - '`data.test_dataloader`. ') - - cfg.data.test_dataloader['samples_per_gpu'] = \ - cfg.data.test.pop('samples_per_gpu') - - elif isinstance(cfg.data.test, list): - for ds_cfg in cfg.data.test: - if 'samples_per_gpu' in ds_cfg: - assert 'samples_per_gpu' not in \ - cfg.data.test_dataloader, ('`samples_per_gpu` are set ' - 'in `data.test` field and ` ' - 'data.test_dataloader` at' - ' the same time. ' - 'Please only set it in ' - '`data.test_dataloader`. ') - samples_per_gpu = max( - [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test]) - cfg.data.test_dataloader['samples_per_gpu'] = samples_per_gpu - - return cfg diff --git a/cv/detection/yolof/pytorch/mmdet/utils/contextmanagers.py b/cv/detection/yolof/pytorch/mmdet/utils/contextmanagers.py deleted file mode 100755 index fa12bfca..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/contextmanagers.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import asyncio -import contextlib -import logging -import os -import time -from typing import List - -import torch - -logger = logging.getLogger(__name__) - -DEBUG_COMPLETED_TIME = bool(os.environ.get('DEBUG_COMPLETED_TIME', False)) - - -@contextlib.asynccontextmanager -async def completed(trace_name='', - name='', - sleep_interval=0.05, - streams: List[torch.cuda.Stream] = None): - """Async context manager that waits for work to complete on given CUDA - streams.""" - if not torch.cuda.is_available(): - yield - return - - stream_before_context_switch = torch.cuda.current_stream() - if not streams: - streams = [stream_before_context_switch] - else: - streams = [s if s else stream_before_context_switch for s in streams] - - end_events = [ - torch.cuda.Event(enable_timing=DEBUG_COMPLETED_TIME) for _ in streams - ] - - if DEBUG_COMPLETED_TIME: - start = torch.cuda.Event(enable_timing=True) - stream_before_context_switch.record_event(start) - - cpu_start = time.monotonic() - logger.debug('%s %s starting, streams: %s', trace_name, name, streams) - grad_enabled_before = torch.is_grad_enabled() - try: - yield - finally: - current_stream = torch.cuda.current_stream() - assert current_stream == stream_before_context_switch - - if DEBUG_COMPLETED_TIME: - cpu_end = time.monotonic() - for i, stream in enumerate(streams): - event = end_events[i] - stream.record_event(event) - - grad_enabled_after = torch.is_grad_enabled() - - # observed change of torch.is_grad_enabled() during concurrent run of - # async_test_bboxes code - assert (grad_enabled_before == grad_enabled_after - ), 'Unexpected is_grad_enabled() value change' - - are_done = [e.query() for e in end_events] - logger.debug('%s %s completed: %s streams: %s', trace_name, name, - are_done, streams) - with torch.cuda.stream(stream_before_context_switch): - while not all(are_done): - await asyncio.sleep(sleep_interval) - are_done = [e.query() for e in end_events] - logger.debug( - '%s %s completed: %s streams: %s', - trace_name, - name, - are_done, - streams, - ) - - current_stream = torch.cuda.current_stream() - assert current_stream == 
stream_before_context_switch - - if DEBUG_COMPLETED_TIME: - cpu_time = (cpu_end - cpu_start) * 1000 - stream_times_ms = '' - for i, stream in enumerate(streams): - elapsed_time = start.elapsed_time(end_events[i]) - stream_times_ms += f' {stream} {elapsed_time:.2f} ms' - logger.info('%s %s %.2f ms %s', trace_name, name, cpu_time, - stream_times_ms) - - -@contextlib.asynccontextmanager -async def concurrent(streamqueue: asyncio.Queue, - trace_name='concurrent', - name='stream'): - """Run code concurrently in different streams. - - :param streamqueue: asyncio.Queue instance. - - Queue tasks define the pool of streams used for concurrent execution. - """ - if not torch.cuda.is_available(): - yield - return - - initial_stream = torch.cuda.current_stream() - - with torch.cuda.stream(initial_stream): - stream = await streamqueue.get() - assert isinstance(stream, torch.cuda.Stream) - - try: - with torch.cuda.stream(stream): - logger.debug('%s %s is starting, stream: %s', trace_name, name, - stream) - yield - current = torch.cuda.current_stream() - assert current == stream - logger.debug('%s %s has finished, stream: %s', trace_name, - name, stream) - finally: - streamqueue.task_done() - streamqueue.put_nowait(stream) diff --git a/cv/detection/yolof/pytorch/mmdet/utils/logger.py b/cv/detection/yolof/pytorch/mmdet/utils/logger.py deleted file mode 100755 index 485f641b..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/logger.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import logging - -from mmcv.utils import get_logger - - -def get_root_logger(log_file=None, log_level=logging.INFO): - """Get root logger. - - Args: - log_file (str, optional): File path of log. Defaults to None. - log_level (int, optional): The level of logger. - Defaults to logging.INFO. - - Returns: - :obj:`logging.Logger`: The obtained logger - """ - logger = get_logger(name='mmdet', log_file=log_file, log_level=log_level) - - return logger - - -def get_caller_name(): - """Get name of caller method.""" - # this_func_frame = inspect.stack()[0][0] # i.e., get_caller_name - # callee_frame = inspect.stack()[1][0] # e.g., log_img_scale - caller_frame = inspect.stack()[2][0] # e.g., caller of log_img_scale - caller_method = caller_frame.f_code.co_name - try: - caller_class = caller_frame.f_locals['self'].__class__.__name__ - return f'{caller_class}.{caller_method}' - except KeyError: # caller is a function - return caller_method - - -def log_img_scale(img_scale, shape_order='hw', skip_square=False): - """Log image size. - - Args: - img_scale (tuple): Image size to be logged. - shape_order (str, optional): The order of image shape. - 'hw' for (height, width) and 'wh' for (width, height). - Defaults to 'hw'. - skip_square (bool, optional): Whether to skip logging for square - img_scale. Defaults to False. - - Returns: - bool: Whether to have done logging. 
- """ - if shape_order == 'hw': - height, width = img_scale - elif shape_order == 'wh': - width, height = img_scale - else: - raise ValueError(f'Invalid shape_order {shape_order}.') - - if skip_square and (height == width): - return False - - logger = get_root_logger() - caller = get_caller_name() - logger.info(f'image shape: height={height}, width={width} in {caller}') - - return True diff --git a/cv/detection/yolof/pytorch/mmdet/utils/memory.py b/cv/detection/yolof/pytorch/mmdet/utils/memory.py deleted file mode 100755 index eb212bca..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/memory.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from collections import abc -from contextlib import contextmanager -from functools import wraps - -import torch - -from mmdet.utils import get_root_logger - - -def cast_tensor_type(inputs, src_type=None, dst_type=None): - """Recursively convert Tensor in inputs from ``src_type`` to ``dst_type``. - - Args: - inputs: Inputs that to be casted. - src_type (torch.dtype | torch.device): Source type. - src_type (torch.dtype | torch.device): Destination type. - - Returns: - The same type with inputs, but all contained Tensors have been cast. - """ - assert dst_type is not None - if isinstance(inputs, torch.Tensor): - if isinstance(dst_type, torch.device): - # convert Tensor to dst_device - if hasattr(inputs, 'to') and \ - hasattr(inputs, 'device') and \ - (inputs.device == src_type or src_type is None): - return inputs.to(dst_type) - else: - return inputs - else: - # convert Tensor to dst_dtype - if hasattr(inputs, 'to') and \ - hasattr(inputs, 'dtype') and \ - (inputs.dtype == src_type or src_type is None): - return inputs.to(dst_type) - else: - return inputs - # we need to ensure that the type of inputs to be casted are the same - # as the argument `src_type`. - elif isinstance(inputs, abc.Mapping): - return type(inputs)({ - k: cast_tensor_type(v, src_type=src_type, dst_type=dst_type) - for k, v in inputs.items() - }) - elif isinstance(inputs, abc.Iterable): - return type(inputs)( - cast_tensor_type(item, src_type=src_type, dst_type=dst_type) - for item in inputs) - # TODO: Currently not supported - # elif isinstance(inputs, InstanceData): - # for key, value in inputs.items(): - # inputs[key] = cast_tensor_type( - # value, src_type=src_type, dst_type=dst_type) - # return inputs - else: - return inputs - - -@contextmanager -def _ignore_torch_cuda_oom(): - """A context which ignores CUDA OOM exception from pytorch. - - Code is modified from - # noqa: E501 - """ - try: - yield - except RuntimeError as e: - # NOTE: the string may change? - if 'CUDA out of memory. ' in str(e): - pass - else: - raise - - -class AvoidOOM: - """Try to convert inputs to FP16 and CPU if got a PyTorch's CUDA Out of - Memory error. It will do the following steps: - - 1. First retry after calling `torch.cuda.empty_cache()`. - 2. If that still fails, it will then retry by converting inputs - to FP16. - 3. If that still fails trying to convert inputs to CPUs. - In this case, it expects the function to dispatch to - CPU implementation. - - Args: - to_cpu (bool): Whether to convert outputs to CPU if get an OOM - error. This will slow down the code significantly. - Defaults to True. - test (bool): Skip `_ignore_torch_cuda_oom` operate that can use - lightweight data in unit test, only used in - test unit. Defaults to False. 
- - Examples: - >>> from mmdet.utils.memory import AvoidOOM - >>> AvoidCUDAOOM = AvoidOOM() - >>> output = AvoidOOM.retry_if_cuda_oom( - >>> some_torch_function)(input1, input2) - >>> # To use as a decorator - >>> # from mmdet.utils import AvoidCUDAOOM - >>> @AvoidCUDAOOM.retry_if_cuda_oom - >>> def function(*args, **kwargs): - >>> return None - ``` - - Note: - 1. The output may be on CPU even if inputs are on GPU. Processing - on CPU will slow down the code significantly. - 2. When converting inputs to CPU, it will only look at each argument - and check if it has `.device` and `.to` for conversion. Nested - structures of tensors are not supported. - 3. Since the function might be called more than once, it has to be - stateless. - """ - - def __init__(self, to_cpu=True, test=False): - self.to_cpu = to_cpu - self.test = test - - def retry_if_cuda_oom(self, func): - """Makes a function retry itself after encountering pytorch's CUDA OOM - error. - - The implementation logic is referred to - https://github.com/facebookresearch/detectron2/blob/main/detectron2/utils/memory.py - - Args: - func: a stateless callable that takes tensor-like objects - as arguments. - Returns: - func: a callable which retries `func` if OOM is encountered. - """ # noqa: W605 - - @wraps(func) - def wrapped(*args, **kwargs): - - # raw function - if not self.test: - with _ignore_torch_cuda_oom(): - return func(*args, **kwargs) - - # Clear cache and retry - torch.cuda.empty_cache() - with _ignore_torch_cuda_oom(): - return func(*args, **kwargs) - - # get the type and device of first tensor - dtype, device = None, None - values = args + tuple(kwargs.values()) - for value in values: - if isinstance(value, torch.Tensor): - dtype = value.dtype - device = value.device - break - if dtype is None or device is None: - raise ValueError('There is no tensor in the inputs, ' - 'cannot get dtype and device.') - - # Convert to FP16 - fp16_args = cast_tensor_type(args, dst_type=torch.half) - fp16_kwargs = cast_tensor_type(kwargs, dst_type=torch.half) - logger = get_root_logger() - logger.warning(f'Attempting to copy inputs of {str(func)} ' - 'to FP16 due to CUDA OOM') - - # get input tensor type, the output type will same as - # the first parameter type. - with _ignore_torch_cuda_oom(): - output = func(*fp16_args, **fp16_kwargs) - output = cast_tensor_type( - output, src_type=torch.half, dst_type=dtype) - if not self.test: - return output - logger.warning('Using FP16 still meet CUDA OOM') - - # Try on CPU. This will slow down the code significantly, - # therefore print a notice. 
- if self.to_cpu: - logger.warning(f'Attempting to copy inputs of {str(func)} ' - 'to CPU due to CUDA OOM') - cpu_device = torch.empty(0).device - cpu_args = cast_tensor_type(args, dst_type=cpu_device) - cpu_kwargs = cast_tensor_type(kwargs, dst_type=cpu_device) - - # convert outputs to GPU - with _ignore_torch_cuda_oom(): - logger.warning(f'Convert outputs to GPU (device={device})') - output = func(*cpu_args, **cpu_kwargs) - output = cast_tensor_type( - output, src_type=cpu_device, dst_type=device) - return output - - warnings.warn('Cannot convert output to GPU due to CUDA OOM, ' - 'the output is now on CPU, which might cause ' - 'errors if the output need to interact with GPU ' - 'data in subsequent operations') - logger.warning('Cannot convert output to GPU due to ' - 'CUDA OOM, the output is on CPU now.') - - return func(*cpu_args, **cpu_kwargs) - else: - # may still get CUDA OOM error - return func(*args, **kwargs) - - return wrapped - - -# To use AvoidOOM as a decorator -AvoidCUDAOOM = AvoidOOM() diff --git a/cv/detection/yolof/pytorch/mmdet/utils/misc.py b/cv/detection/yolof/pytorch/mmdet/utils/misc.py deleted file mode 100755 index 4113672a..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/misc.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import glob -import os -import os.path as osp -import warnings - -import mmcv -from mmcv.utils import print_log - - -def find_latest_checkpoint(path, suffix='pth'): - """Find the latest checkpoint from the working directory. - - Args: - path(str): The path to find checkpoints. - suffix(str): File extension. - Defaults to pth. - - Returns: - latest_path(str | None): File path of the latest checkpoint. - References: - .. [1] https://github.com/microsoft/SoftTeacher - /blob/main/ssod/utils/patch.py - """ - if not osp.exists(path): - warnings.warn('The path of checkpoints does not exist.') - return None - if osp.exists(osp.join(path, f'latest.{suffix}')): - return osp.join(path, f'latest.{suffix}') - - checkpoints = glob.glob(osp.join(path, f'*.{suffix}')) - if len(checkpoints) == 0: - warnings.warn('There are no checkpoints in the path.') - return None - latest = -1 - latest_path = None - for checkpoint in checkpoints: - count = int(osp.basename(checkpoint).split('_')[-1].split('.')[0]) - if count > latest: - latest = count - latest_path = checkpoint - return latest_path - - -def update_data_root(cfg, logger=None): - """Update data root according to env MMDET_DATASETS. - - If set env MMDET_DATASETS, update cfg.data_root according to - MMDET_DATASETS. Otherwise, using cfg.data_root as default. - - Args: - cfg (mmcv.Config): The model config need to modify - logger (logging.Logger | str | None): the way to print msg - """ - assert isinstance(cfg, mmcv.Config), \ - f'cfg got wrong type: {type(cfg)}, expected mmcv.Config' - - if 'MMDET_DATASETS' in os.environ: - dst_root = os.environ['MMDET_DATASETS'] - print_log(f'MMDET_DATASETS has been set to be {dst_root}.' 
- f'Using {dst_root} as data root.') - else: - return - - assert isinstance(cfg, mmcv.Config), \ - f'cfg got wrong type: {type(cfg)}, expected mmcv.Config' - - def update(cfg, src_str, dst_str): - for k, v in cfg.items(): - if isinstance(v, mmcv.ConfigDict): - update(cfg[k], src_str, dst_str) - if isinstance(v, str) and src_str in v: - cfg[k] = v.replace(src_str, dst_str) - - update(cfg.data, cfg.data_root, dst_root) - cfg.data_root = dst_root diff --git a/cv/detection/yolof/pytorch/mmdet/utils/replace_cfg_vals.py b/cv/detection/yolof/pytorch/mmdet/utils/replace_cfg_vals.py deleted file mode 100755 index 6ca301dc..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/replace_cfg_vals.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import re - -from mmcv.utils import Config - - -def replace_cfg_vals(ori_cfg): - """Replace the string "${key}" with the corresponding value. - - Replace the "${key}" with the value of ori_cfg.key in the config. And - support replacing the chained ${key}. Such as, replace "${key0.key1}" - with the value of cfg.key0.key1. Code is modified from `vars.py - < https://github.com/microsoft/SoftTeacher/blob/main/ssod/utils/vars.py>`_ # noqa: E501 - - Args: - ori_cfg (mmcv.utils.config.Config): - The origin config with "${key}" generated from a file. - - Returns: - updated_cfg [mmcv.utils.config.Config]: - The config with "${key}" replaced by the corresponding value. - """ - - def get_value(cfg, key): - for k in key.split('.'): - cfg = cfg[k] - return cfg - - def replace_value(cfg): - if isinstance(cfg, dict): - return {key: replace_value(value) for key, value in cfg.items()} - elif isinstance(cfg, list): - return [replace_value(item) for item in cfg] - elif isinstance(cfg, tuple): - return tuple([replace_value(item) for item in cfg]) - elif isinstance(cfg, str): - # the format of string cfg may be: - # 1) "${key}", which will be replaced with cfg.key directly - # 2) "xxx${key}xxx" or "xxx${key1}xxx${key2}xxx", - # which will be replaced with the string of the cfg.key - keys = pattern_key.findall(cfg) - values = [get_value(ori_cfg, key[2:-1]) for key in keys] - if len(keys) == 1 and keys[0] == cfg: - # the format of string cfg is "${key}" - cfg = values[0] - else: - for key, value in zip(keys, values): - # the format of string cfg is - # "xxx${key}xxx" or "xxx${key1}xxx${key2}xxx" - assert not isinstance(value, (dict, list, tuple)), \ - f'for the format of string cfg is ' \ - f"'xxxxx${key}xxxxx' or 'xxx${key}xxx${key}xxx', " \ - f"the type of the value of '${key}' " \ - f'can not be dict, list, or tuple' \ - f'but you input {type(value)} in {cfg}' - cfg = cfg.replace(key, str(value)) - return cfg - else: - return cfg - - # the pattern of string "${key}" - pattern_key = re.compile(r'\$\{[a-zA-Z\d_.]*\}') - # the type of ori_cfg._cfg_dict is mmcv.utils.config.ConfigDict - updated_cfg = Config( - replace_value(ori_cfg._cfg_dict), filename=ori_cfg.filename) - # replace the model with model_wrapper - if updated_cfg.get('model_wrapper', None) is not None: - updated_cfg.model = updated_cfg.model_wrapper - updated_cfg.pop('model_wrapper') - return updated_cfg diff --git a/cv/detection/yolof/pytorch/mmdet/utils/setup_env.py b/cv/detection/yolof/pytorch/mmdet/utils/setup_env.py deleted file mode 100755 index 6637cf87..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/setup_env.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os -import platform -import warnings - -import cv2 -import torch.multiprocessing as mp - - -def setup_multi_processes(cfg): - """Setup multi-processing environment variables.""" - # set multi-process start method as `fork` to speed up the training - if platform.system() != 'Windows': - mp_start_method = cfg.get('mp_start_method', 'fork') - current_method = mp.get_start_method(allow_none=True) - if current_method is not None and current_method != mp_start_method: - warnings.warn( - f'Multi-processing start method `{mp_start_method}` is ' - f'different from the previous setting `{current_method}`.' - f'It will be force set to `{mp_start_method}`. You can change ' - f'this behavior by changing `mp_start_method` in your config.') - mp.set_start_method(mp_start_method, force=True) - - # disable opencv multithreading to avoid system being overloaded - opencv_num_threads = cfg.get('opencv_num_threads', 0) - cv2.setNumThreads(opencv_num_threads) - - # setup OMP threads - # This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py # noqa - workers_per_gpu = cfg.data.get('workers_per_gpu', 1) - if 'train_dataloader' in cfg.data: - workers_per_gpu = \ - max(cfg.data.train_dataloader.get('workers_per_gpu', 1), - workers_per_gpu) - - if 'OMP_NUM_THREADS' not in os.environ and workers_per_gpu > 1: - omp_num_threads = 1 - warnings.warn( - f'Setting OMP_NUM_THREADS environment variable for each process ' - f'to be {omp_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['OMP_NUM_THREADS'] = str(omp_num_threads) - - # setup MKL threads - if 'MKL_NUM_THREADS' not in os.environ and workers_per_gpu > 1: - mkl_num_threads = 1 - warnings.warn( - f'Setting MKL_NUM_THREADS environment variable for each process ' - f'to be {mkl_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['MKL_NUM_THREADS'] = str(mkl_num_threads) diff --git a/cv/detection/yolof/pytorch/mmdet/utils/split_batch.py b/cv/detection/yolof/pytorch/mmdet/utils/split_batch.py deleted file mode 100755 index 0276fb33..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/split_batch.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def split_batch(img, img_metas, kwargs): - """Split data_batch by tags. - - Code is modified from - # noqa: E501 - - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys, see - :class:`mmdet.datasets.pipelines.Collect`. - kwargs (dict): Specific to concrete implementation. - - Returns: - data_groups (dict): a dict that data_batch splited by tags, - such as 'sup', 'unsup_teacher', and 'unsup_student'. 
- """ - - # only stack img in the batch - def fuse_list(obj_list, obj): - return torch.stack(obj_list) if isinstance(obj, - torch.Tensor) else obj_list - - # select data with tag from data_batch - def select_group(data_batch, current_tag): - group_flag = [tag == current_tag for tag in data_batch['tag']] - return { - k: fuse_list([vv for vv, gf in zip(v, group_flag) if gf], v) - for k, v in data_batch.items() - } - - kwargs.update({'img': img, 'img_metas': img_metas}) - kwargs.update({'tag': [meta['tag'] for meta in img_metas]}) - tags = list(set(kwargs['tag'])) - data_groups = {tag: select_group(kwargs, tag) for tag in tags} - for tag, group in data_groups.items(): - group.pop('tag') - return data_groups diff --git a/cv/detection/yolof/pytorch/mmdet/utils/util_distribution.py b/cv/detection/yolof/pytorch/mmdet/utils/util_distribution.py deleted file mode 100755 index a186bf6c..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/util_distribution.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel - -dp_factory = {'cuda': MMDataParallel, 'cpu': MMDataParallel} - -ddp_factory = {'cuda': MMDistributedDataParallel} - - -def build_dp(model, device='cuda', dim=0, *args, **kwargs): - """build DataParallel module by device type. - - if device is cuda, return a MMDataParallel model; if device is mlu, - return a MLUDataParallel model. - - Args: - model (:class:`nn.Module`): model to be parallelized. - device (str): device type, cuda, cpu or mlu. Defaults to cuda. - dim (int): Dimension used to scatter the data. Defaults to 0. - - Returns: - nn.Module: the model to be parallelized. - """ - if device == 'cuda': - model = model.cuda() - elif device == 'mlu': - from mmcv.device.mlu import MLUDataParallel - dp_factory['mlu'] = MLUDataParallel - model = model.mlu() - - return dp_factory[device](model, dim=dim, *args, **kwargs) - - -def build_ddp(model, device='cuda', *args, **kwargs): - """Build DistributedDataParallel module by device type. - - If device is cuda, return a MMDistributedDataParallel model; - if device is mlu, return a MLUDistributedDataParallel model. - - Args: - model (:class:`nn.Module`): module to be parallelized. - device (str): device type, mlu or cuda. - - Returns: - :class:`nn.Module`: the module to be parallelized - - References: - .. [1] https://pytorch.org/docs/stable/generated/torch.nn.parallel. - DistributedDataParallel.html - """ - assert device in ['cuda', 'mlu'], 'Only available for cuda or mlu devices.' 
- if device == 'cuda': - model = model.cuda() - elif device == 'mlu': - from mmcv.device.mlu import MLUDistributedDataParallel - ddp_factory['mlu'] = MLUDistributedDataParallel - model = model.mlu() - - return ddp_factory[device](model, *args, **kwargs) - - -def is_mlu_available(): - """Returns a bool indicating if MLU is currently available.""" - return hasattr(torch, 'is_mlu_available') and torch.is_mlu_available() - - -def get_device(): - """Returns an available device, cpu, cuda or mlu.""" - is_device_available = { - 'cuda': torch.cuda.is_available(), - 'mlu': is_mlu_available() - } - device_list = [k for k, v in is_device_available.items() if v] - return device_list[0] if len(device_list) == 1 else 'cpu' diff --git a/cv/detection/yolof/pytorch/mmdet/utils/util_mixins.py b/cv/detection/yolof/pytorch/mmdet/utils/util_mixins.py deleted file mode 100644 index b83b6617..00000000 --- a/cv/detection/yolof/pytorch/mmdet/utils/util_mixins.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""This module defines the :class:`NiceRepr` mixin class, which defines a -``__repr__`` and ``__str__`` method that only depend on a custom ``__nice__`` -method, which you must define. This means you only have to overload one -function instead of two. Furthermore, if the object defines a ``__len__`` -method, then the ``__nice__`` method defaults to something sensible, otherwise -it is treated as abstract and raises ``NotImplementedError``. - -To use simply have your object inherit from :class:`NiceRepr` -(multi-inheritance should be ok). - -This code was copied from the ubelt library: https://github.com/Erotemic/ubelt - -Example: - >>> # Objects that define __nice__ have a default __str__ and __repr__ - >>> class Student(NiceRepr): - ... def __init__(self, name): - ... self.name = name - ... def __nice__(self): - ... return self.name - >>> s1 = Student('Alice') - >>> s2 = Student('Bob') - >>> print(f's1 = {s1}') - >>> print(f's2 = {s2}') - s1 = - s2 = - -Example: - >>> # Objects that define __len__ have a default __nice__ - >>> class Group(NiceRepr): - ... def __init__(self, data): - ... self.data = data - ... def __len__(self): - ... return len(self.data) - >>> g = Group([1, 2, 3]) - >>> print(f'g = {g}') - g = -""" -import warnings - - -class NiceRepr: - """Inherit from this class and define ``__nice__`` to "nicely" print your - objects. - - Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function - Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``. - If the inheriting class has a ``__len__``, method then the default - ``__nice__`` method will return its length. - - Example: - >>> class Foo(NiceRepr): - ... def __nice__(self): - ... return 'info' - >>> foo = Foo() - >>> assert str(foo) == '' - >>> assert repr(foo).startswith('>> class Bar(NiceRepr): - ... pass - >>> bar = Bar() - >>> import pytest - >>> with pytest.warns(None) as record: - >>> assert 'object at' in str(bar) - >>> assert 'object at' in repr(bar) - - Example: - >>> class Baz(NiceRepr): - ... def __len__(self): - ... 
return 5 - >>> baz = Baz() - >>> assert str(baz) == '' - """ - - def __nice__(self): - """str: a "nice" summary string describing this module""" - if hasattr(self, '__len__'): - # It is a common pattern for objects to use __len__ in __nice__ - # As a convenience we define a default __nice__ for these objects - return str(len(self)) - else: - # In all other cases force the subclass to overload __nice__ - raise NotImplementedError( - f'Define the __nice__ method for {self.__class__!r}') - - def __repr__(self): - """str: the string of the module""" - try: - nice = self.__nice__() - classname = self.__class__.__name__ - return f'<{classname}({nice}) at {hex(id(self))}>' - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - def __str__(self): - """str: the string of the module""" - try: - classname = self.__class__.__name__ - nice = self.__nice__() - return f'<{classname}({nice})>' - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) diff --git a/cv/detection/yolof/pytorch/mmdet/version.py b/cv/detection/yolof/pytorch/mmdet/version.py deleted file mode 100755 index 56e9b075..00000000 --- a/cv/detection/yolof/pytorch/mmdet/version.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -__version__ = '2.25.0' -short_version = __version__ - - -def parse_version_info(version_str): - version_info = [] - for x in version_str.split('.'): - if x.isdigit(): - version_info.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - version_info.append(int(patch_version[0])) - version_info.append(f'rc{patch_version[1]}') - return tuple(version_info) - - -version_info = parse_version_info(__version__) diff --git a/cv/detection/yolof/pytorch/requirements.txt b/cv/detection/yolof/pytorch/requirements.txt deleted file mode 100755 index f754f18b..00000000 --- a/cv/detection/yolof/pytorch/requirements.txt +++ /dev/null @@ -1,10 +0,0 @@ -# These must be installed before building mmdetection -cython -numpy -matplotlib -numpy -pycocotools -six -terminaltables -addict -yapf \ No newline at end of file diff --git a/cv/detection/yolof/pytorch/setup.cfg b/cv/detection/yolof/pytorch/setup.cfg deleted file mode 100755 index 56407a12..00000000 --- a/cv/detection/yolof/pytorch/setup.cfg +++ /dev/null @@ -1,21 +0,0 @@ -[isort] -line_length = 79 -multi_line_output = 0 -extra_standard_library = setuptools -known_first_party = mmdet -known_third_party = PIL,asynctest,cityscapesscripts,cv2,gather_models,matplotlib,mmcv,numpy,onnx,onnxruntime,pycocotools,pytest,pytorch_sphinx_theme,requests,scipy,seaborn,six,terminaltables,torch,ts,yaml -no_lines_before = STDLIB,LOCALFOLDER -default_section = THIRDPARTY - -[yapf] -BASED_ON_STYLE = pep8 -BLANK_LINE_BEFORE_NESTED_CLASS_OR_DEF = true -SPLIT_BEFORE_EXPRESSION_AFTER_OPENING_PAREN = true - -# ignore-words-list needs to be lowercase format. 
For example, if we want to -# ignore word "BA", then we need to append "ba" to ignore-words-list rather -# than "BA" -[codespell] -skip = *.ipynb -quiet-level = 3 -ignore-words-list = patten,nd,ty,mot,hist,formating,winn,gool,datas,wan,confids,TOOD,tood,ba diff --git a/cv/detection/yolof/pytorch/setup.py b/cv/detection/yolof/pytorch/setup.py deleted file mode 100755 index e9ea12f3..00000000 --- a/cv/detection/yolof/pytorch/setup.py +++ /dev/null @@ -1,353 +0,0 @@ -import glob -import os -import re -from pkg_resources import DistributionNotFound, get_distribution -from setuptools import find_packages, setup - -EXT_TYPE = '' -try: - import torch - if torch.__version__ == 'parrots': - from parrots.utils.build_extension import BuildExtension - EXT_TYPE = 'parrots' - else: - from torch.utils.cpp_extension import BuildExtension - EXT_TYPE = 'pytorch' - cmd_class = {'build_ext': BuildExtension} -except ModuleNotFoundError: - cmd_class = {} - print('Skip building ext ops due to the absence of torch.') - - -def choose_requirement(primary, secondary): - """If some version of primary requirement installed, return primary, else - return secondary.""" - try: - name = re.split(r'[!<>=]', primary)[0] - get_distribution(name) - except DistributionNotFound: - return secondary - - return str(primary) - - -def get_version(): - version_file = 'mmcv/version.py' - with open(version_file, 'r', encoding='utf-8') as f: - exec(compile(f.read(), version_file, 'exec')) - version = locals()['__version__'] - local_version_identifier = os.environ.get('MMCV_LOCAL_VERSION_IDENTIFIER', '') - if local_version_identifier != '': - version += '+' + local_version_identifier - return version - - -def parse_requirements(fname='requirements/runtime.txt', with_version=True): - """Parse the package dependencies listed in a requirements file but strips - specific versioning information. 
- - Args: - fname (str): path to requirements file - with_version (bool, default=False): if True include version specs - - Returns: - List[str]: list of requirements items - - CommandLine: - python -c "import setup; print(setup.parse_requirements())" - """ - import sys - from os.path import exists - require_fpath = fname - - def parse_line(line): - """Parse information from a line in a requirements text file.""" - if line.startswith('-r '): - # Allow specifying requirements in other files - target = line.split(' ')[1] - for info in parse_require_file(target): - yield info - else: - info = {'line': line} - if line.startswith('-e '): - info['package'] = line.split('#egg=')[1] - else: - # Remove versioning from the package - pat = '(' + '|'.join(['>=', '==', '>']) + ')' - parts = re.split(pat, line, maxsplit=1) - parts = [p.strip() for p in parts] - - info['package'] = parts[0] - if len(parts) > 1: - op, rest = parts[1:] - if ';' in rest: - # Handle platform specific dependencies - # http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-platform-specific-dependencies - version, platform_deps = map(str.strip, - rest.split(';')) - info['platform_deps'] = platform_deps - else: - version = rest # NOQA - info['version'] = (op, version) - yield info - - def parse_require_file(fpath): - with open(fpath, 'r') as f: - for line in f.readlines(): - line = line.strip() - if line and not line.startswith('#'): - for info in parse_line(line): - yield info - - def gen_packages_items(): - if exists(require_fpath): - for info in parse_require_file(require_fpath): - parts = [info['package']] - if with_version and 'version' in info: - parts.extend(info['version']) - if not sys.version.startswith('3.4'): - # apparently package_deps are broken in 3.4 - platform_deps = info.get('platform_deps') - if platform_deps is not None: - parts.append(';' + platform_deps) - item = ''.join(parts) - yield item - - packages = list(gen_packages_items()) - return packages - - -install_requires = parse_requirements() - -try: - # OpenCV installed via conda. 
- import cv2 # NOQA: F401 - major, minor, *rest = cv2.__version__.split('.') - if int(major) < 3: - raise RuntimeError( - f'OpenCV >=3 is required but {cv2.__version__} is installed') -except ImportError: - # If first not installed install second package - CHOOSE_INSTALL_REQUIRES = [('opencv-python-headless>=3', - 'opencv-python>=3')] - for main, secondary in CHOOSE_INSTALL_REQUIRES: - install_requires.append(choose_requirement(main, secondary)) - - -def get_extensions(): - extensions = [] - - if os.getenv('MMCV_WITH_TRT', '0') != '0': - ext_name = 'mmcv._ext_trt' - from torch.utils.cpp_extension import include_paths, library_paths - library_dirs = [] - libraries = [] - include_dirs = [] - tensorrt_path = os.getenv('TENSORRT_DIR', '0') - tensorrt_lib_path = glob.glob( - os.path.join(tensorrt_path, 'targets', '*', 'lib'))[0] - library_dirs += [tensorrt_lib_path] - libraries += ['nvinfer', 'nvparsers', 'nvinfer_plugin'] - libraries += ['cudart'] - define_macros = [] - extra_compile_args = {'cxx': []} - - include_path = os.path.abspath('./mmcv/ops/csrc/common/cuda') - include_trt_path = os.path.abspath('./mmcv/ops/csrc/tensorrt') - include_dirs.append(include_path) - include_dirs.append(include_trt_path) - include_dirs.append(os.path.join(tensorrt_path, 'include')) - include_dirs += include_paths(cuda=True) - - op_files = glob.glob('./mmcv/ops/csrc/tensorrt/plugins/*') - define_macros += [('MMCV_WITH_CUDA', None)] - define_macros += [('MMCV_WITH_TRT', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - library_dirs += library_paths(cuda=True) - - from setuptools import Extension - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - language='c++', - library_dirs=library_dirs, - libraries=libraries) - extensions.append(ext_ops) - - if os.getenv('MMCV_WITH_OPS', '0') == '0': - return extensions - - if EXT_TYPE == 'parrots': - ext_name = 'mmcv._ext' - from parrots.utils.build_extension import Extension - # new parrots op impl do not use MMCV_USE_PARROTS - # define_macros = [('MMCV_USE_PARROTS', None)] - define_macros = [] - include_dirs = [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/cuda/*.cu') +\ - glob.glob('./mmcv/ops/csrc/parrots/*.cpp') - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/cuda')) - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args = { - 'nvcc': [cuda_args] if cuda_args else [], - 'cxx': [], - } - if torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1': - define_macros += [('MMCV_WITH_CUDA', None)] - extra_compile_args['nvcc'] += [ - '-D__CUDA_NO_HALF_OPERATORS__', - '-D__CUDA_NO_HALF_CONVERSIONS__', - '-D__CUDA_NO_HALF2_OPERATORS__', - ] - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - cuda=True, - pytorch=True) - extensions.append(ext_ops) - elif EXT_TYPE == 'pytorch': - ext_name = 'mmcv._ext' - from torch.utils.cpp_extension import CppExtension, CUDAExtension - - # prevent ninja from using too many resources - try: - import psutil - num_cpu = len(psutil.Process().cpu_affinity()) - cpu_use = max(4, num_cpu - 1) - except (ModuleNotFoundError, AttributeError): - cpu_use = 4 - - os.environ.setdefault('MAX_JOBS', str(cpu_use)) - define_macros = [] - extra_compile_args = {'cxx': []} - 
include_dirs = [] - - is_rocm_pytorch = False - try: - from torch.utils.cpp_extension import ROCM_HOME - is_rocm_pytorch = True if ((torch.version.hip is not None) and - (ROCM_HOME is not None)) else False - except ImportError: - pass - - project_dir = 'mmcv/ops/csrc/' - if is_rocm_pytorch: - from torch.utils.hipify import hipify_python - - hipify_python.hipify( - project_directory=project_dir, - output_directory=project_dir, - includes='mmcv/ops/csrc/*', - show_detailed=True, - is_pytorch_extension=True, - ) - define_macros += [('MMCV_WITH_CUDA', None)] - define_macros += [('HIP_DIFF', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/hip/*') - extension = CUDAExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/hip')) - elif torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1': - define_macros += [('MMCV_WITH_CUDA', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cuda/*.cu') - extension = CUDAExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/cuda')) - else: - print(f'Compiling {ext_name} without CUDA') - op_files = glob.glob('./mmcv/ops/csrc/pytorch/*.cpp') - extension = CppExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - - ext_ops = extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args) - extensions.append(ext_ops) - - if EXT_TYPE == 'pytorch' and os.getenv('MMCV_WITH_ORT', '0') != '0': - ext_name = 'mmcv._ext_ort' - from torch.utils.cpp_extension import library_paths, include_paths - import onnxruntime - library_dirs = [] - libraries = [] - include_dirs = [] - ort_path = os.getenv('ONNXRUNTIME_DIR', '0') - library_dirs += [os.path.join(ort_path, 'lib')] - libraries.append('onnxruntime') - define_macros = [] - extra_compile_args = {'cxx': []} - - include_path = os.path.abspath('./mmcv/ops/csrc/onnxruntime') - include_dirs.append(include_path) - include_dirs.append(os.path.join(ort_path, 'include')) - - op_files = glob.glob('./mmcv/ops/csrc/onnxruntime/cpu/*') - if onnxruntime.get_device() == 'GPU' or os.getenv('FORCE_CUDA', - '0') == '1': - define_macros += [('MMCV_WITH_CUDA', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - op_files += glob.glob('./mmcv/ops/csrc/onnxruntime/gpu/*') - include_dirs += include_paths(cuda=True) - library_dirs += library_paths(cuda=True) - else: - include_dirs += include_paths(cuda=False) - library_dirs += library_paths(cuda=False) - - from setuptools import Extension - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - language='c++', - library_dirs=library_dirs, - libraries=libraries) - extensions.append(ext_ops) - - return extensions - - -setup( - name='mmcv' if os.getenv('MMCV_WITH_OPS', '0') == '0' else 'mmcv-full', - version=get_version(), - description='OpenMMLab Computer Vision Foundation', - keywords='computer vision', - packages=find_packages(), - include_package_data=True, - classifiers=[ - 'Development Status :: 4 - Beta', - 'License :: OSI Approved :: Apache Software 
License', - 'Operating System :: OS Independent', - 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.6', - 'Programming Language :: Python :: 3.7', - 'Programming Language :: Python :: 3.8', - 'Programming Language :: Python :: 3.9', - 'Topic :: Utilities', - ], - url='https://github.com/open-mmlab/mmcv', - author='MMCV Contributors', - author_email='openmmlab@gmail.com', - setup_requires=[], - tests_require=['pytest'], - install_requires=install_requires, - ext_modules=get_extensions(), - cmdclass=cmd_class, - zip_safe=False) diff --git a/cv/detection/yolof/pytorch/train.py b/cv/detection/yolof/pytorch/train.py deleted file mode 100755 index cff19f03..00000000 --- a/cv/detection/yolof/pytorch/train.py +++ /dev/null @@ -1,242 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import copy -import os -import os.path as osp -import time -import warnings - -import mmcv -import torch -import torch.distributed as dist -from mmcv import Config, DictAction -from mmcv.runner import get_dist_info, init_dist -from mmcv.utils import get_git_hash - -from mmdet import __version__ -from mmdet.apis import init_random_seed, set_random_seed, train_detector -from mmdet.datasets import build_dataset -from mmdet.models import build_detector -from mmdet.utils import (collect_env, get_device, get_root_logger, - replace_cfg_vals, setup_multi_processes, - update_data_root) - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a detector') - parser.add_argument('config', help='train config file path') - parser.add_argument('--work-dir', help='the dir to save logs and models') - parser.add_argument( - '--resume-from', help='the checkpoint file to resume from') - parser.add_argument( - '--auto-resume', - action='store_true', - help='resume from the latest checkpoint automatically') - parser.add_argument( - '--no-validate', - action='store_true', - help='whether not to evaluate the checkpoint during training') - group_gpus = parser.add_mutually_exclusive_group() - group_gpus.add_argument( - '--gpus', - type=int, - help='(Deprecated, please use --gpu-id) number of gpus to use ' - '(only applicable to non-distributed training)') - group_gpus.add_argument( - '--gpu-ids', - type=int, - nargs='+', - help='(Deprecated, please use --gpu-id) ids of gpus to use ' - '(only applicable to non-distributed training)') - group_gpus.add_argument( - '--gpu-id', - type=int, - default=0, - help='id of gpu to use ' - '(only applicable to non-distributed training)') - parser.add_argument('--seed', type=int, default=None, help='random seed') - parser.add_argument( - '--diff-seed', - action='store_true', - help='Whether or not set different seeds for different ranks') - parser.add_argument( - '--deterministic', - action='store_true', - help='whether to set deterministic options for CUDNN backend.') - parser.add_argument( - '--options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file (deprecate), ' - 'change to --cfg-options instead.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. 
key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local_rank', type=int, default=0) - parser.add_argument( - '--auto-scale-lr', - action='store_true', - help='enable automatically scaling LR.') - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - if args.options and args.cfg_options: - raise ValueError( - '--options and --cfg-options cannot be both ' - 'specified, --options is deprecated in favor of --cfg-options') - if args.options: - warnings.warn('--options is deprecated in favor of --cfg-options') - args.cfg_options = args.options - - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - - # replace the ${key} with the value of cfg.key - cfg = replace_cfg_vals(cfg) - - # update data root according to MMDET_DATASETS - update_data_root(cfg) - - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - if args.auto_scale_lr: - if 'auto_scale_lr' in cfg and \ - 'enable' in cfg.auto_scale_lr and \ - 'base_batch_size' in cfg.auto_scale_lr: - cfg.auto_scale_lr.enable = True - else: - warnings.warn('Can not find "auto_scale_lr" or ' - '"auto_scale_lr.enable" or ' - '"auto_scale_lr.base_batch_size" in your' - ' configuration file. Please update all the ' - 'configuration files to mmdet >= 2.24.1.') - - # set multi-process settings - setup_multi_processes(cfg) - - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - - # work_dir is determined in this priority: CLI > segment in file > filename - if args.work_dir is not None: - # update configs according to CLI args if args.work_dir is not None - cfg.work_dir = args.work_dir - elif cfg.get('work_dir', None) is None: - # use config filename as default work_dir if cfg.work_dir is None - cfg.work_dir = osp.join('./work_dirs', - osp.splitext(osp.basename(args.config))[0]) - - if args.resume_from is not None: - cfg.resume_from = args.resume_from - cfg.auto_resume = args.auto_resume - if args.gpus is not None: - cfg.gpu_ids = range(1) - warnings.warn('`--gpus` is deprecated because we only support ' - 'single GPU mode in non-distributed training. ' - 'Use `gpus=1` now.') - if args.gpu_ids is not None: - cfg.gpu_ids = args.gpu_ids[0:1] - warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. ' - 'Because we only support single GPU mode in ' - 'non-distributed training. Use the first GPU ' - 'in `gpu_ids` now.') - if args.gpus is None and args.gpu_ids is None: - cfg.gpu_ids = [args.gpu_id] - - # init distributed env first, since logger depends on the dist info. 
- if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - # re-set gpu_ids with distributed training mode - _, world_size = get_dist_info() - cfg.gpu_ids = range(world_size) - - # create work_dir - mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir)) - # dump config - cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config))) - # init the logger before other steps - timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime()) - log_file = osp.join(cfg.work_dir, f'{timestamp}.log') - logger = get_root_logger(log_file=log_file, log_level=cfg.log_level) - - # init the meta dict to record some important information such as - # environment info and seed, which will be logged - meta = dict() - # log env info - env_info_dict = collect_env() - env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()]) - dash_line = '-' * 60 + '\n' - logger.info('Environment info:\n' + dash_line + env_info + '\n' + - dash_line) - meta['env_info'] = env_info - meta['config'] = cfg.pretty_text - # log some basic info - logger.info(f'Distributed training: {distributed}') - logger.info(f'Config:\n{cfg.pretty_text}') - - cfg.device = get_device() - # set random seeds - seed = init_random_seed(args.seed, device=cfg.device) - seed = seed + dist.get_rank() if args.diff_seed else seed - logger.info(f'Set random seed to {seed}, ' - f'deterministic: {args.deterministic}') - set_random_seed(seed, deterministic=args.deterministic) - cfg.seed = seed - meta['seed'] = seed - meta['exp_name'] = osp.basename(args.config) - - model = build_detector( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - model.init_weights() - - datasets = [build_dataset(cfg.data.train)] - if len(cfg.workflow) == 2: - val_dataset = copy.deepcopy(cfg.data.val) - val_dataset.pipeline = cfg.data.train.pipeline - datasets.append(build_dataset(val_dataset)) - if cfg.checkpoint_config is not None: - # save mmdet version, config file content and class names in - # checkpoints as meta data - cfg.checkpoint_config.meta = dict( - mmdet_version=__version__ + get_git_hash()[:7], - CLASSES=datasets[0].CLASSES) - # add an attribute for visualization convenience - model.CLASSES = datasets[0].CLASSES - train_detector( - model, - datasets, - cfg, - distributed=distributed, - validate=(not args.no_validate), - timestamp=timestamp, - meta=meta) - - -if __name__ == '__main__': - main() diff --git a/cv/detection/yolof/pytorch/train.sh b/cv/detection/yolof/pytorch/train.sh deleted file mode 100755 index 906fa748..00000000 --- a/cv/detection/yolof/pytorch/train.sh +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# CONFIG=$1 - -# PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -# python3 $(dirname "$0")/train.py \ -# $CONFIG \ -# --launcher pytorch ${@:2} - -python3 train.py configs/yolof/yolof_r50_c5_8x8_1x_coco.py \ No newline at end of file diff --git a/cv/detection/yolof/pytorch/train_dist.sh b/cv/detection/yolof/pytorch/train_dist.sh deleted file mode 100755 index 44c98048..00000000 --- a/cv/detection/yolof/pytorch/train_dist.sh +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
-#!/usr/bin/env bash - -CONFIG=$1 -GPUS=$2 -NNODES=${NNODES:-1} -NODE_RANK=${NODE_RANK:-0} -PORT=${PORT:-29500} -MASTER_ADDR=${MASTER_ADDR:-"127.0.0.1"} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -python3 -m torch.distributed.launch \ - --nnodes=$NNODES \ - --node_rank=$NODE_RANK \ - --master_addr=$MASTER_ADDR \ - --nproc_per_node=$GPUS \ - --master_port=$PORT \ - $(dirname "$0")/train.py \ - $CONFIG \ - --seed 0 \ - --launcher pytorch ${@:3} -- Gitee From 54ee53e39f112a2f695fc597337e688ba1d697ae Mon Sep 17 00:00:00 2001 From: "hongliang.yuan" Date: Wed, 5 Mar 2025 16:07:12 +0800 Subject: [PATCH 6/7] update basicvsr and delete useless code --- .../basicvsr/pytorch/.gitignore | 140 -- cv/super_resolution/basicvsr/pytorch/LICENSE | 203 --- .../basicvsr/pytorch/README.md | 26 +- .../basicvsr/pytorch/build_env.sh | 14 - .../configs/basicvsr/basicvsr_reds4.py | 154 -- .../pytorch/configs/iconvsr/iconvsr_reds4.py | 155 -- .../basicvsr/pytorch/dist_train.sh | 21 - .../basicvsr/pytorch/docker/Dockerfile | 24 - .../basicvsr/pytorch/docker/README.md | 19 - .../basicvsr/pytorch/mmcv/__init__.py | 13 - .../pytorch/mmcv/arraymisc/__init__.py | 4 - .../pytorch/mmcv/arraymisc/quantization.py | 55 - .../basicvsr/pytorch/mmcv/cnn/__init__.py | 41 - .../basicvsr/pytorch/mmcv/cnn/alexnet.py | 61 - .../pytorch/mmcv/cnn/bricks/__init__.py | 35 - .../pytorch/mmcv/cnn/bricks/activation.py | 92 -- .../pytorch/mmcv/cnn/bricks/context_block.py | 125 -- .../basicvsr/pytorch/mmcv/cnn/bricks/conv.py | 44 - .../cnn/bricks/conv2d_adaptive_padding.py | 62 - .../pytorch/mmcv/cnn/bricks/conv_module.py | 206 --- .../pytorch/mmcv/cnn/bricks/conv_ws.py | 148 -- .../bricks/depthwise_separable_conv_module.py | 96 -- .../basicvsr/pytorch/mmcv/cnn/bricks/drop.py | 65 - .../mmcv/cnn/bricks/generalized_attention.py | 412 ----- .../pytorch/mmcv/cnn/bricks/hsigmoid.py | 34 - .../pytorch/mmcv/cnn/bricks/hswish.py | 29 - .../pytorch/mmcv/cnn/bricks/non_local.py | 306 ---- .../basicvsr/pytorch/mmcv/cnn/bricks/norm.py | 144 -- .../pytorch/mmcv/cnn/bricks/padding.py | 36 - .../pytorch/mmcv/cnn/bricks/plugin.py | 88 -- .../pytorch/mmcv/cnn/bricks/registry.py | 16 - .../basicvsr/pytorch/mmcv/cnn/bricks/scale.py | 21 - .../basicvsr/pytorch/mmcv/cnn/bricks/swish.py | 25 - .../pytorch/mmcv/cnn/bricks/transformer.py | 595 -------- .../pytorch/mmcv/cnn/bricks/upsample.py | 84 -- .../pytorch/mmcv/cnn/bricks/wrappers.py | 180 --- .../basicvsr/pytorch/mmcv/cnn/builder.py | 30 - .../basicvsr/pytorch/mmcv/cnn/resnet.py | 316 ---- .../pytorch/mmcv/cnn/utils/__init__.py | 19 - .../pytorch/mmcv/cnn/utils/flops_counter.py | 599 -------- .../pytorch/mmcv/cnn/utils/fuse_conv_bn.py | 59 - .../pytorch/mmcv/cnn/utils/sync_bn.py | 59 - .../pytorch/mmcv/cnn/utils/weight_init.py | 684 --------- .../basicvsr/pytorch/mmcv/cnn/vgg.py | 175 --- .../basicvsr/pytorch/mmcv/engine/__init__.py | 8 - .../basicvsr/pytorch/mmcv/engine/test.py | 202 --- .../basicvsr/pytorch/mmcv/fileio/__init__.py | 11 - .../pytorch/mmcv/fileio/file_client.py | 1148 -------------- .../pytorch/mmcv/fileio/handlers/__init__.py | 7 - .../pytorch/mmcv/fileio/handlers/base.py | 30 - .../mmcv/fileio/handlers/json_handler.py | 36 - .../mmcv/fileio/handlers/pickle_handler.py | 28 - .../mmcv/fileio/handlers/yaml_handler.py | 24 - .../basicvsr/pytorch/mmcv/fileio/io.py | 151 -- .../basicvsr/pytorch/mmcv/fileio/parse.py | 97 -- .../basicvsr/pytorch/mmcv/image/__init__.py | 28 - .../basicvsr/pytorch/mmcv/image/colorspace.py | 306 ---- .../basicvsr/pytorch/mmcv/image/geometric.py | 728 --------- 
.../basicvsr/pytorch/mmcv/image/io.py | 258 ---- .../basicvsr/pytorch/mmcv/image/misc.py | 44 - .../pytorch/mmcv/image/photometric.py | 428 ------ .../basicvsr/pytorch/mmcv/ops/__init__.py | 15 - .../csrc/common/cuda/common_cuda_helper.hpp | 112 -- .../modulated_deform_conv_cuda_kernel.cuh | 400 ----- .../csrc/common/cuda/sync_bn_cuda_kernel.cuh | 332 ---- .../ops/csrc/common/pytorch_cpp_helper.hpp | 24 - .../ops/csrc/common/pytorch_cuda_helper.hpp | 19 - .../cuda/modulated_deform_conv_cuda.cu | 96 -- .../ops/csrc/pytorch/cuda/sync_bn_cuda.cu | 110 -- .../pytorch/mmcv/ops/csrc/pytorch/info.cpp | 56 - .../csrc/pytorch/modulated_deform_conv.cpp | 338 ----- .../pytorch/modulated_deform_conv_cpu.cpp | 403 ----- .../pytorch/mmcv/ops/csrc/pytorch/pybind.cpp | 82 - .../pytorch/mmcv/ops/csrc/pytorch/sync_bn.cpp | 159 -- .../pytorch/mmcv/ops/deprecated_wrappers.py | 43 - .../basicvsr/pytorch/mmcv/ops/info.py | 36 - .../pytorch/mmcv/ops/modulated_deform_conv.py | 282 ---- .../basicvsr/pytorch/mmcv/ops/sync_bn.py | 279 ---- .../pytorch/mmcv/parallel/__init__.py | 13 - .../pytorch/mmcv/parallel/_functions.py | 79 - .../basicvsr/pytorch/mmcv/parallel/collate.py | 84 -- .../pytorch/mmcv/parallel/data_container.py | 89 -- .../pytorch/mmcv/parallel/data_parallel.py | 89 -- .../pytorch/mmcv/parallel/distributed.py | 112 -- .../mmcv/parallel/distributed_deprecated.py | 70 - .../pytorch/mmcv/parallel/registry.py | 8 - .../pytorch/mmcv/parallel/scatter_gather.py | 59 - .../basicvsr/pytorch/mmcv/parallel/utils.py | 20 - .../basicvsr/pytorch/mmcv/runner/__init__.py | 47 - .../pytorch/mmcv/runner/base_module.py | 195 --- .../pytorch/mmcv/runner/base_runner.py | 542 ------- .../basicvsr/pytorch/mmcv/runner/builder.py | 24 - .../pytorch/mmcv/runner/checkpoint.py | 707 --------- .../mmcv/runner/default_constructor.py | 44 - .../pytorch/mmcv/runner/dist_utils.py | 164 -- .../pytorch/mmcv/runner/epoch_based_runner.py | 187 --- .../pytorch/mmcv/runner/fp16_utils.py | 410 ----- .../pytorch/mmcv/runner/hooks/__init__.py | 29 - .../pytorch/mmcv/runner/hooks/checkpoint.py | 167 --- .../pytorch/mmcv/runner/hooks/closure.py | 11 - .../basicvsr/pytorch/mmcv/runner/hooks/ema.py | 89 -- .../pytorch/mmcv/runner/hooks/evaluation.py | 509 ------- .../pytorch/mmcv/runner/hooks/hook.py | 92 -- .../pytorch/mmcv/runner/hooks/iter_timer.py | 30 - .../mmcv/runner/hooks/logger/__init__.py | 15 - .../pytorch/mmcv/runner/hooks/logger/base.py | 166 -- .../mmcv/runner/hooks/logger/dvclive.py | 58 - .../mmcv/runner/hooks/logger/mlflow.py | 78 - .../mmcv/runner/hooks/logger/neptune.py | 82 - .../pytorch/mmcv/runner/hooks/logger/pavi.py | 117 -- .../mmcv/runner/hooks/logger/tensorboard.py | 57 - .../pytorch/mmcv/runner/hooks/logger/text.py | 256 ---- .../pytorch/mmcv/runner/hooks/logger/wandb.py | 56 - .../pytorch/mmcv/runner/hooks/lr_updater.py | 670 --------- .../pytorch/mmcv/runner/hooks/memory.py | 25 - .../mmcv/runner/hooks/momentum_updater.py | 493 ------ .../pytorch/mmcv/runner/hooks/optimizer.py | 508 ------- .../pytorch/mmcv/runner/hooks/profiler.py | 180 --- .../pytorch/mmcv/runner/hooks/sampler_seed.py | 20 - .../pytorch/mmcv/runner/hooks/sync_buffer.py | 22 - .../pytorch/mmcv/runner/iter_based_runner.py | 273 ---- .../pytorch/mmcv/runner/log_buffer.py | 41 - .../pytorch/mmcv/runner/optimizer/__init__.py | 9 - .../pytorch/mmcv/runner/optimizer/builder.py | 44 - .../runner/optimizer/default_constructor.py | 249 --- .../basicvsr/pytorch/mmcv/runner/priority.py | 60 - .../basicvsr/pytorch/mmcv/runner/utils.py | 93 -- 
.../basicvsr/pytorch/mmcv/utils/__init__.py | 69 - .../basicvsr/pytorch/mmcv/utils/config.py | 688 --------- .../basicvsr/pytorch/mmcv/utils/env.py | 95 -- .../basicvsr/pytorch/mmcv/utils/ext_loader.py | 71 - .../basicvsr/pytorch/mmcv/utils/logging.py | 110 -- .../basicvsr/pytorch/mmcv/utils/misc.py | 377 ----- .../pytorch/mmcv/utils/parrots_jit.py | 41 - .../pytorch/mmcv/utils/parrots_wrapper.py | 107 -- .../basicvsr/pytorch/mmcv/utils/path.py | 101 -- .../pytorch/mmcv/utils/progressbar.py | 208 --- .../basicvsr/pytorch/mmcv/utils/registry.py | 315 ---- .../basicvsr/pytorch/mmcv/utils/testing.py | 140 -- .../basicvsr/pytorch/mmcv/utils/timer.py | 118 -- .../basicvsr/pytorch/mmcv/utils/trace.py | 23 - .../pytorch/mmcv/utils/version_utils.py | 90 -- .../basicvsr/pytorch/mmcv/version.py | 35 - .../basicvsr/pytorch/mmedit/__init__.py | 34 - .../basicvsr/pytorch/mmedit/apis/__init__.py | 18 - .../mmedit/apis/generation_inference.py | 63 - .../mmedit/apis/inpainting_inference.py | 53 - .../pytorch/mmedit/apis/matting_inference.py | 78 - .../mmedit/apis/restoration_face_inference.py | 90 -- .../mmedit/apis/restoration_inference.py | 48 - .../apis/restoration_video_inference.py | 129 -- .../basicvsr/pytorch/mmedit/apis/test.py | 234 --- .../basicvsr/pytorch/mmedit/apis/train.py | 361 ----- .../apis/video_interpolation_inference.py | 204 --- .../basicvsr/pytorch/mmedit/core/__init__.py | 13 - .../mmedit/core/distributed_wrapper.py | 139 -- .../mmedit/core/evaluation/__init__.py | 10 - .../mmedit/core/evaluation/eval_hooks.py | 114 -- .../mmedit/core/evaluation/metric_utils.py | 81 - .../pytorch/mmedit/core/evaluation/metrics.py | 572 ------- .../core/evaluation/niqe_pris_params.npz | Bin 11850 -> 0 bytes .../pytorch/mmedit/core/export/__init__.py | 4 - .../pytorch/mmedit/core/export/wrappers.py | 134 -- .../pytorch/mmedit/core/hooks/__init__.py | 5 - .../basicvsr/pytorch/mmedit/core/hooks/ema.py | 113 -- .../mmedit/core/hooks/visualization.py | 84 -- .../basicvsr/pytorch/mmedit/core/mask.py | 316 ---- .../basicvsr/pytorch/mmedit/core/misc.py | 74 - .../pytorch/mmedit/core/optimizer/__init__.py | 4 - .../pytorch/mmedit/core/optimizer/builder.py | 58 - .../pytorch/mmedit/core/scheduler/__init__.py | 4 - .../mmedit/core/scheduler/lr_updater.py | 304 ---- .../pytorch/mmedit/core/utils/__init__.py | 4 - .../pytorch/mmedit/core/utils/dist_utils.py | 35 - .../pytorch/mmedit/datasets/__init__.py | 8 - .../pytorch/mmedit/datasets/base_dataset.py | 78 - .../mmedit/datasets/base_sr_dataset.py | 87 -- .../pytorch/mmedit/datasets/builder.py | 181 --- .../mmedit/datasets/dataset_wrappers.py | 39 - .../mmedit/datasets/pipelines/__init__.py | 22 - .../mmedit/datasets/pipelines/augmentation.py | 1332 ----------------- .../mmedit/datasets/pipelines/compose.py | 53 - .../pytorch/mmedit/datasets/pipelines/crop.py | 749 --------- .../mmedit/datasets/pipelines/formating.py | 263 ---- .../mmedit/datasets/pipelines/loading.py | 562 ------- .../datasets/pipelines/matlab_like_resize.py | 275 ---- .../datasets/pipelines/normalization.py | 103 -- .../mmedit/datasets/pipelines/utils.py | 154 -- .../pytorch/mmedit/datasets/registry.py | 5 - .../mmedit/datasets/samplers/__init__.py | 4 - .../datasets/samplers/distributed_sampler.py | 71 - .../datasets/sr_reds_multiple_gt_dataset.py | 85 -- .../pytorch/mmedit/models/__init__.py | 10 - .../mmedit/models/backbones/__init__.py | 6 - .../models/backbones/sr_backbones/__init__.py | 6 - .../backbones/sr_backbones/basicvsr_net.py | 420 ------ 
.../models/backbones/sr_backbones/edvr_net.py | 475 ------ .../models/backbones/sr_backbones/iconvsr.py | 394 ----- .../basicvsr/pytorch/mmedit/models/base.py | 105 -- .../basicvsr/pytorch/mmedit/models/builder.py | 60 - .../pytorch/mmedit/models/common/__init__.py | 32 - .../pytorch/mmedit/models/common/aspp.py | 125 -- .../models/common/contextual_attention.py | 379 ----- .../pytorch/mmedit/models/common/conv.py | 6 - .../mmedit/models/common/downsample.py | 22 - .../pytorch/mmedit/models/common/ensemble.py | 105 -- .../pytorch/mmedit/models/common/flow_warp.py | 50 - .../mmedit/models/common/gated_conv_module.py | 72 - .../mmedit/models/common/gca_module.py | 358 ----- .../models/common/generation_model_utils.py | 301 ---- .../mmedit/models/common/img_normalize.py | 32 - .../mmedit/models/common/linear_module.py | 89 -- .../mmedit/models/common/mask_conv_module.py | 88 -- .../mmedit/models/common/model_utils.py | 136 -- .../mmedit/models/common/partial_conv.py | 102 -- .../models/common/separable_conv_module.py | 97 -- .../mmedit/models/common/sr_backbone_utils.py | 97 -- .../pytorch/mmedit/models/common/upsample.py | 51 - .../pytorch/mmedit/models/losses/__init__.py | 7 - .../mmedit/models/losses/pixelwise_loss.py | 221 --- .../pytorch/mmedit/models/losses/utils.py | 115 -- .../pytorch/mmedit/models/registry.py | 8 - .../mmedit/models/restorers/__init__.py | 7 - .../mmedit/models/restorers/basic_restorer.py | 210 --- .../mmedit/models/restorers/basicvsr.py | 224 --- .../basicvsr/pytorch/mmedit/utils/__init__.py | 6 - .../basicvsr/pytorch/mmedit/utils/cli.py | 18 - .../pytorch/mmedit/utils/collect_env.py | 18 - .../basicvsr/pytorch/mmedit/utils/logger.py | 27 - .../pytorch/mmedit/utils/setup_env.py | 47 - .../basicvsr/pytorch/mmedit/version.py | 18 - .../basicvsr/pytorch/requirements.txt | 4 - cv/super_resolution/basicvsr/pytorch/setup.py | 354 ----- cv/super_resolution/basicvsr/pytorch/train.py | 172 --- 234 files changed, 17 insertions(+), 34572 deletions(-) delete mode 100755 cv/super_resolution/basicvsr/pytorch/.gitignore delete mode 100755 cv/super_resolution/basicvsr/pytorch/LICENSE delete mode 100755 cv/super_resolution/basicvsr/pytorch/build_env.sh delete mode 100755 cv/super_resolution/basicvsr/pytorch/configs/basicvsr/basicvsr_reds4.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/configs/iconvsr/iconvsr_reds4.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/dist_train.sh delete mode 100755 cv/super_resolution/basicvsr/pytorch/docker/Dockerfile delete mode 100755 cv/super_resolution/basicvsr/pytorch/docker/README.md delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/arraymisc/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/arraymisc/quantization.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/alexnet.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/activation.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/context_block.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv2d_adaptive_padding.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv_module.py delete mode 100755 
cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv_ws.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/depthwise_separable_conv_module.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/drop.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/generalized_attention.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/hsigmoid.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/hswish.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/non_local.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/norm.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/padding.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/plugin.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/registry.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/scale.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/swish.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/transformer.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/upsample.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/wrappers.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/builder.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/resnet.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/flops_counter.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/fuse_conv_bn.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/sync_bn.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/weight_init.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/cnn/vgg.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/engine/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/engine/test.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/fileio/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/fileio/file_client.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/base.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/json_handler.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/pickle_handler.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/yaml_handler.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/fileio/io.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/fileio/parse.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/image/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/image/colorspace.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/image/geometric.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/image/io.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/image/misc.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/image/photometric.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/__init__.py delete mode 100755 
cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/modulated_deform_conv_cuda_kernel.cuh delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/sync_bn_cuda_kernel.cuh delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/cuda/modulated_deform_conv_cuda.cu delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/cuda/sync_bn_cuda.cu delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/info.cpp delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/modulated_deform_conv.cpp delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/modulated_deform_conv_cpu.cpp delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/pybind.cpp delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/sync_bn.cpp delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/deprecated_wrappers.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/info.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/modulated_deform_conv.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/ops/sync_bn.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/parallel/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/parallel/_functions.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/parallel/collate.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/parallel/data_container.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/parallel/data_parallel.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/parallel/distributed.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/parallel/distributed_deprecated.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/parallel/registry.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/parallel/scatter_gather.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/parallel/utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/base_module.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/base_runner.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/builder.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/checkpoint.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/default_constructor.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/dist_utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/epoch_based_runner.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/fp16_utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/checkpoint.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/closure.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/ema.py delete mode 100755 
cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/evaluation.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/hook.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/iter_timer.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/base.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/dvclive.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/mlflow.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/neptune.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/pavi.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/tensorboard.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/text.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/wandb.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/lr_updater.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/memory.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/momentum_updater.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/optimizer.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/profiler.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/sampler_seed.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/sync_buffer.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/iter_based_runner.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/log_buffer.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/builder.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/default_constructor.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/priority.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/runner/utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/config.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/env.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/ext_loader.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/logging.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/misc.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/parrots_jit.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/parrots_wrapper.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/path.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/progressbar.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/registry.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/testing.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/timer.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/trace.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/utils/version_utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmcv/version.py delete mode 
100755 cv/super_resolution/basicvsr/pytorch/mmedit/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/apis/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/apis/generation_inference.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/apis/inpainting_inference.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/apis/matting_inference.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_face_inference.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_inference.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_video_inference.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/apis/test.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/apis/train.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/apis/video_interpolation_inference.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/distributed_wrapper.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/eval_hooks.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/metric_utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/metrics.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/niqe_pris_params.npz delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/export/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/export/wrappers.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/ema.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/visualization.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/mask.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/misc.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/optimizer/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/optimizer/builder.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/scheduler/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/scheduler/lr_updater.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/utils/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/core/utils/dist_utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/base_dataset.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/base_sr_dataset.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/builder.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/dataset_wrappers.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/augmentation.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/compose.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/crop.py delete mode 100755 
cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/formating.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/loading.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/matlab_like_resize.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/normalization.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/registry.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/samplers/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/samplers/distributed_sampler.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/datasets/sr_reds_multiple_gt_dataset.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/basicvsr_net.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/edvr_net.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/iconvsr.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/base.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/builder.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/aspp.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/contextual_attention.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/conv.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/downsample.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/ensemble.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/flow_warp.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/gated_conv_module.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/gca_module.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/generation_model_utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/img_normalize.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/linear_module.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/mask_conv_module.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/model_utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/partial_conv.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/separable_conv_module.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/sr_backbone_utils.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/common/upsample.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/pixelwise_loss.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/utils.py 
delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/registry.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/basic_restorer.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/basicvsr.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/utils/__init__.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/utils/cli.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/utils/collect_env.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/utils/logger.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/utils/setup_env.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/mmedit/version.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/requirements.txt delete mode 100755 cv/super_resolution/basicvsr/pytorch/setup.py delete mode 100755 cv/super_resolution/basicvsr/pytorch/train.py diff --git a/cv/super_resolution/basicvsr/pytorch/.gitignore b/cv/super_resolution/basicvsr/pytorch/.gitignore deleted file mode 100755 index d98061b9..00000000 --- a/cv/super_resolution/basicvsr/pytorch/.gitignore +++ /dev/null @@ -1,140 +0,0 @@ -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] -*$py.class -**/*.pyc - -# C extensions -*.so - -# Distribution / packaging -.Python -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -.eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -wheels/ -*.egg-info/ -.installed.cfg -*.egg -MANIFEST - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. -*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -.tox/ -.coverage -.coverage.* -.cache -nosetests.xml -coverage.xml -*.cover -.hypothesis/ -.pytest_cache/ - -# Translations -*.mo -*.pot - -# Django stuff: -*.log -local_settings.py -db.sqlite3 - -# Flask stuff: -instance/ -.webassets-cache - -# Scrapy stuff: -.scrapy - -# Sphinx documentation -docs/en/_build/ -docs/en/_tmp/ -docs/zh_cn/_build/ -docs/zh_cn/_tmp/ - -# PyBuilder -target/ - -# Jupyter Notebook -.ipynb_checkpoints - -# pyenv -.python-version - -# celery beat schedule file -celerybeat-schedule - -# SageMath parsed files -*.sage.py - -# Environments -.env -.venv -env/ -venv/ -ENV/ -env.bak/ -venv.bak/ - -# Spyder project settings -.spyderproject -.spyproject - -# Rope project settings -.ropeproject - -# mkdocs documentation -/site - -# mypy -.mypy_cache/ - -# custom -.vscode -.idea -*.pkl -*.pkl.json -*.log.json -work_dirs/ -/data/ -/data -mmedit/.mim - -# Pytorch -*.pth - -# onnx and tensorrt -*.onnx -*.trt - -# local history -.history/** - -# Pytorch Server -*.mar - -# MacOS -.DS_Store - - -data/ -pretrained-weights/ -*.pyc diff --git a/cv/super_resolution/basicvsr/pytorch/LICENSE b/cv/super_resolution/basicvsr/pytorch/LICENSE deleted file mode 100755 index 94d620d2..00000000 --- a/cv/super_resolution/basicvsr/pytorch/LICENSE +++ /dev/null @@ -1,203 +0,0 @@ -Copyright (c) MMEditing Authors. All rights reserved. - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. 
- - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright 2020 MMEditing Authors. All rights reserved. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/cv/super_resolution/basicvsr/pytorch/README.md b/cv/super_resolution/basicvsr/pytorch/README.md
index 00643fd6..9fbad252 100755
--- a/cv/super_resolution/basicvsr/pytorch/README.md
+++ b/cv/super_resolution/basicvsr/pytorch/README.md
@@ -7,12 +7,23 @@ BasicVSR is a video super-resolution pipeline including optical flow and residua
 ## Step 1: Installing packages
 
 ```shell
-sh build_env.sh
+# Install libGL
+## CentOS
+yum install -y mesa-libGL
+## Ubuntu
+apt install -y libgl1-mesa-glx
+
+git clone https://github.com/open-mmlab/mmagic.git -b v1.2.0 --depth=1
+cd mmagic/
+pip3 install -e . -v
+
+sed -i 's/diffusers.models.unet_2d_condition/diffusers.models.unets.unet_2d_condition/g' mmagic/models/editors/vico/vico_utils.py
+pip install albumentations
 ```
 
 ## Step 2: Preparing datasets
 
-Download REDS dataset from [homepage](https://seungjunnah.github.io/Datasets/reds.html)
+Download REDS dataset from [homepage](https://seungjunnah.github.io/Datasets/reds.html), or follow tools/dataset_converters/reds/README.md to prepare it.
 
 ```shell
 mkdir -p data/
 ln -s ${REDS_DATASET_PATH} data/REDS
@@ -22,17 +33,14 @@ ln -s ${REDS_DATASET_PATH} data/REDS
 ### One single GPU
 
 ```shell
-python3 train.py [training args] # config file can be found in the configs directory
+python3 tools/train.py configs/basicvsr/basicvsr_2xb4_reds4.py
 ```
 
 ### Mutiple GPUs on one machine
 
 ```shell
-bash dist_train.sh [training args] # config file can be found in the configs directory
+sed -i 's/python /python3 /g' tools/dist_train.sh
+bash tools/dist_train.sh configs/basicvsr/basicvsr_2xb4_reds4.py 8
 ```
 
-### Example
-```shell
-bash dist_train.sh configs/basicvsr/basicvsr_reds4.py 8
-```
 ## Reference
-https://github.com/open-mmlab/mmediting
+[mmagic](https://github.com/open-mmlab/mmagic)
diff --git a/cv/super_resolution/basicvsr/pytorch/build_env.sh b/cv/super_resolution/basicvsr/pytorch/build_env.sh
deleted file mode 100755
index 5c77a38d..00000000
--- a/cv/super_resolution/basicvsr/pytorch/build_env.sh
+++ /dev/null
@@ -1,14 +0,0 @@
-# Copyright (c) 2022 Iluvatar CoreX. All rights reserved.
-# Copyright Declaration: This software, including all of its code and documentation,
-# except for the third-party software it contains, is a copyrighted work of Shanghai Iluvatar CoreX
-# Semiconductor Co., Ltd. and its affiliates ("Iluvatar CoreX") in accordance with the PRC Copyright
-# Law and relevant international treaties, and all rights contained therein are enjoyed by Iluvatar
-# CoreX. No user of this software shall have any right, ownership or interest in this software and
-# any use of this software shall be in compliance with the terms and conditions of the End User
-# License Agreement.
-
-
-PIPCMD=pip3
-
-MMCV_WITH_OPS=1 python3 setup.py build && cp build/lib.linux*/mmcv/_ext.cpython* mmcv
-$PIPCMD install -r requirements.txt
diff --git a/cv/super_resolution/basicvsr/pytorch/configs/basicvsr/basicvsr_reds4.py b/cv/super_resolution/basicvsr/pytorch/configs/basicvsr/basicvsr_reds4.py
deleted file mode 100755
index 77087704..00000000
--- a/cv/super_resolution/basicvsr/pytorch/configs/basicvsr/basicvsr_reds4.py
+++ /dev/null
@@ -1,154 +0,0 @@
-# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-exp_name = 'basicvsr_reds4' - -# model settings -model = dict( - type='BasicVSR', - generator=dict( - type='BasicVSRNet', - mid_channels=64, - num_blocks=30, - spynet_pretrained='https://download.openmmlab.com/mmediting/restorers/basicvsr/spynet_20210409-c6c1bd09.pth'), - pixel_loss=dict(type='CharbonnierLoss', loss_weight=1.0, reduction='mean')) -# model training and testing settings -train_cfg = dict(fix_iter=5000) -test_cfg = dict(metrics=['PSNR', 'SSIM'], crop_border=0) - -# dataset settings -train_dataset_type = 'SRREDSMultipleGTDataset' -val_dataset_type = 'SRREDSMultipleGTDataset' - -train_pipeline = [ - dict(type='GenerateSegmentIndices', interval_list=[1]), - dict(type='TemporalReverse', keys='lq_path', reverse_ratio=0), - dict( - type='LoadImageFromFileList', - io_backend='disk', - key='lq', - channel_order='rgb'), - dict( - type='LoadImageFromFileList', - io_backend='disk', - key='gt', - channel_order='rgb'), - dict(type='RescaleToZeroOne', keys=['lq', 'gt']), - dict(type='PairedRandomCrop', gt_patch_size=256), - dict( - type='Flip', keys=['lq', 'gt'], flip_ratio=0.5, - direction='horizontal'), - dict(type='Flip', keys=['lq', 'gt'], flip_ratio=0.5, direction='vertical'), - dict(type='RandomTransposeHW', keys=['lq', 'gt'], transpose_ratio=0.5), - dict(type='FramesToTensor', keys=['lq', 'gt']), - dict(type='Collect', keys=['lq', 'gt'], meta_keys=['lq_path', 'gt_path']) -] - -test_pipeline = [ - dict(type='GenerateSegmentIndices', interval_list=[1]), - dict( - type='LoadImageFromFileList', - io_backend='disk', - key='lq', - channel_order='rgb'), - dict( - type='LoadImageFromFileList', - io_backend='disk', - key='gt', - channel_order='rgb'), - dict(type='RescaleToZeroOne', keys=['lq', 'gt']), - dict(type='FramesToTensor', keys=['lq', 'gt']), - dict( - type='Collect', - keys=['lq', 'gt'], - meta_keys=['lq_path', 'gt_path', 'key']) -] - -demo_pipeline = [ - dict(type='GenerateSegmentIndices', interval_list=[1]), - dict( - type='LoadImageFromFileList', - io_backend='disk', - key='lq', - channel_order='rgb'), - dict(type='RescaleToZeroOne', keys=['lq']), - dict(type='FramesToTensor', keys=['lq']), - dict(type='Collect', keys=['lq'], meta_keys=['lq_path', 'key']) -] - -data = dict( - workers_per_gpu=6, - train_dataloader=dict(samples_per_gpu=4, drop_last=True), # 2 gpus - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1, workers_per_gpu=1), - - # train - train=dict( - type='RepeatDataset', - times=1000, - dataset=dict( - type=train_dataset_type, - lq_folder='data/REDS/train/train_sharp_bicubic/X4', - gt_folder='data/REDS/train/train_sharp', - num_input_frames=15, - pipeline=train_pipeline, - scale=4, - val_partition='official', - test_mode=False)), - # val - val=dict( - type=val_dataset_type, - lq_folder='data/REDS/train/train_sharp_bicubic/X4', - gt_folder='data/REDS/train/train_sharp', - num_input_frames=100, - pipeline=test_pipeline, - scale=4, - val_partition='official', - test_mode=True), - # test - test=dict( - type=val_dataset_type, - lq_folder='data/REDS/train/train_sharp_bicubic/X4', - gt_folder='data/REDS/train/train_sharp', - num_input_frames=100, - pipeline=test_pipeline, - scale=4, - val_partition='REDS4', - test_mode=True), -) - -# optimizer -optimizers = dict( - generator=dict( - type='Adam', - lr=2e-4, - betas=(0.9, 0.99), - paramwise_cfg=dict(custom_keys={'spynet': dict(lr_mult=0.125)}))) - -# learning policy -total_iters = 300000 -lr_config = dict( - policy='CosineRestart', - by_epoch=False, - periods=[300000], - restart_weights=[1], - 
min_lr=1e-7) - -checkpoint_config = dict(interval=5000, save_optimizer=True, by_epoch=False) -# remove gpu_collect=True in non distributed training -evaluation = dict(interval=5000, save_image=False, gpu_collect=True) -log_config = dict( - interval=1, - hooks=[ - dict(type='TextLoggerHook', by_epoch=False), - # dict(type='TensorboardLoggerHook'), - ]) -visual_config = None - -# runtime settings -dist_params = dict(backend='nccl') -log_level = 'INFO' -work_dir = f'./work_dirs/{exp_name}' -load_from = None -resume_from = None -workflow = [('train', 1)] -find_unused_parameters = True -cudnn_benchmark = True diff --git a/cv/super_resolution/basicvsr/pytorch/configs/iconvsr/iconvsr_reds4.py b/cv/super_resolution/basicvsr/pytorch/configs/iconvsr/iconvsr_reds4.py deleted file mode 100755 index a62d9ab4..00000000 --- a/cv/super_resolution/basicvsr/pytorch/configs/iconvsr/iconvsr_reds4.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -exp_name = 'iconvsr_reds4' - -# model settings -model = dict( - type='BasicVSR', - generator=dict( - type='IconVSR', - mid_channels=64, - num_blocks=30, - keyframe_stride=5, - padding=2, - spynet_pretrained='https://download.openmmlab.com/mmediting/restorers/basicvsr/spynet_20210409-c6c1bd09.pth', - edvr_pretrained='https://download.openmmlab.com/mmediting/restorers/iconvsr/edvrm_reds_20210413-3867262f.pth'), - pixel_loss=dict(type='CharbonnierLoss', loss_weight=1.0, reduction='mean')) -# model training and testing settings -train_cfg = dict(fix_iter=5000) -test_cfg = dict(metrics=['PSNR', 'SSIM'], crop_border=0) - -# dataset settings -train_dataset_type = 'SRREDSMultipleGTDataset' -val_dataset_type = 'SRREDSMultipleGTDataset' -train_pipeline = [ - dict(type='GenerateSegmentIndices', interval_list=[1]), - dict(type='TemporalReverse', keys='lq_path', reverse_ratio=0), - dict( - type='LoadImageFromFileList', - io_backend='disk', - key='lq', - channel_order='rgb'), - dict( - type='LoadImageFromFileList', - io_backend='disk', - key='gt', - channel_order='rgb'), - dict(type='RescaleToZeroOne', keys=['lq', 'gt']), - dict(type='PairedRandomCrop', gt_patch_size=256), - dict( - type='Flip', keys=['lq', 'gt'], flip_ratio=0.5, - direction='horizontal'), - dict(type='Flip', keys=['lq', 'gt'], flip_ratio=0.5, direction='vertical'), - dict(type='RandomTransposeHW', keys=['lq', 'gt'], transpose_ratio=0.5), - dict(type='FramesToTensor', keys=['lq', 'gt']), - dict(type='Collect', keys=['lq', 'gt'], meta_keys=['lq_path', 'gt_path']) -] - -test_pipeline = [ - dict(type='GenerateSegmentIndices', interval_list=[1]), - dict( - type='LoadImageFromFileList', - io_backend='disk', - key='lq', - channel_order='rgb'), - dict( - type='LoadImageFromFileList', - io_backend='disk', - key='gt', - channel_order='rgb'), - dict(type='RescaleToZeroOne', keys=['lq', 'gt']), - dict(type='FramesToTensor', keys=['lq', 'gt']), - dict( - type='Collect', - keys=['lq', 'gt'], - meta_keys=['lq_path', 'gt_path', 'key']) -] - -demo_pipeline = [ - dict(type='GenerateSegmentIndices', interval_list=[1]), - dict( - type='LoadImageFromFileList', - io_backend='disk', - key='lq', - channel_order='rgb'), - dict(type='RescaleToZeroOne', keys=['lq']), - dict(type='FramesToTensor', keys=['lq']), - dict(type='Collect', keys=['lq'], meta_keys=['lq_path', 'key']) -] - -data = dict( - workers_per_gpu=6, - train_dataloader=dict(samples_per_gpu=4, drop_last=True), - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1, workers_per_gpu=1), - - # 
train - train=dict( - type='RepeatDataset', - times=1000, - dataset=dict( - type=train_dataset_type, - lq_folder='data/REDS/train/train_sharp_bicubic/X4', - gt_folder='data/REDS/train/train_sharp', - num_input_frames=15, - pipeline=train_pipeline, - scale=4, - val_partition='official', - test_mode=False)), - # val - val=dict( - type=val_dataset_type, - lq_folder='data/REDS/train/train_sharp_bicubic/X4', - gt_folder='data/REDS/train/train_sharp', - num_input_frames=100, - pipeline=test_pipeline, - scale=4, - val_partition='official', - test_mode=True), - test=dict( - type=val_dataset_type, - lq_folder='data/REDS/train_sharp_bicubic/X4', - gt_folder='data/REDS/train_sharp', - num_input_frames=100, - pipeline=test_pipeline, - scale=4, - val_partition='REDS4', - test_mode=True), -) - -# optimizer -optimizers = dict( - generator=dict( - type='Adam', - lr=2e-4, - betas=(0.9, 0.99), - paramwise_cfg=dict(custom_keys={'spynet': dict(lr_mult=0.125)}))) - -# learning policy -total_iters = 300000 -lr_config = dict( - policy='CosineRestart', - by_epoch=False, - periods=[300000], - restart_weights=[1], - min_lr=1e-7) - -checkpoint_config = dict(interval=5000, save_optimizer=True, by_epoch=False) -# remove gpu_collect=True in non distributed training -evaluation = dict(interval=5000, save_image=False, gpu_collect=True) -log_config = dict( - interval=1, - hooks=[ - dict(type='TextLoggerHook', by_epoch=False), - # dict(type='TensorboardLoggerHook'), - ]) -visual_config = None - -# runtime settings -dist_params = dict(backend='nccl') -log_level = 'INFO' -work_dir = f'./work_dirs/{exp_name}' -load_from = None -resume_from = None -workflow = [('train', 1)] -find_unused_parameters = True -cudnn_benchmark = True diff --git a/cv/super_resolution/basicvsr/pytorch/dist_train.sh b/cv/super_resolution/basicvsr/pytorch/dist_train.sh deleted file mode 100755 index 87c687cd..00000000 --- a/cv/super_resolution/basicvsr/pytorch/dist_train.sh +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. 
-#!/usr/bin/env bash - -CONFIG=$1 -GPUS=$2 -NNODES=${NNODES:-1} -NODE_RANK=${NODE_RANK:-0} -PORT=${PORT:-29500} -MASTER_ADDR=${MASTER_ADDR:-"127.0.0.1"} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -python3 -m torch.distributed.launch \ - --nnodes=$NNODES \ - --node_rank=$NODE_RANK \ - --master_addr=$MASTER_ADDR \ - --nproc_per_node=$GPUS \ - --master_port=$PORT \ - $(dirname "$0")/train.py \ - $CONFIG \ - --seed 0 \ - --launcher pytorch ${@:3} diff --git a/cv/super_resolution/basicvsr/pytorch/docker/Dockerfile b/cv/super_resolution/basicvsr/pytorch/docker/Dockerfile deleted file mode 100755 index 65128704..00000000 --- a/cv/super_resolution/basicvsr/pytorch/docker/Dockerfile +++ /dev/null @@ -1,24 +0,0 @@ -ARG PYTORCH="1.6.0" -ARG CUDA="10.1" -ARG CUDA_ALIAS="101" -ARG CUDNN="7" -ARG MMCV="1.3.1" - -FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel - -ENV TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0+PTX" -ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all" -ENV CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" - -RUN apt-get update && apt-get install -y git ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6 libgl1-mesa-glx \ - && apt-get clean \ - && rm -rf /var/lib/apt/lists/* - -# Install mmediting -RUN conda clean --all -RUN git clone https://github.com/open-mmlab/mmediting.git /mmediting -WORKDIR /mmediting -ENV FORCE_CUDA="1" -RUN pip install mmcv-full==${MMCV} -f https://download.openmmlab.com/mmcv/dist/cu${CUDA_ALIAS}/torch${PYTORCH}/index.html -RUN pip install -r requirements.txt -RUN pip install --no-cache-dir -e . diff --git a/cv/super_resolution/basicvsr/pytorch/docker/README.md b/cv/super_resolution/basicvsr/pytorch/docker/README.md deleted file mode 100755 index 851c28b0..00000000 --- a/cv/super_resolution/basicvsr/pytorch/docker/README.md +++ /dev/null @@ -1,19 +0,0 @@ -# Docker Image - -We provide a [Dockerfile](Dockerfile) to build an image. - -```shell -# build an image with PyTorch 1.6, CUDA 10.1 -docker build -t mmediting docker/ -``` - -Run it with - -```shell -docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmediting/data mmediting -``` - -**Note**: -Versions defined in this [Dockerfile](Dockerfile) is not up-to-date. -If you use this Dockerfile in your project, you probably want to make some updates. -Feel free to submit an issue or PR for the update. diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/__init__.py deleted file mode 100755 index 30b661fb..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# flake8: noqa -from .arraymisc import * -from .fileio import * -from .image import * -from .utils import * -from .version import * - -# The following modules are not imported to this level, so mmcv may be used -# without PyTorch. -# - runner -# - parallel -# - op diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/arraymisc/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/arraymisc/__init__.py deleted file mode 100755 index 4b4700d6..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/arraymisc/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .quantization import dequantize, quantize - -__all__ = ['quantize', 'dequantize'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/arraymisc/quantization.py b/cv/super_resolution/basicvsr/pytorch/mmcv/arraymisc/quantization.py deleted file mode 100755 index 8e47a354..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/arraymisc/quantization.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - - -def quantize(arr, min_val, max_val, levels, dtype=np.int64): - """Quantize an array of (-inf, inf) to [0, levels-1]. - - Args: - arr (ndarray): Input array. - min_val (scalar): Minimum value to be clipped. - max_val (scalar): Maximum value to be clipped. - levels (int): Quantization levels. - dtype (np.type): The type of the quantized array. - - Returns: - tuple: Quantized array. - """ - if not (isinstance(levels, int) and levels > 1): - raise ValueError( - f'levels must be a positive integer, but got {levels}') - if min_val >= max_val: - raise ValueError( - f'min_val ({min_val}) must be smaller than max_val ({max_val})') - - arr = np.clip(arr, min_val, max_val) - min_val - quantized_arr = np.minimum( - np.floor(levels * arr / (max_val - min_val)).astype(dtype), levels - 1) - - return quantized_arr - - -def dequantize(arr, min_val, max_val, levels, dtype=np.float64): - """Dequantize an array. - - Args: - arr (ndarray): Input array. - min_val (scalar): Minimum value to be clipped. - max_val (scalar): Maximum value to be clipped. - levels (int): Quantization levels. - dtype (np.type): The type of the dequantized array. - - Returns: - tuple: Dequantized array. - """ - if not (isinstance(levels, int) and levels > 1): - raise ValueError( - f'levels must be a positive integer, but got {levels}') - if min_val >= max_val: - raise ValueError( - f'min_val ({min_val}) must be smaller than max_val ({max_val})') - - dequantized_arr = (arr + 0.5).astype(dtype) * (max_val - - min_val) / levels + min_val - - return dequantized_arr diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/__init__.py deleted file mode 100755 index 7246c897..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/__init__.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .alexnet import AlexNet -# yapf: disable -from .bricks import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS, - PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS, - ContextBlock, Conv2d, Conv3d, ConvAWS2d, ConvModule, - ConvTranspose2d, ConvTranspose3d, ConvWS2d, - DepthwiseSeparableConvModule, GeneralizedAttention, - HSigmoid, HSwish, Linear, MaxPool2d, MaxPool3d, - NonLocal1d, NonLocal2d, NonLocal3d, Scale, Swish, - build_activation_layer, build_conv_layer, - build_norm_layer, build_padding_layer, build_plugin_layer, - build_upsample_layer, conv_ws_2d, is_norm) -from .builder import MODELS, build_model_from_cfg -# yapf: enable -from .resnet import ResNet, make_res_layer -from .utils import (INITIALIZERS, Caffe2XavierInit, ConstantInit, KaimingInit, - NormalInit, PretrainedInit, TruncNormalInit, UniformInit, - XavierInit, bias_init_with_prob, caffe2_xavier_init, - constant_init, fuse_conv_bn, get_model_complexity_info, - initialize, kaiming_init, normal_init, trunc_normal_init, - uniform_init, xavier_init) -from .vgg import VGG, make_vgg_layer - -__all__ = [ - 'AlexNet', 'VGG', 'make_vgg_layer', 'ResNet', 'make_res_layer', - 'constant_init', 'xavier_init', 'normal_init', 'trunc_normal_init', - 'uniform_init', 'kaiming_init', 'caffe2_xavier_init', - 'bias_init_with_prob', 'ConvModule', 'build_activation_layer', - 'build_conv_layer', 'build_norm_layer', 'build_padding_layer', - 'build_upsample_layer', 'build_plugin_layer', 'is_norm', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'HSigmoid', 'Swish', 'HSwish', - 'GeneralizedAttention', 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', - 'PADDING_LAYERS', 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', - 'get_model_complexity_info', 'conv_ws_2d', 'ConvAWS2d', 'ConvWS2d', - 'fuse_conv_bn', 'DepthwiseSeparableConvModule', 'Linear', 'Conv2d', - 'ConvTranspose2d', 'MaxPool2d', 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', - 'initialize', 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit', - 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit', - 'Caffe2XavierInit', 'MODELS', 'build_model_from_cfg' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/alexnet.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/alexnet.py deleted file mode 100755 index 89e36b8c..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/alexnet.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn - - -class AlexNet(nn.Module): - """AlexNet backbone. - - Args: - num_classes (int): number of classes for classification. 
- """ - - def __init__(self, num_classes=-1): - super(AlexNet, self).__init__() - self.num_classes = num_classes - self.features = nn.Sequential( - nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - nn.Conv2d(64, 192, kernel_size=5, padding=2), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - nn.Conv2d(192, 384, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(384, 256, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(256, 256, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - ) - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Dropout(), - nn.Linear(256 * 6 * 6, 4096), - nn.ReLU(inplace=True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(inplace=True), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - # use default initializer - pass - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - - x = self.features(x) - if self.num_classes > 0: - x = x.view(x.size(0), 256 * 6 * 6) - x = self.classifier(x) - - return x diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/__init__.py deleted file mode 100755 index 0f33124e..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .activation import build_activation_layer -from .context_block import ContextBlock -from .conv import build_conv_layer -from .conv2d_adaptive_padding import Conv2dAdaptivePadding -from .conv_module import ConvModule -from .conv_ws import ConvAWS2d, ConvWS2d, conv_ws_2d -from .depthwise_separable_conv_module import DepthwiseSeparableConvModule -from .drop import Dropout, DropPath -from .generalized_attention import GeneralizedAttention -from .hsigmoid import HSigmoid -from .hswish import HSwish -from .non_local import NonLocal1d, NonLocal2d, NonLocal3d -from .norm import build_norm_layer, is_norm -from .padding import build_padding_layer -from .plugin import build_plugin_layer -from .registry import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS, - PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS) -from .scale import Scale -from .swish import Swish -from .upsample import build_upsample_layer -from .wrappers import (Conv2d, Conv3d, ConvTranspose2d, ConvTranspose3d, - Linear, MaxPool2d, MaxPool3d) - -__all__ = [ - 'ConvModule', 'build_activation_layer', 'build_conv_layer', - 'build_norm_layer', 'build_padding_layer', 'build_upsample_layer', - 'build_plugin_layer', 'is_norm', 'HSigmoid', 'HSwish', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'GeneralizedAttention', - 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', 'PADDING_LAYERS', - 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', 'ConvAWS2d', 'ConvWS2d', - 'conv_ws_2d', 'DepthwiseSeparableConvModule', 'Swish', 'Linear', - 'Conv2dAdaptivePadding', 'Conv2d', 'ConvTranspose2d', 'MaxPool2d', - 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', 'Dropout', 'DropPath' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/activation.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/activation.py deleted file mode 100755 index 
79f19883..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/activation.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from mmcv.utils import TORCH_VERSION, build_from_cfg, digit_version -from .registry import ACTIVATION_LAYERS - -for module in [ - nn.ReLU, nn.LeakyReLU, nn.PReLU, nn.RReLU, nn.ReLU6, nn.ELU, - nn.Sigmoid, nn.Tanh -]: - ACTIVATION_LAYERS.register_module(module=module) - - -@ACTIVATION_LAYERS.register_module(name='Clip') -@ACTIVATION_LAYERS.register_module() -class Clamp(nn.Module): - """Clamp activation layer. - - This activation function is to clamp the feature map value within - :math:`[min, max]`. More details can be found in ``torch.clamp()``. - - Args: - min (Number | optional): Lower-bound of the range to be clamped to. - Default to -1. - max (Number | optional): Upper-bound of the range to be clamped to. - Default to 1. - """ - - def __init__(self, min=-1., max=1.): - super(Clamp, self).__init__() - self.min = min - self.max = max - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): The input tensor. - - Returns: - torch.Tensor: Clamped tensor. - """ - return torch.clamp(x, min=self.min, max=self.max) - - -class GELU(nn.Module): - r"""Applies the Gaussian Error Linear Units function: - - .. math:: - \text{GELU}(x) = x * \Phi(x) - where :math:`\Phi(x)` is the Cumulative Distribution Function for - Gaussian Distribution. - - Shape: - - Input: :math:`(N, *)` where `*` means, any number of additional - dimensions - - Output: :math:`(N, *)`, same shape as the input - - .. image:: scripts/activation_images/GELU.png - - Examples:: - - >>> m = nn.GELU() - >>> input = torch.randn(2) - >>> output = m(input) - """ - - def forward(self, input): - return F.gelu(input) - - -if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.4')): - ACTIVATION_LAYERS.register_module(module=GELU) -else: - ACTIVATION_LAYERS.register_module(module=nn.GELU) - - -def build_activation_layer(cfg): - """Build activation layer. - - Args: - cfg (dict): The activation layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an activation layer. - - Returns: - nn.Module: Created activation layer. - """ - return build_from_cfg(cfg, ACTIVATION_LAYERS) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/context_block.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/context_block.py deleted file mode 100755 index d60fdb90..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/context_block.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn - -from ..utils import constant_init, kaiming_init -from .registry import PLUGIN_LAYERS - - -def last_zero_init(m): - if isinstance(m, nn.Sequential): - constant_init(m[-1], val=0) - else: - constant_init(m, val=0) - - -@PLUGIN_LAYERS.register_module() -class ContextBlock(nn.Module): - """ContextBlock module in GCNet. - - See 'GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond' - (https://arxiv.org/abs/1904.11492) for details. - - Args: - in_channels (int): Channels of the input feature map. - ratio (float): Ratio of channels of transform bottleneck - pooling_type (str): Pooling method for context modeling. - Options are 'att' and 'avg', stand for attention pooling and - average pooling respectively. Default: 'att'. 
- fusion_types (Sequence[str]): Fusion method for feature fusion, - Options are 'channels_add', 'channel_mul', stand for channelwise - addition and multiplication respectively. Default: ('channel_add',) - """ - - _abbr_ = 'context_block' - - def __init__(self, - in_channels, - ratio, - pooling_type='att', - fusion_types=('channel_add', )): - super(ContextBlock, self).__init__() - assert pooling_type in ['avg', 'att'] - assert isinstance(fusion_types, (list, tuple)) - valid_fusion_types = ['channel_add', 'channel_mul'] - assert all([f in valid_fusion_types for f in fusion_types]) - assert len(fusion_types) > 0, 'at least one fusion should be used' - self.in_channels = in_channels - self.ratio = ratio - self.planes = int(in_channels * ratio) - self.pooling_type = pooling_type - self.fusion_types = fusion_types - if pooling_type == 'att': - self.conv_mask = nn.Conv2d(in_channels, 1, kernel_size=1) - self.softmax = nn.Softmax(dim=2) - else: - self.avg_pool = nn.AdaptiveAvgPool2d(1) - if 'channel_add' in fusion_types: - self.channel_add_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_add_conv = None - if 'channel_mul' in fusion_types: - self.channel_mul_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_mul_conv = None - self.reset_parameters() - - def reset_parameters(self): - if self.pooling_type == 'att': - kaiming_init(self.conv_mask, mode='fan_in') - self.conv_mask.inited = True - - if self.channel_add_conv is not None: - last_zero_init(self.channel_add_conv) - if self.channel_mul_conv is not None: - last_zero_init(self.channel_mul_conv) - - def spatial_pool(self, x): - batch, channel, height, width = x.size() - if self.pooling_type == 'att': - input_x = x - # [N, C, H * W] - input_x = input_x.view(batch, channel, height * width) - # [N, 1, C, H * W] - input_x = input_x.unsqueeze(1) - # [N, 1, H, W] - context_mask = self.conv_mask(x) - # [N, 1, H * W] - context_mask = context_mask.view(batch, 1, height * width) - # [N, 1, H * W] - context_mask = self.softmax(context_mask) - # [N, 1, H * W, 1] - context_mask = context_mask.unsqueeze(-1) - # [N, 1, C, 1] - context = torch.matmul(input_x, context_mask) - # [N, C, 1, 1] - context = context.view(batch, channel, 1, 1) - else: - # [N, C, 1, 1] - context = self.avg_pool(x) - - return context - - def forward(self, x): - # [N, C, 1, 1] - context = self.spatial_pool(x) - - out = x - if self.channel_mul_conv is not None: - # [N, C, 1, 1] - channel_mul_term = torch.sigmoid(self.channel_mul_conv(context)) - out = out * channel_mul_term - if self.channel_add_conv is not None: - # [N, C, 1, 1] - channel_add_term = self.channel_add_conv(context) - out = out + channel_add_term - - return out diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv.py deleted file mode 100755 index cf544919..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from torch import nn - -from .registry import CONV_LAYERS - -CONV_LAYERS.register_module('Conv1d', module=nn.Conv1d) -CONV_LAYERS.register_module('Conv2d', module=nn.Conv2d) -CONV_LAYERS.register_module('Conv3d', module=nn.Conv3d) -CONV_LAYERS.register_module('Conv', module=nn.Conv2d) - - -def build_conv_layer(cfg, *args, **kwargs): - """Build convolution layer. - - Args: - cfg (None or dict): The conv layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an conv layer. - args (argument list): Arguments passed to the `__init__` - method of the corresponding conv layer. - kwargs (keyword arguments): Keyword arguments passed to the `__init__` - method of the corresponding conv layer. - - Returns: - nn.Module: Created conv layer. - """ - if cfg is None: - cfg_ = dict(type='Conv2d') - else: - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in CONV_LAYERS: - raise KeyError(f'Unrecognized norm type {layer_type}') - else: - conv_layer = CONV_LAYERS.get(layer_type) - - layer = conv_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv2d_adaptive_padding.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv2d_adaptive_padding.py deleted file mode 100755 index b45e758a..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv2d_adaptive_padding.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from torch import nn -from torch.nn import functional as F - -from .registry import CONV_LAYERS - - -@CONV_LAYERS.register_module() -class Conv2dAdaptivePadding(nn.Conv2d): - """Implementation of 2D convolution in tensorflow with `padding` as "same", - which applies padding to input (if needed) so that input image gets fully - covered by filter and stride you specified. For stride 1, this will ensure - that output image size is same as input. For stride of 2, output dimensions - will be half, for example. - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the convolving kernel - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. Default: 0 - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 1 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If ``True``, adds a learnable bias to the - output. 
Default: ``True`` - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True): - super().__init__(in_channels, out_channels, kernel_size, stride, 0, - dilation, groups, bias) - - def forward(self, x): - img_h, img_w = x.size()[-2:] - kernel_h, kernel_w = self.weight.size()[-2:] - stride_h, stride_w = self.stride - output_h = math.ceil(img_h / stride_h) - output_w = math.ceil(img_w / stride_w) - pad_h = ( - max((output_h - 1) * self.stride[0] + - (kernel_h - 1) * self.dilation[0] + 1 - img_h, 0)) - pad_w = ( - max((output_w - 1) * self.stride[1] + - (kernel_w - 1) * self.dilation[1] + 1 - img_w, 0)) - if pad_h > 0 or pad_w > 0: - x = F.pad(x, [ - pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2 - ]) - return F.conv2d(x, self.weight, self.bias, self.stride, self.padding, - self.dilation, self.groups) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv_module.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv_module.py deleted file mode 100755 index 4f19f1d0..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv_module.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn - -from mmcv.utils import _BatchNorm, _InstanceNorm -from ..utils import constant_init, kaiming_init -from .activation import build_activation_layer -from .conv import build_conv_layer -from .norm import build_norm_layer -from .padding import build_padding_layer -from .registry import PLUGIN_LAYERS - - -@PLUGIN_LAYERS.register_module() -class ConvModule(nn.Module): - """A conv block that bundles conv/norm/activation layers. - - This block simplifies the usage of convolution layers, which are commonly - used with a norm layer (e.g., BatchNorm) and activation layer (e.g., ReLU). - It is based upon three build methods: `build_conv_layer()`, - `build_norm_layer()` and `build_activation_layer()`. - - Besides, we add some additional features in this module. - 1. Automatically set `bias` of the conv layer. - 2. Spectral norm is supported. - 3. More padding modes are supported. Before PyTorch 1.5, nn.Conv2d only - supports zero and circular padding, and we add "reflect" padding mode. - - Args: - in_channels (int): Number of channels in the input feature map. - Same as that in ``nn._ConvNd``. - out_channels (int): Number of channels produced by the convolution. - Same as that in ``nn._ConvNd``. - kernel_size (int | tuple[int]): Size of the convolving kernel. - Same as that in ``nn._ConvNd``. - stride (int | tuple[int]): Stride of the convolution. - Same as that in ``nn._ConvNd``. - padding (int | tuple[int]): Zero-padding added to both sides of - the input. Same as that in ``nn._ConvNd``. - dilation (int | tuple[int]): Spacing between kernel elements. - Same as that in ``nn._ConvNd``. - groups (int): Number of blocked connections from input channels to - output channels. Same as that in ``nn._ConvNd``. - bias (bool | str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if `norm_cfg` is None, otherwise - False. Default: "auto". - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - inplace (bool): Whether to use inplace mode for activation. - Default: True. 
- with_spectral_norm (bool): Whether use spectral norm in conv module. - Default: False. - padding_mode (str): If the `padding_mode` has not been supported by - current `Conv2d` in PyTorch, we will use our own padding layer - instead. Currently, we support ['zeros', 'circular'] with official - implementation and ['reflect'] with our own implementation. - Default: 'zeros'. - order (tuple[str]): The order of conv/norm/activation layers. It is a - sequence of "conv", "norm" and "act". Common examples are - ("conv", "norm", "act") and ("act", "conv", "norm"). - Default: ('conv', 'norm', 'act'). - """ - - _abbr_ = 'conv_block' - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias='auto', - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - inplace=True, - with_spectral_norm=False, - padding_mode='zeros', - order=('conv', 'norm', 'act')): - super(ConvModule, self).__init__() - assert conv_cfg is None or isinstance(conv_cfg, dict) - assert norm_cfg is None or isinstance(norm_cfg, dict) - assert act_cfg is None or isinstance(act_cfg, dict) - official_padding_mode = ['zeros', 'circular'] - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.inplace = inplace - self.with_spectral_norm = with_spectral_norm - self.with_explicit_padding = padding_mode not in official_padding_mode - self.order = order - assert isinstance(self.order, tuple) and len(self.order) == 3 - assert set(order) == set(['conv', 'norm', 'act']) - - self.with_norm = norm_cfg is not None - self.with_activation = act_cfg is not None - # if the conv layer is before a norm layer, bias is unnecessary. - if bias == 'auto': - bias = not self.with_norm - self.with_bias = bias - - if self.with_explicit_padding: - pad_cfg = dict(type=padding_mode) - self.padding_layer = build_padding_layer(pad_cfg, padding) - - # reset padding to 0 for conv module - conv_padding = 0 if self.with_explicit_padding else padding - # build convolution layer - self.conv = build_conv_layer( - conv_cfg, - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=conv_padding, - dilation=dilation, - groups=groups, - bias=bias) - # export the attributes of self.conv to a higher level for convenience - self.in_channels = self.conv.in_channels - self.out_channels = self.conv.out_channels - self.kernel_size = self.conv.kernel_size - self.stride = self.conv.stride - self.padding = padding - self.dilation = self.conv.dilation - self.transposed = self.conv.transposed - self.output_padding = self.conv.output_padding - self.groups = self.conv.groups - - if self.with_spectral_norm: - self.conv = nn.utils.spectral_norm(self.conv) - - # build normalization layers - if self.with_norm: - # norm layer is after conv layer - if order.index('norm') > order.index('conv'): - norm_channels = out_channels - else: - norm_channels = in_channels - self.norm_name, norm = build_norm_layer(norm_cfg, norm_channels) - self.add_module(self.norm_name, norm) - if self.with_bias: - if isinstance(norm, (_BatchNorm, _InstanceNorm)): - warnings.warn( - 'Unnecessary conv bias before batch/instance norm') - else: - self.norm_name = None - - # build activation layer - if self.with_activation: - act_cfg_ = act_cfg.copy() - # nn.Tanh has no 'inplace' argument - if act_cfg_['type'] not in [ - 'Tanh', 'PReLU', 'Sigmoid', 'HSigmoid', 'Swish' - ]: - act_cfg_.setdefault('inplace', inplace) - self.activate = build_activation_layer(act_cfg_) - - # Use msra init by default - 
self.init_weights() - - @property - def norm(self): - if self.norm_name: - return getattr(self, self.norm_name) - else: - return None - - def init_weights(self): - # 1. It is mainly for customized conv layers with their own - # initialization manners by calling their own ``init_weights()``, - # and we do not want ConvModule to override the initialization. - # 2. For customized conv layers without their own initialization - # manners (that is, they don't have their own ``init_weights()``) - # and PyTorch's conv layers, they will be initialized by - # this method with default ``kaiming_init``. - # Note: For PyTorch's conv layers, they will be overwritten by our - # initialization implementation using default ``kaiming_init``. - if not hasattr(self.conv, 'init_weights'): - if self.with_activation and self.act_cfg['type'] == 'LeakyReLU': - nonlinearity = 'leaky_relu' - a = self.act_cfg.get('negative_slope', 0.01) - else: - nonlinearity = 'relu' - a = 0 - kaiming_init(self.conv, a=a, nonlinearity=nonlinearity) - if self.with_norm: - constant_init(self.norm, 1, bias=0) - - def forward(self, x, activate=True, norm=True): - for layer in self.order: - if layer == 'conv': - if self.with_explicit_padding: - x = self.padding_layer(x) - x = self.conv(x) - elif layer == 'norm' and norm and self.with_norm: - x = self.norm(x) - elif layer == 'act' and activate and self.with_activation: - x = self.activate(x) - return x diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv_ws.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv_ws.py deleted file mode 100755 index a3941e27..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/conv_ws.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .registry import CONV_LAYERS - - -def conv_ws_2d(input, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - eps=1e-5): - c_in = weight.size(0) - weight_flat = weight.view(c_in, -1) - mean = weight_flat.mean(dim=1, keepdim=True).view(c_in, 1, 1, 1) - std = weight_flat.std(dim=1, keepdim=True).view(c_in, 1, 1, 1) - weight = (weight - mean) / (std + eps) - return F.conv2d(input, weight, bias, stride, padding, dilation, groups) - - -@CONV_LAYERS.register_module('ConvWS') -class ConvWS2d(nn.Conv2d): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True, - eps=1e-5): - super(ConvWS2d, self).__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.eps = eps - - def forward(self, x): - return conv_ws_2d(x, self.weight, self.bias, self.stride, self.padding, - self.dilation, self.groups, self.eps) - - -@CONV_LAYERS.register_module(name='ConvAWS') -class ConvAWS2d(nn.Conv2d): - """AWS (Adaptive Weight Standardization) - - This is a variant of Weight Standardization - (https://arxiv.org/pdf/1903.10520.pdf) - It is used in DetectoRS to avoid NaN - (https://arxiv.org/pdf/2006.02334.pdf) - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the conv kernel - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. 
Default: 0 - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 1 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If set True, adds a learnable bias to the - output. Default: True - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True): - super().__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.register_buffer('weight_gamma', - torch.ones(self.out_channels, 1, 1, 1)) - self.register_buffer('weight_beta', - torch.zeros(self.out_channels, 1, 1, 1)) - - def _get_weight(self, weight): - weight_flat = weight.view(weight.size(0), -1) - mean = weight_flat.mean(dim=1).view(-1, 1, 1, 1) - std = torch.sqrt(weight_flat.var(dim=1) + 1e-5).view(-1, 1, 1, 1) - weight = (weight - mean) / std - weight = self.weight_gamma * weight + self.weight_beta - return weight - - def forward(self, x): - weight = self._get_weight(self.weight) - return F.conv2d(x, weight, self.bias, self.stride, self.padding, - self.dilation, self.groups) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Override default load function. - - AWS overrides the function _load_from_state_dict to recover - weight_gamma and weight_beta if they are missing. If weight_gamma and - weight_beta are found in the checkpoint, this function will return - after super()._load_from_state_dict. Otherwise, it will compute the - mean and std of the pretrained weights and store them in weight_beta - and weight_gamma. - """ - - self.weight_gamma.data.fill_(-1) - local_missing_keys = [] - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, local_missing_keys, - unexpected_keys, error_msgs) - if self.weight_gamma.data.mean() > 0: - for k in local_missing_keys: - missing_keys.append(k) - return - weight = self.weight.data - weight_flat = weight.view(weight.size(0), -1) - mean = weight_flat.mean(dim=1).view(-1, 1, 1, 1) - std = torch.sqrt(weight_flat.var(dim=1) + 1e-5).view(-1, 1, 1, 1) - self.weight_beta.data.copy_(mean) - self.weight_gamma.data.copy_(std) - missing_gamma_beta = [ - k for k in local_missing_keys - if k.endswith('weight_gamma') or k.endswith('weight_beta') - ] - for k in missing_gamma_beta: - local_missing_keys.remove(k) - for k in local_missing_keys: - missing_keys.append(k) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/depthwise_separable_conv_module.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/depthwise_separable_conv_module.py deleted file mode 100755 index 722d5d8d..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/depthwise_separable_conv_module.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .conv_module import ConvModule - - -class DepthwiseSeparableConvModule(nn.Module): - """Depthwise separable convolution module. - - See https://arxiv.org/pdf/1704.04861.pdf for details. - - This module can replace a ConvModule with the conv block replaced by two - conv block: depthwise conv block and pointwise conv block. The depthwise - conv block contains depthwise-conv/norm/activation layers. The pointwise - conv block contains pointwise-conv/norm/activation layers. 
It should be - noted that there will be norm/activation layer in the depthwise conv block - if `norm_cfg` and `act_cfg` are specified. - - Args: - in_channels (int): Number of channels in the input feature map. - Same as that in ``nn._ConvNd``. - out_channels (int): Number of channels produced by the convolution. - Same as that in ``nn._ConvNd``. - kernel_size (int | tuple[int]): Size of the convolving kernel. - Same as that in ``nn._ConvNd``. - stride (int | tuple[int]): Stride of the convolution. - Same as that in ``nn._ConvNd``. Default: 1. - padding (int | tuple[int]): Zero-padding added to both sides of - the input. Same as that in ``nn._ConvNd``. Default: 0. - dilation (int | tuple[int]): Spacing between kernel elements. - Same as that in ``nn._ConvNd``. Default: 1. - norm_cfg (dict): Default norm config for both depthwise ConvModule and - pointwise ConvModule. Default: None. - act_cfg (dict): Default activation config for both depthwise ConvModule - and pointwise ConvModule. Default: dict(type='ReLU'). - dw_norm_cfg (dict): Norm config of depthwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - dw_act_cfg (dict): Activation config of depthwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. Default: 'default'. - pw_norm_cfg (dict): Norm config of pointwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - pw_act_cfg (dict): Activation config of pointwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. Default: 'default'. - kwargs (optional): Other shared arguments for depthwise and pointwise - ConvModule. See ConvModule for ref. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - dw_norm_cfg='default', - dw_act_cfg='default', - pw_norm_cfg='default', - pw_act_cfg='default', - **kwargs): - super(DepthwiseSeparableConvModule, self).__init__() - assert 'groups' not in kwargs, 'groups should not be specified' - - # if norm/activation config of depthwise/pointwise ConvModule is not - # specified, use default config. - dw_norm_cfg = dw_norm_cfg if dw_norm_cfg != 'default' else norm_cfg - dw_act_cfg = dw_act_cfg if dw_act_cfg != 'default' else act_cfg - pw_norm_cfg = pw_norm_cfg if pw_norm_cfg != 'default' else norm_cfg - pw_act_cfg = pw_act_cfg if pw_act_cfg != 'default' else act_cfg - - # depthwise convolution - self.depthwise_conv = ConvModule( - in_channels, - in_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=in_channels, - norm_cfg=dw_norm_cfg, - act_cfg=dw_act_cfg, - **kwargs) - - self.pointwise_conv = ConvModule( - in_channels, - out_channels, - 1, - norm_cfg=pw_norm_cfg, - act_cfg=pw_act_cfg, - **kwargs) - - def forward(self, x): - x = self.depthwise_conv(x) - x = self.pointwise_conv(x) - return x diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/drop.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/drop.py deleted file mode 100755 index b0a02665..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/drop.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from mmcv import build_from_cfg -from .registry import DROPOUT_LAYERS - - -def drop_path(x, drop_prob=0., training=False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). 
- - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - """ - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - # handle tensors with different dimensions, not just 4D tensors. - shape = (x.shape[0], ) + (1, ) * (x.ndim - 1) - random_tensor = keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - output = x.div(keep_prob) * random_tensor.floor() - return output - - -@DROPOUT_LAYERS.register_module() -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - - Args: - drop_prob (float): Probability of the path to be zeroed. Default: 0.1 - """ - - def __init__(self, drop_prob=0.1): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -@DROPOUT_LAYERS.register_module() -class Dropout(nn.Dropout): - """A wrapper for ``torch.nn.Dropout``, We rename the ``p`` of - ``torch.nn.Dropout`` to ``drop_prob`` so as to be consistent with - ``DropPath`` - - Args: - drop_prob (float): Probability of the elements to be - zeroed. Default: 0.5. - inplace (bool): Do the operation inplace or not. Default: False. - """ - - def __init__(self, drop_prob=0.5, inplace=False): - super().__init__(p=drop_prob, inplace=inplace) - - -def build_dropout(cfg, default_args=None): - """Builder for drop out layers.""" - return build_from_cfg(cfg, DROPOUT_LAYERS, default_args) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/generalized_attention.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/generalized_attention.py deleted file mode 100755 index 988d9adf..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/generalized_attention.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..utils import kaiming_init -from .registry import PLUGIN_LAYERS - - -@PLUGIN_LAYERS.register_module() -class GeneralizedAttention(nn.Module): - """GeneralizedAttention module. - - See 'An Empirical Study of Spatial Attention Mechanisms in Deep Networks' - (https://arxiv.org/abs/1711.07971) for details. - - Args: - in_channels (int): Channels of the input feature map. - spatial_range (int): The spatial range. -1 indicates no spatial range - constraint. Default: -1. - num_heads (int): The head number of empirical_attention module. - Default: 9. - position_embedding_dim (int): The position embedding dimension. - Default: -1. - position_magnitude (int): A multiplier acting on coord difference. - Default: 1. - kv_stride (int): The feature stride acting on key/value feature map. - Default: 2. - q_stride (int): The feature stride acting on query feature map. - Default: 1. - attention_type (str): A binary indicator string for indicating which - items in generalized empirical_attention module are used. - Default: '1111'. 
- - - '1000' indicates 'query and key content' (appr - appr) item, - - '0100' indicates 'query content and relative position' - (appr - position) item, - - '0010' indicates 'key content only' (bias - appr) item, - - '0001' indicates 'relative position only' (bias - position) item. - """ - - _abbr_ = 'gen_attention_block' - - def __init__(self, - in_channels, - spatial_range=-1, - num_heads=9, - position_embedding_dim=-1, - position_magnitude=1, - kv_stride=2, - q_stride=1, - attention_type='1111'): - - super(GeneralizedAttention, self).__init__() - - # hard range means local range for non-local operation - self.position_embedding_dim = ( - position_embedding_dim - if position_embedding_dim > 0 else in_channels) - - self.position_magnitude = position_magnitude - self.num_heads = num_heads - self.in_channels = in_channels - self.spatial_range = spatial_range - self.kv_stride = kv_stride - self.q_stride = q_stride - self.attention_type = [bool(int(_)) for _ in attention_type] - self.qk_embed_dim = in_channels // num_heads - out_c = self.qk_embed_dim * num_heads - - if self.attention_type[0] or self.attention_type[1]: - self.query_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=out_c, - kernel_size=1, - bias=False) - self.query_conv.kaiming_init = True - - if self.attention_type[0] or self.attention_type[2]: - self.key_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=out_c, - kernel_size=1, - bias=False) - self.key_conv.kaiming_init = True - - self.v_dim = in_channels // num_heads - self.value_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=self.v_dim * num_heads, - kernel_size=1, - bias=False) - self.value_conv.kaiming_init = True - - if self.attention_type[1] or self.attention_type[3]: - self.appr_geom_fc_x = nn.Linear( - self.position_embedding_dim // 2, out_c, bias=False) - self.appr_geom_fc_x.kaiming_init = True - - self.appr_geom_fc_y = nn.Linear( - self.position_embedding_dim // 2, out_c, bias=False) - self.appr_geom_fc_y.kaiming_init = True - - if self.attention_type[2]: - stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2) - appr_bias_value = -2 * stdv * torch.rand(out_c) + stdv - self.appr_bias = nn.Parameter(appr_bias_value) - - if self.attention_type[3]: - stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2) - geom_bias_value = -2 * stdv * torch.rand(out_c) + stdv - self.geom_bias = nn.Parameter(geom_bias_value) - - self.proj_conv = nn.Conv2d( - in_channels=self.v_dim * num_heads, - out_channels=in_channels, - kernel_size=1, - bias=True) - self.proj_conv.kaiming_init = True - self.gamma = nn.Parameter(torch.zeros(1)) - - if self.spatial_range >= 0: - # only works when non local is after 3*3 conv - if in_channels == 256: - max_len = 84 - elif in_channels == 512: - max_len = 42 - - max_len_kv = int((max_len - 1.0) / self.kv_stride + 1) - local_constraint_map = np.ones( - (max_len, max_len, max_len_kv, max_len_kv), dtype=np.int) - for iy in range(max_len): - for ix in range(max_len): - local_constraint_map[ - iy, ix, - max((iy - self.spatial_range) // - self.kv_stride, 0):min((iy + self.spatial_range + - 1) // self.kv_stride + - 1, max_len), - max((ix - self.spatial_range) // - self.kv_stride, 0):min((ix + self.spatial_range + - 1) // self.kv_stride + - 1, max_len)] = 0 - - self.local_constraint_map = nn.Parameter( - torch.from_numpy(local_constraint_map).byte(), - requires_grad=False) - - if self.q_stride > 1: - self.q_downsample = nn.AvgPool2d( - kernel_size=1, stride=self.q_stride) - else: - self.q_downsample = None - - if self.kv_stride > 1: - 
self.kv_downsample = nn.AvgPool2d( - kernel_size=1, stride=self.kv_stride) - else: - self.kv_downsample = None - - self.init_weights() - - def get_position_embedding(self, - h, - w, - h_kv, - w_kv, - q_stride, - kv_stride, - device, - dtype, - feat_dim, - wave_length=1000): - # the default type of Tensor is float32, leading to type mismatch - # in fp16 mode. Cast it to support fp16 mode. - h_idxs = torch.linspace(0, h - 1, h).to(device=device, dtype=dtype) - h_idxs = h_idxs.view((h, 1)) * q_stride - - w_idxs = torch.linspace(0, w - 1, w).to(device=device, dtype=dtype) - w_idxs = w_idxs.view((w, 1)) * q_stride - - h_kv_idxs = torch.linspace(0, h_kv - 1, h_kv).to( - device=device, dtype=dtype) - h_kv_idxs = h_kv_idxs.view((h_kv, 1)) * kv_stride - - w_kv_idxs = torch.linspace(0, w_kv - 1, w_kv).to( - device=device, dtype=dtype) - w_kv_idxs = w_kv_idxs.view((w_kv, 1)) * kv_stride - - # (h, h_kv, 1) - h_diff = h_idxs.unsqueeze(1) - h_kv_idxs.unsqueeze(0) - h_diff *= self.position_magnitude - - # (w, w_kv, 1) - w_diff = w_idxs.unsqueeze(1) - w_kv_idxs.unsqueeze(0) - w_diff *= self.position_magnitude - - feat_range = torch.arange(0, feat_dim / 4).to( - device=device, dtype=dtype) - - dim_mat = torch.Tensor([wave_length]).to(device=device, dtype=dtype) - dim_mat = dim_mat**((4. / feat_dim) * feat_range) - dim_mat = dim_mat.view((1, 1, -1)) - - embedding_x = torch.cat( - ((w_diff / dim_mat).sin(), (w_diff / dim_mat).cos()), dim=2) - - embedding_y = torch.cat( - ((h_diff / dim_mat).sin(), (h_diff / dim_mat).cos()), dim=2) - - return embedding_x, embedding_y - - def forward(self, x_input): - num_heads = self.num_heads - - # use empirical_attention - if self.q_downsample is not None: - x_q = self.q_downsample(x_input) - else: - x_q = x_input - n, _, h, w = x_q.shape - - if self.kv_downsample is not None: - x_kv = self.kv_downsample(x_input) - else: - x_kv = x_input - _, _, h_kv, w_kv = x_kv.shape - - if self.attention_type[0] or self.attention_type[1]: - proj_query = self.query_conv(x_q).view( - (n, num_heads, self.qk_embed_dim, h * w)) - proj_query = proj_query.permute(0, 1, 3, 2) - - if self.attention_type[0] or self.attention_type[2]: - proj_key = self.key_conv(x_kv).view( - (n, num_heads, self.qk_embed_dim, h_kv * w_kv)) - - if self.attention_type[1] or self.attention_type[3]: - position_embed_x, position_embed_y = self.get_position_embedding( - h, w, h_kv, w_kv, self.q_stride, self.kv_stride, - x_input.device, x_input.dtype, self.position_embedding_dim) - # (n, num_heads, w, w_kv, dim) - position_feat_x = self.appr_geom_fc_x(position_embed_x).\ - view(1, w, w_kv, num_heads, self.qk_embed_dim).\ - permute(0, 3, 1, 2, 4).\ - repeat(n, 1, 1, 1, 1) - - # (n, num_heads, h, h_kv, dim) - position_feat_y = self.appr_geom_fc_y(position_embed_y).\ - view(1, h, h_kv, num_heads, self.qk_embed_dim).\ - permute(0, 3, 1, 2, 4).\ - repeat(n, 1, 1, 1, 1) - - position_feat_x /= math.sqrt(2) - position_feat_y /= math.sqrt(2) - - # accelerate for saliency only - if (np.sum(self.attention_type) == 1) and self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim).\ - repeat(n, 1, 1, 1) - - energy = torch.matmul(appr_bias, proj_key).\ - view(n, num_heads, 1, h_kv * w_kv) - - h = 1 - w = 1 - else: - # (n, num_heads, h*w, h_kv*w_kv), query before key, 540mb for - if not self.attention_type[0]: - energy = torch.zeros( - n, - num_heads, - h, - w, - h_kv, - w_kv, - dtype=x_input.dtype, - device=x_input.device) - - # attention_type[0]: appr - appr - # attention_type[1]: appr - position 
- # attention_type[2]: bias - appr - # attention_type[3]: bias - position - if self.attention_type[0] or self.attention_type[2]: - if self.attention_type[0] and self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim) - energy = torch.matmul(proj_query + appr_bias, proj_key).\ - view(n, num_heads, h, w, h_kv, w_kv) - - elif self.attention_type[0]: - energy = torch.matmul(proj_query, proj_key).\ - view(n, num_heads, h, w, h_kv, w_kv) - - elif self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim).\ - repeat(n, 1, 1, 1) - - energy += torch.matmul(appr_bias, proj_key).\ - view(n, num_heads, 1, 1, h_kv, w_kv) - - if self.attention_type[1] or self.attention_type[3]: - if self.attention_type[1] and self.attention_type[3]: - geom_bias = self.geom_bias.\ - view(1, num_heads, 1, self.qk_embed_dim) - - proj_query_reshape = (proj_query + geom_bias).\ - view(n, num_heads, h, w, self.qk_embed_dim) - - energy_x = torch.matmul( - proj_query_reshape.permute(0, 1, 3, 2, 4), - position_feat_x.permute(0, 1, 2, 4, 3)) - energy_x = energy_x.\ - permute(0, 1, 3, 2, 4).unsqueeze(4) - - energy_y = torch.matmul( - proj_query_reshape, - position_feat_y.permute(0, 1, 2, 4, 3)) - energy_y = energy_y.unsqueeze(5) - - energy += energy_x + energy_y - - elif self.attention_type[1]: - proj_query_reshape = proj_query.\ - view(n, num_heads, h, w, self.qk_embed_dim) - proj_query_reshape = proj_query_reshape.\ - permute(0, 1, 3, 2, 4) - position_feat_x_reshape = position_feat_x.\ - permute(0, 1, 2, 4, 3) - position_feat_y_reshape = position_feat_y.\ - permute(0, 1, 2, 4, 3) - - energy_x = torch.matmul(proj_query_reshape, - position_feat_x_reshape) - energy_x = energy_x.permute(0, 1, 3, 2, 4).unsqueeze(4) - - energy_y = torch.matmul(proj_query_reshape, - position_feat_y_reshape) - energy_y = energy_y.unsqueeze(5) - - energy += energy_x + energy_y - - elif self.attention_type[3]: - geom_bias = self.geom_bias.\ - view(1, num_heads, self.qk_embed_dim, 1).\ - repeat(n, 1, 1, 1) - - position_feat_x_reshape = position_feat_x.\ - view(n, num_heads, w*w_kv, self.qk_embed_dim) - - position_feat_y_reshape = position_feat_y.\ - view(n, num_heads, h * h_kv, self.qk_embed_dim) - - energy_x = torch.matmul(position_feat_x_reshape, geom_bias) - energy_x = energy_x.view(n, num_heads, 1, w, 1, w_kv) - - energy_y = torch.matmul(position_feat_y_reshape, geom_bias) - energy_y = energy_y.view(n, num_heads, h, 1, h_kv, 1) - - energy += energy_x + energy_y - - energy = energy.view(n, num_heads, h * w, h_kv * w_kv) - - if self.spatial_range >= 0: - cur_local_constraint_map = \ - self.local_constraint_map[:h, :w, :h_kv, :w_kv].\ - contiguous().\ - view(1, 1, h*w, h_kv*w_kv) - - energy = energy.masked_fill_(cur_local_constraint_map, - float('-inf')) - - attention = F.softmax(energy, 3) - - proj_value = self.value_conv(x_kv) - proj_value_reshape = proj_value.\ - view((n, num_heads, self.v_dim, h_kv * w_kv)).\ - permute(0, 1, 3, 2) - - out = torch.matmul(attention, proj_value_reshape).\ - permute(0, 1, 3, 2).\ - contiguous().\ - view(n, self.v_dim * self.num_heads, h, w) - - out = self.proj_conv(out) - - # output is downsampled, upsample back to input size - if self.q_downsample is not None: - out = F.interpolate( - out, - size=x_input.shape[2:], - mode='bilinear', - align_corners=False) - - out = self.gamma * out + x_input - return out - - def init_weights(self): - for m in self.modules(): - if hasattr(m, 'kaiming_init') and m.kaiming_init: - kaiming_init( - m, - 
mode='fan_in', - nonlinearity='leaky_relu', - bias=0, - distribution='uniform', - a=1) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/hsigmoid.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/hsigmoid.py deleted file mode 100755 index 30b1a3d6..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/hsigmoid.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class HSigmoid(nn.Module): - """Hard Sigmoid Module. Apply the hard sigmoid function: - Hsigmoid(x) = min(max((x + bias) / divisor, min_value), max_value) - Default: Hsigmoid(x) = min(max((x + 1) / 2, 0), 1) - - Args: - bias (float): Bias of the input feature map. Default: 1.0. - divisor (float): Divisor of the input feature map. Default: 2.0. - min_value (float): Lower bound value. Default: 0.0. - max_value (float): Upper bound value. Default: 1.0. - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, bias=1.0, divisor=2.0, min_value=0.0, max_value=1.0): - super(HSigmoid, self).__init__() - self.bias = bias - self.divisor = divisor - assert self.divisor != 0 - self.min_value = min_value - self.max_value = max_value - - def forward(self, x): - x = (x + self.bias) / self.divisor - - return x.clamp_(self.min_value, self.max_value) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/hswish.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/hswish.py deleted file mode 100755 index 7e0c090f..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/hswish.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class HSwish(nn.Module): - """Hard Swish Module. - - This module applies the hard swish function: - - .. math:: - Hswish(x) = x * ReLU6(x + 3) / 6 - - Args: - inplace (bool): can optionally do the operation in-place. - Default: False. - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, inplace=False): - super(HSwish, self).__init__() - self.act = nn.ReLU6(inplace) - - def forward(self, x): - return x * self.act(x + 3) / 6 diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/non_local.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/non_local.py deleted file mode 100755 index 92d00155..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/non_local.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta - -import torch -import torch.nn as nn - -from ..utils import constant_init, normal_init -from .conv_module import ConvModule -from .registry import PLUGIN_LAYERS - - -class _NonLocalNd(nn.Module, metaclass=ABCMeta): - """Basic Non-local module. - - This module is proposed in - "Non-local Neural Networks" - Paper reference: https://arxiv.org/abs/1711.07971 - Code reference: https://github.com/AlexHex7/Non-local_pytorch - - Args: - in_channels (int): Channels of the input feature map. - reduction (int): Channel reduction ratio. Default: 2. - use_scale (bool): Whether to scale pairwise_weight by - `1/sqrt(inter_channels)` when the mode is `embedded_gaussian`. - Default: True. - conv_cfg (None | dict): The config dict for convolution layers. - If not specified, it will use `nn.Conv2d` for convolution layers. - Default: None. 
- norm_cfg (None | dict): The config dict for normalization layers. - Default: None. (This parameter is only applicable to conv_out.) - mode (str): Options are `gaussian`, `concatenation`, - `embedded_gaussian` and `dot_product`. Default: embedded_gaussian. - """ - - def __init__(self, - in_channels, - reduction=2, - use_scale=True, - conv_cfg=None, - norm_cfg=None, - mode='embedded_gaussian', - **kwargs): - super(_NonLocalNd, self).__init__() - self.in_channels = in_channels - self.reduction = reduction - self.use_scale = use_scale - self.inter_channels = max(in_channels // reduction, 1) - self.mode = mode - - if mode not in [ - 'gaussian', 'embedded_gaussian', 'dot_product', 'concatenation' - ]: - raise ValueError("Mode should be in 'gaussian', 'concatenation', " - f"'embedded_gaussian' or 'dot_product', but got " - f'{mode} instead.') - - # g, theta, phi are defaulted as `nn.ConvNd`. - # Here we use ConvModule for potential usage. - self.g = ConvModule( - self.in_channels, - self.inter_channels, - kernel_size=1, - conv_cfg=conv_cfg, - act_cfg=None) - self.conv_out = ConvModule( - self.inter_channels, - self.in_channels, - kernel_size=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - if self.mode != 'gaussian': - self.theta = ConvModule( - self.in_channels, - self.inter_channels, - kernel_size=1, - conv_cfg=conv_cfg, - act_cfg=None) - self.phi = ConvModule( - self.in_channels, - self.inter_channels, - kernel_size=1, - conv_cfg=conv_cfg, - act_cfg=None) - - if self.mode == 'concatenation': - self.concat_project = ConvModule( - self.inter_channels * 2, - 1, - kernel_size=1, - stride=1, - padding=0, - bias=False, - act_cfg=dict(type='ReLU')) - - self.init_weights(**kwargs) - - def init_weights(self, std=0.01, zeros_init=True): - if self.mode != 'gaussian': - for m in [self.g, self.theta, self.phi]: - normal_init(m.conv, std=std) - else: - normal_init(self.g.conv, std=std) - if zeros_init: - if self.conv_out.norm_cfg is None: - constant_init(self.conv_out.conv, 0) - else: - constant_init(self.conv_out.norm, 0) - else: - if self.conv_out.norm_cfg is None: - normal_init(self.conv_out.conv, std=std) - else: - normal_init(self.conv_out.norm, std=std) - - def gaussian(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - pairwise_weight = pairwise_weight.softmax(dim=-1) - return pairwise_weight - - def embedded_gaussian(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - if self.use_scale: - # theta_x.shape[-1] is `self.inter_channels` - pairwise_weight /= theta_x.shape[-1]**0.5 - pairwise_weight = pairwise_weight.softmax(dim=-1) - return pairwise_weight - - def dot_product(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - pairwise_weight /= pairwise_weight.shape[-1] - return pairwise_weight - - def concatenation(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - h = theta_x.size(2) - w = phi_x.size(3) - theta_x = theta_x.repeat(1, 1, 1, w) - phi_x = phi_x.repeat(1, 1, h, 1) - - concat_feature = 
torch.cat([theta_x, phi_x], dim=1) - pairwise_weight = self.concat_project(concat_feature) - n, _, h, w = pairwise_weight.size() - pairwise_weight = pairwise_weight.view(n, h, w) - pairwise_weight /= pairwise_weight.shape[-1] - - return pairwise_weight - - def forward(self, x): - # Assume `reduction = 1`, then `inter_channels = C` - # or `inter_channels = C` when `mode="gaussian"` - - # NonLocal1d x: [N, C, H] - # NonLocal2d x: [N, C, H, W] - # NonLocal3d x: [N, C, T, H, W] - n = x.size(0) - - # NonLocal1d g_x: [N, H, C] - # NonLocal2d g_x: [N, HxW, C] - # NonLocal3d g_x: [N, TxHxW, C] - g_x = self.g(x).view(n, self.inter_channels, -1) - g_x = g_x.permute(0, 2, 1) - - # NonLocal1d theta_x: [N, H, C], phi_x: [N, C, H] - # NonLocal2d theta_x: [N, HxW, C], phi_x: [N, C, HxW] - # NonLocal3d theta_x: [N, TxHxW, C], phi_x: [N, C, TxHxW] - if self.mode == 'gaussian': - theta_x = x.view(n, self.in_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - if self.sub_sample: - phi_x = self.phi(x).view(n, self.in_channels, -1) - else: - phi_x = x.view(n, self.in_channels, -1) - elif self.mode == 'concatenation': - theta_x = self.theta(x).view(n, self.inter_channels, -1, 1) - phi_x = self.phi(x).view(n, self.inter_channels, 1, -1) - else: - theta_x = self.theta(x).view(n, self.inter_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - phi_x = self.phi(x).view(n, self.inter_channels, -1) - - pairwise_func = getattr(self, self.mode) - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = pairwise_func(theta_x, phi_x) - - # NonLocal1d y: [N, H, C] - # NonLocal2d y: [N, HxW, C] - # NonLocal3d y: [N, TxHxW, C] - y = torch.matmul(pairwise_weight, g_x) - # NonLocal1d y: [N, C, H] - # NonLocal2d y: [N, C, H, W] - # NonLocal3d y: [N, C, T, H, W] - y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels, - *x.size()[2:]) - - output = x + self.conv_out(y) - - return output - - -class NonLocal1d(_NonLocalNd): - """1D Non-local module. - - Args: - in_channels (int): Same as `NonLocalND`. - sub_sample (bool): Whether to apply max pooling after pairwise - function (Note that the `sub_sample` is applied on spatial only). - Default: False. - conv_cfg (None | dict): Same as `NonLocalND`. - Default: dict(type='Conv1d'). - """ - - def __init__(self, - in_channels, - sub_sample=False, - conv_cfg=dict(type='Conv1d'), - **kwargs): - super(NonLocal1d, self).__init__( - in_channels, conv_cfg=conv_cfg, **kwargs) - - self.sub_sample = sub_sample - - if sub_sample: - max_pool_layer = nn.MaxPool1d(kernel_size=2) - self.g = nn.Sequential(self.g, max_pool_layer) - if self.mode != 'gaussian': - self.phi = nn.Sequential(self.phi, max_pool_layer) - else: - self.phi = max_pool_layer - - -@PLUGIN_LAYERS.register_module() -class NonLocal2d(_NonLocalNd): - """2D Non-local module. - - Args: - in_channels (int): Same as `NonLocalND`. - sub_sample (bool): Whether to apply max pooling after pairwise - function (Note that the `sub_sample` is applied on spatial only). - Default: False. - conv_cfg (None | dict): Same as `NonLocalND`. - Default: dict(type='Conv2d'). 
- """ - - _abbr_ = 'nonlocal_block' - - def __init__(self, - in_channels, - sub_sample=False, - conv_cfg=dict(type='Conv2d'), - **kwargs): - super(NonLocal2d, self).__init__( - in_channels, conv_cfg=conv_cfg, **kwargs) - - self.sub_sample = sub_sample - - if sub_sample: - max_pool_layer = nn.MaxPool2d(kernel_size=(2, 2)) - self.g = nn.Sequential(self.g, max_pool_layer) - if self.mode != 'gaussian': - self.phi = nn.Sequential(self.phi, max_pool_layer) - else: - self.phi = max_pool_layer - - -class NonLocal3d(_NonLocalNd): - """3D Non-local module. - - Args: - in_channels (int): Same as `NonLocalND`. - sub_sample (bool): Whether to apply max pooling after pairwise - function (Note that the `sub_sample` is applied on spatial only). - Default: False. - conv_cfg (None | dict): Same as `NonLocalND`. - Default: dict(type='Conv3d'). - """ - - def __init__(self, - in_channels, - sub_sample=False, - conv_cfg=dict(type='Conv3d'), - **kwargs): - super(NonLocal3d, self).__init__( - in_channels, conv_cfg=conv_cfg, **kwargs) - self.sub_sample = sub_sample - - if sub_sample: - max_pool_layer = nn.MaxPool3d(kernel_size=(1, 2, 2)) - self.g = nn.Sequential(self.g, max_pool_layer) - if self.mode != 'gaussian': - self.phi = nn.Sequential(self.phi, max_pool_layer) - else: - self.phi = max_pool_layer diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/norm.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/norm.py deleted file mode 100755 index cfb326bd..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/norm.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect - -import torch.nn as nn - -from mmcv.utils import is_tuple_of -from mmcv.utils.parrots_wrapper import SyncBatchNorm, _BatchNorm, _InstanceNorm -from .registry import NORM_LAYERS - -NORM_LAYERS.register_module('BN', module=nn.BatchNorm2d) -NORM_LAYERS.register_module('BN1d', module=nn.BatchNorm1d) -NORM_LAYERS.register_module('BN2d', module=nn.BatchNorm2d) -NORM_LAYERS.register_module('BN3d', module=nn.BatchNorm3d) -NORM_LAYERS.register_module('SyncBN', module=SyncBatchNorm) -NORM_LAYERS.register_module('GN', module=nn.GroupNorm) -NORM_LAYERS.register_module('LN', module=nn.LayerNorm) -NORM_LAYERS.register_module('IN', module=nn.InstanceNorm2d) -NORM_LAYERS.register_module('IN1d', module=nn.InstanceNorm1d) -NORM_LAYERS.register_module('IN2d', module=nn.InstanceNorm2d) -NORM_LAYERS.register_module('IN3d', module=nn.InstanceNorm3d) - - -def infer_abbr(class_type): - """Infer abbreviation from the class name. - - When we build a norm layer with `build_norm_layer()`, we want to preserve - the norm type in variable names, e.g, self.bn1, self.gn. This method will - infer the abbreviation to map class types to abbreviations. - - Rule 1: If the class has the property "_abbr_", return the property. - Rule 2: If the parent class is _BatchNorm, GroupNorm, LayerNorm or - InstanceNorm, the abbreviation of this layer will be "bn", "gn", "ln" and - "in" respectively. - Rule 3: If the class name contains "batch", "group", "layer" or "instance", - the abbreviation of this layer will be "bn", "gn", "ln" and "in" - respectively. - Rule 4: Otherwise, the abbreviation falls back to "norm". - - Args: - class_type (type): The norm layer type. - - Returns: - str: The inferred abbreviation. 
- """ - if not inspect.isclass(class_type): - raise TypeError( - f'class_type must be a type, but got {type(class_type)}') - if hasattr(class_type, '_abbr_'): - return class_type._abbr_ - if issubclass(class_type, _InstanceNorm): # IN is a subclass of BN - return 'in' - elif issubclass(class_type, _BatchNorm): - return 'bn' - elif issubclass(class_type, nn.GroupNorm): - return 'gn' - elif issubclass(class_type, nn.LayerNorm): - return 'ln' - else: - class_name = class_type.__name__.lower() - if 'batch' in class_name: - return 'bn' - elif 'group' in class_name: - return 'gn' - elif 'layer' in class_name: - return 'ln' - elif 'instance' in class_name: - return 'in' - else: - return 'norm_layer' - - -def build_norm_layer(cfg, num_features, postfix=''): - """Build normalization layer. - - Args: - cfg (dict): The norm layer config, which should contain: - - - type (str): Layer type. - - layer args: Args needed to instantiate a norm layer. - - requires_grad (bool, optional): Whether stop gradient updates. - num_features (int): Number of input channels. - postfix (int | str): The postfix to be appended into norm abbreviation - to create named layer. - - Returns: - (str, nn.Module): The first element is the layer name consisting of - abbreviation and postfix, e.g., bn1, gn. The second element is the - created norm layer. - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in NORM_LAYERS: - raise KeyError(f'Unrecognized norm type {layer_type}') - - norm_layer = NORM_LAYERS.get(layer_type) - abbr = infer_abbr(norm_layer) - - assert isinstance(postfix, (int, str)) - name = abbr + str(postfix) - - requires_grad = cfg_.pop('requires_grad', True) - cfg_.setdefault('eps', 1e-5) - if layer_type != 'GN': - layer = norm_layer(num_features, **cfg_) - if layer_type == 'SyncBN' and hasattr(layer, '_specify_ddp_gpu_num'): - layer._specify_ddp_gpu_num(1) - else: - assert 'num_groups' in cfg_ - layer = norm_layer(num_channels=num_features, **cfg_) - - for param in layer.parameters(): - param.requires_grad = requires_grad - - return name, layer - - -def is_norm(layer, exclude=None): - """Check if a layer is a normalization layer. - - Args: - layer (nn.Module): The layer to be checked. - exclude (type | tuple[type]): Types to be excluded. - - Returns: - bool: Whether the layer is a norm layer. - """ - if exclude is not None: - if not isinstance(exclude, tuple): - exclude = (exclude, ) - if not is_tuple_of(exclude, type): - raise TypeError( - f'"exclude" must be either None or type or a tuple of types, ' - f'but got {type(exclude)}: {exclude}') - - if exclude and isinstance(layer, exclude): - return False - - all_norm_bases = (_BatchNorm, _InstanceNorm, nn.GroupNorm, nn.LayerNorm) - return isinstance(layer, all_norm_bases) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/padding.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/padding.py deleted file mode 100755 index e4ac6b28..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/padding.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch.nn as nn - -from .registry import PADDING_LAYERS - -PADDING_LAYERS.register_module('zero', module=nn.ZeroPad2d) -PADDING_LAYERS.register_module('reflect', module=nn.ReflectionPad2d) -PADDING_LAYERS.register_module('replicate', module=nn.ReplicationPad2d) - - -def build_padding_layer(cfg, *args, **kwargs): - """Build padding layer. - - Args: - cfg (None or dict): The padding layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate a padding layer. - - Returns: - nn.Module: Created padding layer. - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - - cfg_ = cfg.copy() - padding_type = cfg_.pop('type') - if padding_type not in PADDING_LAYERS: - raise KeyError(f'Unrecognized padding type {padding_type}.') - else: - padding_layer = PADDING_LAYERS.get(padding_type) - - layer = padding_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/plugin.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/plugin.py deleted file mode 100755 index 07c010d4..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/plugin.py +++ /dev/null @@ -1,88 +0,0 @@ -import inspect -import platform - -from .registry import PLUGIN_LAYERS - -if platform.system() == 'Windows': - import regex as re -else: - import re - - -def infer_abbr(class_type): - """Infer abbreviation from the class name. - - This method will infer the abbreviation to map class types to - abbreviations. - - Rule 1: If the class has the property "abbr", return the property. - Rule 2: Otherwise, the abbreviation falls back to snake case of class - name, e.g. the abbreviation of ``FancyBlock`` will be ``fancy_block``. - - Args: - class_type (type): The norm layer type. - - Returns: - str: The inferred abbreviation. - """ - - def camel2snack(word): - """Convert camel case word into snack case. - - Modified from `inflection lib - `_. - - Example:: - - >>> camel2snack("FancyBlock") - 'fancy_block' - """ - - word = re.sub(r'([A-Z]+)([A-Z][a-z])', r'\1_\2', word) - word = re.sub(r'([a-z\d])([A-Z])', r'\1_\2', word) - word = word.replace('-', '_') - return word.lower() - - if not inspect.isclass(class_type): - raise TypeError( - f'class_type must be a type, but got {type(class_type)}') - if hasattr(class_type, '_abbr_'): - return class_type._abbr_ - else: - return camel2snack(class_type.__name__) - - -def build_plugin_layer(cfg, postfix='', **kwargs): - """Build plugin layer. - - Args: - cfg (None or dict): cfg should contain: - type (str): identify plugin layer type. - layer args: args needed to instantiate a plugin layer. - postfix (int, str): appended into norm abbreviation to - create named layer. Default: ''. 
- - Returns: - tuple[str, nn.Module]: - name (str): abbreviation + postfix - layer (nn.Module): created plugin layer - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in PLUGIN_LAYERS: - raise KeyError(f'Unrecognized plugin type {layer_type}') - - plugin_layer = PLUGIN_LAYERS.get(layer_type) - abbr = infer_abbr(plugin_layer) - - assert isinstance(postfix, (int, str)) - name = abbr + str(postfix) - - layer = plugin_layer(**kwargs, **cfg_) - - return name, layer diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/registry.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/registry.py deleted file mode 100755 index c2927977..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/registry.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry - -CONV_LAYERS = Registry('conv layer') -NORM_LAYERS = Registry('norm layer') -ACTIVATION_LAYERS = Registry('activation layer') -PADDING_LAYERS = Registry('padding layer') -UPSAMPLE_LAYERS = Registry('upsample layer') -PLUGIN_LAYERS = Registry('plugin layer') - -DROPOUT_LAYERS = Registry('drop out layers') -POSITIONAL_ENCODING = Registry('position encoding') -ATTENTION = Registry('attention') -FEEDFORWARD_NETWORK = Registry('feed-forward Network') -TRANSFORMER_LAYER = Registry('transformerLayer') -TRANSFORMER_LAYER_SEQUENCE = Registry('transformer-layers sequence') diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/scale.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/scale.py deleted file mode 100755 index c905fffc..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/scale.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -class Scale(nn.Module): - """A learnable scale parameter. - - This layer scales the input by a learnable factor. It multiplies a - learnable scale parameter of shape (1,) with input of any shape. - - Args: - scale (float): Initial value of scale factor. Default: 1.0 - """ - - def __init__(self, scale=1.0): - super(Scale, self).__init__() - self.scale = nn.Parameter(torch.tensor(scale, dtype=torch.float)) - - def forward(self, x): - return x * self.scale diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/swish.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/swish.py deleted file mode 100755 index e2ca8ed7..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/swish.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class Swish(nn.Module): - """Swish Module. - - This module applies the swish function: - - .. math:: - Swish(x) = x * Sigmoid(x) - - Returns: - Tensor: The output tensor. - """ - - def __init__(self): - super(Swish, self).__init__() - - def forward(self, x): - return x * torch.sigmoid(x) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/transformer.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/transformer.py deleted file mode 100755 index ed32688a..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/transformer.py +++ /dev/null @@ -1,595 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import warnings - -import torch -import torch.nn as nn - -from mmcv import ConfigDict, deprecated_api_warning -from mmcv.cnn import Linear, build_activation_layer, build_norm_layer -from mmcv.runner.base_module import BaseModule, ModuleList, Sequential -from mmcv.utils import build_from_cfg -from .drop import build_dropout -from .registry import (ATTENTION, FEEDFORWARD_NETWORK, POSITIONAL_ENCODING, - TRANSFORMER_LAYER, TRANSFORMER_LAYER_SEQUENCE) - -# Avoid BC-breaking of importing MultiScaleDeformableAttention from this file -try: - from mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention # noqa F401 - warnings.warn( - ImportWarning( - '``MultiScaleDeformableAttention`` has been moved to ' - '``mmcv.ops.multi_scale_deform_attn``, please change original path ' # noqa E501 - '``from mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` ' # noqa E501 - 'to ``from mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention`` ' # noqa E501 - )) - -except ImportError: - warnings.warn('Fail to import ``MultiScaleDeformableAttention`` from ' - '``mmcv.ops.multi_scale_deform_attn``, ' - 'You should install ``mmcv-full`` if you need this module. ') - - -def build_positional_encoding(cfg, default_args=None): - """Builder for Position Encoding.""" - return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args) - - -def build_attention(cfg, default_args=None): - """Builder for attention.""" - return build_from_cfg(cfg, ATTENTION, default_args) - - -def build_feedforward_network(cfg, default_args=None): - """Builder for feed-forward network (FFN).""" - return build_from_cfg(cfg, FEEDFORWARD_NETWORK, default_args) - - -def build_transformer_layer(cfg, default_args=None): - """Builder for transformer layer.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER, default_args) - - -def build_transformer_layer_sequence(cfg, default_args=None): - """Builder for transformer encoder and transformer decoder.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER_SEQUENCE, default_args) - - -@ATTENTION.register_module() -class MultiheadAttention(BaseModule): - """A wrapper for ``torch.nn.MultiheadAttention``. - - This module implements MultiheadAttention with identity connection, - and positional encoding is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): When it is True, Key, Query and Value are shape of - (batch, n, embed_dim), otherwise (n, batch, embed_dim). - Default to False. 
- """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=dict(type='Dropout', drop_prob=0.), - init_cfg=None, - batch_first=False, - **kwargs): - super(MultiheadAttention, self).__init__(init_cfg) - if 'dropout' in kwargs: - warnings.warn('The arguments `dropout` in MultiheadAttention ' - 'has been deprecated, now you can separately ' - 'set `attn_drop`(float), proj_drop(float), ' - 'and `dropout_layer`(dict) ') - attn_drop = kwargs['dropout'] - dropout_layer['drop_prob'] = kwargs.pop('dropout') - - self.embed_dims = embed_dims - self.num_heads = num_heads - self.batch_first = batch_first - - self.attn = nn.MultiheadAttention(embed_dims, num_heads, attn_drop, - **kwargs) - - self.proj_drop = nn.Dropout(proj_drop) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else nn.Identity() - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiheadAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `MultiheadAttention`. - - **kwargs allow passing a more general data flow when combining - with other operations in `transformerlayer`. - - Args: - query (Tensor): The input query with shape [num_queries, bs, - embed_dims] if self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - If None, the ``query`` will be used. Defaults to None. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Defaults to None. - If None, the `key` will be used. - identity (Tensor): This tensor, with the same shape as x, - will be used for the identity link. - If None, `x` will be used. Defaults to None. - query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. If not None, it will - be added to `x` before forward function. Defaults to None. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Defaults to None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. Defaults to None. - attn_mask (Tensor): ByteTensor mask with shape [num_queries, - num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys]. - Defaults to None. - - Returns: - Tensor: forwarded results with shape - [num_queries, bs, embed_dims] - if self.batch_first is False, else - [bs, num_queries embed_dims]. 
- """ - - if key is None: - key = query - if value is None: - value = key - if identity is None: - identity = query - if key_pos is None: - if query_pos is not None: - # use query_pos if key_pos is not available - if query_pos.shape == key.shape: - key_pos = query_pos - else: - warnings.warn(f'position encoding of key is' - f'missing in {self.__class__.__name__}.') - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = value.transpose(0, 1) - - out = self.attn( - query=query, - key=key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - -@FEEDFORWARD_NETWORK.register_module() -class FFN(BaseModule): - """Implements feed-forward networks (FFNs) with identity connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. Defaults: 256. - feedforward_channels (int): The hidden dimension of FFNs. - Defaults: 1024. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Default: 2. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='ReLU') - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - add_identity (bool, optional): Whether to add the - identity connection. Default: `True`. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - @deprecated_api_warning( - { - 'dropout': 'ffn_drop', - 'add_residual': 'add_identity' - }, - cls_name='FFN') - def __init__(self, - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - ffn_drop=0., - dropout_layer=None, - add_identity=True, - init_cfg=None, - **kwargs): - super(FFN, self).__init__(init_cfg) - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.activate = build_activation_layer(act_cfg) - - layers = [] - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(ffn_drop))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - layers.append(nn.Dropout(ffn_drop)) - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - self.add_identity = add_identity - - @deprecated_api_warning({'residual': 'identity'}, cls_name='FFN') - def forward(self, x, identity=None): - """Forward function for `FFN`. - - The function would add x to the output tensor if residue is None. 
- """ - out = self.layers(x) - if not self.add_identity: - return self.dropout_layer(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -@TRANSFORMER_LAYER.register_module() -class BaseTransformerLayer(BaseModule): - """Base `TransformerLayer` for vision transformer. - - It can be built from `mmcv.ConfigDict` and support more flexible - customization, for example, using any number of `FFN or LN ` and - use different kinds of `attention` by specifying a list of `ConfigDict` - named `attn_cfgs`. It is worth mentioning that it supports `prenorm` - when you specifying `norm` as the first element of `operation_order`. - More details about the `prenorm`: `On Layer Normalization in the - Transformer Architecture `_ . - - Args: - attn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for `self_attention` or `cross_attention` modules, - The order of the configs in the list should be consistent with - corresponding attentions in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. Default: None. - ffn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for FFN, The order of the configs in the list should be - consistent with corresponding ffn in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm'). - Support `prenorm` when you specifying first element as `norm`. - Default:None. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): Key, Query and Value are shape - of (batch, n, embed_dim) - or (n, batch, embed_dim). Default to False. - """ - - def __init__(self, - attn_cfgs=None, - ffn_cfgs=dict( - type='FFN', - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - ffn_drop=0., - act_cfg=dict(type='ReLU', inplace=True), - ), - operation_order=None, - norm_cfg=dict(type='LN'), - init_cfg=None, - batch_first=False, - **kwargs): - - deprecated_args = dict( - feedforward_channels='feedforward_channels', - ffn_dropout='ffn_drop', - ffn_num_fcs='num_fcs') - for ori_name, new_name in deprecated_args.items(): - if ori_name in kwargs: - warnings.warn( - f'The arguments `{ori_name}` in BaseTransformerLayer ' - f'has been deprecated, now you should set `{new_name}` ' - f'and other FFN related arguments ' - f'to a dict named `ffn_cfgs`. ') - ffn_cfgs[new_name] = kwargs[ori_name] - - super(BaseTransformerLayer, self).__init__(init_cfg) - - self.batch_first = batch_first - - assert set(operation_order) & set( - ['self_attn', 'norm', 'ffn', 'cross_attn']) == \ - set(operation_order), f'The operation_order of' \ - f' {self.__class__.__name__} should ' \ - f'contains all four operation type ' \ - f"{['self_attn', 'norm', 'ffn', 'cross_attn']}" - - num_attn = operation_order.count('self_attn') + operation_order.count( - 'cross_attn') - if isinstance(attn_cfgs, dict): - attn_cfgs = [copy.deepcopy(attn_cfgs) for _ in range(num_attn)] - else: - assert num_attn == len(attn_cfgs), f'The length ' \ - f'of attn_cfg {num_attn} is ' \ - f'not consistent with the number of attention' \ - f'in operation_order {operation_order}.' 
- - self.num_attn = num_attn - self.operation_order = operation_order - self.norm_cfg = norm_cfg - self.pre_norm = operation_order[0] == 'norm' - self.attentions = ModuleList() - - index = 0 - for operation_name in operation_order: - if operation_name in ['self_attn', 'cross_attn']: - if 'batch_first' in attn_cfgs[index]: - assert self.batch_first == attn_cfgs[index]['batch_first'] - else: - attn_cfgs[index]['batch_first'] = self.batch_first - attention = build_attention(attn_cfgs[index]) - # Some custom attentions used as `self_attn` - # or `cross_attn` can have different behavior. - attention.operation_name = operation_name - self.attentions.append(attention) - index += 1 - - self.embed_dims = self.attentions[0].embed_dims - - self.ffns = ModuleList() - num_ffns = operation_order.count('ffn') - if isinstance(ffn_cfgs, dict): - ffn_cfgs = ConfigDict(ffn_cfgs) - if isinstance(ffn_cfgs, dict): - ffn_cfgs = [copy.deepcopy(ffn_cfgs) for _ in range(num_ffns)] - assert len(ffn_cfgs) == num_ffns - for ffn_index in range(num_ffns): - if 'embed_dims' not in ffn_cfgs[ffn_index]: - ffn_cfgs['embed_dims'] = self.embed_dims - else: - assert ffn_cfgs[ffn_index]['embed_dims'] == self.embed_dims - self.ffns.append( - build_feedforward_network(ffn_cfgs[ffn_index], - dict(type='FFN'))) - - self.norms = ModuleList() - num_norms = operation_order.count('norm') - for _ in range(num_norms): - self.norms.append(build_norm_layer(norm_cfg, self.embed_dims)[1]) - - def forward(self, - query, - key=None, - value=None, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerDecoderLayer`. - - **kwargs contains some specific arguments of attentions. - - Args: - query (Tensor): The input query with shape - [num_queries, bs, embed_dims] if - self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - value (Tensor): The value tensor with same shape as `key`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. - attn_masks (List[Tensor] | None): 2D Tensor used in - calculation of corresponding attention. The length of - it should equal to the number of `attention` in - `operation_order`. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in `self_attn` layer. - Defaults to None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: forwarded results with shape [num_queries, bs, embed_dims]. 
- """ - - norm_index = 0 - attn_index = 0 - ffn_index = 0 - identity = query - if attn_masks is None: - attn_masks = [None for _ in range(self.num_attn)] - elif isinstance(attn_masks, torch.Tensor): - attn_masks = [ - copy.deepcopy(attn_masks) for _ in range(self.num_attn) - ] - warnings.warn(f'Use same attn_mask in all attentions in ' - f'{self.__class__.__name__} ') - else: - assert len(attn_masks) == self.num_attn, f'The length of ' \ - f'attn_masks {len(attn_masks)} must be equal ' \ - f'to the number of attention in ' \ - f'operation_order {self.num_attn}' - - for layer in self.operation_order: - if layer == 'self_attn': - temp_key = temp_value = query - query = self.attentions[attn_index]( - query, - temp_key, - temp_value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=query_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=query_key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'norm': - query = self.norms[norm_index](query) - norm_index += 1 - - elif layer == 'cross_attn': - query = self.attentions[attn_index]( - query, - key, - value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=key_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'ffn': - query = self.ffns[ffn_index]( - query, identity if self.pre_norm else None) - ffn_index += 1 - - return query - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class TransformerLayerSequence(BaseModule): - """Base class for TransformerEncoder and TransformerDecoder in vision - transformer. - - As base-class of Encoder and Decoder in vision transformer. - Support customization such as specifying different kind - of `transformer_layer` in `transformer_coder`. - - Args: - transformerlayer (list[obj:`mmcv.ConfigDict`] | - obj:`mmcv.ConfigDict`): Config of transformerlayer - in TransformerCoder. If it is obj:`mmcv.ConfigDict`, - it would be repeated `num_layer` times to a - list[`mmcv.ConfigDict`]. Default: None. - num_layers (int): The number of `TransformerLayer`. Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, transformerlayers=None, num_layers=None, init_cfg=None): - super(TransformerLayerSequence, self).__init__(init_cfg) - if isinstance(transformerlayers, dict): - transformerlayers = [ - copy.deepcopy(transformerlayers) for _ in range(num_layers) - ] - else: - assert isinstance(transformerlayers, list) and \ - len(transformerlayers) == num_layers - self.num_layers = num_layers - self.layers = ModuleList() - for i in range(num_layers): - self.layers.append(build_transformer_layer(transformerlayers[i])) - self.embed_dims = self.layers[0].embed_dims - self.pre_norm = self.layers[0].pre_norm - - def forward(self, - query, - key, - value, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerCoder`. - - Args: - query (Tensor): Input query with shape - `(num_queries, bs, embed_dims)`. - key (Tensor): The key tensor with shape - `(num_keys, bs, embed_dims)`. - value (Tensor): The value tensor with shape - `(num_keys, bs, embed_dims)`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. 
- attn_masks (List[Tensor], optional): Each element is 2D Tensor - which is used in calculation of corresponding attention in - operation_order. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in self-attention - Default: None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: results with shape [num_queries, bs, embed_dims]. - """ - for layer in self.layers: - query = layer( - query, - key, - value, - query_pos=query_pos, - key_pos=key_pos, - attn_masks=attn_masks, - query_key_padding_mask=query_key_padding_mask, - key_padding_mask=key_padding_mask, - **kwargs) - return query diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/upsample.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/upsample.py deleted file mode 100755 index a1a35376..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/upsample.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F - -from ..utils import xavier_init -from .registry import UPSAMPLE_LAYERS - -UPSAMPLE_LAYERS.register_module('nearest', module=nn.Upsample) -UPSAMPLE_LAYERS.register_module('bilinear', module=nn.Upsample) - - -@UPSAMPLE_LAYERS.register_module(name='pixel_shuffle') -class PixelShufflePack(nn.Module): - """Pixel Shuffle upsample layer. - - This module packs `F.pixel_shuffle()` and a nn.Conv2d module together to - achieve a simple upsampling with pixel shuffle. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Upsample ratio. - upsample_kernel (int): Kernel size of the conv layer to expand the - channels. - """ - - def __init__(self, in_channels, out_channels, scale_factor, - upsample_kernel): - super(PixelShufflePack, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.scale_factor = scale_factor - self.upsample_kernel = upsample_kernel - self.upsample_conv = nn.Conv2d( - self.in_channels, - self.out_channels * scale_factor * scale_factor, - self.upsample_kernel, - padding=(self.upsample_kernel - 1) // 2) - self.init_weights() - - def init_weights(self): - xavier_init(self.upsample_conv, distribution='uniform') - - def forward(self, x): - x = self.upsample_conv(x) - x = F.pixel_shuffle(x, self.scale_factor) - return x - - -def build_upsample_layer(cfg, *args, **kwargs): - """Build upsample layer. - - Args: - cfg (dict): The upsample layer config, which should contain: - - - type (str): Layer type. - - scale_factor (int): Upsample ratio, which is not applicable to - deconv. - - layer args: Args needed to instantiate a upsample layer. - args (argument list): Arguments passed to the ``__init__`` - method of the corresponding conv layer. - kwargs (keyword arguments): Keyword arguments passed to the - ``__init__`` method of the corresponding conv layer. - - Returns: - nn.Module: Created upsample layer. 
- """ - if not isinstance(cfg, dict): - raise TypeError(f'cfg must be a dict, but got {type(cfg)}') - if 'type' not in cfg: - raise KeyError( - f'the cfg dict must contain the key "type", but got {cfg}') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in UPSAMPLE_LAYERS: - raise KeyError(f'Unrecognized upsample type {layer_type}') - else: - upsample = UPSAMPLE_LAYERS.get(layer_type) - - if upsample is nn.Upsample: - cfg_['mode'] = layer_type - layer = upsample(*args, **kwargs, **cfg_) - return layer diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/wrappers.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/wrappers.py deleted file mode 100755 index 8aebf67b..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/bricks/wrappers.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -r"""Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/wrappers.py # noqa: E501 - -Wrap some nn modules to support empty tensor input. Currently, these wrappers -are mainly used in mask heads like fcn_mask_head and maskiou_heads since mask -heads are trained on only positive RoIs. -""" -import math - -import torch -import torch.nn as nn -from torch.nn.modules.utils import _pair, _triple - -from .registry import CONV_LAYERS, UPSAMPLE_LAYERS - -if torch.__version__ == 'parrots': - TORCH_VERSION = torch.__version__ -else: - # torch.__version__ could be 1.3.1+cu92, we only need the first two - # for comparison - TORCH_VERSION = tuple(int(x) for x in torch.__version__.split('.')[:2]) - - -def obsolete_torch_version(torch_version, version_threshold): - return torch_version == 'parrots' or torch_version <= version_threshold - - -class NewEmptyTensorOp(torch.autograd.Function): - - @staticmethod - def forward(ctx, x, new_shape): - ctx.shape = x.shape - return x.new_empty(new_shape) - - @staticmethod - def backward(ctx, grad): - shape = ctx.shape - return NewEmptyTensorOp.apply(grad, shape), None - - -@CONV_LAYERS.register_module('Conv', force=True) -class Conv2d(nn.Conv2d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d in zip(x.shape[-2:], self.kernel_size, - self.padding, self.stride, self.dilation): - o = (i + 2 * p - (d * (k - 1) + 1)) // s + 1 - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. - dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -@CONV_LAYERS.register_module('Conv3d', force=True) -class Conv3d(nn.Conv3d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d in zip(x.shape[-3:], self.kernel_size, - self.padding, self.stride, self.dilation): - o = (i + 2 * p - (d * (k - 1) + 1)) // s + 1 - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. 
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -@CONV_LAYERS.register_module() -@CONV_LAYERS.register_module('deconv') -@UPSAMPLE_LAYERS.register_module('deconv', force=True) -class ConvTranspose2d(nn.ConvTranspose2d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d, op in zip(x.shape[-2:], self.kernel_size, - self.padding, self.stride, - self.dilation, self.output_padding): - out_shape.append((i - 1) * s - 2 * p + (d * (k - 1) + 1) + op) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. - dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -@CONV_LAYERS.register_module() -@CONV_LAYERS.register_module('deconv3d') -@UPSAMPLE_LAYERS.register_module('deconv3d', force=True) -class ConvTranspose3d(nn.ConvTranspose3d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d, op in zip(x.shape[-3:], self.kernel_size, - self.padding, self.stride, - self.dilation, self.output_padding): - out_shape.append((i - 1) * s - 2 * p + (d * (k - 1) + 1) + op) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. - dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -class MaxPool2d(nn.MaxPool2d): - - def forward(self, x): - # PyTorch 1.9 does not support empty tensor inference yet - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)): - out_shape = list(x.shape[:2]) - for i, k, p, s, d in zip(x.shape[-2:], _pair(self.kernel_size), - _pair(self.padding), _pair(self.stride), - _pair(self.dilation)): - o = (i + 2 * p - (d * (k - 1) + 1)) / s + 1 - o = math.ceil(o) if self.ceil_mode else math.floor(o) - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - return empty - - return super().forward(x) - - -class MaxPool3d(nn.MaxPool3d): - - def forward(self, x): - # PyTorch 1.9 does not support empty tensor inference yet - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)): - out_shape = list(x.shape[:2]) - for i, k, p, s, d in zip(x.shape[-3:], _triple(self.kernel_size), - _triple(self.padding), - _triple(self.stride), - _triple(self.dilation)): - o = (i + 2 * p - (d * (k - 1) + 1)) / s + 1 - o = math.ceil(o) if self.ceil_mode else math.floor(o) - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - return empty - - return super().forward(x) - - -class Linear(torch.nn.Linear): - - def forward(self, x): - # empty tensor forward of Linear layer is supported in Pytorch 1.6 - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 5)): - out_shape = [x.shape[0], self.out_features] - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. 
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/builder.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/builder.py deleted file mode 100755 index 7567316c..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/builder.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..runner import Sequential -from ..utils import Registry, build_from_cfg - - -def build_model_from_cfg(cfg, registry, default_args=None): - """Build a PyTorch model from config dict(s). Different from - ``build_from_cfg``, if cfg is a list, a ``nn.Sequential`` will be built. - - Args: - cfg (dict, list[dict]): The config of modules, is is either a config - dict or a list of config dicts. If cfg is a list, a - the built modules will be wrapped with ``nn.Sequential``. - registry (:obj:`Registry`): A registry the module belongs to. - default_args (dict, optional): Default arguments to build the module. - Defaults to None. - - Returns: - nn.Module: A built nn module. - """ - if isinstance(cfg, list): - modules = [ - build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg - ] - return Sequential(*modules) - else: - return build_from_cfg(cfg, registry, default_args) - - -MODELS = Registry('model', build_func=build_model_from_cfg) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/resnet.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/resnet.py deleted file mode 100755 index 1cb3ac05..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/resnet.py +++ /dev/null @@ -1,316 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn -import torch.utils.checkpoint as cp - -from .utils import constant_init, kaiming_init - - -def conv3x3(in_planes, out_planes, stride=1, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - super(BasicBlock, self).__init__() - assert style in ['pytorch', 'caffe'] - self.conv1 = conv3x3(inplanes, planes, stride, dilation) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - assert not with_cp - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - """Bottleneck block. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - super(Bottleneck, self).__init__() - assert style in ['pytorch', 'caffe'] - if style == 'pytorch': - conv1_stride = 1 - conv2_stride = stride - else: - conv1_stride = stride - conv2_stride = 1 - self.conv1 = nn.Conv2d( - inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - stride=conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.bn1 = nn.BatchNorm2d(planes) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - def forward(self, x): - - def _inner_forward(x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -def make_res_layer(block, - inplanes, - planes, - blocks, - stride=1, - dilation=1, - style='pytorch', - with_cp=False): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append( - block( - inplanes, - planes, - stride, - dilation, - downsample, - style=style, - with_cp=with_cp)) - inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp)) - - return nn.Sequential(*layers) - - -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - num_stages (int): Resnet stages, normally 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - with_cp=False): - super(ResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - assert num_stages >= 1 and num_stages <= 4 - block, stage_blocks = self.arch_settings[depth] - stage_blocks = stage_blocks[:num_stages] - assert len(strides) == len(dilations) == num_stages - assert max(out_indices) < num_stages - - self.out_indices = out_indices - self.style = style - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - self.with_cp = with_cp - - self.inplanes = 64 - self.conv1 = nn.Conv2d( - 3, 64, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.res_layers = [] - for i, num_blocks in enumerate(stage_blocks): - stride = strides[i] - dilation = dilations[i] - planes = 64 * 2**i - res_layer = make_res_layer( - block, - self.inplanes, - planes, - num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - with_cp=with_cp) - self.inplanes = planes * block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self.feat_dim = block.expansion * 64 * 2**(len(stage_blocks) - 1) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(ResNet, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - if mode and self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for param in self.bn1.parameters(): - param.requires_grad = False - self.bn1.eval() - self.bn1.weight.requires_grad = False - self.bn1.bias.requires_grad = False - for i in range(1, self.frozen_stages + 1): - mod = getattr(self, f'layer{i}') - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/__init__.py deleted file mode 100755 index a263e31c..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .flops_counter import get_model_complexity_info -from .fuse_conv_bn import fuse_conv_bn -from .sync_bn import revert_sync_batchnorm -from .weight_init import (INITIALIZERS, Caffe2XavierInit, ConstantInit, - KaimingInit, NormalInit, PretrainedInit, - TruncNormalInit, UniformInit, XavierInit, - bias_init_with_prob, caffe2_xavier_init, - constant_init, initialize, kaiming_init, normal_init, - trunc_normal_init, uniform_init, xavier_init) - -__all__ = [ - 'get_model_complexity_info', 'bias_init_with_prob', 'caffe2_xavier_init', - 'constant_init', 'kaiming_init', 'normal_init', 'trunc_normal_init', - 'uniform_init', 'xavier_init', 'fuse_conv_bn', 'initialize', - 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit', - 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit', - 'Caffe2XavierInit', 'revert_sync_batchnorm' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/flops_counter.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/flops_counter.py deleted file mode 100755 index dceeb398..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/flops_counter.py +++ /dev/null @@ -1,599 +0,0 @@ -# Modified from flops-counter.pytorch by Vladislav Sovrasov -# original repo: https://github.com/sovrasov/flops-counter.pytorch - -# MIT License - -# Copyright (c) 2018 Vladislav Sovrasov - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in -# all copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -import sys -from functools import partial - -import numpy as np -import torch -import torch.nn as nn - -import mmcv - - -def get_model_complexity_info(model, - input_shape, - print_per_layer_stat=True, - as_strings=True, - input_constructor=None, - flush=False, - ost=sys.stdout): - """Get complexity information of a model. - - This method can calculate FLOPs and parameter counts of a model with - corresponding input shape. It can also print complexity information for - each layer in a model. - - Supported layers are listed as below: - - Convolutions: ``nn.Conv1d``, ``nn.Conv2d``, ``nn.Conv3d``. - - Activations: ``nn.ReLU``, ``nn.PReLU``, ``nn.ELU``, ``nn.LeakyReLU``, - ``nn.ReLU6``. - - Poolings: ``nn.MaxPool1d``, ``nn.MaxPool2d``, ``nn.MaxPool3d``, - ``nn.AvgPool1d``, ``nn.AvgPool2d``, ``nn.AvgPool3d``, - ``nn.AdaptiveMaxPool1d``, ``nn.AdaptiveMaxPool2d``, - ``nn.AdaptiveMaxPool3d``, ``nn.AdaptiveAvgPool1d``, - ``nn.AdaptiveAvgPool2d``, ``nn.AdaptiveAvgPool3d``. 
- - BatchNorms: ``nn.BatchNorm1d``, ``nn.BatchNorm2d``, - ``nn.BatchNorm3d``, ``nn.GroupNorm``, ``nn.InstanceNorm1d``, - ``InstanceNorm2d``, ``InstanceNorm3d``, ``nn.LayerNorm``. - - Linear: ``nn.Linear``. - - Deconvolution: ``nn.ConvTranspose2d``. - - Upsample: ``nn.Upsample``. - - Args: - model (nn.Module): The model for complexity calculation. - input_shape (tuple): Input shape used for calculation. - print_per_layer_stat (bool): Whether to print complexity information - for each layer in a model. Default: True. - as_strings (bool): Output FLOPs and params counts in a string form. - Default: True. - input_constructor (None | callable): If specified, it takes a callable - method that generates input. otherwise, it will generate a random - tensor with input shape to calculate FLOPs. Default: None. - flush (bool): same as that in :func:`print`. Default: False. - ost (stream): same as ``file`` param in :func:`print`. - Default: sys.stdout. - - Returns: - tuple[float | str]: If ``as_strings`` is set to True, it will return - FLOPs and parameter counts in a string format. otherwise, it will - return those in a float number format. - """ - assert type(input_shape) is tuple - assert len(input_shape) >= 1 - assert isinstance(model, nn.Module) - flops_model = add_flops_counting_methods(model) - flops_model.eval() - flops_model.start_flops_count() - if input_constructor: - input = input_constructor(input_shape) - _ = flops_model(**input) - else: - try: - batch = torch.ones(()).new_empty( - (1, *input_shape), - dtype=next(flops_model.parameters()).dtype, - device=next(flops_model.parameters()).device) - except StopIteration: - # Avoid StopIteration for models which have no parameters, - # like `nn.Relu()`, `nn.AvgPool2d`, etc. - batch = torch.ones(()).new_empty((1, *input_shape)) - - _ = flops_model(batch) - - flops_count, params_count = flops_model.compute_average_flops_cost() - if print_per_layer_stat: - print_model_with_flops( - flops_model, flops_count, params_count, ost=ost, flush=flush) - flops_model.stop_flops_count() - - if as_strings: - return flops_to_string(flops_count), params_to_string(params_count) - - return flops_count, params_count - - -def flops_to_string(flops, units='GFLOPs', precision=2): - """Convert FLOPs number into a string. - - Note that Here we take a multiply-add counts as one FLOP. - - Args: - flops (float): FLOPs number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'GFLOPs', - 'MFLOPs', 'KFLOPs', 'FLOPs'. If set to None, it will automatically - choose the most suitable unit for FLOPs. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted FLOPs number with units. 
- - Examples: - >>> flops_to_string(1e9) - '1.0 GFLOPs' - >>> flops_to_string(2e5, 'MFLOPs') - '0.2 MFLOPs' - >>> flops_to_string(3e-9, None) - '3e-09 FLOPs' - """ - if units is None: - if flops // 10**9 > 0: - return str(round(flops / 10.**9, precision)) + ' GFLOPs' - elif flops // 10**6 > 0: - return str(round(flops / 10.**6, precision)) + ' MFLOPs' - elif flops // 10**3 > 0: - return str(round(flops / 10.**3, precision)) + ' KFLOPs' - else: - return str(flops) + ' FLOPs' - else: - if units == 'GFLOPs': - return str(round(flops / 10.**9, precision)) + ' ' + units - elif units == 'MFLOPs': - return str(round(flops / 10.**6, precision)) + ' ' + units - elif units == 'KFLOPs': - return str(round(flops / 10.**3, precision)) + ' ' + units - else: - return str(flops) + ' FLOPs' - - -def params_to_string(num_params, units=None, precision=2): - """Convert parameter number into a string. - - Args: - num_params (float): Parameter number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'M', - 'K' and ''. If set to None, it will automatically choose the most - suitable unit for Parameter number. Default: None. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted parameter number with units. - - Examples: - >>> params_to_string(1e9) - '1000.0 M' - >>> params_to_string(2e5) - '200.0 k' - >>> params_to_string(3e-9) - '3e-09' - """ - if units is None: - if num_params // 10**6 > 0: - return str(round(num_params / 10**6, precision)) + ' M' - elif num_params // 10**3: - return str(round(num_params / 10**3, precision)) + ' k' - else: - return str(num_params) - else: - if units == 'M': - return str(round(num_params / 10.**6, precision)) + ' ' + units - elif units == 'K': - return str(round(num_params / 10.**3, precision)) + ' ' + units - else: - return str(num_params) - - -def print_model_with_flops(model, - total_flops, - total_params, - units='GFLOPs', - precision=3, - ost=sys.stdout, - flush=False): - """Print a model with FLOPs for each layer. - - Args: - model (nn.Module): The model to be printed. - total_flops (float): Total FLOPs of the model. - total_params (float): Total parameter counts of the model. - units (str | None): Converted FLOPs units. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 3. - ost (stream): same as `file` param in :func:`print`. - Default: sys.stdout. - flush (bool): same as that in :func:`print`. Default: False. 
- - Example: - >>> class ExampleModel(nn.Module): - - >>> def __init__(self): - >>> super().__init__() - >>> self.conv1 = nn.Conv2d(3, 8, 3) - >>> self.conv2 = nn.Conv2d(8, 256, 3) - >>> self.conv3 = nn.Conv2d(256, 8, 3) - >>> self.avg_pool = nn.AdaptiveAvgPool2d((1, 1)) - >>> self.flatten = nn.Flatten() - >>> self.fc = nn.Linear(8, 1) - - >>> def forward(self, x): - >>> x = self.conv1(x) - >>> x = self.conv2(x) - >>> x = self.conv3(x) - >>> x = self.avg_pool(x) - >>> x = self.flatten(x) - >>> x = self.fc(x) - >>> return x - - >>> model = ExampleModel() - >>> x = (3, 16, 16) - to print the complexity information state for each layer, you can use - >>> get_model_complexity_info(model, x) - or directly use - >>> print_model_with_flops(model, 4579784.0, 37361) - ExampleModel( - 0.037 M, 100.000% Params, 0.005 GFLOPs, 100.000% FLOPs, - (conv1): Conv2d(0.0 M, 0.600% Params, 0.0 GFLOPs, 0.959% FLOPs, 3, 8, kernel_size=(3, 3), stride=(1, 1)) # noqa: E501 - (conv2): Conv2d(0.019 M, 50.020% Params, 0.003 GFLOPs, 58.760% FLOPs, 8, 256, kernel_size=(3, 3), stride=(1, 1)) - (conv3): Conv2d(0.018 M, 49.356% Params, 0.002 GFLOPs, 40.264% FLOPs, 256, 8, kernel_size=(3, 3), stride=(1, 1)) - (avg_pool): AdaptiveAvgPool2d(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.017% FLOPs, output_size=(1, 1)) - (flatten): Flatten(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.000% FLOPs, ) - (fc): Linear(0.0 M, 0.024% Params, 0.0 GFLOPs, 0.000% FLOPs, in_features=8, out_features=1, bias=True) - ) - """ - - def accumulate_params(self): - if is_supported_instance(self): - return self.__params__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_params() - return sum - - def accumulate_flops(self): - if is_supported_instance(self): - return self.__flops__ / model.__batch_counter__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_flops() - return sum - - def flops_repr(self): - accumulated_num_params = self.accumulate_params() - accumulated_flops_cost = self.accumulate_flops() - return ', '.join([ - params_to_string( - accumulated_num_params, units='M', precision=precision), - '{:.3%} Params'.format(accumulated_num_params / total_params), - flops_to_string( - accumulated_flops_cost, units=units, precision=precision), - '{:.3%} FLOPs'.format(accumulated_flops_cost / total_flops), - self.original_extra_repr() - ]) - - def add_extra_repr(m): - m.accumulate_flops = accumulate_flops.__get__(m) - m.accumulate_params = accumulate_params.__get__(m) - flops_extra_repr = flops_repr.__get__(m) - if m.extra_repr != flops_extra_repr: - m.original_extra_repr = m.extra_repr - m.extra_repr = flops_extra_repr - assert m.extra_repr != m.original_extra_repr - - def del_extra_repr(m): - if hasattr(m, 'original_extra_repr'): - m.extra_repr = m.original_extra_repr - del m.original_extra_repr - if hasattr(m, 'accumulate_flops'): - del m.accumulate_flops - - model.apply(add_extra_repr) - print(model, file=ost, flush=flush) - model.apply(del_extra_repr) - - -def get_model_parameters_number(model): - """Calculate parameter number of a model. - - Args: - model (nn.module): The model for parameter number calculation. - - Returns: - float: Parameter number of the model. 
- """ - num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) - return num_params - - -def add_flops_counting_methods(net_main_module): - # adding additional methods to the existing module object, - # this is done this way so that each function has access to self object - net_main_module.start_flops_count = start_flops_count.__get__( - net_main_module) - net_main_module.stop_flops_count = stop_flops_count.__get__( - net_main_module) - net_main_module.reset_flops_count = reset_flops_count.__get__( - net_main_module) - net_main_module.compute_average_flops_cost = compute_average_flops_cost.__get__( # noqa: E501 - net_main_module) - - net_main_module.reset_flops_count() - - return net_main_module - - -def compute_average_flops_cost(self): - """Compute average FLOPs cost. - - A method to compute average FLOPs cost, which will be available after - `add_flops_counting_methods()` is called on a desired net object. - - Returns: - float: Current mean flops consumption per image. - """ - batches_count = self.__batch_counter__ - flops_sum = 0 - for module in self.modules(): - if is_supported_instance(module): - flops_sum += module.__flops__ - params_sum = get_model_parameters_number(self) - return flops_sum / batches_count, params_sum - - -def start_flops_count(self): - """Activate the computation of mean flops consumption per image. - - A method to activate the computation of mean flops consumption per image. - which will be available after ``add_flops_counting_methods()`` is called on - a desired net object. It should be called before running the network. - """ - add_batch_counter_hook_function(self) - - def add_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - return - - else: - handle = module.register_forward_hook( - get_modules_mapping()[type(module)]) - - module.__flops_handle__ = handle - - self.apply(partial(add_flops_counter_hook_function)) - - -def stop_flops_count(self): - """Stop computing the mean flops consumption per image. - - A method to stop computing the mean flops consumption per image, which will - be available after ``add_flops_counting_methods()`` is called on a desired - net object. It can be called to pause the computation whenever. - """ - remove_batch_counter_hook_function(self) - self.apply(remove_flops_counter_hook_function) - - -def reset_flops_count(self): - """Reset statistics computed so far. - - A method to Reset computed statistics, which will be available after - `add_flops_counting_methods()` is called on a desired net object. 
- """ - add_batch_counter_variables_or_reset(self) - self.apply(add_flops_counter_variable_or_reset) - - -# ---- Internal functions -def empty_flops_counter_hook(module, input, output): - module.__flops__ += 0 - - -def upsample_flops_counter_hook(module, input, output): - output_size = output[0] - batch_size = output_size.shape[0] - output_elements_count = batch_size - for val in output_size.shape[1:]: - output_elements_count *= val - module.__flops__ += int(output_elements_count) - - -def relu_flops_counter_hook(module, input, output): - active_elements_count = output.numel() - module.__flops__ += int(active_elements_count) - - -def linear_flops_counter_hook(module, input, output): - input = input[0] - output_last_dim = output.shape[ - -1] # pytorch checks dimensions, so here we don't care much - module.__flops__ += int(np.prod(input.shape) * output_last_dim) - - -def pool_flops_counter_hook(module, input, output): - input = input[0] - module.__flops__ += int(np.prod(input.shape)) - - -def norm_flops_counter_hook(module, input, output): - input = input[0] - - batch_flops = np.prod(input.shape) - if (getattr(module, 'affine', False) - or getattr(module, 'elementwise_affine', False)): - batch_flops *= 2 - module.__flops__ += int(batch_flops) - - -def deconv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - input_height, input_width = input.shape[2:] - - kernel_height, kernel_width = conv_module.kernel_size - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = ( - kernel_height * kernel_width * in_channels * filters_per_channel) - - active_elements_count = batch_size * input_height * input_width - overall_conv_flops = conv_per_position_flops * active_elements_count - bias_flops = 0 - if conv_module.bias is not None: - output_height, output_width = output.shape[2:] - bias_flops = out_channels * batch_size * output_height * output_height - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def conv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - output_dims = list(output.shape[2:]) - - kernel_dims = list(conv_module.kernel_size) - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = int( - np.prod(kernel_dims)) * in_channels * filters_per_channel - - active_elements_count = batch_size * int(np.prod(output_dims)) - - overall_conv_flops = conv_per_position_flops * active_elements_count - - bias_flops = 0 - - if conv_module.bias is not None: - - bias_flops = out_channels * active_elements_count - - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def batch_counter_hook(module, input, output): - batch_size = 1 - if len(input) > 0: - # Can have multiple inputs, getting the first one - input = input[0] - batch_size = len(input) - else: - pass - print('Warning! 
No positional inputs found for a module, ' - 'assuming batch size is 1.') - module.__batch_counter__ += batch_size - - -def add_batch_counter_variables_or_reset(module): - - module.__batch_counter__ = 0 - - -def add_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - return - - handle = module.register_forward_hook(batch_counter_hook) - module.__batch_counter_handle__ = handle - - -def remove_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - module.__batch_counter_handle__.remove() - del module.__batch_counter_handle__ - - -def add_flops_counter_variable_or_reset(module): - if is_supported_instance(module): - if hasattr(module, '__flops__') or hasattr(module, '__params__'): - print('Warning: variables __flops__ or __params__ are already ' - 'defined for the module' + type(module).__name__ + - ' ptflops can affect your code!') - module.__flops__ = 0 - module.__params__ = get_model_parameters_number(module) - - -def is_supported_instance(module): - if type(module) in get_modules_mapping(): - return True - return False - - -def remove_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - module.__flops_handle__.remove() - del module.__flops_handle__ - - -def get_modules_mapping(): - return { - # convolutions - nn.Conv1d: conv_flops_counter_hook, - nn.Conv2d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv2d: conv_flops_counter_hook, - nn.Conv3d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv3d: conv_flops_counter_hook, - # activations - nn.ReLU: relu_flops_counter_hook, - nn.PReLU: relu_flops_counter_hook, - nn.ELU: relu_flops_counter_hook, - nn.LeakyReLU: relu_flops_counter_hook, - nn.ReLU6: relu_flops_counter_hook, - # poolings - nn.MaxPool1d: pool_flops_counter_hook, - nn.AvgPool1d: pool_flops_counter_hook, - nn.AvgPool2d: pool_flops_counter_hook, - nn.MaxPool2d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool2d: pool_flops_counter_hook, - nn.MaxPool3d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool3d: pool_flops_counter_hook, - nn.AvgPool3d: pool_flops_counter_hook, - nn.AdaptiveMaxPool1d: pool_flops_counter_hook, - nn.AdaptiveAvgPool1d: pool_flops_counter_hook, - nn.AdaptiveMaxPool2d: pool_flops_counter_hook, - nn.AdaptiveAvgPool2d: pool_flops_counter_hook, - nn.AdaptiveMaxPool3d: pool_flops_counter_hook, - nn.AdaptiveAvgPool3d: pool_flops_counter_hook, - # normalizations - nn.BatchNorm1d: norm_flops_counter_hook, - nn.BatchNorm2d: norm_flops_counter_hook, - nn.BatchNorm3d: norm_flops_counter_hook, - nn.GroupNorm: norm_flops_counter_hook, - nn.InstanceNorm1d: norm_flops_counter_hook, - nn.InstanceNorm2d: norm_flops_counter_hook, - nn.InstanceNorm3d: norm_flops_counter_hook, - nn.LayerNorm: norm_flops_counter_hook, - # FC - nn.Linear: linear_flops_counter_hook, - mmcv.cnn.bricks.Linear: linear_flops_counter_hook, - # Upscale - nn.Upsample: upsample_flops_counter_hook, - # Deconvolution - nn.ConvTranspose2d: deconv_flops_counter_hook, - mmcv.cnn.bricks.ConvTranspose2d: deconv_flops_counter_hook, - } diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/fuse_conv_bn.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/fuse_conv_bn.py deleted file mode 100755 index cb7076f8..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/fuse_conv_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
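# A minimal sketch (made-up layer shapes) hand-checking the formula that
# conv_flops_counter_hook above applies: per output position it counts
# k_h * k_w * C_in * (C_out // groups) operations, plus one add per output
# element when a bias is present.
per_position = 3 * 3 * 64 * (128 // 1)        # 3x3 conv, 64 -> 128 channels, groups=1
active_positions = 1 * 56 * 56                # batch 1, 56x56 output feature map
conv_flops = per_position * active_positions  # 231,211,008
bias_flops = 128 * active_positions           # 401,408
assert conv_flops + bias_flops == 231_612_416
# In mmcv 1.x these hooks are normally driven through the public helper
# mmcv.cnn.get_model_complexity_info(model, (3, 224, 224)) rather than directly.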
-import torch -import torch.nn as nn - - -def _fuse_conv_bn(conv, bn): - """Fuse conv and bn into one module. - - Args: - conv (nn.Module): Conv to be fused. - bn (nn.Module): BN to be fused. - - Returns: - nn.Module: Fused module. - """ - conv_w = conv.weight - conv_b = conv.bias if conv.bias is not None else torch.zeros_like( - bn.running_mean) - - factor = bn.weight / torch.sqrt(bn.running_var + bn.eps) - conv.weight = nn.Parameter(conv_w * - factor.reshape([conv.out_channels, 1, 1, 1])) - conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias) - return conv - - -def fuse_conv_bn(module): - """Recursively fuse conv and bn in a module. - - During inference, the functionary of batch norm layers is turned off - but only the mean and var alone channels are used, which exposes the - chance to fuse it with the preceding conv layers to save computations and - simplify network structures. - - Args: - module (nn.Module): Module to be fused. - - Returns: - nn.Module: Fused module. - """ - last_conv = None - last_conv_name = None - - for name, child in module.named_children(): - if isinstance(child, - (nn.modules.batchnorm._BatchNorm, nn.SyncBatchNorm)): - if last_conv is None: # only fuse BN that is after Conv - continue - fused_conv = _fuse_conv_bn(last_conv, child) - module._modules[last_conv_name] = fused_conv - # To reduce changes, set BN as Identity instead of deleting it. - module._modules[name] = nn.Identity() - last_conv = None - elif isinstance(child, nn.Conv2d): - last_conv = child - last_conv_name = name - else: - fuse_conv_bn(child) - return module diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/sync_bn.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/sync_bn.py deleted file mode 100755 index 8a79ff4a..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/sync_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch - -import mmcv - - -class _BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): - """A general BatchNorm layer without input dimension check. - - Reproduced from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - is `_check_input_dim` that is designed for tensor sanity checks. - The check has been bypassed in this class for the convenience of converting - SyncBatchNorm. - """ - - def _check_input_dim(self, input): - return - - -def revert_sync_batchnorm(module): - """Helper function to convert all `SyncBatchNorm` (SyncBN) and - `mmcv.ops.sync_bn.SyncBatchNorm`(MMSyncBN) layers in the model to - `BatchNormXd` layers. - - Adapted from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - - Args: - module (nn.Module): The module containing `SyncBatchNorm` layers. - - Returns: - module_output: The converted module with `BatchNormXd` layers. 
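# A minimal plain-PyTorch sketch (arbitrary layer sizes) checking the Conv+BN
# folding that _fuse_conv_bn above performs: folding the frozen BN statistics
# into the conv must leave eval-mode outputs unchanged.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, 3, bias=False).eval()
bn = nn.BatchNorm2d(8).eval()
x = torch.rand(1, 3, 16, 16)
reference = bn(conv(x))

fused = nn.Conv2d(3, 8, 3, bias=True).eval()
factor = bn.weight / torch.sqrt(bn.running_var + bn.eps)
with torch.no_grad():
    fused.weight.copy_(conv.weight * factor.reshape(8, 1, 1, 1))
    fused.bias.copy_((torch.zeros(8) - bn.running_mean) * factor + bn.bias)

assert torch.allclose(fused(x), reference, atol=1e-6)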
- """ - module_output = module - module_checklist = [torch.nn.modules.batchnorm.SyncBatchNorm] - if hasattr(mmcv, 'ops'): - module_checklist.append(mmcv.ops.SyncBatchNorm) - if isinstance(module, tuple(module_checklist)): - module_output = _BatchNormXd(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - # no_grad() may not be needed here but - # just to be consistent with `convert_sync_batchnorm()` - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - module_output.training = module.training - # qconfig exists in quantized models - if hasattr(module, 'qconfig'): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/weight_init.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/weight_init.py deleted file mode 100755 index e1ac999e..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/utils/weight_init.py +++ /dev/null @@ -1,684 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import math -import warnings - -import numpy as np -import torch -import torch.nn as nn -from torch import Tensor - -from mmcv.utils import Registry, build_from_cfg, get_logger, print_log - -INITIALIZERS = Registry('initializer') - - -def update_init_info(module, init_info): - """Update the `_params_init_info` in the module if the value of parameters - are changed. - - Args: - module (obj:`nn.Module`): The module of PyTorch with a user-defined - attribute `_params_init_info` which records the initialization - information. - init_info (str): The string that describes the initialization. - """ - assert hasattr( - module, - '_params_init_info'), f'Can not find `_params_init_info` in {module}' - for name, param in module.named_parameters(): - - assert param in module._params_init_info, ( - f'Find a new :obj:`Parameter` ' - f'named `{name}` during executing the ' - f'`init_weights` of ' - f'`{module.__class__.__name__}`. ' - f'Please do not add or ' - f'replace parameters during executing ' - f'the `init_weights`. 
') - - # The parameter has been changed during executing the - # `init_weights` of module - mean_value = param.data.mean() - if module._params_init_info[param]['tmp_mean_value'] != mean_value: - module._params_init_info[param]['init_info'] = init_info - module._params_init_info[param]['tmp_mean_value'] = mean_value - - -def constant_init(module, val, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.constant_(module.weight, val) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def xavier_init(module, gain=1, bias=0, distribution='normal'): - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.xavier_uniform_(module.weight, gain=gain) - else: - nn.init.xavier_normal_(module.weight, gain=gain) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def normal_init(module, mean=0, std=1, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.normal_(module.weight, mean, std) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def trunc_normal_init(module: nn.Module, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - bias: float = 0) -> None: - if hasattr(module, 'weight') and module.weight is not None: - trunc_normal_(module.weight, mean, std, a, b) # type: ignore - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) # type: ignore - - -def uniform_init(module, a=0, b=1, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.uniform_(module.weight, a, b) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def kaiming_init(module, - a=0, - mode='fan_out', - nonlinearity='relu', - bias=0, - distribution='normal'): - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.kaiming_uniform_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - else: - nn.init.kaiming_normal_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def caffe2_xavier_init(module, bias=0): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - kaiming_init( - module, - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - bias=bias, - distribution='uniform') - - -def bias_init_with_prob(prior_prob): - """initialize conv/fc bias value according to a given probability value.""" - bias_init = float(-np.log((1 - prior_prob) / prior_prob)) - return bias_init - - -def _get_bases_name(m): - return [b.__name__ for b in m.__class__.__bases__] - - -class BaseInit(object): - - def __init__(self, *, bias=0, bias_prob=None, layer=None): - self.wholemodule = False - if not isinstance(bias, (int, float)): - raise TypeError(f'bias must be a number, but got a {type(bias)}') - - if bias_prob is not None: - if not isinstance(bias_prob, float): - raise TypeError(f'bias_prob type must be float, \ - but got {type(bias_prob)}') - - if layer is not None: - if not isinstance(layer, (str, list)): - raise TypeError(f'layer must be a str or a list of str, \ - but got a {type(layer)}') - else: - layer = [] - - if bias_prob is not None: - self.bias 
= bias_init_with_prob(bias_prob) - else: - self.bias = bias - self.layer = [layer] if isinstance(layer, str) else layer - - def _get_init_info(self): - info = f'{self.__class__.__name__}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Constant') -class ConstantInit(BaseInit): - """Initialize module parameters with constant values. - - Args: - val (int | float): the value to fill the weights in the module with - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, val, **kwargs): - super().__init__(**kwargs) - self.val = val - - def __call__(self, module): - - def init(m): - if self.wholemodule: - constant_init(m, self.val, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - constant_init(m, self.val, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: val={self.val}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Xavier') -class XavierInit(BaseInit): - r"""Initialize module parameters with values according to the method - described in `Understanding the difficulty of training deep feedforward - neural networks - Glorot, X. & Bengio, Y. (2010). - `_ - - Args: - gain (int | float): an optional scaling factor. Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` - or ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, gain=1, distribution='normal', **kwargs): - super().__init__(**kwargs) - self.gain = gain - self.distribution = distribution - - def __call__(self, module): - - def init(m): - if self.wholemodule: - xavier_init(m, self.gain, self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - xavier_init(m, self.gain, self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: gain={self.gain}, ' \ - f'distribution={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Normal') -class NormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`. - - Args: - mean (int | float):the mean of the normal distribution. Defaults to 0. - std (int | float): the standard deviation of the normal distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. 
- - """ - - def __init__(self, mean=0, std=1, **kwargs): - super().__init__(**kwargs) - self.mean = mean - self.std = std - - def __call__(self, module): - - def init(m): - if self.wholemodule: - normal_init(m, self.mean, self.std, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - normal_init(m, self.mean, self.std, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: mean={self.mean},' \ - f' std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='TruncNormal') -class TruncNormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` with values - outside :math:`[a, b]`. - - Args: - mean (float): the mean of the normal distribution. Defaults to 0. - std (float): the standard deviation of the normal distribution. - Defaults to 1. - a (float): The minimum cutoff value. - b ( float): The maximum cutoff value. - bias (float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - - """ - - def __init__(self, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - **kwargs) -> None: - super().__init__(**kwargs) - self.mean = mean - self.std = std - self.a = a - self.b = b - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, b={self.b},' \ - f' mean={self.mean}, std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Uniform') -class UniformInit(BaseInit): - r"""Initialize module parameters with values drawn from the uniform - distribution :math:`\mathcal{U}(a, b)`. - - Args: - a (int | float): the lower bound of the uniform distribution. - Defaults to 0. - b (int | float): the upper bound of the uniform distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. 
- """ - - def __init__(self, a=0, b=1, **kwargs): - super().__init__(**kwargs) - self.a = a - self.b = b - - def __call__(self, module): - - def init(m): - if self.wholemodule: - uniform_init(m, self.a, self.b, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - uniform_init(m, self.a, self.b, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a},' \ - f' b={self.b}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Kaiming') -class KaimingInit(BaseInit): - r"""Initialize module parameters with the values according to the method - described in `Delving deep into rectifiers: Surpassing human-level - performance on ImageNet classification - He, K. et al. (2015). - `_ - - Args: - a (int | float): the negative slope of the rectifier used after this - layer (only used with ``'leaky_relu'``). Defaults to 0. - mode (str): either ``'fan_in'`` or ``'fan_out'``. Choosing - ``'fan_in'`` preserves the magnitude of the variance of the weights - in the forward pass. Choosing ``'fan_out'`` preserves the - magnitudes in the backwards pass. Defaults to ``'fan_out'``. - nonlinearity (str): the non-linear function (`nn.functional` name), - recommended to use only with ``'relu'`` or ``'leaky_relu'`` . - Defaults to 'relu'. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` or - ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, - a=0, - mode='fan_out', - nonlinearity='relu', - distribution='normal', - **kwargs): - super().__init__(**kwargs) - self.a = a - self.mode = mode - self.nonlinearity = nonlinearity - self.distribution = distribution - - def __call__(self, module): - - def init(m): - if self.wholemodule: - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, mode={self.mode}, ' \ - f'nonlinearity={self.nonlinearity}, ' \ - f'distribution ={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Caffe2Xavier') -class Caffe2XavierInit(KaimingInit): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - def __init__(self, **kwargs): - super().__init__( - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - distribution='uniform', - **kwargs) - - def __call__(self, module): - super().__call__(module) - - -@INITIALIZERS.register_module(name='Pretrained') -class PretrainedInit(object): - """Initialize module by loading a pretrained model. - - Args: - checkpoint (str): the checkpoint file of the pretrained model should - be load. - prefix (str, optional): the prefix of a sub-module in the pretrained - model. 
it is for loading a part of the pretrained model to - initialize. For example, if we would like to only load the - backbone of a detector model, we can set ``prefix='backbone.'``. - Defaults to None. - map_location (str): map tensors into proper locations. - """ - - def __init__(self, checkpoint, prefix=None, map_location=None): - self.checkpoint = checkpoint - self.prefix = prefix - self.map_location = map_location - - def __call__(self, module): - from mmcv.runner import (_load_checkpoint_with_prefix, load_checkpoint, - load_state_dict) - logger = get_logger('mmcv') - if self.prefix is None: - print_log(f'load model from: {self.checkpoint}', logger=logger) - load_checkpoint( - module, - self.checkpoint, - map_location=self.map_location, - strict=False, - logger=logger) - else: - print_log( - f'load {self.prefix} in model from: {self.checkpoint}', - logger=logger) - state_dict = _load_checkpoint_with_prefix( - self.prefix, self.checkpoint, map_location=self.map_location) - load_state_dict(module, state_dict, strict=False, logger=logger) - - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: load from {self.checkpoint}' - return info - - -def _initialize(module, cfg, wholemodule=False): - func = build_from_cfg(cfg, INITIALIZERS) - # wholemodule flag is for override mode, there is no layer key in override - # and initializer will give init values for the whole module with the name - # in override. - func.wholemodule = wholemodule - func(module) - - -def _initialize_override(module, override, cfg): - if not isinstance(override, (dict, list)): - raise TypeError(f'override must be a dict or a list of dict, \ - but got {type(override)}') - - override = [override] if isinstance(override, dict) else override - - for override_ in override: - - cp_override = copy.deepcopy(override_) - name = cp_override.pop('name', None) - if name is None: - raise ValueError('`override` must contain the key "name",' - f'but got {cp_override}') - # if override only has name key, it means use args in init_cfg - if not cp_override: - cp_override.update(cfg) - # if override has name key and other args except type key, it will - # raise error - elif 'type' not in cp_override.keys(): - raise ValueError( - f'`override` need "type" key, but got {cp_override}') - - if hasattr(module, name): - _initialize(getattr(module, name), cp_override, wholemodule=True) - else: - raise RuntimeError(f'module did not have attribute {name}, ' - f'but init_cfg is {cp_override}.') - - -def initialize(module, init_cfg): - """Initialize a module. - - Args: - module (``torch.nn.Module``): the module will be initialized. - init_cfg (dict | list[dict]): initialization configuration dict to - define initializer. OpenMMLab has implemented 6 initializers - including ``Constant``, ``Xavier``, ``Normal``, ``Uniform``, - ``Kaiming``, and ``Pretrained``. 
- Example: - >>> module = nn.Linear(2, 3, bias=True) - >>> init_cfg = dict(type='Constant', layer='Linear', val =1 , bias =2) - >>> initialize(module, init_cfg) - - >>> module = nn.Sequential(nn.Conv1d(3, 1, 3), nn.Linear(1,2)) - >>> # define key ``'layer'`` for initializing layer with different - >>> # configuration - >>> init_cfg = [dict(type='Constant', layer='Conv1d', val=1), - dict(type='Constant', layer='Linear', val=2)] - >>> initialize(module, init_cfg) - - >>> # define key``'override'`` to initialize some specific part in - >>> # module - >>> class FooNet(nn.Module): - >>> def __init__(self): - >>> super().__init__() - >>> self.feat = nn.Conv2d(3, 16, 3) - >>> self.reg = nn.Conv2d(16, 10, 3) - >>> self.cls = nn.Conv2d(16, 5, 3) - >>> model = FooNet() - >>> init_cfg = dict(type='Constant', val=1, bias=2, layer='Conv2d', - >>> override=dict(type='Constant', name='reg', val=3, bias=4)) - >>> initialize(model, init_cfg) - - >>> model = ResNet(depth=50) - >>> # Initialize weights with the pretrained model. - >>> init_cfg = dict(type='Pretrained', - checkpoint='torchvision://resnet50') - >>> initialize(model, init_cfg) - - >>> # Initialize weights of a sub-module with the specific part of - >>> # a pretrained model by using "prefix". - >>> url = 'http://download.openmmlab.com/mmdetection/v2.0/retinanet/'\ - >>> 'retinanet_r50_fpn_1x_coco/'\ - >>> 'retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth' - >>> init_cfg = dict(type='Pretrained', - checkpoint=url, prefix='backbone.') - """ - if not isinstance(init_cfg, (dict, list)): - raise TypeError(f'init_cfg must be a dict or a list of dict, \ - but got {type(init_cfg)}') - - if isinstance(init_cfg, dict): - init_cfg = [init_cfg] - - for cfg in init_cfg: - # should deeply copy the original config because cfg may be used by - # other modules, e.g., one init_cfg shared by multiple bottleneck - # blocks, the expected cfg will be changed after pop and will change - # the initialization behavior of other modules - cp_cfg = copy.deepcopy(cfg) - override = cp_cfg.pop('override', None) - _initialize(module, cp_cfg) - - if override is not None: - cp_cfg.pop('layer', None) - _initialize_override(module, override, cp_cfg) - else: - # All attributes in module have same initialization. - pass - - -def _no_grad_trunc_normal_(tensor: Tensor, mean: float, std: float, a: float, - b: float) -> Tensor: - # Method based on - # https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - # Modified from - # https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - lower = norm_cdf((a - mean) / std) - upper = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [lower, upper], then translate - # to [2lower-1, 2upper-1]. 
- tensor.uniform_(2 * lower - 1, 2 * upper - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor: Tensor, - mean: float = 0., - std: float = 1., - a: float = -2., - b: float = 2.) -> Tensor: - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. - - Modified from - https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - - Args: - tensor (``torch.Tensor``): an n-dimensional `torch.Tensor`. - mean (float): the mean of the normal distribution. - std (float): the standard deviation of the normal distribution. - a (float): the minimum cutoff value. - b (float): the maximum cutoff value. - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/vgg.py b/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/vgg.py deleted file mode 100755 index 8778b649..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/cnn/vgg.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn - -from .utils import constant_init, kaiming_init, normal_init - - -def conv3x3(in_planes, out_planes, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - padding=dilation, - dilation=dilation) - - -def make_vgg_layer(inplanes, - planes, - num_blocks, - dilation=1, - with_bn=False, - ceil_mode=False): - layers = [] - for _ in range(num_blocks): - layers.append(conv3x3(inplanes, planes, dilation)) - if with_bn: - layers.append(nn.BatchNorm2d(planes)) - layers.append(nn.ReLU(inplace=True)) - inplanes = planes - layers.append(nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=ceil_mode)) - - return layers - - -class VGG(nn.Module): - """VGG backbone. - - Args: - depth (int): Depth of vgg, from {11, 13, 16, 19}. - with_bn (bool): Use BatchNorm or not. - num_classes (int): number of classes for classification. - num_stages (int): VGG stages, normally 5. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. 
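# A minimal sketch: the trunc_normal_ helper defined above mirrors the truncated
# normal init that recent PyTorch versions expose as torch.nn.init.trunc_normal_;
# a common use is transformer-style weight init with std=0.02 clipped to [-2, 2].
import torch

weight = torch.empty(256, 256)
torch.nn.init.trunc_normal_(weight, mean=0.0, std=0.02, a=-2.0, b=2.0)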
- """ - - arch_settings = { - 11: (1, 1, 2, 2, 2), - 13: (2, 2, 2, 2, 2), - 16: (2, 2, 3, 3, 3), - 19: (2, 2, 4, 4, 4) - } - - def __init__(self, - depth, - with_bn=False, - num_classes=-1, - num_stages=5, - dilations=(1, 1, 1, 1, 1), - out_indices=(0, 1, 2, 3, 4), - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - ceil_mode=False, - with_last_pool=True): - super(VGG, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for vgg') - assert num_stages >= 1 and num_stages <= 5 - stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - assert len(dilations) == num_stages - assert max(out_indices) <= num_stages - - self.num_classes = num_classes - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - - self.inplanes = 3 - start_idx = 0 - vgg_layers = [] - self.range_sub_modules = [] - for i, num_blocks in enumerate(self.stage_blocks): - num_modules = num_blocks * (2 + with_bn) + 1 - end_idx = start_idx + num_modules - dilation = dilations[i] - planes = 64 * 2**i if i < 4 else 512 - vgg_layer = make_vgg_layer( - self.inplanes, - planes, - num_blocks, - dilation=dilation, - with_bn=with_bn, - ceil_mode=ceil_mode) - vgg_layers.extend(vgg_layer) - self.inplanes = planes - self.range_sub_modules.append([start_idx, end_idx]) - start_idx = end_idx - if not with_last_pool: - vgg_layers.pop(-1) - self.range_sub_modules[-1][1] -= 1 - self.module_name = 'features' - self.add_module(self.module_name, nn.Sequential(*vgg_layers)) - - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Linear(512 * 7 * 7, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - elif isinstance(m, nn.Linear): - normal_init(m, std=0.01) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - outs = [] - vgg_layers = getattr(self, self.module_name) - for i in range(len(self.stage_blocks)): - for j in range(*self.range_sub_modules[i]): - vgg_layer = vgg_layers[j] - x = vgg_layer(x) - if i in self.out_indices: - outs.append(x) - if self.num_classes > 0: - x = x.view(x.size(0), -1) - x = self.classifier(x) - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(VGG, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - vgg_layers = getattr(self, self.module_name) - if mode and self.frozen_stages >= 0: - for i in range(self.frozen_stages): - for j in range(*self.range_sub_modules[i]): - mod = vgg_layers[j] - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/engine/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/engine/__init__.py deleted file mode 100755 index 3193b7f6..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/engine/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) 
OpenMMLab. All rights reserved. -from .test import (collect_results_cpu, collect_results_gpu, multi_gpu_test, - single_gpu_test) - -__all__ = [ - 'collect_results_cpu', 'collect_results_gpu', 'multi_gpu_test', - 'single_gpu_test' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/engine/test.py b/cv/super_resolution/basicvsr/pytorch/mmcv/engine/test.py deleted file mode 100755 index f236b1cd..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/engine/test.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import pickle -import shutil -import tempfile -import time - -import torch -import torch.distributed as dist - -import mmcv -from mmcv.runner import get_dist_info - - -def single_gpu_test(model, data_loader): - """Test model with a single gpu. - - This method tests model with a single gpu and displays test progress bar. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - - Returns: - list: The prediction results. - """ - model.eval() - results = [] - dataset = data_loader.dataset - prog_bar = mmcv.ProgressBar(len(dataset)) - for data in data_loader: - with torch.no_grad(): - result = model(return_loss=False, **data) - results.extend(result) - - # Assume result has the same length of batch_size - # refer to https://github.com/open-mmlab/mmcv/issues/985 - batch_size = len(result) - for _ in range(batch_size): - prog_bar.update() - return results - - -def multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False): - """Test model with multiple gpus. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting - ``gpu_collect=True``, it encodes results to gpu tensors and use gpu - communication for results collection. On cpu mode it saves the results on - different gpus to ``tmpdir`` and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - - Returns: - list: The prediction results. - """ - model.eval() - results = [] - dataset = data_loader.dataset - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - time.sleep(2) # This line can prevent deadlock problem in some cases. - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, **data) - results.extend(result) - - if rank == 0: - batch_size = len(result) - batch_size_all = batch_size * world_size - if batch_size_all + prog_bar.completed > len(dataset): - batch_size_all = len(dataset) - prog_bar.completed - for _ in range(batch_size_all): - prog_bar.update() - - # collect results from all ranks - if gpu_collect: - results = collect_results_gpu(results, len(dataset)) - else: - results = collect_results_cpu(results, len(dataset), tmpdir) - return results - - -def collect_results_cpu(result_part, size, tmpdir=None): - """Collect results under cpu mode. - - On cpu mode, this function will save the results on different gpus to - ``tmpdir`` and collect them by the rank 0 worker. - - Args: - result_part (list): Result list containing result parts - to be collected. - size (int): Size of the results, commonly equal to length of - the results. 
- tmpdir (str | None): temporal directory for collected results to - store. If set to None, it will create a random temporal directory - for it. - - Returns: - list: The collected results. - """ - rank, world_size = get_dist_info() - # create a tmp dir if it is not specified - if tmpdir is None: - MAX_LEN = 512 - # 32 is whitespace - dir_tensor = torch.full((MAX_LEN, ), - 32, - dtype=torch.uint8, - device='cuda') - if rank == 0: - mmcv.mkdir_or_exist('.dist_test') - tmpdir = tempfile.mkdtemp(dir='.dist_test') - tmpdir = torch.tensor( - bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda') - dir_tensor[:len(tmpdir)] = tmpdir - dist.broadcast(dir_tensor, 0) - tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip() - else: - mmcv.mkdir_or_exist(tmpdir) - # dump the part result to the dir - mmcv.dump(result_part, osp.join(tmpdir, f'part_{rank}.pkl')) - dist.barrier() - # collect all parts - if rank != 0: - return None - else: - # load results of all parts from tmp dir - part_list = [] - for i in range(world_size): - part_file = osp.join(tmpdir, f'part_{i}.pkl') - part_result = mmcv.load(part_file) - # When data is severely insufficient, an empty part_result - # on a certain gpu could makes the overall outputs empty. - if part_result: - part_list.append(part_result) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - # remove tmp dir - shutil.rmtree(tmpdir) - return ordered_results - - -def collect_results_gpu(result_part, size): - """Collect results under gpu mode. - - On gpu mode, this function will encode results to gpu tensors and use gpu - communication for results collection. - - Args: - result_part (list): Result list containing result parts - to be collected. - size (int): Size of the results, commonly equal to length of - the results. - - Returns: - list: The collected results. - """ - rank, world_size = get_dist_info() - # dump result part to tensor with pickle - part_tensor = torch.tensor( - bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda') - # gather all result part tensor shape - shape_tensor = torch.tensor(part_tensor.shape, device='cuda') - shape_list = [shape_tensor.clone() for _ in range(world_size)] - dist.all_gather(shape_list, shape_tensor) - # padding result part tensor to max length - shape_max = torch.tensor(shape_list).max() - part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda') - part_send[:shape_tensor[0]] = part_tensor - part_recv_list = [ - part_tensor.new_zeros(shape_max) for _ in range(world_size) - ] - # gather all result part - dist.all_gather(part_recv_list, part_send) - - if rank == 0: - part_list = [] - for recv, shape in zip(part_recv_list, shape_list): - part_result = pickle.loads(recv[:shape[0]].cpu().numpy().tobytes()) - # When data is severely insufficient, an empty part_result - # on a certain gpu could makes the overall outputs empty. 
- if part_result: - part_list.append(part_result) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - return ordered_results diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/__init__.py deleted file mode 100755 index 2051b85f..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .file_client import BaseStorageBackend, FileClient -from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler -from .io import dump, load, register_handler -from .parse import dict_from_file, list_from_file - -__all__ = [ - 'BaseStorageBackend', 'FileClient', 'load', 'dump', 'register_handler', - 'BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler', - 'list_from_file', 'dict_from_file' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/file_client.py b/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/file_client.py deleted file mode 100755 index b2d62286..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/file_client.py +++ /dev/null @@ -1,1148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import os -import os.path as osp -import re -import tempfile -import warnings -from abc import ABCMeta, abstractmethod -from contextlib import contextmanager -from pathlib import Path -from typing import Iterable, Iterator, Optional, Tuple, Union -from urllib.request import urlopen - -import mmcv -from mmcv.utils.misc import has_method -from mmcv.utils.path import is_filepath - - -class BaseStorageBackend(metaclass=ABCMeta): - """Abstract class of storage backends. - - All backends need to implement two apis: ``get()`` and ``get_text()``. - ``get()`` reads the file as a byte stream and ``get_text()`` reads the file - as texts. - """ - - # a flag to indicate whether the backend can create a symlink for a file - _allow_symlink = False - - @property - def name(self): - return self.__class__.__name__ - - @property - def allow_symlink(self): - return self._allow_symlink - - @abstractmethod - def get(self, filepath): - pass - - @abstractmethod - def get_text(self, filepath): - pass - - -class CephBackend(BaseStorageBackend): - """Ceph storage backend (for internal use). - - Args: - path_mapping (dict|None): path mapping dict from local path to Petrel - path. When ``path_mapping={'src': 'dst'}``, ``src`` in ``filepath`` - will be replaced by ``dst``. Default: None. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. 
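# A minimal standalone sketch of the serialize/pad/restore round trip that
# collect_results_gpu above relies on: arbitrary Python results are pickled into
# padded uint8 tensors so they can pass through all_gather. No process group is
# needed for this check; the sample result below is made up.
import pickle
import torch

part = [{'bbox': [0, 0, 10, 10], 'score': 0.9}]
buf = torch.tensor(bytearray(pickle.dumps(part)), dtype=torch.uint8)
padded = torch.zeros(buf.numel() + 7, dtype=torch.uint8)  # pad to a shared max length
padded[:buf.numel()] = buf
restored = pickle.loads(padded[:buf.numel()].numpy().tobytes())
assert restored == part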
- """ - - def __init__(self, path_mapping=None): - try: - import ceph - except ImportError: - raise ImportError('Please install ceph to enable CephBackend.') - - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead') - self._client = ceph.S3Client() - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def get(self, filepath): - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class PetrelBackend(BaseStorageBackend): - """Petrel storage backend (for internal use). - - PetrelBackend supports reading and writing data to multiple clusters. - If the file path contains the cluster name, PetrelBackend will read data - from specified cluster or write data to it. Otherwise, PetrelBackend will - access the default cluster. - - Args: - path_mapping (dict, optional): Path mapping dict from local path to - Petrel path. When ``path_mapping={'src': 'dst'}``, ``src`` in - ``filepath`` will be replaced by ``dst``. Default: None. - enable_mc (bool, optional): Whether to enable memcached support. - Default: True. - - Examples: - >>> filepath1 = 's3://path/of/file' - >>> filepath2 = 'cluster-name:s3://path/of/file' - >>> client = PetrelBackend() - >>> client.get(filepath1) # get data from default cluster - >>> client.get(filepath2) # get data from 'cluster-name' cluster - """ - - def __init__(self, - path_mapping: Optional[dict] = None, - enable_mc: bool = True): - try: - from petrel_client import client - except ImportError: - raise ImportError('Please install petrel_client to enable ' - 'PetrelBackend.') - - self._client = client.Client(enable_mc=enable_mc) - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def _map_path(self, filepath: Union[str, Path]) -> str: - """Map ``filepath`` to a string path whose prefix will be replaced by - :attr:`self.path_mapping`. - - Args: - filepath (str): Path to be mapped. - """ - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - return filepath - - def _format_path(self, filepath: str) -> str: - """Convert a ``filepath`` to standard format of petrel oss. - - If the ``filepath`` is concatenated by ``os.path.join``, in a Windows - environment, the ``filepath`` will be the format of - 's3://bucket_name\\image.jpg'. By invoking :meth:`_format_path`, the - above ``filepath`` will be converted to 's3://bucket_name/image.jpg'. - - Args: - filepath (str): Path to be formatted. - """ - return re.sub(r'\\+', '/', filepath) - - def get(self, filepath: Union[str, Path]) -> memoryview: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - memoryview: A memory view of expected bytes object to avoid - copying. The memoryview object can be converted to bytes by - ``value_buf.tobytes()``. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. 
- - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return str(self.get(filepath), encoding=encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Save data to a given ``filepath``. - - Args: - obj (bytes): Data to be saved. - filepath (str or Path): Path to write data. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.put(filepath, obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Save data to a given ``filepath``. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to encode the ``obj``. - Default: 'utf-8'. - """ - self.put(bytes(obj, encoding=encoding), filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. - """ - if not has_method(self._client, 'delete'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `delete` method, please use a higher version or dev' - ' branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.delete(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - if not (has_method(self._client, 'contains') - and has_method(self._client, 'isdir')): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `contains` and `isdir` methods, please use a higher' - 'version or dev branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) or self._client.isdir(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - if not has_method(self._client, 'isdir'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `isdir` method, please use a higher version or dev' - ' branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - if not has_method(self._client, 'contains'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `contains` method, please use a higher version or ' - 'dev branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Args: - filepath (str or Path): Path to be concatenated. 
- - Returns: - str: The result after concatenation. - """ - filepath = self._format_path(self._map_path(filepath)) - if filepath.endswith('/'): - filepath = filepath[:-1] - formatted_paths = [filepath] - for path in filepaths: - formatted_paths.append(self._format_path(self._map_path(path))) - return '/'.join(formatted_paths) - - @contextmanager - def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]: - """Download a file from ``filepath`` and return a temporary path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str | Path): Download a file from ``filepath``. - - Examples: - >>> client = PetrelBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('s3://path/of/your/file') as path: - ... # do something here - - Yields: - Iterable[str]: Only yield one temporary path. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - assert self.isfile(filepath) - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - Petrel has no concept of directories but it simulates the directory - hierarchy in the filesystem through public prefixes. In addition, - if the returned path ends with '/', it means the path is a public - prefix which is a logical directory. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - In addition, the returned path of directory will not contains the - suffix '/' which is consistent with other backends. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. 
- """ - if not has_method(self._client, 'list'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `list` method, please use a higher version or dev' - ' branch instead.')) - - dir_path = self._map_path(dir_path) - dir_path = self._format_path(dir_path) - if list_dir and suffix is not None: - raise TypeError( - '`list_dir` should be False when `suffix` is not None') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - # Petrel's simulated directory hierarchy assumes that directory paths - # should end with `/` - if not dir_path.endswith('/'): - dir_path += '/' - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for path in self._client.list(dir_path): - # the `self.isdir` is not used here to determine whether path - # is a directory, because `self.isdir` relies on - # `self._client.list` - if path.endswith('/'): # a directory path - next_dir_path = self.join_path(dir_path, path) - if list_dir: - # get the relative path and exclude the last - # character '/' - rel_dir = next_dir_path[len(root):-1] - yield rel_dir - if recursive: - yield from _list_dir_or_file(next_dir_path, list_dir, - list_file, suffix, - recursive) - else: # a file path - absolute_path = self.join_path(dir_path, path) - rel_path = absolute_path[len(root):] - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class MemcachedBackend(BaseStorageBackend): - """Memcached storage backend. - - Attributes: - server_list_cfg (str): Config file for memcached server list. - client_cfg (str): Config file for memcached client. - sys_path (str | None): Additional path to be appended to `sys.path`. - Default: None. - """ - - def __init__(self, server_list_cfg, client_cfg, sys_path=None): - if sys_path is not None: - import sys - sys.path.append(sys_path) - try: - import mc - except ImportError: - raise ImportError( - 'Please install memcached to enable MemcachedBackend.') - - self.server_list_cfg = server_list_cfg - self.client_cfg = client_cfg - self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg, - self.client_cfg) - # mc.pyvector servers as a point which points to a memory cache - self._mc_buffer = mc.pyvector() - - def get(self, filepath): - filepath = str(filepath) - import mc - self._client.Get(filepath, self._mc_buffer) - value_buf = mc.ConvertBuffer(self._mc_buffer) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class LmdbBackend(BaseStorageBackend): - """Lmdb storage backend. - - Args: - db_path (str): Lmdb database path. - readonly (bool, optional): Lmdb environment parameter. If True, - disallow any write operations. Default: True. - lock (bool, optional): Lmdb environment parameter. If False, when - concurrent access occurs, do not lock the database. Default: False. - readahead (bool, optional): Lmdb environment parameter. If False, - disable the OS filesystem readahead mechanism, which may improve - random read performance when a database is larger than RAM. - Default: False. - - Attributes: - db_path (str): Lmdb database path. 
- """ - - def __init__(self, - db_path, - readonly=True, - lock=False, - readahead=False, - **kwargs): - try: - import lmdb - except ImportError: - raise ImportError('Please install lmdb to enable LmdbBackend.') - - self.db_path = str(db_path) - self._client = lmdb.open( - self.db_path, - readonly=readonly, - lock=lock, - readahead=readahead, - **kwargs) - - def get(self, filepath): - """Get values according to the filepath. - - Args: - filepath (str | obj:`Path`): Here, filepath is the lmdb key. - """ - filepath = str(filepath) - with self._client.begin(write=False) as txn: - value_buf = txn.get(filepath.encode('ascii')) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class HardDiskBackend(BaseStorageBackend): - """Raw hard disks storage backend.""" - - _allow_symlink = True - - def get(self, filepath: Union[str, Path]) -> bytes: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes: Expected bytes object. - """ - with open(filepath, 'rb') as f: - value_buf = f.read() - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - with open(filepath, 'r', encoding=encoding) as f: - value_buf = f.read() - return value_buf - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` will create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. - """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'wb') as f: - f.write(obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` will create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'w', encoding=encoding) as f: - f.write(obj) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. - """ - os.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return osp.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - return osp.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. 
- - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return osp.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return osp.join(filepath, *filepaths) - - @contextmanager - def get_local_path( - self, filepath: Union[str, Path]) -> Iterable[Union[str, Path]]: - """Only for unified API and do nothing.""" - yield filepath - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - if list_dir and suffix is not None: - raise TypeError('`suffix` should be None when `list_dir` is True') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - elif osp.isdir(entry.path): - if list_dir: - rel_dir = osp.relpath(entry.path, root) - yield rel_dir - if recursive: - yield from _list_dir_or_file(entry.path, list_dir, - list_file, suffix, - recursive) - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class HTTPBackend(BaseStorageBackend): - """HTTP and HTTPS storage bachend.""" - - def get(self, filepath): - value_buf = urlopen(filepath).read() - return value_buf - - def get_text(self, filepath, encoding='utf-8'): - value_buf = urlopen(filepath).read() - return value_buf.decode(encoding) - - @contextmanager - def get_local_path(self, filepath: str) -> Iterable[str]: - """Download a file from ``filepath``. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str): Download a file from ``filepath``. - - Examples: - >>> client = HTTPBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('http://path/of/your/file') as path: - ... # do something here - """ - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - -class FileClient: - """A general file client to access files in different backends. 
- - The client loads a file or text in a specified backend from its path - and returns it as a binary or text file. There are two ways to choose a - backend, the name of backend and the prefix of path. Although both of them - can be used to choose a storage backend, ``backend`` has a higher priority - that is if they are all set, the storage backend will be chosen by the - backend argument. If they are all `None`, the disk backend will be chosen. - Note that It can also register other backend accessor with a given name, - prefixes, and backend class. In addition, We use the singleton pattern to - avoid repeated object creation. If the arguments are the same, the same - object will be returned. - - Args: - backend (str, optional): The storage backend type. Options are "disk", - "ceph", "memcached", "lmdb", "http" and "petrel". Default: None. - prefix (str, optional): The prefix of the registered storage backend. - Options are "s3", "http", "https". Default: None. - - Examples: - >>> # only set backend - >>> file_client = FileClient(backend='petrel') - >>> # only set prefix - >>> file_client = FileClient(prefix='s3') - >>> # set both backend and prefix but use backend to choose client - >>> file_client = FileClient(backend='petrel', prefix='s3') - >>> # if the arguments are the same, the same object is returned - >>> file_client1 = FileClient(backend='petrel') - >>> file_client1 is file_client - True - - Attributes: - client (:obj:`BaseStorageBackend`): The backend object. - """ - - _backends = { - 'disk': HardDiskBackend, - 'ceph': CephBackend, - 'memcached': MemcachedBackend, - 'lmdb': LmdbBackend, - 'petrel': PetrelBackend, - 'http': HTTPBackend, - } - # This collection is used to record the overridden backends, and when a - # backend appears in the collection, the singleton pattern is disabled for - # that backend, because if the singleton pattern is used, then the object - # returned will be the backend before overwriting - _overridden_backends = set() - _prefix_to_backends = { - 's3': PetrelBackend, - 'http': HTTPBackend, - 'https': HTTPBackend, - } - _overridden_prefixes = set() - - _instances = {} - - def __new__(cls, backend=None, prefix=None, **kwargs): - if backend is None and prefix is None: - backend = 'disk' - if backend is not None and backend not in cls._backends: - raise ValueError( - f'Backend {backend} is not supported. Currently supported ones' - f' are {list(cls._backends.keys())}') - if prefix is not None and prefix not in cls._prefix_to_backends: - raise ValueError( - f'prefix {prefix} is not supported. 
Currently supported ones ' - f'are {list(cls._prefix_to_backends.keys())}') - - # concatenate the arguments to a unique key for determining whether - # objects with the same arguments were created - arg_key = f'{backend}:{prefix}' - for key, value in kwargs.items(): - arg_key += f':{key}:{value}' - - # if a backend was overridden, it will create a new object - if (arg_key in cls._instances - and backend not in cls._overridden_backends - and prefix not in cls._overridden_prefixes): - _instance = cls._instances[arg_key] - else: - # create a new object and put it to _instance - _instance = super().__new__(cls) - if backend is not None: - _instance.client = cls._backends[backend](**kwargs) - else: - _instance.client = cls._prefix_to_backends[prefix](**kwargs) - - cls._instances[arg_key] = _instance - - return _instance - - @property - def name(self): - return self.client.name - - @property - def allow_symlink(self): - return self.client.allow_symlink - - @staticmethod - def parse_uri_prefix(uri: Union[str, Path]) -> Optional[str]: - """Parse the prefix of a uri. - - Args: - uri (str | Path): Uri to be parsed that contains the file prefix. - - Examples: - >>> FileClient.parse_uri_prefix('s3://path/of/your/file') - 's3' - - Returns: - str | None: Return the prefix of uri if the uri contains '://' - else ``None``. - """ - assert is_filepath(uri) - uri = str(uri) - if '://' not in uri: - return None - else: - prefix, _ = uri.split('://') - # In the case of PetrelBackend, the prefix may contains the cluster - # name like clusterName:s3 - if ':' in prefix: - _, prefix = prefix.split(':') - return prefix - - @classmethod - def infer_client(cls, - file_client_args: Optional[dict] = None, - uri: Optional[Union[str, Path]] = None) -> 'FileClient': - """Infer a suitable file client based on the URI and arguments. - - Args: - file_client_args (dict, optional): Arguments to instantiate a - FileClient. Default: None. - uri (str | Path, optional): Uri to be parsed that contains the file - prefix. Default: None. - - Examples: - >>> uri = 's3://path/of/your/file' - >>> file_client = FileClient.infer_client(uri=uri) - >>> file_client_args = {'backend': 'petrel'} - >>> file_client = FileClient.infer_client(file_client_args) - - Returns: - FileClient: Instantiated FileClient object. 
- """ - assert file_client_args is not None or uri is not None - if file_client_args is None: - file_prefix = cls.parse_uri_prefix(uri) # type: ignore - return cls(prefix=file_prefix) - else: - return cls(**file_client_args) - - @classmethod - def _register_backend(cls, name, backend, force=False, prefixes=None): - if not isinstance(name, str): - raise TypeError('the backend name should be a string, ' - f'but got {type(name)}') - if not inspect.isclass(backend): - raise TypeError( - f'backend should be a class but got {type(backend)}') - if not issubclass(backend, BaseStorageBackend): - raise TypeError( - f'backend {backend} is not a subclass of BaseStorageBackend') - if not force and name in cls._backends: - raise KeyError( - f'{name} is already registered as a storage backend, ' - 'add "force=True" if you want to override it') - - if name in cls._backends and force: - cls._overridden_backends.add(name) - cls._backends[name] = backend - - if prefixes is not None: - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if prefix not in cls._prefix_to_backends: - cls._prefix_to_backends[prefix] = backend - elif (prefix in cls._prefix_to_backends) and force: - cls._overridden_prefixes.add(prefix) - cls._prefix_to_backends[prefix] = backend - else: - raise KeyError( - f'{prefix} is already registered as a storage backend,' - ' add "force=True" if you want to override it') - - @classmethod - def register_backend(cls, name, backend=None, force=False, prefixes=None): - """Register a backend to FileClient. - - This method can be used as a normal class method or a decorator. - - .. code-block:: python - - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - FileClient.register_backend('new', NewBackend) - - or - - .. code-block:: python - - @FileClient.register_backend('new') - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - Args: - name (str): The name of the registered backend. - backend (class, optional): The backend class to be registered, - which must be a subclass of :class:`BaseStorageBackend`. - When this method is used as a decorator, backend is None. - Defaults to None. - force (bool, optional): Whether to override the backend if the name - has already been registered. Defaults to False. - prefixes (str or list[str] or tuple[str], optional): The prefixes - of the registered storage backend. Default: None. - `New in version 1.3.15.` - """ - if backend is not None: - cls._register_backend( - name, backend, force=force, prefixes=prefixes) - return - - def _register(backend_cls): - cls._register_backend( - name, backend_cls, force=force, prefixes=prefixes) - return backend_cls - - return _register - - def get(self, filepath: Union[str, Path]) -> Union[bytes, memoryview]: - """Read data from a given ``filepath`` with 'rb' mode. - - Note: - There are two types of return values for ``get``, one is ``bytes`` - and the other is ``memoryview``. The advantage of using memoryview - is that you can avoid copying, and if you want to convert it to - ``bytes``, you can use ``.tobytes()``. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes | memoryview: Expected bytes object or a memory view of the - bytes object. 
- """ - return self.client.get(filepath) - - def get_text(self, filepath: Union[str, Path], encoding='utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return self.client.get_text(filepath, encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` should create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. - """ - self.client.put(obj, filepath) - - def put_text(self, obj: str, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` should create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str, optional): The encoding format used to open the - `filepath`. Default: 'utf-8'. - """ - self.client.put_text(obj, filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str, Path): Path to be removed. - """ - self.client.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return self.client.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - return self.client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return self.client.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return self.client.join_path(filepath, *filepaths) - - @contextmanager - def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]: - """Download data from ``filepath`` and write the data to local path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Note: - If the ``filepath`` is a local path, just return itself. - - .. warning:: - ``get_local_path`` is an experimental interface that may change in - the future. - - Args: - filepath (str or Path): Path to be read data. - - Examples: - >>> file_client = FileClient(prefix='s3') - >>> with file_client.get_local_path('s3://bucket/abc.jpg') as path: - ... 
# do something here - - Yields: - Iterable[str]: Only yield one path. - """ - with self.client.get_local_path(str(filepath)) as local_path: - yield local_path - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - yield from self.client.list_dir_or_file(dir_path, list_dir, list_file, - suffix, recursive) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/__init__.py deleted file mode 100755 index aa24d919..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base import BaseFileHandler -from .json_handler import JsonHandler -from .pickle_handler import PickleHandler -from .yaml_handler import YamlHandler - -__all__ = ['BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/base.py b/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/base.py deleted file mode 100755 index 288878bc..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/base.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BaseFileHandler(metaclass=ABCMeta): - # `str_like` is a flag to indicate whether the type of file object is - # str-like object or bytes-like object. Pickle only processes bytes-like - # objects but json only processes str-like object. If it is str-like - # object, `StringIO` will be used to process the buffer. - str_like = True - - @abstractmethod - def load_from_fileobj(self, file, **kwargs): - pass - - @abstractmethod - def dump_to_fileobj(self, obj, file, **kwargs): - pass - - @abstractmethod - def dump_to_str(self, obj, **kwargs): - pass - - def load_from_path(self, filepath, mode='r', **kwargs): - with open(filepath, mode) as f: - return self.load_from_fileobj(f, **kwargs) - - def dump_to_path(self, obj, filepath, mode='w', **kwargs): - with open(filepath, mode) as f: - self.dump_to_fileobj(obj, f, **kwargs) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/json_handler.py b/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/json_handler.py deleted file mode 100755 index 18d4f15f..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/json_handler.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json - -import numpy as np - -from .base import BaseFileHandler - - -def set_default(obj): - """Set default json values for non-serializable values. - - It helps convert ``set``, ``range`` and ``np.ndarray`` data types to list. 
- It also converts ``np.generic`` (including ``np.int32``, ``np.float32``, - etc.) into plain numbers of plain python built-in types. - """ - if isinstance(obj, (set, range)): - return list(obj) - elif isinstance(obj, np.ndarray): - return obj.tolist() - elif isinstance(obj, np.generic): - return obj.item() - raise TypeError(f'{type(obj)} is unsupported for json dump') - - -class JsonHandler(BaseFileHandler): - - def load_from_fileobj(self, file): - return json.load(file) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('default', set_default) - json.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('default', set_default) - return json.dumps(obj, **kwargs) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/pickle_handler.py b/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/pickle_handler.py deleted file mode 100755 index b37c79be..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/pickle_handler.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import pickle - -from .base import BaseFileHandler - - -class PickleHandler(BaseFileHandler): - - str_like = False - - def load_from_fileobj(self, file, **kwargs): - return pickle.load(file, **kwargs) - - def load_from_path(self, filepath, **kwargs): - return super(PickleHandler, self).load_from_path( - filepath, mode='rb', **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('protocol', 2) - return pickle.dumps(obj, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('protocol', 2) - pickle.dump(obj, file, **kwargs) - - def dump_to_path(self, obj, filepath, **kwargs): - super(PickleHandler, self).dump_to_path( - obj, filepath, mode='wb', **kwargs) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/yaml_handler.py b/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/yaml_handler.py deleted file mode 100755 index c5aa2eea..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/handlers/yaml_handler.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import yaml - -try: - from yaml import CLoader as Loader, CDumper as Dumper -except ImportError: - from yaml import Loader, Dumper - -from .base import BaseFileHandler # isort:skip - - -class YamlHandler(BaseFileHandler): - - def load_from_fileobj(self, file, **kwargs): - kwargs.setdefault('Loader', Loader) - return yaml.load(file, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('Dumper', Dumper) - yaml.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('Dumper', Dumper) - return yaml.dump(obj, **kwargs) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/io.py b/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/io.py deleted file mode 100755 index aaefde58..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/io.py +++ /dev/null @@ -1,151 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from io import BytesIO, StringIO -from pathlib import Path - -from ..utils import is_list_of, is_str -from .file_client import FileClient -from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler - -file_handlers = { - 'json': JsonHandler(), - 'yaml': YamlHandler(), - 'yml': YamlHandler(), - 'pickle': PickleHandler(), - 'pkl': PickleHandler() -} - - -def load(file, file_format=None, file_client_args=None, **kwargs): - """Load data from json/yaml/pickle files. - - This method provides a unified api for loading data from serialized files. - - Note: - In v1.3.16 and later, ``load`` supports loading data from serialized - files those can be storaged in different backends. - - Args: - file (str or :obj:`Path` or file-like object): Filename or a file-like - object. - file_format (str, optional): If not specified, the file format will be - inferred from the file extension, otherwise use the specified one. - Currently supported formats include "json", "yaml/yml" and - "pickle/pkl". - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> load('/path/of/your/file') # file is storaged in disk - >>> load('https://path/of/your/file') # file is storaged in Internet - >>> load('s3://path/of/your/file') # file is storaged in petrel - - Returns: - The content from the file. - """ - if isinstance(file, Path): - file = str(file) - if file_format is None and is_str(file): - file_format = file.split('.')[-1] - if file_format not in file_handlers: - raise TypeError(f'Unsupported format: {file_format}') - - handler = file_handlers[file_format] - if is_str(file): - file_client = FileClient.infer_client(file_client_args, file) - if handler.str_like: - with StringIO(file_client.get_text(file)) as f: - obj = handler.load_from_fileobj(f, **kwargs) - else: - with BytesIO(file_client.get(file)) as f: - obj = handler.load_from_fileobj(f, **kwargs) - elif hasattr(file, 'read'): - obj = handler.load_from_fileobj(file, **kwargs) - else: - raise TypeError('"file" must be a filepath str or a file-object') - return obj - - -def dump(obj, file=None, file_format=None, file_client_args=None, **kwargs): - """Dump data to json/yaml/pickle strings or files. - - This method provides a unified api for dumping data as strings or to files, - and also supports custom arguments for each file format. - - Note: - In v1.3.16 and later, ``dump`` supports dumping data as strings or to - files which is saved to different backends. - - Args: - obj (any): The python object to be dumped. - file (str or :obj:`Path` or file-like object, optional): If not - specified, then the object is dumped to a str, otherwise to a file - specified by the filename or file-like object. - file_format (str, optional): Same as :func:`load`. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> dump('hello world', '/path/of/your/file') # disk - >>> dump('hello world', 's3://path/of/your/file') # ceph or petrel - - Returns: - bool: True for success, False otherwise. 
- """ - if isinstance(file, Path): - file = str(file) - if file_format is None: - if is_str(file): - file_format = file.split('.')[-1] - elif file is None: - raise ValueError( - 'file_format must be specified since file is None') - if file_format not in file_handlers: - raise TypeError(f'Unsupported format: {file_format}') - - handler = file_handlers[file_format] - if file is None: - return handler.dump_to_str(obj, **kwargs) - elif is_str(file): - file_client = FileClient.infer_client(file_client_args, file) - if handler.str_like: - with StringIO() as f: - handler.dump_to_fileobj(obj, f, **kwargs) - file_client.put_text(f.getvalue(), file) - else: - with BytesIO() as f: - handler.dump_to_fileobj(obj, f, **kwargs) - file_client.put(f.getvalue(), file) - elif hasattr(file, 'write'): - handler.dump_to_fileobj(obj, file, **kwargs) - else: - raise TypeError('"file" must be a filename str or a file-object') - - -def _register_handler(handler, file_formats): - """Register a handler for some file extensions. - - Args: - handler (:obj:`BaseFileHandler`): Handler to be registered. - file_formats (str or list[str]): File formats to be handled by this - handler. - """ - if not isinstance(handler, BaseFileHandler): - raise TypeError( - f'handler must be a child of BaseFileHandler, not {type(handler)}') - if isinstance(file_formats, str): - file_formats = [file_formats] - if not is_list_of(file_formats, str): - raise TypeError('file_formats must be a str or a list of str') - for ext in file_formats: - file_handlers[ext] = handler - - -def register_handler(file_formats, **kwargs): - - def wrap(cls): - _register_handler(cls(**kwargs), file_formats) - return cls - - return wrap diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/parse.py b/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/parse.py deleted file mode 100755 index f60f0d61..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/fileio/parse.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -from io import StringIO - -from .file_client import FileClient - - -def list_from_file(filename, - prefix='', - offset=0, - max_num=0, - encoding='utf-8', - file_client_args=None): - """Load a text file and parse the content as a list of strings. - - Note: - In v1.3.16 and later, ``list_from_file`` supports loading a text file - which can be storaged in different backends and parsing the content as - a list for strings. - - Args: - filename (str): Filename. - prefix (str): The prefix to be inserted to the beginning of each item. - offset (int): The offset of lines. - max_num (int): The maximum number of lines to be read, - zeros and negatives mean no limitation. - encoding (str): Encoding used to open the file. Default utf-8. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> list_from_file('/path/of/your/file') # disk - ['hello', 'world'] - >>> list_from_file('s3://path/of/your/file') # ceph or petrel - ['hello', 'world'] - - Returns: - list[str]: A list of strings. 
- """ - cnt = 0 - item_list = [] - file_client = FileClient.infer_client(file_client_args, filename) - with StringIO(file_client.get_text(filename, encoding)) as f: - for _ in range(offset): - f.readline() - for line in f: - if 0 < max_num <= cnt: - break - item_list.append(prefix + line.rstrip('\n\r')) - cnt += 1 - return item_list - - -def dict_from_file(filename, - key_type=str, - encoding='utf-8', - file_client_args=None): - """Load a text file and parse the content as a dict. - - Each line of the text file will be two or more columns split by - whitespaces or tabs. The first column will be parsed as dict keys, and - the following columns will be parsed as dict values. - - Note: - In v1.3.16 and later, ``dict_from_file`` supports loading a text file - which can be storaged in different backends and parsing the content as - a dict. - - Args: - filename(str): Filename. - key_type(type): Type of the dict keys. str is user by default and - type conversion will be performed if specified. - encoding (str): Encoding used to open the file. Default utf-8. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> dict_from_file('/path/of/your/file') # disk - {'key1': 'value1', 'key2': 'value2'} - >>> dict_from_file('s3://path/of/your/file') # ceph or petrel - {'key1': 'value1', 'key2': 'value2'} - - Returns: - dict: The parsed contents. - """ - mapping = {} - file_client = FileClient.infer_client(file_client_args, filename) - with StringIO(file_client.get_text(filename, encoding)) as f: - for line in f: - items = line.rstrip('\n').split() - assert len(items) >= 2 - key = key_type(items[0]) - val = items[1:] if len(items) > 2 else items[1] - mapping[key] = val - return mapping diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/image/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/image/__init__.py deleted file mode 100755 index d0051d60..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/image/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr, - gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert, - rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb) -from .geometric import (cutout, imcrop, imflip, imflip_, impad, - impad_to_multiple, imrescale, imresize, imresize_like, - imresize_to_multiple, imrotate, imshear, imtranslate, - rescale_size) -from .io import imfrombytes, imread, imwrite, supported_backends, use_backend -from .misc import tensor2imgs -from .photometric import (adjust_brightness, adjust_color, adjust_contrast, - adjust_lighting, adjust_sharpness, auto_contrast, - clahe, imdenormalize, imequalize, iminvert, - imnormalize, imnormalize_, lut_transform, posterize, - solarize) - -__all__ = [ - 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb', - 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale', - 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size', - 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate', - 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend', - 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize', - 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr', - 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize', - 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe', - 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/image/colorspace.py b/cv/super_resolution/basicvsr/pytorch/mmcv/image/colorspace.py deleted file mode 100755 index 81453395..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/image/colorspace.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np - - -def imconvert(img, src, dst): - """Convert an image from the src colorspace to dst colorspace. - - Args: - img (ndarray): The input image. - src (str): The source colorspace, e.g., 'rgb', 'hsv'. - dst (str): The destination colorspace, e.g., 'rgb', 'hsv'. - - Returns: - ndarray: The converted image. - """ - code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}') - out_img = cv2.cvtColor(img, code) - return out_img - - -def bgr2gray(img, keepdim=False): - """Convert a BGR image to grayscale image. - - Args: - img (ndarray): The input image. - keepdim (bool): If False (by default), then return the grayscale image - with 2 dims, otherwise 3 dims. - - Returns: - ndarray: The converted grayscale image. - """ - out_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - if keepdim: - out_img = out_img[..., None] - return out_img - - -def rgb2gray(img, keepdim=False): - """Convert a RGB image to grayscale image. - - Args: - img (ndarray): The input image. - keepdim (bool): If False (by default), then return the grayscale image - with 2 dims, otherwise 3 dims. - - Returns: - ndarray: The converted grayscale image. - """ - out_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) - if keepdim: - out_img = out_img[..., None] - return out_img - - -def gray2bgr(img): - """Convert a grayscale image to BGR image. - - Args: - img (ndarray): The input image. - - Returns: - ndarray: The converted BGR image. - """ - img = img[..., None] if img.ndim == 2 else img - out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - return out_img - - -def gray2rgb(img): - """Convert a grayscale image to RGB image. - - Args: - img (ndarray): The input image. - - Returns: - ndarray: The converted RGB image. 
- """ - img = img[..., None] if img.ndim == 2 else img - out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) - return out_img - - -def _convert_input_type_range(img): - """Convert the type and range of the input image. - - It converts the input image to np.float32 type and range of [0, 1]. - It is mainly used for pre-processing the input image in colorspace - conversion functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - (ndarray): The converted image with type of np.float32 and range of - [0, 1]. - """ - img_type = img.dtype - img = img.astype(np.float32) - if img_type == np.float32: - pass - elif img_type == np.uint8: - img /= 255. - else: - raise TypeError('The img type should be np.float32 or np.uint8, ' - f'but got {img_type}') - return img - - -def _convert_output_type_range(img, dst_type): - """Convert the type and range of the image according to dst_type. - - It converts the image to desired type and range. If `dst_type` is np.uint8, - images will be converted to np.uint8 type with range [0, 255]. If - `dst_type` is np.float32, it converts the image to np.float32 type with - range [0, 1]. - It is mainly used for post-processing images in colorspace conversion - functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The image to be converted with np.float32 type and - range [0, 255]. - dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it - converts the image to np.uint8 type with range [0, 255]. If - dst_type is np.float32, it converts the image to np.float32 type - with range [0, 1]. - - Returns: - (ndarray): The converted image with desired type and range. - """ - if dst_type not in (np.uint8, np.float32): - raise TypeError('The dst_type should be np.float32 or np.uint8, ' - f'but got {dst_type}') - if dst_type == np.uint8: - img = img.round() - else: - img /= 255. - return img.astype(dst_type) - - -def rgb2ycbcr(img, y_only=False): - """Convert a RGB image to YCbCr image. - - This function produces the same results as Matlab's `rgb2ycbcr` function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [65.481, 128.553, 24.966]) + 16.0 - else: - out_img = np.matmul( - img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def bgr2ycbcr(img, y_only=False): - """Convert a BGR image to YCbCr image. - - The bgr version of rgb2ycbcr. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`. 
- In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0 - else: - out_img = np.matmul( - img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2rgb(img): - """Convert a YCbCr image to RGB image. - - This function produces the same results as Matlab's ycbcr2rgb function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted RGB image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], - [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [ - -222.921, 135.576, -276.836 - ] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2bgr(img): - """Convert a YCbCr image to BGR image. - - The bgr version of ycbcr2rgb. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted BGR image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], - [0.00791071, -0.00153632, 0], - [0, -0.00318811, 0.00625893]]) * 255.0 + [ - -276.836, 135.576, -222.921 - ] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def convert_color_factory(src, dst): - - code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}') - - def convert_color(img): - out_img = cv2.cvtColor(img, code) - return out_img - - convert_color.__doc__ = f"""Convert a {src.upper()} image to {dst.upper()} - image. - - Args: - img (ndarray or str): The input image. - - Returns: - ndarray: The converted {dst.upper()} image. 
- """ - - return convert_color - - -bgr2rgb = convert_color_factory('bgr', 'rgb') - -rgb2bgr = convert_color_factory('rgb', 'bgr') - -bgr2hsv = convert_color_factory('bgr', 'hsv') - -hsv2bgr = convert_color_factory('hsv', 'bgr') - -bgr2hls = convert_color_factory('bgr', 'hls') - -hls2bgr = convert_color_factory('hls', 'bgr') diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/image/geometric.py b/cv/super_resolution/basicvsr/pytorch/mmcv/image/geometric.py deleted file mode 100755 index cf97c201..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/image/geometric.py +++ /dev/null @@ -1,728 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers - -import cv2 -import numpy as np - -from ..utils import to_2tuple -from .io import imread_backend - -try: - from PIL import Image -except ImportError: - Image = None - - -def _scale_size(size, scale): - """Rescale a size by a ratio. - - Args: - size (tuple[int]): (w, h). - scale (float | tuple(float)): Scaling factor. - - Returns: - tuple[int]: scaled size. - """ - if isinstance(scale, (float, int)): - scale = (scale, scale) - w, h = size - return int(w * float(scale[0]) + 0.5), int(h * float(scale[1]) + 0.5) - - -cv2_interp_codes = { - 'nearest': cv2.INTER_NEAREST, - 'bilinear': cv2.INTER_LINEAR, - 'bicubic': cv2.INTER_CUBIC, - 'area': cv2.INTER_AREA, - 'lanczos': cv2.INTER_LANCZOS4 -} - -if Image is not None: - pillow_interp_codes = { - 'nearest': Image.NEAREST, - 'bilinear': Image.BILINEAR, - 'bicubic': Image.BICUBIC, - 'box': Image.BOX, - 'lanczos': Image.LANCZOS, - 'hamming': Image.HAMMING - } - - -def imresize(img, - size, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image to a given size. - - Args: - img (ndarray): The input image. - size (tuple[int]): Target size (w, h). - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if backend is None: - backend = imread_backend - if backend not in ['cv2', 'pillow']: - raise ValueError(f'backend: {backend} is not supported for resize.' - f"Supported backends are 'cv2', 'pillow'") - - if backend == 'pillow': - assert img.dtype == np.uint8, 'Pillow backend only support uint8 type' - pil_image = Image.fromarray(img) - pil_image = pil_image.resize(size, pillow_interp_codes[interpolation]) - resized_img = np.array(pil_image) - else: - resized_img = cv2.resize( - img, size, dst=out, interpolation=cv2_interp_codes[interpolation]) - if not return_scale: - return resized_img - else: - w_scale = size[0] / w - h_scale = size[1] / h - return resized_img, w_scale, h_scale - - -def imresize_to_multiple(img, - divisor, - size=None, - scale_factor=None, - keep_ratio=False, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image according to a given size or scale factor and then rounds - up the the resized or rescaled image size to the nearest value that can be - divided by the divisor. - - Args: - img (ndarray): The input image. 
- divisor (int | tuple): Resized image size will be a multiple of - divisor. If divisor is a tuple, divisor should be - (w_divisor, h_divisor). - size (None | int | tuple[int]): Target size (w, h). Default: None. - scale_factor (None | float | tuple[float]): Multiplier for spatial - size. Should match input size if it is a tuple and the 2D style is - (w_scale_factor, h_scale_factor). Default: None. - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. Default: False. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if size is not None and scale_factor is not None: - raise ValueError('only one of size or scale_factor should be defined') - elif size is None and scale_factor is None: - raise ValueError('one of size or scale_factor should be defined') - elif size is not None: - size = to_2tuple(size) - if keep_ratio: - size = rescale_size((w, h), size, return_scale=False) - else: - size = _scale_size((w, h), scale_factor) - - divisor = to_2tuple(divisor) - size = tuple([int(np.ceil(s / d)) * d for s, d in zip(size, divisor)]) - resized_img, w_scale, h_scale = imresize( - img, - size, - return_scale=True, - interpolation=interpolation, - out=out, - backend=backend) - if return_scale: - return resized_img, w_scale, h_scale - else: - return resized_img - - -def imresize_like(img, - dst_img, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image to the same size of a given image. - - Args: - img (ndarray): The input image. - dst_img (ndarray): The target image. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = dst_img.shape[:2] - return imresize(img, (w, h), return_scale, interpolation, backend=backend) - - -def rescale_size(old_size, scale, return_scale=False): - """Calculate the new size to be rescaled to. - - Args: - old_size (tuple[int]): The old size (w, h) of image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image size. - - Returns: - tuple[int]: The new rescaled image size. 
- """ - w, h = old_size - if isinstance(scale, (float, int)): - if scale <= 0: - raise ValueError(f'Invalid scale {scale}, must be positive.') - scale_factor = scale - elif isinstance(scale, tuple): - max_long_edge = max(scale) - max_short_edge = min(scale) - scale_factor = min(max_long_edge / max(h, w), - max_short_edge / min(h, w)) - else: - raise TypeError( - f'Scale must be a number or tuple of int, but got {type(scale)}') - - new_size = _scale_size((w, h), scale_factor) - - if return_scale: - return new_size, scale_factor - else: - return new_size - - -def imrescale(img, - scale, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image while keeping the aspect ratio. - - Args: - img (ndarray): The input image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - ndarray: The rescaled image. - """ - h, w = img.shape[:2] - new_size, scale_factor = rescale_size((w, h), scale, return_scale=True) - rescaled_img = imresize( - img, new_size, interpolation=interpolation, backend=backend) - if return_scale: - return rescaled_img, scale_factor - else: - return rescaled_img - - -def imflip(img, direction='horizontal'): - """Flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image. - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return np.flip(img, axis=1) - elif direction == 'vertical': - return np.flip(img, axis=0) - else: - return np.flip(img, axis=(0, 1)) - - -def imflip_(img, direction='horizontal'): - """Inplace flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image (inplace). - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return cv2.flip(img, 1, img) - elif direction == 'vertical': - return cv2.flip(img, 0, img) - else: - return cv2.flip(img, -1, img) - - -def imrotate(img, - angle, - center=None, - scale=1.0, - border_value=0, - interpolation='bilinear', - auto_bound=False): - """Rotate an image. - - Args: - img (ndarray): Image to be rotated. - angle (float): Rotation angle in degrees, positive values mean - clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the rotation in - the source image. If not specified, the center of the image will be - used. - scale (float): Isotropic scale factor. - border_value (int): Border value. - interpolation (str): Same as :func:`resize`. - auto_bound (bool): Whether to adjust the image size to cover the whole - rotated image. - - Returns: - ndarray: The rotated image. 
- """ - if center is not None and auto_bound: - raise ValueError('`auto_bound` conflicts with `center`') - h, w = img.shape[:2] - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - assert isinstance(center, tuple) - - matrix = cv2.getRotationMatrix2D(center, -angle, scale) - if auto_bound: - cos = np.abs(matrix[0, 0]) - sin = np.abs(matrix[0, 1]) - new_w = h * sin + w * cos - new_h = h * cos + w * sin - matrix[0, 2] += (new_w - w) * 0.5 - matrix[1, 2] += (new_h - h) * 0.5 - w = int(np.round(new_w)) - h = int(np.round(new_h)) - rotated = cv2.warpAffine( - img, - matrix, (w, h), - flags=cv2_interp_codes[interpolation], - borderValue=border_value) - return rotated - - -def bbox_clip(bboxes, img_shape): - """Clip bboxes to fit the image shape. - - Args: - bboxes (ndarray): Shape (..., 4*k) - img_shape (tuple[int]): (height, width) of the image. - - Returns: - ndarray: Clipped bboxes. - """ - assert bboxes.shape[-1] % 4 == 0 - cmin = np.empty(bboxes.shape[-1], dtype=bboxes.dtype) - cmin[0::2] = img_shape[1] - 1 - cmin[1::2] = img_shape[0] - 1 - clipped_bboxes = np.maximum(np.minimum(bboxes, cmin), 0) - return clipped_bboxes - - -def bbox_scaling(bboxes, scale, clip_shape=None): - """Scaling bboxes w.r.t the box center. - - Args: - bboxes (ndarray): Shape(..., 4). - scale (float): Scaling factor. - clip_shape (tuple[int], optional): If specified, bboxes that exceed the - boundary will be clipped according to the given shape (h, w). - - Returns: - ndarray: Scaled bboxes. - """ - if float(scale) == 1.0: - scaled_bboxes = bboxes.copy() - else: - w = bboxes[..., 2] - bboxes[..., 0] + 1 - h = bboxes[..., 3] - bboxes[..., 1] + 1 - dw = (w * (scale - 1)) * 0.5 - dh = (h * (scale - 1)) * 0.5 - scaled_bboxes = bboxes + np.stack((-dw, -dh, dw, dh), axis=-1) - if clip_shape is not None: - return bbox_clip(scaled_bboxes, clip_shape) - else: - return scaled_bboxes - - -def imcrop(img, bboxes, scale=1.0, pad_fill=None): - """Crop image patches. - - 3 steps: scale the bboxes -> clip bboxes -> crop and pad. - - Args: - img (ndarray): Image to be cropped. - bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes. - scale (float, optional): Scale ratio of bboxes, the default value - 1.0 means no padding. - pad_fill (Number | list[Number]): Value to be filled for padding. - Default: None, which means no padding. - - Returns: - list[ndarray] | ndarray: The cropped image patches. - """ - chn = 1 if img.ndim == 2 else img.shape[2] - if pad_fill is not None: - if isinstance(pad_fill, (int, float)): - pad_fill = [pad_fill for _ in range(chn)] - assert len(pad_fill) == chn - - _bboxes = bboxes[None, ...] if bboxes.ndim == 1 else bboxes - scaled_bboxes = bbox_scaling(_bboxes, scale).astype(np.int32) - clipped_bbox = bbox_clip(scaled_bboxes, img.shape) - - patches = [] - for i in range(clipped_bbox.shape[0]): - x1, y1, x2, y2 = tuple(clipped_bbox[i, :]) - if pad_fill is None: - patch = img[y1:y2 + 1, x1:x2 + 1, ...] - else: - _x1, _y1, _x2, _y2 = tuple(scaled_bboxes[i, :]) - if chn == 1: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1) - else: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1, chn) - patch = np.array( - pad_fill, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - x_start = 0 if _x1 >= 0 else -_x1 - y_start = 0 if _y1 >= 0 else -_y1 - w = x2 - x1 + 1 - h = y2 - y1 + 1 - patch[y_start:y_start + h, x_start:x_start + w, - ...] = img[y1:y1 + h, x1:x1 + w, ...] 
- patches.append(patch) - - if bboxes.ndim == 1: - return patches[0] - else: - return patches - - -def impad(img, - *, - shape=None, - padding=None, - pad_val=0, - padding_mode='constant'): - """Pad the given image to a certain shape or pad on all sides with - specified padding mode and padding value. - - Args: - img (ndarray): Image to be padded. - shape (tuple[int]): Expected padding shape (h, w). Default: None. - padding (int or tuple[int]): Padding on each border. If a single int is - provided this is used to pad all borders. If tuple of length 2 is - provided this is the padding on left/right and top/bottom - respectively. If a tuple of length 4 is provided this is the - padding for the left, top, right and bottom borders respectively. - Default: None. Note that `shape` and `padding` can not be both - set. - pad_val (Number | Sequence[Number]): Values to be filled in padding - areas when padding_mode is 'constant'. Default: 0. - padding_mode (str): Type of padding. Should be: constant, edge, - reflect or symmetric. Default: constant. - - - constant: pads with a constant value, this value is specified - with pad_val. - - edge: pads with the last value at the edge of the image. - - reflect: pads with reflection of image without repeating the - last value on the edge. For example, padding [1, 2, 3, 4] - with 2 elements on both sides in reflect mode will result - in [3, 2, 1, 2, 3, 4, 3, 2]. - - symmetric: pads with reflection of image repeating the last - value on the edge. For example, padding [1, 2, 3, 4] with - 2 elements on both sides in symmetric mode will result in - [2, 1, 1, 2, 3, 4, 4, 3] - - Returns: - ndarray: The padded image. - """ - - assert (shape is not None) ^ (padding is not None) - if shape is not None: - padding = (0, 0, shape[1] - img.shape[1], shape[0] - img.shape[0]) - - # check pad_val - if isinstance(pad_val, tuple): - assert len(pad_val) == img.shape[-1] - elif not isinstance(pad_val, numbers.Number): - raise TypeError('pad_val must be a int or a tuple. ' - f'But received {type(pad_val)}') - - # check padding - if isinstance(padding, tuple) and len(padding) in [2, 4]: - if len(padding) == 2: - padding = (padding[0], padding[1], padding[0], padding[1]) - elif isinstance(padding, numbers.Number): - padding = (padding, padding, padding, padding) - else: - raise ValueError('Padding must be a int or a 2, or 4 element tuple.' - f'But received {padding}') - - # check padding mode - assert padding_mode in ['constant', 'edge', 'reflect', 'symmetric'] - - border_type = { - 'constant': cv2.BORDER_CONSTANT, - 'edge': cv2.BORDER_REPLICATE, - 'reflect': cv2.BORDER_REFLECT_101, - 'symmetric': cv2.BORDER_REFLECT - } - img = cv2.copyMakeBorder( - img, - padding[1], - padding[3], - padding[0], - padding[2], - border_type[padding_mode], - value=pad_val) - - return img - - -def impad_to_multiple(img, divisor, pad_val=0): - """Pad an image to ensure each edge to be multiple to some number. - - Args: - img (ndarray): Image to be padded. - divisor (int): Padded image edges will be multiple to divisor. - pad_val (Number | Sequence[Number]): Same as :func:`impad`. - - Returns: - ndarray: The padded image. - """ - pad_h = int(np.ceil(img.shape[0] / divisor)) * divisor - pad_w = int(np.ceil(img.shape[1] / divisor)) * divisor - return impad(img, shape=(pad_h, pad_w), pad_val=pad_val) - - -def cutout(img, shape, pad_val=0): - """Randomly cut out a rectangle from the original img. - - Args: - img (ndarray): Image to be cutout. - shape (int | tuple[int]): Expected cutout shape (h, w). 
If given as a - int, the value will be used for both h and w. - pad_val (int | float | tuple[int | float]): Values to be filled in the - cut area. Defaults to 0. - - Returns: - ndarray: The cutout image. - """ - - channels = 1 if img.ndim == 2 else img.shape[2] - if isinstance(shape, int): - cut_h, cut_w = shape, shape - else: - assert isinstance(shape, tuple) and len(shape) == 2, \ - f'shape must be a int or a tuple with length 2, but got type ' \ - f'{type(shape)} instead.' - cut_h, cut_w = shape - if isinstance(pad_val, (int, float)): - pad_val = tuple([pad_val] * channels) - elif isinstance(pad_val, tuple): - assert len(pad_val) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(pad_val), channels) - else: - raise TypeError(f'Invalid type {type(pad_val)} for `pad_val`') - - img_h, img_w = img.shape[:2] - y0 = np.random.uniform(img_h) - x0 = np.random.uniform(img_w) - - y1 = int(max(0, y0 - cut_h / 2.)) - x1 = int(max(0, x0 - cut_w / 2.)) - y2 = min(img_h, y1 + cut_h) - x2 = min(img_w, x1 + cut_w) - - if img.ndim == 2: - patch_shape = (y2 - y1, x2 - x1) - else: - patch_shape = (y2 - y1, x2 - x1, channels) - - img_cutout = img.copy() - patch = np.array( - pad_val, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - img_cutout[y1:y2, x1:x2, ...] = patch - - return img_cutout - - -def _get_shear_matrix(magnitude, direction='horizontal'): - """Generate the shear matrix for transformation. - - Args: - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - - Returns: - ndarray: The shear matrix with dtype float32. - """ - if direction == 'horizontal': - shear_matrix = np.float32([[1, magnitude, 0], [0, 1, 0]]) - elif direction == 'vertical': - shear_matrix = np.float32([[1, 0, 0], [magnitude, 1, 0]]) - return shear_matrix - - -def imshear(img, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear an image. - - Args: - img (ndarray): Image to be sheared with format (h, w) - or (h, w, c). - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The sheared image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`') - shear_matrix = _get_shear_matrix(magnitude, direction) - sheared = cv2.warpAffine( - img, - shear_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. shearing masks whose channels large - # than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. 
- borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return sheared - - -def _get_translate_matrix(offset, direction='horizontal'): - """Generate the translate matrix. - - Args: - offset (int | float): The offset used for translate. - direction (str): The translate direction, either - "horizontal" or "vertical". - - Returns: - ndarray: The translate matrix with dtype float32. - """ - if direction == 'horizontal': - translate_matrix = np.float32([[1, 0, offset], [0, 1, 0]]) - elif direction == 'vertical': - translate_matrix = np.float32([[1, 0, 0], [0, 1, offset]]) - return translate_matrix - - -def imtranslate(img, - offset, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Translate an image. - - Args: - img (ndarray): Image to be translated with format - (h, w) or (h, w, c). - offset (int | float): The offset used for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The translated image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`.') - translate_matrix = _get_translate_matrix(offset, direction) - translated = cv2.warpAffine( - img, - translate_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. translating masks whose channels - # large than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return translated diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/image/io.py b/cv/super_resolution/basicvsr/pytorch/mmcv/image/io.py deleted file mode 100755 index d47aaa84..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/image/io.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os.path as osp -from pathlib import Path - -import cv2 -import numpy as np -from cv2 import (IMREAD_COLOR, IMREAD_GRAYSCALE, IMREAD_IGNORE_ORIENTATION, - IMREAD_UNCHANGED) - -from mmcv.utils import check_file_exist, is_str, mkdir_or_exist - -try: - from turbojpeg import TJCS_RGB, TJPF_BGR, TJPF_GRAY, TurboJPEG -except ImportError: - TJCS_RGB = TJPF_GRAY = TJPF_BGR = TurboJPEG = None - -try: - from PIL import Image, ImageOps -except ImportError: - Image = None - -try: - import tifffile -except ImportError: - tifffile = None - -jpeg = None -supported_backends = ['cv2', 'turbojpeg', 'pillow', 'tifffile'] - -imread_flags = { - 'color': IMREAD_COLOR, - 'grayscale': IMREAD_GRAYSCALE, - 'unchanged': IMREAD_UNCHANGED, - 'color_ignore_orientation': IMREAD_IGNORE_ORIENTATION | IMREAD_COLOR, - 'grayscale_ignore_orientation': - IMREAD_IGNORE_ORIENTATION | IMREAD_GRAYSCALE -} - -imread_backend = 'cv2' - - -def use_backend(backend): - """Select a backend for image decoding. 
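
Both `imshear` and `imtranslate` above come down to building a 2x3 affine matrix and handing it to `cv2.warpAffine`. A sketch of the horizontal-translate case, with an illustrative image and offset:

```python
import cv2
import numpy as np

img = np.zeros((240, 320, 3), dtype=np.uint8)    # hypothetical image
offset = 25                                       # shift 25 px to the right
matrix = np.float32([[1, 0, offset], [0, 1, 0]])  # horizontal translate matrix
h, w = img.shape[:2]
translated = cv2.warpAffine(img, matrix, (w, h),
                            flags=cv2.INTER_LINEAR,
                            borderValue=(0, 0, 0))
```
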
- - Args: - backend (str): The image decoding backend type. Options are `cv2`, - `pillow`, `turbojpeg` (see https://github.com/lilohuang/PyTurboJPEG) - and `tifffile`. `turbojpeg` is faster but it only supports `.jpeg` - file format. - """ - assert backend in supported_backends - global imread_backend - imread_backend = backend - if imread_backend == 'turbojpeg': - if TurboJPEG is None: - raise ImportError('`PyTurboJPEG` is not installed') - global jpeg - if jpeg is None: - jpeg = TurboJPEG() - elif imread_backend == 'pillow': - if Image is None: - raise ImportError('`Pillow` is not installed') - elif imread_backend == 'tifffile': - if tifffile is None: - raise ImportError('`tifffile` is not installed') - - -def _jpegflag(flag='color', channel_order='bgr'): - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'color': - if channel_order == 'bgr': - return TJPF_BGR - elif channel_order == 'rgb': - return TJCS_RGB - elif flag == 'grayscale': - return TJPF_GRAY - else: - raise ValueError('flag must be "color" or "grayscale"') - - -def _pillow2array(img, flag='color', channel_order='bgr'): - """Convert a pillow image to numpy array. - - Args: - img (:obj:`PIL.Image.Image`): The image loaded using PIL - flag (str): Flags specifying the color type of a loaded image, - candidates are 'color', 'grayscale' and 'unchanged'. - Default to 'color'. - channel_order (str): The channel order of the output image array, - candidates are 'bgr' and 'rgb'. Default to 'bgr'. - - Returns: - np.ndarray: The converted numpy array - """ - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'unchanged': - array = np.array(img) - if array.ndim >= 3 and array.shape[2] >= 3: # color image - array[:, :, :3] = array[:, :, (2, 1, 0)] # RGB to BGR - else: - # Handle exif orientation tag - if flag in ['color', 'grayscale']: - img = ImageOps.exif_transpose(img) - # If the image mode is not 'RGB', convert it to 'RGB' first. - if img.mode != 'RGB': - if img.mode != 'LA': - # Most formats except 'LA' can be directly converted to RGB - img = img.convert('RGB') - else: - # When the mode is 'LA', the default conversion will fill in - # the canvas with black, which sometimes shadows black objects - # in the foreground. - # - # Therefore, a random color (124, 117, 104) is used for canvas - img_rgba = img.convert('RGBA') - img = Image.new('RGB', img_rgba.size, (124, 117, 104)) - img.paste(img_rgba, mask=img_rgba.split()[3]) # 3 is alpha - if flag in ['color', 'color_ignore_orientation']: - array = np.array(img) - if channel_order != 'rgb': - array = array[:, :, ::-1] # RGB to BGR - elif flag in ['grayscale', 'grayscale_ignore_orientation']: - img = img.convert('L') - array = np.array(img) - else: - raise ValueError( - 'flag must be "color", "grayscale", "unchanged", ' - f'"color_ignore_orientation" or "grayscale_ignore_orientation"' - f' but got {flag}') - return array - - -def imread(img_or_path, flag='color', channel_order='bgr', backend=None): - """Read an image. - - Args: - img_or_path (ndarray or str or Path): Either a numpy array or str or - pathlib.Path. If it is a numpy array (loaded image), then - it will be returned as is. - flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale`, `unchanged`, - `color_ignore_orientation` and `grayscale_ignore_orientation`. 
- By default, `cv2` and `pillow` backend would rotate the image - according to its EXIF info unless called with `unchanged` or - `*_ignore_orientation` flags. `turbojpeg` and `tifffile` backend - always ignore image's EXIF info regardless of the flag. - The `turbojpeg` backend only supports `color` and `grayscale`. - channel_order (str): Order of channel, candidates are `bgr` and `rgb`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `tifffile`, `None`. - If backend is None, the global imread_backend specified by - ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if isinstance(img_or_path, Path): - img_or_path = str(img_or_path) - - if isinstance(img_or_path, np.ndarray): - return img_or_path - elif is_str(img_or_path): - check_file_exist(img_or_path, - f'img file does not exist: {img_or_path}') - if backend == 'turbojpeg': - with open(img_or_path, 'rb') as in_file: - img = jpeg.decode(in_file.read(), - _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - img = Image.open(img_or_path) - img = _pillow2array(img, flag, channel_order) - return img - elif backend == 'tifffile': - img = tifffile.imread(img_or_path) - return img - else: - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imread(img_or_path, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - else: - raise TypeError('"img" must be a numpy array or a str or ' - 'a pathlib.Path object') - - -def imfrombytes(content, flag='color', channel_order='bgr', backend=None): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. - flag (str): Same as :func:`imread`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `None`. If backend is None, the - global imread_backend specified by ``mmcv.use_backend()`` will be - used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if backend == 'turbojpeg': - img = jpeg.decode(content, _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - buff = io.BytesIO(content) - img = Image.open(buff) - img = _pillow2array(img, flag, channel_order) - return img - else: - img_np = np.frombuffer(content, np.uint8) - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imdecode(img_np, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. 
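
The cv2 fallback branch of `imfrombytes` above can be reproduced directly with OpenCV. A sketch, assuming the bytes come from a hypothetical `demo.jpg`:

```python
import cv2
import numpy as np

with open('demo.jpg', 'rb') as f:                # hypothetical source of raw bytes
    content = f.read()
img_np = np.frombuffer(content, np.uint8)
img = cv2.imdecode(img_np, cv2.IMREAD_COLOR)     # decoded as BGR, H x W x 3
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # convert only if RGB order is wanted
```
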
- """ - if auto_mkdir: - dir_name = osp.abspath(osp.dirname(file_path)) - mkdir_or_exist(dir_name) - return cv2.imwrite(file_path, img, params) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/image/misc.py b/cv/super_resolution/basicvsr/pytorch/mmcv/image/misc.py deleted file mode 100755 index dfc4a9c6..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/image/misc.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - -import mmcv - -try: - import torch -except ImportError: - torch = None - - -def tensor2imgs(tensor, mean=(0, 0, 0), std=(1, 1, 1), to_rgb=True): - """Convert tensor to 3-channel images. - - Args: - tensor (torch.Tensor): Tensor that contains multiple images, shape ( - N, C, H, W). - mean (tuple[float], optional): Mean of images. Defaults to (0, 0, 0). - std (tuple[float], optional): Standard deviation of images. - Defaults to (1, 1, 1). - to_rgb (bool, optional): Whether the tensor was converted to RGB - format in the first place. If so, convert it back to BGR. - Defaults to True. - - Returns: - list[np.ndarray]: A list that contains multiple images. - """ - - if torch is None: - raise RuntimeError('pytorch is not installed') - assert torch.is_tensor(tensor) and tensor.ndim == 4 - assert len(mean) == 3 - assert len(std) == 3 - - num_imgs = tensor.size(0) - mean = np.array(mean, dtype=np.float32) - std = np.array(std, dtype=np.float32) - imgs = [] - for img_id in range(num_imgs): - img = tensor[img_id, ...].cpu().numpy().transpose(1, 2, 0) - img = mmcv.imdenormalize( - img, mean, std, to_bgr=to_rgb).astype(np.uint8) - imgs.append(np.ascontiguousarray(img)) - return imgs diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/image/photometric.py b/cv/super_resolution/basicvsr/pytorch/mmcv/image/photometric.py deleted file mode 100755 index 5085d012..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/image/photometric.py +++ /dev/null @@ -1,428 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np - -from ..utils import is_tuple_of -from .colorspace import bgr2gray, gray2bgr - - -def imnormalize(img, mean, std, to_rgb=True): - """Normalize an image with mean and std. - - Args: - img (ndarray): Image to be normalized. - mean (ndarray): The mean to be used for normalize. - std (ndarray): The std to be used for normalize. - to_rgb (bool): Whether to convert to rgb. - - Returns: - ndarray: The normalized image. - """ - img = img.copy().astype(np.float32) - return imnormalize_(img, mean, std, to_rgb) - - -def imnormalize_(img, mean, std, to_rgb=True): - """Inplace normalize an image with mean and std. - - Args: - img (ndarray): Image to be normalized. - mean (ndarray): The mean to be used for normalize. - std (ndarray): The std to be used for normalize. - to_rgb (bool): Whether to convert to rgb. - - Returns: - ndarray: The normalized image. 
- """ - # cv2 inplace normalization does not accept uint8 - assert img.dtype != np.uint8 - mean = np.float64(mean.reshape(1, -1)) - stdinv = 1 / np.float64(std.reshape(1, -1)) - if to_rgb: - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) # inplace - cv2.subtract(img, mean, img) # inplace - cv2.multiply(img, stdinv, img) # inplace - return img - - -def imdenormalize(img, mean, std, to_bgr=True): - assert img.dtype != np.uint8 - mean = mean.reshape(1, -1).astype(np.float64) - std = std.reshape(1, -1).astype(np.float64) - img = cv2.multiply(img, std) # make a copy - cv2.add(img, mean, img) # inplace - if to_bgr: - cv2.cvtColor(img, cv2.COLOR_RGB2BGR, img) # inplace - return img - - -def iminvert(img): - """Invert (negate) an image. - - Args: - img (ndarray): Image to be inverted. - - Returns: - ndarray: The inverted image. - """ - return np.full_like(img, 255) - img - - -def solarize(img, thr=128): - """Solarize an image (invert all pixel values above a threshold) - - Args: - img (ndarray): Image to be solarized. - thr (int): Threshold for solarizing (0 - 255). - - Returns: - ndarray: The solarized image. - """ - img = np.where(img < thr, img, 255 - img) - return img - - -def posterize(img, bits): - """Posterize an image (reduce the number of bits for each color channel) - - Args: - img (ndarray): Image to be posterized. - bits (int): Number of bits (1 to 8) to use for posterizing. - - Returns: - ndarray: The posterized image. - """ - shift = 8 - bits - img = np.left_shift(np.right_shift(img, shift), shift) - return img - - -def adjust_color(img, alpha=1, beta=None, gamma=0): - r"""It blends the source image and its gray image: - - .. math:: - output = img * alpha + gray\_img * beta + gamma - - Args: - img (ndarray): The input source image. - alpha (int | float): Weight for the source image. Default 1. - beta (int | float): Weight for the converted gray image. - If None, it's assigned the value (1 - `alpha`). - gamma (int | float): Scalar added to each sum. - Same as :func:`cv2.addWeighted`. Default 0. - - Returns: - ndarray: Colored image which has the same size and dtype as input. - """ - gray_img = bgr2gray(img) - gray_img = np.tile(gray_img[..., None], [1, 1, 3]) - if beta is None: - beta = 1 - alpha - colored_img = cv2.addWeighted(img, alpha, gray_img, beta, gamma) - if not colored_img.dtype == np.uint8: - # Note when the dtype of `img` is not the default `np.uint8` - # (e.g. np.float32), the value in `colored_img` got from cv2 - # is not guaranteed to be in range [0, 255], so here clip - # is needed. - colored_img = np.clip(colored_img, 0, 255) - return colored_img - - -def imequalize(img): - """Equalize the image histogram. - - This function applies a non-linear mapping to the input image, - in order to create a uniform distribution of grayscale values - in the output image. - - Args: - img (ndarray): Image to be equalized. - - Returns: - ndarray: The equalized image. - """ - - def _scale_channel(im, c): - """Scale the data in the corresponding channel.""" - im = im[:, :, c] - # Compute the histogram of the image channel. - histo = np.histogram(im, 256, (0, 255))[0] - # For computing the step, filter out the nonzeros. - nonzero_histo = histo[histo > 0] - step = (np.sum(nonzero_histo) - nonzero_histo[-1]) // 255 - if not step: - lut = np.array(range(256)) - else: - # Compute the cumulative sum, shifted by step // 2 - # and then normalized by step. - lut = (np.cumsum(histo) + (step // 2)) // step - # Shift lut, prepending with 0. 
- lut = np.concatenate([[0], lut[:-1]], 0) - # handle potential integer overflow - lut[lut > 255] = 255 - # If step is zero, return the original image. - # Otherwise, index from lut. - return np.where(np.equal(step, 0), im, lut[im]) - - # Scales each channel independently and then stacks - # the result. - s1 = _scale_channel(img, 0) - s2 = _scale_channel(img, 1) - s3 = _scale_channel(img, 2) - equalized_img = np.stack([s1, s2, s3], axis=-1) - return equalized_img.astype(img.dtype) - - -def adjust_brightness(img, factor=1.): - """Adjust image brightness. - - This function controls the brightness of an image. An - enhancement factor of 0.0 gives a black image. - A factor of 1.0 gives the original image. This function - blends the source image and the degenerated black image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be brightened. - factor (float): A value controls the enhancement. - Factor 1.0 returns the original image, lower - factors mean less color (brightness, contrast, - etc), and higher values more. Default 1. - - Returns: - ndarray: The brightened image. - """ - degenerated = np.zeros_like(img) - # Note manually convert the dtype to np.float32, to - # achieve as close results as PIL.ImageEnhance.Brightness. - # Set beta=1-factor, and gamma=0 - brightened_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - brightened_img = np.clip(brightened_img, 0, 255) - return brightened_img.astype(img.dtype) - - -def adjust_contrast(img, factor=1.): - """Adjust image contrast. - - This function controls the contrast of an image. An - enhancement factor of 0.0 gives a solid grey - image. A factor of 1.0 gives the original image. It - blends the source image and the degenerated mean image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be contrasted. BGR order. - factor (float): Same as :func:`mmcv.adjust_brightness`. - - Returns: - ndarray: The contrasted image. - """ - gray_img = bgr2gray(img) - hist = np.histogram(gray_img, 256, (0, 255))[0] - mean = round(np.sum(gray_img) / np.sum(hist)) - degenerated = (np.ones_like(img[..., 0]) * mean).astype(img.dtype) - degenerated = gray2bgr(degenerated) - contrasted_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - contrasted_img = np.clip(contrasted_img, 0, 255) - return contrasted_img.astype(img.dtype) - - -def auto_contrast(img, cutoff=0): - """Auto adjust image contrast. - - This function maximize (normalize) image contrast by first removing cutoff - percent of the lightest and darkest pixels from the histogram and remapping - the image so that the darkest pixel becomes black (0), and the lightest - becomes white (255). - - Args: - img (ndarray): Image to be contrasted. BGR order. - cutoff (int | float | tuple): The cutoff percent of the lightest and - darkest pixels to be removed. If given as tuple, it shall be - (low, high). Otherwise, the single value will be used for both. - Defaults to 0. - - Returns: - ndarray: The contrasted image. - """ - - def _auto_contrast_channel(im, c, cutoff): - im = im[:, :, c] - # Compute the histogram of the image channel. 
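
`adjust_brightness` and `adjust_contrast` above share the same blend-with-a-degenerated-image pattern. A NumPy sketch of the brightness case, with an illustrative factor and a synthetic input:

```python
import numpy as np

img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)   # synthetic image
factor = 1.5                                   # >1 brightens, <1 darkens
degenerated = np.zeros_like(img, dtype=np.float32)             # all-black image
out = img.astype(np.float32) * factor + degenerated * (1 - factor)
out = np.clip(out, 0, 255).astype(img.dtype)
```
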
- histo = np.histogram(im, 256, (0, 255))[0] - # Remove cut-off percent pixels from histo - histo_sum = np.cumsum(histo) - cut_low = histo_sum[-1] * cutoff[0] // 100 - cut_high = histo_sum[-1] - histo_sum[-1] * cutoff[1] // 100 - histo_sum = np.clip(histo_sum, cut_low, cut_high) - cut_low - histo = np.concatenate([[histo_sum[0]], np.diff(histo_sum)], 0) - - # Compute mapping - low, high = np.nonzero(histo)[0][0], np.nonzero(histo)[0][-1] - # If all the values have been cut off, return the origin img - if low >= high: - return im - scale = 255.0 / (high - low) - offset = -low * scale - lut = np.array(range(256)) - lut = lut * scale + offset - lut = np.clip(lut, 0, 255) - return lut[im] - - if isinstance(cutoff, (int, float)): - cutoff = (cutoff, cutoff) - else: - assert isinstance(cutoff, tuple), 'cutoff must be of type int, ' \ - f'float or tuple, but got {type(cutoff)} instead.' - # Auto adjusts contrast for each channel independently and then stacks - # the result. - s1 = _auto_contrast_channel(img, 0, cutoff) - s2 = _auto_contrast_channel(img, 1, cutoff) - s3 = _auto_contrast_channel(img, 2, cutoff) - contrasted_img = np.stack([s1, s2, s3], axis=-1) - return contrasted_img.astype(img.dtype) - - -def adjust_sharpness(img, factor=1., kernel=None): - """Adjust image sharpness. - - This function controls the sharpness of an image. An - enhancement factor of 0.0 gives a blurred image. A - factor of 1.0 gives the original image. And a factor - of 2.0 gives a sharpened image. It blends the source - image and the degenerated mean image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be sharpened. BGR order. - factor (float): Same as :func:`mmcv.adjust_brightness`. - kernel (np.ndarray, optional): Filter kernel to be applied on the img - to obtain the degenerated img. Defaults to None. - - Note: - No value sanity check is enforced on the kernel set by users. So with - an inappropriate kernel, the ``adjust_sharpness`` may fail to perform - the function its name indicates but end up performing whatever - transform determined by the kernel. - - Returns: - ndarray: The sharpened image. - """ - - if kernel is None: - # adopted from PIL.ImageFilter.SMOOTH - kernel = np.array([[1., 1., 1.], [1., 5., 1.], [1., 1., 1.]]) / 13 - assert isinstance(kernel, np.ndarray), \ - f'kernel must be of type np.ndarray, but got {type(kernel)} instead.' - assert kernel.ndim == 2, \ - f'kernel must have a dimension of 2, but got {kernel.ndim} instead.' - - degenerated = cv2.filter2D(img, -1, kernel) - sharpened_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - sharpened_img = np.clip(sharpened_img, 0, 255) - return sharpened_img.astype(img.dtype) - - -def adjust_lighting(img, eigval, eigvec, alphastd=0.1, to_rgb=True): - """AlexNet-style PCA jitter. - - This data augmentation is proposed in `ImageNet Classification with Deep - Convolutional Neural Networks - `_. - - Args: - img (ndarray): Image to be adjusted lighting. BGR order. - eigval (ndarray): the eigenvalue of the convariance matrix of pixel - values, respectively. - eigvec (ndarray): the eigenvector of the convariance matrix of pixel - values, respectively. - alphastd (float): The standard deviation for distribution of alpha. - Defaults to 0.1 - to_rgb (bool): Whether to convert img to rgb. - - Returns: - ndarray: The adjusted image. 
- """ - assert isinstance(eigval, np.ndarray) and isinstance(eigvec, np.ndarray), \ - f'eigval and eigvec should both be of type np.ndarray, got ' \ - f'{type(eigval)} and {type(eigvec)} instead.' - - assert eigval.ndim == 1 and eigvec.ndim == 2 - assert eigvec.shape == (3, eigval.shape[0]) - n_eigval = eigval.shape[0] - assert isinstance(alphastd, float), 'alphastd should be of type float, ' \ - f'got {type(alphastd)} instead.' - - img = img.copy().astype(np.float32) - if to_rgb: - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) # inplace - - alpha = np.random.normal(0, alphastd, n_eigval) - alter = eigvec \ - * np.broadcast_to(alpha.reshape(1, n_eigval), (3, n_eigval)) \ - * np.broadcast_to(eigval.reshape(1, n_eigval), (3, n_eigval)) - alter = np.broadcast_to(alter.sum(axis=1).reshape(1, 1, 3), img.shape) - img_adjusted = img + alter - return img_adjusted - - -def lut_transform(img, lut_table): - """Transform array by look-up table. - - The function lut_transform fills the output array with values from the - look-up table. Indices of the entries are taken from the input array. - - Args: - img (ndarray): Image to be transformed. - lut_table (ndarray): look-up table of 256 elements; in case of - multi-channel input array, the table should either have a single - channel (in this case the same table is used for all channels) or - the same number of channels as in the input array. - - Returns: - ndarray: The transformed image. - """ - assert isinstance(img, np.ndarray) - assert 0 <= np.min(img) and np.max(img) <= 255 - assert isinstance(lut_table, np.ndarray) - assert lut_table.shape == (256, ) - - return cv2.LUT(np.array(img, dtype=np.uint8), lut_table) - - -def clahe(img, clip_limit=40.0, tile_grid_size=(8, 8)): - """Use CLAHE method to process the image. - - See `ZUIDERVELD,K. Contrast Limited Adaptive Histogram Equalization[J]. - Graphics Gems, 1994:474-485.` for more information. - - Args: - img (ndarray): Image to be processed. - clip_limit (float): Threshold for contrast limiting. Default: 40.0. - tile_grid_size (tuple[int]): Size of grid for histogram equalization. - Input image will be divided into equally sized rectangular tiles. - It defines the number of tiles in row and column. Default: (8, 8). - - Returns: - ndarray: The processed image. - """ - assert isinstance(img, np.ndarray) - assert img.ndim == 2 - assert isinstance(clip_limit, (float, int)) - assert is_tuple_of(tile_grid_size, int) - assert len(tile_grid_size) == 2 - - clahe = cv2.createCLAHE(clip_limit, tile_grid_size) - return clahe.apply(np.array(img, dtype=np.uint8)) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/__init__.py deleted file mode 100755 index 8976a665..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .deprecated_wrappers import Conv2d_deprecated as Conv2d -from .deprecated_wrappers import ConvTranspose2d_deprecated as ConvTranspose2d -from .deprecated_wrappers import Linear_deprecated as Linear -from .deprecated_wrappers import MaxPool2d_deprecated as MaxPool2d -from .info import (get_compiler_version, get_compiling_cuda_version, - get_onnxruntime_op_path) - -from .modulated_deform_conv import (ModulatedDeformConv2d, - ModulatedDeformConv2dPack, - modulated_deform_conv2d) - -from .sync_bn import SyncBatchNorm - - diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp deleted file mode 100755 index a1e926ad..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/common_cuda_helper.hpp +++ /dev/null @@ -1,112 +0,0 @@ -#ifndef COMMON_CUDA_HELPER -#define COMMON_CUDA_HELPER - -#include - -#define CUDA_1D_KERNEL_LOOP(i, n) \ - for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < (n); \ - i += blockDim.x * gridDim.x) - -#define THREADS_PER_BLOCK 512 - -#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0)) - -inline int GET_BLOCKS(const int N) { - int optimal_block_num = (N + THREADS_PER_BLOCK - 1) / THREADS_PER_BLOCK; - int max_block_num = 4096; - return std::min(optimal_block_num, max_block_num); -} - -template -__device__ T bilinear_interpolate(const T* input, const int height, - const int width, T y, T x, - const int index /* index for debug only*/) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) return 0; - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - int y_low = (int)y; - int x_low = (int)x; - int y_high; - int x_high; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. - lx; - // do bilinear interpolation - T v1 = input[y_low * width + x_low]; - T v2 = input[y_low * width + x_high]; - T v3 = input[y_high * width + x_low]; - T v4 = input[y_high * width + x_high]; - T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - - return val; -} - -template -__device__ void bilinear_interpolate_gradient( - const int height, const int width, T y, T x, T& w1, T& w2, T& w3, T& w4, - int& x_low, int& x_high, int& y_low, int& y_high, - const int index /* index for debug only*/) { - // deal with cases that inverse elements are out of feature map boundary - if (y < -1.0 || y > height || x < -1.0 || x > width) { - // empty - w1 = w2 = w3 = w4 = 0.; - x_low = x_high = y_low = y_high = -1; - return; - } - - if (y <= 0) y = 0; - if (x <= 0) x = 0; - - y_low = (int)y; - x_low = (int)x; - - if (y_low >= height - 1) { - y_high = y_low = height - 1; - y = (T)y_low; - } else { - y_high = y_low + 1; - } - - if (x_low >= width - 1) { - x_high = x_low = width - 1; - x = (T)x_low; - } else { - x_high = x_low + 1; - } - - T ly = y - y_low; - T lx = x - x_low; - T hy = 1. - ly, hx = 1. 
- lx; - - // reference in forward - // T v1 = input[y_low * width + x_low]; - // T v2 = input[y_low * width + x_high]; - // T v3 = input[y_high * width + x_low]; - // T v4 = input[y_high * width + x_high]; - // T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - - w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx; - - return; -} -#endif // COMMON_CUDA_HELPER diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/modulated_deform_conv_cuda_kernel.cuh b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/modulated_deform_conv_cuda_kernel.cuh deleted file mode 100755 index bd03df26..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/modulated_deform_conv_cuda_kernel.cuh +++ /dev/null @@ -1,400 +0,0 @@ -// Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -/*! - ******************* BEGIN Caffe Copyright Notice and Disclaimer - ***************** - * - * COPYRIGHT - * - * All contributions by the University of California: - * Copyright (c) 2014-2017 The Regents of the University of California (Regents) - * All rights reserved. - * - * All other contributions: - * Copyright (c) 2014-2017, the respective contributors - * All rights reserved. - * - * Caffe uses a shared copyright model: each contributor holds copyright over - * their contributions to Caffe. The project versioning records all such - * contribution and copyright details. If a contributor wants to further mark - * their specific copyright on a particular contribution, they should indicate - * their copyright solely in the commit message of the change when it is - * committed. - * - * LICENSE - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * - * 1. Redistributions of source code must retain the above copyright notice, - *this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright notice, - * this list of conditions and the following disclaimer in the documentation - * and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - *AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - *IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE - * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE - *FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - *DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR - *SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER - *CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, - *OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - *OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * CONTRIBUTION AGREEMENT - * - * By contributing to the BVLC/caffe repository through pull-request, comment, - * or otherwise, the contributor releases their content to the - * license and copyright terms herein. - * - ***************** END Caffe Copyright Notice and Disclaimer - ********************* - * - * Copyright (c) 2018 Microsoft - * Licensed under The MIT License [see LICENSE for details] - * \file modulated_deformable_im2col.cuh - * \brief Function definitions of converting an image to - * column matrix based on kernel, padding, dilation, and offset. 
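
The `bilinear_interpolate` device helper above (and the `dmcn_im2col_bilinear` variant used by the deformable-convolution kernels below) samples a feature map at fractional coordinates. A NumPy transcription of the same boundary handling, for reference only; the test values are illustrative:

```python
import numpy as np

def bilinear_interpolate(feat, y, x):
    """Sample feat at fractional (y, x), mirroring the CUDA helper's clamping."""
    h, w = feat.shape
    if y < -1.0 or y > h or x < -1.0 or x > w:   # outside the map: contributes zero
        return 0.0
    y, x = max(y, 0.0), max(x, 0.0)
    y_low, x_low = int(y), int(x)
    if y_low >= h - 1:
        y_low = y_high = h - 1
        y = float(y_low)
    else:
        y_high = y_low + 1
    if x_low >= w - 1:
        x_low = x_high = w - 1
        x = float(x_low)
    else:
        x_high = x_low + 1
    ly, lx = y - y_low, x - x_low
    hy, hx = 1.0 - ly, 1.0 - lx
    return (hy * hx * feat[y_low, x_low] + hy * lx * feat[y_low, x_high] +
            ly * hx * feat[y_high, x_low] + ly * lx * feat[y_high, x_high])

feat = np.arange(16, dtype=np.float32).reshape(4, 4)
print(bilinear_interpolate(feat, 1.5, 2.25))     # 8.25
```
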
- * These functions are mainly used in deformable convolution operators. - * \ref: https://arxiv.org/abs/1703.06211 - * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai, Xizhou Zhu, Han Hu, Dazhi Cheng - */ - -// modified from -// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda_kernel.cu - -#ifndef MODULATED_DEFORM_CONV_CUDA_KERNEL_CUH -#define MODULATED_DEFORM_CONV_CUDA_KERNEL_CUH - -#include -#ifdef MMCV_WITH_TRT -#include "common_cuda_helper.hpp" -#else // MMCV_WITH_TRT -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else // MMCV_USE_PARROTS -#include "pytorch_cuda_helper.hpp" -#endif // MMCV_USE_PARROTS -#endif // MMCV_WITH_TRT - -template -__device__ T dmcn_im2col_bilinear(const T *input, const int data_width, - const int height, const int width, T h, T w) { - int h_low = floorf(h); - int w_low = floorf(w); - int h_high = h_low + 1; - int w_high = w_low + 1; - - T lh = h - h_low; - T lw = w - w_low; - T hh = 1 - lh, hw = 1 - lw; - - T v1 = 0; - if (h_low >= 0 && w_low >= 0) v1 = input[h_low * data_width + w_low]; - T v2 = 0; - if (h_low >= 0 && w_high <= width - 1) - v2 = input[h_low * data_width + w_high]; - T v3 = 0; - if (h_high <= height - 1 && w_low >= 0) - v3 = input[h_high * data_width + w_low]; - T v4 = 0; - if (h_high <= height - 1 && w_high <= width - 1) - v4 = input[h_high * data_width + w_high]; - - T w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - - T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - return val; -} - -template -__device__ T dmcn_get_gradient_weight(T argmax_h, T argmax_w, const int h, - const int w, const int height, - const int width) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floorf(argmax_h); - int argmax_w_low = floorf(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - if (h == argmax_h_low && w == argmax_w_low) - weight = (h + 1 - argmax_h) * (w + 1 - argmax_w); - if (h == argmax_h_low && w == argmax_w_high) - weight = (h + 1 - argmax_h) * (argmax_w + 1 - w); - if (h == argmax_h_high && w == argmax_w_low) - weight = (argmax_h + 1 - h) * (w + 1 - argmax_w); - if (h == argmax_h_high && w == argmax_w_high) - weight = (argmax_h + 1 - h) * (argmax_w + 1 - w); - return weight; -} - -template -__device__ T dmcn_get_coordinate_weight(T argmax_h, T argmax_w, - const int height, const int width, - const T *im_data, const int data_width, - const int bp_dir) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floorf(argmax_h); - int argmax_w_low = floorf(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - - if (bp_dir == 0) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += -1 * (argmax_w - argmax_w_low) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_w - argmax_w_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } else if (bp_dir == 1) { - if (argmax_h_low >= 0 
&& argmax_w_low >= 0) - weight += -1 * (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += -1 * (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } - - return weight; -} - -template -__global__ void modulated_deformable_im2col_gpu_kernel( - const int n, const T *data_im, const T *data_offset, const T *data_mask, - const int height, const int width, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int num_channels, const int deformable_group, const int height_col, - const int width_col, T *data_col) { - CUDA_1D_KERNEL_LOOP(index, n) { - // index index of output matrix - const int w_col = index % width_col; - const int h_col = (index / width_col) % height_col; - const int b_col = (index / width_col / height_col) % batch_size; - const int c_im = (index / width_col / height_col) / batch_size; - const int c_col = c_im * kernel_h * kernel_w; - - // compute deformable group index - const int deformable_group_index = c_im / channel_per_deformable_group; - - const int h_in = h_col * stride_h - pad_h; - const int w_in = w_col * stride_w - pad_w; - - T *data_col_ptr = - data_col + - ((c_col * batch_size + b_col) * height_col + h_col) * width_col + w_col; - const T *data_im_ptr = - data_im + (b_col * num_channels + c_im) * height * width; - const T *data_offset_ptr = - data_offset + (b_col * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - - const T *data_mask_ptr = - data_mask + (b_col * deformable_group + deformable_group_index) * - kernel_h * kernel_w * height_col * width_col; - - for (int i = 0; i < kernel_h; ++i) { - for (int j = 0; j < kernel_w; ++j) { - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + - w_col; - const int data_mask_hw_ptr = - ((i * kernel_w + j) * height_col + h_col) * width_col + w_col; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - T val = static_cast(0); - const T h_im = h_in + i * dilation_h + offset_h; - const T w_im = w_in + j * dilation_w + offset_w; - if (h_im > -1 && w_im > -1 && h_im < height && w_im < width) - val = dmcn_im2col_bilinear(data_im_ptr, width, height, width, h_im, - w_im); - *data_col_ptr = val * mask; - data_col_ptr += batch_size * height_col * width_col; - } - } - } -} - -template -__global__ void modulated_deformable_col2im_gpu_kernel( - const int n, const T *data_col, const T *data_offset, const T *data_mask, - const int channels, const int height, const int width, const int kernel_h, - const int kernel_w, const int pad_h, const int pad_w, const int stride_h, - const int stride_w, const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int 
deformable_group, const int height_col, const int width_col, - T *grad_im) { - CUDA_1D_KERNEL_LOOP(index, n) { - const int j = (index / width_col / height_col / batch_size) % kernel_w; - const int i = - (index / width_col / height_col / batch_size / kernel_w) % kernel_h; - const int c = - index / width_col / height_col / batch_size / kernel_w / kernel_h; - // compute the start and end of the output - - const int deformable_group_index = c / channel_per_deformable_group; - - int w_out = index % width_col; - int h_out = (index / width_col) % height_col; - int b = (index / width_col / height_col) % batch_size; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - - const T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - const T *data_mask_ptr = - data_mask + (b * deformable_group + deformable_group_index) * kernel_h * - kernel_w * height_col * width_col; - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out; - const int data_mask_hw_ptr = - ((i * kernel_w + j) * height_col + h_out) * width_col + w_out; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - const T cur_inv_h_data = h_in + i * dilation_h + offset_h; - const T cur_inv_w_data = w_in + j * dilation_w + offset_w; - - const T cur_top_grad = data_col[index] * mask; - const int cur_h = (int)cur_inv_h_data; - const int cur_w = (int)cur_inv_w_data; - for (int dy = -2; dy <= 2; dy++) { - for (int dx = -2; dx <= 2; dx++) { - if (cur_h + dy >= 0 && cur_h + dy < height && cur_w + dx >= 0 && - cur_w + dx < width && abs(cur_inv_h_data - (cur_h + dy)) < 1 && - abs(cur_inv_w_data - (cur_w + dx)) < 1) { - int cur_bottom_grad_pos = - ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx; - T weight = - dmcn_get_gradient_weight(cur_inv_h_data, cur_inv_w_data, - cur_h + dy, cur_w + dx, height, width); - atomicAdd(grad_im + cur_bottom_grad_pos, weight * cur_top_grad); - } - } - } - } -} - -template -__global__ void modulated_deformable_col2im_coord_gpu_kernel( - const int n, const T *data_col, const T *data_im, const T *data_offset, - const T *data_mask, const int channels, const int height, const int width, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int channel_per_deformable_group, - const int batch_size, const int offset_channels, const int deformable_group, - const int height_col, const int width_col, T *grad_offset, T *grad_mask) { - CUDA_1D_KERNEL_LOOP(index, n) { - T val = 0, mval = 0; - int w = index % width_col; - int h = (index / width_col) % height_col; - int c = (index / width_col / height_col) % offset_channels; - int b = (index / width_col / height_col) / offset_channels; - // compute the start and end of the output - - const int deformable_group_index = c / (2 * kernel_h * kernel_w); - const int col_step = kernel_h * kernel_w; - int cnt = 0; - const T *data_col_ptr = data_col + deformable_group_index * - channel_per_deformable_group * - batch_size * width_col * height_col; - const T *data_im_ptr = - data_im + (b * deformable_group + deformable_group_index) * - channel_per_deformable_group / kernel_h / kernel_w * - height * 
width; - const T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - const T *data_mask_ptr = - data_mask + (b * deformable_group + deformable_group_index) * kernel_h * - kernel_w * height_col * width_col; - - const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w; - - for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; - col_c += col_step) { - const int col_pos = - (((col_c * batch_size + b) * height_col) + h) * width_col + w; - const int bp_dir = offset_c % 2; - - int j = (col_pos / width_col / height_col / batch_size) % kernel_w; - int i = - (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h; - int w_out = col_pos % width_col; - int h_out = (col_pos / width_col) % height_col; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - const int data_offset_h_ptr = - (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out); - const int data_offset_w_ptr = - (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + - w_out); - const int data_mask_hw_ptr = - (((i * kernel_w + j) * height_col + h_out) * width_col + w_out); - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - T inv_h = h_in + i * dilation_h + offset_h; - T inv_w = w_in + j * dilation_w + offset_w; - if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width) - inv_h = inv_w = -2; - else - mval += data_col_ptr[col_pos] * - dmcn_im2col_bilinear(data_im_ptr + cnt * height * width, width, - height, width, inv_h, inv_w); - const T weight = dmcn_get_coordinate_weight( - inv_h, inv_w, height, width, data_im_ptr + cnt * height * width, - width, bp_dir); - val += weight * data_col_ptr[col_pos] * mask; - cnt += 1; - } - // KERNEL_ASSIGN(grad_offset[index], offset_req, val); - grad_offset[index] = val; - if (offset_c % 2 == 0) - // KERNEL_ASSIGN(grad_mask[(((b * deformable_group + - // deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * - // height_col + h) * width_col + w], mask_req, mval); - grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * - kernel_w + - offset_c / 2) * - height_col + - h) * - width_col + - w] = mval; - } -} - -#endif // MODULATED_DEFORM_CONV_CUDA_KERNEL_CUH diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/sync_bn_cuda_kernel.cuh b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/sync_bn_cuda_kernel.cuh deleted file mode 100755 index da6ab4f0..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/cuda/sync_bn_cuda_kernel.cuh +++ /dev/null @@ -1,332 +0,0 @@ -// Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -// Copyright (c) OpenMMLab. 
All rights reserved -#ifndef SYNCBN_CUDA_KERNEL_CUH -#define SYNCBN_CUDA_KERNEL_CUH - -#ifdef MMCV_USE_PARROTS -#include "parrots_cuda_helper.hpp" -#else -#include "pytorch_cuda_helper.hpp" -#endif - -template -__global__ void sync_bn_forward_mean_cuda_kernel(const T *input, float *mean, - int num, int channels, - int spatial) { - __shared__ float buffer[THREADS_PER_BLOCK]; - int tid = threadIdx.x; - int c = blockIdx.x; - buffer[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - buffer[tid] += input[index]; - } - __syncthreads(); - - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer[tid] += buffer[tid + s]; - } - __syncthreads(); - } - int total = num * spatial; - if (tid == 0) { - mean[c] = buffer[0] / total; - } -} - -template <> -__global__ void sync_bn_forward_mean_cuda_kernel(const phalf *input, - float *mean, int num, - int channels, int spatial) { - __shared__ float buffer[THREADS_PER_BLOCK]; - int tid = threadIdx.x; - int c = blockIdx.x; - buffer[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - buffer[tid] += static_cast(input[index]); - } - __syncthreads(); - - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer[tid] += buffer[tid + s]; - } - __syncthreads(); - } - int total = num * spatial; - if (tid == 0) { - mean[c] = buffer[0] / total; - } -} - -template -__global__ void sync_bn_forward_var_cuda_kernel(const T *input, - const float *mean, float *var, - int num, int channels, - int spatial) { - __shared__ float buffer[THREADS_PER_BLOCK]; - int tid = threadIdx.x; - int c = blockIdx.x; - buffer[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - float td = input[index] - mean[c]; - buffer[tid] += td * td; - } - __syncthreads(); - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer[tid] += buffer[tid + s]; - } - __syncthreads(); - } - int total = num * spatial; - if (tid == 0) { - var[c] = buffer[0] / total; - } -} - -template <> -__global__ void sync_bn_forward_var_cuda_kernel(const phalf *input, - const float *mean, float *var, - int num, int channels, - int spatial) { - __shared__ float buffer[THREADS_PER_BLOCK]; - int tid = threadIdx.x; - int c = blockIdx.x; - buffer[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - float td = static_cast(input[index]) - mean[c]; - buffer[tid] += td * td; - } - __syncthreads(); - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer[tid] += buffer[tid + s]; - } - __syncthreads(); - } - int total = num * spatial; - if (tid == 0) { - var[c] = buffer[0] / total; - } -} - -template -__global__ void sync_bn_forward_output_cuda_kernel( - const T *input, const float *mean, const float *var, float *running_mean, - float *running_var, const float *weight, const float *bias, float *norm, - float *std, T *output, int num, int channels, int spatial, float eps, - float momentum, int group_size) { - int tid = threadIdx.x; - int c = blockIdx.x; - float mean_value = mean[c]; - float std_value = sqrt(var[c] + eps); - - if (weight != nullptr) { - float weight_value = weight[c]; - float bias_value = bias[c]; - if (norm != nullptr) { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int 
index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - norm[index] = (input[index] - mean_value) / std_value; - output[index] = norm[index] * weight_value + bias_value; - } - } else { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - output[index] = - (input[index] - mean_value) / std_value * weight_value + bias_value; - } - } - } else { - if (norm != nullptr) { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - output[index] = norm[index] = (input[index] - mean_value) / std_value; - } - } else { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - output[index] = (input[index] - mean_value) / std_value; - } - } - } - if (tid == 0) { - if (std != nullptr) std[c] = std_value; - if (running_mean != nullptr) { - running_mean[c] = - momentum * mean_value + (1 - momentum) * running_mean[c]; - int count = num * spatial * group_size; - float var_unbias = count > 1 ? var[c] * count / (count - 1) : var[c]; - running_var[c] = momentum * var_unbias + (1 - momentum) * running_var[c]; - } - } -} - -template <> -__global__ void sync_bn_forward_output_cuda_kernel( - const phalf *input, const float *mean, const float *var, - float *running_mean, float *running_var, const float *weight, - const float *bias, float *norm, float *std, phalf *output, int num, - int channels, int spatial, float eps, float momentum, int group_size) { - int tid = threadIdx.x; - int c = blockIdx.x; - float mean_value = mean[c]; - float std_value = sqrt(var[c] + eps); - if (weight != nullptr) { - float weight_value = weight[c]; - float bias_value = bias[c]; - if (norm != nullptr) { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - norm[index] = - (static_cast(input[index]) - mean_value) / std_value; - output[index] = - static_cast(norm[index] * weight_value + bias_value); - } - } else { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - output[index] = - static_cast((static_cast(input[index]) - mean_value) / - std_value * weight_value + - bias_value); - } - } - } else { - if (norm != nullptr) { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - norm[index] = - (static_cast(input[index]) - mean_value) / std_value; - output[index] = static_cast(norm[index]); - } - } else { - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = - (i / spatial) * channels * spatial + c * spatial + i % spatial; - output[index] = static_cast( - (static_cast(input[index]) - mean_value) / std_value); - } - } - } - if (tid == 0) { - if (std != nullptr) std[c] = std_value; - if (running_mean != nullptr) { - running_mean[c] = - momentum * mean_value + (1 - momentum) * running_mean[c]; - int count = num * spatial * group_size; - float var_unbias = count > 1 ? 
var[c] * count / (count - 1) : var[c]; - running_var[c] = momentum * var_unbias + (1 - momentum) * running_var[c]; - } - } -} - -template -__global__ void sync_bn_backward_param_cuda_kernel(const T *grad_output, - const float *norm, - float *grad_weight, - float *grad_bias, int num, - int channels, int spatial) { - __shared__ float buffer1[THREADS_PER_BLOCK]; - __shared__ float buffer2[THREADS_PER_BLOCK]; - - int tid = threadIdx.x; - int c = blockIdx.x; - buffer1[tid] = buffer2[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - buffer1[tid] += grad_output[index] * norm[index]; - buffer2[tid] += grad_output[index]; - } - __syncthreads(); - - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer1[tid] += buffer1[tid + s]; - buffer2[tid] += buffer2[tid + s]; - } - __syncthreads(); - } - if (tid == 0) { - grad_weight[c] = buffer1[0]; - grad_bias[c] = buffer2[0]; - } -} - -template <> -__global__ void sync_bn_backward_param_cuda_kernel(const phalf *grad_output, - const float *norm, - float *grad_weight, - float *grad_bias, int num, - int channels, int spatial) { - __shared__ float buffer1[THREADS_PER_BLOCK]; - __shared__ float buffer2[THREADS_PER_BLOCK]; - - int tid = threadIdx.x; - int c = blockIdx.x; - buffer1[tid] = buffer2[tid] = 0; - for (int i = tid; i < num * spatial; i += blockDim.x) { - int index = (i / spatial) * channels * spatial + c * spatial + i % spatial; - buffer1[tid] += static_cast(grad_output[index]) * norm[index]; - buffer2[tid] += static_cast(grad_output[index]); - } - __syncthreads(); - - for (int s = blockDim.x / 2; s > 0; s >>= 1) { - if (tid < s) { - buffer1[tid] += buffer1[tid + s]; - buffer2[tid] += buffer2[tid + s]; - } - __syncthreads(); - } - if (tid == 0) { - grad_weight[c] = buffer1[0]; - grad_bias[c] = buffer2[0]; - } -} - -template -__global__ void sync_bn_backward_data_cuda_kernel( - int output_size, const T *grad_output, const float *weight, - const float *grad_weight, const float *grad_bias, const float *norm, - const float *std, T *grad_input, int num, int channels, int spatial) { - int factor = num * spatial; - CUDA_1D_KERNEL_LOOP(index, output_size) { - int c = (index / spatial) % channels; - grad_input[index] = - weight[c] * - (grad_output[index] - - (grad_weight[c] * norm[index] + grad_bias[c]) / factor) / - std[c]; - } -} - -template <> -__global__ void sync_bn_backward_data_cuda_kernel( - int output_size, const phalf *grad_output, const float *weight, - const float *grad_weight, const float *grad_bias, const float *norm, - const float *std, phalf *grad_input, int num, int channels, int spatial) { - int factor = num * spatial; - CUDA_1D_KERNEL_LOOP(index, output_size) { - int c = (index / spatial) % channels; - grad_input[index] = static_cast( - weight[c] * - (static_cast(grad_output[index]) - - (grad_weight[c] * norm[index] + grad_bias[c]) / factor) / - std[c]); - } -} - -#endif // SYNCBN_CUDA_KERNEL_CUH diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp deleted file mode 100755 index c7f9f35b..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/pytorch_cpp_helper.hpp +++ /dev/null @@ -1,24 +0,0 @@ -#ifndef PYTORCH_CPP_HELPER -#define PYTORCH_CPP_HELPER -#include - -#include - -using namespace at; - -#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0)) - -#define CHECK_CUDA(x) \ - 
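The `tid == 0` branch shown above updates the running statistics: an exponential moving average of the mean, plus a Bessel-corrected (unbiased) variance computed from `count = num * spatial * group_size`. A minimal C++ sketch of that update follows (illustrative only, not part of the patch; names are hypothetical).

```cpp
// Illustrative sketch (not part of the patch): the per-channel running-stats
// update performed once (tid == 0) by the deleted
// sync_bn_forward_output_cuda_kernel. Variable names are hypothetical.
#include <cstdio>

void update_running_stats(float mean, float var, float momentum, int count,
                          float& running_mean, float& running_var) {
  running_mean = momentum * mean + (1.f - momentum) * running_mean;
  // Bessel correction: turn the biased batch variance into an unbiased
  // estimate before folding it into the running variance.
  float var_unbias = count > 1 ? var * count / (count - 1) : var;
  running_var = momentum * var_unbias + (1.f - momentum) * running_var;
}

int main() {
  float running_mean = 0.f, running_var = 1.f;
  update_running_stats(/*mean=*/0.5f, /*var=*/0.25f, /*momentum=*/0.1f,
                       /*count=*/8, running_mean, running_var);
  std::printf("%f %f\n", running_mean, running_var);
  return 0;
}
```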
TORCH_CHECK(x.device().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CPU(x) \ - TORCH_CHECK(!x.device().is_cuda(), #x " must be a CPU tensor") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_CUDA_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) -#define CHECK_CPU_INPUT(x) \ - CHECK_CPU(x); \ - CHECK_CONTIGUOUS(x) - -#endif // PYTORCH_CPP_HELPER diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp deleted file mode 100755 index 9869b535..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/common/pytorch_cuda_helper.hpp +++ /dev/null @@ -1,19 +0,0 @@ -#ifndef PYTORCH_CUDA_HELPER -#define PYTORCH_CUDA_HELPER - -#include -#include -#include - -#include -#include - -#include "common_cuda_helper.hpp" - -using at::Half; -using at::Tensor; -using phalf = at::Half; - -#define __PHALF(x) (x) - -#endif // PYTORCH_CUDA_HELPER diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/cuda/modulated_deform_conv_cuda.cu b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/cuda/modulated_deform_conv_cuda.cu deleted file mode 100755 index 27b70670..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/cuda/modulated_deform_conv_cuda.cu +++ /dev/null @@ -1,96 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "modulated_deform_conv_cuda_kernel.cuh" -#include "pytorch_cuda_helper.hpp" - -void modulated_deformable_im2col_cuda( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kenerl_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col) { - // num_axes should be smaller than block size - const int channel_per_deformable_group = channels / deformable_group; - const int num_kernels = channels * batch_size * height_col * width_col; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_im.scalar_type(), "modulated_deformable_im2col_gpu", ([&] { - const scalar_t *data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *data_col_ = data_col.data_ptr(); - - modulated_deformable_im2col_gpu_kernel<<< - GET_BLOCKS(num_kernels), THREADS_PER_BLOCK, 0, - at::cuda::getCurrentCUDAStream()>>>( - num_kernels, data_im_, data_offset_, data_mask_, height_im, - width_im, kernel_h, kenerl_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, channel_per_deformable_group, batch_size, - channels, deformable_group, height_col, width_col, data_col_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void modulated_deformable_col2im_cuda( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im) { - const int channel_per_deformable_group = channels / deformable_group; - const int num_kernels = - channels * kernel_h * kernel_w * 
batch_size * height_col * width_col; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "modulated_deformable_col2im_gpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *grad_im_ = grad_im.data_ptr(); - - modulated_deformable_col2im_gpu_kernel<<< - GET_BLOCKS(num_kernels), THREADS_PER_BLOCK, 0, - at::cuda::getCurrentCUDAStream()>>>( - num_kernels, data_col_, data_offset_, data_mask_, channels, - height_im, width_im, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, channel_per_deformable_group, - batch_size, deformable_group, height_col, width_col, grad_im_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void modulated_deformable_col2im_coord_cuda( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask) { - const int num_kernels = batch_size * height_col * width_col * 2 * kernel_h * - kernel_w * deformable_group; - const int channel_per_deformable_group = - channels * kernel_h * kernel_w / deformable_group; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "modulated_deformable_col2im_coord_gpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *grad_offset_ = grad_offset.data_ptr(); - scalar_t *grad_mask_ = grad_mask.data_ptr(); - - modulated_deformable_col2im_coord_gpu_kernel<<< - GET_BLOCKS(num_kernels), THREADS_PER_BLOCK, 0, - at::cuda::getCurrentCUDAStream()>>>( - num_kernels, data_col_, data_im_, data_offset_, data_mask_, - channels, height_im, width_im, kernel_h, kernel_w, pad_h, pad_w, - stride_h, stride_w, dilation_h, dilation_w, - channel_per_deformable_group, batch_size, - 2 * kernel_h * kernel_w * deformable_group, deformable_group, - height_col, width_col, grad_offset_, grad_mask_); - })); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/cuda/sync_bn_cuda.cu b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/cuda/sync_bn_cuda.cu deleted file mode 100755 index 657c8170..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/cuda/sync_bn_cuda.cu +++ /dev/null @@ -1,110 +0,0 @@ -// Copyright (c) OpenMMLab. 
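The launchers above size their grids from the number of output elements: `num_kernels` counts one work item per im2col/col2im output element and `GET_BLOCKS` is essentially `DIVUP(num_kernels, THREADS_PER_BLOCK)`. The sketch below shows that arithmetic in plain C++ (illustrative, not part of the patch; the block-size constant is an assumption).

```cpp
// Illustrative sketch (not part of the patch): grid sizing as used by the
// deleted launchers. kThreadsPerBlock is an assumed stand-in for
// THREADS_PER_BLOCK; divup mirrors the DIVUP macro from the deleted header.
#include <cstdio>

constexpr int kThreadsPerBlock = 512;  // assumption

int divup(int m, int n) { return m / n + (m % n > 0); }

int main() {
  // One work item per output column element, as in
  // modulated_deformable_im2col_cuda: channels * batch * height_col * width_col.
  int channels = 64, batch = 2, height_col = 32, width_col = 32;
  int num_kernels = channels * batch * height_col * width_col;
  int num_blocks = divup(num_kernels, kThreadsPerBlock);
  std::printf("launch %d blocks of %d threads for %d elements\n",
              num_blocks, kThreadsPerBlock, num_kernels);
  return 0;
}
```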
All rights reserved -#include "pytorch_cuda_helper.hpp" -#include "sync_bn_cuda_kernel.cuh" - -void SyncBNForwardMeanCUDAKernelLauncher(const Tensor input, Tensor mean) { - int num = input.size(0); - int channels = input.size(1); - int spatial = input.size(2); - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "sync_bn_forward_mean_cuda_kernel", [&] { - sync_bn_forward_mean_cuda_kernel - <<>>( - input.data_ptr(), mean.data_ptr(), num, - channels, spatial); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SyncBNForwardVarCUDAKernelLauncher(const Tensor input, const Tensor mean, - Tensor var) { - int num = input.size(0); - int channels = input.size(1); - int spatial = input.size(2); - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "sync_bn_forward_mean_cuda_kernel", [&] { - sync_bn_forward_var_cuda_kernel - <<>>( - input.data_ptr(), mean.data_ptr(), - var.data_ptr(), num, channels, spatial); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SyncBNForwardOutputCUDAKernelLauncher( - const Tensor input, const Tensor mean, const Tensor var, - Tensor running_mean, Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, Tensor output, float eps, - float momentum, int group_size) { - int num = input.size(0); - int channels = input.size(1); - int spatial = input.size(2); - - at::cuda::CUDAGuard device_guard(input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - input.scalar_type(), "sync_bn_forward_mean_cuda_kernel", [&] { - sync_bn_forward_output_cuda_kernel - <<>>( - input.data_ptr(), mean.data_ptr(), - var.data_ptr(), running_mean.data_ptr(), - running_var.data_ptr(), weight.data_ptr(), - bias.data_ptr(), norm.data_ptr(), - std.data_ptr(), output.data_ptr(), num, - channels, spatial, eps, momentum, group_size); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SyncBNBackwardParamCUDAKernelLauncher(const Tensor grad_output, - const Tensor norm, - Tensor grad_weight, - Tensor grad_bias) { - int num = grad_output.size(0); - int channels = grad_output.size(1); - int spatial = grad_output.size(2); - - at::cuda::CUDAGuard device_guard(grad_output.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_output.scalar_type(), "sync_bn_backward_param_cuda_kernel", [&] { - sync_bn_backward_param_cuda_kernel - <<>>( - grad_output.data_ptr(), norm.data_ptr(), - grad_weight.data_ptr(), grad_bias.data_ptr(), num, - channels, spatial); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} - -void SyncBNBackwardDataCUDAKernelLauncher(const Tensor grad_output, - const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, - const Tensor norm, const Tensor std, - Tensor grad_input) { - int output_size = grad_input.numel(); - int num = grad_input.size(0); - int channels = grad_input.size(1); - int spatial = grad_input.size(2); - - at::cuda::CUDAGuard device_guard(grad_input.device()); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - grad_output.scalar_type(), "sync_bn_backward_data_cuda_kernel", [&] { - sync_bn_backward_data_cuda_kernel - <<>>( - output_size, grad_output.data_ptr(), - weight.data_ptr(), grad_weight.data_ptr(), - grad_bias.data_ptr(), 
norm.data_ptr(), - std.data_ptr(), grad_input.data_ptr(), num, - channels, spatial); - }); - AT_CUDA_CHECK(cudaGetLastError()); -} diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/info.cpp b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/info.cpp deleted file mode 100755 index a08d227d..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/info.cpp +++ /dev/null @@ -1,56 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -// modified from -// https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/csrc/vision.cpp -#include "pytorch_cpp_helper.hpp" - -#ifdef MMCV_WITH_CUDA -#ifndef HIP_DIFF -#include -int get_cudart_version() { return CUDART_VERSION; } -#endif -#endif - -std::string get_compiling_cuda_version() { -#ifdef MMCV_WITH_CUDA -#ifndef HIP_DIFF - std::ostringstream oss; - // copied from - // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/cuda/detail/CUDAHooks.cpp#L231 - auto printCudaStyleVersion = [&](int v) { - oss << (v / 1000) << "." << (v / 10 % 100); - if (v % 10 != 0) { - oss << "." << (v % 10); - } - }; - printCudaStyleVersion(get_cudart_version()); - return oss.str(); -#else - return std::string("rocm not available"); -#endif -#else - return std::string("not available"); -#endif -} - -// similar to -// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Version.cpp -std::string get_compiler_version() { - std::ostringstream ss; -#if defined(__GNUC__) -#ifndef __clang__ - { ss << "GCC " << __GNUC__ << "." << __GNUC_MINOR__; } -#endif -#endif - -#if defined(__clang_major__) - { - ss << "clang " << __clang_major__ << "." << __clang_minor__ << "." - << __clang_patchlevel__; - } -#endif - -#if defined(_MSC_VER) - { ss << "MSVC " << _MSC_FULL_VER; } -#endif - return ss.str(); -} diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/modulated_deform_conv.cpp b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/modulated_deform_conv.cpp deleted file mode 100755 index c5e78c3a..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/modulated_deform_conv.cpp +++ /dev/null @@ -1,338 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" - -#ifdef MMCV_WITH_CUDA - -void modulated_deformable_im2col_cuda( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kenerl_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col); - -void modulated_deformable_col2im_cuda( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im); - -void modulated_deformable_col2im_coord_cuda( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask); - -#endif - -void modulated_deformable_im2col_cpu( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kenerl_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col); - -void modulated_deformable_col2im_cpu( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im); - -void modulated_deformable_col2im_coord_cpu( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask); - -void modulated_deform_conv_forward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor output, Tensor columns, int kernel_h, int kernel_w, - const int stride_h, const int stride_w, const int pad_h, const int pad_w, - const int dilation_h, const int dilation_w, const int group, - const int deformable_group, const bool with_bias) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(weight); - CHECK_CUDA_INPUT(bias); - CHECK_CUDA_INPUT(ones); - CHECK_CUDA_INPUT(offset); - CHECK_CUDA_INPUT(mask); - CHECK_CUDA_INPUT(output); - 
CHECK_CUDA_INPUT(columns); - -#else - AT_ERROR("ModulatedDeformConv is not compiled with GPU support"); -#endif - } else { - CHECK_CPU_INPUT(input); - CHECK_CPU_INPUT(weight); - CHECK_CPU_INPUT(bias); - CHECK_CPU_INPUT(ones); - CHECK_CPU_INPUT(offset); - CHECK_CPU_INPUT(mask); - CHECK_CPU_INPUT(output); - CHECK_CPU_INPUT(columns); - } - - at::DeviceGuard guard(input.device()); - - const int batch = input.size(0); - const int channels = input.size(1); - const int height = input.size(2); - const int width = input.size(3); - - const int channels_out = weight.size(0); - const int channels_kernel = weight.size(1); - const int kernel_h_ = weight.size(2); - const int kernel_w_ = weight.size(3); - - if (kernel_h_ != kernel_h || kernel_w_ != kernel_w) - AT_ERROR("Input shape and kernel shape won't match: (%d x %d vs %d x %d).", - kernel_h_, kernel_w, kernel_h_, kernel_w_); - if (channels != channels_kernel * group) - AT_ERROR("Input shape and kernel channels won't match: (%d vs %d).", - channels, channels_kernel * group); - - const int height_out = - (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; - const int width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < height_out * width_out) { - // Resize plane and fill with ones... - ones = at::ones({height_out, width_out}, input.options()); - } - - // resize output - output = output.view({batch, channels_out, height_out, width_out}).zero_(); - // resize temporary columns - columns = - at::zeros({channels * kernel_h * kernel_w, 1 * height_out * width_out}, - input.options()); - - output = output.view({output.size(0), group, output.size(1) / group, - output.size(2), output.size(3)}); - - for (int b = 0; b < batch; b++) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - modulated_deformable_im2col_cuda( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); -#endif - } else { - modulated_deformable_im2col_cpu( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); - } - - // divide into group - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - - for (int g = 0; g < group; g++) { - output[b][g] = output[b][g] - .flatten(1) - .addmm_(weight[g].flatten(1), columns[g]) - .view_as(output[b][g]); - } - - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - } - - output = output.view({output.size(0), output.size(1) * output.size(2), - output.size(3), output.size(4)}); - - if (with_bias) { - output += bias.view({1, bias.size(0), 1, 1}); - } -} - -void modulated_deform_conv_backward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor columns, Tensor grad_input, Tensor grad_weight, - Tensor grad_bias, Tensor grad_offset, Tensor grad_mask, Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias) { - if 
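The forward pass above derives its output spatial size from the usual dilated-convolution formula, `(in + 2 * pad - (dilation * (kernel - 1) + 1)) / stride + 1`, applied per dimension. A tiny self-contained check of that arithmetic (illustrative only, not part of the patch):

```cpp
// Illustrative sketch (not part of the patch): the output-size arithmetic used
// by the deleted modulated_deform_conv_forward for each spatial dimension.
#include <cstdio>

int conv_out_size(int in, int pad, int dilation, int kernel, int stride) {
  return (in + 2 * pad - (dilation * (kernel - 1) + 1)) / stride + 1;
}

int main() {
  // A 3x3 kernel with stride 1, padding 1, dilation 1 preserves size: 64 -> 64.
  std::printf("%d\n", conv_out_size(64, 1, 1, 3, 1));
  return 0;
}
```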
(input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(weight); - CHECK_CUDA_INPUT(bias); - CHECK_CUDA_INPUT(ones); - CHECK_CUDA_INPUT(offset); - CHECK_CUDA_INPUT(mask); - CHECK_CUDA_INPUT(columns); - CHECK_CUDA_INPUT(grad_input); - CHECK_CUDA_INPUT(grad_weight); - CHECK_CUDA_INPUT(grad_bias); - CHECK_CUDA_INPUT(grad_offset); - CHECK_CUDA_INPUT(grad_mask); - CHECK_CUDA_INPUT(grad_output); - -#else - AT_ERROR("ModulatedDeformConv is not compiled with GPU support"); -#endif - } else { - CHECK_CPU_INPUT(input); - CHECK_CPU_INPUT(weight); - CHECK_CPU_INPUT(bias); - CHECK_CPU_INPUT(ones); - CHECK_CPU_INPUT(offset); - CHECK_CPU_INPUT(mask); - CHECK_CPU_INPUT(columns); - CHECK_CPU_INPUT(grad_input); - CHECK_CPU_INPUT(grad_weight); - CHECK_CPU_INPUT(grad_bias); - CHECK_CPU_INPUT(grad_offset); - CHECK_CPU_INPUT(grad_mask); - CHECK_CPU_INPUT(grad_output); - } - - at::DeviceGuard guard(input.device()); - - const int batch = input.size(0); - const int channels = input.size(1); - const int height = input.size(2); - const int width = input.size(3); - - const int channels_kernel = weight.size(1); - const int kernel_h_ = weight.size(2); - const int kernel_w_ = weight.size(3); - if (kernel_h_ != kernel_h || kernel_w_ != kernel_w) - AT_ERROR("Input shape and kernel shape won't match: (%d x %d vs %d x %d).", - kernel_h_, kernel_w, kernel_h_, kernel_w_); - if (channels != channels_kernel * group) - AT_ERROR("Input shape and kernel channels won't match: (%d vs %d).", - channels, channels_kernel * group); - - const int height_out = - (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; - const int width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < height_out * width_out) { - // Resize plane and fill with ones... - ones = at::ones({height_out, width_out}, input.options()); - } - - grad_input = grad_input.view({batch, channels, height, width}); - columns = at::zeros({channels * kernel_h * kernel_w, height_out * width_out}, - input.options()); - - grad_output = - grad_output.view({grad_output.size(0), group, grad_output.size(1) / group, - grad_output.size(2), grad_output.size(3)}); - - for (int b = 0; b < batch; b++) { - // divide int group - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - - for (int g = 0; g < group; g++) { - columns[g].addmm_(weight[g].flatten(1).transpose(0, 1), - grad_output[b][g].flatten(1), 0.0f, 1.0f); - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - // gradient w.r.t. input coordinate data - modulated_deformable_col2im_coord_cuda( - columns, input[b], offset[b], mask[b], 1, channels, height, width, - height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, deformable_group, grad_offset[b], - grad_mask[b]); - // gradient w.r.t. input data - modulated_deformable_col2im_cuda( - columns, offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, grad_input[b]); - - // gradient w.r.t. 
weight, dWeight should accumulate across the batch and - // group - modulated_deformable_im2col_cuda( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); -#endif - } else { - // gradient w.r.t. input coordinate data - modulated_deformable_col2im_coord_cpu( - columns, input[b], offset[b], mask[b], 1, channels, height, width, - height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, deformable_group, grad_offset[b], - grad_mask[b]); - // gradient w.r.t. input data - modulated_deformable_col2im_cpu( - columns, offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, grad_input[b]); - // gradient w.r.t. weight, dWeight should accumulate across the batch and - // group - modulated_deformable_im2col_cpu( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); - } - - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - grad_weight = grad_weight.view({group, grad_weight.size(0) / group, - grad_weight.size(1), grad_weight.size(2), - grad_weight.size(3)}); - if (with_bias) - grad_bias = grad_bias.view({group, grad_bias.size(0) / group}); - - for (int g = 0; g < group; g++) { - grad_weight[g] = - grad_weight[g] - .flatten(1) - .addmm_(grad_output[b][g].flatten(1), columns[g].transpose(0, 1)) - .view_as(grad_weight[g]); - if (with_bias) { - grad_bias[g] = - grad_bias[g] - .view({-1, 1}) - .addmm_(grad_output[b][g].flatten(1), ones.view({-1, 1})) - .view(-1); - } - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - grad_weight = grad_weight.view({grad_weight.size(0) * grad_weight.size(1), - grad_weight.size(2), grad_weight.size(3), - grad_weight.size(4)}); - if (with_bias) - grad_bias = grad_bias.view({grad_bias.size(0) * grad_bias.size(1)}); - } - grad_output = grad_output.view({grad_output.size(0) * grad_output.size(1), - grad_output.size(2), grad_output.size(3), - grad_output.size(4)}); -} diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/modulated_deform_conv_cpu.cpp b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/modulated_deform_conv_cpu.cpp deleted file mode 100755 index 89a81d73..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/modulated_deform_conv_cpu.cpp +++ /dev/null @@ -1,403 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" - -template -T dmcn_im2col_bilinear_cpu(const T *input, const int data_width, - const int height, const int width, T h, T w) { - int h_low = floorf(h); - int w_low = floorf(w); - int h_high = h_low + 1; - int w_high = w_low + 1; - - T lh = h - h_low; - T lw = w - w_low; - T hh = 1 - lh, hw = 1 - lw; - - T v1 = 0; - if (h_low >= 0 && w_low >= 0) v1 = input[h_low * data_width + w_low]; - T v2 = 0; - if (h_low >= 0 && w_high <= width - 1) - v2 = input[h_low * data_width + w_high]; - T v3 = 0; - if (h_high <= height - 1 && w_low >= 0) - v3 = input[h_high * data_width + w_low]; - T v4 = 0; - if (h_high <= height - 1 && w_high <= width - 1) - v4 = input[h_high * data_width + w_high]; - - T w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - - T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - return val; -} - -template -T dmcn_get_gradient_weight_cpu(T argmax_h, T argmax_w, const int h, const int w, - const int height, const int width) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floorf(argmax_h); - int argmax_w_low = floorf(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - if (h == argmax_h_low && w == argmax_w_low) - weight = (h + 1 - argmax_h) * (w + 1 - argmax_w); - if (h == argmax_h_low && w == argmax_w_high) - weight = (h + 1 - argmax_h) * (argmax_w + 1 - w); - if (h == argmax_h_high && w == argmax_w_low) - weight = (argmax_h + 1 - h) * (w + 1 - argmax_w); - if (h == argmax_h_high && w == argmax_w_high) - weight = (argmax_h + 1 - h) * (argmax_w + 1 - w); - return weight; -} - -template -T dmcn_get_coordinate_weight_cpu(T argmax_h, T argmax_w, const int height, - const int width, const T *im_data, - const int data_width, const int bp_dir) { - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || - argmax_w >= width) { - // empty - return 0; - } - - int argmax_h_low = floorf(argmax_h); - int argmax_w_low = floorf(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - T weight = 0; - - if (bp_dir == 0) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += -1 * (argmax_w - argmax_w_low) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += (argmax_w_low + 1 - argmax_w) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_w - argmax_w_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } else if (bp_dir == 1) { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += (argmax_h_low + 1 - argmax_h) * - im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += -1 * (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_h - argmax_h_low) * - im_data[argmax_h_high * data_width + argmax_w_high]; - } - - return weight; -} - -template -void modulated_deformable_im2col_cpu_kernel( - const int 
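`dmcn_im2col_bilinear_cpu` above samples the input at a fractional location by weighting the four surrounding pixels and dropping corners that fall outside the image. The following standalone C++ sketch mirrors that scheme (illustrative only, not part of the patch; names are hypothetical).

```cpp
// Illustrative sketch (not part of the patch): bilinear sampling at a
// fractional (h, w) location, following the same corner weighting as the
// deleted dmcn_im2col_bilinear_cpu. Out-of-range corners contribute zero.
#include <cmath>
#include <cstdio>

float bilinear_sample(const float* input, int height, int width, float h, float w) {
  int h_low = static_cast<int>(std::floor(h));
  int w_low = static_cast<int>(std::floor(w));
  int h_high = h_low + 1, w_high = w_low + 1;

  float lh = h - h_low, lw = w - w_low;
  float hh = 1.f - lh, hw = 1.f - lw;

  auto pixel = [&](int y, int x) -> float {
    return (y >= 0 && y < height && x >= 0 && x < width) ? input[y * width + x] : 0.f;
  };
  // Weights of the four surrounding pixels sum to 1 inside the image.
  return hh * hw * pixel(h_low, w_low) + hh * lw * pixel(h_low, w_high) +
         lh * hw * pixel(h_high, w_low) + lh * lw * pixel(h_high, w_high);
}

int main() {
  // 2x2 image: sampling at the exact center averages all four pixels.
  float img[4] = {0.f, 1.f, 2.f, 3.f};
  std::printf("%f\n", bilinear_sample(img, 2, 2, 0.5f, 0.5f));  // 1.5
  return 0;
}
```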
n, const T *data_im, const T *data_offset, const T *data_mask, - const int height, const int width, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int num_channels, const int deformable_group, const int height_col, - const int width_col, T *data_col) { - for (int index = 0; index < n; index++) { - // index index of output matrix - const int w_col = index % width_col; - const int h_col = (index / width_col) % height_col; - const int b_col = (index / width_col / height_col) % batch_size; - const int c_im = (index / width_col / height_col) / batch_size; - const int c_col = c_im * kernel_h * kernel_w; - - // compute deformable group index - const int deformable_group_index = c_im / channel_per_deformable_group; - - const int h_in = h_col * stride_h - pad_h; - const int w_in = w_col * stride_w - pad_w; - - T *data_col_ptr = - data_col + - ((c_col * batch_size + b_col) * height_col + h_col) * width_col + w_col; - const T *data_im_ptr = - data_im + (b_col * num_channels + c_im) * height * width; - const T *data_offset_ptr = - data_offset + (b_col * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - - const T *data_mask_ptr = - data_mask + (b_col * deformable_group + deformable_group_index) * - kernel_h * kernel_w * height_col * width_col; - - for (int i = 0; i < kernel_h; ++i) { - for (int j = 0; j < kernel_w; ++j) { - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + - w_col; - const int data_mask_hw_ptr = - ((i * kernel_w + j) * height_col + h_col) * width_col + w_col; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - T val = static_cast(0); - const T h_im = h_in + i * dilation_h + offset_h; - const T w_im = w_in + j * dilation_w + offset_w; - if (h_im > -1 && w_im > -1 && h_im < height && w_im < width) - val = dmcn_im2col_bilinear_cpu(data_im_ptr, width, height, width, - h_im, w_im); - *data_col_ptr = val * mask; - data_col_ptr += batch_size * height_col * width_col; - } - } - } -} - -template -void modulated_deformable_col2im_cpu_kernel( - const int n, const T *data_col, const T *data_offset, const T *data_mask, - const int channels, const int height, const int width, const int kernel_h, - const int kernel_w, const int pad_h, const int pad_w, const int stride_h, - const int stride_w, const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, const int batch_size, - const int deformable_group, const int height_col, const int width_col, - T *grad_im) { - for (int index = 0; index < n; index++) { - const int j = (index / width_col / height_col / batch_size) % kernel_w; - const int i = - (index / width_col / height_col / batch_size / kernel_w) % kernel_h; - const int c = - index / width_col / height_col / batch_size / kernel_w / kernel_h; - // compute the start and end of the output - - const int deformable_group_index = c / channel_per_deformable_group; - - int w_out = index % width_col; - int h_out = (index / width_col) % height_col; - int b = (index / width_col / height_col) % batch_size; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - - const 
T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - const T *data_mask_ptr = - data_mask + (b * deformable_group + deformable_group_index) * kernel_h * - kernel_w * height_col * width_col; - const int data_offset_h_ptr = - ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out; - const int data_offset_w_ptr = - ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out; - const int data_mask_hw_ptr = - ((i * kernel_w + j) * height_col + h_out) * width_col + w_out; - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - const T cur_inv_h_data = h_in + i * dilation_h + offset_h; - const T cur_inv_w_data = w_in + j * dilation_w + offset_w; - - const T cur_top_grad = data_col[index] * mask; - const int cur_h = (int)cur_inv_h_data; - const int cur_w = (int)cur_inv_w_data; - for (int dy = -2; dy <= 2; dy++) { - for (int dx = -2; dx <= 2; dx++) { - if (cur_h + dy >= 0 && cur_h + dy < height && cur_w + dx >= 0 && - cur_w + dx < width && abs(cur_inv_h_data - (cur_h + dy)) < 1 && - abs(cur_inv_w_data - (cur_w + dx)) < 1) { - int cur_bottom_grad_pos = - ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx; - T weight = dmcn_get_gradient_weight_cpu(cur_inv_h_data, - cur_inv_w_data, cur_h + dy, - cur_w + dx, height, width); - *(grad_im + cur_bottom_grad_pos) += weight * cur_top_grad; - } - } - } - } -} - -template -void modulated_deformable_col2im_coord_cpu_kernel( - const int n, const T *data_col, const T *data_im, const T *data_offset, - const T *data_mask, const int channels, const int height, const int width, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int channel_per_deformable_group, - const int batch_size, const int offset_channels, const int deformable_group, - const int height_col, const int width_col, T *grad_offset, T *grad_mask) { - for (int index = 0; index < n; index++) { - T val = 0, mval = 0; - int w = index % width_col; - int h = (index / width_col) % height_col; - int c = (index / width_col / height_col) % offset_channels; - int b = (index / width_col / height_col) / offset_channels; - // compute the start and end of the output - - const int deformable_group_index = c / (2 * kernel_h * kernel_w); - const int col_step = kernel_h * kernel_w; - int cnt = 0; - const T *data_col_ptr = data_col + deformable_group_index * - channel_per_deformable_group * - batch_size * width_col * height_col; - const T *data_im_ptr = - data_im + (b * deformable_group + deformable_group_index) * - channel_per_deformable_group / kernel_h / kernel_w * - height * width; - const T *data_offset_ptr = - data_offset + (b * deformable_group + deformable_group_index) * 2 * - kernel_h * kernel_w * height_col * width_col; - const T *data_mask_ptr = - data_mask + (b * deformable_group + deformable_group_index) * kernel_h * - kernel_w * height_col * width_col; - - const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w; - - for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; - col_c += col_step) { - const int col_pos = - (((col_c * batch_size + b) * height_col) + h) * width_col + w; - const int bp_dir = offset_c % 2; - - int j = (col_pos / width_col / height_col / batch_size) % kernel_w; - int i = - (col_pos / width_col / 
height_col / batch_size / kernel_w) % kernel_h; - int w_out = col_pos % width_col; - int h_out = (col_pos / width_col) % height_col; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - const int data_offset_h_ptr = - (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out); - const int data_offset_w_ptr = - (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + - w_out); - const int data_mask_hw_ptr = - (((i * kernel_w + j) * height_col + h_out) * width_col + w_out); - const T offset_h = data_offset_ptr[data_offset_h_ptr]; - const T offset_w = data_offset_ptr[data_offset_w_ptr]; - const T mask = data_mask_ptr[data_mask_hw_ptr]; - T inv_h = h_in + i * dilation_h + offset_h; - T inv_w = w_in + j * dilation_w + offset_w; - if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width) - inv_h = inv_w = -2; - else - mval += data_col_ptr[col_pos] * - dmcn_im2col_bilinear_cpu(data_im_ptr + cnt * height * width, - width, height, width, inv_h, inv_w); - const T weight = dmcn_get_coordinate_weight_cpu( - inv_h, inv_w, height, width, data_im_ptr + cnt * height * width, - width, bp_dir); - val += weight * data_col_ptr[col_pos] * mask; - cnt += 1; - } - // KERNEL_ASSIGN(grad_offset[index], offset_req, val); - grad_offset[index] = val; - if (offset_c % 2 == 0) - // KERNEL_ASSIGN(grad_mask[(((b * deformable_group + - // deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * - // height_col + h) * width_col + w], mask_req, mval); - grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * - kernel_w + - offset_c / 2) * - height_col + - h) * - width_col + - w] = mval; - } -} - -void modulated_deformable_im2col_cpu( - const Tensor data_im, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kenerl_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor data_col) { - // num_axes should be smaller than block size - const int channel_per_deformable_group = channels / deformable_group; - const int num_kernels = channels * batch_size * height_col * width_col; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_im.scalar_type(), "modulated_deformable_im2col_cpu", ([&] { - const scalar_t *data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *data_col_ = data_col.data_ptr(); - - modulated_deformable_im2col_cpu_kernel( - num_kernels, data_im_, data_offset_, data_mask_, height_im, - width_im, kernel_h, kenerl_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, channel_per_deformable_group, batch_size, - channels, deformable_group, height_col, width_col, data_col_); - })); -} - -void modulated_deformable_col2im_cpu( - const Tensor data_col, const Tensor data_offset, const Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, Tensor grad_im) { - const int channel_per_deformable_group = channels / deformable_group; - const int num_kernels = - channels * kernel_h * kernel_w * batch_size * 
height_col * width_col; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "modulated_deformable_col2im_cpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *grad_im_ = grad_im.data_ptr(); - - modulated_deformable_col2im_cpu_kernel( - num_kernels, data_col_, data_offset_, data_mask_, channels, - height_im, width_im, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, channel_per_deformable_group, - batch_size, deformable_group, height_col, width_col, grad_im_); - })); -} - -void modulated_deformable_col2im_coord_cpu( - const Tensor data_col, const Tensor data_im, const Tensor data_offset, - const Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - Tensor grad_offset, Tensor grad_mask) { - const int num_kernels = batch_size * height_col * width_col * 2 * kernel_h * - kernel_w * deformable_group; - const int channel_per_deformable_group = - channels * kernel_h * kernel_w / deformable_group; - - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - data_col.scalar_type(), "modulated_deformable_col2im_coord_cpu", ([&] { - const scalar_t *data_col_ = data_col.data_ptr(); - const scalar_t *data_im_ = data_im.data_ptr(); - const scalar_t *data_offset_ = data_offset.data_ptr(); - const scalar_t *data_mask_ = data_mask.data_ptr(); - scalar_t *grad_offset_ = grad_offset.data_ptr(); - scalar_t *grad_mask_ = grad_mask.data_ptr(); - - modulated_deformable_col2im_coord_cpu_kernel( - num_kernels, data_col_, data_im_, data_offset_, data_mask_, - channels, height_im, width_im, kernel_h, kernel_w, pad_h, pad_w, - stride_h, stride_w, dilation_h, dilation_w, - channel_per_deformable_group, batch_size, - 2 * kernel_h * kernel_w * deformable_group, deformable_group, - height_col, width_col, grad_offset_, grad_mask_); - })); -} diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/pybind.cpp b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/pybind.cpp deleted file mode 100755 index e9273ecd..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/pybind.cpp +++ /dev/null @@ -1,82 +0,0 @@ -// Copyright (c) OpenMMLab. 
All rights reserved -#include "pytorch_cpp_helper.hpp" - -std::string get_compiler_version(); -std::string get_compiling_cuda_version(); - -void modulated_deform_conv_forward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor output, Tensor columns, int kernel_h, int kernel_w, - const int stride_h, const int stride_w, const int pad_h, const int pad_w, - const int dilation_h, const int dilation_w, const int group, - const int deformable_group, const bool with_bias); - -void modulated_deform_conv_backward( - Tensor input, Tensor weight, Tensor bias, Tensor ones, Tensor offset, - Tensor mask, Tensor columns, Tensor grad_input, Tensor grad_weight, - Tensor grad_bias, Tensor grad_offset, Tensor grad_mask, Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias); - -void sync_bn_forward_mean(const Tensor input, Tensor mean); - -void sync_bn_forward_var(const Tensor input, const Tensor mean, Tensor var); - -void sync_bn_forward_output(const Tensor input, const Tensor mean, - const Tensor var, const Tensor weight, - const Tensor bias, Tensor running_mean, - Tensor running_var, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size); - -void sync_bn_backward_param(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias); - -void sync_bn_backward_data(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, const Tensor grad_bias, - const Tensor norm, const Tensor std, - Tensor grad_input); - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - - m.def("get_compiler_version", &get_compiler_version, "get_compiler_version"); - m.def("get_compiling_cuda_version", &get_compiling_cuda_version, - "get_compiling_cuda_version"); - - m.def("modulated_deform_conv_forward", &modulated_deform_conv_forward, - "modulated deform conv forward", py::arg("input"), py::arg("weight"), - py::arg("bias"), py::arg("ones"), py::arg("offset"), py::arg("mask"), - py::arg("output"), py::arg("columns"), py::arg("kernel_h"), - py::arg("kernel_w"), py::arg("stride_h"), py::arg("stride_w"), - py::arg("pad_h"), py::arg("pad_w"), py::arg("dilation_h"), - py::arg("dilation_w"), py::arg("group"), py::arg("deformable_group"), - py::arg("with_bias")); - m.def("modulated_deform_conv_backward", &modulated_deform_conv_backward, - "modulated deform conv backward", py::arg("input"), py::arg("weight"), - py::arg("bias"), py::arg("ones"), py::arg("offset"), py::arg("mask"), - py::arg("columns"), py::arg("grad_input"), py::arg("grad_weight"), - py::arg("grad_bias"), py::arg("grad_offset"), py::arg("grad_mask"), - py::arg("grad_output"), py::arg("kernel_h"), py::arg("kernel_w"), - py::arg("stride_h"), py::arg("stride_w"), py::arg("pad_h"), - py::arg("pad_w"), py::arg("dilation_h"), py::arg("dilation_w"), - py::arg("group"), py::arg("deformable_group"), py::arg("with_bias")); - - m.def("sync_bn_forward_mean", &sync_bn_forward_mean, "sync_bn forward_mean", - py::arg("input"), py::arg("mean")); - m.def("sync_bn_forward_var", &sync_bn_forward_var, "sync_bn forward_var", - py::arg("input"), py::arg("mean"), py::arg("var")); - m.def("sync_bn_forward_output", &sync_bn_forward_output, - "sync_bn forward_output", py::arg("input"), py::arg("mean"), - py::arg("var"), py::arg("weight"), py::arg("bias"), - py::arg("running_mean"), py::arg("running_var"), py::arg("norm"), - py::arg("std"), py::arg("output"), 
py::arg("eps"), py::arg("momentum"), - py::arg("group_size")); - m.def("sync_bn_backward_param", &sync_bn_backward_param, - "sync_bn backward_param", py::arg("grad_output"), py::arg("norm"), - py::arg("grad_weight"), py::arg("grad_bias")); - m.def("sync_bn_backward_data", &sync_bn_backward_data, - "sync_bn backward_data", py::arg("grad_output"), py::arg("weight"), - py::arg("grad_weight"), py::arg("grad_bias"), py::arg("norm"), - py::arg("std"), py::arg("grad_input")); -} diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/sync_bn.cpp b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/sync_bn.cpp deleted file mode 100755 index 2e023a85..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/csrc/pytorch/sync_bn.cpp +++ /dev/null @@ -1,159 +0,0 @@ -// Copyright (c) OpenMMLab. All rights reserved -#include "pytorch_cpp_helper.hpp" - -#ifdef MMCV_WITH_CUDA -void SyncBNForwardMeanCUDAKernelLauncher(const Tensor input, Tensor mean); - -void SyncBNForwardVarCUDAKernelLauncher(const Tensor input, const Tensor mean, - Tensor var); - -void SyncBNForwardOutputCUDAKernelLauncher( - const Tensor input, const Tensor mean, const Tensor var, - Tensor running_mean, Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, Tensor output, float eps, - float momentum, int group_size); - -void SyncBNBackwardParamCUDAKernelLauncher(const Tensor grad_output, - const Tensor norm, - Tensor grad_weight, - Tensor grad_bias); - -void SyncBNBackwardDataCUDAKernelLauncher(const Tensor grad_output, - const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, - const Tensor norm, const Tensor std, - Tensor grad_input); - -void sync_bn_forward_mean_cuda(const Tensor input, Tensor mean) { - SyncBNForwardMeanCUDAKernelLauncher(input, mean); -} - -void sync_bn_forward_var_cuda(const Tensor input, const Tensor mean, - Tensor var) { - SyncBNForwardVarCUDAKernelLauncher(input, mean, var); -} - -void sync_bn_forward_output_cuda(const Tensor input, const Tensor mean, - const Tensor var, Tensor running_mean, - Tensor running_var, const Tensor weight, - const Tensor bias, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size) { - SyncBNForwardOutputCUDAKernelLauncher(input, mean, var, running_mean, - running_var, weight, bias, norm, std, - output, eps, momentum, group_size); -} - -void sync_bn_backward_param_cuda(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias) { - SyncBNBackwardParamCUDAKernelLauncher(grad_output, norm, grad_weight, - grad_bias); -} - -void sync_bn_backward_data_cuda(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, - const Tensor grad_bias, const Tensor norm, - const Tensor std, Tensor grad_input) { - SyncBNBackwardDataCUDAKernelLauncher(grad_output, weight, grad_weight, - grad_bias, norm, std, grad_input); -} -#endif - -void sync_bn_forward_mean(const Tensor input, Tensor mean) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(mean); - sync_bn_forward_mean_cuda(input, mean); -#else - AT_ERROR("SyncBatchNorm is not compiled with GPU support"); -#endif - } else { - AT_ERROR("SyncBatchNorm is not implemented on CPU"); - } -} - -void sync_bn_forward_var(const Tensor input, const Tensor mean, Tensor var) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(mean); - CHECK_CUDA_INPUT(var); - sync_bn_forward_var_cuda(input, mean, var); -#else 
- AT_ERROR("SyncBatchNorm is not compiled with GPU support"); -#endif - } else { - AT_ERROR("SyncBatchNorm is not implemented on CPU"); - } -} - -void sync_bn_forward_output(const Tensor input, const Tensor mean, - const Tensor var, const Tensor weight, - const Tensor bias, Tensor running_mean, - Tensor running_var, Tensor norm, Tensor std, - Tensor output, float eps, float momentum, - int group_size) { - if (input.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(input); - CHECK_CUDA_INPUT(mean); - CHECK_CUDA_INPUT(var); - CHECK_CUDA_INPUT(weight); - CHECK_CUDA_INPUT(bias); - CHECK_CUDA_INPUT(running_mean); - CHECK_CUDA_INPUT(running_var); - CHECK_CUDA_INPUT(norm); - CHECK_CUDA_INPUT(std); - CHECK_CUDA_INPUT(output); - sync_bn_forward_output_cuda(input, mean, var, running_mean, running_var, - weight, bias, norm, std, output, eps, momentum, - group_size); -#else - AT_ERROR("SyncBatchNorm is not compiled with GPU support"); -#endif - } else { - AT_ERROR("SyncBatchNorm is not implemented on CPU"); - } -} - -void sync_bn_backward_param(const Tensor grad_output, const Tensor norm, - Tensor grad_weight, Tensor grad_bias) { - if (grad_output.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(grad_output); - CHECK_CUDA_INPUT(norm); - CHECK_CUDA_INPUT(grad_weight); - CHECK_CUDA_INPUT(grad_bias); - sync_bn_backward_param_cuda(grad_output, norm, grad_weight, grad_bias); -#else - AT_ERROR("SyncBatchNorm is not compiled with GPU support"); -#endif - } else { - AT_ERROR("SyncBatchNorm is not implemented on CPU"); - } -} - -void sync_bn_backward_data(const Tensor grad_output, const Tensor weight, - const Tensor grad_weight, const Tensor grad_bias, - const Tensor norm, const Tensor std, - Tensor grad_input) { - if (grad_output.device().is_cuda()) { -#ifdef MMCV_WITH_CUDA - CHECK_CUDA_INPUT(grad_output); - CHECK_CUDA_INPUT(weight); - CHECK_CUDA_INPUT(grad_weight); - CHECK_CUDA_INPUT(grad_bias); - CHECK_CUDA_INPUT(norm); - CHECK_CUDA_INPUT(std); - CHECK_CUDA_INPUT(grad_input); - sync_bn_backward_data_cuda(grad_output, weight, grad_weight, grad_bias, - norm, std, grad_input); -#else - AT_ERROR("SyncBatchNorm is not compiled with GPU support"); -#endif - } else { - AT_ERROR("SyncBatchNorm is not implemented on CPU"); - } -} diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/deprecated_wrappers.py b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/deprecated_wrappers.py deleted file mode 100755 index a2e593df..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/deprecated_wrappers.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# This file is for backward compatibility. -# Module wrappers for empty tensor have been moved to mmcv.cnn.bricks. -import warnings - -from ..cnn.bricks.wrappers import Conv2d, ConvTranspose2d, Linear, MaxPool2d - - -class Conv2d_deprecated(Conv2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing Conv2d wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead') - - -class ConvTranspose2d_deprecated(ConvTranspose2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing ConvTranspose2d wrapper from "mmcv.ops" will be ' - 'deprecated in the future. 
Please import them from "mmcv.cnn" ' - 'instead') - - -class MaxPool2d_deprecated(MaxPool2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing MaxPool2d wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead') - - -class Linear_deprecated(Linear): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing Linear wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead') diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/info.py b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/info.py deleted file mode 100755 index 29f2e559..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/info.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import glob -import os - -import torch - -if torch.__version__ == 'parrots': - import parrots - - def get_compiler_version(): - return 'GCC ' + parrots.version.compiler - - def get_compiling_cuda_version(): - return parrots.version.cuda -else: - from ..utils import ext_loader - ext_module = ext_loader.load_ext( - '_ext', ['get_compiler_version', 'get_compiling_cuda_version']) - - def get_compiler_version(): - return ext_module.get_compiler_version() - - def get_compiling_cuda_version(): - return ext_module.get_compiling_cuda_version() - - -def get_onnxruntime_op_path(): - wildcard = os.path.join( - os.path.abspath(os.path.dirname(os.path.dirname(__file__))), - '_ext_ort.*.so') - - paths = glob.glob(wildcard) - if len(paths) > 0: - return paths[0] - else: - return '' diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/modulated_deform_conv.py b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/modulated_deform_conv.py deleted file mode 100755 index 34179805..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/modulated_deform_conv.py +++ /dev/null @@ -1,282 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair, _single - -from mmcv.utils import deprecated_api_warning -from ..cnn import CONV_LAYERS -from ..utils import ext_loader, print_log - -ext_module = ext_loader.load_ext( - '_ext', - ['modulated_deform_conv_forward', 'modulated_deform_conv_backward']) - - -class ModulatedDeformConv2dFunction(Function): - - @staticmethod - def symbolic(g, input, offset, mask, weight, bias, stride, padding, - dilation, groups, deform_groups): - input_tensors = [input, offset, mask, weight] - if bias is not None: - input_tensors.append(bias) - return g.op( - 'mmcv::MMCVModulatedDeformConv2d', - *input_tensors, - stride_i=stride, - padding_i=padding, - dilation_i=dilation, - groups_i=groups, - deform_groups_i=deform_groups) - - @staticmethod - def forward(ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1): - if input is not None and input.dim() != 4: - raise ValueError( - f'Expected 4D tensor as input, got {input.dim()}D tensor \ - instead.') - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deform_groups = deform_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(0) # fake tensor - # When pytorch version >= 1.6.0, amp is adopted for fp16 mode; - # amp won't cast the type of model (float32), but "offset" is cast - # to float16 by nn.Conv2d automatically, leading to the type - # mismatch with input (when it is float32) or weight. - # The flag for whether to use fp16 or amp is the type of "offset", - # we cast weight and input to temporarily support fp16 and amp - # whatever the pytorch version is. 
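# A minimal standalone sketch of the dtype-alignment trick described in the
# comment above (illustrative helper, not the mmcv API): under amp the offset
# produced by nn.Conv2d may be float16 while the float32 module parameters are
# not, so both operands are cast to offset's dtype before calling the custom op.
import torch

def align_for_amp(input, weight, offset):
    input = input.type_as(offset)    # follow offset's dtype (fp16 under amp)
    weight = weight.type_as(input)   # keep weight consistent with input
    return input, weight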
- input = input.type_as(offset) - weight = weight.type_as(input) - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty( - ModulatedDeformConv2dFunction._output_size(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - ext_module.modulated_deform_conv_forward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - output, - ctx._bufs[1], - kernel_h=weight.size(2), - kernel_w=weight.size(3), - stride_h=ctx.stride[0], - stride_w=ctx.stride[1], - pad_h=ctx.padding[0], - pad_w=ctx.padding[1], - dilation_h=ctx.dilation[0], - dilation_w=ctx.dilation[1], - group=ctx.groups, - deformable_group=ctx.deform_groups, - with_bias=ctx.with_bias) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - grad_output = grad_output.contiguous() - ext_module.modulated_deform_conv_backward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - ctx._bufs[1], - grad_input, - grad_weight, - grad_bias, - grad_offset, - grad_mask, - grad_output, - kernel_h=weight.size(2), - kernel_w=weight.size(3), - stride_h=ctx.stride[0], - stride_w=ctx.stride[1], - pad_h=ctx.padding[0], - pad_w=ctx.padding[1], - dilation_h=ctx.dilation[0], - dilation_w=ctx.dilation[1], - group=ctx.groups, - deformable_group=ctx.deform_groups, - with_bias=ctx.with_bias) - if not ctx.with_bias: - grad_bias = None - - return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, - None, None, None, None, None) - - @staticmethod - def _output_size(ctx, input, weight): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = ctx.padding[d] - kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = ctx.stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - 'convolution input is too small (output would be ' + - 'x'.join(map(str, output_size)) + ')') - return output_size - - -modulated_deform_conv2d = ModulatedDeformConv2dFunction.apply - - -class ModulatedDeformConv2d(nn.Module): - - @deprecated_api_warning({'deformable_groups': 'deform_groups'}, - cls_name='ModulatedDeformConv2d') - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1, - bias=True): - super(ModulatedDeformConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deform_groups = deform_groups - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // groups, - *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. 
/ math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - def forward(self, x, offset, mask): - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - -@CONV_LAYERS.register_module('DCNv2') -class ModulatedDeformConv2dPack(ModulatedDeformConv2d): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv - layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int): Same as nn.Conv2d, while tuple is not supported. - padding (int): Same as nn.Conv2d, while tuple is not supported. - dilation (int): Same as nn.Conv2d, while tuple is not supported. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(ModulatedDeformConv2dPack, self).__init__(*args, **kwargs) - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deform_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=self.stride, - padding=self.padding, - dilation=self.dilation, - bias=True) - self.init_weights() - - def init_weights(self): - super(ModulatedDeformConv2dPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - out = self.conv_offset(x) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - version = local_metadata.get('version', None) - - if version is None or version < 2: - # the key is different in early versions - # In version < 2, ModulatedDeformConvPack - # loads previous benchmark models. - if (prefix + 'conv_offset.weight' not in state_dict - and prefix[:-1] + '_offset.weight' in state_dict): - state_dict[prefix + 'conv_offset.weight'] = state_dict.pop( - prefix[:-1] + '_offset.weight') - if (prefix + 'conv_offset.bias' not in state_dict - and prefix[:-1] + '_offset.bias' in state_dict): - state_dict[prefix + - 'conv_offset.bias'] = state_dict.pop(prefix[:-1] + - '_offset.bias') - - if version is not None and version > 1: - print_log( - f'ModulatedDeformConvPack {prefix.rstrip(".")} is upgraded to ' - 'version 2.', - logger='root') - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/sync_bn.py b/cv/super_resolution/basicvsr/pytorch/mmcv/ops/sync_bn.py deleted file mode 100755 index 04302f03..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/ops/sync_bn.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.distributed as dist -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.module import Module -from torch.nn.parameter import Parameter - -from mmcv.cnn import NORM_LAYERS -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'sync_bn_forward_mean', 'sync_bn_forward_var', 'sync_bn_forward_output', - 'sync_bn_backward_param', 'sync_bn_backward_data' -]) - - -class SyncBatchNormFunction(Function): - - @staticmethod - def symbolic(g, input, running_mean, running_var, weight, bias, momentum, - eps, group, group_size, stats_mode): - return g.op( - 'mmcv::MMCVSyncBatchNorm', - input, - running_mean, - running_var, - weight, - bias, - momentum_f=momentum, - eps_f=eps, - group_i=group, - group_size_i=group_size, - stats_mode=stats_mode) - - @staticmethod - def forward(self, input, running_mean, running_var, weight, bias, momentum, - eps, group, group_size, stats_mode): - self.momentum = momentum - self.eps = eps - self.group = group - self.group_size = group_size - self.stats_mode = stats_mode - - assert isinstance( - input, (torch.HalfTensor, torch.FloatTensor, - torch.cuda.HalfTensor, torch.cuda.FloatTensor)), \ - f'only support Half or Float Tensor, but {input.type()}' - output = torch.zeros_like(input) - input3d = input.flatten(start_dim=2) - output3d = output.view_as(input3d) - num_channels = input3d.size(1) - - # ensure mean/var/norm/std are initialized as zeros - # ``torch.empty()`` does not guarantee that - mean = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - var = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - norm = torch.zeros_like( - input3d, dtype=torch.float, device=input3d.device) - std = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - - batch_size = input3d.size(0) - if batch_size > 0: - ext_module.sync_bn_forward_mean(input3d, mean) - batch_flag = torch.ones([1], device=mean.device, dtype=mean.dtype) - else: - # skip updating mean and leave it as zeros when the input is empty - batch_flag = torch.zeros([1], device=mean.device, dtype=mean.dtype) - - # synchronize mean and the batch flag - vec = torch.cat([mean, batch_flag]) - if self.stats_mode == 'N': - vec *= batch_size - if self.group_size > 1: - dist.all_reduce(vec, group=self.group) - total_batch = vec[-1].detach() - mean = vec[:num_channels] - - if self.stats_mode == 'default': - mean = mean / self.group_size - elif self.stats_mode == 'N': - mean = mean / total_batch.clamp(min=1) - else: - raise NotImplementedError - - # leave var as zeros when the input is empty - if batch_size > 0: - ext_module.sync_bn_forward_var(input3d, mean, var) - - if self.stats_mode == 'N': - var *= batch_size - if self.group_size > 1: - dist.all_reduce(var, group=self.group) - - if self.stats_mode == 'default': - var /= self.group_size - elif self.stats_mode == 'N': - var /= total_batch.clamp(min=1) - else: - raise NotImplementedError - - # if the total batch size over all the ranks is zero, - # we should not update the statistics in the current batch - update_flag = total_batch.clamp(max=1) - momentum = update_flag * self.momentum - ext_module.sync_bn_forward_output( - input3d, - mean, - var, - weight, - bias, - running_mean, - running_var, - norm, - std, - output3d, - eps=self.eps, - momentum=momentum, - group_size=self.group_size) - self.save_for_backward(norm, std, weight) - return output - - @staticmethod - 
@once_differentiable - def backward(self, grad_output): - norm, std, weight = self.saved_tensors - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(weight) - grad_input = torch.zeros_like(grad_output) - grad_output3d = grad_output.flatten(start_dim=2) - grad_input3d = grad_input.view_as(grad_output3d) - - batch_size = grad_input3d.size(0) - if batch_size > 0: - ext_module.sync_bn_backward_param(grad_output3d, norm, grad_weight, - grad_bias) - - # all reduce - if self.group_size > 1: - dist.all_reduce(grad_weight, group=self.group) - dist.all_reduce(grad_bias, group=self.group) - grad_weight /= self.group_size - grad_bias /= self.group_size - - if batch_size > 0: - ext_module.sync_bn_backward_data(grad_output3d, weight, - grad_weight, grad_bias, norm, std, - grad_input3d) - - return grad_input, None, None, grad_weight, grad_bias, \ - None, None, None, None, None - - -@NORM_LAYERS.register_module(name='MMSyncBN') -class SyncBatchNorm(Module): - """Synchronized Batch Normalization. - - Args: - num_features (int): number of features/chennels in input tensor - eps (float, optional): a value added to the denominator for numerical - stability. Defaults to 1e-5. - momentum (float, optional): the value used for the running_mean and - running_var computation. Defaults to 0.1. - affine (bool, optional): whether to use learnable affine parameters. - Defaults to True. - track_running_stats (bool, optional): whether to track the running - mean and variance during training. When set to False, this - module does not track such statistics, and initializes statistics - buffers ``running_mean`` and ``running_var`` as ``None``. When - these buffers are ``None``, this module always uses batch - statistics in both training and eval modes. Defaults to True. - group (int, optional): synchronization of stats happen within - each process group individually. By default it is synchronization - across the whole world. Defaults to None. - stats_mode (str, optional): The statistical mode. Available options - includes ``'default'`` and ``'N'``. Defaults to 'default'. - When ``stats_mode=='default'``, it computes the overall statistics - using those from each worker with equal weight, i.e., the - statistics are synchronized and simply divied by ``group``. This - mode will produce inaccurate statistics when empty tensors occur. - When ``stats_mode=='N'``, it compute the overall statistics using - the total number of batches in each worker ignoring the number of - group, i.e., the statistics are synchronized and then divied by - the total batch ``N``. This mode is beneficial when empty tensors - occur during training, as it average the total mean by the real - number of batch. 
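# A small self-contained sketch of the two averaging rules described above
# (hypothetical helper, not the mmcv kernel): 'default' weights every worker
# equally, while 'N' weights each worker's mean by its batch size, so workers
# that saw an empty batch contribute nothing.
import torch

def combine_means(per_worker_means, per_worker_batch_sizes, stats_mode='default'):
    means = torch.stack(per_worker_means)                       # (world_size, C)
    sizes = torch.tensor(per_worker_batch_sizes, dtype=means.dtype)
    if stats_mode == 'default':
        return means.mean(dim=0)
    if stats_mode == 'N':
        total = sizes.sum().clamp(min=1)
        return (means * sizes[:, None]).sum(dim=0) / total
    raise NotImplementedError(stats_mode)

# combine_means([torch.tensor([1.0]), torch.tensor([0.0])], [4, 0], 'N')
# -> tensor([1.]) ; the empty worker does not drag the mean towards zero.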
- """ - - def __init__(self, - num_features, - eps=1e-5, - momentum=0.1, - affine=True, - track_running_stats=True, - group=None, - stats_mode='default'): - super(SyncBatchNorm, self).__init__() - self.num_features = num_features - self.eps = eps - self.momentum = momentum - self.affine = affine - self.track_running_stats = track_running_stats - group = dist.group.WORLD if group is None else group - self.group = group - self.group_size = dist.get_world_size(group) - assert stats_mode in ['default', 'N'], \ - f'"stats_mode" only accepts "default" and "N", got "{stats_mode}"' - self.stats_mode = stats_mode - if self.affine: - self.weight = Parameter(torch.Tensor(num_features)) - self.bias = Parameter(torch.Tensor(num_features)) - else: - self.register_parameter('weight', None) - self.register_parameter('bias', None) - if self.track_running_stats: - self.register_buffer('running_mean', torch.zeros(num_features)) - self.register_buffer('running_var', torch.ones(num_features)) - self.register_buffer('num_batches_tracked', - torch.tensor(0, dtype=torch.long)) - else: - self.register_buffer('running_mean', None) - self.register_buffer('running_var', None) - self.register_buffer('num_batches_tracked', None) - self.reset_parameters() - - def reset_running_stats(self): - if self.track_running_stats: - self.running_mean.zero_() - self.running_var.fill_(1) - self.num_batches_tracked.zero_() - - def reset_parameters(self): - self.reset_running_stats() - if self.affine: - self.weight.data.uniform_() # pytorch use ones_() - self.bias.data.zero_() - - def forward(self, input): - if input.dim() < 2: - raise ValueError( - f'expected at least 2D input, got {input.dim()}D input') - if self.momentum is None: - exponential_average_factor = 0.0 - else: - exponential_average_factor = self.momentum - - if self.training and self.track_running_stats: - if self.num_batches_tracked is not None: - self.num_batches_tracked += 1 - if self.momentum is None: # use cumulative moving average - exponential_average_factor = 1.0 / float( - self.num_batches_tracked) - else: # use exponential moving average - exponential_average_factor = self.momentum - - if self.training or not self.track_running_stats: - return SyncBatchNormFunction.apply( - input, self.running_mean, self.running_var, self.weight, - self.bias, exponential_average_factor, self.eps, self.group, - self.group_size, self.stats_mode) - else: - return F.batch_norm(input, self.running_mean, self.running_var, - self.weight, self.bias, False, - exponential_average_factor, self.eps) - - def __repr__(self): - s = self.__class__.__name__ - s += f'({self.num_features}, ' - s += f'eps={self.eps}, ' - s += f'momentum={self.momentum}, ' - s += f'affine={self.affine}, ' - s += f'track_running_stats={self.track_running_stats}, ' - s += f'group_size={self.group_size},' - s += f'stats_mode={self.stats_mode})' - return s diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/__init__.py deleted file mode 100755 index 2ed2c17a..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .collate import collate -from .data_container import DataContainer -from .data_parallel import MMDataParallel -from .distributed import MMDistributedDataParallel -from .registry import MODULE_WRAPPERS -from .scatter_gather import scatter, scatter_kwargs -from .utils import is_module_wrapper - -__all__ = [ - 'collate', 'DataContainer', 'MMDataParallel', 'MMDistributedDataParallel', - 'scatter', 'scatter_kwargs', 'is_module_wrapper', 'MODULE_WRAPPERS' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/_functions.py b/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/_functions.py deleted file mode 100755 index 9b5a8a44..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/_functions.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import _get_stream - - -def scatter(input, devices, streams=None): - """Scatters tensor across multiple GPUs.""" - if streams is None: - streams = [None] * len(devices) - - if isinstance(input, list): - chunk_size = (len(input) - 1) // len(devices) + 1 - outputs = [ - scatter(input[i], [devices[i // chunk_size]], - [streams[i // chunk_size]]) for i in range(len(input)) - ] - return outputs - elif isinstance(input, torch.Tensor): - output = input.contiguous() - # TODO: copy to a pinned buffer first (if copying from CPU) - stream = streams[0] if output.numel() > 0 else None - if devices != [-1]: - with torch.cuda.device(devices[0]), torch.cuda.stream(stream): - output = output.cuda(devices[0], non_blocking=True) - else: - # unsqueeze the first dimension thus the tensor's shape is the - # same as those scattered with GPU. - output = output.unsqueeze(0) - return output - else: - raise Exception(f'Unknown type {type(input)}.') - - -def synchronize_stream(output, devices, streams): - if isinstance(output, list): - chunk_size = len(output) // len(devices) - for i in range(len(devices)): - for j in range(chunk_size): - synchronize_stream(output[i * chunk_size + j], [devices[i]], - [streams[i]]) - elif isinstance(output, torch.Tensor): - if output.numel() != 0: - with torch.cuda.device(devices[0]): - main_stream = torch.cuda.current_stream() - main_stream.wait_stream(streams[0]) - output.record_stream(main_stream) - else: - raise Exception(f'Unknown type {type(output)}.') - - -def get_input_device(input): - if isinstance(input, list): - for item in input: - input_device = get_input_device(item) - if input_device != -1: - return input_device - return -1 - elif isinstance(input, torch.Tensor): - return input.get_device() if input.is_cuda else -1 - else: - raise Exception(f'Unknown type {type(input)}.') - - -class Scatter: - - @staticmethod - def forward(target_gpus, input): - input_device = get_input_device(input) - streams = None - if input_device == -1 and target_gpus != [-1]: - # Perform CPU to GPU copies in a background stream - streams = [_get_stream(device) for device in target_gpus] - - outputs = scatter(input, target_gpus, streams) - # Synchronize with the copy stream - if streams is not None: - synchronize_stream(outputs, target_gpus, streams) - - return tuple(outputs) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/collate.py b/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/collate.py deleted file mode 100755 index ad749197..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/collate.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections.abc import Mapping, Sequence - -import torch -import torch.nn.functional as F -from torch.utils.data.dataloader import default_collate - -from .data_container import DataContainer - - -def collate(batch, samples_per_gpu=1): - """Puts each data field into a tensor/DataContainer with outer dimension - batch size. - - Extend default_collate to add support for - :type:`~mmcv.parallel.DataContainer`. There are 3 cases. - - 1. cpu_only = True, e.g., meta data - 2. cpu_only = False, stack = True, e.g., images tensors - 3. cpu_only = False, stack = False, e.g., gt bboxes - """ - - if not isinstance(batch, Sequence): - raise TypeError(f'{batch.dtype} is not supported.') - - if isinstance(batch[0], DataContainer): - stacked = [] - if batch[0].cpu_only: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer( - stacked, batch[0].stack, batch[0].padding_value, cpu_only=True) - elif batch[0].stack: - for i in range(0, len(batch), samples_per_gpu): - assert isinstance(batch[i].data, torch.Tensor) - - if batch[i].pad_dims is not None: - ndim = batch[i].dim() - assert ndim > batch[i].pad_dims - max_shape = [0 for _ in range(batch[i].pad_dims)] - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = batch[i].size(-dim) - for sample in batch[i:i + samples_per_gpu]: - for dim in range(0, ndim - batch[i].pad_dims): - assert batch[i].size(dim) == sample.size(dim) - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = max(max_shape[dim - 1], - sample.size(-dim)) - padded_samples = [] - for sample in batch[i:i + samples_per_gpu]: - pad = [0 for _ in range(batch[i].pad_dims * 2)] - for dim in range(1, batch[i].pad_dims + 1): - pad[2 * dim - - 1] = max_shape[dim - 1] - sample.size(-dim) - padded_samples.append( - F.pad( - sample.data, pad, value=sample.padding_value)) - stacked.append(default_collate(padded_samples)) - elif batch[i].pad_dims is None: - stacked.append( - default_collate([ - sample.data - for sample in batch[i:i + samples_per_gpu] - ])) - else: - raise ValueError( - 'pad_dims should be either None or integers (1-3)') - - else: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer(stacked, batch[0].stack, batch[0].padding_value) - elif isinstance(batch[0], Sequence): - transposed = zip(*batch) - return [collate(samples, samples_per_gpu) for samples in transposed] - elif isinstance(batch[0], Mapping): - return { - key: collate([d[key] for d in batch], samples_per_gpu) - for key in batch[0] - } - else: - return default_collate(batch) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/data_container.py b/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/data_container.py deleted file mode 100755 index cedb0d32..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/data_container.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import torch - - -def assert_tensor_type(func): - - @functools.wraps(func) - def wrapper(*args, **kwargs): - if not isinstance(args[0].data, torch.Tensor): - raise AttributeError( - f'{args[0].__class__.__name__} has no attribute ' - f'{func.__name__} for type {args[0].datatype}') - return func(*args, **kwargs) - - return wrapper - - -class DataContainer: - """A container for any type of objects. 
- - Typically tensors will be stacked in the collate function and sliced along - some dimension in the scatter function. This behavior has some limitations. - 1. All tensors have to be the same size. - 2. Types are limited (numpy array or Tensor). - - We design `DataContainer` and `MMDataParallel` to overcome these - limitations. The behavior can be either of the following. - - - copy to GPU, pad all tensors to the same size and stack them - - copy to GPU without stacking - - leave the objects as is and pass it to the model - - pad_dims specifies the number of last few dimensions to do padding - """ - - def __init__(self, - data, - stack=False, - padding_value=0, - cpu_only=False, - pad_dims=2): - self._data = data - self._cpu_only = cpu_only - self._stack = stack - self._padding_value = padding_value - assert pad_dims in [None, 1, 2, 3] - self._pad_dims = pad_dims - - def __repr__(self): - return f'{self.__class__.__name__}({repr(self.data)})' - - def __len__(self): - return len(self._data) - - @property - def data(self): - return self._data - - @property - def datatype(self): - if isinstance(self.data, torch.Tensor): - return self.data.type() - else: - return type(self.data) - - @property - def cpu_only(self): - return self._cpu_only - - @property - def stack(self): - return self._stack - - @property - def padding_value(self): - return self._padding_value - - @property - def pad_dims(self): - return self._pad_dims - - @assert_tensor_type - def size(self, *args, **kwargs): - return self.data.size(*args, **kwargs) - - @assert_tensor_type - def dim(self): - return self.data.dim() diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/data_parallel.py b/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/data_parallel.py deleted file mode 100755 index 79b5f69b..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/data_parallel.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from itertools import chain - -from torch.nn.parallel import DataParallel - -from .scatter_gather import scatter_kwargs - - -class MMDataParallel(DataParallel): - """The DataParallel module that supports DataContainer. - - MMDataParallel has two main differences with PyTorch DataParallel: - - - It supports a custom type :class:`DataContainer` which allows more - flexible control of input data during both GPU and CPU inference. - - It implement two more APIs ``train_step()`` and ``val_step()``. - - Args: - module (:class:`nn.Module`): Module to be encapsulated. - device_ids (list[int]): Device IDS of modules to be scattered to. - Defaults to None when GPU is not available. - output_device (str | int): Device ID for output. Defaults to None. - dim (int): Dimension used to scatter the data. Defaults to 0. - """ - - def __init__(self, *args, dim=0, **kwargs): - super(MMDataParallel, self).__init__(*args, dim=dim, **kwargs) - self.dim = dim - - def forward(self, *inputs, **kwargs): - """Override the original forward function. - - The main difference lies in the CPU inference where the data in - :class:`DataContainers` will still be gathered. 
- """ - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module(*inputs[0], **kwargs[0]) - else: - return super().forward(*inputs, **kwargs) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def train_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.train_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - 'instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.train_step(*inputs[0], **kwargs[0]) - - def val_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.val_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - ' instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.val_step(*inputs[0], **kwargs[0]) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/distributed.py b/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/distributed.py deleted file mode 100755 index b799a213..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/distributed.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel.distributed import (DistributedDataParallel, - _find_tensors) - -from mmcv import print_log -from mmcv.utils import TORCH_VERSION, digit_version -from .scatter_gather import scatter_kwargs - - -class MMDistributedDataParallel(DistributedDataParallel): - """The DDP module that supports DataContainer. - - MMDDP has two main differences with PyTorch DDP: - - - It supports a custom type :class:`DataContainer` which allows more - flexible control of input data. - - It implement two APIs ``train_step()`` and ``val_step()``. 
- """ - - def to_kwargs(self, inputs, kwargs, device_id): - # Use `self.to_kwargs` instead of `self.scatter` in pytorch1.8 - # to move all tensors to device_id - return scatter_kwargs(inputs, kwargs, [device_id], dim=self.dim) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def train_step(self, *inputs, **kwargs): - """train_step() API for module wrapped by DistributedDataParallel. - - This method is basically the same as - ``DistributedDataParallel.forward()``, while replacing - ``self.module.forward()`` with ``self.module.train_step()``. - It is compatible with PyTorch 1.1 - 1.5. - """ - - # In PyTorch >= 1.7, ``reducer._rebuild_buckets()`` is moved from the - # end of backward to the beginning of forward. - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.7') - and self.reducer._rebuild_buckets()): - print_log( - 'Reducer buckets have been rebuilt in this iteration.', - logger='mmcv') - - if getattr(self, 'require_forward_param_sync', True): - self._sync_params() - if self.device_ids: - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - if len(self.device_ids) == 1: - output = self.module.train_step(*inputs[0], **kwargs[0]) - else: - outputs = self.parallel_apply( - self._module_copies[:len(inputs)], inputs, kwargs) - output = self.gather(outputs, self.output_device) - else: - output = self.module.train_step(*inputs, **kwargs) - - if torch.is_grad_enabled() and getattr( - self, 'require_backward_grad_sync', True): - if self.find_unused_parameters: - self.reducer.prepare_for_backward(list(_find_tensors(output))) - else: - self.reducer.prepare_for_backward([]) - else: - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) > digit_version('1.2')): - self.require_forward_param_sync = False - return output - - def val_step(self, *inputs, **kwargs): - """val_step() API for module wrapped by DistributedDataParallel. - - This method is basically the same as - ``DistributedDataParallel.forward()``, while replacing - ``self.module.forward()`` with ``self.module.val_step()``. - It is compatible with PyTorch 1.1 - 1.5. - """ - # In PyTorch >= 1.7, ``reducer._rebuild_buckets()`` is moved from the - # end of backward to the beginning of forward. 
- if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) >= digit_version('1.7') - and self.reducer._rebuild_buckets()): - print_log( - 'Reducer buckets have been rebuilt in this iteration.', - logger='mmcv') - - if getattr(self, 'require_forward_param_sync', True): - self._sync_params() - if self.device_ids: - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - if len(self.device_ids) == 1: - output = self.module.val_step(*inputs[0], **kwargs[0]) - else: - outputs = self.parallel_apply( - self._module_copies[:len(inputs)], inputs, kwargs) - output = self.gather(outputs, self.output_device) - else: - output = self.module.val_step(*inputs, **kwargs) - - if torch.is_grad_enabled() and getattr( - self, 'require_backward_grad_sync', True): - if self.find_unused_parameters: - self.reducer.prepare_for_backward(list(_find_tensors(output))) - else: - self.reducer.prepare_for_backward([]) - else: - if ('parrots' not in TORCH_VERSION - and digit_version(TORCH_VERSION) > digit_version('1.2')): - self.require_forward_param_sync = False - return output diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/distributed_deprecated.py b/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/distributed_deprecated.py deleted file mode 100755 index b593d4a9..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/distributed_deprecated.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.distributed as dist -import torch.nn as nn -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - -from mmcv.utils import TORCH_VERSION, digit_version -from .registry import MODULE_WRAPPERS -from .scatter_gather import scatter_kwargs - - -@MODULE_WRAPPERS.register_module() -class MMDistributedDataParallel(nn.Module): - - def __init__(self, - module, - dim=0, - broadcast_buffers=True, - bucket_cap_mb=25): - super(MMDistributedDataParallel, self).__init__() - self.module = module - self.dim = dim - self.broadcast_buffers = broadcast_buffers - - self.broadcast_bucket_size = bucket_cap_mb * 1024 * 1024 - self._sync_params() - - def _dist_broadcast_coalesced(self, tensors, buffer_size): - for tensors in _take_tensors(tensors, buffer_size): - flat_tensors = _flatten_dense_tensors(tensors) - dist.broadcast(flat_tensors, 0) - for tensor, synced in zip( - tensors, _unflatten_dense_tensors(flat_tensors, tensors)): - tensor.copy_(synced) - - def _sync_params(self): - module_states = list(self.module.state_dict().values()) - if len(module_states) > 0: - self._dist_broadcast_coalesced(module_states, - self.broadcast_bucket_size) - if self.broadcast_buffers: - if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) < digit_version('1.0')): - buffers = [b.data for b in self.module._all_buffers()] - else: - buffers = [b.data for b in self.module.buffers()] - if len(buffers) > 0: - self._dist_broadcast_coalesced(buffers, - self.broadcast_bucket_size) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def forward(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - return self.module(*inputs[0], **kwargs[0]) - - def train_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.train_step(*inputs[0], **kwargs[0]) - return output - - def val_step(self, *inputs, **kwargs): - inputs, kwargs = 
self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.val_step(*inputs[0], **kwargs[0]) - return output diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/registry.py b/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/registry.py deleted file mode 100755 index 144f9fb1..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/registry.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from torch.nn.parallel import DataParallel, DistributedDataParallel - -from mmcv.utils import Registry - -MODULE_WRAPPERS = Registry('module wrapper') -MODULE_WRAPPERS.register_module(module=DataParallel) -MODULE_WRAPPERS.register_module(module=DistributedDataParallel) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/scatter_gather.py b/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/scatter_gather.py deleted file mode 100755 index 900ff885..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/scatter_gather.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import Scatter as OrigScatter - -from ._functions import Scatter -from .data_container import DataContainer - - -def scatter(inputs, target_gpus, dim=0): - """Scatter inputs to target gpus. - - The only difference from original :func:`scatter` is to add support for - :type:`~mmcv.parallel.DataContainer`. - """ - - def scatter_map(obj): - if isinstance(obj, torch.Tensor): - if target_gpus != [-1]: - return OrigScatter.apply(target_gpus, None, dim, obj) - else: - # for CPU inference we use self-implemented scatter - return Scatter.forward(target_gpus, obj) - if isinstance(obj, DataContainer): - if obj.cpu_only: - return obj.data - else: - return Scatter.forward(target_gpus, obj.data) - if isinstance(obj, tuple) and len(obj) > 0: - return list(zip(*map(scatter_map, obj))) - if isinstance(obj, list) and len(obj) > 0: - out = list(map(list, zip(*map(scatter_map, obj)))) - return out - if isinstance(obj, dict) and len(obj) > 0: - out = list(map(type(obj), zip(*map(scatter_map, obj.items())))) - return out - return [obj for targets in target_gpus] - - # After scatter_map is called, a scatter_map cell will exist. This cell - # has a reference to the actual function scatter_map, which has references - # to a closure that has a reference to the scatter_map cell (because the - # fn is recursive). To avoid this reference cycle, we set the function to - # None, clearing the cell - try: - return scatter_map(inputs) - finally: - scatter_map = None - - -def scatter_kwargs(inputs, kwargs, target_gpus, dim=0): - """Scatter with support for kwargs dictionary.""" - inputs = scatter(inputs, target_gpus, dim) if inputs else [] - kwargs = scatter(kwargs, target_gpus, dim) if kwargs else [] - if len(inputs) < len(kwargs): - inputs.extend([() for _ in range(len(kwargs) - len(inputs))]) - elif len(kwargs) < len(inputs): - kwargs.extend([{} for _ in range(len(inputs) - len(kwargs))]) - inputs = tuple(inputs) - kwargs = tuple(kwargs) - return inputs, kwargs diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/utils.py b/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/utils.py deleted file mode 100755 index 0f5712cb..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/parallel/utils.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .registry import MODULE_WRAPPERS - - -def is_module_wrapper(module): - """Check if a module is a module wrapper. - - The following 3 modules in MMCV (and their subclasses) are regarded as - module wrappers: DataParallel, DistributedDataParallel, - MMDistributedDataParallel (the deprecated version). You may add you own - module wrapper by registering it to mmcv.parallel.MODULE_WRAPPERS. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: True if the input module is a module wrapper. - """ - module_wrappers = tuple(MODULE_WRAPPERS.module_dict.values()) - return isinstance(module, module_wrappers) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/__init__.py deleted file mode 100755 index 52e4b48d..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/__init__.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_module import BaseModule, ModuleList, Sequential -from .base_runner import BaseRunner -from .builder import RUNNERS, build_runner -from .checkpoint import (CheckpointLoader, _load_checkpoint, - _load_checkpoint_with_prefix, load_checkpoint, - load_state_dict, save_checkpoint, weights_to_cpu) -from .default_constructor import DefaultRunnerConstructor -from .dist_utils import (allreduce_grads, allreduce_params, get_dist_info, - init_dist, master_only) -from .epoch_based_runner import EpochBasedRunner, Runner -from .fp16_utils import LossScaler, auto_fp16, force_fp32, wrap_fp16_model -from .hooks import (HOOKS, CheckpointHook, ClosureHook, DistEvalHook, - DistSamplerSeedHook, DvcliveLoggerHook, EMAHook, EvalHook, - Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, Hook, IterTimerHook, - LoggerHook, LrUpdaterHook, MlflowLoggerHook, - NeptuneLoggerHook, OptimizerHook, PaviLoggerHook, - SyncBuffersHook, TensorboardLoggerHook, TextLoggerHook, - WandbLoggerHook) -from .iter_based_runner import IterBasedRunner, IterLoader -from .log_buffer import LogBuffer -from .optimizer import (OPTIMIZER_BUILDERS, OPTIMIZERS, - DefaultOptimizerConstructor, build_optimizer, - build_optimizer_constructor) -from .priority import Priority, get_priority -from .utils import get_host_info, get_time_str, obj_from_dict, set_random_seed - -__all__ = [ - 'BaseRunner', 'Runner', 'EpochBasedRunner', 'IterBasedRunner', 'LogBuffer', - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'OptimizerHook', 'IterTimerHook', 'DistSamplerSeedHook', 'LoggerHook', - 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook', - 'NeptuneLoggerHook', 'WandbLoggerHook', 'MlflowLoggerHook', - 'DvcliveLoggerHook', '_load_checkpoint', 'load_state_dict', - 'load_checkpoint', 'weights_to_cpu', 'save_checkpoint', 'Priority', - 'get_priority', 'get_host_info', 'get_time_str', 'obj_from_dict', - 'init_dist', 'get_dist_info', 'master_only', 'OPTIMIZER_BUILDERS', - 'OPTIMIZERS', 'DefaultOptimizerConstructor', 'build_optimizer', - 'build_optimizer_constructor', 'IterLoader', 'set_random_seed', - 'auto_fp16', 'force_fp32', 'wrap_fp16_model', 'Fp16OptimizerHook', - 'SyncBuffersHook', 'EMAHook', 'build_runner', 'RUNNERS', 'allreduce_grads', - 'allreduce_params', 'LossScaler', 'CheckpointLoader', 'BaseModule', - '_load_checkpoint_with_prefix', 'EvalHook', 'DistEvalHook', 'Sequential', - 'ModuleList', 'GradientCumulativeOptimizerHook', - 'GradientCumulativeFp16OptimizerHook', 'DefaultRunnerConstructor' -] diff --git 
a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/base_module.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/base_module.py deleted file mode 100755 index 529575b8..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/base_module.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings -from abc import ABCMeta -from collections import defaultdict -from logging import FileHandler - -import torch.nn as nn - -from mmcv.runner.dist_utils import master_only -from mmcv.utils.logging import get_logger, logger_initialized, print_log - - -class BaseModule(nn.Module, metaclass=ABCMeta): - """Base module for all modules in openmmlab. - - ``BaseModule`` is a wrapper of ``torch.nn.Module`` with additional - functionality of parameter initialization. Compared with - ``torch.nn.Module``, ``BaseModule`` mainly adds three attributes. - - - ``init_cfg``: the config to control the initialization. - - ``init_weights``: The function of parameter - initialization and recording initialization - information. - - ``_params_init_info``: Used to track the parameter - initialization information. This attribute only - exists during executing the ``init_weights``. - - Args: - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, init_cfg=None): - """Initialize BaseModule, inherited from `torch.nn.Module`""" - - # NOTE init_cfg can be defined in different levels, but init_cfg - # in low levels has a higher priority. - - super(BaseModule, self).__init__() - # define default value of init_cfg instead of hard code - # in init_weights() function - self._is_init = False - - self.init_cfg = copy.deepcopy(init_cfg) - - # Backward compatibility in derived classes - # if pretrained is not None: - # warnings.warn('DeprecationWarning: pretrained is a deprecated \ - # key, please consider using init_cfg') - # self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - @property - def is_init(self): - return self._is_init - - def init_weights(self): - """Initialize the weights.""" - - is_top_level_module = False - # check if it is top-level module - if not hasattr(self, '_params_init_info'): - # The `_params_init_info` is used to record the initialization - # information of the parameters - # the key should be the obj:`nn.Parameter` of model and the value - # should be a dict containing - # - init_info (str): The string that describes the initialization. - # - tmp_mean_value (FloatTensor): The mean of the parameter, - # which indicates whether the parameter has been modified. - # this attribute would be deleted after all parameters - # is initialized. - self._params_init_info = defaultdict(dict) - is_top_level_module = True - - # Initialize the `_params_init_info`, - # When detecting the `tmp_mean_value` of - # the corresponding parameter is changed, update related - # initialization information - for name, param in self.named_parameters(): - self._params_init_info[param][ - 'init_info'] = f'The value is the same before and ' \ - f'after calling `init_weights` ' \ - f'of {self.__class__.__name__} ' - self._params_init_info[param][ - 'tmp_mean_value'] = param.data.mean() - - # pass `params_init_info` to all submodules - # All submodules share the same `params_init_info`, - # so it will be updated when parameters are - # modified at any level of the model. 
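# A small self-contained sketch of the bookkeeping idea described above
# (hypothetical helper, not BaseModule itself): recording each parameter's mean
# makes it possible to tell afterwards which initializer actually modified
# which parameter.
import torch.nn as nn

def param_means(module):
    return {name: p.data.mean().item() for name, p in module.named_parameters()}

layer = nn.Linear(4, 2)
before = param_means(layer)
nn.init.zeros_(layer.weight)
after = param_means(layer)
touched = [name for name in before if before[name] != after[name]]
# typically ['weight']: only the zeroed parameter's mean changed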
- for sub_module in self.modules(): - sub_module._params_init_info = self._params_init_info - - # Get the initialized logger, if not exist, - # create a logger named `mmcv` - logger_names = list(logger_initialized.keys()) - logger_name = logger_names[0] if logger_names else 'mmcv' - - from ..cnn import initialize - from ..cnn.utils.weight_init import update_init_info - module_name = self.__class__.__name__ - if not self._is_init: - if self.init_cfg: - print_log( - f'initialize {module_name} with init_cfg {self.init_cfg}', - logger=logger_name) - initialize(self, self.init_cfg) - if isinstance(self.init_cfg, dict): - # prevent the parameters of - # the pre-trained model - # from being overwritten by - # the `init_weights` - if self.init_cfg['type'] == 'Pretrained': - return - - for m in self.children(): - if hasattr(m, 'init_weights'): - m.init_weights() - # users may overload the `init_weights` - update_init_info( - m, - init_info=f'Initialized by ' - f'user-defined `init_weights`' - f' in {m.__class__.__name__} ') - - self._is_init = True - else: - warnings.warn(f'init_weights of {self.__class__.__name__} has ' - f'been called more than once.') - - if is_top_level_module: - self._dump_init_info(logger_name) - - for sub_module in self.modules(): - del sub_module._params_init_info - - @master_only - def _dump_init_info(self, logger_name): - """Dump the initialization information to a file named - `initialization.log.json` in workdir. - - Args: - logger_name (str): The name of logger. - """ - - logger = get_logger(logger_name) - - with_file_handler = False - # dump the information to the logger file if there is a `FileHandler` - for handler in logger.handlers: - if isinstance(handler, FileHandler): - handler.stream.write( - 'Name of parameter - Initialization information\n') - for name, param in self.named_parameters(): - handler.stream.write( - f'\n{name} - {param.shape}: ' - f"\n{self._params_init_info[param]['init_info']} \n") - handler.stream.flush() - with_file_handler = True - if not with_file_handler: - for name, param in self.named_parameters(): - print_log( - f'\n{name} - {param.shape}: ' - f"\n{self._params_init_info[param]['init_info']} \n ", - logger=logger_name) - - def __repr__(self): - s = super().__repr__() - if self.init_cfg: - s += f'\ninit_cfg={self.init_cfg}' - return s - - -class Sequential(BaseModule, nn.Sequential): - """Sequential module in openmmlab. - - Args: - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, *args, init_cfg=None): - BaseModule.__init__(self, init_cfg) - nn.Sequential.__init__(self, *args) - - -class ModuleList(BaseModule, nn.ModuleList): - """ModuleList in openmmlab. - - Args: - modules (iterable, optional): an iterable of modules to add. - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, modules=None, init_cfg=None): - BaseModule.__init__(self, init_cfg) - nn.ModuleList.__init__(self, modules) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/base_runner.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/base_runner.py deleted file mode 100755 index 25cd98f5..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/base_runner.py +++ /dev/null @@ -1,542 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import logging -import os.path as osp -import warnings -from abc import ABCMeta, abstractmethod - -import torch -from torch.optim import Optimizer - -import mmcv -from ..parallel import is_module_wrapper -from .checkpoint import load_checkpoint -from .dist_utils import get_dist_info -from .hooks import HOOKS, Hook -from .log_buffer import LogBuffer -from .priority import Priority, get_priority -from .utils import get_time_str - - -class BaseRunner(metaclass=ABCMeta): - """The base class of Runner, a training helper for PyTorch. - - All subclasses should implement the following APIs: - - - ``run()`` - - ``train()`` - - ``val()`` - - ``save_checkpoint()`` - - Args: - model (:obj:`torch.nn.Module`): The model to be run. - batch_processor (callable): A callable method that process a data - batch. The interface of this method should be - `batch_processor(model, data, train_mode) -> dict` - optimizer (dict or :obj:`torch.optim.Optimizer`): It can be either an - optimizer (in most cases) or a dict of optimizers (in models that - requires more than one optimizer, e.g., GAN). - work_dir (str, optional): The working directory to save checkpoints - and logs. Defaults to None. - logger (:obj:`logging.Logger`): Logger used during training. - Defaults to None. (The default value is just for backward - compatibility) - meta (dict | None): A dict records some import information such as - environment info and seed, which will be logged in logger hook. - Defaults to None. - max_epochs (int, optional): Total training epochs. - max_iters (int, optional): Total training iterations. - """ - - def __init__(self, - model, - batch_processor=None, - optimizer=None, - work_dir=None, - logger=None, - meta=None, - max_iters=None, - max_epochs=None): - if batch_processor is not None: - if not callable(batch_processor): - raise TypeError('batch_processor must be callable, ' - f'but got {type(batch_processor)}') - warnings.warn('batch_processor is deprecated, please implement ' - 'train_step() and val_step() in the model instead.') - # raise an error is `batch_processor` is not None and - # `model.train_step()` exists. 
- if is_module_wrapper(model): - _model = model.module - else: - _model = model - if hasattr(_model, 'train_step') or hasattr(_model, 'val_step'): - raise RuntimeError( - 'batch_processor and model.train_step()/model.val_step() ' - 'cannot be both available.') - else: - assert hasattr(model, 'train_step') - - # check the type of `optimizer` - if isinstance(optimizer, dict): - for name, optim in optimizer.items(): - if not isinstance(optim, Optimizer): - raise TypeError( - f'optimizer must be a dict of torch.optim.Optimizers, ' - f'but optimizer["{name}"] is a {type(optim)}') - elif not isinstance(optimizer, Optimizer) and optimizer is not None: - raise TypeError( - f'optimizer must be a torch.optim.Optimizer object ' - f'or dict or None, but got {type(optimizer)}') - - # check the type of `logger` - if not isinstance(logger, logging.Logger): - raise TypeError(f'logger must be a logging.Logger object, ' - f'but got {type(logger)}') - - # check the type of `meta` - if meta is not None and not isinstance(meta, dict): - raise TypeError( - f'meta must be a dict or None, but got {type(meta)}') - - self.model = model - self.batch_processor = batch_processor - self.optimizer = optimizer - self.logger = logger - self.meta = meta - # create work_dir - if mmcv.is_str(work_dir): - self.work_dir = osp.abspath(work_dir) - mmcv.mkdir_or_exist(self.work_dir) - elif work_dir is None: - self.work_dir = None - else: - raise TypeError('"work_dir" must be a str or None') - - # get model name from the model class - if hasattr(self.model, 'module'): - self._model_name = self.model.module.__class__.__name__ - else: - self._model_name = self.model.__class__.__name__ - - self._rank, self._world_size = get_dist_info() - self.timestamp = get_time_str() - self.mode = None - self._hooks = [] - self._epoch = 0 - self._iter = 0 - self._inner_iter = 0 - - if max_epochs is not None and max_iters is not None: - raise ValueError( - 'Only one of `max_epochs` or `max_iters` can be set.') - - self._max_epochs = max_epochs - self._max_iters = max_iters - # TODO: Redesign LogBuffer, it is not flexible and elegant enough - self.log_buffer = LogBuffer() - - @property - def model_name(self): - """str: Name of the model, usually the module class name.""" - return self._model_name - - @property - def rank(self): - """int: Rank of current process. (distributed training)""" - return self._rank - - @property - def world_size(self): - """int: Number of processes participating in the job. - (distributed training)""" - return self._world_size - - @property - def hooks(self): - """list[:obj:`Hook`]: A list of registered hooks.""" - return self._hooks - - @property - def epoch(self): - """int: Current epoch.""" - return self._epoch - - @property - def iter(self): - """int: Current iteration.""" - return self._iter - - @property - def inner_iter(self): - """int: Iteration in an epoch.""" - return self._inner_iter - - @property - def max_epochs(self): - """int: Maximum training epochs.""" - return self._max_epochs - - @property - def max_iters(self): - """int: Maximum training iterations.""" - return self._max_iters - - @abstractmethod - def train(self): - pass - - @abstractmethod - def val(self): - pass - - @abstractmethod - def run(self, data_loaders, workflow, **kwargs): - pass - - @abstractmethod - def save_checkpoint(self, - out_dir, - filename_tmpl, - save_optimizer=True, - meta=None, - create_symlink=True): - pass - - def current_lr(self): - """Get current learning rates. 
- - Returns: - list[float] | dict[str, list[float]]: Current learning rates of all - param groups. If the runner has a dict of optimizers, this - method will return a dict. - """ - if isinstance(self.optimizer, torch.optim.Optimizer): - lr = [group['lr'] for group in self.optimizer.param_groups] - elif isinstance(self.optimizer, dict): - lr = dict() - for name, optim in self.optimizer.items(): - lr[name] = [group['lr'] for group in optim.param_groups] - else: - raise RuntimeError( - 'lr is not applicable because optimizer does not exist.') - return lr - - def current_momentum(self): - """Get current momentums. - - Returns: - list[float] | dict[str, list[float]]: Current momentums of all - param groups. If the runner has a dict of optimizers, this - method will return a dict. - """ - - def _get_momentum(optimizer): - momentums = [] - for group in optimizer.param_groups: - if 'momentum' in group.keys(): - momentums.append(group['momentum']) - elif 'betas' in group.keys(): - momentums.append(group['betas'][0]) - else: - momentums.append(0) - return momentums - - if self.optimizer is None: - raise RuntimeError( - 'momentum is not applicable because optimizer does not exist.') - elif isinstance(self.optimizer, torch.optim.Optimizer): - momentums = _get_momentum(self.optimizer) - elif isinstance(self.optimizer, dict): - momentums = dict() - for name, optim in self.optimizer.items(): - momentums[name] = _get_momentum(optim) - return momentums - - def register_hook(self, hook, priority='NORMAL'): - """Register a hook into the hook list. - - The hook will be inserted into a priority queue, with the specified - priority (See :class:`Priority` for details of priorities). - For hooks with the same priority, they will be triggered in the same - order as they are registered. - - Args: - hook (:obj:`Hook`): The hook to be registered. - priority (int or str or :obj:`Priority`): Hook priority. - Lower value means higher priority. - """ - assert isinstance(hook, Hook) - if hasattr(hook, 'priority'): - raise ValueError('"priority" is a reserved attribute for hooks') - priority = get_priority(priority) - hook.priority = priority - # insert the hook to a sorted list - inserted = False - for i in range(len(self._hooks) - 1, -1, -1): - if priority >= self._hooks[i].priority: - self._hooks.insert(i + 1, hook) - inserted = True - break - if not inserted: - self._hooks.insert(0, hook) - - def register_hook_from_cfg(self, hook_cfg): - """Register a hook from its cfg. - - Args: - hook_cfg (dict): Hook config. It should have at least keys 'type' - and 'priority' indicating its type and priority. - - Notes: - The specific hook class to register should not use 'type' and - 'priority' arguments during initialization. - """ - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = mmcv.build_from_cfg(hook_cfg, HOOKS) - self.register_hook(hook, priority=priority) - - def call_hook(self, fn_name): - """Call all hooks. - - Args: - fn_name (str): The function name in each hook to be called, such as - "before_train_epoch". 
- """ - for hook in self._hooks: - getattr(hook, fn_name)(self) - - def get_hook_info(self): - # Get hooks info in each stage - stage_hook_map = {stage: [] for stage in Hook.stages} - for hook in self.hooks: - try: - priority = Priority(hook.priority).name - except ValueError: - priority = hook.priority - classname = hook.__class__.__name__ - hook_info = f'({priority:<12}) {classname:<35}' - for trigger_stage in hook.get_triggered_stages(): - stage_hook_map[trigger_stage].append(hook_info) - - stage_hook_infos = [] - for stage in Hook.stages: - hook_infos = stage_hook_map[stage] - if len(hook_infos) > 0: - info = f'{stage}:\n' - info += '\n'.join(hook_infos) - info += '\n -------------------- ' - stage_hook_infos.append(info) - return '\n'.join(stage_hook_infos) - - def load_checkpoint(self, - filename, - map_location='cpu', - strict=False, - revise_keys=[(r'^module.', '')]): - return load_checkpoint( - self.model, - filename, - map_location, - strict, - self.logger, - revise_keys=revise_keys) - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - if map_location == 'default': - if torch.cuda.is_available(): - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint(checkpoint) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - if self.meta is None: - self.meta = {} - self.meta.setdefault('hook_msgs', {}) - # load `last_ckpt`, `best_score`, `best_ckpt`, etc. for hook messages - self.meta['hook_msgs'].update(checkpoint['meta'].get('hook_msgs', {})) - - # Re-calculate the number of iterations when resuming - # models with different number of GPUs - if 'config' in checkpoint['meta']: - config = mmcv.Config.fromstring( - checkpoint['meta']['config'], file_format='.py') - previous_gpu_ids = config.get('gpu_ids', None) - if previous_gpu_ids and len(previous_gpu_ids) > 0 and len( - previous_gpu_ids) != self.world_size: - self._iter = int(self._iter * len(previous_gpu_ids) / - self.world_size) - self.logger.info('the iteration number is changed due to ' - 'change of GPU number') - - # resume meta information meta - self.meta = checkpoint['meta'] - - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - self.logger.info('resumed epoch %d, iter %d', self.epoch, self.iter) - - def register_lr_hook(self, lr_config): - if lr_config is None: - return - elif isinstance(lr_config, dict): - assert 'policy' in lr_config - policy_type = lr_config.pop('policy') - # If the type of policy is all in lower case, e.g., 'cyclic', - # then its first letter will be capitalized, e.g., to be 'Cyclic'. - # This is for the convenient usage of Lr updater. - # Since this is not applicable for ` - # CosineAnnealingLrUpdater`, - # the string will not be changed if it contains capital letters. 
- if policy_type == policy_type.lower(): - policy_type = policy_type.title() - hook_type = policy_type + 'LrUpdaterHook' - lr_config['type'] = hook_type - hook = mmcv.build_from_cfg(lr_config, HOOKS) - else: - hook = lr_config - self.register_hook(hook, priority='VERY_HIGH') - - def register_momentum_hook(self, momentum_config): - if momentum_config is None: - return - if isinstance(momentum_config, dict): - assert 'policy' in momentum_config - policy_type = momentum_config.pop('policy') - # If the type of policy is all in lower case, e.g., 'cyclic', - # then its first letter will be capitalized, e.g., to be 'Cyclic'. - # This is for the convenient usage of momentum updater. - # Since this is not applicable for - # `CosineAnnealingMomentumUpdater`, - # the string will not be changed if it contains capital letters. - if policy_type == policy_type.lower(): - policy_type = policy_type.title() - hook_type = policy_type + 'MomentumUpdaterHook' - momentum_config['type'] = hook_type - hook = mmcv.build_from_cfg(momentum_config, HOOKS) - else: - hook = momentum_config - self.register_hook(hook, priority='HIGH') - - def register_optimizer_hook(self, optimizer_config): - if optimizer_config is None: - return - if isinstance(optimizer_config, dict): - optimizer_config.setdefault('type', 'OptimizerHook') - hook = mmcv.build_from_cfg(optimizer_config, HOOKS) - else: - hook = optimizer_config - self.register_hook(hook, priority='ABOVE_NORMAL') - - def register_checkpoint_hook(self, checkpoint_config): - if checkpoint_config is None: - return - if isinstance(checkpoint_config, dict): - checkpoint_config.setdefault('type', 'CheckpointHook') - hook = mmcv.build_from_cfg(checkpoint_config, HOOKS) - else: - hook = checkpoint_config - self.register_hook(hook, priority='NORMAL') - - def register_logger_hooks(self, log_config): - if log_config is None: - return - log_interval = log_config['interval'] - for info in log_config['hooks']: - logger_hook = mmcv.build_from_cfg( - info, HOOKS, default_args=dict(interval=log_interval)) - self.register_hook(logger_hook, priority='VERY_LOW') - - def register_timer_hook(self, timer_config): - if timer_config is None: - return - if isinstance(timer_config, dict): - timer_config_ = copy.deepcopy(timer_config) - hook = mmcv.build_from_cfg(timer_config_, HOOKS) - else: - hook = timer_config - self.register_hook(hook, priority='LOW') - - def register_custom_hooks(self, custom_config): - if custom_config is None: - return - - if not isinstance(custom_config, list): - custom_config = [custom_config] - - for item in custom_config: - if isinstance(item, dict): - self.register_hook_from_cfg(item) - else: - self.register_hook(item, priority='NORMAL') - - def register_profiler_hook(self, profiler_config): - if profiler_config is None: - return - if isinstance(profiler_config, dict): - profiler_config.setdefault('type', 'ProfilerHook') - hook = mmcv.build_from_cfg(profiler_config, HOOKS) - else: - hook = profiler_config - self.register_hook(hook) - - def register_training_hooks(self, - lr_config, - optimizer_config=None, - checkpoint_config=None, - log_config=None, - momentum_config=None, - timer_config=dict(type='IterTimerHook'), - custom_hooks_config=None): - """Register default and custom hooks for training. 
- - Default and custom hooks include: - - +----------------------+-------------------------+ - | Hooks | Priority | - +======================+=========================+ - | LrUpdaterHook | VERY_HIGH (10) | - +----------------------+-------------------------+ - | MomentumUpdaterHook | HIGH (30) | - +----------------------+-------------------------+ - | OptimizerStepperHook | ABOVE_NORMAL (40) | - +----------------------+-------------------------+ - | CheckpointSaverHook | NORMAL (50) | - +----------------------+-------------------------+ - | IterTimerHook | LOW (70) | - +----------------------+-------------------------+ - | LoggerHook(s) | VERY_LOW (90) | - +----------------------+-------------------------+ - | CustomHook(s) | defaults to NORMAL (50) | - +----------------------+-------------------------+ - - If custom hooks have same priority with default hooks, custom hooks - will be triggered after default hooks. - """ - self.register_lr_hook(lr_config) - self.register_momentum_hook(momentum_config) - self.register_optimizer_hook(optimizer_config) - self.register_checkpoint_hook(checkpoint_config) - self.register_timer_hook(timer_config) - self.register_logger_hooks(log_config) - self.register_custom_hooks(custom_hooks_config) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/builder.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/builder.py deleted file mode 100755 index 77c96ba0..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/builder.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -from ..utils import Registry - -RUNNERS = Registry('runner') -RUNNER_BUILDERS = Registry('runner builder') - - -def build_runner_constructor(cfg): - return RUNNER_BUILDERS.build(cfg) - - -def build_runner(cfg, default_args=None): - runner_cfg = copy.deepcopy(cfg) - constructor_type = runner_cfg.pop('constructor', - 'DefaultRunnerConstructor') - runner_constructor = build_runner_constructor( - dict( - type=constructor_type, - runner_cfg=runner_cfg, - default_args=default_args)) - runner = runner_constructor() - return runner diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/checkpoint.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/checkpoint.py deleted file mode 100755 index 6ad605b8..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/checkpoint.py +++ /dev/null @@ -1,707 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os -import os.path as osp -import pkgutil -import re -import time -import warnings -from collections import OrderedDict -from importlib import import_module -from tempfile import TemporaryDirectory - -import torch -import torchvision -from torch.optim import Optimizer -from torch.utils import model_zoo - -import mmcv -from ..fileio import FileClient -from ..fileio import load as load_file -from ..parallel import is_module_wrapper -from ..utils import mkdir_or_exist -from .dist_utils import get_dist_info - -ENV_MMCV_HOME = 'MMCV_HOME' -ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME' -DEFAULT_CACHE_DIR = '~/.cache' - - -def _get_mmcv_home(): - mmcv_home = os.path.expanduser( - os.getenv( - ENV_MMCV_HOME, - os.path.join( - os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv'))) - - mkdir_or_exist(mmcv_home) - return mmcv_home - - -def load_state_dict(module, state_dict, strict=False, logger=None): - """Load state_dict to a module. - - This method is modified from :meth:`torch.nn.Module.load_state_dict`. 
- Default value for ``strict`` is set to ``False`` and the message for - param mismatch will be shown even if strict is False. - - Args: - module (Module): Module that receives the state_dict. - state_dict (OrderedDict): Weights. - strict (bool): whether to strictly enforce that the keys - in :attr:`state_dict` match the keys returned by this module's - :meth:`~torch.nn.Module.state_dict` function. Default: ``False``. - logger (:obj:`logging.Logger`, optional): Logger to log the error - message. If not specified, print function will be used. - """ - unexpected_keys = [] - all_missing_keys = [] - err_msg = [] - - metadata = getattr(state_dict, '_metadata', None) - state_dict = state_dict.copy() - if metadata is not None: - state_dict._metadata = metadata - - # use _load_from_state_dict to enable checkpoint version control - def load(module, prefix=''): - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - local_metadata = {} if metadata is None else metadata.get( - prefix[:-1], {}) - module._load_from_state_dict(state_dict, prefix, local_metadata, True, - all_missing_keys, unexpected_keys, - err_msg) - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + '.') - - load(module) - load = None # break load->load reference cycle - - # ignore "num_batches_tracked" of BN layers - missing_keys = [ - key for key in all_missing_keys if 'num_batches_tracked' not in key - ] - - if unexpected_keys: - err_msg.append('unexpected key in source ' - f'state_dict: {", ".join(unexpected_keys)}\n') - if missing_keys: - err_msg.append( - f'missing keys in source state_dict: {", ".join(missing_keys)}\n') - - rank, _ = get_dist_info() - if len(err_msg) > 0 and rank == 0: - err_msg.insert( - 0, 'The model and loaded state dict do not match exactly\n') - err_msg = '\n'.join(err_msg) - if strict: - raise RuntimeError(err_msg) - elif logger is not None: - logger.warning(err_msg) - else: - print(err_msg) - - -def get_torchvision_models(): - model_urls = dict() - for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__): - if ispkg: - continue - _zoo = import_module(f'torchvision.models.{name}') - if hasattr(_zoo, 'model_urls'): - _urls = getattr(_zoo, 'model_urls') - model_urls.update(_urls) - return model_urls - - -def get_external_models(): - mmcv_home = _get_mmcv_home() - default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json') - default_urls = load_file(default_json_path) - assert isinstance(default_urls, dict) - external_json_path = osp.join(mmcv_home, 'open_mmlab.json') - if osp.exists(external_json_path): - external_urls = load_file(external_json_path) - assert isinstance(external_urls, dict) - default_urls.update(external_urls) - - return default_urls - - -def get_mmcls_models(): - mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json') - mmcls_urls = load_file(mmcls_json_path) - - return mmcls_urls - - -def get_deprecated_model_names(): - deprecate_json_path = osp.join(mmcv.__path__[0], - 'model_zoo/deprecated.json') - deprecate_urls = load_file(deprecate_json_path) - assert isinstance(deprecate_urls, dict) - - return deprecate_urls - - -def _process_mmcls_checkpoint(checkpoint): - state_dict = checkpoint['state_dict'] - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k.startswith('backbone.'): - new_state_dict[k[9:]] = v - new_checkpoint = 
dict(state_dict=new_state_dict) - - return new_checkpoint - - -class CheckpointLoader: - """A general checkpoint loader to manage all schemes.""" - - _schemes = {} - - @classmethod - def _register_scheme(cls, prefixes, loader, force=False): - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if (prefix not in cls._schemes) or force: - cls._schemes[prefix] = loader - else: - raise KeyError( - f'{prefix} is already registered as a loader backend, ' - 'add "force=True" if you want to override it') - # sort, longer prefixes take priority - cls._schemes = OrderedDict( - sorted(cls._schemes.items(), key=lambda t: t[0], reverse=True)) - - @classmethod - def register_scheme(cls, prefixes, loader=None, force=False): - """Register a loader to CheckpointLoader. - - This method can be used as a normal class method or a decorator. - - Args: - prefixes (str or list[str] or tuple[str]): - The prefix of the registered loader. - loader (function, optional): The loader function to be registered. - When this method is used as a decorator, loader is None. - Defaults to None. - force (bool, optional): Whether to override the loader - if the prefix has already been registered. Defaults to False. - """ - - if loader is not None: - cls._register_scheme(prefixes, loader, force=force) - return - - def _register(loader_cls): - cls._register_scheme(prefixes, loader_cls, force=force) - return loader_cls - - return _register - - @classmethod - def _get_checkpoint_loader(cls, path): - """Finds a loader that supports the given path. Falls back to the local - loader if no other loader is found. - - Args: - path (str): checkpoint path - - Returns: - loader (function): checkpoint loader - """ - - for p in cls._schemes: - if path.startswith(p): - return cls._schemes[p] - - @classmethod - def load_checkpoint(cls, filename, map_location=None, logger=None): - """load checkpoint through URL scheme path. - - Args: - filename (str): checkpoint file name with given prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - logger (:mod:`logging.Logger`, optional): The logger for message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - checkpoint_loader = cls._get_checkpoint_loader(filename) - class_name = checkpoint_loader.__name__ - mmcv.print_log( - f'load checkpoint from {class_name[10:]} path: {filename}', logger) - return checkpoint_loader(filename, map_location) - - -@CheckpointLoader.register_scheme(prefixes='') -def load_from_local(filename, map_location): - """load checkpoint by local file path. - - Args: - filename (str): local checkpoint file path - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=('http://', 'https://')) -def load_from_http(filename, map_location=None, model_dir=None): - """load checkpoint through HTTP or HTTPS scheme path. In distributed - setting, this function only download checkpoint at local rank 0. - - Args: - filename (str): checkpoint file path with modelzoo or - torchvision prefix - map_location (str, optional): Same as :func:`torch.load`. 
- model_dir (string, optional): directory in which to save the object, - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - if rank == 0: - checkpoint = model_zoo.load_url( - filename, model_dir=model_dir, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - checkpoint = model_zoo.load_url( - filename, model_dir=model_dir, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='pavi://') -def load_from_pavi(filename, map_location=None): - """load checkpoint through the file path prefixed with pavi. In distributed - setting, this function download ckpt at all ranks to different temporary - directories. - - Args: - filename (str): checkpoint file path with pavi prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - assert filename.startswith('pavi://'), \ - f'Expected filename startswith `pavi://`, but get {filename}' - model_path = filename[7:] - - try: - from pavi import modelcloud - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load(downloaded_file, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='s3://') -def load_from_ceph(filename, map_location=None, backend='petrel'): - """load checkpoint through the file path prefixed with s3. In distributed - setting, this function download ckpt at all ranks to different temporary - directories. - - Args: - filename (str): checkpoint file path with s3 prefix - map_location (str, optional): Same as :func:`torch.load`. - backend (str, optional): The storage backend type. Options are 'ceph', - 'petrel'. Default: 'petrel'. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - allowed_backends = ['ceph', 'petrel'] - if backend not in allowed_backends: - raise ValueError(f'Load from Backend {backend} is not supported.') - - if backend == 'ceph': - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead') - - # CephClient and PetrelBackend have the same prefix 's3://' and the latter - # will be chosen as default. If PetrelBackend can not be instantiated - # successfully, the CephClient will be chosen. - try: - file_client = FileClient(backend=backend) - except ImportError: - allowed_backends.remove(backend) - file_client = FileClient(backend=allowed_backends[0]) - - with io.BytesIO(file_client.get(filename)) as buffer: - checkpoint = torch.load(buffer, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=('modelzoo://', 'torchvision://')) -def load_from_torchvision(filename, map_location=None): - """load checkpoint through the file path prefixed with modelzoo or - torchvision. - - Args: - filename (str): checkpoint file path with modelzoo or - torchvision prefix - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - model_urls = get_torchvision_models() - if filename.startswith('modelzoo://'): - warnings.warn('The URL scheme of "modelzoo://" is deprecated, please ' - 'use "torchvision://" instead') - model_name = filename[11:] - else: - model_name = filename[14:] - return load_from_http(model_urls[model_name], map_location=map_location) - - -@CheckpointLoader.register_scheme(prefixes=('open-mmlab://', 'openmmlab://')) -def load_from_openmmlab(filename, map_location=None): - """load checkpoint through the file path prefixed with open-mmlab or - openmmlab. - - Args: - filename (str): checkpoint file path with open-mmlab or - openmmlab prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - model_urls = get_external_models() - prefix_str = 'open-mmlab://' - if filename.startswith(prefix_str): - model_name = filename[13:] - else: - model_name = filename[12:] - prefix_str = 'openmmlab://' - - deprecated_urls = get_deprecated_model_names() - if model_name in deprecated_urls: - warnings.warn(f'{prefix_str}{model_name} is deprecated in favor ' - f'of {prefix_str}{deprecated_urls[model_name]}') - model_name = deprecated_urls[model_name] - model_url = model_urls[model_name] - # check if is url - if model_url.startswith(('http://', 'https://')): - checkpoint = load_from_http(model_url, map_location=map_location) - else: - filename = osp.join(_get_mmcv_home(), model_url) - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='mmcls://') -def load_from_mmcls(filename, map_location=None): - """load checkpoint through the file path prefixed with mmcls. - - Args: - filename (str): checkpoint file path with mmcls prefix - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - model_urls = get_mmcls_models() - model_name = filename[8:] - checkpoint = load_from_http( - model_urls[model_name], map_location=map_location) - checkpoint = _process_mmcls_checkpoint(checkpoint) - return checkpoint - - -def _load_checkpoint(filename, map_location=None, logger=None): - """Load checkpoint from somewhere (modelzoo, file, url). - - Args: - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str, optional): Same as :func:`torch.load`. - Default: None. - logger (:mod:`logging.Logger`, optional): The logger for error message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. It can be either an - OrderedDict storing model weights or a dict containing other - information, which depends on the checkpoint. - """ - return CheckpointLoader.load_checkpoint(filename, map_location, logger) - - -def _load_checkpoint_with_prefix(prefix, filename, map_location=None): - """Load partial pretrained model with specific prefix. - - Args: - prefix (str): The prefix of sub-module. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str | None): Same as :func:`torch.load`. Default: None. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - - checkpoint = _load_checkpoint(filename, map_location=map_location) - - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - if not prefix.endswith('.'): - prefix += '.' - prefix_len = len(prefix) - - state_dict = { - k[prefix_len:]: v - for k, v in state_dict.items() if k.startswith(prefix) - } - - assert state_dict, f'{prefix} is not in the pretrained model' - return state_dict - - -def load_checkpoint(model, - filename, - map_location=None, - strict=False, - logger=None, - revise_keys=[(r'^module\.', '')]): - """Load checkpoint from a file or URI. - - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - revise_keys (list): A list of customized keywords to modify the - state_dict in checkpoint. Each item is a (pattern, replacement) - pair of the regular expression operations. Default: strip - the prefix 'module.' by [(r'^module\\.', '')]. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - checkpoint = _load_checkpoint(filename, map_location, logger) - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - # strip prefix of state_dict - metadata = getattr(state_dict, '_metadata', OrderedDict()) - for p, r in revise_keys: - state_dict = OrderedDict( - {re.sub(p, r, k): v - for k, v in state_dict.items()}) - # Keep metadata in state_dict - state_dict._metadata = metadata - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - - -def weights_to_cpu(state_dict): - """Copy a model state_dict to cpu. - - Args: - state_dict (OrderedDict): Model weights on GPU. - - Returns: - OrderedDict: Model weights on GPU. - """ - state_dict_cpu = OrderedDict() - for key, val in state_dict.items(): - state_dict_cpu[key] = val.cpu() - # Keep metadata in state_dict - state_dict_cpu._metadata = getattr(state_dict, '_metadata', OrderedDict()) - return state_dict_cpu - - -def _save_to_state_dict(module, destination, prefix, keep_vars): - """Saves module state to `destination` dictionary. - - This method is modified from :meth:`torch.nn.Module._save_to_state_dict`. - - Args: - module (nn.Module): The module to generate state_dict. - destination (dict): A dict where state will be stored. - prefix (str): The prefix for parameters and buffers used in this - module. - """ - for name, param in module._parameters.items(): - if param is not None: - destination[prefix + name] = param if keep_vars else param.detach() - for name, buf in module._buffers.items(): - # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d - if buf is not None: - destination[prefix + name] = buf if keep_vars else buf.detach() - - -def get_state_dict(module, destination=None, prefix='', keep_vars=False): - """Returns a dictionary containing a whole state of the module. - - Both parameters and persistent buffers (e.g. running averages) are - included. Keys are corresponding parameter and buffer names. 
- - This method is modified from :meth:`torch.nn.Module.state_dict` to - recursively check parallel module in case that the model has a complicated - structure, e.g., nn.Module(nn.Module(DDP)). - - Args: - module (nn.Module): The module to generate state_dict. - destination (OrderedDict): Returned dict for the state of the - module. - prefix (str): Prefix of the key. - keep_vars (bool): Whether to keep the variable property of the - parameters. Default: False. - - Returns: - dict: A dictionary containing a whole state of the module. - """ - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - - # below is the same as torch.nn.Module.state_dict() - if destination is None: - destination = OrderedDict() - destination._metadata = OrderedDict() - destination._metadata[prefix[:-1]] = local_metadata = dict( - version=module._version) - _save_to_state_dict(module, destination, prefix, keep_vars) - for name, child in module._modules.items(): - if child is not None: - get_state_dict( - child, destination, prefix + name + '.', keep_vars=keep_vars) - for hook in module._state_dict_hooks.values(): - hook_result = hook(module, destination, prefix, local_metadata) - if hook_result is not None: - destination = hook_result - return destination - - -def save_checkpoint(model, - filename, - optimizer=None, - meta=None, - file_client_args=None): - """Save checkpoint to file. - - The checkpoint will have 3 fields: ``meta``, ``state_dict`` and - ``optimizer``. By default ``meta`` will contain version and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. 
- `New in version 1.3.16.` - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - if filename.startswith('pavi://'): - if file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" if filename starts with' - f'"pavi://", but got {file_client_args}') - try: - from pavi import modelcloud - from pavi import exception - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except exception.NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - file_client = FileClient.infer_client(file_client_args, filename) - with io.BytesIO() as f: - torch.save(checkpoint, f) - file_client.put(f.getvalue(), filename) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/default_constructor.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/default_constructor.py deleted file mode 100755 index 0bad847f..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/default_constructor.py +++ /dev/null @@ -1,44 +0,0 @@ -from .builder import RUNNER_BUILDERS, RUNNERS - - -@RUNNER_BUILDERS.register_module() -class DefaultRunnerConstructor: - """Default constructor for runners. - - Custom existing `Runner` like `EpocBasedRunner` though `RunnerConstructor`. - For example, We can inject some new properties and functions for `Runner`. - - Example: - >>> from mmcv.runner import RUNNER_BUILDERS, build_runner - >>> # Define a new RunnerReconstructor - >>> @RUNNER_BUILDERS.register_module() - >>> class MyRunnerConstructor: - ... def __init__(self, runner_cfg, default_args=None): - ... if not isinstance(runner_cfg, dict): - ... raise TypeError('runner_cfg should be a dict', - ... f'but got {type(runner_cfg)}') - ... self.runner_cfg = runner_cfg - ... self.default_args = default_args - ... - ... def __call__(self): - ... runner = RUNNERS.build(self.runner_cfg, - ... default_args=self.default_args) - ... # Add new properties for existing runner - ... runner.my_name = 'my_runner' - ... runner.my_function = lambda self: print(self.my_name) - ... ... - >>> # build your runner - >>> runner_cfg = dict(type='EpochBasedRunner', max_epochs=40, - ... 
constructor='MyRunnerConstructor') - >>> runner = build_runner(runner_cfg) - """ - - def __init__(self, runner_cfg, default_args=None): - if not isinstance(runner_cfg, dict): - raise TypeError('runner_cfg should be a dict', - f'but got {type(runner_cfg)}') - self.runner_cfg = runner_cfg - self.default_args = default_args - - def __call__(self): - return RUNNERS.build(self.runner_cfg, default_args=self.default_args) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/dist_utils.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/dist_utils.py deleted file mode 100755 index d3a1ef3f..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/dist_utils.py +++ /dev/null @@ -1,164 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools -import os -import subprocess -from collections import OrderedDict - -import torch -import torch.multiprocessing as mp -from torch import distributed as dist -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - - -def init_dist(launcher, backend='nccl', **kwargs): - if mp.get_start_method(allow_none=True) is None: - mp.set_start_method('spawn') - if launcher == 'pytorch': - _init_dist_pytorch(backend, **kwargs) - elif launcher == 'mpi': - _init_dist_mpi(backend, **kwargs) - elif launcher == 'slurm': - _init_dist_slurm(backend, **kwargs) - else: - raise ValueError(f'Invalid launcher type: {launcher}') - - -def _init_dist_pytorch(backend, **kwargs): - # TODO: use local_rank instead of rank % num_gpus - rank = int(os.environ['RANK']) - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(rank % num_gpus) - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_mpi(backend, **kwargs): - # TODO: use local_rank instead of rank % num_gpus - rank = int(os.environ['OMPI_COMM_WORLD_RANK']) - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(rank % num_gpus) - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_slurm(backend, port=None): - """Initialize slurm distributed training environment. - - If argument ``port`` is not specified, then the master port will be system - environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system - environment variable, then a default port ``29500`` will be used. - - Args: - backend (str): Backend of torch.distributed. - port (int, optional): Master port. Defaults to None. 
- """ - proc_id = int(os.environ['SLURM_PROCID']) - ntasks = int(os.environ['SLURM_NTASKS']) - node_list = os.environ['SLURM_NODELIST'] - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(proc_id % num_gpus) - addr = subprocess.getoutput( - f'scontrol show hostname {node_list} | head -n1') - # specify master port - if port is not None: - os.environ['MASTER_PORT'] = str(port) - elif 'MASTER_PORT' in os.environ: - pass # use MASTER_PORT in the environment variable - else: - # 29500 is torch.distributed default port - os.environ['MASTER_PORT'] = '29500' - # use MASTER_ADDR in the environment variable if it already exists - if 'MASTER_ADDR' not in os.environ: - os.environ['MASTER_ADDR'] = addr - os.environ['WORLD_SIZE'] = str(ntasks) - os.environ['LOCAL_RANK'] = str(proc_id % num_gpus) - os.environ['RANK'] = str(proc_id) - dist.init_process_group(backend=backend) - - -def get_dist_info(): - if dist.is_available() and dist.is_initialized(): - rank = dist.get_rank() - world_size = dist.get_world_size() - else: - rank = 0 - world_size = 1 - return rank, world_size - - -def master_only(func): - - @functools.wraps(func) - def wrapper(*args, **kwargs): - rank, _ = get_dist_info() - if rank == 0: - return func(*args, **kwargs) - - return wrapper - - -def allreduce_params(params, coalesce=True, bucket_size_mb=-1): - """Allreduce parameters. - - Args: - params (list[torch.Parameters]): List of parameters or buffers of a - model. - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. - """ - _, world_size = get_dist_info() - if world_size == 1: - return - params = [param.data for param in params] - if coalesce: - _allreduce_coalesced(params, world_size, bucket_size_mb) - else: - for tensor in params: - dist.all_reduce(tensor.div_(world_size)) - - -def allreduce_grads(params, coalesce=True, bucket_size_mb=-1): - """Allreduce gradients. - - Args: - params (list[torch.Parameters]): List of parameters of a model - coalesce (bool, optional): Whether allreduce parameters as a whole. - Defaults to True. - bucket_size_mb (int, optional): Size of bucket, the unit is MB. - Defaults to -1. - """ - grads = [ - param.grad.data for param in params - if param.requires_grad and param.grad is not None - ] - _, world_size = get_dist_info() - if world_size == 1: - return - if coalesce: - _allreduce_coalesced(grads, world_size, bucket_size_mb) - else: - for tensor in grads: - dist.all_reduce(tensor.div_(world_size)) - - -def _allreduce_coalesced(tensors, world_size, bucket_size_mb=-1): - if bucket_size_mb > 0: - bucket_size_bytes = bucket_size_mb * 1024 * 1024 - buckets = _take_tensors(tensors, bucket_size_bytes) - else: - buckets = OrderedDict() - for tensor in tensors: - tp = tensor.type() - if tp not in buckets: - buckets[tp] = [] - buckets[tp].append(tensor) - buckets = buckets.values() - - for bucket in buckets: - flat_tensors = _flatten_dense_tensors(bucket) - dist.all_reduce(flat_tensors) - flat_tensors.div_(world_size) - for tensor, synced in zip( - bucket, _unflatten_dense_tensors(flat_tensors, bucket)): - tensor.copy_(synced) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/epoch_based_runner.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/epoch_based_runner.py deleted file mode 100755 index 2dd29357..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/epoch_based_runner.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -import platform -import shutil -import time -import warnings - -import torch - -import mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .utils import get_host_info - - -@RUNNERS.register_module() -class EpochBasedRunner(BaseRunner): - """Epoch-based Runner. - - This runner train models epoch by epoch. - """ - - def run_iter(self, data_batch, train_mode, **kwargs): - if self.batch_processor is not None: - outputs = self.batch_processor( - self.model, data_batch, train_mode=train_mode, **kwargs) - elif train_mode: - outputs = self.model.train_step(data_batch, self.optimizer, - **kwargs) - else: - outputs = self.model.val_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('"batch_processor()" or "model.train_step()"' - 'and "model.val_step()" must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._max_iters = self._max_epochs * len(self.data_loader) - self.call_hook('before_train_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self._inner_iter = i - self.call_hook('before_train_iter') - self.run_iter(data_batch, train_mode=True, **kwargs) - self.call_hook('after_train_iter') - self._iter += 1 - - self.call_hook('after_train_epoch') - self._epoch += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - self.call_hook('before_val_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self._inner_iter = i - self.call_hook('before_val_iter') - self.run_iter(data_batch, train_mode=False) - self.call_hook('after_val_iter') - - self.call_hook('after_val_epoch') - - def run(self, data_loaders, workflow, max_epochs=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, epochs) to specify the - running order and epochs. E.g, [('train', 2), ('val', 1)] means - running 2 epochs for training and 1 epoch for validation, - iteratively. 
- """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_epochs is not None: - warnings.warn( - 'setting max_epochs in run is deprecated, ' - 'please set max_epochs in runner_config', DeprecationWarning) - self._max_epochs = max_epochs - - assert self._max_epochs is not None, ( - 'max_epochs must be specified during instantiation') - - for i, flow in enumerate(workflow): - mode, epochs = flow - if mode == 'train': - self._max_iters = self._max_epochs * len(data_loaders[i]) - break - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d epochs', workflow, - self._max_epochs) - self.call_hook('before_run') - - while self.epoch < self._max_epochs: - for i, flow in enumerate(workflow): - mode, epochs = flow - if isinstance(mode, str): # self.train() - if not hasattr(self, mode): - raise ValueError( - f'runner has no method named "{mode}" to run an ' - 'epoch') - epoch_runner = getattr(self, mode) - else: - raise TypeError( - 'mode in workflow must be a str, but got {}'.format( - type(mode))) - - for _ in range(epochs): - if mode == 'train' and self.epoch >= self._max_epochs: - break - epoch_runner(data_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_run') - - def save_checkpoint(self, - out_dir, - filename_tmpl='epoch_{}.pth', - save_optimizer=True, - meta=None, - create_symlink=True): - """Save the checkpoint. - - Args: - out_dir (str): The directory that checkpoints are saved. - filename_tmpl (str, optional): The checkpoint filename template, - which contains a placeholder for the epoch number. - Defaults to 'epoch_{}.pth'. - save_optimizer (bool, optional): Whether to save the optimizer to - the checkpoint. Defaults to True. - meta (dict, optional): The meta information to be saved in the - checkpoint. Defaults to None. - create_symlink (bool, optional): Whether to create a symlink - "latest.pth" to point to the latest checkpoint. - Defaults to True. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. 
- # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.epoch + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - -@RUNNERS.register_module() -class Runner(EpochBasedRunner): - """Deprecated name of EpochBasedRunner.""" - - def __init__(self, *args, **kwargs): - warnings.warn( - 'Runner was deprecated, please use EpochBasedRunner instead') - super().__init__(*args, **kwargs) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/fp16_utils.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/fp16_utils.py deleted file mode 100755 index 4baab939..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/fp16_utils.py +++ /dev/null @@ -1,410 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools -import warnings -from collections import abc -from inspect import getfullargspec - -import numpy as np -import torch -import torch.nn as nn - -from mmcv.utils import TORCH_VERSION, digit_version -from .dist_utils import allreduce_grads as _allreduce_grads - -try: - # If PyTorch version >= 1.6.0, torch.cuda.amp.autocast would be imported - # and used; otherwise, auto fp16 will adopt mmcv's implementation. - # Note that when PyTorch >= 1.6.0, we still cast tensor types to fp16 - # manually, so the behavior may not be consistent with real amp. - from torch.cuda.amp import autocast -except ImportError: - pass - - -def cast_tensor_type(inputs, src_type, dst_type): - """Recursively convert Tensor in inputs from src_type to dst_type. - - Args: - inputs: Inputs that to be casted. - src_type (torch.dtype): Source type.. - dst_type (torch.dtype): Destination type. - - Returns: - The same type with inputs, but all contained Tensors have been cast. - """ - if isinstance(inputs, nn.Module): - return inputs - elif isinstance(inputs, torch.Tensor): - return inputs.to(dst_type) - elif isinstance(inputs, str): - return inputs - elif isinstance(inputs, np.ndarray): - return inputs - elif isinstance(inputs, abc.Mapping): - return type(inputs)({ - k: cast_tensor_type(v, src_type, dst_type) - for k, v in inputs.items() - }) - elif isinstance(inputs, abc.Iterable): - return type(inputs)( - cast_tensor_type(item, src_type, dst_type) for item in inputs) - else: - return inputs - - -def auto_fp16(apply_to=None, out_fp32=False): - """Decorator to enable fp16 training automatically. - - This decorator is useful when you write custom modules and want to support - mixed precision training. If inputs arguments are fp32 tensors, they will - be converted to fp16 automatically. Arguments other than fp32 tensors are - ignored. If you are using PyTorch >= 1.6, torch.cuda.amp is used as the - backend, otherwise, original mmcv implementation will be adopted. - - Args: - apply_to (Iterable, optional): The argument names to be converted. - `None` indicates all arguments. - out_fp32 (bool): Whether to convert the output back to fp32. 
- - Example: - - >>> import torch.nn as nn - >>> class MyModule1(nn.Module): - >>> - >>> # Convert x and y to fp16 - >>> @auto_fp16() - >>> def forward(self, x, y): - >>> pass - - >>> import torch.nn as nn - >>> class MyModule2(nn.Module): - >>> - >>> # convert pred to fp16 - >>> @auto_fp16(apply_to=('pred', )) - >>> def do_something(self, pred, others): - >>> pass - """ - - def auto_fp16_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # check if the module has set the attribute `fp16_enabled`, if not, - # just fallback to the original method. - if not isinstance(args[0], torch.nn.Module): - raise TypeError('@auto_fp16 can only be used to decorate the ' - 'method of nn.Module') - if not (hasattr(args[0], 'fp16_enabled') and args[0].fp16_enabled): - return old_func(*args, **kwargs) - - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get the argument names to be casted - args_to_cast = args_info.args if apply_to is None else apply_to - # convert the args that need to be processed - new_args = [] - # NOTE: default args are not taken into consideration - if args: - arg_names = args_info.args[:len(args)] - for i, arg_name in enumerate(arg_names): - if arg_name in args_to_cast: - new_args.append( - cast_tensor_type(args[i], torch.float, torch.half)) - else: - new_args.append(args[i]) - # convert the kwargs that need to be processed - new_kwargs = {} - if kwargs: - for arg_name, arg_value in kwargs.items(): - if arg_name in args_to_cast: - new_kwargs[arg_name] = cast_tensor_type( - arg_value, torch.float, torch.half) - else: - new_kwargs[arg_name] = arg_value - # apply converted arguments to the decorated method - if (TORCH_VERSION != 'parrots' and - digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - with autocast(enabled=True): - output = old_func(*new_args, **new_kwargs) - else: - output = old_func(*new_args, **new_kwargs) - # cast the results back to fp32 if necessary - if out_fp32: - output = cast_tensor_type(output, torch.half, torch.float) - return output - - return new_func - - return auto_fp16_wrapper - - -def force_fp32(apply_to=None, out_fp16=False): - """Decorator to convert input arguments to fp32 in force. - - This decorator is useful when you write custom modules and want to support - mixed precision training. If there are some inputs that must be processed - in fp32 mode, then this decorator can handle it. If inputs arguments are - fp16 tensors, they will be converted to fp32 automatically. Arguments other - than fp16 tensors are ignored. If you are using PyTorch >= 1.6, - torch.cuda.amp is used as the backend, otherwise, original mmcv - implementation will be adopted. - - Args: - apply_to (Iterable, optional): The argument names to be converted. - `None` indicates all arguments. - out_fp16 (bool): Whether to convert the output back to fp16. - - Example: - - >>> import torch.nn as nn - >>> class MyModule1(nn.Module): - >>> - >>> # Convert x and y to fp32 - >>> @force_fp32() - >>> def loss(self, x, y): - >>> pass - - >>> import torch.nn as nn - >>> class MyModule2(nn.Module): - >>> - >>> # convert pred to fp32 - >>> @force_fp32(apply_to=('pred', )) - >>> def post_process(self, pred, others): - >>> pass - """ - - def force_fp32_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # check if the module has set the attribute `fp16_enabled`, if not, - # just fallback to the original method. 
- if not isinstance(args[0], torch.nn.Module): - raise TypeError('@force_fp32 can only be used to decorate the ' - 'method of nn.Module') - if not (hasattr(args[0], 'fp16_enabled') and args[0].fp16_enabled): - return old_func(*args, **kwargs) - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get the argument names to be casted - args_to_cast = args_info.args if apply_to is None else apply_to - # convert the args that need to be processed - new_args = [] - if args: - arg_names = args_info.args[:len(args)] - for i, arg_name in enumerate(arg_names): - if arg_name in args_to_cast: - new_args.append( - cast_tensor_type(args[i], torch.half, torch.float)) - else: - new_args.append(args[i]) - # convert the kwargs that need to be processed - new_kwargs = dict() - if kwargs: - for arg_name, arg_value in kwargs.items(): - if arg_name in args_to_cast: - new_kwargs[arg_name] = cast_tensor_type( - arg_value, torch.half, torch.float) - else: - new_kwargs[arg_name] = arg_value - # apply converted arguments to the decorated method - if (TORCH_VERSION != 'parrots' and - digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - with autocast(enabled=False): - output = old_func(*new_args, **new_kwargs) - else: - output = old_func(*new_args, **new_kwargs) - # cast the results back to fp32 if necessary - if out_fp16: - output = cast_tensor_type(output, torch.float, torch.half) - return output - - return new_func - - return force_fp32_wrapper - - -def allreduce_grads(params, coalesce=True, bucket_size_mb=-1): - warnings.warning( - '"mmcv.runner.fp16_utils.allreduce_grads" is deprecated, and will be ' - 'removed in v2.8. Please switch to "mmcv.runner.allreduce_grads') - _allreduce_grads(params, coalesce=coalesce, bucket_size_mb=bucket_size_mb) - - -def wrap_fp16_model(model): - """Wrap the FP32 model to FP16. - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the - backend, otherwise, original mmcv implementation will be adopted. - - For PyTorch >= 1.6, this function will - 1. Set fp16 flag inside the model to True. - - Otherwise: - 1. Convert FP32 model to FP16. - 2. Remain some necessary layers to be FP32, e.g., normalization layers. - 3. Set `fp16_enabled` flag inside the model to True. - - Args: - model (nn.Module): Model in FP32. - """ - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.6.0')): - # convert model to fp16 - model.half() - # patch the normalization layers to make it work in fp32 mode - patch_norm_fp32(model) - # set `fp16_enabled` flag - for m in model.modules(): - if hasattr(m, 'fp16_enabled'): - m.fp16_enabled = True - - -def patch_norm_fp32(module): - """Recursively convert normalization layers from FP16 to FP32. - - Args: - module (nn.Module): The modules to be converted in FP16. - - Returns: - nn.Module: The converted module, the normalization layers have been - converted to FP32. - """ - if isinstance(module, (nn.modules.batchnorm._BatchNorm, nn.GroupNorm)): - module.float() - if isinstance(module, nn.GroupNorm) or torch.__version__ < '1.3': - module.forward = patch_forward_method(module.forward, torch.half, - torch.float) - for child in module.children(): - patch_norm_fp32(child) - return module - - -def patch_forward_method(func, src_type, dst_type, convert_output=True): - """Patch the forward method of a module. - - Args: - func (callable): The original forward method. - src_type (torch.dtype): Type of input arguments to be converted from. 
- dst_type (torch.dtype): Type of input arguments to be converted to. - convert_output (bool): Whether to convert the output back to src_type. - - Returns: - callable: The patched forward method. - """ - - def new_forward(*args, **kwargs): - output = func(*cast_tensor_type(args, src_type, dst_type), - **cast_tensor_type(kwargs, src_type, dst_type)) - if convert_output: - output = cast_tensor_type(output, dst_type, src_type) - return output - - return new_forward - - -class LossScaler: - """Class that manages loss scaling in mixed precision training which - supports both dynamic or static mode. - - The implementation refers to - https://github.com/NVIDIA/apex/blob/master/apex/fp16_utils/loss_scaler.py. - Indirectly, by supplying ``mode='dynamic'`` for dynamic loss scaling. - It's important to understand how :class:`LossScaler` operates. - Loss scaling is designed to combat the problem of underflowing - gradients encountered at long times when training fp16 networks. - Dynamic loss scaling begins by attempting a very high loss - scale. Ironically, this may result in OVERflowing gradients. - If overflowing gradients are encountered, :class:`FP16_Optimizer` then - skips the update step for this particular iteration/minibatch, - and :class:`LossScaler` adjusts the loss scale to a lower value. - If a certain number of iterations occur without overflowing gradients - detected,:class:`LossScaler` increases the loss scale once more. - In this way :class:`LossScaler` attempts to "ride the edge" of always - using the highest loss scale possible without incurring overflow. - - Args: - init_scale (float): Initial loss scale value, default: 2**32. - scale_factor (float): Factor used when adjusting the loss scale. - Default: 2. - mode (str): Loss scaling mode. 'dynamic' or 'static' - scale_window (int): Number of consecutive iterations without an - overflow to wait before increasing the loss scale. Default: 1000. 
- """ - - def __init__(self, - init_scale=2**32, - mode='dynamic', - scale_factor=2., - scale_window=1000): - self.cur_scale = init_scale - self.cur_iter = 0 - assert mode in ('dynamic', - 'static'), 'mode can only be dynamic or static' - self.mode = mode - self.last_overflow_iter = -1 - self.scale_factor = scale_factor - self.scale_window = scale_window - - def has_overflow(self, params): - """Check if params contain overflow.""" - if self.mode != 'dynamic': - return False - for p in params: - if p.grad is not None and LossScaler._has_inf_or_nan(p.grad.data): - return True - return False - - def _has_inf_or_nan(x): - """Check if params contain NaN.""" - try: - cpu_sum = float(x.float().sum()) - except RuntimeError as instance: - if 'value cannot be converted' not in instance.args[0]: - raise - return True - else: - if cpu_sum == float('inf') or cpu_sum == -float('inf') \ - or cpu_sum != cpu_sum: - return True - return False - - def update_scale(self, overflow): - """update the current loss scale value when overflow happens.""" - if self.mode != 'dynamic': - return - if overflow: - self.cur_scale = max(self.cur_scale / self.scale_factor, 1) - self.last_overflow_iter = self.cur_iter - else: - if (self.cur_iter - self.last_overflow_iter) % \ - self.scale_window == 0: - self.cur_scale *= self.scale_factor - self.cur_iter += 1 - - def state_dict(self): - """Returns the state of the scaler as a :class:`dict`.""" - return dict( - cur_scale=self.cur_scale, - cur_iter=self.cur_iter, - mode=self.mode, - last_overflow_iter=self.last_overflow_iter, - scale_factor=self.scale_factor, - scale_window=self.scale_window) - - def load_state_dict(self, state_dict): - """Loads the loss_scaler state dict. - - Args: - state_dict (dict): scaler state. - """ - self.cur_scale = state_dict['cur_scale'] - self.cur_iter = state_dict['cur_iter'] - self.mode = state_dict['mode'] - self.last_overflow_iter = state_dict['last_overflow_iter'] - self.scale_factor = state_dict['scale_factor'] - self.scale_window = state_dict['scale_window'] - - @property - def loss_scale(self): - return self.cur_scale diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/__init__.py deleted file mode 100755 index 915af28c..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .checkpoint import CheckpointHook -from .closure import ClosureHook -from .ema import EMAHook -from .evaluation import DistEvalHook, EvalHook -from .hook import HOOKS, Hook -from .iter_timer import IterTimerHook -from .logger import (DvcliveLoggerHook, LoggerHook, MlflowLoggerHook, - NeptuneLoggerHook, PaviLoggerHook, TensorboardLoggerHook, - TextLoggerHook, WandbLoggerHook) -from .lr_updater import LrUpdaterHook -from .memory import EmptyCacheHook -from .momentum_updater import MomentumUpdaterHook -from .optimizer import (Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, OptimizerHook) -from .profiler import ProfilerHook -from .sampler_seed import DistSamplerSeedHook -from .sync_buffer import SyncBuffersHook - -__all__ = [ - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'OptimizerHook', 'Fp16OptimizerHook', 'IterTimerHook', - 'DistSamplerSeedHook', 'EmptyCacheHook', 'LoggerHook', 'MlflowLoggerHook', - 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook', - 'NeptuneLoggerHook', 'WandbLoggerHook', 'DvcliveLoggerHook', - 'MomentumUpdaterHook', 'SyncBuffersHook', 'EMAHook', 'EvalHook', - 'DistEvalHook', 'ProfilerHook', 'GradientCumulativeOptimizerHook', - 'GradientCumulativeFp16OptimizerHook' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/checkpoint.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/checkpoint.py deleted file mode 100755 index 7bb75f40..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/checkpoint.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings - -from mmcv.fileio import FileClient -from ..dist_utils import allreduce_params, master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class CheckpointHook(Hook): - """Save checkpoints periodically. - - Args: - interval (int): The saving period. If ``by_epoch=True``, interval - indicates epochs, otherwise it indicates iterations. - Default: -1, which means "never". - by_epoch (bool): Saving checkpoints by epoch or by iteration. - Default: True. - save_optimizer (bool): Whether to save optimizer state_dict in the - checkpoint. It is usually used for resuming experiments. - Default: True. - out_dir (str, optional): The root directory to save checkpoints. If not - specified, ``runner.work_dir`` will be used by default. If - specified, the ``out_dir`` will be the concatenation of ``out_dir`` - and the last level directory of ``runner.work_dir``. - `Changed in version 1.3.16.` - max_keep_ckpts (int, optional): The maximum checkpoints to keep. - In some cases we want only the latest few checkpoints and would - like to delete old ones to save the disk space. - Default: -1, which means unlimited. - save_last (bool, optional): Whether to force the last checkpoint to be - saved regardless of interval. Default: True. - sync_buffer (bool, optional): Whether to synchronize buffers in - different gpus. Default: False. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - - .. warning:: - Before v1.3.16, the ``out_dir`` argument indicates the path where the - checkpoint is stored. However, since v1.3.16, ``out_dir`` indicates the - root directory and the final path to save checkpoint is the - concatenation of ``out_dir`` and the last level directory of - ``runner.work_dir``. 
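As a usage sketch of the hook described above, assuming `runner` is an mmcv runner exposing `register_hook`, and with made-up paths:

```python
# With out_dir='/data/ckpts' and runner.work_dir='work_dirs/basicvsr',
# checkpoints land in '/data/ckpts/basicvsr' (out_dir + last level of work_dir).
ckpt_hook = CheckpointHook(
    interval=1,           # save once per epoch
    by_epoch=True,
    save_optimizer=True,  # needed to resume training later
    max_keep_ckpts=3,     # delete older checkpoints, keep the latest three
    out_dir='/data/ckpts')
runner.register_hook(ckpt_hook)
```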
Suppose the value of ``out_dir`` is "/path/of/A" - and the value of ``runner.work_dir`` is "/path/of/B", then the final - path will be "/path/of/A/B". - """ - - def __init__(self, - interval=-1, - by_epoch=True, - save_optimizer=True, - out_dir=None, - max_keep_ckpts=-1, - save_last=True, - sync_buffer=False, - file_client_args=None, - **kwargs): - self.interval = interval - self.by_epoch = by_epoch - self.save_optimizer = save_optimizer - self.out_dir = out_dir - self.max_keep_ckpts = max_keep_ckpts - self.save_last = save_last - self.args = kwargs - self.sync_buffer = sync_buffer - self.file_client_args = file_client_args - - def before_run(self, runner): - if not self.out_dir: - self.out_dir = runner.work_dir - - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - - # if `self.out_dir` is not equal to `runner.work_dir`, it means that - # `self.out_dir` is set so the final `self.out_dir` is the - # concatenation of `self.out_dir` and the last level directory of - # `runner.work_dir` - if self.out_dir != runner.work_dir: - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - - runner.logger.info((f'Checkpoints will be saved to {self.out_dir} by ' - f'{self.file_client.name}.')) - - # disable the create_symlink option because some file backends do not - # allow to create a symlink - if 'create_symlink' in self.args: - if self.args[ - 'create_symlink'] and not self.file_client.allow_symlink: - self.args['create_symlink'] = False - warnings.warn( - ('create_symlink is set as True by the user but is changed' - 'to be False because creating symbolic link is not ' - f'allowed in {self.file_client.name}')) - else: - self.args['create_symlink'] = self.file_client.allow_symlink - - def after_train_epoch(self, runner): - if not self.by_epoch: - return - - # save checkpoint for following cases: - # 1. every ``self.interval`` epochs - # 2. 
reach the last epoch of training - if self.every_n_epochs( - runner, self.interval) or (self.save_last - and self.is_last_epoch(runner)): - runner.logger.info( - f'Saving checkpoint at {runner.epoch + 1} epochs') - if self.sync_buffer: - allreduce_params(runner.model.buffers()) - self._save_checkpoint(runner) - - @master_only - def _save_checkpoint(self, runner): - """Save the current checkpoint and delete unwanted checkpoint.""" - runner.save_checkpoint( - self.out_dir, save_optimizer=self.save_optimizer, **self.args) - if runner.meta is not None: - if self.by_epoch: - cur_ckpt_filename = self.args.get( - 'filename_tmpl', 'epoch_{}.pth').format(runner.epoch + 1) - else: - cur_ckpt_filename = self.args.get( - 'filename_tmpl', 'iter_{}.pth').format(runner.iter + 1) - runner.meta.setdefault('hook_msgs', dict()) - runner.meta['hook_msgs']['last_ckpt'] = self.file_client.join_path( - self.out_dir, cur_ckpt_filename) - # remove other checkpoints - if self.max_keep_ckpts > 0: - if self.by_epoch: - name = 'epoch_{}.pth' - current_ckpt = runner.epoch + 1 - else: - name = 'iter_{}.pth' - current_ckpt = runner.iter + 1 - redundant_ckpts = range( - current_ckpt - self.max_keep_ckpts * self.interval, 0, - -self.interval) - filename_tmpl = self.args.get('filename_tmpl', name) - for _step in redundant_ckpts: - ckpt_path = self.file_client.join_path( - self.out_dir, filename_tmpl.format(_step)) - if self.file_client.isfile(ckpt_path): - self.file_client.remove(ckpt_path) - else: - break - - def after_train_iter(self, runner): - if self.by_epoch: - return - - # save checkpoint for following cases: - # 1. every ``self.interval`` iterations - # 2. reach the last iteration of training - if self.every_n_iters( - runner, self.interval) or (self.save_last - and self.is_last_iter(runner)): - runner.logger.info( - f'Saving checkpoint at {runner.iter + 1} iterations') - if self.sync_buffer: - allreduce_params(runner.model.buffers()) - self._save_checkpoint(runner) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/closure.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/closure.py deleted file mode 100755 index b955f81f..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/closure.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class ClosureHook(Hook): - - def __init__(self, fn_name, fn): - assert hasattr(self, fn_name) - assert callable(fn) - setattr(self, fn_name, fn) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/ema.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/ema.py deleted file mode 100755 index 15c7e680..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/ema.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...parallel import is_module_wrapper -from ..hooks.hook import HOOKS, Hook - - -@HOOKS.register_module() -class EMAHook(Hook): - r"""Exponential Moving Average Hook. - - Use Exponential Moving Average on all parameters of model in training - process. All parameters have a ema backup, which update by the formula - as below. EMAHook takes priority over EvalHook and CheckpointSaverHook. - - .. math:: - - \text{Xema\_{t+1}} = (1 - \text{momentum}) \times - \text{Xema\_{t}} + \text{momentum} \times X_t - - Args: - momentum (float): The momentum used for updating ema parameter. - Defaults to 0.0002. - interval (int): Update ema parameter every interval iteration. 
- Defaults to 1. - warm_up (int): During first warm_up steps, we may use smaller momentum - to update ema parameters more slowly. Defaults to 100. - resume_from (str): The checkpoint path. Defaults to None. - """ - - def __init__(self, - momentum=0.0002, - interval=1, - warm_up=100, - resume_from=None): - assert isinstance(interval, int) and interval > 0 - self.warm_up = warm_up - self.interval = interval - assert momentum > 0 and momentum < 1 - self.momentum = momentum**interval - self.checkpoint = resume_from - - def before_run(self, runner): - """To resume model with it's ema parameters more friendly. - - Register ema parameter as ``named_buffer`` to model - """ - model = runner.model - if is_module_wrapper(model): - model = model.module - self.param_ema_buffer = {} - self.model_parameters = dict(model.named_parameters(recurse=True)) - for name, value in self.model_parameters.items(): - # "." is not allowed in module's buffer name - buffer_name = f"ema_{name.replace('.', '_')}" - self.param_ema_buffer[name] = buffer_name - model.register_buffer(buffer_name, value.data.clone()) - self.model_buffers = dict(model.named_buffers(recurse=True)) - if self.checkpoint is not None: - runner.resume(self.checkpoint) - - def after_train_iter(self, runner): - """Update ema parameter every self.interval iterations.""" - curr_step = runner.iter - # We warm up the momentum considering the instability at beginning - momentum = min(self.momentum, - (1 + curr_step) / (self.warm_up + curr_step)) - if curr_step % self.interval != 0: - return - for name, parameter in self.model_parameters.items(): - buffer_name = self.param_ema_buffer[name] - buffer_parameter = self.model_buffers[buffer_name] - buffer_parameter.mul_(1 - momentum).add_(momentum, parameter.data) - - def after_train_epoch(self, runner): - """We load parameter values from ema backup to model before the - EvalHook.""" - self._swap_ema_parameters() - - def before_train_epoch(self, runner): - """We recover model's parameter from ema backup after last epoch's - EvalHook.""" - self._swap_ema_parameters() - - def _swap_ema_parameters(self): - """Swap the parameter of model with parameter in ema_buffer.""" - for name, value in self.model_parameters.items(): - temp = value.data.clone() - ema_buffer = self.model_buffers[self.param_ema_buffer[name]] - value.data.copy_(ema_buffer.data) - ema_buffer.data.copy_(temp) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/evaluation.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/evaluation.py deleted file mode 100755 index 1eeb4465..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/evaluation.py +++ /dev/null @@ -1,509 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings -from math import inf - -import torch.distributed as dist -from torch.nn.modules.batchnorm import _BatchNorm -from torch.utils.data import DataLoader - -from mmcv.fileio import FileClient -from mmcv.utils import is_seq_of -from .hook import Hook -from .logger import LoggerHook - - -class EvalHook(Hook): - """Non-Distributed evaluation hook. - - This hook will regularly perform evaluation in a given interval when - performing in non-distributed environment. - - Args: - dataloader (DataLoader): A PyTorch dataloader, whose dataset has - implemented ``evaluate`` function. - start (int | None, optional): Evaluation starting epoch. It enables - evaluation before the training starts if ``start`` <= the resuming - epoch. 
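The momentum warm-up applied in `after_train_iter` above can be illustrated with a small numeric sketch; a momentum of 0.05 (larger than the 0.0002 default) is chosen only so that the warm-up term is the one selected by `min`.

```python
momentum, warm_up = 0.05, 100  # illustrative values, not the defaults


def ema_update(ema_value, new_value, curr_step):
    # early iterations use a smaller effective momentum, so the EMA buffer
    # follows the live parameter more slowly at first
    m = min(momentum, (1 + curr_step) / (warm_up + curr_step))
    return (1 - m) * ema_value + m * new_value


print(ema_update(1.0, 0.0, curr_step=0))      # 0.99 (effective momentum 0.01)
print(ema_update(1.0, 0.0, curr_step=10000))  # 0.95 (effective momentum 0.05)
```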
If None, whether to evaluate is merely decided by - ``interval``. Default: None. - interval (int): Evaluation interval. Default: 1. - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: True. - save_best (str, optional): If a metric is specified, it would measure - the best checkpoint during evaluation. The information about best - checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep - best score value and best checkpoint path, which will be also - loaded when resume checkpoint. Options are the evaluation metrics - on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox - detection and instance segmentation. ``AR@100`` for proposal - recall. If ``save_best`` is ``auto``, the first key of the returned - ``OrderedDict`` result will be used. Default: None. - rule (str | None, optional): Comparison rule for best score. If set to - None, it will infer a reasonable rule. Keys such as 'acc', 'top' - .etc will be inferred by 'greater' rule. Keys contain 'loss' will - be inferred by 'less' rule. Options are 'greater', 'less', None. - Default: None. - test_fn (callable, optional): test a model with samples from a - dataloader, and return the test results. If ``None``, the default - test function ``mmcv.engine.single_gpu_test`` will be used. - (default: ``None``) - greater_keys (List[str] | None, optional): Metric keys that will be - inferred by 'greater' comparison rule. If ``None``, - _default_greater_keys will be used. (default: ``None``) - less_keys (List[str] | None, optional): Metric keys that will be - inferred by 'less' comparison rule. If ``None``, _default_less_keys - will be used. (default: ``None``) - out_dir (str, optional): The root directory to save checkpoints. If not - specified, `runner.work_dir` will be used by default. If specified, - the `out_dir` will be the concatenation of `out_dir` and the last - level directory of `runner.work_dir`. - `New in version 1.3.16.` - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. Default: None. - `New in version 1.3.16.` - **eval_kwargs: Evaluation arguments fed into the evaluate function of - the dataset. - - Notes: - If new arguments are added for EvalHook, tools/test.py, - tools/eval_metric.py may be affected. - """ - - # Since the key for determine greater or less is related to the downstream - # tasks, downstream repos may need to overwrite the following inner - # variable accordingly. 
- - rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y} - init_value_map = {'greater': -inf, 'less': inf} - _default_greater_keys = [ - 'acc', 'top', 'AR@', 'auc', 'precision', 'mAP', 'mDice', 'mIoU', - 'mAcc', 'aAcc' - ] - _default_less_keys = ['loss'] - - def __init__(self, - dataloader, - start=None, - interval=1, - by_epoch=True, - save_best=None, - rule=None, - test_fn=None, - greater_keys=None, - less_keys=None, - out_dir=None, - file_client_args=None, - **eval_kwargs): - if not isinstance(dataloader, DataLoader): - raise TypeError(f'dataloader must be a pytorch DataLoader, ' - f'but got {type(dataloader)}') - - if interval <= 0: - raise ValueError(f'interval must be a positive number, ' - f'but got {interval}') - - assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean' - - if start is not None and start < 0: - raise ValueError(f'The evaluation start epoch {start} is smaller ' - f'than 0') - - self.dataloader = dataloader - self.interval = interval - self.start = start - self.by_epoch = by_epoch - - assert isinstance(save_best, str) or save_best is None, \ - '""save_best"" should be a str or None ' \ - f'rather than {type(save_best)}' - self.save_best = save_best - self.eval_kwargs = eval_kwargs - self.initial_flag = True - - if test_fn is None: - from mmcv.engine import single_gpu_test - self.test_fn = single_gpu_test - else: - self.test_fn = test_fn - - if greater_keys is None: - self.greater_keys = self._default_greater_keys - else: - if not isinstance(greater_keys, (list, tuple)): - greater_keys = (greater_keys, ) - assert is_seq_of(greater_keys, str) - self.greater_keys = greater_keys - - if less_keys is None: - self.less_keys = self._default_less_keys - else: - if not isinstance(less_keys, (list, tuple)): - less_keys = (less_keys, ) - assert is_seq_of(less_keys, str) - self.less_keys = less_keys - - if self.save_best is not None: - self.best_ckpt_path = None - self._init_rule(rule, self.save_best) - - self.out_dir = out_dir - self.file_client_args = file_client_args - - def _init_rule(self, rule, key_indicator): - """Initialize rule, key_indicator, comparison_func, and best score. - - Here is the rule to determine which rule is used for key indicator - when the rule is not specific (note that the key indicator matching - is case-insensitive): - 1. If the key indicator is in ``self.greater_keys``, the rule will be - specified as 'greater'. - 2. Or if the key indicator is in ``self.less_keys``, the rule will be - specified as 'less'. - 3. Or if the key indicator is equal to the substring in any one item - in ``self.greater_keys``, the rule will be specified as 'greater'. - 4. Or if the key indicator is equal to the substring in any one item - in ``self.less_keys``, the rule will be specified as 'less'. - - Args: - rule (str | None): Comparison rule for best score. - key_indicator (str | None): Key indicator to determine the - comparison rule. 
- """ - if rule not in self.rule_map and rule is not None: - raise KeyError(f'rule must be greater, less or None, ' - f'but got {rule}.') - - if rule is None: - if key_indicator != 'auto': - # `_lc` here means we use the lower case of keys for - # case-insensitive matching - key_indicator_lc = key_indicator.lower() - greater_keys = [key.lower() for key in self.greater_keys] - less_keys = [key.lower() for key in self.less_keys] - - if key_indicator_lc in greater_keys: - rule = 'greater' - elif key_indicator_lc in less_keys: - rule = 'less' - elif any(key in key_indicator_lc for key in greater_keys): - rule = 'greater' - elif any(key in key_indicator_lc for key in less_keys): - rule = 'less' - else: - raise ValueError(f'Cannot infer the rule for key ' - f'{key_indicator}, thus a specific rule ' - f'must be specified.') - self.rule = rule - self.key_indicator = key_indicator - if self.rule is not None: - self.compare_func = self.rule_map[self.rule] - - def before_run(self, runner): - if not self.out_dir: - self.out_dir = runner.work_dir - - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - - # if `self.out_dir` is not equal to `runner.work_dir`, it means that - # `self.out_dir` is set so the final `self.out_dir` is the - # concatenation of `self.out_dir` and the last level directory of - # `runner.work_dir` - if self.out_dir != runner.work_dir: - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - runner.logger.info( - (f'The best checkpoint will be saved to {self.out_dir} by ' - f'{self.file_client.name}')) - - if self.save_best is not None: - if runner.meta is None: - warnings.warn('runner.meta is None. Creating an empty one.') - runner.meta = dict() - runner.meta.setdefault('hook_msgs', dict()) - self.best_ckpt_path = runner.meta['hook_msgs'].get( - 'best_ckpt', None) - - def before_train_iter(self, runner): - """Evaluate the model only at the start of training by iteration.""" - if self.by_epoch or not self.initial_flag: - return - if self.start is not None and runner.iter >= self.start: - self.after_train_iter(runner) - self.initial_flag = False - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training by epoch.""" - if not (self.by_epoch and self.initial_flag): - return - if self.start is not None and runner.epoch >= self.start: - self.after_train_epoch(runner) - self.initial_flag = False - - def after_train_iter(self, runner): - """Called after every training iter to evaluate the results.""" - if not self.by_epoch and self._should_evaluate(runner): - # Because the priority of EvalHook is higher than LoggerHook, the - # training log and the evaluating log are mixed. Therefore, - # we need to dump the training log and clear it before evaluating - # log is generated. In addition, this problem will only appear in - # `IterBasedRunner` whose `self.by_epoch` is False, because - # `EpochBasedRunner` whose `self.by_epoch` is True calls - # `_do_evaluate` in `after_train_epoch` stage, and at this stage - # the training log has been printed, so it will not cause any - # problem. 
more details at - # https://github.com/open-mmlab/mmsegmentation/issues/694 - for hook in runner._hooks: - if isinstance(hook, LoggerHook): - hook.after_train_iter(runner) - runner.log_buffer.clear() - - self._do_evaluate(runner) - - def after_train_epoch(self, runner): - """Called after every training epoch to evaluate the results.""" - if self.by_epoch and self._should_evaluate(runner): - self._do_evaluate(runner) - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - results = self.test_fn(runner.model, self.dataloader) - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - # the key_score may be `None` so it needs to skip the action to save - # the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) - - def _should_evaluate(self, runner): - """Judge whether to perform evaluation. - - Here is the rule to judge whether to perform evaluation: - 1. It will not perform evaluation during the epoch/iteration interval, - which is determined by ``self.interval``. - 2. It will not perform evaluation if the start time is larger than - current time. - 3. It will not perform evaluation when current time is larger than - the start time but during epoch/iteration interval. - - Returns: - bool: The flag indicating whether to perform evaluation. - """ - if self.by_epoch: - current = runner.epoch - check_time = self.every_n_epochs - else: - current = runner.iter - check_time = self.every_n_iters - - if self.start is None: - if not check_time(runner, self.interval): - # No evaluation during the interval. - return False - elif (current + 1) < self.start: - # No evaluation if start is larger than the current time. - return False - else: - # Evaluation only at epochs/iters 3, 5, 7... - # if start==3 and interval==2 - if (current + 1 - self.start) % self.interval: - return False - return True - - def _save_ckpt(self, runner, key_score): - """Save the best checkpoint. - - It will compare the score according to the compare function, write - related information (best score, best checkpoint path) and save the - best checkpoint into ``work_dir``. - """ - if self.by_epoch: - current = f'epoch_{runner.epoch + 1}' - cur_type, cur_time = 'epoch', runner.epoch + 1 - else: - current = f'iter_{runner.iter + 1}' - cur_type, cur_time = 'iter', runner.iter + 1 - - best_score = runner.meta['hook_msgs'].get( - 'best_score', self.init_value_map[self.rule]) - if self.compare_func(key_score, best_score): - best_score = key_score - runner.meta['hook_msgs']['best_score'] = best_score - - if self.best_ckpt_path and self.file_client.isfile( - self.best_ckpt_path): - self.file_client.remove(self.best_ckpt_path) - runner.logger.info( - (f'The previous best checkpoint {self.best_ckpt_path} was ' - 'removed')) - - best_ckpt_name = f'best_{self.key_indicator}_{current}.pth' - self.best_ckpt_path = self.file_client.join_path( - self.out_dir, best_ckpt_name) - runner.meta['hook_msgs']['best_ckpt'] = self.best_ckpt_path - - runner.save_checkpoint( - self.out_dir, best_ckpt_name, create_symlink=False) - runner.logger.info( - f'Now best checkpoint is saved as {best_ckpt_name}.') - runner.logger.info( - f'Best {self.key_indicator} is {best_score:0.4f} ' - f'at {cur_time} {cur_type}.') - - def evaluate(self, runner, results): - """Evaluate the results. - - Args: - runner (:obj:`mmcv.Runner`): The underlined training runner. - results (list): Output results. 
- """ - eval_res = self.dataloader.dataset.evaluate( - results, logger=runner.logger, **self.eval_kwargs) - - for name, val in eval_res.items(): - runner.log_buffer.output[name] = val - runner.log_buffer.ready = True - - if self.save_best is not None: - # If the performance of model is pool, the `eval_res` may be an - # empty dict and it will raise exception when `self.save_best` is - # not None. More details at - # https://github.com/open-mmlab/mmdetection/issues/6265. - if not eval_res: - warnings.warn( - 'Since `eval_res` is an empty dict, the behavior to save ' - 'the best checkpoint will be skipped in this evaluation.') - return None - - if self.key_indicator == 'auto': - # infer from eval_results - self._init_rule(self.rule, list(eval_res.keys())[0]) - return eval_res[self.key_indicator] - - return None - - -class DistEvalHook(EvalHook): - """Distributed evaluation hook. - - This hook will regularly perform evaluation in a given interval when - performing in distributed environment. - - Args: - dataloader (DataLoader): A PyTorch dataloader, whose dataset has - implemented ``evaluate`` function. - start (int | None, optional): Evaluation starting epoch. It enables - evaluation before the training starts if ``start`` <= the resuming - epoch. If None, whether to evaluate is merely decided by - ``interval``. Default: None. - interval (int): Evaluation interval. Default: 1. - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - default: True. - save_best (str, optional): If a metric is specified, it would measure - the best checkpoint during evaluation. The information about best - checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep - best score value and best checkpoint path, which will be also - loaded when resume checkpoint. Options are the evaluation metrics - on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox - detection and instance segmentation. ``AR@100`` for proposal - recall. If ``save_best`` is ``auto``, the first key of the returned - ``OrderedDict`` result will be used. Default: None. - rule (str | None, optional): Comparison rule for best score. If set to - None, it will infer a reasonable rule. Keys such as 'acc', 'top' - .etc will be inferred by 'greater' rule. Keys contain 'loss' will - be inferred by 'less' rule. Options are 'greater', 'less', None. - Default: None. - test_fn (callable, optional): test a model with samples from a - dataloader in a multi-gpu manner, and return the test results. If - ``None``, the default test function ``mmcv.engine.multi_gpu_test`` - will be used. (default: ``None``) - tmpdir (str | None): Temporary directory to save the results of all - processes. Default: None. - gpu_collect (bool): Whether to use gpu or cpu to collect results. - Default: False. - broadcast_bn_buffer (bool): Whether to broadcast the - buffer(running_mean and running_var) of rank 0 to other rank - before evaluation. Default: True. - out_dir (str, optional): The root directory to save checkpoints. If not - specified, `runner.work_dir` will be used by default. If specified, - the `out_dir` will be the concatenation of `out_dir` and the last - level directory of `runner.work_dir`. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. Default: None. - **eval_kwargs: Evaluation arguments fed into the evaluate function of - the dataset. 
- """ - - def __init__(self, - dataloader, - start=None, - interval=1, - by_epoch=True, - save_best=None, - rule=None, - test_fn=None, - greater_keys=None, - less_keys=None, - broadcast_bn_buffer=True, - tmpdir=None, - gpu_collect=False, - out_dir=None, - file_client_args=None, - **eval_kwargs): - - if test_fn is None: - from mmcv.engine import multi_gpu_test - test_fn = multi_gpu_test - - super().__init__( - dataloader, - start=start, - interval=interval, - by_epoch=by_epoch, - save_best=save_best, - rule=rule, - test_fn=test_fn, - greater_keys=greater_keys, - less_keys=less_keys, - out_dir=out_dir, - file_client_args=file_client_args, - **eval_kwargs) - - self.broadcast_bn_buffer = broadcast_bn_buffer - self.tmpdir = tmpdir - self.gpu_collect = gpu_collect - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - # Synchronization of BatchNorm's buffer (running_mean - # and running_var) is not supported in the DDP of pytorch, - # which may cause the inconsistent performance of models in - # different ranks, so we broadcast BatchNorm's buffers - # of rank 0 to other ranks to avoid this. - if self.broadcast_bn_buffer: - model = runner.model - for name, module in model.named_modules(): - if isinstance(module, - _BatchNorm) and module.track_running_stats: - dist.broadcast(module.running_var, 0) - dist.broadcast(module.running_mean, 0) - - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - - results = self.test_fn( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect) - if runner.rank == 0: - print('\n') - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - # the key_score may be `None` so it needs to skip the action to - # save the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/hook.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/hook.py deleted file mode 100755 index f2d1c986..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/hook.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmcv.utils import Registry, is_method_overridden - -HOOKS = Registry('hook') - - -class Hook: - stages = ('before_run', 'before_train_epoch', 'before_train_iter', - 'after_train_iter', 'after_train_epoch', 'before_val_epoch', - 'before_val_iter', 'after_val_iter', 'after_val_epoch', - 'after_run') - - def before_run(self, runner): - pass - - def after_run(self, runner): - pass - - def before_epoch(self, runner): - pass - - def after_epoch(self, runner): - pass - - def before_iter(self, runner): - pass - - def after_iter(self, runner): - pass - - def before_train_epoch(self, runner): - self.before_epoch(runner) - - def before_val_epoch(self, runner): - self.before_epoch(runner) - - def after_train_epoch(self, runner): - self.after_epoch(runner) - - def after_val_epoch(self, runner): - self.after_epoch(runner) - - def before_train_iter(self, runner): - self.before_iter(runner) - - def before_val_iter(self, runner): - self.before_iter(runner) - - def after_train_iter(self, runner): - self.after_iter(runner) - - def after_val_iter(self, runner): - self.after_iter(runner) - - def every_n_epochs(self, runner, n): - return (runner.epoch + 1) % n == 0 if n > 0 else False - - def every_n_inner_iters(self, runner, n): - return (runner.inner_iter + 1) % n == 0 if n > 0 else False - - def every_n_iters(self, runner, n): - return (runner.iter + 1) % n == 0 if n > 0 else False - - def end_of_epoch(self, runner): - return runner.inner_iter + 1 == len(runner.data_loader) - - def is_last_epoch(self, runner): - return runner.epoch + 1 == runner._max_epochs - - def is_last_iter(self, runner): - return runner.iter + 1 == runner._max_iters - - def get_triggered_stages(self): - trigger_stages = set() - for stage in Hook.stages: - if is_method_overridden(stage, Hook, self): - trigger_stages.add(stage) - - # some methods will be triggered in multi stages - # use this dict to map method to stages. - method_stages_map = { - 'before_epoch': ['before_train_epoch', 'before_val_epoch'], - 'after_epoch': ['after_train_epoch', 'after_val_epoch'], - 'before_iter': ['before_train_iter', 'before_val_iter'], - 'after_iter': ['after_train_iter', 'after_val_iter'], - } - - for method, map_stages in method_stages_map.items(): - if is_method_overridden(method, Hook, self): - trigger_stages.update(map_stages) - - return [stage for stage in Hook.stages if stage in trigger_stages] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/iter_timer.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/iter_timer.py deleted file mode 100755 index a072f4c5..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/iter_timer.py +++ /dev/null @@ -1,30 +0,0 @@ -# copyright (c) 2022 Iluvatar CoreX. All rights reserved. -# Copyright (c) OpenMMLab. All rights reserved. 
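A minimal sketch of extending the `Hook` base class through the `HOOKS` registry defined above; the hook name and log message are invented.

```python
@HOOKS.register_module()
class PrintEpochHook(Hook):
    """Toy hook: logs a message at the end of every training epoch."""

    def after_train_epoch(self, runner):
        # runner.epoch is zero-based, so report it one-based
        runner.logger.info(f'Finished epoch {runner.epoch + 1}')
```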
- -import time -import torch.distributed as dist - -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class IterTimerHook(Hook): - - def before_epoch(self, runner): - self.t = time.time() - - def before_iter(self, runner): - self.batch_size = runner.data_loader._dataloader.batch_size - runner.log_buffer.update({'data_time': time.time() - self.t}) - - def after_iter(self, runner): - iter_info = {'time': time.time() - self.t} - fps = self.batch_size / iter_info["time"] - - if dist.is_initialized(): - fps = fps * dist.get_world_size() - - iter_info["fps"] = fps - - runner.log_buffer.update(iter_info) - self.t = time.time() diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/__init__.py deleted file mode 100755 index a0b6b345..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base import LoggerHook -from .dvclive import DvcliveLoggerHook -from .mlflow import MlflowLoggerHook -from .neptune import NeptuneLoggerHook -from .pavi import PaviLoggerHook -from .tensorboard import TensorboardLoggerHook -from .text import TextLoggerHook -from .wandb import WandbLoggerHook - -__all__ = [ - 'LoggerHook', 'MlflowLoggerHook', 'PaviLoggerHook', - 'TensorboardLoggerHook', 'TextLoggerHook', 'WandbLoggerHook', - 'NeptuneLoggerHook', 'DvcliveLoggerHook' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/base.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/base.py deleted file mode 100755 index f8452567..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/base.py +++ /dev/null @@ -1,166 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers -from abc import ABCMeta, abstractmethod - -import numpy as np -import torch - -from ..hook import Hook - - -class LoggerHook(Hook): - """Base class for logger hooks. - - Args: - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging. - by_epoch (bool): Whether EpochBasedRunner is used. - """ - - __metaclass__ = ABCMeta - - def __init__(self, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True): - self.interval = interval - self.ignore_last = ignore_last - self.reset_flag = reset_flag - self.by_epoch = by_epoch - - @abstractmethod - def log(self, runner): - pass - - @staticmethod - def is_scalar(val, include_np=True, include_torch=True): - """Tell the input variable is a scalar or not. - - Args: - val: Input variable. - include_np (bool): Whether include 0-d np.ndarray as a scalar. - include_torch (bool): Whether include 0-d torch.Tensor as a scalar. - - Returns: - bool: True or False. 
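The throughput this timer hook reports is just the per-GPU batch size divided by the iteration time, multiplied by the world size when distributed; for example, with made-up numbers:

```python
batch_size, iter_time, world_size = 4, 0.05, 8   # illustrative values
fps = batch_size / iter_time * world_size        # 640.0 samples per second
```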
- """ - if isinstance(val, numbers.Number): - return True - elif include_np and isinstance(val, np.ndarray) and val.ndim == 0: - return True - elif include_torch and isinstance(val, torch.Tensor) and len(val) == 1: - return True - else: - return False - - def get_mode(self, runner): - if runner.mode == 'train': - if 'time' in runner.log_buffer.output: - mode = 'train' - else: - mode = 'val' - elif runner.mode == 'val': - mode = 'val' - else: - raise ValueError(f"runner mode should be 'train' or 'val', " - f'but got {runner.mode}') - return mode - - def get_epoch(self, runner): - if runner.mode == 'train': - epoch = runner.epoch + 1 - elif runner.mode == 'val': - # normal val mode - # runner.epoch += 1 has been done before val workflow - epoch = runner.epoch - else: - raise ValueError(f"runner mode should be 'train' or 'val', " - f'but got {runner.mode}') - return epoch - - def get_iter(self, runner, inner_iter=False): - """Get the current training iteration step.""" - if self.by_epoch and inner_iter: - current_iter = runner.inner_iter + 1 - else: - current_iter = runner.iter + 1 - return current_iter - - def get_lr_tags(self, runner): - tags = {} - lrs = runner.current_lr() - if isinstance(lrs, dict): - for name, value in lrs.items(): - tags[f'learning_rate/{name}'] = value[0] - else: - tags['learning_rate'] = lrs[0] - return tags - - def get_momentum_tags(self, runner): - tags = {} - momentums = runner.current_momentum() - if isinstance(momentums, dict): - for name, value in momentums.items(): - tags[f'momentum/{name}'] = value[0] - else: - tags['momentum'] = momentums[0] - return tags - - def get_loggable_tags(self, - runner, - allow_scalar=True, - allow_text=False, - add_mode=True, - tags_to_skip=('time', 'data_time')): - tags = {} - for var, val in runner.log_buffer.output.items(): - if var in tags_to_skip: - continue - if self.is_scalar(val) and not allow_scalar: - continue - if isinstance(val, str) and not allow_text: - continue - if add_mode: - var = f'{self.get_mode(runner)}/{var}' - tags[var] = val - tags.update(self.get_lr_tags(runner)) - tags.update(self.get_momentum_tags(runner)) - return tags - - def before_run(self, runner): - for hook in runner.hooks[::-1]: - if isinstance(hook, LoggerHook): - hook.reset_flag = True - break - - def before_epoch(self, runner): - runner.log_buffer.clear() # clear logs of last epoch - - def after_train_iter(self, runner): - if self.by_epoch and self.every_n_inner_iters(runner, self.interval): - runner.log_buffer.average(self.interval) - elif not self.by_epoch and self.every_n_iters(runner, self.interval): - runner.log_buffer.average(self.interval) - elif self.end_of_epoch(runner) and not self.ignore_last: - # not precise but more stable - runner.log_buffer.average(self.interval) - - if runner.log_buffer.ready: - self.log(runner) - if self.reset_flag: - runner.log_buffer.clear_output() - - def after_train_epoch(self, runner): - if runner.log_buffer.ready: - self.log(runner) - if self.reset_flag: - runner.log_buffer.clear_output() - - def after_val_epoch(self, runner): - runner.log_buffer.average() - self.log(runner) - if self.reset_flag: - runner.log_buffer.clear_output() diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/dvclive.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/dvclive.py deleted file mode 100755 index 687cdc58..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/dvclive.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class DvcliveLoggerHook(LoggerHook): - """Class to log metrics with dvclive. - - It requires `dvclive`_ to be installed. - - Args: - path (str): Directory where dvclive will write TSV log files. - interval (int): Logging interval (every k iterations). - Default 10. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - Default: True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default: True. - by_epoch (bool): Whether EpochBasedRunner is used. - Default: True. - - .. _dvclive: - https://dvc.org/doc/dvclive - """ - - def __init__(self, - path, - interval=10, - ignore_last=True, - reset_flag=True, - by_epoch=True): - - super(DvcliveLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.path = path - self.import_dvclive() - - def import_dvclive(self): - try: - import dvclive - except ImportError: - raise ImportError( - 'Please run "pip install dvclive" to install dvclive') - self.dvclive = dvclive - - @master_only - def before_run(self, runner): - self.dvclive.init(self.path) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - for k, v in tags.items(): - self.dvclive.log(k, v, step=self.get_iter(runner)) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/mlflow.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/mlflow.py deleted file mode 100755 index f9a72592..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/mlflow.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class MlflowLoggerHook(LoggerHook): - - def __init__(self, - exp_name=None, - tags=None, - log_model=True, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True): - """Class to log metrics and (optionally) a trained model to MLflow. - - It requires `MLflow`_ to be installed. - - Args: - exp_name (str, optional): Name of the experiment to be used. - Default None. - If not None, set the active experiment. - If experiment does not exist, an experiment with provided name - will be created. - tags (dict of str: str, optional): Tags for the current run. - Default None. - If not None, set tags for the current run. - log_model (bool, optional): Whether to log an MLflow artifact. - Default True. - If True, log runner.model as an MLflow artifact - for the current run. - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging - by_epoch (bool): Whether EpochBasedRunner is used. - - .. 
_MLflow: - https://www.mlflow.org/docs/latest/index.html - """ - super(MlflowLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.import_mlflow() - self.exp_name = exp_name - self.tags = tags - self.log_model = log_model - - def import_mlflow(self): - try: - import mlflow - import mlflow.pytorch as mlflow_pytorch - except ImportError: - raise ImportError( - 'Please run "pip install mlflow" to install mlflow') - self.mlflow = mlflow - self.mlflow_pytorch = mlflow_pytorch - - @master_only - def before_run(self, runner): - super(MlflowLoggerHook, self).before_run(runner) - if self.exp_name is not None: - self.mlflow.set_experiment(self.exp_name) - if self.tags is not None: - self.mlflow.set_tags(self.tags) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - self.mlflow.log_metrics(tags, step=self.get_iter(runner)) - - @master_only - def after_run(self, runner): - if self.log_model: - self.mlflow_pytorch.log_model(runner.model, 'models') diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/neptune.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/neptune.py deleted file mode 100755 index 7a38772b..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/neptune.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class NeptuneLoggerHook(LoggerHook): - """Class to log metrics to NeptuneAI. - - It requires `neptune-client` to be installed. - - Args: - init_kwargs (dict): a dict contains the initialization keys as below: - - project (str): Name of a project in a form of - namespace/project_name. If None, the value of - NEPTUNE_PROJECT environment variable will be taken. - - api_token (str): User’s API token. - If None, the value of NEPTUNE_API_TOKEN environment - variable will be taken. Note: It is strongly recommended - to use NEPTUNE_API_TOKEN environment variable rather than - placing your API token in plain text in your source code. - - name (str, optional, default is 'Untitled'): Editable name of - the run. Name is displayed in the run's Details and in - Runs table as a column. - Check https://docs.neptune.ai/api-reference/neptune#init for - more init arguments. - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging - by_epoch (bool): Whether EpochBasedRunner is used. - - .. 
_NeptuneAI: - https://docs.neptune.ai/you-should-know/logging-metadata - """ - - def __init__(self, - init_kwargs=None, - interval=10, - ignore_last=True, - reset_flag=True, - with_step=True, - by_epoch=True): - - super(NeptuneLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.import_neptune() - self.init_kwargs = init_kwargs - self.with_step = with_step - - def import_neptune(self): - try: - import neptune.new as neptune - except ImportError: - raise ImportError( - 'Please run "pip install neptune-client" to install neptune') - self.neptune = neptune - self.run = None - - @master_only - def before_run(self, runner): - if self.init_kwargs: - self.run = self.neptune.init(**self.init_kwargs) - else: - self.run = self.neptune.init() - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - for tag_name, tag_value in tags.items(): - if self.with_step: - self.run[tag_name].log( - tag_value, step=self.get_iter(runner)) - else: - tags['global_step'] = self.get_iter(runner) - self.run[tag_name].log(tags) - - @master_only - def after_run(self, runner): - self.run.stop() diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/pavi.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/pavi.py deleted file mode 100755 index ba2f6e8d..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/pavi.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json -import os -import os.path as osp - -import torch -import yaml - -import mmcv -from ....parallel.utils import is_module_wrapper -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class PaviLoggerHook(LoggerHook): - - def __init__(self, - init_kwargs=None, - add_graph=False, - add_last_ckpt=False, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True, - img_key='img_info'): - super(PaviLoggerHook, self).__init__(interval, ignore_last, reset_flag, - by_epoch) - self.init_kwargs = init_kwargs - self.add_graph = add_graph - self.add_last_ckpt = add_last_ckpt - self.img_key = img_key - - @master_only - def before_run(self, runner): - super(PaviLoggerHook, self).before_run(runner) - try: - from pavi import SummaryWriter - except ImportError: - raise ImportError('Please run "pip install pavi" to install pavi.') - - self.run_name = runner.work_dir.split('/')[-1] - - if not self.init_kwargs: - self.init_kwargs = dict() - self.init_kwargs['name'] = self.run_name - self.init_kwargs['model'] = runner._model_name - if runner.meta is not None: - if 'config_dict' in runner.meta: - config_dict = runner.meta['config_dict'] - assert isinstance( - config_dict, - dict), ('meta["config_dict"] has to be of a dict, ' - f'but got {type(config_dict)}') - elif 'config_file' in runner.meta: - config_file = runner.meta['config_file'] - config_dict = dict(mmcv.Config.fromfile(config_file)) - else: - config_dict = None - if config_dict is not None: - # 'max_.*iter' is parsed in pavi sdk as the maximum iterations - # to properly set up the progress bar. 
- config_dict = config_dict.copy() - config_dict.setdefault('max_iter', runner.max_iters) - # non-serializable values are first converted in - # mmcv.dump to json - config_dict = json.loads( - mmcv.dump(config_dict, file_format='json')) - session_text = yaml.dump(config_dict) - self.init_kwargs['session_text'] = session_text - self.writer = SummaryWriter(**self.init_kwargs) - - def get_step(self, runner): - """Get the total training step/epoch.""" - if self.get_mode(runner) == 'val' and self.by_epoch: - return self.get_epoch(runner) - else: - return self.get_iter(runner) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner, add_mode=False) - if tags: - self.writer.add_scalars( - self.get_mode(runner), tags, self.get_step(runner)) - - @master_only - def after_run(self, runner): - if self.add_last_ckpt: - ckpt_path = osp.join(runner.work_dir, 'latest.pth') - if osp.islink(ckpt_path): - ckpt_path = osp.join(runner.work_dir, os.readlink(ckpt_path)) - - if osp.isfile(ckpt_path): - # runner.epoch += 1 has been done before `after_run`. - iteration = runner.epoch if self.by_epoch else runner.iter - return self.writer.add_snapshot_file( - tag=self.run_name, - snapshot_file_path=ckpt_path, - iteration=iteration) - - # flush the buffer and send a task ending signal to Pavi - self.writer.close() - - @master_only - def before_epoch(self, runner): - if runner.epoch == 0 and self.add_graph: - if is_module_wrapper(runner.model): - _model = runner.model.module - else: - _model = runner.model - device = next(_model.parameters()).device - data = next(iter(runner.data_loader)) - image = data[self.img_key][0:1].to(device) - with torch.no_grad(): - self.writer.add_graph(_model, image) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/tensorboard.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/tensorboard.py deleted file mode 100755 index a8d50366..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/tensorboard.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp - -from mmcv.utils import TORCH_VERSION, digit_version -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class TensorboardLoggerHook(LoggerHook): - - def __init__(self, - log_dir=None, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True): - super(TensorboardLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.log_dir = log_dir - - @master_only - def before_run(self, runner): - super(TensorboardLoggerHook, self).before_run(runner) - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.1')): - try: - from tensorboardX import SummaryWriter - except ImportError: - raise ImportError('Please install tensorboardX to use ' - 'TensorboardLoggerHook.') - else: - try: - from torch.utils.tensorboard import SummaryWriter - except ImportError: - raise ImportError( - 'Please run "pip install future tensorboard" to install ' - 'the dependencies to use torch.utils.tensorboard ' - '(applicable to PyTorch 1.1 or higher)') - - if self.log_dir is None: - self.log_dir = osp.join(runner.work_dir, 'tf_logs') - self.writer = SummaryWriter(self.log_dir) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner, allow_text=True) - for tag, val in tags.items(): - if isinstance(val, str): - self.writer.add_text(tag, val, self.get_iter(runner)) - else: - self.writer.add_scalar(tag, val, self.get_iter(runner)) - - @master_only - def after_run(self, runner): - self.writer.close() diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/text.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/text.py deleted file mode 100755 index 043c7bf2..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/text.py +++ /dev/null @@ -1,256 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import datetime -import os -import os.path as osp -from collections import OrderedDict - -import torch -import torch.distributed as dist - -import mmcv -from mmcv.fileio.file_client import FileClient -from mmcv.utils import is_tuple_of, scandir -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class TextLoggerHook(LoggerHook): - """Logger hook in text. - - In this logger hook, the information will be printed on terminal and - saved in json file. - - Args: - by_epoch (bool, optional): Whether EpochBasedRunner is used. - Default: True. - interval (int, optional): Logging interval (every k iterations). - Default: 10. - ignore_last (bool, optional): Ignore the log of last iterations in each - epoch if less than :attr:`interval`. Default: True. - reset_flag (bool, optional): Whether to clear the output buffer after - logging. Default: False. - interval_exp_name (int, optional): Logging interval for experiment - name. This feature is to help users conveniently get the experiment - information from screen or log file. Default: 1000. - out_dir (str, optional): Logs are saved in ``runner.work_dir`` default. - If ``out_dir`` is specified, logs will be copied to a new directory - which is the concatenation of ``out_dir`` and the last level - directory of ``runner.work_dir``. Default: None. - `New in version 1.3.16.` - out_suffix (str or tuple[str], optional): Those filenames ending with - ``out_suffix`` will be copied to ``out_dir``. - Default: ('.log.json', '.log', '.py'). 
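The text and TensorBoard loggers above are typically enabled together through the runner's `log_config`; a sketch with illustrative values follows (the exact keys depend on the downstream repo's config conventions).

```python
# Each dict in `hooks` is turned into the corresponding LoggerHook subclass.
log_config = dict(
    interval=100,  # log every 100 iterations
    hooks=[
        dict(type='TextLoggerHook', by_epoch=False),
        dict(type='TensorboardLoggerHook', log_dir=None),  # None -> work_dir/tf_logs
    ])
```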
- `New in version 1.3.16.` - keep_local (bool, optional): Whether to keep local log when - :attr:`out_dir` is specified. If False, the local log will be - removed. Default: True. - `New in version 1.3.16.` - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - """ - - def __init__(self, - by_epoch=True, - interval=10, - ignore_last=True, - reset_flag=False, - interval_exp_name=1000, - out_dir=None, - out_suffix=('.log.json', '.log', '.py'), - keep_local=True, - file_client_args=None): - super(TextLoggerHook, self).__init__(interval, ignore_last, reset_flag, - by_epoch) - self.by_epoch = by_epoch - self.time_sec_tot = 0 - self.interval_exp_name = interval_exp_name - - if out_dir is None and file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" when `out_dir` is not' - 'specified.') - self.out_dir = out_dir - - if not (out_dir is None or isinstance(out_dir, str) - or is_tuple_of(out_dir, str)): - raise TypeError('out_dir should be "None" or string or tuple of ' - 'string, but got {out_dir}') - self.out_suffix = out_suffix - - self.keep_local = keep_local - self.file_client_args = file_client_args - if self.out_dir is not None: - self.file_client = FileClient.infer_client(file_client_args, - self.out_dir) - - def before_run(self, runner): - super(TextLoggerHook, self).before_run(runner) - - if self.out_dir is not None: - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - # The final `self.out_dir` is the concatenation of `self.out_dir` - # and the last level directory of `runner.work_dir` - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - runner.logger.info( - (f'Text logs will be saved to {self.out_dir} by ' - f'{self.file_client.name} after the training process.')) - - self.start_iter = runner.iter - self.json_log_path = osp.join(runner.work_dir, - f'{runner.timestamp}.log.json') - if runner.meta is not None: - self._dump_log(runner.meta, runner) - - def _get_max_memory(self, runner): - device = getattr(runner.model, 'output_device', None) - mem = torch.cuda.max_memory_allocated(device=device) - mem_mb = torch.tensor([mem / (1024 * 1024)], - dtype=torch.int, - device=device) - if runner.world_size > 1: - dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX) - return mem_mb.item() - - def _log_info(self, log_dict, runner): - # print exp name for users to distinguish experiments - # at every ``interval_exp_name`` iterations and the end of each epoch - if runner.meta is not None and 'exp_name' in runner.meta: - if (self.every_n_iters(runner, self.interval_exp_name)) or ( - self.by_epoch and self.end_of_epoch(runner)): - exp_info = f'Exp name: {runner.meta["exp_name"]}' - runner.logger.info(exp_info) - - if log_dict['mode'] == 'train': - if isinstance(log_dict['lr'], dict): - lr_str = [] - for k, val in log_dict['lr'].items(): - lr_str.append(f'lr_{k}: {val:.3e}') - lr_str = ' '.join(lr_str) - else: - lr_str = f'lr: {log_dict["lr"]:.3e}' - - # by epoch: Epoch [4][100/1000] - # by iter: Iter [100/100000] - if self.by_epoch: - log_str = f'Epoch [{log_dict["epoch"]}]' \ - f'[{log_dict["iter"]}/{len(runner.data_loader)}]\t' - else: - log_str = f'Iter [{log_dict["iter"]}/{runner.max_iters}]\t' - log_str += f'{lr_str}, ' - - if 'time' in log_dict.keys(): - self.time_sec_tot += (log_dict['time'] * self.interval) - time_sec_avg = self.time_sec_tot / ( - 
runner.iter - self.start_iter + 1) - eta_sec = time_sec_avg * (runner.max_iters - runner.iter - 1) - eta_str = str(datetime.timedelta(seconds=int(eta_sec))) - log_str += f'eta: {eta_str}, ' - log_str += f'time: {log_dict["time"]:.3f}, ' \ - f'data_time: {log_dict["data_time"]:.3f}, ' - # statistic memory - if torch.cuda.is_available(): - log_str += f'memory: {log_dict["memory"]}, ' - else: - # val/test time - # here 1000 is the length of the val dataloader - # by epoch: Epoch[val] [4][1000] - # by iter: Iter[val] [1000] - if self.by_epoch: - log_str = f'Epoch({log_dict["mode"]}) ' \ - f'[{log_dict["epoch"]}][{log_dict["iter"]}]\t' - else: - log_str = f'Iter({log_dict["mode"]}) [{log_dict["iter"]}]\t' - - log_items = [] - for name, val in log_dict.items(): - # TODO: resolve this hack - # these items have been in log_str - if name in [ - 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time', - 'memory', 'epoch' - ]: - continue - if isinstance(val, float): - val = f'{val:.4f}' - log_items.append(f'{name}: {val}') - log_str += ', '.join(log_items) - - runner.logger.info(log_str) - - def _dump_log(self, log_dict, runner): - # dump log in json format - json_log = OrderedDict() - for k, v in log_dict.items(): - json_log[k] = self._round_float(v) - # only append log at last line - if runner.rank == 0: - with open(self.json_log_path, 'a+') as f: - mmcv.dump(json_log, f, file_format='json') - f.write('\n') - - def _round_float(self, items): - if isinstance(items, list): - return [self._round_float(item) for item in items] - elif isinstance(items, float): - return round(items, 5) - else: - return items - - def log(self, runner): - if 'eval_iter_num' in runner.log_buffer.output: - # this doesn't modify runner.iter and is regardless of by_epoch - cur_iter = runner.log_buffer.output.pop('eval_iter_num') - else: - cur_iter = self.get_iter(runner, inner_iter=True) - - log_dict = OrderedDict( - mode=self.get_mode(runner), - epoch=self.get_epoch(runner), - iter=cur_iter) - - # only record lr of the first param group - cur_lr = runner.current_lr() - if isinstance(cur_lr, list): - log_dict['lr'] = cur_lr[0] - else: - assert isinstance(cur_lr, dict) - log_dict['lr'] = {} - for k, lr_ in cur_lr.items(): - assert isinstance(lr_, list) - log_dict['lr'].update({k: lr_[0]}) - - if 'time' in runner.log_buffer.output: - # statistic memory - if torch.cuda.is_available(): - log_dict['memory'] = self._get_max_memory(runner) - - log_dict = dict(log_dict, **runner.log_buffer.output) - - self._log_info(log_dict, runner) - self._dump_log(log_dict, runner) - return log_dict - - def after_run(self, runner): - # copy or upload logs to self.out_dir - if self.out_dir is not None: - for filename in scandir(runner.work_dir, self.out_suffix, True): - local_filepath = osp.join(runner.work_dir, filename) - out_filepath = self.file_client.join_path( - self.out_dir, filename) - with open(local_filepath, 'r') as f: - self.file_client.put_text(f.read(), out_filepath) - - runner.logger.info( - (f'The file {local_filepath} has been uploaded to ' - f'{out_filepath}.')) - - if not self.keep_local: - os.remove(local_filepath) - runner.logger.info( - (f'{local_filepath} was removed due to the ' - '`self.keep_local=False`')) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/wandb.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/wandb.py deleted file mode 100755 index 9f680846..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/logger/wandb.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright 
(c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class WandbLoggerHook(LoggerHook): - - def __init__(self, - init_kwargs=None, - interval=10, - ignore_last=True, - reset_flag=False, - commit=True, - by_epoch=True, - with_step=True): - super(WandbLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.import_wandb() - self.init_kwargs = init_kwargs - self.commit = commit - self.with_step = with_step - - def import_wandb(self): - try: - import wandb - except ImportError: - raise ImportError( - 'Please run "pip install wandb" to install wandb') - self.wandb = wandb - - @master_only - def before_run(self, runner): - super(WandbLoggerHook, self).before_run(runner) - if self.wandb is None: - self.import_wandb() - if self.init_kwargs: - self.wandb.init(**self.init_kwargs) - else: - self.wandb.init() - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - if self.with_step: - self.wandb.log( - tags, step=self.get_iter(runner), commit=self.commit) - else: - tags['global_step'] = self.get_iter(runner) - self.wandb.log(tags, commit=self.commit) - - @master_only - def after_run(self, runner): - self.wandb.join() diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/lr_updater.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/lr_updater.py deleted file mode 100755 index e5a12415..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/lr_updater.py +++ /dev/null @@ -1,670 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers -from math import cos, pi - -import mmcv -from .hook import HOOKS, Hook - - -class LrUpdaterHook(Hook): - """LR Scheduler in MMCV. - - Args: - by_epoch (bool): LR changes epoch by epoch - warmup (string): Type of warmup used. 
It can be None(use no warmup), - 'constant', 'linear' or 'exp' - warmup_iters (int): The number of iterations or epochs that warmup - lasts - warmup_ratio (float): LR used at the beginning of warmup equals to - warmup_ratio * initial_lr - warmup_by_epoch (bool): When warmup_by_epoch == True, warmup_iters - means the number of epochs that warmup lasts, otherwise means the - number of iteration that warmup lasts - """ - - def __init__(self, - by_epoch=True, - warmup=None, - warmup_iters=0, - warmup_ratio=0.1, - warmup_by_epoch=False): - # validate the "warmup" argument - if warmup is not None: - if warmup not in ['constant', 'linear', 'exp']: - raise ValueError( - f'"{warmup}" is not a supported type for warming up, valid' - ' types are "constant" and "linear"') - if warmup is not None: - assert warmup_iters > 0, \ - '"warmup_iters" must be a positive integer' - assert 0 < warmup_ratio <= 1.0, \ - '"warmup_ratio" must be in range (0,1]' - - self.by_epoch = by_epoch - self.warmup = warmup - self.warmup_iters = warmup_iters - self.warmup_ratio = warmup_ratio - self.warmup_by_epoch = warmup_by_epoch - - if self.warmup_by_epoch: - self.warmup_epochs = self.warmup_iters - self.warmup_iters = None - else: - self.warmup_epochs = None - - self.base_lr = [] # initial lr for all param groups - self.regular_lr = [] # expected lr if no warming up is performed - - def _set_lr(self, runner, lr_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, lr in zip(optim.param_groups, lr_groups[k]): - param_group['lr'] = lr - else: - for param_group, lr in zip(runner.optimizer.param_groups, - lr_groups): - param_group['lr'] = lr - - def get_lr(self, runner, base_lr): - raise NotImplementedError - - def get_regular_lr(self, runner): - if isinstance(runner.optimizer, dict): - lr_groups = {} - for k in runner.optimizer.keys(): - _lr_group = [ - self.get_lr(runner, _base_lr) - for _base_lr in self.base_lr[k] - ] - lr_groups.update({k: _lr_group}) - - return lr_groups - else: - return [self.get_lr(runner, _base_lr) for _base_lr in self.base_lr] - - def get_warmup_lr(self, cur_iters): - - def _get_warmup_lr(cur_iters, regular_lr): - if self.warmup == 'constant': - warmup_lr = [_lr * self.warmup_ratio for _lr in regular_lr] - elif self.warmup == 'linear': - k = (1 - cur_iters / self.warmup_iters) * (1 - - self.warmup_ratio) - warmup_lr = [_lr * (1 - k) for _lr in regular_lr] - elif self.warmup == 'exp': - k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters) - warmup_lr = [_lr * k for _lr in regular_lr] - return warmup_lr - - if isinstance(self.regular_lr, dict): - lr_groups = {} - for key, regular_lr in self.regular_lr.items(): - lr_groups[key] = _get_warmup_lr(cur_iters, regular_lr) - return lr_groups - else: - return _get_warmup_lr(cur_iters, self.regular_lr) - - def before_run(self, runner): - # NOTE: when resuming from a checkpoint, if 'initial_lr' is not saved, - # it will be set according to the optimizer params - if isinstance(runner.optimizer, dict): - self.base_lr = {} - for k, optim in runner.optimizer.items(): - for group in optim.param_groups: - group.setdefault('initial_lr', group['lr']) - _base_lr = [ - group['initial_lr'] for group in optim.param_groups - ] - self.base_lr.update({k: _base_lr}) - else: - for group in runner.optimizer.param_groups: - group.setdefault('initial_lr', group['lr']) - self.base_lr = [ - group['initial_lr'] for group in runner.optimizer.param_groups - ] - - def before_train_epoch(self, runner): - if self.warmup_iters 
is None: - epoch_len = len(runner.data_loader) - self.warmup_iters = self.warmup_epochs * epoch_len - - if not self.by_epoch: - return - - self.regular_lr = self.get_regular_lr(runner) - self._set_lr(runner, self.regular_lr) - - def before_train_iter(self, runner): - cur_iter = runner.iter - if not self.by_epoch: - self.regular_lr = self.get_regular_lr(runner) - if self.warmup is None or cur_iter >= self.warmup_iters: - self._set_lr(runner, self.regular_lr) - else: - warmup_lr = self.get_warmup_lr(cur_iter) - self._set_lr(runner, warmup_lr) - elif self.by_epoch: - if self.warmup is None or cur_iter > self.warmup_iters: - return - elif cur_iter == self.warmup_iters: - self._set_lr(runner, self.regular_lr) - else: - warmup_lr = self.get_warmup_lr(cur_iter) - self._set_lr(runner, warmup_lr) - - -@HOOKS.register_module() -class FixedLrUpdaterHook(LrUpdaterHook): - - def __init__(self, **kwargs): - super(FixedLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - return base_lr - - -@HOOKS.register_module() -class StepLrUpdaterHook(LrUpdaterHook): - """Step LR scheduler with min_lr clipping. - - Args: - step (int | list[int]): Step to decay the LR. If an int value is given, - regard it as the decay interval. If a list is given, decay LR at - these steps. - gamma (float, optional): Decay LR ratio. Default: 0.1. - min_lr (float, optional): Minimum LR value to keep. If LR after decay - is lower than `min_lr`, it will be clipped to this value. If None - is given, we don't perform lr clipping. Default: None. - """ - - def __init__(self, step, gamma=0.1, min_lr=None, **kwargs): - if isinstance(step, list): - assert mmcv.is_list_of(step, int) - assert all([s > 0 for s in step]) - elif isinstance(step, int): - assert step > 0 - else: - raise TypeError('"step" must be a list or integer') - self.step = step - self.gamma = gamma - self.min_lr = min_lr - super(StepLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - - # calculate exponential term - if isinstance(self.step, int): - exp = progress // self.step - else: - exp = len(self.step) - for i, s in enumerate(self.step): - if progress < s: - exp = i - break - - lr = base_lr * (self.gamma**exp) - if self.min_lr is not None: - # clip to a minimum value - lr = max(lr, self.min_lr) - return lr - - -@HOOKS.register_module() -class ExpLrUpdaterHook(LrUpdaterHook): - - def __init__(self, gamma, **kwargs): - self.gamma = gamma - super(ExpLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - return base_lr * self.gamma**progress - - -@HOOKS.register_module() -class PolyLrUpdaterHook(LrUpdaterHook): - - def __init__(self, power=1., min_lr=0., **kwargs): - self.power = power - self.min_lr = min_lr - super(PolyLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - coeff = (1 - progress / max_progress)**self.power - return (base_lr - self.min_lr) * coeff + self.min_lr - - -@HOOKS.register_module() -class InvLrUpdaterHook(LrUpdaterHook): - - def __init__(self, gamma, power=1., **kwargs): - self.gamma = gamma - self.power = power - super(InvLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - return base_lr * (1 + 
self.gamma * progress)**(-self.power) - - -@HOOKS.register_module() -class CosineAnnealingLrUpdaterHook(LrUpdaterHook): - - def __init__(self, min_lr=None, min_lr_ratio=None, **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super(CosineAnnealingLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - return annealing_cos(base_lr, target_lr, progress / max_progress) - - -@HOOKS.register_module() -class FlatCosineAnnealingLrUpdaterHook(LrUpdaterHook): - """Flat + Cosine lr schedule. - - Modified from https://github.com/fastai/fastai/blob/master/fastai/callback/schedule.py#L128 # noqa: E501 - - Args: - start_percent (float): When to start annealing the learning rate - after the percentage of the total training steps. - The value should be in range [0, 1). - Default: 0.75 - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. - """ - - def __init__(self, - start_percent=0.75, - min_lr=None, - min_lr_ratio=None, - **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - if start_percent < 0 or start_percent > 1 or not isinstance( - start_percent, float): - raise ValueError( - 'expected float between 0 and 1 start_percent, but ' - f'got {start_percent}') - self.start_percent = start_percent - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super(FlatCosineAnnealingLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - start = round(runner.max_epochs * self.start_percent) - progress = runner.epoch - start - max_progress = runner.max_epochs - start - else: - start = round(runner.max_iters * self.start_percent) - progress = runner.iter - start - max_progress = runner.max_iters - start - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - - if progress < 0: - return base_lr - else: - return annealing_cos(base_lr, target_lr, progress / max_progress) - - -@HOOKS.register_module() -class CosineRestartLrUpdaterHook(LrUpdaterHook): - """Cosine annealing with restarts learning rate scheme. - - Args: - periods (list[int]): Periods for each cosine anneling cycle. - restart_weights (list[float], optional): Restart weights at each - restart iteration. Default: [1]. - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. - """ - - def __init__(self, - periods, - restart_weights=[1], - min_lr=None, - min_lr_ratio=None, - **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - self.periods = periods - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - self.restart_weights = restart_weights - assert (len(self.periods) == len(self.restart_weights) - ), 'periods and restart_weights should have the same length.' 
- super(CosineRestartLrUpdaterHook, self).__init__(**kwargs) - - self.cumulative_periods = [ - sum(self.periods[0:i + 1]) for i in range(0, len(self.periods)) - ] - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - else: - progress = runner.iter - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - - idx = get_position_from_periods(progress, self.cumulative_periods) - current_weight = self.restart_weights[idx] - nearest_restart = 0 if idx == 0 else self.cumulative_periods[idx - 1] - current_periods = self.periods[idx] - - alpha = min((progress - nearest_restart) / current_periods, 1) - return annealing_cos(base_lr, target_lr, alpha, current_weight) - - -def get_position_from_periods(iteration, cumulative_periods): - """Get the position from a period list. - - It will return the index of the right-closest number in the period list. - For example, the cumulative_periods = [100, 200, 300, 400], - if iteration == 50, return 0; - if iteration == 210, return 2; - if iteration == 300, return 3. - - Args: - iteration (int): Current iteration. - cumulative_periods (list[int]): Cumulative period list. - - Returns: - int: The position of the right-closest number in the period list. - """ - for i, period in enumerate(cumulative_periods): - if iteration < period: - return i - raise ValueError(f'Current iteration {iteration} exceeds ' - f'cumulative_periods {cumulative_periods}') - - -@HOOKS.register_module() -class CyclicLrUpdaterHook(LrUpdaterHook): - """Cyclic LR Scheduler. - - Implement the cyclical learning rate policy (CLR) described in - https://arxiv.org/pdf/1506.01186.pdf - - Different from the original paper, we use cosine annealing rather than - triangular policy inside a cycle. This improves the performance in the - 3D detection area. - - Args: - by_epoch (bool): Whether to update LR by epoch. - target_ratio (tuple[float]): Relative ratio of the highest LR and the - lowest LR to the initial LR. - cyclic_times (int): Number of cycles during training - step_ratio_up (float): The ratio of the increasing process of LR in - the total cycle. - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. Default: 'cos'. 
- """ - - def __init__(self, - by_epoch=False, - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4, - anneal_strategy='cos', - **kwargs): - if isinstance(target_ratio, float): - target_ratio = (target_ratio, target_ratio / 1e5) - elif isinstance(target_ratio, tuple): - target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \ - if len(target_ratio) == 1 else target_ratio - else: - raise ValueError('target_ratio should be either float ' - f'or tuple, got {type(target_ratio)}') - - assert len(target_ratio) == 2, \ - '"target_ratio" must be list or tuple of two floats' - assert 0 <= step_ratio_up < 1.0, \ - '"step_ratio_up" must be in range [0,1)' - - self.target_ratio = target_ratio - self.cyclic_times = cyclic_times - self.step_ratio_up = step_ratio_up - self.lr_phases = [] # init lr_phases - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - - assert not by_epoch, \ - 'currently only support "by_epoch" = False' - super(CyclicLrUpdaterHook, self).__init__(by_epoch, **kwargs) - - def before_run(self, runner): - super(CyclicLrUpdaterHook, self).before_run(runner) - # initiate lr_phases - # total lr_phases are separated as up and down - max_iter_per_phase = runner.max_iters // self.cyclic_times - iter_up_phase = int(self.step_ratio_up * max_iter_per_phase) - self.lr_phases.append( - [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]]) - self.lr_phases.append([ - iter_up_phase, max_iter_per_phase, max_iter_per_phase, - self.target_ratio[0], self.target_ratio[1] - ]) - - def get_lr(self, runner, base_lr): - curr_iter = runner.iter - for (start_iter, end_iter, max_iter_per_phase, start_ratio, - end_ratio) in self.lr_phases: - curr_iter %= max_iter_per_phase - if start_iter <= curr_iter < end_iter: - progress = curr_iter - start_iter - return self.anneal_func(base_lr * start_ratio, - base_lr * end_ratio, - progress / (end_iter - start_iter)) - - -@HOOKS.register_module() -class OneCycleLrUpdaterHook(LrUpdaterHook): - """One Cycle LR Scheduler. - - The 1cycle learning rate policy changes the learning rate after every - batch. The one cycle learning rate policy is described in - https://arxiv.org/pdf/1708.07120.pdf - - Args: - max_lr (float or list): Upper learning rate boundaries in the cycle - for each parameter group. - total_steps (int, optional): The total number of steps in the cycle. - Note that if a value is not provided here, it will be the max_iter - of runner. Default: None. - pct_start (float): The percentage of the cycle (in number of steps) - spent increasing the learning rate. - Default: 0.3 - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. - Default: 'cos' - div_factor (float): Determines the initial learning rate via - initial_lr = max_lr/div_factor - Default: 25 - final_div_factor (float): Determines the minimum learning rate via - min_lr = initial_lr/final_div_factor - Default: 1e4 - three_phase (bool): If three_phase is True, use a third phase of the - schedule to annihilate the learning rate according to - final_div_factor instead of modifying the second phase (the first - two phases will be symmetrical about the step indicated by - pct_start). 
- Default: False - """ - - def __init__(self, - max_lr, - total_steps=None, - pct_start=0.3, - anneal_strategy='cos', - div_factor=25, - final_div_factor=1e4, - three_phase=False, - **kwargs): - # validate by_epoch, currently only support by_epoch = False - if 'by_epoch' not in kwargs: - kwargs['by_epoch'] = False - else: - assert not kwargs['by_epoch'], \ - 'currently only support "by_epoch" = False' - if not isinstance(max_lr, (numbers.Number, list, dict)): - raise ValueError('the type of max_lr must be the one of list or ' - f'dict, but got {type(max_lr)}') - self._max_lr = max_lr - if total_steps is not None: - if not isinstance(total_steps, int): - raise ValueError('the type of total_steps must be int, but' - f'got {type(total_steps)}') - self.total_steps = total_steps - # validate pct_start - if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float): - raise ValueError('expected float between 0 and 1 pct_start, but ' - f'got {pct_start}') - self.pct_start = pct_start - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - self.div_factor = div_factor - self.final_div_factor = final_div_factor - self.three_phase = three_phase - self.lr_phases = [] # init lr_phases - super(OneCycleLrUpdaterHook, self).__init__(**kwargs) - - def before_run(self, runner): - if hasattr(self, 'total_steps'): - total_steps = self.total_steps - else: - total_steps = runner.max_iters - if total_steps < runner.max_iters: - raise ValueError( - 'The total steps must be greater than or equal to max ' - f'iterations {runner.max_iters} of runner, but total steps ' - f'is {total_steps}.') - - if isinstance(runner.optimizer, dict): - self.base_lr = {} - for k, optim in runner.optimizer.items(): - _max_lr = format_param(k, optim, self._max_lr) - self.base_lr[k] = [lr / self.div_factor for lr in _max_lr] - for group, lr in zip(optim.param_groups, self.base_lr[k]): - group.setdefault('initial_lr', lr) - else: - k = type(runner.optimizer).__name__ - _max_lr = format_param(k, runner.optimizer, self._max_lr) - self.base_lr = [lr / self.div_factor for lr in _max_lr] - for group, lr in zip(runner.optimizer.param_groups, self.base_lr): - group.setdefault('initial_lr', lr) - - if self.three_phase: - self.lr_phases.append( - [float(self.pct_start * total_steps) - 1, 1, self.div_factor]) - self.lr_phases.append([ - float(2 * self.pct_start * total_steps) - 2, self.div_factor, 1 - ]) - self.lr_phases.append( - [total_steps - 1, 1, 1 / self.final_div_factor]) - else: - self.lr_phases.append( - [float(self.pct_start * total_steps) - 1, 1, self.div_factor]) - self.lr_phases.append( - [total_steps - 1, self.div_factor, 1 / self.final_div_factor]) - - def get_lr(self, runner, base_lr): - curr_iter = runner.iter - start_iter = 0 - for i, (end_iter, start_lr, end_lr) in enumerate(self.lr_phases): - if curr_iter <= end_iter: - pct = (curr_iter - start_iter) / (end_iter - start_iter) - lr = self.anneal_func(base_lr * start_lr, base_lr * end_lr, - pct) - break - start_iter = end_iter - return lr - - -def annealing_cos(start, end, factor, weight=1): - """Calculate annealing cos learning rate. - - Cosine anneal from `weight * start + (1 - weight) * end` to `end` as - percentage goes from 0.0 to 1.0. 
- - Args: - start (float): The starting learning rate of the cosine annealing. - end (float): The ending learing rate of the cosine annealing. - factor (float): The coefficient of `pi` when calculating the current - percentage. Range from 0.0 to 1.0. - weight (float, optional): The combination factor of `start` and `end` - when calculating the actual starting learning rate. Default to 1. - """ - cos_out = cos(pi * factor) + 1 - return end + 0.5 * weight * (start - end) * cos_out - - -def annealing_linear(start, end, factor): - """Calculate annealing linear learning rate. - - Linear anneal from `start` to `end` as percentage goes from 0.0 to 1.0. - - Args: - start (float): The starting learning rate of the linear annealing. - end (float): The ending learing rate of the linear annealing. - factor (float): The coefficient of `pi` when calculating the current - percentage. Range from 0.0 to 1.0. - """ - return start + (end - start) * factor - - -def format_param(name, optim, param): - if isinstance(param, numbers.Number): - return [param] * len(optim.param_groups) - elif isinstance(param, (list, tuple)): # multi param groups - if len(param) != len(optim.param_groups): - raise ValueError(f'expected {len(optim.param_groups)} ' - f'values for {name}, got {len(param)}') - return param - else: # multi optimizers - if name not in param: - raise KeyError(f'{name} is not found in {param.keys()}') - return param[name] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/memory.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/memory.py deleted file mode 100755 index 70cf9a83..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/memory.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class EmptyCacheHook(Hook): - - def __init__(self, before_epoch=False, after_epoch=True, after_iter=False): - self._before_epoch = before_epoch - self._after_epoch = after_epoch - self._after_iter = after_iter - - def after_iter(self, runner): - if self._after_iter: - torch.cuda.empty_cache() - - def before_epoch(self, runner): - if self._before_epoch: - torch.cuda.empty_cache() - - def after_epoch(self, runner): - if self._after_epoch: - torch.cuda.empty_cache() diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/momentum_updater.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/momentum_updater.py deleted file mode 100755 index 13d0e2fa..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/momentum_updater.py +++ /dev/null @@ -1,493 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import mmcv -from .hook import HOOKS, Hook -from .lr_updater import annealing_cos, annealing_linear, format_param - - -class MomentumUpdaterHook(Hook): - - def __init__(self, - by_epoch=True, - warmup=None, - warmup_iters=0, - warmup_ratio=0.9): - # validate the "warmup" argument - if warmup is not None: - if warmup not in ['constant', 'linear', 'exp']: - raise ValueError( - f'"{warmup}" is not a supported type for warming up, valid' - ' types are "constant" and "linear"') - if warmup is not None: - assert warmup_iters > 0, \ - '"warmup_iters" must be a positive integer' - assert 0 < warmup_ratio <= 1.0, \ - '"warmup_momentum" must be in range (0,1]' - - self.by_epoch = by_epoch - self.warmup = warmup - self.warmup_iters = warmup_iters - self.warmup_ratio = warmup_ratio - - self.base_momentum = [] # initial momentum for all param groups - self.regular_momentum = [ - ] # expected momentum if no warming up is performed - - def _set_momentum(self, runner, momentum_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, mom in zip(optim.param_groups, - momentum_groups[k]): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - else: - for param_group, mom in zip(runner.optimizer.param_groups, - momentum_groups): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - - def get_momentum(self, runner, base_momentum): - raise NotImplementedError - - def get_regular_momentum(self, runner): - if isinstance(runner.optimizer, dict): - momentum_groups = {} - for k in runner.optimizer.keys(): - _momentum_group = [ - self.get_momentum(runner, _base_momentum) - for _base_momentum in self.base_momentum[k] - ] - momentum_groups.update({k: _momentum_group}) - return momentum_groups - else: - return [ - self.get_momentum(runner, _base_momentum) - for _base_momentum in self.base_momentum - ] - - def get_warmup_momentum(self, cur_iters): - - def _get_warmup_momentum(cur_iters, regular_momentum): - if self.warmup == 'constant': - warmup_momentum = [ - _momentum / self.warmup_ratio - for _momentum in self.regular_momentum - ] - elif self.warmup == 'linear': - k = (1 - cur_iters / self.warmup_iters) * (1 - - self.warmup_ratio) - warmup_momentum = [ - _momentum / (1 - k) for _momentum in self.regular_mom - ] - elif self.warmup == 'exp': - k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters) - warmup_momentum = [ - _momentum / k for _momentum in self.regular_mom - ] - return warmup_momentum - - if isinstance(self.regular_momentum, dict): - momentum_groups = {} - for key, regular_momentum in self.regular_momentum.items(): - momentum_groups[key] = _get_warmup_momentum( - cur_iters, regular_momentum) - return momentum_groups - else: - return _get_warmup_momentum(cur_iters, self.regular_momentum) - - def before_run(self, runner): - # NOTE: when resuming from a checkpoint, - # if 'initial_momentum' is not saved, - # it will be set according to the optimizer params - if isinstance(runner.optimizer, dict): - self.base_momentum = {} - for k, optim in runner.optimizer.items(): - for group in optim.param_groups: - if 'momentum' in group.keys(): - group.setdefault('initial_momentum', group['momentum']) - else: - group.setdefault('initial_momentum', group['betas'][0]) - _base_momentum = [ - group['initial_momentum'] for group in 
optim.param_groups - ] - self.base_momentum.update({k: _base_momentum}) - else: - for group in runner.optimizer.param_groups: - if 'momentum' in group.keys(): - group.setdefault('initial_momentum', group['momentum']) - else: - group.setdefault('initial_momentum', group['betas'][0]) - self.base_momentum = [ - group['initial_momentum'] - for group in runner.optimizer.param_groups - ] - - def before_train_epoch(self, runner): - if not self.by_epoch: - return - self.regular_mom = self.get_regular_momentum(runner) - self._set_momentum(runner, self.regular_mom) - - def before_train_iter(self, runner): - cur_iter = runner.iter - if not self.by_epoch: - self.regular_mom = self.get_regular_momentum(runner) - if self.warmup is None or cur_iter >= self.warmup_iters: - self._set_momentum(runner, self.regular_mom) - else: - warmup_momentum = self.get_warmup_momentum(cur_iter) - self._set_momentum(runner, warmup_momentum) - elif self.by_epoch: - if self.warmup is None or cur_iter > self.warmup_iters: - return - elif cur_iter == self.warmup_iters: - self._set_momentum(runner, self.regular_mom) - else: - warmup_momentum = self.get_warmup_momentum(cur_iter) - self._set_momentum(runner, warmup_momentum) - - -@HOOKS.register_module() -class StepMomentumUpdaterHook(MomentumUpdaterHook): - """Step momentum scheduler with min value clipping. - - Args: - step (int | list[int]): Step to decay the momentum. If an int value is - given, regard it as the decay interval. If a list is given, decay - momentum at these steps. - gamma (float, optional): Decay momentum ratio. Default: 0.5. - min_momentum (float, optional): Minimum momentum value to keep. If - momentum after decay is lower than this value, it will be clipped - accordingly. If None is given, we don't perform lr clipping. - Default: None. 
- """ - - def __init__(self, step, gamma=0.5, min_momentum=None, **kwargs): - if isinstance(step, list): - assert mmcv.is_list_of(step, int) - assert all([s > 0 for s in step]) - elif isinstance(step, int): - assert step > 0 - else: - raise TypeError('"step" must be a list or integer') - self.step = step - self.gamma = gamma - self.min_momentum = min_momentum - super(StepMomentumUpdaterHook, self).__init__(**kwargs) - - def get_momentum(self, runner, base_momentum): - progress = runner.epoch if self.by_epoch else runner.iter - - # calculate exponential term - if isinstance(self.step, int): - exp = progress // self.step - else: - exp = len(self.step) - for i, s in enumerate(self.step): - if progress < s: - exp = i - break - - momentum = base_momentum * (self.gamma**exp) - if self.min_momentum is not None: - # clip to a minimum value - momentum = max(momentum, self.min_momentum) - return momentum - - -@HOOKS.register_module() -class CosineAnnealingMomentumUpdaterHook(MomentumUpdaterHook): - - def __init__(self, min_momentum=None, min_momentum_ratio=None, **kwargs): - assert (min_momentum is None) ^ (min_momentum_ratio is None) - self.min_momentum = min_momentum - self.min_momentum_ratio = min_momentum_ratio - super(CosineAnnealingMomentumUpdaterHook, self).__init__(**kwargs) - - def get_momentum(self, runner, base_momentum): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - if self.min_momentum_ratio is not None: - target_momentum = base_momentum * self.min_momentum_ratio - else: - target_momentum = self.min_momentum - return annealing_cos(base_momentum, target_momentum, - progress / max_progress) - - -@HOOKS.register_module() -class CyclicMomentumUpdaterHook(MomentumUpdaterHook): - """Cyclic momentum Scheduler. - - Implement the cyclical momentum scheduler policy described in - https://arxiv.org/pdf/1708.07120.pdf - - This momentum scheduler usually used together with the CyclicLRUpdater - to improve the performance in the 3D detection area. - - Attributes: - target_ratio (tuple[float]): Relative ratio of the lowest momentum and - the highest momentum to the initial momentum. - cyclic_times (int): Number of cycles during training - step_ratio_up (float): The ratio of the increasing process of momentum - in the total cycle. - by_epoch (bool): Whether to update momentum by epoch. 
- """ - - def __init__(self, - by_epoch=False, - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4, - **kwargs): - if isinstance(target_ratio, float): - target_ratio = (target_ratio, target_ratio / 1e5) - elif isinstance(target_ratio, tuple): - target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \ - if len(target_ratio) == 1 else target_ratio - else: - raise ValueError('target_ratio should be either float ' - f'or tuple, got {type(target_ratio)}') - - assert len(target_ratio) == 2, \ - '"target_ratio" must be list or tuple of two floats' - assert 0 <= step_ratio_up < 1.0, \ - '"step_ratio_up" must be in range [0,1)' - - self.target_ratio = target_ratio - self.cyclic_times = cyclic_times - self.step_ratio_up = step_ratio_up - self.momentum_phases = [] # init momentum_phases - # currently only support by_epoch=False - assert not by_epoch, \ - 'currently only support "by_epoch" = False' - super(CyclicMomentumUpdaterHook, self).__init__(by_epoch, **kwargs) - - def before_run(self, runner): - super(CyclicMomentumUpdaterHook, self).before_run(runner) - # initiate momentum_phases - # total momentum_phases are separated as up and down - max_iter_per_phase = runner.max_iters // self.cyclic_times - iter_up_phase = int(self.step_ratio_up * max_iter_per_phase) - self.momentum_phases.append( - [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]]) - self.momentum_phases.append([ - iter_up_phase, max_iter_per_phase, max_iter_per_phase, - self.target_ratio[0], self.target_ratio[1] - ]) - - def get_momentum(self, runner, base_momentum): - curr_iter = runner.iter - for (start_iter, end_iter, max_iter_per_phase, start_ratio, - end_ratio) in self.momentum_phases: - curr_iter %= max_iter_per_phase - if start_iter <= curr_iter < end_iter: - progress = curr_iter - start_iter - return annealing_cos(base_momentum * start_ratio, - base_momentum * end_ratio, - progress / (end_iter - start_iter)) - - -@HOOKS.register_module() -class OneCycleMomentumUpdaterHook(MomentumUpdaterHook): - """OneCycle momentum Scheduler. - - This momentum scheduler usually used together with the OneCycleLrUpdater - to improve the performance. - - Args: - base_momentum (float or list): Lower momentum boundaries in the cycle - for each parameter group. Note that momentum is cycled inversely - to learning rate; at the peak of a cycle, momentum is - 'base_momentum' and learning rate is 'max_lr'. - Default: 0.85 - max_momentum (float or list): Upper momentum boundaries in the cycle - for each parameter group. Functionally, - it defines the cycle amplitude (max_momentum - base_momentum). - Note that momentum is cycled inversely - to learning rate; at the start of a cycle, momentum is - 'max_momentum' and learning rate is 'base_lr' - Default: 0.95 - pct_start (float): The percentage of the cycle (in number of steps) - spent increasing the learning rate. - Default: 0.3 - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. - Default: 'cos' - three_phase (bool): If three_phase is True, use a third phase of the - schedule to annihilate the learning rate according to - final_div_factor instead of modifying the second phase (the first - two phases will be symmetrical about the step indicated by - pct_start). 
- Default: False - """ - - def __init__(self, - base_momentum=0.85, - max_momentum=0.95, - pct_start=0.3, - anneal_strategy='cos', - three_phase=False, - **kwargs): - # validate by_epoch, currently only support by_epoch=False - if 'by_epoch' not in kwargs: - kwargs['by_epoch'] = False - else: - assert not kwargs['by_epoch'], \ - 'currently only support "by_epoch" = False' - if not isinstance(base_momentum, (float, list, dict)): - raise ValueError('base_momentum must be the type among of float,' - 'list or dict.') - self._base_momentum = base_momentum - if not isinstance(max_momentum, (float, list, dict)): - raise ValueError('max_momentum must be the type among of float,' - 'list or dict.') - self._max_momentum = max_momentum - # validate pct_start - if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float): - raise ValueError('Expected float between 0 and 1 pct_start, but ' - f'got {pct_start}') - self.pct_start = pct_start - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must by one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - self.three_phase = three_phase - self.momentum_phases = [] # init momentum_phases - super(OneCycleMomentumUpdaterHook, self).__init__(**kwargs) - - def before_run(self, runner): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - if ('momentum' not in optim.defaults - and 'betas' not in optim.defaults): - raise ValueError('optimizer must support momentum with' - 'option enabled') - self.use_beta1 = 'betas' in optim.defaults - _base_momentum = format_param(k, optim, self._base_momentum) - _max_momentum = format_param(k, optim, self._max_momentum) - for group, b_momentum, m_momentum in zip( - optim.param_groups, _base_momentum, _max_momentum): - if self.use_beta1: - _, beta2 = group['betas'] - group['betas'] = (m_momentum, beta2) - else: - group['momentum'] = m_momentum - group['base_momentum'] = b_momentum - group['max_momentum'] = m_momentum - else: - optim = runner.optimizer - if ('momentum' not in optim.defaults - and 'betas' not in optim.defaults): - raise ValueError('optimizer must support momentum with' - 'option enabled') - self.use_beta1 = 'betas' in optim.defaults - k = type(optim).__name__ - _base_momentum = format_param(k, optim, self._base_momentum) - _max_momentum = format_param(k, optim, self._max_momentum) - for group, b_momentum, m_momentum in zip(optim.param_groups, - _base_momentum, - _max_momentum): - if self.use_beta1: - _, beta2 = group['betas'] - group['betas'] = (m_momentum, beta2) - else: - group['momentum'] = m_momentum - group['base_momentum'] = b_momentum - group['max_momentum'] = m_momentum - - if self.three_phase: - self.momentum_phases.append({ - 'end_iter': - float(self.pct_start * runner.max_iters) - 1, - 'start_momentum': - 'max_momentum', - 'end_momentum': - 'base_momentum' - }) - self.momentum_phases.append({ - 'end_iter': - float(2 * self.pct_start * runner.max_iters) - 2, - 'start_momentum': - 'base_momentum', - 'end_momentum': - 'max_momentum' - }) - self.momentum_phases.append({ - 'end_iter': runner.max_iters - 1, - 'start_momentum': 'max_momentum', - 'end_momentum': 'max_momentum' - }) - else: - self.momentum_phases.append({ - 'end_iter': - float(self.pct_start * runner.max_iters) - 1, - 'start_momentum': - 'max_momentum', - 'end_momentum': - 'base_momentum' - }) 
- self.momentum_phases.append({ - 'end_iter': runner.max_iters - 1, - 'start_momentum': 'base_momentum', - 'end_momentum': 'max_momentum' - }) - - def _set_momentum(self, runner, momentum_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, mom in zip(optim.param_groups, - momentum_groups[k]): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - else: - for param_group, mom in zip(runner.optimizer.param_groups, - momentum_groups): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - - def get_momentum(self, runner, param_group): - curr_iter = runner.iter - start_iter = 0 - for i, phase in enumerate(self.momentum_phases): - end_iter = phase['end_iter'] - if curr_iter <= end_iter or i == len(self.momentum_phases) - 1: - pct = (curr_iter - start_iter) / (end_iter - start_iter) - momentum = self.anneal_func( - param_group[phase['start_momentum']], - param_group[phase['end_momentum']], pct) - break - start_iter = end_iter - return momentum - - def get_regular_momentum(self, runner): - if isinstance(runner.optimizer, dict): - momentum_groups = {} - for k, optim in runner.optimizer.items(): - _momentum_group = [ - self.get_momentum(runner, param_group) - for param_group in optim.param_groups - ] - momentum_groups.update({k: _momentum_group}) - return momentum_groups - else: - momentum_groups = [] - for param_group in runner.optimizer.param_groups: - momentum_groups.append(self.get_momentum(runner, param_group)) - return momentum_groups diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/optimizer.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/optimizer.py deleted file mode 100755 index f575ceda..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/optimizer.py +++ /dev/null @@ -1,508 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -from collections import defaultdict -from itertools import chain - -from torch.nn.utils import clip_grad - -from mmcv.utils import TORCH_VERSION, _BatchNorm, digit_version -from ..dist_utils import allreduce_grads -from ..fp16_utils import LossScaler, wrap_fp16_model -from .hook import HOOKS, Hook - -try: - # If PyTorch version >= 1.6.0, torch.cuda.amp.GradScaler would be imported - # and used; otherwise, auto fp16 will adopt mmcv's implementation. - from torch.cuda.amp import GradScaler -except ImportError: - pass - - -@HOOKS.register_module() -class OptimizerHook(Hook): - - def __init__(self, grad_clip=None): - self.grad_clip = grad_clip - - def clip_grads(self, params): - params = list( - filter(lambda p: p.requires_grad and p.grad is not None, params)) - if len(params) > 0: - return clip_grad.clip_grad_norm_(params, **self.grad_clip) - - def after_train_iter(self, runner): - runner.optimizer.zero_grad() - runner.outputs['loss'].backward() - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - - -@HOOKS.register_module() -class GradientCumulativeOptimizerHook(OptimizerHook): - """Optimizer Hook implements multi-iters gradient cumulating. 
- - Args: - cumulative_iters (int, optional): Num of gradient cumulative iters. - The optimizer will step every `cumulative_iters` iters. - Defaults to 1. - - Examples: - >>> # Use cumulative_iters to simulate a large batch size - >>> # It is helpful when the hardware cannot handle a large batch size. - >>> loader = DataLoader(data, batch_size=64) - >>> optim_hook = GradientCumulativeOptimizerHook(cumulative_iters=4) - >>> # almost equals to - >>> loader = DataLoader(data, batch_size=256) - >>> optim_hook = OptimizerHook() - """ - - def __init__(self, cumulative_iters=1, **kwargs): - super(GradientCumulativeOptimizerHook, self).__init__(**kwargs) - - assert isinstance(cumulative_iters, int) and cumulative_iters > 0, \ - f'cumulative_iters only accepts positive int, but got ' \ - f'{type(cumulative_iters)} instead.' - - self.cumulative_iters = cumulative_iters - self.divisible_iters = 0 - self.remainder_iters = 0 - self.initialized = False - - def has_batch_norm(self, module): - if isinstance(module, _BatchNorm): - return True - for m in module.children(): - if self.has_batch_norm(m): - return True - return False - - def _init(self, runner): - if runner.iter % self.cumulative_iters != 0: - runner.logger.warning( - 'Resume iter number is not divisible by cumulative_iters in ' - 'GradientCumulativeOptimizerHook, which means the gradient of ' - 'some iters is lost and the result may be influenced slightly.' - ) - - if self.has_batch_norm(runner.model) and self.cumulative_iters > 1: - runner.logger.warning( - 'GradientCumulativeOptimizerHook may slightly decrease ' - 'performance if the model has BatchNorm layers.') - - residual_iters = runner.max_iters - runner.iter - - self.divisible_iters = ( - residual_iters // self.cumulative_iters * self.cumulative_iters) - self.remainder_iters = residual_iters - self.divisible_iters - - self.initialized = True - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - runner.optimizer.zero_grad() - - -if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (using PyTorch's implementation). - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of GradScalar. - Defaults to 512. For Pytorch >= 1.6, mmcv uses official - implementation of GradScaler. If you use a dict version of - loss_scale to create GradScaler, please refer to: - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler - for the parameters. - - Examples: - >>> loss_scale = dict( - ... 
init_scale=65536.0, - ... growth_factor=2.0, - ... backoff_factor=0.5, - ... growth_interval=2000 - ... ) - >>> optimizer_hook = Fp16OptimizerHook(loss_scale=loss_scale) - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - self._scale_update_param = None - if loss_scale == 'dynamic': - self.loss_scaler = GradScaler() - elif isinstance(loss_scale, float): - self._scale_update_param = loss_scale - self.loss_scaler = GradScaler(init_scale=loss_scale) - elif isinstance(loss_scale, dict): - self.loss_scaler = GradScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training.""" - # wrap model mode to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer to - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler. - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients. - 3. Unscale the optimizer’s gradient tensors. - 4. Call optimizer.step() and update scale factor. - 5. Save loss_scaler state_dict for resume purpose. - """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - - self.loss_scaler.scale(runner.outputs['loss']).backward() - self.loss_scaler.unscale_(runner.optimizer) - # grad clip - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using PyTorch's implementation) implements - multi-iters gradient cumulating. - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. 
- """ - - def __init__(self, *args, **kwargs): - super(GradientCumulativeFp16OptimizerHook, - self).__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - - self.loss_scaler.scale(loss).backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - self.loss_scaler.unscale_(runner.optimizer) - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() - -else: - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (mmcv's implementation). - - The steps of fp16 optimizer is as follows. - 1. Scale the loss value. - 2. BP in the fp16 model. - 2. Copy gradients from fp16 model to fp32 weights. - 3. Update fp32 weights. - 4. Copy updated parameters from fp32 weights to fp16 model. - - Refer to https://arxiv.org/abs/1710.03740 for more details. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of LossScaler. - Defaults to 512. - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - if loss_scale == 'dynamic': - self.loss_scaler = LossScaler(mode='dynamic') - elif isinstance(loss_scale, float): - self.loss_scaler = LossScaler( - init_scale=loss_scale, mode='static') - elif isinstance(loss_scale, dict): - self.loss_scaler = LossScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training. - - 1. Make a master copy of fp32 weights for optimization. - 2. Convert the main model from fp32 to fp16. 
- """ - # keep a copy of fp32 weights - old_groups = runner.optimizer.param_groups - runner.optimizer.param_groups = copy.deepcopy( - runner.optimizer.param_groups) - state = defaultdict(dict) - p_map = { - old_p: p - for old_p, p in zip( - chain(*(g['params'] for g in old_groups)), - chain(*(g['params'] - for g in runner.optimizer.param_groups))) - } - for k, v in runner.optimizer.state.items(): - state[p_map[k]] = v - runner.optimizer.state = state - # convert model to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer `loss_scalar.py` - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients (fp16). - 3. Copy gradients from the model to the fp32 weight copy. - 4. Scale the gradients back and update the fp32 weight copy. - 5. Copy back the params from fp32 weight copy to the fp16 model. - 6. Save loss_scaler state_dict for resume purpose. 
- """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - # scale the loss value - scaled_loss = runner.outputs['loss'] * self.loss_scaler.loss_scale - scaled_loss.backward() - # copy fp16 grads in the model to fp32 params in the optimizer - - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - self.loss_scaler.update_scale(has_overflow) - if has_overflow: - runner.logger.warning('Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using mmcv implementation) implements multi- - iters gradient cumulating.""" - - def __init__(self, *args, **kwargs): - super(GradientCumulativeFp16OptimizerHook, - self).__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - - loss = runner.outputs['loss'] - loss = loss / loss_factor - - # scale the loss value - scaled_loss = loss * self.loss_scaler.loss_scale - scaled_loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - else: - runner.logger.warning( - 'Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - self.loss_scaler.update_scale(has_overflow) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = 
self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/profiler.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/profiler.py deleted file mode 100755 index b7023699..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/profiler.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Callable, List, Optional, Union - -import torch - -from ..dist_utils import master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class ProfilerHook(Hook): - """Profiler to analyze performance during training. - - PyTorch Profiler is a tool that allows the collection of the performance - metrics during the training. More details on Profiler can be found at - https://pytorch.org/docs/1.8.1/profiler.html#torch.profiler.profile - - Args: - by_epoch (bool): Profile performance by epoch or by iteration. - Default: True. - profile_iters (int): Number of iterations for profiling. - If ``by_epoch=True``, profile_iters indicates that they are the - first profile_iters epochs at the beginning of the - training, otherwise it indicates the first profile_iters - iterations. Default: 1. - activities (list[str]): List of activity groups (CPU, CUDA) to use in - profiling. Default: ['cpu', 'cuda']. - schedule (dict, optional): Config of generating the callable schedule. - if schedule is None, profiler will not add step markers into the - trace and table view. Default: None. - on_trace_ready (callable, dict): Either a handler or a dict of generate - handler. Default: None. - record_shapes (bool): Save information about operator's input shapes. - Default: False. - profile_memory (bool): Track tensor memory allocation/deallocation. - Default: False. - with_stack (bool): Record source information (file and line number) - for the ops. Default: False. - with_flops (bool): Use formula to estimate the FLOPS of specific - operators (matrix multiplication and 2D convolution). - Default: False. - json_trace_path (str, optional): Exports the collected trace in Chrome - JSON format. Default: None. - - Example: - >>> runner = ... # instantiate a Runner - >>> # tensorboard trace - >>> trace_config = dict(type='tb_trace', dir_name='work_dir') - >>> profiler_config = dict(on_trace_ready=trace_config) - >>> runner.register_profiler_hook(profiler_config) - >>> runner.run(data_loaders=[trainloader], workflow=[('train', 1)]) - """ - - def __init__(self, - by_epoch: bool = True, - profile_iters: int = 1, - activities: List[str] = ['cpu', 'cuda'], - schedule: Optional[dict] = None, - on_trace_ready: Optional[Union[Callable, dict]] = None, - record_shapes: bool = False, - profile_memory: bool = False, - with_stack: bool = False, - with_flops: bool = False, - json_trace_path: Optional[str] = None) -> None: - try: - from torch import profiler # torch version >= 1.8.1 - except ImportError: - raise ImportError('profiler is the new feature of torch1.8.1, ' - f'but your version is {torch.__version__}') - - assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean.' 
- self.by_epoch = by_epoch - - if profile_iters < 1: - raise ValueError('profile_iters should be greater than 0, but got ' - f'{profile_iters}') - self.profile_iters = profile_iters - - if not isinstance(activities, list): - raise ValueError( - f'activities should be list, but got {type(activities)}') - self.activities = [] - for activity in activities: - activity = activity.lower() - if activity == 'cpu': - self.activities.append(profiler.ProfilerActivity.CPU) - elif activity == 'cuda': - self.activities.append(profiler.ProfilerActivity.CUDA) - else: - raise ValueError( - f'activity should be "cpu" or "cuda", but got {activity}') - - if schedule is not None: - self.schedule = profiler.schedule(**schedule) - else: - self.schedule = None - - self.on_trace_ready = on_trace_ready - self.record_shapes = record_shapes - self.profile_memory = profile_memory - self.with_stack = with_stack - self.with_flops = with_flops - self.json_trace_path = json_trace_path - - @master_only - def before_run(self, runner): - if self.by_epoch and runner.max_epochs < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_epochs}') - - if not self.by_epoch and runner.max_iters < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_iters}') - - if callable(self.on_trace_ready): # handler - _on_trace_ready = self.on_trace_ready - elif isinstance(self.on_trace_ready, dict): # config of handler - trace_cfg = self.on_trace_ready.copy() - trace_type = trace_cfg.pop('type') # log_trace handler - if trace_type == 'log_trace': - - def _log_handler(prof): - print(prof.key_averages().table(**trace_cfg)) - - _on_trace_ready = _log_handler - elif trace_type == 'tb_trace': # tensorboard_trace handler - try: - import torch_tb_profiler # noqa: F401 - except ImportError: - raise ImportError('please run "pip install ' - 'torch-tb-profiler" to install ' - 'torch_tb_profiler') - _on_trace_ready = torch.profiler.tensorboard_trace_handler( - **trace_cfg) - else: - raise ValueError('trace_type should be "log_trace" or ' - f'"tb_trace", but got {trace_type}') - elif self.on_trace_ready is None: - _on_trace_ready = None # type: ignore - else: - raise ValueError('on_trace_ready should be handler, dict or None, ' - f'but got {type(self.on_trace_ready)}') - - if runner.max_epochs > 1: - warnings.warn(f'profiler will profile {runner.max_epochs} epochs ' - 'instead of 1 epoch. Since profiler will slow down ' - 'the training, it is recommended to train 1 epoch ' - 'with ProfilerHook and adjust your setting according' - ' to the profiler summary. 
During normal training ' - '(epoch > 1), you may disable the ProfilerHook.') - - self.profiler = torch.profiler.profile( - activities=self.activities, - schedule=self.schedule, - on_trace_ready=_on_trace_ready, - record_shapes=self.record_shapes, - profile_memory=self.profile_memory, - with_stack=self.with_stack, - with_flops=self.with_flops) - - self.profiler.__enter__() - runner.logger.info('profiler is profiling...') - - @master_only - def after_train_epoch(self, runner): - if self.by_epoch and runner.epoch == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) - - @master_only - def after_train_iter(self, runner): - self.profiler.step() - if not self.by_epoch and runner.iter == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/sampler_seed.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/sampler_seed.py deleted file mode 100755 index ee0dc6bd..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/sampler_seed.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class DistSamplerSeedHook(Hook): - """Data-loading sampler for distributed training. - - When distributed training, it is only useful in conjunction with - :obj:`EpochBasedRunner`, while :obj:`IterBasedRunner` achieves the same - purpose with :obj:`IterLoader`. - """ - - def before_epoch(self, runner): - if hasattr(runner.data_loader.sampler, 'set_epoch'): - # in case the data loader uses `SequentialSampler` in Pytorch - runner.data_loader.sampler.set_epoch(runner.epoch) - elif hasattr(runner.data_loader.batch_sampler.sampler, 'set_epoch'): - # batch sampler in pytorch warps the sampler as its attributes. - runner.data_loader.batch_sampler.sampler.set_epoch(runner.epoch) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/sync_buffer.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/sync_buffer.py deleted file mode 100755 index 6376b7ff..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/hooks/sync_buffer.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..dist_utils import allreduce_params -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class SyncBuffersHook(Hook): - """Synchronize model buffers such as running_mean and running_var in BN at - the end of each epoch. - - Args: - distributed (bool): Whether distributed training is used. It is - effective only for distributed training. Defaults to True. - """ - - def __init__(self, distributed=True): - self.distributed = distributed - - def after_epoch(self, runner): - """All-reduce model buffers at the end of each epoch.""" - if self.distributed: - allreduce_params(runner.model.buffers()) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/iter_based_runner.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/iter_based_runner.py deleted file mode 100755 index 9892b07a..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/iter_based_runner.py +++ /dev/null @@ -1,273 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -import platform -import shutil -import time -import warnings - -import torch -from torch.optim import Optimizer - -import mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .hooks import IterTimerHook -from .utils import get_host_info - - -class IterLoader: - - def __init__(self, dataloader): - self._dataloader = dataloader - self.iter_loader = iter(self._dataloader) - self._epoch = 0 - - @property - def epoch(self): - return self._epoch - - def __next__(self): - try: - data = next(self.iter_loader) - except StopIteration: - self._epoch += 1 - if hasattr(self._dataloader.sampler, 'set_epoch'): - self._dataloader.sampler.set_epoch(self._epoch) - time.sleep(2) # Prevent possible deadlock during epoch transition - self.iter_loader = iter(self._dataloader) - data = next(self.iter_loader) - - return data - - def __len__(self): - return len(self._dataloader) - - -@RUNNERS.register_module() -class IterBasedRunner(BaseRunner): - """Iteration-based Runner. - - This runner train models iteration by iteration. - """ - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._epoch = data_loader.epoch - data_batch = next(data_loader) - self.call_hook('before_train_iter') - outputs = self.model.train_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.train_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_train_iter') - self._inner_iter += 1 - self._iter += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - data_batch = next(data_loader) - self.call_hook('before_val_iter') - outputs = self.model.val_step(data_batch, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.val_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_val_iter') - self._inner_iter += 1 - - def run(self, data_loaders, workflow, max_iters=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, iters) to specify the - running order and iterations. E.g, [('train', 10000), - ('val', 1000)] means running 10000 iterations for training and - 1000 iterations for validation, iteratively. 
- """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_iters is not None: - warnings.warn( - 'setting max_iters in run is deprecated, ' - 'please set max_iters in runner_config', DeprecationWarning) - self._max_iters = max_iters - assert self._max_iters is not None, ( - 'max_iters must be specified during instantiation') - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d iters', workflow, - self._max_iters) - self.call_hook('before_run') - - iter_loaders = [IterLoader(x) for x in data_loaders] - - self.call_hook('before_epoch') - - while self.iter < self._max_iters: - for i, flow in enumerate(workflow): - self._inner_iter = 0 - mode, iters = flow - if not isinstance(mode, str) or not hasattr(self, mode): - raise ValueError( - 'runner has no method named "{}" to run a workflow'. - format(mode)) - iter_runner = getattr(self, mode) - for _ in range(iters): - if mode == 'train' and self.iter >= self._max_iters: - break - iter_runner(iter_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_epoch') - self.call_hook('after_run') - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - """Resume model from checkpoint. - - Args: - checkpoint (str): Checkpoint to resume from. - resume_optimizer (bool, optional): Whether resume the optimizer(s) - if the checkpoint file includes optimizer(s). Default to True. - map_location (str, optional): Same as :func:`torch.load`. - Default to 'default'. - """ - if map_location == 'default': - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - self._inner_iter = checkpoint['meta']['iter'] - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}') - - def save_checkpoint(self, - out_dir, - filename_tmpl='iter_{}.pth', - meta=None, - save_optimizer=True, - create_symlink=True): - """Save checkpoint to file. - - Args: - out_dir (str): Directory to save checkpoint files. - filename_tmpl (str, optional): Checkpoint file template. - Defaults to 'iter_{}.pth'. - meta (dict, optional): Metadata to be saved in checkpoint. - Defaults to None. - save_optimizer (bool, optional): Whether save optimizer. - Defaults to True. - create_symlink (bool, optional): Whether create symlink to the - latest checkpoint file. Defaults to True. 
- """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. - # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.iter + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - def register_training_hooks(self, - lr_config, - optimizer_config=None, - checkpoint_config=None, - log_config=None, - momentum_config=None, - custom_hooks_config=None): - """Register default hooks for iter-based training. - - Checkpoint hook, optimizer stepper hook and logger hooks will be set to - `by_epoch=False` by default. - - Default hooks include: - - +----------------------+-------------------------+ - | Hooks | Priority | - +======================+=========================+ - | LrUpdaterHook | VERY_HIGH (10) | - +----------------------+-------------------------+ - | MomentumUpdaterHook | HIGH (30) | - +----------------------+-------------------------+ - | OptimizerStepperHook | ABOVE_NORMAL (40) | - +----------------------+-------------------------+ - | CheckpointSaverHook | NORMAL (50) | - +----------------------+-------------------------+ - | IterTimerHook | LOW (70) | - +----------------------+-------------------------+ - | LoggerHook(s) | VERY_LOW (90) | - +----------------------+-------------------------+ - | CustomHook(s) | defaults to NORMAL (50) | - +----------------------+-------------------------+ - - If custom hooks have same priority with default hooks, custom hooks - will be triggered after default hooks. - """ - if checkpoint_config is not None: - checkpoint_config.setdefault('by_epoch', False) - if lr_config is not None: - lr_config.setdefault('by_epoch', False) - if log_config is not None: - for info in log_config['hooks']: - info.setdefault('by_epoch', False) - super(IterBasedRunner, self).register_training_hooks( - lr_config=lr_config, - momentum_config=momentum_config, - optimizer_config=optimizer_config, - checkpoint_config=checkpoint_config, - log_config=log_config, - timer_config=IterTimerHook(), - custom_hooks_config=custom_hooks_config) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/log_buffer.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/log_buffer.py deleted file mode 100755 index d949e294..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/log_buffer.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from collections import OrderedDict - -import numpy as np - - -class LogBuffer: - - def __init__(self): - self.val_history = OrderedDict() - self.n_history = OrderedDict() - self.output = OrderedDict() - self.ready = False - - def clear(self): - self.val_history.clear() - self.n_history.clear() - self.clear_output() - - def clear_output(self): - self.output.clear() - self.ready = False - - def update(self, vars, count=1): - assert isinstance(vars, dict) - for key, var in vars.items(): - if key not in self.val_history: - self.val_history[key] = [] - self.n_history[key] = [] - self.val_history[key].append(var) - self.n_history[key].append(count) - - def average(self, n=0): - """Average latest n values or all values.""" - assert n >= 0 - for key in self.val_history: - values = np.array(self.val_history[key][-n:]) - nums = np.array(self.n_history[key][-n:]) - avg = np.sum(values * nums) / np.sum(nums) - self.output[key] = avg - self.ready = True diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/__init__.py deleted file mode 100755 index 53c34d04..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import (OPTIMIZER_BUILDERS, OPTIMIZERS, build_optimizer, - build_optimizer_constructor) -from .default_constructor import DefaultOptimizerConstructor - -__all__ = [ - 'OPTIMIZER_BUILDERS', 'OPTIMIZERS', 'DefaultOptimizerConstructor', - 'build_optimizer', 'build_optimizer_constructor' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/builder.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/builder.py deleted file mode 100755 index f9234eed..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/builder.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import inspect - -import torch - -from ...utils import Registry, build_from_cfg - -OPTIMIZERS = Registry('optimizer') -OPTIMIZER_BUILDERS = Registry('optimizer builder') - - -def register_torch_optimizers(): - torch_optimizers = [] - for module_name in dir(torch.optim): - if module_name.startswith('__'): - continue - _optim = getattr(torch.optim, module_name) - if inspect.isclass(_optim) and issubclass(_optim, - torch.optim.Optimizer): - OPTIMIZERS.register_module()(_optim) - torch_optimizers.append(module_name) - return torch_optimizers - - -TORCH_OPTIMIZERS = register_torch_optimizers() - - -def build_optimizer_constructor(cfg): - return build_from_cfg(cfg, OPTIMIZER_BUILDERS) - - -def build_optimizer(model, cfg): - optimizer_cfg = copy.deepcopy(cfg) - constructor_type = optimizer_cfg.pop('constructor', - 'DefaultOptimizerConstructor') - paramwise_cfg = optimizer_cfg.pop('paramwise_cfg', None) - optim_constructor = build_optimizer_constructor( - dict( - type=constructor_type, - optimizer_cfg=optimizer_cfg, - paramwise_cfg=paramwise_cfg)) - optimizer = optim_constructor(model) - return optimizer diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/default_constructor.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/default_constructor.py deleted file mode 100755 index 64d06847..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/optimizer/default_constructor.py +++ /dev/null @@ -1,249 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch -from torch.nn import GroupNorm, LayerNorm - -from mmcv.utils import _BatchNorm, _InstanceNorm, build_from_cfg, is_list_of -from mmcv.utils.ext_loader import check_ops_exist -from .builder import OPTIMIZER_BUILDERS, OPTIMIZERS - - -@OPTIMIZER_BUILDERS.register_module() -class DefaultOptimizerConstructor: - """Default constructor for optimizers. - - By default each parameter share the same optimizer settings, and we - provide an argument ``paramwise_cfg`` to specify parameter-wise settings. - It is a dict and may contain the following fields: - - - ``custom_keys`` (dict): Specified parameters-wise settings by keys. If - one of the keys in ``custom_keys`` is a substring of the name of one - parameter, then the setting of the parameter will be specified by - ``custom_keys[key]`` and other setting like ``bias_lr_mult`` etc. will - be ignored. It should be noted that the aforementioned ``key`` is the - longest key that is a substring of the name of the parameter. If there - are multiple matched keys with the same length, then the key with lower - alphabet order will be chosen. - ``custom_keys[key]`` should be a dict and may contain fields ``lr_mult`` - and ``decay_mult``. See Example 2 below. - - ``bias_lr_mult`` (float): It will be multiplied to the learning - rate for all bias parameters (except for those in normalization - layers and offset layers of DCN). - - ``bias_decay_mult`` (float): It will be multiplied to the weight - decay for all bias parameters (except for those in - normalization layers, depthwise conv layers, offset layers of DCN). - - ``norm_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of normalization - layers. - - ``dwconv_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of depthwise conv - layers. - - ``dcn_offset_lr_mult`` (float): It will be multiplied to the learning - rate for parameters of offset layer in the deformable convs - of a model. - - ``bypass_duplicate`` (bool): If true, the duplicate parameters - would not be added into optimizer. Default: False. - - Note: - 1. If the option ``dcn_offset_lr_mult`` is used, the constructor will - override the effect of ``bias_lr_mult`` in the bias of offset - layer. So be careful when using both ``bias_lr_mult`` and - ``dcn_offset_lr_mult``. If you wish to apply both of them to the - offset layer in deformable convs, set ``dcn_offset_lr_mult`` - to the original ``dcn_offset_lr_mult`` * ``bias_lr_mult``. - 2. If the option ``dcn_offset_lr_mult`` is used, the constructor will - apply it to all the DCN layers in the model. So be careful when - the model contains multiple DCN layers in places other than - backbone. - - Args: - model (:obj:`nn.Module`): The model with parameters to be optimized. - optimizer_cfg (dict): The config dict of the optimizer. - Positional fields are - - - `type`: class name of the optimizer. - - Optional fields are - - - any arguments of the corresponding optimizer type, e.g., - lr, weight_decay, momentum, etc. - paramwise_cfg (dict, optional): Parameter-wise options. - - Example 1: - >>> model = torch.nn.modules.Conv1d(1, 1, 1) - >>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9, - >>> weight_decay=0.0001) - >>> paramwise_cfg = dict(norm_decay_mult=0.) 
- >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - - Example 2: - >>> # assume model have attribute model.backbone and model.cls_head - >>> optimizer_cfg = dict(type='SGD', lr=0.01, weight_decay=0.95) - >>> paramwise_cfg = dict(custom_keys={ - '.backbone': dict(lr_mult=0.1, decay_mult=0.9)}) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - >>> # Then the `lr` and `weight_decay` for model.backbone is - >>> # (0.01 * 0.1, 0.95 * 0.9). `lr` and `weight_decay` for - >>> # model.cls_head is (0.01, 0.95). - """ - - def __init__(self, optimizer_cfg, paramwise_cfg=None): - if not isinstance(optimizer_cfg, dict): - raise TypeError('optimizer_cfg should be a dict', - f'but got {type(optimizer_cfg)}') - self.optimizer_cfg = optimizer_cfg - self.paramwise_cfg = {} if paramwise_cfg is None else paramwise_cfg - self.base_lr = optimizer_cfg.get('lr', None) - self.base_wd = optimizer_cfg.get('weight_decay', None) - self._validate_cfg() - - def _validate_cfg(self): - if not isinstance(self.paramwise_cfg, dict): - raise TypeError('paramwise_cfg should be None or a dict, ' - f'but got {type(self.paramwise_cfg)}') - - if 'custom_keys' in self.paramwise_cfg: - if not isinstance(self.paramwise_cfg['custom_keys'], dict): - raise TypeError( - 'If specified, custom_keys must be a dict, ' - f'but got {type(self.paramwise_cfg["custom_keys"])}') - if self.base_wd is None: - for key in self.paramwise_cfg['custom_keys']: - if 'decay_mult' in self.paramwise_cfg['custom_keys'][key]: - raise ValueError('base_wd should not be None') - - # get base lr and weight decay - # weight_decay must be explicitly specified if mult is specified - if ('bias_decay_mult' in self.paramwise_cfg - or 'norm_decay_mult' in self.paramwise_cfg - or 'dwconv_decay_mult' in self.paramwise_cfg): - if self.base_wd is None: - raise ValueError('base_wd should not be None') - - def _is_in(self, param_group, param_group_list): - assert is_list_of(param_group_list, dict) - param = set(param_group['params']) - param_set = set() - for group in param_group_list: - param_set.update(set(group['params'])) - - return not param.isdisjoint(param_set) - - def add_params(self, params, module, prefix='', is_dcn_module=None): - """Add all parameters of module to the params list. - - The parameters of the given module will be added to the list of param - groups, with specific rules defined by paramwise_cfg. - - Args: - params (list[dict]): A list of param groups, it will be modified - in place. - module (nn.Module): The module to be added. - prefix (str): The prefix of the module - is_dcn_module (int|float|None): If the current module is a - submodule of DCN, `is_dcn_module` will be passed to - control conv_offset layer's learning rate. Defaults to None. - """ - # get param-wise options - custom_keys = self.paramwise_cfg.get('custom_keys', {}) - # first sort with alphabet order and then sort with reversed len of str - sorted_keys = sorted(sorted(custom_keys.keys()), key=len, reverse=True) - - bias_lr_mult = self.paramwise_cfg.get('bias_lr_mult', 1.) - bias_decay_mult = self.paramwise_cfg.get('bias_decay_mult', 1.) - norm_decay_mult = self.paramwise_cfg.get('norm_decay_mult', 1.) - dwconv_decay_mult = self.paramwise_cfg.get('dwconv_decay_mult', 1.) - bypass_duplicate = self.paramwise_cfg.get('bypass_duplicate', False) - dcn_offset_lr_mult = self.paramwise_cfg.get('dcn_offset_lr_mult', 1.) 
- - # special rules for norm layers and depth-wise conv layers - is_norm = isinstance(module, - (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm)) - is_dwconv = ( - isinstance(module, torch.nn.Conv2d) - and module.in_channels == module.groups) - - for name, param in module.named_parameters(recurse=False): - param_group = {'params': [param]} - if not param.requires_grad: - params.append(param_group) - continue - if bypass_duplicate and self._is_in(param_group, params): - warnings.warn(f'{prefix} is duplicate. It is skipped since ' - f'bypass_duplicate={bypass_duplicate}') - continue - # if the parameter match one of the custom keys, ignore other rules - is_custom = False - for key in sorted_keys: - if key in f'{prefix}.{name}': - is_custom = True - lr_mult = custom_keys[key].get('lr_mult', 1.) - param_group['lr'] = self.base_lr * lr_mult - if self.base_wd is not None: - decay_mult = custom_keys[key].get('decay_mult', 1.) - param_group['weight_decay'] = self.base_wd * decay_mult - break - - if not is_custom: - # bias_lr_mult affects all bias parameters - # except for norm.bias dcn.conv_offset.bias - if name == 'bias' and not (is_norm or is_dcn_module): - param_group['lr'] = self.base_lr * bias_lr_mult - - if (prefix.find('conv_offset') != -1 and is_dcn_module - and isinstance(module, torch.nn.Conv2d)): - # deal with both dcn_offset's bias & weight - param_group['lr'] = self.base_lr * dcn_offset_lr_mult - - # apply weight decay policies - if self.base_wd is not None: - # norm decay - if is_norm: - param_group[ - 'weight_decay'] = self.base_wd * norm_decay_mult - # depth-wise conv - elif is_dwconv: - param_group[ - 'weight_decay'] = self.base_wd * dwconv_decay_mult - # bias lr and decay - elif name == 'bias' and not is_dcn_module: - # TODO: current bias_decay_mult will have affect on DCN - param_group[ - 'weight_decay'] = self.base_wd * bias_decay_mult - params.append(param_group) - - if check_ops_exist(): - from mmcv.ops import ModulatedDeformConv2d - is_dcn_module = isinstance(module, - (ModulatedDeformConv2d)) - else: - is_dcn_module = False - for child_name, child_mod in module.named_children(): - child_prefix = f'{prefix}.{child_name}' if prefix else child_name - self.add_params( - params, - child_mod, - prefix=child_prefix, - is_dcn_module=is_dcn_module) - - def __call__(self, model): - if hasattr(model, 'module'): - model = model.module - - optimizer_cfg = self.optimizer_cfg.copy() - # if no paramwise option is specified, just use the global setting - if not self.paramwise_cfg: - optimizer_cfg['params'] = model.parameters() - return build_from_cfg(optimizer_cfg, OPTIMIZERS) - - # set param-wise lr and weight decay recursively - params = [] - self.add_params(params, model) - optimizer_cfg['params'] = params - - return build_from_cfg(optimizer_cfg, OPTIMIZERS) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/priority.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/priority.py deleted file mode 100755 index 64cc4e3a..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/priority.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import Enum - - -class Priority(Enum): - """Hook priority levels. 
- - +--------------+------------+ - | Level | Value | - +==============+============+ - | HIGHEST | 0 | - +--------------+------------+ - | VERY_HIGH | 10 | - +--------------+------------+ - | HIGH | 30 | - +--------------+------------+ - | ABOVE_NORMAL | 40 | - +--------------+------------+ - | NORMAL | 50 | - +--------------+------------+ - | BELOW_NORMAL | 60 | - +--------------+------------+ - | LOW | 70 | - +--------------+------------+ - | VERY_LOW | 90 | - +--------------+------------+ - | LOWEST | 100 | - +--------------+------------+ - """ - - HIGHEST = 0 - VERY_HIGH = 10 - HIGH = 30 - ABOVE_NORMAL = 40 - NORMAL = 50 - BELOW_NORMAL = 60 - LOW = 70 - VERY_LOW = 90 - LOWEST = 100 - - -def get_priority(priority): - """Get priority value. - - Args: - priority (int or str or :obj:`Priority`): Priority. - - Returns: - int: The priority value. - """ - if isinstance(priority, int): - if priority < 0 or priority > 100: - raise ValueError('priority must be between 0 and 100') - return priority - elif isinstance(priority, Priority): - return priority.value - elif isinstance(priority, str): - return Priority[priority.upper()].value - else: - raise TypeError('priority must be an integer or Priority enum value') diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/utils.py b/cv/super_resolution/basicvsr/pytorch/mmcv/runner/utils.py deleted file mode 100755 index 144d11e1..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/runner/utils.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import random -import sys -import time -import warnings -from getpass import getuser -from socket import gethostname - -import numpy as np -import torch - -import mmcv - - -def get_host_info(): - """Get hostname and username. - - Return empty string if exception raised, e.g. ``getpass.getuser()`` will - lead to error in docker container - """ - host = '' - try: - host = f'{getuser()}@{gethostname()}' - except Exception as e: - warnings.warn(f'Host or user not found: {str(e)}') - finally: - return host - - -def get_time_str(): - return time.strftime('%Y%m%d_%H%M%S', time.localtime()) - - -def obj_from_dict(info, parent=None, default_args=None): - """Initialize an object from dict. - - The dict must contain the key "type", which indicates the object type, it - can be either a string or type, such as "list" or ``list``. Remaining - fields are treated as the arguments for constructing the object. - - Args: - info (dict): Object types and arguments. - parent (:class:`module`): Module which may containing expected object - classes. - default_args (dict, optional): Default arguments for initializing the - object. - - Returns: - any type: Object built from the dict. - """ - assert isinstance(info, dict) and 'type' in info - assert isinstance(default_args, dict) or default_args is None - args = info.copy() - obj_type = args.pop('type') - if mmcv.is_str(obj_type): - if parent is not None: - obj_type = getattr(parent, obj_type) - else: - obj_type = sys.modules[obj_type] - elif not isinstance(obj_type, type): - raise TypeError('type must be a str or valid type, but ' - f'got {type(obj_type)}') - if default_args is not None: - for name, value in default_args.items(): - args.setdefault(name, value) - return obj_type(**args) - - -def set_random_seed(seed, deterministic=False, use_rank_shift=False): - """Set random seed. - - Args: - seed (int): Seed to be used. 
- deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - rank_shift (bool): Whether to add rank number to the random seed to - have different random seed in different threads. Default: False. - """ - if use_rank_shift: - rank, _ = mmcv.runner.get_dist_info() - seed += rank - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/__init__.py deleted file mode 100755 index 378a0068..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/__init__.py +++ /dev/null @@ -1,69 +0,0 @@ -# flake8: noqa -# Copyright (c) OpenMMLab. All rights reserved. -from .config import Config, ConfigDict, DictAction -from .misc import (check_prerequisites, concat_list, deprecated_api_warning, - has_method, import_modules_from_strings, is_list_of, - is_method_overridden, is_seq_of, is_str, is_tuple_of, - iter_cast, list_cast, requires_executable, requires_package, - slice_list, to_1tuple, to_2tuple, to_3tuple, to_4tuple, - to_ntuple, tuple_cast) -from .path import (check_file_exist, fopen, is_filepath, mkdir_or_exist, - scandir, symlink) -from .progressbar import (ProgressBar, track_iter_progress, - track_parallel_progress, track_progress) -from .testing import (assert_attrs_equal, assert_dict_contains_subset, - assert_dict_has_keys, assert_is_norm_layer, - assert_keys_equal, assert_params_all_zeros, - check_python_script) -from .timer import Timer, TimerError, check_time -from .version_utils import digit_version, get_git_hash - -try: - import torch -except ImportError: - __all__ = [ - 'Config', 'ConfigDict', 'DictAction', 'is_str', 'iter_cast', - 'list_cast', 'tuple_cast', 'is_seq_of', 'is_list_of', 'is_tuple_of', - 'slice_list', 'concat_list', 'check_prerequisites', 'requires_package', - 'requires_executable', 'is_filepath', 'fopen', 'check_file_exist', - 'mkdir_or_exist', 'symlink', 'scandir', 'ProgressBar', - 'track_progress', 'track_iter_progress', 'track_parallel_progress', - 'Timer', 'TimerError', 'check_time', 'deprecated_api_warning', - 'digit_version', 'get_git_hash', 'import_modules_from_strings', - 'assert_dict_contains_subset', 'assert_attrs_equal', - 'assert_dict_has_keys', 'assert_keys_equal', 'check_python_script', - 'to_1tuple', 'to_2tuple', 'to_3tuple', 'to_4tuple', 'to_ntuple', - 'is_method_overridden', 'has_method' - ] -else: - from .env import collect_env - from .logging import get_logger, print_log - from .parrots_jit import jit, skip_no_elena - from .parrots_wrapper import ( - TORCH_VERSION, BuildExtension, CppExtension, CUDAExtension, DataLoader, - PoolDataLoader, SyncBatchNorm, _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, - _AvgPoolNd, _BatchNorm, _ConvNd, _ConvTransposeMixin, _InstanceNorm, - _MaxPoolNd, get_build_config, is_rocm_pytorch, _get_cuda_home) - from .registry import Registry, build_from_cfg - from .trace import is_jit_tracing - __all__ = [ - 'Config', 'ConfigDict', 'DictAction', 'collect_env', 'get_logger', - 'print_log', 'is_str', 'iter_cast', 'list_cast', 'tuple_cast', - 'is_seq_of', 'is_list_of', 'is_tuple_of', 'slice_list', 'concat_list', - 'check_prerequisites', 
'requires_package', 'requires_executable', - 'is_filepath', 'fopen', 'check_file_exist', 'mkdir_or_exist', - 'symlink', 'scandir', 'ProgressBar', 'track_progress', - 'track_iter_progress', 'track_parallel_progress', 'Registry', - 'build_from_cfg', 'Timer', 'TimerError', 'check_time', 'SyncBatchNorm', - '_AdaptiveAvgPoolNd', '_AdaptiveMaxPoolNd', '_AvgPoolNd', '_BatchNorm', - '_ConvNd', '_ConvTransposeMixin', '_InstanceNorm', '_MaxPoolNd', - 'get_build_config', 'BuildExtension', 'CppExtension', 'CUDAExtension', - 'DataLoader', 'PoolDataLoader', 'TORCH_VERSION', - 'deprecated_api_warning', 'digit_version', 'get_git_hash', - 'import_modules_from_strings', 'jit', 'skip_no_elena', - 'assert_dict_contains_subset', 'assert_attrs_equal', - 'assert_dict_has_keys', 'assert_keys_equal', 'assert_is_norm_layer', - 'assert_params_all_zeros', 'check_python_script', - 'is_method_overridden', 'is_jit_tracing', 'is_rocm_pytorch', - '_get_cuda_home', 'has_method' - ] diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/config.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/config.py deleted file mode 100755 index c71377c0..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/config.py +++ /dev/null @@ -1,688 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import ast -import copy -import os -import os.path as osp -import platform -import shutil -import sys -import tempfile -import uuid -import warnings -from argparse import Action, ArgumentParser -from collections import abc -from importlib import import_module - -from addict import Dict -from yapf.yapflib.yapf_api import FormatCode - -from .misc import import_modules_from_strings -from .path import check_file_exist - -if platform.system() == 'Windows': - import regex as re -else: - import re - -BASE_KEY = '_base_' -DELETE_KEY = '_delete_' -DEPRECATION_KEY = '_deprecation_' -RESERVED_KEYS = ['filename', 'text', 'pretty_text'] - - -class ConfigDict(Dict): - - def __missing__(self, name): - raise KeyError(name) - - def __getattr__(self, name): - try: - value = super(ConfigDict, self).__getattr__(name) - except KeyError: - ex = AttributeError(f"'{self.__class__.__name__}' object has no " - f"attribute '{name}'") - except Exception as e: - ex = e - else: - return value - raise ex - - -def add_args(parser, cfg, prefix=''): - for k, v in cfg.items(): - if isinstance(v, str): - parser.add_argument('--' + prefix + k) - elif isinstance(v, int): - parser.add_argument('--' + prefix + k, type=int) - elif isinstance(v, float): - parser.add_argument('--' + prefix + k, type=float) - elif isinstance(v, bool): - parser.add_argument('--' + prefix + k, action='store_true') - elif isinstance(v, dict): - add_args(parser, v, prefix + k + '.') - elif isinstance(v, abc.Iterable): - parser.add_argument('--' + prefix + k, type=type(v[0]), nargs='+') - else: - print(f'cannot parse key {prefix + k} of type {type(v)}') - return parser - - -class Config: - """A facility for config and config files. - - It supports common file formats as configs: python/json/yaml. The interface - is the same as a dict object and also allows access config values as - attributes. 
- - Example: - >>> cfg = Config(dict(a=1, b=dict(b1=[0, 1]))) - >>> cfg.a - 1 - >>> cfg.b - {'b1': [0, 1]} - >>> cfg.b.b1 - [0, 1] - >>> cfg = Config.fromfile('tests/data/config/a.py') - >>> cfg.filename - "/home/kchen/projects/mmcv/tests/data/config/a.py" - >>> cfg.item4 - 'test' - >>> cfg - "Config [path: /home/kchen/projects/mmcv/tests/data/config/a.py]: " - "{'item1': [1, 2], 'item2': {'a': 0}, 'item3': True, 'item4': 'test'}" - """ - - @staticmethod - def _validate_py_syntax(filename): - with open(filename, 'r', encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - content = f.read() - try: - ast.parse(content) - except SyntaxError as e: - raise SyntaxError('There are syntax errors in config ' - f'file {filename}: {e}') - - @staticmethod - def _substitute_predefined_vars(filename, temp_config_name): - file_dirname = osp.dirname(filename) - file_basename = osp.basename(filename) - file_basename_no_extension = osp.splitext(file_basename)[0] - file_extname = osp.splitext(filename)[1] - support_templates = dict( - fileDirname=file_dirname, - fileBasename=file_basename, - fileBasenameNoExtension=file_basename_no_extension, - fileExtname=file_extname) - with open(filename, 'r', encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - config_file = f.read() - for key, value in support_templates.items(): - regexp = r'\{\{\s*' + str(key) + r'\s*\}\}' - value = value.replace('\\', '/') - config_file = re.sub(regexp, value, config_file) - with open(temp_config_name, 'w', encoding='utf-8') as tmp_config_file: - tmp_config_file.write(config_file) - - @staticmethod - def _pre_substitute_base_vars(filename, temp_config_name): - """Substitute base variable placehoders to string, so that parsing - would work.""" - with open(filename, 'r', encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - config_file = f.read() - base_var_dict = {} - regexp = r'\{\{\s*' + BASE_KEY + r'\.([\w\.]+)\s*\}\}' - base_vars = set(re.findall(regexp, config_file)) - for base_var in base_vars: - randstr = f'_{base_var}_{uuid.uuid4().hex.lower()[:6]}' - base_var_dict[randstr] = base_var - regexp = r'\{\{\s*' + BASE_KEY + r'\.' 
+ base_var + r'\s*\}\}' - config_file = re.sub(regexp, f'"{randstr}"', config_file) - with open(temp_config_name, 'w', encoding='utf-8') as tmp_config_file: - tmp_config_file.write(config_file) - return base_var_dict - - @staticmethod - def _substitute_base_vars(cfg, base_var_dict, base_cfg): - """Substitute variable strings to their actual values.""" - cfg = copy.deepcopy(cfg) - - if isinstance(cfg, dict): - for k, v in cfg.items(): - if isinstance(v, str) and v in base_var_dict: - new_v = base_cfg - for new_k in base_var_dict[v].split('.'): - new_v = new_v[new_k] - cfg[k] = new_v - elif isinstance(v, (list, tuple, dict)): - cfg[k] = Config._substitute_base_vars( - v, base_var_dict, base_cfg) - elif isinstance(cfg, tuple): - cfg = tuple( - Config._substitute_base_vars(c, base_var_dict, base_cfg) - for c in cfg) - elif isinstance(cfg, list): - cfg = [ - Config._substitute_base_vars(c, base_var_dict, base_cfg) - for c in cfg - ] - elif isinstance(cfg, str) and cfg in base_var_dict: - new_v = base_cfg - for new_k in base_var_dict[cfg].split('.'): - new_v = new_v[new_k] - cfg = new_v - - return cfg - - @staticmethod - def _file2dict(filename, use_predefined_variables=True): - filename = osp.abspath(osp.expanduser(filename)) - check_file_exist(filename) - fileExtname = osp.splitext(filename)[1] - if fileExtname not in ['.py', '.json', '.yaml', '.yml']: - raise IOError('Only py/yml/yaml/json type are supported now!') - - with tempfile.TemporaryDirectory() as temp_config_dir: - temp_config_file = tempfile.NamedTemporaryFile( - dir=temp_config_dir, suffix=fileExtname) - if platform.system() == 'Windows': - temp_config_file.close() - temp_config_name = osp.basename(temp_config_file.name) - # Substitute predefined variables - if use_predefined_variables: - Config._substitute_predefined_vars(filename, - temp_config_file.name) - else: - shutil.copyfile(filename, temp_config_file.name) - # Substitute base variables from placeholders to strings - base_var_dict = Config._pre_substitute_base_vars( - temp_config_file.name, temp_config_file.name) - - if filename.endswith('.py'): - temp_module_name = osp.splitext(temp_config_name)[0] - sys.path.insert(0, temp_config_dir) - Config._validate_py_syntax(filename) - mod = import_module(temp_module_name) - sys.path.pop(0) - cfg_dict = { - name: value - for name, value in mod.__dict__.items() - if not name.startswith('__') - } - # delete imported module - del sys.modules[temp_module_name] - elif filename.endswith(('.yml', '.yaml', '.json')): - import mmcv - cfg_dict = mmcv.load(temp_config_file.name) - # close temp file - temp_config_file.close() - - # check deprecation information - if DEPRECATION_KEY in cfg_dict: - deprecation_info = cfg_dict.pop(DEPRECATION_KEY) - warning_msg = f'The config file {filename} will be deprecated ' \ - 'in the future.' - if 'expected' in deprecation_info: - warning_msg += f' Please use {deprecation_info["expected"]} ' \ - 'instead.' 
- if 'reference' in deprecation_info: - warning_msg += ' More information can be found at ' \ - f'{deprecation_info["reference"]}' - warnings.warn(warning_msg) - - cfg_text = filename + '\n' - with open(filename, 'r', encoding='utf-8') as f: - # Setting encoding explicitly to resolve coding issue on windows - cfg_text += f.read() - - if BASE_KEY in cfg_dict: - cfg_dir = osp.dirname(filename) - base_filename = cfg_dict.pop(BASE_KEY) - base_filename = base_filename if isinstance( - base_filename, list) else [base_filename] - - cfg_dict_list = list() - cfg_text_list = list() - for f in base_filename: - _cfg_dict, _cfg_text = Config._file2dict(osp.join(cfg_dir, f)) - cfg_dict_list.append(_cfg_dict) - cfg_text_list.append(_cfg_text) - - base_cfg_dict = dict() - for c in cfg_dict_list: - duplicate_keys = base_cfg_dict.keys() & c.keys() - if len(duplicate_keys) > 0: - raise KeyError('Duplicate key is not allowed among bases. ' - f'Duplicate keys: {duplicate_keys}') - base_cfg_dict.update(c) - - # Substitute base variables from strings to their actual values - cfg_dict = Config._substitute_base_vars(cfg_dict, base_var_dict, - base_cfg_dict) - - base_cfg_dict = Config._merge_a_into_b(cfg_dict, base_cfg_dict) - cfg_dict = base_cfg_dict - - # merge cfg_text - cfg_text_list.append(cfg_text) - cfg_text = '\n'.join(cfg_text_list) - - return cfg_dict, cfg_text - - @staticmethod - def _merge_a_into_b(a, b, allow_list_keys=False): - """merge dict ``a`` into dict ``b`` (non-inplace). - - Values in ``a`` will overwrite ``b``. ``b`` is copied first to avoid - in-place modifications. - - Args: - a (dict): The source dict to be merged into ``b``. - b (dict): The origin dict to be fetch keys from ``a``. - allow_list_keys (bool): If True, int string keys (e.g. '0', '1') - are allowed in source ``a`` and will replace the element of the - corresponding index in b if b is a list. Default: False. - - Returns: - dict: The modified dict of ``b`` using ``a``. - - Examples: - # Normally merge a into b. - >>> Config._merge_a_into_b( - ... dict(obj=dict(a=2)), dict(obj=dict(a=1))) - {'obj': {'a': 2}} - - # Delete b first and merge a into b. - >>> Config._merge_a_into_b( - ... dict(obj=dict(_delete_=True, a=2)), dict(obj=dict(a=1))) - {'obj': {'a': 2}} - - # b is a list - >>> Config._merge_a_into_b( - ... {'0': dict(a=2)}, [dict(a=1), dict(b=2)], True) - [{'a': 2}, {'b': 2}] - """ - b = b.copy() - for k, v in a.items(): - if allow_list_keys and k.isdigit() and isinstance(b, list): - k = int(k) - if len(b) <= k: - raise KeyError(f'Index {k} exceeds the length of list {b}') - b[k] = Config._merge_a_into_b(v, b[k], allow_list_keys) - elif isinstance(v, - dict) and k in b and not v.pop(DELETE_KEY, False): - allowed_types = (dict, list) if allow_list_keys else dict - if not isinstance(b[k], allowed_types): - raise TypeError( - f'{k}={v} in child config cannot inherit from base ' - f'because {k} is a dict in the child config but is of ' - f'type {type(b[k])} in base config. 
You may set ' - f'`{DELETE_KEY}=True` to ignore the base config') - b[k] = Config._merge_a_into_b(v, b[k], allow_list_keys) - else: - b[k] = v - return b - - @staticmethod - def fromfile(filename, - use_predefined_variables=True, - import_custom_modules=True): - cfg_dict, cfg_text = Config._file2dict(filename, - use_predefined_variables) - if import_custom_modules and cfg_dict.get('custom_imports', None): - import_modules_from_strings(**cfg_dict['custom_imports']) - return Config(cfg_dict, cfg_text=cfg_text, filename=filename) - - @staticmethod - def fromstring(cfg_str, file_format): - """Generate config from config str. - - Args: - cfg_str (str): Config str. - file_format (str): Config file format corresponding to the - config str. Only py/yml/yaml/json type are supported now! - - Returns: - obj:`Config`: Config obj. - """ - if file_format not in ['.py', '.json', '.yaml', '.yml']: - raise IOError('Only py/yml/yaml/json type are supported now!') - if file_format != '.py' and 'dict(' in cfg_str: - # check if users specify a wrong suffix for python - warnings.warn( - 'Please check "file_format", the file format may be .py') - with tempfile.NamedTemporaryFile( - 'w', encoding='utf-8', suffix=file_format, - delete=False) as temp_file: - temp_file.write(cfg_str) - # on windows, previous implementation cause error - # see PR 1077 for details - cfg = Config.fromfile(temp_file.name) - os.remove(temp_file.name) - return cfg - - @staticmethod - def auto_argparser(description=None): - """Generate argparser from config file automatically (experimental)""" - partial_parser = ArgumentParser(description=description) - partial_parser.add_argument('config', help='config file path') - cfg_file = partial_parser.parse_known_args()[0].config - cfg = Config.fromfile(cfg_file) - parser = ArgumentParser(description=description) - parser.add_argument('config', help='config file path') - add_args(parser, cfg) - return parser, cfg - - def __init__(self, cfg_dict=None, cfg_text=None, filename=None): - if cfg_dict is None: - cfg_dict = dict() - elif not isinstance(cfg_dict, dict): - raise TypeError('cfg_dict must be a dict, but ' - f'got {type(cfg_dict)}') - for key in cfg_dict: - if key in RESERVED_KEYS: - raise KeyError(f'{key} is reserved for config file') - - super(Config, self).__setattr__('_cfg_dict', ConfigDict(cfg_dict)) - super(Config, self).__setattr__('_filename', filename) - if cfg_text: - text = cfg_text - elif filename: - with open(filename, 'r') as f: - text = f.read() - else: - text = '' - super(Config, self).__setattr__('_text', text) - - @property - def filename(self): - return self._filename - - @property - def text(self): - return self._text - - @property - def pretty_text(self): - - indent = 4 - - def _indent(s_, num_spaces): - s = s_.split('\n') - if len(s) == 1: - return s_ - first = s.pop(0) - s = [(num_spaces * ' ') + line for line in s] - s = '\n'.join(s) - s = first + '\n' + s - return s - - def _format_basic_types(k, v, use_mapping=False): - if isinstance(v, str): - v_str = f"'{v}'" - else: - v_str = str(v) - - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f'{k_str}: {v_str}' - else: - attr_str = f'{str(k)}={v_str}' - attr_str = _indent(attr_str, indent) - - return attr_str - - def _format_list(k, v, use_mapping=False): - # check if all items in the list are dict - if all(isinstance(_, dict) for _ in v): - v_str = '[\n' - v_str += '\n'.join( - f'dict({_indent(_format_dict(v_), indent)}),' - for v_ in v).rstrip(',') - if use_mapping: - k_str = f"'{k}'" if 
isinstance(k, str) else str(k) - attr_str = f'{k_str}: {v_str}' - else: - attr_str = f'{str(k)}={v_str}' - attr_str = _indent(attr_str, indent) + ']' - else: - attr_str = _format_basic_types(k, v, use_mapping) - return attr_str - - def _contain_invalid_identifier(dict_str): - contain_invalid_identifier = False - for key_name in dict_str: - contain_invalid_identifier |= \ - (not str(key_name).isidentifier()) - return contain_invalid_identifier - - def _format_dict(input_dict, outest_level=False): - r = '' - s = [] - - use_mapping = _contain_invalid_identifier(input_dict) - if use_mapping: - r += '{' - for idx, (k, v) in enumerate(input_dict.items()): - is_last = idx >= len(input_dict) - 1 - end = '' if outest_level or is_last else ',' - if isinstance(v, dict): - v_str = '\n' + _format_dict(v) - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f'{k_str}: dict({v_str}' - else: - attr_str = f'{str(k)}=dict({v_str}' - attr_str = _indent(attr_str, indent) + ')' + end - elif isinstance(v, list): - attr_str = _format_list(k, v, use_mapping) + end - else: - attr_str = _format_basic_types(k, v, use_mapping) + end - - s.append(attr_str) - r += '\n'.join(s) - if use_mapping: - r += '}' - return r - - cfg_dict = self._cfg_dict.to_dict() - text = _format_dict(cfg_dict, outest_level=True) - # copied from setup.cfg - yapf_style = dict( - based_on_style='pep8', - blank_line_before_nested_class_or_def=True, - split_before_expression_after_opening_paren=True) - text, _ = FormatCode(text, style_config=yapf_style, verify=True) - - return text - - def __repr__(self): - return f'Config (path: {self.filename}): {self._cfg_dict.__repr__()}' - - def __len__(self): - return len(self._cfg_dict) - - def __getattr__(self, name): - return getattr(self._cfg_dict, name) - - def __getitem__(self, name): - return self._cfg_dict.__getitem__(name) - - def __setattr__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setattr__(name, value) - - def __setitem__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setitem__(name, value) - - def __iter__(self): - return iter(self._cfg_dict) - - def __getstate__(self): - return (self._cfg_dict, self._filename, self._text) - - def __setstate__(self, state): - _cfg_dict, _filename, _text = state - super(Config, self).__setattr__('_cfg_dict', _cfg_dict) - super(Config, self).__setattr__('_filename', _filename) - super(Config, self).__setattr__('_text', _text) - - def dump(self, file=None): - cfg_dict = super(Config, self).__getattribute__('_cfg_dict').to_dict() - if self.filename.endswith('.py'): - if file is None: - return self.pretty_text - else: - with open(file, 'w', encoding='utf-8') as f: - f.write(self.pretty_text) - else: - import mmcv - if file is None: - file_format = self.filename.split('.')[-1] - return mmcv.dump(cfg_dict, file_format=file_format) - else: - mmcv.dump(cfg_dict, file) - - def merge_from_dict(self, options, allow_list_keys=True): - """Merge list into cfg_dict. - - Merge the dict parsed by MultipleKVAction into this cfg. - - Examples: - >>> options = {'model.backbone.depth': 50, - ... 'model.backbone.with_cp':True} - >>> cfg = Config(dict(model=dict(backbone=dict(type='ResNet')))) - >>> cfg.merge_from_dict(options) - >>> cfg_dict = super(Config, self).__getattribute__('_cfg_dict') - >>> assert cfg_dict == dict( - ... model=dict(backbone=dict(depth=50, with_cp=True))) - - # Merge list element - >>> cfg = Config(dict(pipeline=[ - ... 
dict(type='LoadImage'), dict(type='LoadAnnotations')])) - >>> options = dict(pipeline={'0': dict(type='SelfLoadImage')}) - >>> cfg.merge_from_dict(options, allow_list_keys=True) - >>> cfg_dict = super(Config, self).__getattribute__('_cfg_dict') - >>> assert cfg_dict == dict(pipeline=[ - ... dict(type='SelfLoadImage'), dict(type='LoadAnnotations')]) - - Args: - options (dict): dict of configs to merge from. - allow_list_keys (bool): If True, int string keys (e.g. '0', '1') - are allowed in ``options`` and will replace the element of the - corresponding index in the config if the config is a list. - Default: True. - """ - option_cfg_dict = {} - for full_key, v in options.items(): - d = option_cfg_dict - key_list = full_key.split('.') - for subkey in key_list[:-1]: - d.setdefault(subkey, ConfigDict()) - d = d[subkey] - subkey = key_list[-1] - d[subkey] = v - - cfg_dict = super(Config, self).__getattribute__('_cfg_dict') - super(Config, self).__setattr__( - '_cfg_dict', - Config._merge_a_into_b( - option_cfg_dict, cfg_dict, allow_list_keys=allow_list_keys)) - - -class DictAction(Action): - """ - argparse action to split an argument into KEY=VALUE form - on the first = and append to a dictionary. List options can - be passed as comma separated values, i.e 'KEY=V1,V2,V3', or with explicit - brackets, i.e. 'KEY=[V1,V2,V3]'. It also support nested brackets to build - list/tuple values. e.g. 'KEY=[(V1,V2),(V3,V4)]' - """ - - @staticmethod - def _parse_int_float_bool(val): - try: - return int(val) - except ValueError: - pass - try: - return float(val) - except ValueError: - pass - if val.lower() in ['true', 'false']: - return True if val.lower() == 'true' else False - return val - - @staticmethod - def _parse_iterable(val): - """Parse iterable values in the string. - - All elements inside '()' or '[]' are treated as iterable values. - - Args: - val (str): Value string. - - Returns: - list | tuple: The expanded list or tuple from the string. - - Examples: - >>> DictAction._parse_iterable('1,2,3') - [1, 2, 3] - >>> DictAction._parse_iterable('[a, b, c]') - ['a', 'b', 'c'] - >>> DictAction._parse_iterable('[(1, 2, 3), [a, b], c]') - [(1, 2, 3), ['a', 'b'], 'c'] - """ - - def find_next_comma(string): - """Find the position of next comma in the string. - - If no ',' is found in the string, return the string length. All - chars inside '()' and '[]' are treated as one element and thus ',' - inside these brackets are ignored. - """ - assert (string.count('(') == string.count(')')) and ( - string.count('[') == string.count(']')), \ - f'Imbalanced brackets exist in {string}' - end = len(string) - for idx, char in enumerate(string): - pre = string[:idx] - # The string before this ',' is balanced - if ((char == ',') and (pre.count('(') == pre.count(')')) - and (pre.count('[') == pre.count(']'))): - end = idx - break - return end - - # Strip ' and " characters and replace whitespace. 
- val = val.strip('\'\"').replace(' ', '') - is_tuple = False - if val.startswith('(') and val.endswith(')'): - is_tuple = True - val = val[1:-1] - elif val.startswith('[') and val.endswith(']'): - val = val[1:-1] - elif ',' not in val: - # val is a single value - return DictAction._parse_int_float_bool(val) - - values = [] - while len(val) > 0: - comma_idx = find_next_comma(val) - element = DictAction._parse_iterable(val[:comma_idx]) - values.append(element) - val = val[comma_idx + 1:] - if is_tuple: - values = tuple(values) - return values - - def __call__(self, parser, namespace, values, option_string=None): - options = {} - for kv in values: - key, val = kv.split('=', maxsplit=1) - options[key] = self._parse_iterable(val) - setattr(namespace, self.dest, options) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/env.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/env.py deleted file mode 100755 index e46a1094..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/env.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""This file holding some environment constant for sharing by other files.""" - -import os.path as osp -import subprocess -import sys -from collections import defaultdict - -import cv2 -import torch - -import mmcv -from .parrots_wrapper import get_build_config - - -def collect_env(): - """Collect the information of the running environments. - - Returns: - dict: The environment information. The following fields are contained. - - - sys.platform: The variable of ``sys.platform``. - - Python: Python version. - - CUDA available: Bool, indicating if CUDA is available. - - GPU devices: Device type of each GPU. - - CUDA_HOME (optional): The env var ``CUDA_HOME``. - - NVCC (optional): NVCC version. - - GCC: GCC version, "n/a" if GCC is not installed. - - PyTorch: PyTorch version. - - PyTorch compiling details: The output of \ - ``torch.__config__.show()``. - - TorchVision (optional): TorchVision version. - - OpenCV: OpenCV version. - - MMCV: MMCV version. - - MMCV Compiler: The GCC version for compiling MMCV ops. - - MMCV CUDA Compiler: The CUDA version for compiling MMCV ops. 
- """ - env_info = {} - env_info['sys.platform'] = sys.platform - env_info['Python'] = sys.version.replace('\n', '') - - cuda_available = torch.cuda.is_available() - env_info['CUDA available'] = cuda_available - - if cuda_available: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - devices[torch.cuda.get_device_name(k)].append(str(k)) - for name, device_ids in devices.items(): - env_info['GPU ' + ','.join(device_ids)] = name - - from mmcv.utils.parrots_wrapper import _get_cuda_home - CUDA_HOME = _get_cuda_home() - env_info['CUDA_HOME'] = CUDA_HOME - - if CUDA_HOME is not None and osp.isdir(CUDA_HOME): - try: - nvcc = osp.join(CUDA_HOME, 'bin/nvcc') - nvcc = subprocess.check_output( - f'"{nvcc}" -V | tail -n1', shell=True) - nvcc = nvcc.decode('utf-8').strip() - except subprocess.SubprocessError: - nvcc = 'Not Available' - env_info['NVCC'] = nvcc - - try: - gcc = subprocess.check_output('gcc --version | head -n1', shell=True) - gcc = gcc.decode('utf-8').strip() - env_info['GCC'] = gcc - except subprocess.CalledProcessError: # gcc is unavailable - env_info['GCC'] = 'n/a' - - env_info['PyTorch'] = torch.__version__ - env_info['PyTorch compiling details'] = get_build_config() - - try: - import torchvision - env_info['TorchVision'] = torchvision.__version__ - except ModuleNotFoundError: - pass - - env_info['OpenCV'] = cv2.__version__ - - env_info['MMCV'] = mmcv.__version__ - - try: - from mmcv.ops import get_compiler_version, get_compiling_cuda_version - except ModuleNotFoundError: - env_info['MMCV Compiler'] = 'n/a' - env_info['MMCV CUDA Compiler'] = 'n/a' - else: - env_info['MMCV Compiler'] = get_compiler_version() - env_info['MMCV CUDA Compiler'] = get_compiling_cuda_version() - - return env_info diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/ext_loader.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/ext_loader.py deleted file mode 100755 index 08132d2c..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/ext_loader.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import importlib -import os -import pkgutil -import warnings -from collections import namedtuple - -import torch - -if torch.__version__ != 'parrots': - - def load_ext(name, funcs): - ext = importlib.import_module('mmcv.' 
+ name) - for fun in funcs: - assert hasattr(ext, fun), f'{fun} miss in module {name}' - return ext -else: - from parrots import extension - from parrots.base import ParrotsException - - has_return_value_ops = [ - 'nms', - 'softnms', - 'nms_match', - 'nms_rotated', - 'top_pool_forward', - 'top_pool_backward', - 'bottom_pool_forward', - 'bottom_pool_backward', - 'left_pool_forward', - 'left_pool_backward', - 'right_pool_forward', - 'right_pool_backward', - 'fused_bias_leakyrelu', - 'upfirdn2d', - 'ms_deform_attn_forward', - 'pixel_group', - 'contour_expand', - ] - - def get_fake_func(name, e): - - def fake_func(*args, **kwargs): - warnings.warn(f'{name} is not supported in parrots now') - raise e - - return fake_func - - def load_ext(name, funcs): - ExtModule = namedtuple('ExtModule', funcs) - ext_list = [] - lib_root = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) - for fun in funcs: - try: - ext_fun = extension.load(fun, name, lib_dir=lib_root) - except ParrotsException as e: - if 'No element registered' not in e.message: - warnings.warn(e.message) - ext_fun = get_fake_func(fun, e) - ext_list.append(ext_fun) - else: - if fun in has_return_value_ops: - ext_list.append(ext_fun.op) - else: - ext_list.append(ext_fun.op_) - return ExtModule(*ext_list) - - -def check_ops_exist(): - ext_loader = pkgutil.find_loader('mmcv._ext') - return ext_loader is not None diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/logging.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/logging.py deleted file mode 100755 index 4aa0e04b..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/logging.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.distributed as dist - -logger_initialized = {} - - -def get_logger(name, log_file=None, log_level=logging.INFO, file_mode='w'): - """Initialize and get a logger by name. - - If the logger has not been initialized, this method will initialize the - logger by adding one or two handlers, otherwise the initialized logger will - be directly returned. During initialization, a StreamHandler will always be - added. If `log_file` is specified and the process rank is 0, a FileHandler - will also be added. - - Args: - name (str): Logger name. - log_file (str | None): The log filename. If specified, a FileHandler - will be added to the logger. - log_level (int): The logger level. Note that only the process of - rank 0 is affected, and other processes will set the level to - "Error" thus be silent most of the time. - file_mode (str): The file mode used in opening log file. - Defaults to 'w'. - - Returns: - logging.Logger: The expected logger. - """ - logger = logging.getLogger(name) - if name in logger_initialized: - return logger - # handle hierarchical names - # e.g., logger "a" is initialized, then logger "a.b" will skip the - # initialization since it is a child of "a". - for logger_name in logger_initialized: - if name.startswith(logger_name): - return logger - - # handle duplicate logs to the console - # Starting in 1.8.0, PyTorch DDP attaches a StreamHandler (NOTSET) - # to the root logger. As logger.propagate is True by default, this root - # level handler causes logging messages from rank>0 processes to - # unexpectedly show up on the console, creating much unwanted clutter. - # To fix this issue, we set the root logger's StreamHandler, if any, to log - # at the ERROR level. 
- for handler in logger.root.handlers: - if type(handler) is logging.StreamHandler: - handler.setLevel(logging.ERROR) - - stream_handler = logging.StreamHandler() - handlers = [stream_handler] - - if dist.is_available() and dist.is_initialized(): - rank = dist.get_rank() - else: - rank = 0 - - # only rank 0 will add a FileHandler - if rank == 0 and log_file is not None: - # Here, the default behaviour of the official logger is 'a'. Thus, we - # provide an interface to change the file mode to the default - # behaviour. - file_handler = logging.FileHandler(log_file, file_mode) - handlers.append(file_handler) - - formatter = logging.Formatter( - '%(asctime)s - %(name)s - %(levelname)s - %(message)s') - for handler in handlers: - handler.setFormatter(formatter) - handler.setLevel(log_level) - logger.addHandler(handler) - - if rank == 0: - logger.setLevel(log_level) - else: - logger.setLevel(logging.ERROR) - - logger_initialized[name] = True - - return logger - - -def print_log(msg, logger=None, level=logging.INFO): - """Print a log message. - - Args: - msg (str): The message to be logged. - logger (logging.Logger | str | None): The logger to be used. - Some special loggers are: - - "silent": no message will be printed. - - other str: the logger obtained with `get_root_logger(logger)`. - - None: The `print()` method will be used to print log messages. - level (int): Logging level. Only available when `logger` is a Logger - object or "root". - """ - if logger is None: - print(msg) - elif isinstance(logger, logging.Logger): - logger.log(level, msg) - elif logger == 'silent': - pass - elif isinstance(logger, str): - _logger = get_logger(logger) - _logger.log(level, msg) - else: - raise TypeError( - 'logger should be either a logging.Logger object, str, ' - f'"silent" or None, but got {type(logger)}') diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/misc.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/misc.py deleted file mode 100755 index 2c58d0d7..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/misc.py +++ /dev/null @@ -1,377 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections.abc -import functools -import itertools -import subprocess -import warnings -from collections import abc -from importlib import import_module -from inspect import getfullargspec -from itertools import repeat - - -# From PyTorch internals -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def import_modules_from_strings(imports, allow_failed_imports=False): - """Import modules from the given list of strings. - - Args: - imports (list | str | None): The given module names to be imported. - allow_failed_imports (bool): If True, the failed imports will return - None. Otherwise, an ImportError is raise. Default: False. - - Returns: - list[module] | module | None: The imported modules. - - Examples: - >>> osp, sys = import_modules_from_strings( - ... 
['os.path', 'sys']) - >>> import os.path as osp_ - >>> import sys as sys_ - >>> assert osp == osp_ - >>> assert sys == sys_ - """ - if not imports: - return - single_import = False - if isinstance(imports, str): - single_import = True - imports = [imports] - if not isinstance(imports, list): - raise TypeError( - f'custom_imports must be a list but got type {type(imports)}') - imported = [] - for imp in imports: - if not isinstance(imp, str): - raise TypeError( - f'{imp} is of type {type(imp)} and cannot be imported.') - try: - imported_tmp = import_module(imp) - except ImportError: - if allow_failed_imports: - warnings.warn(f'{imp} failed to import and is ignored.', - UserWarning) - imported_tmp = None - else: - raise ImportError - imported.append(imported_tmp) - if single_import: - imported = imported[0] - return imported - - -def iter_cast(inputs, dst_type, return_type=None): - """Cast elements of an iterable object into some type. - - Args: - inputs (Iterable): The input object. - dst_type (type): Destination type. - return_type (type, optional): If specified, the output object will be - converted to this type, otherwise an iterator. - - Returns: - iterator or specified type: The converted object. - """ - if not isinstance(inputs, abc.Iterable): - raise TypeError('inputs must be an iterable object') - if not isinstance(dst_type, type): - raise TypeError('"dst_type" must be a valid type') - - out_iterable = map(dst_type, inputs) - - if return_type is None: - return out_iterable - else: - return return_type(out_iterable) - - -def list_cast(inputs, dst_type): - """Cast elements of an iterable object into a list of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=list) - - -def tuple_cast(inputs, dst_type): - """Cast elements of an iterable object into a tuple of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=tuple) - - -def is_seq_of(seq, expected_type, seq_type=None): - """Check whether it is a sequence of some type. - - Args: - seq (Sequence): The sequence to be checked. - expected_type (type): Expected type of sequence items. - seq_type (type, optional): Expected sequence type. - - Returns: - bool: Whether the sequence is valid. - """ - if seq_type is None: - exp_seq_type = abc.Sequence - else: - assert isinstance(seq_type, type) - exp_seq_type = seq_type - if not isinstance(seq, exp_seq_type): - return False - for item in seq: - if not isinstance(item, expected_type): - return False - return True - - -def is_list_of(seq, expected_type): - """Check whether it is a list of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=list) - - -def is_tuple_of(seq, expected_type): - """Check whether it is a tuple of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=tuple) - - -def slice_list(in_list, lens): - """Slice a list into several sub lists by a list of given length. - - Args: - in_list (list): The list to be sliced. - lens(int or list): The expected length of each out list. - - Returns: - list: A list of sliced list. 
- """ - if isinstance(lens, int): - assert len(in_list) % lens == 0 - lens = [lens] * int(len(in_list) / lens) - if not isinstance(lens, list): - raise TypeError('"indices" must be an integer or a list of integers') - elif sum(lens) != len(in_list): - raise ValueError('sum of lens and list length does not ' - f'match: {sum(lens)} != {len(in_list)}') - out_list = [] - idx = 0 - for i in range(len(lens)): - out_list.append(in_list[idx:idx + lens[i]]) - idx += lens[i] - return out_list - - -def concat_list(in_list): - """Concatenate a list of list into a single list. - - Args: - in_list (list): The list of list to be merged. - - Returns: - list: The concatenated flat list. - """ - return list(itertools.chain(*in_list)) - - -def check_prerequisites( - prerequisites, - checker, - msg_tmpl='Prerequisites "{}" are required in method "{}" but not ' - 'found, please install them first.'): # yapf: disable - """A decorator factory to check if prerequisites are satisfied. - - Args: - prerequisites (str of list[str]): Prerequisites to be checked. - checker (callable): The checker method that returns True if a - prerequisite is meet, False otherwise. - msg_tmpl (str): The message template with two variables. - - Returns: - decorator: A specific decorator. - """ - - def wrap(func): - - @functools.wraps(func) - def wrapped_func(*args, **kwargs): - requirements = [prerequisites] if isinstance( - prerequisites, str) else prerequisites - missing = [] - for item in requirements: - if not checker(item): - missing.append(item) - if missing: - print(msg_tmpl.format(', '.join(missing), func.__name__)) - raise RuntimeError('Prerequisites not meet.') - else: - return func(*args, **kwargs) - - return wrapped_func - - return wrap - - -def _check_py_package(package): - try: - import_module(package) - except ImportError: - return False - else: - return True - - -def _check_executable(cmd): - if subprocess.call(f'which {cmd}', shell=True) != 0: - return False - else: - return True - - -def requires_package(prerequisites): - """A decorator to check if some python packages are installed. - - Example: - >>> @requires_package('numpy') - >>> func(arg1, args): - >>> return numpy.zeros(1) - array([0.]) - >>> @requires_package(['numpy', 'non_package']) - >>> func(arg1, args): - >>> return numpy.zeros(1) - ImportError - """ - return check_prerequisites(prerequisites, checker=_check_py_package) - - -def requires_executable(prerequisites): - """A decorator to check if some executable files are installed. - - Example: - >>> @requires_executable('ffmpeg') - >>> func(arg1, args): - >>> print(1) - 1 - """ - return check_prerequisites(prerequisites, checker=_check_executable) - - -def deprecated_api_warning(name_dict, cls_name=None): - """A decorator to check if some arguments are deprecate and try to replace - deprecate src_arg_name to dst_arg_name. - - Args: - name_dict(dict): - key (str): Deprecate argument names. - val (str): Expected argument names. - - Returns: - func: New function. 
- """ - - def api_warning_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get name of the function - func_name = old_func.__name__ - if cls_name is not None: - func_name = f'{cls_name}.{func_name}' - if args: - arg_names = args_info.args[:len(args)] - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in arg_names: - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - arg_names[arg_names.index(src_arg_name)] = dst_arg_name - if kwargs: - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in kwargs: - - assert dst_arg_name not in kwargs, ( - f'The expected behavior is to replace ' - f'the deprecated key `{src_arg_name}` to ' - f'new key `{dst_arg_name}`, but got them ' - f'in the arguments at the same time, which ' - f'is confusing. `{src_arg_name} will be ' - f'deprecated in the future, please ' - f'use `{dst_arg_name}` instead.') - - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - kwargs[dst_arg_name] = kwargs.pop(src_arg_name) - - # apply converted arguments to the decorated method - output = old_func(*args, **kwargs) - return output - - return new_func - - return api_warning_wrapper - - -def is_method_overridden(method, base_class, derived_class): - """Check if a method of base class is overridden in derived class. - - Args: - method (str): the method name to check. - base_class (type): the class of the base class. - derived_class (type | Any): the class or instance of the derived class. - """ - assert isinstance(base_class, type), \ - "base_class doesn't accept instance, Please pass class instead." - - if not isinstance(derived_class, type): - derived_class = derived_class.__class__ - - base_method = getattr(base_class, method) - derived_method = getattr(derived_class, method) - return derived_method != base_method - - -def has_method(obj: object, method: str) -> bool: - """Check whether the object has a method. - - Args: - method (str): The method name to check. - obj (object): The object to check. - - Returns: - bool: True if the object has the method else False. - """ - return hasattr(obj, method) and callable(getattr(obj, method)) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/parrots_jit.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/parrots_jit.py deleted file mode 100755 index 61873f6d..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/parrots_jit.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os - -from .parrots_wrapper import TORCH_VERSION - -parrots_jit_option = os.getenv('PARROTS_JIT_OPTION') - -if TORCH_VERSION == 'parrots' and parrots_jit_option == 'ON': - from parrots.jit import pat as jit -else: - - def jit(func=None, - check_input=None, - full_shape=True, - derivate=False, - coderize=False, - optimize=False): - - def wrapper(func): - - def wrapper_inner(*args, **kargs): - return func(*args, **kargs) - - return wrapper_inner - - if func is None: - return wrapper - else: - return func - - -if TORCH_VERSION == 'parrots': - from parrots.utils.tester import skip_no_elena -else: - - def skip_no_elena(func): - - def wrapper(*args, **kargs): - return func(*args, **kargs) - - return wrapper diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/parrots_wrapper.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/parrots_wrapper.py deleted file mode 100755 index 93c97640..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/parrots_wrapper.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial - -import torch - -TORCH_VERSION = torch.__version__ - - -def is_rocm_pytorch() -> bool: - is_rocm = False - if TORCH_VERSION != 'parrots': - try: - from torch.utils.cpp_extension import ROCM_HOME - is_rocm = True if ((torch.version.hip is not None) and - (ROCM_HOME is not None)) else False - except ImportError: - pass - return is_rocm - - -def _get_cuda_home(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import CUDA_HOME - else: - if is_rocm_pytorch(): - from torch.utils.cpp_extension import ROCM_HOME - CUDA_HOME = ROCM_HOME - else: - from torch.utils.cpp_extension import CUDA_HOME - return CUDA_HOME - - -def get_build_config(): - if TORCH_VERSION == 'parrots': - from parrots.config import get_build_info - return get_build_info() - else: - return torch.__config__.show() - - -def _get_conv(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.conv import _ConvNd, _ConvTransposeMixin - else: - from torch.nn.modules.conv import _ConvNd, _ConvTransposeMixin - return _ConvNd, _ConvTransposeMixin - - -def _get_dataloader(): - if TORCH_VERSION == 'parrots': - from torch.utils.data import DataLoader, PoolDataLoader - else: - from torch.utils.data import DataLoader - PoolDataLoader = DataLoader - return DataLoader, PoolDataLoader - - -def _get_extension(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import BuildExtension, Extension - CppExtension = partial(Extension, cuda=False) - CUDAExtension = partial(Extension, cuda=True) - else: - from torch.utils.cpp_extension import (BuildExtension, CppExtension, - CUDAExtension) - return BuildExtension, CppExtension, CUDAExtension - - -def _get_pool(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.pool import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - else: - from torch.nn.modules.pooling import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - return _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd - - -def _get_norm(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.batchnorm import _BatchNorm, _InstanceNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm2d - else: - from torch.nn.modules.instancenorm import _InstanceNorm - from torch.nn.modules.batchnorm import _BatchNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm - return _BatchNorm, _InstanceNorm, SyncBatchNorm_ - - -_ConvNd, _ConvTransposeMixin = _get_conv() -DataLoader, 
PoolDataLoader = _get_dataloader() -BuildExtension, CppExtension, CUDAExtension = _get_extension() -_BatchNorm, _InstanceNorm, SyncBatchNorm_ = _get_norm() -_AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd = _get_pool() - - -class SyncBatchNorm(SyncBatchNorm_): - - def _check_input_dim(self, input): - if TORCH_VERSION == 'parrots': - if input.dim() < 2: - raise ValueError( - f'expected at least 2D input (got {input.dim()}D input)') - else: - super()._check_input_dim(input) diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/path.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/path.py deleted file mode 100755 index 7dab4b30..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/path.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import os.path as osp -from pathlib import Path - -from .misc import is_str - - -def is_filepath(x): - return is_str(x) or isinstance(x, Path) - - -def fopen(filepath, *args, **kwargs): - if is_str(filepath): - return open(filepath, *args, **kwargs) - elif isinstance(filepath, Path): - return filepath.open(*args, **kwargs) - raise ValueError('`filepath` should be a string or a Path') - - -def check_file_exist(filename, msg_tmpl='file "{}" does not exist'): - if not osp.isfile(filename): - raise FileNotFoundError(msg_tmpl.format(filename)) - - -def mkdir_or_exist(dir_name, mode=0o777): - if dir_name == '': - return - dir_name = osp.expanduser(dir_name) - os.makedirs(dir_name, mode=mode, exist_ok=True) - - -def symlink(src, dst, overwrite=True, **kwargs): - if os.path.lexists(dst) and overwrite: - os.remove(dst) - os.symlink(src, dst, **kwargs) - - -def scandir(dir_path, suffix=None, recursive=False, case_sensitive=True): - """Scan a directory to find the interested files. - - Args: - dir_path (str | obj:`Path`): Path of the directory. - suffix (str | tuple(str), optional): File suffix that we are - interested in. Default: None. - recursive (bool, optional): If set to True, recursively scan the - directory. Default: False. - case_sensitive (bool, optional) : If set to False, ignore the case of - suffix. Default: True. - - Returns: - A generator for all the interested files with relative paths. - """ - if isinstance(dir_path, (str, Path)): - dir_path = str(dir_path) - else: - raise TypeError('"dir_path" must be a string or Path object') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('"suffix" must be a string or tuple of strings') - - if suffix is not None and not case_sensitive: - suffix = suffix.lower() if isinstance(suffix, str) else tuple( - item.lower() for item in suffix) - - root = dir_path - - def _scandir(dir_path, suffix, recursive, case_sensitive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - _rel_path = rel_path if case_sensitive else rel_path.lower() - if suffix is None or _rel_path.endswith(suffix): - yield rel_path - elif recursive and os.path.isdir(entry.path): - # scan recursively if entry.path is a directory - yield from _scandir(entry.path, suffix, recursive, - case_sensitive) - - return _scandir(dir_path, suffix, recursive, case_sensitive) - - -def find_vcs_root(path, markers=('.git', )): - """Finds the root directory (including itself) of specified markers. - - Args: - path (str): Path of directory or file. - markers (list[str], optional): List of file or directory names. 
- - Returns: - The directory contained one of the markers or None if not found. - """ - if osp.isfile(path): - path = osp.dirname(path) - - prev, cur = None, osp.abspath(osp.expanduser(path)) - while cur != prev: - if any(osp.exists(osp.join(cur, marker)) for marker in markers): - return cur - prev, cur = cur, osp.split(cur)[0] - return None diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/progressbar.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/progressbar.py deleted file mode 100755 index 0062f670..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/progressbar.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import sys -from collections.abc import Iterable -from multiprocessing import Pool -from shutil import get_terminal_size - -from .timer import Timer - - -class ProgressBar: - """A progress bar which can print the progress.""" - - def __init__(self, task_num=0, bar_width=50, start=True, file=sys.stdout): - self.task_num = task_num - self.bar_width = bar_width - self.completed = 0 - self.file = file - if start: - self.start() - - @property - def terminal_width(self): - width, _ = get_terminal_size() - return width - - def start(self): - if self.task_num > 0: - self.file.write(f'[{" " * self.bar_width}] 0/{self.task_num}, ' - 'elapsed: 0s, ETA:') - else: - self.file.write('completed: 0, elapsed: 0s') - self.file.flush() - self.timer = Timer() - - def update(self, num_tasks=1): - assert num_tasks > 0 - self.completed += num_tasks - elapsed = self.timer.since_start() - if elapsed > 0: - fps = self.completed / elapsed - else: - fps = float('inf') - if self.task_num > 0: - percentage = self.completed / float(self.task_num) - eta = int(elapsed * (1 - percentage) / percentage + 0.5) - msg = f'\r[{{}}] {self.completed}/{self.task_num}, ' \ - f'{fps:.1f} task/s, elapsed: {int(elapsed + 0.5)}s, ' \ - f'ETA: {eta:5}s' - - bar_width = min(self.bar_width, - int(self.terminal_width - len(msg)) + 2, - int(self.terminal_width * 0.6)) - bar_width = max(2, bar_width) - mark_width = int(bar_width * percentage) - bar_chars = '>' * mark_width + ' ' * (bar_width - mark_width) - self.file.write(msg.format(bar_chars)) - else: - self.file.write( - f'completed: {self.completed}, elapsed: {int(elapsed + 0.5)}s,' - f' {fps:.1f} tasks/s') - self.file.flush() - - -def track_progress(func, tasks, bar_width=50, file=sys.stdout, **kwargs): - """Track the progress of tasks execution with a progress bar. - - Tasks are done with a simple for-loop. - - Args: - func (callable): The function to be applied to each task. - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - bar_width (int): Width of progress bar. - - Returns: - list: The task results. 
- """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - prog_bar = ProgressBar(task_num, bar_width, file=file) - results = [] - for task in tasks: - results.append(func(task, **kwargs)) - prog_bar.update() - prog_bar.file.write('\n') - return results - - -def init_pool(process_num, initializer=None, initargs=None): - if initializer is None: - return Pool(process_num) - elif initargs is None: - return Pool(process_num, initializer) - else: - if not isinstance(initargs, tuple): - raise TypeError('"initargs" must be a tuple') - return Pool(process_num, initializer, initargs) - - -def track_parallel_progress(func, - tasks, - nproc, - initializer=None, - initargs=None, - bar_width=50, - chunksize=1, - skip_first=False, - keep_order=True, - file=sys.stdout): - """Track the progress of parallel task execution with a progress bar. - - The built-in :mod:`multiprocessing` module is used for process pools and - tasks are done with :func:`Pool.map` or :func:`Pool.imap_unordered`. - - Args: - func (callable): The function to be applied to each task. - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - nproc (int): Process (worker) number. - initializer (None or callable): Refer to :class:`multiprocessing.Pool` - for details. - initargs (None or tuple): Refer to :class:`multiprocessing.Pool` for - details. - chunksize (int): Refer to :class:`multiprocessing.Pool` for details. - bar_width (int): Width of progress bar. - skip_first (bool): Whether to skip the first sample for each worker - when estimating fps, since the initialization step may takes - longer. - keep_order (bool): If True, :func:`Pool.imap` is used, otherwise - :func:`Pool.imap_unordered` is used. - - Returns: - list: The task results. - """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - pool = init_pool(nproc, initializer, initargs) - start = not skip_first - task_num -= nproc * chunksize * int(skip_first) - prog_bar = ProgressBar(task_num, bar_width, start, file=file) - results = [] - if keep_order: - gen = pool.imap(func, tasks, chunksize) - else: - gen = pool.imap_unordered(func, tasks, chunksize) - for result in gen: - results.append(result) - if skip_first: - if len(results) < nproc * chunksize: - continue - elif len(results) == nproc * chunksize: - prog_bar.start() - continue - prog_bar.update() - prog_bar.file.write('\n') - pool.close() - pool.join() - return results - - -def track_iter_progress(tasks, bar_width=50, file=sys.stdout): - """Track the progress of tasks iteration or enumeration with a progress - bar. - - Tasks are yielded with a simple for-loop. - - Args: - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - bar_width (int): Width of progress bar. - - Yields: - list: The task results. 
- """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - prog_bar = ProgressBar(task_num, bar_width, file=file) - for task in tasks: - yield task - prog_bar.update() - prog_bar.file.write('\n') diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/registry.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/registry.py deleted file mode 100755 index fa9df39b..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/registry.py +++ /dev/null @@ -1,315 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import warnings -from functools import partial - -from .misc import is_seq_of - - -def build_from_cfg(cfg, registry, default_args=None): - """Build a module from config dict. - - Args: - cfg (dict): Config dict. It should at least contain the key "type". - registry (:obj:`Registry`): The registry to search the type from. - default_args (dict, optional): Default initialization arguments. - - Returns: - object: The constructed object. - """ - if not isinstance(cfg, dict): - raise TypeError(f'cfg must be a dict, but got {type(cfg)}') - if 'type' not in cfg: - if default_args is None or 'type' not in default_args: - raise KeyError( - '`cfg` or `default_args` must contain the key "type", ' - f'but got {cfg}\n{default_args}') - if not isinstance(registry, Registry): - raise TypeError('registry must be an mmcv.Registry object, ' - f'but got {type(registry)}') - if not (isinstance(default_args, dict) or default_args is None): - raise TypeError('default_args must be a dict or None, ' - f'but got {type(default_args)}') - - args = cfg.copy() - - if default_args is not None: - for name, value in default_args.items(): - args.setdefault(name, value) - - obj_type = args.pop('type') - if isinstance(obj_type, str): - obj_cls = registry.get(obj_type) - if obj_cls is None: - raise KeyError( - f'{obj_type} is not in the {registry.name} registry') - elif inspect.isclass(obj_type): - obj_cls = obj_type - else: - raise TypeError( - f'type must be a str or valid type, but got {type(obj_type)}') - try: - return obj_cls(**args) - except Exception as e: - # Normal TypeError does not print class name. - raise type(e)(f'{obj_cls.__name__}: {e}') - - -class Registry: - """A registry to map strings to classes. - - Registered object could be built from registry. - Example: - >>> MODELS = Registry('models') - >>> @MODELS.register_module() - >>> class ResNet: - >>> pass - >>> resnet = MODELS.build(dict(type='ResNet')) - - Please refer to - https://mmcv.readthedocs.io/en/latest/understand_mmcv/registry.html for - advanced usage. - - Args: - name (str): Registry name. - build_func(func, optional): Build function to construct instance from - Registry, func:`build_from_cfg` is used if neither ``parent`` or - ``build_func`` is specified. If ``parent`` is specified and - ``build_func`` is not given, ``build_func`` will be inherited - from ``parent``. Default: None. - parent (Registry, optional): Parent registry. The class registered in - children registry could be built from parent. Default: None. - scope (str, optional): The scope of registry. It is the key to search - for children registry. If not specified, scope will be the name of - the package where class is defined, e.g. mmdet, mmcls, mmseg. - Default: None. 
- """ - - def __init__(self, name, build_func=None, parent=None, scope=None): - self._name = name - self._module_dict = dict() - self._children = dict() - self._scope = self.infer_scope() if scope is None else scope - - # self.build_func will be set with the following priority: - # 1. build_func - # 2. parent.build_func - # 3. build_from_cfg - if build_func is None: - if parent is not None: - self.build_func = parent.build_func - else: - self.build_func = build_from_cfg - else: - self.build_func = build_func - if parent is not None: - assert isinstance(parent, Registry) - parent._add_children(self) - self.parent = parent - else: - self.parent = None - - def __len__(self): - return len(self._module_dict) - - def __contains__(self, key): - return self.get(key) is not None - - def __repr__(self): - format_str = self.__class__.__name__ + \ - f'(name={self._name}, ' \ - f'items={self._module_dict})' - return format_str - - @staticmethod - def infer_scope(): - """Infer the scope of registry. - - The name of the package where registry is defined will be returned. - - Example: - # in mmdet/models/backbone/resnet.py - >>> MODELS = Registry('models') - >>> @MODELS.register_module() - >>> class ResNet: - >>> pass - The scope of ``ResNet`` will be ``mmdet``. - - - Returns: - scope (str): The inferred scope name. - """ - # inspect.stack() trace where this function is called, the index-2 - # indicates the frame where `infer_scope()` is called - filename = inspect.getmodule(inspect.stack()[2][0]).__name__ - split_filename = filename.split('.') - return split_filename[0] - - @staticmethod - def split_scope_key(key): - """Split scope and key. - - The first scope will be split from key. - - Examples: - >>> Registry.split_scope_key('mmdet.ResNet') - 'mmdet', 'ResNet' - >>> Registry.split_scope_key('ResNet') - None, 'ResNet' - - Return: - scope (str, None): The first scope. - key (str): The remaining key. - """ - split_index = key.find('.') - if split_index != -1: - return key[:split_index], key[split_index + 1:] - else: - return None, key - - @property - def name(self): - return self._name - - @property - def scope(self): - return self._scope - - @property - def module_dict(self): - return self._module_dict - - @property - def children(self): - return self._children - - def get(self, key): - """Get the registry record. - - Args: - key (str): The class name in string format. - - Returns: - class: The corresponding class. - """ - scope, real_key = self.split_scope_key(key) - if scope is None or scope == self._scope: - # get from self - if real_key in self._module_dict: - return self._module_dict[real_key] - else: - # get from self._children - if scope in self._children: - return self._children[scope].get(real_key) - else: - # goto root - parent = self.parent - while parent.parent is not None: - parent = parent.parent - return parent.get(key) - - def build(self, *args, **kwargs): - return self.build_func(*args, **kwargs, registry=self) - - def _add_children(self, registry): - """Add children for a registry. - - The ``registry`` will be added as children based on its scope. - The parent registry could build objects from children registry. 
- - Example: - >>> models = Registry('models') - >>> mmdet_models = Registry('models', parent=models) - >>> @mmdet_models.register_module() - >>> class ResNet: - >>> pass - >>> resnet = models.build(dict(type='mmdet.ResNet')) - """ - - assert isinstance(registry, Registry) - assert registry.scope is not None - assert registry.scope not in self.children, \ - f'scope {registry.scope} exists in {self.name} registry' - self.children[registry.scope] = registry - - def _register_module(self, module_class, module_name=None, force=False): - if not inspect.isclass(module_class): - raise TypeError('module must be a class, ' - f'but got {type(module_class)}') - - if module_name is None: - module_name = module_class.__name__ - if isinstance(module_name, str): - module_name = [module_name] - for name in module_name: - if not force and name in self._module_dict: - raise KeyError(f'{name} is already registered ' - f'in {self.name}') - self._module_dict[name] = module_class - - def deprecated_register_module(self, cls=None, force=False): - warnings.warn( - 'The old API of register_module(module, force=False) ' - 'is deprecated and will be removed, please use the new API ' - 'register_module(name=None, force=False, module=None) instead.') - if cls is None: - return partial(self.deprecated_register_module, force=force) - self._register_module(cls, force=force) - return cls - - def register_module(self, name=None, force=False, module=None): - """Register a module. - - A record will be added to `self._module_dict`, whose key is the class - name or the specified name, and value is the class itself. - It can be used as a decorator or a normal function. - - Example: - >>> backbones = Registry('backbone') - >>> @backbones.register_module() - >>> class ResNet: - >>> pass - - >>> backbones = Registry('backbone') - >>> @backbones.register_module(name='mnet') - >>> class MobileNet: - >>> pass - - >>> backbones = Registry('backbone') - >>> class ResNet: - >>> pass - >>> backbones.register_module(ResNet) - - Args: - name (str | None): The module name to be registered. If not - specified, the class name will be used. - force (bool, optional): Whether to override an existing class with - the same name. Default: False. - module (type): Module class to be registered. - """ - if not isinstance(force, bool): - raise TypeError(f'force must be a boolean, but got {type(force)}') - # NOTE: This is a walkaround to be compatible with the old api, - # while it may introduce unexpected bugs. - if isinstance(name, type): - return self.deprecated_register_module(name, force=force) - - # raise the error ahead of time - if not (name is None or isinstance(name, str) or is_seq_of(name, str)): - raise TypeError( - 'name must be either of None, an instance of str or a sequence' - f' of str, but got {type(name)}') - - # use it as a normal method: x.register_module(module=SomeClass) - if module is not None: - self._register_module( - module_class=module, module_name=name, force=force) - return module - - # use it as a decorator: @x.register_module() - def _register(cls): - self._register_module( - module_class=cls, module_name=name, force=force) - return cls - - return _register diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/testing.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/testing.py deleted file mode 100755 index a27f936d..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/testing.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Open-MMLab. 
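The registry pattern implemented by the removed `registry.py` is the same one provided by the installed mmcv package; the sketch below mirrors the usage shown in its own docstrings (`register_module` as a decorator, `build` dispatching to `build_from_cfg`):

```python
# Minimal sketch mirroring the docstring examples of the removed registry.py,
# using the Registry class from an installed mmcv instead of the vendored copy.
from mmcv.utils import Registry

MODELS = Registry('models')

@MODELS.register_module()          # registers the class under its own name, 'ResNet'
class ResNet:
    def __init__(self, depth=50):
        self.depth = depth

# build_from_cfg pops 'type', resolves it in the registry and instantiates the class
resnet = MODELS.build(dict(type='ResNet', depth=101))
assert isinstance(resnet, ResNet) and resnet.depth == 101
```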
-import sys -from collections.abc import Iterable -from runpy import run_path -from shlex import split -from typing import Any, Dict, List -from unittest.mock import patch - - -def check_python_script(cmd): - """Run the python cmd script with `__main__`. The difference between - `os.system` is that, this function exectues code in the current process, so - that it can be tracked by coverage tools. Currently it supports two forms: - - - ./tests/data/scripts/hello.py zz - - python tests/data/scripts/hello.py zz - """ - args = split(cmd) - if args[0] == 'python': - args = args[1:] - with patch.object(sys, 'argv', args): - run_path(args[0], run_name='__main__') - - -def _any(judge_result): - """Since built-in ``any`` works only when the element of iterable is not - iterable, implement the function.""" - if not isinstance(judge_result, Iterable): - return judge_result - - try: - for element in judge_result: - if _any(element): - return True - except TypeError: - # Maybe encounter the case: torch.tensor(True) | torch.tensor(False) - if judge_result: - return True - return False - - -def assert_dict_contains_subset(dict_obj: Dict[Any, Any], - expected_subset: Dict[Any, Any]) -> bool: - """Check if the dict_obj contains the expected_subset. - - Args: - dict_obj (Dict[Any, Any]): Dict object to be checked. - expected_subset (Dict[Any, Any]): Subset expected to be contained in - dict_obj. - - Returns: - bool: Whether the dict_obj contains the expected_subset. - """ - - for key, value in expected_subset.items(): - if key not in dict_obj.keys() or _any(dict_obj[key] != value): - return False - return True - - -def assert_attrs_equal(obj: Any, expected_attrs: Dict[str, Any]) -> bool: - """Check if attribute of class object is correct. - - Args: - obj (object): Class object to be checked. - expected_attrs (Dict[str, Any]): Dict of the expected attrs. - - Returns: - bool: Whether the attribute of class object is correct. - """ - for attr, value in expected_attrs.items(): - if not hasattr(obj, attr) or _any(getattr(obj, attr) != value): - return False - return True - - -def assert_dict_has_keys(obj: Dict[str, Any], - expected_keys: List[str]) -> bool: - """Check if the obj has all the expected_keys. - - Args: - obj (Dict[str, Any]): Object to be checked. - expected_keys (List[str]): Keys expected to contained in the keys of - the obj. - - Returns: - bool: Whether the obj has the expected keys. - """ - return set(expected_keys).issubset(set(obj.keys())) - - -def assert_keys_equal(result_keys: List[str], target_keys: List[str]) -> bool: - """Check if target_keys is equal to result_keys. - - Args: - result_keys (List[str]): Result keys to be checked. - target_keys (List[str]): Target keys to be checked. - - Returns: - bool: Whether target_keys is equal to result_keys. - """ - return set(result_keys) == set(target_keys) - - -def assert_is_norm_layer(module) -> bool: - """Check if the module is a norm layer. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: Whether the module is a norm layer. - """ - from .parrots_wrapper import _BatchNorm, _InstanceNorm - from torch.nn import GroupNorm, LayerNorm - norm_layer_candidates = (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm) - return isinstance(module, norm_layer_candidates) - - -def assert_params_all_zeros(module) -> bool: - """Check if the parameters of the module is all zeros. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: Whether the parameters of the module is all zeros. 
- """ - weight_data = module.weight.data - is_weight_zero = weight_data.allclose( - weight_data.new_zeros(weight_data.size())) - - if hasattr(module, 'bias') and module.bias is not None: - bias_data = module.bias.data - is_bias_zero = bias_data.allclose( - bias_data.new_zeros(bias_data.size())) - else: - is_bias_zero = True - - return is_weight_zero and is_bias_zero diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/timer.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/timer.py deleted file mode 100755 index 66d4a78a..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/timer.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from time import time - - -class TimerError(Exception): - - def __init__(self, message): - self.message = message - super(TimerError, self).__init__(message) - - -class Timer: - """A flexible Timer class. - - :Example: - - >>> import time - >>> import mmcv - >>> with mmcv.Timer(): - >>> # simulate a code block that will run for 1s - >>> time.sleep(1) - 1.000 - >>> with mmcv.Timer(print_tmpl='it takes {:.1f} seconds'): - >>> # simulate a code block that will run for 1s - >>> time.sleep(1) - it takes 1.0 seconds - >>> timer = mmcv.Timer() - >>> time.sleep(0.5) - >>> print(timer.since_start()) - 0.500 - >>> time.sleep(0.5) - >>> print(timer.since_last_check()) - 0.500 - >>> print(timer.since_start()) - 1.000 - """ - - def __init__(self, start=True, print_tmpl=None): - self._is_running = False - self.print_tmpl = print_tmpl if print_tmpl else '{:.3f}' - if start: - self.start() - - @property - def is_running(self): - """bool: indicate whether the timer is running""" - return self._is_running - - def __enter__(self): - self.start() - return self - - def __exit__(self, type, value, traceback): - print(self.print_tmpl.format(self.since_last_check())) - self._is_running = False - - def start(self): - """Start the timer.""" - if not self._is_running: - self._t_start = time() - self._is_running = True - self._t_last = time() - - def since_start(self): - """Total time since the timer is started. - - Returns (float): Time in seconds. - """ - if not self._is_running: - raise TimerError('timer is not running') - self._t_last = time() - return self._t_last - self._t_start - - def since_last_check(self): - """Time since the last checking. - - Either :func:`since_start` or :func:`since_last_check` is a checking - operation. - - Returns (float): Time in seconds. - """ - if not self._is_running: - raise TimerError('timer is not running') - dur = time() - self._t_last - self._t_last = time() - return dur - - -_g_timers = {} # global timers - - -def check_time(timer_id): - """Add check points in a single line. - - This method is suitable for running a task on a list of items. A timer will - be registered when the method is called for the first time. - - :Example: - - >>> import time - >>> import mmcv - >>> for i in range(1, 6): - >>> # simulate a code block - >>> time.sleep(i) - >>> mmcv.check_time('task1') - 2.000 - 3.000 - 4.000 - 5.000 - - Args: - timer_id (str): Timer identifier. 
- """ - if timer_id not in _g_timers: - _g_timers[timer_id] = Timer() - return 0 - else: - return _g_timers[timer_id].since_last_check() diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/trace.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/trace.py deleted file mode 100755 index 8e49bfd3..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/trace.py +++ /dev/null @@ -1,23 +0,0 @@ -import warnings - -import torch - -from mmcv.utils import digit_version - - -def is_jit_tracing() -> bool: - if (torch.__version__ != 'parrots' - and digit_version(torch.__version__) >= digit_version('1.6.0')): - on_trace = torch.jit.is_tracing() - # In PyTorch 1.6, torch.jit.is_tracing has a bug. - # Refers to https://github.com/pytorch/pytorch/issues/42448 - if isinstance(on_trace, bool): - return on_trace - else: - return torch._C._is_tracing() - else: - warnings.warn( - 'torch.jit.is_tracing is only supported after v1.6.0. ' - 'Therefore is_tracing returns False automatically. Please ' - 'set on_trace manually if you are using trace.', UserWarning) - return False diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/version_utils.py b/cv/super_resolution/basicvsr/pytorch/mmcv/utils/version_utils.py deleted file mode 100755 index 963c45a2..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/utils/version_utils.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import subprocess -import warnings - -from packaging.version import parse - - -def digit_version(version_str: str, length: int = 4): - """Convert a version string into a tuple of integers. - - This method is usually used for comparing two versions. For pre-release - versions: alpha < beta < rc. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. - - Returns: - tuple[int]: The version info in digits (integers). - """ - assert 'parrots' not in version_str - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - mapping = {'a': -3, 'b': -2, 'rc': -1} - val = -4 - # version.pre can be None - if version.pre: - if version.pre[0] not in mapping: - warnings.warn(f'unknown prerelease version {version.pre[0]}, ' - 'version checking may go wrong') - else: - val = mapping[version.pre[0]] - release.extend([val, version.pre[-1]]) - else: - release.extend([val, 0]) - - elif version.is_postrelease: - release.extend([1, version.post]) - else: - release.extend([0, 0]) - return tuple(release) - - -def _minimal_ext_cmd(cmd): - # construct minimal environment - env = {} - for k in ['SYSTEMROOT', 'PATH', 'HOME']: - v = os.environ.get(k) - if v is not None: - env[k] = v - # LANGUAGE is used on win32 - env['LANGUAGE'] = 'C' - env['LANG'] = 'C' - env['LC_ALL'] = 'C' - out = subprocess.Popen( - cmd, stdout=subprocess.PIPE, env=env).communicate()[0] - return out - - -def get_git_hash(fallback='unknown', digits=None): - """Get the git hash of the current repo. - - Args: - fallback (str, optional): The fallback string when git hash is - unavailable. Defaults to 'unknown'. - digits (int, optional): kept digits of the hash. Defaults to None, - meaning all digits are kept. - - Returns: - str: Git commit hash. 
- """ - - if digits is not None and not isinstance(digits, int): - raise TypeError('digits must be None or an integer') - - try: - out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD']) - sha = out.strip().decode('ascii') - if digits is not None: - sha = sha[:digits] - except OSError: - sha = fallback - - return sha diff --git a/cv/super_resolution/basicvsr/pytorch/mmcv/version.py b/cv/super_resolution/basicvsr/pytorch/mmcv/version.py deleted file mode 100755 index 1cce4e50..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmcv/version.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -__version__ = '1.3.17' - - -def parse_version_info(version_str: str, length: int = 4) -> tuple: - """Parse a version string into a tuple. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. - - Returns: - tuple[int | str]: The version info, e.g., "1.3.0" is parsed into - (1, 3, 0, 0, 0, 0), and "2.0.0rc1" is parsed into - (2, 0, 0, 0, 'rc', 1) (when length is set to 4). - """ - from packaging.version import parse - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - release.extend(list(version.pre)) - elif version.is_postrelease: - release.extend(list(version.post)) - else: - release.extend([0, 0]) - return tuple(release) - - -version_info = tuple(int(x) for x in __version__.split('.')[:3]) - -__all__ = ['__version__', 'version_info', 'parse_version_info'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/__init__.py deleted file mode 100755 index 05d4c20b..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv - -from .version import __version__, version_info - -try: - from mmcv.utils import digit_version -except ImportError: - - def digit_version(version_str): - digit_ver = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_ver.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_ver.append(int(patch_version[0]) - 1) - digit_ver.append(int(patch_version[1])) - return digit_ver - - -MMCV_MIN = '1.3.13' -MMCV_MAX = '1.6' - -mmcv_min_version = digit_version(MMCV_MIN) -mmcv_max_version = digit_version(MMCV_MAX) -mmcv_version = digit_version(mmcv.__version__) - - -assert (mmcv_min_version <= mmcv_version <= mmcv_max_version), \ - f'mmcv=={mmcv.__version__} is used but incompatible. ' \ - f'Please install mmcv-full>={mmcv_min_version}, <={mmcv_max_version}.' - -__all__ = ['__version__', 'version_info'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/apis/__init__.py deleted file mode 100755 index 1ae70035..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .generation_inference import generation_inference -from .inpainting_inference import inpainting_inference -from .matting_inference import init_model, matting_inference -from .restoration_face_inference import restoration_face_inference -from .restoration_inference import restoration_inference -from .restoration_video_inference import restoration_video_inference -from .test import multi_gpu_test, single_gpu_test -from .train import init_random_seed, set_random_seed, train_model -from .video_interpolation_inference import video_interpolation_inference - -__all__ = [ - 'train_model', 'set_random_seed', 'init_model', 'matting_inference', - 'inpainting_inference', 'restoration_inference', 'generation_inference', - 'multi_gpu_test', 'single_gpu_test', 'restoration_video_inference', - 'restoration_face_inference', 'video_interpolation_inference', - 'init_random_seed' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/generation_inference.py b/cv/super_resolution/basicvsr/pytorch/mmedit/apis/generation_inference.py deleted file mode 100755 index c8e829bb..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/generation_inference.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.parallel import collate, scatter - -from mmedit.core import tensor2img -from mmedit.datasets.pipelines import Compose - - -def generation_inference(model, img, img_unpaired=None): - """Inference image with the model. - - Args: - model (nn.Module): The loaded model. - img (str): File path of input image. - img_unpaired (str, optional): File path of the unpaired image. - If not None, perform unpaired image generation. Default: None. - - Returns: - np.ndarray: The predicted generation result. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = Compose(cfg.test_pipeline) - # prepare data - if img_unpaired is None: - data = dict(pair_path=img) - else: - data = dict(img_a_path=img, img_b_path=img_unpaired) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if 'cuda' in str(device): - data = scatter(data, [device])[0] - # forward the model - with torch.no_grad(): - results = model(test_mode=True, **data) - # process generation shown mode - if img_unpaired is None: - if model.show_input: - output = np.concatenate([ - tensor2img(results['real_a'], min_max=(-1, 1)), - tensor2img(results['fake_b'], min_max=(-1, 1)), - tensor2img(results['real_b'], min_max=(-1, 1)) - ], - axis=1) - else: - output = tensor2img(results['fake_b'], min_max=(-1, 1)) - else: - if model.show_input: - output = np.concatenate([ - tensor2img(results['real_a'], min_max=(-1, 1)), - tensor2img(results['fake_b'], min_max=(-1, 1)), - tensor2img(results['real_b'], min_max=(-1, 1)), - tensor2img(results['fake_a'], min_max=(-1, 1)) - ], - axis=1) - else: - if model.test_direction == 'a2b': - output = tensor2img(results['fake_b'], min_max=(-1, 1)) - else: - output = tensor2img(results['fake_a'], min_max=(-1, 1)) - return output diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/inpainting_inference.py b/cv/super_resolution/basicvsr/pytorch/mmedit/apis/inpainting_inference.py deleted file mode 100755 index f80650b7..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/inpainting_inference.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from mmcv.parallel import collate, scatter - -from mmedit.datasets.pipelines import Compose - - -def inpainting_inference(model, masked_img, mask): - """Inference image with the model. - - Args: - model (nn.Module): The loaded model. - masked_img (str): File path of image with mask. - mask (str): Mask file path. - - Returns: - Tensor: The predicted inpainting result. - """ - device = next(model.parameters()).device # model device - - infer_pipeline = [ - dict(type='LoadImageFromFile', key='masked_img'), - dict(type='LoadMask', mask_mode='file', mask_config=dict()), - dict(type='Pad', keys=['masked_img', 'mask'], mode='reflect'), - dict( - type='Normalize', - keys=['masked_img'], - mean=[127.5] * 3, - std=[127.5] * 3, - to_rgb=False), - dict(type='GetMaskedImage', img_name='masked_img'), - dict( - type='Collect', - keys=['masked_img', 'mask'], - meta_keys=['masked_img_path']), - dict(type='ImageToTensor', keys=['masked_img', 'mask']) - ] - - # build the data pipeline - test_pipeline = Compose(infer_pipeline) - # prepare data - data = dict(masked_img_path=masked_img, mask_path=mask) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if 'cuda' in str(device): - data = scatter(data, [device])[0] - else: - data.pop('meta') - # forward the model - with torch.no_grad(): - result = model(test_mode=True, **data) - - return result['fake_img'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/matting_inference.py b/cv/super_resolution/basicvsr/pytorch/mmedit/apis/matting_inference.py deleted file mode 100755 index 5446afe7..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/matting_inference.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import torch -from mmcv.parallel import collate, scatter -from mmcv.runner import load_checkpoint - -from mmedit.datasets.pipelines import Compose -from mmedit.models import build_model - - -def init_model(config, checkpoint=None, device='cuda:0'): - """Initialize a model from config file. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str): Which device the model will deploy. Default: 'cuda:0'. - - Returns: - nn.Module: The constructed model. - """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - f'but got {type(config)}') - config.model.pretrained = None - config.test_cfg.metrics = None - model = build_model(config.model, test_cfg=config.test_cfg) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint) - - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -def matting_inference(model, img, trimap): - """Inference image(s) with the model. - - Args: - model (nn.Module): The loaded model. - img (str): Image file path. - trimap (str): Trimap file path. - - Returns: - np.ndarray: The predicted alpha matte. 
- """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # remove alpha from test_pipeline - keys_to_remove = ['alpha', 'ori_alpha'] - for key in keys_to_remove: - for pipeline in list(cfg.test_pipeline): - if 'key' in pipeline and key == pipeline['key']: - cfg.test_pipeline.remove(pipeline) - if 'keys' in pipeline and key in pipeline['keys']: - pipeline['keys'].remove(key) - if len(pipeline['keys']) == 0: - cfg.test_pipeline.remove(pipeline) - if 'meta_keys' in pipeline and key in pipeline['meta_keys']: - pipeline['meta_keys'].remove(key) - # build the data pipeline - test_pipeline = Compose(cfg.test_pipeline) - # prepare data - data = dict(merged_path=img, trimap_path=trimap) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if 'cuda' in str(device): - data = scatter(data, [device])[0] - # forward the model - with torch.no_grad(): - result = model(test_mode=True, **data) - - return result['pred_alpha'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_face_inference.py b/cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_face_inference.py deleted file mode 100755 index dbe12e11..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_face_inference.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -from mmcv.parallel import collate, scatter - -from mmedit.datasets.pipelines import Compose - -try: - from facexlib.utils.face_restoration_helper import FaceRestoreHelper - has_facexlib = True -except ImportError: - has_facexlib = False - - -def restoration_face_inference(model, img, upscale_factor=1, face_size=1024): - """Inference image with the model. - - Args: - model (nn.Module): The loaded model. - img (str): File path of input image. - upscale_factor (int, optional): The number of times the input image - is upsampled. Default: 1. - face_size (int, optional): The size of the cropped and aligned faces. - Default: 1024. - - Returns: - Tensor: The predicted restoration result. - """ - device = next(model.parameters()).device # model device - - # build the data pipeline - if model.cfg.get('demo_pipeline', None): - test_pipeline = model.cfg.demo_pipeline - elif model.cfg.get('test_pipeline', None): - test_pipeline = model.cfg.test_pipeline - else: - test_pipeline = model.cfg.val_pipeline - - # remove gt from test_pipeline - keys_to_remove = ['gt', 'gt_path'] - for key in keys_to_remove: - for pipeline in list(test_pipeline): - if 'key' in pipeline and key == pipeline['key']: - test_pipeline.remove(pipeline) - if 'keys' in pipeline and key in pipeline['keys']: - pipeline['keys'].remove(key) - if len(pipeline['keys']) == 0: - test_pipeline.remove(pipeline) - if 'meta_keys' in pipeline and key in pipeline['meta_keys']: - pipeline['meta_keys'].remove(key) - # build the data pipeline - test_pipeline = Compose(test_pipeline) - - # face helper for detecting and aligning faces - assert has_facexlib, 'Please install FaceXLib to use the demo.' 
- face_helper = FaceRestoreHelper( - upscale_factor, - face_size=face_size, - crop_ratio=(1, 1), - det_model='retinaface_resnet50', - template_3points=True, - save_ext='png', - device=device) - - face_helper.read_image(img) - # get face landmarks for each face - face_helper.get_face_landmarks_5( - only_center_face=False, eye_dist_threshold=None) - # align and warp each face - face_helper.align_warp_face() - - for i, img in enumerate(face_helper.cropped_faces): - # prepare data - data = dict(lq=img.astype(np.float32)) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if 'cuda' in str(device): - data = scatter(data, [device])[0] - - with torch.no_grad(): - output = model(test_mode=True, **data)['output'].clip_(0, 1) - - output = output.squeeze(0).permute(1, 2, 0)[:, :, [2, 1, 0]] - output = output.cpu().numpy() * 255 # (0, 255) - face_helper.add_restored_face(output) - - face_helper.get_inverse_affine(None) - restored_img = face_helper.paste_faces_to_input_image(upsample_img=None) - - return restored_img diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_inference.py b/cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_inference.py deleted file mode 100755 index 98c08abc..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_inference.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.parallel import collate, scatter - -from mmedit.datasets.pipelines import Compose - - -def restoration_inference(model, img, ref=None): - """Inference image with the model. - - Args: - model (nn.Module): The loaded model. - img (str): File path of input image. - ref (str | None): File path of reference image. Default: None. - - Returns: - Tensor: The predicted restoration result. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # remove gt from test_pipeline - keys_to_remove = ['gt', 'gt_path'] - for key in keys_to_remove: - for pipeline in list(cfg.test_pipeline): - if 'key' in pipeline and key == pipeline['key']: - cfg.test_pipeline.remove(pipeline) - if 'keys' in pipeline and key in pipeline['keys']: - pipeline['keys'].remove(key) - if len(pipeline['keys']) == 0: - cfg.test_pipeline.remove(pipeline) - if 'meta_keys' in pipeline and key in pipeline['meta_keys']: - pipeline['meta_keys'].remove(key) - # build the data pipeline - test_pipeline = Compose(cfg.test_pipeline) - # prepare data - if ref: # Ref-SR - data = dict(lq_path=img, ref_path=ref) - else: # SISR - data = dict(lq_path=img) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if 'cuda' in str(device): - data = scatter(data, [device])[0] - # forward the model - with torch.no_grad(): - result = model(test_mode=True, **data) - - return result['output'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_video_inference.py b/cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_video_inference.py deleted file mode 100755 index ca6369df..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/restoration_video_inference.py +++ /dev/null @@ -1,129 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import glob -import os.path as osp -import re -from functools import reduce - -import mmcv -import numpy as np -import torch - -from mmedit.datasets.pipelines import Compose - -VIDEO_EXTENSIONS = ('.mp4', '.mov') - - -def pad_sequence(data, window_size): - padding = window_size // 2 - - data = torch.cat([ - data[:, 1 + padding:1 + 2 * padding].flip(1), data, - data[:, -1 - 2 * padding:-1 - padding].flip(1) - ], - dim=1) - - return data - - -def restoration_video_inference(model, - img_dir, - window_size, - start_idx, - filename_tmpl, - max_seq_len=None): - """Inference image with the model. - - Args: - model (nn.Module): The loaded model. - img_dir (str): Directory of the input video. - window_size (int): The window size used in sliding-window framework. - This value should be set according to the settings of the network. - A value smaller than 0 means using recurrent framework. - start_idx (int): The index corresponds to the first frame in the - sequence. - filename_tmpl (str): Template for file name. - max_seq_len (int | None): The maximum sequence length that the model - processes. If the sequence length is larger than this number, - the sequence is split into multiple segments. If it is None, - the entire sequence is processed at once. - - Returns: - Tensor: The predicted restoration result. - """ - - device = next(model.parameters()).device # model device - - # build the data pipeline - if model.cfg.get('demo_pipeline', None): - test_pipeline = model.cfg.demo_pipeline - elif model.cfg.get('test_pipeline', None): - test_pipeline = model.cfg.test_pipeline - else: - test_pipeline = model.cfg.val_pipeline - - # check if the input is a video - file_extension = osp.splitext(img_dir)[1] - if file_extension in VIDEO_EXTENSIONS: - video_reader = mmcv.VideoReader(img_dir) - # load the images - data = dict(lq=[], lq_path=None, key=img_dir) - for frame in video_reader: - data['lq'].append(np.flip(frame, axis=2)) - - # remove the data loading pipeline - tmp_pipeline = [] - for pipeline in test_pipeline: - if pipeline['type'] not in [ - 'GenerateSegmentIndices', 'LoadImageFromFileList' - ]: - tmp_pipeline.append(pipeline) - test_pipeline = tmp_pipeline - else: - # the first element in the pipeline must be 'GenerateSegmentIndices' - if test_pipeline[0]['type'] != 'GenerateSegmentIndices': - raise TypeError('The first element in the pipeline must be ' - f'"GenerateSegmentIndices", but got ' - f'"{test_pipeline[0]["type"]}".') - - # specify start_idx and filename_tmpl - test_pipeline[0]['start_idx'] = start_idx - test_pipeline[0]['filename_tmpl'] = filename_tmpl - - # prepare data - sequence_length = len(glob.glob(osp.join(img_dir, '*'))) - img_dir_split = re.split(r'[\\/]', img_dir) - key = img_dir_split[-1] - lq_folder = reduce(osp.join, img_dir_split[:-1]) - data = dict( - lq_path=lq_folder, - gt_path='', - key=key, - sequence_length=sequence_length) - - # compose the pipeline - test_pipeline = Compose(test_pipeline) - data = test_pipeline(data) - data = data['lq'].unsqueeze(0) # in cpu - - # forward the model - with torch.no_grad(): - if window_size > 0: # sliding window framework - data = pad_sequence(data, window_size) - result = [] - for i in range(0, data.size(1) - 2 * (window_size // 2)): - data_i = data[:, i:i + window_size].to(device) - result.append(model(lq=data_i, test_mode=True)['output'].cpu()) - result = torch.stack(result, dim=1) - else: # recurrent framework - if max_seq_len is None: - result = model( - lq=data.to(device), test_mode=True)['output'].cpu() - else: - result = [] - for 
i in range(0, data.size(1), max_seq_len): - result.append( - model( - lq=data[:, i:i + max_seq_len].to(device), - test_mode=True)['output'].cpu()) - result = torch.cat(result, dim=1) - return result diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/test.py b/cv/super_resolution/basicvsr/pytorch/mmedit/apis/test.py deleted file mode 100755 index 535511ee..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/test.py +++ /dev/null @@ -1,234 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import pickle -import shutil -import tempfile - -import mmcv -import torch -import torch.distributed as dist -from mmcv.runner import get_dist_info - - -def single_gpu_test(model, - data_loader, - save_image=False, - save_path=None, - iteration=None): - """Test model with a single gpu. - - This method tests model with a single gpu and displays test progress bar. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - save_image (bool): Whether save image. Default: False. - save_path (str): The path to save image. Default: None. - iteration (int): Iteration number. It is used for the save image name. - Default: None. - - Returns: - list: The prediction results. - """ - if save_image and save_path is None: - raise ValueError( - "When 'save_image' is True, you should also set 'save_path'.") - - model.eval() - results = [] - dataset = data_loader.dataset - prog_bar = mmcv.ProgressBar(len(dataset)) - for data in data_loader: - with torch.no_grad(): - result = model( - test_mode=True, - save_image=save_image, - save_path=save_path, - iteration=iteration, - **data) - results.append(result) - - # get batch size - for _, v in data.items(): - if isinstance(v, torch.Tensor): - batch_size = v.size(0) - break - for _ in range(batch_size): - prog_bar.update() - return results - - -def multi_gpu_test(model, - data_loader, - tmpdir=None, - gpu_collect=False, - save_image=False, - save_path=None, - iteration=None, - empty_cache=False): - """Test model with multiple gpus. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting 'gpu_collect=True' - it encodes results to gpu tensors and use gpu communication for results - collection. On cpu mode it saves the results on different gpus to 'tmpdir' - and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. - data_loader (nn.Dataloader): Pytorch data loader. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - save_image (bool): Whether save image. Default: False. - save_path (str): The path to save image. Default: None. - iteration (int): Iteration number. It is used for the save image name. - Default: None. - empty_cache (bool): empty cache in every iteration. Default: False. - - Returns: - list: The prediction results. 
- """ - - if save_image and save_path is None: - raise ValueError( - "When 'save_image' is True, you should also set 'save_path'.") - model.eval() - results = [] - dataset = data_loader.dataset - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - for data in data_loader: - with torch.no_grad(): - result = model( - test_mode=True, - save_image=save_image, - save_path=save_path, - iteration=iteration, - **data) - results.append(result) - if empty_cache: - torch.cuda.empty_cache() - if rank == 0: - # get batch size - for _, v in data.items(): - if isinstance(v, torch.Tensor): - batch_size = v.size(0) - break - for _ in range(batch_size * world_size): - prog_bar.update() - # collect results from all ranks - if gpu_collect: - results = collect_results_gpu(results, len(dataset)) - else: - results = collect_results_cpu(results, len(dataset), tmpdir) - return results - - -def collect_results_cpu(result_part, size, tmpdir=None): - """Collect results in cpu mode. - - It saves the results on different gpus to 'tmpdir' and collects - them by the rank 0 worker. - - Args: - result_part (list): Results to be collected - size (int): Result size. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. Default: None - - Returns: - list: Ordered results. - """ - - rank, world_size = get_dist_info() - # create a tmp dir if it is not specified - if tmpdir is None: - MAX_LEN = 512 - # 32 is whitespace - dir_tensor = torch.full((MAX_LEN, ), - 32, - dtype=torch.uint8, - device='cuda') - if rank == 0: - mmcv.mkdir_or_exist('.dist_test') - tmpdir = tempfile.mkdtemp(dir='.dist_test') - tmpdir = torch.tensor( - bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda') - dir_tensor[:len(tmpdir)] = tmpdir - dist.broadcast(dir_tensor, 0) - tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip() - else: - mmcv.mkdir_or_exist(tmpdir) - # synchronizes all processes to make sure tmpdir exist - dist.barrier() - # dump the part result to the dir - mmcv.dump(result_part, osp.join(tmpdir, 'part_{}.pkl'.format(rank))) - # synchronizes all processes for loading pickle file - dist.barrier() - # collect all parts - if rank != 0: - return None - - # load results of all parts from tmp dir - part_list = [] - for i in range(world_size): - part_file = osp.join(tmpdir, 'part_{}.pkl'.format(i)) - part_list.append(mmcv.load(part_file)) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - # remove tmp dir - shutil.rmtree(tmpdir) - return ordered_results - - -def collect_results_gpu(result_part, size): - """Collect results in gpu mode. - - It encodes results to gpu tensors and use gpu communication for results - collection. - - Args: - result_part (list): Results to be collected - size (int): Result size. - - Returns: - list: Ordered results. 
- """ - - rank, world_size = get_dist_info() - # dump result part to tensor with pickle - part_tensor = torch.tensor( - bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda') - # gather all result part tensor shape - shape_tensor = torch.tensor(part_tensor.shape, device='cuda') - shape_list = [shape_tensor.clone() for _ in range(world_size)] - dist.all_gather(shape_list, shape_tensor) - # padding result part tensor to max length - shape_max = torch.tensor(shape_list).max() - part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda') - part_send[:shape_tensor[0]] = part_tensor - part_recv_list = [ - part_tensor.new_zeros(shape_max) for _ in range(world_size) - ] - # gather all result part - dist.all_gather(part_recv_list, part_send) - - if rank != 0: - return None - - part_list = [] - for recv, shape in zip(part_recv_list, shape_list): - part_list.append(pickle.loads(recv[:shape[0]].cpu().numpy().tobytes())) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - return ordered_results diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/train.py b/cv/super_resolution/basicvsr/pytorch/mmedit/apis/train.py deleted file mode 100755 index 359943ca..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/train.py +++ /dev/null @@ -1,361 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import os.path as osp -import random -import warnings - -import mmcv -import numpy as np -import torch -import torch.distributed as dist -from mmcv.parallel import MMDataParallel -from mmcv.runner import HOOKS, IterBasedRunner, get_dist_info -from mmcv.utils import build_from_cfg - -from mmedit.core import DistEvalIterHook, EvalIterHook, build_optimizers -from mmedit.core.distributed_wrapper import DistributedDataParallelWrapper -from mmedit.datasets.builder import build_dataloader, build_dataset -from mmedit.utils import get_root_logger - - -def init_random_seed(seed=None, device='cuda'): - """Initialize random seed. - If the seed is not set, the seed will be automatically randomized, - and then broadcast to all processes to prevent some potential bugs. - Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - Returns: - int: Seed to be used. - """ - if seed is not None: - return seed - - # Make sure all ranks share the same random seed to prevent - # some potential bugs. Please refer to - # https://github.com/open-mmlab/mmdetection/issues/6339 - rank, world_size = get_dist_info() - seed = np.random.randint(2**31) - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. 
- """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def train_model(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - """Train model entry function. - - Args: - model (nn.Module): The model to be trained. - dataset (:obj:`Dataset`): Train dataset. - cfg (dict): The config dict for training. - distributed (bool): Whether to use distributed training. - Default: False. - validate (bool): Whether to do evaluation. Default: False. - timestamp (str | None): Local time for runner. Default: None. - meta (dict | None): Meta dict to record some important information. - Default: None - """ - logger = get_root_logger(log_level=cfg.log_level) - - # start training - if distributed: - _dist_train( - model, - dataset, - cfg, - validate=validate, - logger=logger, - timestamp=timestamp, - meta=meta) - else: - _non_dist_train( - model, - dataset, - cfg, - validate=validate, - logger=logger, - timestamp=timestamp, - meta=meta) - - -def _dist_train(model, - dataset, - cfg, - validate=False, - logger=None, - timestamp=None, - meta=None): - """Distributed training function. - - Args: - model (nn.Module): The model to be trained. - dataset (:obj:`Dataset`): Train dataset. - cfg (dict): The config dict for training. - validate (bool): Whether to do evaluation. Default: False. - logger (logging.Logger | None): Logger for training. Default: None. - timestamp (str | None): Local time for runner. Default: None. - meta (dict | None): Meta dict to record some important information. - Default: None. 
- """ - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - - # step 1: give default values and override (if exist) from cfg.data - loader_cfg = { - **dict(seed=cfg.get('seed'), drop_last=False, dist=True), - **({} if torch.__version__ != 'parrots' else dict( - prefetch_num=2, - pin_memory=False, - )), - **dict((k, cfg.data[k]) for k in [ - 'samples_per_gpu', - 'workers_per_gpu', - 'shuffle', - 'seed', - 'drop_last', - 'prefetch_num', - 'pin_memory', - ] if k in cfg.data) - } - - # step 2: cfg.data.train_dataloader has highest priority - train_loader_cfg = dict(loader_cfg, **cfg.data.get('train_dataloader', {})) - - data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset] - - # put model on gpus - find_unused_parameters = cfg.get('find_unused_parameters', False) - model = DistributedDataParallelWrapper( - model, - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - - # build runner - optimizer = build_optimizers(model, cfg.optimizers) - runner = IterBasedRunner( - model, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta) - # an ugly walkaround to make the .log and .log.json filenames the same - runner.timestamp = timestamp - - # register hooks - runner.register_training_hooks( - cfg.lr_config, - checkpoint_config=cfg.checkpoint_config, - log_config=cfg.log_config) - - # visual hook - if cfg.get('visual_config', None) is not None: - cfg.visual_config['output_dir'] = os.path.join( - cfg.work_dir, cfg.visual_config['output_dir']) - runner.register_hook(mmcv.build_from_cfg(cfg.visual_config, HOOKS)) - - # evaluation hook - if validate and cfg.get('evaluation', None) is not None: - dataset = build_dataset(cfg.data.val) - - if ('val_samples_per_gpu' in cfg.data - or 'val_workers_per_gpu' in cfg.data): - warnings.warn('"val_samples_per_gpu/val_workers_per_gpu" have ' - 'been deprecated. Please use ' - '"val_dataloader=dict(samples_per_gpu=1)" instead. ' - 'Details see ' - 'https://github.com/open-mmlab/mmediting/pull/201') - - val_loader_cfg = { - **loader_cfg, - **dict(shuffle=False, drop_last=False), - **dict((newk, cfg.data[oldk]) for oldk, newk in [ - ('val_samples_per_gpu', 'samples_per_gpu'), - ('val_workers_per_gpu', 'workers_per_gpu'), - ] if oldk in cfg.data), - **cfg.data.get('val_dataloader', {}) - } - - data_loader = build_dataloader(dataset, **val_loader_cfg) - save_path = osp.join(cfg.work_dir, 'val_visuals') - runner.register_hook( - DistEvalIterHook( - data_loader, save_path=save_path, **cfg.evaluation), - priority='LOW') - - # user-defined hooks - if cfg.get('custom_hooks', None): - custom_hooks = cfg.custom_hooks - assert isinstance(custom_hooks, list), \ - f'custom_hooks expect list type, but got {type(custom_hooks)}' - for hook_cfg in cfg.custom_hooks: - assert isinstance(hook_cfg, dict), \ - 'Each item in custom_hooks expects dict type, but got ' \ - f'{type(hook_cfg)}' - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = build_from_cfg(hook_cfg, HOOKS) - runner.register_hook(hook, priority=priority) - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow, cfg.total_iters) - - -def _non_dist_train(model, - dataset, - cfg, - validate=False, - logger=None, - timestamp=None, - meta=None): - """Non-Distributed training function. - - Args: - model (nn.Module): The model to be trained. 
- dataset (:obj:`Dataset`): Train dataset. - cfg (dict): The config dict for training. - validate (bool): Whether to do evaluation. Default: False. - logger (logging.Logger | None): Logger for training. Default: None. - timestamp (str | None): Local time for runner. Default: None. - meta (dict | None): Meta dict to record some important information. - Default: None. - """ - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - - # step 1: give default values and override (if exist) from cfg.data - loader_cfg = { - **dict( - seed=cfg.get('seed'), - drop_last=False, - dist=False, - num_gpus=cfg.gpus), - **({} if torch.__version__ != 'parrots' else dict( - prefetch_num=2, - pin_memory=False, - )), - **dict((k, cfg.data[k]) for k in [ - 'samples_per_gpu', - 'workers_per_gpu', - 'shuffle', - 'seed', - 'drop_last', - 'prefetch_num', - 'pin_memory', - ] if k in cfg.data) - } - - # step 2: cfg.data.train_dataloader has highest priority - train_loader_cfg = dict(loader_cfg, **cfg.data.get('train_dataloader', {})) - - data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset] - - # put model on gpus/cpus - model = MMDataParallel(model, device_ids=range(cfg.gpus)) - - # build runner - optimizer = build_optimizers(model, cfg.optimizers) - runner = IterBasedRunner( - model, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta) - - # an ugly walkaround to make the .log and .log.json filenames the same - runner.timestamp = timestamp - - # register hooks - runner.register_training_hooks( - cfg.lr_config, - checkpoint_config=cfg.checkpoint_config, - log_config=cfg.log_config) - - # visual hook - if cfg.get('visual_config', None) is not None: - cfg.visual_config['output_dir'] = os.path.join( - cfg.work_dir, cfg.visual_config['output_dir']) - runner.register_hook(mmcv.build_from_cfg(cfg.visual_config, HOOKS)) - - # evaluation hook - if validate and cfg.get('evaluation', None) is not None: - dataset = build_dataset(cfg.data.val) - - if ('val_samples_per_gpu' in cfg.data - or 'val_workers_per_gpu' in cfg.data): - warnings.warn('"val_samples_per_gpu/val_workers_per_gpu" have ' - 'been deprecated. Please use ' - '"val_dataloader=dict(samples_per_gpu=1)" instead. 
' - 'Details see ' - 'https://github.com/open-mmlab/mmediting/pull/201') - - val_loader_cfg = { - **loader_cfg, - **dict(shuffle=False, drop_last=False), - **dict((newk, cfg.data[oldk]) for oldk, newk in [ - ('val_samples_per_gpu', 'samples_per_gpu'), - ('val_workers_per_gpu', 'workers_per_gpu'), - ] if oldk in cfg.data), - **cfg.data.get('val_dataloader', {}) - } - - data_loader = build_dataloader(dataset, **val_loader_cfg) - save_path = osp.join(cfg.work_dir, 'val_visuals') - runner.register_hook( - EvalIterHook(data_loader, save_path=save_path, **cfg.evaluation), - priority='LOW') - - # user-defined hooks - if cfg.get('custom_hooks', None): - custom_hooks = cfg.custom_hooks - assert isinstance(custom_hooks, list), \ - f'custom_hooks expect list type, but got {type(custom_hooks)}' - for hook_cfg in cfg.custom_hooks: - assert isinstance(hook_cfg, dict), \ - 'Each item in custom_hooks expects dict type, but got ' \ - f'{type(hook_cfg)}' - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = build_from_cfg(hook_cfg, HOOKS) - runner.register_hook(hook, priority=priority) - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow, cfg.total_iters) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/video_interpolation_inference.py b/cv/super_resolution/basicvsr/pytorch/mmedit/apis/video_interpolation_inference.py deleted file mode 100755 index eaabb430..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/apis/video_interpolation_inference.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -import os -import os.path as osp - -import cv2 -import mmcv -import numpy as np -import torch -from mmcv.fileio import FileClient -from mmcv.parallel import collate - -from mmedit.datasets.pipelines import Compose - -VIDEO_EXTENSIONS = ('.mp4', '.mov', '.avi') -FILE_CLIENT = FileClient('disk') - - -def read_image(filepath): - """Read image from file. - - Args: - filepath (str): File path. - - Returns: - image (np.array): Image. - """ - img_bytes = FILE_CLIENT.get(filepath) - image = mmcv.imfrombytes( - img_bytes, flag='color', channel_order='rgb', backend='pillow') - return image - - -def read_frames(source, start_index, num_frames, from_video, end_index): - """Read frames from file or video. - - Args: - source (list | mmcv.VideoReader): Source of frames. - start_index (int): Start index of frames. - num_frames (int): frames number to be read. - from_video (bool): Weather read frames from video. - end_index (int): The end index of frames. - - Returns: - images (np.array): Images. - """ - images = [] - last_index = min(start_index + num_frames, end_index) - # read frames from video - if from_video: - for index in range(start_index, last_index): - if index >= source.frame_cnt: - break - images.append(np.flip(source.get_frame(index), axis=2)) - else: - files = source[start_index:last_index] - images = [read_image(f) for f in files] - return images - - -def video_interpolation_inference(model, - input_dir, - output_dir, - start_idx=0, - end_idx=None, - batch_size=4, - fps_multiplier=0, - fps=0, - filename_tmpl='{:08d}.png'): - """Inference image with the model. - - Args: - model (nn.Module): The loaded model. - input_dir (str): Directory of the input video. - output_dir (str): Directory of the output video. - start_idx (int): The index corresponding to the first frame in the - sequence. 
Default: 0 - end_idx (int | None): The index corresponding to the last interpolated - frame in the sequence. If it is None, interpolate to the last - frame of video or sequence. Default: None - batch_size (int): Batch size. Default: 4 - fps_multiplier (float): multiply the fps based on the input video. - Default: 0. - fps (float): frame rate of the output video. Default: 0. - filename_tmpl (str): template of the file names. Default: '{:08d}.png' - - Returns: - output (list[numpy.array]): The predicted interpolation result. - It is an image sequence. - input_fps (float): The fps of input video. If the input is an image - sequence, input_fps=0.0 - """ - - device = next(model.parameters()).device # model device - - # build the data pipeline - if model.cfg.get('demo_pipeline', None): - test_pipeline = model.cfg.demo_pipeline - elif model.cfg.get('test_pipeline', None): - test_pipeline = model.cfg.test_pipeline - else: - test_pipeline = model.cfg.val_pipeline - - # remove the data loading pipeline - tmp_pipeline = [] - for pipeline in test_pipeline: - if pipeline['type'] not in [ - 'GenerateSegmentIndices', 'LoadImageFromFileList', - 'LoadImageFromFile' - ]: - tmp_pipeline.append(pipeline) - test_pipeline = tmp_pipeline - - # compose the pipeline - test_pipeline = Compose(test_pipeline) - - # check if the input is a video - input_file_extension = os.path.splitext(input_dir)[1] - if input_file_extension in VIDEO_EXTENSIONS: - source = mmcv.VideoReader(input_dir) - input_fps = source.fps - length = source.frame_cnt - from_video = True - h, w = source.height, source.width - if fps_multiplier: - assert fps_multiplier > 0, '`fps_multiplier` cannot be negative' - output_fps = fps_multiplier * input_fps - else: - output_fps = fps if fps > 0 else input_fps * 2 - else: - files = os.listdir(input_dir) - files = [osp.join(input_dir, f) for f in files] - files.sort() - source = files - length = files.__len__() - from_video = False - example_frame = read_image(files[0]) - h, w = example_frame.shape[:2] - output_fps = fps - - # check if the output is a video - output_file_extension = os.path.splitext(output_dir)[1] - if output_file_extension in VIDEO_EXTENSIONS: - fourcc = cv2.VideoWriter_fourcc(*'mp4v') - target = cv2.VideoWriter(output_dir, fourcc, output_fps, (w, h)) - to_video = True - else: - to_video = False - - end_idx = min(end_idx, length) if end_idx is not None else length - - # calculate step args - step_size = model.step_frames * batch_size - lenth_per_step = model.required_frames + model.step_frames * ( - batch_size - 1) - repeat_frame = model.required_frames - model.step_frames - - prog_bar = mmcv.ProgressBar( - math.ceil( - (end_idx + step_size - lenth_per_step - start_idx) / step_size)) - output_index = start_idx - for start_index in range(start_idx, end_idx, step_size): - images = read_frames( - source, start_index, lenth_per_step, from_video, end_index=end_idx) - - # data prepare - data = dict(inputs=images, inputs_path=None, key=input_dir) - data = [test_pipeline(data)] - data = collate(data, samples_per_gpu=1)['inputs'] - # data.shape: [1, t, c, h, w] - - # forward the model - data = model.split_frames(data) - input_tensors = data.clone().detach() - with torch.no_grad(): - output = model(data.to(device), test_mode=True)['output'] - if len(output.shape) == 4: - output = output.unsqueeze(1) - output_tensors = output.cpu() - if len(output_tensors.shape) == 4: - output_tensors = output_tensors.unsqueeze(1) - result = model.merge_frames(input_tensors, output_tensors) - if not start_idx == 
start_index: - result = result[repeat_frame:] - prog_bar.update() - - # save frames - if to_video: - for frame in result: - target.write(frame) - else: - for frame in result: - save_path = osp.join(output_dir, - filename_tmpl.format(output_index)) - mmcv.imwrite(frame, save_path) - output_index += 1 - - if start_index + lenth_per_step >= end_idx: - break - - print() - print(f'Output dir: {output_dir}') - if to_video: - target.release() diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/__init__.py deleted file mode 100755 index 2b24ce34..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .evaluation import (DistEvalIterHook, EvalIterHook, L1Evaluation, mae, - mse, psnr, reorder_image, sad, ssim) -from .hooks import VisualizationHook -from .misc import tensor2img -from .optimizer import build_optimizers -from .scheduler import LinearLrUpdaterHook, ReduceLrUpdaterHook - -__all__ = [ - 'build_optimizers', 'tensor2img', 'EvalIterHook', 'DistEvalIterHook', - 'mse', 'psnr', 'reorder_image', 'sad', 'ssim', 'LinearLrUpdaterHook', - 'VisualizationHook', 'L1Evaluation', 'ReduceLrUpdaterHook', 'mae' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/distributed_wrapper.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/distributed_wrapper.py deleted file mode 100755 index 660f41f3..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/distributed_wrapper.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.parallel import MODULE_WRAPPERS, MMDistributedDataParallel -from mmcv.parallel.scatter_gather import scatter_kwargs -from torch.cuda._utils import _get_device_index - - -@MODULE_WRAPPERS.register_module() -class DistributedDataParallelWrapper(nn.Module): - """A DistributedDataParallel wrapper for models in MMediting. - - In MMedting, there is a need to wrap different modules in the models - with separate DistributedDataParallel. Otherwise, it will cause - errors for GAN training. - More specific, the GAN model, usually has two sub-modules: - generator and discriminator. If we wrap both of them in one - standard DistributedDataParallel, it will cause errors during training, - because when we update the parameters of the generator (or discriminator), - the parameters of the discriminator (or generator) is not updated, which is - not allowed for DistributedDataParallel. - So we design this wrapper to separately wrap DistributedDataParallel - for generator and discriminator. - - In this wrapper, we perform two operations: - 1. Wrap the modules in the models with separate MMDistributedDataParallel. - Note that only modules with parameters will be wrapped. - 2. Do scatter operation for 'forward', 'train_step' and 'val_step'. - - Note that the arguments of this wrapper is the same as those in - `torch.nn.parallel.distributed.DistributedDataParallel`. - - Args: - module (nn.Module): Module that needs to be wrapped. - device_ids (list[int | `torch.device`]): Same as that in - `torch.nn.parallel.distributed.DistributedDataParallel`. - dim (int, optional): Same as that in the official scatter function in - pytorch. Defaults to 0. - broadcast_buffers (bool): Same as that in - `torch.nn.parallel.distributed.DistributedDataParallel`. - Defaults to False. 
- find_unused_parameters (bool, optional): Same as that in - `torch.nn.parallel.distributed.DistributedDataParallel`. - Traverse the autograd graph of all tensors contained in returned - value of the wrapped module’s forward function. Defaults to False. - kwargs (dict): Other arguments used in - `torch.nn.parallel.distributed.DistributedDataParallel`. - """ - - def __init__(self, - module, - device_ids, - dim=0, - broadcast_buffers=False, - find_unused_parameters=False, - **kwargs): - super().__init__() - assert len(device_ids) == 1, ( - 'Currently, DistributedDataParallelWrapper only supports one' - 'single CUDA device for each process.' - f'The length of device_ids must be 1, but got {len(device_ids)}.') - self.module = module - self.dim = dim - self.to_ddp( - device_ids=device_ids, - dim=dim, - broadcast_buffers=broadcast_buffers, - find_unused_parameters=find_unused_parameters, - **kwargs) - self.output_device = _get_device_index(device_ids[0], True) - - def to_ddp(self, device_ids, dim, broadcast_buffers, - find_unused_parameters, **kwargs): - """Wrap models with separate MMDistributedDataParallel. - - It only wraps the modules with parameters. - """ - for name, module in self.module._modules.items(): - if next(module.parameters(), None) is None: - module = module.cuda() - elif all(not p.requires_grad for p in module.parameters()): - module = module.cuda() - else: - module = MMDistributedDataParallel( - module.cuda(), - device_ids=device_ids, - dim=dim, - broadcast_buffers=broadcast_buffers, - find_unused_parameters=find_unused_parameters, - **kwargs) - self.module._modules[name] = module - - def scatter(self, inputs, kwargs, device_ids): - """Scatter function. - - Args: - inputs (Tensor): Input Tensor. - kwargs (dict): Args for - ``mmcv.parallel.scatter_gather.scatter_kwargs``. - device_ids (int): Device id. - """ - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def forward(self, *inputs, **kwargs): - """Forward function. - - Args: - inputs (tuple): Input data. - kwargs (dict): Args for - ``mmcv.parallel.scatter_gather.scatter_kwargs``. - """ - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - return self.module(*inputs[0], **kwargs[0]) - - def train_step(self, *inputs, **kwargs): - """Train step function. - - Args: - inputs (Tensor): Input Tensor. - kwargs (dict): Args for - ``mmcv.parallel.scatter_gather.scatter_kwargs``. - """ - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.train_step(*inputs[0], **kwargs[0]) - return output - - def val_step(self, *inputs, **kwargs): - """Validation step function. - - Args: - inputs (tuple): Input data. - kwargs (dict): Args for ``scatter_kwargs``. - """ - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.val_step(*inputs[0], **kwargs[0]) - return output diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/__init__.py deleted file mode 100755 index 5294618c..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .eval_hooks import DistEvalIterHook, EvalIterHook -from .metrics import (L1Evaluation, connectivity, gradient_error, mae, mse, - niqe, psnr, reorder_image, sad, ssim) - -__all__ = [ - 'mse', 'sad', 'psnr', 'reorder_image', 'ssim', 'EvalIterHook', - 'DistEvalIterHook', 'L1Evaluation', 'gradient_error', 'connectivity', - 'niqe', 'mae' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/eval_hooks.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/eval_hooks.py deleted file mode 100755 index bda0f846..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/eval_hooks.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -from mmcv.runner import Hook -from torch.utils.data import DataLoader - - -class EvalIterHook(Hook): - """Non-Distributed evaluation hook for iteration-based runner. - - This hook will regularly perform evaluation in a given interval when - performing in non-distributed environment. - - Args: - dataloader (DataLoader): A PyTorch dataloader. - interval (int): Evaluation interval. Default: 1. - eval_kwargs (dict): Other eval kwargs. It contains: - save_image (bool): Whether to save image. - save_path (str): The path to save image. - """ - - def __init__(self, dataloader, interval=1, **eval_kwargs): - if not isinstance(dataloader, DataLoader): - raise TypeError('dataloader must be a pytorch DataLoader, ' - f'but got { type(dataloader)}') - self.dataloader = dataloader - self.interval = interval - self.eval_kwargs = eval_kwargs - self.save_image = self.eval_kwargs.pop('save_image', False) - self.save_path = self.eval_kwargs.pop('save_path', None) - - def after_train_iter(self, runner): - """The behavior after each train iteration. - - Args: - runner (``mmcv.runner.BaseRunner``): The runner. - """ - if not self.every_n_iters(runner, self.interval): - return - runner.log_buffer.clear() - from mmedit.apis import single_gpu_test - results = single_gpu_test( - runner.model, - self.dataloader, - save_image=self.save_image, - save_path=self.save_path, - iteration=runner.iter) - self.evaluate(runner, results) - - def evaluate(self, runner, results): - """Evaluation function. - - Args: - runner (``mmcv.runner.BaseRunner``): The runner. - results (dict): Model forward results. - """ - eval_res = self.dataloader.dataset.evaluate( - results, logger=runner.logger, **self.eval_kwargs) - for name, val in eval_res.items(): - runner.log_buffer.output[name] = val - runner.log_buffer.ready = True - # call `after_val_epoch` after evaluation. - # This is a hack. - # Because epoch does not naturally exist In IterBasedRunner, - # thus we consider the end of an evluation as the end of an epoch. - # With this hack , we can support epoch based hooks. - if 'iter' in runner.__class__.__name__.lower(): - runner.call_hook('after_val_epoch') - - -class DistEvalIterHook(EvalIterHook): - """Distributed evaluation hook. - - Args: - dataloader (DataLoader): A PyTorch dataloader. - interval (int): Evaluation interval. Default: 1. - tmpdir (str | None): Temporary directory to save the results of all - processes. Default: None. - gpu_collect (bool): Whether to use gpu or cpu to collect results. - Default: False. - eval_kwargs (dict): Other eval kwargs. It may contain: - save_image (bool): Whether save image. - save_path (str): The path to save image. 
- """ - - def __init__(self, - dataloader, - interval=1, - gpu_collect=False, - **eval_kwargs): - super().__init__(dataloader, interval, **eval_kwargs) - self.gpu_collect = gpu_collect - - def after_train_iter(self, runner): - """The behavior after each train iteration. - - Args: - runner (``mmcv.runner.BaseRunner``): The runner. - """ - if not self.every_n_iters(runner, self.interval): - return - runner.log_buffer.clear() - from mmedit.apis import multi_gpu_test - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=osp.join(runner.work_dir, '.eval_hook'), - gpu_collect=self.gpu_collect, - save_image=self.save_image, - save_path=self.save_path, - iteration=runner.iter) - if runner.rank == 0: - print('\n') - self.evaluate(runner, results) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/metric_utils.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/metric_utils.py deleted file mode 100755 index ec601767..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/metric_utils.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np - - -def gaussian(x, sigma): - """Gaussian function. - - Args: - x (array_like): The independent variable. - sigma (float): Standard deviation of the gaussian function. - - Return: - ndarray or scalar: Gaussian value of `x`. - """ - return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi)) - - -def dgaussian(x, sigma): - """Gradient of gaussian. - - Args: - x (array_like): The independent variable. - sigma (float): Standard deviation of the gaussian function. - - Return: - ndarray or scalar: Gradient of gaussian of `x`. - """ - return -x * gaussian(x, sigma) / sigma**2 - - -def gauss_filter(sigma, epsilon=1e-2): - """Gradient of gaussian. - - Args: - sigma (float): Standard deviation of the gaussian kernel. - epsilon (float): Small value used when calculating kernel size. - Default: 1e-2. - - Return: - tuple[ndarray]: Gaussian filter along x and y axis. - """ - half_size = np.ceil( - sigma * np.sqrt(-2 * np.log(np.sqrt(2 * np.pi) * sigma * epsilon))) - size = int(2 * half_size + 1) - - # create filter in x axis - filter_x = np.zeros((size, size)) - for i in range(size): - for j in range(size): - filter_x[i, j] = gaussian(i - half_size, sigma) * dgaussian( - j - half_size, sigma) - - # normalize filter - norm = np.sqrt((filter_x**2).sum()) - filter_x = filter_x / norm - filter_y = np.transpose(filter_x) - - return filter_x, filter_y - - -def gauss_gradient(img, sigma): - """Gaussian gradient. - - From https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/ - submissions/8060/versions/2/previews/gaussgradient/gaussgradient.m/ - index.html - - Args: - img (ndarray): Input image. - sigma (float): Standard deviation of the gaussian kernel. - - Return: - ndarray: Gaussian gradient of input `img`. - """ - filter_x, filter_y = gauss_filter(sigma) - img_filtered_x = cv2.filter2D( - img, -1, filter_x, borderType=cv2.BORDER_REPLICATE) - img_filtered_y = cv2.filter2D( - img, -1, filter_y, borderType=cv2.BORDER_REPLICATE) - return np.sqrt(img_filtered_x**2 + img_filtered_y**2) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/metrics.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/metrics.py deleted file mode 100755 index 9e37dbe1..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/metrics.py +++ /dev/null @@ -1,572 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import math - -import cv2 -import mmcv -import numpy as np -from scipy.ndimage import convolve -from scipy.special import gamma - -from mmedit.datasets.pipelines.matlab_like_resize import MATLABLikeResize -from .metric_utils import gauss_gradient - - -def sad(alpha, trimap, pred_alpha): - if alpha.ndim != 2 or trimap.ndim != 2 or pred_alpha.ndim != 2: - raise ValueError( - 'input alpha, trimap and pred_alpha should has two dimensions, ' - f'alpha {alpha.shape}, please check their shape: ' - f'trimap {trimap.shape}, pred_alpha {pred_alpha.shape}') - assert (pred_alpha[trimap == 0] == 0).all() - assert (pred_alpha[trimap == 255] == 255).all() - alpha = alpha.astype(np.float64) / 255 - pred_alpha = pred_alpha.astype(np.float64) / 255 - sad_result = np.abs(pred_alpha - alpha).sum() / 1000 - return sad_result - - -def mse(alpha, trimap, pred_alpha): - if alpha.ndim != 2 or trimap.ndim != 2 or pred_alpha.ndim != 2: - raise ValueError( - 'input alpha, trimap and pred_alpha should has two dimensions, ' - f'alpha {alpha.shape}, please check their shape: ' - f'trimap {trimap.shape}, pred_alpha {pred_alpha.shape}') - assert (pred_alpha[trimap == 0] == 0).all() - assert (pred_alpha[trimap == 255] == 255).all() - alpha = alpha.astype(np.float64) / 255 - pred_alpha = pred_alpha.astype(np.float64) / 255 - weight_sum = (trimap == 128).sum() - if weight_sum != 0: - mse_result = ((pred_alpha - alpha)**2).sum() / weight_sum - else: - mse_result = 0 - return mse_result - - -def gradient_error(alpha, trimap, pred_alpha, sigma=1.4): - """Gradient error for evaluating alpha matte prediction. - - Args: - alpha (ndarray): Ground-truth alpha matte. - trimap (ndarray): Input trimap with its value in {0, 128, 255}. - pred_alpha (ndarray): Predicted alpha matte. - sigma (float): Standard deviation of the gaussian kernel. Default: 1.4. - """ - if alpha.ndim != 2 or trimap.ndim != 2 or pred_alpha.ndim != 2: - raise ValueError( - 'input alpha, trimap and pred_alpha should has two dimensions, ' - f'alpha {alpha.shape}, please check their shape: ' - f'trimap {trimap.shape}, pred_alpha {pred_alpha.shape}') - if not ((pred_alpha[trimap == 0] == 0).all() and - (pred_alpha[trimap == 255] == 255).all()): - raise ValueError( - 'pred_alpha should be masked by trimap before evaluation') - alpha = alpha.astype(np.float64) - pred_alpha = pred_alpha.astype(np.float64) - alpha_normed = np.zeros_like(alpha) - pred_alpha_normed = np.zeros_like(pred_alpha) - cv2.normalize(alpha, alpha_normed, 1., 0., cv2.NORM_MINMAX) - cv2.normalize(pred_alpha, pred_alpha_normed, 1., 0., cv2.NORM_MINMAX) - - alpha_grad = gauss_gradient(alpha_normed, sigma).astype(np.float32) - pred_alpha_grad = gauss_gradient(pred_alpha_normed, - sigma).astype(np.float32) - - grad_loss = ((alpha_grad - pred_alpha_grad)**2 * (trimap == 128)).sum() - # same as SAD, divide by 1000 to reduce the magnitude of the result - return grad_loss / 1000 - - -def connectivity(alpha, trimap, pred_alpha, step=0.1): - """Connectivity error for evaluating alpha matte prediction. - - Args: - alpha (ndarray): Ground-truth alpha matte with shape (height, width). - Value range of alpha is [0, 255]. - trimap (ndarray): Input trimap with shape (height, width). Elements - in trimap are one of {0, 128, 255}. - pred_alpha (ndarray): Predicted alpha matte with shape (height, width). - Value range of pred_alpha is [0, 255]. - step (float): Step of threshold when computing intersection between - `alpha` and `pred_alpha`. 
- """ - if alpha.ndim != 2 or trimap.ndim != 2 or pred_alpha.ndim != 2: - raise ValueError( - 'input alpha, trimap and pred_alpha should has two dimensions, ' - f'alpha {alpha.shape}, please check their shape: ' - f'trimap {trimap.shape}, pred_alpha {pred_alpha.shape}') - if not ((pred_alpha[trimap == 0] == 0).all() and - (pred_alpha[trimap == 255] == 255).all()): - raise ValueError( - 'pred_alpha should be masked by trimap before evaluation') - alpha = alpha.astype(np.float32) / 255 - pred_alpha = pred_alpha.astype(np.float32) / 255 - - thresh_steps = np.arange(0, 1 + step, step) - round_down_map = -np.ones_like(alpha) - for i in range(1, len(thresh_steps)): - alpha_thresh = alpha >= thresh_steps[i] - pred_alpha_thresh = pred_alpha >= thresh_steps[i] - intersection = (alpha_thresh & pred_alpha_thresh).astype(np.uint8) - - # connected components - _, output, stats, _ = cv2.connectedComponentsWithStats( - intersection, connectivity=4) - # start from 1 in dim 0 to exclude background - size = stats[1:, -1] - - # largest connected component of the intersection - omega = np.zeros_like(alpha) - if len(size) != 0: - max_id = np.argmax(size) - # plus one to include background - omega[output == max_id + 1] = 1 - - mask = (round_down_map == -1) & (omega == 0) - round_down_map[mask] = thresh_steps[i - 1] - round_down_map[round_down_map == -1] = 1 - - alpha_diff = alpha - round_down_map - pred_alpha_diff = pred_alpha - round_down_map - # only calculate difference larger than or equal to 0.15 - alpha_phi = 1 - alpha_diff * (alpha_diff >= 0.15) - pred_alpha_phi = 1 - pred_alpha_diff * (pred_alpha_diff >= 0.15) - - connectivity_error = np.sum( - np.abs(alpha_phi - pred_alpha_phi) * (trimap == 128)) - # same as SAD, divide by 1000 to reduce the magnitude of the result - return connectivity_error / 1000 - - -def reorder_image(img, input_order='HWC'): - """Reorder images to 'HWC' order. - - If the input_order is (h, w), return (h, w, 1); - If the input_order is (c, h, w), return (h, w, c); - If the input_order is (h, w, c), return as it is. - - Args: - img (ndarray): Input image. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - If the input image shape is (h, w), input_order will not have - effects. Default: 'HWC'. - - Returns: - ndarray: reordered image. - """ - - if input_order not in ['HWC', 'CHW']: - raise ValueError( - f'Wrong input_order {input_order}. Supported input_orders are ' - '"HWC" and "CHW"') - if len(img.shape) == 2: - img = img[..., None] - return img - if input_order == 'CHW': - img = img.transpose(1, 2, 0) - return img - - -def psnr(img1, img2, crop_border=0, input_order='HWC', convert_to=None): - """Calculate PSNR (Peak Signal-to-Noise Ratio). - - Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio - - Args: - img1 (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. - crop_border (int): Cropped pixels in each edges of an image. These - pixels are not involved in the PSNR calculation. Default: 0. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - Default: 'HWC'. - convert_to (str): Whether to convert the images to other color models. - If None, the images are not altered. When computing for 'Y', - the images are assumed to be in BGR order. Options are 'Y' and - None. Default: None. - - Returns: - float: psnr result. 
- """ - - assert img1.shape == img2.shape, ( - f'Image shapes are different: {img1.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError( - f'Wrong input_order {input_order}. Supported input_orders are ' - '"HWC" and "CHW"') - img1 = reorder_image(img1, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - - img1, img2 = img1.astype(np.float32), img2.astype(np.float32) - if isinstance(convert_to, str) and convert_to.lower() == 'y': - img1 = mmcv.bgr2ycbcr(img1 / 255., y_only=True) * 255. - img2 = mmcv.bgr2ycbcr(img2 / 255., y_only=True) * 255. - elif convert_to is not None: - raise ValueError('Wrong color model. Supported values are ' - '"Y" and None.') - - if crop_border != 0: - img1 = img1[crop_border:-crop_border, crop_border:-crop_border, None] - img2 = img2[crop_border:-crop_border, crop_border:-crop_border, None] - - mse_value = np.mean((img1 - img2)**2) - if mse_value == 0: - return float('inf') - return 20. * np.log10(255. / np.sqrt(mse_value)) - - -def mae(img1, img2, crop_border=0, input_order='HWC', convert_to=None): - """Calculate mean average error for evaluation. - - Args: - img1 (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. - crop_border (int): Cropped pixels in each edges of an image. These - pixels are not involved in the PSNR calculation. Default: 0. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - Default: 'HWC'. - convert_to (str): Whether to convert the images to other color models. - If None, the images are not altered. Options are 'RGB2Y', 'BGR2Y' - and None. Default: None. - - Returns: - float: mae result. - """ - - assert img1.shape == img2.shape, ( - f'Image shapes are different: {img1.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError( - f'Wrong input_order {input_order}. Supported input_orders are ' - '"HWC" and "CHW"') - img1 = reorder_image(img1, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - - img1, img2 = img1.astype(np.float32), img2.astype(np.float32) - img1, img2 = img1 / 255., img2 / 255. - if isinstance(convert_to, str) and convert_to.lower() == 'rgb2y': - img1 = mmcv.rgb2ycbcr(img1, y_only=True) - img2 = mmcv.rgb2ycbcr(img2, y_only=True) - elif isinstance(convert_to, str) and convert_to.lower() == 'bgr2y': - img1 = mmcv.bgr2ycbcr(img1, y_only=True) - img2 = mmcv.bgr2ycbcr(img2, y_only=True) - elif convert_to is not None: - raise ValueError('Wrong color model. Supported values are ' - '"RGB2Y", "BGR2Y" and None.') - - if crop_border != 0: - img1 = img1[crop_border:-crop_border, crop_border:-crop_border, None] - img2 = img2[crop_border:-crop_border, crop_border:-crop_border, None] - - l1_value = np.mean(np.abs(img1 - img2)) - - return l1_value - - -def _ssim(img1, img2): - """Calculate SSIM (structural similarity) for one channel images. - - It is called by func:`calculate_ssim`. - - Args: - img1, img2 (ndarray): Images with range [0, 255] with order 'HWC'. - - Returns: - float: ssim result. 
- """ - - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * - (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -def ssim(img1, img2, crop_border=0, input_order='HWC', convert_to=None): - """Calculate SSIM (structural similarity). - - Ref: - Image quality assessment: From error visibility to structural similarity - - The results are the same as that of the official released MATLAB code in - https://ece.uwaterloo.ca/~z70wang/research/ssim/. - - For three-channel images, SSIM is calculated for each channel and then - averaged. - - Args: - img1 (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. - crop_border (int): Cropped pixels in each edges of an image. These - pixels are not involved in the SSIM calculation. Default: 0. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - Default: 'HWC'. - convert_to (str): Whether to convert the images to other color models. - If None, the images are not altered. When computing for 'Y', - the images are assumed to be in BGR order. Options are 'Y' and - None. Default: None. - - Returns: - float: ssim result. - """ - - assert img1.shape == img2.shape, ( - f'Image shapes are different: {img1.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError( - f'Wrong input_order {input_order}. Supported input_orders are ' - '"HWC" and "CHW"') - img1 = reorder_image(img1, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - - if isinstance(convert_to, str) and convert_to.lower() == 'y': - img1, img2 = img1.astype(np.float32), img2.astype(np.float32) - img1 = mmcv.bgr2ycbcr(img1 / 255., y_only=True) * 255. - img2 = mmcv.bgr2ycbcr(img2 / 255., y_only=True) * 255. - img1 = np.expand_dims(img1, axis=2) - img2 = np.expand_dims(img2, axis=2) - elif convert_to is not None: - raise ValueError('Wrong color model. Supported values are ' - '"Y" and None') - - if crop_border != 0: - img1 = img1[crop_border:-crop_border, crop_border:-crop_border, None] - img2 = img2[crop_border:-crop_border, crop_border:-crop_border, None] - - ssims = [] - for i in range(img1.shape[2]): - ssims.append(_ssim(img1[..., i], img2[..., i])) - return np.array(ssims).mean() - - -class L1Evaluation: - """L1 evaluation metric. - - Args: - data_dict (dict): Must contain keys of 'gt_img' and 'fake_res'. If - 'mask' is given, the results will be computed with mask as weight. - """ - - def __call__(self, data_dict): - gt = data_dict['gt_img'] - if 'fake_img' in data_dict: - pred = data_dict.get('fake_img') - else: - pred = data_dict.get('fake_res') - mask = data_dict.get('mask', None) - - from mmedit.models.losses.pixelwise_loss import l1_loss - l1_error = l1_loss(pred, gt, weight=mask, reduction='mean') - - return l1_error - - -def estimate_aggd_param(block): - """Estimate AGGD (Asymmetric Generalized Gaussian Distribution) parameters. - - Args: - block (ndarray): 2D Image block. 
- - Returns: - tuple: alpha (float), beta_l (float) and beta_r (float) for the AGGD - distribution (Estimating the parames in Equation 7 in the paper). - """ - block = block.flatten() - gam = np.arange(0.2, 10.001, 0.001) # len = 9801 - gam_reciprocal = np.reciprocal(gam) - r_gam = np.square(gamma(gam_reciprocal * 2)) / ( - gamma(gam_reciprocal) * gamma(gam_reciprocal * 3)) - - left_std = np.sqrt(np.mean(block[block < 0]**2)) - right_std = np.sqrt(np.mean(block[block > 0]**2)) - gammahat = left_std / right_std - rhat = (np.mean(np.abs(block)))**2 / np.mean(block**2) - rhatnorm = (rhat * (gammahat**3 + 1) * - (gammahat + 1)) / ((gammahat**2 + 1)**2) - array_position = np.argmin((r_gam - rhatnorm)**2) - - alpha = gam[array_position] - beta_l = left_std * np.sqrt(gamma(1 / alpha) / gamma(3 / alpha)) - beta_r = right_std * np.sqrt(gamma(1 / alpha) / gamma(3 / alpha)) - return (alpha, beta_l, beta_r) - - -def compute_feature(block): - """Compute features. - - Args: - block (ndarray): 2D Image block. - - Returns: - list: Features with length of 18. - """ - feat = [] - alpha, beta_l, beta_r = estimate_aggd_param(block) - feat.extend([alpha, (beta_l + beta_r) / 2]) - - # distortions disturb the fairly regular structure of natural images. - # This deviation can be captured by analyzing the sample distribution of - # the products of pairs of adjacent coefficients computed along - # horizontal, vertical and diagonal orientations. - shifts = [[0, 1], [1, 0], [1, 1], [1, -1]] - for shift in shifts: - shifted_block = np.roll(block, shift, axis=(0, 1)) - alpha, beta_l, beta_r = estimate_aggd_param(block * shifted_block) - mean = (beta_r - beta_l) * (gamma(2 / alpha) / gamma(1 / alpha)) - feat.extend([alpha, mean, beta_l, beta_r]) - return feat - - -def niqe_core(img, - mu_pris_param, - cov_pris_param, - gaussian_window, - block_size_h=96, - block_size_w=96): - """Calculate NIQE (Natural Image Quality Evaluator) metric. - - Ref: Making a "Completely Blind" Image Quality Analyzer. - This implementation could produce almost the same results as the official - MATLAB codes: http://live.ece.utexas.edu/research/quality/niqe_release.zip - - Note that we do not include block overlap height and width, since they are - always 0 in the official implementation. - - For good performance, it is advisable by the official implementation to - divide the distorted image in to the same size patched as used for the - construction of multivariate Gaussian model. - - Args: - img (ndarray): Input image whose quality needs to be computed. The - image must be a gray or Y (of YCbCr) image with shape (h, w). - Range [0, 255] with float type. - mu_pris_param (ndarray): Mean of a pre-defined multivariate Gaussian - model calculated on the pristine dataset. - cov_pris_param (ndarray): Covariance of a pre-defined multivariate - Gaussian model calculated on the pristine dataset. - gaussian_window (ndarray): A 7x7 Gaussian window used for smoothing the - image. - block_size_h (int): Height of the blocks in to which image is divided. - Default: 96 (the official recommended value). - block_size_w (int): Width of the blocks in to which image is divided. - Default: 96 (the official recommended value). 
- """ - # crop image - h, w = img.shape - num_block_h = math.floor(h / block_size_h) - num_block_w = math.floor(w / block_size_w) - img = img[0:num_block_h * block_size_h, 0:num_block_w * block_size_w] - - distparam = [] # dist param is actually the multiscale features - for scale in (1, 2): # perform on two scales (1, 2) - mu = convolve(img, gaussian_window, mode='nearest') - - sigma = np.sqrt( - np.abs( - convolve(np.square(img), gaussian_window, mode='nearest') - - np.square(mu))) - # normalize, as in Eq. 1 in the paper - img_nomalized = (img - mu) / (sigma + 1) - - feat = [] - for idx_w in range(num_block_w): - for idx_h in range(num_block_h): - # process each block - block = img_nomalized[idx_h * block_size_h // - scale:(idx_h + 1) * block_size_h // - scale, idx_w * block_size_w // - scale:(idx_w + 1) * block_size_w // - scale] - feat.append(compute_feature(block)) - - distparam.append(np.array(feat)) - - # matlab-like bicubic downsample with anti-aliasing - if scale == 1: - resize = MATLABLikeResize(keys=None, scale=0.5) - img = resize._resize(img[:, :, np.newaxis] / 255.)[:, :, 0] * 255. - - distparam = np.concatenate(distparam, axis=1) - - # fit a MVG (multivariate Gaussian) model to distorted patch features - mu_distparam = np.nanmean(distparam, axis=0) - distparam_no_nan = distparam[~np.isnan(distparam).any(axis=1)] - cov_distparam = np.cov(distparam_no_nan, rowvar=False) - - # compute niqe quality, Eq. 10 in the paper - invcov_param = np.linalg.pinv((cov_pris_param + cov_distparam) / 2) - quality = np.matmul( - np.matmul((mu_pris_param - mu_distparam), invcov_param), - np.transpose((mu_pris_param - mu_distparam))) - - return np.squeeze(np.sqrt(quality)) - - -def niqe(img, crop_border, input_order='HWC', convert_to='y'): - """Calculate NIQE (Natural Image Quality Evaluator) metric. - - Ref: Making a "Completely Blind" Image Quality Analyzer. - This implementation could produce almost the same results as the official - MATLAB codes: http://live.ece.utexas.edu/research/quality/niqe_release.zip - - We use the official params estimated from the pristine dataset. - We use the recommended block size (96, 96) without overlaps. - - Args: - img (ndarray): Input image whose quality needs to be computed. - The input image must be in range [0, 255] with float/int type. - The input_order of image can be 'HW' or 'HWC' or 'CHW'. (BGR order) - If the input order is 'HWC' or 'CHW', it will be converted to gray - or Y (of YCbCr) image according to the ``convert_to`` argument. - crop_border (int): Cropped pixels in each edge of an image. These - pixels are not involved in the metric calculation. - input_order (str): Whether the input order is 'HW', 'HWC' or 'CHW'. - Default: 'HWC'. - convert_to (str): Whether converted to 'y' (of MATLAB YCbCr) or 'gray'. - Default: 'y'. - - Returns: - float: NIQE result. - """ - - # we use the official params estimated from the pristine dataset. - niqe_pris_params = np.load('mmedit/core/evaluation/niqe_pris_params.npz') - mu_pris_param = niqe_pris_params['mu_pris_param'] - cov_pris_param = niqe_pris_params['cov_pris_param'] - gaussian_window = niqe_pris_params['gaussian_window'] - - img = img.astype(np.float32) - if input_order != 'HW': - img = reorder_image(img, input_order=input_order) - if convert_to == 'y': - img = mmcv.bgr2ycbcr(img / 255., y_only=True) * 255. - elif convert_to == 'gray': - img = mmcv.bgr2gray(img / 255., cv2.COLOR_BGR2GRAY) * 255. 
- img = np.squeeze(img) - - if crop_border != 0: - img = img[crop_border:-crop_border, crop_border:-crop_border] - - # round to follow official implementation - img = img.round() - - niqe_result = niqe_core(img, mu_pris_param, cov_pris_param, - gaussian_window) - - return niqe_result diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/niqe_pris_params.npz b/cv/super_resolution/basicvsr/pytorch/mmedit/core/evaluation/niqe_pris_params.npz deleted file mode 100755 index 204ddcee87c4cd39aca04a42b539f0a5bfccecc3..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 11850 zcmbt)cUaH;+jr3=DyzX|m!hnqC9AVZMABZ`oA%y&X;D&HMUqmXgpjuqEi+{lC3KOH zvO@CYcVDlL`*+><^T+f2zK%G)KA+F`=s3UU`8r?Yt*tqhOOWHgULqW0eZL;d<>cV_ z{WFzAki*5t!rjx!%fj8t)5=BC)jfcdW6HlS{(j*1O}}61TKZeGjX61dIfBG(?YwL} z#a9c9ueDbYUn(qa@8;?4Y2|9+=4os9`~TNl?ewx6`F}4*D|fq*_Yy0X3d_hZS}GhO z{QvvnnI`=;G2{-HlBB`x_0d1xQg?O8QO%(qnsX|_U3TGla@yMO{j0i~EZ13=saQ19 zs$Ija!?JJad3blt(Dzf6o4IXBA@>?hy~l6iD$qc}-_4DDJ>QUV{rspT(fzE#WmAQm zXg&R`|LdE$!gbQGerum;`G!JI?(dQr%w|tgvUZDpipczx02yA zzneR2-cxJ-d%+jWdTD-fMhoBc3VJ4AyhiGHEy=i5$!~4ENn?ficteAG$>j5Eo?kM> z7DnTQ4dnDn}rvW|bKT@{wg)=4h9w>YDT)_R!DHJe^f&obA< z&S~zYiU8|hAwA=4Gn!hTB&h@&)bYvErN)GvQ)U*g1Q7XExe z`||h_;>fYBHaT{jE^V&f*>I(q1pUq&HXJ%l3bpR}?TtlL(%QMb;EOAj42_wmE_j5Z zzq$3!OU|crzZRu)brn%H*9C_gnx5>Nq>WIGaUR*m8H6hMy-ACe6_nob+v?2Z{d&fCCCX-~=>-`b8)5*cpIw-|{AK5+ssC)6O zHjDYp-|o(}hvYsxu5Ep{hf3y`n3})LpnXdgrfRSDVG~}QKY9FUIyv9nAbNXQ3eDdT zIVRaJgBt62+Gx2q>)e~!W0jXq^6zz8-qao^qUwx1Tz&JvfW@Muvu5_DrA+ zYE)2}t;}wYskAO|iKCrWaV^1W33Q;O^XJW1@pM%_F6o7lC7U$l;pF)>o|GbLoSyBA zqdQYSM>+?`Qq7c#7RfMI=Fy=zJT$b6Ottn~?b{tin;$<+N{opi<-v||%^}Y0WQC0` zx5X|};(RT-<9O6)zo0JXBUWs_Cz3{eb2sIOggVl?MWxC!<044-*h_UCi6klv5i7D< zn?@mbhP?H19w_q1U;cEyCWe;ry=+Tf5>Mqyp+|)>_fX`=-Q!)KTCrPxX}J=-2^24& z{?`oCXeys5D6X?@4-K{^j&HnR$r}E2-QB~VK)!tH7iN`3(U8*u{qgaIwENYZzhu6r zke8f_cJ)v!#hq5V6w0xZPI;7f&3GL~d0DesjV44<(Ljr4eM2k_E>PU);Ix(ro#^KH zaVwncn)j`Y?v0=)`?jd@B}UWxguH6?Jafj+OYOYJB8d&Nw5*z78WU5pslO?TWIh?M zTyx)q#ogkm;NB5Q8`bv9-Ea>cog?%M>T=gfan)XNciJMYQhO{+iP>0r{ygC9L`Hmj z+a zRO8@wT?r+YV>ZiD-`bLPzPR2uvE5)4Uoc1L7u4l@k}uE2i$V1Eokhd&+fXV!w(8E5 z;&7UoR;OH5CQV5j3vRBi3M9`ftM{dz3Z*p`vRPL@hS8hij1TuSELrmBl*BWf;WR6M zwab`ve;O0k*p{>;jGWJ>UD=~(#Rl$3HBS69oW8t|s+n2uN9&BObogQuX?#hyTgYGp z-HcW9zTo3Ucg#3)=O!AkU~awG)8*bYba|b0!5^N~JI!YHy&vv$@PNk8_0mF2!?p0i zj4BU0cEa&xy{H$}Z%)Z*H*}%bO=%|&>FBa^y4g7<_r2&@+P6tk(ymnPZ6Nh|)=m-= zOLe}Vs>_t-_8q)b=S4S~PKb(LcNxW9;0xvm{err*1(qwmS56>}{*TLZHIhjnFj&Fg zL7&}Q(#dmYNeuZVC?-k7M3a`7v&}q@M2d9LeyEsX%dD%$*>rx2rL2UWxus_!XhW#v z<(JbF$t?Ze_Q}5X?959+9{w$HRQ9qzTj)hNO}B3iwbnjNio+9lhh2Bk%JN5KW)Mz; zwUaHpG}TElS7=`Tl>kzEc6sRT^)PZ-6m2MSJ)CAASSOv|V#1V;@mqV{4JDDqA~O9+ zLG-|&D%yKZ2;I!w`qO`#1>=#oD-ul%BP-pD1FO{h$c`md8b}Az`PViF+qf*)(gtts z@AhHDUEC_8`ptKA9|3oPFPJ0r3+l4ly^M23VLH8OA5W7e=&;ig3l<-}8b@Qce=py< zHJ-RmnV9)mr%}P3v{K$$8x~jA@RaXk0%fjX3!TnHQ=7-td0z9+`S=m zyMni2+Wjaxr7>n1w|zV<ztwS z#*!sQI~K|Q8AUyF-I@gpg6Vyo`Z%$cXsWGK`1Qv-6Q(h3N~!qqNV0a85P$kOcyv!= z9|3oPFPJ0r3+gh?`a55WryFID^>?_t!H)e|74IRXrA|jWX1DY`Q)PquXKejxXh2gc zO+&dZ>#?(ACYx+j7iFn2PlmQ}sWa2unA1(oMr1Yq_A$PELzY;S^q{JXgRZFvL(Cgv--B0q3*xjU8QHJ)$^zKBG7MF|+1<1_|U38n1^(}d6 zK3>$0_UZ;icgwA3)9vE(g$GTj^zf7K-jz!!;dtT4#j94)W(NhX*6wu_KaaoUhVW+k zk$mEZkeL{r*c_s(OpD3aS*rZ#3>6xvJF6QIw1L21u&1$)fV;pK%n|wpbvfMQFme0M zC<^GYdnFeVPAv-S+>|0?NP#n~v_js2ndV#Z7R?{|e$1_F+9!n2pOPz&ipa*$?DB3$ zttrl|M|(zzcWyM9Z%Hy-xiN@*B>U_pf7wgk<_m>4eTbqBj*{Fbl>O+sLQ3bQ32M~y zre@;ubDm_kTKCHGbAGh?+`7Cqd;O`ua`vuC=dIb|{_ch6^aIG|M8(9mr=H|JQBtMx zPaj%dJ9mT0cr(@&-rBD>)t|zR%a%>K=uRg&jMg^0dDGezYnGayH(?)}CO^}@;78*^ 
zo4faQjO?esc^&u*_B8epa2NQ3IYPgnE(TZPxSpm(QK*2-`-)j{G}P2K5M*q@?hJTu zPRWa++3^B7Z*#(E+@l;{xx=xf!6UUpwA!34b`L6(IS@rrDmrZkuZ2?1nyMVdMf<78 zLvDNZqj;)Uy%Jhb6+#jxpYKZEs>bfiDn&@_3Z(3TF9Sk5f{1vkLZ|BllWz6B%AsI! z_O5QBVWv+2&5$1I-x3o<&OzaVYp(gzd0yq-Z{0?0VK~nlKhHzRv@H+4p>}l*H z;4bh5bA*0DUCPuI<8=lcsl)r4@;EM6cJ9fV!yjYySjorspc9qW%w9u(gr=xbmC~SB z)(Sh8y1m}fsLz5mxw%$enxV#yC_0E)R_n74)6_RgCl#5v%L)2=Re}t1e*M~dMx9Nz zY<;L3tj>Ok9BNobs~E3A^FXKEHcERg*ekYNmra(>+1fO)gweuU!Ak!1Y)Zti`Mad; z^gV{_(5X*GOuNv_cG3g^sm8#Rm_}uaBppFej!y#-6p_ zW-}w5VyWQ9t)FKd!pUNJf1lKzY`VXv-aB!5JoSB!IPzlJ$oyodUG%NzVcMCyBo@i~ zQ_v=Z(gdR*+9zEmBbgjTiY0qD|MA_BO}<J`9)7VGAUEmAm2>pV(jGeyOQ>8A5#w))cQH{}5vj6l{F$XI)!F`rk z`j!Y1nxqx07#KjG45oxjf61hx$dcY+7EOP>OKEog=}&5_jVnLy=4X4#yZ>-f_n|V& z2d{P>_al>+UtfA%_NQi*o=0ZZD$Ff?z0v)pzGP|N;wl#EOL=cwuIwG_OKZ*+9^WLi zjdhhbc6{0GM^^I{(p~@XB730@5!W~$Dt~pkkDJSkiO!h*)%mSIopV%gPY?1K)zi?8 z&_~b#;Je^;;4j$I*hj!!;0xvm{errrW^`Duu}!Bb-$V6FPusDvs@*P{PZLO9d*)e* zgeZ#Zo?dm`qnH$=8@4$dO{NF_UzW-G#gf6<#%YFb`lK}Tp7x9pe0`bG`1taTND|ob zl~+$Bj>dgBG01M95VjN9{rS|&wN^vNV{RiB_yp89a21tOGKcUhqE z>%ITsnuavHaqy~b!QdvE za%ZvUn>1~v(7i>>Jx`A1d)|9HTw!ul7!}@ z<)+$9aIIa;uOAkazdCDE@FP>^p6b4K(~cik;J?6UfS!hKgg$}}0N(|#1AoDu#y$e> z0$(sk=oi$b@x5eceXfs#Y5}QrSFLP(SeI7?*iBcO2_mALAwpzi0 zpQ7y6RgQJ7V*d2RP~Y}ceIN0mD&Ghq#A2O*gy=>(=e~QiXU%anQh;;6Y^e)r#CKaQBs-GkJNyzY*-{3bt zirxNxw~eDJqgS6_+En?FPVj)$Qdk$7BOA9;v|0xo*3(- zVc|g!L&rZ%+u%(L3xC;*S8QRbyjh8t_V`l7jm;Z-R(g*5b?~L&zrbgJo`!COK7tMa z-vzG&f5D!{J_7CnUoc1L7t}?(jYG@$c@ODX_&qUvS4M}z{If)C&ePAT&a>+AzLX%v zAKCW(7-b6GQ`+8jl4LUu)$-52Kr7wf284|0@@9U&>73K@$+@b@!Km*feOk7a*W+0w z4embNzg5JH`J_rOiRdbVR)jYn(ZKBppgMZ-Cgsemo>gxR1cPN(nUfLcp@=l`Eg{Nz- zPPJpvfrp~?Y75hA zorZXlNi;=jvff)w=8*M{Q)I*!FzFT0Xx0uN^}XTm!pDSP2VV;Q3w#FXY3N4iBj^C| zUGO^a7wl>5Bj7IZ1#^UcL0t-R<0RDQxKi-^sVBM~nJ}^Ps`F#Xm?U0&ZO!+xr3OK< zIq5Rn*$c<6(2PxnCrj4Q_rN#PW`@a9g5}w}(|puM^8onX@OR;3!moob1^)#;1N1a> zBlHn;0QfF=9rz3OH1-j27x;oXLcgFc9|{DTJXJ#ILF-IA>%>5s>t;Du-6e!Lnm;S) z_2`gK^6n)mjXre9H?~VcKZNcV^`88EEO^8}xB8y?K$8XhjN>=l7DO|LFY%l?=tn<1 z&PK+L?7wrR8v6DHM(owZX{)>If@x-1lUDXBpV6EIc>sKG_`C2i;n%^Jg8u@a0eTv` z5&8%^0DKp`4*Ugs8v6*i3w*&Gp+{ z0KN-e2mXRRjeP{%1-@X8&@ZSteU>F@VA)BZI-92-BFFqz!B+>@s0k*59ileRJ{beVhX%tMWu*EwDe5#&lV7MGDEBrk3@bz(YMczWyW&Tn^-iTZc`4Msw z3RCX2bBp_q=CH^ck*gs;LQaA_0KPZ;UHF*r>)=ble}T^cJw2)$ z|JFy)0pPpfb>J`9)7VGAUEmAm2>pV(G-XKNk0L-jl#(~YCZ@$s#nPui6jJ-^d5 zPb!QgCuSsWa|@>YO^-J0jgO=iT8+{0KN-e2mXRRjeP{%1-@X8&@ZUVJ6$og zj;1}d;_{o*9VOb##`;*1cyKgDiND}};wISKLr_}=h$;bX$DgD(aD1wI4xG;|~M5p)3fE_faI z3-&bj5pWmyf;mFJpf0-?2TTu=wRj50 z-@LcvwbKC|_ONk&c!S?a9$7JGfrD${=s6FZXFzU`d>T0{@>0l6WgJ-8 zkNzWRS}RA-OW|A)&cEQC2hKAfw?{sW92R*aay8^f$Vrd~!1so~3m+4H9egSHFYpB4G3!8O}@LToBH` z;G74}Ga$D|K8+j}c_VT)%d>I zr?HQKyTBLB5&8vnF}$Ux-Yw_*k293zU$gwikKy3>{Ub1PhSI^x$IHuUz!e_h!Xm^{PL9%!{9nR`LwC2Qt(f7EH=kfWtKW>sn*S?lwvMqV! z=XCch;XSV7e*8Z8eB9svaO&986R(UukL$P}&*Ss(`FI}haUJ*L_r>pz>-c=U$93HQ z&-&o={`vd-^L}kj&M|^x|8J+R|Ep(y{~i9X1J?iJ4E8@C|MwHU|NZe7BOmZT&-ecG k`G4=`|NgxA5|00^x3x9@829_Ou_J%3j{NJr?DxC>1Ij;2^Z)<= diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/export/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/export/__init__.py deleted file mode 100755 index 6cf757d8..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/export/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .wrappers import ONNXRuntimeEditing - -__all__ = ['ONNXRuntimeEditing'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/export/wrappers.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/export/wrappers.py deleted file mode 100755 index 5b4d55fb..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/export/wrappers.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings - -import numpy as np -import onnxruntime as ort -import torch -from torch import nn - -from mmedit.models import BaseMattor, BasicRestorer, build_model - - -def inference_with_session(sess, io_binding, output_names, input_tensor): - device_type = input_tensor.device.type - device_id = input_tensor.device.index - device_id = 0 if device_id is None else device_id - io_binding.bind_input( - name='input', - device_type=device_type, - device_id=device_id, - element_type=np.float32, - shape=input_tensor.shape, - buffer_ptr=input_tensor.data_ptr()) - for name in output_names: - io_binding.bind_output(name) - sess.run_with_iobinding(io_binding) - pred = io_binding.copy_outputs_to_cpu() - return pred - - -class ONNXRuntimeMattor(nn.Module): - - def __init__(self, sess, io_binding, output_names, base_model): - super(ONNXRuntimeMattor, self).__init__() - self.sess = sess - self.io_binding = io_binding - self.output_names = output_names - self.base_model = base_model - - def forward(self, - merged, - trimap, - meta, - test_mode=False, - save_image=False, - save_path=None, - iteration=None): - input_tensor = torch.cat((merged, trimap), 1).contiguous() - pred_alpha = inference_with_session(self.sess, self.io_binding, - self.output_names, input_tensor)[0] - - pred_alpha = pred_alpha.squeeze() - pred_alpha = self.base_model.restore_shape(pred_alpha, meta) - eval_result = self.base_model.evaluate(pred_alpha, meta) - - if save_image: - self.base_model.save_image(pred_alpha, meta, save_path, iteration) - - return {'pred_alpha': pred_alpha, 'eval_result': eval_result} - - -class RestorerGenerator(nn.Module): - - def __init__(self, sess, io_binding, output_names): - super(RestorerGenerator, self).__init__() - self.sess = sess - self.io_binding = io_binding - self.output_names = output_names - - def forward(self, x): - pred = inference_with_session(self.sess, self.io_binding, - self.output_names, x)[0] - pred = torch.from_numpy(pred) - return pred - - -class ONNXRuntimeRestorer(nn.Module): - - def __init__(self, sess, io_binding, output_names, base_model): - super(ONNXRuntimeRestorer, self).__init__() - self.sess = sess - self.io_binding = io_binding - self.output_names = output_names - self.base_model = base_model - restorer_generator = RestorerGenerator(self.sess, self.io_binding, - self.output_names) - base_model.generator = restorer_generator - - def forward(self, lq, gt=None, test_mode=False, **kwargs): - return self.base_model(lq, gt=gt, test_mode=test_mode, **kwargs) - - -class ONNXRuntimeEditing(nn.Module): - - def __init__(self, onnx_file, cfg, device_id): - super(ONNXRuntimeEditing, self).__init__() - ort_custom_op_path = '' - try: - from mmcv.ops import get_onnxruntime_op_path - ort_custom_op_path = get_onnxruntime_op_path() - except (ImportError, ModuleNotFoundError): - warnings.warn('If input model has custom op from mmcv, \ - you may have to build mmcv with ONNXRuntime from source.') - session_options = ort.SessionOptions() - # register custom op for onnxruntime - if osp.exists(ort_custom_op_path): - 
session_options.register_custom_ops_library(ort_custom_op_path) - sess = ort.InferenceSession(onnx_file, session_options) - providers = ['CPUExecutionProvider'] - options = [{}] - is_cuda_available = ort.get_device() == 'GPU' - if is_cuda_available: - providers.insert(0, 'CUDAExecutionProvider') - options.insert(0, {'device_id': device_id}) - - sess.set_providers(providers, options) - - self.sess = sess - self.device_id = device_id - self.io_binding = sess.io_binding() - self.output_names = [_.name for _ in sess.get_outputs()] - - base_model = build_model( - cfg.model, train_cfg=None, test_cfg=cfg.test_cfg) - - if isinstance(base_model, BaseMattor): - WrapperClass = ONNXRuntimeMattor - elif isinstance(base_model, BasicRestorer): - WrapperClass = ONNXRuntimeRestorer - self.wrapper = WrapperClass(self.sess, self.io_binding, - self.output_names, base_model) - - def forward(self, **kwargs): - return self.wrapper(**kwargs) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/__init__.py deleted file mode 100755 index 575c43b3..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .ema import ExponentialMovingAverageHook -from .visualization import VisualizationHook - -__all__ = ['VisualizationHook', 'ExponentialMovingAverageHook'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/ema.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/ema.py deleted file mode 100755 index 0e7f0b2e..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/ema.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from copy import deepcopy -from functools import partial - -import mmcv -import torch -from mmcv.parallel import is_module_wrapper -from mmcv.runner import HOOKS, Hook - - -@HOOKS.register_module() -class ExponentialMovingAverageHook(Hook): - """Exponential Moving Average Hook. - - Exponential moving average is a trick that widely used in current GAN - literature, e.g., PGGAN, StyleGAN, and BigGAN. This general idea of it is - maintaining a model with the same architecture, but its parameters are - updated as a moving average of the trained weights in the original model. - In general, the model with moving averaged weights achieves better - performance. - - Args: - module_keys (str | tuple[str]): The name of the ema model. Note that we - require these keys are followed by '_ema' so that we can easily - find the original model by discarding the last four characters. - interp_mode (str, optional): Mode of the interpolation method. - Defaults to 'lerp'. - interp_cfg (dict | None, optional): Set arguments of the interpolation - function. Defaults to None. - interval (int, optional): Evaluation interval (by iterations). - Default: -1. - start_iter (int, optional): Start iteration for ema. If the start - iteration is not reached, the weights of ema model will maintain - the same as the original one. Otherwise, its parameters are updated - as a moving average of the trained weights in the original model. - Default: 0. 
- """ - - def __init__(self, - module_keys, - interp_mode='lerp', - interp_cfg=None, - interval=-1, - start_iter=0): - super().__init__() - assert isinstance(module_keys, str) or mmcv.is_tuple_of( - module_keys, str) - self.module_keys = (module_keys, ) if isinstance(module_keys, - str) else module_keys - # sanity check for the format of module keys - for k in self.module_keys: - assert k.endswith( - '_ema'), 'You should give keys that end with "_ema".' - self.interp_mode = interp_mode - self.interp_cfg = dict() if interp_cfg is None else deepcopy( - interp_cfg) - self.interval = interval - self.start_iter = start_iter - - assert hasattr( - self, interp_mode - ), f'Currently, we do not support {self.interp_mode} for EMA.' - self.interp_func = partial( - getattr(self, interp_mode), **self.interp_cfg) - - @staticmethod - def lerp(a, b, momentum=0.999, momentum_nontrainable=0., trainable=True): - m = momentum if trainable else momentum_nontrainable - return a + (b - a) * m - - def every_n_iters(self, runner, n): - if runner.iter < self.start_iter: - return True - return (runner.iter + 1 - self.start_iter) % n == 0 if n > 0 else False - - @torch.no_grad() - def after_train_iter(self, runner): - if not self.every_n_iters(runner, self.interval): - return - - model = runner.model.module if is_module_wrapper( - runner.model) else runner.model - - for key in self.module_keys: - # get current ema states - ema_net = getattr(model, key) - states_ema = ema_net.state_dict(keep_vars=False) - # get currently original states - net = getattr(model, key[:-4]) - states_orig = net.state_dict(keep_vars=True) - - for k, v in states_orig.items(): - if runner.iter < self.start_iter: - states_ema[k].data.copy_(v.data) - else: - states_ema[k] = self.interp_func( - v, states_ema[k], trainable=v.requires_grad).detach() - ema_net.load_state_dict(states_ema, strict=True) - - def before_run(self, runner): - model = runner.model.module if is_module_wrapper( - runner.model) else runner.model - # sanity check for ema model - for k in self.module_keys: - if not hasattr(model, k) and not hasattr(model, k[:-4]): - raise RuntimeError( - f'Cannot find both {k[:-4]} and {k} network for EMA hook.') - if not hasattr(model, k) and hasattr(model, k[:-4]): - setattr(model, k, deepcopy(getattr(model, k[:-4]))) - warnings.warn( - f'We do not suggest construct and initialize EMA model {k}' - ' in hook. You may explicitly define it by yourself.') diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/visualization.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/visualization.py deleted file mode 100755 index 63c8ee7d..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/hooks/visualization.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp - -import mmcv -import torch -from mmcv.runner import HOOKS, Hook -from mmcv.runner.dist_utils import master_only -from torchvision.utils import save_image - - -@HOOKS.register_module() -class VisualizationHook(Hook): - """Visualization hook. - - In this hook, we use the official api `save_image` in torchvision to save - the visualization results. - - Args: - output_dir (str): The file path to store visualizations. - res_name_list (str): The list contains the name of results in outputs - dict. The results in outputs dict must be a torch.Tensor with shape - (n, c, h, w). - interval (int): The interval of calling this hook. If set to -1, - the visualization hook will not be called. Default: -1. 
- filename_tmpl (str): Format string used to save images. The output file - name will be formatted as this args. Default: 'iter_{}.png'. - rerange (bool): Whether to rerange the output value from [-1, 1] to - [0, 1]. We highly recommend users should preprocess the - visualization results on their own. Here, we just provide a simple - interface. Default: True. - bgr2rgb (bool): Whether to reformat the channel dimension from BGR to - RGB. The final image we will save is following RGB style. - Default: True. - nrow (int): The number of samples in a row. Default: 1. - padding (int): The number of padding pixels between each samples. - Default: 4. - """ - - def __init__(self, - output_dir, - res_name_list, - interval=-1, - filename_tmpl='iter_{}.png', - rerange=True, - bgr2rgb=True, - nrow=1, - padding=4): - assert mmcv.is_list_of(res_name_list, str) - self.output_dir = output_dir - self.res_name_list = res_name_list - self.interval = interval - self.filename_tmpl = filename_tmpl - self.bgr2rgb = bgr2rgb - self.rerange = rerange - self.nrow = nrow - self.padding = padding - - mmcv.mkdir_or_exist(self.output_dir) - - @master_only - def after_train_iter(self, runner): - """The behavior after each train iteration. - - Args: - runner (object): The runner. - """ - if not self.every_n_iters(runner, self.interval): - return - results = runner.outputs['results'] - - filename = self.filename_tmpl.format(runner.iter + 1) - - img_list = [x for k, x in results.items() if k in self.res_name_list] - img_cat = torch.cat(img_list, dim=3).detach() - if self.rerange: - img_cat = ((img_cat + 1) / 2) - if self.bgr2rgb: - img_cat = img_cat[:, [2, 1, 0], ...] - img_cat = img_cat.clamp_(0, 1) - save_image( - img_cat, - osp.join(self.output_dir, filename), - nrow=self.nrow, - padding=self.padding) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/mask.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/mask.py deleted file mode 100755 index 51486111..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/mask.py +++ /dev/null @@ -1,316 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import cv2 -import mmcv -import numpy as np -from PIL import Image, ImageDraw - - -def random_bbox(img_shape, max_bbox_shape, max_bbox_delta=40, min_margin=20): - """Generate a random bbox for the mask on a given image. - - In our implementation, the max value cannot be obtained since we use - `np.random.randint`. And this may be different with other standard scripts - in the community. - - Args: - img_shape (tuple[int]): The size of a image, in the form of (h, w). - max_bbox_shape (int | tuple[int]): Maximum shape of the mask box, - in the form of (h, w). If it is an integer, the mask box will be - square. - max_bbox_delta (int | tuple[int]): Maximum delta of the mask box, - in the form of (delta_h, delta_w). If it is an integer, delta_h - and delta_w will be the same. Mask shape will be randomly sampled - from the range of `max_bbox_shape - max_bbox_delta` and - `max_bbox_shape`. Default: (40, 40). - min_margin (int | tuple[int]): The minimum margin size from the - edges of mask box to the image boarder, in the form of - (margin_h, margin_w). If it is an integer, margin_h and margin_w - will be the same. Default: (20, 20). - - Returns: - tuple[int]: The generated box, (top, left, h, w). 
- """ - if not isinstance(max_bbox_shape, tuple): - max_bbox_shape = (max_bbox_shape, max_bbox_shape) - if not isinstance(max_bbox_delta, tuple): - max_bbox_delta = (max_bbox_delta, max_bbox_delta) - if not isinstance(min_margin, tuple): - min_margin = (min_margin, min_margin) - assert mmcv.is_tuple_of(max_bbox_shape, int) - assert mmcv.is_tuple_of(max_bbox_delta, int) - assert mmcv.is_tuple_of(min_margin, int) - - img_h, img_w = img_shape[:2] - max_mask_h, max_mask_w = max_bbox_shape - max_delta_h, max_delta_w = max_bbox_delta - margin_h, margin_w = min_margin - - if max_mask_h > img_h or max_mask_w > img_w: - raise ValueError(f'mask shape {max_bbox_shape} should be smaller than ' - f'image shape {img_shape}') - if (max_delta_h // 2 * 2 >= max_mask_h - or max_delta_w // 2 * 2 >= max_mask_w): - raise ValueError(f'mask delta {max_bbox_delta} should be smaller than' - f'mask shape {max_bbox_shape}') - if img_h - max_mask_h < 2 * margin_h or img_w - max_mask_w < 2 * margin_w: - raise ValueError(f'Margin {min_margin} cannot be satisfied for img' - f'shape {img_shape} and mask shape {max_bbox_shape}') - - # get the max value of (top, left) - max_top = img_h - margin_h - max_mask_h - max_left = img_w - margin_w - max_mask_w - # randomly select a (top, left) - top = np.random.randint(margin_h, max_top) - left = np.random.randint(margin_w, max_left) - # randomly shrink the shape of mask box according to `max_bbox_delta` - # the center of box is fixed - delta_top = np.random.randint(0, max_delta_h // 2 + 1) - delta_left = np.random.randint(0, max_delta_w // 2 + 1) - top = top + delta_top - left = left + delta_left - h = max_mask_h - delta_top - w = max_mask_w - delta_left - return (top, left, h, w) - - -def bbox2mask(img_shape, bbox, dtype='uint8'): - """Generate mask in ndarray from bbox. - - The returned mask has the shape of (h, w, 1). '1' indicates the - hole and '0' indicates the valid regions. - - We prefer to use `uint8` as the data type of masks, which may be different - from other codes in the community. - - Args: - img_shape (tuple[int]): The size of the image. - bbox (tuple[int]): Configuration tuple, (top, left, height, width) - dtype (str): Indicate the data type of returned masks. Default: 'uint8' - - Return: - numpy.ndarray: Mask in the shape of (h, w, 1). - """ - - height, width = img_shape[:2] - - mask = np.zeros((height, width, 1), dtype=dtype) - mask[bbox[0]:bbox[0] + bbox[2], bbox[1]:bbox[1] + bbox[3], :] = 1 - - return mask - - -def brush_stroke_mask(img_shape, - num_vertices=(4, 12), - mean_angle=2 * math.pi / 5, - angle_range=2 * math.pi / 15, - brush_width=(12, 40), - max_loops=4, - dtype='uint8'): - """Generate free-form mask. - - The method of generating free-form mask is in the following paper: - Free-Form Image Inpainting with Gated Convolution. - - When you set the config of this type of mask. You may note the usage of - `np.random.randint` and the range of `np.random.randint` is [left, right). - - We prefer to use `uint8` as the data type of masks, which may be different - from other codes in the community. - - TODO: Rewrite the implementation of this function. - - Args: - img_shape (tuple[int]): Size of the image. - num_vertices (int | tuple[int]): Min and max number of vertices. If - only give an integer, we will fix the number of vertices. - Default: (4, 12). - mean_angle (float): Mean value of the angle in each vertex. The angle - is measured in radians. Default: 2 * math.pi / 5. - angle_range (float): Range of the random angle. - Default: 2 * math.pi / 15. 
- brush_width (int | tuple[int]): (min_width, max_width). If only give - an integer, we will fix the width of brush. Default: (12, 40). - max_loops (int): The max number of for loops of drawing strokes. - dtype (str): Indicate the data type of returned masks. - Default: 'uint8'. - - Returns: - numpy.ndarray: Mask in the shape of (h, w, 1). - """ - - img_h, img_w = img_shape[:2] - if isinstance(num_vertices, int): - min_num_vertices, max_num_vertices = num_vertices, num_vertices + 1 - elif isinstance(num_vertices, tuple): - min_num_vertices, max_num_vertices = num_vertices - else: - raise TypeError('The type of num_vertices should be int' - f'or tuple[int], but got type: {num_vertices}') - - if isinstance(brush_width, tuple): - min_width, max_width = brush_width - elif isinstance(brush_width, int): - min_width, max_width = brush_width, brush_width + 1 - else: - raise TypeError('The type of brush_width should be int' - f'or tuple[int], but got type: {brush_width}') - - average_radius = math.sqrt(img_h * img_h + img_w * img_w) / 8 - mask = Image.new('L', (img_w, img_h), 0) - - loop_num = np.random.randint(1, max_loops) - num_vertex_list = np.random.randint( - min_num_vertices, max_num_vertices, size=loop_num) - angle_min_list = np.random.uniform(0, angle_range, size=loop_num) - angle_max_list = np.random.uniform(0, angle_range, size=loop_num) - - for loop_n in range(loop_num): - num_vertex = num_vertex_list[loop_n] - angle_min = mean_angle - angle_min_list[loop_n] - angle_max = mean_angle + angle_max_list[loop_n] - angles = [] - vertex = [] - - # set random angle on each vertex - angles = np.random.uniform(angle_min, angle_max, size=num_vertex) - reverse_mask = (np.arange(num_vertex, dtype=np.float32) % 2) == 0 - angles[reverse_mask] = 2 * math.pi - angles[reverse_mask] - - h, w = mask.size - - # set random vertices - vertex.append((np.random.randint(0, w), np.random.randint(0, h))) - r_list = np.random.normal( - loc=average_radius, scale=average_radius // 2, size=num_vertex) - for i in range(num_vertex): - r = np.clip(r_list[i], 0, 2 * average_radius) - new_x = np.clip(vertex[-1][0] + r * math.cos(angles[i]), 0, w) - new_y = np.clip(vertex[-1][1] + r * math.sin(angles[i]), 0, h) - vertex.append((int(new_x), int(new_y))) - # draw brush strokes according to the vertex and angle list - draw = ImageDraw.Draw(mask) - width = np.random.randint(min_width, max_width) - draw.line(vertex, fill=1, width=width) - for v in vertex: - draw.ellipse((v[0] - width // 2, v[1] - width // 2, - v[0] + width // 2, v[1] + width // 2), - fill=1) - # randomly flip the mask - if np.random.normal() > 0: - mask.transpose(Image.FLIP_LEFT_RIGHT) - if np.random.normal() > 0: - mask.transpose(Image.FLIP_TOP_BOTTOM) - mask = np.array(mask).astype(dtype=getattr(np, dtype)) - mask = mask[:, :, None] - return mask - - -def random_irregular_mask(img_shape, - num_vertices=(4, 8), - max_angle=4, - length_range=(10, 100), - brush_width=(10, 40), - dtype='uint8'): - """Generate random irregular masks. - - This is a modified version of free-form mask implemented in - 'brush_stroke_mask'. - - We prefer to use `uint8` as the data type of masks, which may be different - from other codes in the community. - - TODO: Rewrite the implementation of this function. - - Args: - img_shape (tuple[int]): Size of the image. - num_vertices (int | tuple[int]): Min and max number of vertices. If - only give an integer, we will fix the number of vertices. - Default: (4, 8). - max_angle (float): Max value of angle at each vertex. Default 4.0. 
- length_range (int | tuple[int]): (min_length, max_length). If only give - an integer, we will fix the length of brush. Default: (10, 100). - brush_width (int | tuple[int]): (min_width, max_width). If only give - an integer, we will fix the width of brush. Default: (10, 40). - dtype (str): Indicate the data type of returned masks. Default: 'uint8' - - Returns: - numpy.ndarray: Mask in the shape of (h, w, 1). - """ - - h, w = img_shape[:2] - - mask = np.zeros((h, w), dtype=dtype) - if isinstance(length_range, int): - min_length, max_length = length_range, length_range + 1 - elif isinstance(length_range, tuple): - min_length, max_length = length_range - else: - raise TypeError('The type of length_range should be int' - f'or tuple[int], but got type: {length_range}') - if isinstance(num_vertices, int): - min_num_vertices, max_num_vertices = num_vertices, num_vertices + 1 - elif isinstance(num_vertices, tuple): - min_num_vertices, max_num_vertices = num_vertices - else: - raise TypeError('The type of num_vertices should be int' - f'or tuple[int], but got type: {num_vertices}') - - if isinstance(brush_width, int): - min_brush_width, max_brush_width = brush_width, brush_width + 1 - elif isinstance(brush_width, tuple): - min_brush_width, max_brush_width = brush_width - else: - raise TypeError('The type of brush_width should be int' - f'or tuple[int], but got type: {brush_width}') - - num_v = np.random.randint(min_num_vertices, max_num_vertices) - - for i in range(num_v): - start_x = np.random.randint(w) - start_y = np.random.randint(h) - # from the start point, randomly setlect n \in [1, 6] directions. - direction_num = np.random.randint(1, 6) - angle_list = np.random.randint(0, max_angle, size=direction_num) - length_list = np.random.randint( - min_length, max_length, size=direction_num) - brush_width_list = np.random.randint( - min_brush_width, max_brush_width, size=direction_num) - for direct_n in range(direction_num): - angle = 0.01 + angle_list[direct_n] - if i % 2 == 0: - angle = 2 * math.pi - angle - length = length_list[direct_n] - brush_w = brush_width_list[direct_n] - # compute end point according to the random angle - end_x = (start_x + length * np.sin(angle)).astype(np.int32) - end_y = (start_y + length * np.cos(angle)).astype(np.int32) - - cv2.line(mask, (start_y, start_x), (end_y, end_x), 1, brush_w) - start_x, start_y = end_x, end_y - mask = np.expand_dims(mask, axis=2) - - return mask - - -def get_irregular_mask(img_shape, area_ratio_range=(0.15, 0.5), **kwargs): - """Get irregular mask with the constraints in mask ratio - - Args: - img_shape (tuple[int]): Size of the image. - area_ratio_range (tuple(float)): Contain the minimum and maximum area - ratio. Default: (0.15, 0.5). - - Returns: - numpy.ndarray: Mask in the shape of (h, w, 1). - """ - - mask = random_irregular_mask(img_shape, **kwargs) - min_ratio, max_ratio = area_ratio_range - - while not min_ratio < (np.sum(mask) / - (img_shape[0] * img_shape[1])) < max_ratio: - mask = random_irregular_mask(img_shape, **kwargs) - - return mask diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/misc.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/misc.py deleted file mode 100755 index 21c5d477..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/misc.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math - -import numpy as np -import torch -from torchvision.utils import make_grid - - -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - """Convert torch Tensors into image numpy arrays. - - After clamping to (min, max), image values will be normalized to [0, 1]. - - For different tensor shapes, this function will have different behaviors: - - 1. 4D mini-batch Tensor of shape (N x 3/1 x H x W): - Use `make_grid` to stitch images in the batch dimension, and then - convert it to numpy array. - 2. 3D Tensor of shape (3/1 x H x W) and 2D Tensor of shape (H x W): - Directly change to numpy array. - - Note that the image channel in input tensors should be RGB order. This - function will convert it to cv2 convention, i.e., (H x W x C) with BGR - order. - - Args: - tensor (Tensor | list[Tensor]): Input tensors. - out_type (numpy type): Output types. If ``np.uint8``, transform outputs - to uint8 type with range [0, 255]; otherwise, float type with - range [0, 1]. Default: ``np.uint8``. - min_max (tuple): min and max values for clamp. - - Returns: - (Tensor | list[Tensor]): 3D ndarray of shape (H x W x C) or 2D ndarray - of shape (H x W). - """ - if not (torch.is_tensor(tensor) or - (isinstance(tensor, list) - and all(torch.is_tensor(t) for t in tensor))): - raise TypeError( - f'tensor or list of tensors expected, got {type(tensor)}') - - if torch.is_tensor(tensor): - tensor = [tensor] - result = [] - for _tensor in tensor: - # Squeeze two times so that: - # 1. (1, 1, h, w) -> (h, w) or - # 3. (1, 3, h, w) -> (3, h, w) or - # 2. (n>1, 3/1, h, w) -> (n>1, 3/1, h, w) - _tensor = _tensor.squeeze(0).squeeze(0) - _tensor = _tensor.float().detach().cpu().clamp_(*min_max) - _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0]) - n_dim = _tensor.dim() - if n_dim == 4: - img_np = make_grid( - _tensor, nrow=int(math.sqrt(_tensor.size(0))), - normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) - elif n_dim == 3: - img_np = _tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) - elif n_dim == 2: - img_np = _tensor.numpy() - else: - raise ValueError('Only support 4D, 3D or 2D tensor. ' - f'But received with dimension: {n_dim}') - if out_type == np.uint8: - # Unlike MATLAB, numpy.unit8() WILL NOT round by default. - img_np = (img_np * 255.0).round() - img_np = img_np.astype(out_type) - result.append(img_np) - result = result[0] if len(result) == 1 else result - return result diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/optimizer/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/optimizer/__init__.py deleted file mode 100755 index e1c477dd..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/optimizer/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import build_optimizers - -__all__ = ['build_optimizers'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/optimizer/builder.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/optimizer/builder.py deleted file mode 100755 index 2edf94da..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/optimizer/builder.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner import build_optimizer - - -def build_optimizers(model, cfgs): - """Build multiple optimizers from configs. - - If `cfgs` contains several dicts for optimizers, then a dict for each - constructed optimizers will be returned. 
- If `cfgs` only contains one optimizer config, the constructed optimizer - itself will be returned. - - For example, - - 1) Multiple optimizer configs: - - .. code-block:: python - - optimizer_cfg = dict( - model1=dict(type='SGD', lr=lr), - model2=dict(type='SGD', lr=lr)) - - The return dict is - ``dict('model1': torch.optim.Optimizer, 'model2': torch.optim.Optimizer)`` - - 2) Single optimizer config: - - .. code-block:: python - - optimizer_cfg = dict(type='SGD', lr=lr) - - The return is ``torch.optim.Optimizer``. - - Args: - model (:obj:`nn.Module`): The model with parameters to be optimized. - cfgs (dict): The config dict of the optimizer. - - Returns: - dict[:obj:`torch.optim.Optimizer`] | :obj:`torch.optim.Optimizer`: - The initialized optimizers. - """ - optimizers = {} - if hasattr(model, 'module'): - model = model.module - # determine whether 'cfgs' has several dicts for optimizers - is_dict_of_dict = True - for key, cfg in cfgs.items(): - if not isinstance(cfg, dict): - is_dict_of_dict = False - - if is_dict_of_dict: - for key, cfg in cfgs.items(): - cfg_ = cfg.copy() - module = getattr(model, key) - optimizers[key] = build_optimizer(module, cfg_) - return optimizers - - return build_optimizer(model, cfgs) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/scheduler/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/scheduler/__init__.py deleted file mode 100755 index af045820..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/scheduler/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .lr_updater import LinearLrUpdaterHook, ReduceLrUpdaterHook - -__all__ = ['LinearLrUpdaterHook', 'ReduceLrUpdaterHook'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/scheduler/lr_updater.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/scheduler/lr_updater.py deleted file mode 100755 index 0677373c..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/scheduler/lr_updater.py +++ /dev/null @@ -1,304 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.runner import HOOKS, LrUpdaterHook - - -@HOOKS.register_module() -class LinearLrUpdaterHook(LrUpdaterHook): - """Linear learning rate scheduler for image generation. - - In the beginning, the learning rate is 'base_lr' defined in mmcv. - We give a target learning rate 'target_lr' and a start point 'start' - (iteration / epoch). Before 'start', we fix learning rate as 'base_lr'; - After 'start', we linearly update learning rate to 'target_lr'. - - Args: - target_lr (float): The target learning rate. Default: 0. - start (int): The start point (iteration / epoch, specified by args - 'by_epoch' in its parent class in mmcv) to update learning rate. - Default: 0. - interval (int): The interval to update the learning rate. Default: 1. - """ - - def __init__(self, target_lr=0, start=0, interval=1, **kwargs): - super().__init__(**kwargs) - self.target_lr = target_lr - self.start = start - self.interval = interval - - def get_lr(self, runner, base_lr): - """Calculates the learning rate. - - Args: - runner (object): The passed runner. - base_lr (float): Base learning rate. - - Returns: - float: Current learning rate. - """ - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - assert max_progress >= self.start - - if max_progress == self.start: - return base_lr - - # Before 'start', fix lr; After 'start', linearly update lr. 
- factor = (max(0, progress - self.start) // self.interval) / ( - (max_progress - self.start) // self.interval) - return base_lr + (self.target_lr - base_lr) * factor - - -@HOOKS.register_module() -class ReduceLrUpdaterHook(LrUpdaterHook): - """ReduceLROnPlateau Scheduler. - - Reduce learning rate when a metric has stopped improving. This scheduler - reads a metrics quantity and if no improvement is seen for a 'patience' - number of epochs, the learning rate is reduced. - - Args: - val_metric (str, optional): Metrics to be evaluated. If val_metric is - None, the metrics will be loss value. Default: None. - mode (str, optional): One of `min`, `max`. In `min` mode, lr will - be reduced when the quantity monitored has stopped - decreasing; in `max` mode it will be reduced when the - quantity monitored has stopped increasing. Default: 'min'. - factor (float, optional): Factor by which the learning rate will be - reduced. new_lr = lr * factor. Default: 0.1. - patience (int, optional): Number of epochs with no improvement after - which learning rate will be reduced. For example, if - `patience = 2`, then we will ignore the first 2 epochs - with no improvement, and will only decrease the LR after the - 3rd epoch if the loss still hasn't improved then. - Default: 10. - threshold (float, optional): Threshold for measuring the new optimum, - to only focus on significant changes. Default: 1e-4. - threshold_mode (str, optional): One of `rel`, `abs`. In `rel` mode, - dynamic_threshold = best * ( 1 + threshold ) in 'max' - mode or best * ( 1 - threshold ) in `min` mode. - In `abs` mode, dynamic_threshold = best + threshold in - `max` mode or best - threshold in `min` mode. Default: 'rel'. - cooldown (int, optional): Number of epochs to wait before resuming - normal operation after lr has been reduced. Default: 0. - min_lr (float, optional): Minimum LR value to keep. If LR after decay - is lower than `min_lr`, it will be clipped to this value. - Default: 0. - eps (float, optional): Minimal decay applied to lr. If the difference - between new and old lr is smaller than eps, the update is - ignored. Default: 1e-8. - verbose (bool): If ``True``, prints a message to stdout for - each update. Default: ``False``. - epoch_base_valid (None | Bool): Whether use epoch base valid. - If `None`, follow `by_epoch` (inherited from `LrUpdaterHook`). - Default: None. 
- """ - - def __init__(self, - val_metric: str = None, - mode: str = 'min', - factor: float = 0.1, - patience: int = 10, - threshold: float = 1e-4, - threshold_mode: str = 'rel', - cooldown: int = 0, - min_lr: float = 0., - eps: float = 1e-8, - verbose: bool = False, - epoch_base_valid=None, - **kwargs): - - self.val_metric = val_metric - - if mode not in ['min', 'max']: - raise ValueError( - 'mode must be one of "min" or "max", instead got {mode}') - self.mode = mode - - if factor >= 1.0 or factor < 0: - raise ValueError('Factor should be < 1.0 and >=0') - self.factor = factor - - self.patience = patience - self.threshold = threshold - - if threshold_mode not in ['rel', 'abs']: - raise ValueError('thresh_mode must be one of "rel" or "abs",' - f'instead got {threshold_mode}') - self.threshold_mode = threshold_mode - - self.cooldown = cooldown - self.cooldown_counter = 0 - self.best = None - self.num_bad_epochs = None - self.mode_worse = None # the worse value for the chosen mode - self.min_lr = min_lr - self.eps = eps - self.verbose = verbose - self.last_epoch = 0 - self._init_is_better(self.mode) - self._reset() - - super().__init__(**kwargs) - if epoch_base_valid is None: - self.epoch_base_valid = self.by_epoch - else: - self.epoch_base_valid = epoch_base_valid - - def get_lr(self, regular_lr, optimizer_name): - if self.num_bad_epochs > self.patience: - self.cooldown_counter = self.cooldown - self.num_bad_epochs = 0 - if regular_lr - regular_lr * self.factor > self.eps: - new_lr = max(regular_lr * self.factor, self.min_lr) - if self.verbose: - print(f'Reducing learning rate of {optimizer_name} from ' - f'{regular_lr:.4e} to {new_lr:.4e}.') - else: - new_lr = regular_lr - return new_lr - else: - return regular_lr - - def get_regular_lr(self, runner): - if not self.regular_lr: - self.regular_lr = self.base_lr - if isinstance(runner.optimizer, dict): - lr_groups = {} - for k in runner.optimizer.keys(): - _lr_group = [ - self.get_lr(_regular_lr, k) - for _regular_lr in self.regular_lr[k] - ] - lr_groups.update({k: _lr_group}) - else: - lr_groups = [ - self.get_lr(_regular_lr, 'generator') - for _regular_lr in self.regular_lr - ] - self.regular_lr = lr_groups - return lr_groups - - def _init_is_better(self, mode): - if mode == 'min': - self.mode_worse = float('inf') - else: - self.mode_worse = float('-inf') - - def _reset(self): - self.best = self.mode_worse - self.cooldown_counter = 0 - self.num_bad_epochs = 0 - - def is_better(self, a, best): - if self.mode == 'min' and self.threshold_mode == 'rel': - rel_epsilon = 1. - self.threshold - return a < best * rel_epsilon - elif self.mode == 'min' and self.threshold_mode == 'abs': - return a < best - self.threshold - elif self.mode == 'max' and self.threshold_mode == 'rel': - rel_epsilon = 1. + self.threshold - return a > best * rel_epsilon - else: - return a > best + self.threshold - - @property - def in_cooldown(self): - return self.cooldown_counter > 0 - - def after_train_epoch(self, runner): - if not self.by_epoch: - return - cur_epoch = runner.epoch - if self.warmup is not None and self.warmup_by_epoch: - if cur_epoch <= self.warmup_epochs: - return - # If val_metric is None, we check training loss to reduce learning - # rate. 
- if self.val_metric is None: - current = runner.outputs['log_vars']['loss'] - if self.is_better(current, self.best): - self.best = current - self.num_bad_epochs = 0 - else: - self.num_bad_epochs += 1 - - if self.in_cooldown: - self.cooldown_counter -= 1 - self.num_bad_epochs = 0 - print(f'train_epoch--current {current:.6f} best {self.best:.6f}, ' - f'num_bad_epochs {self.num_bad_epochs}, ' - f'cooldown {self.in_cooldown} {self.cooldown_counter}') - - def after_train_iter(self, runner): - if self.by_epoch: - return - cur_iter = runner.iter - if self.warmup_epochs is not None and cur_iter <= self.warmup_iters: - return - # If val_metric is None, we check training loss to reduce learning - # rate. - if self.val_metric is None: - current = runner.outputs['log_vars']['loss'] - if self.is_better(current, self.best): - self.best = current - self.num_bad_epochs = 0 - else: - self.num_bad_epochs += 1 - - if self.in_cooldown: - self.cooldown_counter -= 1 - self.num_bad_epochs = 0 - print(f'train_iter--current {current:.6f} best {self.best:.6f}, ' - f'num_bad_epochs {self.num_bad_epochs}, ' - f'cooldown {self.in_cooldown} {self.cooldown_counter}') - - def after_val_epoch(self, runner): - if not self.by_epoch and not self.epoch_base_valid: - return - cur_epoch = runner.epoch - if self.warmup is not None and self.warmup_by_epoch: - if cur_epoch <= self.warmup_epochs: - return - # If val_metric is not None, we check its value to reduce learning - # rate. - if self.val_metric is not None: - current = runner.log_buffer.output[self.val_metric] - if self.is_better(current, self.best): - self.best = current - self.num_bad_epochs = 0 - else: - self.num_bad_epochs += 1 - - if self.in_cooldown: - self.cooldown_counter -= 1 - self.num_bad_epochs = 0 - print(f'val_epoch--current {current:.6f} best {self.best:.6f}, ' - f'num_bad_epochs {self.num_bad_epochs}, ' - f'cooldown {self.in_cooldown} {self.cooldown_counter}') - - def after_val_iter(self, runner): - if self.by_epoch or self.epoch_base_valid: - return - cur_iter = runner.iter - if self.warmup_epochs is not None and cur_iter <= self.warmup_iters: - return - # If val_metric is not None, we check its value to reduce learning - # rate. - if self.val_metric is not None: - current = runner.eval_result[self.val_metric] - if self.is_better(current, self.best): - self.best = current - self.num_bad_epochs = 0 - else: - self.num_bad_epochs += 1 - - if self.in_cooldown: - self.cooldown_counter -= 1 - self.num_bad_epochs = 0 - print(f'val_iter--current {current:.6f} best {self.best:.6f}, ' - f'num_bad_epochs {self.num_bad_epochs}, ' - f'cooldown {self.in_cooldown} {self.cooldown_counter}') diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/utils/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/utils/__init__.py deleted file mode 100755 index f894e827..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .dist_utils import sync_random_seed - -__all__ = ['sync_random_seed'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/core/utils/dist_utils.py b/cv/super_resolution/basicvsr/pytorch/mmedit/core/utils/dist_utils.py deleted file mode 100755 index 8a3af5bb..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/core/utils/dist_utils.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch -import torch.distributed as dist -from mmcv.runner import get_dist_info - - -def sync_random_seed(seed=None, device='cuda'): - """Make sure different ranks share the same seed. - All workers must call this function, otherwise it will deadlock. - This method is generally used in `DistributedSampler`, - because the seed should be identical across all processes - in the distributed group. - Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - Returns: - int: Seed to be used. - """ - if seed is None: - seed = np.random.randint(2**31) - assert isinstance(seed, int) - - rank, world_size = get_dist_info() - - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - dist.broadcast(random_num, src=0) - return random_num.item() diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/__init__.py deleted file mode 100755 index 4b12041d..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_dataset import BaseDataset -from .base_sr_dataset import BaseSRDataset -from .builder import build_dataloader, build_dataset -from .dataset_wrappers import RepeatDataset -from .registry import DATASETS, PIPELINES -from .sr_reds_multiple_gt_dataset import SRREDSMultipleGTDataset - diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/base_dataset.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/base_dataset.py deleted file mode 100755 index c8ffea6e..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/base_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -from abc import ABCMeta, abstractmethod - -from torch.utils.data import Dataset - -from .pipelines import Compose - - -class BaseDataset(Dataset, metaclass=ABCMeta): - """Base class for datasets. - - All datasets should subclass it. - All subclasses should overwrite: - - ``load_annotations``, supporting to load information and generate - image lists. - - Args: - pipeline (list[dict | callable]): A sequence of data transforms. - test_mode (bool): If True, the dataset will work in test mode. - Otherwise, in train mode. - """ - - def __init__(self, pipeline, test_mode=False): - super().__init__() - self.test_mode = test_mode - self.pipeline = Compose(pipeline) - - @abstractmethod - def load_annotations(self): - """Abstract function for loading annotation. - - All subclasses should overwrite this function - """ - - def prepare_train_data(self, idx): - """Prepare training data. - - Args: - idx (int): Index of the training batch data. - - Returns: - dict: Returned training batch. - """ - results = copy.deepcopy(self.data_infos[idx]) - return self.pipeline(results) - - def prepare_test_data(self, idx): - """Prepare testing data. - - Args: - idx (int): Index for getting each testing batch. - - Returns: - Tensor: Returned testing batch. - """ - results = copy.deepcopy(self.data_infos[idx]) - return self.pipeline(results) - - def __len__(self): - """Length of the dataset. - - Returns: - int: Length of the dataset. - """ - return len(self.data_infos) - - def __getitem__(self, idx): - """Get item at each call. 
- - Args: - idx (int): Index for getting each item. - """ - if self.test_mode: - return self.prepare_test_data(idx) - - return self.prepare_train_data(idx) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/base_sr_dataset.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/base_sr_dataset.py deleted file mode 100755 index 6e811505..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/base_sr_dataset.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import os.path as osp -from collections import defaultdict -from pathlib import Path - -from mmcv import scandir - -from .base_dataset import BaseDataset - -IMG_EXTENSIONS = ('.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', - '.PPM', '.bmp', '.BMP', '.tif', '.TIF', '.tiff', '.TIFF') - - -class BaseSRDataset(BaseDataset): - """Base class for super resolution datasets. - """ - - def __init__(self, pipeline, scale, test_mode=False): - super().__init__(pipeline, test_mode) - self.scale = scale - - @staticmethod - def scan_folder(path): - """Obtain image path list (including sub-folders) from a given folder. - - Args: - path (str | :obj:`Path`): Folder path. - - Returns: - list[str]: image list obtained form given folder. - """ - - if isinstance(path, (str, Path)): - path = str(path) - else: - raise TypeError("'path' must be a str or a Path object, " - f'but received {type(path)}.') - - images = list(scandir(path, suffix=IMG_EXTENSIONS, recursive=True)) - images = [osp.join(path, v) for v in images] - assert images, f'{path} has no valid image file.' - return images - - def __getitem__(self, idx): - """Get item at each call. - - Args: - idx (int): Index for getting each item. - """ - results = copy.deepcopy(self.data_infos[idx]) - results['scale'] = self.scale - return self.pipeline(results) - - def evaluate(self, results, logger=None): - """Evaluate with different metrics. - - Args: - results (list[tuple]): The output of forward_test() of the model. - - Return: - dict: Evaluation results dict. - """ - if not isinstance(results, list): - raise TypeError(f'results must be a list, but got {type(results)}') - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: ' - f'{len(results)} != {len(self)}') - - results = [res['eval_result'] for res in results] # a list of dict - eval_result = defaultdict(list) # a dict of list - - for res in results: - for metric, val in res.items(): - eval_result[metric].append(val) - for metric, val_list in eval_result.items(): - assert len(val_list) == len(self), ( - f'Length of evaluation result of {metric} is {len(val_list)}, ' - f'should be {len(self)}') - - # average the results - eval_result = { - metric: sum(values) / len(self) - for metric, values in eval_result.items() - } - - return eval_result diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/builder.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/builder.py deleted file mode 100755 index 4414d375..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/builder.py +++ /dev/null @@ -1,181 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import platform -import random -from functools import partial - -import numpy as np -import torch -from mmcv.parallel import collate -from mmcv.runner import get_dist_info -from mmcv.utils import build_from_cfg -from packaging import version -from torch.utils.data import ConcatDataset, DataLoader - -from .dataset_wrappers import RepeatDataset -from .registry import DATASETS -from .samplers import DistributedSampler - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - base_soft_limit = rlimit[0] - hard_limit = rlimit[1] - soft_limit = min(max(4096, base_soft_limit), hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - - -def _concat_dataset(cfg, default_args=None): - """Concat datasets with different ann_file but the same type. - - Args: - cfg (dict): The config of dataset. - default_args (dict, optional): Default initialization arguments. - Default: None. - - Returns: - Dataset: The concatenated dataset. - """ - ann_files = cfg['ann_file'] - - datasets = [] - num_dset = len(ann_files) - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - data_cfg['ann_file'] = ann_files[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets) - - -def build_dataset(cfg, default_args=None): - """Build a dataset from config dict. - - It supports a variety of dataset config. If ``cfg`` is a Sequential (list - or dict), it will be a concatenated dataset of the datasets specified by - the Sequential. If it is a ``RepeatDataset``, then it will repeat the - dataset ``cfg['dataset']`` for ``cfg['times']`` times. If the ``ann_file`` - of the dataset is a Sequential, then it will build a concatenated dataset - with the same dataset type but different ``ann_file``. - - Args: - cfg (dict): Config dict. It should at least contain the key "type". - default_args (dict, optional): Default initialization arguments. - Default: None. - - Returns: - Dataset: The constructed dataset. - """ - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif isinstance(cfg.get('ann_file'), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - drop_last=False, - pin_memory=True, - persistent_workers=True, - **kwargs): - """Build PyTorch DataLoader. - - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (:obj:`Dataset`): A PyTorch dataset. - samples_per_gpu (int): Number of samples on each GPU, i.e., - batch size of each GPU. - workers_per_gpu (int): How many subprocesses to use for data - loading for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed - training. Default: 1. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - seed (int | None): Seed to be used. Default: None. - drop_last (bool): Whether to drop the last incomplete batch in epoch. - Default: False - pin_memory (bool): Whether to use pin_memory in DataLoader. 
- Default: True - persistent_workers (bool): If True, the data loader will not shutdown - the worker processes after a dataset has been consumed once. - This allows to maintain the workers Dataset instances alive. - The argument also has effect in PyTorch>=1.7.0. - Default: True - kwargs (dict, optional): Any keyword argument to be used to initialize - DataLoader. - - Returns: - DataLoader: A PyTorch dataloader. - """ - rank, world_size = get_dist_info() - if dist: - sampler = DistributedSampler( - dataset, - world_size, - rank, - shuffle=shuffle, - samples_per_gpu=samples_per_gpu, - seed=seed) - shuffle = False - batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - sampler = None - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - if version.parse(torch.__version__) >= version.parse('1.7.0'): - kwargs['persistent_workers'] = persistent_workers - - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=pin_memory, - shuffle=shuffle, - worker_init_fn=init_fn, - drop_last=drop_last, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - """Function to initialize each worker. - - The seed of each worker equals to - ``num_worker * rank + worker_id + user_seed``. - - Args: - worker_id (int): Id for each worker. - num_workers (int): Number of workers. - rank (int): Rank in distributed training. - seed (int): Random seed. - """ - - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) - torch.manual_seed(worker_seed) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/dataset_wrappers.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/dataset_wrappers.py deleted file mode 100755 index 3dbca31e..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/dataset_wrappers.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .registry import DATASETS - - -@DATASETS.register_module() -class RepeatDataset: - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. - """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - """Get item at each call. - - Args: - idx (int): Index for getting each item. - """ - return self.dataset[idx % self._ori_len] - - def __len__(self): - """Length of the dataset. - - Returns: - int: Length of the dataset. - """ - return self.times * self._ori_len diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/__init__.py deleted file mode 100755 index e903c040..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. 
-# All Rights Reserved. - -from .augmentation import (BinarizeImage, ColorJitter, CopyValues, Flip, - GenerateFrameIndices, - GenerateFrameIndiceswithPadding, - GenerateSegmentIndices, MirrorSequence, Pad, - Quantize, RandomAffine, RandomJitter, - RandomMaskDilation, RandomTransposeHW, Resize, - TemporalReverse, UnsharpMasking) -from .compose import Compose -from .crop import (Crop, CropAroundCenter, CropAroundFg, CropAroundUnknown, - CropLike, FixedCrop, ModCrop, PairedRandomCrop, - RandomResizedCrop) -from .formating import (Collect, FormatTrimap, GetMaskedImage, ImageToTensor, - ToTensor) -from .loading import (GetSpatialDiscountMask, LoadImageFromFile, - LoadImageFromFileList, LoadMask, LoadPairedImageFromFile, - RandomLoadResizeBg) -from .normalization import Normalize, RescaleToZeroOne - diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/augmentation.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/augmentation.py deleted file mode 100755 index e16f80a1..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/augmentation.py +++ /dev/null @@ -1,1332 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import math -import numbers -import os -import os.path as osp -import random - -import cv2 -import mmcv -import numpy as np -import torch -import torchvision.transforms as transforms -from PIL import Image - -from ..registry import PIPELINES - - -@PIPELINES.register_module() -class Resize: - """Resize data to a specific size for training or resize the images to fit - the network input regulation for testing. - - When used for resizing images to fit network input regulation, the case is - that a network may have several downsample and then upsample operation, - then the input height and width should be divisible by the downsample - factor of the network. - For example, the network would downsample the input for 5 times with - stride 2, then the downsample factor is 2^5 = 32 and the height - and width should be divisible by 32. - - Required keys are the keys in attribute "keys", added or modified keys are - "keep_ratio", "scale_factor", "interpolation" and the - keys in attribute "keys". - - All keys in "keys" should have the same shape. "test_trans" is used to - record the test transformation to align the input's shape. - - Args: - keys (list[str]): The images to be resized. - scale (float | tuple[int]): If scale is tuple[int], target spatial - size (h, w). Otherwise, target spatial size is scaled by input - size. - Note that when it is used, `size_factor` and `max_size` are - useless. Default: None - keep_ratio (bool): If set to True, images will be resized without - changing the aspect ratio. Otherwise, it will resize images to a - given size. Default: False. - Note that it is used togher with `scale`. - size_factor (int): Let the output shape be a multiple of size_factor. - Default:None. - Note that when it is used, `scale` should be set to None and - `keep_ratio` should be set to False. - max_size (int): The maximum size of the longest side of the output. - Default:None. - Note that it is used togher with `size_factor`. - interpolation (str): Algorithm used for interpolation: - "nearest" | "bilinear" | "bicubic" | "area" | "lanczos". - Default: "bilinear". - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. - Default: None. 
- output_keys (list[str] | None): The resized images. Default: None - Note that if it is not `None`, its length should be equal to keys. - """ - - def __init__(self, - keys, - scale=None, - keep_ratio=False, - size_factor=None, - max_size=None, - interpolation='bilinear', - backend=None, - output_keys=None): - assert keys, 'Keys should not be empty.' - if output_keys: - assert len(output_keys) == len(keys) - else: - output_keys = keys - if size_factor: - assert scale is None, ('When size_factor is used, scale should ', - f'be None. But received {scale}.') - assert keep_ratio is False, ('When size_factor is used, ' - 'keep_ratio should be False.') - if max_size: - assert size_factor is not None, ( - 'When max_size is used, ' - f'size_factor should also be set. But received {size_factor}.') - if isinstance(scale, float): - if scale <= 0: - raise ValueError(f'Invalid scale {scale}, must be positive.') - elif mmcv.is_tuple_of(scale, int): - max_long_edge = max(scale) - max_short_edge = min(scale) - if max_short_edge == -1: - # assign np.inf to long edge for rescaling short edge later. - scale = (np.inf, max_long_edge) - elif scale is not None: - raise TypeError( - f'Scale must be None, float or tuple of int, but got ' - f'{type(scale)}.') - self.keys = keys - self.output_keys = output_keys - self.scale = scale - self.size_factor = size_factor - self.max_size = max_size - self.keep_ratio = keep_ratio - self.interpolation = interpolation - self.backend = backend - - def _resize(self, img): - if self.keep_ratio: - img, self.scale_factor = mmcv.imrescale( - img, - self.scale, - return_scale=True, - interpolation=self.interpolation, - backend=self.backend) - else: - img, w_scale, h_scale = mmcv.imresize( - img, - self.scale, - return_scale=True, - interpolation=self.interpolation, - backend=self.backend) - self.scale_factor = np.array((w_scale, h_scale), dtype=np.float32) - return img - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - if self.size_factor: - h, w = results[self.keys[0]].shape[:2] - new_h = h - (h % self.size_factor) - new_w = w - (w % self.size_factor) - if self.max_size: - new_h = min(self.max_size - (self.max_size % self.size_factor), - new_h) - new_w = min(self.max_size - (self.max_size % self.size_factor), - new_w) - self.scale = (new_w, new_h) - for key, out_key in zip(self.keys, self.output_keys): - results[out_key] = self._resize(results[key]) - if len(results[out_key].shape) == 2: - results[out_key] = np.expand_dims(results[out_key], axis=2) - - results['scale_factor'] = self.scale_factor - results['keep_ratio'] = self.keep_ratio - results['interpolation'] = self.interpolation - results['backend'] = self.backend - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += ( - f'(keys={self.keys}, output_keys={self.output_keys}, ' - f'scale={self.scale}, ' - f'keep_ratio={self.keep_ratio}, size_factor={self.size_factor}, ' - f'max_size={self.max_size}, interpolation={self.interpolation})') - return repr_str - - -@PIPELINES.register_module() -class RandomRotation: - """Rotate the image by a randomly-chosen angle, measured in degree. - - Args: - keys (list[str]): The images to be rotated. - degrees (tuple[float] | tuple[int] | float | int): If it is a tuple, - it represents a range (min, max). 
If it is a float or int, - the range is constructed as (-degrees, degrees). - """ - - def __init__(self, keys, degrees): - if isinstance(degrees, (int, float)): - if degrees < 0.0: - raise ValueError('Degrees must be positive if it is a number.') - else: - degrees = (-degrees, degrees) - elif not mmcv.is_tuple_of(degrees, (int, float)): - raise TypeError(f'Degrees must be float | int or tuple of float | ' - 'int, but got ' - f'{type(degrees)}.') - - self.keys = keys - self.degrees = degrees - - def __call__(self, results): - angle = random.uniform(self.degrees[0], self.degrees[1]) - - for k in self.keys: - results[k] = mmcv.imrotate(results[k], angle) - if results[k].ndim == 2: - results[k] = np.expand_dims(results[k], axis=2) - results['degrees'] = self.degrees - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys}, degrees={self.degrees})') - return repr_str - - -@PIPELINES.register_module() -class Flip: - """Flip the input data with a probability. - - Reverse the order of elements in the given data with a specific direction. - The shape of the data is preserved, but the elements are reordered. - Required keys are the keys in attributes "keys", added or modified keys are - "flip", "flip_direction" and the keys in attributes "keys". - It also supports flipping a list of images with the same flip. - - Args: - keys (list[str]): The images to be flipped. - flip_ratio (float): The propability to flip the images. - direction (str): Flip images horizontally or vertically. Options are - "horizontal" | "vertical". Default: "horizontal". - """ - _directions = ['horizontal', 'vertical'] - - def __init__(self, keys, flip_ratio=0.5, direction='horizontal'): - if direction not in self._directions: - raise ValueError(f'Direction {direction} is not supported.' - f'Currently support ones are {self._directions}') - self.keys = keys - self.flip_ratio = flip_ratio - self.direction = direction - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - flip = np.random.random() < self.flip_ratio - - if flip: - for key in self.keys: - if isinstance(results[key], list): - for v in results[key]: - mmcv.imflip_(v, self.direction) - else: - mmcv.imflip_(results[key], self.direction) - - results['flip'] = flip - results['flip_direction'] = self.direction - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys}, flip_ratio={self.flip_ratio}, ' - f'direction={self.direction})') - return repr_str - - -@PIPELINES.register_module() -class Pad: - """Pad the images to align with network downsample factor for testing. - - See `Reshape` for more explanation. `numpy.pad` is used for the pad - operation. - Required keys are the keys in attribute "keys", added or - modified keys are "test_trans" and the keys in attribute - "keys". All keys in "keys" should have the same shape. "test_trans" is used - to record the test transformation to align the input's shape. - - Args: - keys (list[str]): The images to be padded. - ds_factor (int): Downsample factor of the network. The height and - weight will be padded to a multiple of ds_factor. Default: 32. - kwargs (option): any keyword arguments to be passed to `numpy.pad`. 
- """ - - def __init__(self, keys, ds_factor=32, **kwargs): - self.keys = keys - self.ds_factor = ds_factor - self.kwargs = kwargs - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - h, w = results[self.keys[0]].shape[:2] - - new_h = self.ds_factor * ((h - 1) // self.ds_factor + 1) - new_w = self.ds_factor * ((w - 1) // self.ds_factor + 1) - - pad_h = new_h - h - pad_w = new_w - w - if new_h != h or new_w != w: - pad_width = ((0, pad_h), (0, pad_w), (0, 0)) - for key in self.keys: - results[key] = np.pad(results[key], - pad_width[:results[key].ndim], - **self.kwargs) - results['pad'] = (pad_h, pad_w) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - kwargs_str = ', '.join( - [f'{key}={val}' for key, val in self.kwargs.items()]) - repr_str += (f'(keys={self.keys}, ds_factor={self.ds_factor}, ' - f'{kwargs_str})') - return repr_str - - -@PIPELINES.register_module() -class RandomAffine: - """Apply random affine to input images. - - This class is adopted from - https://github.com/pytorch/vision/blob/v0.5.0/torchvision/transforms/ - transforms.py#L1015 - It should be noted that in - https://github.com/Yaoyi-Li/GCA-Matting/blob/master/dataloader/ - data_generator.py#L70 - random flip is added. See explanation of `flip_ratio` below. - Required keys are the keys in attribute "keys", modified keys - are keys in attribute "keys". - - Args: - keys (Sequence[str]): The images to be affined. - degrees (float | tuple[float]): Range of degrees to select from. If it - is a float instead of a tuple like (min, max), the range of degrees - will be (-degrees, +degrees). Set to 0 to deactivate rotations. - translate (tuple, optional): Tuple of maximum absolute fraction for - horizontal and vertical translations. For example translate=(a, b), - then horizontal shift is randomly sampled in the range - -img_width * a < dx < img_width * a and vertical shift is randomly - sampled in the range -img_height * b < dy < img_height * b. - Default: None. - scale (tuple, optional): Scaling factor interval, e.g (a, b), then - scale is randomly sampled from the range a <= scale <= b. - Default: None. - shear (float | tuple[float], optional): Range of shear degrees to - select from. If shear is a float, a shear parallel to the x axis - and a shear parallel to the y axis in the range (-shear, +shear) - will be applied. Else if shear is a tuple of 2 values, a x-axis - shear and a y-axis shear in (shear[0], shear[1]) will be applied. - Default: None. - flip_ratio (float, optional): Probability of the image being flipped. - The flips in horizontal direction and vertical direction are - independent. The image may be flipped in both directions. - Default: None. - """ - - def __init__(self, - keys, - degrees, - translate=None, - scale=None, - shear=None, - flip_ratio=None): - self.keys = keys - if isinstance(degrees, numbers.Number): - assert degrees >= 0, ('If degrees is a single number, ' - 'it must be positive.') - self.degrees = (-degrees, degrees) - else: - assert isinstance(degrees, tuple) and len(degrees) == 2, \ - 'degrees should be a tuple and it must be of length 2.' - self.degrees = degrees - - if translate is not None: - assert isinstance(translate, tuple) and len(translate) == 2, \ - 'translate should be a tuple and it must be of length 2.' 
- for t in translate: - assert 0.0 <= t <= 1.0, ('translation values should be ' - 'between 0 and 1.') - self.translate = translate - - if scale is not None: - assert isinstance(scale, tuple) and len(scale) == 2, \ - 'scale should be a tuple and it must be of length 2.' - for s in scale: - assert s > 0, 'scale values should be positive.' - self.scale = scale - - if shear is not None: - if isinstance(shear, numbers.Number): - assert shear >= 0, ('If shear is a single number, ' - 'it must be positive.') - self.shear = (-shear, shear) - else: - assert isinstance(shear, tuple) and len(shear) == 2, \ - 'shear should be a tuple and it must be of length 2.' - # X-Axis and Y-Axis shear with (min, max) - self.shear = shear - else: - self.shear = shear - - if flip_ratio is not None: - assert isinstance(flip_ratio, - float), 'flip_ratio should be a float.' - self.flip_ratio = flip_ratio - else: - self.flip_ratio = 0 - - @staticmethod - def _get_params(degrees, translate, scale_ranges, shears, flip_ratio, - img_size): - """Get parameters for affine transformation. - - Returns: - paras (tuple): Params to be passed to the affine transformation. - """ - angle = np.random.uniform(degrees[0], degrees[1]) - if translate is not None: - max_dx = translate[0] * img_size[0] - max_dy = translate[1] * img_size[1] - translations = (np.round(np.random.uniform(-max_dx, max_dx)), - np.round(np.random.uniform(-max_dy, max_dy))) - else: - translations = (0, 0) - - if scale_ranges is not None: - scale = (np.random.uniform(scale_ranges[0], scale_ranges[1]), - np.random.uniform(scale_ranges[0], scale_ranges[1])) - else: - scale = (1.0, 1.0) - - if shears is not None: - shear = np.random.uniform(shears[0], shears[1]) - else: - shear = 0.0 - - # Because `flip` is used as a multiplier in line 479 and 480, - # so -1 stands for flip and 1 stands for no flip. Thus `flip` - # should be an 'inverse' flag as the result of the comparison. - # See https://github.com/open-mmlab/mmediting/pull/799 for more detail - flip = (np.random.rand(2) > flip_ratio).astype(np.int32) * 2 - 1 - - return angle, translations, scale, shear, flip - - @staticmethod - def _get_inverse_affine_matrix(center, angle, translate, scale, shear, - flip): - """Helper method to compute inverse matrix for affine transformation. - - As it is explained in PIL.Image.rotate, we need compute INVERSE of - affine transformation matrix: M = T * C * RSS * C^-1 where - T is translation matrix: - [1, 0, tx | 0, 1, ty | 0, 0, 1]; - C is translation matrix to keep center: - [1, 0, cx | 0, 1, cy | 0, 0, 1]; - RSS is rotation with scale and shear matrix. - - It is different from the original function in torchvision. - 1. The order are changed to flip -> scale -> rotation -> shear. - 2. x and y have different scale factors. - RSS(shear, a, scale, f) = - [ cos(a + shear)*scale_x*f -sin(a + shear)*scale_y 0] - [ sin(a)*scale_x*f cos(a)*scale_y 0] - [ 0 0 1] - Thus, the inverse is M^-1 = C * RSS^-1 * C^-1 * T^-1. 
- """ - - angle = math.radians(angle) - shear = math.radians(shear) - scale_x = 1.0 / scale[0] * flip[0] - scale_y = 1.0 / scale[1] * flip[1] - - # Inverted rotation matrix with scale and shear - d = math.cos(angle + shear) * math.cos(angle) + math.sin( - angle + shear) * math.sin(angle) - matrix = [ - math.cos(angle) * scale_x, - math.sin(angle + shear) * scale_x, 0, -math.sin(angle) * scale_y, - math.cos(angle + shear) * scale_y, 0 - ] - matrix = [m / d for m in matrix] - - # Apply inverse of translation and of center translation: - # RSS^-1 * C^-1 * T^-1 - matrix[2] += matrix[0] * (-center[0] - translate[0]) + matrix[1] * ( - -center[1] - translate[1]) - matrix[5] += matrix[3] * (-center[0] - translate[0]) + matrix[4] * ( - -center[1] - translate[1]) - - # Apply center translation: C * RSS^-1 * C^-1 * T^-1 - matrix[2] += center[0] - matrix[5] += center[1] - - return matrix - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - h, w = results[self.keys[0]].shape[:2] - # if image is too small, set degree to 0 to reduce introduced dark area - if np.maximum(h, w) < 1024: - params = self._get_params((0, 0), self.translate, self.scale, - self.shear, self.flip_ratio, (h, w)) - else: - params = self._get_params(self.degrees, self.translate, self.scale, - self.shear, self.flip_ratio, (h, w)) - - center = (w * 0.5 - 0.5, h * 0.5 - 0.5) - M = self._get_inverse_affine_matrix(center, *params) - M = np.array(M).reshape((2, 3)) - - for key in self.keys: - results[key] = cv2.warpAffine( - results[key], - M, (w, h), - flags=cv2.INTER_NEAREST + cv2.WARP_INVERSE_MAP) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys}, degrees={self.degrees}, ' - f'translate={self.translate}, scale={self.scale}, ' - f'shear={self.shear}, flip_ratio={self.flip_ratio})') - return repr_str - - -@PIPELINES.register_module() -class RandomJitter: - """Randomly jitter the foreground in hsv space. - - The jitter range of hue is adjustable while the jitter ranges of saturation - and value are adaptive to the images. Side effect: the "fg" image will be - converted to `np.float32`. - Required keys are "fg" and "alpha", modified key is "fg". - - Args: - hue_range (float | tuple[float]): Range of hue jittering. If it is a - float instead of a tuple like (min, max), the range of hue - jittering will be (-hue_range, +hue_range). Default: 40. - """ - - def __init__(self, hue_range=40): - if isinstance(hue_range, numbers.Number): - assert hue_range >= 0, ('If hue_range is a single number, ' - 'it must be positive.') - self.hue_range = (-hue_range, hue_range) - else: - assert isinstance(hue_range, tuple) and len(hue_range) == 2, \ - 'hue_range should be a tuple and it must be of length 2.' - self.hue_range = hue_range - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - fg, alpha = results['fg'], results['alpha'] - - # convert to HSV space; - # convert to float32 image to keep precision during space conversion. 
- fg = mmcv.bgr2hsv(fg.astype(np.float32) / 255) - # Hue noise - hue_jitter = np.random.randint(self.hue_range[0], self.hue_range[1]) - fg[:, :, 0] = np.remainder(fg[:, :, 0] + hue_jitter, 360) - - # Saturation noise - sat_mean = fg[:, :, 1][alpha > 0].mean() - # jitter saturation within range (1.1 - sat_mean) * [-0.1, 0.1] - sat_jitter = (1.1 - sat_mean) * (np.random.rand() * 0.2 - 0.1) - sat = fg[:, :, 1] - sat = np.abs(sat + sat_jitter) - sat[sat > 1] = 2 - sat[sat > 1] - fg[:, :, 1] = sat - - # Value noise - val_mean = fg[:, :, 2][alpha > 0].mean() - # jitter value within range (1.1 - val_mean) * [-0.1, 0.1] - val_jitter = (1.1 - val_mean) * (np.random.rand() * 0.2 - 0.1) - val = fg[:, :, 2] - val = np.abs(val + val_jitter) - val[val > 1] = 2 - val[val > 1] - fg[:, :, 2] = val - # convert back to BGR space - fg = mmcv.hsv2bgr(fg) - results['fg'] = fg * 255 - - return results - - def __repr__(self): - return self.__class__.__name__ + f'hue_range={self.hue_range}' - - -@PIPELINES.register_module() -class ColorJitter: - """An interface for torch color jitter so that it can be invoked in - mmediting pipeline. - - Randomly change the brightness, contrast and saturation of an image. - Modified keys are the attributes specified in "keys". - - Args: - keys (list[str]): The images to be resized. - channel_order (str): Order of channel, candidates are 'bgr' and 'rgb'. - Default: 'rgb'. - - Notes: ``**kwards`` follows the args list of - ``torchvision.transforms.ColorJitter``. - - brightness (float or tuple of float (min, max)): How much to jitter - brightness. brightness_factor is chosen uniformly from - [max(0, 1 - brightness), 1 + brightness] or the given [min, max]. - Should be non negative numbers. - contrast (float or tuple of float (min, max)): How much to jitter - contrast. contrast_factor is chosen uniformly from - [max(0, 1 - contrast), 1 + contrast] or the given [min, max]. - Should be non negative numbers. - saturation (float or tuple of float (min, max)): How much to jitter - saturation. saturation_factor is chosen uniformly from - [max(0, 1 - saturation), 1 + saturation] or the given [min, max]. - Should be non negative numbers. - hue (float or tuple of float (min, max)): How much to jitter hue. - hue_factor is chosen uniformly from [-hue, hue] or the given - [min, max]. - Should have 0<= hue <= 0.5 or -0.5 <= min <= max <= 0.5. - """ - - def __init__(self, keys, channel_order='rgb', **kwargs): - assert keys, 'Keys should not be empty.' 
- assert 'to_rgb' not in kwargs, ( - '`to_rgb` is not support in ColorJitter, ' - "which is replaced by `channel_order` ('rgb' or 'bgr')") - - self.keys = keys - self.channel_order = channel_order - self.transform = transforms.ColorJitter(**kwargs) - - def _color_jitter(self, image, this_seed): - - if self.channel_order.lower() == 'bgr': - image = image[..., ::-1] - - image = Image.fromarray(image) - torch.manual_seed(this_seed) - image = self.transform(image) - image = np.asarray(image) - - if self.channel_order.lower() == 'bgr': - image = image[..., ::-1] - - return image - - def __call__(self, results): - - this_seed = random.randint(0, 2**32) - - for k in self.keys: - if isinstance(results[k], list): - results[k] = [ - self._color_jitter(v, this_seed) for v in results[k] - ] - else: - results[k] = self._color_jitter(results[k], this_seed) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys}, channel_order={self.channel_order}, ' - f'brightness={self.transform.brightness}, ' - f'contrast={self.transform.contrast}, ' - f'saturation={self.transform.saturation}, ' - f'hue={self.transform.hue})') - - return repr_str - - -class BinarizeImage: - """Binarize image. - - Args: - keys (Sequence[str]): The images to be binarized. - binary_thr (float): Threshold for binarization. - to_int (bool): If True, return image as int32, otherwise - return image as float32. - """ - - def __init__(self, keys, binary_thr, to_int=False): - self.keys = keys - self.binary_thr = binary_thr - self.to_int = to_int - - def _binarize(self, img): - type_ = np.float32 if not self.to_int else np.int32 - img = (img[..., :] > self.binary_thr).astype(type_) - - return img - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - for k in self.keys: - results[k] = self._binarize(results[k]) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys}, binary_thr={self.binary_thr}, ' - f'to_int={self.to_int})') - - return repr_str - - -@PIPELINES.register_module() -class RandomMaskDilation: - """Randomly dilate binary masks. - - Args: - keys (Sequence[str]): The images to be resized. - get_binary (bool): If True, according to binary_thr, reset final - output as binary mask. Otherwise, return masks directly. - binary_thr (float): Threshold for obtaining binary mask. - kernel_min (int): Min size of dilation kernel. - kernel_max (int): Max size of dilation kernel. - """ - - def __init__(self, keys, binary_thr=0., kernel_min=9, kernel_max=49): - self.keys = keys - self.kernel_min = kernel_min - self.kernel_max = kernel_max - self.binary_thr = binary_thr - - def _random_dilate(self, img): - kernel_size = np.random.randint(self.kernel_min, self.kernel_max + 1) - kernel = np.ones((kernel_size, kernel_size), dtype=np.uint8) - dilate_kernel_size = kernel_size - img_ = cv2.dilate(img, kernel, iterations=1) - - img_ = (img_ > self.binary_thr).astype(np.float32) - - return img_, dilate_kernel_size - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. 
- """ - for k in self.keys: - results[k], d_kernel = self._random_dilate(results[k]) - if len(results[k].shape) == 2: - results[k] = np.expand_dims(results[k], axis=2) - results[k + '_dilate_kernel_size'] = d_kernel - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys}, kernel_min={self.kernel_min}, ' - f'kernel_max={self.kernel_max})') - - return repr_str - - -@PIPELINES.register_module() -class RandomTransposeHW: - """Randomly transpose images in H and W dimensions with a probability. - - (TransposeHW = horizontal flip + anti-clockwise rotatation by 90 degrees) - When used with horizontal/vertical flips, it serves as a way of rotation - augmentation. - It also supports randomly transposing a list of images. - - Required keys are the keys in attributes "keys", added or modified keys are - "transpose" and the keys in attributes "keys". - - Args: - keys (list[str]): The images to be transposed. - transpose_ratio (float): The propability to transpose the images. - """ - - def __init__(self, keys, transpose_ratio=0.5): - self.keys = keys - self.transpose_ratio = transpose_ratio - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - transpose = np.random.random() < self.transpose_ratio - - if transpose: - for key in self.keys: - if isinstance(results[key], list): - results[key] = [v.transpose(1, 0, 2) for v in results[key]] - else: - results[key] = results[key].transpose(1, 0, 2) - - results['transpose'] = transpose - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += ( - f'(keys={self.keys}, transpose_ratio={self.transpose_ratio})') - return repr_str - - -@PIPELINES.register_module() -class GenerateFrameIndiceswithPadding: - """Generate frame index with padding for REDS dataset and Vid4 dataset - during testing. - - Required keys: lq_path, gt_path, key, num_input_frames, max_frame_num - Added or modified keys: lq_path, gt_path - - Args: - padding (str): padding mode, one of - 'replicate' | 'reflection' | 'reflection_circle' | 'circle'. - - Examples: current_idx = 0, num_input_frames = 5 - The generated frame indices under different padding mode: - - replicate: [0, 0, 0, 1, 2] - reflection: [2, 1, 0, 1, 2] - reflection_circle: [4, 3, 0, 1, 2] - circle: [3, 4, 0, 1, 2] - - filename_tmpl (str): Template for file name. Default: '{:08d}'. - """ - - def __init__(self, padding, filename_tmpl='{:08d}'): - if padding not in ('replicate', 'reflection', 'reflection_circle', - 'circle'): - raise ValueError(f'Wrong padding mode {padding}.' - 'Should be "replicate", "reflection", ' - '"reflection_circle", "circle"') - self.padding = padding - self.filename_tmpl = filename_tmpl - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. 
- """ - clip_name, frame_name = results['key'].split(os.sep) - current_idx = int(frame_name) - max_frame_num = results['max_frame_num'] - 1 # start from 0 - num_input_frames = results['num_input_frames'] - num_pad = num_input_frames // 2 - - frame_list = [] - for i in range(current_idx - num_pad, current_idx + num_pad + 1): - if i < 0: - if self.padding == 'replicate': - pad_idx = 0 - elif self.padding == 'reflection': - pad_idx = -i - elif self.padding == 'reflection_circle': - pad_idx = current_idx + num_pad - i - else: - pad_idx = num_input_frames + i - elif i > max_frame_num: - if self.padding == 'replicate': - pad_idx = max_frame_num - elif self.padding == 'reflection': - pad_idx = max_frame_num * 2 - i - elif self.padding == 'reflection_circle': - pad_idx = (current_idx - num_pad) - (i - max_frame_num) - else: - pad_idx = i - num_input_frames - else: - pad_idx = i - frame_list.append(pad_idx) - - lq_path_root = results['lq_path'] - gt_path_root = results['gt_path'] - lq_paths = [ - osp.join(lq_path_root, clip_name, - f'{self.filename_tmpl.format(idx)}.png') - for idx in frame_list - ] - gt_paths = [osp.join(gt_path_root, clip_name, f'{frame_name}.png')] - results['lq_path'] = lq_paths - results['gt_path'] = gt_paths - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ + f"(padding='{self.padding}')" - return repr_str - - -@PIPELINES.register_module() -class GenerateFrameIndices: - """Generate frame index for REDS datasets. It also performs - temporal augmention with random interval. - - Required keys: lq_path, gt_path, key, num_input_frames - Added or modified keys: lq_path, gt_path, interval, reverse - - Args: - interval_list (list[int]): Interval list for temporal augmentation. - It will randomly pick an interval from interval_list and sample - frame index with the interval. - frames_per_clip(int): Number of frames per clips. Default: 99 for - REDS dataset. - """ - - def __init__(self, interval_list, frames_per_clip=99): - self.interval_list = interval_list - self.frames_per_clip = frames_per_clip - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. 
- """ - clip_name, frame_name = results['key'].split( - os.sep) # key example: 000/00000000 - center_frame_idx = int(frame_name) - num_half_frames = results['num_input_frames'] // 2 - - max_frame_num = results.get('max_frame_num', self.frames_per_clip + 1) - frames_per_clip = min(self.frames_per_clip, max_frame_num - 1) - - interval = np.random.choice(self.interval_list) - # ensure not exceeding the borders - start_frame_idx = center_frame_idx - num_half_frames * interval - end_frame_idx = center_frame_idx + num_half_frames * interval - while (start_frame_idx < 0) or (end_frame_idx > frames_per_clip): - center_frame_idx = np.random.randint(0, frames_per_clip + 1) - start_frame_idx = center_frame_idx - num_half_frames * interval - end_frame_idx = center_frame_idx + num_half_frames * interval - frame_name = f'{center_frame_idx:08d}' - neighbor_list = list( - range(center_frame_idx - num_half_frames * interval, - center_frame_idx + num_half_frames * interval + 1, interval)) - - lq_path_root = results['lq_path'] - gt_path_root = results['gt_path'] - lq_path = [ - osp.join(lq_path_root, clip_name, f'{v:08d}.png') - for v in neighbor_list - ] - gt_path = [osp.join(gt_path_root, clip_name, f'{frame_name}.png')] - results['lq_path'] = lq_path - results['gt_path'] = gt_path - results['interval'] = interval - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(interval_list={self.interval_list}, ' - f'frames_per_clip={self.frames_per_clip})') - return repr_str - - -@PIPELINES.register_module() -class TemporalReverse: - """Reverse frame lists for temporal augmentation. - - Required keys are the keys in attributes "lq" and "gt", - added or modified keys are "lq", "gt" and "reverse". - - Args: - keys (list[str]): The frame lists to be reversed. - reverse_ratio (float): The propability to reverse the frame lists. - Default: 0.5. - """ - - def __init__(self, keys, reverse_ratio=0.5): - self.keys = keys - self.reverse_ratio = reverse_ratio - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - reverse = np.random.random() < self.reverse_ratio - - if reverse: - for key in self.keys: - results[key].reverse() - - results['reverse'] = reverse - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(keys={self.keys}, reverse_ratio={self.reverse_ratio})' - return repr_str - - -@PIPELINES.register_module() -class GenerateSegmentIndices: - """Generate frame indices for a segment. It also performs temporal - augmention with random interval. - - Required keys: lq_path, gt_path, key, num_input_frames, sequence_length - Added or modified keys: lq_path, gt_path, interval, reverse - - Args: - interval_list (list[int]): Interval list for temporal augmentation. - It will randomly pick an interval from interval_list and sample - frame index with the interval. - start_idx (int): The index corresponds to the first frame in the - sequence. Default: 0. - filename_tmpl (str): Template for file name. Default: '{:08d}.png'. - """ - - def __init__(self, interval_list, start_idx=0, filename_tmpl='{:08d}.png'): - self.interval_list = interval_list - self.filename_tmpl = filename_tmpl - self.start_idx = start_idx - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. 
- - Returns: - dict: A dict containing the processed data and information. - """ - # key example: '000', 'calendar' (sequence name) - clip_name = results['key'] - interval = np.random.choice(self.interval_list) - - self.sequence_length = results['sequence_length'] - num_input_frames = results.get('num_input_frames', - self.sequence_length) - - # randomly select a frame as start - if self.sequence_length - num_input_frames * interval < 0: - raise ValueError('The input sequence is not long enough to ' - 'support the current choice of [interval] or ' - '[num_input_frames].') - start_frame_idx = np.random.randint( - 0, self.sequence_length - num_input_frames * interval + 1) - end_frame_idx = start_frame_idx + num_input_frames * interval - neighbor_list = list(range(start_frame_idx, end_frame_idx, interval)) - neighbor_list = [v + self.start_idx for v in neighbor_list] - - # add the corresponding file paths - lq_path_root = results['lq_path'] - gt_path_root = results['gt_path'] - lq_path = [ - osp.join(lq_path_root, clip_name, self.filename_tmpl.format(v)) - for v in neighbor_list - ] - gt_path = [ - osp.join(gt_path_root, clip_name, self.filename_tmpl.format(v)) - for v in neighbor_list - ] - - results['lq_path'] = lq_path - results['gt_path'] = gt_path - results['interval'] = interval - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(interval_list={self.interval_list})') - return repr_str - - -@PIPELINES.register_module() -class MirrorSequence: - """Extend short sequences (e.g. Vimeo-90K) by mirroring the sequences - - Given a sequence with N frames (x1, ..., xN), extend the sequence to - (x1, ..., xN, xN, ..., x1). - - Args: - keys (list[str]): The frame lists to be extended. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - for key in self.keys: - if isinstance(results[key], list): - results[key] = results[key] + results[key][::-1] - else: - raise TypeError('The input must be of class list[nparray]. ' - f'Got {type(results[key])}.') - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys})') - return repr_str - - -@PIPELINES.register_module() -class CopyValues: - """Copy the value of a source key to a destination key. - - - It does the following: results[dst_key] = results[src_key] for - (src_key, dst_key) in zip(src_keys, dst_keys). - - Added keys are the keys in the attribute "dst_keys". - - Args: - src_keys (list[str]): The source keys. - dst_keys (list[str]): The destination keys. - """ - - def __init__(self, src_keys, dst_keys): - - if not isinstance(src_keys, list) or not isinstance(dst_keys, list): - raise AssertionError('"src_keys" and "dst_keys" must be lists.') - - if len(src_keys) != len(dst_keys): - raise ValueError('"src_keys" and "dst_keys" should have the same' - 'number of elements.') - - self.src_keys = src_keys - self.dst_keys = dst_keys - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict with a key added/modified. 
- """ - for (src_key, dst_key) in zip(self.src_keys, self.dst_keys): - results[dst_key] = copy.deepcopy(results[src_key]) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(src_keys={self.src_keys})') - repr_str += (f'(dst_keys={self.dst_keys})') - return repr_str - - -@PIPELINES.register_module() -class Quantize: - """Quantize and clip the image to [0, 1]. - - It is assumed that the the input has range [0, 1]. - - Modified keys are the attributes specified in "keys". - - Args: - keys (list[str]): The keys whose values are clipped. - """ - - def __init__(self, keys): - self.keys = keys - - def _quantize_clip(self, input_): - is_single_image = False - if isinstance(input_, np.ndarray): - is_single_image = True - input_ = [input_] - - # quantize and clip - input_ = [np.clip((v * 255.0).round(), 0, 255) / 255. for v in input_] - - if is_single_image: - input_ = input_[0] - - return input_ - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict with the values of the specified keys are rounded - and clipped. - """ - - for key in self.keys: - results[key] = self._quantize_clip(results[key]) - - return results - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class UnsharpMasking: - """Apply unsharp masking to an image or a sequence of images. - - Args: - kernel_size (int): The kernel_size of the Gaussian kernel. - sigma (float): The standard deviation of the Gaussian. - weight (float): The weight of the "details" in the final output. - threshold (float): Pixel differences larger than this value are - regarded as "details". - keys (list[str]): The keys whose values are processed. - - Added keys are "xxx_unsharp", where "xxx" are the attributes specified - in "keys". 
- - """ - - def __init__(self, kernel_size, sigma, weight, threshold, keys): - if kernel_size % 2 == 0: - raise ValueError('kernel_size must be an odd number, but ' - f'got {kernel_size}.') - - self.kernel_size = kernel_size - self.sigma = sigma - self.weight = weight - self.threshold = threshold - self.keys = keys - - kernel = cv2.getGaussianKernel(kernel_size, sigma) - self.kernel = np.matmul(kernel, kernel.transpose()) - - def _unsharp_masking(self, imgs): - is_single_image = False - if isinstance(imgs, np.ndarray): - is_single_image = True - imgs = [imgs] - - outputs = [] - for img in imgs: - residue = img - cv2.filter2D(img, -1, self.kernel) - mask = np.float32(np.abs(residue) * 255 > self.threshold) - soft_mask = cv2.filter2D(mask, -1, self.kernel) - sharpened = np.clip(img + self.weight * residue, 0, 1) - - outputs.append(soft_mask * sharpened + (1 - soft_mask) * img) - - if is_single_image: - outputs = outputs[0] - - return outputs - - def __call__(self, results): - for key in self.keys: - results[f'{key}_unsharp'] = self._unsharp_masking(results[key]) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys}, kernel_size={self.kernel_size}, ' - f'sigma={self.sigma}, weight={self.weight}, ' - f'threshold={self.threshold})') - return repr_str diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/compose.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/compose.py deleted file mode 100755 index 0ffb0fd5..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/compose.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Sequence - -from mmcv.utils import build_from_cfg - -from ..registry import PIPELINES - - -@PIPELINES.register_module() -class Compose: - """Compose a data pipeline with a sequence of transforms. - - Args: - transforms (list[dict | callable]): - Either config dicts of transforms or transform objects. - """ - - def __init__(self, transforms): - assert isinstance(transforms, Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError(f'transform must be callable or a dict, ' - f'but got {type(transform)}') - - def __call__(self, data): - """Call function. - - Args: - data (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/crop.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/crop.py deleted file mode 100755 index 51fa2425..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/crop.py +++ /dev/null @@ -1,749 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math -import random - -import mmcv -import numpy as np -from torch.nn.modules.utils import _pair - -from ..registry import PIPELINES -from .utils import random_choose_unknown - - -@PIPELINES.register_module() -class Crop: - """Crop data to specific size for training. - - Args: - keys (Sequence[str]): The images to be cropped. - crop_size (Tuple[int]): Target spatial size (h, w). - random_crop (bool): If set to True, it will random crop - image. Otherwise, it will work as center crop. - is_pad_zeros (bool, optional): Whether to pad the image with 0 if - crop_size is greater than image size. Default: False. - """ - - def __init__(self, keys, crop_size, random_crop=True, is_pad_zeros=False): - if not mmcv.is_tuple_of(crop_size, int): - raise TypeError( - 'Elements of crop_size must be int and crop_size must be' - f' tuple, but got {type(crop_size[0])} in {type(crop_size)}') - - self.keys = keys - self.crop_size = crop_size - self.random_crop = random_crop - self.is_pad_zeros = is_pad_zeros - - def _crop(self, data): - if not isinstance(data, list): - data_list = [data] - else: - data_list = data - - crop_bbox_list = [] - data_list_ = [] - - for item in data_list: - data_h, data_w = item.shape[:2] - crop_h, crop_w = self.crop_size - - if self.is_pad_zeros: - - crop_y_offset, crop_x_offset = 0, 0 - - if crop_h > data_h: - crop_y_offset = (crop_h - data_h) // 2 - if crop_w > data_w: - crop_x_offset = (crop_w - data_w) // 2 - - if crop_y_offset > 0 or crop_x_offset > 0: - pad_width = [(2 * crop_y_offset, 2 * crop_y_offset), - (2 * crop_x_offset, 2 * crop_x_offset)] - if item.ndim == 3: - pad_width.append((0, 0)) - item = np.pad( - item, - tuple(pad_width), - mode='constant', - constant_values=0) - - data_h, data_w = item.shape[:2] - - crop_h = min(data_h, crop_h) - crop_w = min(data_w, crop_w) - - if self.random_crop: - x_offset = np.random.randint(0, data_w - crop_w + 1) - y_offset = np.random.randint(0, data_h - crop_h + 1) - else: - x_offset = max(0, (data_w - crop_w)) // 2 - y_offset = max(0, (data_h - crop_h)) // 2 - - crop_bbox = [x_offset, y_offset, crop_w, crop_h] - item_ = item[y_offset:y_offset + crop_h, - x_offset:x_offset + crop_w, ...] - crop_bbox_list.append(crop_bbox) - data_list_.append(item_) - - if not isinstance(data, list): - return data_list_[0], crop_bbox_list[0] - return data_list_, crop_bbox_list - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - for k in self.keys: - data_, crop_bbox = self._crop(results[k]) - results[k] = data_ - results[k + '_crop_bbox'] = crop_bbox - results['crop_size'] = self.crop_size - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'keys={self.keys}, crop_size={self.crop_size}, ' - f'random_crop={self.random_crop}') - - return repr_str - - -@PIPELINES.register_module() -class RandomResizedCrop(object): - """Crop data to random size and aspect ratio. - - A crop of a random proportion of the original image - and a random aspect ratio of the original aspect ratio is made. - The cropped image is finally resized to a given size specified - by 'crop_size'. Modified keys are the attributes specified in "keys". - - This code is partially adopted from - torchvision.transforms.RandomResizedCrop: - [https://pytorch.org/vision/stable/_modules/torchvision/transforms/\ - transforms.html#RandomResizedCrop]. 
- - Args: - keys (list[str]): The images to be resized and random-cropped. - crop_size (int | tuple[int]): Target spatial size (h, w). - scale (tuple[float], optional): Range of the proportion of the original - image to be cropped. Default: (0.08, 1.0). - ratio (tuple[float], optional): Range of aspect ratio of the crop. - Default: (3. / 4., 4. / 3.). - interpolation (str, optional): Algorithm used for interpolation. - It can be only either one of the following: - "nearest" | "bilinear" | "bicubic" | "area" | "lanczos". - Default: "bilinear". - """ - - def __init__(self, - keys, - crop_size, - scale=(0.08, 1.0), - ratio=(3. / 4., 4. / 3.), - interpolation='bilinear'): - assert keys, 'Keys should not be empty.' - if isinstance(crop_size, int): - crop_size = (crop_size, crop_size) - elif not mmcv.is_tuple_of(crop_size, int): - raise TypeError('"crop_size" must be an integer ' - 'or a tuple of integers, but got ' - f'{type(crop_size)}') - if not mmcv.is_tuple_of(scale, float): - raise TypeError('"scale" must be a tuple of float, ' - f'but got {type(scale)}') - if not mmcv.is_tuple_of(ratio, float): - raise TypeError('"ratio" must be a tuple of float, ' - f'but got {type(ratio)}') - - self.keys = keys - self.crop_size = crop_size - self.scale = scale - self.ratio = ratio - self.interpolation = interpolation - - def get_params(self, data): - """Get parameters for a random sized crop. - - Args: - data (np.ndarray): Image of type numpy array to be cropped. - - Returns: - A tuple containing the coordinates of the top left corner - and the chosen crop size. - """ - data_h, data_w = data.shape[:2] - area = data_h * data_w - - for _ in range(10): - target_area = random.uniform(*self.scale) * area - log_ratio = (math.log(self.ratio[0]), math.log(self.ratio[1])) - aspect_ratio = math.exp(random.uniform(*log_ratio)) - - crop_w = int(round(math.sqrt(target_area * aspect_ratio))) - crop_h = int(round(math.sqrt(target_area / aspect_ratio))) - - if 0 < crop_w <= data_w and 0 < crop_h <= data_h: - top = random.randint(0, data_h - crop_h) - left = random.randint(0, data_w - crop_w) - return top, left, crop_h, crop_w - - # Fall back to center crop - in_ratio = float(data_w) / float(data_h) - if (in_ratio < min(self.ratio)): - crop_w = data_w - crop_h = int(round(crop_w / min(self.ratio))) - elif (in_ratio > max(self.ratio)): - crop_h = data_h - crop_w = int(round(crop_h * max(self.ratio))) - else: # whole image - crop_w = data_w - crop_h = data_h - top = (data_h - crop_h) // 2 - left = (data_w - crop_w) // 2 - return top, left, crop_h, crop_w - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - for k in self.keys: - top, left, crop_h, crop_w = self.get_params(results[k]) - crop_bbox = [top, left, crop_w, crop_h] - results[k] = results[k][top:top + crop_h, left:left + crop_w, ...] - results[k] = mmcv.imresize( - results[k], - self.crop_size, - return_scale=False, - interpolation=self.interpolation) - results[k + '_crop_bbox'] = crop_bbox - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys}, crop_size={self.crop_size}, ' - f'scale={self.scale}, ratio={self.ratio}, ' - f'interpolation={self.interpolation})') - return repr_str - - -@PIPELINES.register_module() -class FixedCrop: - """Crop paired data (at a specific position) to specific size for training. 
- - Args: - keys (Sequence[str]): The images to be cropped. - crop_size (Tuple[int]): Target spatial size (h, w). - crop_pos (Tuple[int]): Specific position (x, y). If set to None, - random initialize the position to crop paired data batch. - """ - - def __init__(self, keys, crop_size, crop_pos=None): - if not mmcv.is_tuple_of(crop_size, int): - raise TypeError( - 'Elements of crop_size must be int and crop_size must be' - f' tuple, but got {type(crop_size[0])} in {type(crop_size)}') - if not mmcv.is_tuple_of(crop_pos, int) and (crop_pos is not None): - raise TypeError( - 'Elements of crop_pos must be int and crop_pos must be' - f' tuple or None, but got {type(crop_pos[0])} in ' - f'{type(crop_pos)}') - - self.keys = keys - self.crop_size = crop_size - self.crop_pos = crop_pos - - def _crop(self, data, x_offset, y_offset, crop_w, crop_h): - crop_bbox = [x_offset, y_offset, crop_w, crop_h] - data_ = data[y_offset:y_offset + crop_h, x_offset:x_offset + crop_w, - ...] - return data_, crop_bbox - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - - if isinstance(results[self.keys[0]], list): - data_h, data_w = results[self.keys[0]][0].shape[:2] - else: - data_h, data_w = results[self.keys[0]].shape[:2] - crop_h, crop_w = self.crop_size - crop_h = min(data_h, crop_h) - crop_w = min(data_w, crop_w) - - if self.crop_pos is None: - x_offset = np.random.randint(0, data_w - crop_w + 1) - y_offset = np.random.randint(0, data_h - crop_h + 1) - else: - x_offset, y_offset = self.crop_pos - crop_w = min(data_w - x_offset, crop_w) - crop_h = min(data_h - y_offset, crop_h) - - for k in self.keys: - images = results[k] - is_list = isinstance(images, list) - if not is_list: - images = [images] - cropped_images = [] - crop_bbox = None - for image in images: - # In fixed crop for paired images, sizes should be the same - if (image.shape[0] != data_h or image.shape[1] != data_w): - raise ValueError( - 'The sizes of paired images should be the same. ' - f'Expected ({data_h}, {data_w}), ' - f'but got ({image.shape[0]}, ' - f'{image.shape[1]}).') - data_, crop_bbox = self._crop(image, x_offset, y_offset, - crop_w, crop_h) - cropped_images.append(data_) - results[k + '_crop_bbox'] = crop_bbox - if not is_list: - cropped_images = cropped_images[0] - results[k] = cropped_images - results['crop_size'] = self.crop_size - results['crop_pos'] = self.crop_pos - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'keys={self.keys}, crop_size={self.crop_size}, ' - f'crop_pos={self.crop_pos}') - return repr_str - - -@PIPELINES.register_module() -class PairedRandomCrop: - """Paried random crop. - - It crops a pair of lq and gt images with corresponding locations. - It also supports accepting lq list and gt list. - Required keys are "scale", "lq", and "gt", - added or modified keys are "lq" and "gt". - - Args: - gt_patch_size (int): cropped gt patch size. - """ - - def __init__(self, gt_patch_size): - self.gt_patch_size = gt_patch_size - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. 
- """ - scale = results['scale'] - lq_patch_size = self.gt_patch_size // scale - - lq_is_list = isinstance(results['lq'], list) - if not lq_is_list: - results['lq'] = [results['lq']] - gt_is_list = isinstance(results['gt'], list) - if not gt_is_list: - results['gt'] = [results['gt']] - - h_lq, w_lq, _ = results['lq'][0].shape - h_gt, w_gt, _ = results['gt'][0].shape - - if h_gt != h_lq * scale or w_gt != w_lq * scale: - raise ValueError( - f'Scale mismatches. GT ({h_gt}, {w_gt}) is not {scale}x ' - f'multiplication of LQ ({h_lq}, {w_lq}).') - if h_lq < lq_patch_size or w_lq < lq_patch_size: - raise ValueError( - f'LQ ({h_lq}, {w_lq}) is smaller than patch size ' - f'({lq_patch_size}, {lq_patch_size}). Please check ' - f'{results["lq_path"][0]} and {results["gt_path"][0]}.') - - # randomly choose top and left coordinates for lq patch - top = np.random.randint(h_lq - lq_patch_size + 1) - left = np.random.randint(w_lq - lq_patch_size + 1) - # crop lq patch - results['lq'] = [ - v[top:top + lq_patch_size, left:left + lq_patch_size, ...] - for v in results['lq'] - ] - # crop corresponding gt patch - top_gt, left_gt = int(top * scale), int(left * scale) - results['gt'] = [ - v[top_gt:top_gt + self.gt_patch_size, - left_gt:left_gt + self.gt_patch_size, ...] for v in results['gt'] - ] - - if not lq_is_list: - results['lq'] = results['lq'][0] - if not gt_is_list: - results['gt'] = results['gt'][0] - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(gt_patch_size={self.gt_patch_size})' - return repr_str - - -@PIPELINES.register_module() -class CropAroundCenter: - """Randomly crop the images around unknown area in the center 1/4 images. - - This cropping strategy is adopted in GCA matting. The `unknown area` is the - same as `semi-transparent area`. - https://arxiv.org/pdf/2001.04069.pdf - - It retains the center 1/4 images and resizes the images to 'crop_size'. - Required keys are "fg", "bg", "trimap" and "alpha", added or modified keys - are "crop_bbox", "fg", "bg", "trimap" and "alpha". - - Args: - crop_size (int | tuple): Desired output size. If int, square crop is - applied. - """ - - def __init__(self, crop_size): - if mmcv.is_tuple_of(crop_size, int): - assert len(crop_size) == 2, 'length of crop_size must be 2.' - elif not isinstance(crop_size, int): - raise TypeError('crop_size must be int or a tuple of int, but got ' - f'{type(crop_size)}') - self.crop_size = _pair(crop_size) - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - fg = results['fg'] - alpha = results['alpha'] - trimap = results['trimap'] - bg = results['bg'] - h, w = fg.shape[:2] - assert bg.shape == fg.shape, (f'shape of bg {bg.shape} should be the ' - f'same as fg {fg.shape}.') - - crop_h, crop_w = self.crop_size - # Make sure h >= crop_h, w >= crop_w. 
If not, rescale imgs - rescale_ratio = max(crop_h / h, crop_w / w) - if rescale_ratio > 1: - new_h = max(int(h * rescale_ratio), crop_h) - new_w = max(int(w * rescale_ratio), crop_w) - fg = mmcv.imresize(fg, (new_w, new_h), interpolation='nearest') - alpha = mmcv.imresize( - alpha, (new_w, new_h), interpolation='nearest') - trimap = mmcv.imresize( - trimap, (new_w, new_h), interpolation='nearest') - bg = mmcv.imresize(bg, (new_w, new_h), interpolation='bicubic') - h, w = new_h, new_w - - # resize to 1/4 to ignore small unknown patches - small_trimap = mmcv.imresize( - trimap, (w // 4, h // 4), interpolation='nearest') - # find unknown area in center 1/4 region - margin_h, margin_w = crop_h // 2, crop_w // 2 - sample_area = small_trimap[margin_h // 4:(h - margin_h) // 4, - margin_w // 4:(w - margin_w) // 4] - unknown_xs, unknown_ys = np.where(sample_area == 128) - unknown_num = len(unknown_xs) - if unknown_num < 10: - # too few unknown area in the center, crop from the whole image - top = np.random.randint(0, h - crop_h + 1) - left = np.random.randint(0, w - crop_w + 1) - else: - idx = np.random.randint(unknown_num) - top = unknown_xs[idx] * 4 - left = unknown_ys[idx] * 4 - bottom = top + crop_h - right = left + crop_w - - results['fg'] = fg[top:bottom, left:right] - results['alpha'] = alpha[top:bottom, left:right] - results['trimap'] = trimap[top:bottom, left:right] - results['bg'] = bg[top:bottom, left:right] - results['crop_bbox'] = (left, top, right, bottom) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(crop_size={self.crop_size})' - - -@PIPELINES.register_module() -class CropAroundUnknown: - """Crop around unknown area with a randomly selected scale. - - Randomly select the w and h from a list of (w, h). - Required keys are the keys in argument `keys`, added or - modified keys are "crop_bbox" and the keys in argument `keys`. - This class assumes value of "alpha" ranges from 0 to 255. - - Args: - keys (Sequence[str]): The images to be cropped. It must contain - 'alpha'. If unknown_source is set to 'trimap', then it must also - contain 'trimap'. - crop_sizes (list[int | tuple[int]]): List of (w, h) to be selected. - unknown_source (str, optional): Unknown area to select from. It must be - 'alpha' or 'tirmap'. Default to 'alpha'. - interpolations (str | list[str], optional): Interpolation method of - mmcv.imresize. The interpolation operation will be applied when - image size is smaller than the crop_size. If given as a list of - str, it should have the same length as `keys`. Or if given as a - str all the keys will be resized with the same method. - Default to 'bilinear'. 
- """ - - def __init__(self, - keys, - crop_sizes, - unknown_source='alpha', - interpolations='bilinear'): - if 'alpha' not in keys: - raise ValueError(f'"alpha" must be in keys, but got {keys}') - self.keys = keys - - if not isinstance(crop_sizes, list): - raise TypeError( - f'Crop sizes must be list, but got {type(crop_sizes)}.') - self.crop_sizes = [_pair(crop_size) for crop_size in crop_sizes] - if not mmcv.is_tuple_of(self.crop_sizes[0], int): - raise TypeError('Elements of crop_sizes must be int or tuple of ' - f'int, but got {type(self.crop_sizes[0][0])}.') - - if unknown_source not in ['alpha', 'trimap']: - raise ValueError('unknown_source must be "alpha" or "trimap", ' - f'but got {unknown_source}') - if unknown_source not in keys: - # it could only be trimap, since alpha is checked before - raise ValueError( - 'if unknown_source is "trimap", it must also be set in keys') - self.unknown_source = unknown_source - - if isinstance(interpolations, str): - self.interpolations = [interpolations] * len(self.keys) - elif mmcv.is_list_of(interpolations, - str) and len(interpolations) == len(self.keys): - self.interpolations = interpolations - else: - raise TypeError( - 'interpolations must be a str or list of str with ' - f'the same length as keys, but got {interpolations}') - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - h, w = results[self.keys[0]].shape[:2] - - rand_ind = np.random.randint(len(self.crop_sizes)) - crop_h, crop_w = self.crop_sizes[rand_ind] - - # Make sure h >= crop_h, w >= crop_w. If not, rescale imgs - rescale_ratio = max(crop_h / h, crop_w / w) - if rescale_ratio > 1: - h = max(int(h * rescale_ratio), crop_h) - w = max(int(w * rescale_ratio), crop_w) - for key, interpolation in zip(self.keys, self.interpolations): - results[key] = mmcv.imresize( - results[key], (w, h), interpolation=interpolation) - - # Select the cropping top-left point which is an unknown pixel - if self.unknown_source == 'alpha': - unknown = (results['alpha'] > 0) & (results['alpha'] < 255) - else: - unknown = results['trimap'] == 128 - top, left = random_choose_unknown(unknown.squeeze(), (crop_h, crop_w)) - - bottom = top + crop_h - right = left + crop_w - - for key in self.keys: - results[key] = results[key][top:bottom, left:right] - results['crop_bbox'] = (left, top, right, bottom) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys}, crop_sizes={self.crop_sizes}, ' - f"unknown_source='{self.unknown_source}', " - f'interpolations={self.interpolations})') - return repr_str - - -@PIPELINES.register_module() -class CropAroundFg: - """Crop around the whole foreground in the segmentation mask. - - Required keys are "seg" and the keys in argument `keys`. - Meanwhile, "seg" must be in argument `keys`. Added or modified keys are - "crop_bbox" and the keys in argument `keys`. - - Args: - keys (Sequence[str]): The images to be cropped. It must contain - 'seg'. - bd_ratio_range (tuple, optional): The range of the boundary (bd) ratio - to select from. The boundary ratio is the ratio of the boundary to - the minimal bbox that contains the whole foreground given by - segmentation. Default to (0.1, 0.4). - test_mode (bool): Whether use test mode. In test mode, the tight crop - area of foreground will be extended to the a square. - Default to False. 
- """ - - def __init__(self, keys, bd_ratio_range=(0.1, 0.4), test_mode=False): - if 'seg' not in keys: - raise ValueError(f'"seg" must be in keys, but got {keys}') - if (not mmcv.is_tuple_of(bd_ratio_range, float) - or len(bd_ratio_range) != 2): - raise TypeError('bd_ratio_range must be a tuple of 2 int, but got ' - f'{bd_ratio_range}') - self.keys = keys - self.bd_ratio_range = bd_ratio_range - self.test_mode = test_mode - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - seg = results['seg'] - height, width = seg.shape[:2] - - # get foreground bbox - fg_coor = np.array(np.where(seg)) - top, left = np.amin(fg_coor, axis=1) - bottom, right = np.amax(fg_coor, axis=1) - - # enlarge bbox - long_side = np.maximum(bottom - top, right - left) - if self.test_mode: - bottom = top + long_side - right = left + long_side - boundary_ratio = np.random.uniform(*self.bd_ratio_range) - boundary = int(np.round(boundary_ratio * long_side)) - # NOTE: Different from the original repo, we keep track of the four - # corners of the bbox (left, top, right, bottom) while the original - # repo use (top, left, height, width) to represent bbox. This may - # introduce an difference of 1 pixel. - top = max(top - boundary, 0) - left = max(left - boundary, 0) - bottom = min(bottom + boundary, height) - right = min(right + boundary, width) - - for key in self.keys: - results[key] = results[key][top:bottom, left:right] - results['crop_bbox'] = (left, top, right, bottom) - return results - - -@PIPELINES.register_module() -class ModCrop: - """Mod crop gt images, used during testing. - - Required keys are "scale" and "gt", - added or modified keys are "gt". - """ - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - img = results['gt'].copy() - scale = results['scale'] - if img.ndim in [2, 3]: - h, w = img.shape[0], img.shape[1] - h_remainder, w_remainder = h % scale, w % scale - img = img[:h - h_remainder, :w - w_remainder, ...] - else: - raise ValueError(f'Wrong img ndim: {img.ndim}.') - results['gt'] = img - return results - - -@PIPELINES.register_module() -class CropLike: - """Crop/pad the image in the target_key according to the size of image - in the reference_key . - - Args: - target_key (str): The key needs to be cropped. - reference_key (str | None): The reference key, need its size. - Default: None. - """ - - def __init__(self, target_key, reference_key=None): - - assert reference_key and target_key - self.target_key = target_key - self.reference_key = reference_key - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - Require self.target_key and self.reference_key. - - Returns: - dict: A dict containing the processed data and information. - Modify self.target_key. 
- """ - size = results[self.reference_key].shape - old_image = results[self.target_key] - old_size = old_image.shape - h, w = old_size[:2] - new_size = size[:2] + old_size[2:] - h_cover, w_cover = min(h, size[0]), min(w, size[1]) - - format_image = np.zeros(new_size, dtype=old_image.dtype) - format_image[:h_cover, :w_cover] = old_image[:h_cover, :w_cover] - results[self.target_key] = format_image - - return results - - def __repr__(self): - return (self.__class__.__name__ + f' target_key={self.target_key}, ' + - f'reference_key={self.reference_key}') diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/formating.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/formating.py deleted file mode 100755 index 6ae98c05..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/formating.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Sequence - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC -from torch.nn import functional as F - -from ..registry import PIPELINES - - -def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. - - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, - :class:`Sequence`, :class:`int` and :class:`float`. - """ - if isinstance(data, torch.Tensor): - return data - if isinstance(data, np.ndarray): - return torch.from_numpy(data) - if isinstance(data, Sequence) and not mmcv.is_str(data): - return torch.tensor(data) - if isinstance(data, int): - return torch.LongTensor([data]) - if isinstance(data, float): - return torch.FloatTensor([data]) - - raise TypeError(f'type {type(data)} cannot be converted to tensor.') - - -@PIPELINES.register_module() -class ToTensor: - """Convert some values in results dict to `torch.Tensor` type - in data loader pipeline. - - Args: - keys (Sequence[str]): Required keys to be converted. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - for key in self.keys: - results[key] = to_tensor(results[key]) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class ImageToTensor: - """Convert image type to `torch.Tensor` type. - - Args: - keys (Sequence[str]): Required keys to be converted. - to_float32 (bool): Whether convert numpy image array to np.float32 - before converted to tensor. Default: True. - """ - - def __init__(self, keys, to_float32=True): - self.keys = keys - self.to_float32 = to_float32 - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. 
- """ - for key in self.keys: - # deal with gray scale img: expand a color channel - if len(results[key].shape) == 2: - results[key] = results[key][..., None] - if self.to_float32 and not isinstance(results[key], np.float32): - results[key] = results[key].astype(np.float32) - results[key] = to_tensor(results[key].transpose(2, 0, 1)) - return results - - def __repr__(self): - return self.__class__.__name__ + ( - f'(keys={self.keys}, to_float32={self.to_float32})') - - -@PIPELINES.register_module() -class FramesToTensor(ImageToTensor): - """Convert frames type to `torch.Tensor` type. - - It accepts a list of frames, converts each to `torch.Tensor` type and then - concatenates in a new dimension (dim=0). - - Args: - keys (Sequence[str]): Required keys to be converted. - to_float32 (bool): Whether convert numpy image array to np.float32 - before converted to tensor. Default: True. - """ - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - for key in self.keys: - if not isinstance(results[key], list): - raise TypeError(f'results["{key}"] should be a list, ' - f'but got {type(results[key])}') - for idx, v in enumerate(results[key]): - # deal with gray scale img: expand a color channel - if len(v.shape) == 2: - v = v[..., None] - if self.to_float32 and not isinstance(v, np.float32): - v = v.astype(np.float32) - results[key][idx] = to_tensor(v.transpose(2, 0, 1)) - results[key] = torch.stack(results[key], dim=0) - if results[key].size(0) == 1: - results[key].squeeze_() - return results - - -@PIPELINES.register_module() -class GetMaskedImage: - """Get masked image. - - Args: - img_name (str): Key for clean image. - mask_name (str): Key for mask image. The mask shape should be - (h, w, 1) while '1' indicate holes and '0' indicate valid - regions. - """ - - def __init__(self, img_name='gt_img', mask_name='mask'): - self.img_name = img_name - self.mask_name = mask_name - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - clean_img = results[self.img_name] - mask = results[self.mask_name] - - masked_img = clean_img * (1. - mask) - results['masked_img'] = masked_img - - return results - - def __repr__(self): - return self.__class__.__name__ + ( - f"(img_name='{self.img_name}', mask_name='{self.mask_name}')") - - -@PIPELINES.register_module() -class FormatTrimap: - """Convert trimap (tensor) to one-hot representation. - - It transforms the trimap label from (0, 128, 255) to (0, 1, 2). If - ``to_onehot`` is set to True, the trimap will convert to one-hot tensor of - shape (3, H, W). Required key is "trimap", added or modified key are - "trimap" and "to_onehot". - - Args: - to_onehot (bool): whether convert trimap to one-hot tensor. Default: - ``False``. - """ - - def __init__(self, to_onehot=False): - self.to_onehot = to_onehot - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. 
- """ - trimap = results['trimap'].squeeze() - trimap[trimap == 128] = 1 - trimap[trimap == 255] = 2 - if self.to_onehot: - trimap = F.one_hot(trimap.to(torch.long), num_classes=3) - trimap = trimap.permute(2, 0, 1) - else: - trimap = trimap[None, ...] # expand the channels dimension - results['trimap'] = trimap.float() - results['meta'].data['to_onehot'] = self.to_onehot - return results - - def __repr__(self): - return self.__class__.__name__ + f'(to_onehot={self.to_onehot})' - - -@PIPELINES.register_module() -class Collect: - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "gt_labels". - - The "img_meta" item is always populated. The contents of the "meta" - dictionary depends on "meta_keys". - - Args: - keys (Sequence[str]): Required keys to be collected. - meta_keys (Sequence[str]): Required keys to be collected to "meta". - Default: None. - """ - - def __init__(self, keys, meta_keys=None): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - data = {} - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data['meta'] = DC(img_meta, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - return self.__class__.__name__ + ( - f'(keys={self.keys}, meta_keys={self.meta_keys})') diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/loading.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/loading.py deleted file mode 100755 index c7f16273..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/loading.py +++ /dev/null @@ -1,562 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from pathlib import Path - -import mmcv -import numpy as np -from mmcv.fileio import FileClient - -from mmedit.core.mask import (bbox2mask, brush_stroke_mask, get_irregular_mask, - random_bbox) -from ..registry import PIPELINES - - -@PIPELINES.register_module() -class LoadImageFromFile: - """Load image from file. - - Args: - io_backend (str): io backend where images are store. Default: 'disk'. - key (str): Keys in results to find corresponding path. Default: 'gt'. - flag (str): Loading flag for images. Default: 'color'. - channel_order (str): Order of channel, candidates are 'bgr' and 'rgb'. - Default: 'bgr'. - convert_to (str | None): The color space of the output image. If None, - no conversion is conducted. Default: None. - save_original_img (bool): If True, maintain a copy of the image in - `results` dict with name of `f'ori_{key}'`. Default: False. - use_cache (bool): If True, load all images at once. Default: False. - backend (str): The image loading backend type. Options are `cv2`, - `pillow`, and 'turbojpeg'. Default: None. - kwargs (dict): Args for file client. 
- """ - - def __init__(self, - io_backend='disk', - key='gt', - flag='color', - channel_order='bgr', - convert_to=None, - save_original_img=False, - use_cache=False, - backend=None, - **kwargs): - - self.io_backend = io_backend - self.key = key - self.flag = flag - self.save_original_img = save_original_img - self.channel_order = channel_order - self.convert_to = convert_to - self.kwargs = kwargs - self.file_client = None - self.use_cache = use_cache - self.cache = dict() if use_cache else None - self.backend = backend - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - filepath = str(results[f'{self.key}_path']) - if self.file_client is None: - self.file_client = FileClient(self.io_backend, **self.kwargs) - if self.use_cache: - if filepath in self.cache: - img = self.cache[filepath] - else: - img_bytes = self.file_client.get(filepath) - img = mmcv.imfrombytes( - img_bytes, - flag=self.flag, - channel_order=self.channel_order, - backend=self.backend) # HWC - self.cache[filepath] = img - else: - img_bytes = self.file_client.get(filepath) - img = mmcv.imfrombytes( - img_bytes, - flag=self.flag, - channel_order=self.channel_order, - backend=self.backend) # HWC - - if self.convert_to is not None: - if self.channel_order == 'bgr' and self.convert_to.lower() == 'y': - img = mmcv.bgr2ycbcr(img, y_only=True) - elif self.channel_order == 'rgb': - img = mmcv.rgb2ycbcr(img, y_only=True) - else: - raise ValueError('Currently support only "bgr2ycbcr" or ' - '"bgr2ycbcr".') - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - - results[self.key] = img - results[f'{self.key}_path'] = filepath - results[f'{self.key}_ori_shape'] = img.shape - if self.save_original_img: - results[f'ori_{self.key}'] = img.copy() - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += ( - f'(io_backend={self.io_backend}, key={self.key}, ' - f'flag={self.flag}, save_original_img={self.save_original_img}, ' - f'channel_order={self.channel_order}, use_cache={self.use_cache})') - return repr_str - - -@PIPELINES.register_module() -class LoadImageFromFileList(LoadImageFromFile): - """Load image from file list. - - It accepts a list of path and read each frame from each path. A list - of frames will be returned. - - Args: - io_backend (str): io backend where images are store. Default: 'disk'. - key (str): Keys in results to find corresponding path. Default: 'gt'. - flag (str): Loading flag for images. Default: 'color'. - channel_order (str): Order of channel, candidates are 'bgr' and 'rgb'. - Default: 'bgr'. - convert_to (str | None): The color space of the output image. If None, - no conversion is conducted. Default: None. - save_original_img (bool): If True, maintain a copy of the image in - `results` dict with name of `f'ori_{key}'`. Default: False. - use_cache (bool): If True, load all images at once. Default: False. - backend (str): The image loading backend type. Options are `cv2`, - `pillow`, and 'turbojpeg'. Default: None. - kwargs (dict): Args for file client. - """ - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. 
- """ - - if self.file_client is None: - self.file_client = FileClient(self.io_backend, **self.kwargs) - filepaths = results[f'{self.key}_path'] - if not isinstance(filepaths, list): - raise TypeError( - f'filepath should be list, but got {type(filepaths)}') - - filepaths = [str(v) for v in filepaths] - - imgs = [] - shapes = [] - if self.save_original_img: - ori_imgs = [] - for filepath in filepaths: - if self.use_cache: - if filepath in self.cache: - img = self.cache[filepath] - else: - img_bytes = self.file_client.get(filepath) - img = mmcv.imfrombytes( - img_bytes, - flag=self.flag, - channel_order=self.channel_order, - backend=self.backend) # HWC - self.cache[filepath] = img - else: - img_bytes = self.file_client.get(filepath) - img = mmcv.imfrombytes( - img_bytes, - flag=self.flag, - channel_order=self.channel_order, - backend=self.backend) # HWC - - # convert to y-channel, if specified - if self.convert_to is not None: - if self.channel_order == 'bgr' and self.convert_to.lower( - ) == 'y': - img = mmcv.bgr2ycbcr(img, y_only=True) - elif self.channel_order == 'rgb': - img = mmcv.rgb2ycbcr(img, y_only=True) - else: - raise ValueError('Currently support only "bgr2ycbcr" or ' - '"bgr2ycbcr".') - - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - - imgs.append(img) - shapes.append(img.shape) - if self.save_original_img: - ori_imgs.append(img.copy()) - - results[self.key] = imgs - results[f'{self.key}_path'] = filepaths - results[f'{self.key}_ori_shape'] = shapes - if self.save_original_img: - results[f'ori_{self.key}'] = ori_imgs - - return results - - -@PIPELINES.register_module() -class RandomLoadResizeBg: - """Randomly load a background image and resize it. - - Required key is "fg", added key is "bg". - - Args: - bg_dir (str): Path of directory to load background images from. - io_backend (str): io backend where images are store. Default: 'disk'. - flag (str): Loading flag for images. Default: 'color'. - channel_order (str): Order of channel, candidates are 'bgr' and 'rgb'. - Default: 'bgr'. - kwargs (dict): Args for file client. - """ - - def __init__(self, - bg_dir, - io_backend='disk', - flag='color', - channel_order='bgr', - **kwargs): - self.bg_dir = bg_dir - self.bg_list = list(mmcv.scandir(bg_dir)) - self.io_backend = io_backend - self.flag = flag - self.channel_order = channel_order - self.kwargs = kwargs - self.file_client = None - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - if self.file_client is None: - self.file_client = FileClient(self.io_backend, **self.kwargs) - h, w = results['fg'].shape[:2] - idx = np.random.randint(len(self.bg_list)) - filepath = Path(self.bg_dir).joinpath(self.bg_list[idx]) - img_bytes = self.file_client.get(filepath) - img = mmcv.imfrombytes( - img_bytes, flag=self.flag, channel_order=self.channel_order) # HWC - bg = mmcv.imresize(img, (w, h), interpolation='bicubic') - results['bg'] = bg - return results - - def __repr__(self): - return self.__class__.__name__ + f"(bg_dir='{self.bg_dir}')" - - -@PIPELINES.register_module() -class LoadMask: - """Load Mask for multiple types. - - For different types of mask, users need to provide the corresponding - config dict. - - Example config for bbox: - - .. code-block:: python - - config = dict(img_shape=(256, 256), max_bbox_shape=128) - - Example config for irregular: - - .. 
code-block:: python - - config = dict( - img_shape=(256, 256), - num_vertices=(4, 12), - max_angle=4., - length_range=(10, 100), - brush_width=(10, 40), - area_ratio_range=(0.15, 0.5)) - - Example config for ff: - - .. code-block:: python - - config = dict( - img_shape=(256, 256), - num_vertices=(4, 12), - mean_angle=1.2, - angle_range=0.4, - brush_width=(12, 40)) - - Example config for set: - - .. code-block:: python - - config = dict( - mask_list_file='xxx/xxx/ooxx.txt', - prefix='/xxx/xxx/ooxx/', - io_backend='disk', - flag='unchanged', - file_client_kwargs=dict() - ) - - The mask_list_file contains the list of mask file name like this: - test1.jpeg - test2.jpeg - ... - ... - - The prefix gives the data path. - - Args: - mask_mode (str): Mask mode in ['bbox', 'irregular', 'ff', 'set', - 'file']. - * bbox: square bounding box masks. - * irregular: irregular holes. - * ff: free-form holes from DeepFillv2. - * set: randomly get a mask from a mask set. - * file: get mask from 'mask_path' in results. - mask_config (dict): Params for creating masks. Each type of mask needs - different configs. - """ - - def __init__(self, mask_mode='bbox', mask_config=None): - self.mask_mode = mask_mode - self.mask_config = dict() if mask_config is None else mask_config - assert isinstance(self.mask_config, dict) - - # set init info if needed in some modes - self._init_info() - - def _init_info(self): - if self.mask_mode == 'set': - # get mask list information - self.mask_list = [] - mask_list_file = self.mask_config['mask_list_file'] - with open(mask_list_file, 'r') as f: - for line in f: - line_split = line.strip().split(' ') - mask_name = line_split[0] - self.mask_list.append( - Path(self.mask_config['prefix']).joinpath(mask_name)) - self.mask_set_size = len(self.mask_list) - self.io_backend = self.mask_config['io_backend'] - self.flag = self.mask_config['flag'] - self.file_client_kwargs = self.mask_config['file_client_kwargs'] - self.file_client = None - elif self.mask_mode == 'file': - self.io_backend = 'disk' - self.flag = 'unchanged' - self.file_client_kwargs = dict() - self.file_client = None - - def _get_random_mask_from_set(self): - if self.file_client is None: - self.file_client = FileClient(self.io_backend, - **self.file_client_kwargs) - # minus 1 to avoid out of range error - mask_idx = np.random.randint(0, self.mask_set_size) - mask_bytes = self.file_client.get(self.mask_list[mask_idx]) - mask = mmcv.imfrombytes(mask_bytes, flag=self.flag) # HWC, BGR - if mask.ndim == 2: - mask = np.expand_dims(mask, axis=2) - else: - mask = mask[:, :, 0:1] - - mask[mask > 0] = 1. - return mask - - def _get_mask_from_file(self, path): - if self.file_client is None: - self.file_client = FileClient(self.io_backend, - **self.file_client_kwargs) - mask_bytes = self.file_client.get(path) - mask = mmcv.imfrombytes(mask_bytes, flag=self.flag) # HWC, BGR - if mask.ndim == 2: - mask = np.expand_dims(mask, axis=2) - else: - mask = mask[:, :, 0:1] - - mask[mask > 0] = 1. - return mask - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. 
- """ - - if self.mask_mode == 'bbox': - mask_bbox = random_bbox(**self.mask_config) - mask = bbox2mask(self.mask_config['img_shape'], mask_bbox) - results['mask_bbox'] = mask_bbox - elif self.mask_mode == 'irregular': - mask = get_irregular_mask(**self.mask_config) - elif self.mask_mode == 'set': - mask = self._get_random_mask_from_set() - elif self.mask_mode == 'ff': - mask = brush_stroke_mask(**self.mask_config) - elif self.mask_mode == 'file': - mask = self._get_mask_from_file(results['mask_path']) - else: - raise NotImplementedError( - f'Mask mode {self.mask_mode} has not been implemented.') - results['mask'] = mask - return results - - def __repr__(self): - return self.__class__.__name__ + f"(mask_mode='{self.mask_mode}')" - - -@PIPELINES.register_module() -class GetSpatialDiscountMask: - """Get spatial discounting mask constant. - - Spatial discounting mask is first introduced in: - Generative Image Inpainting with Contextual Attention. - - Args: - gamma (float, optional): Gamma for computing spatial discounting. - Defaults to 0.99. - beta (float, optional): Beta for computing spatial discounting. - Defaults to 1.5. - """ - - def __init__(self, gamma=0.99, beta=1.5): - self.gamma = gamma - self.beta = beta - - def spatial_discount_mask(self, mask_width, mask_height): - """Generate spatial discounting mask constant. - - Args: - mask_width (int): The width of bbox hole. - mask_height (int): The height of bbox height. - - Returns: - np.ndarray: Spatial discounting mask. - """ - w, h = np.meshgrid(np.arange(mask_width), np.arange(mask_height)) - grid_stack = np.stack([h, w], axis=2) - mask_values = (self.gamma**(np.minimum( - grid_stack, [mask_height - 1, mask_width - 1] - grid_stack) * - self.beta)).max( - axis=2, keepdims=True) - - return mask_values - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - - mask_bbox = results['mask_bbox'] - mask = results['mask'] - mask_height, mask_width = mask_bbox[-2:] - discount_hole = self.spatial_discount_mask(mask_width, mask_height) - discount_mask = np.zeros_like(mask) - discount_mask[mask_bbox[0]:mask_bbox[0] + mask_height, - mask_bbox[1]:mask_bbox[1] + mask_width, - ...] = discount_hole - - results['discount_mask'] = discount_mask - - return results - - def __repr__(self): - return self.__class__.__name__ + (f'(gamma={self.gamma}, ' - f'beta={self.beta})') - - -@PIPELINES.register_module() -class LoadPairedImageFromFile(LoadImageFromFile): - """Load a pair of images from file. - - Each sample contains a pair of images, which are concatenated in the w - dimension (a|b). This is a special loading class for generation paired - dataset. It loads a pair of images as the common loader does and crops - it into two images with the same shape in different domains. - - Required key is "pair_path". Added or modified keys are "pair", - "pair_ori_shape", "ori_pair", "img_a", "img_b", "img_a_path", - "img_b_path", "img_a_ori_shape", "img_b_ori_shape", "ori_img_a" and - "ori_img_b". - - Args: - io_backend (str): io backend where images are store. Default: 'disk'. - key (str): Keys in results to find corresponding path. Default: 'gt'. - flag (str): Loading flag for images. Default: 'color'. - channel_order (str): Order of channel, candidates are 'bgr' and 'rgb'. - Default: 'bgr'. 
- save_original_img (bool): If True, maintain a copy of the image in - `results` dict with name of `f'ori_{key}'`. Default: False. - kwargs (dict): Args for file client. - """ - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - if self.file_client is None: - self.file_client = FileClient(self.io_backend, **self.kwargs) - filepath = str(results[f'{self.key}_path']) - img_bytes = self.file_client.get(filepath) - img = mmcv.imfrombytes( - img_bytes, flag=self.flag, channel_order=self.channel_order) # HWC - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - - results[self.key] = img - results[f'{self.key}_path'] = filepath - results[f'{self.key}_ori_shape'] = img.shape - if self.save_original_img: - results[f'ori_{self.key}'] = img.copy() - - # crop pair into a and b - w = img.shape[1] - if w % 2 != 0: - raise ValueError( - f'The width of image pair must be even number, but got {w}.') - new_w = w // 2 - img_a = img[:, :new_w, :] - img_b = img[:, new_w:, :] - - results['img_a'] = img_a - results['img_b'] = img_b - results['img_a_path'] = filepath - results['img_b_path'] = filepath - results['img_a_ori_shape'] = img_a.shape - results['img_b_ori_shape'] = img_b.shape - if self.save_original_img: - results['ori_img_a'] = img_a.copy() - results['ori_img_b'] = img_b.copy() - - return results diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/matlab_like_resize.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/matlab_like_resize.py deleted file mode 100755 index 5de48f6f..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/matlab_like_resize.py +++ /dev/null @@ -1,275 +0,0 @@ -# This code is referenced from matlab_imresize with modifications -# Reference: https://github.com/fatheral/matlab_imresize/blob/master/imresize.py # noqa -# Original licence: Copyright (c) 2020 fatheral, under the MIT License. -import numpy as np - -from ..registry import PIPELINES - - -def get_size_from_scale(input_size, scale_factor): - """Get the output size given input size and scale factor. - - Args: - input_size (tuple): The size of the input image. - scale_factor (float): The resize factor. - - Returns: - list[int]: The size of the output image. - """ - - output_shape = [ - int(np.ceil(scale * shape)) - for (scale, shape) in zip(scale_factor, input_size) - ] - - return output_shape - - -def get_scale_from_size(input_size, output_size): - """Get the scale factor given input size and output size. - - Args: - input_size (tuple(int)): The size of the input image. - output_size (tuple(int)): The size of the output image. - - Returns: - list[float]: The scale factor of each dimension. - """ - - scale = [ - 1.0 * output_shape / input_shape - for (input_shape, output_shape) in zip(input_size, output_size) - ] - - return scale - - -def _cubic(x): - """ Cubic function. - - Args: - x (ndarray): The distance from the center position. - - Returns: - ndarray: The weight corresponding to a particular distance. 
- - """ - - x = np.array(x, dtype=np.float32) - x_abs = np.abs(x) - x_abs_sq = x_abs**2 - x_abs_cu = x_abs_sq * x_abs - - # if |x| <= 1: y = 1.5|x|^3 - 2.5|x|^2 + 1 - # if 1 < |x| <= 2: -0.5|x|^3 + 2.5|x|^2 - 4|x| + 2 - f = (1.5 * x_abs_cu - 2.5 * x_abs_sq + 1) * (x_abs <= 1) + ( - -0.5 * x_abs_cu + 2.5 * x_abs_sq - 4 * x_abs + 2) * ((1 < x_abs) & - (x_abs <= 2)) - - return f - - -def get_weights_indices(input_length, output_length, scale, kernel, - kernel_width): - """Get weights and indices for interpolation. - - Args: - input_length (int): Length of the input sequence. - output_length (int): Length of the output sequence. - scale (float): Scale factor. - kernel (func): The kernel used for resizing. - kernel_width (int): The width of the kernel. - - Returns: - list[ndarray]: The weights and the indices for interpolation. - - - """ - if scale < 1: # modified kernel for antialiasing - - def h(x): - return scale * kernel(scale * x) - - kernel_width = 1.0 * kernel_width / scale - else: - h = kernel - kernel_width = kernel_width - - # coordinates of output - x = np.arange(1, output_length + 1).astype(np.float32) - - # coordinates of input - u = x / scale + 0.5 * (1 - 1 / scale) - left = np.floor(u - kernel_width / 2) # leftmost pixel - p = int(np.ceil(kernel_width)) + 2 # maximum number of pixels - - # indices of input pixels - ind = left[:, np.newaxis, ...] + np.arange(p) - indices = ind.astype(np.int32) - - # weights of input pixels - weights = h(u[:, np.newaxis, ...] - indices - 1) - - weights = weights / np.sum(weights, axis=1)[:, np.newaxis, ...] - - # remove all-zero columns - aux = np.concatenate( - (np.arange(input_length), np.arange(input_length - 1, -1, - step=-1))).astype(np.int32) - indices = aux[np.mod(indices, aux.size)] - ind2store = np.nonzero(np.any(weights, axis=0)) - weights = weights[:, ind2store] - indices = indices[:, ind2store] - - return weights, indices - - -def resize_along_dim(img_in, weights, indices, dim): - """Resize along a specific dimension. - - Args: - img_in (ndarray): The input image. - weights (ndarray): The weights used for interpolation, computed from - [get_weights_indices]. - indices (ndarray): The indices used for interpolation, computed from - [get_weights_indices]. - dim (int): Which dimension to undergo interpolation. - - Returns: - ndarray: Interpolated (along one dimension) image. - """ - - img_in = img_in.astype(np.float32) - w_shape = weights.shape - output_shape = list(img_in.shape) - output_shape[dim] = w_shape[0] - img_out = np.zeros(output_shape) - - if dim == 0: - for i in range(w_shape[0]): - w = weights[i, :][np.newaxis, ...] - ind = indices[i, :] - img_slice = img_in[ind, :] - img_out[i] = np.sum(np.squeeze(img_slice, axis=0) * w.T, axis=0) - elif dim == 1: - for i in range(w_shape[0]): - w = weights[i, :][:, :, np.newaxis] - ind = indices[i, :] - img_slice = img_in[:, ind] - img_out[:, i] = np.sum(np.squeeze(img_slice, axis=1) * w.T, axis=1) - - if img_in.dtype == np.uint8: - img_out = np.clip(img_out, 0, 255) - return np.around(img_out).astype(np.uint8) - else: - return img_out - - -@PIPELINES.register_module() -class MATLABLikeResize: - """Resize the input image using MATLAB-like downsampling. - - Currently support bicubic interpolation only. Note that the output of - this function is slightly different from the official MATLAB function. - - Required keys are the keys in attribute "keys". Added or modified keys - are "scale" and "output_shape", and the keys in attribute "keys". 
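As a quick check of the piecewise bicubic kernel implemented by `_cubic` above, here is a standalone numpy sketch (illustrative only; the sample values are assumed for demonstration):

```python
import numpy as np

def cubic(x):
    # w(x) = 1.5|x|^3 - 2.5|x|^2 + 1          for |x| <= 1
    # w(x) = -0.5|x|^3 + 2.5|x|^2 - 4|x| + 2  for 1 < |x| <= 2, else 0
    x = np.abs(np.asarray(x, dtype=np.float32))
    return np.where(x <= 1, 1.5 * x**3 - 2.5 * x**2 + 1,
                    np.where(x <= 2, -0.5 * x**3 + 2.5 * x**2 - 4 * x + 2, 0.0))

print(cubic([0.0, 0.5, 1.0, 1.5, 2.5]))  # [1.  0.5625  0.  -0.0625  0.]
```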
- - Args: - keys (list[str]): A list of keys whose values are modified. - scale (float | None, optional): The scale factor of the resize - operation. If None, it will be determined by output_shape. - Default: None. - output_shape (tuple(int) | None, optional): The size of the output - image. If None, it will be determined by scale. Note that if - scale is provided, output_shape will not be used. - Default: None. - kernel (str, optional): The kernel for the resize operation. - Currently support 'bicubic' only. Default: 'bicubic'. - kernel_width (float): The kernel width. Currently support 4.0 only. - Default: 4.0. - """ - - def __init__(self, - keys, - scale=None, - output_shape=None, - kernel='bicubic', - kernel_width=4.0): - - if kernel.lower() != 'bicubic': - raise ValueError('Currently support bicubic kernel only.') - - if float(kernel_width) != 4.0: - raise ValueError('Current support only width=4 only.') - - if scale is None and output_shape is None: - raise ValueError('"scale" and "output_shape" cannot be both None') - - self.kernel_func = _cubic - self.keys = keys - self.scale = scale - self.output_shape = output_shape - self.kernel = kernel - self.kernel_width = kernel_width - - def _resize(self, img): - weights = {} - indices = {} - - # compute scale and output_size - if self.scale is not None: - scale = float(self.scale) - scale = [scale, scale] - output_size = get_size_from_scale(img.shape, scale) - else: - scale = get_scale_from_size(img.shape, self.output_shape) - output_size = list(self.output_shape) - - # apply cubic interpolation along two dimensions - order = np.argsort(np.array(scale)) - for k in range(2): - key = (img.shape[k], output_size[k], scale[k], self.kernel_func, - self.kernel_width) - weight, index = get_weights_indices(img.shape[k], output_size[k], - scale[k], self.kernel_func, - self.kernel_width) - weights[key] = weight - indices[key] = index - - output = np.copy(img) - if output.ndim == 2: # grayscale image - output = output[:, :, np.newaxis] - - for k in range(2): - dim = order[k] - key = (img.shape[dim], output_size[dim], scale[dim], - self.kernel_func, self.kernel_width) - output = resize_along_dim(output, weights[key], indices[key], dim) - - return output - - def __call__(self, results): - for key in self.keys: - is_single_image = False - if isinstance(results[key], np.ndarray): - is_single_image = True - results[key] = [results[key]] - - results[key] = [self._resize(img) for img in results[key]] - - if is_single_image: - results[key] = results[key][0] - - results['scale'] = self.scale - results['output_shape'] = self.output_shape - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += ( - f'(keys={self.keys}, scale={self.scale}, ' - f'output_shape={self.output_shape}, ' - f'kernel={self.kernel}, kernel_width={self.kernel_width})') - return repr_str diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/normalization.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/normalization.py deleted file mode 100755 index 8ff774d7..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/normalization.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import numpy as np - -from ..registry import PIPELINES - - -@PIPELINES.register_module() -class Normalize: - """Normalize images with the given mean and std value. 
- - Required keys are the keys in attribute "keys", added or modified keys are - the keys in attribute "keys" and these keys with postfix '_norm_cfg'. - It also supports normalizing a list of images. - - Args: - keys (Sequence[str]): The images to be normalized. - mean (np.ndarray): Mean values of different channels. - std (np.ndarray): Std values of different channels. - to_rgb (bool): Whether to convert channels from BGR to RGB. - """ - - def __init__(self, keys, mean, std, to_rgb=False, save_original=False): - self.keys = keys - self.mean = np.array(mean, dtype=np.float32) - self.std = np.array(std, dtype=np.float32) - self.to_rgb = to_rgb - self.save_original = save_original - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - for key in self.keys: - if isinstance(results[key], list): - if self.save_original: - results[key + '_unnormalised'] = [ - v.copy() for v in results[key] - ] - results[key] = [ - mmcv.imnormalize(v, self.mean, self.std, self.to_rgb) - for v in results[key] - ] - else: - if self.save_original: - results[key + '_unnormalised'] = results[key].copy() - results[key] = mmcv.imnormalize(results[key], self.mean, - self.std, self.to_rgb) - - results['img_norm_cfg'] = dict( - mean=self.mean, std=self.std, to_rgb=self.to_rgb) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(keys={self.keys}, mean={self.mean}, std={self.std}, ' - f'to_rgb={self.to_rgb})') - - return repr_str - - -@PIPELINES.register_module() -class RescaleToZeroOne: - """Transform the images into a range between 0 and 1. - - Required keys are the keys in attribute "keys", added or modified keys are - the keys in attribute "keys". - It also supports rescaling a list of images. - - Args: - keys (Sequence[str]): The images to be transformed. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function. - - Args: - results (dict): A dict containing the necessary information and - data for augmentation. - - Returns: - dict: A dict containing the processed data and information. - """ - for key in self.keys: - if isinstance(results[key], list): - results[key] = [ - v.astype(np.float32) / 255. for v in results[key] - ] - else: - results[key] = results[key].astype(np.float32) / 255. - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/utils.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/utils.py deleted file mode 100755 index 42d470c3..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/pipelines/utils.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
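For reference, the two pipeline classes shown above (`RescaleToZeroOne` and `Normalize`) amount to a rescale into [0, 1] followed by per-channel standardisation. A minimal standalone numpy sketch of the same arithmetic (illustrative only, not the mmedit API; the mean/std values are the usual ImageNet statistics, assumed here for demonstration):

```python
import numpy as np

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)

# RescaleToZeroOne: map pixel values into [0, 1]
img01 = img.astype(np.float32) / 255.

# Normalize: subtract per-channel mean, divide by per-channel std
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
normalized = (img01 - mean) / std

print(normalized.shape, normalized.dtype)  # (64, 64, 3) float32
```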
-import logging - -import numpy as np -import torch -from mmcv.utils import print_log - -_integer_types = ( - np.byte, - np.ubyte, # 8 bits - np.short, - np.ushort, # 16 bits - np.intc, - np.uintc, # 16 or 32 or 64 bits - np.int_, - np.uint, # 32 or 64 bits - np.longlong, - np.ulonglong) # 64 bits - -_integer_ranges = { - t: (np.iinfo(t).min, np.iinfo(t).max) - for t in _integer_types -} - -dtype_range = { - np.bool_: (False, True), - np.bool8: (False, True), - np.float16: (-1, 1), - np.float32: (-1, 1), - np.float64: (-1, 1) -} -dtype_range.update(_integer_ranges) - - -def dtype_limits(image, clip_negative=False): - """Return intensity limits, i.e. (min, max) tuple, of the image's dtype. - - This function is adopted from skimage: - https://github.com/scikit-image/scikit-image/blob/ - 7e4840bd9439d1dfb6beaf549998452c99f97fdd/skimage/util/dtype.py#L35 - - Args: - image (ndarray): Input image. - clip_negative (bool, optional): If True, clip the negative range - (i.e. return 0 for min intensity) even if the image dtype allows - negative values. - - Returns - tuple: Lower and upper intensity limits. - """ - imin, imax = dtype_range[image.dtype.type] - if clip_negative: - imin = 0 - return imin, imax - - -def adjust_gamma(image, gamma=1, gain=1): - """Performs Gamma Correction on the input image. - - This function is adopted from skimage: - https://github.com/scikit-image/scikit-image/blob/ - 7e4840bd9439d1dfb6beaf549998452c99f97fdd/skimage/exposure/ - exposure.py#L439-L494 - - Also known as Power Law Transform. - This function transforms the input image pixelwise according to the - equation ``O = I**gamma`` after scaling each pixel to the range 0 to 1. - - Args: - image (ndarray): Input image. - gamma (float, optional): Non negative real number. Defaults to 1. - gain (float, optional): The constant multiplier. Defaults to 1. - - Returns: - ndarray: Gamma corrected output image. - """ - if np.any(image < 0): - raise ValueError('Image Correction methods work correctly only on ' - 'images with non-negative values. Use ' - 'skimage.exposure.rescale_intensity.') - - dtype = image.dtype.type - - if gamma < 0: - raise ValueError('Gamma should be a non-negative real number.') - - scale = float(dtype_limits(image, True)[1] - dtype_limits(image, True)[0]) - - out = ((image / scale)**gamma) * scale * gain - return out.astype(dtype) - - -def random_choose_unknown(unknown, crop_size): - """Randomly choose an unknown start (top-left) point for a given crop_size. - - Args: - unknown (np.ndarray): The binary unknown mask. - crop_size (tuple[int]): The given crop size. - - Returns: - tuple[int]: The top-left point of the chosen bbox. 
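A standalone numpy sketch of the power-law transform performed by `adjust_gamma` above for a uint8 image (illustrative; the input values are assumed for demonstration, and `scale` is the uint8 intensity range returned by `dtype_limits`):

```python
import numpy as np

image = np.array([[0, 64, 128, 255]], dtype=np.uint8)
gamma, gain = 2.0, 1.0
scale = 255.0                                   # intensity range of uint8
out = (((image / scale) ** gamma) * scale * gain).astype(np.uint8)
print(out)  # [[  0  16  64 255]]
```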
- """ - h, w = unknown.shape - crop_h, crop_w = crop_size - delta_h = center_h = crop_h // 2 - delta_w = center_w = crop_w // 2 - - # mask out the validate area for selecting the cropping center - mask = np.zeros_like(unknown) - mask[delta_h:h - delta_h, delta_w:w - delta_w] = 1 - if np.any(unknown & mask): - center_h_list, center_w_list = np.where(unknown & mask) - elif np.any(unknown): - center_h_list, center_w_list = np.where(unknown) - else: - print_log('No unknown pixels found!', level=logging.WARNING) - center_h_list = [center_h] - center_w_list = [center_w] - num_unknowns = len(center_h_list) - rand_ind = np.random.randint(num_unknowns) - center_h = center_h_list[rand_ind] - center_w = center_w_list[rand_ind] - - # make sure the top-left point is valid - top = np.clip(center_h - delta_h, 0, h - crop_h) - left = np.clip(center_w - delta_w, 0, w - crop_w) - - return top, left - - -def make_coord(shape, ranges=None, flatten=True): - """ Make coordinates at grid centers. - - Args: - shape (tuple): shape of image. - ranges (tuple): range of coordinate value. Default: None. - flatten (bool): flatten to (n, 2) or Not. Default: True. - - return: - coord (Tensor): coordinates. - """ - coord_seqs = [] - for i, n in enumerate(shape): - if ranges is None: - v0, v1 = -1, 1 - else: - v0, v1 = ranges[i] - r = (v1 - v0) / (2 * n) - seq = v0 + r + (2 * r) * torch.arange(n).float() - coord_seqs.append(seq) - coord = torch.stack(torch.meshgrid(*coord_seqs), dim=-1) - if flatten: - coord = coord.view(-1, coord.shape[-1]) - return coord diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/registry.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/registry.py deleted file mode 100755 index 984580ef..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/registry.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.utils import Registry - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/samplers/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/samplers/__init__.py deleted file mode 100755 index da09effa..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/samplers/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .distributed_sampler import DistributedSampler - -__all__ = ['DistributedSampler'] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/samplers/distributed_sampler.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/samplers/distributed_sampler.py deleted file mode 100755 index 7e800c81..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/samplers/distributed_sampler.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from __future__ import division -import math - -import torch -from torch.utils.data import DistributedSampler as _DistributedSampler - -from mmedit.core.utils import sync_random_seed - - -class DistributedSampler(_DistributedSampler): - """DistributedSampler inheriting from `torch.utils.data.DistributedSampler`. - - In pytorch of lower versions, there is no `shuffle` argument. This child - class will port one to DistributedSampler. 
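To make the behaviour of `make_coord` above concrete, a standalone sketch for a 2x2 grid (illustrative; it repeats the same computation outside mmedit and prints the cell-centre coordinates in [-1, 1]):

```python
import torch

shape = (2, 2)                      # a 2x2 grid
seqs = []
for n in shape:
    r = 2 / (2 * n)                 # half of one cell width in [-1, 1]
    seqs.append(-1 + r + 2 * r * torch.arange(n).float())
coord = torch.stack(torch.meshgrid(*seqs), dim=-1).view(-1, 2)
print(coord)
# tensor([[-0.5000, -0.5000],
#         [-0.5000,  0.5000],
#         [ 0.5000, -0.5000],
#         [ 0.5000,  0.5000]])
```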
- """ - - def __init__(self, - dataset, - num_replicas=None, - rank=None, - shuffle=True, - samples_per_gpu=1, - seed=0): - super().__init__(dataset, num_replicas=num_replicas, rank=rank) - self.shuffle = shuffle - self.samples_per_gpu = samples_per_gpu - # fix the bug of the official implementation - self.num_samples_per_replica = int( - math.ceil( - len(self.dataset) * 1.0 / self.num_replicas / samples_per_gpu)) - self.num_samples = self.num_samples_per_replica * self.samples_per_gpu - self.total_size = self.num_samples * self.num_replicas - - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. - self.seed = sync_random_seed(seed) - - # to avoid padding bug when meeting too small dataset - if len(dataset) < self.num_replicas * samples_per_gpu: - raise ValueError( - 'You may use too small dataset and our distributed ' - 'sampler cannot pad your dataset correctly. We highly ' - 'recommend you to use fewer GPUs to finish your work') - - def __iter__(self): - # deterministically shuffle based on epoch - if self.shuffle: - g = torch.Generator() - # When :attr:`shuffle=True`, this ensures all replicas - # use a different random ordering for each epoch. - # Otherwise, the next iteration of this sampler will - # yield the same ordering. - g.manual_seed(self.epoch + self.seed) - indices = torch.randperm(len(self.dataset), generator=g).tolist() - else: - indices = torch.arange(len(self.dataset)).tolist() - - # add extra samples to make it evenly divisible - indices += indices[:(self.total_size - len(indices))] - assert len(indices) == self.total_size - - # subsample - indices = indices[self.rank:self.total_size:self.num_replicas] - assert len(indices) == self.num_samples - - return iter(indices) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/sr_reds_multiple_gt_dataset.py b/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/sr_reds_multiple_gt_dataset.py deleted file mode 100755 index 1653f9de..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/datasets/sr_reds_multiple_gt_dataset.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_sr_dataset import BaseSRDataset -from .registry import DATASETS - - -@DATASETS.register_module() -class SRREDSMultipleGTDataset(BaseSRDataset): - """REDS dataset for video super resolution for recurrent networks. - - The dataset loads several LQ (Low-Quality) frames and GT (Ground-Truth) - frames. Then it applies specified transforms and finally returns a dict - containing paired data and other information. - - Args: - lq_folder (str | :obj:`Path`): Path to a lq folder. - gt_folder (str | :obj:`Path`): Path to a gt folder. - num_input_frames (int): Number of input frames. - pipeline (list[dict | callable]): A sequence of data transformations. - scale (int): Upsampling scale ratio. - val_partition (str): Validation partition mode. Choices ['official' or - 'REDS4']. Default: 'official'. - repeat (int): Number of replication of the validation set. This is used - to allow training REDS4 with more than 4 GPUs. For example, if - 8 GPUs are used, this number can be set to 2. Default: 1. - test_mode (bool): Store `True` when building test dataset. - Default: `False`. 
- """ - - def __init__(self, - lq_folder, - gt_folder, - num_input_frames, - pipeline, - scale, - val_partition='official', - repeat=1, - test_mode=False): - - self.repeat = repeat - if not isinstance(repeat, int): - raise TypeError('"repeat" must be an integer, but got ' - f'{type(repeat)}.') - - super().__init__(pipeline, scale, test_mode) - self.lq_folder = str(lq_folder) - self.gt_folder = str(gt_folder) - self.num_input_frames = num_input_frames - self.val_partition = val_partition - self.data_infos = self.load_annotations() - - def load_annotations(self): - """Load annotations for REDS dataset. - - Returns: - list[dict]: A list of dicts for paired paths and other information. - """ - # generate keys - keys = [f'{i:03d}' for i in range(0, 270)] - - if self.val_partition == 'REDS4': - val_partition = ['000', '011', '015', '020'] - elif self.val_partition == 'official': - val_partition = [f'{i:03d}' for i in range(240, 270)] - else: - raise ValueError( - f'Wrong validation partition {self.val_partition}.' - f'Supported ones are ["official", "REDS4"]') - - if self.test_mode: - keys = [v for v in keys if v in val_partition] - keys *= self.repeat - else: - keys = [v for v in keys if v not in val_partition] - - data_infos = [] - for key in keys: - data_infos.append( - dict( - lq_path=self.lq_folder, - gt_path=self.gt_folder, - key=key, - sequence_length=100, # REDS has 100 frames for each clip - num_input_frames=self.num_input_frames)) - - return data_infos diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/__init__.py deleted file mode 100755 index c6c6bf58..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .backbones import * # noqa: F401, F403 -from .base import BaseModel -from .builder import (build, build_backbone, build_component, build_loss, - build_model) -from .common import * # noqa: F401, F403 -from .losses import * # noqa: F401, F403 -from .registry import BACKBONES, COMPONENTS, LOSSES, MODELS -from .restorers import BasicRestorer - diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/__init__.py deleted file mode 100755 index cf8e5e11..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. - -from .sr_backbones import (BasicVSRNet, IconVSR) - diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/__init__.py deleted file mode 100755 index 15779858..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. 
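For orientation, the key partition computed by `load_annotations` in the REDS dataset class above reduces to a short list computation. A standalone sketch (illustrative only):

```python
keys = [f'{i:03d}' for i in range(0, 270)]        # REDS has 270 clips: '000'..'269'

val_partition = ['000', '011', '015', '020']      # the 'REDS4' validation split
train_keys = [k for k in keys if k not in val_partition]
val_keys = [k for k in keys if k in val_partition]
print(len(train_keys), len(val_keys))             # 266 4

# the 'official' split instead holds out clips 240-269 for validation
official_val = [f'{i:03d}' for i in range(240, 270)]
print(len(official_val))                          # 30
```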
- -from .basicvsr_net import BasicVSRNet -from .iconvsr import IconVSR diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/basicvsr_net.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/basicvsr_net.py deleted file mode 100755 index 1c98c4c7..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/basicvsr_net.py +++ /dev/null @@ -1,420 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import load_checkpoint - -from mmedit.models.common import (PixelShufflePack, ResidualBlockNoBN, - flow_warp, make_layer) -from mmedit.models.registry import BACKBONES -from mmedit.utils import get_root_logger - - -@BACKBONES.register_module() -class BasicVSRNet(nn.Module): - """BasicVSR network structure for video super-resolution. - - Support only x4 upsampling. - Paper: - BasicVSR: The Search for Essential Components in Video Super-Resolution - and Beyond, CVPR, 2021 - - Args: - mid_channels (int): Channel number of the intermediate features. - Default: 64. - num_blocks (int): Number of residual blocks in each propagation branch. - Default: 30. - spynet_pretrained (str): Pre-trained model path of SPyNet. - Default: None. - """ - - def __init__(self, mid_channels=64, num_blocks=30, spynet_pretrained=None): - - super().__init__() - - self.mid_channels = mid_channels - - # optical flow network for feature alignment - self.spynet = SPyNet(pretrained=spynet_pretrained) - - # propagation branches - self.backward_resblocks = ResidualBlocksWithInputConv( - mid_channels + 3, mid_channels, num_blocks) - self.forward_resblocks = ResidualBlocksWithInputConv( - mid_channels + 3, mid_channels, num_blocks) - - # upsample - self.fusion = nn.Conv2d( - mid_channels * 2, mid_channels, 1, 1, 0, bias=True) - self.upsample1 = PixelShufflePack( - mid_channels, mid_channels, 2, upsample_kernel=3) - self.upsample2 = PixelShufflePack( - mid_channels, 64, 2, upsample_kernel=3) - self.conv_hr = nn.Conv2d(64, 64, 3, 1, 1) - self.conv_last = nn.Conv2d(64, 3, 3, 1, 1) - self.img_upsample = nn.Upsample( - scale_factor=4, mode='bilinear', align_corners=False) - - # activation function - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - def check_if_mirror_extended(self, lrs): - """Check whether the input is a mirror-extended sequence. - - If mirror-extended, the i-th (i=0, ..., t-1) frame is equal to the - (t-1-i)-th frame. - - Args: - lrs (tensor): Input LR images with shape (n, t, c, h, w) - """ - - self.is_mirror_extended = False - if lrs.size(1) % 2 == 0: - lrs_1, lrs_2 = torch.chunk(lrs, 2, dim=1) - if torch.norm(lrs_1 - lrs_2.flip(1)) == 0: - self.is_mirror_extended = True - - def compute_flow(self, lrs): - """Compute optical flow using SPyNet for feature warping. - - Note that if the input is an mirror-extended sequence, 'flows_forward' - is not needed, since it is equal to 'flows_backward.flip(1)'. - - Args: - lrs (tensor): Input LR images with shape (n, t, c, h, w) - - Return: - tuple(Tensor): Optical flow. 'flows_forward' corresponds to the - flows used for forward-time propagation (current to previous). - 'flows_backward' corresponds to the flows used for - backward-time propagation (current to next). 
- """ - - n, t, c, h, w = lrs.size() - lrs_1 = lrs[:, :-1, :, :, :].reshape(-1, c, h, w) - lrs_2 = lrs[:, 1:, :, :, :].reshape(-1, c, h, w) - - flows_backward = self.spynet(lrs_1, lrs_2).view(n, t - 1, 2, h, w) - - if self.is_mirror_extended: # flows_forward = flows_backward.flip(1) - flows_forward = None - else: - flows_forward = self.spynet(lrs_2, lrs_1).view(n, t - 1, 2, h, w) - - return flows_forward, flows_backward - - def forward(self, lrs): - """Forward function for BasicVSR. - - Args: - lrs (Tensor): Input LR sequence with shape (n, t, c, h, w). - - Returns: - Tensor: Output HR sequence with shape (n, t, c, 4h, 4w). - """ - - n, t, c, h, w = lrs.size() - assert h >= 64 and w >= 64, ( - 'The height and width of inputs should be at least 64, ' - f'but got {h} and {w}.') - - # check whether the input is an extended sequence - self.check_if_mirror_extended(lrs) - - # compute optical flow - flows_forward, flows_backward = self.compute_flow(lrs) - - # backward-time propagation - outputs = [] - feat_prop = lrs.new_zeros(n, self.mid_channels, h, w) - for i in range(t - 1, -1, -1): - if i < t - 1: # no warping required for the last timestep - flow = flows_backward[:, i, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - - feat_prop = torch.cat([lrs[:, i, :, :, :], feat_prop], dim=1) - feat_prop = self.backward_resblocks(feat_prop) - - outputs.append(feat_prop) - outputs = outputs[::-1] - - # forward-time propagation and upsampling - feat_prop = torch.zeros_like(feat_prop) - for i in range(0, t): - lr_curr = lrs[:, i, :, :, :] - if i > 0: # no warping required for the first timestep - if flows_forward is not None: - flow = flows_forward[:, i - 1, :, :, :] - else: - flow = flows_backward[:, -i, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - - feat_prop = torch.cat([lr_curr, feat_prop], dim=1) - feat_prop = self.forward_resblocks(feat_prop) - - # upsampling given the backward and forward features - out = torch.cat([outputs[i], feat_prop], dim=1) - out = self.lrelu(self.fusion(out)) - out = self.lrelu(self.upsample1(out)) - out = self.lrelu(self.upsample2(out)) - out = self.lrelu(self.conv_hr(out)) - out = self.conv_last(out) - base = self.img_upsample(lr_curr) - out += base - outputs[i] = out - - return torch.stack(outputs, dim=1) - - def init_weights(self, pretrained=None, strict=True): - """Init weights for models. - - Args: - pretrained (str, optional): Path for pretrained weights. If given - None, pretrained weights will not be loaded. Defaults: None. - strict (boo, optional): Whether strictly load the pretrained model. - Defaults to True. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=strict, logger=logger) - elif pretrained is not None: - raise TypeError(f'"pretrained" must be a str or None. ' - f'But received {type(pretrained)}.') - - -class ResidualBlocksWithInputConv(nn.Module): - """Residual blocks with a convolution in front. - - Args: - in_channels (int): Number of input channels of the first conv. - out_channels (int): Number of channels of the residual blocks. - Default: 64. - num_blocks (int): Number of residual blocks. Default: 30. 
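For orientation, a hedged usage sketch of the BasicVSRNet backbone defined above. It assumes an environment where this mmedit tree is still importable; the input constraints and output shape follow the assertions in `forward` (h and w at least 64, x4 upsampling):

```python
import torch
from mmedit.models.backbones import BasicVSRNet

model = BasicVSRNet(mid_channels=64, num_blocks=30, spynet_pretrained=None)
model.eval()
lrs = torch.rand(1, 5, 3, 64, 64)    # (n, t, c, h, w)
with torch.no_grad():
    out = model(lrs)
print(out.shape)                     # torch.Size([1, 5, 3, 256, 256])
```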
- """ - - def __init__(self, in_channels, out_channels=64, num_blocks=30): - super().__init__() - - main = [] - - # a convolution used to match the channels of the residual blocks - main.append(nn.Conv2d(in_channels, out_channels, 3, 1, 1, bias=True)) - main.append(nn.LeakyReLU(negative_slope=0.1, inplace=True)) - - # residual blocks - main.append( - make_layer( - ResidualBlockNoBN, num_blocks, mid_channels=out_channels)) - - self.main = nn.Sequential(*main) - - def forward(self, feat): - """ - Forward function for ResidualBlocksWithInputConv. - - Args: - feat (Tensor): Input feature with shape (n, in_channels, h, w) - - Returns: - Tensor: Output feature with shape (n, out_channels, h, w) - """ - return self.main(feat) - - -class SPyNet(nn.Module): - """SPyNet network structure. - - The difference to the SPyNet in [tof.py] is that - 1. more SPyNetBasicModule is used in this version, and - 2. no batch normalization is used in this version. - - Paper: - Optical Flow Estimation using a Spatial Pyramid Network, CVPR, 2017 - - Args: - pretrained (str): path for pre-trained SPyNet. Default: None. - """ - - def __init__(self, pretrained): - super().__init__() - - self.basic_module = nn.ModuleList( - [SPyNetBasicModule() for _ in range(6)]) - - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=True, logger=logger) - elif pretrained is not None: - raise TypeError('[pretrained] should be str or None, ' - f'but got {type(pretrained)}.') - - self.register_buffer( - 'mean', - torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)) - self.register_buffer( - 'std', - torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)) - - def compute_flow(self, ref, supp): - """Compute flow from ref to supp. - - Note that in this function, the images are already resized to a - multiple of 32. - - Args: - ref (Tensor): Reference image with shape of (n, 3, h, w). - supp (Tensor): Supporting image with shape of (n, 3, h, w). - - Returns: - Tensor: Estimated optical flow: (n, 2, h, w). - """ - n, _, h, w = ref.size() - - # normalize the input images - ref = [(ref - self.mean) / self.std] - supp = [(supp - self.mean) / self.std] - - # generate downsampled frames - for level in range(5): - ref.append( - F.avg_pool2d( - input=ref[-1], - kernel_size=2, - stride=2, - count_include_pad=False)) - supp.append( - F.avg_pool2d( - input=supp[-1], - kernel_size=2, - stride=2, - count_include_pad=False)) - ref = ref[::-1] - supp = supp[::-1] - - # flow computation - flow = ref[0].new_zeros(n, 2, h // 32, w // 32) - for level in range(len(ref)): - if level == 0: - flow_up = flow - else: - flow_up = F.interpolate( - input=flow, - scale_factor=2, - mode='bilinear', - align_corners=True) * 2.0 - - # add the residue to the upsampled flow - flow = flow_up + self.basic_module[level]( - torch.cat([ - ref[level], - flow_warp( - supp[level], - flow_up.permute(0, 2, 3, 1), - padding_mode='border'), flow_up - ], 1)) - - return flow - - def forward(self, ref, supp): - """Forward function of SPyNet. - - This function computes the optical flow from ref to supp. - - Args: - ref (Tensor): Reference image with shape of (n, 3, h, w). - supp (Tensor): Supporting image with shape of (n, 3, h, w). - - Returns: - Tensor: Estimated optical flow: (n, 2, h, w). 
- """ - - # upsize to a multiple of 32 - h, w = ref.shape[2:4] - w_up = w if (w % 32) == 0 else 32 * (w // 32 + 1) - h_up = h if (h % 32) == 0 else 32 * (h // 32 + 1) - ref = F.interpolate( - input=ref, size=(h_up, w_up), mode='bilinear', align_corners=False) - supp = F.interpolate( - input=supp, - size=(h_up, w_up), - mode='bilinear', - align_corners=False) - - # compute flow, and resize back to the original resolution - flow = F.interpolate( - input=self.compute_flow(ref, supp), - size=(h, w), - mode='bilinear', - align_corners=False) - - # adjust the flow values - flow[:, 0, :, :] *= float(w) / float(w_up) - flow[:, 1, :, :] *= float(h) / float(h_up) - - return flow - - -class SPyNetBasicModule(nn.Module): - """Basic Module for SPyNet. - - Paper: - Optical Flow Estimation using a Spatial Pyramid Network, CVPR, 2017 - """ - - def __init__(self): - super().__init__() - - self.basic_module = nn.Sequential( - ConvModule( - in_channels=8, - out_channels=32, - kernel_size=7, - stride=1, - padding=3, - norm_cfg=None, - act_cfg=dict(type='ReLU')), - ConvModule( - in_channels=32, - out_channels=64, - kernel_size=7, - stride=1, - padding=3, - norm_cfg=None, - act_cfg=dict(type='ReLU')), - ConvModule( - in_channels=64, - out_channels=32, - kernel_size=7, - stride=1, - padding=3, - norm_cfg=None, - act_cfg=dict(type='ReLU')), - ConvModule( - in_channels=32, - out_channels=16, - kernel_size=7, - stride=1, - padding=3, - norm_cfg=None, - act_cfg=dict(type='ReLU')), - ConvModule( - in_channels=16, - out_channels=2, - kernel_size=7, - stride=1, - padding=3, - norm_cfg=None, - act_cfg=None)) - - def forward(self, tensor_input): - """ - Args: - tensor_input (Tensor): Input tensor with shape (b, 8, h, w). - 8 channels contain: - [reference image (3), neighbor image (3), initial flow (2)]. - - Returns: - Tensor: Refined flow with shape (b, 2, h, w) - """ - return self.basic_module(tensor_input) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/edvr_net.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/edvr_net.py deleted file mode 100755 index 6e3c08d8..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/edvr_net.py +++ /dev/null @@ -1,475 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, constant_init, kaiming_init -from mmcv.ops import ModulatedDeformConv2d, modulated_deform_conv2d -from mmcv.runner import load_checkpoint -from torch.nn.modules.utils import _pair - -from mmedit.models.common import (PixelShufflePack, ResidualBlockNoBN, - make_layer) -from mmedit.models.registry import BACKBONES -from mmedit.utils import get_root_logger - - -class ModulatedDCNPack(ModulatedDeformConv2d): - """Modulated Deformable Convolutional Pack. - - Different from the official DCN, which generates offsets and masks from - the preceding features, this ModulatedDCNPack takes another different - feature to generate masks and offsets. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. 
- """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deform_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - bias=True) - self.init_offset() - - def init_offset(self): - constant_init(self.conv_offset, val=0, bias=0) - - def forward(self, x, extra_feat): - out = self.conv_offset(extra_feat) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - -class PCDAlignment(nn.Module): - """Alignment module using Pyramid, Cascading and Deformable convolution - (PCD). It is used in EDVRNet. - - Args: - mid_channels (int): Number of the channels of middle features. - Default: 64. - deform_groups (int): Deformable groups. Defaults: 8. - act_cfg (dict): Activation function config for ConvModule. - Default: LeakyReLU with negative_slope=0.1. - """ - - def __init__(self, - mid_channels=64, - deform_groups=8, - act_cfg=dict(type='LeakyReLU', negative_slope=0.1)): - super().__init__() - - # Pyramid has three levels: - # L3: level 3, 1/4 spatial size - # L2: level 2, 1/2 spatial size - # L1: level 1, original spatial size - self.offset_conv1 = nn.ModuleDict() - self.offset_conv2 = nn.ModuleDict() - self.offset_conv3 = nn.ModuleDict() - self.dcn_pack = nn.ModuleDict() - self.feat_conv = nn.ModuleDict() - for i in range(3, 0, -1): - level = f'l{i}' - self.offset_conv1[level] = ConvModule( - mid_channels * 2, mid_channels, 3, padding=1, act_cfg=act_cfg) - if i == 3: - self.offset_conv2[level] = ConvModule( - mid_channels, mid_channels, 3, padding=1, act_cfg=act_cfg) - else: - self.offset_conv2[level] = ConvModule( - mid_channels * 2, - mid_channels, - 3, - padding=1, - act_cfg=act_cfg) - self.offset_conv3[level] = ConvModule( - mid_channels, mid_channels, 3, padding=1, act_cfg=act_cfg) - self.dcn_pack[level] = ModulatedDCNPack( - mid_channels, - mid_channels, - 3, - padding=1, - deform_groups=deform_groups) - - if i < 3: - act_cfg_ = act_cfg if i == 2 else None - self.feat_conv[level] = ConvModule( - mid_channels * 2, - mid_channels, - 3, - padding=1, - act_cfg=act_cfg_) - - # Cascading DCN - self.cas_offset_conv1 = ConvModule( - mid_channels * 2, mid_channels, 3, padding=1, act_cfg=act_cfg) - self.cas_offset_conv2 = ConvModule( - mid_channels, mid_channels, 3, padding=1, act_cfg=act_cfg) - self.cas_dcnpack = ModulatedDCNPack( - mid_channels, - mid_channels, - 3, - padding=1, - deform_groups=deform_groups) - - self.upsample = nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False) - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - def forward(self, neighbor_feats, ref_feats): - """Forward function for PCDAlignment. - - Align neighboring frames to the reference frame in the feature level. - - Args: - neighbor_feats (list[Tensor]): List of neighboring features. It - contains three pyramid levels (L1, L2, L3), - each with shape (n, c, h, w). - ref_feats (list[Tensor]): List of reference features. It - contains three pyramid levels (L1, L2, L3), - each with shape (n, c, h, w). - - Returns: - Tensor: Aligned features. - """ - # The number of pyramid levels is 3. 
- assert len(neighbor_feats) == 3 and len(ref_feats) == 3, ( - 'The length of neighbor_feats and ref_feats must be both 3, ' - f'but got {len(neighbor_feats)} and {len(ref_feats)}') - - # Pyramids - upsampled_offset, upsampled_feat = None, None - for i in range(3, 0, -1): - level = f'l{i}' - offset = torch.cat([neighbor_feats[i - 1], ref_feats[i - 1]], - dim=1) - offset = self.offset_conv1[level](offset) - if i == 3: - offset = self.offset_conv2[level](offset) - else: - offset = self.offset_conv2[level]( - torch.cat([offset, upsampled_offset], dim=1)) - offset = self.offset_conv3[level](offset) - - feat = self.dcn_pack[level](neighbor_feats[i - 1], offset) - if i == 3: - feat = self.lrelu(feat) - else: - feat = self.feat_conv[level]( - torch.cat([feat, upsampled_feat], dim=1)) - - if i > 1: - # upsample offset and features - upsampled_offset = self.upsample(offset) * 2 - upsampled_feat = self.upsample(feat) - - # Cascading - offset = torch.cat([feat, ref_feats[0]], dim=1) - offset = self.cas_offset_conv2(self.cas_offset_conv1(offset)) - feat = self.lrelu(self.cas_dcnpack(feat, offset)) - return feat - - -class TSAFusion(nn.Module): - """Temporal Spatial Attention (TSA) fusion module. It is used in EDVRNet. - - Args: - mid_channels (int): Number of the channels of middle features. - Default: 64. - num_frames (int): Number of frames. Default: 5. - center_frame_idx (int): The index of center frame. Default: 2. - act_cfg (dict): Activation function config for ConvModule. - Default: LeakyReLU with negative_slope=0.1. - """ - - def __init__(self, - mid_channels=64, - num_frames=5, - center_frame_idx=2, - act_cfg=dict(type='LeakyReLU', negative_slope=0.1)): - super().__init__() - self.center_frame_idx = center_frame_idx - # temporal attention (before fusion conv) - self.temporal_attn1 = nn.Conv2d( - mid_channels, mid_channels, 3, padding=1) - self.temporal_attn2 = nn.Conv2d( - mid_channels, mid_channels, 3, padding=1) - self.feat_fusion = ConvModule( - num_frames * mid_channels, mid_channels, 1, act_cfg=act_cfg) - - # spatial attention (after fusion conv) - self.max_pool = nn.MaxPool2d(3, stride=2, padding=1) - self.avg_pool = nn.AvgPool2d(3, stride=2, padding=1) - self.spatial_attn1 = ConvModule( - num_frames * mid_channels, mid_channels, 1, act_cfg=act_cfg) - self.spatial_attn2 = ConvModule( - mid_channels * 2, mid_channels, 1, act_cfg=act_cfg) - self.spatial_attn3 = ConvModule( - mid_channels, mid_channels, 3, padding=1, act_cfg=act_cfg) - self.spatial_attn4 = ConvModule( - mid_channels, mid_channels, 1, act_cfg=act_cfg) - self.spatial_attn5 = nn.Conv2d( - mid_channels, mid_channels, 3, padding=1) - self.spatial_attn_l1 = ConvModule( - mid_channels, mid_channels, 1, act_cfg=act_cfg) - self.spatial_attn_l2 = ConvModule( - mid_channels * 2, mid_channels, 3, padding=1, act_cfg=act_cfg) - self.spatial_attn_l3 = ConvModule( - mid_channels, mid_channels, 3, padding=1, act_cfg=act_cfg) - self.spatial_attn_add1 = ConvModule( - mid_channels, mid_channels, 1, act_cfg=act_cfg) - self.spatial_attn_add2 = nn.Conv2d(mid_channels, mid_channels, 1) - - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - self.upsample = nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False) - - def forward(self, aligned_feat): - """Forward function for TSAFusion. - - Args: - aligned_feat (Tensor): Aligned features with shape (n, t, c, h, w). - - Returns: - Tensor: Features after TSA with the shape (n, c, h, w). 
- """ - n, t, c, h, w = aligned_feat.size() - # temporal attention - embedding_ref = self.temporal_attn1( - aligned_feat[:, self.center_frame_idx, :, :, :].clone()) - emb = self.temporal_attn2(aligned_feat.view(-1, c, h, w)) - emb = emb.view(n, t, -1, h, w) # (n, t, c, h, w) - - corr_l = [] # correlation list - for i in range(t): - emb_neighbor = emb[:, i, :, :, :] - corr = torch.sum(emb_neighbor * embedding_ref, 1) # (n, h, w) - corr_l.append(corr.unsqueeze(1)) # (n, 1, h, w) - corr_prob = torch.sigmoid(torch.cat(corr_l, dim=1)) # (n, t, h, w) - corr_prob = corr_prob.unsqueeze(2).expand(n, t, c, h, w) - corr_prob = corr_prob.contiguous().view(n, -1, h, w) # (n, t*c, h, w) - aligned_feat = aligned_feat.view(n, -1, h, w) * corr_prob - - # fusion - feat = self.feat_fusion(aligned_feat) - - # spatial attention - attn = self.spatial_attn1(aligned_feat) - attn_max = self.max_pool(attn) - attn_avg = self.avg_pool(attn) - attn = self.spatial_attn2(torch.cat([attn_max, attn_avg], dim=1)) - # pyramid levels - attn_level = self.spatial_attn_l1(attn) - attn_max = self.max_pool(attn_level) - attn_avg = self.avg_pool(attn_level) - attn_level = self.spatial_attn_l2( - torch.cat([attn_max, attn_avg], dim=1)) - attn_level = self.spatial_attn_l3(attn_level) - attn_level = self.upsample(attn_level) - - attn = self.spatial_attn3(attn) + attn_level - attn = self.spatial_attn4(attn) - attn = self.upsample(attn) - attn = self.spatial_attn5(attn) - attn_add = self.spatial_attn_add2(self.spatial_attn_add1(attn)) - attn = torch.sigmoid(attn) - - # after initialization, * 2 makes (attn * 2) to be close to 1. - feat = feat * attn * 2 + attn_add - return feat - - -@BACKBONES.register_module() -class EDVRNet(nn.Module): - """EDVR network structure for video super-resolution. - - Now only support X4 upsampling factor. - Paper: - EDVR: Video Restoration with Enhanced Deformable Convolutional Networks. - - Args: - in_channels (int): Channel number of inputs. - out_channels (int): Channel number of outputs. - mid_channels (int): Channel number of intermediate features. - Default: 64. - num_frames (int): Number of input frames. Default: 5. - deform_groups (int): Deformable groups. Defaults: 8. - num_blocks_extraction (int): Number of blocks for feature extraction. - Default: 5. - num_blocks_reconstruction (int): Number of blocks for reconstruction. - Default: 10. - center_frame_idx (int): The index of center frame. Frame counting from - 0. Default: 2. - with_tsa (bool): Whether to use TSA module. Default: True. 
- """ - - def __init__(self, - in_channels, - out_channels, - mid_channels=64, - num_frames=5, - deform_groups=8, - num_blocks_extraction=5, - num_blocks_reconstruction=10, - center_frame_idx=2, - with_tsa=True): - super().__init__() - self.center_frame_idx = center_frame_idx - self.with_tsa = with_tsa - act_cfg = dict(type='LeakyReLU', negative_slope=0.1) - - self.conv_first = nn.Conv2d(in_channels, mid_channels, 3, 1, 1) - self.feature_extraction = make_layer( - ResidualBlockNoBN, - num_blocks_extraction, - mid_channels=mid_channels) - - # generate pyramid features - self.feat_l2_conv1 = ConvModule( - mid_channels, mid_channels, 3, 2, 1, act_cfg=act_cfg) - self.feat_l2_conv2 = ConvModule( - mid_channels, mid_channels, 3, 1, 1, act_cfg=act_cfg) - self.feat_l3_conv1 = ConvModule( - mid_channels, mid_channels, 3, 2, 1, act_cfg=act_cfg) - self.feat_l3_conv2 = ConvModule( - mid_channels, mid_channels, 3, 1, 1, act_cfg=act_cfg) - # pcd alignment - self.pcd_alignment = PCDAlignment( - mid_channels=mid_channels, deform_groups=deform_groups) - # fusion - if self.with_tsa: - self.fusion = TSAFusion( - mid_channels=mid_channels, - num_frames=num_frames, - center_frame_idx=self.center_frame_idx) - else: - self.fusion = nn.Conv2d(num_frames * mid_channels, mid_channels, 1, - 1) - - # reconstruction - self.reconstruction = make_layer( - ResidualBlockNoBN, - num_blocks_reconstruction, - mid_channels=mid_channels) - # upsample - self.upsample1 = PixelShufflePack( - mid_channels, mid_channels, 2, upsample_kernel=3) - self.upsample2 = PixelShufflePack( - mid_channels, 64, 2, upsample_kernel=3) - # we fix the output channels in the last few layers to 64. - self.conv_hr = nn.Conv2d(64, 64, 3, 1, 1) - self.conv_last = nn.Conv2d(64, out_channels, 3, 1, 1) - self.img_upsample = nn.Upsample( - scale_factor=4, mode='bilinear', align_corners=False) - # activation function - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - def forward(self, x): - """Forward function for EDVRNet. - - Args: - x (Tensor): Input tensor with shape (n, t, c, h, w). - - Returns: - Tensor: SR center frame with shape (n, c, h, w). 
- """ - n, t, c, h, w = x.size() - assert h % 4 == 0 and w % 4 == 0, ( - 'The height and width of inputs should be a multiple of 4, ' - f'but got {h} and {w}.') - - x_center = x[:, self.center_frame_idx, :, :, :].contiguous() - - # extract LR features - # L1 - l1_feat = self.lrelu(self.conv_first(x.view(-1, c, h, w))) - l1_feat = self.feature_extraction(l1_feat) - # L2 - l2_feat = self.feat_l2_conv2(self.feat_l2_conv1(l1_feat)) - # L3 - l3_feat = self.feat_l3_conv2(self.feat_l3_conv1(l2_feat)) - - l1_feat = l1_feat.view(n, t, -1, h, w) - l2_feat = l2_feat.view(n, t, -1, h // 2, w // 2) - l3_feat = l3_feat.view(n, t, -1, h // 4, w // 4) - - # pcd alignment - ref_feats = [ # reference feature list - l1_feat[:, self.center_frame_idx, :, :, :].clone(), - l2_feat[:, self.center_frame_idx, :, :, :].clone(), - l3_feat[:, self.center_frame_idx, :, :, :].clone() - ] - aligned_feat = [] - for i in range(t): - neighbor_feats = [ - l1_feat[:, i, :, :, :].clone(), l2_feat[:, i, :, :, :].clone(), - l3_feat[:, i, :, :, :].clone() - ] - aligned_feat.append(self.pcd_alignment(neighbor_feats, ref_feats)) - aligned_feat = torch.stack(aligned_feat, dim=1) # (n, t, c, h, w) - - if self.with_tsa: - feat = self.fusion(aligned_feat) - else: - aligned_feat = aligned_feat.view(n, -1, h, w) - feat = self.fusion(aligned_feat) - - # reconstruction - out = self.reconstruction(feat) - out = self.lrelu(self.upsample1(out)) - out = self.lrelu(self.upsample2(out)) - out = self.lrelu(self.conv_hr(out)) - out = self.conv_last(out) - base = self.img_upsample(x_center) - out += base - return out - - def init_weights(self, pretrained=None, strict=True): - """Init weights for models. - - Args: - pretrained (str, optional): Path for pretrained weights. If given - None, pretrained weights will not be loaded. Defaults to None. - strict (boo, optional): Whether strictly load the pretrained model. - Defaults to True. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=strict, logger=logger) - elif pretrained is None: - if self.with_tsa: - for module in [ - self.fusion.feat_fusion, self.fusion.spatial_attn1, - self.fusion.spatial_attn2, self.fusion.spatial_attn3, - self.fusion.spatial_attn4, self.fusion.spatial_attn_l1, - self.fusion.spatial_attn_l2, - self.fusion.spatial_attn_l3, - self.fusion.spatial_attn_add1 - ]: - kaiming_init( - module.conv, - a=0.1, - mode='fan_out', - nonlinearity='leaky_relu', - bias=0, - distribution='uniform') - else: - raise TypeError(f'"pretrained" must be a str or None. ' - f'But received {type(pretrained)}.') diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/iconvsr.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/iconvsr.py deleted file mode 100755 index 40886253..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/backbones/sr_backbones/iconvsr.py +++ /dev/null @@ -1,394 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
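Both EDVRNet above and the BasicVSR-family backbones reach their x4 output through two x2 pixel-shuffle stages (`upsample1`, `upsample2`). A pure-torch sketch of that upsampling path (illustrative; the actual models use mmedit's `PixelShufflePack`, which is essentially a convolution followed by the shuffle):

```python
import torch
import torch.nn as nn

feat = torch.rand(1, 64, 64, 64)                          # (n, c, h, w) features
up = nn.Sequential(
    nn.Conv2d(64, 64 * 4, 3, 1, 1), nn.PixelShuffle(2),   # x2
    nn.Conv2d(64, 64 * 4, 3, 1, 1), nn.PixelShuffle(2),   # x2 again -> x4 overall
)
print(up(feat).shape)                                     # torch.Size([1, 64, 256, 256])
```

Each stage expands the channel count by 4 with a convolution and then trades those channels for a 2x larger spatial resolution via the pixel shuffle.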
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmcv.runner import load_checkpoint - -from mmedit.models.common import (PixelShufflePack, ResidualBlockNoBN, - flow_warp, make_layer) -from mmedit.models.registry import BACKBONES -from mmedit.utils import get_root_logger -from .basicvsr_net import ResidualBlocksWithInputConv, SPyNet -from .edvr_net import PCDAlignment, TSAFusion - - -@BACKBONES.register_module() -class IconVSR(nn.Module): - """IconVSR network structure for video super-resolution. - - Support only x4 upsampling. - Paper: - BasicVSR: The Search for Essential Components in Video Super-Resolution - and Beyond, CVPR, 2021 - - Args: - mid_channels (int): Channel number of the intermediate features. - Default: 64. - num_blocks (int): Number of residual blocks in each propagation branch. - Default: 30. - keyframe_stride (int): Number determining the keyframes. If stride=5, - then the (0, 5, 10, 15, ...)-th frame will be the keyframes. - Default: 5. - padding (int): Number of frames to be padded at two ends of the - sequence. 2 for REDS and 3 for Vimeo-90K. Default: 2. - spynet_pretrained (str): Pre-trained model path of SPyNet. - Default: None. - edvr_pretrained (str): Pre-trained model path of EDVR (for refill). - Default: None. - """ - - def __init__(self, - mid_channels=64, - num_blocks=30, - keyframe_stride=5, - padding=2, - spynet_pretrained=None, - edvr_pretrained=None): - - super().__init__() - - self.mid_channels = mid_channels - self.padding = padding - self.keyframe_stride = keyframe_stride - - # optical flow network for alignment - self.spynet = SPyNet(pretrained=spynet_pretrained) - - # information-refill - self.edvr = EDVRFeatureExtractor( - num_frames=padding * 2 + 1, - center_frame_idx=padding, - pretrained=edvr_pretrained) - self.backward_fusion = nn.Conv2d( - 2 * mid_channels, mid_channels, 3, 1, 1, bias=True) - self.forward_fusion = nn.Conv2d( - 2 * mid_channels, mid_channels, 3, 1, 1, bias=True) - - # propagation branches - self.backward_resblocks = ResidualBlocksWithInputConv( - mid_channels + 3, mid_channels, num_blocks) - self.forward_resblocks = ResidualBlocksWithInputConv( - 2 * mid_channels + 3, mid_channels, num_blocks) - - # upsample - self.upsample1 = PixelShufflePack( - mid_channels, mid_channels, 2, upsample_kernel=3) - self.upsample2 = PixelShufflePack( - mid_channels, 64, 2, upsample_kernel=3) - self.conv_hr = nn.Conv2d(64, 64, 3, 1, 1) - self.conv_last = nn.Conv2d(64, 3, 3, 1, 1) - self.img_upsample = nn.Upsample( - scale_factor=4, mode='bilinear', align_corners=False) - - # activation function - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - def spatial_padding(self, lrs): - """ Apply pdding spatially. - - Since the PCD module in EDVR requires that the resolution is a multiple - of 4, we apply padding to the input LR images if their resolution is - not divisible by 4. - - Args: - lrs (Tensor): Input LR sequence with shape (n, t, c, h, w). - - Returns: - Tensor: Padded LR sequence with shape (n, t, c, h_pad, w_pad). - - """ - n, t, c, h, w = lrs.size() - - pad_h = (4 - h % 4) % 4 - pad_w = (4 - w % 4) % 4 - - # padding - lrs = lrs.view(-1, c, h, w) - lrs = F.pad(lrs, [0, pad_w, 0, pad_h], mode='reflect') - - return lrs.view(n, t, c, h + pad_h, w + pad_w) - - def check_if_mirror_extended(self, lrs): - """Check whether the input is a mirror-extended sequence. - - If mirror-extended, the i-th (i=0, ..., t-1) frame is equal to the - (t-1-i)-th frame. 
- - Args: - lrs (tensor): Input LR images with shape (n, t, c, h, w) - """ - - self.is_mirror_extended = False - if lrs.size(1) % 2 == 0: - lrs_1, lrs_2 = torch.chunk(lrs, 2, dim=1) - if torch.norm(lrs_1 - lrs_2.flip(1)) == 0: - self.is_mirror_extended = True - - def compute_refill_features(self, lrs, keyframe_idx): - """ Compute keyframe features for information-refill. - Since EDVR-M is used, padding is performed before feature computation. - Args: - lrs (Tensor): Input LR images with shape (n, t, c, h, w) - keyframe_idx (list(int)): The indices specifying the keyframes. - Return: - dict(Tensor): The keyframe features. Each key corresponds to the - indices in keyframe_idx. - """ - - if self.padding == 2: - lrs = [lrs[:, [4, 3]], lrs, lrs[:, [-4, -5]]] # padding - elif self.padding == 3: - lrs = [lrs[:, [6, 5, 4]], lrs, lrs[:, [-5, -6, -7]]] # padding - lrs = torch.cat(lrs, dim=1) - - num_frames = 2 * self.padding + 1 - feats_refill = {} - for i in keyframe_idx: - feats_refill[i] = self.edvr(lrs[:, i:i + num_frames].contiguous()) - return feats_refill - - def compute_flow(self, lrs): - """Compute optical flow using SPyNet for feature warping. - - Note that if the input is an mirror-extended sequence, 'flows_forward' - is not needed, since it is equal to 'flows_backward.flip(1)'. - - Args: - lrs (tensor): Input LR images with shape (n, t, c, h, w) - - Return: - tuple(Tensor): Optical flow. 'flows_forward' corresponds to the - flows used for forward-time propagation (current to previous). - 'flows_backward' corresponds to the flows used for - backward-time propagation (current to next). - """ - - n, t, c, h, w = lrs.size() - lrs_1 = lrs[:, :-1, :, :, :].reshape(-1, c, h, w) - lrs_2 = lrs[:, 1:, :, :, :].reshape(-1, c, h, w) - - flows_backward = self.spynet(lrs_1, lrs_2).view(n, t - 1, 2, h, w) - - if self.is_mirror_extended: # flows_forward = flows_backward.flip(1) - flows_forward = None - else: - flows_forward = self.spynet(lrs_2, lrs_1).view(n, t - 1, 2, h, w) - - return flows_forward, flows_backward - - def forward(self, lrs): - """Forward function for IconVSR. - Args: - lrs (Tensor): Input LR tensor with shape (n, t, c, h, w). - Returns: - Tensor: Output HR tensor with shape (n, t, c, 4h, 4w). 
- """ - - n, t, c, h_input, w_input = lrs.size() - assert h_input >= 64 and w_input >= 64, ( - 'The height and width of inputs should be at least 64, ' - f'but got {h_input} and {w_input}.') - - # check whether the input is an extended sequence - self.check_if_mirror_extended(lrs) - - lrs = self.spatial_padding(lrs) - h, w = lrs.size(3), lrs.size(4) - - # get the keyframe indices for information-refill - keyframe_idx = list(range(0, t, self.keyframe_stride)) - if keyframe_idx[-1] != t - 1: - keyframe_idx.append(t - 1) # the last frame must be a keyframe - - # compute optical flow and compute features for information-refill - flows_forward, flows_backward = self.compute_flow(lrs) - feats_refill = self.compute_refill_features(lrs, keyframe_idx) - - # backward-time propagation - outputs = [] - feat_prop = lrs.new_zeros(n, self.mid_channels, h, w) - for i in range(t - 1, -1, -1): - lr_curr = lrs[:, i, :, :, :] - if i < t - 1: # no warping for the last timestep - flow = flows_backward[:, i, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - if i in keyframe_idx: - feat_prop = torch.cat([feat_prop, feats_refill[i]], dim=1) - feat_prop = self.backward_fusion(feat_prop) - feat_prop = torch.cat([lr_curr, feat_prop], dim=1) - feat_prop = self.backward_resblocks(feat_prop) - - outputs.append(feat_prop) - outputs = outputs[::-1] - - # forward-time propagation and upsampling - feat_prop = torch.zeros_like(feat_prop) - for i in range(0, t): - lr_curr = lrs[:, i, :, :, :] - if i > 0: # no warping for the first timestep - if flows_forward is not None: - flow = flows_forward[:, i - 1, :, :, :] - else: - flow = flows_backward[:, -i, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - - if i in keyframe_idx: # information-refill - feat_prop = torch.cat([feat_prop, feats_refill[i]], dim=1) - feat_prop = self.forward_fusion(feat_prop) - - feat_prop = torch.cat([lr_curr, outputs[i], feat_prop], dim=1) - feat_prop = self.forward_resblocks(feat_prop) - - out = self.lrelu(self.upsample1(feat_prop)) - out = self.lrelu(self.upsample2(out)) - out = self.lrelu(self.conv_hr(out)) - out = self.conv_last(out) - base = self.img_upsample(lr_curr) - out += base - outputs[i] = out - - return torch.stack(outputs, dim=1)[:, :, :, :4 * h_input, :4 * w_input] - - def init_weights(self, pretrained=None, strict=True): - """Init weights for models. - Args: - pretrained (str, optional): Path for pretrained weights. If given - None, pretrained weights will not be loaded. Defaults to None. - strict (boo, optional): Whether strictly load the pretrained model. - Defaults to True. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=strict, logger=logger) - elif pretrained is not None: - raise TypeError(f'"pretrained" must be a str or None. ' - f'But received {type(pretrained)}.') - - -class EDVRFeatureExtractor(nn.Module): - """EDVR feature extractor for information-refill in IconVSR. - - We use EDVR-M in IconVSR. To adopt pretrained models, please - specify "pretrained". - - Paper: - EDVR: Video Restoration with Enhanced Deformable Convolutional Networks. - Args: - in_channels (int): Channel number of inputs. - out_channels (int): Channel number of outputs. - mid_channels (int): Channel number of intermediate features. - Default: 64. - num_frames (int): Number of input frames. Default: 5. - deform_groups (int): Deformable groups. Defaults: 8. - num_blocks_extraction (int): Number of blocks for feature extraction. - Default: 5. 
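The keyframe selection used for information-refill in `IconVSR.forward` above reduces to a short computation: every `keyframe_stride`-th frame plus the final frame. A standalone sketch (illustrative, values assumed for demonstration):

```python
t, keyframe_stride = 12, 5
keyframe_idx = list(range(0, t, keyframe_stride))   # [0, 5, 10]
if keyframe_idx[-1] != t - 1:
    keyframe_idx.append(t - 1)                      # the last frame is always a keyframe
print(keyframe_idx)                                 # [0, 5, 10, 11]
```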
- num_blocks_reconstruction (int): Number of blocks for reconstruction. - Default: 10. - center_frame_idx (int): The index of center frame. Frame counting from - 0. Default: 2. - with_tsa (bool): Whether to use TSA module. Default: True. - pretrained (str): The pretrained model path. Default: None. - """ - - def __init__(self, - in_channels=3, - out_channel=3, - mid_channels=64, - num_frames=5, - deform_groups=8, - num_blocks_extraction=5, - num_blocks_reconstruction=10, - center_frame_idx=2, - with_tsa=True, - pretrained=None): - - super().__init__() - - self.center_frame_idx = center_frame_idx - self.with_tsa = with_tsa - act_cfg = dict(type='LeakyReLU', negative_slope=0.1) - - self.conv_first = nn.Conv2d(in_channels, mid_channels, 3, 1, 1) - self.feature_extraction = make_layer( - ResidualBlockNoBN, - num_blocks_extraction, - mid_channels=mid_channels) - - # generate pyramid features - self.feat_l2_conv1 = ConvModule( - mid_channels, mid_channels, 3, 2, 1, act_cfg=act_cfg) - self.feat_l2_conv2 = ConvModule( - mid_channels, mid_channels, 3, 1, 1, act_cfg=act_cfg) - self.feat_l3_conv1 = ConvModule( - mid_channels, mid_channels, 3, 2, 1, act_cfg=act_cfg) - self.feat_l3_conv2 = ConvModule( - mid_channels, mid_channels, 3, 1, 1, act_cfg=act_cfg) - # pcd alignment - self.pcd_alignment = PCDAlignment( - mid_channels=mid_channels, deform_groups=deform_groups) - # fusion - if self.with_tsa: - self.fusion = TSAFusion( - mid_channels=mid_channels, - num_frames=num_frames, - center_frame_idx=self.center_frame_idx) - else: - self.fusion = nn.Conv2d(num_frames * mid_channels, mid_channels, 1, - 1) - - # activation function - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=True, logger=logger) - elif pretrained is not None: - raise TypeError(f'"pretrained" must be a str or None. ' - f'But received {type(pretrained)}.') - - def forward(self, x): - """Forward function for EDVRFeatureExtractor. - Args: - x (Tensor): Input tensor with shape (n, t, 3, h, w). - Returns: - Tensor: Intermediate feature with shape (n, mid_channels, h, w). 
- """ - - n, t, c, h, w = x.size() - - # extract LR features - # L1 - l1_feat = self.lrelu(self.conv_first(x.view(-1, c, h, w))) - l1_feat = self.feature_extraction(l1_feat) - # L2 - l2_feat = self.feat_l2_conv2(self.feat_l2_conv1(l1_feat)) - # L3 - l3_feat = self.feat_l3_conv2(self.feat_l3_conv1(l2_feat)) - - l1_feat = l1_feat.view(n, t, -1, h, w) - l2_feat = l2_feat.view(n, t, -1, h // 2, w // 2) - l3_feat = l3_feat.view(n, t, -1, h // 4, w // 4) - - # pcd alignment - ref_feats = [ # reference feature list - l1_feat[:, self.center_frame_idx, :, :, :].clone(), - l2_feat[:, self.center_frame_idx, :, :, :].clone(), - l3_feat[:, self.center_frame_idx, :, :, :].clone() - ] - aligned_feat = [] - for i in range(t): - neighbor_feats = [ - l1_feat[:, i, :, :, :].clone(), l2_feat[:, i, :, :, :].clone(), - l3_feat[:, i, :, :, :].clone() - ] - aligned_feat.append(self.pcd_alignment(neighbor_feats, ref_feats)) - aligned_feat = torch.stack(aligned_feat, dim=1) # (n, t, c, h, w) - - if self.with_tsa: - feat = self.fusion(aligned_feat) - else: - aligned_feat = aligned_feat.view(n, -1, h, w) - feat = self.fusion(aligned_feat) - - return feat diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/base.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/base.py deleted file mode 100755 index 02327e28..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/base.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import torch -import torch.nn as nn - - -class BaseModel(nn.Module, metaclass=ABCMeta): - """Base model. - - All models should subclass it. - All subclass should overwrite: - - ``init_weights``, supporting to initialize models. - - ``forward_train``, supporting to forward when training. - - ``forward_test``, supporting to forward when testing. - - ``train_step``, supporting to train one step when training. - """ - - @abstractmethod - def init_weights(self): - """Abstract method for initializing weight. - - All subclass should overwrite it. - """ - - @abstractmethod - def forward_train(self, imgs, labels): - """Abstract method for training forward. - - All subclass should overwrite it. - """ - - @abstractmethod - def forward_test(self, imgs): - """Abstract method for testing forward. - - All subclass should overwrite it. - """ - - def forward(self, imgs, labels, test_mode, **kwargs): - """Forward function for base model. - - Args: - imgs (Tensor): Input image(s). - labels (Tensor): Ground-truth label(s). - test_mode (bool): Whether in test mode. - kwargs (dict): Other arguments. - - Returns: - Tensor: Forward results. - """ - - if test_mode: - return self.forward_test(imgs, **kwargs) - - return self.forward_train(imgs, labels, **kwargs) - - @abstractmethod - def train_step(self, data_batch, optimizer): - """Abstract method for one training step. - - All subclass should overwrite it. - """ - - def val_step(self, data_batch, **kwargs): - """Abstract method for one validation step. - - All subclass should overwrite it. - """ - output = self.forward_test(**data_batch, **kwargs) - return output - - def parse_losses(self, losses): - """Parse losses dict for different loss variants. - - Args: - losses (dict): Loss dict. - - Returns: - loss (float): Sum of the total loss. - log_vars (dict): loss dict for different variants. 
- """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - log_vars['loss'] = loss - for name in log_vars: - log_vars[name] = log_vars[name].item() - - return loss, log_vars diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/builder.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/builder.py deleted file mode 100755 index 8606225a..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/builder.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv import build_from_cfg - -from .registry import BACKBONES, COMPONENTS, LOSSES, MODELS - - -def build(cfg, registry, default_args=None): - """Build module function. - - Args: - cfg (dict): Configuration for building modules. - registry (obj): ``registry`` object. - default_args (dict, optional): Default arguments. Defaults to None. - """ - if isinstance(cfg, list): - modules = [ - build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg - ] - return nn.Sequential(*modules) - - return build_from_cfg(cfg, registry, default_args) - - -def build_backbone(cfg): - """Build backbone. - - Args: - cfg (dict): Configuration for building backbone. - """ - return build(cfg, BACKBONES) - - -def build_component(cfg): - """Build component. - - Args: - cfg (dict): Configuration for building component. - """ - return build(cfg, COMPONENTS) - - -def build_loss(cfg): - """Build loss. - - Args: - cfg (dict): Configuration for building loss. - """ - return build(cfg, LOSSES) - - -def build_model(cfg, train_cfg=None, test_cfg=None): - """Build model. - - Args: - cfg (dict): Configuration for building model. - train_cfg (dict): Training configuration. Default: None. - test_cfg (dict): Testing configuration. Default: None. - """ - return build(cfg, MODELS, dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/__init__.py deleted file mode 100755 index 7ebeb4fb..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .aspp import ASPP -from .contextual_attention import ContextualAttentionModule -from .conv import * # noqa: F401, F403 -from .downsample import pixel_unshuffle -from .ensemble import SpatialTemporalEnsemble -from .flow_warp import flow_warp -from .gated_conv_module import SimpleGatedConvModule -from .gca_module import GCAModule -from .generation_model_utils import (GANImageBuffer, ResidualBlockWithDropout, - UnetSkipConnectionBlock, - generation_init_weights) -from .img_normalize import ImgNormalize -from .linear_module import LinearModule -from .mask_conv_module import MaskConvModule -from .model_utils import (extract_around_bbox, extract_bbox_patch, scale_bbox, - set_requires_grad) -from .partial_conv import PartialConv2d -from .separable_conv_module import DepthwiseSeparableConvModule -from .sr_backbone_utils import (ResidualBlockNoBN, default_init_weights, - make_layer) -from .upsample import PixelShufflePack - -__all__ = [ - 'ASPP', 'PartialConv2d', 'PixelShufflePack', 'default_init_weights', - 'ResidualBlockNoBN', 'make_layer', 'MaskConvModule', 'extract_bbox_patch', - 'extract_around_bbox', 'set_requires_grad', 'scale_bbox', - 'DepthwiseSeparableConvModule', 'ContextualAttentionModule', 'GCAModule', - 'SimpleGatedConvModule', 'LinearModule', 'flow_warp', 'ImgNormalize', - 'generation_init_weights', 'GANImageBuffer', 'UnetSkipConnectionBlock', - 'ResidualBlockWithDropout', 'pixel_unshuffle', 'SpatialTemporalEnsemble' -] diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/aspp.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/aspp.py deleted file mode 100755 index c1e58e85..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/aspp.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.cnn import ConvModule -from torch import nn -from torch.nn import functional as F - -from .separable_conv_module import DepthwiseSeparableConvModule - - -class ASPPPooling(nn.Sequential): - - def __init__(self, in_channels, out_channels, conv_cfg, norm_cfg, act_cfg): - super().__init__( - nn.AdaptiveAvgPool2d(1), - ConvModule( - in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, x): - size = x.shape[-2:] - for mod in self: - x = mod(x) - return F.interpolate( - x, size=size, mode='bilinear', align_corners=False) - - -class ASPP(nn.Module): - """ASPP module from DeepLabV3. - - The code is adopted from - https://github.com/pytorch/vision/blob/master/torchvision/models/ - segmentation/deeplabv3.py - - For more information about the module: - `"Rethinking Atrous Convolution for Semantic Image Segmentation" - `_. - - Args: - in_channels (int): Input channels of the module. - out_channels (int): Output channels of the module. - mid_channels (int): Output channels of the intermediate ASPP conv - modules. - dilations (Sequence[int]): Dilation rate of three ASPP conv module. - Default: [12, 24, 36]. - conv_cfg (dict): Config dict for convolution layer. If "None", - nn.Conv2d will be applied. Default: None. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - separable_conv (bool): Whether replace normal conv with depthwise - separable conv which is faster. Default: False. 
- """ - - def __init__(self, - in_channels, - out_channels=256, - mid_channels=256, - dilations=(12, 24, 36), - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - separable_conv=False): - super().__init__() - - if separable_conv: - conv_module = DepthwiseSeparableConvModule - else: - conv_module = ConvModule - - modules = [] - modules.append( - ConvModule( - in_channels, - mid_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - for dilation in dilations: - modules.append( - conv_module( - in_channels, - mid_channels, - 3, - padding=dilation, - dilation=dilation, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - modules.append( - ASPPPooling(in_channels, mid_channels, conv_cfg, norm_cfg, - act_cfg)) - - self.convs = nn.ModuleList(modules) - - self.project = nn.Sequential( - ConvModule( - 5 * mid_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg), nn.Dropout(0.5)) - - def forward(self, x): - """Forward function for ASPP module. - - Args: - x (Tensor): Input tensor with shape (N, C, H, W). - - Returns: - Tensor: Output tensor. - """ - res = [] - for conv in self.convs: - res.append(conv(x)) - res = torch.cat(res, dim=1) - return self.project(res) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/contextual_attention.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/contextual_attention.py deleted file mode 100755 index 7dcf4099..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/contextual_attention.py +++ /dev/null @@ -1,379 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class ContextualAttentionModule(nn.Module): - """Contexture attention module. - - The details of this module can be found in: - Generative Image Inpainting with Contextual Attention - - Args: - unfold_raw_kernel_size (int): Kernel size used in unfolding raw - feature. Default: 4. - unfold_raw_stride (int): Stride used in unfolding raw feature. Default: - 2. - unfold_raw_padding (int): Padding used in unfolding raw feature. - Default: 1. - unfold_corr_kernel_size (int): Kernel size used in unfolding - context for computing correlation maps. Default: 3. - unfold_corr_stride (int): Stride used in unfolding context for - computing correlation maps. Default: 1. - unfold_corr_dilation (int): Dilation used in unfolding context for - computing correlation maps. Default: 1. - unfold_corr_padding (int): Padding used in unfolding context for - computing correlation maps. Default: 1. - scale (float): The resale factor used in resize input features. - Default: 0.5. - fuse_kernel_size (int): The kernel size used in fusion module. - Default: 3. - softmax_scale (float): The scale factor for softmax function. - Default: 10. - return_attention_score (bool): If True, the attention score will be - returned. Default: True. 
- """ - - def __init__(self, - unfold_raw_kernel_size=4, - unfold_raw_stride=2, - unfold_raw_padding=1, - unfold_corr_kernel_size=3, - unfold_corr_stride=1, - unfold_corr_dilation=1, - unfold_corr_padding=1, - scale=0.5, - fuse_kernel_size=3, - softmax_scale=10, - return_attention_score=True): - super().__init__() - self.unfold_raw_kernel_size = unfold_raw_kernel_size - self.unfold_raw_stride = unfold_raw_stride - self.unfold_raw_padding = unfold_raw_padding - self.unfold_corr_kernel_size = unfold_corr_kernel_size - self.unfold_corr_stride = unfold_corr_stride - self.unfold_corr_dilation = unfold_corr_dilation - self.unfold_corr_padding = unfold_corr_padding - self.scale = scale - self.fuse_kernel_size = fuse_kernel_size - self.with_fuse_correlation = fuse_kernel_size > 1 - self.softmax_scale = softmax_scale - self.return_attention_score = return_attention_score - - if self.with_fuse_correlation: - assert fuse_kernel_size % 2 == 1 - fuse_kernel = torch.eye(fuse_kernel_size).view( - 1, 1, fuse_kernel_size, fuse_kernel_size) - self.register_buffer('fuse_kernel', fuse_kernel) - padding = int((fuse_kernel_size - 1) // 2) - self.fuse_conv = partial(F.conv2d, padding=padding, stride=1) - self.softmax = nn.Softmax(dim=1) - - def forward(self, x, context, mask=None): - """Forward Function. - - Args: - x (torch.Tensor): Tensor with shape (n, c, h, w). - context (torch.Tensor): Tensor with shape (n, c, h, w). - mask (torch.Tensor): Tensor with shape (n, 1, h, w). Default: None. - - Returns: - tuple(torch.Tensor): Features after contextural attention. - """ - # raw features to be used in copy (deconv) - raw_context = context - raw_context_cols = self.im2col( - raw_context, - kernel_size=self.unfold_raw_kernel_size, - stride=self.unfold_raw_stride, - padding=self.unfold_raw_padding, - normalize=False, - return_cols=True) - # resize the feature to reduce computational cost - x = F.interpolate(x, scale_factor=self.scale) - context = F.interpolate(context, scale_factor=self.scale) - - context_cols = self.im2col( - context, - kernel_size=self.unfold_corr_kernel_size, - stride=self.unfold_corr_stride, - padding=self.unfold_corr_padding, - dilation=self.unfold_corr_dilation, - normalize=True, - return_cols=True) - h_unfold, w_unfold = self.calculate_unfold_hw( - context.size()[-2:], - kernel_size=self.unfold_corr_kernel_size, - stride=self.unfold_corr_stride, - padding=self.unfold_corr_padding, - dilation=self.unfold_corr_dilation, - ) - # reshape context_cols to - # (n*h_unfold*w_unfold, c, unfold_mks, unfold_mks) - # 'mks' is short for 'mask_kernel_size' - context_cols = context_cols.reshape(-1, *context_cols.shape[2:]) - - # the shape of correlation map should be: - # (n, h_unfold*w_unfold, h', w') - correlation_map = self.patch_correlation(x, context_cols) - # fuse correlation map to enlarge consistent attention region. 
- if self.with_fuse_correlation: - correlation_map = self.fuse_correlation_map( - correlation_map, h_unfold, w_unfold) - - correlation_map = self.mask_correlation_map(correlation_map, mask=mask) - - attention_score = self.softmax(correlation_map * self.softmax_scale) - - raw_context_filter = raw_context_cols.reshape( - -1, *raw_context_cols.shape[2:]) - output = self.patch_copy_deconv(attention_score, raw_context_filter) - # deconv will cause overlap and we need to remove the effects of that - overlap_factor = self.calculate_overlap_factor(attention_score) - output /= overlap_factor - - if self.return_attention_score: - n, _, h_s, w_s = attention_score.size() - attention_score = attention_score.view(n, h_unfold, w_unfold, h_s, - w_s) - return output, attention_score - - return output - - def patch_correlation(self, x, kernel): - """Calculate patch correlation. - - Args: - x (torch.Tensor): Input tensor. - kernel (torch.Tensor): Kernel tensor. - - Returns: - torch.Tensor: Tensor with shape of (n, l, h, w). - """ - n, _, h_in, w_in = x.size() - - patch_corr = F.conv2d( - x.view(1, -1, h_in, w_in), - kernel, - stride=self.unfold_corr_stride, - padding=self.unfold_corr_padding, - dilation=self.unfold_corr_dilation, - groups=n) - h_out, w_out = patch_corr.size()[-2:] - return patch_corr.view(n, -1, h_out, w_out) - - def patch_copy_deconv(self, attention_score, context_filter): - """Copy patches using deconv. - - Args: - attention_score (torch.Tensor): Tensor with shape of (n, l , h, w). - context_filter (torch.Tensor): Filter kernel. - - Returns: - torch.Tensor: Tensor with shape of (n, c, h, w). - """ - n, _, h, w = attention_score.size() - attention_score = attention_score.view(1, -1, h, w) - output = F.conv_transpose2d( - attention_score, - context_filter, - stride=self.unfold_raw_stride, - padding=self.unfold_raw_padding, - groups=n) - h_out, w_out = output.size()[-2:] - return output.view(n, -1, h_out, w_out) - - def fuse_correlation_map(self, correlation_map, h_unfold, w_unfold): - """Fuse correlation map. - - This operation is to fuse correlation map for increasing large - consistent correlation regions. - - The mechanism behind this op is simple and easy to understand. A - standard 'Eye' matrix will be applied as a filter on the correlation - map in horizontal and vertical direction. - - The shape of input correlation map is (n, h_unfold*w_unfold, h, w). - When adopting fusing, we will apply convolutional filter in the - reshaped feature map with shape of (n, 1, h_unfold*w_fold, h*w). - - A simple specification for horizontal direction is shown below: - - .. code-block:: python - - (h, (h, (h, (h, - 0) 1) 2) 3) ... - (h, 0) - (h, 1) 1 - (h, 2) 1 - (h, 3) 1 - ... 
- - """ - # horizontal direction - n, _, h_map, w_map = correlation_map.size() - map_ = correlation_map.permute(0, 2, 3, 1) - map_ = map_.reshape(n, h_map * w_map, h_unfold * w_unfold, 1) - map_ = map_.permute(0, 3, 1, 2).contiguous() - map_ = self.fuse_conv(map_, self.fuse_kernel) - - correlation_map = map_.view(n, h_unfold, w_unfold, h_map, w_map) - - # vertical direction - map_ = correlation_map.permute(0, 2, 1, 4, - 3).reshape(n, 1, h_unfold * w_unfold, - h_map * w_map) - map_ = self.fuse_conv(map_, self.fuse_kernel) - - # Note that the dimension should be transposed since the convolution of - # eye matrix will put the normed scores into the last several dimension - correlation_map = map_.view(n, w_unfold, h_unfold, w_map, - h_map).permute(0, 4, 3, 2, 1) - correlation_map = correlation_map.reshape(n, -1, h_unfold, w_unfold) - - return correlation_map - - def calculate_unfold_hw(self, - input_size, - kernel_size=3, - stride=1, - dilation=1, - padding=0): - """Calculate (h, w) after unfolding - - The official implementation of `unfold` in pytorch will put the - dimension (h, w) into `L`. Thus, this function is just to calculate the - (h, w) according to the equation in: - https://pytorch.org/docs/stable/nn.html#torch.nn.Unfold - """ - h_in, w_in = input_size - - h_unfold = int((h_in + 2 * padding - dilation * - (kernel_size - 1) - 1) / stride + 1) - - w_unfold = int((w_in + 2 * padding - dilation * - (kernel_size - 1) - 1) / stride + 1) - return h_unfold, w_unfold - - def calculate_overlap_factor(self, attention_score): - """Calculate the overlap factor after applying deconv. - - Args: - attention_score (torch.Tensor): The attention score with shape of - (n, c, h, w). - - Returns: - torch.Tensor: The overlap factor will be returned. - """ - h, w = attention_score.shape[-2:] - kernel_size = self.unfold_raw_kernel_size - - ones_input = torch.ones(1, 1, h, w).to(attention_score) - ones_filter = torch.ones(1, 1, kernel_size, - kernel_size).to(attention_score) - overlap = F.conv_transpose2d( - ones_input, - ones_filter, - stride=self.unfold_raw_stride, - padding=self.unfold_raw_padding) - - # avoid division by zero - overlap[overlap == 0] = 1. - return overlap - - def mask_correlation_map(self, correlation_map, mask): - """Add mask weight for correlation map. - - Add a negative infinity number to the masked regions so that softmax - function will result in 'zero' in those regions. - - Args: - correlation_map (torch.Tensor): Correlation map with shape of - (n, h_unfold*w_unfold, h_map, w_map). - mask (torch.Tensor): Mask tensor with shape of (n, c, h, w). '1' - in the mask indicates masked region while '0' indicates valid - region. - - Returns: - torch.Tensor: Updated correlation map with mask. - """ - if mask is not None: - mask = F.interpolate(mask, scale_factor=self.scale) - # if any pixel is masked in patch, the patch is considered to be - # masked - mask_cols = self.im2col( - mask, - kernel_size=self.unfold_corr_kernel_size, - stride=self.unfold_corr_stride, - padding=self.unfold_corr_padding, - dilation=self.unfold_corr_dilation) - mask_cols = (mask_cols.sum(dim=1, keepdim=True) > 0).float() - mask_cols = mask_cols.permute(0, 2, - 1).reshape(mask.size(0), -1, 1, 1) - # add negative inf will bring zero in softmax - mask_cols[mask_cols == 1] = -float('inf') - correlation_map += mask_cols - return correlation_map - - def im2col(self, - img, - kernel_size, - stride=1, - padding=0, - dilation=1, - normalize=False, - return_cols=False): - """Reshape image-style feature to columns. 
- - This function is used for unfold feature maps to columns. The - details of this function can be found in: - https://pytorch.org/docs/1.1.0/nn.html?highlight=unfold#torch.nn.Unfold - - Args: - img (torch.Tensor): Features to be unfolded. The shape of this - feature should be (n, c, h, w). - kernel_size (int): In this function, we only support square kernel - with same height and width. - stride (int): Stride number in unfolding. Default: 1. - padding (int): Padding number in unfolding. Default: 0. - dilation (int): Dilation number in unfolding. Default: 1. - normalize (bool): If True, the unfolded feature will be normalized. - Default: False. - return_cols (bool): The official implementation in PyTorch of - unfolding will return features with shape of - (n, c*$prod{kernel_size}$, L). If True, the features will be - reshaped to (n, L, c, kernel_size, kernel_size). Otherwise, - the results will maintain the shape as the official - implementation. - - Returns: - torch.Tensor: Unfolded columns. If `return_cols` is True, the \ - shape of output tensor is \ - `(n, L, c, kernel_size, kernel_size)`. Otherwise, the shape \ - will be `(n, c*$prod{kernel_size}$, L)`. - """ - - # unfold img to columns with shape (n, c*kernel_size**2, num_cols) - img_unfold = F.unfold( - img, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation) - # normalize the feature map - if normalize: - norm = torch.sqrt((img_unfold**2).sum(dim=1, keepdim=True)) - eps = torch.tensor([1e-4]).to(img) - img_unfold = img_unfold / torch.max(norm, eps) - - if return_cols: - img_unfold_ = img_unfold.permute(0, 2, 1) - n, num_cols = img_unfold_.size()[:2] - img_cols = img_unfold_.view(n, num_cols, img.size(1), kernel_size, - kernel_size) - return img_cols - - return img_unfold diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/conv.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/conv.py deleted file mode 100755 index 8e03d0f7..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/conv.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import CONV_LAYERS -from torch import nn - -CONV_LAYERS.register_module('Deconv', module=nn.ConvTranspose2d) -# TODO: octave conv diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/downsample.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/downsample.py deleted file mode 100755 index 7d51a5b5..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/downsample.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -def pixel_unshuffle(x, scale): - """Down-sample by pixel unshuffle. - - Args: - x (Tensor): Input tensor. - scale (int): Scale factor. - - Returns: - Tensor: Output tensor. - """ - - b, c, h, w = x.shape - if h % scale != 0 or w % scale != 0: - raise AssertionError( - f'Invalid scale ({scale}) of pixel unshuffle for tensor ' - f'with shape: {x.shape}') - h = int(h / scale) - w = int(w / scale) - x = x.view(b, c, h, scale, w, scale) - x = x.permute(0, 1, 3, 5, 2, 4) - return x.reshape(b, -1, h, w) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/ensemble.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/ensemble.py deleted file mode 100755 index 019b6e0f..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/ensemble.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn - - -class SpatialTemporalEnsemble(nn.Module): - """ Apply spatial and temporal ensemble and compute outputs - - Args: - is_temporal_ensemble (bool, optional): Whether to apply ensemble - temporally. If True, the sequence will also be flipped temporally. - If the input is an image, this argument must be set to False. - Default: False. - - """ - - def __init__(self, is_temporal_ensemble=False): - - super().__init__() - - self.is_temporal_ensemble = is_temporal_ensemble - - def _transform(self, imgs, mode): - """Apply spatial transform (flip, rotate) to the images. - - Args: - imgs (torch.Tensor): The images to be transformed/ - mode (str): The mode of transform. Supported values are 'vertical', - 'horizontal', and 'transpose', corresponding to vertical flip, - horizontal flip, and rotation, respectively. - - Returns: - torch.Tensor: Output of the model with spatial ensemble applied. - - """ - - is_single_image = False - if imgs.ndim == 4: - if self.is_temporal_ensemble: - raise ValueError('"is_temporal_ensemble" must be False if ' - 'the input is an image.') - is_single_image = True - imgs = imgs.unsqueeze(1) - - if mode == 'vertical': - imgs = imgs.flip(4).clone() - elif mode == 'horizontal': - imgs = imgs.flip(3).clone() - elif mode == 'transpose': - imgs = imgs.permute(0, 1, 2, 4, 3).clone() - - if is_single_image: - imgs = imgs.squeeze(1) - - return imgs - - def spatial_ensemble(self, imgs, model): - """Apply spatial ensemble. - - Args: - imgs (torch.Tensor): The images to be processed by the model. Its - size should be either (n, t, c, h, w) or (n, c, h, w). - model (nn.Module): The model to process the images. - - Returns: - torch.Tensor: Output of the model with spatial ensemble applied. - - """ - - img_list = [imgs.cpu()] - for mode in ['vertical', 'horizontal', 'transpose']: - img_list.extend([self._transform(t, mode) for t in img_list]) - - output_list = [model(t.to(imgs.device)).cpu() for t in img_list] - for i in range(len(output_list)): - if i > 3: - output_list[i] = self._transform(output_list[i], 'transpose') - if i % 4 > 1: - output_list[i] = self._transform(output_list[i], 'horizontal') - if (i % 4) % 2 == 1: - output_list[i] = self._transform(output_list[i], 'vertical') - - outputs = torch.stack(output_list, dim=0) - outputs = outputs.mean(dim=0, keepdim=False) - - return outputs.to(imgs.device) - - def forward(self, imgs, model): - """Apply spatial and temporal ensemble. - - Args: - imgs (torch.Tensor): The images to be processed by the model. Its - size should be either (n, t, c, h, w) or (n, c, h, w). - model (nn.Module): The model to process the images. - - Returns: - torch.Tensor: Output of the model with spatial ensemble applied. - - """ - outputs = self.spatial_ensemble(imgs, model) - if self.is_temporal_ensemble: - outputs += self.spatial_ensemble(imgs.flip(1), model).flip(1) - outputs *= 0.5 - - return outputs diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/flow_warp.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/flow_warp.py deleted file mode 100755 index 7083230d..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/flow_warp.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn.functional as F - - -def flow_warp(x, - flow, - interpolation='bilinear', - padding_mode='zeros', - align_corners=True): - """Warp an image or a feature map with optical flow. - - Args: - x (Tensor): Tensor with size (n, c, h, w). 
- flow (Tensor): Tensor with size (n, h, w, 2). The last dimension is - a two-channel, denoting the width and height relative offsets. - Note that the values are not normalized to [-1, 1]. - interpolation (str): Interpolation mode: 'nearest' or 'bilinear'. - Default: 'bilinear'. - padding_mode (str): Padding mode: 'zeros' or 'border' or 'reflection'. - Default: 'zeros'. - align_corners (bool): Whether align corners. Default: True. - - Returns: - Tensor: Warped image or feature map. - """ - if x.size()[-2:] != flow.size()[1:3]: - raise ValueError(f'The spatial sizes of input ({x.size()[-2:]}) and ' - f'flow ({flow.size()[1:3]}) are not the same.') - _, _, h, w = x.size() - # create mesh grid - device = flow.device - grid_y, grid_x = torch.meshgrid( - torch.arange(0, h, device=device, dtype=x.dtype), - torch.arange(0, w, device=device, dtype=x.dtype)) - grid = torch.stack((grid_x, grid_y), 2) # h, w, 2 - grid.requires_grad = False - - grid_flow = grid + flow - # scale grid_flow to [-1,1] - grid_flow_x = 2.0 * grid_flow[:, :, :, 0] / max(w - 1, 1) - 1.0 - grid_flow_y = 2.0 * grid_flow[:, :, :, 1] / max(h - 1, 1) - 1.0 - grid_flow = torch.stack((grid_flow_x, grid_flow_y), dim=3) - output = F.grid_sample( - x, - grid_flow, - mode=interpolation, - padding_mode=padding_mode, - align_corners=align_corners) - return output diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/gated_conv_module.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/gated_conv_module.py deleted file mode 100755 index fed22c4f..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/gated_conv_module.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, build_activation_layer - - -class SimpleGatedConvModule(nn.Module): - """Simple Gated Convolutional Module. - - This module is a simple gated convolutional module. The detailed formula - is: - - .. math:: - y = \\phi(conv1(x)) * \\sigma(conv2(x)), - - where `phi` is the feature activation function and `sigma` is the gate - activation function. In default, the gate activation function is sigmoid. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): The number of channels of the output feature. Note - that `out_channels` in the conv module is doubled since this module - contains two convolutions for feature and gate separately. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - feat_act_cfg (dict): Config dict for feature activation layer. - gate_act_cfg (dict): Config dict for gate activation layer. - kwargs (keyword arguments): Same as `ConvModule`. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - feat_act_cfg=dict(type='ELU'), - gate_act_cfg=dict(type='Sigmoid'), - **kwargs): - super().__init__() - # the activation function should specified outside conv module - kwargs_ = copy.deepcopy(kwargs) - kwargs_['act_cfg'] = None - self.with_feat_act = feat_act_cfg is not None - self.with_gate_act = gate_act_cfg is not None - - self.conv = ConvModule(in_channels, out_channels * 2, kernel_size, - **kwargs_) - - if self.with_feat_act: - self.feat_act = build_activation_layer(feat_act_cfg) - - if self.with_gate_act: - self.gate_act = build_activation_layer(gate_act_cfg) - - def forward(self, x): - """Forward Function. - - Args: - x (torch.Tensor): Input tensor with shape of (n, c, h, w). - - Returns: - torch.Tensor: Output tensor with shape of (n, c, h', w'). 
- """ - x = self.conv(x) - x, gate = torch.split(x, x.size(1) // 2, dim=1) - if self.with_feat_act: - x = self.feat_act(x) - if self.with_gate_act: - gate = self.gate_act(gate) - x = x * gate - - return x diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/gca_module.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/gca_module.py deleted file mode 100755 index 75ab64e5..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/gca_module.py +++ /dev/null @@ -1,358 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, constant_init, xavier_init -from torch.nn import functional as F - - -class GCAModule(nn.Module): - """Guided Contextual Attention Module. - - From https://arxiv.org/pdf/2001.04069.pdf. - Based on https://github.com/nbei/Deep-Flow-Guided-Video-Inpainting. - This module use image feature map to augment the alpha feature map with - guided contextual attention score. - - Image feature and alpha feature are unfolded to small patches and later - used as conv kernel. Thus, we refer the unfolding size as kernel size. - Image feature patches have a default kernel size 3 while the kernel size of - alpha feature patches could be specified by `rate` (see `rate` below). The - image feature patches are used to convolve with the image feature itself - to calculate the contextual attention. Then the attention feature map is - convolved by alpha feature patches to obtain the attention alpha feature. - At last, the attention alpha feature is added to the input alpha feature. - - Args: - in_channels (int): Input channels of the guided contextual attention - module. - out_channels (int): Output channels of the guided contextual attention - module. - kernel_size (int): Kernel size of image feature patches. Default 3. - stride (int): Stride when unfolding the image feature. Default 1. - rate (int): The downsample rate of image feature map. The corresponding - kernel size and stride of alpha feature patches will be `rate x 2` - and `rate`. It could be regarded as the granularity of the gca - module. Default: 2. - pad_args (dict): Parameters of padding when convolve image feature with - image feature patches or alpha feature patches. Allowed keys are - `mode` and `value`. See torch.nn.functional.pad() for more - information. Default: dict(mode='reflect'). - interpolation (str): Interpolation method in upsampling and - downsampling. - penalty (float): Punishment hyperparameter to avoid a large correlation - between each unknown patch and itself. - eps (float): A small number to avoid dividing by 0 when calculating - the normed image feature patch. Default: 1e-4. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - stride=1, - rate=2, - pad_args=dict(mode='reflect'), - interpolation='nearest', - penalty=-1e4, - eps=1e-4): - super().__init__() - self.kernel_size = kernel_size - self.stride = stride - self.rate = rate - self.pad_args = pad_args - self.interpolation = interpolation - self.penalty = penalty - self.eps = eps - - # reduced the channels of input image feature. 
- self.guidance_conv = nn.Conv2d(in_channels, in_channels // 2, 1) - - # convolution after the attention alpha feature - self.out_conv = ConvModule( - out_channels, - out_channels, - 1, - norm_cfg=dict(type='BN'), - act_cfg=None) - - self.init_weights() - - def init_weights(self): - xavier_init(self.guidance_conv, distribution='uniform') - xavier_init(self.out_conv.conv, distribution='uniform') - constant_init(self.out_conv.norm, 1e-3) - - def forward(self, img_feat, alpha_feat, unknown=None, softmax_scale=1.): - """Forward function of GCAModule. - - Args: - img_feat (Tensor): Image feature map of shape - (N, ori_c, ori_h, ori_w). - alpha_feat (Tensor): Alpha feature map of shape - (N, alpha_c, ori_h, ori_w). - unknown (Tensor, optional): Unknown area map generated by trimap. - If specified, this tensor should have shape - (N, 1, ori_h, ori_w). - softmax_scale (float, optional): The softmax scale of the attention - if unknown area is not provided in forward. Default: 1. - - Returns: - Tensor: The augmented alpha feature. - """ - - if alpha_feat.shape[2:4] != img_feat.shape[2:4]: - raise ValueError( - 'image feature size does not align with alpha feature size: ' - f'image feature size {img_feat.shape[2:4]}, ' - f'alpha feature size {alpha_feat.shape[2:4]}') - - if unknown is not None and unknown.shape[2:4] != img_feat.shape[2:4]: - raise ValueError( - 'image feature size does not align with unknown mask size: ' - f'image feature size {img_feat.shape[2:4]}, ' - f'unknown mask size {unknown.shape[2:4]}') - - # preprocess image feature - img_feat = self.guidance_conv(img_feat) - img_feat = F.interpolate( - img_feat, scale_factor=1 / self.rate, mode=self.interpolation) - - # preprocess unknown mask - unknown, softmax_scale = self.process_unknown_mask( - unknown, img_feat, softmax_scale) - - img_ps, alpha_ps, unknown_ps = self.extract_feature_maps_patches( - img_feat, alpha_feat, unknown) - - # create self correlation mask with shape: - # (N, img_h*img_w, img_h, img_w) - self_mask = self.get_self_correlation_mask(img_feat) - - # split tensors by batch dimension; tuple is returned - img_groups = torch.split(img_feat, 1, dim=0) - img_ps_groups = torch.split(img_ps, 1, dim=0) - alpha_ps_groups = torch.split(alpha_ps, 1, dim=0) - unknown_ps_groups = torch.split(unknown_ps, 1, dim=0) - scale_groups = torch.split(softmax_scale, 1, dim=0) - groups = (img_groups, img_ps_groups, alpha_ps_groups, - unknown_ps_groups, scale_groups) - - out = [] - # i is the virtual index of the sample in the current batch - for img_i, img_ps_i, alpha_ps_i, unknown_ps_i, scale_i in zip(*groups): - similarity_map = self.compute_similarity_map(img_i, img_ps_i) - - gca_score = self.compute_guided_attention_score( - similarity_map, unknown_ps_i, scale_i, self_mask) - - out_i = self.propagate_alpha_feature(gca_score, alpha_ps_i) - - out.append(out_i) - - out = torch.cat(out, dim=0) - out.reshape_as(alpha_feat) - - out = self.out_conv(out) + alpha_feat - return out - - def extract_feature_maps_patches(self, img_feat, alpha_feat, unknown): - """Extract image feature, alpha feature unknown patches. - - Args: - img_feat (Tensor): Image feature map of shape - (N, img_c, img_h, img_w). - alpha_feat (Tensor): Alpha feature map of shape - (N, alpha_c, ori_h, ori_w). - unknown (Tensor, optional): Unknown area map generated by trimap of - shape (N, 1, img_h, img_w). - - Returns: - tuple: 3-tuple of - - ``Tensor``: Image feature patches of shape \ - (N, img_h*img_w, img_c, img_ks, img_ks). 
- - ``Tensor``: Guided contextual attention alpha feature map. \ - (N, img_h*img_w, alpha_c, alpha_ks, alpha_ks). - - ``Tensor``: Unknown mask of shape (N, img_h*img_w, 1, 1). - """ - # extract image feature patches with shape: - # (N, img_h*img_w, img_c, img_ks, img_ks) - img_ks = self.kernel_size - img_ps = self.extract_patches(img_feat, img_ks, self.stride) - - # extract alpha feature patches with shape: - # (N, img_h*img_w, alpha_c, alpha_ks, alpha_ks) - alpha_ps = self.extract_patches(alpha_feat, self.rate * 2, self.rate) - - # extract unknown mask patches with shape: (N, img_h*img_w, 1, 1) - unknown_ps = self.extract_patches(unknown, img_ks, self.stride) - unknown_ps = unknown_ps.squeeze(dim=2) # squeeze channel dimension - unknown_ps = unknown_ps.mean(dim=[2, 3], keepdim=True) - - return img_ps, alpha_ps, unknown_ps - - def compute_similarity_map(self, img_feat, img_ps): - """Compute similarity between image feature patches. - - Args: - img_feat (Tensor): Image feature map of shape - (1, img_c, img_h, img_w). - img_ps (Tensor): Image feature patches tensor of shape - (1, img_h*img_w, img_c, img_ks, img_ks). - - Returns: - Tensor: Similarity map between image feature patches with shape \ - (1, img_h*img_w, img_h, img_w). - """ - img_ps = img_ps[0] # squeeze dim 0 - # convolve the feature to get correlation (similarity) map - escape_NaN = torch.FloatTensor([self.eps]).to(img_feat) - img_ps_normed = img_ps / torch.max(self.l2_norm(img_ps), escape_NaN) - img_feat = self.pad(img_feat, self.kernel_size, self.stride) - similarity_map = F.conv2d(img_feat, img_ps_normed) - - return similarity_map - - def compute_guided_attention_score(self, similarity_map, unknown_ps, scale, - self_mask): - """Compute guided attention score. - - Args: - similarity_map (Tensor): Similarity map of image feature with shape - (1, img_h*img_w, img_h, img_w). - unknown_ps (Tensor): Unknown area patches tensor of shape - (1, img_h*img_w, 1, 1). - scale (Tensor): Softmax scale of known and unknown area: - [unknown_scale, known_scale]. - self_mask (Tensor): Self correlation mask of shape - (1, img_h*img_w, img_h, img_w). At (1, i*i, i, i) mask value - equals -1e4 for i in [1, img_h*img_w] and other area is all - zero. - - Returns: - Tensor: Similarity map between image feature patches with shape \ - (1, img_h*img_w, img_h, img_w). - """ - # scale the correlation with predicted scale factor for known and - # unknown area - unknown_scale, known_scale = scale[0] - out = similarity_map * ( - unknown_scale * unknown_ps.gt(0.).float() + - known_scale * unknown_ps.le(0.).float()) - # mask itself, self-mask only applied to unknown area - out = out + self_mask * unknown_ps - gca_score = F.softmax(out, dim=1) - - return gca_score - - def propagate_alpha_feature(self, gca_score, alpha_ps): - """Propagate alpha feature based on guided attention score. - - Args: - gca_score (Tensor): Guided attention score map of shape - (1, img_h*img_w, img_h, img_w). - alpha_ps (Tensor): Alpha feature patches tensor of shape - (1, img_h*img_w, alpha_c, alpha_ks, alpha_ks). - - Returns: - Tensor: Propagated alpha feature map of shape \ - (1, alpha_c, alpha_h, alpha_w). - """ - alpha_ps = alpha_ps[0] # squeeze dim 0 - if self.rate == 1: - gca_score = self.pad(gca_score, kernel_size=2, stride=1) - alpha_ps = alpha_ps.permute(1, 0, 2, 3) - out = F.conv2d(gca_score, alpha_ps) / 4. - else: - out = F.conv_transpose2d( - gca_score, alpha_ps, stride=self.rate, padding=1) / 4. 
- - return out - - def process_unknown_mask(self, unknown, img_feat, softmax_scale): - """Process unknown mask. - - Args: - unknown (Tensor, optional): Unknown area map generated by trimap of - shape (N, 1, ori_h, ori_w) - img_feat (Tensor): The interpolated image feature map of shape - (N, img_c, img_h, img_w). - softmax_scale (float, optional): The softmax scale of the attention - if unknown area is not provided in forward. Default: 1. - - Returns: - tuple: 2-tuple of - - ``Tensor``: Interpolated unknown area map of shape \ - (N, img_h*img_w, img_h, img_w). - - ``Tensor``: Softmax scale tensor of known and unknown area of \ - shape (N, 2). - """ - n, _, h, w = img_feat.shape - - if unknown is not None: - unknown = unknown.clone() - unknown = F.interpolate( - unknown, scale_factor=1 / self.rate, mode=self.interpolation) - unknown_mean = unknown.mean(dim=[2, 3]) - known_mean = 1 - unknown_mean - unknown_scale = torch.clamp( - torch.sqrt(unknown_mean / known_mean), 0.1, 10).to(img_feat) - known_scale = torch.clamp( - torch.sqrt(known_mean / unknown_mean), 0.1, 10).to(img_feat) - softmax_scale = torch.cat([unknown_scale, known_scale], dim=1) - else: - unknown = torch.ones((n, 1, h, w)).to(img_feat) - softmax_scale = torch.FloatTensor( - [softmax_scale, - softmax_scale]).view(1, 2).repeat(n, 1).to(img_feat) - - return unknown, softmax_scale - - def extract_patches(self, x, kernel_size, stride): - """Extract feature patches. - - The feature map will be padded automatically to make sure the number of - patches is equal to `(H / stride) * (W / stride)`. - - Args: - x (Tensor): Feature map of shape (N, C, H, W). - kernel_size (int): Size of each patches. - stride (int): Stride between patches. - - Returns: - Tensor: Extracted patches of shape \ - (N, (H / stride) * (W / stride) , C, kernel_size, kernel_size). - """ - n, c, _, _ = x.shape - x = self.pad(x, kernel_size, stride) - x = F.unfold(x, (kernel_size, kernel_size), stride=(stride, stride)) - x = x.permute(0, 2, 1) - x = x.reshape(n, -1, c, kernel_size, kernel_size) - return x - - def pad(self, x, kernel_size, stride): - left = (kernel_size - stride + 1) // 2 - right = (kernel_size - stride) // 2 - pad = (left, right, left, right) - return F.pad(x, pad, **self.pad_args) - - def get_self_correlation_mask(self, img_feat): - _, _, h, w = img_feat.shape - # As ONNX does not support dynamic num_classes, we have to convert it - # into an integer - self_mask = F.one_hot( - torch.arange(h * w).view(h, w), num_classes=int(h * w)) - self_mask = self_mask.permute(2, 0, 1).view(1, h * w, h, w) - # use large negative value to mask out self-correlation before softmax - self_mask = self_mask * self.penalty - return self_mask.to(img_feat) - - @staticmethod - def l2_norm(x): - x = x**2 - x = x.sum(dim=[1, 2, 3], keepdim=True) - return torch.sqrt(x) diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/generation_model_utils.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/generation_model_utils.py deleted file mode 100755 index 21d45fbe..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/generation_model_utils.py +++ /dev/null @@ -1,301 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, kaiming_init, normal_init, xavier_init -from torch.nn import init - - -def generation_init_weights(module, init_type='normal', init_gain=0.02): - """Default initialization of network weights for image generation. 
- - By default, we use normal init, but xavier and kaiming might work - better for some applications. - - Args: - module (nn.Module): Module to be initialized. - init_type (str): The name of an initialization method: - normal | xavier | kaiming | orthogonal. - init_gain (float): Scaling factor for normal, xavier and - orthogonal. - """ - - def init_func(m): - """Initialization function. - - Args: - m (nn.Module): Module to be initialized. - """ - classname = m.__class__.__name__ - if hasattr(m, 'weight') and (classname.find('Conv') != -1 - or classname.find('Linear') != -1): - if init_type == 'normal': - normal_init(m, 0.0, init_gain) - elif init_type == 'xavier': - xavier_init(m, gain=init_gain, distribution='normal') - elif init_type == 'kaiming': - kaiming_init( - m, - a=0, - mode='fan_in', - nonlinearity='leaky_relu', - distribution='normal') - elif init_type == 'orthogonal': - init.orthogonal_(m.weight, gain=init_gain) - init.constant_(m.bias.data, 0.0) - else: - raise NotImplementedError( - f"Initialization method '{init_type}' is not implemented") - elif classname.find('BatchNorm2d') != -1: - # BatchNorm Layer's weight is not a matrix; - # only normal distribution applies. - normal_init(m, 1.0, init_gain) - - module.apply(init_func) - - -class GANImageBuffer: - """This class implements an image buffer that stores previously - generated images. - - This buffer allows us to update the discriminator using a history of - generated images rather than the ones produced by the latest generator - to reduce model oscillation. - - Args: - buffer_size (int): The size of image buffer. If buffer_size = 0, - no buffer will be created. - buffer_ratio (float): The chance / possibility to use the images - previously stored in the buffer. - """ - - def __init__(self, buffer_size, buffer_ratio=0.5): - self.buffer_size = buffer_size - # create an empty buffer - if self.buffer_size > 0: - self.img_num = 0 - self.image_buffer = [] - self.buffer_ratio = buffer_ratio - - def query(self, images): - """Query current image batch using a history of generated images. - - Args: - images (Tensor): Current image batch without history information. - """ - if self.buffer_size == 0: # if the buffer size is 0, do nothing - return images - return_images = [] - for image in images: - image = torch.unsqueeze(image.data, 0) - # if the buffer is not full, keep inserting current images - if self.img_num < self.buffer_size: - self.img_num = self.img_num + 1 - self.image_buffer.append(image) - return_images.append(image) - else: - use_buffer = np.random.random() < self.buffer_ratio - # by self.buffer_ratio, the buffer will return a previously - # stored image, and insert the current image into the buffer - if use_buffer: - random_id = np.random.randint(0, self.buffer_size) - image_tmp = self.image_buffer[random_id].clone() - self.image_buffer[random_id] = image - return_images.append(image_tmp) - # by (1 - self.buffer_ratio), the buffer will return the - # current image - else: - return_images.append(image) - # collect all the images and return - return_images = torch.cat(return_images, 0) - return return_images - - -class UnetSkipConnectionBlock(nn.Module): - """Construct a Unet submodule with skip connections, with the following - structure: downsampling - `submodule` - upsampling. - - Args: - outer_channels (int): Number of channels at the outer conv layer. - inner_channels (int): Number of channels at the inner conv layer. - in_channels (int): Number of channels in input images/features. 
If is - None, equals to `outer_channels`. Default: None. - submodule (UnetSkipConnectionBlock): Previously constructed submodule. - Default: None. - is_outermost (bool): Whether this module is the outermost module. - Default: False. - is_innermost (bool): Whether this module is the innermost module. - Default: False. - norm_cfg (dict): Config dict to build norm layer. Default: - `dict(type='BN')`. - use_dropout (bool): Whether to use dropout layers. Default: False. - """ - - def __init__(self, - outer_channels, - inner_channels, - in_channels=None, - submodule=None, - is_outermost=False, - is_innermost=False, - norm_cfg=dict(type='BN'), - use_dropout=False): - super().__init__() - # cannot be both outermost and innermost - assert not (is_outermost and is_innermost), ( - "'is_outermost' and 'is_innermost' cannot be True" - 'at the same time.') - self.is_outermost = is_outermost - assert isinstance(norm_cfg, dict), ("'norm_cfg' should be dict, but" - f'got {type(norm_cfg)}') - assert 'type' in norm_cfg, "'norm_cfg' must have key 'type'" - # We use norm layers in the unet skip connection block. - # Only for IN, use bias since it does not have affine parameters. - use_bias = norm_cfg['type'] == 'IN' - - kernel_size = 4 - stride = 2 - padding = 1 - if in_channels is None: - in_channels = outer_channels - down_conv_cfg = dict(type='Conv2d') - down_norm_cfg = norm_cfg - down_act_cfg = dict(type='LeakyReLU', negative_slope=0.2) - up_conv_cfg = dict(type='Deconv') - up_norm_cfg = norm_cfg - up_act_cfg = dict(type='ReLU') - up_in_channels = inner_channels * 2 - up_bias = use_bias - middle = [submodule] - upper = [] - - if is_outermost: - down_act_cfg = None - down_norm_cfg = None - up_bias = True - up_norm_cfg = None - upper = [nn.Tanh()] - elif is_innermost: - down_norm_cfg = None - up_in_channels = inner_channels - middle = [] - else: - upper = [nn.Dropout(0.5)] if use_dropout else [] - - down = [ - ConvModule( - in_channels=in_channels, - out_channels=inner_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - bias=use_bias, - conv_cfg=down_conv_cfg, - norm_cfg=down_norm_cfg, - act_cfg=down_act_cfg, - order=('act', 'conv', 'norm')) - ] - up = [ - ConvModule( - in_channels=up_in_channels, - out_channels=outer_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - bias=up_bias, - conv_cfg=up_conv_cfg, - norm_cfg=up_norm_cfg, - act_cfg=up_act_cfg, - order=('act', 'conv', 'norm')) - ] - - model = down + middle + up + upper - - self.model = nn.Sequential(*model) - - def forward(self, x): - """Forward function. - - Args: - x (Tensor): Input tensor with shape (n, c, h, w). - - Returns: - Tensor: Forward results. - """ - if self.is_outermost: - return self.model(x) - - # add skip connections - return torch.cat([x, self.model(x)], 1) - - -class ResidualBlockWithDropout(nn.Module): - """Define a Residual Block with dropout layers. - - Ref: - Deep Residual Learning for Image Recognition - - A residual block is a conv block with skip connections. A dropout layer is - added between two common conv modules. - - Args: - channels (int): Number of channels in the conv layer. - padding_mode (str): The name of padding layer: - 'reflect' | 'replicate' | 'zeros'. - norm_cfg (dict): Config dict to build norm layer. Default: - `dict(type='IN')`. - use_dropout (bool): Whether to use dropout layers. Default: True. 
- """ - - def __init__(self, - channels, - padding_mode, - norm_cfg=dict(type='BN'), - use_dropout=True): - super().__init__() - assert isinstance(norm_cfg, dict), ("'norm_cfg' should be dict, but" - f'got {type(norm_cfg)}') - assert 'type' in norm_cfg, "'norm_cfg' must have key 'type'" - # We use norm layers in the residual block with dropout layers. - # Only for IN, use bias since it does not have affine parameters. - use_bias = norm_cfg['type'] == 'IN' - - block = [ - ConvModule( - in_channels=channels, - out_channels=channels, - kernel_size=3, - padding=1, - bias=use_bias, - norm_cfg=norm_cfg, - padding_mode=padding_mode) - ] - - if use_dropout: - block += [nn.Dropout(0.5)] - - block += [ - ConvModule( - in_channels=channels, - out_channels=channels, - kernel_size=3, - padding=1, - bias=use_bias, - norm_cfg=norm_cfg, - act_cfg=None, - padding_mode=padding_mode) - ] - - self.block = nn.Sequential(*block) - - def forward(self, x): - """Forward function. Add skip connections without final ReLU. - - Args: - x (Tensor): Input tensor with shape (n, c, h, w). - - Returns: - Tensor: Forward results. - """ - out = x + self.block(x) - return out diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/img_normalize.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/img_normalize.py deleted file mode 100755 index 1bd8f76a..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/img_normalize.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -class ImgNormalize(nn.Conv2d): - """Normalize images with the given mean and std value. - - Based on Conv2d layer, can work in GPU. - - Args: - pixel_range (float): Pixel range of feature. - img_mean (Tuple[float]): Image mean of each channel. - img_std (Tuple[float]): Image std of each channel. - sign (int): Sign of bias. Default -1. - """ - - def __init__(self, pixel_range, img_mean, img_std, sign=-1): - - assert len(img_mean) == len(img_std) - num_channels = len(img_mean) - super().__init__(num_channels, num_channels, kernel_size=1) - - std = torch.Tensor(img_std) - self.weight.data = torch.eye(num_channels).view( - num_channels, num_channels, 1, 1) - self.weight.data.div_(std.view(num_channels, 1, 1, 1)) - self.bias.data = sign * pixel_range * torch.Tensor(img_mean) - self.bias.data.div_(std) - - self.weight.requires_grad = False - self.bias.requires_grad = False diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/linear_module.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/linear_module.py deleted file mode 100755 index 5101ad16..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/linear_module.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import build_activation_layer, kaiming_init - - -class LinearModule(nn.Module): - """A linear block that contains linear/norm/activation layers. - - For low level vision, we add spectral norm and padding layer. - - Args: - in_features (int): Same as nn.Linear. - out_features (int): Same as nn.Linear. - bias (bool): Same as nn.Linear. - act_cfg (dict): Config dict for activation layer, "relu" by default. - inplace (bool): Whether to use inplace mode for activation. - with_spectral_norm (bool): Whether use spectral norm in linear module. - order (tuple[str]): The order of linear/activation layers. It is a - sequence of "linear", "norm" and "act". 
Examples are - ("linear", "act") and ("act", "linear"). - """ - - def __init__(self, - in_features, - out_features, - bias=True, - act_cfg=dict(type='ReLU'), - inplace=True, - with_spectral_norm=False, - order=('linear', 'act')): - super().__init__() - assert act_cfg is None or isinstance(act_cfg, dict) - self.act_cfg = act_cfg - self.inplace = inplace - self.with_spectral_norm = with_spectral_norm - self.order = order - assert isinstance(self.order, tuple) and len(self.order) == 2 - assert set(order) == set(['linear', 'act']) - - self.with_activation = act_cfg is not None - self.with_bias = bias - - # build linear layer - self.linear = nn.Linear(in_features, out_features, bias=bias) - # export the attributes of self.linear to a higher level for - # convenience - self.in_features = self.linear.in_features - self.out_features = self.linear.out_features - - if self.with_spectral_norm: - self.linear = nn.utils.spectral_norm(self.linear) - - # build activation layer - if self.with_activation: - act_cfg_ = act_cfg.copy() - act_cfg_.setdefault('inplace', inplace) - self.activate = build_activation_layer(act_cfg_) - - # Use msra init by default - self.init_weights() - - def init_weights(self): - if self.with_activation and self.act_cfg['type'] == 'LeakyReLU': - nonlinearity = 'leaky_relu' - a = self.act_cfg.get('negative_slope', 0.01) - else: - nonlinearity = 'relu' - a = 0 - - kaiming_init(self.linear, a=a, nonlinearity=nonlinearity) - - def forward(self, x, activate=True): - """Forward Function. - - Args: - x (torch.Tensor): Input tensor with shape of :math:`(n, *, c)`. - Same as ``torch.nn.Linear``. - activate (bool, optional): Whether to use activation layer. - Defaults to True. - - Returns: - torch.Tensor: Same as ``torch.nn.Linear``. - """ - for layer in self.order: - if layer == 'linear': - x = self.linear(x) - elif layer == 'act' and activate and self.with_activation: - x = self.activate(x) - return x diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/mask_conv_module.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/mask_conv_module.py deleted file mode 100755 index e7086868..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/mask_conv_module.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import ConvModule - - -class MaskConvModule(ConvModule): - """Mask convolution module. - - This is a simple wrapper for mask convolution like: 'partial conv'. - Convolutions in this module always need a mask as extra input. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - conv_cfg (dict): Config dict for convolution layer. - norm_cfg (dict): Config dict for normalization layer. - act_cfg (dict): Config dict for activation layer, "relu" by default. - inplace (bool): Whether to use inplace mode for activation. - with_spectral_norm (bool): Whether use spectral norm in conv module. - padding_mode (str): If the `padding_mode` has not been supported by - current `Conv2d` in Pytorch, we will use our own padding layer - instead. 
Currently, we support ['zeros', 'circular'] with official - implementation and ['reflect'] with our own implementation. - Default: 'zeros'. - order (tuple[str]): The order of conv/norm/activation layers. It is a - sequence of "conv", "norm" and "act". Examples are - ("conv", "norm", "act") and ("act", "conv", "norm"). - """ - supported_conv_list = ['PConv'] - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - assert self.conv_cfg['type'] in self.supported_conv_list - - self.init_weights() - - def forward(self, - x, - mask=None, - activate=True, - norm=True, - return_mask=True): - """Forward function for partial conv2d. - - Args: - input (torch.Tensor): Tensor with shape of (n, c, h, w). - mask (torch.Tensor): Tensor with shape of (n, c, h, w) or - (n, 1, h, w). If mask is not given, the function will - work as standard conv2d. Default: None. - activate (bool): Whether use activation layer. - norm (bool): Whether use norm layer. - return_mask (bool): If True and mask is not None, the updated - mask will be returned. Default: True. - - Returns: - Tensor or tuple: Result Tensor or 2-tuple of - - ``Tensor``: Results after partial conv. - - ``Tensor``: Updated mask will be returned if mask is given \ - and `return_mask` is True. - """ - for layer in self.order: - if layer == 'conv': - if self.with_explicit_padding: - x = self.padding_layer(x) - mask = self.padding_layer(mask) - if return_mask: - x, updated_mask = self.conv( - x, mask, return_mask=return_mask) - else: - x = self.conv(x, mask, return_mask=False) - elif layer == 'norm' and norm and self.with_norm: - x = self.norm(x) - elif layer == 'act' and activate and self.with_activation: - x = self.activate(x) - - if return_mask: - return x, updated_mask - - return x diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/model_utils.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/model_utils.py deleted file mode 100755 index 62135c6e..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/model_utils.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - - -def set_requires_grad(nets, requires_grad=False): - """Set requires_grad for all the networks. - - Args: - nets (nn.Module | list[nn.Module]): A list of networks or a single - network. - requires_grad (bool): Whether the networks require gradients or not - """ - if not isinstance(nets, list): - nets = [nets] - for net in nets: - if net is not None: - for param in net.parameters(): - param.requires_grad = requires_grad - - -def extract_bbox_patch(bbox, img, channel_first=True): - """Extract patch from a given bbox - - Args: - bbox (torch.Tensor | numpy.array): Bbox with (top, left, h, w). If - `img` has batch dimension, the `bbox` must be stacked at first - dimension. The shape should be (4,) or (n, 4). - img (torch.Tensor | numpy.array): Image data to be extracted. If - organized in batch dimension, the batch dimension must be the first - order like (n, h, w, c) or (n, c, h, w). - channel_first (bool): If True, the channel dimension of img is before - height and width, e.g. (c, h, w). Otherwise, the img shape (samples - in the batch) is like (h, w, c). - - Returns: - (torch.Tensor | numpy.array): Extracted patches. The dimension of the \ - output should be the same as `img`. 
- """ - - def _extract(bbox, img): - assert len(bbox) == 4 - t, l, h, w = bbox - if channel_first: - img_patch = img[..., t:t + h, l:l + w] - else: - img_patch = img[t:t + h, l:l + w, ...] - - return img_patch - - input_size = img.shape - assert len(input_size) == 3 or len(input_size) == 4 - bbox_size = bbox.shape - assert bbox_size == (4, ) or (len(bbox_size) == 2 - and bbox_size[0] == input_size[0]) - - # images with batch dimension - if len(input_size) == 4: - output_list = [] - for i in range(input_size[0]): - img_patch_ = _extract(bbox[i], img[i:i + 1, ...]) - output_list.append(img_patch_) - if isinstance(img, torch.Tensor): - img_patch = torch.cat(output_list, dim=0) - else: - img_patch = np.concatenate(output_list, axis=0) - # standardize image - else: - img_patch = _extract(bbox, img) - - return img_patch - - -def scale_bbox(bbox, target_size): - """Modify bbox to target size. - - The original bbox will be enlarged to the target size with the original - bbox in the center of the new bbox. - - Args: - bbox (np.ndarray | torch.Tensor): Bboxes to be modified. Bbox can - be in batch or not. The shape should be (4,) or (n, 4). - target_size (tuple[int]): Target size of final bbox. - - Returns: - (np.ndarray | torch.Tensor): Modified bboxes. - """ - - def _mod(bbox, target_size): - top_ori, left_ori, h_ori, w_ori = bbox - h, w = target_size - assert h >= h_ori and w >= w_ori - top = int(max(0, top_ori - (h - h_ori) // 2)) - left = int(max(0, left_ori - (w - w_ori) // 2)) - - if isinstance(bbox, torch.Tensor): - bbox_new = torch.Tensor([top, left, h, w]).type_as(bbox) - else: - bbox_new = np.asarray([top, left, h, w]) - - return bbox_new - - if isinstance(bbox, torch.Tensor): - bbox_new = torch.zeros_like(bbox) - elif isinstance(bbox, np.ndarray): - bbox_new = np.zeros_like(bbox) - else: - raise TypeError('bbox mush be torch.Tensor or numpy.ndarray' - f'but got type {type(bbox)}') - bbox_shape = list(bbox.shape) - - if len(bbox_shape) == 2: - for i in range(bbox_shape[0]): - bbox_new[i, :] = _mod(bbox[i], target_size) - else: - bbox_new = _mod(bbox, target_size) - - return bbox_new - - -def extract_around_bbox(img, bbox, target_size, channel_first=True): - """Extract patches around the given bbox. - - Args: - bbox (np.ndarray | torch.Tensor): Bboxes to be modified. Bbox can - be in batch or not. - target_size (List(int)): Target size of final bbox. - - Returns: - (torch.Tensor | numpy.array): Extracted patches. The dimension of the \ - output should be the same as `img`. - """ - bbox_new = scale_bbox(bbox, target_size) - img_patch = extract_bbox_patch(bbox_new, img, channel_first=channel_first) - - return img_patch, bbox_new diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/partial_conv.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/partial_conv.py deleted file mode 100755 index 51b50fec..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/partial_conv.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import CONV_LAYERS - - -@CONV_LAYERS.register_module(name='PConv') -class PartialConv2d(nn.Conv2d): - """Implementation for partial convolution. - - Image Inpainting for Irregular Holes Using Partial Convolutions - [https://arxiv.org/abs/1804.07723] - - Args: - multi_channel (bool): If True, the mask is multi-channel. Otherwise, - the mask is single-channel. 
- eps (float): Need to be changed for mixed precision training. - For mixed precision training, you need change 1e-8 to 1e-6. - """ - - def __init__(self, *args, multi_channel=False, eps=1e-8, **kwargs): - super().__init__(*args, **kwargs) - - # whether the mask is multi-channel or not - self.multi_channel = multi_channel - self.eps = eps - - if self.multi_channel: - out_channels, in_channels = self.out_channels, self.in_channels - else: - out_channels, in_channels = 1, 1 - - self.register_buffer( - 'weight_mask_updater', - torch.ones(out_channels, in_channels, self.kernel_size[0], - self.kernel_size[1])) - - self.mask_kernel_numel = np.prod(self.weight_mask_updater.shape[1:4]) - self.mask_kernel_numel = (self.mask_kernel_numel).item() - - def forward(self, input, mask=None, return_mask=True): - """Forward function for partial conv2d. - - Args: - input (torch.Tensor): Tensor with shape of (n, c, h, w). - mask (torch.Tensor): Tensor with shape of (n, c, h, w) or - (n, 1, h, w). If mask is not given, the function will - work as standard conv2d. Default: None. - return_mask (bool): If True and mask is not None, the updated - mask will be returned. Default: True. - - Returns: - torch.Tensor : Results after partial conv.\ - torch.Tensor : Updated mask will be returned if mask is given and \ - ``return_mask`` is True. - """ - assert input.dim() == 4 - if mask is not None: - assert mask.dim() == 4 - if self.multi_channel: - assert mask.shape[1] == input.shape[1] - else: - assert mask.shape[1] == 1 - - # update mask and compute mask ratio - if mask is not None: - with torch.no_grad(): - - updated_mask = F.conv2d( - mask, - self.weight_mask_updater, - bias=None, - stride=self.stride, - padding=self.padding, - dilation=self.dilation) - mask_ratio = self.mask_kernel_numel / (updated_mask + self.eps) - - updated_mask = torch.clamp(updated_mask, 0, 1) - mask_ratio = mask_ratio * updated_mask - - # standard conv2d - if mask is not None: - input = input * mask - raw_out = super().forward(input) - - if mask is not None: - if self.bias is None: - output = raw_out * mask_ratio - else: - # compute new bias when mask is given - bias_view = self.bias.view(1, self.out_channels, 1, 1) - output = (raw_out - bias_view) * mask_ratio + bias_view - output = output * updated_mask - else: - output = raw_out - - if return_mask and mask is not None: - return output, updated_mask - - return output diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/separable_conv_module.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/separable_conv_module.py deleted file mode 100755 index 139a54e5..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/separable_conv_module.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import ConvModule - - -class DepthwiseSeparableConvModule(nn.Module): - """Depthwise separable convolution module. - - See https://arxiv.org/pdf/1704.04861.pdf for details. - - This module can replace a ConvModule with the conv block replaced by two - conv block: depthwise conv block and pointwise conv block. The depthwise - conv block contains depthwise-conv/norm/activation layers. The pointwise - conv block contains pointwise-conv/norm/activation layers. It should be - noted that there will be norm/activation layer in the depthwise conv block - if ``norm_cfg`` and ``act_cfg`` are specified. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. 
- kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. Default: 1. - padding (int or tuple[int]): Same as nn.Conv2d. Default: 0. - dilation (int or tuple[int]): Same as nn.Conv2d. Default: 1. - norm_cfg (dict): Default norm config for both depthwise ConvModule and - pointwise ConvModule. Default: None. - act_cfg (dict): Default activation config for both depthwise ConvModule - and pointwise ConvModule. Default: dict(type='ReLU'). - dw_norm_cfg (dict): Norm config of depthwise ConvModule. If it is - 'default', it will be the same as ``norm_cfg``. Default: 'default'. - dw_act_cfg (dict): Activation config of depthwise ConvModule. If it is - 'default', it will be the same as ``act_cfg``. Default: 'default'. - pw_norm_cfg (dict): Norm config of pointwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - pw_act_cfg (dict): Activation config of pointwise ConvModule. If it is - 'default', it will be the same as ``act_cfg``. Default: 'default'. - kwargs (optional): Other shared arguments for depthwise and pointwise - ConvModule. See ConvModule for ref. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - dw_norm_cfg='default', - dw_act_cfg='default', - pw_norm_cfg='default', - pw_act_cfg='default', - **kwargs): - super().__init__() - assert 'groups' not in kwargs, 'groups should not be specified' - - # if norm/activation config of depthwise/pointwise ConvModule is not - # specified, use default config. - dw_norm_cfg = dw_norm_cfg if dw_norm_cfg != 'default' else norm_cfg - dw_act_cfg = dw_act_cfg if dw_act_cfg != 'default' else act_cfg - pw_norm_cfg = pw_norm_cfg if pw_norm_cfg != 'default' else norm_cfg - pw_act_cfg = pw_act_cfg if pw_act_cfg != 'default' else act_cfg - - # depthwise convolution - self.depthwise_conv = ConvModule( - in_channels, - in_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=in_channels, - norm_cfg=dw_norm_cfg, - act_cfg=dw_act_cfg, - **kwargs) - - self.pointwise_conv = ConvModule( - in_channels, - out_channels, - 1, - norm_cfg=pw_norm_cfg, - act_cfg=pw_act_cfg, - **kwargs) - - def forward(self, x): - """Forward function. - - Args: - x (Tensor): Input tensor with shape (N, C, H, W). - - Returns: - Tensor: Output tensor. - """ - x = self.depthwise_conv(x) - x = self.pointwise_conv(x) - return x diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/sr_backbone_utils.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/sr_backbone_utils.py deleted file mode 100755 index b4b0aad9..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/sr_backbone_utils.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import constant_init, kaiming_init -from mmcv.utils.parrots_wrapper import _BatchNorm - - -def default_init_weights(module, scale=1): - """Initialize network weights. - - Args: - modules (nn.Module): Modules to be initialized. - scale (float): Scale initialized weights, especially for residual - blocks. 
- """ - for m in module.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m, a=0, mode='fan_in', bias=0) - m.weight.data *= scale - elif isinstance(m, nn.Linear): - kaiming_init(m, a=0, mode='fan_in', bias=0) - m.weight.data *= scale - elif isinstance(m, _BatchNorm): - constant_init(m.weight, val=1, bias=0) - - -def make_layer(block, num_blocks, **kwarg): - """Make layers by stacking the same blocks. - - Args: - block (nn.module): nn.module class for basic block. - num_blocks (int): number of blocks. - - Returns: - nn.Sequential: Stacked blocks in nn.Sequential. - """ - layers = [] - for _ in range(num_blocks): - layers.append(block(**kwarg)) - return nn.Sequential(*layers) - - -class ResidualBlockNoBN(nn.Module): - """Residual block without BN. - - It has a style of: - - :: - - ---Conv-ReLU-Conv-+- - |________________| - - Args: - mid_channels (int): Channel number of intermediate features. - Default: 64. - res_scale (float): Used to scale the residual before addition. - Default: 1.0. - """ - - def __init__(self, mid_channels=64, res_scale=1.0): - super().__init__() - self.res_scale = res_scale - self.conv1 = nn.Conv2d(mid_channels, mid_channels, 3, 1, 1, bias=True) - self.conv2 = nn.Conv2d(mid_channels, mid_channels, 3, 1, 1, bias=True) - - self.relu = nn.ReLU(inplace=True) - - # if res_scale < 1.0, use the default initialization, as in EDSR. - # if res_scale = 1.0, use scaled kaiming_init, as in MSRResNet. - if res_scale == 1.0: - self.init_weights() - - def init_weights(self): - """Initialize weights for ResidualBlockNoBN. - - Initialization methods like `kaiming_init` are for VGG-style - modules. For modules with residual paths, using smaller std is - better for stability and performance. We empirically use 0.1. - See more details in "ESRGAN: Enhanced Super-Resolution Generative - Adversarial Networks" - """ - - for m in [self.conv1, self.conv2]: - default_init_weights(m, 0.1) - - def forward(self, x): - """Forward function. - - Args: - x (Tensor): Input tensor with shape (n, c, h, w). - - Returns: - Tensor: Forward results. - """ - - identity = x - out = self.conv2(self.relu(self.conv1(x))) - return identity + out * self.res_scale diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/upsample.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/upsample.py deleted file mode 100755 index f39ec1a9..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/common/upsample.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F - -from .sr_backbone_utils import default_init_weights - - -class PixelShufflePack(nn.Module): - """ Pixel Shuffle upsample layer. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Upsample ratio. - upsample_kernel (int): Kernel size of Conv layer to expand channels. - - Returns: - Upsampled feature map. - """ - - def __init__(self, in_channels, out_channels, scale_factor, - upsample_kernel): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.scale_factor = scale_factor - self.upsample_kernel = upsample_kernel - self.upsample_conv = nn.Conv2d( - self.in_channels, - self.out_channels * scale_factor * scale_factor, - self.upsample_kernel, - padding=(self.upsample_kernel - 1) // 2) - self.init_weights() - - def init_weights(self): - """Initialize weights for PixelShufflePack. 
- """ - default_init_weights(self, 1) - - def forward(self, x): - """Forward function for PixelShufflePack. - - Args: - x (Tensor): Input tensor with shape (n, c, h, w). - - Returns: - Tensor: Forward results. - """ - x = self.upsample_conv(x) - x = F.pixel_shuffle(x, self.scale_factor) - return x diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/__init__.py deleted file mode 100755 index 67fe3066..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. - -from .pixelwise_loss import CharbonnierLoss, L1Loss, MaskedTVLoss, MSELoss -from .utils import mask_reduce_loss, reduce_loss - diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/pixelwise_loss.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/pixelwise_loss.py deleted file mode 100755 index 2f2435b8..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/pixelwise_loss.py +++ /dev/null @@ -1,221 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..registry import LOSSES -from .utils import masked_loss - -_reduction_modes = ['none', 'mean', 'sum'] - - -@masked_loss -def l1_loss(pred, target): - """L1 loss. - - Args: - pred (Tensor): Prediction Tensor with shape (n, c, h, w). - target ([type]): Target Tensor with shape (n, c, h, w). - - Returns: - Tensor: Calculated L1 loss. - """ - return F.l1_loss(pred, target, reduction='none') - - -@masked_loss -def mse_loss(pred, target): - """MSE loss. - - Args: - pred (Tensor): Prediction Tensor with shape (n, c, h, w). - target ([type]): Target Tensor with shape (n, c, h, w). - - Returns: - Tensor: Calculated MSE loss. - """ - return F.mse_loss(pred, target, reduction='none') - - -@masked_loss -def charbonnier_loss(pred, target, eps=1e-12): - """Charbonnier loss. - - Args: - pred (Tensor): Prediction Tensor with shape (n, c, h, w). - target ([type]): Target Tensor with shape (n, c, h, w). - - Returns: - Tensor: Calculated Charbonnier loss. - """ - return torch.sqrt((pred - target)**2 + eps) - - -@LOSSES.register_module() -class L1Loss(nn.Module): - """L1 (mean absolute error, MAE) loss. - - Args: - loss_weight (float): Loss weight for L1 loss. Default: 1.0. - reduction (str): Specifies the reduction to apply to the output. - Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'. - sample_wise (bool): Whether calculate the loss sample-wise. This - argument only takes effect when `reduction` is 'mean' and `weight` - (argument of `forward()`) is not None. It will first reduce loss - with 'mean' per-sample, and then it means over all the samples. - Default: False. - """ - - def __init__(self, loss_weight=1.0, reduction='mean', sample_wise=False): - super().__init__() - if reduction not in ['none', 'mean', 'sum']: - raise ValueError(f'Unsupported reduction mode: {reduction}. ' - f'Supported ones are: {_reduction_modes}') - - self.loss_weight = loss_weight - self.reduction = reduction - self.sample_wise = sample_wise - - def forward(self, pred, target, weight=None, **kwargs): - """Forward Function. - - Args: - pred (Tensor): of shape (N, C, H, W). Predicted tensor. - target (Tensor): of shape (N, C, H, W). Ground truth tensor. 
- weight (Tensor, optional): of shape (N, C, H, W). Element-wise - weights. Default: None. - """ - return self.loss_weight * l1_loss( - pred, - target, - weight, - reduction=self.reduction, - sample_wise=self.sample_wise) - - -@LOSSES.register_module() -class MSELoss(nn.Module): - """MSE (L2) loss. - - Args: - loss_weight (float): Loss weight for MSE loss. Default: 1.0. - reduction (str): Specifies the reduction to apply to the output. - Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'. - sample_wise (bool): Whether calculate the loss sample-wise. This - argument only takes effect when `reduction` is 'mean' and `weight` - (argument of `forward()`) is not None. It will first reduces loss - with 'mean' per-sample, and then it means over all the samples. - Default: False. - """ - - def __init__(self, loss_weight=1.0, reduction='mean', sample_wise=False): - super().__init__() - if reduction not in ['none', 'mean', 'sum']: - raise ValueError(f'Unsupported reduction mode: {reduction}. ' - f'Supported ones are: {_reduction_modes}') - - self.loss_weight = loss_weight - self.reduction = reduction - self.sample_wise = sample_wise - - def forward(self, pred, target, weight=None, **kwargs): - """Forward Function. - - Args: - pred (Tensor): of shape (N, C, H, W). Predicted tensor. - target (Tensor): of shape (N, C, H, W). Ground truth tensor. - weight (Tensor, optional): of shape (N, C, H, W). Element-wise - weights. Default: None. - """ - return self.loss_weight * mse_loss( - pred, - target, - weight, - reduction=self.reduction, - sample_wise=self.sample_wise) - - -@LOSSES.register_module() -class CharbonnierLoss(nn.Module): - """Charbonnier loss (one variant of Robust L1Loss, a differentiable - variant of L1Loss). - - Described in "Deep Laplacian Pyramid Networks for Fast and Accurate - Super-Resolution". - - Args: - loss_weight (float): Loss weight for L1 loss. Default: 1.0. - reduction (str): Specifies the reduction to apply to the output. - Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'. - sample_wise (bool): Whether calculate the loss sample-wise. This - argument only takes effect when `reduction` is 'mean' and `weight` - (argument of `forward()`) is not None. It will first reduces loss - with 'mean' per-sample, and then it means over all the samples. - Default: False. - eps (float): A value used to control the curvature near zero. - Default: 1e-12. - """ - - def __init__(self, - loss_weight=1.0, - reduction='mean', - sample_wise=False, - eps=1e-12): - super().__init__() - if reduction not in ['none', 'mean', 'sum']: - raise ValueError(f'Unsupported reduction mode: {reduction}. ' - f'Supported ones are: {_reduction_modes}') - - self.loss_weight = loss_weight - self.reduction = reduction - self.sample_wise = sample_wise - self.eps = eps - - def forward(self, pred, target, weight=None, **kwargs): - """Forward Function. - - Args: - pred (Tensor): of shape (N, C, H, W). Predicted tensor. - target (Tensor): of shape (N, C, H, W). Ground truth tensor. - weight (Tensor, optional): of shape (N, C, H, W). Element-wise - weights. Default: None. - """ - return self.loss_weight * charbonnier_loss( - pred, - target, - weight, - eps=self.eps, - reduction=self.reduction, - sample_wise=self.sample_wise) - - -@LOSSES.register_module() -class MaskedTVLoss(L1Loss): - """Masked TV loss. - - Args: - loss_weight (float, optional): Loss weight. Defaults to 1.0. 
- """ - - def __init__(self, loss_weight=1.0): - super().__init__(loss_weight=loss_weight) - - def forward(self, pred, mask=None): - """Forward function. - - Args: - pred (torch.Tensor): Tensor with shape of (n, c, h, w). - mask (torch.Tensor, optional): Tensor with shape of (n, 1, h, w). - Defaults to None. - - Returns: - [type]: [description] - """ - y_diff = super().forward( - pred[:, :, :-1, :], pred[:, :, 1:, :], weight=mask[:, :, :-1, :]) - x_diff = super().forward( - pred[:, :, :, :-1], pred[:, :, :, 1:], weight=mask[:, :, :, :-1]) - - loss = x_diff + y_diff - - return loss diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/utils.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/utils.py deleted file mode 100755 index 2f536d92..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/losses/utils.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import torch.nn.functional as F - - -def reduce_loss(loss, reduction): - """Reduce loss as specified. - - Args: - loss (Tensor): Elementwise loss tensor. - reduction (str): Options are "none", "mean" and "sum". - - Returns: - Tensor: Reduced loss tensor. - """ - reduction_enum = F._Reduction.get_enum(reduction) - # none: 0, elementwise_mean:1, sum: 2 - if reduction_enum == 0: - return loss - if reduction_enum == 1: - return loss.mean() - - return loss.sum() - - -def mask_reduce_loss(loss, weight=None, reduction='mean', sample_wise=False): - """Apply element-wise weight and reduce loss. - - Args: - loss (Tensor): Element-wise loss. - weight (Tensor): Element-wise weights. Default: None. - reduction (str): Same as built-in losses of PyTorch. Options are - "none", "mean" and "sum". Default: 'mean'. - sample_wise (bool): Whether calculate the loss sample-wise. This - argument only takes effect when `reduction` is 'mean' and `weight` - (argument of `forward()`) is not None. It will first reduces loss - with 'mean' per-sample, and then it means over all the samples. - Default: False. - - Returns: - Tensor: Processed loss values. - """ - # if weight is specified, apply element-wise weight - if weight is not None: - assert weight.dim() == loss.dim() - assert weight.size(1) == 1 or weight.size(1) == loss.size(1) - loss = loss * weight - - # if weight is not specified or reduction is sum, just reduce the loss - if weight is None or reduction == 'sum': - loss = reduce_loss(loss, reduction) - # if reduction is mean, then compute mean over masked region - elif reduction == 'mean': - # expand weight from N1HW to NCHW - if weight.size(1) == 1: - weight = weight.expand_as(loss) - # small value to prevent division by zero - eps = 1e-12 - - # perform sample-wise mean - if sample_wise: - weight = weight.sum(dim=[1, 2, 3], keepdim=True) # NCHW to N111 - loss = (loss / (weight + eps)).sum() / weight.size(0) - # perform pixel-wise mean - else: - loss = loss.sum() / (weight.sum() + eps) - - return loss - - -def masked_loss(loss_func): - """Create a masked version of a given loss function. - - To use this decorator, the loss function must have the signature like - `loss_func(pred, target, **kwargs)`. The function only needs to compute - element-wise loss without any reduction. This decorator will add weight - and reduction arguments to the function. The decorated function will have - the signature like `loss_func(pred, target, weight=None, reduction='mean', - avg_factor=None, **kwargs)`. 
- - :Example: - - >>> import torch - >>> @masked_loss - >>> def l1_loss(pred, target): - >>> return (pred - target).abs() - - >>> pred = torch.Tensor([0, 2, 3]) - >>> target = torch.Tensor([1, 1, 1]) - >>> weight = torch.Tensor([1, 0, 1]) - - >>> l1_loss(pred, target) - tensor(1.3333) - >>> l1_loss(pred, target, weight) - tensor(1.5000) - >>> l1_loss(pred, target, reduction='none') - tensor([1., 1., 2.]) - >>> l1_loss(pred, target, weight, reduction='sum') - tensor(3.) - """ - - @functools.wraps(loss_func) - def wrapper(pred, - target, - weight=None, - reduction='mean', - sample_wise=False, - **kwargs): - # get element-wise loss - loss = loss_func(pred, target, **kwargs) - loss = mask_reduce_loss(loss, weight, reduction, sample_wise) - return loss - - return wrapper diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/registry.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/registry.py deleted file mode 100755 index 0a574b66..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/registry.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import MODELS as MMCV_MODELS -from mmcv.utils import Registry - -MODELS = Registry('model', parent=MMCV_MODELS) -BACKBONES = MODELS -COMPONENTS = MODELS -LOSSES = MODELS diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/__init__.py deleted file mode 100755 index 97becd5e..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. - -from .basic_restorer import BasicRestorer -from .basicvsr import BasicVSR - diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/basic_restorer.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/basic_restorer.py deleted file mode 100755 index 5114b8e9..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/basic_restorer.py +++ /dev/null @@ -1,210 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers -import os.path as osp - -import mmcv -from mmcv.runner import auto_fp16 - -from mmedit.core import psnr, ssim, tensor2img -from ..base import BaseModel -from ..builder import build_backbone, build_loss -from ..registry import MODELS - - -@MODELS.register_module() -class BasicRestorer(BaseModel): - """Basic model for image restoration. - - It must contain a generator that takes an image as inputs and outputs a - restored image. It also has a pixel-wise loss for training. - - The subclasses should overwrite the function `forward_train`, - `forward_test` and `train_step`. - - Args: - generator (dict): Config for the generator structure. - pixel_loss (dict): Config for pixel-wise loss. - train_cfg (dict): Config for training. Default: None. - test_cfg (dict): Config for testing. Default: None. - pretrained (str): Path for pretrained model. Default: None. 
- """ - allowed_metrics = {'PSNR': psnr, 'SSIM': ssim} - - def __init__(self, - generator, - pixel_loss, - train_cfg=None, - test_cfg=None, - pretrained=None): - super().__init__() - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - # support fp16 - self.fp16_enabled = False - - # generator - self.generator = build_backbone(generator) - self.init_weights(pretrained) - - # loss - self.pixel_loss = build_loss(pixel_loss) - - def init_weights(self, pretrained=None): - """Init weights for models. - - Args: - pretrained (str, optional): Path for pretrained weights. If given - None, pretrained weights will not be loaded. Defaults to None. - """ - self.generator.init_weights(pretrained) - - @auto_fp16(apply_to=('lq', )) - def forward(self, lq, gt=None, test_mode=False, **kwargs): - """Forward function. - - Args: - lq (Tensor): Input lq images. - gt (Tensor): Ground-truth image. Default: None. - test_mode (bool): Whether in test mode or not. Default: False. - kwargs (dict): Other arguments. - """ - - if test_mode: - return self.forward_test(lq, gt, **kwargs) - - return self.forward_train(lq, gt) - - def forward_train(self, lq, gt): - """Training forward function. - - Args: - lq (Tensor): LQ Tensor with shape (n, c, h, w). - gt (Tensor): GT Tensor with shape (n, c, h, w). - - Returns: - Tensor: Output tensor. - """ - losses = dict() - output = self.generator(lq) - loss_pix = self.pixel_loss(output, gt) - losses['loss_pix'] = loss_pix - outputs = dict( - losses=losses, - num_samples=len(gt.data), - results=dict(lq=lq.cpu(), gt=gt.cpu(), output=output.cpu())) - return outputs - - def evaluate(self, output, gt): - """Evaluation function. - - Args: - output (Tensor): Model output with shape (n, c, h, w). - gt (Tensor): GT Tensor with shape (n, c, h, w). - - Returns: - dict: Evaluation results. - """ - crop_border = self.test_cfg.crop_border - - output = tensor2img(output) - gt = tensor2img(gt) - - eval_result = dict() - for metric in self.test_cfg.metrics: - eval_result[metric] = self.allowed_metrics[metric](output, gt, - crop_border) - return eval_result - - def forward_test(self, - lq, - gt=None, - meta=None, - save_image=False, - save_path=None, - iteration=None): - """Testing forward function. - - Args: - lq (Tensor): LQ Tensor with shape (n, c, h, w). - gt (Tensor): GT Tensor with shape (n, c, h, w). Default: None. - save_image (bool): Whether to save image. Default: False. - save_path (str): Path to save image. Default: None. - iteration (int): Iteration for the saving image name. - Default: None. - - Returns: - dict: Output results. - """ - output = self.generator(lq) - if self.test_cfg is not None and self.test_cfg.get('metrics', None): - assert gt is not None, ( - 'evaluation with metrics must have gt images.') - results = dict(eval_result=self.evaluate(output, gt)) - else: - results = dict(lq=lq.cpu(), output=output.cpu()) - if gt is not None: - results['gt'] = gt.cpu() - - # save image - if save_image: - lq_path = meta[0]['lq_path'] - folder_name = osp.splitext(osp.basename(lq_path))[0] - if isinstance(iteration, numbers.Number): - save_path = osp.join(save_path, folder_name, - f'{folder_name}-{iteration + 1:06d}.png') - elif iteration is None: - save_path = osp.join(save_path, f'{folder_name}.png') - else: - raise ValueError('iteration should be number or None, ' - f'but got {type(iteration)}') - mmcv.imwrite(tensor2img(output), save_path) - - return results - - def forward_dummy(self, img): - """Used for computing network FLOPs. - - Args: - img (Tensor): Input image. 
- - Returns: - Tensor: Output image. - """ - out = self.generator(img) - return out - - def train_step(self, data_batch, optimizer): - """Train step. - - Args: - data_batch (dict): A batch of data. - optimizer (obj): Optimizer. - - Returns: - dict: Returned output. - """ - outputs = self(**data_batch, test_mode=False) - loss, log_vars = self.parse_losses(outputs.pop('losses')) - - # optimize - optimizer['generator'].zero_grad() - loss.backward() - optimizer['generator'].step() - - outputs.update({'log_vars': log_vars}) - return outputs - - def val_step(self, data_batch, **kwargs): - """Validation step. - - Args: - data_batch (dict): A batch of data. - kwargs (dict): Other arguments for ``val_step``. - - Returns: - dict: Returned output. - """ - output = self.forward_test(**data_batch, **kwargs) - return output diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/basicvsr.py b/cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/basicvsr.py deleted file mode 100755 index 1510c55e..00000000 --- a/cv/super_resolution/basicvsr/pytorch/mmedit/models/restorers/basicvsr.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers -import os.path as osp - -import mmcv -import numpy as np -import torch - -from mmedit.core import tensor2img -from ..registry import MODELS -from .basic_restorer import BasicRestorer - - -@MODELS.register_module() -class BasicVSR(BasicRestorer): - """BasicVSR model for video super-resolution. - - Note that this model is used for IconVSR. - - Paper: - BasicVSR: The Search for Essential Components in Video Super-Resolution - and Beyond, CVPR, 2021 - - Args: - generator (dict): Config for the generator structure. - pixel_loss (dict): Config for pixel-wise loss. - ensemble (dict): Config for ensemble. Default: None. - train_cfg (dict): Config for training. Default: None. - test_cfg (dict): Config for testing. Default: None. - pretrained (str): Path for pretrained model. Default: None. - """ - - def __init__(self, - generator, - pixel_loss, - ensemble=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - super().__init__(generator, pixel_loss, train_cfg, test_cfg, - pretrained) - - # fix pre-trained networks - self.fix_iter = train_cfg.get('fix_iter', 0) if train_cfg else 0 - self.is_weight_fixed = False - - # count training steps - self.register_buffer('step_counter', torch.zeros(1)) - - # ensemble - self.forward_ensemble = None - if ensemble is not None: - if ensemble['type'] == 'SpatialTemporalEnsemble': - from mmedit.models.common.ensemble import \ - SpatialTemporalEnsemble - is_temporal = ensemble.get('is_temporal_ensemble', False) - self.forward_ensemble = SpatialTemporalEnsemble(is_temporal) - else: - raise NotImplementedError( - 'Currently support only ' - '"SpatialTemporalEnsemble", but got type ' - f'[{ensemble["type"]}]') - - def check_if_mirror_extended(self, lrs): - """Check whether the input is a mirror-extended sequence. - - If mirror-extended, the i-th (i=0, ..., t-1) frame is equal to the - (t-1-i)-th frame. - - Args: - lrs (tensor): Input LR images with shape (n, t, c, h, w) - """ - - is_mirror_extended = False - if lrs.size(1) % 2 == 0: - lrs_1, lrs_2 = torch.chunk(lrs, 2, dim=1) - if torch.norm(lrs_1 - lrs_2.flip(1)) == 0: - is_mirror_extended = True - - return is_mirror_extended - - def train_step(self, data_batch, optimizer): - """Train step. - - Args: - data_batch (dict): A batch of data. - optimizer (obj): Optimizer. - - Returns: - dict: Returned output. 
- """ - # fix SPyNet and EDVR at the beginning - if self.step_counter < self.fix_iter: - if not self.is_weight_fixed: - self.is_weight_fixed = True - for k, v in self.generator.named_parameters(): - if 'spynet' in k or 'edvr' in k: - v.requires_grad_(False) - elif self.step_counter == self.fix_iter: - # train all the parameters - self.generator.requires_grad_(True) - - outputs = self(**data_batch, test_mode=False) - loss, log_vars = self.parse_losses(outputs.pop('losses')) - - # optimize - optimizer['generator'].zero_grad() - loss.backward() - optimizer['generator'].step() - - self.step_counter += 1 - - outputs.update({'log_vars': log_vars}) - return outputs - - def evaluate(self, output, gt): - """Evaluation function. - - If the output contains multiple frames, we compute the metric - one by one and take an average. - - Args: - output (Tensor): Model output with shape (n, t, c, h, w). - gt (Tensor): GT Tensor with shape (n, t, c, h, w). - - Returns: - dict: Evaluation results. - """ - crop_border = self.test_cfg.crop_border - convert_to = self.test_cfg.get('convert_to', None) - - eval_result = dict() - for metric in self.test_cfg.metrics: - if output.ndim == 5: # a sequence: (n, t, c, h, w) - avg = [] - for i in range(0, output.size(1)): - output_i = tensor2img(output[:, i, :, :, :]) - gt_i = tensor2img(gt[:, i, :, :, :]) - avg.append(self.allowed_metrics[metric]( - output_i, gt_i, crop_border, convert_to=convert_to)) - eval_result[metric] = np.mean(avg) - elif output.ndim == 4: # an image: (n, c, t, w), for Vimeo-90K-T - output_img = tensor2img(output) - gt_img = tensor2img(gt) - value = self.allowed_metrics[metric]( - output_img, gt_img, crop_border, convert_to=convert_to) - eval_result[metric] = value - - return eval_result - - def forward_test(self, - lq, - gt=None, - meta=None, - save_image=False, - save_path=None, - iteration=None): - """Testing forward function. - - Args: - lq (Tensor): LQ Tensor with shape (n, t, c, h, w). - gt (Tensor): GT Tensor with shape (n, t, c, h, w). Default: None. - save_image (bool): Whether to save image. Default: False. - save_path (str): Path to save image. Default: None. - iteration (int): Iteration for the saving image name. - Default: None. - - Returns: - dict: Output results. - """ - with torch.no_grad(): - if self.forward_ensemble is not None: - output = self.forward_ensemble(lq, self.generator) - else: - output = self.generator(lq) - - # If the GT is an image (i.e. the center frame), the output sequence is - # turned to an image. 
-        if gt is not None and gt.ndim == 4:
-            t = output.size(1)
-            if self.check_if_mirror_extended(lq):  # with mirror extension
-                output = 0.5 * (output[:, t // 4] + output[:, -1 - t // 4])
-            else:  # without mirror extension
-                output = output[:, t // 2]
-
-        if self.test_cfg is not None and self.test_cfg.get('metrics', None):
-            assert gt is not None, (
-                'evaluation with metrics must have gt images.')
-            results = dict(eval_result=self.evaluate(output, gt))
-        else:
-            results = dict(lq=lq.cpu(), output=output.cpu())
-            if gt is not None:
-                results['gt'] = gt.cpu()
-
-        # save image
-        if save_image:
-            if output.ndim == 4:  # an image, key = 000001/0000 (Vimeo-90K)
-                img_name = meta[0]['key'].replace('/', '_')
-                if isinstance(iteration, numbers.Number):
-                    save_path = osp.join(
-                        save_path, f'{img_name}-{iteration + 1:06d}.png')
-                elif iteration is None:
-                    save_path = osp.join(save_path, f'{img_name}.png')
-                else:
-                    raise ValueError('iteration should be number or None, '
-                                     f'but got {type(iteration)}')
-                mmcv.imwrite(tensor2img(output), save_path)
-            elif output.ndim == 5:  # a sequence, key = 000
-                folder_name = meta[0]['key'].split('/')[0]
-                for i in range(0, output.size(1)):
-                    if isinstance(iteration, numbers.Number):
-                        save_path_i = osp.join(
-                            save_path, folder_name,
-                            f'{i:08d}-{iteration + 1:06d}.png')
-                    elif iteration is None:
-                        save_path_i = osp.join(save_path, folder_name,
-                                               f'{i:08d}.png')
-                    else:
-                        raise ValueError('iteration should be number or None, '
-                                         f'but got {type(iteration)}')
-                    mmcv.imwrite(
-                        tensor2img(output[:, i, :, :, :]), save_path_i)
-
-        return results
diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/utils/__init__.py b/cv/super_resolution/basicvsr/pytorch/mmedit/utils/__init__.py
deleted file mode 100755
index 0521b4a1..00000000
--- a/cv/super_resolution/basicvsr/pytorch/mmedit/utils/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .cli import modify_args
-from .logger import get_root_logger
-from .setup_env import setup_multi_processes
-
-__all__ = ['get_root_logger', 'setup_multi_processes', 'modify_args']
diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/utils/cli.py b/cv/super_resolution/basicvsr/pytorch/mmedit/utils/cli.py
deleted file mode 100755
index a78b3271..00000000
--- a/cv/super_resolution/basicvsr/pytorch/mmedit/utils/cli.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import re
-import sys
-import warnings
-
-
-def modify_args():
-    for i, v in enumerate(sys.argv):
-        if i == 0:
-            assert v.endswith('.py')
-        elif re.match(r'--\w+_.*', v):
-            new_arg = v.replace('_', '-')
-            warnings.warn(
-                f'command line argument {v} is deprecated, '
-                f'please use {new_arg} instead.',
-                category=DeprecationWarning,
-            )
-            sys.argv[i] = new_arg
diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/utils/collect_env.py b/cv/super_resolution/basicvsr/pytorch/mmedit/utils/collect_env.py
deleted file mode 100755
index af22082c..00000000
--- a/cv/super_resolution/basicvsr/pytorch/mmedit/utils/collect_env.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmcv.utils import collect_env as collect_base_env
-from mmcv.utils import get_git_hash
-
-import mmedit
-
-
-def collect_env():
-    """Collect the information of the running environments."""
-    env_info = collect_base_env()
-    env_info['MMEditing'] = f'{mmedit.__version__}+{get_git_hash()[:7]}'
-
-    return env_info
-
-
-if __name__ == '__main__':
-    for name, val in collect_env().items():
-        print('{}: {}'.format(name, val))
diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/utils/logger.py b/cv/super_resolution/basicvsr/pytorch/mmedit/utils/logger.py
deleted file mode 100755
index cdc340f1..00000000
--- a/cv/super_resolution/basicvsr/pytorch/mmedit/utils/logger.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-
-from mmcv.utils import get_logger
-
-
-def get_root_logger(log_file=None, log_level=logging.INFO):
-    """Get the root logger.
-
-    The logger will be initialized if it has not been initialized. By default a
-    StreamHandler will be added. If `log_file` is specified, a FileHandler will
-    also be added. The name of the root logger is the top-level package name,
-    e.g., "mmedit".
-
-    Args:
-        log_file (str | None): The log filename. If specified, a FileHandler
-            will be added to the root logger.
-        log_level (int): The root logger level. Note that only the process of
-            rank 0 is affected, while other processes will set the level to
-            "Error" and be silent most of the time.
-
-    Returns:
-        logging.Logger: The root logger.
-    """
-    # root logger name: mmedit
-    logger = get_logger(__name__.split('.')[0], log_file, log_level)
-    return logger
diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/utils/setup_env.py b/cv/super_resolution/basicvsr/pytorch/mmedit/utils/setup_env.py
deleted file mode 100755
index 21def2f0..00000000
--- a/cv/super_resolution/basicvsr/pytorch/mmedit/utils/setup_env.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os
-import platform
-import warnings
-
-import cv2
-import torch.multiprocessing as mp
-
-
-def setup_multi_processes(cfg):
-    """Setup multi-processing environment variables."""
-    # set multi-process start method as `fork` to speed up the training
-    if platform.system() != 'Windows':
-        mp_start_method = cfg.get('mp_start_method', 'fork')
-        current_method = mp.get_start_method(allow_none=True)
-        if current_method is not None and current_method != mp_start_method:
-            warnings.warn(
-                f'Multi-processing start method `{mp_start_method}` is '
-                f'different from the previous setting `{current_method}`.'
-                f'It will be force set to `{mp_start_method}`. You can change '
-                f'this behavior by changing `mp_start_method` in your config.')
-            mp.set_start_method(mp_start_method, force=True)
-
-    # disable opencv multithreading to avoid system being overloaded
-    opencv_num_threads = cfg.get('opencv_num_threads', 0)
-    cv2.setNumThreads(opencv_num_threads)
-
-    # setup OMP threads
-    # This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py # noqa
-    if 'OMP_NUM_THREADS' not in os.environ and cfg.data.workers_per_gpu > 1:
-        omp_num_threads = 1
-        warnings.warn(
-            f'Setting OMP_NUM_THREADS environment variable for each process '
-            f'to be {omp_num_threads} in default, to avoid your system being '
-            f'overloaded, please further tune the variable for optimal '
-            f'performance in your application as needed.')
-        os.environ['OMP_NUM_THREADS'] = str(omp_num_threads)
-
-    # setup MKL threads
-    if 'MKL_NUM_THREADS' not in os.environ and cfg.data.workers_per_gpu > 1:
-        mkl_num_threads = 1
-        warnings.warn(
-            f'Setting MKL_NUM_THREADS environment variable for each process '
-            f'to be {mkl_num_threads} in default, to avoid your system being '
-            f'overloaded, please further tune the variable for optimal '
-            f'performance in your application as needed.')
-        os.environ['MKL_NUM_THREADS'] = str(mkl_num_threads)
diff --git a/cv/super_resolution/basicvsr/pytorch/mmedit/version.py b/cv/super_resolution/basicvsr/pytorch/mmedit/version.py
deleted file mode 100755
index ffec9df7..00000000
--- a/cv/super_resolution/basicvsr/pytorch/mmedit/version.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Open-MMLab. All rights reserved.
-
-__version__ = '0.15.0'
-
-
-def parse_version_info(version_str):
-    ver_info = []
-    for x in version_str.split('.'):
-        if x.isdigit():
-            ver_info.append(int(x))
-        elif x.find('rc') != -1:
-            patch_version = x.split('rc')
-            ver_info.append(int(patch_version[0]))
-            ver_info.append(f'rc{patch_version[1]}')
-    return tuple(ver_info)
-
-
-version_info = parse_version_info(__version__)
diff --git a/cv/super_resolution/basicvsr/pytorch/requirements.txt b/cv/super_resolution/basicvsr/pytorch/requirements.txt
deleted file mode 100755
index 02481cb6..00000000
--- a/cv/super_resolution/basicvsr/pytorch/requirements.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-yapf
-addict
-opencv-python<=4.5.4.60
diff --git a/cv/super_resolution/basicvsr/pytorch/setup.py b/cv/super_resolution/basicvsr/pytorch/setup.py
deleted file mode 100755
index 3cc9f7a4..00000000
--- a/cv/super_resolution/basicvsr/pytorch/setup.py
+++ /dev/null
@@ -1,354 +0,0 @@
-# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-import glob -import os -import re -from pkg_resources import DistributionNotFound, get_distribution -from setuptools import find_packages, setup - -EXT_TYPE = '' -try: - import torch - if torch.__version__ == 'parrots': - from parrots.utils.build_extension import BuildExtension - EXT_TYPE = 'parrots' - else: - from torch.utils.cpp_extension import BuildExtension - EXT_TYPE = 'pytorch' - cmd_class = {'build_ext': BuildExtension} -except ModuleNotFoundError: - cmd_class = {} - print('Skip building ext ops due to the absence of torch.') - - -def choose_requirement(primary, secondary): - """If some version of primary requirement installed, return primary, else - return secondary.""" - try: - name = re.split(r'[!<>=]', primary)[0] - get_distribution(name) - except DistributionNotFound: - return secondary - - return str(primary) - - -def get_version(): - version_file = 'mmcv/version.py' - with open(version_file, 'r', encoding='utf-8') as f: - exec(compile(f.read(), version_file, 'exec')) - version = locals()['__version__'] - local_version_identifier = os.environ.get('MMCV_LOCAL_VERSION_IDENTIFIER', '') - if local_version_identifier != '': - version += '+' + local_version_identifier - return version - - -def parse_requirements(fname='requirements/runtime.txt', with_version=True): - """Parse the package dependencies listed in a requirements file but strips - specific versioning information. - - Args: - fname (str): path to requirements file - with_version (bool, default=False): if True include version specs - - Returns: - List[str]: list of requirements items - - CommandLine: - python -c "import setup; print(setup.parse_requirements())" - """ - import sys - from os.path import exists - require_fpath = fname - - def parse_line(line): - """Parse information from a line in a requirements text file.""" - if line.startswith('-r '): - # Allow specifying requirements in other files - target = line.split(' ')[1] - for info in parse_require_file(target): - yield info - else: - info = {'line': line} - if line.startswith('-e '): - info['package'] = line.split('#egg=')[1] - else: - # Remove versioning from the package - pat = '(' + '|'.join(['>=', '==', '>']) + ')' - parts = re.split(pat, line, maxsplit=1) - parts = [p.strip() for p in parts] - - info['package'] = parts[0] - if len(parts) > 1: - op, rest = parts[1:] - if ';' in rest: - # Handle platform specific dependencies - # http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-platform-specific-dependencies - version, platform_deps = map(str.strip, - rest.split(';')) - info['platform_deps'] = platform_deps - else: - version = rest # NOQA - info['version'] = (op, version) - yield info - - def parse_require_file(fpath): - with open(fpath, 'r') as f: - for line in f.readlines(): - line = line.strip() - if line and not line.startswith('#'): - for info in parse_line(line): - yield info - - def gen_packages_items(): - if exists(require_fpath): - for info in parse_require_file(require_fpath): - parts = [info['package']] - if with_version and 'version' in info: - parts.extend(info['version']) - if not sys.version.startswith('3.4'): - # apparently package_deps are broken in 3.4 - platform_deps = info.get('platform_deps') - if platform_deps is not None: - parts.append(';' + platform_deps) - item = ''.join(parts) - yield item - - packages = list(gen_packages_items()) - return packages - - -install_requires = parse_requirements() - -try: - # OpenCV installed via conda. 
- import cv2 # NOQA: F401 - major, minor, *rest = cv2.__version__.split('.') - if int(major) < 3: - raise RuntimeError( - f'OpenCV >=3 is required but {cv2.__version__} is installed') -except ImportError: - # If first not installed install second package - CHOOSE_INSTALL_REQUIRES = [('opencv-python-headless>=3', - 'opencv-python>=3')] - for main, secondary in CHOOSE_INSTALL_REQUIRES: - install_requires.append(choose_requirement(main, secondary)) - - -def get_extensions(): - extensions = [] - - if os.getenv('MMCV_WITH_TRT', '0') != '0': - ext_name = 'mmcv._ext_trt' - from torch.utils.cpp_extension import include_paths, library_paths - library_dirs = [] - libraries = [] - include_dirs = [] - tensorrt_path = os.getenv('TENSORRT_DIR', '0') - tensorrt_lib_path = glob.glob( - os.path.join(tensorrt_path, 'targets', '*', 'lib'))[0] - library_dirs += [tensorrt_lib_path] - libraries += ['nvinfer', 'nvparsers', 'nvinfer_plugin'] - libraries += ['cudart'] - define_macros = [] - extra_compile_args = {'cxx': []} - - include_path = os.path.abspath('./mmcv/ops/csrc/common/cuda') - include_trt_path = os.path.abspath('./mmcv/ops/csrc/tensorrt') - include_dirs.append(include_path) - include_dirs.append(include_trt_path) - include_dirs.append(os.path.join(tensorrt_path, 'include')) - include_dirs += include_paths(cuda=True) - - op_files = glob.glob('./mmcv/ops/csrc/tensorrt/plugins/*') - define_macros += [('MMCV_WITH_CUDA', None)] - define_macros += [('MMCV_WITH_TRT', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - library_dirs += library_paths(cuda=True) - - from setuptools import Extension - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - language='c++', - library_dirs=library_dirs, - libraries=libraries) - extensions.append(ext_ops) - - if os.getenv('MMCV_WITH_OPS', '0') == '0': - return extensions - - if EXT_TYPE == 'parrots': - ext_name = 'mmcv._ext' - from parrots.utils.build_extension import Extension - # new parrots op impl do not use MMCV_USE_PARROTS - # define_macros = [('MMCV_USE_PARROTS', None)] - define_macros = [] - include_dirs = [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/cuda/*.cu') +\ - glob.glob('./mmcv/ops/csrc/parrots/*.cpp') - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/cuda')) - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args = { - 'nvcc': [cuda_args] if cuda_args else [], - 'cxx': [], - } - if torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1': - define_macros += [('MMCV_WITH_CUDA', None)] - extra_compile_args['nvcc'] += [ - '-D__CUDA_NO_HALF_OPERATORS__', - '-D__CUDA_NO_HALF_CONVERSIONS__', - '-D__CUDA_NO_HALF2_OPERATORS__', - ] - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - cuda=True, - pytorch=True) - extensions.append(ext_ops) - elif EXT_TYPE == 'pytorch': - ext_name = 'mmcv._ext' - from torch.utils.cpp_extension import CppExtension, CUDAExtension - - # prevent ninja from using too many resources - try: - import psutil - num_cpu = len(psutil.Process().cpu_affinity()) - cpu_use = max(4, num_cpu - 1) - except (ModuleNotFoundError, AttributeError): - cpu_use = 4 - - os.environ.setdefault('MAX_JOBS', str(cpu_use)) - define_macros = [] - extra_compile_args = {'cxx': []} - 
include_dirs = [] - - is_rocm_pytorch = False - try: - from torch.utils.cpp_extension import ROCM_HOME - is_rocm_pytorch = True if ((torch.version.hip is not None) and - (ROCM_HOME is not None)) else False - except ImportError: - pass - - project_dir = 'mmcv/ops/csrc/' - if is_rocm_pytorch: - from torch.utils.hipify import hipify_python - - hipify_python.hipify( - project_directory=project_dir, - output_directory=project_dir, - includes='mmcv/ops/csrc/*', - show_detailed=True, - is_pytorch_extension=True, - ) - define_macros += [('MMCV_WITH_CUDA', None)] - define_macros += [('HIP_DIFF', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/hip/*') - extension = CUDAExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/hip')) - elif torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1': - define_macros += [('MMCV_WITH_CUDA', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - op_files = glob.glob('./mmcv/ops/csrc/pytorch/*.cpp') + \ - glob.glob('./mmcv/ops/csrc/pytorch/cuda/*.cu') - extension = CUDAExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common/cuda')) - else: - print(f'Compiling {ext_name} without CUDA') - op_files = glob.glob('./mmcv/ops/csrc/pytorch/*.cpp') - extension = CppExtension - include_dirs.append(os.path.abspath('./mmcv/ops/csrc/common')) - - ext_ops = extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args) - extensions.append(ext_ops) - - if EXT_TYPE == 'pytorch' and os.getenv('MMCV_WITH_ORT', '0') != '0': - ext_name = 'mmcv._ext_ort' - from torch.utils.cpp_extension import library_paths, include_paths - import onnxruntime - library_dirs = [] - libraries = [] - include_dirs = [] - ort_path = os.getenv('ONNXRUNTIME_DIR', '0') - library_dirs += [os.path.join(ort_path, 'lib')] - libraries.append('onnxruntime') - define_macros = [] - extra_compile_args = {'cxx': []} - - include_path = os.path.abspath('./mmcv/ops/csrc/onnxruntime') - include_dirs.append(include_path) - include_dirs.append(os.path.join(ort_path, 'include')) - - op_files = glob.glob('./mmcv/ops/csrc/onnxruntime/cpu/*') - if onnxruntime.get_device() == 'GPU' or os.getenv('FORCE_CUDA', - '0') == '1': - define_macros += [('MMCV_WITH_CUDA', None)] - cuda_args = os.getenv('MMCV_CUDA_ARGS') - extra_compile_args['nvcc'] = [cuda_args] if cuda_args else [] - op_files += glob.glob('./mmcv/ops/csrc/onnxruntime/gpu/*') - include_dirs += include_paths(cuda=True) - library_dirs += library_paths(cuda=True) - else: - include_dirs += include_paths(cuda=False) - library_dirs += library_paths(cuda=False) - - from setuptools import Extension - ext_ops = Extension( - name=ext_name, - sources=op_files, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - language='c++', - library_dirs=library_dirs, - libraries=libraries) - extensions.append(ext_ops) - - return extensions - - -setup( - name='mmcv' if os.getenv('MMCV_WITH_OPS', '0') == '0' else 'mmcv-full', - version=get_version(), - description='OpenMMLab Computer Vision Foundation', - keywords='computer vision', - packages=find_packages(), - include_package_data=True, - classifiers=[ - 'Development Status :: 4 - Beta', - 'License :: OSI Approved :: Apache Software 
License', - 'Operating System :: OS Independent', - 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.6', - 'Programming Language :: Python :: 3.7', - 'Programming Language :: Python :: 3.8', - 'Programming Language :: Python :: 3.9', - 'Topic :: Utilities', - ], - url='https://github.com/open-mmlab/mmcv', - author='MMCV Contributors', - author_email='openmmlab@gmail.com', - setup_requires=[], - tests_require=['pytest'], - install_requires=install_requires, - ext_modules=get_extensions(), - cmdclass=cmd_class, - zip_safe=False) diff --git a/cv/super_resolution/basicvsr/pytorch/train.py b/cv/super_resolution/basicvsr/pytorch/train.py deleted file mode 100755 index d85aabca..00000000 --- a/cv/super_resolution/basicvsr/pytorch/train.py +++ /dev/null @@ -1,172 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. - -import argparse -import copy -import os -import os.path as osp -import time - -import mmcv -import torch -import torch.distributed as dist -from mmcv import Config, DictAction -from mmcv.runner import init_dist - -from mmedit import __version__ -from mmedit.apis import init_random_seed, set_random_seed, train_model -from mmedit.datasets import build_dataset -from mmedit.models import build_model -from mmedit.utils import collect_env, get_root_logger, setup_multi_processes - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train an editor') - parser.add_argument('config', help='train config file path') - parser.add_argument('--work-dir', help='the dir to save logs and models') - parser.add_argument( - '--resume-from', help='the checkpoint file to resume from') - parser.add_argument( - '--no-validate', - action='store_true', - help='whether not to evaluate the checkpoint during training') - parser.add_argument( - '--gpus', - type=int, - default=1, - help='number of gpus to use ' - '(only applicable to non-distributed training)') - parser.add_argument('--seed', type=int, default=None, help='random seed') - parser.add_argument( - '--diff_seed', - action='store_true', - help='Whether or not set different seeds for different ranks') - parser.add_argument( - '--deterministic', - action='store_true', - help='whether to set deterministic options for CUDNN backend.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. 
key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local_rank', type=int, default=0) - parser.add_argument( - '--autoscale-lr', - action='store_true', - help='automatically scale lr with the number of gpus') - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - # set multi-process settings - setup_multi_processes(cfg) - - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - # update configs according to CLI args - if args.work_dir is not None: - cfg.work_dir = args.work_dir - if args.resume_from is not None: - cfg.resume_from = args.resume_from - cfg.gpus = args.gpus - - if args.autoscale_lr: - # apply the linear scaling rule (https://arxiv.org/abs/1706.02677) - cfg.optimizer['lr'] = cfg.optimizer['lr'] * cfg.gpus / 8 - - # init distributed env first, since logger depends on the dist info. - if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - - # create work_dir - mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir)) - # init the logger before other steps - timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime()) - log_file = osp.join(cfg.work_dir, f'{timestamp}.log') - logger = get_root_logger(log_file=log_file, log_level=cfg.log_level) - - # log env info - env_info_dict = collect_env.collect_env() - env_info = '\n'.join([f'{k}: {v}' for k, v in env_info_dict.items()]) - dash_line = '-' * 60 + '\n' - logger.info('Environment info:\n' + dash_line + env_info + '\n' + - dash_line) - - # log some basic info - logger.info('Distributed training: {}'.format(distributed)) - logger.info('mmedit Version: {}'.format(__version__)) - logger.info('Config:\n{}'.format(cfg.text)) - - # set random seeds - seed = init_random_seed(args.seed) - seed = seed + dist.get_rank() if args.diff_seed else seed - logger.info('Set random seed to {}, deterministic: {}'.format( - seed, args.deterministic)) - set_random_seed(seed, deterministic=args.deterministic) - cfg.seed = seed - - model = build_model( - cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg) - - datasets = [build_dataset(cfg.data.train)] - if len(cfg.workflow) == 2: - val_dataset = copy.deepcopy(cfg.data.val) - val_dataset.pipeline = cfg.data.train.pipeline - datasets.append(build_dataset(val_dataset)) - if cfg.checkpoint_config is not None: - # save version, config file content and class names in - # checkpoints as meta data - cfg.checkpoint_config.meta = dict( - mmedit_version=__version__, - config=cfg.text, - ) - - # meta information - meta = dict() - if cfg.get('exp_name', None) is None: - cfg['exp_name'] = osp.splitext(osp.basename(cfg.work_dir))[0] - meta['exp_name'] = cfg.exp_name - meta['mmedit Version'] = __version__ - meta['seed'] = seed - meta['env_info'] = env_info - - # add an attribute for visualization convenience - train_model( - model, - datasets, - cfg, - distributed=distributed, - validate=(not args.no_validate), - timestamp=timestamp, - meta=meta) - - -if __name__ == '__main__': - main() -- Gitee From 1d3fc5bdbf3ef7830c600a64b8129364639e1420 Mon Sep 17 00:00:00 2001 
From: "hongliang.yuan" Date: Wed, 5 Mar 2025 16:13:29 +0800 Subject: [PATCH 7/7] update PointPillars and delete useless code --- .../pointpillars/pytorch/.gitignore | 11 - .../pointpillars/pytorch/README.md | 61 +- .../pointpillars/pytorch/__init__.py | 0 .../pointpillars/pytorch/dataset/__init__.py | 17 - .../pointpillars/pytorch/dataset/data_aug.py | 357 --------- .../pytorch/dataset/dataloader.py | 77 -- .../pointpillars/pytorch/dataset/kitti.py | 144 ---- .../pointpillars/pytorch/evaluate.py | 386 --------- .../pytorch/figures/img_3dbbox_000134.png | Bin 807160 -> 0 bytes .../pytorch/figures/pc_pred_000134.png | Bin 277584 -> 0 bytes .../pointpillars/pytorch/loss/__init__.py | 1 - .../pointpillars/pytorch/loss/loss.py | 66 -- .../pointpillars/pytorch/misc/log.md | 103 --- .../pointpillars/pytorch/misc/vis_data_gt.py | 84 -- .../pointpillars/pytorch/model/__init__.py | 2 - .../pointpillars/pytorch/model/anchors.py | 231 ------ .../pytorch/model/pointpillars.py | 427 ---------- .../pointpillars/pytorch/ops/__init__.py | 2 - .../pointpillars/pytorch/ops/iou3d/iou3d.cpp | 210 ----- .../pytorch/ops/iou3d/iou3d_kernel.cu | 439 ----------- .../pointpillars/pytorch/ops/iou3d_module.py | 89 --- .../pointpillars/pytorch/ops/setup.py | 25 - .../pointpillars/pytorch/ops/voxel_module.py | 133 ---- .../pytorch/ops/voxelization/voxelization.cpp | 10 - .../pytorch/ops/voxelization/voxelization.h | 69 -- .../ops/voxelization/voxelization_cpu.cpp | 173 ---- .../ops/voxelization/voxelization_cuda.cu | 530 ------------- .../pointpillars/pytorch/pre_process_kitti.py | 160 ---- .../pointpillars/pytorch/requirements.txt | 9 - cv/3d_detection/pointpillars/pytorch/test.py | 138 ---- cv/3d_detection/pointpillars/pytorch/train.py | 217 ----- .../pointpillars/pytorch/train_dist.py | 255 ------ .../pointpillars/pytorch/utils/__init__.py | 9 - .../pointpillars/pytorch/utils/io.py | 109 --- .../pointpillars/pytorch/utils/process.py | 745 ------------------ .../pointpillars/pytorch/utils/viewpoint.json | 30 - .../pointpillars/pytorch/utils/vis_o3d.py | 122 --- 37 files changed, 14 insertions(+), 5427 deletions(-) delete mode 100755 cv/3d_detection/pointpillars/pytorch/.gitignore delete mode 100755 cv/3d_detection/pointpillars/pytorch/__init__.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/dataset/__init__.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/dataset/data_aug.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/dataset/dataloader.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/dataset/kitti.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/evaluate.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/figures/img_3dbbox_000134.png delete mode 100755 cv/3d_detection/pointpillars/pytorch/figures/pc_pred_000134.png delete mode 100755 cv/3d_detection/pointpillars/pytorch/loss/__init__.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/loss/loss.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/misc/log.md delete mode 100755 cv/3d_detection/pointpillars/pytorch/misc/vis_data_gt.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/model/__init__.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/model/anchors.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/model/pointpillars.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/ops/__init__.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/ops/iou3d/iou3d.cpp delete mode 100755 cv/3d_detection/pointpillars/pytorch/ops/iou3d/iou3d_kernel.cu delete mode 
100755 cv/3d_detection/pointpillars/pytorch/ops/iou3d_module.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/ops/setup.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/ops/voxel_module.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization.cpp delete mode 100755 cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization.h delete mode 100755 cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization_cpu.cpp delete mode 100755 cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization_cuda.cu delete mode 100755 cv/3d_detection/pointpillars/pytorch/pre_process_kitti.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/requirements.txt delete mode 100755 cv/3d_detection/pointpillars/pytorch/test.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/train.py delete mode 100644 cv/3d_detection/pointpillars/pytorch/train_dist.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/utils/__init__.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/utils/io.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/utils/process.py delete mode 100755 cv/3d_detection/pointpillars/pytorch/utils/viewpoint.json delete mode 100755 cv/3d_detection/pointpillars/pytorch/utils/vis_o3d.py diff --git a/cv/3d_detection/pointpillars/pytorch/.gitignore b/cv/3d_detection/pointpillars/pytorch/.gitignore deleted file mode 100755 index 20ee3d04..00000000 --- a/cv/3d_detection/pointpillars/pytorch/.gitignore +++ /dev/null @@ -1,11 +0,0 @@ -__pycache__ -build/ -*.egg-info -*.so -*_logs* -tmp/ -results/ -data/ -pillar_logs/ -pretrained/ -dataset/ImageSets/ diff --git a/cv/3d_detection/pointpillars/pytorch/README.md b/cv/3d_detection/pointpillars/pytorch/README.md index 897b2506..29d0d0c3 100755 --- a/cv/3d_detection/pointpillars/pytorch/README.md +++ b/cv/3d_detection/pointpillars/pytorch/README.md @@ -7,16 +7,19 @@ A Simple PointPillars PyTorch Implenmentation for 3D Lidar(KITTI) Detection. - Only one detection network (PointPillars) was implemented in this repo, so the code may be more easy to read. - Sincere thanks for the great open-souce architectures [mmcv](https://github.com/open-mmlab/mmcv), [mmdet](https://github.com/open-mmlab/mmdetection) and [mmdet3d](https://github.com/open-mmlab/mmdetection3d), which helps me to learn 3D detetion and implement this repo. -## Detection Visualization - -![](./figures/pc_pred_000134.png) -![](./figures/img_3dbbox_000134.png) - ## [Compile] - -``` -cd ops -python3 setup.py develop +```bash +# Install libGL +## CentOS +yum install -y mesa-libGL +## Ubuntu +apt install -y libgl1-mesa-glx + +git clone https://github.com/zhulf0804/PointPillars.git +cd PointPillars/ +git checkout 620e6b0d07e4cb37b7b0114f26b934e8be92a0ba +python3 setup.py build_ext --inplace +pip install . ``` ## [Datasets] @@ -82,45 +85,9 @@ python3 setup.py develop ## [Training] ### Single GPU training -``` +```bash python3 train.py --data_root your_path_to_kitti ``` -### Multiple GPU training -``` -python3 -m torch.distributed.launch --nproc_per_node 8 train_dist.py --data_root your_path_to_kitti -``` -## [Evaluation] - -``` -python3 evaluate.py --ckpt pretrained/your_weights.pth --data_root your_path_to_kitti -``` - -## [Test] - -``` -# 1. infer and visualize point cloud detection -python3 test.py --ckpt pretrained/your_weights.pth --pc_path your_pc_path - -# 2. infer and visualize point cloud detection and gound truth. 
-python3 test.py --ckpt pretrained/your_weights.pth --pc_path your_pc_path --calib_path your_calib_path --gt_path your_gt_path - -# 3. infer and visualize point cloud & image detection -python3 test.py --ckpt pretrained/your_weights.pth --pc_path your_pc_path --calib_path your_calib_path --img_path your_img_path - - -e.g. [infer on val set 000134] - -python3 test.py --ckpt pretrained/your_weights.pth --pc_path /home/lifa/data/KITTI/training/velodyne_reduced/000134.bin - -or - -python3 test.py --ckpt pretrained/your_weights.pth --pc_path data/kitti/training/velodyne_reduced/000134.bin --calib_path data/kitti/training/calib/000134.txt --img_path data/kitti/training/image_2/000134.png --gt_path data/kitti/training/label_2/000134.txt - -``` - -## Acknowledements - -Fast Encoders for Object Detection from Point Clouds](https://arxiv.org/abs/1812.05784) ## Reference -[PointPillars](https://github.com/zhulf0804/PointPillars/tree/b9948e73505c8d6bfa631ffdf76c7148e82c5942) \ No newline at end of file +[PointPillars](https://github.com/zhulf0804/PointPillars/tree/620e6b0d07e4cb37b7b0114f26b934e8be92a0ba) \ No newline at end of file diff --git a/cv/3d_detection/pointpillars/pytorch/__init__.py b/cv/3d_detection/pointpillars/pytorch/__init__.py deleted file mode 100755 index e69de29b..00000000 diff --git a/cv/3d_detection/pointpillars/pytorch/dataset/__init__.py b/cv/3d_detection/pointpillars/pytorch/dataset/__init__.py deleted file mode 100755 index fbc52c3d..00000000 --- a/cv/3d_detection/pointpillars/pytorch/dataset/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. -from .data_aug import point_range_filter, data_augment -from .kitti import Kitti -from .dataloader import get_dataloader, get_dataloader_dist diff --git a/cv/3d_detection/pointpillars/pytorch/dataset/data_aug.py b/cv/3d_detection/pointpillars/pytorch/dataset/data_aug.py deleted file mode 100755 index 7e060feb..00000000 --- a/cv/3d_detection/pointpillars/pytorch/dataset/data_aug.py +++ /dev/null @@ -1,357 +0,0 @@ -import copy -import numba -import numpy as np -import os -import pdb -from utils import bbox3d2bevcorners, box_collision_test, read_points, \ - remove_pts_in_bboxes, limit_period - - -def dbsample(CLASSES, data_root, data_dict, db_sampler, sample_groups): - ''' - CLASSES: dict(Pedestrian=0, Cyclist=1, Car=2) - data_root: str, data root - data_dict: dict(pts, gt_bboxes_3d, gt_labels, gt_names, difficulty) - db_infos: dict(Pedestrian, Cyclist, Car, ...) 
- return: data_dict - ''' - pts, gt_bboxes_3d = data_dict['pts'], data_dict['gt_bboxes_3d'] - gt_labels, gt_names = data_dict['gt_labels'], data_dict['gt_names'] - gt_difficulty = data_dict['difficulty'] - image_info, calib_info = data_dict['image_info'], data_dict['calib_info'] - - sampled_pts, sampled_names, sampled_labels = [], [], [] - sampled_bboxes, sampled_difficulty = [], [] - - avoid_coll_boxes = copy.deepcopy(gt_bboxes_3d) - for name, v in sample_groups.items(): - # 1. calculate sample numbers - sampled_num = v - np.sum(gt_names == name) - if sampled_num <= 0: - continue - - # 2. sample databases bboxes - sampled_cls_list = db_sampler[name].sample(sampled_num) - sampled_cls_bboxes = np.array([item['box3d_lidar'] for item in sampled_cls_list], dtype=np.float32) - - # 3. box_collision_test - avoid_coll_boxes_bv_corners = bbox3d2bevcorners(avoid_coll_boxes) - sampled_cls_bboxes_bv_corners = bbox3d2bevcorners(sampled_cls_bboxes) - coll_query_matrix = np.concatenate([avoid_coll_boxes_bv_corners, sampled_cls_bboxes_bv_corners], axis=0) - coll_mat = box_collision_test(coll_query_matrix, coll_query_matrix) - n_gt, tmp_bboxes = len(avoid_coll_boxes_bv_corners), [] - for i in range(n_gt, len(coll_mat)): - if any(coll_mat[i]): - coll_mat[i] = False - coll_mat[:, i] = False - else: - cur_sample = sampled_cls_list[i - n_gt] - pt_path = os.path.join(data_root, cur_sample['path']) - sampled_pts_cur = read_points(pt_path) - sampled_pts_cur[:, :3] += cur_sample['box3d_lidar'][:3] - sampled_pts.append(sampled_pts_cur) - sampled_names.append(cur_sample['name']) - sampled_labels.append(CLASSES[cur_sample['name']]) - sampled_bboxes.append(cur_sample['box3d_lidar']) - tmp_bboxes.append(cur_sample['box3d_lidar']) - sampled_difficulty.append(cur_sample['difficulty']) - if len(tmp_bboxes) == 0: - tmp_bboxes = np.array(tmp_bboxes).reshape(-1, 7) - else: - tmp_bboxes = np.array(tmp_bboxes) - avoid_coll_boxes = np.concatenate([avoid_coll_boxes, tmp_bboxes], axis=0) - - # merge sampled database - # remove raw points in sampled_bboxes firstly - pts = remove_pts_in_bboxes(pts, np.stack(sampled_bboxes, axis=0)) - # pts = np.concatenate([pts, np.concatenate(sampled_pts, axis=0)], axis=0) - pts = np.concatenate([np.concatenate(sampled_pts, axis=0), pts], axis=0) - gt_bboxes_3d = avoid_coll_boxes.astype(np.float32) - gt_labels = np.concatenate([gt_labels, np.array(sampled_labels)], axis=0) - gt_names = np.concatenate([gt_names, np.array(sampled_names)], axis=0) - difficulty = np.concatenate([gt_difficulty, np.array(sampled_difficulty)], axis=0) - data_dict = { - 'pts': pts, - 'gt_bboxes_3d': gt_bboxes_3d, - 'gt_labels': gt_labels, - 'gt_names': gt_names, - 'difficulty': difficulty, - 'image_info': image_info, - 'calib_info': calib_info - } - return data_dict - - -@numba.jit(nopython=True) -def object_noise_core(pts, gt_bboxes_3d, bev_corners, trans_vec, rot_angle, rot_mat, masks): - ''' - pts: (N, 4) - gt_bboxes_3d: (n_bbox, 7) - bev_corners: ((n_bbox, 4, 2)) - trans_vec: (n_bbox, num_try, 3) - rot_mat: (n_bbox, num_try, 2, 2) - masks: (N, n_bbox), bool - return: gt_bboxes_3d, pts - ''' - # 1. select the noise of num_try for each bbox under the collision test - n_bbox, num_try = trans_vec.shape[:2] - - # succ_mask: (n_bbox, ), whether each bbox can be added noise successfully. -1 denotes failure. 
- succ_mask = -np.ones((n_bbox, ), dtype=np.int_) - for i in range(n_bbox): - for j in range(num_try): - cur_bbox = bev_corners[i] - np.expand_dims(gt_bboxes_3d[i, :2], 0) # (4, 2) - (1, 2) -> (4, 2) - rot = np.zeros((2, 2), dtype=np.float32) - rot[:] = rot_mat[i, j] # (2, 2) - trans = trans_vec[i, j] # (3, ) - cur_bbox = cur_bbox @ rot - cur_bbox += gt_bboxes_3d[i, :2] - cur_bbox += np.expand_dims(trans[:2], 0) # (4, 2) - coll_mat = box_collision_test(np.expand_dims(cur_bbox, 0), bev_corners) - coll_mat[0, i] = False - if coll_mat.any(): - continue - else: - bev_corners[i] = cur_bbox # update the bev_corners when adding noise succseefully. - succ_mask[i] = j - break - # 2. points and bboxes noise - visit = {} - for i in range(n_bbox): - jj = succ_mask[i] - if jj == -1: - continue - cur_trans, cur_angle = trans_vec[i, jj], rot_angle[i, jj] - cur_rot_mat = np.zeros((2, 2), dtype=np.float32) - cur_rot_mat[:] = rot_mat[i, jj] - for k in range(len(pts)): - if masks[k][i] and k not in visit: - cur_pt = pts[k] # (4, ) - cur_pt_xyz = np.zeros((1, 3), dtype=np.float32) - cur_pt_xyz[0] = cur_pt[:3] - gt_bboxes_3d[i][:3] - tmp_cur_pt_xy = np.zeros((1, 2), dtype=np.float32) - tmp_cur_pt_xy[:] = cur_pt_xyz[:, :2] - cur_pt_xyz[:, :2] = tmp_cur_pt_xy @ cur_rot_mat # (1, 2) - cur_pt_xyz[0] = cur_pt_xyz[0] + gt_bboxes_3d[i][:3] - cur_pt_xyz[0] = cur_pt_xyz[0] + cur_trans[:3] - cur_pt[:3] = cur_pt_xyz[0] - visit[k] = 1 - - gt_bboxes_3d[i, :3] += cur_trans[:3] - gt_bboxes_3d[i, 6] += cur_angle - - return gt_bboxes_3d, pts - - -def object_noise(data_dict, num_try, translation_std, rot_range): - ''' - data_dict: dict(pts, gt_bboxes_3d, gt_labels, gt_names, difficulty) - num_try: int, 100 - translation_std: shape=[3, ] - rot_range: shape=[2, ] - return: data_dict - ''' - pts, gt_bboxes_3d = data_dict['pts'], data_dict['gt_bboxes_3d'] - n_bbox = len(gt_bboxes_3d) - - # 1. generate rotation vectors and rotation matrices - trans_vec = np.random.normal(scale=translation_std, size=(n_bbox, num_try, 3)).astype(np.float32) - rot_angle = np.random.uniform(rot_range[0], rot_range[1], size=(n_bbox, num_try)).astype(np.float32) - rot_cos, rot_sin = np.cos(rot_angle), np.sin(rot_angle) - # in fact, - rot_angle - rot_mat = np.array([[rot_cos, rot_sin], - [-rot_sin, rot_cos]]) # (2, 2, n_bbox, num_try) - rot_mat = np.transpose(rot_mat, (2, 3, 1, 0)) # (n_bbox, num_try, 2, 2) - - # 2. generate noise for each bbox and the points inside the bbox. 
- bev_corners = bbox3d2bevcorners(gt_bboxes_3d) # (n_bbox, 4, 2) # for collision test - masks = remove_pts_in_bboxes(pts, gt_bboxes_3d, rm=False) # identify which point should be added noise - gt_bboxes_3d, pts = object_noise_core(pts=pts, - gt_bboxes_3d=gt_bboxes_3d, - bev_corners=bev_corners, - trans_vec=trans_vec, - rot_angle=rot_angle, - rot_mat=rot_mat, - masks=masks) - data_dict.update({'gt_bboxes_3d': gt_bboxes_3d}) - data_dict.update({'pts': pts}) - - return data_dict - - -def random_flip(data_dict, random_flip_ratio): - ''' - data_dict: dict(pts, gt_bboxes_3d, gt_labels, gt_names, difficulty) - random_flip_ratio: float, 0-1 - return: data_dict - ''' - random_flip_state = np.random.choice([True, False], p=[random_flip_ratio, 1-random_flip_ratio]) - if random_flip_state: - pts, gt_bboxes_3d = data_dict['pts'], data_dict['gt_bboxes_3d'] - pts[:, 1] = -pts[:, 1] - gt_bboxes_3d[:, 1] = -gt_bboxes_3d[:, 1] - gt_bboxes_3d[:, 6] = -gt_bboxes_3d[:, 6] + np.pi - data_dict.update({'gt_bboxes_3d': gt_bboxes_3d}) - data_dict.update({'pts': pts}) - return data_dict - - -def global_rot_scale_trans(data_dict, rot_range, scale_ratio_range, translation_std): - ''' - data_dict: dict(pts, gt_bboxes_3d, gt_labels, gt_names, difficulty) - rot_range: [a, b] - scale_ratio_range: [c, d] - translation_std: [e, f, g] - return: data_dict - ''' - pts, gt_bboxes_3d = data_dict['pts'], data_dict['gt_bboxes_3d'] - - # 1. rotation - rot_angle = np.random.uniform(rot_range[0], rot_range[1]) - rot_cos, rot_sin = np.cos(rot_angle), np.sin(rot_angle) - # in fact, - rot_angle - rot_mat = np.array([[rot_cos, rot_sin], - [-rot_sin, rot_cos]]) # (2, 2) - # 1.1 bbox rotation - gt_bboxes_3d[:, :2] = gt_bboxes_3d[:, :2] @ rot_mat.T - gt_bboxes_3d[:, 6] += rot_angle - # 1.2 point rotation - pts[:, :2] = pts[:, :2] @ rot_mat.T - - # 2. scaling - scale_fator = np.random.uniform(scale_ratio_range[0], scale_ratio_range[1]) - gt_bboxes_3d[:, :6] *= scale_fator - pts[:, :3] *= scale_fator - - # 3. 
translation - trans_factor = np.random.normal(scale=translation_std, size=(1, 3)) - gt_bboxes_3d[:, :3] += trans_factor - pts[:, :3] += trans_factor - data_dict.update({'gt_bboxes_3d': gt_bboxes_3d}) - data_dict.update({'pts': pts}) - return data_dict - - -def point_range_filter(data_dict, point_range): - ''' - data_dict: dict(pts, gt_bboxes_3d, gt_labels, gt_names, difficulty) - point_range: [x1, y1, z1, x2, y2, z2] - ''' - pts = data_dict['pts'] - flag_x_low = pts[:, 0] > point_range[0] - flag_y_low = pts[:, 1] > point_range[1] - flag_z_low = pts[:, 2] > point_range[2] - flag_x_high = pts[:, 0] < point_range[3] - flag_y_high = pts[:, 1] < point_range[4] - flag_z_high = pts[:, 2] < point_range[5] - keep_mask = flag_x_low & flag_y_low & flag_z_low & flag_x_high & flag_y_high & flag_z_high - pts = pts[keep_mask] - data_dict.update({'pts': pts}) - return data_dict - - -def object_range_filter(data_dict, object_range): - ''' - data_dict: dict(pts, gt_bboxes_3d, gt_labels, gt_names, difficulty) - point_range: [x1, y1, z1, x2, y2, z2] - ''' - gt_bboxes_3d, gt_labels = data_dict['gt_bboxes_3d'], data_dict['gt_labels'] - gt_names, difficulty = data_dict['gt_names'], data_dict['difficulty'] - - # bev filter - flag_x_low = gt_bboxes_3d[:, 0] > object_range[0] - flag_y_low = gt_bboxes_3d[:, 1] > object_range[1] - flag_x_high = gt_bboxes_3d[:, 0] < object_range[3] - flag_y_high = gt_bboxes_3d[:, 1] < object_range[4] - keep_mask = flag_x_low & flag_y_low & flag_x_high & flag_y_high - - gt_bboxes_3d, gt_labels = gt_bboxes_3d[keep_mask], gt_labels[keep_mask] - gt_names, difficulty = gt_names[keep_mask], difficulty[keep_mask] - gt_bboxes_3d[:, 6] = limit_period(gt_bboxes_3d[:, 6], 0.5, 2 * np.pi) - data_dict.update({'gt_bboxes_3d': gt_bboxes_3d}) - data_dict.update({'gt_labels': gt_labels}) - data_dict.update({'gt_names': gt_names}) - data_dict.update({'difficulty': difficulty}) - return data_dict - - -def points_shuffle(data_dict): - ''' - data_dict: dict(pts, gt_bboxes_3d, gt_labels, gt_names, difficulty) - ''' - pts = data_dict['pts'] - indices = np.arange(0, len(pts)) - np.random.shuffle(indices) - pts = pts[indices] - data_dict.update({'pts': pts}) - return data_dict - - -def filter_bboxes_with_labels(data_dict, label=-1): - ''' - data_dict: dict(pts, gt_bboxes_3d, gt_labels, gt_names, difficulty) - label: int - ''' - gt_bboxes_3d, gt_labels = data_dict['gt_bboxes_3d'], data_dict['gt_labels'] - gt_names, difficulty = data_dict['gt_names'], data_dict['difficulty'] - idx = gt_labels != label - gt_bboxes_3d = gt_bboxes_3d[idx] - gt_labels = gt_labels[idx] - gt_names = gt_names[idx] - difficulty = difficulty[idx] - data_dict.update({'gt_bboxes_3d': gt_bboxes_3d}) - data_dict.update({'gt_labels': gt_labels}) - data_dict.update({'gt_names': gt_names}) - data_dict.update({'difficulty': difficulty}) - return data_dict - - -def data_augment(CLASSES, data_root, data_dict, data_aug_config): - ''' - CLASSES: dict(Pedestrian=0, Cyclist=1, Car=2) - data_root: str, data root - data_dict: dict(pts, gt_bboxes_3d, gt_labels, gt_names, difficulty) - data_aug_config: dict() - return: data_dict - ''' - - # 1. sample databases and merge into the data - db_sampler_config = data_aug_config['db_sampler'] - data_dict = dbsample(CLASSES, - data_root, - data_dict, - db_sampler=db_sampler_config['db_sampler'], - sample_groups=db_sampler_config['sample_groups']) - # 2. 
object noise - object_noise_config = data_aug_config['object_noise'] - data_dict = object_noise(data_dict, - num_try=object_noise_config['num_try'], - translation_std=object_noise_config['translation_std'], - rot_range=object_noise_config['rot_range']) - - # 3. random flip - random_flip_ratio = data_aug_config['random_flip_ratio'] - data_dict = random_flip(data_dict, random_flip_ratio) - - # 4. global rotation, scaling and translation - global_rot_scale_trans_config = data_aug_config['global_rot_scale_trans'] - rot_range = global_rot_scale_trans_config['rot_range'] - scale_ratio_range = global_rot_scale_trans_config['scale_ratio_range'] - translation_std = global_rot_scale_trans_config['translation_std'] - data_dict = global_rot_scale_trans(data_dict, rot_range, scale_ratio_range, translation_std) - - # 5. points range filter - point_range = data_aug_config['point_range_filter'] - data_dict = point_range_filter(data_dict, point_range) - - # 6. object range filter - object_range = data_aug_config['object_range_filter'] - data_dict = object_range_filter(data_dict, object_range) - - # 7. points shuffle - data_dict = points_shuffle(data_dict) - - # # 8. filter bboxes with label=-1 - # data_dict = filter_bboxes_with_labels(data_dict) - - return data_dict diff --git a/cv/3d_detection/pointpillars/pytorch/dataset/dataloader.py b/cv/3d_detection/pointpillars/pytorch/dataset/dataloader.py deleted file mode 100755 index bb8fb6ee..00000000 --- a/cv/3d_detection/pointpillars/pytorch/dataset/dataloader.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
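# Illustrative sketch (toy data, not from this repo or this patch): the deleted
# data_aug.py above chains seven steps -- db sampling, per-object noise, random flip,
# global rot/scale/translate, point- and object-range filtering, and point shuffling.
# A minimal, self-contained example of the point-range filter step, assuming the
# KITTI range [0, -39.68, -3, 69.12, 39.68, 1] used in that file:
import numpy as np

point_range = [0, -39.68, -3, 69.12, 39.68, 1]      # x1, y1, z1, x2, y2, z2
pts = np.array([[10.0, 0.0, -1.0, 0.3],              # inside the range -> kept
                [80.0, 0.0, -1.0, 0.3]])             # x > 69.12 -> dropped
keep = np.all((pts[:, :3] > point_range[:3]) & (pts[:, :3] < point_range[3:]), axis=1)
print(pts[keep])                                      # only the first point remains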
-import random -import numpy as np -import torch -from torch.utils.data import DataLoader -from functools import partial - - -def collate_fn(list_data): - batched_pts_list, batched_gt_bboxes_list = [], [] - batched_labels_list, batched_names_list = [], [] - batched_difficulty_list = [] - batched_img_list, batched_calib_list = [], [] - for data_dict in list_data: - pts, gt_bboxes_3d = data_dict['pts'], data_dict['gt_bboxes_3d'] - gt_labels, gt_names = data_dict['gt_labels'], data_dict['gt_names'] - difficulty = data_dict['difficulty'] - image_info, calbi_info = data_dict['image_info'], data_dict['calib_info'] - - batched_pts_list.append(torch.from_numpy(pts)) - batched_gt_bboxes_list.append(torch.from_numpy(gt_bboxes_3d)) - batched_labels_list.append(torch.from_numpy(gt_labels)) - batched_names_list.append(gt_names) # List(str) - batched_difficulty_list.append(torch.from_numpy(difficulty)) - batched_img_list.append(image_info) - batched_calib_list.append(calbi_info) - - rt_data_dict = dict( - batched_pts=batched_pts_list, - batched_gt_bboxes=batched_gt_bboxes_list, - batched_labels=batched_labels_list, - batched_names=batched_names_list, - batched_difficulty=batched_difficulty_list, - batched_img_info=batched_img_list, - batched_calib_info=batched_calib_list - ) - - return rt_data_dict - - -def get_dataloader(dataset, batch_size, num_workers, shuffle=True, drop_last=False): - collate = collate_fn - dataloader = DataLoader( - dataset=dataset, - batch_size=batch_size, - shuffle=shuffle, - num_workers=num_workers, - drop_last=drop_last, - collate_fn=collate, - ) - return dataloader - -def get_dataloader_dist(dataset, batch_size, num_workers, sampler, drop_last=False): - collate = collate_fn - dataloader = DataLoader( - dataset=dataset, - batch_size=batch_size, - # shuffle=shuffle, - num_workers=num_workers, - drop_last=drop_last, - collate_fn=collate, - sampler=sampler - ) - return dataloader diff --git a/cv/3d_detection/pointpillars/pytorch/dataset/kitti.py b/cv/3d_detection/pointpillars/pytorch/dataset/kitti.py deleted file mode 100755 index 9757317e..00000000 --- a/cv/3d_detection/pointpillars/pytorch/dataset/kitti.py +++ /dev/null @@ -1,144 +0,0 @@ -import numpy as np -import os -import torch -from torch.utils.data import Dataset - -import sys -BASE = os.path.dirname(os.path.abspath(__file__)) -sys.path.append(os.path.dirname(BASE)) - -from utils import read_pickle, read_points, bbox_camera2lidar -from dataset import point_range_filter, data_augment - - -class BaseSampler(): - def __init__(self, sampled_list, shuffle=True): - self.total_num = len(sampled_list) - self.sampled_list = np.array(sampled_list) - self.indices = np.arange(self.total_num) - if shuffle: - np.random.shuffle(self.indices) - self.shuffle = shuffle - self.idx = 0 - - def sample(self, num): - if self.idx + num < self.total_num: - ret = self.sampled_list[self.indices[self.idx:self.idx+num]] - self.idx += num - else: - ret = self.sampled_list[self.indices[self.idx:]] - self.idx = 0 - if self.shuffle: - np.random.shuffle(self.indices) - return ret - - -class Kitti(Dataset): - - CLASSES = { - 'Pedestrian': 0, - 'Cyclist': 1, - 'Car': 2 - } - - def __init__(self, data_root, split, pts_prefix='velodyne_reduced'): - assert split in ['train', 'val', 'trainval', 'test'] - self.data_root = data_root - self.split = split - self.pts_prefix = pts_prefix - self.data_infos = read_pickle(os.path.join(data_root, f'kitti_infos_{split}.pkl')) - self.sorted_ids = list(self.data_infos.keys()) - db_infos = read_pickle(os.path.join(data_root, 
'kitti_dbinfos_train.pkl')) - db_infos = self.filter_db(db_infos) - - db_sampler = {} - for cat_name in self.CLASSES: - db_sampler[cat_name] = BaseSampler(db_infos[cat_name], shuffle=True) - self.data_aug_config=dict( - db_sampler=dict( - db_sampler=db_sampler, - sample_groups=dict(Car=15, Pedestrian=10, Cyclist=10) - ), - object_noise=dict( - num_try=100, - translation_std=[0.25, 0.25, 0.25], - rot_range=[-0.15707963267, 0.15707963267] - ), - random_flip_ratio=0.5, - global_rot_scale_trans=dict( - rot_range=[-0.78539816, 0.78539816], - scale_ratio_range=[0.95, 1.05], - translation_std=[0, 0, 0] - ), - point_range_filter=[0, -39.68, -3, 69.12, 39.68, 1], - object_range_filter=[0, -39.68, -3, 69.12, 39.68, 1] - ) - - def remove_dont_care(self, annos_info): - keep_ids = [i for i, name in enumerate(annos_info['name']) if name != 'DontCare'] - for k, v in annos_info.items(): - annos_info[k] = v[keep_ids] - return annos_info - - def filter_db(self, db_infos): - # 1. filter_by_difficulty - for k, v in db_infos.items(): - db_infos[k] = [item for item in v if item['difficulty'] != -1] - - # 2. filter_by_min_points, dict(Car=5, Pedestrian=10, Cyclist=10) - filter_thrs = dict(Car=5, Pedestrian=10, Cyclist=10) - for cat in self.CLASSES: - filter_thr = filter_thrs[cat] - db_infos[cat] = [item for item in db_infos[cat] if item['num_points_in_gt'] >= filter_thr] - - return db_infos - - def __getitem__(self, index): - data_info = self.data_infos[self.sorted_ids[index]] - image_info, calib_info, annos_info = \ - data_info['image'], data_info['calib'], data_info['annos'] - - # point cloud input - velodyne_path = data_info['velodyne_path'].replace('velodyne', self.pts_prefix) - pts_path = os.path.join(self.data_root, velodyne_path) - pts = read_points(pts_path) - - # calib input: for bbox coordinates transformation between Camera and Lidar. 
- # because - tr_velo_to_cam = calib_info['Tr_velo_to_cam'].astype(np.float32) - r0_rect = calib_info['R0_rect'].astype(np.float32) - - # annotations input - annos_info = self.remove_dont_care(annos_info) - annos_name = annos_info['name'] - annos_location = annos_info['location'] - annos_dimension = annos_info['dimensions'] - rotation_y = annos_info['rotation_y'] - gt_bboxes = np.concatenate([annos_location, annos_dimension, rotation_y[:, None]], axis=1).astype(np.float32) - gt_bboxes_3d = bbox_camera2lidar(gt_bboxes, tr_velo_to_cam, r0_rect) - gt_labels = [self.CLASSES.get(name, -1) for name in annos_name] - data_dict = { - 'pts': pts, - 'gt_bboxes_3d': gt_bboxes_3d, - 'gt_labels': np.array(gt_labels), - 'gt_names': annos_name, - 'difficulty': annos_info['difficulty'], - 'image_info': image_info, - 'calib_info': calib_info - } - if self.split in ['train', 'trainval']: - data_dict = data_augment(self.CLASSES, self.data_root, data_dict, self.data_aug_config) - else: - data_dict = point_range_filter(data_dict, point_range=self.data_aug_config['point_range_filter']) - - return data_dict - - def __len__(self): - return len(self.data_infos) - - -if __name__ == '__main__': - - kitti_data = Kitti(data_root='/mnt/ssd1/lifa_rdata/det/kitti', - split='train') - kitti_data.__getitem__(9) diff --git a/cv/3d_detection/pointpillars/pytorch/evaluate.py b/cv/3d_detection/pointpillars/pytorch/evaluate.py deleted file mode 100755 index e4fd746c..00000000 --- a/cv/3d_detection/pointpillars/pytorch/evaluate.py +++ /dev/null @@ -1,386 +0,0 @@ -import argparse -import numpy as np -import os -import torch -import pdb -from tqdm import tqdm - -from utils import setup_seed, keep_bbox_from_image_range, \ - keep_bbox_from_lidar_range, write_pickle, write_label, \ - iou2d, iou3d_camera, iou_bev -from dataset import Kitti, get_dataloader -from model import PointPillars - - -def get_score_thresholds(tp_scores, total_num_valid_gt, num_sample_pts=41): - score_thresholds = [] - tp_scores = sorted(tp_scores)[::-1] - cur_recall, pts_ind = 0, 0 - for i, score in enumerate(tp_scores): - lrecall = (i + 1) / total_num_valid_gt - rrecall = (i + 2) / total_num_valid_gt - - if i == len(tp_scores) - 1: - score_thresholds.append(score) - break - - if (lrecall + rrecall) / 2 < cur_recall: - continue - - score_thresholds.append(score) - pts_ind += 1 - cur_recall = pts_ind / (num_sample_pts - 1) - return score_thresholds - - -def do_eval(det_results, gt_results, CLASSES, saved_path): - ''' - det_results: list, - gt_results: dict(id -> det_results) - CLASSES: dict - ''' - assert len(det_results) == len(gt_results) - f = open(os.path.join(saved_path, 'eval_results.txt'), 'w') - - # 1. 
calculate iou - ious = { - 'bbox_2d': [], - 'bbox_bev': [], - 'bbox_3d': [] - } - ids = list(sorted(gt_results.keys())) - for id in ids: - gt_result = gt_results[id]['annos'] - det_result = det_results[id] - - # 1.1, 2d bboxes iou - gt_bboxes2d = gt_result['bbox'].astype(np.float32) - det_bboxes2d = det_result['bbox'].astype(np.float32) - iou2d_v = iou2d(torch.from_numpy(gt_bboxes2d).cuda(), torch.from_numpy(det_bboxes2d).cuda()) - ious['bbox_2d'].append(iou2d_v.cpu().numpy()) - - # 1.2, bev iou - gt_location = gt_result['location'].astype(np.float32) - gt_dimensions = gt_result['dimensions'].astype(np.float32) - gt_rotation_y = gt_result['rotation_y'].astype(np.float32) - det_location = det_result['location'].astype(np.float32) - det_dimensions = det_result['dimensions'].astype(np.float32) - det_rotation_y = det_result['rotation_y'].astype(np.float32) - - gt_bev = np.concatenate([gt_location[:, [0, 2]], gt_dimensions[:, [0, 2]], gt_rotation_y[:, None]], axis=-1) - det_bev = np.concatenate([det_location[:, [0, 2]], det_dimensions[:, [0, 2]], det_rotation_y[:, None]], axis=-1) - iou_bev_v = iou_bev(torch.from_numpy(gt_bev).cuda(), torch.from_numpy(det_bev).cuda()) - ious['bbox_bev'].append(iou_bev_v.cpu().numpy()) - - # 1.3, 3dbboxes iou - gt_bboxes3d = np.concatenate([gt_location, gt_dimensions, gt_rotation_y[:, None]], axis=-1) - det_bboxes3d = np.concatenate([det_location, det_dimensions, det_rotation_y[:, None]], axis=-1) - iou3d_v = iou3d_camera(torch.from_numpy(gt_bboxes3d).cuda(), torch.from_numpy(det_bboxes3d).cuda()) - ious['bbox_3d'].append(iou3d_v.cpu().numpy()) - - MIN_IOUS = { - 'Pedestrian': [0.5, 0.5, 0.5], - 'Cyclist': [0.5, 0.5, 0.5], - 'Car': [0.7, 0.7, 0.7] - } - MIN_HEIGHT = [40, 25, 25] - - overall_results = {} - for e_ind, eval_type in enumerate(['bbox_2d', 'bbox_bev', 'bbox_3d']): - eval_ious = ious[eval_type] - eval_ap_results, eval_aos_results = {}, {} - for cls in CLASSES: - eval_ap_results[cls] = [] - eval_aos_results[cls] = [] - CLS_MIN_IOU = MIN_IOUS[cls][e_ind] - for difficulty in [0, 1, 2]: - # 1. 
bbox property - total_gt_ignores, total_det_ignores, total_dc_bboxes, total_scores = [], [], [], [] - total_gt_alpha, total_det_alpha = [], [] - for id in ids: - gt_result = gt_results[id]['annos'] - det_result = det_results[id] - - # 1.1 gt bbox property - cur_gt_names = gt_result['name'] - cur_difficulty = gt_result['difficulty'] - gt_ignores, dc_bboxes = [], [] - for j, cur_gt_name in enumerate(cur_gt_names): - ignore = cur_difficulty[j] < 0 or cur_difficulty[j] > difficulty - if cur_gt_name == cls: - valid_class = 1 - elif cls == 'Pedestrian' and cur_gt_name == 'Person_sitting': - valid_class = 0 - elif cls == 'Car' and cur_gt_name == 'Van': - valid_class = 0 - else: - valid_class = -1 - - if valid_class == 1 and not ignore: - gt_ignores.append(0) - elif valid_class == 0 or (valid_class == 1 and ignore): - gt_ignores.append(1) - else: - gt_ignores.append(-1) - - if cur_gt_name == 'DontCare': - dc_bboxes.append(gt_result['bbox'][j]) - total_gt_ignores.append(gt_ignores) - total_dc_bboxes.append(np.array(dc_bboxes)) - total_gt_alpha.append(gt_result['alpha']) - - # 1.2 det bbox property - cur_det_names = det_result['name'] - cur_det_heights = det_result['bbox'][:, 3] - det_result['bbox'][:, 1] - det_ignores = [] - for j, cur_det_name in enumerate(cur_det_names): - if cur_det_heights[j] < MIN_HEIGHT[difficulty]: - det_ignores.append(1) - elif cur_det_name == cls: - det_ignores.append(0) - else: - det_ignores.append(-1) - total_det_ignores.append(det_ignores) - total_scores.append(det_result['score']) - total_det_alpha.append(det_result['alpha']) - - # 2. calculate scores thresholds for PR curve - tp_scores = [] - for i, id in enumerate(ids): - cur_eval_ious = eval_ious[i] - gt_ignores, det_ignores = total_gt_ignores[i], total_det_ignores[i] - scores = total_scores[i] - - nn, mm = cur_eval_ious.shape - assigned = np.zeros((mm, ), dtype=np.bool_) - for j in range(nn): - if gt_ignores[j] == -1: - continue - match_id, match_score = -1, -1 - for k in range(mm): - if not assigned[k] and det_ignores[k] >= 0 and cur_eval_ious[j, k] > CLS_MIN_IOU and scores[k] > match_score: - match_id = k - match_score = scores[k] - if match_id != -1: - assigned[match_id] = True - if det_ignores[match_id] == 0 and gt_ignores[j] == 0: - tp_scores.append(match_score) - total_num_valid_gt = np.sum([np.sum(np.array(gt_ignores) == 0) for gt_ignores in total_gt_ignores]) - score_thresholds = get_score_thresholds(tp_scores, total_num_valid_gt) - - # 3. 
draw PR curve and calculate mAP - tps, fns, fps, total_aos = [], [], [], [] - - for score_threshold in score_thresholds: - tp, fn, fp = 0, 0, 0 - aos = 0 - for i, id in enumerate(ids): - cur_eval_ious = eval_ious[i] - gt_ignores, det_ignores = total_gt_ignores[i], total_det_ignores[i] - gt_alpha, det_alpha = total_gt_alpha[i], total_det_alpha[i] - scores = total_scores[i] - - nn, mm = cur_eval_ious.shape - assigned = np.zeros((mm, ), dtype=np.bool_) - for j in range(nn): - if gt_ignores[j] == -1: - continue - match_id, match_iou = -1, -1 - for k in range(mm): - if not assigned[k] and det_ignores[k] >= 0 and scores[k] >= score_threshold and cur_eval_ious[j, k] > CLS_MIN_IOU: - - if det_ignores[k] == 0 and cur_eval_ious[j, k] > match_iou: - match_iou = cur_eval_ious[j, k] - match_id = k - elif det_ignores[k] == 1 and match_iou == -1: - match_id = k - - if match_id != -1: - assigned[match_id] = True - if det_ignores[match_id] == 0 and gt_ignores[j] == 0: - tp += 1 - if eval_type == 'bbox_2d': - aos += (1 + np.cos(gt_alpha[j] - det_alpha[match_id])) / 2 - else: - if gt_ignores[j] == 0: - fn += 1 - - for k in range(mm): - if det_ignores[k] == 0 and scores[k] >= score_threshold and not assigned[k]: - fp += 1 - - # In case 2d bbox evaluation, we should consider dontcare bboxes - if eval_type == 'bbox_2d': - dc_bboxes = total_dc_bboxes[i] - det_bboxes = det_results[id]['bbox'] - if len(dc_bboxes) > 0: - ious_dc_det = iou2d(torch.from_numpy(det_bboxes), torch.from_numpy(dc_bboxes), metric=1).numpy().T - for j in range(len(dc_bboxes)): - for k in range(len(det_bboxes)): - if det_ignores[k] == 0 and scores[k] >= score_threshold and not assigned[k]: - if ious_dc_det[j, k] > CLS_MIN_IOU: - fp -= 1 - assigned[k] = True - - tps.append(tp) - fns.append(fn) - fps.append(fp) - if eval_type == 'bbox_2d': - total_aos.append(aos) - - tps, fns, fps = np.array(tps), np.array(fns), np.array(fps) - - recalls = tps / (tps + fns) - precisions = tps / (tps + fps) - for i in range(len(score_thresholds)): - precisions[i] = np.max(precisions[i:]) - - sums_AP = 0 - for i in range(0, len(score_thresholds), 4): - sums_AP += precisions[i] - mAP = sums_AP / 11 * 100 - eval_ap_results[cls].append(mAP) - - if eval_type == 'bbox_2d': - total_aos = np.array(total_aos) - similarity = total_aos / (tps + fps) - for i in range(len(score_thresholds)): - similarity[i] = np.max(similarity[i:]) - sums_similarity = 0 - for i in range(0, len(score_thresholds), 4): - sums_similarity += similarity[i] - mSimilarity = sums_similarity / 11 * 100 - eval_aos_results[cls].append(mSimilarity) - - print(f'=========={eval_type.upper()}==========') - print(f'=========={eval_type.upper()}==========', file=f) - for k, v in eval_ap_results.items(): - print(f'{k} AP@{MIN_IOUS[k][e_ind]}: {v[0]:.4f} {v[1]:.4f} {v[2]:.4f}') - print(f'{k} AP@{MIN_IOUS[k][e_ind]}: {v[0]:.4f} {v[1]:.4f} {v[2]:.4f}', file=f) - if eval_type == 'bbox_2d': - print(f'==========AOS==========') - print(f'==========AOS==========', file=f) - for k, v in eval_aos_results.items(): - print(f'{k} AOS@{MIN_IOUS[k][e_ind]}: {v[0]:.4f} {v[1]:.4f} {v[2]:.4f}') - print(f'{k} AOS@{MIN_IOUS[k][e_ind]}: {v[0]:.4f} {v[1]:.4f} {v[2]:.4f}', file=f) - - overall_results[eval_type] = np.mean(list(eval_ap_results.values()), 0) - if eval_type == 'bbox_2d': - overall_results['AOS'] = np.mean(list(eval_aos_results.values()), 0) - - print(f'\n==========Overall==========') - print(f'\n==========Overall==========', file=f) - for k, v in overall_results.items(): - print(f'{k} AP: {v[0]:.4f} {v[1]:.4f} 
{v[2]:.4f}') - print(f'{k} AP: {v[0]:.4f} {v[1]:.4f} {v[2]:.4f}', file=f) - f.close() - - -def main(args): - val_dataset = Kitti(data_root=args.data_root, - split='val') - val_dataloader = get_dataloader(dataset=val_dataset, - batch_size=args.batch_size, - num_workers=args.num_workers, - shuffle=False) - CLASSES = Kitti.CLASSES - LABEL2CLASSES = {v:k for k, v in CLASSES.items()} - - if not args.no_cuda: - model = PointPillars(nclasses=args.nclasses).cuda() - model.load_state_dict(torch.load(args.ckpt)) - else: - model = PointPillars(nclasses=args.nclasses) - model.load_state_dict( - torch.load(args.ckpt, map_location=torch.device('cpu'))) - - saved_path = args.saved_path - os.makedirs(saved_path, exist_ok=True) - saved_submit_path = os.path.join(saved_path, 'submit') - os.makedirs(saved_submit_path, exist_ok=True) - - pcd_limit_range = np.array([0, -40, -3, 70.4, 40, 0.0], dtype=np.float32) - - model.eval() - with torch.no_grad(): - format_results = {} - print('Predicting and Formatting the results.') - for i, data_dict in enumerate(tqdm(val_dataloader)): - if not args.no_cuda: - # move the tensors to the cuda - for key in data_dict: - for j, item in enumerate(data_dict[key]): - if torch.is_tensor(item): - data_dict[key][j] = data_dict[key][j].cuda() - - batched_pts = data_dict['batched_pts'] - batched_gt_bboxes = data_dict['batched_gt_bboxes'] - batched_labels = data_dict['batched_labels'] - batched_difficulty = data_dict['batched_difficulty'] - batch_results = model(batched_pts=batched_pts, - mode='val', - batched_gt_bboxes=batched_gt_bboxes, - batched_gt_labels=batched_labels) - # pdb.set_trace() - for j, result in enumerate(batch_results): - format_result = { - 'name': [], - 'truncated': [], - 'occluded': [], - 'alpha': [], - 'bbox': [], - 'dimensions': [], - 'location': [], - 'rotation_y': [], - 'score': [] - } - - calib_info = data_dict['batched_calib_info'][j] - tr_velo_to_cam = calib_info['Tr_velo_to_cam'].astype(np.float32) - r0_rect = calib_info['R0_rect'].astype(np.float32) - P2 = calib_info['P2'].astype(np.float32) - image_shape = data_dict['batched_img_info'][j]['image_shape'] - idx = data_dict['batched_img_info'][j]['image_idx'] - result_filter = keep_bbox_from_image_range(result, tr_velo_to_cam, r0_rect, P2, image_shape) - result_filter = keep_bbox_from_lidar_range(result_filter, pcd_limit_range) - - lidar_bboxes = result_filter['lidar_bboxes'] - labels, scores = result_filter['labels'], result_filter['scores'] - bboxes2d, camera_bboxes = result_filter['bboxes2d'], result_filter['camera_bboxes'] - for lidar_bbox, label, score, bbox2d, camera_bbox in \ - zip(lidar_bboxes, labels, scores, bboxes2d, camera_bboxes): - format_result['name'].append(LABEL2CLASSES[label]) - format_result['truncated'].append(0.0) - format_result['occluded'].append(0) - alpha = camera_bbox[6] - np.arctan2(camera_bbox[0], camera_bbox[2]) - format_result['alpha'].append(alpha) - format_result['bbox'].append(bbox2d) - format_result['dimensions'].append(camera_bbox[3:6]) - format_result['location'].append(camera_bbox[:3]) - format_result['rotation_y'].append(camera_bbox[6]) - format_result['score'].append(score) - - write_label(format_result, os.path.join(saved_submit_path, f'{idx:06d}.txt')) - - format_results[idx] = {k:np.array(v) for k, v in format_result.items()} - - write_pickle(format_results, os.path.join(saved_path, 'results.pkl')) - - print('Evaluating.. 
Please wait several seconds.')
-    do_eval(format_results, val_dataset.data_infos, CLASSES, saved_path)
-
-if __name__ == '__main__':
-    parser = argparse.ArgumentParser(description='Configuration Parameters')
-    parser.add_argument('--data_root', default='/mnt/ssd1/lifa_rdata/det/kitti',
-                        help='your data root for kitti')
-    parser.add_argument('--ckpt', default='pretrained/epoch_160.pth', help='your checkpoint for kitti')
-    parser.add_argument('--saved_path', default='results', help='your saved path for predicted results')
-    parser.add_argument('--batch_size', type=int, default=1)
-    parser.add_argument('--num_workers', type=int, default=4)
-    parser.add_argument('--nclasses', type=int, default=3)
-    parser.add_argument('--no_cuda', action='store_true',
-                        help='whether to use cuda')
-    args = parser.parse_args()
-
-    main(args)
diff --git a/cv/3d_detection/pointpillars/pytorch/figures/img_3dbbox_000134.png b/cv/3d_detection/pointpillars/pytorch/figures/img_3dbbox_000134.png
deleted file mode 100755
index 0562a3759a67a9255da9066a946f0b08fbf2e799..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 807160
z&T|;xVMY^ULbFG*_Sgj?6|G#n@N%V30(p(U@I7BO`J}434WgAllCs5l;WO%kj(~4X z_`=sTD`h!AJF zX~&p9C%^oIo#f8ci{P^hK;5E;OgPfg)gqha*Y>UZ7PbC_>LBAYCnBih@nRR8?bySDY6W2g6sL)Zq$~wGUFEABq zN1~J;gdB5&RhCnT)Py?1xO+;vI+_9DIRGYkE1T?iAx>O5mLZ$4dywcmctY8|Ns6&> zFw@v7U*+ys-!Xg4p#N zCkxw^l*S7uANu2(TV(D^oA>fwKD94on2OUw5HEX+i5~P^NTGz733E~n_{h=G^gbP; z?%BoxXy(0K@$`2sGN1c-!lS&K++Z}P{w@91w{q~6Iy{g|bL$NU2X(YZ!8e7~Go*Rt z9J4wl>ga1$mKW9-*$M5r^Yv5n6?O6A@r}{E)nGECu(%aJuvV^u-MdO-v?ZajeLms= z>fY{es*ycI*GV0~$NkCW3dGSxwztcggk6-438fx`2~`Hwhv5>CMcTHXT6B4Bi+%&g}440my$wM@ca* zA!9K_vv1`UASOSl3TePD`kSTar5;F^n?MC)Q8!KJmBj*~_+svh!ujcgvgy_MXU2&r zrV^f9HTZ}snxpx$i40eNmi(deWp6#&x|H=72gBmefn-zA_E!$ zlvGF>RE~A7AqY|ICy8o#+|gNkI>|qnVzskJ-+GhYt2nA%YE%YfBHElQRN@{%$qz^w`Ij1!3e0Y#fpDw` z!qwkdpZy0{&3%ezZ)H-~*j9f;K@O3RbIvPosF=Be8;(w)aEX>9KchJ(KNkc>%87|* zE^OJjXm@54rF{MG4v?GobRHx|);Qp>zG=ufR@cLWSNTo!hSr$mRKV;Haq}OnGgouL zHHWIAk7_t0?1eB+3i!GcS+&X~b`%?p5AjyaK_ zc)}QaIfIL0cQa&;bEJZHR$nasSf*$K1Hx|l-EX3!juWT?ur19!TyAne!B7GFPvLL8 zY1+~1V0{Fft7(O>Y#2;!Dy~DHkRdv|4^Y)*rVfp8rrWvKkIarjP=Ep)4wNR};J5e0z4R*F zAzuD+xqj`N50)W3J@F=BA?)QzOS5b4%k=9(!69|6lbv^LU!B%Sk4I*(>}rG^6zA~E zJCa0=WaPh`M?mW|(GVmQ-wJL?&v^YMqsqUN(tm>p$O+|*^p|jnb`m6tFi870A??R8%!P);7^oD=LAFSGl~BB+@7jt6 z(>ya9RH^Nq#05(T<#`UVH^}52?4zM-uBIM)bI*wpn!!@$RBF#67~?m=Qf#MDSe0&v z6i2u`d^(#{1OjHAx^N%m>on1tCCKls7MVgh!^6xDUm9rMCl9KsTWmQn2L*G6-(QWl z-<(u7#lP{FY6iVsHPni8iQpmBk8pksZN-%%nd>@f22QNnRn^PhM)xScP)D46>VD-} z9O98VUM0!d`~Bp^iLo(3sVd3-xET@}Z`4xx`@%F5eJWc|Q;`|-Q_M#Xq((h}5TO^K zl5fyyT#-O%@T1`0nA2?fCy~P#g!@Q{_vZSNvZUWl$@quKykapE#SyWLNCO_{!TH5M zS@Hh0+To*7Y}NHeU?)kKAIASZ;bIBVt@f54vnA(!qaCtiSNWibS*=aN^iJ$BEG~dW zi*~@_yUcG8<~o;18+5B&7?|Q*)ak&BQA5Pc6=TtRZ_V)ug3`<~!}nsduy=E^?0q`G z6-Z;aecR*fqav)1H%1G2Wip7hwPR6&TDM2@LK9uk2*&R6Qql$fj{HzMq#{tQF}8Py z%ial^P5rGTx*$WoT5+!HC|KHJY^WyMEB>okICo~$w4#-c2w9!o!gGx;(tjA2Zwk_5 zBUrFCaYJ~j=+lirfgj1IKy>H ze~rol=M&KFe%5A@bnuXU$(uA55v`b3uDM~nP{xD>n$ zn{qU8*?J;gzNky3%TBmvDp~Ns5S{kc)7?5`N!I@76j~znOPx!Z&FM_+FZIZ>2$8y%6ZX8pE~DfG0uQNK z`Em!Je^nOnc~6M@5UaGrKPd^>%Y;4j1t?tQ zal)6u!QhWsoA0f52|-`7JKdQPR3l=;$6CEZt5|=w>Cd~AqYp+S^sN|^^MFVpjl#@o22~+v&awZ~Ui0b;}%_rUk#Z&5tP~X>3 z2Y`IwGxS7X=*L*U*f?nE3w(zs$?-MCThE^l1roX;CCm~9YT%GAS-4f}Nlt#abMjB4 z>2H8G=z8FYV}Lh;r2rxoKu}LOc~?#^l9b~W=z@5Uc2GeZ zupBOt=&w2NFp^vD2>2uauMjED1FF#2ejQa~)x4G`n;RCXZ=8^CvYH(%BP#1bW8z0I|HUsZvDM(;Vb|0xc_%nm=;+*T~LN z28n7_EPc#q9apWtW#z|h+C4qm!R|Lnjx+G8mi^Kvf&KCXL*Grs@4TekR6BV~yu5v< z(NJKf-~LBgrMGdi14~46n}}X1B5S?j1RgzvCh_q0ZD zpl(d}IYXVlp8Xykknl_~aHOl!3DijsK5zyE2!Jx|ApCaodJgP)zHg~y&Er)N_Kp*0 z*@1YxMI7rD;8=@G5yEz+yQD4A$d08IaejJ3qG*WarI!9(GjO6Se4*BIcw_6ux?`t| zP-y0nnqvM>zI_g3Dc0lykS!vKY-TRCecte&<*qm8kP_DoG^5giT)R1VP|8t>I4-SP#i ziDJ#T1Rr9w&UemyMyvodTYYaFQ*sWm9~6zPDuJrH;K00H!L`bPUQE5>_cR8DT7E;j z(~F#uCBh}_`JUzv{a9Z)ivM61{jXr$Z(_Ae4Ao*Lb$3hRVTx?kx{r%y21!rUXd!fm z#*{uiv#2;N`a_`{p)112gf40}Z1;)K- zWS!?FQEAQP)4Dd}@^kN^XSX8gCx3yD-M)8pRR2*&TONJYT~y+H61K;FhAlgxAu{Z@ zPvwu3PA_bmqEz209`}4ERYHjOVSrtASqY7B_jN zY~9=)oD+$3Re(3^MDWPzawrF6a1 zFb)Wet>{AP{lC(jaZE>zjQ^XxM1K86(Yn+U4@Dai+O5Ne40Kv`U$z(f&VJIgo$fsd zT;1XHS)G|5-ZG%}XGcH&`PL~^=lzimt!Gh%|1Tn3Tmzf(x@oSC;0x91j6CaRrcMLX z&sNR4;~&_Gzue>@cJ2%BIcKyp)$G*9NXejMUCAh?q*ea=0OW!Es*^7vpcajRYcmOD z!cNJ_KvRme!k%We<8sf3nrvWpC2fm#Es$PS634QM`Ux3T?C*oz0t#gv(+nkm=M_2v zi8V_7Ab86ly4ScIp7GXT}SP`Y)Nc(hLU6LY%tMxcOHUMy7BBX_>+$B*nieaPS zM9j~b@gY+lf!sJfqNp+>9B{~6R)TviZaURkEQkOMOIZ+s5-Sj2QsLpA>i*& zWsZYN^_}Hp4Jev&e$!?JEdDXSqM@2+Nc1|uBI;E4y*fa+LHhZ7Fq{UcQ6Z3cB+^a% z0JVovwgX)_1TA_9GgcFcZ$O(sfo8tc_wEKmxG>-D44`|Nk=&B4iioyH6n-NhS41rV z)DZP&K-s8iSC)JQ2tpuovtb1E>i@XRb*}$Xexmp;E3cDX>%v+W#8eg|099#=R_mA_k`@DxSX=gxL@>ovCCSi>$Z8% 
zIT;4EOtBeAQ@|R9IG)ZkU|do0ze}Jn_e4phlb19}Iv#j3+uTI9 z6t@;WHw2p(UUP-TVX!%N`uzahs}oRg%?KHB5`tCqNc-=+CLrYAaAVJ`^WR|g7pLyEvn<}=5jx6IcJEGzlFh&cmPZxp42m!~KugdcfU|x}Z-53B8LbahOvWIxi!>;?+Et~p;|`xKAt(pj+}lObIhSIEv}%Ha|)Ba%Lk64Sin*<1g%LLHn6v@?*A9uPeyW!JcP zPpdFaQjYX1;jX%L)Ttd{C=Wqw&p?{JMRZOZm^m;vfyzN0i{cTWTORV6hdKnNn6!w= zzzb(R)TOu&D`;^Twj^Xi*FZQ_%@Llo>OFnTnFgClHl#!(ih{4FW@Uj$ru;AlK-gaF z==|GP)N#fWxAQbtO}CGOOnlB)-{-eI(Jw(7V4zWt3 zQrAcXk?@BFH2#=p{o1XYb3N1A#(CINEzBTPUZ3B_@`r8bONtV8O?>eH-2*ZaNyUt^ z+DEsr<>nW)Cmzq2Zx}I>f7+E*H>?yH7wEq8t+836ZF>5+x!p=4YTdl>+?OS7B`c#c zPPempgE(m^CTS(nsraqnK*y7i?nq(ZDKVzkZI$dsEb7L`8}&)jd3*7*I`!`fMvu5F z&!cwdbrFvSJ7{0p=)SK$NSo~9{xqNyPvT2oLlAGffBovotE6vR%&XCcm2u(h8!4fR z)(bbKwTklFh34l^9aEb&@z34q$~p3=Zr+sq?7wi)zr(TZ zoS{O+UTGuYnk#B8@pGdo=3T|Z6|wvuII6H9=9y8{Uu+hH-({QkmU=H9i~Q8rvP;L^ zKR6QLS+n9(#T5v4zdRGce=yE3Emo7BMqNp19KGXEF(gT?Zz0Ha`IXvqdW+X^Mp27M zRkNFqnqtkGZekNf6*Fn9Ia7_CgWqJHS<$LLoZFmcM_+Z+?~SGcr-dx1LfGV~LA`9U z%v7>aq#tHtBbJB%|6FUJP;A6LsN4&nY%0|V-dn_SP`873-c(-EEnqEKqM8!PWc$dp z??b(dQDJ+%7?vP8A^ck7e6bN1&#`^v^tI` z%2}kOR>ZY58W#h80=@%tdLEyEtKi^!y5NW%0bOE!yyFOntTCs4V1stk>Peth&kVc7-#Fjqai2ix_UR@?$(PV(kg zxeD9s2MexR?u?T+g+ecK7-oCRRl{$emA?7<_1Wr#cQTU%-D}(Eeg$7!*K|Tx%#1UE zt^2SztJ~DP3*||+$yL;E=TUO&6E!;}+h-HQYi7iu28HGCQLiuURf)T4%D$Rdatu2+ z`#u_afT3^u?M!dMt(?%Cddj=fjEAX@|L1IK(xlEhkwFOICt*F4{S0m6@y;o(P@6d$ z#@L5b6@_OqAunQIN=i7Mjh1=jI} zAgOA`OTaOjkRsn0!GTGKhhsENZ2!=;+nD~L=;l>%uhSL#`BO`ud9!=MDU_oK$=7{H54qKqqFsEN6&wjbykvM7lG8mr)&O-P=Lvo}weD z&$ev`xv1a!BRzZms$~6bVN#0sdQ#|L@Vuf3cSxzzLBgA#9?kXB`-*-w0Pw2{iTtI! zf#QZ~A@pq!Mu9j-@#sC7V*(L4u&6&%5P3Qeln^p$3!u;{z#^k)IX8p%(B+Gu8(JjG z1E@MM$<&{rz!=W$uPQ$8)x7YqKk)s61~bRLDf$9&XqT~Z-<{P-1%mxM_eO*T2Y@BY zGQ$OJ&7>81{roFGdMxH`X^EPOw&FFG{`V6biUg~<`YCHY)%2bw;h2V`U|TL@P0y`C zT;aWR*sx*)LC@1m3-Qvg=rKE2dUvi>D0xg`uME51S@=Y)u5aYdHni+$G^Gk3%-J3# z&Q7a1HkQe{&S;*svNl``%(KaDWjrC5&$z|JAHC+(bTw7JR{2OUe2+;iJLTEh*$>u2 zQ_{7%q}Mu+l4eHd2IUm9N4Ev#`r5XhoL>JFJmX5G{sUDcA@$^Q+LwNrH#K;{tfUfr zS;hJKz|1Xp$cF+{ecqF~U{!@a*Q8h3r&IlG9dYXPv2tKp;96_^CqGGWPRwV)j&O&k zV|IHvtYrx@Q))rS-m2q#sYBDpp9^2c`rSgXMRp5WALUlo+P(7|(hs%aW!}!FMC(IH zBo@e}68SJ5*1&WAK0GM>0PU&6Ea)6oL>94=Q-mg@xH9y%3}uK8fSfu@*o;F^LcKZF zfRN;V>sh97Rv%xiyAT2Frvnu9gpU)$S%k=6+W_uPFqs5lM#QX^nC07FQEF+RM^?B) zIl(~0q{6M}2n1Hb8#8Nvt(;%#p>J2DG~K=lOzSC`bmE#t=_-6i==nm`&jrH7mBeGr zTO?-LW;w%}7To4kfx6QY{h99(?qvcOI-}WbG_~l|>;aM-hVb+ojf~{MZkICeE!l(J zhF@M7o!tyAmlJy*t5>RFh4w90(g7pfk|-uAMgVa3^@Fgo|IG*RuL*g3V>!#cC69kH zJW}On($-iwi?Q`nIKyDz`dexVFK4sQXRO1pDaLjN<6b&h#NKDEB-??{#{%Y>dRc;6 zALrVW4tw`Q7tQlPAy=L=d8KOKXko-rM(R?=ZPVl>bV2eXNBf!Db6qCs#X8@QMCCuG zO1A~rcVyw1TAx2g#aB_+Y*DGQ{9&`{+~~%PI}yURVNt}dYAToPa^+>Hp^`LE*TtD6 zzAiewW7zg|Tt(HN_I37A9Z`cdTV=Rc7T(>RG$nzevIGlVYqbN0pL&MwwS~f|zEg4^ zPRCuE3frx)LJpw-F#b99fhvbmI0aj(9uShwlMg+kxcu^~oStFpT3x)ayMF%Bf+v@t# zX72{(=gQuRFzczyQw`cEGiay?#>9W3N`itg0JIx_kT^t ziNFL-gf2hute#a?;-`(dW~Bz>)8oWh=2Jxu`(z@&gcf4lho$HXADVZMH*QRfTBjvT zurH|@xr)j}Ki6i9wT$0y`j)nWD?V&_(dYNp)Fa+> zo?cVpu3^#YrP?-1M^5*M;{-2z)N60IBiAg+CEc~=^mks&)81H3Jmx2twJ(alDA>@{ zH;`mgur0qSSr^lE+__li)OT)fvs?C2?ZBo;#uGHgc1$Z2r4Md0Sce_|L1`DL$y|^7 zozyggvOo^f0opXj3mXI5M$CuK+X_umYi=l@6vano{N}=&E=+y*2p(VOxQ>k|j^H@D+;YKJ9jIqVsKEYlALE91SuarSeGUM&1rY1t-=wW&3#DE%b)>lg&Epw2Oy9- zK#!?ZiAoiEgHs@wx<_vt$pzZe}RKaA87vvON5=*RNz359hEYX zF<6Z=ior@ARdX+}7$+j3YN*JZ$X`l9%PaktG(j>1SBB%TMe0V%q1L_;(b%55Y+M+S z?>~>X%2;Q|O?B1LM>Cc5i8W$*`JLpCo=N&B9^17)2g|rzVa?a4c}FjAqTQgb{8;PA zPzq6n&p-_0PbfmRBDr3EphfXHu|D1LSyTOdVAO)*oY$YlP()A1L#3NdC!FDX`z-MW zqqI^6#6Dz4zANI7KUV`aEH+H%XpW-T>dBrZh(r3y zhI>!0o!S{iMC=f~OU>_I$tPw^hT{ZsglHbqtSWY#0hu4!(V2UDzkt2Tx9rK 
zM^`Jc?z7bUX{-Le*{zUqddi|LZ@PG{nn=?``B?FGbH-scW7V@a`f8^^huSf(E+U3* z>1)!hl4}GeTrQKNq1myg%y0B94x0<5j;o_=xQglY@uu`Fc^lZF&vBT4*%>5@Zu1k2 zHZ)YMH@>V4{tzz&lD8O)1h|l7cOok~EfFmg1V+=-E5`8G;Kch^y*Qp%Y}GANqxQ!E z*-OEEqLNscN_VuqV;zbggWM9!6_yE zkBus}fa;h_9n+0gS*i60k*`H%%h#44{pi7xSN{AYiPaYEF?m%-)8iYVv64G#aEL$n zXDQ<_@4X~p6WPO3aUpwUPW^zx^D5J zxqMym&h2Ya0>Cm_cHs?m(HyGrhdlQ^TGU38els9}@L+_wM%p$~o-`%Zf;iuf=|d}ph~nQ<<2jU(|~pp++#G$mE+6i$^G z(|!!3+~Hae$s_9HOkknbw~Z%cTTa@HWDUage*%UU@>!4FT8-AVq>o3_ZirPoiqF28 zm{5*OH(x(hJ>EiG0LAVmHt9s6RRE-NDB!ufYIs;1{hQZQS?(@qqwZtLu27$J<{!B> zZ9iS|;tmf)lM>BASa5&Rp%XG@`GYiyr3Bv1ua#gGJrI#D28@A%Ob>u+afoHqfnAp0 zWp{Qm@ilwG;cF;HER_%nCj1e|&!GQG+pL6?3$Svd@x=YSI;y#yQtL53;SNitNZm+p zR2dgiZ5259_T>U&<#3=u%kmG+lP_=+2 zTK=#rHTr3N)JQ_dvn?F)#UoSCv|c$?|BX8Wo`~Ft%h9e1%RR+;-}8-Crmb8NF!RM;BIOHb=)um-%&DWRp!R!Vpf z;ZpE~`8;bs2;SP#@)&Hwn2&`*haKhy=Ra8PWiYHa4O4h0& zSC^l^cL0hFbz2Rh?-H0tg~xCaymAlJ;DixE#G>hDAL-kA&tgukKBH1B^%11s3LU_N z9ayy9yANvp{CRuhE_{7FzR+Z~{Z3ti&hpA&{DCT&b!p|MjH=u!OX;qpa>F`nT!S|C zl06$ijb6rXcfRGYD)8=NB#HT373W?zO?{%ii+_DjHvK?o*U_WquJaVbD{=NIUKv-p z2V>dNfm8>%Nx}l}bnkek3uS3tv6{K@quqRXmEIyP<2>kmpFe#firSoZyCug>-aPmT zqgC+_p!wY0@_*IYgzGOa9@`#k3zeRaT+iY$FXyR%=MGCsCgwbVxX((9*9AL?J({5D z10k>b5P?J45a~g$_((H|>>BFOsJoxBG1=Ln0(FOb(v1fatD6_tA1M-NnBCY{_E7d~ z1!bpp)S?v_`_ewYL_xz0^g$vFKO7(T3wjhSHR~`r^nUE+ZwY_Q9A+58W14nG&oZ&C z+U*BGlee{MJ3G3l>5$Ylq~FfBxiGPiHQC-4plMD zQo=f#x06}11Pw@gSDZS;c?n3Kq!%8G!FcQQ6Sn5KsKgg7S%U%}zG;QT|E|7Zn{(pR2CV(>vERZ;o_u694$H2(Q~Y<*FZ`TwL#ZcUCaq%t#o+ zbaAXZRk#R$cgEu*a@H}Jl>FJE?;^etwUgsI%)vjW?78voZ{u$#k4`4V_pu2eIV~L- z9M*^WtH!rGSVsMiE=R>g&kL)o1zwev?vq+RI~LRNuwl_wDyP5sGe*?S_s}_gShC=3 zeW1DC-%U!}uyT_{-i{Xk^D4g%X8NT_70oxhKU7+}*w)y^*nAa zo_ly_qlLMvAM1=<3#JA{s0mfn3cxR4xCgY5p*y@*?cnY6oPV6qpy^wb{r}iH%cv~7 zZfzR~C|wc)H=Pn9NXJcwbhikIbazQN+;oE=E!`yuNFyNK-Q6AE!smVVyZ0F1F9-fY zuj^WC&U2pgI4yjL81vjq9bP?o_>Wk?`1v@@hp-AMTwAaT+T<)0He6$=yKVqtOUe|V zU7D7e$s=ZJEH(K)YY*zHcSFZc%i93Go2}Y6kKUqXu66U3^D3Mqg3n|tYxjyaE0X4~ zD1I_+sg-0KXw?^>k9UF5QMPZ(hEET(&}AAO(^FCl@VagK>x@~j);&e56hHV)*Q zcT@tHV=<#R^H!VFr`nDwSc(svka_1@xWzWxma7Dynb_^Kt}`DE!Cy(-Hu2pTUT)t_ zSi1A62HmY+|8h=Kj^RKT7?E<)wQ8v?>Gan>Bb}oWQXg>z(>BLjeaKDiT98z@#7dYj z{Kd8}b2i}wbi^7GD0v@Vwt5RyUq>O#i?_Z4qs$ou6t_|FhO{TMv5S z3V8o*DIlkR=4SAAn-=|FX+fM&N!GxSQ#)Y%83?udbro7R6Y!>3`FkAjYl(raCI;1K zzcxTq6u({FIh1yz;BVCNY9L+R46!ta!WQq>HHfI6;ZC1+B&&{71DU!1#FLU(qKgk6 z#7RD5MQTpIdnZqSqoo7cE@71_Z!r3O#E7kzrRx!gMx5fl(S$#3Xup0`E=x~C+!()v z$)0JcYAq2+Qv70$aq6~8-OymSO@c@%@|$NpVWjN69p{o7+TD4ml9HSC3dERTWT6X7 z3~~yV)@Xt%l}iL@R-{7KCcCBEmN|0D-s$UOKRY*0!)$T8BEd!>UjODU{-cx49q;pS zZaqJnRmvpeFReimF9vr#@%=yk7@A z+~oJ4NpSP|e?f2e7-Ch$>c?wJIbCnq^eBBV%X?<~+|w+4OzG=y#~cC2Rsu^9sOc{N z6?v4)gX%ICL4E^~VFv)0Uh9)(H*z-vKjJl_8beb-nL6itdRICmKyv6}up=%$8ovcViVCm1!KT z9u?(jsst6z6I8mpVQ{R;5e?_41#@$I{&MIS^l%JA%x&k49N#{l*DafSL13}$qLuSR zUK846^CSG-7Fgwf?)Sd^F-lzvo2{r=2lvhE{hv{kQU*W2JVtF;u($G!A(wT4f4uyh z<*0lBPXHZ;Cnb9U?b92Ptpf|!0ejjdI=seUhT(dym4kS{NYhD@$Agd6*YOsE72=N( zH5+XWqBgB4@b)=(TQ_no$<5E!-L>;Z#Ch~*UbUC7ryT;j443aH@VzsgFFrbD&XtQ# zS9NM?X@e*tboM>}Ao}E0&(->bE_n&R?6ky)SaROeC0Jw1I$^i&I+4%Ca??pHPmCAy zL#kh@!X?vHeqp=r^A8;fXTyd<4y!}MSvmgrygehf1My{(Qy(awQlmyHWw&lYA~~Tlw^o5M z%_qb)XQLSLt!Nj~BsW?IwS^w+$;JnLA3Ds=PO0!cvkYUIpop`IRNr4~DR!BN^+l6# zyvlr&mi*$60S}{drjtXtiKT<%&IcO7PB{*2rnSj*1d zT^{fy+djRHwu2xb1+n0owALHD^z9nn;37|V$9Fd#KbzWNT6x0jHs%CAahfi~g`IOa z2q`&fv=yqW5s#@V#$I@-1I*pokV9U+m`GQ%s3t%>e=8|6slaZYq~<%Ep!H?1t_MB9 z0N7L8anc*lI5kAEU9a36H>=uYR;2Bh8H4GDA!2xo2mj%Uzx zP{M%;aU|h02^MAIc@L6QZV93BuEgl(_1Y}&@NeZfkdZw}gVh5Gkf{5~ns2gSss4NK z4sQ5ASW10YY=_>LOFss9nPt&C8a#A$4`xw{g#v8iSHg^5QD(Bs&OA$3JtN)WdDHP` 
zg{)&{^|@5PVw&|&Cu%wPA)|hwFU|^7l$6U)y#XW#;Qbnxy4a5taY^}|;}InYEmIJy z@GX4|z;=x5DhRo-hJ#Sh58&z#f_>*ff}paQc;|$wM=f!_(3j&NWM-B;jtj{IKaF%w z6;X2sKy~bFUqdt#{^$JyuNjQt$!5WVI1F^InUN%gCFuQ(Ne>{Moz%fmGyhHq=c|l;1M@g#GGW&f34-%+8hb8;vqM&E%s?!g{5QXNjhAh3>$; z?yc~W?F|Jtghyb7=W-?5;_mpg-85v%YO2zBkI3)WiqH$hCAA_j1K#sN9%IhB?L+CC z>?^8cn+uX+`Y}>Olr&xcLh3S@Zdr7-Yk;A(TyX6zyD$H=Uv~2t+P^pKPxLO{C(S$2 zXp*ro+*8%Tk9t+gc`1d9`ohF0-0ATF%?6Hd?J)}J3Fd4VvM9uMcY3b2uW}U`cJCu| zzN2eha8KH-bia=%KZC?a)pDQ>VQHzQzkd}Ojnret*!B(5LMOuEI6`aQo4*Jniru_O z>pGk4&T)2T$(gKdVH6Aac~okKwtdLo*1x*KOm>(a{eo;oo4}zshu9O}f$_W05rVnx z!NS3v+AN=^6U1l$$PCRXm7|iiAg|!xCJh_yo)%H&H;uh`-P$L2XFBx=Q%GHNu*zl7LRUpw)WhC7Q%6uwD#m5DaG1m4Ac><;u>HYrc2Q4jq`Hn&2!stQJ9fN{6>EK-xmRq%&JAGTzS+neXH(6!n<_C z!Hdq7_G98WaT&w(bv26$`&#@9rCPbfE(%ih*2B;B07N3H4c>2k9y{ZPju)8R@4uDg?g^dPkFGQyQOPn> z;P#8G4Q?edceQjUdGn+FjCXrBhl%8dR`~SqHH%FGM{?qEv50^VQl`Q3or=fZrO`Ux z9@zUrx^)`Yc9a0r##9#Mb;nwLz8Y;2bHN0;YP{f=N{WSI4wxa+tFM-qDO$^+6TC(J zGpgBJY1Kmt{Jr2= z25jZh(87q9lP@bU7o&;A(=8BVK*vxQ7f&ef$o3d=e-Uu4{*1J5KyhTfkp3D(adv|! zI?{M;hs3@#BJHlVoqaRT|Go!bOG&iSlG_FJ4@Di-4PmDwHnGSFt!<3@Mjv@s`L`{ps3ft} zUI?`}s^@%WJytr3X33x~XCs4(RR7p4AupDmvM}FB_A z!c+b9_E$r-WJ9QJuclOUbVT){@cbsHnWocS-&ilXTW$?;fMfj?R(&eo#CZeM4Vz4S5psjcw-P#_%%30A5a?2k2PamiTZ>kmR6BtKcBI&^MWlsAoC9x}L48|1JPzu}i(xD7=L^V&Y zh0VJz-1!1nAIC4h$%|_{*&$W%2{p(?z3CJ>dQ$_Y?&JAkmx1Ie-*f~OU5;%l`S~KN zKePn7yO85jzdj0p;;MD;kl45N!_P^m4qkN~+ zB{b2GH<|vqE-n?HeDdq5SlZ1g4K4KUVq0G8996WC!JiHFo4do2)BEMI=0J|rTI$h1 zl)Y?hJjhV;+03o$#RF@H3b1vVJh+YJB5sHiJ2XgJSzZu6G!X0%e^bd}ORgXvoV?SXVt4A{7Jb&*@_irvdgLw75xrvE9x9@=x82!V8rEBE*=6b;2+Y%fn#1JelJ++IHyTG2qU zSEqOCRK1e=BO4%&MEUeOm%8`HSJ)YP+(}uepILl^-l+QNI)K&2Im^^d-dE&b7Qj^a zTcy;qF;}1L_ZO$K=`mX(!Mzi3k>7;QvwBzd&bzJ`4dKz^Sj`9Jn}3=B44sqc9>_o| zo4e=Begu&Ddu&_P8FhT@@uIR9p-$K9TNa zZJ$o)okLdAP&tRZAT+9S4b$s$@5O}o#M$?ejKK)8ucPVUSFg&?e_D&^?Oqes<`Dv`6V-BOL_8QD zb0AbZeq=YB7%VbY0*4e#a_u6@wdFBQF~YhDw)yQBqk9#MUw&*glKBML*~UJr3WoX; zll4^v#mm`xT<6%43U66&Hqhp!Vp(>xKL1fRZxcf{5a9ANlM=Y$M&|W-It>et;ba?OYC-+Ztd?>Oo<7)6H%SZ_)p2jmm>8(NB zog`PGnmopsgJQ|47Nr}>VA2NLop@Le=CCkl`-R3iv|PA5@cuLRmEU+@on-fr#`FyY zMgA&Ghbb|Q#ykemyQ_dOO*g-t_j)LG8Pa(ppL4pYS|L`byLz-yvM5VC>3Hy=K6IN? 
zd@S@jS&2bi$znVN4WcZ;$N6VhCo%Ka>}R6!khY|(*%;N+DSWPubmA%cc}ncqPgl2# z_AYllrHzL?GcWVYGoK$ZQ6{F3SC70GkC)M`?)7trhhP-EQ7FqBSY1pYj=vK@@%=Os z&=j^L|5i=(jV=~Xf9|!cX@kXx(z(@j{pOj4^gV|gp>bQ|X*Ja?0iOZkJvLYHe*b<| zJzo+XadF(`qP{dVbN3*7FehX0md!4v?6{4*XI#ck|SmF|G@~d#%BL`hj^KmSq|LmRsYkKPmzc#z3obm*g%t3fj zlKf9(BCe4l9KVkp+#Kp{d7#tYo693VN{_`#%;wKk)C@`)6FnPcqUT1t`MPcFPKHC@ zuY9VXiJ6Tu64@j>%qYQCYfGzY@cqti|Lbf%L%A$Fw~>(>lPbgg>7a%IIA0?% zzU*B8w)DW#5R0kWBQfm=C)yrs_1Wnyx=K8vxh$^VqH_kTYDAoHUcj{gk(87SR=l{p zWU-+qPhV|&E_lImzc*1P1VX*Gs|%rcA&2r^U@0B06&=VkMz2Q=G~nCW4JNA1r?oVn zL&E+9IH4CL&T+`#I;n{J+Ka5R1H_rw9cUN3G1}3yLpL);LI^KA2e|o)7uOk0hccVq zy@mU?BJ#pQ@}lK75MQ^E+@P$7*(5shL3_4!v$RtlX`YrkqF7F4tj~eE$SJ`jgmeloZ$*d zy1kL)7+x|T%s`jo6^!+xHmJA%#{2l=Ds!=mpEz%uNQASU#3IElBJDWwbf{`(|F|-C zXQ2j8>g^?=>ev`_ikz!{iem4~JvMnCp?YtFEZ*|ER5ux=jGb@r#>l?;8)*7|~kL{m`oUzm?TZ=p$iWuoWBOjP)ERfB+r`5{!rJ z_p~Qq1!M~ei!H(|iRvVkdS2h+Yh^|jT=vIlBghWjxzP;g9$$OCIE@-&OPN)%@Ax)> zMQhWeL+hS;$Dw(f?a`p7lNA~_o!$J@e0&~;_U>iAO8aWX{lk%UPyVPyMZWFm9pEL# zF=(#wDG9W{b3HT5weSfg8?mwogV$X;_T!!JDLSzRs*&f^I{i zsQE+NAN>2gT+q7Ox?YMN(dKy$HaER8uIIAxT6V0S7Emv&y_LAeDkkWuln)+`n2L{Q zZPR7>jos#Vi?G709Lg7|Eq}{c=D!v+T@jJ`yG5B-a2Whm=neNA|qhli)iqE8> zni&Z7#In^d8YGg8;;FF5p~>hxh|>WZ4fH!RGQ9l-@72z3Jn;w*Zss*iGyc<$S0gSE zinSjAM6BP@5*`HfWQ~B6bE#XDv5hDh{Dl_$((1=wF#O=d+aA zy3JdRD6ht}KFJs}Hgx74dv4-IO3!ciAYJLn)h&{pUf$-4BO*+{fwS>`Wa{dd+t*V8 zPyM3!0IqIfyQ{D;l^^qQ7vI=#{ojKM*^->6IC=!Ygt|P|6f&;iPE!xK5WC_KY#rtA zDzRWsg>wjrP5ZTfLBzXnTbgw?*`dJm9O+Ru@(k52<*~Dfcg!+pNG{aBjBgQO!|cL| zRxk&wpsZa#z)92du+mu4cofV7f~zvDQbX6%cwG&^QaOmSxbmLhHLdt}s`ml$7lxOG z&&us>Kvt2~!)^fQbV>zDi0w?XikNzGOOYll9d7r<$i#{k%e)^lC;JQ%J@kh=p zR{KHh6}Re*H@*wFN#Rv*($9uYJd;=ihYe;TOYDttcllKnIl9li(zu0&Ba+JBDkE@Y ze_)1q(#SqfL!3@8>|FGGe=j>AubT1Z*yCkVMrHUzNgWjYg&IR{H)BY%)+*xx7DdWo zr33iJt3x13Z?Mko$G6?sILF}EOvSysw-iBF^in~X+^NG5F$TRUM9%fyYbGg)*n#uz zq^ewT$zC>vOzWqS<>n_W@=q_+g656-WQ5_I*vsF_rX1W|u#J*LId#YxK&xRqbt3@7 z@l8<72BK6+6Xe}-Ev}xXzJ=zW>TBTLc-W(H6iNNr_u@2e+A6jw&Aa?r@+*tlmLluU zdQ+I&L%6TmL@Bhm@kmk(&SmH0E>;3&&d~!!8_;{Y{K^bKKGC|dkW3g0McVgNZpZpV zkywPs3Bz)cDyakBATwv^$uaD#bnX{I#6T~Pi|5Vdao~+(_;uIgS*47rbnd<4y3wsu70?< zvt*#3j{k@_@}5Dm9B}p_Qn$bL&0WP&rqn6f-TSsXHuw(XwRZJJRix`#x9)(ez?f(k z#^DI(c~{E7Jsm@|WWy=)rBD4YdHms3sneu;`EkA!w~;D0r==9v)wX+H^5bC2p5EX| z^{HRvS6QAxw^=J|rp;#YVoZdu8~8mnGzdd1%o3^MQ%9i0qwjdKnuid&@jM%T*MHK2 zksPkuR{j3|hF9md7lgT$9Eh7AVN1)Jdb280YY6)+TsP$4vSxhgrEe=cY^1^tydV5^ z9al&@V>RIvyEJhSGD-NIOMnq=Jgi-G!LZZNlWNIlSbv(F=Bf++r2p_X3Bm3v6Ff6LB3Zf*8PC?GIW0r+3*oJD(Fs zdZWg=2ifbn#P~C|8>ezYFv`SpI^^%7Mjt*{17-2G z?^oM?z+ZuZdkiTLiTy#iAoh_5rppmvgnM3iZ+H@*o&2uwzEP&kp=qKz>B+clIVJX5^Zl)5BD-Z85@`^@;X_kv`fV+X_<-!@2xQT1 zjio*19&`S&nUg$I5i{NN?H%Q>%9zyGOnYEapD*rn0 zqyD8fU-_Y<#NkJ_?8NHR$`j#ni@E)wLNLT06Vb5Ny@D zev=|InR8cPX4hv!I-5))rfje^kze^-!*=HFtNi;>wC?b|CA!#<<+LB{YCS1RD#~g> zLt2*Ex}3eDlmzje`hDf5E;^x`hda6sw@kj{vO|1qoGEf6hZa!^Z0#=#M3FisWrnYk zOoo__McSM=+jpMKS_~Szq=pOK2*HA>g#A6for4~s_GU}P*AH6x(zIa=)JqpKfEXsG zb?h8o4pVsyoY%|O(HaEa74cnfdt>YJ>8>sInZ6m_=Jh0-5_KzxUi*!0x>MTNWiVXk zh#=utmGQs> z&P#C*o;{?Xz(}|HXhN#z#H@F7|%L7I0 z9a9CIY-2t`A`hVo&mo#$`-J4(!Tk_01(8l&c=4$+gkc-OW)Ezo50$pxrfi|#kcU~b zp1zH9qj-WLovx^$AUGhxx+;6Z6Z&jFkUh=wA4}AW7f-G#o|66S`>$pJ4hq1WIrz?= z1GDR@*ZZsV1bun51t(ZvhKg+f*Kcgek@EvC<$kwUC&p z<*u~T`n}O*mn#?D(wTt2&vBoZVpTf*&|Kh8nT&fBk?| z=>l9oEzYU(4=qn{{jrqwRUC4fmH|T`1MnwtGs<#PM=+TFZioS#S25(|*%6Wr$IHwJ zmOG!2H~JkMge_6d+|;&jm*Lhy=)_3AJS8wX3}c_e zQ=nPw>kJC*@i}cH;SbM-R)2+SvBJiJXK3<&f{|(QLp->k)MMl{c5R+Mo=qBN{;JXv z%agwS`d<6Y%`RxCNKKLJC8tP|nQ5^*zq-VLX61~TG|kmre!d@7g}1)|cyS3O;DW-I zP`!PO_*Hzwy_pjMgo8XI4++!H9OU{C>G@uvJtJBn8(<~r%6d{@hpa>21 z#^lbfCmV} 
za+~ndSXcmhL`tq_m9jmz`_&10Q;}wk!z!R`Zdv(|7)Uo3?yyU4y`4u#jlK-O&fKEF zHfb~ij*6oJ&%0raDi}v7{Q$y%piG!E-ZSd$+c#-EZD63)jq#_GwOi@#NI%&&zXgf%9&yGXWP|`iS zE6h={yjYqt$WZuGU)0j6XN-ZiyOu#DCfa@O?I-hDzNEdO5eY>@>6enyP#PsSn~Z;rOHiwPs(< zFwutR%rU6lP6NcuXob9Qe64b_+ZQHw-);b(95y1H6;&o2nH}ki zC)k8@`ST5qmbs{zqQf-Ns>GwBc4+BlXiwwqJR;{@->Il|WAmyAx@QXgS z5=dkS)zn17eB?cZMF1u9b-!Y@S`gs8zd25Q(sll4R)A=lKvQmHxG0j7tq-F2BKjt& zwJ`pNHkK(RMk_XgleKgwI?@5mhkg(%OIa`dS2@koeu`XlxLayM6ia z>ERv|@DLR6yrWvlJyX;CSG)R0kgy5(g5h4*HX!-eN@?{T<7W_3{yXa>VX+fvz#wps22yi}8|zmdfic-Yk+xqCLMPR3%cU&$^N2tb1S9@+tEg{tLgua3tne*D z&MDas;7!{b+kQV$rKQfqGk(h*7$)g9_(Y3-wemRj5`>7D+$>W{46#ip^<+dsm9y>Q z3bK>FN39K+vZ3{x3vQIDd{r+*Eaf8@;N+u8L#Re4HzQ?rs_r`?ejEtpV?BU(Zcq5i zyiB0K+}Zs-+#NpUcV^>4q774E}$AvP-@{ zOUjCtlrog{A0GP`QgnVetIr@`be(kdizC_D{yb2^sj*P0vZx&_4uTjb7BW2u*)5--`_zEQ+ ziutuQNjfF@n|jiYCM=n<(7?%$>R1eG$?7e67(b~I4~Z=%e?y@F$?+5uy>$=t9ySIV z1#~A)j->kuweSCav;Y3& z<@9jjHo_`h{{9UIJfc8M88NX5N+`@J?*j229^adslL=Yu9KaRIIK5R;g;1ar;9wKTXKvpVi)Cxy@J}BO>TAH z8_NAVf7SK8RQU7CfXv>H9-$8Mi){DN+t!0c!htC{NHLCtr^-;xZ)@U;3F)}o&9@-)A(fW|Ru!tXb{rGx_3RL&LB0Yb) z1hl7!Py0IICNbRIJ{&ysA?HNiu*;`@uS^4~0jtp|IpI7}SB7&HI~S%e<0JlHXPvon z;Z#P=l;JxaQJhVR-qCkMZV^K)OS+$UB8X`dRA=`q5TlrDO|Xk9TwbDFhS0XE+0o&A zM3y0|(v3+Zrkk6h$Y8byj<&)*snWPYP)26Bv@2VYLOOSDmm9TbER*ZayMa9GwkW0i zlo1I98A~dW5au}o9d%RM@Apw6Z4AA%`1ok|_tBU^C0*PF1a7JhmPxen7NgAhIL%`Y zY1y3hp9?&M?{3^4StCq}ElqlbuHvs6u@AiCkH5HN1sV5ZGSQHuWLW9HL%{#c0{{I( zfCrs9lZlu{gXo{aiX2WY`b8^JC*xK%eQ~W|F$jRKxf^&>zbZePiQWPYDuF2z55nW*8KvTitDNH%#kf+MH7`MFNtv{ zPwKG~^CLM=SJY=?HxzS91cw|*`~O@?XJ=dYK=X$@f?|rJ8C@OsU?UK^u14q4QRig& zklP3Iaw550wVC?nMA^638kJp)y@t-uG%p%M`_&B%19 zFXgAc%g?!hkCnj)#T1q5@P9EG{5Q9uXk zfUyShMl@JJT;Dojf5_O9lP*YG=>yckqNjcpm~DE}8?^v#H_REq8BmU{{QUV7m}%Y_ zm)*v~RM;}lTWTs9SzVBs^AUX%_590%FLT!Y$m7L;ij?}sOte)LR7*s$Ew4~vW$0uo z_gg^=!52%9#Ed!Ag^sLcM?7G0TI3~)VoOCj{ObDQck-wVMWow&GmPDuqqIf6Pc0NW zEZt{6mX}#P=QJGG$)Cu`OHs0HjwvWrIg8QE*f!ja2TYJ6VaVc)%VViI{9P3VHEy!0 zy2gf**>3c%XHQ^6eQz4aVTz3*zTCoWVqM*Rp`3x2h-Bf;FZ7EIP``X$ob4}?N3ccP z;>O%A`ZE_3gUybxmgS)o;5#mS`u;vI-sR$gl(@ib?adlbAWw5ReeD+vh~%IJ9|7qO z-EG%a8ogn5Oyqq}b;S4X4IEPg<7nmbuTLb`ZWZ|5OWb}~__%5DbD3Q>j zQ;`*7iq~bJA8w9ZMab>-$JFhb%IYqc_+*kdU|%j9XFB!WxJ*i-TvT;+SguGxUPeNP zgpz(IKJ|*6YVoHBt+rcgwQa0m=vG--j`z7X*#KD~SM)V}2z%ahE_-~E&;1d7)c3>02VUEaOneJ404_lb#|9x$! z;hu0jwMNzFcx1h{=C>FstZoZDV%)i$Jy;{8=oYgLj*gPPV2gcqVlj2NdijHHv&RqH zrDb$?P?V+&bBUj51E_s?-m(4l=})=q(ztkC>Y!ySAtH&qp8$Sb=RrI&=%w7jLm5Ee zMvD}W3d05YoH+PJD~o{Qo?7wpwGJ59aVUQPkoPT=Up+Y#`eOAXZvc^f^SKircI|d? 
z17UvpWGYGVgU;~)h#;>dktNwtzpt>IZ&Hu@UD$tc?2CB|0wH34rLXb5{_3HmMw4Ls zv2PjhJY%Wt_zq2BfZ&-rRAD5`p1YD&TzJGH=Jni`eOJ`0i>IaJD(e9bnNM2rLYD_h zwZXs`SG8XqhtYOQ9*Shr)oS}va(!yPhpho?#h!9`?jqkQ@Wbx{UYMqR)V6rKqQD}@YkEMrm{u1;eY+2Ke%{!?Ku z7O;ckPICofJbBNLk*m0mIr!6=JzmZNKnf`(eNS&6UpTE?YE1`t`T|WN0-kGBE7Kk7 zZU5f;|Eb(Vzc@d!MNY)|dmxbPqYLCm#+I+DRl)^uzVLY%jD3uu3o+Xl-Zip7lEg(R=@6U z76WWz{g;= zE%0E8*j1rjsVZ;S`Egbcp&9PJ-DbOnS{d3M^*xntbQCTZ%5)!LQPS1I#iqHc9k9sT%p zL06WR8t^Su{N~MOe6{LBC-cMW_bO}elIXwF6&z{!n<`Hh_#`T|bA_C(J z+%+JD8nnX25G@h5!=yNBfSDKDMtE(R(KpFIx)S)o88a-r`dq=Ou*&3l8DcB5{Fzc1 z;`=B#nN!l@j7NH9A#V2wBsEC(P>?)uNA-%<{$HQ&kqT=i=jq~JU{1}eGi1{NMBlnvaGp2J-}d{Jt=>GlvhD`qa@&IHxEnH> z%l}`+?FjBM`PC8Jr-q=vXaBROk4NDDHn2du?GHkE?5BU75kMSO(a-f=Aku9y&=uhV zXis}1Ug+#F%efY5e#h_L2iv-mQqyl5?0;k#gn1{v2HYJv1qQa2#tyxEKQ6g!Q+GIUwz>X zw#Tuv7_zJsAx+hrFrIU?NIzXFiQ{I%q1a+!_)zh9Xi-{>#X8S78H3ZP`O82|_X=U+ z9eRcRkA_&dpNlDzERafu!oLZSFqqdwxoxk2&|>R4AQMyay4Z2w>*J(9+UcXhYA$!U;E9Ly4vASxvK|EE*V=f7}Dq8IeS{hBbO2y64P2 z$SQG!STn0e&QK#hpO;eT)}=6>e=I@cA|CW3mMeXC2o1%h-l~&^nY8#K@$p#bV@>cm z`yv>+^~BNBOKY-%saMcYt-5}V^y>n+5A_8hme|gPbzS)OI-`#H*@7wxDjP1jg;+FE z1)H4G=O%Q7@YS#%(4#BxTcFv8qgNNI8voUD{-+}RpF&^-8@vZL!3nGE52fK(7}*yM z`W*YtLQfEAk% zFa@pk&N=vjf>(oz7o7YRhrU`Yb@MyL?fQ~}%Ti({c1FWZ9v7_ne$j+;u&yEIbB3O> zvUQgAwXn|EDG}s74ySYAn(J$|*j=%(pQyht3v=p{>YqtU^ET=V)zBt|Q++}+DwTeR zcp$@OJ_Ter^nG;VIYeuTuC_u%Hf-a(PZ6>5?7~az^)IR5a0|4sJqpoY)Ws@KyCruC z@sl{q!2=UKrawA&oI!fC=S7@Vavvv6w~p&BTg`wNO4ame92)Dd{d{VETKhC?QoO?m zJ-@{Jji>G(!$00-5?kqM3!VKaXUeyWiZNGI`GgIYMO@=!a9x@gOp+NYX3w;Y3%(!b zAE_05rYmXTik@}89{4{4rT?X)W{Uj`Y}zO*Px+^KGl7%C6gYCG#P6SZ2;z~cT2i+3 z!+%7$7p^9yN&5W!hK#m7CtqVjMSH)XUJ)OPzBEAv>1og|fPX)DkP|!R@r{LB`*ts? z$3|l`V+)60S;(>2p@0AKc)b_!b%jhnwg*G#$S{;x-m(>M@G7_+R}C#SHupBWYz2Rr z*QTr4P&*x}>ZU4K%G(QLmn1@A*VsAi6se}KtJyy%+m;Meia6b)w&84j3BXJ5gA!$x zt4@5#+mh+Vru`u?!Q=1ae#zXj-YsDNjxNZL1FjIL7VD62=4+AR{G`5ep-0&dnZTRa zwlK#&C>sAw>CL9BXB4Q=9SQSM*Qmwm1c^TSu8Gu15fz36*o+UDM)nvvhC62fysFIe z_jt16^@q!y4W+@Ou?M-S&ss><5u+u=htw-R)j15ai#I>yGZFD0u3!z-wrSeP4>253 zPf}SoY$z2XLhJ!@pYA?F; zy2NH(vx_t_*L65nX@{(YK)P5hlz^@0cIFWYbJgjkK;(j$h+A^=B>~xJ9 zuzw?g`g)pFxq~T|r9ej_FUM43^u2r_yyPEWV#!ddO$|2`A_G9BErt`Su*Mu>^r`K& zS`)-xU^Tk^DGp!8O!(1w8sOVSrn4ryx3g_#jSRJSg>1ZDZrnFSq(t+`D|_d@RHu+= zpNkZh<%y|PGWgvrBKjIvJ|lewdlS`x(G;$Sw#+tODL*RnmkcdfJPfBOxU!F>)9L-a zTN0DzPq=f!?G-#*X`Jo3H(&m?oSE`L|FmjfB#Z6goW0?*p)bulw(1iDn9?c9=Pi_#O^Yo;L$tSA z&hm3ncSo;KQZB1KKEl@Gew%_k>{qzhZu$B7M?jLF#&8FV&o&TWE8@QVS5Z~KbrEjh z4hELlN}CEcqj$-%BMqFa$l7+6aeZ>q^N%k1vu}#+DL5^Mf{bqnng73q7Bk#ODENBS z(NggYk?Hg* zFrxL?sFjeRtfQz&g-t&+;k(l>&DrJ^0*?rJH=%hWlCg3pKeCox{OQtEq&mZGR|Q(( z5-(34J1Yr6=6+~8H&Ne{hS(#E5^~YBp+OfYw(H{@1CJWH#;#`xLi19bN=m^fGg+gP z)cVBJ*L??!8E!zpWVqb%KEl2z@Oi#%53B2GHbpn-Gwd^^7;!|mhiR1&>=tM@R4by& zfB@#-VTFc033GXLQ=dmwDNk1U%3SB$Z1qkJIm8Kk%fx zb!FpBTxH_#&nBr%r_Eztncft-&wgDD3E)QD9-pUw3jaE4^gGs!7`lYSs}nbCW4lA+ za{fgaw!{p*^mk%~U}pKV^GT-m5S(~2l;AGW$d1Fk_?ON;*>Z z1=h7lFkg0Np{sB`3KqU&lD|nIi9$|RnyL!Z<0H|@ck$(5>6&+ula!UjhfLj{AU_iG z{e0JrZg7_rcdyi}_Nww>%$nfA8wZ-t9x`;aSiNk~)9jY98p&r;M>zx8TD0cd^Q?^q z*WzsH6YFaCC_Xn|TT{~ zV<(Q1V$vW%pf`^`31OG;V`#`eu0{@C8qCnOGrRjg4bPXCt*yJeMpWcM* z0yhJ#L03v?(p`_SS-Na%My6Uh1E2bgbXQfSyG*H~Y?5|N zT4lQJa_gqxw@npX%l7gD60hq8-&HrLREI(+;b`AWNkz{#C@VNjIBYnS6xjQli2u}~ zbnLkds1{eKDQoABBAhJGY{KkmNDph_9rBtc znAFR&$aPdobauO5A+pT{lpCJYH8mLAqQx5QA9EW_sn2Gg%#oFEg-R+^E7<2Btved@ z*0~ybgxDH*)=_Zg--yyySZ33u$Ivr2t3VQWCHpaYn8{hm&aZ(HWj4V%(*jJ|WV*Yr z;afx+NXjQLsiMvl!rw(Ac>B9G{T_KLKfu{VP;481l%}+X@4X=?(c$62J{9J&2k6rd zzpyx%9UkPI8LKvwAz-n89aM(xt}37Xu;$KqL#^Lzvnna}8N=kQivItj>no$;in3Fd|vitMQK~kdr>b7kEPwu>Mnc-Z7 
zehemA>tT{+gwYY^`!rg*pWCA@UUS*j9EqC`7B93}A{CkM85aK5*#8%F|Fa4AM6Tii zRd_NY--rGu=;z-Z(Fu&*xgR#~ayTsu?Q90Go-@*F`DUIlmEDgtm=xJ-ZRK(>5D;sh z&{!L6>RLiB7z%QPXxd;0g4uBrKeav|EfL$x^;x9wb51Sc)u3^pSYk02OzmZ_XiN3y zp!rCbIBFc<=?-~pSKFCoPVdW;p>TA+GrM>!t!#ItCbB1za6KiF)Drt%HB9^+jXI6) zwg}lZxD`q?*~J}(IlwQi8OQbG4xgdftFp{U+xSYGnaZ6zXtSiWBHrYETP=6VHdD1( z#=(M#dV>}t^L~N?Lr$Hx0$oA>7H*^B8Lp3|Pw=ysV}alATU}J=d+z##+2;|9dq1=1 zM`I$G=j;!aXiR)LCd4U(e1S}5_<98CJ_zf(XKyp|%|e~+#_dq5%j9vT zbz!<2VeHtax>8`2gyTu4Ut=MpZUBS`Ob53QDYz)Ds2!J^bcocwAo&^kcc|{DBf;$>5nj5qhbsgnOOp=>Z26Nu|}_RD_$!v{$q|S=*Fsq z0Uopd<_TODFh(WdUQI_v!K+&SvBd7?J=hK5jcdwde#&rtgo7SRjpUI#H3v=?w=@a*DE%5u1J%r7nQf}X;emZlR-psYe8 zP(duV)t8PUtRsGbNY5o0`GJI7?a~G#yJ=nWd>)nLTwi3lI6#@9P(=C8X``n`dnIjZ z>8m2YQCg+Uokt?hhC1=sv(I7jUg_j_xP8N7M$3slrzpI=F# z?W7lh`*mqo5-lf=Nd4j!6TD7!8LpdI{B8yIyi(*Na8jf*>_=~?2MRgxo_p^A6B{A-)5mZY2QSD*~@M?_vG{-QKF)6P-JhIl|yE5J+sVhkg51<;{o8eNl|zB8l0CHK7-+)|nU z^cRc$znc;G66*8<*pMq2(cFLWod0i15Ncv-D&x11=Wa2sa!4dMnZLuS;nc$|`YI`ch826-%Kf?gj*DDyutbfQI|aUJF!kJ*qJ<5~zH||d<`2cF3py?KKVICBw3Q3Te&_-h9 zaDzQUSEIC*{n52kl5fXXb>lm~V6MqqHk|iG&nSLr)R7J)iP;q#&oL{M9csHOMC{F* zql;oQ@ThVbQ$=Ge5$7_-*E=lmjPELqh!c4S^jPOAr-x2bmC@~2ip%(8?sQ_*pbnB> zQruIZ^6&JMY6VVq{SmvzN_a=1B77r@GQj!%z7OCD`JOb?DSb}ge7e8V$ zlDPghtvbmefj)pArDHTxdJ~XIw z9#J65qC+yz%Dq=rU>N^eKb5+WOn`i7!lQ3*zKy0_9iGPZ%9gMjn`a1%V^b3|Rj)SA_xt zw;mDvNA+S!4z-D`_9YZ7l{y*+-PiApzHgTRGiI_D2T~RqF~1@WzGCPgyjp|3Jt5#Y z0wQWW_Ly-Dv0R{*1be~*022Eir)?T>i#blVj`@Sk4bIa$elQ^vYEib2H0K3SZvw%@ z-^o#PnP_N!Ly;0j40kSnQtGq-yNaHH9Vv{{Krl+0Dj5;AKh)U49MEIgz-if z8|@Os{@vPW=;((2sIzI_YahNUY}|c^ zTxjj3#u)WnBJ3?4Hf^y$X+Wl@l%@>98x$hd>0kYMD0nC#Hgfm_+xgFdIKep1#e}Lb zEP5;{FYIl#nZ|`2*sLJ86m4sTYbUjlszbJqXwxSsH6;R-)HILw`|?T+z5e>2(nqf- zU5~Nn&*;uy)`VS~DGO+fl6zGFY&hrp+b0Zq2ZaO;Zl;Lg@>vZMT(g7RkYS_<Wvo;#1H3@`-EE+k4>Ax>Xmd|MRW^$KBY|v5X!%q_MkC7CCz1X&-2?OX}i-F5e zP1VDKyX0asr782s#5!%We6$=gZUpmD6z1@NU0~E7B_qW2?H>waPb4+}9F2f%$VK3H ze+cB_`AhvAv@@GRU~bauec{PMcF1Dx0Wjkm_-YhiHhgp-2^JF17gQxh6LR|MttpDi zYSNb`O~2xY^hD!){MFwIz3OlKhR!<4*8_5x?=I9WbBRWN)Y(of_QL-ipk%toU>qZ_ z1o+C0+Zwavh9 zp`sbjZ9>+bi0>^y&Ds!=hu<4#Po&C=%*Dv-XTc1!W7C6CcM>S$9w(4ZbH`!DT-Mr# zIj?_L(fxbjN3RQjd!4GDpZb62(8Xe~$PGy;1OrP5yBBi~#Dn2(LgNuEe-81k9CSBr z{X(Miw(8TQ;ZCFx^pm4^p=?~q+`6Z@RUW(^dVJoB;a!y?SJz@S!ss=~h_`ib6OBxnx4 zY&c7XakvicY*|j$uY~`Rr?_kV!H&A%C4>a8foVxv3?L$wXS9c6r@i_RXzwy$fDDD0 zQ{+v3@gPsArR@56bHcn2mWX&!li0&(}#mub~Nn{A6QS3(_Fk9W_oXS<_rD81vw_RlM&eV zz8KWP(3_Kw>5^030)_;3#3WbMMZeXar*B*SMwvfB5l$6nEgvEnYf$)Xn;2hN|gxXP}j3Dnc) zp}u^JE)Ai?XdH=bPmr973e{3};H&#ww*k(Fvi+ofm^@Lux;aNf8SR;=eG-FFq2i9V zI^Hj&p@SLo323T@q~`Y8-G$kr1E7X=G*o--Pb`J<76)|&Z46v3+b1#oGVsg&lP1qo z?2l{o83BUx?^H(@r+4j@P-Wfk<&qQcZ1CZC2{4qGMU&)DBnXlP#hqG)E8>bJ0vxjjF=0rJ}x|G=XH z-aV2L{FlVXwK})zJo>qquD8MX#?JQVryn^z*jJ>?>yl6Eo<=4PDGh1e@b{)|L^S#i zUM*(1qsvk;_3HRr623u>S;L#wdYTyG~9| zenkg z{Miu|Oymo(*c3T1UyQtL0SuleyIh*ufQC9nA^#?*gk!H$K9-Zj`WEmZ#9?f(8Ot4Q za@APr~$^W=ulJwDDie-h?HIA{(u2|D= zOq5H=yd+-Kvc4rJRb_%pW}-aRH0*2`m>(%2*-VHpDai$}#gmiyCJ@*29wF2T(c;+} zwr`)xqfiy1TbWj|1^@>eQPy6n8V16or#wA}Bry_x%4zHfP>yskYCm3lqV!mzm%mS( zWG>wwEN-bisQr|Hrus+RVm5=4%umBczI`wgb^C>!*-L7?HHJ2`7&olP)(I#?$O#6d zHa25|)b^##2gwW?-no!roxFg5%ZZvluo#yl-d#ORh9-d`cQn+Clx3$o4TQ?Gm*(mj z=Zz#Ax`zAg{6fDxdx79K#$WOB;V^68e{KWNTC!0?;vk*ves!pz##3YEr`I*wy(F(sZ0UP*mUiGAu*IXLp;8~G)_ z?j+>NX>Lm-2*fb7rxQ!$QL}|=o)h!*0aG%^D7t+@D9@{?TN!`*NB5FMb>b_L6ZIj7 zK-;lkSeQWH+#pJcV&gdQK|EKF$Xz;+dDe#ezy4Itx9Q{rL}j zXHQmCO@6PlO|>G$Wo4XJp|5jszCu5-W}r7%=9zKO#h^``Rn-8B+KU-=-t%>z-W<45+RMD*-f=nUQ>6 zS+p;7{JFGpvBvKBJ_(JzOgn|!G6UTdSJ`rZrBjj%9gX5wllJ)Xd>+NQ$a#hOo-OHt zRw@nKG?=f39uMz3y8Zghi;Hn04EG9wKl}YX(P^Y_>Jps4h)~lW2ENZ`{^9|V)cU^u 
z3C!sXTLx|efq2|4>`hwV9icl@xa!X4+sl2$CtGDV2|>G=ibILl62IdHIm98?5$;>$ z8JBbMaa7WJug1D?vi``>>hbhp4CygO+>aBJEtVbgyWLYAA0f2NKVNI!i!oDK zcKc1h56gGZ=VP7N!k1pOK=zL8d0J|Q^M}{kQj!ljI2FM8@b*{kW!HaZz5DQ=TBEEX zT||G!DaE?P0OPE|6wZvijH+0^C6{d;$HiHGSBes6jg3Ls7a@7Xfc@o5duMK&(awy} zj~5u;D{g&sQOxeEze7O~Xg4ZG+>X4MNk}-4~thn2$mpsPmfz46VJnl!^SmwNp8jN)9~P|abeaLtXH=AjQa z1C65PPU?GpzB}fd*sI6Yu9rkC=kPQbDiiI04&)T~XwysROemH=irQk&*iuDE&r5vi z!l98_Zdx#ouCtUwNsV#d zsOxAC^zo5KRt^4T;pJvrrZ;@Ku_9H+@{e+sXaoG)OxrK2R;}U8dxZu`2D!HhGvQCd zanAz+g6k*#^D6Rvev+BJ9;?9j}8!pT1lQM+gygj(>CY>V) zRxJA%NPcgAb3X87Hbl>8MAu!5>&_mqq$@@ zZskiPbuG6U%h0%orHq?JhFE+Qm*iAoWvE9wf^NR@u~>dUi_iQ_zwsvX^`X@P^gi-a z{4tZ`XZI5^PAU zl`IGbiLn_1?(smTrG?^!_`7sqG$x=11zEr%jPAKQs%r<}5F~ikl~7X$ukoYwS=XZ! z?F|9C^I4Cl=gR>0#iF20(caHNFCAiEuAmCM30GMD{QK6vxcsJbI$>>cH(X~}6C-dj z_#oUOif)ULcqK&c25p^xJF-^$vJV*^9iQbrqfw%_X}>HOkmJP6GWSu+!80L^)HB12TKk7X|}ZnV+F;SiY|ou2(*-1fOlSo$gTbP$nd_^44WoBTTWw zNbkkD(~@$?tytFQ@&%~oTUi3zt6lFep{mLD(I6+{w~?1znL z7k=U=ls1wiB3>#gR!5XLzIKg|Pehim$aLZiqFy^_;?03)4-XX7ooiy(QOyv9)PIjT$zk?Am!pZ*{BX^vG$U-hig}7Y2Zxt= za7s;j4Ep{Y$5XDOZEAJ^~XUY&&SZOv^3 z!Z}~)-Z*!(`CP}yU8rL=<^K`{ugTfSN8MfRPV}w~UFt^hKSjQ>1hH(y zS7qN1BGcchwd1}A_VPJrKs~m)6)P|^u?D7H{mFMKa@u-!ju1?9vvK1_^_jiyA2Nmi zuRkg#E(vhi0X$~Y#(x(+1|E*DTNEgf)qX36g#eC)fGU7AWB!1Ti(#hm-8d{9`p4Ut zFSpV0;Z4brw=|>DC|gw8eE2hF+B>3(q%n_5Xr%}v-3CU1&Oah(?xkMZqMmIO< zI&tGBE20fhd-{S|*JqhnN?7JWPQj*urZ+$rj+6%~hXsp5bmu_A!bztR|1PHDj1d?{ z#cHRt7m5=K4d9~@VsaNVW2>l({!1;e9#zjpQMCnd~%Cg z;#(|TEX&1EskHvA6z@^ofepY^?i|GdKClv)st^e}kxH5eUcNjOj%BhwVVvec#fxZW z)IVJzEuHX}j248)M6E@Yb||~scN5?!-bLiH5Xa7=JaY7S(iv&~p1`WC%L z-L?BK;y?gbJq^rX=<2EbU2+vU1m{+4G^cTx{4i=XJc%ry9pNNj@DH^qE#~6=fHr>h zOiL#K;0{TkMx6?a2P7kYk00cA_N>Ix0Q z(y%RDSD69qy~_?&{RRQqpS1me{wsx9t0&~F3hn)90wpFH~YCVpIIdQ+wysA&NBhAu8z=JOf$j4o<_m7eMLv(L z{uV28f*&RCF4tVUKA%d`$Qezh5cWI>Y#sn92P8=J(Y1`{k8^^Ue5ED(4wX6R8d?$L zmM{-BZ&pf_fVQ0Kw>Z%>c2VrKGRkMFli!KBKkJI(;^H#olDE+-B+&+R$|ZrpU_|v* zgOMI?l8Wo$MM5i6r0LX^wPk+GthSZ55%#%}R{LW*g%5sf8Qlfu8pvWFDuh zpLI%EYj}=_iaCo28-jH@ah(zWzYlV2G`?RyQyl%0m{!r6;ADf~l!`dlguE|)LH(dI zoH;RfKMkcrldxz?BGa9#St@uX5^-!MASUCX9chKu>TD9Qg^bsraa311c5e}K8@P6t%CyfA!1Qr4ZgVYQc ziq!L*ZA%LLqU?G-?lP=HvtM12dJ>~^={80URiNYF|G^wZgILd;XNc6T>RD{O+TGn< z6PoWlPUHnTB&%0>%MA^rcVqd0Rty}kU0Gpn3~y><9=O6k!s9pc(+2QVhj>HOO}VnD zWMc+ri;@wuj{(CGGz=kAa#L(08!Bj0wCS_#AAop)2{^(|SRpw{y>!0zEv}!^EibB* zaD{nOj>`ml1>+|ZC!;yWh-IyFS0COW?}>PwS?mJY7P|1dcnQYtAbuPS%Tlw6O*(RR!eliv9Zs z2?Ma0f*RiQ3%gH05aYo$4M^?ZrpxrCH0-w!#2zaH&lGDGvUOz+QJz2U{&06~l68q2 zN3#|AEw-6zzYa%03v*Eh2Yn#%iF%!oG48PZnnCloez1>+y`W-vnOdmY<)8B`JyWEU zZt^Ma`38Ei{^JZuwC?~NL??!wZus~mU-1Q zW&9ZM50TMRVYBaHpf`LywL@;v#28D?^I-7UmzANLWE~J<`Q61X_MZKbz}_ujPx{Ne z=~4H`u8h9wZS0l7y)Elb4F-*)K%fktMRN!{7I9tl9UPtuSYsOKb`xZ~e7K9O0muEJ z0FkF^RVAX@6#TLI)H5B(u0+}#<05CbB>H$WLQ%d27A}Oa`RWwct8K;zN)Z=)3d%m_ zRd#2L6=4*Z?H43A$Ot_15fsZqV#uz={`F1fCk2&dTW?DLM_qW`{B z|60G!w?&;fH=nLTbD+!Ae>W)xFX9)mUIs+!RYB|`cw~%c*awi$-#8h=65F)sI=Jt* zs|7;%#&ta0Mj^LgbXEgy08@}a6B5ok%3)RaS;&v_5rX}3!w zwp=YCx?&)5xKTuil!DYa)x7>k2n0asmWzb+rfT%&)*^QS264>B?|+R_i17=4$SpWT zaLhr3+8oSQb2z3Kz?89&2AngiPX%Xks5brp6Qx*3U4kIybIK;N>_{KDAJ2xJM?AX( zQBY9-Y$lvOhgsbB8#p#g>tbSf%JT#8(9rV@*g2i}`9Sa6kVwoO4$~M>=g$7{Wu(a5 z8H>vdpQ;{mtcZG|#*&TX$=(!ykqU}v`Xd!Fnl3Wzs>~&A9iyZvW^A&V`e)p-1@t#K zUI*;zj4Y}6+I6PfjQe*G(aq6zW;u4>+xIDg9*(1-+>ekv&?m#Md^tDAxc7a0J9 zG!^VhI%&~31&bCef7rWjr&uFF&j3OI0c))36mNIVEummNjmc8+BeQh7ZISIMQsl?e z{LB@t%AbI2W8Oq?%J;X)Jq5*XxN8=d;K zd;-ObJ65`YKm0Pg;~^NT3L?OY3=0q*X6h3+@F&EpdoJeSU};om@_0K#(METIiehF< zgAEp2^yeFU!$9v1*sW{HUQA*=B-jh?(=NtAjaC zm{*+-_}tlW-7!7ezAf%0?uiHBh#Ab@9b85s0K}j@L#6{gC%{3!Evrb6#j}wV0sv$e 
z?sQd-$O1n~HmNzrsO*V~FJS5fgekJB>JlC6q<@5n+{U!5vPdPovmGKpKH0r9Kz~tR z91-vEuFPj9S$XJ1tFYYA*}}iJ&i^_A`8CL;A4iUt2`fpAj_{lRaXykbqP<&aP&Nh#SLk$X zq74H1<`e75f~&Dax?HW1Rv6QI{52xVPkFPncQfxz-quF!h}vwk^vvMPxtcA{jHt@; z{UC0r>kndunGHsTo4QJAB3^9Ul?P~k_s=pDuDOhl`JzsHBW<^wMi2B69QhGK%)wg@ z_aEXNZ}6{9MRlBg*uzz&9(T||edKE5p&xQwye=?PQ~Ye?K?%-}`T6<%08R)1u4-hp zt%Ss!rY#isLJ)_(HUVNwPI3(A7bl^T2~{6P@s@SO zNcVckD89hsS0QC1B%Z#?t7!ASj0fx!trE}AnK=hO?iCjXl#Yyl1p>Fq$AQWIs4LkK z)onp&q6Y(pI2^$ZU@pZ}7W%~|9dHw42lWCv8}kHLH zL|e&=n~1y@_y=(D|E370?RCknl9>h!$GU?A&Ov29(u`8HDwvNTi#>f~X&HLN2`!JT zcS#Bk%gGUR70Ie1OZmHs9zP>+g}JaZD?A$S3HU8#BSY|*SfPcOp;dE^wTV>zE--{}1Gk4$?W6W+&&0yh<4eTp6m42D2&2=T&T{f^t4 zlNFs9vOQzdUSI(n zBwcTJbE9YXHz(58-jU>XI{yN!M-{p4N}-KMt6dsl-14|~NMZyA#UHN(JCi4ing(yE z5BDsSjcDp>n7f_)}K-3CAVi6vQP&7GY=>9~Zs`Ka^|J*&oMV#ZL*z?Wf{oQnHd71sFCvE0^VpozBpc>pnatEqfY54q)&Om6)-1{HA#DE=-FN+RsobnzM)_N- z!N^kaou0>Kt#}};PGD7lp^UlM_l?91XBo^^S6V#x<(scY?GIeCj7bQoGWFE?OdE-h zA>b@#NE5__1j3^tf3wi>=`lM$V<+qH9Y=zS_REL!;ZsmdP>&C>(Z7`de9v&5Wmn$L z7XK4+xR!tt3IQc$!idP-Yo&-Rw_$!e6nV=Zu{tp0#tN(R;Jsqi-qPgPAF$z889prW^g>hvucF3qxqVrC zR|)_WUG!=|P7c;uON9fi^KIOBh9qF{HGRBsO%eUn4r=+#Vnnp)3c+8X=dJZX>n+tP zrUWzvsE)4(XnxQi38m}y^C1_8GF@WGrFyi#Mlqio*afjo#Ny1 z8hGvl@_?}`MXM$_SF8x=dxh{&yIzjH*?R-HZ5bI?8h9d~zp`qR&~SpIjGgwt5W{u8 zi8MWW113{xw?u1=Ar#QsF8+3GfS5_con-59n2cS=+A;0@dh`lUN5=}+{hR#FS&a(C z?9k&wV`~nJsNvFum&UHdC-v~wCI1f%KVqmzvM!QrrLt<+S@hCg>QFfehWeko{BhTnl_Su9INn%@46G^M*;7)pgNm+yD5b) zR!3|$t^u?svcoWjDeZ%ZgN(&Wkq>3C3S;Kt%RPBQqy18~nF`9iD)sg39nyJnX;PI8 zpowgw|LxfF{HW1l@VCcSQu0c&Jo--GB8LG?Dx+Itua}#{cZqhciOER+K%ezYQqfi` zC-U{{Rs{uH#t{t-J%e(V?9nUucL{K>XgG75Nu}R2v(-mL(320us$GB#yk|{@eY#e| z`Ex*25WaD>__LP;-@QK5zW4%{X!cFqvY4+!t&nh6!dmgY=g--FMM?VkQEo@~j#$GwIV1=3Uqpoh+`Fg0axOYk; ziNRCO3-UG?k~1BNlIJ-9qpMajXd1v-d?C4bk%0E~B3o(FL!!D^bBlf|yU0-#q=@^= zVXXd+haQ0$Hd+^vX&^sQS0l5+;!YehE_(5$_-F|Gg0Gl;<1?AT+Q-|k`PW+sj}NmF+Mnw5Mq#R&#&XFsZnwTY`7_N2 z#R&ysm_omw2HzGnA*JRfogT|2?iEAVBy*bm^Abfy*!|sw=G^&S>03D8%k>{M;!Em3 ztOucHkH?Sl5f!RqwZP_zepc4eG^K3E61=G?@XA$YqYU?!v?kWXjr4*^=p zIQoSb+{^XhjaJVtUx+>5I>7(X^PS4shbuzuN?XuBcBf=Q))O( zyX28J7&EBI#8}gL7P>xnoEs-|0cNuTR(U-79)?Tx1Xo}!6@&A@2D~%tJWRlCCEHdAgpdfN68j$GgCu=HqEqS|613Wdl(i#=#u@vF0?j{qD5p#heLe zWA15kalJ6UyDy8eNJk`TrpRxgw6X8K5Qbx$61IUbrk2@cdJ4X5u8cY+ebpmdfs>^G zkt%`t_Cb-}STZt@#2g-)sok$n%cZ0=^_m`X{jY*8mead6EcVnn+=j2gUlNOqB}5LY zH3bq$>3lTs7*?5>$ogIxH97^S)1Y7SpmA`=S$`H6Wt>%4j4HBJnBkhN@4~VASXm5b zy_;&fQZg*XZnRSdlL+Zf?N(Aua^I$3)24bI~IxU=Wy3r&*`j4bSByT z@T^(uI(bA}$h{aX{IjUHUsda#EgHWI2kBib2A-075D^b^&V#7Id#6g^Je2vjhg*Uz z>t%L;bgrP7!ru#UnniQQ%FwAWBqY)Og;FFLS{5Lx4?4=`x0M3>bqd}JWQMeo(G%0l zm#820djVvQ!hk$MIi~l?E2?1nV?qc`u~e~QYkRaY@m-t8@p8_?WN2qT%4Q+p*|qEw zf6A}05{G-4?&y6?iiS61x6q*jmA~_n{V*+RNS}AFZZ=8JP+|YbtvNfTzw(v4ZuoQ{ zs$56>HskbrYGjnEsbgwO&MtBfUW|NtUX=VT8kXXt*T0{>Pkpn0Zt5E&FtYDWTWG7_ z-!C@Z5WiGepQUm)zjnU)ZXg*M?x&HjteuS7RS#1(?-nJhETMY#1%Ty>y8sFGr7z}H ztRM0u`1_N9jg7&n*V+CVG0&i%@^7ymLTd!5>7`|SM`?kxc>pioZ(NnOY1!9~i!zpYry!*IPXI@zSIHnRFJi$Rde* zl@zBiuHmGq+;}*)K=V+c-L{lUW<-Xe3e zBXqlz{+gXc@w73l&%+7%kL z`iUD#-Wwkcj3%M%7yMP-9@*GpaP>T6sh<@c7c)t3uiXDFEbd-faafrHy3s^5h?4JF zr9TYSx-a9if!6tK6Bc{kU!IYbSGGfv#_`e3SC+jD699=p!@gR2Z~|{quX-M7Smi;NhrK;pml6;5jor z*TdQT(Wu!+{pp)*XjRhbXZEFBIl3(P42+T3*w7IC-GQ3V1;WDTpX+aL3z+&6in$CV zGJwULF`Tyq#=T|MfxMN6VB*)Ve8rWEl>pa9xC3`%W#=s%nXsT5XQuq1%Gg$CQ1Sa# zxA?;W{+aGgca4|3CBu!9HiNKh8V-X2wUQyH9;VXmH?jxvUTsko6-8U7@=fh#L-bNc z9XZJ{?n|i*FO89ikw2QGs&?!zn@_mnUO8ME8u2RoOAJicXlU*0>HAAiEZH$F(x3en zm0l*xsgQH0A~jIZ%=$cWE^j~}>R`Uj`>>@xkMuk2`Bg1=55f~A3#W@vXG%4T$(Ne9 zdIpGf(Nq{97#QS=D>hpo0fOs9+l>Vghe3F>-0axn(^W78%1kGoBGz77(ZUE}*y|J_ 
zkq1$Wk$2Rd3T7cw5twhE!E>pzf<(bw_khhV_A=sR=w!H%Af!qPgzAAivI$7FKZBb* zuju!mQi35&r60yv#XerZt&ZV0-8obAzMqm?SRiU#TG4uMNnK2T-)?x zG-Ndc$S+?}T8cV78dIEjr!h?)?7IDNJNCKr#aVmqFi3k|ar9WMGw<>`9}SMFD6-r? zp!~f<OY`qtz zr#<(l)G_w5gfJ<6=HPz@d0RuK)_pw2Mi;BBbl&fEO4*WC7lNlGKwP!v&Ga*7f&DH+ zwKVoHFJ>~j|NKcn^nva^e1q=mjC!Rf_V!rAp$Qn$&$ydi%*HHp*sk+X-yS2RZ&3(K zuai2Oua{+v-epuS)QR6h|NL`W*g(=?Ea(oGL0^io@XhDAV5Fbr8oC7v5$UB>8ED_Is%ZYsH@vDTg5N+a+*kBY zvn7EEhO73bwW4bv!WAb|y@JMw^%Dw6OX+<^lH{kf{kZCH_a>fe!64DE?)z4Shu5fo zz$JPh3-GaxEzx{+nFygI$Y;?rC_1pLBGQ9Iw_gxaE#s{|5BCd@|LH@L3p2g#MuS`w zh@GXKbRdlqmkVVxT$)8X5{=iF_pRdMv_x>6;lxv@Mptzcu%MXyp_Ynm&o-;_tCiQv zFCqX#U+(}^p);Di9$Q}tW5)BxasQ3|W$spn~CzUe~ zbTmm871I?5gn7^h>a@ZdVpWgCThzZ|Vi5Asu0HRS8i~Ch{_dL_zAEKVxb)$97KmW) z70~(TdFkU1w~F@rCR%PlHeN#lXCS!DyT*BH>8ydH*7^hf=bTa^xo_e_`2z)v;2;;w zd0E13i4yh3kZ0vrsa(muu=2&+nD)m3%hb-=rq)~t!b?QEiWSl(0u3%gY7l+|{w}R4}fd=oDMI98&E4(T8!EiSa z-+pAaD}Y|aaDbo2ilUnzK zG@FuWstEb6zoEqc@bKi%A$Z)%Np;9fjyH;cKz(nW3qpWfhO}I8?h3k?tebH{zx>j_ z+W1A~&KM&ykFNXVsQ}Ma>!EYuy(~Y84hd;U7quGvl1;ULNm|lrIdD(N2}`B<<36sgDjYZR^cJh+_T$5Y00~nfIlAov{%G) z+J$Bj<5eNb@>dw~gODC4po8<}r+gv5cH<4@7LRA**|HMHOuM{FZ$^;j_lp z7Llt`wK?o#{n^{ISNPbb2s4diU<4p7@T?t&nq=*EyahG zR^st1Q+(49_88o!beND@{sQ>WNa@a+sUvm1YsrO99!%($#AbP%RJjzQ-EJ&$6ubKu zla_V%n|uzcd*NU^M9YF7X|Y@FK^9_fH0oZq)do{+pIPvX25!O#E_F3{%F{bm$ z)8TJ1Mz4kjBAvD?;iU~p>Fl}Z0vz+9v}3e&^8P)yDoFG-0fwOppuJ(4Ck+V*nm$cK zsFu=JgL}fTxlm@m>Z?~!@~p!M7JZabWPpk0t>rQomK*<|*>VE^JDa&oG#DMNNTTE5 zxe=Laj~q6?rg6eCENzj3W^#fq;^FER;W{&Evcvtd%wAR72R6NKfY`9tgQHQxqM!KN z6{0YgWP?g+%U zf-~a+66>JqATk4y3ZZlzkQc-tRbaw-?p|RDXpmZF0l^&LiyUmzcYtjT&=)QYy}UEa zq1Ywy&w+XQ0wmEl?+7v*qCvnYI^=z9|6+n0eX<8d*Vnj1Z1+L%Gq)0U2B|q`ZA~O! z+bwuHmz5=eO$eB7MMrD2c`-edUUt;le=D?@cj#2SaZhKo5S#cDCV@zspmHO6>-)7u z`qCeoc`J)0%-UhdnDt&`QPdHI_wv)f7t;T(Ojw070QUn_{+Rc}lU^Zvm8o-++Z2Hu zq+|yKhS+G~6gjI1?HF`9-FMl6hx5oRB1tM2xQd(iY3ABxl6xoGJ{6)J14at2wIwS^ z9@x8tF5c|6n2Wd72eBNLe}v?lS)~1RetFcaeox_$rC6~D{uqsOiJP#j4}x&=>m+72 z@p8cz)-`LuLL|QC9plYSYQ1$Nlvg#Oyxjl98Y|MGt)b~ODe#dZxWv9$NYiutwELz%h)bPIZConA6*+b>qg1 z%)Pf7FW3T`E1GIG#2wAwEE1>l2)^CN0RF(NmFT|jin0!J2C)SP=dax5eqVF_Mg*B;2dQq8VctWfX zVO^$>m#Zf)$SKlRFqHxJM}<2r$Jbak!&kwhyy#@EP?d~dzfnfOFM<(aId3F3UPWHL zZ|}bx)=h9Xs{Y;AHk%!GUeW(tcC-LE2q>MZ?ClQ({{2qC=Zi}Ww=>Wm`_3^7F~{Mi zFs5UyM2J$b!H?Yj&Cq8G6F)j$243RF=Dz9Li_y>)hc`G)8=={}Rq(Tjo!OrnDpnYo ztjiObryBcYnS1+waMSh?1|unO-+aZ<7R@SBWLto$RzlBW!v@oellN}DwWnU_{z71= zp_7jI%#a3vwy0Mj~}IxS(RDQQ==DG!JD*V0!@EbSz;7#-u`?&ZB8}i7Z9}8#lw@1O=kk z-_*m-p||mlc-MMwz?QQ^Pl(^HI`_Fu*juu~fTyzC|QFx35qiA-iG18*v` zSMyZ#&p?(%CsMkj?Vo;uA={; zTJK~=s*G#!;1_nu-fk9t`#YRwyHJ5(mcvTJ${6FaY&Nw4#TuU-5F1GFl^zo)(G1=( zXN!!{l!G zQG0BCm5Jq>-$CjCsSVmbqSf{=l`owb`t=HstRrzBf=Tv@BpPUWDU>jRMu9;tHrNI% z1F*ut9nX$8=4=oqLH?D`NWVeQ0WUkj_uT@v5 z(UV`#Oa|5B@DvPuJ&t)ZGbP|YoAgSx?(cp5{_;rY@$hh* z&-?v)J*O$nwE}f%Lr4B(OvBeRWKyo^A{muEKjs>9_iuY(*PaO;w@a`Ya>6@)z-WUr zR~r?13|;gJuC;rHcK}(*rr5Ief6rvd{q|GR^sfC=M{o6?hR1L9b1VHHfhB^RGc_&) z87{Gd{>M7~`rJ7MceNJ6xLrS{m__~Z<`<)H>Y^F_LbQCFs{Z`OpQ;CM6iBe{z;*eD z{*e8AeRyhjOfk4s47 z=TyU0dop-m#b2SJB<;SW{jhj-iQEUToTMM8=4Kv7CP?MK89vHSrd0pPsi*q$g>S8{ zAs$q6$tjNCu8Dm&x|n>@yod}2f6Rc{ylrst$jIR8I6xa5`xgLm!kerCo12z#NBR}@mZ2L19+d*T3TV8VA z6yiEgS9Y_TqG>Jrsw9EM)I^nI#a^Sl<4}HFWBkD)Tac6Qk_T`PRG=|3M63R&Rq`1H zwTH)7m)+%`cC5ZnV@a{ow;mJVl(dhrpWkx^EGy^fM%)SZg7}71Z#IPd{muXTld~X+ zH#wNNRoBM7@Q3KG89eHquywk$W1N~pp|o$(Irt4ebu(3+87%b}ofWd~<((9N&)aUX zve|z1>Q#7wU4KS%mx8(r3xs9h;xPEj?N9SGJ38O9hl-3G9Skeeg^xCMIg^L(Ye=l- zo}u~ztF#&YzNB`0-U34(^_4p9gJiHrH7@fwQqAJH2Fw7wHEJ~lq z#UGhIWF7vhd(0x;%IGIo;EN^Oj?|5=IYoLF1?ZQUH2`5qU$pV7+M0H1J`-9r%|IkJ z(2kt}-@k6qT{MDfN=x`a8ZD%WoTSS(C=f$j0oaa2d?Pb$9Dc 
zErUAwz)ej;zMeISk3-d;oX*70N*V_gGkmoxb+G&-ja1@C)(^kr%9PZa_fJ0ioVffa zq7YMsv$@h^bW8T{J3`>gT=1D7)OPkZ{S&bYArz?Xz5Q8fle^TkTJ?^&%99?xLY?|I zPCVvpHGxQGu~$iz?1yGe{_f*0!_?#BPeO~H)tO2p6!F@vYrfUiQ;6z0s|&gooys5d zF@XOo6$j-AHWE&aRyo<|p2X`x;K8YGh=D=!T#XJ$su|k~$VjjF30E z7j16|W3o>j2s03rw$w<8ksr6tI5-t*pMgOcbwz1WKJLdhc+>Xu%<J2MGua`*%G z!$-USIDT+)fcRB0PedyQ0L8Tr55GN$Yd{NclRpwx)E0JsFM+WR(T#noT|O3F_s)nW z2)ByL@|TuanS>ZBMU5SPgQep+$5O;J(e$f;wz)Ts=CLtOslIwHvJ!3hAfse zwex*bBwmXY=9J`pIQb}FP&y+aNeHwcPDyTW*>wm;!gG@$EiOT+)|(2gf@E2zHKKPAbm66o6qxDfEB++G*S(9?^;Ze@NZ0&&;ZLNyHtG+m~X$5Dv$`*a=?I;y%IJ4TYfLJbqi#A-jXs(yiQ05yG(;hWo{MAyc7ZdiXM%7BgU>^ zH)IcI5y635+r2v`nMKi}V3q+2ni6wNRhnd&z^%`RtOYgH%t)SMXiKbFysShgp|Xoi zlniEO~${pq4k}V^kSTfjbWs$y@R=p*#0b z(cG9Rrh4H1+Ie6dyXwl}WTC*8j%hfeyzchrXa-KQ=Nnvkl;mI(gWhF3n-=UW>FrO! zl8M>Hf1q%R(fm}1fdf@_-{^l`F)_}RV)A`44#ySBrhbZh>{SB>>Nz!r-ES}A(|m@Q z^5Ue}=W`Q;<3w|FurYJT6#NMBGceh`OKcc4y_fnYHs?h^RRWI0O&z6siH`TUVhPM2 z%opn|hO)y7a-JmKOMkZ}Bc%^!1LYwx#&{rd%v(;X{ir$k&AnJt_7%I0+^u)<6glW` z#rQ>I=a-VeToi8Ub3GJ9w{`Cdvbn(;vH(3NHzA^hByAH6Lz7A$ibQe3$=7=sKcR7x zl%I5iiNtnr_RXM^A0{T~cOd$RYj^#G`1+veQ}?M#17`aEhhNX!rV~eoegzCY5^8pc;Vmjp)VDsmJqtbhg^nJI(**>VkZvv-V*@(Ljd&R)bCHAuQCyPvs z#$G91(0G~OD1Q&{1Wdf9T)SqhB$?^9s?(11vn`ZjaahNlQSGTS54zHE&TkqlIf&B; zQzJ<$xum?>liqlq2X5Z_x{60*LXP3#)Tb+nxQpU2YRWpRURwVB?aot^6{WNQP#Kn} z7*36c?zkQdTC^l@&u_r*Zm<7cP4yr2BCz!#NScILmApv(YtJR+#zU--3}%Kb$nIpn ztkPc>G;YLnqqg6dO>~bJwmfjra{fiDL_}Czawe&~cRa7vcGu zX&N$X@4Ll%Tjd_U&eLAbF4nf|*I}9J?(|+(e4u4PZnU_kY^ME#bW5zq9QEtvCVrbOb!;2eTOlPS_RK{G{>A$GXw`RdTD(VC_><6U*G zRn{X7@8PQN~_rC|@*dWHA4!U#ksunm-DIq1>5Mb5e1DGDEv{OC5V?t|F>>j zf%1Hm_+2_pdwM)e4{57cXCP*IrBAMXHhhhA@fN`Ouuj?Yn{^yN%?jQRs7-s@ zU8o-KQ5@}+DB{fo1d#eyR1IGyc;F*L=HCdsHz5t1+`2ZT95+o#4pqP@jV&()e~O$a zCN?h#Hx4M{5o^EhlrMRr5^cY{D>+JwymuiSlrtIb>eL|K*TW??gINR>Ygb2l=BMH6 zlL**T9V$#YV-dWfU4n1gYd#W4a5!=6mrh*WP~iSyp<836fV;Ku-W1uCHK$pc-J zl$zr!lgYnSM{Gt>a<70elE&YskS1i?@0a@!*oNu7>lzuKt+zTV_Pe=Da1%Kf?S8a- zxr@=BAQOvjDW7;b3sfsBsv0}j%m%-Dh}ZOw>y7oLa80f4xBOqv37!rF9r~$C1oQ+9 zTw%-01Pf9D{iIGi?J2gA{_j%pyhfE*D2@S9w(`KoV$ZJ(4J%r0OjMPF00+hk0@I4B zeOsi44xS2e45S8QzKJ*Q49U8t4aec4qIDy29V25?Vu!=QL`ahLD3=?+$GP8Bc{Ac? zw>Qd#)iKEoJgq^yXeb%3w0&Y=CZb~UJVeA^a&gj4akEZ!AV{_2!d@Pv-ab)iXekUL>n=jZ26)Q5S-SM}9E^%-tV`Shg;bY}aO@2e7 z)DaNORU6EMdD;U!h8@X)AgkQdwq{y-Kxi^lWSTR4ov!1uWVQ0i53c<ppgtGcJYMAS_JGS!W_%+mQ58pk7(;|t2Be*4|+X~ALs){v|2=*TwQTS$3xn$Z9S&wEN@#xp)PtQCi%SNtr`{g9q5i z{Sv@U60^&exvq9EQ5%V?tg5%&{8~zY6yD*PidptV-YC5#k_`k~CE&fuG3i_Ov>=>G z_IhCknHiei$L zk)p6iSEaQkSrdPs0Xu@#XcMJGBevo-Bsi_ojE`#1y4=uS6_Iwgu49?V-@UkY83}$} zp}*IQEql4$|9vK;;q}{|Iv=o0PyM&FwPLtHr5U?(4sch zKS?k5p7!1>;6D2aQWuhG&N6g%NSStmErrV2ba60A@91zoTK=1|bXZt@P@NLEc7`a7xX4irNqBU1i zD@?^3g=h1SZFH;VdJbSv__nmWbp=5_dm#ov{Fxb*SZ1owCz}U*`_y z@Z#FA63rUOhj)3gEe_J@{{(}B_`&LLf|*YY6T}9K49{8jETZJsJq4iBV^LUvE4_y- zE5;}_5Cv*UagUJKwgF?TMHxQ3bf-A_G6~yj9aJvY*g*U3EI-h`XS5Lr7D(LUikm=Y=`pogUzGD_4Hn(-hJNFB&ZHEinDYdO|6g{ z%|3yiL|a)3C^E5bp{Gw--tuos`BclxC7-&T{_e0t>t)ET)gfkEVqOZz_mCdnY$rIr z(@tw%4_%cNwsb0k*x-{g&>r)T?=xY84@v%M{?@vjzaPJU5&y^I)!9!d;6RE1;InK{ zfoD)wrPx|;q6D~(7tY!B_%Y+5>%Rq#%tCSSVroKWIx-+ zoJ>z5KUu$*Q~~YQ3#63B1b^cp3tBScR`m)KfkH{e!MC}u;jKQnG`w}}60$FNXCNQ! zoyaLHl+(R47dtFKKrS}}czFEgU%7z2v0;x3Xduwl;BCnw54&moGp? 
zF%K(2K2Z~a?h8U~U5-*=N5A7kJxoGZaFv-qWmIt* znKQh?m594PARLh~KSNZE16x&gBgDcwfjk4EDBW1JM@xomh6=!3EZ7cX4!1+vMJ{v3 zR`5y^+Q2rvs%yw%u43MP$GotZ(l_^`l7S>j;Tz&@ID+D1APYb|pCbbX`b|p9Y!EUr z;h0G}AzNI7PD$2}n6q{XznUaNJIeQht6`F!n`zue>~*sJk_WdB`D?I`99u!UGw=QJ24C z=;&W-i1rNlSB#AEt93za3SRscauZ{p>TlW2Oe*d^=C?- znjFi~@``-{!{}FpHLc!AeKe<$7*^;lX|(1sjWX!~Oaly^ij?<4)5->l+AktI6U3&1s`3%@ud60hV)p=e1}M}FiIuSL#_i=v*?ALHzEWF832 zxbRerOoJ2`xyCZ%H?ha{ZLpISd6DBcl^?9yBf5aMpqP}zLCQrgKQAxHQ*4W7GBVqbZ7mtU{gz9QkdGWS4LY&-86`c42cmUfslD@I zNWxj^d8dc*y^WxlMYg!8WC}kSR@T0|SJ-h*2w3uD1B4d6wh*513`b7l3LJG0KeGTF zC&XddRS=LMTAzcY(GsD}GUSH@{jq~evy$8E*zcN&GJeOe9RbX56Z2V%mjR$R^Id9f z*R?e2?y7Q^8p!wTW45}evwC@A-9e3!8PE!?iBluUkvP5wSMSCc0t$25IgO#;w9Y zaTanI#S|f39vUDdxlq5`N7S^-vyiyV%)YVj_&-L9OE2?jKDv>v@br;T$zKo?`f(xf zEZNw(;co6a?(I^TS$SV#t9Fd<5J#G7E`#+@rcBG-WPQ|4Sz$Wb$Mky;DWIGuvM8^7 z!PA1QY2E`DV^9iaCy3*-(}Je$yQ7FSzypXD3oeEg9pd7a$=2~_#CLBd>t~|5L{Jmp zkD*$7wyCie@`dN~vCQM;^oiV-x8!4^W#8QuD{|;Yk?*U!`;rIuMmcVm zTZtKuBc*ah7o@Zm?l*f5U%j7M>O%GXPOey8T zWGIr?;?bky^cQP|g8rvoDw>*PibEH2w*6B#2g#HSMh_hLbKQ^$A8jsvc~1H z6H2fAXQv`O@moK*Q0`z%K=Z5^ zm>z(_e>MwdUp7H-1g9%}GRIDixBfTAg5*Z2-m1OwpZ`Qya{yj|)Ytq3+y}sYFoWc( zws@RT^z)Z6F2WXTDdxytoGW$^spc~KjoGn8bALMKJ`m-DIx$Nxf`u{EAl69sT(I=M zI5LIEEuVh?# zz|{%H)XOlm$6nHCXUEL@bz?K2442x(8ZCj%A;xHfBQ%b1gtY9w=ful(64O9uu_6Dx zkLIjSOpYUWiji$-OA)|AaPXskN$Df78Mc82BX}LsHc;+3T1>E&tTtY@jOoL5s({GW zc7ToEG#WicTb4bc2Ve4DekJJIZL7&uYbUWHpzRd1WFn70Al_8}XTs*vzvNn+ZvICy z$R&kLNsEWPCl-gWhmZqIZBw2zj3e`{Wi}2ll8D)C%X>1(Rp+Et=;@%V7lPPWpCF1GT5R|0v`6`&iILq#|?5Aro-S3%stC&V=g`qVvLpT+nEbPVQpdhD@yF%qDjwQ6w$z{_g@}{a@I6Uq>kFghG&c>~#D#Dx zFhUlNFo&R!w*gkDb+R5gqU~r9C@5ug`*x>!_6_OY`Y0K5$jFQ0xz7HZAnZwl6=$*M7AKrkeoP&0NaBKt6bwY zyuRzN z7AsQB!LW{I(48yHU$8@}dEnj%kOCkhir#ogkkyw!Dv*oxH_Hq#RQ$o`CT;$E8jh~eX@8rQ|{Aj;&L&!o3&$hJRT zMiMPS=*_(ssUW1OGH{zUl`vJfsS}{?Ad8}4w`AcROsTraz5_NK#IOi|L4+r~H-VDl zjm&Odp0brIH8ZOLL&kGhlSW@}d-jdIv)|c|TZ3Ie1_Zxj;?gx>lJ|MiRaII3`qt@~ z-|$sG<@XMdQrr5uomRRIu{kkxqkIUd>ueCs`91b8`%OFMM<36nxS)A)&tsN`jJntQ zv#u}{fC3LEV7TTDLA%u1DG194xjYA(9fE!<2KA@ysi26u86?>89}_J9}L z18{b2{RDArQ=~DaObZxM@|_d0B5nZ9Gg;@A>1X>#pc>lF{+My^=|_oM@|1l)4$CPQT&%MOeqt=pSBX}a%-J@F%4$u9c! zQYLJ%77EvX#IqeW>Csd&3>oV%z9&K#)GA~ zF<&<8xL3j`VkMxt-b&lG;K?_T2q=$?SQTMBBGk)#nkM;adL%emqdPT)GUA8=IM^2R zD$Z#xt4! z+>d{sO5k#OXgPDa4e^rCf6c--^?}l6m_Vwuk4MsLJM9}Z~zC3|2DvNhE00h45jPsZ4QJNl-0)T1UOjAV?PWmCECO;@W zt7NSZ1ab#a)5EgY@$z_wzpp*n?#IMdu@eRW99YiPN4#QvURJ*bw{T0tcMKOr>HTs7S}wlFUMuaB@{#qdNoVKh-sPm zn@(N5%-A$D7__@^oG45wN*Lgb^BK!L9%^lpvUOwvUqwJ=H!$g?aY+ejJKXON_s8D{ zM}IEt%@i^aFai2k#FPEr7lWKVu?4XoK|2Ia>=iyw@pK@`lQ`x4WafgjP|vvP2`d%1 zeuBfXTc+==_uWD}#YHjr{$6VDW~wy8czl&UOp82Nn|Y*rU*)MGuf8($tT0Pf56jN~ zN@92VrXTkn%zkbu=e(Xwg-8-C!Ud3Y-@%*04@mJgWas|}bXJUk5;5Q@hRyyrE1!jf zBprc65tu7su&8Tnn-GdiX@kbKk=%LT;;*4C{%qMBQr*moH{%a+9j=HcmWw$zzr=fd zEkkg;A6HZ>awfSAA6H*`DWcCz0@sqz zMovY&`Y|m?1Gx`w9_!s=Vc?_T0aXLqK*0j3Rm6Edlgj9m`LK1`rU>!&l}pIF29^}t zl2Ar&?2k1IZFExy-(zbGI`DCXP%v^5x1#_b9sPTx%&J3Rn%2!}xTD=Xdj}#T6_BQ6 z=v@E%eodzM4GL45DNSj6_WtWyapj0e2yq3W@X~nJ4Y8F1p?>dd;2Y#)fzTFd#z!!? 
z{H;8H3~K5=+|0 z_7#*^{}k|cdd93xESUUFL5N}<4!~(g-H4TOh%sXECveF|q1nDW0M2Mxw2F5z^nPiy zCR-8bOyA?V%*YcVx#$YQ1E$g?FhZ*npfyP}mYXWzUKN963==7l!(C=uf||XpOMzY_ zUUFc>cppN6j4^}oXbTdl<4TZ>o)rhSab6OmU5E$1dYPA7hk_*Q{YwJ84_`MH zC(b3Bn>QIV4VI(MHoC73#|pb2#JXNfHcsKjxm|ad&XKj`jM%_uNL2r4M!bT}`QpF*B4S>gTuju*+g+ zoO!afjFmx=7_w_#`oA^i&KCYrXx?>+r+9xlRr?D-_ss_|q$~Hgr_y}X7U+Ln^4Lj0 z$HNWZbj18m(80a$Xdalh{5l6gLS_f=$mI02l z>_UPJlt3_SqwgXa`tZhPC0e9ym?O`8*?Ae4%OOT$i72O6FT8nq4{}Lhdw?viyD+9L z_r)bN?p=hZG&KO!+)L4fKLWjuuidv1H-|D>Sy#XuQ8B9sNDCq|iv$_cn`Tm(+$c}K z?%2*SYKY>tf=&erim0j9}l?L};RoSq3`9 zhf}L?Im4^t>*6TCp!tq(eT17&z*D{6ZA=@Q+I0*{yR+P%y zq07nJlQ=Shn;vBCT4z?Jw{Tu{m(_jw>rN^YQJ3A9_g&rDGd+`(>c;g>u|?+7>{zR} z5A<^N;vZ$-G8m4x`z(plJnooQ$JdQ9|MfS;+MinHKPDW8KyciY#F3mz{o@62xrYEg zDd&X&J&|WJgi?DBf{z%#193)?ZeLArTc$;lC_4E%|liMkV# zVCacpXx|nt@^#ff_9)I>#FxcU)Kf4_*LDD}BL}apFGE@qNns-VbfR*+KQLJ6YC+3g zIEgU68E}%l@p)&bmAzP_9q-IqAo9f^wVR!`Co>}ii37GgBFblv)5P6ukq^_L3UW~R zjShlWqQ9()=(eqX-64fMdbe}!@S9AO3mNLh3qig@`~&#se|sV9O;vO1!`Cl8Kvi^VJmL zpgHVy!$oFK!LsZ{=^=;mKMfA7urdr#;T(4NoMJ4Id+)D0dpchwonmPHLydAdCdir2 zkJTrRPp_ZiuYDytCL9w71##O-IA|5PdGrnGx%7;AKP&;gBhOrz&wQ`wLb!8d3`^>M zF|=Ez9q+iw;6PD4XIo>n^ia&O{zDC=ghK`KGg)IA+*7lZG*&@PbGM!a-Q=>y;?v6C z8b>az0uI_wzf|&8Jv3u#G|FJW=28|`8h=QCAMk5n!ScxL32b0gZNGTAP^JkWahs%V z8u-^j@!$8aYc9IcuV3OA8Aoq%g1%<#|8 zrgFjl$Y%GB*Pht|N(AtmB1tIzP6!qUnJi=Pb?J7i>KD5YWMwzk!ik0XIUrg%gwS@R z1e{GKkO9Q$h+^=SShQ#ia1Ef;^G5TMqmV&MC@-xWTna^IQ%<%4DEYchVi@OR=izdj zt)9h;LS_i*7?Q)bI`;(mTG$4n+(mPLx9eyadZkxv+4Fb}+^;Uhins-xe%ItUiXA3v z%0S?mqh>+bIeebjt4j>>x@?fmVj||Ie9MbOz-l6)U~;V5g$c#7aYMw(pyr)h0l(ip zdKHpzq39}z5K)F4DtR?sxkteUMNdDPBXod2!A>uB1s@!mL5kaRg&WO?iIYJ^TfHh* z;SDRUIj-U7)bVD@a(sqwmF}s(c)g>CBqxB?rXxY`V9rdeckDc?lT2G~bopmR@k?Ks z4|QrOx+ZI*{5Jz+=N-qjpH(}J>d$AUys6ipa~PM|;x4Q&OmP}{p3xA7eK-7}qcbBr zXU|kXg^OV?{`pMKyh{h;POS<51@G?hBPrg#CAbvU>js73)r)I*G5KJp4CFk9ItCE7p((x1CCk5maY<_L;oXL~0}a4qG->IpI%uIi zz63_gngI+j1uTGP5uE$s`8KoOCF*-p^KJSm#uX{ z>$1lyIT$VtT>RQi_{%$bRYrKL?q8ROD*buA&kJ`X@&ktKvSM;St0&GdRp@Z#)5K~< zaeeYLd$g`=1|$2x{0i1PTDG54zn`M4&Nyd2dIpzqX5KmGJjMr7!@56g{7_6uh-*|l zc@QhTTJE=;T5g3U8_Mnc9yelFpf>hh^Vwk7h2$cgxN5+I)#%nn=&DN7$p3!8J2OBK zviyBw=3lu02y!Z**u=Gqwb`%zDTG-qj$*LuMcNDR**F4G3F31IwgCi>vRHvP*3fN* z4;s~zBtvYQZq}RJU)mZ=T$*MnCzu{7YD%tgFq2wAwW^hxjk_XTAj(iIumQN%EM^;l z8mE9x{~<;&WLmaJiI(>Q-eewdwnwL!`~JOMjW#5)VvnNiFQ*h~Usyy3`WVO6JMCz_v)hQxb+3@bkGXq|2&y5_xqJbO9hW=G* z9D-&O4 zsS)9UUAekyBCIy;BxD%XUoSj=k9dPdp9&Mdy&or2c%rG|6Mz!fu8{zSTU$1{9pcp} z_#oy6J^VYqUWV<5xF7qw$XSO#WG&{B?pRr$pB~+`;X_;tVRq0OFQCV88%_aEY2ha^ z5)T8`6uI=^c8^fe&y$LH325v_TrL7mLJLf6mFon2F3);~Y; z$x~XKm=b+(mLt)XPq;4M{YV(M6a++>{E$Q*hucGIHIu2u`|b3qALZ*_OFev=%wXzX zrdF_5#Gt3T(CDV%pThO>T6pj;-79q}$9K=vg7qH0F|tp5{v*@%_@C0i!Mn2r)b7=f zj&1!LI9i~`OB(L5dA?$qKV4V-3>i%t`y>YDX0veMWp3#9E7c3XrLiwP^XO;{gbAac z)yVp+j+lJ*gMha~i8<`{y=aBvB0v4bXlJyu2oZYG;e9$ahJqB1X4nvV)0XlI;2N0$ z0Tw;wUX>3&gGuONqA71Fw&Hb@fkJ{c3sCQ8(#{F+%&or9BprwxPJ7F(uW{vcfZAR~ z9L%01>U*$X_UQJe)LK$Lr>2W8=ze)%;@Sp6u`hGVG&5zAP2*gMG40adj*?uM>CKU^ zL9OpQP{b)iufHMrgCRnUHSk^je4~)-vat&%-{yJXinwXx!WEfD!kq#AqBgo*;4`C_j||QLusY z_`9PvQg*jUJBj#e;FtMj2+a%!6TeGbwh&H{_pn!G%CcmY{fsj2^IJ%V8E?n?U+-8| zQ#$LrB&p*1NXf~h0rYU7=1|t$igCjWcjdc;z`{7QBF|PPAytaw+F;~b$`t2Cy>r&& ze#l!w9?#CDC&tz<&S-3YEbR}KPU;MO(F$-sZYVm~WFB4B+@&43VXS#E^}kOo1Rkjx zNG0IsXPW;zpk_fC!u%C1;DUZ@b-@0r=(o)k#JyLVk<|PKEJ0pD!kJKbm>AhPs!uSj z64SJ`QrQZQwDIQ7JLk(J04>Md)fEQw1k|>9XBS0p|D9 zOY^Ye5|7TeX=431Y&$uF8sist!k))&?0nLF%fHtlm$~;Q;2&Hs-Z_^?)8G$P_kSs4 zUgG2uT3a#{C_VWK&PgF#x%Y6zfK-69=u5lzVDIfV>9^?^Gv}tW>N@9OyiOq3d%GE5 z{wg4=jX#HYfa_82R`M&I$oJOX(qLU|z7EY%H%sGr-3N)%u!2-&%AM1l1!*5`Va8jI 
z2OC6-*FLOA*UfLAXNn_f8C-GaIZ}?VdD4L((1swuxV;6oB7UQ_0&LOandeyc+WiT~ z))e>OCJqrG*PQ#p_i#1@ipl1IDn!^+A58o37tvoyD+W#e1*vDRDq^syQqTyuFC!*a z02@*}8x^^?_Jb%&2u^umRRjg%39u9O-W6J8%=4fwf^|9F(@ZdN(=!TUxv4n?$-fk@ ztREz63=)ZjNDR@SAK`3=eN1pC-{mqWDakvp{~$Zl4LdUJYU*x*h3EPX?r8hDxO3KK zTaOKpkKULVy1bpjPhu&(( zLWR*ig!X|toI;0j1!;#To~zF$#N^k1XzdlCqF4rZ<}-Vzt)h~PF6N}dJo_|$Y~lrR z6e(8o>pqKoKJ{$i6K4~H7K-V`fGn0sgYTH!lg5FWVE=mul80zrGiuy3>Z`qOSh-5`ZqMRhp{x05`!MP%D3hQ@USby$C_N z2zhS#YPT=|G<&vsUq*QoGfD{O`Y)IzBxhvsT`GjG&%#;5v#GkX^D9V+?-~=z*GIutkr4l7GT4n5D_If0n;y45V+=^A^sSD`_W8hR;Hzz9 z&oxc$>hMgYqvx-^>%SLplRz;Lg$chEw)NKw!~`jUC*#(Bm0!i&kvG)_p!Q%jA)_<{ zgQD86c|sgBzz@m#I*`Id%7kde%9Sa`s+$?JtvbHkZVLR zUTDFVhh%HySIOE*_p+HTGtbdH%b;ehfZuTiaz<+mwwuww=@-lS=#h#0KpP^kT!d-< zoyb=hm<%og4mS+i{W#1H6#Y2!Air!ce{p^$-@pN#8Uqumxj-1603g^Xq|b!mS#8-h zL3O;@iXxXJD&|9FYJ*>BcYt(C@b{Hln{;G-tXv^yD>ja#*-US?^pewm@`! zGK*k&cQdPLkh>(O=GR&C)It2crW9FUP9^>eUBXXh{3PeHJ_nEkCB4M}UrPm3YbyHi z6Mfz#^yy42)z zzNAy)|KVtRn@qGs`oie!OV5n+2ySQH^b5y#o*o(H{wE1{8(+A!(|y)I08tCA10QVk zHJRw|hOJ^8GaB-9y6vkfHpfb|nx7x2Mw`yv&$PB=4Cd#hzXw;hWIts> zD=G&9K+29kloQ<_r^;t|(1D{_2jw0#bM8AP=WPqHkYCWI?PR%Tw~c8g$((2m>jM>2 z@3}C7`TIh9KN`TKkF}bOuOlT2A?l1PCk7DvOuhlDO&rnIii|$D11zTZ#92`i$Kuv~ zDI-)*Hy%r`p0Ix-vUKL1t6v@57%jNFHemz=0!-VXw%Ki!cr4-})W{HVw|zDG5%P2v z7WLd@y9uGs-32B5w$r9xwkF0?DquyDj<+!B_F@A(6R4^^*1B#m`{+dlkQ3p}GGnQF!K^9ZqJ@B)vkBXKxbKM;Sj!aTiv^Q{Kr!1yhgg5fqj=vewZ~6Y1Um zDatCivhgXwfupYB%>P7#P9w;YzX9vp@H7n+)bl&=>O1dG9cu2<5M*cb1V zA-hHaFzRUkNs(amIEd;4TRY93!^*_Tt(3^G9w`?F5P}A_804 z0P#B&yskp} z?sG@q1YfLgfwGlpsl9>8UWO*llUL^s!YI1-*zo`3*|DMsLxtwQuKQy ze2+5Ehw6{pKhBPgMqCo+DhhJCy_vG7%S*{k=ha82nTec^OMAAzSahS)ELBc3ij zqki&$aEMH!v*g4wblImk`7^4qpK4h2?$stC)zDw;e;g-&ohtwNk|a&3JRs`x%_@rJ z&lh0S@qFuPcyCJ(7SMtaZP1m=1k?sHZS`*7?da!07zl0FCk^OB1~k6GY-_{J$D;BF z(%HWB$6p(gy_YgMZXxA!lg@_6z=hM`7K0p3tQ97a<4&x~Cd4p%-~MVouLaKShaENV z+aHMLmE>1`7iN7^iq8A|S-Xy3U~c2!r>B|EMkE-tIP=!qzjXSB^l|ghW((fXSrd-S zl09G3X)%r`Q!#UB2`2asQUgdV8*DSkwXE`hGrN%nb2w^zBEU!4s@sQKpe#q|PD>#&} zzsc3~*%8m>z*EWO!F_ab{1`I>Ds2oPX;r*?>;gX=0BS#mY1!BytPm+Ml+`#B5y0|m z>x8>ZO&#L*Sz72X!#sAIY-1O9Hda209syrn7X z4X>lc#L?QEiv5VG)9tdS8S67Ai`Q3j-al_x`^K|dygPMx#^@k@e$-!*&_uDs7vtNt z*6!I<Ae-oPYfbBmD{fvhe_mUB>8K ze)Ip)b=GlFwcFd@GXo4EC7pwyq?9xWjFQrgfRreWprqsgDj=<(G)RM_l=NT#A~nPS zg93_lOAqyK&ha_tcg}mBKlQ`Mm_7Tx*SgmCx>ianEuaFhCYlXV<21EPzo6(*LS6yz3J5Jm4`49tWcM~OFJ&Nl&&zPMq`ky$jp>h3fZx7t7c1fXi1DI>{;ju_!KZy)G)UUB-n#14=@g|+9r5&HWg$Q0KEynmB3+zdLE5Qe$_cLLnwY~9mJYhP_>I$71fMlm_hT_r0Kw4$s zoPajSo0k&xU~+h1glwP*((Jpt&OLUtHqEAK(^pKMniOaPahND_YpvKX_|EPJe;-ww zmj~MsgLht$6-8!?zv?k}Io{3CejxAkwIvFf^nC1yVg%xXTKlCHd@> zL%rju)JBRJHe?9bn)*S!-Rmv9WuQRy9 ztVBQipKpf!3n-l3&qjNI3DFC=IHqtf+(O}A^iLev5jZ~?x=AKz)S>P0OS`BU2^mPm}|V5gvr86-<$Qc#ZtLZoy5V+$iOpP-=7vmzOIg{Wsc7Lf05!KH@q;N3GO8 z!{@KZvtNvY7Q8~U2aOG5LaAyzPN+e9)H; zw7dG+L2&!Nh8q|m?ACvMySG%}y9$p6&1$RjM>S6uh25Z!33do`o-2n{CV!a>JUVAsJzI{ z{2q|Ry**6WrnKI|^&QHR8_lZqi#FvZjheD}7!~KiD{!L0JfYIuAJjPl?*g|bmoeWW z8lTjx@CGmNSzRrYIZTu5FpH-+ujd?rSyjTWOA=3`#wT)xx#<^D?8O zsEo@#M5lUBUbfyAL$E1*>4iA%f#R5Ib8yl_qT1TA%#u7IM<@0j*tLm(i?vbyotFiR zx<&P$Pr-8dY|MP!G*_f^^4}4BQvMqHp}L3-O~&hd=3;$hUIXeGy0o#KaJVC(^VVAW z^s)U~dOOF<8a#yuB~m;~l3=yJ@_obVyRRlGB>hg|Su<%{yPaj8v&|s()XaH-Qg$(C z9+b$k_{X`ME{dP4x@@rQGBr$0F27veo{sf1+a4c6YXgG!WZP@b3DUvWSUh4c*S3tiBpX8;zW$r0tEe;?vXON0e@f>7=@v}9Fz zR{?tc{Gk%4XBo|JWqeov5%9t&I>+UDUKgliScR^GW4+;Tf!lo&O#?v+TV|!5(ZX%- z!y4ya!;w60Q%s4dA#E!#p1V(5>idZC4X;&@fwh3paqwOnVU#1^p>YZTx9k~9_fjfE zOe3!dF6IV*+i)#9VJU_=6nr80g8BfCH*zb!`!r@%bd*Qd4hthvMAyL&lrDJJsG;bH 
diff --git a/cv/3d_detection/pointpillars/pytorch/loss/__init__.py b/cv/3d_detection/pointpillars/pytorch/loss/__init__.py deleted file mode 100755 index fcd1262a..00000000 --- a/cv/3d_detection/pointpillars/pytorch/loss/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .loss import Loss \ No newline at end of file diff --git a/cv/3d_detection/pointpillars/pytorch/loss/loss.py b/cv/3d_detection/pointpillars/pytorch/loss/loss.py deleted file mode 100755 index 016e0460..00000000 --- a/cv/3d_detection/pointpillars/pytorch/loss/loss.py +++ /dev/null @@ -1,66 +0,0 @@ -import pdb -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class Loss(nn.Module): - def __init__(self, alpha=0.25, gamma=2.0, beta=1/9, cls_w=1.0, reg_w=2.0, dir_w=0.2): - super().__init__() - self.alpha = 0.25 - self.gamma = 2.0 - self.cls_w = cls_w - self.reg_w = reg_w - self.dir_w = dir_w - self.smooth_l1_loss = nn.SmoothL1Loss(reduction='none', - beta=beta) - self.dir_cls = nn.CrossEntropyLoss() - - def forward(self, - bbox_cls_pred, - bbox_pred, - bbox_dir_cls_pred, - batched_labels, - num_cls_pos, - batched_bbox_reg, - batched_dir_labels): - ''' - bbox_cls_pred: (n, 3) - bbox_pred: (n, 7) - bbox_dir_cls_pred: (n, 2) - batched_labels: (n, ) - num_cls_pos: int - batched_bbox_reg: (n, 7) - batched_dir_labels: (n, ) - return: loss, float. - ''' - # 1. bbox cls loss - # focal loss: FL = - \alpha_t (1 - p_t)^\gamma * log(p_t) - # y == 1 -> p_t = p - # y == 0 -> p_t = 1 - p - nclasses = bbox_cls_pred.size(1) - batched_labels = F.one_hot(batched_labels, nclasses + 1)[:, :nclasses].float() # (n, 3) - - bbox_cls_pred_sigmoid = torch.sigmoid(bbox_cls_pred) - weights = self.alpha * (1 - bbox_cls_pred_sigmoid).pow(self.gamma) * batched_labels + \ - (1 - self.alpha) * bbox_cls_pred_sigmoid.pow(self.gamma) * (1 - batched_labels) # (n, 3) - cls_loss = F.binary_cross_entropy(bbox_cls_pred_sigmoid, batched_labels, reduction='none') - cls_loss = cls_loss * weights - cls_loss = cls_loss.sum() / num_cls_pos - - # 2. regression loss - reg_loss = self.smooth_l1_loss(bbox_pred, batched_bbox_reg) - reg_loss = reg_loss.sum() / reg_loss.size(0) - - # 3. direction cls loss - dir_cls_loss = self.dir_cls(bbox_dir_cls_pred, batched_dir_labels) - - # 4. total loss - total_loss = self.cls_w * cls_loss + self.reg_w * reg_loss + self.dir_w * dir_cls_loss - - loss_dict={'cls_loss': cls_loss, - 'reg_loss': reg_loss, - 'dir_cls_loss': dir_cls_loss, - 'total_loss': total_loss} - return loss_dict - \ No newline at end of file diff --git a/cv/3d_detection/pointpillars/pytorch/misc/log.md b/cv/3d_detection/pointpillars/pytorch/misc/log.md deleted file mode 100755 index 280d0445..00000000 --- a/cv/3d_detection/pointpillars/pytorch/misc/log.md +++ /dev/null @@ -1,103 +0,0 @@ -- 难点: 碰撞检测 -- 代码难点: nms - -## Datasets - -- database 随机采样增强 - - 核心: 碰撞检测, 点是否在立方体内 - - 输入: gt_bboxes_3d, pts, gt_labels - - 输出: gt_bboxes_3d, pts, gt_labels - - 逻辑: - 1. 从Car, Pedestrian, Cyclist的database数据集中随机采集一定数量的bbox及inside points, 使每类bboxes的数量分别达到15, 10, 10. - 2. 将这些采样的bboxes进行碰撞检测, 满足碰撞检测的bboxes和对应labels加到gt_bboxes_3d, gt_labels - 3. 把位于这些采样bboxes内点删除掉, 替换成bboxes内部的点. -- object 随机旋转平移 - - 核心: 碰撞检测, 点是否在立方体内 - - 输入: gt_bboxes_3d, pts - - 输出: gt_bboxes_3d, pts - - 逻辑: - 1. 以某个bbox为例, 随机产生num_try个平移向量t和旋转角度r, 旋转角度可以转成旋转矩阵(mat). - 2. 对bbox进行旋转和平移, 找到num_try中第一个通过碰撞测试的平移向量t和旋转角度r(mat). - 3. 对bbox内部的点进行旋转和平移. - 4. 对bbox进行旋转和平移. -- 整体随机水平翻转 - - object: 水平翻转 (注意角度) - - point: 水平翻转 -- 整体旋转, 缩放和平移 - - object: 旋转, 缩放和平移, object的旋转需要注意, (x, y, z)和angle都需要变化. 
- - point: 旋转, 缩放和平移 -- 删除范围外的point -- 删除范围外的bbox - - 需要注意的是: 这里做了旋转角度的归一化 (-pi, pi) -- point shuffle -- dataloader实现, `目前为了调试shuffle设置成了False, 之后需要设置成True` - -## Model - -- Pillar - - cuda拓展 - ``` - cd ops - python setup.py develop - ``` - - Voxelization类实现, 这里基本是从mmdet3d复制过来的; 修改了`coors_out`的顺序 (z, y, x) -> (x, y, z) - - PillarLayer: 整合batch数据整pillars - - PillarEncoder: pillar 编码((N, 4) -> (N, 9) -> (1, 64)) - - PillarScatter -- Backbone - - 完成 - - nn.init.kaiming_normal_初始化方式 (`删除cuda seed`) -- Neck - - 完成 - - nn.init.kaiming_normal_初始化方式 (`删除cuda seed`) -- Head - - 完成 - - nn.init.normal_(m.weight, mean=0, std=0.01)初始化方式 (`删除cuda seed`) - -## Bbox (bottom center for z) - -- Anchor - - 生成完成 -- 编码, 解码完成 -- iou3d (完成) - - bev overlap - - height overlap - - iou3d -- anchor生成label - -## Train - -- loss - - cls loss - - pos: 与0, 1, 2类bboxes具有很大iou(大于某个阈值)的anchors; 与0, 1, 2类具有最大iou的anchors. - - neg: 与任何bboxes最有非常小iou(小于某个阈值)的anchors; 与-1类bboxes具有很大iou或者最大iou的anchors. - - reg loss - - 训练样本: 与0, 1, 2类bboxes具有很大iou(大于某个阈值)的anchors; 与0, 1, 2类具有最大iou的anchors. - - dir cls loss - - 训练样本: 与0, 1, 2类bboxes具有很大iou(大于某个阈值)的anchors; 与0, 1, 2类具有最大iou的anchors. -- 优化器 -- 开始训练 - -## Test and evaluate - -- 预测bbox -- 可视化 -- 评估 - - label_2的格式, mmdet3d .pkl的格式 - - iou 计算(bbox, bev, 3dbbox) - - gt_bbox, dt_bbox分类 (ignore, normal, remove) - - 基于tp, 计算score threshold - - 计算precision, recall - - 计算AP, mAP -- 性能优化 -- 代码优化 (不急) - -## bbox property evaluation - -以 `label='Pedestrian', difficulty=1`为例, - -| ignore level | gt | det | -|:---:|:---:|:---:| -| 1 | (cur_label == 'Pedestrian' && cur_difficulty > 1) or cur_label == 'Person_sitting' | cur_heigh < MIN_HEIGHTS[1] | -| 0 | cur_label == 'Pedestrian' && cur_difficulty <= 1 | cur_label == 'Pedestrian' | -| - 1 | Others | Others | diff --git a/cv/3d_detection/pointpillars/pytorch/misc/vis_data_gt.py b/cv/3d_detection/pointpillars/pytorch/misc/vis_data_gt.py deleted file mode 100755 index 741a010e..00000000 --- a/cv/3d_detection/pointpillars/pytorch/misc/vis_data_gt.py +++ /dev/null @@ -1,84 +0,0 @@ -import argparse -from ast import arg -import cv2 -import copy -import numpy as np -import os -import sys - -from yaml import parse -CUR = os.path.dirname(os.path.abspath(__file__)) -sys.path.append(os.path.dirname(CUR)) -from utils import read_points, read_calib, read_label, bbox_camera2lidar, vis_pc, bbox3d2corners,\ - points_lidar2image, vis_img_3d - - -def vis_gt(root, id, saved_root): - img_path = os.path.join(root, 'image_2', f'{id}.png') - lidar_path = os.path.join(root, 'velodyne', f'{id}.bin') - calib_path = os.path.join(root, 'calib', f'{id}.txt') - label_path = os.path.join(root, 'label_2', f'{id}.txt') - - img = cv2.imread(img_path) - img3d = copy.deepcopy(img) - lidar_points = read_points(lidar_path) - calib_dict = read_calib(calib_path) - annotation_dict = read_label(label_path) - - bboxes = annotation_dict['bbox'] - names = annotation_dict['name'] - colors = [[0, 0, 255], [0, 255, 0], [255, 0, 0], [255, 255, 0]] - CLASSES = { - 'Pedestrian': 0, - 'Cyclist': 1, - 'Car': 2 - } - - - ## 1. visualize 2d - for i, bbox in enumerate(bboxes): - x1, y1, x2, y2 = bbox - x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2) - cv2.rectangle(img, (x1, y1), (x2, y2), colors[CLASSES.get(names[i], -1)], 2) - cv2.imwrite(os.path.join(saved_root, f'{id}-2d.png'), img) - cv2.imshow(f'{id}-2d bbox', img) - cv2.waitKey(0) - - ## 2. 
visualize 3d bbox in point cloud - - # 2.1 camera coordinates - dimensions = annotation_dict['dimensions'] - location = annotation_dict['location'] - rotation_y = annotation_dict['rotation_y'] - names = annotation_dict['name'] - - # 2.2 lidar coordinates - bboxes_camera = np.concatenate([location, dimensions, rotation_y[:, None]], axis=-1) - tr_velo_to_cam = calib_dict['Tr_velo_to_cam'] - r0_rect = calib_dict['R0_rect'] - bboxes_lidar = bbox_camera2lidar(bboxes_camera, tr_velo_to_cam, r0_rect) - lidar_bboxes_points = bbox3d2corners(bboxes_lidar) # (N, 8, 3) - labels = [CLASSES.get(name, -1) for name in names] - vis_pc(lidar_points, lidar_bboxes_points, labels) # (N, 8, 2) - - ## 3. visualize 3d bbox in image - P2 = calib_dict['P2'] - image_points = points_lidar2image(lidar_bboxes_points, tr_velo_to_cam, r0_rect, P2) - img3d = vis_img_3d(img3d, image_points, labels) - cv2.imwrite(os.path.join(saved_root, f'{id}-3d.png'), img3d) - cv2.imshow(f'{id}-3d bbox', img3d) - cv2.waitKey(0) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Data information') - parser.add_argument('--root', default='/home/lifa/data/KITTI/training') - parser.add_argument('--id', default='000134') - parser.add_argument('--saved_root', default='tmp') - args = parser.parse_args() - - root = args.root - id = args.id - saved_root = args.saved_root - os.makedirs(saved_root, exist_ok=True) - vis_gt(root, id, saved_root) diff --git a/cv/3d_detection/pointpillars/pytorch/model/__init__.py b/cv/3d_detection/pointpillars/pytorch/model/__init__.py deleted file mode 100755 index 340ddce0..00000000 --- a/cv/3d_detection/pointpillars/pytorch/model/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .anchors import Anchors, anchors2bboxes, bboxes2deltas -from .pointpillars import PointPillars diff --git a/cv/3d_detection/pointpillars/pytorch/model/anchors.py b/cv/3d_detection/pointpillars/pytorch/model/anchors.py deleted file mode 100755 index 23a48534..00000000 --- a/cv/3d_detection/pointpillars/pytorch/model/anchors.py +++ /dev/null @@ -1,231 +0,0 @@ -import pdb -import numpy as np -import torch -from utils import limit_period, iou2d_nearest - - -class Anchors(): - def __init__(self, ranges, sizes, rotations): - assert len(ranges) == len(sizes) - self.ranges = ranges - self.sizes = sizes - self.rotations = rotations - - def get_anchors(self, feature_map_size, anchor_range, anchor_size, rotations): - ''' - feature_map_size: (y_l, x_l) - anchor_range: [x1, y1, z1, x2, y2, z2] - anchor_size: [w, l, h] - rotations: [0, 1.57] - return: shape=(y_l, x_l, 2, 7) - ''' - device = feature_map_size.device - x_centers = torch.linspace(anchor_range[0], anchor_range[3], feature_map_size[1] + 1, device=device) - y_centers = torch.linspace(anchor_range[1], anchor_range[4], feature_map_size[0] + 1, device=device) - z_centers = torch.linspace(anchor_range[2], anchor_range[5], 1 + 1, device=device) - - x_shift = (x_centers[1] - x_centers[0]) / 2 - y_shift = (y_centers[1] - y_centers[0]) / 2 - z_shift = (z_centers[1] - z_centers[0]) / 2 - x_centers = x_centers[:feature_map_size[1]] + x_shift # (feature_map_size[1], ) - y_centers = y_centers[:feature_map_size[0]] + y_shift # (feature_map_size[0], ) - z_centers = z_centers[:1] + z_shift # (1, ) - - # [feature_map_size[1], feature_map_size[0], 1, 2] * 4 - meshgrids = torch.meshgrid(x_centers, y_centers, z_centers, rotations) - meshgrids = list(meshgrids) - for i in range(len(meshgrids)): - meshgrids[i] = meshgrids[i][..., None] # [feature_map_size[1], feature_map_size[0], 1, 2, 1] 
- - anchor_size = anchor_size[None, None, None, None, :] - repeat_shape = [feature_map_size[1], feature_map_size[0], 1, len(rotations), 1] - anchor_size = anchor_size.repeat(repeat_shape) # [feature_map_size[1], feature_map_size[0], 1, 2, 3] - meshgrids.insert(3, anchor_size) - anchors = torch.cat(meshgrids, dim=-1).permute(2, 1, 0, 3, 4).contiguous() # [1, feature_map_size[0], feature_map_size[1], 2, 7] - return anchors.squeeze(0) - - - def get_multi_anchors(self, feature_map_size): - ''' - feature_map_size: (y_l, x_l) - ranges: [[x1, y1, z1, x2, y2, z2], [x1, y1, z1, x2, y2, z2], [x1, y1, z1, x2, y2, z2]] - sizes: [[w, l, h], [w, l, h], [w, l, h]] - rotations: [0, 1.57] - return: shape=(y_l, x_l, 3, 2, 7) - ''' - device = feature_map_size.device - ranges = torch.tensor(self.ranges, device=device) - sizes = torch.tensor(self.sizes, device=device) - rotations = torch.tensor(self.rotations, device=device) - multi_anchors = [] - for i in range(len(ranges)): - anchors = self.get_anchors(feature_map_size=feature_map_size, - anchor_range=ranges[i], - anchor_size=sizes[i], - rotations=rotations) - multi_anchors.append(anchors[:, :, None, :, :]) - multi_anchors = torch.cat(multi_anchors, dim=2) - - return multi_anchors - - -def anchors2bboxes(anchors, deltas): - ''' - anchors: (M, 7), (x, y, z, w, l, h, theta) - deltas: (M, 7) - return: (M, 7) - ''' - da = torch.sqrt(anchors[:, 3] ** 2 + anchors[:, 4] ** 2) - x = deltas[:, 0] * da + anchors[:, 0] - y = deltas[:, 1] * da + anchors[:, 1] - z = deltas[:, 2] * anchors[:, 5] + anchors[:, 2] + anchors[:, 5] / 2 - - w = anchors[:, 3] * torch.exp(deltas[:, 3]) - l = anchors[:, 4] * torch.exp(deltas[:, 4]) - h = anchors[:, 5] * torch.exp(deltas[:, 5]) - - z = z - h / 2 - - theta = anchors[:, 6] + deltas[:, 6] - - bboxes = torch.stack([x, y, z, w, l, h, theta], dim=1) - return bboxes - - -def bboxes2deltas(bboxes, anchors): - ''' - bboxes: (M, 7), (x, y, z, w, l, h, theta) - anchors: (M, 7) - return: (M, 7) - ''' - da = torch.sqrt(anchors[:, 3] ** 2 + anchors[:, 4] ** 2) - - dx = (bboxes[:, 0] - anchors[:, 0]) / da - dy = (bboxes[:, 1] - anchors[:, 1]) / da - - zb = bboxes[:, 2] + bboxes[:, 5] / 2 # bottom center - za = anchors[:, 2] + anchors[:, 5] / 2 # bottom center - dz = (zb - za) / anchors[:, 5] # bottom center - - dw = torch.log(bboxes[:, 3] / anchors[:, 3]) - dl = torch.log(bboxes[:, 4] / anchors[:, 4]) - dh = torch.log(bboxes[:, 5] / anchors[:, 5]) - dtheta = bboxes[:, 6] - anchors[:, 6] - - deltas = torch.stack([dx, dy, dz, dw, dl, dh, dtheta], dim=1) - return deltas - - -def anchor_target(batched_anchors, batched_gt_bboxes, batched_gt_labels, assigners, nclasses): - ''' - batched_anchors: [(y_l, x_l, 3, 2, 7), (y_l, x_l, 3, 2, 7), ... ] - batched_gt_bboxes: [(n1, 7), (n2, 7), ...] - batched_gt_labels: [(n1, ), (n2, ), ...] 
- return: - dict = {batched_anchors_labels: (bs, n_anchors), - batched_labels_weights: (bs, n_anchors), - batched_anchors_reg: (bs, n_anchors, 7), - batched_reg_weights: (bs, n_anchors), - batched_anchors_dir: (bs, n_anchors), - batched_dir_weights: (bs, n_anchors)} - ''' - assert len(batched_anchors) == len(batched_gt_bboxes) == len(batched_gt_labels) - batch_size = len(batched_anchors) - n_assigners = len(assigners) - batched_labels, batched_label_weights = [], [] - batched_bbox_reg, batched_bbox_reg_weights = [], [] - batched_dir_labels, batched_dir_labels_weights = [], [] - for i in range(batch_size): - anchors = batched_anchors[i] - gt_bboxes, gt_labels = batched_gt_bboxes[i], batched_gt_labels[i] - # what we want to get next ? - # 1. identify positive anchors and negative anchors -> cls - # 2. identify the regresstion values -> reg - # 3. indentify the direction -> dir_cls - multi_labels, multi_label_weights = [], [] - multi_bbox_reg, multi_bbox_reg_weights = [], [] - multi_dir_labels, multi_dir_labels_weights = [], [] - d1, d2, d3, d4, d5 = anchors.size() - for j in range(n_assigners): # multi anchors - assigner = assigners[j] - pos_iou_thr, neg_iou_thr, min_iou_thr = \ - assigner['pos_iou_thr'], assigner['neg_iou_thr'], assigner['min_iou_thr'] - cur_anchors = anchors[:, :, j, :, :].reshape(-1, 7) - overlaps = iou2d_nearest(gt_bboxes, cur_anchors) - max_overlaps, max_overlaps_idx = torch.max(overlaps, dim=0) - gt_max_overlaps, _ = torch.max(overlaps, dim=1) - - assigned_gt_inds = -torch.ones_like(cur_anchors[:, 0], dtype=torch.long) - # a. negative anchors - assigned_gt_inds[max_overlaps < neg_iou_thr] = 0 - - # b. positive anchors - # rule 1 - assigned_gt_inds[max_overlaps >= pos_iou_thr] = max_overlaps_idx[max_overlaps >= pos_iou_thr] + 1 - - # rule 2 - # support one bbox to multi anchors, only if the anchors are with the highest iou. - # rule2 may modify the labels generated by rule 1 - for i in range(len(gt_bboxes)): - if gt_max_overlaps[i] >= min_iou_thr: - assigned_gt_inds[overlaps[i] == gt_max_overlaps[i]] = i + 1 - - pos_flag = assigned_gt_inds > 0 - neg_flag = assigned_gt_inds == 0 - # 1. anchor labels - assigned_gt_labels = torch.zeros_like(cur_anchors[:, 0], dtype=torch.long) + nclasses # -1 is not optimal, for some bboxes are with labels -1 - assigned_gt_labels[pos_flag] = gt_labels[assigned_gt_inds[pos_flag] - 1].long() - assigned_gt_labels_weights = torch.zeros_like(cur_anchors[:, 0]) - assigned_gt_labels_weights[pos_flag] = 1 - assigned_gt_labels_weights[neg_flag] = 1 - - # 2. anchor regression - assigned_gt_reg_weights = torch.zeros_like(cur_anchors[:, 0]) - assigned_gt_reg_weights[pos_flag] = 1 - - assigned_gt_reg = torch.zeros_like(cur_anchors) - positive_anchors = cur_anchors[pos_flag] - corr_gt_bboxes = gt_bboxes[assigned_gt_inds[pos_flag] - 1] - assigned_gt_reg[pos_flag] = bboxes2deltas(corr_gt_bboxes, positive_anchors) - - # 3. 
anchor direction - assigned_gt_dir_weights = torch.zeros_like(cur_anchors[:, 0]) - assigned_gt_dir_weights[pos_flag] = 1 - - assigned_gt_dir = torch.zeros_like(cur_anchors[:, 0], dtype=torch.long) - dir_cls_targets = limit_period(corr_gt_bboxes[:, 6].cpu(), 0, 2 * np.pi).to(corr_gt_bboxes) - dir_cls_targets = torch.floor(dir_cls_targets / np.pi).long() - assigned_gt_dir[pos_flag] = torch.clamp(dir_cls_targets, min=0, max=1) - - multi_labels.append(assigned_gt_labels.reshape(d1, d2, 1, d4)) - multi_label_weights.append(assigned_gt_labels_weights.reshape(d1, d2, 1, d4)) - multi_bbox_reg.append(assigned_gt_reg.reshape(d1, d2, 1, d4, -1)) - multi_bbox_reg_weights.append(assigned_gt_reg_weights.reshape(d1, d2, 1, d4)) - multi_dir_labels.append(assigned_gt_dir.reshape(d1, d2, 1, d4)) - multi_dir_labels_weights.append(assigned_gt_dir_weights.reshape(d1, d2, 1, d4)) - - multi_labels = torch.cat(multi_labels, dim=-2).reshape(-1) - multi_label_weights = torch.cat(multi_label_weights, dim=-2).reshape(-1) - multi_bbox_reg = torch.cat(multi_bbox_reg, dim=-3).reshape(-1, d5) - multi_bbox_reg_weights = torch.cat(multi_bbox_reg_weights, dim=-2).reshape(-1) - multi_dir_labels = torch.cat(multi_dir_labels, dim=-2).reshape(-1) - multi_dir_labels_weights = torch.cat(multi_dir_labels_weights, dim=-2).reshape(-1) - - batched_labels.append(multi_labels) - batched_label_weights.append(multi_label_weights) - batched_bbox_reg.append(multi_bbox_reg) - batched_bbox_reg_weights.append(multi_bbox_reg_weights) - batched_dir_labels.append(multi_dir_labels) - batched_dir_labels_weights.append(multi_dir_labels_weights) - - rt_dict = dict( - batched_labels=torch.stack(batched_labels, 0), # (bs, y_l * x_l * 3 * 2) - batched_label_weights=torch.stack(batched_label_weights, 0), # (bs, y_l * x_l * 3 * 2) - batched_bbox_reg=torch.stack(batched_bbox_reg, 0), # (bs, y_l * x_l * 3 * 2, 7) - batched_bbox_reg_weights=torch.stack(batched_bbox_reg_weights, 0), # (bs, y_l * x_l * 3 * 2) - batched_dir_labels=torch.stack(batched_dir_labels, 0), # (bs, y_l * x_l * 3 * 2) - batched_dir_labels_weights=torch.stack(batched_dir_labels_weights, 0) # (bs, y_l * x_l * 3 * 2) - ) - - return rt_dict - \ No newline at end of file diff --git a/cv/3d_detection/pointpillars/pytorch/model/pointpillars.py b/cv/3d_detection/pointpillars/pytorch/model/pointpillars.py deleted file mode 100755 index e2d8ccf7..00000000 --- a/cv/3d_detection/pointpillars/pytorch/model/pointpillars.py +++ /dev/null @@ -1,427 +0,0 @@ -import numpy as np -import pdb -import torch -import torch.nn as nn -import torch.nn.functional as F -from model.anchors import Anchors, anchor_target, anchors2bboxes -from ops import Voxelization, nms_cuda -from utils import limit_period - - -class PillarLayer(nn.Module): - def __init__(self, voxel_size, point_cloud_range, max_num_points, max_voxels): - super().__init__() - self.voxel_layer = Voxelization(voxel_size=voxel_size, - point_cloud_range=point_cloud_range, - max_num_points=max_num_points, - max_voxels=max_voxels) - - @torch.no_grad() - def forward(self, batched_pts): - ''' - batched_pts: list[tensor], len(batched_pts) = bs - return: - pillars: (p1 + p2 + ... + pb, num_points, c), - coors_batch: (p1 + p2 + ... + pb, 1 + 3), - num_points_per_pillar: (p1 + p2 + ... 
+ pb, ), (b: batch size) - ''' - pillars, coors, npoints_per_pillar = [], [], [] - for i, pts in enumerate(batched_pts): - voxels_out, coors_out, num_points_per_voxel_out = self.voxel_layer(pts) - # voxels_out: (max_voxel, num_points, c), coors_out: (max_voxel, 3) - # num_points_per_voxel_out: (max_voxel, ) - pillars.append(voxels_out) - coors.append(coors_out.long()) - npoints_per_pillar.append(num_points_per_voxel_out) - - pillars = torch.cat(pillars, dim=0) # (p1 + p2 + ... + pb, num_points, c) - npoints_per_pillar = torch.cat(npoints_per_pillar, dim=0) # (p1 + p2 + ... + pb, ) - coors_batch = [] - for i, cur_coors in enumerate(coors): - coors_batch.append(F.pad(cur_coors, (1, 0), value=i)) - coors_batch = torch.cat(coors_batch, dim=0) # (p1 + p2 + ... + pb, 1 + 3) - - return pillars, coors_batch, npoints_per_pillar - - -class PillarEncoder(nn.Module): - def __init__(self, voxel_size, point_cloud_range, in_channel, out_channel): - super().__init__() - self.out_channel = out_channel - self.vx, self.vy = voxel_size[0], voxel_size[1] - self.x_offset = voxel_size[0] / 2 + point_cloud_range[0] - self.y_offset = voxel_size[1] / 2 + point_cloud_range[1] - self.x_l = int((point_cloud_range[3] - point_cloud_range[0]) / voxel_size[0]) - self.y_l = int((point_cloud_range[4] - point_cloud_range[1]) / voxel_size[1]) - - self.conv = nn.Conv1d(in_channel, out_channel, 1, bias=False) - self.bn = nn.BatchNorm1d(out_channel, eps=1e-3, momentum=0.01) - - def forward(self, pillars, coors_batch, npoints_per_pillar): - ''' - pillars: (p1 + p2 + ... + pb, num_points, c), c = 4 - coors_batch: (p1 + p2 + ... + pb, 1 + 3) - npoints_per_pillar: (p1 + p2 + ... + pb, ) - return: (bs, out_channel, y_l, x_l) - ''' - device = pillars.device - # 1. calculate offset to the points center (in each pillar) - offset_pt_center = pillars[:, :, :3] - torch.sum(pillars[:, :, :3], dim=1, keepdim=True) / npoints_per_pillar[:, None, None] # (p1 + p2 + ... + pb, num_points, 3) - - # 2. calculate offset to the pillar center - x_offset_pi_center = pillars[:, :, :1] - (coors_batch[:, None, 1:2] * self.vx + self.x_offset) # (p1 + p2 + ... + pb, num_points, 1) - y_offset_pi_center = pillars[:, :, 1:2] - (coors_batch[:, None, 2:3] * self.vy + self.y_offset) # (p1 + p2 + ... + pb, num_points, 1) - - # 3. encoder - features = torch.cat([pillars, offset_pt_center, x_offset_pi_center, y_offset_pi_center], dim=-1) # (p1 + p2 + ... + pb, num_points, 9) - features[:, :, 0:1] = x_offset_pi_center # tmp - features[:, :, 1:2] = y_offset_pi_center # tmp - # In consitent with mmdet3d. - # The reason can be referenced to https://github.com/open-mmlab/mmdetection3d/issues/1150 - - # 4. find mask for (0, 0, 0) and update the encoded features - # a very beautiful implementation - voxel_ids = torch.arange(0, pillars.size(1)).to(device) # (num_points, ) - mask = voxel_ids[:, None] < npoints_per_pillar[None, :] # (num_points, p1 + p2 + ... + pb) - mask = mask.permute(1, 0).contiguous() # (p1 + p2 + ... + pb, num_points) - features *= mask[:, :, None] - - # 5. embedding - features = features.permute(0, 2, 1).contiguous() # (p1 + p2 + ... + pb, 9, num_points) - features = F.relu(self.bn(self.conv(features))) # (p1 + p2 + ... + pb, out_channels, num_points) - pooling_features = torch.max(features, dim=-1)[0] # (p1 + p2 + ... + pb, out_channels) - - # 6. 
pillar scatter - batched_canvas = [] - bs = coors_batch[-1, 0] + 1 - for i in range(bs): - cur_coors_idx = coors_batch[:, 0] == i - cur_coors = coors_batch[cur_coors_idx, :] - cur_features = pooling_features[cur_coors_idx] - - canvas = torch.zeros((self.x_l, self.y_l, self.out_channel), dtype=torch.float32, device=device) - canvas[cur_coors[:, 1], cur_coors[:, 2]] = cur_features - canvas = canvas.permute(2, 1, 0).contiguous() - batched_canvas.append(canvas) - batched_canvas = torch.stack(batched_canvas, dim=0) # (bs, in_channel, self.y_l, self.x_l) - return batched_canvas - - -class Backbone(nn.Module): - def __init__(self, in_channel, out_channels, layer_nums, layer_strides=[2, 2, 2]): - super().__init__() - assert len(out_channels) == len(layer_nums) - assert len(out_channels) == len(layer_strides) - - self.multi_blocks = nn.ModuleList() - for i in range(len(layer_strides)): - blocks = [] - blocks.append(nn.Conv2d(in_channel, out_channels[i], 3, stride=layer_strides[i], bias=False, padding=1)) - blocks.append(nn.BatchNorm2d(out_channels[i], eps=1e-3, momentum=0.01)) - blocks.append(nn.ReLU(inplace=True)) - - for _ in range(layer_nums[i]): - blocks.append(nn.Conv2d(out_channels[i], out_channels[i], 3, bias=False, padding=1)) - blocks.append(nn.BatchNorm2d(out_channels[i], eps=1e-3, momentum=0.01)) - blocks.append(nn.ReLU(inplace=True)) - - in_channel = out_channels[i] - self.multi_blocks.append(nn.Sequential(*blocks)) - - # in consitent with mmdet3d - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - - def forward(self, x): - ''' - x: (b, c, y_l, x_l). Default: (6, 64, 496, 432) - return: list[]. Default: [(6, 64, 248, 216), (6, 128, 124, 108), (6, 256, 62, 54)] - ''' - outs = [] - for i in range(len(self.multi_blocks)): - x = self.multi_blocks[i](x) - outs.append(x) - return outs - - -class Neck(nn.Module): - def __init__(self, in_channels, upsample_strides, out_channels): - super().__init__() - assert len(in_channels) == len(upsample_strides) - assert len(upsample_strides) == len(out_channels) - - self.decoder_blocks = nn.ModuleList() - for i in range(len(in_channels)): - decoder_block = [] - decoder_block.append(nn.ConvTranspose2d(in_channels[i], - out_channels[i], - upsample_strides[i], - stride=upsample_strides[i], - bias=False)) - decoder_block.append(nn.BatchNorm2d(out_channels[i], eps=1e-3, momentum=0.01)) - decoder_block.append(nn.ReLU(inplace=True)) - - self.decoder_blocks.append(nn.Sequential(*decoder_block)) - - # in consitent with mmdet3d - for m in self.modules(): - if isinstance(m, nn.ConvTranspose2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - - def forward(self, x): - ''' - x: [(bs, 64, 248, 216), (bs, 128, 124, 108), (bs, 256, 62, 54)] - return: (bs, 384, 248, 216) - ''' - outs = [] - for i in range(len(self.decoder_blocks)): - xi = self.decoder_blocks[i](x[i]) # (bs, 128, 248, 216) - outs.append(xi) - out = torch.cat(outs, dim=1) - return out - - -class Head(nn.Module): - def __init__(self, in_channel, n_anchors, n_classes): - super().__init__() - - self.conv_cls = nn.Conv2d(in_channel, n_anchors*n_classes, 1) - self.conv_reg = nn.Conv2d(in_channel, n_anchors*7, 1) - self.conv_dir_cls = nn.Conv2d(in_channel, n_anchors*2, 1) - - # in consitent with mmdet3d - conv_layer_id = 0 - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.normal_(m.weight, mean=0, std=0.01) - if conv_layer_id == 0: - prior_prob = 0.01 - bias_init = float(-np.log((1 - 
prior_prob) / prior_prob)) - nn.init.constant_(m.bias, bias_init) - else: - nn.init.constant_(m.bias, 0) - conv_layer_id += 1 - - def forward(self, x): - ''' - x: (bs, 384, 248, 216) - return: - bbox_cls_pred: (bs, n_anchors*3, 248, 216) - bbox_pred: (bs, n_anchors*7, 248, 216) - bbox_dir_cls_pred: (bs, n_anchors*2, 248, 216) - ''' - bbox_cls_pred = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - bbox_dir_cls_pred = self.conv_dir_cls(x) - return bbox_cls_pred, bbox_pred, bbox_dir_cls_pred - - -class PointPillars(nn.Module): - def __init__(self, - nclasses=3, - voxel_size=[0.16, 0.16, 4], - point_cloud_range=[0, -39.68, -3, 69.12, 39.68, 1], - max_num_points=32, - max_voxels=(16000, 40000)): - super().__init__() - self.nclasses = nclasses - self.pillar_layer = PillarLayer(voxel_size=voxel_size, - point_cloud_range=point_cloud_range, - max_num_points=max_num_points, - max_voxels=max_voxels) - self.pillar_encoder = PillarEncoder(voxel_size=voxel_size, - point_cloud_range=point_cloud_range, - in_channel=9, - out_channel=64) - self.backbone = Backbone(in_channel=64, - out_channels=[64, 128, 256], - layer_nums=[3, 5, 5]) - self.neck = Neck(in_channels=[64, 128, 256], - upsample_strides=[1, 2, 4], - out_channels=[128, 128, 128]) - self.head = Head(in_channel=384, n_anchors=2*nclasses, n_classes=nclasses) - - # anchors - ranges = [[0, -39.68, -0.6, 69.12, 39.68, -0.6], - [0, -39.68, -0.6, 69.12, 39.68, -0.6], - [0, -39.68, -1.78, 69.12, 39.68, -1.78]] - sizes = [[0.6, 0.8, 1.73], [0.6, 1.76, 1.73], [1.6, 3.9, 1.56]] - rotations=[0, 1.57] - self.anchors_generator = Anchors(ranges=ranges, - sizes=sizes, - rotations=rotations) - - # train - self.assigners = [ - {'pos_iou_thr': 0.5, 'neg_iou_thr': 0.35, 'min_iou_thr': 0.35}, - {'pos_iou_thr': 0.5, 'neg_iou_thr': 0.35, 'min_iou_thr': 0.35}, - {'pos_iou_thr': 0.6, 'neg_iou_thr': 0.45, 'min_iou_thr': 0.45}, - ] - - # val and test - self.nms_pre = 100 - self.nms_thr = 0.01 - self.score_thr = 0.1 - self.max_num = 50 - - def get_predicted_bboxes_single(self, bbox_cls_pred, bbox_pred, bbox_dir_cls_pred, anchors): - ''' - bbox_cls_pred: (n_anchors*3, 248, 216) - bbox_pred: (n_anchors*7, 248, 216) - bbox_dir_cls_pred: (n_anchors*2, 248, 216) - anchors: (y_l, x_l, 3, 2, 7) - return: - bboxes: (k, 7) - labels: (k, ) - scores: (k, ) - ''' - # 0. pre-process - bbox_cls_pred = bbox_cls_pred.permute(1, 2, 0).reshape(-1, self.nclasses) - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 7) - bbox_dir_cls_pred = bbox_dir_cls_pred.permute(1, 2, 0).reshape(-1, 2) - anchors = anchors.reshape(-1, 7) - - bbox_cls_pred = torch.sigmoid(bbox_cls_pred) - bbox_dir_cls_pred = torch.max(bbox_dir_cls_pred, dim=1)[1] - - # 1. obtain self.nms_pre bboxes based on scores - inds = bbox_cls_pred.max(1)[0].topk(self.nms_pre)[1] - bbox_cls_pred = bbox_cls_pred[inds] - bbox_pred = bbox_pred[inds] - bbox_dir_cls_pred = bbox_dir_cls_pred[inds] - anchors = anchors[inds] - - # 2. decode predicted offsets to bboxes - bbox_pred = anchors2bboxes(anchors, bbox_pred) - - # 3. 
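`anchors2bboxes`, imported from `model.anchors`, is not shown in this diff. The sketch below is the standard SECOND/PointPillars delta decoding that such a helper typically implements (x/y offsets normalised by the anchor's BEV diagonal, z by the anchor height, log-scaled sizes, additive yaw); treat `decode_deltas` and its exact conventions as an assumption about the deleted helper, not a copy of it.

```python
import torch

def decode_deltas(anchors, deltas):
    """Decode (dx, dy, dz, dw, dl, dh, dtheta) offsets against 7-DoF anchors.

    anchors, deltas: (N, 7) tensors laid out as (x, y, z, w, l, h, theta).
    """
    xa, ya, za, wa, la, ha, ta = anchors.unbind(dim=-1)
    dx, dy, dz, dw, dl, dh, dt = deltas.unbind(dim=-1)
    diag = torch.sqrt(wa ** 2 + la ** 2)   # BEV diagonal used to normalise x/y offsets
    x = xa + dx * diag
    y = ya + dy * diag
    z = za + ha / 2 + dz * ha              # z offsets assumed normalised by anchor height
    w = wa * torch.exp(dw)
    l = la * torch.exp(dl)
    h = ha * torch.exp(dh)
    z = z - h / 2                          # back to the bottom-centre convention
    theta = ta + dt
    return torch.stack([x, y, z, w, l, h, theta], dim=-1)
```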
nms - bbox_pred2d_xy = bbox_pred[:, [0, 1]] - bbox_pred2d_lw = bbox_pred[:, [3, 4]] - bbox_pred2d = torch.cat([bbox_pred2d_xy - bbox_pred2d_lw / 2, - bbox_pred2d_xy + bbox_pred2d_lw / 2, - bbox_pred[:, 6:]], dim=-1) # (n_anchors, 5) - - ret_bboxes, ret_labels, ret_scores = [], [], [] - for i in range(self.nclasses): - # 3.1 filter bboxes with scores below self.score_thr - cur_bbox_cls_pred = bbox_cls_pred[:, i] - score_inds = cur_bbox_cls_pred > self.score_thr - if score_inds.sum() == 0: - continue - - cur_bbox_cls_pred = cur_bbox_cls_pred[score_inds] - cur_bbox_pred2d = bbox_pred2d[score_inds] - cur_bbox_pred = bbox_pred[score_inds] - cur_bbox_dir_cls_pred = bbox_dir_cls_pred[score_inds] - - # 3.2 nms core - keep_inds = nms_cuda(boxes=cur_bbox_pred2d, - scores=cur_bbox_cls_pred, - thresh=self.nms_thr, - pre_maxsize=None, - post_max_size=None) - - cur_bbox_cls_pred = cur_bbox_cls_pred[keep_inds] - cur_bbox_pred = cur_bbox_pred[keep_inds] - cur_bbox_dir_cls_pred = cur_bbox_dir_cls_pred[keep_inds] - cur_bbox_pred[:, -1] = limit_period(cur_bbox_pred[:, -1].detach().cpu(), 1, np.pi).to(cur_bbox_pred) # [-pi, 0] - cur_bbox_pred[:, -1] += (1 - cur_bbox_dir_cls_pred) * np.pi - - ret_bboxes.append(cur_bbox_pred) - ret_labels.append(torch.zeros_like(cur_bbox_pred[:, 0], dtype=torch.long) + i) - ret_scores.append(cur_bbox_cls_pred) - - # 4. filter some bboxes if bboxes number is above self.max_num - if len(ret_bboxes) == 0: - return [], [], [] - ret_bboxes = torch.cat(ret_bboxes, 0) - ret_labels = torch.cat(ret_labels, 0) - ret_scores = torch.cat(ret_scores, 0) - if ret_bboxes.size(0) > self.max_num: - final_inds = ret_scores.topk(self.max_num)[1] - ret_bboxes = ret_bboxes[final_inds] - ret_labels = ret_labels[final_inds] - ret_scores = ret_scores[final_inds] - result = { - 'lidar_bboxes': ret_bboxes.detach().cpu().numpy(), - 'labels': ret_labels.detach().cpu().numpy(), - 'scores': ret_scores.detach().cpu().numpy() - } - return result - - - def get_predicted_bboxes(self, bbox_cls_pred, bbox_pred, bbox_dir_cls_pred, batched_anchors): - ''' - bbox_cls_pred: (bs, n_anchors*3, 248, 216) - bbox_pred: (bs, n_anchors*7, 248, 216) - bbox_dir_cls_pred: (bs, n_anchors*2, 248, 216) - batched_anchors: (bs, y_l, x_l, 3, 2, 7) - return: - bboxes: [(k1, 7), (k2, 7), ... ] - labels: [(k1, ), (k2, ), ... ] - scores: [(k1, ), (k2, ), ... ] - ''' - results = [] - bs = bbox_cls_pred.size(0) - for i in range(bs): - result = self.get_predicted_bboxes_single(bbox_cls_pred=bbox_cls_pred[i], - bbox_pred=bbox_pred[i], - bbox_dir_cls_pred=bbox_dir_cls_pred[i], - anchors=batched_anchors[i]) - results.append(result) - return results - - def forward(self, batched_pts, mode='test', batched_gt_bboxes=None, batched_gt_labels=None): - batch_size = len(batched_pts) - # batched_pts: list[tensor] -> pillars: (p1 + p2 + ... + pb, num_points, c), - # coors_batch: (p1 + p2 + ... + pb, 1 + 3), - # num_points_per_pillar: (p1 + p2 + ... + pb, ), (b: batch size) - pillars, coors_batch, npoints_per_pillar = self.pillar_layer(batched_pts) - - # pillars: (p1 + p2 + ... + pb, num_points, c), c = 4 - # coors_batch: (p1 + p2 + ... + pb, 1 + 3) - # npoints_per_pillar: (p1 + p2 + ... 
+ pb, ) - # -> pillar_features: (bs, out_channel, y_l, x_l) - pillar_features = self.pillar_encoder(pillars, coors_batch, npoints_per_pillar) - - # xs: [(bs, 64, 248, 216), (bs, 128, 124, 108), (bs, 256, 62, 54)] - xs = self.backbone(pillar_features) - - # x: (bs, 384, 248, 216) - x = self.neck(xs) - - # bbox_cls_pred: (bs, n_anchors*3, 248, 216) - # bbox_pred: (bs, n_anchors*7, 248, 216) - # bbox_dir_cls_pred: (bs, n_anchors*2, 248, 216) - bbox_cls_pred, bbox_pred, bbox_dir_cls_pred = self.head(x) - - # anchors - device = bbox_cls_pred.device - feature_map_size = torch.tensor(list(bbox_cls_pred.size()[-2:]), device=device) - anchors = self.anchors_generator.get_multi_anchors(feature_map_size) - batched_anchors = [anchors for _ in range(batch_size)] - - if mode == 'train': - anchor_target_dict = anchor_target(batched_anchors=batched_anchors, - batched_gt_bboxes=batched_gt_bboxes, - batched_gt_labels=batched_gt_labels, - assigners=self.assigners, - nclasses=self.nclasses) - - return bbox_cls_pred, bbox_pred, bbox_dir_cls_pred, anchor_target_dict - elif mode == 'val': - results = self.get_predicted_bboxes(bbox_cls_pred=bbox_cls_pred, - bbox_pred=bbox_pred, - bbox_dir_cls_pred=bbox_dir_cls_pred, - batched_anchors=batched_anchors) - return results - - elif mode == 'test': - results = self.get_predicted_bboxes(bbox_cls_pred=bbox_cls_pred, - bbox_pred=bbox_pred, - bbox_dir_cls_pred=bbox_dir_cls_pred, - batched_anchors=batched_anchors) - return results - else: - raise ValueError diff --git a/cv/3d_detection/pointpillars/pytorch/ops/__init__.py b/cv/3d_detection/pointpillars/pytorch/ops/__init__.py deleted file mode 100755 index e4469056..00000000 --- a/cv/3d_detection/pointpillars/pytorch/ops/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .voxel_module import Voxelization -from .iou3d_module import boxes_iou_bev, nms_cuda, boxes_overlap_bev \ No newline at end of file diff --git a/cv/3d_detection/pointpillars/pytorch/ops/iou3d/iou3d.cpp b/cv/3d_detection/pointpillars/pytorch/ops/iou3d/iou3d.cpp deleted file mode 100755 index 25a5cd98..00000000 --- a/cv/3d_detection/pointpillars/pytorch/ops/iou3d/iou3d.cpp +++ /dev/null @@ -1,210 +0,0 @@ -// Modified from -// https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/iou3d_nms/src/iou3d_nms.cpp - -/* -3D IoU Calculation and Rotated NMS(modified from 2D NMS written by others) -Written by Shaoshuai Shi -All Rights Reserved 2019-2020. 
-*/ - -#include -#include -#include -#include - -#include -#include - -#define CHECK_CUDA(x) \ - TORCH_CHECK(x.device().is_cuda(), #x, " must be a CUDAtensor ") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x, " must be contiguous ") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0)) - -#define CHECK_ERROR(ans) \ - { gpuAssert((ans), __FILE__, __LINE__); } -inline void gpuAssert(cudaError_t code, const char *file, int line, - bool abort = true) { - if (code != cudaSuccess) { - fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, - line); - if (abort) exit(code); - } -} - -const int THREADS_PER_BLOCK_NMS = sizeof(unsigned long long) * 8; - -void boxesoverlapLauncher(const int num_a, const float *boxes_a, - const int num_b, const float *boxes_b, - float *ans_overlap); -void boxesioubevLauncher(const int num_a, const float *boxes_a, const int num_b, - const float *boxes_b, float *ans_iou); -void nmsLauncher(const float *boxes, unsigned long long *mask, int boxes_num, - float nms_overlap_thresh); -void nmsNormalLauncher(const float *boxes, unsigned long long *mask, - int boxes_num, float nms_overlap_thresh); - -int boxes_overlap_bev_gpu(at::Tensor boxes_a, at::Tensor boxes_b, - at::Tensor ans_overlap) { - // params boxes_a: (N, 5) [x1, y1, x2, y2, ry] - // params boxes_b: (M, 5) - // params ans_overlap: (N, M) - - CHECK_INPUT(boxes_a); - CHECK_INPUT(boxes_b); - CHECK_INPUT(ans_overlap); - - int num_a = boxes_a.size(0); - int num_b = boxes_b.size(0); - - const float *boxes_a_data = boxes_a.data_ptr(); - const float *boxes_b_data = boxes_b.data_ptr(); - float *ans_overlap_data = ans_overlap.data_ptr(); - - boxesoverlapLauncher(num_a, boxes_a_data, num_b, boxes_b_data, - ans_overlap_data); - - return 1; -} - -int boxes_iou_bev_gpu(at::Tensor boxes_a, at::Tensor boxes_b, - at::Tensor ans_iou) { - // params boxes_a: (N, 5) [x1, y1, x2, y2, ry] - // params boxes_b: (M, 5) - // params ans_overlap: (N, M) - - CHECK_INPUT(boxes_a); - CHECK_INPUT(boxes_b); - CHECK_INPUT(ans_iou); - - int num_a = boxes_a.size(0); - int num_b = boxes_b.size(0); - - const float *boxes_a_data = boxes_a.data_ptr(); - const float *boxes_b_data = boxes_b.data_ptr(); - float *ans_iou_data = ans_iou.data_ptr(); - - boxesioubevLauncher(num_a, boxes_a_data, num_b, boxes_b_data, ans_iou_data); - - return 1; -} - -int nms_gpu(at::Tensor boxes, at::Tensor keep, - float nms_overlap_thresh, int device_id) { - // params boxes: (N, 5) [x1, y1, x2, y2, ry] - // params keep: (N) - - CHECK_INPUT(boxes); - CHECK_CONTIGUOUS(keep); - cudaSetDevice(device_id); - - int boxes_num = boxes.size(0); - const float *boxes_data = boxes.data_ptr(); - int64_t *keep_data = keep.data_ptr(); - - const int col_blocks = DIVUP(boxes_num, THREADS_PER_BLOCK_NMS); - - unsigned long long *mask_data = NULL; - CHECK_ERROR(cudaMalloc((void **)&mask_data, - boxes_num * col_blocks * sizeof(unsigned long long))); - nmsLauncher(boxes_data, mask_data, boxes_num, nms_overlap_thresh); - - // unsigned long long mask_cpu[boxes_num * col_blocks]; - // unsigned long long *mask_cpu = new unsigned long long [boxes_num * - // col_blocks]; - std::vector mask_cpu(boxes_num * col_blocks); - - // printf("boxes_num=%d, col_blocks=%d\n", boxes_num, col_blocks); - CHECK_ERROR(cudaMemcpy(&mask_cpu[0], mask_data, - boxes_num * col_blocks * sizeof(unsigned long long), - cudaMemcpyDeviceToHost)); - - cudaFree(mask_data); - - unsigned long long *remv_cpu = new unsigned long 
long[col_blocks](); - - int num_to_keep = 0; - - for (int i = 0; i < boxes_num; i++) { - int nblock = i / THREADS_PER_BLOCK_NMS; - int inblock = i % THREADS_PER_BLOCK_NMS; - - if (!(remv_cpu[nblock] & (1ULL << inblock))) { - keep_data[num_to_keep++] = i; - unsigned long long *p = &mask_cpu[0] + i * col_blocks; - for (int j = nblock; j < col_blocks; j++) { - remv_cpu[j] |= p[j]; - } - } - } - delete[] remv_cpu; - if (cudaSuccess != cudaGetLastError()) printf("Error!\n"); - - return num_to_keep; -} - -int nms_normal_gpu(at::Tensor boxes, at::Tensor keep, - float nms_overlap_thresh, int device_id) { - // params boxes: (N, 5) [x1, y1, x2, y2, ry] - // params keep: (N) - - CHECK_INPUT(boxes); - CHECK_CONTIGUOUS(keep); - cudaSetDevice(device_id); - - int boxes_num = boxes.size(0); - const float *boxes_data = boxes.data_ptr(); - int64_t *keep_data = keep.data_ptr(); - - const int col_blocks = DIVUP(boxes_num, THREADS_PER_BLOCK_NMS); - - unsigned long long *mask_data = NULL; - CHECK_ERROR(cudaMalloc((void **)&mask_data, - boxes_num * col_blocks * sizeof(unsigned long long))); - nmsNormalLauncher(boxes_data, mask_data, boxes_num, nms_overlap_thresh); - - // unsigned long long mask_cpu[boxes_num * col_blocks]; - // unsigned long long *mask_cpu = new unsigned long long [boxes_num * - // col_blocks]; - std::vector mask_cpu(boxes_num * col_blocks); - - // printf("boxes_num=%d, col_blocks=%d\n", boxes_num, col_blocks); - CHECK_ERROR(cudaMemcpy(&mask_cpu[0], mask_data, - boxes_num * col_blocks * sizeof(unsigned long long), - cudaMemcpyDeviceToHost)); - - cudaFree(mask_data); - - unsigned long long *remv_cpu = new unsigned long long[col_blocks](); - - int num_to_keep = 0; - - for (int i = 0; i < boxes_num; i++) { - int nblock = i / THREADS_PER_BLOCK_NMS; - int inblock = i % THREADS_PER_BLOCK_NMS; - - if (!(remv_cpu[nblock] & (1ULL << inblock))) { - keep_data[num_to_keep++] = i; - unsigned long long *p = &mask_cpu[0] + i * col_blocks; - for (int j = nblock; j < col_blocks; j++) { - remv_cpu[j] |= p[j]; - } - } - } - delete[] remv_cpu; - if (cudaSuccess != cudaGetLastError()) printf("Error!\n"); - - return num_to_keep; -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("boxes_overlap_bev_gpu", &boxes_overlap_bev_gpu, - "oriented boxes overlap"); - m.def("boxes_iou_bev_gpu", &boxes_iou_bev_gpu, "oriented boxes iou"); - m.def("nms_gpu", &nms_gpu, "oriented nms gpu"); - m.def("nms_normal_gpu", &nms_normal_gpu, "nms gpu"); -} diff --git a/cv/3d_detection/pointpillars/pytorch/ops/iou3d/iou3d_kernel.cu b/cv/3d_detection/pointpillars/pytorch/ops/iou3d/iou3d_kernel.cu deleted file mode 100755 index 861aea3c..00000000 --- a/cv/3d_detection/pointpillars/pytorch/ops/iou3d/iou3d_kernel.cu +++ /dev/null @@ -1,439 +0,0 @@ -// Modified from -// https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.cu - -/* -3D IoU Calculation and Rotated NMS(modified from 2D NMS written by others) -Written by Shaoshuai Shi -All Rights Reserved 2019-2020. 
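The host-side loop in `nms_gpu` above decodes the per-block 64-bit suppression masks into the final keep list. Functionally it is the usual greedy NMS sweep; the pure-Python rendering below works on a dense IoU matrix instead of bit masks and is only an illustration of the logic, with `greedy_nms` being a hypothetical name.

```python
import numpy as np

def greedy_nms(ious, thresh):
    """Greedy sweep equivalent to the mask-decoding loop in nms_gpu.

    ious: (N, N) pairwise rotated-IoU matrix for boxes already sorted by score.
    """
    suppressed = np.zeros(ious.shape[0], dtype=bool)
    keep = []
    for i in range(ious.shape[0]):
        if suppressed[i]:
            continue
        keep.append(i)                     # highest-scoring survivor so far
        suppressed |= ious[i] > thresh     # drop everything it overlaps too strongly
    return keep

# Toy check: box 1 overlaps box 0 heavily, box 2 is isolated.
ious = np.array([[1.0, 0.8, 0.0],
                 [0.8, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
print(greedy_nms(ious, thresh=0.5))        # [0, 2]
```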
-*/ - -#include -#define THREADS_PER_BLOCK 16 -#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0)) - -//#define DEBUG -const int THREADS_PER_BLOCK_NMS = sizeof(unsigned long long) * 8; -__device__ const float EPS = 1e-8; -struct Point { - float x, y; - __device__ Point() {} - __device__ Point(double _x, double _y) { x = _x, y = _y; } - - __device__ void set(float _x, float _y) { - x = _x; - y = _y; - } - - __device__ Point operator+(const Point &b) const { - return Point(x + b.x, y + b.y); - } - - __device__ Point operator-(const Point &b) const { - return Point(x - b.x, y - b.y); - } -}; - -__device__ inline float cross(const Point &a, const Point &b) { - return a.x * b.y - a.y * b.x; -} - -__device__ inline float cross(const Point &p1, const Point &p2, - const Point &p0) { - return (p1.x - p0.x) * (p2.y - p0.y) - (p2.x - p0.x) * (p1.y - p0.y); -} - -__device__ int check_rect_cross(const Point &p1, const Point &p2, - const Point &q1, const Point &q2) { - int ret = min(p1.x, p2.x) <= max(q1.x, q2.x) && - min(q1.x, q2.x) <= max(p1.x, p2.x) && - min(p1.y, p2.y) <= max(q1.y, q2.y) && - min(q1.y, q2.y) <= max(p1.y, p2.y); - return ret; -} - -__device__ inline int check_in_box2d(const float *box, const Point &p) { - // params: box (5) [x1, y1, x2, y2, angle] - const float MARGIN = 1e-5; - - float center_x = (box[0] + box[2]) / 2; - float center_y = (box[1] + box[3]) / 2; - float angle_cos = cos(-box[4]), - angle_sin = - sin(-box[4]); // rotate the point in the opposite direction of box - float rot_x = - (p.x - center_x) * angle_cos + (p.y - center_y) * angle_sin + center_x; - float rot_y = - -(p.x - center_x) * angle_sin + (p.y - center_y) * angle_cos + center_y; -#ifdef DEBUG - printf("box: (%.3f, %.3f, %.3f, %.3f, %.3f)\n", box[0], box[1], box[2], - box[3], box[4]); - printf( - "center: (%.3f, %.3f), cossin(%.3f, %.3f), src(%.3f, %.3f), rot(%.3f, " - "%.3f)\n", - center_x, center_y, angle_cos, angle_sin, p.x, p.y, rot_x, rot_y); -#endif - return (rot_x > box[0] - MARGIN && rot_x < box[2] + MARGIN && - rot_y > box[1] - MARGIN && rot_y < box[3] + MARGIN); -} - -__device__ inline int intersection(const Point &p1, const Point &p0, - const Point &q1, const Point &q0, - Point &ans) { - // fast exclusion - if (check_rect_cross(p0, p1, q0, q1) == 0) return 0; - - // check cross standing - float s1 = cross(q0, p1, p0); - float s2 = cross(p1, q1, p0); - float s3 = cross(p0, q1, q0); - float s4 = cross(q1, p1, q0); - - if (!(s1 * s2 > 0 && s3 * s4 > 0)) return 0; - - // calculate intersection of two lines - float s5 = cross(q1, p1, p0); - if (fabs(s5 - s1) > EPS) { - ans.x = (s5 * q0.x - s1 * q1.x) / (s5 - s1); - ans.y = (s5 * q0.y - s1 * q1.y) / (s5 - s1); - - } else { - float a0 = p0.y - p1.y, b0 = p1.x - p0.x, c0 = p0.x * p1.y - p1.x * p0.y; - float a1 = q0.y - q1.y, b1 = q1.x - q0.x, c1 = q0.x * q1.y - q1.x * q0.y; - float D = a0 * b1 - a1 * b0; - - ans.x = (b0 * c1 - b1 * c0) / D; - ans.y = (a1 * c0 - a0 * c1) / D; - } - - return 1; -} - -__device__ inline void rotate_around_center(const Point ¢er, - const float angle_cos, - const float angle_sin, Point &p) { - float new_x = - (p.x - center.x) * angle_cos + (p.y - center.y) * angle_sin + center.x; - float new_y = - -(p.x - center.x) * angle_sin + (p.y - center.y) * angle_cos + center.y; - p.set(new_x, new_y); -} - -__device__ inline int point_cmp(const Point &a, const Point &b, - const Point ¢er) { - return atan2(a.y - center.y, a.x - center.x) > - atan2(b.y - center.y, b.x - center.x); -} - -__device__ inline float box_overlap(const float *box_a, 
const float *box_b) { - // params: box_a (5) [x1, y1, x2, y2, angle] - // params: box_b (5) [x1, y1, x2, y2, angle] - - float a_x1 = box_a[0], a_y1 = box_a[1], a_x2 = box_a[2], a_y2 = box_a[3], - a_angle = box_a[4]; - float b_x1 = box_b[0], b_y1 = box_b[1], b_x2 = box_b[2], b_y2 = box_b[3], - b_angle = box_b[4]; - - Point center_a((a_x1 + a_x2) / 2, (a_y1 + a_y2) / 2); - Point center_b((b_x1 + b_x2) / 2, (b_y1 + b_y2) / 2); -#ifdef DEBUG - printf( - "a: (%.3f, %.3f, %.3f, %.3f, %.3f), b: (%.3f, %.3f, %.3f, %.3f, %.3f)\n", - a_x1, a_y1, a_x2, a_y2, a_angle, b_x1, b_y1, b_x2, b_y2, b_angle); - printf("center a: (%.3f, %.3f), b: (%.3f, %.3f)\n", center_a.x, center_a.y, - center_b.x, center_b.y); -#endif - - Point box_a_corners[5]; - box_a_corners[0].set(a_x1, a_y1); - box_a_corners[1].set(a_x2, a_y1); - box_a_corners[2].set(a_x2, a_y2); - box_a_corners[3].set(a_x1, a_y2); - - Point box_b_corners[5]; - box_b_corners[0].set(b_x1, b_y1); - box_b_corners[1].set(b_x2, b_y1); - box_b_corners[2].set(b_x2, b_y2); - box_b_corners[3].set(b_x1, b_y2); - - // get oriented corners - float a_angle_cos = cos(a_angle), a_angle_sin = sin(a_angle); - float b_angle_cos = cos(b_angle), b_angle_sin = sin(b_angle); - - for (int k = 0; k < 4; k++) { -#ifdef DEBUG - printf("before corner %d: a(%.3f, %.3f), b(%.3f, %.3f) \n", k, - box_a_corners[k].x, box_a_corners[k].y, box_b_corners[k].x, - box_b_corners[k].y); -#endif - rotate_around_center(center_a, a_angle_cos, a_angle_sin, box_a_corners[k]); - rotate_around_center(center_b, b_angle_cos, b_angle_sin, box_b_corners[k]); -#ifdef DEBUG - printf("corner %d: a(%.3f, %.3f), b(%.3f, %.3f) \n", k, box_a_corners[k].x, - box_a_corners[k].y, box_b_corners[k].x, box_b_corners[k].y); -#endif - } - - box_a_corners[4] = box_a_corners[0]; - box_b_corners[4] = box_b_corners[0]; - - // get intersection of lines - Point cross_points[16]; - Point poly_center; - int cnt = 0, flag = 0; - - poly_center.set(0, 0); - for (int i = 0; i < 4; i++) { - for (int j = 0; j < 4; j++) { - flag = intersection(box_a_corners[i + 1], box_a_corners[i], - box_b_corners[j + 1], box_b_corners[j], - cross_points[cnt]); - if (flag) { - poly_center = poly_center + cross_points[cnt]; - cnt++; - } - } - } - - // check corners - for (int k = 0; k < 4; k++) { - if (check_in_box2d(box_a, box_b_corners[k])) { - poly_center = poly_center + box_b_corners[k]; - cross_points[cnt] = box_b_corners[k]; - cnt++; - } - if (check_in_box2d(box_b, box_a_corners[k])) { - poly_center = poly_center + box_a_corners[k]; - cross_points[cnt] = box_a_corners[k]; - cnt++; - } - } - - poly_center.x /= cnt; - poly_center.y /= cnt; - - // sort the points of polygon - Point temp; - for (int j = 0; j < cnt - 1; j++) { - for (int i = 0; i < cnt - j - 1; i++) { - if (point_cmp(cross_points[i], cross_points[i + 1], poly_center)) { - temp = cross_points[i]; - cross_points[i] = cross_points[i + 1]; - cross_points[i + 1] = temp; - } - } - } - -#ifdef DEBUG - printf("cnt=%d\n", cnt); - for (int i = 0; i < cnt; i++) { - printf("All cross point %d: (%.3f, %.3f)\n", i, cross_points[i].x, - cross_points[i].y); - } -#endif - - // get the overlap areas - float area = 0; - for (int k = 0; k < cnt - 1; k++) { - area += cross(cross_points[k] - cross_points[0], - cross_points[k + 1] - cross_points[0]); - } - - return fabs(area) / 2.0; -} - -__device__ inline float iou_bev(const float *box_a, const float *box_b) { - // params: box_a (5) [x1, y1, x2, y2, angle] - // params: box_b (5) [x1, y1, x2, y2, angle] - float sa = (box_a[2] - box_a[0]) * (box_a[3] - 
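The kernel above computes rotated-box overlap by intersecting edges, collecting contained corners, sorting the resulting polygon vertices by angle, and applying the shoelace formula. A quick CPU cross-check can be written with shapely (an extra dependency, not used by this repository); `iou_bev_reference` is a hypothetical helper, and the yaw sign convention should be aligned with the kernel's rotation before comparing numbers.

```python
import numpy as np
from shapely.geometry import Polygon

def iou_bev_reference(box_a, box_b):
    """Rotated BEV IoU for boxes given as (x1, y1, x2, y2, yaw)."""
    def to_polygon(b):
        x1, y1, x2, y2, yaw = b
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        corners = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]) - [cx, cy]
        c, s = np.cos(yaw), np.sin(yaw)
        rot = np.array([[c, -s], [s, c]])          # rotate corners about the box centre
        return Polygon(corners @ rot.T + [cx, cy])
    pa, pb = to_polygon(box_a), to_polygon(box_b)
    inter = pa.intersection(pb).area
    return inter / max(pa.area + pb.area - inter, 1e-8)

# Two identical 2x1 boxes, the second rotated by 90 degrees: IoU = 1 / 3.
print(iou_bev_reference((0, 0, 2, 1, 0.0), (0, 0, 2, 1, np.pi / 2)))
```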
box_a[1]); - float sb = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]); - float s_overlap = box_overlap(box_a, box_b); - return s_overlap / fmaxf(sa + sb - s_overlap, EPS); -} - -__global__ void boxes_overlap_kernel(const int num_a, const float *boxes_a, - const int num_b, const float *boxes_b, - float *ans_overlap) { - const int a_idx = blockIdx.y * THREADS_PER_BLOCK + threadIdx.y; - const int b_idx = blockIdx.x * THREADS_PER_BLOCK + threadIdx.x; - - if (a_idx >= num_a || b_idx >= num_b) { - return; - } - const float *cur_box_a = boxes_a + a_idx * 5; - const float *cur_box_b = boxes_b + b_idx * 5; - float s_overlap = box_overlap(cur_box_a, cur_box_b); - ans_overlap[a_idx * num_b + b_idx] = s_overlap; -} - -__global__ void boxes_iou_bev_kernel(const int num_a, const float *boxes_a, - const int num_b, const float *boxes_b, - float *ans_iou) { - const int a_idx = blockIdx.y * THREADS_PER_BLOCK + threadIdx.y; - const int b_idx = blockIdx.x * THREADS_PER_BLOCK + threadIdx.x; - - if (a_idx >= num_a || b_idx >= num_b) { - return; - } - - const float *cur_box_a = boxes_a + a_idx * 5; - const float *cur_box_b = boxes_b + b_idx * 5; - float cur_iou_bev = iou_bev(cur_box_a, cur_box_b); - ans_iou[a_idx * num_b + b_idx] = cur_iou_bev; -} - -__global__ void nms_kernel(const int boxes_num, const float nms_overlap_thresh, - const float *boxes, unsigned long long *mask) { - // params: boxes (N, 5) [x1, y1, x2, y2, ry] - // params: mask (N, N/THREADS_PER_BLOCK_NMS) - - const int row_start = blockIdx.y; - const int col_start = blockIdx.x; - - // if (row_start > col_start) return; - - const int row_size = fminf(boxes_num - row_start * THREADS_PER_BLOCK_NMS, - THREADS_PER_BLOCK_NMS); - const int col_size = fminf(boxes_num - col_start * THREADS_PER_BLOCK_NMS, - THREADS_PER_BLOCK_NMS); - - __shared__ float block_boxes[THREADS_PER_BLOCK_NMS * 5]; - - if (threadIdx.x < col_size) { - block_boxes[threadIdx.x * 5 + 0] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 5 + 0]; - block_boxes[threadIdx.x * 5 + 1] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 5 + 1]; - block_boxes[threadIdx.x * 5 + 2] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 5 + 2]; - block_boxes[threadIdx.x * 5 + 3] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 5 + 3]; - block_boxes[threadIdx.x * 5 + 4] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 5 + 4]; - } - __syncthreads(); - - if (threadIdx.x < row_size) { - const int cur_box_idx = THREADS_PER_BLOCK_NMS * row_start + threadIdx.x; - const float *cur_box = boxes + cur_box_idx * 5; - - int i = 0; - unsigned long long t = 0; - int start = 0; - if (row_start == col_start) { - start = threadIdx.x + 1; - } - for (i = start; i < col_size; i++) { - if (iou_bev(cur_box, block_boxes + i * 5) > nms_overlap_thresh) { - t |= 1ULL << i; - } - } - const int col_blocks = DIVUP(boxes_num, THREADS_PER_BLOCK_NMS); - mask[cur_box_idx * col_blocks + col_start] = t; - } -} - -__device__ inline float iou_normal(float const *const a, float const *const b) { - float left = fmaxf(a[0], b[0]), right = fminf(a[2], b[2]); - float top = fmaxf(a[1], b[1]), bottom = fminf(a[3], b[3]); - float width = fmaxf(right - left, 0.f), height = fmaxf(bottom - top, 0.f); - float interS = width * height; - float Sa = (a[2] - a[0]) * (a[3] - a[1]); - float Sb = (b[2] - b[0]) * (b[3] - b[1]); - return interS / fmaxf(Sa + Sb - interS, EPS); -} - -__global__ void nms_normal_kernel(const int boxes_num, - const float nms_overlap_thresh, - const float *boxes, - unsigned 
long long *mask) { - // params: boxes (N, 5) [x1, y1, x2, y2, ry] - // params: mask (N, N/THREADS_PER_BLOCK_NMS) - - const int row_start = blockIdx.y; - const int col_start = blockIdx.x; - - // if (row_start > col_start) return; - - const int row_size = fminf(boxes_num - row_start * THREADS_PER_BLOCK_NMS, - THREADS_PER_BLOCK_NMS); - const int col_size = fminf(boxes_num - col_start * THREADS_PER_BLOCK_NMS, - THREADS_PER_BLOCK_NMS); - - __shared__ float block_boxes[THREADS_PER_BLOCK_NMS * 5]; - - if (threadIdx.x < col_size) { - block_boxes[threadIdx.x * 5 + 0] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 5 + 0]; - block_boxes[threadIdx.x * 5 + 1] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 5 + 1]; - block_boxes[threadIdx.x * 5 + 2] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 5 + 2]; - block_boxes[threadIdx.x * 5 + 3] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 5 + 3]; - block_boxes[threadIdx.x * 5 + 4] = - boxes[(THREADS_PER_BLOCK_NMS * col_start + threadIdx.x) * 5 + 4]; - } - __syncthreads(); - - if (threadIdx.x < row_size) { - const int cur_box_idx = THREADS_PER_BLOCK_NMS * row_start + threadIdx.x; - const float *cur_box = boxes + cur_box_idx * 5; - - int i = 0; - unsigned long long t = 0; - int start = 0; - if (row_start == col_start) { - start = threadIdx.x + 1; - } - for (i = start; i < col_size; i++) { - if (iou_normal(cur_box, block_boxes + i * 5) > nms_overlap_thresh) { - t |= 1ULL << i; - } - } - const int col_blocks = DIVUP(boxes_num, THREADS_PER_BLOCK_NMS); - mask[cur_box_idx * col_blocks + col_start] = t; - } -} - -void boxesoverlapLauncher(const int num_a, const float *boxes_a, - const int num_b, const float *boxes_b, - float *ans_overlap) { - dim3 blocks( - DIVUP(num_b, THREADS_PER_BLOCK), - DIVUP(num_a, THREADS_PER_BLOCK)); // blockIdx.x(col), blockIdx.y(row) - dim3 threads(THREADS_PER_BLOCK, THREADS_PER_BLOCK); - - boxes_overlap_kernel<<>>(num_a, boxes_a, num_b, boxes_b, - ans_overlap); -#ifdef DEBUG - cudaDeviceSynchronize(); // for using printf in kernel function -#endif -} - -void boxesioubevLauncher(const int num_a, const float *boxes_a, const int num_b, - const float *boxes_b, float *ans_iou) { - dim3 blocks( - DIVUP(num_b, THREADS_PER_BLOCK), - DIVUP(num_a, THREADS_PER_BLOCK)); // blockIdx.x(col), blockIdx.y(row) - dim3 threads(THREADS_PER_BLOCK, THREADS_PER_BLOCK); - - boxes_iou_bev_kernel<<>>(num_a, boxes_a, num_b, boxes_b, - ans_iou); -} - -void nmsLauncher(const float *boxes, unsigned long long *mask, int boxes_num, - float nms_overlap_thresh) { - dim3 blocks(DIVUP(boxes_num, THREADS_PER_BLOCK_NMS), - DIVUP(boxes_num, THREADS_PER_BLOCK_NMS)); - dim3 threads(THREADS_PER_BLOCK_NMS); - nms_kernel<<>>(boxes_num, nms_overlap_thresh, boxes, mask); -} - -void nmsNormalLauncher(const float *boxes, unsigned long long *mask, - int boxes_num, float nms_overlap_thresh) { - dim3 blocks(DIVUP(boxes_num, THREADS_PER_BLOCK_NMS), - DIVUP(boxes_num, THREADS_PER_BLOCK_NMS)); - dim3 threads(THREADS_PER_BLOCK_NMS); - nms_normal_kernel<<>>(boxes_num, nms_overlap_thresh, boxes, - mask); -} diff --git a/cv/3d_detection/pointpillars/pytorch/ops/iou3d_module.py b/cv/3d_detection/pointpillars/pytorch/ops/iou3d_module.py deleted file mode 100755 index e3fcdb11..00000000 --- a/cv/3d_detection/pointpillars/pytorch/ops/iou3d_module.py +++ /dev/null @@ -1,89 +0,0 @@ -# This file is modified from https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/ops/iou3d/iou3d_utils.py - -import torch -from .iou3d_op import 
boxes_overlap_bev_gpu, boxes_iou_bev_gpu, nms_gpu, nms_normal_gpu - - -def boxes_overlap_bev(boxes_a, boxes_b): - """Calculate boxes Overlap in the bird view. - - Args: - boxes_a (torch.Tensor): Input boxes a with shape (M, 5). - boxes_b (torch.Tensor): Input boxes b with shape (N, 5). - - Returns: - ans_overlap (torch.Tensor): Overlap result with shape (M, N). - """ - ans_overlap = boxes_a.new_zeros( - torch.Size((boxes_a.shape[0], boxes_b.shape[0]))) - - boxes_overlap_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(), ans_overlap) - - return ans_overlap - - -def boxes_iou_bev(boxes_a, boxes_b): - """Calculate boxes IoU in the bird view. - - Args: - boxes_a (torch.Tensor): Input boxes a with shape (M, 5). - boxes_b (torch.Tensor): Input boxes b with shape (N, 5). - - Returns: - ans_iou (torch.Tensor): IoU result with shape (M, N). - """ - ans_iou = boxes_a.new_zeros( - torch.Size((boxes_a.shape[0], boxes_b.shape[0]))) - - boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(), ans_iou) - - return ans_iou - - -def nms_cuda(boxes, scores, thresh, pre_maxsize=None, post_max_size=None): - """Nms function with gpu implementation. - - Args: - boxes (torch.Tensor): Input boxes with the shape of [N, 5] - ([x1, y1, x2, y2, ry]). - scores (torch.Tensor): Scores of boxes with the shape of [N]. - thresh (int): Threshold. - pre_maxsize (int): Max size of boxes before nms. Default: None. - post_maxsize (int): Max size of boxes after nms. Default: None. - - Returns: - torch.Tensor: Indexes after nms. - """ - order = scores.sort(0, descending=True)[1] - - if pre_maxsize is not None: - order = order[:pre_maxsize] - boxes = boxes[order].contiguous() - - keep = torch.zeros(boxes.size(0), dtype=torch.long) - num_out = nms_gpu(boxes, keep, thresh, boxes.device.index) - keep = order[keep[:num_out].cuda(boxes.device)].contiguous() - if post_max_size is not None: - keep = keep[:post_max_size] - return keep - - -def nms_normal_gpu(boxes, scores, thresh): - """Normal non maximum suppression on GPU. - - Args: - boxes (torch.Tensor): Input boxes with shape (N, 5). - scores (torch.Tensor): Scores of predicted boxes with shape (N). - thresh (torch.Tensor): Threshold of non maximum suppression. - - Returns: - torch.Tensor: Remaining indices with scores in descending order. 
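These wrappers only work once the compiled `iou3d_op` extension (built by `ops/setup.py`, which appears later in this patch) is importable and a CUDA device is available. A minimal usage sketch for `nms_cuda`, run from the pointpillars `pytorch` directory with made-up axis-aligned boxes, might look like this:

```python
import torch
from ops import nms_cuda  # requires the compiled iou3d_op extension and a GPU

# Three BEV boxes as (x1, y1, x2, y2, yaw); the first two overlap heavily.
boxes = torch.tensor([[0.0, 0.0, 2.0, 2.0, 0.0],
                      [0.1, 0.1, 2.1, 2.1, 0.0],
                      [5.0, 5.0, 7.0, 7.0, 0.0]], device='cuda')
scores = torch.tensor([0.9, 0.8, 0.7], device='cuda')

keep = nms_cuda(boxes=boxes, scores=scores, thresh=0.5)
print(keep)  # indices kept after rotated NMS, e.g. tensor([0, 2], device='cuda:0')
```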
- """ - order = scores.sort(0, descending=True)[1] - - boxes = boxes[order].contiguous() - - keep = torch.zeros(boxes.size(0), dtype=torch.long) - num_out = nms_normal_gpu(boxes, keep, thresh, - boxes.device.index) - return order[keep[:num_out].cuda(boxes.device)].contiguous() diff --git a/cv/3d_detection/pointpillars/pytorch/ops/setup.py b/cv/3d_detection/pointpillars/pytorch/ops/setup.py deleted file mode 100755 index 3d1e6798..00000000 --- a/cv/3d_detection/pointpillars/pytorch/ops/setup.py +++ /dev/null @@ -1,25 +0,0 @@ -from setuptools import setup -from torch.utils.cpp_extension import BuildExtension, CUDAExtension - -setup( - name='pointpillars', - ext_modules=[ - CUDAExtension( - name='voxel_op', - sources=['voxelization/voxelization.cpp', - 'voxelization/voxelization_cpu.cpp', - 'voxelization/voxelization_cuda.cu', - ], - define_macros=[('WITH_CUDA', None)] - ), - CUDAExtension( - name='iou3d_op', - sources=['iou3d/iou3d.cpp', - 'iou3d/iou3d_kernel.cu', - ], - define_macros=[('WITH_CUDA', None)] - ) - ], - cmdclass={ - 'build_ext': BuildExtension - }) \ No newline at end of file diff --git a/cv/3d_detection/pointpillars/pytorch/ops/voxel_module.py b/cv/3d_detection/pointpillars/pytorch/ops/voxel_module.py deleted file mode 100755 index 8f9cc90b..00000000 --- a/cv/3d_detection/pointpillars/pytorch/ops/voxel_module.py +++ /dev/null @@ -1,133 +0,0 @@ -# This file is modified from https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/ops/voxel/voxelize.py - -import torch -import torch.nn as nn -from .voxel_op import hard_voxelize - - -class _Voxelization(torch.autograd.Function): - - @staticmethod - def forward(ctx, - points, - voxel_size, - coors_range, - max_points=35, - max_voxels=20000, - deterministic=True): - """convert kitti points(N, >=3) to voxels. - Args: - points: [N, ndim] float tensor. points[:, :3] contain xyz points - and points[:, 3:] contain other information like reflectivity - voxel_size: [3] list/tuple or array, float. xyz, indicate voxel - size - coors_range: [6] list/tuple or array, float. indicate voxel - range. format: xyzxyz, minmax - max_points: int. indicate maximum points contained in a voxel. if - max_points=-1, it means using dynamic_voxelize - max_voxels: int. indicate maximum voxels this function create. - for second, 20000 is a good choice. Users should shuffle points - before call this function because max_voxels may drop points. - deterministic: bool. whether to invoke the non-deterministic - version of hard-voxelization implementations. non-deterministic - version is considerablly fast but is not deterministic. only - affects hard voxelization. default True. for more information - of this argument and the implementation insights, please refer - to the following links: - https://github.com/open-mmlab/mmdetection3d/issues/894 - https://github.com/open-mmlab/mmdetection3d/pull/904 - it is an experimental feature and we will appreciate it if - you could share with us the failing cases. - Returns: - voxels: [M, max_points, ndim] float tensor. only contain points - and returned when max_points != -1. - coordinates: [M, 3] int32 tensor, always returned. - num_points_per_voxel: [M] int32 tensor. Only returned when - max_points != -1. 
- """ - - voxels = points.new_zeros( - size=(max_voxels, max_points, points.size(1))) - coors = points.new_zeros(size=(max_voxels, 3), dtype=torch.int) - num_points_per_voxel = points.new_zeros( - size=(max_voxels, ), dtype=torch.int) - voxel_num = hard_voxelize(points, voxels, coors, - num_points_per_voxel, voxel_size, - coors_range, max_points, max_voxels, 3, - deterministic) - # select the valid voxels - voxels_out = voxels[:voxel_num] - coors_out = coors[:voxel_num].flip(-1) # (z, y, x) -> (x, y, z) - num_points_per_voxel_out = num_points_per_voxel[:voxel_num] - return voxels_out, coors_out, num_points_per_voxel_out - - -class Voxelization(nn.Module): - - def __init__(self, - voxel_size, - point_cloud_range, - max_num_points, - max_voxels, - deterministic=True): - super(Voxelization, self).__init__() - """ - Args: - voxel_size (list): list [x, y, z] size of three dimension - point_cloud_range (list): - [x_min, y_min, z_min, x_max, y_max, z_max] - max_num_points (int): max number of points per voxel - max_voxels (tuple): max number of voxels in - (training, testing) time - deterministic: bool. whether to invoke the non-deterministic - version of hard-voxelization implementations. non-deterministic - version is considerablly fast but is not deterministic. only - affects hard voxelization. default True. for more information - of this argument and the implementation insights, please refer - to the following links: - https://github.com/open-mmlab/mmdetection3d/issues/894 - https://github.com/open-mmlab/mmdetection3d/pull/904 - it is an experimental feature and we will appreciate it if - you could share with us the failing cases. - """ - self.voxel_size = voxel_size - self.point_cloud_range = point_cloud_range - self.max_num_points = max_num_points - self.max_voxels = max_voxels - self.deterministic = deterministic - - point_cloud_range = torch.tensor( - point_cloud_range, dtype=torch.float32) - - voxel_size = torch.tensor(voxel_size, dtype=torch.float32) - grid_size = (point_cloud_range[3:] - - point_cloud_range[:3]) / voxel_size - grid_size = torch.round(grid_size).long() - input_feat_shape = grid_size[:2] - self.grid_size = grid_size - # the origin shape is as [x-len, y-len, z-len] - # [w, h, d] -> [d, h, w] - self.pcd_shape = [*input_feat_shape, 1][::-1] - - def forward(self, input): - """ - input: shape=(N, c) - """ - if self.training: - max_voxels = self.max_voxels[0] - else: - max_voxels = self.max_voxels[1] - - return _Voxelization.apply(input, self.voxel_size, self.point_cloud_range, - self.max_num_points, max_voxels, - self.deterministic) - - def __repr__(self): - tmpstr = self.__class__.__name__ + '(' - tmpstr += 'voxel_size=' + str(self.voxel_size) - tmpstr += ', point_cloud_range=' + str(self.point_cloud_range) - tmpstr += ', max_num_points=' + str(self.max_num_points) - tmpstr += ', max_voxels=' + str(self.max_voxels) - tmpstr += ', deterministic=' + str(self.deterministic) - tmpstr += ')' - return tmpstr diff --git a/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization.cpp b/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization.cpp deleted file mode 100755 index 2fc11554..00000000 --- a/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization.cpp +++ /dev/null @@ -1,10 +0,0 @@ -#include -#include "voxelization.h" - -namespace voxelization { - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("hard_voxelize", &hard_voxelize, "hard voxelize"); -} - -} // namespace voxelization diff --git 
a/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization.h b/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization.h deleted file mode 100755 index 116bead4..00000000 --- a/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization.h +++ /dev/null @@ -1,69 +0,0 @@ -#pragma once -#include - -typedef enum { SUM = 0, MEAN = 1, MAX = 2 } reduce_t; - -namespace voxelization { - -int hard_voxelize_cpu(const at::Tensor &points, at::Tensor &voxels, - at::Tensor &coors, at::Tensor &num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim = 3); - -#ifdef WITH_CUDA -int hard_voxelize_gpu(const at::Tensor &points, at::Tensor &voxels, - at::Tensor &coors, at::Tensor &num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim = 3); - -int nondisterministic_hard_voxelize_gpu(const at::Tensor &points, at::Tensor &voxels, - at::Tensor &coors, at::Tensor &num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim = 3); -#endif - -// Interface for Python -inline int hard_voxelize(const at::Tensor &points, at::Tensor &voxels, - at::Tensor &coors, at::Tensor &num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim = 3, const bool deterministic = true) { - if (points.device().is_cuda()) { -#ifdef WITH_CUDA - if (deterministic) { - return hard_voxelize_gpu(points, voxels, coors, num_points_per_voxel, - voxel_size, coors_range, max_points, max_voxels, - NDim); - } - return nondisterministic_hard_voxelize_gpu(points, voxels, coors, num_points_per_voxel, - voxel_size, coors_range, max_points, max_voxels, - NDim); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - return hard_voxelize_cpu(points, voxels, coors, num_points_per_voxel, - voxel_size, coors_range, max_points, max_voxels, - NDim); -} - - -inline reduce_t convert_reduce_type(const std::string &reduce_type) { - if (reduce_type == "max") - return reduce_t::MAX; - else if (reduce_type == "sum") - return reduce_t::SUM; - else if (reduce_type == "mean") - return reduce_t::MEAN; - else TORCH_CHECK(false, "do not support reduce type " + reduce_type) - return reduce_t::SUM; -} - -} // namespace voxelization diff --git a/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization_cpu.cpp b/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization_cpu.cpp deleted file mode 100755 index 6bcec401..00000000 --- a/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization_cpu.cpp +++ /dev/null @@ -1,173 +0,0 @@ -#include -#include -// #include "voxelization.h" - -namespace { - -template -void dynamic_voxelize_kernel(const torch::TensorAccessor points, - torch::TensorAccessor coors, - const std::vector voxel_size, - const std::vector coors_range, - const std::vector grid_size, - const int num_points, const int num_features, - const int NDim) { - const int ndim_minus_1 = NDim - 1; - bool failed = false; - // int coor[NDim]; - int* coor = new int[NDim](); - int c; - - for (int i = 0; i < num_points; ++i) { - failed = false; - for (int j = 0; j < NDim; ++j) { - c = floor((points[i][j] - coors_range[j]) / voxel_size[j]); - // necessary to rm points out of range - if ((c < 0 || c >= grid_size[j])) { - failed = true; - break; - } - 
coor[ndim_minus_1 - j] = c; - } - - for (int k = 0; k < NDim; ++k) { - if (failed) - coors[i][k] = -1; - else - coors[i][k] = coor[k]; - } - } - - delete[] coor; - return; -} - -template -void hard_voxelize_kernel(const torch::TensorAccessor points, - torch::TensorAccessor voxels, - torch::TensorAccessor coors, - torch::TensorAccessor num_points_per_voxel, - torch::TensorAccessor coor_to_voxelidx, - int& voxel_num, const std::vector voxel_size, - const std::vector coors_range, - const std::vector grid_size, - const int max_points, const int max_voxels, - const int num_points, const int num_features, - const int NDim) { - // declare a temp coors - at::Tensor temp_coors = at::zeros( - {num_points, NDim}, at::TensorOptions().dtype(at::kInt).device(at::kCPU)); - - // First use dynamic voxelization to get coors, - // then check max points/voxels constraints - dynamic_voxelize_kernel(points, temp_coors.accessor(), - voxel_size, coors_range, grid_size, - num_points, num_features, NDim); - - int voxelidx, num; - auto coor = temp_coors.accessor(); - - for (int i = 0; i < num_points; ++i) { - // T_int* coor = temp_coors.data_ptr() + i * NDim; - - if (coor[i][0] == -1) continue; - - voxelidx = coor_to_voxelidx[coor[i][0]][coor[i][1]][coor[i][2]]; - - // record voxel - if (voxelidx == -1) { - voxelidx = voxel_num; - if (max_voxels != -1 && voxel_num >= max_voxels) continue; - voxel_num += 1; - - coor_to_voxelidx[coor[i][0]][coor[i][1]][coor[i][2]] = voxelidx; - - for (int k = 0; k < NDim; ++k) { - coors[voxelidx][k] = coor[i][k]; - } - } - - // put points into voxel - num = num_points_per_voxel[voxelidx]; - if (max_points == -1 || num < max_points) { - for (int k = 0; k < num_features; ++k) { - voxels[voxelidx][num][k] = points[i][k]; - } - num_points_per_voxel[voxelidx] += 1; - } - } - - return; -} - -} // namespace - -namespace voxelization { - -int hard_voxelize_cpu(const at::Tensor& points, at::Tensor& voxels, - at::Tensor& coors, at::Tensor& num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim = 3) { - // current version tooks about 0.02s_0.03s for one frame on cpu - // check device - AT_ASSERTM(points.device().is_cpu(), "points must be a CPU tensor"); - - std::vector grid_size(NDim); - const int num_points = points.size(0); - const int num_features = points.size(1); - - for (int i = 0; i < NDim; ++i) { - grid_size[i] = - round((coors_range[NDim + i] - coors_range[i]) / voxel_size[i]); - } - - // coors, num_points_per_voxel, coor_to_voxelidx are int Tensor - // printf("cpu coor_to_voxelidx size: [%d, %d, %d]\n", grid_size[2], - // grid_size[1], grid_size[0]); - at::Tensor coor_to_voxelidx = - -at::ones({grid_size[2], grid_size[1], grid_size[0]}, coors.options()); - - int voxel_num = 0; - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - points.scalar_type(), "hard_voxelize_forward", [&] { - hard_voxelize_kernel( - points.accessor(), voxels.accessor(), - coors.accessor(), num_points_per_voxel.accessor(), - coor_to_voxelidx.accessor(), voxel_num, voxel_size, - coors_range, grid_size, max_points, max_voxels, num_points, - num_features, NDim); - }); - - return voxel_num; -} - -void dynamic_voxelize_cpu(const at::Tensor& points, at::Tensor& coors, - const std::vector voxel_size, - const std::vector coors_range, - const int NDim = 3) { - // check device - AT_ASSERTM(points.device().is_cpu(), "points must be a CPU tensor"); - - std::vector grid_size(NDim); - const int num_points = points.size(0); - const int 
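The CPU kernels above follow a two-stage scheme: first quantise every point into a grid cell (dropping points outside the range), then assign cells to voxels on a first-come basis while capping both the number of voxels and the points per voxel. A compact NumPy rendering of that logic, useful for understanding or debugging, could look like the following (`hard_voxelize_np` is an illustrative reference, not code from this repository):

```python
import numpy as np

def hard_voxelize_np(points, voxel_size, pc_range, max_points=32, max_voxels=16000):
    """Slow reference voxelisation mirroring hard_voxelize_kernel."""
    voxel_size = np.asarray(voxel_size, dtype=np.float32)
    pc_range = np.asarray(pc_range, dtype=np.float32)
    grid = np.round((pc_range[3:] - pc_range[:3]) / voxel_size).astype(np.int64)

    # 1. quantise points into grid cells and drop out-of-range points.
    coors = np.floor((points[:, :3] - pc_range[:3]) / voxel_size).astype(np.int64)
    valid = ((coors >= 0) & (coors < grid)).all(axis=1)

    voxels = np.zeros((max_voxels, max_points, points.shape[1]), dtype=points.dtype)
    voxel_coors = np.zeros((max_voxels, 3), dtype=np.int64)
    num_points = np.zeros(max_voxels, dtype=np.int64)
    coor_to_idx, n_voxels = {}, 0

    # 2. first-come voxel assignment with voxel and per-voxel point budgets.
    for i in np.flatnonzero(valid):
        key = tuple(coors[i][::-1])            # store as (z, y, x) like the kernel
        idx = coor_to_idx.get(key)
        if idx is None:
            if n_voxels >= max_voxels:
                continue                        # voxel budget exhausted: drop the point
            idx = coor_to_idx[key] = n_voxels
            voxel_coors[idx] = key
            n_voxels += 1
        if num_points[idx] < max_points:        # per-voxel point budget
            voxels[idx, num_points[idx]] = points[i]
            num_points[idx] += 1

    return voxels[:n_voxels], voxel_coors[:n_voxels], num_points[:n_voxels]
```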
num_features = points.size(1); - - for (int i = 0; i < NDim; ++i) { - grid_size[i] = - round((coors_range[NDim + i] - coors_range[i]) / voxel_size[i]); - } - - // coors, num_points_per_voxel, coor_to_voxelidx are int Tensor - AT_DISPATCH_FLOATING_TYPES_AND_HALF( - points.scalar_type(), "hard_voxelize_forward", [&] { - dynamic_voxelize_kernel( - points.accessor(), coors.accessor(), - voxel_size, coors_range, grid_size, num_points, num_features, NDim); - }); - - return; -} - -} // namespace voxelization diff --git a/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization_cuda.cu b/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization_cuda.cu deleted file mode 100755 index b4edcf3b..00000000 --- a/cv/3d_detection/pointpillars/pytorch/ops/voxelization/voxelization_cuda.cu +++ /dev/null @@ -1,530 +0,0 @@ -#include -#include -#include -#include - -#include - -#define CHECK_CUDA(x) \ - TORCH_CHECK(x.device().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -namespace { -int const threadsPerBlock = sizeof(unsigned long long) * 8; -} - -#define CUDA_1D_KERNEL_LOOP(i, n) \ - for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; \ - i += blockDim.x * gridDim.x) - -template -__global__ void dynamic_voxelize_kernel( - const T* points, T_int* coors, const float voxel_x, const float voxel_y, - const float voxel_z, const float coors_x_min, const float coors_y_min, - const float coors_z_min, const float coors_x_max, const float coors_y_max, - const float coors_z_max, const int grid_x, const int grid_y, - const int grid_z, const int num_points, const int num_features, - const int NDim) { - // const int index = blockIdx.x * threadsPerBlock + threadIdx.x; - CUDA_1D_KERNEL_LOOP(index, num_points) { - // To save some computation - auto points_offset = points + index * num_features; - auto coors_offset = coors + index * NDim; - int c_x = floor((points_offset[0] - coors_x_min) / voxel_x); - if (c_x < 0 || c_x >= grid_x) { - coors_offset[0] = -1; - return; - } - - int c_y = floor((points_offset[1] - coors_y_min) / voxel_y); - if (c_y < 0 || c_y >= grid_y) { - coors_offset[0] = -1; - coors_offset[1] = -1; - return; - } - - int c_z = floor((points_offset[2] - coors_z_min) / voxel_z); - if (c_z < 0 || c_z >= grid_z) { - coors_offset[0] = -1; - coors_offset[1] = -1; - coors_offset[2] = -1; - } else { - coors_offset[0] = c_z; - coors_offset[1] = c_y; - coors_offset[2] = c_x; - } - } -} - -template -__global__ void assign_point_to_voxel(const int nthreads, const T* points, - T_int* point_to_voxelidx, - T_int* coor_to_voxelidx, T* voxels, - const int max_points, - const int num_features, - const int num_points, const int NDim) { - CUDA_1D_KERNEL_LOOP(thread_idx, nthreads) { - // const int index = blockIdx.x * threadsPerBlock + threadIdx.x; - int index = thread_idx / num_features; - - int num = point_to_voxelidx[index]; - int voxelidx = coor_to_voxelidx[index]; - if (num > -1 && voxelidx > -1) { - auto voxels_offset = - voxels + voxelidx * max_points * num_features + num * num_features; - - int k = thread_idx % num_features; - voxels_offset[k] = points[thread_idx]; - } - } -} - -template -__global__ void assign_voxel_coors(const int nthreads, T_int* coor, - T_int* point_to_voxelidx, - T_int* coor_to_voxelidx, T_int* voxel_coors, - const int num_points, const int NDim) { - CUDA_1D_KERNEL_LOOP(thread_idx, nthreads) { - // const int index = 
blockIdx.x * threadsPerBlock + threadIdx.x; - // if (index >= num_points) return; - int index = thread_idx / NDim; - int num = point_to_voxelidx[index]; - int voxelidx = coor_to_voxelidx[index]; - if (num == 0 && voxelidx > -1) { - auto coors_offset = voxel_coors + voxelidx * NDim; - int k = thread_idx % NDim; - coors_offset[k] = coor[thread_idx]; - } - } -} - -template -__global__ void point_to_voxelidx_kernel(const T_int* coor, - T_int* point_to_voxelidx, - T_int* point_to_pointidx, - const int max_points, - const int max_voxels, - const int num_points, const int NDim) { - CUDA_1D_KERNEL_LOOP(index, num_points) { - auto coor_offset = coor + index * NDim; - // skip invalid points - if ((index >= num_points) || (coor_offset[0] == -1)) return; - - int num = 0; - int coor_x = coor_offset[0]; - int coor_y = coor_offset[1]; - int coor_z = coor_offset[2]; - // only calculate the coors before this coor[index] - for (int i = 0; i < index; ++i) { - auto prev_coor = coor + i * NDim; - if (prev_coor[0] == -1) continue; - - // Find all previous points that have the same coors - // if find the same coor, record it - if ((prev_coor[0] == coor_x) && (prev_coor[1] == coor_y) && - (prev_coor[2] == coor_z)) { - num++; - if (num == 1) { - // point to the same coor that first show up - point_to_pointidx[index] = i; - } else if (num >= max_points) { - // out of boundary - return; - } - } - } - if (num == 0) { - point_to_pointidx[index] = index; - } - if (num < max_points) { - point_to_voxelidx[index] = num; - } - } -} - -template -__global__ void determin_voxel_num( - // const T_int* coor, - T_int* num_points_per_voxel, T_int* point_to_voxelidx, - T_int* point_to_pointidx, T_int* coor_to_voxelidx, T_int* voxel_num, - const int max_points, const int max_voxels, const int num_points) { - // only calculate the coors before this coor[index] - for (int i = 0; i < num_points; ++i) { - // if (coor[i][0] == -1) - // continue; - int point_pos_in_voxel = point_to_voxelidx[i]; - // record voxel - if (point_pos_in_voxel == -1) { - // out of max_points or invalid point - continue; - } else if (point_pos_in_voxel == 0) { - // record new voxel - int voxelidx = voxel_num[0]; - if (voxel_num[0] >= max_voxels) continue; - voxel_num[0] += 1; - coor_to_voxelidx[i] = voxelidx; - num_points_per_voxel[voxelidx] = 1; - } else { - int point_idx = point_to_pointidx[i]; - int voxelidx = coor_to_voxelidx[point_idx]; - if (voxelidx != -1) { - coor_to_voxelidx[i] = voxelidx; - num_points_per_voxel[voxelidx] += 1; - } - } - } -} - -__global__ void nondisterministic_get_assign_pos( - const int nthreads, const int32_t *coors_map, int32_t *pts_id, - int32_t *coors_count, int32_t *reduce_count, int32_t *coors_order) { - CUDA_1D_KERNEL_LOOP(thread_idx, nthreads) { - int coors_idx = coors_map[thread_idx]; - if (coors_idx > -1) { - int32_t coors_pts_pos = atomicAdd(&reduce_count[coors_idx], 1); - pts_id[thread_idx] = coors_pts_pos; - if (coors_pts_pos == 0) { - coors_order[coors_idx] = atomicAdd(coors_count, 1); - } - } - } -} - -template -__global__ void nondisterministic_assign_point_voxel( - const int nthreads, const T *points, const int32_t *coors_map, - const int32_t *pts_id, const int32_t *coors_in, - const int32_t *reduce_count, const int32_t *coors_order, - T *voxels, int32_t *coors, int32_t *pts_count, const int max_voxels, - const int max_points, const int num_features, const int NDim) { - CUDA_1D_KERNEL_LOOP(thread_idx, nthreads) { - int coors_idx = coors_map[thread_idx]; - int coors_pts_pos = pts_id[thread_idx]; - if (coors_idx > -1) 
{ - int coors_pos = coors_order[coors_idx]; - if (coors_pos < max_voxels && coors_pts_pos < max_points) { - auto voxels_offset = - voxels + (coors_pos * max_points + coors_pts_pos) * num_features; - auto points_offset = points + thread_idx * num_features; - for (int k = 0; k < num_features; k++) { - voxels_offset[k] = points_offset[k]; - } - if (coors_pts_pos == 0) { - pts_count[coors_pos] = min(reduce_count[coors_idx], max_points); - auto coors_offset = coors + coors_pos * NDim; - auto coors_in_offset = coors_in + coors_idx * NDim; - for (int k = 0; k < NDim; k++) { - coors_offset[k] = coors_in_offset[k]; - } - } - } - } - } -} - -namespace voxelization { - -int hard_voxelize_gpu(const at::Tensor& points, at::Tensor& voxels, - at::Tensor& coors, at::Tensor& num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim = 3) { - // current version tooks about 0.04s for one frame on cpu - // check device - CHECK_INPUT(points); - - at::cuda::CUDAGuard device_guard(points.device()); - - const int num_points = points.size(0); - const int num_features = points.size(1); - - const float voxel_x = voxel_size[0]; - const float voxel_y = voxel_size[1]; - const float voxel_z = voxel_size[2]; - const float coors_x_min = coors_range[0]; - const float coors_y_min = coors_range[1]; - const float coors_z_min = coors_range[2]; - const float coors_x_max = coors_range[3]; - const float coors_y_max = coors_range[4]; - const float coors_z_max = coors_range[5]; - - const int grid_x = round((coors_x_max - coors_x_min) / voxel_x); - const int grid_y = round((coors_y_max - coors_y_min) / voxel_y); - const int grid_z = round((coors_z_max - coors_z_min) / voxel_z); - - // map points to voxel coors - at::Tensor temp_coors = - at::zeros({num_points, NDim}, points.options().dtype(at::kInt)); - - dim3 grid(std::min(at::cuda::ATenCeilDiv(num_points, 512), 4096)); - dim3 block(512); - - // 1. link point to corresponding voxel coors - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "hard_voxelize_kernel", ([&] { - dynamic_voxelize_kernel - <<>>( - points.contiguous().data_ptr(), - temp_coors.contiguous().data_ptr(), voxel_x, voxel_y, - voxel_z, coors_x_min, coors_y_min, coors_z_min, coors_x_max, - coors_y_max, coors_z_max, grid_x, grid_y, grid_z, num_points, - num_features, NDim); - })); - cudaDeviceSynchronize(); - AT_CUDA_CHECK(cudaGetLastError()); - - // 2. map point to the idx of the corresponding voxel, find duplicate coor - // create some temporary variables - auto point_to_pointidx = -at::ones( - { - num_points, - }, - points.options().dtype(at::kInt)); - auto point_to_voxelidx = -at::ones( - { - num_points, - }, - points.options().dtype(at::kInt)); - - dim3 map_grid(std::min(at::cuda::ATenCeilDiv(num_points, 512), 4096)); - dim3 map_block(512); - AT_DISPATCH_ALL_TYPES( - temp_coors.scalar_type(), "determin_duplicate", ([&] { - point_to_voxelidx_kernel - <<>>( - temp_coors.contiguous().data_ptr(), - point_to_voxelidx.contiguous().data_ptr(), - point_to_pointidx.contiguous().data_ptr(), max_points, - max_voxels, num_points, NDim); - })); - cudaDeviceSynchronize(); - AT_CUDA_CHECK(cudaGetLastError()); - - // 3. 
determin voxel num and voxel's coor index - // make the logic in the CUDA device could accelerate about 10 times - auto coor_to_voxelidx = -at::ones( - { - num_points, - }, - points.options().dtype(at::kInt)); - auto voxel_num = at::zeros( - { - 1, - }, - points.options().dtype(at::kInt)); // must be zero from the begining - - AT_DISPATCH_ALL_TYPES( - temp_coors.scalar_type(), "determin_duplicate", ([&] { - determin_voxel_num<<<1, 1, 0, at::cuda::getCurrentCUDAStream()>>>( - num_points_per_voxel.contiguous().data_ptr(), - point_to_voxelidx.contiguous().data_ptr(), - point_to_pointidx.contiguous().data_ptr(), - coor_to_voxelidx.contiguous().data_ptr(), - voxel_num.contiguous().data_ptr(), max_points, max_voxels, - num_points); - })); - cudaDeviceSynchronize(); - AT_CUDA_CHECK(cudaGetLastError()); - - // 4. copy point features to voxels - // Step 4 & 5 could be parallel - auto pts_output_size = num_points * num_features; - dim3 cp_grid(std::min(at::cuda::ATenCeilDiv(pts_output_size, 512), 4096)); - dim3 cp_block(512); - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "assign_point_to_voxel", ([&] { - assign_point_to_voxel - <<>>( - pts_output_size, points.contiguous().data_ptr(), - point_to_voxelidx.contiguous().data_ptr(), - coor_to_voxelidx.contiguous().data_ptr(), - voxels.contiguous().data_ptr(), max_points, num_features, - num_points, NDim); - })); - // cudaDeviceSynchronize(); - // AT_CUDA_CHECK(cudaGetLastError()); - - // 5. copy coors of each voxels - auto coors_output_size = num_points * NDim; - dim3 coors_cp_grid( - std::min(at::cuda::ATenCeilDiv(coors_output_size, 512), 4096)); - dim3 coors_cp_block(512); - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "assign_point_to_voxel", ([&] { - assign_voxel_coors<<>>( - coors_output_size, temp_coors.contiguous().data_ptr(), - point_to_voxelidx.contiguous().data_ptr(), - coor_to_voxelidx.contiguous().data_ptr(), - coors.contiguous().data_ptr(), num_points, NDim); - })); - cudaDeviceSynchronize(); - AT_CUDA_CHECK(cudaGetLastError()); - - auto voxel_num_cpu = voxel_num.to(at::kCPU); - int voxel_num_int = voxel_num_cpu.data_ptr()[0]; - - return voxel_num_int; -} - -int nondisterministic_hard_voxelize_gpu( - const at::Tensor &points, at::Tensor &voxels, - at::Tensor &coors, at::Tensor &num_points_per_voxel, - const std::vector voxel_size, - const std::vector coors_range, - const int max_points, const int max_voxels, - const int NDim = 3) { - - CHECK_INPUT(points); - - at::cuda::CUDAGuard device_guard(points.device()); - - const int num_points = points.size(0); - const int num_features = points.size(1); - - if (num_points == 0) - return 0; - - const float voxel_x = voxel_size[0]; - const float voxel_y = voxel_size[1]; - const float voxel_z = voxel_size[2]; - const float coors_x_min = coors_range[0]; - const float coors_y_min = coors_range[1]; - const float coors_z_min = coors_range[2]; - const float coors_x_max = coors_range[3]; - const float coors_y_max = coors_range[4]; - const float coors_z_max = coors_range[5]; - - const int grid_x = round((coors_x_max - coors_x_min) / voxel_x); - const int grid_y = round((coors_y_max - coors_y_min) / voxel_y); - const int grid_z = round((coors_z_max - coors_z_min) / voxel_z); - - // map points to voxel coors - at::Tensor temp_coors = - at::zeros({num_points, NDim}, points.options().dtype(torch::kInt32)); - - dim3 grid(std::min(at::cuda::ATenCeilDiv(num_points, 512), 4096)); - dim3 block(512); - - // 1. 
link point to corresponding voxel coors - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "hard_voxelize_kernel", ([&] { - dynamic_voxelize_kernel - <<>>( - points.contiguous().data_ptr(), - temp_coors.contiguous().data_ptr(), voxel_x, voxel_y, - voxel_z, coors_x_min, coors_y_min, coors_z_min, coors_x_max, - coors_y_max, coors_z_max, grid_x, grid_y, grid_z, num_points, - num_features, NDim); - })); - - at::Tensor coors_map; - at::Tensor coors_count; - at::Tensor coors_order; - at::Tensor reduce_count; - at::Tensor pts_id; - - auto coors_clean = temp_coors.masked_fill(temp_coors.lt(0).any(-1, true), -1); - - std::tie(temp_coors, coors_map, reduce_count) = - at::unique_dim(coors_clean, 0, true, true, false); - - if (temp_coors.index({0, 0}).lt(0).item()) { - // the first element of temp_coors is (-1,-1,-1) and should be removed - temp_coors = temp_coors.slice(0, 1); - coors_map = coors_map - 1; - } - - int num_coors = temp_coors.size(0); - temp_coors = temp_coors.to(torch::kInt32); - coors_map = coors_map.to(torch::kInt32); - - coors_count = coors_map.new_zeros(1); - coors_order = coors_map.new_empty(num_coors); - reduce_count = coors_map.new_zeros(num_coors); - pts_id = coors_map.new_zeros(num_points); - - dim3 cp_grid(std::min(at::cuda::ATenCeilDiv(num_points, 512), 4096)); - dim3 cp_block(512); - AT_DISPATCH_ALL_TYPES(points.scalar_type(), "get_assign_pos", ([&] { - nondisterministic_get_assign_pos<<>>( - num_points, - coors_map.contiguous().data_ptr(), - pts_id.contiguous().data_ptr(), - coors_count.contiguous().data_ptr(), - reduce_count.contiguous().data_ptr(), - coors_order.contiguous().data_ptr()); - })); - - AT_DISPATCH_ALL_TYPES( - points.scalar_type(), "assign_point_to_voxel", ([&] { - nondisterministic_assign_point_voxel - <<>>( - num_points, points.contiguous().data_ptr(), - coors_map.contiguous().data_ptr(), - pts_id.contiguous().data_ptr(), - temp_coors.contiguous().data_ptr(), - reduce_count.contiguous().data_ptr(), - coors_order.contiguous().data_ptr(), - voxels.contiguous().data_ptr(), - coors.contiguous().data_ptr(), - num_points_per_voxel.contiguous().data_ptr(), - max_voxels, max_points, - num_features, NDim); - })); - AT_CUDA_CHECK(cudaGetLastError()); - return max_voxels < num_coors ? 
max_voxels : num_coors; -} - -void dynamic_voxelize_gpu(const at::Tensor& points, at::Tensor& coors, - const std::vector voxel_size, - const std::vector coors_range, - const int NDim = 3) { - // current version tooks about 0.04s for one frame on cpu - // check device - CHECK_INPUT(points); - - at::cuda::CUDAGuard device_guard(points.device()); - - const int num_points = points.size(0); - const int num_features = points.size(1); - - const float voxel_x = voxel_size[0]; - const float voxel_y = voxel_size[1]; - const float voxel_z = voxel_size[2]; - const float coors_x_min = coors_range[0]; - const float coors_y_min = coors_range[1]; - const float coors_z_min = coors_range[2]; - const float coors_x_max = coors_range[3]; - const float coors_y_max = coors_range[4]; - const float coors_z_max = coors_range[5]; - - const int grid_x = round((coors_x_max - coors_x_min) / voxel_x); - const int grid_y = round((coors_y_max - coors_y_min) / voxel_y); - const int grid_z = round((coors_z_max - coors_z_min) / voxel_z); - - const int col_blocks = at::cuda::ATenCeilDiv(num_points, threadsPerBlock); - dim3 blocks(col_blocks); - dim3 threads(threadsPerBlock); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - - AT_DISPATCH_ALL_TYPES(points.scalar_type(), "dynamic_voxelize_kernel", [&] { - dynamic_voxelize_kernel<<>>( - points.contiguous().data_ptr(), - coors.contiguous().data_ptr(), voxel_x, voxel_y, voxel_z, - coors_x_min, coors_y_min, coors_z_min, coors_x_max, coors_y_max, - coors_z_max, grid_x, grid_y, grid_z, num_points, num_features, NDim); - }); - cudaDeviceSynchronize(); - AT_CUDA_CHECK(cudaGetLastError()); - - return; -} - -} // namespace voxelization diff --git a/cv/3d_detection/pointpillars/pytorch/pre_process_kitti.py b/cv/3d_detection/pointpillars/pytorch/pre_process_kitti.py deleted file mode 100755 index ad506740..00000000 --- a/cv/3d_detection/pointpillars/pytorch/pre_process_kitti.py +++ /dev/null @@ -1,160 +0,0 @@ -import argparse -import pdb -import cv2 -import numpy as np -import os -from tqdm import tqdm -import sys -CUR = os.path.dirname(os.path.abspath(__file__)) -sys.path.append(CUR) - -from utils import read_points, write_points, read_calib, read_label, \ - write_pickle, remove_outside_points, get_points_num_in_bbox, \ - points_in_bboxes_v2 - - -def judge_difficulty(annotation_dict): - truncated = annotation_dict['truncated'] - occluded = annotation_dict['occluded'] - bbox = annotation_dict['bbox'] - height = bbox[:, 3] - bbox[:, 1] - - MIN_HEIGHTS = [40, 25, 25] - MAX_OCCLUSION = [0, 1, 2] - MAX_TRUNCATION = [0.15, 0.30, 0.50] - difficultys = [] - for h, o, t in zip(height, occluded, truncated): - difficulty = -1 - for i in range(2, -1, -1): - if h > MIN_HEIGHTS[i] and o <= MAX_OCCLUSION[i] and t <= MAX_TRUNCATION[i]: - difficulty = i - difficultys.append(difficulty) - return np.array(difficultys, dtype=np.int) - - -def create_data_info_pkl(data_root, data_type, prefix, label=True, db=False): - sep = os.path.sep - print(f"Processing {data_type} data..") - ids_file = os.path.join(CUR, 'dataset', 'ImageSets', f'{data_type}.txt') - with open(ids_file, 'r') as f: - ids = [id.strip() for id in f.readlines()] - - split = 'training' if label else 'testing' - - kitti_infos_dict = {} - if db: - kitti_dbinfos_train = {} - db_points_saved_path = os.path.join(data_root, f'{prefix}_gt_database') - os.makedirs(db_points_saved_path, exist_ok=True) - for id in tqdm(ids): - cur_info_dict={} - img_path = os.path.join(data_root, split, 'image_2', f'{id}.png') - lidar_path = 
os.path.join(data_root, split, 'velodyne', f'{id}.bin') - calib_path = os.path.join(data_root, split, 'calib', f'{id}.txt') - cur_info_dict['velodyne_path'] = sep.join(lidar_path.split(sep)[-3:]) - - img = cv2.imread(img_path) - image_shape = img.shape[:2] - cur_info_dict['image'] = { - 'image_shape': image_shape, - 'image_path': sep.join(img_path.split(sep)[-3:]), - 'image_idx': int(id), - } - - calib_dict = read_calib(calib_path) - cur_info_dict['calib'] = calib_dict - - lidar_points = read_points(lidar_path) - reduced_lidar_points = remove_outside_points( - points=lidar_points, - r0_rect=calib_dict['R0_rect'], - tr_velo_to_cam=calib_dict['Tr_velo_to_cam'], - P2=calib_dict['P2'], - image_shape=image_shape) - saved_reduced_path = os.path.join(data_root, split, 'velodyne_reduced') - os.makedirs(saved_reduced_path, exist_ok=True) - saved_reduced_points_name = os.path.join(saved_reduced_path, f'{id}.bin') - write_points(reduced_lidar_points, saved_reduced_points_name) - - if label: - label_path = os.path.join(data_root, split, 'label_2', f'{id}.txt') - annotation_dict = read_label(label_path) - annotation_dict['difficulty'] = judge_difficulty(annotation_dict) - annotation_dict['num_points_in_gt'] = get_points_num_in_bbox( - points=reduced_lidar_points, - r0_rect=calib_dict['R0_rect'], - tr_velo_to_cam=calib_dict['Tr_velo_to_cam'], - dimensions=annotation_dict['dimensions'], - location=annotation_dict['location'], - rotation_y=annotation_dict['rotation_y'], - name=annotation_dict['name']) - cur_info_dict['annos'] = annotation_dict - - if db: - indices, n_total_bbox, n_valid_bbox, bboxes_lidar, name = \ - points_in_bboxes_v2( - points=lidar_points, - r0_rect=calib_dict['R0_rect'].astype(np.float32), - tr_velo_to_cam=calib_dict['Tr_velo_to_cam'].astype(np.float32), - dimensions=annotation_dict['dimensions'].astype(np.float32), - location=annotation_dict['location'].astype(np.float32), - rotation_y=annotation_dict['rotation_y'].astype(np.float32), - name=annotation_dict['name'] - ) - for j in range(n_valid_bbox): - db_points = lidar_points[indices[:, j]] - db_points[:, :3] -= bboxes_lidar[j, :3] - db_points_saved_name = os.path.join(db_points_saved_path, f'{int(id)}_{name[j]}_{j}.bin') - write_points(db_points, db_points_saved_name) - - db_info={ - 'name': name[j], - 'path': os.path.join(os.path.basename(db_points_saved_path), f'{int(id)}_{name[j]}_{j}.bin'), - 'box3d_lidar': bboxes_lidar[j], - 'difficulty': annotation_dict['difficulty'][j], - 'num_points_in_gt': len(db_points), - } - if name[j] not in kitti_dbinfos_train: - kitti_dbinfos_train[name[j]] = [db_info] - else: - kitti_dbinfos_train[name[j]].append(db_info) - - kitti_infos_dict[int(id)] = cur_info_dict - - saved_path = os.path.join(data_root, f'{prefix}_infos_{data_type}.pkl') - write_pickle(kitti_infos_dict, saved_path) - if db: - saved_db_path = os.path.join(data_root, f'{prefix}_dbinfos_train.pkl') - write_pickle(kitti_dbinfos_train, saved_db_path) - return kitti_infos_dict - - -def main(args): - data_root = args.data_root - prefix = args.prefix - - ## 1. train: create data infomation pkl file && create reduced point clouds - ## && create database(points in gt bbox) for data aumentation - kitti_train_infos_dict = create_data_info_pkl(data_root, 'train', prefix, db=True) - - ## 2. val: create data infomation pkl file && create reduced point clouds - kitti_val_infos_dict = create_data_info_pkl(data_root, 'val', prefix) - - ## 3. 
trainval: create data infomation pkl file - kitti_trainval_infos_dict = {**kitti_train_infos_dict, **kitti_val_infos_dict} - saved_path = os.path.join(data_root, f'{prefix}_infos_trainval.pkl') - write_pickle(kitti_trainval_infos_dict, saved_path) - - ## 4. test: create data infomation pkl file && create reduced point clouds - kitti_test_infos_dict = create_data_info_pkl(data_root, 'test', prefix, label=False) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Dataset infomation') - parser.add_argument('--data_root', default='/mnt/ssd1/lifa_rdata/det/kitti', - help='your data root for kitti') - parser.add_argument('--prefix', default='kitti', - help='the prefix name for the saved .pkl file') - args = parser.parse_args() - - main(args) \ No newline at end of file diff --git a/cv/3d_detection/pointpillars/pytorch/requirements.txt b/cv/3d_detection/pointpillars/pytorch/requirements.txt deleted file mode 100755 index 7844bc5e..00000000 --- a/cv/3d_detection/pointpillars/pytorch/requirements.txt +++ /dev/null @@ -1,9 +0,0 @@ -numba -numpy==1.19.5 -# open3d==0.13.0 -opencv_python==4.5.5.62 -# pointpillars.egg==info -PyYAML==6.0 -setuptools==58.0.4 -# torch==1.8.1+cu111 -tqdm==4.62.3 diff --git a/cv/3d_detection/pointpillars/pytorch/test.py b/cv/3d_detection/pointpillars/pytorch/test.py deleted file mode 100755 index ad848d54..00000000 --- a/cv/3d_detection/pointpillars/pytorch/test.py +++ /dev/null @@ -1,138 +0,0 @@ -import argparse -import cv2 -import numpy as np -import os -import torch -import pdb - -from utils import setup_seed, read_points, read_calib, read_label, \ - keep_bbox_from_image_range, keep_bbox_from_lidar_range, vis_pc, \ - vis_img_3d, bbox3d2corners_camera, points_camera2image, \ - bbox_camera2lidar -from model import PointPillars - - -def point_range_filter(pts, point_range=[0, -39.68, -3, 69.12, 39.68, 1]): - ''' - data_dict: dict(pts, gt_bboxes_3d, gt_labels, gt_names, difficulty) - point_range: [x1, y1, z1, x2, y2, z2] - ''' - flag_x_low = pts[:, 0] > point_range[0] - flag_y_low = pts[:, 1] > point_range[1] - flag_z_low = pts[:, 2] > point_range[2] - flag_x_high = pts[:, 0] < point_range[3] - flag_y_high = pts[:, 1] < point_range[4] - flag_z_high = pts[:, 2] < point_range[5] - keep_mask = flag_x_low & flag_y_low & flag_z_low & flag_x_high & flag_y_high & flag_z_high - pts = pts[keep_mask] - return pts - - -def main(args): - CLASSES = { - 'Pedestrian': 0, - 'Cyclist': 1, - 'Car': 2 - } - LABEL2CLASSES = {v:k for k, v in CLASSES.items()} - pcd_limit_range = np.array([0, -40, -3, 70.4, 40, 0.0], dtype=np.float32) - - if not args.no_cuda: - model = PointPillars(nclasses=len(CLASSES)).cuda() - model.load_state_dict(torch.load(args.ckpt)) - else: - model = PointPillars(nclasses=len(CLASSES)) - model.load_state_dict( - torch.load(args.ckpt, map_location=torch.device('cpu'))) - - if not os.path.exists(args.pc_path): - raise FileNotFoundError - pc = read_points(args.pc_path) - pc = point_range_filter(pc) - pc_torch = torch.from_numpy(pc) - if os.path.exists(args.calib_path): - calib_info = read_calib(args.calib_path) - else: - calib_info = None - - if os.path.exists(args.gt_path): - gt_label = read_label(args.gt_path) - else: - gt_label = None - - if os.path.exists(args.img_path): - img = cv2.imread(args.img_path, 1) - else: - img = None - - model.eval() - with torch.no_grad(): - if not args.no_cuda: - pc_torch = pc_torch.cuda() - - result_filter = model(batched_pts=[pc_torch], - mode='test')[0] - if calib_info is not None and img is not None: - 
tr_velo_to_cam = calib_info['Tr_velo_to_cam'].astype(np.float32) - r0_rect = calib_info['R0_rect'].astype(np.float32) - P2 = calib_info['P2'].astype(np.float32) - - image_shape = img.shape[:2] - result_filter = keep_bbox_from_image_range(result_filter, tr_velo_to_cam, r0_rect, P2, image_shape) - - result_filter = keep_bbox_from_lidar_range(result_filter, pcd_limit_range) - lidar_bboxes = result_filter['lidar_bboxes'] - labels, scores = result_filter['labels'], result_filter['scores'] - - vis_pc(pc, bboxes=lidar_bboxes, labels=labels) - - if calib_info is not None and img is not None: - bboxes2d, camera_bboxes = result_filter['bboxes2d'], result_filter['camera_bboxes'] - bboxes_corners = bbox3d2corners_camera(camera_bboxes) - image_points = points_camera2image(bboxes_corners, P2) - img = vis_img_3d(img, image_points, labels, rt=True) - - if calib_info is not None and gt_label is not None: - tr_velo_to_cam = calib_info['Tr_velo_to_cam'].astype(np.float32) - r0_rect = calib_info['R0_rect'].astype(np.float32) - - dimensions = gt_label['dimensions'] - location = gt_label['location'] - rotation_y = gt_label['rotation_y'] - gt_labels = np.array([CLASSES.get(item, -1) for item in gt_label['name']]) - sel = gt_labels != -1 - gt_labels = gt_labels[sel] - bboxes_camera = np.concatenate([location, dimensions, rotation_y[:, None]], axis=-1) - gt_lidar_bboxes = bbox_camera2lidar(bboxes_camera, tr_velo_to_cam, r0_rect) - bboxes_camera = bboxes_camera[sel] - gt_lidar_bboxes = gt_lidar_bboxes[sel] - - gt_labels = [-1] * len(gt_label['name']) # to distinguish between the ground truth and the predictions - - pred_gt_lidar_bboxes = np.concatenate([lidar_bboxes, gt_lidar_bboxes], axis=0) - pred_gt_labels = np.concatenate([labels, gt_labels]) - vis_pc(pc, pred_gt_lidar_bboxes, labels=pred_gt_labels) - - if img is not None: - bboxes_corners = bbox3d2corners_camera(bboxes_camera) - image_points = points_camera2image(bboxes_corners, P2) - gt_labels = [-1] * len(gt_label['name']) - img = vis_img_3d(img, image_points, gt_labels, rt=True) - - if calib_info is not None and img is not None: - cv2.imshow(f'{os.path.basename(args.img_path)}-3d bbox', img) - cv2.waitKey(0) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Configuration Parameters') - parser.add_argument('--ckpt', default='pretrained/epoch_160.pth', help='your checkpoint for kitti') - parser.add_argument('--pc_path', help='your point cloud path') - parser.add_argument('--calib_path', default='', help='your calib file path') - parser.add_argument('--gt_path', default='', help='your ground truth path') - parser.add_argument('--img_path', default='', help='your image path') - parser.add_argument('--no_cuda', action='store_true', - help='whether to use cuda') - args = parser.parse_args() - - main(args) diff --git a/cv/3d_detection/pointpillars/pytorch/train.py b/cv/3d_detection/pointpillars/pytorch/train.py deleted file mode 100755 index 53379631..00000000 --- a/cv/3d_detection/pointpillars/pytorch/train.py +++ /dev/null @@ -1,217 +0,0 @@ -import argparse -import os -import torch -from tqdm import tqdm -import pdb - -from utils import setup_seed -from dataset import Kitti, get_dataloader -from model import PointPillars -from loss import Loss -from torch.utils.tensorboard import SummaryWriter - - -def save_summary(writer, loss_dict, global_step, tag, lr=None, momentum=None): - for k, v in loss_dict.items(): - writer.add_scalar(f'{tag}/{k}', v, global_step) - if lr is not None: - writer.add_scalar('lr', lr, global_step) - if 
momentum is not None: - writer.add_scalar('momentum', momentum, global_step) - - -def main(args): - setup_seed() - train_dataset = Kitti(data_root=args.data_root, - split='train') - print("train data lens {}" .format(len(train_dataset))) - val_dataset = Kitti(data_root=args.data_root, - split='val') - print("val data lens {}" .format(len(val_dataset))) - train_dataloader = get_dataloader(dataset=train_dataset, - batch_size=args.batch_size, - num_workers=args.num_workers, - shuffle=True) - val_dataloader = get_dataloader(dataset=val_dataset, - batch_size=args.batch_size, - num_workers=args.num_workers, - shuffle=False) - - if not args.no_cuda: - pointpillars = PointPillars(nclasses=args.nclasses).cuda() - else: - pointpillars = PointPillars(nclasses=args.nclasses) - loss_func = Loss() - - max_iters = len(train_dataloader) * args.max_epoch - init_lr = args.init_lr - optimizer = torch.optim.AdamW(params=pointpillars.parameters(), - lr=init_lr, - betas=(0.95, 0.99), - weight_decay=0.01) - scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, - max_lr=init_lr*10, - total_steps=max_iters, - pct_start=0.4, - anneal_strategy='cos', - cycle_momentum=True, - base_momentum=0.95*0.895, - max_momentum=0.95, - div_factor=10) - saved_logs_path = os.path.join(args.saved_path, 'summary') - os.makedirs(saved_logs_path, exist_ok=True) - writer = SummaryWriter(saved_logs_path) - saved_ckpt_path = os.path.join(args.saved_path, 'checkpoints') - os.makedirs(saved_ckpt_path, exist_ok=True) - - for epoch in range(args.max_epoch): - print('=' * 20, epoch, '=' * 20) - train_step, val_step = 0, 0 - for i, data_dict in enumerate(tqdm(train_dataloader)): - if not args.no_cuda: - # move the tensors to the cuda - for key in data_dict: - for j, item in enumerate(data_dict[key]): - if torch.is_tensor(item): - data_dict[key][j] = data_dict[key][j].cuda() - - optimizer.zero_grad() - - batched_pts = data_dict['batched_pts'] - batched_gt_bboxes = data_dict['batched_gt_bboxes'] - batched_labels = data_dict['batched_labels'] - batched_difficulty = data_dict['batched_difficulty'] - bbox_cls_pred, bbox_pred, bbox_dir_cls_pred, anchor_target_dict = \ - pointpillars(batched_pts=batched_pts, - mode='train', - batched_gt_bboxes=batched_gt_bboxes, - batched_gt_labels=batched_labels) - - bbox_cls_pred = bbox_cls_pred.permute(0, 2, 3, 1).reshape(-1, args.nclasses) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 7) - bbox_dir_cls_pred = bbox_dir_cls_pred.permute(0, 2, 3, 1).reshape(-1, 2) - - batched_bbox_labels = anchor_target_dict['batched_labels'].reshape(-1) - batched_label_weights = anchor_target_dict['batched_label_weights'].reshape(-1) - batched_bbox_reg = anchor_target_dict['batched_bbox_reg'].reshape(-1, 7) - # batched_bbox_reg_weights = anchor_target_dict['batched_bbox_reg_weights'].reshape(-1) - batched_dir_labels = anchor_target_dict['batched_dir_labels'].reshape(-1) - # batched_dir_labels_weights = anchor_target_dict['batched_dir_labels_weights'].reshape(-1) - - pos_idx = (batched_bbox_labels >= 0) & (batched_bbox_labels < args.nclasses) - bbox_pred = bbox_pred[pos_idx] - batched_bbox_reg = batched_bbox_reg[pos_idx] - # sin(a - b) = sin(a)*cos(b) - cos(a)*sin(b) - bbox_pred[:, -1] = torch.sin(bbox_pred[:, -1].clone()) * torch.cos(batched_bbox_reg[:, -1].clone()) - batched_bbox_reg[:, -1] = torch.cos(bbox_pred[:, -1].clone()) * torch.sin(batched_bbox_reg[:, -1].clone()) - bbox_dir_cls_pred = bbox_dir_cls_pred[pos_idx] - batched_dir_labels = batched_dir_labels[pos_idx] - - num_cls_pos = (batched_bbox_labels < 
args.nclasses).sum() - bbox_cls_pred = bbox_cls_pred[batched_label_weights > 0] - batched_bbox_labels[batched_bbox_labels < 0] = args.nclasses - batched_bbox_labels = batched_bbox_labels[batched_label_weights > 0] - - loss_dict = loss_func(bbox_cls_pred=bbox_cls_pred, - bbox_pred=bbox_pred, - bbox_dir_cls_pred=bbox_dir_cls_pred, - batched_labels=batched_bbox_labels, - num_cls_pos=num_cls_pos, - batched_bbox_reg=batched_bbox_reg, - batched_dir_labels=batched_dir_labels) - - loss = loss_dict['total_loss'] - loss.backward() - # torch.nn.utils.clip_grad_norm_(pointpillars.parameters(), max_norm=35) - optimizer.step() - scheduler.step() - - global_step = epoch * len(train_dataloader) + train_step + 1 - - if global_step % args.log_freq == 0: - save_summary(writer, loss_dict, global_step, 'train', - lr=optimizer.param_groups[0]['lr'], - momentum=optimizer.param_groups[0]['betas'][0]) - train_step += 1 - if (epoch + 1) % args.ckpt_freq_epoch == 0: - torch.save(pointpillars.state_dict(), os.path.join(saved_ckpt_path, f'epoch_{epoch+1}.pth')) - - if epoch % 2 == 0: - continue - pointpillars.eval() - with torch.no_grad(): - for i, data_dict in enumerate(tqdm(val_dataloader)): - if not args.no_cuda: - # move the tensors to the cuda - for key in data_dict: - for j, item in enumerate(data_dict[key]): - if torch.is_tensor(item): - data_dict[key][j] = data_dict[key][j].cuda() - - batched_pts = data_dict['batched_pts'] - batched_gt_bboxes = data_dict['batched_gt_bboxes'] - batched_labels = data_dict['batched_labels'] - batched_difficulty = data_dict['batched_difficulty'] - bbox_cls_pred, bbox_pred, bbox_dir_cls_pred, anchor_target_dict = \ - pointpillars(batched_pts=batched_pts, - mode='train', - batched_gt_bboxes=batched_gt_bboxes, - batched_gt_labels=batched_labels) - - bbox_cls_pred = bbox_cls_pred.permute(0, 2, 3, 1).reshape(-1, args.nclasses) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 7) - bbox_dir_cls_pred = bbox_dir_cls_pred.permute(0, 2, 3, 1).reshape(-1, 2) - - batched_bbox_labels = anchor_target_dict['batched_labels'].reshape(-1) - batched_label_weights = anchor_target_dict['batched_label_weights'].reshape(-1) - batched_bbox_reg = anchor_target_dict['batched_bbox_reg'].reshape(-1, 7) - # batched_bbox_reg_weights = anchor_target_dict['batched_bbox_reg_weights'].reshape(-1) - batched_dir_labels = anchor_target_dict['batched_dir_labels'].reshape(-1) - # batched_dir_labels_weights = anchor_target_dict['batched_dir_labels_weights'].reshape(-1) - - pos_idx = (batched_bbox_labels >= 0) & (batched_bbox_labels < args.nclasses) - bbox_pred = bbox_pred[pos_idx] - batched_bbox_reg = batched_bbox_reg[pos_idx] - # sin(a - b) = sin(a)*cos(b) - cos(a)*sin(b) - bbox_pred[:, -1] = torch.sin(bbox_pred[:, -1]) * torch.cos(batched_bbox_reg[:, -1]) - batched_bbox_reg[:, -1] = torch.cos(bbox_pred[:, -1]) * torch.sin(batched_bbox_reg[:, -1]) - bbox_dir_cls_pred = bbox_dir_cls_pred[pos_idx] - batched_dir_labels = batched_dir_labels[pos_idx] - - num_cls_pos = (batched_bbox_labels < args.nclasses).sum() - bbox_cls_pred = bbox_cls_pred[batched_label_weights > 0] - batched_bbox_labels[batched_bbox_labels < 0] = args.nclasses - batched_bbox_labels = batched_bbox_labels[batched_label_weights > 0] - - loss_dict = loss_func(bbox_cls_pred=bbox_cls_pred, - bbox_pred=bbox_pred, - bbox_dir_cls_pred=bbox_dir_cls_pred, - batched_labels=batched_bbox_labels, - num_cls_pos=num_cls_pos, - batched_bbox_reg=batched_bbox_reg, - batched_dir_labels=batched_dir_labels) - - global_step = epoch * len(val_dataloader) + val_step + 1 - 
if global_step % args.log_freq == 0: - save_summary(writer, loss_dict, global_step, 'val') - val_step += 1 - pointpillars.train() - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Configuration Parameters') - parser.add_argument('--data_root', default='/mnt/ssd1/lifa_rdata/det/kitti', - help='your data root for kitti') - parser.add_argument('--saved_path', default='pillar_logs') - parser.add_argument('--batch_size', type=int, default=6) - parser.add_argument('--num_workers', type=int, default=4) - parser.add_argument('--nclasses', type=int, default=3) - parser.add_argument('--init_lr', type=float, default=0.00025) - parser.add_argument('--max_epoch', type=int, default=160) - parser.add_argument('--log_freq', type=int, default=8) - parser.add_argument('--ckpt_freq_epoch', type=int, default=20) - parser.add_argument('--no_cuda', action='store_true', - help='whether to use cuda') - args = parser.parse_args() - - main(args) diff --git a/cv/3d_detection/pointpillars/pytorch/train_dist.py b/cv/3d_detection/pointpillars/pytorch/train_dist.py deleted file mode 100644 index 48cd5471..00000000 --- a/cv/3d_detection/pointpillars/pytorch/train_dist.py +++ /dev/null @@ -1,255 +0,0 @@ -# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
-import argparse -import os -import torch -from tqdm import tqdm -import pdb - -from utils import setup_seed -from dataset import Kitti, get_dataloader_dist -from model import PointPillars -from loss import Loss -from torch.utils.tensorboard import SummaryWriter - -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP - -def save_summary(writer, loss_dict, global_step, tag, lr=None, momentum=None): - for k, v in loss_dict.items(): - writer.add_scalar(f'{tag}/{k}', v, global_step) - if lr is not None: - writer.add_scalar('lr', lr, global_step) - if momentum is not None: - writer.add_scalar('momentum', momentum, global_step) - - -def main(args): - setup_seed() - local_rank = args.local_rank - - # DDP:DDP backend初始化 - torch.cuda.set_device(local_rank) - dist.init_process_group(backend='nccl') # nccl是GPU设备上最快、最推荐的后端 - - train_dataset = Kitti(data_root=args.data_root, - split='train') - print("train data lens {}" .format(len(train_dataset))) - val_dataset = Kitti(data_root=args.data_root, - split='val') - print("val data lens {}" .format(len(val_dataset))) - - # DDP:使用DistributedSampler,DDP帮我们把细节都封装起来了。 - train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset) - # DDP:需要注意的是,这里的batch_size指的是每个进程下的batch_size。 - # 也就是说,总batch_size是这里的batch_size再乘以并行数(world_size)。 - val_sampler = torch.utils.data.distributed.DistributedSampler(val_dataset) - - train_dataloader = get_dataloader_dist(dataset=train_dataset, - batch_size=args.batch_size, - num_workers=args.num_workers, - # shuffle=True, - sampler=train_sampler) - val_dataloader = get_dataloader_dist(dataset=val_dataset, - batch_size=args.batch_size, - num_workers=args.num_workers, - # shuffle=False, - sampler=val_sampler) - - pointpillars = PointPillars(nclasses=args.nclasses) - pointpillars.to(local_rank) - - # DDP: 构造DDP model - pointpillars = DDP(pointpillars, device_ids=[local_rank], output_device=local_rank) - - loss_func = Loss() - - max_iters = len(train_dataloader) * args.max_epoch - init_lr = args.init_lr - optimizer = torch.optim.AdamW(params=pointpillars.parameters(), - lr=init_lr, - betas=(0.95, 0.99), - weight_decay=0.01) - scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, - max_lr=init_lr*10, - total_steps=max_iters, - pct_start=0.4, - anneal_strategy='cos', - cycle_momentum=True, - base_momentum=0.95*0.895, - max_momentum=0.95, - div_factor=10) - saved_logs_path = os.path.join(args.saved_path, 'summary') - os.makedirs(saved_logs_path, exist_ok=True) - writer = SummaryWriter(saved_logs_path) - saved_ckpt_path = os.path.join(args.saved_path, 'checkpoints') - os.makedirs(saved_ckpt_path, exist_ok=True) - - for epoch in range(args.max_epoch): - # DDP:设置sampler的epoch, - # DistributedSampler需要这个来指定shuffle方式, - # 通过维持各个进程之间的相同随机数种子使不同进程能获得同样的shuffle效果。 - train_dataloader.sampler.set_epoch(epoch) - # 后面这部分,则与原来完全一致了。 - print('=' * 20, epoch, '=' * 20) - train_step, val_step = 0, 0 - for i, data_dict in enumerate(tqdm(train_dataloader)): - # move the tensors to the cuda - for key in data_dict: - for j, item in enumerate(data_dict[key]): - if torch.is_tensor(item): - data_dict[key][j] = data_dict[key][j] - - optimizer.zero_grad() - - batched_pts = data_dict['batched_pts'] - batched_gt_bboxes = data_dict['batched_gt_bboxes'] - batched_labels = data_dict['batched_labels'] - batched_difficulty = data_dict['batched_difficulty'] - bbox_cls_pred, bbox_pred, bbox_dir_cls_pred, anchor_target_dict = \ - pointpillars(batched_pts=batched_pts, - mode='train', - 
batched_gt_bboxes=batched_gt_bboxes, - batched_gt_labels=batched_labels) - - bbox_cls_pred = bbox_cls_pred.permute(0, 2, 3, 1).reshape(-1, args.nclasses) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 7) - bbox_dir_cls_pred = bbox_dir_cls_pred.permute(0, 2, 3, 1).reshape(-1, 2) - - batched_bbox_labels = anchor_target_dict['batched_labels'].reshape(-1) - batched_label_weights = anchor_target_dict['batched_label_weights'].reshape(-1) - batched_bbox_reg = anchor_target_dict['batched_bbox_reg'].reshape(-1, 7) - # batched_bbox_reg_weights = anchor_target_dict['batched_bbox_reg_weights'].reshape(-1) - batched_dir_labels = anchor_target_dict['batched_dir_labels'].reshape(-1) - # batched_dir_labels_weights = anchor_target_dict['batched_dir_labels_weights'].reshape(-1) - - pos_idx = (batched_bbox_labels >= 0) & (batched_bbox_labels < args.nclasses) - bbox_pred = bbox_pred[pos_idx] - batched_bbox_reg = batched_bbox_reg[pos_idx] - # sin(a - b) = sin(a)*cos(b) - cos(a)*sin(b) - bbox_pred[:, -1] = torch.sin(bbox_pred[:, -1].clone()) * torch.cos(batched_bbox_reg[:, -1].clone()) - batched_bbox_reg[:, -1] = torch.cos(bbox_pred[:, -1].clone()) * torch.sin(batched_bbox_reg[:, -1].clone()) - bbox_dir_cls_pred = bbox_dir_cls_pred[pos_idx] - batched_dir_labels = batched_dir_labels[pos_idx] - - num_cls_pos = (batched_bbox_labels < args.nclasses).sum() - bbox_cls_pred = bbox_cls_pred[batched_label_weights > 0] - batched_bbox_labels[batched_bbox_labels < 0] = args.nclasses - batched_bbox_labels = batched_bbox_labels[batched_label_weights > 0] - - loss_dict = loss_func(bbox_cls_pred=bbox_cls_pred, - bbox_pred=bbox_pred, - bbox_dir_cls_pred=bbox_dir_cls_pred, - batched_labels=batched_bbox_labels, - num_cls_pos=num_cls_pos, - batched_bbox_reg=batched_bbox_reg, - batched_dir_labels=batched_dir_labels) - - loss = loss_dict['total_loss'] - loss.backward() - # torch.nn.utils.clip_grad_norm_(pointpillars.parameters(), max_norm=35) - optimizer.step() - scheduler.step() - - if dist.get_rank() == 0: - global_step = epoch * len(train_dataloader) + train_step + 1 - - if global_step % args.log_freq == 0: - save_summary(writer, loss_dict, global_step, 'train', - lr=optimizer.param_groups[0]['lr'], - momentum=optimizer.param_groups[0]['betas'][0]) - train_step += 1 - if dist.get_rank() == 0: - if (epoch + 1) % args.ckpt_freq_epoch == 0: - torch.save(pointpillars.state_dict(), os.path.join(saved_ckpt_path, f'epoch_{epoch+1}.pth')) - - if epoch % 2 == 0: - continue - pointpillars.eval() - with torch.no_grad(): - for i, data_dict in enumerate(tqdm(val_dataloader)): - # move the tensors to the cuda - for key in data_dict: - for j, item in enumerate(data_dict[key]): - if torch.is_tensor(item): - data_dict[key][j] = data_dict[key][j] - - batched_pts = data_dict['batched_pts'] - batched_gt_bboxes = data_dict['batched_gt_bboxes'] - batched_labels = data_dict['batched_labels'] - batched_difficulty = data_dict['batched_difficulty'] - bbox_cls_pred, bbox_pred, bbox_dir_cls_pred, anchor_target_dict = \ - pointpillars(batched_pts=batched_pts, - mode='train', - batched_gt_bboxes=batched_gt_bboxes, - batched_gt_labels=batched_labels) - - bbox_cls_pred = bbox_cls_pred.permute(0, 2, 3, 1).reshape(-1, args.nclasses) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 7) - bbox_dir_cls_pred = bbox_dir_cls_pred.permute(0, 2, 3, 1).reshape(-1, 2) - - batched_bbox_labels = anchor_target_dict['batched_labels'].reshape(-1) - batched_label_weights = anchor_target_dict['batched_label_weights'].reshape(-1) - batched_bbox_reg = 
anchor_target_dict['batched_bbox_reg'].reshape(-1, 7) - # batched_bbox_reg_weights = anchor_target_dict['batched_bbox_reg_weights'].reshape(-1) - batched_dir_labels = anchor_target_dict['batched_dir_labels'].reshape(-1) - # batched_dir_labels_weights = anchor_target_dict['batched_dir_labels_weights'].reshape(-1) - - pos_idx = (batched_bbox_labels >= 0) & (batched_bbox_labels < args.nclasses) - bbox_pred = bbox_pred[pos_idx] - batched_bbox_reg = batched_bbox_reg[pos_idx] - # sin(a - b) = sin(a)*cos(b) - cos(a)*sin(b) - bbox_pred[:, -1] = torch.sin(bbox_pred[:, -1]) * torch.cos(batched_bbox_reg[:, -1]) - batched_bbox_reg[:, -1] = torch.cos(bbox_pred[:, -1]) * torch.sin(batched_bbox_reg[:, -1]) - bbox_dir_cls_pred = bbox_dir_cls_pred[pos_idx] - batched_dir_labels = batched_dir_labels[pos_idx] - - num_cls_pos = (batched_bbox_labels < args.nclasses).sum() - bbox_cls_pred = bbox_cls_pred[batched_label_weights > 0] - batched_bbox_labels[batched_bbox_labels < 0] = args.nclasses - batched_bbox_labels = batched_bbox_labels[batched_label_weights > 0] - - loss_dict = loss_func(bbox_cls_pred=bbox_cls_pred, - bbox_pred=bbox_pred, - bbox_dir_cls_pred=bbox_dir_cls_pred, - batched_labels=batched_bbox_labels, - num_cls_pos=num_cls_pos, - batched_bbox_reg=batched_bbox_reg, - batched_dir_labels=batched_dir_labels) - - global_step = epoch * len(val_dataloader) + val_step + 1 - if global_step % args.log_freq == 0: - save_summary(writer, loss_dict, global_step, 'val') - val_step += 1 - pointpillars.train() - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Configuration Parameters') - parser.add_argument('--data_root', default='/mnt/ssd1/lifa_rdata/det/kitti', - help='your data root for kitti') - parser.add_argument('--saved_path', default='pillar_logs') - parser.add_argument('--batch_size', type=int, default=6) - parser.add_argument('--num_workers', type=int, default=4) - parser.add_argument('--nclasses', type=int, default=3) - parser.add_argument('--init_lr', type=float, default=0.00025) - parser.add_argument('--max_epoch', type=int, default=160) - parser.add_argument('--log_freq', type=int, default=8) - parser.add_argument('--ckpt_freq_epoch', type=int, default=20) - parser.add_argument("--local_rank", default=-1, type=int) - - args = parser.parse_args() - - main(args) diff --git a/cv/3d_detection/pointpillars/pytorch/utils/__init__.py b/cv/3d_detection/pointpillars/pytorch/utils/__init__.py deleted file mode 100755 index ba0ebf4a..00000000 --- a/cv/3d_detection/pointpillars/pytorch/utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .io import read_pickle, write_pickle, read_points, write_points, read_calib, \ - read_label, write_label -from .process import bbox_camera2lidar, bbox3d2bevcorners, box_collision_test, \ - remove_pts_in_bboxes, limit_period, bbox3d2corners, points_lidar2image, \ - keep_bbox_from_image_range, keep_bbox_from_lidar_range, \ - points_camera2lidar, setup_seed, remove_outside_points, points_in_bboxes_v2, \ - get_points_num_in_bbox, iou2d_nearest, iou2d, iou3d, iou3d_camera, iou_bev, \ - bbox3d2corners_camera, points_camera2image -# from .vis_o3d import vis_pc, vis_img_3d diff --git a/cv/3d_detection/pointpillars/pytorch/utils/io.py b/cv/3d_detection/pointpillars/pytorch/utils/io.py deleted file mode 100755 index 6c6e5380..00000000 --- a/cv/3d_detection/pointpillars/pytorch/utils/io.py +++ /dev/null @@ -1,109 +0,0 @@ -import numpy as np -import os -import pickle - - -def read_pickle(file_path, suffix='.pkl'): - assert os.path.splitext(file_path)[1] == 
suffix - with open(file_path, 'rb') as f: - data = pickle.load(f) - return data - - -def write_pickle(results, file_path): - with open(file_path, 'wb') as f: - pickle.dump(results, f) - - -def read_points(file_path, dim=4): - suffix = os.path.splitext(file_path)[1] - assert suffix in ['.bin', '.ply'] - if suffix == '.bin': - return np.fromfile(file_path, dtype=np.float32).reshape(-1, dim) - else: - raise NotImplementedError - - -def write_points(lidar_points, file_path): - suffix = os.path.splitext(file_path)[1] - assert suffix in ['.bin', '.ply'] - if suffix == '.bin': - with open(file_path, 'w') as f: - lidar_points.tofile(f) - else: - raise NotImplementedError - - -def read_calib(file_path, extend_matrix=True): - with open(file_path, 'r') as f: - lines = f.readlines() - lines = [line.strip() for line in lines] - P0 = np.array([item for item in lines[0].split(' ')[1:]], dtype=np.float).reshape(3, 4) - P1 = np.array([item for item in lines[1].split(' ')[1:]], dtype=np.float).reshape(3, 4) - P2 = np.array([item for item in lines[2].split(' ')[1:]], dtype=np.float).reshape(3, 4) - P3 = np.array([item for item in lines[3].split(' ')[1:]], dtype=np.float).reshape(3, 4) - - R0_rect = np.array([item for item in lines[4].split(' ')[1:]], dtype=np.float).reshape(3, 3) - Tr_velo_to_cam = np.array([item for item in lines[5].split(' ')[1:]], dtype=np.float).reshape(3, 4) - Tr_imu_to_velo = np.array([item for item in lines[6].split(' ')[1:]], dtype=np.float).reshape(3, 4) - - if extend_matrix: - P0 = np.concatenate([P0, np.array([[0, 0, 0, 1]])], axis=0) - P1 = np.concatenate([P1, np.array([[0, 0, 0, 1]])], axis=0) - P2 = np.concatenate([P2, np.array([[0, 0, 0, 1]])], axis=0) - P3 = np.concatenate([P3, np.array([[0, 0, 0, 1]])], axis=0) - - R0_rect_extend = np.eye(4, dtype=R0_rect.dtype) - R0_rect_extend[:3, :3] = R0_rect - R0_rect = R0_rect_extend - - Tr_velo_to_cam = np.concatenate([Tr_velo_to_cam, np.array([[0, 0, 0, 1]])], axis=0) - Tr_imu_to_velo = np.concatenate([Tr_imu_to_velo, np.array([[0, 0, 0, 1]])], axis=0) - - calib_dict=dict( - P0=P0, - P1=P1, - P2=P2, - P3=P3, - R0_rect=R0_rect, - Tr_velo_to_cam=Tr_velo_to_cam, - Tr_imu_to_velo=Tr_imu_to_velo - ) - return calib_dict - - -def read_label(file_path): - with open(file_path, 'r') as f: - lines = f.readlines() - lines = [line.strip().split(' ') for line in lines] - annotation = {} - annotation['name'] = np.array([line[0] for line in lines]) - annotation['truncated'] = np.array([line[1] for line in lines], dtype=np.float) - annotation['occluded'] = np.array([line[2] for line in lines], dtype=np.int) - annotation['alpha'] = np.array([line[3] for line in lines], dtype=np.float) - annotation['bbox'] = np.array([line[4:8] for line in lines], dtype=np.float) - annotation['dimensions'] = np.array([line[8:11] for line in lines], dtype=np.float)[:, [2, 0, 1]] # hwl -> camera coordinates (lhw) - annotation['location'] = np.array([line[11:14] for line in lines], dtype=np.float) - annotation['rotation_y'] = np.array([line[14] for line in lines], dtype=np.float) - - return annotation - - -def write_label(result, file_path, suffix='.txt'): - ''' - result: dict, - file_path: str - ''' - assert os.path.splitext(file_path)[1] == suffix - name, truncated, occluded, alpha, bbox, dimensions, location, rotation_y, score = \ - result['name'], result['truncated'], result['occluded'], result['alpha'], \ - result['bbox'], result['dimensions'], result['location'], result['rotation_y'], \ - result['score'] - - with open(file_path, 'w') as f: - for i in 
range(len(name)): - bbox_str = ' '.join(map(str, bbox[i])) - hwl = ' '.join(map(str, dimensions[i])) - xyz = ' '.join(map(str, location[i])) - line = f'{name[i]} {truncated[i]} {occluded[i]} {alpha[i]} {bbox_str} {hwl} {xyz} {rotation_y[i]} {score[i]}\n' - f.writelines(line) diff --git a/cv/3d_detection/pointpillars/pytorch/utils/process.py b/cv/3d_detection/pointpillars/pytorch/utils/process.py deleted file mode 100755 index 9f64b4d9..00000000 --- a/cv/3d_detection/pointpillars/pytorch/utils/process.py +++ /dev/null @@ -1,745 +0,0 @@ -import copy -import numba -import numpy as np -import random -import torch -import pdb -from ops.iou3d_module import boxes_overlap_bev, boxes_iou_bev - - -def setup_seed(seed=0, deterministic = True): - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def bbox_camera2lidar(bboxes, tr_velo_to_cam, r0_rect): - ''' - bboxes: shape=(N, 7) - tr_velo_to_cam: shape=(4, 4) - r0_rect: shape=(4, 4) - return: shape=(N, 7) - ''' - x_size, y_size, z_size = bboxes[:, 3:4], bboxes[:, 4:5], bboxes[:, 5:6] - xyz_size = np.concatenate([z_size, x_size, y_size], axis=1) - extended_xyz = np.pad(bboxes[:, :3], ((0, 0), (0, 1)), 'constant', constant_values=1.0) - rt_mat = np.linalg.inv(r0_rect @ tr_velo_to_cam) - xyz = extended_xyz @ rt_mat.T - bboxes_lidar = np.concatenate([xyz[:, :3], xyz_size, bboxes[:, 6:]], axis=1) - return np.array(bboxes_lidar, dtype=np.float32) - - -def bbox_lidar2camera(bboxes, tr_velo_to_cam, r0_rect): - ''' - bboxes: shape=(N, 7) - tr_velo_to_cam: shape=(4, 4) - r0_rect: shape=(4, 4) - return: shape=(N, 7) - ''' - x_size, y_size, z_size = bboxes[:, 3:4], bboxes[:, 4:5], bboxes[:, 5:6] - xyz_size = np.concatenate([y_size, z_size, x_size], axis=1) - extended_xyz = np.pad(bboxes[:, :3], ((0, 0), (0, 1)), 'constant', constant_values=1.0) - rt_mat = r0_rect @ tr_velo_to_cam - xyz = extended_xyz @ rt_mat.T - bboxes_camera = np.concatenate([xyz[:, :3], xyz_size, bboxes[:, 6:]], axis=1) - return bboxes_camera - - -def points_camera2image(points, P2): - ''' - points: shape=(N, 8, 3) - P2: shape=(4, 4) - return: shape=(N, 8, 2) - ''' - extended_points = np.pad(points, ((0, 0), (0, 0), (0, 1)), 'constant', constant_values=1.0) # (n, 8, 4) - image_points = extended_points @ P2.T # (N, 8, 4) - image_points = image_points[:, :, :2] / image_points[:, :, 2:3] - return image_points - - -def points_lidar2image(points, tr_velo_to_cam, r0_rect, P2): - ''' - points: shape=(N, 8, 3) - tr_velo_to_cam: shape=(4, 4) - r0_rect: shape=(4, 4) - P2: shape=(4, 4) - return: shape=(N, 8, 2) - ''' - # points = points[:, :, [1, 2, 0]] - extended_points = np.pad(points, ((0, 0), (0, 0), (0, 1)), 'constant', constant_values=1.0) # (N, 8, 4) - rt_mat = r0_rect @ tr_velo_to_cam - camera_points = extended_points @ rt_mat.T # (N, 8, 4) - # camera_points = camera_points[:, :, [1, 2, 0, 3]] - image_points = camera_points @ P2.T # (N, 8, 4) - image_points = image_points[:, :, :2] / image_points[:, :, 2:3] - - return image_points - - -def points_camera2lidar(points, tr_velo_to_cam, r0_rect): - ''' - points: shape=(N, 8, 3) - tr_velo_to_cam: shape=(4, 4) - r0_rect: shape=(4, 4) - return: shape=(N, 8, 3) - ''' - extended_xyz = np.pad(points, ((0, 0), (0, 0), (0, 1)), 'constant', constant_values=1.0) - rt_mat = np.linalg.inv(r0_rect @ tr_velo_to_cam) - xyz = extended_xyz @ rt_mat.T - return xyz[..., :3] - - -def bbox3d2bevcorners(bboxes): 
- ''' - bboxes: shape=(n, 7) - - ^ x (-0.5 * pi) - | - | (bird's eye view) - (-pi) o | - y <-------------- (0) - \ / (ag) - \ - \ - - return: shape=(n, 4, 2) - ''' - centers, dims, angles = bboxes[:, :2], bboxes[:, 3:5], bboxes[:, 6] - - # 1.generate bbox corner coordinates, clockwise from minimal point - bev_corners = np.array([[-0.5, -0.5], [-0.5, 0.5], [0.5, 0.5], [0.5, -0.5]], dtype=np.float32) - bev_corners = bev_corners[None, ...] * dims[:, None, :] # (1, 4, 2) * (n, 1, 2) -> (n, 4, 2) - - # 2. rotate - rot_sin, rot_cos = np.sin(angles), np.cos(angles) - # in fact, -angle - rot_mat = np.array([[rot_cos, rot_sin], - [-rot_sin, rot_cos]]) # (2, 2, n) - rot_mat = np.transpose(rot_mat, (2, 1, 0)) # (N, 2, 2) - bev_corners = bev_corners @ rot_mat # (n, 4, 2) - - # 3. translate to centers - bev_corners += centers[:, None, :] - return bev_corners.astype(np.float32) - - -def bbox3d2corners(bboxes): - ''' - bboxes: shape=(n, 7) - return: shape=(n, 8, 3) - ^ z x 6 ------ 5 - | / / | / | - | / 2 -|---- 1 | - y | / | | | | - <------|o | 7 -----| 4 - |/ o |/ - 3 ------ 0 - x: front, y: left, z: top - ''' - centers, dims, angles = bboxes[:, :3], bboxes[:, 3:6], bboxes[:, 6] - - # 1.generate bbox corner coordinates, clockwise from minimal point - bboxes_corners = np.array([[-0.5, -0.5, 0], [-0.5, -0.5, 1.0], [-0.5, 0.5, 1.0], [-0.5, 0.5, 0.0], - [0.5, -0.5, 0], [0.5, -0.5, 1.0], [0.5, 0.5, 1.0], [0.5, 0.5, 0.0]], - dtype=np.float32) - bboxes_corners = bboxes_corners[None, :, :] * dims[:, None, :] # (1, 8, 3) * (n, 1, 3) -> (n, 8, 3) - - # 2. rotate around z axis - rot_sin, rot_cos = np.sin(angles), np.cos(angles) - # in fact, -angle - rot_mat = np.array([[rot_cos, rot_sin, np.zeros_like(rot_cos)], - [-rot_sin, rot_cos, np.zeros_like(rot_cos)], - [np.zeros_like(rot_cos), np.zeros_like(rot_cos), np.ones_like(rot_cos)]], - dtype=np.float32) # (3, 3, n) - rot_mat = np.transpose(rot_mat, (2, 1, 0)) # (n, 3, 3) - bboxes_corners = bboxes_corners @ rot_mat # (n, 8, 3) - - # 3. translate to centers - bboxes_corners += centers[:, None, :] - return bboxes_corners - - -def bbox3d2corners_camera(bboxes): - ''' - bboxes: shape=(n, 7) - return: shape=(n, 8, 3) - z (front) 6 ------ 5 - / / | / | - / 2 -|---- 1 | - / | | | | - |o ------> x(right) | 7 -----| 4 - | |/ o |/ - | 3 ------ 0 - | - v y(down) - ''' - centers, dims, angles = bboxes[:, :3], bboxes[:, 3:6], bboxes[:, 6] - - # 1.generate bbox corner coordinates, clockwise from minimal point - bboxes_corners = np.array([[0.5, 0.0, -0.5], [0.5, -1.0, -0.5], [-0.5, -1.0, -0.5], [-0.5, 0.0, -0.5], - [0.5, 0.0, 0.5], [0.5, -1.0, 0.5], [-0.5, -1.0, 0.5], [-0.5, 0.0, 0.5]], - dtype=np.float32) - bboxes_corners = bboxes_corners[None, :, :] * dims[:, None, :] # (1, 8, 3) * (n, 1, 3) -> (n, 8, 3) - - # 2. rotate around y axis - rot_sin, rot_cos = np.sin(angles), np.cos(angles) - # in fact, angle - rot_mat = np.array([[rot_cos, np.zeros_like(rot_cos), rot_sin], - [np.zeros_like(rot_cos), np.ones_like(rot_cos), np.zeros_like(rot_cos)], - [-rot_sin, np.zeros_like(rot_cos), rot_cos]], - dtype=np.float32) # (3, 3, n) - rot_mat = np.transpose(rot_mat, (2, 1, 0)) # (n, 3, 3) - bboxes_corners = bboxes_corners @ rot_mat # (n, 8, 3) - - # 3. 
translate to centers - bboxes_corners += centers[:, None, :] - return bboxes_corners - - -def group_rectangle_vertexs(bboxes_corners): - ''' - bboxes_corners: shape=(n, 8, 3) - return: shape=(n, 6, 4, 3) - ''' - rec1 = np.stack([bboxes_corners[:, 0], bboxes_corners[:, 1], bboxes_corners[:, 3], bboxes_corners[:, 2]], axis=1) # (n, 4, 3) - rec2 = np.stack([bboxes_corners[:, 4], bboxes_corners[:, 7], bboxes_corners[:, 6], bboxes_corners[:, 5]], axis=1) # (n, 4, 3) - rec3 = np.stack([bboxes_corners[:, 0], bboxes_corners[:, 4], bboxes_corners[:, 5], bboxes_corners[:, 1]], axis=1) # (n, 4, 3) - rec4 = np.stack([bboxes_corners[:, 2], bboxes_corners[:, 6], bboxes_corners[:, 7], bboxes_corners[:, 3]], axis=1) # (n, 4, 3) - rec5 = np.stack([bboxes_corners[:, 1], bboxes_corners[:, 5], bboxes_corners[:, 6], bboxes_corners[:, 2]], axis=1) # (n, 4, 3) - rec6 = np.stack([bboxes_corners[:, 0], bboxes_corners[:, 3], bboxes_corners[:, 7], bboxes_corners[:, 4]], axis=1) # (n, 4, 3) - group_rectangle_vertexs = np.stack([rec1, rec2, rec3, rec4, rec5, rec6], axis=1) - return group_rectangle_vertexs - - -@numba.jit(nopython=True) -def bevcorner2alignedbbox(bev_corners): - ''' - bev_corners: shape=(N, 4, 2) - return: shape=(N, 4) - ''' - # xmin, xmax = np.min(bev_corners[:, :, 0], axis=-1), np.max(bev_corners[:, :, 0], axis=-1) - # ymin, ymax = np.min(bev_corners[:, :, 1], axis=-1), np.max(bev_corners[:, :, 1], axis=-1) - - # why we don't implement like the above ? please see - # https://numba.pydata.org/numba-doc/latest/reference/numpysupported.html#calculation - n = len(bev_corners) - alignedbbox = np.zeros((n, 4), dtype=np.float32) - for i in range(n): - cur_bev = bev_corners[i] - alignedbbox[i, 0] = np.min(cur_bev[:, 0]) - alignedbbox[i, 2] = np.max(cur_bev[:, 0]) - alignedbbox[i, 1] = np.min(cur_bev[:, 1]) - alignedbbox[i, 3] = np.max(cur_bev[:, 1]) - return alignedbbox - - -# modified from https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/pipelines/data_augment_utils.py#L31 -@numba.jit(nopython=True) -def box_collision_test(boxes, qboxes, clockwise=True): - """Box collision test. - Args: - boxes (np.ndarray): Corners of current boxes. # (n1, 4, 2) - qboxes (np.ndarray): Boxes to be avoid colliding. # (n2, 4, 2) - clockwise (bool, optional): Whether the corners are in - clockwise order. Default: True. 
- return: shape=(n1, n2) - """ - N = boxes.shape[0] - K = qboxes.shape[0] - ret = np.zeros((N, K), dtype=np.bool_) - slices = np.array([1, 2, 3, 0]) - lines_boxes = np.stack((boxes, boxes[:, slices, :]), - axis=2) # [N, 4, 2(line), 2(xy)] - lines_qboxes = np.stack((qboxes, qboxes[:, slices, :]), axis=2) - # vec = np.zeros((2,), dtype=boxes.dtype) - boxes_standup = bevcorner2alignedbbox(boxes) - qboxes_standup = bevcorner2alignedbbox(qboxes) - for i in range(N): - for j in range(K): - # calculate standup first - iw = ( - min(boxes_standup[i, 2], qboxes_standup[j, 2]) - - max(boxes_standup[i, 0], qboxes_standup[j, 0])) - if iw > 0: - ih = ( - min(boxes_standup[i, 3], qboxes_standup[j, 3]) - - max(boxes_standup[i, 1], qboxes_standup[j, 1])) - if ih > 0: - for k in range(4): - for box_l in range(4): - A = lines_boxes[i, k, 0] - B = lines_boxes[i, k, 1] - C = lines_qboxes[j, box_l, 0] - D = lines_qboxes[j, box_l, 1] - acd = (D[1] - A[1]) * (C[0] - - A[0]) > (C[1] - A[1]) * ( - D[0] - A[0]) - bcd = (D[1] - B[1]) * (C[0] - - B[0]) > (C[1] - B[1]) * ( - D[0] - B[0]) - if acd != bcd: - abc = (C[1] - A[1]) * (B[0] - A[0]) > ( - B[1] - A[1]) * ( - C[0] - A[0]) - abd = (D[1] - A[1]) * (B[0] - A[0]) > ( - B[1] - A[1]) * ( - D[0] - A[0]) - if abc != abd: - ret[i, j] = True # collision. - break - if ret[i, j] is True: - break - if ret[i, j] is False: - # now check complete overlap. - # box overlap qbox: - box_overlap_qbox = True - for box_l in range(4): # point l in qboxes - for k in range(4): # corner k in boxes - vec = boxes[i, k] - boxes[i, (k + 1) % 4] - if clockwise: - vec = -vec - cross = vec[1] * ( - boxes[i, k, 0] - qboxes[j, box_l, 0]) - cross -= vec[0] * ( - boxes[i, k, 1] - qboxes[j, box_l, 1]) - if cross >= 0: - box_overlap_qbox = False - break - if box_overlap_qbox is False: - break - - if box_overlap_qbox is False: - qbox_overlap_box = True - for box_l in range(4): # point box_l in boxes - for k in range(4): # corner k in qboxes - vec = qboxes[j, k] - qboxes[j, (k + 1) % 4] - if clockwise: - vec = -vec - cross = vec[1] * ( - qboxes[j, k, 0] - boxes[i, box_l, 0]) - cross -= vec[0] * ( - qboxes[j, k, 1] - boxes[i, box_l, 1]) - if cross >= 0: # - qbox_overlap_box = False - break - if qbox_overlap_box is False: - break - if qbox_overlap_box: - ret[i, j] = True # collision. - else: - ret[i, j] = True # collision. - return ret - - -def group_plane_equation(bbox_group_rectangle_vertexs): - ''' - bbox_group_rectangle_vertexs: shape=(n, 6, 4, 3) - return: shape=(n, 6, 4) - ''' - # 1. 
generate vectors for a x b
-    vectors = bbox_group_rectangle_vertexs[:, :, :2] - bbox_group_rectangle_vertexs[:, :, 1:3]
-    normal_vectors = np.cross(vectors[:, :, 0], vectors[:, :, 1]) # (n, 6, 3)
-    normal_d = np.einsum('ijk,ijk->ij', bbox_group_rectangle_vertexs[:, :, 0], normal_vectors) # (n, 6)
-    plane_equation_params = np.concatenate([normal_vectors, -normal_d[:, :, None]], axis=-1)
-    return plane_equation_params
-
-
-@numba.jit(nopython=True)
-def points_in_bboxes(points, plane_equation_params):
-    '''
-    points: shape=(N, 3)
-    plane_equation_params: shape=(n, 6, 4)
-    return: shape=(N, n), bool
-    '''
-    N, n = len(points), len(plane_equation_params)
-    m = plane_equation_params.shape[1]
-    masks = np.ones((N, n), dtype=np.bool_)
-    for i in range(N):
-        x, y, z = points[i, :3]
-        for j in range(n):
-            bbox_plane_equation_params = plane_equation_params[j]
-            for k in range(m):
-                a, b, c, d = bbox_plane_equation_params[k]
-                if a * x + b * y + c * z + d >= 0:
-                    masks[i][j] = False
-                    break
-    return masks
-
-
-def remove_pts_in_bboxes(points, bboxes, rm=True):
-    '''
-    points: shape=(N, 3)
-    bboxes: shape=(n, 7)
-    return: shape=(N, n), bool
-    '''
-    # 1. get 6 groups of rectangle vertexs
-    bboxes_corners = bbox3d2corners(bboxes) # (n, 8, 3)
-    bbox_group_rectangle_vertexs = group_rectangle_vertexs(bboxes_corners) # (n, 6, 4, 3)
-
-    # 2. calculate plane equation: ax + by + cd + d = 0
-    group_plane_equation_params = group_plane_equation(bbox_group_rectangle_vertexs)
-
-    # 3. Judge each point inside or outside the bboxes
-    # if point (x0, y0, z0) lies on the direction of normal vector(a, b, c), then ax0 + by0 + cz0 + d > 0.
-    masks = points_in_bboxes(points, group_plane_equation_params) # (N, n)
-
-    if not rm:
-        return masks
-
-    # 4. remove point insider the bboxes
-    masks = np.any(masks, axis=-1)
-
-    return points[~masks]
-
-
-# modified from https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/core/bbox/structures/utils.py#L11
-def limit_period(val, offset=0.5, period=np.pi):
-    """
-    val: array or float
-    offset: float
-    period: float
-    return: Value in the range of [-offset * period, (1-offset) * period]
-    """
-    limited_val = val - np.floor(val / period + offset) * period
-    return limited_val
-
-
-def nearest_bev(bboxes):
-    '''
-    bboxes: (n, 7), (x, y, z, w, l, h, theta)
-    return: (n, 4), (x1, y1, x2, y2)
-    '''
-    bboxes_bev = copy.deepcopy(bboxes[:, [0, 1, 3, 4]])
-    bboxes_angle = limit_period(bboxes[:, 6].cpu(), offset=0.5, period=np.pi).to(bboxes_bev)
-    bboxes_bev = torch.where(torch.abs(bboxes_angle[:, None]) > np.pi / 4, bboxes_bev[:, [0, 1, 3, 2]], bboxes_bev)
-
-    bboxes_xy = bboxes_bev[:, :2]
-    bboxes_wl = bboxes_bev[:, 2:]
-    bboxes_bev_x1y1x2y2 = torch.cat([bboxes_xy - bboxes_wl / 2, bboxes_xy + bboxes_wl / 2], dim=-1)
-    return bboxes_bev_x1y1x2y2
-
-
-def iou2d(bboxes1, bboxes2, metric=0):
-    '''
-    bboxes1: (n, 4), (x1, y1, x2, y2)
-    bboxes2: (m, 4), (x1, y1, x2, y2)
-    return: (n, m)
-    '''
-    bboxes_x1 = torch.maximum(bboxes1[:, 0][:, None], bboxes2[:, 0][None, :]) # (n, m)
-    bboxes_y1 = torch.maximum(bboxes1[:, 1][:, None], bboxes2[:, 1][None, :]) # (n, m)
-    bboxes_x2 = torch.minimum(bboxes1[:, 2][:, None], bboxes2[:, 2][None, :])
-    bboxes_y2 = torch.minimum(bboxes1[:, 3][:, None], bboxes2[:, 3][None, :])
-
-    bboxes_w = torch.clamp(bboxes_x2 - bboxes_x1, min=0)
-    bboxes_h = torch.clamp(bboxes_y2 - bboxes_y1, min=0)
-
-    iou_area = bboxes_w * bboxes_h # (n, m)
-
-    bboxes1_wh = bboxes1[:, 2:] - bboxes1[:, :2]
-    area1 = bboxes1_wh[:, 0] * bboxes1_wh[:, 1] # (n, )
-    bboxes2_wh = bboxes2[:, 2:] - bboxes2[:, :2]
-    area2 = bboxes2_wh[:, 0] * bboxes2_wh[:, 1] # (m, )
-    if metric == 0:
-        iou = iou_area / (area1[:, None] + area2[None, :] - iou_area + 1e-8)
-    elif metric == 1:
-        iou = iou_area / (area1[:, None] + 1e-8)
-    return iou
-
-
-def iou2d_nearest(bboxes1, bboxes2):
-    '''
-    bboxes1: (n, 7), (x, y, z, w, l, h, theta)
-    bboxes2: (m, 7),
-    return: (n, m)
-    '''
-    bboxes1_bev = nearest_bev(bboxes1)
-    bboxes2_bev = nearest_bev(bboxes2)
-    iou = iou2d(bboxes1_bev, bboxes2_bev)
-    return iou
-
-
-def iou3d(bboxes1, bboxes2):
-    '''
-    bboxes1: (n, 7), (x, y, z, w, l, h, theta)
-    bboxes2: (m, 7)
-    return: (n, m)
-    '''
-    # 1. height overlap
-    bboxes1_bottom, bboxes2_bottom = bboxes1[:, 2], bboxes2[:, 2] # (n, ), (m, )
-    bboxes1_top, bboxes2_top = bboxes1[:, 2] + bboxes1[:, 5], bboxes2[:, 2] + bboxes2[:, 5] # (n, ), (m, )
-    bboxes_bottom = torch.maximum(bboxes1_bottom[:, None], bboxes2_bottom[None, :]) # (n, m)
-    bboxes_top = torch.minimum(bboxes1_top[:, None], bboxes2_top[None, :])
-    height_overlap = torch.clamp(bboxes_top - bboxes_bottom, min=0)
-
-    # 2. bev overlap
-    bboxes1_x1y1 = bboxes1[:, :2] - bboxes1[:, 3:5] / 2
-    bboxes1_x2y2 = bboxes1[:, :2] + bboxes1[:, 3:5] / 2
-    bboxes2_x1y1 = bboxes2[:, :2] - bboxes2[:, 3:5] / 2
-    bboxes2_x2y2 = bboxes2[:, :2] + bboxes2[:, 3:5] / 2
-    bboxes1_bev = torch.cat([bboxes1_x1y1, bboxes1_x2y2, bboxes1[:, 6:]], dim=-1)
-    bboxes2_bev = torch.cat([bboxes2_x1y1, bboxes2_x2y2, bboxes2[:, 6:]], dim=-1)
-    bev_overlap = boxes_overlap_bev(bboxes1_bev, bboxes2_bev) # (n, m)
-
-    # 3. overlap and volume
-    overlap = height_overlap * bev_overlap
-    volume1 = bboxes1[:, 3] * bboxes1[:, 4] * bboxes1[:, 5]
-    volume2 = bboxes2[:, 3] * bboxes2[:, 4] * bboxes2[:, 5]
-    volume = volume1[:, None] + volume2[None, :] # (n, m)
-
-    # 4. iou
-    iou = overlap / (volume - overlap + 1e-8)
-
-    return iou
-
-
-def iou3d_camera(bboxes1, bboxes2):
-    '''
-    bboxes1: (n, 7), (x, y, z, w, l, h, theta)
-    bboxes2: (m, 7)
-    return: (n, m)
-    '''
-    # 1. height overlap
-    bboxes1_bottom, bboxes2_bottom = bboxes1[:, 1] - bboxes1[:, 4], bboxes2[:, 1] - bboxes2[:, 4] # (n, ), (m, )
-    bboxes1_top, bboxes2_top = bboxes1[:, 1], bboxes2[:, 1] # (n, ), (m, )
-    bboxes_bottom = torch.maximum(bboxes1_bottom[:, None], bboxes2_bottom[None, :]) # (n, m)
-    bboxes_top = torch.minimum(bboxes1_top[:, None], bboxes2_top[None, :])
-    height_overlap = torch.clamp(bboxes_top - bboxes_bottom, min=0)
-
-    # 2. bev overlap
-    bboxes1_x1y1 = bboxes1[:, [0, 2]] - bboxes1[:, [3, 5]] / 2
-    bboxes1_x2y2 = bboxes1[:, [0, 2]] + bboxes1[:, [3, 5]] / 2
-    bboxes2_x1y1 = bboxes2[:, [0, 2]] - bboxes2[:, [3, 5]] / 2
-    bboxes2_x2y2 = bboxes2[:, [0, 2]] + bboxes2[:, [3, 5]] / 2
-    bboxes1_bev = torch.cat([bboxes1_x1y1, bboxes1_x2y2, bboxes1[:, 6:]], dim=-1)
-    bboxes2_bev = torch.cat([bboxes2_x1y1, bboxes2_x2y2, bboxes2[:, 6:]], dim=-1)
-    bev_overlap = boxes_overlap_bev(bboxes1_bev, bboxes2_bev) # (n, m)
-
-    # 3. overlap and volume
-    overlap = height_overlap * bev_overlap
-    volume1 = bboxes1[:, 3] * bboxes1[:, 4] * bboxes1[:, 5]
-    volume2 = bboxes2[:, 3] * bboxes2[:, 4] * bboxes2[:, 5]
-    volume = volume1[:, None] + volume2[None, :] # (n, m)
-
-    # 4. iou
-    iou = overlap / (volume - overlap + 1e-8)
-
-    return iou
-
-
-def iou_bev(bboxes1, bboxes2):
-    '''
-    bboxes1: (n, 5), (x, z, w, h, theta)
-    bboxes2: (m, 5)
-    return: (n, m)
-    '''
-    bboxes1_x1y1 = bboxes1[:, :2] - bboxes1[:, 2:4] / 2
-    bboxes1_x2y2 = bboxes1[:, :2] + bboxes1[:, 2:4] / 2
-    bboxes2_x1y1 = bboxes2[:, :2] - bboxes2[:, 2:4] / 2
-    bboxes2_x2y2 = bboxes2[:, :2] + bboxes2[:, 2:4] / 2
-    bboxes1_bev = torch.cat([bboxes1_x1y1, bboxes1_x2y2, bboxes1[:, 4:]], dim=-1)
-    bboxes2_bev = torch.cat([bboxes2_x1y1, bboxes2_x2y2, bboxes2[:, 4:]], dim=-1)
-    bev_overlap = boxes_iou_bev(bboxes1_bev, bboxes2_bev) # (n, m)
-
-    return bev_overlap
-
-
-def keep_bbox_from_image_range(result, tr_velo_to_cam, r0_rect, P2, image_shape):
-    '''
-    result: dict(lidar_bboxes, labels, scores)
-    tr_velo_to_cam: shape=(4, 4)
-    r0_rect: shape=(4, 4)
-    P2: shape=(4, 4)
-    image_shape: (h, w)
-    return: dict(lidar_bboxes, labels, scores, bboxes2d, camera_bboxes)
-    '''
-    h, w = image_shape
-
-    lidar_bboxes = result['lidar_bboxes']
-    labels = result['labels']
-    scores = result['scores']
-    camera_bboxes = bbox_lidar2camera(lidar_bboxes, tr_velo_to_cam, r0_rect) # (n, 7)
-    bboxes_points = bbox3d2corners_camera(camera_bboxes) # (n, 8, 3)
-    image_points = points_camera2image(bboxes_points, P2) # (n, 8, 2)
-    image_x1y1 = np.min(image_points, axis=1) # (n, 2)
-    image_x1y1 = np.maximum(image_x1y1, 0)
-    image_x2y2 = np.max(image_points, axis=1) # (n, 2)
-    image_x2y2 = np.minimum(image_x2y2, [w, h])
-    bboxes2d = np.concatenate([image_x1y1, image_x2y2], axis=-1)
-
-    keep_flag = (image_x1y1[:, 0] < w) & (image_x1y1[:, 1] < h) & (image_x2y2[:, 0] > 0) & (image_x2y2[:, 1] > 0)
-
-    result = {
-        'lidar_bboxes': lidar_bboxes[keep_flag],
-        'labels': labels[keep_flag],
-        'scores': scores[keep_flag],
-        'bboxes2d': bboxes2d[keep_flag],
-        'camera_bboxes': camera_bboxes[keep_flag]
-    }
-    return result
-
-
-def keep_bbox_from_lidar_range(result, pcd_limit_range):
-    '''
-    result: dict(lidar_bboxes, labels, scores, bboxes2d, camera_bboxes)
-    pcd_limit_range: []
-    return: dict(lidar_bboxes, labels, scores, bboxes2d, camera_bboxes)
-    '''
-    lidar_bboxes, labels, scores = result['lidar_bboxes'], result['labels'], result['scores']
-    if 'bboxes2d' not in result:
-        result['bboxes2d'] = np.zeros_like(lidar_bboxes[:, :4])
-    if 'camera_bboxes' not in result:
-        result['camera_bboxes'] = np.zeros_like(lidar_bboxes)
-    bboxes2d, camera_bboxes = result['bboxes2d'], result['camera_bboxes']
-    flag1 = lidar_bboxes[:, :3] > pcd_limit_range[:3][None, :] # (n, 3)
-    flag2 = lidar_bboxes[:, :3] < pcd_limit_range[3:][None, :] # (n, 3)
-    keep_flag = np.all(flag1, axis=-1) & np.all(flag2, axis=-1)
-
-    result = {
-        'lidar_bboxes': lidar_bboxes[keep_flag],
-        'labels': labels[keep_flag],
-        'scores': scores[keep_flag],
-        'bboxes2d': bboxes2d[keep_flag],
-        'camera_bboxes': camera_bboxes[keep_flag]
-    }
-    return result
-
-
-def points_in_bboxes_v2(points, r0_rect, tr_velo_to_cam, dimensions, location, rotation_y, name):
-    '''
-    points: shape=(N, 4)
-    tr_velo_to_cam: shape=(4, 4)
-    r0_rect: shape=(4, 4)
-    dimensions: shape=(n, 3)
-    location: shape=(n, 3)
-    rotation_y: shape=(n, )
-    name: shape=(n, )
-    return:
-        indices: shape=(N, n_valid_bbox), indices[i, j] denotes whether point i is in bbox j.
-        n_total_bbox: int.
-        n_valid_bbox: int, not including 'DontCare'
-        bboxes_lidar: shape=(n_valid_bbox, 7)
-        name: shape=(n_valid_bbox, )
-    '''
-    n_total_bbox = len(dimensions)
-    n_valid_bbox = len([item for item in name if item != 'DontCare'])
-    location, dimensions = location[:n_valid_bbox], dimensions[:n_valid_bbox]
-    rotation_y, name = rotation_y[:n_valid_bbox], name[:n_valid_bbox]
-    bboxes_camera = np.concatenate([location, dimensions, rotation_y[:, None]], axis=1)
-    bboxes_lidar = bbox_camera2lidar(bboxes_camera, tr_velo_to_cam, r0_rect)
-    bboxes_corners = bbox3d2corners(bboxes_lidar)
-    group_rectangle_vertexs_v = group_rectangle_vertexs(bboxes_corners)
-    frustum_surfaces = group_plane_equation(group_rectangle_vertexs_v)
-    indices = points_in_bboxes(points[:, :3], frustum_surfaces) # (N, n), N is points num, n is bboxes number
-    return indices, n_total_bbox, n_valid_bbox, bboxes_lidar, name
-
-
-def get_points_num_in_bbox(points, r0_rect, tr_velo_to_cam, dimensions, location, rotation_y, name):
-    '''
-    points: shape=(N, 4)
-    tr_velo_to_cam: shape=(4, 4)
-    r0_rect: shape=(4, 4)
-    dimensions: shape=(n, 3)
-    location: shape=(n, 3)
-    rotation_y: shape=(n, )
-    name: shape=(n, )
-    return: shape=(n, )
-    '''
-    indices, n_total_bbox, n_valid_bbox, bboxes_lidar, name = \
-        points_in_bboxes_v2(
-            points=points,
-            r0_rect=r0_rect,
-            tr_velo_to_cam=tr_velo_to_cam,
-            dimensions=dimensions,
-            location=location,
-            rotation_y=rotation_y,
-            name=name)
-    points_num = np.sum(indices, axis=0)
-    non_valid_points_num = [-1] * (n_total_bbox - n_valid_bbox)
-    points_num = np.concatenate([points_num, non_valid_points_num], axis=0)
-    return np.array(points_num, dtype=np.int)
-
-
-# Modified from https://github.com/open-mmlab/mmdetection3d/blob/f45977008a52baaf97640a0e9b2bbe5ea1c4be34/mmdet3d/core/bbox/box_np_ops.py#L609
-def remove_outside_points(points, r0_rect, tr_velo_to_cam, P2, image_shape):
-    """Remove points which are outside of image.
-    Args:
-        points (np.ndarray, shape=[N, 3+dims]): Total points.
-        rect (np.ndarray, shape=[4, 4]): Matrix to project points in
-            specific camera coordinate (e.g. CAM2) to CAM0.
-        Trv2c (np.ndarray, shape=[4, 4]): Matrix to project points in
-            camera coordinate to lidar coordinate.
-        P2 (p.array, shape=[4, 4]): Intrinsics of Camera2.
-        image_shape (list[int]): Shape of image.
-    Returns:
-        np.ndarray, shape=[N, 3+dims]: Filtered points.
-    """
-    # 5x faster than remove_outside_points_v1(2ms vs 10ms)
-    C, R, T = projection_matrix_to_CRT_kitti(P2)
-    image_bbox = [0, 0, image_shape[1], image_shape[0]]
-    frustum = get_frustum(image_bbox, C)
-    frustum -= T
-    frustum = np.linalg.inv(R) @ frustum.T
-    frustum = points_camera2lidar(frustum.T[None, ...], tr_velo_to_cam, r0_rect) # (1, 8, 3)
-    group_rectangle_vertexs_v = group_rectangle_vertexs(frustum)
-    frustum_surfaces = group_plane_equation(group_rectangle_vertexs_v)
-    indices = points_in_bboxes(points[:, :3], frustum_surfaces) # (N, 1)
-    points = points[indices.reshape([-1])]
-    return points
-
-
-# Copied from https://github.com/open-mmlab/mmdetection3d/blob/f45977008a52baaf97640a0e9b2bbe5ea1c4be34/mmdet3d/core/bbox/box_np_ops.py#L609
-def projection_matrix_to_CRT_kitti(proj):
-    """Split projection matrix of kitti.
-    P = C @ [R|T]
-    C is upper triangular matrix, so we need to inverse CR and use QR
-    stable for all kitti camera projection matrix.
-    Args:
-        proj (p.array, shape=[4, 4]): Intrinsics of camera.
-    Returns:
-        tuple[np.ndarray]: Splited matrix of C, R and T.
-    """
-
-    CR = proj[0:3, 0:3]
-    CT = proj[0:3, 3]
-    RinvCinv = np.linalg.inv(CR)
-    Rinv, Cinv = np.linalg.qr(RinvCinv)
-    C = np.linalg.inv(Cinv)
-    R = np.linalg.inv(Rinv)
-    T = Cinv @ CT
-    return C, R, T
-
-
-# Copied from https://github.com/open-mmlab/mmdetection3d/blob/f45977008a52baaf97640a0e9b2bbe5ea1c4be34/mmdet3d/core/bbox/box_np_ops.py#L661
-def get_frustum(bbox_image, C, near_clip=0.001, far_clip=100):
-    """Get frustum corners in camera coordinates.
-    Args:
-        bbox_image (list[int]): box in image coordinates.
-        C (np.ndarray): Intrinsics.
-        near_clip (float, optional): Nearest distance of frustum.
-            Defaults to 0.001.
-        far_clip (float, optional): Farthest distance of frustum.
-            Defaults to 100.
-    Returns:
-        np.ndarray, shape=[8, 3]: coordinates of frustum corners.
-    """
-    fku = C[0, 0]
-    fkv = -C[1, 1]
-    u0v0 = C[0:2, 2]
-    z_points = np.array(
-        [near_clip] * 4 + [far_clip] * 4, dtype=C.dtype)[:, np.newaxis]
-    b = bbox_image
-    box_corners = np.array(
-        [[b[0], b[1]], [b[0], b[3]], [b[2], b[3]], [b[2], b[1]]],
-        dtype=C.dtype)
-    near_box_corners = (box_corners - u0v0) / np.array(
-        [fku / near_clip, -fkv / near_clip], dtype=C.dtype)
-    far_box_corners = (box_corners - u0v0) / np.array(
-        [fku / far_clip, -fkv / far_clip], dtype=C.dtype)
-    ret_xy = np.concatenate([near_box_corners, far_box_corners],
-                            axis=0)  # [8, 2]
-    ret_xyz = np.concatenate([ret_xy, z_points], axis=1)
-    return ret_xyz
diff --git a/cv/3d_detection/pointpillars/pytorch/utils/viewpoint.json b/cv/3d_detection/pointpillars/pytorch/utils/viewpoint.json
deleted file mode 100755
index aebb9e11..00000000
--- a/cv/3d_detection/pointpillars/pytorch/utils/viewpoint.json
+++ /dev/null
@@ -1,30 +0,0 @@
-{
-    "class_name" : "PinholeCameraParameters",
-    "extrinsic" :
-    [
-        0.013862749108655318,
-        -0.99931910700250171,
-        0.034192785304405081,
-        0,
-        -0.99751226840734264,
-        -0.011457761025003947,
-        0.069555690558944339,
-        0,
-        -0.069116557813509449,
-        -0.035071955919460274,
-        -0.99699190535530191,
-        0,
-        7.0392596000563916,
-        14.768969974020909,
-        101.08753655485596,
-        1
-    ],
-    "intrinsic" :
-    {
-        "height" : 1056,
-        "intrinsic_matrix" : [ 914.52282639636724, 0, 0, 0, 914.52282639636724, 0, 927, 527.5, 1 ],
-        "width" : 1855
-    },
-    "version_major" : 1,
-    "version_minor" : 0
-}
\ No newline at end of file
diff --git a/cv/3d_detection/pointpillars/pytorch/utils/vis_o3d.py b/cv/3d_detection/pointpillars/pytorch/utils/vis_o3d.py
deleted file mode 100755
index 41ca093d..00000000
--- a/cv/3d_detection/pointpillars/pytorch/utils/vis_o3d.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import cv2
-import numpy as np
-import open3d as o3d
-import os
-from utils import bbox3d2corners
-
-
-COLORS = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]
-COLORS_IMG = [[0, 0, 255], [0, 255, 0], [255, 0, 0], [0, 255, 255]]
-
-LINES = [
-    [0, 1],
-    [1, 2],
-    [2, 3],
-    [3, 0],
-    [4, 5],
-    [5, 6],
-    [6, 7],
-    [7, 4],
-    [2, 6],
-    [7, 3],
-    [1, 5],
-    [4, 0]
-    ]
-
-
-def npy2ply(npy):
-    ply = o3d.geometry.PointCloud()
-    ply.points = o3d.utility.Vector3dVector(npy[:, :3])
-    density = npy[:, 3]
-    colors = [[item, item, item] for item in density]
-    ply.colors = o3d.utility.Vector3dVector(colors)
-    return ply
-
-
-def ply2npy(ply):
-    return np.array(ply.points)
-
-
-def bbox_obj(points, color=[1, 0, 0]):
-    colors = [color for i in range(len(LINES))]
-    line_set = o3d.geometry.LineSet(
-        points=o3d.utility.Vector3dVector(points),
-        lines=o3d.utility.Vector2iVector(LINES),
-    )
-    line_set.colors = o3d.utility.Vector3dVector(colors)
-    return line_set
-
-
-def vis_core(plys):
-    vis = o3d.visualization.Visualizer()
-    vis.create_window()
-
-    PAR = os.path.dirname(os.path.abspath(__file__))
-    ctr = vis.get_view_control()
-    param = o3d.io.read_pinhole_camera_parameters(os.path.join(PAR, 'viewpoint.json'))
-    for ply in plys:
-        vis.add_geometry(ply)
-        ctr.convert_from_pinhole_camera_parameters(param)
-
-    vis.run()
-    # param = vis.get_view_control().convert_to_pinhole_camera_parameters()
-    # o3d.io.write_pinhole_camera_parameters(os.path.join(PAR, 'viewpoint.json'), param)
-    vis.destroy_window()
-
-
-def vis_pc(pc, bboxes=None, labels=None):
-    '''
-    pc: ply or np.ndarray (N, 4)
-    bboxes: np.ndarray, (n, 7) or (n, 8, 3)
-    labels: (n, )
-    '''
-    if isinstance(pc, np.ndarray):
-        pc = npy2ply(pc)
-
-    mesh_frame = o3d.geometry.TriangleMesh.create_coordinate_frame(
-        size=10, origin=[0, 0, 0])
-
-    if bboxes is None:
-        vis_core([pc, mesh_frame])
-        return
-
-    if len(bboxes.shape) == 2:
-        bboxes = bbox3d2corners(bboxes)
-
-    vis_objs = [pc, mesh_frame]
-    for i in range(len(bboxes)):
-        bbox = bboxes[i]
-        if labels is None:
-            color = [1, 1, 0]
-        else:
-            if labels[i] >= 0 and labels[i] < 3:
-                color = COLORS[labels[i]]
-            else:
-                color = COLORS[-1]
-        vis_objs.append(bbox_obj(bbox, color=color))
-    vis_core(vis_objs)
-
-
-def vis_img_3d(img, image_points, labels, rt=True):
-    '''
-    img: (h, w, 3)
-    image_points: (n, 8, 2)
-    labels: (n, )
-    '''
-
-    for i in range(len(image_points)):
-        label = labels[i]
-        bbox_points = image_points[i] # (8, 2)
-        if label >= 0 and label < 3:
-            color = COLORS_IMG[label]
-        else:
-            color = COLORS_IMG[-1]
-        for line_id in LINES:
-            x1, y1 = bbox_points[line_id[0]]
-            x2, y2 = bbox_points[line_id[1]]
-            x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)
-            cv2.line(img, (x1, y1), (x2, y2), color, 1)
-    if rt:
-        return img
-    cv2.imshow('bbox', img)
-    cv2.waitKey(0)
--
Gitee

z?C0~qgy}Y8wJFTCbEed}2+LwbWmIM`D+AR416p#{G$1#uTdSpdvubxs>pL|s#wfI5 zR@CNZYMYY(h^%GewgQ3>RFS@l&Q3PilL+t`P+=As{FvnkB(fOSXhSB?PT?+wbH}vxcvx zU@l|TRI8WTNd^g$2_^^vlYAHO-iDS#RBktiTZ~(H1ql#Hg1K8epUK+DqoxfF-Vn0r zQ|vSe?a~d7nk`0(by^CIvpT8~jm?}*iS?+X0*9UL$P`8*u zHG-vpuy)qcJg0~j)J%oM)LN^DcVOt8MY6^n~JRqn_5Cs7O zAb_CZKFR?C1{e`Q6<{y}U~tHsNuwt5Gi^> zG^zoNA_00KBvH7}02q@Y5f!MS!OSH^RQv%VWPqp&L1rRb(GgSBEF{n%kRT;U0x61+ ziGaxn2_cY3f`q_*iU1S=KqBrV0T4kDW(I&JfPesVn!tddU=EnVNhBHofuukIAfWpo z1tpLGA%Os407SVUp9N$w5r7#85RKVR)0D^?6;xH-=LAeOfFKE2n_AF4u~tnrvkkQ1 znyd^0s9q#O$zWvTtSt~J7uJLe3y>)ZFp%18>&e9PE&HWy8fe1^Syms)%M#Iw&gjn; z7VB>)bGljBSP+Jpfz525ob!V+zxefE`M_6y?O(q3xkq^LzrCN$BmKAk&4)ksBcJ@a zfBxdL55I8aw{LDIqDh>GdU1=p$E^2Gub$ASgcRo${XFi?X=Q)G2&ydNbuP}+jX}e` z@px+`Q_>`9nrbEPlOWIxlnjtX0k!R$M$KH2YZBzj)hsiR3lIS54Qw{+oi*om37Y~a z6u`u!1en`Y0kH}VVV#{#*zfpY%a2Z|Ypt8~@VwoB(C!tsMP+!?zj*xOh41_8dJx2y z{GomDWqkNLzUQ6zk}qDLz<>CI*uMPwlkKJK3ESJJ@r;p$NFUoM& z)8%))>#60F*MIB(-p1-)qzB7Tz1dhD#;KZLt;-L8<2QC?^JBDg=sfiyyGTVyT!#OPV+ddOQ`}(WamIPX=2HM zsE`lR=fC!;8~?+1e`d9`yN^BQH&5;E%|eJ`bm-2tkM(iqFiwxBDMth= z0LlP~Ailt@ClR%u*;H+_4I)TP6;R+lLNr=quBcK}gdomlFlxC$G%>YlZcWpwoCyvG zcW*)Ul>!i?bAKNXR=s$F!xDfXw}1@o4iG3Cw4V3QEzexK@Mu1~eq(jFa%YbID4xlj zYApbITM`;sTWb&I7sh&gJiJons5FNe3Bs&HMEQIHNq5SAfim$fNIoitITU#xQg%XA6$I=v2#96=jyc^!}Vv52aQ-C zfk0FPpr9?hxt#29Y_H{-nu!3TLI7lv5Fvmh1gmD0yap7I1D3p4Mb|YQJ913)t!h!3 zLPY=p7>zJ9nwyJFZGw@zOrcp{bTqZOQ*9`@LD<~YnnrkG+n_*01`3A^AY@R_nJdNI zt#=J2HpM&db{9opJHhgJn>kys0L37(5OA@IRW&5h2vYFCGFiX3w^g^@o{XAQ(9}po zMXn$$QDCx`W>O#|sMexLm;r2jv9yB@)5eYlsKhX0s?|KRGtkonn_?6uBMBpAbVKYm zi>2?EDBF6Rr2(ppRNZP@05y}dFp#4bg64u|F$(|kbx0mY3wX_tl_;SRFl5N-G84g4hoW_e2op$&fB=J;41nB0Q$SHH!Wy!*Btj|? z1Hq16vbMmgBYc#G+EJ4a%yB&;P8QBn+Utg{TaR8v+%Al5m;ot}{#a?A{!Y#rRMfUpa-DW#b; z9Wl1^s;-$FfJx>-001BWNklOP-H++6(A`bvk;(a zIA)m$L=gZ5US$MePJqO*BoYXOh=2r0;Hwf5L?sXgfTUAZiV#&HBMlG?Fq%M81YjAU z)oMV$~i$Dt7Ky}sfH>nk%SC_xdcc61O(A#HnU7+ ztu|@`>{@4<`MGmES~mMP`iZae?8K0>hPoMLDM}B>}x$_<0^R9y@AOGo}z0hBI z>ej8z&AV+MJehL@SgkFGN7ACdwkNJ@)V6DzoC~J1X0{cWxlLPZ-e5Mv{)^Gy8SdPw z5mjjV1Z~!}2PCKh0uzj)tBwc}Y@BFZMmSN+dPEboG2Z})NXCYqx_qKz4BC$vtPY1o zA<|NU2$>~Ofd~PD=SsMjrcGrefmxgV#4I;uc>oJx`{JL&{QQT0>OcSY9SvUd<@&8x z@Ee!${qMwU{xtihpTzpdA3WN=)ShU{b9Y`@cVGM0*7Lvh={NuM|MtK9$j|H@UhmWS zZ}^J)PE2q7=1b!&AM)t_!?&H?wr^}M|NQccLvY-G>!ZK?P0cIyiPhxvKmNsh@Z`6@ zu(uNm8j`bAK-GjXAeb2}$s=k&1p!6?3IGIPFsee3Yi5_oX)}MxE0%qXr3w&Xatca9 zjT#7fMv$tl0`fKF5DB>?03HSyjP1~wW?T8hsyR4kN?$YV-x$h`dT{rshoXi=fo4s| zaiD4iB?(zL{$Mn97|Iw+`TRWV4;;SFKw|V%n!#~*jy+>3u)-Kb^lBf4lz6fAn)-`osOVf9nsw(|`91^N;-EAAaDWr?>BWY|A)$)I0qr z_BOY#T=~rB7u)wuXkPB zxE=U&x2(o3x-5Av5LlXI*ly}(7?wj-wn+49)YMYM;o1BD-uO;M{>b+mqn{lRL_MYS^%d-&XGzwqF0w#~Zh zuU=oj^2OyUR%vnQ2Cz`Zpi+fvRdet%pj?17`fl`eE&}ae_A|MMK3y}1O zW>b0oi|H-1RDW;%xf=cFqF<@3t40w7KpJ7Ma;Fqk1QX<>U%J>j%jIjUI*uh;~0~|X@VN`l|#So36}!| zkq8Nl*lxP>J8vAsYvuXMwMR&`R+WAnNS0(ELqHWl7eg_Qm16P$!_``y$i!|CqZT0q z6P8XI{AfLH z_6~E5>Gb^6WPYK+^z?L8vpC$lGu}E}q~Yb6n3}v&UTzA5ofGR6E8Wm%5SSwKNK-EYT8#hlAAz<5y z^H%G`8zZpN7%1}AeRmFQwKVuJ9c+Ho^*&r(-ceoGHlhRJwh1VzMHo!5AfRYm#@4%( z+K_||EaI-Xs-=xO5i5DOLc^#cMSy{1fTIGaCLEI?j1Uuo0uUXbAq-U@ z00AHYSY*P$N`#RP8WDyHLJ3||DaDGD4T0kcGa3;;FE0AmeM009xlC?fy{L74;+1_f(~2oN9zGl&43 zA}t7*fdJ7OsOSRGU;%jVZKqM_H$Vl9A|P1+0ti^1B*|+P33sbw9fq0>iIhn+!2l%y zLajo}m`<{9=QNa~r7F=n+7s#BJGR`(hqu@tV!g|zv9xVZFJWKWKjFUe$&F|)O}5T7 z>#NgRqaUtTTW_WvQi`VM;Hsst344ADDAskK6iOp0temVh=2k4l)E2?+39Mz3^TtG_fo*|u9hH=BL~ zzUmy>zq|S5kFD?i{kVn$2^s}JAZY?fpdeGlBS3*(a6X@6YBnb>oW%icZ`o%BXWU+3 zuUFqMjuuix2@#+qf&@X!v@P$tboNr-N@u_Ll(FvBTD%+Bwpa%J$i_4|AvTvhMLRp% zz4Wf;vC|mdyuNj7bM3ble|$6PZu*XI{^)m{cz*i>%a8raPyNPIi~Ha5{eRg$@x=%K 
z`Tz0cH$HTE>+Iv3{jrbB^;6IP{{Aa>u;*Py`PPNKaB6&ca_ZEhbKmSOK6!Jj!(tu7 zN;MAXJFscahu){$KUBWXu)e+c;}N@uqrS2G;H5JCVf?6+>v|j*4s7ZXBJvhmow|Q~ z=T}Ux{L%L2#oJGpv6NvDz)%H*pzWsM6ZbtLUw`59v!Ge2MW_Hg0x;zXV&Jf;QVk$C zvXohYh^F=K?DW1=Zr&JguH`9ox;fdwZlv2w(iYVXRpSUVm{S3SiX_Sc)~2nA(3F%D zhOSCcFixYJMn1@E0TqV_0#`GKl#~EQsW6m`JjQhNVTN(5*innV&AuM$`S)LX^Xc*G zZS4M(;P-uJ+td82FRD?60hZ|>Z!o0~5`wPb@sH#o;mtb(e>vGyvBmebU2pKs=8 z?lF7F*w>Zg>MKWAUpiQieY0FUt6~Uci!4@0>=K!AqnK(s?fs(N+^+Ny4uwJ>C;6Hpr2~j<>dw7yZ^e?a0 zs4N|2bZ=oPD{@%ra@pxIlwn__I2w)_08y?;ts(m;RrMsAW5HqEtW|(XYt57}*-_Nx zaFoe}R0MW(H&&Ly+~w`<9Y!tV#-Lo2l$XP1(eG8b)y8OpjtWM~Oq3KV0t_w?uQC8e zCnU%Ly22HLL_@fx26Q--jmUYDWbHgS!B7zc=0>Q$ZDp~xv}Pj5y4;{>bk-ySK2B@2 z7rpNIun>;us?Bsa+s;I>9TU}bcXNF)-W{02cFsiG&W;WzM+ZxbTd{$Hd0LxGJ9_(D zzV6ig;V&FK_tc%=PMw`Ue{Lu3ZVnc=?_TbS{n9VfDtgBa1b_gm0BBHz0GKk4QNo{) z1kq@TMp95fRD?r`5H$*cW268OFj7F30wRb2stf`lxWjXTn@YqGGJgB}AKeSA2TX($ z6LhoguC=eMf``TWO1YuifW71p1OG-q)KC?*1joKYQc*2qNYxBx2{cC-G?E(t0$Z|4 zkikQYl$f>}Jm1@0kK52I=BqM{fask)DQ>}#~26zG=mW#R1!1~K(zoNm_(3f zi4ar-hyp+mAYLWnm5tJF+EYhS%D2q~{AORR1BqbmO z3mE}37$6EDLV|<|!9;-G#xkj_$Q~d86J?MK)llSyCN)(WH3EzX08BWhDnO(G1O()^ zX$fxnUK%Va03`x$lZ?c?a7JBgBS<(#cLSgR5V8P_)c}bC;5bI%-yi^ULZptN;+QCj z5}iR)hyqD7g2w=Q<75hipl0mEv7`;cPyhsi0E7^$2!Vic>L7DTpm{^}r7i}rQU*7` z!U2TCNTMSuRAtI3a%~1#jYwch7iP@c&DjQbxBbRud~_%$FHL^yhLxuePJ{i#?$p@z z^>%yzZkg8`u?>s>5(J zY>+0A!JNSCcLs4?QNt|2C_zfd^Tti(u!Y}+wJXs^D{qk zbn)viJ-*YI4?X<{Ki|CkODI~6GL3{P1v3Sef_T`~E3QQZAqF(DvlqN&I+?ND*z(X% zZqZLR?Lj%x7~}BFN<17RkN`>}K>$wAU^Q=S>ntYi1N*lOR?!Te&HR+B@3yLO z6Nl(hv?s7_y`BH74?T2v^)pYs^wPRLx$pGUH|1zeD1k;>}}unt$yWm zXJ7pN&%f{S8}n!dE9P#kHj`;!+(AKT5{m2w@s9gZ9F>iU&UjZAj9 zXSaDvNjIkrx%kx0TGku_P}HP(SG%D}OVKyH(rC;_N!^EQe)91siCm;nN6tN>Asz|Q6;AhTxa zwdOzps%FfEwkUl9iuRX8jncqo>AYdHi8~byB3TlB%)B)5vtxa6h(!sjp-0k;h0m4T zN_FBr?UBdc{6>rRP41y+>uRah)5A>+i=&sxDC1!eiV&T!Zho>md*NhLx2IN2kV4Tr zZoIUA`Q^oGvu>B09Hp=0C|Lw7GL;w$#wIk2n7XZ!Zz__E0s;aA34j2SMiU?=+N{}4 z8m*M2)lH2SY9>CKt%orriWEpANGBbvWOX2fYOrSPKn=5%K{mFMg=q?PO&XKc(OA=< zI@EziRtPZ21$YDk*;`K@(W{K|Bqb=!ByyIf+1Zb)y6LOgkew*Du8h`au3Re>mR(H- zg|rP&wsM56%W6@N2AI{9fD-G0Rn;h?AP7Jp02$qpzz7cjtk#J>(b%aB!+x^Ua&z^3^YIpZ=qN_RLGa_t|DU zotj-7n;qMa`TjK07vO(YR0w`HY-9K&%jJ1o@$NU%H*KDBcX|3^?D8 zJ5zpP!{HWjgh~Tj!bu|e6wJnID5O-Wz%hxHr~|MFV@Mdq3Yi5FAV3vV4K2%#Q8{}C zgDkdXYFphVuCBwEn_+`A7EFU?Sy=+-ag@mTc13|)};JUCz?@=r$5)K(V zDO>j3Ok2sLbVAL`OoNwS;%G!E)f5CntP&`b5EKE&F$Dkz2vtxLC?P;IOCT6Q zsRAK0f)tJk$IM7TAOsSm2?h~xMkR)70Wt!R2tWWSBR~j%6jg}C1ej4ILX<>?8wC?y zRYCxqfTSIvnef0lu4^3?Mo@w|m>>|xBo>&fn@v=$UvGdIBLD#gf&?JEiXtTUOqii3 zG$1O3aDWCF1v&|cAwVJ!qyUEq2oZu91Sk^#Hyo4jDiHul0LB2U$_ZIe3|SzE5Evjq z10fQkwSY~~6WACDz$(NlCxa-Gg(ygZAV@}cAmuIr$w^RwqL5-l5rlvOOeEmb$u^=k z<7zam5N1w70+iEPqwj8CI(2sc#0OqF{Nz_%dFJ7J>o3o=c=F($9_7T=UKzW^s!S#y z+|93l`}yBZU3u}9r)AwiGl%w;^pM`6p#sbgTqTZAPmD+JE;@*)oPIS zk&~ezoHnAM_M0FB2vs1&yqmnkc&&tsCPYin=|*qCywhe=>x1kG9ZbIY(~rO7=JTKZ zk>}t17e6{bf!ANe1h4tBeEJqX{0x5ZYw(&sy$X7V14&qz?%d|@zWk4W>U)3o+<)~W zKmX@XeE8!RuKew{{yUauFhDkf?`~rIjc<7VaCPwVwY^c<6x4?P;I_3Lo6K=->(t(Ft_y)p zJy@5z8J^mMZ7jOteI*klAP7|eqKN6V`0iusP3s%AidJDgTWj;l*2nWLtY5w~fIvyS zcUrple$)H651;(KYtP~qg$`Lf6c}9E5pg| zJ%3^Ls~@Y^p7nIrE{3J@1*sP!F!JYTNf@nRkkWD+j8pSOFI=s6AV6 z-9YEvy&km>!n%{l2lnJGoIXwblb3O&!xoKn9j2Wy#$vTzi$tvuwc&z#Z0IyBb4e^2aN5sr@!T%x4hoV_ztBC zn?s6{N{#j2xVPE2;YIc(*`b@a(`cs`Q+r>UMk%Ro#-Y^0D81wQ^LwvcUo4huTlCbJ z%9!UdZpV0_SfMDQN(4xdMi>YX0R?Cv0Fp+S#0W-JpPkLD4N*r9g60|UWQFKgMGC+O zkxqb4IIWr$Gr~a$RXC)00s=mrNRR|ru=z{4R=$x?12FphO9Basky~RMrYXr6N2}w`|%ndJcm)YEv<&QlS743MpWM0gz;y+C0?Fzv}evrf$DD zUmWb@`|=Zy-TRgO)@9vV40pEAd3@FKYGYsgrQ26N^U63$+v|tVY-aaw`{cy-wQ4u| 
zrw{Df!9iMFJA?;EMj!wbb1^yH6Tr|gqOT<3PBRa%Rv-#d%>g<uOfDSjvz1_yv zmQlMlHa_$mSBnic!$|c#QJIWx2%;z|D){fd=hB@9Mtj+ir$+bNK_>< zeAZ39=^}tKP<0@Sk)Rn!0j$YkAPYQTiL1<&z|m|ZV*~~RiKb+)86I}JaMq3oxxUiN zj(Tir2~R1Vm@7@w%4|K#iYQgJTkum;uNQVS?fmHW`rz(H zEP8NJtqMv8&xuTaVpr{Uw@hiVAy>`VD`M;rtx2d97{k%1p;WImiK1;B)i4oSbLA#A z3hMQ(PR^SzvU%pH&xeQWfee)abqqo@NUVWlG!Re_3ZM$dC=duj03iwh0Ru=6Kp;hf zKnedJCjo*G0TDq!LI4aJP=N|SqC|sC0w5YdhX4qKAQwoA1Vo6SU^qq?0F{k^>JGr* zgp-AgS`Z-upa7DFV~x72mw@72*?Cv0>EQ}NP&(pL?{JG6)A*F7#RdW z0fI;XG6A3r5F-FcfB?g>WCSGPm_z`90O(GDA|R;}6bZmcDTFE*KmZg-7%7N=zyO$- z00lsR5kXXncoiYSNC1|bhHwy|6(=Ggh_YZEsuVDq0V&TqR&YHakF_Iva*RljQoh*I zX7|ibcAL}hyjz}c`e)Kja4k%aRzqlYV$qa#Jeq>@vvqP}_e`IC(mp>`bNA*pc2@b~ z^6<)JvS+8i(w0__KC{?QQFPtnC(qn|_lJJ#8z(#XpWk}@;OvQZcA{I<^Mm^K-hJrt zX?M7J_DicP8*BoxIh6PY7 zU@}4=AR=m&ieM7VQrnmshl-67qK+v~8qfV8u!7NY-Z76lP-t0d1`=Ww6;z@Z%i`FS z2yqm0L((FfdFS-pO{s~kWAgRdocx>r=z}M(eEgq$_*bTH`q-bHq__Xa*v4zVEKlCS zPd|kpeh*&rnq$}g2-p7b{lE6n{@3?kxbXKb|J=Lqn=j+mPk!&;|HSJqOwVq;=gYhXzfv|*4JIDj-GcAj8)M&&8}m)y zqx4bcO=B9y!(nl@)>iS#7@l3UbHd8CBI-y}$%+*W8j}#-fJ{IwB?OpEW>8q5aWnz% zES`Ah@bXn2?b(?V=N79rZZ+*Ae|WT-ZC>iOn?l6x&C)Hkz&a|SLMcGdSm=*FEWdPV z`T5QM-8+k-ttra79F4v+bCWz%DIv8jEK(h6jWOhD8Gnx!;9nHo=r9>c9_gj20%=L(1Srsopn9jugg>? zW6clT_m&IT+&w)E>)jK9W!ZoH-s#)^=Io;%#zH7tN?Lc5* zR0{3WCu#m(ZeC~>pIXcMs3j0^?;OZ9BawZB+G61#h=6QB%s3VqZqw1y>p(1q5i3|^@9}DDc9Jz2TzJv^6 zgqRY{I(S>VfiYkdgg_4D)L5RlVyycslqz@839}|c4YwO~v~l#IR%+<0)caUCn3*RL zN-=R|)xapgO4?K_Aykwq_-K%Yg;)SoB^x=pkY;DQZc{Gwk9Ym42QKlUhtBpxd>%K_ zdVT6K)^FI`4_y96r=3Yv(tJ4P}&fS+brzbat;7~W2clP?dYd0d*7|bd%;hr)r zWm=Pj6yvatzNechQ+Y%b_P|)gS`+2o5IGbz3K67%38)jzC$U4THK7tiEkj*q3?Pb) z1wxQP5vWo`7{B#v9zJZ+eyM3$hk_D{q3n<7M_3RQP?*BtO0%x>u2Wbn2acP;OK(mZ ziB}QKhLi}=3Jex4hgJyL1~!H;(g@~}-cx`sNV{D+-13*Q;`VZWP-bxLcso;au8S>4 zaHHK&nzU-$R70$(gXJkkOZ+mN9bF76jc|BEyLU8hr{BFZa04wdqPqPp)e^ST72q z7)Y=p3JU>*00ATj5C}n407T)KfB*;>0Z@1q2|z|Ox`0sN|3~2%4Mv8c7!YJY5eYD0 zlqx`h1Y|fy!2mb~5N;%*Rz;A65rXil07yCwP?SJOR>5NiBP$Rh!^i+69HS^g6aW$c z;S@4Rktl?Eso+%<;5a6O5vWuECK{rWL0Sz9MS|QYr)UxtA^=vS2!TadmkEjn(amYbRB=*E?tTG+zp&yY&p&@3#HkMvFeq?(S}-x?DYX7ik!+ z_}=trIsN{Jx9(%}*W>bhTDqOQnaqZCD&M|?{?_E#>5z_=>cNqyn^BmkJ)02|F|6ML znYX4kHeFe)8jakVo;iulVco3SyQvxO?k#Q$Q2~OKf`HhDipnU32xICJH8GUTpnQrk*O50j%GwM^y4~^!HKa4M7g9|f=(ky8sU_urlZN*X*WGPaQ)-|;LS%L zKlhQJ`JH>7e0%JC&ktY+ulcfk;SPTM)A)yf7EZk8HQ)XFxb4ovpey`i65LhUUDq{79M{D`knz8h~a6(%isJTe*D_O|G9W2N*_Xm zQcaX7zyRWLM_5$mJzp);qNH2m@PL%I=2GeiLbr~!(G zGN9rZ8+Q>3MN|f0Bqc!A08eJs+Bdeg{SUjReg-E#i~7dZ@=nq9Q1nLt3+NSK45?Tc zFb13zthFY?F~Qn7teKwLl)YhCqZf?=iK?(=IeLu)4qLx)!p@ySC@gU-cdJ#N#pHBzGIecldrOYxaPRO;bK-67=@;+5upalH9eeG}_{OvO%p=XIRr%wa z7zR9(aerrDDShTTms&l2Cm3p+ZKhA~wv013%Ktu~+iBnNh}|70>&xrAN5i_RFxs#E z_V1ePJ^T6J{`_7uYa^Y&$qAaH{M3H3dG8N%{j!$N4*f7z2@u@o1|?L`EqT;&j0iCc zo)bW?%?!hU&5=SVU_-hzR7in8Q3;Z%@sygBbfGDyx6a?NQ+@xkVX(#K`aW2qH+FgY z71OkA&OC>sW_>dX#RxN(XpOdwbsP>5QCkyb5@|NwCQ9NyyYt;+k8Q=7cTcR=m<^N} zTd_!mZpR|1N>-snMV6dZ5iIflvIp-mzpig_}>^;n$^X%!A zLV-bw5KM?^v_NBOnl!~4)HJctXwo!d(L@ER(GaVaHkukuNLqv{2(^Q#gOhIP40GBu zvyW@9z4p7_T=WOLBX*8Ranu=}6rKCAN1mo9((aq04#nSp!b z#N|uN4%9LV5&{4k3^E1-qov5U)N!mED4LQH3SI!ZCx|<#jb;P03srEh5+D#2!NFtQ zn=+hr4=V>}+t8*&0ar!nOA1VF>q03$!T|-4CIwev?L`Ak4P>zel66_#b{tWLVgko$ zkTvuI^?*ay;G-ZzqF7=YHjW6t^VST z$0dq=m{c|awMjwf;xx6#K&a(>%d0I6eNR-Ylv@SIQsi#5U{Nra!)pS}Py?%!N=hCP zJ*T0fxP#RK=zvSY69z_*bJbSW`OJr&H$?=|oDy76%^AgEjxK}rj5>;rdDAt`)M45T zHueL@je#(VkRVxOh>;2?&YF?PNdq+qj*tbb(PWBQaSOq#nMZVJ=lx9;EN7SfY>0KY z3;PVI@2|$qWB6G2MortO!^jyhqFJM9SPrmqaN&y{t6IC9kHu)|DsnA^g4jW}pUNba z#mL83)l}m=V}?|B+r00gQH?LDs7TbiFLQPhPQ0+OP?Qq#@ 
zBf}W_ER;>ivT0aMZPOROC_^D#6aWeU0SE#h0njx71PB6f3LrrMf*{an!6cKDaGF5K z0T+y*0Y)0!dK8V9^wa?1Rglu@WpI z05brDKpKrOqX9*5RZ1X@U`Vr$m?qnS7c1*hSH(vYwQBs!7X)!xS&YSyE(<- zewg;F6KU+-fzB-(K+UJmci8shWkzX<^YZ9IC`B^BfC34jD_&Ky9B!h*JhC)anz(0_ z>ZM`2^`>b(8HQ}7I!0G?Qt$#mglZCK3L&^h)IJBHx>uu-VbTUyC=O0>t?_Ahe6;0_ zSAOoN?DuZq@BG&G?;c*<{L#C3-yhQJSNPd4;OG7sGy(5>-(UO(IQ{9DzP#MoZy!JR zr(gdc{&ip<5=a3K2Q>bTFV-*r{9k$H`hkSik$9zxB06t;>G2 zuEMpbm81+zmWG;ujSy~q<}4g7^~GPE-u_FWIePnRHELsJ+-@M&dOLMnl#8)=v`8w4 z9UdGrycSO)nqVR4MTG?sHde4o@<;(Fq=Aq~$e^IFfx+hy z8)=0wfnl-f$Iyfnht0|8^jwMENw(XcO4)6B(wOybsFyxrEvgl@2wqCE-JP&I;MApR zxyr06Ha9v3*6G*`lUcPIJ56q;PkVQf4%geOpzH^qN_VpQaZvNo1EgE~&tLZ2{J8o0 z<6nGraWXFMwYF&&XKiB)92|14-D})AjC-Yjb>-!1XrF10+ZSFq?r+^G`)B=^R?4f@ zSKq+V&GAN>y}4@Flk-oX`8etyR|}arYOE;5j=Qs!p#mc?1en)B@JICt#S2K{4`s_R(+Bt7n`aH&1`~5I4Sas$sL?)j007 z>jTxH3?mgx%no?9P)F7w)fB3w0?RS(wVZ}_we9ra%cpNI#?|91-xjYI@}8b316^=} z1}Fq@O!~x)a&Q0sJKMv&2-Dr#?Hu>IyF~#g?bknl_TQe3 zlk#XaZD>bv4fsQIgNCsJQXoLV1jb;bfg)-tUaLTg!DK-hkXwTGrZ*d;&A7;(tE%84 zj0iMi+zqz3f8$EN9j^$vVPZZRCu`qXl&yx#CWf|65{9zPs<2sL+nI;JO|AXdcg7U0 zP$&%)q6*|$smS;0C+7LU))ZrnY-IWlj~A7gp7GCvudf+)eErnC`V4L)-1ECMMf z0Rav(G_zz)tXX9_R#g{SDMAjp8%2wWZDX`KYeQwe^z@w)LX83ehdbe{hH5m09YaTq zK^K6RHX)_f88l|*KC^7haZR8EiNP2n1h^n~LWuws8-)~sSqcoug?U3l@=-N|QrnoO zm_x@--?x6(YOPw7aZL_&J9w^!6v{qm3R{K27(-|Pl1}#m??XlORLw>i)Y=rVoGiCt zulVivHnogzbp1Njq!3~YPP8oOR~~u~xTZ>gkRS-G5TvkUY&vVF;V7y&2MBBeBjB#G ztwgP?vPtOY;i^=e^xK|LF_!@X1OlW4sS2WufCx~y<|cp?A&3GAkPxmh0SKz3s#pah zf}wfYkuO1V|7NAR{f(%(BrvqZb?? z020?EBP|-7@PJUvT!JJZ3P?y1q^6z4-MCw{%GmVE1+_QMhm+W=Eb_Mhd$K;+HS|Z zZO&yk57YFi_~}WUXZ`o*I=C2K+-u(ov$M1F?mXM1es{}nHTp;Ix?Q-FlK?ZH9kw^_ z?A)2AH#b%m=X)d8!5_jtX4pTRty|R1cD>f3dcWqyB~Y`Rfx(zvPR{mFg9a1kCPg@` z#Aa-5zms*xR=rW3cIkW~@1MPQxSMC6$$0Ozi;@>_Y&ILzhwhWcNH|5b1crWFkyqYt zS$kTV@3S(N68DqA>UP*YobH!T|HQBT;PG4E`+fiDo#7`=hlB6l#ryt{-d^Ib|KIre z&te+!zW4owzlXD*dHO5M+qeJB>V<#qD}U<+;25aD!`EMV?MvtD7rW2A@=L$|D{uYs z^6ht4x1N5dY5n0F$nsHf^Ok?4S$|i1<3FXv^Rq#ACqMzYoor&`>jg^d&-dZ>-+um` zU%9ZqS2pcx>1`^huP0j>90AIpITL4NPq;l(CvV+jX)2oA95CImD2=CHR3oRH(d7aSpZuo!`*2upuiGo0k&0zT!LE) zJ9{tQYfm4qHg(glEB4zmolVbv(gIpnW*dpG*a35Exws<3i-mPnrzRo0qW4NPV(bd{Q@HI^FL-sxuh$K4oH+>N)l zdT>5o^s8xCc(B7dY8ZxY{yh!iv)}vC=fh%iw|?{UzwyTFC44*_tonz)_W0d)HEJl8 z9pp{zYYI7-uxx*V2gJTF}C#+yKB?(Wo~ zEU)smtn+#qcH3@h%hS!QvQ6BA#W{Md?p2TzQ32N~0agfbT2Ymp0VLp~17IRxj=-3# z>6kiemp%@vv9h|_z5N}-XqzHz`>hZ<_d6)(#2KuE!5gT_^{o2f7DX@}-T;pdhZDo* zogQr4^zyT|yqGKD2hD&On#LyLC@k9;@@^^>hy4_=jSr^m z+81`Jmens)2u%}Q%0n)aO;clxWKSEiHN#|Lh-q1%Rgq9z>&zy_$2u-5S-p}l!2kh* z0tsV=P~j3l6B)#aB#>StL<<%|RW0fP2@%&+Akk824PdQRUIEAn2-iGnYM2Aw0uwNI z!)V22U?dTvoN(5Ju>psQEro2MSTMwpj8;k=+(##dYOZFdOOr_uLKI-m?A&BKGPwlvULkwako!_p?NOI5NCI30M2Rve zvH?3mEHFnTzyHO1tHw@=!Tq@4p2MSUqBdqwItu7o=EeqUL@G7bDp%Q<)^haOm(n1Z zDp8$gj7cJx#X+DUpVCTQmU14aQP1pdnpnTF93QW? zO>hRmf{LLCR$u_73J8KA3aA2HHYJ)iwe#-9P676fjPt>FtM(&nK(1i)CAAuOH;0e! 
z`s()ZJ8P7;OWDXNPy~S>0dhGY!2lyfIRF^}Qh)?PG{8&%yc$G=N^?2k1OWp;a{=&R zPJ~K290ElEBq6~oBsT~&kst|}k${O{3^Pc~TR@ot=nknekug!pP*sa^6(tZLaZQi_ zcnE;8sW4~@xPt@%5kl-5Hf`*S<*|fXjdDm1hpG^O$e<9RB4sg zOKGP+Y+1KbY)_ar%l@v!8BYV+821t~A@`-LZ1uFW>`vE^L0k{pbt%rh1m8)k9BxmV zXAbeIupQcXg)sCtrtLZ$Y{o}eZ!q{yLT=LOwzfk>jB!Fh9IV|-^X`MI+fP@0KAd{N z?4qxC+Pu4WTyF(!u{qf+myb8+<6>Lo2AEll%(Xfr?i{osKHaw6t=Kf}dNmd+o5JOt zcAJjc4;yx;<+*+P{W9p3jx-r0}->Tvn*eDBw%NB`dUukQcUSAO&<{8QiD{JCGxuf6^cUpRgBt*g&% zA3RuWYd2fP@duYTdt83rAHUH3(DN_d{DqJ2{P|z_+plr?PVYN6b^>^g=GOh=c|Ljl z%d3O7?uX|0?;W)#n=tg48ax7hU2Fk&(Q8F8wcRA3p&I+dv;4q4{rz9R}n)Zk%C2DLrQf$)ZBZT=~^uU0wCxb1dtA~RwytT!T~Ut7jQHW z#glVL0o^1WdR-aBZrn|W+L_MAul4%+#4o9gN1qoE)!z4XIat*r;Cx$1X^d==nm5vsrf4_%}+jooQ>)il!& zm7z&<-0x3Luio1B)pq2$TNZcA1v?>O_Ob8&H}8*cFu(WKYUkXv_sy?gd zF(Ibxs~y&Oye;1veHmE=pQj@{QL=nFZW18XB1j(1XmDDjsceSoo&cz6=l6v1Xn0hi zhoIRc&TkS|lX&K{8|D5Bx8luD9t2ztyH`~ds+xetn2m0hrOJS03JR1FLv#;hY02u# zWyG7m`_?;~p+Czg(0E>z&6A0O46qQLRISm4HqB9{^Lw9OHMjoSF#gc0{U7fxU%iR5 ze14A2t>*UgUD;fnE-zP?o71nq8F%j<9DjPu?=CKW@3O?@qpb+fMSu_@K+y#R3nW!4 zTuuW>Nf6}j;uE+=8Yo8);yieR&^IYJV9q|2WVRccinLUwXQ!Z!O~a7kF2KBm5l&XC zK%ynVEd*HCM$NihYuKzS(Nj@MO;EJl=W#n`0`3GTM$kctpgRW_>A^u6NSG)=fFu(P zF(AU^mK^{s!U80q7!(u$)mq>X%mq*mM~&Ev*hwsr0+8FWz|~ml!bAvYD$G43j}aVw zHBe>9sC_hRZ9-X=3S@+06+BJaHnb_qI)Ol`J+vNdJXsZZg`hD(jUP$P@y`BJJD-T0 z^ha+XzFWKKaglnQJ={K8thSZD-ny48^NN6c*xuO-cbQ7wUzS*h74xFd5D*%(ra{~z zk_~^@zUa zJ>+||n7VtrN%)bQTw(oBe_CYD>xJgARBH^aC5>OnFFi|ixSTl)%wj?;BJvj z5kM%;;yw%Ys5Q0eqS~e(Qa{YxG!3a?1XsPCZY#Cj0XnypaO$v8CV&({5@rDCsDLE8 zA_1!a2H1x%IkV#+S%)6aP42^G@-F3PD1E zU=pMfTqZ~m0W%4JNB}gz2vLq}hHENv0g*JC1HuzD0YMT_^@Nc|5)vc;fdt%(18@ig zK!P+f5Db`*K$<6LVq*pRymeJ~m%GeJ8es;57v?Ogk^lq&q6|Pa0E~^nz+SAXRs~X^ z1D&C5I2iiTr`czbL*n34k%K`42|YoBNCyqU zPyweXPy}&}E||ecrwg(o2?|C45&;+pNOVAO0=x#gTZ&;8A?R-Bo8$-g znnyd$YT37RRID5xZ4*&0H4K>U&|*ysaMm`3Xi{i~i_Zf4J zP%Egs(c^wOo^R87EET<%Ni*|VR~IL6x>yVh-Zgh;!P`A&)3g(tL#N&zzV`I3kDNdH zXZ!PCq*k@WGpP-TD6#eSxzB$1@N(V1U6&tp9C-W(-B~Efxz4|PQWoocBXUstxYkZ% z3|RLF)9LdwHqVD$-`kwe{JqzDtGVX$N)Ob+2TJ?hQ%Pefs|AY_VkU*57TY;!Uie6Q z>FLQ!!GH5Y+sE6x7q~d-f4$;EQ{21Te9xiX+#ddSi1|xz|L(=)@bJaEhbIruPOdg% z=Y^Q$hVr2l3V^|cFsc_kgoyxYaKPjWENzbUg_$3v{Ce-(z4Y-NmabVvl#8ZE2G{Dl zR9o9@&c`xzJT}A{-pf$ovMQ#$8!#61k_i+gLX~t1qaYE28cYRMuK^U2*zEFl8}eeh z*{Yf6@WH3{?|k@pnqu7Pxbr1?c2(3Z@4K*MG@G1g> zRGgp+9Re*ti5A(B@<|I?6OjxAjFSLEFb^^3nwJaF7|`go-oQk^7=9+a0}&ogbXrveV=VL6EptB)HV4LAcvQ?#4Km!V>~yasW=s0zrdBC=f7`VA1F(009snB!D5{n!5uKBmlxSf@TB_u>mpd23QDX zm&7n`s%Hw6D&T|whbY)65TpP&2sjCX1q3`eA<{s?C5Ib95(VIIS{GTS};W{u^|)hd{ZRTWhM6%~L>AV{wj4pXRr zAXKwLMmmXXAmQMhbU0;fWxxQ9Cv~_&r8`KN1Oqgdf*^EF2(xH%7bY}>xPVArENZXBhEkc)K_2YKD)a4 zFZ^3S_}0sJ=Z`-A!%qVS3_W%M^;pfYlkmR(AI^LHmCxa?|4H1P<9&aWN#OtS;4}Zl zW_<9!n*ZL-7e4>|zw)JD_zaNnZ4RxL{K>!eKfnFk|JtJcP}yoHUQS>9kB05*;rwS= zcG50a)9KYVJ@=>m;+N0f{la-T9NnjxY}mIE&dF&AH}VT}oT6+W^jD(- zVG@BrVcNXBgQ2yDOZ9`f4S_n|p+v$w8Mvlhel>NMXyeCN-6@H@X? 
z|NQA!|1-~vzVX-BeygcdO$J$f>gMr2Z0MSL0UJYe5hMC-T|XLMKPlt7W(9`ZU2|3J z3bcz;kdwU*vx)W(k4D?mnt!*VyIRkYPOwiUR`txpdZ z)0g)?`t;%R_3{7tN*?m?nMUuuw)mA*IA|FTyRK!O-gv?2uRVA$@AA&g#e3&@wHODV z-*D@8`$vE;YXz*pLP!=A$H7Ez465oSqS~zMj&|mcH48uaz>2W}99B^)i~c|@6%q&nf&_suFjyNv)BI*=_7Cp6 zZ%*>nqnhTc`1}Xq{Hne8+4=lv=j}Hi ztWMvZ+c9T1rkh2+T0EF~J}O)&zg=sf=Zb400D=TSBVdH!tsQmT?aSk>p9v8%hg2N| zC;|p^01gVqKoSxlGsp-?6rP|#Gy_ejgu??oiGsNSfvkW#Ay5p?5Wo}!IIh>Hyp$fDUFbQWhx~Y6kL9 z>*yc>5+ErkSV-1HbL>xA3n{~kRYMa}1X6%UZN|D@EfqWzbLSuxuSudA-9mzO1{a1{ zi-}6K1ji9D@jBG99NpY<;Mi)acz4)$ud(**S&5p~3jMi!;Reux86`N|9mVOS=5D;I z&gIT5H?vq2^9o~4`-lAC9ZU1)Hrx4dwX(dtEEu<2->x=J)!>RHIMRlta$S8Kd!d4) zF-T^R!R0K3M?nIEm;Kgm@BF}AA8I>(jl^-5T7(QdQ@u=AW$oXiwtW+Bqx@LV0j zbk@arSF_4VPxL;F8a)sxL;>XfSzK21$k8|iu8Ga zckO;;vH%32GUnri)Wk>R<)-?g))6w_5g2pt@S;`0qQZ&*hzbO+89-7DgiJ+%;DE&7 zBm_E*fH?qpOE8!ds!){x0qy`K0-yneI{^{Gw+R4^1V9GJVsa2xqsg*Hmy-mcYYq`k z0WcAO1wrsg&};;~W=en=NWuxoAs0xIfC!NU2>~J~m;}fH!lLLzfe29q2nY}=K)5*o zBLUJ0$O%AZ08B>6-~g%{DbN9e5FiQHL_)YGKu`b)zyJh_1gab;Z~z4Hgb{!oG?;}T zi?G5Xx~fUCfG3D2geOE20Vq@jqDVkSfO3ICAQ2+1q*Ri%14FG|GoWe$Wx=>J;o&YX zGKaOM5>wg)x$0ISSX(hZH!HKFHm*%^tbvQc-nz)P&LzcXW(YS9K5&59>G1NX&+YhE zP7w-Fv5rd0^LbE42*(fWw4OeMF>*Sq{i1Hi;*tQO0w7sM7a|476r2#`FhDe1BMe5; zstaap*j7t&cLHz$b$kBa{THs@{YKbsVTPER{Wf*w{1P_?d-48{?ESH4+8_S&Kk`Yu`a5{{RUQAy z%*#jim3u$-sc`ra{626GIt1~){}1bopZtgTneW20yLjIpX&KPj|MZ*V+kf-F`Y-(( z|MIh6d+Oi#yFd1UIdliaw_Sezx4z?NfBX2$e@pY*>*OE2H5^P{d?Sv{#?#yP9);1j zzun)Qe%N^Nt?r~f(`sZL*;zA(ue)s?Cvn($X7@iR_Vdg1&Lwk~x8r8W`R;%#=fbN4 zA_Ur$5Q7^LP;!xiM0gbpvaSh0-3}Qb1(*wRfC50$01+~jtesnGZ{5Flxj8={RQfEf z$GTPE6EzB%KxGvHD)L?m>E3R8^TmUwX5(A0e)TJx)-G{4+jrZU#DnMmANJrh*1PpQ z5Bhc8&-1Lc{=?q;efK+``5Yg+b{r?p9JfiFscl+HfGTN9AW{jSdPZpVb~`F6CqznpV=8Q%6+ zhr{KQ$OpUq?5%?4OI%()90faL?Nb?2fBy6=piRr`z8tfjeaVZQ?L(V!Zif_4f9;8D zz2?2I`4h_rKeYIj@7f&B_q*Nac6z>V!}9p6E!D;8Z{D>J4#RU-4@b>$diBnvN9X?L zBOONgw0nHX1?{G{LsyaRT-B88BU`3Hb?Cmb-FU&D{?CVvB1g^TL$|w^TYP4OR{0sz z6!Tcplk>Elszc=Rrec@cJJ0nWf3^R>=Hh40st>0h-^ClBJNui5d}oyx_pme>Zw-dF zd3XB_d3=3w^?&V0jg$E7i%RL_;`5Sk#$be3v(><^6rU*pJ=GS|p&731Mz?&mTTav2 zueEuHY=BV!d+R!OH`A>}+F$kK#lhPY2{7S!IT-kGe%mp;#4lag#XHN__id5JA#`w` zMF4U|NCt>RH3>`rY{(}CP{fN}I=bB-CcRblu&s}_H(q%)-G6`X`&C{oO?$A--Wt^| z4*RKH%~`IMb!(yAPRvj^1!V+*+9vYpXD{CbFF*HGuG~Gf;-xK-+4h?mk>j}z#q%SF zr75CGvCL4p8C5d;W85&>iyU|}xGlnfS7NCucCfJ6frq7SwX`izVOWgw_n!Hn}x(V77k9idJm zfL%^CS!Q3j1go_TQ4dlJ8OWJ{C_-dP$T^{`j|Zik7?}{dND%A>mx~++Ki@vx8F;2z zH9_#m6wp!GhDd`W7Zu#~LtU|7s8H6KkoR>;&@lw@J7hVam&m<3LWFeW528Px?W zjFz!nA~V;~#?e_1kz8hkQ8DLy|I}W+%i7=Ej(OTen{B(dy5Be7k4#D{v7=_yO-vEZ zg$e-zK$s{66a*}?sgOcqQ(KmDYx#jSZVblX_mO9ym=%o)?B+sGFd%_KWO*MHOJ~+4 zsMx_!J^F}yotDexcG%>E)3^^RB57!-r@);7N=#oEZoKT zXq?HwA+W8o3Qr94)S{aa=BuK`U{lH$lQ6g0W2{USYstz|C)ro>gXVL{Th1+WA>1BU zX!dmJ(^6<2n=M*+mM0~ooHLlC3f^38o}KWN0JDPBAqjv$poQWZqS8q^1cE_pkVz{+ zgPnF*rxGc%kSIfeIzN%tp&33_gvYN7^Tto7STFFj*E5+Fs2plAY&fooDg zkWmH$2?DTiNa_tK!xO?q7YPB?fuQi)C_w@ZpddiFW*|VC0cM741PHkz5P}6k!2}Q> zt|^FXNdT0Alx#G3BOEdmAQ{{k5%eGsWB`;51Stp*LBJ$HArS0n0ZbMY6(nGATnhq_ z5r80oCIJcp2(t`WBteBCqM9Hu0;FgX5k|@&03xA)6b2CnGms_%@)kg_2%sGAAt68j zuO%i~l0qSv8$v08z)W1DaE&4$FhCGAfih1NXaT?pf>{QLQkPdbm58euf~SCFjaU{t zy~#c4SE?R}fl1)VIg!0myZcOApOhz04Di*0&4#OoQ#*_^d6Z>~;h7iE)#2sK{Q6@3 z+5JVCM=Fm6sg3LCqO5Ev@9s){`p|&V4Y!7LJ`LOHTy+LW03skTAO-=D00=^W5D-8z z0S=1I1cE7d$UWg8pfEuIV%}C^-cyOz(E{%=K{Szs2uLDfNI?KH5dc9q2}%gc3>FN4 z1VD;heDyQy$IpKB>Fb{=euV-e>4WH=meqj9_FW<%&{lA$Uf8zhZfADp9{|$W67k%Ov@SA`3L;vyTo`3be z9eu@@fBpCUp&xkPH{ZY)1TH@D6Yu}?zxd4O{>StccmKKI9tBwYB z&mG-X-|{+q_sjH=XEcm`{(5SLV~WCz3UGM`N(N zpBHx@TzqD!r$1w5up0%_)H_pJEyCWJ5}vtwe_;1!omUGktX*+?i+v_PKVKDEXE=Uv 
zHyxSUyt`=IUyd;;f{XjOz3%!uMXjEN@88|n!iEL9hyKLdVYBUyR%Ok8_f^m5O?&at zwr)qp+rReq7*xz(ar%3|vcLI@lGwdD8cZ@1D(o<-k$*(U<$@OZ(KD zQeTcbj*+LGPu1DQxC6#%Tp8y~6uaNGdWBuH<>|-J-hR6I`{(JyH+b*GQai&=PtU@l z@+LO>xO~^AQM2g*#D1s?X^*R{u0+3K&G6&QiKrNs^t!s(%_%M|m9#FYo2N663Xz~X z0)~{icwo0y`RaTa&vv0%ImJlzSUR-#%bq9qyMO+<{Q6rr|JK995?dh@O`<@xFo328 zC{zrhIhmLFX2|E-vZWi$o^dhQp$_ZA;(hPyZ@%2+Ro5?52Lw3#p+@-Ay07vI6Na?x zWmp)1Xo`>&3Q4Uh&ic*cq^qCZES&3)ZB}OWk#B_hmc~GtR|T+CgXk&*?&FZ?g=;V! zxD_I&aZKL!36&rLCZ+6M^ey$1HLDFy7Gxl1)c^%2z-s`M2+%|@Y<7#7sR|@!GD$8u z2!Mc0xF!lT34kc#w-A7dBxpu}vH&0w6oM21kpKt?AOR>q6EY#d30f>6mTqRM)g!{h zBnBaK13)!{H=&6{Py~iFWF>WAfzetnO-SdJuG%%))LBwlHmDz0M4wr^nBA}+=1TbJ zj4&yDc0iC3=0+zo@?yCX^HN^mVemEScTVn~l)G2^v$yxZI8hiGK!nWInJf=;%07FZ zJQ+1qHwRLWz6YK2&RRVXyJ!i{Mm0+`O9_(L#2{#8p|n~`3+}-f6$;8y?L@{nN1a8u zhbu&wF+twpn0`A4Z}!{NjWsl47V8vI8-XG-(E~LiK#3q3CQ7UXA_TYr#*WG~A%zqy z>`Ge5GWiW(@e*>~H62D-G$$N1urq*`j6;Sj7jZ#j-(MEn&CL!o0a$j+%@DaPRF?E8 zdj;9lB6f|a%7J78eRf5_h@1d+%K?j$6w6qX{WOhjdjYn6YZ1W&EeItng;nO<L{MgtK*30WgaeWc*CYf) zC;&IFc1_90MiU0A`2u&D1s)!NDxq> zk}*J+pdbjCOhlLfAp`;>B!ob`M=Ue|%>a-saE$z;L$qRQPBr`?(=xg9mT z+~ILMT}&rPZyTHXlUvZTR>%@F0A%lwEnH_ zaCrD`M40urR%x4;yUnHM17HS-42dB^Gyx`nFiIdGxPhVppa55Zv{Ev1Pu2)QG(i;9 z_EnlE256vVfPjf4Xa?au0w|FI1C&AlNQeX^!iYpxr$8b|LhL?vfAQ$$uX*#0Pp!ra ziq@$%RixueFS=N-ziM%G^gwTB4o}K}`C$n*-RPFO)9cpP-n@D1Tfg^T{{GtrX8ixg zcE+bWd?k?ZJM+N5^a=dP*WpV}@kL+s4@P|EPyN6T|Lx0Le7Jgc{>Xp!gFpJp_uj^P zu7Jm%_=Dg4BlYh5mVM}JKM)_%H~;bSdtcfAqj|CU3+cOmc>mYmIsA9uIOXFvrXZU_ z1}Mc`>+&*GY#xqZc^1$AyFdQpA3MRj@5hh+SpU;++&=s7eZTH{)ppT5=zt^$Kv#}# zFa1eX)F}edB*F+mAm9KLq=|qB@daYQ2r^XwAw^h59u_Bexir}JxW#|xX3)b_^X1V_0jgymG-6ya7u06bWGWf*MX|!s6E8To{P{)>9YyoVRk*3v+LuutI ztS|j~(O+E_*%PfaKc0QNZXHX;X~wvhPJA5FQ98a^j$OSrubuD4VZKQEEgbt{eRs&| zc`>!Cez(62zfxQrn!MI&akMH+%lPQ?Y2){vZl-Z$y!GjKMy!^)%YK)OuPmJ+4t>0P zh&Z$_yZw#F$o=Bko{uriufP2(Y;eqHzC4+Si*&J}s@fuaJZxIJv6c<1I5rzm2}Pvy8FmwKg_AAIV!A@ard%DAa<$`Q)MH#es%l*GOW%)o z-o&S0)5|aF_}Nt`iQCQR;Tv47Wc#pv>;g5k8p2+9;l(lb?>>HeZO1obeHLeXugARw zG2_V9yq;Ag0Z9g#YZijm5mdP0Jz{j$j3+PaPR2WQI}Y0-48IJHgRUBz?+3@7G)B(@k9P_NRX*&hRssxo`c?4oHI$+f& zX!Nm27UmL)Cq%Zzj(NY`t2Jjf#7(=~+3AQ!&_$K)rIM`clH8~2PKW`34I*h~pixQi zZl27wH#90KQAq}wLy`kf5`mzRw15E;0uX3G0%3wkPyiu85r6;_DS#j-5($72A{0zP zQcek=Nf25#!5JCq61-rNz$g&OO0*_4C0^?(mDHQej0749OSg#bPQC0fKCQZ%Q(wBp zja~-L6RSqVq*la*Hvl72Fv4q_op8`VMj(QPX}#!RdC8nDM-CsvxEaSlo_t2@oMoFH zfm_YlDmtU*d7{sgqIxzku>mR45WR$*wkvO|lQiAoRS-|ms)dX#8w_etMKbod8F924 zVsZ1TaolfeObl3?C3tm@2EZdiLSZUaU}PH9EmKNsUKQ*&inG1U0W%m0hXax)_@%IY}3aEuPie=^$VujZp|Jt68z zyH{|^#p7=AtGTADtHUGBS9R(2W@4;Z3N1JZ2O8By01;KFAq!=I=5k0)Y9S-C@E+k> z01gny1O=_B<1VK~|6rwbWLLqq^SQ+yC^c>Y9BTcRxc~ql07*naRCp##)e4wR9x)QE z)1#f*)aU{TmFa7_+K07)dg#~>JI zK%xOOkRSwIBVZy(8cA3-$Pjo>IOP=xGCPw;gFBR%37R2gD2Rq&%mM)tpwS3~0)U}} zB#Z(BD*%0N38;>1nMHu(Kp+&3Ya&2U1OwLs6kyB*3<&@&07PpbU=+X*t_4LP1wn~` z0Kq^5Lkz(RjS1_9ZmG*ou_Xkw2;rIp(2@a3X~AeI4gx7OMgWEkzyv^q1B58ZK?4m0 z0Rj|YVF^%1;HcA!$M$wjX9rh}00sjK8^D>!wB86YP<4=7qhO9;RMph!Rj7@|=bm_Y$xkTggB+I!u_D<6CF&Chg~kJ;ho5%-bDQJh64 zE$xMMI=Z`f%F-WRS#+w6&whCMN5ApgzNh~UyznP~dinF;H{E)1 zJo=~!7zXjU`>D|(1_e&u^xI3 zto!A2*{2?to$isNc$lWDP|L|S!cU7l$uaNXE>Eg`a+3VyqayJ~F zz*=KLgF?xi#GHHNZ0v?*40hPp4~N3=hB|89_~vZ;Hj+~6=3Q88y0u)CK0m$ODN<*z zWSsBw8_`}WzFsJ@g56&7i@I$95t$G$#^QFM|2jY4Mf`e8DO+IhRUc*S&JMI*A0W1S zyjEpwAp&6dEXcT60%98h1SGf=KvJZ1hg_16Is~#2Xte;bKu*6khRPv(fiE00F$Z$M z6fGo~2pMl216_aX4x!k*3n=DMmZ~{%ZJ8`OT}&Bc?(U5Ja%K`kDiX+c4NorgmN_+-iw?>fjS6UnvQh&2)w4@u!c_(4m?}6OCJ7QkNRl8^pJ5H- zgmBneI_kP@w!=K97O;>4#aNO#;l&{_ASi+am+!pY4>9Iau6p_nCpO@bDhx(4d*`|^EXXDui`09X3gj%W5Ac(_GkJ#=i58Kxtncqu@K5GVyd+>8aBeVf|64)ERM>s@~Xq-rp63N 
zQ7eT&5(U&?le*F``x_Z5TrYHN*{AflVVu29cJ7SSa#8`{vwL$}<<;qp+rH{<#C&?N z2A< zALOGiTC9?(v)03z=>-YK-BjnRJu!wkkZH-*K^^m9IvROi7S9U2BC#8L2d={G*I@gbDl>`4Q?Psz`_8?@E*|w z8fm0JK|sI-5uhM2fB+2daS8y1-y%o?ag87eD3JhK5E2QHc8wGS5!VEeN&<<#fGJM_ zI3j^fbCpBIn4uccoEhx0dCCnk$s}Sn5u^ml`V1_*3L04*2sHtrj$mOJ5C#FJl3?P& zA@=|vlK_TMpb3yC5=gNWP)rF3x&TC<1k0lflQ&#*ibNhLr)IzaqoHI=PC0|_^k5-U zW-E}(3<%U2Ku7>B;XMW^MZn;u5CIqt=;yU<*P1Ogxo#I=A0A_>Ys0bKxFqDUk}k}N4hBnS~< zBmjkGBiILKhC>Aq(EzXs9q*XM>Ikwh1{U5~s1LhaZr&0;zkT99UHbNXs=KI%CKB0* z5MU5N2{3#CMbaAhHK%Fw{*S-$&Tn)V@34WqzCy z^Q&Ky@#Lf3#asJ7xX(KM_Yd!1{o7x5XZ=k8F5#cT#aH6ifZwJ5Rp9)`{+-|W>F@l+ zcYXbX?>78@@JIg^e&{3khWq%UFS>*-U*oSFzx6l%+`G$P_*38e-~7;>d*9JP&w;=6 zCNBQ+-}?){^mB3NC+E){J~glZ$~T{U%lp3nC&&J+r{k01>wor_->t9x_+k8~^V5Ar zo15<)LqFU?J>)>QzPPjeOCJ(H|MCC)OF#Z!{G0#G{S$xTU;jk^KYj1`%)hekH}&C9 zd9clb$pbGB$S=HHFErFONtOX!186{3Bp8%( zdGWY(caFOAv$qcckH|yrQ?_)gk1{z6-Z|#7=6;&Ty&?{0%hdO{M#8xkxy#+;zpJf)Vv8?Qc>DRp`N+}YKJn_V$LASnUS zNY+qvr*@~<;>Cr-4+Wby-<+NM^SZuqd~202WkHV>~*TO4lMtsCGIO$EEHF3#iVIQabVzAGHg{T+ZV zPqyr6H{vGKX6i3$a2rs|CSn@`YI4Fx4w64Olm;a|zYS+#O+H zmrkK6;?hNz2Cfrr&n`189psFru#`-FI$p5^9$$c%o;$XAHN-{RUd>k|0Wb_g=CX&J z`v!z0z=42=YQd|6mTCQ(8#tCD;1Jt0(Po%|2BJAyGZVoI1dxLOc)V6V>AvGOu6F6~ zKUJ?rfeWQdpDm6dm!Ek)1gQ!|fVd_SBqb6^0C9~VaV;nUMg|}cTw}sDfe6DBfQTxP zL4v7cN7{pvJ{%YOiBj?^Xqy8t89-qcfv5yRXaK|npg;yxC?E(BWfn*|p)0xu$F&Fn zU?PxG64>gtDI{E@NB}g0bp*+pXp3~y?C9<(!Fjvc0@Hq)qs1h(2tgqOh~JV8Ff)Ro zaLAwx1yto3P$oepRRdbeErAL+1t0;SEIkiA$Va7Vx%|_%@#+ctk%Y*4Fb6CT9?PGF)nsF zTdV?9XB~`l^Z9a008(pdZjGEe13K#xhpO1Or=giC$&d`I;me=HjbZ(8w|Zw6%qn|w zJ398>6A|^Gn!P`6uYyOr<(pIf4a{ZURYyc9+y#`#RBBKY*Dco14&~l0RivvtI>s7y zHLn7;j(J^Yx0u7CQuGO}wbSGN#ryAHE909cqFo zO4*@q;Jx?1uhqu6G05C0r9e~NAHkP7D|Mwq8sJR337TE*P6P9@M?aJ(t6Gw<@o}@? zn&v6ZHAPbDhJ=#QvL-;IA%FycYa$gcJ3h!Li}bef$;>pB$p|0)FxQC;<*ew-Nta(L zilJULO(P;OLf%>z-ZQn5_nqK~QO#}0>3vHrZ}jI#=R26E=@Z{~ShB4XgtP%G$gaM>Tc??uawg(~{0e}*rpb0V)(B);Kx$mWj z#Evw`DMUAbc`@vX*k1yOP$OA}PC?BLgd4yN1B}RlIXFY6Ak08y6e2-8ry?YP*YF+z zAsB%~2owkba1()$5io*~OSnd(M2J?*$mRmTM9|96OCeUxKy|YKND)9=B$S>fDW6%{ zwwkIyAxNRvkh*8OqfIxDWuC7XaRb=PFQ>gzfE@41Rs@=o*|B)?+@jtscnA%mTh}^| zlQwQFD$s=iq?tLg02zeA0fMyTGEm5Xh-(HQ8wfaIQU^?+!}A;hDF^{*HUr?Ml#w)y zFQw3l0+idSv&X?1E*HT<$KnQ>$7%(yj2w%X8^fjlp|IjzR z_U>!t$s5QhnPj5!y3c6)i-}tM4?P9TgBbUWnBedVtJ$nO=k34oqR_*8r z4fni0_>u4Xul-{``}@E3^MCm>|JtAZ+pGE`QNBCh-RSC{aI+noGif*HAO3bez5Dk* z`3ryk;_!dgC+|LOWJn=WumC$XK(ovYq}&$ANJqMQqsamkU;t#0CzJBBMZHy8I$n_l zT9S86Lg`}OuFUdMv3b_p;e2{+WiQ`n_c-k;r<%vUzd4)T-Sne% zQXcM4RyUisF3U;tf~N=TBciE#EnuJ3<>waG+Vue`Ril(Mhl*`y-N#paS{ z_*^3*t$QjBs%^1oT-vi0zFSXrIE$;zy>xp+y|tSX_p+^HmBo|Q@k^MWwCS<3$xMq= z;E*m$>W+rx+4T4k!>=d(kND$5QbLxAgi-@ZAP9SJU9-&J*J&sJ)X`z5eAGz zz>pdwqy+1jg*s`!)bj&QH+m~+dgkbEm~Hnr2J9clKi;Z`CqbehXM~IbNaC7;0Ac~A zmA&+W?dp*~^cvG;U|~4vIh~lh4i_=*-C97_lL<%xAd&zHltcr9Mgc}-U?{lnW9Ne?a*c- z<)LF@YjF*7RVTrW_XvO_Q)EC)1Yrt@FhfxwqEUg8QpyR!H4-U-2@MKG$m@Zqi<9S8 zsxVhOWy)&&1?1hlpDDns)&n`qebtd#N0KaH+7XH+D}SU5!P{+9wSzL3Pcb z;XLi9Qssi?)`ibzQyEGuAMm*k{GJ-~s!q=|c0>1c$d5f@dsuT`4SnRJKqe33cC7Ro z@NhB6rD@eA4P{KJwWfIWK!vd(m$J{^z)W6)}XgJpaz4IStwsZA#1QX!a_q} zloWu$cC@{F8e2)GoVx`|u1`j7cg@DCMvwwP5FjZa1PTVgL<3+1Ku{z^g8|{1g=-Wj zITT1nBbA{ZaKuiarzJtgOvn|22uPxlGy;T~pl|^o1c?S90mcy5hO%InIRG2-Vh(Jg zIZQATeKrCWSg0T47waw&lnkpNN=P{@R9 z$^ZlbL5Z|t7E%BN2)HCfG5{&TfKUMZzuAM+nC;WEJnYADp4Wf5@B7)_{e5RVn;DPC z8_*bn!F6n6gjiCVV5$(vLO_brh)^|jkswLafVxOYlvYg}QuspBsHnxo#GwqU7{0^L_nIe zd-o;GZ+qX1_g>-M=M!Ffm^IA)@>^PO$2%WWZ*x(zf6E|lAK>N@`=_@h|?#>(AIe{(t5a`0%GrSO4sM`^R7VftT;Ee*V*| z!>2#;uinB}d?AS6$DjTQe8)TS)sN%zKJWkX@n6NOKm1Mq!JV6z59X`i^l$!y_aBwB 
zC;gxNyXz<4^tZYvKKA$j&aE$f`*%P4ZSVNAUk>DNyymZd&jGy6x~}#ddP)|1VKm+z!9`d zEqw2=U8E24^WlM{#8CW^OK7g&x|;LZoVD8=wZ7RM?52k&b6`DkW1Cd=?Sb)9gF`m8 zuvx@jHcO+o&pzDpY{tW@9}hdjCnLksOhOf{E9h_DIy~}Dj&ax=kM~|{H3A|`D$;Be z+$2GQI_r8|eVU!JIv(7;z4K?fN5A4={bLWOZF%YPVHfjF6xGq=wdY^hqUGRz^-b?y zwkwqzm|nH@XLhYkHeueguDg1;U7y=9OJzfJmr8qnsX05W{)=}AY_1d^d!M#hiZI)G zgJRY$&{kQqTvU&<^+|*Nu=eZ4{dF7j+375YmJpP>(%pL8y1V@=C!&TK*1C5^TT#Q- zTl!*oG}djMw^bo1=1CwkX*VwS@9eu5Yk%f-JO}J_H;Fa4=R${Rij0s-Knhj^AFGc} zPknIu#^0QO@>hmmCwl^_DJiTH?zy-^x6+>4ZYtZpKDwXF5vML*UiukzewNN@sAmI= zvc#OIQbVvtgns&9yff?)A%{e7pG8JnK&@?l%&WVIT z2v`!Zay5Y!yZ|@3KOZ}SFt7r{7Qs`rmh-qE}m@^2L7RrdNaKT%0 z*^Lde;Wr8#J9 z*--Q%s41pUO^nkL%@=EeBf!Em&=KH6iJlg%f)F>|WHdDj?NI8iYe=Qc}D<+!5S zu4Qdg88JANuB0F{c7i2FXd;YQw!&V!Z0yS1U^+T!e6nMk&ow#2cbVgIzdblQjPT?8 zC)?q~t?98Mn{D0h@(eIkN&pBoRK3z29+fN5xaNoqTe2Lsjc6%|5@Xt(V_Ty+7~gde zH`|-Xyz!DvxsG^Pc5~?R>&s@{`r zR)3apY466p{VUJ=G@rb@e|EZW4g3Ds%6hIojZMInf!rko6rvcw(#Z-Vyl#j(TNBnf zbHY`;K;Nr)698GzFY92RQMoEfv%bV;n+L{#+9 zqL@mVLp7R(`KryZahetC`bE3HN_m6;a98ZSh8<7a6K0NJ6&)}GKtLcMC_z>1CwyV6 z1+N+A1}_-KGc9*$3((5~$XHW#1S&-VNddH`fmt!QL*}t4WLcOyZQd!@WBL$H04=Sj z7n}}UZ>r5BWg>;oq99TrD8pw72G9tA6p#WX3XlYVsT4$jAVEShL}yIckU-k9hZW2- zvMQkf5hMf$5-@WUq$)t^07!x$1O%`!E*g%7JD?R8sWIz%q|GpZPB(y%04f0}B_Qbp ziAqSIKoA#bG6-S-OhEW7(FlW5DFF_UNL-)*0WiSf36la4tN~!nYDIU9DX5zVLu?b0 z(6)f}m>_IMkl```NdZiP0tG=(iUf@kAaepx;zA0LEI^|-&>J9J5Ga5I0do?ds$if| zP=Yk$1tpQG0$6JRAvYwDDFY~`0n9~1c^hLQYnppwp4bVha;WQfH&+W?Kk%%>M2x4e z+!ZDRK$CkarSz6WvDsiAmR)aNF5gHiI&Q~>nJt{*N_un>C}lds8KwlPDM>RXEddpY z3J2gtQbU1hLKtT&1!Q(nkZJ)0EF~H;`${8=*rbyvFi4c#3MDXME|5yp#HJoEPF`=u z^>70*K6z5taXjRBKSO}Pbcrenfpu2%uj~nL|Ed>`Z|n7sM>+v3vQpl(=iO4DJVP0h zFch*?SAEqr&7-9Q=F@px_GtxmZs$iYzwMo?$KLYlX8KV1_+Rup7*W~R=%bC zBg233b$0E$Kk`*uZXV{@-do;vw0-UVbEgj&0IZ_9vX~qDg5T=yUOqivBtoHxlxD)3 z(pb?PU>LoXv9z0&m$RMcJY*W>z?6hY0d6pux25mQ3ilSxqFJ`<^{W$R6g6*kEN*<^M;^qr6JFx4-QAw2 zGz?Uw4nfhz))$K_d+^mQZ#B+kK0on?6?~B}A)zz?4hf18-FupAR9{fz;a$$dbNhZ3 zkA4HrfBOEtJE*5mUGL}BEo2U(=PRFjKJZvSKYnX}ajozYvgY|K^TWh}gS+;$G>gTp zqwT%>)7f0JD^S->Om|n$Up!}5^idQ z8eUzc$dduqtf}f)I$XHGz*r$^#v4o!W7&(oygWgkn*y1(BQhcc7>h%Y8v3%mRI{#Q z19+DUXQ&;KnCD4kS|*6O07{I2wYIoK;=tA^Y{nUA+t!3(9x9+v3JuN3IU^ww1`4{+ zbd5KyA|q1(gwHZ3oNhpk?F4BeM0o6#=)nL0AOJ~3K~#VVkV?1Cxsc|-AQ(u3o$Z-8 zYe;2eO{Ks$0HO#Q8UnIv5NHCW5R$MWYl+fz5(D$hRFHzh4P}NZrXI4cg}gaT?lHH( zd73d5JF1@7h)6)9gKh<2*eC=92>^*gFp*FaP8A}-Dx~|eN9h}~?Iud?kc@c@t5Hm0 zsFEh{YQ1&i@|A1voNH|Ax@k0@pAXxIXuR}xY>I~kgV(I{aF$4u0Tcw$0Xjs2N~;Me z34oy6(luXepW63lXKv2RCraJcD^@ORHDSKHI}3Gvzw8#9@RPg8yWxzbG{p)y)_Ke+ zMb(4^fshI6#geVi;Y!iMdvR5eph!_b_3dO^X0xo19jWgY8+98Y%O%V!J6m)6(k3=r zKVUiPmgy~>c97BJh_GDZfGE&jg|#_0m@_fY+X)%4l!*j)1#?=PwkSaHIXFk|12-M$>OIqXYTIo39%`{5 zI^vwTm-2BM&Jf$lveFH7e3k@2Km{PN3z&P~m#dx7;G|%f*^DfJdU^rDbz+_pyHq6z z31Sq;jcFDFrIkaaJ1DmDJZ8Q&$#*k?GG6Ym?07u$;kx=ff`p6!C=j0|$Uuh)07W3k z1YPumATa?E7nl`NnhWNT2?A0CAW9Hz(ZPDuK%I0DI_P-T@xcJ!3INTP4!I&BgiMGc z5SCJAvH?3oHH~4#t?v+3X9HNn0noevbiAQP-~s^%DTr`hkO_2DLsl|L69piJlt=+) zqCtrSATdGX1&{(@Dg*-|3IZe*OE#;%l!h9Tp{fuv(g3MRz`f}J#n}i(R=|W-lp*m3 z0+1lka6tk_|vbA`gk0@ z^=sbyzSTRv$A9}P-*u@4;G{FQtEY@F)JI%0Jarv=3zte^tPWh|p<*Ca)uV^N} zHvZ1Z=3Zspy!ma*G2RsGm`1{0#**FMY^&mTsLIWp#>rK7u*_yO?-!R2=F>a7I%#}L z+BA84t4A-z>WM?0S$**E?n9KQp3NpbeQXL3JAZQ7j0bI;Y(92J22sVzqS&yRBVzP~ z@3o88nC>~yySl)@3GJlP#5&%=u+ z%vcv@%2nF6?01_ksh5!l_1f`5gD})geAVPqI02Dw1 zk^o2~0Ie3FFD?%V=81tBo0Or7q_F`Zn+@JrYlvA9f<-naHHxI>7LE z=I9DxEvvEC#8gQFFu{!Jr79N9{+{>LcF@Km%2Q=^xmldb?mWGYD`k0B7WcN*&+Z@Z z>RPlC0x(5PIj5+af*=qe)cc~vo(#2#M)Rs`yXECjnXpcr5H@j~Fd@8;{iTYxOmhf( zZU)~>IN5A!POjo%N}(~dCZy(M&Egdcb7$eihHriJ@~UmK>F_J 
zyuZ&o6Zjx+4Y<1GaKM*=%{qN6IRP?h6;KSoEV;5H$j%o{*^^i>kaMNY1juDtlb9K2 zyHpKl^-PJ;xZKUmZwUR8Cq0itb~c~Sm_rsxP~_6k(yH0y0Nkn}voH<;dQ{7aO^f!D zD;Mr&o^=!T_C(u_yNY=Zt6`vMGC(Gnm{8)gTqIcBE;owCE)o5bL&u>KBZo!^PQi&; zl+6UC2^4A~luR)M2nGqwUc*(#t?b^AHtv{8D~ApDTAsu1P2tn1B$OtB1_MA5DNtmA z08dg$2sF@Sf*}*AbO4qGFsIz4%+PFr$N;2-l-_jkRvF64d)Qu!v9pJhYzq{?xYvRO zF+iMz#LP^>kc0#Y2{1(wG&DdeM*wL6Xom${mUBY88dVYsVa7R;f&(;wBqS;mo^WPR z0wqWUK$9q>Lj{&XrsO7rnP@~(2`V9gxS+;Vnr<$o3X5Q4u?|H|qd-}Lv|NGA*Z?&` zAlV8D4~mizAS5_wKoThtf;&h^6a=*n(RLwb6^s`EBtXIlK?0}(2uc6~Nf<$=3=BX} zF#`%)M;bGL6bYqpL4afh5FmnNDg;p^NOg_1eY@&Wd%&BAuUk$JGl97Q0~SC77>KR~ zOR<83E|0o5-PPV4-CM6)?q4QkB$a@`MiLMJ0i%f!WJUy;DF|{e0HMMJ64{W1Q&cb` zvlBo^Dm{^9P6I{=1{hHR2t5!5l_&yKXVTIL=Y`xto@|uU<{UBp+;Drf?N+->hpSDg z!zb6Ln(7#$1X5MynML;ReE;*?Gkf7zR9jJGQ;LKX;DErSIupEb&$^~7F|8`N-!<0H z9W}jgI5_^6XMX8ld-Th8?_a+BFJAmlK8AmCI6V5n@5G|TBUf4jtibyGcaC0s3D15l z8vI@aumZpEi;w-p-CM8z&ztZ1hTs0u%g~pA_<#D-{}aCU34GhL_`E;tD!%(Uy!wOx z@HcONe7N_#`q%s0c79!|wHaDmYxvsL=KL7_9kd5~k34bfk+0jo`vX6Fe{5c_k-IC) zX7S$Zix(d3YHW{ndY%mdGt_CGQb2h3h&!(ywaaJj-F?q=_8$&IdtkGF@jJ$o-Sp`f zs;=hT;c>Io6X|Mu8d0*5R-U(9^8UmCGjIbcol^6xq#FtvQLF4Lu0&ZXFty zhPGX*lu0)=-`Nerp7qY+ZF40zx1lC@i4X3w0*)Atr!jQYB5kU(-&nF->2aR-9?T~- zA+;;Ja6+=ros zGN#%++TOTKKa9xro$=xMoyd*7_Q}2Px;CZ_q&jqsS-S+trffvIR4TX_6MB!gm|aJr65v#L0}tX0BKV zpN2_D3L#wB2b%qIrN?^UiRHCE_t&@`h_s;*sv2i`Q+uiU$t&&gxS@>*qlgYIA>}y1 z$i6|XF^gs_&{#1i44-9?1mOY++s9>L-r+RBKpF%|aKemcqyZ4DmCRzCLaLGgApuZE zfLtzM)?XVH(6&HLoCi21T4UGt6}84@hOrt60SO_KxFCoLC(@7vk`|C6TVvC=3e_=4 zBpOh#O2FABkvS)TOyPnIkenbGLb`Pg8hJ)ZNkAfrN;6v9GBabCfrub3q#&e06mh}L z++8VU3J?PTgAh=RfV&x)smu^45fCH-a3@TtU^TR6i$b+lK+M?82xMDST90aV2x_1R zM@h0&W5Aa36;OqVPVRC$v8oaQ7(^LhMi8C`Sye_>0HhHB377x~0%3BgY26yP5N9S( znTgt|TjrJC=cga}z-52xHT>GE`SZKlZ^tta>}FZa%dX+&y$s9UE^~?rbse*TFn|W{ z*fe0Y^JtN?`Q#+rvtznzE;$eo46MNzl2Gq$QA?UHZth33y0Xo~_4&y2J7+_T8RXdI8-v*96f*B`D{hgX~QdVapEXJeYSk$}Mqii2rUIe}TM z4R29%H69sPGv`_NDyIpGm>?j)lt38^tUxS$_RZJVG@F-bi@FNVIY3h|!-?Y5f>R6B zTy63osh-D$!yLY=mIP#!iA;{f1anyLVkus%Z$PsRILxvsI>DM!n`Tt9?xu(;x3p3L zRJSa}V5K7(vs!Eiahj%z$V7%nq4Z?y4Ehv;M$3fU&G!lnmP9&R_eTNl7cQ+xy9u8P z@ocx==1{XJ3LqFD$smIXj1(jakYM1d0UK+;#)#I?0%?{2SSxFWI2#E;0-*?Gh0AVB5hMY_1sd5}BXe26AWcAI^%Ox6ATt3(0tkixMk`#X0s%LQmZGDyo6r-p#F-d_Af<`CcCZHwT&SsgWX{w1d z3d9UIwnj(YA%i5qbf7p_ErLYD?F{W~6qutW;3*WrU?>Kd6|$wUL@6)ezEBSz|Y3 zS<$TwNDyS{p%(Qx-N?@P21KO;B4je=#Mj2yer7)JKly}qfou0%!+oFX>l!OCM%Wj0 zb@}!`_WUr|OaHsn2%vFE%*F`x9nvw6I4JUiZf)#FYm7~UfX<`YPug3behcD-7hn7Y zzTl6(xsU629)7C)>yJ;@-v2GXHTK{1=Bpq0;y;42htCD!ul>^(=U@2VSNHzLhyVO# zyyH6lpak&m{;!)Wzwzxij*fojx1k5O?>zB0f9T)-(z7Lg{_an{{xd(`wEyKhz6$TW zgdV@Qz$+jA%s>8xzx_Ynz40aAd-JQ`{#U;S|1bW`PvI*c#b0_mKJO1Z;XnJAc>M=H z^pn5%V2alg%&K{-taWahvTB&yn=GfL%<~XqZ_BIV?e2{8vcz^A8^5wRxOA}5QtP#y zJfs1*JmahmLzaOD`#$!z=iS$pgCE;mJ~{r)nt;vd7e$dfNm*xt9c_zDfG{notXk80 z8ij4D_bArwghi>#Wg8Q%t<90v_ms_Q+)UUFI4E!L@X+iy`nMihc493jgGrSo1JYjX zPt=x_wzI5qxyrlBT2}6LJXufSD^aa#=y7D(KhU9$A8I&RZ)=uC+jgv@L0b0Ot#cf+ zT_Dj8ySdgm4|qVv+LJN@?p(H)7R%NC<+AjQvfJ+XutG0&>DfM-xBB88CvNZTHs{Zu z%=HG>?XAD9`MI-`53ji@4cx{zKYg&vCt_10SZ_b~@#=lUJZQeWH?+gub1YX_F3M?T zzC65Mc=ZadKG8g5UQV|kzAe8~N2*1lv&$X0JUqI3ko7U0eeCYZjwY~p`t;tdxDli=`Rdng$7HoD_tGC-`Q7~IL-Z6{&sDsY##`A`8#Ac47G;j=%ynMxd z@92t+r)TTa@Jk2QB_C{%^R%15H0!MFcYB@A?%R4Z7$?F-fT$AYG=Xl#5Q!NmRFURH z6`cS=1Y`&hl7pQsp;|L%q*w`qj1ZB=Via*;C|CQoDSo`Yy-^I2!iB^I5ui2uhgNXy zU|Yi;j=gcIraSFB%Htjq2!4O4_-ni0e3jNXqL9?7f9y3gpA%)(9OS#yRmb+xhDPPDq7Q76(M z1Q8Q8${VfTvRv%9ex`hIIA?co$SeDW)$g7?IGG+YOF!&UZThTc(?fH~>XQR*3!rHl zEmy0yw7DJ5H>dZ#4yWR}!URDgC>RUW5Z?0bU-s6Qs%q(|WX;r^b47$3tTcj$ zQknrN6A*wv5E2OoU@Qg$VeSQREA%BrXF8ICqyRExF@uaUC+AEmJP6Q;#vw;3AUx%> 
zgv^wXl41s=vIG|`qO&2Pan30OqxpUTiY<*RXEy95XS-YZ+!I;Oh1i|>@ub0O$88Ix zW1?zB&IpP~;)1{sm~2ffo`wvX*fz)(H-)yfzO@78w#IV_jmZQ}voKDuunvGr1RyCv z3N|DVBdfqtiWiIIybE%60YU;ok{}6?0zi;}i4aAJDG{J>ApsFfqG>rgrV3Q1i6RpK zX@D$2s4@ddkhnl5L8JhPCV*jp05=#tU{XR7KqF~j2_>_V02~yQHGss0u=2Pyhl74M5Z?ARxSf1kD>5y|I!|s&VcaMQ({Y&Y4OG z7Zk$p5=@XO93@$VnFy#P0EQA23&_SvWJLmQfJ*@6AZSkFz|yOrr$ARDeapckSWiz^3zjd8=XW3`A}aG^)@;}i;nC$?cQD85 znMX{ky_1?{s>MYDnO;-*dacWDdC>OV94+dn>T%QbH~Pa;y;aHxnM7EUkefhKN>j)i zk%krw+8h~>G9v2P_A}3|yLZl99Ui_0EOztNqxQ7#>Z`9G7U!*7C9?)()eImRMiMX} zQ71os`M2vjU%T2~%6`FqbN}U`=J9r2`FJz$Kl|QKP01I2N!A(x$Vr$Z(^r6T-r%4^ zSuGkNEGqlv(79eC$Ob6o`Zex${pNIBO_AO|sLL<@y4BUMeb=3jHh=PM{fEB(&tUN; zd~W=MxO4mu|MTZw{kk9ib4PgFJMagSfZ-Ru|3~j${^=ilgyqApN8fbazx8c@gJWY$<s%Qkz#+v;%ABLtL`P zro)O?cF1YYVxr5Yt2+)mPN^-xmc?Wf*miq+xmDJO^?t&S#I?W6y``oDoRj1I`N2ls zjUFCOmZqO`R)tgpX$NE7F$S*K8xb8%Qvh|>EW5JT5w{Lk#TJW|cDpg+!E2K>-Sw+` zT@z!YyANL6PIW#xt&0y_D|v1I*FU}HcwEJ%MN{zP)2jhDbI3YaeeJnVAEURO_Ku!? zd(k=`+*5RQbZ^%HquVE*G!E1`bu!*Pku54lL1LEKu)JQLSeD~W8OH6wkZN_9UQrd&!`B9#e|4uAmy zBmzjy1c728R1yKqq{C>?1-iqK>W)HJuPo5`>BGAZQqv^7fdHX>XLED$MUCz5FFrNa z!gY4mU-L&Atfe~>v1Ofe9;RIqQYIuu3Q7@#2uMr_NEitt073#NBP2>7;06-zfP02e zV-}DU0n!90BtVpq##N85?P?a)kWv6B8fo4bGBs2%MhPnD7qGD1%#@gbL+dLTN_Vj_qTg`FQVh_O!Pu^OyzO|---;cMnR~OC(N2AI}y3Xjq9Fn-3hJ879D8-Kf@4ae^<8 zY*+`*!Yt+5vTozrI<6bWCEk$AW=caqAUy{d+yVul@J(C*NH(W+EwIEAfXBw=C0mo* z6;#vv7w~aIIY}o*ffa3l$&f(614V+X1b~VVf+9hH!2oCwzyPG6#2`o}0K^9*LztP2 zAwUWN0RkWskc>!_s4#~BhXEiA09By~Lim7$-$o+=(E&wdf(Sr_i58Rri4Y-@YO?}X z99I$(ATI@~xDvqrT+M3U&vMhGjZlWwRQAUDklV)?lkfonpa20Rh=iuss%7!C*#<_h zASsey1_KBJMz(}N2_aR`l4XMeA%T*B07zsaLlyEc$qZqDs1Ia-LNX8-!v#UOA{q#5 zjIf+Q(9l)RChSrwOyCZ1kB|W30}MfwAi79`1_}~sAR(6orWOzr6Cl7LLR12f6A2(C za^CUgVN+H;#=5;gl#8d^zyN0EMYG(;=F*0XA%)9mU^DMud8yy_cgEr2!?!QC>&Ze0 zf-s^nLI!e5fS7}Avi<$TX?8Tl-iL1_TlHT*FU|0XTCts@tlJdYZFl4B@Zk97q258R zkM3WLT-ot+*GlaRVliby*}75#G`T%Xrl}r?-m)qI2|3;kJ6Jb&XHn;%mvyP}@E>j2 z&*sd-Gea`-ZQtZ4(e2i+y_|u2pK9~yRibBHkzfF+6p*Y!cH-I3AL;&PT79L)$EIfK zQ&~SgzwU3XUjEFRk@)84!x|GvgT&+@6M!a1!~#?9o}KTj;AHZgJMuD`CdIW@Q4U4l zUlf$_JYw|AwcVR>_43jBaP_a=``nNI@gKm!Gx!}KfMtB^|M=x!JbLL*e5{9#K>U*l z;L~B6|N8ouzWSSjc2~~PyOl7-I{+to&bkH!v8x5{?;#i;%A>c}T-FWBj>K;P?D)_SW0@&nr$x)Gz*btN-@re(L>S8BR9Ul%}A;ESlupI|R!5(NrCx zGA4BP(kxxH^J%vYDBMc%m#(MsEjQ!qC+qVv4t_C4NPdu}zn+UJ~ijcXmZ07=;^(GUU&gynTUg$#y`9kex zxN$M|b$ol(Gq~@Um(#GRRDuXDdbCUI1)feAdaf#D-LUf*r`onNHHnvOq%mD8&-M@Z zxLMSp<9q<)_Bfx=JjL;GSMpNn@9kl+EN6d|8tfD&LK5g-z-M6E!;GMOw9A&SBk_fRO3k+7fvk}3oOpozo>1Ob9U z1Z5PNNkS442X+Xh)9H~*ZV&*GU~ZPztlfX++b)Lf>HBYnWkgD;w%tV^QGF^Pn5ui! 
z>VQLnpnymb5THziAz%64c z`BT)#J2R%K6U?m&0YHEN$^k*3K;xAFJVY=O7Qmu4&6-KX)Xalz<74ujtk-g{>hL2s z)3xi1NH^PiGfTeOMZ)>X*~61p&pn^Mb%|2i?6RRGri1;a>u|Oi))#A7UBdwLTuWny zTkXeJxVKIZPSTfzrWM;{qsFnEWJ8)U8q&d0!59~_y{7+2&*yGFf1dZ=+MJvW-*BY* za9zq+f*7ZsvW#8Qroy&l2-)VTzY$D#%jw~Gv4K5caaMsfLB4EOE4xS880VudMzQz= zc_TKC3XMiFci_yrh6`7ov<+^N5`adU&DihpGe7YAP6i)$7qc?%fiW*#QXhiqWyLTm z7tIseCT$x&b=bz&TV0H@vGy)&QroB7Nj>!hxF6tT3!%&O#3>`lB(o3) z3dw-hjI;nE0uj~)+Z5i!Kqw4iNQ_7UW+_P`Cb}?~J(Ky#EMMuXrqi{!)tE@NeUIG0 zLabD7o^8?gIdV!NG@bQBlYiTWw=rP!7$6O!L%J1#5z<{E(%mT1Il8+`x>HiR8@_-b z9U=|V()IHE@cs>Vey;00kHf9Asmb_J(_AAS$2vvIR=SymqLf2kuc0>OBEknYwC^6{ zdayRJzpr`3W&;>$Ch%d4ii^i!EM-MC(#(fo`~qH+Athk+2#`1k&WsO{y#WLQk^!o5 zNV5hrC_X@fKUWomjom$ht4Bgt}){4K@&I~m_Nj#NqxF$~}$fVL1@dFsHgtHm zk_$RvfFW@JW`hy5IF5$ldAzIQ#Y>C|Vh28tMc;jE{m$p(%YmmmE5DVjHRTvR&Uqg< zcgIFS5wmqwlV+$4l0zw4b~MY4!^ z*^BBvKjH-oVDTLRgb-$l{2uS!wHJEI3z~^t=I!Hh z_ghJKaN4Yp+jd`0Jm1J9p`UFlIIE9ei8))|`fxt|j(r|3`Ev380Ts<`YfRn=lVW9}ae0cjp*Jac{B4Xft|0B$ z;Z~FlS<3aC$(z+I8Nt=4Ay*Ldc=sx(z5m-tHu*5U>0cM?AqtHoY=YnT0fZ3Z_i-tD zd~S5TNKgg>uIFO{W8vt5BEVF3YngbkA98XoG__4uP z8yNXD45m&CpTiDrm&FGHB}lLvAn5P5kofdg7Vvab=R=dp=rqY*xdei5BLVEu^_3ef9weR^}5+0jk?2|eg4ChpDp&|-Jt7R%5R;`_jhmYJn_cLn^a6~ildw- zqU`<~Z6Fsgp|oe{$GKG!bGX?!%-Q_AmdI3-ll5qk?Y7$3t+zQJ5;@P|_Oh>*0 zW(qG~1*(jM(9>qRNC%=Fp<#1au!qGsS!2Ihv^>5HdwiE^%&B|yO?ZB7(6#tNvoDR^ z{OPsW!|G0a@kA3glX6?-G7ASorj6lj2<=cD*es7!$$6GC)E*u(XWJC>Qv5YqkLnZ3 zt|on7U)cFwRscWOiu@SBpTMTxP0M)m zDkw8p@5p3Y%umu|`rT6D*ad!{kS0U}&zufPcskqO9=mBfF>9F<*Ri;?cg<}Jg2Ame%0j%Q3Vy0#6R%O-9 zXMHy=1>=$i(4cfHO2!S%nxubo-gn|HQW94+Az|wQkVqh>kkIlCOdynzi!cvB9$3U@ zxcCQJ@Ks2IS)LKSoVm9IfDZ+L;IGADq)7hvNG~GDr^c{X1t_LL0691U_8|eTpsdCb z5Ag4lb9>b~?A@RySe_`aZm;2frHg}0D!zvTu!2_xsMCWQuq9g^jtcLD7*SXX%o6A* z3L(MU*a`RmDRCaADA@!>7#l)Y2|*J4S~?6Ut_sg24V!yuNRZa|rGLhu-~G?oOi)*W zyYr63bIbB)ymQ6wCTU50D&6+-QrzBeDvf9pU;r(UQ9>1IL0<}Hw2e)Ptoy+~__yUy zsY~8a^zwMak;2W*b%-;zZK&D4F6PjF=I^@he}{CpNPif%$*;pulUw^{a$eRergQ^DMpZ0d(ZYA z`^Y3p4}u*hAiS~xr^}EYV7|KT#>j2pzhlyuOxaN8!^?T;m&~^Ytd47P)dH^4VT749 zb6`WKijv^iSQ+_Nm=fE)wXtWuqJ>84^(TRp8p=NnpA-_W1HNKPWq< zENUiy`*nr$zm3JroFA7mHP62-MepBiQFSQ+zC8D+%->#2QhI~Mu4Wn^wsxuB`JCY; z?X>uvbUpOta3BI5@1}G5j!>V@-TwEryfptEowNKX^Ze`OO+1kE({9<5Wp|*&iDrT% z$+bJ{rOf0aCGJ4X&KQcG&)kC}-TQ)I5#g!Iu4Q2ORnEcEuVw_Hu~npH<0ma*?N$0L z^@9lnb~<1UG$1zt6DG)^U3B(2`J>f0^>E!5%jLBD*Yo;p&Z4)bAJyzwD5TV5Udd!J z7`&6#-dAr+FR}2u_U5x}YjW0EnO*$kD|*nxRrz>U%vWW)HpG#ye4riN-;h@@@rQj3 zYwg|EljCiT9`kh7cW$f2*zyp<@AbN6{UZe<1hHM$-Sd67hZ-EHG3xS}%ULwBvJH;QUx(Svcgn!_BGUEz^12xR&EM7y>*~mV)4^v|myWHa)=~Y6{0=C+ zebKM_DR={2Jh}$A({oC$J{c-b;CP+zm-d5Axi4CkXfs4^nxgthEFz>s(}8VSK@b%S z3}*W>q^nYnbq7@N({}7t&d0PaF`o^vISI9BTl+=t#x$qTrXbZZV;UI%q+ycdbU7V7 zj8mzLfv99GN6ztE73+j%-Nq5Ra9I81NKzt&s6xR|e53+e08FmisMrh>qX!O$10Wx; z=@A-QifU0Tbmnj*QXD%#Dg*-=fh<1&pn#MMlnCXOS}W6^~cZ z^ndr9J%ESBZ8Po?Mb*W-$P@TjtYv<5R0_Wom;<61(Z)*zfRKgAVK@{x6~)U#s6i`y zNI}9+ittA81?D#49pY8FS}(+i649u=gb8IM*xYU%2x;B`83a_v-c0={R<&p6WM`Ul zVYQuR)y;SOfFR>StO-;V6+3h|Ffs5dC@75G)}t)Dcn~k}#1WQhvFVvD=wYm2^B#dl7E(|U z17^`iKtZ$%R?1vDS&iZbpHs^Gi%Odw^BuLNX>NTL1-rHjIR7);)856qrq053Jtfhw zS@S$O3%oaUv*o%XwUAe5Op*%vPM++R$R^ZfP@F3Els&m}mwGku7&xu*e9&g-algMC z+POZy=JGd0@Q06T!GP2B|1?TpI^OVBj6D)N>{@ESYSW|A*fvXTT9&m=oGd>Y)pp_S zHVfyGRV6_{F%E=Qh1`#IuM6V`r=GtDe)(J+{rCRh_waL8Osq?MsYVRi>|HBs*J z!k1W;Cf@B#QWZK(aukT<^jH#8Ob(yi`m1ny@(U!rwP)qOZHOFW=jLCmw6|UOQBm61 zYfDNK`}r5LFlJg8(>ax0CdrkpBHk{J%skLY_A{@hHt9zy;;V=@D!cuIHHYSwa-w8}Ne`K=TF)A~Z8O zz}~Zu?^bmw6DQeXtsdzCL0^)ejr6ku_fo=rGgO-b#^ZT%E)iUD0lD%}K752uSS=hJ z4;m*knC>r)Crxqc8Xikd7Er z3LOO*iR*odm*>KF2-u^aP+#X&7uIa%VJHEkO9jimG!y0X5$T9hfck%cs5d~CZRC~~ 
zVweCh;(`kQGQfs1hz;jMV@5dxh7;nz;HI<)R0O?v9Gtlu%=^(sE~C#l<>jh#Y_F6E zT=%utL{Ct^^$@`M#P)V|$x&tDxT&p5hS2^$0B#a~5FylvzFP_d1_$tPz!s{tz4VV9 za5SFBzceq5)ntd4IMT8IT*DvRVN^KgvQuB{s_!bj?&J8@Idcc`vM7kQU+;JC$etWV zAkE}gNXJ^K$Pe?E1}S&lO_m)x6b*bDb#WYTVuU(XPNQ z^mtI-^>D`iJU3he6PT|kkAMe4&?x1&9g5YP4aJ_NW8V+{8B(#PD9D>#kO$f#q&{dt1Pj z&M1EA$tvYl-$5D2Yk5)}l6w1#zmpH0hwEp%&wE36Du3$wOz7I)7uF0t=G>#7bUrHx z-_s4PCZ1pTxouVWwbdMbD)JpW2z~w&+xhS3^_mhW%ekjvlVO(AVO4#nX@Y4L^-DDk zlBLn*zh1Tqo+=iSgJXjs8BB24o{u}2X#~Fgx5d^QKHh?OQ_^dVVey%mTOs~x#|Op3 z$fP+`-NXwYGu`_8Ivx8Dkw^}auWqj1q=svs9gcj-PCjKHQTy8UwiS&2spVXrk#YP+ zUVTk6kDi*TrmyyGcg!YKc^ zmYu08zE>2)-2V+8$x_dL{#D@cp#+OB@w>y7t>WKrR8eC}_T_Z-B2SW&R2|PZPquAi zWog5;a^<3woTBZik0L%k<)x|b-^BML&_9j6H%(SAmr#GJxFk(!<96N0H?F)^ppW-e zAg?`|K-^rFCp!87qFmQDAEQw?wnQ=?fwp) z!ZDSJV@$_8(S>Nbes(FT0N@BV%p_a{N$zmvKQj7WhjrKv$u|&3?}|gBR*VUA-iuHR zDy8S@L4kxJ3;tl54~LA$fq3wSfgmnDIb$FfD2PrmvZy*Z`^YXj7mObcr-!5kR{Cxr zX(7mv*Ng#!O(SC+n~srZ4^9`|zl=s3{P)$V&S3!(i?1kgvBY*mrq3=6zev z*cEemoupvkVKWvdE-(hjOtTQ;uS{RuWE>=|0(L5gp`w6*NVq>j>?_sE!Qs1_$tLqG zxa3Q-9tcV3^}*zl0MP(=bV95wxERJu)4y<42kw;7TxD*8m;W(2{5nsg3lXmN>iqfc zJU{8_cjMtmt*~DM);xmT+xLF%@)Pxhi`TR_kl#>Lr>5hD`ReK=P_w&SKc-~h9BC`q zjPl~Ou!;5;50Td%&yO8Em`>4dCe|)_H>RI6L%4jjoe>6nJ!BXDWpE-_M;M#-iCgXk zH~ND%*PM5rc6Y&my8G?P)w07z!S2_0rk7W4?LXRF-Osl`DxBJs7DJnXA3bi~JFM+> z$VAAnkHI;)8S1pq>36rtDAV7yzNzjyx%WGzn|!n#yR!B@{$kx)^LR(>x7owl!GGCj zdqU_iRApVlQLw+nve+QE#I|F9JA8TK(Q-R0AaWD&IJw*Ud%La-t7PtyoUUmz?Cr6U z5y^W14exY<)X&ki=?>wEqavll+f5B}*X8jlekruq_Mh)O!S*$O@Akd&Q2v$y{#?QnP~&}9&`ieR`HL_O84{0A;B1}5EzI3<=-GNrh-T z9?Zq0r$+Ld7A|3753&Z!e&RdrHnXy@V|J(* zYz7HqJQ_$b4jxbYo{Qh)ZM#Pm%j1#p{94D_3FX#(Nw>ueYaA0Jn{IJ+%Gaeoi;LxU zW_?!`o+lFCEq`&AIyN^~+ZUR%h@kJ7^w2`aRxgXZJx;tyQraSaIX!<#Z24x8^@+lq zHuPiGT}8qA>4vuLNkC`%x5>ATM|K$w1;vM1bW?#AW*ZL0jZ zvP&~d>KK`2TeIu;g{)(0pFKO*LYc0)QOs>jaZMhwxHxB{sP~fB(IZJ${>Fx#4tQh& zyq^e{kK=uWE52*=yXv7Y!H()2@ow_AV;`ftPTAd`Hg6~|gihT$ul7O*nIdET2-oR< zi~nYxvea7yk>_G~Zz?_9JsNuXTt5w{JP8`!dA^Y)XLq!BjU7s4+FN=*QGa58KGE@L z$h=EZTz|P4m)@KC!QiduE#C96jF{WzL+a1h+m|N~sbbf+?cNtB&v;@NVx68EOqwY;za?b{_eJxcaY7tlZ zOYWTMgrN-5oZ!hMba9@Js7z-_8D%;A9IR>aNO;Zgm*ZU}U7QfAQ2@m1TXJy&H;qJG zuJqn=13_tRapqvqLarR77p31Wqd1C+y;o{ZnJMct z2JQMy#OIj7aQ@CJzT;!xrgaD062F^aTb#vVD|ID%ZU2)d;cmV2Byup{N1J#hhABSk zySzg$Lyvz46~qK$*nf<=wN(VS+Z`P#Kr?o2Wun!#Cd-j78~i`*tmYTq(3UY4{^l`( z(Ig_NXT9_ysWaE99H?Dl!Pm$Y3(pD9?H3%B|D7#=y@_p+WG zjurOGEOkf3<|OCMhx?X?ny#(&oUWtC%~X!yvFl9EER!2_yeIK**L44%1uz|tr)G=f z#voc2p6|TaneO_0zIQof_^>X(l2grTQ9iVa6Zv>?^|#J1;QLXN$i}8lQ`|4f+XMH> zZ|jF52Zxu@80Bp-@n6MuT-gQ$H(MZDFgp-_S}vCgfH#7MQMYk0ZTgMfKXGQXG5?RD z1KB9jyba5v_De4H6Vc=$+mR!6N^}DLf70wj)DJu|Tqe%Z6t+SgcZB?Y`+kg0$p-%T z;HfprV(2y4ync)RJf+~)b>c3n=e4mmY-@PmtKzeP=S#yify738ds>mS|ewVOz%!2&8L$Yh7*l`DS=lvLG3Kxv>RZ~*kpQDa((-ZW0FjXb08 zpBYY$<-Np~pF;8(CB7D82+|)F+VNH1eP>o5GB3MvNbr?3B?xF(HDn`=mYLa|o2O}A zQtgyi?9=AnFSB060Z@MffG7wuHtv@>hb+=fy^nAtqF)uqBkiMdM_ez6F?{+hbd;Y= z-Dk<-^DePzR&PQHgYm~V?`5#Y2!6-BY4{6e4}0w_FRPXrjWSadTg_#`it;kbH3p-8 zAVkoDhoRww8j>XT^4ieKYJibElpz%irAHzGq*56HthlYO0TDkWkT8Hh5&@uxh@-`$ zF^VIg0d-Q{15il#3+){sjvT~;;A(akDu#u169MClkd0A%Z|HlRkhCnoEcT1kghmkkT-)5#lp|B!!~qGZ2%c+6<@&2Yi5-pcXa=mhiWd z0pRne+a4(7l4OhBjV-L43Gl^0TR#*2fH=6lbR zY5A^chzi@(t{aP+1}r&@DUPTNAftf6R3jw9_NeJ?%2OI5)XU__5i}uCE{p{owvc`j z@LDwbx z;%&3Y+9KJy;wxKz;9Ir=e74v0+FIEZWwq|hIC;gRG=`}>HtP1IuK;}{UiyFWw!c~H zG$w0yo;n|y1GB_`R91wOjbV+x}_v8bo;%!@4l}VJ+m}? 
z*!mm29_)AWwbk=#PQ@YRq^~fYT5qw-V{-5H#je<0__g=HRO#)nV|Z)62Vu*bKw-y^ zgGE=P4AJFB1iOsrj-@xY!UaMkUd>ZjZfv`xqpRurYv_!4U3U*hN$aQocAr)ZZ`S>M zABKwzy=GqgqEog_(zg2d;}>_!j`owj&tmR}kuqYZWe0;y&jXvyB`1>=7k)>UXVMSZ z>pkVab9`|+_quQ;6dJmo`@Ha;8$%hd*LDBw6R+Q=^QKafJnX%l<3Y-9I)!+;>t{45YAvF z%R`ZT!;_(}?d!zbT4BF;8q$XiMnw_%E4ZOH)$QDJC2oH zkVPpP#VXuBT|@dyX;Yne;&9c1u_}*46gQbt+(yNKXSc^NlWTTnrtl|<^@zUOZPkex zC0dG#@%v!SjH^QGmh1T`fy&H@v)XrikyDL{2j*=BA3uvt;T_(p$uY!L)+FoC;f%#3 z*4Jl$6zd{etDN&$lIj>ZD34mp5yPNvG_6$Ov5@}*daf>odOfj8%}bIx60PPf z!|Ci(o$b$jD)*!!i+dBJ#liU9Y87J{ef+JIt-vvS-tD`;AF>9I)O;R?k3W4N(DTz) z3qHVV`da;V`|jV+;%*-fe14_Vtss)u_vm=q^R#@uOm)uOv1+7YHt{HXnmJXgEu|(j zQ^P*juEM~kaC@UjKq8*N*{*NFqFO$!@2hZ}oRWC=y%b@jdSM&`48jR`9mR{^P}o^| z@{x(L&nUoAlgN0?7&YH)>G$7YDU$7^%BCJDJ0uQ_C}r&3qj7h4(lIVW6AVC`O6#72 z0!UzVZ~zP32rf&{m`3yA*MB=#w(V^qhWc*wLL>Y?l6utufl;c`P%W<4Xk}o0brT-O z?qKMXq%?i+dln=#4XM}-2hhflq9A1;090ftkOToKvsvL3edso8|8ltW)or=pPYCpV zA_nGgcO@oRJ&n2g6Z|a<3IaisK9y^bktX$wz8k!^w6*WprF49o`gC`5|DUoUs-Et~ z&ey|^Pm`ivymylZ&o{ec^lKtD@-{Q0l8<|*pIzitUCH#%~_p|a5l6!d$C^7*e`79T@D*NflL%fCvSNNlD~t8PBLKwm)xMN?)rq=oZ_$EpXsj$(WlN^&?MnZ~OUe3JmWl z^MXRX0W`#@pcF+~-mmrCY9eIsMXD0Z)xD3rAIVRuY_%%$qhh)n^Ir+b2A0YiRgbys zAJCpZWnkroX|sapVIRdCC=|i0!rU=(9g90vl7!^AOIppkeeKLhh^%`!1{A0PqIqfE z>XPUW_oi|6k?ElG{(Q;xV_>|B`!l5`UntIkG;^TXVxs#`OTcbNB=-}#0-RLyR@2~D zqRX0HWb7Qui|b87y}wnJK|X$yK^&X^`P0Br|=XUMR_gM7i%eVE9)Ou zX3s0!h(ZGid88C5C2=G-iG(rv?jQ5*NT}|lrK`pAC%R%V3x=_CfY2ZtfC8El5~SDL z4;P>AMMwt72Jr^Bs?XV3fx;8so55Pf#1dzjr_@goKcXF&y|12DtZX9uOJ!{L_(iSuLENH_{T2O zk}~y^hhu@%GGB#^{=h7*Y5f|Dhsv_`PsPH0aapZ8ZiF;#q^j-H{cpK+2ht&ZETM{=IZrv0uRmF<0Qt_>fiy-&QTD1GjJ>-$|Z ziLM#kW?jXU_ed|t?Ayk8*L1nOH~&{oAe?%-_b_)-ezng$m}&hbla1eG?Y*5!23GxQ zTT6zL!#BTyb-Rn##Ib(Y#M;DEqN~m_Zieq>%LH9?9|W6+#>D&>{_QULU8c7k#_EfD zoXPm^wjFl8xJa{dym#j}BcEQMd7XVf42^xR*zLLsrS{tUm3pt@dm4T_BsL|gyLNeX zu^!iT@$S^(K@0=_gkn2XMAV9k+j)AJ^TzKW9q)wg+s&0Nb@K~8WO{A!8pVw}BZ#c7 z^Z-qUQZGin2unziS9|a$QABAtEQ5cS^PP8f`%SoI=;+%qJyeTyOELpkV>4T+D*4EG zTqtOO2KMrd!f1$dX*CNdrNomwD+wRHQD@`Ql^9zxEXGcO_?pCDnTfQ<9lR-`CMqQ)2AZ#DSN5TY+EPE6c;xNt=IOxw zys$FeL%yX;F=fo>`#U-_l#_~bq6p3qa&Ho7N28gLKl-x3xDzt(5kc1)t2DL@#f}v|4f$AM19eE59Fa zj{S^LSy(FP5U^iVa=2GwZ^q1stk*U7nLU;%_bN*-jI#05>5FZz;Fcg2sZDQYy4*xp z;naK6#Isk<$(S?eo;UtsLv{=p#)ZHyl0A11N9ZyZ0@$>OKqGIy=b3+uq){FAKU=h5 ztHND+11V9V=;XJRrV4ZZ~8cLoik_)jvo*A|C_EO#r3{`wR#G!+`;qvfXJ$ z7$fD;Rc8YjuiPqb>jXa~HlbVO&=Q8~9DVGxiSGJ$KrB_EQ%S>ns zkYD*0$`lp;-7p_a$3O~87AbQO5+%JBU3eU9nb}wAQhg?cZ;(wH9Ce<0`&oaXYiyP` zER{Tdy)*xt?&*p`82B3(q)aej)RMog$s*Vi z&@^;8chm6oogw?H*S;gB*Rw8~Eh+Y{E>!0~L;5^pW-YdYNXs+OZQD9`U4OvQx5z5% zd+MNdRJF2%Cj13S1VKVk zp%wr(FpwUOfCEurz(6caiuzd+w$TVkF>4eiBTZs{G70V5FW3&hY-D_AXXe7u!eD^Z z4H6V2eE4r?+2G~B#YUV%RC<5DI0!HSq#Bp7EoV|LM8XHpN+8Hr1tV`z=)1wt9isju zGbA4hm;%!o0`PwcMObp2N_!ZPFo<5rB>mJY49N&<-{V%al=2q6AOh zcSg87H7$-JJ*-y}1OzFi*h`D&~y^N%pAZ^(X$*jfB!$Kllnd%xf4&p)sKY=l-n_ICL` zryAV+n;|gp*uDGSb<^duEz*s0LpKR%MmGNzYr^ve1v(Tii(Nl!{TZ2NUI*poQdz!QoKR(=BPA1KWz#*h;ZE3$_OHnlBWM zKQ{ERq4W4}GNYT0HfX->=evajnPOOI%0(DMrU!zO0O?MR+)8Uz#6H!=DTn(V@pHjG z`k4KQlQHcw78hkn$4a3_9asF#_Qr`k75wnFG#1z6bs-myU6rGgwUL~ay377!JZ-n1 zW;&1B)Yj8?&JR;%@{z}P-)d5hxJ*n_GUTFt_xi9y^Y{V4> z#L%2e3Z<$M4jN~lB|%-&i0;l&uiRE-2%ZCVO5Kn8w?D}+#>$ZcX7Yr zi`(UveN`)iDkvmX)*oJxF%t#T6|d~YW)HD+r9fu#_P7M;_#E30uSEE_Mt$xamRxX~pg7O7}-`&lkrTJ+WtyL8SD=cxZc)4nB_r-Zo8vZ?Kt;&p3a z$5rw7eU+>z&Fpz6I%*1vs@)F}4D7Td$VAEtEDsfS?fjB*WcoOn9nt*`gK`4 z3edgmF~3>D!z?PAZG#Aj!vRbfNQfrsX7m?RX`Jl*U?Hw#Fl_=@4#Fux62*h`he`sn z*|xB6zn!@{SSU;5e$<*F!%_7rag4p$GbLt~HkV39r^hR~-uR;`oatK0m0Q4B=&L4U zxLsLZ%~MzAM(GkJN&*9N2@dlkLa=Z-4=At&Q~y^WPGJ=w0&t-CsGScZ(v+fS%tXuY 
z0g1sgvOJ_gJw=2DaLC|pn*_^6(3OB3K;i%iNGrRA5fKDrViAD~1)T9H&|)}EaNUIJ zy+?oJ{jq%T?|=9tV*Kq@J-`0rpZ=eJ_V&*H4<1bMqHq4fnoGV~jgbo?O08p?-6m(I zGMjtL-V>)o2+S>WOGvf))$@z@zW;#`6uu>lpa8OE$S*(t;)CyfY#A~^b_Em`x7_hz zKi=Ktr$2eWJbC={&wl>w#h2NtcbyW75E3y;C*wWv&}8C`o@efdyC<@qTTfh5J&eB-V0_}EJv%T?6atP<#=X&z`tIvP@d%v4`J zt*A)w>c#WNAD$6snZt`OK70J$6ZLfNX9rOc7tddwJ-UDK;{5EvePfEF3NlEwvGv+W z+03Fe)g+dcp{m<+OHi-1^?L7}lcV?Uc}IPD4sD*&Iq%zHzu8;!b_Ozyjlmdn3!^e< zc|7#{AN^Y&wtGMNi55Rz{o<9s{>$sf^X}8%`S9H*KdOE5dfll}wq?%90gQTnD=tYq>Xma?;PQGgyEn1S9 zq6=Y>xhOkZ{^$CYgLo+)^ll-!Osz<0fv&e9K81kj1TH zP*_+7V?cmV6~u@j#&L)eZO&{B8G*%bxC5DCh@wCSgCrzl$x3MM7F`GmAV`6tgp`N5 z+fJtuJ(QlK&B|3B$GRBMASo%`DW`<(>=usFd!O4r>3d61z$i(Cq5xrxGhL(sh=?+d zQBTH_4h+Z;A|Nv|JCQEMJ?%^lpk3~}=02y>aX*Mfy&7PyxiFslHSTi<0p>oucQ(9z zJnn9CzH!HqXqu;PN=b0by-?+_N>)=Znc~-sYq<=}a4J6Ba_U6pKO4#o3#A|m6s>T*$bbKW-!>YZU4V;S+Jdt=W>{n_hw^I~d;Zj)eHh`Lmtl!i{g07-S7 zqtbIY9-J$ti)<31?jho!)m(SqS+FVNZnM>FN<>}ldT@9v7lc3&A?c(p+zwOsES2cU zNr^~RdkQDnQK8Er$}|jd?}3J^@4jkra;MgD*zXUA`Ay%=C(EPu&Ud`|DlhWR(SE%D z$!e(Ma$Rz>pA#Cx$#GGtcz^!cMO*c4x%0tCcbFV@&1rYDy_izhHFjjVBGeSsD6Kv$?p0Ao zZC}k`3?ipndQLBVzY-c~XJ5_ouM@!k^hLg;F%UD&YmPWY@!+g3NH@!HwjNvV@Ts~a& z%el8QU%b}PA>uH5plMhfFPz2dx=iRTcfExpN=@cK)n&@LCs7xYEgkDUe16E?oDDEI zEg)1`1cWriI7Tc|Tl$t<$1IPB>LQk^MfD+FnUyTcn6Q$5{Hio;E* zwM8I?(CMA=tS#TxlEe;04??v9tB8z}s0^1=7dp8e|O^?9pl6WKlPe;C{ETrdA=i+*vq zto4J3r$0Hp{*&v&=2zEm(zPy5j#90=tllX#c=L-LY-5rGOGFse0%Y6Yd-D5#=ePcT z{`y~hdDvXuTqf>LJ8N?*+1Que)H^{ za_`~dw>4ibr`?U_x#!|>)Fn^9`Nc%g1?kMhfKUPwV5fkRIKs8yAkw`%EfPoqP?}q# za5yDdGTCWqVFK|j8;cr5At75lE@CmNV8 zHabk(%>7(zlx1|k{^ra3kDgG7Azyy^{LzOWN^%=oGuqR+c=6TQJNGYMp5J?TCl#ou zfm$gR#gY~7V?P+}N^!)H)b1pECb((nUfvtej>i*g8qcmA^y}2T*-qQdb+g!FaK~x> zZ<4U0R>qSt?|<*_{9wNKTYp#eZ!CZD!e9OCtGjLc@o#ubT?_m8z&D3;6FwatfpZugFMXt*PR`IbP!g(xl&E!N|* z3@c}ehZCpWZH)P7-oF2yPEPN^VyLB#&6-V|8Nve0Eh8w%c)}@#5yt5>kd`0{pfIw{E%7abH3UQ@Y>nfnTI%fD z+N?eakS##nkQs&ug=E=sD6E20HXs6OX4ohPTw~BMC?YAH zl)G^D-mD^OaVHOl&VJJ^fdJb`qJ+}Xv7ALg3`TVv*F*8RvJ3(Q0$a!-hm9OTgE`x= zyyEck(D(aZG96_oNF@uDda}T!UtU|g$riGZi+c>Kaa`5R=E%J1wAQe3u^=)8sqDrb z2%!i|wgZq1s!vbUJG(T5yL28#~TlS*5a>*9IpSG(ce z>XSR)zk0It{_yryXZ|K5^58tQ(J5&LXUk563vOTtgB8X>a(uE_*5&24nf;+#MMPaK zkE@2}20emgL^`~^wD+1bWA0*JXV*BsTQ#4Klg7|Mt&1wh(4E;*y}fGo^_#;~(qVdb zoV|-MLPJQUE~+PJ>98R)RY1e)bR8yQ7Ff3p0?@BcS@H1gra zum9VhKY#t-|7S0ru6j|+gKQ%kLS!05CrfD_=E!-zzM+z;8XA=IL9Xz24hTM?C=nlv9XZVQcFEo zXU#e8+`0EVJu=zwN{n=evUzC zI6ESAw9skw<#af_Xnj0hF7xbl=grC*Lv?+&9!p)UVt@H`XXsvBeBrpXy7bMXA719F z?c)6H_f9oY&0(4k&!1c#`rPO3^QX_Ixew3{5osYHALMcI^_Af4=fhunTPq4 z-<=OE>MZr**tl4cM-TeuBJ*a7sk--B`>KT{WsLzOZxjWA)0s-EcE+7XWoK&i<5+aJiId^(H1ft?5J6wk9GT@BKI%MSB1a%cf-Y4V0^1$9OWS^yO)mRl; z$ABrK26H;*SX~fH`)NL($jlZDYm_CL)ge++lv3vX^}$k=$N)LI0O1a0DC?fvbV>;o z2A6{vhQ%B2t&@cHKH945(@ zjl;feC!hMNly^689)`=u=a26{8(!{sesdNc?jPNEe{wm$h~v2|eDnCQ1tCz8!U${( z_Tlb>Z~vM1e{*~JcYd5VSD)?XRM~U?BIft?;)i{H(uVO1_ssas$ItgVe1E=I-W~Fd zC$qnNi1qt__?;nl|MR0#mUowD6F1jv!(&dd`Z{Il7k{{!k=?MI7zZS2EER!op<6&& zL8+?toNk=}BpZ~;D`P-76H>C%*(DGlgduP!46+haUmMUcybexp7xn1{@#E44=>LT|M=oi9_8i(v2lLhvqmSQcg(`E# z-@N~$NAtt)e16gX*5CZ^`M>l2FaJ`Vv~9|EO`Ec%5Br-&V;<@})M~8=5gn3-=l-u1HG?Yn0ql3O-H0)#=hXfb*m&_YV zqBKYAU44!;1cU=n1f_&yj4Ek%o85(IQf8_(8(Xgc zG6wUC0oxcP0z)7i7RLOX5H8$OA`5{MP{&pZRSQUwQsBUV`58+II75a28IS_7`4b5w zuZVw57PrD+@GWsmfG8jvYy`nASs*)27eX?(%+FaC$O6e=&w)W0Z0xi^7NCL^#gc8X zfy4$y2;vrI9BR}t8nv~iK7p_i2pckGP((-;5|WWIGR9VuBikAn%FZzL1XRb6N&(@t zY)J-V0WI6)K57_a5DapMhHd7a3dd0BifL|4FM1R1Msf&{6av{uE+JHcqVaqf?ZVFp z3ySDyMlq-iB{gQD>=`C*%FVt`dC2V9AyL(lkn#Y*szF)rM{_uMk4md!Nyf%p^H7^? 
zki=%W&NH zGv2$|r6+}>1u@hHYLcl-*Fn#>}2S?o+UM)Wdh|NOHTJzDlr z|5keS?=AlG(};IY@8|JK3l!ozV~^<=-#UW$~&oK6|nXgze$ zcz&@`ddT3dQ|30ezPK#K^W)qvm!~zfA`j1BZ1(%?9SpVt6asgm42$Znv=`3Vp0rrM zzG%<)`?)O$51Dmb_KIqgJUu7vj+Bbp$K%+(6#XED6GbTt6^X9i9X*kZdja*LQp>2a zKdW)YFdv#D#DcQX4ulz3W7XN3MIE2bJHKdNUo02P)7OV`&>;8XvRn@(R*SN^yxKZ% z9o_lzY1!A6uOEHyGRMBt@mud-#9k$t=Xw9+>E&+QoV%CLpU!g+K&GoB1Qg*UA%qK{ zo`fB^p^(5Ju)s)xz}6Ya(tOPE{PtuO`6kS3mBFzOl{h=o^0=<6KHj4n=o16pE!=lI zWgbXp$~oJXWo47XNR^0iqe`RbgKN)4*K=TYE_Jq+^URvAw^{mjSVX3jRF%b2J^iw4 zEmX<$(O|7sQ}5GeCRX=W{n@@9?4di?>0IBfo|yZ#9rm7^_u0pAc3R=A(lc(-%Mqf? zv#Yvtbx_tmD}4ZQD}`ei%bkdgnD%qi)a2G;QI%`@O^l8b(bX!#wL23KBW|)4oo>(J zG3{FWknC+|HY8KISSqv3Q|{9gFO!*)ZaEW+GMHV;?n(z1>VN)=Qs2H`%|pHDH6h(Lxx3QDUmG{i;5J1 zk^qg|QV_KWQw%|<_3?s!*j+!_t8bk9rP9&%JGE`JGP@+>`ur#scaF~1tHR`OwSd=lmO&GX#d9IkGTUVBj1!_yyp|B_-` zQEMsmy>InmY<3q9{_S7+>YvM(|J4V+zW%r!96B7obCHX0^3e~ro5yoqEYHpG&ehdH z`_Cz5nqylI-@4yU$Ma7|+4ox1IdvK zK+X=r1*duIeb+|8_!*(b~?|=QT z|GA_5#(((5v+To{>-LjR|JL9Bum8#Qo#}skZHhE*q9XHj_k`0Dis_u8-+7pqkPMKMkr5n!->^26`H z_m%e%sM~-LQ0rON;jL(kID9(YY!B~z@hkT(9(?rU?|$_7;nYeV5-Ct1sR#s!2yBn% zs~W1TMlU?>RS(B)sPiBF!#~FSh2QwaezX5%61nN&j5w;SQr8=9sGoiKlW+X$FZ|KJ z{LwpKeV4YcX{q$3jQf^0_o(}VKK}6e8}HmDMtXeu(I@xs-)}ZFC%7eCJ$rfQ{=H|9 zp5DE8mLNM5RANNSQnM_ko|>l;)YG_im=F6tfvN*&JxDK3#?^8;)QltB$~;UZcQ5+( z^3Wu2JGT-8Mn*scW*8T%dVIY2=trOM|9d*Sb76*|#4y<5*61@zD^PGN3>qm#HSsRz z?jDK~#Uf+{5Q|<)SJGM}SP%}70)?OyRf+E2vrA&O<~FO{WW!dqM&TM1AaJojq%j~0 zKVu^iLSSQqa0DR02m}@+*a!!&SXdH-K+Y~F5|Tld48may61U9%ks!d3fNa1r0)#^# z3>XX;(h@EKC=J-$5)c?rBHMw5OgIe2Kst@hEgLcsRGV88LWB@t8{DyAAh0MTtcF{o zo1*nb+mi(b1e$Dc(z#W_LLejy10mARxf>B{!FaT@n6{~eGN1?{fh-Zopfu@h*`{uV z98}eXitvciWa~sS7BX~jGF|5Z+|pnI2qQt2kW>_CT^6DqMPLOo?nVNHkd&kIsLopN zhgNP{xoLCXwW-ykh;f|VQ{=wdvwFuawLn=d1Vuev=*OYCmJM^;9Cj_jO66E!a4Px zJ)h6y)#IDJoObK!gaS568e6eATQEy4Z46aL!gX=6E-{v>a2ew8;^V8%+>yHQp@J!S5;mWy840YezYpN zDxPw8AnC^L$WqBcw#RMJThx=|>5mUNY^JbM#Ih{9_p@C}rX^FyJEImuS+Di9l9$u8 zTGpX*HTRL%7W44lFP57ZpZ(#(>o#TL4AT`-S!zsnl$?)7b_;W;5@}GZhJHBAU8`2> zx}7eT&klzSJEWH#rpbyTup@@q1#eY~QzB|rS;0h`T$I!$_M~Y`T3psYPXqp zi{4R!S_x#s>A1kc?3Zf}+tTKGc04XGULQGZCn~)x&gWEAmFe+3#xjnFdUkgmtuDQJ z^2e9CoSTmCfAy{omMPO=|Lp0L-PE1?>!;5Sb2G+9Hi!@ra>`JgHkdgmxS|PU1V)lU z$RV&iIlgg0efu|;3!3Ei$c$}{Fd?@48Y$-}pK|`e{iLqsgm_-?^q;#?bU@Ib}u*eQUV5fwteIsHStD!%6X363( zP06xm8CLyPHdmEiPmTxL_4S5*GdU0Th3xZ_o9$-W9%?L)YuixF z9w-Zu87xYyhM3J_C`W5Pe+-9xUyNrbk@J*uZfA#wamW@{)>2$LR_}{idVy5tuxdJc zZMoiGKiqj7-+euFsAb+g+;cSdVK{v@*wfa%zdYFcbZ;)ENF9W`x;R-9N2T9*n$pYR zxOkc}C+~rjoi%XOzM=rc(2X1b03ZNKL_t*7X=zTGD0bJ}%kfF#;iXEnBQWlIkuR^)6OoHkmBr4~(@L$(8@F~{6cQIM?>b!6mzsCd3^p1P-04OF6p4-Ka? 
zZP}qw*5dRkr++%%{OXr~SdeVV5>ZA-O1^(jpC)Mj;7N6STw9If-E*v3uy z&dbK;vu&Kb$=Tx1^gs6+hkxAuPHpAp>aY!zuH(^2)Zt`gZtTwWb*X-K^FMV;R~9Oi z)E-PL!pZLR?w|j)cYZ_q+aC-!S0C)BjJjWb<#p*->j!_Z*?rzM3`h2OXLH@zT=w;F zc3h9A-LP)`tLwjZm;Um%{+k~^@1Oj$57J*e*?#FJzB}m}bzx*(yRhj7BNRmn31s&U*!GZ_ z_Vs`F+ef_nAARXwa`Nuk<;Oq$?f>#`{LalA?O(ieh)=!w;j&Icy*w~au_HxKT+&Y; zeg1_nKNyD*jR?U9-}@2&I|lJ@D1-Pv_vYJgx5lH-Kl2K zu`Hernl>#D(>CJrv&VPdd_Y(4ympgQ-;Z6|IpCb7Ata`WQ)h}&u20rFIlpU$T4Jb1dFaful`esJB@*g_v8p7w%c;8sgaU`c zjG#JFD@xp)CIcb^ql!l1QZ!2H^o&jf8^z5$$L^(P+X7*rP(Vctgb@KD2r5D;r4+bY zSSfc|SO_EtFtW@m5*Ql{0U>@yf@abLgaDaWgaZr~KO-az7!*RX!3hW>=w#Xw3LtEQ z#VrA{!45&lfDIPluz>U{2qbP9fq2C>wt2;*6G1VzEN%t5Wv~sh3@TmNr2>kvFb=hi z!<=37w@?qLQ63R4a&P-IoQq=PiFXeFGdG2;~HioonJrtOSV(K^^o%F`sH<@#0 zsmG&0FPrYh;y2&CfBofe)$Jer(;r=4KYG#U#DNTHR7-@I8ssct34w)5019E$()GC3 zVZpdYm*+9vsON6lnZfnq?07i7DAff}MZNXe zeR@7Suby6QAe~UsXrN?fzZEu$(^bqGTTg@?G9hUg&yFIB21SoS)8@15PHQ4ipeYg@ zriUA4eRhVy6PZwJ;@-$Kw@Q)QkWF>HIts@EAKIL%CB!tz*^0DSl*MYf*@xP79mB>w zfGQ1cmr1yW`{<0*@nRTm`cd=cv#X3gJ2(=qwMXWZRz(t0XjwJZSe~dRZMVNJ>p8}u z$G3)d9ozPezxZhTZ2PbN@Gu{gj;w_~i#n{=tK4kuwl{Kxp*ZmzBxOw6?B&s)S(T60 zCr_R}-R@EaH_lC8%x#XzLTL>ml>3pgPpQTLu3$xXL~1z<_Mv6Ybc{R0$TG`j3JnvA z0CK4^4(lPI$ZtaMV@CrIvRgEdpWtdEl-En?!JCrFJdSz z3vHi$_gP={jZWYC${ABmRvhN}`LnBSOZxuh)2Gv%omZs95)vw*ByhJ1RC`~502O6I zicmyI6l6#d>WT*!Dra9AeP|f$8129n?ZL$0>=im=_8GFs%I%~mjHrho@w8J@uc*|O z01@sY1hGu_wNe3=rO-S^nH`EKqK6{0NJb(hnv}99z3g#tJ?viITaD%T&AR*JKiafm zyXV>NB|JFt;%vCOV&CPX)v~p>mYv=1<85qOQ1`2olQ)(}{qD=lhkd{4b;(pZC~B&=aUtxHRirPRvC`#a;EqB$;+O@ zlhmT+80<|Ra$ac-hnv1z_^e2+xaPge$!WQ7{di^>akAj|pOM?cPUY>jc0Df-eTlVN z6#70bt3rcfP6llQvee?hai;T)F1zxK_b2L+O3Xj%#iP;qF$aKh3RkyH;SBjU4o+Z-v$LZdF^^YbG?V3SL zZuK4|98_xeG`ojN;OuFtj7!HoAzKKk5=}PXG86{(n_W>!MY>mnQQR8|v_*3(Ej<=T zlu@3A+k7)qic3ZYV!_=0?67(8-f#ZP*G4__R5BXm_4RgM=ksIkK3|S@I1p0SE|7#N z7DXmX9Rn%k)=fm#Qe!A!49K7~SvnvEap<*m6^l?G#9@pVf4uEiK6>l;yu|Qe(D%0w zpXAN+PdBLdzWDNT`OP@{;AoyaJ+G&I`~2qmB}=DH#^pdw?kwZl<*>`k8+zGSix@;^ z@7)xE5=77a>D}M@)wh0q9e?+Oar5-2*M}@+vwY`%EPs9c==V3fPdm%9gvG^X*JHmM zk8?G&VY@3i_cu>}eT;p!pO|*ppWn2TeprdaDs@wyE^zbZC8A5tmOYJ-oQ6aK$qBXs z2PC9XC_>=GB}34RK(e5SZZZ=I1tvfOTv$M)C*xEK%NJpdtM>xu;>9A?Uwi+re)~vY z`48TIKwo`z_1RAz{`TMe8~^;~O#s$FDZl^o_xAN;@4vUKdk;_0P)eQK)cO<-^+z8* z68~2W;@?mP@ygfU{|4=5TkM^tk;gy!@U<_$r``PM(T8unasT5_AHDhZ8T}|jUJge? 
z3`Db9!;=p3Q^6J{K9ga>uZbB z$D8ug{qV`v6F&O!;~$6S@4xWniylAyqwnzlghBjojxR1EP46#nt_|yxGpC`0QGkjN zAw|gC5-(o75S*W%At@w*0*9R@0|c022%<6sg6#B;$P|f4x|wUN7V*u>#c9#Kdw1=j z+^R;sHL{uP1QJ-XVXbk&(D6Q}o-G6f2$^D3s!Lc_cH0o36h;*dK&@JYN~6uu6v0-y zo_lOJp6v{T0rN9PDGL!eR48MusDy4AaN0(LP;0>*La{lE6}8IW0K?qizVaC|1d8r32?JXYQR-PQAB2mqDT|?nW8!&f5jP z%H1CNO-?|T0of%Z$Fa#Eh-@X>oSAlF$h0BC+)A);Np*#>1*ilr<@G8bou%g7UQddw zWOHoMko$mX7$pt0j+A97>Uote!CFkZ*~jav`+x0Qcb|Rjm$>@fzyG^Wo`3Xdw#F@U zD=D>#>~tKeLV(k~cd|e4^cRm_ytH!eiUCxrpy9dq;)Hs#P>b4n+JGbi$GALM zl_=w)&{UXi9=&XC`v7B*09QzmtTQiM~5n zgDRdTXIIo=xp=T%-(0(M*lLe!k{P(FFC5PrB0^de%K7nK=F81?zpvNL%Gu#{HN|bJ zF2&J35hN)aL7=LvSL*%yBXB& z)=64{w9SBUkinY_{p_yi-L6xY%gCpi-43Ni8CE6E&SQF>&pMGT%3*ePcg3BPu^hb~ zN9T8jI(zBq`R6Zs@kWb#Cr6s6*3iPOO&fO~`nP-1qOz%u zkRduz%OE+FC15Pw(6N{qR<*EPc=^`0omR$GK33m~XT&=i2l&L(vW%d~9$Zi&x3*DP zMpOb5()5v&KFy5pot+pg9Zx=#zH>S2ZtK#e2?br&+@p?7S=y$I-*DWhU1kp_HiQjo z%KG#`n`_%XUtcWa;w$CZKYR9a-d=C!>#dXZnyB^3jqcPK?yNn{Gfg_2WqIz()$aWD zlk{TD^5`UTL3_~;!X~HIdKTlLLjkYkVK|v|!28{_ zEG~;$(kBTuV_g<#+O67Zy1dS%_N(#E{-A09d{@g5o^Gat>fNPW#$5E*%y_goiF|nD zQGHp;lP&XpziBK7dPd86wKU^O*_X2kva=H`G$v!3q0UQcg*`847L~Bd-YoEG|Hcq; zZ@GBZ|K1Cq9ri2O?%Y~Cqz~PTI@~#mt579gBll#I+lhU1cN|rfWt0#br>4ZbBJ5JT z>aXC#K3WpY91msbAOG~K#XM7EIZE!4WsU+aohG|S3PlUMCvqv_X1lkLoYm2~p<7Xt z0tZQ?Bum^fiU4~N>W(fS6^ho&nCIF(_osUei#p^yr{yvvZ}oQkF*9C3j3eoZR_~fBMaD9}j=`C*#eNhtCgLbUnU(e?0rO^-sR@p}MNY$&Vuzx7xC>`{FEKmNtnds}>Q@!?M%{`P}9`1RM{oQqZ~Eo*gK=&(Joo61lbKL7AT!@c`=!~OA-7xy2W<#JJn z)hNLjHUu^j82rHxfB2=Zd`%b}p}08u_wS+X|HEhS>E}P#^@H0<9QrqY<(J;R|HV%} z_*V~Ke6XJjhvZfgp;Ctsbu250GasNX%MmK8$WYghe(xWA>DT^J8S9UJ=fA^z{ZIXB z#-<L2~|tM9${Y}cNC_V7z@JoxFSpWl7s9_!polEdyZhWz z4}bDU@4Wjprrf0-fBJ)aZ@eQX%%tdUg%^(>-F^EDPe1zP^)G%=Q`(2tL{X8V9~_gH zC$W66sGU~kO)x#}-u5(oZW3|{2&tO!NaKT{Brygd&rIFBnd!;Y!t*}edqiF6bP-;h zM3L$-|g%n-}!sL$NxJ{Pu4g|P`}#DXLr_ODCqg~7q8vDTV*0Yd;G%i z#@(}BqL$tCq9|HIgro>wN%;K9bHTlPcM$(tqnQ>+R0zP9p@bCG#&nc6cFIP!3_U0j z7h)Jv58i$k>sH9f*s`z^EK7tf7ot#%x-;ED0s{tPjIC~yGnpU|0wRbaC?&>F>GZiY zBG^ltxof}eQ#yrY{#S;?V6deimWxUm!jcS184$<S7$BNbs$WV4d~)l8bc_-iXb9Pu|UY_JRCYo zBOt~=!_biD$xajoqX9~(uGK~vid1AE6e)=s7@WM4hHPm>4k4GJ%e3eFotO_ce?r_+ zlmOMCh~c%xXsSZkN^`40zyQVuY=f1FgyeK4PG@%KkiA6d6{`}3Y$Z@>(+ty&jw>@T z*$$$nT++2fQZielr8LH*p(7cx8`4M-rN|v88pFg81r@X(-%X1x$NXl*OA3y&KUp&#vzj)lf_nY7S_|p&XvMgLNS4FZL;Sl zTDGj$r!uW?9MU+)rG2MXo6C0IdyZO-^quW#B)wda(?!Lev#A182`U~uKV7$;VlwEWR8$oznFx!ZcUsGyQTN?jb4MM7 z@y%7q{bS92(_`2>s`=Tn>3!~5#+BgOteL?|CiUHELQ~fDaYXD7aq!%YQkkZ*T;93J zIxp0V!vw9Qxk0v)Dy_4aZry~oQs&$n5Gunuz1r_X?tNisAHUEpe=xu8C`*8krn=o5 zwQ1LDw$h2gb|f|X$vE7v<>sVc`1-5=;d9sT+-?8f`}g8}({_(HtLKkaqqldLarpT? zauRdxp$oI!OSd*@4(rLync-1gMP1_dH}UT!WA zr+FzSk39X_x<2*pr@K~2d?k>OBkQAW>2c87`MQJ-PW0e+K6|vwc%1qC`NeKpo=SC- zJ~(h052U_1ZLEFkWVm=b`tr##9bUh6(9b@5@Sv)M#$}vBUH$y>awjM0wmU3esM>D* zaB)l1nlmla#Y6<7v|bPAJGnb|OQyAsx^USU24bvlO|h-3U0WjAU+H+A=TH3Ta(6#s z*|~3abH?*4<$7#WeZEQ~Q_GJx4BD<#&SuTKcHUaSc33g3LS<1at91}^me%DWEpus$ zVx31ZMuf`Xh1G`B;nAg&)=R9WxOsa#JgDn?`Pt_5{$tK(oXun!#i+3ip(rs%my^Ly zwjLM9qRY_1Q1;g4vkQi0TPwRXOE!iZ6CPfB{uA3VV?Sfs{>nF*b%6gHA~$Mi2r6rf3R{@}>|vn1*>;^?1@hnYCV* zskKeIvi|&(ALZuk=C?n0RN7DfaGP}$p&n*l*p__DE+}n9}JJtH)+-6E0vn+MGaeRC&Pm4R7;oZ~O7xypi{V_wFZIHV{ zMj#86iApV{kt3v?l+b#>quF@7U^|4yC@phIixjvdkZpT#^6P)*bH8?N^}m0xI)C`? 
zyz7#_M}Oe*f(8gLb%od=SdqOj_?v*Kf54KW?X=Y7{JCKzVr4MzWA27(vY-&<4f1=r+2^i zq56EfYnjTuI=8pJ@vWC`zw+safAEv1A8cmjZV`}{%fZBuJKQX+E^d0_v;`2)(_r%{fmz`-2e2Q*WP;LgCD(j z>kDtNy0$7rtQ7@Y8F=SB^`ej7{i9dj_!6n6`04xKf8oVfm+Z<}7N`|H{`8aQUVZ7_ zhd;UX^6PU;xs^pO^Hd!!DdlP%>ys%AQ*H|LXWbXs=<~8sNJA;8>`_LIRpb89jWWV= zIErHD%X{;qpq)~7j{5N7BmO@O;{TUhFa8UKcecHlP4!&IJsq6vh0FQ*+4*wbQY>A< zTt0oI>$l`RPgc9ecvx=Nq29Ror~l*im;O>IkEC+{&Tl{e+P6X-wx7KJeZzA%t}hu% zC$>;HxDp@?IN{0Thl0c78wg`3m@5Xv6=R+;4hS$vAPfeZKV?ZML>RsJ`j^2n0)&JV zb~Q2v2NG#@BU2TDz+hn;TY_D-))vz@v~Gh^*jk0gp_pP-B$GWlCA`Jwi(_}u=gbuU zZvqm7g%Kj^R9s4R86<-x{)EjHgSp}`C=@(nOCT`j8F9rtBmR^?IJqJawy=3dMkzxI zut10_286&yMi??cjznNU2o-~9X{W9rbH!W{AQ=$GHdkZ=1q4m>K#3CY6;MgQ3R@~f zAmc8;-3ARAaz-Iaq?M>J3<~JbfiA9OViQi;lpsn#MFtY%jhpCBVaj1pR`r7D^Q=q| zKO>@;=+MowVq3F#rA3>l`M&RbZvP9PBwGN_aQ4Yj~h^4SWZhwq_abIFTIyRrPj!hlV@ZhblTmW5hE~^a|}(`b?Hm-rmDCE zDkh-GNv@iN&$EGhS~8-j6a_|hFC{5^V~SyllDI);EUA%JY08v!){Gd|BX!8VGF;@a z+iga=8ev0nM5lX&}|z5B_%51)EjU;xaVjHOIbdvB&?6^={^l-6Xawl3U< zt~e;M-d}576zR<6^2ynyf+UJ8>@I|(s4a4uu)CMJ%MClBsy=_Rj^o>P$N^BMuO#@% ztbTU?@tM_nw;~{9uE;dC4%6`%ZQ7gLme!C7WK{MK_of&mh{Dh>?mk+2?^4p6s4kKQ zq8=X&(<)aBJ*JtimhPNQ*4)^(#fb-;5hgSwrN&fhU&%3g>Af#K4^NK5WARdYlVf)| zyVi`}x81SK5K#fQ9{Xb{)2?-8tC>v&C%r3*a50%ZZNlzkQif5ARkXG~%(8d1*Oo0U zA1+Uw)h*W7XZLY_IvyS@lk{Yxq!rxC%uqFGc|51WVjT91BPrr>_eyOn<9U$tl1qe8 zs3@yR#}3PYz>A?fLaQOPRxZvZO{BF#W$+vXQd5T0-7c$vmOWW@dF^?f{!FpIK3vP& z9;@|w-8Of3eVETmT^|g}b?+N)tfL(qdGqAs^Im3qync8XTF<Jr3pMjY&^_c#-8Y zi9A@)=sQQo*m_?nX07wl@4fe6 zbMx%}p6VaZS$E0Ex@x#dY=+`IJ)3>F|M{uxmZ7(LI+W|Hk@NQ+ZHjwpcz$m`d|3OV zkGGGquQC_ymiF2$Z*zKlfp?^Ltz~xG&9z2RNe_kIU@9fs#U^7&QN3tAvcEU3kAjP} zbGfz0<>7eN&imcol^tDPx?OHBr+8~m9;X-z!BllL=bepaTPxAQS#Mg}{(Q8f-B+n`Lq2Atp! z#0ocOmjO8fFvh@mKwS^JCkoAF7z&lm?i^}Rnp;h+KR2CRa_@m({8wMPT{b`Zd!Joa zktiivdeh6TlZ?Z+YV_vF-n-MO#8Dl)zUfIzf&|Y!>CQ}BX>tcys}Cuc9{X}hPiwa{ zgdqbX_hIk37jNHK?bT`UE?aL*-D#GCG}pFyyuY4r4P9|qP|Wup(;si|H`KDHA?+{%M*tR};2BfF z0!<~x?u$-SjzV(hilSq?35p%Bg&*Afj_kZ%v|NMXc&EG$Lf!}*&TRz)u?i6j5pXD%)qoxQ~d7IjP`0l+I zZr|8-Q+uxVrdY4t=9bIO3g5r`iQ&d`w<$cld+){zx2-x1bu|=JD>fuo0t|fk?mJ)j z;+F!jEwo;}+^q+n{$NAzb5>`p4{P=}-u&jv*I&N(@$Y|de&=!<-Yi0N6e@K9qLj6m zj$#4!`yTFlb%`?a!FPZ2`LBMvxc&6|zl-^ezxZprC%X?mGxH~up6mVNb@{<}-hTbn zH=j5keDwZXKlkd}Klt#umtWTE+Bl&&kg5z87}@5d@BPD9zV@rQbn7R7{F^Vl@$J@| z*s2CzWz&NXfBX5@zx3(*KYHQKZ)R`u;u-w7j!pGQbyOJ?`?VO>ZClxW(tNfwn3o>G zG)~1)*BXWr#a?Ho=C+W%PzHwVdD^$L`sAUgn7cpzqu1X2DpO61Z0Th&pS=CU-}sf+ zR&U)pI63}r|N7ticfaxadxI}tUcItkKX<-6eR_W9sq)_Lvt55|sWS)0a%b9$gOek< zUY=iE%op7?mK!J!{N9fr|Hogu)vq1Cp}zXYjW7J!<3CfyA7AG``XBz2zx#nt%$(=4T`WDFZT?E0zqHD-r}UblVU}T(Nk@{ER@@Bp8YjSA-;7k}DmjEF@t_ zBNXC_0UKlqyJElwlfv?qZVRx1l9Hj|P^FBD#B&-F1Ck9Hfg(U4QHo@oTv5fynP3*O z*)?K_=+Gh1kQjF-bf+;otgfr|IknB3$c6(15>h}x6vZGJI|ZGvDVY+k5rAxPZ-$;j zfik7QPnJtqE^PjkBtWGKG?l^BTV)i*VMw|_K|)Xj$O2~oA^{V&doxaQu|_n8GQydO zA+S)hY@H8=>}$(9-pXkR+mr&8C}@C&I~dY&ygvlv*o0cE zB8z(<#FEP_bGBghvaTX(E#wfirEe1@rf7WeU>sh(eRCb}T}+SeK4I74A}(;0xis|! 
zR6}^V5%i&+wY1vY<0c*s_8u2;zFX!fA@3jD-XDi{ak|)TyDZ%_t7MVqwk$v*^&Yr$ zMGlj`qHbh;?qJlln{}jwpv%s5?|EVJ{K?~|c3*n(Oe8iGQiXlb~ z4C~^{#LSg&)+A3xwY`&VZ_&~N+g%^Ls~#_cn73iVSZda4h%#nO^qK4qNNee?f>Z~m zjRSfo?J-bL(sjym^wr~Qn@#We;NIi2tXg6!^URPQ_Ul~G#T<$pM9NNgyJc5Eq4a|~ z*s*a&pwk^~v&SsBt4AG0DIb37q z-dncXlL#wLrop|Pc1+W_-Wy9~lh-RReY5K&A4cq#8kCN8bg1ocZL@teJ!x}cZYSj+ zoKL-|)^c+D_2={A&c@O0@QpE;j>5a`ob$Y!mtOG#eObzW8S|*Frn2gnPtWJ=@cGyG z&$h2#o_+LBfA#o3_~tKt=fC?0pZxILyH9PJ#v5ZDuD{d{H_O8_w-O~wvA35?5>Av^ zhA1^y!qimPG%p*X2?R={?&-PLzp$EDx0mAmyVl9BICIPSMsxpodqIh!)$rT|80XV* z|LEkJy!YhsMe#YgC^QOG+U9Ie8}@Ja%mGlzqzDRx%UE>yi}Kmtn2Xd2CvA@Y^iBZfGV9(1}&&s6D}hiXl0ug3D=3~etFSF*H42dn*+;$_t?kacvd)~DQEoIaZS z<_Ehz3_@c>*lE?_V9n(P^D@f@nRH7u7$hWY(-SZl+ekt(NC^~-BE-@G$ylZ;I+n3A zg?a(m1O%)PylLxUSAOBOm!9h4&gs1{nPwYO%V}Po9_Fkys{`#eb=ftiuk^6H0$(A`6{**8zpO`Z&Ta4JG!8+6Npd=P?xh!Q3@#`yci(?7*Ue{%)`Lsv z^wOfCA_b97t_Tzqxi@ShR2hz|+a8yjY~7dwzJe12vH*j$T_1n@>o5Guar?V}th4in z8(Vo7mL`?t!AkMMitw|b6jHK26a#N$xQ-MlkPPZ3qaPtg}_}E6f#B#VW67mp6w(`a76{I-oRu*5KibgRKj~@Rm!-{ zO?Q$!V`E&WVy5w1rZ3zX>)uzs@f&|`ue|kdeC?Jwc=NfZAN}yZ|GR(VADq70fA_01 zANJGlN8L?&P;2psEDK?A=biUI_u`FRj#BlYafnH!xi)9GfBL}(hR?nF23|h-=*KU< z{EDJP3@R28*w_LigShkVJFk85OUV_7#A-U}aR1)>7xdU27i*eM#`MdtfBD6e7eD*- zcki6Pf3XX176_W6s0>90#TcxQYXjC@CAUw8VOtJve(;BX@WPvaz9@h4?mx!-OaJ1p z&i8-%!G-8M!o)Nx-hJD1{_Vf^ zSN_W7$Nf9AzxSQL^XBV+VY|De-Tlekj|?}DZ#U$^dSF#D-L?zNt!M5+5TNtZCxX4h z6EGmIWa6}B1O!M35w^Mw5(HO-0D~=~kc9#3))&9j7nkM&geB`r)goGVEMCJ9YDHV* zwxQ4Izy$;b*L0NPQah&4Zaq_=0EI$~0d>VtQCDBo2{DVaX0~UZm+S!;jKdZ}2t*~S z3RWFMC2(M_m_H%&Oo)KQDe=!4VXhQA>=ao6g$3e@4T%AnKVihrh@Uaf6muoPAPAnZ z_)`L$$rWKCkhtQKf$4#TWFaGCFaqH;27~|$S(HvBP3|O+SU_T2EEy`UF=$wq(lRf% zT{JMy2plphL6Hy|N-=2zfep4XPGzE^D3Xx_ndml|X^=^5?}zKOx}@}FM{iJs0xYFe zimHOBgaHAv5<3^bWe^Uf!D&O!Qed1c(Vpxs&2pJ(K!C&*l_C?XX)x2RGKjDY1_5Ef zLZB<)w80nAHi`!5{uL&Q*Y>A27yF#^xtF%Y z-qL=1`B7Wi zqw^=5jBF(CK+1JnBD@3|21VtdbJpD5ED_=E!%_1z^|{SE7x$>8r3UA^sEb6sIntJA zWD7MBILI7^QpzY_UNl02SQbVRQ)Ch*s!P&|?7fK^B?e7+xzZVGsjJ(AHr;r&&-TG) z+lq-8qKewR()xmBXtSbT(ivOsO1M;wF>Pn9kLJVi=4l(GLXB8Ve-e&{>~K#C>2S|F z)`62+Pd7c6IeQSYdLzffDcU&XoZhu<-}DS4)r{$@x2~ODoImZiHJ^DnHQLP0!p}%y zgi6%_SysztYdRsPfe2!Ksn=-R4iwI|PdlYVtmj+85mJ9Jmx(HT zzH7VIJm*+rFCxZ^vUn<82RAd9f{AXA#f3U5vs?@9f06lozq+yCz4YQX5AETHY|bB_ z{pf$;eZPM1E~)W%M`Lhs|QZ+mucdvaD$8}WY~-O8A~D5vjbrugju(q zdxvd&?zMbcm>;?Ir8P$mQrFNg&z8O@d~#SW=4rPqVNm8!hHaVsmNz@k7j1UBvulu5Bg5dM>wC4FO!JdE?P8M)=Gu+i zHWz-#5JW#_M+8`QXm0ufGvEWlJJt zkc_c~P`UH&kKTCei+dt!927-89>e9qXYZWZz8p$YhyAJd7r*eu=Z|kcy7!$AFMoWt z#oHMNEus)Z$Vg=nt7~JoSj~`mJ`Ho3PTu?OcVGU}FIW8j+y4afSO4m_x8HyAgC!I@ z2W;u`hws1rrJw)mde^@5-VeX<=395(dHaPoe}QS5Dv}74$|fg-6aDm$e&?kx|2nyC z@+Uw1C(pn2OYUG>f)%#q z+e&U9<@V{)egFPPFWk5uIXBv8kGI#44l+OjVVp_>X`CQbxMHh&y;qN}T^q7&z1eze z%D^yYjwg42@V#&R$}hLvKMya@3o_52+!O2_UIPIVR}$GKEkIIGC2E%9 zsZ$mEN`Vq`2&=FR%%EBO@>FVz@xtVM;dzrwfB`2` zjJOVi$1wwPaqB%L2#`udnPip0V{uLT5{mVKs;lv7%|1^s}~u1al>X z5FiDmln|O~0ht7~j!6=1$r4g25)hUlF{CLVw}yM$SJrXZD3-ZL@2H574C$yULy?H! 
ziUNVV5U2y*MHg2Bl$p?@1`HD+wa1%{S$2uRGJi@$7#)lQj0X`SB!fX<5`>GOtR>@a zz*xpRckG@9<8&xX2^3fmxLIj3oG|Vr_xZ?B*Sm^55{5?FPP^kyx+m^)c002OYOx^+ zL0AZ5iNKB#s~(pT7>z|ofkX5y(W!IKa7 z$Lp)(y~64;>AO!Jp3T}m&ITB-dRQz4C4rj(XCS*eO@+`wiL$qvLJCj}fi0(&X`%ko zMJ+!4{mp}3%B5nDBJAvqKHFuIe0V#r9hK>Z%1gtIg3`nL@$MZxen2--gL0a+?=JcR zbQi5Gfz;l(O z>w0=DpBc$&`6%Y8MnAYQ=QT#}CfWwAu zU3}E?3u@))R=9->&Zu@MBCNyx(vQB<+QquqnxHSRw7NWauc$?LtcSY~mcD(GRxWWq zujagSw3IdtE@Os=DYbLydC`0BT(;Zo-o)7xcU5e z_LF69PcJrYoLdBEr9eWIs8i3NouLvYTW>DIsP#%mRW~OVh9&)UA)2F_M$|HR`E2Xu z{PIlYOE+WQKf1i^tAzCu3l1r3Y?mi3d!IK8Jqc0=^`&*>%vQ`cl~U@Ih1|8Sr#pK0 zP`p=DuBY;Dm)Wy`u*Iz^O{K6t-`CXpv{%kgY0GXKefKn#T`L#6L^+v;#gmuauGn+h z&cb@7RZLIM-2HU!bqr}JS~N}Rc;i9l7$}Bb9O)?AJ`j3}}~F(@eP+RlJ883RHg zy0J~i*od&?=rJdDODI?r%RvfNVWJBR?y_>I&K3z)!zB_*0-0o}K)BSpIw-^KH~Qnk z#a;K=-k>o;i|c%GKDUMFH;&3hTkUo&ZJZ2c9LIWjx$6mZA={`3Ndc56Km!Nmrt0mfJuiexa7637meD66&h zZo~oXcH&Y06sx->p;TSfQj{0lMIq%deC6fUm#^>t-re!y>EqYzc9LgP?Rhv~Y|&hL z`Ct}fyV043_CoCs=IxW+a|1_eKi#&ZH-c&8+Kn?b-cl?I>s=BJHs@>IYPwkG~A+b$NoZt{CAi^LZ zag+#w7z7eKebQZ+OSO*ILhW zU%XYeNhCtT0i5ys3B}vr{`RY{f3cHw$f}Yq#uBnD-v0J?46l9Z4cTwM`R40i`m#xp zLQ;fDls2Rgq3?bB+duk~KlRf`7d)1u&4V8q-}r&;{N%NF-lAW9eZ9VDbNki*^%uYVd;TE~ zy{+MVwyak-jb|Ud^YFp_$iAkZUtAuYofH{`5jNchj3IHz1%$F>9v#<{7w##`e%`l! zh$vNKILGY=-~8rJ|D(S@&wt=Jz1h!gxtil%UH+NB_=~^rGylYY+%La(aaq3gtN+m# zKL7jYdDi;j`_Dfz+&_KT1VYusU`?LvaU5FR9h!CB@kbwgBse;|mzFRHY;b}zfKXJX z$?Q(X-O+K$n0bDwXScoRI^5+z4y3S!fu1^l_&JM^h5`{n#b|UJX~3*|@@}ZTPHPvs zTJrX43HI)*yCuh{tabFT>D6STcT#-HP^DpzH55%HYSJ1F*-gGlSZ;E^^cIpZh7t@B zE*q6l7$^d#vAH8YBM=x6Ae+ymxMOT22m|7GDY(;ZW1&FYvG{E^*dee45}*#KVM7Ro z0b7Ow;!~DEwgDl)IBlUC6_LGX_K+e3ia>C%(*y`{M+TxnqM|!VCI*uNI}@yggkpv$ zS*K3YZ+b%V8HI=t2p5&HR3nXs2oaEJU ztkJqY6T%2mkpg2Q2xLG8A?`R0y$k_W3y^gTG>oB8Ufs;j`CuDT7{sR}VGR+YY!DDw zU<7s=OQ%I_rjWr27~@>$rK9XZD3l-w?npwy78dR{4okRUC_@cVhk~80CC=JYGH? 
z##3*%F69{I;o@>$*4roZlId=0g1xLg1{5VaBD#!Qxc8KaN?_M6pU7n#P6lPIbI;dW zJNv;GE{2%D^TGG_9Q9Vmk^++Mo7HnNh%z3X9Szz{5mOn$^wqa{d6S0}gvwa`IJI2f z&K&_&Csk!TzBuu`bG7JEff23T$ZZ~P?80?%S(@C8RBMFFw3X|)tUldPdVg}0nRCl} zQpI)?pDJqe{>Fs8(AbBP5HS4D?R+emq1@8?&^%!GtVia2kqA9iv+y_f{J9_~E9uxmnXz zc`>xsTdqV8(Pz7+q_(IGs`9e7C{vMfKedsuqZzz#btY9W-R#8ii>vri5y?FJMz&Q4V8D<9mnld$~G%~ck!#F(OVsjLND zOxZXaJr#DTbq)K_cGFj;Xvb4}d)Z^!%#+KoEv1}PG)wNX>rk5O4gK_b@1m|@JB+Qk zMW(_%NzXU8vz9|Sd*RsK0tF!(NNXIorJw!e(rzZ1tWsa>b*}5P?`Z6GJQ_5=#a4kc z001BWNklHXD`knQj3$mtxMTFJl6ib>q8#K z^U)uyNb_-Tv0Ka8csleCpV2;ewqIl42(5!ifofE>fI|mY+?$rnp|G9G*--C|>Y+9I zwNnStWD(Uv){nc|&Be9HA34>!*&bS+O{~SYhu%QVg>rFFE;r4QC~~c}PTMFdJLOe# zRGl5wo&)o~OBXY(WrVXPS+*nhj=X!Szh&ak6C;hK#w+7jH@aGVwQ}_wqDQu;*GXUM z5H+IKd}%1pR9o+DZ@WSCq3*W#cDlW4?roU&SJQUYxD7n4HZNTEP*y@hrblmn?+1ljvj{r<;?DHspG9y`*$dSFKjWe86rkxxKpJ z@9Dvp&hNc2p1bZ<{7HL%v6e%>Lfh14h#_RpPaMiV>Bo-sW8?6@tIQW1K-;U`(Xpo2 zhRxH)O`E4Lj2f)-tGsB3->hxf-|qV>=FH36ti4R_)u5b}`$K%Vl<&KIE!O*w{=pA6 zyL|gs{^nyx@B3{peP6D#)9o}MSx~eAJ6j^Dik|A3*!T_z3qt`y5+DKYNkAww-Hiw# zLI`0(P(@zJK-)?aCW}`fHCOUi{kYm*4)CKlh*id;k5_m)HO0 zCmL_{i(h9rM14GZ5T%z?aY5qkZ-4jmuf5U2WAzB#PDvnR@%DG#GQ9T24}jl!^Lwv- z=}S&aLKY4pDd`j)QPto4_S=8(kNlH=^u=83N$J@?zWgiC;Nsoy9?1UmFuMm+Y3=x> zFFn{jjOXt>UY=hZdLCw3N-RZ05m6(d#{yc`6dwcHA7jMdpF#Y6 zy!hfH+A=54o;^Q1JCQJ?g|WH0x;#2Qy}rIYy?;7vN8+rsuAF;uk1|P9!4)NHT5#5w zR5pE`m*uLXY^{^@Q8`c?mBG2)-s@d2oWK0q$qyCZJ{{kh-EQW!^gfP%^MC%8AN=yq zYJRV|Vb!!1-uuR{{p7#%$Mp+;>>nG(m%kD3zOQ%2vvT|8KmTw1*w6imzu2CA^OMWzTfh9T zzW&gT~X$ z?Tef3{qs%J(3VTuCrd0(Z+bgqiKIh<Gl$uZMI$;3RtpP=XazNJ5}0Kt)Jo z7j(d;)Bx7+JtY}AoA1zr zI}w33c3Vy?6rv|K3_?Yb;Nk#e6LNu7WNS)OP#{W~!dbmGXG=#JL5hZ?S}5DBrKaRQ z9c{$P*}AOj^XFG>dblaOlao(c_4Si&EsPu7a&D>d=nM@n%oiy7W0X>X6Zfm-`j1rceS(H(>WfGc9S8pmM1J*ek1wvj8*#jj3 z-P|dyM4^gIV`qX$c`6|yR5Mq*-wsKH#)zfLW!#N;wUVVmHwMEn$*5n}R&N%iJV+_! 
zwj3YhWwSINSC>s*-+G+qaU8A&)TUtwmqO0_Ti5P7u|k$y4vv#?)LV87 z2BJAoPryQs7^=IZMOlUtWjl^4bCafHwTof5_V(I(`vlL9?o4d=?6pki>Nm@BtKAXS zQE$}>Q9`bxRL_+vBmHo(%*GxEj)!;}(mw3NRMKva-B8G7pFNP$#q`0f%EXSSWsL2^ zvR`kOtF_6XDRuUxFO;@_NPq9m{dESjJVqf=dTkaW>a=l7r@Bgs1$kY~R8Q+z&yTlp zF;iK~xQj7PwsGS2ypdV5EYmhNF^y$-?}l-4)Us8JvM7@fDZ+h!ef9h*&R@DWxf=+{ zmT_1Ng*u$)&P05&@NOqbsYPUZtRw&x8}O_uJ=!7_x=9H!;jsI7G>{v;aCUs^tP|fYB_|TAMAO_ zsOh3S)S*m8p}t9O2dhlS1&yLbqhH-*&t63nX-dZNi^K3}`Wf=uPY$2F+deD|r-P2l zBcu{z=EoNS95O0tP$r!nRmyt*OTUX7dKB1#f@=@ z(duT|Y?6zNRav_a?#p3Mb8M)hr)0%Q?4*l++g$8WBp^d^7q|}UDc?y@7i5Av#x_oa ziij;1d5cvlAaz_I!UbA)C&5+`Ryu_sLg`^x%1b{qdGO8CF4Z)0h_!pUyqedU;Pf<> zWo%3D0ll>t#@%K(+{`VZr@@B89XUcqIMaxsl$Jn+TB(?-t=*v>M}1sSdDh4FZk*6`L~{{9hiHwlY`ep#b>3Iqlc%%a+AyT zG`z8m%?rZ|v&zTI|9GpD-?RCBWqtFnUj4PpHZq$#1}Oj;+|!1Tgd1QmkYFe>q2Nwn z2qT>IB$P5X76?Ru2m~e*<4wRg{Go~NqwkLO_^UtrC;#v^0Pa8$zrOSfUwx!__~WlV z`_A9^^Z&(v_=^`mxcud>E_^SaeM9467$0Mr=URthsMzM+?|$cVU-(iZ10#oAI%{La zf~W7jV|exTR}#PX&ik*t_IVjwNrrkwmh8ldW31zM-+t$h{Fy)XPuKOJzCLd22VeiM zpC~S$z15wqFOcC;?z`!Ym-L`)-+eYczrO8xSg=SdB1$d#;JuIef50IA{tSr!)590e zWPt#Q#~(kFdFh3FTKhiz?DFdDAm}<7-bkNgypaU5V9<( zS~`*jv}T{z2H9K3PGzEqq3#OOusECU4@bx2sqJZ4P@iMyQObAz)~~(s`q#*(a!Ow9 zWy$v*zkhaiFX=e1FRzb|k1PzrvM>f@1QL!2JWSKzx;=^A!(GX;9G2GBD0M6k#@-&h z{_Fq4fA^m}{B6?RNHE9 z<7zI4b&kC1owTJcA&SmU&!jS`Nx$uf?!`&FMQ%xR8qhh6o6r=a3FVELExJ0mBl8)V zJL0z~0-hO|Kz7R8cfWy*lfoiJcPs6(@byA2GsaO=*r!mc5g(ts>N!j^%6!e=C+ zh9KM$+UztE2#hcxYEgnQze9Y=VGzo|Xe-8XI7FFSFFgZWPK-N9ZP_M zJ8t5R!DwRzS_;PMU?*uz2yjADh9X6P2n47EcS;xZMj9bNvVf4mB0#o*5JpxS(h?w~ z2xKQ#7-5UwCCea%kW}R!eQBs8$sw$?ELZMGHd?yD#()BvG8sojlqw0gG*)C}ajPDi z#>`u?Q;$kOKNKL+*ez3IiGWZDJ{EM-)#d?sz3P;}jv)3}h=}GsIA{8`hS|3|H4rW4@v|Os3})DXTgI9|D}%nY}wE z2})0AtWA^%0X8Wdi7lciLWAh5A$tH}R1i+)C5u*35^y?@!BGUV2x?fGxLMb|BZg7Vn$gB$=e7?q_M+k*6c{$c{t)O2)yP3@X#;9@ zdJsL@&5Bl2I2%Bg>s6Q9YIH0-)Pt#lu`Y+Cw~Q=}q^|T6rM9J|+uAug*$!E*mlAb< zLc3}6o?;Xks9QmvPGcO8(AM;F)4Ge(Zi^jhq%77dM1<;C!j;}vve_k-wWKI>*HKV* z+x^GP zF|2EzY?hmKL*~h=Yrie6^>)2q=kslAH@-=o4(hw3r4?l;b*Bu{uq#pxwZptD&3LnY z;cR>PQSZ7Ad!C$*;&B+evtJ}NeP>N-6&(ZC!A4ctryWaF__v=4+^SOJQ!=A6B$N%j&{?<)w$0?@K?arb1bqAN)EW3nNhR6(&J?9cdav!`djf@&q9z~pNCN~bQ z=V#lx4(Wc#CDx~hY?t$m>*wyrouQXq zrEE7_b^VKBUEBM*d ztAqQkPB+706LGn%F)Q+r4T-=pRkC!L+NzG%-b**5wQ<|qYZhy@(ssAi=nhbk0wYif zRHU9uvz(F=0*?`0BEtdQSPPa@b{a^GY@8{iKqO!!i#w`d0m_7gAj2q1l1^jpLzZkLU(-fZAZ>>|2*ySP5I8R_UG=Cv-fY6Z1M9mmbE-0WN8oY zD7D*4&dSs8f5QJu#NXW;5AJ{GZ zk1zRld-~0YL&?XhbM1$r3_}@u8o&4clNTS|Z|P8igh=I<@{-6C<{`436a{I~QFMrhd@auo=nZlP#f-^7%Wt+A? 
z@KW46+I;+Mb8~sqa=(B`qbM2Q^77W(Z@&8K z=SdeKu$4XE{_y?#XZN!=(=V@Yj!%yGlmP)&0a{AJFq;B`_OW$p` z=chk0<>f#AqkrLF`%le(2LCb!@jKpxFX~VITkqy?zWwW;#5aHSXTJE#A3e+q`PTbS z-!`0|zmO6%8>hJ=Kn4U(!S%&+!O{6W5J=o{rtHK`wh$r0#HgWY!m$h2NJD9h`Z~+k z?tqacqoT9ZGpLGT7>Jrl_C>g(L~@S1fz4Pv>}sgg*0kO>We27^FhggYb>A%S|BFqRX61Q;2CaSDXQr;IJEB2XBC zxg!V+C*Z1+r@bX4@If+SLqo(+9=BD+F{5-J%nMhT)wf(DAQjAX8B zGOjVT<#y~%(@_dIw1m|?a_Cv>NC{KFFlhYzOFwlMPma$nE}p&p(d83{*NfxmrH?nu z^3IzdU2N}9_4xewri{LPzc@6U#GP7W7!(B}g;qz=SYjxi4XuS)$ix2Xu-vr1Y!X+s zZZ}M|1xqzq-COhADb+%0so${*Aur4#3hJ4*h;0#dkTMNv?gXGJIs4_}pdkr_s-2{% zgzf+XW;5R3$x{gJXXToK$&NI40mtDENIl4<&g6tc8iK6h-Ww(q(7No{6{!r>ZSMA% zYbj$dv35bPEGd7`VjqASk zzWU^3*R%Hhu;c3Bw&Y>S3M!PKHHW5?BVr@bTZ`pTI;%$ay>oFIg%xr~MPv*k#1zif zjaOS`5aD%UuvBtxc5m)6Y-`l>@Uz3gk(8<)QWn#P5j>(4&UI-HA+vUC5GK0Ic&0kM zcGRxt-iGP;UR=DhUuY^Y7RS+#4)t>BYrB1ZeYM}!#bQZOwJF;dN;yb<+}?Iqp|Xh* zrK->C+A{4px?k68wVt@IhwcnVY{m+CdzlfdQCnebqS8toqKGVM&W5(;a;V zt*p~NqcN=O_2-ZAG(_qWJlnY(4muiChLumrj$Ce^UtDv1|G}2IN$d4k#<_-2jk1}7 z7k|39s}p*M*1}0sU!J^Q%c@~l72mqd?ecs$-ub|?Mn3!I&4Jy~IMnH8X?>mDS5%HS zQ)M}zJ2;4#N{PT=vBC$>Zgo4?!^aQC@_2~8r!5V4)P-WAY#&Y2N#-u+@9nSFS{qq& zUDnpi6yvs*CCWDI-kVdG(=Qaqw25~4WS-Y&``#};xn_9nBp4@W)MGib&;4dGeTmdn zlha6<(3&PqgM^**MABB3T6L7?@{^lf=f0#Zz`g5e+s8+rpL?D>zuLUm&sTr&{d|vJ z2XC8?j^dS#ZgTg|^`72DRx1X)cu5Hnedt<6hp?meG92yaaGTREGSQULEN7OWbvZKy z?xX|?mxv5uqKXU$CGFmAH6wK-gkm!}7dyKU&Tf1MY%GJ*Qb>a2Qx<~=SPj^cEYbuL zLV&HzL=dV_hFP!t#8IE}`FGml)M}=osB7$ptNmfN+R;f_=IuHkLY-1_sN=95Zm(`L z@f|P-fl;DbBGimw$V_9f6R^r|H{Rd!?8(~Q=2qelyj*_q>mUEH<;BVQ`PK91=I`aD*YAy0gL(7Kr(b+{{v=UXSJ@p;yKS_PxBbP`+Qm>9!}41_U-!6y4DWWMGVp8a#*HPrcD{ojvq|h4doBwPX5L;9{>2y|8qaTZIAx+gY$mquU}ri`SD-+3xDIUUcTbL{u6V5Ykm5S zu8Wp$*_BKJ!nW+oi>JqD_j+dFf;uD?#vt}rSBCA$iNx*I)$#GU3Q<*hX3s<-6jGJO z?TatJ`WwIc%YXKt{}T_M9B!X}=a)W$r|*8fiCkBK1A{Ew#_b!g#Ql?_4?o#lKEF=y z8-#kG4<0|||M&lmul)2M*Zh>}+9wZ(-K&otEQgnW>aY9<|K@-By}xwzg@0JkpOKG| z$;bRle^7ty&pcTkfB$>$=bOL&wO3#MN7iK~zxUzOcMRv}FQf$B1{>SrGeTgDZZ4h( zwkHo9geXRKH{F&qFcK005<-9o5eB^xM zHWDB}Hb?@2F#;?hFmeK60YylpWj*y;r0xl2s7mP7G31dg4?WVcJGb&1)@XiDB9Ahox!QWD(w%7CnYJ``B6? 
zr-)+hUOr-Y`71y5MqF$kUSE9l{@WivyDIlb$MMn4xX$1I*85MV^RXPC?QY7H*N?{( z(@~UNj1RP{wG?UYMlOLXS&G(_dk<_5Ui;6ghW^K|v``UX^kw`CAU8Hr{uYJt` zBuKkKp@^g#3{J2jrjVl&$bf7lkcwo_*E5lwuw=VtT6D!&iky-1E_$uwZYNdMDRM2e zK6l*KPAn;RuYyL>&w!y6#a_`3W~iFzGp0F(ER}GPJ-IKnr*_(uxXrRIy&5Bvo6%m` z_ar(Vjz+E2)|0f{D6t#B^>)dmr->wS+654}Ok*(C-gBp3>vWZzsvp&Sws^_BS*Rw0 zjl6bKDpgZ0QFE%=_gH)G);uipp;rsCrOFiYmcFQ((7oNT-?xKZAw!8`DX@yZuAQ)P z2*#>G`K&W0Lsc(85TjDc>ljL)zN$qiv9r?$*$myS+I4q!vZ-^6VLO~}caN_puixo& znX2pAj~(xpn*;6YTh|vkY`Sr8ktS3&ay@ZQKenvbWAD_U)H*18EVi1Nnakfw!ZuB)ux{9!o>CEy3E})mpIvIIDKh%boBXgb2ESZtxs;wwJbM&)>gM1 zrt#$dX1*P4EzACg&(`(y{LYkh9aENOl`rpm?u&QZWM%ci(2MD++l*J6X~b=QP^NLw zHru9@)ODp_&b#aD>nTw~a)K-a(V&Zrm|pI{Kc+PtN$?b9C!8%-QqjZ#+0umG z5qQ!R2$2{p{%5w?5rRwvDru9*Mws6bNRYCT1eC7eF7^VV{C?5c|Lvs?-#Pk|3%6U{ z(bUG;R_9#2U%t6$_g8-B^7;qUv#&H=P3?oE*lCb0$-20_I6OE=TS_b=Yy$#(v$`=H z9PELvR;z=fBOy{aO-scKKvEBv55Dl`cYpae{^BqG{P&-2+mny~&biC>*+m0SXR8(e zzkk|}_xC4(&5i)u*b>l1^xdrBe!iS}zJK(+uYUWde$UUTZs{7sEQaNLUPtc#{a^Yo z{@s7?&y9chON&eEA3u!`&$^a8ng0A={ttit5B#z5=_em-_5FAM-Yd6Xn=;AmuRHJC%I0v2C^u)s8Hs^5PhFX+cwffsC1Za(P?i`i*hx_xY=slW(tQSB-GEs zrS$H5dXsBUl_rPQR;^n$W1A#n2_-lTYB-fb1>uMi>WdsbFNht1nvB?y4HP3tl;Q;g z79c>$3j*;%@`5Em5EzgF8!Q2`0TD>X#(*V+3^te)Mu8-Q4M@DunJ_KL&e*Q(BsLTX zVRi%}Wav!T4JR^gX<>fI24mB3Ko_DL0}sL=S~%8MaFx1c|d1SzV6=2(S@|!BDaRNlQtyt|V$kMn-ZdGMthu zVOomj43NTr6J$E!4kHngWeY~+HcgYAV**p(-L#zuqGN15*u`rBzuq-$8 zetWg;ubWCg70JV*r!KTw*V)ac?`Pdm&~Rr)|KP2kd#zs`zj}Rj_UP%w+0Ax#U0I$^ z$8Gz;w?BR~91LauaPPXC<@J+A+Vj0oGF3Pl4(2MPQOVMftB#C#B)l4b#B1f?h;IL6h%8+-iN2`q}Bw;rr zqY|YPSz2Q;%YHU2=2B=dHSLLQ;3*T04o|gJqvlvHhoVwD*d*!nMAx?=W>eR>ZL(UK zV#s}{X5ADSL%&Qma?2n=&5Ewbu49!1iXm_|+Zk4rBGCf3QkbN)vznG7n&cNWFQ_jm z`cRZ~3zZo$w#kt$XfjS3&Xz=DwAT9KG}LKR zQ4~Bxm>lJPaz<`7e|3JVPS>MX&k{+2T%=N<+ppU(bAliUgs;c$;3sa+>hj>y=}XTJ zhl80mC|mJ1o|&kMc0uo3{qpc&Xq%X#ufo|PMlpp@8l!Rh{OTqb`*#*vTOOyrA5f2& z;I+>Fk)Pe(JdZZHn>TxA)|1YkzQ3o9=f?%Tch=_H4~N-a2~BO5&%XcY#>3D@zt^&Q zydjn`xbJFAq_C|rEc&7tV5*sRL$ld3FIK7_edDYi9Nt}u%eZdOZ(7NEH_Ehtvp_covXZh zK$iXWe0Vv!X?yv!Ia+$tsu)MxQ?;;Ma~tvUZAIU0ud=0`t%drY=px{{soi8ZlLxep zO^rAE+1)q#fAFm8+4Ooh%NYWCzz=PTOoqk+#6DNgz2Uo18>3J9ely3`7Z4D2M86+upR6*q}54G3)OS zJYTbIS?tWI0RiUh`E&jmaB_4C&5{La8wiBL#DFEXVH!3vF9c*}J2<$%2%qcy4Zr(A z{=IL1^kYBv`@iw^-#9#dhkTaSe8{d04<9`~KD`f#Qy8P6d#NkG{mCaUKX~QiPd+-m zb9^FFGaEV7y<*jR0i||H*0z;Yf-Zv4EQT#I4B&~8tklFE z9@I6d+6hBb;+j(mhcTHp$Vj~4G?*QeGAImq2gyQaN4#KR2n40;I)vr{efhTo&-adc zX!~W3TR%BVb8pkNLYt;(Tu|-6FyJ&z`t;$ax9{C&9JdJ_0St~n+kF1? 
z!mxL6aJ{<1>>cc5vt!JT6nS%XbNtflXOG^OIXu37asKq==m=*8qaMQ9unANyuFhpn zP7mqkig@}eafJu{b$g$IvMJF^%pt+f57jia) z7yDhJv8B>O;4atJ_Ub0vRQi7C`c6+i`snT}Z_#Qta@?MsKVBXlG0c~PN~oAVdGF!F z*I#-`>21}cXAe(~4yKed5f<1OPzV)8udaiBnDriJdq4Q*2VeY&A9qpT(?@|`gR`9e zqhI;&|K)$@&(!jx>nG7JpG?!`O$)Mp?Z5qxKmX-F)>e-6CmRD20TLi~ijgsvgfO<*5g@UngM%Z{ z5%Zx}>03tIPRg;WHA_Yzhr*hdzNl2%*s2Us=mVF2-o>Jqqq}Y~jU&_8Qnms@qV%(% zk1qSZ^|-BLoJt0Tm7`8uZ@0C|CWP2A2&Zij2uP`yW<~BgBX+c7oCYI_Fc3l_jguDw zgbZdUc|jntlNO&b2to$Lj*QHXg#jTXAZ@@BAa)D}fy58l2qd%9L7)a{gUybcY*A2% z9b4=e!yp2hqoqwY49I*Y!6Y;Ydniz530NhFaAGGyfgw{ug8;W`IY&96KnNia##ksM zb_AHTOiG59FT_lm&0;G3cB&dTMj!#P15y}}0f`;kfD#*bDRz`dGMFqvAwn3Z9zXhk z|HEE>`3=do7F8k6#M;*n!cXu zFs?hzx}m7+4`+18Z~TGRy35lypNtnb*B9&SjW0&eZa=u4tB2qEhwl$BMY(@#`RQyi zoj<0g=6kcQ?Q3(OlcSSG-^HhfZJo@}(f9pclu&NcwRlyhi}PuHeSW>JG&k(2CCH$* zHIdl`E%7uOBVj2BF&P^Bg>gom(v!)U_j2hi+hDL13T4PRFGo{rDmo#Y$&M960mYPb zfx2!vs2k#T;c;WVv9K})qwvVbAH2)|^S=0nFH}%hdU@Nx+A~|x>;`IC7&>h$uC0=8Neuh*7#1apwxgqF(9X7|Zb?;;Lt&*$%=Zs&O?m9; zgNd}N8bxMcG|amIY@*2&@P zq3n(H{o%>2eYK^pI%?Bue<$w4~FfB*Dcqp zR5?5ni`TZ}*w;oGh%)q%ZsXL7+pvFl@^X1HhTnaj-rCJAn)+EEH9fXwOdW}C)~iLI zn`sCWL3&%UGfUrx;8i@k1rwi=hWPe1qit+VxBZdc>#Y~I~4yKSNjkyDrT z`T5zpUY=iQcyke&l~mukrxHXBZ&tp-mwA&At( z#$a|9#W7oGFSlpe#%zL|KvAIi+4JWow@wW*7~{at&(2RyPeFV}aQ6>h9^h zy?HwQ)vtW;<=^#(zWTLad*}0?kBetrj~A(R|Md?azwt9KeXe>tCM!K%wAmm1h2Pq2 z^dJ3~fA&kS|K)GI|LPZBIxYO8Z#?|mPd%6oW$tplj+|<({@!;#y#Lm#nkN~LzV+dQ zFWzl8aMLWw64KnBm*XWa3qZ<%07FUz z~Mp;8krs_xA_U)RhseaTl)FvWOU^_3at`GP3)Cq!S zY)6DttSVhzoEi4_55TMI)&9}302k3r&6FS=vq2|s>>a)Kxv%_{U;VQ``p$1%S~t(% zuSC|#K%TEw$NL8%XSX;WqRqRXc{5)>z5kCNoj!c@cysgoqGrb^WXYkYk3YS0=dP#i z78wbM6faypzc3sg>|L&|F?;*_U>k(kks_}zZ%!V({`}$lGRKE^&MrUQ-#?PmfG{8w z$Y8EkS28Cj2h97v=ET`*adpTxpW!4Q{-|6v3AKrQJmasXhJ8zxCHY_r>3r>x*%W z4?p?({iBygr`SHbJU1Mj-c7S(Fcx6IJ4i@@uFjqa_K$C485saZtFT?Be{T?4W-O_xeQaN zWo4?FGASGpj&3IU(q-SfhumnYYEt34X&Sd|*Ha23BZP#IWw3z^gJP%eAqt2cApx?n zk&wU=J2tjJA&}WIJBh&pDS*Tnp&-C00RtA25!hfTqJRO59U(B79g7|DJIo8h>~ts; z&2CO0F*qfJki?E{cFYU1p>qtWMynW9onT{jEUFcbA)xCMFeQjkf)n8opahgMplaMw z7z7BA!C)IG5+DUQ6O4qw*_^2%m=FEb&$iR3?UfyXB%+G~Dv>}nAYQNxh!@hyjsg*3 zL^_m;jtH2X`S{@{ckbMG8vE(PjnKYi#)fpC8%|OV;TUtwR zYf~?mE*i3oA%~$S`hGFjx|W;GR^wEyV7@P*gFc9!&^W1dcQjT z;*-_-rcO88Ghg21?A|N)xOw!=zx!@~ujl^#m!8h1#^r28VuLSX}*5YUMXS)Q&Ym<)sf+kk*5fYT}pl_(*ibTefQ*H)WD+1ZZ# z{(JB9f6|YA?nj&2CS4Lx^ddSs+61*BjEJOJq%ywu?ze$A-ub+&pzQZvLBc$ltW;ub z91in>r4lN6oVtNgVY?w=g1AhFi1P_9f9O-Ax2lICC;=w#I=R>De%Q z{dBqNmUXj!UaPmGJ@!o*&~-&S9c-gxtTAnxN6&iC=LeJ3+GR^#$gtgv)I^t9gF<9K;+dOEvkXkA6Os^06Tq3?y$%~^}b>+@9(2g^N)&^9#$C6LtUqQ$M> zGv|7WN+n1?{tg3qG+1?>e2Z+`+?FgvD>tD%_y;)cRls0 zM4`isidmM3I&^AHwCER>Ro9<>@3BsnS=0@Cu~N5q{_GF`^qcSZudToR_3xD1Kl8=` zj+^)1{q~1Xyx3pn`pM;Z>+b#^`Ca$Evp&9g{OseivxDwBj_&nc*GGrlt)G4L=}o;@ zUD3aDD`Nja;eH=4EsIXq>ULt^wEvJ zYRHMSENGBhj$><%1|&!U&!0Upi2pT%_@~bx{uds+{0c|I$G`pk*Is_vi`X<~j;J~+ zl#_Z`HT7cU_ih}W_h19H?+&c2dER+{zs{Ejao$rdMS1;n{!4%RTVMXkpZ)4r{{Guv z{4s6M`XR@udL8e6^7xIPcx}(rl`ho&;!mvh-~Q@9xctU{@Rxq!b1(hKcOHK4wbx#q z;FAwNyz}NO^HLT|*A29W`2P1ke&h9550`m3#BY4{!ykM7?#DNd)>_e`#A2yfitA|m z3c90Jqu&fuc|NgO_D-?hCZLeB)!Nh-bu8-)*iNfdi88-E%wvCly}9&M>oAT%o^Pue z-He3zjI6L6PROnW5+OUMsj_1^0$FKnvtuv_#EuQogs?1uNZVLOHX(#cLD=(KnaeNz zh2vuFpEjk&&6VY6-ZHhu^PC&BR5fLJB zyIS4I92^|DlrD7k{Osi95M+@JvKdxatNs0>tJTH9{yq{?2#hgw6j4f)(iKp#p)plW zt(F*v2umT1X1!+fXd1Sc*7jy1Ti2DY^d%mD^6{MqZzo2{6t7#uT`t;@dcPrr5Cg+$i)|qmC`*^QAI9O#2rKfY=dO$ifni9-@d&5g^H6b|g?>jKPwy%#Ohb z5C{wykTIly5Xb@{3nP#X*kBM2u_HhMVn-M-;)iT@#1GjZfea!s>f(iFw@l$=$HoRB z83Qs1A=#O36seDrF4czWOpLKH2!mWAK%E3Z2?Elx4VJ|3a2lipx5Nf)f!Pr#ox)KB 
z;)QU46p8F?%{4U-_J(fLFRsV2Y_2-OAVKL%0jnj@Yz#f9s0y0XAP`0!Dhex{4l=ru zrPveBM<0KD_wIdv=IPTXw{G1@)Qb`{k&X^Mdh+n@D{o|-imT`!J^twS-BZkgGHs1zg zSNfx_pLg9u%QvH1r^7j?4`x1%7@ML^w^sAp#s$*F_7MOh{dnq#-*arEV6{P(pEnEIDKmSEDs;2#KArArqkv++l)@ zq7OcJkAIfD^Q9lnDFqHWGlb~6K5CP*wq(&!=2WX~mDZx8(rTI{Li0k3+~z8o+0hSs zGifM>mqO1|XE&o|l^W^w4YrIUiD6hC&6mu(zTM5~gEp?O#%&UUzPm4-N4K7~rN=Fp z>SkiO%)Py%RSj0>i6l^$tu?H)&>)x%Q_D9O9+wB7oKN8#%KW9%d#7)lYWuWZZod0; zWA!I{W$a6fcK+;ocC(JzY;V~&^&V-huBQoR%OM(Fj?XTuPUnu;%R(LHZlR7j;k+Kn zsK!ymHsm(7q?6rRONO2-C)4(-c!DtML%;cz-{AivKmRZOYe)Nz#X*U(uC&^kdp8fm zR5F?@C#|-eR#&U)#j-FgBQl~7l|W?kJX*Z=<$k;=V`W}E#kSGNdiJo^b?6vulP0Z7If~TY z``>#uFOQ?|_g-17ulMFpzW?K2e&^$>x3*vVZyrAQ(ckyVYt2vIee(GH!)G2&du|?H zj0gMuKlf9wzIT3WT77c$>BXIH#l8EAe(SzvTi-l?`uMu7R@Yts2kvRO^V;%e{f%vH z<6820bGz7#7AbGn24u2S^u^oiPVRqrHFa%r-n5N}R%52sq*-(&yehggD|>ssc%C+oi0#7+3kc%VHPTJNtc|^{}jUyg0jF&*CTV4p&!q zHagq8)5d4x=6d<)rZ7DxX^GQFvXH?dL7AYm#S0c#Fq;X-Xaz;GF>Z!*3dr-LHQ2m7n~wx&GuU-~GZ*fBsv)`R#kJ zzq2g<{`bH4BVT-5mBYq_vKKu$ZP!~>7^B_V;^Ld&Uhi8O-)S;!lzUu`2G?MmCh4tv z@oRtkgD-vYXMX#e|M0EXKOgh+d6vu6==$*ckKcal&Nw;8&6DGw`x}4g&fbHIzwpcMs#_DkNv#Ety3uq#dvdNWFx@mtLt0Gx8^fNac16>!=HO6UVHieCy#EeKlteJ_0tc> zEZ$6^2#WON(bGG3?lF!VL)Zo*fw($9HyqzSJ}@_JhyVZ}07*naRDX7kIXOOX(tvb| z9g){pH%IqhK7ahN%;CwMi?dIc`-j*C2pfZh_{`PnN@oA~i0De_{OQxvTeq>HVJlPe zv-8!Rlf5Tr*LRN&63J>|wOVn~;bJmM?T4@q)I)UMcAmB>>rh~Kg~;MQG>>fk_R7?C zDqXtLl~VNd;U~8qys7Pa99s>oFP=Pj>Ao{02qN6U4?cSM`hy3`teGD_dGgY|+eMa3 zC9<__b=#>fkM8cr$iGS0o z)zY}xtm@UOYW?UpzwwoKzVN%eZPS_$9{u+H<2&25XwNUt47U#s8xok$m>q$^>;&Tc z>P&ESc*U9Fmd2CKQtVLW0c?8DW#Qgk(fmveHgFq)-?&V{&E!Dw!n4 zDS0S7LqD%$8P(K1!!~xODsXj#Liry>(LZ?dSwJ$RT)5Y~j zNtL2jM4Fupqzpm!DRqOCp@e8D!jdgxtw+?hKm(&3$q|CxW1gh)KZOTSQNGZ{# zv_hEHQhkxTg}0loHFx~hZ~a~VU-xhPnLmEDIZSoG^k4twf9a3^>wBff;<(FkD`n0b zrQzN!>Qw}(0@h}{Szk_!0S-2T#1TTP8#Ceq64@$6CNn#D;8 zzxO1&^Y6ytEk$Q$od4kBx|Mks!&1$;y>|2}CA6pE`o~q?Uo2Kvt4(^2%dB9X^wAGq zuX9e#gZSjhc3O{g94`JR-rzLWvg|w$^ZR~l?Y+-A_nbT5TeqsKhpOsXY>MorNFtOu zh!i#0ln^nB6v>tyDNYc;NDw;+;26k{0FHmeFks|Q62OKfQ?W!lvSc~321c@E#S+IR zn`}|t)pLz^9?rP;TI+jrs;K};{wL2*~-Isp*V0-r?PmEF-z5Lek z?EaaWI#!E&$K9aPhn{@w-e5N`!o~gDTlK!}z1QFi58k!q{?VP;dAT^8S@p~n*x-6I zKDv)F%5Y@ud7C^ji){|RxX5IbP&kF%xP)fo8piEBb;U0ZX5J{%G)rNHRUB%wVp|8N zUz`fL^^Hp#V&JUTwJ)E0=w_rbz#5B+i3VmXu!Nv)Ygi37YFpS_vr=a5;yt_nnd_}I zIounK<~+S~agcfE1V|HQIsi`~0TN51M9Beygq$+ONEe;`QDvF!!Z?5ln$bfRZ4i1X zdnycJ0Jer4!|Fgtb4R?iYX9mhhXVfm^Oqa>%+Un(0$azges%xPf8cTjSkDSrCu>r1 zNL6(K2Qo@T0)l*Qj~NjMpI_u=a$a_x7_IenQaVZr&_!K5Htou&P2=uc-@1L}@rQ}U z-PaDEyRrAp*Y96?_@RC*Z@zi|#xr{?SgDXQiV+>Wq(wqWDz07+u8@ibmHNihj`FOY z#ZpC;1*B5Jba~Ui{;h-WdEvXi^3`wbUcaHqxb4SLFI2Pc&0G6hSGSo|<;DErkNl1A z+Z^0I`1@a)-+JSZu0Hh8?K^kA>*2lIbDbUSuWfFV#e&HJc_5CCPDh&?W=87#@MLp) zD}!XL8<>bbu5YWKsto3~yE9f>bPO+;+d_bIE>TMW0g?b=PC$hKL>B-OfQbSiN&-N9hX6QS0BH~*0wM)S<|+mU zOeA0=VWQy9s}B{u_tQVLZs@P|jAdROwZ$h#hZFbl@mV2U*)n}03kY?9Qt^&Pw~*$O z)A{LmhCDq!JzQN|bpa9}yo(|rK$FQ-z{+q1@ac5EzPsH}0-5Dmg(9-A4)-3u_US=7 zKJ(9g`K!-w@4Vg7e31mmNdP>ZjJK~`+6-199T&f___0krwzYBGu5^bNcP7&}vqDM9 zXaOPc;O^1R&LwnK4<^n5N^_O1^;4C0HnpQK6xcfPzp(qKE82sU1`n9Dqw*5=!fE zg*wC~JG^&)bNf+rlcMBJGdViEbopWAC z$TX;49Sp5mTjk)=s4DQys4AGg@EgDQvCsTP5e66uQG{H-|Lgzw`#Vor6~|t!=fS!B5Ul1niF13zl4XmjD8y0O8M^P0k6djn)AN zBoGR=6a+wpl9TROB1HitmIy$?y9fv%9v}gbE2DLSVyr8xDv4|pOHTCUfDjCf0IX&e z7z@DShF~-cg>*6qG$I(4RLZ?Ow|95; z3Y_wzqod7@U55e{Es6n@0rwB??mc`oon=&1-`~aW3;DEqT!hj%13rHhFijtyq zN{0eJx*1B4ZlpV;yIV3Zh>;(0ggt~GbvHEYd1`|R&NpRM=Z-@M}-^HJA5 z7r$cEJeYe0Nl_g}P&B^us4k3tidd$v+dox(vR2LaG)xuxbTk3xnnD1b zR8!IXk10QaTOH8E-}uZD0WStr;3%uFhgk?XN5$g}1e0&KJ8bcvXw5fHnw(mRC#7N> zsAd&@Yb)zNY_Kes@4bS%1}}_$Tw~44uY^04kJ#^uU9T`F^LZUiw`HC2EoeOn^YE_| 
zqhu#ZA!dlr*7)cES5Mwl$gg{>nNlik44YtY;GfpGKYuTRpu5xmM7LVQRv&SHr}1lc znn+emoR=eFv1vSy=JX(fLPyD1OY&D8nllC;r5>x*7T`$D9%6&`C! zH*SMj0G@Ay9Tvo7qoI@aw%#|_?;~+Zl*l>2QHX(-vtbcrLb})Oj!_}2%J%f2;s$HA z`WH>M7p4r3XgQO7CqV7xske$r6HBfYpHThyd^LPIOiZ=H@pP_S^h9rD!SgBwUM7CU zM;1mxe=%))zY|`N{@idR(@>^Ff?zRr*#1p7E4O->2}q$x7ZHKxVVFx+%2n6beNpyO zSBfU0s59!lPEw82f4r`bw#w6-Hr88MY#_6FFO~B$stDcRM64h4`TL5+PgUm(E=Qv^ zKSa?et`9i3bA0)aZAuACXI#~K-H)8j_pFZhsNynH2sfq>!dVw78MB;jBuWAg3(xRd8`=gK5|IV|5 zc2Zst1w{R-sIje$4QrCzcO%Bs0i;msgK5T*xndv}R~$>g`;h%R;y-NJrn>qorSf2hm#X9eHNHX}wpKhV5NDS$I5W;?F+NclTD6&Al?6K+jn)iI8jiR`a?2^scroKnynC zql+m=<-)&Q?-i#V_%EMVNNcvx=M-0<{V0$+l;*vrsF|p_6XO)RMdS%DQSPkz$V9?9 z1D{iS*X8Zu_hjzwVTb8*7{|F0r{3NxaWQ$?Vz-gag4dz-i%rd58}#dkm&U}J?W_$lTF`_nVe<#<1X(_rHPv~RysIR_R6Y&=mx*dAoaWp`oca~Gz_?y!K4QsMFyjw97 zNcuVzgu9*usi441B4Cu0L;s?e0Hb&TRcH2SAdm@!TnJ_$=Z2DoF~Z)Vja?hJy)y1s z2(zip9!q{gC@G9Fq$x(qG1=<4P1lL>vr&*8+1n5pSLi3x&bK#sh`w$V6c*QcQYr}J zE(EagQAoW}M;_vesw&Af6CSOlnXitGQ48|J9hn&}D>24XD=7_b24K@rs?QEz% zZS%PhirLq~yZ{j?Zr}KVAW!W1qS+py>E51&#=g;f?P(cw5Dfj~cQJVFt)S z(O5BO`31P(DHmE{ktVE<#z)B=V6c7AHt5@ad+q)9U5>T{mqH9>uNV^;N$gTH7y{Zbz4{!)+X%{%yKgn2mSXLOfh=dp&H7 zh5ESMB#qq1&tQh(+NqCMDrkAv=iZoq4^-XF&f0y6gWp=1ukrk!q;s7qgX{2NIvz?0 z)acYN1#>Ea1zkK+uP4047xT@EkY6(^oo zNJlf^Q4khM`lu9*QQaiIW>pD}H~M|E^iaQadl~6T9)|NY;OKbQ#pe!#oO|tqGid+C znLouD5!-dOBcOk^8vCeS!!kLh)ywmAU{@WN!pY8?!O*2fzheDoGlUp8dEy5Oigf^| zfWBT5k${n?pI9^zQ+OejMVe~xMwUT6qn+S9lqS+J zUaiwlO7ATTWq=&O$4wF}HUVPq)54A-hC$dhGqJ%_PmTMY3}mB+8ORwo8?`0DdPqZV zG%@)b2_-1fLKX#vz;hv^^b)|jCJ)vdiEeB(9~y@Of+Qe;RUiQp0Dnos=nVjYXA>hC zK{${C)W?1a@J}o-3jYxt4fp~K0k}|AbTbx9W-#&{d<7zbAB6%V0R%J(n8${q(SS@P z&`*#BSFE7&MML3%1984M;0iD*6GcJDM_v^tsSLrEKrd_ZfRRuXH~?-5V)?s@{VdYy zcyvVQsUnk~mGlQTCoI)d={zNQ6eEQseQVsZQl?4-g$4aLYP~Qe0uv;M+Zz{~g@w6= z$GR-IxV>{a%_IQxQw~4;;~U(IbK0hKc^+l-6Po#j)_-!Q6;|jXFb#Yeo+a~KayNt^ zSTm#&AC17P0HVpI0w81(GU_j#3R!Eu>Z7n)Q%vcd!m-}H_(Fn{X5UxpTO!9`%IfmH zft1yMJ&mGM0%B#ppcyupu<)Tr2n6LMsZ~p>KVAKiZ=j#Qh0yOA(qu~Hc!nA?cf!K)XV|2QIr!mF(wO93N z`sk1zUOD#J`dG4pB=%t=l+~`96<9P2!lww|SE~%6FVVC$H6|PS?&raBwY+V&`@52& z9*2DQ>?hyx)ULkbAVElhVOGT5mxKH4-0m8+D0u6{Pnpib)kQly@z>RE+X99^`TFC8 zAzV2YTikq3NZi~kE3gQ!#Iteq@^i~d7W_&?=mtAqY?-kZ_s?0dsNZuu@(O9`04h@@ zTd7sK>+;+5Adw<8j+hR?R7nBNX|3{GVQ0#M6UW46!-01VqU;N&o z{b~e55%oFmopYPk(bD-YNaj%HkWM{XUG%ZOubZ!)LdWXAvCaP23w2J~QX8UA@tu-< zn)^*wO;l^Oh_WYiE(a9;uUh)zL-V&TjFJjxaO`g2SvjV+{{3^!=XAf-)P`(cS5vYSL3|mVBOjZ{j;=9*eKW(Ikq^93}EHhJnar+ z+e_hhTeBk`xSfri-d&k2s}?v~Y#VDwxMY+CoDToVkN@a?`e9)xaaii;nmlDisyY5} zWT^SKz6pcZtHc`-UZ-j|5*!=?dZ)H=d3xOttS=U@$s|?YH0{7sHzzk^)0!gZtO;k+ zF~ve;r{vO~y`afa8po-$G(D5*mlJhOly7yKi+5-1(wqmre- zsc0&m@>n)c6*cwhzbEvsM}(XiwHlts?Z&S`5&+ft%;byOfB znTMb`d(IkYThlB+RGp-n#94c_D7r?q^viENAt+>p>Lj016L!a2#mvO0m|)3FxP2VB zkpN8(zp$ujwqGACwtv(x_0xvhGot$(N0Cs!7Hd=+Rd8@nzf82K_y0Rnq1#jC5UFeSjR z=l6@r^-W=&I?mFW3fCunH$E2x;gqHcSO#iX=eyZ!b0Zf7U2Gkz4tVISP?*Tf0Q4|ZFf*Lgm~zsL7^ zc(43Z*`U#zO#BiPRBn7&Wwwoulkt)A86R0lerL(OL3>s(3F-Qpt_Ad z{uXb=m@QjW&8HZ;BOisUw89s+=4~GU|lbJ3f(UcnVwNJl2Fy*^L)E<#f>$|Vc z`$aU=1w=**{ZAi*nseS^RE#`?g&hyTmLQGDM?m<4iop$#U^56v0*-{TWUdDU;|9Bo z$-(6vL2%>>h)AOz8eEC5E<=Brjbt)^P707f0m;u!70VyqzG)050d;~Y@P2}d@DtvV zv!Y{BNW(EO2T(&x%)uDyfY<26)%`{Q&Yg%QlPM2CLNs~s0SPJ)p3fIB%GVWTCt-&L zlivhbpaH-N2;VUn1)$`BfFmR=9BK518wc=uOuH0U;yMgk{hE z6)X+s2L6Fjj3`tTrZ*3Z2t@>f*GNMgz_d>RUL46JmRyNkAQ%Uw1q{-lqxs(Rn-ZuMi|48O(=8j{=7d`VPqR<+eRGQJa?3oOlAW+*Dz;_k~3T3hVsid6drL zFb;Y;XcgEKjrA59T@t)MfGKnGu)PRL$n^hlGp8eRh?Jo6_k}Ir-HCW;ly8k|y@5X> zIyySk50B2WAqA5HSojGO3HyEBdguxh#V|At1I#{#91Hs~O}b3+(;^Dt0uiK&#S=R< z5E2HDZ~a!^1H`ZOu_37_Vvj;n4K_T&# 
z2C0`}PK7wr{8ODO#5}$#%%Zei1`E612V>pEw*LnFtT@OTH~idKIrti4;hYgTSO)9! ziso2csMO*P1d%lEJ0j-Q8YaImBUr$onxSlu3MJqWgzN|TnsI9(o3xnoIl+{0z*i;r zIRx;98{@T{_6#wnyE_Md`>`dXkDkX|7^cQJDjtVI4_!Fr9;C$jYzX8 zA^pl$?Sj6>OYT1!9h%#>5Z+iO+O%!G+*rDsPg%N1Ec04^EPfupbQiy6a2XYZ935lH zV`q_821(02#M<5-o*i8~NZ@%j3m>;NZLS_4P|CjeiNkqH@$==J4wH+|5vIHBk>}>t zwM~|AwY~x+f5SbS_(T1}U!Tj2MEmnbgG-FU`KgSdlsy5A>%)?;*x??G zscSuoxdw6b7Y}z1)Zin>xtP$D^|s;=)$ruE z%rxOHTYf_KFT)o)HKJS}n?EBe*`r37w+~9sOLW}kPJbx!e28^U&*v>85)f+_;{11& z6Dc-6*N`gQel^t{JGa(w7M0h+okigEWJ&12$@X?C>7u+OYo?)21lIdIeMJ8VGsB|M z`><0#ESeUhh?Ux{5hKge{Qp@1)tL~z)AhWgO73%Qb+sR5hS?4e18!nG0JmIe2e;VR?}uf0%_xvmkTyQ#4MXLt0ov!p0Oq*?sNH`04|gX`hX zn`Y-%J=(gw`PJ20WbTF(e0N1{AI|>!@AQmcsW$<3_Y6|@<*eUMe>T*HC+sa>&5xoO zjSeH>)9#kNRrT`COo5rg;?SS)msH&+Eq`OL7IEC!+N!E5#Rb96kabhjcc1txl7a9inS>kJ{J}>iG=8zNjaA^H^GWhi2zxNqoBZ~vC(i7~r zF8OBPMh?xMr3K-36{h8JIwiHOCJ+C*zi&NRU&aW3YjgR$)b!NPHntfwPd*=fYQE_| z>EE2fL1CfT`$4kbicN$VU(8lS#_haG17b-0ALF3>0-|@)epDmUy|%4=?9DG$g1A%6 z{SkPgB0@8NX7{zWM~^(qbmwIZ`CsWR%(n^(X)ewD%|4Bd9SY|NLbye7L!$+oJeu4r ze~4f*8A4A!m%cMBG{TOGf?EHTB&(&b^@##` zoWazVt+FC#ZeQtNJBLbYu<#WIxDi*iV3rCTB#y9J%;QHB0k7+b$e}pc3?lOzaS$wI z2tXt62f&Fa4AK5EP_!&S9;R~A$+Fj|?73f+RZ4=& z(i`bJ>A&4G+wT6O_-L`ob<0E|e54nVGZt(YBof9mAR~ce&reA}5h2SL#?@m@ZEe@V zgtjkFW_wib{i)cd^8E&%SlWH8t=C@C+0ad3zLKUrMW&toe`D?LJ7awzprpEO&fTi* zvyzD^r^faD%&#O-5+Z-Z7Z>KF>c0s=c;T!StNv#@X*yuqh1T2i6*1^^nHZrCeISJq z9{cNs7LUC}k;p?mFY5>Xz+NtcMz^EGwMXJ%i&n(k#REtNrSXO|7ojgRWA`rt5uD|T zpC{tmE^AM>(p#pe)~7gH9JfvfV{_Kd&JKQ8=bV*3QL)3u&E^f(i}kC`$^WiihtM(= zePN;v4CR%qqC<531`Tn7;hN(NSpfORj+veQ>z)Lo&(LRSGz&9-TK#O&t5TI5z-y=d zvjZJDXABV$%o@)8hG6S)6fqQR$mwpfJy*fS=b_mC{rZmUehTsG{;Bv!*w?|p?d->S zQyLH5BRUv&V!{IXA@Oj)KVu+JbTE@&PAvymvgGL= z^jrOK9KJ&|;Fk%i2vIYwgg87W1d1xX2S zJQ9BslF5(=98P*j8$f0S#C_kKFR#&tuVgn2T zGlJw?JCH~K1m6Ip@t5d<+~Wt7C=vjJz}&$A3+_4z&~=Pn2DU-+jI>e3Xb2XN4aB?$ zAVQ|-cz_$Tys*FrROrKy*eX0gR}743RF2o(S67u2n{g0;M=(7ikwa@{qfrEzPznG9 z#xi2m6o9zDz|GY26OD&GmygnXM_wFJ8HfGWUJDyX1xJ&~nr7w$yR@dFnZ~To*1>{& z;tKkv^W=VNO6fleQh$0XlYoPxr8dV6*^`KL7R-VBxrN3qCITgV9AZ)s10rbq8NiUh zN&uv4G63UXdJ0BKq>&1Y-rv3fiHoUeMz;XEF@2ICY`$N>nBQQr45(Ll1=KYfF>xbvPcP8F?|C5(23Ju9dr@meZ7g&T; z{+qwi-;TSU3Z!sIU?b2RB>nRV=AlhJ*4x`l`!}`3N*h+a;C(tDG_ua*7 z&-Hhk5z8$=I?oUGxX@8_Vr1|{qKkM z*Y{ngi357(wtu``ptwd~^K!1~r^gQKo5U6C4QvfWPI+v%Bw4|v_@SUAtoJ&_>=a=h zI%gXc%(Z#$rXlX$A6WvY(iB267S0;l>RaCtPSiD~akX5s1oWnK#&6yJGIp}f!`?gj zsddQPs}9v?wcGua89wH&@iJec_WNu=8P`U?BtD{60XdN%Nst@(OgqJ|C6uoi7G<8H z;P6g_Du5GrCR09i+xQEJk)3h_NP{`DB)7!EY=Av94K^&b#zI%&P5f%jeF#Ukc15y? 
zJI)jvH18l}a!T2I{`|IPQex+8a zI-CCBZ8z85@R|85s|3m)W_m;sBfY1g;~-fIhSHEQ|JFyZ_^BRE`|N4FdXN&oTk;^( zdY>$;Tx_P(j^W4TWL%~^z|`(H9?qG4&fD(;eJ)+RH_Pud9@a_@W_MOhZncvXL`Dv> z`TUBIjUX9P^6qbM5576mdel*QYKuYwifFypF{Nl(X>=?;$8!&)-*@vn;OU{w%(dnh zy5!sb2>}M=L*LpI+`3Q3dHEXB7(Eo^pJ+`B*?=Um4 z^cMG{)z%-UMF!-p`8L*$r!;o9G)?tOqGt1@T1R{ueX&%p^eh_$GdN5NcPl1;!jnV4 zC_cH}Ue&1|H}g?#-%SKL4QpAC~ZViFm$K` zZe;sn-;Viv9QVOAb!*Sz00c9&D4_iO6lYE6D?II^^-iVcFtuiWik6#wtG$lZq*t+ z%+FiTds_F7+!~aej%zSKMgPFu%^F*xPPLW&2eEVS=9=rIo!lgvmiytcdq;QGamVJ~ zBK6vnKZY(yx?%FPJaY0-wpvr?qjSf)b|24%+E%8K0_H91xw(SZudV0i?8GrrWM(lJ z3FHC}!FFAt@&TPzpJWGlX4jbr)T6-!AG?|ZYO@yIw=l7^hvR93+qKqWWee|Ad>R>; zc>y!EPNKDP8kaq&QW`fu_QZmZDzex5$IHkTK@lCDB``}tRE{dS^hYKvd8A)sRb1yBxENh7Fm;XjM5#K%)yLF1&tCnNJKA3DkuRsax|+UlfG+ZQobc-u zCYuS0x4Z0Zm!5K3xxZ+nC?mq$1(34K7k$jQ zZihW?>1k;Xe~0SR_2>SKAbEnBjNbIF_1hVL{*1I%m!*|FZ#vi4OUoD3b7a~Wre~G< zz+b)Qzt;LIacpP{+YpXkCdc2ct=6fB=(gK$gnPKqdtSDVG!cgS26toqWpvrlDR28X z?~&oNlVXr_?EA>`_?N-e`1=triuAYEW!00yrMdQezdt69IWxyT#vd|-u)=@$n|OJY z;y)uelziK0=;>YABrM!cQU3X}Hj-2dQZkYpuwmJ^Aj;3&S`>2>Kl_2rk$iU8D?axCCbc{WBLjAV2hvlr@?xDJsE1?w)5C$#lMq((=snDOi1i-P~PWs!{@YF z{4yuwxvp~B$N$D7nAfViLUp(f+OIBqOsis#uXj$)Ce@;lOM)VgF|?AG#ApdLwBrPj zPz@lMZErx@S$+=B1VEX{QP8)F^_XTrE(+>9rWtM25d*wI%V7pc9b>z~5--$!ogjX- z!>K$D%yN8wAh1V?RcPS0>fcmpp-Sm4EAS-X#Po}Wz@!r|21k<-YX$Wp*cDBrB1@?N zjbIXv#%p+whRFSI4nzyTKf>r3WC9e-AgyW`jf5}JuYlemk)wF%pUqHQBr!69kzEMc z7xoB6f4|rRD#5e zQSz}rnv}2*iK{vs_^&L!1*%ATmJ%K73v!$*3A5p&*|T%0``fTJj=Kb%>UU@eX@2>2)b&^xVUS3SuU1Q zJtF#VXKcahB?*!quOIo5QNn`KAEW_F+Aq2de!uZ{aAe(fRZy+-`%f8GwKoGyMlcfm za7IRy)ps2^IT&G#wpiMK86S^Y&lj!r_ih$18mpDCzOwnmekj{pzCS;=6hyJiih{Pn-ulAXj#Hh;Nk6I3=&9k_dW9WIzHJg3Ix zHJ}y$wmwQ;HBk=6d~;H?4rFWGnfR@#BjKZfr3L=9cr+y;0MUF*^iOynUOOjI5d4fnDX|q zb>ThK4bV?&`>$BYJ?(@~$Uq6fqV!Ve30DGkqJhWGqx(6Z%f}BV<#)r6?k_RDt>zCy zkM5TrHE19D7NH2fz*DluVAx`lT=y$S`rf@u5pQ7KGxlx|%bRuyJfw`OPpV}*jSiya zr)Lq=X$E)UOScyfSLHVrNtIfZ7ZHT~*8}1=jTu+vuL|}bHZ@+I-};=Ci{E8D>?_~L z8k`noAAV`w!wrhX*;9Y_t%7?nF+_#q8T(uR>JML%=;Ovc)1_AE%k*wWo9o(j#8eY) zKPj5i5K^+fA9k{zwihiLR|qez6!vUQcHk~hbJ|d6c&i_4`rMT_Dyzm!$bQCipeW=6 z{yTlHg2hk6p|T!4#Hy;$1LqpeDi?2|SC+#^-%czTK-z;v@ad^gxPYwF+wD3jqB!ZbA4P7aVl@GU)ta8dxA$T8gASF z6?~ofSv<>B!RPqP$+&QaZc?4e8n~mvJyAva--uouBLMbNY0vCT*iDZeEbs2t@E#K8(-MY`SZKq$JF&`cx zn)u96GxualV9#@{R%FC$`K|4gMZ;$)kKb$JNq?<=l!}VGIj_x3Rcjo2pGF=joBQv1 z$PdP*3O}}B>Y93Ly(Lm4m^4w6cr0VTF?eOdlc7jUL-XnL0eW(5#m;zd;>6I$*n?nV ztaEH?n4INT-1e8C`Ea|$AM8d<(tS)W4)ll_X+gOadT?5T&4Q5kg~;MbsfNDW_3G1K zSGPnhD>iTab8<>+hX$A?!Gvri+-VBJ3yY3Cht3|1vTOhOI6UEDp~j{_$F9ZxF5tJ% zl@x69s_^>QOAg+h9ir${5bR{H=una+RL9;8NVQM+2?l>73xf@&SKA}Z2?ClbS^i*3 zv+a9iZTWkde`_^r3L9fOMG@8$OPvM~neqjCV!MIK8g=y0=T)?bL}(VCP#hMj>aTEr zU>KPTDBxotCz=vD1YV9owuA!~j38ck5ru>IYOA>Y*tcs3Tzh9Msp@Igxvl%D8?MWh zj~B{q9#KRFP4DA7HDdTive;7&4qcCz&lh#pK3Kd;4s%~10!bi0N`HYcqlSBpK?^pB zWfS4Ub9cbfR~6P+Ik_~``q!67ku^hUe06=<#b&;3LPz93m9LO6-%LC1WXI+^is-o& zNvWav!?dbHv!g|C^9u35eEM%C^+My=mo0oMJT^%aeg#Uwz;MG(wSPe_Hh8Yo?Vj?j0#1O+C z8=tgHxN$``0A@2U;#I{4>H2~U{beY#v3}`p>H6DCqNB;7-3eqeP%XC=twTl*6~kO? 
z6zZ8eqnuJdJsJYDUBib2YZ5bpmgfLGA`}uO%1Gkghvx@Jp&^hrXaFmiY^5A(`!v{x z02{@i&e#Q#l*o(%1x1?y+YFF*+yKMkvsLLepfnPTh`@w#fyHG*ScD#pK8}J) z%7Qboqmh!^0WdChAiJ z-u#ftHpEz6l<;-5p@imhq(pWn;Lii#vlaQOWU~8*0`b5*rjI>kpT24a*3iprIT@FP zD5+Sv7fIn&Sa@W9WlGJuKGo2_^L1%oqh4~y)X;&0A6`S?4|QzQwC(rT7qpG%an^y7 zV90>9A9+JxhyAl4Db64G3z)D{Hx2;OYVcU|&`9@2Y$q#LtBg}NhpR5#hl=A?+Mn#k zR<4_c5*|z27b=B)Mdv;9lY7jH?MAFZ^@kmMUVOPim~~4@{+*_**4P}onr}J;8DqQX zdhqpEWSje%6LduMS{lETeInc}Z~;>gn}mtwKjlwZ5_|LNR|Jhc;l;k6u8||ML6Ti2 zfNWA}BM-B0gMxLdY<|^wv^pd%PP0x0RhN4nNp#{@P@eaz&H1$-@Cn&LWYvA^j)qZypMu>ZkY{kZy)YE z>24aieD0@x{w-ZGMIEa6F;GcBiQ*{DvTnOa6{KdgS46c`oBBI+gQia!7Em+*YGeg|-UgIp+(efKqvehlZnmnMET)=V8>%Q-enl zk+tNnQ$EeNrskLD_urO|874)Vy{vX=zEZ%_7U2~WwHizm%I~(X6!>fA&_ElnD+p_h ztmbeY`7@QeRJ!p?hN>qznjzs;;l%U3>`xKo<5l)-{2ragQhF1irpA2u`m}?Pri5x% zz9|81dYcIHP@9+4RWq>DwH#xShaciKEqC!V2i=BlrWeu7-Y1p|o@apwSw%X?`p|pR zYP~8e<)W*8JN1k~vxu&rDuOe6yyW3#@j;=fjqH>opA*B)N;W%n!c#ZP7r(FB`<+k4 zBBUzE!cO~VXqI-*BaaRiJdnAmU@ra_x|f>U1g6v+*3at8u6D{61PqIp#pqyPdvaM) ztV^pj)l<^!s=$6EZL<)?mkFJ8PhyD=q53ozJs}|zXdkh5G8#mH!R`Z>H^@aL= zDmV~A95676Q}`;8I;okGSDIG7<1cyD=mIvDZLXX$EZci4!=`OfGeym>8om1OJ5{WD z;cI7?v{LoP6mpk~J=^=4J#c!PGwa45R;_0)rT>lbpyqkDkHhex{MZVcj&A-iB z;?mM`zKzR2>vKERP5tP8f7x~T7ELq?8nwu8ak$#y6@xK9vU>Ekhw{QLqDkr4v*^#CDJB!4}>gp5CPlLRF7#nbmT+#oO^65c1>APOdV!T?CH z_#e`x>b5m_oldPh)!^c-({Xa*{$5}yEc|NU#e+nSd9X5@eXt-_sX!!6X|`1#_@7Am zOvu+5O9TP`Sm~dg`a&sb>EGHpJX0DMxbKwjYi94yVFZB)w$oF!b_+63_ufGjz*6*zhB7T3>$~#CaogE+Yn2yCp$qH`Z$>CQblhlNk zpGXGXY7AhPg;6N#!cq5M;vr*mu_GZ}E}Wh)nfFlIf^Q zOxA*Ey;dEq$F+Z=(}Vg$o-J&*nX9d~ChAKq3_?}OF4FqLAJ@ZWG1;TD+k$WZOyE0& zDH@7?Z)2l$`HAoG=)oCMA%CsbTi}Zsi@V}f7_a{9%)I;I&EAfMXp7T_$(+5VKOS~P z(|TrT8;Av?dc>Dl?|Lr(uoqS{J8d^c4NY_HFSiM8XhsTha$u(e65|}lhCX&HgU=@{ z_c0X3s29bP)C;W~%U>^EUF_1`Jr8hD7AF&^<0LcM{{e#%d7n9CcrU#cKbaiy3h4@Z z_4k&}d*!!s>*?CRA3HIEvnlMGp>Qeg&De(yk3Wkca%lsPLE z?TeJ@3IfWX$An7=gNVQ z&<<=6Bsd5HMWQ}-gNPprfG9K)2!QxPU|>c#MtuN&NXAA(6A*qOo&@9#z(b6dhoj(N z3V?(NpL^5-Y(I>K_@aT2GNiy$Jh>>j4i;ZdfI$YMNs{46z#fRftAXgt*KSTMS+L^XLV zIx?UZn97uFqCP=C@x4wr_StB+TqrpPT|O@wsKU)3{ziYP@uVk-*L;tC{o2m1p|O&~ zH!ivYWn5wRaKCu}Zo;b( z<)egPxsD5)cBlI;GKybtktPDA{c>xn?#U}M`Vf= z^i>!=T55JU+{XrSLol0n;n_7x!_iR4W2rcx zY4E(P-Dzjcr7P3wIm;*|wQ1*(*9{pQBbI!4D5%5X%MQ32eu9V;(}Im^%QCA_94Ime zl;tIwc5^J@jAgiqQ^&#M6c zw-_Ry?_8hmyL9n%dgu{R?Wqa*TgU4NA9ncAG*Yg6om-8ZN4MtplMlxi-dE!Nuioh~);GTn zxB5HO?s?T6c{j*K%5iu8@V8(2_V0+#uFw7Y>BFJ8B3Z*9V++Zw@BP>A=9|6|neNfV zp-L}WbLwP67i-1?4~)p`{%*6ll%S}Y>w{B;Hj^UtTPLXF5WB6Izs6AP&N`6hw%174 zbt^Puy#jsTR1oD><>Bxm<#tS;$-j6wB0D&@zt9x?;rn)}z;~p{CPYheOj`1-ylq^o zeu=~T>H#%@7qBUcFs26bYNnKhaPKj_rQ@r6d(AD`a?Rn$Ubnl2-`Ar!H*&paQ!^K> zrsw)_PS86&o=dJ5>QbxuD~jPN@Q)$4>qiJ$@@quQ5o9X9BmAdM1U#LL4Rc zLy8ewj>s4N!>|+Q5spp)osj$S=ElN8Hg#RJ&6BN;(`Yum=+455t=*oktAH&!N74*qeXT+vu_2Pqw5lt&k^n8qaR>q9Jl*nkA%dMH>EJ*knsz1|( z0a$uyb{u5GimjlE8F2AKa@qFJVOK^q#3UQL&Q8+U;=l+t)$&1A>HLy7Ga(;tfR?Rp>97r|(>Llg#DC-g6ZQk0d6Ij?;g&2Xju&;FUWh z@CI~+bm83ir&S}ig$h#N5k0caX!Z=BG^&%<<5>Q+|82N;K|eGzw7S1$@<0WEpisyr zriuNFDM{PX#2+*6tI48FOAL`Nc^A&azIo{B2naE4ix4=&?aEKXnVAY?Li!)`MaLc5J?k;>QEsB z8iXV?45l^ad7<(BJPvR{`4N*t0}RJtL>%w+Uku1afyf6TA;5~H5}y74lnOF9?qeW4 zm<$iVgnJETZ^?|hDJS<;du|-Zd%8F@h-rp}Ar0Gf8$wgrXG$1MzIv1t# zg$&;?7gWdy-UI^LW#LE&9)9LmKhywI-&ZmcfIv0I9NmQvfnSM+3;HKzQhp!9Xff#-Bu&1q>p2^c97)p!ZktL;h!bbbW;}m*<Y_|o^1<`(Z-2+I)6XRu^4cWeOy(&u)rcz^qG?jiaEo#3p}BtVbXz7 zdYGVl5=!;@29!d`KlI3or57Fln+=~%Z@T{Dvgl6O)tuqk^9QwrR)YGMxBeU0_@&(V zHGoA3>&-el@QfR5Qb&PptpAgOoq`x1&(>&;gOta&(X}OOMx<+rw>G9O&d|Z*8G|^K zpQ%wDsj-jng>LrbEY(vYDpUSV@$(GmQm|&N&6akFr!Xw0@!n)QKhtz<`eyCil;-{0 z{$r;)oV9@x?D;Aqw4ZqEyJT&^4Na9SpLMN+-j;hl4GE=OLiqbq^q+h#I(Ajvuo-Ow 
zp;MZq`0cNK9f1#i>XQ)QkV~LK473QByhk?h;9fdP`0S`~xZDW%)r^YRDWMm8ki4_7Q0D=PoC>j6)kOL6DPDYqfU;-o>U;qdL z1OZqS83ltNV*m}51sFg$;1W_lH?EAUh_4bs5MM=zSV5QYH3Yx_5D*{;Kn4X6GF&A} z1OdVTN{~RvOaK8wSV3GR;VO~=5R?T7a1a3HSV00HNRWU`fJ{bEfFJ=7t}>b#U2t~- z#0tp(t}r77pb_C9T;2&)CypgozqvxBp;Lg&MF1oS(1HPkIVdF{OO%VGaFwwmgvh?3 z$$@T+fYB&OstFZ^c`l|mY_MXa+p8<(i%Uyamgf_jo)8fWGtt#@su-zSWa#yV@;XBT z!uj*3rlz(^04>B=7JmHrk+mDP(euUgr`Aqwo;i1Fdi~TOgKoG=azG9WM#BmNI6HS1 zz^XOtAkcXE(gmy_Hoa_5Yu*nZK6uxw-!OU2@Zdo^@ZcvQT(kYwBgdZHdHvml$Wj(z zLsKS37y^hFAOFPlx8JkeP)mpViy3uu;qu(%>a~~VXEsi5ICJUj_1pJ7e{BEW?K_v8 zr%oJ#uw(D7l#aag)Ye@u19{@$W83!L1r{L?Ru=cG2uLQV%{*Ghro~1h>xvk<*3`7aYa&(&?U8WyAUn0bIrxRoW7VL}v@acOWc;Fu%{M`L_+||$V zk;lF`^PR8$#-X16$Y<~ShC7yuV(GHiDJyNrg{$|-slyjGPEH4F*TI7P%=x3Yz3Ls& zuRQv>k6wGzt%F*s>jXP{%Lb%{Q1wk=G(sYLgw=6 zV@q?5h6^cybRgg=QGmH*fe;{3ppYrRjKYeU^T*dtZ3iU4u!4dBpE-XL#ENz6w*|_K zXU-hoxM{nQr%xYQJG~8}L_r~Pq}ATM#_Nvq>3n#gP-oFdkWL2y7?L4~h9CeIrF{O( zq3t(+*RiKR0O1vH`rgW6RfsY5mi5Jt{>7``{MIW=&Euc_$Zhw(X|olh)0aD|aAqN$ zI=OY@MD?)`|M9zi=5?Pxx>)bOFmR-yBXfMR+g-ITPH)~e5mLJ{TRL{E>A&#MrC~I8 zuluKp*FW*Yw}dxe_rTA8zV*~!Cu498o5iKp+Xp|twfWv3DGdLJgoJ;FfbfqT&>XxP zcstiU4>6+>6qGX?gd)$^L(Zk#DX7HkiqYKbfoBG)uIr{5A~Lp$B9=>+CDy3>*#uE; zZ6FoOXwav3Do;!Gkk)Eby)Z;m6dt`jglAJOSSrM1!JC9EWo9H~P^-poE60jW)6wux z5fDg700H5jlF(1^yp)%k@wF3w@$29BGw*vIS1v+XE|TCTawdBs-K=dfzWuuR`d@wT zy|>?W;>4o0du36GV^A2D*xKRQ9oKj;p>d1w==VhTp z4Cb{|IE=Y*8Iz&amshAGR3|v26akkSHBDcz02<{`f-J?tkP9eO(d&BXY9&)hZniXN z5G@9(q2$R^Wky+0@XC}kl$D2~$(q!ouSRdHs;5P0E~QnIMUl%i7}iU}9N2F`JKIiF z7hCc~QsD|N1|O)kuB|F~q%72lECwNj%vIgMDq_Aj=Vpb$ z6_PBZV8ud1(Ni~!Ezv{ni^v8In2j_PP1hhdAm;`eMlbVnmM~aDE+jNCLAlAPZYoob zVIjrbFEk#yLSAm_a%6iuRO{+>L9HU0;z$({DzjD&wr1DJJ)IFwwXwAJ^NGvHTl4Ac zV7XR3XH>96FDt?^@T|JQaJm2SN6yW?^qGqdMM0#iuwHZHH~;i2+e2>3Y(j9>1)%Kx z8pY}m%+fG}AppXPoN34rxh_KpWQq+OaL6gdBFZ~9=ya>SBcYmFGdWpau6NFaRVS8K z^#+SGmooa#%H4g5^eG(a|IC@hBIa-HOblB8ZjvSGt)g91>?hYhg^M`|Q7XK+;x zJvCR9LaLd)4#^-`rdGI@0f9j|i%g*aC3A~~Sxm7JLRkhdA_f)m0^QiI7=!|_5|t@+ z-5b<3hVCs#tdbj=+&EdTDc6+<>p9Z$s^i7#(Xz5mXR-v_qhkPD92W!&rlRdd*^NzM zp(ZkDxuMXH5%58g;DHSSatqBcsh0xHsnBoRFlr*lQpkk>HWZMMU5u$Th}Hl|kQ|;+ z3(AP8ezB=`xz!t9`)LKC$@H^B=yl@WhqD>d(IL)DQgrFRUB8b9LcLt*AO8v9i%<5$y5(SG z*_V6yG8h1BTzQacqVE?7rJM+X00>-~y|iY{Bqn5+gaOvQtx4q+rYj<5g-DP|GhTM2y2o6pf+YF6a4iJzVQOKov zj8SE2O`Q*)ee{vm^7;QXs%MK$H;&wR+wLtH7q5Tt_}J``k8 z#uQr?9Ui{>b+04v*Pr~HQ6SxsGC-q|aCiaoSgb+Z&!3NIU%eCi30we(8YeZoM2^s;qgs*V|GJv1}PyiehK!^k& zfC3570D>9F0EGa600;?`u!16F2x1UK&wwPj69&++f&>V`Km$YxAk9Dmf&{>|0!EM~ z!6iaQ0T2O#D~y0KIpCn2*0uIXF^{x-7*GRQa3KqDkwB;> zBg&S|i2XrrO+zyp10kmz5CfnX#sdrUMsi_53><7E5-}P;S&LM;raelt<;9-oL4rHT z3ulk7o8FGAEQC;$e)8DiwbR?^e(}P|b<>+Ko;|g0-IQm*jkrnzkVF6^01FPx&7K7? 
zxqg}kpqM##5-W&pd#-Ku>z7U(|Hb>?c<&7hztPX1dG8k?Y}&N@^yve;cHGL6EIMKo z5shR+m(bt;^b^qxaQx8oJNMprjQURrd_z=HxBZu*G+x0$oTU|kDk7;esUUUxiMJl{Q3QN-~DZUEk5<- zzu&R%mSHXA#-X#v_5+ySvFH5Jm!@{zGC2~Td+M<_eBY0L^w0j_yMON8$Cg{O`~UvR z(IpKoH%=4rbxt!OKsf+nmP{Z-tT0dlF>~(3y6NqJiJ9(15Cr(e^Cv;9Shs!~nW1pu z{PFcun;mxP!tu3JTLH=qZ~`JSjUXb6BzG5S2Dk{&up$A3U?GCA2+wEF9Nu!>8&AIQ zw-D}m<992`LXMHcA-??a54`r9-*Tz($3ORxTkm}fzN;Xn-yKC0R=`Hu{u7=a=R9`tYmlhvBacbst$_qZw zOuhNH-`Oed`_#LC>;0|6?{-_4)!1;cE;sz;5ANCW6MuYd7ypQagnx#B@O45aa0!aY z0dho#G9+XN9f(NHrMkD+D@LG3*$2vWcn@{xS;%1`bC8E=EYEgR?|iKgG6y4G!r8co zc3`sT4N)p-n0szn`m$4-R)B_Io^%D}ts3eP;-yRvK$ zL&;El`;8;7{n_9B#{2F$aPWAiO7pOGj?rKNTQ+Suef(f8?AU(oONWo_*>%l<{rkm1 zsyUF5JobIj-77VzX%8}t4j%+z7=ha8iMfWlfx-x4BhLXnrdBR!Lj!6rL(>$lR;NFY_giVPC=v2o`;5Ul%Ded1``yhh%v`f=%J7dWVzyekTu&NlUvtQE~-~(-SfvebLh)2o*qjy z)C&m3P?3IR^>`Jm#y&Gg{17sD_O< zgC={^FLQGBQ_D6`4Tb~S?l!2-u-)Ov7+UMlq<~9-)5Yr2tX(P9NbcyANFI`EC57NE zuI5yiom)pH!$h@e!|=ua+;ZJJa;~q1#lcWxHG#Awi49US_}fP=&OZ66v(6A43d&xf zSKsuF|LUF*j~hBl84i>^DX1S%cnoL~6BR=?MF7eH5C|43gX*TA8489*H6(`Mat$)7 zSQzVW8VlB1&o%9xq1)QHd{^yF`|`l!l{(hvhR-aixwLS4Ij;&!YlqECgEXKGlF$rg zxy>dfg(&5iTvGBt2IC+vHV`mycvePE?y8jyFr_S{BoPscu{5#?RY{cr2!%yfc|>9i z1re$|4j5H9X`q<{pe+Gxf>tCbD+PH$7m>V9pjkT-p|AoXQ&CJ6p;%)(fjFKvFrizv zNw16SDa1My9flUlO5FgA*KL!JbxELPNyU*N9q#ZjtV53hh`EeihQd;IcT`mfv>pb) zU?~+~Sycsv@w#!;Np%g*WJcx^)L^;QeByHte(+DkwbJ#WlBNjeM7as_xBfvFD#b}Da;4^*uQ!EJAdylH?Psg zW{}aR6_yHmzh#3i2FJzHsr}+I@TX?d{&Qc@*6(o0#1gJEDzPE(kJ)XpqV* zxL6Xg5Q4bqIENdi z51u=B8H01DLo|nKWcSgL;rPx8yY=O7UF#E_RVcfiLA`Kn@bLrYN1ix+`O>LwcioQ8 z>#y~=@g|gOs}pB0KXvq>xwyEu{p#!WvdM{|wu}ZhR0D=Vps5H$V{x>ZTdYs@I5^TU zrD|hm_!iC0RAuZ1F^nu)Z^5ah(J@$oV zE9*0z)$?}UHQ|Q)Ub}&NSC97MRLh;Cqhk_;6`2%B$Ru1vK>%@;02lxgpa@X}AVByk z0wO?Sg&+tt!U$N!U}gXr2s30bLCC}0c+R4nSV4S^RsbMD13-xY37AWO@Kpo>5H3Oh zMFJ2+$ViX^WC8&Jjuj*XL4-(vlmP^skU(LD1Q$rqh!xSIAs7&p9o~lnpaEtOB>>J0 z2)qgsAOetO03tA=c<$V{y!<<-x19P)zxV9No`$e)-Oh^_506jZ+%8y@ zooo>q4WT^rEUAC_)U)Foc827LDg~O(ojE?WY5Vyz$9Cac*mujZ{ZH-KbHkCBp4-0X zwonqelxQ)8Ql5pz>yTyRRRC;2k1V<@W2*|qNH9yVM|}Ls<*};jRWWBI z2a=a0%d&`t#nNI?1h6%ZkmB;`?jT&D29D>SeSF^yuh{?mLpR^{3UfU9y>-I|9b?K!+_OF~ca_aoLi49_CLt)LfV+X%<^DAG~AJzvS{o(*~20rA55n z@5PNXKX>;>z7SjWGOR9Hj!ukn{8Xyvn$R{i$#X4-H|nJn`&JY_8LXfBbsEezacB-F zS1gwm0}Jpj{|qM5zA06!#NzP3?szA>QrCj+so_5 z{~zAq#96oNJPZ8uyld^fzwbNKo##pS>R!#WWJ#7}TXu-!IF18pQt6NcXr}I@Ds%(Y z>4MM=MHrfDpcs{V<#L^=})Jn_WC z&%gFk5kL9x$6k2#?I#|&``MRXRqhh3ZVX9Xa)XD$p{3}hi4D%1+(OgYvTrLGlBC@GJDg+^XRGkGB_ zw;{uVl(I()4)^5llLA=4tffPB15e33cxPyu=-ypd8BMdHHmK7KWX0&&y_DpLLszow zp$g3MWoe!;gqm_Ip1t!}gK|qH$EGkDN{<|2$|2dLAcUe&eOuA->EvRpZ$#AUs}v@X zDUrbd_nd)w!bx|o1%$qD$gWUb*H;<@DO9VQ?*B8}@80_PW~U8+CEQdnCf@x0zcWnV zb9cI@n!z*tGV*pP9zAyYS6`5>Iy9AHcB9YcgUw~LIiAkecx|30$q}c;xV=Z4%C!KR z(>H|Di7oohMXQ5N^9|AByp=1y+IhJ?)36;y>2@v!ivw(&zq4jPu*nD?qe9AEkQKo% z_rYEbT~I5~CIX03ZNKL_t*Bu&qB-7d^Sw zaCTuab-zEk(0Y4i=7Eu}{N-n^Tz%zBXFLW7=txA(`uqR*k3Tu@)ux7C3=-iWD>D|D zJTVg%J$At}OLV6~ki;mcFK$#rcLhkQpl0wz}-@9NcmbTP*J3jDA5H^te{*{ zc2jSyvnmWjk3<-T2(mT?sX{CTQZ~ATlpQ7*vsgUD0RTe<$e9Fl5@0O~?tr;vAbM5c zVm9TX&f_3j1lk0pc4`pnI#8;;WWy6m5?PdC--xwL%U$9bAk zn?3eWHa_!HKZ5^?p1kYUP6W+rAciV~>^(!Ju!yBR|KXVj4({#~Za~bQ1%SOE!3a7X zxOV0Irk%SPgN4B;0)uLIfHhV{1jPa?gaOaK@Y<)Jemn>&{N1mA<8z;Sx`gNravLz@QXPHojbq*UsvJ1f9p1Z`tHT?54z8W};MDBIY`l5l^6eHdYA1Yz@JzL_|?Q55> zT|WKX<k%*&^EJUZSKZOnaK}L6!s5ID2>^*dBns;U3IbqY00f8-0T2)<2oNC%a)JT~k`N4lKm&{b z>u{4`h=dt4&_hUlN%A6~2;BnH3?2mpcX z#UTlh2m@T3G{BmrN^vk) zL)EuqcCAD}SVxlpF%k%aLCsKy%21)|LTr7e*;>~JGJqhhAr&?@BQi{8Q9Z`!z}rdTah4QbV>lSg;&yXD-O69@O-di>O}eftibKK;?*!*@zJcI;gU 
zx840%?>O=9H*dN934+JpfBxWIk3mKNRyTn*TFABO#;MsH=V^N%VvhIe;LaZ7aAQ!@u+-Q-p4puY9I)HKW zsL6#F+o@HhFvK@rfAQ&0f7{o-_Iuy%+b!<)8mJTzaP zo#4imN$h<#y?pA-sl~Zl>#8gca@c$Chp+s`T@QSsReSu6m-pU!mrHP$>)fdi035pg z-sA7Tx#RW+jx5&CzWmaE_c#BifBkp<$^Y<={{Ba^aQ*dPKl|3Q*2YB?2^6?M5FiLb zSVuZYtdoHtB|w)hp4zta03efP8jua^c=^If5bJjC*k1&ZS1z2~zVjA9moJ{!x??{< z$RIBO3^Y8!cUtGLe&y`pI`al2K%|J+f zA4w4({2ywV#cz=T`UawDT*?c=d9%#GIq|b*3w=Dm6XDwqxj6+*7Z5Gye8q4&O$qSZ~E zdk1B31-fRPgx-`bXCF31>Bo+RQsAL==g?RX#pKBqi!fjjndr$yDIfx-oKu-~P0dsf zz4W<-S#9Y?TWE2?L;u)Pxw3M7+}4+dA?A(ICu_+~t%X1V967tXxZ5!2lq(t9SeO%3 zP*hMCXykprJ5H`e_;8xru-1J_Doy`jy$LDcnhV~tlJ-D-;ooy!1F8|!Q zw(Gia34w;xcEm>Wx?;AFgB7=xNbpnDs*A@b-Sb>rR*idv&1zuR( zy(+RWD)3TL2v#DcV33%Cnv$_G+&5&s*>}{{xY;)I4d=`Jtc=Ein99d(NvlfZZdTM^ zD>tP18@}E83#RL<*HNT}DvFeq66#{1VIuORb()1B5wKW%Cb1_0x>4!@c?Oi-39-or zb0Z}}mKqmWs+!V#mQ#Ycp+JTLc?gF=6?;pnQkk<{ELq*;DwJ6YMT)&ZS|lL2LoTVn zvPfpgOGS{m3WAV(A@qcZ6k(x?HT#jxM%Gw}!L6-W5UK!H!@zZ7CB!W@0DN8xls2`b zY2%2HH5~Tswb|-RAAaySHJDkpD+&1axP_ZEou4mOQ+9l-nQrJ<+HnXY>%O@ z;xNu{-!|`xHUNVJ3Ire)$x0>A+yJToNd*Y10g|y+T~{g3UA>`j?&?YFuWcM|shVB6 zlu^GJ&|I(2W}X|?^UZKqEyiIR(Kmy|;b2P*AJ$t3s_RiT9L#%e&v(z9j_tV*R#vWD zsRwgA29NG6)z&>rOQS7_<7wP$$*nI0g&-*s-*--5a zF>@&jkp`F`F#qUZ`AGsl{ZoIxA}Wgt5sk#!5CQ-LC?NtBjM4Jiik~=U&9|O;@7U32 zX0`Q~<~OV~ckU_of95;)G!M zfdH|N07w87A<)18h;8^5h(AOz z00_Yif`Y>c1Y%wBEIXq|bw(Yoj>EhN++p_?P0L-vXFcNeK zf=B7n@>KwP58k>II0)s{_m5*8v3K8{bsaxC_U7T+?(f5ccWl}BmFNBzgk47tov z=J1n0Vf*t3zwi$r-0`VL+e`7pdoSE_$D;sGy!Xvp?sycCL9oGGpv@0rFq>U!lciaQHqDgdacp*1jY6oH_c| z{@Wfr`N8W4?|h^J7;^*#F9P8ZNwMQZSSNr2;chO)5D8T>0u8)Zx>4G$(Kyt*!a`-r zYdx3&F%Y!6YKVG%E}B=1MdOfGjEk?m@;i@y=YRXsZ~fwvpZ)ecvu{58l_x*(m^J0O z-+kqty&oGbZCE@oQCq@|bUNcmHU82oAN-M4Ry=}Suxi^06&;y?+sXg`9 z%exQV1wkssb@9{(0QTK>*O?FA*>vy|`!~+L{@QQ-g}?QWe)ezu)gS#o|LKWIz5L=Y zp8VD|t&MvTB$z3IkrX9B!a5^F0h9oO1R!3zbZXm<{QyZg067VOmoJc5Pn%?MuIQy6cD2e7!w)x?8F))SC|8Ge5Yn^R9CjpYxN~Wm}4^rfH|_{|Uei z^;tofkXf6t}6VAi2omn)r1cxX~QggV9)EkI=SQaBj0-K6>~9! 
zCNW0{l^7SdRB?X)jn&zqbgka81696sZS8s)Y+qz0#cAp)WlX&fQ(EY13tcLnCS5;Y z#U4tn$7aLE*o1{TV!@z6D~xrmc%ER^O^DTQ!)mB(25M%5joB>JDa4KD3^ohRQYG~2 zavm`iR989NP&*W}P-%iJ7fL_rO26zO!@@m>?|k#k_wT*$_7`7y>gzp^5mPxPjQi`Sm|{y(vM&qnX&2vOz6;?7_CI*NJs$yU%8a8{R+a7-Cxo^Hwh^HTYVZ8Mj_F7GXVP?5H|*K;6UZl6rRvtW;+7lDjz7bfgRq z*5?+my0%a(Bc~==>Y$i&Q0{|!N;%J44b#N(oE%k?YUwI4P*umAdpFMv6&*lfDr-P& zJSLk+YTcD%WT-8aB0-^6o{=h5<*ibsy~D|o07XH%zEr(vw6#j1Th1Q(KH9tx7$(F( zvJ$(lSDnOxC@)h$Kb)?lxuqtCur%l6Xsecy0n3?Iqag|ic%)Dmi_dzsRssZj@?tokc ztU&XwAi$bbdKOoJQ?VqPm4M_%pk^;cFR|Fz%&o*a7&pby2sYGUxFOIg-`KVzn+`_p zMwEc*xe4MVHQVgkIJB*10X`m0WZtSPea}=>ssycEyg1=t!RKQzL<>>^lnUvlp71PB zZZ4Cq3qSxUv6u`(DZxc|ClMQSmx>^&Fw`L!LK!Rq*~?WhP_>`}L}Q>Ff~qnA;BWxS zCAqgL0U4fQD)0hEh!_IQWi^9DZGt5fOQAO(g}nU>&O6w z(LBu90nXytJqwYFF<8?y0SZ+epkcGqP&2eLZ~IA~$1az*4)kaZESv%Ekr41AzINi+rXBmPT|E8Z-uWuK-#x4s=OSeDW&u?fJvRv&q^00hU5Ba~k_ZHWLS^<~wLCt19>=du zufP9)3%LD`PhrbawZ-Y16)MyLK3@bmB@+!SAN<-sByi{l{av91FG5CjPj0GC0s;ix7nGmxH>dvaIF zxtN7u28aRcQVkel3ph=Hz^fNeZ`-+74hMs=GMqbqYSXs;kYB%YX4BSv*Ds&mym_mO zaAKVW0#GsvjBrRz)$rn_lK}SKde>5@nmB#!ougPs?A>`VHu1#CqlXXOUW~h%{QTP= zLO5{$11G=r#?qGU%soWhL;;`~All0ErA-G8|IbUj<)K}_ycD}PSAXO8Uw-JGdtQ6{ zt%JATcKp}}hYmgY(MQie@bq8Uw>5w1SAGt{$3FAamCHKu*7LU7>fIOj-}RJXT}wIC)8&xMql3{hyo?H6r&+mTZyS{tAJ-@i{XRABDr&;^= zZ*Aee_#E4S_g~sScN*2|E6==f=$@x?-=2Q|m0bt#5&$oh&Yk)Iz|I4= zUp)E#U3Y)>w#OI$-Ov8&kN(WR`l-M4=l{}A|BDkV)s|h*m|%d600fw0os*ygq=8@nTo72uaN+!kJ-hEed+u8hp8D<|YO`BErc|U?zWVDQ z|KxWi=+)=G{NR({Sy(W^sRmp_si&K-m*Xd1-0;9>jf+-~7nXAl9p+@SGVIoxh0PdM zSqHiW)hVi;sax)@&b&QWX2?0i23xnzZ`rtIu(V~@uBGq){QggT{zn>BPNl*|NCmUC z50Er#X+kwun;fMo82SiEj(&}C zX*Htiry&JrE_u0tqQU2p$HmLQyQ&%NbYP){0)_-@Ou5|q%5rxN?Zt5!cHN-BjB&22 z7`nu|u9{Jm3oBzp1%uTk)Qz?9WXR)d6&9PtQJ5r$5TxYeBDN@_u6Xg*s~BDP+enX`e*ILv~8Nn}%yD$v&3i<3CpZ&of-?MXTE)9r*=lR{A z{JTGP;LTI>KlN|F!nLzdDWSP_!~A3a{ont>eREHL@{`ZL_)KGSxsVW~iTk%d`PkQ= zd!-I~^ofss^QG4wd+ecSo_`%C4{j(X8$>?Tgs@{7nW%-1$l)tDLU-lN?D`@GQAt_d zW?v@^Y%IjYNX-K0EzZ>;1=fsJk8739HBHsG%at0yN*EW;8-jBR)HT$H6~}yli9Vag zI90RM-S8^H2W{#NqAZoxqbU_12h1nH28LOe3R#3yXXQNTq2!__Rs~Ng=!#oC2wt|9 za@8GiG+*V}YOAJU6RO$7WoyMTXUrP7iOCDrifS)r#f(a(fXpPPV=j{FJbNjW%1Uy zntt+MTv_?rpMS56Di(N~GAi}$Jx~4kXO=>JB%n7YT9Zi2bu8|UBO?P0LsKazDhvS> z*!QRsLPP-MP?VySO92IAfihG{5JOTy$?m1Dgit{+n89+UY$kz1K=zWv$OKj#8O>;w zLM37`4+FX_r3w9fmFwL;7y!+%6;P}n57IQoOM|vBy;44c=BApT>+*bVeYK<5JK+T_ zRkBJdE-A1$fM9^ba!@5V7bOVU0T|)1WLe2e(x`4~+N@+LqX;NWfBstva;Rs)Zl7@#^R zjC?Rq+`N{yk6B6z8Eff6*PS`0;d`ao$-!2Rs+@*7mjFc;#Vo5CMvT2?y|1-_mz!B| zd8W)KA)7UEPAYxBT*UrrJEnL!W-+H(fx}`1E30Gy$(5DmE)OBZx-KP`zRf_+GY5)# z6R>EC#MF@m5H}Gb4OXKeY_6)MxG-1KY)Djw-b+?-Fw23AE&z_+;i#%$!C;0Mf;f2j zE5Gp2cm1hopxKN6;op4xi+>iXtaQ0u9bXft@4kKX=#I_1E?hgm_tpcoEzXy===v-Z z-9Vd4S=={jwvUG7U~wr(qq&7>O~y1~n%vvG>|nY!zHE7VlGBTt5TiA2lfoTwFlTj5 zjY4*@kA+|&JQO&*uOKw$(b$uHl_1J8sO%^zj&^pfWuA>op0yUOiHk#VG(U>3zbe)6&pvcTF&c5JM#?c75s_ zl;UNYjH~0xq`l%)nMI~JUzKh1yK37SVy>G-lJg4-5D1VgfD43^AZ{{(fCMKXKp+6p z0S5>KfS`lIIsqU7m;i|eL;_X^qXojMU@&6=dx&{lQYo2~2#|6>00amENDu%8f&xI? 
zWB>sHBGyTQK#&js5&sEE021UNWdR_)5CR}U07L);Nf3a!ASnQW5LE!7t|MZM8C91H zeTD+a3FkV{2p9_lRYU+Xlqmr2fJA~I1VIFh1fb|1VjTv9L0wA8D+%ZdB|!Ki0zer^ z!HIPOKtO;9V<5o6LZeVGw5&^7^>*fkT8Tt<001BWNklZO1?Q;Lba5Yf!)O?pqM<{LE*UFLW2)eP+kLM`(88 zq!M3`6{*B*U9iRX58{=WU zB7`~wAnx3>V2^Iv@-H7+z~{DsBYf&7@Y3JGV}BE`{a={dirYSq7k~S&ywLvRzxKtS z`uV?WU;C~3T~CHj-Z6akxBl^sZtmz^U-|8y{PgF(^X#%WL*nX{x88d7;9Za8zCUyH z4|W~8%U$$L>D=iL0qod+^lyLVPyOH*f8po9^u6Ev=ml$ToSk01 z>}s#8kPv_Z0iuI{NCHqqkc5bJau)%+cKPJCZTkchVIoi*H(kAa8pOJ7TlZ)kaP`W$ zt($j=ymsyEmTh|>nVducC<6rnH1Q^Y64ntQ1(EvB3r}HFZ6v>*MQF%+8`u%k*>-5#LH`bCYZL!Ef z#W0+kpCX^b+TCH0roP}z2q zD_!HHzTOjkVevk~RID0SPK;VuOxXt2!f*ujKpovVGO+egRhVKn3lpwx8L;QZY|~tj zT{)L#Rm=7u{Q&}|X_FM0I`Q>kZ?9s=bd;Y~wJn{H9o`0ry%2E+AGgPJTSQ!h@ zeejhp{@6duntW$XaG!N13K8p z3Q+B`>S~Y0j5wqUng&X?+Rp~nO#M=uiw}g-n=xd_`!ws7a>`mU)=t)>3JS{1yI3~S zdxX}_i#*Rt#Y*syiZ`Q-zRWt$IUz38RX3h>-VCcIaa>SBpSyagX7(x@Ybbfj3@RS9 zY+UT!2pQ!G&RlX1L-#Z-si|bD>H-ZjU`f-J@hB0um}WC~Q#W8Wb)!-ftS95Zca-X4 zU*+OO$t}U1H20vA+JxxjpsRbOwd z7(%7oN>m(_*_yQ_=1H}b@7s{W($;G4p{Bmox@D_0&7RlN6{_lct8B25C=FM=7=X4E znP-GhG92(epSxP7#7%Z~Unzj@=zul|Eqy`~Cs?~b5;diPU*GxSVp)5#&AH8A%X1&=TQ=(cw}y*0e|_Kwe5wPRR7Rp&05a zJD5?TEWl=Mfx##UqC#7UkOqYg6H11JJSy^(dy8V_rYNDVwgjNII+n$l2g7<~fgIOh z#-LTTR+@G^w>%eez12xi!&z1-mdd287U?2;hgPiqfjt#|hH z$~`FdZYo{wpk5hr^F+GQX#qy1G!sHsIiM7UKKXK22BB@LDkuaCDVo%$nFLf3$o;ej zz2;C>aiDchR3Q$mEpwI{f8DMoKAb3O25D6nlaGjZf%2ihj81}^+xpqa*47m!SsQLOc zzwz*=KW|ZB{nBs$hljuIdpwttRQhfeWY?U2^VhdN_{Gbwe`&)*UsxEGJ;hV*FZRTE zd2%DqLc26*H&tfywkuGZhMQ}f@37X}YF~VE!3l7+&jShf%&1+MnuJ6 z0dZJd);2vYbiq?>il>0BWS3w&gnA2iR^BZ>8Vwlj+aZnI%Bj!y>ZFI!j@4p zzoDWxb9E)NU?s#h`DAiBv%4{ESG#4GHQfdw4pk4eC`i@|?(||Uz_y zs%(ONbnJZrM?Ur_d|;F;RJ4k&SO!8cTC|~nTEFw#&k*>&FMfJRRc45x0*r9T1?JGY zwQ_ofgYK!6%f0E+jn%7bCuGIjVeEzrb>6ah$0%$!%Nivk=jRtd&;U_@08$igBGyq7 z9H2l71V9rY01^R^0CAIqbp$8?A^{`Bx-!OK3egNk6f#LD6YpIGfQXv}2!P;#AVLO! 
zMiLMN2!KF<1dIg!KkDE-XxFo@6a4r4yRQ4bpXYhEv#-fna!yWiPRK?U0wDpC5LU6# zQAe9bT7}WFx?D=z>9MM{t!}5Zm#I;vN4uqM5Ly{!5J4c21d<3L5VDf>X0(^7{rDR4&!XHFj3y7O|ziYh1t z1+TvP(&p`#5&6`suWZ|X#fjrDY~FThcEDu-R5Oq)DMRiRUG?mA6OJ5t8o>75yDKIO z(}xc}k5$B_TP}g&*oou2c3)$`BFl>hpN6pMjR(#={Op>!4IltXfItEu39*{LcyiO` zH`}Hm;);zOn4Eaw$sLzp^U6z4?Y`ocBQJbw_v>yv^1VkdKk)iaeC4_CLb&|88y1$^ zGcP@}Y2!AsoIZ1U>z1twaS+97{hC1?hbc?wY>8kZ1fW0o!uKw};`)QnKfZ7O>w;Cq zf)uz4Oo4}=_~x#w-t_V_-`uh9I;bUd^xzA-FW-Ok;B&h#KXCZqvwQYkn+Ujx239aS zt-$7JHq~q!SZOva%nkN*W+u#57AvUhZYX}G=B)o?I_P_k% zjmIB+%RhbW{_f<{|NDnNbKfOCGn>xbd;VC*e(~UmQ#bG3xqfQTp=vs>oH+d4qdWKA z;JH73=;)=O?bdHg{LxBkFS z=3LZXbP&R$U;n~QZ+(x0k3I1D*Wdn53G!Urb6XN>#BlK2_q_eCADo_wVPpW3$=mhI zb?nn@@9X1?Az$Fi`MlgxCy^`_C@l$SP*{zs%92HAoT*3aXZAMRcK_PXu3Osh5Ax{~ zO>9FsJ8bQ$8*X~#(39f@OpUzWas8%d`PBLGmW`V)ES)H07gL97E;QA27{+e0axsNV z2ceAnwvTnp7|NP9oz195F*0svm1-+fc?kkuO|y*$T}V^Fc%9=`q17k}dy ze&>?STU|!@Wt%qs`iFkRuYA)BPd@m+e)4PK;!!$e)s4G{`+w%&-@SWs*B$S;=c^B( zvWk$29+si^-Tvl#zwz)m@_lc(<(?Owd&iagSB^b9wRMb@j50yT*;8AzcNdcjnc`>?FI(c3If zus)9EQ<4Sma*ZBk%_Jqo>@3B+Ujn+e!gANPxeZHI6<`c0nz_?^bu2?uo^Vpz8bDzJ z=ZeFW&FYxZ=H&^YPBp3lmoH>5ta5EZu)Ytf>Y8+;p_8KJDsdutm2%I4P|R{xMI^5} zw8P51DEF8m6j&@YWXkPC7zNOVG4*z-kS z>W*g^q`p^QLUkr|+opCk)6I>WcaLVPDXZr-m15gr*}YfKlKV2w3<{wc8Y0RHDbf*0 zhZl!UB?9Oq3r;Y3k0J@|9F;Ltg#y9|Bhx}uI@=XfHgKF9nWn2D@;rrkG{zjs6tjS2 zUR#Vb?*l#81>RHuQ4s17>cQM&Pxt2@{?Cv1RaFu^I}p5|T>YlsynEvyZ0ZS9G@}R7 zt0?uNLkOTddq*q)RRF>%!dX=YDCGjEfYDmPiUwwGK|%(=%nTOL6^4RhpG74gDh8-k z2Hap^5DIi+R8}v93`vuc07RBHBXp>$UNI-jpwv$~d4>l-B^JWiR8VBeuAE&l zu!>FbP<+KDLFx-!ftd{138s}E5Qhf9ijhiKCQPL{J)RtS`tW(UU%XHc*6*6#zh_p1 zeN)V;%nFsvuBAM{9Jy#&M5$5`irfKaIK&d+q=BTbLU9*L zBIz+2V6iyl4hn)KC(J-6L^qN#)Ic28ZWeQtjJdAUbO?&|mp=EU_rLF5O_0wtpZ>(> z@ZZV$t$XCf*ji6jsj2(OE!?!R+S>@%4jZv*Dup1>Ldb$9-a%-Y5+(_px3NLX5h>6t z*wCP=RE_{3l9HXDJTq?>7cO{vVI@>tTVJ)lOs&}xN9$*U+@`GtSQS-C&bC}=LuxOy z=T;U@Wy=R!r3^Pq3Ir;ElpwhvWJ5932s2@(Xt1ub*#XLPPdp;v+Sk7U@IWFcnAbE} zs0lMe2+=_Do_il8@SeBd4wqm+2n3LX;tmswA(L=pKrogZ<~#Fq7bf%bCrU`A8#LW; z?I5q+wQa?g#nO6`XAw>Upa==nDhL7`AQ%XM00aRL35bBf07`-i0D%C6*AOBI zK!hOT`+|vtJBkYcoy6Y~Afo{wKnMUq2-UD+GqB!l(mT6}1i&Ey1_nZ)qwC{#0p*|qCtyACVIFH#GORtk&)BoZJ2vIr0e&_sYBU?K?%gxG8! zs9C%5Ts>?j=diNqIRcama)LlK$T78UR$*@2Y&|!sPG9UVEM7!u`?kQmP=I*^VAC9t zmeNE@S#%&Af924Q-B$_~m{F1($B)0XYu6sYj~+d;ckh)i9ejS*&K+I|_gVm}Ng(K5 zhEfics?ngbV@IC{uygN~$`1J}FTaFU#HG6~ho@t&9No3+fRVxM;Gw4>Y}mNv+}TrW z=GKD%Bmp3?D$pom{=(^v+b&x-2(hk~1?A?ILyuo})s2UreRSV7cf9!2{a4;_%k$s; z*1oH5?1iJxJ_=#qwb#v$yE6x$Teo2w;pfkt+OTQ6BW^ZK&CV^3>e3Z20L@5Ppn(Vn zpZVq$*WCQVlixmY_0<6?l!^;Z#m;f~`Db=se%&hvpV+bII>3!~0GvP0i}@%~SK;9vbs z9nwl(TtX5Rj=uWh+it#X{|Hu! 
z1qxgR(m>H<7J~8c%g3(0_8&a?O$Gf$kY4cEgI{>#Ti%<2M<4wB8*Y22 z6DT=38kNCRv$nC%{`HrBnVZ#M5b|yzNcAK|@O$ z+GbKFODL17t}3f0_1gdIuRi<9-}%k$o3;xKUM{`yb-(qIcc%@vzWD4zzw^vL?tl0jy<#OY8(G$vUw`Wxj~@T_t|77hCisg8`wks?{(QG| zcA@8ZqM(#$H1mE=;bLQX()Nh!5nw)1vDRHaVVn(C*ryySuv8@xLt`m};u=+8lzA>0 zVMx$n8#L) zQ46q4Lkmst>`h;Mu0pA&JWnLgs?s=T3Q(zH6ODNitOiY)D#^WAg<%!Oi$PdOO;Bs{ z%8@dAg?h@1TM~v#ZLV!>8kVe3EBIsq*tj8Xn%LQO@y0dR{J?b^uxEg8F5;#Y%yrNh zo-pn)AfE63;BUGoUObt`W$bM`+0v}3wr#8~TRXU9Yuq;2baTar5m#1DE%nwVm1R!g z9&%Qhi|{ZFsRKFbSXn()Sl=4wFQ)QTLDeUyf})495Nh^$ev)(OOY#9qQ-KEWnJK71 zbr;}`X*en?WNHJ90Wiyjk&&@$f>V_xF%p*A9xl%jl>nHMQzcY*{#d#A$X6d-M$HbA z7X{3Zul~rt{K>0l2C+Xgq0UfMDnuDujLDm#A_?#i1Qeig5h!jni3~I<=aOAY2&Jru zs;Z_qk}!}NP*eaq!U$fgT*w(TX9yzUrVOBffCP^Wgh@q04>vCqyAUA-2dxB48C8Lq z2T&m0SyU<(14=J|0aVJ!XwyQ@-mV$RhHGZrX~kmIRKcu_DAo+MIcNJAXhUi^4pq_G z@z7M}`T(F(E>+2u7DKm8Ee@BRIA24GUGBV;NmpE{FV2!oed>Y>Q8kc524%xs3eb5y zpr)*pB)GRpMQ(whY7yXa%#^XDjI<1y97W04Td!h*a+iwBkih~EfcFy@SkVCx4V0R$ z62*ABogDto)2C@Zzf#q6m&{(VbBc49&D8E5m6&rJTd6G9pt4DcK4IH?T1p*g85R&K z3&B;(Vkx6!Nt&brCGRspF$Qo1ItXS-%UuhUr9J_rF9sl5Eo!F3AWLY0(m@%)%n+*B zG;vhLf<|&lS%4!O<&7eMh}Re*fI2I9@hr~>LQIuWQ;kd35mG5~K*fuSV1^)vI%WW! z6#$vRQ4IrXh8#m06(DwB{nFRo^RBm65{APyGhw=}3kLuGkNy;Yhrmz$^anrou}}Pu zAN%N(-e0JOH>~O9& zm9;qE_UPFK;Kma5NuOSFP1VCby00;mHfkqkuunK`@1S=Y7 za;iWqooA^CC=@_~kN}5-?-K!t_&$PojX+_Q2of$xApQe0`Qy(wZ)~PRe&Em{KAf1`=tP8d6LIh zUwvWM-m8Id^ra^*yW*O|&p)-}vb~-GwNZdoIgkZyrcxqZVK}Jm_{%Q<*mL;-&o0kL zUw#Rzh@JcPA?KG5AKta=00@vr4?hE8X8oq6^JmtqStk+_B!C1#kdSPC{_Mu>dp1sm zDh}smo_Oq~$M#%x!=Y!td&PlkUU>GYD-T@z{IgH)+kZpKI&$za2$x@TV6p8_zxdMn z4ch^pKX-h?rc1=QHlmrH&UKW-TvUw+Fm{3J>m$#7eaC@YUU}wQJFd9KUXo*Y_roR?*IMm|HJ=w;xB&b_Fw(v1ONI1KmU7Qx@oEzT~fVw zvNXTv8|~>GXI3tmudzI>e&L$EJE!b*>uw~D0r#9)Xbv4&$vfY+b+~@V;7?Cu{Fxj6 z;xk)+?26f6`I!&;E$=ko4+65|pRT$c ze}{lTK7B$Tc;d@XzUy81>{H$QcQ=^X`g`|Hedd=hyjZWTx_Zz}3Y7fiV+a5B`#Iz zvV-6JlN;{-;CH|B2?)2oWNqln5;$}SmI85N;eRfB3~FG73s>xP%z z@sS&5HOtWA)H%((>ckrOs!z3m)maB`nBgypUa*U2P5S9)b=Gbm14;u z`#4$&!;<=@b(_hs;pQo(%8Fm`aZ&1hY^RoyCQ6zl7xGAXc~E;FW+rUwBr(dR3Nbcy z%zZ(_wWHd?w1t`J8n!fnkaIP?T{a2rBBe%Wz4PU(Puikyf`|ka#&)8hRa=j2r(Jt72?#6OKkV$H=ol}4Sa^zom_Zm>zdmh{o?)0<(bT zMj|e-)NKY!wyLU~1zo4X?3SRiT^CPecJ5tDEC#SrwBV#?gA_z}GXqz(HA7$MuD+ug zsorRv2dRVLf|PP$B?~@D*(}UieMasSSVlgu0WO4i-Cb{7Kl8e~_OAb4Tsq0YX^b3Q z4^E->g>TFL(=-3LOV_Wzuxsk|Y4;TKW*?QUbMQ&Mkc$96d+fr*I3i#{Qcm)c8UVA( zB@~E#wvd|a@$G)`LS^4J3rz-t)dOFgxO!*56qIXNfR}srt%Vu3SOyl{s&4%%yh`CtheE{8K03jBsNNn9HvJ?&q z0QU&baIhj^?A;u7F;9ZJNhT~PH~}L%PXK6uOHK|AG%3UBDOxC`u_9zlq>7c|1Ia)D zV)9@m#!h9{72G=kVZcc_cfLDe@}eB$D))sUgmV-B@P;#IcqXQn|inBo&7Vn50 z(LlsN0jv~*kF3loRWXH{YYNP4O+^jR7xt>p)lF$iJKr%tfi=MpjSDMN7dfU>%x z&R9hoxU=qNMVTQeOsWgUR!u>j0s#{RFOPreFYy0L_RmdoKQ#SI{bZq0P4rw5H- zBUCi2sX9O}KKy0;f9CCX-i<24Ttz8J9v}Sj7YV%at?zDKor|Sphl)eRonR3hAV7;S zqulzHpyl&ra&|d~T>2{Z%~bH2HB-~gTr|g8){(Qba{wf)A^{4L07!r!00amEga8Q; zAiRbk@qNT9N`L@}6c|V&09L_hfe2MF&?LwqdY_YWcB(*xC_n%R6IKZj0m%R(1r`Xx z-y%Rj5C{nX@fux!haU?n5+&rR z9%fxHDY@K1z54QtTQA*57@4Ik4<}C?+P!%^$>u)2M&}hmtH>jJXR4q_q|TpUpf5j&fQmqP!e$LrRO2c&aGQq zxG*=j4m6-Z4h52c@EV(6IJb7gp6Q|4Xha!+qc1+b`^wiJdEv30S6u({3y<&Kf8CMi zp1AC)o2=-i=O2Zz^T1W9OD7J!xPId{O6Sg<+^}(rY-+Bl>)8ej(j>S;20>ISBE=(z zpW3_XbvsA=^aJ<(?oYhu*Z#|2|M_qK!p!IIM+?hk(BK04pL_Dv%kKO1Wz&0xn|99CD=!Qu?TO3ECQQvVzdoH# z?T?>$;>^WIUp{14{qp9GyWe^E=bs*@-~N|hdU5t6(LVaKKb`*Z`>HAY&mG2J{WqJw zwtW2HUw`gU)l9Op8dhFLH$MEKwdupZwQY#MLqH(!{lXm&-Se~m;`e?O&}T2={cE}V z);AB1Jd=GewE?xQKUXZQEFXON@DKjVZ*RKm+C!hcw4D372R`$S>tDa$Jv{aJ_pZJ6 zbuvN*;>pLK1n|1+ZhYp6N3Xc<1_Pdb;=5N~f6Mcaf9t^Yw--lYkOCaV3^IXEf^{HW zbeL!)=mbR=XkheJ(22)Cwd499gBchoBNGA8j(q1+AXZ&|(~pV-@X{ln-gnE71?c%l 
zK7GZTekjm@8ccH06lg%8krI)Ov4}7u6eu#n02v9R;4Z@>U;We@-u{6{zWPZBH{bdG zT3*T|IaWRT&{yub`#sBj{LTY^dBd%D5+u}1ETAz0-~Q(PH@)TV0H=jml;J{ED8LCR z0g&hxB#-WebU@KrWMU@-V`H(gYQu2mLlJ*^$1U9O6aQ}C>GPhYc+7p~u3N5t;>f|! zk7frAnhld`;@x=v)(zi!=}gQWSyxuZ>T@4rRhpbC=H;3(3_|Mq-h*}K#tJD_DJ*ii zq)F*K*Y3eQ28}ydh{moC%FeDI)+3YXxn=P9%M7DTggFnWH%l)%I0DP z6!k7eF1;7kHQgbHNIi!BGk*DWV$3DVbKvj4JYQu(Bv)!P^U@!u!G>m?1cBKvBeBEm$ zr@YV}gO_E$7-TNOJq!H`5gfJ0&_p8jeWaI=z0?pblEyO4*a3y;MX8_&24flL-o;o1 z5y)}^RLWq08aA?skVys(14=dzqJ+ZiK<*HOSQn&$doZm4(y;YhVbWx}%dL))pm&Al zRkYHv^vct8tLorDG7zkSr6Lqm#ogh96hM}NDdjxr3Osv^jhm_Rig1&N;mBs{jP+gB zSLlP*mBBz&HJy!>4?Ee+;)d3%;nIe#cP&9qX5?b7(6(pXOCHRF=vEdfc3_v z0Wh>Iu&4k`r6M2$;W@S<2?GcTs~m6-fo6du4CWzZfsjG+l$?}R0t5l1at(=|47uyP z?{mm0j6ESO^ecd0%!yc4fr>?RLofs*7_BehL=nWS4+X6)btN0B9{8{Z)+h?9wL}A0yo`wYv*VkRy-4O!HYn`3Yh~y<7)(=OafLGygN6^mP?5- zR|ER=>~PpjMYv5jF>r3ptQ;;tMiO+OKmY_P2!uIV2nZ1V8vz0V7(fVsNa4TXDkA~J z5NN?*O$;DifFyd~C*|yvgMeI+@EXG^3KRgr2$F)iBsd|8AVomhYu7w(o=lI8eyy z)f2Dm+OrP`M-M%>YtMnBFFm_!_mu{qnN9|jrm2lOwKR60be>%pRDq{X9Rjd#|5csC zOF8z^^H@df+;^a)a{R~(JNN9X4C4|`9DWYM?3%TU^A~4l*O35-RT2r1_^;;Y&#zg# zZDznQT5mCe`S9~k?YZLWLoYtPXa5@xJ@?3-{nsCQ?(uzB--4nSpM3(^)8rd8>_^n;9yYuMN_g#MLPlUlB)|K0i zOSevO=S_EQp8l1OTzmYHBbUBmx7zctAz}@SfCvf#D2OO$Cl7t(k6zmS&0pKP{%7L8 zUm9-s4^JLBymTnH1=e)84F|7GU-Ha}wrww7tbXLyJ=a#({7L(n7RT@FHMs5i@}cWK zg7^G5He!{(^2-}Pme+jtZ}M_2mdtuXbl-i;_kMZ}-uD}*3(%wOFjxyb1m5;fe(HhW zfA0>we>d=zANcn9&)@vx8~5B;*XzQYN1I-r8=Zgd_)4nU?uD11ecKOxIAkh7+3x@R zAH4pZKPUMUU;e{u@BWA!Uc~_*XvxV%6*K z`e3n2o}c>a$FF<)M}qQW_kHZ@yZ)y@2&;+&5atF3SRw#{G8e!cL=vDIEnpCs1Ek`T zKX~6CzvZqEJ#g=ThH&%kKa}OnD>Q@Q(TDE2>s{|1=lJlwpS|JMJ0x@SN)YLS9FKkT z{u^(9yUf+Ich&S@gIv*MRyq|)xdhfmlvbIjNz?%-4gn4X(5hJ*a5O~Bep?u<`{%#7 zXYZUnGc=y#INWj9bx%C^Y};Zs5`$85hSKZy-~7mT9<8d#fLV)e9QZi3(PQYSZn?79 z4MXWWbLACH5G(6_nDR=y?9o&+jSaCcaS*ze1`D1$A~vji7>B_-PfW8>8f;V(E!u$N zSxhSiHUldoi)PKmieYhCd3kX>8g6Cq-Vm@vnZ~BkqA_+CDwYMqAje^uESa*`Y0_7q zF!D6swqJWU-g?hp{>eMu|BwFTC;r`s|Isgh?u%dQa_7n+b1)UdAWoKDwC>n*zxau# z&p!O|?VGku6hE+Mvh9O^e8sk&AzoamzWk|AfBUfq`qa*2>Z9)p&HMkX8Jzy%cfRwB zU$|#DQ_oe_VR5CcH5=Z3!(Csw|2_(Dzw^#}@Bi9ecfRexhrXtsOOq1NVyccseJH-1 ztcu1O-DI&?U1-tj2FuPuZw+f5mTSZ^Wpap_}TABDr$w7@&$x95ftH zSvq-90Axbf6%0GYtpi_93nz1)zR-jQ{S^9>!7Gc=y}htf97UqnL(fCx6F%HDH?_uW zDOi*G0GCn+WQj~&TW6y;JGSIQQ@MOE*$h31t*j2olvPjyyr}ejN4Rs4LO;lE9?X(z z&s{UtU{D9WNfn6!UP#PQRoLi zSKz@M#W^iw(6^;yB9+A8>oxG~u+rio0Ykzdh`ZaS4X!^S9 ztu1GB*;Z^eo~ak?nP)dYv~T)1w}ykq7yi0hv&Dz(Tg@ySTA6R3kM+Q;)aKnXH3WyA z1tBDhRYriqD6nI2CkZD#VwgeZSX59HMxYlIk1-bnFLB6F!RKvkp_vL6kQO1Jln@7o z*l++)oKy)UNDDP+b*T_lH6GJ60jm)QAy%xa5g=Msr3W9kh0p&lUt0*3uLw>ecuKq9 z@bf=*;F8Vz);W8INr^r-3^ZHk%2sNYHH|Zd$YvbZ^=&w2EGxqWD`A}3on`KkRxmEE zm23l) z)3Z4Ot3Y__fiz(>DU?3dAqQ)gNvKOHL=I~vA%T@S0i;SS$x6tmlDa|67HcC=5)jOW zR$N&ycM4K+&`kq9?SX2wD?Vin^pKcnn?&U4ernhNf!3k=9aHw`%|c zLHxcvBGwA9%Bd+%Ri>$VGcTj1hUlhNB9UtiOnE~e917@CNh^s8*;VyhV_7#|Jipxa zeLK#c(ZwS3rOq8COdP9fhjelLiud3B`sdGDOlP`Kk9KUkJro4hqpVVT*KrKP@%4=fe4h(RLiR12`mq5wDnS|NZC z3gAou3=meCXkb@XWs77m(k#T{DtY3jCbjOq*f=4pBM5jQkb9T+8N~r*mE0?5AG?rC z52E9eLsAKl=!G7<4jMX~6rs$9fFlDQEExh47!?79B7>A2Kyg7dnVcjT07NqZfi(~k zLJCP#F#(7sCrioh8GuU;kb$7!q~P+>gOCM~p%`Qits)dyfhHA4ajB>QP|yLri^UZP zBqv1=@Fa*X0TLXfU`|2CC_MYv{rLZ@t>?TRMJ~`>0OXJpfJMd-0%R6Ib3xuVA&-{T z92ynN?je*=QG==Oz z00{vCNE*Nt4JibL0Sur6qD1eyUO9^@PCyQbI8K5{0|Sl|LBPQR2ne7F0$lie0^oo^ z1Bl}YfDl$07>xoTP|8p*5UU&_2pCow2vUL|0wCa=**L}dHG}19`p{$&mS4>iz##)b zz$n1X08&mk2^a+k#43RR0YL&l0s<@0pa>1f!nh>BBm$BE;W!F_OaKW3NQn^sK@`M1 zOIXUiigN9Y#UPpk5G4Ev%}9VUNR%iD0tgzS0Fqc`wSlo(q2=6G*^S)y(laNWxY+FE6_$D_!GWVlwRC{{nz* zXP%cuFWpPe?ZztNv@_3Yn)2$4PoH|qwm}`1Jnnz~2?%rZ3rmYf=jPTDpb@KFB18rp 
zcl6ldxiy>SE7;7;Ad-gNyPw{%R`l=IUF%@$C=(^pAh^zux?7 z|KwL~&2V!a<~P3X>etPju=&lylLsLX5J5qJaGZ5gO-`Tz*gymL;r!z7eCM_KFMQg% zlgr$i`q8y5>e1z+lk*w|)%3#n;{V;BB8c|GFC?J$(BoFS_;v z58nQ<3*UMZD98#7>?CIQ|3 zw|)Y|s$Flp5lZmx{@Xrz$#owB^ZmDd?6T`V)By&W!36>YT?9Z15&!`p5|IU9yi)Q& zi*;#JDWQz%~6ld<}L8AN0qzluDRt*SbP(mGS z5K>&2KVx9a$tI)mt4-W`;=3;SSzbCrTiL=B23K8t!J|(+{r#q0*SB$IVd!2F7oD@? z+YdflCj^TdrpiQZIX1f-WM0@EShkWNOKsoM!?foXEAH&#lDAywTlDA!D7IHkdnWXvPV$BT;fXfT*n4t?$$ z+8oe@@i-4EXU4HS$_~$B4^bEt3|oH%w92%yJXq^jUvlkdKlQ11zvt&a^_kE9{JYg=h&G;DR|Ky!<_^$sj<>e=S{H!<{m7hM^%B)9C0(sF*@%I#)q))Y%G{v9i*~(3eW> z;&hKiM`2U<^ay!s>$qkRL&;OD+f-XPGOVYq42|Pd@LC5n)u0+J)qOk~r($YikT>3U ziA8Zbmz>-sme8l(i^YbGHN`pAax!g&nnqQbLhdz+OVzkDA6Fqw61@!CaX+0LX5ROW zySlEZ9@YtH`-#`n5|;W7hQcrtvllC(<&vTg$x#ow0F>;s1Qzy90gp;E$f!6OSYGKw zOI6K-9nl4YRTbsPN$Z1i);RILFFCi921(w>`Ac@B(fp=Ss5j1;9ersEn_p-a!u0-~ zTzqEt!AV}z&YE|FqUAcu6}U2g)&bp(Y%GrlLIU;8dIP&&Oqr>AS|HC&didQdcwLXhl~gsCMA!hQ0hwn zLYG@JT42c05J9YRf#xAeg;4KV0DEMKRYxasAY4G!Nb23DNCt^8_kn?i=;n%A6@Z}P zQnA(mXOy69U^cY0I)p?PqsJHn&Fd~uO2Hsj9zYAx5CW%w5LK(x=m!BqLmISm=&Mxb z0;1(gu&k9JA&m?gr)2|{x<{XN-TF5leHo#DWnyW#P)#KQXVG)`dETMpBRusPoTvlLxpbifB_JJ zo^|P#00k2e;3NdPU|?bpF+~sz zv{eA{1AzfF1B@tAuL6)JkWO>LpdbPaW~AJu0#NP_IEz!>LoH}PLIgrGM592!067E- zFhCF$1d~L#gQYKU2@)~^27&}&fB~Q!Dp)0~%5X^_oLEJQbikYj&WYdBpu+(tV5A@i1P~#BfB+W-nJg%QK(xR}Ky2F3k5(`_v{IN=EEF3^ z^?XRfIymAWFmPsO9uNpX03<*F0T5&${yRmhK!C!J5P-0XI8Fda&`3alNI;gVU@1}- zMSuj2aQdX{OYT9r0}u%U0vrTL3J^eu0CXHyMZy3VSW zK801pY1__ha(?y2=eBG)YcMQ}E%(3h6ok3?6P6Ya&&{nRL{cOm3ZMam%CW^G^9vh> z0Z4T)!qUE%o;!KV&V74#Z$543%P;IcW$Sq_?Ro6f?Yj)x^UMPfcAT~SSX*9u{`qz5 zP6Bx7(7qGbZw~3C!MeFHI}^ju=o3I?J`6*sHtt+gS0}&jbARym5C8S6|LP|<{p+ub z^VkwFwEmFd|k3ieQEUSfBtL0shyL{j=YC>E5RgKl}J=hwtBf zaKBFFZ&%+j8vptoSZJ@mnUKJ~%tKHJXid&m(Z+(I=SsrA#JU8B9PPj~ETUf4Z)W_kVN z58n1OZ+q&mZaotcj^juE!-xN`FFo?`U0?dGtKYxtgP;8A>ks_e7xsVSbH6!r@KCbN zcRXrE5l{@Xu(;Walg0v?6(;B6lRaPf6FK6u+FF1+qW z0uS8wiA%1%>Au@Pe(|+8nLJqNNR?6uGf*%BU=RxA!3)8XDle`czC7+S+Lu*kE6zT8 z|1B3@`*UW)1kaEx5v%O}TmCbMRhQiGOF*P|-}}l(cD?;WkRJHTM=rVH7v+FN3k2y3 zK}7V7^JV?`L`Fo>7D2&u51vA0Jgv*-~QUJ*T1Q_P@n${L#PsUq1Anzxedc@BP^y|NPCjb=``aO@!&r|x7_fhyS~|U zK*yNMAR2`0uX)2?fBl<-VDGr<%G`Dj|hAF`5nZ;YF&#tB%DS6uaST(g$3K3Sf#xx{sr~<&`hst2C zd7<4iFy4~mM?a_lyv6mY3)n>Lq>k!%nMxer*3KJng4-IZXh+=M8OCk!)QN8+fv zsZ%jZjE%L!i7Ui$=(;`$alolH-C||s#pLN|Hns<9)fN)VL02$>bJHevli=(pz1XX- z`YYz@e*3l9kfsl&wDjQGFnno>m_2#>itp`g5?^hPl|GNM22%lU*|}0x9eIZGq_C4x zN+sqtz*;Bc0E`Ah9C}nW<&0~FW{frEA{l2x%8;{OJFHN$iIRivv>w@EPM2j^H&G>( zR7#fynH;q4F&D-Gjovj~OhiIx;IL#G?Kd{qx-g9KliRHw{2Qq zx5i+Y4V!~i>wRmd-f~=Ce(-SL96YwKjxo(J&NvfnYNjcnYO5+#6&UO3sGd!>xki-~ zH+acfDs2}%GaO5ecjM7=O9iSsw$i$kS_P<5pb-f43{W(7PFW^a#gdCk7J6fWO<6B@ z&j?^ZB_q&eGm4><9AY+xzBPDL9L(7Qkw}KV7<^J0inrqiq!4G320$)2E}3Wu4i^f2 z#*j2R+PV6fR*Ke46Ap+nfC^p_sESA*$3S_=Zb(rMBN&PVNFV@<1tOdxGL+0}vWI6# zqD#q)lobHQN)gl1By*XYnWlp}h6zIoO2M*+B7`a!%%X{9u1p7ES_8u8^MID?Lywv6 z`L}=dq%6Ns7^|}`d*^##0 zOAyfZPPiOE_5#9@0Tdj^+y}rbm!UL8LJ3`lzz8#pi)R&gxSJU)c=3`|yp(qAL{3c+ zav3{-K0p#^LX67Bw(C6iecw*n#YtAXyp(BHf*>oE!BF)uM`3Xkch5yc5Dk#L2#OHF zBFTXOMnaGSV1WR4Qm`l_K_Cc#MF2#Jph;E;nh}6Uu8M9@D3TQuU)Cu6#+t^-~dDd#Bl;3LIA;400Yc`0%bWN89^FoAP5S8 z1c)@iKuDM~425yw5@Z1`0K$Kd1PD1LkbnsQ9jlxGVE~C$atK1GAPY0YnVB_nrDJ(@ znHn8oU+4xBs|Xm#fK_q`fI!g50d_x`?UYJfL#!cfR<_{LIph>C0zMKg~9t{V)HHn^wMX)e|>| zP?jOc97q<4qNO7C?Z9j{sY~0Rd-8AY`^JlxylD$u;4(1x`rrD+{dYaO=l<{DfB$X# z-tS}O<`4hrEq8zM!IRq-b@RCgp8Dn2oqa-`>jh_x-@WvnBBhRxr*@9P&{|2~57 zBJsgH{sO?o*L~o@+dg*DHSZ_w{@Xus$=lv{-&g+p;u~(FGCdX;AQ1#8hY2z|Xdqn> zkW}&*xjA|yje1l|p|6$0lTY2X>y1Cp;n2(lfvEuIy6@JHf>?Fg4gVMnq`rIeM|QpA 
zLxLXs@<(>P{TC^~9VU~gXy%136nDV|#6)9c*1>BIzqa4F_g?(^YeV9F-@5gZANvWlBb1yaLJSo7;J3ea z&aNxFwwGhNHl_ig4A3K>ImqVwVWR^XdDM+Kvo1}Cq57%8T%1j_VRN)RO)n|B`h2X6 znwKYTHcRX~;n4DFm+kszKYRVw%m0T5pLsem)HPVG^LCQE$vN9je{}cvp{a{z3JU7P0gNAY_^OJjoMM?T`&$yUZw_t zbA5MYQmWykp4!xQ;N(%m$zzX+jDtbjG6xhP7%Q)gE0md%`D7AQnAN4j-DI|1HZ?`4 zf*7az5F#t%@>EqJC3^5SFhvV1Gck_FL72ok6NRYz&fyNY`n+$+OW$~v)jBOj54qUX zhWd2xM_w%SX6hvJizNM+Jy(QAv@l53@eSSf{KN8LA-b{)dajmPoW%XD<~Txq1~3U$S~>DfF^W#wRI zJmf6>;mKmManP_iVqwZcI2k)9$S)V$Rz2R)_IJpA9)xwL?L1@KU|}U;`8maE{RY@z z=izzaXjIoXO_@wq#yOo>TPc=PCLC!t6i>+++9^fn5GGF$>WnZ0L-4lJtmVL1)ue)z z;11WQ&8XyKCe1;t3k2+|=SrTRZlHxjY>#M^M#({n(B%c}mKI zB~V)EOd1FGm*V0WI8=HeH|Uzq8s{`Xpe;S!Rg57JA%raVE(>s}4$%#ho(3^HqcIv{ zi6(WW2m3@T2t62cG&yR6dmQy-NeV5H>66k5Xp<2sM$#1uZML=wJTjrWl_oNV!EB^6 z%3@FoYIAZa6$iXTP)I$6Bsr{zEP#Ma7ib{>h=rtrDFZz{AcB_MgR3Nn%0)o->_C-j zxcB2pF=GmeLgghBJ!!oxXBVKT08s%nu@AjSQvi#NUySzfUw`2hM?OSj+PIXRBIE``})`rIy!S8fo1w=MRv0d3p#HII7#)%`e2DcI$(M<*t*-%J;VphrMDazpv^q!%D#=*YrBjBk4(f1QqG4o=! z*c=&mhq~;O$x0`daYwT9SwL2%p35 z10{n1xePJ@LXi{z0|6LFp+F7+B!uHgGtdIfjDVt?LMA&Cv5J5Jq6kTF6;WQ`4gqjL z0Krv&Ia!hjCRz+CEaP#jW}MlK$Bs^J@JOLbaVkz%0;h}>XJ_UFh=4$V1V~|(AV63p z5P%@!M+Ar@5ClRl39CeaAZP?Y;s?Y!1;$8)7z}^_h=R9Wubf493LrpGSVgR&2w_zK zDF}e@_e6jqRvADDfE=Jeh$Lu$XhtIifB*;rqzf()vS6T5G(nKd0TIFv2!vGTq@im9 zEs;bLArJ%z2ogXLs|w|ifdLfg6exocq(lcmha|)VF-%QQ4cEuHr{oIeQCv` zN0!?Id-p<^pI@`Mcx-NdO#m!J4+eJ?0U!Y3=;GnIH5hO=vGLzSL>>WTBuzvz)i zA3XDQmp}FBJ=@OzvF9H7+Kx-EbK|r3-U{KQ3*XYLEFF2~zBMPDWaN=U`%XFGR4y!B zyx!*LE?78iCZ2HVsavbC?$Yx%o1M7vtSQgK$%g3@4jn$YZteO*ukAJsXo;b&k{PFB z&6w&UMN$xcgn$SNJ&^w3o&(Rc?J$hi*pMHK<+A+N7)HSbLzU`;o*Pt$_2mk z$a8=3fBy5&J@C-0zk+1+i&F!ckI4r*VXSqpwPSc z;8#BbVAu8Ud*IeTzvPK{(jRnBLSmY&-%4Fr> zO0iJahhKi;;L5pMcK-WcdFKWH=sov8{X(0|T6Q*lN|Pr`$0k=@xb=bWKbD(>ct~S8 z-MO#<=%;&f>YzO2C6v<9tzEQcHV!SY4HJ5Rm)xP6Hip1Bm0PMC&mHTVfMRiIHk_|w zn(C)bxmM`wK@tUOP0+Y21tD5JnDmixZ{V7&8hM5N)Fj4!|7#=5(L4`Bdw9Gqs@t@6 zjD8_7_c6N>mVHvZKJ2WUX38he9>a~-UwHSoAAa*y7hshD;{VmUGvD!h@4D`^H@u;r zw|NaSk3IGLEg$`Nw;nt6q9YAmr(bu`ouB$|_JA)T zTwXrrZcG(Zx76~ikY~@jblxX9j#1S!ef#8Voe$DUsrIf0_JGMMizQ82r%;$tuQWF= zm2hN?*^O;)$-iIcO^WgKvv_)_y|&b=P$H~v4W0@01NS3dnkUEKqEEG* zO{ZZB!zNbpvo|EIs~LJQ#+-m9^z;Ic>Y>mAG55s`ENaS8yT7$=5j~sIXN#(e8fos- z>5H--Gr)W6&=vtDBoqe(50lt*LfyJ&cbY96Ub^e>xYcx*Y$zHL+sS~VxM9<#^=WBw znA=6S&~%}2zOI-<5Zzf+T6*fu7ud|z`H@!($A_|=IOv(nD{a~|JB;4$%?cLa-F%6j z1#rcBdC5xiP$Y*62iN9owLRuX&5Gd z``!QY^;eGckKFTe4wFS^$#Te6y1C!EyuRX({`gHB*8u`+#0abfh<{K36Ijr_h@(d_ zeyJI0-wWM?hbB+ImM5bFeGD3|=?C$|oYM5cwPRV`4rgY2n?-L;g^+BTAzOl1OHQkX z>!VO$JT%))LYX+A0a$QIY+uv<0-tm%oy6?!*`y-&uwg%NnMx?kg$V8dktqYrJdMme zbqkK%hY?X3tl~_?0K$n*0{alg@&q^Z$U-SG2o{2o#^BZ(OT}SF=B7k|Rc{C$BCSWr z$^(-zcA}KCKnrx3VCH~eiUxX57ih((9J9fK4cvh~$O_HPI$4%i5>a|U*`okNmsn3~ zH!m?8(I+9pGH4Df?q)ElFLJZWjB3@1Ku5s1_;Rv)|MrgkId?~T0$WeJVHdUG;<+H$ zFlER+2LMSTJSqXfBcYs|0?Go(ID<6=*0KV)5T*#MQvl2dgeseDfqH3f1WdoSFoLDM zVI>0-^BlUkNXu%B0xZ(07?)9K!gB;3^xT z1jqmp#}P2BA`Bo11W*7;fkp;1fE8$9fwBZ3!vzU)5MUqyl7K;gy8|KwK?DpCF>R<3 zC#qKHhQ*RcP8%4_j88uD-5uv&0=VQ}ykzZp`q3@h&x7|bKDYafvu=3yiCeCI z$1kl&$X#&2h9H5G9SEhhWJ}*Z@aoE|&pr#_lr1|L2ypN7yRnMccC?zMgUAk5F7u()`5er{a|vT9nX2F5(I300@YnAp8gc5mfmzg!ljQ|Fr4Q$R`I*e!RhjY<~KjUqA1r8_&2F z5g;JIC)krmgQMR&v0nGh(dLKt{zZ4+vv&C4ojbxb{?T7A@BieHn{T>f?K@X=?kWA} z7SSAesx*WIQ-Ff^eaiB4iZk%J6i>SHj8k{~#xMMDKlXY+AR%G=bxgf^`z2Q_96a#7 zJs#xp@@I_{Nq=ia@#AjzyACa z7yj7K<`~emWO(H6&jPskE$?~o&cD3in)fjKgJ1jfuIt`^|80ME`E~Dix(TmT3G z5X>BiP%vacVDCMf)<(ycSbXY~mC5qVc=`Fq?>zra*IC+ zH-Qnz-}%zVuXxA%AiL+5Ph58WjYfbbvp_-xK+aUVznf&XBY9iZ*{8RU`&ZJwJ-N?S zjdgQj%LW@SBe_mA{r-1e{)4Me{mr}fLOAE-P0NlT3k<^=kL}re{-sw0q=)Xi`--cs 
z>6p6QQ&N^gRl)<``P#V`UDdR`=Xx*zqZ5UYP`t>COtLh8;t4|rJ2sA|t&4-X>4|7L zkSG$_nXDH}ktT=tFZP{lX>o6Jc~wm1Fy``TgL62j?oPvC!;fG5AW15Gb`-qyqSu9|vw$wMyAJf|!7 zcyL{3+u}~6F=AsQU#sQHHKV$48XO9LL*3wqTSZ7n&5=Tr`&ga>! zd_xt}7FqX&6|8X}V!UQk*rKPGMon+6=af<%2bpekg;JS$!@N0*A(tl4XC5-<3nXwY z^eIYVJggWRc&k{ctE8>ep(DKaRN2UhzMn--jGA|Cv}r?_7Pl+Rs?w{? z8jVYy`o?5%;8aB3SoD!AhP?DPYLBUHXPb4^z|bQr$EsMc8t8jhoHX>zqEyDT&@?f1 zGYF{^6_2ur7>}^tWBF@U3YQ-$qw4Y-Ry)7(Njv_}!(X)K(|9F1??DV;ho#8co z%%45$i}w$^f?BL7*&Wx}kQ``zpGz&lJpd9YQNIH(B-$pNcHz>AU)GeLOWK|~)!5P-mJ8DJHI!0Z`}7yzg^HP{F% z0>;``BUE)&iCm3M2zk6yS>La0h2bp5#$levIU2b<}(RKZ@oi9u7m*+FtrbPiDQ8Lt2h_Pzo zC_<_#?2-gkJ|JT75ENzfU=*SU7fF#qu9Rz-lSxkK61SLPC}1`cRU8;VR)z>9vBfTx zLgu{8%3bNYv{As4Aiw|e~^zRwym)@oN52!*3p$UKlNW!-e-$W9-2te2+KmY+s0EBPjE`X4M07=jYnMG3w0oDY9ETRO_bDvd; z;Er7?gagDb0whHO;4WMuhyZB-X`mTEpa2e%MnaH5l!3tr5Jiw2a2FuL01568V4?sR zFTpMc;0{RyfdD!NYXgi1(?vDiDhpelP8}hm2*^wgsTjwG%0f1}05BVX0D|s4kR2*0 z5K4&x38Rb=RkR9nYqL;#6UfM540Nyv6b-vZtUzllM3{mp27oF;Ft~`ag+6&1O=27m zLNkbK5gc+FK=P~sTh(TTTbEM=5Zh(r=jWCyw^0jVuep9Yjw6^$gm^5y%kx&C=B zxvLcAjFS(2`Ld^8Z_?wBeEI3uyyl_%KXCi2-@H>C86gN3AwZ3b2+J9QHb1|9etzcR zM*v)L{3-^6^U+5hz%F9{;in8dKXLBNQ?I^0U~6mEpL_BYgtgUun_CyBEBjZX>fw~b zCir4=>3y5b9mK}wg|)Rq18BB-=^FYn7yrSOdh;*84Z_d=!dw2sH%?#tgFoB1?a7~Jp16U9=z&jFMsxn z_aqeWPulSxKK_sY?LQjejx}H}FajW;%>5fFjW^F98a{E~#VL%L$S&WFMu&r!9&P7^Oa4%_M$sKlk>`bpE&T;<8S)Wt6uTE zx8TTCfI#j@kDUJKbB^48=vgm(?d;rdo;(3O{E2?=5B|%4Q~#HD%-Z<9#6it}?$Ial z(xccHv0~^Pe{gc++RHvQ8+_NJC!Y6`I}?rFCPVnrJ$D1R@g?7J-zV;V?ycWNP42t< zUC;l{H+}vi@4DrVA9VJTVHXV!QG%IB$SDIr;S%$bW;=_;d^w4kWnb{c(&yiJ;I0=u z{|@fGypAQ2f&&D@Xa4jzLF~Ha&L1)cB7gEvf9J)o`T-g5`QUHg{Hh-^3Jw^|1PM&V zTl7sQG8v#4scLKM^!ae^F}MwZiEIzBGzh5We|rARcOSaqePj`22O^R(T%K*RYNp|+6(WC_3lNb#{#fQb=?m=YddvK zsKu;2la@18XjcLjQRo)ztOrzzXAWvHZADwriuIcp7QHQd`DT}P9$L@ohkxSdKlQ=) zr=)qiap(8_;CtTlyT#Xv+~AdUVHyrt{;PGK{Exr*2VYGCbW-Wb?;UR*`+*PLaoe)B zoMd-@;4k0(oPBu4jef8<{y2{)33U@o=nFnZMw5ed;nHS!q`RZ;EfOpBFhQ)m`yM_ER7S*p8FK80F8a}SWh`fNnS8!N@) z2y%vt!AEU24O(gp)V%)lB(AmIOD ziH8>6T|9O1;qxaZXWFkn_k#gP*7D)V7*JY1*=u|0bI-G$%4gTv(0^6BYfzA@Se zK5xr*oBNrh3#oN#QFmFI%iG*#?{~(%3uTcDyhADSB4Q!{$qh7%&LVlRili$L2q=Qf zvspu{KnR4lK@6j6V5~;b=oTQRBu`c%!~lWL+EhyEMI!=G916}LdQ5JBSOm0|3bdet zOejhPS)N_)tQg4B8qjiHz-UbjLUhTMGR;x-MeZmR3+TwS9Kw)fgFQ9sz(OE0ZB^A5 zKKH@XeVaC+QjTAC$5jaXubBjb!-)Z4N1`4Wff8JR=M)H32Ej@cP?!jTUI3>E7?J=e zLV(eL>;Nol+mu6Sf`W~`s>G%aG1e0e2C74C)(nPqGakpjsoVW6Y|`&fj%sJLq!pyyPiTuRC_r#WeWkx^if*MQ(GmBauAKpwhAau3Mf*BD61~0&%Q&5^U>5;M_!a_?1c(GcfW$81+Z+&JASnP6zQu@L1dIlN06~Zri4X!pjD%4l z1yQNX$k{0e2sw5cAYvDdy8w^`AOI2pkN`-6#3dp?f)oH|xS&8rgJ1;?6bW+dG7z8u zmjH>02>}uTf&h$k4h~gXSsibUC+lfx8yEVPas>#qU>p#N7#WPrV37nb0w6(< zWw8hl9Dpleml*{Mv9czzFf59Yvk73VqQyv>1sX#bk{FD~wZ*#0!3t180}Ca@MMH!z z>r9LDX*2bGeQ1b1tE53wZczUwFo|uE!;+i4pL*`@ekK4bNlEkAC{T@4W2|ANlBeUv$UcEd?lr1VSJ{ z82}6EJfd%WVf*~@?4u_D96x?FVu1AMi3hNY*mvm4QPAV39)0T7&k4ER>iyi~4?$R4 z-M_hYVP*BuBq$EoY}iDbZ?~T2OO^s?ee>d;wL_$e9l7El^TB7`fA{a*^0$BR)9?PB z@Bhi4`2Dy2yMOg-KljuB=%2s+U;OOFgkSuhe-*+{|MHK&>uaZ;`2F|q*>}X?7cZXO zzyB!A2mwuuN`w91JxF%bv7b54blvYCy0hnKZ6BGX2m-?2fPe@J!nY9+!TY}alh1q3 z+n#;x?K{$;y!pFVJh%FlUps#D=Glx>@z6CI4v1Ysf37c9E&a)JYhQk-nOu!(4bR$- zRpJtT4*d7`y!(NF`FkhtUuSCJOBc+nCSv0W^_|fsH#MI{wdaYK-24O2zwH&@ebXTT z1O)QD^w~2XdClQFuD{`?SJW4N?bDBZ#M^uS_~ak{&VT7oyrWyxDnd8^#@SO(KXxRC ziox8V{w?mr~dpmLF{_TtAEgC3gwVv=iqF4O@2m?$K1i2%#WGW=U z$bga!InAFubJ5P91Q`i}K@fou=>uUwY@|-!Wh~o`kVD>zVP(;oEjLiD9T*3RRUx$0^QwI-!Iwa5YOG&Yta zEJe4A9gwRS^AZwv*oV0@l(r7m*9^06z8@aCNsZq{;@|k`(q7%3B?+^aukDaaZt?7y!PJM#?o;|fbzFD34t)KkZ1ItD|P<4h^m-guEUiXSy z50$;oO%~sG*C#&tp8vYN(>WN7E5ToV@R`r}zRO0}{`eKcJhw=-j8)UvZMVGS!yozZ 
zD0VNo?X`E`^NB@ImW$SoHegk;s-Mr3>QNP&<>465l|75RRTqIp8l=Wn6k{o~K9#CN zs%o#&pq4K@!v#3QU}4Liaj)4dhU{f+fpl~9`FOaE?qqgs^`(lnP?t1dwFS!HT}p_P z(Nl(exBk=1>)hG{U7yTEDomwifmstS4=rdy#D@?474NK4t%?Eu~PgTWliA z8JZ-kEDWscSyYs)5hlv#BfR9~d4_o|i|vhGeJ&+8?SYA}-I5L+JEqCrtI`{4`V@v5 z2Eg#&2=0Jm=_rq4yT$%ZoV(%SK%1%CU#=bkO++wY0BoTY_*pD77E5#+m}U4A=zH`D zX}#~t<`d-$Tm3`p{pP~wNy=TDO7IMs3*P1~*Z?uq{VHWm-()P^v#JFsaVKlu4LQd znyV|+4pY}HlXGs_hxovwe|at~ z`WaO6k-aw^Wi>sFionDwnv?^8X&(R|c?61g0Z<(Q1U1-21|SUU0$ATrusTE{s4>7; z6;dt-)a4>)A44H(F-T$|>41P7Lsy5!;;@Ed!%2fM;EEHW8IJ(2c2gi7No!O-v=|N? z^}MYXg&_sy*3-OS7HLB>f#-Yy_!g6>wo`Ekaujp79!{nz5E#Nmp%V&ZM4;zX+NDss zR1wA83FVTD3n1JJa_N&Su%yoA zqJ-iAiU0uNKpwxz2qOf75Mdxi0uceAQ8WN%pco=81_5*O)^L<1Sq<| z@puXnyC?z>aR~tdf&daE0T7o65I}+gL;)ZO5fCVRlaT-jk_I70V+ceDfg%HFqQg_4 zQL9$e4pI1ZD;jL`B6g2?y z!w?v&s%i#Jy!W#od*RE!OL6q+55N1CH~zy<|Ix3%_|CtRmW1~p!~g;k1R8e`we!WT z^UKqZo&a$C@@Jw72zvCv`>~5Sdd1Z%#;-l{wP!r@y5jvZ>-579LReYdzrA%~b@lLw z3d4PIJPfhlE|wPOZP~-QzIlGn+F?+Rl_RU29sC#ffA&q^^Y+Kz^OMK^$)BJ3r>{Ev zo1gmZkG$vwzxCeV?9cj7y!FQ*{K~KW=sUl1`s@GsPgf5diKSdzKX+jNQJQ3eHzdHO zM0hz8o)Y#R{WlN3zQ+&U2qPqbfbcgUAcBJMZ3IN{^Iv@bjn94UwJ*A^y}+Y~zxzGE zdWFC6Ek}TDUjW#HW>-uJm*e)owl zK2iEsYLg>ojDRsUJ`@&v>OFoxuMW3L^d+Vb_hfzV5!e{^Z76Uk~u}cfI$< zTi*lLmu8lXB~Qo^5Q+ilya*4FO=1VvFjzbe~(i#zvn~mebF7?3+kQ^|KW@7_*(*rnk*^;6GeBER1r=V z%R$O{aq{$rtv@a?5)uT404az(dG?8`uekE-k39n6io-_%FjVjY_eam3dhRox$?@tJ zzx3(rZhEOfh;e8X25zfmI==S<|LMjzz125{YQGSA(gH#~2+UTui}gw9QyHw!+bbt6 z*REV&E{o?G9V)7$;AdE@JT=$C%u7r*+@ zV_QmI_n1`&y>xy5>>IAX_o1&27pY$M=L6~r!NR;n+-YlePu5<0Uo))F(d=~1csyAN z9u^s+mXQlDBYRrSuJw(!u7(x|(_!cli&CANxLT8O3V;JJp(ZXCz$c+>_1=a$PRMhc z7PK)@Kuw|ZMYEG5uoGjNtrvtYhH=T2xKZjDobblHo#!G4-M@ArZI(YP8^`;+r z>%0HpeH>8Uf8QIv2meD}|JJ|#jX(d~tnEq+jHVQJeC77T)v+IY+23Qebp^8Bxfu5~ zaK}q-|MQQ3Xdlx}Fa55M+;x{VmFz~&KYC$%rwXGgZ_Lbw;Yc0Bp{#S(&w@4Z(k|7F z`x?`sqSuSu%Uk49#i~@56uorWI+Rh2iCuw((lIuqiY%>~_qIshY-MBDu?h||4j6hR zAWY2XXWQW@+%{PKz@*EitjH;rJ})wTWabKr3Mp-D&ijb9p>;dKhpW}F1SPAd6VKbF zl(bbF=B0QZHs;Djb=BsAK7&nB-d~EvejzLz?v099l@M~t@+@Q(=4c5EEu&c^LlsKt z+$9AG0nDiux=aa!O1T5jC1@qsG^3*~Nt6pq5F+ zmA!H;2Tnx?Ek?^1n0#4|!#130Z&UY*^jOL=13`Im<>6HIJ0I8@tzB!e-Kp-kI(Qu> zd}`C4Uf){uD@z9fp&v%Y&{XOUAcwuS!pReRM|pO|;RAd3UH`0=>1_E}D(k26sdHOl zVTV^nDQnruUa}o#L!7Jzp=%YxY-llwD&7@i6M@2H3M&dr&+chBFG;xwo8>~U@?f^( z2d0x|bmG}@l#h_r_czy$Vmvf(^a27ZC7eB@opZ{``zQOY*|Oa%=QbW%ddSN%9UdB3 ztd5s!4}{U}TpLV1x7fx~g>s+2wmrH+kYiZj%Y?nKW zSmbpgJW{rnLM7IzqW4u!mMcmKMzbcynrvt-=aRO&N;Uc#wZ{OU&VT{QL-5KuTbR#I zpGoR=<{OLcG@rT8(F(qfc`@4z-lQ0vLthhz&pUi2l@3&=9$CM5Vm>R1#o53puX=|7 zhtnOb0wD`QGM36pGg=Jl%LhlNVSJs}12BXd)-|Nf5o0Y#EOI54 zJ|iv=8|ER@n|hL>%go&9nNb90rw8$?cYXZH_3ds3@ALlS*ImzQ`s}O<=sikh>;=eW z0l2EyU>+E7nI6*a8ZU@JAp)A}$A>VE3r+G1Y|l(`Gm z`MOYAohNrX^h;klbBEKJ)wnaH^>aPOerX#@izUmpc9Y@K?X1|fHWyM1O&jaOC&)8APE9aIsqaABmf2MiiThim;eEU zF$jc61pt8{K|l;*kbqJEVV4|`2q(wGDF_e%L6F2Q;hO{jBmfd1cF_S)08F4LKqL_e zfC2=91OXBt2_Qy8Fq4H~00U@(!&6^SinBOC&RPcv1J5C~Zffg!0J6(Lxa3e21K?5-1Z9Ar z-~e&~ATt0&MH%9Q4?S@8GoEb`1_J{FzVekXT>YHqD^(9ZaL-jQc*}#I`K=ef`tS5T z;GG;$Q~;FeBJ+#^vlE*a`_qqn4ZxLGK2sLN_}D{V!Y<<07rtm#+vo58#MRea=cUXF zPM^3R!rJPA&8_n*YlqjMdTnpLG72o&19kCyoBQpwb?)ri+96UIE6*SFxZ$lI`@lc> zv-kbUov(l0N8kU!m%sMa@Bg#Ecu!he&;IPM-ub0_?*I9Jyt3z5 zW%=U8)5o5A?f%0Dwy~Bj{&hdw@m!M>>#=jNHu$NlkK<+Uf9uci5$Q^>Luq6n{(A&O zP!PV2fC%c$@8j_IKjXPK>|a#J-|(xy{}U4&zfk~jNdgKm0FXb|pT`NdzkA=#_48Bv zkG=Bw&pS4%;0`x**!=ug?|#?6`p8$m@<&fDt{`g=?ux!M5fP!fU&%QfSvz`F7+pAe zVD&q0d)+^H}dxl{_!O+Tb#OZ_UOLDf)fIQ;}Tam zfAPs{p7pZ(ANV-FMYx2ZAmS3?##>)~?_D2!!OLE!Py|yZ<%nL;BklkIAOJ~3K~#_+ z2okITL>5D!NCuNigkvl06lyLs7!N=5u@ArS<$nuSIh=5k203-lhyMt~uA5)}Ml(0z 
zlYjBYx4hzwlz;l8?|sqjZ;%5ZC=sv{oJEu{01B0%V9fo6$DY^;+h>3f1(AS}0O@8Y z&z!vKDNlRwi4zd6JoFR^Faav?$IqO(^2)15&7S+e{^?t8d}+2eV_?x@oD5d>U4CTc zo&WLe|N2*c;hv`-z3_Yg)wb!8Vx%#n>gSu2()Xk3xy{8PE^Xh@Mc?i5c{P1XYy=8K zbYzs0RU8TGXWQG0)NMYvxl@zeANj`D{`fDw{pY@N|2KA!tm@cK26awdd&6}%-1C(O zW0x(h7kXxPT)vT1&CXZK?Fhl@FxZG2r7U7X(@c%^((=#|Y{jb*HUn+xjn#)%LtT&e zmUIqC*s|K+?Crz%zvV|h^nrK3@-^QF;9bA>zH#NL-OQ@i zrj~r*!&UbE#~wKU*YEgWzqY-AyA)&~0i_A6g9n??tbX*x=@B8pw-ec@79>Zkyf zV2cu(E(Du5P1%A_7(6UN%Q=LQ3xN#x7>G~@@Kld+pR1AYB}$!DuS)9YUC!fXSlG=e zLfY=pW2mv*E2rYt zbcV=2A}#|$LKT%8MfXvuE+z)^Mx}`bWzd@h7{ak3&LO7x&b+eAreQkc6Fo{sw$eL7 zRU!_yeGS|zO{yF*G8oW7rEXVME!{%7yv0VrW=@AIbw}&-GrzMy`}3na^%H&FvA$+l zA3U^Y@1fz)!}e^vv2*4d7Z%iPusIv-y7z6nE82JCTq#@;T?$B0kimw3Ab<(ih}juE1OuDHI~igOsLu)q+n)vsst6 zl7(SKi*7wiZyTj}K_3*7NN+M&^^*!!WqXI^=)rXPXmoH~jq0^fMFM7sdvOS=qp#6W zMJAOZRY3xRLtve%7#Nxmqf}X+x7^EiJS`U4jzL@4$YR&26qn~xl=@N%kR^vZ-BW_9 zVo~g|6yTHtS;gQ%UF+Nu@SQ~Ahtu@c>a3zS8Z)MpTiUIcU%#ZYHao*{?PK%ms_ zwP=x6S*WXNCDf`a1WmQ-^ef6VA|7rB(^NMh4`~4!dYhp0woOzgwV9C@%xA0YrxYkT6JaH~~QboG1Z= zOoTwfh+TygjAj-fm>M7$EpQ+Za6*72hu}c~1WAREC_n&2Cm;h11~8g|!C)~8v``ov zfYSki00aUK2n3k%f1wMARHQr5fdv-8K!N}SBS4S<1UVrC0YiXjM1>$&gjbx5rXrl6 z2!ObR#5WPdH<1KLfW##NKmtbWa)XFnCK42gZy`WX2$2kC2*D^Yf<`#dr-D*MxdcFh zKmx%4kT603NP>j0ivUgBB~k<+iU0vRL5dI=zz`@JD^oNxhAJqK9zYa&?h3604>%Q-2c!zJ=Q+9T;GlviegoK351cV3(77neVC}km~waQu! zU0q$4?Xq01!nGXAxw}eHaV)hIN@g%Xm_jxoVNeo6lAUBHd*1sS-}il<=N_)BKPXoJ zsru3Vr;o#B)?4F53N3)-PU886`o4sU@GIFn!@~828Vx-P)JitOObws_1f-(04y@1u z@FIlI@wsd}GaF}XYDtwJOp!}5(u@WegdjwPMrRN}0W*STC;t=ZAm%fsefZmV3VK@h{xJe~jg;Z-e10XTSV&pZdhRzVlyA_g&IjI&<#CFJJup zAN%L8`ZuqC$-{sB>7yT*_d5@@vjO5N!Z?nT>iN}GpZdyQf92iZ`GPmTtNW_8*WiCZ zzz7-)-$cL&TK6B&{_IuP-TW_PdG&w$$Pd36|D&U@^w51Dc}Df|t>>Owyn1Qt)>q$g z|BV-J-T$?VbKmd3>P!8zUwzMC{?$jH9G!bq*|uh^L-!}&{-1V0}Z=aNB zbhAD1eK%kI#@pBY%YXE5|K_dhDC;NX6GzT};LrZQ{}S{KK7d1p7){8K8F8y zXfWdXo9+gKr+xl`djY)U&U?Q2sgJz)j@JtC@TWg`7B1Lm{ObqRD*qk z&;S;5A`CPW>?+Pl5oczOfT|fmcjV)LcgrhYZ*ZUi1PuxZj(qezAa>pI$~VFazz06| zo?Bk=9YlHHzQ4Qm6>lsC0z!xYsu?7{&e1><10Z>E=E;+@YJQdwAxO}qfB*{YxwFTw zy5Qo+Pdy96<@*mR0PaEoj-5Jj*@cJO#vXm@*v&WJR#0U%2?+JdXnNKD%MSh3Z~pYZ z_}L?J`OV?WZ#eSRTqmKU5Jf`jwnK_}yt28qvziyOb|B~2u; z>Q=Y2y-4}Y(Vbko;`UYXP4~R%U;p0QAA0bynx%4uE}c?nba&l!{eAa82v@-mQ`(|W zG3vQN>l74#P=EzS?z(=xCf!~+>@TUv;X^Qz-^!dM62?;D|WW@VW}*N37y zi(OOK9s+TSWCmnv6U=03o_#Hz!@7XJM>g5yqLjf6eme16blg1aS_!bUGukR8d$YJu z>QFTI+wt7&%q&l)?SV1265jIWx8VPGAO849teR+w-b%qvE~C;h_4C2o-ijJpu-P2U z$u`}7$vNYkrna4xvJ_FW7`>8VNown(=&ZAvNkJEviBmKvm#Wa2!^TWn6o&vCGnf}> zG6M=?#*UQ>9CE0$!ZNd3sSu5IufcOvTVFKC&^4S&)#z+qtYeAmX-3QwV&2-HrCnoK zy3o%GHIZpJN3EL-MOBl9kvdwbusAWawe|Z@&S_F7oyau1;&mph>VTvvaq69vj6$_y z%mYUwh`_QT$GJ%pf>84Zmwtfb2htUi72vTps$F|!@jRo<8vs?;T$nhARcKd6{w z(4#oO(zxfDZ8U4{VC9E3i6_hXqD~|9;0T%Mqieh<`kgs9dzaeT0X4cPa*^z5#9!a+ zIkt9kUS>z1d~|(u;3yBj_LW{v<+9$4*f{!0sOmmcR(xaT>f& zO;#3wYSTM2$;Kt?N&aeRm0HFW@qAMu#*)tc{=Z;-;?9|i4&W6sLxkoX#W!_uuh-|dXC=}oLpa!!d zA%m?RAk=}mc$cVL7_63+mKv8eCm(w6s^mR14s(dZ*ccVv$Qnf*<4$ij%w0>kHk*n| zt|AhUBskD?P?Y&_I;*5Ux4d8pYi0^V2@54Xy{FkEPlJ|yY2?%zEv8ig_KL5}O%Mby zrzTiTqPlu9UJR-muFM&?*$sw>p`7hC4S5c;h=G7=N`OW>8bgM%M8b*p0Z7VZRzWLR zgk=>%#Kqa!AYb^*{m-60IrL74{(?*HesNgZe?^T(;LVX>F`7jYL-D47BMlIefP&5d zuK-Aa5w$vDF53#5=i@!o;@s&oxnG>li!PTwScK5v(72t&; ztO`JY0B9x(z@jC!tUzBf%+$=(43<=F7t9O*MU14v6%PuXiYovO1yJlF6d*uA2#5(l z0?ueapm~@91u#G$4nlxXF#t5cu*+zmVHZId2!jZO4)mOkCq^1T0ETa(kpd9JH;@D< z#5WMcE)pa_n8FN%5xayS2*3mo5(WmEbIv3P0T=`%08Iu}Kn(`a3$Kq8N zHuN@tZm5+IsVL%#NFEs=b{w3~5*NwnP3kBHjjC!&7=#2JX^*<+&k|4CqLLNfY_-4LNUMwRlv-| zx$?}(^?dxR{{Z01%dRo+3#+HU_Aqu4Z~dqL>`_kc|D)f!_S)yO$d 
zw$6-Ku73Hn?7Qhrhi<-dvecbZhq!&@T(fxmkH7F=A6!{GKr6=4lH|PS+fF?8`@j2N zkN(T=x%}@R?_PaX`|vrtVK0yMwuczkgyA1yzxt&Qz4OJdo$R@KB=a+8kN?}NuX*iT z{zQNBrv3Gw{o#K;J-quj-g|Z@o|u>;tbqd&PM&=9br-$t1J54$$`kg2D+^3v9zuK* z0V8NId=mj9Xm}X7;hLAd>X!~L*Z;@gyza&L@64ct6|j8qfz8~1@|WKGSJrPlaeTPu zw(G9D{C`=!_UM7-TU95`R#W+^OYJJm1f1D!2-RslaQAXC)K1(D}DZj z*N?cE<*)5r_S(aD+;eAs$#?z0n}7Blw_jC``6KW8X#dzB{l#^6yx6+MKR)#MzkL05 zPuuv&seY}*z^%=bid+a7Xn^3G%&2gu&L6+>>f0t$2k*qmL-&6Sz>T-P^5OeG_LAH0 zA_2FL18PEy51l!2&AyAA z$0Mhneen%9!+U6rAfY|9w7R@!srl@hxodS2;JvI=)$!yjt1epdD$dfP03EI5p{F!PO6^lYUCcu9zU@Rq+sWW# zLbe>bhJDe)Tm2Tgj;2!Oq^b+{+!Qyrem`?gwdH=jGcwt*v9W$8y5&+&_AQs5y<+)P z-gDvJ?K(k+QsMwtH%<&&TEb7_#i z8TPAYqej0YhBC9kRnD@ck|x!7Kst*BS&6{Y)sk%pS%-;gGU>_HrFz&((P%Z>-q%?4 zu4*iJhwVB|Q>e&ouPK@WEwu;CTm!+#FeL4L(>f_hbDYW8ppFr&opRNNa~hRaAezNe z$1<#Mby!_Oy>_^n<(VzA8K5JlUfn53oxGH-B9*eWyba38IbrYMRGm3ZF)!kvy3#B~ z$vyh8W$K`!Tw*Dci2aAI*x5PJpXt|^PHPT%pirDEIPFYYCp&ILo;*<)rahy#aL5+se$s{>Ae&JT}zZTNYb$ zk|ni8sg9aD%gPXZ#V!(}RMljrLvBW_R4HkP)@iU9$PW zIEs4!BS8^ksHfwyygn2DyUSU2(^s@-Lhjac3pIS|BByleX+N`SuClt>XfOkSuvt;yz6_A8Emr|^_r00C|@R9qTId*Iqye|3+E_>Pa_3FV($7mfY z^TNz|MscNa29=S4GJ@g01$CXNDOK&4zoy`Zyw*yi;c~bvu?XfLr#Mc zq(a0I<7R1jynoMVymH{O5jYwzizz~(5mslwBpLvVfC7OKg@P&?)I3UIPUZx`)6_~S z*hMk2E+(;hxT0MBEx;PF?A=Y!8l(KZDdhgolZ z-{qn2l4%*rz>;UMTtFzOrYNRRWXIx6A=zgD3J@r;EJ`CBK^g%ufTs9513-Xq1kEXE z+$ADFL7}O06i>n&TA~udNicxo1)ek$!!9$0nS#*@prsUz3R+Zy71g4H0fk8!O{qlz zg8)ux0EllOQ~?7C5TH;95){(N0C-0jRA6EPIuojp@Bj!iDhPuo1T;_x!qk{)MiWpt zoi2mKHy8i|Xv8i81OX5r0Z@r=ppe)_kOl%q7`}-h2pRwn2no0YZP$cc_^vrRi%;&=bw9Y9Z4kV#!IhjC9GH6bcv%tam zBmhjzV4n0M0E1PeYaw1l9kN#ujwTajec5zv)Gk*dqu`_Qd|VgUQ@RtY#rD}Udv=C) zBkmJox$&eIn>!9UVo13-@7x~mF?P$8Aie}glP%@hLGw;`qL7QUFjy(bD9VQLm8)vs zdQo3laB!=>CBqVeGxkk{xCMtK{bo#Ohq+ps#~xPFU_dd$E}G%%41k$L-*NZr-@MryVy7S=0lS8x07@fpoISm5$G`AN z0GHnN5_c|YY@T}JA?zZ4{7?VmmmcP)e*LGedH&_l>B7~=9{C&$%PR-BHc!^e*Z=TE z@$zr^h5h%uc)W6^^kw2UKGQY%b8r8ZfB(s~eFrobIZ-6n%9l@y>Nof(`TRiw}-BL`I~>^wx4;`@=E!2jJ7}g;XnQh zglGQu`BTPnCK;KVlV_g#t{2_;w_p0y7aq5Z_HCr0N7+|epNPugqA4HQvKK7#)2Gc1 zaDtmI;v?Jkf+d=L6};x!TfX(zE?e^d?62>7SG)iyw@ybFUe*5MlI`=?;LY{%TYv7(Klt?0@4sd9{EC&LMkAmJpp&W%Ym?=Rs(Sww zuNpTmJimFwMyI!~9;LMv{`{spub<6k zkAC5aJKuPnajDwqcD$C2bLXEcR&oIW7+?TI1b_i>C(a(f^s-x4*SuApXh2o(Y_z`+D`6ooTF zn9XQMKKY@W@45#TAS4K6?4lgG@BJWl-FEluGfFmj_`dhwapxOU?6aSI?;WpvLj=Vr zR5JiZf*?#m03Mo5ptEOBpGYpw2rB@^!7LbP(Hp*e?#xT}9r(i8(=fc~f(wd53_>{Y z;WH<%+jqeLK7Q`RbuYXLUf{-)#>vuQH)@(HeE2VZ^N0S;uWYT&Q@`AOZk@veU@c}D zI&|~8@ilwP&{>+rwye-1~NE{^Ym6 z{B=M7yTAE~PklVrKFk`%&DjW2@S3~t`QW?%ZYUPbuLfO&zIV&x%v^Hvn2u5MsHo+v zqU)g!Q;`0EtJRHwYE66!FmG9n+!F^PKT&0$5{ zEO(olC7$!P)|p9}!~9@cVNr6GN>1CGIY@SrwpxVfC3oG-I?hwSd3>|%Yg|)JbsS6z zjFw@0rWkZOFOUul9Zg*z!!p(K!o+pXSAFmEcuf~iZx*XcO>PV)H=FJ$KRdd4Ha)lSrA(v>u)Uz3_zW9n8#+NK#dHJ=6s>;`BmJtV!f@<0F&cwx3(!BR0Z$m2G z#^#WctuC8eswrhCMcC5vC|X}{IjtATq4Q&bIB4*SNLXG(tJbqiB?T5Yp;el zI2lQ(DO#&^29cq%G87fpc{Brr(g0}+8+6>&o1?B|7qW}!j1^!j6c`7KW+XJg7MgUh zTw?6HRQq+;Ua?Xm>aLVSuBrt$SJ(R}ZVR%9nGyoAt|}LYY zC`R|PF#2l$+=-Ygc<)9*{NClO0vQH3T0XoK7T&&a(n42YBlbD!dOxZ!ZdkZ~80A&3 zK8{C?o9CvN0sgLVI*TUVsJ?LMX7v$D+;_njI|Gnof9*N8;2GE zO(M^C*s5%72t&W2MKj<5nR+uH;{JIq^*SUH#DD+*AOJ~3K~!wK#g((wxNtNq@4L7j zkM@raRo+ePifwVzr}l4x@@Th%vY3ChARYJr0JsjFH)jZx0$0F536@n0E$^Ij`tXCF zed_2_eR5^KxbU*OZfGw!bkUe?W#c5?NHbWZIe;h%(BO^6m|+=%5h*RqOnp=_jVW5u z9qo#wjbR>3zvxp+vF~CWiV0)X%ZP-F*y z#i$sZ7Ew{75uWfArAvf>UI1ePKu7_QRbaaam_m!Gz#NQ#qX$@unu-q)ZD_c}}~SJhbY4lS3A7D`>2PxZ_(r!X(|d6;e2Ct-4qXDCCNmsoO)nuatK z+Cmb9DTyhJ1vF!$Eh!~xR82`r^u-Wji9qU-#$q%@E=F=NlSSO<()nWO;`svfrB@`) 
zR&r88RR}n(#Kr~{C`^eUr~p-(1I%S*21KNGpeV!u2dm5sW+rA7!b1$s1;sii_odDlWgO<7IVprt6FR@90bv=l(0FjW91AV2}V8cdaD*u?@f7(gSyU7%o> z3gHA0ub>j7KtY(HC-#`ih5D=pQ1{gq?2uT4z z8bBI}uOoJmAcbJZ4ZJ0ofE@y9h?tdr7{%itc#k`9C0i6Kp9-l zq8!MDL1Mkx0@;cBEE=@b1}_UO8AV}nmQ99ko;4?BHkgu03>11oT=1c4gCNvPHDxqi zb79rPb54hPs}esf zmsVYFpB7D9i}|9YxMO{?VQz6JXQd`8z+%`%0YGO4z<~th?8(OuT=W9*LA)C|8He!2 zuiSsydCwSoT0C%z2B^6G_K8z*G>)}O!9uKnJh zp4|4lDg<-BtyPN#3j}5$VFnUr*hRwd zH09fy%%i$~czVttS?^|Df=$7^hA4bbZP8@-K=}*4r^h6%{ z+Ox+@j(_3QGsnLB#vl5d&;QBafBqb&sbQ#u#lu)HRe0gS$!Cx4+;({Sk;iuG#JbX# zUA=U4zJA4c&C+io{KPfaz5e~TPwYqk399NhramHU;VSArG0t>{aMQqW@3Z|)Xj9MF|PgK zC4TAB(I1@im#km@oo{{Vm9Ko`i*UYO-TT|`e9Po1tdH(`c^0M1rdF1|&-E!wWNn;*Bgo?HGb~C58nEUZ!gpbKmOiZUjBv*IMUHz z0Mr115K6+#6=o6Y#?iB93R7n^0ECbj8CZq@9yxpZj{OHeedZ(#H(YQqtHD6f0S})& z`Jw|C^lFcvJN2R$UW0G|z8*D>&6=x6b!hH=*YCXHhktaw(dV;A^VoTt1Ztm5hc3o6 zcFM{z!Bs}L!eAle7<80PeUyiG zl2l86ywX%Hnt5rbl~vxFY%oWp7f=+{N1D?UVkkWRIFm^tBol6S_O4qo=N$F3n_j1Bq zH=mwJi~i{??H@b$2GIuix+5vQ%SqsZK7Hm4of_(nXD(4Cqd7o!;0g z7O`k=Ox*oNUpl1MQO>>^wk;<-)zS}xP4C5lFASa0TkzRLK;j1DHJ1}Sk_V! z_B0o#d@e4AG--<4_H7L7lfiO=N$JgeQmGtEWIV@eEf)u=jdLfo(m=OtuwJUc481vL zB6B|!$iQY&ob$PACuw`bNG}TSp{eVjJPW2rs)H9ua|mg>hYU%I(`j;Mjd_M8ScG-O zITY2wyC%?zSf8yaa7&i5phzPV+G5a@a|UveL{tG#Wg;A*CS8umaKq?OJ8WK%wx4KP zYu)Pdr0l>-m%FYo6^L{(Yt^YWODdq__yRW_`-$%L7Np3&cJ)L&e)8CHCws*Ppk3o4EK|fBoxwZL z02`*Am=hB=fi&+6B9Uo`$zXHI5(~{qBpjW~TCy3JlKpvpdq;IdJ}{6E6#=s&8vwR%%6xOg&APTfEFwiIU_5QLj9R? zM3LXA&*)%BH`Z}%Jv|pgNlr_LTH(AmvaUkq=o?mTmL}IrV^kPn4P?os7$n0Kh(sN8 z-HUG<7aX_F`_dhgCL@f!1L>Po=z0gYh+!+Us2i$&Usu|8bFIW6dcg%nCI{D?(Dn?6 zXv2$+?}47(bF?-$_vzeR^yzA|i}_ey6ww`tQUvg}bGj;@|NKKwKmN78lf1n+aOs^l zPAbdjLRlmHnE%~Svgy@93$rbZedMyG_(yd#XH z07e5qO$AWPup;m@5fc_)M6*N#>F8jhm6L6x~ z5l&j28vUwkf^VnhS1}RjUc$_c8amdCCZ{^~unnGv$#I(l*ih|DuO+5=1}!cOh`Coy zDwdrR@TLw_%>~5(L-7O-DI3N!i}}{Z0*H%ErKQ~Gl1)l-v6xyfYDx{FBQq5(FeN;! 
[GIT binary patch payload (base85-encoded binary data) — not human-readable; retained by git, not reproduced here]
zCp2PIB!Sd{Nr=f2I|rGO84{FWypAei_|jJ%9)06@xwAc=yNPdi41P zu~0lcz8ty@5u4Ro1@lmsjVrDyuDqnmCCfBs9$jHB-hnDbkIl+rC2bWb4PD%f8Z_79 zlU5_g5Rwzu47^AlBNuw^AGT~algj~l`` z}C-;czQC6Lrz0q;1h@H?abx4`|UpGqqEm9BXvh9YOv@HYz6{5fY=!T zXQqVAaLPigFc4*@X6k!UpFt9@A^XGX`wpai>L(6AR1V$zYBiCYp7YhOS4elm@8*u~ zU3PN%EY{QM@FV}>fBm=q;(r#y{NMi5|LOnmH~!i438*CjEJ%sj9_2~0!$4C zGE%}L0yBaEAR-XK6z?D-m>B>fkQtd%0F()gcPQS$FazNcBVqtVFsR|-#nF|$>xOp` zpaue9Cub)_cKymND3cI50SF9`H#0{Du8H*SYcB$L`p)wp280Dh1?EVJ$dfaIftW}u z)9JkY_>B+0?@xWt!Smm7yQ!A6KX!M=l~tjsH|sa4v?eo`&bB(7^ygTv%ZSRh>Do`W zbNR&ktOnut`pUg8y!66JeETvrr5u*_t;JOO)jXY_Ufq{X*%^m+gpH*i_{d*sU#PeG z`VQh4qrWrTfsg-JJ>EF0L(A2qUya_RS-2r&=EbhN(fE5C8E4m*o7d+#R#eS1`$I?j zACt|4zK;&;U~VMg#SW|04Yq5XN&c3ahTWUv_Oj~AWlawS2TVS+m4zW1<-v6H9)3~> z8@2<*c9s(Fj%CQ}yp7F)QW^NtZ^@xdtvcV%=Fq3i?rvXOw(<&>Z(H$OVqdfL_}Q|f zVGChu6e!>p7jaDqLLrtsBHC-mIM}lmj!If?<7jTC<7zgEo+k?pM;)t;i7RW^xoOpH zwrk~uA@a3nZCGxHak3{|oR+Tk;Erf4W#XwXoXB09v2!AQ)$Z`7*> z6f&StQDhyXf~*O29Gwjpdtg;kpH_)>Dx#P!DV02Glx`J=?N*-J`<^Ff=jnQi{lis! z{iI)?y!p~`zdAh~&R&U^m|jic;Ct(5tG?{0d_8}u7d?`ssaRb1ec4pV3dv@k(eArj zqrVGc+?-z3Ylm95kKJQ05wGqyf}slpKp8ZAq%_Zrk))nZS82 znadvkU|nc;n7vs3dPeJ;8n!NQq28^nFfbe37O^)CvvoEfH^S9iImUhk9*MM%+i^1g z@YFx^gZ1Y&rT@!ibK7nOO>c~^mF!1r!?x2!$TO}^9c0zj>Mk;Cpr-{FTsjr)$pl83 zqmt!?LtiwJ?-LPs%%lc%`9ipD>gkAWffm}BddjpLc0e{Na#@n*s@x_UoyHut5TbH( zEE^rS(E&n!tcwiEN2e}Br0jYuqxMxZuLy zAfIlukPnSR8A!W~g(6fAWEM?18^=OHiKvU&h_ZKVj!NNOqcG~3fa_&Y#^aCw;?*brt>@y8 zTz&q3IzK(OpZ!NaGhpx2kNrEJXy5qFPrUTiSH_*M4mS?Fi%A#VrX-#F zr<_n4uVFMvBtGS7)H3P2YUCg;aL7QxphYcLW0QuegfIz}Xv&%r=Om*Rl3Z#ZxNcP8 zR9Mx<)WhPqt%P}ZA!g#hfEX2!4w}sz8iupb8%N|uwJF+@_C^Ahip@L9W@=y{M`R*K z6taP1Mrx$3q|K!t%4IU7VT`bHo;GB9+-Ks1hJ`T6|UhpkI0mkpwh!}C+4e2ZSRiD%6x9f-b{_y(A`4>8tV|&$O zC~5{JGUbFv%nS;3umFPa1mFf$u!bE~tvEm{AJ~i6Ttx=KN1|+*bK(pb6=&+agqw<4&u)(iZIb+F?>vlWIQ@PZ#Vf23hb8bFi#sC?jaw4MVrekUpKK{4lGm zhSM=GRi%_-X2noU8Ac3{5{wJ|#sBs%iIFlThM7T)438+FYOqqW6(xWigP4@45X>@J zfr(LFGkYloC4rOz!Gi%JA!7CV$JlbR7@B$UD^tPT)rVT_hTaFxNp3B+0)K{9Kx0w`(%xv5nS28?7{ zAVV^EK`n!$_RyVYS_G#QFA{ka!mKSp^9%`X^5-Sm(NRg z+)jzO-P@`59$T0lhPklu?FT_qx0;uYxGOW+pM+!qjt7&sxJ#w`-H}G)!`=e6Z+!Fe z{A6>m@7n#z&FSjjTB59s9x?l131!A$0fhYf9mOgL{@*J#ZEi>W{{XB(X)*BaOLBTYss_5S;X;hbRL&?@2xlE z>DjAmE1Po3Ko%onVg?j2P~}0yP%@HaX3%7GqhLC%PIgI~A99B@o_uckT5$7iS$yJy z&)lv)a`Pt_4L`9rtE(pL`}TDFW=hLnzk535b3ZuHq)_eKUb%VV$Nz($`0Ib=FW2gS z?BD;%|L*VpjXP26t5e#^-+beD{;U7(75DkijXhumYA5GM01o!9!2sy#`6=FYuy>UJ z#Un;UY-LOD`H%0+IrfB-cDhzLds7{DNPx*UL_sd-Q1=Z9~=lysVdTx5XzGADMsX32yN($S>O8eeqQnKaa6Ll=_v$-tz20 zJz=|O485OH$}SbTZuumwE-y=$ZO_X7-u&1}eTwk-+_#X66R*-S1-D!G%QP*1u~(0} zOtcdgH-`27ea-EyZbNNB;D?jqP&3c&NR$*Q$b&jKnUa=Cipy|I>0&}Vm=^yF?BQJr+ag@rZ z1Fl{_TE!=AIhnubg?@Eq`+!#YMd;4fxY3gH;W}Y4-cMe)!5v&&jrrm{4YYHwdw3X@ zB#+a$*``S0z}#uQ9DFmKPxAR_rQFQ2&D$*1!%7!J7U$M7)bpBRy@`{j1BsTm%Z2TE zY-%2@T4Wv3r!c?0=f^VD>M%>0(l#KZ0vD8XN$~AtBCd6*PjsQxK3g*M;Jv9QD+JeM zUWWW`6+IxKr94HJjDUXPd58%^ihUv9gSBG@(F5z$7oAX(x>xu=87;q>JGdK|N&j972BsZ3|wY;h7UGJLR-RD{i6>@+5 zO<*zc&eCozzOH<#w}uMQ5$b@PB$Hfb3VE~dc$KLPeK)D0wUxpx&!?{1(uFAUpxLGj z_EqIIs#2I#u?W{JU<=Z`sS~_TxPS82hq;NW$)Tm&8C;J3)Xw1D5B&Gp@bd?`-V-5y@~h1%tr8J zj6ztpPREo>qS~{{loPFbGE(OhRS6_?(6D$s0&TA{kud3$^E#8pflZr>h`zViC3 zXBRutOXc@>>gW$*2$F>ScIJ4yisvm-JgCSs0kNm>bGB*J$1MXng1J}s6__$PfS`dO zIWH3|#*WOqOfyhYBG_o#1ZJy*q^@?HnlDmU$_Ei};@s2=CSkPBc&UXu<8(Rnm*e&V z@dPWF6$(-UX|58&(6SJ*)R3C8#t{R>i483^JZroo%K|HTEJIdk%q(G2EgY0wfzlX# zHpOr*#F*}qp>>f#osF4PhG6Ehb!HhgvnLn|QA$cNGl4AC!o2XQK(~~M^1~9<1c6AB zX13f=aMVOeIjl@6rGW@4NsO{v18FSdP)3b5jA|34B6F7xqQ`R}DJymIlSRXK}q%jub&FD<_| zToCMBG{V#&R{*&}gm;jE1y-vwpdv369HVt)6sp=uF5eq1Ym>KCv8VfjiBCv{6#|)}6U}^>^460NB6mlt;5!ezVWzg1>GMk!~Y`pEa 
zI;>rAx~mPqp0viaaM-CsGph~N&hDIdCKYJcJM&~P6fr5*6}vRzt-l=jq9nvV_mumU z<;yZq(Ix3x1~wY$zxZ$dqGg6L1W<#j8NdL6)SyU;Y{Z1X5QpL{17yU7NI^_B6;F-% znKi}?5|Dr(-XJ8*zVRl)#;im}GG(fq6^BTq=7Mrym}fjvP|P9$MT?;r$V>>J3haPv z8jF$@D+S45WG&#Cd(c*uQuIX$wNw=9%5W`L8Cu?fp7Vu`W*R-zgA~EUV2M=83}hxC z4p5Leao$%3*Z|A9t;G9UjbRB!XbGORH``=&G0qB5JFN)gpk1%Za4r#0ECU5(nY1V@ zra~wn0fm%u&OIy_C4#sjfJS5p(wUVv$JlO9A6LPA^UW>yN9*afBR|VdAI^|}x z;?@r$rS%&-vvU`IsKt9GO?TGX>)p~Gyn5Cb*w>#SglpGdlak#qgD6EOQm#nRx}S6Z7Cdxna#AWAYvK~MoFNagjsUlB<-VCQ3F@0t|J#Q6ESHm z1f`-b%jHte9}Y71YSkpT$&9iiB+1EVpIyn6>UJkoZ5amSVH-Ia!(ar`6_pYXN(plF zv=tpM`?XoOQUW`7m5F!jY0c!?dY-m6oLMBeL<}>ikq`6;Gr@DdCP*`lv2SFs{^^h3DYFm#^1u7R*I(WKh5z|;{*UZje%lk@ zmp*dC&G*carn2enD;s*@-bU{io!yg8eXgzs=37nZi=~kk-}!I;*vI~tU-)b7`R*V2 zi68nazxdB@@4)XkJu9cfgD-yNAOF@b{mei4hi~Iuhx^w6oIE_nyAJmcVK5MkKn55< z1Tak8zRetKPq~VEe14^kT;iO=IcgfQWsx#=Iq_1j+%)^4x3F}G$5OB@nG=$ z!iPyNUToLvIc>(e-A)&eRr6*v>ZaGjMR1!}!?k@s7dIYPe(zYtZ8gNaT7dfl>yGDh z8u4t@Z@s+7yIVov;=GL$| z9?#;i+-qqdl&u}Rw8KeOy>G&Xf)49~a<+*%`Iy!vbhtS1VQVy+?kFwCaisZSq4J(& z$v0ch>u4xj_(lp>YLl7U6015*n~eh0l(jzB@UjV;@!|?=?{l0y`*fgH-;D>8^{7`% zKSjN*$$O$0qLr!{gxf}WY%6W*nk8Ee+sqh%5+Xm;v8;!r zR<&kpK;u#^1gglDAr==3ENO9_c^yvb>@9eVHV&#{;B|$3nTBjT6HenMY8e@G^4D)(`5&di`~hx!XA54qo zakDN&OS*lB^?juQ?0w-H{Alo=~ z#{lZ9G_L1#>FTRGj#E0D4e6@^{l(->?!R9$ZNirutHn4DI+9Ruek`mx?QE^LI2cuu z4Kx^>QzBp35KYy$k^qk6y#}U~Ln#AtHp~kZ=PlYfP$;$8K*hvnM7m8&P^d^U=YrjK zeE*gDV7ICpdf{Gv`5&JA>TGsr7D79`KVzJ^9E}RFQw1vhcIdN&fv^)j6C!~fjFBTV z+2Bi|+B0##F@}XQmQtC{+C7CyjKe?)I}EdU1)(|2li}u{8bmxPHT*y;+|uerG{_d-JWA zd|7Pw8(XTRsa|H|hdKm1L9_1VX-alQYipG^Cq{LOo%v+BE6 z(~MWXcmX%NPB&sSe<574R02|hiu0aXEpDNCj2=pTl?=tj0*sek1jIEBQ8k@2jKmZw z7!8t8k@+Sqyt8Q#_MEF=sk6A*YPU)k!#L)CZOoBWw5~8AD@Dy!>}xSqHO)y=pG)oP zNmWxZpGb8~V>Y&tBg-_*XRJ0gHI%Gsh{MXrwA3(~5GSJ8rJ^e22~x_CEgG?bl#I=s z(;P}picQB%+z0@QlE5Rc$OcbWK{GDJ7Xw=%1p$tVnZ^siSa!vncqKMfEl5a|d)}zh zDrZ96Y?EQgm!@3?6*Y07R!V^`vzYlhB!P`{i}Rgu?LCtpXVSyaUe&ymw8d{fGJ?`YU=E^K`25|NpAFrSqSpaNtA zt8+jVYUTz6&z=chs*0-0s|)G?od`R}T$HzF3IsAo08v8_!kB4{O2}ZNY{fJ=0yjhi zVi=PFoB%^nvrGy=h=gFUfvhk)umc+p)F)dqSY0G4nzhf{L`g~jCK-7e_dM;hlpr+) zshqTaCpf7p^4>Y8Ouws@6 zm^mX+n5nUXphijnvSp(}5V4xTkdZhc!nT%yM5$N=31mj*2)&>q<&t~^LqTc~pb{Yx z&W;lqmBa&k?X7!j&{io<4dpJfBWv; z?)usCs?V2u{rSkYS&YrP@4xxVed5jOqG1iqn1C^k21sm89VX|Fqvc63FSf}P3kPG@ zsMd5gZ7lCJZA23^15CCNNj*_ZN?U_s5^O|Md4?(3|Vi z)W7-KAFFis$>pCo=ij+;?QiY8@B92GpT6GRkJQo0-g#cHKll1Q-jr>>p4(nPWnm!p zL9z=?8z$<5QhxqNfAo{T@Y%n;J5K(kAO69A_8Y%!O-7-3q2KkH&){8$AOASrJvlyp z1MfOKxB@j0K!D-@rDgya0E;nAjTwkAKmnAjW+mGr&;k+wrbGl10vHC65#C`00;sT& z!2w2i#0r5y1jRcT?;)uDN|0|#Vfe+mI-ul`-*Rk_p zcICovt2XNPa!uS!(f#(`kd}7)Tpy0BBvn&pi$A_walS$61U+K`QZU zo37=-Go@MXAH58=M{iJS~90{-O?ErC1!w4UI zm`E8#bMBJbSeScwEnQnUWp7w!P-3!334|V)79X?g6Wi7)Elo1#-brPiyG<-G0|KtF zy^(F(j}4H?G&d1Lh-V_>y5}1`P2cz4!y9hgv;`f5QVCf)3-AnBx%|G*EdA8ko z-zPtOoM+o~ezv@*OH`bh9Ijnc-hOj9Kk0VP=%iklKMAg!ncpiiMkJy=vmtAcn}!$- zJbrVaJwLq?{N;+Yy%?r5?}F=jMxwRz^UKe$`!d|}A?^;=at?J>l%!PNr93resiIs& zL6Z)kea@<`A{$*AvfFc%Xg&tT}v&UoT@%r2OHLLa=r6`+}IVnxF7D`!^ z5}An(wDaD_K5$qb{Nq<&4qRZsuGyx~8;>D_IqfrNVw=^aH;KL8;ZI#-`&{~UzKri=Ix^pD9XTt}7DPbDcMpDzpNQx_AqV;JsqT-T;h)~ie{&31Cbd7@?xrF5-6tB? 
z(zORq1$AFN`TK@%{L|CLeER-R+J5R>dde31m}y)p2)>#!s0>3?MT1(5g;j(SQD!i)7l9hZ0m!i@WXPn2t7=OD zacjCM#SybGgC<3c2HIvtmrt@jECa8H5+m`-4+L8&t8wKZtg6sL46ckp=CDxa$Ncry z-*~F_-?qQcvU~760k`pyfNOIwqrvZoK^58~7wFj8W_g~vyW`Iz(MzkBH|asz_~kZl z)|?-tRk3*Hawo;eMdyr)TO_ zQ9wonP@z&Nk%1Fspk(mql);QZCWOePK*dB*6;Mi}YGfi|Phc|z3c=unn58IXiJ+Wv zj%gsx7e4hN0ecble}Pe87&TWacxXwHGyfJ8l&GBEv_|NN&6 z8Cqbh-~cp2<~cKE5UFDrnKSnAFs8s%JBDjq5Ozdx72uf25sQ%oHC6)=wQQA85$YM1 zy%s5?AZ?s^VpTE)af+5wW-iVXKpC6~0-Y8UMJCE-WGF@q3|3ZgWGfCvAO?#kBcKop zD?y!t358I=q+G#RC<-N0DUkt`DLGG75*LFKfwchUC^)Wx5(1N$rlf-!Q}F;ygu$N7 zk!C|g0gOk^S(0Vefnjl!WJXrZR7sW9pg<9rS&7i1W=V{!p$yVKaWXgx-WJTo`O#qP z$TOK1<$^xt%gvgR`%xU*3q!XZ`>slBq3iYRxhD>#ES}l85>~GtU0ScJ`H({qs<~RA zo6zyf{N0yt^o)b&cK57WeB{OH{?YpKqubTf`#0#;jY*{i9_5kkr4VJ+$f_&rm0~#Z z%{aX9*4>kfqwQwBIbOy|9CmBBZR%63Y5gGg#?5AC0Er_YlL1yyQ&`I8p)1)YwGv~Z z&ap#M&UWc|6yc$#luKO~@4Sl8Hg>BrsYvN^{+QPDoAZQI?kv_k9lC)ndDQn=ajq}qh__GA zR*}+WD&~;|#2{j3A`oBz2n13ShA|A}4a77p;JrAX&OUhiyZ-zi{+3s+{L$n1xexyC z#o{}^>~?bg;_?5vY_7LY+_|;;pk3$#`<0=@Vfn3Hy4-HQTJsd-?Eqmyvz!5;%~ zfy|6R!~}Z;WO&59p#TOlfM5gy;Sn+z z;Spj$VG3e^U;qOIltLgF85tgNM2dH)0c><~euQ@&?p?>b2w<`Y507tMzscfQY9%o2 z?%i(!c-6f>m%JM)nTDEW zi%nV&7ipZFiwG?!p8EU4HodsXN6mJe?0opfwU6ETzyVG0d)fFF{mO6OwfIHrSh9jM zS{$ppTBa>;PDe|Wx^eGMPW#X9zy8}VcH^l^hP0v!x#kaUEch%EJm3*ma9Qi%?}6H_ zPA8rkiniDe-7~e7!)MIjv*kFcnyb6zjSZ^2m{0N>8*_eDb<-|M@Q0Jh+3qA6rg?uB zit`pjJ)d~jt>Rh1mB)ztd1cg@Er$mW=-Q!)A7WFgGJ~@e5PH>&^PI+EK+Zm;M2g*- z7ioMwY0R4DE|jF@+%9lwGVf}?e}8JS>Rpz)41KK+iSIn{vQtlG_+5V{uNE27I9RH4 z!nj$DozpE>y0g_cHU(kc_Ls?T!`>77Hl#%%+xJ5}&ML;v(@uljr*W*kW2%j-Yb6KN z8<*l~k4e2Vt!dlobhe%@c2!C9s9H-e#sTC|)RnDFiEGTgHGcOYi8Dv!cVV%pqDeFG z$P^!TlS+68xpy8|6YOCY=dB@HZ?pGlO7>Q-9GED!8?Q*kQrU1u87;2+MDs{4@Va6v z!SX5?#DRL*dU!G}eU2zw(w*i;8IIFYcT`SBYNy8jxj*ojTkY+{VH3!R1M>h_f9BKq zSAYJM@yTZTflq((ndyGgxb2n`8`F4pPJ7}en%^3~^5A@Zlor*p7dEG$SL%HGA~mDl z+qIoi>f{Fli%RZ~1}r9vs|$%6*6uvdrV;|!jfsQzJYG&c4;7cex6zUtmFgw9o>x0v z>b$B-f*11ShFaC-k=Z4HC_~-6CZ+vRm!tMju13TPWwkOh70y*?TP8RewePK1*;X8K zaD7f~<$_kQ>LzR$20ryC(cZct=#m)}n#Ci~8c#gW->S+pNRflerA&Y4pvPu8Y3f5<4? 
zYM7>K4aO2|T;C9>@${eY^sAd;j_Un+A=m9a;mfvQynFWO z{QQ2lpZqHJ=X`4>q`&a|{Z;ks|MF^n<@ewDOaI;Y-~ZL>H@D|7akH^%T`L>CS&bd z0K(n?@6iByRfH+8b{nAYI@S^}aq3)yR2i*m>Z-Bs$nU&kROTR#8-^P94D_lI=65r% zqLbdZIEXQ6Z#pVi3|@)6tDG6I&cK++08=4Da9+Ga0*{heqpSdw+B2aNQKqcfkU3iJ z%DSjN7&|h(NGO_Rr#!}DDUCwJ2n2#R1&kb6xH!(ULRjRNnEg}y*#(}T@A0+0u(uES zES@RgA4Qnq&%l5JSjHOY2Xq-5AaDE42NdyJ!+YcA(FbFGqd)U~cYeQM*p-+{M?%I* zFbFY|ka_Qw$PxoW1jiG0+_>7iIf>R*F<7fiE_#NmNr+6;B*9R!MYB?XF{>GSSFx)J z=2@9aF;g>8rUXO*v3ddYO2hv>Ef%HAlTf1HJ=(O&Yo9(C6n)cqe zDx9p+opq?l~sxN?IFeLHPB-n(^rSKj~Sle0+?R3xCt=VuM_(_>wuBK{UI3<)p&5(!F!E8KrStXy|iNvkB^&{DD34V8URxzsmmO;os)-zH zA!wKSbxvc@%H_H&lFd>+jrpRNDv|S!CqfnZ2*-k!%r&!V?cHj6Hnis6Bie3zP6e#u zjk1i9bel^yo3bO=P?!+=FwS?ft%m#IiKdZ#5nxg$BK zzOitd`PB~^tIP}AwCIHLiIjER4fhb^hs*Pv`tB@~CW}T)A}}Qw==<1+phgA|fxsZg zC~T@+u~NFW_(?C4(cW54t3zj7h%PyP5e z@Q*Pgd>@hFpZWiC`RJ+{fyjse@b}QYd$+G&e}Tz}y$L>g|E=eqebtzV7*Id~Ck!J3 zfk0p(H3NtoOHp7(e*VnU|K-n5o3H-|!?2Afil^q&z#mEI@2ro8GTX)GsdzT1K1{H5 zD`#4rspf-rAkJibp zUU({hcyhKpZ{FU}K_l}+IfASFwJ@{A1y@X=8S!b?{zrY7s{$;0Z# z^;s+9rgCAs!=_xEPnNT7k=>z}VgLnC&9pEKOD$tOD{6hA302~U+S9$&V5YS#4vdHX z{MH!jeX*#f7pI|RGyz-LZU)c^>gV=-P=8?Zr>jTIiOiRil#X&l*WK@S4^x+yiV3v= zYW{p2uSul-C@%6MnDlki*oPTnsj6kqQS`Fi`M?w=RZAWNdD=`fy`%8Tp6W8?X|TO& zv2miEFV@w7R<%vF+}D8K9oOV|KU7n;`)XpvY;tHkHe9Ny4p}fz-u1=PY(i2y>#=)C zbU7t+m6DTfPNCa!^j*b)D^4f`?42q_Er;6owVjLCZSP1bvr+|uIMShy8EI#< zu$`5pWiK79)7H*oE)O4M?@v+JH>OQ<^|LLXc``Lt5l2X4)?I$REB^FbXg>4hKffpQ zA8XeSm$BE_k3AHoK%__KZ=I#{JN+zV=XJDA;4su|=j)|SYJ~YA^~kcyQFlA57W)Tb zycm>rNqkFtHFTLbF0_tOTG7!BSq~@9G_7n`Z>JpQ^F=OkgypUPfIxr0uqSr#*~5v} zLb)h`>%w_vX0nQ|i9@Yv%kWmsEj>s(#VnL7&xdO>mcwdwxBKu;>33aG?~>%6NXceG zc`_=;s+*uNbj%lns;r0Po8l=ildqvi+AjBd%{tl1=v*Xe!!{K?(=f@*oULZMBr+em z*A&Xl%IUE{7*Lc^blk2EBic_cWRNk-?rz#7QfE*4zw$!!2}$iG-2B7+U)-bzH+h8* zo}ohXD)2!l`$8YDGMIwP(Q&5YdJoOkfiF9n0Cy&+s`iqaN>LnL7QwZz(2Eal|MnXn zKRS7eH=M60&WiPJT73$xy}S!#-CciEopUupa#2B7V_mT6txv9r1S<}3u1!Ww7U#=_ z)8q4J28)mKcUxccL~5k8!^qo&B-DsG zsZCs|bP8K+;xY{nysI@#9nr)!fF>T9kaUQ!FgQ`OrX}Y*88Ommxi=DJ%87`4ku>>c9T98}{wG;0M4Y82|N@kgC$Qdd{xCAAzh(L_3a8y8%3_TN=DuDy5vMPbH zLX`;Qy^^p(5*v{TAVdZv69tGy+9VprLMcV2=yGLc+5_&K^H}X*a^WRQ2sWD>&vb9^ zfc~9-|8D}dnKv@?<^_(#%vU5XcvMYbAt{99jUlZQ=NTp_4ob`_0RR_cB_^{VEJ7I? 
zNev`MY{f_^0~uiDsadEn1ltrvj4r9D7E67G=Ez9G-gsujW=WAUlbQKqtfFvgW?USZ zuqjz0(&DSaL14iQM&>g(DLKf*fj!_r!V1m>(BjMp9_(2R!bD)o*-9*+NNO-CA|_TW zr(6IC8<8f(m<$Bh7`UuAaqP2a1P3!%4rBm~0EHG9L7Wy?F;8%w!Hi;1sE3gQ!&sP{ zuo35p0R$^lX}U@7_VB^v(VybXIUbKkz`wYCKCo=@Fs2>{d+EueW1au#r0t28=W8xw z7bqj7d6f=BecR2ywR)uA{YT%qQ#I>Ai#LehE8pODKu%$yilGYKZ?5Ua#M<{JWquJrp_kw zx@U^rQMEVc*~j^AS2k2K36E#0vDZ0Fj@>D!JkVJLO{_3bI3<%@E;+;*uK;Y!2vy@SQ9ZGv(rjxL9}1b23L3Ymp!&jFWZW;zwDhmW_L`wzwsSMj3T4V$dF zqoSlL1V8{V!wD2n5CcS}M$p0_5EFrznV@PB{Hkhr^|`PATm}DAczsEG?-#AEiNA8~ z()-nwo{Zezhsn{c*N9%rE}XSMhzX{qeW<>bHOA?yc2-{o8-*tqXeeaZyq= z1Bd_+8DM6lumTc%9|HyglmIgrKzPgyfN4<}%+!nwM9yGfV=xo`FGesB0D=JofB*v^ zMn(oe!2pB{j14dt00tWYFu-607=ZvPY;0ga0RvbG4&P%0Q!@|MNZT5&mUw&aeOnkIJV z<>nuh`3n=_O`>rU-dx!wx80vSaVap58?{$I{j+=2ex=6UZ~pdLR&Q8+Fg4EkEzO6t z2(ar&@6S^Yxxx0Rgi-4kHN-Sgel)sJXw)(deS-sG>>A~|Z6q{(n-8x^STA<>UF{23 z!*&*qF!c_u=6=~18jCnxMZq0ZIZ*_JWxY~oCqwRVu~D74MYa5h*@>#lLnIiNO)lWr;)snGOl zCEF4<=xU%w-A?jH=QgXy*4H9DvXA5{kn=@QvqNy|wB0&zO}6b0vKSh=B0lHgY*XAc z8zQW5S!G)k3)}I`b0`g$bMGyo=qTVkrd_7PiHw@hZRCNOQA6Uiv(Z_pH0^Zk#ylsC zmbWNvKN(e*1-nO=`v3qS07*naRE^zd(=L|AKzRDPKHom4R3~c&AE#nz%wx1M;_U9j zKi-}H{MFs$Q$Kd&;PAEOp_R9r3k)b~E^$5q(x1LpPV??Ymt!@#n>T(tgkmeJMub_e zr@W)1^{{Jt`DQ@=9(~f-yYf2)uE>_%n1Z}&4$#<=HFmGepgJEuRVyC-!XJ4)o zxuj!)`8c$nefbNY-}`~L+iwrUd%yQ5+fYSttV^W z7t;qYP~*LYDK%+haj>!8KZ3Q--rtNzxVRB_r@W2n(#H}Ck5`n4%o~MTYUHJ9E z-MAwMt-XxJ$00jWYx6OIFHb)$X}i0B_g?A8ZbL1RkCVeMQE2+^&gY~){<}Z;>g|pD zrF*|Ny!}V-|L+I=*QdKzZt|}+w94`5zTxQ`pZX%YGra$;T;jvcWPfOm?|t^@Uu&NH z#_ezY#`J7`roVenmp5jA{iRZ!{qTQwKl^unyS_Y|e)WrAsC1M`R&&LIlOjG_np$aV zH!$Z&FoKZ@!K+szc47pKs%D&m2h_y@md4S{#@%)R33*Q-Fkl5a86kU5@WOzr?V7Px zq0Al8Wv*d1+m2bc>4B=P)5B<_yLxM|2f+y55dmD%03*z56RUb+q`ZM9Mx_u1!BB;| z3Kg04!g0|0B8k(!8UXT+2`D5CC`KifL9{@92<$3_8jVmjGgen6XJiNjP%R)9^o(RC z(YTD{2NK6t?J=zIMJA;kOQGaqjC05viK%m`+n!M2%p#iW=B;4F~sC#p){#qlD-!p?l_8cqSi-o=-Du-S!=ReaH*3@J zeT`ZN&856O>n#E=3T`twAN+yT`j82OoTFeHvMvXGE8Klz8&`wbxUK%3M z_JYc=voV)c&8OlIEx;x_xlOd}Q*&O#q#_P=I)`;9so7>%dq1Z_^1P!izal`%t22ZZOC50CaP zUG{Z#gZHQGR*hF?*F-WuD>aW}bDs0qkNusG&${f0-8kai!augyvlOkARCx~y)8$61?(WRZ%6y8spEHrW^wqC_;Wxka z&42C(zJ%|40vKQb8NP=B5MW{k zkb;N+m=R$3zY!4#ARyy0B7k7{9tO-{_#QF>8v!sF8Q7o#kQotWBA7AANI(h#88Zxz z8G#5OfWZJ8K*Rt%lfm%u$M53%o_+RHU;qY&Qy82Qm;hli2mtX0BYYoGma^qf|MX`L zU-;Rt|HKQ|+iGT9(v}oNjwVCg>5%)2_grBe>EiUlKw}L_tf-}FreFWjmuEOIfbjpY z-6Q(_58uvr{y0&(+0oUEFl!~NluX*6Z?N9C-Lkz_mJhby-1N`X#MPjoX&UBcU)uR( zmiX)f?_WFm8!yy&?iu9w|H0jzy=|k}inaFreC^MKgl1CFekKzlN#jnfCZ0Dt5?L=@ zTh~SW;W&TtOXsJ@IDcQ8eqLoE=iJ`3RH&SvrX1B-@E*RQ0hg9|c&UqHsJh}CwV3X9 zz1Z@2a$0DiAWo;}o$yAr=`RW$=5})JQ#QPB$?7MV zmyUL}OPQx86~(1XwF~TKGHLhLtE!ZTW4*Y%aHwjRsb3SClDWSYtV6xgcAg1NH10$%|dNoClJ)KN%wF>%sw z+nmnbyreYfu3oxKX{&T}C~3Ppje~T7-B?RA-dfw!bvaCUI9jaZNM$x!9-(9J%^Z439kbhix_xl^#ya_axD?9D6_nbw z<)9wjcx|+3+SzP^9TjjPB%`LO51SlwA)7j1Vm32!Y>t(wVKHhl@9Iq=al2qBYUG=? 
z&gKT}W}7Z6T@-GpCYSnYE*n#5nKofZZ5nfHR!WL-c)HEG9Nuc>^Oyac^b@hXF)rV7 zPpnt9re%yn>^5q%DyD1?OK8nG?zXLj(YM9cX{SIYDy3CRGm5HEGX~(BcHifVZAo!} zXZV9W_19zN4j#xlK6-fnKH6HC)JZf<6r;o%YDYUtM^&1wyHaUVxlR1}xjYl{!`{Zz za??9uF=||5lnVp2(h{XMGS;mpsl05*!fI0aWF@C0oe)EaOhaa9=EB<p&R3C|Hi8quJ5%)u)f<50_70eDKYgyjRKeURB2a#>I8g zXTSEv+b%!)r8mb=zx3rVT;lESnMWx($-Z^PniSWyo!9kV@l!3uikTIH!Uqo`Vn;-9 z8UP;3tg$j*t!GW>6xMAZCnQfD8r52_{T>G-*J4i)RYy3}R%v ze$&J9x;rH;Z$}jB+B-xgb51ByAI!H z$ObE>Muw<4m15*Fz>rDfz{Lhdk{FP^K}ppZ92@|RRq)mZqqJh zO{|H`$Yo}tQj}C70&#$WjNzmS4T4SUBj=vD6vB@myc~J{(o@{-AM@pXyb73r@K3AY zdl(?V07C-!11vk7p8)TzE@JcD&9AM8Z=ddPXS_*S`=#ZiDO%PHgE<8kG9r+G9#di{ zIc;HHh=TdRMEjv)A1acToUtdT@P(j~94Q+sIDw1p2)%%b9D~UjJ7R;vGLb4M8O+3( zEf_ePseu5~WQf^NtO!($fk}ZYP(@LwWAkJTQr{bI)#yL|+y5!JB5`1{;>4I7Ay7jQ z_9g*jP$Fd*AW)E526oI!R+-fqW>7LE1IUbF1SkP!ATbVRK+a^vluA}L6c~|mfsUhU z$)0k6nMo&_#n_n1fmjTzhR3p6F3y{qFd$r<6&8W71>`XTS;0n*VMYz$0-F;q28DH!0u4C_=96Rih%xIUNut*oBP zVRVeLIyuR!RX?3>CrA5Db@>xjDo58IG+7^hSfA{qZuE%B+M(aymOJs`A3VA>k#Kc5 zw%nZ@Y@#hb_xWqh(bXHdIfli|MXhW%{jG}~pMAhZtv&x}qjY}f(U{Y)oRuPT;udVA zk#tZO)v=bgWFdoqVdP3s>;~NswXQY*?%bQM+x>yquLmf|?^;p_ZH0E5ZJ%P4V=;G3 z#|_O>d84zlQ=M#-$hDp(6+dWrv+0{hiA}f$TwTu?6OY1_)l$(oU3hkrB0|g;kCJMq zokHQSGocCZ1GL^X0~zX6Dci0|goX+fB_Rv7twhD3R23%Pp-^=A%W{^C>nSOolI44{v&e*G7J<-z=@ z4%&?GX>(;e4UnC~6G3*B=PY$U9?io&VmGToBUeX0KA1l@ zpZr4>yLmq!`@NCK4*0l;vpPL{;2!neD6)0o(9Oawb!WTjrQ*hE**cml`dxp%+g+(k z43G zt7wlK#9}&BTh2|&toOAF5Bnw#=lj|2B^_6~(Vm;m@4}NoCRatOgOFlg=iW`;c4GP?Bcg|5+ zIB;uJ&6+Kb!qJZlAl6khekyW4o?>^p+7wqsrb%D8pf1VkChKl%+U_4{pLQuVmDp%G zN#oVhF6%g~ykd+Bo;No3n!D9j;mk`loD8;K<#{~V0E5O}Wuhhbr)g~K8F2@}c2%+t zL&^PV%8jLFw)e$L!Dh!4yO9c@q|O{AtydT8J0JDVef0e2zw+d(KP1a?`{X>t6Lh!6 zO>DwS+$SEio9@*3hL6H;r+L+baqC1Qlnk&Tg95nATQcGxDVJ>I zwzJS69MSIHs;F8nL@tq#pVn?;qRF*w8g!F}#WX-|n~FRkq+|cC>JJ~L%cy@S@aq@V zKj?EGtXLigwXRry^ zH1$Si9dhN4f)8m(Nq1p=p14(zV^^rOoChodXfoF!V;c=|oGN1AVx=a^cG00aj9)Ri zTl`YnE{mnrSm03D)69Vcqmf3Jhk-+)P_sdkMUV)Q(vUzd`-THE?{Z)x)P2L?X;BCX zm8I^KaAD$9s_eKXB5eg`*rZs=+!I}@E>+YJ-5ix)kL|==DR%W6FHSG}&&eylH(vby z|M-8@Uc0`d$zff&gZ0!jp7Z}&_@%3pKfhW1?wiZ^pMCC=i#CkAQ*SbuC=&w5m{4*R zd1KUHJc!DQ&@$Ci@fGbBw7JXAoYVBGy&J$MPpjDia?!6o`uH{QQZg7* zR-0e=!wYjy-}!TQbvN99H-unK?Kx``FM3ofJv#E$kxwONWYdOJguwv0l1Ysk5B8N7 zh9w9n_1h7aFGf*>{gwfdDgt)01E7rnuoNSptNW```u6ix*&us7Nsyz@tPEC#%)GlO(=$j#5i7$!r9z)iVmE zLKy~o4Ddz-_)rsptr2)FDdkK|4k$qM1cD|DN)lrBW7NmC#Be;#0y`hqG4^Y(eB;V_|g5Ge|!5tay}_}OPS0UMD z1rc$uRs^WngDS91+(AXH2gs^{xRimkClE>zHEm#1s?;1yA(zbvMo=!`hZpft+V!qtUE4 zGe!WMDNq8z85ct|fC{NX2~4JBr5G}K7*|$8&Z;^Vvy4m@+e9o9gG4CO2vw*z3k7DB zJRn41R0I~vgrb01Hq%lC3XNBjtSDx+NacbV)D;u3IwSxEg-ngvq+*3jss;tr3;+`o zparFn!a$`!p#_w+SQ*TSObSUsDrFopL6c<|3PIV}h=}o6AP5P;3jvN91Quo{GLRC( zfS6&~1%daD#4#&zHUcboCvd4Qd7@Qx&Dt9)XM(Acu|ryw{2&$|lbKTa*6>JWw>m9x zJGvRWW^Whxd_T_1{->WfwqVO!%e-Ch9ZYyJo!r6n{fk@e``@0QH{+GP*L}jhi!Y7I zJ@N9B*JZl@Olb}wJ2flgoz?lD+z!C|t=1Or=)$}mwK zsH}~ufuzm)yx(`8*b1S7B0l3>7l=rJjz{@u!!0^6Hi2=QU6N)o(u^hBtok zonNNZ-&}V{jRve}B`~7+0w;jjJy2YiQi2 z4_9ZC!>7Oa%B#=q-}uB+i^_++kEv|$UHn0Jp&#A4cyKcB&)oj>$(o#-aU;ae8++1F zHDQn4IC;pB=$@QVZYv^369-?F6`O@pN|NQ4{c-e_fEhprBEUe8v8gHGhyYUnW(0r;1R{eH zKp-=M0Wczf3?L8y0|+3131(biOawCn2nN6iFfs-kU;+l=F~(z*!Ndx{gN+zuMn(_; zkpe^z5`X|O*qA^pWCcY@90)`pc0?d6!~!N}fU_%xA9?!cuYTq7&wT0S?z2v*T7`ZB zxKtf2FXibv!EU6mTy`IhyR(n(l{t?g_vTb7(xZ8F-#14w$U?azhA|e4wr2IhqK}bjek)SB~gM=d@gCYf6 zpcsfP0$7PKB90AhP)h_#_bJU$+0bjaOm za%&>wDeGcS>RXM6JaB3(&U0XB+ZuUM^0-1AO7=8NDTPht(6zFc9eZp?9zBV7LEA$2 z6^V^w6N}~Zp>%V0=q_qgifP+S&4rz13X6{Ea=o*$!&pKO+7#Kez9>G8^AP1w-FAC4 z`N9;N*(#u3?fQyWe-6RE>sxZFJQDc`W->CJ#u;zyq`vk$dhhgGpFij?J~N)yT-D0; 
zAZ10{NZs1`E92t+*T3AI<@9k|o=vZhzFI7r?`ZymPlwn3>$`umeE7nXt*ozt1VsS* ziLpjT+mo4uiLI|r#N@@6fHzHToU@U%((zD&$-ZR@g+tRcOfaRYSzN1Zji@rE=~feC zT(#TlH{27&NZLDKsEv#kv{q_dG=~}=F@-tzf}98?Q*~hrA$|}kU8Dx{?M6}ciFwXe zSeG4W3PQ3Sqv6p6IkdD2^rQfKZ5}a4EQZ1?<3gHQ&9J#neP?!zyx+ngTQ4C&n7w3& zRclcUK4}&Xp<_h@aKrFwd0Rck1E8KvtS{Z3U-te(iDmm}avqE9hKM9qY&*J81^Ysr zIn)lim%N!mKOrvJVa6pWqms$20J$?33t#US=li6%?iP%&Cd))d`*9!#Arff}s+0{{ zu?+BNW^~x`L0B)3Q{3|}Kkzoab=;)!(1yF8zuP@Whx8YIm#+06{d{a!TZ{W>A(Cu& zY;69_CEGK{-HR8m|KWT0>$iTSza`Ip@MFn_VfTeImN6CanqjuDJb8H5YVi8RgvgZP z=G6+(DA;fOL=o-YnS1l(VQtI#D-TxZck_c6ANh8 z`dh#E;RMgu*XU0Vj-JnS>-O}_=l^7F|3v!Y|NPG%K770!{?%_A4&q;a-#Wi|`;9j} z@H^B8J6v2T>-qBWoL_nV;(KqMUf{X?Voq%hJFbPS$sy&?wwt|_hx3wD79JWW2@WFF z-4&2?S(rDAMMuO<>mZs3cCV40pmUxbS~CEz;*N;GAhn6$8YsZiQHo++xC;t>OvLOk zke4a%T~c z9(B4Zbw6{HEVf>pXK(;_!GSXGovf2yiK|jDO35vGu5N1L;1U5ZRnR6#$n86oLodpB25Qq$> z1h_am*a1d#5F)V%J4lq^L`)ut++!pTq7^zR9 zAy5n)+{kJrSQmuB*04uGB`a_;b9V5uM`&cPl2K9B-`7d5erJ9hgy(lt++5Y}+Nh7^ zE&u=^07*naR2tbMJ#uThb@%y0wb*@WOWUh*9I@(m_qXHQ?^xO>cQgurumv{LJ+gfK zna@7U>l?37f7oL4rq$Q>@A<>mt~W?wSU2XF=hGw6f9_8OzJZ$1q8r$1M*}CjY40 zVtumfICSA&%L`3+Ioxk#+=K@sQ5>Ji$Tw0JG=rmUp9jlnJxO%6lM6ZQkJi$*tBlQf z@wb2Niw}S1uYL8+1y_Z;0WfnjH***PFjbHf!Jh&MFo2Z6aDuxLKu&<0126z?t}us@ z2^?@Zh?$r{1UMXjgvkK{lLHQCB7jKYWB?K2j}QO?3{N>g zmPw`~+PwDoY(Fj@jky~y!`0=MB^|8r(x3hF&$L)MK={Ady^k;b<`>rcUrVq2P=DjJ zJ$O;S{Bytc*8ZLO(SYUc<#G3Hj(wcuFz^X?Lw}uqi<~<)%fkg zH*Oro{TA1}n1Lo=Ec3W!R5{Ayd82hEpLbhks700Luwno9}dn}XUo)NXX}=u4$n}r6o$!9R!~dR*u^NR>-r$#@YNo%QHdW)9=kGz;r*k z1Q5k#wY^_5rTG5$5Oz_hOm$>iN1hfmQ)eXck&@6vMix zH;XZa-i#~RFwuhHBjq)>MkGCTs46;b4qKmRWbP6(HyY+*LEYm_LK-m5-b)I7Ec+~g zpBPOIDQWQ(%G`Lp3PO&?)7Y8S+|69FsaG^>wj42F0+BI?>(&R%`Y3UjK=X+%S@KyI zWDaCQ8D?9BVP&fe6Zdyq+e(f1$!6zy5K6vejEf}IWDjlknLS>?V?ZQ|MHs`8=+)G0 zbmAd%^9BCoe(s=cxL%_NzMdz#`FFPY|*x@<3xq%F0 z=-ta=f{IL1!=mLz{rwE4*jF{r8cLt5NB)5x-mbs0a=WH^X9fi~gRoQPnIC+t<B8IAJ$~)odv@91d$~DW zNZVx+l<{ikC41KxrT(BHNzuU`Y9|hk0ff2a$(^$U(5mFFR0!mCE^wb<3O^DK-s}!z zjm{e%F|-4NhGDN}(vb>j;Z(Wpm=^(+s1N~M7$9ZO z1j(VPj0Mm^s+(p7+6zzI%CqXI5Brz%y9%;1+1{|AhmD|HBA`5pSH~ACHjVKmYQ(^v?LTIDG~0 z!R>Y1?=V(i7afQmqbDXO5+`y4IT4`-C$dN!ffZf<#b> zW`mYYURmI=5FAetIbab)4nkq8a4HC+GL%WcODFMaUR8)siqSujPe*>qAPQgI`QFhx?g zYGwdcqoN9(YISoXHUL_z0uI0dA~=b{1>z9*nS2LuvBp6{Pl3Q>y%3sK1DP{Bu`nbt z900kK1t5?paU}*J62VJw6SU^hH4~^^3Q4dcUZO-!y}^wD@`yE%A~Z0(4!J&>X4!ph z+7Ok~;iwJa?S~Jy?>;zdcJC!gzGPWb8P^vZpKASRkK6t9#^~>!@8kBujLp)fC{Ov8 zd%2OmQ%bwIVW&-q`JIf%!%(-|&By^75*%(I?^e79`{mUG*Q<=i+!!-Kq)8Hk5xbZd z^*pbV{?e58YOqG|100&l3_*f&#)ool;Mv7}A+dmWG&<2#9uHoI z;**=pLF@eCCDg7zlKKJqe@YTP4V}0k(e@8?*!YMFaVoaCg{0HM3^^LJi zn@7L!W-af&aVxMK=uL3#$+#sCutVx1hjP!P_BU!XUY70E=I$q!%l@!l4M_d&g%hDR z1d(}U&!j@Em-9ZZ+QZv7u#a55SsImP_!Z`tcJ}GC7?z>7H(qG@gLLs%e)D(tKk*;G z`;|gA!mAlz)gA5xE~xGfV~{)C9e~5#fkGs32Hf$K6TXRn>VT>_3~)HUi5LVB*a-mP zk2pDl062p{0#X=2(P88c0tj#i5rN=Jc#50|AixP^AR`1g;Bayf;6xxMGI0=49Uu_l zn;ZlJ2!P|890=@iQV;_KD1$%@zyV?gn4J&}PH+Z@Km=ky2_!_IrvP#y1`#`m00sjJ zFas=1AR)3wHJ)~V`K3?Z`oVwg;JaRY)ztYCxa~K^^ih5Y1b1F&D_MhX&aQk|Twcl8 zpY7+X-Dc=(cieu*^UuG0`{Ujx!vDzuy29me|Kp$haQgSQ8o&SdVXq*6V))H}_KOGU z&7DvF$vdmL(w>S|ucBKnC+}@0yLzkIo>E$@o^MynfVjA|T({l*`{0shHTF-x3%alOFXWsWyIm}fR|W67Syyn_Xt3+mE{WC5+8?-I$BV`@kHlPf2EO5H$`O5Q5b5Xs%5+ zeWbQ$5^9sSOm`9rn)8Y8E~rx3j?>zd8kKZA_FZheoBLfOAD!ywUOC1AwJ z`Ku-8H~yEa57X)%NZZh?t{(O5Zy&^_^y12eV_da-Rp;H9mhXk`j?OFz0Y{GLG#9hJ zam+jImVwvA*Q7yXV>%u)rW_|p(TJMI>}#NPyDD>}E?{0{)TTd+IEIKD*T_kP>g+pL zHYWrH)HT%;F|B^phk1^c$E8tbDZ^e}DcwjkSuGgG_ak+!bQCT!#Zqr1)1~48UBcUS zUF>a~;c3sAuLDnxpdiVx)KlPKC%~5TwKly7>yDLEBN79gZRls8FJ&HFKdnCb?ix*r 
z(X;Lb2n+MTGErDAChTEX;Na}Z4_D9<%D$I&!SYxGyf$T+!Ntf3a&jf5E@LbKChOam6g9jt1v;UWJ9Mk9eL-v4*u} znyw$chyFlcYNF+380T~3{UXFnFi-$WR5xN}r->xfj;@Wiub)2|G@P{2eR{o)VG;P* z=MO*m;d+ZkmKXikKTM?{ipt=_{Mw9 zgR{4GRqjs*f91K`uf!jT-@^UL-~ZWv{}1x^##?_CE+0Mp)U{5(^yP283^xkO^L*>5 z`P}~IK73ofeO|{0W4v{9acggwZueEY1)IHO?WLcN056(TIGit`$rGF*`|cpcW%X^;$eTNDS;iO|WLh z`G5~LGaj71NnQ`_4%@1Fbu$JDCkDf7bt?w14hPYiH1;?-(3jfCi_`UsL*gG@f9(18 zV@FGTd`aLAHv}z%2!wwD1u#H>fd3l@=nB`2%{w*!UU@6#@%PX64DXNLjA5+DqK%>Y!yN+M((8)ioN!;P;4(|#pP;A8}X;wcA#4S+iUV!%}$(BdEm!QC7nB0z|Qpa7hR9po@J5ZK8; z0GI&6Qx3pEj;9^ph01z3G6Tv`4EKCeGB8LQV0bHuK zeA5=+_t(GpgR6I6x;Hheqm3#*w7%eh)!quC69dnAI}F)$^Va@h&1tG-cyi%dbGdQ* z$3MMlZhl+e;g5UbKl61j=l^A$np0xEdhLVo{BFL7{FU45*n`*)thHD{_3Hj$F;|l2 z$!x(Y^&3}-d?e z+G1hT#gyZ1VQ$xQb5VvAw|}r&_b$QC`@BxlO1ydN;PRsEr*in>Wt#7_rrRsL!`mgL9(ljl|4q8fm*x$aDCWtBrA+}%|?W0xzwRL+L0dl<4D2Rw0m6Zs9ho?Bi`tA+{{sEz3Da*X(^qjl*A;>MZ$_Kk7ctz zDVaK_aC@=n^IA6B)VGYzC@+|4p5MH(70 zD1GE}rjm8^cvPVC>^@(TH)Ks4BeOTHonKLpyt$X@ztO0`7yYiJ7j zYLmI9)SsmJO*8w@=U5LKqu62IH9i}4HT&1EsPMRJ(_i`o`t@h8r%_AxXw&S`ojuz$ zzjJ(#y{r+`aSaS9#au_qpukL{ zqf&FsbIIG{G>ISuSv2%;B#ZDtYboksK{!~|=S%tBYGtoE!y2!yOZQrQ;1sQ;xNXg~ zT9LLt2^e&@v9K%>67PuWDD7h48*@Fs-D;g4Y#xgrsN*D8W+*#A?t!|B?B%i|LD4MM zfciE|!`8q@^zN4G`!+4d7h6Cn?7-jjR-S@91=yC+@m>hatg2+ZXk~U1toLNHdj=d z#g(dBtdP!Xa7nRE!r5yv4-t|ytIr-%=-SYv02>7su6`GE!=}R2ct2Et5LDvpKWDMXQ`HEo>aGWVyDRn&@cw&w|Hkk|ooM{S@^~zlPq@BA6I{jx%1Qy&Do{3ZX8O|_22|1CeYvc z$^Qa~L`+mnauK+hSG5XPA%NKd5eBn^UDYuc(>$}AgAMEekqX?L)B*qoF@y;mnK_U< zXaZ`bplAplQWOaS6RRK>kbq5~i9jRFRL}&tJ51pQ_uz0RsZpGjO5uSBRoN9Jj=&5_90(3lfEh8lz$kPkQPe~o z46p)jl!)43Ds=!u`v}0f5o!}rEAw3A!nk9xCK{Vt0o-C*h(sY_L?)9|;jWil zj3}y!)m$;n6m{;dT&eu>gNJ3jT8cG2fB1Mhd+*$NyWa)nyjc{d)biS-S!}!;h6`M6 zbZ0W|qLOzqssnU%=vXcr2)R~r2Tb5#Ei#!o>#UBAiUA$93Es@>?2Y2x5!ctaE5179 zeV{zkDhyF@ASv+e$R7y{JsD3Cz|l7Z#gez|Jw9V z-*|U-^^5m2?1NdS3$@Ap-eDJbi5C@PTQ^vA;V7-`Mj>gQAS>DG;3N&{xyvfu^^*LJ zgQJ1dZiCRp7e1PJbJ$(k;^QfYsOBK5OynSVaC{S!KLz?GfZ+rNz(no@7#X<$#H>!lfk>k}5FBs; zKyZNIV8&BU1SWvs#6%2=P$zME3Lr9JcIOoQ_|EFD|7p7W*`M-n$13;Du6ejG>EOdO z(Sr-fkxA6!*;p5W`P@?-zBWA8hdR$Yy_iBgA$#?Q|E<%6HU60466at2jbDBEUY)MT zHP#nu`iaZ*B+(_02LkH~f}GvFLESSFc8_1XjO7EVA)F1UBUjp~(&AvbWP2m~y*#~; zj^A9zUT${&c-faaFLSrJ$zi7ROFnq9jeF%syJ1~wSl(JK>^gec*3H77-8jcc>!yT| zrkkr%^O@}ONlrP|A~#q4xNf^n*W2r|yqQv|+pA4`co>vzP&AWsSar*#9wZ-&^3{Ak zVNmq7irVH%{kBVsxhm|oO~8!?R-)R6%V-{N)9`qIT?fm%=EmK| zhjKAL%1JFAX`Fe7IuDn1H@&^%TcKZcSJy;R6HVzHbusPY5~^`m*EL*=@=k;C*(XN~ zjoHkblZBTfA0HO)!?Jf8H*-9d7y=eDZ>E_J4p-~!+e+bvknZfMRCKsmGcR9Cx>ql? zJJ}4E$LpEmDJSbAG~X(;uTe#FMC@92I;@;m5m?;hoVTL~sp5rQ&32_-b|F~lEtfJF zY;u`NOsmzvLNJYLS6pf_8&lT?4!Ld3CV`f18FDUa{nop7IAhOJ%jGIf-Y$*iVKO(h zQOOKgSQ*S`lvN0LxFe(ea;NhAK7IDje&;k4pP}`9#sUy7Y8W%uv-=PB#@#UA{(&FA z)oj{nKHohtp!*ayuTR6y>XzyUU%R~WcaHmm-Q{lUH*TETyq^ba4{9k*-i18oggS&Q zi)Al}9ou%AE8CR%wYEzVKZWeuM&~(qRGg?JugwPvO6K8ovQ3|Mq8J~w~)>-6qCd@>hxDq2Gi(tN(-5JaeL!r-o> zEu~kX>;3tPB}U9KC3>oJ%ARW7^=vxfoLz!KJ8~kQV)Z<`#v}yIn=Q+&h8KrmYEjf!n8e(+&$a8d`;hYLT`86h;wY+ z8C7p5DjYA#nA9gCFolv!v8L&TFzB`rMNf?ft0$vxik>sLxMkJWoLdFiB0K{E({hHz zS{7s`(^XzwUdkuG|407KZ~6S>yZhR&{;fw}`RcxX`_+=06My|S?8*0i`dDrC z+VwkT@0`7Rr+ei`Kl0afe&@I7-+6C7e7y4^KbX$``BC?Mx6}6>9hK9=ba-RJ>C{h0 zhx5XY6upTA7Z!pLDQA#XxWdH)fvr+?SeB-8?1dt$RIsuMgS^5NrdmxyjBSXhYE=;! 
zO_;`cx4n)rt(ujH>ym>@=O;woP($QsGGLtJNGR{`FA8=S@6}G-YJQQ#vcc zStLhY28uygU1m7>3KTBAsI4$j zT*CRtp6mQu0@JPL?z5%E_bfhfGk)}LZ{M95N$|%RVECqhe-RGAF&SpVw8v(Toqv6P z`|A3&^Gv^d{;rgZ$4{oSCv()TUQ}!Ww&GwY8OQ_{&=8W7Lkghe#4b#ZmN-xn4_)Um zMaQzNh@vEy6z~)gg9#|Ej!FQ?Gh9L9F**IjfBI7Z$eq}ny%h2?yJaPU8ORXWk=O-L zceRP2YN9FxCo*Ob0x>h0ySahD#7qJ3pph(kRJ0SIwKi1SCLkPugQEb2h(KUCJN@sh z!CB0wX?h>z-|v2&=iUC>x7JrxUA@grcY79($6LlY6FXxYVcyG9p7q1ytz@Jz$h# zW`f|%0Vs3|N{M2{Drb!sH5owwjzZ^ZhLx+7;#{FvQbbk4kjSh&29GM+)pk*YQ{4zm zWG-xLSfDEa{MxY;X;r){a2m{jNd@NS6`fNxsClQp+CwpIIyKi;QM^t`OD?(SezTd3 z<>I}`#od#7Gr8&R*}i|i>1}&4ZVA72HH(|g7AEdA^99wlZ~0v2auqvf>L+ zZ(?rZtT9wM?5RzNE1<}jz;R3lOlf@wi)Sy}!sNT56&~KkTj#C-tv-I7IxBtUm`R4K; z{@rxvs0wOP`$JHgLCpX&C?H0ns2T_;jAmj~0Hy@Y2%{O`vcZswLI4;I1T#(k83zzh zC=3M609Y~b55Zu75dwgOK>&~@2Eu_bn$n0Dhyn;eBM3kgfEh?3U1Vko;AR$Hr6z~Fq9+J!q@FXaJkpuxk3PFeiAb6n&9Y{JgK!i{NB98D! ztsTDcGq3#ezx3~Y=4Vz9xAUm#lI~NnSXED&eELirRc+XA9;bY){j?a$-0Yk43!7rH z7AMQ*hks(Ws(!{ANBrLm;GMny2fzK-FK<6v221-fr(BZN&9F~LrAQpK30+sanLqvV zTjA_0yxo5H>4)Vv|5iS~k5gnT*=7UHb1%$OZ%fMu%f)(8*XaxG^d9P|s->BoxVp;I zcH&Y(zzux0yS21zUiwrse&9--xm?)85y^8M7B%as^%t(0O%=*TzVq`Wd~mDbmH7RyplWhn+cz4X~cnHf0v=)(Yl61lWi?N9CpR z-plz8dUac-L@naaWpH^ekTV&cU+f~oEy{i8v5(L9cl$Xl_CAX5LYnj<@Al_=fA8pE zG2GrjltFE_KpykHXfYRTw!nJ7t%6pGt`?fHg(lv+Qz0@9tKH+ z+u2o7nJphwdOetLX0OtI*RhnH?)BA9!^!FDZaN<{`)2L@D{~>2rPWgb5?R?5;Oxvd zx|P$QwkSbK%W*#z zYsFkn6+?E+nQ<5=&8QuOV}uUlcsH&Nl7(5`O|dF!E4{i??af7KPE1Il(tNGC1Yh^$ znsS!D^wy!PUK1U+MSV%6vU1bZ>E`C?(+t~IJ&B?x=Fm;i4D zc_!J-SIsW#G@(re5L6zbA{crS=8A>7WQ_aq3h{nQQng!Qn|z)3YcZ*gyxp1KV4COA zT*}V*DU0c4Y664D$TUDW*5qbnaig!mpOmzyC(n|1 zJ7dVo(zcbDJMVL-TyUYt9x81NK2)=?zF9M|Z5MzM(jl#ByLArcEL-Rv9fxLUzA>Qk z2%3v#rJn^>XZ4#8n-4!ae7ZUC%`BJSJ~hm%@y%U&?$UIt*EQ76Zp$>o)qyUrP{nz* z^ek6#UCFEo3u6!3G;As(moBGZWANTV2u&G0tZG5rVlNuZtiq!J`KfJ*{dRjdQLO9i7I|_5^VIQOB}S?X>%lu7uzdi zShTyMX|LHFOMog15u#AN$0%m!vjatVZMRh^(>z$pX)_^kq<{T0Z+-r)FTDK8lh5tm z`|G>+zg4#HZ^m6;+n>EZwO{$={`Fx0^SA%|@0c$iT%65jkN)+qPyq7C1@ zwwdGcwA`L9uHHn@03Qy*{X@QXFmH|)`wn%7`7*eLt37;>qC-ZSG^^&SbEv6%7os>p zrYQr&3B98uNP;C;_8uD4CT}^YFNRdiD{4(+nQljT@2k0kiq}!;Hcds+OKaj@<(!a( z7n0tbNSdCHd9#uAj@Q;#>!Dt6to_)Vr}LE~Qq>di3R9CX^Ymoxq^1qo_Uas)%9%3n z6;OHu9J|IU?_>0-pGq7CqYr5=G`SZ71#W`NI$Dk>+0d&wV9pvEN=L2Kr7SULTCzY; zU@#F+I7%^>HG_k5M3P)+5K!m{3L}`Vqh^yid`-}@&jfuk7*$izWa1>u3DJC~HbiJI zX-uXU!ysy#6l0!>=7CC+V4#Y+P#O^wmno)RH;zWqI$y8b$+jA6}bGL0-Ts}(b< zD@H>hk|!XZ5HWz>1%#RiY=~MMocFakw~)~K)+8;8R>c)+OhuCkM5+S*SO4u_1I;N2 z7DYKZMQDJ)2nC^DoRi8R#zJHT3JH(`3KT9msAFZWHkcM|hauyl2r-Dks72IVEfN5;;)^4gP?AL&#f++vm^?tUNiC_ukYUJL)UstMeV0{G zMduO=HL1b0@^sFj5R*c+)KKrdiZg?;R5Z^zONdlzF`+ruv`RcfQ+qWAF%qUwg@ZK+ zP?nWiOC&)A9X$g8La2n#Y$4Wq6;P215?Bjo0u@U}qYEB1vE*P${Wz*ffeVJvY4M10 zD7uUNw7dFfXI$TuQ7if2Thn;;!403TlSL_$B*lD?4iJAh8B5iAR2A2hU7v%-(6;7I zO`^2VN+j0iR>r29lsTCnR0y|3T?x#4VoLbI4cy(X5g_N70JTzqo|Z&ono9|dOHOh& zOO{P4Co|{N2L?l977S)oiuy%sI>QiwNrws8PFGuSs^@`p4Fg~^FG9C@khPk(p)tva zJQPYg>QPn0_Y-t&ONFiEbs^pH^EVBq?K&-{-5Z|k&bHA@ z@@&f3sM-80hZyTeKj`yH$K_Gebya2ehN>xY_oF>@b${Dh)atGtUiQPmY;kGS(JgBU zQ+Uw1kLI&yo4b&A(0iXha|qo`r?FY5+xyY4EIgYj-2Am}Y1uNzeX``r^vPI0xOw*1 ze*HJD|IYt-aV}H^3Jj`RV4@0z;tvr011EwpcsWD%nX1D3BUkv1ZlKC1m^%yNmB@3Fe8Le6@dY!Mi?m|NCGGZ z(8L7lAejIuLd}2)%;?|(`ic=e<9e%U_1Cwva{>#%~W4v*T|<&*Pl{YEd(uVP=0 z-x!9E^3?@(9(FSBMPFL{)GwYLh0o9MM;NyJ)o)^b`QLrAp5Jc~U3^yJ4_8U<{F=6| zn6Gw2Rh_!vTYq(WlKVIH`ny-STkSs%H}#Lsus%-H^xz^&>Bl6wn^`@^cG)E`_)XhB zD18&Ura1^Qvc&VmBP{{_cd=NR__lj=KV5$!qYDpOXKi5${gw}pxZwP{KDbxabKQO% zuVz;WRr=s`b#{C*i}T}WcNdS+*19j1;cnDPR-q}oyE2u0*r$@$)~)zb=k}fCtMeXW z1=uAcj?7B8NNtizZ98DMIcJvP72bY4#X**}H|+UB>xRv~KZYr|Nv*^2Br zi`tl= 
zF904^clS@KcmoAiWuBJ`h?qvFn(6iyPDIq_JXs!0mh-CYt}OezD#+gI3?`I1B(Y@! z*k*v0UbQTs+49aetL$?bM@?lcf?SLE%u&b3+j7sVbux`gQ*G*DJB)F(PGpMx6v29) z^SDkrA8r|(c6@xbe6YO#g;(YbbJ?aCn~p#v#(3S%>@z?8g`Z!Z{k^HP zr%#IW5aD>oA~WaW{@FkHaIYWU>3sh=Vi485jFrXtK5Z8JiSu@U9C%kyVn4dzvxk+D zdKhwOj{8w(Rcg&HC)TbCb2$#Q2=#6oy{4-qK1eMnMaatYE*Dz;+Sy6WQ?J&A1eK&B zwj%@R9A*kt?2tQ2VvEhL+3< z)K%?LT`ZMyYlDR@HH}PL$rsnp?}Zs=bw7-wD$`UZlpek=nO@L_c4VdMFsjviZG20U zlb5T_T0eQa)Mw41FX`bf^-(V~LT5BrjoDl>VfJQLg;-lvTfK9)qu6}Ang#ST$7<_? z-d+Y@&t7`#_$&JIom+nQ@&9yD!hQz_W+O2v7+4+p!*%Q5T>RxXX8xTI|064IJzYLY zWi!SZUz;iA-X|&g@wxMh&@@bK^aN9(n0Xanu@*jCAj`eOH#hlWw|iVS4Qt1|+W~Dg zn=|k0LDdOqm&;Gx$eaJ#>+ksZ{%`)?vTJ$J?d&#j$9TF_GN(H$u=NbB@-UlKKqalg zCBGM#+MaX_9ZDM0#Co-;W@`IJwBB)0Z0EBHvF)ElNY+AiE;|Hqo*`hW-1!gQyFXg@ zldoQrtE-v>CR5)v?E}~6VO{Ts?#-KS_W2LL|N3V0cHG@;yV;-pf*yV4jq7i>;aC3A zUw>Xo+?_8d$E#Ocrp>UeINjKLAEa)6dGG9D318Lnf!%#8+5Gr){_?Ax9-X*rt)f*g z-5r7r|R-Bq?TQ4dKYf?(*PlBIY8grhs5(!dbzv(ZZk9qyUQ#kH% z>kUn}^gvb51D&e@4@6;dA_|E`SSlM7*p4lbTjzu|8bL#3B1e?Lwlt~Y!hp~=PMpCE z%t#4?72~jtuX8NiWjy)}UY-Fz4G8}KFuYKR7h=N4f^>)T zJA9xRM~pjchr9jN_u{vkVfS0({s+4&Kjk|ssv03ft>(PIiKoF}Bxq2W!9dU_Z^gAf zbae-77H#MjVvtBHgusVta{P;b`LB>A0D@F3#R6mxI1@9P7-)c)100>vXP7F*-~gD1 zBb^zu7Bdr4L<}&(gn(105YRl$!K?`k!K*ogf<=r-fGMajfK*S2_(+TfWOig=%q314 zkcd)DK~QKwoU}v@)Ea8Ek__1*%hqK9TY4ufR&p_b-jPm_9k&LH(kUwMG{&G0a-vZ3A7*zgS9G7X+fn;Em#xp$LOm*Z89M(rziZRNsSDij(P(Jycp+i!7mW?U~2c z1?C2Sm?w*pUCa9H^XbIN? zxE&wuiGwCqK&(cYh6k<}voBvAZW^+y9^jHdD7TYy63a`m(QSDPCTS zE2*tKXD=Tfs(0y)!RJ#gi>h|*!9HJ<_1l-@`lH(qhp3xzD%pw}00Tf6Kt~YB;y_4N zKt*6+0%$Lg1W2gCN*IHWQhP5}AD`ay)ya?ig+~p3>c?JLRqS30v*JY6elWg!Q7$gN zuW?$}k2gemwvb@;EPEJCrCzbBLT)N!I5}K7^CxMdV;c_gB(b|)je=*5s zBxn?Xw4^YNYJ>qSAfQ6aUNU6{(vTGp0_M#C7E+9aDFB){S}}kgFcPEz3iXB;3?xB7 z4KUCYjsRgK4WNOb6_{6&0s)`^6-v;K;Qrf6+#0gn7wgw^W``H^MCul_+0hUPvWjzPN}31k07atEt=!x z^RRMq_4ZYQ-K1e%Eb6q3NOckGfSG+Pvbno*mrP(5Ma5Nb{Gvt&?|F4KZP%kZ|mzh7TqG z!Bic@G1z8MPPZj@a_`}?XQfHDyUYDflOJB0r;omqhIVKUI(^4+Hnt1yYm_0DT^__N zudAt^j_Q)rP+BR2jW-;d6kvPPG5OiP%-p6Kho{k!w|pI^``-H%4`*k+U7V+3F}$H} zY`wL!ySuco!R^PLq*=Xj^=G@ef+)Fr7xtQmH{A{Ws@^%@)kljstYBoY;f`kS$?~X^ z8S|7o*3l58bTgkCo3?4q_7Ku78Uz+WJ>y(0o6Nd8l$5okhr{mnU^a<(U-$Vo(YMn0 zesUaw)N7xkwR1((UNyM9pj^}3UQ3!vAyOR{`(16*Ce5+;;LkyOUYQQh^zlFW@FQ7uT#UB%mUMoR$AhwO z9s2@WG7m2%soP~%!FMkAC@PJ{(v%x_Z~pT$-|e^@^6%c>{A1s=CWsh`#D@y^+83tMz7hyF8gBU3Lpsi)*CZjLhb5hv}<*FhzI`Q+c~ z-0NTcV_&;^{^8&Lr%y2JHr~%~)Ro{5Ni89=c$P2?+pf>@vb(Cecyi#~w4JS`IB!#@ zUs>2e?e}UW+YM}dW>2HmL0v6EN_I(UTKYiYs_7P3Z(XJ|jjt{ZPZk%QakiUx7_O^d`))eB_k%z4)=S~% z|Ff@NfA71$@jv{^-STMBJo*_u9q~DhuU+ zc5gFYZ>B4~4fRuBA0*BP&Yc`OF_;sufDu4}3PL3-p?XpQWK&oglL6}Jfe-=#v!Vc8 zR;a3j5+cs%E5}gjln|rTbn_ykiyLf^`}nja`uVLP1-D4f!o^~js;MvCfbNJmbQn!~ zP0kAUCRC;dwyP|Qbqp1qhZoEWieNGzC%`N;op%A6Omh;AVYEVwLr;_~L$T0yRyDbp zs5t{-PN0?~&|=Qig$jNKF+^?*8Hu@Au_|B{prLXAHKtsO2^`gt44wuA!qH5LEQXTv z1jATq3MT4A9bnE>V7a)Q=@6?8Q2I3XI#@9s%1G5cvT8|W3IveM6ve5ziN=yu4C=DE z(bLJSJx+D~Ie+xo?n||_zxYa!>J*>3hksE}PO$`P{685${C|`H0$Y@i@i<`jHpXw4 zD{+r6ervaX`!4bQyLYr)k0Q}el^Fuz0E5v017M;C6{VOjKwwpINs1HkbRrfcIN@LX zOMk^AC!;coWv@bY8IlDMtH5A@z#E*f2oj*B(26*LHKY^ChRKS76-t z&w>JV!peJCkh!rjP0o@dl>{hl_EKoh)Ir3#ISmdhq(~9wQjjMq6zid=##SUiV$Nww z$R*t+=5pgrL+LNZc(*2j!?=Rogz0uwFqA{iW`*O(HO}n^as>?W=0Wzd3*kqN5 zRNghDqf=H+QdMn>$%vyvpv%Jmu&C0uiK9x5p|$xY`t%fZg`-UJ%2l)!>H59_V$*1l_74>cvDNHCu53 zO?Seom6dsiQy0omhAW?@tGm08ZnwMd_qRxS%Ee#`BVKSI03&EH8UWIfu#(MS6a$=i z4^yUGgrgddv&|0|Kl@U6^>|*dBpjUn8=tD{FMj#!$4&L#>uix2V%&dc*#F@1RsD4I ze!YnWee8>R)l>T19A%4cF}Z`)Qrb|W^9AxlegTnsslhD|{n7QYZI^ZZEK>WK23D+J zZ^P!ay}jALbvrKauRcBs-0^Iy>-qRee|#{%*By^io8)$TUh3|{iCXix4p~47QDF%>2 
zdw~WBU>Tsn!OUo;4r*pm0RvHr5imlnp}J5{1R(|!8kNNWl43>!hyns63;-1nG60#T zpfd#o5Yh-M3Q|GP2#CSxNI+pU0w4j1BLD+X%m@UCi5LtpnhBWj0t0CVzyvfWpqP&QYNthtiJ>@9=~UE{P!9-&6!Sm=*Lk{LjJTj~dE%zJL9{ ze(&q`#hr%RIMypdtge^b9un)&RApaExmyoy3~>v7YRbdaN1N#5TdrK|Fr6(<*KIib zR5iQov${IEe3;ZBwywIpPN_+oMwZ<}@1xGPi;r9g`Lvzi`ZjvKX-`hJn`Zv}`1M!) zxNv#A+bIQR{zSR0s%f@=_~{dTD>%I?w&HyUqNbI#sdT$8EXvsM;;L%8DFih)?Qdna zwRV+uJaRDqiup@7-(D4L8@5TTP*As*`Qm^d&E+fO6%Bw0Uqe(?u|L_}$`uG=|3c8;!Hs z?ni5-R&w^nK@l)Vnh%Dm99L>8Q_=%5EDn;{ijZ1;Y_{3^qp+As zpGAudW(VG!p=Fo4fK+HQDY3srDwR{~yjq?2Udz*%noxUlmF_ov9>qc{WmBO2<040H zHlt<^eT<_OHIv?)*`>6 zCq5nJVTVoWj_(Q41wZBI8FiFdRZHW|a>~en)ePPVDh7X)IL7J50R;A&6q+C_(Q7gD zb*XY!#G}gIyJ?WZtC>T2 zH=%A^GjDWiL@vTm#Y0{W9+>eP#crna&O#_o>riShu}JB6ED^#^AQskvn5wC!%vQ6y zIE(d_YDsp;lfcz>*ykyVt6%5pu$+Iya(VUmI!-)z?LK7ubQoM-{$_QBbd`Rs?b_r2 z^w0e1ZT;2fzx=E1lW)BK-~7b--uC^^-?-_si}HJy&wk^7|FuW+PrrKd)9c~te0_d& zG=KE+(L7`0QT~(9@y>Vi*YkECKKb@?^%71#7cd^2B$!W9Ino6$rrBbKh}EVh zBS~nL1+d1%NCW{q9RO1$K;}%%lV|}!P|UJ2n@;|-HAC1rC^4;wRmdAHEd>|X4+=(TsQBnm2feNz0 z3ZMoOgd>6noq4bPFaOeiU96-6gLmSbfF=r>fCLRN02U06$iM)os)Kpwtg3?(2Q^bO z!Z|t-yr3?dD`r*_pw2n^03)MO4FPD>nW?Fo!mLOzbHPl&Y*tcW31Wq~D@bYdB?$rO zlu;tZL_7dSV=hS$YayhCI1y9YRA5CLpb{j(V1x-kGjU+$#X@KUJzarMs>D=*bkQ`K z2C#)QGd3{K&V0j4QOQi`SRzU?53A^4_2St1hSP`~6;!2ytV<AF2Xmx(S+C3f{PuV5ru=mMQI6$yxQ)r>0)s&efPo-DqX7mp5FivUI0CRt z14x2F!g0r~_t(Qw3O}@I$1los@Z%5v{KJEXXHu+`*`ND`Uz|-3U-|6}{Qlo~{yUcS zL*0$<_M69>*`f|&L)aSHDbP_^v98Zpo%z_8c+ZT zKv*=qAOuLL!2nZ|Mi^)Ys2O2qS_+^rRRS=Yc;O2G48d3kqk%>Mq-DYk4m3as2f(O8 zf;3P7(C~r*BmgS}K`|o%P}N8wQCy*lQx@U{PXTFAbw+c}%t!!2gMr2(ppk$m015#I zQmA;LKtp#(+J8LSx+B)eDS=Bl`5+R;;z;lH{H5{%jQm2$3Ons7s8_%{s_Yzf9pNF zUH|=yi(Q*yzrFj!?1y1Fbl2*S8i!@QU&rYxm3BWe-K3%P4_;{o(T#h){>I$jUoKiX zXg&REa<^U%QJ+-pd3?4p?^ESodvX-2n|Iv)-u*@8yLRsRp(0Gurm21-d;3&#vR^+d z-ShVz&hPJ+vy{el8y@b7#hacB|9O7o2j~HFuS9`kfXn8oS+iFO$USHd_GETb-x|bb!>?OB;bhM|k?6$q|`MV;= zx~UR>kikDYIn!`H$h26zwyn{VQ3zXNALx-4_9=YIu65~c*z}N94o7v?C6$iPkS57I?QH)OIoeW z)jUbPI4RX;tWEFgiq{J6?_Qrft6*m@Pq+QdCRE3kdqBYBmTA4Uu z-OQaNv1Hkyu1sq0W0OOG&s0vJPFvlT8&e;6D0RtR^(x1zIA;a9jFl!E1}%A8)S6_z z@|EWZ8Ot=5!tIT@6YoTKeR56m#L#;NaokrflvHOOxX-yNqwMuvhgGo3xhCcH)%9I& ze(F)3UjHH(#|Sa;m{b)`M91@W(P0yJ@|js#{=(1wTqxfhs{Q(-CtIy!V!dBQ8B1+; z^S^yE;*)o$`TV8KamqcL3pjVUoA1hsnL~43&^dAQbdcmSS2u3_KUssXnD4jjKIq@I z*53P9{_SaRKmDBP&6yc{TqbcGAhJbsZn!~4!dyf`fCNTD0@ETP;KCOO5rjYv5lBc7 zMF=p70yrWJc8p@JK0ld!5ERI(i6mH*xS~MU zxp27&LoUmX2{BL$I0T5XMTo9t2KC}XvA(4n?=*?YrCZ3hwckhG6+sc1W}us@E)Y8J!(wlqUeKJRiJahjz>J!(jO}vV zAEY|P!{~w%m}ihT#Q@r-z(r}+7#B{cDC8_f5Y3|JIUWO{)lIiq-P$bQU2kvW_VwLO z-TwY>e(fPKA7)xL)*xJ-m^sUyt?TlVS9RZ8-s8j}$JwA3XRCm^*T%uqG`4Uw8aTz& zk>$~<#r`!WwA%~F^75{QJomX|s+o;%imi!t-GnKGz`}`JdWJeD|qlqdypa2kuXQ{tteJ!rqn-o3o$#)USOu?{6Od z$`9gMS-<{ltM<1_XouRotb<=X+NI6uo9}%7K5IH_uXB4kzIQ9We1HABhr+MzE`IKl z%l*Ij*5CQPL--G_=39yGx50>G12fl&h@1mHlH47+>&v%}3Gq}GfZU2pp_D=vf=laxXgABNn=(C~zMpI8U){ff zHjnyZ;I?ZiwM%WHNdxpw3OAn}S}GhCX6J%wiP|O6E*J69~uz3vFOFw90E zwxmQ_3#>Xr+2IHl*|?3Xqs5<1c67A5e$8{b70&JBfyK(+*{!Tkw{QhqE?eTF}UOxZce)pY${c<=r-EkyKGl{ywiaWUxF$n`g%ufIE zzy0sBdNPm*f-pJY%ua9^RficdNfZ~7LOc=RQJ6yjL=r>rBkW)y6^!nv+7u+1C9O_I z6oBLohY<`8_X>B*R*TN$7NS4`DN|9Hx|2JTIaz`GOkCN?&0PTqu@E_2hw8;Fx^~Qv zjsi2Ak*R~+lDl{?iY88k00;vCE)0uNTqMjCCO1-#PGC19HxMgD0;E$4AOxonC~VpW z#KobG5J);^3Y#LV7}<(2TpSfZLy6f+u62e)4kzl)Iak1j8?E~?Vcuy;s7q%;$d?V> z9ESLOXp5DTa+51AUtVU@`CP~B*v&F*2cuVc9vJ3FE0^}zYNxpFx?R6AwYi?Nak@DJ z4y(9j$PJs43pysEX~fOsG*u8`Vao(sL@=;(8X2)zua-y($)x+KUTvLobs?{T1dwX8 zqb5}{uE&H%%9cDAO6(#bFudvSue;ZmbssQ}T4&!U5$}VM*HD9}qkL}}RF7U>Q=_-y z*^K!IH?Kr0-{~2P<2Hap^Qo5fBtEH*n=*qP?YRf`(mGM}J1XdoRt;-fT_-L3i^BoK 
zba?T67L)`6qMnF(q{lB7LKix%snwSA+-^(DXCfcw>{2V30t zwXP#A>)|5JmCja`-J+aGgSPW#y^+}7l=Ff~_9q72kGmv|`0;Vef<0XfwWF6OoJZMO zx%CnWmtWr;_)r%!F8*rK=fCy0{^tLZ@#3#dLlSi&H8Yqw0}g-z0^INs3m|iHcen#i z3DOKfWY{OSAYxzcVRfl;p6}@6CC6YA%F=ExP#aU zq!x(^lR2@&$;}MxfJ8uvoSYc2NKlZ$gc%?sBM5-RZbt6p3=qJ?l?X;30Rc`#Aa-|I zCn_}_pa~ywC%^#$fk4q=5iJJPf{{@lmj{M6@u?)F#z@t-=884qXRcy^;}3srb# z{mQ1le!M#$4&|_$cs!SW`e8-5>O&YOR^pyD`7$hj@?-x*JnYu<82?GkJkg9y}Q{&yqA_|&%2y2 zKA7gsC-3oP=&r)J+4*8$)<>`LG4jRZ;cyxxNE<^J7c=WwoJBT~Q5)%|my@O|KV7Yc zRaU*Z+~~!r13jP zZM-hTtww2I49pj==vKFwn5MCbct4{VTNBig5XP0Rx zCrge;ZR8lT%y(i*XYXB=(J$<3`_QFrdOmCqF|HSVh{wyk|KZ{BSif}4JL+G-kE(e? zCW+T;&($19pfXbCsXGneK)$I%N$u27D3;U)bzas~mYW)|?AhGa^}cueuARtlg%sF> z`&68oU>-KAB0*40)kAVCWgN}PNOEWG#Et8IHVHz|VXB0QefXXhJ;dztMobfzOlXqw zT!W2IMv8qOsnn}OwLBvX$IEu~rH-d=Q`cPttgdpY<4r{lgU_yY(dVM|>Z*>94-euy z_kQ^cclL+sKt|2(N*I~Kp|FiZzPg$8ikDw~_e&?Y!#gQ|{hL?C$I<9<+|#VFHpe#R zi<|%L*~2EvLht$gMYwl}hs4 z=31<#PetEJ`&uL%hh2?3QvPPE#8K$1aKxGoY2LaaB6;s<53*y(Usji=#*d z<=QP)WzzlV%SCzb&hi&NfApE}^DjTA`h)Mhn#1vZ>K9Ep5c#@0>q2?0x%lA9_loV9 zU78X&NMf?SG=wZL^{J?pxKQ$2x;G9(kfKst5eT59Egv=AU5uxeo4b8XlD~fVA1fL+ z2Pv_UY#g)#RaeE_p&tOEKwZB$4rVLEtqsr6gO6F=s#i<}+RcmP+@;rXhAoalsurqT|JSs)JZ?~$3vXLHyQZ-J9Xb!d9`$> zccvD^4K+g4u~vdg##~W@gF&TEs+QWNfckWlT@T~A&_~Qgrat(DZc9yyO$ZWn40A1? z?e(OWYmUBuasFbqg*41^eYqXoY3^2TtNVABjLpyf3&(osUjBOZIB!p%hhe|oElM3P zzn6_JuCGJCdgHS{eW$U-Pn_*Opz(j$Trc*z`u0&+y}9|sXN2DV^xdzGFaPTQ{FhFb zoA=&)%gx)=0Vo;;E}{TxKLJj2YanrR09D{zOXljdp{9Z4WL1>ZQQ_nUgQ=?-dFEQn z<@5J*q}9=B0#dt-;Lyj!Xi{$>MUSnv5O4@Y?%>I+yxQ%_r$^g7(@l7o{CZkg?WW^? z)!e!l-I_KQz(yx0ST0mCMxC2ly%|7^S%?AYJds!S%x={==P2BC8**U`%34O(*{h&N z?HwsD9b$1NR_AIwIjEXtAmzqAn0YV?PG&(ttY+q!jc86_k{G}iUD?g6W9xwUqEe`1 zwYf5L;Eq8~#mTBxtp^aeOC-b|h#J7nXC@=Jidw4MO}P-1my_@244Wjz@i5S3z&)FB9SRL+&M9X0k}9g*hDrdQEd{5h#FNla;iiM zXRlI-D=Mm*1ISuc3&W)sq+l5~R)=gtfQ73`-9Eixt z5Q#jQi@PRhBoHJ75VM-7k(1Zp&eBzhO2j7OP69C!E5tb_CW&U`&R{r4qAo^&D^~|N zi1b1p!W7(Hn3;mZ;gx`f69bcLF%OJD)li1Z>UOx?&f{(#y6I-+Zl*)7%+FtxVfUmi zhO)X-=Jk9iPB-(bnd&s3R1P*GL&W1I?c6+u5k&X3$ajy zzZy^Oq@(Y}tBde#D0aOK6mls>Opqg>MLQw{*)!=rS2hTC1MT)z_8A#YRuoha%n^$@ zP@+Y+_{v>4j7qguB3Gh~W0@lQScMv*WheP)IT58Y^%^STB(_=Tk2SK3?Gq=OI?rx7 zZW~IOZ!yuLSq225yNz2rjRTV(&kMKOJ}pMWd|f%!{BdIRVWD-Bt3TY&`t0h_Fw@Pv zw~AZ~z)lP}69|q!Mj!$l@Q=U&0zgFg2!Q}laljj2H@^6|etOY1pI^Kf-f3RE_ocUf z=B>9rdFz8vF8Sq~AD(u(EUt$z&A*j?h(mhv^17e$t(#^}dJtZ&=SAO-wXa5-(?!HG z9I{hR)BrT`)H5xxqO&!)J2Gamy9-lEqiOByyjc3y+|eGL65SlB#?|cyA3N5IMEf|P zQ@mW=fV;H#Y)igi5<&#v)d=;F6%*<-H-TGTy}*!dcxP%V=WC=x$5(BB^F?a(*>8Vv z-L%c|8oMV~hWTs%;s2reH~+@BFF1`?>{JQvPJo*O=5RPWgJ40)Kmk&Fc zU!EWGydCd!ynBn&v&VS~Edc&%G4zWp++YQ1}Z+1`Y@=5f7Qyoj+stlIg)GiD=u}Mrb!@2yxGnx zwK;1zUj~2qJ>Q%6mFNuf)g}$5$J0@jmoDV1=IYUQjMSaZ`eNmw#7#bb$Ri(0@Xd7} zd3-0%`zx)nez(E>##6NqYUV2^oo`Kiv!+tF->GG#XQ7Tq<;8ccR-E0Ue0ZXM+R^cW zE>|oylCVwEubZ?wwOv{HWjBqljI4^~;c19R-8!u~=N$G|wEEOrX_()wOpu#f~`!B2e7TBhBV{;zgsq*O#M^ll#zUJsj#_#H}g05qsh; z+tP;XInB+ur?J0`!Kn2{Z;Ktw>UeD~edrU?b>a zRp)`Dm76*HVF`b!$B#bz_W9FoSdXh#{32j}Mn`cfY`h4;yS+8lA4c@zy;W+F=EbsY zI3#`g^fF+{DoHM2U7STuP808uE?PfUaEa}by#!2`*D1R9%6U_Oo_U6m!(xR67y$!( zNxF*73!>Q2?LybiX8Cr**@IWVhD)birR~Hj^C`)otATf`TI=4g%phtleKFYcI=!#* z`W_tVMj*kdT2a@kBh|o}d@W8^)iD(>UOHA}3i;+0x|zZzapQcX$N8AtuXf5pO`wit z4=ei_OwN>D)-X-8DTacIIiX4PPOLxv-uTrInty5H@sXadWR<$50H3z=Y{o;SL@|Q0sncTdfOX#8yb4VteX?LR zy1eST%H@q3+-;{CvZn*(j!Ogz;E~yYF^=LA<`lwe(cg>q?3!<;XHvE<<)l~I-@Xtc zYqTYv`&Kn`o`eHfREdlT+1$}15PDwi;nQ}%CPvdvW;Oz>OeJxIq^g!|?#jn6^o@DA zx}AN$`u@O&CWV%vno#xD@sBS&eC+tmrPB}ZzY~c5%*AW>2znPUa{l$lzm(~ypZ@fk z?7rU0XzXJ@d)x7;KV`3WH-Gy-`p+Ks?R>O-Gc4|X{1=Yq@#x#TKXCun^QR9|UVZ)s 
zPFTh0V3AW0NqI$7cC*7NE*fG9)!Ds}E7f*#siX$f;0v+^xx(GR?BYK@NhB5%_of<2NDOK=Z_N|eb&Qb?;nc8A=FZfZ2tS(3XVp3%0>js>5Q{rtWPvG2f7Ux7xj-q5{PG(F1)Bvy9bexs;;}%{g)!7Ov(Oea7Rbasg7Unbc z=62KH?rtk};pjU}_vHDM9&JUam)BZ-O=7cdmv2ppe`@pA8|%+?5&zI}K;I$-EC|dX z{Qu7=bCG^nSS-(__s{BFrfxE zVs-=%;(>xwtHh?2v^aevP!MXIn52RV5jY7Xz=$&3+=5#J2FICERk?T|P^d<#)lo`z zQ)My%f#7OX&B+i0D5$#D4Ag4w&IBSMK$w^q1ZM-BNSuP1vywX~cwlEDq!7U@a3MFK zq+lU7idd^t3BkdkSWqL78;rn-0zd)U65N9aI6HkL1a|5RNpTHsNf<3h0VH!V6b?pB zfVaVe!`Q)21~n@VszOFq-Ob5O$)~LK$%`U&{NZ+c827B^Ba!ZGykX@R^Fg!I@!F3T z>OniGmGPTyb?dkwA1BtCBimZEaZJ?iK+SF36OK_W=6b-er%WEIRN{cICQYCC>h0Uh zb^q<>hwc6w)76XdX^U$1YVS16wFkAl5TLRB?6#8<*+uu)zP~DQniG>XR10wM*_=on z1WG9ZWTpVf70_B?#loF{P>c-zV9*%O~9AE->5ka~a3Dzx& zqfIQJ!bynw7@1gCM@zTpb#I0jhZ-vGp5GHbym|9-j`obi^Kg0oDo@k*FD|*(LoMWR z0zib1I1vF%1d!uLh(HcFoZQLbL;#%d5g`Y-2soy*PrlxN{GG4<+MkL~n@@e?>Q(>l z1GB%eW*3#t<(vCT+gdh6-I`j}!#?k8ss}2!7u``@U53>}{E1adh(FMq z#`(d#*dHG*Pn%m|vuYOdWX<^|d>WAU^j+AiQ~F+MkEdm~foib!V&RTU)Ua7U_`)0a za1}8LHaCd4Iv0xbQG~aBS&Ht>=1%T0&Dj2xYya${`3rX&Y+BIvQ8c>Y<=*C*_lx;f z(k2hjMC*&~^SN!p-QYLRE+*Zk-}7SM|I6R}I~P~yzjtxjj!LNPAUAV&m^l#x@Q(ln zVyomZFbGWUZU&GSb?EE_5XfNweFR|&j)9xh8f zQ;o`@j*l^$b*HwQuTE`2`>q~WN2r4wbUu}hu2*5)CQs2t7a2KkU(}s!rCD$8x62Q2 z3NHqU%ePL%Lz9`#E~xfaIFu)YvQM&rR})_G;_sR?Zpfiw~?#=ycu&O<0pHbXGDi zle5%T=6R_|h_cZ9GUxf>vmfO6*^_kg z`5*t>Eq}2q+j*FR*4oZL_c)Cp1nun%2j=ViY&CfQAK+CmE*(R^i z4ivh`C`nl~$_BpJ-??Kdwp~ z9ibc>4fEOg;bs{|PJNx-8$zu{?y4g^HcrSP!#yf{HMV*wgXG0(p4lcsJGIGbspW+4 z3DMJOYFVP7>Y})djO;K&m8%Any;mA5&u)pGxP-0EI#0`Pu@(t*d&V@?SNi<$TC z97&uj+5=b0Gvusvo$HIfIC1WkCzC1C62u}=fjrFx8Z}308l++QvIadG?l~-H3zE_8 zVQc3uPghcuPwx@yVS61zTz5^*B~OK9!4#CX3dG(S?prlZLQCUf$a%$&D=5$j+=VkGI=TGiIH?Wt?vA_Earyx3y05B~wSVieyVr(6Xy# zkG|KYv7)3kNMd%c(MWIaJ2ATT-a_!X?w1QU^YyoXUS<1F-jIarzxeX6VQ$`g_a7Cz z_2+LXeE9ySe^uY1OQH;u52=a5+#dr59_6X7)k^7r@oEppp@k9n~ixs$G5J zJm1hgSSv}KnzjubA*jNt%B~DE&q5qx8{pPRW$))cIMA*sv6}tpzlLAw$FmpzVzRIK%&Iu;k z6$Gmq)i}B}vw$pU75Kq?HYGBb1V|DGGZDZ57%-q@b@zxEJ%L?gX7?g60Vb8IY06-2mC01%2LSDsZc83PD;4pagk3>gOcK52)t(aqGb3$O^ z2F_-a`{YEBfyxq&b9O`gAwlduDcNA$~A4}24 zYr5*Z&26$ybKQ+hq>J(__;fSyILrw(mJTw}+Y@m6XmLgj;}Pg$9J+aGA~A6?3QhOp z3J<=x^3S|s|Js8Gk3R90 z?|u+1$(qi%IXhj_qG4T?l%)QV5K(teDm;gHyqZ_5d9tG|csZPjpr0C9$9u0Y z1$%D}x7&SYNWx)tC1-=11fN%E(T7{(tFgPj{@{MlU5AUIe$46gOP~Jq zojbq=e;3c74}q`!xBvRH>Gr0?E;a>+INl8Q=CN9_dhnxwyg566H3%|wfj1>SFJ;j0 z?a%S*5mtBZZ~9Gh+;;bN8)qeoA6!04o7Mg0X1Z`qyM3D5hY2UPyR}}&Rh;$y)#1tK zVjfmxhRrN@9yt&SEX?FXUVFQ}_?RIc) zsBOZgPkGDv`7nx4>Tbzs+ZP?O9=Hweb$3zI@n$7)k<#vPUF>eOvxR)6&&$`AMzRh@!of121EsBKtbC&8XxOV{G8VoQvk{ zPGgf++DhF`3UNcbn#el`)4?i+Lh5@UQ$9etvD7eU(mAY}y&o6d%H(7fkwdB4TzI%0 zW)$03^qAmqOjl}_eR-hs5KA4K@o>>%s^j|R`TotH9&UZ*pZasRzWlWh@9Ot{X=nVA23yy zE;l61G*_9;c+^hzD#%ltISNPBt;0)er5=)Mj7OtQeVz@?7VsGXu?4%WX82Oni+T0I zri_{6MMrNZj#v3wY}`)Q9+DF+@~QCTA(u*SD9M5)snW_>4{YseEQT>pPD~-jgxH`n%Fw;N zG^~X;Tm+R7ZWxwr4;QaOWDy>kHQ=Kny0kT*qnKzrZQVJ{Nm)Hqf*G8H8_{}d2G@hn zkwS2ZC!H+FUQ{(=3JnwWOxbj#X>pKPWoiefY0#djUq~(Llk%#`eJIyMs2brFur!+s za%Z9}+SXo5J9dLh4W)O-N&CqjCHlh~r|X=VZ1h4z5e8mHYFV~slb?Ozv!QMA{_m2@ zt9%TM%W|Q{+o5tK@7rLp^CgmdXzwrLTg#j8JRhI$d<;IZE$SSol%4TZuitRhpG^PU zlI}iP{6^sENqD$V(~mv-XIrGEY2OHb^fUKlgY)q>zLorc|DXQjW0zlg_rJWf^qW8U zPsh3b;_ZLBr}UrS{Oa90{<%N@YoF@+=fC@3f78aN-(PHY^}S#I%<;P2-T&sJlll2y ze6s&=Zcn;i0=Erf3^p@zB~fA&mJ)=OsVJa~V&-MbJ8j#KoasM2>`EB0CI;rD2XSCv1rps*dQpZ0!|uUjqqpJKN!cS9NIBP->3Hc zjXmk=uvmb?9YQ2PLCQ`BscvpNA+HURz|E>zg;k%N7*q)sXqh?LjAoJ);7el%nU3&0 z8D=$tffB%x!NkU7LWD$t!F%7&G^H9lZ6NL~i8~}YCog7W%>>Oetg5khpdZU&gnF z4-Pm!Qdt$1fq+P0{J*Kd3~<4Rgy%c>BhbJ47ydmEIXQsfh(sQU7$64N;BLgu0(Vi6 
zlM_5R69{n93WJfjFyLx%S3oHkICy29)sfZB9S%SgfH9dUU_J-0h_KcePJ>vE9$bZ*_DfW?ARM`zg#DPFemAI&q)&+x;I5CVs%nb5aiB$k5QkeOI zyh)8pt8j2}6W4PuH=1WJAM7sKr?Oqw%eaoitp^Qs^V--@uCc4sy9NsHG&OAc>qygN zJUc@jGYrsqH*+rMmajCU)WfsFnBPCYEH_tmGtPHpo$tIhyfN~$zj$tSdTAIP(IA-- zh>r?U-Oa6fqG30Gy%iwefJLjq9L?uO<-S^dyt7U?dIT54_j-cWovxB>5 zVh6$4T`AlNv^ZL9*NdUtEO+~rt$f{Ev7%Q!>Qe(Tm z?q2=aEaR!{7s7U{Fkp)@RV(}5^)wDYe0h$VucrxSvnv24XZ&BB!AZ=oTXrAh-&(tQ zn}7SYr@7~x+uypba+S-DZ4%pw986-OSVTey(MZ6A0TNS$5K<7Jhzv4<0fUT@U;ua$ zVQ^xlld2G%3kh7Jw zse@)-!iD2@�>9d5!9faO}E%n(ZQ5T$S}`%uoD@Q{V2z+$a+3x9aBMS1E#@2@P<%~vFUO8pn(Aq*48 zi|b@fmmb&0UXDU9k4GzC>_0ua8{oxM{Ycj7IK7y?n|CzyHs1Ms<7V-gB_)=F6CM>P zrHGg%Ba#BZ)H{|yj^JSGSfNl=FbHfg7%`aeB?AZ$gCkD_gR&8X%o_+q1R^jj8$jlX zMaf`BM#O}f$ zA}?DyzP`TQm+ePmy?eYEbst`PKM7HF+ICZ;3GcRPYHHWoc-JS~U+UcXZ%zEMg)O1^K~glOi5xI zx-raI{ZFRplR|r;d*iWN^hGvTi63X*0y{@EW;#x< z?&$2pweL$a3|&3yf#&Wkt^{JK*y7zS564*YmG_itue-4;PEHpxwPlj%j_&;A!WqWG z%hj3DFtfB_Q;~^5nTNF zSG3Gs!Fh!1n;WFij-(GNO=J^h7U4Yyu|3$I`rv4BoY5fg;>BHd>&ZFHGkHTavc4pr zI;zlts~lO>J30u@GDI`Y#hfw>ZZ=fSXj7Rc%!_~`C*`=8ER%P~s>A}#G0WsT5syCf z?e%=|^Z7r9{`#9Te!qF^{j-~QALhCa!sHzp|4@_r>`KeTKn}>&jPB9dELJRA~dcL(KU! z^U(Anv`^=!XBx!P?M(M0`M^m9ZnCm0#2ndYe@9Fe{J!{7xMB)MYJ_&(uz7|QbA@!( zMmd%|AylByxOEz@`XpR&Ai$H9AzRZ9r{t!IbSk%rs9{reFmo{%TUBt*?K+ELs2d({ zScl=J#z{+SrJD&*&(y=;pla5%#v4N|0hiw1^! z{Xj@5sgxiI{yBC(Oi*+&hT`Bchhb10y6K5(D1VmCe~o{hW%2Ui@9dn=P5WdV zKl~ei;g{F#55M~x|1*w1_xJzeA)o%Q-Cs}g?9V;=&mM#aulm2m`sU^0v@-eH`LBG< zZN~Th_;>I3Km5&ay?pUB^lh9uTvlhRxLLt3*^J;AVwAxQWim4LIT>W00HR48(f-Y@ zliW1hiX$~#dv@NzGYOG0)T|U`MyMPR*)f3Ngh+CN7BYgFIYY=E?8A!40#1osWD%!% zhRx=k70>;c(`Wq+Xt;WII&1&Y0jsh03&d6nT=D>+%h^;AmO}nSDVO(NHQs< zMzldj9HI;6PGdzBO$F#x=TZzX83K@EDXvr+qBaI&at;t!f>?H8itNm7Tp$p$!5N!| zEPxymOh`^s1$PMmjN7M1QP;w2t02R4>{dtG*w69-62W z(lk{IYu{S8i_3c-zfDtIb={=p>b8%Oc@^DU+@a5Nw|_C`dDnk2B|CNpG0rdoKmfs* z$P9oQkrC6E7-S3s$N&%s{{+F*U}T&DGBwy=smtTf9uyyXuMq9LKnrH z4ylZDe%3#IU5?d}X&&tXX}CDK1zBIJ7Giok`BiKmo!8T9Qzp&gbDSR@bG3NfJ_tnY z)zohDfvUFDo8{&~v$}kH*Po`l@7}vfylw|J#^C3M$`adUT(3M{9QUJbf1<96Os{(i zClt4Q*t)w|zxSLfEzXL$PFk6xly=V&>6NAV?j+73V3n9K}D%z>0x32HDhWr8|@ zqA@!-=Ryhr2!Ipv!d|EXSLRx)D|JwH-Uq5z#xX>S&9>S$)%N4{qbDbPxvYGlVu*e&jVt<}49Gv&=_?&AF9-e&*lm25{k zzg+hBTp8l<#$PU2+&cT4IL`OOsePzkwp+?077XO#om z<+Wefi#z7_PuBsb_Utw2NVa@qttW^1SU*EK5u@du-Jt9t_3cQ@a9xQX#x}GwNi~-J zoRWxpkXJWiGY&>vkS!HWdb~~=;#r{MiOJynvA+M<{5-x&{l@i{mQQmyi!(CqGT(o= zlqr5*=-p{1Kg@3W(V>nNzwK}%bBJfBt{kiAs9w^l=9zJ1YMONr;mLf%q?j|NMP5o6 zXXcb$qj_r^e>zL9W=-*y=;DGxvAbSQLh)hNRJmn~!^mXg%Cu`%Iqzwd<-uYDi-i3am)ykaYD+q8?ftIO4C^*)m> z9Ecl|AMSRaUmcf$e11E;cXA%1?yeMNSHwkZoo~sJZ}Xd%|M0~hzP`#IEqrI|yiK^X zydQ=-zehUu_Jv!W`UU&_ELz9R#^)T4_2sBtImn{gT`ab5#;BF#Nn*@wwPRCP#Dz^D zMJ28|se;nf-R3FHQn$;cNjD2^9DC?&n5#H(eYsT0n1!2($fIs`IL@(P4e|1zUnCiPGqjngr-Q)#lcZfv@%UT>FBNsHjSQ^__DB;$yEVTA&u~c5O~az zvxTZ{P5S*PTo^WB7OPjO%I;qYbow)Om8hNE#d+9IwdBv&!0&%j|F@%@k+~3N1&O!} zBce*O0Az3^JFM@l0Wv)dAy%Y?|l&NL0Z}B{QQEst`SIBkcwrecE`V0Y?F=u?Qn!v8~$5jb-n_ zd2kbHV_8B|J6}ah3=&qrCm#?T*}{IcabJ(sZ>+-bcVGNLZ~T6}NH$-|u1Hhm*i@(; zxN16v+I#}&DGiyht4AJ+-38qQYcNk##IFJvNhbiYcqLOe5u9)+W(BsOrg6<6*Je|s zbpea3h{)xU0jZ>z(J;>i`a+8HTsL*n+@Y_8nyIPldR4(@L5g)7gNFtJ7tOOt%dIoE zc5&(1y3hWYy5skM>A|7>`Jef@rak?l|Lv@AK5+lRJ5w8f4dDWJe{cAMlEU+4&i?)8 z|9FVuw>DoK!tGzczxTjapZISRf4*J(!uxc7@#N|EF1o+{kACO-&wh0E>~{aftNOva zU)?@>@=)+GyU9g~3KJ{dIadd$L9-f_G#9`H8rWUG>3k}im9IP*D>-G2LXm_?qQOEb z%v9A|2ar<+EI=5N3<8*b$s9Pq1n41vs}_+QkuKESnd2?D#>`WwydG~l zd*jEIjAw_U@y468U`sVL{%lRB+DyYb1yLSfXR<`&Ooe9Bj8a@UHq5Bb>m@rcrKlt( zVJ>FY52QM+#K*iO?_-nH&`~g=3dyq=k;Y_*W31KtS}mFaMpDR1f-6KWCwIeuF)Mw^ zMXG~A%RE3-xR}eCs{NuhZ`D}bAWS6|g^@x56I7|LKo&h*WEhbs3?KoEkuToCp`=vC 
zDg$qUjxPjk+aq-fl)y-UJ7j6=!ue1Z&!WI-m%yRE{=h)+NMdX-GE|cGmB3@VT zFZ2B_wyJ@Xs_LoYFpfdjqET-Rl`R$`Mw^0;X1M-52NzwzSZI!f1CAjAKmk%Lnu#Ww zsLilXNrfFvk8=>{c3_}qv0hoVUr_Exx~FQQwWvO)99x#GLjeU5N@sky@k1b|3!|vP z@h$?Sh&ha!1j#DsL*kUkkI5yGCB>pB#L~?{8{9eJ6y8U@H&wK0?YWXr1^-#%syfVR z<=Zm*`#$LegqF{_xK7KVe_l@2J5NvFDJM~B(G>#sD2i}tQ4~VJ5dlG|^!n2`5%!1N z#a4p*;G2I3({+$g$T*0Wq%F5eOoGM{YJ$#Q+c4bh{mcEWW{jiohNP7PbjB2}vY^@^ zq;6EMYNN;(z(Cv*fy5vX|oc%U(;2M-^hr)-;jVLhI&FM&ZLs(Sxl6z zL4h*21NaCIYOQ=#CP|0HDh&@v_oj>{a1d!>B&nmAGJ0lX^vQ8?-%b5)%aDaau+vhl zH1|dXwIv)q91;{X5X)|#&9o|Nq|pHAO8XeYE--q6tmqga85y*7;8fo!5CQJBb=VXz z-gP_yzE=COGT)F5Ye&l)X%qsTp~%QCkA^8&oecvsfX+GQl(Mcup>$&F6;d{a5v?s6 zAhty>%;P-)NM8$(Hk1gMCfdMCKQ-m?K0k}`LdPQ&7l=s#h)SXcfwapPESQEt002n| z=t^g+a~EYCi8PQEVaP0qBaUb>fj?aGk!SN*zf0>SH{K);?4gA_L#p9(bvU;}r7keEp_g~)o&;6j% zVTYQ!^k#%UcvJUuI;HAc9~a6T`_oB2Dk^bPmq6%n_c`rfUhVT&UyTOu$Bp&-^QRA= zJo@euqCKfKY2|<%p$MT?K!TizGlOIXU=RdB0zf5#APmAH00NA_Ad)0Rln{JKZD35J zPm&~(1VR)|gor|@bX5yA6JsYe3utgg9BE;jnGm~h81@$+hmEd!JS|}|Kho5Y7UPTs zr7Ix~$RPMjpb*52S$PG6ic-Q5<)ev4s0dX+QP8YT|Jsf3tCNU{=^Txl+~=B&6Eig`rds|bos zQq*e!MnK6xn(e7@0dkPEPo;`q!w+Zn@oU;WAAYW-%~?S#m~yNF5MTmGfQZ^?Arj7* zGb8&pOG-!zsRCe$J?4~@WX&iHMvEwmWE_GRtu!DBNQNMopwL+rW+WX0L&AdsjG-H0 z2qT1!Jplj$3Zc@d2!#<7X6?=#&1>6Firy&`XFFEt?`aq*6{;*@&N|9If`r4ckAC#T zBMN{p3jm71FChpZ0Ei#}kU|7R5dnc;LWEx;1OZ_|06+vn6aWT6AQA)+K@b2UA=Kks z50A9f^J-o>dvf%aqaP3Y0E$#xaE7BP zRMRrHDjr4$sf;BWWhclahtrI)@UOg}oX)3!)I1^OX8t0iVK`dNU0o@QWEzX5QbBb| zb~3Bv!5rV*!^NeU&i!ImrD7Hz^<#@Kc2KA=p%F-xrUq)%;>2eEARI<^>wTw8-Y-(A zs`H07`#@{UQTe+W+5^q6Coundc(5IB{`ukJSe-s~i*;IG&~>FAR`o33T*k|-eW1-^ zzMq6$IXUCi66JPx)23|&H7b8XZ(EbD|MUmK-SYx~(>S+E+2wg?ulfGR!5w*+TPxE> zmh}vbeSPsVk6g(lfdpDrC=I?Iqut+C2fb&y1-**eo(%8K($t;)NpO7k`AIO@%)1I^ z-W}`RjkzDzvz@q{wmh)dB4j~m9k%q zB8!zudZ28ux$>NJjLA$`ZQ`OE!v+W?_+74_{r%l;9jN-Re($}V(LZ^>(;pG_3C}D?>N;Eu zkCuMRkH-w}p7%M-cdhHqqCuE8xrMQSP94fLiajn^bQiRkenz~513)o>*&!QXn8B9; z7uUF)WKaOO`HP0;$xgZ|^oCWsor!zn&2X!Wk-)umgWFS+SKVhn%P?p?S#K^b!yyaj zzTe;6AHM4Quz7WT&!Nj>cW8a&)b^2^9F>|K)ryKquGN27i22=*-`cdDxf$tXT(~_v zx{2xE`&xhe!GB$@;Q!BGfB%2@@h*=q?_^)y`=WgC(UdOEbbcML`drXnZ8l|@s0_B5 zzkXdyTt7J`FPKj3{0?sWe3w_gCowNU3-wa(!|(nb_XW9eXYHbR>wI={JwAIQ>-F8` zOyva>U2}Fe-t8jwXRFG+ks0s*?~h`2n4Tnmvtd?MG-pq%je-z_^=gVKZufP)M2$mo zdb-thjMe4l#VY*RO}_JQzE3~)_I{j{4Oe;8vv2+OiZA@LpS_sMP-bB$vXXJ1b4bv!b<4sgSa5qWQ?Mu14?{+QsL2fbktwknWS@m1 zy6M!JDfrv_%!P_*OcZ3qtkiPg6b@O7Majc@)F*YW(MDm8lEp%eoTJZq(RTT0b@|}w zhae3V*-{}{qpg&U3OtF6NdP^9yWC@3|FpfyVGQ(E@hjDFr#?>A^)9g`}{#gLLNjN+BH2jxAOpbBUj_-?>w9o`*i`Nk@_GWca#($(y`X<=geDtft#~HIYBMh5pC9pFqx`6$FCZC+%1t z;H3xN%k8(0Eqdhp;6WIj9m@Gmw2(}W8ETkJ5q0E&Xv{qX*`{?T`<Yy?p` zof3x~iw0vva_)IB^k}Y*YZ^zQeSArIjG7pv;)Vf9J3we?&XBtO4FIuZKoG?dQ7mf7 zkRuv|f+#8ulvy~j00=WF1KLW8$caRW7{*q^5Oi$S=+a4<=S)$d5QR*m(Y4iO%#m_( z6^xDe9!!TI5CZ@)Qsk_Q(OH{A;}fF*WdTZP5iPF>A=dYTqWS@tlXVdZ0Th9flB5K~ zXfu1vBVxA2P?rj&4hpEZ_ag}EAdonr073vv01n9@u|jZ}8A1nu+6V$95kep^S(G3G zs?4YZAc*HdK_*=ax&&>OLncb33cwOiBkM43bQ)Y!n$n>pp(p}InF5t>`;OBg`FrTY zc)nc!jwXTdQxAi0%c7mwRaTGht7HbdClv1Qx8W|ye5&4Gx~v_nlD?haex)u?@X`0` ze;XJt?!Ld4^`V>ZhawEoU0pG7Td>ywQ>K& zpPGc9UA^A4mnW-}$B%wvk-?o6N*k@Qumv$hoe2dMW`iOCg9x(#fS>@779e4UB#Z!> z5rr58m>DEx$vH=nIQqmPL{7s9D8L{BK!Vl*D1*vTLx_f*fl_Ia0!0%iAI95Og#f0G zrXk=4QPD4jcARxkU4_S5)eQN3!q9Z z%A!OCdj>W7>ycY8Is*^@unK@6wgT%ICDQ;%)|JSvp1ShU#H3ldO(eV9c<^(|-#~<45&%Fz07L{J5C9PXKm-BEh=8C)KoAKKfeBa(38@GG zC<%a5p&X~_Cy$^0>RTUVUF8|OW?p_B0E$S=B2AtWqPk8pe6^7I*-3fa+J;kD+ zOeNwZ#E^prRFUwp!>3kVx%z6Zs!ga~#bGkEOAS!f7A(^wEd%-+-8OphaJJ+aC}fII zZpXnKuA0`ubYctys}DvLB&X1g{=7H$&OTGExrt0v9LdyaOa*{t+xWEeo-(<*YTMM&*F|xEVO> 
z*(>g*>gvdAI||K~E4I{(UshzcOFKnTMci*idn%?=K*h60l3s;uXTRT@<&)~a6|ukI zYp zf*c>bq`up?hfuvUL9C`pVCP^Pwhz~%sn5<=l09cbK@j#3NUeKb!)1g01^fC{6m31P5#U>sR|uZx`=UH%_jg{>D${l{&1ZxPPutDy;cxZ6b6tCV6C(ItA_E7I>9nGvXl8}x3blE1G7GgDjX5h$49$*z$H$sj2 z&I9x6%8XTW+8?g=|7zWpaUsyRxm_wKZGQgl*N(34=VAOry{DQ`Y2DdZW5+*lH#RJ> zel&Yfjs3MKI(p#fIjga~`MlFlP}emdMtn5OtB1e!)wDK;N*-(cdPOCelwccVv_KMDBxZs<$|Z)JoVhyoZ(TI zUd?rDWpa$=b7gqwI6`A2GxtqXkSVJ2+>KX#47|Me{i;h?Gg&Gpknquc-=9;8ynl04 z91U8a3bK#8R+Y!gA`YB^W+W(CLDf@Vd0h)$8n-BlC5^(0i3Bi2lkpIdR5??k42-pe zAma{7=S6hRN>6eZG~SX4CSguBqt6onu93|+CCV0AW(LMB9I)d)YSm54S)U8Y9kIXl zBdf*SU=Sq0GgK4rIo`;jx1|~Fdx77iF=Lu49S(_+-HSpf_z1U6xJzy9I6ox~lLk7=bpI9fd&!(XxJIO^=TA z5jcz^C+^2;dQw{G;fi?gAl!#MwYf%|3GpGqT6%H*&zk@bI zF@UXz8v@A)K&T2Z12`VHlyg*OqKO}9YTUQAe(Mj_SIuntPyX|r{b9&I*oWsgX~ZFq z0N!xUKoOL-nlSkyK_=*8PCoP4d7st&mwF(VCdG=%$ z0484&)Gu8|mN9}hW#O`qIkKP+B>4bZ&=?|wT!%4A+=?n$RS|rtk^_kmIb~-Abu}c! z2`vEpA-~Mo=41^TB}GPVh!l1^8aY>!Oyu&QfCGzwBozfqtQ}BT9UP;qB@j@-HGp?| zNSXQ(kUR8(T8RZvjvas&E)avL7<0(ESF9nafTAE*4t*uT8ygCR%)nWwPQ04XxGe7X zNw!HDWCJ+~jTsCEqZF{gkcBW7maYA`lSc3qdn%clUX` z-{$^$gRVSt|L}<{y0V0BcfMjL_LB*gL z1VA!^a0brIh*1O?ltwaw0vSSt7z6mWOOkQ|PMHNWu!tZc5E1~eR@o{fv;s<^*QpDf zQ!y{9SyN;{1`z@TL|_y_1S(LY1uDrY#ilf*tx{Z)QlO)PV7$Sk5hBG-1;0yqi=TY* z-BqpBWbhKkEkPO*B~}Om$6l5N!LhN5jN}&Jxhebxik>U2SaU(_s#2zNu^MGeec$hu zj0ewQL;@Bi1Q11HPRNQBpkffAnMi@NSj@vxgZg{LCtokVIjDR;yO=2Y^Y!J=ul@?C z{qflQBo;{Lge=M`Gz5Y|3dz6#Fs7u!SOg)BN&_Jh;8bK<*{;@`Bzl_zU_`ytv?|e; za2+tWBjhA#NXmjhh@3@d!He9Dg1(PlC}99-K`2R;5v)>o zcW=tr*|v~UsbhoE7(=MB+1J8eR|(OgRf?1bPy&vO!*V3vxSbuazN*Rj$&7^S>In*1 z$u(f**92j6@;IpyxvxYN?3sqrA5p4+ZSn@VB&bF>M0MbyG#hK5_Vl$o1f?r|wy)C7 zLqE-@$GS+VI`wcBx#adxCtW3ASpE5Zy}4%@ztw@b=W^hHq&Sl)O%NA<2#ZxU|f zcmMgxNpbMSpvTizoBUTpypHwn>eVEr`|}%w`Ue|N9P6Wn4p97PpY{DVWjvDh zV`IOZ7tb!KzAp6SFmHD5{KJcSg6sQZWGI@B8M_xDYUxb@Ly_03xcJ4gdG6%{z}b{u zwjeu*eS~xqYpRug4C%s+S6#Uv>(J82 zvHNE7di9oGkS5*O!rB>ko8LA*9l3udR16Duv^JYNmDI!o=m*9Qre|fZ)5m)U3tYQaGCrgWZf&`d>Wb@uX$4b5=ahBeziY(Z&G7*@q-IKIrPV>-lIcxG3oJ5hUaUwd_ra#zS&hT%9D%T z7w=sTP@Qa~-Ky)Z{Ptw#CQyA;UH{pz-dwzTb}iLZ#J)R!_o4aD@4f%`Km2t#e+YjY0r>pWKYjIoyu6ru zok`tO?AyDD-Gf}p*%Y)n*_7FIQTLx~mHqOg)*rT)Nv(?2O5a_maG-~=y~N8kr$v$d z%k@6arZYoRRkGR#Vg1xhLb+>|E$0qKkko}vOLK;2Cm%w04C%`uZMKIuIsU3ihuLy% z7E`yOZ%^d?@B9Hhtw3quYxr`!)ArB4_{H_Jdpw^yQ&g@XK{w>1CL4QJoP<5>{io*S zEp>b64zW8wf)Rmdrmw?!eXVCHck_f|Ggs#a885S8AFR-RC^&EcgV0nU$I>VVnJ0`Nl;(zIdl z3OIIJ*E2QIT|Q{3G$`D97M%jg^ibt%% z;xv^ejiZ=cSE?T*qw1WO>m=gLEW`bgyqSlun)X@n=4phm-f~ z_(jKYkhe1gjdFkJSoB1h$i9U=%V+!gH&@0yc|e#Zlyxd2fGAgLi6P*g&buh|kA&d3XS=p4Z?bah7yD%xz`zRsTZ>)@&lQct?CG<~>+%c!F-z}U{ zQ%+%tj9IiTB1{Crb`ab2Q!@>2Z~w{#$1+AH)8iTm@rJK$5?_&U<_J?OkSLyO7FxSH)r*T^_cg1z_egi%S0w zU2xWGX?h)od9C$M-?)b}pPH+yx|`iii6X<%v`EJ`V#Tp#1afo}CpIHgT^86iqPr4(Y)E}G0FpG{o?2%r&6kIWcNAc}%G z5CDR)iU3T=1l_*7xCw9C*6>A&8VdXMhcY`l^<5oq+P!a{BkB|n^62mUo#Q)ag?=?% zT$WuV@`=Les+o7i>r%q{VX?* zZRt_z-7rqRQykTG1~F;zT4W;dnoKO}vr?P7Tw1)}LyT^7hehNV7`KswII26&RsoT4Oz zOD&aB8)aWnQo{%yw8#{A^k_vFjxo0j&4sG-`YtOvyKHW=5FZ?B>@kty8iB(Y9 zXz|eEyJ;0usGDNVJ&K&9S+ew)vZFF&3k9yFvk%V@l3+UjvBq z9w~z*2p2PL^23I8icP3H%xe`h^BFjt0TrW6fF*o12@l zjZgge0>*c`>fnd2;5*6n9yXu|FHMH}_{IY#Ji2aajF*iYK@PMKh}FSNYjIE1=dYD&0zgWR+* zCOFtCQm#{r1HH2O4Z|%!+isJJaQ3(mk?FY$s#wUp&P?f!UHF5SYW(=mt_G#z->>Ey zqb^D<6G-Bi9Kbj>L;%Y$^|N$YPp7lDDtG7nvo}V)dTE%76LVHg>nY03wq=it559W> zXt8!#TV~vMKZ(Mhv}LXIt-A%bs`1zdbqe4PCy+9Lan;!|kCX8jlo6dB*0&Z*s|*kVAOHve0(=S}>)$#5_}kN?|1us^yHlb0 z$uOGPh|8n9*_}FFz5K}rjZ!J5w}uFv**O05vj2;z_)n0k!}G5mjZgdS)r&WGLP+NE zsCUD7mCSqNEu}U)HCuJ#4ZMQk(%%fPUT&{<`!R<4V%o{oQyl%XHlncHmmklrengWan*!~8Y4nv!UFrb~ 
z186$#$>o_wVU58~;ToDg+2WfI^zQRw7&o72@4Pw3{m(Dsl<(dg49h=1F>YEsbi62| zp}WVme-YcJnK=t{Lr070Q6w=uA4#I>P{)`l9(DhR+%lRxF1sMKUpQN+zHqh|(x!#Y< zfw!l#G@|YHQe`x#gX^q*Pi*iv`(}GYeR%)ZZW`aT1fTuXoE|*L7b;xtrz{kim`Rzj zd$tMw)^vNomus2dzGFkQJ51)X6I6j0_s?s;TfOOD*zAl|+ZmJ#R7H81O-*e+gXQ}7xFZE}T8>Q^@*LkUyWaD1m=k8MBAs55DOJuX6d z!irAI_UiLt2;=-_=)P~FKQX;D8}CzdKP<+L=PW)xNk6&VtnAU*k*8gTFrTNVy8|^Q zKJ3dO+F<>o!+hLR6myT`hxz_8Qr!B*A>~}qXnbrF$T;f5qo#)08?NgVD>L$)FQj70blmzqMFr|vuZH_3T$!2A+jp(KQ>`Yf%Q`Bgjs@MQ;gFT#Kcm$ zi7UpGDM#062Xdf*|5gdm|C zbhKGNFjW$03LBD)q8hLwh*dd`t)2#0R;$%^{o%(q#-2Eg!HgM|s8Etj zY5k(TIW+I*KuQ%lZ%vW7Z#y9sF$*;%=Hl84fk2w|6S%mWIh(Vi-KSJ&E|&na0WGC* zrcDTC(ruYDvlRgW0^IFe(e}JDZ4-y!B~zHXa#W!%lMacEC>5=@U9br{g~(P<%1|=o zp(AjMVqO3Zq{iVg0M4}>y)zkgys0| zuQ+BU7R;c^ot+McP*~xL@sf57SaFrh$u0F;M$#b^QC&=W>&@8fc(Z-&Qb^Uv5ZCRq zh?-L6*~>KP+7z`?fp@~Ow?#sq`GKI^7ew5{rA38Kdqx!r*aGLyWje2$@lBZaMkt5~ zs6qfqKtN{93e02?kS6kgo<d7Rp9QQ34`NAk3JL z$|`_D2?}Q*fX-AZrP#HE=rMQ_S2-2a)FnYtW(opfCyc;M2!eI40W*)50V?taU=$Yc zK_UaB=u&cMS!}{c6i5?f7Lg!xkc5YZ5X9;Lppi8)Mx6w;XpI^uAqGWSRFQ-w1_Zz< zu?P?)1W1I8i5P)hmKDdx$1u%L@6QGI_OmrMp^Za|;YP>nY$8VCfH=iYVWQ|bX;EI$ z1OfqL0@SupsCb$tVL!+?4c(ahi)PT#q%Y@iJa>{g>$#ia&y0rs;@wA+{(Ikf_x_^~ zK7Ib_zy0F(uB=V0C*!}g|Muhd8=T*CYvM@fW)4o&d z3;2^5l@H4aoxAq9pExt~>dYj!8ZP#*w$W8x6(o&l%%sdMfCm|S^6B&+tkXaK-M8QV z-QQ`RYW}}|{!{jU^!)iJ&;EDq46(CGxf3~D(m|rps1|AJ$&WfxT;QpNWe58W{^DBs z<_jt>A&SMBaI!39J^CE6B`T zO9l!^GYKeG0Ez(q=>|YAo-xJ* zAwUE`%UZJ%7Ai=bCNbI~3)#xR!X}T?yqU6I8jI@;C8;h^KQTo=N61V$7}q4mJ~{?n zVJykB$e64MAuyss0MtZ`P!i6vSz??AHUL`=H(`$qvd}nFv61*B0t`bruwrJ#LKs2K zQIxG!o-eb2Za_~K)f)75%uLL!wP}_eIc*D}qh}C7 z0w53pPJ$qS@K+E31wdE;<(B|J0099(P~evk2>=lRKm-sF5P%R+NJk}E`iNdYHT(ISG; zHd;!Fjc~GxlFjNp`RweKUHvdyvkH91vJEg&d~t zHFg)z<93^Ry-?-=?Y7CM6g^)=9P?SGi{k3DJ*TNT&4uvpt9hcE@j*=Kp?C=Jl4T4V zYG1TtzS>*hERAuxb^KsfozO=ebvwa%ICjH}9?nC&cYEy9 z$k+8trA+<7SC0DNKT}i2CyHFPd9DyX(TlmW@6Dv!@6)7Zd1l;v`E9|qf}JhSDp2(9 zwGtY4+WSj8^{f7^81c^H8ljw&@!Tbr>@3S-QS8sv$#XZ~Y5tWYpgI5qMu7ZJgv#gt z_pg?B`t!RJ@Uso@{#hb*=u=hSHHAyz)eY2vXG46leAB}0UFF&oHe?HnitMqQ_xXCv z`Rd<$H2b9^4_TrkyYkG>uSbg+DtLV{CcF-IHC;5%#w5e@ZZo}l9^1=Ro6=-D-%iuW zkqn@-*10^>s1|l+a=XyeH~#Z~{;d;u?_K!oJ^ORG`TYNQR@u!B58LhWoO^1{%&CVyH zXq^6HxEJ`v7@E_2XSVg(rMkV464MjI)16|J@~ffiKH5ZZPwvj33N)p{>gEU6YAC zOIryuG4rExy}lYfR0{}wVR&lNRTz#^aK{f%sQ-&TK{5zC(*$j}NJt8XV)-fhf822# z$4e#T1X4+*X;J&RyJ&WAr!nQU>hyiW@qG5tx__sez*0YU^L)VB8SU}PPyp;3pYOds zw==RcWg+NcPHKt5JY1&|0y(V*_CpYSjOE%7SE@RARJg5F$pmM$kOS`%Phr}#TUL%n zpQY4VKyU=ONIfVQV76F{?78PSJprty!m67}1JDt`cIXr7K-Ct4`3I9dCKTk%xbvOSOllAuQg9G za!k7&%n$TT$%oY2Cfd002dKME8M4)YRO@+-cJdPO`Ov6|0cW#*(*Nw6T>WSN2(SQn z7yiY6{a^g6G(%=AV5yQVrrmx9K^J=24NsarK3sl(RX^P@wsC=GI)%)~Sf9x2sWFAsEu(-+ryrP3LMq4TWbrczkdpc`>Di8~e^ zHbde#96O0oriK3M4COliR7KRIq-@h8p(=p86hhL421$8P!OQ}Vf=Y-}9(4~c`X++R zAGohMG#RPF)e0d$JIcZr7rguY>ccd}{B2GXcYR=AVWQFkF+eMo!L`CJ-gwXj)VX47 zph;;I$XJmeiuTZfBqNv@fpkbrfr(2_L~2_(H~%~HUkl6Erc~Z!#MeK5^Jko3-B@c} zrpGX4iLvUeAPi=Mi8&U>7UwiQ5s9FaIF-iSr&v(3S|veDX*&X?wibz;*thFxyKkVX znX(hI1ekJEq!&)cp&epW!4Xf5QGv6!BnP)xeRnT<(JnQ#96Bt~WoC$%y`)4fBfRmS z5ul`QLX6Cn%jQKMBT3MBxWnAorrAPEBlr%(!l&Z-I|`KbkQ6axU8kRt(`fjKnZA%Nuw2nZ+;1rX60 zEdZpI5lBK2!6{GzAt*+)bRZxZ38L)7m4Kjw0HSup0wH*0rUREN$bt<*LBF7J^eonM{}kVnksb6rfgfK%g)n2pbFtjI5Duax6AR#vq22QN#%1j0F%300@`_ zSd!L^MkygPq%@>nLv>62lRJu!zjRz7DGGyv>;7k3{-|lY>({^7DTQ7DAf<%NWD&H& zlrVZhh!_&E=90jAwI6Y(LzNA8CrRDSXFjF%)R3PLl~<@zX2Y45P@jC`Z-wLUeHYU= z&i?cl+YkTEzxd3E@5-6?@7(?1+W?F}bH4|F`0 zpr>)0UZ?o7#v_OCk1`zkrBMFt`m^=LpXD=s@GEED+Wn`S@%7psNO7wuB2G21M&l)N zypdjC{q48fue|d?GyjLdpzdEj{aIRHZcX+{8!elVxe3hK?Bwlld^^Ou2b-T)e7^jH3wqOD6k%nEWLce5RB6MK69|Y1g9v~CGYEnprUb$wNqk^PS2n7%mcz~9G 
zJ?lYq0ALveV@enkBqUU56lx|2Wn@Q2ayw>{MpB8u0V}{sfC;rC1EkUrWtzw(Iv`Ry z4$Q*@l-OD!r=n6`7=7>{9?>Gy2x62mqLLABC?!D8K1f7C#TtZ61>{U>bGZ>#H^a>Y zlV?l}g2;4&jM?I7UYsr|WTgNoIcjU0taj{}SVacI6#1&wO;~td$QW3<4~I z5Cs7M2>?O<3jF^V0Rd1zga8C3G6(=6pb!8G!Y?5KBK#!*LP0_#Vt^yi|1{6#H@^R+ z?{TreyJzQz)y);WDJyq)uy9Ijje-&}kcoP=+54-TYrcL>`^+4vw{w=BUw9BtN`OkY zn=3!a*ux|#<4H+fx<~I;(@?J0NC<~@4-V*6jZNI2^r18sl+hSQm^#R;DhgK7RS}(N%pvyvP0cT2H~)N}J32ek_h){v;=8EZMse*==HqF8+#Qy~kB94@%H~&J>3f&m zi@x7=(R`r!&f)1VmU%N?pbR|PtW(nYhe`DDs~_}OZC$vutK;Gafvj zx@>4F>KE9bQTj3XC$Epcv~cr<|8W$5u?{IHb$CAj&C)?G(^lH6z-7x-sppwesky?m zeYqbdHuvWRAoayer81k6t=wW3_HDnHjt2)gAJv6$uczwn;{10X|A#OTqz+zjtZ$J|}MTy${8FEMR)R zN*ymAI>je1jG$^sc;{l!3CM$XVEyH+DK;T0FjE$htud-eVK2EgT>{_87^w zkiwacMNXP5B=aVveb?< zvnD+tTs_-MZE-O_YP7>R&Q8ON*I|0uu2ghQ4<&wQ2{%`GRoas! z#%|e#QH>F6C^{}FDaJ={RZmX7bMeDJn(SDr`*kiS4ikoTONCLW{hNzmk6Ni7E!M#t zU76)K{Ts;gPiOJ;_g~VIuI|y*p!(vy@4mg%&n}HXD9H~Ot=P7#Gp*)yP<%+6D$@V} zAOJ~3K~%;`e{mzrBfN!2yKrchufgbj#wXodZEWe2_2F5iG9_O#^lUaZL7uA>lafy2wI>u@BOgz>$$qq zAep`G4tIB)nNF(VdNUt^C%BSMi7sI_X6vM>@;O=GDh@Bhq|8D9-$}BQKBD{y$(Q!y zsXqGXu+cY@fIhKHY+g`k-mI;9R3eoibuCoHD{eR(Qy9+i6R1OkA1zDv*n4vRO-z*-Zem|vD+ELHXv%#;YW<(d0IPJB}a1z8zC2ki% zx?Kbv6avl&#y%y5{wUcU7Ri6kUPm_U#@Z?crM_Hm7&A93XKs}U+ZUjbEJIjMJuSYL zitqi_f8Wji<*Ua(&;PsbpSPWwHZVUw(;k<*?M==dVO>IZGwl5s?tW=epMI<6fs-gT z8f7>VU_?*|8W5O}bf&|b=dZm?o2V7_qK@v~x~GK;jILpZ?n;s*WYn1e=7F?RY+*a8 ztBX(i=S_s&F2ESUABHb6jh!1)axMwq8vEAcD&{VB!29UW>-%+SE_eIFICq?lZNmxG zgi#h+@(kbPsNuNihgaJ;Rnxncy`SP}qfzLJgaKhq>NHFl=qop<6jjRRCf7(HYBI3X zz6d}BI?rW_5GM+UlE3}6?|;o`IWBE?2iRs`+-s&UZmwQ_P6~Vi7zQR}Ev(B~ zl7!=wW4=e7QWFITd`1?mfe?iu(uANSxTKhfGm{G@-#0%tisf{Ky+QRtx~?I z#;i*2M!x{yAlDkTv*cT&22CgHpg5y#*JPOP(`c50mT2r7{BX<9(i#_c_#Nm)!VHkrnC;1{B6 zvBrcfutwD|!Z3LPQn3u63Vmd$CEG+rQL_;jlpS9;s!4FJjDawC63JEF0#MWnqv9I| z(@=)MnFt{Pqcyp+`51eSK^q3b09`~59sxu(c8aJMP!ahlb%L2PISPjWAXMiW$S}4I z086I;NM#_vKnjpiMF1zp5JXA=z|v>{`xpbzk$y;s>#L57O1n>bsC@26Ib0s7$05Fk`M9Bw^36;$ub2@qF zQ`A+g5oNTA1QHI6%t;A?;><{9bzmJmQwlT)=S2@pTAKpa0KmToc?o@Uw%nqdVtcdso8v7Cti!S3;AT_9MO6DO0~O zn;HA#S&Zds1VH`n8`euXc`w~+?b9F8|Hc!ZjPD@M{lL6Y5%{WlWm0d;g{clw=-1)n z3q9yhPH=snMa^~xGnox{H!#00M5F2jqseWdv7mBM{5Owr@%{Hwc0T;uE&t&2|Fh%E z+|Vr@zHHLFr{CY@=Px#&zFGa`QV(6vHlocUixTj%+ z36+u~Es&p9$GMwZh2=3K2mk^iuwW8E1QY=QL_h#Q0EHk5ML>i^iRfHqM%>Y3^TF@j_R}|a z=TUIld2Ts=J)Pf!6C^Ih0E!BuSDWE7dV-s0vA@2tv%V_kZI)jhwx%2NR)oPS1!6K) zE3(NY%wp<8+NavlY;I%9waXb+G$-tKfXEbFOH9IZI+7{Yvb0rDlTwX!+4^P3rl)+ZMX5sIW~uR z{qbmSE?&;kmfgwMIa}1jr*~!G{rti^^Fd!&X}1kaY+iLOOP3)5)v`pMV0YO_MdNKl zE3cn7!%SM2ALO$mTiv|A>8ACpnrX0O8h~(R6M;M04oPpJD8wTV1g{s(H zNj>w&nZO5A+;&~J!RB!rvia`R0F;l$8fsUb`|+pEaJf(QNVsT@0t8>TrQ{Z~?(BLo zW7XZ7G0mEMoNuomz3`uXz1Y;}?_A{y#~Nj_L#rqAumZEZ|L|A_dHLyfruc+iS0E$V zX^vKz3dGoe>HZr4PisGncek7KWwo7a^lF|N$d`ZtK!D++J}p;I&*$YTQ!!M}0$1zH zqj*#;PP4`4XUuM0o}#%vRBd{dE__zwmlyPl312+Z-z3?(7r(bPfB#0_{p$Pwx-iA3 zPmi)ZFR~2!e^=w#{@{^XCG6tszqq_{_T-)HP8urU>*AA-w$0_I+kQ(X#GPK>Y`4>O zXostP@HeZyk3&cjy^g3ADdcFpDBA~00 zHQicqm-kO__Ko-F#f~6@2h)>GX%d2r4*Vntw z(`udaTVHz%)=hKSl<&;69DF0_P40afc0X$HAwXW>yLal(?N^`u^pC#+euCrXcJVbd zW4-xoWxp3xTE>4()hGukre7TftllVfcMmC#&&tjD#}BCLKA-nC&bc`Hrw{mB$F>#l*Cij1H5$-ln$Fihu$`dFL+D z)g&8}AGtUVSEGxx0`E=8m9AM}hfVA|NrBR?{|nD2DLg#L`b0- zHr}PKs`DB&4^tppmxQwev%9|BjaVI6zkfXiBXD5NHYk^0pL6-eXcUHWNWS9Wiyd!| zQjhACOk7Q@&iu)>y^I>=btUsRp@`#kSLS!4MaRm-M1gnr#@TA#?MNjl+yOf2);mxM z0pND1RlaxN*Xh`)(Gajb(^KQ5`=U5HP17x$E^MxHFUUu$a-wWRp43!|o1;BA+BBT6 zusYQkhcqO8TvRdcJa<=#6+0(r7KL=2JV)i&N^krjD$SI!%rl3mdPQls70t;g+y}9; zsm%NpYn?_zqLKhw&LSkC{yN7QQ-Q^GQOrlKB}J2@6-X&&(-dUG^E2Bgkydyah8WH3 z$dyJE+lo{qI_#&0yrr1Zd7IiY3{;f4_q#B~aSw<}A#$rNrj(^kGojdAmy5$;a@^|R 
zRtC0$XQPpMPv&^{5BL1%*ZwJ6naMcZ?9zO`&?$S^kFDX6BxR%D$vAcEQ|spMzq_m| z@EXY)@n92(L5svlqB_LFxEc75n#6CO#3Um|f&B4lUOYIicZZ*OwD7d@@G3GBEfndb zHOUPK?Ne4=|Dx~H&o9ly1H=?%pRg$MlVRGM!1A!0x~zI-eR;hjGiwe_wVldn;~l3w zA0~f6MaOB@`)JK%*|iDLmUcI8Y&R|s-+IYWKk=@L%1S2WHDHaVjd22ZPgxH)JJ3W3 z=yIi!<32B{?W93cfsld-(o{0$Z$ElqZ-&Edh=w)hFZ~p4d_yr;TLb+tfRGy%s7L^5 z)^%hwmPIlWTuOTsfs{mP%#1KtK-qT_D6lz-L=m?r$>MZzF@Uy_j7CzBj6?#{Mj`+$ zBlrL@_opN<;ef!IjlRRwz`@zkXkg5@?0I_TfdstJL7^>boL3-V`zDD@tQ2YhWru;d z+o~MGPTnN09M>uNz}z>$0qkstq;)!X=vj|M#u*dCz$vlTEkLeH4Ujful!zIPqyeCt zMxI#Zl4+iJ0ueH`h^oRvqxWI?bR2CO$btjN_uX|Gy0SR3+He9R5UJS%pa%lzu~&jJ z-~gC09y%T8CU{kNM+Ty0#leI0{a{89HBA=DD@f$r-m2IP(sQcyTRCVi1T1Bh=K$p> z8d&2uz?#!J4>NX%I2OfaJl5piYDUiM#7mv@A3@WC13L4BapY#C^&q z4D-pCY{x3=&wI9Q`*BRyRNp#+gYyTb`kP;&j?y!ug3b@f)|76B_S4^Asp->kIrsRz zd-uM4=kLDhhF|nQSV7Y$b;@ur*W=mHK#VWMG%7n%_S9eizizK!Kdsz&bM#8^=FOib zzR;DcG&qKc2R@pLGG}T+aq3W*Y*tt1BukU<-N(c6gRklIYsHJ5-G1@2z!%g?+)aa; zzBc}&JM#X%gsc3ImZ|^ZE8F4V^>SzZcHhe61Br+tuojgD5Fsi+4M>Gi)@cS}ORSMZND>MQASeXX2%3;U$l=#N zcsnP6k_$phz=RPLi_(NvA=JPY*ozFj4=^OA7!rUeK_x~&Dew`eQ7|OMKt!Su061zB zO{M_y$XNo+5a!B)LLLU$_X*-&K_Uc1HVBB8HATfBd@Dc=X$Nr?0EfUy5&$9~0HO}0 zA~2%BFOe37EPx5M2tuxnwHA>r5h*oF3Lzo_nS`X;E`>H205gG(D4dWn(^+-o9L~%v zQ<+v8DNC48nxKcUY1?7jggC9nJx1^32tAg_f_0JC6_IVqxy*K*2qQx@g$u9=oWZ9Oa z%HxB|dN})J3UqaSs*M~kPQZdGjw*#Sb){nnyxf3J(kbd7SWLSM zgy~gdd+%@8kQp;<3-N>hazpxb^QH@OGu@0V`;=xx@wQ|PhJ7BnTk{oD9H=x=_dZO0k&HAxbD_j&M>|TlZ}J`b_`U zugo50cYo{*n0R+mb#~m?jd`tP?BDhe{ABaw2iei)-jVBfvp!)Q&r5SCcTVoWZ6MiC ze>O*Jz%akt%5uh+8OYEr*bG%QZb5E@1LG``8#_m?$r>%UA?)a*<@B(^0&R1 z_4ORHb6fP`3L-bN?|tk3%~z}I?U_04pu6MFXYkIAuV&vnS2%H>p1KLd0`87b(RNk5 zy4;FB3zw_YZBfOUF#m0QB zDc*fFnzjfbjOy7&-#apSbMRRByiD7xpYv6)$4!0o7GIaRe3eNN3&S^w@6F~12j;AM z(Ry_2G{19X6RfU6aqnL$^_%O}(|T@gneXwz+xVp+zsWX3FWukVsKc+6dJs}J z#*@(gap_`iyq)~gY@$cr4xQNxsO@WM-aVW&kj)@H?VqphGTjk2l6d*uiw@Kd@m|$b z;e}&=_VcngzzRR8%`WY`F&5?9$#vU{XzuMm;@JBjzmee=CB7~md~D_H`s}9DForaK znI>;nh)kbC|3Sxw?Q4|$vy~i#t9vD`_5jK%H{6V=$R8i-n}5D?#rSeDKg>_8>;*y> z)&a63$zP9W9XLR;{#FwX&=6P&F zFZCktu##;!o+Q@=k)g=)0vnLct9m-+yLz}vQ%n_rghQuPv`VkEtz5XDOpHH#>nX1Zq%+yh^7@(6n%fezC<0NU zJXDd|J|7KRO*PdCdPe44)p6Jj(MzS!MH8pVI8@H4@wL<9aXAKUN|b!U>9%yZ9 z0#qq8CPfFn|LV<_lk(ekb11=!Sm5#jiJM=zuKUuW^x?E?GXFYzdxYYYRvNz47uS_0VtAw z;6BOXAn1^yi{!#l1 zqz0FSC{hlrSiQ7RI~R1~0r_9Fs{Cu+1^09|Ns81ius zm_hPu8O~C1dblXGpKN;&UiF|t(u0;IpjAN>Ksgl<>5@~jD0m`v<5U`ULoPr-gT&+= zgNR_)t#-jPEvEq5SOkvX66Q7L3fqyqPYMN#tU|;XlR=I76#NLiYvE0o&I|isqAJgc z1(I7D$3cdK|Ce|=(~XdAS5rKg0hNio+$t# zA(#N(MYxK1 zS&5Pr0g$b(6BQBmj9ccM$7x1l0rn^o#3nFEfGP^J^b7)of;faBq^$*=$%z-0WCgKB zJ~rv?y72UlGnzKH8}VZ`xB0Tbp5*h}=j2~(t9vW;x4wV)_+S5>vkkrZU%wu9*Xjty zr+fO%>Rh(ZPr&!pqv#{Q{ zv`&%>-dj9Cdps#&@t}1emxKey169lZCi zyz}U{zdyhAn@|4oOLy`5mheeb3bMH}B|^d^4ka-Op|8YrX17wSw%6mWN(pkr3i1RP zIVK1K$b*k5L=-_nKomhi04CH#1%MdLG!{e*76buN6e0l04?_?{9Qc%nU1Yi`2ASs1f3!q>CNl6k2ph)tbI51*Di=cG^OoE7{6c8n54hc2G zNd&7)$P8FPr9_xw*fMXERz5cdv?|BUCaW{Zw3f;Q&=J9wkn4;UaU!9Z1{ZGopx3*z z;6vA^!Fi4lIDrTt0*C+t5`myF00{Yf^fj_b+kvQD`+e`BiZ_y7JMEbW_5{-FNF*LiQQ zTs-$X7!BM_5V8_Mq&m)NIV<|AbbvbQ(MR;#Wq~+zs}`NF9+o&MD_0cJB6Wbl zPNv1%bKK?n(_6Ny+e(4W>RJiq)!u3uc8v>Fr8hIZ-H+L2XN{&A2~ac3?l*c9ny&{a z_Vw&QL-Bs0q7-e0km3~}K(EX-!F{$3sd%u>C9Q#w*y_eh9Meg03Uma4m z)L)e%e(^dnSrK5py3SqhO_TVtAFsy#@VL&i62}N&cxM4ey;!AShJ9r!YZsY4qkOlz zY}O-|lcO2&!}`T}%3t2jr>2k4!@=RKG1YR|WZ-_*j35tc&Qp61*W2rRvYFnQUK_(# zX@ZpA*82g=JrzZ!Y^~DybqaQE4xylf@0SNRDsHuga3FYiP~9rLm?qkOKo@Tx{EL75 zVCn4B4@X2Fr&&GD4=9=+k4A1k^=mqg)jLNMceK#;4IzZIw&a z<0SwB5CaIf@$-RoM@J3rYUn<{Eq1+pcRnlB-Y;)mdTWqusc(Ja{!8Ej3MFGU-?2>3 zWmoUJB2L@ZYrA-~tT%kt$u+hb=@ckYVL{hw<)PjmF3;ETXcv#OahLqfK8$t#P#a1c 
z8~m$%-X|nv*xH^|#fV$;soVB?w2p7$GI4R#H9wVXiMT zypA3E$k(l_AHMaV0({NSeLwZ)@Zr(g8Rk@NZ$+g6q*ZubH$sMgK=@@j|Z@lO&mJ`=y>U5TEUb}GzVwI`b z<*ENgn;jOqNwbrbNB&o_9g;u0tOBJaD`J>!YDHvw0uKD3*g*l*a} z8}E7ZD(GhR;6YV4HCc@3o4s3!zE@zkPJ;w@27>=eKM2tLVDt5BC#6NZea_Er{Vc|>o%p+5(e^XQ#}wZJfwL(CzUpX_;ks(LxEL zn*2CO7bGcF6_0=s?)pZWd&kQOv^=(HJw`u@Cs3Jake)@MG)C!4 z>znTP&-bTiKfM>TFBpwdrz*W^!%(OtQgQ|B4k@6jbcu#gIVYo_LDF{X^Juzb4L&HJ zx~_Ma9p0aF%if2H8#L%^i8NIyi-*1@opp0Gx6X`tT51N_DP0XZPhN@<>3fbS37F07SobOzWnGK9e)I`HRf&Yw0or1P*yw!pbz!Xe(gRHa8k%W!_6O)D_SG7+&#fdB@ zAO&S|Uah>FWI9tt;YQ;1dZ)-{HisCQ(L0wS>SMD+@#MNt>8{o`X${`*y+8&TKuXdp z1SZ^tYZtc^-VI{SxMP5zB0*#6fq~bEE5~D!$hs5)Eei``%gB-Adhp5x*hV|gZDF`r z13XsqMx{4?6#>Vg6G1zm(iC&c>_;9Y1x{X6RggK%h)G?B>i{f$1_E=K1m@252wG={ z1&nOBiM8hvz-#D|Ad5)gG%(KeyiwN65CE$XZ46x&f{K_la9P?)rv?3z!+8sz<)QB# zjfmX0l2XW2M9@lU0vVxY5!MD!Vl*sd6(F(kRz-{y7(Gc;kfoHz6i7yl7_H`9C?;}- z45_AzQTooHQdwT2U`c}razG$p4FW_$0Ff9_3@70j#Uhls<-8VPif$S~Bn^a;fg-`t zj!s4CERGo<0onwRaBBgPl7LvPNKloeG(a?xvLMm-1*hz=Pz zl>lf7P*gIWZO2ZyCC21xMzV@fkY=10DNjmCmKeIUPaG{+0y9SE;#L7BqzIUdBI1Ir zPKyAbI@N>TbzSmZW^|@CDMDO199LuJMtrZXK7ROb#lonK9$P*=JpRtn$EzOx+pqqU zvHRzx`fj3+0NZ&ch>`cY?W)y0;Qg21i#PM?tsEYA*xux?dk@peD9WFog{NP?=rN9R zr=jUvlQcOrgGPT@?s_#XCf~_^_q|w`4?n$FzWnKm{_@#B+R(M!qFHTB6V&?;?t=c! zJ0JYohyQU9cy;#4Q~y7m4@G!z8CfTSeiODvZ3(E(>Nd-28W^S=r)d>O0G zH#X)}1hAfCVlMz3BqffF%mN}HAV~l~gjte+09XyV0)(6tkwqk<4WdRM%tbV5{M$eJ zJ_4g+%mgYDOA?s8bP4ys6bFbA5&~gDOi{#1LSR8aV1mR{3l>(`f{92X5ycB|L`|X% zGHMiJj4V+H#3W-PuK)|sU@-tBhe-*cRiKp!2zY^Lb*`IIw2tQkZuc$=KFZ0&GU@vQpdeZh!_3Kz5R=ISDq~cgPYh7j*j$^;ptiX}v9K2!9n0XOg9x$p z%K7WI-ETJ>{kn6+2@-%X2nYxOegi-N0ThBDBmoda1VjP(YXE{EfFyzdC_+&LArT-T zpa9@sQz(KW@C_zrP^uzmtQ<__x5_+w_~G(BWvi2+EL=Lsxl(C3-QE7q!0^NEe-7)- zKgsjGCg^U%*t*hTeM3p}xKKp8><}C_Z&^Fb62cfA|Up@&CVxw$`m zS>)w)xg--0Cn>iWr+LmWA9WJx2dhE3g${^I0|iPt&%0>w?zmLwiZ`QbNA;T(9!}4G zyb3p0^$#*VeV=_2xh(e4M*n(?L&6CP207d}WiWp=-E?`Yu3HQn+*y^XwoD2AEGGfF zSxOXs|C~StXZIM3?d}ffu*WE4^yfP)1D+DDj5-U$dusbj_UNiS7>b*lP2ZilED)ND zK8?vu-|5g-K1~#xcQynmPB(f5t7oyVODrmT?QjgfII^HEZIg(~uRUjee}L$SLs1pQ zG*|kTa@RgP=tf;Hb~UB3-ra_(*d7}->{1HasmT4ro+PXmi-QzrFZlmRd+2uD9H6z+ zgFS0e8@VJ4WThhf3YzOGRwRunu(rBWec769uv(gXRX#aV|8-K;tFJ$(j~_gEc(Q=k z3J~A`z99y%0|>m@-Y}{r)p0TKRNelGia5-Edu{?v+hJGtJ=i;+7omt3d78wS!+w8| zLn0&3dw0^|;e)F;sHTC9GcO(5KsgS9EG#;-F8DMk_5c;DfCaai7sA~cV8C~H#^$y*S9XEen0l2#ruc9{_nrP zwBP-WJ4eUgerK8=FOM_$e|d8a+b@6rzyDL2(3Ls;(ZgktPtx(5>;K`Z-Cx~YcIn;Y zZgKM7*~aMWjPiA(T%zt(qI;lowYu)bt*|xngG21h2bbaH-Q6awi}N~lF2%5YSQ)OD z8t38A`u+P?^vOwkt+krgnbSt%(c2ZDjl6Qx@0`f0A6~oR+qEP;kLTmH_4KGb_56AZ z!PEs1%@uCBPXYzQb=U>Y?~nCCsJDrIGy1e#BsD4AGeT=bwyoq18!W$VPP&75)}(j7 zNd6Gd7X0=flb$=6;_fjgS4ih)cNaK9*tmYJoH~843GG30xSEf|YXxbzO^!m);#Zk1 z^mIz~HgDs0>_VGo1-E$&c`*$Hs9=|G2cOeZJ3nss;{DpUtFDI%PbO-2rPv0SOXG|J zK~_!|3|vt?Tz#EXy4J-!^d{KNF69sVz`*bs@k4(m|FLv21Tb3cn1yFHmq<jAHpl3E) z2yHeOSOVBNKbV%auE{D}a=t^l0 zq27mE&I(87VWYyP9o#xD3J&=w`Ld!I^Onb=HpXXD?wl>DLp$=kR7vq0L)m&ORWlZo zjQ0xj3Hl*75+w_M9YO>$z>}<oFVro1}%bXUP-F_$2s)K5sCKjc7$$e7fpdt8Xzo zHPJgavh*5BYK6&m1Jz|!xlLyPv#E4lj>!e+x-HZm;WXn->pGx&nd*`rW!(3@D2|A; z)g8xBLE4B$yh*TyA&_S zetp}WFo#KS=IAhMOrsA7Z*z3JBvD{wEMQQAP8wFK(3C{3hx6+{`Sgtmo}ob1jJE4>8i$zQa| zY#9r$4WI(jdS(wE((uB&!KGFNBta4$B_^nI(7EXhqJ`2?V(S&ygv!JeM0#!o$&|Ss z6So|+3zSr!2{76|U`KG!an+)px)l?Aqd?v(>Q_#E-)lfJ}HqH zl8;*3LLEd~rL50GS4He0jnGL{91D|MrN|UTD6(IH#iM9`I=;B(?XKm{yNQAfvd-jQ z$wa(iy|399LtIO2wKhl)s9$wqO-KJFujWD;*`e z9SG7mR{$0Z028t=6SNv16EwgbN)V+`DPk9;7dC~jK_}k~g^-qM1d_pd5z;C{isT37 z*eh}*$ie{;N#z>A;)Kxa~RcxMr?tOOM$&k}u*F)}C64p9Nu!3a~b z42l>L$fYzyPk_qx0wSc5NhYW#P$VSD5K$!&U_}H<4|cGC?#A+`KKpZH3O|`_fSY5n%e1+0mp^^}i)_69*}i|x 
zl%z?2&{k&0)iG)qLGcQj4BO?r5rH>g}?i2 z9|$Bd3^{;RF;Q>;(Tq6~5gqk-fFT8Umz7%%~lNZw>r7Qh-qf)tqp2LU0} z8jL~)9FwFdmboB>@z~wX`P6Ee(S^bB3H#S$XEXUZ6^F##h)L9m;z=`;$x&H?*jihq42mW}V2>98 zhq3Rv{c0F?-W zB!QFw5r_~#0O1=52tx1;LS!t+s8COitE^m}ymkEc@7%NDbXo^OU4=pe?ouCJyBcra ze046PyUmC|RmckU2pA~Y7WIvS9Q1(#R)E4fyHn{b@lawYEP91eA(cX;d0RadU@wmu zY}uF!mDWV0bY7SFtmdeA@T5X1!fX`Y5zynfm2_j~Int4`Nn4Jule*f?-=q4$NqK#@ z+2ou0`uXgphWTO2B6oP1iQ2m|MKui{DU>R^9vI?bm>^)a5JSb-f~bCS-rKSL>PI1# z2kJ{D{wi;3NJY%AqdGC2g>-RMi??I`q!bT|JtsZh83B8<*P!D3S{n{ux-~OU0!+9* z)k>-lrV!(J^L&UFZw?GQ&?`3%{^}x55BSkCi)&4{ytyB)kL|$oU{V?pSyx|<$@HC8)L=Ty z3SZ_Bhc^1K3x_JnFEuZ@r)T=WpNq z#ijb#!|dL4KBCZUpQ(do`RLK)9Ll(UQeAqsd~QKsqj%ZCOc`2UTh7#1GrrR{Cp!y2 zWP%y2K?(?Y(l?4ik_dP;yStU$-3XXx!9(V>oDxL&-!kXt}`|-^{cWYI?Rextew0!UA_LD#O z;;O$@+Ui9)NHbE_F7~k)O+`TQ2sQ%;q@K&88KANYxIQj}@=Efpy{%k!|`#(4Q;nCfO={HsQU#8(UE&m0#xSG5k^yJrTx_Y+V z#q1rCv9t{K%g$vw8ESX6E~uWNUu{SK!mYPBJI(k+#+j)u81_hRC!cqGNMSP9d7ifG z-R|0RNE2Bt*?uvj%9)C?2uA3`G<{ZdJ^K+Nv^AhJ1Y_SZ7ZSp7un9r?J%(YQbeq=) z?W?WJj5&eq3Jm)u!Y0?fir%E7Nwx69pFgEy9yIK>QZ#W|%3e}wwwIZ^dj2Zo59c9= z`5lF8jb4uzZP@ODIt;%YimOnh%=ew#FU2S|RGPKHb)CbQWxbI46x4&-n2YW!zz2Y( zgr((YtsC;?A&j^A)~SU_crwAVJm_6WMK3p6T)gbNRu{7x7a&zRl?+Fau!<$A{&3-L zpDmpprjJ~i<)a{NmGNtYnKFye?op_kqEmUqgk414BWfBm)KntIS|Amjq{4PWHW6^= zF?YsNp52Fih$|k7TVVqUY3fo8u8&|yl&%Wh?spcNI&a*#bApo+M{k+KFm_3L7Gj!} z%)l@fc@|s>?Bk6|nRu1V9smPGvF~dcie2Ba??ejgls*#8>~CCLwLZ?^AxazBCtV~U zj%vc46V*yy$ayU^g#zB|=SHD%+#l6hU^iBW{-Fm|iFW&hw&B|H`2pplfF z8cA9Sr*5CrM3Xku3j$kkZ&qzkA=iei1)AJBJkCJs zw!y5NNy94BQXHmPM=u@1WHBF8Xk%;Lzx}bdzxD6F=guFD1uuu%jia-nsCK=YPoU@4L z3FF|iZK3C-QBipSa!E(eu|?ofv*=6%Q;W-Zr`@)HpaEV>uz5qS_eO+HD$=AISh3klysYjZz#B&HMMtw?DdfbA9#W z^S`=9O|GA1x@sB@Jh%`5{A8sNGv6RW7=(a?M-V6q10-G%Xrl&HPE@8+rb3FCI!(Gz z4S{TF2Ov4l0JQinNc6x6DbpEf8w2Z+0Z_ys02)sqR_X{g@Tl3TOp8?-6k#F2iMI(- zf>NQ*bR=R1V1bx`hbSOYW}1j%NWv_HtP~54DWXL2Bnb^hL{@+nGzuE)5VGFbq!b~d zPAExw1<-{y#!*oQs$>z38URz0g0aXYmSIRhK}dkLcrC62-fE5H^c$89(k$sEW>7!) zm5(y`QS-xL`R%8-uj#Yr&YH{cDbd<(`OVFGr|m3$e3;&*vK{j$fop{02bjOr7_RGc zfPTF;V9ti0zux|L(HDL+!GoeED1uLshuAkPgvE9Le~>Z-O=6O`%(Pm_rBBjfAcGb)|_vj zUB*8is;~d>v*F^CJTEkB0wpR5IT3)!kYW;yDJI|)A^9Z0%)*=m{x5^!N+fRrF3R4kB$1c9U@iEM$5AR`A$f+Rk2KccWc%2L)G>t;DES!`7iWI9BO zarCmke$^(o-|dHPI|}za00^T1h!6k?0#FnHU<3gG1Vj`8M1=pF00;mAfWS8p01z3z zK?nfC03wKh2;UGv_yz(AkhaDg7&P_Kqk~vX7w;b6{pkakosKhBE?2@xq!SyyxE;5{ zjlLeLuZ^?q&PHNZ$y0>3fRGvO3hhuSFib>=z7MF6n=cG>J2^nE@>wvmlsQxbJ4(HT z1UjQi3m97)1T5|V%%&wQ3(aP5)4lPMuE(AL*@|EmoiMB>lRH99BN2g0A5lV8)LL0p zRjN4Pj3zbWfWEtOGJ+@xU;CVnJIP32Sca$ar_QT0qn zIC%g7g>a0c`{F5s-K~y&F_z1%>kh`-4~wH$N2kvu#1UJ6duO*^*y_Ae?e%(o?R9O= zj*vIwvK5-2O2c}%G{!2J%s6Z^dbyl@I6e55HGe|W7yDPUFMDurjujJW(&%aOY~<3X zg^e^0A{ug|{O!4qK3U8a?caU>(Dn0hbCV|4%@^tV8vXh9rVLUqeux;eaxGMSAmJ0qGYR2f_&gb}to z$ohSw_13G{KRcZk_WpxB?!R7*?gn4enCutjb~TG#ynFP?h-Ons z_;ciyDjrO=buh=^{&z~*`WN?T&T2|w{p5`wE}mV*HEnwF>ADYT+ou>;p8d{U-GArX z2Ol-FSRozE4vvmbzWV@tK!d-yDjvQ2OEW+O1%K_?r|>_1{*USE)3n|^(Z$?W)~M-B zqrG3slNWFH+QJWJ<>hYbrs2zNl9#uGSv)*02)|M?C9KIy3T}OKwMo|}6kAGH`(vV7 zWq6pSQb()o{pI9h`O7!$UmU&cH|Ff=D*N>*@O!_0@#>9T-QfF+lq|bC;_lQ6ph4eG z%H#*NAMa+%rLE7FA#pbPz5PUHkM!fFdfu@M!&$WNQxcSCmf(JSO2L!jzMBEN{?OzYV`c9&=S?9?1(6G%r6ZJ}UxmpPsa zfL-N%_8FK|8XPwnZUyeTaSP@K!+VXS)ChN5N&D6Ai7Vcl>6DM1iIYz&tY*r#BW;z( z+P7{oiTnM&8&fOM`im@;eoA4WF4#r=ezxg6XYcKG&t|tG@iv|%j0_66}QPx zE_SZ*ZAR)|qa?o=|av5|tJIWB{Q-aO%cnnoNWRd$RSZ!c1(x)Ner zte1SxgXP`S#Z4Z3kPS~#Hl;K>?QTZ6V7j*>zoGv_q^|x#hYS&@M%ilXo-R?aAe? 
zjbz+hWlwXb^7@^^+KmI6wzL~YX3{yO)>y_gQ3Z{Hnuj8p8zJF7p*hV+s>ZwUx=q^V z4-0CQX4iShCS+A%s?@T(6)BuZ8HCFkYTDDm?VVHAq}?Q~M+ z^ZQmY5~`xHrBzXL8umbyD5IoTpe1ENUW`Di&+C1jNXwM(+@F~0+j(mDz{p9DGI|i* zX8|}8lyN0u!QLIWpRM-&t5>U$Q{vIcJ>pPj=X+Lr7VV)x;X**AXejTG6Lf2Y!obvC~J=xoi)KlQUW9^p+;phC3`^T)EGzw!~sYEM0o?os!8O%#K>k) zgl^0;ArtaoHj!AVc>*7?t3qzQj@o;KFcyVPaRwYa;$(D&Lc;a`NfDgH?4D+aK|kj_ z!+(CqZ@lxZdFpE0?Y28Oj*=+Cb|A_sQp5(fEJ1-CV!@szI}!;23s|5;0%T#xBG_)6 zbi3WvU0rpntLjeQ{igSO$N%(9=TQCov2nV1swSh$;Y=qzPF?_F-qXnLkPaa< z%9SPUoTNTFB_QkomBQR}3_c`MLX-mQgxE+!Vhi2~k6(ybZOUwEYd^_n{y+X>_<=q7 z=RdE0_4&olk20nkFQ9p8g&EnPLM)W@K5aY0JsF1p03ZNKL_t)^RWj4uP+>PW1MQ++ za&3f*wAK=DH8N>D0C1R$BAC0U1SzGx2XrY4MnyXfI$}W3rK2V{vTqg-mJQ=bX>PRM-2>u{L@**xI0EirkEZaqyyZ0XIt_v@| ze2G@=>CAw6D>sb1gAM`M69}?qpg0IKu^tFql}|EL4RYvlPd+zep)BewV@NC=AOQYWAL0}LK7u77&GMnj7mD8^tk|{84!XsB~doU!N(p*iP8qB!ZSb!vWt)c zM=flPQbrLWW`r!@l1EOgQ515*6hRRvr2$F=$2bg0q9jNG0xHU~j6j0nK?AgGMo~zV zG$C>&GZ9s}Z8i=)Vpk><1)34#$dcF+8Eq2+um@l=ij4^&$*$ADnJp73mtshOBGvAl zt(4Y3d_R2u!36Qi%Q@St$-ld@Ke-NaF@EM!k}Kmp0tk>qmCno`K83d*7YMfgCfPB- zSukeau1!V9Y;bK3y*10eqRxKTl=(kSrM_}4*{p)i$oSq-S`Tb2& z+0c4&hdbI{Y)2LJUKdW9!dFML9LP+enJ$;mJ^J2%m@R(LjPOZlYMM=)HBnD&<5I{i$Bme?Uk^$lme)}AmZ4Z|M$s+|2UMHxf}iUR?IO^`XQNs2QeTidLnTvH_L2Q|9x(8tZ-+;B7OF!%#-6ageL zfFKM40>2>$fFOX#Nq_|r1yKNy1VG?71ON~T5P(qt0R&OtJBR=%zzBkX0tDY-24ti{ zDVWWU=O-U7n)Kl4!_3A-{zR)rn@+IR*aB^N! z%uY&GE$d)PAA=EJD>O)?B+^I>gnF7wo*#|23OG1F?hh*S#bSayDMXRbe3DzGUoXNQ z&BM$zrI~E&$gF$Uit+l?z03EJtjb2V~4Gzy$dKAzKkfl?0{iT{gX_sf<7Q* z^GM;rttl0DX(y)r=7|lSec7MXUbW zUH{X#olZ{Q5C~rC$;8rm(y1m+=JZ^^P30h}6FTfB?qDfv3?(=V8?Ec>>t5oyYS;cj;w;(+F8k-~<@ z_Rg1;Qjh6)1N$~|X#Pw=bNg@4cgOK$CUp7DrH|{Ep&za}#Lb%b)J8M#CUt546n}K~ z$rsi8{bsL{o==+T+4=83%&XIP-anyok}Cjsw)xp#{1^Y@m-nA8^W^Po?q=~u;~y+P zLV!!!*KwoNoRe1LS_i9UF&%KX9+xKz4gQOO(}z@Q)qc76SFhWZavvYjIxJqfts46* zfAScHY0+DK^yEZdq;z`$j~BR=^3~qGKZ$|#&GQ2;*OLEEYWSX^M(;lb`0Unv*6xdf zA=3#T|G~*Ln#-^E!Hgrgyqz>p@155aWkLH(Glth+Z~A+8FCI=Fo5z`><$>+VQisje zb=&+l(&&z^f%vCu2Y|HagSM+BpkADBk z-;Ro6vUgB^(6XZRHFP>f_F>oi{IpO~%+c>RJ|fh}FkuwA19VK$$B#C8zZoL>m&Pw_ zpY!mIujjJJRG0EN5P3;DCg;(U$tD&}fS6~Oht76Al~wKW%}9Y)ArUf=917G*6q2E+ zwqnEfWnZ5cPu^A+uj>gZrz9M1OPYI96W1O_*>Avpc(!l?Ew+$yRxtquo%AIXz3Pgf zCs1ZIcD6o=Pm6lBnqb=4lx`hS0hP^B1Q>x!!Gs(vmxr@N!*D+opN9}=r>LLfJ!KVG zo{gDCQ{e8BlZU)u%!^z}j1bnN9yQ*pSse}<>?KbgO=SVr`L-Xj9;el;sHB}kdd^a# zpMtv4fHo54IB;<35LJ~CE8c2;ln|4=i4nbr#KkbUQRzy=nbHHR~#) zuq>@YRES9vfl&}4kzh)sQJDJuvmd{qX^VB&&>>8$ z!V8-v_I#A7?vF9kGePtBFS-t|{>+)`bb<Z&<1;gCj`q(zYdCONUT z5`t44jlB=RBa3C0z>zc~5bJ`Xo(z%-Eyn9DBDkbQ2%|pm9VMSp77-<8R$_`iWO-?| z?8cr&DAajoA2;3I_OR=!JvbGz_!#I6FI7hwV zeZuwCVdo=pgF>NXKuCTKKqeJIB>7QL=h@6c^cWNh*%*`JVUol_*ae`N*cz2rg#k{Q zIdY0YNP!qYqfI5ra~jbg7ObLHAv3^&(uEi{2d_X(GDg{>YR=Zcr|YyIuYN3q!5|Cd zpqT&|86eZSLN%sC!pI&WvazK>gBsB);uIl<0MZg@v_@wr$qphCKmbIEasb9;8EFDI%Y=F3>YII3K9roo`Y_3!pt60 z82dXW(VzvDb0PYmkd$ZzBmzlwf0RnXKg>i3T2gXQBm{hhoB;2b2z==qR>BQZnK zguF1`)w?^L%&nqcZKAtd{cH~#8y_|4V?-KZBhYD6*7|;>texz|ZpTQjRf1urj~D!Q zL9SSyPifiOyY>8UPC|F|y#>7WXtgcV&-uY2Gi1M!|LYsN-M7Z{vC;rJ5J5(en~YG2 zfB+&O11CsHL;^5N66Rry;sr%0CFB$tA_xj1AsPUD=i?_bYOP2Y40%jJ!~uek2GOE3 zscOJdu@H`%Zi>LPAPePzITC^bkW* zz*-}a0fnqZ5!4KbfFdNss1zWOI1%T8qi6(0h=L{wAw(wn0Z?c}6BQ+2-hHPLQg=E3Qn_N3>}FAIyM4Ir zI+Ue$@0cnq->b7oQQx3|Y;7*!^W%UO^grqNa*Q9LKPl$>3VT0JAxx^pHoMmrsUbR2 z$TjW9R3C=ZI@_N<`3WDg7r%URZR4>y>MtI> zK_h3MDJ56St4We!`~(?37^l7P-roo*IL<3bakY1pbhDvRufLkzSbvtOBRlS9yOqvi z(j2evhBp^?*%IsdG^)HORz-zY;r>o57uGdM*P4T9J6@ph-NIQ;H0rHDGwNg>weaz$ z-~05BzJLG0{1T^|2z`R=8vfN5@alg(`K7S_Y`CW2`L&6a>(#V8*7>Q!pOO}sk(h|NH8~LQUOj~)l+xKsku;PzkY2w;@R)sTiSf) 
zo;WhTH_f6hXXfq~UwX&WPu4o9Zs1x!K|MW~!vw^Zlyv#&gQrD$ajEXM*AN{;NhFgR ze^}Mu;`}bfGGnTiPbzYgX5V8c_xLrI}NHfq^yD_eKYBLyF7aQuBNN&6DD*Z+08Mx)@rVlscc^`*@k_ z(|3-Jv-4#&d6E9vzkPA~?7u$sZ|VD%rblz`+kO+>;$#_}+7H|LnPl;7b^EFgI8VjJ z+^9BNI7Ybzl*?Q-wr<#e8+Y`4gg3qUM~`qfIa%*s9Jf85zTNq*#6jgJO2hU_?d|w? zO1UQg?S48nxnF;N2{ZMeMt@(j@YZaoi^Yfg^u_k}Tj;6boM5irs;jxj>;1k*raQ{!Iz*8vAx>Z%Y}e z>u*X37}Bmgw5K-ltl*qCdmuMe!&nyyAS3h`WK1`ARoPt{;-u6{j|YHv=;%(l0mC(f z{LpFU1*<~|y_zg4!Cq0q`7rL3+hcy;j&7@BlvQt`2|(S2az#p+)HRbzRFHdiUmkG% z9xcAkNq>83UD~`|)8~V3H^XCeZLv$??Y2pJ49LDway|0U_5*#cX z9iW_>gHCd21u$>OVLf(nu7nNXe!K~ZnhAAou9GkAbszo6rsAhr*6W5=ll<7K%Pi+Lr+#(kJSX)1^6eiU zHA<68pe!I6L9&@OoJIkHq(HO9ji7yOeq;cL13IeV!| zzMF&*V*n-TXNN#=)n zRJ|2m9IZ#eZVi)a60I&~4=i}zZ>VYfZId7O{q+RC_W>^ z93ZMX&?f7J$b?)=bRjvh8KWvpfkXzG!6bsJ4_d}v6-bojqr2_=LBY%bO0o>B5+_6^ z&^{&xD98fw%(6?;a!)NnD$N`vaX*ZL7^O67P^@X%Ahhu|0IHY?3W}gHq7+MD5a=`$ zsM2djj;L_#)mRPpiHpm_mn32UlaQnwFt{ubkX01JphIkCb8o1Bz3l;0kz0mhB(%^~ z;6_mnfILXdvuQ!Lr%MJlP$&QvPN2k128hXx(_*?@9tVG1bocL{AF2nq+y3OzzwtMC z*F}n@N)(HPB*c9W2}moD6b2T^`*T3?MGpdm@E!5!A*QfFG$6Ghu?1NawkahDgD6e5 z%tVAmBZC?fX^Bk&L6R&a&|3h-KHG(aar6}_-JBB8hw;B0D1N*7?+AFXZxW=AbpV)( zQJ|dK3&zAG!obEx%x4NGMmmvBC1N50gal#$tyRE~fFl4q5fDX^fI(N93JngyC6+Nn zMkHi}->?7?V^5q~f&iJ$3RR+t(Ts@5{W# zBos#)0;6`Q0R)u*0U{}_0MKeBsFw|XH*dTYE;}CT+`i{03A%T^= z2XpIxoYJ?ZSX%qQFk~eeP=k+~xGT_2%D1$AYxF)HZu2-UZqWx#A+rtj;vD)&@~25Cy~lsDPDth`wiZ zq==Fk6O71&iCH8ij4=pv0$>pUeE$bebkN2kLVfUQCj6c9l{CKALXso?Y;LSE;ub9I#@4|WdgzpG|03rY)2#5fHz;B3v0006Yf&c;nf{1_!{05_dXcB`cK!gC8 zhf0Y>bV`PZ*ir%zLINPENamTvY1vqn)Y)&%aC&;4&+<~4WR_IN04z=+UhVIO{!R}@ z?MsA!SpWhn1Tm9S1@j{77xplpW%l>(pOx{rQ(0~){pu@w@%*c!?xG2!i&uGDV0kVD zPG7t>o4mDwE}(JMVGVf)Sl`9F_HuU#;dXG^xxsxcqn)Q*$o5B~7s2k-sx z@xMGWKmRHGXE$B_wEtq$^X>osm)mIhwphWc<1ZYS!OPLR=gY^9{^hghz3lIpQYR9)1H8sIF<{E>$8NFS-{ar5wXdeg>!sLkBc_^Z35n{4rD zu0vjki^l~huAUQ@2i26&G$!jgL4xC10e#$lHQpgjf>KW}XC&ic9A>DeeggL#-~i&l zXV6vceHt=-tN;?j&3YsLmgy`bEe!`wbm1^`7xQHL=jUVfrf9a~6IE2o{P+Vs+namc zcGro(?p5rT+4%=He+6me&~`gzWQxa!vSYC}^&7r>k!Y!M2W;LdH;1o;2%sEh zJZk#%&6KKL-3(WM6MX#jXDg3G9@anjioH>$@&C_N>PJ#*|de z^>o|(JFhCWx)+CjILLNcE#tF3-gcWm-oozwvL|b;mgA^YO#3m#cXts=QdzOmDw@N- zo;=()g}(W#sXp)Tm$1Lr@DOn?#j@NBbV3izw>2H@`VR*B6(rWyvkuYd^VuiI>9!5w zLNt)%Ii&3~v`z78GrYpV=kuXB-MAM{(PEktMND*bF+a+}Ba;8^{ifLJG7NSJ>ZYmg z?C=lQvWjlc<+?WI+|^z4q)|yUY5%7c=Wq58q?+Yf9Ji}$4-`DB6hJ>2{s<2Pel zy$SQb+Z?BK$*{X^8*AtY-FocCy861D{79R0*L}HuZpPZo(*4P7TOIMC#nrxeknol_ z->kQm;=~kBXZZj0Sgi5|C9xn903G$wN4~=FXcf7kG2;|OPbPEeisX{@hq(A=Gw1Z< zY+o)&>(i<@?APNi@l@Oa%RST|=^`FFNa4`$rO}bPGEyt!2FfYdnso*a>O{pt>#2pi zbkjzU{^>pQd{=hE;goqKFVW((O87wB3o{BaAa*MXdD>|#kd?_x)8T98iEHijqqBQ* z*KPM1f0!$mCUKM)FcjA7n>hi?LSnl>>4I(vAYssEG+W&#-=6KW{=+!~O_d)gPtg~> z8cK?hbr+z_f$d4xZmnvSJe>ds10ocZpQw(qd%Yj4cu?ir!S09ldFb=A8oGp@ZJs5* z?L-33m4ka@zi}oSTNH;rrigZ~(3vs{7mgz)0pAb0$Wnt@J38mPHCNMYldw4uF7-$r z$3Ycyps0F>S!^}Hol3svA#n|0C{94VXo&mJJ9DN$A#6$6A|m8LP`hm=-OqQE`+w&j z-dp~`e}4Dn;o@KZvzPl%?^!$W_1!R%c*)MZrgW#4~YqU#JAn2*h+Wp|tYF|_e zn+0V#DU0oPOe&4}67ZV)vs zDNEJJg!X`yA<&TK)+$n)B2n;|6e)#(!6K#zg6Mjb(ocwD%m?-~BxD~mK)XoyC`>Iz3~RQffr1KsY_#q6YZQ*z#mMA57(@laI3qBW)&P$u!$4tQB0l@JnG8!UK z=tOu#%?K(XCIv*8GXS7MKt)8#0E*^9K&uEgsvg6oJWJ>>EHpMbpb9l8T7GnXwm1r} zHs8MZ>`w7?`<8?p7yD}7)gk2L*^X*I79Fr$wUqp&ao;NSj}w`~AV5(A@4Od!OZBsyS<0FgaLCSpLr zL`oSl2q`5OFhu|XM&yJ-Z*T$t03ZNKL_t(#lpM7bg)K;#AxcjPt;&K{eL^}IAr$Dv z4;)a6Oy@>x1T=yONhEMgATmH?_6#W)1xS{G1Q3*UB%BxnBMKT;%qikT=&6BN6t@Nj z$&IQ_$q2S*3NghotwRLYx;_T)m3WN+B)|Xy00Q3;_znUHfCzkt0Yq2cJH-5Gd*##6tBBMS=P6AH5wnbBhX7#)t51M3>@Kkgtl^0;Aq(rbg6OxR&@X1!E&C%I$)$eBROR>eR{y?ED>x5Bu zT31>(?YJA=ZfcvVIP!*lmP}dXr^@uWcy5#04bgxUpjsulJgt?@*UvVwyOBCE)pe2O 
z7JQxy7IDh`oZfNfqhwyG>}wOwLwL{^Z9+51$o)6_J3-r#fI9@e5V3CiE#RQ9Rn-2? z$F>slTiynZ`8jkU`{k>xiV&Vr%CF)gA=eXD1j9lb2wDDS($nTOw%KuGe!Di;rulN; z$M$dlT~=9^c6iweE@x8Iyna#lt-ar1libzY&a0}rIHTzH+iT##R|%@+__x`-3dO$) z?#K^Kr3$6wuJ|WDKKOq5KP=(vzy8I4blJzV`#o!iT zkExoZ-!Ih_QG0vbg9Y~-vl`124zH>9(p|^x@$tRKGupnsOR`mBR=0B1!5JwX+lxWL zZs7RM+6eF>cbMLfLdsnRmAlHy@QWzn>z}Qo{n0<(KKiVLOx1W8x zM?LtYn|J3bUq1ezyZxV@Z*E`yS>;7*Mp^#O%#8P%7acE0q^zJR)Sk+e1}~gsC}euV z##qU?Zu>escxjdNYd_rW?ha6#KAFY41BCVA*`6Lw)LXOq+0FuyyjIZtSqvbX`^~YpVV-CuQmiy1a+OG#=A*3d1-ULTG$LA#ootb(>9Zj7e2jI-(0;`MoFo+)rrj=v~j9v~I5~&i7r$@!Oz>wDKyP&5G-tKS(i!tYBvuRIkTb=6T>% zxjgTe{_;=%%{ci{mS@I_X}hcoTYEbW=eE`5hfew3Cd$;mcZhf5oY%jCcodyBR_Rjg z5xa|L1O8qOGiz@krq;2p-mBwlH}s}B9%VF<`E(k5TW)Wr|e!xIGKphC`din;_G9%KF7LH15 zbQs>Wv(qQLobp%3mWdRHVay&DW|?@?g}xXE$@0+>4Mef66fl7Bo>f`NRNL4al|!cM zTyixW$}oJhh&)*E#*{6mcFSs}jPiDHQ4I*Xm= zNVF`gk*`Nc;r7YH>Yx6D<$XH+?_asiKmX~sv!mt21a;UCGBJ!JFd9%3!J7q0-(s)D zp(F$C%kjAA&HEo!nsY>L@;qUpG1|OJ3PlP7o#RAhW?>w}Uw_d;H?;cOFnPF~WlAedv6c=VCzx9fXxnWog6!McpTHJ3;X-u`PV12p2{2!36Uf8rj;7pZD8!zoLfUxCdz{W*e6O5E8mNxK7lZV#9UX& zdd?~Wf)q&rY_B55s79kyvM9j>QRCRD#JZ?0f-V@C z^u9chXX-U@z>q@Dz7Xi9F~S+R48Vb6P@uhl>=1w>br~onS)ays+!?i)yw1JI{bHN)TXSEE)XCwU$7|+iyi2&)z6cN9O(ZAlFx`*9rWp zM*)y%08P0B6i`$Gq~5Ioz@!Qwfp`Evb_9@RB>?6+D}WAh*U4x@ZP|D;bSkQ-3s64v z-uGE~QdCPd?0wfdgW4zyASm>m5)**06S3sGQVAyV5g$Ykz_*A91{FXeLrB`_ zOy?xxASO}s;ZY662tGopps28DqrV+rlTtCeMDbG8XiA7(h-)XxAFKw{nJ5IxFclab z>6Kzc)(j-fNyIB9m`emmEn!kZ5olC^O3DNTqzSYpGNup#P8~o>K_oCEwY1&QnAR0` z4I!FXE5swa%+1l(v^VNmhV4`N51#nHmCtX($*;C0hHk-cR?SDOjR&ZR8<>}+u zm5I9zQ5t{g0aZabrGjyPtmf1+IEr&iuQjpvmlD?BIp=kE%E$ zf9=Ds61%(Uy0305q_R0WHuWS9-svkmUD?3hUT|j3C4=hr}X-Bqx}n4{2mV&Qu1fD3F##%2e7i+_4Y^!3ZEgED|`P!K~l{ zgLVzXhe$LtcSh~nNZSJkfdC-Dh+w8RHzpGxN&v_)MjoP2bdJF%wlP%#2$FyWQ4|FQ z!XgP6M38L+g(evjlw!$_0m{s9M#K#4yO8`CLx>#%3Mc^soK$KF$wP>cSP%dKKmbGl z0T2WLgjoOt34jm;K;Sz>00-@StM0=NoM@H6{}na&lS%jG!(RD1>t(Q0Sl(cli>OJ#CuGPISJ{ zC+pak{jKeY*_{Z&Y%yD^T=ien*+!Y6$TrJZZe)4|A#?lLk+IE#-5pcs zwlyTWINWmF(?lt0@&`&alhKn82YsYqk}t=6?#d3lPc09F>-bF7A5MywH=ECXajjDv zt}}S#$A41U+m9c86T^6Q`T2;;e)D_1@rE`BK@!`<4!1WO{{OjxvtM7+>p;kBt#|sS z-@K2$^BXzgQAqTKr^ar;Z~2Q`KlWJ?as~C51qSy zSbp>YF26F%*WG^Dt=Ii()+~x<=loCxchTzYxLcN^Gpij)^8Pu1UhQwY1nslUMAWde ziV6pBkKskw{qz8J9rUc6wMDgIfQ+l@-&*olmv083scunFk=M!Gw93?KxzhR|YmZr@ zA`aaoGCiE#l%TK1nP^a2lkZ&^#-=nHs5rF>7J)}n(P@K_OQ(06;n3a9X0=N5u?m1a z5;^@~KApgqIp5wa|LKV`XEb&RccGx}i*WU-fA;9jrJ7&NKjw@N{cLE`b)7!251!HV zyxgz`d2eCi=?d`ZW;$s%-PJxqw-3|eX@C6u%-kI*51WTxvRb0yb?^6kD^4f+WZsf4 zR9#ISHA=ZPiX3kIn6Xy43pzzn%ZonfcFSj{@0>2I`b2Ki-PrzYD0H&2EFKSv=~>_0 z7Tw2gOTGnFRc2;7UBpA#CQy?@<8hxu*oifO!&XfB=%k#){Mo%^f6vRR^NP#ov`gLL zFJD7ieHrcYzAm+b%Bg(Km@h&S)gdMuz68nV#rf~P`QQEWJMR<^&Jk}q`0Q$P{VK$p zPrf?zc=vJ)lZTBOC7r+SVoF_?C%3nyD$alPY_hJGd^p_qaS`w4#hdFR7l)IZ_P*&o z_ka6$`}&&?=PPwK&Zuh2>25g0Fg|QWt51IA=ndYfwr!q0%`|Ra z$LqqM60ZMT-@PHvj=_CG`Kdj6m+CvC)BcNgNc8N~Vt(6hzkaz$Q>q>f#h{g*^^NIG z^Z=_gQ8!~5U-!jQEu3-F2Y)_{?;XB4rRt*@oyN(@s`}_MPuri>)#^d5Q5uS>15$>n z!W~uyb}z!A?n?#JZ_T^QPQFZQcME!xo3Q!pSyt(BDd%5|-Qx7@`?LDRzaIFVc6?-? z{644JzEo@ej?Q~Gc6-<)Fre*mf6ys>+#Sl})3N|)Gjva@5aDFhcpfPYU-`a?Xv*v! 
zH}S%=-~fjx95OuU&1JE<<-_t2w@34R+|ECHP`-UNvGm$LSgg}Y=>GEVP93ML9>2Ra z2eaNPnw;k0fcuniuhhITrsaED(CjVje$?E{6{Wsb$0hJ>Ky~D7ptw=v;?oF^DxS>G zzUaos=5jhMcbV@G{b7W&r1-dy`Rqw2QT4npSil^CO{<%%P7Izj&4$z1eYsEOn?zf; zyc_AbhA|JFg(65dZ12#n1f4AxRDe-v51Q&%UYl?@GS!7PNU1Q5!xE(PAt)Zv6pefTa~pdED;*SL?piJBi0f0WU}*d zi2HzX=W*`l$F#i~qsehrSZavA25dR$ihS2rOd+iZ+Xm82ovSu6_(pTNsXZkDHI))eCc;o!8akC&T!WcZ>c+zj47>mR zqiuasqJHE1-+xwk^W01O&d7O#FLabDWKVMdrUXgt7^TUx_V9z-*B^g9eAf$2q+;Hc zkbHNwgQ7q;3@YcmqHA9r$Fs&=jw&kOR4VSHkM@xwu(B`eTB}m>UDqQRjZJ`C1|Bm} zmLh068d23$E|gFOiFfw^xS1E~pj|s=$4n>HcZu%2x=U08p-P(osv4D=l)hA^{d#1} zZGn<=N?bmwDcjS5BZMdx837Wx%wWAF>rAZcMN z>Pn#Ze8->xP@)0Bh?y`OH7Z^xDM|VOGO7`P)M$txA=9d!)S9-t9b?>KgTzW_$N(aW zD*)EXi^MiZVF^L0Dx)>31R_@`LTPtdFqO2A41p5`10!Hbq96rrMMwl720o0s)MW)j z&z@uIJSQ|YBWcM7(KYiDVry<9Yd`|eW0w<<&7d0M0-2cvEOH`d`B*p%b-@16e%M1_fz^ zJPKu> zOhzn%LX!lNAOHdrurmO;SO?6Zi#Y_OUr-9fK%K-0G6^Y#swj;Zb{_R85keZG;L!O| zatwy=>aBVL^EX$%{pQs*^X2_lyWzY`vfKXzVF&3b$N91< z-+%Dm&greY{$F18|GmhvHgCmZ{^H`r`1ME6&*Z^>^Pe_P-ktpA_1&MH|JyR{?mM&Z z7H?E<{`zmd)1QpXaWje4J-qYvFQxqWeDLdoKZE$jw-#q_Rr&zkPhWpJY(6*ptUo(~ z$hvB=8j&aF%}46ttUvU-Pw!y7^8OINAnF_Q=G1;e3Hb1GYi`zq9u!X7}ZQjUZ~fJ!81AS)V$;^476=zvHlhFB><>jX3+Ta~ncq|x&j05T^65r&v^ z#*DcDC=_O+A^{|wP$#Vjt5Tyj4wQ+QDR(qNW{3$w$|;Q$MIgKg zrtNN*4?o$p8pTdZ)Fw-SF;WxhktIK)TTs*$&w*`aZ;T$sAUvWfqN%+LHFi@Xg~wA1 zwsc($X;(?F7?snes5`F?n_DV|`tcIw=(r{jsBSH#(4E0p*|XNnTfT;PAE>TPa{{rl z@?k@V6@0wQaHTKjLYRj^=x}^8HEwnXF~@N`zmCpUdV$&<@0-dGLpbEe)s;Z%^SaUclr{yAIm-;huyD_`cmdIb5=S)I(MGWelcs=OqT^ z8dT)g2{Il|b&1(t#XBwhHcZEWhu4Yg+wnBTtqZq?o1Ao-uBxhF&q`#`y|~J9m_MEa z7u!xzA;mMBHXObVlZqxr*2VqaQn6;t0ZtmsZ=Bp^D6Ul5ZP9$L%E?G>4AoM@Pc@t8kcsx@d#A>Ft;a zHf-J~W;4`Zn@+sn-yf<%Sy;+xkWp_AT`|F_+F!s^G47X2qf$}m!^;q&~ z4<@Ga@#CqnP5GcIUUfF><-3pQKxN+hjtN$GaUZ>;c?@un)~m*9M-e^k)F@L zGd1mP8H4(2lik^c875VsLYQpGziT5ZhO(TY$!W9ZygrP=%gH3}DW~qVwf44M#GAi3 zIWn|#bZHs#6x)K9Kh}D+otOTj!%+%@?zVT2tVAe<@cAn8pg^MFrYda~5oEyqNU!i)H z(PzHd$K%ke509=kc6FE<-W?OSLr!5cR@Ln6Snpn6Zlfw&JWQ8I`qdHitN;Ay&nAeqA;@q zFHAfY=|#L5ofr@*;mAwq_Z>}w7%1|D@=nFDF~96oJZ-?(&Ju~X2o(c44(kJm0;TX&ww>(RYmthqev03R)GhJ*z8s>U1dg14qB! z(P+`*cHS$-X+Rdp0^O{bfzi4^FdbrfFbcOjeTjLBnXp+}inbmk%}q8*ZoSa7r)F8T z$2Na@FA06A63~Q1@ze|0W>7*M3ldHcQq~D+UX{d&QL>MbQPecZpz@$F6GGN00HQaJ ztSFnKhb&x=3?Mln8bunjPXHLHB+iTALrFs+5*$bhV+2&jLk{bS4Q*qS8VUgCIm8 zW^iHD0lS5Zb!9b>>*xT`E5k`8A0&>kp<2^|b5vm*i5WXbFw{N^nPHJpl%NKY(XfD0 z9FbU}hD_i9N~EBuC!(l95KUEJ20Ic0?el$t(YP5Yry@vJDaPFOS&B{d=x+#x6rjzKBSM2MuQ%m4yW0dQhrHrk=8 zSj4abAW1`pn3I)MfQn!`3qnRBL0|z8EvT3(K&gkFPW?XSeh@-rok2{jNlKtV)FWbY zqrsfC$<{c*GS`tH98!uA5kNQqKqkYY>Z&SEA7J|S>c48>{h=M&F(6|?ADnwN{NyJ6 zt3;)AGg4%*)w~BQdiCz9JTIEd?evfKt*Za#>@Li|#+x?(=nVHyR{z{D{|S$`N4H1W0)ZU1slzmPS z5e!&Dj5)`|2o$R%3P=oU1O>G!NL#I;Y;?Ay)+qO3RBgA5EIx70nHd?ih#;T<5eg|n zVBpA-82|+S9s+;>fcydg2_S#~2*Te(06+jjVnB9CQn;dQ^xT@-Euc)9P8Cfi-`KzR z*8Q_x`RaxLF{{HT%joBY-qq?7sTzuRYOPkqm@KyaedsP?O=&gE5Y4b?k;s-tS2VH= z5>Yw$hf7|Z{_6FkVhDP@Endd8_2=XbD3~G;>PVPUHMP}b8AlG?R*-{6$ZC>6M-`5V zXS0bqs*8fM5JwFGCmErNdnhK`Xvot&E#*ky55$vW*p*S zzs1tRbe5TtH^w%J001BWNkl^R9M;Ehe5=)AON}N9VO)k7xm7kR>)@C(1H1|n=Py@9u3!$ z(x%c`gQ2q;m_mPm8vUu6Y>S6pbuP0`U!=P=z;u(8kSXU%=h@p%I$Lapj>a8=9UP3= zy=6&N<9Py|A3en6>%Qyl(y8Y31hFo{*MD~aQpfWZZ(#VzZT-F8y<7Q@89KlDcA3)C zN4p_$-g|oc*u335$-Gpw{ULuLrVO>RdD6@TZ+3x7@8N?^^lZq+{D#Gr@9%?@`0qY`?v<01to(NWT-2LDlvWNO?U{flobpMUSWFtmTVKL6px=iA#*7(Z=b-hsF3v^ICIZ~u8? 
zKZl367q{(_scyi==Z6%FCtvM%*4)9-n`i5MH|9gQm@k+;WQMG<_OrM3TR)nfka_j} z<8=9@Pw3uM`6cB8o$oa+20b}`l(QXrxWnRCS9bgrdwsE)LPs}^GoFi%^5cg?)gL`b z>HoS4ZYC4E`1X>*xch1B`c8wJp4D9(D;QvU-`UrSw|eLCpsT}S|I0aD#`{}SE*wG7n%CziT!v@W{Y4)F zVp4Lpb*>m+FUbv>kS|pAp#5?Z4=*;;_>5As>s``wq z{*C?X+xzVgyNB~;$2t#cGB1P=`_}vBq(DEHIH>s)Yzq#3KU1t9@!j2F%!NbO*q+p^ zs+Zhd4O|T{_32Xp&7{k;H9%^PCLIgJaq4SzK%(K?loL}JINzXUbd3FxOI@7%Y-T;= zBVjFt&c;tI(=d!=7;vcS-0JBK(X*Zql#zrp2h0pOqyrl5%xp*p5M8kHvw?f1_DYLI z-PdIpyiHIV=<^{6PVC%|5fGN-v!zgWzeMd=4QAe%D-%fmw^vPjd zr>Lv)sEdaj#V!gZuLZkt_n6TOCNe@&mBR$4kyP5ZounX)dv7aLQ>2cNRMs`;U4;;J z?ZwXxC$#(Mbqu%@Rl{}U&d?b__CTY&7{%~rV(W+wB9|sdj5(7^M6r_DWYrjOIcFBh z2vw;tlS7?}_m zfE*_T74=-HP}os<){?iQ6JX{M4QU75Wy#KH5*e8jB8pH}iYW!31(mBaCyL3K#D=pN z2GF1ch{y^gfYhK$NQ0mT1nLkl_a=%^%G!*B4a1r^h@P)BN)ZAfM8yI|k(p4AB!k3+ zz(Y%boM;qf^9;y~J(z?(r3^%fY4BSXf@X#&W*mVaxKfF=02HF>`##4u8EsK%Vjw}u zk}^;OM8It#!8GR@D7&W5Tx>~eh?6mSpbcZ8QlU!?%1O>R3hL2Sv24{ktAifO9BtA& z!qlsECO@ejS-YY*jFMDWQPkP^pdqd|(RzF~g;k(2qb^VzLtIb{B0X3_LEENGd#{L+@(ysco`Fp~VNhCupb%LDieLbYy$nHm z1P!1Rkfp*BP+=1a%p4=)U6&bsJ8B=?qQ<5o&oEF>neh+>IZ}z3avw#=dQ_vp1Yk~* zAxBhbDysluN)ecXNCrS*2Vg5PiV0OFmsyY%Ax7;1Gi6O#iIS|DK{Zk$ zs2pRIl!C#r&BWw!#gnSB1joezHTbn`2Z(kx7GcGcH8<6+RR71g`h z2fJUdo71b=M_czpt96y0m3mT?G^h!Ow+0T4v^e<1)M7&J|JjA*_ye>}IR?bH778!;FExc#8QNB;YN{N~dishocV zpYIkq{`kG0PGwDtyd>q#t7p6RFW|30A?lBH#lBhJ23W&C7 zinIS@Msj+15yoz_+{t7x9pp9)fShVXutqDOSb!likl2h$P(+e4`iZJFsJNhPiwY+W z3m}i>kOT&ZB9#}-I6xOAf-aR(2$kX4@`x92GiH^XYe1LoPy>(G=1YONmuZ6J7;QW) zPgVKIkJqxhScmGco|#hI!?YqO-vJrH|H*hk(Z4fs2(xz5h)(*Ig?^XkhfE7PA}VpX zpF?lf_SIfr4zH!Vk=c=}9yZOTX7-1uC^RPzwklO8@;>enak#a&$7j!V}MuxmY( zDOsps)?l+T@~ComcZc2OE2TAYb#uHsoE6g_&!=DB+??(@b+-K0(wwXweWF*x`m4N8 zIexOZ3EI`WIY|5#x+MLJ(2L54GlV!dgIB->+r8hu>ei=j@=z~s+HRNkW%X2JGsG`4 z;j^Lvlo*SQIhmGo?8drS&5Fgi522T~-j_vodAkFY3QN{<_eN2-v(tX;yDQ(0X<}Fv zc3CR0xsBR%QM3#TGOJ&wi|4 zNp}-2^kCLKo}F%;G*_>W;Ry1A9^e1@7jpDkJ$Ofk739tO#g3+XTgh5ALDFz_iK_zZ z$)Fi%@qt;zhwEH_@1Hz*Y(PK|5dOUuAo%#NZ^Xa)QC^e(#WlvWL* zE@so2Qe?msXN^wv+ksL|s871LRu#8PMfFvaPyFSFaq$PW=KAcHF7NKPU&Pb-%QipI z*KY;&XDxtpqRRiWo6H}5)lJkF!QG}ed{Vgh8pi46l_q@WgQvH<_Lj#uXs3qyO zvUaxV;+I3#3-wT%m$ZuR_6g%Fr*~#}axJ5mz&K`@K>NlKeRRl1{p!0XSMBX9*54UP zp_(iERh-?GU2_Fns1ssp;rSa>+J-b3i z-10_Gn93uV#cn(DP(EB|?X=O`zpl3SiUo@&t7gm+Xf>bSC6if-+ijF8GD7}Bcv&Q? zS1~5FI~c~l-P03Soy@{*NE)scPlF{RrntYqEAwu*JlDIGPA;b-KKd(k4PTtHOoX{} z)YLyuuHAqBiFW@ORl@kOo_;?}F#Tv_%fsBjtD!jPNuhu?8&y?U!>I18tLq7Lf9tPh zkjOA|OR*w26y2;`z)I6ax9d0gF6hSIg~@Vl<-_en8=4Zl%2$NPN*2#=&eu!$LA89m zZ&6wn^lDWZt2Zoc|sHyD%_YAmHpdQd_9b*72HkhhLka`k@oA5W6*<- z&WI_E%We-0VkNpOns>4SNImA}A%TNpkg| z%8`$E!B_KMFHed)!m!a&t{08ZyY8HI?B_s^`7 z7*KUvBqNnEQDZ`^t~_hm8f9FmZ9zkar1UgOh!`MaW&|Y@0By)89fG1VmP*uE2nZ`; z?1bVNl@+j{BM>Re(q{H(p9(?MZAJvvnIeP$IgY{rmCAq^$Wh8FMNo{Wm<*uC%1|Ck z4-s4hF(`;pMupIaE{JkbSgF*WmtXzL1LDaMnxG_>PHc75P&RC z3J4<|GHO7r1L&w)*!2=QkrPy!JzyAhgIE>>h^eBX&v27PrHoOVLTd#a0*DAzj53cv zkb*D(@GOI!V&oANqH0Gu!bX9i&i0)r$ZL1#$W zDj6NX!_dYa1vD3$jX)NNC|IDjMk5-IL@A332@(Q`CSU@NDn<;cm*^pZU`!&M7&2!; z1V;Gx1pXf+K!5@W2!JdAA^?I204M_g4kG*l3xEJ1f*^n(2>>F12p5Gd&gPWO`_o6! 
zH{bvE^f!L{cgyr&eCu7P=L7r*;cyR4fMN>Mzx|*8$Nx>8|Lh0HaJgJgL(vQzka$R# zb6+qhEgF3lw8(6j0C6xaBWtVyiptXIYLc?)Hwi*+>Zl3{zR0~&I=L!?*qvoTNgASp zX=R(E85gxYOB&0^KP!KB6|>YLDypjt+UPIdL|LJQw4=9fi#P595mHYFslJm zLH6M^`;tEAj0a^I742a+QvWVKpTG*Moi*v@X1KZEriJXExcRVnx~~;^tHo!$xL+%3 z>3AtPZyJw8aNl4)_%kn!(_f9at^Dx{CjS2Qm2cM=d(%63KH*1B{)Xk%#a`{Vsdz&4 z5Yn(xVLCBU8A@FP!_4*@?Kd0Dy#&$@vIbMKs%ZDM?{4mf3VLUcOI4cU;nndX^e-3V z-niAYA|un=F{b^dV_^OBM;8$qcU1}W6EsOYhfy@e1r)wH-VqyRDt+DEpAYTRaXI(; z_CYt4rS40S&4I=Y<0kZn zmp}gD&(`<Jv)SdYzKGjU8+bVvhA`F&nBSDqs0D!Mmb-tg;#(&cfa8SAFm3{sZ@*t=`CK zGxkhH5*l$MwvF`qHd_vCc`@OMurzZU6ry0-SY-a6kloGoNEQKFjA=f7bf`Z|S{3mvnEv2UNBD6O>wyYP?c|Od%3$MOIp+!i z*+e9*aCbqVa;>4Kvm#tRY{$#hq}NW>YPy|xx$!vob(ooDwRGD%%#e{l?9PD1Amy)I zQTMq2>xG-#Xp(~wv%CL_XJ_2J|F+%8xOp|2v$-Rcyoz&I1C4hyNKg{=IIc?D6Oqyp z_Zfv6DG)U(HPODEnG8*-;IKcBLVZG}z*4>yi3 zc`TT5La3q_*D9QVmgAvs9{)YA7N#+E*XoliPfM7b_lDG^k0U0OTr6!1Z3 zFg8;&J0v*Dc~#>V1B^*uLjGPO6uNfJSVmOtd<3K{q1+1!UmvjiN2jKqpY~QenkdLh z0Y#z07Nmv!o^^?a6pobr{$}|6?&8o%q3!D7+eaVH7aoM$c75%=m;j(mRuT+bRU@im z?iyhaun&KDvB@w0&t4|w3RSL`*S@dzv7Y7~Al#BPle#b#1Ch8gr!WglyYADh=f|^^ zsXiSo@YNI4X|P%Cr)9*+ncn+O4*xe@aMo+&Gz4zK{uXeUy zJDT^U$_R0TpJ!Md6?ZnZn>#HhtE0TW-FzOXkO@S!Ey82Fav*QTG&<^Ry8|}@PhC!Z zoA1-%sPxtj8xJ%n8LACt!Y>eb2n8C>GRN^gsu-FDKtP+c8N`?7bb{KO=wk?@H^p*h zn17px7cTYfvd>Dc>DTY1T9-$Z;AZb^n30Q>~ z$Rz60T0sq*I22okF$(r#tx?993I`Bm(B{7{o69bws?hREVyU451$y zl`4xRXDwPUy`#Ws(kGM9mjYmQA&6j#Kw*}qsw0JMWVL z3W*l{m?RB>5eGy7Gm&WmQHqSHnKGbu$peZe2LfPvQ2Z4Dk_Z|JG7DKkvdEc}s zh(A1})YRqqQU2Z>hSTi#0UjYz&8I7B7vHmTI(>D!`{wgMZOqW!_T%;@8H(^Auw^P7 zzr{e{WsIP#lbn|tS4T&>`u?>@wVj;prn9cai{!bI1%CU^Q%%09X-FSv86ZMnbPa*@q^kW=u3BFDXTVq==wM z0Z{}%K@|W5{)z%9fC3m*0#qOX1x$hpKp=lxuYY#&a$o4oFH1i$ zpz~u8m(jk!F7$QhXPik@X9ytU-bl;}Oa?|90N2> zr2v&Hasrwxxm--ePSOfo$H$)~P3@r|GUatny&Y@Q=T4~0_%a7G=7y~n_Ib`GI~uu& z+Sg+$pcQ*yEp0J%M-YS_BJq&VMA7WiApxJrIjR?Z5K6Soi-XxOWwV6Gef~jPZD6=b z1IS@B9*`&SXa>qfJ}OT4TNmS-{H=iuw?Ci6;iz73 zOq(KxeVPX`a?gw zzYDNS53Xsg>u44M!*-8pvwu#QE{;AAP!0Rv+-_QibL3{BeZBSm`aqsJz+?f#3+{wY2blkauQECPHU;q1Pzxwq{|blRV* z=>0nR$D6~Vc-v2N?bmmKXOK+{fKy20Lk#Wx?Wfz6-6z5Ln;*)6k3ZX{yrL9uPs!8j z1GzhFuU_4nlINV_?z0Wg4{pZs$r+i&^2I}^o4yI{w;*s8w{K)p;z<$z#75sdF&Z=1 zl5>Z>)#gV>2OP857uhprAk(_)2q(4Af9cXWo&MgpAHiD-`2RBih~NI%UA+C*H-Gpq z{@LBvf9U(HO`s8i(s7Ph&(WAgmN}yq$hfI9c3H5SpT1p%OVhOFJJS;V&ANy3Iogna z^105-_5%ISUC-(61jUFUmEBw3ah6Rw-}0Op0EqpQ2IIMEMhk2b;h)LpCMaEI2MJA^4)?p5758tA{+rndICsCi zf9clhf1}FErkZQ}rQF{Z?iB#!S!*zm`+?8kGmdD?qPWR^1$;| zW<%HZm~S#$Uax1_csTLHAc~xP)a`}t)Cs+D>BF|2b zKNT$fka4C7ZS*!|MGn`we!UJwBa>NwJfC&ZjHFxW9vzb9qsc;jKuN_I#v_;bG@ESt zKfQ|)DYWeo4zRlDz@5(WY(64lP&J65HwMm(g6jjEU=$SP>%jk{32)34(sMVk(;Eb)*bdh4aIZRv_jqZ1(3e$l7;!<#8;>6f~K+H)b zmW7bthb`5ol&Lfag)E5HKAPC<*Sr1M(QSV5?;I7vPL0V42@%QK+%Q7c1Lh%J_YmciMcp?ZkDdw>l?`SV&iz29spH;Y?)=T`6Q-&hjz&2;AsMHajT`9Sx`5)qo6ADL|DA&)zMX zHf&#e6Br^4$D2Y5D+R@E60KJ=qmfczTWlE9K{CVaxNGfXK%5x_LU1035+wOHsCxBC zjD(DiK}C`>ab;i#6a|$>KxD@{dKumH+^~i?P!ivb$;=otux|{IOD9hKC^2$01j$GQ zbWH)Qj?7M8-lmS-cOG-Gs{X(sy#wbjn?D@duMWj{$7**~Wzu*u`hl{MH10cz5@t@p zI1yCJqf-M!5~d(7>Wx>VWC-Fw0|^^3NPPqeJSbw;of-hjE0He6^##vkGxWKEa3_4qn&2Hv|9^n{&QsW1ogLs)&vRm=da0E`w1EQY8>p%~gCz z2UO1TTonc(0Lldc#3lh%VGj@o=0rKE0;EJJro(1{z0?hB zoG}r10!U+}z$0uGM^q!gn3O$u1Ykx$5OqnRV9ksHM8W_H#^%UIf@n;NNmT(z6R0R< zh>k#vCWGLKyecb_s;Gc~CDdg?1`H4+#VFW&L`5(OIVGUTi~z(5fuq_4Lh8w*D%Wfr z*g(POdr)nIO-hd03u1A%ut~Y6gHN zvyM3Q%A*8T?G8b2s_Q`usRF~_US2Eti ztbo}O339s^O^38l!FsYcSz+hr*;^TplD~lj-BI8Y++j3v!~-;M1aq^9?iEiNalPr?u>B~z-VsZFZII0w$MF~gWg z5)%-LKAo>2VA#(}kTC9A2xzAj@`PfCX)r=Np z@rv+ne{qse%X%G}CZ>F0xU&9+w2M1vd)UCXP@y#JQc9CmW!Si#F}*hLkmQ!hf|)OW 
z(3#=ukN?A@_2}zZV7K<2jMM3}H?}JeU&7wG#0xB3QA|(y(^I?ti`#kc@py7!0Ww~UG8Fbs zRol#W`$8U^dDJf!KK!h|8W}HEb5c5^$`|#p61SzIg>5bY-*A=@p5=Kc|q`!=Zrl64oV?>QO1tY} zwW##%%XUw#K%{hj}G_96Ur4ge9xpa0N*`rrN+|MbgE zb=U#;rdRbtP&#pl)s&d0nK7Bk#bm3~Dz8`#z=VzuAKCkp^2cKd60F#u&!%A*0?4F04` zr(gFIU%h@LnBMsAoo`FgxtD+0hVzHfzV-O&B<{L#z_UL%(U0E?pZ{;!W5e{cEXG6I zMkAAX4m}&hUqE(p)W6O1)Wu!A`9WI!eLhg)E17Arhd7u+IjP=tynVXpwgh`-bF$1~ zRl@cC4+QeT;G2gkvv_d|M{;a6cQAP|e&NO1R8aA7V!~e5FXYiw^$2&ZTjdULcb(w$ zkZgG}dt$!%$zmK3%15b2zsTBhx2V-@1)HFMe$1Pmaf|AD4Z1vt@5P zE&T4y7$Pjj-EbFEIjJ9ahZ_v#^KZTH!XF=Sw*C3vk$PL@hcf=zwm%QOy)TYV%UhiK z&0nrP=LNWN_H7gH@-gD0Cg35NxPNnt?l`Zqczawkfx|BQdX(A1+vL*$AhK2si0_*R zuTK^R;w+>@C-+0Ycp2fSZ8NmbA%0qw;!K_Q@=by)Am^bzA#NXWc^KDejA3u}iNU@; z8n)w?w=z54JTd2k&lEe&_WOrminFTJ{SI?mJThZnMV7qIhFuIXx_vy%r>)$b-cm>Sfx zDSDc;y(yU}J8@MXOyqu;2mM8-lId{-6$A+lYY~LauuGCcNIpN$T}LOw{?=v*GF9nI zNB|W&ZX$vZf?^3cwl29tQ$<*N;Ib&3b26k9?~_FDRAL=M$VdPzLLXuWwwR8t=Q<(M z7}(vhxM4(+y=4F^(w0qWtjVF>$5GToiiew|IcK#Dqb79~pmmwUh|y_nW6@M2BN&0$ zCh_UuuA$A>8$4Zpc#?}fCrjuMvJ6#o3xqBp8nlcf^&*lI^tZzh-rTK6auX-Z)jv3P z!+G`f0Y7YQJvN_j<0jr*k;_35&mBuiU6vLbqiGk zV@*)32W0qkGN}b}FjFkR#BC7^Ja)K~*P|efNhq^`%f(_8xW6-!4k-v%Q=*6%tgs}Y z)m;wrxju5-?u>MIhaA<{tb*!d}>IF~$j3rZ196<#%#-wDF*(EZ7 zA`wuB4A3JCIlpVzj^Wy(aq0#d0tH29Ps40{=#{zs8HANh&qzJJfj70a0d2fmWrn591kFO`rM}t#EyGt)b`!t4R=}-Zi zXzEl#D~&_N(< zNm&f4BY{js@mJa?Y3CIo1p!jb0x3b7dH^HD1YiRLjusWcL|}!yOpK~z6)}SEG3j86 zz?MUxl!Q@CU_aP~M`w16NWz$aP$vTclw*lX$%qh!*v2#>GAoh;%s~XksG?w6R8(W@ z4AACIpQtAOPWrG@@ z`^u(;rzD{zkObl}X)-314g^vfL8Gcwb&R2i0TV>FYM2P9Qs5-}D6J?Xk^{-|Ix{(? z=#%dNHARG|hyrR*v@+gmJV?EtZsg(lkJK z%l)t^iaV1NQk5H57}W?H2Lq;fNZU2&fFYOR$WlH3sMPs#Qk08#JDs#|UTzOBwmLZ+ zzE*ij2GCz2a!eHi*ePZ?=TWfN z0W>60uO5Xs&v52Itjn1)mE>igG)A?M9JBJ&R%mkvre>!Zc5k})r%k2I|_3qMbfUS#3AoZle|vCXG?{_6StV$tp9I>tF2k z_Aopu{Ihv^D9W{orX6t$5PO#eHSbUA63aFlqZ|BA`ZdgePhv6JdVJJrynl6>6yf>V z1=#sX;tadB(Fl_+^*#=2N6mvX%dDvS;h}xl*{Y@sO1tgd|K7@UD84^~;bQgKsBlNC zoKp2_HVEp%FA-XrZJ{^;Il|)g=6}7GhmIy#Xu)SQJI&$sVildsBpJa{TZiTf)M?LMr4GJ908pFUoP_Rw8_ zGZc@;{Ip~rVc*#uRGZ27X{cZy%$gV=YM-Res#9m9^G}He|-pW>`!Humj8Id>*HzP zeSX;d%=`aj^}CO}`IqSzRbKzCvq!gIr}gF4_H^t`{{Ce_Ij=vM&#z>0vmPHOHhle2 zPTkY#kIAjNX<-2rq`WDh!l${U@)l)^PXE`6RSu5bAC(g`;1cADh#s^l&$k z5M$G?_0qM?>hJw$AI;#M8vc3_#Jr~e_`ezZkN@tQyJZB?aoh~b5&)#OO8LSO7tT=5 z71`?aV#-~_2!U`?eW#w*%js7o7ekhXi=MIwKj<%;H|@G_XRE@`tM%ga?N=tNq<^Zj zS20)KK66zl7bMNyfK)48F&`5I8cqH;O>Ulae{`i6U8?i+CaR<8a-uc*+b<4kvQx$J zaacsR9q8?gY>DI_mAv`4kMmCPDlCVx(dzz z_>I1OJYB`j!&N>`#Oqz$9+$IWiZ_1|xzK5rOnK$QaTrX0w$9=CG3Wl_v%qw)iidN@ zC>go&@woon}dnwTt|Jcsu3@O5bEu$SZSTQs^W z^Wo)Me1+}B+wTMin_DK+%&_m{eUU#03i(=Y_2*G%v-4-|tARU6zlY=XIvb5^>ZH4^ zW^R16JnErBm3mS%O_t(KH?1lxbn72-U8HDWvZ&L&XxI;h<1n#`{Uov_$1ywD=(y(; z20(BQ20}4;u`1%=jV9p$!Br_8uKX~WEUeh$ln&Sga-*$|h31sf%8UUJ#&ZoI%@ zl-!B~Dk2Gx72Vp9044~da- zSza1q3~ku0cV9m2cQ5|DzY4%@boFU%dFcqTZ%wacne}elEwb6NtXliVA&+}A+UZf2 z_i*pQ6oU(F7^tW-(Q)^%iNTQvAPT_NS%bui`EoSKF3c5rRqg9T1fanSI1wt+WCUQ5 znTzcr_G?HHjm#>`%V~a)ZHuvJF(L#5#73;ipiQ8mH;BekZs7nV&{tNKDg=zdBp+O! 
diff --git a/cv/3d_detection/pointpillars/pytorch/figures/pc_pred_000134.png b/cv/3d_detection/pointpillars/pytorch/figures/pc_pred_000134.png
deleted file mode 100755
index 9071d66b7632a0ee5415931c42a9288e2f9bba27..0000000000000000000000000000000000000000
Binary files a/cv/3d_detection/pointpillars/pytorch/figures/pc_pred_000134.png and /dev/null differ
zXL)glaNB+8-`>)Nn@YJ6H|_@C1NZR9KuNA7-8X9At5p$MnIS8Vn2l=UNfSk+{(~SQ zue;$DLiLRMHxg*a-Iv%;^JHhvHgetEmOV407t)m+oJuwVYftyQ#Ye-urF{NJ`1FVH zNm!d76AA%?5raG#{RNBy#e!;rl#_0BcT-SjaC3%N)!z_VuXIgot`DGL`0xzI8*#)V_Nas@IKkg-RMK z^awUMT-%gB`*8hNMw2kE6J*Hibv?u@X#m0*4wgGwNSNcNjhg-$hF`%t>PQuKK$h(h z{uhOyZ4&9L{i@zIsoFIR;gg@AwLU+(TMC26m+$1o>!^~0@zI_}`XQkq$EPSoxfgy7<4o1Oa@B$IH&< z$d{nQm%gL?iidIX3fdW#e!00o$g66@!nCsC~j9MFi zzG>{RGbTNRy3rGUqz10F2s-5*$`w!rX(z(Q^nm`1AmNlglFMQ9$%^;UhCJtw$qo7W zR%nhY0ABE@#gW2WlOXc}`iXbDY=dMxK?~RBhOdlg-kmh*6KwT%y>5*6U?Zn`=Rv&N~aKloXQ4Xzk1Stn=m^YgbV zfB*d!{;GGWP zcsZrl%HXY)!0Y`2dw{#CFLIVa30n?s2R(K{+K4&x!~v7*3Bds2WaL_uCdu!26caK~ z{QSTsq)bQ8hWHK_8q4^cu8TRtfa}2K*_-nIh}Qpvu)o6sivT2nP~6Wd449S*Kj%`5 zv2CWm>Ri2U!LHv?ktq^pHJjUi&Fs1?S{2a^6fAJ#iXY~?7Bm(}C zR%T+(U>bl5uWG32_Qdf0XM?KA|1E~)BybbOy1Y!>of zHd*}w?OW6P9mjp!Pghqz)UcKKnXA930S78CGF|K@=L{3y+y5*WX{x|DTd82_ZBM9V zH^KhSy}5zh7U7ep=X84U&p)m|KL@<9OSN{Z`W9o=#i*~(@UrCI%(s4eE}s=WHJ=f= zE4H!Vr$U#@cr$|t*Y=U;kDFybl>hh*G_z=$n(gIbUIBxvUmajNz5CS%z{qNuqx$s@ zXw#!d0CT9cxNOoy`M8<3+QD$#n@TN0!him@x$80lGr+EbOfIGg_*=iqPoNP!5qaj> zPryy7`8eR}q+<3kNz(R$dZTUC(TR*Eobo0Dm1ajDt1Hq4& z!_bO0_0$?Db5>-!)YbjRXtrq|Ur@^S5Qx>$_(x&W%m6Q)XW`L@e7dM`Uw(z*Gdvy7 zsgfC|#5NTM;=K-qm(o}DpObey@DVDtjB(5@cJO6;66feLIyvglWM&hyJj58KI!#fv zU}^N)s%{1-kXNhXxJ9p^mEUrkDgLaRk7ZTzz4;L3kmZ{Hb0OpWkJu`GI4IU~zFjXs zg}HSw_Qk~o5v$fskT2CdoA)Oj-~^}{B&6a2XMff@gJscSpjVydz|XyL>+Iwwo2=H^ z3j-)u3l#|GK87`^DzapB*Vnq0cfU7$A(pu)dsT)-CKeuNey3`X#E0CBLBzsIPT9W= zSWqpL_B9wnp(|WG`~<-2Z^{-a*0!^;76jAcw@5auL8-wJZuLCq7h(;% z_rdwh5m+sH>Xh4+Ta=&tE0y~7!Gb1E{(>+?M$B(YXE4_K?3 zcDH!eUUsTgu<9+bE?N4F|LCaYdbruvpLcN|uyKZA~u%1tm;0I-FAwxUHwR>2G#FL}=}6-{s%p zo8zaWKi>fVTFx6#xpDq#d6rE*?o&%4&Mrl4oHj{{;!=5bj{@`3@-4Kdldv?GnO@}- zBI4NtLm^iwfn5Q3Z6<|l(g7gdWQbLk{M z%RZ3c*qbFZ_$jCO^3{r+rZ-2CjaQeVPkP^}t=5aIIeCnG6o@=cX&G>RxOip5JIQ$> z!p2aY*T3mERH*kgXEM4&sCILDJSsAW<}2a+)-*@ck0B$zri1jN%WL;uLH2Yf8Uo0C z3^4z->r$@YxB|(k%>9=D`h98Fmu>6=MXu1buscJ*!4Dg#D)km|jB*ga%{U$%>`JE5 zAkxqaS6G~BpZ1r(!^--Z1`YDGi)vAJBp16capYd41 z1~cc-(Cm?;#)9mdZ33FBcw!~&rjg&)90n>9NV0lSl@htei+EPCu-_(iltbcVzM41k ztGL(iGH|u{LFv#f*YV&)$Ysd5<2l)+wS6VqS~Veq>@*6OgzrvQVc?i$9fL_k7yYZ7 z{P1jQE@O)Fijsy`H5AKa*q}M%TXDPe9=#EQ6kUntdwW*30YP{3D>oHOYZ_!0G}p{;T>tbi9ur5-iMmOoeouw`=)vQq+_rbHrQwCNsQH#v<7{ zevMysngLlKba<16JOjzwLReiW>&Yzm zzfE7s|0KilG5qqx#&AQE@b^>!YKs48^ChN&>D9$E ziPOH4odyMJkw1GFEEgUfBB@4zPCd)F9`kTsq^oi2;xraXznil@9GMr)_@b_?SmUC6 z>su6|v?^VeH>FdX2oATPnJELhLlaWq&N(mid;=J*&4s6xy^l7xDUx~())=f_yO+N# zhS7Or`l8FQS;8#Y8b?y_Qc=6G&IbE4#OV8LB5%$#Ayt-Z@_8%O?GQe`MjM86ICKu$ z1DdGN7~d+HzQdv$eoOlQi8yx}P&3{48ho8y2^4TJk5c95Rs4mEk$rxVH#oU;==*%~ zY9)Tq8|{%?*B`q;Wp9|c-q$95xcn$B_{7#Y`^P}R+R56OOR~oj``qrt*M|2UWU{HA zn<0p*+*3Qx!-lPvi=!LSt~62Mm_)^D({A0^05XY1^d0#!m*>m|m_W2~qVjn78EsAN z*EKGa@fhtEc83ElJj*dE{ps1Q@Xo^R&wy%#Nt%#YftsXS{Cr3H zTic-JhQcUt|G!?kZAN!TABA@G=#si{z@43?nz~AnY1!osb$8W0OKs@o9s=;WT;McJ)OJ%D)b~#C4SHg+W2POmwnSmV3brPp%eVwxIXF2 zrlMox$OLc^+S2VVIRhc=i5Q#pJsb){b32@s@~`X(pLZOAVpL`GT%GUEfxSWamc`T5 zZ-VFrSYo)a}N<G!LjJ?wRFkkTmW| zl8ro%YRzq9KsFqHfljaT-4765-F{!OsCx!eROV>nDd~KQfYyzPYm-;5)fj)u`A5H* z$p{>JOSfUohXlaLMLS*o*bByjAH*HA4!`iPJAZaCeIh-?t5y)szi(H#t|p;txA=L0 z=JPQ5tB7R5&MM8KbVS4d!PfZQ1CyR1GX4R6F{R~VpHE>U_{(Uan|Axr^zy$`HlW*o z+DXrL`GFjlw0lRfyQh&s@OG<3HB5!`GXumsd;st|e4w7MiN{aw`snou3cv1_S|kzq z_mhoCpZ!+cD?AN?(!A!TPKnzE<&XZDL&&L~iOvU%eHBqf(y~%}ZL3WPh1ju-R~`K# z^71unT(5ujeB%H%E-gW_jO)Y&Fho( z$1md%mk@*mbnUW+8R7q*r~T+?Y0?T53K!qdMFpbpXWGL$VafBMMtz{I4~3oG=~BVo z^zQd!syU)873a3oW4pcUCSls>95L%dqIO5Sg5l!(xNReC4WD054qJ@qE@`rTxM49* zFag{ABsyrQu(n^X0S6%^t?8k*%zWY_XMm@yQbtJ4lKakiYNt`>t{B{br)EQs`8Mnf)fW1WFd>1iR70dVv 
zini1I2Kwiq;1sBKkXvGt3&NtqU(b8mOgbAh6y!27^(|}+pz7(W*!;4o&~W#;y0Zn3 zU))ALrkA`R(O9YvHdkw-2_q%C%#Z=yP$zSv23ku;Weul;B^k6P#WvQop^A}L&Pt!N z?!RQcqkqr+ksP#Q2W{b3R=+bULA)Bj z3#N`hHvhh2IWBq#dq;H>yP&gku$`2)xUT4AU2Xr3Y~qrt76nk~CX>)NHi47NJ@3;(Wo!BG!II|*Br8S0^~393hr@JZIZ#dZ&woOU z+dLAH#e+fLq%X=UBqYcUeD~YKo!p=2lRV2$CFHu+Ec@mY!B2y1qBGNzQ#)Z@r7-Mw z?Yv%1WU7POr;?gMG&p$l&3iWi#jLcKZ&nc9&Cdv~ADV6gs z8{bGRV!N^r6;PW>z7vRSdHC3rKml|JZ|4}k{`{1THofG>sMU`dK*96*Ozj$}(uCe? zivm{sn1dv_tuHT*%n*rOgt)k6I?JB`6R+Mo?6#HbgKvVi@nue66NZQ zjp$54$wX(o>u$F0Yir9S#?=F-Vh$}MZIp?dlQz5tOo$DB{#kTU3#eSF9}lF3C^9?B zzaFzIRTf7jcT^s>z6>&uNnEw*{qc)-@SP6ty|FsQ)dwbqWg03Dmpxj=UtPYj^mtW& zx^Y0h-GKe_X0L9ZqW`t$4`9~of~d;?)3#>DU8{B9o+sEEFk$0qXoIQyA(t+Wa$$G? z4X((;YnWGV`_82SJ3zb&tOjgDLQY`Kwlh4#O-(Yq3!TJ`b(3NQxX%+qhSo0zwm{)= zb;%mqGom&B9zzRRmdbgWY=w$XZyQQT_3o1IQNRcY2$SIK z(-M`tY`h#}KrB5z^mlf;`jbpe&yL`7*(I`M_P$9vMft5)H+>!07HWbB4D>r?vc@pp z7odr;Fk~sbGjBPwB^z1#9+OWP? znl0Ba$0Z|4dKG=Yi*)2mcrI^XNO8Wz^6~aqGFv@=seSXj=QQrzk55IN#??V#gYzU3 zBciwj`wxjT(%1xLGHHluY0(R~x92-oNDd$7X4Ix`IkLGg$=5r~YEk78`DxPOSN5 zqM^iu`8N7Y=f&x9!oU}E*U&0&i}FV_n8$QPAg>_I`H9f{8#agvEqQ(ux;w=x=D4ckLnp$f?`8>} zYX_;#-G|c>wBB9lP7!}r>EPQaMS2Y3mL!-8h$GuAxe3I6d+3ob-R~0S*5X;)Cuc!6?jdw<6q1KlVyBn&TdOsb#Eg5=pA02m4 z$hZ{Eaor4+|M|D)3;E>bqU|XT+mp|1&c&ay-x|3dJpb&!SN9_-CkGvS9y!qUcQq+x z-@HT9j7R5gjNf-nD;{8)`%%K*&ujWgG$0L}*fez%~ z;fCllWv&5(HDUS=LPQZ+F098^p3f+v>TCkKPDR5~YDT&)zMnKx&DL{4Y5|ajN{hQM zh+M6cNa^%6Ir=W0xn(ew=ogGxTl{>DsG1Sv4YM)~_kfKQL?bJ@0zQ8tVCWe0RL6R< zcfrVC(C8X@GCO+9Jzp;D&2;Z~!3p@(B%HCW&{87vMSM&_J*#@O21%}iY`iqe_R~j& z8KpEF^ViN7f|x6k~GP{~F2MTle$ZB+5+FMXOyBrptP(Zu|f zHhnYliEaNii&d_U2M>fxA4V5n_MZE_JNEdR^i6K_)DUa$N(rl%*RV3Aa^V9Dkw-z) zBfX9-){IOphImYVD<=#0N9M?#rKqcs_vc{uE$YOi$hHRs>MBg$wy9K>JKP@bb6VDB zoVnW#$-xd2P~n~DU7~_u#=bi_Z$9)gnNOhi<$c0wr*pN*=bc)g`gU%Kg67YVGevHk zptd-naj%X?`bp1F#KtJELDU~*JWY*9kjlBrM92k`DejA`5}kWz>Bs^Hsi&znSL<92 zX-_uTYejJICh_0*+TkA*w^n!m(sA7Z?3?9Uif5l63wFT4>T{#6+`r0O;#l7Rj>kIG z>pGKEE+uA!96Pw|V-HwgJ^WOVC&T7i1BH$770g~Q$Co?432|J0{7+7U%LxTLuq zPYnk(Vd0kB4c$m?NhwaxJ%zc&DBwDa$huR`>Za7NJIVfK^bIUGvwiEZ3tF^w?Q7n@ zo^$;4%&)VL8d~shX5SiT(6#+`=|ZSFa@AcMXE;rTuScziH}*yrzH`Nz#=QZf;nN6Z zrro!~`og|!HrDPFG$$) z`y>08f+%upoZ;Jj`AQQisOo)w_?^9=D?4dem2mMuEh8<-z5>`5k7I@Sa(UyLY+_@7 zA>)|WXAImCDp9J>NjaiiK@!nS&x8rwejTRdoZIFqdw?q4Jd z#LWq-ltA9PN3_2Jrn^6+G?pOfgkVYcS&1KCQr43Z!=(`bs~KoyZ&;N$5lS{N(Y~B^ zQ_ohR{HhOwUg?5$aHt-GS4c@!ff!wVf!zkjLQk7d>>4jU=SA(|aFo>qb(@ZUPbc1W zEwvX9-@w6X3%o?Ri0sxZ1#!e6SVp?Q0>9sTO2(9iO-MZR#tinVWG2&iX%`6o<3n_r zy#|GHsOdvG*MrHNN?^pvmpRliO((YtkNbFjUC*l%i9+&n8N$bcLU7bw{-u_+Z|8+P~+=+=FA!P{`5W7c7ZcqSN8``M>spGc;)gB(W`+Y z+}S}#`#Co9^#dLo2PK}#D}o`OBfne|w@VuMvT(Ie9_XEn_KcORToCx)Yn^(}I^&uH zMSjV#YFL|6z_+2DPLA7J^hT?U#qY}u;*I(n2&ZV=$o77b#*3Hmad`|g9h791%ucX7 z9J@SU-U<4?)loj&b1&NSd#GeS>H(f(zMkboZdr;j98V#7QIT_J;JF$>ph+QCR_Il`;52Id=H`I7>rUgSameT;!^JwGovBCWCv` zLNH7|_UJA6Balin<4a$q|HTQYDl%NQh3&0H5z@8OFZXEI>VWokqw=76t}4t(A(VWC zZhxaguiCSGnv&)J!2BT&%&$zt6lq3+N)&{7f&~77M{{ibN}t3<@HTv%0&GrQH1Jbe z%|?cu#DxGLPDM64gjI@>Q^LUXVLSKKyJRzXV4k;`6uaE)2IH2UVouF_oBmLxTg)?u3tzysN&b4*ry@))fsR<47*&TJ$|qS$2O8QBChp zMN&Qh=@;ILU>&bG)KxI9Tzu5`Zt@zKm!>+l5>22}Oq!&%-xQxo352patUuEIZP+oe zt@k`xIgL?R7lPl^>`3Wn+LF~tSIU1fOhYDHBe!XsB;W878nbWo$z&daTKjnK0wsku zuxHXqXATS|%V+eSIV1KiYh5)C`{uPh2Ngmsh1LI zT5qn%>uF#49-Ju8LT#SosefWua+J)7+Qp7`zA$to67Tx!P=0gT!i`$gF30f*ThG8Y zWT~x)d!JWjUXQI=mWC7BfK9s09e8e*w|pD(5D6zjf(sgzUC!VRYvgRKk-|sA(x@*7 zjcx<9npfRsqQ@PBZ+gg9B5PnHKi>m-3U!_1R6~d@%!cz}&bSJHI}8g4O~yme?1U*Q zA089rLQw*a{8yhh1bgx+xGW8PkF*g2LONJ54Z5k}adVPApiTFn%iykr6)EFZ5$7M0 zrxUDi*wV%cOjGuO+0evLrw3Cwv#$0I80MO{iy)jx$M90Ji`|i3X|C*#4Wx;bmT{c; 
zNc`&PeaYk>4S#$~x>0?!sf0{o_h3T$+b#xydB5rb56b7a%OeoWjsa0|4A4AE13%FO zC&j5mrXRGJkz+)UvBlNU5oCeFsz3Ic}k_x2q3?ZM*p~v0Qd$rO#&a*c0NbZEk_S5l|v$nG7-)+Wf<0Wbc zpGtb;+7)B!sP)%Ic!ZEJylQTe@U>#o*>MbJDIqKVEf6HH=in+lMk%s1z`GroROb9- zT&U6ach!TOH*|*EKYcz=;|n_};pNT$$eYW-Nr^mH)V6ZfbtjLrS%tOCM&f#Jp2Mu- zzNeX8ajLSsuJl#hMtT0ay}5V+3yXn5))ddH6r5ryxBQJuC{0+S@7uiU-ox5EvWXAW z#ZHOpw>}Y7nk;J?sl1KaM26_9vOj>?OGAm()kf*(N3}tQJ6J2ivRKt5?2qA(DSK|W z-Xp-0EUWgckrEHar9lUK(#9TJ-Q2W4gwhH>_|J#l zP-;R~`jNFoxllacb@E2+B0ODHE4k`DJ{7jTnFgAzDZ$TVc{0;#p?v|VnK>bA5XX?t zU@3{aZJKhJnsyEBesT}Vv` zFE~WuH@67u+gtm8!@n8p;_jNzOYj~Dyn1egjS2E?f0vt!sM|A=n&dV)Dla}x(ZyGmAmj*PhXr*cbu z?_l4|tk^}KaMbXsIVtDT4UIy7wNGgH7i*tza;VBE`<70=60BzX);1+DJf9%@YORVlKk&bvqs$U)oo+1F z=i;c}>mKej?KHykZgtg0`hhgBmA5tZDMe+9cmmT9cp6^48ux?^(enm`euGo^9CI$N z--cf`YFxrXP+WRr=}P6V{TILJ2}f}saZ4CO`lzZKeH9#QJx|g)8hPdnQ*|YY;y!%N zubg5UW*mb~&U$)x5Eo!$a-;xj>D?L%P2(*qWfUSt=%KB9g~#l+vZ_gGC*X=#5gXSo zf#DN`pUBv-Z|{=v#6WP|8y8u5iV5!s^{^XVZ`MG>^K)35g|a|S*{cP;3ytV1B6T(X z1c^hcYXMM6+FF(jzb>!FUzInDQHehwiRqT`i_369ZvxeA*C$9+Ei;Om@w?uCM0txH|2hEigUZ{UJweq*m_D$V3 z?9WpsztPTY>s~moB!2We*0$(9$M?(S6Tz|r>y#;c^*5@{mwCzGO|}XhaQ)GsNSwDZ zu-<6*3qQXJTgs6HWBw75?37=;uj6F8ya`EZ7Jxo=n@e>Tv?mzJMpDvDOg{KYFV#AJ zH~MShtm; z161O5XVSM0?!ob;?Ew*G4UAX0V!X}fWYUkLom=`E4P61Awx8UTgC0+wgA&nm-Mr2# z>9;Os;d^OsO`fLB@=y=@$mav|c%M7hl%064LKO7;28xl@a{KUN&@8V&VFFOTF{a5e!r08aqOTo?N-B7S3kw#eU2Jqi`8XC_AawR z-+_*5GtR`>oH8nWol~W4&SkC=vgqGO8X01y8V1j!L2q)FBeeC!f_f}i3$nZU zLDB9}t)_J88`KMH@^;}gJlD9KD|7fV)E@52V|U>~j~0dQ?EpOBI%9NQ&_l%SL^0tp zuRM|cTPQwkx^d^J$=6{?mtMq&%RAymHj=5%Ff86}pb9oL`pN%n72PFMHM@CRA&Rd@ zA>2L*fv;r6F0w=4^X8b1TEhj_6uLVs=Drj-kb2&JO2ppTQGMw|#Mq&MZIUn^eZ~k7ALKl1oDc7c<6B4i9$h*m z86onRQ&nO)BH_=_k0sjw>6P}6FG@M?S{sPQa-;JH=Tdl%<5;s5P2Qsigo=s;-Zh@qKY^{NyV@?iuqY44!K=T2p6wCNAX^d3Ib z)0{-a2mbR%h3jJ+%ENhSoh5WWS@6B?Z(b)ST!pMj)?jC@zATNMymMWxNqwTeRD40- zQ0Qv%nf`sb{sAw|~qjsm~niJg3L zEz3L4Z%ogc&KAx!iHB4!7kwTgFX$dp+OR9q`f zI*T4-AJTmG=KC?|al9k#u2Z(qz|g86ppU-B@R4Iz2*Ae(6W<9Lf1%V83+T}fghcrL z8N#1Ul>aa)>ZA&EIuuBSLq-t!l}b$^o>c-H1EQcs6*;EC9CPJ9t&eLQe)2(?K{v!e zF6{<0P}>3HmqS9Cj5lm^Zl&IlVQgIk3l1mHT9?sJAwqlv@p}fa(P!+9uc(b8V#=PU zq?~pz04W*&(-@*ra00VI40EPO=dDW9b*G&@i<) zUqC-_P=Q(`PU|uQioN%~6;(LL-A zztU0!MX#V{b)@8BLx)hPBhDVaUxnyPYI=%(T2c)r5dVnyS_*Jsfh10Pf~GK$6( zk5pu=iZ1Mly4%HP|9CoVk#O{?qS5o4q*CoVLKI>l4tITM%I;8DT_wN-Gn*IxCHbKGzZT}Fum~%!>!muPXNyG+0(cCW9rbN7$I)v z^FHD*ugb_ImmPf-p;0X+YjHDCO8hlC_leuD!U5zE+X1*|SnAyFCuttJm&2X7yPl<%p7al8 z=^{mOD4J9)cL-~-hwtHxa3c+F7s1xi2u2sK!C_oxNBGe3VSQ@pB|rKKta)oR^E3iZ zbOYPoN-(pp08keJXAktQ1rY0Rv~pKThV&j~l{?qC8ZqMzQnwxK6JAIA6}E{=PKsNf zHjhq8qPx!5xCi$2BBcfeuGii^(7VJ5FWqdy?yIMg<693_f zEdMaT@6kfOCxvr{Bg9POaOlwGvvTUoU{`-3|78tDwaSkKPHGQ5)A0t2!)GBEnR5 zL%Rv8x<|9R?sr4S3TpZWC?J-E1#jo@5RM>&jtezAVIBA0jMnb~y67V%yX>InaTz$p z=L0OF=+M&O#+R>Su(ku#>+BtBefG*;=etzZ=R}E&P(j0Xj9=$#TA+ORe0 zRsHNqAn@!mL<-AOzJ2?nlE<_A)h1<+v89JAn6*Wfx&y z2#c^ajHfd>olmkM_ywGQ?uhZ9+mtNBJUZjwd^X}jS6IgV7wtlxXDxYAs$9?G_)jVf z5tFXaSRJuqHJ=`+ubkGLYmi#h*T4w38b*+DmnD7L#1R^F5+4~J?u>X+ZavD<6J!2U zw(+Rg>E=keRExCSw+TlL9^NFi|M0$yTG6hmdxO2bz2Jzl{<}pH-8*j>uN2lKgea5s zy`OTYB2iu$vz~6XGO}(rlzlOFWy)eXw(a!tP}XOzA1e1E(iHwa2(cfGL z>||ezBNLJ`x%`{~sFqU1f$!Ep*-QGvOIOGD>6p@%gHM#W=G`AT8qTk_r9 z%h!Th506Q3IuwEYTgir8l+J) zUBX;6=5)ixEQxzct;b{Z;t@2HxTL>h2B#)s1L5N(-^q9+eXO16q%z;a*&wZVG1&Kq zbCihha$?VQNlddq6KoL`{PMB1Ldh+VO>17VU}?s>9$7pseX7xuYD(cnF2#g1y94X~ zc5m(Wtu~Judyyuw{0sai-32eefQ$@C*92c9ox9iv;h6T)F@n*Bh43Zs_wnQRw!wm{ zt9^9<*bH*id|!g+H&t83eTQpG8m;* zxe+tKc>!0qKHe%&%YF{dffY`)E%y_D5biy=%hvl+kkxM1LBmdHlJ3gv;Q?zsDODet zBl8Xi%bsZKj-=7fn_3Muui#?GRAfEwFXe11-;*pdsXEe>gZCN!P_tWcc}ZL&jr+)6 
z==-i>v7`6s)AmlhHeLNGxJQ#%{xb6TxU01AZHe;co0Ts=N#Zc*vi(K;iyn{`%A-Fd z5k;Q}nr<=-^z9&(Q8}Pch_oh=$-9@}5WUVM2_3Bwlp!hlM$PL+8`;h+ z$X^hd#x6XDvGyG*rly*Ll6OplczK#QY7dAVcAM$u%u+kO=;gyK%;bVQ7|0ji^(ljf zx2|-=Eou0s^jk#s-e|vS7(?W*FXl3%z4$c9O;&LjY1||B2#a^FWCvduz0b`s%IFHS zHImAX6u7WuEFVxlYl8`=M`TOh6b z*}y${6ybwM>d*OpwaIfYvcBu?e9?8Kf*l;i#y0$kTgmB;bYL5v&`(DA1cunHKGbo2 zN1IPF#-N!8N^hmihBB=*$(kC>O_zW=^7SN(nC0FGZ=5r1t3NIr^i@6;qn7-Xd^PHL zSfaJCa;flbF88oZ+Y8CQ9Is&-utkONByk41-0q_K~1De$8s0v%gQd%NLWzWg93 zfE#@etDnSEt8i~O;QILIUas4Dfpae}^)_WmIBxYc=h)Vtew%pY4y)7QhIb3ilE+!H zKdbeV7M!W-eP}58QHNc_(8s~9KI-&*NJ81#CU>t?ht>P0z37dQNW(M-M-6;z(DQPE zS2^;7Ui4QM~4#%U-;ieZRTPZ`uNT zIYmY9_O#dTs=+IwR;fawftw#GjzT#+5MT4`1S&U~=$GF|ae0PgOrm4+nSoR*zdAlL zrxad{+NqaYL)<}z)6=Ks)Vf@moNqY9{W%CeVj~{<9Q2sm4C9S)+dagwuELy&TLqd0 zRcJ5FAxqP&>T^;I=w6VS#4NKgMKabt2(gPACc3k^#2>Lo-5V(4be6!w*%`$k(m;MO zjGrg!N3*>|J0XRW;s#5>OFTUn8R-Sys>i}jbS@vTDjiCw@rLxMXzBaF@zid|ACMC|Al`qz zx81Mf^7v_uEf`h_eHVRd_7vR%+>(de_OSu?e%glSv9*qQ-hF)+&cRZP@0-iHHu%hr zEyr21x__z_rs&xQNx+f?8&?vmOeNf5E#6qx9 z*vL2CB%gfRQmP6=O4?o>vBdJd>omrNp_OW9CknMcnpzk@Y-JU7HtLo;h!$F6T2`?> z?AXh{b#!q-b-fAfN5~qkTPNu|Kd+81Uz4S}Wqb3-l9*%*%}y89+cJ}zs~1ylkwZ=D zhv(JTA;~;Cmwcf37?hix4ZXZAVHE>;?mQODefgQ4Q}pIr8SF= zP&u`g2EmO%FAW3sx1F5b&b2K|a!@fPF8Rjc9KPv zD5#lF&aj9AMsj$weIKJrs2f+W5(Kb9=koG9vsTB<8&>X zt|^nF_!Z@@1W5=svil^EwOImm*BB0_@x}9^cmwd$lbaZqWt2Adv0}eCwYwMXiVQ;# z5Bk`;ngdw??{Mg+XuAB^|Why)DJCxeX3q z#N8j~EYx9(68PC9b$`(`+n~GN-{Kq8b@rtaf3TKqNcFRqHPgmzi|%s`w=o!J8mWCh z7pB{Jkgw(`5KQ)xmZr{>$r1b-jt`!k;O<-Y?yOc}@E-aY)kq)p`U7LEh4Vs!km2^W zQ>l>l|d8Q(J!A!@%7EDmx<{@nD{rx{Wp+`K#OrHS%J;jXFNCpSL(^CPV6s zX!&-QE3>1UqjuB|oK{sj6FrCVnR9#O{sntIZnqo5=1V0+Pf4uI9mJncqKHa!St=QQ zJD7v_uS)UA1r zy@KoA)EY~Pm>*$QPM&e2+u{l0YkK1$?_@~AWevCXcSq>1}jVa5YA1(Hpo?=CO~V&nA;aGB}@H{;nkW3=xL{xB2N zc^9tky1zBuN|lxp*0%1z_ei9R2th|nw%iJ$MsX}ER*ZfQsnB3`Pw1mBx*tP@NsJG_ zr84GWQ8?kRrwiJ838O(3#}F}%KMKoWIbdvR(@rjUYdYop;iuPM z^`>Bo8>VK7Kfe7Colxv?xlgl6wa@k9-PpcpUE5lbviDE#&L@>#ULZ9L)n7}<#S5M) zYl>}VOdz|da+yRrm4?*Y?P@=6fvqUQmMpkb+M{5t(8Z`NQq%ft88cVAnd`Nr@D$nG zt)6Ww&kQ67KfEpT{nDy57@|_uR(jS_&tPk)Nc24Eal6Xb@8xbjz0ww$GBx*^p}aVW z&hV?dDN|`~Pw?TSyu5B(jf=*y{Z2|6%t9*Bo@*!#5-V%h!r2E2Y;DF`Vxvmi1|Yep z*Ego6E6$@HIJ|5hJ2zR|ci3?waBgywl$|494N8|>g@Jp{7K@b7JM_cEpFoMRV1157 z)hhN0FsU;{QpZxDWthlp82*AjS+Wt!`rE{-vXEJbst9x&1gTzc5h1roE=mW1b1-fj z@P3lR>Y6zvFDx+}>KyMlPFDdf*|;POPufdPeY7fYtXg`!{Iio23gbnH=frS68@(9L z&?MSLi=tL!FlRI;O?NP2|HApME@O|B{H=3P7y)GG+q7InlX^4bS4k{vB^rKmX)k{R z0Lk#yG*cEnOvAR_9d{d>7f{z)&Ncl=aCC6u;`Np(r9OM038jcyMbJOO?i2Vp1W$f3 za}3)Jds^iyyT56{-a?ZQl*{{wXED<>%=#1e%`knh4Vo+QuYn~krB@%}fxlYPM(&M? 
zRIX2wb}yFW8K1A`!sPa>azr@V3sb@-y48{o)>uN58bMxx@@l*Sg018t~9% z*D8T-=%jCJXDH!b8+LX|f4g2sMI%hep!$;SQS)D=Te>gi?3Gt`U$FjSN?~Nk$S|#Z zJYO5R`W|C7;)t5d0s*Ubh zvuN@4gs-wBEt7WbWBLQL6KyfA{kuVJP72p|`dXIFzwIUT;_DS+%Bvh5x9|fixd~sI zCVR^#NrY7T^UEg_5^7!@w)ZWK^5jYmd#|VO(Qt1x%0MZ_vsnkrVsYFyL_0kvhw&jj zfNyI@R8)X52g9cI!M@reitUd+$bD)KPCUfsVamN^1{1O+)b)+OO2ou^mavrdx<->9 z+#x$a5`;h^z#dVHO^oHzQ{ZWabC1 z+M#>RCso8baIQGVgu(#~9ZwT$P$U==ZvBX|{s7g*^yhsSrE|5qUUA%04c3v)!d7E9 zg{$~0f06ElF|>jkBbrt*i#vVveP~7JsuS2IZ7voCe2^zJVq>wJ1X&!L1ocpuLWk1K z*e*t!P~{jKx2g3wReIWsn;YX)yEL!KHsHEABy&216UW(isNhsn>B!qF-A?xH>x9C~ z&n~@nGa@&RV3l%>3Vzu!pM<$@tMM*&nPs#(q&|kq31g;TWu4a5v47yz$a0I+gIlbn zo89?6cX(xDg*z%@M!$ycphgqj_5k{3{6Xc@7HEM!BoY}*l?s)f)d?Q-U>WameMrh1 zNi%N1IVvYb6-fCHURy_=f5mA3~VSgAt>$udEN!#`N&TQQsYtOXu}>`m~nE zbO*0r6+;m|rrmmaIq1;XuycZJ_=HIN-d06!q(NI*{%M)dAI>CP*B4#OkKs*UW)(W9 z^hmB>hwE;v9hP~(P%`-7$)JADgoQ%o15;Nuqz`z-JIa}|>))K7UQOA zYF6FP_Z;I7)LrtN4WaBSGfz`v?tD!U{m38+Mg0xHhwgl{AtIwJ6q5};lwl`Pw*D6Y z>>7F>Tw7X?xvcf5d5UlZ-9Bo>%61Ciw2#?`kGVXmw@4kn$`m2eAannQVB!%u;wJhA z=NjO-%MP+-3yzl|T5)5@v`CaHr47yST77`c<@sb{w%q8$|6cL(Yt!r%;+Pxxw}Vg!X5WN+!*9`*|lJjq;$Bz0X@d$ zUns@LzfZAb=jyIbpGhL~)9V)2z!+|wlJ?gs-&^+3xLLMOB_$is=%C?w^qcUlW}}qU z+fQxS1}62dA=2c1b1S({t&ln=otXPm)k~3f0cS4h)PIPn?T(TAQbwwHt~iHl?Ln9U zW9U`J*uwj6U24QvzMiHx$X?JLD;w@l=qg#eVN^p`essyCJz(5Ap&^tK^^G>+s{VGP z&ZiSa!R10S=1>)2q1ITOY*LtjL+*Bi#yHED?{h9RKDti5g zw5LB^Lf^w_&vaT^#^Zsl>F28nCL!~kG8;xFf#I}#+lO9$*rxUvlCazA7rpr&X>rIqumI+#w z`7g$IVjb=g_RW;<$k!UL78h%pVA9>PKA;ULEYm1GjFi7ZH(Pd#t4GYlE-69-S3iCU%S>k=hT#1S%i0CK@x9 zU;S<`@ZIjnI=9%2it7sz%f9ucly_dKM?7XaXO`!y_y$(?5^k!lzM8J2#tQBjNc4U! z`(zQE`yyTa^nJUJwtHq~^<{zM_GS{I@7KE1h?`kYH+thrf7to&Du=YW#pYTFUOrpm zKk0a3$c}XRc3|!YC_X<$?rDB0usx>~p?lZ1) z`hb4BV8STx~JmJ#{~#3%h#Gp8-HZ9c%j*YXj?4q=aTp% zHXZb9{v>Qmimcf}>3T*QK~glmt4IR})w#JmzUTZ_3$sFC<03lg%F4x&&6rkOfp2T= zmdEr9(E5Kk2Or#~y7-yiG1@}BN>Iz`1T6V61I4PV@o7`Z`m1l=+u!dm!eMdmuNpKs z+tlO=3^;2J_#Qw0OtR0>t^EGwS1$6dWDPA`hDTqPyw)y2C9{@C&HIti+3s7~l=Ajz z4kwKF`vl*hbTW&QQdUxVeCO+<9+E?AosTarf^2qG zreajS;tAwKpIGmFx+sg6$$xR~57aS^42@lbqvTYhd&mkhl2hdP4k0Y!*P3H>kCc5b znIIMRSp@60@2yvrv5HHhh5{cYRUYg<=&PYw5>m6yM<)DZt^r?XY1kV*v8YL{^ht6Y+DR0+~j*SF%^&lLe&yS{RL<5q7F0CfPhpRbOFH4^+ZQ6*(AGR=;^+ zxy1{i?I-GaiGTTYqVgOvQBv-;<;0#gip;b&2yy<691vA4@E-|3 z5>${A$cJ;Ltskz6eAf6~Lc}r1dR3%|Sg6`orCSaNm>~ZMuN5WJh9J^>74jAbH)L4g zHeiJofIl1^Ox)2WQgVJ7_e3W*;>SRF0kg2FA9v~o%qs4RUu)Jwr9Cag2>X0RoWN*% zn6W=A^8W`g1-h|+A-WXMu0!tD;P$E{cba{C)eJCrSum&fjiq{nbe^$A_Z^4evuDWq z2$RQQpXlbrc*6MjHvr7-RaE@m0NFKbPpS~T!BU<5bfCqGyGo*W28a~SL9&Ut?8MTx zj483$7eLQi?=k_>wGlMfJ`{)^EBgJ8zu9icTLS=?n>cu3P}KoJpK+9uVpXHDMhIS& z{rMtP;T#a`Y|t7{l>nvE-`|Fx0=4^?1=ywlfp+o{6qdX%?3yHaid6UtzV{he6U8iV zH#pr)xRU#hnmAp7ruV_p8~{3;5q(S-6z+6+a@dj1nO5unN-WFOt4Gw) z9Hhh^Qw1=WkDz7{J;uhDyjEFPXb-V%;G;nYBEj4kK#746>48}G%do7L{-S~X*uN$t+VFB)7c#9LI$Bv1h&8L7TI}W2M z#&0I{O+eV*x@pf)D?$_FG;;Rs!D~73Rf5Oj|M!xI!LXA0%yA+?J7_$_$)ayOxmN8_ z3g@>)FQwz1?LvlSE+?%!U65gVzBHG6-0`%WvVRZp%IC!)nlncup!$!(sKhT6Kn1x- zTNbgc6lGw^5NvS%V^|%ahRw(D!xV28Uf*JRVEh-^^RIRAdo{G7q2((&dhB@&SDZh3 zCaV@G!Ij`OwEguLWMjY?alPG{DuMX$t(wU&)~YC)92#PIR*ZZV9*#nvJ&!X^6Y8xl z<^M9p>16po~>P>Xf|#H9;D2OzAyG^+ft2O-|4M*G6!z`p>= zWGK)9I1wC1-?uKdl`)Ox%<4PA*it3vJ1T*+s4+8eC5g9+-8dN@W=Jftwv}a|uycS5 zZcm!1nfF)P@h3?96)FVbgSha)myJJ%3c7IX%ZucVCj^K<-mHvy_K+^Dt!g%UJw<1p z`|}YFhY_WZAb7uTzQ1-%RWj< zf4=db;DP*6nJo12G*k}uu8lTj4}P{lP$1AtnOB`VBwSo|FenSjk*4h zh#|zQktcsa=bKgiI(ZSo;^-qJfOpF7*viG4HrZp-v@M^WA3^4l(?U0WHupA$_fEa} z{_MFCsELd~(y>=jcd_gJr~K%12(@Gu?JC^va_Cv@KWhz!2Mk2!&gn*9qW`^Kj~2oW ze*+VzlKZhLYdIzdEKb}lNZ)vU{!bK&h$K=R2GL(NNgp(J=c;#HzunPuUzr?7*#h@m 
zM2jta>s~QZk40u(t^p`kmW(JZ7)ej@U!CmuE~Fb}PaYRIAzARxC;00c$-kuqVCqV* zJTE-!{9LFocK~`i0k+yzP?IU^qhj{x<-c!x^Z87mJ}rDPw7&xzE`C8b@ADR>{(S|3 zAbl^1uYf12hz=~x$FaM{9<#i1s_c_}6^=Q`C^~}5J51hnm1gpz#Gjb>Pi%!-lIYhm z(>{2s;9;cvQ$9Zuc8l361iAsOnl(1>i5_)zf{^qMouK{*+;a)NS|9ITFizL$Qa(qamsgbewe1H0OF;rCSPv} zU{)5u9(c4UW61UlB$j zc=ccNA=hHrWQ?HPbPY&$Ue@UWtTqGXQ&$rNbXy{$p%LzegatDL+%^U;39c3!sMh{o*0k*B6cgos` zotM;d6D=Y}o(9!R=q_s^wtP$~p!+!@3Qh}e$YLjq5ck5vA$Og9{6PD^Z}VTjXFef? z-Q{M3WDP9Na9o{5z>)zBY1LBqo;|=hzS1KO{4|frGcc_OG@{Qidh?{|-Ba2~c`j77 z=Y}gXpEnDAJoS?B732*wFVV`gs0!=;Can<>sk|H#yV6>^xEm}!LJ0O>%X zSzrc0t6LC$s^{vcfUy0L^ah|2UjvZ++XKAGkbRCE7v!`{+6J&&;>kv>ZRQrfyi8g6 zc&S4CSRnF7(w{e@$NF2x3n1X->*StOad^BYJweKsIzk>p|1FB~mE)c7pYF~ie$W#-NgkT1 zb`xsM>>0pg5TRfDFV--=?b5YF34aT4`ti_SFrG zHIcFr1Q@`etNnM7{rAODQ7{etm3*Yx|BfcIN$}gCf@ZzIe{B%ys3X|v;kK3}wJRd|yEWQ(lgW4Bn| zgEY=~hWiCwJIxkKz{Ep~}A^5m!WdzWowh zAxK9tD&egXSssDGj#iDuk7t!K0X@F}lO}G+wgvHWW0hb@YhXIlon^e1bDH4rtXjr9 z|M|H;BAWk1N)FUhbx2)~86SCY7CxvvdVe+J7vTb5)%uqLlf{En%C!ji8d2x6M~0L_ zQm}|?R z9(OLuLR>|o1M=u=?M!_4N~OTHp=u+sDHC0&BmNw0QDcHeU!I+g9BmBU*@6tW-pAG+={J~>@A-=PL#|YUWa@TukL{N+zIOSp(qi@0?!#VMqYclLg z4l%|nNNS4l6HB;vL-Rqe?m1(fBgaVn0sYr?4VF?q?3pnRY0I2a_}4RB8(@m_8bGQ% z5&_7$MJJgkO=oG2yycZNq=MzJU>0Xvle`J5$5wne;@Na-PA$D2g4JH;AqU9q_lU;K zn23kQ|99!|-!TaZB1knuA>76jE=38aUy@pmKvtLH=1Gr^`)Z0}W~nU6m`BXS3*VAY zT5reSw$ei$s1-FL1w=gWkeBZ9EJen%jGhQGN_#h$ckqZ^+htWT%ta9o^@q8G7wHB4 zRCu_$LYG*;@=2-V!T*ivVe$&dndYx+Mu*0@h0r$vKG@6MU4>Us@EOfWjc1B2xH{$|1?)7OHaKEq&-qjA`_bo@o%Y~bRfS?kZIDfPR)MGh-;!{x z*fRXbaINo)?&(zLn8xGR5{A$WCzG_85OZHP4;G%vQLiXjh_p|rM z9&090(zvvrpaoanQU!a+v;O%XRh07rvDe7tF zvK}!Rj3AuBUK&?_m4S2XZHQ|508d)@pn?CHb~=QDajf?uZH*a&^(O4wZs1bsQuVQN zieXq*S-fs@iu2Nw-#3K@SLhZby-nkPG5w$P|B(Pf;Pe>1WDBcu7niB$RW&LAWXm>u zixDzI=5@qlZPRzX^MGu`P48cehXU7zq_Q@?=bpP5p^ne4E9F?1I|fEucnsOTazO-t zgsH+s`3e`Tvb?0Zg1qu2k40A0QJx*&45jP+bj4>~?+bPPm@b6EL8`>G)T>vH>F&cD z#Fa2UBfYX4^nF)+VGZY(c){E}yQ!sxDK-Oq5ZjuXZmwdDaaWyZF`Wj_p#3g*&#STd?1E+~>z_Z#P^QX<}$ zjpL_Qo;bGfjIg6HiBjI%F5I6o)x1bR4#@?bJx793?grv#b(%|drP6zAGO^fc@?7i* zPP$PkDuq1#BJ-A`!R144mFVuG^sUs^a$mL8gEa;E?}9;es!W2Oyj`oS6B%+xi(aLhFool-M|$f7MxhBSxHn#WrR{D z?(njA$moug!-zch)CD-Vc+~x4@*9K`?i!Uz&i&5#S4 ztZN}(-iMRuZr93`5(B;uP)33XI(MAXx0WR=E$9?6!I4GMrH!z|5lfpe%INg-`(^ke zkzl8JP10q?G)RsR(hZM?7oVS}n6xF>;9XZ{@G2N#Jq2Yro9;~iMb@o8nk+@mzPjw} z?5L!`Me!!)_K3p*ibY)kIL?fU6WE804fqD<7^q(0$ku>YFy-UBG@`&w_6yd1OCz;; z$q}-{8~b3}-VQb9_wDQzWys|mRr8bBZdgopQvJ`}yL%q)n`MS?XFA+BG=^iBuBre_ zbsKhgQ5f(OxLWdLxm)qZX);kx2b#beVOSIuyR5CM7#!!e7JYvmZyO*Hsf1m61%}!m zYNs#lXWk_ueH2Y3{W9((<)MPBguxzl*oNu`sI$|pmbzE3f4Q>_m#Ynq<+p<@CVYc9 ztDNC)1+xtewH3vn6*G`UA{UxU{CChGA{D32?4H&PY=@7`sX;V$kj*x$Zprl2Cw46b zf&w@PK>4CFm08;Fxpr1SypUnS5Yq|EihE2bm>+on$Io|gb)wxee(u>oxl+)mqT9%@ zZ9tj(_pQa4pa@wQPLsQ<_OMl4&py1}u;Jx=oref@c$>ibSJ;B!B~QSa(xfj*W~5S* zDhm=5v|d~-Jr`Ggs6lye`)967Bpv751KGUjn%ezs4KN?G9D3%Kj+r9iq}@tR@_n03 zk*KRd6kx2W8RY`avVB01Jk-98IfEe-j?-NB?ea`7HceC!?AsiYJc3AI8=n7A9a%fY0t$+>z}#6B|(BiURc{8RR$Z zwwx>V2gnZ<#j2uad9|~ZNk2A24tf;C4q^=}A96L$v!d6au5%Xlh4Yy5HMYo|`+q|9 z|L^B!B2lpHuN6ysnk5eBVVx!?uqZWh0_ z*^#Wv(PWyYit^_n@@GU1L#68x} zZhx~e%p7jX>8<9GpVxzruKgh18|;U(N-mtHkK#=f6$55j8-@T7E(pzSks(hko-zMp z*GT`hVzjx1#UupA)4e?TO6?=p$VM55*pW=LYouqx;$_`(|h zJjuVGjY~A9Go6=E7S8aTjPUrCcSDFhfe?Ig94k}rk3+5J9-QINLG$Pl5V=28IF4r? 
zeh0=s)+-D1-`?nPto)oG&|9dSW_xPz>T$fF!G{h`@u$w0Mm32VoM4=nCCmI}Wy1jV z!PDSlvbqNdxkMYpKj=9lvfuyT`}XHwMEzh%Q}}f#!R@xB@P|#O4Gu0H^-G}XJ*&`G z50#R$uxnZ_3|6+#3RtM#U%B|gAqQ5{^W~^wr$5Ghv5A~PZXE3%v z=Q|ggY}XmnXQ7;T8ed?!&fxC#f1(004Mq(@R>SfOCoLX0KaW&~3I}3(E1?$2z1G~^ z%t&1W$9~2vYU6{+p!A7lk>ElAj7StqIC1Se!v_e*Cy*V?UHqMq7nPt9h6aTRC1)13 z48v`Xh(tP6f_c|HXm7N^^AnXc1YYP*fu^ObqC&c|Ww3Mbt(Z6kd~ym7Vdp8$){7Bz zLN`~Y$7=j`NTUCL&qZA=+V=aT`G$Ss-0%M{nqCJ5^5{^OrdLS7hPp=749Q(HXft1z zhg!mW7rX>v1q@3Kpj3TR`q9XdgYXokLZ+|zE)j2!rqO*1cRP%f!*&`DGnSB1h&qlb zZ$Zf<7pPMoK*G>^#|VO7B}j?R&&U$KeSrJhkVvHH){sLjA-$YUNm1(vO2npJB!& zplXmf%B^i_!5!GP(`5yVtz{>=iwGjyU_kq~|x1!F-8y(YfbWVdIy z{oUBnuOQLZ0W>cspq$RT{o4OdWK1_RmE;T_gq&(B`c0V0kqh8 zT5Ahjm@Hx3wk3sx_`pd20i?ta=L5Bo%jEUc=`x3RL?`g6f9r)}AfhmapaBflvhG%xKi;x`9TYD6Wd${SX7 z0z_O%rPhYPyYf6V;%*D6q#C!v`w4}=pvjv;vP(ye@qtl6QHk9V(i@Z6k7lfKV|=7R z?8Jl2V2{H*Ze{;IMlnMrAmjILCV_-=DmS(V|2(cJ6~8R011br7a8|JtM>g7b6L0SW zU|12IU3V)+6~%)kpxQ!VH_>p;giXIKUB;8gbA4{*>#c%hulP1VJHK99QPO$`RWA_Gw1YLO%aLsV0X%2bX+o4TmHn3mII3ucUQ3 z)I&10-eTa8VJYaTr9KKbH5k1H2s7&UD$aU?++ zbc9%19-5(v@-vMvLtdE+QUo1v@hg1D$ESqKQXhfc=X&4f?S+Bzu8E#qSOZZ+NrPS- z+x0J`eJ)HAh{u%&!$i*FAiah0#os-4aqdJdMb8g=0C>kdTC z^9&0V>=vghmsL6W1b7P_?pe^)n^7FL}9WH_I0xKCyX3y4qT`M{vV?Pu?4?zRRv>;26sMe z`sh@mnhY4=F=o(PxMWoAua+u9b?&=;#MfO-_0Zd0QyS2;`VLGRdH3c?(rm zUT83j{y(N5sH+NEh`z> zvUf)Iu`=U#zr4ri^Z9@O*X44luIjwT^ZC5*$GW!_<{Bdf!l+n+!hPXGM1j%$ZL#qJ zvIjb&Fux-j8BC2U{U}o5teNDC&s(pP`Kz9>R`TKSlz3fj^`F8I;|8mT#ck>d4?iir zS^$Z?Q>A~JoG+@Dha|aTM2uYI!ckbS_c1;a!r!-Q?8`kiH6@KsRZG(jc>t8cKp+y1 z;CFXiE$xrkFLD_OD9Inep-1`dQn~eX({KG=7_{iq+k@QGX8rF(9lV%4k$CTj^n#7( z$4~Z0+$u8-oF=B2`;s=fAlmzz*=&s{ZfqBgbGhm)n`1gjWZH^`YDb#X|(( zeqB!hF#n9QVaLg(z#Sw|k`i`L0g7@QW`>`Ov}c*J=3Tu_iKl@MZ-B7=bqmgV&1Sh% z+tmYKzkz%uE<7bVa12t&*^^agJ2UPm1S(Vv6gi}60=h7ZdG$y($Rir_cmgoOkn&g? 
zg)jwdr3S6&oWEP6t%t^rQv8p={P;wqavpgQGG<9!fU_&>c)IF~!BW?i^Ae+H{;RtE zxBc*;Pip(P9M?4LyJ$S#p{7EqbB`{vh8 zwgC$u_d9lQ{cw<-=z6)P*@-=0mL)_f;m&nKa6wDAxe_K$N!!zXw&>RGE3mP_#eTLP zJOR=lySb@GR80iL?D&#(fPF6Fhi`$wi`R+Ckaz|MHP&5qXpn;lGleOka*n(_aGnN%u5JMK4SV9N_n zahms|>&m^GrT9PN1hRZ##XqN?dJjBdo z{-M+Fq(^gomscX(p9+yDAz(J3mppcvVaOkz_vcjiDf)}kvM+#tii9z<)K6OV$P#F2(6p?F80k~plozNtF@UuT6O z%7gu^V+*?~z#tF4o3wA6*_rux*z5(e=BR{4*TIzZ( z6A`rI|MOK(m(&`=#@Yp?9D$7#M^HeQL)9s*B->eek|Ys-Uk&M`eB~H?M19$Onn5Ka zlmUo>qKTp;Zk*atJjjD6TU!Q4(3N`~NE3bELdGDll~iX!G4L?!qS-mWU%sMWiADij zowqG5Gw*PA%J=yDpzSaUR9#$K^TS_;xpAe8O=Y-D2!aG5NVKqp+FTGWx%UPnB`Z8`T>*YgF>!=s&RchQs-jF zTP-cM78uBV^mMpi;@Q)}=I2OD(^nvZ^93eB@;79Kr$Pa<`m1q-!PlK~H`Gtwkv&;T z8+n_PwdA~qt&(NPLIFGaw*jn(g>vrI1KZgD{9ct0$?|6M#zV{>n1^o{fRjqHS_J}Kw97uTC~=<94zg(`H-T_+sn z>WNoP4hMb7Ag(jb!3I{f?I&Nk-~Iphe~Z13oJIGwDZm>Tp-i&*w(EnTh5g0bo1xMR z;lS~~J;~PH_D0nV>OY9FPkiyp150h=1DIz!4K_zuq&XUZ0ZJ4`YlLeJAa)Jd1ty@< z78AMnO2)lwP6nhYH=+N-?d`0z!*O)Y6}@GP7l(}uO^2!@eR=YR@Yy~<{~^IdHRXQW zYpyeHhyOX+J$OZP9j*uGh+#f4MF<{$?QYg)f02HouBeZvBQiKUWidO z64PNw>amIJ2;7^qyO2%89bb1=EJ97#e{ zv_$HlHA`ud^50;F??e6qhU=xH71Ux-d)-eV`;N&Q3e!?Ali zf8ae|D%(NKrtX7HgGNUoXCWy0e9;2XZd8eoXeW}N@h9P+XP5QR8yg!NhX%)fx}DEg zEyk$2e*y$YXFL&@VHsA38Wap>z?x0ZzuX7SAVLCu^BRjxgN&^O^ZeFqEe_{0_<#1?(n0fFU>A6d6<-ZCkNOhqJ?8qDmmP6mA52gQ-N=95Hl`KvA#JTO z50_;2m>H?b2vAC}Bnu>T@cVw}B?&9<>Cx*gami*?X?iKv#2%`Ek#BlunA+wb(yPN=izV`4HdpL9Bk(P}pr z?~i{buuVQco;W=lcGQ&0A3Ewb0J>tfg@C;xE3turrt z9-GUHEIVD{U*H~A&T`jnthtMi@7VgGZkFjK-iflKE&5|}S^wI|^HlxTXrvdvO}?|@ zC{vYWR}34`G!KS}$&BOZFbsTr%M;|4u8}>Od~VAx5DS>k3V}8u`|6`sN;*cHv2_rm zmj)-wNGp_dAj_>nnzZyFf|MilV$qkXW7{An5T)I=OEmm!8SCns1gt_W8D`zUyR3B7 zQz;W3Xq_O+d(Ttm=^=t_&<;NN1V4Xmsi;!2$eU*JFva~MWP}CP#$3l3A8U*%1ubLl@W??vepYetZ+h~Nh~Sfl3oDy>~(_?jf!qVC+a zC7MN3eT}s>@FW+8}js?vXU>T~&fN5_I{*-Nfs5%YGJ$g$Y(Pqn1 zdvbZ6m%`K6iK5U`y)gRlXPX4%JvewH1-oXfz0jc=fcPnfJnQ;zq^QvatTui;TTK=r zH-a8; zw%P4=w}-M1AKoz#D9JO>`WlLMpvkU!qyIbA^n&rr135MX=99^(DbadD{rM^PhWcKq zprH9sFebjD2_?YYizg3dm%g{Gg3Ocx2m&NHR3eWGS|4zOB8rhP`Zz31f+GrV5+<4; zO3+?Y-MD5C?QhUQRAILE%X8{T>y%x-^^Q%-UTI`B5EyI4SXs|vsyT_=zk777K)v8!ETi-Fa z9K5P|dA#KF@YRsL=w8F&HqHBq#*1~kfx_8s5D|AbfI7I0PhMv4KyZ`S`kJ`9M(`Oe zx9{%VZ-&ZDontuY&D@8|^t-==Td&9xuXtxsbE(XWf6sl(@g;!znH0NK$aC%IQmKa@ z4DRtREuNNb>1~szrg13w@Pqk-!o$})Z&y4w0^FN_8K}I@-BIn&YrAssYfol(pMX=8 z@!rCVjoSNM_RZyIlsofrrU1jgixz@-kG&IMlFV*#+ z+NyY`Aev4eb3QMV`G``sPX5k~MlYd-fPz}PET`5hsx;)9ot3ub;uf3ojlWWy{t$b2 zdR?q!-sO@EU+&hikPqsZ%9C?+@-jhR-n=1%(HZWvmxJ}95CCXaxYTtvK{y#Nk`v{q zY2VmS)zVxzT4M6Y4T;Zd^3MR&FeT5u$7T?#VsJQSZ&y@>B2-$MpZIl=jCi2MZH6q3cq=?vq_(jn3bqfxYeDr!Q)GbbK=Af(}DE}hP} z4gpf;&+AX8d)apP>Alow=Xg7$tKXdvksg1^I3@DUATE{P}b_Rhd!D7+w8~bUxH*l`>$>=`*k_xDl>IOS$rROyU-~Z zyYr;`mf*Tu@x5R}^<#6pjP3)E%9ol|b-pc|4i=O%qb+|QTaCO?AZ2{(rX%|ue2o=n z&`+}b%#JbPRrw?SAA7VzB&s(bMqRTV!eT#kj)iQN*x=Y46KwK6D8AY;p!vY+()gAM zqj(v6Wxe~1A@UUce6;qaHdG{h!e5Y60>&zb!uk;H1Yr_S47j4}f%$H#Hp9QV^oyaA zOuBKzX$A6n>w%BZ`~hw-!M#mdqm(y*1cl4ZP*vQ)gYN$f%t*Ld%qS9JBr$$*!gJ$1 zt?*_HiPySa6L?|%)b?BUe7wy+km4OyHMC!xmZH;b6X)I#i7$_vE21u%Xcyc*!Cy93 zS@j`7XK;#NKo@^b=la)y58mnbuYJA4S<=~YN8!nm_1V7hs9y%(WWI{5vCWBRzx<}1 zEB&Jl6JtF6>7ARUMn=^Kj$4CMvMEitM4Faq=j-1{=zbOVPPkhQuev?zxH8l4o&0C7 zBZG37CF1I7*#+_GvR~_@n9E(dw5wSimDn0BCw&(qT-wR6c?Ndx=3X~qHnIBB$0ms}(PR45 zQZ}s z)V1<6*7Qjmq>huarve?iSy}5C4QgBst!DUQ&*~ z!jVoD`Y&Rvod1DCDIf;22MW01O&GUwy_2HXrE#sVG69P*hEr91d-N6gAS{P!ECTd^ zn*|QP|HT3^#pIUGi(}7u7idRPumSkop`jD(PHjcS>8<0!3MMNoq2gUtgad64m^WJk zwK%TPbzUXO-=giD8u$=4^QkPv!^ys#KWEoOOTMN0ySU`UW^QA3!$84brz}IiEb~%tzN>c2I+nN)e6YN=tIYd z1xNQ;#Ve{LM%Cc62nUHVc=Thr{V8e1+2!!8;1n(TRtn>zYPYF36&~=8h@b!nlHlTp 
zWJa#S+7bEv3gK@~MzO;B+HN4?t*( zA6w8qgQ6BH2kH#QQ+loAYG4w5RzD~KVxkTKkuvYwAodwU7zn)&udqaJZV%u=WJYVF)tVmI3ER!^O$hKJL zxMPeJ&$wqszth{$v=WW7o*ebxr@|}-)w;JxvYPHcGj#tj(dPd8&v3)tC=JiWC>Ezb zp0R`Z1b*rO5{IU_939KwLeIYHYM*p|*^sdErAvpgZO6avhD+d0=RZ@rOmr>9x~`cS zCT1?oS+>)KAKd509F&h2M1LFTI&0zdietmun$m@(GH&L#m$p^c^qFR=Dtmr&q8>qF z;Cadcrd0+Tzff)c0$EgVfz!b$9sKEHBa7x+%Cen5`t$W(yNer^UvD%3%mV!o_ZC~Jv&Z#Gpb+zO5I?q8nn%YW~~Y0o}NTADjL>XAZ|+Keki1VcE|l$~Ix zdnrsgmwm>}{6I}PNVul*!sJMmyZ$^NG`xcFIA}=G-;o5v%|C$x$G!SD3gq^c7~ z&(OC}Cstig#|;{OC$U>P$VDU?# z{%(CMKtt zC-?J}R5Di?e|%W*xcT}dTW28smk^;wX3m=8Je{QNfyWPqrM}A0CA9Zm;aL^yV%LzI zw>;k6B_LiT($Zf1e8`K;hVVnPUZqeBuhebByL_`LZMw=k@0!j#y!?g1N9A5Ug?W6yvxrXHxKjA{YgZ-h1As4b!K3}yd|d(FU- zw3GpJco86nRM@qz6x!bI1qKC%4zT!PLZZ1mh){+j0;wxWJ`N3(?